
The Return of Pragmatism

March 2024

William James’s exhilarating movement to sweep aside all philosophies is making a surprising comeback a century later

In ordinary speech, pragmatism connotes practicality, common sense, feet on the ground—virtues Americans like to think of as specifically American virtues. One thing the term does not connote is philosophical speculation. When we say someone is pragmatic, we are usually implying that he or she is not given to abstract rumination. But pragmatism is also the name of a particular type of philosophy. It was first introduced publicly nearly a hundred years ago, in 1898, by William James, and for several decades arguments over it dominated American philosophy. Then, in the 1930s, it went into a long period of eclipse, almost forgotten amid the emergence of new philosophical schools and theoretical paradigms. But since 1980 it has made an astonishing comeback. Legal writers, literary critics, historians, political theorists, and educators—not to mention philosophers—are starting to call themselves pragmatists. And by that term they mean to invoke the philosophical tradition of a century ago. Why is it back? What was it? Where did it come from?

Pragmatism is an account of the way people think. This may not seem like a terribly useful thing to have. After all, if pragmatism’s account of the way people think is accurate, then we are already thinking the way pragmatists tell us we are. Why would we need a description of something we do anyway without it? It is as though someone were to offer us an account of the way our hair grows with the promise that having it will give us nicer hair.

But pragmatists don’t believe there is a problem with the way people think. They believe there is a problem with the way people think they think. They believe, in other words, that other accounts of the way people think are mistaken; they believe that these mistaken accounts are responsible for a large number of conceptual puzzles; and they believe that these puzzles, when they are not simply wasting the energy of the people who spend their time trying to solve them, actually get in the way of our everyday efforts to cope with the world. Pragmatism is therefore an effort to unhitch human beings from what pragmatists regard as a useless structure of bad abstractions about thought. The sheer bravado of the attempt, the suggestion that all we need to do to lighten our load is just drop the whole contraption over the side of a cliff and continue on doing what we want to be doing anyway, makes pragmatist writing exhilarating to read. The classic pragmatist essays—Charles Sanders Peirce’s “How to Make Our Ideas Clear,” William James’s “The Will to Believe,” Oliver Wendell Holmes’s “The Path of the Law,” Richard Rorty’s “Philosophy as a Kind of Writing”—have a kind of ground-clearing sweep to them that gives many readers the sense that a pressing but vaguely understood obligation has suddenly been lifted from their shoulders, that some final examination for which they could never possibly have felt prepared has just been canceled.

What has seemed liberating to some readers has, of course, seemed to others like negligence and worse. The nonchalance with which pragmatists tend to dispose of issues that have engaged other thinkers has always struck many people as intellectually slipshod and morally dangerous. “Pragmatism is a matter of human needs,” wrote G. K. Chesterton in 1908, when international interest in pragmatism was first at its height, “and one of the first of human needs is to be something more than a pragmatist.” If the pragmatist account is correct, warned Bertrand Russell a year later, then “ironclads and Maxim guns must be the ultimate arbiters of metaphysical truth.” Pragmatists today have attracted similar sorts of hostility. They have been accused of purveying what one writer has called “a relativism-cum-aestheticism that verges on nihilism and that may ultimately subvert liberal democracy” and what another has denounced as an “abandonment of traditional standards of objectivity, truth, and rationality, [which] opens the way for an educational agenda one of whose primary purposes is to achieve social and political transformation.”

Pragmatists—and this, to their critics, may be the most irritating thing about them—love these objections. For as John Dewey (borrowing a figure from William James) wrote of Chesterton’s remark, they spill “the personal milk in the [philosopher’s] cocoanut.” They confirm what the pragmatist has always claimed, which is that what people believe to be true is just what they think it is good to believe to be true. The critic who argues from the consequences of accepting the pragmatist account of the way we think—the critic who warns that dumping those other accounts over a cliff will lead to despair, war, illiberalism, or political correctness—has (in the pragmatist’s view) already conceded the key point, which is that every account of the way people think is, at bottom, a support for those human goods the person making the account believes to be important. The whole force of a philosophical account of anything, pragmatists insist, lies in the advertised consequences of accepting it. When we say to a child, “That’s the way the world is,” we are not making a neutral report. We are saying that understanding the world in that way will put the child into a better relation with it, will enable him or her to cope with it more satisfactorily—even if it means recognizing how unsatisfactory, from a child’s point of view (or anyone’s), the world can be.

What is pragmatism’s account of the way people think, and how did it arise? The term was introduced to the world by William James in a lecture called “Philosophical Conceptions and Practical Results,” which he delivered on a visit to the University of California at Berkeley in 1898. James presented what he called “the principle of Peirce, the principle of pragmatism,” which he defined as follows: “To attain perfect clearness in our thoughts of an object… we need only consider what effects of a conceivably practical kind the object may involve—what sensations we are to expect from it, and what reactions we must prepare. Our conception of these effects, then, is for us the whole of our conception of the object, so far as that conception has positive significance at all.” He went on to suggest that this principle might be expressed “more broadly,” and he proceeded to do so: “The ultimate test for us of what a truth means is indeed the conduct it dictates or inspires. … the effective meaning of any philosophic proposition can always be brought down to some particular consequence, in our future practical experience, whether active or passive; the point lying rather in the fact that the experience must be particular, than in the fact that it must be active.”

What James was doing was stretching a principle of scientific inquiry to cover thinking generally. The principle of scientific inquiry is the “principle of Peirce.” It states that if we want our conception of an object to be meaningful—or, as Peirce put it, to be “clear”—then we should limit that conception to the real-world behavior the object will exhibit under all possible conditions. To use one of Peirce’s examples, what we mean when we call a substance “hard” is that it will scratch glass, resist bending, and so on. “Hardness” is not an abstract property or invisible essence; it is just the sum total of what all hard things do.

James’s idea was to extend this way of understanding scientific concepts to all our beliefs. What makes any belief true? he asked. It is not, he thought, its rational self-sufficiency, its ability to stand up to logical scrutiny. It is that we find that holding the belief leads us into more useful relations with the world. James thought that philosophers had wasted huge amounts of time attempting to derive truths from general first principles, trying to prove or disprove rationally the tenets of various philosophical systems, when all they needed to do was to ask what practical effects our choosing one view rather than another might have. “What is its cash-value in terms of practical experience?” James thought the philosopher ought to ask of any idea, “and what special difference would come into the world according as it were true or false?” Or as he put it more famously, nine years later, in Pragmatism: “The true is the name for whatever proves itself to be good in the way of belief, and good, too, for definite and assignable reasons.”

Words like practical and cash-value may make James seem an advocate of materialism and science. But one of his chief purposes in introducing pragmatism into philosophy was to open a window, in what he regarded as an excessively materialistic and scientific age, for faith in God. We needn’t ask, he thought, whether the existence of God can be proved; we need only ask what difference believing or disbelieving in God will make in our lives. If we wait for absolute proof that there is or is not a God, we will wait forever. We have to choose whether to believe on other criteria—that is, on pragmatic criteria. For this is, James thought, how we make all our choices. We can never hope for absolute proof of anything. All our decisions are bets on what the universe will do.

In crediting the “principle of pragmatism” to Peirce, James was, characteristically, doing a favor for a friend. But he was also helping to establish a genealogy for pragmatism that may contain more legend than history. The attribution was a favor because in 1898 Charles Sanders Peirce was an almost wholly forgotten figure. James had known him well in the 1860s, when both were students at the Lawrence Scientific School at Harvard. Peirce was the son of a distinguished Harvard professor, the mathematician Benjamin Peirce, and he had already, when James met him, acquired a reputation as a prodigy of mathematics, science, and logic. But his career unfolded disastrously. He lost his academic appointment, at Johns Hopkins University, because of a scandal involving his remarriage. (He had begun living with the woman who would become his second wife after he had separated from but before he had legally divorced the woman who had been his first wife.) He lost his other job, working for the Coast Survey, a government scientific agency, soon afterward. In 1898, having spent part of the decade in New York City, sleeping on the streets and cadging food from the Century Club (until he was evicted) while on the lam from creditors and assault charges, Peirce was living in poverty and neglect in Pennsylvania, on an enormous dilapidated estate that he had purchased in an ill-considered moment of financial optimism.

James asked, What makes any belief true? His answer: Simply that it leads us into more useful relations with the world.
 

James, on the other hand, was a Harvard professor and an international academic celebrity. The publication in 1890 of his Principles of Psychology, a book twelve years in the making, had secured his reputation. So an announcement by him of a new approach to philosophy was assured of attention, as was his attribution of the “principle of pragmatism” to Peirce. In his lecture James referred to an article Peirce had published twenty years before in The Popular Science Monthly (a more scholarly journal than the name suggests), entitled “How to Make Our Ideas Clear.” The word pragmatism does not appear in that article (or anywhere else before 1898), but James mentioned that Peirce had formulated his principle and begun calling it pragmatism even earlier. “I first heard him enunciate it,” James said, “at Cambridge in the early seventies.”

Within just a few years of James’s lecture, pragmatism became a full-fledged intellectual movement, attracting adherents and detractors around the world, and Peirce, still isolated on his Pennsylvania estate, wrote a number of papers—some published, most unpublished or unfinished—in the hope, largely unmet, of being recognized as a participant in the debate. In some of the unpublished papers, composed between 1905 and 1908, Peirce amplified James’s remark about the origins of pragmatism. “It was in the earliest seventies,” he wrote in one of them, “that a knot of us young men in Old Cambridge, calling ourselves half-ironically, half-defiantly, ‘The Metaphysical Club’ … used to meet, sometimes in my study, sometimes in that of William James.” He listed the names of the other participants in this discussion group: Oliver Wendell Holmes, Joseph Warner, Nicholas St. John Green, Chauncey Wright, John Fiske, and Francis Ellingwood Abbot. (Holmes, Warner, and Green were lawyers; Wright, Fiske, and Abbot, like Peirce and James, were scientists and philosophers.) It was within this circle, Peirce suggested, that pragmatism developed.

This has proved an influential account, although corroboration is thin. Whatever the truth of it, James and Holmes (and, for that matter, Wright and Green, the other figures whose work is associated with pragmatism) had already formulated what is distinctively pragmatic in their views before 1872. Peirce may have given James the name, but he could not have given him the idea.

In 1872 James was just emerging from a nervous collapse that had lasted almost three years. After a wildly peripatetic education in Europe and America, he had finally graduated from the Harvard Medical School (the only course of study he ever completed) in 1869, at the age of twenty-seven, and immediately fallen into a state of lassitude, depression, and chronic ill health. Whatever the causes of his various symptoms, James seems to have explained them to himself in intellectual terms. He treated his depression as a kind of philosophical problem that might be relieved by coming up with a philosophical solution, and one day in 1870, in his diary, he announced a breakthrough. “I finished the first part of Renouvier’s second ‘Essais,’” he wrote (Charles Renouvier was a nineteenth-century French philosopher), “and see no reason why his definition of Free Will—‘the sustaining of a thought because I choose to when I might have other thoughts’—need be the definition of an illusion. At any rate, I will assume for the present—until next year—that it is no illusion. My first act of free will shall be to believe in free will. … Hitherto, when I have felt like taking a free initiative, like daring to act originally, without carefully waiting for contemplation of the external world to determine all for me, suicide seemed the most manly form to put my daring into: now, I will go a step further with my will, not only act with it, but believe as well; believe in my individual reality and creative power.” The breakthrough did not prove definitive; James’s complaints persisted. But this passage, with its admonition to act on beliefs without waiting for philosophical confirmation of their validity, is the germ of the doctrine James would announce, twenty-six years later, in “The Will to Believe.” And it is the essence of his pragmatism.

Holmes underwent his own crisis in a very different setting. In 1861, at the end of his senior year at Harvard, he enlisted in the Union Army (something James seems scarcely to have contemplated), and he served for three years and in some of the bloodiest fighting of the Civil War. Although he later gave speeches in which he glorified the soldier’s blind allegiance to duty, Holmes hated war itself. He was seriously wounded three times; the third wound was in the foot, which he hoped would have to be amputated so he could be discharged before his commission was up. That hope was disappointed, but Holmes did emerge from the war purged of illusions. He thought he had paid a high price for the privilege of losing them, and he was careful never to acquire any again.

After his return Holmes attended Harvard Law School and then went into practice with a Boston firm. He also developed an intimate friendship with William James. Their letters were unusually warm and spirited, but their personal relations eventually became strained, and Holmes was always unsympathetic to James’s philosophical writings. They seemed to promote, in their spiritual hopefulness, just the sort of sentimental idealism he had rejected. “His wishes led him to turn down the lights so as to give miracle a chance,” he complained to a friend after James’s death in 1910. Holmes had no high regard for Peirce either; he thought his genius “overrated.” But although Holmes would never have referred to himself as a pragmatist, his twentieth-century disciples have not been wrong to understand his jurisprudence as a form of pragmatism.

In 1870, when he was twenty-nine, Holmes became co-editor of the American Law Review, and the first paragraph of the first article he published there gives, in a very early nutshell, the pragmatist premise of his jurisprudence: “It is the merit of the common law that it decides the case first and determines the principle afterwards. Looking at the forms of logic it might be inferred that when you have a minor premise and a conclusion, there must be a major, which you are also prepared then and there to assert. But in fact lawyers, like other men, frequently see well enough how they ought to decide on a given state of facts without being very clear as to the ratio decidendi [the ground of the decision]. Lord Mansfield’s often-quoted advice to the business man who was suddenly appointed judge, that he should state his conclusions and not give his reasons, as his judgment would probably be right and the reasons wrong, is not without application to more educated courts.”

Holmes’s target in these sentences was legal formalism, the theory that the law has an internal logical consistency and consists of general doctrines—such as “a person shall not use his property in a way that injures the property of another”—that guide the outcomes of particular cases. Holmes devoted his career as a judge and a jurisprudential thinker to demolishing this view of the law, pointing out, for example, that people use their property legally to injure the property of others all the time, as when they set up a shop with the intention of putting the shop owner down the street out of business.

Holmes’s insight into the insufficiency of general principles left him with an obvious question, which is, If general principles don’t decide cases, what does? His answer was unveiled in the opening paragraph of The Common Law (1881), in what is possibly the most famous sentence in American legal thought: “The life of the law has not been logic; it has been experience.” Holmes did not mean that there is no logic in the law. He meant that what guides the direction of the law, from case to case over time, is not immutable reason but changing experience. This assertion has sometimes been misinterpreted to mean that what ultimately determines a judge’s decision is his personal background and taste—what he ate for breakfast. But that is not what Holmes meant by experience. He was not referring to the life history of the individual lawmaker or judge; he was referring to the life history of society. Experience, for him, was the name for everything that arises out of the interaction of the human organism with its environment: beliefs, values, intuitions, customs, prejudices—what he called “the felt necessities of the time.” Our word for it is culture.

For when we think judicially—when we try to determine what would be the just outcome in a legal dispute—Holmes believed that we think the same way we do when we have to make a practical decision of any sort. We don’t do whatever feels pleasant or convenient to us at the moment (since experience teaches that this is rarely a wise basis for making a decision). But we don’t reason logically from abstract principles either. Still, our decision, when we are happy with it, never feels subjective or irrational: How could we be pleased if we knew it to be arbitrary? It just feels like the decision we had to reach, and this is because its rightness is a function of its “fit” with the whole inchoate set of cultural assumptions of our world, the assumptions that give the moral weight—much greater moral weight than logic or taste could ever give—to every judgment we make. This is why so often we know we’re right before we know why we’re right. We decide, then we deduce.

Holmes sought to demolish the view that the law has internal logical consistency and is guided by general doctrines.
 

Philosophies and theories and formal methodologies are part of our culture, but they are, in Holmes’s view, the dinner jacket and bow tie we instinctively take off when it is time to change the tires. “All the pleasure of life is in general ideas,” he wrote to a correspondent in 1899. “But all the use of life is in specific solutions—which cannot be reached through generalities any more than a picture can be painted by knowing some rules of method. They are reached by insight, tact and specific knowledge.”

The one self-proclaimed pragmatist whose writings Holmes admired (and Holmes was not a man ordinarily given to admiration for the views of other people) was John Dewey. In the final chapter of Experience and Nature (1925), the work of the widest philosophical scope among his many books, Dewey praised Holmes as “one of our greatest American philosophers” and went on to quote a long passage from Holmes’s essay “Natural Law” (1918). Holmes read the book several times with growing pleasure (what was there, after all, not to like?), and his reaction sums up the reaction many of his contemporaries had both to Dewey’s wisdom and to Dewey’s style: “It seemed to me … to have a feeling of intimacy with the inside of the cosmos that I found unequaled. So methought God would have spoken had He been inarticulate but keenly desirous to tell you how it was.”

Dewey’s influence in his own long lifetime—he was born in 1859, the year of On the Origin of Species, and died in 1952, the year of the hydrogen bomb—touched many fields. He was a psychologist, a philosopher, a political activist, a public intellectual, and a social reformer. But his most lasting contribution was in the field of education, and although pragmatism, once he took it up, underwrote everything Dewey did, it is his work as an educator that shows its consequences most dramatically.

Dewey began his career as an absolute idealist. He was trained at Johns Hopkins, at a time when Peirce was on the faculty, by George Sylvester Morris, a neo-Hegelian, and he wrote his first books under the influence of Hegel. His work began turning in a pragmatist direction after he read James’s Principles of Psychology in 1890. In 1894 he accepted a position as chair of the philosophy department at the newly founded University of Chicago. In 1896 he established the Laboratory School there, an experiment in progressive education run by the department of pedagogy (of which he was also the chair), and began to write the works on education for which he quickly became famous around the world: The School and Society (1899), The Child and the Curriculum (1902), How We Think (1910), and Democracy and Education (1916). The first of these, The School and Society, is one of the most influential educational treatises ever written. Its argument for the importance of the practical in education was adopted (and possibly partly formulated) by Dewey’s close friend Jane Addams as the blueprint for the educational programs at Hull-House, her pioneering social welfare institution in Chicago, at the turn of the century. And the book has never gone out of print.

Dewey regarded Democracy and Education, when it appeared, as the summa of his thought. He believed that philosophers had invented an invidious distinction between knowing and doing, a distinction that had had the intellectually pernicious effect of producing a series of pseudoproblems about the relations between the mind and reality and the socially pernicious effect of elevating a leisure class of speculative thinkers above the world’s workers and doers. There was, Dewey thought, no such distinction. Knowing and doing are indivisible aspects of the same process, which is the business of adaptation. We learn, in the progressivist phrase, by doing. We take a piece of acquired knowledge into a concrete situation, and the results we get constitute a new piece of knowledge, which we carry over into our next encounter with the environment. When we try to pin down knowledge by embalming it in a textbook, we cut off thought from experience, and we damage our relations with the world. Knowledge is not a mental copy of a reality external to us; “it is an instrument or organ of successful action.”

What is democratic about Dewey’s theory is that it conceives of learning as a collaborative activity. Dewey thought of the school as a “miniature community,” a kind of training camp for life in a democracy. “The only way to prepare for social life,” as he put it, “is to engage in social life,” and this emphasis on the associated nature of human existence is crucial to most of what he wrote about politics and social reform. He believed that individual fulfillment could be achieved only through participation in the collective life; for, outside the collectivity, no such thing as an individual was possible. “The non-social individual,” he wrote in one of his earliest essays, “is an abstraction arrived at by imagining what man would be if all his human qualities were taken away.”

The emphasis on the community as the ground for our conduct and beliefs echoes Holmes’s conception of experience. It echoes Peirce as well, for Peirce regarded truth as a matter of community consensus rather than individual belief (one of the points on which he was at odds with James). But the thinker Dewey credited with introducing this way of thinking into his own work was George Herbert Mead, whom he met at the University of Michigan in the 1880s and with whom he continued to work after moving to Chicago, where Mead joined him. “From the nineties on,” Dewey said in 1939, eight years after Mead’s death, “the influence of Mead ranked with that of James.”

Mead was a physiologist and psychologist, and his method was to apply a Darwinian model to those areas of inquiry. He explained physiological development in adaptive terms, as something that happens as a result of the interaction of the human being with its environment. He explained consciousness, including the sense of individual identity, in the same manner, as something that happens as a result of the interaction of the human being with other human beings. Even our innermost thoughts, in Mead’s view, are social. For we think, as we act, relationally; we talk to ourselves. “Inner consciousness,” as he put it, “is socially organized by the importation of the social organization of the outer world.” The field he developed was social psychology, and its influence on twentieth-century thought extends well beyond pragmatism.

It is common today to speak of a revival of pragmatism, a phenomenon usually dated from the publication of Richard Rorty’s Philosophy and the Mirror of Nature in 1979. The implication is that after Peirce, James, Holmes, Dewey, and Mead, pragmatism went into eclipse and that only in the last fifteen years has it re-emerged as a distinctively American style of thought with wide appeal. This notion is not entirely false. Pragmatism after Dewey did go into relative eclipse, and twentieth-century intellectuals have been more likely to identify themselves with other schools of thought—Marxism, psychoanalysis, existentialism, and structuralism—than to think of themselves as pragmatists.

But the notion that pragmatism was eclipsed by other schools of thought in the twentieth century is also a little misleading, and the reason is that it is part of the nature of pragmatism to decline the honor of becoming a “school of thought.” Pragmatists have always been wary of the danger that pragmatism will turn into a discipline, just another one of the things people “do.” James presented pragmatism, after all, not as a philosophy but as a way of doing philosophy. Pragmatism, in the most basic sense, is about how we think, not what we ought to think.

If we locate pragmatism within the broader picture of turn-of-the-century intellectual life, we can see it as a kind of knot in the tapestry, a pulling together of threads that reach into many other areas of thought, with many other consequences—threads that, running back into the nineteenth century, include the emergence of theories of cultural pluralism and political progressivism, the fascination with pure science and the logic of scientific inquiry, the development of probability theory as a means for coping with randomness and uncertainty, the spread of historicist approaches to the study of culture, the rapid assimilation of the Darwinian theory of evolution, and the Emersonian suspicion of institutional authority. None of these developments is “pragmatist,” but pragmatism was one of the places where they came into focus.

 

The threads that lead out of the pragmatist knot and into twentieth-century thought are as various as the threads that lead into it. Pragmatism served as a kind of philosophical tonic for many twentieth-century thinkers whom it would seem beside the point to call pragmatists. One of the most striking effects of the contemporary pragmatist revival is that a whole array of American (and non-American) writers has suddenly been placed in a new shared context. Cornel West, in The American Evasion of Philosophy (1990), uses pragmatism to show what people like James, W. E. B. Du Bois, Reinhold Niebuhr, and Lionel Trilling have in common, just as Rorty, in Philosophy and the Mirror of Nature and Consequences of Pragmatism (1982), uses pragmatism to show what people like Dewey, Martin Heidegger, Ludwig Wittgenstein, and Jacques Derrida have in common, and Richard Poirier, in Poetry and Pragmatism (1992), uses it to show what Emerson, Robert Frost, and Gertrude Stein have in common. A complete list of American writers who have acknowledged the stimulus of pragmatism would be varied and long and would include, besides those just named, Wallace Stevens, Learned Hand, Benjamin Cardozo, Kenneth Burke, Sidney Hook, C. Wright Mills, Arthur Schlesinger, Jr., Tom Hayden, and Harold Bloom.

Of the various strands emerging from the pragmatism of James and Dewey, four in particular lead into the cluster of concerns that have helped revive interest in pragmatism. One is the development of theories of cultural pluralism in response to the xenophobia induced by the turn-of-the-century waves of immigration and exacerbated by America’s entry into the First World War. Three figures whose writing was seminal to this development were students of James and Dewey: Horace Kallen, whose essay “Democracy Versus the Melting Pot” (1915) is often cited as the founding document of cultural pluralism; Alain Locke, who delivered his groundbreaking series of lectures “Race Contacts and Interracial Relations” at Howard University in 1915 and 1916; and Randolph Bourne, whose essay “Trans-National America” appeared in 1916.

Dewey believed we learn by doing and adapting; when we embalm knowledge in a textbook, we cut off thought from experience.
 

Cultural pluralism owes most, in the pragmatist tradition, to James’s A Pluralistic Universe (1909). One of the consequences of the pragmatic way of thinking, for James, was that the universe is better thought of as a “multi-verse,” something that is never completed, never synthesized into a stable whole. Things are strung together, James argued, but their differences are never completely transcended. “Monism thinks that the all-form or collective-unit is the only form that is rational,” said James. “Pluralism lets things really exist in the each-form or distributively.”

James drew no particular political conclusions from his pluralism (though it undoubtedly had some connection with his impassioned anti-imperialism). But Kallen, Locke, and Bourne saw that if the universe is multiple and unfinished, then society—particularly an ethnically heterogeneous society like the United States in 1915—might be understood as multiple too. Of the three, Locke’s argument involved the subtlest pragmatism: It was that although race has no real basis in biology, and although racial pride is in itself socially divisive, the only way to overcome social divisiveness was to foster racial pride—to encourage the different ethnic groups in American society to take satisfaction in their different cultural practices. This is, Locke said, “only apparently paradoxical. It is not paradoxical when it is worked out in practice, because … the very stimulation to collective activity which race pride or racial self-respect may give will issue into the qualification test and the aim to meet that qualification test, which, of course, must be in terms of the common standard.” In other words, the desire to be recognized as being as good as everyone else—to pass the ethnicity-blind meritocratic test, what Locke called “the qualification test”—flows from the desire to be recognized as different from everyone else. You want to prove that your group is as good as every other group. Cultural pluralism is the recipe for civil cohesion, and the pragmatic beauty of the formulation is that neither human sameness nor human difference is made to seem essential.

A second consequence of turn-of-the-century pragmatism was the revolution in American law and legal thinking inspired by the writings, and to some extent the personality, of Holmes. In old age—he was sixty-one when he was appointed to the Supreme Court, in 1902, and he served until he was ninety—Holmes became a hero to progressive political writers. He was himself a progressive only in a neutral sense. He didn’t believe that social and economic reform could do more than shift a few burdens incrementally in one direction or another; he considered economic relations more or less fixed. But he saw no constitutional barrier to legislative attempts to move those burdens, by imposing taxes, passing health and safety regulations, or protecting unions, and this endeared him to progressives who did believe in the powers of reform.

Holmes’s belief in society’s right to try out new forms of self-regulation followed from his belief, shared by all pragmatists, in the virtues of experimentation. If we learn by doing, we have to keep doing new things, since that is how knowledge progresses or at least adapts. This was the rationale for Holmes’s most celebrated opinion as a judge, his dissent in Abrams v. U.S. (1919), in which he rejected state efforts to punish political opinion as a foreclosing of social possibilities. Even the Constitution, he said, “is an experiment, as all life is an experiment. Every year if not every day we have to wager our salvation upon some prophecy based upon imperfect knowledge. While that experiment is part of our system I think that we should be eternally vigilant against attempts to check the expression of opinions that we loathe and believe to be fraught with death.”

Holmes’s vibrant dissents in cases involving the regulation of business and the suppression of opinion bore fruit after his death in the judicial acceptance of New Deal economic policies and in the establishment of the constitutional law of free speech. The argument of his jurisprudential writings—that law is not merely a system of abstract doctrines but a response to changing conditions—helped give rise to a series of fresh approaches to the law. These include the legal pragmatism of Roscoe Pound and Benjamin Cardozo, which emphasized the social aspect of legal reasoning and the experimental nature of judicial decision making; the legal realism of Karl Llewellyn and Jerome Frank, which regarded law sociologically and as an instrument of reform; Critical Legal Studies, which considers law as both a form of rhetoric and a form of politics; and the “law and economics” jurisprudence of Richard Posner, which proposes cost-benefit analysis as a basis for judicial decision making. All these ways of thinking about the law can be said to have grown from seeds planted in Holmes’s 1897 essay “The Path of the Law.”

A third strand that has recently re-emerged from early pragmatist thought involves the educational philosophy developed around the turn of the century by John Dewey. That philosophy—the theory that children “learn by doing”—established itself long ago in the field of early-childhood education, but until the 1980s its relevance to undergraduate education seemed remote. Then, in an essay titled “Toward Pragmatic Liberal Education” in 1995, the historian Bruce A. Kimball argued that trends in undergraduate education since the 1960s reflect a move toward a pragmatic educational philosophy. For a century American higher education was dominated by the model of knowledge that obtains in research universities, where learning is split up among separate scholarly disciplines, or departments; where the emphasis is on “knowledge for its own sake”; where a distinction between “facts” and “values” is rigorously observed; and where education is divorced from practical affairs. Kimball maintained that in smaller liberal arts colleges across the country, educators have been quietly abandoning the research model and have been adopting curricula in which learning is oriented toward values, toward citizenship, toward the recognition of cultural diversity, and toward the Deweyan virtue of “doing.” This new model stresses “general education”—that is, education designed for all students, rather than for future specialists in an academic field of inquiry—and “liberal education”—the education of temperament and sensibility. Whether the educators responsible for this shift in the paradigm of the college experience ever thought of themselves as pragmatists, it is clear that the developments Kimball traced are consistent with the pragmatic, particularly the Deweyan, tradition, and that if this movement ever becomes coherent and self-conscious enough to acquire a philosophical label, “pragmatist” is the obvious choice.

The final strand connecting turn-of-the-century pragmatism with its late-twentieth-century avatar may seem the most obvious, but it is in fact the oddest. This is the strand that runs through philosophy itself. James and Dewey regarded themselves as philosophers, but it is not hard to see how their dismissal of the traditional problems of philosophy made them seem, to many professional philosophers, enemies of the discipline. Pragmatism is antiformalist; it represents a principle of endless assault on every tendency to erect contingent knowledge into a formal system. To the extent that philosophy is an effort to erect what we know about how we know into a formal system, pragmatism cannot help acting the role of termite—undermining foundations, collapsing distinctions, deflating abstractions, suggesting that the real work of the world is being done somewhere other than in philosophy departments.

In spite of this, James and Dewey not only regarded themselves as philosophers but were, in their day, builders of philosophy departments—James at Harvard and Dewey at Chicago and then Columbia. Thus there has been, ever since James’s lecture “Philosophical Conceptions and Practical Results” in 1898, a tradition of professional pragmatist philosophy, begun by students and colleagues of James and Dewey and running into the present.

Yet this line of philosophical pragmatists is not the line that connects James with Rorty and the contemporary pragmatist revival. That line runs, paradoxically, through the philosophical tradition that is usually regarded as the antagonist of pragmatism and as the tradition that won the battle for the control of modern philosophy departments: analytic philosophy. Analytic philosophy is an overbroad term that embraces a number of philosophical movements since the time of Bertrand Russell, including logical atomism, logical positivism, and the philosophy of language. The differences among these movements are, of course, important to their practitioners, but from the point of view of pragmatism, the notion they all share is that there is a distinctively philosophical method of analysis that can be used to get to the bottom of problems about mind, knowledge, meaning, truth, and so on.

This is the tradition in which Rorty found himself working when he first began teaching philosophy, after getting his degree from Yale. And although Philosophy and the Mirror of Nature can seem, from one point of view, an all-out attack on analytic philosophy, it was actually intended as the culmination of the whole tradition. “The aim of the book,” Rorty explains, “is to undermine the reader’s confidence in ‘the mind’ as something about which one should have a ‘philosophical’ view, in ‘knowledge’ as something about which there ought to be a ‘theory’ and which has ‘foundations,’ and in ‘philosophy’ as it has been conceived since Kant.” But, he continued, this argument was “parasitic upon the constructive efforts of the very analytic philosophers whose frame of reference I am trying to put into question. … I hope to convince the reader that … analytic philosophy … needs to be carried a few steps further. These additional steps will, I think, put us into a position to criticize the very notion of ‘analytic philosophy,’ and indeed of ‘philosophy’ itself.”

Pragmatism cannot help suggesting that the real work of the world is being done somewhere other than in philosophy departments.
 

Rorty thus proceeded to construct a kind of staircase out of analytic philosophy, made up of works that questioned a succession of fundamental tenets of the analytic approach. If he had stopped there, however, it is unlikely that a pragmatist revival would have followed, for Philosophy and the Mirror of Nature is, technically, just a piece of professional philosophy. And merely showing the way out of the reigning philosophical paradigm left Rorty with no obvious paradigm within which to work next. Accepting the pragmatist analysis and seeing how it leads to different ways of conceiving of the traditional problems of analytic philosophy was a path chosen by one philosopher, Hilary Putnam, who has developed a philosophy he calls “pragmatic realism.” But it was not the path chosen by Rorty.

The alternative to operating within a paradigm is to rely on your genius, and although Rorty has not ceased to repeat his analytic argument about the poverty of professional philosophy, since professional philosophers have not ceased to criticize it, he turned, after the publication of Philosophy and the Mirror of Nature, to an unexpectedly imaginative engagement with literature, critical theory, political thought, and social commentary. He transformed himself, in short, from a philosopher into an intellectual. In this his model has clearly been Dewey. But Rorty is a far more exciting writer than Dewey, and his work has served for many people as a model for the kind of wide-ranging engagement with art, ideas, and public affairs that pragmatism might make possible.

As James discovered during his crisis of 1870, pragmatism can encourage us to trust our own judgments, without ever assuming them to be infallible—to have faith that if we do what is right, the metaphysics will take care of themselves. What pragmatism cannot do, though, is to explain where our judgments come from. The easy answer to that question today is to say that our decisions are determined by the cultural “rules” of the social group we happen to belong to. No doubt the cultural rules explain a great deal of what people do, but different individuals in the same group make different judgments—if they didn’t, there would be nothing that needed explaining—and in the end, as Holmes concluded, we can’t say in any determinate way how we make our choices. They seem to arise, in the end, out of the mysteries of personality, which are a scandal to theory. All we can say is that we seem to have, as naturally associated beings, a powerful social incentive to rationalize and justify the choices we make.

It is sometimes complained that pragmatism is a bootstrap theory—that it cannot tell us where we should want to go or how we can get there. The answer to this is that theory can never tell us where to go; only we can tell us where to go. Theories are just one of the ways we make sense of our need to get there. We wake up one morning and find ourselves in a new place, and then we build a ladder to explain how we got there. The pragmatist is the person who asks whether this is a good place to be. The nonpragmatist is the person who admires the ladder.
