What If?


What if any of the pre-Civil War Presidents had gone mad?

What if Andrew Johnson had been successfully impeached?

What if William McKinley had not been assassinated?

What if there had been no tape-recording system in Nixon’s White House?

THESE ARE a few of the questions on the final exam I set last spring for my students at George Washington University, where I give a course on monarchy, republicanism, and the evolution of the American Presidency. (Additional questions from the exam—together with some answers derived from the resourceful essays turned in by the class and from my own speculations—are printed under the illustrations for this article.) My immediate concern is with the status of such pedagogical exercises. Are they just harmless jokes? Are they hopelessly “unhistorical”? Or are they potentially valuable?

“What if” conjectures of the kind I asked my students to write on have intrigued me ever since I was in my teens. This appetite was intensified, I suppose, by my having studied American history as a foreigner. Material and theories that would be immediately familiar to a native-born student were to me often in a literal sense outlandish—puzzling and problematical. Then, by chance, I found myself teaching in an American Studies department. The mix of history and literature encouraged me to ignore the conventional boundaries between fact and fancy and to study the relationship between the two. In such a context it seemed natural to undertake a book (George Washington: Man and Monument) about America’s prime hero, in which the emphasis was as much on the Washington image as upon the real person.

More recently I have been writing a book about republicanism in America: why the new nation became a republic, and what effects this had on the United States. Opting for republicanism instead of monarchy was, I believe, a fluke, in the sense that almost no one was advocating a republic in the American colonies before about 1770, and not many even up to 1776. However, it was a kind of logical fluke—an option that disclosed itself, became useful, and soon was hailed as quintessentially American. Such considerations led me to other matters of what I call counterfactual history: Why, for example, has Canadian development differed from that of the United States? Does the survival of monarchy there enter into the story?


The first thing to be said about speculative, counterfactual questions is that they are not uncommon or new. To commemorate the two-hundredth anniversary of the British surrender at Yorktown, Time magazine printed an ingenious squib by Gerald Clarke, in the shape of a 1981 lecture by “Sir Geoffrey Gabb, George III professor of history at Cornwallis University,” outlining to his freshmen the development of North American history after the surrender of the Franco-American armies in the Yorktown battle. With a more sober emphasis, Harvard’s Oscar Handlin (Chance or Destiny, 1955) scrutinized eight incidents in American history so as to assess in each the blend of accident and predestination. One of his cases was the explosion on board the U.S.S. Princeton, cruising on the Potomac in 1844, which killed Secretary of State Abel Upshur and could easily have killed President John Tyler.

In 1931 the English author J. C. Squire edited If, a collection of essays on altered historical outcomes. Among the contributions of American relevance was a piece by Hendrik Van Loon on the supposed continuance of Dutch control of New Amsterdam and a more famous essay by Winston Churchill, imagining what might have happened if Robert E. Lee had won at Gettysburg (Churchill’s scenario envisaged the recognition of the Confederacy as a separate nation; its eventual alliance with the Union and Great Britain; and their combined success as international pacificators in preventing the outbreak of world war in 1914). There have been various other fantasies as to the achievement of Southern independence—including Thurber’s comic story of a fuddled General Grant surrendering his sword to a bewildered but polite Robert E. Lee at Appomattox in 1865.

And in 1979 Daniel Snowman brought out If I Had Been …: Ten Historical Fantasies, among which was a clever pastiche by the British scholar Esmond Wright in the guise of Benjamin Franklin, explaining “how I would have prevented American discontent from becoming revolution.” All such imaginings, whatever their degree of earnestness, can be covered by the observation attributed to the philosopher A. N. Whitehead: “Very nearly everything that happened in history very nearly did not happen.”

In general, though, professional historians have been unenthusiastic about what Franklin D. Roosevelt used to call “iffy” questions. Like FDR they have tended to shrug off such inquiry as a waste of time. Lawrence Stone of Princeton cites a colleague’s dismissive reaction to counterfactual history: “Yes, and if my grandmother had wheels she would be a Greyhound bus.”

All sorts of scholarly skepticism underlie such responses. Academics may feel it is hard enough to establish what did happen in history without indulging in fictional daydreams. Perhaps they fear the bad effect on students, whose regard for documented fact is not always as keen as might be desired. There is also the objection that while short-run theoretical alternatives might just be worth studying, long-term ones are not, because the number and divergence of outcomes increase with every year.

On another tack, historians of various persuasions have reacted against the tendency of their nineteenth-century predecessors to organize their narratives around prominent individuals—monarchs, generals, ambassadors—and their exploits. “Great man” and “great event” history of this type strikes many scholars as old-fashioned, elitist, simplistic, and misleading, where not plain wrong. Marxist historians seek more fundamental, economic explanations. Fernand Braudel, Le Roy Ladurie, and other influential French annalistes have discarded narrative formulas, emphasizing instead basic factors like climate and diet, the condition of the mass of mankind rather than their leaders. In such a perspective, palaces, cathedrals, military encounters, and suchlike are relatively insignificant. Change occurs very slowly, for reasons often not perceived by people involved in the process. The writing of history should itself therefore be spread over centuries instead of decades. Collective attitudes are more important than the rivalries of major personages.


However, there are other tendencies that offset the traditional disdain of professional historians for “iffy” ideas of causation. There is, for instance, the rise of quantifying or “cliometric” history, often practiced by economists. This entails the collection and programming of statistical data. Often it leads to the construction of abstract mathematical models that encourage scholars to posit alternative developments. For example, how important were railroads to the growth of the nineteenth-century national economy? Indispensable, valuable, useful, marginal? Historians do try to answer such questions.

Quantifying historians claim that only their methods can provide significant answers. A mathematical model can incorporate a “counterfactual” picture of an American economy minus railroads. “Yes, but,” comes the impatient reply, “there were railroads, and we cannot know what things would have been like without them.” To which the student of causality may object that, in that case, orthodox historians should abandon the pretence that any of their “causes” has any logical validity. Are they not in effect saying that everything that happened in history was bound to happen? If not, what kinds of evidence can they produce to hint at alternatives?

In implication at any rate, counterfactualism is at least tacitly accepted in a number of works dealing with comparative history—an approach that often involves comparing one institution in two or more environments, perhaps at chronologically different times. For example, Carl Degler’s Neither Black Nor White (1971) compares slavery and race relations in Brazil and in the United States. George M. Fredrickson’s White Supremacy (1981) compares American and South African circumstances. The effect in both is to sharpen our awareness of why things turned out differently in two situations—each society providing a kind of counterfactual version of the other.


There is, too, the stubborn persistence of the belief—among specialists as well as general readers—that the course of history can be and is changed by personal or other interventions. We resist the idea that everything is predetermined or comes about impersonally through a series of random global accidents.

The Marxist historical philosopher Plekhanov wrestled with the problem of whether a “scientific” theory of social evolution could accommodate the exceptional individual, the heroic leader. A partial answer is to see the “great man” as the representative of his era, which is how Ralph Waldo Emerson portrayed Napoleon and how John W. Ward treats “Old Hickory” in Andrew Jackson, Symbol for an Age (1962). Plekhanov goes further, fastening upon leaders of a strongly religious temperament, such as Luther, Calvin, and Cromwell. Convinced that their own actions are part of a series of inevitable movements, such zealots feel they are God’s instruments, obliged to carry out his commands—a faith that endows them with exceptional power.

The American philosopher Sidney Hook (The Hero in History, 1955) draws a useful distinction between the “eventful” person and the “event-making” person in history. Eventful characters affect history by being in the right place at the right moment, like the Dutch boy who is supposed to have saved the town by inserting his finger into a hole in the dike. The event-making hero, on the other hand, is someone whose extraordinary personal qualities can be deemed to change the course of history, as with Julius Caesar, Napoleon Bonaparte, Adolf Hitler, or Franklin D. Roosevelt. The historian William E. Leuchtenburg puts the case for FDR by remarking that he cannot conceive that American history would have been just the same if Roosevelt had been assassinated in 1933 and we had had Vice-President John Nance Garner in the White House.

A final tendency should be noted. Writers of fiction seem increasingly interested in semidocumentary forms that are somewhere between the invented and the actual. John Fowles interrupts the narrative flow of The French Lieutenant’s Woman to supply short essays on Victorian behavior or to insist that his characters are not real and could themselves have counterfactual (counterfictional?) experiences. E. L. Doctorow’s Ragtime mixes actual historical personages with stylized fictitious characters. Gore Vidal’s historical novels Burr and 1876 cleverly blend authentic material with the author’s own inventions.

Conversely, while “fiction” moves toward “history,” some historical writing reveals the attractive pull of fiction or at least of the imaginative, verging on the imaginary. Scholars sympathetic to this impulse may claim that history, being selective and subjective, can never be a science; or that the surface of events is less profound than the hidden realm of the psyche, where the scholar’s speculations as to the makeup and motive resemble those of the novelist: they depend more on the author’s sensibility, his hunches, than upon external “fact.”

In other words, counterfactual history is probably here to stay. It is no newcomer, and its purposes and techniques are numerous. In fiction, for example, the author may seek a comic effect from deliberate anachronism, as with Mark Twain’s A Connecticut Yankee in King Arthur’s Court . We laugh at the absurdity of knights in armor carrying advertisement billboards. Scholars, too, now and then flippantly exploit anachronism. The Roman Catholic historian Hilaire Belloc, asked by a younger aspirant how to deal with predominantly Protestant versions of the English past, gave this advice: “Write ‘William the Conqueror then got out of his aeroplane.’ Then all the dons write in and say that there weren’t any aeroplanes in William’s day, and then you write back and say that that’s their anti-Catholic prejudice.”

Serious history need not be solemn stuff. But to achieve some worthwhile end, a counterfactual approach must obviously abide by a few simple rules:

It must not falsify the established record before one’s point of departure. Suppose we consider the consequences if Andrew Jackson had been beaten and killed at the battle of New Orleans instead of gaining his spectacular victory. We would subvert such an inquiry by introducing other, prior assumptions: for instance, that New England had seceded, or that Louisiana was still French.

It must not violate basic historical data. Jackson’s defeat at New Orleans could thus not be attributed to, say, British employment of Harrier jump jets.

It must not go beyond improbability into impossibility. Thus, a woman could not have become a major political figure in nineteenth-century America, nor could a known Roman Catholic have attained the Presidency.

With these provisos, counterfactual history can be instructive as well as entertaining. It can help us to challenge the crude form of narrative-as-explanation embodied in the tag post hoc, ergo propter hoc (“after, therefore because of”).

Now for the scenarios in the George Washington University test. I think it reasonable to suppose that the American Presidency in its first decades could have developed differently, though not altogether differently. In the libertarian, constitutionalized, decentralized American context, the office could probably not have become a dictatorship regulated by coups d’état. There was no apparatus of government for George Washington to seize, no palace for him to occupy, no subservient standing army for him to control, even had he wished to. However, the Presidency in 1789 was an untried creation. No one was sure how it should or would evolve. In some respects, whether or not the Founding Fathers recognized this, it was arguably a kind of monarchy—though elective rather than hereditary. The President was head of state, like a king; and in common with constitutional monarchs such as George III, he had a real say in the government of the nation. He could be re-elected indefinitely.

Why did the American Presidency develop as it did? The rectitude and good sense of the first President, George Washington, must be taken into account. He regulated the succession, denying himself a third term, with highly beneficial consequences. There is, too, the element of accident. He was seriously ill during his first term. What if he had died in 1790 or 1791? We need not believe that the United States would have been completely wrecked if that had occurred and the succession had been conceded to Vice-President John Adams (not a foregone conclusion: the matter of whether a Vice-President automatically inherited the office on the death of the President was not settled until William Henry Harrison’s sudden demise in 1841). Even so, the shock to the new system might well have weakened its as yet precarious legitimacy. And somewhere between Washington’s conscious policy and sheer accident, there is a specific problem: What if he had had sons? As it happened, few of the early Presidents had male heirs. John Adams did; and sure enough, his son John Quincy Adams eventually attained the Presidency. We need not deem Washington a scoundrel to suppose that, if he had fathered a healthy male or two, paternal pride could conceivably have led him to dream of seeing another generation of Washingtons in the White House. George and Martha were married in January 1759. If a son had been born to them in 1760, he would not have been too young for high office in the 1790s. He could even have directly followed his father as President. If that had happened, the Presidency and the whole nature of American republicanism could have been different.

Washington, too, could have instituted another precedent for us to ponder. He was reluctant to accept the Presidency in 1789 and to indicate his willingness to be re-elected in 1792. In both situations, Washington’s associates offered the encouragement that he need not actually serve out a whole term. Suppose he had taken them at their word and had resigned from office at a midway point during either administration? He might have opened the way for others, long before Richard Nixon, to step down voluntarily.

Other scenarios in the test presuppose shocks to the Presidency, most of them over the crucial issue of succession, the moment in any governmental system when its legitimacy is potentially in doubt. There were several such moments in the first century or so of presidential history. The crisis of 1800, with Thomas Jefferson eventually recognized as the winner, is among the crucial elections. His acceptance, and his conciliatory inaugural address, signaled the capacity of the American system to accommodate the peaceful transfer of power from one national party to the opposition. Jefferson had in fact tied with Aaron Burr in the electoral college. The deciding vote went to the House of Representatives. The defeated Federalists could, out of spleen, have thrown the election to Burr. If so, Thomas Jefferson might never have been President. President Burr might then never have fought the fatal duel with Alexander Hamilton in 1804, and would presumably never have got tangled up in the mysterious Western conspiracy of 1806. Perhaps the nation’s capital today might be graced not with a Jefferson but with a Burr Memorial.

Assassinations are times of extreme tension. The killing of Lincoln and the wounding of his secretary of state, William H. Seward, came near the end of the prolonged crisis of civil war and revealed a possibly widespread conspiracy. If John Wilkes Booth and his little gang had also struck down Vice-President Andrew Johnson, we can readily picture the resultant chaos within the federal government. The actual succession, however, did not then involve the secretary of state. After the President and Vice-President, the president pro tem of the Senate was next in line, followed by the speaker of the House of Representatives.

This suggests the dictum that big events need not have big consequences. If Lincoln, Johnson, and Seward had all been murdered, the executive branch would have suffered a severe blow. Congress would no doubt have acquired power at the expense of the executive, especially since the new President would come from the legislative branch. Yet as it was, in the real course of history, the Gilded Era was one of legislative supremacy. This would almost certainly have occurred, in revulsion from Lincoln’s wartime executive aggrandizement, even if Booth’s motley conspiracy had failed to hurt anyone. Could something similar be postulated of a successful impeachment of Andrew Johnson in 1868? The attempt failed by only one vote. If it had succeeded, presidential prerogative might have been drastically curtailed. But then, to repeat, executive power was in any case markedly reduced, vis-à-vis Congress, in the last third of the nineteenth century.


We are left, in these instances, with a counterfactual paradox: differing historical patterns of development may sometimes end up with similar rather than different results. To revert to a question raised earlier: Did the railroad boom greatly affect American agriculture? No, says an expert cliometric witness: “Ninety-five per cent of the prairie land in commercial cultivation in 1890 would have been cultivated even in the absence of railroads.” Counterfactual hypotheses do not have to be mind-boggling.

Another modest conclusion to be drawn from “what if” speculation is that, with the American Presidency, as with most institutions, the earliest stages allow the greatest alternative leeway. The twentieth-century Presidency may appear to have undergone amazing ups and downs. The consequences for particular individuals, within bereaved or disgraced families, have been catastrophic, and the drama has filled television screens and front pages. Yet the sundry disasters seem not to have had any fundamental effect. That must be in large part due to the essential cohesion of the American polity. But it also suggests how much national institutions have settled into permanent shapes from which almost no calamity can budge them.

This is by no means the only lesson to be learned from counterfactual history, of course. Much depends on the scenario one devises. The closer the hypothesis comes to the present day, the nearer we are to futurology or to science fiction. The farther we cast such conjectures forward in time, the more staggering our extrapolations may be. Grandma could in a sci-fi universe be equipped with wheels and behave like a Greyhound bus. But so long as we confine ourselves to counterfactual history, we are concerned first with what did actually occur and only then with what else might have happened. And in that realm, Grandma never could have rolled.