
Behind the Cutting Edge

London Bridge Is Falling Down


And other high-tech debacles and how we learned to cope with them

High-technology marvels keep going awry lately: in cyberspace, where hackers have shown they can crash virtually any Web site at any time; in real space, where the nation’s $60 billion missile defense system can’t hit a target; and down on plain old earth, where the most wondrous new bridge anywhere has been vibrating like a guitar string. No matter how good we get at innovation, we keep innovating failure. The latest newsworthy fiasco at this writing is that of the Millennium Bridge over the Thames in London. The bridge was designed as a companion to London’s other engineering marvels of the recent turn of the century: the London Eye (the huge Ferris wheel whose opening kept being postponed), the River of Fire (which failed to ignite at midnight on New Year’s Eve), and the Millennium Dome (which is merely highly unpopular).

The $27 million bridge, the first span across London’s river in more than a century and its first footbridge ever, opened in June. Within minutes, it was in trouble, vibrating and bouncing as people walked on it. Within 72 hours, the bridge was closed and its builders were on the defensive. A spokesman for the Millennium Bridge Trust insisted, “It’s meant to wobble,” but added, “It is moving in a way that it should not move, and at the moment we do not know why that is.”

The engineers’ first response was to have workers try to pound wooden blocks into the structure at crucial places with six-inch nails. Then, after more than a week of intense study, they came up with a diagnosis: “unintentional synchronization.”

“What happens,” explained Tony Fitzpatrick, the bridge’s chief engineer, “is that you get groups of people walking over, and as you walk, you put your weight first on one side and then another. It starts out being completely random, but if the bridge you’re walking on begins to move from side to side, you begin to time your sideways movement to happen with the bridge’s. And because the bridge is doing it, everybody will do it—there’s not the random movement you had at the start.” He added, “We have searched and consulted worldwide on this, and apart from one possible case in Japan, of a bridge that joined a football stadium to a railway station, this appears to be completely new.”
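The feedback Fitzpatrick describes lends itself to a back-of-the-envelope simulation. The short Python sketch below is only a toy model, not the engineers’ analysis: it treats the deck as a single lightly damped lateral oscillator, gives each walker a sideways push at a stepping rhythm of roughly one hertz, and assumes that the pull toward walking in time with the deck grows with the sway itself. Every number in it (crowd size, modal mass, damping, force per footfall, coupling strength) is an illustrative guess, not a measured value.

```python
# Toy sketch of "unintentional synchronization": the deck is one lightly damped
# lateral oscillator; each walker pushes sideways at his or her own stepping
# rhythm but drifts into step with the deck's sway as that sway grows, so random
# pushes gradually become a coordinated drive. All parameters are assumptions.

import numpy as np

rng = np.random.default_rng(0)

# --- illustrative parameters (assumed, order-of-magnitude only) ---
n_walkers = 200        # people on the span
m = 1.3e5              # modal mass of the deck, kg
f0 = 1.0               # lateral natural frequency, Hz
zeta = 0.007           # damping ratio (lightly damped)
f_push = 30.0          # sideways force per walker, N
coupling = 50.0        # how strongly a walker entrains to deck sway, rad/(s*m)

omega = 2.0 * np.pi * f0
dt, t_end = 0.005, 300.0
n_steps = int(t_end / dt)

# Each walker starts at a random phase with a stepping frequency near 1 Hz.
theta = rng.uniform(0.0, 2.0 * np.pi, n_walkers)
omega_i = 2.0 * np.pi * rng.normal(1.0, 0.02, n_walkers)

x, v = 0.0, 0.0        # deck lateral displacement (m) and velocity (m/s)
sway = np.empty(n_steps)

for k in range(n_steps):
    # Total sideways force from the crowd at this instant.
    force = f_push * np.sum(np.sin(theta))

    # Deck: x'' + 2*zeta*omega*x' + omega^2*x = F/m  (semi-implicit Euler step).
    v += (force / m - 2.0 * zeta * omega * v - omega**2 * x) * dt
    x += v * dt

    # Walkers drift toward pushing in time with the deck's velocity. The pull
    # scales with the sway amplitude, so it is negligible while the crowd's
    # motion is random and self-reinforcing once the sway starts to build.
    amp = np.hypot(x, v / omega)
    phi = np.arctan2(x, v / omega)
    theta += (omega_i + coupling * amp * np.sin(phi + np.pi / 2.0 - theta)) * dt

    sway[k] = abs(x)

minute = int(60.0 / dt)
print(f"peak sway, first minute: {sway[:minute].max() * 100:.1f} cm")
print(f"peak sway, last minute:  {sway[-minute:].max() * 100:.1f} cm")
```

In a model like this the random footfalls produce only a few millimeters of sway at first; once enough walkers lock on, the motion feeds on itself and grows to several centimeters, and below a certain crowd size or coupling strength the lock-in never takes hold at all, which is consistent with a bridge that behaves in small tests and misbehaves under an opening-day crowd.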

The Daily Telegraph reported, LONDON’S MILLENNIUM PEDESTRIAN BRIDGE SHUT DOWN BECAUSE PEOPLE WALKED ON IT, and the mystification the headline conveys is almost unavoidable. Why don’t people know how to build a bridge by now, even if it is a novel “blade of light” design, with suspension cables strung unconventionally alongside the walkway instead of overhead to make it look weightless?

On this side of the Atlantic, perplexity was just as great recently when a flood of anonymous assaults began crippling a host of major Internet sites. For a short while it looked as if the whole house of cards called the World Wide Web might come fluttering down. But what happened in both cases—the swaying bridge and the crashing Internet—was a perfect example of how a complex technology normally gets improved.

Innovation by its very nature leads into uncharted territory, where the dangers cannot all be predicted. It can never be done in a fail-safe way, and its most deep-seated flaws often take a crisis to get fixed. Despite all the popular talk about the revolutionary mutability and flexibility and adaptability of the Internet, it is an extremely conservative and slow-to-change organism, as is any large-scale technological system. Likewise, a bridge 350 yards long is an extremely complex aggregation of technologies. The bridge designer must build on centuries of accumulated experience about what can go wrong with such structures in a thousand big and little ways, and when he decides to innovate, he can only extrapolate from that experience and calculate and hope.

Crises are inevitable. So how do you prevent them from becoming disasters? How do you ensure that, as in the case of the Millennium Bridge, the trouble will be mild enough so that the operation can be shut down before anyone is hurt? Or that, as in the case of Internet hacking, the threats will be alarming but not yet crippling when they finally scare people into corrective action?


The question is of course not a new one. America’s somewhat inchoate but surprisingly effective systems for controlling threats to and from our large technologies began taking shape early in the nineteenth century, when steamboats began appearing on our waters but no one yet knew how to make them safe. In 1816, only nine years after Robert Fulton first steamed up the Hudson, a commercial packet on the Ohio River blew up, killing an unknown number of passengers and crew. The steamboat’s combination of fire, high-pressure steam, and boilers of poorly understood metals to contain that steam was turning out to be a lethal one. Over the next 30 years, another 230 steamboats exploded on the nation’s rivers, costing the lives of uncounted thousands.

Cities and states began passing laws requiring inspections, but no one was exactly sure what would be safe, and the regulations couldn’t be enforced over state lines anyway, so the deaths continued. In 1830 the Franklin Institute, recently formed in Philadelphia to help engineers better understand their technologies, began a concerted effort to figure out the causes of the explosions, and after a very gruesome one in Memphis that year, Congress began backing its work. Seven years later, the institute produced a report that revealed major discoveries about how water and steam and the various metals that boilers were made of interacted, and the report included strong recommendations for setting standards for manufacturing, maintaining, and inspecting boilers.

Congress ignored the whole thing. Daniel Webster captured the general sense of the House when he quoted one congressman as saying, “Let the Government attend to its own business, and let the people attend to theirs.” Then a crisis—an especially deadly one—finally forced action. On April 25, 1838, the steamboat Moselle blasted apart just above Cincinnati and killed perhaps 200 passengers, the worst carnage yet. Shortly thereafter, the Steamboat Act of 1838 was passed into law.

The act did more than just make steamboats safe, though that was an enormous accomplishment in itself. That law paved the way for the later establishment of the Food and Drug Administration, the Federal Aviation Administration, and all the other government regulatory and investigative agencies that work to protect us today.

As our technologies have grown, and our expectations for keeping them safe have grown with them, we have become increasingly responsive to the hazards that lurk within them—once they clearly manifest themselves—and increasingly adept at regulating them. While it still usually takes a crisis to initiate real action, those crises have become more and more effective in prompting reform and less and less likely to be anywhere near as severe as the toll on lives that led up to the Steamboat Act.

DDT was widely accepted as little less than a miracle in the 1940s and 1950s, but once Rachel Carson showed the world the environmental havoc it was wreaking, the experience could never happen again. There will be no more miracle pesticides. Three Mile Island should never have happened, but once it did, it ensured that nothing like it would be likely again, and without its having cost a single life. The great blackouts of 1965 and 1977 revealed an international infrastructural grid collapsing under its own weight and led to reforms and redundancies in those systems that have prevented anything so crippling from happening since.

The price of living in our advanced, dauntingly complex technological world is not only eternal vigilance but also periodic crisis. This is how it must be. Nobody can foresee the consequences of our technologies any more than anyone can foresee the consequences of any of our complex human activities. All the works of man, from political systems and economic structures to power systems and computer networks, can seem to become monsters beyond our control. They are bound at times to appear to run amok, or to defeat themselves, or to completely break down. And then our alarm at the catastrophe is bound to make us fix them.

It is good that they scare us so, for people will never stop courting the hazards of unexplored technical realms. People have always been and will always be moved by the same spirit that guided (or misguided) Lord Foster, the architect of the Millennium Bridge. At the height of that crisis, he asked, undoubtedly with a note of defensiveness in his voice, “Can you ever be overambitious?” His answer: “I would rather be accused of being overambitious than of being lily-livered and retreating into a nostalgic past that never existed.”
