London Bridge Is Falling Down


High-technology marvels keep going awry lately—in cyberspace, where hackers have shown they can crash virtually any Web site any time, in real space, where the nation’s $60 billion missile defense system can’t hit a target, and down on plain old earth, where the most wondrous new bridge anywhere has been vibrating like a guitar string. No matter how good we get at innovation, we keep innovating failure. The latest newsworthy fiasco at this writing is that of the Millennium Bridge over the Thames in London. The bridge was designed as a companion to London’s other engineering marvels of the recent turn of the century: the London Eye (the huge Ferris wheel whose opening kept being postponed), the River of Fire (which failed to ignite at midnight on New Year’s Eve), and the Millennium Dome (which is merely highly unpopular).

The $27 million bridge, the first span across London’s river in more than a century and its first footbridge ever, opened in June. Within minutes, it was in trouble, vibrating and bouncing as people walked on it. Within 72 hours, the bridge was closed and its builders were on the defensive. A spokesman for the Millennium Bridge Trust insisted, “It’s meant to wobble,” but added, “It is moving in a way that it should not move, and at the moment we do not know why that is.”

The engineers’ first response was to have workers try to pound wooden blocks into the structure in crucial places with six-inch nails. Then, after more than a week of intense study, they came up with a diagnosis: “unintentional synchronization.”

“What happens,” explained Tony Fitzpatrick, the bridge’s chief engineer, “is that you get groups of people walking over, and as you walk, you put your weight first on one side and then another. It starts out being completely random, but if the bridge you’re walking on begins to move from side to side, you begin to time your sideways movement to happen with the bridge’s. And because the bridge is doing it, everybody will do it—there’s not the random movement you had at the start.” He added, “We have searched and consulted worldwide on this, and apart from one possible case in Japan, of a bridge that joined a football stadium to a railway station, this appears to be completely new.”
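Fitzpatrick’s feedback loop can be sketched as a toy simulation: walkers who fall into step push the deck in phase with its own sway, which acts like negative damping, and once the crowd is big enough the sway feeds on itself instead of dying out. The simple oscillator model and every number below are illustrative assumptions for the sake of the sketch, not the Millennium Bridge’s actual parameters or its engineers’ analysis.

```python
import math

def simulate(n_pedestrians, steps=20000, dt=0.001):
    """Peak lateral sway (m) of a toy bridge deck carrying a
    synchronized crowd. All constants are illustrative."""
    m = 2.0e5      # modal mass of the deck, kg (assumed)
    f = 1.0        # lateral natural frequency, Hz (assumed)
    zeta = 0.007   # structural damping ratio (assumed)
    k = 300.0      # push per walker per unit deck velocity, N·s/m (assumed):
                   # synchronized walkers shove in phase with the sway
    omega = 2 * math.pi * f
    c = 2 * zeta * m * omega          # bridge's own damping coefficient
    x, v = 0.001, 0.0                 # start with a tiny sway, m
    peak = abs(x)
    for _ in range(steps):
        # Crowd force is proportional to deck velocity, so it simply
        # subtracts from the damping term; if n*k exceeds c, net
        # damping goes negative and the oscillation grows.
        a = -(omega ** 2) * x - ((c - n_pedestrians * k) / m) * v
        v += a * dt                   # semi-implicit Euler step
        x += v * dt
        peak = max(peak, abs(x))
    return peak

# Below the critical crowd size the sway dies away; above it, it amplifies.
print(simulate(50))    # small crowd: sway stays near the initial wiggle
print(simulate(200))   # large crowd: sway grows markedly
```

The point of the sketch is the threshold behavior: nothing about an individual walker changes, but past a critical headcount the same gentle side-to-side steps tip the system from self-quieting to self-exciting.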

The Daily Telegraph reported, LONDON’S MILLENNIUM PEDESTRIAN BRIDGE SHUT DOWN BECAUSE PEOPLE WALKED ON IT, and the mystification the headline conveys is almost unavoidable. Why don’t people know how to build a bridge by now, even if it is a novel “blade of light” design, with suspension cables strung unconventionally alongside the walkway instead of overhead to make it look weightless?

On this side of the Atlantic, perplexity was just as great recently when a flood of anonymous assaults began crippling a host of major Internet sites. For a short while it looked as if the whole house of cards called the World Wide Web might come fluttering down. But what happened in both cases—the swaying bridge and the crashing Internet—was a perfect example of how a complex technology normally gets improved.

Innovation by its very nature leads into uncharted territory, where the dangers cannot all be predicted. It can never be done in a fail-safe way, and its most deep-seated flaws often take a crisis to get fixed. Despite all the popular talk about the revolutionary mutability and flexibility and adaptability of the Internet, it is an extremely conservative and slow-to-change organism, as is any large-scale technological system. Likewise, a bridge 350 yards long is an extremely complex aggregation of technologies. The bridge designer must build on centuries of accumulated experience about what can go wrong with such structures in a thousand big and little ways, and when he decides to innovate, he can only extrapolate from that experience and calculate and hope.

Crises are inevitable. So how do you prevent them from becoming disasters? How do you ensure that, as in the case of the Millennium Bridge, the trouble will be mild enough so that the operation can be shut down before anyone is hurt? Or that, as in the case of Internet hacking, the threats will be alarming but not yet crippling when they finally scare people into corrective action?
The question is of course not a new one. America’s somewhat inchoate but surprisingly effective systems for controlling threats to and from our large technologies began taking shape early in the nineteenth century, when steamboats began appearing on our waters but no one yet knew how to make them safe. In 1816, only nine years after Robert Fulton first steamed up the Hudson, a commercial packet on the Ohio River blew up, killing an unknown number of passengers and crew. The steamboat’s combination of fire, high-pressure steam, and boilers of poorly understood metals to contain that steam was turning out to be a lethal one. Over the next 30 years, another 230 steamboats exploded on the nation’s rivers, costing the lives of uncounted thousands.