November/December 2006 | Volume 57, Issue 6
How the U.S. military reinvented itself after Vietnam
And the U.S. armed forces? Weren’t they the bumblers who had been defeated outright by the Vietnamese and humiliated by the Cambodians, Iranians, and Lebanese in, respectively, the Mayaguez, Desert One, and Beirut operations? Even isolated American successes against weak adversaries, such as those in Grenada (1983) and Panama (1989), had been marred by serious miscalculations that suggested to many the American military was not ready for a real war.
Yet Operation Desert Storm went more smoothly than even its most optimistic architects could have imagined. Three weeks of air attacks were followed by a mere 100 hours of ground war that drove the Iraqis from Kuwait. It was America’s most impressive military victory since 1945. And it had been achieved with the loss of just 147 Americans killed in action and another 467 wounded, “the lowest cost in human life ever recorded for a conflict of such magnitude,” according to the U.S. Army’s official history.
How were the U.S. armed forces able to achieve such an unprecedented victory? The answer may be found in the wholesale transformation wrought in the 15 years since American soldiers had stumbled, dazed, defeated, and demoralized, out of the jungles of Vietnam.
The Human Material
One of the first priorities for post-Vietnam military reformers was increasing the quality of those in uniform. From Gen. H. Norman Schwarzkopf on down, all the senior American commanders in 1991 were veterans of the Vietnam War. They could vividly remember how, in the 1970s, the armed forces were racked by racial tensions, rampant drug use, and alcoholism. Many officers would not venture into enlisted men’s barracks without a sidearm; between 1969 and 1971 there had been 800 “fraggings,” or incidents in which soldiers attacked their own officers or NCOs. Things only got worse after the draft was abolished in 1973. Defense spending plummeted, and recruiting quotas could not be met. Half of the Marine Corps and Army came to be composed of high school dropouts.
This all began to change in 1979, when Maj. Gen. Maxwell Thurman took over the Army’s Recruiting Command. A Vietnam veteran, a devout Catholic, and a lifelong bachelor, Thurman, in the words of one journalist, “approached each assignment in the Army with the fervor and devotion of a Trappist monk.” He pushed Congress to approve a major military pay increase as well as a new version of the GI Bill that offered college scholarships to soldiers after they left the service, and he began to market the Army as a place to learn valuable skills. That approach was crystallized in a new slogan he developed with a New York advertising agency, “Be All You Can Be,” which helped spark a recruiting renaissance.
The other services followed Thurman while also working closely with Hollywood to help produce pro-military movies like Top Gun (1986) and The Hunt for Red October (1990). As the armed forces began to rack up small victories, such as the invasion of Grenada, their popularity rose, and recruiters actually began to turn away low-quality applicants. By 1990, 97 percent of Army recruits were high school graduates. The glut of recruits allowed the military to raise standards and crack down on troublemakers. The Navy led the way in 1981 by instituting a zero-tolerance policy for drug use, backed up by random urinalysis tests, a policy soon emulated by the other services. The number of people in uniform using illicit drugs fell from 27.6 percent in 1980 to 3.4 percent in 1992.
At the same time, the military made a conscientious attempt to improve the integration of African-Americans and women. This was not always a smooth process, but through a combination of outreach, mentoring, and crackdowns on discrimination, the military proved largely successful in achieving racial and gender diversity. The symbol of this accomplishment was the elevation of Colin Powell, who became the first black chairman of the Joint Chiefs of Staff in 1989. “The military had given African-Americans more equal opportunity than any other institution in American society,” Powell wrote in his autobiography.
Yet, despite the growing quality of its soldiers, the U.S. armed forces, with 2.1 million active-duty personnel in 1982, remained badly outnumbered by their Communist counterparts. The U.S.S.R., which had not abolished the draft, could field a force of more than 3.5 million men in the 1970s, increasing to more than 5 million by the early 1980s. The Soviet advantage was equally great in tanks, artillery, and aircraft. Back in the 1950s and 1960s the United States could rely on its nuclear edge to deter Soviet aggression, but that had disappeared by the end of the 1970s.
It became increasingly clear to strategists in the Ford, Carter, and Reagan administrations that they would have to develop a new generation of conventional weapons to offset the Soviet numbers. It was at just about this time that microprocessors were revolutionizing the computer industry. The Soviet Union, of course, had no Silicon Valley of its own. Here was one advantage that the United States still had, and the Pentagon was intent on exploiting it.
Since the dawn of the gunpowder age, projectiles had been on their own once they left a gun barrel or, later, an airplane bomb bay. No matter how carefully a gunner or bombardier might aim, once the trigger had been pulled he no longer had any control over where the munitions went. They were at the mercy of the laws of ballistics and gravity, and hence not very accurate.
That first began to change in World War II. The Germans took the lead; their Fritz X, a radio-controlled bomb, was used against the Allied landing fleet at Salerno, Italy, in 1943. But most of their efforts were not terribly successful; more than half the V-2 rockets aimed at London missed the metropolitan area altogether because of their primitive gyroscopic steering mechanisms. U.S. scientists didn’t fare much better with their initial guided bombs in World War II and the Korean War, and the whole field languished in the 1950s and early 1960s.
The U.S. Air Force was the natural outlet for smart bombs, but until the mid-1960s its bomb development was delegated to the Army and Navy ordnance departments, making it a bureaucratic orphan. Who needed accurate munitions, anyway, if (as the working assumption had it) the bombs of the next war would be atomic? It took the Vietnam War to revive interest in precision-guidance technology and to spark a general renaissance in air warfare.
The U.S. Navy and Air Force, which had put all their energies into getting ready for nuclear conflict against the Soviet Union, were woefully ill prepared for the type of conventional combat they encountered in the skies over North Vietnam. Heavy jet fighters, such as the F-105 Thunderchief, were not agile enough to dogfight against Soviet-built MiG-17s and MiG-21s. They had even worse luck in dealing with ground fire, which had been revolutionized by the development of surface-to-air missiles after World War II. The Soviets supplied their North Vietnamese allies with SA-2 radar-guided batteries and radar-controlled flak guns, later supplemented by SA-7 shoulder-fired missiles. The U.S. Air Force and Navy, both of which operated aircraft over North Vietnam, initially had neither the equipment nor the tactics to deal with this menace. As the war went on, American pilots learned to avoid enemy batteries with evasive maneuvers and to disrupt them with radiation-seeking missiles and electronic jamming equipment, giving birth to the techniques that would be utilized with such success against Iraq decades later. The United States paid a heavy price for these lessons: More than 1,500 of its aircraft were downed in Indochina, 95 percent of them by ground fire.
Besides leading to the death or capture of many pilots, heavy ground fire disrupted bombing patterns and made it hard for U.S. aircraft to achieve their objectives during the Rolling Thunder campaign against North Vietnam from 1965 to 1968. Pilots were further handicapped by the fact that unlike in World War II or the Korean War, they could not simply undertake indiscriminate area bombing. The Johnson administration was sensitive to the political ramifications of “collateral damage” and enforced strict limitations on where and when U.S. aircraft could strike. But with bomb accuracy only slightly improved since World War II, U.S. aircraft lacked the capacity to execute pinpoint raids.
The solution did not come from an intensive Manhattan Project–style crash program of the kind that had produced the moon landing and the atomic bomb. Rather, a stroke of serendipity inspired the Air Force colonel Joe Davis, Jr., to set in motion the first laser-guided bomb project, in 1964. As deputy commander of an Air Force armaments laboratory at Eglin Air Force Base in Florida, Davis was dazzled by a demonstration of a laser invented in 1960 by the physicist Theodore Maiman. The scientists who showed off the device had no intention of using it to guide bombs, but that was the first thing Davis, a fighter ace in World War II and Korea, thought of. He even went aloft with a handheld movie camera to prove that a beam from a cockpit could be consistently directed at a fixed point on the ground. Using discretionary funds that did not need approval from the cumbersome Pentagon procurement bureaucracy, Davis awarded a $99,000 contract to Texas Instruments to develop a laser bomb-aiming system. The result was the Paveway, which initially required two aircraft to deliver—one to drop a bomb with small, movable wings, the other to aim a laser beam at its target. Eventually a single aircraft was equipped with both the laser-guidance pod and the bomb. As soon as the Paveway had proved its effectiveness, it was rushed to Vietnam.
It was later determined that 48 percent of Paveways dropped in 1972–73 around Hanoi and Haiphong achieved direct hits, compared with only 5.5 percent of unguided bombs dropped on the same area a few years earlier. The average Paveway landed within 23 feet of its target, as opposed to 447 feet for a “dumb” bomb. The leap in accuracy brought about primarily by laser guidance made it possible to take out tough objectives that had eluded earlier air raids. The most dramatic example was the Thanh Hoa Bridge, 70 miles south of Hanoi, a crucial supply artery for the North. Starting in 1965, U.S. pilots had flown 871 sorties against it, losing 11 planes without managing to put it out of commission. In 1972 the “Dragon’s Jaw” bridge was attacked with Paveway bombs, and 14 jets managed to do what the previous 871 had not: send the span into the Red River.
The United States wound up employing 28,000 Paveways in Southeast Asia, more smart bombs than have been used in any conflict before or since. They did not save the United States from defeat, partially because they were introduced late in the war (only 0.2 percent of all munitions dropped were precision guided), but mainly because a guerrilla foe hiding in the jungles was not very vulnerable to air attack. Still, the Vietnam experience set the U.S. military on the path to future smart-bomb developments. Better microelectronics led to the invention of improved bombs and missiles with aiming systems utilizing radar, lasers, thermal sensors, satellite navigation, inertial guidance, and electro-optical sensors.
By the time of the Gulf War the most common ground-attack precision munitions in the U.S. arsenal were laser-guided Paveway III bombs, guided missiles like the Maverick and Hellfire, and ship-launched cruise missiles directed by internal computers programmed with precise target coordinates. Though laser-guided bombs and cruise missiles were relatively few in number, they would have a disproportionate impact in the war’s early days by hitting Iraq’s best-protected targets with unprecedented accuracy. The journalist David Halberstam later paraphrased the airpower strategist John Warden: “During World War II, an average B-17 bomb during a bombing run missed its target by some 2,300 feet. Therefore, if you wanted a 90 percent probability of having hit a particular target, you had to drop some nine thousand bombs. That required a bombing run of one thousand bombers and placed ten thousand men at risk. By contrast, with the new weaponry one plane flown by one man with one bomb could have the same level of probability. That was an improvement in effectiveness of approximately ten-thousand-fold.”
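The arithmetic behind Warden’s comparison is ordinary cumulative probability: if each bomb independently hits with probability p, then the number of drops needed for a 90 percent chance of at least one hit is log(0.1) / log(1 − p). A minimal sketch of that calculation follows; the per-bomb hit probabilities used below are illustrative assumptions chosen to echo the quoted figures, not data from the article.

```python
import math

def bombs_for_confidence(p_single, confidence=0.9):
    """Number of independent drops needed so that the chance of
    at least one hit reaches `confidence`, given a per-bomb hit
    probability `p_single`. Derived from 1 - (1 - p)**n >= c."""
    return math.ceil(math.log(1 - confidence) / math.log(1 - p_single))

# Assumed per-bomb probabilities for illustration:
# a WWII-era unguided bomb vs. a modern precision munition.
print(bombs_for_confidence(0.000256))  # on the order of 9,000 drops
print(bombs_for_confidence(0.9))       # 1 drop
```

The roughly ten-thousand-fold improvement Warden cited falls directly out of this formula once the per-bomb hit probability approaches one.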
The most revolutionary weapons system of all in 1991 was a stealth aircraft equipped with two 2,000-pound laser-guided bombs. Its genesis lay in Lockheed’s famed Skunk Works, the top-secret research lab in Burbank, California, that had produced such revolutionary Cold War aircraft as the U-2 and SR-71 high-altitude spy planes. In the mid-1970s Skunk Works engineers figured out how an airplane could be made virtually invisible at night by using special composite materials and flat panels that absorbed rather than reflected radar emissions. President Carter’s Defense Secretary, Harold Brown, a physicist by training, and his undersecretary for research and engineering, William Perry, another scientist, grasped the possibilities immediately and gave the project their enthusiastic support.
Because it was so highly classified, the stealth work (known initially as Project Harvey, after the invisible rabbit in the 1950 James Stewart movie) cut through normal Pentagon red tape. The prototype of the F-117A stealth fighter was ready to fly in 1977, and the first production-line model was delivered in 1981, a remarkably fast procurement cycle. In the Gulf War, F-117As would fly only 2 percent of all attack sorties, but they hit 40 percent of Iraq’s best-defended targets.
The stealth aircraft was only the most advanced of many new weapons systems that were developed in the 1960s and 1970s and joined the U.S. arsenal in the 1970s and 1980s. The Air Force procured two agile new fighter-bombers, the F-16 Fighting Falcon and F-15 Eagle, the B-1 Lancer bomber, and an aircraft for close support of ground forces, the A-10 Warthog. The Navy had its own superfighters, the F-14 Tomcat and F/A-18 Hornet, as well as Aegis guided-missile cruisers (the first was the Ticonderoga, commissioned in 1983), Los Angeles–class nuclear submarines, and Nimitz-class nuclear-powered aircraft carriers. The Army bought a main battle tank, the M1 Abrams; an armored personnel carrier, the M2/M3 Bradley Infantry Fighting Vehicle; a utility vehicle called the Humvee (high mobility multipurpose wheeled vehicle); the AH-64 Apache attack helicopter and the UH-60 Blackhawk utility helicopter; an air defense system called the Patriot; and a mobile surface-to-surface missile launcher, the M270 Multiple Launch Rocket System.
With the exception of the stealth aircraft, which remained a tightly guarded secret until 1988, every one of these systems was extremely controversial when it was in development. Virtually all were plagued by delays and cost overruns that led to embarrassing stories in the press. The journalist James Fallows, in his influential 1981 book National Defense, derided the Pentagon’s “pursuit of the magic weapon” encumbered with “more and more complex computer systems, whether or not there is reason to think that computers will help on the battlefield, and often when there is reason to think they will hurt.” Such criticisms were echoed by Congress’s Military Reform Caucus, a bipartisan group of more than 100 lawmakers, led by Sen. Gary Hart, that pushed for simpler, cheaper weapons in greater numbers.
A Better Tank
Luckily the Pentagon did not follow their advice. If it had, the United States would have fought Iraq in 1991 with equipment roughly equivalent to the enemy’s, instead of having weapons at least a full generation ahead.
What the reformers did not realize was that adding sophisticated electronics did not have to make weapons systems less reliable and harder to operate. Thanks to advances in solid-state electronics, new aircraft like the F-15 and F-16 were not only far more lethal than their predecessors but also easier to fly and less prone to malfunction. Far from being an encumbrance, advanced electronics gave such weapons a vital edge over less sophisticated adversaries.
Consider the M1A1 tank, built by General Dynamics starting in 1980. It had a gas turbine engine that allowed it to go nearly 45 mph and Chobham ceramic armor (named for the British research center where it was developed) that could survive frontal hits from the Soviet-built T-72s in Iraq’s arsenal. Its 120-mm main gun fired 45-pound sabot rounds tipped with depleted uranium (more than twice as dense as steel) that could penetrate a T-72 at two and a half miles, well outside the T-72’s own range. But its true advantage lay in a fire-control system that employed laser range-finders, thermal and optical sights, and ballistics computers to let its main gun hit targets while on the move and in fog, night, or other conditions that would have rendered earlier tanks useless. In World War II the average tank needed 17 shots to kill an enemy tank; in the Gulf War, the Abrams would come close to achieving the ideal of one shot, one kill.
In the Night and the Sky
The M1A1’s ability to operate at night was a key advantage shared by most U.S. weapons systems in 1991. Night-vision equipment had been developed by the U.S. Army starting in the 1950s. It came in two versions: image-intensifying devices that amplify small amounts of ambient light and thermal forward-looking infrared detectors that sense differences in temperature between an object and its environment. The former are generally carried by soldiers as goggles; the latter usually come in more cumbersome systems attached to vehicles and aircraft. Since Iraqis had few, if any, comparable devices, the U.S. military owned the night.
Complementing its night-vision devices, the military benefited from unrivaled electronic warfare and reconnaissance capabilities. The U.S. Air Force and Navy operated a variety of aircraft designed to keep an eye on the “battle space,” the most famous of which was the AWACS, a Boeing 707-320B equipped with a huge rotating radar dome that could identify low-flying objects from more than 250 miles away. Onboard sat 13 to 19 mission specialists who could analyze information and coordinate air operations in real time, allowing hostile aircraft to be intercepted as soon as they were airborne and friendly aircraft to avoid either hitting or shooting at each other.
What the AWACS did for air operations, the E-8A JSTARS (Joint Surveillance Target Attack Radar System) did for the ground war. Also housed in a 707 airframe, the JSTARS synthetic-aperture radar, in a canoe-shaped appendage under the fuselage, could locate and track moving vehicles over more than 200 miles. It was still in the experimental stages when Iraq invaded Kuwait in August 1990, but two prototypes went to Saudi Arabia and they proved invaluable in locating Iraqi ground forces.
The AWACS and JSTARS were joined by numerous other aircraft designed to listen in on enemy communications (RC-135 Rivet Joint), jam enemy radars (EA-6B Prowler, EC-130H Compass Call), or photograph enemy positions (TR-1/U-2). High above all these planes a constellation of satellites monitored the battlefield from space. Their exact details remain shrouded in secrecy, but satellites are known to have performed myriad functions, including providing meteorological data, creating detailed maps, offering early warning of Scud missile launches, relaying communications, and spying on enemy forces.
The most novel and important use of satellites was to provide navigational help to coalition forces. The Global Positioning System was based on a simple premise, that a user could determine his exact location by timing how long it took a radio beam to travel from his position to several satellites in precisely known orbits. Navstar GPS, begun in 1973 by the Pentagon, was designed to orbit at least 24 satellites that would give anyone anywhere line-of-sight to at least 4 of them at one time—the minimum needed to get an accurate fix. Only 16 of the satellites had been deployed when the Gulf War began, so they did not provide continuous coverage. Another major limitation was the lack of GPS receivers. By the time Desert Storm began, following a last-minute shopping spree, the coalition had about 840 military GPS receivers and 6,500 commercial models. Even with its limited availability, however, GPS made possible much more accurate maneuvering and striking than ever before. Allied tank forces would not have been able to move through the vast deserts of Iraq without it.
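The premise can be illustrated with a toy two-dimensional version of the position fix: given ranges to three beacons at known locations, subtracting one range equation from the other two cancels the squared unknowns and leaves a small linear system for the receiver’s coordinates. (Real GPS works in three dimensions and needs the fourth satellite to absorb the receiver’s clock error; this stripped-down sketch assumes perfect clocks, and the beacon positions are invented for illustration.)

```python
import math

def trilaterate_2d(beacons, dists):
    """Solve for (x, y) from distances to three known beacons.
    Subtracting the first range equation (x-x1)^2 + (y-y1)^2 = d1^2
    from the other two yields a 2x2 linear system A @ [x, y] = b."""
    (x1, y1), (x2, y2), (x3, y3) = beacons
    d1, d2, d3 = dists
    a11, a12 = 2 * (x2 - x1), 2 * (y2 - y1)
    a21, a22 = 2 * (x3 - x1), 2 * (y3 - y1)
    b1 = d1**2 - d2**2 + x2**2 - x1**2 + y2**2 - y1**2
    b2 = d1**2 - d3**2 + x3**2 - x1**2 + y3**2 - y1**2
    det = a11 * a22 - a12 * a21  # nonzero if beacons are not collinear
    x = (b1 * a22 - b2 * a12) / det
    y = (a11 * b2 - a21 * b1) / det
    return x, y

# Receiver actually at (3, 4); simulate measured ranges to the beacons.
beacons = [(0.0, 0.0), (10.0, 0.0), (0.0, 10.0)]
dists = [math.dist(b, (3.0, 4.0)) for b in beacons]
x, y = trilaterate_2d(beacons, dists)
print(round(x, 6), round(y, 6))  # → 3.0 4.0
```

In the real system the ranges come from signal travel time multiplied by the speed of light, which is why even tiny clock errors matter and a fourth measurement is required.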
Against this vast array of air and space sensors, the Iraqis had no satellites of their own and no way to fly air reconnaissance because of the Allies’ domination of the skies. Nor could they buy satellite time from private firms; the United States had bought up all the available capacity. It was almost as if soldiers on horseback were fighting tanks; the disparity between Iraq and the United States was that profound.
The Training Revolution
Developing all this high-tech gadgetry was one thing. Learning to use it properly was another. The United States would not have done so well in the Gulf War had not its armed forces transformed their training and doctrine since the Vietnam War. The training revolution began in 1969 when the Navy, concerned about the poor showing of its aircraft over North Vietnam, established the Fighter Weapons School at Miramar Naval Air Station in San Diego. “Top Gun” offered pilots realistic training in dogfighting that significantly improved their combat performance.
The Air Force took note and in 1975 opened its own version of Top Gun. Red Flag exercises at Nellis Air Force Base in Nevada allowed pilots to compete against an “aggressor squadron” emulating the tactics and equipment of Soviet adversaries. Here a new generation of aviators learned how to put together elaborate “strike packages” designed to penetrate enemy air defenses. Experience showed that a pilot was most likely to be shot down while still green, during his first 10 combat sorties; Red Flag was designed to ensure that those missions occurred only on a training range.
The Army set up a realistic training center of its own at Fort Irwin, California, amid the barren scrubland of the Mojave Desert. Starting in 1981, mechanized battalions would travel to the National Training Center to fight a simulated engagement against a highly skilled “Opfor” (opposing force) modeled on a Soviet motorized rifle regiment. Lasers simulated the effects of actual gunfire, and computers kept track of the action for later analysis. Umpires delivered unsparing after-action reports on what went right and wrong. The visitors usually got whipped by the first-rate Opfor, but they learned a good deal from the experience.
At the start of previous wars, American soldiers had been thrown into battle without much combat experience or realistic training to draw on, and they usually paid a steep price for their inexperience. For instance, the First Armored Division was mauled by veteran German units at Kasserine Pass, Tunisia, in February 1943, losing more than 6,000 men. That didn’t happen this time. “Desert Shield and Desert Storm went so easily,” wrote the Air Force general Chuck Horner, U.S. air commander, “because everyone had been there before.”
It also went well because the armed forces had worked out a doctrine ideally suited for operations against a foe like Iraq. One of the U.S. Army’s most important innovations after Vietnam was the creation in 1973 of the Training and Doctrine Command to fashion an intellectual renaissance. Its first commander was Gen. William DePuy, a veteran of World War II and Vietnam who proceeded to obliterate the traditional American approach toward war. In his first operations manual, which came out in 1976, DePuy noted that traditionally the United States was “accustomed to victory wrought with the weight of matériel and population brought to bear after the onset of hostilities.” This had worked in the industrial age but it was no longer suitable for the dawning information age. Given the lethality of modern weapons, General DePuy did not think it was possible to lose the first battles and still push on to victory. “Today the U.S. Army must, above all else, prepare to win the first battle of the next war.”
This was an important innovation that was eagerly greeted by the Army. So was DePuy’s emphasis on realistic training, which led to the creation of the National Training Center. The actual strategy he ultimately crafted, known as Active Defense, was less popular. As its name implies, it was an essentially reactive approach that called for falling back in the face of a Soviet onslaught in Europe. Other ideas bubbled up at various military institutions; they included advanced schools devoted to the operational art, opened by the Army, Marines, and Air Force in the 1970s.
The eventual result was a new doctrine prepared by DePuy’s successor, Gen. Donn Starry, and adopted in 1982. His approach, known as AirLand Battle, was anything but static. It was essentially a variant of the German blitzkrieg or Russian “deep battle,” and a far cry from the attritional strategy utilized by U.S. forces in all of the country’s major conflicts going back to the Civil War. AirLand Battle called for attacking Red Army rear echelons, seizing the initiative, outmaneuvering the enemy, and utilizing a variety of weapons simultaneously to produce a counteroffensive that would be “rapid, unpredictable, violent, and disorienting to the enemy.” It was predicated on the assumption that the United States had superior weapons and superior personnel that could compensate for its inferiority in total numbers. The Air Force bought into this doctrine, and the Marine Corps came up with its own version, known as Maneuver Warfare.
This was, in essence, the strategy that America put to use in Desert Storm. Originally developed to counter Soviet tank armies on the plains of Europe, AirLand Battle proved ideally suited to fighting Soviet-style tank armies in the deserts of the Middle East.
The final element necessary to produce U.S. victory in the Gulf War was having the right organizational structure in place. Chaotic operations such as the 1980 Iranian hostage rescue attempt had revealed the pitfalls of interservice rivalry. This gave a boost to military reformers on Capitol Hill who wanted to create a more unified command structure. After several years of debate, Sen. Barry Goldwater, an Arizona Republican, and Rep. Bill Nichols, an Alabama Democrat, managed in 1986 to push through the most significant shakeup of the Pentagon since the creation of the Department of Defense in 1947.
The Goldwater-Nichols Defense Reorganization Act emphasized “jointness”: The chairman of the Joint Chiefs of Staff was made principal military adviser to the President and the Defense Secretary, sidelining the individual service chiefs with their more parochial concerns. To assist him, the chairman was given a vice-chairman (another four-star general) and an expanded joint staff of more than 1,000 officers. Service on a joint staff became mandatory for any officer seeking promotion to flag rank.
This legislation also established a clear chain of command running from the President through the Secretary of Defense to a unified field commander. The entire world was broken up into five vast regions—Europe and Africa (European Command), the Atlantic, the Pacific, Latin America (Southern Command), and the Middle East, East Africa, and Central Asia (Central Command, or Centcom)—each placed under the command of a four-star general who had complete authority over all U.S. forces within his jurisdiction. (Other commands were established for responsibilities such as special operations and transportation; they transcended geographical boundaries.) The Goldwater-Nichols Act gave the combat commanders unprecedented authority within their domains. It was power that General Schwarzkopf, who took over as Centcom’s third chief in 1988, used to marshal the forces that won the Gulf War.
Into the Storm
Viewers around the world were amazed by the spectacle that unfolded on their TV screens between January 17 and February 27, 1991. Video clips of bombs and missiles hitting with seemingly unerring accuracy obliterated once and for all the previous image of America’s “hollow” army. In its place came a new vision of an unbeatable superpower.
Yet just as victories can grow out of defeats, so too can defeats grow out of victories. The very prowess displayed by the armed forces in 1991 made future foes wary of challenging this new Goliath on its own terms. In the future, America’s enemies would use unconventional weapons—weapons like hijacked airliners and suicide bombers—to circumvent its dominance in conventional combat. And the U.S. military, superbly configured for a dash through the desert, would find itself ill suited for waging irregular warfare against shadowy enemies who did not present easy targets for smart bombs and Abrams tanks.
That weakness would come back to haunt the United States a decade after Desert Storm, following another flawless blitzkrieg against Iraq. In the years after the fall of Baghdad on April 9, 2003, soldiers and Marines suffered a growing number of casualties to terrorist bombings and ambushes. As one U.S. general complained, “Insurgents don’t show up in satellite imagery very well.” To defeat such an elusive foe requires very different skills from those cultivated in the years after the Vietnam War, skills such as knowledge of foreign languages and cultures, policing, intelligence, information operations, and civil affairs. All these areas were neglected by reformers of the 1970s and 1980s, who recoiled from the horrors of counterinsurgency warfare in Vietnam, but they were to prove an inescapable necessity in the global war on terrorism. Whether the United States can defeat Islamist terrorists and maintain its post–Cold War hegemony will rest in good part on whether its armed forces can pull off another metamorphosis of the kind that produced victory in the Gulf War.
Max Boot is a senior fellow in national security studies at the Council on Foreign Relations. This article is based on his new book, War Made New: Technology, Warfare, and the Course of History, 1500 to Today, just out from Gotham Books.