America has been at war for most of the 20th and 21st centuries and during that time has progressively moved towards a vicarious form of warfare, where key tasks are delegated to proxies, the military’s exposure to danger is limited, and special forces and covert instruments are on the increase. Important strategic decisions are taken with minimal scrutiny or public engagement.
This compelling account charts the historical emergence of this distinctive tradition of war and explains the factors driving its contemporary prominence. It contrasts the tactical advantages of vicarious warfare with its hidden costs and potential to cause significant strategic harm.
There has always been a core tension underlying the experience and understanding of war in human society. On the one hand, it has been deemed by many as necessary and even welcome – a thing to be accepted, pursued and embraced for both coldly instrumental and more complex existential reasons. On the other hand, and often simultaneously, it has been seen as hugely wasteful, disruptive and costly – a thing to be avoided or constrained where possible. War, for most involved, is a realm of loss, pain, privation, anguish, uncertainty and horror. And aside from the injury and ignominy of defeat, it is typically accompanied by many other costs and consequences, for victors as much as for vanquished.1
While war may be necessary and even desirable in some respects, holding out the prospect of personal and group gain through victory or the realization of life-affirming ends, the stakes are usually high and the potential costs extreme. Even in such highly militaristic societies as classical Greece and Rome (on which more later), this duality was not absent. The horrors of war were understood for what they were, as philosophers, dramatists and political thinkers routinely reminded their audiences, and only relatively recently has this tragic view of war come to be seriously challenged.2
These contradictions pervade the long human experience of war, accounting simultaneously for its perpetuation but also the myriad attempts to limit its effects and costs.
* * *
Before moving on to examine the emergence of vicarious warfare in the contemporary American experience, it is first necessary to explore the concept itself in terms of its deeper historical background in socio-political forms, material-technological developments and opportunities, as well as its basis in the evolution of military and strategic thought.
In the previous chapter, we saw how states that had begun to amass significant power often took advantage of opportunities to adopt forms of vicarious warfare and to evade, minimize or limit the costs and requirements of war (typically with harmful strategic effects in the long run). Vicariousness was apparent primarily in the financial, organizational and political spheres, as well as in the growing disinclination among leaders and elites to expose themselves to immediate danger. Meanwhile, public accountability was negligible under absolute rule, except where monarchs seriously overstepped traditional restraints on their authority.1 Even rudimentary forms of proxy war are apparent in the early modern period, such as the French bankrolling of Gustavus Adolphus’ armies during the Thirty Years War or the later extensive support Louis XVI provided to American Patriots during the Revolutionary War. Similar was Elizabeth I’s support for Dutch rebels in their resistance against attempted Habsburg subjugation, and the way some in her court, especially the naval leaders, sought to make war pay for war by conducting plundering ‘descents’ on enemy ports and authorizing privateers to loot Spanish treasure fleets.2
As rulers gradually became more powerful in their domains (at different times in different places), the money resulting from an enhanced capacity for resource extraction meant that they were able to afford rudimentary standing forces or the services of mercenaries.3 Even feudal knightly armies employed light mercenary troops on their margins, such as at Hastings in 1066 and Crécy in 1346.4 Some states opted to essentially hire armies outright, such as when early modern French kings hired Swiss infantry to serve as the main body of their army.
Contemporary American vicarious warfare has no simple or clear-cut origins. As we have seen, taking a wide global historical perspective it is even possible to trace elements of vicariousness in war to much earlier times, well before America’s emergence on the global scene. This tradition seemingly stands in stark contrast to the presumed ‘American way of war’ centred around large-scale conventional battles, and suffers from an awkward fit with dominant American professional military cultures, especially that of the US Army. It nevertheless feeds off long-standing forms of foreign intervention, ongoing developments in the technical dimensions of warfare, and decades of experimentation in more esoteric areas of American strategic behaviour such as frontier campaigns, gunboat diplomacy, special warfare, covert paramilitary action, discrete operations and various forms of military outsourcing and delegation to proxies.
The genealogy of American vicarious warfare can arguably be perceived properly only in retrospect, in the light of contemporary practice. That said, we need to be careful not to apply the model arbitrarily or anachronistically in a decontextualized, Whiggish manner, forcing earlier periods to conform with how it appears to us today. That is not the intention here. Rather, the purpose of this chapter and the next is to explain how vicarious warfare has iteratively emerged and evolved in an American context. In doing so, the analysis brings into view episodes and events that traditional narratives or popular accounts often leave out. Contemplating this diverse ‘alternative’ history of American warfare, Echevarria, for instance, has pushed back against the notion that the application of overwhelming force has always been the default option for decision-makers in confronting adversaries.
The Second World War confirmed America’s ascent to great power status. The nation emerged from the war with great wealth, unparalleled military capabilities and a series of bases spread around the world. This was not just a traditional territorial empire exercising significant and in some places direct control over foreign possessions; the United States also commanded huge indirect global influence, primarily through its role in shaping the post-war international architecture. It had vast commercial interests, and its dominance was apparent not only in traditional metrics of power but in cultural and ideological terms too. Of course, large swathes of the globe were firmly within the Soviet orbit or otherwise under the influence of regimes subscribing to forms of communist ideology, which American institutions, investments and ideas struggled to penetrate. But this applied mainly to the ‘wasted landscapes of Europe’s devastated east’ – the USSR was in disarray in the years after the war, and other communist movements were fighting bitter struggles for survival.1 American hegemony was the dominant fact of the post-war order, and it has remained a constant of world politics to the present day.2
Preoccupied with a convergence of intersecting geopolitical, economic and ideological concerns, the country was led by men who believed America required a preponderance of power to protect and promote its vital interests.3 Porter usefully identifies four key factors that compelled policymakers to aggressively defend American primacy in the new geopolitical context: the nation’s growing power after the Second World War encouraged it to ‘pursue security through expansion’; various crises and strategic shocks – such as Pearl Harbor or North Korea’s invasion of South Korea in 1950 – inspired a sense of growing vulnerability requiring preventive action beyond its shores; tempted by its growing power, America’s universalizing liberal traditions encouraged it to shape the world in its own image, confident that greater security would come from a world community of like-minded democratic republics; and finally, a ‘self-propelling dynamic of empire’ meant its growing global purview created new frontiers and with them new insecurities, commitments and anxieties.4
Parts I and II surveyed the origins of vicarious warfare in both history and the American experience up to the end of the Cold War. This has allowed us to better understand both the practical and conceptual foundations of this form of war. The discussion revealed that although to an extent vicarious warfare might be understood as the offspring of a unique strategic context defined principally by the long war against terror, in fact it represents something more fundamental and enduring. That analysis also provided us with partial clues regarding the factors that might have contributed to its emergence. The remaining chapters in Part III take these ideas forward into the contemporary post-Cold War context. Vicarious warfare has come to dominate American strategic practice over the last decade, but in its contemporary form it emerged out of developments apparent since at least the early 1990s, and in certain areas well before that. Therefore, in order to properly understand and explain the phenomenon as it confronts us today, we need to account (in this chapter) for the multiple factors driving its modern adoption. Chapter 6 then charts its gradual emergence over recent decades, in terms of both military practice and as an increasingly coherent and influential tradition of war vying for influence in decision-making circles. Finally, Chapter 7 considers some of the prominent strategic consequences arising from the prosecution of vicarious warfare over recent times.
As should be apparent from the foregoing chapters, no single factor can explain the emergence of vicarious warfare as a prominent mode of US warmaking.
The 1990s were supposed to usher in a new world order and more peaceful modes of transformation in international politics. Freed from the constraints of Cold War confrontation, many strategists predicted retrenchment in US global strategic commitments. However, these hopes ran headlong into the so-called ‘new wars’ of the 1990s, such as those in parts of Africa and the Balkans. This was not part of the script. To analysts who had for decades focused their attention on nuclear deterrence or the prospect of major war against the Soviet Union, these conflicts seemed to appear out of nowhere. That they were often continuations of Cold War proxy wars or sparked precisely by the collapse of long-standing structures was largely irrelevant to a public increasingly exposed to images of human suffering in faraway places. Calls for policymakers to ‘do something’ gathered momentum.
Leaders at the Pentagon were generally opposed to involvement in low-intensity conflicts. After all, the military had just demonstrated its prowess in high-end conventional war in 1991, and many assumed that future wars would take that shape. Meanwhile, the George H.W. Bush administration, lacking the guiding star of combating communism, steered a cautious path through this new and unfamiliar post-Cold War landscape. Colin Powell, as Chairman of the Joint Chiefs of Staff, was in the process of doubling down on Weinberger’s earlier restrictive doctrine, now adding the criteria of overwhelming force and clear exit strategies to the list of fundamental principles that would guide the use of military force. In 1991 and 1992, Powell and the Joint Chiefs actively advised against military involvement in peripheral conflicts such as Somalia and Bosnia.1
The preceding chapters have considered some of the prominent explanations for the emergence and persistence of vicarious warfare. Building on this foundation, and drawing on a wide range of recent studies, this chapter subjects its principal operational manifestations to further scrutiny. The aim is to uncover its central dynamics and to shed greater light on the often counterproductive strategic consequences of this form of war, at least as the United States has conducted it in recent times.
The first section provides a foundation for the discussion by presenting core Clausewitzian insights that can aid appreciation of the political dynamics underlying the use of force, and specifically as they apply to vicarious warfare. This helps explain how apparent tactical gains can shroud serious deficiencies in strategic terms. The second section outlines how these dynamics play out in relation to three ‘Ds’ of delegation, danger-proofing and darkness, which are employed as shorthand descriptors for some of the central practices that have characterized contemporary American vicarious warfare.
A number of studies advocating versions of vicarious approaches have deemed it necessary to set up their views against what they see as a dominating ‘Clausewitzian’ mentality in American strategic thinking. This, they argue, has compelled America to harmfully apply overwhelming force in pursuit of decisive victories, and has even promoted ‘foolish beliefs about the necessity of slaughter’.1 Instead, they suggest that America should play to its ‘asymmetric’ strengths in air and naval power, special forces and intelligence assets to target enemy vulnerabilities while simultaneously limiting America’s exposure to risks, costs and casualties.2 From a slightly different direction, in a popular recent book McFate bluntly describes Clausewitz as the ‘high priest of conventional war’ who thought ‘brute force and battlefield victory is everything’.3
Vicarious warfare describes an approach to waging war that seeks to distance its means from its ends. In simple terms, it refers to the prospect of war on the cheap, fought at a reduced price in blood, treasure or political capital relative to ambition. This can manifest itself in behaviour at all levels and in all spheres of war, from the tactical to the strategic, from the individual soldier to the wider populace. It is willed to varying extents by all major societal actors, whether political and military leaders or ordinary citizens. It is not synonymous with attempts to banish war from the human experience, as pacifists might desire; rather, the defining characteristic of vicariousness is the attempt by societies, however consciously or unconsciously, to loosen or untether the cords that bind the practice of war to its manifold costs and requirements while still seeking to reap its potential rewards.
In early human history, the conduct of war tightly tethered group members to one another and to its inescapable consequences. The fighting was immediate and bloody, usually close to home, and implicated almost everyone in a community whether directly or indirectly. Sacrifice was an expected and necessary thing. Given that the outcome of battle might determine a group’s chances of survival, it would not be entered into lightly, and decisions would typically be arrived at collectively. Accountability for leadership in war, entailing command in battle and sometimes lasting only for the duration of the immediate crisis, would be similarly direct, sometimes essentially decided by war itself.
Scholars of American grand strategy assess the broad contours of US foreign policy and offer prescriptions regarding the best way forward. Their focus is on the overarching purpose or ‘vision’ of US global engagement, identifying vital interests and suggesting how they can best be secured by employing the resources available to the nation in ways that accord with foundational values. Some analyses closely parallel the argument here, but at the level of the nation’s overarching foreign policy behaviour. For instance, Dueck has described the way America’s ‘limited liability’ grand strategy – whereby it pursues ideologically inspired, ambitious ends with limited means in order to avoid costs – has resulted in suboptimal outcomes in terms of influence, prosperity and security.1 The perspective here has been narrower, focused on the application of one specific element of US power: military force. Nevertheless, this book informs those debates insofar as its chief conclusions add weight to realist perspectives that counsel a measure of restraint in US foreign policy, and they raise serious questions for those who would wield force in a habitual fashion, as if it were an unexceptional tool on a par with diplomacy or economic measures.
The application of military force is often seen as unproblematic in some sweeping grand strategic articulations, especially those associated with the pursuit of continued US primacy and liberal interventionism. War, however small-scale or remotely conducted, is a distinctly unpredictable and unwieldy instrument that can easily escape the control of its users and lead them down unintended and perilous paths.2