A Will to Measure

WILLIAM S. MURRAY


From Parameters, Autumn 2001, pp. 134-47.



The familiarity of measures of effectiveness (MOE) in many areas of life suggests that in modern society quantification is irresistible. Batting averages, stock market figures, returns on investment, air passenger miles, and countless other common measures serve to distill vast amounts of data into relevant information. The armed services also use quantitative methods to analyze and explain their actions to their leaders, political masters, and the public they serve. Aircraft availability and accident rates, combat readiness figures, unit inspection grades, officer and enlisted performance evaluations, personnel retention figures, and other indicators aid organizational processes and quantify the otherwise subjective black art of analyzing military readiness and organizational health.

For better or worse, the interpretation of MOE frequently forms the structure on which senior leaders base their orders. Consequently, the need to select MOE carefully is acute. On the one hand, measures of effectiveness that adequately distill and accurately reflect reality help decisionmakers make informed, timely decisions. On the other, ill-considered or poorly chosen measures have a multitude of negative effects. In addition to misrepresenting that which they purport to depict (with potentially disastrous consequences), the collection and analysis of inappropriate MOE wastes resources and effort. This opportunity cost is one most endeavors would be glad to do without. Additionally, military and other organizations tune their behavior to improve the measures upon which they are evaluated, which can result in a host of unintended consequences.

Clearly measures of effectiveness are important and should be chosen with care. This article examines military MOE in the context of coercion and seeks to answer this question: What are the characteristics of measures of effectiveness that will allow decisionmakers to determine whether the application of force is bringing an adversary closer to agreeing to demands?

The need to address this question is pressing since it is not apparent that current military metrics help decisionmakers determine enemy proximity to defeat. As an example, during the 1999 Kosovo crisis NATO counted the number of tanks, armored personnel carriers, and artillery pieces destroyed, tallied the number of sorties flown and bombs dropped, and estimated the number of Serbian military and civilians killed. As the operation continued, it became increasingly difficult for many observers to understand how changes in any of these measures appreciably reduced Serbian President Milosevic's determination to continue the war. Even if NATO leadership analyzed other classified measures, the MOE that the military reported to the press seemed to validate weapon performance requirements more than they reported progress toward strategic goals. For instance, analysis of the number of sorties flown might have informed interested parties how well airmen kept their planes flying, but it did not provide a sense of whether Yugoslavian leaders were ready to accede to NATO demands. That reported measures did not indicate Milosevic's proximity to defeat suggests a key feature of desired MOE.

The fundamental relationship that bedevils MOE as considered in this article is that of causality. To be meaningful, a measure of effectiveness should represent and report on a linkage between cause (friendly military actions) and effect (enemy defeat). Unfortunately, in many cases, this assumed relationship between action and effect is weak, or does not exist at all. For example, in early 1999, NATO leaders might have assumed that the destruction of Serbian artillery, armored personnel carriers, and tanks in Kosovo would force Milosevic's withdrawal from Kosovo. If true, the assumed causality would be that Milosevic's ability to remain in power depended on his military, and that by destroying mechanized forces in Kosovo NATO would endanger that strength to such an extent that Milosevic would agree to the alliance's demands. The accuracy of that causal mechanism is arguable since Milosevic's power base extended well beyond his military's dependence on armor. Even if it did not, NATO's ability to destroy his tanks, armored personnel carriers, and artillery, self-hindered as it may have been, was probably insufficient to have forced his hand.[1] Of course, other reasons, including a desire to halt the atrocities being committed against Kosovar Albanians and a desire to assist the Kosovo Liberation Army (KLA) in its efforts, contributed to NATO's decision to use airpower to destroy Serbia's deployed armor. One can certainly question the causality between the effectiveness of Milosevic's troops in ethnic cleansing and their dependence on armor and artillery. Similarly, the relationship between the KLA's combat effectiveness and the number of NATO tank and artillery kills is also open to debate.

In a related vein, NATO leaders may have assumed that in addition to hindering Yugoslavian operations, the destruction of infrastructure such as Serbia's bridges and electrical distribution system would undermine the regime's popular support and lead to a political crisis that would force Milosevic either to yield to NATO demands or to abandon office. Perhaps, but even with the benefit of hindsight it is difficult to demonstrate that this mechanism caused either Milosevic's initial capitulation or his eventual fall from office. Other factors were at work, including the long-term effects of sanctions, diplomatic pressures, Russian support, Milosevic's ability to accomplish his war aims, and NATO's resolve. The situation and its context were complex, and the idea that one or even a group of numerical indicators could predict proximity to victory invites a certain amount of skepticism. In fact, when one general closely associated with the air war was asked if there were any measures of effectiveness that would have predicted victory in Kosovo, he replied, "No MOE did that. It was sheer weight of effort."[2]

The general's comment demonstrates how hard it is to determine, let alone quantify, the causal mechanisms that link military actions to political results. This is a problem, since history suggests that if analysts are unable to effectively quantify and report strategic progress using precise, descriptive MOE, the default solution for the United States is just to use more: more airplanes, more bombs, more sorties, more artillery, more targets, more money, and more troops until the opponent is destroyed. But this approach is unsatisfactory for a variety of reasons. Perhaps most significantly, it indicates a defeat of precision and a victory for slaughter, which is contrary to guidance, vision,[3] and the moral and legal imperative to use a minimum amount of force to achieve goals. Ideally, measures of effectiveness should incorporate defeat's causal mechanisms in their calculus and thereby indicate progress toward victory. Is this goal achievable?

Rationality, Body Counts, and Dominant Indicators

In his book Strategic Assessment in War, Scott Gartner observes that during conflicts decisionmakers want to know how well they are doing before the war is over, and that the modern battlefield produces too much information for individuals or organizations to assess fully. He then argues that states reduce the available information to specific measures and determine whether or not to change their strategies based on their analysis of these dominant indicators.[4] In so doing he assumes the existence of a logical, ordered, rational state or government making a series of cost-benefit analyses within the framework of their value structure and priorities. Gartner's assumption of rationality is also one this article makes. The alternative is to attempt to quantify irrationality, which is something perhaps best left to psychiatrists; it may even be impossible.

Gartner's analysis and work occurred after the fact. Explaining why strategic reassessments occurred by examining the historical record is useful, but more useful still would be the ability to predict such changes before they occur or to recognize them as they happen. This is a difficult task, and the ability to do it well is uncommon. Some exceptions exist, such as the policy choices informed by analysis of U-boat and merchant sinking rates during the World War II Battle of the Atlantic,[5] but clear cases of success like this are relatively rare. More familiar is the outright failure--or the misuse, misinterpretation, or discounting--of otherwise informative analysis.

During Robert S. McNamara's tenure as Secretary of Defense, advances in communications and computer technology facilitated the local collection of battlefield data and their remote analysis. Conceptually, measures of effectiveness thus derived could influence, shape, or otherwise alter policy, doctrine, and battlefield engagements to further the pursuit of national war goals. However, in contrast to McNamara's previous successes in using quantitative measures to improve processes in the business world and business processes in the Department of Defense, his Vietnam-era attempts to quantify war proved controversial. Perhaps the most contentious and noteworthy example of Vietnam-era MOE is the body count. Indicative of the controversy surrounding the body count is this appraisal:

Central to the McNamara strategy for Vietnam was the application of technological solutions to military problems and the employment of quantitative methods to measure progress. We know now that this approach was the height of arrogance, indeed a manifestation of hubris which, according to the Greeks, is an overweening pride that offends the gods and thereby leads to a fall.[6]

Nevertheless, senior military and civilian leaders did use the body count as a means to gauge progress. This high-level attention, and the underlying organization's efforts to improve the metric, produced a variety of unintended effects. For example, subordinate commanders of one division allegedly had to meet body count quotas,[7] which calls into question whether that division's operations were designed to contribute maximally to war aims or whether they were instead designed to maximize kills to the neglect of more important long-term goals. Compounding this problem, some assert that many officer evaluations and promotions depended on the body count.[8] Perhaps as a result of these pressures and incentives, some officers subordinated integrity to ambition or expedience by inflating their units' counts.[9]

These specifics highlight the dangers inherent in the observation that individuals and organizations work to improve the metric by which they are evaluated, whether or not such improvements benefit the enterprise or its overarching goals. The Army's selection of the body count as its primary metric may not only have contributed to losing the war, but in the end it proved so morally corrosive that it led to a crisis of soul-searching in the postwar officer corps. Further, since the body count was the primary measure that civilian and military leaders analyzed, and since these same leaders ultimately failed the fielded army, the body count and the use of quantification efforts in general continue to suffer from guilt by association.

Even though it had problems that are obvious in retrospect, the body count did have some positive uses. Alain Enthoven, McNamara's head of the DOD Systems Analysis Office, analyzed the counts and concluded that, in effect, the enemy was controlling the size, duration, and intensity of engagements, and thus limiting his casualties:[10]

[In 1967] "the VC/NVA started the shooting in over 90% of the company-sized fights," Mr. Enthoven reported, and "over 80% began with a well-organized enemy attack. Since their losses rise . . . and fall . . . with their choice of whether or not to fight, they can probably hold their losses to about 2,000 a week regardless of our force levels. If, as I believe, their strategy is to wait us out, they will control their losses to a level low enough to be sustained indefinitely, but high enough to tempt us to increase our forces to the point of US public rejection of the war."[11]
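The logic of the memorandum can be recast as a minimal, hypothetical sketch (every figure below is invented for illustration and is drawn from no actual data): if the enemy starts roughly 90 percent of the engagements, his weekly losses follow from how many fights he elects to begin, not from the size of the force opposing him.

    # A minimal, hypothetical sketch of the memorandum's logic; all figures
    # here are invented for illustration.
    ASSUMED_ENEMY_LOSSES_PER_FIGHT = 45   # assumed average enemy dead per engagement

    def weekly_enemy_losses(fights_enemy_elects_to_start, us_force_size):
        # us_force_size is deliberately unused: with the enemy initiating the
        # fighting, adding US troops does not add engagements he must accept.
        return fights_enemy_elects_to_start * ASSUMED_ENEMY_LOSSES_PER_FIGHT

    for troops in (300_000, 400_000, 550_000):
        print(troops, weekly_enemy_losses(44, troops))   # about 1,980 a week at every force level

On this reading, the body count measured enemy choices at least as much as it measured American effectiveness.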

Deriving insights from an analysis of the body count, engagement rates, and other quantitative indicators, Enthoven uncovered one of the pillars of North Vietnamese strategy, but his message and personality[12] cross-threaded with Army culture and doctrine. Rejecting or ignoring his logic, Army leaders did not reassess their policy and persevered in firepower-supported search-and-destroy tactics at the expense of pacification efforts. Many view this decision as a strategic failure.[13] Ironically, further analysis of the body count suggests that the reason for the Army's persistence may now be understood.

Scott Gartner argues that analysis of the body count and enemy casualty rates can shed light on assessments of American strategies during the Vietnam War as conducted by the US Congress, the Johnson Administration, and the Army.[14] Specifically, he claims that the record numbers of US killed in action and the trend toward larger numbers resulting from the 1968 Tet Offensive tipped Congress from supporting to opposing the war. Gartner uses the same data to explain the Johnson Administration's internal strife and strategic gridlock in early 1968 by analyzing the conflicting signals it received from interpreting the number of enemy and US dead. On the one hand, US casualty figures[15] were bad and getting worse at an increasing rate. On the other hand, the body count showed that the enemy's situation was also bad and also worsening.[16] Gartner explains that these conflicting signals caused the Administration's outlook to become increasingly divided, with then-Secretary of Defense Clark Clifford later observing, "The pressure grew so intense that at times I felt the government might come apart at the seams. Leadership was fraying at its very center."[17]

During the same period the Army, acting in a logical, rational manner, appraised its strategy of firepower-supported search-and-destroy tactics as fundamentally healthy based on the surge and positive direction in the enemy body count. Accepting friendly casualties in war as inevitable, Army leaders evaluated their dominant indicator and saw no need to change strategy even as the Congress and American public derived the opposite conclusion from their analysis of the number of US soldiers killed in action.

These reactions to the Tet Offensive shed light on how leaders interpret and react to changes in their dominant indicators in time of war, and they suggest an avenue by which meaningful measures of effectiveness might be determined. Gartner insists, perhaps rightly, that if the dominant indicators (the primary MOE) of a decisionmaking body are known, that body's decisions may be predicted and made more comprehensible by examining and understanding the indicators' magnitudes and rates of change.[18] Recalling that the reason to use military force is to compel compliance with demands, the implications for a dominant indicator are significant. If a warring party can determine an adversary's dominant indicator, and can control or directly affect the amount and rate of damage to that indicator by military force or other means, then it might better influence when the adversary concludes that continuing his current strategy is detrimental to his interests.[19] The relevant MOE are the enemy's set of dominant indicators. If Mr. Enthoven's previously quoted analysis was correct, as it appears to be, North Vietnamese leaders did just this.
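Gartner's premise lends itself to a simple formalization. As a loose, hypothetical sketch (the decision rule, thresholds, and weekly figures below are invented and are not taken from Gartner's model), a decisionmaking body can be pictured as signaling a change of strategy only when its dominant indicator is both bad in magnitude and worsening quickly:

    # A loose, hypothetical sketch of a dominant-indicator decision rule; the
    # thresholds and weekly figures are invented, not taken from Gartner.
    def strategy_change_signaled(indicator_series, level_threshold, rate_threshold):
        """True if the latest value and its rate of change both exceed what
        the watching organization is assumed to tolerate."""
        latest = indicator_series[-1]
        rate_of_change = indicator_series[-1] - indicator_series[-2]
        return latest > level_threshold and rate_of_change > rate_threshold

    # Invented weekly US killed-in-action figures around a Tet-like spike, as
    # read by a body assumed to treat US losses as its dominant indicator.
    weekly_us_kia = [180, 195, 210, 380, 540]
    print(strategy_change_signaled(weekly_us_kia, level_threshold=300, rate_threshold=100))  # True

An organization watching the enemy body count instead would read the same weeks' rising enemy losses as evidence of success, so the identical battlefield events would signal no change of strategy. That divergence mirrors the split between Congress and the Army described above.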

Punishment and Denial Strategies

The analysis of quantified data in wars is fairly common. Gartner supports his argument and conclusions by analyzing examples from both world wars and from the failed Iranian hostage rescue attempt. Other analysts have examined other campaigns. In Bombing to Win: Air Power and Coercion in War, Robert Pape's analysis of airpower as a means of compelling an enemy to do one's will offers additional insight into the problems of causality and of selecting MOE. What, exactly, he asks, causes an enemy to concede to demands?

To answer this, Pape examines the application of airpower to conflicts and concludes that all historical examples essentially fall into two categories. The first, which he terms coercion by punishment, is most closely associated with the use of strategic airpower and consists of bombing civilians and infrastructure in the belief that the ensuing civilian pain will "cause either the government to concede or the population to revolt against the government."[20] The bombing of Japanese and German cities in World War II is the archetypical example of punishment strategies.

Pape's second category, coercion by denial, consists of using airpower to destroy the military of the opposition and is more closely associated with tactical uses such as interdiction and close air support. Pape explains that "denial campaigns generally center on destruction of arms manufacturing, interdiction of supplies from homefront to battlefront, disruption of movement and communication in the theater, and attrition of fielded forces."[21] Allied tactical bombing and strafing of German forces and transportation infrastructure in France, and the destruction of Iraqi forces by coalition airpower during the Gulf War, are two noteworthy examples of denial strategies.

Pape's analysis leads him to several conclusions, one of which is that punishment strategies (strategic bombing) simply do not work. They never cause the bombed population to rise in revolt or otherwise exert political pressure on the targeted regime. Pape asserts and shows that bombed populations instead become politically apathetic and focus on immediate survival needs rather than on political change. Therefore, the assumed causal mechanism behind strategic bombing and other punishment strategies is mistaken; the linkage does not exist. Bombing civilian populations has never resulted, and probably never will result, in political change. Pape concludes that punishment strategies by themselves are ineffective in obtaining the goal of coercion.

A second Pape conclusion is that denial strategies, those that result in the destruction of an opposing military force, may work, but they require the unequivocal commitment of the nation pursuing them. He reminds his readers that when considering whether to embark on a strategy of denial, leaders should be prepared for an arduous, costly, prolonged, and bloody fight that may last until a very bitter end. Pape cites several examples to support his conclusions regarding denial strategies, including the Japanese and German "unconditional" surrenders in World War II as well as Iraq's acceptance of coalition demands at the end of the Gulf War. Like Gartner, Pape assumes rationality on the part of the appropriate decisionmakers and concludes that these governments' decisions to surrender resulted from their leaders' careful cost-benefit analysis.[22]

If Pape is correct, air and other modern strategists are on the horns of a dilemma, with denial strategies (like that waged by NATO against fielded Yugoslavian forces in Kosovo) becoming prohibitively expensive, and punishment strategies (of which NATO's bombing of Serbia could be considered an example) unlikely or unable to work. Compounding this dilemma, denial and punishment strategies both rely on widespread destruction and attrition mechanisms. Might there be another, less costly way to wage war?

Attrition vs. System Failure

Some analysts suggest that viewing war differently could lead to a different victory mechanism. Warfare, they say, is not just a contest of attrition or a form of violent friction in which each side wears down the other through the constant stress of combat. Instead, it is the violent competition for survival (or perhaps dominance, if survival is not at stake) between two systems. These theorists postulate that military systems adapt and evolve when attacked, and through a process of feedback, anticipation, and communication change their processes and products in ways that thwart the actions and intentions of the adversarial system.

According to those who advocate a systemic view of warfare, it is the behavior of the opposing system that must be defeated to achieve victory. This premise suggests a new and different victory mechanism. The most obvious way to defeat a system is to destroy its components. This is the way in which wars of attrition have always been fought. The systemic, theoretical "new" way to victory is to destroy an opposing system's ability to adapt by reducing or preventing cohesion between system components.[23] The systemic defeat mechanism creates a condition (or in the vernacular, an "environment") in which the antagonistic system cannot adapt in sufficient time to offset the repercussions of unfriendly acts. This is accomplished in part by denying elements of the targeted system the ability to communicate, reinforce, or otherwise achieve the advantages of mutual support and protection. When this is accomplished, the system under attack ceases to behave in a coherent or coordinated manner and can no longer adapt to enemy acts; it will either fail to function at all (leaving the environment dominated by the opposing system) or can be defeated in detail.

Systemic targeting's adherents claim that overhead imagery, precision munitions, and detailed knowledge of enemy networks and systems allow the rapid design and efficient achievement of anti-cohesion strategies. The great difference between today's theoretical capability and possible historical precedents (such as the Allies' destruction of the French and German transportation systems while liberating Western Europe) is the ability to stop system behavior by destroying or otherwise neutralizing key nodes, hubs, or linkages rather than destroying the entire system. Advocates claim that anti-cohesion strategies can create attrition-like effects, even while simultaneously destroying far fewer system elements, and do all this at substantially lower risk and cost to friendly forces.
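The claimed difference between anti-cohesion targeting and attrition can be illustrated with a small, hypothetical sketch (the toy network and the choice of targets below are invented): removing a few well-connected hubs fragments the system, while destroying the same number of peripheral elements leaves it largely coherent.

    # A small, hypothetical illustration of anti-cohesion targeting; the toy
    # network below is invented. Striking the two hubs fragments the system,
    # while destroying two peripheral nodes barely degrades its cohesion.
    from collections import deque

    links = {
        "h1": {"a", "b", "c", "d", "h2"},
        "h2": {"e", "f", "g", "h", "h1"},
        "a": {"h1"}, "b": {"h1"}, "c": {"h1"}, "d": {"h1"},
        "e": {"h2"}, "f": {"h2"}, "g": {"h2"}, "h": {"h2"},
    }

    def largest_connected_fraction(graph, destroyed):
        """Fraction of surviving nodes that remain in one connected piece."""
        alive = {n: nbrs - destroyed for n, nbrs in graph.items() if n not in destroyed}
        seen, best = set(), 0
        for start in alive:
            if start in seen:
                continue
            queue, component = deque([start]), {start}
            while queue:
                node = queue.popleft()
                for nbr in alive[node]:
                    if nbr not in component:
                        component.add(nbr)
                        queue.append(nbr)
            seen |= component
            best = max(best, len(component))
        return best / len(alive) if alive else 0.0

    print(largest_connected_fraction(links, {"a", "b"}))    # peripheral attrition: 1.0
    print(largest_connected_fraction(links, {"h1", "h2"}))  # hub strikes: 0.125

In this caricature, two precisely chosen strikes achieve the fragmentation that attrition of the periphery, element by element, would take far longer to produce.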

In a future conflict, leaders may choose to employ an anti-cohesion strategy. This may work quite well and might offer a way to victory without exacting or enduring attrition's terrible costs. However, before placing all our eggs in this basket, a point of caution is in order: an enemy system that starts out complex and adaptive will try to return to that condition once it suffers attacks. To use a common biological metaphor, it has the incentive and ability to heal, and will try to do so in ways that may not be anticipated or easily countered, as the resilience of the Ho Chi Minh trail attests. Systems, as collections of formal and informal networks, are very resilient, and this toughness is likely to increase with time as the targeted system and sub-networks "learn" how to adapt to attacks. Compounding this tendency, in many cases the more time a system has to recover, the more likely it is to succeed. It follows that efforts to destroy linkages and cohesion must quickly achieve and indefinitely maintain widespread results or else the targeted system will learn how to adapt to and overcome applied effects.

An example from Kosovo supports the claim that effects have to be permanent to convince. During the conflict the United States used "non-destructive" weapons to disrupt electrical distribution systems. Although the Serbs initially required over 500 technicians and 15 hours to restore power after early attacks, they quickly learned to do it in as little as four hours,[24] and four-hour electrical disruptions or their equivalent are not by themselves a war-winning effect. NATO realized this and later disrupted electrical power by destroying transformers, distribution nodes, and transmission lines.[25] Few would disagree that the demonstrated capability to permanently deny electricity to large portions of Serbia exerted much more pressure than did temporary interruptions of service, but a larger issue is to determine how much the denial of electricity contributed to ending the war. The answer to this, of course, is that no one really knows for sure.

The implication of such admonitions and examples is that before embracing network anti-cohesion strategies, planners must be able to demonstrate that they can maintain system-disabling effects for durations of their choosing. They may have options as to how to accomplish this. After first degrading enemy system cohesion they can either continue to detect, attack, and defeat remaining and emerging system cohesion schemes using both virtual and physical means, or they can take advantage of temporary hostile system incapacitation to facilitate the piecemeal destruction of its components. This second option, if it works, would in effect be one-sided attrition warfare, and in wars of unlimited goals might allow the United States to decisively destroy an adversary's military and infrastructure while experiencing very low casualties.

But what of the more likely limited wars in which the utter destruction of an opponent is not desirable? If the goal of applying military force is to stop or reverse a given opponent's actions for a relatively limited time, then perhaps wholesale network destruction is not required, and temporary network disablement will suffice. Recent experiences in Kosovo, Bosnia, and even the Gulf War suggest that at least for the time being the United States is inclined to goals that allow the use of temporary enemy network disablement. Each of these targeted regimes remained in power and retained its military after conflict termination. The future is likely to present leaders with similar cases of limited goals, but when that happens, friendly forces will have the additional tools of network-centric warfare to help prosecute the war.

Network-Centric Warfare, Coercion, and Quantification

The ideal envisioned in network-centric warfare (NCW) is the possibility of achieving decisive effects without having friendly forces incur attrition's devastation. According to NCW's adherents, advances in communications, computers, and weapons, coupled with changes in organization and doctrine, will produce a lithe, strong, wise, and adaptable military. Backed by superior, shared knowledge and understanding, networked forces will dominate future battlefields, reacting quickly and appropriately to tactical challenges, denying or destroying enemy systems as necessary until compliance with US demands is an adversary's only viable option. Key to such capability is the strength of friendly networks. Instantaneously reporting intelligence events and targets, facilitating the efficient allocation of resources, and automatically tabulating logistics and battlefield results, networks and their associated knowledge management software and computers will enable NCW forces to dominate the physical battlefield at substantially reduced risk.

As beguiling as it is, nothing in either network-centric warfare's vision or the possibilities offered by anti-cohesion strategies reduces decisionmakers' need to know how well they are doing before the war is over. Since the modern battlefield (especially a future one under the scrutiny of networked sensors, forces, and weapons) produces too much information for individuals or organizations to assess fully, battlefield data must and will be reduced to specific indicators. But what sorts of indicators best represent proximity to victory or distance to defeat for friend and foe? What measures of effectiveness are appropriate for network-centric warfare or future warfare under any name?

Earlier sections of this article listed some of the measures reported during the Kosovo conflict. But sorties flown, bombs dropped, and similar metrics are measures of combat performance, and do not really reflect or report on a chosen strategy's effectiveness. This distinction is an important one, and it holds for the future as much as it held for the past. Even though the networks of the future will be able to count, tabulate, and report things or events more rapidly than today's reporting systems do, the availability of such information will not necessarily offer insight into the adversary's cost-benefit calculus.

A team of defense analysts offers a possible framework for analysis of this problem by describing war as the sum of the physical, reason, and belief realms. These analysts define the belief realm to include individual morale, leadership, group cohesion, resolve, emotion, fear, or state of mind as a result of training. The goal within the belief sphere "is to destroy the enemy's will."[26] Reason, as they define it, is the realm of human cognition, and is the ability to grasp complex battlefield situations and to make and act upon decisions. The reason realm encompasses situational awareness, analysis, and decisionmaking capabilities. Last, they define the physical realm to include the activities of move, strike, and protect.[27]

If one accepts this three-pronged description, it may not be too far-fetched to claim that existing measures of weapon system performance adequately describe the physical realm of warfare. Tanks and artillery destroyed, bombs dropped, body counts, miles advanced, and other familiar physical estimates provide reasonable measures by which to define battlefield progress. Similarly, the adaptability of a network to distorting events can be quantified using such concepts as the interconnectedness of components, the time networks need to "heal," and the time a network takes to accomplish a given task. Certainly these and other relevant network features can all be measured for friendly systems, and it seems reasonable to suppose that such indicators are also available for hostile networks and systems. Perhaps then, the means exist that allow the determination of MOE for both friendly and adversary reason realms. But what measures work for the realm of belief?
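Before turning to that question, a rough, hypothetical sketch suggests what one such reason-realm measure, the time a network needs to "heal," might look like in practice (the links, the damage, and the repair rate below are all invented):

    # A rough, hypothetical sketch of a "heal time" measure; the links, damage,
    # and repair rate are invented. Heal time is counted as the steps a damaged
    # network needs before every surviving node can again reach every other,
    # assuming it restores a fixed number of broken links per step.
    NODES = ["hq", "relay1", "relay2", "unit_a", "unit_b", "unit_c"]
    INTACT_LINKS = {("hq", "relay1"), ("hq", "relay2"), ("relay1", "unit_a"),
                    ("relay1", "unit_b"), ("relay2", "unit_c")}

    def fully_connected(links):
        """True if every node can reach every other node over the given links."""
        reached, frontier = {NODES[0]}, [NODES[0]]
        while frontier:
            current = frontier.pop()
            for a, b in links:
                for near, far in ((a, b), (b, a)):
                    if near == current and far not in reached:
                        reached.add(far)
                        frontier.append(far)
        return reached == set(NODES)

    def heal_time(destroyed_links, repairs_per_step=1):
        """Steps needed to restore full connectivity at an assumed repair rate."""
        links = INTACT_LINKS - destroyed_links
        repair_queue = sorted(destroyed_links)        # assumed repair priority
        steps = 0
        while repair_queue and not fully_connected(links):
            links |= set(repair_queue[:repairs_per_step])
            repair_queue = repair_queue[repairs_per_step:]
            steps += 1
        return steps

    print(heal_time({("hq", "relay1"), ("relay1", "unit_a")}))  # 2 steps at one repair per step

A similar form of bookkeeping, applied to an adversary's networks to the extent they can be observed, would be one candidate for a reason-realm indicator; the belief realm, as the discussion below argues, offers no comparable handle.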

A fundamental problem with the belief realm is that it is difficult to describe, let alone quantify. Belief's definition includes words such as resolve, morale, emotion, fear, and state of mind. But all of these conditions are subjective and can be irrational. They are very difficult, if not impossible, to measure. Since the underlying challenge is to determine how close an enemy is to conceding to demands and whether friendly actions are affecting that calculus, what is really sought is a measurement of the adversary's strength of will. But this, too, may be so complicated as to be impossible. Consider again trying to understand what made Milosevic capitulate in Kosovo. Shortly after the termination of the conflict General Shelton wrote, "We may never know what ultimately caused Milosevic to quit. I find it difficult to get inside the mind of someone who is willing to take his nation to war four times in 10 years and sanction 'ethnic cleansing' and other atrocities."[28]

Analyst Timothy Thomas offers a related observation:

NATO had almost perfect intelligence about the intentions, goals, and attitudes of President Milosevic through a multitude of personal discussions with him over the previous four years by representatives from scores of nations (and possibly from communications intercepts), yet could not get him to the negotiating table, foresee his ruthless ethnic cleansing campaign in time to stop him, or predict his asymmetric responses to NATO technological and bombing prowess.[29]

Apparently Milosevic's will confounded NATO leadership. Perhaps this should not be a surprise.

The will of enemy leaders is influenced by the will of the nation or group they represent, the legitimacy of their governance, the strength of their military, and the tenacity of their international allies. No doubt other influences such as religion and cultural factors also affect it, as do battlefield events. The will of an army or a national leader is very adaptive and complex, perhaps immeasurably so. It can defy tremendous pressures and remain strong, as Russian and British resistance to German advances in World War II demonstrated, or it can die quickly when attacked, as did the will of the French during the blitzkrieg and that of the British defending Singapore. Consequently, quantifiable indicators for an enemy's will or belief appear to be beyond both our theoretical and empirical grasp. The enemy's will is probably unpredictable, and may be unknowable or even irrational. It defies quantification.

Understanding Enemy Will . . .

Logically, the process of determining appropriate measures should begin only after a causal mechanism linking military cause to political effect has been identified. Unfortunately, many or even most MOE are selected in the absence of known causality. Such MOE are merely informed conjecture. Although they may prove useful, they may also prove irrelevant to the goal of discerning enemy proximity to defeat, and may even precipitate a variety of detrimental effects, as occurred with the body count.

Meaningful wartime measures that illuminate proximity to victory are elusive simply because their underlying causal mechanisms are exceedingly difficult to determine. This may be because causality, like the systems within which analysts seek it, is not a simple relationship in which a rise in one input causes a corresponding rise in desired output. Instead, it is probably highly complex. In most cases it would seem to stem from a multitude of factors and influences, the blend of which is unique and unpredictable in each circumstance. Understanding causality may not be possible, which leads to the contemplation of other means of gauging enemy will.

Scott Gartner offers a potential alternative with his suggestion that the ability to manipulate an enemy's dominant indicators can lead the way to a convincing victory with an economy of violence. This goal, which lies at the heart of network-centric warfare and other recent concepts, is certainly laudable, but applying Gartner's premise in plans presupposes that the enemy's dominant indicators are something that can be determined beforehand. Perhaps they are, but the ability to know an enemy's dominant indicators with confidence requires a prior, in-depth understanding of his psychology and society, which in turn requires understanding the nuanced influences of habit, personality, religion, governance, and culture for potential adversaries. The costs--as well as the benefits--of gaining such knowledge would be great.

. . . or an Irrelevant Enemy Will?

If the goal of resorting to war is to conquer and subjugate a population, as it was against the Germans and Japanese in World War II or the Confederates in the US Civil War, then the matter of the enemy's will is almost irrelevant if the opposing nation is able to endure the costs of destroying its enemy. Hitler and Germany under him had strong wills, but after their conquest their collective will was irrelevant. It had not been changed, but instead destroyed, at great costs to both sides. This is an effective but costly way of dealing with the problem of an implacable enemy. It is a simple attrition strategy, easily defaulted to, and it is reliable if a country has the physical means and determination necessary to see it through.

Modern Western nations are reluctant to embrace attrition, however, because it is so costly to both sides of a conflict. Not only is it devastating to the vanquished, it is also exorbitantly expensive to the victors. The United States is particularly reluctant to resort to attritional warfare even though it rarely hesitates to inflict heavy casualties on its opponents. This superficial paradox is actually quite understandable. Once a war starts, the object is to win it, which the United States tries to do by lavishly spending treasure rather than blood. Woe to the other side. This is the American way of war and is perhaps best paraphrased in the movie Patton as "making the other poor dumb bastard die for his country."

General Patton pursued the termination of an unlimited war, but few wars are fought for unlimited goals. Most past US wars were, and perhaps all future ones will be, wars of limited goals, wars that end once one side agrees to the other's terms. This restores and again highlights the relevance of wills. But how does one affect another's will in a limited war?

The answer is that no one really knows; the question has no easy answer. The makeup of an enemy's will is so complicated, and varies so much with each scenario, that any claim of a universally applicable means of countering it is immediately suspect. Anti-cohesion advocates suggest that targeting an opponent's communication mechanisms might make his will irrelevant by facilitating the destruction or neutralization of his military. This might work in cases such as the Gulf War or in future wars against conventional militaries, but it is difficult to imagine its successful use in wars waged in urban areas, or how it could have been used effectively in Vietnam.

Improving military doctrines, networks, and organizations probably will enhance the efficacy of military operations, which justifies the necessary investments. But future limited wars may be of such a nature that efficiently performed operations could prove insufficient to the goal of changing an implacable opponent's will. If this is true, and if future planners and analysts continue to have difficulties determining causality and its derived MOE, then the mathematics of war will remain unclear. In the future, knowledge of the enemy, adaptability, and determination are certain to retain their important roles in warfare, no matter how quick or efficient the application of violence becomes. Warfare will remain an art instead of becoming a science, and Pape's reminder that the ability to destroy things with airpower (and by extension, other long-range fires) will not by itself gain easy victory will remain relevant, despite claims to the contrary.


NOTES

1. See the 8 May 2000 DOD News Briefing in which Brigadier General John D. W. Corley, the Director of Studies and Analysis, Headquarters US Air Forces in Europe, attempted to defend a recent assessment of the damage airpower caused to Serbian armor and artillery deployed in Kosovo, http://www.defenselink.mil/news/May2000/t05082000_t508koso.html, accessed 22 May 2001.

2. This quote was delivered in a lecture at the Naval War College under the condition of nonattribution.

3. See Joint Vision 2010, with its emphasis on precision; US Department of Defense, Chairman of Joint Chiefs of Staff, Joint Vision 2010 (Washington: Office of the Chairman, Joint Chiefs of Staff, undated [1996]). See also Arthur J. Cebrowski and John Garstka, "Network Centric Warfare," US Naval Institute Proceedings, January 1998, pp. 28-35.

4. Scott S. Gartner, Strategic Assessment in War (New Haven, Conn.: Yale Univ. Press, 1997), pp. 55-58.

5. Ibid., pp. 91-116.

6. Mackubin Thomas Owens, "Technology, the RMA, and Future War," Strategic Review, 26 (Spring 1998), 64.

7. Guenter Lewy, America in Vietnam (Oxford, Eng.: Oxford Univ. Press, 1978), p. 81.

8. Andrew F. Krepinevich Jr., The Army and Vietnam (Baltimore: Johns Hopkins Univ. Press, 1986), p. 254.

9. Ibid., pp. 199, 203.

10. Ibid., pp. 188-90.

11. Memorandum of 1 May 1967, Pentagon Papers, IV: 465, quoted in Lewy, p. 83.

12. "His flair for quantitative analysis was exceeded only by his arrogance. Enthoven held military experience in low regard and considered military men intellectually inferior. He likened leaving military decisionmaking to the professional military to allowing welfare workers to develop national welfare programs." H. R. McMaster, Dereliction of Duty (New York: HarperCollins, 1997), p. 19.

13. Krepinevich, p. 190.

14. Gartner, p. 121.

15. Ibid., pp. 191-93.

16. Ibid., pp. 194-96.

17. Stanley Karnow, Vietnam: A History (New York: Penguin, 1991), p. 560, as quoted in Gartner, p. 141.

18. Gartner, pp. 24, 26, 32.

19. This claim, it should be noted, makes several questionable assumptions. First, it assumes that behavior characteristics that describe three elements of the US government in the late 1960s equally apply to potential adversaries in the future. Clearly, such may not be the case. Other cultures and times may make this sort of mirror-imaging inappropriate. However, in support of the universality of the applicability of the dominant indicator approach, Gartner analyzes German and British decision points during the Battles of the Atlantic during both world wars. He concludes that the methodology works consistently in those cases as well. Perhaps it could equally apply to other cases, including those involving non-Western decisionmaking actors and processes.

20. Robert S. Pape, Bombing to Win: Air Power and Coercion in War (Ithaca, N.Y.: Cornell Univ. Press, 1996), p. 59.

21. Ibid., p. 69.

22. Ibid., pp. 15, 16.

23. Michael Brown, Andrew May, and Matthew Slater, Defeat Mechanisms, Military Organizations as Complex, Adaptive, Nonlinear Systems, report prepared for the Office of the Secretary of Defense, Net Assessment (McLean, Va.: Strategic Assessment Center, Science Applications International Corporation, 2000), pp. 34-37.

24. William A. Arkin, "Smart Bombs, Dumb Targeting?" Bulletin of the Atomic Scientists, May/June 2000, p. 52.

25. Ibid.

26. Measuring the Effects of Network-Centric Warfare, report prepared for the Office of the Secretary of Defense, Net Assessment (Arlington, Va.: Booz Allen & Hamilton, 1999), pp. 2-5, 2-6.

27. Ibid.

28. Letter to the editor from General Henry Shelton, "Kosovo: Joint Chiefs Chairman Disagrees," Christian Science Monitor, 12 July 1999, p. 11.

29. Timothy L. Thomas, "Kosovo and the Current Myth of Information Superiority," Parameters, 30 (Spring 2000), 15.


Lieutenant Commander William S. Murray, USN, is a research analyst in the Strategic Research Department at the Naval War College in Newport, R.I. He served tours on two fast attack submarines, graduated from the Naval War College, and has served as an action officer in the J3 directorate at US Strategic Command. He has previously written on the command and control of nuclear weapons.

