From Parameters, Spring 1998, pp. 26-35.
The United States has undergone another cycle of political recrimination and bureaucratic maneuvering over the foreign and defense intelligence system and proposals for its reform. Public attention again focused, as it did when intelligence reform was a hot topic in the mid-1970s, on lurid allegations of bungling or immorality in particular scandals such as the Aldrich Ames case or US association with murderous Guatemalan officers. What ultimately matters most, however, is whether the intelligence system provides the main thing for which it was built up in the decades after Pearl Harbor: timely warning of danger to material national interests.
All in all, US intelligence has performed well over the half century since the National Security Act. This assertion is hard to prove, because it depends on what we make of the dogs that did not bark and what we assume might have happened if we had not had the huge intelligence establishment that evolved during the Cold War. Most of all it is hard to prove because evidence of success is less obvious than evidence of failure. Good news is no news. Many of the cases where the warning system worked well are not noticed, simply because policymakers take such performance for granted. Or such cases are not publicized, because there is no political impetus to reveal classified information as there is when things go badly wrong.
Although the real story is not as bad as most laymen think, it is nevertheless worse than we would like. There are too many cases, too regularly, in which warning fails. It fails for three different reasons: because warning is not given at all, or is not given in time for anything to be done about it, or is not taken seriously by those with the power to do something about it. Only rarely are these failures due to simple stupidity or irresponsibility. There are certain types of problems and situations where the necessary ingredients for proper warning, or at least effective warning, will never exist.
This sounds suspiciously close to a typical excuse for mistakes. Indeed, some believe that this degree of pessimism is a self-serving attempt to pass the buck and explain away intelligence professionals' inadequacies by blaming failure on impossible tasks or irresponsible consumers. It is true that the mainstream scholarship on the subject has emphasized the overwhelming obstacles to avoiding surprise, but hardly any of those authors are absolutely hopeless about the problem. Real hopelessness could only lead to the conclusion that intelligence bureaucracies should be disbanded to free their resources for something useful.
Progress is possible. But officials need to realize that they will occasionally be surprised despite their best efforts. They need to understand that this is not because the professionals responsible for warning let them down, but because effective warning just is not feasible in regard to some problems, or because warning is so unavoidably ambiguous that policymakers who get it can easily discount it.
Part of the trick is to figure out what types of warning pose what types of problems. Ultimately, most intelligence is directed at some sort of warning, and most intelligence officers are responsible in some sense for warning. Even encyclopedic compilations of basic data that stay on the shelf have an indirect role in warning, to the extent that they contribute to the background knowledge required for making sense of new data. Thus the term "warning" covers an unmanageable amount of ground about which to generalize unless we bound it. Although in most cases the proper assessment of indicators depends on seeing them against the background of history, this article discusses warning only in terms of near-term alerting, based on recent evidence and judgments made possible by such evidence, or on new judgments about old data.
To help clarify a few bundles of problems, this essay focuses on two general categories:
"Factual-Technical" Warning: Collection is the Crux
Certain threats are clear and tangible, and the basic challenge is to detect evidence of them and report it to political authorities in time to allow action to avert or counter them. This means that technical functions of collection and processing of relevant observable or audible data are the main parts of the warning problem. Analysis may not be irrelevant in such cases, but it is less often necessary for identifying the threat and engaging the attention of policy-level officials. Warning may consist more or less of running an intercept straight to the Secretary of Defense or Assistant to the President for National Security Affairs.
A classic example of factual-technical warning was the discovery of Japanese plans to attack Midway in 1942, which followed directly from cracking crucial Japanese naval codes, and doing so in time to marshal US naval power in the right place to achieve the victory that turned the tide of the Pacific war. Ariel Levite cites this case as a model, and as evidence that surprise is not as hard to avoid as the traditional school of scholarship on intelligence has suggested. Another classic example was the 1962 discovery that the Soviet Union was constructing intermediate-range missile sites in Cuba, and getting that information in time to mount a blockade, pose an ultimatum, and squeeze the missiles out before they became fully operational. This was the legendary US strategic success of the Cold War.
Both of these classic cases are models that we should hope to emulate, but they are not models we can count on. Dramatic and memorable as they were, they do not encompass the worst difficulties in strategic warning.
First, cases of pure success like Midway are rare. These are cases where warning is given, given in time for something to be done, and taken seriously by those with the power to act--the mirror image of the three ways in which warning fails.
It is wonderful when skillful collection and cryptography can identify an unambiguous threat, to which the response is obvious, and can do so far enough in advance to allow counteraction. The only recommendation we need offer with such a model in mind is to improve surveillance and codebreaking as much as possible. That is an important prescription but hardly a controversial one. (At least it is not controversial for professionals in security policy. In the future some forms and targets of collection are bound to become more controversial as the Cold War consensus recedes, critics of intelligence get more political running room, and arguments against reading other "gentlemen's mail" are less easily laughed out of court.) Uncontroversial recommendations may pose all sorts of practical difficulties when it comes to implementation. Indeed, this is why the bulk of resources in the intelligence budget will continue to be allocated to collection. Uncontroversial matters, however, are not the ones that need much reflection and debate about what needs to be done.
The second reason to qualify the importance of these models is that the missile crisis case was only half a success. To the analytical community, this case represented failure, because the relevant National Intelligence Estimate (NIE) was wrong about Moscow's intentions, suggesting that Soviet caution would preclude such a provocation. U-2 reconnaissance and photo interpretation did discover the emplacements before they were complete, in time for the US government to deliberate and to formulate and execute a successful strategy to get them out. The President and the National Security Council "ExCom," however, would have preferred to know much earlier that the Russians would gamble in that way, so that they could have tried more ways to deter the action, or at least so that they could have had more than just the famous "Thirteen Days" to figure out how to avert World War III.
If results are all that count, success in the technical terms in which 1962 was a success might be good enough. Providing sufficiently early alert of threatening factual events, of physically observable changes in capabilities, was enough because there was no atmosphere of political confusion to complicate the translation of those facts into a strategic counterinitiative. Focusing on capabilities was enough in 1942 because in wartime, no one is worried about frightening the adversary and provoking a spiral of misperception and miscalculation.
In 1962 focusing on capabilities was also enough because in the political context of that case--the height of the Cold War--there was sufficient consensus on the basic underlying threat in terms of Soviet intentions that detection of the capabilities was sufficient to trigger response. This was typical of the Cold War, and perhaps unlikely to be repeated often when dealing with other countries that are not locked into a long-standing tense contest with the United States, and where neither their intentions nor our own interests are clear enough in advance for Washington to react on short notice.
In instances more politically complicated than Midway or the missile crisis, equally successful factual-technical warning may prove worthless. The difference is illustrated by the events of July 1990, when technical warning was perfect, yet did not lead to prompt action to counter the threat as in 1942 or 1962. There was not the slightest difficulty in discerning that Iraq was mobilizing forces capable of overrunning Kuwait. Had the United States been in a long knock-down-drag-out political struggle with Iraq, with previous crises having energized planning and high-level discussion of options, this information might have been more than enough to trigger action to deter the invasion. Despite ample factual-technical warning, however, the US government did not attempt to deter Iraq. That fact has not prevented many commentators, accustomed to using deterrence as an all-purpose buzzword during the Cold War, from speaking of the invasion as a failure of deterrence. Washington did not utilize the warning because it was not sure about intentions--either Baghdad's or its own. The Bush Administration did not make up its mind to undertake the huge diplomatic and military operation to expel Iraq until after 2 August. Just as in the North Korean attack of June 1950, Washington had simply been preoccupied with other problems, and had not focused on or thought through what the United States should do in such a contingency.
"Contingent-Political" Warning: Too Vague, Too Soon, or Too Late?
This category is not about detecting changes underway in deployment of capabilities; rather it involves predicting decisions and initiatives by other states, groups, or intelligence targets of interest to the United States. The issue is not just what they can do, which is necessary but not sufficient grounds for high-priority warning, but whether they might choose to do it.
A threat consists of capabilities multiplied by intentions; if either one is zero, the threat is zero. For example, both Britain and France have the capability (in their SLBM warheads) to incinerate several dozen American cities, but US warning officers spend no time at all worrying about this because they know that there is no intention in London or Paris to do so. They face the reverse situation with Libya or Iran, where there is ample reason to worry that either one might well attempt to launch a nuclear attack on the United States if it could, but no reason yet to worry that they can.
Contingent-political warning involves soft or probabilistic judgments ("the junta in X might decide to attack Y, especially if they think A, or Y does B, but it is more likely that X is bluffing and expects Z to intervene diplomatically") rather than categorical statements that are more associated with factual-technical warning ("the entire air force of X is standing down, which is consistent with preparation to attack").
When an established superpower adversary like the Soviet Union was involved, intelligence judgments about its intentions did not matter much because senior policy officials considered it within their competence to make that judgment. Indeed, during the Cold War they seldom credited middle-level professionals anywhere in the bureaucracy with as much insight about the Russians as they thought they had themselves. In such situations, therefore, factual-technical warning was all that was either necessary or taken seriously--as soon as the facts were collected and reported, the principals took over.
Since the end of the Cold War, the United States has had no established superpower adversary, no comparable focus of attention, to make it likely that senior policymakers will fill that role. The situation was not even quite the same during the Cold War in regard to novel, lesser threats. A contingent-political threat is a potential disaster whose preconditions and ultimate likelihood may be identified, but whose timing is uncertain. Two prime examples were the oil shock of 1973 and the Iranian Revolution of 1978-79. In both cases there were numerous observable indicators of potential threat, but before the fact neither seemed sufficiently catastrophic or certain to make the politicos take notice and take over. In neither one, moreover, did professional experts ring the alarm bell hard enough or soon enough to avoid nasty surprises.
Long before 1973 objective evidence, freely available, showed that the West was becoming increasingly dependent on Middle Eastern oil. Moreover, the Organization of Petroleum Exporting Countries (OPEC) was formed more than a dozen years before 1973. Better intelligence collection might conceivably have averted the shocks if Saudi and other producer intentions to mount the embargo and impose the price hike had been handed to intelligence on a silver platter. But did such firm intentions even exist before the crisis? The capability of the Arab oil-producing states to disrupt international oil markets could be no surprise, but their decision to do so was.
Intelligence analysis could well have done better, especially if security compartmentation had not limited the ability of analysts to judge the reliability of clandestine sources, but exactly when before late September 1973 should they or anyone in intelligence have issued a warning, and exactly what should that warning have said? If the point was simply to alert policymakers to the Arab capability to disrupt energy supplies, a warning would have been warranted years before the crisis occurred. But what would it have accomplished? It might have been easier to report higher odds of the Arab decision closer to the event, but by then how much would it have mattered? At that point it is unlikely that any US action could have averted the problem, short of supporting the Syrian and Egyptian attack on Israel; Washington did, after all, help dissuade the Meir government from mounting a preemptive strike.
On Iran, it was certainly possible to have provided longer advance warning of the fragility of the Shah's regime if both collection and analysis had been better. The lack of Farsi-speaking US case officers, the paucity of contacts in opposition groups, the threadbare analytical operation on Iran--all of these can certainly be blamed for not making policymakers think harder, much earlier, about putting all American eggs in the Shah's basket. But until 1978, when would a prediction of revolution have been warranted? It would have been most useful at the beginning of the decade, when the Nixon Administration was fast mortgaging US policy to the Shah, but it would soon have seemed quite wrong. It would have been a bit easier to call it by the end of the first year of the Carter Administration, but by then the United States would have had to unload all the accumulated baggage of years of policy--at great cost not only to immediate relations with Tehran but with other monarchs as well--to distance itself far enough from the Pahlevis to keep the later revolutionary government from fastening on Washington as its main enemy.
Doing better at this type of prediction is certainly not impossible. Intelligence in regard to the Philippine revolution of 1986 was quite good and timely. It is in the nature of most revolutionary social, economic, or political change, however, that underlying causes usually fester for years or even generations until something catalyzes them. At that point things move too fast for better intelligence to give US leaders usable strategic warning.
Depending on whether one cites the opening of the Berlin Wall or the crackup of the Soviet Union as the crucial date, the Cold War has been over for seven to nine years--a long time in either case. It is getting to sound suspicious when we still hear from pundits that we are in a period of transition, and the shape of the post-Cold War world is not yet apparent. For better or worse, however, it is true. No simple concept or consensus on the character of world politics has emerged to replace containment as the basic organizing principle for US foreign policy. The one fact that is clear is that there is no global threat to US interests on the scale of the old Soviet Union's military power or the old ideological challenge from transnational Marxism-Leninism.
There are clear problems to fill the plates for warning officers--indeed a dizzying raft of them--but nothing simple that points obviously to where to focus the effort. Iraq, Iran, North Korea, Kosovo? Russian nationalist groups, the Chinese economy, King Sihanouk's health, Somali clan alliances, Macedonia? Instead we have too many potential sources of limited danger to US interests, and no established consensus on what overarching US interests are. The intelligence community does an admirable job of trying to set priorities in this jumble of problems, but there is yet no stable basis on which to provide enduring guidance to professionals in the warning business. We expect the community to provide policymakers with a day-to-day "heads up" on the latest crisis wherever it may burst out, and to watch everything else and keep them posted. That is both too much and not enough.
The organization and concentrations of skills amassed in over 40 years of focusing on communist states and their military power could not simply be switched over to the new world. In the new world there has been no market for the legions of specialists who lived and breathed subjects like the order of battle of the Polish army. Some have been retrained and redirected, and some functional skills are regionally transferable, but massive reorientation has been required, at the same time that resources to support the intelligence infrastructure have been shrinking.
However the post-Cold War shakeout finishes, factual-technical warning will remain the bread and butter, the first priority, of the business. Whether analysis is good or not, many policymakers will care less about it than they do about collection. They may even resent it, seeing it as naive speculation by junior bureaucrats that wastes their time. They will never, though, say that they do not want more hard facts than they can get from The New York Times or CNN. They know quite well that even if they think journalists are as revealing and enlightening as CIA case officers, CNN is not everywhere (contrary to mythology). There are also crucial technical data that the Times could never get, which only expensive overhead photography, communications intercepts, and codebreakers can provide.
In a world with many moderate and murky threats rather than one big and clear one, however, it will become harder to view as many warning objectives in terms of the Cuban missile crisis or Midway models. What were always the tougher challenges for warning, but could be considered secondary in wartime or the Cold War, are the contingent-political eruptions in all sorts of small countries. The simplest inferences from this are that the United States needs to cultivate more expertise on the new trouble spots, and to put more effort into human intelligence.
This is not a simple adjustment. For one thing, not everyone agrees. The Clinton Administration put heavy emphasis on support to military operations as the priority of the intelligence community. This is a bit ironic in the post-Cold War era when conventional military threats to the United States are smaller than in any period since the 1920s, but it compromises the notion that resources should be reallocated from technical collection to human intelligence. That question aside, what should really be expected even if the United States does cultivate better collection assets for warning?
For some purposes, factual-technical observation and alerting will be as crucial as ever--whenever Baghdad, Tehran, Pyongyang, or any of a number of other sensitive states mobilizes its forces to a war footing, or tests a nuclear weapon, the President will want to know immediately. Otherwise, however, the President and policymaking principals--and not least the military leadership that will be charged with implementing a US reaction--will want to know in advance about small crises that could pose demands for US intervention, logistical support, or other action.
These will be close to impossible to predict with any certainty much before the fact, soon enough to increase US leverage and options. If the warning system is therefore to focus on alerting at an early stage, the result will be a lot of apparently false alarms. Beyond factual-technical warning, where the indicators of capability may be directly observed, cases where contingent-political warning is at issue are ones where only potential problems can be identified long before they erupt. Most of those potential problems take years or decades to come to a boil, yet there are untold numbers of them. That means that there will seldom be any way for long-term warning to induce policymakers to make a commitment to early treatment of the problem, except in the rare cases where the way to do it is both clear and cheap.
For factual-technical warning, the problem is clarifying the agenda of regional problems enough to know where to focus extra collection efforts in the near term, how to patch gaping holes in collection capabilities: How many Tajik speakers can we mobilize? How can we keep up with human sources in countries where budget cuts have closed our embassies or consulates? When new countries or groups in would-be countries are shooting each other up all over the place, where should we send our reconnaissance assets from day to day? And so forth. For contingent-political warning, the problem is how to call attention to nasty problems before they have broken into full-blown crises, without doing so too soon and inundating policymakers with too many problems to contemplate.
To avoid the latter problem, intelligence brokers or managers play a key role and a risky one. They, not working analysts, are the only ones positioned to sense the absorptive capacity of decisionmakers, and they bear the responsibility for selecting, packaging, and pushing analytic products in a manner designed to get the proper attention. In a world where the big threats are potential and the actual threats are small but numerous, intelligence managers face a delicate task in keeping consumers informed about the legion of messy contingent-political issues on the strategic menu, without overloading them. They also have to serve as a buffer to allow the experts below them to sound more alarms than ultimately prove warranted. Outside the factual-technical realm, it is impossible to get it just right in warning.
There are only two choices: cry wolf, warning too soon or too often; or hold your tongue, warning too seldom or too late. It is better to err in the former direction, but it is important to recognize that the consequence is still error, and politicians will not thank anyone for it. Worse than that, their receptivity to subsequent warnings will be dulled. They will be looking for someone's head to roll, however, if they face a surprise, unwarned, because of the opposite error.
The payoffs from factual-technical warnings are clear and welcome, while those from contingent-political warnings often are not. The resources needed to buttress the apparatus for contingent-political warning are also not trivial. The things most applicable to factual-technical warning of the deployment of unfriendly capabilities, however, are far more expensive than those most essential to contingent-political warning. The former are mainly expensive technical collection systems, the latter mainly educated analysts. As expensive as it may be to keep a few Tajik speakers on the shelf, it is far cheaper than even a small share of a satellite's flight time. Although it may be proper to place a higher priority on factual-technical warning, therefore, it could also be reasonable to reallocate a marginal amount of intelligence budgets, stretched as they already are, in the other direction. A shift in dollars that would be a very small portion of the resources for collection would represent a very large increment to the resources for analysis. Even if factual-technical warning remains a higher priority, this could be a reasonable tradeoff in terms of the ratio between costs and benefits.
Finally, it is not as clear as it might seem who the winners and losers within government would be from a decision to put a bit more emphasis on the "softer" intelligence questions of the contingent-political variety. The military naturally is most concerned with having all the information it needs to plan and mount operations, which implies a greater interest in factual-technical intelligence. At the end of the day, however, what matters most is where and when the military is called on to fight, which depends on understanding the mushy political dynamics of unfamiliar societies and conflicts. Would the American military have been worse off if we had spent a bit less on reconnaissance or eavesdropping over the years, if that had meant that we invested more in studying North Korean policy before June 1950, the sources of political conflict in Vietnam before the 1960s, or the psychology of Saddam Hussein before August 1990?
1. See Eliot Cohen and John Gooch, Military Misfortunes (New York: Free Press, 1990) for their critique of what they call, a bit inaccurately, the "no-fault school" of intelligence.
2. See various works by Michael I. Handel, Robert Jervis, Klaus Knorr, Barton Whaley, Harold Wilensky, James Wirtz, Roberta Wohlstetter, and myself. The most thorough survey of ideas in the dominant school is Ephraim Kam, Surprise Attack: The Victim's Perspective (Cambridge, Mass.: Harvard University Press, 1988).
3. The conceivable exception is where information only relates to possible opportunities for gains, not danger of losses. Even there, policymakers often want to be "warned." For example, the US intelligence bureaucracy has often been criticized for failing to predict the liberation of Eastern Europe and collapse of communism and the Soviet Union, although these were essentially unmitigated happy surprises. Even here the record is less clear than many assume; in some respects the intelligence community did warn very well about Soviet weakness. See Douglas J. MacEachin, "The Record Versus the Charges: CIA Assessments of the Soviet Union," Studies in Intelligence, Semiannual Unclassified Edition, No. 1 (1997); Bruce D. Berkowitz and Jeffrey T. Richelson, "The CIA Vindicated: The Soviet Collapse Was Predicted," National Interest, No. 41 (Fall 1995).
4. Ariel Levite, Intelligence and Strategic Surprises (New York: Columbia University Press, 1987).
5. Richard K. Betts, "Surprise, Scholasticism, and Strategy," International Studies Quarterly, 33 (September 1989); see Levite's reply in the same issue. See also Uri Bar-Joseph, "Methodological Magic," Intelligence and National Security, 3 (October 1988).
6. Klaus Knorr, "Failures in National Intelligence Estimates: The Case of the Cuban Missiles," World Politics, 16 (April 1964). Knorr served as a consultant to the Board of National Estimates and participated in the postmortem. For a large sample of declassified documentation on the case see Mary S. McAuliffe, ed., CIA Documents on the Cuban Missile Crisis, 1962 (Washington: History Staff, Central Intelligence Agency, October 1992).
7. See US Senate Select Committee on Intelligence, Staff Report: U.S. Intelligence and the Oil Issue, 1973-1974, 95th Cong., 1st sess., 1977, pp. 3-4.
8. See US House Permanent Select Committee on Intelligence, Staff Report: Iran: Evaluation of U.S. Intelligence Performance Prior to November 1978, Committee Print, January 1979.
9. William E. Kline, "The Fall of Marcos," in Intelligence and Policy Project: Casebook (Cambridge, Mass.: Harvard University, Kennedy School of Government, 1991).
Richard K. Betts is Professor of Political Science and Director of the Institute of War and Peace Studies at Columbia University, and Director of National Security Studies at the Council on Foreign Relations. He served on the staff of the original Senate Select Committee on Intelligence and has been an occasional consultant in the intelligence community. Among his books are Military Readiness (Brookings, 1995), Soldiers, Statesmen, and Cold War Crises (Columbia Univ. Press, 1991), and Surprise Attack (Brookings, 1982). An earlier version of this article was delivered as a lecture at the National Defense University.
Reviewed 17 February 1998.