Lost for Words: The Intelligence Community’s Struggle to Find its Voice

JOSH KERBEL

© 2008 Josh Kerbel

From Parameters, Summer 2008, pp. 102-112.


In the wake of the 9/11 attacks and the Iraq intervention, most of the national security components of the US government have had some—mostly overdue—introspective moments. Such reviews can only be considered healthy. For as Sun Tzu, the Chinese military and intelligence theorist, said, “Know the enemy and know yourself; in a hundred battles you will never be in peril.”1 The fact is, however, that many of those governmental components did not necessarily like what they saw looking back at them from the mirror. This was particularly true of the intelligence community, which found its own self-identity issues staring back with unnerving intensity.

To be blunt, the intelligence community, which for the purposes of this article refers mainly to the analytic component, still does not “know itself.” That is to say, 60-plus years after its creation as a “community”—a reminder that this identity crisis is not solely the product of post-9/11 and Iraq soul-searching—America’s intelligence analysts still cannot agree on an answer to that most fundamental question of analytic identity: What exactly is intelligence analysis?

Quite possibly, this analytic identity crisis has been summarized best in writing by the intelligence community itself. In 2005, the Central Intelligence Agency’s Center for the Study of Intelligence published an unclassified ethnographic study of the community’s analytic component which, based on hundreds of interviews with analysts and countless hours watching them work, found that “heterogeneous descriptions and definitions of intelligence analysis as a professional discipline were consistent findings.” Consequently, the study went on to conclude, there still “needs to be a clear articulation and dissemination of the identity and epistemology of intelligence analysis.”2

Art or Science?

In terms of overall analytic identity, perhaps no question is more fundamental or divisive than whether intelligence analysis is art or science. On one side of this debate is the “analysis as science” school of thought, whose adherents favor a less individualistic or idiosyncratic and more “rigorous” approach to analysis. On the other side of the divide are the “analysis as art” adherents, who argue for an analytic approach that places greater value on experience, intuition, and “feel” than on some artificially sterile scientific approach.

For the science adherents, perhaps the most persuasive outlet so far has been the 2005 CIA study which meticulously examined not only how the community came to perceive analysis as art, but also what intelligence agencies might do to make it more of a science. That study argues that the notion of analysis as art is deeply rooted in the concept of tradecraft, which is defined as “practiced skill in a trade or art.” It elaborates by explaining that in interviews, “analysts, managers, instructors, and academic researchers employed the word ‘tradecraft’ as a catchall for the often-idiosyncratic methods and techniques required to perform analysis.” Moreover, the study asserts that while the term might be appropriate for describing the activities of the operational side of the intelligence community, “the analytic community’s adoption of the concept to describe analysis and analytic methods is not [appropriate]. The obvious logical flaw with adopting the idea of tradecraft as a standard of practice for analytic methodology is that, ultimately, analysis is neither craft nor art.” To the contrary, the study contends that analysis is—or at least should be—“part of a scientific process.”3

The CIA study is not alone in its assessment. Putting a vivid exclamation point on the debate, an article in the journal Survival asserts that “putative CIA tradecraft . . . promotes the cultivation of a kind of ‘Pinball Wizard,’ the deaf, dumb, and blind kid from the rock opera Tommy, who instinctively avoids distractions, plays by intuition, and always achieves success.” The article goes on to argue that “boosting analytical effectiveness requires more than the serendipitous cultivation of analytical wizards, whose skills and methods are rarely if ever subjected to testing, validation, and broader organizational application.”4

Clearly disturbed by this unscientific approach to analysis, the CIA study argues that “intelligence analysis can be reconstructed in the context of a scientific method, which is merely an articulated formal process by which scientists, collectively and over time, endeavor to put together a reliable, consistent, and nonarbitrary representation of some phenomena.” Moreover, the study asserts that “the data collected through both interviews and observation indicated that there were, in fact, general methods that could be formalized and that this process would then lead to the development of intelligence analysis as a scientific discipline.” That said, however, the study also notes that “the idea that intelligence analysis is a collection of scientific methods encounters some resistance in the intelligence community.”5

Adherents of the “analysis as art” school of thought have also been active in the debate. One notable New York Times op-ed, widely circulated and discussed within the intelligence community, argued that in a misguided effort to be scientific, the intelligence community—as exemplified by the CIA—has overreached into the realm of scientism. More specifically, the article argued that this scientism emerged from a fashionable postwar belief that “human affairs could be understood scientifically, and that the social sciences could come to resemble hard sciences like physics.” It went on to lament that even some five decades later, one can still sense how this scientism “has factored out all those insights that may be the product of an individual’s intuition and imagination.”6 The New York Times is not alone in its lament. A Washington Times column that also received extensive distribution and discussion in the intelligence community argued that “producing useful, useable intelligence is an art . . . a grand exercise in data interpretation, pattern recognition, and intuition.”7

Interestingly, unlike the science adherents who seem almost uniformly inclined to blame the intelligence community itself, the art adherents appear more divided on who is to blame. For instance, some seem inclined to place the blame for “false scientism” on the community, especially via the pernicious influence of the CIA’s “father of analysis,” Sherman Kent. Others, however, apparently feel that policymakers must bear much responsibility. Again, the column in The Washington Times asserted that “[i]t seems very few leaders understand that [intelligence is an art—not science].”8 Consequently, this line of thinking goes, policymakers expect and demand analyses characterized by a degree of precision and certainty that only a science could provide.

Undoubtedly, the issue of blame is debatable. What is not debatable is that the notion of “analysis as art,” like the notion of “analysis as science,” meets considerable resistance from the ranks of analysts themselves. For evidence of this, one need only read the comments engendered by the posting of The New York Times op-ed on one internal analyst discussion board: “Gibberish,” “A rant,” “[The author] just doesn’t understand what we do.”

Alloying Analysis

Notwithstanding the ambivalence of intelligence analysts, both of these perspectives have real merit. To be fair, most adherents of a particular perspective will accept that the question is not a zero-sum, all-or-nothing issue. Rather, what they are really advocating is an analytic approach that—if not dominated by their preferred perspective—at least tempers the perceived excesses of the other. In other words, most advocates of a particular perspective will usually acknowledge, if only begrudgingly, that intelligence analysis is truly a matter of complements, with the real question being one of relative weight.

The necessity for such a balanced perspective was perhaps most articulately acknowledged by the presidential commission investigating intelligence related to Iraqi weapons of mass destruction. Interestingly, however, rather than lament an imbalance in the proportion of art to science in community analyses of Iraq, the commission instead regretted the fundamentally poor application of each perspective. Thus, with regard to the scientific school’s argument for a more formalized and rigorous analytic process, the commission’s report agreed when it found “the (2002 Iraq National Intelligence Estimate [NIE]) fully met the standards for analysis that the community had set for itself. That is the problem.” On the other hand, however, the commission’s report also agreed with the art advocates when it concluded that the 2002 NIE “displayed a lack of imagination” that precluded the asking of “the questions that could have led the intelligence community closer to the truth.”9 In sum, according to the commission, the problem was not so much an imbalance of perspectives but an across-the-board deficiency in practice.

Given this finding, it is clearly necessary for the analytic community to find a new conceptual model, one that raises the level at which both artistic and scientific approaches are applied while simultaneously blending them into a sort of complementary “alloy.” Ideally, this new model would integrate art and science yet forsake high-art and hard-science pretensions. Admittedly, this formula may prove a difficult mix to create. Only by formulating it, however, will the intelligence community find the analytical “sweet spot” that resides somewhere between the prevailing perceptions, which are antagonistic (art or science) on the one hand and alchemic (wizardry and scientism) on the other.


A Better Model

One such alloyed model that has been proposed is a medical one, since intelligence analysis and medical diagnosis are similar in many ways.10 For example, both intelligence analysts and medical doctors are confronted with problem sets—the international system and living systems, respectively—that are highly dynamic and uncertain. Analysts and doctors also follow cyclical procedures that, while differing in specific terminology (collection versus testing, analysis versus diagnosis, dissemination versus prognosis), are fundamentally similar in their details. For the purposes of this article, however, perhaps the greatest similarity is that both intelligence analysis and medicine—done well—require their practitioners to blend art and science.

At present, the medical community appears much more accepting of this need for balance than the intelligence community. There is almost universal acceptance amongst doctors, whether general practitioner or specialist, that the practice of medicine is both art and science. One practicing physician who also is a student of medical intelligence has noted, “While much of clinical medicine is firmly grounded in basic science research, there is a substantial practical component to medical practice which cannot be found in any textbook, and is instead passed down from attending physicians to resident physicians to medical students.”11 This, of course, is not to say that the medical community does not continue to fight over this—it does—as the increasingly vocal “evidence-based” movement, which was originally known as the “science-based” movement, makes abundantly clear. That fight, however, is largely one about the relative weighting each approach should get—not the need for a blend in the first place.

In contrast, the intelligence community continues to wrestle with the fundamental need for both perspectives, never mind what the proper balance between them should be. For evidence of this, think back to the resistance from analysts to both the “analysis as art” and “analysis as science” arguments presented earlier in this article. If that is not deemed sufficient evidence, one might consider the extreme swings in managerial emphasis—between the imperative for generalists (with a synthetic macro-perspective that values the ability to connect the proverbial “dots”) and the imperative for experts (with a more analytic micro-perspective that values mastery of a specific “account”)—that periodically sweep across the analytic community. Ideally, the intelligence community would view these distinct perspectives in a highly complementary light, much as the medical community has with its embrace of both the general practitioner and the specialist. Alas, the intelligence community—particularly the line analysts, as compared to the analytical methodologists—continues to bicker over the very need for a mixed approach, a quarrel that precludes the discussion from ever reaching the real issue of the proper mixture.


This is where the adoption of a medical model could really help the intelligence community. The need for an appropriate art and science blend, at least in medicine, is a notion that resonates strongly, if unconsciously, with most people—including intelligence analysts. After all, most people when choosing a doctor tend to look for one who is not only familiar with the “basic science,” but is also in possession of the “practical component” that comes with experience and intuition. Consequently, by modeling the practice of intelligence analysis on the practice of medicine it may be possible to use that unconscious resonance as a means of fostering a similar desire for a balance of art and science amongst analysts.

Finding the “Right” Words

Recognition of the powerful analogy between medicine and intelligence analysis is not new. Historian Walter Laqueur wrote about it more than 20 years ago, and it has been a thin but enduring theme in the literature of intelligence ever since.12 What has not been sufficiently addressed in that literature is the need for more than just a useful analogy. More specifically, what is now required is much closer attention to the linguistic aspects of the analogy: the metaphors.

At a fundamental level, metaphors are models.13 That is to say, they are much more than mere “rhetorical flourish—a matter of extraordinary rather than ordinary language.”14 Rather, “our conceptual system [i.e., the way one defines everyday reality] is largely metaphorical.”15 Consequently, metaphors fundamentally “structure how we perceive, how we think, and what we do.”16

Given that the metaphors analysts use directly reflect and reinforce their thinking, metaphors are key focal points in any effort to examine analytic mindsets and subsequently formulate a cohesive analytic identity. This is a point that, while not entirely lost on the intelligence community—like the need for an art and science balance—is more readily recognized by the analytic methodologists than by the line analysts. For evidence of this, one need only consult the CIA study on analytic culture—written by an anthropologist, not an analyst—which noted that “language is a key variable in anthropology and often reveals a great deal about the cognition and culture of a community of interest. The adoption by members of the analytic community of an inappropriate [operational] term [i.e., tradecraft] for the processes and methods employed in their professional lives obfuscates and complicates the reality of their work.”17

Despite this acknowledgement, the fact remains that the predominant linguistic metaphor for intelligence analysis, like that for the larger national security debate of which it is a part, is essentially an unrealistic one. That is to say, it is a mechanical metaphor built upon such terms and concepts as tension, inertia, momentum, leverage, and trajectory that unrealistically portray the international system as a sort of machine that behaves linearly and is thus fully understandable, predictable, and certain. The truth, however, is that the international system is simply not a machine. Rather, it is an organism, in that it is made up of “living” beings (people, states, etc.) that learn, change, and adapt to changing circumstances, which machines, of course, do not.

To accurately describe and think about such an organism in a way that captures, or at least accepts, the uncertainty that is inherent in its behavior, it is necessary to employ a more realistic nonlinear metaphor. In this case, that would mean a biological one, or more specifically a medical one—using terms such as susceptibility, symptomatic, ripeness, side effects, etc.—that is well-suited to describing an organic problem set. In sum, if intelligence analysts are to start thinking more biologically than mechanically, they need to start communicating more like physicians than the physicists they have long tended to mimic.

Ultimately, it is vitally important that the intelligence community, when considering language, begin to focus on the metaphorical aspects vice the stylistic aspects it has traditionally tended to emphasize. In particular, for too long when the intelligence community has talked about precision of language it has really meant concision—the quest to say things with ever fewer words and more “white space.” What the intelligence community truly needs to appreciate is that precision of language means using the actual words (even if it means more of them) that accurately reflect and reinforce how it conceptualizes its subject matter and, by extension, itself.

A Hard Pill to Swallow

For the intelligence community, the linear mechanical metaphor remains the dominant linguistic and consequently mental model; it is the default setting. This is not surprising considering the powerful historical experiences that have foisted it upon the community. First, and at a most general level, American culture—rooted as it is in western philosophical and intellectual tradition—remains saddled with the heavy weight of Newtonianism. Sir Isaac Newton’s legacy—one of pure science overflowing into alchemy (wizardry and scientism)—continues to fundamentally shape prevailing western perspectives of the universe and how it works.18 Newton may have credited his extraordinary vision to his “standing on the shoulders” of the scientific giants who preceded him, but the West has never managed to climb out from under him. Nowhere is this more manifestly evident than in the way American intelligence analysts talk, write, and think about the world.

At a second, more community-specific level, it is important to understand that the “unified” intelligence community’s formative experience was the relatively linear Cold War. As one former professor at the National War College noted, the Cold War was essentially a two-body problem and “two-body problems lie generally in the linear to mildly nonlinear range. In other words, the Cold War marked by the interaction of two world powers habituated participants to an essentially linear environment.”19 In turn, this history contributes to one of the community’s most vexing post-Cold War problems: how to provide adequate numbers of mentors versed in nonlinear thinking for the legions of new analysts when the pool of potential mentors is populated by senior analysts comfortable with highly linear perspectives.

Finally, if one adds to this mix the linear scientism exemplified by Sherman Kent, it is easy to see how the lexicon of linear reductionism—and the corresponding mindset that it, again, reflects and reinforces—is now so infused into the US national security and intelligence discussion as to seem beyond question. Indeed, it is rare to read an American article on foreign affairs, international relations, or national security—not just intelligence analyses—that does not employ mechanical terminology. Consequently, assertions that such terminology is somehow unsuitable are inevitably met with almost reflexive resistance.

Aligning Capabilities and Expectations

Given how thoroughly infused the mechanical metaphor is in the US national security and intelligence dialogue, the adoption of a new metaphor and commensurate mindset that accepts uncertainty cannot be accomplished by the intelligence community in a vacuum. Rather, it will require the complicity and cooperation of the community’s beneficiaries and benefactors (i.e., policymakers and the public) whose unrealistic expectations are also rooted in a linear metaphor and mindset. Consequently, any genuine effort in this vein will require a conscious process of education aimed at bringing the expectations of policymakers, the general public, and the intelligence community into alignment. In particular, all concerned parties need to come to a mutual understanding that it is simply impossible to expect the intelligence community to predict the behavior of nonlinear systems with certainty and precision, especially over long periods of time. Rather, what should be expected from the community are better models (allowing for uncertainty) for understanding and anticipating—but not predicting—the potential behavior of the complex systems it is tasked to watch. Presumably, policymakers should find significant value in this perspective. After all, as noted economist and complex systems theory pioneer Brian Arthur observed, “An awful lot of policymaking has to do with finding the appropriate metaphor. Conversely, bad policymaking almost always involves finding inappropriate metaphors.”20
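The practical force of this point is easy to demonstrate. What follows is a minimal sketch, not drawn from the article: it uses the logistic map, a textbook nonlinear system, as a stand-in for the kind of problem set analysts face. The parameter value and initial conditions are illustrative assumptions.

```python
# Hypothetical demonstration: two trajectories of a nonlinear system
# (the logistic map) that start one part in a billion apart.
r = 3.9  # growth parameter in the map's chaotic regime

def step(x: float) -> float:
    """One iteration of the logistic map: x -> r * x * (1 - x)."""
    return r * x * (1 - x)

x_a, x_b = 0.500000000, 0.500000001
for t in range(1, 51):
    x_a, x_b = step(x_a), step(x_b)
    if t % 10 == 0:
        # Early on the trajectories track closely (anticipation is
        # feasible); later they diverge completely (precise long-range
        # prediction is not).
        print(f"t={t:2d}  a={x_a:.6f}  b={x_b:.6f}  gap={abs(x_a - x_b):.2e}")
```

Run as written, the two trajectories agree closely for the first couple dozen iterations and bear no relation to each other by the fiftieth, which is precisely the distinction between anticipating the potential behavior of a complex system and predicting it.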

Given Arthur’s observation, it is not unreasonable to think that the adoption of a more biological metaphor might help in changing those expectations. For instance, no reasonable person expects a physician to predict with precision and certainty the details (time, severity, lingering impact, etc.) of a patient’s heart attack. Rather, the physician is expected to help the patient identify risk factors and conditions (heredity, eating habits, smoking, exercise, stress level, cholesterol level, etc.) that might potentially contribute to the onset of a heart attack (or other problems) and to help the patient formulate a proper mitigating response. In other words, the expectations are understood to be limited. At a fundamental level, it is the language of medicine, with its inherent uncertainty, that greatly contributes to those limited expectations. Moreover, that language directly contributes to a doctor’s credibility through its evident honesty and realism. Analysts, then, need to understand this approach and be as “linguistically true” with policymakers and the public as—ideally—physicians are with their patients. For only then will policymakers and the public come to accept that intelligence analysts are not miracle workers and that they do not have the proverbial crystal ball.

Of course, some will argue that it is not the intelligence community’s place to educate the public (after all, it is a secret community) or that it has no business telling policymakers (its bosses) what they should and should not expect. Rather, these voices argue that if policymakers (and the public) want certainty, the community can provide it—given sufficient (greater) resources, new analytical tools, and the like. Should the community adopt such a mentality, however, and consequently do nothing to disabuse policymakers and the public of their illusions, then it will be surrendering itself to fate. For it is then guaranteed that these unreasonable and unrealistic expectations will endure, that another surprise will occur at some point, and that a new round of debilitating recriminations will result. If a greater degree of openness, outreach, and candor—with both its customers and itself—can help the community avoid such a fate, it ought to actively seek those opportunities. A better metaphor is a good place to start.

From Ambivalence to Self-Awareness

Given what has been argued here, it might be possible to answer the fundamental question of analytic identity asked at the outset of this article—it is in fact both art and science. The fact that the community remains ambivalent suggests it does not like that answer and suspects customers will not like it either. After all, this is just the type of duality that is often difficult for an individual, never mind an entire community, to effectively reconcile. Nonetheless, there are several fundamental steps that the intelligence community—again, learning from the medical community—could take in conjunction with “metaphor reform” to better prepare the ground for growing a cohesive analytic identity.

First, the community can cultivate a more scientific and analytic perspective via an extensive training and education regimen focused on critical thinking. The ability to think critically is key to the provision of “better answers” and requires analysts—just as it does medical interns and residents—to master the systematic processing and analysis of evidence, such as is possible via so-called “structured analytic methods” (timeline-building, weighted ranking, analysis of competing hypotheses, etc.). Also worth considering is a requirement for analysts to explicate, for managers if not necessarily policymakers, the particular methodological approaches and thought processes they employed in formulating any given analysis. Too often, analysts approach their jobs in an entirely ad hoc fashion—the so-called pinball wizard approach, as it were—since most receive minimal training in, and face minimal requirements to employ, structured analytic techniques.
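To make the flavor of these structured analytic methods concrete, here is a minimal sketch of one of them, analysis of competing hypotheses, in which the analyst scores each piece of evidence against each hypothesis and keeps the hypothesis with the least inconsistent evidence rather than the one with the most confirmations. The hypotheses, evidence items, and scores are hypothetical illustrations, not drawn from the article.

```python
# Hypothetical analysis-of-competing-hypotheses (ACH) matrix.
# "C" = consistent with the hypothesis, "I" = inconsistent, "N" = neutral.
matrix = {
    "E1: increased troop movements":     {"H1": "C", "H2": "C", "H3": "I"},
    "E2: diplomatic overtures continue": {"H1": "I", "H2": "C", "H3": "C"},
    "E3: reserve units not mobilized":   {"H1": "I", "H2": "N", "H3": "C"},
}

def rank_hypotheses(matrix):
    """Rank hypotheses by how much evidence contradicts each one.

    ACH is refutation-driven: the surviving hypothesis is the one with
    the fewest inconsistencies, not the one with the most confirmations.
    """
    inconsistencies = {}
    for scores in matrix.values():
        for hypothesis, score in scores.items():
            inconsistencies.setdefault(hypothesis, 0)
            if score == "I":
                inconsistencies[hypothesis] += 1
    # Fewest inconsistencies first.
    return sorted(inconsistencies.items(), key=lambda kv: kv[1])

for hypothesis, count in rank_hypotheses(matrix):
    print(f"{hypothesis}: {count} inconsistent item(s)")
```

The point of requiring such explicit structure is exactly the one made above: it forces the analyst’s reasoning into a form that managers, if not policymakers, can inspect and critique.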

Additionally, the complementary artistic (creative) aspect of analysis, which is actually synthesis, must also be cultivated. One method for doing this would be to require senior analysts, or anyone aspiring to such a title, to mentor junior analysts in how to develop hypotheses (ask better questions) for testing. Structured synthetic methods for doing this, as distinct from structured analytic methods, include scenario development; brainstorming; modeling, gaming, and simulation; and red-teaming. Unfortunately, mentorship also remains a highly ad hoc community practice that needs to be both institutionalized and mandated. Quite simply, it should be made an absolute promotion requirement for more experienced analysts to systematically share their experience and intuition—their pattern-recognition and synthetic-thinking skills—with the burgeoning crop of junior analysts currently flooding the analytical ranks. In turn, senior analysts will benefit from exposure to fresh perspectives they otherwise might never consider. In many ways this process would mirror the medical community’s practice of having interns and residents learn and work under the supervision of senior physicians.
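And just as the analytic side can be given explicit structure, so can the synthetic side. Below is a minimal sketch of one of the structured synthetic methods named above, scenario development, done as a simple morphological cross of key drivers; the drivers and their possible states are hypothetical illustrations.

```python
# Hypothetical scenario development via a morphological cross:
# enumerate every combination of key-driver states, then let analysts
# flesh out, game, or red-team the interesting combinations.
from itertools import product

drivers = {
    "economy":   ["growth", "stagnation"],
    "regime":    ["stable", "fractured"],
    "neighbors": ["cooperative", "hostile"],
}

for states in product(*drivers.values()):
    scenario = ", ".join(f"{name}={state}" for name, state in zip(drivers, states))
    print(scenario)
```

The mechanical enumeration is trivial; the synthetic skill lies in choosing the drivers and in recognizing which of the resulting combinations are worth taking seriously, which is exactly the kind of judgment a mentor passes on.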

Beyond this complementary approach to the education of analysts, a similar approach to recruitment is also crucial. More specifically, analytic recruitment should explicitly emphasize attracting critical, analytic, and scientific thinkers as well as creative, synthetic, and artistic ones. Currently, the intelligence community appears more attuned to attracting the former, which should not be surprising since the prevailing recruitment terminology describes the job, like the problem sets, in almost exclusively analytic terms. If the community sincerely desires to inject a greater measure of synthetic capability into the analytical mix, it needs to use appropriate and accurate language to convey that objective. In other words, perhaps it is time for the community’s human capital components to begin recruiting with both “analytic/specialist” and “synthetic/practitioner” aptitudes and inclinations clearly in mind.

That last point brings us back to the fundamental importance of accurate language and metaphors to the analytic community’s effort to develop a cohesive analytic identity—to “know” itself. Again, the linguistic metaphors that one uses directly, if subconsciously, reflect and reinforce the underlying thought. Consequently, if the community continues to speak and write in exclusively analytic, reductionist, linear, and mechanical terms, it will continue to think almost exclusively in those terms as well. Moreover, expectations will continue to unproductively focus on “did the intelligence community get it right” versus “did the intelligence community usefully inform.” In sum, the old saying that “you are what you eat, drive, and wear . . .” is not quite true. The essential role that language plays in thinking means that “you are what you say.” Actions do not always speak “louder” than words . . . often it is the words that really do matter.

We conclude then by coming full circle. It is worth noting that Sun Tzu went on to say, “When you are ignorant of the enemy but know yourself, your chances of winning or losing are equal. If ignorant both of your enemy and of yourself, you are certain in every battle to be in peril.”21 Presumably, Sun Tzu left out the variation of knowing one’s enemy but not knowing oneself because he saw it for the impossibility that it is. This implied admonition should be of particular concern to the intelligence community, whose primary task is to help policymakers “know” others. For until the intelligence community “knows” itself it will not be able to reliably fulfill that fundamental mission.


NOTES

1. Sun Tzu, The Art of War, Samuel B. Griffith, trans. (Oxford, U.K.: Oxford Univ. Press, 1963), 84.

2. Rob Johnston, Analytic Culture in the U.S. Intelligence Community (Washington: Central Intelligence Agency, Center for the Study of Intelligence, 2005), 27.

3. Ibid., 17.

4. Dennis Gormley, “The Limits of Intelligence: Iraq’s Lessons,” Survival, 46 (Autumn 2004), 16.

5. Johnston, 19-20.

6. David Brooks, “The C.I.A.: Method and Madness,” The New York Times, 3 February 2004.

7. Austin Bay, “Fixing Intelligence,” The Washington Times, 9 December 2005.

8. Ibid.

9. Commission on the Intelligence Capabilities of the United States Regarding Weapons of Mass Destruction, Report to the President of the United States (Washington: The White House, 31 March 2005), 12-13.

10. Jonathan D. Clemente and Stephen Marrin, “Improving Intelligence Analysis by Looking to the Medical Profession,” International Journal of Intelligence and CounterIntelligence, 18 (January 2005), 708-16.

11. Stephen Marrin, “Intelligence Analysis: Turning a Craft into a Profession” (paper presented at the International Conference on Intelligence Analysis, McLean, Va., 4 May 2005), https://analysis.mitre.org/proceedings/Final_Papers_Files/97_Camera_Ready_Paper.pdf

12. Clemente and Marrin, 707.

13. Thomas Czerwinski, Coping with the Bounds: Speculations on Nonlinearity in Military Affairs (Washington: National Defense University, 1998), 64.

14. George Lakoff and Mark Johnson, Metaphors We Live By (Chicago: Univ. of Chicago Press, 1980), 3.

15. Ibid.

16. Ibid.

17. Johnston, 18.

18. “Alchemy,” Wikipedia, http://en.wikipedia.org/wiki/Alchemy.

19. Czerwinski, 9-10.

20. M. Mitchell Waldrop, Complexity: The Emerging Science at the Edge of Order and Chaos (New York: Simon and Schuster, 1992), 334.

21. Sun Tzu, 84.


Josh Kerbel is the Studies Coordinator in the Lessons Learned Center, Office of the Director of National Intelligence. Previously he was a senior intelligence analyst for the Navy and the Central Intelligence Agency. The views expressed in this article are his own and do not imply endorsement by the Office of the Director of National Intelligence or any other government agency.

