Tuesday, March 18, 2014

The Inevitability of Intelligence Failure: Sources, Locations and Causes



November 2013

Intelligence failure can be defined as ‘a misunderstanding of the situation that leads a government (or its military forces) to take actions that are inappropriate and counterproductive to its own interests’ (Schmitt & Shulsky, 2002: 62). Explorations of what could or should have been done differently form the basis for millions of pages of government investigations, newspaper articles, books, journals and entire academic careers.

A uniting factor among events considered intelligence failures is that they were, or came close to being, national disasters, whereas ‘the record of success is less striking because observers tend not to notice disasters that do not happen’ (Betts, 1978: 62). If one turns this definition of intelligence failure around, an ‘intelligence success’ occurs when a situation is understood properly, leading a government (or its military forces) to take actions appropriate and productive to its own interests. Concern with intelligence failures always far outstrips concern with intelligence successes, which may not become known until long after the event, if ever.

Intelligence failures are inevitable because there are limits to what intelligence can accomplish. Its proponents are often guilty of overselling its capabilities, and its ‘consumers’ and observers are guilty of misunderstanding them (Gill & Phythian, 2006: 104-5). This essay uses a framework based upon the work of Betts, Schmitt and Shulsky, and Gill and Phythian, among others, to outline the different ‘sources’ or ‘locations’ of intelligence failure, where they occur in the ‘Intelligence Cycle’, their ‘causes’ and other contributing factors. It then uses the 2003 Iraq WMD controversy as a case study to illustrate, through real-world examples, why intelligence failures are inevitable.

Sources & Causes
There are three general locations which can be identified as sources of intelligence failure: ‘Collection’, ‘Analysis’ and ‘Decision-Makers’ (Gill & Phythian, 2006: 103-4, citing Betts, 1978). These categories broadly separate the persons involved in the ‘Intelligence Cycle’ by their positions or functional roles within it. The CIA defines the Intelligence Cycle as the ‘process by which information is acquired, converted into intelligence, and made available to policymakers’ (Johnson & Wirtz, 2008: 49).
Collection

The first step in the Intelligence Cycle is to develop directions as to what information is necessary, known as ‘requirements’, and how it will be collected based upon the needs of the eventual consumers of the intelligence, usually senior policymakers (ibid.). Based upon this guidance, intelligence agencies then begin Collection, also known as ‘espionage’, which can be defined as, ‘the practice of using spies to collect information about what another government or company is doing or plans to do’ (Williams, 2011: 1165). Put simply, Collection involves obtaining the information or ‘raw intelligence’ which is required to meet Requirements developed at the first step in the Intelligence Cycle. Collection is performed by all means available to intelligence agencies, including human intelligence, signals intercepts, imagery and scientific and technical measurement, among others, and relates to one or more ‘types’ of intelligence, including military, political, economic or cultural intelligence (Johnson & Wirtz, 2008: 51).

Collection Failure
Collection Failure can be seen as the ‘unavailability of information when and where needed’ (Hatlebrekke & Smith, 2010: 151, citing Schmitt & Shulsky, 2002: 64-7) or as part of Betts’ ‘Pathologies of Communication’: the lack of timely collection of information (Betts, 1978: 62-3). Essentially, Collection Failure occurs because the information necessary to respond successfully to the situation was not collected, or was not collected when it was needed.

Inevitability?
Planning and directing intelligence requirements and Collection must still deal with the fact that, in the famous words of U.S. Defence Secretary Donald Rumsfeld, ‘There are no "knowns." There are things we know that we know. There are known unknowns. That is to say there are things that we now know we don't know. But there are also unknown unknowns. There are things we don't know we don't know’ (North Atlantic Treaty Organisation, 2002), a sentiment similar to one expressed earlier by the CIA’s Sherman Kent when he stated there are ‘Things which are knowable but happen to be unknown to us, and…things which are not known to anyone at all’ (Kent, 1964). Simply put, Collection Failure can occur when necessary information, known to exist and known to be needed, is unavailable due to some limitation. Or it can occur because the need for such information was never considered or was dismissed, or because there were no indicators that it needed to be gathered at all. Nonetheless, as Betts (1978: 61) states, ‘In the best-known cases of intelligence failure, the most crucial mistakes have seldom been made by collectors of raw information’.

Analysis
The third step in the Intelligence Cycle is the ‘processing’ of the collected information, followed by the fourth step, Analysis and the creation of intelligence ‘products’ such as reports, findings and estimates. This is followed by the fifth step, Dissemination: distributing the intelligence product to decision-makers in a proper format and in a timely manner (Johnson & Wirtz, 2008: 49).

The analysis stage begins with an evaluation of the truth of the collected material and the validity of the process which acquired it (Butler Panel, 2008: 527-8). For some forms, such as imagery and signals intelligence, this will be easy; for others, especially human intelligence (usually obtained second-hand from an informant by a case officer), it is more onerous to establish. The collecting officer may wish to have his source believed, giving rise to the need for an independent appraisal of the information (ibid.). Accuracy in reporting or quoting the information from the source to the analyst must be ensured. It must be checked that the source has actual access to the information or can give an acceptable explanation of how it was obtained. It must be considered whether the source has some ulterior motive for providing the information or is involved in a counter-intelligence operation. The source’s previous track record must also be considered (ibid.). If the veracity of the information is not thoroughly tested and established, anything that follows may be derived from false information: fruit of the poisonous tree.

Actual Analysis of the information, once its validity has been established, involves appraising the value of the information in its own right, deciding how much weight should be assigned to it and compiling it into ‘meaningful strands’ based upon what it relates to. These ‘strands’, compiled from all available sources of related information, are then used to develop ‘estimates’ (‘assessments’ in the UK) of a particular international situation or set of circumstances (ibid.: 528). They can be rapid, low-level appraisals or the premier product composed of the collective wisdom of the entire intelligence community, as with America’s National Intelligence Estimates (Johnson, 2008: 344). The product is then disseminated and/or briefed to decision-makers in an appropriate format.

Collecting information is difficult enough; deciding what should be done in light of it, and how it should be presented, creates even more problems. Sherman Kent, the ‘father figure of CIA analysis’, felt that estimates consist of ‘knowledge, reasoning and guesswork’ (Johnson, 2008: 344). The pitfalls are apparent: analysts must piece together accurate appraisals, predictions and/or advice from information of varying types and of contestable accuracy or value, often in a short amount of time, in order to develop an intelligence product. Estimates are used to develop national policy regarding situations vital to national security. If the information, analysis or estimates are wrong, or wrongly presented, the policy response will be wrong as well, with serious consequences (Gill & Phythian, 2006: 106-112).

Analysis Failure
Analysis Failure is a broader category. It includes the ‘tendency to concentrate on “usual suspects” for ideological or practical purposes’ (Gill & Phythian, 2006: 104), ‘opinion governed by “conventional wisdom” without supporting evidence’ and ‘“mirror imaging”, in which unfamiliar situations are judged upon the basis of the familiar’ (Hatlebrekke & Smith, 2010: 151, citing Schmitt & Shulsky, 2002: 64-67), as well as the ‘Paradox of Perception’: failure to properly balance pre-conceived notions based upon previous and historical experience against an unbiased look at the information, and failure to balance the sensitivity of warnings between insufficiency and alarmism (Betts, 1978: 62-3). It also includes failure to communicate effectively with Decision-Makers, Betts’ other entry under ‘Pathologies of Communication’ (ibid.).

In sum, Analysis Failure can be located in the Processing, Analysis and Production, or Dissemination stages of the Intelligence Cycle. Essentially, Analysis Failure occurs because information is not properly validated or is improperly dismissed; because the wrong degree of emphasis is placed on collected information; because opinion developed from collected information is skewed by practical, ideological or some other cognitive bias; or because intelligence products do not effectively communicate the collected information to Decision-Makers, whether through an ineffective portrayal of that information or untimely dissemination.

Inevitability?
Writing in defence of the wrongly concluded 1962 NIE on the likelihood of Soviet nuclear missiles being stationed in Cuba, Sherman Kent (1964) explained the process of estimating thus:

“If NIEs could be confined to statements of indisputable fact the task would be safe and easy. Of course the result could not then be called an estimate. By definition, estimating is an excursion out beyond established fact into the unknown – a venture in which the estimator gets such aid and comfort as he can from analogy, extrapolation, logic, and judgment. In the nature of things he will upon occasion end up with a conclusion which time will prove to be wrong. To recognize this as inevitable does not mean that we estimators are reconciled to our inadequacy; it only means we fully realize that we are engaged in a hazardous occupation.”

If the circumstances for which Decision-Makers require intelligence were unambiguous, then intelligence services could hand them Kent’s ‘indisputable facts’ and leave them to it. However, there are few circumstances in international politics today which possess such clarity that Decision-Makers, often elected officials with little or no security experience, can decide on their own from ‘raw intelligence’. Today the problem may be too much information, rather than not enough, when one considers the vast amounts of imagery and signals information collected, processed and analysed (Irwin, 2012). Intelligence analysis remains a necessity.

Schmitt and Shulsky (2002: 72) argue that ‘the heart of the problem of intelligence failure [is] the thought processes of the individual analyst.’ Attempts to counteract the human element in analysis through systemic or procedural reforms, rather than fixing these flaws, may actually breed overconfidence afterwards through a belief that the problem has been solved and will not recur (Betts, 1978: 61). History shows, however, that the same mistakes continue to be made. At bottom, intelligence analysis is a human process and will always be as flawed as the human beings conducting it. So long as there are ambiguous situations requiring, as Kent (1964) puts it, ‘guesswork’, there will inevitably be Analysis Failures.
Decision-Makers

In the final step of the Intelligence Cycle, the disseminated product is received by its ‘consumers’, who inform themselves of the intelligence. They may have further questions or desire more information. This leads to the development of new Requirements, and the Intelligence Cycle begins anew (Johnson & Wirtz, 2008: 49).

Decision-Makers, a mix of high-level elected officials, executive appointees and military officials, then actually use the intelligence produced to inform their decisions. Making decisions vital to protecting a nation’s security is inherently difficult, with or without accurate intelligence. Historically, intelligence failures are most frequently located with Decision-Makers (Betts, 1978: 62-3). Policymakers often commit one of Johnson’s ‘seven sins of strategic intelligence’: ignoring intelligence that does not conform to their view of a situation (Johnson, 1983: 182-4). If intelligence is misunderstood or ignored, the policy Decision-Makers pursue may lead to an intelligence failure.

Decision-Maker Failure
‘Decision-Maker Failure’ is the ‘subordination of Intelligence to policy’ (Schmitt & Shulsky, 2002: 64-67; Hatlebrekke & Smith, 2010: 151) and may influence the initial Planning and Direction stage of the Intelligence Cycle or be exhibited in how, or whether, Decision-Makers use intelligence to make policy. It also occurs when pressure from Decision-Makers is brought to bear on those involved in analysis at the Processing, Analysis and Production, or Dissemination stages. According to Betts (1978: 61), ‘In the best-known cases of intelligence failure, the most crucial mistakes have seldom been made by collectors of raw information, occasionally by professionals who produce finished analyses, but most often by the decision-makers who consume the products of intelligence services.’

Decision-Maker Failure often occurs as a result of ‘Politicisation’, one of the central problems of intelligence. In the words of Robert M. Gates as Director of Central Intelligence (Gates, 1992):

“Politicization can manifest itself in many ways, but in each case it boils down to the same essential elements: almost all agree that it involves deliberately distorting analysis or judgments to favor a preferred line of thinking irrespective of evidence. Most consider ‘classic’ politicization to be only that which occurs if products are forced to conform to policymakers’ views. A number believe politicization also results from management pressures to define and drive certain lines of analysis and substantive viewpoints. Still others believe that changes in tone or emphasis made during the normal review or coordination process, and limited means for expressing alternative viewpoints, also constitute forms of politicization.”

Essentially, Decision-Maker Failure most often occurs because of some form of ‘politicisation’ by Decision-Makers, whereby intelligence is deliberately distorted, ignored or selectively applied.

Inevitability?
According to Johnson (1983: 182), ‘No shortcoming of strategic intelligence is more often cited than the self-delusion of policymakers who brush aside – or bend – facts that fail to conform with their Weltanschauung.’ In Western democracies, Decision-Makers are either elected officials or senior civil servants influenced by elected officials. Politics is their job and it affects all aspects of it, including making security decisions based upon intelligence. Politics affects every form of government, from totalitarian dictatorships to communist collectives. As von Clausewitz is often quoted, ‘War is the continuation of politics by other means.’ Attempting to remove politics entirely from war and national security is an impossible task. As stated above, Decision-Maker Failure through politicisation is the most common reason for intelligence failure. So long as politics is involved in security decisions, there will inevitably be intelligence failures.


Other Factors

Systemic Factors
Systemic Factors which may lead to intelligence failure include internal bureaucratic obstacles and failure to share information or cooperate with other agencies (Gill & Phythian, 2006: 104). Garicano and Posner (2005: 159) describe this as a ‘lack of prompt and full sharing of intelligence information within intelligence agencies, between different agencies, and between federal, state and local government levels.’ These Systemic Factors may occur at any phase of the Intelligence Cycle and lie within Collection, Analysis or with Decision-Makers. If Collection resources or collected information are not shared, the information cannot be properly analysed and will not make its way to Decision-Makers, leading to intelligence failure.

External Factors
‘External Factors’ are always present. They include the ‘intrinsic difficulty in identifying targets’ and the fact that states or organisations which are or may be the target of intelligence agencies cannot be expected to remain passive in the face of intelligence operations against them (Gill & Phythian, 2006: 104-5). Simply put, some forms of information are difficult or impossible to collect, or, as Kent (1964) puts it, ‘Something literally unknowable by any man alive.’ Knowing the intent of an adversary before he has even formed it is an example of this impossibility. An intelligence failure for one state is often the result of an intelligence success by another. External Factors most often affect Collection, but they may also affect Analysis and Decision-Makers at any stage of the Intelligence Cycle, especially if the particular intelligence failure results from the active effort of a foreign intelligence service, such as infiltration, double agents, strategic deception or a counter-intelligence operation.

Case Study: Iraq WMD
The false belief and assertion by the U.S. and UK governments that Saddam Hussein’s Iraq possessed WMD capabilities in the lead-up to the 2003 Iraq War is an instructive case which illustrates each of the different causes of intelligence failure. Some examples are explored here, though more could be cited.

Collection Failure
According to Morrison (2011: 520), one of the reasons for the intelligence failure relating to the existence of Iraq’s WMD capabilities was a failure to set a Requirement for the Collection of political intelligence that would place technical information on Saddam’s WMD programs in context. Collection was instead focused on gathering information about the existence of the programs themselves. This led to a failure to consider the question in light of Saddam’s ‘political system, fears, and intentions’ (Jervis, 2006: 41). Collecting such information could have offered an explanation as to why Hussein would refuse to disavow WMD programs and cooperate with UN inspections despite not having active WMD programs. Setting this Requirement might have balanced the dominant presumption that Iraq had WMD, the justification for the war which turned out to be false.

Analysis Failure
UK and U.S. analysis of Iraq’s WMD relied heavily upon unreliable human intelligence sources which were not properly vetted and which, in some cases, have since proven to be fabrications. Information was also being quickly disseminated to policymakers without being properly validated or analysed first (Morrison, 2011: 520). In his study of the major U.S. and UK post-mortems on Iraq, Jervis (2006) cites many examples of Analysis Failure: the ‘ICs’ judgments were stated with excessive certainty’ (ibid.: 14), ‘no general alternative explanations for Saddam’s behavior were offered’ (ibid.: 15), a ‘lack of imagination’ in developing such alternative views (ibid.: 17), a ‘failure to challenge assumptions’ regarding the existence of WMD programs, and a situation ‘whereby assessments were based on previous judgments without carrying forward the uncertainties’ (ibid.: 22), among others. These examples illustrate just some of the places where analysis went wrong.

Decision-Maker Failure
The case of Iraq’s WMD provides a clear example of the politicisation of intelligence. The Pentagon’s Office of Special Plans (OSP) was specifically established by the Bush administration to develop assessments based upon the assumption that Iraq had WMD, and its products were used in preference to other Intelligence Community assessments that contradicted that hypothesis (Ryan, 2006: 304-5). A ‘Red Team’ analysis conducted by CIA’s WINPAC, designed only as an initial ‘devil’s advocate’ view in the face of evidence that Iraq did not have WMD, was also selectively used by the administration despite strong evidence from the rest of the Intelligence Community that its conclusion was wrong. Assessments by agencies such as the U.S. Department of Energy and the State Department’s Bureau of Intelligence and Research provided strong, clear arguments against the central argument made in the assessments preferred by the Bush administration, but they were pushed aside by Decision-Makers (Conway, 2012: 490-1).

Systemic Factors
Garicano and Posner (2005: 149) point out the WMD Commission’s finding that there was not enough cooperation or information sharing between agencies, specifically in regard to reports that called into question the credibility of the information provided by the source ‘Curveball’, whose assertion that he had been involved in active WMD programs in Iraq became a central piece of evidence in the Bush administration’s push for war. Curveball (Rafid al-Janabi) has since admitted his evidence was a fabrication (NBC News, 2011). If more than one agency had had access to these reports, more questions might have been asked. However, bureaucratic obstacles and security standards stood in the way. There was also a lack of information sharing between U.S. intelligence and law enforcement agencies, seen as having separate functions, which may have led to vital information not being shared and considered (Cilluffo et al, 2002: 70-1).

External Factors
Saddam Hussein and his regime were not passive players in their fate. There was a systematic denial and deception campaign by the regime, and its failure to cooperate with, and eventual ejection of, UN inspectors supported the belief that Iraq had something to hide regarding its WMD programs. Wherever evidence was not available to prove or disprove hypotheses, Saddam and his regime could be blamed for blocking attempts to collect the necessary intelligence (Jervis, 2006: 27-8). In his debriefing by the FBI, Saddam admitted that he wanted to maintain the façade of possessing WMD to counter enemies, especially Iran (Federal Bureau of Investigation, 2008). Of course, had Saddam been open regarding WMD and cooperated with UN inspections, the likelihood of war would have been greatly reduced. Intelligence is ‘a game between hiders and finders, and the former usually have the easier job’ (Jervis, 2006: 11).

Conclusion
As Gill and Phythian (2006: 105) point out, ‘the limits of intelligence dictate that intelligence failure is inevitable. Partly, this is a consequence of the impossibility of perfect predictive success; partly it is a consequence of decision-makers’ (politicians with regard to states) natural tendency to err on the side of caution by subscribing to worst-case scenarios, or to simply ignore intelligence that does not fit their own preferences.’ As discussed, there are other locations within the Intelligence Cycle where things can go wrong, and other possible causes and factors leading to intelligence failure. The debacle of Iraq’s WMD provides many real-world illustrations.

Jervis (2006: 11) makes clear that ‘any specific instance of intelligence failure will, by definition, seem unusual, but the fact of the failure itself is quite ordinary’. If the result of the failure is a great national disaster, it will seem to be a great failure. However, factual examinations of how failures occur inevitably reveal common, ordinary mistakes by people and organisations, of a kind that also occurs outside the intelligence context. Jervis usefully compares the Iraq WMD investigations to enquiries into the 1986 Challenger shuttle disaster and abuse allegations in the Catholic Church (ibid.: 4-5). It is only the consequences of intelligence failures that make them appear much larger than they are: the stakes are simply much higher (Betts, 1978: 62). Despite all of the post-mortems conducted on intelligence failures, there is little evidence that their conclusions, or the reforms enacted as a result, have translated into an elimination or even a significant reduction of failures (ibid.).
These human failures, especially those of a political or cognitive nature, are impossible to eliminate for good. Intelligence failures are inevitable because intelligence seeks to make clear ambiguous, complex situations using information that may be difficult to obtain, validate and understand, whether through the resistance of the target or the nature of the information itself, and then to present that information in a digestible manner to Decision-Makers, who may do as they wish with the intelligence produced. Even minor errors at any point in the Intelligence Cycle may throw off the entire enterprise. The fallible and flawed nature of human beings attempting to assemble an unknown puzzle from an incomplete picture means mistakes will inevitably occur at some point in the process. There is an old military maxim: ‘what can go wrong, will go wrong.’ Betts (1978) and Gill and Phythian (2006: 105) believe we must accept that intelligence failures are inevitable. Indeed, given the nature of intelligence work and all that can go wrong, it would be more extraordinary if intelligence failures did not occur.





BIBLIOGRAPHY
Barrett, D. (2010) “Why Intelligence Failures Are (Still) Inevitable.” Diplomatic History 34 (1): pp. 207–213.

Betts, R. (1978) “Analysis, War, and Decision: Why Intelligence Failures Are Inevitable.” World Politics 31 (1): pp. 61–89.

———. (2002) “Fixing Intelligence.” Foreign Affairs 81 (1): pp.43–59.

———. (2007) “Two Faces of Intelligence Failure: September 11 and Iraq’s Missing WMD.” Political Science Quarterly 122 (4): pp. 585–606.

Butler Panel of Inquiry (2008) “The British Experience with Intelligence Failure.” In Intelligence and National Security: The Secret World of Spies, edited by Johnson, L. and Wirtz, J., 2nd ed., pp. 526–536. Oxford: Oxford University Press.

Cilluffo, F., Marks, R., and Salmoiraghi, G. (2002) “The Use and Limits of U.S. Intelligence.” Washington Quarterly 25 (1): pp. 61–74.

Conway, P. (2012) “Red Team: How the Neoconservatives Helped Cause the Iraq Intelligence Failure.” Intelligence and National Security 27 (4): pp. 488–512.

Federal Bureau of Investigation (2008) “Interviewing Saddam: FBI Agent Gets to the Truth.” Federal Bureau of Investigation – Stories. http://www.fbi.gov/news/stories/2008/january/piro012808 [Accessed 4 January 2014].

Garicano, L., and Posner, R. (2005) “Intelligence Failures: An Organizational Economics Perspective.” The Journal of Economic Perspectives 19 (4). http://www.jstor.org/stable/4134960 [Accessed 4 January 2014].

Gates, R. (1992) “Guarding Against Politicization.” Central Intelligence Agency. https://www.cia.gov/library/center-for-the-study-of-intelligence/kent-csi/volume-36-number-1/html/v36i1a01p_0001.htm [Accessed 5 January 2014].

Gill, P., and Phythian, M. (2006) “Why Does Intelligence Fail?” In Intelligence in an Insecure World, 1st ed., pp. 103–124. Cambridge: Polity.

Hatlebrekke, K., and Smith, M. (2010) “Towards a New Theory of Intelligence Failure? The Impact of Cognitive Closure and Discourse Failure.” Intelligence and National Security, 25 (2): pp. 147–182.

Irwin, S. (2012) “Too Much Information, Not Enough Intelligence.” National Defense Magazine. http://www.nationaldefensemagazine.org/archive/2012/May/Pages/TooMuchInformation,NotEnoughIntelligence.aspx [Accessed 6 January 2014].

Jackson, P., and Scott, L. (2004) “The Study of Intelligence in Theory and Practice.” Intelligence and National Security 19 (2): pp. 139–169.

Jervis, R. (2006) “Reports, Politics, and Intelligence Failures: The Case of Iraq.” Journal of Strategic Studies 29 (1): pp. 3–52.

Johnson, L. (1983) “Seven Sins of Strategic Intelligence.” World Affairs 146 (2): pp. 176–204.

———. (2008) “Glimpses into the Gems of American Intelligence: The President’s Daily Brief and the National Intelligence Estimate.” Intelligence and National Security 23 (3): pp. 333–370.

Johnson, L., and Wirtz, J. (2008) “Part II: Collection.” In Intelligence and National Security, 2nd ed., pp. 49–55. Oxford: Oxford University Press.

Kent, S. (1964) “A Crucial Estimate Relived.” Studies in Intelligence (Central Intelligence Agency). https://www.cia.gov/library/center-for-the-study-of-intelligence/kent-csi/vol8no2/html/v08i2a01p.htm [Accessed 4 January 2014].

Morrison, J. (2011) “British Intelligence Failures in Iraq.” Intelligence and National Security 26 (4). http://dx.doi.org/10.1080/02684527.2011.580604 [Accessed 6 January 2014].

NBC News (2011) “‘Curveball’: I Lied about WMD to Hasten Iraq War.” NBC News. http://www.nbcnews.com/id/41609536/ns/world_news-mideast_n_africa/t/curveball-i-lied-about-wmd-hasten-iraq-war/#.Us12Fp5_u8A [Accessed 5 January 2014].

North Atlantic Treaty Organisation (2002) “Press Conference by US Secretary of Defence, Donald Rumsfeld.” NATO HQ. http://www.nato.int/docu/speech/2002/s020606g.htm [Accessed 5 January 2014].

Renshon, J. (2009) “Assessing Capabilities in International Politics: Biased Overestimation and the Case of the Imaginary ‘Missile Gap.’” Journal of Strategic Studies 32 (1). http://dx.doi.org/10.1080/01402390802407475 [Accessed 5 January 2014].

Roman, P. (1995) “Strategic Bombers over the Missile Horizon.” Journal of Strategic Studies 18 (1): pp. 198–236.

Ryan, M. (2006) “Filling in the ‘Unknowns’: Hypothesis-Based Intelligence and the Rumsfeld Commission.” Intelligence and National Security 21 (2): pp. 286–315.

Schmitt, G., and Shulsky, A. (2002) Silent Warfare: Understanding the World of Intelligence. Washington, D.C.: Brassey’s.

Williams, R. (2011) “(Spy) Game Change: Cyber Networks, Intelligence Collection, and Covert Action.” George Washington Law Review 79 (4): pp. 1162–1200.
