As competitive intelligence (CI) analysts, we’re almost always trying to find the answer to a question regarding some aspect of a major GOVCON business. While we can rarely answer the question definitively, we can usually provide likely answers and explanations based on available information. Whether or not you’re a CI analyst, this can be a very effective method to sharpen your analysis and, ultimately, your decision making.
For every question, there are many conflicting possible answers. We often lean on the Analysis of Competing Hypotheses (ACH) to help us resolve questions.
ACH forces an analyst to evaluate all potential hypotheses against a range of evidence. For this methodology, we use “hypothesis” to mean potential explanations or answers to a question, and “evidence” to mean available information, including facts, assumptions, rumors, and logical deductions (not necessarily a criminal justice definition of “evidence”). This eight-step process is based on findings from cognitive psychology, decision analysis, and the scientific method.
Using a matrix, we can compare each piece of evidence against each hypothesis, determining whether the evidence is consistent, inconsistent, or is not applicable to each possible answer. Figure 1 demonstrates a sample matrix used to help answer the question “Will Competitor X bid on an upcoming contract?”
Working through the available evidence and assessing how each piece relates to the hypotheses, we can start to determine the diagnosticity of each piece of evidence; in other words, we can identify the pieces of evidence that truly impact our evaluation of the relative likelihood of different hypotheses. If an item of evidence is rated consistent or inconsistent across all hypotheses, then it is not useful in helping us determine which hypotheses are more or less likely.
In practice, most pieces of evidence are not very diagnostic, and the determination of a likely hypothesis comes down to a few key pieces of evidence.
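The matrix and the diagnosticity filter described above can be sketched in a few lines of Python. The hypotheses and evidence labels below are hypothetical examples, loosely modeled on the Figure 1 question about Competitor X; "C", "I", and "N" stand for consistent, inconsistent, and not applicable.

```python
# Hedged sketch of an ACH matrix with a diagnosticity check.
# Ratings: "C" = consistent, "I" = inconsistent, "N" = not applicable.
# All hypothesis and evidence labels are hypothetical illustrations.
hypotheses = ["Yes, as prime", "Yes, as sub", "No bid"]
matrix = {
    "E1: Competitor X attended the industry day":          ["C", "C", "I"],
    "E2: Competitor X is hiring in the customer's region": ["C", "C", "I"],
    "E3: The recompete was publicly announced":            ["C", "C", "C"],
}

def is_diagnostic(ratings):
    """Evidence is diagnostic only if it distinguishes hypotheses,
    i.e. it is not rated the same way against every hypothesis."""
    return len(set(ratings)) > 1

# Keep only evidence that actually separates the hypotheses.
diagnostic = {e: r for e, r in matrix.items() if is_diagnostic(r)}
for e in matrix:
    if e not in diagnostic:
        print(f"Non-diagnostic, can be set aside: {e}")
```

Here E3 is consistent with every hypothesis, so it drops out of the comparison; only E1 and E2 help discriminate among the possible answers.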
After removing non-diagnostic evidence, we can evaluate each hypothesis as a whole against the body of evidence. We should proceed by attempting to reject the hypotheses, instead of trying to prove them true. We do this because no matter how much evidence supports a hypothesis, we cannot prove it is true as the same evidence may also support other hypotheses. Review Figure 1 again and note that both E1 and E2 support the two “Yes” hypotheses.
However, a single piece of evidence may be enough to cast doubt on or completely reject a hypothesis. After reviewing the hypotheses and evidence, the most likely hypothesis is the one with the least evidence refuting it, not the one with the most evidence supporting it.
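Scoring by rejection rather than confirmation can be expressed as a simple count of inconsistencies. This is a minimal sketch with hypothetical labels, not a substitute for an analyst's judgment:

```python
# Sketch of ACH scoring by rejection: rank hypotheses by how much
# evidence is INCONSISTENT with them, not how much supports them.
# Each evidence row holds one rating per hypothesis, in order.
hypotheses = ["Yes, as prime", "Yes, as sub", "No bid"]
matrix = {
    "E1": ["C", "C", "I"],
    "E2": ["C", "C", "I"],
    "E3": ["I", "C", "C"],
    "E4": ["I", "I", "C"],
}

# Count "I" ratings per hypothesis; the most likely hypothesis is the
# one with the FEWEST inconsistencies, per the rejection principle.
inconsistency = {
    h: sum(1 for ratings in matrix.values() if ratings[i] == "I")
    for i, h in enumerate(hypotheses)
}
ranked = sorted(inconsistency.items(), key=lambda kv: kv[1])
print(ranked)  # fewest inconsistencies first
```

In this toy matrix, "Yes, as sub" survives with a single inconsistency while the other two hypotheses each conflict with two pieces of evidence, so it would be the leading (though not proven) answer.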
The matrix will not give the analyst a definitive answer; it should reflect which factors an analyst believes to be influential in relation to the question’s likely answer. However, determining which hypothesis is most likely and “answering the question” is not the final step. The analyst should understand and comment on how sensitive their matrix is to changes in the evidence and set a future date to re-evaluate the matrix, determining how changes have affected the evidence and the likelihood of each hypothesis.
Helps avoid confirmation bias – This analysis starts by giving equal weight to every possible outcome, not just the one the analyst believes to be most likely, preventing hypotheses from being dismissed without receiving a fair assessment.
Highlights key pieces of evidence – These key pieces of evidence are those with high diagnosticity. The assessment of which hypothesis is most likely correct will likely hinge on these key, diagnostic pieces of evidence. If none of the evidence is diagnostic, then we know to collect additional evidence or dig deeper into the information we already possess.
Leaves an “audit trail” – ACH (and most analytic methodologies) provide an audit trail that supports an analyst’s conclusions. Anyone can review the work and determine how an analyst came to their answer. Audit trails are very useful in improving our analysis as, when we are wrong, we can re-evaluate our thought process, determine where we failed, and improve on that area in the future.
Open to manipulation – As with any process, this is “garbage in, garbage out.” If an analyst truly wants to prove a hypothesis correct, they can choose which evidence to include or exclude, thus altering the results. ACH should be used to test or challenge an analyst’s thinking, not to validate it.
Requires a degree of creativity and open-mindedness – For the hypotheses to truly be exclusive and exhaustive, analysts must consider all possibilities. This can be difficult due to pre-conceived notions or views on a subject. Performing this process in teams or having multiple analysts conduct the process on the same question can help provide different viewpoints and a full scope of hypotheses.
Does not provide an “answer” – Analysts must be cautious not to accept the matrix output as the gospel truth. ACH should be viewed as a tool in our toolbox, used to challenge our assumptions and increase the rigor of our analysis.
While this methodology will not guarantee the correct answer (if it did, I’d be boxing it up and selling it instead of writing a blog about it), it does provide a rigorous, rational process to work through complex questions. ACH helps you avoid traps common in analysis and decision making.
© FedSavvy Strategies and FedSavvy Strategies blog, 2012-2019. Unauthorized use and/or duplication of this material without express and written permission from this blog’s author and/or owner is strictly prohibited. Excerpts and links may be used, provided that full and clear credit is given to FedSavvy Strategies and FedSavvy Strategies blog with appropriate and specific direction to the original content.