Mind the Seams: Where Intelligence Can Break
By: Mr. Henry Yep

Disclaimer: The views expressed in this article are those of the author and do not reflect the official policy or position of the Department of Defense or the U.S. Government.
Some of the most contentious moments I observed during intelligence briefings were not about ships, missiles, or war plans. They were about words describing intent. I recall one briefing, early in my career, during which analysts from different organizations described another country’s deliberately ambiguous actions in a tense region. Looking at the same reporting, one briefer insisted it was “preparing” for escalation, while another argued the behavior was “positioning” for leverage. A third rejected both, calling the same activity “reactive” rather than “aggressive” or “opportunistic.” A fourth chimed in with “provoked.” They could not agree whether the moves looked “offensive” or “defensive.” Each briefer’s framing reflected different source access, missions, and analytic baselines. The argument was not merely semantic: each label implied different judgments about risk and different near-term options. When it came time to speak with one voice, we spent more time defending labels than addressing the underlying problem. The seam lay in interpretation. Different regional and functional shops brought different mental models to the same set of facts. The gap between how we were trained to argue and what clients needed turned a seemingly straightforward question into an analytic knife fight that colored how they understood the situation. Bridging those seams means integrating competing interpretations and clarifying what evidence would shift judgment—giving senior leaders a clearer basis for action.
I learned to fear seams more deliberately as a student at the Marine Corps War College, from a professor who delighted in beating us at operational-level wargames ranging from the Napoleonic era to World War II. He warned us that he would find and exploit the seams. Then he did exactly that, repeatedly. Our forces often looked solid on the map until a wrong move exposed a gap in our lines. Marine Corps warfighting doctrine describes gaps as vulnerabilities in time, space, or capability, often revealed at seams and boundaries, such as “a seam in an air defense umbrella” or “a boundary between two units.”[1] Intelligence agencies are no different. An organization can do many things right and still lose if it ignores the boundaries where information and mandates meet.
Where the Seams Really Are
On paper, the U.S. intelligence community appears robust. National-level agencies have broad views of issues. Combatant commands see operational detail. Embassies know local personalities and politics. Functional and technical centers focus on issues such as cyber, counterterrorism, and missile systems. Somewhere in that mix, we assume the picture will coalesce.
In practice, seams emerge where responsibility is divided but interdependent. Deliberate overlap helps keep those seams from becoming gaps. Geographic seams appear at the boundaries between global, regional, and country vantage points. Functional seams lie between intelligence, operations, and policy, each with its own vocabulary, timelines, risk tolerance, and goals. Organizational seams appear inside institutions, with regional or country teams on one floor, issue-focused cells on another, and technical experts across town. They read different traffic, attend different meetings, and build different mental models.
Outside of my normal work, my years on a wilderness search-and-rescue team have made me more attuned to seams. In a search, seams exist where two efforts meet, such as between individual searchers and between different teams. Without deliberate overlap at the boundary, the seam becomes a gap—untouched ground where a missing subject can sit just out of view. Teams manage that risk with spacing rules that keep teammates close enough so their points of view overlap. Teams with different capabilities can approach the same area from multiple directions so that narrow lanes and clues are not missed. Intelligence agencies face a similar challenge. A little redundancy at the boundaries usually achieves better results than strict adherence to a tidy organizational chart.
None of this is the result of malice or laziness. It is what happens when a large, specialized enterprise is task-organized around missions, programs, and compartments. Organizations and coordination bodies are often created to harmonize the system. However, without clear authorities or resources, they can become additional layers rather than true integrators.
Every organization still ends up needing de facto seam-minders, people who notice those seams before they become gaps and work quietly to bridge them, whether or not the organization acknowledges their role. They are the ones who show up to others’ meetings, keep informal channels open with counterparts, and translate between communities that rarely speak the same language. The health of the system depends more on those habits than on any single reorganization.
When Seams Fail, Information Does Not Add Up
Public post-mortems on major national security surprises tell a similar story. Roberta Wohlstetter’s classic study of Pearl Harbor showed that signals existed but were buried in noise, scattered across organizations, and never reconciled in time to matter.[2] The 9/11 Commission described how agencies held crucial fragments about the hijackers but failed to connect them across FBI–CIA and foreign–domestic seams.[3] The Robb–Silberman Commission found the intelligence community “dead wrong” on Iraq’s weapons of mass destruction, in part because of untested assumptions, stovepipes, and weak coordination mechanisms across agencies.[4]
Eliot Cohen and John Gooch, in Military Misfortunes, argue that disasters usually arise from the interaction of different failures: failure to anticipate, adapt, and learn.[5] Those failures tend to concentrate at the interfaces between organizations rather than at their centers. Somewhere in the system, people had the right pieces. The problem was less a lack of data than the way information moved—or failed to move—across seams.
The Quiet Reasons Seams Persist
If seams matter this much, it is fair to ask why the system gives them so little structured attention. One reason is how we recruit and reward analysts. Many arrive from universities or policy programs where the craft they learn is mostly solo research and writing, and success is measured by individual expertise: depth on a topic, originality of argument, and volume of output. Recruitment processes and performance reviews highlight independent accomplishments rather than experience in teams where unity of effort matters as much as individual talent. That leaves little room to identify or reward people who are naturally good at filling seams.
Inside government, we reinforce that pattern with metrics that track how many products an analyst writes, how well they master a target, and how often their work receives feedback. Those measures are not meaningless, yet they rarely reveal whether an analyst understands how their work lands in the broader decision-making ecosystem. It is difficult to recognize and reward the quiet, messy work of tending seams: bridging relationships across directorates, smoothing handoffs, and helping different communities understand one another’s pressures.
Organizations fall into similar habits. It is far easier to stand up a new fusion cell, working group, or crisis team than to repair the handoffs between existing elements. Seams are hard to see on paper and usually reveal themselves under stress, during exercises or real-world events, sometimes before something breaks—and often after. Sometimes what an organization really needs is less a new cell and more marriage counselors for its existing parts, a kind of work that rarely appears in formal training plans.
What Analysts Can Do from Where They Sit
Analysts have more agency than they think. Some of the most effective seam-minding I have seen came from analysts who took the initiative: simple actions such as getting out from behind their desks to talk with another team about a fresh report, or calling a counterpart before a disagreement hardened into an e-mail fight. These are trust-building habits that, over time, fill organizational seams before things can go wrong. A small act of consideration can do more to prevent failure than one more slide deck or standard operating procedure.
Analysts should see themselves not just as researchers but as participants in a larger ecosystem that includes intelligence consumers and the liaisons who connect them. That means reaching out to stakeholders and asking what they are seeing. It also means adjusting analysis when new perspectives expose blind spots, while recognizing the limits of any single lens and the tendency to treat what is in front of us as the whole.[6] Before drafting or briefing, it helps to understand what the client actually wants, how the client sees the issue, and what pressures shape the strategic environment. That engagement, often through formal seam-minders, forces clients to clarify their own thinking and lets you shape the work around the decision rather than around the template. Working-level initiative is helpful, but it works best when it complements, rather than bypasses, the chain of command. Going directly to a client without looping in leadership can feel satisfying in the moment but creates new seams in accountability and resourcing. The point is not to freelance, but to widen the relationships and conversations that make good analysis possible.
Analysts must also drive collection, not just consume it.[7] That starts with being honest about the information gaps and turning them into specific requirements for collectors and technical experts. If analysts working along those seams do not ask for the right information, collection can drift away from what decision makers actually need.
What Leaders of Analysts Owe the System
Working-level initiative is useful, but it cannot substitute for leadership decisions on access, time, and incentives. Intelligence is, in many ways, an apprenticeship profession. Analysts learn most of what matters on the job: how to argue a judgment without burning bridges, navigate the interagency system, and engage clients. That apprenticeship only works if leaders share experience, relationships, and exposure instead of hoarding them. Leaders who keep the most important engagements to themselves may look indispensable in the short term, but they create a single point of failure and leave a gap once they are gone.
Because leaders control tasking and incentives, they can close seams by design. That means distributing access, pairing junior personnel with senior briefers, and giving analysts time and top cover to engage external counterparts. The goal is to build a system where cross-boundary integration is expected, enabled, and rewarded, rather than dependent on individual heroics. For supervisors, one of the main jobs is to create cover and capacity for cross-seam engagement and to protect analysts when that engagement produces friction. That includes naming an integration lead for priority issues, building deliberate overlap at boundaries, and making cross-seam coordination a graded requirement in performance evaluations and promotion decisions.
What Intelligence Consumers Can Do
Commanders and other decision-makers cannot make seams disappear, but they can shape how those seams affect what lands on their desk. The intelligence liaison is there to help, but a few habits can make a difference.
First, know the briefer’s perspective. A simple question helps: “How does your office see this differently than others?” That invites analysts to surface seams instead of quietly papering them over. It also pays to notice who is in the room. A small group of two or three empowered briefers usually reflects more seam work than a ten-person phalanx where everyone is guarding a narrow slice of the issue.
Second, be specific about what is needed. A broad request for a product encourages different offices to interpret the task in ways that reflect their areas of responsibility and institutional biases. A few minutes with the liaison to clarify requirements can save days of confusion and wasted effort between organizations.
Third, push back when the form starts to drive the content. Analysts can over-optimize for formality, hedge for multiple audiences, and stick to templates because they feel deceptively safe. If a presentation feels scripted or narrow, it is reasonable to say, “Explain this to me the way you would in a conversation with a colleague.” Listening for divergence between the slide and the voice matters as well. Asking, “What are we not showing here?” can surface important details.
Using Technology to Help—and Not Make It Worse
Artificial intelligence (AI) and other tools can help or hurt, depending on how they are used. Systems that cut the burden of repetitive work can give time back for analysts to engage organizational seams, but that time is wasted if it is immediately consumed by requests for more products. Machines can accelerate the movement of information, but only people can buy doughnuts for a team after long nights of senior-level review.
New tools are also creating an internal seam between staff who can use them and those who are skeptical or lack time to experiment. Left unmanaged, that split produces parallel analytic cultures: one group quietly uses AI to move faster but struggles to explain its limits, another distrusts the tools and keeps doing everything the old way. Leaders need to treat that seam as a management issue, not a passing fad, by creating time and space for low-risk experimentation and increasing AI literacy in the workforce.
Intelligence as Stewardship
At the heart of all of this is a simple idea: intelligence is a team sport. It is a service profession in peace and war. Analysts help other people make better decisions amid uncertainty. That requires technical skill and deep knowledge, along with humility about how partial any one person’s view will always be. Even when one analyst is at the podium, there is a supporting cast of collectors, methodologists, editors, and managers behind them. Filling seams depends on knowing what those teammates can do and how to bring their strengths to bear at the right moment.
One of the clearest reminders of that came from an analyst who had previously been an elementary school teacher. During an early rehearsal for a brief, they stumbled through a standard script—eyes down, robotic, clearly uncomfortable. After some supportive coaching, they tried again in the style of public speaking they knew best: no notes, moving around the room, explaining the issue from memory. It was what they had done for years in front of schoolchildren. They kept the key judgments, simplified the wording, and used a few vivid analogies to explain why one variable mattered more than another. Management remained nervous about sending them to the real brief, but the clients received exactly what they needed: a clear, confident explanation that respected their time.
That small episode exposed a seam between what intelligence agencies say they want—clarity, candor, decision-focused analysis—and the habits and templates that can shape what they produce. It also showed how much talent remains unused when analysts are expected to adhere to a standard format instead of bringing the full range of their professional skills to bear.
Seams themselves are not the enemy. Healthy boundaries protect sources and methods, prevent dangerous groupthink by allowing teams to develop independent views, and keep sprawling problems manageable. Those same seams become risky when no one is responsible for bridging them. Adversaries are watching how the system operates and looking for weaknesses along those joints. Institutions expose additional vulnerabilities when they neglect their own seams and let misunderstandings, misaligned resources, or comfortable assumptions drive outcomes in their place. Seams will always exist in complex institutions. The real question is whether we ignore them or mind them.
About the Author:
Henry Yep is a senior intelligence officer in the Office of Enterprise Analysis at the Defense Intelligence Agency, where he helps shape programs that align analytic production with senior decision-maker needs, timelines, and risk across the defense intelligence enterprise. He previously held supervisory and senior analytic roles in various teams focused on China and Indo-Pacific issues and is a Marine Corps War College distinguished graduate (2025). The views expressed are his own and do not reflect the views of the Department of Defense or the U.S. government.
[1] US Marine Corps, Warfighting, MCDP 1 (Washington, DC: Headquarters US Marine Corps, June 20, 1997), 101, https://www.marines.mil/portals/1/publications/mcdp%201%20warfighting.pdf#page=101.
[2] Roberta Wohlstetter, Pearl Harbor: Warning and Decision (Stanford, CA: Stanford University Press, 1962), 394–95.
[3] National Commission on Terrorist Attacks Upon the United States, The 9/11 Commission Report: Executive Summary (Washington, DC: National Commission on Terrorist Attacks Upon the United States, 2004), 12–13, https://govinfo.library.unt.edu/911/report/911Report_Exec.pdf.
[4] Commission on the Intelligence Capabilities of the United States Regarding Weapons of Mass Destruction, The Commission on the Intelligence Capabilities of the United States Regarding Weapons of Mass Destruction: Report to the President of the United States (Washington, DC: Commission on the Intelligence Capabilities of the United States Regarding Weapons of Mass Destruction, March 31, 2005), title page, https://govinfo.library.unt.edu/wmd/report/wmd_report.pdf.
[5] Pelham G. Boyer, Eliot A. Cohen, and John Gooch, “Military Misfortunes: The Anatomy of Failure in War,” Naval War College Review 45, no. 2 (1992), https://digital-commons.usnwc.edu/nwc-review/vol45/iss2/19.
[6] Lea Winerman, “A Machine for Jumping to Conclusions,” Monitor on Psychology 43, no. 2 (February 2012), 24, https://www.apa.org/monitor/2012/02/conclusions.
[7] Gregory F. Treverton and C. Bryan Gabbard, Assessing the Tradecraft of Intelligence Analysis, TR-293 (Santa Monica, CA: RAND Corporation, 2008), 30, https://www.rand.org/content/dam/rand/pubs/technical_reports/2008/RAND_TR293.pdf#page=30.

