AI Implementation Strategy for Combatant Commands: A Methodology for Risk-Informed LLM Opportunity Mapping
by Colonel Charles Bursi, U.S. Air Force; Colonel John Caldwell, U.S. Air Force; Lieutenant Colonel Ron Garberson, U.S. Army; Commander Jacqueline Kyzer, U.S. Navy
The integration of Artificial Intelligence (AI) and Large Language Models (LLMs) into military operations presents both significant opportunities and critical challenges. As the U.S. military faces increasingly complex threats, leveraging AI to enhance decision-making, situational awareness, and operational efficiency is imperative. However, improper implementation characterized by over-reliance, ethical concerns, and cybersecurity risks can undermine mission effectiveness and national security. This paper proposes a risk-informed opportunity mapping methodology to guide AI adoption within Combatant Commands (CCMDs). It outlines a structured approach to AI integration, ensuring alignment with operational objectives, doctrinal frameworks, and ethical considerations. The study introduces the Assist, Advise, and Decide framework, categorizing LLM applications based on complexity, risk, and return on investment. Additionally, it highlights key challenges such as bias, human-AI trust calibration, adversarial cyber threats, and secure information-sharing constraints. Through a phased implementation strategy, this paper demonstrates how CCMDs can strategically leverage LLMs to maximize their benefits while mitigating risks. By adopting a deliberate, structured, and ethically governed approach, the military can enhance operational superiority, maintain strategic advantage, and safeguard national security in an AI-driven battlespace.
Introduction
The rapid advancement of Artificial Intelligence (AI) has transformed modern society, influencing decision-making, automation, and operational efficiency across industries. In military applications, failing to integrate cutting-edge technology creates a strategic disadvantage, leaving forces vulnerable to adversaries who capitalize on innovation. A historical example that underscores this risk is the 1941 attack on Pearl Harbor. The United States had access to advanced radar technology, yet due to inexperience and miscommunication, critical early warnings were dismissed, leading to devastating losses.[i] This highlights a crucial lesson for modern warfare: possessing advanced technology is not enough; effective integration, training, and strategic implementation are essential.
As the U.S. military confronts 21st-century threats, the adoption of AI and Large Language Models (LLMs) must be deliberate, structured, and aligned with mission objectives to enhance decision-making speed, situational awareness, and operational superiority. While human cognition is indispensable, it has inherent limitations: cognitive load, memory constraints, and processing speed. Research on decision science demonstrates that as information complexity increases, human accuracy declines.[ii] Unlike human analysts, LLMs can process real-time data instantaneously, analyze vast amounts of information, and detect patterns beyond human capability. However, LLMs are not a substitute for military decision-makers; instead, they serve as a force multiplier, augmenting human expertise with data-driven intelligence and predictive analysis.
Despite its advantages, AI integration into military organizations cannot be haphazard. Improper implementation can lead to over-reliance on AI-generated insights, ethical concerns, and cybersecurity vulnerabilities. Combatant Commands (CCMDs) must adopt a strategic approach that ensures AI aligns with doctrinal frameworks, operational needs, and ethical considerations. This requires a rigorous process of opportunity mapping—identifying where LLMs can deliver the most tangible benefits in command and control, logistics, intelligence analysis, and cybersecurity. A structured integration strategy enables CCMDs to prioritize AI investments, mitigate risks, and build a resilient infrastructure that supports AI-enabled operations.
This paper presents a methodology for risk-informed opportunity mapping to guide LLM implementation in CCMDs. By approaching AI integration with clear strategic intent, rigorous validation, and responsible governance, the military can harness AI’s power while safeguarding operational integrity and national security.
Foundations of AI Implementation Strategy
This section discusses essential elements of AI integration that account for operational and national security risks. AI implementation and AI adoption will be discussed first. Then, AI adaptations for CCMDs and special considerations for military use will be reviewed.
AI Implementation and Adoption Strategies. The following literature review first considers generic strategies for AI implementation. Then, AI implementation strategies for a CCMD headquarters are considered. In short, regardless of where it is being implemented, a deliberate approach is required to determine when and how AI should be implemented.[iii]
First, business and operational studies all converge on the importance of leadership support for AI implementation and adoption.[iv] Leadership buy-in to the strategy ensures that personnel see AI implementation as distinct from previous technology, software, or procedure implementations. Leadership support also reduces the likelihood of personnel adopting a negative “just AI it” mentality and increases the likelihood of a positive, tailored mentality of adopting AI as a tool that uniquely increases the efficiency or effectiveness of tasks.
Next, AI adoption should include training on how to realize the unique benefits and innovations of AI usage for each task. AI adoption refers to the process whereby broad guidelines enable each user to tailor the AI system for their unique use case at their level in the CCMD.[v] This maximizes the benefits of—and avoids the common pitfalls associated with—mandatory AI use or a “one size fits all” approach.[vi] Effective implementation also includes a tiered adoption plan so users with various knowledge levels can access appropriate resources and software engineer support.[vii]
AI Adaptations for CCMD Use. The unique roles, responsibilities, and legal requirements of CCMDs require three key adaptations to ensure moral, legal, and ethical implementation of AI: active AI adoption, assessment of AI tool impact on hostilities, and application of AI tools in strategy and decision-making.
Active adoption of AI is an iterative approach that trains and improves AI tools with the unique sources, methods, and limitations of each command. Specifically, this is where each CCMD directorate should utilize AI tools and evaluate new areas to employ AI, ensuring it is capable of providing a holistic command perspective.[viii] This should start with AI use in low-risk tasks that result in high-payoff time savings, and expand to more significant tasks as trust and familiarity increase.[ix] Lastly, active adoption should include user-to-engineer feedback loops to improve capability, reduce limitations, and increase accountability of AI tools (e.g., update the measures of performance and measures of effectiveness for AI).[x]
Next, an essential CCMD adaptation is determining whether the AI tool has a “direct impact on the conduct of hostilities.”[xi] AI-supported tasks that have a direct impact on hostilities should incorporate additional human review of outputs (e.g., legal review), a human-on-the-loop (e.g., target systems analysis), or a human-in-the-loop (e.g., target identification).[xii] In addition to making the tools easy for civilian oversight bodies to audit, these adaptations are required to meet the moral and ethical obligations of a military headquarters.[xiii]
Challenges and Risks with AI Implementation for CCMD
AI presents transformative opportunities in military operations, but it also introduces significant challenges that require both internal policy and legislative regulatory attention, particularly in matters of national security.[xiv] As AI becomes increasingly integrated into autonomous weapons systems, intelligence analysis, and command-and-control networks, concerns about bias, human over-trust and under-trust, advancing adversarial cyber threats, and data-sharing limitations will continue to grow. To address these issues, CCMDs must develop training pipelines that establish a clear human-AI interface, address network security concerns, and implement robust information-sharing policies.
Bias in LLM-enabled military applications stems from imbalanced training data, flawed algorithms, and systemic biases, including ethical and cultural considerations.[xv] AI models trained on historical conflict data may disproportionately classify specific demographics as enemy combatants, leading to misjudgments in mission planning, reliance on outdated strategies, and inequitable resource distribution across allied forces. This bias can also contribute to the unintentional escalation of force, as misinterpretations of enemy intentions, misclassification of civilians as combatants, or an over-prioritization of offensive responses could drive unnecessary military engagements. To mitigate these risks, LLMs used in military operations require continuous adversarial testing, diverse training datasets, and human-in-the-loop oversight to prevent biased decision-making that could escalate conflicts unnecessarily.
Trust in LLM-enabled military systems must be carefully calibrated to prevent both over-reliance and skepticism from undermining decision-making.[xvi] Over-trust can lead to blind acceptance of flawed AI recommendations, reducing the ability of action officers to validate intelligence or justify analyses, which in turn erodes leadership’s confidence in AI-driven insights. Conversely, under-trust may cause operators to disregard valuable AI-generated insights due to personal biases or skepticism, diminishing the effectiveness of AI-assisted decision-making. Establishing trust requires rigorous performance validation, continuous adversarial testing, transparent decision-tracing, and ethical deployment frameworks to ensure that AI systems augment rather than replace human judgment in military operations.
Military networks, given their importance to national security, are prime targets for cyberattacks.[xvii] Malign actors will seek to infiltrate LLM training pipelines to introduce false intelligence inputs, leading to flawed strategic recommendations and compromised decision-making. As adversaries also integrate AI, cyber threats become more sophisticated, using machine learning to bypass traditional security measures, automate hacking attempts, and spread misinformation. To counter these evolving threats, military cybersecurity must adopt equally advanced AI-driven defenses, including anomaly detection, automated threat response, and predictive analytics. By leveraging AI, defense networks can identify and mitigate cyber intrusions in real time, adapt to new attack patterns, and enhance resilience against increasingly complex adversarial tactics.[xviii] Additionally, LLM infrastructure must be hardened against hacking, data manipulation, adversarial attacks, and system takeovers to ensure operational integrity and resilience in classified and contested environments. Collaboration between government agencies, private sector cybersecurity firms, and AI researchers is crucial to staying ahead of emerging cyber threats and ensuring the integrity of critical military network infrastructure.
The integration of LLMs within CCMDs presents additional challenges related to data classification and secure information sharing. AI systems depend on large-scale data processing, real-time intelligence fusion, and secure communications, yet classification restrictions create technical, operational, and security barriers that limit joint-force interoperability. Restrictions on sharing classified LLM-generated insights across military services and CCMDs slow cross-domain operations and reduce efficiency. While classification controls are essential for national security and intelligence protection, they also slow AI-driven innovation and constrain operational effectiveness. To overcome these challenges, CCMDs must develop classified LLM training pipelines, establish secure AI deployment infrastructures, and improve classified-unclassified AI integration protocols while maintaining ethical oversight and compliance with warfare regulations.
As AI continues to shape modern warfare, its responsible integration into military operations will define the future of national security. By implementing robust oversight mechanisms, secure information-sharing protocols, and AI-driven defenses, CCMDs can ensure that AI remains an asset rather than a liability in an increasingly contested and technologically advanced battlespace. The success of military AI depends not only on its capabilities but also on the policies, trust, and security frameworks that govern its use.
LLM Opportunity Mapping
The review of existing literature reveals a significant knowledge gap in risk-informed AI implementation within military organizations. This research aims to address this gap by introducing a new task model for LLM opportunity mapping, along with the means, methods, and organizational structure essential for successful implementation and future scalability.
4a. Assist, Advise, Decide Framework. To initiate risk-informed opportunity mapping, this research presents an original heuristic, the Assist, Advise, and Decide (AAD) model, that offers a useful framework for understanding the risk associated with integrating LLM support into an organization’s workflow. This approach categorizes LLM tasks based on the level of support they provide:
Assist – Automating routine tasks and information processing.
Advise – Delivering contextual insights and data-driven analysis.
Decide – Fusing information to recommend strategic choices.
By assessing LLM functions through the AAD model, CCMDs can begin the first step in risk-conscious implementation. The following sections describe each element of the AAD model in further detail.
Assist (Descriptive) – Users generate insights through LLM-provided data on past and current conditions. At the most basic level, LLMs assist by automating repetitive tasks, streamlining communication, and generating content to save time and effort. In this role, LLMs act as digital assistants, handling tasks that would traditionally require significant manual effort. By offloading these routine tasks, LLMs enable military personnel to focus on complex decision-making, significantly improving productivity and operational effectiveness. The low-risk nature of this category stems from LLMs functioning primarily as tools for enhancing efficiency without directly influencing mission-critical decisions.
Advise (Predictive) – LLMs provide foresight and risk assessment for future conditions. In this category, LLMs go beyond automation by leveraging advanced algorithms and data fusion to provide contextual insights and predictive analysis. Instead of simply assisting with data gathering, LLMs apply reasoning to synthesize multiple data sources, allowing users to assess potential risks and outcomes. While this predictive capability is valuable, it also introduces medium-level risk due to the possibility of misinterpretation, bias, or incomplete data analysis. Therefore, human oversight remains critical to validate AI-generated insights before operational decisions are made.
Decide (Prescriptive) – LLMs provide actionable recommendations to achieve desired future conditions. At the Decide level, LLMs transition from providing insights to recommending specific courses of action (COAs) based on complex data analysis and simulations. Unlike the previous stages, where humans interpret AI-generated outputs, in the Decide category LLMs actively suggest strategic decisions based on user-defined objectives. While this phase offers significant operational advantages, it also presents the highest level of risk. Over-reliance on AI-generated decisions could lead to flawed recommendations due to biased data, adversarial manipulation, or unforeseen operational variables. To mitigate these risks, human-in-the-loop oversight is critical.
Overall, by assisting with basic time-consuming tasks, providing foresight and risk assessment, and generating actionable recommendations, LLMs can provide a strategic and operational advantage to CCMDs. A summary of the AAD model is provided in Figure 1 below.
4b. LLM Opportunity Mapping Informed by Challenges and Risks. Although LLMs have significant capability across the AAD spectrum, the implementation plan for a CCMD must account for the unique challenges and risks inherent in military operations. To maximize the benefits of LLMs while minimizing potential risks for military organizations, a phased implementation strategy is essential. A phased implementation plan is a deliberate approach that begins with low-risk applications of AI to build trust among personnel, and then expands AI usage to meet CCMD requirements. As personnel’s confidence in the AI grows and its risks are better understood, the scope of LLM usage can gradually expand to more strategic functions across the AAD model to ensure a secure and effective integration into mission-critical operations.
Risk, Complexity, and Return on Investment Analysis. While the AAD model offers a high-level assessment of LLM tasks, further refinement is required to reach the level of detail needed for opportunity mapping. Opportunity mapping begins with evaluating LLM tasks based on the factors of Complexity, Risk (Low, Medium, High), and Return on Investment (ROI). These factors provide a clearer understanding of the challenges and opportunities associated with AI integration. These three factors will first be defined, and then integrated into a structured framework that enables AI implementation strategy opportunity mapping.
Low-Risk: Low-risk tasks are generally administrative, non-sensitive, and carry limited operational consequences if errors occur, reducing security risks. Users are familiar with the data and can quickly identify errors and make corrections. The complexity of the task is low, leading to greater transparency in the means and methods by which the results were derived.
Medium-Risk: Medium-risk tasks present a moderate level of harm but are still manageable with proper safeguards and oversight. They involve complex AI reasoning where mistakes may not be easily identifiable and could mislead decision-making.
High-Risk: High-risk tasks are mission-critical and involve complex decision-making or system integration. They carry substantial risk if errors occur but also promise significant returns if executed accurately, underscoring the need for close human supervision.
Complexity: The ability to process complex, layered queries that involve multiple reasoning steps or the integration of diverse information. It is the amount of reasoning, memory, and learned behavior demanded from the LLM relative to the data set it must draw from. The LLM’s approximations approach true understanding only as the quality and quantity of information available to complete the task increase.
Return on Investment:
Part 1 – Time savings and accuracy. Automating routine activities increases productivity, frees up time and personnel for strategic tasks, and ensures consistency and accuracy, leading to significant efficiency gains.
Part 2 – Hidden insights. Produces data, analysis, and insights that are otherwise unavailable using traditional means.
These definitions of the three factors can be further refined into the following working definitions:
Risk = Consequence & Probability of Errors + Ability to Validate Results
Complexity = Reasoning Level + Memory & Database Requirements + Learned Behavior
ROI = Accuracy & Optimization + Time Savings + Hidden Insights
By integrating these three factors with the AAD model described above, a Task Assessment Matrix (TAM) can be completed (see Attachment 1). The next step involves filling in the TAM and scoring each task on a ten-point scale (1 = low, 10 = high), applying “max function” analysis to determine each task’s overall value for Risk, Complexity, and ROI (see Figure 2).
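To make the scoring procedure concrete, the max-function aggregation described above can be sketched in a few lines of Python. The task name, sub-component groupings, and scores below are illustrative assumptions for demonstration, not values drawn from the paper's Figure 2:

```python
# Illustrative Task Assessment Matrix (TAM) entry. Each factor's
# sub-components are scored on a 1-10 scale, and "max function"
# analysis takes the highest sub-score as the task's overall value
# for that factor.
from dataclasses import dataclass

@dataclass
class TaskAssessment:
    name: str
    aad_level: str            # "Assist", "Advise", or "Decide"
    risk_scores: tuple        # (consequence & probability of errors, ability to validate)
    complexity_scores: tuple  # (reasoning, memory & database, learned behavior)
    roi_scores: tuple         # (accuracy & optimization, time savings, hidden insights)

    def overall(self) -> dict:
        """Apply max-function analysis to each factor."""
        return {
            "Risk": max(self.risk_scores),
            "Complexity": max(self.complexity_scores),
            "ROI": max(self.roi_scores),
        }

# Hypothetical low-risk "Assist" task
task = TaskAssessment(
    name="Draft routine staff summaries",
    aad_level="Assist",
    risk_scores=(2, 3),
    complexity_scores=(3, 2, 2),
    roi_scores=(6, 8, 2),
)
print(task.overall())  # {'Risk': 3, 'Complexity': 3, 'ROI': 8}
```

Taking the maximum rather than the average is a conservative choice for Risk (a single severe sub-score dominates), which is consistent with the risk-informed intent of the methodology.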
The accuracy and reliability of each TAM depend on analytical rigor and the expertise of the assessment team scoring each task. Therefore, LLM subject matter experts with in-depth knowledge of the AI system should conduct the scoring. Ideally, this team would be supplemented with industry experts to enhance fidelity and ensure a well-rounded evaluation.
Where quantitative data is available, it should be leveraged to the greatest extent possible. In cases where quantitative analysis is not feasible, the team should aggregate qualitative scores to determine the final evaluation for each factor and task.
For demonstration purposes, the research team generated a task list using the AAD model, conducted individual task scoring, and aggregated the results in the TAM in Figure 2.
These scores are estimates designed to illustrate the relative differences among tasks. The overall assessment in each TAM will vary based on the AI platform, classification, and database availability. For example, ChatGPT and Claude may yield different scores due to differences in their underlying models and training data, which influence the Risk, Complexity, and ROI factors. Conducting this type of analysis beforehand is essential to evaluating each AI platform’s suitability for each task based on a CCMD’s requirements. Of note, for ease of analysis, the research team limited each AAD category to five entries. In practical applications, each AAD task list would be more extensive and tailored to the CCMD’s needs.
With the TAM complete, the CCMD can use the TAM from Figure 2 to plot an “Opportunity Map” in Figure 3. For simplicity, “Complexity” is not displayed on this 2-dimensional map (more complex 3-dimensional opportunity maps should reflect all three factors). Once plotted, this Opportunity Map presents the tasks identified in Figure 2 as a visual representation of an AAD phased implementation strategy. The colors represent recommended AI implementation “phases”: green first, then yellow, then red last.
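The phased color-coding on the Opportunity Map can likewise be sketched as a simple rule over the TAM scores. The thresholds and task entries below are illustrative assumptions, not values taken from the paper's figures:

```python
def implementation_phase(risk: int, roi: int) -> str:
    """Assign a notional implementation phase from Risk and ROI scores (1-10).

    green  (phase 1): low risk, high ROI -- implement first.
    yellow (phase 2): moderate risk -- implement once trust is established.
    red    (phase 3): high risk, mission-critical -- implement last,
                      with human-in-the-loop oversight.
    Thresholds are illustrative, not drawn from the paper.
    """
    if risk <= 3 and roi >= 6:
        return "green"
    if risk <= 6:
        return "yellow"
    return "red"

# Plotting notional tasks on the 2-dimensional (Risk, ROI) Opportunity Map
tasks = {
    "Summarize daily reports": (2, 8),   # Assist
    "Wargaming risk analysis": (5, 7),   # Advise
    "COA recommendation":      (9, 9),   # Decide
}
for name, (risk, roi) in tasks.items():
    print(f"{name}: {implementation_phase(risk, roi)}")
```

In practice, a CCMD would tune these thresholds to its own risk tolerance and plot the resulting phases directly from the completed TAM.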
4c. Organizational Structure for AI Implementation. The AAD model and TAMs provide a framework for mapping where to implement LLMs in CCMDs. However, successful implementation also requires an organizational structure to manage how AI is implemented in each CCMD. This structure should consist of three key components: oversight and management, an AI policy manager, and a subject matter expert element. These three components should be in continuous coordination and conduct regular review and approval processes.
This implementation structure also deliberately places CCMD leadership in the cycle of LLM implementation. The goal of this AI implementation structure is to empower CCMD leadership to provide oversight, manage risks, and provide strategic guidance for LLM adoption. By clearly defining authorities, relationships, and task distribution (see Figure 4), this framework enables effective integration of LLMs into established CCMD battle rhythms.
It is also important to note that this is a continuous cycle. The processes on the bottom left of this chart (Development, Task List, Evaluation, and Feedback) are integrated into a cycle that ensures LLM implementation is accomplished effectively. For illustration, one of these processes is highlighted yellow in Figure 4 above and is expanded in Figure 5. This illustrates how each process is part of a continuous improvement cycle that increases the speed and success of an AI phased implementation strategy.
4d. Application of a Phased Implementation Strategy. By applying the AAD framework in a TAM and visualizing it with Opportunity Mapping, CCMDs can quickly execute an LLM phased implementation strategy while balancing the three factors of Risk, Complexity, and ROI. This phased implementation strategy ensures that LLMs are introduced in a manner that prioritizes low-risk applications (green) before expanding to more complex (yellow) and mission-critical (red) functions. Furthermore, establishing an implementation structure with oversight, policy management, and subject matter expertise is essential for effective LLM integration. Finally, CCMDs must adopt a continuous improvement cycle to ensure LLM capabilities are regularly evaluated, refined, and aligned with requirements. With this phased implementation strategy, CCMDs can realize the maximum potential of LLMs while mitigating risks and maintaining a strategic edge in an increasingly AI-driven landscape. A summary of this implementation strategy is provided in Figure 6.
Future Direction and Recommendations
Achieving Full LLM Integration into Decision-Making. Though the AAD model and Opportunity Mapping provide a sound foundation for where and how to rapidly implement LLMs, this approach will be insufficient to gain a relative advantage over our adversaries without timeliness. To capture the full benefit of LLMs (e.g., decision-making speed and effectiveness), CCMDs must clearly identify when to transition through each phase. This must be done quickly, to move from initial low-risk LLM assist/advise support (green, yellow) to fully LLM-driven decision-making for warfighting functions (red), as permitted by U.S. policy.[xix]
Ensuring long-term LLM integration success requires leaders, data scientists, and users to remain committed to continual adaptation based on empirical assessments, lessons learned from external organizations and academia, and user feedback. Successful LLM integration into command decision-making requires mechanisms to systematically reassess LLM-generated outputs, particularly concerning developing user trust and optimizing human-AI interaction.
Continuous automated performance assessments combined with user after-action reviews will ensure that LLM tools remain a valuable force multiplier rather than becoming a disruptive dynamic.
Expanding upon Section Four’s “initial implementation” plan, CCMDs must remain committed to transitioning to the higher-ROI “Decide” features of LLMs through three stages:
(0-1 Years) establishing initial AI literacy and training programs, launching low-risk/high-ROI LLM assist and advise projects, and developing AI risk assessment frameworks for decision-making.
(1-2 Years) progressing to LLM-assisted decision-support tools for operational planning (e.g., mission planning and wargaming).
(Beyond 2 Years) achieving LLM-driven decision-support systems in non-lethal domains such as logistics and intelligence fusion.
Mitigating Risks During Expansion. When transitioning to the Decide phase of the AAD model, CCMDs must proactively mitigate implementation risk by addressing AI trust, cybersecurity vulnerabilities, and decision accountability. Establishing a governance framework tailored to the CCMD environment will ensure responsible AI adoption. AI trust and transparency can be enhanced by implementing structured training programs that ensure all personnel understand AI’s capabilities and limitations. Transparency in AI decision-making processes is essential to preventing both over-trust and under-trust in AI systems. Cybersecurity measures must be robust to protect AI models from adversarial attacks, such as data poisoning and misinformation campaigns. This can be achieved by conducting regular penetration testing of AI decision-support tools, enforcing a zero-trust architecture in AI data pipelines, and implementing blockchain-based AI decision audit trails. Additionally, as described in Sections 2 and 3, ethical oversight must be emphasized to ensure AI applications comply with international humanitarian law, particularly regarding the ethical use of AI in lethal decision-making contexts.
Future Areas of Study and Research Priorities. Future research supporting CCMD operations should focus on how AI is operationalized for near-real-time decision-making, especially AI applications in time-sensitive crisis response operations. Additional areas of important research remain assessing AI’s role in battlefield data fusion and predictive analytics. Once these key questions are resolved and superhuman performance is achieved, future research will need to investigate the issue of scaling these intelligence agents to serve as embedded advisors throughout the entire U.S. military. To address the vulnerabilities that this will present, future research will need to address AI resilience against adversarial manipulation, requiring studies on the impact of AI deception tactics employed by adversaries and the development of countermeasures to detect and neutralize manipulated inputs. Finally, the ethical and policy implications of AI in combat operations must be explored to evaluate legal and ethical challenges of AI-driven decision-making in kinetic and non-kinetic contexts, as well as to develop policies that balance AI autonomy with human oversight.
Conclusion
As AI and LLMs become integral to modern warfare, effective phased implementation will determine the U.S. military’s ability to maintain strategic and operational superiority. This AAD Opportunity Mapping research provides a structured, risk-informed framework to integrate the analytical power of LLMs while mitigating ethical, legal, and operational challenges. Successful implementation will require active leadership engagement and practitioner participation. The AAD model facilitates these behaviors through its phased implementation strategy, while allowing for continual feedback, trust-building, and mitigation of critical vulnerabilities. User training, transparency, and implementation governance will be keys to successful adoption.
To sustain military advantage, CCMD leadership must remain committed to implementing this AAD framework now, then transitioning within the next 1-2 years from low-risk LLM applications to high-ROI applications, as permitted by U.S. policy. Though this implementation path is aggressive, the consequences of failure justify the concerted effort. As AI capabilities evolve, LLMs will increasingly serve as a force multiplier, and the benefits of a CCMD’s early adoption will compound. By embracing this paper’s phased implementation strategy, CCMDs will enhance warfighting effectiveness and ensure their ability to decisively counter adversaries, thereby securing U.S. strategic interests.
[i] Joseph Connor, “One False Step: Could a Young Army Pilot in Hawaii Have Prevented the Pearl Harbor Attack? It Was a Question That Shadowed Kermit Tyler All His Life.” World War II, December 1, 2020.
[ii] Kyra Schapiro et al., “Strategy-Dependent Effects of Working-Memory Limitations on Human Perceptual Decision-Making.” eLife, Vol. 11, March 2022.
[iii] Daragh Murray, “Adapting a Human Rights-Based Framework to Inform Militaries’ Artificial Intelligence Decision-Making Processes.” St. Louis University Law Journal 68 (2) (2024): 295.
[iv] Mohammad I. Merhi and Antoine Harfouche, “Enablers of Artificial Intelligence Adoption and Implementation in Production Systems.” International Journal of Production Research 62 (15) (2024): 5460.
[v] Jason Downie, “Using AI to Create Innovation and Collaboration.” AMA Quarterly 9 (4) (2023): 34.
[vi] Murray, “Adapting a Human Rights-Based Framework,” 301.
[vii] Downie, “Using AI,” 33.
[viii] Andrew McAfee, Daniel Rock, and Erik Brynjolfsson, “How to Capitalize on Generative AI.” Harvard Business Review 101 (6) (2023): 44.
[ix] Ibid., 45.
[x] Merhi and Harfouche, “Enablers of Artificial Intelligence,” 5458.
[xi] Murray, “Adapting a Human Rights-Based Framework,” 296.
[xii] Ibid., 301.
[xiii] Ibid., 314.
[xiv] Michael Glanzel, “Artificial Intelligence and Regional Security in the Western Pacific.” Tulane Journal of Technology & Intellectual Property 26 (March 2024): 53.
[xv] Laurie Harris. “Artificial Intelligence: Overview, Recent Advances, and Considerations for the 118th Congress.” Congressional Research Service Report R47644 (August 4, 2023): 3.
[xvi] Joshua Harguess et al., “Bias, Explainability, Transparency, and Trust for AI-Enabled Military Systems.” Proceedings, Vol. 13054, Issue 1 (June 2024).
[xvii] Dan-Călin Beşliu, “Cyber Terrorism - A Growing Threat in the Field of Cyber Security.” International Journal of Information Security & Cybercrime 6 (2) (2017): 38.
[xviii] Sarika P. S. et al., “AI Meets Cyber Defense: Enhancing Network Security with GAN-Driven NIDS.” 11th International Conference on Advances in Computing and Communications (ICACC) (November 2024): 5.
[xix] Ibid.