March 13, 2026
The metaphor of the omnipresent eye is hardly foreign to the multitudes of discussions, lectures, and research, past and present, on surveillance and technology. But it is time to expand the scope of our discussions about artificial intelligence (AI) to the future of global security, policy, and the transparency measures that will be critical to our survival. It is likewise imperative to consider how human-machine interaction (HMI) with AI will shape the future of global security policy and military decisions.
There are abundant concerns today about the future of artificial intelligence. Questions surrounding its effect on social behaviors, ethics, accuracy, legality, and safety abound in the public consciousness; yet there is little institutional research being done on these questions. A 2023 report by the United Nations Institute for Disarmament Research (UNIDIR) quotes Dan Hendrycks, Director of the Center for AI Safety: “we are severely behind on safety. 98% of researchers work on making AI more capable, not safer. Safety is under-emphasized” [1]. This discrepancy has only become more apparent over time. In March of 2025, researchers with Model Evaluation & Threat Research (METR) concluded that the length of tasks AI can complete, measured by how long they take a human, has been doubling roughly every seven months for the past six years [2]. From an economic perspective, however, this outcome appears inevitable, with few incentives existing for increased AI regulation. In February of 2026, two Senior Fellows with the Council on Foreign Relations argued that given how much rapid AI growth has contributed to the US economy, the US would be hard pressed to slow down domestic AI labs without falling drastically behind similarly paced Chinese AI labs that are permitted to continue their work without interruption [3]. The two authors also anticipate AI bringing about social and psychological upheaval “on a scale at least equivalent to the Industrial Revolution” [3]. The Penn Wharton Budget Model estimates that generative AI could affect forty percent of 2025’s GDP, and predicts that by 2075 AI will have led to a permanent increase in economic activity within the US [4]. Therein lies a massive advantage in the world of AI for those who invest fast and code faster. In full, the current trajectory of AI’s development indicates a path that heavily prioritizes its earning potential, lest we fall behind economically or technologically.
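The pace METR describes is a simple exponential: with a fixed doubling time of seven months, the task-length horizon multiplies by 2^(t/7) after t months. A minimal sketch of that arithmetic in Python (the baseline figure is a hypothetical placeholder, not METR’s data):

```python
# Sketch of the exponential trend METR reports: the human-time length of
# tasks AI can complete doubles roughly every seven months.
DOUBLING_MONTHS = 7.0

def projected_horizon(baseline_minutes: float, months_ahead: float) -> float:
    """Projected task-length horizon after `months_ahead` months."""
    return baseline_minutes * 2 ** (months_ahead / DOUBLING_MONTHS)

# A hypothetical 60-minute horizon today grows more than tenfold in two years.
print(round(projected_horizon(60, 24)))  # ~646 minutes
```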
Given the attention devoted to developing AI’s capabilities, it would be reasonable to assume that at least an equal amount of effort and investment would be put into protecting the technology. This proves not to be the case. In one study analyzed by UNIDIR, an attack that manipulated input data to deceive a trained model into making incorrect decisions was completed in one afternoon with about twenty lines of code [1]. The model was an image classification algorithm, and with a few simple tweaks to the input, it went from recognizing an image of Georgetown University’s flagship building to classifying the same image as a triceratops. The changes were made using openly published techniques available through Google Colab’s free graphics processing units [5]. Additionally, there are concerns about the scalability of attacks on AI systems: one attack on an AI system relying heavily on shared code or similar training datasets could lead to system failure and possibly to the capture and control of technology used in the field. UNIDIR’s report extrapolated that the challenge of protection is amplified by the wide availability of tools and methods online for attacking AI systems [1], owing in part to the availability of open-weight systems. Open-weight systems are AI models that can be freely downloaded and adjusted, or manipulated, to the user’s intent. The upside of open-weight systems is that the strongest are produced by large, well-known companies located in nations with established records of allyship [3]. Some of these include Meta (United States); Cohere, Canada’s $6.8 billion leading AI enterprise with a focus on expanding access into the public sector [6]; and Mistral, France’s $14 billion AI developer [7]. This presents a wider runway for imposing regulations and security measures, but the outlook remains grim.
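The attack described is characteristic of an adversarial example, where small, targeted perturbations to an input flip a classifier’s prediction. A minimal sketch of one standard technique, the fast gradient sign method (FGSM), in Python with PyTorch (the model, image, and label here are illustrative placeholders, not the study’s actual code):

```python
# Minimal FGSM sketch: nudge each pixel in the direction that increases
# the model's loss, which can flip the predicted class.
import torch
import torch.nn.functional as F
from torchvision import models

# Placeholder pretrained classifier; the study's model may have differed.
model = models.resnet50(weights=models.ResNet50_Weights.DEFAULT).eval()

def fgsm_attack(image: torch.Tensor, true_label: int, epsilon: float = 0.03):
    """Return an adversarially perturbed copy of `image` (shape 1x3xHxW)."""
    image = image.clone().requires_grad_(True)
    loss = F.cross_entropy(model(image), torch.tensor([true_label]))
    loss.backward()
    # Step each pixel by +/- epsilon along the sign of its gradient.
    return (image + epsilon * image.grad.sign()).clamp(0, 1).detach()
```

A perturbation this small is typically imperceptible to a human viewer, which is what makes the building-to-triceratops result so striking.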
Investment in AI goes beyond private interest. To describe the breadth of AI usage across the world, we must look to open-source intelligence (OSINT): intelligence produced by analyzing publicly available information such as public records, social media, news media, websites, and the dark web [8]. As of 2023, “OSINT is estimated to make up between [80-90%] of all intelligence activities in many countries… it has an immense impact on how the intelligence community gathers, processes, and uses data retrieved and exploited from a wide range of sources” [1]. While OSINT is useful for intelligence gathering, it is also critical to consider the people contributing to its datasets. Around 46% of American adults say they interact with AI one to three times a day [9]. Security policy must therefore include considerations for increasing digital literacy and safe practices. On election interference, deepfakes, and financial crime, the American public remains relatively uneducated, despite holding an increasingly negative perception of AI and using it daily [10]. While it is difficult to determine the exact extent to which AI is being utilized in security operations, it is nevertheless important to pay closer attention to how humans interact with AI in security contexts, and to how that interaction can go wrong.
The first issue, miscalculation, can be broadly defined as complications involving human oversight of data handling and over-reliance on technology, whose uses and applications have the potential to jeopardize diplomacy and international communications. Particular risk lies in systems that use AI as a forecasting tool and in multimodal models used for intelligence purposes and analysis. One of the main directives in the field of AI and HMI is a concept called trust calibration: the level of trust a person places in AI’s capabilities. Or, more simply put: judgment calls. UNIDIR observed in 2023 “[that] because in many situations of high stress humans may tend to trust AI systems more, the risks of over-trusting these applications are higher. Such risks may in fact be exacerbated by seemingly ‘faultless visualization’ provided through state-of-the-art applications” [1]. This perception can be influenced, or reinforced, by many factors, including training protocols, system design, and user interface. One example from UNIDIR recalls how, in the early 2000s, the US Army’s Patriot long-range air defense missile system had persistent issues with its automated ‘Identification Friend or Foe’ system, resulting in several incidents of fratricide. The report attributes the system’s failures to operator overconfidence in the system and poorly designed interfaces [1]. Miscalculation and human error are inherent to every process from defense to grocery checkouts, but when they enter global security, the need for effective policy decisions becomes dire.
In analyzing the underlying processes that drive geopolitical escalation, it is important to recognize that its foundations rest on fluid psychological and perceptual factors. AI has the potential to escalate tensions and responses through intentional, unintentional, or inadvertent decisions. The issue escalation poses rests in the inconsistency of HMI with this burgeoning technology. In a convenience-sample survey examining the opinions and judgments of officers at the US Army War College and US Naval War College, researchers found that “officers consistently prefer human control of AI to either identify nuanced patterns in enemy activity, generate military options to present an adversary with multiple dilemmas, or help sustain war-fighting readiness during protracted conflict” [11]. This preference for human control holds even over AI’s capabilities in strategic decision-making with machine oversight. However, there is a growing misalignment between officers’ support for AI’s integration into military capabilities and their trust in it. The officers indicated that while they support AI’s integration because it shortens the adversary’s reaction time to their advantage, their individual trust in the projected forms of warfare may run against their personal beliefs and attitudes. This presents a singular issue for officers and their chain of command. Further, national policy for senior Pentagon leadership at the Department of War indicates a major push from the executive to make the department “AI first” [12]. As for the extent to which rank-and-file soldiers interact with AI, it is largely anticipated that AI will be integrated to speed up mission analyses and briefings, leaving commanders with the ultimate decision-making authority [13]. Anticipated issues with AI in mission analyses and briefings include its inflexibility in adapting to circumstances, which could produce faulty recommendations; its unexplainable decision-making processes; and its introduction and amplification of biases [13]. While the military is primed to train officers against fluid psychological and perceptual factors, much remains unknown about the effects of AI, and of officers’ opinions of the technology, on such a vast scale.
Whereas inconsistency in HMI presents potential issues, the third issue, the proliferation of AI, already exists to a certain extent. Among governments and investors across the globe, there is undoubtedly a race afoot to develop the fastest, most capable, and most comprehensive AI models. With regard to biosecurity and bioweapons, large language models (LLMs), often used to summarize, generate, and predict content, are being used to search for new drug molecules, for both benefit and detriment [1]. In cyberspace, AI has the capability to dramatically increase the scope and scale of cyberthreats; most of this risk comes from the possibility of virtually anyone prompting an LLM to generate malware and harmful code. Finally, proliferation extends to AI’s capabilities in autonomous weaponry. Much of this concern derives from the speed and lethality of autonomous weapons, combined with the low-cost production and deployment currently on display in active conflicts in Gaza, Ukraine, and Iran.
Although international governance and standards may at first seem highly improbable to implement, they are nonetheless worth investing resources in. One model proposed by Harvard’s Allen Lab for Democracy Renovation suggests using the Financial Action Task Force, an intergovernmental organization backed by G7 countries, to coordinate financial integrity and compliance between governments and the private sector [14]. Additionally, a more targeted approach is highly recommended to address and educate the public. There is also evidence of the public’s hesitation about further integrating AI into the United States’ military capabilities: while around 60% of the population expressed support for AI’s assistance in identifying suspects of a crime, only 27% believe it should be capable of making decisions about how to govern a country [10]. Increasing overall transparency will prove critical in striking a balance between enhancing AI capabilities and enforcing safe standards. One thing is certain: AI will not recede into the background any time soon. There has never been a protracted period in history when humankind has willingly and successfully turned its back on a technology. Policymakers should look toward solutions such as market incentives where possible, public awareness, and regulatory frameworks [14].
In the meantime, we must acknowledge that private developers have little to no incentive to divert their focus to safety. Therefore, a more complex solution is called for. Under one approach supported by many proponents of regulation, AI developers would face a tax on the money invested in a model. To offset the tax, the government would offer a tax credit worth a percentage of each dollar the developer dedicates to safety research. Analyses from contributors at Foreign Affairs further support this solution, arguing that investments in safety research would delay development only in the short term, while yielding long-term payoffs such as increased public trust, smoother transitions to widespread deployment, and the economic benefits that follow [3].
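As a toy illustration of how the tax and credit would interact (all rates and dollar figures below are hypothetical, not drawn from any cited proposal):

```python
# Toy model of the tax-and-credit incentive sketched above.
# All rates and dollar amounts are hypothetical illustrations.
def net_tax(model_investment: float, safety_spend: float,
            tax_rate: float = 0.05, credit_rate: float = 0.50) -> float:
    """Tax owed on model investment, less the safety-research credit."""
    tax = tax_rate * model_investment
    credit = credit_rate * safety_spend
    return max(tax - credit, 0.0)  # the credit offsets, never refunds, the tax

# A lab investing $1B owes $50M; dedicating $100M to safety erases the bill.
print(net_tax(1_000_000_000, 0))            # 50000000.0
print(net_tax(1_000_000_000, 100_000_000))  # 0.0
```

Under this structure, every safety dollar directly reduces the developer’s tax bill, turning safety research from a pure cost into a partial rebate.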
The nineteenth-century English philosopher and jurist Jeremy Bentham once observed that the rarest of human qualities is consistency. In a world of eight billion people, with the vast majority of organizations worldwide using AI in some capacity, we cannot expect consistency. We cannot expect even the highest-level decision makers to act blindly in tandem with one another when it comes to AI policy. Even worse would be escalating competition in developing AI capabilities to maintain a stronghold. However, we can counter inconsistency with accountability and transparency. We cannot rely on old methods to predict how AI will affect our decision-making; the most responsible decision-making moving forward will have to come with clarity.
Works Cited
[1] “AI and International Security: Understanding the Risks and Paving the Path for Confidence-Building Measures.” 2023. UNIDIR. https://unidir.org/publication/ai-and-international-security-understanding-the-risks-and-paving-the-path-for-confidence-building-measures/.
[2] Kwa, Thomas, Ben West, and Joel Becker. 2025. “Measuring AI Ability to Complete Long Tasks.” METR. https://metr.org/blog/2025-03-19-measuring-ai-ability-to-complete-long-tasks/.
[3] Elbaum, Sebastian, and Sebastian Mallaby. 2026. “The AI Trilemma.” Foreign Affairs. https://www.foreignaffairs.com/united-states/ai-trilemma.
[4] “The Projected Impact of Generative AI on Future Productivity Growth.” 2025. Penn Wharton Budget Model. https://budgetmodel.wharton.upenn.edu/p/2025-09-08-the-projected-impact-of-generative-ai-on-future-productivity-growth/.
[5] Lohn, Andrew. n.d. “Hacking AI.” Center for Security and Emerging Technology (CSET). Accessed March 2, 2026. https://cset.georgetown.edu/publication/hacking-ai/.
[6] Sobowale, Julie. 2025. “Cohere Is Canada’s Biggest AI Hope. Why Is It So American?” The Walrus. https://thewalrus.ca/cohere-is-canadas-biggest-ai-hope-why-is-it-so-american/.
[7] Schechner, Sam, and Kim Mackrael. 2025. “Mistral AI Doubles Valuation to $14 Billion With ASML Investment.” Wall Street Journal. https://www.wsj.com/tech/ai/asml-to-invest-1-5-billion-in-french-startup-mistral-ai-0d5eb547.
[8] Gill, Ritu. 2023. “What Is OSINT (Open-Source Intelligence)?” SANS Institute. https://www.sans.org/blog/what-is-open-source-intelligence.
[9] Kennedy, Brian, Eileen Yam, Emma Kikuchi, Isabelle Pula, and Javier Fuentes. 2025. “Americans’ awareness of AI and views of use in daily life, control over it.” Pew Research Center. https://www.pewresearch.org/science/2025/09/17/ai-in-americans-lives-awareness-experiences-and-attitudes/.
[10] Kennedy, Brian, Eileen Yam, Emma Kikuchi, Isabelle Pula, and Javier Fuentes. 2025. “How Americans View AI and Its Impact on People and Society.” Pew Research Center. https://www.pewresearch.org/science/2025/09/17/how-americans-view-ai-and-its-impact-on-people-and-society/.
[11] Lushenko, Paul. 2023. “AI and the Future of Warfare.” Bulletin of the Atomic Scientists. https://thebulletin.org/2023/11/ai-and-the-future-of-warfare-the-troubling-evidence-from-the-us-military/.
[12] “Artificial Intelligence Strategy for the Department of War.” 2026. Department of War. https://media.defense.gov/2026/Jan/12/2003855671/-1/-1/0/ARTIFICIAL-INTELLIGENCE-STRATEGY-FOR-THE-DEPARTMENT-OF-WAR.PDF.
[13] Adler, Jason N. n.d. “Modernizing Military Decision-Making: Integrating AI into Army Planning.” Army University Press. Accessed March 2, 2026. https://www.armyupress.army.mil/Journals/Military-Review/Online-Exclusive/2025-OLE/Modernizing-Military-Decision-Making/.
[14] Wagman, Shlomit, and Sarah Hubbard. 2025. “Weaponized AI: A New Era of Threats and How We Can Counter It.” Ash Center. https://ash.harvard.edu/articles/weaponized-ai-a-new-era-of-threats/.