The Question of Accountability and Reliability for the Use of AI in Modern Warfare without International Regulations

[Image] A conceptual depiction of AI-driven warfare highlighting the blurred lines between human control, machine autonomy, and the absence of global regulatory oversight.

On 18 March 2026, Defense Secretary Pete Hegseth, representing the government of the United States (US), defended in US court a lawsuit challenging Anthropic’s blacklisting on the national security basis of ‘supply chain risk,’ reportedly linked to a decision by President Donald Trump. Anthropic, the maker of the popular AI assistant model Claude, had on March 3 refused to remove guardrails (technical restrictions) preventing its technology from being used for autonomous weapons and domestic surveillance. This moment reflects a critical juncture in history, where artificial intelligence (AI) is increasingly integrated into weapon systems known as Lethal Autonomous Weapon Systems (LAWS) in contemporary conflicts without any comprehensive international regulatory guardrails.

The Trump administration subsequently excluded Anthropic from a limited set of military contracts, arguing that the company endangered American national security interests. This presidential decision reportedly followed Anthropic’s refusal to lift its restrictions on the military use of its AI products during the start of the Iran invasion. Despite these restrictions, reports suggest that on 28 February the US conducted a major air attack in Iran with the help of the same AI model, Claude, anyway.

On the other hand, Anthropic maintains that current AI systems are not sufficiently safe or reliable for deployment in autonomous weapons. The company emphasized its principled stance after months of disputes with the US government, a stance grounded in two non-negotiable guardrails. First, it refused to share domestic users’ data. Second, it opposed the use of AI in autonomous weapon systems without meaningful human control, as such technologies remain prone to risks concerning accountability, reliability, and predictability.

When Anthropic denied the Pentagon’s request, the government reportedly blacklisted the company. Shortly thereafter, the US allegedly used the same AI technology without the company’s consent during the initial phase of the Iran strike. This situation shows how states may leverage civilian technologies to advance national interests. Although the technology is not fully mature or reliably tested in high-risk environments, it has still been used to compete, or to test its capabilities, in conflict scenarios in pursuit of strategic superiority.

This practice can be cautiously compared to the early use of nuclear weapons in Hiroshima and Nagasaki, where emerging technologies were deployed despite limited understanding of their full consequences. Likewise, the use of emerging AI systems in conflict environments for testing or strategic advantage reflects a dangerous trend of deploying technologies still in their developmental phase. Despite its inherent risks, the technology is being utilized not merely for defence, but for power projection, military dominance, and geopolitical national interests.

The use of such AI technologies in warfare presents potential risks to humanity. These risks include accountability gaps that make it difficult to assign responsibility; reliability issues, where systems commit errors before their training is complete; unpredictability in how they will behave; miscalculation, that is, incorrect threat assessments caused by AI hallucination; conflict escalation; and a lower threshold for the use of force. The reported use of Anthropic’s model Claude in the warfare against Iran, despite explicit restrictions, is not merely coincidental; rather, it indicates a broader governance challenge in controlling AI technologies.

This case raises fundamental questions about authority and control over AI systems: should private companies have a say in determining how their technologies are used in warfare, or is there a need for comprehensive domestic and international regulatory frameworks through international platforms such as the Convention on Certain Conventional Weapons (CCW) in Geneva under the United Nations?

Interestingly, 2026 marks a notable procedural shift within the CCW. The Chair, Ambassador Robert in den Bosch of the Netherlands, has initiated direct text-based negotiations from Day 1, avoiding long general debates among High Contracting Parties (HCPs) to accelerate progress on the ‘rolling text’, a continuously updated draft negotiating document. Informal consultations held in January 2026 showed cautious optimism regarding the streamlining of the draft text. These developments also reflect ongoing efforts to narrow divergences among states, particularly on issues such as meaningful human control, algorithmic bias regulations (rules addressing errors in AI decision-making), and the scope of application of Lethal Autonomous Weapon Systems (LAWS).

At the same time, there is concern within the Group of Governmental Experts (GGE) regarding loopholes in the LAWS framework. These loopholes include ambiguities about the training and testing phases of autonomous weapon systems, and exceptions that may allow such systems to operate with fewer restrictions during development. Such risks heighten concerns that these sophisticated weapon systems may behave unpredictably and uncontrollably in real-world scenarios. Consequently, many states and advocacy groups are calling for stronger prohibitions on inherently unpredictable systems, including explicit bans on fully autonomous anti-personnel weapons and enhanced oversight of AI supply chains, that is, monitoring how AI technologies are developed and distributed.

Therefore, there is a dire need either to ban the use of AI in autonomous drones and weapons or, at minimum, to establish a comprehensive international regulatory framework on LAWS under the UN, a process that has been ongoing for more than a decade. Technological advancements are progressing at a pace far exceeding diplomatic and regulatory efforts. The reported use of AI-enabled systems in military operations demonstrates the urgent necessity of a legally binding international instrument, particularly concerning LAWS operating without meaningful human intervention. Given the intensifying geopolitical conflicts across multiple regions, 2026 represents a critical window to finalize a concrete draft for presentation at the Seventh CCW Review Conference. However, a strong outcome remains uncertain due to the consensus-based decision-making process and the strategic interests of High Contracting Parties currently developing or deploying such technologies.

About Muhammad Ali Baig

Muhammad Ali Baig is a researcher at the Center for International Strategic Studies (CISS), Islamabad. His X (formerly Twitter) handle is @alibaig111.
