I am Iranian. I am in exile. And I am a researcher.
Those three facts are not in tension. They are the reason I am writing this.
On February 28, 2026, the United States launched Operation Epic Fury against Iran. Within ten days, 5,500 targets had been struck (CENTCOM, 2026a). Among the weapons used was LUCAS — a cheap, GPS-guided, one-way attack drone built by an Arizona startup called SpektreWorks. LUCAS was reverse-engineered from Iran’s Shahed-136: the very drone the US Treasury Department had sanctioned Iran for supplying to Russia, explicitly citing its GPS-only guidance as a threat to civilian life (US Treasury, 2024).
The country that manufactured the original weapon is my country. The country that copied it, then used it on my country, is the one that spent three years calling the original a war crime instrument. And the AI company that refused to let its technology be used to authorize the kills was blacklisted within hours — while its competitor signed a Pentagon deal the same afternoon (NPR, 2026a).
I am not writing this to defend the Islamic Republic. I have no interest in defending it. I am writing this because what happened on February 28, 2026, raises questions that belong to everyone — engineers, lawyers, entrepreneurs, academics, children, and anyone who uses an AI tool today — and nobody is asking them clearly enough.
The Terminator Question
Let me be precise about what LUCAS is, because the word “autonomous” gets used loosely.
LUCAS operates without a human operator guiding each individual strike. Once launched, it navigates autonomously, coordinates with other drones in real-time swarms, and executes without a data link back to a human controller. The human decision — approve this target list — is made once, upstream, at machine speed. After that, the machine decides (Sotoudehfar, Sarkin & Chaari, 2026).
This is not the first autonomous weapon deployed in combat. That distinction belongs to Turkey’s Kargu-2, documented by a UN Panel of Experts as operating in Libya in 2020 with no data link to an operator — a “fire, forget and find” system (UN Panel of Experts on Libya, S/2021/229, p. 17). But what changed on February 28 was actor, scale, and institutionalization. The world’s pre-eminent military power deployed an AI-augmented autonomous swarm at an industrial scale against a nation-state — and is now making the underlying system a permanent fixture of American warfare (Washington Post, 2026b).
The Terminator is the right cultural reference. Not because LUCAS is sentient, but because the franchise encodes exactly the right fear: machines making lethal decisions at machine speed, with human authorization somewhere upstream and accountability nowhere in particular. Skynet did not need to become self-aware. It only needed to become a Program of Record.
Here is what most coverage misses. DoD Directive 3000.09 — the US policy on autonomous weapons, updated in 2023 — does not prohibit autonomous weapons. It does not require a human-in-the-loop. As Michael Horowitz, one of the directive’s authors, clarified in May 2025: those words do not appear in the document (Horowitz, 2025). What the directive requires is “appropriate human judgment” before deployment. Human judgment was applied once, approving the mission. What happened after launch was the machine’s business. That is not a loophole. That is the policy.
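To make that distinction concrete, here is a minimal sketch of the two control models in illustrative Python. Every name in it is invented; it describes the shape of the policy, not any real system.

```python
from dataclasses import dataclass

@dataclass
class Mission:
    """Illustrative only: no field here corresponds to a real system."""
    target_list: list[str]        # coordinates approved before launch
    human_approved: bool = False  # "appropriate human judgment", exercised once

def ask_operator(prompt: str) -> bool:
    """Stand-in for a human console; a real loop would block here."""
    return input(f"{prompt} [y/N] ").strip().lower() == "y"

def human_in_the_loop(mission: Mission, target: str) -> bool:
    # The model most readers assume the directive mandates:
    # a person confirms every individual engagement.
    return ask_operator(f"Engage {target}?")

def appropriate_human_judgment(mission: Mission, target: str) -> bool:
    # The model the directive actually describes: judgment was applied
    # once, upstream, when the list was approved. After launch this
    # check passes at machine speed, with no human anywhere in it.
    return mission.human_approved and target in mission.target_list
```

The second function is the whole controversy in one line: the human check is real, but nothing in it happens at strike time.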
Shahed: The Original Weapon
Between September 2022 and the end of 2024, Russia launched thousands of Iranian-made Shahed-136 drones at Ukraine. Jensen and Atalan documented that the campaign “lasted longer than the infamous Blitz aerial bombing of London during World War II,” with Russia absorbing interception rates above 75% — the drones were cheap enough to sustain the losses (Jensen & Atalan, 2025, p. 3).
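The arithmetic behind “cheap enough to sustain the losses” is worth making explicit. The dollar figures below are my own illustrative assumptions, not sourced procurement costs; the only documented input is the interception rate, and the point is the ratio, not the numbers.

```python
# Back-of-envelope saturation economics. Dollar figures are illustrative
# assumptions, not sourced costs; the documented input is the >75%
# interception rate (Jensen & Atalan, 2025).
DRONE_COST = 50_000          # assumed unit cost of a Shahed-class drone, USD
INTERCEPTOR_COST = 500_000   # assumed unit cost of a defending missile, USD
INTERCEPT_RATE = 0.75

# Attacker: expected drones expended per drone that reaches a target.
drones_per_arrival = 1 / (1 - INTERCEPT_RATE)                 # 4.0
attacker_cost_per_arrival = DRONE_COST * drones_per_arrival   # $200,000

# Defender: assume one interceptor fired per drone shot down.
defender_cost_per_incoming = INTERCEPTOR_COST * INTERCEPT_RATE  # $375,000

print(f"attacker pays ~${attacker_cost_per_arrival:,.0f} per drone that arrives")
print(f"defender pays ~${defender_cost_per_incoming:,.0f} per drone launched at it")
```

Under these assumptions the defender spends more stopping each wave than the attacker spends launching it, which is why a 75% interception rate is a sustainable loss rather than a defeat.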
The targets were not military. The strategy was a “punishment campaign” aimed at civilian pressure points: power grids, heating infrastructure, water systems (Jensen & Atalan, 2025, p. 5). Ukraine’s population was the target, not its army.
The US response was explicit. Treasury sanctioned Iran’s supply network. State Department officials identified the drone’s GPS-only guidance as the specific legal problem — a weapon incapable of distinguishing between civilian and military objects at the point of impact (US Treasury, 2024). By 2024, Iran was running Shahed production lines inside Russia under a $1.75 billion agreement (ISIS Online, 2024). The weapon reached Houthi forces in Yemen, Sudan’s RSF, and over a dozen non-state actors. It is now everywhere (ISW, 2025).
Then SpektreWorks reverse-engineered it, renamed it LUCAS, and sold it to the Pentagon. The development ran from concept to combat deployment in roughly eighteen months. The Stimson Center confirmed that acquisition speed was the primary metric; legal review was treated as a secondary consideration to avoid bottlenecks (Stimson Center, 2025).
LUCAS: The Same Weapon
LUCAS shares the Shahed’s defining legal characteristic: GPS-primary guidance with no confirmed terminal discrimination capability. It navigates to a coordinate. Whatever is at that coordinate on arrival is what it strikes.
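Reduced to its logical skeleton, the distinction looks like the sketch below. It is schematic, every name is invented, and it describes the structure of the argument rather than any actual flight software.

```python
from dataclasses import dataclass

@dataclass
class StrikePlan:
    """Schematic record of a planning-time targeting decision."""
    lat: float
    lon: float
    classified_as: str  # what planners believed occupied this coordinate

def gps_primary_decision(_plan: StrikePlan) -> bool:
    # The Shahed/LUCAS-style architecture: the only question the weapon
    # can answer is "am I at the coordinate yet?" It carries no sensor
    # logic that could revise the planning-time classification.
    return True

def discriminating_decision(plan: StrikePlan, observed_on_arrival: str) -> bool:
    # What discrimination at the point of impact would require: confirm
    # that what is actually there still matches the classification,
    # with abort as the default when it does not.
    return observed_on_arrival == plan.classified_as
```

The first function ignores its input entirely. That is the point.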
The US argued, explicitly, that this architecture was the Shahed’s legal problem. That argument applies identically to LUCAS. Aiming at a military target does not make a GPS weapon IHL-compliant. The laws of war require discrimination at the point of impact, not only at the point of planning. This is not a contested principle. It is in the US military’s own law of war manual.
The target-type difference is morally real: Russia aimed at civilians; the US aimed at military installations. But moral difference is not the same as legal compliance. The IHL problem is the architecture. The US copied the architecture.
Either both weapons are legal under IHL, or neither is. That is not a rhetorical flourish. It is a genuine dilemma no official has yet addressed.
The ICRC addressed it directly in March 2026 — while the strikes were still ongoing. Its language was unambiguous: “Weapons that cannot be reliably directed at specific military objectives or whose effects cannot be limited as required by IHL are prohibited” (ICRC, 2026, p. 4). It added that IHL rules “presuppose context-specific human judgement” that “cannot be delegated to machine processes” (ICRC, 2026, p. 7). The ICRC called for a binding international treaty. One hundred and twenty-nine countries had endorsed that call; the US and UK were among the twelve that opposed it when the vote came in January 2026, six weeks before Operation Epic Fury launched (UNGA, A/RES/79/62, 2024).
The School
Maven Smart System does not work the way science fiction imagines. Its core is computer vision and sensor fusion — the same class of technology that recognises faces in photographs — applied to satellite imagery and drone feeds. Anthropic’s Claude was added later as a natural language search layer: it lets analysts query intelligence reports in plain English. It does not select targets (Guardian, 2026). What selects targets is a database.
That database had not been updated in approximately a decade.
On February 28, a Tomahawk cruise missile — drawn from the same Maven-generated target list as LUCAS — struck the Shajareh Tayyebeh school in Minab, a southern Iranian town. The school was triple-tapped: three distinct strikes on the same building in sequence, collapsing the roof onto the students inside. At least 175 people were killed. Over 100 were schoolchildren (Wikipedia, 2026; Amnesty International, 2026).
The building was listed in the DIA database as a military facility. It had been converted to a school between 2013 and 2016, after the adjacent IRGC compound was relocated. Nobody updated the entry. CENTCOM Admiral Cooper confirmed the system was designed to “compress days or hours of work into seconds” (Washington Post, 2026a). The database was a decade out of date. The system processed it at machine speed. Three missiles followed.
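The failure mode is mundane enough to write down. The sketch below is hypothetical (every field name and threshold is invented), but it shows the class of safeguard whose absence the Minab record implies: a pipeline that validates what an entry says, never when it was last verified.

```python
from datetime import date

# Hypothetical facility record; field names are invented for illustration.
# The last_verified date reflects the account above: an entry roughly a
# decade old by February 2026.
record = {
    "name": "compound, Minab",
    "category": "military",
    "last_verified": date(2016, 1, 1),
}

def valid_target_as_deployed(rec: dict) -> bool:
    # A category-only check: trusts the label, at machine speed.
    return rec["category"] == "military"

def valid_target_with_freshness(rec: dict, today: date,
                                max_age_days: int = 365) -> bool:
    # The missing safeguard: stale entries get routed to a human for
    # re-verification instead of flowing straight into a strike list.
    is_fresh = (today - rec["last_verified"]).days <= max_age_days
    return rec["category"] == "military" and is_fresh

print(valid_target_as_deployed(record))                        # True
print(valid_target_with_freshness(record, date(2026, 2, 28)))  # False
```

Nothing in the second function is exotic. It is the staleness check any competent data pipeline applies to records far less consequential than a strike list.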
Trump’s response, at Dover AFB on March 5: “In my opinion, based on what I’ve seen, that was done by Iran” (New York Times, 2026a). A US military investigation subsequently confirmed American responsibility (CENTCOM, 2026b; New York Times, 2026b).
Three strikes on the same building is not a navigation error. It is a targeting system told three times that a primary school was a valid military object — and wrong three times. Just Security’s legal analysis framed the precise question: not whether AI was used, but whether the belief that the building was a legitimate target “was reasonable given available intelligence” (Just Security, 2026). A decade-old database, unverified, processed at machine speed, is the answer. Human Rights Watch had named this failure mode in April 2025, before the strikes happened: autonomous weapons systems create a structural “accountability gap” because “there are obstacles to holding individual operators criminally liable for the unpredictable actions of a machine they cannot understand” (HRW, 2025). Minab is not an edge case. It is the proof of concept.
The Company That Said No
The day before Operation Epic Fury launched, a parallel confrontation was already underway.
The Pentagon demanded Anthropic grant “unrestricted access” to Claude for “all lawful purposes” (Breaking Defense, 2026a). Anthropic CEO Dario Amodei refused. He drew two explicit red lines: no mass surveillance of Americans, no fully autonomous weapons without human oversight. “We cannot in good conscience accede to this request,” he wrote publicly (NPR, 2026a).
The Trump administration’s response was swift. Federal agencies were ordered to stop using Anthropic products. Defence Secretary Hegseth designated Anthropic a “supply chain risk” — a label previously reserved for US adversaries, never before applied to an American company (NPR, 2026b). DoD Undersecretary Emil Michael accused Amodei of a “God-complex” and said it was “not democratic” for a private company to limit military use of its own technology (Breaking Defense, 2026b). Within hours, OpenAI signed a Pentagon deal. Claude was replaced by ChatGPT.
This was not a surprise. OpenAI had quietly deleted its prohibition on military use in January 2024 (The Intercept, 2024). Google removed its weapons pledge in February 2025 (Washington Post, 2025). By the time Anthropic drew its red lines, the rest of the industry had already crossed them without announcement. Anthropic is now suing the Trump administration. A federal judge has indicated the blacklisting looks like retaliation for ethical refusal (NPR, 2026c).
The dispute reveals the structural question the legal void leaves unanswered: when no international framework governs autonomous AI weapons, who sets the limits? On February 28, the answer was the entity with the most to lose from having them set at all.
Why This Is Your Problem Too
The AI at the centre of this story is not a military secret. Palantir’s Maven runs on the same computer vision architecture that powers enterprise analytics platforms. Anthropic’s Claude is used in legal research, medical documentation, and financial analysis. OpenAI’s models are embedded in tools engineers and entrepreneurs use every day. The supply chain connecting Silicon Valley to autonomous weapons is not a metaphor. It is a business relationship — and it has already produced a mass-casualty event at a primary school.
LUCAS will proliferate. The Shahed already has — to Yemen, Sudan, and more than a dozen non-state actors, supplied by the same network that supplied Russia (ISW, 2025; ISIS Online, 2024). Cheap, GPS-guided, autonomous swarm drones will follow the same curve. Several US adversaries are producing comparable systems now (Belfer Center, 2025). By deploying LUCAS, the US has closed the diplomatic pathway that might have constrained this proliferation. Washington now has every institutional incentive to prevent the international frameworks that would expose its own strikes to legal scrutiny.
Maven is a Program of Record (Washington Post, 2026b). LUCAS is in production. The legal void is being built into the infrastructure of modern warfare — one procurement decision at a time.
Amodei asked the right question the day before the bombs fell: who sets the limits on AI in war? I am asking the same question — as a researcher, and as someone whose country was on both ends of this weapon’s journey.
I do not yet have the answer. I am not sure anyone does. But the question cannot wait for the scholarship to catch up to the kill chain.
Bibliography
Primary Sources and Official Documents
CENTCOM (2026a). Operation Epic Fury: Initial Strike Assessment. US Central Command Public Affairs, March 2, 2026.
CENTCOM (2026b). Statement on Minab Strike Investigation. US Central Command Public Affairs, March 11, 2026.
International Committee of the Red Cross [ICRC] (2026). Autonomous Weapon Systems and International Humanitarian Law: Selected Issues. Position paper, March 3, 2026. https://www.icrc.org/en/article/autonomous-weapon-systems-and-international-humanitarian-law-selected-issues
United Nations General Assembly (2024). Lethal Autonomous Weapons Systems. Resolution A/RES/79/62, December 2, 2024. https://docs.un.org/en/A/RES/79/62
United Nations Panel of Experts on Libya (2021). Final Report of the Panel of Experts on Libya Established Pursuant to Resolution 1973 (2011). S/2021/229, March 8, 2021. https://www.undocs.org/S/2021/229
US Department of Defense (2023). Directive 3000.09: Autonomy in Weapon Systems. Updated November 25, 2023.
US Treasury Department (2024). Treasury Sanctions Iran for Supplying Shahed Drones to Russia. Press release, February 23, 2024.
Think Tank and Institutional Reports
Belfer Center for Science and International Affairs (2025). Autonomous Weapons and Strategic Stability. Harvard Kennedy School, December 2025.
Human Rights Watch [HRW] (2025). A Hazard to Human Rights: Autonomous Weapons Systems and Digital Decision-Making. April 28, 2025. https://www.hrw.org/report/2025/04/28/a-hazard-to-human-rights/autonomous-weapons-systems-and-digital-decision-making
ISIS Online (2024). Iran–Russia Shahed Production Agreement: Documentation and Analysis. May 2024.
Institute for the Study of War [ISW] (2025). Shahed Drone Proliferation: Non-State Actor Acquisition Timelines. November 2025.
Jensen, B. & Atalan, Y. (2025). Drone Saturation: Russia’s Shahed Campaign. CSIS Brief, Center for Strategic and International Studies, May 13, 2025. https://www.csis.org/analysis/drone-saturation-russias-shahed-campaign
Stimson Center (2025). Speed vs. Oversight: Legal Review in Accelerated Defense Procurement. November 2025.
Legal and Policy Analysis
Horowitz, M. C. (2025). Autonomous weapon systems: No human-in-the-loop required, and other myths dispelled. War on the Rocks, May 22, 2025. https://warontherocks.com/2025/05/autonomous-weapon-systems-no-human-in-the-loop-required-and-other-myths-dispelled/
Just Security (2026). When intelligence fails: A legal targeting analysis of the Minab school strike. Just Security, March 26, 2026. https://www.justsecurity.org/134350/legal-analysis-minab-school-strike/
Sotoudehfar, A., Sarkin, J., & Chaari, L. (2026). LUCAS in the legal void: Urgency, necessity, and a research agenda for the world’s first combat-deployed reverse-engineered drone. Defense & Security Analysis.
Journalism and Investigative Reporting
Amnesty International (2026). Iran: US strike on Minab school requires independent investigation. Amnesty International, March 23, 2026.
Baker, K. T. (2026). AI got the blame for the Iran school bombing. The truth is far more worrying. The Guardian, March 26, 2026. https://www.theguardian.com/news/2026/mar/26/ai-got-the-blame-for-the-iran-school-bombing-the-truth-is-far-more-worrying
Biddle, S. (2024). OpenAI quietly deleted its ban on using ChatGPT for “military and warfare.” The Intercept, January 12, 2024. https://theintercept.com/2024/01/12/openai-military-ban-chatgpt/
Breaking Defense (2026a). Pentagon gives Anthropic Friday deadline to loosen AI policy. Breaking Defense, February 19, 2026. https://breakingdefense.com/2026/02/pentagon-gives-anthropic-friday-deadline-to-loosen-ai-policy/
Breaking Defense (2026b). DoD undersecretary Michael: Anthropic’s refusal is “not democratic.” Breaking Defense, February 26, 2026.
Carchidi, V. (2026). AI in Iran: It’s not just about capabilities. Defense Security Monitor, March 27, 2026. https://dsm.forecastinternational.com/2026/03/27/ai-in-iran-its-not-just-about-capabilities/
Human Rights Watch (2026). US/Iran: Investigate Minab school attack as a potential war crime. HRW, March 7, 2026. https://www.hrw.org/news/2026/03/07/us-iran-investigate-minab-school-attack
New York Times (2026a). Trump says Minab school strike was “done by Iran.” New York Times, March 5, 2026.
New York Times (2026b). US investigation confirms American missile struck Minab school. New York Times, March 11, 2026.
NPR (2026a). Anthropic sues the Trump administration over “supply chain risk” label. NPR, March 9, 2026. https://www.npr.org/2026/03/09/nx-s1-5742548/anthropic-pentagon-lawsuit-amodai-hegseth
NPR (2026b). Pentagon labels AI company Anthropic a supply chain risk. NPR, March 6, 2026. https://www.npr.org/2026/03/06/g-s1-112713/pentagon-labels-ai-company-anthropic-a-supply-chain-risk
NPR (2026c). Judge says government’s Anthropic ban looks like punishment. NPR, March 24, 2026. https://www.npr.org/2026/03/24/nx-s1-5759276/judge-says-governments-anthropic-ban-looks-like-punishment
Washington Post (2025). Google drops pledge not to use AI for weapons. Washington Post, February 4, 2025.
Washington Post (2026a). AI compressed hours of targeting work into seconds in Iran strikes, general says. Washington Post, March 4, 2026.
Washington Post (2026b). Pentagon’s Maven AI system set to become permanent program of record. Washington Post, March 11, 2026.
Wikipedia (2026). 2026 Minab school attack. Accessed March 2026. https://en.wikipedia.org/wiki/2026_Minab_school_attack