
When Artificial Intelligence Enters the Battlefield

A rationalization of war

Commander Brad Cooper, in charge of the American war effort in Iran, praised the benefits of using artificial intelligence (AI) in Operation Epic Fury, a joint US-Israeli operation in Iran. While Cooper did not specify which “intelligent” systems the Pentagon used, the Washington Post revealed that the Americans relied on a mission-control system called Maven.

Developed by Peter Thiel’s company Palantir, Maven acts as the brain behind American strikes. Its strength lies in handling the entire “kill chain,” from target identification through legal approval to the initiation of the strike. By analyzing data from satellites, drones, human intelligence, and intercepted communications, the platform enabled the US to strike 1,000 targets within the first 24 hours of the conflict.

According to US Central Command, these strikes mark the first large-scale use of the technology, following earlier deployments in Iraq, Syria, and Yemen.

Gaza as a laboratory

Reports indicate that Israel has used AI tools since October 7, 2023, to conduct airstrikes in Gaza. The Israeli military relies on tools such as Gospel to identify targets, Where’s Daddy? to track targets to their homes, and Fire Factory to assess ammunition stocks and assign targets. In addition, Israel has identified up to 37,000 targets linked to Hamas and Palestinian Islamic Jihad through an AI tool named Lavender.

An investigation by the Guardian suggests that the decision-making logic behind these tools remains opaque, with operators possibly approving AI-generated proposals mechanically, leaving little room for human verification during operations.

Moreover, these AI tools operate within a statistical framework that tolerates a certain number of civilian casualties depending on the military objective pursued. This could explain the high number of civilian casualties in the Palestinian enclave.

Ethical and legal questions

Professor Olivier Sibony highlighted ethical concerns surrounding the recent bombing of a school in Iran, which resulted in 150 casualties on February 28. Initial military investigations identified a database update issue as a possible cause.

Sibony emphasized the ethical gravity of any decision to deploy lethal weapons, stressing the need for human oversight. He also raised the legal question of accountability in scenarios where AI plays a significant role in decision-making.

Sibony’s colleague Eric Hazan likewise stressed the importance of keeping humans in the decision-making loop, urging that military doctrines be updated to address the rise of AI in warfare.