War in the Middle East: what does international law say about the use of artificial intelligence in conflict?

The armed conflict against Iran launched on February 28th by Washington and Tel Aviv was quickly labeled the “first AI war.” The claim is misleading in several respects. Not only has AI been used extensively in recent conflicts, including by Israel in Gaza, but AI, understood as a digital means of processing and analyzing data, has a history with armed conflict stretching back to World War II.

The Iranian case stands out for the unprecedented sophistication of these systems and the degree to which the armed forces relied on them, and it differs from the conflict in Gaza in that AI was this time deployed against a state adversary in a high-intensity war. It is also the first time states have openly communicated about their use of such systems. It is this communication, combined with the dramatic consequences of some strikes, that raises questions about the compatibility of these practices with international law.

The Use of AI in the War in Iran

The use of AI by Israel in its war against Hamas was revealed by the magazine +972. In the conflict with Iran, by contrast, it was the American authorities themselves who announced their use of AI.

The US military acknowledged using AI systems to draw up and sort a list of targets at lightning speed, a process that led to more than 1,000 strikes within the first twenty-four hours of the conflict. It notably relied on the Maven Smart System, a joint project combining surveillance and data-collection AI software from Palantir with the generative AI system Claude, developed by Anthropic.

On the first day of the war, however, an American strike hit a school in Minab, killing around 170 civilians, most of them children. The United States acknowledged its responsibility for the strike, presenting it as an error: outdated information had led to the strike being authorized.

Such a mistake is significant: many media outlets and NGOs quickly established the link between the school and the naval base, suggesting that the American military had targeted the building on the basis of outdated data, blindly following the recommendation of an AI system without carrying out the necessary verification.

The Legality of AI Use

Are such strikes, and the errors committed in carrying them out, lawful under international law?

AI is not specifically prohibited by the law of armed conflict (also known as international humanitarian law), but the general rules governing the conduct of hostilities apply regardless of the means and methods deployed. One of these rules is the principle of distinction: only military objectives may be attacked, while civilian persons and objects must be protected. Directly targeting a school such as the one in Minab, in the absence of any military objective inside it, constitutes a clear violation of this principle.

Beyond the lawfulness of individual strikes, responsibility must also be examined: that of the individuals involved, of the companies specializing in AI, and of the states that develop and use military AI. This raises questions of accountability and ethics, as well as the need for international regulation of the use of AI in warfare.

The risks associated with AI in military operations underscore the potential need for legal frameworks capable of regulating its use effectively. Whether the tragic strike in Minab will prompt states to adopt specific legal rules for military AI, and whether such a wake-up call is still possible, or even desirable, in the current geopolitical landscape, remains an open question.