
In the United States, a bill supported by OpenAI could exempt AI giants from any civil liability

OpenAI has endorsed a bill under debate in the state of Illinois. The text aims to shield AI laboratories from civil liability when their models are involved in major disasters. The stance reopens the debate over how to regulate a still largely unregulated sector.

Code name: SB 3444. Behind this obscure legislative reference lies a bill under debate in Illinois that could profoundly redefine the liability regime applicable to artificial intelligence laboratories, according to Wired.

Backed by OpenAI, the bill aims to exempt developers of “cutting-edge” AI models from legal liability in cases of “critical harm,” under certain strict conditions.

Vague Definitions

The key question is what lies behind these very vague terms. According to the text, only extreme scenarios are covered: cases involving the death or serious injury of at least one hundred people, or material damage estimated at more than one billion dollars. The bill also lists several risks flagged by the AI sector, including the malicious use of these systems to design chemical, biological, radiological, or nuclear weapons.

Thus, if an AI model autonomously adopts behavior that, if done by a human, would constitute a criminal offense and lead to such extreme consequences, it could fall under the category of “critical harm.”

In these situations, companies could escape all liability… provided they did not act intentionally or negligently and have published safety and transparency reports. In other words, the text does not eliminate every obligation, but it creates a framework of limited immunity for the most catastrophic cases. Very reassuring.

The definition of the models concerned is also very broad. Any system whose training required more than 100 million dollars in compute would be considered “cutting-edge.” That means most of the sector’s major players, such as OpenAI, Google, Anthropic, Meta, and xAI, would fall within the scope of the law. Quite practical…

OpenAI’s Explicit Support

On this sensitive issue, OpenAI has stepped out of its usual restraint. The company has officially expressed support for the text, taking a more assertive stance on regulatory matters.

“We support this approach as it prioritizes the essentials: reducing the risks of serious harm related to the most advanced AI systems, while allowing this technology to remain accessible,” said OpenAI spokesperson Jamie Radice in a statement.

This support marks a significant shift. Until now, OpenAI was mainly known for its defensive posture, often opposing bills that would expand the legal liability of companies in the sector. With SB 3444, the move is more nuanced: it is no longer just about resisting regulation, but about helping to define it.

The text is also an opportunity for OpenAI, which is entangled in scandals. The bill focuses solely on the most extreme cases, carefully sidelining individual harms. Last year, several families of teenagers who died by suicide filed lawsuits against the company, alleging that ChatGPT may have fostered unhealthy relationships or a form of dependency. Those cases are still ongoing and highlight risks on a more intimate scale.

Several experts interviewed by Wired also stress the unusual nature of the text, deemed more ambitious, or more permissive depending on one’s perspective, than initiatives previously backed by the laboratories themselves.

The start-up has also used the occasion to stress the need to avoid regulatory fragmentation across American states, in favor of more consistent national standards. “It also helps to avoid a patchwork of state regulations and move towards clearer and more consistent national standards,” added Jamie Radice.

Towards a Federal Framework?

During her testimony in support of the bill, Caitlin Niedermeyer, a member of OpenAI’s international affairs team, emphasized the need for harmonized regulation at the federal level. The goal? To prevent a proliferation of inconsistent local rules and to promote a single federal framework.

“At OpenAI, we believe that the guiding principle for regulating advanced technologies should be the secure deployment of the most advanced models, while preserving American leadership in innovation,” she summarized.

This position aligns with a vision widely shared in Silicon Valley. Several companies fear that an accumulation of state-level regulations could hinder innovation and weaken American competitiveness in the global race for artificial intelligence, especially against China. The view is also echoed by Donald Trump.

Although SB 3444 is a state-level safety bill, Caitlin Niedermeyer argued that such laws can be effective if they “foster harmony with federal systems.”

An Uncertain Future

For now, the bill arrives at a moment of uncertainty. Despite several initiatives by the Trump administration, discussions in Congress have so far struggled to produce results.

In the meantime, some states are moving forward independently. California and New York have already adopted their own rules, particularly to impose more transparency on companies. Illinois, on the other hand, seems to be moving in a different direction, further fragmenting the American regulatory landscape.

But the bill has sparked strong reservations. Opponents consider SB 3444 unlikely to pass in a state like Illinois, which has a history of strict technology regulation: last August, it became the first state in the country to adopt a law limiting the use of AI in mental health services. Scott Wisor, policy director of Secure AI, believes the initiative runs counter to local expectations and lacks political support.

According to him, a large majority of residents oppose the bill.

“We surveyed Illinois residents to see if they believed AI companies should be exempt from liability, and 90% of them opposed it. There is no reason existing AI companies should benefit from reduced liability,” he insisted.

Between the desire for regulation, innovation imperatives, and growing concerns about systemic risks associated with AI, the debate remains ongoing in the United States.