In peacetime, deepfake pornography and other AI-generated content remain acceptable on X, even when produced by paid creators. In "times of war", however, the platform now requires users who monetize their posts to label AI-generated content depicting armed conflict.

That is the upshot of the change to the social network's terms of use announced on March 3rd by head of product Nikita Bier. "In times of war," he writes, "access to authentic information from the field is critical. With current AI technologies, it has become trivial to create content that deceives the population."

The new policy targets a specific type of content: posts depicting armed conflict. Users who publish AI-generated videos of this kind without labeling them risk a 90-day suspension from the monetization program. Further violations "will lead to a permanent suspension of the program."

To identify the relevant content, X will rely on its Community Guidelines and on generative AI tools, "if the content contains metadata (or other signals)," according to Nikita Bier. Accounts that do not monetize their posts are not affected by the new measure.
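X has not published how this metadata check works, but one plausible "signal" is a C2PA provenance manifest, which several AI image generators embed in their output files. The sketch below is purely illustrative (it is not X's pipeline): a naive scan of a file's raw bytes for the ASCII labels that C2PA/JUMBF containers carry.

```python
# Illustrative sketch only -- X's actual detection pipeline is not public.
# C2PA provenance manifests are embedded in JPEGs inside JUMBF boxes,
# which contain recognizable ASCII labels such as "jumb" and "c2pa".
# A real detector would parse the box structure; this just scans bytes.

def has_c2pa_marker(data: bytes) -> bool:
    """Return True if the byte stream appears to contain a C2PA manifest."""
    return b"c2pa" in data or b"jumb" in data

# Synthetic examples (not real image files):
tagged_bytes = b"\xff\xd8\xff\xe0" + b"jumbc2pa manifest" + b"\xff\xd9"
plain_bytes = b"\xff\xd8\xff\xe0" + b"ordinary jpeg data" + b"\xff\xd9"

print(has_c2pa_marker(tagged_bytes))  # True
print(has_c2pa_marker(plain_bytes))   # False
```

A byte-level scan like this produces false positives and misses stripped metadata, which is presumably why Bier hedges with "(or other signals)": metadata survives only if the uploader has not re-encoded or cropped the file.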

Formally, the United States has not declared war on Iran: under American law, only Congress has that power.

Fake content depicting armed conflict has, however, been a disinformation problem for years. Since the release of the video game Arma 3 in 2013, for example, its clips have regularly been misused on social media to falsely illustrate ongoing conflicts. As in other areas of the information space, the accessibility of generative AI tools raises the risk of fake content spreading widely.

While X already watermarks images and videos created with Grok, the company had not previously required users to explicitly declare their use of AI. According to Social Media Today, X is currently testing a labeling feature for AI-generated content.