US Govt Proposes Anti-AI Porn Bill Post-Taylor Swift Scandal

US lawmakers have proposed allowing people to file lawsuits over fake explicit photos of themselves, following the controversy over indecent artificial intelligence (AI)-generated images of singer Taylor Swift.

The Disrupt Explicit Forged Images and Non-Consensual Edits (DEFIANCE) Act proposes a civil right of action that lets individuals sue for damages over nonconsensual intimate “digital forgeries” that were deliberately created or possessed for the purpose of distributing them.

The bill would grant victims the right to seek financial compensation from perpetrators.

The bill was put forward by Senate Majority Whip Dick Durbin (D-IL) along with Senators Lindsey Graham (R-SC), Amy Klobuchar (D-MN), and Josh Hawley (R-MO).

It is based on a provision in the Violence Against Women Act (VAWA) Reauthorization Act of 2022, which included a similar measure for inappropriate images not involving digital forgeries.

The sponsors said the legislation was a response to the sharply rising prevalence of digitally fabricated explicit AI images. They cited Swift’s case as an example of how such AI-generated fakes can be used to exploit and harass women, particularly public figures.

Deepfake technology, a term coined in 2017, has advanced, enabling the widespread creation of sophisticated AI-manipulated indecent images.

Readily available generative AI tools have made such content easier to produce, bypassing platform safeguards against inappropriate content and impersonation, and they have been used for harassment and blackmail.

However, many parts of the US lack clear legal redress. Most states have enacted laws prohibiting nonconsensual explicit content involving real images, although progress has been slow, and fewer states have imposed rules concerning simulated imagery.

The move is part of President Joe Biden’s AI regulation agenda. White House Press Secretary Karine Jean-Pierre has urged Congress to enact new laws following the previous week’s incident involving Swift.

X Temporarily Bans Taylor Swift Searches After Deepfakes

X has temporarily blocked search queries for Swift after deepfaked indecent photos of the pop singer spread across the platform.

X’s head of business operations, Joe Benarroch, said the ban was a temporary measure taken to prioritize safety.

As a result, searches for Swift on the platform return an error message: “Something went wrong. Try reloading.”

The AI-generated explicit images of Swift surfaced on X earlier this week, with some of the posts gaining millions of views and drawing the attention of US officials and the singer’s supporters.

Swift’s fans identified and flagged posts and accounts that shared the fabricated images. They also flooded the platform with authentic photos of the pop star, using the hashtag #ProtectTaylorSwift.

The White House took notice of the problem on Friday, expressing its concern about the spread of the AI-generated images.
