Will the Taylor Swift deepfake inspire Congress to finally act on AI?

By Michael Jones
Three weeks ago today, Senate Majority Leader Chuck Schumer (D-N.Y.) spoke on the Senate floor about the immediate need for Congress to pass bipartisan artificial intelligence reform.
Since the emergence of social apps like Facebook, Instagram, Snapchat, and Twitter in the late 2000s, few technologies have reshaped how Americans use the internet as rapidly as AI. And it’s been Big Tech companies that have been the main economic beneficiaries of this boom, as evidenced by their billion-dollar bottom lines. But thanks to a lack of federal regulation, social apps have also contributed to the erosion of US democracy, a highly racialized creator economy, and a national mental health crisis. Schumer is urging Congress not to fall asleep at the wheel on AI, treating it as a second chance to rein in a sprawling digital phenomenon.
“2023 was a year to remember in the world of AI, with the popularization of technologies like generative AI,” he said in the floor speech. “It is impossible to predict what 2024 will bring, so we must act and act quickly to ensure the US keeps leading the way.”
As I listened to the speech, I admit I was bearish. We’re in an election year, with the House and Senate scheduled to be out of Washington for all of August and October. As I wrote in my first column of the year, one or both chambers are scheduled to be in recess for another 12-ish weeks on top of that. This translates to five months back home for members to campaign ahead of the November general election. In other words, not a lot of time to get much done outside of the must-pass bills Congress punted at the end of 2023.
My calculus changed slightly after Taylor Swift became the victim of a graphic deepfake that captured the news cycle, amid the conservative backlash the pop icon is facing from the online right over her high-profile relationship with Travis Kelce, the Super Bowl-bound Kansas City Chiefs star.
In case you missed it, AI-generated sexually explicit images of Swift were posted to X, the app formerly known as Twitter. From there, the deepfake spread to other apps, including Facebook, Reddit, and Instagram. The images were reportedly viewed 45 million times before they were removed and searches for Swift’s name were temporarily disabled on X.
The deepfake drew widespread condemnation from tech leaders, including Microsoft CEO Satya Nadella, whose company backs OpenAI, the San Francisco-based firm that makes ChatGPT. The Rape, Abuse &amp; Incest National Network (RAINN), which runs programs to prevent sexual assault, and SAG-AFTRA—the labor union that represents 160,000 media professionals worldwide—also spoke out against the deepfake.
White House Press Secretary Karine Jean-Pierre said the administration was alarmed by the fake sexually explicit images of Swift and that social media companies have an important role to play in enforcing their own rules to prevent the spread of misinformation and nonconsensual, intimate imagery of real people.
“Sadly, though, too often, we know that lax enforcement disproportionately impacts women and they also impact girls, who are the overwhelming targets of online harassment and also abuse,” Jean-Pierre said before adding that Congress should take action to deal with this issue.
The problem, beyond the aforementioned realities of the legislative calendar, is that Congress is leading from behind.
Last April, Leader Schumer launched a major effort in partnership with Democratic Sen. Martin Heinrich of New Mexico and Republican Sens. Mike Rounds of South Dakota and Todd Young of Indiana to get ahead of AI. The goal was to ensure lawmakers had a baseline knowledge of the technology to create policies that established sensible guardrails without stifling innovation.
The bipartisan AI gang convened three all-senators briefings throughout the spring and summer on the current state of AI, where the technology is headed, and its impact on national security. Before the briefings, Schumer said he met with nearly 100 CEOs, academics, and other stakeholders. Schumer also introduced a comprehensive framework at the Center for Strategic and International Studies last June on how Congress can and should act on AI. And following the briefings, Schumer held a series of forums with leaders in business, civil rights, defense, research, labor, and the arts to discuss AI regulation.
Despite all this work, Congress has yet to pass any meaningful measures. This is partly due to a legislative process that moves at a glacial pace by default. It’s an institution optimized for lawmakers to make policy in response to crises. But the politics of an issue can be just the catalyst that inspires a sense of urgency.
And that’s where we seem to be on artificial intelligence after the Swift fallout.
Since the deepfake, Rep. Joe Morelle (D-N.Y.) has been making the rounds promoting a bill he introduced in 2022 that would make it a crime to intentionally disclose a digital image, altered using AI or similar technology, of a person engaging in sexually explicit conduct. On Tuesday, a bipartisan group of senators introduced a bill that would empower victims to sue the creators of a digital forgery, or those who shared a nonconsensual, sexually explicit deepfake with reckless disregard.
Meanwhile, President Biden is doing what he can with his relatively limited power.
The White House on Monday announced eight key actions it has taken since President Biden signed an executive order three months ago to manage AI’s safety and security risks and harness the technology for good.
Now it’s up to members of Congress to get in on the action and protect the many young women and girls without celebrity platforms from experiencing the same harm as Swift.
It would be Sad Beautiful Tragic if they didn’t.
Michael Jones is an independent Capitol Hill correspondent and contributor for COURIER. He is the author of Once Upon a Hill, a newsletter about Congressional politics.