In April 2025, the U.S. House of Representatives passed the TAKE IT DOWN Act, a bipartisan effort to curb the growing threat of AI-generated deepfake pornography.
Officially titled the Tools to Address Known Exploitation by Immobilizing Technological Deepfakes on Websites and Networks Act, it’s poised to reshape how the U.S. tackles nonconsensual intimate imagery (NCII).
Ronin Legal takes a closer look at the legislation and at how it improves on the existing law in this area.
The Act’s Origins and Passage
Introduced as S. 146 on January 16, 2025, by Senators Ted Cruz (R-Texas) and Amy Klobuchar (D-Minn.), with cosponsors including Shelley Moore Capito and Richard Blumenthal, the Act passed the Senate unanimously in February 2025.
The House followed on April 28, 2025, with a 409-2 vote under suspension of the rules, showing rare bipartisan unity.
Current Status and Next Steps
As of April 30, 2025, the TAKE IT DOWN Act has passed both the Senate and House and awaits President Trump’s signature. Under Article I, Section 7 of the U.S. Constitution, the President has 10 days (excluding Sundays) to sign or veto the bill once presented; if no action is taken, it becomes law unless Congress adjourns, triggering a pocket veto.
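For concreteness, that constitutional clock can be computed mechanically. The sketch below is a minimal Python illustration; the presentment date it uses is hypothetical, since presentment happens separately from the House vote.

```python
from datetime import date, timedelta

def signing_deadline(presented: date) -> date:
    """Last day of the Article I, Section 7 review window:
    10 days, excluding Sundays, counted from the day after presentment."""
    d, remaining = presented, 10
    while remaining > 0:
        d += timedelta(days=1)
        if d.weekday() != 6:  # weekday() == 6 is Sunday; Sundays don't count
            remaining -= 1
    return d

# Hypothetical presentment date, for illustration only.
print(signing_deadline(date(2025, 4, 30)))  # 2025-05-12
```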
What the Act Says
Amending Section 223 of the Communications Act of 1934, the TAKE IT DOWN Act targets NCII, including AI-generated deepfakes. Its key provisions are:
- Criminal Penalties: Knowingly publishing NCII of adults without consent, with intent to cause harm, is punishable by fines or up to two years in prison. For minors, penalties rise to three years where the intent is to abuse or harass. Threatening to share NCII carries similar punishments.
- Platform Obligations: Covered platforms, meaning websites, online services, or apps that primarily host user-generated content, must remove offending material within 48 hours of a valid notice from the victim or their representative (see the sketch after this list). This reaches platforms like X or YouTube, but the Act excludes services such as email providers and those offering primarily preselected content.
- FTC Enforcement: The FTC treats platform non-compliance as an unfair or deceptive act or practice, subject to regulatory action, giving the takedown mandate real enforcement teeth.
- Exceptions: Disclosures for law enforcement, legal proceedings, medical purposes, or the public interest (e.g., journalism) are exempt, balancing free-speech concerns.
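To make the 48-hour takedown clock concrete, here is a minimal sketch of how a covered platform might track removal deadlines for valid notices. The TakedownNotice structure and its field names are illustrative assumptions, not drawn from the Act’s text or any real platform’s API.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta, timezone

REMOVAL_WINDOW = timedelta(hours=48)  # the Act's 48-hour removal window

@dataclass
class TakedownNotice:
    """Hypothetical record of a valid NCII removal request."""
    content_url: str
    received_at: datetime
    requester: str  # victim or authorized representative
    removed_at: datetime | None = None

    @property
    def deadline(self) -> datetime:
        """Latest time the material may remain up after a valid notice."""
        return self.received_at + REMOVAL_WINDOW

    def is_overdue(self, now: datetime) -> bool:
        """True if the content is still up past the statutory deadline."""
        return self.removed_at is None and now > self.deadline

# Illustrative usage with fabricated values.
notice = TakedownNotice(
    content_url="https://example.com/post/123",
    received_at=datetime(2025, 5, 1, 9, 0, tzinfo=timezone.utc),
    requester="victim",
)
print(notice.deadline)  # 2025-05-03 09:00:00+00:00
print(notice.is_overdue(datetime(2025, 5, 4, tzinfo=timezone.utc)))  # True
```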
Why Ordinary Pornography Laws Fell Short
Traditional criminal and civil laws on pornography, along with judicial precedent, were woefully outmatched by the rise of NCII, especially as AI-driven deepfakes flooded the digital landscape. Federal obscenity laws, like 18 U.S.C. § 1465, target the commercial distribution of “obscene” material but stumble when applied to NCII, which often involves personal, non-commercial sharing that doesn’t meet the strict Miller v. California (1973) test, which requires prurient interest, patent offensiveness, and lack of serious value.
That landmark precedent, designed for 1970s print and film, fails to grapple with deepfakes that may not offend “community standards” yet ruin lives through targeted humiliation. Civil remedies, such as defamation or intentional infliction of emotional distress, are equally toothless; victims face steep hurdles proving specific intent or quantifiable harm, especially when perpetrators use anonymous platforms or AI tools to evade identification.
Before 2015, most states lacked even basic revenge-porn laws, and while 48 states had such laws on the books by 2024, many didn’t cover AI-generated NCII or failed to hold platforms accountable, leaving victims to chase shadows in court.
Building on a Patchwork: U.S. Legal Context
Prior to the TAKE IT DOWN Act, U.S. laws addressing NCII were a scattered patchwork, offering uneven protection against deepfake harms. By 2024, only around 21 states had statutes targeting deepfake NCII of adults, and definitions varied wildly: some required proof of intent, while broader NCII statutes elsewhere ignored AI-generated content altogether.
Remedies were often limited to civil lawsuits, as seen in California’s 2019 Assembly Bill 602, while criminal penalties were sparse and typically reserved for cases involving minors, like Louisiana’s Act 457.
Platforms enjoyed near-total immunity under Section 230 of the Communications Decency Act, dodging responsibility for hosting NCII and leaving victims to pursue elusive perpetrators. Enforcement hinged on overstretched local prosecutors or victim-initiated litigation, creating a system where justice was inconsistent and often unattainable.
The TAKE IT DOWN Act dismantles these barriers by establishing a federal framework that standardizes NCII protections across all states, encompassing both authentic and AI-generated images. It shifts the burden from victims to platforms by requiring content removal within 48 hours, effectively piercing Section 230’s shield for this category of content.
It also replaces patchy local enforcement with Federal Trade Commission oversight, ensuring uniform accountability and stronger deterrents against NCII distribution.
Global Perspectives: How Other Nations Tackle Deepfakes
The TAKE IT DOWN Act doesn’t exist in a vacuum—other countries have grappled with deepfake harms, offering points of comparison:
- South Korea: Since 2020, South Korean law has banned deepfakes that “cause harm to the public interest,” with penalties of up to five years in prison or fines of roughly $43,000. It’s broader than the U.S. act, covering political and social harms, but less specific on pornography unless the public-interest threshold is met.
- Indonesia: The Criminal Code and Electronic Information Law explicitly prohibit deepfake pornography, penalizing both creators and distributors. Like the U.S., it targets intimate content but lacks the platform takedown mandate, relying instead on criminal enforcement.
- European Union: The EU’s AI Act, in force since August 2024 with phased application, regulates AI systems and requires transparency for deepfake tools but does not directly criminalize pornography. It’s less victim-focused than the U.S. act, prioritizing systemic AI oversight over individual harms.
- United Kingdom: The UK’s Online Safety Act (2023) addresses harmful content and criminalizes the non-consensual sharing of intimate images, including deepfakes, though it did not initially cover their creation. Beyond those offenses, it relies on platform cooperation and detection-tech funding, contrasting with the U.S.’s direct criminal and takedown rules.
The U.S. act stands out for its victim-centric approach and platform accountability, requiring swift content removal—a step beyond South Korea’s broad penalties or the EU’s AI-focused rules. Indonesia’s laws align closely but miss the FTC’s enforcement muscle.
Conclusion
The TAKE IT DOWN Act is a pivotal response to AI’s dark side, aimed at protecting victims of NCII with federal muscle. By criminalizing nonconsensual deepfakes, mandating swift platform takedowns, and leveraging FTC oversight, it offers a robust framework, surpassing the U.S.’s prior state-by-state mess and rivaling global efforts. As it nears enactment, it sets a precedent for safeguarding dignity in a world where reality can be faked with a click.