In a rare bipartisan move, the U.S. House of Representatives passed the Take It Down Act by a vote of 409-2 on April 28, 2025. The bill is an effort to confront one of the internet's most appalling abuses: the viral spread of nonconsensual sexual imagery, including AI-generated deepfake pornography and real photos shared as revenge porn.
Author
- Sylvia Lu
Faculty Fellow and Visiting Assistant Professor of Law, University of Michigan
Now awaiting President Trump's expected signature, the bill offers victims a mechanism to force platforms to remove intimate content shared without their permission - and to hold accountable those responsible for distributing it.
As a scholar focused on AI and digital harms, I see this bill as a critical milestone. Yet it leaves troubling gaps. Without stronger protections and a more robust legal framework, the law may end up offering a promise it cannot keep. Enforcement issues and privacy blind spots could leave victims just as vulnerable.
The Take It Down Act targets "non-consensual intimate visual depictions" - a legal term that encompasses what most people call revenge porn and deepfake porn. These are sexual images or videos, often digitally manipulated or entirely fabricated, circulated online without the depicted person's consent.
The bill compels online platforms to build a user-friendly takedown process. When a victim submits a valid request, the platform must act within 48 hours. Failure to do so may trigger enforcement by the Federal Trade Commission, which can treat the violation as an unfair or deceptive act or practice. Criminal penalties also apply to those who publish the images: Offenders may be fined and face up to three years in prison if anyone under 18 is involved, and up to two years if the subject is an adult.
A growing problem
Deepfake porn is not just a niche problem. It is a metastasizing crisis. With increasingly powerful and accessible AI tools, anyone can fabricate a hyper-realistic sexual image in minutes. Public figures, ex-partners and especially minors have become regular targets. Women are disproportionately harmed.
These attacks dismantle lives. Victims of nonconsensual intimate image abuse suffer harassment, online stalking, ruined job prospects, public shaming and emotional trauma. Some are driven off the internet. Others are haunted repeatedly by resurfacing content. Once online, these images replicate uncontrollably - they don't simply disappear.
In that context, a swift and standardized takedown process can offer critical relief. The bill's 48-hour response window has the potential to reclaim a fragment of control for those whose dignity and privacy were invaded by a click. Despite its promise, however, unresolved legal and procedural gaps could hinder its effectiveness.
Blind spots and shortfalls
The bill targets only public-facing interactive platforms that primarily host user-generated content, such as social media sites. It may not reach the countless hidden private forums or encrypted peer-to-peer networks where such content often first appears. This creates a critical legal gap: When nonconsensual sexual images are shared on closed or anonymous platforms, victims may never even know - or know in time - that the content exists, much less have a chance to request its removal.
Even on platforms covered by the bill, implementation is likely to be challenging. Determining whether online content depicts the person in question, lacks consent and implicates hard-to-define privacy interests requires careful judgment. This demands legal understanding, technical expertise and time. But platforms must reach that decision within the 48-hour window.
Time, meanwhile, is a luxury victims do not have. Even with the 48-hour removal window, the content can still spread widely before it is taken down. The bill does not include meaningful incentives for platforms to detect and remove such content proactively. And it provides no deterrent strong enough to discourage most malicious creators from generating these images in the first place.
This takedown mechanism is also open to abuse. Critics warn that the bill's broad language and lack of safeguards could lead to overcensorship, potentially affecting journalistic and other legitimate content. Platforms may be flooded with a mix of legitimate and malicious takedown requests - some filed in bad faith to suppress speech or art. In response, they may resort to poorly designed, privacy-invasive automated filters that issue blanket rejections or err on the side of removing content that falls outside the scope of the law.
Without clear standards, platforms risk erring in both directions: taking down lawful content or leaving abusive content up. How - and even whether - the FTC will hold platforms accountable under the act is another open question.
Burden on the victims
The bill also places the burden of action on victims, who must locate the content, complete the paperwork, explain that it was nonconsensual, and submit personal contact information - often while still reeling from the emotional toll.
Moreover, while the bill targets both AI-generated deepfakes and revenge porn involving real images, it fails to account for the complex realities victims face. Many are trapped in unequal relationships and may have "consented" under pressure, manipulation or fear to having intimate content of them posted online. Situations like these fall outside the bill's legal framing: The bill bars consent obtained through overt threats and coercion, yet it overlooks more insidious forms of manipulation.
Even for victims who do use the takedown process, risks remain. They must submit contact information and a statement explaining that the image was nonconsensual, with no legal guarantee that this sensitive data will be protected. This exposure could invite new waves of harassment and exploitation.
Loopholes for offenders
The bill includes conditions and exceptions that could allow distributors to escape liability. If the content was shared with the subject's consent, addressed a matter of public concern, was unintentional or caused no demonstrable harm, distributors may avoid consequences under the Take It Down Act. If offenders deny causing harm, victims face an uphill battle: Emotional distress, reputational damage and career setbacks are real, but they rarely come with clear documentation or a straightforward chain of cause and effect.
Equally concerning, the bill allows exceptions for publication of such content for legitimate medical, educational or scientific purposes. Though well-intentioned, this language creates a confusing and potentially dangerous loophole. It risks becoming a shield for exploitation masquerading as research or education.
Getting ahead of the problem
The notice and takedown mechanism is fundamentally reactive. It intervenes only after the damage has begun. But deepfake pornography is designed for rapid proliferation. By the time a takedown request is filed, the content may have already been saved, reposted or embedded across dozens of sites - some hosted overseas or buried in decentralized networks. The current bill provides a system that treats the symptoms while leaving the harms to spread.
In my research on algorithmic and AI harms, I have argued that legal responses should move beyond reactive measures. I have proposed a framework that anticipates harm before it occurs - not one that merely responds after the fact. That means incentivizing platforms to take proactive steps to protect the privacy, autonomy, equality and safety of users exposed to harms caused by AI-generated images and tools. It also means broadening accountability to cover more perpetrators and platforms, supported by stronger safeguards and enforcement systems.
The Take It Down Act is a meaningful first step. But to truly protect the vulnerable, I believe that lawmakers should build stronger systems - ones that prevent harm before it happens and treat victims' privacy and dignity not as afterthoughts but as fundamental rights.
Sylvia Lu does not work for, consult, own shares in or receive funding from any company or organization that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.