Regulating the Removal of Content: Examining the Impact on Online Platforms' Liability
In a significant move to combat technology-driven abuse, the Take It Down Act has been signed into law by President Donald Trump, making it the first federal legislation in the United States to directly address non-consensual intimate imagery and AI-generated deepfakes.
Passed with broad bipartisan support and enacted in the 119th Congress as S. 146, the Take It Down Act criminalizes the creation and distribution of intimate images without consent, including AI-generated deepfakes. The new federal crime carries penalties ranging from fines to prison terms of up to two years, with longer terms available when the victim is a minor.
The law requires online platforms to remove reported non-consensual intimate images (NCII), including deepfakes, within 48 hours of notification by a victim. This is a significant step in targeting the spread of harmful digital content by establishing notice-and-takedown rules and explicitly protecting victims of technologically enabled abuse.
To enhance online safety, the Act creates clear federal guardrails that compel websites and platforms to act swiftly to remove harmful content. It also protects entities acting in good faith to assist victims, potentially reducing legal uncertainty for platforms engaging in proactive moderation.
The legislation encourages greater cooperation among law enforcement, technology companies, and victims, addressing a rapidly growing form of online abuse enabled by advances in AI. Advocates and lawmakers support the Act because it provides essential protections against the malicious use of AI to create and distribute intimate images without consent, helping foster a safer internet amid the proliferation of generative AI.
In addition to the Take It Down Act, the Federal Trade Commission is drafting new rules aimed at addressing impersonation, personal data misuse, and fraud tied to AI-generated content. Another proposal, the NO FAKES Act, would bar the unauthorized use of a person's name, voice, or likeness in AI-generated content.
Human moderators need clear training on the Act's legal requirements and on recognizing synthetic or manipulated media. Platforms must maintain transparent records and audit trails of takedown activity, build secure takedown systems that verify a reporter's identity and connection to the content, and implement checks on the provenance of uploaded media.
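To make those obligations concrete, the sketch below models a single takedown report as a record carrying a 48-hour deadline and an append-only audit log. It is a minimal illustration under our own assumptions, not a compliance implementation: the names (TakedownRequest, TAKEDOWN_WINDOW, Status) and the workflow are hypothetical, and a real system would also need durable storage, reporter identity verification, and hash-matching to catch re-uploads.

```python
from dataclasses import dataclass, field
from datetime import datetime, timedelta, timezone
from enum import Enum
import uuid

# Hypothetical constant: the Act's 48-hour removal window after a valid report.
TAKEDOWN_WINDOW = timedelta(hours=48)

class Status(Enum):
    RECEIVED = "received"   # report logged, not yet validated
    VERIFIED = "verified"   # reporter's identity and link to the content confirmed
    REMOVED = "removed"     # content taken down

@dataclass
class TakedownRequest:
    content_url: str
    reporter_id: str
    received_at: datetime
    request_id: str = field(default_factory=lambda: str(uuid.uuid4()))
    status: Status = Status.RECEIVED
    audit_log: list = field(default_factory=list)

    def log(self, event: str) -> None:
        # Timestamped entries provide the transparent audit trail described above.
        self.audit_log.append((datetime.now(timezone.utc).isoformat(), event))

    def deadline(self) -> datetime:
        return self.received_at + TAKEDOWN_WINDOW

    def is_overdue(self, now: datetime) -> bool:
        # Anything not yet removed past the deadline is a compliance risk.
        return self.status is not Status.REMOVED and now > self.deadline()

if __name__ == "__main__":
    req = TakedownRequest(
        content_url="https://example.com/post/123",
        reporter_id="victim-001",
        received_at=datetime.now(timezone.utc),
    )
    req.log("report received")
    req.status = Status.VERIFIED
    req.log("reporter identity and connection to content verified")
    req.status = Status.REMOVED
    req.log("content removed")
    print("deadline:", req.deadline().isoformat())
    print("overdue: ", req.is_overdue(datetime.now(timezone.utc)))
    for timestamp, event in req.audit_log:
        print(timestamp, event)
```

The design choice worth noting is the append-only log: because regulators may ask platforms to demonstrate timely action, every state change is recorded with a UTC timestamp rather than overwriting a single status field.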
The law applies to both adults and minors, with tougher penalties when children are involved. The Federal Trade Commission has the authority to issue civil penalties against companies that fail to meet the requirements of the Take It Down Act.
The Take It Down Act is landmark legislation that reflects a shift in how ownership, consent, and control are understood in digital spaces. It is a concrete legal response to digital sexual abuse involving AI-generated content, and it sets a precedent for future regulations aimed at ensuring online safety and protecting individuals from harmful digital content.
The law's enactment was influenced by real cases, such as the story of Elliston Berry, a 14-year-old girl whose likeness was used in AI-generated explicit images spread online without her knowledge. The Take It Down Act is intended to give people more control over how their image and likeness are used online, and is a step toward a safer and more secure digital future.
In response to the growing push to verify the authenticity and origin of online content, lawmakers are also weighing proposals to reduce liability protections for platforms that fail to label or detect manipulated media. That momentum underscores how the Take It Down Act sets the template for future platform-accountability rules.
The Take It Down Act thus stands as landmark federal legislation against technology-driven abuse: it targets the nonconsensual creation and distribution of intimate images, including AI-generated deepfakes; encourages cooperation among law enforcement, technology companies, and victims; and sets clear federal guardrails requiring online platforms to act swiftly to remove harmful content.