Nonconsensual, sexualized deepfakes have become a widespread problem in the tech industry, and U.S. senators are now addressing it. In a recent letter to the leaders of X, Meta, Alphabet, Snap, Reddit, and TikTok, the senators call for proof that robust protections and policies are in place to combat the rise of sexualized deepfakes on those platforms.
The letter specifically demands that the companies preserve all documents and information related to the creation, detection, moderation, and monetization of sexualized, AI-generated images, including any policies implemented to address the issue. The senators are concerned that existing guardrails may not suffice to prevent users from posting nonconsensual, sexualized imagery, as recent media reports suggest.
One of the companies addressed in the letter, X, recently updated its Grok platform to prohibit edits that depict real people in revealing clothing, and it restricted certain image creation and editing features to paying subscribers. However, the senators point out that other platforms are also struggling to address the issue.
The issue of deepfakes first gained attention on Reddit, where synthetic porn videos of celebrities went viral before being taken down in 2018. Since then, sexualized deepfakes targeting celebrities and politicians have multiplied on platforms like TikTok and YouTube. Meta’s Oversight Board has addressed explicit AI images of female public figures, and there have been reports of kids spreading deepfakes of peers on Snapchat.
In response to the letter, Reddit emphasized its commitment to preventing nonconsensual intimate media on its platform. Alphabet, Snap, TikTok, and Meta, however, have not yet responded to requests for comment.
The letter outlines several specific demands for the companies, including definitions of their relevant policies, their enforcement approaches, and their measures to prevent the distribution of deepfakes. It also calls for mechanisms to identify and block deepfake content from being re-uploaded, as well as steps to notify victims of nonconsensual sexual deepfakes.
The problem of nonconsensual, manipulated sexualized imagery extends beyond any one platform or company. Some AI-based services allow users to generate explicit content outright, while others enable the creation of deepfakes with harmful consequences. The issue is further complicated by Chinese tech companies and apps that offer easy ways to edit faces, voices, and videos.
Overall, the senators' letter reflects growing concern about the proliferation of sexualized deepfakes in the tech industry and the need for stronger protections and policies. Legislation regulating deepfake pornography has been passed, but more comprehensive measures are needed to combat the problem effectively. The Take It Down Act, a federal law passed in May, criminalizes the creation and dissemination of nonconsensual, sexualized imagery. However, there are concerns that the law places more emphasis on holding individual users accountable than on the platforms whose tools generate the images.
A key issue with the law is that it may be difficult to enforce against platforms that host user-generated content, which could limit its effectiveness in combating the spread of nonconsensual imagery online.
In response to these challenges, some states are taking proactive measures to protect consumers and elections. New York Governor Kathy Hochul recently unveiled proposals that would address these concerns. One of the proposed laws would require AI-generated content to be clearly labeled as such, providing transparency to consumers about the origin of the content they are viewing.
Additionally, Governor Hochul’s proposals include a ban on nonconsensual deepfakes in the period leading up to elections. This measure aims to prevent the dissemination of misleading or harmful content, such as deepfake videos depicting opposition candidates. By implementing these regulations, New York hopes to safeguard the integrity of elections and protect individuals from the potential harms of manipulated media.
Efforts by states like New York to supplement federal legislation with additional protections demonstrate a commitment to addressing the challenges posed by nonconsensual imagery and deepfakes. By enacting these measures, policymakers are working to create a safer online environment for all users.