Addressing the Challenge of ClothOff: A Legal Battle Against Non-Consensual Imagery
For over two years, ClothOff has been a menacing presence on the internet, used above all to target young women. Despite being removed from major app stores and banned on most social platforms, it continues to thrive on the web and through a Telegram bot. The recent lawsuit filed by a clinic at Yale Law School aims to shut down the app completely, compelling the owners to delete all images and cease operations. But tracking down the defendants has proven difficult.
Professor John Langford, co-lead counsel in the lawsuit, says, “It’s incorporated in the British Virgin Islands, but we believe it’s run by a brother and sister in Belarus. It may even be part of a larger network worldwide.” The operation’s global footprint adds layers of complexity to the legal battle.
The lawsuit sheds light on the disturbing use of ClothOff to manipulate Instagram photos of an anonymous high school student in New Jersey. The victim was only 14 years old when the original photos were taken, which means the AI-altered versions qualify as child sexual abuse imagery. Despite the clear illegality of the modified images, local authorities have hesitated to prosecute the case, citing challenges in gathering evidence from suspects’ devices.
The legal proceedings have been slow-moving since the complaint was filed in October. Langford and his team are working on serving notice to the defendants, a daunting task given the app’s global reach. Once the defendants are served, the clinic can proceed with court hearings and seek a judgment. However, the road to justice for ClothOff’s victims remains arduous.
By comparison, the case of Grok, the chatbot from Elon Musk’s xAI, presents a different set of challenges. While laws banning deepfake pornography exist, holding an entire platform accountable is far more complex. Existing laws require evidence of intent to harm, which makes it difficult to prove that a platform like Grok knowingly facilitates illegal activity.
Langford points out, “ClothOff is designed and marketed specifically as a deepfake pornography image and video generator. When you’re suing a general system that users can query for all sorts of things, it gets a lot more complicated.” The legal battle against platforms like Grok requires a nuanced understanding of free speech protections and the platform’s role in enabling harmful activities.
So far, the strongest pushback against xAI has come from jurisdictions with stricter speech regulations. Indonesia and Malaysia have moved to block access to the Grok chatbot, while the UK has opened an investigation that could lead to a ban. The European Commission, along with regulators in France, Ireland, India, and Brazil, has also taken preliminary action. In contrast, the US regulatory response has been minimal.
As investigations unfold, regulators face critical questions about the distribution of non-consensual imagery. Langford emphasizes, “If you are posting, distributing, disseminating child sexual abuse material, you are violating criminal prohibitions and can be held accountable.” The key question now is how much platforms knew about illegal content on their services and what they did in response.

