

The topic itself is not entirely new, but it received a great deal of media attention a few weeks ago in connection with X/Grok: the creation of nude images using AI tools, or more precisely, the non-consensual creation of so-called deepfakes. The problem is not the creation of nude images as such, but the creation and distribution of such depictions based on real people. In the case of Grok, more than 23,000 depictions of children in sexualized contexts were also documented. The European Union is now on the verge of banning so-called Nudify apps: Parliament and the Council have agreed on changes to the AI Act that impose significant restrictions on such tools.
Targeting the tools directly
Once the changes are adopted, AI systems that depict identifiable people in sexually explicit situations without their consent will be banned. The EU regulation is thus aimed not only at the subsequent distribution of such images, but directly at the tools that enable their creation. The Digital Services Act already obliges large platforms to assess and mitigate systemic risks. The planned Nudify ban is additionally intended to prevent anyone from offering a system expressly for the purpose of generating such content. Providers whose systems lack adequate safeguards against the creation of non-consensual sexual images or CSAM may also be covered.
New requirements take effect from December 2026
Further details can be found in the EU Commission's news section. According to the European Parliament, companies must adapt their systems to the new requirements by December 2, 2026. Protective mechanisms, blocks, moderation processes and technical verifiability will then no longer be merely voluntary security features, but part of the regulatory requirements.
Intervening before it becomes normal
The EU evidently does not want to wait until Nudify services establish themselves as a normal use case for generative AI. After the debate about Grok, non-consensual deepfakes are no longer classified as a moderation problem for social networks, but as an inadmissible AI practice. For general AI image services, this means that anyone who wants to remain available in the EU will have to demonstrate that such content cannot simply be generated on request. Services that already have the appropriate protection mechanisms in place, as most do, are unlikely to see much change at first. The situation is different, of course, for apps and services that are advertised for precisely this purpose and can still be found in abundance in the App Store.
















