
The European Union Agency for Cybersecurity (ENISA) warned of the dangers of AI in its latest threat landscape report. The problem: the EU agency apparently used AI tools itself when writing the report, which led to errors. Dozens of the cited sources do not even exist. An analysis with commentary.
ENISA warns about AI
- ENISA’s “Threat Landscape” report, in which the agency urgently warned of the dangers of artificial intelligence, was published in October 2025. According to the 87-page report, AI helps attackers extract passwords from users in around 80 percent of all cases. The warning itself is legitimate: fraud is increasing due to artificial intelligence.
- As Spiegel (€) first reported, IT security researchers from the Institute for Internet Security at the Westphalian University discovered that the ENISA report on the dangers of AI was apparently not written by humans alone, but also by artificial intelligence. Of the 496 sources cited in the form of links, 26 returned 404 errors because the referenced pages never existed. Such a check is easy to automate, as the sketch after this list shows.
- ENISA is based in Greece. Its task is to “contribute to the EU’s cyber policy”, and it aims to strengthen trust in digital products, services and processes “by designing systems for cybersecurity certification”. To this end, it works with EU member states and institutions.
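How quickly the fabricated links could have been caught is shown by a minimal link-checker sketch in Python. This is illustrative only, not the researchers’ actual tooling; the input file name is an assumption, standing in for the list of URLs extracted from the report.

```python
import urllib.request
import urllib.error

# Hypothetical input file: one cited URL per line, extracted from the report.
with open("enisa_report_sources.txt", encoding="utf-8") as f:
    urls = [line.strip() for line in f if line.strip()]

dead = []
for url in urls:
    # A HEAD request keeps traffic minimal; some servers reject HEAD,
    # so falling back to GET would make this more robust.
    req = urllib.request.Request(
        url, method="HEAD", headers={"User-Agent": "link-check/0.1"}
    )
    try:
        urllib.request.urlopen(req, timeout=10)
    except urllib.error.HTTPError as e:
        if e.code == 404:  # the error class the researchers reported
            dead.append(url)
    except (urllib.error.URLError, TimeoutError):
        pass  # unreachable hosts are a separate problem from missing pages

print(f"{len(dead)} of {len(urls)} links returned 404:")
for url in dead:
    print(" ", url)
```

Run over the report’s 496 links, a script like this would flag the 26 non-existent sources within minutes; it is the automated version of Dietrich’s “you only had to click on them once” (see Voices below).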
EU authority embarrasses itself with AI glitch
The episode is less an AI scandal than a classic failure of quality control. It is now common practice for authorities to use AI as well, and that is not problematic in itself.
What makes the case explosive is that fundamental journalistic and scientific minimum standards, such as simply checking sources, were apparently not met.
The sad irony: the report’s key statements and warnings, most of them justified, are hardly being discussed because ENISA has tripped itself up with this failure of craftsmanship.
A handful of unverifiable sources thus undermines the credibility of the entire report, even though its content is mostly accurate. Admitting mistakes is laudable, but without concrete disclosure of what went wrong, a real loss of trust remains.
Voices
- Juhan Lepassaar, Executive Director of ENISA, said on the release of the report: “The systems and services we rely on in our daily lives are interconnected, so a disruption at one end can impact the entire supply chain. This is linked to an increase in misuse of cyber dependencies by threat actors, which can amplify the impact of cyberattacks.”
- Christian Dietrich, professor at the Institute for Internet Security at the Westphalian University, says: “It really bothers me that a public authority, which in my eyes has the very important task of issuing reliable, comprehensible reports, has not done so in this case.” Regarding the broken links, he said: “You only had to click on them once.”
- ENISA has since acknowledged “shortcomings” for which it wants to “take responsibility”. According to the agency, AI was used only for “minor editorial revisions”, and the contents of the report remain valid despite the non-existent sources. Linus Neumann of the Chaos Computer Club comments: “ENISA is supposed to be the central point of contact in Europe. If even the rather superficial threat reports are produced this sloppily, it reflects very badly on the institution.”
Loss of trust at the authority level
The case raises fundamental questions about how European authorities work. When an institution that is supposed to set standards and build trust in cybersecurity fails to check and verify its own sources, more is at stake than a single report.
The decisive factor will be whether ENISA transparently tightens its internal processes, for example through clear rules on the use of AI, mandatory human review, and traceable corrections.
Otherwise there is a risk of a lasting loss of trust that undermines the authority of its future warnings and recommendations.