
Nineteen percent of all websites in the Google index are already AI-generated, and the trend is rising. What sounds like a technical problem threatens trust, discourse and the credibility of the entire internet, because AI garbage works precisely because it wants to be believed, and is believed. But the web must not end up serving as a final storage facility. A commentary.
From 2 to 19 percent: How AI garbage is flooding the internet
- Be it half-naked celebrities, Jesus as a shrimp or lurid sensational reports: ever since AI models became freely available to almost everyone, more and more garbage has been circulating on the internet. The content detector Originality.ai documented that AI-generated websites made up 2.27 percent of the Google index at the beginning of 2020. Three years later, that share had roughly tripled. Then the curve rose dramatically: by the end of 2025, the proportion of AI-generated content had more than doubled again, from 8.5 percent to 19 percent.
- Currently, more and more AI-generated videos are flooding the internet. This affects not only digital media outlets but especially platforms such as YouTube, from which such clips are distributed. The problem: instead of handmade entertainment, niche content or high-quality documentaries, ever more AI junk and fake videos are ending up online. In short: posts whose only aim is to harvest clicks or spread manipulative content.
- The far greater danger comes from so-called deepfakes: audio and video recordings created with artificial intelligence in which, for example, faces can be swapped or people inserted into other footage. But even ordinary AI garbage is not harmless, because important and relevant content risks being lost in the growing flood of scrap.
Why the Internet is becoming a final repository for AI waste
The internet is quietly and steadily turning into a final repository for AI waste, because artificial intelligence does not only generate text and images: it generates an endless loop of content that references, copies and condenses itself.
This avalanche of scrap does not just bury relevance. It undermines discourse, democracy and facts. The sad consequence: what is visible on the internet is no longer necessarily relevant or valuable, merely well connected algorithmically.
The price we pay for this development is horrendous, and it is felt not only culturally and socially but also physically. Data centers and server farms are sprouting up faster than any garbage dump, consume as much energy as entire metropolises and drink up cooling water, which is becoming an increasingly scarce commodity in some regions. All at the expense of the climate.
At the same time, large corporations and politicians sell promises of the future such as nuclear fusion or mini nuclear power plants as an alibi for an energy hunger they have not only stoked themselves, but which in some cases amounts to sheer waste. What I mean by that: AI garbage often seems harmless, but deepfakes, fantasy worlds and cat videos alike are deliberately placed to trigger fears, entertainment or emotions.
Anyone who believes that only inconsequential waste would end up in an urgently needed AI waste repository overlooks the danger that this garbage works because it wants to be believed, and is believed.
Reactions and voices
- Akhil Bhardwaj, professor at the School of Management, University of Bath, told the Guardian: “AI junk is flooding the internet with content that is essentially garbage. This scrapping is ruining online communities. One way to regulate AI junk is to ensure that it cannot be monetized, removing the incentive to create it.”
- Presenter, actor and comedian John Oliver said on his weekly HBO show: “It’s not just that we can be fooled by fake content, but that its very existence allows malicious actors to dismiss real videos and images as fake. I’m not saying some of this stuff isn’t fun to watch, but I’m saying that some of it is potentially very dangerous.”
- Both YouTube and Meta rely on automated systems to identify AI content. While the video platform primarily targets low-quality content, Meta has set its sights on AI-enabled fraud, according to a statement: “While we will continue to have employees reviewing content, these systems will take on tasks that are better suited to the use of technology, such as repeatedly reviewing graphic content where malicious actors are constantly changing their tactics, such as when selling illegal drugs or in fraud cases.”
Can the flood of AI garbage still be stopped?
The problem of finding a final storage facility for AI waste is not the result of a natural event; it is man-made, and therefore in principle controllable. AI is undoubtedly a practical tool when it helps speed up processes, improve diagnoses or facilitate research, in private life as well.
But as long as quantity triumphs over quality, even these tools grow dull, and so does the trust many users place in them. The real erosion takes place in our heads: if everything is potentially fake, the real thing loses weight too, because content can hardly be trusted anymore.
This uncertainty has long since seeped into traditional media and called their credibility into question. If doubt becomes the default attitude, we reach a state in which disinformation flourishes precisely by sowing doubt. The responsibility, however, does not lie only with average users, who should refrain from AI shenanigans, but above all with the digital platforms and their operators, because they decide what becomes visible and what disappears.
The only problem: so-called social media sometimes benefit from AI garbage and disinformation, namely when such content attracts a great deal of attention, which in turn generates advertising revenue for the operators. Without clear rules, transparent labeling and real consequences, the AI waste repository remains a profitable permanent state.