Artificial intelligence is the topic of the moment and seems to be creeping into almost every area of our lives. Yet AI is still "new territory" for many people, who know relatively little about it beyond a few basics. That's why we've taken a look at ten of the most persistent AI myths.
Artificial intelligence is not a new invention. The technology's roots date back to 1956, when researchers at the Dartmouth workshop first outlined the vision of machines that could mimic human learning.
Almost 70 years later, countless companies use AI to automate both simple and complex processes. Nevertheless, many half-truths and myths about the technology persist among the public, which we would like to (at least partially) dispel here.
AI myths reveal the complexity of the technology
Western society currently tends to think in black and white, and this is also evident when it comes to AI. Some demonize the technology and condemn any use of it, while others believe artificial intelligence can solve every problem. As is so often the case, the truth lies somewhere in the middle.
Not every young person needs training in AI, and not every company needs a dedicated strategy to transform every area of its work. Instead, the technology should be used prudently and for specific purposes, if at all. The same applies to private individuals, who should not treat AI as a replacement for their family doctor or for independent thinking.
Artificial intelligence owes its breakthrough to the sheer volume of available data and to powerful computing hardware. And the technology is here to stay. Sooner or later, everyone will have to grapple with its abilities and limitations. That's why we're taking a look at ten of the most persistent AI myths below.
Myth 1: Does “AI” even exist?
There is no such thing as "the" AI, even though laypeople often use the term as a catch-all. In reality, it covers very different machine learning methods that differ fundamentally in their data sources, areas of application, and risks. Language models, for example, are built on text prediction: given a sequence of words, they estimate which word is most likely to come next. Systems for analyzing images, by contrast, rely on entirely different algorithms.
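To make the text-prediction idea concrete, here is a deliberately tiny sketch in Python: a toy bigram model that predicts the next word from counted word pairs. The mini-corpus and all names are invented for illustration; real language models use neural networks with billions of parameters, but the underlying task of predicting the next token from context is the same.

# A toy illustration of text prediction: count which word follows which,
# then predict the most frequent successor. Purely illustrative.
from collections import Counter, defaultdict

corpus = "the cat sat on the mat the cat ate the fish".split()

# Count how often each word follows each other word.
bigrams = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    bigrams[prev][nxt] += 1

def predict_next(word):
    """Return the most frequent successor of `word` in the corpus."""
    followers = bigrams.get(word)
    return followers.most_common(1)[0][0] if followers else "<unknown>"

print(predict_next("the"))  # -> "cat" (follows "the" twice in the corpus)

An image classifier, by contrast, would start from pixel values rather than word sequences, which is exactly why lumping both under a single "AI" label obscures more than it reveals.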

