
Will AI ever be as intelligent as humans?

With OpenAI boss Sam Altman and Tesla CEO Elon Musk, two AI pioneers are fueling the hype around so-called Artificial General Intelligence (AGI). What is often meant by this is a kind of superintelligence with superhuman abilities. Many scientists and entrepreneurs are pursuing this dream. An analysis.

There are numerous definitions of AGI

  • The problem with AGI starts with the fact that there is no clear definition of the term. For Amazon, for example, AGI is a “field of theoretical AI research that seeks to develop software with human-like intelligence and the ability to self-study.” The goal is software that can perform tasks on its own without special training.
  • A distinction is often made between “strong AI” and “weak AI”. Weak AI includes, for example, ChatGPT, chatbots and recommendation tools that can solve specified problems using training data. A strong AI, on the other hand, has creativity and feelings, can transfer knowledge across different specialist areas, and can develop its own solution strategies. We are currently in the age of weak AI.
  • Another problem is the perceived proximity to achieving AGI. The first AI researchers around Alan Turing and Herbert A. Simon predicted in the 1950s and 1960s that an AGI would exist within 20 years – i.e. between 1970 and 1980. That narrative has persisted through the generations. Now it is Sam Altman and Elon Musk who believe breakthroughs are likely soon.

Disagreement in research

It is unclear whether and when the first AGI will actually be created. In 2024, the largest study to date on the advancement of artificial intelligence appeared under the title “Thousands of AI Authors on the Future of AI”, in which over 2,700 AI researchers made their predictions.

The result looks like a blindfolded search for the proverbial needle in a haystack. If AI progress continues unhindered, the experts see a ten percent chance by 2037 that all human jobs will be fully automated and can be taken over by AI. Only by the year 2116 do they put the probability of this at 50 percent.

Or to put it another way: there is nothing to see here. Even after over 70 years of AI research, researchers themselves only agree that they disagree. The mere fact that even in almost 100 years there is said to be only a 50 percent chance that a strong AI will exist shows how little substance predictions about the future of AI really have.

Voices

  • In a blog post, OpenAI CEO Sam Altman reflects on the advances in artificial intelligence since the launch of ChatGPT: “We are now confident that we know how to build AGI as we have traditionally understood it. We believe that in 2025 we could see the first AI agents ‘entering the world of work’ and significantly changing the performance of companies.”
  • Rolf Pfister researches artificial intelligence at Lab42 in Davos, Switzerland. In an interview with Schweizerischer Rundfunk, he said: “There are several hundred definitions of AGI. And it has to be said that it is actually just a marketing term that was introduced to differentiate itself from conventional AI research.”
  • Katja Grace, researcher at Berkeley University and lead author of the AI study, said: “More than half of respondents said there was ‘significant’ or ‘extreme’ concern in six different AI-related scenarios – including the spread of false information, authoritarian surveillance and increasing inequality. There was disagreement about whether faster or slower AI progress would be better for the future. However, there was broad agreement that research to minimize potential risks should be given greater priority.”

The marketing term AGI as a money-printing machine in the AI race

When we take a look at who is saying positive things about AGI, we see that they are mostly leaders of companies that depend directly or indirectly on the success of artificial intelligence.

The term AGI is still unknown to many people and is therefore ideal for standing out from the inscrutable mass of AI companies. Those who promise that AGI will soon become a reality have a better chance of attracting new investor money. And that money matters: despite all the progress, the entire AI industry suffers from a chronic lack of money. The development of ChatGPT shows this impressively.

As end users, we can currently be happy that the current AI models are improving at regular intervals – for example in image creation – and can follow the competition from the sidelines. And when the desire for an AGI arises again, we can simply watch classic films like “2001: A Space Odyssey” or “I, Robot”.
