Tech

Google AI spreads medical misinformation

Google has integrated an AI overview into its search: AI-generated short summaries of specific content appear in the search results. However, these summaries repeatedly contain incorrect information, which can have serious consequences when it comes to health issues. A commentary.

AI Overviews: Google spreads false information

  • AI Overviews is a search feature from Google that displays AI-generated summaries in search results. For certain search queries, artificial intelligence automatically creates short summaries with relevant information, which are displayed above the traditional search results. The aim is to provide users with information more quickly and save them research effort.
  • Whether hellhounds, spaghetti with gasoline or pizza with glue: Google's AI regularly stands out for nonsensical or completely wrong answers. In many cases this may be harmless, but when it comes to health or law, AI summaries can wreak havoc.
  • In one particularly serious case, Google incorrectly advised people with pancreatic cancer to avoid high-fat foods, although experts recommend the opposite. Following Google's recommendation could even increase patients' risk of dying. AI Overviews also gives problematic answers to search queries about mental illness.

Our classification

As with ChatGPT and similar tools, you should not blindly rely on artificial intelligence. Time and again, content is abridged or factual errors are phrased so convincingly that they seem plausible on a superficial reading.

What Google delivers with its AI is not a neutral shortcut to the truth, but a new form of algorithmic authority. The summaries appear precise, yet they are based on patterns and probabilities. Anyone who provides answers without taking responsibility for them sells uncertainty as certainty.

In the logic of the search engine, it is often not the best source that wins out, but the most prominently displayed one. That AI now dominates the results page is not a technical gimmick but a shift in how the Internet works, and not only for the better.

Voices

  • Athena Lamnisos, head of the charity Eve Appeal for gynecological cancers: “Even with identically worded questions, it is virtually impossible to get the same answer from AI, raising pressing concerns about the consistency and accuracy of the information provided or the ability for someone to revisit advice.”
  • Google told CNBC that many of the examples were unusual requests: “The vast majority of AI overviews provide high-quality information with links to further information on the Internet.”
  • Stephanie Parker, head of digital at the end-of-life organization Marie Curie, points out the emotional background of many health-related searches: “People turn to the Internet in moments of worry – inaccurate or decontextualized information can seriously endanger their health.”

Warnings and human review

It is foreseeable that policymakers will demand stronger quality and transparency standards for AI-powered information services, especially in areas such as health or law. The EU already has AI legal frameworks in the works that could address exactly such risks.

In the future, Google and other platforms should explicitly label AI summaries, introduce warnings for critical topics, or make human review mandatory before summaries are released to the general public.

In the long term, this debate could help users differentiate more consciously between quick answers and qualified information – including a greater willingness to consult medical or professional sources directly instead of relying on short AI answers.
