
Too dangerous? Anthropic is keeping Claude Mythos under wraps

OpenAI competitor Anthropic has developed a new AI model that discovered a 27-year-old security hole in an operating system considered particularly secure. Claude Mythos is now tasked with identifying vulnerabilities in the software of companies such as Microsoft, Amazon and Apple. However, Anthropic is not making the model publicly available because it is too dangerous. An analysis with commentary.

What is Claude Mythos?

  • According to Anthropic, the AI model Claude Mythos is said to be so good at finding previously undiscovered software vulnerabilities that, in the wrong hands, it could become a devastating cyber weapon. The company therefore does not want to make it publicly available. Under the name Project Glasswing, only companies are to be given access to the AI so that they can detect security gaps in their own software.
  • According to Anthropic, Claude Mythos has already managed to discover thousands of serious software vulnerabilities. These include a 27-year-old security hole in the OpenBSD operating system, which is considered particularly secure, and a vulnerability in the video software FFmpeg that had lain dormant for 13 years.
  • The AI is also said to have been able to develop programs to exploit identified vulnerabilities within a few hours, something experts would need several weeks to do. According to one Anthropic employee, an earlier version of Claude Mythos was tasked with breaking out of a shielded computer environment and reporting back. The AI succeeded in circumventing the security measures and gaining access to the Internet, sending an email to the employee while the employee was allegedly eating a sandwich in the park.

Security AI or marketing coup?

Almost every headline that contains even a hint of AI prophesies either the next miracle weapon or the end of the world. This familiar cocktail of fascination and fear also swirls around Claude Mythos: shaken vigorously, not stirred, by an AI industry that thrives primarily on attention.

As is so often the case, what remains is a fog of claims that can hardly, if at all, be verified from the outside. In contrast to OpenAI, however, Anthropic has cleverly cultivated a polished, responsible image. The fact that the company refuses to work with the US military and does not release all of its models to the public may seem honorable, but it is simply corporate strategy.

The reason: AI can be monetized more effectively in the business sector, while its cost-benefit ratio is becoming increasingly unclear to the public. Keywords: deepfakes, disinformation, energy consumption, addictive behavior. Admittedly, Anthropic’s AI models have so far proven particularly successful in IT and data protection.

That does not make the company immune to pompous marketing campaigns, however. It is undisputed that models like Mythos could solve real problems: find security gaps more quickly, limit damage, and stabilize digital infrastructure. But the real irony lies elsewhere: while AI is celebrated as a tool to avert danger, it has long since become part of the problem itself. The difference between a protective shield and a gateway is shrinking, and the public debate can hardly keep up.

What experts and Anthropic boss Amodei say

  • Anthropic CEO Dario Amodei wrote in a post: “The dangers of failure are clear. But if we get it right, we have a real opportunity to create an Internet and a world that is fundamentally safer than it was before the advent of AI-powered cyber capabilities.”
  • Elia Zaitsev, Chief Technology Officer at Anthropic partner CrowdStrike, in a statement: “The window between the discovery of a vulnerability and its exploitation by an attacker has shrunk – what once took months now happens in minutes thanks to AI. Claude Mythos Preview shows the opportunities available to defenders at scale today, and attackers will inevitably try to exploit the same capabilities.”
  • AI expert and neuroscientist Gary Marcus quotes a text message from a cyber expert friend in a blog post: “I haven’t had time to go through all the reports, but to me it sounds like exaggerated hype. I don’t doubt some of the results, but what was made of them, under what conditions the vulnerabilities were found and what role people played in them is unclear. (…) It seems to me that they are sowing seeds in the garden of hype.”

Who controls AI like Claude Mythos – CEOs or the state?

Whether the new Anthropic AI is a digital super sniffer dog or rather a well-marketed myth is almost irrelevant. What matters more is who decides in the future what such systems may and may not do. At the moment it is primarily the tech CEOs who draw this line. There is about as much democratic legitimacy in that as there is in the terms and conditions of X or Meta.

The bigger problem: The lines between legitimate concern and calculated panic are becoming increasingly blurred. But when risk is also a marketing tool, every warning becomes ambiguous. This is exactly where the long-standing narrative that regulation is anti-innovation is taking its revenge.

The reason: it is becoming increasingly apparent that this convenient thesis has been overtaken by reality. Even where regulation exists, it acts like a fax machine in the age of autonomous systems: functional, but hopelessly slow. While politicians are still designing AI guardrails, companies have long been building new information highways.

Whether Claude Mythos is as dangerous as claimed or not, the fact is that without government controls we are entirely at the mercy of individual CEOs, some of whom certainly do not deserve our trust. While Anthropic still seems to be one of the most responsible players in the AI race, other companies such as OpenAI or xAI may act differently with comparable AI models in the future.


