
Experts warn of a dangerous loss of control

A new analysis shows how the use of artificial intelligence is accelerating military decision-making and making human control significantly more difficult. Experts therefore warn of a dangerous dynamic in which autonomous systems could trigger real escalations.

Artificial intelligence has gained significant momentum in recent years and is changing the economy, society and politics at a rapid pace. Its ability to evaluate large amounts of data and support complex decisions opens up new possibilities in many areas.

At the same time, this raises fundamental questions about control and regulation – especially when AI is used in security-critical areas. In the military context in particular, it can lead to a dynamic in which decisions are made ever faster while the opportunities for human intervention steadily shrink.

It is precisely this development that experts from the think tank Center for European Politics (cep) examine in a new analysis. They conclude that the increasing use of AI in warfare could result in a dangerous loss of control.

Why military AI leaves little time for human control

According to the cep, AI-supported systems are already being used in the current conflicts in the Gaza Strip, Iran and Ukraine “sometimes without functioning supervision”. Human control, the analysis argues, has in many cases become little more than an illusion.

The use of AI-supported systems significantly shortens analysis and reaction times in the military environment. What counts as a strategic advantage can become a problem, however, when decisions are prepared or made automatically under high time pressure.

Above all, this time pressure leaves little room for human oversight and case-by-case deliberation. The risk grows that incorrect data or misleading signals will quickly have far-reaching consequences.

The cep also points to a lack of reliable experience in dealing with language models and other AI systems in a military context. It therefore warns of “incalculable consequences” that could ultimately lead to a “dangerous loss of control”.

AI in war: What rules experts demand

“In many cases, operators have very little time to check an AI proposal,” explains Anselm Küsters, study author and cep AI expert. The actors often cannot understand “how the system came to its conclusion or what unintended consequences it could have”.

Under these conditions, control quickly turns into dependence, the researcher says. What matters, however, is whether human control actually works under operational conditions. To ensure that it does, the cep calls for binding standards as well as reliable, verifiable procedures.

Common rules are not only ethically necessary but also militarily sensible, as they would reduce erroneous attacks and prevent escalations.

According to the cep, the military use of AI must be based on international standards, for example EU or NATO standards for military AI. These should include, among other things, disclosure obligations and limits on automated systems, as well as an obligation to report malfunctions.
