
We’ve been talking about AI for ten years – here is what we need to do now

Fabian Westerheide is a founding partner of the AI-focused venture capital investor AI.FUND and invites you to the Rise of AI conference in Berlin every year.

Anyone who works in an industry long enough realizes that the big debates change less than you would think. Labels change, and so does the hype – the underlying pattern remains. Artificial intelligence (AI) is the best example of this.

For around ten years I have experienced AI not just as a technology, but as an ecosystem – first global, then European, and increasingly shaped by Germany as well. If you had to summarize this period in one sentence, it might be this: the themes have stayed the same. But the balance has shifted – and with it the consequences.

2016 was the turning point: from IT to economic policy

Until around 2016, AI was an IT topic for most people: machine learning, models, research – exciting, but far removed from day-to-day business. By then at the latest, AI became relevant to economic policy. Not because it was new, but because it had begun to have an impact: data, computing power and the first breakthroughs marked the transition.

2016 was also a key moment for me personally: that year, together with my wife Veronika, I launched the Rise of AI conference – with the aim of making this development visible and bringing the different actors together.

Those who began to seriously learn and invest back then are visibly better off today. Not because of genius, but because of the learning curve. Technology is unfair: those who start earlier build up a lead – in talent, infrastructure, networks and experience.

The coronavirus pandemic interrupted many things and accelerated others at the same time. Since then, one thing has been clear: we are no longer talking only about potential. We are talking about integration, scaling – and sovereignty.


You recognize an era not by buzzwords, but by people

If I want to understand where we are, I don’t look at the products first, but at the room: Who is in it? Whose questions dominate?

2016: Researchers, founders, tech journalists. The key question: What is possible?
2019: Corporates, politicians, investors. The key question: How do we industrialize this?
Since 2022: Decision-makers, those responsible, governance. The key question: How do we do this now – securely, scalably, with sovereignty?

This is not a change of mood, but a structural change. AI is moving from the innovation corner into the engine room of companies and states.

What has remained the same: ethics, jobs, risks, regulation

We have been talking about ethics, transparency, labor impacts and regulation for ten years. And we will still be talking about it in ten years.

Not because we don’t learn anything – but because these are the basic questions of every strong technology: Who benefits? Who loses? Who controls? Who is liable?

Regulation in particular will be with us permanently – including its dark side: bureaucracy. There is a fundamental dilemma behind this: technology develops exponentially, institutions often develop linearly.


What has really changed: from “talking” to “doing”

The crucial difference is not that AI now writes better texts or generates images. The crucial difference is that AI has moved from being an object of discussion to being an operating system.

Companies are no longer faced with the question of whether they should deal with AI, but rather how to reorganize work, decision-making paths and processes with it.

The next few years will change the job market, make AI a strategic topic and question entire organizational structures with agent systems. At the same time, security – from resilience to defense – is becoming a central dimension.

Agents don’t just change tasks – they change structures

The term “agent systems” is used a lot at the moment. But behind it lies a clear development: AI is moving from assistance systems to acting systems that orchestrate workflows, prepare decisions and, in part, execute processes independently.

This confronts organizations with new questions. The classic structures – departments, interfaces, sign-offs – were not built for a world in which part of the work is taken over by systems that are permanently available, work in parallel and prepare decisions.

The real challenge is therefore not “Can AI do that?” but “Can we as an organization handle it?” Governance, compliance, quality control and monitoring become basic requirements.
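What such a basic requirement could look like in practice can be sketched in a few lines of code: a minimal, hypothetical approval gate that logs every action an agent proposes and holds high-risk steps for human sign-off. All names here (`Action`, `GovernanceGate`, the risk labels) are illustrative assumptions, not the API of any real agent framework.

```python
from dataclasses import dataclass, field
from typing import Callable

@dataclass
class Action:
    """A single step an agent proposes to take (illustrative)."""
    name: str
    risk: str  # "low" or "high" (hypothetical classification)

@dataclass
class GovernanceGate:
    """Sketch of a governance layer: log every proposed action,
    run low-risk steps automatically, hold high-risk steps
    until a human approves them."""
    audit_log: list = field(default_factory=list)

    def submit(self, action: Action, approve: Callable[[Action], bool]) -> bool:
        self.audit_log.append(action.name)   # monitoring: nothing runs unlogged
        if action.risk == "high" and not approve(action):
            return False                     # compliance: human stays in the loop
        return True                          # quality control would follow here

gate = GovernanceGate()
ok_low = gate.submit(Action("draft_report", "low"), approve=lambda a: False)
ok_high = gate.submit(Action("send_payment", "high"), approve=lambda a: False)
```

The point of the sketch is the structure, not the code: the organization, not the model, decides which actions run unattended.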


Sovereignty is not a feeling. It’s a calculation.

At the same time, the issue of sovereignty is becoming central – not as a political debate, but as an economic reality.

Sovereignty specifically means: Who runs the models? Where is the data located? What dependencies arise? This affects medium-sized businesses, corporations and startups alike.
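The three questions above amount to an inventory that any company can actually run. A minimal sketch, with entirely hypothetical component names and values: record for each AI component who operates it and where its data sits, then flag everything that creates an external dependency.

```python
# Hypothetical stack inventory: operator and data location per component.
stack = {
    "language_model": {"operator": "external_api", "data_location": "US"},
    "vector_store":   {"operator": "self_hosted",  "data_location": "EU"},
    "fine_tune_data": {"operator": "self_hosted",  "data_location": "EU"},
}

# Flag components that are not self-hosted or hold data outside the EU.
external_deps = [
    name for name, c in stack.items()
    if c["operator"] != "self_hosted" or c["data_location"] != "EU"
]
```

The criteria (self-hosted, EU data location) are placeholders; each organization would substitute its own sovereignty requirements.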

The state becomes the decisive factor

One theme has run through the debate for years – and it is precisely this theme that determines the future viability of entire locations: the role of the state.

AI works not only in marketing or sales, but also in administration, education, health, security and infrastructure. If the state cannot act here – in procurement, data access, standards and training – a structural bottleneck will emerge.

Large systems are often slow. But that’s exactly why a clear focus is needed: the ability to act instead of declarations of intent.

It’s not too late to get started – but you need the right attitude

The good news: It’s not too late to get in. Many fields are just beginning to open up.

The decisive factor is the attitude: Don’t look for technology and invent problems for it – but rather understand problems and solve them in a targeted manner.

Anyone who is serious about this needs perseverance and a clear niche. AI is big enough for specialists. Those who consistently learn, build, test and iterate will win.

The ecosystem needs diversity – not homogeneity

What is often underestimated is that good AI debates do not arise in homogeneous groups. Politics, research, business, startups – every perspective is important, but none is sufficient on its own. The interaction is crucial.

Because this is exactly what we need in the next few years: a system that can learn. The topics remain the same. But now it’s operational.



