
Fake it till you never make it

LLMs today make it extremely easy to produce clever-sounding concepts. In just a few minutes, business plans, strategy papers, product visions and pitch decks appear, all at a seemingly high level. The language is clean. The logic seems consistent. The slides look professional.

The problem: polished output now looks like competence. But it isn't.

I see this regularly in startup teams, especially at the pre-seed and early stages, but also in the innovation units of established organizations. Polished documents are produced faster and faster. The quality of execution, however, is often poor, and sometimes it even drops.

My theory: We are currently experiencing a new version of "Fake it till you make it". With one difference: many get stuck in the fake.

Why LLMs reinforce this illusion

Today's LLMs are primarily amplifiers. They amplify existing ideas, mental models and assumptions. They do not replace real experience or a deep understanding of context.

Fast results create a false sense of security. Anyone who has a finished strategy paper within an hour feels prepared. The brain gets the wrong signal: "Problem solved."

In many teams there is currently no major efficiency gain, but rather additional overhead. AI output must be constantly checked, corrected and adjusted. This feels like acceleration, but is often just improvised quality assurance. The real problem is rarely the AI. What is missing is a setup with clear standards and proper quality assurance.

Where things fall apart in reality

A typical pattern from my work: academically strong founding teams build impressive concepts. Deep research. Clean models. Good storylines. But as soon as it comes to execution, the system breaks down.

Customer conversations are avoided or conducted mechanically. They feel like scripted interviews. No real empathy. No real listening. No relationship with the customer.

I see something similar on the tech side. My current survey with 58 mostly senior engineers shows clearly: AI code is almost always checked manually. There is no trust in AI as an autonomous decision-maker. AI is used as an accelerator, not an authority.

This is no coincidence. Productive software is not created by copy and paste. It has to fit into existing systems and constraints, meet security requirements, integrate with deployment processes and remain maintainable in the long term. A quick prompt shortcut cannot currently replace any of that.
Teams that understand this move away from individual prompts towards agent-based workflows with clear handoffs, review mechanisms and responsibilities.
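To make this more concrete, here is a minimal sketch of what such a workflow with explicit handoffs could look like. All names (generate_draft, run_review_checks, human_signoff) are hypothetical placeholders for illustration, not a specific framework or the setup described here:

```python
# Minimal sketch of an agent-based workflow with explicit handoffs and a review gate.
# All names are hypothetical placeholders, not a specific framework.

from dataclasses import dataclass


@dataclass
class Draft:
    content: str
    approved: bool = False
    notes: str = ""


def generate_draft(task: str) -> Draft:
    """Handoff 1: an LLM or agent produces a first draft for the task."""
    # In a real setup this would call a model; here it is stubbed out.
    return Draft(content=f"Generated proposal for: {task}")


def run_review_checks(draft: Draft) -> Draft:
    """Handoff 2: automated checks against agreed standards (tests, lint, policy)."""
    checks_passed = len(draft.content) > 0  # stand-in for real checks
    draft.notes = "automated checks passed" if checks_passed else "failed checks"
    draft.approved = checks_passed
    return draft


def human_signoff(draft: Draft, reviewer: str) -> None:
    """Handoff 3: a named person stays responsible for the final decision."""
    print(f"{reviewer} reviews: {draft.content} ({draft.notes})")


if __name__ == "__main__":
    draft = generate_draft("onboarding flow for pilot customers")
    draft = run_review_checks(draft)
    if draft.approved:
        human_signoff(draft, reviewer="tech lead")
```

The key design choice is that the model only produces drafts, while automated checks and a named person decide what actually ships.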

The real problem is rarely the team. It is the lack of a strategic decision at the management level to think of AI not just as a productivity tool, but as a new operating model.

Which teams are particularly at risk

Teams are most at risk when they:

  • work strongly conceptually, but deliver little operationally
  • have little real customer contact
  • focus on presentations instead of systems
  • confuse output with progress

Innovation units in corporates often fall into this category. Lots of slides. Little real market interaction. Lots of strategy. Little ownership for implementation.

Execution is the new rare skill

Execution today means more than “doing”. It means:

  • understanding complex systems and context
  • working with uncertainty
  • integrating feedback from reality
  • understanding and orchestrating technology, people and processes
  • taking responsibility when things don't work
  • and, above all, continuing to gain experience and learning to grow with AI

Another pattern emerges from my survey results: the greatest effort lies not in writing code, but in building production-ready foundations. CI/CD, infrastructure, security, deployment. This is exactly where it is decided whether a project is viable.

Execution isn't always sexy. But it is the bottleneck.

How LLMs should be used sensibly

LLMs are not a replacement for thinking. Right now, they are a sparring partner.

The best teams use LLMs like this:

  • as an idea generator, not as a decision-making authority
  • as an accelerator, not a shortcut
  • embedded in agent-based workflows and review processes
  • coupled with clear standards and quality gates

Anyone who blindly accepts LLM output gives up control. Anyone who ignores the technology loses speed. In practice, this means: AI speeds up the work, but it does not take responsibility off your shoulders. And this is exactly where automated quality mechanisms become crucial. Without clear checks, standards and continuous validation, AI does not scale productively; only the errors scale.
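As a rough illustration of such a quality mechanism, the sketch below wires a few automated checks into a single gate that a CI pipeline could run before accepting AI-generated changes. It assumes a Python project with pytest and ruff installed; the concrete commands are examples, not a prescribed toolchain:

```python
# Illustrative quality gate for AI-generated changes.
# Assumes pytest and ruff are available; the commands are examples only.

import subprocess
import sys

CHECKS = [
    ("unit tests", ["pytest", "-q"]),
    ("lint", ["ruff", "check", "."]),
]


def run_quality_gate() -> bool:
    """Run every check; a single failure blocks the change from merging."""
    for name, cmd in CHECKS:
        result = subprocess.run(cmd, capture_output=True, text=True)
        if result.returncode != 0:
            print(f"Quality gate failed at '{name}':\n{result.stdout}{result.stderr}")
            return False
        print(f"'{name}' passed")
    return True


if __name__ == "__main__":
    # Exit non-zero so a CI pipeline can refuse to merge unvalidated output.
    sys.exit(0 if run_quality_gate() else 1)
```

The point is not the specific tools but the principle: generated output only ships once it has passed the same standards that apply to human-written work.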

What does this mean for the future?

We will see more and more brilliant concepts in the startup ecosystem over the next few years. But only a few startups will actually be able to execute them. The new competitive advantage is not having ideas. Not slides. Not prompts. It is the ability to translate complex reality into working systems and to execute reliably.

Or to put it another way: In a world with artificial intelligence, whoever executes consistently wins. Because this is the hardest skill when building reproducible, automated systems.

About the author
Peyman Pouryekta has been working in technology and product development for almost two decades. With his company, he focuses on how startups can cope with rapid success and the growing pains that come with it.



