Screenshots show that Anthropic is testing its own vibe-coding feature – one that could threaten Lovable’s business model.
Lovable, one of Europe’s leading AI startups, has reason to fear for its business model: Anthropic, the company behind Claude, is apparently testing features similar to Lovable’s core offering.
The problem: Until now, Lovable’s product has been built on Claude, Anthropic’s AI language model. Now it appears that Anthropic itself could integrate a very similar function directly into Claude.
Screenshots suggest that the new feature is much more than an add-on. It is meant to cover central steps of software development – from databases to user registration and deployment. The workflow would be almost identical to Lovable’s: users describe an idea and receive a finished application. “Vibe coding” in its purest form.
Old problem – new sector
This follows a well-known dynamic in the tech industry: startups build on platforms whose operators later integrate the most successful features themselves. What is often dismissed as a “copycat strategy” is in fact a power play over control and distribution.
This is particularly sensitive for Lovable because a central part of its product depends directly on Claude. If the function becomes natively available there, Lovable loses not only its differentiation – but potentially its very basis for existence.
Elena Verna, Head of Growth at Lovable, said at the beginning of the year that this dependency is a risk. On the “20VC” podcast, she said her biggest concerns were not smaller providers in the vibe coding market, but the large platforms such as OpenAI, Anthropic, Google and Apple.
Her reasoning: as products become more technologically advanced, distribution – the question of who can get a product to users quickly and at scale – becomes the decisive factor. And this is exactly where the big players have a clear advantage. That is exactly what now appears to be happening.