
Companies Are Discovering a Grim Problem With “Vibe Coding”

Lovable, the so-called "vibe coding" app that lets anyone build websites and applications using natural language by harnessing the power of artificial intelligence, has a huge cybersecurity problem.

As Semafor reports, a critical security flaw went unfixed for months, allowing anyone to access sensitive information about sites' users, including names, email addresses, and even financial information.

In March, Matt Palmer, an employee at AI coding assistant company Replit, wrote up a report finding that 170 of 1,645 Lovable-created apps suffered from the same glaring security flaw, making it easy for intruders to get at highly sensitive information.

But the flaw, it seems, was never meaningfully addressed.

"Lovable later shipped a 'security scanner,' but it only verifies the existence of any [row level security] policies, not their correctness or alignment with application logic," Palmer tweeted on Thursday.

Row-level security (RLS) is "the practice of controlling data access in a database on a per-row basis, so that users are only able to access the data they are authorized to see," per one security company.
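To make the distinction concrete, here is a minimal, hypothetical sketch (in Python, not tied to Lovable's actual stack; all names are invented for illustration) of what per-row access control means, and what happens when no policy is enforced at all:

```python
from dataclasses import dataclass

@dataclass
class Row:
    owner_id: int
    email: str

# A toy "table" holding data belonging to two different users.
TABLE = [
    Row(owner_id=1, email="alice@example.com"),
    Row(owner_id=2, email="bob@example.com"),
]

def select_all(requesting_user_id: int, rls_enabled: bool = True) -> list[Row]:
    """Return the rows visible to the requesting user.

    With RLS enforced, a policy like `owner_id = requesting_user_id`
    filters every query. Without it, the same query leaks every
    user's data -- the class of flaw described above.
    """
    if not rls_enabled:
        return list(TABLE)  # no policy: every caller sees every row
    return [r for r in TABLE if r.owner_id == requesting_user_id]
```

The point Palmer makes is that a scanner checking only that *some* policy exists would pass a database where the filter condition is wrong or unrelated to the app's ownership logic; in a real Postgres-backed app this logic lives in `CREATE POLICY` statements, not application code.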

Palmer and his colleagues discovered the email addresses of roughly 500 users who had signed up for a Lovable-built website that turns a LinkedIn profile into a web page.

Software engineer Daniel Asaria claimed he was able to break into multiple "launched" Lovable sites, extracting personal debt amounts, home addresses, and API keys, in just 47 minutes.

"This is not a breach story (I reported it), this is a wake-up call," Asaria tweeted in April. "Be careful which 'full-stack engineer' you trust with your personal data."

Three months on, citing a lack of meaningful remediation or user notification from Lovable, Palmer and his colleagues published their discovery in the National Vulnerability Database.

"This is the single biggest challenge with vibe coding," veteran software developer Simon Willison told Semafor. "The most obvious problem is that they will build things that are insecure."

However, Lovable cofounder Anton Osika accused Replit CEO Amjad Masad, who had pointed out that Lovable made it "very easy to expose private data," of being jealous that Replit had been outdone on ease of use and secure coding.

It's a sign of the times: experts have warned for years that AI coding tools can easily introduce a host of mistakes that are just as easily overlooked. Researchers have also found that many of the most advanced AI models simply don't have what it takes to solve the majority of coding tasks.

The trend has some uncomfortable implications for the programming industry as a whole, as junior programmers come to rely heavily on AI tools, which can seriously undermine the foundational knowledge that is often gained by solving hard problems by hand.

Lovable has since pushed back on X-formerly-Twitter, claiming it is now much "better at building secure applications" than it was even a few months ago, and that it is improving quickly.

"However, we are not yet where we want to be in terms of security and we are committed to continuing to improve the security posture for all Lovable users," the company wrote.

More on AI coding: OpenAI's Advanced Model Caught Sabotaging Code Intended to Shut It Down


2025-05-31 14:30:00
