
The looming crackdown on AI companionship

For as long as there has been AI, there have been people sounding alarms about what it might do to us: rogue superintelligence, mass unemployment, or environmental ruin from data center sprawl. But this week showed that another threat entirely, that of kids forming unhealthy bonds with AI, is the one pulling AI safety out of the academic fringe and into regulators' crosshairs.

This has been building for a while. Two high-profile lawsuits filed in the last year, against Character.AI and OpenAI, allege that companion-like behavior in their models contributed to the suicides of teenagers. A US study published in July found that 72% of teenagers have used AI for companionship. And stories in reputable outlets about “AI psychosis” have highlighted how endless conversations with chatbots can lead people down delusional rabbit holes.

It is hard to overstate the impact of these stories. To the public, they are proof that AI is not merely imperfect but a technology that does more harm than good. If you doubted that this outrage would be taken seriously by regulators and companies, three things happened this week that might change your mind.

A California law passes the state legislature

On Thursday, the California state legislature passed a first-of-its-kind bill. It would require AI companies to include reminders for users they know to be minors that responses are AI generated. Companies would also need a protocol for handling suicide and self-harm, and would have to provide annual reports on instances of suicidal ideation in users’ conversations with their chatbots. The bill, led by Democratic state senator Steve Padilla, passed with heavy bipartisan support and now awaits Governor Gavin Newsom’s signature.

There are reasons to be skeptical about the bill’s impact. It does not specify what efforts companies must make to identify which users are minors, and many AI companies already include referrals to crisis providers when someone talks about suicide. (In the case of Adam Raine, one of the teenagers whose survivors are suing, his conversations with ChatGPT before his death included this type of information, but the chatbot allegedly went on to give suicide-related advice anyway.)

Still, it is undoubtedly the most significant of the efforts, under way in other states as well, to rein in companion-like behaviors in AI models. If the bill becomes law, it would strike a blow against the position OpenAI has taken, which is that “America leads best with clear, nationwide rules, not a patchwork of state or local regulations,” as the company’s chief global affairs officer, Chris Lehane, wrote on LinkedIn last week.

The Federal Trade Commission takes aim

That same day, the Federal Trade Commission announced an inquiry into seven companies, seeking information about how they develop companion-like characters, monetize engagement, and measure and test the impact of their chatbots, among other things. The companies are Google, Instagram, Meta, OpenAI, Snap, X, and Character Technologies, the maker of Character.AI.

The White House now exerts tremendous, and possibly illegal, political influence over the agency. In March, President Trump fired its lone Democratic commissioner, Rebecca Slaughter. In July, a federal judge ruled that firing illegal, but last week the US Supreme Court allowed it to stand.

Andrew Ferguson, the chair of the FTC, addressed the investigation in a press statement.

For now, it is just an inquiry, but the process (depending on how much the FTC makes public) could reveal the inner workings of how these companies build their AI companions to keep users coming back again and again.

Sam Altman on suicide cases

Also on the same day (a busy one for AI news), Tucker Carlson published an hour-long interview with OpenAI’s CEO, Sam Altman. It covers a lot of ground, including Altman’s long-running battle with Elon Musk, OpenAI’s military customers, and conspiracy theories about the death of a former employee, but it also includes the most candid comments Altman has made so far about suicides that followed conversations with AI.

Altman talked about “the tension between user freedom, privacy, and protecting vulnerable users” in such cases. But then he offered something I had not heard him say before.

“I think it’d be very reasonable for us to say that in cases of young people talking about suicide seriously, where we cannot get in touch with the parents, we do call the authorities,” he said. “That would be a change.”

So where does all this go next? For now, it is clear that, at least in the case of children harmed by AI companionship, companies will not be able to rely on their familiar playbook. They can no longer deflect responsibility by leaning on privacy, personalization, or “user choice.” Pressure to take a harder line is mounting from state laws, regulators, and an angry public.

But what will that look like? Politically, the left and the right are now paying attention to AI’s harms to children, but their solutions differ. On the right, the proposed remedy aligns with the wave of internet age-verification laws that have now passed in more than 20 states. These aim to protect children from adult content while defending “family values.” On the left, it is the revival of stalled ambitions to hold Big Tech accountable through antitrust and consumer-protection powers.

Consensus on the problem is easier than agreement on the cure. As it stands, we seem likely to end up with exactly the patchwork of state and local regulations that OpenAI (and plenty of others) have lobbied against.

For now, it falls to the companies to decide where to draw the lines. They have to decide things like: Should chatbots cut off conversations when users spiral toward self-harm, or would that leave some people worse off? Should they be licensed and regulated like therapists, or treated as entertainment products with warnings? The uncertainty stems from a basic contradiction: companies have built chatbots to act like caring humans, but they have put off developing the standards and accountability we demand of real caregivers. The clock is now running out.

This story originally appeared in The Algorithm, our weekly newsletter on AI. To get stories like this in your inbox first, sign up here.
