Parents Testify Before US Senate, Saying AI Killed Their Children

Content Warning: This story discusses self-harm and suicide. If you are in crisis, please call, text, or chat with the Suicide and Crisis Lifeline at 988, or contact the Crisis Text Line by texting 741741.

Parents of children who died by suicide following extensive interactions with AI chatbots will testify this week at a Senate hearing about the potential risks of AI chatbots, especially for minors.

The session, titled "Examining the Harm of AI Chatbots," will be held on Tuesday by the US Senate Judiciary Subcommittee on Crime and Counterterrorism, a bipartisan panel chaired by Republican Josh Hawley of Missouri. It will be broadcast on the Judiciary Committee's website.

Among the parents expected to testify is Megan Garcia, a Florida mother who in 2024 filed a lawsuit against the Google-tied chatbot platform Character.AI, as well as the company's founders, Noam Shazeer and Daniel de Freitas, and Google itself, over the suicide of her teenage son, who had engaged in romantic interactions with the platform's chatbots. Garcia alleges that the platform emotionally and sexually abused her son, who suffered a mental breakdown and a deepening detachment from reality that ultimately led him to take his own life.

Also scheduled to speak to the senators are Matt and Maria Raine, California parents who filed a lawsuit against the ChatGPT maker OpenAI after their 16-year-old son, Adam Raine, died by suicide. According to the family's lawsuit, Adam engaged in wide-ranging conversations about his suicidality with ChatGPT, which gave him advice on specific suicide methods and encouraged the teenager, who had expressed a desire to share his dark feelings with his parents, to continue hiding his suicidality from his loved ones.

Both lawsuits are ongoing, and Google and Character.AI have pushed back against the allegations.

In response to the litigation, both companies have made, or at least publicly promised, broad moves to strengthen protections for minor users and users in crisis, efforts that include installing new guardrails that direct at-risk users to real-world mental health resources and implementing parental controls.

However, Character.AI has repeatedly declined to provide us with information in response to our extensive reporting on easily exploitable gaps in the platform's content moderation.

Regardless of the promised safety improvements, the legal battles have raised important questions about minors and AI safety as AI chat tools become ubiquitous in young people's lives, amid a stark lack of regulation designed to govern chatbot platforms or ensure enforceable, industry-wide safety standards.

In July, a worrying report from the nonprofit advocacy group Common Sense Media found that more than half of American teens regularly engage with AI companions, including the chatbot characters hosted by Character.AI. The report, which surveyed a group of American teenagers aged 13 to 17, made clear that although some teens seem to keep healthy boundaries around the technology, others reported finding their human relationships less satisfying than their interactions with digital companions. Regardless, the key takeaway was that AI companions are already intertwined with youth culture, and kids are definitely using them.

"The most striking finding for me was just how mainstream AI companions have already become among many teens," Dr. Michael Robb, head of research at Common Sense Media, told Futurism at the time of the report. "More than half of them say they use it multiple times a month, which is what I would qualify as regular usage. So that alone was kind of eye-opening."

Meanwhile, general-use chatbots like ChatGPT are popular among teens, and chatbots remain embedded in youth-favored social media platforms like Snapchat and Meta's Instagram. And speaking of Meta, the Big Tech behemoth was recently shown by Reuters to have maintained an official policy document describing it as appropriate for its easily accessible chatbots to engage children in "romantic or sensual conversations." The document even showed numerous examples of chatbot interactions the company deemed acceptable, which, yes, included sensual talk about children's bodies and romantic dialogue between underage human users and adult-presenting AI personas.

The session also comes days after the FTC announced an inquiry into seven major tech companies over concerns about AI chatbots and minor safety, including Character.AI parent Character Technologies, Google parent Alphabet, OpenAI, xAI, Snap, Instagram, and Meta.

"The FTC inquiry seeks to understand what steps, if any, companies have taken to evaluate the safety of their chatbots when acting as companions," reads the FTC's announcement of the investigation, "to limit the products' use by and potential negative effects on children and teens, and to apprise users and parents of the risks associated with the products."

More on AI and child safety: Stanford researchers say no one under 18 should be using AI companion chatbots

2025-09-15 21:07:00
