OpenAI warns that its new ChatGPT Agent could aid dangerous bioweapon development

OpenAI's latest product makes it easier to automatically gather data, create spreadsheets, book travel, and build slide decks – and, just maybe, to build a biological weapon. ChatGPT Agent, a new agentic AI tool that can take action on a user's behalf, is the first product OpenAI has classified as having a "high" capability for biorisk.
This means the model could provide meaningful assistance to "novice" actors and enable them to create known biological or chemical threats. The real-world implication is that biological or chemical terror events by non-state actors could become more likely, according to OpenAI's "Preparedness Framework," which the company uses to track and prepare for new risks of severe harm from its frontier models.
"Some might think that biorisk is not real, and that models only provide information that could be found through search. That may have been true in 2024, but it is emphatically not true today. Based on our evaluations and those of our experts, the risk is very real," wrote Boaz Barak, a member of OpenAI's technical staff.
He added: "While we cannot say for sure that this model can enable a novice to create severe biological harm, I believe it would have been deeply irresponsible to release this model without comprehensive mitigations such as the ones we have put in place."
OpenAI said that classifying the model as high-risk for bio-misuse was a "precautionary approach," and that the classification has triggered additional safeguards for the tool.
Keren Gu, a safety researcher at OpenAI, said that although the company did not have definitive evidence that the model could enable a novice to create severe biological harm, it had activated safeguards anyway. These safeguards include having ChatGPT Agent refuse prompts that could be intended to help someone produce a bioweapon, systems that flag unsafe requests for expert review, strict rules blocking risky content, rapid response to emerging problems, and robust monitoring for any signs of misuse.
One of the key challenges in mitigating bioweapon risk is that the same capabilities could unlock life-saving medical breakthroughs, which is one of the great promises of advanced AI models.
The company has grown increasingly concerned about the potential for its models to be misused in bioweapon development. In a blog post last month, OpenAI announced it was stepping up safety testing to reduce the risk of its models being used to help create biological weapons. The AI lab warned that without these precautions, models could soon be capable of "novice uplift" – enabling individuals with little scientific background to develop dangerous weapons.
"Unlike nuclear and radiological threats, obtaining materials is less of a barrier to creating bio threats, and so security depends to a greater extent on the scarcity of knowledge and laboratory skills," Barak said. "Based on our evaluations and those of external experts, ChatGPT Agent, as deployed, could narrow that knowledge gap and offer advice closer to that of a subject-matter expert."
ChatGPT Agent
OpenAI's new ChatGPT feature is an attempt to capitalize on one of the buzziest, and riskiest, areas of AI development: agents.
The new feature works like a personal assistant, able to handle tasks such as booking restaurant reservations, shopping online, and organizing lists of job candidates. Unlike previous versions, the tool can use a virtual computer to actively control web browsers, interact with files, and navigate across applications such as spreadsheets and slide decks.
The company merged the teams behind Operator, its first AI agent, and Deep Research, a tool developed to conduct multi-step online research for complex tasks, into a single group that developed the new tool.
AI labs are currently racing to build agents that can independently manage complex digital tasks, and the launch follows similar releases by Google and Anthropic. Big tech companies see AI agents as a commercial opportunity, as businesses increasingly move to implement AI in workflows and automate some tasks.
OpenAI has acknowledged that greater autonomy introduces more risk, and it emphasizes user control to mitigate those risks. For example, the agent asks for permission before taking significant actions, and it can be paused, redirected, or stopped by the user at any time.
2025-07-18 13:59:00