
OpenAI promises greater transparency on model hallucinations and harmful content

OpenAI has launched a new web page, the Safety Evaluations Hub, to publicly share information such as the hallucination rates of its models. The hub will also show whether a model produces harmful content, how well it follows instructions, and how it holds up against attempted jailbreaks.

The company says the new page will add a measure of transparency at OpenAI. For context, OpenAI faces multiple lawsuits alleging that it used copyrighted material unlawfully to train its AI models. Notably, The New York Times has claimed the company accidentally deleted evidence in the newspaper's plagiarism case against it.

The Safety Evaluations Hub is meant to expand on OpenAI's system cards. Those only describe a model's safety measures at launch, while the hub is supposed to provide ongoing updates.

“As the science of AI evaluation evolves, we aim to share our progress on developing more scalable ways to measure model capability and safety,” OpenAI says in its announcement. “By sharing a subset of our safety evaluation results here, we hope this will not only make it easier to understand the safety performance of OpenAI systems over time, but also support community efforts to increase transparency across the field.” OpenAI adds that it is working toward more proactive communication on this topic across the company.


Interested parties can look through each of the hub's sections and see information on relevant models, such as GPT-4.1 through 4.5. OpenAI notes that the information provided in the hub is only a “snapshot,” and that readers should consult its system cards, evaluations, and other releases for further detail.

One major caveat to the entire Safety Evaluations Hub is that OpenAI is the party running these tests and choosing which results to share publicly. As a result, there is no way to guarantee the company will disclose all of its issues or concerns to the public.


2025-05-14 18:45:00
