
The More Scientists Work With AI, the Less They Trust It

Illustration by Taj Hartmann-Simkins/Future. Source: Getty Images

Scientists are a skeptical bunch; it's in the job description. But when it comes to artificial intelligence, researchers are growing increasingly distrustful of the technology's capabilities.

In a preview of its 2025 report on the impact of technology on research, academic publisher Wiley has released preliminary findings on attitudes toward artificial intelligence. One surprising takeaway: scientists reported trusting AI less than they did in 2024, when the technology was decidedly less advanced.

For example, in the 2024 edition of the survey, 51% of scientists surveyed were concerned about potential "hallucinations," a widespread issue in which large language models (LLMs) present completely fabricated information as fact. That number jumped to a whopping 64% in 2025, even as AI use among researchers rose from 45% to 62%.

Concern about security and privacy rose 11% from last year, while concerns about ethical AI and transparency have also increased.

In addition, there was a significant reduction in hype compared to last year, when AI research startups dominated one headline after another. In 2024, scientists surveyed said they believed AI already outperformed human capabilities in more than half of all use cases. In 2025, that figure plummeted to less than a third.

These findings echo previous research showing that the more people learn about how AI works, the less they trust it. The opposite was also true: the biggest fans of AI were those who understood the least about the technology.

While more studies are needed to show how widespread this phenomenon is, it’s not hard to guess why professionals are starting to have doubts about their algorithmic assistants.

For one thing, those hallucinations are a serious issue. They have already caused major disruptions in the courts, medical practice, and even travel. And there is no easy fix; in May, tests showed that AI models were hallucinating more often even as they became more technically powerful.

There is also the thorny issue of AI as a profit-making tool. Experts say users overwhelmingly prefer LLMs that answer confidently over those that admit when they can't find data or provide an accurate answer, even when the confident response is entirely made up. If a company like OpenAI, the maker of ChatGPT, were to eliminate hallucinations for good, it would likely scare off users in droves.

So, if you’re not sure what to make of all the hype around AI, ask a researcher — they’ll likely be happy to burst your bubble.

More about artificial intelligence: AI chatbots are getting worse at summarizing data


2025-10-13 11:00:00
