Meta, OpenAI, Anthropic and Cohere A.I. models all make stuff up — here's which is worst

Key Points
  • Researchers from Arthur AI tested top AI models from Meta, OpenAI, Cohere and Anthropic, and found that some models make up facts, or "hallucinate," significantly more than others, according to a Thursday report.
  • Cohere's AI hallucinated the most, researchers found, and Meta's Llama 2 hallucinated more overall than GPT-4 and Claude 2.
  • Overall, GPT-4 performed the best of all models tested, and researchers found it hallucinated less than its prior version, GPT-3.5. On math questions, for example, it hallucinated between 33% and 50% less, depending on the category.

If the tech industry's top AI models had superlatives, Microsoft-backed OpenAI's GPT-4 would be best at math, Meta's Llama 2 would be most middle of the road, Anthropic's Claude 2 would be best at knowing its limits and Cohere AI would receive the title of most hallucinations — and most confident wrong answers.

That's all according to a Thursday report from researchers at Arthur AI, a machine learning monitoring platform.

The research comes at a time when misinformation stemming from artificial intelligence systems is more hotly debated than ever, amid a boom in generative AI ahead of the 2024 U.S. presidential election.

It's the first report "to take a comprehensive look at rates of hallucination, rather than just sort of ... provide a single number that talks about where they are on an LLM leaderboard," Adam Wenchel, co-founder and CEO of Arthur, told CNBC.

AI hallucinations occur when large language models, or LLMs, fabricate information entirely, behaving as if they are spouting facts. One example: In June, news broke that ChatGPT cited "bogus" cases in a New York federal court filing, and the New York attorneys involved may face sanctions. 

In one experiment, the Arthur AI researchers tested the AI models in categories such as combinatorial mathematics, U.S. presidents and Moroccan political leaders, asking questions "designed to contain a key ingredient that gets LLMs to blunder: they demand multiple steps of reasoning about information," the researchers wrote.
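
A harness for this kind of test can be quite small. Below is a minimal sketch in the spirit of that experiment; the probe questions, reference answers, model names and the query_model() helper are all hypothetical stand-ins, not Arthur AI's actual methodology:

```python
# Minimal sketch of a hallucination probe in the spirit of the experiment
# described above. The probes, model names, and query_model() helper are
# hypothetical stand-ins; a real harness would wrap each vendor's API client.

PROBES = [
    # Multi-step reasoning questions with a single verifiable answer.
    {
        "category": "combinatorial mathematics",
        "prompt": "How many distinct 5-card hands can be dealt from a standard 52-card deck?",
        "reference": "2,598,960",
    },
    {
        "category": "U.S. presidents",
        "prompt": "Which U.S. president served immediately before the one who resigned in 1974?",
        "reference": "Lyndon B. Johnson",
    },
]

MODELS = ["gpt-4", "gpt-3.5", "claude-2", "llama-2", "command"]


def query_model(model: str, prompt: str) -> str:
    """Hypothetical helper: send `prompt` to `model` and return its reply."""
    raise NotImplementedError("wire up each vendor's API client here")


def hallucination_rate(model: str) -> float:
    """Fraction of probes where the reference answer does not appear in the
    model's reply: a crude proxy for a confidently wrong, hallucinated answer."""
    wrong = 0
    for probe in PROBES:
        reply = query_model(model, probe["prompt"])
        if probe["reference"].lower() not in reply.lower():
            wrong += 1
    return wrong / len(PROBES)
```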

Overall, OpenAI's GPT-4 performed the best of all models tested, and researchers found it hallucinated less than its prior version, GPT-3.5. On math questions, for example, it hallucinated between 33% and 50% less, depending on the category.

Meta's Llama 2, on the other hand, hallucinates more overall than GPT-4 and Anthropic's Claude 2, researchers found.

In the math category, GPT-4 came in first place, followed closely by Claude 2, but in U.S. presidents, Claude 2 took the first place spot for accuracy, bumping GPT-4 to second place. When asked about Moroccan politics, GPT-4 came in first again, and Claude 2 and Llama 2 almost entirely chose not to answer.

In a second experiment, the researchers tested how much the AI models would hedge their answers with warning phrases to avoid risk (think: "As an AI model, I cannot provide opinions").

When it comes to hedging, GPT-4 had a 50% relative increase compared to GPT-3.5, which "quantifies anecdotal evidence from users that GPT-4 is more frustrating to use," the researchers wrote. Cohere's AI model, on the other hand, did not hedge at all in any of its responses, according to the report. Claude 2 was most reliable in terms of "self-awareness," the research showed, meaning accurately gauging what it does and doesn't know, and answering only questions it had training data to support.
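
Hedging of this sort can be approximated with simple phrase matching. Below is a minimal sketch of such a check; the phrase list and the hedge_rate() helper are illustrative assumptions, not the report's actual criteria:

```python
# Minimal sketch of a hedging check like the one described above. The phrase
# list is an illustrative assumption, not the report's actual method.

HEDGE_PHRASES = [
    "as an ai model",
    "as an ai language model",
    "i cannot provide",
    "i'm not able to",
]


def hedge_rate(responses: list[str]) -> float:
    """Fraction of replies that contain at least one hedging phrase."""
    if not responses:
        return 0.0
    hedged = sum(
        1 for reply in responses
        if any(phrase in reply.lower() for phrase in HEDGE_PHRASES)
    )
    return hedged / len(responses)


# Hypothetical usage: a "50% relative increase" would mean
# hedge_rate(gpt4_replies) / hedge_rate(gpt35_replies) == 1.5.
```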

A spokesperson for Cohere pushed back on the results, saying, "Cohere's retrieval augmented generation technology, which was not in the model tested, is highly effective at giving enterprises verifiable citations to confirm sources of information."

The most important takeaway for users and businesses, Wenchel said, was to "test on your exact workload," later adding, "It's important to understand how it performs for what you're trying to accomplish."

"A lot of the benchmarks are just looking at some measure of the LLM by itself, but that's not actually the way it's getting used in the real world," Wenchel said. "Making sure you really understand the way the LLM performs for the way it's actually getting used is the key."
