AI chatbots are getting worse over time — academic paper

Dwindling consumer interest in chatbots caused a drop in AI-sector revenues during the second quarter of 2024.


A recent study titled “Larger and more instructable language models become less reliable,” published in the journal Nature, found that AI chatbots are making more mistakes over time as newer models are released.

Lexin Zhou, one of the study’s authors, theorized that because AI models are optimized to always provide believable answers, seemingly correct responses are prioritized and pushed to the end user regardless of accuracy.

These AI hallucinations are self-reinforcing and tend to compound over time, a phenomenon exacerbated by using the outputs of older large language models to train newer ones, resulting in what researchers call “model collapse.”
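A toy simulation can make the intuition concrete. The sketch below is an illustrative assumption, not the study’s method: a one-dimensional Gaussian stands in for a generative model, and each “generation” is refit only on samples produced by the previous one, so the distribution’s spread tends to drift toward zero as the original data’s diversity is lost.

```python
import random
import statistics

# Toy illustration of "model collapse" (a sketch, not the Nature study's
# method): a 1-D Gaussian stands in for a generative model. Each generation
# is refit only on synthetic samples from the previous generation, and the
# fitted spread (sigma) tends to shrink over many generations.

random.seed(42)
mu, sigma = 0.0, 1.0  # generation 0: the "real" data distribution

for generation in range(1, 301):
    # Draw a small synthetic "training set" from the current model...
    samples = [random.gauss(mu, sigma) for _ in range(10)]
    # ...and fit the next model to that synthetic data alone.
    mu = statistics.fmean(samples)
    sigma = statistics.stdev(samples)
    if generation % 50 == 0:
        print(f"generation {generation:3d}: mu={mu:+.3f} sigma={sigma:.3f}")
```

With each pass, estimation noise compounds and the fitted distribution narrows, which is the statistical analogue of a model trained on AI-generated text gradually forgetting the tails of real data.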

Editor and writer Mathieu Roy cautioned users not to rely too heavily on these tools and to always check AI-generated search results for inconsistencies:

“While AI can be useful for a number of tasks, it’s important for users to verify the information they get from AI models. Fact-checking should be a step in everyone’s process when using AI tools. This gets more complicated when customer service chatbots are involved.”

To make matters worse, “There’s often no way to check the information except by asking the chatbot itself,” Roy asserted.

Related: OpenAI raises an additional $6.6B at a $157B valuation

The stubborn problem of AI hallucinations

Google’s Gemini artificial intelligence platform drew ridicule in February 2024 after it began producing historically inaccurate images, including depictions of people of color as Nazi officers and distorted renderings of well-known historical figures.

Unfortunately, incidents like this are far too common with the current iteration of artificial intelligence and large language models. Industry executives, including Nvidia CEO Jensen Huang, have proposed mitigating AI hallucinations by forcing AI models to conduct research and provide sources for every single answer given to a user.

However, these measures are already featured in the most popular AI and large language models, yet the problem of AI hallucinations persists.
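For illustration, the “answer with sources” pattern Huang describes might look like the following minimal sketch, which uses a naive keyword retriever in place of a real model or search index. Every name, document, and function here is a hypothetical stand-in, not a real API.

```python
# Minimal sketch of the "answer with sources" pattern: the system must
# ground each response in retrieved documents and cite them. The toy
# retriever below ranks documents by keyword overlap with the query.

DOCS = {
    "doc1": "Nature study: larger, more instructable language models become less reliable.",
    "doc2": "Reflection-Tuning lets a model critique and revise its own draft answers.",
    "doc3": "Gemini image generation drew criticism in February 2024 over inaccuracies.",
}

def retrieve(query: str, k: int = 2) -> list[str]:
    """Rank document IDs by naive keyword overlap with the query."""
    words = set(query.lower().split())
    scored = sorted(
        DOCS,
        key=lambda d: len(words & set(DOCS[d].lower().split())),
        reverse=True,
    )
    return scored[:k]

def answer_with_sources(query: str) -> str:
    """Every answer must cite the documents it was grounded in."""
    sources = retrieve(query)
    cited = "; ".join(f"[{s}] {DOCS[s]}" for s in sources)
    return f"Q: {query}\nGrounded in: {cited}"

print(answer_with_sources("Why do language models become less reliable?"))
```

The design point is that the citation step is mandatory, not optional: if retrieval returns nothing relevant, a production system would decline to answer rather than hallucinate.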

More recently, in September, HyperWrite AI CEO Matt Shumer announced that the company’s new 70B model uses a method called “Reflection-Tuning” — which purportedly gives the AI bot a way of learning by analyzing its own mistakes and adjusting its responses over time.
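HyperWrite has not published the full details of its training method, which operates at training time rather than at inference. The sketch below is therefore only a generic generate-critique-revise loop in the same spirit, with `toy_model` and `toy_critic` as hypothetical stand-ins for real model calls.

```python
# Generic generate-critique-revise loop illustrating the "reflection" idea:
# the model drafts an answer, critiques its own draft, and revises until the
# critique passes. This is NOT HyperWrite's actual Reflection-Tuning method.

def toy_model(prompt: str) -> str:
    # Stand-in for an LLM call; deliberately makes an arithmetic slip.
    return "2 + 2 = 5" if "2 + 2" in prompt else "I don't know."

def toy_critic(answer: str) -> str | None:
    # Stand-in for the model critiquing its own draft answer.
    if "2 + 2 = 5" in answer:
        return "Arithmetic error: 2 + 2 is 4, not 5."
    return None  # no problems found

def reflect(prompt: str, max_rounds: int = 3) -> str:
    answer = toy_model(prompt)
    for _ in range(max_rounds):
        critique = toy_critic(answer)
        if critique is None:
            break  # the draft passed self-review
        # Feed the critique back in; here we simply apply the correction.
        answer = "2 + 2 = 4"
    return answer

print(reflect("What is 2 + 2?"))  # -> "2 + 2 = 4"
```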

Magazine: How to get better crypto predictions from ChatGPT, Humane AI pin slammed: AI Eye

This article first appeared at Cointelegraph.com News
