AI - ARTIFICIAL INTELLIGENCE
AXIOS VIEWPOINT ... JANUARY 24TH 2023

What ChatGPT can't do


Illustration of an exclamation point with radial circles, arrows, and circles surrounding it
Illustration: Sarah Grillo/Axios

Original article: https://www.axios.com/2023/01/24/chatgpt-errors-ai-limitations
Peter Burgess COMMENTARY


Written by Scott Rosenberg

AXIOS Technology

Jan 24, 2023

Impressive as ChatGPT is, its current version has some severe limitations, as even its creators acknowledge.

The big picture: The AI tool can put together answers to a lot of questions, but it doesn't actually 'know' anything — which means it has no yardstick for assessing accuracy, and it stumbles over matters of common sense as well as paradoxes and ambiguities.

  • OpenAI notes that ChatGPT 'sometimes writes plausible-sounding but incorrect or nonsensical answers ... is often excessively verbose ... [and] will sometimes respond to harmful instructions or exhibit biased behavior.'
Details: ChatGPT can't distinguish fact from fiction. For sure, humans have trouble with this too — but they understand what those categories are.
  • As a result, it confidently asserts obvious inaccuracies, like 'it takes 9 women 1 month to make a baby.'
  • It 'hallucinates' — that is, makes stuff up — at a rate that one expert pegs at 15%–20% of the time.
  • It can't tell us where its information comes from, which makes its reliability hard to assess.
  • Its information is outdated. Today ChatGPT's knowledge of the world ends sometime in 2021, though this is probably one of the easier problems to fix.
  • It tries not to provide biased, hateful or malicious responses, but users have been able to defeat its guardrails.
  • According to Time, OpenAI used Kenyan workers to label violent or explicit content, including child sexual abuse material, so ChatGPT could learn not to repeat such content.
  • It can't intuit what users really want from it, so its responses can vary widely in response to small differences in how questions are phrased.
What's next: Although OpenAI and other AI companies will keep pushing to improve accuracy, reduce bias and eliminate other problems, no one knows whether the technology's drawbacks can be overcome — or whether ChatGPT and its successors might ever become truly dependable.

Yes, but: It doesn't look like anyone in the industry is going to let that stop them from widely deploying the technology.

Go deeper:
  • How ChatGPT became the next big thing
  • Newsrooms reckon with AI following CNET saga
  • Why Microsoft is betting big on ChatGPT
  • The chatter around ChatGPT
  • What's next for ChatGPT




Copyright © 2005-2021 Peter Burgess. All rights reserved.