Toronto: In his autobiography, American writer Mark Twain quotes, or perhaps misquotes, former British Prime Minister Benjamin Disraeli as saying: "There are three kinds of lies: lies, damned lies, and statistics." In a marvellous leap forward, artificial intelligence combines all three in a tidy little package.
ChatGPT and other generative AI chatbots like it are trained on vast datasets from across the internet to produce the statistically most likely response to a prompt. Their answers are not based on any understanding of what makes something funny, meaningful or accurate, but rather on the phrasing, spelling, grammar and even style of other webpages. They present their responses through what's called a conversational interface: the software remembers what a user has said and can carry on a conversation using context cues and clever gambits. It's statistical pastiche plus statistical panache, and that's where the trouble lies.
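To see what "statistically most likely response" means in miniature, consider this hypothetical sketch: a toy bigram model, vastly simpler than any real chatbot, that picks each next word purely from counts of which word followed which in its training text. (The training sentences here are invented for illustration.)

```python
from collections import Counter, defaultdict

# Toy "training data": the only text this model has ever seen.
training_text = (
    "the capital of malaysia is kuala lumpur . "
    "the capital of france is paris . "
    "the weather is nice today ."
).split()

# Count which word follows which (a bigram table).
followers = defaultdict(Counter)
for current_word, next_word in zip(training_text, training_text[1:]):
    followers[current_word][next_word] += 1

def most_likely_next(word: str) -> str:
    # Return the statistically most frequent follower --
    # no understanding involved, just counting.
    return followers[word].most_common(1)[0][0]

# Generate a "response" one most-likely word at a time.
word = "the"
response = [word]
for _ in range(6):
    word = most_likely_next(word)
    response.append(word)

print(" ".join(response))  # "the capital of malaysia is kuala lumpur"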
Unthinking, but convincing: When I talk to another human, it cues a lifetime of my experience in dealing with other people. So when a programme speaks like a person, it is very hard not to react as if one is engaging in an actual conversation: taking something in, thinking about it, responding in the context of both of our ideas.
Yet that's not at all what is happening with an AI interlocutor. It cannot think, and it has no understanding or comprehension of any sort. Presenting information to us as a human does, in conversation, makes AI more convincing than it should be. The software is pretending to be more reliable than it is, using human tricks of rhetoric to fake trustworthiness, competence and understanding far beyond its capabilities.
There are two issues here: is the output correct, and do people think that it is? The interface side of the software promises more than the algorithmic side can deliver, and the developers know it. Sam Altman, chief executive officer of OpenAI, the company behind ChatGPT, admits that ChatGPT is "incredibly limited" but "good enough at some things to create a misleading impression of greatness". That still hasn't stopped a stampede of companies from rushing to integrate the early-stage tool into their user-facing products (including Microsoft's Bing search) in an effort not to be left out.
Fact and fiction: Sometimes the AI is going to be wrong, but the conversational interface produces outputs with the same confidence and polish as when it is correct. For example, as science-fiction writer Ted Chiang points out, the tool makes errors when doing addition with larger numbers, because it doesn't actually have any logic for doing math.
It simply pattern-matches examples seen on the web that involve addition. While it might find examples for more common math questions, it just hasn't seen training text involving larger numbers, and it doesn't "know" the math rules a 10-year-old would be able to use explicitly.
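To make the distinction concrete, here is a hypothetical toy sketch (not how ChatGPT actually works internally) contrasting a pattern-matcher that only "knows" the sums it has memorised from its training text with a system that applies the actual rule of addition:

```python
# Toy illustration: memorised sums versus the rule of arithmetic.
# The memorised table and the fallback guess are both invented
# for illustration, not ChatGPT's real mechanism.

# Sums the pattern-matcher happened to see during "training".
memorised_sums = {
    ("2", "2"): "4",
    ("7", "8"): "15",
    ("10", "10"): "20",
}

def pattern_match_add(a: str, b: str) -> str:
    # Look the question up; if it was never seen, produce a
    # plausible-looking guess by stitching digits together,
    # the way a text predictor strings together likely characters.
    if (a, b) in memorised_sums:
        return memorised_sums[(a, b)]
    return a[0] + b[-1] + "0"  # looks like a number; isn't the sum

def rule_based_add(a: str, b: str) -> str:
    # The rule a 10-year-old can apply to any pair of numbers.
    return str(int(a) + int(b))

print(pattern_match_add("7", "8"))      # "15"  -- memorised, so correct
print(pattern_match_add("354", "139"))  # "390" -- confident and wrong
print(rule_based_add("354", "139"))     # "493" -- correct for any input
```

The toy pattern-matcher is right only when its memorised examples happen to cover the question. Yet the conversational interface presents its response as certain, no matter how wrong it is, as reflected in this exchange with ChatGPT.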
User: What's the capital of Malaysia?