Ontario (Canada): Of all the reactions elicited by ChatGPT, the chatbot from the American for-profit company OpenAI that produces grammatically correct responses to natural-language queries, few have been as strong as those of educators and academics. Academic publishers have moved to ban ChatGPT from being listed as a co-author and to issue strict guidelines outlining the conditions under which it may be used. Leading universities and schools around the world, from France's renowned Sciences Po to many Australian universities, have banned its use.
These bans are not merely the actions of academics worried that they won't be able to catch cheaters, nor are they just about catching students who copy sources without attribution. Rather, the severity of these actions reflects a question that is not getting enough attention in the endless coverage of OpenAI's ChatGPT chatbot: why should we trust anything it outputs?
This is a vitally important question, as ChatGPT and programmes like it can easily be used, with or without acknowledgement, in the information sources that form the foundation of our society, especially academia and the news media. Based on my work on the political economy of knowledge governance, I believe these academic bans are a proportionate reaction to the threat ChatGPT poses to our entire information ecosystem. Journalists and academics alike should be wary of using it.
Based on its output, ChatGPT might seem like just another information source or tool. In reality, however, ChatGPT, or rather the means by which it produces its output, is a dagger aimed directly at the credibility of journalism and academia as authoritative sources of knowledge. It should not be taken lightly.
Trust and information: Think about why we see some information sources or types of knowledge as more trusted than others. Since the European Enlightenment, we've tended to equate scientific knowledge with knowledge in general. Science is more than laboratory research: it's a way of thinking that prioritises empirically based evidence and the pursuit of transparent methods regarding evidence collection and evaluation. And it tends to be the gold standard by which all knowledge is judged.
For example, journalists have credibility because they investigate information, cite sources and provide evidence. Even though reporting may sometimes contain errors or omissions, that doesn't undermine the profession's authority. The same goes for opinion writers, especially academics and other experts, because we draw our authority from our status as experts in a subject.
Expertise involves a command of the sources that are recognised as comprising legitimate knowledge in our fields. Most op-eds aren't citation-heavy, but responsible academics can point you to the thinkers and the work they're drawing on. And those sources are themselves built on evidence that a reader can check for themselves.
Truth and outputs: Because human writers and ChatGPT seem to produce the same output, sentences and paragraphs, it's understandable that some people may mistakenly confer this scientifically sourced authority onto ChatGPT's output. But the fact that both ChatGPT and reporters produce sentences is where the similarity ends. What matters most, the source of authority, is not what they produce but how they produce it.