ChatGPT's Impact on Academia:
A Canary in the Coalmine of a Rapidly Changing Landscape

14 January 2023

Since its release just over a month ago, ChatGPT has caused a stir in the academic and education sectors, triggering a debate about the future of research and teaching. The AI tool is now banned in New York City public schools out of fear that students are using it to write essays. Lecturers at universities in Australia and the UK have been encouraged to review their assessment methods and return to ‘pen and paper’ exams.

ICML, one of the world’s most prestigious machine learning conferences, has banned researchers from using ChatGPT to write their scientific papers. In a preprint posted on bioRxiv, researchers show that ChatGPT can create research-paper abstracts so realistic that scientists may have difficulty distinguishing them from those written by humans. Some researchers have tried to make the point by listing the bot as a co-author on their peer-reviewed publications.

The possibility of co-authorship with a bot took the online world by storm. People began to question the future of scientific publication: will metrics such as the number of co-authored papers and citation indices continue to be relevant indicators of an individual’s academic career? Particularly now that they know ChatGPT can pass the multiple-choice portion of the Multistate Bar Exam (MBE) and one of the most difficult standardised tests in the US: the medical licensing exam (USMLE).

This is all, in fact, a canary-in-the-coalmine situation for academics, indicating that there are deeper changes ahead.

Although we call AI tools like ChatGPT chatbots, given their research capability perhaps we can also call them ‘ResearchBots’: a sub-category of chatbots capable of mimicking some research activities, such as reviewing the literature, summarising difficult concepts and theories, writing op-eds and essays, or answering difficult disciplinary questions.

The response from universities, as places of higher learning and research, however, has been relatively weak and inadequate. Possible reasons include the emergent and evolving nature of the issue, limited resources and competing priorities, and the divide among academics over the implications of the technology for science.

In this short article, I briefly explain what ChatGPT is and discuss two areas of challenge that require attention: first, policy and regulation at the university level regarding research and teaching; and second, the public understanding of science.

New AI generative models: ChatGPT and Galactica

Before discussing the implications of ResearchBots for research and higher education, let’s first talk about generative AI models.

2022 became the year of ground-breaking generative AI models creating astonishing images and coherent texts. Recently, two AI text generators, ChatGPT and Galactica, have sparked a flurry of excitement, controversy, and debate. ChatGPT and Galactica are both large language models: systems trained on enormous collections of text, such as Wikipedia pages, articles, and books (remember: English texts only), to identify patterns in how people talk about things and to learn to make the best guesses about which bits of text belong together in a sequence.
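To make that ‘best guess’ idea concrete, here is a minimal, illustrative sketch in Python of next-word prediction using a toy bigram model. This is not how ChatGPT actually works (real models use neural networks trained on billions of words, not simple word-pair counts), and the corpus here is invented for illustration:

    from collections import Counter, defaultdict

    # Toy corpus standing in for the web-scale text these models train on.
    corpus = "the cat sat on the mat . the cat ate the fish .".split()

    # Count which word follows which: a tiny bigram 'language model'.
    following = defaultdict(Counter)
    for current_word, next_word in zip(corpus, corpus[1:]):
        following[current_word][next_word] += 1

    def predict_next(word):
        """Return the word most often seen after `word` in the corpus."""
        candidates = following.get(word)
        return candidates.most_common(1)[0][0] if candidates else "."

    # Generate text by repeatedly guessing the most likely next word.
    word, output = "the", ["the"]
    for _ in range(4):
        word = predict_next(word)
        output.append(word)
    print(" ".join(output))  # prints: the cat sat on the

Scaled up by many orders of magnitude, with neural networks replacing the word-pair counts, this ‘guess the next word’ loop is the basic mechanism behind the fluent text these models produce.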

Trained on millions of open-access papers, textbooks, lecture notes, and scientific websites, Galactica was designed to act like a research assistant with multiple skills, such as summarising papers, writing scientific code, and annotating molecules and proteins: all skills to support the human researcher in collecting, summarising, and reasoning about scientific information. The result, however, was very disappointing. Just three days into its public demo, the bot was taken down by its owner, Meta. The problem? It generated fake and racist research presented with a very authoritative look.

In contrast, OpenAI’s ChatGPT won people’s hearts, reaching one million users only five days after its launch. Aside from writing jokes and TV episodes, ChatGPT showed a high capability for writing computer code and essays. The bot has stunned academics, in particular, with its skill in explaining scientific concepts, a quality expected from tutors, subject experts, and generally those who have had academic training. People shared screenshots of their impressively solid conversations with the bot on Twitter, captioned with excitement, fears, and the tricks they used to circumvent the bot’s guardrails. The bot remembers the user’s questions and builds on the earlier turns of the conversation, making the experience a brilliant and weird chat.

Now, a student can think of taking their assignment to ChatGPT and getting back a natural, well-researched essay. A researcher can think of using it as a collaborator to bounce ideas off and gain new perspectives.

A lot of hocus-pocus 

You type a few words (i.e. a prompt) and ChatGPT quickly provides realistic and intelligent-sounding content, like someone who has done their research to answer your question. But sometimes the answer is nothing more than hocus-pocus, far from ‘useful’. ChatGPT still gets a lot wrong. The generated content is a mix of truth and fiction, sometimes with fake or irrelevant references and quotes.

In neural language models, this is commonly known as the problem of ‘hallucination’: the system produces fluent statements that are factually invalid. Sometimes the result is like having a conversation with an encyclopedia that has no logical or reasoning ability: highly polished, but nonsense.

For this very reason, Stack Overflow, the most popular discussion platform for software developers, very quickly banned users from posting answers created by ChatGPT. The generated answers look like correct ones, but on average they were not. This is of course harmful both to the platform’s reputation and to people looking for correct answers from subject experts. To handle the situation, the platform has given its moderators the power to impose sanctions on people who use ChatGPT to answer questions, even if the answers are correct.

What the platform is actually doing is buying time to think about and decide how to regulate ChatGPT and equivalent tools so as to protect the quality of discussion in the forum. This is the same policy some schools and educational institutions around the world are currently adopting by banning students from using the tool to complete their assignments.

Things advance fast and we need to adjust   

As we move into the future, we can expect to see more from the world of language models. Microsoft is now exploring adding ChatGPT to Word, email, and its other products and services.

With the feedback ChatGPT is currently receiving from millions of users, and a more powerful GPT-4 model expected soon, these bots may improve significantly. One trend that is already beginning to emerge is the development of models that can cite external sources to back up their claims (prototypes already exist). This would be a game-changer for anyone looking to do research or make important decisions; a sketch of the idea follows below. Some even speculate about more specialised disciplinary chatbots that are expert in a particular subject or field. And of course, there are people who have already started getting ChatGPT to write malware.
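To illustrate what ‘citing external sources’ could look like, here is a minimal, hypothetical sketch in Python. The sources and the keyword-overlap ‘retriever’ are invented for illustration, and real systems use far more sophisticated retrieval, but the basic shape is the same: fetch a relevant passage from a trusted corpus first, then produce an answer that points back to it:

    # A toy 'cited answer' pipeline: retrieve a source, then answer with a reference.
    # Real retrieval-augmented systems use learned embeddings, not keyword overlap.
    sources = {
        "WHO fact sheet": "Vaccines train the immune system to recognise pathogens.",
        "IPCC report": "Human activities are the dominant cause of recent warming.",
    }

    def retrieve(question):
        """Return the (title, text) pair sharing the most words with the question."""
        words = set(question.lower().split())
        return max(sources.items(),
                   key=lambda item: len(words & set(item[1].lower().split())))

    def answer_with_citation(question):
        title, text = retrieve(question)
        return f"{text} [source: {title}]"

    print(answer_with_citation("what is the cause of recent warming"))
    # prints: Human activities are the dominant cause of recent warming. [source: IPCC report]

The appeal of such designs is that every claim comes with a pointer the reader can check, which is exactly the property the plain chat interface lacks.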

For good or bad, it wouldn’t be surprising to see a new genre of ResearchBots developed, embedded into different apps and platforms, and becoming pervasive in training, research, and the public understanding of science.


Universities therefore need to adjust quickly. Two domains of challenge require serious attention:

First, policy and regulation at the university level regarding research and teaching; and second, how science communication could be different in the age of ResearchBots.

1) Research and Teaching 

Let’s get back to the results of the study mentioned at the beginning. In the experiment, researchers used ChatGPT to produce scientific-looking abstracts. They found that the abstracts written by ChatGPT could fool both scientists and an AI-detection tool. Blinded human reviewers correctly identified ChatGPT-generated abstracts only 68% of the time, and correctly identified genuine abstracts 86% of the time. This simply shows that even academics trained in a discipline had difficulty telling whether a text was generated by AI or written by a real scientist.

So the integrity of scientific research is one area that requires closer attention and action.

Universities must develop new policies to regulate the use of these bots for research and training purposes. The very notion of ‘research infrastructure’ needs to be revisited, as research activities at the university could look different in the future. Just think about how these bots could change the way researchers interact with archives, structured scientific information, computing, software, and communication.

Smart essay writing could be another challenge facing universities. Many professors who tested ChatGPT with questions they often give students in assignments were impressed by the quality of the essays. Of course, this is not an entirely new problem for universities, which are already dealing with plagiarism and the outsourcing of essay and thesis writing to human third parties. Course convenors must start to rethink their assessments, prioritising critical thinking and analytical skills over writing skills. Reflexive writing and creative assignments can be important instruments for engaging students with a course in new ways.

ChatGPT has the potential to be an invaluable tool for writing assignments, much as calculators are essential tools for students in maths classes and engineering courses. For example, students could play with the tool (if it remains free, which is unlikely) to get a response to their prompt and then think about ways to enhance the writing style, and particularly the ‘reasoning’, through revision.

All said, we shouldn’t forget that students are the main stakeholders in this change. They need to be actively included and seen as an integral part of this conversation.
 

2) Public Understanding of Science 

The biggest challenge, however, will be in the science communication domain, where the public uses these AI tools to find out what science says about different issues, particularly controversial ones. Unlike with education and teaching, there has been very little interest or concern shown on social media about the long-term implications of these bots for science communication: what happens when people turn to ResearchBots to understand complex issues and topics such as health and the environment?

The current design of the bot allows people to create an answer and share it with others without the expertise, or the willingness, to verify that the answer is correct or legitimate. Because answers are so easy to generate, there will be a lot of them, with no expert or scientist in the loop to moderate the content.

This could lead to an explosion of misinformation with unprecedented impact on society’s trust in science and academics.

In addition, we need to consider the risk of adversaries abusing these AI tools to produce biased content. Research shows that these tools can achieve high accuracy on their primary task while also being manipulated to satisfy an adversarial objective. This means that when someone asks a chatbot to do their research on a topic, the bot can spin the summary of that research, making it positive, toxic, or supportive of a certain hypothesis, as chosen by the model’s owner or a hacker.

Now think about all the controversial topics that are already dividing societies, and how future ResearchBots have the potential to exacerbate, or mitigate, the conditions that contribute to conflict.


We need a holistic approach to understanding their long-term implications for science and society

Universities must take note of these changes and act quickly and, more importantly, holistically. ChatGPT is just one example of the language models that are going to change the way we teach and research in the years to come. Our approach should not be guided by our reaction to OpenAI’s ChatGPT, but by deeper reflection on how this new genre of AI tools, more generally, could reshape our societies in the next decade.

We need to start reimagining education, research, and science communication for a time when these technologies are pervasive in the lives of students, researchers, and the public.

Taking a holistic approach is therefore necessary. It allows us to consider the big picture and all its interconnected elements, not only to better understand the challenge but also to develop more effective solutions. It can help us anticipate potential unintended consequences and take a proactive approach to addressing them. But first, academics from all fields and disciplines need to familiarise themselves with language models and pay attention to their implications for their own teaching and research.

Of course, the technology is not there yet, but it is evolving rapidly. Universities need to prepare to face up to the risks and opportunities presented by new AI technologies.

 

Ehsan Nabavi is a Senior Lecturer at the Australian National Centre for the Public Awareness of Science and the Head of the Responsible Innovation Lab at the Australian National University.
