WASHINGTON — The surging popularity of ChatGPT has raised concerns about the future of learning, but several experts say educators should embrace the tool.
ChatGPT – short for Chat Generative Pre-trained Transformer – is an artificial intelligence chatbot developed by San Francisco-based startup OpenAI and released in November 2022.
ChatGPT takes written input from users and generates human-like responses through natural language processing – allowing it to write papers, speeches and poems, and even generate code. Its robust abilities have caused educators to worry about declines in learning and academic integrity in their classrooms.
Some of the nation’s largest school districts, like New York City’s Department of Education, Baltimore County Public Schools, Oakland Unified in California, and Seattle Public Schools, have already moved to block ChatGPT from their devices and networks, citing concerns about cheating and plagiarism.
Edward Maloney, an English professor and executive director of the Center for New Designs in Learning and Scholarship at Georgetown University, who helps other educators think about course design and assignments, said ChatGPT is just another opportunity to reimagine teaching.
“The most important thing is to be transparent, to be open, to help your students understand what these tools can do and what they can’t do, and to potentially incorporate them into teaching experiences,” Maloney said.
He added that ChatGPT is no different from other tools that have augmented students’ abilities, such as calculators and the internet. The way to adapt, he said, is to create assignments that require more than what the tool is capable of.
The tool’s rapid growth has also concerned many educators. The platform is estimated to have reached 100 million monthly active users just two months after its launch, according to a research report by investment banking company UBS. By comparison, it took TikTok approximately nine months to reach 100 million users and Instagram more than two years, according to Sensor Tower, an app analysis firm.
Naomi Baron, professor emerita of world languages and cultures at American University, said educators will now have to be more reflective when evaluating essays to determine whether a student wrote them. She said the “foolproof” way to do this is by talking with students about their ideas, offering feedback, and having them submit multiple drafts.
“That doesn’t happen in most classes and most classrooms in the United States,” Baron said. “Therefore, we’re going to have to figure out something else as a way of stimulating and then assessing students’ thinking about the kinds of issues they write about in essays or term papers.”
Baron, who has spent time testing out the chatbot, said educators can be alert for high frequencies of words like “the,” “a,” and “is” in papers because ChatGPT has been designed to predict what the next word will be based on the large dataset of texts it draws from. She also said a paper being “too perfect” may be an indicator that it was written by artificial intelligence.
OpenAI, in a statement to Medill News Service, said that the company doesn’t want ChatGPT to be used for misleading purposes in schools or elsewhere. The company has since released a “classifier” to help educators distinguish between human-written and AI-written text.
“The classifier aims to help mitigate false claims that AI-generated text was written by a human. However, it still has a number of limitations – so it should be used as a complement to other methods of determining the source of text instead of being the primary decision-making tool,” an OpenAI spokesperson said.
The concerns of educators may be justified. More than 60% of college students and 95% of high school students have admitted to some form of cheating, according to the International Center for Academic Integrity (ICAI), a research center that conducts wide-scale surveys of academic integrity.
But many school districts and universities are still assessing whether this new technology actually poses a threat to academia and how to handle it.
D.C. Public Schools is one of them, and it remains unclear whether the district will go as far as banning the app from its 115 schools.
“DC Public Schools has been made aware of the issues surrounding Open AI’s ChatGPT. We are having internal discussions with our experts to explore what measures we can take,” a spokesperson said in a statement to Medill News Service.
Stanford University said that its faculty and lecturers continue to design assignments to develop students’ thinking and writing skills by requiring them to draft and revise their ideas while citing sources and evidence.
“These learning processes are central to the ways in which Stanford prepares students for lives of active citizenship, and faculty will continue to guide students on the role of emerging tools in their courses,” a spokesperson said.
The university also said its Board of Judicial Affairs (BJA) has been monitoring AI tools and will be discussing how they relate to the university’s honor code.
Kenny Ching, an assistant professor at Worcester Polytechnic Institute, said incorporating this new technology may be easier said than done, but it is necessary.
He said schools that ban ChatGPT from their campuses create an artificial barrier, because students will find ways around it. Bans merely slow down the process, he said, and it’s better for educators to embrace the tool.
He also said banning the tool will only make education look more distant from the real world.
“Students are going to be using it in the future, and if we are supposed to be preparing students for effective work in the future, why are we preventing them from using that in the classroom?” Ching said.