Since the widely acclaimed release of GPT-4, generative AI has been touted by many as the savior of education. Case in point: British education expert Sir Anthony Seldon has predicted that by 2027, AI will replace human teachers on a global scale.
Unfortunately, more than 40 years of academic research exploring human cognition suggests that generative AI could also harm learning at all levels, from online tutoring to employee training, for three reasons.
Problem one: Empathy
Intellectual heavyweights from Bill Gates to Sal Khan have argued that the personalized tutoring enabled by ChatGPT and other generative AI tools based on large language models will close achievement gaps across education. However, individualized instruction is not the most important driver of learning. After analyzing data from thousands of studies, educational researcher John Hattie recently reported that a strongly empathetic learner-teacher relationship has two and a half times greater impact on learning than personalization.
“Using AI to help learners avoid the tedious process of memorizing facts is the best way to ensure higher-order thinking skills will never emerge.”
The hormone oxytocin is the foundation of empathy. When two individuals connect and release oxytocin simultaneously, their brain activity begins to synchronize—a process known as “neuronal coupling” that leads them to not only learn from one another but to quite literally think alike. Given that algorithms have neither a brain nor oxytocin, it is biologically impossible for humans and AI to develop an empathetic relationship: the transpersonal nature of empathy precludes its emergence in the digital realm.
This is one major reason why students operating in purely digital environments perform worse and are significantly less likely to graduate than comparable students engaged in face-to-face instruction. Without empathy, students become passive receivers of information with little impetus to push through the requisite struggles inherent in the learning process.
Even among highly skilled human educators, failure to cultivate an empathetic relationship inevitably hinders learning. And this only serves as a further warning against AI, as it reveals that neither knowledge nor pedagogy (presumably the forte of digital tutors) is sufficient for effective teaching.
Problem two: Knowledge
University College London Professor Rose Luckin recently argued that, since ChatGPT can access and organize all the world’s knowledge, learners need no longer waste time learning “facts.” Instead, they can focus on higher-order thinking skills like creative and critical thinking.
Unfortunately, much of what we term “creative” and “critical” thinking occurs via subconscious processes that rely on internalized knowledge. When we consciously think about a problem, we can only actively consider a very finite amount of information due to the cognitive limits of working memory.
However, once we stop consciously thinking about a problem, we enter an incubation period during which our brains subconsciously sort through our memory stores, seeking out relevant ideas. It’s during this sorting process (known as reconsolidation) that novel connections are made and better thinking emerges.
“Even among highly skilled human educators, failure to cultivate an empathetic relationship inevitably hinders learning.”
Here’s the problem: Subconscious reconsolidation only works with information that is stored within a person’s long-term memory, which means it cannot leverage information that is externally accessed or stored. This explains why experts almost always demonstrate stronger problem-solving skills than novices within their field of expertise, but rarely outside of it. This also explains why semantic dementia (whereby patients lose long-term memories but maintain cognitive faculties) impairs creativity nearly twice as much as frontotemporal dementia (whereby patients lose cognitive faculties but maintain long-term memory stores).
Simply put, using AI to help learners avoid the tedious process of memorizing facts is the best way to ensure higher-order thinking skills will never emerge.
But, you may be asking, what about learners who use AI merely to assist with fact memorization? Well, consider that textbooks have historically been written by experts—people with enough deep knowledge to aptly vet and organize information into a meaningfully structured curriculum. Large language models (at least in their current form) have neither oversight nor vetting. This means learners who use AI are very likely to encounter wrong, oddly sequenced, or irrelevant information which—if memorized—might very well derail their path to mastery.
Of course, AI models will improve and information will surely increase in accuracy. Unfortunately, this won’t address the issue of vetting. Just as with Wikipedia today, users will only ever be able to vet information up to their current level of knowledge; anything beyond that must be taken on faith. When learning relies on faith, it’s imperative that faith be placed where the likelihood of success is highest, which is why the assurance that an expert has evaluated and organized key information remains invaluable.
Problem three: Multitasking
It has long been known that multitasking harms accuracy, speed, memory formation, and even enjoyment. In fact, I have no qualms calling this the single worst thing human beings can do for learning.
A pre-COVID survey revealed that students across the United States spent nearly 200 hours annually using digital devices for learning purposes. However, they spent 10 times as long—more than 2,000 hours—using these same devices to rapidly jump between divergent media content. Other studies have shown that, when people use a computer for self-guided learning, they typically last fewer than six minutes before engaging with digital distractions and, when using a laptop in the classroom, students typically spend 38 minutes of every hour off-task. In other words, the digital devices learners use to access and engage with ChatGPT have become veritable multitasking machines.
It’s not that computers can’t be used for learning; it’s that they are so rarely used that way. As a result, whenever we attempt to shoehorn this function in, we place a very large (and unnecessary) obstacle between the learner and the desired outcome, one that many struggle to overcome.
What does work?
There is one area of learning where generative AI may prove beneficial: cognitive offloading. This is a process whereby people employ an external tool to manage “grunt work” that would otherwise sap cognitive energy.
However, as noted above, when novices try to offload memorization and organization, learning is impaired, the emergence of higher-order thinking skills is stifled, and, without deep knowledge and skill, they’re unable to adequately vet the outputs.
“When we regularly offload certain tasks, our related skills and mental faculties can atrophy, making external support a requirement in the future.”
Experienced learners or experts can benefit from cognitive offloading. Imagine a mathematician using a calculator to avoid arithmetic, an event planner using a digital calendar to organize a busy conference schedule, or a lawyer using a digital index to alphabetize case files. In each of these scenarios, the individual has the requisite knowledge and skill to ensure the output meaningfully matches the desired outcome.
But there is still the risk of digital reliance. When we regularly offload certain tasks, our related skills and mental faculties can atrophy, making external support a requirement in the future. For instance, I’ve used digital programs to run statistical analyses for over a decade. Although I have the relevant knowledge to vet the output, I can no longer remember the specific equations each statistical test employs. Accordingly, unless I return to my textbooks, I’m now reliant upon these programs.
Consider the costs
Whenever we employ digital tools to amplify, hasten, or circumvent aspects of a particular process, something is inevitably lost along the way. Or, in the words of Thomas Sowell, “There are no solutions, only trade-offs.”
Sometimes this trade-off is worthwhile—such as discarding complex equations to run statistical analyses in seconds rather than hours. However, when we use AI to supplement education, what is lost is the very essence of the endeavor itself: learning.
Whenever the primary reason for using a tool is negated by its own adoption, we are well justified in questioning its continued use.