Old-fashioned instruction based on the recitation of facts is driving students to AI tools like ChatGPT.[1] When knowledge is reduced to singular “answers,” students naturally scramble to find them as efficiently as possible. This has become especially common in courses relying on grade coercion and threats of failure to motivate learning. Such effects of teacher-centered instruction are particularly harmful to the growing numbers of students working while in school or juggling other responsibilities.

The equity side of this is hardly incidental at a time when AI competence has become widely recognized as a vital job skill and a key component of civic literacy. Institutions that fear and discourage AI are contributing to a growing knowledge gap between those with the intellectual tools to critically assess truth claims and others more likely to accept directives from authoritarian figures.
Not helping matters are latent attitudes that cast suspicion on today’s increasingly diverse population of college students. Amid a rising moral panic within U.S. academia, recent surveys show that an alarming 78 percent of U.S. faculty believe cheating is on the rise and that AI is to blame. According to Beth McMurtrie in the Chronicle of Higher Education, “Virtually all of those surveyed — 95 percent — fear that students will become over-reliant on these tools. And 83 percent think it will decrease students’ attention spans.”[2] Early in the 2020s a torrent of news reports warned of an “epidemic” of dishonesty in online learning, with some surveys showing over 90 percent of educators believing cheating occurred more in distance education than in in-person instruction.[3] New technologies have often stoked such fears, in this instance building on the distrust many faculty hold toward students, some of it racially inflected.[4] Closer examination of the issue has revealed that much of the worry came from faculty with little direct knowledge of the digital classroom.
All of this is fueling an academic culture that drives students to seek answers at any cost while simultaneously criminalizing unauthorized methods of finding them. Behind this mindset is what can only be described as a commodified approach that treats knowledge as a rare and precious good that can be exchanged only in regulated ways. While institutions rarely admit this explicitly, the message is that only certain kinds of people can create and circulate wisdom.
The premise of knowledge as commodity is nowhere more apparent than in the near-universal use of AI-detection and anti-plagiarism software, the most common brands of which are GPTZero, Turnitin, Unicheck, Copyscape, and Grammarly. Ownership of ideas is the whole point of these companies, along with an emphasis on originality, individualism, and the threat of theft. But to thrive, these companies face the tricky task of naming a risk while keeping their messaging positive. Often this is accomplished through appeals to moral idealism. While Copyscape takes the direct approach (“Who’s stealing your content?”), most of the industry makes softer appeals.[5] Unicheck wants to “promote academic integrity” and Turnitin promises to “empower students to do their best, original work.”[6]
Ironically, many of these companies themselves abscond with knowledge by retaining copies of texts run through their detection systems. The vast majority of students submitting a paper are unaware that they are becoming part of an unpaid labor pool. Turnitin has the largest such database, with 200 million student papers.[7] In this sense Turnitin and similar services function much in the spirit of online giants like Facebook, Instagram, TikTok, and Twitter, for which billions of users provide free content.
It goes without saying that this is not the only way of looking at instruction, and many inside and outside of higher education hold other points of view. Principles of emancipatory pedagogy and student-centered learning date to the progressive school movement of the 1900s. And in contemporary times, the effectiveness of non-hierarchical approaches to instruction has been repeatedly validated in evidence-based educational research. The problem is that within colleges and universities that depend on research dollars for their operating budgets, enlightened approaches toward teaching are surprisingly rare, and regressive attitudes toward everything from AI to student voice still rule the day.
None of this is to deny the legitimate need for objective and accurate means of assessment in certain kinds of instruction. Certainly, in programs preparing practitioners or researchers for fields where health and safety are at stake, or where precision is required, there is a genuine societal need for comprehensive knowledge verification. But even in fields like engineering and medicine, critical consciousness remains valued pedagogically and in professional practice, inasmuch as knowledge evolves and contexts of application differ.
What ultimately distinguishes these two approaches is a fundamentally different understanding of what knowledge is and how it comes to be. The surveillance paradigm treats knowledge as a fixed body of information to be guarded by credentialed authorities and verified through standardized measures that test possession rather than understanding. This assumes truth exists independently of context, that expertise means mastering established facts, and that learning means accurately reproducing received wisdom. But an alternative view recognizes knowledge as something perpetually under construction, shaped by the questions we ask and the problems we encounter. From this perspective, learning to think critically about sources and evaluate competing claims matters more than memorizing answers.
The irony is that students already grasp this when they turn to AI for help. They recognize that knowledge emerges through inquiry and dialogue rather than transmission and testing. The tragedy is that institutions respond by doubling down on verification rather than reimagining education around the collaborative and fundamentally social nature of knowing itself.
[1] The phrase “AI Thrives Where Instruction Falters” is frequently used by Dan Fitzpatrick in his podcast series “AI for Educators Daily” and regular contributions to Forbes Magazine, https://www.forbes.com/sites/danfitzpatrick/
[2] Beth McMurtrie, “Teaching: Faculty Are Overwhelmed and Conflicted by AI,” Chronicle of Higher Education (Jan. 22, 2026) https://www.chronicle.com/newsletter/teaching/2026-01-22.
[3] Dian Schaffhauser, “Instructors Believe Students More Likely to Cheat when Class is Online,” Campus Technology (Aug. 4, 2020) https://campustechnology.com/articles/2020/08/04/instructors-believe-students-more-likely-to-cheat-when-class-is-online.aspx.
[4] Elizabeth Redden, “Professor Accused of Saying ‘All Chinese Students Cheat’ Resigns,” Inside Higher Ed (Mar. 13, 2019) https://www.insidehighered.com/quicktakes/2019/03/13/professor-accused-saying-all-chinese-students-cheat-resigns.
[5] “About Plagiarism,” Copyscape (2021) https://www.copyscape.com/plagiarism.php.
[6] “Plagiarism Detection for Education,” Unicheck (2021); “Empower Students to Do Their Best, Original Work,” Turnitin (2021) https://www.turnitin.com/php.
[7] “FAQ: Turnitin for Students,” BUE Student Guide (2020) https://bue.libguides.com/Turnitin2_Students/ContactTurnitinAdministrators.