The Assessment Crisis is Bigger than AI

The last few months have seen my campus scrambling to get back to in-person assessment and to reopen testing centers. Like many universities, UC Irvine quietly deemphasized such exams during the COVID years, but faculty demand to change course is now rising quickly. Many worry about the validity of take-home and online assessments, while campus officials search for exam rooms or even build new ones. Meanwhile, already stressed students feel increasingly desperate over high-stakes tests that can make or break academic success. While the crisis seems recent at UCI, what’s really happening predates the rise of generative AI and won’t be fixed with more exam rooms.

Much of higher education now sees online assessment as an arms race it can’t win, with over 150 institutions planning to end it this year. Earlier this month, the Law School Admission Council (LSAC) announced that it would return the LSAT to in-person testing by summer 2026, citing “security concerns,” “score inflation,” and “the misuse of technology to facilitate cheating.”[1] All Ivy League schools are also reverting to standardized tests for admissions after eliminating them during the last decade. Complicating matters further is the reality of cash-strapped schools facing infrastructure bottlenecks because they’ve repurposed or sold off testing centers.[2] Driving this frantic backtracking is the logical but incorrect belief that assessment is losing meaning at a time when ChatGPT can generate answers in a few seconds. Hence the current retreat to blue books, testing rooms, and internet-free conditions.

“Generative AI did not create assessment issues. It revealed them,” according to Emma Ransome of Birmingham City University.[3] Ransome explains that traditional measures like timed exams, standardized tasks, and recall-based tests historically have done poorly at evaluating the skills universities claim to instill, such as critical thinking, ethical judgment, and synthesizing ideas. Generative AI has made the disconnect between what is being measured and what is being taught even more apparent. If a large language model can successfully complete a multiple-choice pharmacology exam, or generate a decent survey essay about the causes of World War I, the question shouldn’t be how to stop students from using it. The question should be what kind of knowledge those conventional measures assessed in the first place.


AI Thrives where Instruction Falters

Old-fashioned instruction based on the recitation of facts is driving learners to AI tools like ChatGPT.[1] When learning is reduced to scores and “answers,” students naturally seek the most efficient paths to get them. This has become especially common in courses that rely on grade coercion and threats of failure to drive motivation. Such effects of teacher-centered instruction are particularly harmful to the growing number of students working while in school or juggling other responsibilities.

The equity side of this is hardly incidental at a time when AI competence has become widely recognized as a vital job skill and a key component of civic literacy. Institutions that fear and discourage AI are contributing to a growing knowledge gap between those with the intellectual tools to critically assess truth claims and others more likely to accept directives from authoritarian figures.

Not helping matters are latent attitudes that cast suspicion on today’s increasingly diverse population of college students. Amid a rising moral panic within U.S. academia, recent surveys show an alarming 78 percent of U.S. faculty believe that cheating is on the rise and that AI is to blame. According to Beth McMurtrie in the Chronicle of Higher Ed, “Virtually all of those surveyed — 95 percent — fear that students will become over-reliant on these tools. And 83 percent think it will decrease students’ attention spans.”[2] Early in the 2020s a torrent of news reports warned of an “epidemic” of dishonesty in online learning, with some surveys showing over 90 percent of educators believing cheating occurred more often in distance education than in in-person instruction.[3] New technologies often have stoked such fears, in this instance building on the distrust many faculty hold toward students, some of it racially inflected.[4] Closer examination of the issue has revealed that much of the worry came from faculty with little direct knowledge of the digital classroom.
