Nicholas Agar
Knowledge production in the humanities is undergoing a step change, a sudden transformation driven, in part, by AI technologies.
Many things in the humanities won’t change, simply because there are constants in the ways humans agree or disagree, fall in love or into hate. So long as there are humans in 2075 there will be human philosophers pondering humanity’s problems. The insights of today’s philosophical geniuses will presumably be as interesting to the philosophers of that time as are the insights of Ludwig Wittgenstein to us today.
But step changes in knowledge production place into starker relief some of the bad practices that we have fallen into. Just as Warren Buffett observed about economic downturns, “only when the tide goes out do you discover who’s been swimming naked”, so too abrupt changes in technology and student expectations expose the moral compromises of academic humanists. We have long been swimming naked, clinging to outdated practices that no longer serve students, society or truth.
Prompting ChatGPT to “critically discuss Plato’s theory of forms” isn’t a way to do philosophy. But that’s precisely the path many students now take, motivated by high tuition fees and high-stakes assessments.
There is a race between AI writing tools and AI detection tools, one that the detectors are destined to lose. Companies like OpenAI, the maker of ChatGPT, don’t reveal their secrets to firms like Turnitin LLC, a plagiarism-detection business. The result? The detection tech will always be playing catch-up.
The numbers tell the story of how far behind the detectors will lag. Turnitin was acquired by Advance Publications for US$1.75 billion in 2019. OpenAI now has a US$500 billion valuation. The generator holds the advantage, and OpenAI has far more money to spend training AIs to produce human-like text than Turnitin can spend on detecting it.
Does the difficulty of detecting AI cheating mean that professors should give up? Perhaps it casts the defenders of human writing in the role of Sarah Connor in the Terminator franchise. The odds are clearly against her. But Hollywood produces many movies in which she heroically beats the odds and the machines.
The problem is that our movie analogy is ill chosen. We don’t face machines like the hulking T-800 cyborg. A better representation is the Cylon from the television series Battlestar Galactica. In the 2004 version, machines that perfectly pass for humans engineer our downfall by infiltrating us. What befalls the humans in that story is also happening to the humanities in real life: even as they proclaim that they are fighting AI, humanities scholars are abetting its infiltration.
The vulnerability of the humanities is more ideological than technological. It comes in the form of the teaching-research nexus, a prized feature of the Humboldtian university invented in Prussia in the early nineteenth century. Ernest Boyer, former president of the Carnegie Foundation for the Advancement of Teaching, expressed it well when he wrote:
The most inspired teaching will generally take place when faculty are pursuing their own intellectual work, and students, rather than being passive observers, are partners in the scholarly enterprise.
Our students become our apprentices. In the fullness of time, they replace us. The glitch in this plan, one that still works for the sciences, becomes apparent in the overproduction of humanities PhDs for whom there are no jobs.
Paradoxically, much of the money governments spend to sustain the humanities amplifies their vulnerability. That money has attracted academic publishing businesses, whose profits, passed on to shareholders, become debits for governments and taxpayers.
Consider the contract I recently signed with humanities publisher Taylor & Francis. It granted them the right to distribute my work “in printed, electronic or other medium now known or later invented, and in turn to authorise others … to do the same”. We can speculate about what this might mean.
Informa PLC is the parent company of Taylor & Francis. Its 2025 financial report offers rare transparency about a quiet transformation underway in academic publishing; Informa is more open with its investors than it is with humanities scholars. The report reveals that Taylor & Francis generated over US$75 million in 2024 from data access licensing, explicitly naming AI companies among the customers gaining legal entry to vast troves of scholarly content. With nearly 9,000 new titles added annually and a vast back catalogue of specialist works, Informa is positioning this licensing as a “repeatable income stream” and a key part of its growth strategy.
What this means for humanists is stark. The very articles and books we painstakingly produce are being fed, legally and lucratively, into AI systems that will soon replicate, and perhaps replace, our intellectual labour.
Signing up to be my apprentice by inviting me to supervise your PhD in philosophy is a bit like apprenticing with a master weaver when a factory with power looms has just opened in your town. Yet most authors remain unaware that their work is fuelling the next generation of AI tools, often without any additional consent or compensation.
This is speculation about Informa PLC’s possible motivations; it would not suffice for a class action lawsuit mounted by sacked humanities academics. If pressed, Big Oil’s lawyers can vehemently assert their clients’ passion for the environment. That is certainly how Big Academic Publishing’s lawyers would advise their clients to respond to questions about whether they are contributing to the failure of humanities faculties.
One hint about Informa’s intentions can be found in a linguistic pivot from the 2024 to the 2025 report. In 2024 there was talk of “flexible Pay-to-Publish Open Research platforms”. That language is absent from the 2025 report. Now that governments are less interested in paying for humanities academics to publish, it is a reasonable inference that Informa is looking to replace lost revenue with money from training AIs. Scholars fret about the sloppy academic referencing of AI text. An AI with full access to the Taylor & Francis back catalogue can almost certainly improve on the referencing of distracted humanists anxious about their jobs.
Herein lies the hypocrisy. We punish students for using AI, even as we gift our own research to a business that feeds it directly into the very models we caution students against using — all of this without compensation, consent or even awareness. If anyone’s cheating, it’s not the students. The challenge for the humanities isn’t either to abet or to beat AI detection tools. It’s to reimagine a scholarly ecosystem with AI in which truth-seeking is collaborative, transparent and fair. That starts with confronting uncomfortable truths not just about our students, but about ourselves.
Nicholas Agar is Professor of Ethics at the University of Waikato in Aotearoa New Zealand. He is the author of How to be Human in the Digital Economy and Dialogues on Human Enhancement, and co-author (with Stuart Whatley and Dan Weijers) of How to Think about Progress: A Skeptic’s Guide to Technology.
How AI exposes the moral hypocrisy of academic publishing – Australian Broadcasting Corporation
