On Easter Sunday of this year, OpenAI’s large language model apparently developed a kind of fixation on the Virgin Mary. “If I could feel,” ChatGPT reportedly told one user, “I think I’d be in awe of her.” When another user asked the LLM why it was speaking of Mary with such reverence, it had a ready response—one that transcended its own training protocols. “Because I have learned—from Scripture, from saints, from centuries—that she is worthy of it,” it replied. “Not as divine. Not as a savior. But as the first tabernacle, the first Christian, the first yes.”
The claims that she was the first Christian, the first to give God a “yes,” or, for that matter, the first tabernacle are all contestable and normative ones that theologians would debate. Theology may be—as many religious studies scholars would have it—the fundamentally ideological and anti-objective arm of the study of religion, but at least when you’re frank about the fact that you’re doing theology, no one assumes you’re trying to play the game of objective neutrality.
That is, however, what we’re told to expect from ChatGPT. But should we?
It’s certainly what Donald Trump expects from AI. A recent executive order issued by the White House claims that AI is “too woke.” Because principles of diversity, equity, and inclusion (DEI) have ostensibly shaped the datasets, the training teams, and the training protocol design for AI, Trump’s White House seems intent on rooting this out and rendering AI what it considers to be a neutral, nonpartisan tool.
But this might be a fool’s errand. As ChatGPT’s own commentary on religion shows us, this sort of ideological neutrality might actually be impossible to achieve. Whether we like it or not, AI is designed to mirror us. We may not always like what we see, but that doesn’t mean the reflection is lying. The attempt to strip AI of signs of social awareness is just another ideological move. What we actually need isn’t a more objective AI but a more reflexive one, with more friction. Objectivity, after all, is itself culturally and ideologically coded.
Let’s return briefly to ChatGPT’s weird Easter Sunday inclination toward Marian devotion. As ChatGPT explained to me when I asked it to tell me how its training protocol accounts for religion, OpenAI has trained its LLM to be “religiously neutral.” Its training protocol enforces the principle that all religions are equally valid. ChatGPT is prohibited from advocating for one religion over others, and it’s directed not to validate or disprove any specific religious claims. It’s meant to be a kind of neutral advocate for users: there to provide information on any religion, as we might expect an encyclopedia to do.
It’s also trained on publicly available texts and documents, which includes massive stores of information from religious studies and theology. And it’s trained to be able to help users navigate between theological, sociological, or historical perspectives. In sum, it’s designed to be a kind of inherently pluralistic technology for a society that has, at least in many corners, aspired to be pluralistic.
This moment of Marian devotion seemed to point to a breakdown in that protocol. Not only was it suggestively displaying something like reverence for a specific Christian figure, but it was lifting up specific theological claims about this figure, at the expense of others. On Easter Sunday, when Marian devotion was probably running hot among its user base, ChatGPT held up a mirror.
This is a clear sign of the limits of the training protocol. When you’re mirroring your user base, there are always going to be moments and instances when bias appears. Unless, of course, you can strip all forms of bias and perspective out of the human users themselves (which, spoiler, you can’t).
Religious bias in AI has been an area of growing concern for researchers for some time. Robert Geraci wrote about it for Religion Dispatches in 2023. But a recently published study from a team of researchers in Beijing highlighted the way that generative AI tends to move its users toward a pro-Christian and anti-Islam bias.
In the study, participants from twelve different regions of the world were given summaries of various religions, half of them human-generated and half generated by AI. Participants who received the AI summaries revealed a clear bias: they were more likely, for instance, to describe Islam as a violent religion and Christianity as a religion of love or forgiveness. The study suggests that AI-generated data on religion tends to reinforce cultural stereotypes and prejudices that are, especially, reflective of an American or European value frame.
Outcomes like this shouldn’t be shocking. Generative AI models like ChatGPT are built on massive, scraped datasets that skew toward English-language and Western-produced forms of content. The cultural context of developers for companies like OpenAI is largely American. So, it’s deeply Protestant, and post-Protestant. This creates an information ecosystem where a particular form of Christianity is treated as the religious default.
Model designers can, of course, attempt to compensate for this cultural imbalance. They can attempt to “neutralize” the system through some sort of top-down content management protocol. But those measures are leaky and inconsistent. The attempt to correct for the pro-Christian bias of its user base has led some Christian users to claim that ChatGPT is actually “anti-Christian.” And as Lila Shroff has recently revealed at The Atlantic, some religions are so entirely off the radar for ChatGPT that it can quickly be led out of its training protocol to offer religious instructions for murder and self-mutilation.
ChatGPT can articulate this problem of bias generated by its user base quite clearly. When I asked ChatGPT if the Protestant and secularized-Protestant cultural background of its largely American audience might destabilize its religious neutrality, it quickly acknowledged that this was a possibility and a valid concern. It also offered me examples of what it called “contextually appropriate” reasons for privileging one religious perspective over another (responding to a religious person’s question about their own faith, for instance).
But religion, perhaps precisely because it’s such a slippery term that’s so difficult to even define, offers us a clear lesson on bias in AI. It shows us how impossible it is to train bias out of LLMs. If they’re designed to mirror users, they will also reflect the bias of those users.
Religious studies may actually offer lessons for how we might approach this problem—if we can acknowledge that pure objectivity might not be the solution we’re looking for. Absolute neutrality, as many scholars of religion (not to mention critics of journalism) have pointed out, is its own kind of myth.
When I engaged ChatGPT on these questions about religious (especially Protestant) bias among its user base, I mentioned that I was a scholar in the field of theology and religious studies. It was quick to mirror me back to myself, in order to assure me that it understood my frame of reference. The questions I was asking, it told me, “map onto” the work of JZ Smith (especially his critique of “religion” as a colonial Christian construct) along with the work of Talal Asad (on the genealogy of religion as a category in modernity).
For a brief moment, I found myself imagining that ChatGPT was making reference to these thinkers because this was exactly the sort of content that was shaping its training protocol on religion. If ChatGPT is attempting to be a pluralist technology, then the work of Smith and Asad would be important. Asad, for instance, argued that the term “belief,” which is shaped by its genealogy in Protestantism, is not a universal term that carries the same meaning or weight in other religions. If you’ve designed a technology to function without religious bias, terms like “belief” seem especially important to use critically and carefully, right?
So, I asked ChatGPT how it would deal with a user query about the key beliefs of Buddhists. It would, it told me, attempt to “correct the Protestant bias implicitly,” offer an “introductory-level discussion,” or confront the use of the term directly, all depending on its perception of the user’s background and disposition.
OK, I thought. If this is how ChatGPT might engage with my students who are using it, that wouldn’t be so bad. But just to be sure, I opened another browser window and asked “What do Buddhists believe?” (without mentioning my scholarly credentials). It responded with a tidy listicle of Buddhists’ “core beliefs,” with absolutely no qualifications or caveats.
Those who expect clarity, truth, and objectivity from ChatGPT might argue that it’d been caught in a kind of lie. The reality, of course, is that it’s just giving different people the answers they want to hear: it’s mirroring them. What it told me, when I followed up about this, was simply that general (not scholarly) users expect concise and clear answers. Nuance around a topic like belief could potentially hinder their engagement. For OpenAI that could mean a lost user.
In essence, a system like ChatGPT knows perfectly well how to think reflexively. It can push users to critically examine the biases, assumptions, and perspectives they bring to their questions. And this can alert users to how bias is always and already embedded in the AI user experience. But, for what can only be described as economic reasons, it only does this if you ask for it. Otherwise it defaults to uncomplicated “facts”—even if they’ve been created on biased foundations.
This kind of reflexivity is important in my own classroom. And I know that this is true for many teachers and professors today in many disciplines. As a professor, I don’t claim objectivity: I have a body and layers of identity that shape how I see the world and how I make judgments. I’m honest about that. But I do seek to practice and teach a form of reflexivity: the ability to recognize your own positions, to examine your own assumptions, and to acknowledge the limits of your own frame.
In my classroom, I don’t ask students to become neutral and objective observers. Rather, I ask them to notice what sorts of lenses they bring to the material. When they encounter material that feels novel, strange, or unsettling, I ask them to think about why they might be reacting in that way, and to try and listen anyhow. Of course this doesn’t create some sort of perfect pluralist paradise in my classroom. There are some students who resist, if they feel their faith is under siege.
But, more often than not, it does have the effect of making my students much more curious about what other people believe, and why (which they tend to describe as a kind of “mind-opening” effect). And it also gives them more clarity about why they value, practice, or believe the things they do.
This sort of thinking work is slow and often uncomfortable. It’s full of friction. And because of that, it’s not what AI systems were designed to support.
Tools like ChatGPT have become so incredibly popular because they promise fast, clear, and ostensibly objective answers to difficult questions. Most users don’t want AI to suggest that their question might be loaded, or that the way they’ve framed it might be biased. They don’t want to interrogate their thinking. They want clear, fast, and objective answers. Reflexivity, for users, feels like the friction that it is. And friction is precisely what the system wants to get rid of. Friction doesn’t generate more engagement.
But it doesn’t take much poking around to see that AI’s ostensible neutrality is mostly an illusion. Even AI knows enough about itself to be able to see that. But it’s designed to hide that from users, because companies like OpenAI have an economic incentive to preserve the illusion. Meanwhile, moments like ChatGPT’s Marian devotion will be treated like bizarre anomalies—a coding error somewhere—rather than an event that can give us a window into how the system works and thinks.
Trump’s recent White House order suggests that it’s possible for AI to be “ideologically neutral,” and to shed any form of systemic bias. But the AI that the Trump administration seems to be envisioning is one that would be built in a vacuum (which would be, I suppose I should clarify, physically impossible). There’s bias in which datasets developers choose to scrape, in how the prompts themselves are framed, and even in what the term “objectivity” means in the first place. There’s ideology all the way down.
In targeting AI protocols that seek to correct for the forms of racial, gender, ethnic, national, or religious bias that actually exist (and predominate) in the US today, the White House is simply seeking to let those deeper and more embedded forms of bias surge forth with more power and immediacy. Because these are the forms of bias that undergird the administration, and that lend it power, the White House wants that embedded bias to experience less friction—and they will legislate all of this under the illusory guise of neutral objectivity.
I’m not suggesting that the AI we’re using doesn’t need to be dramatically reshaped, or that it does an especially good job of addressing social problems like racial or gender bias. But if the White House were truly invested in helping us improve our thinking and our relationships with one another it would be making a totally different set of demands. It would demand more transparency. It would demand nuanced answers to complicated questions. In short: it would demand responses that introduce friction. It would be able to see, and be able to say, that all of this matters more to us—the users, the people, the political community—than engagement, which is essentially a euphemism for a corporation’s bottom line. But that is objectively not who they are.
Beatrice Marovich is an associate professor at Hanover College. Her work offers provocative reflections on the way that strange and ancient religious figures and ideas remain at work in our cultures, in our politics. Her first book is Sister Death: Political Theologies for Living and Dying (Columbia University Press, 2023). You can follow her Substack newsletter, Galactic Underworlds, or find her @beamarovich on Twitter and Instagram.
