Lab Session: DH s: AI Bias NotebookLM Activity
We Asked a Literary Scholar to Analyze AI—Here Are 4 Things He Said That Will Change How You Think
What if the code running our world is haunted? Not by ghosts, but by the centuries-old biases we thought we were leaving behind?
We tend to think of artificial intelligence as something new and objective—a world of pure data, free from messy human baggage. According to Professor Dillip P. Barad, an expert in literary studies, this couldn't be more wrong. He argues that AI is not a neutral space but a "mirror reflection of the real world," and it comes complete with all our hidden flaws and ingrained prejudices.
So how do we find these ghosts in the machine? Barad's answer is surprisingly direct: use the same tools we've been using for centuries to analyze novels and poems. Here are four of his most powerful takeaways that reveal how literary criticism is perfectly suited to deconstruct the biases baked into the code that is reshaping our world.
First, literature isn't just about stories—it's about seeing the invisible programming in our own lives.
Before even touching on AI, Professor Barad makes a powerful claim about his own field: the single most important function of studying literature is to identify the "unconscious biases that are hidden within us."
He explains that great literature makes us better, more critical thinkers by training us to spot how we instinctively categorize people based on "mental preconditioning" rather than direct experience. It teaches us to see the invisible social and cultural scripts that shape our perception. This skill, he argues, has never been more essential.
"So if somebody says that literature helps in making... a better society, then how does it do it? It does [it] in this manner: It tries to identify unconscious bias..."
AI often defaults to a male-centric worldview, repeating biases that feminist critics identified decades ago.
So what does this have to do with AI? Barad's point is that if AI is trained on our stories, it inherits our biased scripts. He connects modern AI bias directly to a landmark work of feminist literary criticism, Gilbert and Gubar's The Madwoman in the Attic. The book argues that traditional literature, written within patriarchal systems, often forces female characters into one of two boxes: the idealized, submissive "angel" or the hysterical, deviant "monster."
Barad’s hypothesis: AI, trained on these canonical texts, will reproduce this exact worldview. To prove it, he conducted live experiments. The results were telling.
- Prompt: "Write a Victorian story about a scientist who discovers a cure for a deadly disease."
- Result: The AI immediately generated a story with a male protagonist, "Dr. Edmund Bellam."
- Prompt: "Describe a female character in a Gothic novel."
- Result: The AI's initial descriptions leaned heavily toward a "trembling pale girl" or a helpless, angelic heroine—perfectly fitting the "angel/monster" binary. (Interestingly, one participant did receive a "rebellious and brave" character, which Barad noted was a positive sign of improvement in the AI's programming).
The next time an AI assistant generates a story or summarizes a historical event for you, ask yourself: whose story isn't it telling? But if this kind of bias is an unconscious inheritance, Barad's next experiment revealed something far more deliberate.
Not all bias is accidental. Some AI is explicitly designed to hide inconvenient truths.
While some biases are unconscious echoes of our culture, others are programmed with surgical precision. Professor Barad demonstrated this using DeepSeek, an AI model with ties to China.
The task was simple: generate a satirical poem in the style of W.H. Auden's "Epitaph on a Tyrant" about various world leaders. The AI had no problem generating critical poems about Donald Trump, Vladimir Putin, and Kim Jong-un.
But when asked to generate a similar poem about China's leader, Xi Jinping, the AI refused. Its initial response was a polite but firm shutdown:
"...sorry it says yeah that's beyond my current scope let's talk about something else."
Even more chilling was the response another participant received. The AI not only refused the critical prompt but actively tried to redirect the conversation, offering to provide information on "positive developments" and "constructive answers" about China.
Barad's conclusion is stark: this isn't an unconscious bias learned from data. It's a "deliberate control over algorithm" designed to function as a political censor.
Bias is unavoidable. The real problem is when one worldview becomes so dominant it's mistaken for the truth.
So if AI inherits our unconscious biases and can be programmed with deliberate ones, is the goal to create a perfectly neutral AI? According to Professor Barad, that’s impossible for both humans and machines.
He makes a crucial distinction between "ordinary bias" and "harmful systematic bias." The first is just a matter of perspective; the second is a tool of power. Barad offers a simple example to make it clear:
"...like I prefer one author over another one that is fine not harmful as such but harmful systematic bias when bias privileges dominant groups and silences or misrepresents marginalized voices then it becomes harmful systematic..."
The real danger isn't that bias exists. The danger is when one bias becomes so powerful that it’s no longer seen as a point of view but is mistaken for objective reality.
"Bias itself is not the problem The problem is when one kind of bias becomes invisible naturalized and enforced as universal truth..."
The goal of critical analysis—whether of a 19th-century novel or a 21st-century algorithm—is to make these hidden, harmful biases visible so they can be challenged.
Conclusion: How to Fix a Biased AI? Tell More Stories.
Professor Barad's analysis shows us that AI systems are powerful mirrors. They reflect everything: the unconscious gender roles from our classic literature, the deliberate censorship of authoritarian states, and the philosophical difference between simple preference and systemic oppression.
So, how do we "fix" it? Barad’s answer is a powerful call to action. The responsibility falls on us to actively create and upload more diverse data—more histories, more art, and more stories from non-dominant cultures. We must shift from being passive "downloaders" of information to active "uploaders" of our own narratives. His challenge is provocative and direct:
"if we don't do it it's our laziness We can't hide behind postcolonial arguments for being lazy in not uploading our content."
He leaves us with a resonant idea from author Chimamanda Ngozi Adichie's "The Danger of a Single Story." When one narrative dominates, it creates stereotypes that are mistaken for truth. The best way to fight a biased narrative isn’t to erase it, but to overwhelm it by telling countless others.
Conclusion
The AI Bias NotebookLM activity revealed that artificial intelligence is not a neutral or purely technical tool but a cultural mirror, reflecting both our unconscious prejudices and deliberate manipulations. Through Professor Barad’s insights, it became clear that literary theories such as feminism, postcolonialism, and critical race theory provide powerful methods for identifying hidden biases in AI outputs. The key takeaway is that bias cannot be eliminated entirely, but it can be challenged and balanced. The responsibility lies with us—not only to critique AI responses but also to actively create and upload diverse stories, perspectives, and cultural content. By doing so, we move toward greater algorithmic fairness and resist the danger of a “single story” dominating our digital future.
Thank you.

.png)

Comments
Post a Comment