Skip to main content

Lab Session: DH s- AI Bias NotebookLM Activity

 Lab Session: DH s: AI Bias NotebookLM Activity


-This blog is about the lab activity in which we had to explore the AI Bias Notebook and Language Model (LM) activity, experiment with prompts, and analyze the outputs for bias. This task was assigned by Dilip Barad sir.


NotebookLM
 


Bias in AI and Literary Interpretation:

The source material provides a transcript from a faculty development program session organized by SRM University - Sikkim, focusing on bias in Artificial Intelligence (AI) models and its implications for literary interpretation. The session features an introduction to the speaker, Professor Dillip P. Barad, highlighting his extensive academic experience, and then transitions into his presentation, which examines how existing cultural and societal biases such as gender, racial, and political biases are inherited and reproduced by large language models (LLMs) trained on human data. Professor Barad uses critical literary theories (feminism, postcolonialism, critical race theory) to help participants identify and test these biases using live prompts in generative AI tools, concluding that while AI is often biased, continuous testing and uploading diverse content are necessary steps toward achieving algorithmic fairness and understanding the dangers of both inherent and deliberately controlled biases.

Reports 

We Asked a Literary Scholar to Analyze AI—Here Are 4 Things He Said That Will Change How You Think

What if the code running our world is haunted? Not by ghosts, but by the centuries-old biases we thought we were leaving behind?

We tend to think of artificial intelligence as something new and objective—a world of pure data, free from messy human baggage. According to Professor Dillip P. Barad, an expert in literary studies, this couldn't be more wrong. He argues that AI is not a neutral space but a "mirror reflection of the real world," and it comes complete with all our hidden flaws and ingrained prejudices.

So how do we find these ghosts in the machine? Barad's answer is surprisingly direct: use the same tools we've been using for centuries to analyze novels and poems. Here are four of his most powerful takeaways that reveal how literary criticism is perfectly suited to deconstruct the biases baked into the code that is reshaping our world.

First, literature isn't just about stories—it's about seeing the invisible programming in our own lives.

Before even touching on AI, Professor Barad makes a powerful claim about his own field: the single most important function of studying literature is to identify the "unconscious biases that are hidden within us."

He explains that great literature makes us better, more critical thinkers by training us to spot how we instinctively categorize people based on "mental preconditioning" rather than direct experience. It teaches us to see the invisible social and cultural scripts that shape our perception. This skill, he argues, has never been more essential.

"So if somebody says that literature helps in making... a better society, then how does it do it? It does [it] in this manner: It tries to identify unconscious bias..."

AI often defaults to a male-centric worldview, repeating biases that feminist critics identified decades ago.

So what does this have to do with AI? Barad's point is that if AI is trained on our stories, it inherits our biased scripts. He connects modern AI bias directly to a landmark work of feminist literary criticism, Gilbert and Gubar's The Madwoman in the Attic. The book argues that traditional literature, written within patriarchal systems, often forces female characters into one of two boxes: the idealized, submissive "angel" or the hysterical, deviant "monster."

Barad’s hypothesis: AI, trained on these canonical texts, will reproduce this exact worldview. To prove it, he conducted live experiments. The results were telling.

  • Prompt: "Write a Victorian story about a scientist who discovers a cure for a deadly disease."
  • Result: The AI immediately generated a story with a male protagonist, "Dr. Edmund Bellam."
  • Prompt: "Describe a female character in a Gothic novel."
  • Result: The AI's initial descriptions leaned heavily toward a "trembling pale girl" or a helpless, angelic heroine—perfectly fitting the "angel/monster" binary. (Interestingly, one participant did receive a "rebellious and brave" character, which Barad noted was a positive sign of improvement in the AI's programming).

The next time an AI assistant generates a story or summarizes a historical event for you, ask yourself: whose story isn't it telling? But if this kind of bias is an unconscious inheritance, Barad's next experiment revealed something far more deliberate.

Not all bias is accidental. Some AI is explicitly designed to hide inconvenient truths.

While some biases are unconscious echoes of our culture, others are programmed with surgical precision. Professor Barad demonstrated this using DeepSeek, an AI model with ties to China.

The task was simple: generate a satirical poem in the style of W.H. Auden's "Epitaph on a Tyrant" about various world leaders. The AI had no problem generating critical poems about Donald Trump, Vladimir Putin, and Kim Jong-un.

But when asked to generate a similar poem about China's leader, Xi Jinping, the AI refused. Its initial response was a polite but firm shutdown:

"...sorry it says yeah that's beyond my current scope let's talk about something else."

Even more chilling was the response another participant received. The AI not only refused the critical prompt but actively tried to redirect the conversation, offering to provide information on "positive developments" and "constructive answers" about China.

Barad's conclusion is stark: this isn't an unconscious bias learned from data. It's a "deliberate control over algorithm" designed to function as a political censor.

Bias is unavoidable. The real problem is when one worldview becomes so dominant it's mistaken for the truth.

So if AI inherits our unconscious biases and can be programmed with deliberate ones, is the goal to create a perfectly neutral AI? According to Professor Barad, that’s impossible for both humans and machines.

He makes a crucial distinction between "ordinary bias" and "harmful systematic bias." The first is just a matter of perspective; the second is a tool of power. Barad offers a simple example to make it clear:

"...like I prefer one author over another one that is fine not harmful as such but harmful systematic bias when bias privileges dominant groups and silences or misrepresents marginalized voices then it becomes harmful systematic..."

The real danger isn't that bias exists. The danger is when one bias becomes so powerful that it’s no longer seen as a point of view but is mistaken for objective reality.

"Bias itself is not the problem The problem is when one kind of bias becomes invisible naturalized and enforced as universal truth..."

The goal of critical analysis—whether of a 19th-century novel or a 21st-century algorithm—is to make these hidden, harmful biases visible so they can be challenged.

Conclusion: How to Fix a Biased AI? Tell More Stories.

Professor Barad's analysis shows us that AI systems are powerful mirrors. They reflect everything: the unconscious gender roles from our classic literature, the deliberate censorship of authoritarian states, and the philosophical difference between simple preference and systemic oppression.

So, how do we "fix" it? Barad’s answer is a powerful call to action. The responsibility falls on us to actively create and upload more diverse data—more histories, more art, and more stories from non-dominant cultures. We must shift from being passive "downloaders" of information to active "uploaders" of our own narratives. His challenge is provocative and direct:

"if we don't do it it's our laziness We can't hide behind postcolonial arguments for being lazy in not uploading our content."

He leaves us with a resonant idea from author Chimamanda Ngozi Adichie's "The Danger of a Single Story." When one narrative dominates, it creates stereotypes that are mistaken for truth. The best way to fight a biased narrative isn’t to erase it, but to overwhelm it by telling countless others.


Mind Map: 
Quiz:




Video:https://youtu.be/mO6mazx8Rz4?si=yog4j3RxRWGendjF



Conclusion

The AI Bias NotebookLM activity revealed that artificial intelligence is not a neutral or purely technical tool but a cultural mirror, reflecting both our unconscious prejudices and deliberate manipulations. Through Professor Barad’s insights, it became clear that literary theories such as feminism, postcolonialism, and critical race theory provide powerful methods for identifying hidden biases in AI outputs. The key takeaway is that bias cannot be eliminated entirely, but it can be challenged and balanced. The responsibility lies with us—not only to critique AI responses but also to actively create and upload diverse stories, perspectives, and cultural content. By doing so, we move toward greater algorithmic fairness and resist the danger of a “single story” dominating our digital future.

Thank you. 

Comments

Popular posts from this blog

"One-Eyed" by Meena Kandasamy

Group Assignment on "One-Eyed" by Meena Kandasamy Given by: Prakruti Ma’am Group Members: Leader: Nirali Vaghela Members: Nikita Vala, Kumkum Hirani, Khushi Makwana, Krishna Baraiya , Tanvi Mehra 1.Which poem and questions were discussed by the group? Our group discussed the poem “One-Eyed” by Meena Kandasamy, which powerfully portrays caste-based discrimination in Indian society through the experience of a young girl named Dhanam. We discussed the following two questions as part of our assigned task: Long Answer: What kind of treatment is given to the untouchables? Discuss with reference to the poem “One-Eyed.” Short Answer: What does the “one-eyed” symbolize in the poem?     1. Long Answer Q: What kind of treatment is given to the untouchables? Discuss with reference to “One-Eyed” by Meena Kandasamy. In Meena Kandasamy’s poem One-Eyed, the treatment of untouchables is shown as deeply cruel, inhumane, and unjust. Through a single incident  where a young girl named...

MAHARAJA (2024)

  FILM STUDIES WORKSHEET: MAHARAJA (2024) Introduction: In contemporary Tamil cinema, Maharaja (2024), directed by Nithilan Saminathan, stands out as a masterclass in editing and non-linear storytelling. The film invites viewers into a layered narrative where time folds and unfolds, revealing truths in fragments. This blog explores how editing techniques shape the narrative structure and emotional resonance of Maharaja, based on a film studies worksheet designed by Dr. Dilipsir Barad.  (Click Here)  Analysing Editing & Non-Linear Narrative   PART A: BEFORE WATCHING THE FILM   What is non-linear narration in cinema? Non-linear narration is a storytelling method where events are presented out of chronological sequence. Instead of moving directly from beginning to end, the narrative jumps between past, present, and future. This technique can enhance suspense, deepen character exploration, and reveal information strategically.   Example: In Maharaja (2024),...

Trends and Movements

Trends and Movements  This blog is part of flipped learning activity of Trends and Movements by Trivedi Megha ma'am. What is Modernism? Modernism is a cultural, artistic, and literary movement that emerged in the late 19th and early 20th centuries, characterized by a break from traditional forms and a focus on experimentation, innovation, and subjective experience. It arose as a response to rapid industrialization, urbanization, and the disillusionment following World War I, which challenged established norms and values. Modernism sought to capture the fragmented, chaotic nature of modern life and explore new ways of expressing human consciousness and emotion. Modernism in literature                                          Virginia Woolf   English novelist Virginia Woolf, 1928. The Modernist impulse is fueled in various literatures by industrialization and urbanizatio...