Gerardine Meaney

Insight Culture: Critical AI – a new analysis for a new ‘intelligence’

Submitted on Friday, 06/09/2024

Staring into the soul of AI, humanities scholars are finding a delusional ‘people pleaser’, but one with potential

Professor Gerardine Meaney has been collaborating on machine learning approaches to literature for over a decade. More recently, she has been tracking ChatGPT’s improved ability to give accurate information about literature. Asked a year ago for resources on women in Ireland in the 1940s, her AI assistant directed her to a national women’s archive that doesn’t exist. It now gives a more factual, if vague, answer, listing all the major archives in Dublin.

‘AI is a people pleaser,’ says Prof Meaney. ‘Like a student who doesn’t want to admit they haven’t done the reading, it comes up with a vaguely plausible list.’

Meaney is an expert in the application of digital methodologies to humanities research, a principal investigator with the Insight SFI Research Centre for Data Analytics and director of the UCD Centre for Cultural Analytics in the School of English, Drama and Film.

‘AI reflects our biases, it gives us what it thinks we want,’ she says. ‘We need the critical facility to break out of this circle and this is where the humanities comes in – humanities-in-the-loop, as Lauren Goodlad puts it.’

Lauren Goodlad is a leading scholar in an emerging discipline – Critical AI. A journal of the same name was launched in October 2023 with the aim of bringing the critical thinking of humanities scholars into dialogue with work by technologists, to build accountable technology in the public interest.

Prof Meaney describes the approach of the humanities scholar as ‘reading against the grain’. In her field, literature, new lenses are applied to old texts in search of novelty. For example, a classic novel such as Jane Eyre can be read through a feminist lens, a postcolonial lens or a queer lens. The idea is to understand how the novel has been read before and, with that understanding, to look anew.

Now imagine a third-year English student using ChatGPT to write an essay on Jane Eyre. The algorithm cannot surface a fresh perspective because it draws exclusively on what has gone before.

‘When asked for a novel perspective on Jane Eyre, ChatGPT offers “eco-gothic feminist”, an amalgam suggested by a plethora of very recent articles applying ecocriticism to gothic novels and gender,’ says Prof Meaney.

Does that mean there is no role for AI in humanistic enquiry? There is potential, Meaney counters, but it matters how we define ‘intelligence’ and how we use the very limited form of intelligence that AI offers.

‘There is a role for AI in interpreting texts, not least because the algorithm can read a volume of material that no human could ever read. However, to make use of that the scholar must be informed by an understanding of how large language models work. I’m a literary scholar, not a mathematician. How much do I need to know? Enough not to take the output at face value.’

People – students included – are inclined to project human consciousness onto AI that offers the most basic interactions. It’s known as the ELIZA effect – the tendency for people to attribute understanding and emotions to programmes that mimic human conversation. The term originated with ELIZA, an early natural language processing programme created by Joseph Weizenbaum at the MIT Artificial Intelligence Laboratory in the 1960s.
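To see how little machinery it takes to trigger the effect, here is a minimal, illustrative sketch of an ELIZA-style chatbot in Python – a toy reconstruction of the technique, not Weizenbaum’s original script; the rules and names are invented for illustration:

```python
import re

# A toy set of ELIZA-style rules: a keyword pattern and a response template.
RULES = [
    (re.compile(r"\bi feel (.+)", re.I), "Why do you feel {0}?"),
    (re.compile(r"\bi am (.+)", re.I), "How long have you been {0}?"),
    (re.compile(r"\bmy (.+)", re.I), "Tell me more about your {0}."),
]
DEFAULT_REPLY = "Please go on."

# Pronoun 'reflection', so the reply echoes the speaker's own words back.
REFLECTIONS = {"i": "you", "me": "you", "my": "your", "am": "are", "you": "I"}

def reflect(fragment: str) -> str:
    return " ".join(REFLECTIONS.get(word.lower(), word) for word in fragment.split())

def respond(utterance: str) -> str:
    # Scan the rules in order; the first keyword match wins. No parsing,
    # no memory, no model of meaning - just pattern matching.
    for pattern, template in RULES:
        match = pattern.search(utterance)
        if match:
            return template.format(reflect(match.group(1)))
    return DEFAULT_REPLY

print(respond("I feel lost in my research"))
# -> Why do you feel lost in your research?
```

The program has no model of meaning; it matches keywords and reflects the speaker’s own words back. Yet exchanges like this were enough for Weizenbaum’s users to confide in ELIZA.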

We are easily fooled by talking robots, especially when they are sycophantic and programmed to give us what we ask for.

AI has no critical reasoning power. Its approach to source material is to clone rather than to challenge. The humanities scholar employs the opposite method – they question source material and its assumptions to unlock novel insights. Can an algorithm be trained to find the surprising?

A field of research called ‘algorithmic abduction’ is examining this.

‘Algorithmic abduction trains (or tries to train) generative AI to trade off surprise and disruption, shouldering the work of filtering and revealing surprising patterns for productive human interpretation,’ Prof Meaney explains. ‘The method basically looks for the most surprising interpretation consistent with the data and then works in dialogue with a human reader.’

The key point here is that the humanities must stay in the loop to sense-check and build on the algorithmic findings. The algorithm prompts the scholar, not the other way around.

In ‘Algorithmic Abduction: Robots for Alien Reading’ (University of Chicago Press), James A Evans and Jacob G Foster describe the process of teaching a robot the element of surprise. They begin with a consideration of how the human scholar performs abduction.

‘In abduction, one begins with theory or a set of structured expectations; in this case, these expectations would be based on some scholarly tradition. Then, through investigation of a collection of texts, the scholar becomes surprised by a pattern that violates expectations and theory.

‘Finally, the scholar imagines an alteration in the theory that would accommodate the surprise. Abductive analysis…enjoins the researcher to seek out surprising evidence to drive theory forward.’
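Those three steps map naturally onto a filter-and-rank loop. The Python sketch below is a hypothetical illustration of the idea rather than Evans and Foster’s implementation: `expected_prob` stands in for a model of scholarly expectations (in practice a language model or topic model trained on the critical tradition), and the candidate patterns are invented:

```python
import math

def expected_prob(pattern: str) -> float:
    # Hypothetical stand-in for a model of scholarly expectations:
    # the probability the critical tradition assigns to a reading.
    priors = {
        "governess as moral guide": 0.30,
        "madwoman as gothic ornament": 0.25,
        "weather as emotional register": 0.02,
    }
    return priors.get(pattern, 0.10)

def surprisal(pattern: str) -> float:
    # Step two of the quote: surprise as violation of expectations,
    # measured as -log p under the expectation model.
    return -math.log(expected_prob(pattern))

def most_surprising(patterns: list[str], k: int = 1) -> list[str]:
    # The algorithm only filters and ranks; step three - revising the
    # theory to accommodate the surprise - is left to the human reader.
    return sorted(patterns, key=surprisal, reverse=True)[:k]

candidates = [
    "governess as moral guide",
    "madwoman as gothic ornament",
    "weather as emotional register",
]
print(most_surprising(candidates))  # ['weather as emotional register']
```

The ranking hands the scholar the pattern that most strongly violates the tradition’s expectations; whether it is a productive surprise or an artefact of the data is a judgement only the human in the loop can make.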

In describing how to ‘train’ an algorithm in abductive reasoning, the authors liken it to training a humanities student.

‘Pedagogically, this approach is akin to training a graduate student in a scholarly tradition, but coaching them to be sensitive to surprising deviations in the data and to follow those deviations in imagining alternatives.’

As the only kind of reasoning that supplies new ideas, abduction, Evans and Foster conclude, ‘is essential for machines to become fellow interpreters.’

With a fresh crop of humanities students starting their academic journeys this month, how likely are they to benefit from dialogue with an abductive algorithm? Naomi McAreavey, Associate Professor of Renaissance Literature at UCD, does not expect rapid adoption. She thinks the focus is still very much on plagiarism and not on a vision of AI advancing humanities scholarship.

‘Unfortunately we risk creating a taboo around AI use in academia,’ says McAreavey. ‘I think this may only disadvantage students rather than protecting academic integrity. We are slow in academia to adopt new tech. It took COVID to really push virtual learning into meaningful use. Now we realise that it can make classroom learning even better.’

McAreavey is completing a two-year project at UCD called AI Futures, which is examining the changing place of AI in arts and humanities teaching and learning. As part of this project she was involved in creating a traffic light system for academics to indicate to students whether or not they can use AI in their assessed work. ‘I think we should amber- and green-light more than we are currently,’ she says. ‘Only by using AI can we develop a critical understanding of what it can and can’t do and where it might add value as a research and writing assistant.’

The emergence of Critical AI reminds us of the imperative to keep humanities scholars at the heart of AI development. Interrogation at this level, which does not arise naturally within tech development itself, provides the philosophical guardrails that can help to keep humans at a safer, more critical distance from AI. Writing in The Human Condition, the German-American philosopher and political theorist Hannah Arendt describes the alternative:

‘If it should turn out to be true that knowledge (in the modern sense of know-how) and thought have parted company for good, then we would indeed become helpless slaves, not so much of our machines as of our know-how, thoughtless creatures at the mercy of every gadget which is technically possible, no matter how murderous it is.’


In conversation with Louise Holden