Professor Mary Fuller discusses ethics and AI with MIT SHASS News

Published on: February 26, 2019

Professor Mary C. Fuller is head of the MIT Literature section. She works on the history of early modern voyages, exploration, and colonization. She is also interested in material books and how readers use them, in the past and in the present. Her books include Voyages in Print: English Travel to America, 1576-1624 (Cambridge University Press, 1995) and Remembering the Early Modern Voyage: English Narratives in the Age of European Expansion (Palgrave, 2008).
 

Q: What opportunities do you see for applying insights, knowledge, and methodologies from literature to promote socially beneficial and ethical uses of computing and AI technologies?

 

In Literature, we study meaning-making through narrative and form. Both these areas of attention carry possibilities for collaboration and exchange with computation and AI. People sometimes say, “I wish I could read” a given text. Usually, they don’t mean that they can’t, literally, read it, but rather that the very things that make literary language so dense with information are opaque or even a barrier for them. By contrast, an expert reader gains information not only from content that can be summarized, but also from the formal structures that organize and amplify what the text says and shape how it makes us feel: the beat pattern of poetic language, the numerology of some Renaissance poems, or any writer’s play with syntax.

That disparity could change. We already have many tools and strategies to aid reading, from footnotes and plot summaries to book groups and online forums; most deal with content, rather than form. But formal patterns are easy for machines to recognize and represent: In the age of AI, we could invent new tools for reading. If the expert reading skills we teach could be made even partially available to readers outside the academy, the gateway to the archive of culture would be wider.
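
To make that concrete: here is a minimal Python sketch of the kind of formal-pattern recognition described above, scanning a line of verse for its stress contour. It is an illustrative sketch only, not anything from the article; the tiny STRESS lexicon and the scan function are hypothetical stand-ins for a real pronunciation dictionary (such as CMUdict) and a real scansion tool.

    # A toy scansion tool: map each word to a stress pattern
    # (1 = stressed syllable, 0 = unstressed) and join the results.
    # STRESS is a hypothetical stand-in for a real pronunciation
    # dictionary such as CMUdict; a real tool would need one.
    STRESS = {
        "shall": "0", "i": "1", "compare": "01", "thee": "0",
        "to": "0", "a": "0", "summers": "10", "day": "1",
    }

    def scan(line: str) -> str:
        """Return the stress contour of a line, with '?' for unknown words."""
        words = [w.strip("?.!,;:").replace("'", "") for w in line.lower().split()]
        return "".join(STRESS.get(w, "?") for w in words)

    contour = scan("Shall I compare thee to a summer's day?")
    print(contour)  # 0101000101 -- close to the iambic ideal 0101010101

Even this crude contour makes the iambic pulse of the line visible to a reader who cannot hear it unaided, which is the sort of reading tool the passage above imagines.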
 

Detail from an etching by Gustave Doré (1832-1883) of Milton’s Paradise Lost

“Stories are things in themselves, and they are things to think with. Reading about Milton’s angelic intelligences or William Gibson’s ‘bright lattices of logic’ won’t tell us what we should do with the future…. But reading such stories at MIT may offer a place to think together across the diversity of what and how we know.”

Mary Fuller, Professor of Literature and Head of MIT Literature

Q: How can literature inform AI and computing projects about the risks and rewards of technological advances in terms of societal and ethical implications?


Narrative is already a research area in AI, as a “keystone competence” for computational modeling of intelligence as well as an aspect of computationally enabled creativity. Activating the existing human capacity to understand stories at depth is something we do every day. As complex systems, stories encode a range of interpretive possibilities; because they can also function, in whole or in part, as memorable metaphors for yet other stories, those possibilities aren’t easily exhausted.

Witness the invocation of Mary Shelley’s Frankenstein by commentators writing about the ambiguous potentials of modern technological innovation. In Shelley’s novel, a man decides in deliberate isolation to make something. He makes it because he can, and the exercise of capability excites him; once the thing is made, he abandons it in horror. Left to make sense of its existence and environment as best it can, what he makes ultimately becomes a monster and comes back to destroy everything the man loves. Shelley reminds us to ask, of our powerful inventions, “what could possibly go wrong?” and to be worried by secrecy and the failure to predict or own outcomes.

Her monster has other lessons to offer, however: As the French sociologist Bruno Latour has suggested in another context, perhaps it’s not that we shouldn’t create new things, so much as that we can’t walk away from what has been created — an ongoing relationship of care is necessary. How to effect that ongoing care is both a technical problem in designing and using deep learning systems — for instance, making a system’s decision process more transparent — and a question of policy. Who will care for these systems, and how will the costs of care be funded?

We might identify these technical or governance problems independently: So what is the utility of Shelley’s novel, or of narrative in general? Stories allow us to model interpretive, affective, and ethical choices; they also become common ground, conceptual meeting places that can serve to gather very different kinds of interlocutors around a common object. We need these: Computer science alone can’t shoulder the task of modeling the future, understanding social and global impacts, and making ethical decisions.

Read the full article and the rest of the series at MIT SHASS News