Robin S. Reich
Where is human discernment in AI? (pt. 2)
Computers, at their current level of technology, can’t make judgements like humans do; why would we want them to?
I have often thought about the use of AI in my work. I am a historian of medieval culture and science – my research involves interpreting documents and objects left by people a thousand years ago, which have aged and weathered in ways that make them both familiar and foreign at the same time. My colleagues have used computer analyses to read text in the water-damaged pages of fragile manuscripts that is illegible to the human eye, and to visualize the cities of the past based on archaeological investigations. Certainly, I already use AI in my research when I search for scholarship published in my areas of inquiry over the last century or more. But there are also many places where I feel the ease of an AI tool would degrade the essential nature of my work.
I recently wrote an article, for instance, tracking the naming and use of a specific plant in ancient and medieval Mediterranean medicine, arguing that the plant’s name changed as it became taboo. I am confident that an LLM, as currently imagined, cannot and will not perform this kind of analysis, because I see the fundamental flaws in the ways such algorithms are already used in translation software. They focus on defining the meaning of a word rather than ascertaining the meaning of a sentence, a paragraph, a society-wide usage. They are not trying to capture a feeling, nor would they know what to do with one if they did. History is not primarily about determining what happened in the past, but about finding sympathy with people living in different circumstances in order to understand how and why they made the choices they did. It is literally a humanitarian discipline – not in the sense of one interested in the well-being of humanity, but of one attempting to gain insight into what it means to be human. Asking an AI to do that work for me is inherently a relinquishing of my humanity: “explain to me, computer, what it is to be human, because I cannot work out how to arrive at an answer myself.”
And though it might not seem that way, the study of the sciences is similarly bound to our humanity, because the kinds of questions we ask, and develop technologies to answer, change. Right now, our sciences are directed at efficiency, prolonging human life, and easing suffering. A thousand years ago, Mediterranean scientific inquiry was directed at understanding what nature is. The questions changed, but not in a straight line – the sixteenth and seventeenth centuries (the period often called the Scientific Revolution) took a hard detour into asking whether people could create natural things artificially. Scholars still had not answered questions about the fundamental building blocks of life when they were experimenting with how to animate human-like beings built from clay. But doing so was an important and challenging exercise that reframed the questions and opened our eyes to a different way of seeing. Experimentation, purposelessness, and reframing resulted in a paradigm shift that we ultimately decided had greater utility – from humors and four elements to cells and the periodic table. Even this paradigm is not set in stone – it continues to be modified, and its very foundations questioned, as we constantly reassess whether it serves our needs as they exist now. To be human is to change, and we cannot future-proof that.
Halfway through the first episode of “What next”, educators at the Khan Lab School in Mountain View, CA teach children to write using their AI, Khanmigo. Khan Academy founder Sal Khan explains that this tool is intended as a companion to students, a tutor that shows them where their writing can improve according to accepted best practices. The teacher asks, “who would prefer to use Khanmigo rather than standing in line waiting for me to help you?” ‘Wow,’ the audience is meant to say, ‘this will ease the burden on teachers and allow the kind of one-on-one instruction every child deserves.’ As an instructor who emphasizes writing skills, I find this scenario horrific: it reduces my work to the regurgitation of a style manual. It assumes that the problem to be solved in education is only that the student does not yet know the answer; it does not consider that the student may not know how or why to ask the question. Indeed, part of education is learning to be self-critical, to assess when a situation requires assistance – this is not specific to the humanities and arts, but a universal aspect of learning. As I argued in my previous post, if we only teach to our current level of knowledge and do not foster skills in critical thinking and experimentation, we will not have the ability to advance our knowledge, to change our questions to suit new circumstances, or to reevaluate existing situations at pivotal moments. In education, as in life, AI can only be a tool; it cannot be an inventor. Even more importantly, the people who drive the development of AIs are not social engineers, nor educators, nor neuroscientists; they are businesspeople and computer scientists. They cannot solve everyone else’s problems for them, only with them.
November 6, 2024