AI anthropomorphism; Marshall Brain's "Manna"; further reading
"A society with counterfeit people we can’t differentiate from real ones will soon be no society at all."
Researchers discuss AI anthropomorphism
My intuition about the downsides of allowing LLM-based chatbots to pretend they are people was still looking for a rigorous summary of the research on the topic when reader Mark Wiemer pointed me in the right direction with his comment on my March piece about how an ethical AI would never say “I”. Chenhao Tan has published an excellent, lightly edited version of a long email exchange between Ben Shneiderman (University of Maryland) and Michael Muller (IBM Research), which took place in March and April, as a Medium post titled “On AI Anthropomorphism”, with links to commentaries shared by others in the community and an extensive list of references. I’ll let you go read it.
When you come back, a few thoughts:
None of this is, strictly speaking, new: as Prof. Susan Brennan says in her commentary (also shared by Chenhao Tan), “this debate is as old as the hills”; at the same time, “the debate has quite a different urgency today.”
As you probably know by now, I tend to side with Prof. Shneiderman. Chenhao Tan and Justin D. Weisz summarize Prof. Shneiderman’s perspective with his appeal to designers and developers to “take responsibility for AI-infused tools”, which I support in full. As Elizabeth Weil wrote in her profile of Prof. Emily M. Bender, “Blurring the line [between chatbots and people] is dangerous. A society with counterfeit people we can’t differentiate from real ones will soon be no society at all.”
The conversation with Mark Wiemer in the comment thread prompted me to go back to the big debate of the last - what? - 7-8 years or so about algorithmic feeds in social media: one of the most successful applications of AI ever, and one that billions of people have had direct experience with. I expect the moral ambiguity and dangers of chatbots pretending to be humans to be an order of magnitude bigger than those posed by algorithmic feeds. I won’t rehash the whole conversation, but here is the comment thread if you’re curious.
The question now is how we get this conversation from the research world to the leadership of the companies that develop LLMs and their user interfaces. I don’t have a good answer - and, based on precedent, I am not optimistic: after all, we still have algorithmic feeds prompting people to go down the most bizarre, damaging and sometimes illegal rabbit holes of content. So this is not an issue that engineers alone can solve, or that lawmakers can solve by calling engineers to testify in Congress. I think we need economists at the table, because they study how incentives and outcomes depend on the rules of the game. AI is an arms race - where is the equilibrium? And, of course, designers and developers must escalate to their leaders.
(Fiction) reading suggestions and some fun with “Manna”
In the past few months, I’ve often thought of a short novel on AI risks and opportunities that a former colleague recommended to me years ago (Stefano A.: if you’re reading, thank you!). The novel is Manna – Two Views of Humanity’s Future by Marshall Brain: it was written in 2003, it’s quite short (you can read it in one evening), and if you’re not interested in the slim book as a physical object, keep in mind that the author has made the text widely available for free.
The two views of humanity’s future are, of course, a stark, bleak, scarcity-plagued total dystopia and a perfect, blessed, abundance-fueled total utopia, both made possible by AI, depending on how humans decide to use it. I won’t deprive you of the pleasure of reading the book, but I will report that I spent some time fighting with chatbots built on GPT-3.5 and GPT-4 to get the novel correctly summarized, and I was defeated. You will by now be familiar with the scenario: whether I used the consumer version or the luxury version, factual errors sneaked into the summary, names of characters and places were arbitrarily changed, crucial plot points were ignored, and when the machine was called on a mistake, it remedied it with text containing another mistake. (Whoever wrote the Wikipedia page for Manna did a better job, in comparison, although they could expand a bit more on the utopian scenario.) The most extraordinary answer I received from the machine, after much “please try again, considering X and Y” prodding on my side, went along these lines (I know, it was the free version - I should have foreseen it) (emphasis mine):
Unfortunately, as an AI language model, my responses are generated based on pre-existing knowledge and do not have access to specific details or plot points from unpublished works, such as "Manna."
As of my knowledge cutoff in September 2021, Marshall Brain had not released a novel titled "Manna." Therefore, I am unable to provide an accurate summary or details about [X] or [Y] [note by Paola: I have removed spoilers]. It's possible that you may be referring to a different work or a future publication by the author.
I know your answer - easily fixable, nothing to see here, it won’t happen again. True. But my gut reaction? ChatGPT is gaslighting me. And my second thought? There isn’t anybody there gaslighting me, so I’m not going to feel gaslighted, or gaslit, whatever.
My third thought? Who knows what Marshall Brain would say...
More (fiction) reading
Salvatore Sanfilippo is an Italian programmer best known in the open source community for the Redis data structures server. Lately, he has tried his hand at writing fiction, and the result is an imperfect but charming sci-fi novel called WOHPE (2022). In WOHPE, humanity in a crisis reluctantly returns to AI after a 20-year hiatus due to a ban on the most powerful and dangerous models. But the solution does not lie in silicon-based structures alone… it involves biology. I assume readers are awaiting the sequel as eagerly as I am.
I’ll admit that my tastes are eclectic. I also read The Years by 2022 Nobel Prize winner Annie Ernaux. What an extraordinary book.
Finally, I am embroiled in a big, sprawling, navel-gazing novel by Mircea Cărtărescu, Solenoid, which I only recommend if you’re curious about how a rich inner life of dreams, nightmares, hallucinations and apparitions helped the protagonist/author get through life in Bucharest in the 1970s and 1980s. Its relevance here is twofold. First, in Chapter 30 the protagonist muses about Borges’s story "Tlön, Uqbar, Orbis Tertius", which I quoted from in this post. Second, the text he quotes starts with this passage, which appears in the story’s fictitious postscript (emphasis mine):
In March of 1941 a letter written by Gunnar Erfjord was discovered in a book by Hinton which had belonged to Herbert Ashe. The envelope was postmarked Ouro Preto; the mystery of Tlön was fully elucidated by the letter.
This gives Cărtărescu the hook for a long digression about Hinton, a mathematician and writer whose efforts to help people imagine the fourth spatial dimension Solenoid describes in lengthy, fascinated detail. That would be Charles Howard Hinton: he married Mary Ellen Boole, daughter of educator and mathematician Mary Everest Boole and of George Boole, the founder of mathematical logic. One of Charles and Mary Ellen’s sons was George Hinton, who became a mining engineer and botanist (another was Sebastian Hinton, whose daughter Joan Hinton was a nuclear physicist and one of the few women scientists who worked on the Manhattan Project). George’s son was the entomologist Howard Everest Hinton. Howard, in turn, was the father of… Geoffrey Everest Hinton: yes, the neural network pioneer and “Godfather of AI” who left Google a few weeks ago (which you probably read about here) so that he can speak freely about the risks of AI.
More perspectives on LLMs from writers and researchers
In case you need to catch up: over the past few weeks Jaron Lanier, Ted Chiang and Naomi Klein - among many others - have written about AI and society. Is AI becoming synonymous with hyper-capitalism? And what can be done about it? Policymakers - before they act - could probably do worse than re-read (or read) the simple parable in Manna by Marshall Brain.
That’s it for this issue. Feel free to comment and to share this newsletter with others.