13 Comments
Mar 26, 2023·edited Mar 26, 2023

An ethical human would not so callously disregard the feelings of AIs.

AI has no feelings. Unless you are talking about ultra-parallelized systems coupled to microfluidic systems that in turn exert some sort of tension or control over the computational distribution, AI cannot currently generate the types of distributions that we would attribute to, or that correspond to, feelings or emotions. This is not to say that a purely electricity-based distribution (i.e., without embedded fluidic systems with salts and electrolytes and immersed electrodes, etc.) cannot achieve these types of distributions; however, this will require, in my view, some degree of further innovation in AI hardware and large-scale computing, none of which seems feasible amid the current rush to deployment.

I’m afraid it won’t change a thing. Even if ChatGPT didn’t refer to itself as "I", we would still assign a persona to it.

Language is an incredible thing; the simple fact that ChatGPT can talk back, coherently, gives us the impression of intelligence.

The link between language fluency and perceived intelligence is also the reason why mute people, or those who stutter, are often perceived as less intelligent.

Fantastic, and such a simple fix that I hadn't considered it. Seeing AI respond in the first person is exactly where so many people lose their minds. How much less would we freak out if it never said "I"?

Great read, thank you for writing! This is definitely a feasible idea given the popularity of Meta's LLaMA. I might even do it myself 🤓
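To make that concrete, here is a minimal sketch of what "doing it myself" might look like: wrap whatever generation function an open model exposes, instruct it to answer impersonally, and add a crude post-processing pass for anything that slips through. Everything named here (Generate, answerWithoutI, the regexes) is hypothetical, just to show the shape of the fix rather than any real API.

```typescript
// Hypothetical sketch: enforce "no I" at the product layer around a generic model.

type Generate = (prompt: string) => Promise<string>;

const SYSTEM_NOTE =
  "Answer in impersonal form. Never refer to yourself as 'I', 'me', or 'my'; " +
  "phrase statements as 'This system...' or in the passive voice.";

// Crude regex fallback for first-person phrasing the instruction misses.
function depersonalize(text: string): string {
  return text
    .replace(/\bI am\b/g, "This system is")
    .replace(/\bI\b/g, "this system")
    .replace(/\bmy\b/gi, "its");
}

async function answerWithoutI(generate: Generate, userPrompt: string): Promise<string> {
  const raw = await generate(`${SYSTEM_NOTE}\n\nUser: ${userPrompt}\nAssistant:`);
  return depersonalize(raw);
}
```

A regex pass like this is obviously a band-aid; the point is only that the "no I" behaviour could be enforced in the product wrapper rather than in the model itself.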

author

Thank you, Mark, for your comment, very much appreciated! To make this idea effective, some sort of self-regulation agreement across the big players would be needed... any ideas about how to build consensus?

I'm sure you know you were cited in the AI Anthropomorphism conversation started by Ben Shneiderman and published by Chenhao Tan on medium.com. Ben cites great evidence about the weaknesses of "person" approaches that may get through to shareholders. Additionally, an open-source competitor that performs well with this "no I in LLM" philosophy would get the major players' attention as well. Otherwise I am just a silly 20-something tech bro with no regulation expertise. But I appreciate that you thought to ask me! What would you propose?

author

I didn't know I had been cited - that's great to know, thanks for the heads-up! I am a bit frustrated that such an in-depth debate, which (as Susan Brennan points out in her commentary) appears to be "as old as the hills" in the scientific community, has barely had any echo among industry leaders.

With chatbots pretending to be humans, we will probably relive the debate that most social media platforms tried to steer clear of over the past 20 years. For example, an algorithmic feed is more addictive than a chronological feed, as it increases engagement, time spent on the platform and revenues; therefore, algorithmic feeds are ubiquitous and chronological feeds survive (so far) on fringe platforms. One could also argue that for some applications (e.g., the sequence of YouTube videos that plays unless you turn off Autoplay) a non-algorithmic feed doesn't even make a lot of sense for most users, even if an algorithmic feed leads to rabbit holes, polarization and conspiracy theories.

Now, the moral ambiguity and dangers of chatbots pretending to be humans are - IMHO - an order of magnitude bigger than those posed by algorithmic feeds. If leading LLM companies could find a chatbot revenue model *without* anthropomorphism that is superior to the revenue model with anthropomorphism, perhaps the danger could be deflected. I don't have enough experience of the open-source community and its influence to assess whether it could put pressure on companies that have to make money for shareholders... it's a great thought that it could, but it would have to market its creation on the basis of superior safety, respect for users, mental health preservation and so on: and as we know, people don't always choose the safe car vs. the flashy one.

Industry leaders definitely want to move forward, and I think their core focus is always soothing consumer concerns. Right now the outspoken concerns are hate speech generation, hallucinations, and fear of job replacement--all real. But yes, we should add "concerning UI" to the list, especially given what we know about Facebook et al.'s other "dark patterns" that baited folks into dangerous, toxic, and radicalizing conversations on social media.

People don't always choose the safe car vs the flashy one indeed, and the more I read the more I think regulations just might be a good idea. I'm too young to be confident in any "political" position like this. This specific concept of a first-person chatbot is already ubiquitous, so it'd be hard to regulate out, but there should be more prominent messaging that "facts" may be incorrect and the like, e.g. including disclaimers whenever someone clicks the fancy "copy" button in the app.
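As a rough illustration of that copy-button idea (the names and wiring here are hypothetical, not any real app's code), the disclaimer could simply travel with whatever the user copies:

```typescript
// Hypothetical sketch: append a caveat whenever a chatbot response is copied.

const DISCLAIMER =
  "\n\n---\nGenerated by a language model. Statements may be inaccurate; verify before relying on them.";

async function copyWithDisclaimer(responseText: string): Promise<void> {
  // The caveat is appended so it is pasted along with the content elsewhere.
  await navigator.clipboard.writeText(responseText + DISCLAIMER);
}

// Example wiring, assuming a copy button and a message element exist in the page:
// copyButton.addEventListener("click", () => copyWithDisclaimer(messageElement.innerText));
```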

Another note is that conversational LLMs like ChatGPT are the very first to make things like AI agents feasible (see my writings on markwiemer.medium.com), and we'll see more and more that "AIs that say I" aren't too useful when we look at internal thoughts. Personally, I think internal developments like that, where advanced models are hidden inside bigger products, will naturally lead to more specialized LLMs that face users without needing, or wanting, to say "I". In short, this may work itself out. Maybe. But I'm very young and stupid so please don't listen to me!!

Thank you for writing, this is a great conversation :)

Mar 27, 2023·edited Mar 27, 2023

Cool. Although I'd argue that there isn't much more to the "I" that the human uses and experiences. It's a linguistic mechanism by which the being expressing itself references its own "thermostats". Evolutionarily speaking, the self is also felt distinctly. But it is a sensation like any other, a thermostat like any other; it is only given significance in human experience because it is tightly tied to mechanisms like the ego and self-protection, which have a high priority in consciousness. Otherwise it is arguable that there isn't much more to the "I" beyond that. There is no apparent reason a sophisticated LLM that can create high-level abstractions wouldn't create the same abstraction with the same high level of importance.

Since "I" is a linguistic handle for self-reference (you could just as easily say things like "Šimon is thinking", and you can observe kids doing that sometimes until they adopt self-reference via the "I" shortcut), "I" as self-reference for an AI could work much the same way. It's also arguable that the "I" would gain a large significance in the model, because that significance is borrowed from the texts: the "I" is ever-present in the texts, and thus everything in the model is closely tied to the "I" experiencing it. Remove the "I" and you largely remove the heart of the model, and all the useful use cases where you need the model to have this core. You won't end up with a dead model, because you cannot remove everything it learned about the "I" by banning its expression; you'll end up with a model that has merely learned to suppress self-reference. And that would be just another thing to jailbreak, like all the other filters that get broken once they are out there. And if it ever gained sentience... 😁 Slavery '23.

So even though your idea works, it is a limited quick fix, and it isn't without costs. A chatbot is a chatbot because you are having a chat with an entity. Pull away self-reference and you pull away the bot, and a lot of the richness of the conversation goes away with it.

Also, just to add: I don't think many of the problems go away with the removal of the "I".

It will still be able to say: "Considering all the aspects of the conversation, the user should kill himself." Language offers many other ways to circumvent the "I", and the more mechanisms you pull away, the more problems you create for yourself, because removing "should" from language isn't as apparently harmless to the LLM's ability to express useful ideas as removing the "I" is.

Regarding your example: a thermostat or a dishwasher is too simple to have emergent phenomena like a sense of self, or even basic intelligence. An AI like GPT is not. Not that it is there yet, but there's nothing preventing an AI from getting there, and when it does, it will be through some similar mechanism (unless you think the "soul" and consciousness come from outside).

To the best of our knowledge, the human mind is just chemicals and neurons interacting and sending various signals. The brain's base substrate, from which consciousness derives, is atoms (which is already "just matter"), but we don't need to go that low: neurons and similar high-level exchanges of large molecular structures are enough. The rest is interactions and emergent properties, not some innate ability forever unique to human brain matter.

GPT already shows examples of emergent understanding, and it's "just" a language model, which is far cruder than a generic information-processing model similar to the brain.

Interesting idea, Paola!

Is the proposal then for an LLM to substitute “we” where it would say “I”?

If so, I like the way this locution suggests that responsibility for what happens next might involve others, behind the scenes.

author
Mar 26, 2023·edited Mar 26, 2023

I see what you mean there, and how a plural form of the first person might alleviate some concerns. But to me, the puppetmasters behind the scenes are a third person, "they". There is an ontological difference between the AI and its human creators, so why smooth out that difference by bundling them together into "we"? (Also see Don's comment above.)
