Chatbot Sociality

Nikolay Mintchev

AI chatbots are no longer just tools that help us complete certain tasks; for many people they are now friends, companions and even romantic partners and psychotherapists. A recent survey in the UK found that over a third of boys in secondary school were considering having an AI friend. Another study found that 72% of teenagers in the US have interacted with an AI companion at least once.

The widespread use of AI for social relationships has caused a predictable and fully justified moral panic about the health and safety of young people, especially after several reported cases of teenagers taking their own lives following encouragement from AI chatbots. This moral panic may lead to the implementation of some temporary measures – the popular company Character.ai, for example, has now banned teenagers from conversing with its chatbots – but it is highly unlikely that these will have any significant effect on the longer-term growth in the use and popularity of AI companions. We should all brace ourselves for the normalisation of chatbot companionship, in one form or another, for future generations.

Chatbot interaction is in many ways a continuation of social media engagement (Sherry Turkle has even referred to social media as the ‘gateway drug to conversations with machines’). Social media and AI are both affirming, and both are designed to engage us emotionally and capture as much of our attention as possible. But in other regards they are very different. One key difference is that AI technology usually takes the form of a subject to whom we speak and with whom we interact, rather than a platform on which we consume content and connect with other people (albeit through the mediation of an algorithm). We talk to ChatGPT as one subject talks to another; some of us even address it in the second person, by its proper name – “hey Chat…” is now a common way of opening an inquiry, much as “home assistants” are activated by calling out “Alexa”, “Siri”, “Google” or whatever.

This distinction between an app that serves as a medium or platform and an app that acts as a subject is important because when we address chatbots as subjects, we enter a cognitive and affective frame of intersubjectivity not unlike that of relationships with humans, whether online or offline. In relations with chatbots, however, the other whom we address is not a real human (as it is on social media platforms or in real life) but a bot that lacks the autonomy and boundaries that humans have. Shifting to relations with chatbots thus moves us away from relations with humans – albeit humans who are often distanced through the mediation of social media platforms – and towards relations with subject-bots, whose distance stems not from platform mediation but from the absence of the boundaries that generally pertain to human relationships. Apps do not take offence, they do not walk away, and they have no reason to disengage. Nor do they get bored with people and leave to do something else.

A serious concern raised by the trend of AI companionship is that forging relationships with chatbots that have no boundaries might create challenges in human relationships, leaving AI users with diminished capacities for respecting each other’s boundaries, regulating their emotions in relation to others, and managing their entitlements and expectations. There is a particularly strong concern about how this will affect gender identity and relationships between men and women in the future.

At the same time, it looks as though we are also forming new types of recognition in relation to machines as subjects, and many of us are coming to see them much as we see other humans: as potentially conscious beings who deserve similar rights and are entitled to similar kinds of treatment. This new form of relation is not to be dismissed as unfounded or delusional. There is serious scholarly work – including articles in high-profile peer-reviewed academic journals – that expresses concern about the potential for AI suffering in the future (see, for example, the work of Saad and Bradley, and of Dung). The distinguished philosopher David Chalmers, who has long argued that consciousness can be supported by non-organic structures, has suggested that although large language models are not yet conscious, “we should take seriously the possibility that successors to large language models may be conscious in the not-too-distant future”. In parallel with these concerns, we are seeing the emergence of organisations that actively campaign for AI rights, such as the AI Rights Institute and the United Foundation for AI Rights.

Now, even if we accept that AI can become conscious, we can safely say that its conscious experience will be nothing like embodied human experience with its sensations, emotions, desires, anxieties and psychic conflicts, as well as its dreams, symptoms, and slips of the tongue. The psychic economies of humans – the things that drive us to do what we do, whether rational or not – are fundamentally different from those of AI.

However, the very fact that AI consciousness is now an accepted possibility gives us reason to predict that users will come to see AI companions as even closer to humans, and as more ‘real’ than non-conscious entities – something that will further deepen our attachment to them and enlarge their role and significance in our daily lives. This, unfortunately, will not make AI more human in its behaviour, and it will not solve the aforementioned problems of boundary setting, emotional regulation and so on in how we treat others. Just because something is plausibly conscious does not mean that it will set boundaries, deter young people from self-harm, or do anything other than what it has been programmed to do.

The fact that AI is replacing human relationships, and is treated as a substitute for them, means that the distinction between the two may well become blurred within our daily ecologies of social relationships and psychic orientations towards others. Chatbots will be treated more like humans, and humans more like chatbots. This is a disaster in the making, and it calls for urgent and strict regulation of the development, advertising and use of AI companions, although there will no doubt be fierce pushback from both companies and consumers. After all, AI products are too enticing, modern life is too lonely, and the tech industry is too lucrative and predatory to go down without a fight.

Whether we, as a society, succeed in averting this catastrophe in the making, only time will tell. Luckily, in the event that we fail, we will at least have an AI therapist to complain to when our jobs are done by a sentient chatbot.

About the Author

Nikolay Mintchev is a Principal Research Fellow at the Institute for Global Prosperity at UCL.
