
Mat Savelli is a historian of science and medicine specializing in mental health and illness. He is a member of McMaster University’s AI Expert Panel on Teaching and Learning.

Much excitement greeted OpenAI’s recent live demonstration of its GPT-4o model, which further enhances ChatGPT’s abilities to “see, hear, and speak” in real time. In the video announcing its arrival, members of the OpenAI team spoke to ChatGPT and asked it to perform some tasks. Among the various prompts: provide advice on therapeutic breathing techniques; talk someone through solving a linear equation; offer guidance on coding; translate from Italian; and interpret a weather graph. All of this unfolded in just over 10 minutes, with answers delivered by GPT-4o in a cheery – even flirtatious – voice (which sounds suspiciously like Scarlett Johansson).

In one particularly illuminating – and alarming – section of the video, the demo team orders the AI assistant to come up with a bedtime story about robots and love. A few seconds into the story, the demonstrator cuts it off, telling the AI model: “I want a little bit more emotion in your voice, a little bit more drama.”

Midway through the AI’s attempt to restart the sentence, it’s stopped by another member of the demo team: “No, no, no ChatGPT, I really want maximal emotion, maximal expressiveness, much more than you were doing before.”

After the AI restarts the story yet again, a third team member jumps in, this time commanding GPT-4o to “do this in a robotic voice now.” The team finally tells the AI to cut to the chase and sing the conclusion of the story. The audience erupts in applause, marvelling at the range of the AI’s capabilities. There was certainly no sense that the AI assistant minded being ordered to redo the task five times in a minute and a half. As one team member excitedly explained, GPT-4o is an improvement on earlier models because, “You can now interrupt the model. You don’t have to wait for it to finish … You can just butt in whenever you want.”

The demonstration is doubtless a remarkable technological feat, but there has been surprisingly little discussion about how the forthcoming proliferation of AI assistants will reshape human interaction and relationships, and here we have reason to be deeply concerned.

GPT-4o and the similar models being developed by other companies have excited people because of their humanlike qualities. They respond in real time, crack jokes and are working toward reading our facial expressions. But they’re only humanlike in a superficial way, possessing none of the beautiful frailties that characterize actual people. They don’t have any feelings, desires, needs or limits. As any parent would intuitively understand, being interrupted five times in 90 seconds, with a separate command attached to each interruption, would be equal parts exhausting and frustrating. No chance of that for AI assistants.

In the very near future (or I should say, in the present), we’ll see a proliferation of AI tutors, AI friends and even AI sexual partners. While that might seem harmless or even helpful, as our use of AI assistants grows it will fundamentally reshape how we experience and interact with each other. As we become accustomed to a “human” that doesn’t say no, can’t get tired and never gets annoyed when we cut it off mid-sentence, we will likely develop the same expectations of the real humans in our lives. It’s easy to imagine a future in which we have less empathy, understanding and consideration for the people around us, having been conditioned by AI assistants to expect relentless availability, cheerfulness and a total lack of inhibitions or objections (which is particularly concerning given the rise of AI sexual partners).

The limitless nature of these AI assistants is also likely to further entrench the loneliness epidemic, which is a major driver of the contemporary mental-health crisis. Real human beings are complicated. We have emotional needs, can be difficult to read and sometimes mean one thing but say another. While the rewards of real human connection are immense, they are hard won, requiring patience, thoughtfulness and heartfelt attempts to understand one another. As smartphones and social media have already pushed many to a place where they find it challenging to be social in the real world, the rise of AI assistants will only further incentivize people to “socialize” digitally, something research repeatedly demonstrates to be far less meaningful and fulfilling than the real thing.

Before we get too excited about productivity boosts from AI assistants, we should pause for a moment and consider the costs to our relationships and mental health.
