Trusting machines with our feelings, part 2: Why we trust AI

Vulnerability may be the key. Because an AI cannot understand emotions the way another human can, we feel less exposed baring ourselves emotionally to it: it does not understand, so it cannot judge us or use what we share against us.

For example, people reveal personal thoughts to AI that they wouldn’t share with other humans. Google is the go-to for everyone’s embarrassing questions and dark secrets. Google feels like a place where you can share thoughts without consequences and satisfy your wants with no strings attached. Seth Stephens-Davidowitz calls Google a “digital truth serum” and exploits how unguarded people are with it in his research into people’s true anxieties, biases, and desires.

Furthermore, familiarity breeds acceptance. While people may still raise questions about online privacy and data security, the convenience that pervasive AI algorithms bring to daily life overrides many reservations. This is especially true as younger generations, starting with Millennials, are born into a world of AI and grow up with the technology.

Now, this example with Google may seem a little anticlimactic. The real drama in the question of trusting AI lies with AI that feels a little less passive and more involved. It is more challenging to trust the anthropomorphized idea of technology that replaces humans, because that idea threatens us. It also casts robots as serving a purely transactional function: they’re a tool, and their ultimate success is surpassing human ability. But with AI that we trust emotionally, there is a built-in relationship that by nature requires human participation alongside AI activity. We treat them as more than just tools, and that has created powerful opportunities.

Trusting AI Emotionally: Education

AI social robots have shown great results in helping children with Autism Spectrum Disorder (ASD) develop social skills, understand facial expressions, and improve conversational skills. Humans can be unpredictable, and constantly changing social cues can be overwhelming; robots offer a more structured, predictable range of expressions that children with ASD are more comfortable with and responsive to.

Children with ASD interacted with NAO, a cute humanoid robot just under two feet tall, as if it were a real person, simulating human social relations and preparing them for the real thing. Another robot, QTrobot, was able to increase these children’s willingness to interact with human therapists, demonstrating how trust in the robot enhanced trust and comfort with other people as well. On top of the approachable simplicity that AI robots offer for teaching children with ASD, robots have another special trait: they are completely non-judgemental.

This trait has been documented as particularly helpful in robots used to tutor a second language. It is often said that to learn another language, you have to overcome your initial fear of your accent or pronunciation and just practice shamelessly. With advances in technology, we can instead create an environment that bypasses the source of embarrassment. Machines are not judgemental. They are social, but not social enough to trigger embarrassment, so people aren’t afraid to make mistakes in front of a robot, which enables effective learning.

For everyone, learning requires opening yourself up to repeatedly making mistakes, and for neurodivergent people especially, learning can come with additional stresses. When learning with a robot, the emotional comfort is twofold: the student doesn’t feel embarrassed, and the student doesn’t have to worry about reading the teacher’s emotions and possible judgement.

Trusting AI Emotionally: Comfort

Although machines don’t feel, they can emulate emotional relations. In Japan, where an aging population and a shrinking labor force have incentivized rapid integration of automation and AI into the workforce, robots are also taking on vital companionship roles. The robot doesn’t even need to be humanoid: AI-powered robotic animals are being explored as a way to parallel the comfort of real therapeutic animals for the elderly, especially those with dementia, without the complications of fur allergies or animal care. These comfort robots are more than toys or mechanical stuffed animals in that they react and display appropriate “emotions”. They tap into a fundamental human desire to connect, and that is a powerful ability that can complement human interaction.

Besides providing comfort through companionship, trusting an AI to make decisions we feel uncertain about can itself be comforting. There’s comfort in an AI making the “objectively best” decision: if it proves flawed, it is not entirely your fault. We’re inclined to trust a social media platform’s recommended connections and accounts because its recommendations are often based on what and whom we have previously enjoyed, and we romanticize the idea of matchmaking algorithms clearing an easier, rose-laden path straight to our one true love.

It’s not that we trust AI to plan out our lives. We wouldn’t be comfortable with an AI telling us definitively whom to marry. But the initial stage of deciding whom we trust enough to pursue a platonic, romantic, or work relationship comes down to making ourselves vulnerable to rejection or disappointment. Because we imbue AI technology with overblown omnipotence, there’s comfort in believing an algorithm has determined your “best match.” Your compatibility has been calculated. It’s not random chance anymore. You can feel more confident about making that first move.

In this way, again, our trust in AI augments rather than replaces human decisions to initiate actions and relationships.

Fickle humans

We wouldn’t be human if our thinking and emotions weren’t complex and self-contradictory in some sense. We are likely to trust AI because it lacks the human tendencies, like judging and gossiping, that make us wary of other people; yet at the same time, we are more inclined to trust AI that exhibits human elements like empathy and cleverness.

One big source of people’s misgivings towards AI is its lack of empathy. According to Dr. Kurt Gray, humans want to see three qualities in team members in order to trust them: mutual concern, a shared sense of vulnerability, and faith in competence, with mutual concern being the most important.

That makes sense: knowing that someone cares for your wellbeing is fundamental to putting your trust in that person. As much as AI might seem to express concern and care, humans know that robots don’t truly feel any concern (or feel anything at all). And that bothers us.

Still, much of what makes us feel comfortable, and makes something feel empathetic, is ingrained in our subconscious biology. That means imitating empathy can work, and drawing on human social psychology to enhance AI is a solid strategy. Research has identified simple design features that improve human-robot interactions, such as adding a face, creating a voice, and giving the robot a body that mimics certain gestures in real time.

Humans are also markedly…not robotic. And we like it when robots act more like humans, which may seem counterintuitive to the purpose of a robot. A critical advantage of AI is its constant consistency, but for garnering emotional trust, predictability can be a turnoff. One UCSD study that observed toddlers’ responses to a robot over five months found that the children interacted most positively when the robot showed variety in its behavior, and that interactions deteriorated quickly when the robot was programmed to act predictably. Dr. Brian Scassellati’s lab at Yale identified another surprising characteristic that makes robots more appealing: the ability to cheat.

Scassellati had participants play rock-paper-scissors with a robot and observed how they treated it. When it played predictably, they treated it simply like a machine. But in one round, the robot would abruptly change its hand gesture to the winning hand. As soon as the robot cheated to win, people started treating it as an agent instead of an object: they made eye contact, spoke to it, and used personal pronouns. People treated the robot as a “somebody,” because cheating is something a human, not a robot, would do. We like that human element, even when it’s something negative like cheating.

In this way, we dig ourselves into a strange dichotomy that requires a delicate balance. We trust robots because they are more predictable than people, yet if they are entirely predictable, they don’t garner our trust either. We trust AI when they’re more human-like in some ways, but we trust AI because they’re not human-like in others. 

The distrust we have towards AI over privacy, biased data, and data protection ultimately stems from our distrust of humans: we don’t generally care about a machine “seeing” our private information. We don’t feel ashamed in front of a machine, or afraid that it will use our secrets against us. We do feel that way about humans. Taken to its logical conclusion, this suggests we should want humans out of the process entirely.

At the same time, we don’t want AI to completely replace humans, so we want humans always in the loop. Do we want, or can we ever trust, fully autonomous AI that needs no human involvement? Are we content with becoming emotionally entangled with our AI technology in the way we already are?

Designing AI with human trust and emotions in mind opens huge potential for humans to progress alongside AI. But the same kind of AI also opens huge vulnerabilities in our psyche to manipulative marketing and influence.

AI is meant to interact with humans, and interacting with humans involves messy, inexplicable elements like emotions and trust. We stumble through them in human-to-human interactions. But with AI, we have the responsibility of designing and building: we need to stumble more thoughtfully. What kind of future we engineer for AI and humankind depends on our first understanding what we really want from AI, which is ultimately a question of what we really want from other humans. Building trust in AI comes from understanding what humans care about. Do we want an objective, calculating machine? Do we want AI that is genuinely empathetic to us? How would we feel? What do we want to feel?

Author: Katherine Chou
