Trusting machines with our feelings

When people think of AI, the first image that comes to mind is often of things we do not trust. Crashes involving self-driving cars, especially fatal ones, continually fuel wariness of autonomous vehicles. Allowing autonomous weapons to make decisions about human life feels fundamentally wrong. Distrust in voting machines during the 2020 U.S. presidential election sparked controversy and active protests.

What we do not think of as readily are the machines so woven into the fabric of ordinary life that they don’t warrant picturing, like spell check, social media, and banking transaction security. These are just as revealing of our relationship with and trust in technology. There are AI systems integrated so comfortably into our lives that we are dependent on them, if not functionally then at least emotionally. If you have ever felt panicked upon realizing you didn’t have your phone, found yourself checking it constantly just to check, or felt sentimental about giving up an old car or phone, then you know this feeling. The emotional interplay between humans and our technology underlies how, why, and when we do trust AI, sometimes more so than we trust people.

Emotions are characteristically human, and revealing our emotions is a vulnerable act. Feeling emotion and empathy remains among the traits humans haven’t yet engineered into AI. Machines are seen as the antithesis of emotion and feeling: they are built for objectivity and logical decision-making. It is interesting, then, that our trust in AI seems to extend first to trusting it with our emotions, while our distrust largely resides in its decision-making.

What is trust?

It’s important to first address what trust is: is it a subconscious emotion? A rational decision? The Stanford Encyclopedia of Philosophy, in exploring many different philosophical dimensions of trust, provides three requirements of trusting: “(1) be vulnerable to others—vulnerable to betrayal in particular; (2) rely on others to be competent to do what we wish to trust them to do; and (3) rely on them to be willing to do it.”

These requirements stem from the traditional notion of trust, which occurs between humans in interpersonal relationships. Attributing trust to relationships between humans and technologies is a newer idea (for the rationale and debate behind it, see the concept of the intentional stance). This difference creates nuances in how we examine each of the three requirements.

Applying a human definition of trust to AI

Relying on Willingness

The question of willingness in the third requirement, for example, differs between AI and humans: we aren’t currently concerned with our technology’s willingness to carry out its programming. We can rely on AI to do as it’s told, at least until we engineer true free will, or some variant that allows AI to make unengineered choices. We will need to decide whether, and in which circumstances, such a design might serve our wellbeing.

Being Vulnerable

The first requirement reveals another way trust in AI arises differently from trust in other humans. The requirement of being vulnerable to betrayal touches on the paradox of trust: you must take the risk that the person you entrust may not honor that trust, because if there were a 100% guarantee they would follow through, there would be no need for trust.

With technology, our relationship is supposed to be predictably transactional: artificial intelligence performs a task we expect it to do; a promise of guarantee is built into the very idea of a machine. You don’t worry daily about needing to trust your refrigerator to keep your food cold. You just rely on it to work.

The problem with increasingly complex AI is that the guarantee isn’t always there anymore, once the technology is given responsibilities more complicated than a refrigerator’s simple role. The definition of ‘working’ versus ‘not working’ is harder to pin down for delicate problems, such as which people might be a good match for you to date, or which people are creditworthy. What constitutes a good solution in these situations is not as easy to agree on, putting us in a position where we have to trust something that historically didn’t require trust.

In essence, we are pushing AI toward more human responsibilities: to provide answers for more human, less robotic situations. And the issue of trust is evident in all human-to-human situations. For example, you may be able to trust your partner to do the dishes. He may do them well according to his own definition of what ‘well’ means, and you may be fine with that. An ex who was much more particular about how the dishes should be done might not have considered your partner trustworthy at all. Is it that your partner is untrustworthy, or is it that a relationship of trust depends on the people or entities who enter it, and their standards?

Doing the dishes may sound like a trivial occasion for trust. But what about what constitutes safe driving, when your partner is behind the wheel? Or even more delicate situations, like caring for children? What is and isn’t safe for your toddler to put in her mouth? What toys can she cling onto in the park? How long, if at all, can a baby cry before being attended to by a parent? Trust in these situations becomes a matter of negotiation and standards alignment, and some gaps between standards are too big, or too hard, to bridge.

Unfairly, we expect AI to have the perfect solution in many circumstances, and treat it as if it should, because we still think of AI in very simplistic, machine-like terms. This is where the second requirement for trusting (relying on the other party to be competent in what we entrust them with) comes into play in a distinct way for AI.

Relying on Competence

Because our conception of AI stems from simpler machines that fulfill their role with complete reliability, the natural progression is to bring that expectation to AI. To see how readily we trust technology’s competence: have you ever used a calculator even for really simple arithmetic, just to make sure? We believe in technology’s competence sometimes more than our own. When we think of AI on the same terms as the tools we know, such as calculators, we impose the same dependability on it, believing AI to have a superhuman rationality. As humans expand the domains AI works in, we exhibit our desire to make the world computable, to have objective answers, and we expect AI to provide those perfect answers.

When we weigh whether a person warrants trusting, many factors sway our judgment from neutral toward one direction or the other. But when we weigh whether to trust an AI, the mere existence of the question “Do I trust this AI?” in our minds means the AI starts from a point of disadvantage.

Once we have any doubt in an AI, it has, in essence, already broken its “promise”. AI shouldn’t need to be trusted, because its competence should be guaranteed. (That, or, as in the case of autonomous weapons, we don’t even trust humans to make the correct decision, so we definitely don’t trust a human-made tool to do it.)

Perhaps what we need to remember is that delicate circumstances require delicately calibrating what the solution is. Politically, we do not have “good” or “perfect” solutions to many situations that involve humans. AI can currently derive its solutions only from that imperfect pool, and we need to keep refining both AI and the pool of potential solutions. Expecting perfect trust on this factor – relying on AI to be competent to do what we wish to trust it to do – depends on us understanding AI’s current limitations in a particular field and working with them. We generally trust people to the extent that we know them to be capable, and to the peculiarities of those capabilities. If we know our partner is great at doing the dishes but has a propensity for getting hangry in the afternoon, we happily assign them kitchen duty, but mindfully throw them a snack at 3pm before they pick up the toddler from daycare.

It is necessary to reevaluate our expectations of AI. But it is also valuable to examine our assumptions about AI’s capabilities. They may be flawed, but they are ingrained in our perception of what a machine is, and they are informative about how and why we trust. While those assumptions may lead to disproportionate distrust in some AI, they also contribute to natural trust in others, which brings us back to the idea of trusting AI with our emotions even when we distrust it with making decisions.

Trusting machines with our feelings – part 2 – Why we trust AI

Author: Katherine Chou
