by Jay Kiew
Imagine it’s a regular Wednesday morning. You’re walking through the office to get to your desk. As you pass by Paul, your colleague, you notice that his eyes are red and puffy.
“We had to put our dog down this weekend,” Paul explains.
How would you personally respond to Paul?
Awkward as it may be, humans tend to run on automatic responses when it comes to showing sympathy in situations like this. Most people express sympathy with statements like, “I’m sorry for your loss,” and, “That must be really tough.”
We can see how difficult the situation is for the other person, but it’s much harder to put ourselves in their shoes, especially if we haven’t been in a similar situation before.
For this very reason, machines may be better at empathy than humans.
To set the tone, it’s important to distinguish the difference between sympathy and empathy.
Sympathy is defined as an expression of care or understanding for someone else’s suffering.
Empathy, on the other hand, is defined as the capacity to understand or feel what another person is experiencing from their frame of reference. Truth be told, we’re not that great at being empathetic. Then again, machines will never truly understand or feel what it means to be human either.
Despite that, empathy is not based on how empathetic we think we are. Rather, empathy is based on how much the person on the receiving end feels that we can relate to them.
Empathy is in the eye of the beholder, not the giver. Because of this, machines may have the upper hand.
Chatbots could become human empaths on steroids
When Amazon initially introduced Alexa, it didn’t do too well with natural conversations.
For instance, if you told Alexa that you were feeling nervous, Alexa would provide you with a canned line like, “I’m sorry to hear that,” or, “Sometimes talking to a friend can help.”
If you were to repeat your statement, “Alexa, I’m nervous,” the same response would come back: “Sometimes talking to a friend can help.”
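The failure mode is easy to picture: a rule-based bot maps a detected keyword to one fixed reply, so repeating yourself just gets you the identical line. A minimal sketch of that behavior (purely illustrative, not Alexa’s actual code):

```python
# A canned-response bot: each detected keyword maps to one fixed
# reply, so repeating the same message changes nothing.
CANNED_REPLIES = {
    "nervous": "Sometimes talking to a friend can help.",
    "sad": "I'm sorry to hear that.",
}

def reply(message: str) -> str:
    """Return the first canned reply whose keyword appears in the message."""
    for keyword, response in CANNED_REPLIES.items():
        if keyword in message.lower():
            return response
    return "I'm not sure I understand."

print(reply("Alexa, I'm nervous"))  # Sometimes talking to a friend can help.
print(reply("Alexa, I'm nervous"))  # Identical reply again: no memory, no nuance.
```

The bot has no state and no model of the speaker, which is exactly why the conversation stalls.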
Alexa was useless in natural conversation until Fraser Kelton, co-founder of the empathy API Koko, linked the two together. Developed in MIT’s Media Lab in 2014, Koko is an artificial intelligence (AI) algorithm that provides crowd-sourced cognitive therapy via chat. On the instant messaging mobile app Kik, teens can request emotional support from other anonymous users. By applying machine learning to thousands of user conversations on Kik, Koko suggests responses that frame encouragement.
In one example where Alexa was paired with Koko, a person describes that he’s nervous that he’ll fail an upcoming exam. Alexa, with Koko activated, provides the user with the following response:
“Exams are really stressful, but a little anxiety can help us succeed. It sharpens our minds … It’s your body’s way to prepare itself for action. It’s actually a really good thing. I wonder if you could think of your nerves as your secret weapon. Easier said than done, I know. But I think you’ll do much better than you think.”
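One way to loosely imagine Koko’s crowd-sourced approach is retrieval: find the stored situation most similar to the user’s message and surface the reply that helped there. A toy nearest-neighbor sketch over word overlap (an assumption about the general technique, not Koko’s actual algorithm):

```python
# Toy retrieval: score stored (situation, helpful_reply) pairs by
# word overlap with the new message, return the best match's reply.
CORPUS = [
    ("I'm nervous I'll fail my exam",
     "Exams are stressful, but a little anxiety can sharpen your mind."),
    ("I just lost my dog",
     "Losing a pet is heartbreaking. It's okay to grieve."),
]

def suggest(message: str) -> str:
    """Return the reply paired with the most word-similar stored situation."""
    words = set(message.lower().split())
    best = max(CORPUS,
               key=lambda pair: len(words & set(pair[0].lower().split())))
    return best[1]

print(suggest("I'm so nervous about this exam"))
# Exams are stressful, but a little anxiety can sharpen your mind.
```

Real systems use far richer representations than word overlap, but the shape is the same: the “empathy” comes from thousands of humans whose helpful replies were recorded and re-served.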
Who would pass the Turing Test?
The Turing Test, introduced in 1950 by Alan Turing, is what most would consider to be the first test of artificial intelligence. The idea was that you would sit down and have a natural language conversation with a ‘person’, either a human or a machine. If you couldn’t tell that the ‘person’ was a machine, the machine passed the test.
I don’t know about you, but if somebody told me that they were nervous about an upcoming exam, I’d probably just tell them to do their best and study, focusing on what they can control instead of how they’re feeling.
On the flip side, what Koko spit out was pure empathy wizardry.
In terms of a verdict, Koko would have passed the Turing Test with flying colours, and I would have been accused of being a computer. Machines might just do empathy better: although an algorithm can never feel emotions, it meets the human where they are. What’s critical is that the receiver feels like their feelings are felt and understood.
Building humanity with machines
Frankly, the question shouldn’t really be whether humans or machines are more empathetic in the first place. In the humans versus machines debate, the conversation should shift from “humans or machines” to “humans with machines”.
The real question should be how humans can become better communicators with the use of technology.
Since Koko’s launch in 2014, there has been a wave of AI-augmented apps to help us communicate better. Textio helps companies write better job postings. IBM’s Watson Tone Analyzer lets you know whether you’re coming across as confident or tentative, joyous or angry.
As you type an email in Outlook, Canadian-based tech company turalt will diagnose the levels of empathy, assertiveness, formality, and clarity in your message.
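Tools like these generally work by scoring text against tone lexicons. A rough sketch of the idea, with tiny hypothetical word lists (not any vendor’s actual model):

```python
# Lexicon-based tone scoring: count how many words in the text
# appear in each tone's word list (hypothetical lists, for illustration).
TONE_LEXICON = {
    "confident": {"will", "definitely", "certainly", "know"},
    "tentative": {"maybe", "perhaps", "might", "possibly", "guess"},
}

def tone_scores(text: str) -> dict:
    """Return a hit count per tone for the given text."""
    words = text.lower().split()
    return {tone: sum(word in vocab for word in words)
            for tone, vocab in TONE_LEXICON.items()}

print(tone_scores("Maybe we might possibly ship on time"))
# {'confident': 0, 'tentative': 3}
```

Production tone analyzers use trained models rather than fixed word lists, but the output is the same kind of mirror: a readout of how your message is likely to land.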
“The important thing is that we should never become dependent on suggested responses. Think of Smart Reply in Inbox by Gmail: it takes away your ‘voice’ in writing. We shouldn’t get to a point where we no longer communicate with each other because our proxies (AI bots) are communicating for us. The best thing AI does is reflect our humanity by providing indicators of where our state of mind is. Our focus should be on developing our humanity by better understanding how we come across,” said Dr. Chris McKillop, founder of turalt.
It seems like these companies have one thing in common: helping you get your message across the way you intend it, without misinterpretation.
About the writer
Jay Kiew is a management consultant in the human capital space, leading organizations through change in the age of AI. He holds an MBA from the Ivey Business School and a bachelor’s degree in political science from the University of British Columbia.