Humanization Is Key to Making AI Projects Successful

From Alexa to chatbots, artificial intelligence is becoming a more pervasive part of both our personal and business lives. But experts in the field, including those who have implemented AI systems, say we still have a lot to learn.


HALF MOON BAY, Calif.—Artificial intelligence is routinely touted at tech conferences and elsewhere as the “Next Big Thing” that will transform the customer experience and companies’ ability to better sell and market their wares. But at the Connected Enterprise conference here (Oct. 22-25), sponsored by Constellation Research, skeptical and cautionary notes were also sounded—even by vendors.

“There are a lot of misconceptions about what AI can do in the enterprise. I would focus on really picking a specific problem,” said Inhi Cho Suh, general manager of Watson Customer Engagement at IBM.

For customers of IBM’s Watson AI supercomputer services, Suh said it’s important to focus on precise algorithms for small sets of data. “The language of business is incredibly unique,” said Suh. “Ask the marketing team or the supply team for the definition of ‘customer’ and ‘order,’ and you might get different answers.”

Esteban Kolsky, principal and founder of ThinkJar, a research and advisory firm focused on customer strategies, agreed. “You don’t get to good AI without good data; no one has.”

Another key point is that AI systems evolve. “The big thing with AI is you have to exercise it often. If you use it infrequently, it’s hard to coach it to do what you want. It has to learn,” said Marge Breya, chief marketing officer at MicroStrategy.

Jennifer Hewit, who leads the Cognitive and Digital Services department at Credit Suisse, used AI to deploy a new kind of virtual IT service desk, called Amelia, at the company. The project rolled out slowly, which she said was a deliberate strategy to see how it worked.

When Amelia went live in December 2017, it proved to be about 23 percent effective at answering employee questions. As with other chatbots, an inquiry is escalated to a live agent when the virtual help falls short. “One thing we learned was not to let the system fake knowing. That was huge,” said Hewit.

Amelia was initially designed as an avatar but is now voice only. “We took that [avatar] down because she looked too much like a robot,” said Hewit. The focus is on the common, simple problems tech support deals with, such as stuck email or password resets. In the past year, Credit Suisse staff have helped train the system, which is now 85 percent effective at answering questions and serves 76,000 users in 40 countries.

Training Human Beings

While it’s well known that AI systems need to be taught and to learn from their mistakes, these systems are also training the humans who use them, warned Liza Lichtinger, owner of Future Design Station, a company that does research in human-computer interaction.

“The language we deliver to devices is rewiring and remapping people’s brains, and that projects into their social interactions,” said Lichtinger.

Lichtinger recently did consulting work for a company bringing out a new virtual agent for personalized health care. In one crisis scenario, the app responded: “Is the victim alive?”

“I jumped when I heard that. Suddenly it was a ‘victim’ not a patient. That changes our paradigm of how we see humans. It just shows that companies aren’t always sure about language and the messaging they’re sending people,” she said.

As these AI systems get more sophisticated, they will pick up on visual cues thanks to the inclusion of biometric data. “This new area into social signaling is going gangbusters at Stanford University where it’s about capturing how engaged you are looking at specific content,” said Lisa Hammitt, global vice president of artificial intelligence at Visa. “Ethics has to come to the forefront as we look at how we are personalizing the experience and trying to predict intent.”

Hammitt said Visa has developed a data bill of rights that is known internally as “rules for being less creepy.”

“You have to expose what the algorithm is doing. If it says you are a karate expert but you hate karate, you have to let people see that and be able to update it,” said Hammitt.

David Needle

Based in Silicon Valley, veteran technology reporter David Needle covers mobile, big data, and social media, among other topics. He was formerly News Editor at InfoWorld, Editor of Computer Currents...