The battle of the digital assistants will begin again in earnest in the fall of 2016, when the latest version of Apple’s Siri hits the street with iOS 10. This time, Siri will also arrive with macOS Sierra and on the Apple TV.
This latest version of Siri will be the first to provide APIs that enable third-party developers to create new skills and functions for the voice-activated digital assistant.
You’ve probably noticed that other digital assistants, such as Amazon’s Alexa, already have access to third-party apps. But gaining these new functions is a bigger deal for Apple, if only because some of those third-party apps for Siri can extend the assistant’s role in artificial intelligence.
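Apple’s new developer framework for this, called SiriKit, lets an app register to handle user requests in a handful of defined domains, such as messaging and ride booking. As a rough sketch of what a messaging handler might look like (the class name here is hypothetical, and exact signatures may differ in the shipping iOS 10 SDK):

```swift
import Intents

// Hypothetical SiriKit extension for the messaging domain. Siri parses the
// user's spoken request into an INSendMessageIntent and hands it to the app.
class SendMessageHandler: NSObject, INSendMessageIntentHandling {

    // Called once Siri has resolved the request into a complete intent.
    func handle(sendMessage intent: INSendMessageIntent,
                completion: @escaping (INSendMessageIntentResponse) -> Void) {
        // The app's own code would deliver intent.content to
        // intent.recipients here, then report the outcome back to Siri.
        completion(INSendMessageIntentResponse(code: .success,
                                               userActivity: nil))
    }
}
```

The notable design choice is that Siri, not the app, owns the conversation: the extension never sees raw audio or the full utterance, only the structured intent, which is consistent with Apple’s privacy posture described below.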
One of the disadvantages Apple has had with Siri is that the assistant worked with only a handful of Apple’s own apps. As a result, compared with other assistants that have reached the market, Siri couldn’t do a lot. Adding to the difficulty of making Siri more useful is Apple’s laudable focus on privacy, which has further limited what Siri can do.
That’s starting to change. Using what Apple is calling “Siri Intelligence,” the virtual assistant will begin to be able to pass portions of requests back to Apple to obtain more complete information or to produce better results.
In the past, Apple resisted enabling Siri to transmit anything that could be construed as personally identifiable information back to the company. But now the Siri platform can strip off any data that Apple wants to protect and then act on the rest.
Unfortunately for Apple, in the five years since Siri debuted, competitors have worked hard to introduce their own voice-activated artificial intelligence (AI) assistants. Google debuted Google Now, which is about to become Google Assistant, and Microsoft has Cortana, which runs on Windows 10 and the five or six Windows Phones still in use.
Each of the digital assistants has its own challenges, and each reacts in surprising ways. For example, ask Cortana for the result of zero divided by zero and the assistant will tell you to ask Siri. Pose the same question to Siri, and it will give you a smart-mouthed response involving Cookie Monster.
Of the three, Google’s Assistant is doing better than the others at being useful, but to some extent this is due to Google’s long-term effort to gather every piece of information in the universe, regardless of whether it’s personal information. This is why Google can talk to you about your upcoming flights or find a phone number in an obscure email.
For its part, Siri needs a bump up in intelligence, which may be a difficult process.
Apple Pushes to Make Siri’s AI More Useful and Reliable
Part of the reason for the difficulty is that Apple wants to keep the AI running on the device rather than on servers at Apple.
During his presentation at the Worldwide Developers Conference opening keynote June 14, Craig Federighi, Apple’s senior vice president of software engineering, said the company believes it should provide “great privacy and great features.” This means that Apple wants what Federighi called “deep learning” to run on Apple devices.
Deep learning is a major function of AI, but it has one characteristic that’s normally not compatible with mobile phones and tablets—it requires a lot of processing power. A high-end Mac won’t have much trouble handling deep learning, but Siri’s primary home is the iPhone, followed by the iPad. Even with the 64-bit processors and the additional memory built into the latest devices, it’s not clear an iPhone will have the required processing power.
But does it matter if the iPhone falls short? Maybe not. While Apple can always crank up processor speeds and memory, what may matter most is whether the growth of Siri’s AI is enough to make speech recognition accurate and to provide complete vocalized responses, rather than just doing a search and displaying a Web page.
Perhaps with more intelligence, and some third-party support, Siri will be able to answer a question such as the one I posed today, when I asked whether FedEx had delivered an iPad I’d returned to Apple. For now, what I got was, “Interesting question, Wayne.”
Or perhaps Siri will just get better at speech recognition, part of a recognition challenge Apple has been wrestling with for decades, going back to handwriting recognition on its old Newton personal digital assistant from the 1990s. Siri isn’t talking about egg freckles, but the assistant can still generate some whoppers when it misunderstands.
But problems understanding questions aren’t limited to Siri. All too often I’ll be greeted by a simple “Boing!” when Alexa can’t understand a question I’ve asked. Cortana isn’t really any better, except that with Cortana I can type in a query.
In some ways the AI assistant race isn’t about which is winning; rather, we’re seeing the very early beginnings of how artificial intelligence can affect us. We already have a way to ask questions simply by speaking. The fact that it’s not working very well is no surprise. But the current state of AI is showing promise for future generations of digital assistants.
It won’t be very long before all we need to do to ask a question and receive a reliable answer is to simply speak up. Somewhere in our office or living room, a device will be listening and will provide a coherent answer. Just don’t ask it to divide zero by zero.