Microsoft is hell-bent on changing the world. There is an old saying that if all you build is hammers, everything looks like a nail. But in this case, everything really is a “nail,” because virtually anything that acts automatically–and especially autonomously–is defined by its software. Archimedes said something to the effect that with a big enough lever he could move the world; well, a few companies potentially have big enough levers, and the largest software company in the world would have to be one of them.
To say that the Microsoft Build conference this week in Seattle was content-rich would be a massive understatement. Much of what CEO Satya Nadella said on stage had global implications. For instance, the concept of being able to retain state on, and run, applications and games across virtually every major operating system in the world would be, in and of itself, a game-changer.
But there were two things that really struck me, because they could fundamentally change our world much as PCs and smartphones did.
Let’s take each in turn.
Bringing Voice to All Smart Devices
There were a lot of interesting things surrounding Microsoft’s latest advancements with voice technology. One was that its speech-to-text product can now adjust regionally and across vertical business segments, accurately converting what is said into written communications regardless of accent or language. This is the first time something like a real-time universal translator has really looked doable–real-time translation not only of the words but of the emotions and context behind them, and relatively accurately.
But, as amazing as that would be, the idea that we could actually have a conversation with any connected device would be a huge game changer. Let’s use a car (I’m a car guy) as an example. The first time we tried voice it was a recording that basically worked like an idiot light. Rather than a light-up alert that told you that your door wasn’t closed (but didn’t tell you which door), the voice would instead announce that “your door is ajar,” or “your oil pressure is low,” or whatever. In other words, the voice was just a less-efficient idiot light.
However, what if the voice instead said that your right rear door wasn’t closed properly and your child might fall out, or that the child was attempting to open the door, with the same result? Or that your left front tire was getting low, that you had about half an hour to get someplace to add air, that you had a can of air in your trunk, and that if you didn’t get this done you’d have to buy an $800 wheel? Oh, and that it would be glad to order the wheel and set up an appointment at a tire shop or your car dealer.
What Microsoft showcased was a bi-directional digital assistant that could discuss things with you. You could ask your car questions that detail a problem, have it identify solutions, and even have it either walk you through a repair or set up an appointment. Or you could have a digital assistant that did the same thing across a wider variety of subjects and could move digitally from your home into your car.
Now, I’ve tried a beta of something like this with Amazon Echo, and even though it was far more rudimentary, I actually had an interesting political discussion with the thing. I might worry that my car would toss me out if it knew which political party I belonged to, but being able to ask the tool I’m struggling with how best to use it would be a huge benefit for any complex system (and, as I age, I’m finding that bar is sadly dropping).
Virtual Presence in Meetings
I travel a lot in this job, mostly to events like this, briefings and consultations where they want me there in person. I often spend far more time on the plane going to and from a meeting than I spend in the meeting itself. Now, some of this is my fault, given that I live in a resort town and no longer in Silicon Valley, but video conferencing just hasn’t been good enough to keep me off planes.
Well, that may change, because Microsoft showcased an update to Teams that used Spatial not only to drop an avatar of you into a remote meeting to take your place, but also to let a remote viewer see through a presenter to what they were putting on a flip chart or whiteboard–without needing a smart whiteboard. This last feature was pretty cool: when the presenter walked in front of the whiteboard, you could see right through him. Granted, this might work better for remote classrooms, because none of my meetings use whiteboards, but it was still cool.
Now, the avatar feature does require a HoloLens headset for each person to get the full impact, but it will also work with a smartphone, tablet, or PC (though the PC might be a tad awkward, given that this is an augmented reality app and you’d want to look through the laptop or monitor screen). As I think about it, you’d likely hurt yourself with a monitor, but you might be able to use a set of Microsoft-spec VR glasses at some future point.
Now, for me, if it meant I could avoid just 10% of my travel, I’d buy a HoloLens in a moment, particularly now that I’m sweating about getting measles (I won’t get my booster until today, May 9). But I often get the flu, a cold, or some other annoying sickness when I fly. That’s all on top of losing sleep, listening to people argue or get too drunk, having kids kick my seat, or hearing kids cry for hours on end (I’m convinced parents bring their small kids just to ensure that I don’t sleep on planes).
Wrapping Up: Birth of the Virtual Aide
Looking across these advancements, I see the potential for something very different: a virtualized bot aide that could allow us to scale with technology far more than we currently do. Top executives have aides who both learn from them and allow the executive to spread their influence far more broadly. These aides attend meetings, take notes, handle tasks the executive doesn’t have time for, and ensure that the executive is well informed. Combining ever-smarter bots with avatars could do much the same thing for the rest of us. They could summarize meetings, instantly ping an executive for critical decisions, and provide an ever-more-capable (as the technology advances) human-like extension of the employee’s ability to manage what they control.
With services such as Uber expanding into delivery of food and other point resources, these aides could also do a lot of the jobs (like handling laundry or buying a gift for a spouse) that current aides, and a lot of subordinates, complain about. Of course, there is also the possibility that with this technology you could really slack off, which would likely not end well.
In the end, though, I think we are seeing the birth of something that could eventually change how, where–and even how much–we work, while massively increasing our productivity.
Rob Enderle is a principal at Enderle Group. He is an award-winning analyst and a longtime contributor to QuinStreet publications and Pund-IT.