I’ve been playing around a lot with the Third Generation Amazon Echo Show 10, and it hit me earlier today that this form factor might be an ideal starting platform for the next-generation Desktop PC.
What makes this generation of Echo different is its 10-inch screen that attempts to follow you around the room, making it ideal for use while you move around a project. It reminds me a lot of the Second-Generation Apple iMac, which I maintain was the best desktop PC design of all time.
Let’s talk about whether the Third-Generation Amazon Echo Show could become the perfect future desktop PC.
Second Generation iMac
What made the Second-Generation iMac uniquely beneficial is that it placed all the system weight in the base and had a swivel mount for the attached flat panel display.
This unique design made it incredibly stable, far more stable than the iMacs that came after it (and most other all-in-one configurations), and the swiveling display made it far easier to put the screen where you needed it, even if that was on the other side of the desk. The result was far safer and far more helpful if you tended to move around the desk or workspace rather than sit in the same place all day, every day.
Third Generation Echo Show 10
As noted, the Third-Generation Echo Show 10 has structural advantages similar to the Second-Generation iMac’s: its weight is in the base, with the screen supported above it.
However, unlike the old iMac, the Echo Show 10’s screen will automatically follow you around the room, freeing you up to be mobile while listening to music, watching videos, or searching the web. You interact with it hands-free, and while its AI isn’t yet that smart, it is generally adequate for things like entertainment or glancing at directions while building or cooking something. Its functionality is limited by its relatively small (compared to PCs) screen and evident entertainment focus. Still, nothing says it has to stay focused on entertainment, and two weeks ago, I wrote about how Amazon is already exploring the laptop PC space with a Fire Tablet.
Speech recognition has historically had several significant problems: training a system for your voice took a lot of time, and the results tended to lack punctuation, so you spent hours first in training and then in editing after dictation.
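Classic dictation packages worked around the punctuation gap by making the speaker say it aloud ("comma," "period," "new paragraph"). Here is a minimal sketch of that post-processing step in Python; the command table and `punctuate` function are illustrative, not any shipping product's code:

```python
import re

# Spoken-command-to-symbol map in the style of classic dictation
# software, where the speaker literally said "comma" or "period".
# Purely illustrative; real products supported far more commands.
SPOKEN_PUNCT = {
    "question mark": "?",
    "new paragraph": "\n\n",
    "period": ".",
    "comma": ",",
}

def punctuate(raw: str) -> str:
    """Turn spoken punctuation commands in a raw transcript into
    symbols, then capitalize the start of each sentence."""
    text = raw.lower()
    # Longest commands first so "question mark" is matched whole.
    for cmd in sorted(SPOKEN_PUNCT, key=len, reverse=True):
        text = text.replace(" " + cmd, SPOKEN_PUNCT[cmd])
    # Capitalize the first letter of the text and of each sentence.
    return re.sub(r"(^|[.?!]\s+)([a-z])",
                  lambda m: m.group(1) + m.group(2).upper(), text)

print(punctuate("hello world comma this is dictation period"))
# → Hello world, this is dictation.
```

The point of the sketch is how much manual work this pushed onto the speaker, which is exactly the burden modern AI removes by inferring punctuation instead.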
But as Artificial Intelligence advances, its ability to adapt quickly to an individual speaker, automate editing, and add punctuation will reach the point where, within a few years, many of us may prefer speech input to keyboard input. In short, we will begin to dictate to our PCs the things we’ve used a keyboard and mouse for in the past.
Now I grew up when secretaries and dictation weren’t unusual. For most, it was beneficial to be able to pace around the office while doing dictation. But you’d still want to look at the screen from time to time to see if the computer was accurately capturing what you were saying and successfully executing the commands you were giving it.
So having a screen that followed you around the room would be helpful, though adding automatic vertical tilt and locomotion would also help keep the computer close to you while you are moving. Initially, just allowing the screen to follow you would provide most of the flexibility you’d likely need to pace while doing dictation.
Connecting it back to AWS for its intelligence and capabilities would further provide for system longevity and open additional possibilities for subsidies in what is already, at just under $250, an aggressively priced digital assistant offering.
Eventually, we’ll move to head-mounted displays and wearable technology for this function. Still, the computational power to blend your environment with what you are working on is very resource-intensive. We don’t yet have even a single example of the headset that would be needed, let alone the Cloud service suite of applications allowing for complete hands-free work. But this implementation could address most of the requirements and become an ideal platform for creating the hands-free software we’ll need when head-mounted displays become viable.
The Third Generation Echo Show 10 is arguably the first proper personal robotic solution that has hit the market. Yes, it is limited to swiveling its screen, but there is no doubt it will be followed by products that are more mobile, more capable, and tied to an AWS back end that can provide a level of Artificial Intelligence we’ve never seen before.
Add the ability to translate speech into text with punctuation, couple it with the natural-language capabilities NVIDIA was talking about at its GPU Technology Conference earlier this year, and you have the opportunity for something revolutionary: a personal desktop PC with a voice interface as the default and a screen that lets you walk and work simultaneously.
At the very least, I expect this would get us off our collective butts and allow us to work healthier over time. Amazon seems to be dancing all around the next-generation PC without crossing the line, much like Apple danced around Smartphones with the iPod Touch and then caught the Smartphone market sleeping when it announced the iPhone.
We have another iPhone-like revolution coming, and it looks like Amazon is setting up to take a page out of Apple’s book and get there first with a cloud-connected, hands-free solution. Oh, and if you think this is impossible, remember that we once thought it was impossible for Apple to take the Smartphone market away from Nokia, Microsoft, Research in Motion (BlackBerry), and Palm. We don’t think that anymore.
Unlike Apple, Amazon subsidizes its hardware, suggesting it will enter with the as-a-service model that traditional PC OEMs are just now wrapping their arms around.