#eWEEKchat Sept. 12: Why Voice UI Eventually Will Enable Everything

Voice UI is that big red "easy" button, as popularized by Staples, and easy is always best. How are we going to handle the proliferation of chatbots, smart speakers and talking cars going forward?


On Wednesday, Sept. 12, at 11 a.m. PT/2 p.m. ET/6 p.m. GMT, @eWEEKNews will host its 71st monthly #eWEEKChat. The topic will be, "Why Voice UI Eventually Will Enable Everything." It will be moderated by Chris Preimesberger, eWEEK's editor of features and analysis.

Some quick facts:

Topic: #eWEEKchat Sept. 12: "Why Voice UI Eventually Will Enable Everything"

Date/time: Wednesday, Sept. 12, 11 a.m. PT/2 p.m. ET/6 p.m. GMT

Tweetchat hashtag: You can use #eWEEKChat to follow/participate via Twitter itself, but it's easier and more efficient to use the real-time chat room link at CrowdChat.

‘Why Voice UI Eventually Will Enable Everything’

eWEEK loves to discover and discuss IT trends, and voice user interfaces are now about as strategic as trends get.

Last May, when we published a midyear trend article from the Silicon Valley-based Churchill Club, one of the items was this:

“Trend 3: Voice First Will Open Up Internet to the World. We are trying to bring the world online. However, 25 percent of the adult population is illiterate, so the way to bring them online will be voice first.”

If devices are smart enough to hear and understand human speech, no matter the language or dialect, and then follow instructions, the percentage of tasks that get accomplished increases tremendously. Voice UI is that big red "easy" button popularized by Staples, and easy is always best.

A year ago, Microsoft and Amazon announced that they would enable their respective virtual assistant technologies, Cortana and Alexa, to talk to one another, a move industry experts regarded as the first crack in the wall separating today's leading virtual assistant platforms.

Although voice-activated virtual assistants are present in millions of devices and have plenty of overlap in terms of functionality and third-party support, Cortana, Alexa, Google Now and Apple Siri existed as self-contained ecosystems until only recently.

When it was introduced, the Alexa-Cortana collaboration allowed Windows 10 users to access Alexa's skills by issuing voice commands to Cortana, Windows 10's built-in virtual assistant. Alexa, meanwhile, helped Amazon Echo owners keep track of appointments, reminders and other information gathered by the ever-watchful Cortana throughout the day.
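Conceptually, this kind of hand-off works like intent routing: the host assistant listens for an invocation phrase naming the guest assistant and forwards the remainder of the utterance rather than handling it itself. The sketch below is a minimal, hypothetical illustration of that pattern; the function names and the wake phrase are assumptions for demonstration, not the actual Alexa or Cortana APIs.

```python
# Hypothetical sketch of cross-assistant hand-off: a "host" assistant
# detects an invocation phrase for a "guest" assistant and routes the
# rest of the utterance to it instead of handling the request itself.

def guest_assistant(utterance: str) -> str:
    """Stand-in for the guest assistant's skill handler."""
    return f"[guest] handling: {utterance}"

def host_assistant(utterance: str) -> str:
    """Stand-in for the host assistant; hands off on a wake phrase."""
    wake_phrase = "open guest,"
    if utterance.lower().startswith(wake_phrase):
        # Strip the invocation phrase and forward the remaining command.
        return guest_assistant(utterance[len(wake_phrase):].strip())
    return f"[host] handling: {utterance}"

print(host_assistant("Open Guest, what's on my shopping list?"))
print(host_assistant("What's on my calendar today?"))
```

In the real integrations, of course, the routing happens in the cloud services behind each assistant rather than on the device, but the division of labor is the same: one assistant owns the microphone, the other fulfills the forwarded request.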

This is the kind of interoperability IT needs in order to progress, giving all users plenty of options to get things done. Options are always good.

Chatbots are talking to us more and more frequently on the web. Cars manufactured in the last decade are listening to us and following instructions. Smart speakers for homes and offices are seeing a huge spike in sales. Voice-enabled enterprise apps in manufacturing, health care, scientific exploration, industrial IoT and many other sectors are being acquired and installed by the week all over the globe.

We'd like to chat about these things, including how you’re using voice UI now and where you’d like to see it deployed in the future. If you're working on developing voice apps, let's hear from you about what you're doing!

Questions we’ll ask include:

--Are we already taking voice UI for granted in apps we use each day?

--What new applications would you like to see voice UI control?

--How important are voice-to-text, or text-to-voice, apps for you?

--What problems do you see in current voice UI?

Please plan to join us Wednesday, Sept. 12 at 11 a.m. PT/2 p.m. ET for this discussion.

Chris J. Preimesberger

Chris J. Preimesberger is Editor-in-Chief of eWEEK and responsible for all the publication's coverage. In his 13 years and more than 4,000 articles at eWEEK, he has distinguished himself in reporting...