Has the Age of George Jetson IoT Arrived? Alexa Was the Star of CES

Alexa Voice Service (AVS) is the software that allows owners to control compatible devices with their voice. Various reports estimated there were 700–1,100 Alexa-controllable products at CES, and the Amazon/Alexa logo was everywhere.

Is the Age of George Jetson here? In a smart home, everything from the HVAC to the TV to the window shades can be controlled. However, it’s not easy to equip a whole house with Artificial Intelligence (AI)-controlled devices. Why? Many IoT-enabled devices don’t talk to devices made by other manufacturers. Oops! The IoT world awaits THE killer app, like Apple HomeKit or Google Home; we are still waiting for either to deliver an all-encompassing, unified smart home.

The Amazon Echo is a hands-free speaker controlled with your voice. It connects to the Alexa Voice Service to provide information and news, play music, report sports scores, deliver weather reports… The uses for AVS and Alexa are limited only by your imagination.

Interoperability is the ability of different information technology systems and software applications to communicate, exchange data, and use the information that has been exchanged. Building genuinely interoperable technology is not an evolutionarily stable strategy for most IoT manufacturers, but when something is connected to Alexa, the device instantly becomes pseudo-interoperable.

What CES showed us is that voice control seems to be the unifying app for IoT, and Alexa is the biggest name in voice control. Smart devices are generally controlled with apps, and where such an app exists, it allows AVS to control the device directly. So you could say, “Alexa, tell Crestron I’d like to turn the lights on in the bedroom” (for your Crestron system) or “Alexa, turn the heat on the downstairs thermostat to 70 degrees” (for your Iris Smart Home System). It’s easy to see the value of voice control in so many ordinary situations. What’s interesting about AVS is that even though Crestron and Iris have nothing to do with one another, you can control them both with your voice.

Alexa has finely tuned automatic speech recognition (ASR) and natural language understanding (NLU) engines that recognize and respond to voice requests instantly. Alexa keeps getting smarter through machine learning, regular API updates, feature launches, and custom skills built with the Alexa Skills Kit (ASK). The AVS API is a programming-language-agnostic service that makes it easy to integrate Alexa into your devices, services, and applications. And it’s free.
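To give a feel for what a custom skill built with ASK looks like, here is a minimal sketch of a Lambda-style handler. The request/response JSON shape follows the ASK custom-skill interface, but the intent name `TurnOnLightsIntent` and the response phrasing are hypothetical examples, not part of any shipping skill.

```python
# Minimal sketch of an Alexa Skills Kit (ASK) custom-skill handler.
# The intent name "TurnOnLightsIntent" is a hypothetical example;
# the envelope shape follows the ASK custom-skill JSON interface.

def build_response(speech_text, end_session=True):
    """Wrap plain-text speech in the ASK response envelope."""
    return {
        "version": "1.0",
        "response": {
            "outputSpeech": {"type": "PlainText", "text": speech_text},
            "shouldEndSession": end_session,
        },
    }

def lambda_handler(event, context=None):
    """Dispatch on the incoming request type and intent name."""
    request = event["request"]
    if request["type"] == "LaunchRequest":
        # Skill opened with no specific request: keep the session open.
        return build_response("Welcome. What would you like to do?", False)
    if request["type"] == "IntentRequest":
        intent = request["intent"]["name"]
        if intent == "TurnOnLightsIntent":
            return build_response("Okay, turning on the lights.")
        return build_response("Sorry, I didn't understand that.")
    # SessionEndedRequest and anything else: close out politely.
    return build_response("Goodbye.")
```

Alexa does the ASR and NLU in the cloud; by the time this handler runs, the spoken request has already been resolved to a typed intent, which is why the code never touches audio.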

With AVS, Amazon’s intelligent voice recognition and natural language understanding service, you can create meaningful user experiences for an endless variety of use cases. AVS includes a full range of features, including smart home control, streaming music, news, timers… and can be added to any connected device that has a microphone and speaker.

But while Alexa has a head start, Google Home, an Echo competitor, is likely to catch up quickly. Google Home, though, works with a completely different set of protocols and has different wake words, the command words that make a device pay attention and carry out the request. It seems we may need to learn to speak to different systems in different ways. Perhaps we’ll need lessons in Alexa speak and Google speak, as well as Siri and Cortana speak!

So is the Age of George Jetson here yet? Sort of. What will be interesting is to see whether a start-up pulls all of this together so that we regular humans don’t need to become AI experts to connect and use the technology.

Dr. Natalie Petouhoff, VP and Principal Analyst, Constellation Research

Covering customer-facing applications

 


Move Over Siri, Lola and Nina Are Making Waves in Customer Service: Guest Post by Ashley Furness

Siri wasn’t made for customer service, but her release inadvertently revealed a huge opportunity for companies to develop the future support channel of choice.
This opportunity is the ability to accomplish two things at once: provide human-like interactions with customers that don’t involve additional payroll, and feed the consumer’s need for an instant response. Customers hate sitting on hold, wading through IVRs and agents with a bad attitude. Siri-like technology solves all of these issues.
Two companies have already started to capitalize on this opportunity, including the original Siri innovator SRI International. Here’s a sneak preview of how these two companies are paving the way to the next generation of support technology.
Meet Nina
Customers typically face two common annoyances when they access self-service offerings on a smartphone or tablet. One, they have to type login information and search terms on a tiny keyboard. And two, they have to dig through FAQ or community forum pages to find the answer they are looking for.
Speech is the perfect vehicle for addressing both of these issues. Even though traditional customer service applications might only require tapping through a few pages, that’s enough to stop many consumers conditioned for instant gratification.
“Mobile is this really interesting space where customers now carry around a microphone and a screen in their pocket all day,” says Andy Mauro, senior manager of mobile innovation for Nuance Communications.
In August, his company released new mobile customer service technology that capitalizes on this idea of voice-enabled self-service. It created a Software Development Kit (SDK) called “Nina” that enables companies to add speech recognition and NLU to an existing mobile application. The result is an app that converses similarly to Lola. “[Apps built with Nina] enable faster, more convenient navigation using your own phrasing,” Mauro explains.
Lola Gets Personal
Spanish banking giant BBVA had one goal when executives approached SRI International, the research and development organization that developed Siri.
“They wanted to build the Internet bank of the future,” recalls Norman Winarsky, Ph.D., vice president of SRI Ventures, the venture, license development, and commercialization arm of SRI International.
SRI decided to create a mobile application, named Lola, that emulated the kind of conversation a real-world service agent delivers. This conversation was meant to be nothing like the Interactive Voice Response (IVR) systems that require prompted keywords to deliver answers. Instead, Lola uses sophisticated NLU algorithms and decades’ worth of speech recognition data to determine the context and intent of a question, no matter how it’s asked.
For example, a banking customer could ask the application, “What was my balance yesterday?” Lola would recognize that “balance” refers to the dollar amount in the bank account and “yesterday” means to exclude transactions from today.
Lola also remembers the context of the conversation. Continuing the previous example, the customer could then ask, “What about the day before?” Lola understands that the customer is still referring to their account balance, and that “the day before” means to exclude transactions from today and yesterday.
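The context carry-over described above can be sketched in miniature: when a follow-up utterance omits the topic, the agent reuses the intent from the previous turn and only updates the part that changed. Everything below, the keyword matching, the `DialogueState` class, and the date arithmetic, is a deliberately simplified assumption for illustration, not SRI’s actual NLU.

```python
import datetime

# Toy sketch of conversational context carry-over (not SRI's implementation):
# if a follow-up turn omits the topic, reuse the intent from the last turn
# and shift only the slot that the new utterance refers to.

class DialogueState:
    def __init__(self):
        self.last_intent = None   # e.g. "account_balance", carried across turns
        self.last_date = None     # the date the previous turn resolved to

    def resolve(self, utterance, today=None):
        """Resolve an utterance against the running dialogue context."""
        today = today or datetime.date.today()
        text = utterance.lower()
        if "balance" in text:
            self.last_intent = "account_balance"
        if "yesterday" in text:
            self.last_date = today - datetime.timedelta(days=1)
        elif "day before" in text and self.last_date is not None:
            # Follow-up turn: step the previously resolved date back one day.
            self.last_date = self.last_date - datetime.timedelta(days=1)
        return {"intent": self.last_intent, "as_of": self.last_date}
```

With this sketch, “What was my balance yesterday?” sets both the intent and the date, and the follow-up “What about the day before?” never mentions “balance” at all, yet still resolves correctly because the intent is carried over from the dialogue state.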
The Future of Customer Service
These technologies have clearly tapped into an unmet need in the customer service market: better, more enjoyable self-service. So, do these apps have the potential to become the service channel of choice as customers’ need for an instant response grows? Tell us what you think by commenting here.
Ashley Furness is a CRM Analyst for research firm Software Advice. She has spent the last six years reporting and writing business news and strategy features. Her work has appeared in myriad publications including Inc., Upstart Business Journal, the Austin Business Journal and the North Bay Business Journal.

____________________________
Ashley Furness
CRM Analyst
Software Advice
