The Possibilities and Limitations of Automated Home Systems

Automated home systems and intelligent assistants are still in their early days within the IoT ecosystem, but despite their youth, Apple’s Siri and Amazon’s Alexa already help customers control door locks, turn on light switches, and adjust thermostat temperatures. For the people who stand to benefit most, however, the promising accessibility offered through a smart home’s voice- or smartphone-activated capabilities also comes with certain pitfalls.

The promise

Todd Stabelfeldt, a quadriplegic since childhood, uses Apple’s smart home platform HomeKit, Siri, and Switch Control to help him open his garage door, draw the window shades (using his mouth), and make dinner party preparations at home with his wife. This technology has provided “a lot of opportunities to demonstrate that I’m a quality man, a man of integrity,” he recently told NBC News. “Everybody has worth and should have the opportunity to demonstrate it.”

Dante Murray of the National Alliance on Mental Illness in Louisville envisions phones, wearables, and smart home tech working holistically to predict and prevent psychotic episodes. “Behavioral nudges,” like scheduling lights or TVs to turn off at certain times, could support more holistic treatment, says Dr. John Torous of Beth Israel Deaconess’s Digital Psychiatry Program in Boston. Apple Watch, Fitbit, and other wearables have already introduced health- and activity-tracking features that point in this direction. With fridge technology for ordering groceries and the availability of connected lights like Philips Hue or Lifx, smart home technology could become instrumental to recovery, transforming treatment into a daily, participatory process rather than therapy sessions separated by weeks or months.
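To make the “behavioral nudge” idea concrete, the sketch below shows one way a lights-off schedule might be wired up against the Philips Hue bridge’s local REST API. The bridge address, API key, light ID, and cut-off time are hypothetical placeholders, and the script assumes the third-party requests package; it is an illustration of the pattern, not a production scheduler.

```python
import time
from datetime import datetime

import requests  # third-party HTTP client

# Hypothetical values: the Hue bridge's local IP address, the API key
# ("username") created by pressing the bridge's link button, and a light ID.
BRIDGE_IP = "192.168.1.10"
API_KEY = "your-hue-api-key"
LIGHT_ID = 1

LIGHTS_OFF_AT = "22:30"  # the "behavioral nudge": wind down at 10:30 pm


def turn_light_off() -> None:
    """Switch one light off via the Hue bridge's local REST API."""
    url = f"http://{BRIDGE_IP}/api/{API_KEY}/lights/{LIGHT_ID}/state"
    response = requests.put(url, json={"on": False}, timeout=5)
    response.raise_for_status()


def main() -> None:
    # Poll once every 30 seconds and fire the nudge when the target time arrives.
    while True:
        if datetime.now().strftime("%H:%M") == LIGHTS_OFF_AT:
            turn_light_off()
            time.sleep(60)  # avoid firing twice within the same minute
        time.sleep(30)


if __name__ == "__main__":
    main()
```

The same pattern extends to other nudges, such as dimming lights gradually before bedtime or pairing the schedule with a TV’s smart plug.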

The pitfalls

Since voice is the primary method of interacting with the likes of Siri and Alexa, the limited ability of intelligent agents to fully understand accents and speech patterns is a real problem for automated home systems. Teaching a machine to understand fluent speech is hard enough; teaching a machine to understand, say, a stutterer is exponentially more difficult, and a potential deal breaker for some users. Artificial intelligence’s (AI) trouble deciphering accents, along with the frustration of a disrupted interface when speech is delayed, diminishes the incentive to use these assistants, says Backchannel’s Sonia Paul: “An AI app can only recognize what it’s been trained to hear.” Designers of such devices need to account for speech impediments and for idiosyncrasies of pronunciation and emphasis.

Amazon is betting on consumer feedback to improve its products, collecting data on the languages and accents developers would like to accommodate through programs like Voice Training on Alexa. In April, Amazon made Alexa’s AI and voice-recognition software available to its millions of cloud-computing customers. The service, called Amazon Lex, will allow developers to build chatbot applications using Alexa’s voice-recognition technology and to leverage the AI’s deep-learning abilities so their apps can understand more text and speech queries.
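As a rough illustration of what building on Lex looks like, the snippet below sends a single text utterance to a Lex bot through the AWS SDK for Python (boto3). The bot name, alias, and user ID are hypothetical placeholders for a bot defined beforehand in the Lex console, and the sketch assumes boto3 is installed and AWS credentials are configured.

```python
import boto3  # AWS SDK for Python; assumes credentials are already configured

# Hypothetical bot details: a bot named "HomeHelper" with a published alias,
# created beforehand in the Amazon Lex console.
BOT_NAME = "HomeHelper"
BOT_ALIAS = "prod"
USER_ID = "demo-user-001"  # any ID that distinguishes one conversation from another

# The Lex runtime client handles conversations with a published bot.
lex = boto3.client("lex-runtime")


def ask(utterance: str) -> str:
    """Send one text utterance to the bot and return its reply."""
    response = lex.post_text(
        botName=BOT_NAME,
        botAlias=BOT_ALIAS,
        userId=USER_ID,
        inputText=utterance,
    )
    # Lex returns the recognized intent, any slot values, and a message to show.
    print("intent:", response.get("intentName"))
    print("slots:", response.get("slots"))
    return response.get("message", "")


if __name__ == "__main__":
    print(ask("Turn off the living room lights at ten thirty"))
```

In practice, a developer would map the recognized intent and slots to home-automation actions, which is where accent and speech-pattern coverage in the underlying model becomes decisive.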

As automated home systems become more intelligent, people with impairments will be afforded greater independence, and their feedback will remain an integral part of the conversation around efficient IoT device management in this space.
