a16z partner Benedict Evans had an astute observation last week.
As voice-based interfaces expand what they can do (e.g., by adding more “skills”), how do you inform, train, and, perhaps most importantly, remind end users of what they can do without a GUI?
Well, here are three ideas:
- Leverage recommendation engines (think Google Now, Netflix, etc.) to proactively talk to users. For example, what if Alexa had motion sensing and, the first time it saw me walk by each day, told me the day’s weather forecast?
- Developers must design and build for a wider array of edge cases. For example, when listening to Pandora, the phrases “Thumbs up,” “I like this song,” and “Yes! More of this!” should all register as a positive rating for the song.
- And probably easiest is to remember you do still have a GUI. For example, Alexa’s companion mobile app could use notifications and suggestions to help users maximize their Echoes. Yet in its current form the app feels like an afterthought.
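The edge-case handling in the second idea above amounts to mapping many phrasings onto one intent. A minimal sketch of that table-lookup approach (purely hypothetical — the phrase list and the `rate_up` intent name are illustrative, not Pandora’s or Alexa’s actual API):

```python
from typing import Optional

# Hypothetical set of utterance variants that should all mean "like this song".
LIKE_PHRASES = {
    "thumbs up",
    "i like this song",
    "yes! more of this!",
}

def normalize(utterance: str) -> str:
    """Lowercase and strip whitespace so variants compare consistently."""
    return utterance.strip().lower()

def resolve_intent(utterance: str) -> Optional[str]:
    """Return the 'rate_up' intent for any known 'like' phrasing, else None."""
    if normalize(utterance) in LIKE_PHRASES:
        return "rate_up"
    return None
```

In practice a real skill would use fuzzier matching (synonyms, paraphrase models) rather than exact strings, but the design point stands: the burden of covering phrasings belongs to the developer, not the user.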