User interfaces (UIs) are something we take for granted these days. In fact, we often don’t notice them unless they’re poorly executed.
If you’ve ever struggled with a confusing kiosk or ATM, a difficult-to-navigate website, or an app that simply doesn’t work, then you’ve experienced how bad UI can make or break the success of a product or service.
With the rise of touch screen technologies and smartphones, we’re more exposed than ever to evolving UIs. A new trend, which evolved from graphic UIs (as seen in the screens we surround ourselves with), is voice UI.
What Is A Voice User Interface?
Similar to how a graphic UI allows you to visually interact and input commands into your smartphone, a voice UI allows a person to interact with speech applications or programs using their voice.
If you’ve ever heard of or talked to Siri or Cortana, you’ve already interacted with a voice UI. In these cases, the voice UI was the technology that allowed you to “command” your smartphone or personal computer to do specific tasks for you.
Other popular examples you may be familiar with are the fictional Computer from the Star Trek series (not-so-fictional now), or home devices such as Google Home or the Amazon Echo with its Alexa assistant.
The Growing Popularity of Voice User Interfaces
According to a 2017 report by Alpine, 1.7 million voice-first devices were shipped to consumers in 2015. By 2016, this number had exploded to 6.5 million devices.
Advances in technology have made voice UIs more common and popular: being able to issue commands without using your hands or eyes has many valuable and practical applications, including accessibility.
According to Amazon, there are four trends that are contributing to the rise of voice UIs:
- Web services and the Internet of Things have provided ample opportunities for voice UIs to flourish;
- The science and technology behind voice UIs, such as automatic speech recognition and text-to-speech, are accessible;
- Current hardware (such as that readily available in smartphones) is able to support voice UIs; and
- Voice UIs are able to easily adapt and learn because of artificial intelligence and machine learning.
In addition, RedStagFulfillment, citing several studies, predicts that artificial intelligence (specifically, voice UIs and chatbots) will help bring to ecommerce the personalized and engaging customer experience that only brick-and-mortar shops have achieved so far.
What This Means for UX Web Design
If voice UIs are on the rise, does this mean that other UIs will be replaced? Not necessarily.
Although trends show that voice is growing in popularity, technology has always sought to improve upon itself.
With this in mind, it’s apt to assume that while voice will become more commonplace, it will also grow and integrate with current technologies rather than replace them. This also means that current technologies will need to adapt to accommodate voice UIs.
The Impact of Voice in Web Design
One of the fields adopting voice UI is web design. As a crucial element in any brand’s marketing plan, your web design will lag behind competitors’ if you don’t start including voice in your design arsenal.
Here are some of the ways voice UI will affect your site’s interface:
The Need To Be Deliberate With Words
For voice UIs to work successfully alongside web design, designers have to be purposeful in the words that they choose.
While graphic UIs can rely on visual cues, colors, and layouts to convey a message or guide users, voice UIs can only rely on words. Hence, designers have to not only design but also write for their target audience.
Vocabulary, tone of voice, and understandability are some of the important factors that designers must keep in mind. Remember that users on voice UIs will not be looking at a screen to read text, but will rather be hearing text being read out loud to them.
For example, while graphic user experience designers follow a style guide to fit a brand, voice user experience designers can apply the same principle by keeping the style of writing consistent.
Finding Out What Your Users Want To Do
As mentioned previously, while graphic UIs have visual aids to guide users, voice UIs do not. Instead, they rely on voice commands.
Generally, for this to work, these voice commands have to be standard and easy to remember. However, before you can even set commands, you have to understand what your users want to be able to do.
For example, the general public using a voice UI will usually use it for everyday commands, such as checking the weather or searching for a place. Hence, they will need an interface that’s easy to use, conversational, and able to provide feedback and guide users.
On the other hand, voice UIs for “power users”—those who use the technology for specific purposes such as workflow—would need an interface that emphasizes productivity and efficiency.
It is worth noting that because word choice and user intent matter, you must also be able to adapt when these change.
In addition, because voice UIs and experiences are still growing and changing, you have to be ready to adapt and innovate to maximize their potential. The key is to always keep your users’ intent at the core of the design process. As voice UIs and experiences advance, it’s important that you stay ahead of the curve, not behind it.
Engagement and Personalization
Once you have user intent down, you can design your voice UI as a brand or personality.
In order to avoid having users feel like they’re speaking with a robot, interactions have to be unique. A user seeking academic information would likely trust a reply given in a mature, professional voice, while another listening to directions might appreciate an easy-going, conversational voice.
You should also consider allowing users to choose which type of voice they would like to use with their interface.
Adapting To The Growth Of Voice User Interfaces
Staying ahead of the curve means not only preparing future design work to accommodate voice UIs and experiences, but also adjusting what you already have.
Here are some ways you can adapt to the growth of voice UIs:
Moving Toward Multi-Modal Designs
Voice UIs are most likely set to integrate with current technologies rather than replace them. While being able to input commands without having to free up your hands or eyes is convenient, it will not replace graphic UIs.
Thus, one way to adapt to the growth of voice UIs is to become multi-modal for different types of information.
Not only does this provide users with options, it also allows information to be input and output in whatever method is most convenient for the user. An example would be navigation apps that, aside from presenting a visual map, also give verbal directions.
Another way to adapt to voice UIs is to invest in web-speech APIs.
APIs, or application programming interfaces, define how software components interact with one another. A web-speech API allows websites not only to “talk” to you but also to listen to you.
Some uses are: dictation, wherein a user speaks and the application converts this to text or vice versa; voice control, wherein a user can use their voice to find their way around a website; and translation.
However, one limitation is the need for a constant internet connection for speech processing; offline capabilities are still lacking.
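As a rough sketch of what a web-speech API looks like in practice, the snippet below uses the browser’s Web Speech API to both speak and listen. It assumes a browser context; `webkitSpeechRecognition` is the vendor-prefixed constructor used by Chrome, and the fallback behavior here is illustrative, not prescriptive.

```javascript
// Pure helper: find the standard or vendor-prefixed recognition
// constructor on a window-like object, or null if unsupported.
function getRecognitionCtor(win) {
  return win.SpeechRecognition || win.webkitSpeechRecognition || null;
}

// Speaking: the site "talks" to the user via text-to-speech.
function speak(text) {
  const utterance = new SpeechSynthesisUtterance(text);
  utterance.rate = 1.0; // normal speaking speed
  window.speechSynthesis.speak(utterance);
}

// Listening: dictation — convert the user's speech to text.
function listen(onResult) {
  const Ctor = getRecognitionCtor(window);
  if (!Ctor) return; // no support — fall back to typed input
  const recognition = new Ctor();
  recognition.lang = "en-US";
  recognition.onresult = (event) => {
    // Pass the best transcript of the first result to the caller.
    onResult(event.results[0][0].transcript);
  };
  recognition.start();
}
```

Note the feature-detection step: because browser support varies and recognition typically requires a network connection, a graceful fallback to typed input keeps the site usable for everyone.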
Designing and Implementing Voice User Experiences
Finally, it’s time to design and implement your voice user experience.
- First, you need to once again put your users’ intent at the core of your design process, and answer the question of what your app will do for your users. Consider what your users need and what your competitors already provide.
- Second, now that you know what you want your voice UI to do, you then have to define how it will do so—first, in capability (the functions and features it will have), and next, in personality (tone of voice and word choice).
- The next step would be to create the flow of conversation that your users will follow. Conversational dialogues that respond to user queries should direct and guide them toward your app’s capabilities.
- Because conversational dialogues can be created in a number of ways, the fourth step is to create alternate phrases that help the app adapt to the different ways users may phrase the same request. The more extensive your list of alternate phrases, the easier it will be for your users to interact with your interface.
- The fifth and last step is to refine your process by testing it. Does the interface help users meet their needs? Are the word choices appropriate? Do the responses sound conversational and natural? Change and improve any aspects that fall short.
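The alternate-phrase step above can be sketched in a few lines. This is a hypothetical, minimal example — the intent names and phrase lists are made up for illustration — showing how several phrasings of the same request can be normalized and mapped to one intent:

```javascript
// Illustrative intents, each with alternate phrasings of the same request.
const intents = {
  checkWeather: [
    "what's the weather",
    "how's the weather today",
    "will it rain",
  ],
  findPlace: [
    "find a coffee shop",
    "where is the nearest coffee shop",
  ],
};

// Normalize an utterance so punctuation and casing differences still match.
function normalize(utterance) {
  return utterance.toLowerCase().replace(/[^a-z' ]/g, "").trim();
}

// Return the matching intent name, or null if nothing matches.
function matchIntent(utterance) {
  const spoken = normalize(utterance);
  for (const [intent, phrases] of Object.entries(intents)) {
    if (phrases.some((p) => normalize(p) === spoken)) return intent;
  }
  return null; // unrecognized — prompt the user to rephrase
}

// matchIntent("What's the weather?") → "checkWeather"
// matchIntent("order a pizza")       → null
```

Real voice platforms do far more (fuzzy matching, slot filling), but the principle is the same: the longer the list of alternate phrases per intent, the more natural the interaction feels.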
Now that you have an idea of what voice UIs and experiences are and how they relate to current web design, you should have a good grasp of how to accommodate voice in your website. Keep learning to stay ahead of the game, and position yourself at the forefront of this emerging trend.