VUI design | UX research | Conversation design
Design methods and their application to voice-centered multimodal experience design in the automobile.
This is a master's project. In it, I explored the design of a voice-centered, seamless in-vehicle multimodal experience. I used a taxonomy to summarize existing human-machine interfaces (HMI) and to explore new design approaches. In the subsequent design phase, I combined the latest in-vehicle technologies to propose new use cases for voice interaction and built a comprehensive prototype for voice interaction in the car.
Voice interaction is a new field for me, but I was enthusiastic about the challenge. By reading the literature, I gained a deeper understanding of how to use voice interaction to its full potential compared with other interaction methods.
I collected 55 case studies of in-vehicle voice interaction from the market and from research to analyze current design trends and to deepen my understanding of the seamless multimodal experience (SME) in terms of HMI, input and output methods, cognitive load, and related tasks.
Through these cases, I gradually realized that different voice interaction input and output methods correspond to different levels of attention; input and output methods together with usage scenarios are the decisive factors for attention level. I therefore planned to focus on designing peripheral interaction and implicit interaction, which I saw as the most innovative and promising area.
Based on my research into VUI design methods, I settled on a combination of design methods and toolkits specifically for voice user interface (VUI) design in the automobile.
I wanted to understand which problems drivers currently experience with existing in-car voice assistants, such as Siri and Alexa Auto, and what they think about them. Here is a summary of the survey results:
• Generally low satisfaction with voice assistants (2.2 out of 5).
• Devices often fail to understand users' input.
• Recovering from errors is difficult; users have to restart the conversation.
• Users sometimes need to check the screen, which makes them worry about safety.
• Unpleasant experiences stem from limited functionality, poor responsiveness, and the feeling that the touch screen is faster.
These points cover drivers' current pain points and show that today's voice assistants do not fully understand natural language and do not fully exploit the advantages of voice interaction.
Given the nature of this project, I wanted industry support for my ideas. I therefore approached a voice interaction designer at an automotive manufacturer to discuss my initial design direction and to get a manufacturer's perspective at a very early stage. My most important takeaways are the expectations she mentioned:
• Understand vague instructions
• One step ahead
• Like human beings
I conducted a contextual inquiry with three drivers to identify problems users might encounter in a realistic environment and to find solutions quickly. During the sessions, I sat in the back seat of the car, observed, and asked the drivers questions.
In the contextual inquiry, I found two different interaction patterns: driving-centric and infotainment-centric. Depending on the scenario and their preferences, drivers may switch between these two modes. For example, some drivers focus more on entertainment on familiar commuting routes, and on navigation and driving on unfamiliar routes (e.g., when traveling).
The first part of the voice user interface design is voice character design. In the participatory design workshop, the first toolkit is a deck of Big Five personality trait cards.
In the workshop, each participant is asked to pick five different traits from the cards and play a role-playing game to get a feel for the character. At the end of this phase, the five most frequently chosen traits become the assistant's personality.
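The aggregation step at the end of this phase amounts to a simple tally across participants. A minimal sketch in Python, assuming hypothetical trait names and picks (the actual cards and workshop data are not part of this summary):

```python
from collections import Counter

# Hypothetical picks: each participant selected five trait cards.
picks = [
    ["friendly", "calm", "curious", "organized", "helpful"],
    ["friendly", "witty", "calm", "helpful", "confident"],
    ["curious", "friendly", "calm", "organized", "witty"],
]

# Flatten all picks and count how often each trait was chosen.
tally = Counter(trait for participant in picks for trait in participant)

# The five most frequently chosen traits form the assistant's personality.
personality = [trait for trait, _ in tally.most_common(5)]
print(personality)
```

Ties between equally popular traits would need a workshop decision (e.g., a group vote), since `most_common` breaks ties arbitrarily by insertion order.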