Google Home users are unable to create custom routines with their smart devices for various scenarios.
Unable to unlock the full potential of routine customization, smart home owners must resort to workarounds to get their devices to do what they want.
Product Designer | Usability Case Study
Affinity Mapping, Task Flows, Job Stories, Wireframes, Prototyping, Hi-Fi, Usability Testing
Through guerrilla usability testing, I found that new users trying to create a custom routine were confused by the meaning of many important terms and could not find enough guidance to carry through seamlessly. I sought to redesign several critical screens so users could understand what to do and thus create routines more easily.
To solve this, Popular Actions can be brought up a level so they are less deeply nested. Alternatively, the link could be restyled as an interactive button that invites users to explore.
Simply making the wording of terms consistent will help users understand the flow better, so they don't have to go back and forth to remember what each term means.
We can add copy that gives more background on what certain terms mean and provides examples of text to input.
Ironically, the process of making life easier for smart home owners is a pain in and of itself. Inconsistent copy, confusing examples, and a lack of guidance or instructions make it cumbersome for users to learn how to automate things for various scenarios. I focused on solving these pain points, and afterward users were able to set up multiple custom routines on their own with little to no questions.
Pain Point 1 - Popular Actions are not prominent or clear.
Pain Point 2 - UX Copy is inconsistent before and after adding commands.
Pain Point 3 - 'Add Command' screen lacks any context or directions.
Before: "Popular Actions" button looked like a documentation link.
Before: The meaning of a 'New Command' was ambiguous and changed from screen to screen.
Before: Adding a new command provided no guidance whatsoever.
After: Popular Actions are now expanded for viewing on the first page instead of hidden behind a nested link.
After: Keeping the guiding words "When I say..." allows users to maintain the same mental model when inputting commands.
After: Now, it provides a prompt "When I say..." and some examples underneath to give users ideas on what to input.
1. Design audit - One of the largest weaknesses of the current Google Home app is its mapping. Many screens do not follow users' existing mental models (the placement of the 'Save' button, for example). Additionally, screens lack the affordances needed to understand certain interactions (poor copy contributes to this).
2. Comparative Research - Though many competitors handle custom routines, each structures the process differently and no app is perfect. Routines, Commands, and Actions are all named differently across every app.
To improve the flow for smart home users, we have to eliminate the confusion at the pain points that causes them to exit the flow to find more information (or re-learn things).
Popular Actions gave users clear examples of the kinds of 'Actions' Google Assistant could carry out. However, they were nested inside an ambiguous link that looked like a documentation URL to many users and lacked the context needed to get them to tap.
Added a dropdown arrow to show that 'Show popular actions' is interactive.
A hybrid of the other two versions. Shows some popular actions to reduce cognitive load and clutter on the page.
Shows the full spectrum of popular actions at a high level.
Reducing cognitive load is important, but so is providing enough information for the user to make an informed decision. I decided on the 2nd option, which lets users discover more Popular Actions if they choose to.
Many of my users complained that the wording of terms and instructions would change from one screen to the next, making it extremely hard to follow.
Simply changed the copy inside actions to describe what the user will be setting.
Separated "When I say..." from "Or when..." to make the distinction between the two clearer.
Followed Typeform's design by de-emphasizing later steps.
In these iterations I wanted to explore how separating grouped items would affect the user's thought process. I chose the 1st option because the 2nd option actually adds another decision for the user to make.
Some screens were stripped of any instructions or context for the user to understand what to do. Users would back out of the screen to escape the awkwardness and regain their footing.
Added enough context and examples for the user to feel comfortable entering input.
Changed the title text to "Enter a verbal cue" and matched the format of "Enter a Google Assistant command" to keep things consistent.
Combined the first two options, while still emphasizing the input dialog.
Adding affordances was a no-brainer, but I questioned whether a modal was the way to go. Ultimately I went with the first option to reduce cognitive load, since this was the first input.
The main focus of this validation testing was to make sure that users could move through the custom routine setup process fluidly. This meant guiding them with helpful copy, enough context, and well-labeled input fields so they are never lost on any screen. It also meant providing feedback every time they completed a step and replacing ambiguous "checkmarks" with actual "SAVE" buttons.
For validation testing, I gave users two main tasks: first to set up a ready-made routine, then to create a custom routine, while asking open-ended questions about what they were thinking.
If the process of creating custom routines were much easier, more people could create routines with ease, and possibly buy more devices and recommend smart devices to others. With so many obstacles, though, that can't happen yet. That's why I sought to lend a helping hand in Google Home's routine creation flow in the form of consistent copy, relevant examples, and balanced context throughout the process.
Performing usability testing on new and old users provides a lot of insight through fresh eyes. It's important to set aside my own bias as a user, and I learned a lot about how others interpret Google Home's terminology.
Sometimes, simply adding placeholder text can go a long way for lost users. It's the equivalent of adding a street sign where one is missing: people expect it to be there, and if it isn't, they'll question themselves.
I would have reconstructed the information hierarchy of the app! There are so many different touchpoints the app tries to cover, but the best apps focus on tasks related to a single function. It's apparent that the Google Home app tries to do too much at once.
I'm open to chat about any opportunities, or even just to chat!