Tire E-Commerce
Optimize shopping to drive sales
Project Details
Client
Bridgestone and Firestone Tires (BATO)
Client Description
The Americas branch of a multinational auto and truck parts manufacturer
Team
Collaborated with a UX Designer, a UX Director, and members of the Data and Analytics Team
My Role
Lead researcher (contract) - planned and conducted all user research
Timeline
3 Months
Deliverables
Summary report of suggested usability improvements (with substantiation and justification), a detailed user flow analysis, and UX benchmarking scores
EXECUTIVE SUMMARY (TLDR)
The Ask
The client tasked us with discovering which usability and technical issues might be causing low sales performance, and with recommending ways to improve the user experience.
Problem
The Bridgestone and Firestone e-commerce websites were both underperforming expectations for the number of tire purchases.
Goals
Business Goals
Increase online tire sales by determining the weakest-performing and most problematic parts of each website, and developing recommendations on how to improve usability, enhance functionality, and increase conversion.
Research Goals
Discover and prioritize issues on both websites that are negatively affecting usability and conversion. Suggest UI and UX improvements for desktop and mobile versions of each site.
Process and Approach
Overview
General approach: identify the jobs to be done that customers use the sites for; then, for each job, assess usage to establish a performance baseline from analytics, and conduct UX/UI audits and reviews to suggest functional improvements and technical changes to the sites.
Defining User Goals
We started by determining what to benchmark, audit, and review in the first place. We identified the most used (high-traffic) areas of the sites and determined what tasks our research would assess later on.
Discovering how users flow through the websites
For each significant task we decided to review and audit, we established a quantitative baseline of current usage. This baseline let us compare against the improved design's performance later and identify the biggest points of task abandonment, where users failed to achieve their goals.
Evaluating UX and UI Functionality
We developed a bespoke scorecard (rubric) for this project to review and audit the main areas of the websites. We combined traditional, industry-standard heuristics with attributes and definitions of criteria unique to our client's needs. The scorecard provided a high-level overview of how well each area of the site supported users and delivered an ideal user experience.
UX/UI Recommendations
For each main area of the site, we went through every screen and micro-interaction looking for opportunities to improve the UX, UI, and content (both writing and images). We identified over 500 issues, each reinforced by linking it to the scorecard (rubric) from the previous step.
Solution
Redesign the search functionality on both websites from scratch
Since searching is the main way people shop for tires, it has the biggest impact on sales and should be prioritized above all other changes.
The current search bar does not follow typical UX/UI conventions.
It’s functionally too complicated to use easily, requiring many steps and a lot of effort from the user.
The way searches are initiated and modified should be improved, and a more common design pattern should be adopted to match users’ mental models of search.
Results
Increased
calls to dealers (more attributed sales)
dealer searches
the number of pages per visit
time spent on site
tire searches
tire detail page views
Decreased
loading times
processing times
APPROACH AND PROCESS (IN DETAIL)
Overview
General approach: identify the jobs to be done that customers use the sites for; then, for each job, assess usage to establish a performance baseline from analytics, and conduct UX/UI audits and reviews to suggest functional improvements and technical changes to the sites.
This project had two main parts:
Conduct an audit of both existing sites and determine the largest problem areas of the site from UX and UI perspectives
Recommend UX and UI improvements that will have the largest positive impact
The first part of the project focused on determining how the sites were being used and how that usage would be reflected in analytics data, which would serve as an initial baseline.
The second part of the project focused on defining how we would rate each intent case, and using those rating criteria to evaluate the user experience. Finally, I traversed each site following each intent case’s flow and documented all UX/UI problems and suggested improvements.
Defining User Goals
We started by determining what to benchmark, audit, and review in the first place. We identified the most used (high-traffic) areas of the sites and determined what tasks our research would assess later on.
The first step of the project was to determine what users/customers were using the sites for - more specifically, to define what goals people have when using the Bridgestone or Firestone websites.
What I did
Since we were not able to directly ask users what they were trying to accomplish in a true-intent study, I had to use inductive logic to indirectly determine probable user intent. To do this I analyzed the information architecture of the site and the goals it appeared to be facilitating.
Based on the site’s navigation, content, and functionality, I came up with 7 main intents (goals):
Educate & Research
Select Tires (Shopping)
Find a Store (Local)
Schedule an Appointment
Get Support
Discount & Rewards
Register Tires
To capture these intentions (goals), I chose a wireflow format, since it captured all the information we needed and could be produced quickly. The team started calling these visual representations of use cases “Intent Cases” (a blend of “intention” and “use cases”). These intent cases were not an exhaustive list of every possible path a user could take; rather, they captured the most common or likely path(s) to accomplish each goal. Focusing on these primary paths drew attention to the key elements that make up a user’s experience on each site, and covered enough ground that the Data and Analytics Team could use them as inputs for their work.
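To make the format concrete, here is a rough sketch of how one intent case could be expressed as data (in Python). The page names and paths are hypothetical; the real intent cases were visual wireflow documents.

from dataclasses import dataclass

@dataclass
class IntentCase:
    name: str               # the user goal this case represents
    paths: list[list[str]]  # the most likely page sequences, in order

# Hypothetical example of the "Select Tires (Shopping)" intent case
shopping = IntentCase(
    name="Select Tires (Shopping)",
    paths=[
        # primary path: search for tires, view details, find a dealer
        ["home", "tire_search", "search_results", "tire_detail", "find_dealer"],
        # alternate path: browse the catalog instead of searching
        ["home", "tire_catalog", "tire_detail", "find_dealer"],
    ],
)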
Transition to the next step: After some iteration, our team aligned on the exact “intent cases” (wireflows) we would study further. Getting clear on the main paths users take through the websites was essential to measuring user behavior with analytics later in the project.
Discovering how users flow through the websites
For each significant task we decided to review and audit, we established a quantitative baseline of current usage. This baseline let us compare against the improved design's performance later and identify the biggest points of task abandonment, where users failed to achieve their goals.
What I did
The wireflow “Intent Cases” served as inputs for measuring usage and conversion rates (behavioral signatures) in the Adobe Analytics tracking system, giving us a data-driven perspective on how users actually used each website.
How I collaborated
I verbally walked the Data and Analytics Team through the branching paths in each “Intent Case”, and they translated the journeys into queries that searched for user activity matching those paths. Groups of users who traversed the site in similar ways were treated as a single user segment.
For each user segment, the Data Team performed analysis to understand users’ behavior based on where traffic came from and where it went throughout the website. This made it possible to determine how successful users were in achieving their goals: users who reached the final page of an “Intent Case” and took action there were counted as successes, and those who dropped off before completing the full case were counted as failures (a simplified version of this logic is sketched below).
The Data and Analytics Team translated the user paths in each intent case into Adobe Analytics
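The real analysis lived in Adobe Analytics queries, but the success/failure logic itself is simple. Below is an illustrative Python sketch of it, with in-memory session lists and hypothetical page names standing in for real clickstream data.

def completed(session: list[str], path: list[str]) -> bool:
    """True if the session visits every step of the path, in order."""
    remaining = iter(session)
    return all(step in remaining for step in path)

def funnel(sessions: list[list[str]], path: list[str]) -> dict:
    """Count successes (full path traversed) versus drop-offs."""
    entered = [s for s in sessions if path[0] in s]  # sessions that entered the flow
    succeeded = sum(completed(s, path) for s in entered)
    return {
        "entered": len(entered),
        "succeeded": succeeded,
        "dropped_off": len(entered) - succeeded,
        "conversion": succeeded / len(entered) if entered else 0.0,
    }

# Hypothetical sessions: one drop-off, one success
sessions = [
    ["home", "tire_search", "search_results", "exit"],
    ["home", "tire_search", "search_results", "tire_detail", "find_dealer"],
]
print(funnel(sessions, ["home", "tire_search", "search_results", "tire_detail", "find_dealer"]))
# {'entered': 2, 'succeeded': 1, 'dropped_off': 1, 'conversion': 0.5}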
Transition to the next step: The analytics numbers painted a vivid picture of where users were dropping off - these were parts of the site that most likely had usability issues - which is where our team would focus most of our efforts going forward.
Evaluating UX and UI Functionality
We developed a bespoke scorecard (rubric) for this project to review and audit the main areas of the websites. We combined traditional, industry-standard heuristics with attributes and definitions of criteria unique to our client's needs. The scorecard provided a high-level overview of how well each area of the site supported users and delivered an ideal user experience.
What to evaluate
The project’s goals required that we evaluate two categories: usability and conversion. There were many possible things to evaluate, and we considered about a dozen lists of heuristics developed by various UX professionals. Where it made sense, we combined these lists and eliminated any heuristics that weren’t pertinent to the project, arriving at a manageable set of criteria. We broke each category down into five elements, summarized below:
Usability of the Experience: Ease of Use, Usefulness, Structure, Discoverability, Refined for Mobile
Optimization for Conversion: Increasing Focus, Clarifying Understanding, Expediting Outcomes, Driving Decisions, Enhanced Experience
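Expressed as a simple structure, the rubric pairs each category with its five elements. The Python below is only illustrative; the actual scorecard also carried a written definition for each criterion.

RUBRIC = {
    "Usability of the Experience": [
        "Ease of Use", "Usefulness", "Structure",
        "Discoverability", "Refined for Mobile",
    ],
    "Optimization for Conversion": [
        "Increasing Focus", "Clarifying Understanding",
        "Expediting Outcomes", "Driving Decisions", "Enhanced Experience",
    ],
}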
Deciding how to conduct a benchmarking evaluation
We did not have the time or budget for traditional usability testing, and the elements in the optimization category would be difficult to rate objectively, so I looked for the most efficient and effective way to evaluate these websites given our constraints.
I considered conducting a heuristic evaluation, since it would be fast and would capture many glaring issues, but I was concerned about missing problems and about the bias I would bring as the only evaluator. I needed a method that was more thorough and less biased, and determined that the best usability inspection method would be “Pragmatic Usability Rating by Experts,” also known as PURE. The PURE method is fast, simple, thorough, and encourages collaboration between multiple researchers to arrive at an average score.
How I did it
The stakeholders approved this approach, and we hired one additional contractor to help with the scoring. I had already set up the heuristics analysis framework, so all I needed to do was explain our process (the scoring arithmetic is sketched after this list):
Review the heuristics analysis ratings and definitions
Follow the flows in each “Intent Case”
Rate each part of the experience, taking qualitative notes on things we observed that warranted the rating
Calculate the mean (average) of our combined scores
Discuss the final score, tweaking it slightly if one of us had a compelling argument based on criteria the other had overlooked
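The arithmetic in steps 4 and 5 is straightforward. The sketch below shows it for one flow, assuming two raters and the 1-3 step scale PURE typically uses, where a task’s score is the sum of its step scores (lower is easier); the raters and numbers are illustrative.

from statistics import mean

# Each rater scores every step of one intent-case flow (1 = easy ... 3 = hard)
ratings = {
    "rater_a": [1, 2, 3, 2],
    "rater_b": [1, 3, 3, 2],
}

# Average the raters' scores per step, then sum for the task score
step_scores = [mean(step) for step in zip(*ratings.values())]
task_score = sum(step_scores)

print(step_scores)  # [1, 2.5, 3, 2]
print(task_score)   # 8.5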
Overview of scores for each of the seven intent cases.
Detailed description of scores for the Shopping Intent-Case
Transition to the next step: By creating a bespoke rubric for grading usability, we were able to address the client's unique needs while minimizing “fluff”: metrics and heuristics that didn't matter as much or didn't apply. Quantitative scoring made usability issues more black and white, easier to understand, and easier to prioritize.
UX/UI Recommendations
For each main area of the site, we went through every screen and micro-interaction looking for opportunities to improve the UX, UI, and content (both writing and images). We identified over 500 issues, each reinforced by linking it to the scorecard (rubric) from the previous step.
To provide specific, tactical recommendations for improvements, I revisited the Bridgestone and Firestone websites, following the “intent case” paths once again. Acting as the user, I tried to complete the goal of each case, taking screenshots and notes on every element that negatively affected my ability to use the site or make a purchase.
Example UX/UI recommendations
Transition to the next step: With a clear set of prioritized and documented usability issues throughout the sites, the product team had a crystal clear understanding of what needed to change, how it needed to be improved, and why it mattered.
CONCLUSION
Takeaways and Recommendations
First Priority - Fix Sitewide Search
The search functionality is the most problematic part of both websites. Since searching is the main way people shop for tires, it has the biggest impact on sales and should be prioritized above all other changes.
More specifically, the search bar does not follow typical UI conventions. It’s functionally too complicated to use easily, requiring many steps and a lot of effort from the user. The way searches are initiated and modified should be improved, and a more common design pattern should be adopted to match users’ mental models of search.
Other recommendations
Suggested UX and UI improvements on both desktop and mobile web
Over 350 suggested improvements sitewide
Results
Key Metrics
* Exact site performance metrics are confidential
Increased
calls to dealers (more attributed sales)
dealer searches
the number of pages per visit
time spent on site
tire searches
tire detail page views
Decreased
loading times
processing times
Client Feedback
“Mo helped our team identify key problems and areas of opportunity across the key flows within the website. This allowed us to focus our optimization efforts on ‘quick win’ items that would be easy to achieve while having the highest impact in the near term.”
~ Steve Shay, ECD UX Design at iCrossing