As the Funnels Optimization team, our approach had been to test one small change at a time to gradually improve the product, and that approach had worked well for the first eight months of the team's existence. However, we had started to uncover major problems that we believed warranted a full redesign. Historically, UX had not been a priority for the product's stakeholders, and accessibility even less so. There was no documentation of why the original design decisions had been made, nor any research behind them. The only user feedback we had access to was bits and pieces heard secondhand from the customer success reps. While we could keep making small changes over time, we felt it would be a better use of our time to build a stronger foundation for the funnels.
First, we completed a quick heuristic evaluation of the current designs against both usability and general accessibility standards. Error messaging and general accessibility were the most egregious problems, but there were also many issues with navigation, consistency, recognition over recall, visibility of system status, and help through the process. Many of these issues were well known but had not been prioritized by stakeholders.
While comparing session data across different groups of users, we noticed large discrepancies in key metrics that let us ballpark lost revenue, numbers we could use to sell stakeholders on the proposed changes.
We also determined that the first slide had the highest abandonment rate, which we hypothesized was due either to a misalignment of content between the traffic source and the funnel or to low information scent on the first page itself. The first personal information slide had the second-highest abandonment rate. We knew it would likely always be near the top, but made a note to watch participants' reactions to that question during user testing to see if there might be a way to soften the blow.
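The per-slide abandonment comparison above can be sketched with a few lines of code. The session counts here are made up for illustration; the real numbers came from our analytics tooling.

```typescript
// Hypothetical count of sessions that reached each slide of the funnel.
const sessionsReachingSlide: number[] = [10000, 6200, 4600, 4100, 3900];

// Abandonment rate for slide i = fraction of sessions that reached
// slide i but never reached slide i + 1.
function abandonmentRates(counts: number[]): number[] {
  return counts
    .slice(0, -1)
    .map((reached, i) => (reached - counts[i + 1]) / reached);
}

const rates = abandonmentRates(sessionsReachingSlide);
// In this made-up data, slide 1 (index 0) loses the largest share of users.
```

Ranking the resulting rates is what surfaced the first slide and the first personal-information slide as the two biggest drop-off points.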
The primary goal for the first usability test was to find issues with the product that we may be overlooking as people familiar with the product and the business's goals and strategy. We also wanted to get an idea of which problems were the largest and could have the biggest impact if solved.
This first test was presented to five participants of various ages, insurance histories, and research preferences. They were shown the primary landing page for search traffic in our staging environment (to prevent the acquisition and sale of their data) and asked to "compare car insurance offers." The test went fairly well and supported some of our assumptions, but there were a few surprises. One person managed to break the form due to an undiscovered bug and had to refresh in order to progress, and another read the legal disclosure top to bottom before submitting. Participants occasionally asked why a given question was required. My favorite moment was a participant laughing and saying, "Why is it yelling at me? 'MARRIED?' Sorry, sorry...it just felt so aggressive!"
Based on the findings from the session data and the usability testing, we came up with some additions to the product that could help align the final page with user expectations as well as hopefully add a new source of inbound calls to help increase revenue from call-buying advertisers like State Farm.
After we presented the updated listings page concept to the stakeholders, their biggest concern was that people would click on advertisers less often. Clicks were a large portion of the funnel revenue that we certainly did not want to lose. However, carriers had historically been willing to pay more per click for sources with a higher click-to-application rate. If users were presented with more information, they might be more secure in their decision to choose a carrier, so we assumed the click-to-application rate would increase as a result. Showing plan details upfront might also filter out people who would be dissatisfied for one reason or another. We knew carriers would pay more for higher-intent traffic because they already did by source: internal paid sources were the most likely to apply and carried the highest bids. It would take more time to roll out the test, but we decided it could be set up as a new source to test this theory without affecting existing bids.
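The tradeoff we debated with stakeholders reduces to simple expected-value arithmetic. The rates and bids below are purely illustrative, not our real numbers:

```typescript
// Illustrative only: expected click revenue per 1,000 funnel sessions.
// revenue = sessions × clickRate × costPerClick
function revenuePerThousand(clickRate: number, costPerClick: number): number {
  return 1000 * clickRate * costPerClick;
}

// Current design: more clicks, but lower-intent traffic and lower bids.
const current = revenuePerThousand(0.3, 8); // roughly $2,400

// Redesign hypothesis: fewer clicks, but carriers bid more per click
// because the click-to-application rate is higher.
const redesign = revenuePerThousand(0.24, 11); // roughly $2,640
```

Under these assumed numbers, a drop in click rate is more than offset by higher bids, which is why we wanted a separate test source: it let carriers bid on the new experience independently, without touching existing bids.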
Once the wireframe was approved with some small modifications and we received more information from the advertiser relations team as to what we could and could not include on listings, I began to flesh out the design system.
The new design would need to work with 100+ unique color combinations, as the product could be white-labeled, so I created two neutral base themes built on relative values alongside our standard QuoteWizard by LendingTree theme, which followed our brand guidelines. The two neutral themes could be skinned with more affiliate colors than we had previously supported, making the handoff from an affiliate's content to ours feel more consistent.
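A minimal sketch of how a white-label theme layer like this might be structured: each affiliate supplies only its brand colors, and a neutral base theme fills in the relative values. All names and colors here are hypothetical, not the actual design tokens.

```typescript
// Hypothetical brand input: the only values an affiliate must supply.
interface BrandColors {
  primary: string;
  accent: string;
}

// Full theme: brand colors plus the values derived from a neutral base.
interface Theme extends BrandColors {
  background: string;
  text: string;
  link: string;
  buttonBackground: string;
}

// One of the two neutral base themes (light variant).
const neutralLightBase = {
  background: "#ffffff",
  text: "#1a1a1a",
};

// Skinning: affiliate brand colors fill the relative slots.
function buildTheme(brand: BrandColors): Theme {
  return {
    ...neutralLightBase,
    ...brand,
    link: brand.primary,
    buttonBackground: brand.accent,
  };
}

// A made-up affiliate palette produces a complete, consistent theme.
const affiliateTheme = buildTheme({ primary: "#0b5fff", accent: "#ff7a00" });
```

Because derived slots like `link` are expressed relative to the brand input, one base theme can absorb many affiliate palettes without per-affiliate stylesheets.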
With our team's director and data analyst weighing in on the data collection and analysis side, we came up with a phased plan of testing and implementation. As the final listings page was the biggest pain point in user testing, already looked nothing like the funnel, and would require the most development work, we decided that testing that first (with the current header and footer) on a small subset of traffic would give us the clearest idea of impact for providing users with additional information.
While that test was running, we'd run a standard A/B test of the current HTML and styles versus the redesign. In the beginning we would test only on internal sources that used the QuoteWizard brand, then expand to external sources using the same brand. If the test experience showed improvements in retention and conversion, we'd start rolling it out to the 100+ affiliate brands. Once the test period and analysis of the results were complete for the auto insurance product, we'd follow the same process for the home, renters, health, and Medsupp funnels.
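The gradual rollout described above is commonly implemented with deterministic bucketing, so a returning visitor always lands in the same variant as the test expands. This is a generic sketch under that assumption, not our actual assignment code:

```typescript
// Map a user ID to a stable value in [0, 1] with a simple rolling hash.
// Hashing (rather than randomizing per request) keeps each visitor in
// the same variant for the life of the test.
function hashToUnit(userId: string): number {
  let h = 0;
  for (let i = 0; i < userId.length; i++) {
    h = (h * 31 + userId.charCodeAt(i)) >>> 0; // unsigned 32-bit
  }
  return h / 0xffffffff;
}

// rolloutFraction starts small (e.g. 0.05 for a small subset of traffic)
// and grows as the results build confidence.
function assignVariant(
  userId: string,
  rolloutFraction: number
): "control" | "redesign" {
  return hashToUnit(userId) < rolloutFraction ? "redesign" : "control";
}
```

Raising `rolloutFraction` only moves users from control into the redesign; nobody already in the redesign flips back, which keeps the phased expansion clean.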