When entering the primary insurance product funnels, people told us they were unsure how much time the process would take. Drop-off increased with each additional question, even a simple yes/no one, and we hypothesized the cause was frustration at not knowing where they were in the process.
After researching progress indicators, I determined that two types would work within our technical limitations while providing clear visual feedback as users moved through the form. The first was a stepped indicator that grouped the form questions into content-based groups—for auto insurance those groups were "vehicles," "drivers," and "contact." The second was a slim line showing estimated percentage complete that lengthened with each question submitted. Both were flexible enough to work across multiple products and could be defined within the JSON that controlled the order of questions.
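As a rough sketch of what that configuration might have looked like—the field names and shapes here are illustrative assumptions, not the actual production schema:

```typescript
// Hypothetical shape of the question-flow config; field names are
// illustrative, not the real schema.
interface QuestionStep {
  id: string;
  prompt: string;
  group: "vehicles" | "drivers" | "contact"; // used by the stepped indicator
}

interface FunnelConfig {
  product: string;
  indicator: "stepped" | "slim" | "none";
  questions: QuestionStep[];
}

const autoFunnel: FunnelConfig = {
  product: "auto",
  indicator: "slim",
  questions: [
    { id: "vehicle-year", prompt: "What year is your vehicle?", group: "vehicles" },
    { id: "driver-dob", prompt: "What is your date of birth?", group: "drivers" },
    { id: "zip", prompt: "What is your ZIP code?", group: "contact" },
  ],
};

// The slim indicator's width can be derived directly from the user's
// position in the question list.
const percentComplete = (cfg: FunnelConfig, answered: number): number =>
  Math.round((answered / cfg.questions.length) * 100);
```

Because both indicator styles read off the same ordered question list, either could be toggled per product without touching the form logic itself.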
Each approach had clear pros and cons. The stepped indicator provided a very clear sense of place and allowed users to get an idea of what was to come, but did not change until the entire group of questions was complete, which felt discouraging. The slim linear indicator provided visual feedback after every question but did not provide any contextual information on what the process would entail.
Only testing could prove which was the appropriate solution for our product. A single iteration of each type was chosen, and we designed a multivariate test to measure how each one affected question completion rate and full-form conversion rate. We started the experiment on 15% of all auto traffic, split evenly between stepped, slim, and the control with no progress indicator.
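A deterministic-hash assignment is one common way to implement that kind of split; the sketch below is an assumed approach for illustration, not the actual experimentation framework we used:

```typescript
// Hypothetical bucketing: 15% of auto traffic enters the experiment,
// split evenly across stepped, slim, and control.
type Variant = "stepped" | "slim" | "control" | "not-in-test";

function hashToUnitInterval(userId: string): number {
  // Simple string hash mapped to [0, 1]; a real framework would use a
  // stronger hash such as MurmurHash.
  let h = 0;
  for (const ch of userId) {
    h = (h * 31 + ch.charCodeAt(0)) >>> 0;
  }
  return h / 0xffffffff;
}

function assignVariant(userId: string): Variant {
  const r = hashToUnitInterval(userId);
  if (r >= 0.15) return "not-in-test"; // 85% see the default experience
  const bucket = r / 0.15; // rescale the eligible 15% to [0, 1)
  if (bucket < 1 / 3) return "stepped";
  if (bucket < 2 / 3) return "slim";
  return "control"; // in the test, but with no progress indicator
}
```

Hashing on a stable user ID keeps each person in the same variant across sessions, which matters when measuring full-form conversion.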
The slim style won by a wide margin. To validate our results, we then removed the stepped option from the test, and the increase in conversion rate held. The next phase of testing expanded the eligible population to 100% of all auto traffic across all brands, with half seeing the slim bar and half seeing no progress indicator. After auto, we slowly phased the testing in across all the full-funnel products.
We felt the data further supported our original hypothesis, and planned several additional iterations of the progress bar to provide more clarity about where the user stood in the entire insurance comparison process.
Although the primary purpose of the product was to display multiple insurance providers so users could compare them, there was a significant percentage of cases where only one of our advertisers was able to write a policy for that person. Users were typically unhappy with this result and often did not progress further to the provider's site. Single-advertiser results averaged a 20% click rate, as opposed to a 35% click rate on the first-position advertiser when multiple options were presented.
The horizontal listing layout worked well when users were presented with three or more potential insurance providers. When only one option was present, people were sometimes confused, and a few thought they needed to wait for more to load until they saw the "1 match found" copy at the top of the page. My primary goal was to make it as clear as possible that this was the only provider they were matched with through our service, without implying that it was the only provider possible for them. After several iterations of rapid user testing with employees who did not work on the funnel, the "best match" label drew the most positive reaction. The simplest change to the layout also yielded the best result: stacking the content vertically took up more of the empty space and was visually consistent with the current listing experience. This proposed visual change would also be simple to implement and could be replicated on other products as the A/B test progressed.
This test was created to increase advertiser click rate for people who made it all the way through the form and matched with only one advertiser, a group that made up only about 3% of total traffic. Our standard process would create a test population too small to draw reliable conclusions from, so we decided to run the test at a 50-50 split for 100% of those eligible users.
Once this test was complete, we next tested this layout on the top listing when multiple providers were presented. That test was still in progress, but the preliminary results were positive at the time I left the Funnels Optimization team.
Progressive had been our largest advertiser for auto and home insurance for years, but had never permitted us to show their rates. After the advertiser team built up the relationship over several years, they finally agreed to test displaying the user's estimated rate at the end of our funnel, before the user clicked through to Progressive's site. We knew that not displaying rates caused a lot of confusion for our users, so we were excited to finally test providing that information.
Once we had final confirmation from Progressive on what information they would provide via their API and what we would need to display to be compliant with both legal departments, I worked with the engineering team to nail down how we would pass the data back and forth with Progressive to create as smooth an experience as possible. The trickiest part was agreeing on how to delay the bulk of the load: we would have to wait for the response from Progressive, and at that time we did not show the final listings page until all information was returned from our advertiser API. We did not want to leave the user in a loading limbo with no feedback for more than a couple of seconds, much less twenty, and we could not change the interstitial content without affecting other experiences and products, as it was a shared state.
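One way to structure that hand-off is sketched below: render the listings as soon as our own API responds, fetch the Progressive rate in parallel, and fall back to the standard no-rate listing on timeout. The endpoint, payload shape, and UI hooks are assumptions for illustration, not the real integration:

```typescript
// Hypothetical flow: show listings immediately, fetch the Progressive
// rate in parallel, and fall back to the no-rate listing on timeout.
interface RateQuote {
  monthlyPremium: number;
  disclaimer: string; // compliance copy required by both legal teams
}

async function fetchRateWithTimeout(
  leadId: string,
  timeoutMs: number,
): Promise<RateQuote | null> {
  const controller = new AbortController();
  const timer = setTimeout(() => controller.abort(), timeoutMs);
  try {
    // "/rates/progressive" is an illustrative endpoint, not the real one.
    const res = await fetch(`/rates/progressive?lead=${leadId}`, {
      signal: controller.signal,
    });
    if (!res.ok) return null;
    return (await res.json()) as RateQuote;
  } catch {
    return null; // aborted or network error: fall back to standard listing
  } finally {
    clearTimeout(timer);
  }
}

async function renderListings(leadId: string) {
  showListingsPage(); // render immediately; the rate slot shows its own spinner
  const quote = await fetchRateWithTimeout(leadId, 20_000);
  if (quote) {
    showEstimatedRate(quote);
  } else {
    showStandardSingleListing();
  }
}

// Assumed UI hooks; in production these would be the funnel's own renderers.
declare function showListingsPage(): void;
declare function showEstimatedRate(q: RateQuote): void;
declare function showStandardSingleListing(): void;
```

Scoping the wait to a single rate slot on an already-rendered page avoids touching the shared interstitial state entirely.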
The engineers got to work building a new rate-fetching microservice, and I started sketching what the end state of the funnel would look like. I had to juggle what people using our product wanted, what our advertiser Progressive wanted, and what the legal and compliance teams required. After sketching, I presented the team and Progressive with several wireframe variations for each possible result.
Once everyone agreed on a direction, I worked on a set of high-fidelity mockups for further approval and built a simple interactive prototype to use in testing. The biggest question was how long people would wait for the rate to be calculated.
During a quick round of user testing with the interactive prototype, multiple people thought the prototype was broken when the load time stretched on, and said they would likely have abandoned the site. While this particular part of the project was out of our control, we raised it with the advertiser in the hopes that they might prioritize optimizing the speed of their API. Outside of load times, we heard much of the same feedback as we did during the single-listing emphasis tests: QuoteWizard's products claimed to help people easily and quickly compare insurance companies, yet users were seeing only one provider option. People felt it was a bait and switch, and even though they were pleased to see a potential rate, it wasn't what they had originally set out to accomplish.
The development of this project took a good bit longer than anticipated due to authentication and security concerns, but once it was complete and fully tested in the staging environment, we began the A/B test between the rate-display experience and the standard single-listing experience. The population for this experience was very small, so we started with a 50-50 split on eligible users. The click rate on the Progressive ads was trending upward in the test group, and we were still waiting on click-to-bind updates from Progressive when I left QuoteWizard.