Generative User Research - qualitative & quantitative
Spoke is a social media advertising tool that lets real estate agents target their local audiences with effective campaigns. Spoke is a product of Rexlabs - the proptech house where I worked as a product designer and research lead.
When I joined Spoke, the product was celebrating one year on the market and growing rapidly. However, as we planned to scale the product, we realised we didn't have a full picture of our current customer base (i.e. their characteristics, demographics, value perceptions, and how these might differ between user groups).
Whilst there had been ad hoc feature-related user research before, we hadn't yet put processes in place to consistently gather and consolidate data into a better understanding of our customer base.
The first step was to identify which data sources we already had available and which we would have to create. As Product Designer and Research Lead on the team, I facilitated a workshop to kick us off.
Once we had identified the required data sources, I worked with the Data Analyst on the team to create integrations that would feed our existing data (such as customer feedback from Customer Support and BD calls) into our research tool. In addition, I worked with the Devs to create methods of gathering the data we didn't already have (such as an NPS survey sent to our customers periodically).
Whilst the main focus of this project was to set up processes that would allow us to get to know our users better over time, we also gathered valuable insights in this initial set-up. Here are just a few of them.
Insight #1: There are three distinct user/customer groups
Very early on in our research we identified three distinct groups among our customers.
First, there was the agent-user: a real estate agent who uses Spoke to place campaigns themselves, usually at smaller agencies. Second, there was the group of admin-users: Marketing Managers, Executive and Admin Assistants who use the product on behalf of agents. Lastly, there was the group of non-using agents, who have a Marketing Manager, EA etc. place campaigns on their behalf but don't spend time in the product themselves and often don't even have an account.
Although this insight wasn't entirely new, it helped us formally define our user/customer groups, which in turn helped us reach them effectively to gather data (non-using agents, for example, cannot be reached via in-app notifications) and differentiate the gathered insights accordingly.
Insight #2: Non-using agents are much less satisfied with the product than admin-users or agent-users.
From our NPS scores, which we had separated for each different user group, we could identify that agents who do not spend time in the product (i.e. non-using agents) are much less satisfied with Spoke.
This was a particularly interesting insight, as this customer group was in most cases the campaign purchaser, so their product satisfaction was vital for ongoing engagement and the prevention of churn.
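For readers unfamiliar with the metric: NPS classifies respondents scoring 9-10 as promoters and 0-6 as detractors, and the score is the percentage of promoters minus the percentage of detractors. A minimal sketch of the per-group calculation we relied on (the data and group labels below are illustrative, not Spoke's actual survey results):

```python
from collections import defaultdict

def nps(scores):
    """Net Promoter Score: % promoters (9-10) minus % detractors (0-6)."""
    if not scores:
        return None
    promoters = sum(1 for s in scores if s >= 9)
    detractors = sum(1 for s in scores if s <= 6)
    return round(100 * (promoters - detractors) / len(scores))

def nps_by_group(responses):
    """responses: iterable of (user_group, score) pairs."""
    grouped = defaultdict(list)
    for group, score in responses:
        grouped[group].append(score)
    return {group: nps(scores) for group, scores in grouped.items()}

# Illustrative data only -- not real survey results.
responses = [
    ("agent-user", 9), ("agent-user", 10), ("agent-user", 7),
    ("admin-user", 9), ("admin-user", 8),
    ("non-using agent", 3), ("non-using agent", 6), ("non-using agent", 9),
]
print(nps_by_group(responses))
```

Segmenting the responses by user group before aggregating is what surfaced the satisfaction gap - a single blended NPS would have hidden it.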
It was at this stage that the team decided to do additional qualitative research to explore the reason for the gap in satisfaction. Together with the PM, I structured a user research plan with the goal of understanding this gap better.
Insight #3: Non-using agents focus on leads, agent- and admin-users value impressions.
After conducting ten interviews with participants across our three user/customer groups, we identified a difference in expectations between them. It seemed that non-using agents perceived Spoke primarily as a lead generation tool and hence measured the value of the product by the number of leads they received. The two other user types, however, praised Spoke as a brand exposure tool and focussed less on leads and more on impressions.
There is likely a multitude of reasons for this discrepancy. One may be the lack of visibility non-using agents have over their campaign results. Spoke provides a detailed report for each campaign. However, this report sits within the product, and we have traditionally relied on users to forward its link to the non-using agent - something that we know, from this round of research as well as prior research, doesn't always happen. Hence, the only touch point non-using agents have with Spoke is... you guessed it... notifications of leads coming in through Spoke.
The above insights were eye-opening. We realised that the main decision makers when it comes to purchasing campaigns (i.e. non-using agents) were also the ones who saw the least value in the product. As Spoke relies on campaign purchases rather than sign-up fees for revenue, this group's satisfaction was the most vital for ongoing engagement and retention.
We knew there was more research to do to gain a deeper understanding of the issue. However, having identified the lack of visibility we also decided to prioritise an item in our backlog: email notifications. You can read more about this project here.