Agile Research Methods for Personalization
by Dave Ingram
This is part five of a five-part series.
While having a general framework for coming up with new personalization ideas can be extremely helpful, it will never replace rolling up your sleeves and doing the research and data analysis. Once your personalization program is off to the races, though, it’s time to move beyond initial ideas and pure instinct and take a data-driven approach. This can take many forms, from analytics and experimentation to more in-depth research methods such as user experience testing.
You may be thinking that this was the place to start rather than finish. However, research can be an enormous undertaking, and there’s something to be said about intuition, especially when just starting out.
There are many ways to divide the types of research available. One of them is to think about qualitative vs. quantitative research:
- Quantitative research: Looking at things that are clearly measurable and can be tracked over time. Both analytics and experimentation fall into this category.
- Qualitative research: Focusing on things that are harder to measure, but may provide even greater value. As an example, asking a customer to rate a product with 1-5 stars is quantitative, whereas asking open-ended questions about why a customer likes or dislikes a particular product is much harder to quantify, but can yield the researcher far greater insight.
Not all forms of research fall cleanly into qualitative or quantitative, but having a good idea of whether the research you’re doing leans towards qualitative or quantitative can help when forming hypotheses and working towards generalizable knowledge.
Analytics
The most common form of research into user behavior on the web is web analytics, using tools such as Google Analytics, Kissmetrics or Adobe Analytics. These tools provide dashboards that let you drill down into subsets or segments of the overall audience and understand how one acts differently from another. They provide an excellent starting place for generating ideas about how to personalize based on actual customer data, especially as you use the more advanced segmentation features and notice patterns of behavior across different segments.
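As a concrete illustration, the kind of segment comparison these dashboards surface can be sketched in a few lines of Python. The session records and segment names below are entirely hypothetical:

```python
# A minimal sketch of descriptive segment analysis. The session records
# and segment names are hypothetical stand-ins for an analytics export.
from collections import defaultdict

sessions = [
    {"segment": "returning", "converted": True},
    {"segment": "returning", "converted": False},
    {"segment": "new",       "converted": False},
    {"segment": "new",       "converted": False},
    {"segment": "new",       "converted": True},
]

totals = defaultdict(lambda: [0, 0])  # segment -> [conversions, sessions]
for s in sessions:
    totals[s["segment"]][0] += s["converted"]  # True counts as 1
    totals[s["segment"]][1] += 1

for segment, (conv, n) in totals.items():
    print(f"{segment}: {conv}/{n} = {conv / n:.0%} conversion")
```

A real analytics tool does this aggregation for you, of course; the point is that noticing a gap between segments like these is where personalization ideas come from.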
Analytics can also be thought of in several different categories, the simplest of which is called “descriptive” analytics. In descriptive analysis, you are simply looking at the past and seeing what happened. It can be especially tempting to use the trends you observe in these graphs to predict what will happen in the future, but this temptation should be avoided as much as possible, as there are better ways to accomplish this.
Other forms of analytics include predictive, which can be used “to make predictions about future or otherwise unknown events” and prescriptive, which “goes beyond predicting future outcomes by also suggesting actions to benefit from the predictions and showing the implications of each decision option”. These more advanced forms of analytics require more sophistication and often different tools and techniques.
With personalization, you are trying to understand how different offers or experiences will lead people to different outcomes. For this reason, it’s very important to involve your data science or business intelligence teams in personalization efforts wherever possible, or plan to build those functions if they don’t yet exist in your organization.
Experimentation
Running controlled experiments is the gold standard of research in the scientific world. A/B testing and multivariate testing--as experimentation is referred to in many marketing circles--don’t always receive the same level of rigor, but if you have an A/B testing tool, you possess the best tool available for determining cause and effect, which, again, is exactly what we’re trying to do with personalization.
Not all A/B tests are created equal. For meaningful results you’ll need to consider the design of the experiment and factors such as sample size and seasonality. You’ll also need to consider how your audience and your hypothesis fit together.
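To make the sample-size consideration concrete, here is a rough Python sketch using the standard normal-approximation formula for comparing two conversion rates. The baseline rate and lift are hypothetical inputs, and in practice a dedicated calculator or statistics library is preferable:

```python
# A rough sample-size sketch for a two-variant A/B test, using the
# standard normal-approximation formula for comparing two proportions.
# The baseline rate and lift below are hypothetical.
from statistics import NormalDist

def sample_size_per_variant(p_base, lift, alpha=0.05, power=0.8):
    """Visitors needed per variant to detect an absolute lift in conversion rate."""
    p_new = p_base + lift
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)  # two-sided significance threshold
    z_beta = NormalDist().inv_cdf(power)           # desired statistical power
    p_bar = (p_base + p_new) / 2
    numerator = (z_alpha * (2 * p_bar * (1 - p_bar)) ** 0.5
                 + z_beta * (p_base * (1 - p_base) + p_new * (1 - p_new)) ** 0.5) ** 2
    return int(numerator / lift ** 2) + 1

# Halving the lift you hope to detect roughly quadruples the traffic required:
print(sample_size_per_variant(0.05, 0.02))  # detect a move from 5% to 7%
print(sample_size_per_variant(0.05, 0.01))  # detect a move from 5% to 6%
```

The takeaway: the smaller the effect you want to detect, the longer the experiment has to run, which is why sample size belongs in the design conversation before the test starts.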
For example, if you are running an experiment with your entire audience, standard A/B test reports will only show you how an experience affects behavior on average across the entire population. By either running the experiment on a subset of the population (using personalization) or by drilling deeper into the reporting, you can discover the nuances of your audience and drive to better personalization.
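A small sketch of this kind of drill-down, with entirely hypothetical counts, shows how an aggregate report can hide a segment-level winner:

```python
# Hypothetical per-segment A/B results: the aggregate numbers can hide
# opposite effects in two segments (a form of Simpson's paradox).
results = {
    # segment: {variant: (conversions, visitors)}
    "mobile":  {"A": (30, 1000),  "B": (50, 1000)},
    "desktop": {"A": (120, 1000), "B": (90, 1000)},
}

for segment, variants in results.items():
    rate_a = variants["A"][0] / variants["A"][1]
    rate_b = variants["B"][0] / variants["B"][1]
    winner = "B" if rate_b > rate_a else "A"
    print(f"{segment}: A={rate_a:.1%} B={rate_b:.1%} -> {winner} wins")

# The aggregate view collapses both segments into a single comparison.
agg_a = sum(v["A"][0] for v in results.values()) / sum(v["A"][1] for v in results.values())
agg_b = sum(v["B"][0] for v in results.values()) / sum(v["B"][1] for v in results.values())
print(f"overall: A={agg_a:.1%} B={agg_b:.1%}")
```

With these made-up numbers, variant B loses on the combined figures yet clearly wins on mobile, which suggests a personalized rollout (B for mobile only) rather than an all-or-nothing decision.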
User Experience Testing
UX testing can span both qualitative and quantitative research. It generally involves a trained researcher and a relatively small number of participants who are asked to use a product or service while the researcher asks questions that are carefully formulated to avoid introducing bias. There may be other observers, but they often watch only by video or through a two-way mirror, also to avoid bias. A common result of user experience testing, even with a very small group of participants, is the identification of language or user interface patterns that don’t make sense to many people. These are often overlooked by the designers and implementers because they are so familiar with the interface.
The output of a user experience test is frequently a written report that will provide a large number of ideas for further experimentation and personalization. Keep in mind that just because several people identify a problem or make a suggestion, doesn’t mean that your entire audience will feel the same way. That’s why it continues to be critical to experiment with these new ideas and to refine and optimize the experience gradually.
Surveys
While user experience testing requires a lot of time and can be quite expensive, surveys are a great way to get quick feedback from a large number of people. A survey can be carried out by a 3rd-party organization, sent to your email list using Google Forms, SurveyMonkey or similar, or placed directly within your experience using a tool like Qualaroo.
Bear in mind that the design of surveys can also lead to a variety of biases such as response bias and many others. As with experimentation, thinking about the design of your surveys will help you to make the best use of your time and to reach valid conclusions. Running a controlled experiment based on the ideas that come from survey respondents will help even more to validate your findings.
Ethnographic Research
If surveys are a faster and simpler approach to research than user experience testing, then ethnographic research is at the other end of the spectrum entirely. Rather than having someone come to your office, or using software to observe them perform specific tasks for 30-60 minutes, ethnographic research involves a researcher traveling to where your customers live or work and observing them in their day-to-day routines. Rather than answering questions about how a specific page or user experience is working in certain scenarios, you’re observing who people really are and what they do with their time. This level of exploration can lead to insights about things that you would never think to ask about.
For example, Intel has used ethnographic research extensively to discover market trends and identify new business opportunities. When building digital experiences, this level of research can help to determine how your customers differ from one another and suggest further avenues of research--using analytics, experimentation and more--in which to invest time and resources.
Keeping Research Agile
Stick to short iterations and manageable tasks. Any single category of research above could easily be taken on as a one-year project by a large team. That would undoubtedly produce richer results, but it’s important to keep tasks small and manageable. If you’re going to embark on user experience testing, for example, try to break it down into small chunks that can be accomplished in short periods of time: start with some quick hallway usability testing, or go online with a service like usertesting.com.
In the first pass, you should be looking for early insights and low hanging fruit, not fully generalizable knowledge. Build on that knowledge with another round of more formal usability testing, or use the first insights to run a series of A/B tests which then roll back into more usability testing. The point is to keep these tasks small enough that you are making rapid changes to your experiences, testing out new personalizations, and continuously gathering feedback as you go.
Whatever system you decide on to manage your personalization backlog, you should track research work along with the technical work. This shared list of work helps the personalization owner prioritize which research work should be done when, and also allows a corpus of knowledge to be built up in a single place that shows others what was discovered, what was tried as a result, what was learned, and finally what was implemented. Many project management tools even allow these links to be made explicitly, so that you can go back and see the life span of any idea from generation through to completion.
Tracking within this system also ensures that you really are keeping research to manageable chunks, and keeps the researcher closely involved with the rest of the team, helping them to know what questions should be asked and to provide further suggestions to the implementation team.
Research is a critical piece of your personalization program. Whichever methods you use and however you build them into your team, keep in mind that research should be agile along with the rest of your program. Researchers and analysts should work closely with designers and implementers, ensuring that you keep your velocity up and make the best use of all of the learnings gathered through every iteration.