Let's suppose you've convinced your organization of the value of testing. Your marketing team has learned how to develop a proper research question; you now know how to design treatments and set up metrics to answer it. You even monitor for validity threats.
Congratulations! You've achieved a skill set that, frankly, many marketers do not have. (And it is evident in their low customer conversion rates.)
But it is at this level of achievement that organizations with a testing program in place often fall off the mountain. Data analysts neglect to learn how to accurately interpret the results of all that careful testing.
Often, we expect our test results to turn out a certain way, and this can make us unconsciously want a particular result, creating bias in our interpretation of the data. This costs us valuable time and money — and customers.
"We spend a lot of time in the planning phase and say 'if this metric is higher than that metric, then it means this.' So we try to eliminate that human bias beforehand through a lot of planning."
— Derrick Jackson, Director of Data Sciences, MECLABS Institute
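The pre-planned decision rule Jackson describes can be sketched in code. The example below is a minimal, hypothetical illustration (not MECLABS's actual framework): it fixes a significance threshold for a simple two-proportion z-test *before* any results are examined, so the interpretation is mechanical rather than a judgment call made after seeing the data. All function names and numbers are illustrative assumptions.

```python
import math

# Decision rule fixed BEFORE looking at results: declare a winner only if
# the absolute z statistic exceeds the critical value for alpha = 0.05.
# (Hypothetical threshold; choose yours during the planning phase.)
Z_CRITICAL = 1.96

def two_proportion_z(conv_a, n_a, conv_b, n_b):
    """Return the z statistic comparing conversion rates of B vs. A."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    return (p_b - p_a) / se

def interpret(conv_a, n_a, conv_b, n_b):
    """Apply the pre-planned rule; no post-hoc judgment calls."""
    z = two_proportion_z(conv_a, n_a, conv_b, n_b)
    if z > Z_CRITICAL:
        return "B wins"
    if z < -Z_CRITICAL:
        return "A wins"
    return "inconclusive"

# Example with made-up data: 200/5000 vs. 260/5000 conversions.
print(interpret(200, 5000, 260, 5000))  # prints "B wins"
```

Because the threshold and the interpretation rule are committed to in writing before the test runs, a result you were secretly rooting for gets the same verdict as one you weren't.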
Does your marketing team have a data interpretation framework in place to protect against bias?