I was tipped off to this paper by a recent post on the Freakonomics blog. The paper by Roland G. Fryer, Jr., looks at the impact of financial incentives on educational achievement in the U.S. While this topic is inherently interesting to me (much of my graduate work focused on inequalities in educational inputs and outcomes), it is not the typical fodder for this blog. However, there are two aspects of this paper that are relevant to the topics of customer experience measurement and creating a customer-centric culture.
- Incentives are generally more effective when tied to the inputs of a process rather than the outcomes.
- Randomized trials and field experiments are valuable and powerful research methods.
A few months ago I posted about how tying customer feedback scores to corporate incentive plans or compensation will usually corrupt the customer feedback program. I'm not alone in this conclusion. My original post was based in large part on a series of articles published in Quirk's Marketing Research Review. Jeffrey Henning wrote a post making a similar point, with much better examples.
The interesting tie-in with the educational research paper is Fryer’s finding that it’s more effective to tie incentives to inputs rather than outcomes of the educational process. Instead of giving students money for good grades or test scores, it’s more effective to give them money for reading books, turning in homework, or attending classes – the key antecedents to educational achievement.
This is similar to an idea we have pushed here at Walker over the last few years: Shift the focus from customer feedback scores to the actions that produce the scores – better customer management, customer-level follow-up, process improvement, etc. Inputs are harder to measure, manage, and reward, but they will likely get you better results. I would love to see a study like Fryer's conducted on corporate incentives.
The other thing I like about this study is its use of randomized trials and field experiments. These techniques are underutilized in business. We see them used in test markets for new products/services or in the use of pilot programs, but they can be used for so much more.
Anytime the impact of a proposed change is unknown or uncertain (when is it not?), and you need empirical evidence before fully implementing the change, you should consider a field experiment. Changes that affect customer sentiment almost always fall into this category.
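To make the idea concrete, here is a minimal sketch of how a field experiment like this might be analyzed. All the numbers are hypothetical assumptions for illustration: customers are randomly assigned to a control group (current process) or a treatment group (the proposed change), a binary outcome such as "reported a positive experience" is recorded, and a simple two-proportion z-test checks whether the observed lift is bigger than chance would explain.

```python
import math
import random

random.seed(42)  # reproducible simulation

def run_field_experiment(n=2000, p_control=0.30, p_treatment=0.36):
    """Simulate a randomized field experiment.

    Each group's outcomes are 1 (positive experience) or 0.
    The true rates (0.30 vs. 0.36) are made-up illustration values,
    not data from any real program.
    """
    control = [1 if random.random() < p_control else 0 for _ in range(n)]
    treatment = [1 if random.random() < p_treatment else 0 for _ in range(n)]
    return control, treatment

def two_proportion_z(control, treatment):
    """Two-proportion z-test: is the lift real, or just noise?"""
    n1, n2 = len(control), len(treatment)
    p1, p2 = sum(control) / n1, sum(treatment) / n2
    pooled = (sum(control) + sum(treatment)) / (n1 + n2)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    return p1, p2, (p2 - p1) / se

control, treatment = run_field_experiment()
p1, p2, z = two_proportion_z(control, treatment)
print(f"control rate: {p1:.3f}, treatment rate: {p2:.3f}, z = {z:.2f}")
# |z| > 1.96 suggests the change moved the outcome at the usual
# 95% confidence level, justifying a full rollout.
```

The key design choice is the random assignment itself: because customers are allocated to groups by chance rather than by segment or self-selection, the comparison isolates the effect of the change, which is exactly what correlation-based analysis of observational data struggles to do.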
There is a lot more to be said about the uses and benefits of experiments in customer experience management, so maybe we’ll tackle that in a future post. In the meantime, you can read this article on the topic by Tom Davenport in the February 2009 issue of HBR. It’s definitely worth reading for anyone interested in evidence-based management.
UPDATE: Marketing Research magazine has published a very interesting debate between Larry Gibson and Frank Buckler on the topic of causal analysis in marketing research. Mr. Gibson argues that controlled experiments are the only way to establish true causality while Dr. Buckler argues that correlation-based causal analysis is a good-enough substitute when experiments are impossible or prohibitively expensive. This is much more than an academic debate since millions of dollars can be gained or lost based on the decisions being informed by these results. I encourage you to check out the debate.
Troy Powell, Ph.D.
VP, Statistical Solutions