In my last two posts, I described some different benchmarking options we have when analyzing customer loyalty data. One question remains – how should we interpret benchmark data?
We advise that the responsible use of any benchmark – whether an industry benchmark or an internal survey benchmark – requires doing our homework so that we clearly understand the source, method, and limitations of any data we may wish to use.
The organization Public Agenda has a good article on twenty questions public policymakers should ask when evaluating research put forth in support of the policymaking process; those same questions work equally well when we consider benchmarks. I have adapted a number of their points into some general guidelines you should consider when evaluating benchmarks (they are also useful when evaluating a study in general):
Sponsorship Issues – Make certain you know who is behind the benchmark data and what the possible motivation is for funding and releasing the data.
Sampling Issues – Understanding the sampling frame is perhaps the most critical requirement for adequately assessing the utility of any survey. Unless we know how the respondents were selected (and from what source), we cannot adequately assess the limitations of the survey. Don’t underestimate the power of bad sampling – remember the 1936 Presidential election poll that predicted Alf Landon would beat Franklin Roosevelt? That was the Literary Digest’s straw poll, drawn from a badly skewed sampling frame, which proved far less reliable than the newer quota sampling approaches of the day.
Some specific sampling questions you should ask include:
- How many people were interviewed?
- What is the sampling error?
- How were the respondents chosen? Was the selection random?
- What population(s) do the respondents represent?
- What was the survey’s response rate?
- What level of experience does the respondent have with the product or service?
- Do the respondents multi-source for the product or service in question?
- Are there other customer constituencies who may have insight and/or influence that were not included in the study? Why were they omitted?
- Does the study distinguish the results of B2B customers from B2C customers?
- Are the data weighted? How was the weighting determined?
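To make the sampling-error question above concrete, here is a minimal sketch of how the margin of error for a survey proportion relates to sample size, assuming simple random sampling (real benchmark studies often use more complex designs, which widen these intervals):

```python
import math

def margin_of_error(n, p=0.5, z=1.96):
    """Approximate margin of error for a surveyed proportion under
    simple random sampling. z = 1.96 corresponds to 95% confidence;
    p = 0.5 gives the most conservative (widest) estimate."""
    return z * math.sqrt(p * (1 - p) / n)

# Sample sizes of the scale often seen in benchmark studies
for n in (100, 400, 1000):
    print(f"n = {n:4d}: +/- {margin_of_error(n):.1%}")
# n =  100: +/- 9.8%
# n =  400: +/- 4.9%
# n = 1000: +/- 3.1%
```

The practical point: a benchmark built on 100 respondents carries roughly a ±10-point margin, so small gaps between your score and the benchmark may be statistical noise rather than a real difference.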
Survey Method Issues – The method of data gathering matters, and while no method is perfect, some are better than others. Even the simple matter of when the survey was conducted can have a dramatic impact; in 1948, pollsters called the Presidential election in favor of Thomas Dewey, based in large part on polls that were concluded weeks before the election. Public sentiment was volatile, and incumbent Harry Truman won by roughly 4.5 points.
Survey Content Issues – The questions we ask – and how we ask them – have a considerable influence on the survey results. Leading wording that can bias the respondent’s answer (e.g., “Would you agree with most industry experts that…”), illogical question flow, and response categories that are not mutually exclusive are all examples of poor survey design that can skew the results we see.
Comparability Issues – Finally, once you have examined the mechanics of how the benchmark was created, you should ask one more question – does the information from this benchmark align with what we have seen from other studies? While the methods will differ, it is common to see some level of convergence in the tone and direction of the results. If the results are dramatically different, it would be wise to re-examine the study (using the points cited above) to determine what might explain the differences.
If you see disparity across a number of different sources of benchmark data, that disparity itself can tell us something about the nature of customer sentiment. For example, there may be some unaddressed customer need that explains it, and discerning what customers are really saying could provide insight that creates a strategic competitive advantage for your organization.
So, the key takeaways from this series are:
1) Benchmarks are enormously valuable – they provide context that helps us in the interpretation of data;
2) Any benchmarking method has limitations – the key is to understand the business issue we are addressing before we begin the survey design phase, so that we maximize the utility of the information while minimizing the caveats and limitations we must place around the findings;
3) It is easy to accept benchmarks at face value – it is incumbent upon us as information users to do the necessary homework (and ask the probing questions) to ensure that we can support and defend any interpretation of our data.
I hope this series has been helpful – I would welcome any questions you might have, or comments on how your organization utilizes benchmarks.
Sr. Vice President, Consulting Services and Resource Management