
Benchmarking Customer Benchmarks, Part 3: Evaluating Benchmarks (and Surveys in General)

In my last two posts, I described some different benchmarking options we have when analyzing customer loyalty data. One question remains – how should we interpret benchmark data?

We advise that the responsible use of any benchmark – whether an industry benchmark or an internal survey benchmark – requires doing our homework so that we clearly understand the source, method, and limitations of any data we may wish to use.

The organization Public Agenda has a good article on twenty questions public policymakers should ask when evaluating research put forth as support in the policymaking process; the questions work equally well when we consider benchmarks. The full article can be viewed here. I have adapted a number of their points into some general guidelines you should consider when evaluating benchmarks (they are also useful when evaluating any study):

Sponsorship Issues – Make certain you know who is behind the benchmark data and what the possible motivation is for funding and releasing the data.

 

Sampling Issues – Understanding the sampling frame is perhaps the most critical requirement for adequately assessing the utility of any survey. Unless we know how the respondents were selected (and from what source), we cannot adequately assess the limitations of the survey. Don’t underestimate the power of bad sampling – remember the 1936 Presidential election poll that predicted Alf Landon would beat Franklin Roosevelt? That forecast came from a massive straw poll whose sampling frame skewed heavily toward wealthier voters, and it proved far less reliable than the newer, much smaller quota-sampling approaches.

Some specific sampling questions you should ask include (a quick calculation sketch for questions 2 and 5 follows the list):

 

  1. How many people were interviewed?
  2. What is the sampling error?
  3. How were the respondents chosen? Was the selection random?
  4. What population(s) do the respondents represent?
  5. What was the survey’s response rate?
  6. What level of experience does the respondent have with the product or service?
  7. Do the respondents multi-source for the product or service in question?
  8. Are there other customer constituencies who may have insight and/or influence that were not included in the study? Why were they omitted?
  9. Does the study distinguish the results of B2B customers from B2C customers?
  10. Are the data weighted? How was the weighting determined?
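
To make questions 2 and 5 concrete, here is a minimal sketch in Python – with purely illustrative numbers I have chosen for this example, not figures from any real benchmark study – of how the margin of error for a proportion and a simple response rate might be calculated:

```python
import math

def margin_of_error(p: float, n: int, z: float = 1.96) -> float:
    """Approximate margin of error for a proportion at a 95% confidence
    level (z = 1.96), assuming a simple random sample."""
    return z * math.sqrt(p * (1 - p) / n)

def response_rate(completed: int, invited: int) -> float:
    """Simple response rate: completed interviews divided by invitations sent."""
    return completed / invited

# Illustrative numbers only -- not taken from any real benchmark study.
n_completed = 400   # completed interviews
n_invited = 2000    # customers invited to participate
top_box = 0.62      # share giving a "top box" loyalty rating

print(f"Response rate: {response_rate(n_completed, n_invited):.1%}")
print(f"Margin of error: +/- {margin_of_error(top_box, n_completed):.1%}")
```

A benchmark reported without at least this much context – sample size, margin of error, and response rate – is very difficult to defend.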


Survey Method Issues – The method of data gathering matters, and while no method is perfect, some are better than others. Even the simple matter of when the survey was conducted can have a dramatic impact; in 1948, pollsters called the Presidential election in favor of Thomas Dewey, based in large part on polls that were concluded one week prior to the election. Public sentiment was volatile, and incumbent Harry Truman won by about 4.5 points.

 

Survey Content Issues – The questions we ask – and how we ask them – have a considerable influence on the survey results. Wording that can bias the respondent’s answer (e.g., “Would you agree with most industry experts that…”), illogical question flow, and response categories that are not mutually exclusive are all examples of poor survey design that can skew the results we see.

Comparability Issues – Finally, once you have examined the mechanics of how the benchmark was created, you should ask one more question – does the information from this benchmark align with what we have seen from other studies? While the methods will differ, it is common to see some level of convergence in the tone and direction of the results. If the results are dramatically different, it would be wise to re-examine the study (using the points cited above) to determine what might explain the differences.
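
As a rough screen for convergence (a sketch only, not a formal statistical test), one simple approach is to ask whether two benchmark estimates differ by more than their combined margins of error; the scores and sample sizes below are hypothetical:

```python
import math

def moe(p: float, n: int, z: float = 1.96) -> float:
    """Approximate 95% margin of error for a proportion."""
    return z * math.sqrt(p * (1 - p) / n)

def roughly_consistent(p1: float, n1: int, p2: float, n2: int) -> bool:
    """Treat two benchmark proportions as roughly consistent if the gap
    between them is smaller than their combined margins of error."""
    return abs(p1 - p2) <= moe(p1, n1) + moe(p2, n2)

# Hypothetical results from two different benchmark sources.
industry_score, industry_n = 0.71, 1200   # industry benchmark
internal_score, internal_n = 0.64, 350    # internal survey benchmark

if roughly_consistent(industry_score, industry_n, internal_score, internal_n):
    print("Results broadly converge; the difference is within sampling noise.")
else:
    print("Results diverge; revisit sponsorship, sampling, method, and content.")
```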

If you see disparity across a number of different benchmark sources, that in itself can tell you something about the nature of customer sentiment. For example, there could be an unaddressed need among your customers that explains the disparity, and discerning what customers are really saying could provide insight that creates a strategic competitive advantage for your organization.

So, the key takeaways from this series are:

1) Benchmarks are enormously valuable – they provide context that helps us interpret our data;

2) Any benchmarking method has limitations – the key is to understand the business issue we are addressing before we begin survey design, so that we maximize the utility of the information while minimizing the caveats and limitations we must place around the findings;

3) It is easy to accept benchmarks at face value alone – it is incumbent upon us as information users to do the necessary homework (and ask the probing questions) to ensure that we can support and defend any interpretation of our data.

I hope this series has been helpful – I would welcome any questions you might have, or comments on how your organization utilizes benchmarks.

Mark Ratekin

Sr. Vice President, Consulting Services and Resource Management
