Walker Information
Helping you put the customer at the heart of every decision.

Category: CX Customer Strategy

The Domino Effect

If you’re anything like me, you can probably count on one hand the number of times you’ve had Domino’s Pizza. In the hierarchy of chain pizza restaurants, Domino’s was always about 34th on my list of options (even beneath places I have since found out are no longer in business). But recently I noticed that Domino’s has taken a bold, creative approach to addressing its customer feedback, feedback that mirrors the perception I had whenever Domino’s was mentioned.

If you haven’t seen the commercials (if you’re a football fan, don’t worry, you will soon), Domino’s basically lets us behind the curtain to see and hear the nature of the feedback they’ve received. Things like ‘the crust tastes like cardboard’ and ‘the sauce tastes like catsup’ were continually mentioned in customer focus groups and on social media channels like Twitter and Facebook.

However, instead of trying to divert attention to other aspects that don’t matter as much (like they used to do with the "30 minutes or it’s free" guarantee), they’ve now addressed this feedback head-on and have totally re-created their pizza, as well as their approach toward customer feedback.

You can learn more in a pretty candid take in the video below.


While it remains to be seen how effective this approach will ultimately be (I’ve yet to try their new pizza, but I have to say I’m at least a little intrigued now), I believe that Domino’s should at least be commended for actually listening to what their customers are saying, doing something about it, and crediting their changes to their desire to be customer focused.


Brad Harmon
Vice President, Consulting Services

Leslie Pagel

An experiment in survey design

Over the past six months several Walker colleagues and I have been working on an experiment. The experiment focuses on using an alternative analytical approach to deriving insights from customer feedback. And in order to apply a different analytical approach, we had to redesign the survey instrument.

Three Goals

1 – Test something new on behalf of Walker clients and other customer strategists.
2 – Provide our clients with something they can use to improve their customer feedback program.
3 – Generate richer insights that will help Walker bring increased value to all of our client relationships.

Walker recently completed its annual relationship survey, which served as the testing ground for this new concept. And while the jury is still out on whether our experiment accomplished its goals, I thought it might be helpful to document the concept and the design implications.

The Previous Design

In the past, our relationship assessment included a series of questions, where respondents were asked to evaluate our performance on a scale of Excellent-to-Poor. With the feedback we collected, we used multiple regression models to identify the areas that have a significant impact on our customers’ relationship with Walker. This information was used to prioritize initiatives at both the customer level and the corporate level.
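For readers curious what that kind of driver analysis looks like mechanically, here is a minimal sketch. The data is synthetic and the attribute names are illustrative assumptions, not Walker’s actual survey items or methodology; the point is only the mechanics of fitting a multiple regression and reading the coefficients as impact.

```python
import numpy as np

# Illustrative only: synthetic data standing in for real survey responses.
# Each attribute is rated on a 5-point scale (5 = Excellent, 1 = Poor).
rng = np.random.default_rng(42)
n = 200
attributes = ["communications", "reporting", "account_team", "analysis"]
X = rng.integers(1, 6, size=(n, len(attributes))).astype(float)

# Simulate an overall relationship rating driven mostly by two attributes.
true_weights = np.array([0.5, 0.1, 0.3, 0.1])
overall = X @ true_weights + rng.normal(0, 0.5, size=n)

# Multiple regression: fit weights via ordinary least squares.
X1 = np.column_stack([np.ones(n), X])  # prepend an intercept column
coefs, *_ = np.linalg.lstsq(X1, overall, rcond=None)

# The largest coefficients flag the attributes with the most impact on the
# overall relationship -- the "drivers" to prioritize for improvement.
for name, w in sorted(zip(attributes, coefs[1:]), key=lambda t: -t[1]):
    print(f"{name:15s} {w:+.2f}")
```

Running this recovers the simulated weights closely, ranking "communications" and "account_team" as the top drivers; in practice the same reading of the coefficients is what turns survey data into a priority list.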

The Experiment Design Concept

The survey design for this experiment includes a rather different approach. Instead of asking customers to evaluate Walker’s performance, we asked them to evaluate particular aspects of the solution using the same Excellent-to-Poor scale (e.g., Communications with customers). We followed up with a series of questions where the respondents were asked to pick the characteristics that describe that area (e.g., frequency of communications, method they use to communicate, who sends the communication, etc.).

To be clear, these follow-up questions were not an evaluation of the area (as they have been in the past). Instead they were pick lists that included characteristics of that specific solution component.

Looking Forward

With this new approach, we plan to use an analytical technique that will profile various aspects of a customer feedback program. This approach will provide our clients and Walker a view of what makes up an excellent customer feedback solution.

Although we are still in the process of analyzing the results, the team has already learned quite a bit. For example, while this design doesn’t include more questions, the types of questions require more thought and time to complete. Stay tuned for an upcoming blog about lessons learned from a long survey.

While the experiment is not perfect, I remain excited about the insights it will bring to our clients. I look forward to sharing more with you too.

Photo credit: The Lab Depot, Inc.

Note: This post was originally published in Customer Connection on 12/10/2009.

Response Rate: A Psychological Assessment: Sitting on the Behaviorists’ Couch

In previous blogs, we have reviewed findings to date regarding which specific elements of the survey process have been shown to influence response rate. If we take a step back from the specifics, at the heart of the response rate issue is respondent motivation: how do we get respondents to want to take a survey? In a recent article, “Understanding Respondent Motivation,” Pete Cape provides a historical review of how psychological theory on human motivation can help us tap into what motivates survey respondents. This week, I will start with his review of the Behaviorists, relating how behaviorist theory can provide insight into the response rate issue.

Behaviorist Theory in a Nutshell

  • The behaviorists provided the concepts of reinforcement, punishment, and extinction.

      ◦ Reinforcement concerns strengthening a desired behavior as the result of either experiencing a positive condition (Positive Reinforcement) or eliminating a negative condition (Negative Reinforcement).

      ◦ Punishment concerns weakening a behavior as a result of experiencing an aversive condition (Positive Punishment) or removing a positive condition (Negative Punishment).

      ◦ Extinction occurs when reinforcement stops and the previously reinforced behavior fades away.

  • A primary criticism of behaviorist theory is that it provides a mechanistic view of human behavior, which is limited because the motivation behind human behavior is complex.


How does Behaviorist Theory Aid Understanding of Response Rate?


Sometimes you can learn the most from the criticisms of a theory. Respondents are unlikely to be sustainably motivated to complete a survey by any single improvement, or by the removal of any single negative component, at one point in time. For example, providing a deadline by which to complete the survey has been shown to significantly impact response rate; however, a deadline alone is not likely to sustain response rate, because many other factors are also influential. Instead of focusing on any one factor, consider the entire survey process:

  • Remember that the process starts before the respondent chooses or declines to take the survey: creating a survey that is easy for the respondent to take, pre-communication, personalizing communication, scheduling reminders, etc.

  • And the process continues after the respondent completes the survey: communicating and acting upon findings, showing and proving that the feedback matters.


Next week, we will start to examine psychological theories that assume a more humanized-approach, exploring how understanding human needs can help us tackle declining response rates.


Amy Heleine

Director, Marketing Sciences



Cape, P., “Understanding Respondent Motivation,” Survey Sampling International White Paper, pp. 1–17.

Jeff Marr

Stalkers aren’t all bad

Are stalkers ever welcome? Well…

At home, I am constantly stalked by my yellow Lab, Teddy. He’s one of those canines wanting to be near me at every chance, to the point of being really annoying.

Being stalked never sounds like fun, especially if a human is the stalker. But let me offer an exception of sorts. Leaders definitely want to be followed. When you try to influence someone, it sure helps when they agree to cooperate.

Strategic Account Managers are by definition leaders in companies they represent. They are assigned key responsibilities and have access to considerable resources. But SAMs must often lead from the classic position of influencing without direct authority.


So how do SAMs influence coworkers, partners, and of course, customers? I think it starts by knowing what you are up against. This influence assessment tool by leadership consultant Jim Clemmer seems helpful to me in sizing that up. On a five-point scale where 1 is extremely weak, 2 is fairly weak, 3 is moderate, 4 is fairly strong, and 5 is extremely strong, score yourself on each attribute for a situation you face:

  • my clarity around what a successful outcome would look like
  • my understanding of their position and win (how they’ll benefit?)
  • my persuasion and communication skills
  • my timing and the fit of my proposed action with the situation
  • my tone and approach (will I increase or decrease defensiveness and conflict?)
  • my genuine desire for a win/win outcome
  • my credibility with this person or group
  • my passion and commitment (including persistence)
  • our levels of mutual trust
  • the strength of our relationship
  • how well I’ve covered the bases with other key influencers and built their support
  • my appointed role, position, and authority

Now add up your scores across all twelve items. Scoring 45 points or higher means you’re in a strong position to influence that person or group in that situation. A score of 25–44 is less strong, and you might want to wait for a better time or strengthen a few of your lowest areas (which may take some time and hard work). If you score 24 points or lower, your ability to influence is very low. Increasing your leadership in that situation will clearly take some work and time.
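The scoring rule above is simple enough to write out directly. Only the thresholds come from Clemmer’s assessment; the function name and bucket wording here are my own.

```python
def influence_position(scores):
    """Sum twelve 1-5 attribute scores and bucket the total by threshold."""
    if len(scores) != 12 or not all(1 <= s <= 5 for s in scores):
        raise ValueError("expected twelve scores, each from 1 to 5")
    total = sum(scores)
    if total >= 45:
        return total, "strong position to influence"
    if total >= 25:
        return total, "less strong: wait, or shore up your weakest areas"
    return total, "very low: building influence will take time and work"

# Example: a mix of fairly strong items with a few weak spots.
print(influence_position([4, 4, 3, 4, 2, 4, 3, 4, 4, 3, 2, 4]))
# A total of 41 lands in the middle, "less strong" band.
```

Note that even all-moderate answers (twelve 3s, total 36) land in the middle band, which matches the tool’s point: influence usually needs deliberate strengthening, not just the absence of weaknesses.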

The criteria made two things clear to me — that influencing skills must be developed, and that relationships and trust must be built with those we are trying to reach. As Socrates said, "Let him that would move the world, first move himself".


Is There Value in Customer Follow-Up?

Seems like a no-brainer, right? However, make sure this message is being communicated to account teams as a way to encourage engagement in close-the-loop actions. Before training them on the process and tools they will use to follow up on issues and document action plans, make sure to share the benefits they can expect from the process:

  • Additional revenue opportunities through saved or enhanced relationships: When available, use real-life stories from the field about additional opportunities uncovered through the close-the-loop process.

  • Strengthening of customer relationships: This comes from demonstrating to customers that their feedback is of value to your organization and that you have a desire to improve their experiences.

  • Increased survey response rates: Customers are more likely to participate in the feedback process in the future if they are confident that their time is well spent and their input will be used by your organization.

Once account teams have this foundation and belief in the customer feedback process, they will be more likely to actively engage in close-the-loop activities.

Melissa Meier
Vice President, Client Service

Best practice for avoiding stomach cramps…

I remember being told as a child that swimming immediately after eating would cause stomach cramps. I believe the stated rule was that one hour after you ate, you could safely return to the water without fear of cramps (regardless of how much you ate, it was always one hour). As a six-year-old, I don’t think I had any idea what a cramp was, but I was sure I didn’t want one. I had a visual in my mind of ending up in a hospital bed with tubes coming out of me, lying there looking deep into my mom’s eyes, wondering why I had lived life on the ragged edge by swimming shortly after eating instead of allowing the proper digestive magic to take place. So I have taken from this that it is a “best practice” to relax and let your food digest before doing a cannonball off the diving board. Is that a good definition of a “best practice”?

According to Wikipedia, a best practice is the belief that there is a technique, method, process, activity, incentive, or reward that is more effective at delivering a particular outcome than any other. The idea is that with proper processes, checks, and testing, a desired outcome can be delivered with fewer problems and unforeseen complications. Best practices can also be defined as the most efficient (least amount of effort) and effective (best results) way of accomplishing a task, based on repeatable procedures that have proven themselves over time for large numbers of people.

So how do you know that what you are looking at is really a "best practice"? Because someone on a blog said so? Because of the marketing scripts that surround the idea? Because you read about it in a magazine, which automatically makes it valid? Because your buddy is doing it at his company and told you it was a "best practice"?

I bring this up because of a recent conversation I had with a colleague. He was working with a client who stated they ran their organization based on best practices, yet they had been burned on more than one occasion by adopting so-called “best practices” that were in fact unproven methods. This is what happens when you act on something packaged as a “best practice” but find, when you peel back the layers, that its validity is in question.

With the speed at which information circulates, and with online social media creating constant discussion of “best practices,” everyone’s opinion of how they happen to do something tends to get communicated as a “best practice.” I caution you to tread lightly here. Some so-called “best practices” would be better packaged as “this is what we are doing at this moment, and we are having a little bit of luck with it.”

Are you challenging your organization in its ability to implement validated and verified “best practices”? I’m not talking about the latest and greatest buzz, but proven methods that align with the strategic direction of your organization. In the context of your customers, are you using proven methods of engagement to understand the experience they are having with you? Are you able to accurately diagnose where the value lies in this wealth of information? Are you taking proven action steps? Are you aligning your “best practices” with your overall business objectives?

I challenge you to analyze your customer experience program and really understand the areas that are working and the areas that need work. Then look into validated, proven methods that can address those issues. You are now free to get back in the pool.

Michael Good
Vice President