A recent TED Talk by Shyam Sankar (also the subject of this recent blog post by Leslie Pagel) argues persuasively that man-machine cooperation is the real story of technological development. In a 2010 article, chess grandmaster Garry Kasparov, who famously played IBM's Deep Blue in the 1990s, relays a wonderful anecdote illustrating the power of this symbiosis between humans and computers. In 2005, a freestyle chess tournament was held in which humans and computers could work together or play separately. By that point, a chess program running on a standard laptop could routinely beat many grandmasters. Yet this freestyle tournament produced two interesting and relevant results:
- a strong human player with a weak laptop soundly defeated even the best stand-alone chess computer, and
- the overall winner of the tournament was not a grandmaster with a powerful computer but a pair of amateur chess players using three relatively weak laptops.
According to Kasparov's analysis, the amateurs' winning edge was a superior process for coordinating their humans and computers, which effectively counteracted the superior chess knowledge and greater computational power of their opponents.
From this story, both Kasparov and Sankar conclude that the decisive factor in the analytic capability of any human-computer combination is the friction between the humans and the computers. By designing a better interface that reduces that friction, you increase the analytic capability derived from the same human and computer at an ever-increasing (convex) rate.
While I completely agree that designing friction out of the interface is a decisive factor, I think there is one critical element behind the success of the human-computer team that both Kasparov and Sankar overlook: the rules of the game favored a cooperative strategy. Freestyle chess requires players to make moves much faster than in regular chess. Under that time pressure, the ability to have a computer crunch the data and hand the human player a short list of candidate moves is a critical advantage. Given more time between moves, the experience, knowledge, and creativity of a chess grandmaster begin to override the speed, computational power, and efficiency of a chess program.
These two elements of success in chess map directly onto customer predictive analytics and customer-focused decision-making.
- First, useful predictive analytics require the smooth interaction of a skilled data scientist with a powerful, but usable, analytic workbench and process. It is not uncommon for an analyst to spend only 10%-40% of their project time actually running analysis and extracting usable insights; the rest goes to transforming and merging data or getting outputs into usable, shareable formats. We need more work on reducing the friction between the analyst and the analytic tools if we are to realize the full benefit of predictive analytics.
- Second, data scientists need to fully understand the context they are working in and the rules that apply to it, so they can apply predictive analytics in the right place and with the right approach. For instance, a predictive analytics project within a strategic accounts group, with a small set of accounts and deeply embedded account managers, will not produce as much return as a similar exercise in an outbound call center, where fairly inexperienced sales reps are each responsible for hundreds of accounts.
The future of business will rely on powerful, computer-based analytics that interact easily with (rather than supplant) skilled data scientists who bring both analytic and business-context knowledge into the process.