The Idea in Brief
Business faces a dilemma today: Though our economy depends increasingly on services, innovation processes remain oriented toward products. This isn’t surprising: How do you apply formal R&D to services—where real customers engage in real transactions in real time? And ensure that failed experiments don’t harm your customer relationships and brand?
Impossible? No: One large service company—Bank of America—runs formal experiments to create new service concepts for retail banking. Seeking to grow revenue and customer satisfaction, it turned several branches into “laboratories.” At these branches, Innovation & Development (I&D) team members conduct experiments with actual customers during regular business hours—pinpointing innovations for broader rollout.
The program has generated surges of fresh thinking, improved customer satisfaction, attracted new customers, and deepened the company’s understanding of service development. The payoff? A crucial edge over less adventurous competitors.
The Idea in Practice
To launch service-innovation experiments, consider Bank of America’s process:
1. Conceive, assess, and prioritize experiment suggestions. Example:
Drawing on customer-satisfaction studies and other market research, I&D and branch staff submitted experiment ideas, then prioritized them based on impact on customers and fit with the bank’s strategy and funding requirements. Of 200 ideas, 40 became formal experiments—e.g., testing whether TV monitors reduced teller customers’ perceived wait time. (A simple scoring sketch of this prioritization appears after this list.)
2. Plan and design. Flesh out selected ideas. Resolve experiment problems without customers before testing in a live environment. Example:
The I&D team created a prototype branch where members could rehearse the physical steps involved in an experiment—and correct problems—before implementing the service idea with customers.
3. Implement. Maximize learning by conducting experiments in ways that ensure results’ reliability and accuracy. Example:
To temper the effect of noise (variables other than those being tested), the I&D team repeated the same experiment in the same branch and in different branches, and also established a control branch for each experiment. For instance, to test new account-transfer software, it installed the technology at one center but not at another, similar center. Example:
To mitigate the Hawthorne effect—people behaving differently when they know they’re being watched—the bank instituted “washout periods.” It waited a week or two before measuring experimental results, so novelty effects among staff could pass. (A test-versus-control sketch with a washout period appears after this list.)
4. Test. We learn best by receiving immediate feedback on our actions’ results. But assessing results’ accuracy takes time. Balance speed with reliability in providing feedback. Example:
The I&D team ran each experiment for 90 days before adjusting or discontinuing it based on results. Members believed three months provided enough time to gain reliable measures without unduly delaying modifications. They also made exceptions, revamping one mortgage-loan experiment after 30 days because getting credit approvals was taking too long.
5. Recommend. Decide if experiments warrant broader rollout. Example:
Analyzing performance data from test locations and control branches, the bank determined which experiments had enhanced customer satisfaction, revenue generation, and productivity. Then it performed cost-benefit analyses to ascertain whether the performance gain outweighed the expense required to introduce the new process nationally. Of 40 experiments, 20 were recommended for rollout. (A back-of-the-envelope version of this cost-benefit test appears after this list.)
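To make the prioritization in step 1 concrete, here is a minimal sketch, in Python, of one way such a ranking could be scored. The criteria come from the article (customer impact, strategic fit, funding requirements), but the weights, the ratings, and the 1-to-10 scale are hypothetical; the article does not describe the bank’s actual scoring mechanics.

```python
# Hypothetical weighted scoring for prioritizing experiment ideas.
# Weights and ratings are invented for illustration; the article only
# says ideas were ranked on customer impact, strategic fit, and funding.
WEIGHTS = {"customer_impact": 0.5, "strategic_fit": 0.3, "funding_fit": 0.2}

def priority(idea):
    """Weighted sum of 1-10 ratings across the three criteria."""
    return sum(WEIGHTS[key] * idea[key] for key in WEIGHTS)

ideas = [
    {"name": "TV monitors in teller lines",
     "customer_impact": 8, "strategic_fit": 7, "funding_fit": 6},
    {"name": "Account-transfer software",
     "customer_impact": 6, "strategic_fit": 9, "funding_fit": 5},
]

# Rank candidates; the top slice becomes formal experiments.
for idea in sorted(ideas, key=priority, reverse=True):
    print(f"{idea['name']}: {priority(idea):.1f}")
```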
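Steps 3 and 4 amount to a controlled comparison: measure a test branch against a control branch over a fixed run, after discarding a washout period so novelty effects don’t contaminate the results. The sketch below illustrates that logic; the two-week washout and 90-day window echo the article, but the daily satisfaction scores and the branches are invented for illustration, not Bank of America data.

```python
from statistics import mean

WASHOUT_DAYS = 14  # "a week or two" of novelty effects, excluded from measurement
TEST_DAYS = 90     # full run before adjusting or discontinuing an experiment

def steady_state(scores, washout=WASHOUT_DAYS):
    """Drop the washout period so novelty effects among staff
    don't contaminate the measurement."""
    return scores[washout:]

def estimated_lift(test_scores, control_scores):
    """Effect of the experiment, estimated as the test branch's
    steady-state average minus the control branch's."""
    return mean(steady_state(test_scores)) - mean(steady_state(control_scores))

# Hypothetical daily customer-satisfaction scores (0-100) for one
# 90-day experiment, e.g., TV monitors installed in the test branch only.
test_branch = [72 + 0.05 * day for day in range(TEST_DAYS)]
control_branch = [71 + 0.01 * day for day in range(TEST_DAYS)]

print(f"Estimated lift vs. control: "
      f"{estimated_lift(test_branch, control_branch):+.1f} points")
```

Repeating the same comparison in the same branch and across several branch pairs, as the I&D team did, averages out branch-specific noise that a single pairing would miss.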
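Step 5’s go/no-go decision can likewise be expressed as a back-of-the-envelope calculation: recommend national rollout only if the projected gain outweighs the rollout expense. The sketch below assumes a dollar value per satisfaction point per branch per year; that parameter, the branch count, the cost, and the planning horizon are all hypothetical.

```python
def rollout_recommended(lift, branches, rollout_cost,
                        value_per_point=1_000.0, horizon_years=3):
    """Recommend national rollout only if the projected gain over the
    planning horizon outweighs the cost of introducing the new process.
    value_per_point: assumed annual revenue value, per branch, of one
    point of satisfaction lift (hypothetical)."""
    projected_gain = lift * branches * value_per_point * horizon_years
    return projected_gain > rollout_cost, projected_gain

# Hypothetical inputs: a 3.0-point lift, 4,500 branches, a $10M rollout.
go, gain = rollout_recommended(lift=3.0, branches=4_500, rollout_cost=10_000_000)
print(f"Projected gain: ${gain:,.0f} -> "
      f"{'recommend rollout' if go else 'do not roll out'}")
```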
At the heart of business today lies a dilemma: Our economy is increasingly dependent on services, yet our innovation processes remain oriented toward products. We have well-tested, scientific methods for developing and refining manufactured goods—methods that date back to the industrial laboratories of Thomas Edison—but many of them don’t seem applicable to the world of services. Companies looking for breakthroughs in service development tend to fall back on informal and largely haphazard efforts, from brainstorming to trial and error to innovation teams. Such programs can produce occasional successes, but they offer little opportunity for the kind of systematic learning required to strengthen the consistency and productivity of service development—and innovation in general—over time.