Are Customers Lying to Your Chatbot?

Imagine you just placed an online order from Amazon. What’s to stop you from claiming that the delivery never arrived and asking for a refund, even if it actually arrived as promised? Or say you just bought a new phone and immediately dropped it, cracking the screen. You submit a replacement request, and the automated system asks whether the product arrived broken or the damage is your fault. What do you say?
Automated customer service systems that rely on tools such as online forms, chatbots, and other digital interfaces have become increasingly common across a wide range of industries. These tools offer many benefits to both companies and their customers, but new research suggests they can also come at a cost: In two simple experiments, researchers found that people are more than twice as likely to lie when interacting with a digital system as when talking to a human. This is because one of the main psychological forces that encourages honesty is an intrinsic desire to protect our reputations, and interacting with a machine simply poses less of a reputational risk than talking with a real person.

The good news is that the researchers also found that customers who are more likely to cheat will often choose a digital (rather than human) communication channel, giving companies an avenue for identifying higher-risk users. Of course, there’s no eliminating digital dishonesty. But with a better understanding of the psychology that makes people more or less likely to lie, organizations can build systems that discourage fraud, identify likely cases of cheating, and proactively nudge people to be more honest.