For years, retailers have worked to mitigate the effects of inherent bias and unintentional discrimination in their physical shopping experiences. And while no one claims the problem is completely solved, many retailers now take steps to ensure that customers are not profiled by how they look, who they are with, or how they dress or shop when they walk into a store.
But with shopping becoming an increasingly digital experience, retailers face a new and perhaps less familiar challenge: digital bias. Instead of fighting prejudice or unconscious bias among frontline workers, retailers must now look at eliminating bias in their own data, in the related algorithms, and in how both are used in their digital practices.
New retail, new risks
This is a growing problem. More and more shopping is moving online, a trend reinforced by the massive digital acceleration seen during the pandemic. At the same time, retailers want to increase their ability to personalize their offerings and interactions – they are looking for the sweet spot of understanding that builds a stronger and more profitable bond with a customer.
What’s more, retailers face a more competitive digital arena in the search for new customers, which puts enormous pressure on marketing costs and the cost of customer acquisition. The reality is that it will cost more to win the next generation of VIPs, which is why retailers are highly attentive to better ways to target. With analytics and the ability to retrieve data from the many different touch points that customers leave behind when using their devices and making purchases, one would think it would be easy to get this right.
The big picture is that the number of digital (or digitally activated) touch points with customers is expanding rapidly – and so are the opportunities for digital bias to emerge. Consider the growing use of artificial intelligence. As machine learning algorithms are integrated into more and more retail experiences, the risks associated with biased or incomplete training data escalate enormously. Think, for example, of an interactive digital skin care experience trained on a third-party dataset that, unknown to the retailer, was skewed massively toward lighter skin tones. The risk of unintentional discrimination or offense is obvious.
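One of the simplest defenses against this kind of skew is to audit the label distribution of a training set before using it. The sketch below is a minimal, hypothetical illustration – the function name, threshold, and sample counts are all assumptions, not any specific retailer’s tooling:

```python
from collections import Counter

def audit_label_balance(labels, threshold=0.10):
    """Flag any category whose share of the training data falls
    below `threshold` (10% by default) -- a crude imbalance check."""
    counts = Counter(labels)
    total = sum(counts.values())
    return {label: count / total
            for label, count in counts.items()
            if count / total < threshold}

# Hypothetical skin-tone annotations for a 1,000-image training set
labels = ["light"] * 880 + ["medium"] * 90 + ["dark"] * 30
underrepresented = audit_label_balance(labels)
# "medium" (9%) and "dark" (3%) fall below the 10% threshold
```

A real audit would go further – checking intersections of attributes, not just single labels – but even a check this simple would have surfaced the skew in the skin care example above before customers ever saw it.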
Or what about personalized marketing based on purchase history? Here, outdated or simplistic assumptions about category demographics risk leading retailers astray – whether it’s the woman wearing a blazer designed for men, the man buying foundation to cover a blemish, or the shopper simply wanting gender-neutral products. Thinking outside traditional category norms is becoming increasingly critical, both to ensure that you are marketing to the right people and to avoid causing offense by making the wrong assumptions about customers.
Strategies for combating digital bias
The risks of getting it wrong are significant. At best, mistakes will irritate and alienate customers – risking their trust and any chance of a repeat purchase. At worst, the impact of digital bias can be genuinely offensive or even discriminatory. So it is a problem that needs to be addressed as soon as possible.
However, the sheer number of opportunities for digital bias to creep into retail experiences means there is no simple solution. Instead, it is about developing a holistic set of strategies and a framework for responsible use of AI across the enterprise.
There are several different aspects to think about here.
Process and people. It is important to establish clear ethical standards and governance based on fairness, accountability and transparency. Retailers may consider bringing a Chief Ethics Officer into the C-suite to provide oversight. They should also ensure that their people are closely involved in the process – this “human plus machine” combination can act as a critical sanity check on what an automated solution does.
Design. When creating a new digital solution or AI-driven experience, retailers need to understand and apply ethical design standards from the start. This includes having mechanisms to ensure that training data for machine learning is inclusive. It also means taking data security into account and building in data protection by design.
Transparency. Retailers should see transparency as a way to maintain customer trust. This can include, for example, being open and honest about when artificial intelligence is used, and explaining what data points led to a particular recommendation or offer for a person. Bringing customers into the process, earning their trust and being transparent in designing solutions that work for everyone is key.
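Explaining what data points led to an offer can be as simple as surfacing the largest contributions behind a score. The sketch below assumes a hypothetical linear scoring model with made-up feature names and weights – it is an illustration of the idea, not a production explainability tool:

```python
def explain_recommendation(features, weights, top_n=2):
    """Return the feature contributions that most influenced a score,
    so they can be shown to the customer in plain language."""
    contributions = {name: value * weights.get(name, 0.0)
                     for name, value in features.items()}
    ranked = sorted(contributions.items(), key=lambda kv: -abs(kv[1]))
    return ranked[:top_n]

# Hypothetical customer signals and model weights
features = {"bought_skincare_recently": 1.0,
            "viewed_suncare_page": 1.0,
            "newsletter_opens": 0.2}
weights = {"bought_skincare_recently": 0.8,
           "viewed_suncare_page": 0.5,
           "newsletter_opens": 0.1}
top = explain_recommendation(features, weights)
# Could be rendered as: "Recommended because you recently bought
# skincare and viewed our suncare page."
```

For complex models, retailers would typically reach for established attribution techniques rather than raw weights, but the customer-facing principle is the same: name the signals, in plain language.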
Partners. Retailers often use a partner to develop and maintain AI-powered algorithms and solutions, especially where they lack in-house skills in advanced computer science. But if an algorithm does not work as expected and/or offends a customer, it is the retailer’s reputation on the line. It is important to choose partners wisely and ensure that they adhere to the same business values and purpose as the retailer’s own brand.
Monitoring. It is important to keep a close eye on how a digital solution behaves once it is live with customers – all the more so if it includes self-learning AI components that evolve the experience over time. Retailers should run regular audits of all algorithmic solutions against key fairness and safety metrics.
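A recurring audit of this kind often starts with a simple question: do different customer groups see the same outcomes at similar rates? The sketch below computes one common check, the demographic-parity gap; the group names and audit log are hypothetical, and real audits would use several metrics, not just one:

```python
def demographic_parity_gap(outcomes_by_group):
    """Compare positive-outcome rates (e.g. the share of customers
    shown an offer) across groups; return the rates and max-min gap."""
    rates = {group: sum(outcomes) / len(outcomes)
             for group, outcomes in outcomes_by_group.items()}
    gap = max(rates.values()) - min(rates.values())
    return rates, gap

# Hypothetical audit log: 1 = offer shown, 0 = not shown
log = {"group_a": [1, 1, 1, 0, 1, 1, 0, 1],   # offer shown 75% of the time
       "group_b": [1, 0, 0, 0, 1, 0, 0, 0]}   # offer shown 25% of the time
rates, gap = demographic_parity_gap(log)
# A gap of 0.5 between groups would warrant investigation
```

Run on a schedule against live traffic, a check like this catches drift in self-learning systems before it becomes a customer-facing incident.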
Ultimately, a retailer must aim for an approach that is honest, fair, transparent, accountable and centered around human needs. Given how widespread the use of data and AI is now across so many aspects of retail, this kind of principle-based approach is the best way to ensure we build experiences that are truly inclusive for all customers across all shopping channels.
About the authors: Jill Standish is a senior managing director and global lead for retail, and Joe Taiano is a managing director and marketing lead for the consumer industries at Accenture.