If Your Company Uses AI, It Needs an Institutional Review Board

Companies that use AI know they need to worry about ethics, but when they start, they tend to follow the same broken three-step process: they equate ethics with “fairness,” they focus on bias, and they look to technical tools and stakeholder outreach to mitigate their risks. Unfortunately, this sets them up for failure. When it comes to AI, focusing on fairness and bias ignores a huge swath of ethical risks, and many of those risks defy technical solutions. Instead of trying to reinvent the wheel, companies should look to the medical profession and adopt institutional review boards (IRBs). IRBs, which are composed of diverse teams of experts, are well suited to complex ethical questions. When given jurisdiction and power, and brought in early, they’re a powerful tool that can help companies think through hard ethical problems — saving money and brand reputation in the process.

Conversations around AI and ethics may have started as a preoccupation of activists and academics, but now — prompted by the increasing frequency of headlines about biased algorithms, black-box models, and privacy violations — boards, C-suites, and data and AI leaders have realized it’s an issue that demands a strategic approach.