By Adam Turteltaub
The world has grown enamored with Big Data and the promise of Artificial Intelligence (AI). Many believe this next big thing will be transformative for business, and even medicine, exposing patterns that humans miss and enabling far better decision making.
But over the last few months, there has been a shift in the discussion as cases emerged of less than compliant and not exactly ethical decisions being made by the algorithms behind AI, reports Deborah Adleman, a Director with Ernst & Young LLP, where she is the US and Americas Data Protection Leader and an executive within the Office of Ethics and Compliance and Risk Management. In this podcast she reports that in at least one case gender bias started to emerge, and people from certain ethnic backgrounds were being precluded from hiring due to zip code-based decision making.
This should set off alarm bells for compliance and ethics teams.
To help manage the risk, she recommends not blindly trusting the AI. Compliance teams should take the time to consider four areas that are important for generating trust in the AI solution:
- Ethics: Does the solution agree with the values, mission and code of the organization?
- Social Responsibility: Does it have potentially negative social implications?
- Accountability: Is there clarity as to how the AI operates and the decisions it is supporting?
- Reliability: Has it been tested rigorously?
Listen in to raise your own intelligence level about Artificial Intelligence.