In some ways it’s still the Wild West when it comes to AI, with developments happening faster than most can fathom and the law can respond. At the same time, though, the sheriff has begun to arrive.
Gwen Hassan (LinkedIn), Deputy Chief Compliance Officer at Unisys and Adjunct Professor at Loyola University Chicago School of Law, explains that the EU already has a law in place that ranks AI uses by their level of risk, including those that are prohibited outright, with an emphasis on the privacy implications.
In the US, proposed legislation would require clear notification when content is created using generative AI, but it has yet to pass.
Thus far the strongest direction in the US comes from the White House, where President Biden issued the Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence. The order urges ethical guidelines for generative AI, sets key goals for what good uses of AI look like, and calls on various departments of the government to provide further analysis and direction.
So what should compliance teams do now, given the legislative gaps? She recommends extending the existing compliance program to cover AI and, as AI evolves, developing more specific programs that map to its risks.
Listen in to learn more about the emerging regulatory climate for AI.