Using Curiosity to Increase Ethics in Artificial Intelligence
Copyright © 2022 Chris J. Benevich. All Rights Reserved.
For Technology and Creativity Tuesdays in my #30DaysOfLinkedIn, I'm redirecting binary mindsets about success at work toward a spectrum of approaches and contributions that broaden thinking.
Beyond specific people or behaviors, I think it's important to consider how we respond to people who vocalize concerns about AI ethics:
Do we want to penalize someone raising an ethical concern about AI?
Do we want to penalize someone for showing compassion in the workplace?
For falling prey to the same cognitive biases with which we ALL come equipped as humans?
Let's not shut down the conversation happening among consumers right now.
When I don't understand a person's behavior, I like to get curious.
More broadly, I'm also curious whether AI governance frameworks can take company culture into consideration.
How might companies create more #SafeToFail environments for their staff to ask questions about AI governance frameworks?
... To ask questions to whatever extent they need to feel comfortable working with said frameworks?
Two AI governance frameworks I found with a quick online search were published by:
Wharton (https://ai.wharton.upenn.edu/white-paper/artificial-intelligence-risk-governance/) and
Personal Data Protection Commission Singapore (https://www.pdpc.gov.sg/-/media/Files/PDPC/PDF-Files/Resource-for-Organisation/AI/SGModelAIGovFramework2.pdf).
I am not an attorney, a risk management specialist, or an AI governance framework creator, so my remarks are intended only to stimulate conversation, not to provide advice.
Have you read your company's AI governance framework?