Analysis of Decisions in AI Use Shows Organizational, Ethics Issues
Not every corporate executive making technology decisions understands the value of AI, according to an article on zdnet.com. Some also fail to put ethical safeguards in place to keep AI systems from discriminating against customers or invading their privacy.
AI managers and specialists face organizational and ethical issues that impede their efforts to integrate AI into their plans, according to a recent in-depth analysis that looked at the pressures and compromises faced by today’s AI teams.
The researchers, Bogdana Rakova (Accenture and Partnership on AI), Jingying Yang (Partnership on AI), Henriette Cramer (Spotify) and Rumman Chowdhury (Accenture), found that many corporations fail to act effectively or appropriately:
“Practitioners have to grapple with lack of accountability, ill-informed performance trade-offs and misalignment of incentives within decision-making structures that are only reactive to external pressure.”
Organizations need people with the expertise to understand AI systems and executive leaders to implement their recommendations, the report said.
“Industry professionals, who are increasingly tasked with developing accountable and responsible AI processes, need to grapple with inherent dualities in their role as both agents for change, but also workers with careers in an organization with potentially misaligned incentives that may not reward or welcome change.”
Achieving accountability, the report said, requires more organization-level frameworks and metrics, structural support, and proactive evaluation and mitigation of issues as they arise.
The four leading issues the researchers found to be impeding responsible and accountable AI adoption are the following:
How and when do we act? “Reactive. Organizations act only when pushed by external forces (e.g. media, regulatory pressure)”
How do we measure success? “Performance trade-offs: Organizational-level conversations about fair-ML dominated by ill-informed performance trade-offs.”
What are the internal structures we rely on? “Lack of accountability: Fair-ML work falls through the cracks due to role uncertainty.”
How do we resolve tensions? “Fragmented: Misalignment between individual, team, and organizational level incentives and mission statements within their organization.”
Rakova and her team made five recommendations to help solve these issues:
Educate the C-suite and board
Educate employees at all levels
Open communication channels
Consider a new advocacy role
Assert veto power
Of course, sound business acumen remains vital in any business decision, especially when it comes to AI.
Read more at zdnet.com