‘The Algorithm Made Me Do It’: Artificial Intelligence Ethics Is Still On Shaky Ground

Posted by Tomorrow Team

Dec 22, 2019 8:44:00 AM

While artificial intelligence is the trend du jour across enterprises of all types, scant attention is still being paid to its ethical ramifications. Perhaps it’s time to start asking the hard questions. For enterprises, that means bringing together, or recruiting, people who can ask them.

In one recent survey by Genesys, 54% of the employers questioned said they are not troubled that AI could be used unethically, whether by their companies as a whole or by individual employees. “Employees appear more relaxed than their bosses, with only 17% expressing concern about their companies,” the survey’s authors add.

Despite heightened awareness about the potential risks of AI and highly publicized incidents of privacy violations, unintended bias, and other negative outcomes, AI ethics is still an afterthought, a recent McKinsey survey of 2,360 executives shows. While 39% say their companies recognize risk associated with “explainability,” only 21% say they are actively addressing this risk. “At the companies that reportedly do mitigate AI risks, the most frequently reported tactic is conducting internal reviews of AI models.”
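
Such internal reviews need not be elaborate to be useful. As a minimal sketch of what one review check might look like, the Python snippet below uses scikit-learn’s permutation importance to surface which input features a model leans on most, so reviewers can flag heavy reliance on sensitive or proxy attributes. The feature names and review threshold are hypothetical, not anything the survey prescribes.

    # Minimal sketch of one internal-review check: permutation importance
    # surfaces the features a model leans on most, so reviewers can spot
    # sensitive or proxy attributes. Names and threshold are illustrative.
    from sklearn.datasets import make_classification
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.inspection import permutation_importance
    from sklearn.model_selection import train_test_split

    # Stand-in data; a real review would load the production model and data.
    X, y = make_classification(n_samples=2000, n_features=6, random_state=0)
    feature_names = ["age", "income", "zip_code", "tenure", "usage", "region"]

    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
    model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

    # Shuffle each feature in turn and measure the drop in test accuracy.
    result = permutation_importance(model, X_test, y_test, n_repeats=10,
                                    random_state=0)

    REVIEW_THRESHOLD = 0.05  # hypothetical cutoff for "needs human review"
    ranked = sorted(zip(feature_names, result.importances_mean),
                    key=lambda pair: -pair[1])
    for name, score in ranked:
        flag = "  <-- review" if score > REVIEW_THRESHOLD else ""
        print(f"{name:10s} importance={score:.3f}{flag}")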

Inevitably, “there will be lawsuits that require you to reveal the human decisions behind the design of your AI systems, what ethical and social concerns you took into account, the origins and methods by which you procured your training data, and how well you monitored the results of those systems for traces of bias or discrimination,” warns Mike Walsh, CEO of Tomorrow and author of The Algorithmic Leader: How to Be Smart When Machines Are Smarter Than You, in a recent Harvard Business Review article. At the very least, that means being able to vouch for the algorithmic processes at the heart of your business. “Simply arguing that your AI platform was a black box that no one understood is unlikely to be a successful legal defense in the 21st century. It will be about as convincing as ‘the algorithm made me do it.’”
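
One practical response to that warning is to record those human decisions, data origins, and monitoring results as a system is built, not after a lawsuit arrives. The Python sketch below shows one hypothetical way to keep such a record; the field names and values are invented for illustration, and the “four-fifths” comparison in the bias check is a common rule of thumb rather than anything Walsh prescribes.

    # Hypothetical audit record capturing the human decisions behind an AI
    # system, the origins of its training data, and bias-monitoring results.
    # Field names and values are illustrative, not a prescribed schema.
    import json
    from dataclasses import dataclass, field, asdict
    from datetime import date

    @dataclass
    class ModelAuditRecord:
        model_name: str
        design_decisions: list      # human choices behind the system
        data_sources: list          # origins and procurement methods
        ethical_reviews: list       # concerns considered, and by whom
        bias_metrics: dict = field(default_factory=dict)
        review_date: str = field(default_factory=lambda: date.today().isoformat())

    def selection_rate_ratio(rate_a: float, rate_b: float) -> float:
        """Ratio of selection rates between two groups; values below ~0.8
        (the informal 'four-fifths' rule of thumb) are often red-flagged."""
        return rate_a / rate_b

    record = ModelAuditRecord(
        model_name="loan_approval_v2",  # hypothetical system
        design_decisions=["excluded zip_code as a likely proxy attribute"],
        data_sources=["2018-2019 loan applications, collected in-house"],
        ethical_reviews=["privacy and fairness review, signed off 2019-11-02"],
    )
    record.bias_metrics["approval_rate_ratio"] = selection_rate_ratio(0.41, 0.52)

    print(json.dumps(asdict(record), indent=2))  # a durable, discoverable log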

Legal exposure is not the only reason to rethink AI ethics. It’s also about “maintaining trust between organizations and the people they serve, whether clients, partners, employees, or the general public,” a recent Accenture report maintains. The report’s authors, Ronald Sandler and John Basl, both of Northeastern University’s philosophy department, and Steven Tiell of Accenture, state that a well-organized data ethics capability can help organizations manage the risks and liabilities associated with data misuse and negligence.

“It can also help organizations clarify and make actionable mission and organizational values, such as responsibilities to and respect for the people and communities they serve,” Sandler and his co-authors write. A data ethics capability also offers organizations “a path to address the transformational power of data-driven AI and machine learning decision-making in an anticipatory way, allowing for proactive responsible development and use that can help organizations shape good governance, rather than inviting strict oversight.”

Sandler and his co-authors make the following recommendations for building a robust and responsive AI and data ethics capability:

  • “Appoint chief data/AI officers with ethics as part of their responsibilities.”
  • “Assemble organizationally high-level ethics advisory groups.”
  • “Incorporate privacy and ethics-oriented risk and liability assessments into decision-making or governance structures.”
  • “Provide training and guidelines on responsible data practices for employees.”
  • “Develop tools, organizational practices/structures, or incentives to encourage employees to identify potentially problematic data practices or uses.”
  • “Use a data certification system or AI auditing system that assesses data sourcing and AI use according to clear standards.” (One possible shape of such a check is sketched just after this list.)
  • “Include members responsible for representing legal, ethical, and social perspectives on technology research and project teams.”
  • “Create ethics committees that can provide guidance not only on data policy, but also on concrete decisions regarding collection, sharing, and use of data and AI.”
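
The certification and auditing recommendation above is one of the few on the list that lends itself directly to automation. Below is a minimal sketch of the shape such a check could take: refusing to clear a dataset for model training unless its metadata satisfies a fixed standard. The required fields here are invented for illustration; a real standard would come from an organization’s own governance work.

    # Hypothetical data-certification check: a dataset is cleared for model
    # training only if its metadata meets a fixed standard. The required
    # fields below are invented for illustration, not an industry standard.
    from typing import Dict, List, Tuple

    REQUIRED_FIELDS: Dict[str, str] = {
        "source": "where the data was procured",
        "license": "terms under which it may be used",
        "collection_method": "how the records were gathered",
        "consent_basis": "basis for using any personal data",
        "last_reviewed": "date of the most recent ethics review",
    }

    def certify_dataset(metadata: Dict[str, str]) -> Tuple[bool, List[str]]:
        """Return (certified, problems) for a dataset's metadata."""
        problems = [f"missing '{key}' ({why})"
                    for key, why in REQUIRED_FIELDS.items()
                    if not metadata.get(key)]
        return (not problems, problems)

    # Example: a dataset that would fail certification.
    metadata = {
        "source": "third-party vendor",          # hypothetical values
        "license": "commercial, non-exclusive",
        "collection_method": "web scraping",
        # "consent_basis" and "last_reviewed" were never recorded.
    }

    certified, problems = certify_dataset(metadata)
    print("certified" if certified else "rejected:")
    for problem in problems:
        print(" -", problem)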

Sandler and his co-authors place particular weight on their final point, urging organizations to establish an AI ethics committee made up of stakeholders from across the enterprise: technical, legal, ethical, and organizational. This is still unexplored territory, they caution: “There are not yet data and AI ethics committees with established records of being effective and well-functioning, so there are no success models to serve as case-studies or best practices for how to design and implement them.”

Drawing on the approaches of similar technology engagements, they suggest an AI ethics committee should address the following concerns:

  • “Whether the project under review advances organizational aims and foundational values to an extent that it justifies any organizational and social risks or costs.”
  • “Whether the project is likely to violate any hard constraints, such as legal requirements or fundamental organizational commitments/principles.”
  • “Whether an impartial citizen would judge that the organization has done due diligence in considering the ethical implications of the project.”
  • “Whether it is possible to secure the sought benefits in a way that better aligns with organizational values and commitments and without any significant additional undue burden or costs.”
  • “Whether reputational risks could be significant enough to damage the brand value in the concerned market or in other places where the organization operates.”

As noted, AI ethics is still a very new field. Every step that helps decision-makers understand the “why” behind AI-driven decisions will help maintain trust in these emerging systems.

This article originally appeared in Forbes.
