
On Feb. 28, representatives from the Pontifical Academy for Life, Microsoft, IBM, FAO (UN Food and Agriculture Organization) and the Italian government signed the "Rome Call for AI Ethics," a document developed to support an ethical approach to artificial intelligence.

Founded in 1994, the Pontifical Academy is dedicated to studying and informing on the principal problems of biomedicine and law as they relate to the promotion and defense of life. Leaving aside its alignment with the Catholic Church, it aims to draw up ethical guidelines to aid in the development of new sciences and technologies.

6 Principles for AI Development

Had IBM and Microsoft not signed the "Rome Call for AI Ethics," it probably would have gone unnoticed in the tech industry. But because the two are among the companies leading the charge on AI, the announcement received a fair amount of attention.

The conference sponsors agreed on a common need to work together at a national and international level to promote “algor-ethics,” namely the ethical use of AI. Six principles guide the algor-ethics approach:

  1.  Transparency: In principle, AI systems must be explainable.
  2.  Inclusion: The needs of all human beings must be taken into consideration.
  3.  Responsibility: Those who design and deploy the use of AI must proceed with responsibility and transparency.
  4.  Impartiality: AI systems must not create or act according to bias.
  5.  Reliability: AI systems must be able to work reliably.
  6.  Security and privacy: AI systems must work securely and respect the privacy of users. 
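
Principles like these are abstract, but a couple of them map naturally onto checks developers already run. The sketch below is a hypothetical loan-approval example, not part of the Rome Call, and all of the feature names, weights and numbers are made up. It shows one way "impartiality" and "transparency" might be made concrete: comparing approval rates across groups, and using a scoring model whose individual contributions can be read and explained.

```python
# A minimal, illustrative sketch (not from the Rome Call itself) of how two of the
# principles might be checked in practice. The loan-approval scenario, feature
# names, weights and thresholds below are hypothetical.

from collections import defaultdict

# Hypothetical scored applicants: (group, model_score, approved?)
decisions = [
    ("group_a", 0.82, True), ("group_a", 0.64, True), ("group_a", 0.41, False),
    ("group_b", 0.78, True), ("group_b", 0.39, False), ("group_b", 0.35, False),
]

# Impartiality check: compare approval rates across groups (demographic parity).
totals, approvals = defaultdict(int), defaultdict(int)
for group, _score, approved in decisions:
    totals[group] += 1
    approvals[group] += int(approved)

for group in totals:
    rate = approvals[group] / totals[group]
    print(f"{group}: approval rate {rate:.0%}")

# Transparency check: a linear scorer whose per-feature contributions can be
# read off and explained to the applicant, unlike an opaque model.
weights = {"income": 0.5, "debt_ratio": -0.3, "years_employed": 0.2}
applicant = {"income": 0.9, "debt_ratio": 0.4, "years_employed": 0.5}
score = sum(weights[f] * applicant[f] for f in weights)
contributions = {f: round(weights[f] * applicant[f], 2) for f in weights}
print("score:", round(score, 2), "contributions:", contributions)
```

A real audit would of course go further, but even this level of reporting is more than many deployed systems disclose today.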

After sounding out several technology vendors about the initiative, it is clear the "Rome Call for AI Ethics" has identified a point of concern for many organizations.

Related Article: Responsible AI Moves Into Focus at Microsoft's Data Science and Law Forum

AI Good Intentions

LinkedIn's chief data officer, Igor Perisic, noted that while many tech companies are taking the ethical challenges posed by AI very seriously, good intentions are not enough.

If we look at the history of other professions (doctors, lawyers, journalists), their ethical foundations were built on a long and sometimes messy struggle with the role they played within a larger society. At the very least, it is time for computer scientists to begin asking themselves the same hard questions that these other professions have already addressed.

“In our own profession, we now have a situation where many individuals who are creating the systems that will shape society are not themselves always informed about the way their actions impact the world and others,” he said. “Is an algorithm still just an algorithm when it can recommend a given job to millions of people, or not?”

Until very recently, even practicing philosophers could not agree that the use of software creates unique ethical dilemmas, in contrast to those posed by weapons or medicine, topics that have been discussed for thousands of years. But with the ubiquity of software-led decision-making, filtering and other relevance models, today's software can have a similar impact on the lives of everyday people. And the datasets these systems leverage often reflect real-world societal trends and biases.

“While I am not advocating for one specific ethical stance over another, I am advocating for the requirement of being able to reason in this space,” Perisic added.

Related Article: The Next Frontier for IT: AI Ethics

Bad Data, Bad Decisions?

The social utility of AI technology notwithstanding, there are legitimate concerns about the unintended consequences of algorithmic bias, said Andrew Pery, a marketing executive at ABBYY who has done extensive research on AI ethics for the Association for Intelligent Information Management (AIIM).

He pointed out that AI is only as good as the data behind it. Even with the best of intentions, instances often arise where automated decision-making based on AI algorithms produces erroneous and, in many cases, discriminatory outcomes.
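
His point can be seen in a few lines of code. The sketch below uses entirely hypothetical hiring data (none of it drawn from the article) and fits a simple linear scorer to historically biased labels: without any malicious intent in the algorithm itself, the learned model penalizes group membership and reproduces the old disparity at scale.

```python
import numpy as np

# Toy illustration of "AI is only as good as the data behind it" (hypothetical
# numbers). Past hiring labels were biased against one group; a model fit to
# those labels reproduces the bias even though it never "intends" to.

rng = np.random.default_rng(0)
n = 1000
skill = rng.normal(0, 1, n)        # true qualification signal
group = rng.integers(0, 2, n)      # 0 or 1, e.g. two demographic groups
# Historical labels: hired based on skill, minus a penalty applied to group 1.
hired = (skill - 0.8 * group + rng.normal(0, 0.3, n) > 0).astype(float)

# Fit a simple linear scorer on the biased labels (least squares on [skill, group, 1]).
X = np.column_stack([skill, group, np.ones(n)])
w, *_ = np.linalg.lstsq(X, hired, rcond=None)

print("learned weight on group membership:", round(w[1], 2))  # typically negative here
for g in (0, 1):
    rate = (X[group == g] @ w > 0.5).mean()
    print(f"group {g}: predicted hire rate {rate:.0%}")
```

The model never sees anything labeled "bias"; it simply learns that group membership predicts the historical outcome, which is exactly the failure mode Pery describes.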

Even more disconcerting, in his opinion, is technology companies' lack of transparency about how the AI algorithms they develop actually work, withheld in part to protect their intellectual property. "They are asking us, the consumers of AI technologies, to just trust them," he said.

Already, the General Data Protection Regulation (GDPR) includes a provision requiring data controllers and processors to undertake privacy impact assessments when deploying automated decisions that affect data subjects' legal and economic rights. Furthermore, the EU Parliament adopted a resolution on a regulatory environment for AI that encourages strong user protections.

The most recent effort in the US at the federal level, the draft Algorithmic Accountability Act of 2019 (S.1108), seeks to direct the Federal Trade Commission (FTC) to require entities that use, store or share personal information to conduct automated decision system impact assessments and data protection impact assessments.

The proposed law would require the FTC to enact regulations within the next two years requiring companies that make over $50 million per year or collect data on more than 1 million people to perform automated decision system impact assessments. In practical terms, he pointed to two highly contentious and problematic applications of AI that demand further scrutiny before wider use: facial recognition and criminal justice.

He believes that to instill ethics in AI development, self-regulation by industry is one plausible approach. “Designing ethics into AI starts with determining what matters to stakeholders such as customers, employees, regulators and the general public,” he added.

To do this, companies should consider setting up a dedicated AI governance and advisory committee made up of cross-functional leaders and external advisers. Such a committee would engage with stakeholders, including multi-stakeholder working groups, and establish and oversee the governance of AI-enabled solutions across their design, development, deployment and use.

Related Article: Is it Time for Your Organization to Form an AI Ethics Committee?

AI, Ethics and the Future

And for the future? Renée La Londe, CEO and founder of iTalent Digital, believes that ethics plays, and will continue to play, a central role in AI development. AI is being used to address society's most urgent problems in the areas of wealth disparity, healthcare, education and the environment.

"There is only one way to ensure that AI truly considers every segment of society, including the most vulnerable, and that is to ensure that the group of individuals building and shaping AI is representative of the entire 'human family,'" she said. "Because of this, it is imperative that we attract people from all walks of life into AI development. Otherwise, unconscious (and conscious) biases will be baked into the technology, which could put certain segments at risk."

The more diversity we have in AI development, the better we will be able to ensure that the entirety of humanity is well represented and served by this technology.