Where are the ethics in implementing AI?

Artificial intelligence and ethics: it all depends on how ethical principles are operationalized.

June 20, 2019

Applications based on artificial intelligence (AI) can, in principle, make autonomous decisions. But to what extent should this be allowed? Who is responsible for those decisions? And when must humans be able to intervene in AI processes?

Companies are looking for practical answers to ethical questions like these. Such answers can only be found in the detail: beyond good intentions, operationalization is essential. PwC's “AI Ethics” framework provides orientation for finding the right answers.

Responsible use of AI requires information security as well as the trust of employees and customers. Politics and business are currently discussing which ethical rules are necessary for this, but the debate remains far too abstract. PwC's ethics framework goes into detail and thus helps companies raise the right ethical questions and find answers, at company level as well as for specific use cases; answers that can also be operationalized. Among other things, we took up the guidelines of the European Commission's expert group, reflected on them critically and expanded them.

AI ethics must be defined individually

In developing our ethical framework, we were guided by the question of how AI can serve people. Together with experts in

  • bioethics,
  • computational linguistics
  • and philosophy

we have identified five meta-principles. Each of these is assigned sub-principles, which represent possible answers to the meta-principle. Not all of these sub-principles have to be fulfilled at the same time; sometimes they even contradict each other. This is inevitable, as the following examples show, because how ethical principles can be implemented consistently, transparently and comprehensibly in corporate practice depends on the application and on a company's values and goals.

Five key principles for practice

The following examples show how the five key principles can be implemented in practice and what needs to be considered:

1. Principle: Accountability

An exemplary overarching question here: how should responsibility be defined and ensured for autonomous or semi-autonomous systems, for example one that diagnoses machine parts? More detailed questions could be: Who is responsible for errors that occur? The party that provided the data for the system? The system operator? The programmer? Or the person responsible for the process?

An ethical AI application must answer such questions early in the design process. Responsibilities within the company may have to be redefined, and existing processes may have to be changed or new ones created. Legal issues, such as operator liability, also need to be clarified.

2. Principle: Autonomy

The question of responsibility also depends on the degree of autonomy of an AI system. If a bank wants to determine whether a customer is creditworthy, for example, it can leave the decision entirely to an AI application. Or an employee can use a semi-autonomous system to collect decision-relevant data but make the decision himself or herself. An ethical question here is: to what extent does the bank want to hand over existential decisions to machines?
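
How this difference in autonomy can be made explicit is shown by the following minimal sketch. It is purely illustrative: the risk threshold, the field names and the idea of routing semi-autonomous cases to a human reviewer are assumptions for this example, not part of the PwC framework.

    from dataclasses import dataclass
    from typing import Optional

    # Hypothetical autonomy levels for an AI-supported credit decision.
    FULLY_AUTONOMOUS = "fully_autonomous"    # the system decides on its own
    HUMAN_IN_THE_LOOP = "human_in_the_loop"  # the system only prepares the decision

    @dataclass
    class CreditAssessment:
        applicant_id: str
        default_risk: float       # model output between 0.0 (safe) and 1.0 (risky)
        decision: Optional[str]   # "approve", "reject", or None if left to a human
        needs_human_review: bool

    def assess_credit(applicant_id: str, default_risk: float, autonomy: str) -> CreditAssessment:
        """Route the credit decision according to the chosen degree of autonomy."""
        if autonomy == FULLY_AUTONOMOUS:
            # The system itself takes the existential decision.
            decision = "approve" if default_risk < 0.3 else "reject"
            return CreditAssessment(applicant_id, default_risk, decision, needs_human_review=False)
        # Semi-autonomous mode: the system condenses decision-relevant data,
        # but an employee makes the final call.
        return CreditAssessment(applicant_id, default_risk, decision=None, needs_human_review=True)

    print(assess_credit("A-1001", 0.12, FULLY_AUTONOMOUS))
    print(assess_credit("A-1002", 0.45, HUMAN_IN_THE_LOOP))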

An example from the USA shows the ethical problems that can arise here: judges there use the AI system “Compas” to help them reach decisions. In one case, a girl who took a discarded bicycle and rode it for a few meters was remanded in custody, while a repeat robbery offender was released until trial without hesitation. The AI system had adopted prejudices contained in its learning and training data and attributed a high likelihood of recidivism to the young girl. Possible discrimination on the basis of ethnic and/or social affiliation is also a much-discussed issue with “Compas”.

3. Principle: Justice (fairness)

In addition to the question of the degree of decision-making autonomy, the “Compas” example touches on the ethical principle of justice. This is also relevant for AI-supported speech analysis in application processes. For example, AI can use the way vowels are formed in speech to assess whether a job candidate is prone to depression. But does a company want to take advantage of this possibility? And if so, to what extent should a tendency toward depression be a selection criterion? And when is that still fair? The answer may be different for a train driver or a pilot than for a lawyer or a warehouse clerk.

Health insurers could also use such systems, perhaps to justify higher premiums or to advise insured persons at an early stage. What an ethically responsible decision looks like in each case is not clear from the outset. However, companies have to take part in such discussions, which ultimately must also be conducted by society as a whole, from the start and make their own perspective transparent.
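
One way in which fairness questions of this kind can be operationalized is a simple comparison of selection rates between groups. The sketch below is hypothetical: the group labels, the data and the widely used “four-fifths” rule of thumb are assumptions for illustration, not criteria taken from this article or from the Compas system.

    from collections import Counter

    def selection_rates(decisions):
        """decisions: list of (group, selected) pairs from an AI-supported selection step."""
        totals, selected = Counter(), Counter()
        for group, was_selected in decisions:
            totals[group] += 1
            if was_selected:
                selected[group] += 1
        return {group: selected[group] / totals[group] for group in totals}

    def disparate_impact_ratio(decisions):
        """Ratio of the lowest to the highest group selection rate (1.0 means parity)."""
        rates = selection_rates(decisions)
        return min(rates.values()) / max(rates.values())

    # Hypothetical screening outcomes as (group, selected?) pairs.
    outcomes = [("A", True), ("A", True), ("A", False),
                ("B", True), ("B", False), ("B", False)]
    ratio = disparate_impact_ratio(outcomes)
    print(f"Disparate impact ratio: {ratio:.2f}")
    # A common rule of thumb flags ratios below 0.8 for closer review.
    print("Review for possible bias" if ratio < 0.8 else "No red flag from this metric alone")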

4. Principle: Safety (safety & security)

AI systems are based on large amounts of data. But what data is collected? Is it sensitive personal, company or customer data? To what extent is the data collection transparent? Are the AI applications robust against cyber attacks? And what degree of anonymization is ethically desirable, and what degree is sensible from a technical point of view?

For example, collecting passenger data at airports can make the fight against terrorism more effective. The more and the better the passenger data available to an AI application, the more it can “learn”. However, if groups of people who are above suspicion are excluded from data collection, the AI may not be optimally trained, which may increase the safety risk for all air travelers. On the other hand, there is the ethical question of whether personal data belongs in the analysis tools of airport operators or airlines at all.
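
What a chosen degree of anonymization can mean at the level of the data pipeline is sketched below. The field names and the salted-hash approach are assumptions for illustration; pseudonymizing identifiers in this way reduces exposure but does not by itself make the data anonymous.

    import hashlib
    import os

    # Secret salt; in practice it would come from a key store, never be hard-coded.
    SALT = os.environ.get("PSEUDONYMIZATION_SALT", "demo-salt-only")

    def pseudonymize(value: str) -> str:
        """Replace a direct identifier with a salted hash (pseudonymization, not full anonymization)."""
        return hashlib.sha256((SALT + value).encode("utf-8")).hexdigest()[:16]

    def prepare_for_analysis(passenger: dict) -> dict:
        """Keep only the fields the analysis needs and pseudonymize the direct identifier."""
        return {
            "passenger_ref": pseudonymize(passenger["passport_no"]),  # no clear-text ID passes this step
            "route": passenger["route"],
            "booking_class": passenger["booking_class"],
            # Deliberately dropped: name, date of birth, contact details.
        }

    record = {"passport_no": "X1234567", "name": "Jane Doe",
              "route": "FRA-JFK", "booking_class": "Y"}
    print(prepare_for_analysis(record))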

5. Principle: Explicability

When implementing AI, the question of explicability also arises. Is it sufficient, for example, that processes or crucial sub-processes are transparent? And for which people inside and outside the company should this be the case? For everyone, or only for certain groups? Let us look again at the example of applicant selection: a human resources manager should be able to understand the criteria on which his AI system bases a personnel decision or recommendation. But does this also apply to the applicants? Or take the “Compas” example again: do convicted persons have a right to know how the AI application arrived at its recommendation? And how is this compatible with the right of the Compas provider to protect the algorithm it has developed as a trade secret?
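
For a simple, linear scoring model, such an explanation can amount to listing each criterion's contribution to the recommendation, as in the sketch below. The criteria, weights and threshold are hypothetical values chosen for illustration; more complex models require dedicated explanation techniques.

    # Hypothetical weights of a simple linear screening model (illustrative values only).
    WEIGHTS = {"years_of_experience": 0.6, "relevant_certificates": 0.3, "assessment_score": 0.8}
    THRESHOLD = 3.0  # assumed cut-off for an "invite to interview" recommendation

    def explain_recommendation(applicant: dict) -> None:
        """List each criterion's contribution so an HR manager can trace the recommendation."""
        contributions = {name: WEIGHTS[name] * applicant[name] for name in WEIGHTS}
        total = sum(contributions.values())
        for name, value in sorted(contributions.items(), key=lambda item: -item[1]):
            print(f"  {name:25s} contributes {value:5.2f}")
        recommendation = "invite to interview" if total >= THRESHOLD else "do not shortlist"
        print(f"  total score {total:.2f} -> recommendation: {recommendation}")

    explain_recommendation({"years_of_experience": 4, "relevant_certificates": 2, "assessment_score": 1.5})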

Asking the right questions instead of giving answers

The scenarios outlined here give an impression of how complex ethical questions become in connection with artificial intelligence. That is why the PwC “AI Ethics” framework does not prescribe answers. Rather, it formulates crucial questions, identifies short-term and long-term risks and helps to avoid them. It does so by showing possible answers to these questions as well as possible ways of implementing those answers: through Responsible AI in business processes and through Trust in AI at the system level. These are two further frameworks that we have developed, which also help to find the right answers and to operationalize them. Only in this way can companies benefit from the advantages of the technology without losing credibility and reputation.