Can robots be programmed to say prayers?

The federal government will hold a two-day conference in Nuremberg on digital innovation and artificial intelligence. The theologian Peter Dabrock, chairman of the German Ethics Council, insists that smart machines be programmed ethically.

KNA: Professor Dabrock, route planners, online translators, virtual assistants such as Alexa and Siri, medical diagnostics - artificial intelligence (AI) seems to be omnipresent. Will our lives soon be largely determined by machines?

Peter Dabrock: Without wanting to set myself up as a prophet, I would venture this thesis: the trend can no longer be reversed. The development is taking place worldwide. And that means the decision on how the AI age will be shaped is largely not being made in Germany.

KNA: For now, people still program the machines. Will it one day be the other way around, with machines controlling people?

Dabrock: The use of technology always brings gains and losses. What we have to ensure is that these gains and losses do not exceed certain limits. In some areas machines will bring us gains in freedom, in others restrictions on freedom. One should not give the impression that regulation alone can bring the challenges digitalization poses to individuals and society under control. It takes more than that. And in the end, the focus must remain on people. Robots or AI machines must not control people, neither directly nor indirectly.

KNA: But how do we prevent that?

Dabrock: By being guided by ethical principles when programming artificial intelligence or training smart machines.

KNA: And is that possible?

Dabrock: To claim that ethical programming is not possible would basically mean that we should keep our hands off such smart machines entirely. In any programming, whether it is explicitly called 'ethical' or not, the programmers' assumptions - including moral ones - flow into the result. It therefore makes sense, first, that these assumptions be disclosed and, second, that the programming observe certain general legal rules.

Take the so-called autonomous vehicle, for example: 'In a collision, human lives take priority over those of other living beings' or 'No discriminatory criteria may guide the programming'.

Third, it should be possible to articulate moral preferences within such a framework. Again using the example of autonomous vehicles: Shouldn't we at least consider whether it should be possible to program one's own car in such a way that, in the extremely rare case of a tragic dilemma, self-sacrifice is preferred to sacrificing someone else? A prerequisite is that no one sits in the vehicle without having been informed of this.

KNA: But how does the AI arrive at its decisions? Why does it recommend insurance tariff A to one customer and tariff B to another? How can a doctor be sure that a machine-generated treatment recommendation is really the best one? How does a drone find the targets it attacks?

Dabrock: The big challenge is that the exact selection steps, especially with smart machines, often cannot be traced. Even so, two principles have to apply: On the one hand, organizations or private individuals who use such AI-based machines must be transparent about which criteria entered into the calculation. On the other hand, nobody can excuse himself by claiming that it was the machine, not he, that was responsible for a decision. In the end, a natural or legal person must always be held liable if something goes wrong, regardless of whether the fault was caused by a human or a machine.

KNA: AI systems require considerable amounts of data for their calculations - including data about people. How is the protection of personal data guaranteed here?

Dabrock: Handling data responsibly is a key issue of the 21st century. But here, too, we have to be honest. We cannot carry the principles of old-style data protection, such as data minimization, purpose limitation and informed consent, before us like a monstrance and at the same time want to use all the advantages of machines and systems that require huge amounts of data.

That is why the German Ethics Council published a statement a year ago in which it proposes a paradigm shift from data protection to data sovereignty. Specifically, this means: We want to preserve informational self-determination, but not have it exhausted at the moment consent is given; rather, people should retain control over the handling of their personal data afterwards. There are exciting technical and administrative models for this. It can be done. And such possibilities show that data subjects do not simply have to sell themselves off. Politicians must then also want individuals to remain in control of their data, and must support such initiatives.

The interview was conducted by Stefanie Ball.