Artificial Intelligence and Peace: “We need the political will”

When people think of the use of AI in warfare, they usually think of “killer robots” such as drones. But could artificial intelligence also be used for peacekeeping operations? Angela Kane, former Assistant Secretary-General of the United Nations, talks about the pros and cons of AI in peace missions.

By Svenja Hoffmann and Natascha Holstein

Ms. Kane, you worked for the United Nations for a long time. When did you first recognize the importance of technologies in the context of national security interests, and their potential impact?

I worked for the UN for 37 years, which is an incredibly long time. So I'm a real dinosaur. I first encountered AI, or rather technology more than AI, when I started a new position in the Press and Information Department in 1995. They had just launched their first website. Those were exciting times. For the next two years I was solely involved in web development, and over time more and more material was put online. Basically, this was the starting signal for my later interest in artificial intelligence. Web development had absolutely nothing to do with “intelligence”; it was pure technology, but a lot could be achieved with it. And you could see what was possible when people got access to information, what this actually meant for peace, and what influence could be exerted through it. From my point of view, it was an incredibly powerful tool, especially in the information field. With information systems becoming ever more interconnected, more and more dark sides are emerging. But at that time the positive aspects predominated.

When did the United Nations first deal with the topic of AI?

The UN has been dealing with information and communication technologies since 1998. The issue was originally put on the agenda by Russia for security reasons, because the origin of cyber attacks could never be traced. It is not transparent who triggers something or who orders something. Is it just an amateur hacker? Or is it a targeted attack? There has not been much progress on this issue; an agreement on standards or rules has not been possible. And although a group of governmental experts is working on this topic, we are still a long way from developing regulations or an instrument that can and must be applied by states. Regardless of all the positive aspects of AI, this is an aspect that deserves regulation. There must be a “standard” that is accepted by all states.

What influence does AI have on warfare and on the soldiers who, for example, control drones?

This is one of the most fascinating developments. There have been no declarations of war since the Second World War, although there are numerous conflicts; the official number is 150. The expression “boots on the ground” came up 20 to 30 years ago. It stands for soldiers who fight each other in trenches. Human beings are still involved in warfare and acts of war today, but it is no longer hand-to-hand combat, because everything is controlled electronically. Drones are just the most visible technological element in this process. They can easily be equipped with weapons of war, and they are getting bigger and more powerful. There are rules for the civil use of drones, which, for example, are not allowed to come near buildings, but not for their military use. The result is that a person anywhere in the world, thousands of kilometers away, can program a drone attack on a specific target identified by technological means. That person doesn't really know whether the drone is aiming at the right target or what collateral damage the attack might cause. Of course, soldiers try to avoid this. But maybe they have had a bad day and are angry. A drone feels nothing, but for the person who controls it the situation looks very different.

A drone feels nothing, but for the person who controls it the situation looks very different.

Angela Kane

In addition, nuclear weapons storage facilities are now digitally controlled, which is why it is much easier to hack these systems than it was 20 or 30 years ago. These digital systems have weak points when it comes to defending against and tracing hacker attacks. For example, Iran was held responsible for the attack on the Saudi refinery last year without any evidence to back it up. Had it not been a refinery but a nuclear weapons depot, this attack could have been seen as the first strike in a nuclear war. The use of technologies has multiplied the level of risk. Statistics on civilian collateral damage were kept during the Obama administration; today, however, this information is not disclosed. This lack of transparency violates international humanitarian law, which has been ratified by almost all states. Transparency is essential in this area.

You mentioned that people who control drones can be influenced by their state of mind. However, the use of drones is also shaped by the people who write the code. Bias in the field of AI is a well-known problem. What does this mean for the use of AI in warfare? Could this bias also be used deliberately to the advantage of warring parties?

Yes, of course that is possible. And here, too, we are faced with a lack of transparency. For many people, AI is a black box because they don't know which algorithms were used to program a system. Anyone who develops such an algorithm is never completely impartial, which in turn is very often reflected in the results. We have to make these algorithms and their programming more transparent, because they affect people's lives, whether in everyday life, at work or in war.

AI companies now appoint ethics officers, but they usually have only an advisory role and no decision-making power over what is done and what is not. In addition, there are very few people of color and women on the ethics committees; they consist mainly of older white men, so a clear bias can be assumed even at this level.

In its early days, Google advertised with the slogan “Don't be evil”, but this motto has long since become obsolete. About two or three years ago, the company decided against further developing the image recognition project “Maven” and withdrew from it, whereupon another company immediately took up the matter. Without regulation of the development of these algorithms there will be no change.

What is the role of the public and governments in developing regulations for the use of lethal autonomous weapons systems?

I would like to link this issue to another development, namely the Treaty on the Prohibition of Nuclear Weapons. The treaty stems from a humanitarian initiative that was initially fueled by concerns about the humanitarian consequences of nuclear war. Over time, the movement grew stronger and stronger and won numerous supporters, even though the nuclear powers opposed such a treaty and tried to prevent its adoption. Nonetheless, the only thing still missing for the treaty to come into force is the ratification of three more states. While this would not bring about a complete elimination of nuclear weapons, the prohibition on possessing them would carry more and more weight over time.

A similar development can be observed in the campaign against killer robots, which is being run extremely convincingly. A key point here is that those involved are predominantly young, committed people who can bring about real change. However, it is not yet clear whether they can make their voices heard by governments or parliaments. The matter has not yet reached those levels, but I think we are on the right track.

However, I also fear that it will not be easy, because the development of lethal autonomous weapons is very advanced. Still, 30 states have already spoken out publicly against the development of these weapons. They have submitted initial statements that could possibly be incorporated into a treaty. Above all, it is the countries that do not possess such weapons that speak out against their development, because they are aware that they could potentially be affected by their use.

The term “killer robot” is on everyone's lips. What do you make of it?

“Killer robot” is an extremely catchy term because it gets to the point. I think it stimulates the imagination and sends a clear message. At the same time, I would prefer another name. The term “lethal autonomous weapon systems” is not really easy to talk about, does not necessarily stick in the mind, and sounds downright bureaucratic. I am aware that an appropriate label is lacking. Whether we call them killing machines or mutilating machines, in the end this is exactly what they are used for, and there is nothing to gloss over.

What opportunities for peacekeeping are associated with AI?

AI can play a crucial role in improving the flow of information and transparency, and in this way reach people who would otherwise be completely isolated. Since these technological advances have already been made, we should now also use them for the benefit of people. Doesn't it say at the beginning of the United Nations Charter: “We, the peoples”?

In the Congo, where I myself served, a massacre in 2014 caused great indignation. The peacekeeping forces stationed only 9 kilometers away had noticed nothing because there were no roads. This was the trigger for using drones to monitor regions that are difficult to access or dangerous for troops, and to ensure that no one is harmed. Even if this form of use is very positive, there is still a risk of abuse. Therefore, the use of drones must be applied for anew in each individual case. They are currently used, for example, in Mali and the Central African Republic, but not in South Sudan, because the government there has not approved their use.

Doesn't it say at the beginning of the United Nations Charter: “We, the peoples”?

Angela Kane

Do you think that the potential damage caused by AI can be justified by its positive impact on peacekeeping and the security situation?

Our decisions do not depend on whether or not they can be justified: the technology exists, and we should use it for the benefit of humankind. However, there are no regulations. In Geneva, a body has dealt with these issues since 2014, initially in a larger format open to all member states, then in a smaller body made up of governmental experts. They have a right of veto: if someone disagrees with something, that aspect is not considered any further. They have also defined guidelines, but have not yet been able to agree on a wording from which a treaty or an agreement could be drawn up. I am particularly concerned that the technology has been self-regulating for a long time. The need is there, and we have to become more active.

As an intergovernmental organization, the United Nations operates in the field of diplomatic relations between nations. Occasionally, albeit reluctantly, it draws on the support of NGOs or representatives from industry. But especially in the field of AI, it is absolutely necessary that we work together with industry in order to develop a differentiated strategy for dealing with the problem.

Numerous countries and government representatives lack the necessary knowledge and experience on this issue, which of course cannot be blamed on them. They need to be introduced to the topic, and we need a far more targeted approach to the central question of how we can steer the development of artificial intelligence in the field of tension between warfare and peacekeeping.

In an interview in the “Couch Lessons” discussion series, you said that the problem of using AI in warfare cannot be solved from a human rights perspective, but rather from the perspective of peace, security and disarmament. Can these aspects be separated from one another at all?

Actually, they should be connected to each other. The human rights perspective is extremely useful, but it is easily pushed aside. On the one hand, international humanitarian law has undergone enormous development since the Geneva Conventions. Within the framework of humanitarian law and human rights, states are obliged to grant the civilian population special protection. On the other hand, looking at events around the world, this obligation is not always fulfilled, and at the same time it is disparaged by many sides. For example, the members of the Human Rights Council in Geneva were elected on October 13th. I do not mean to claim that the Human Rights Council only includes countries with an exemplary human rights record. It was interesting, however, that Saudi Arabia was a candidate but was not elected to the Human Rights Council. China, on the other hand, was elected, but received fewer votes than in the previous election. This shows a change in awareness, also with regard to the question of whether a government complies with human rights standards or not.

From my point of view, the human rights aspect should be brought to the fore, and governments should be pilloried. Naming culprits doesn't always work, but the more countries uphold these standards, the better it is for the rest of the world. Many countries in Europe meet high standards of ethical, “right” behavior, but more countries have to commit to complying with them. When the economy falters, when poverty rises, when the number of migrants and refugees grows, the willingness to adhere to these standards decreases; even more so in times of a pandemic. The number of autocratic governments is increasing, and crackdowns on suffering populations are becoming more common. There is more leeway for violations of human rights and humanitarian standards. That is the greatest danger right now: we have to make sure that these standards do not lose any more ground. We need to make sure that governments get back on track. But as to how we can achieve this, I am at a loss.

What does the world need in terms of AI's contribution to peacekeeping?

We need the political will to make progress in this area. But in view of the current political climate, such a commitment seems unlikely to me. The use of AI for peacekeeping is not the focus of industry interest. Industry can steer the development, but it develops precisely what is needed and sought from a military point of view.

I wish the pandemic had shown us that we can only master this situation together.

Angela Kane

I wonder whether the massive investments in military development, a large part of which is also being spent on AI, are really necessary. What else has to happen? Is the pandemic not enough? Who or what are we currently fighting against? In the context of NATO, for example, the concept of the enemy has lost its meaning. The alliance was founded to deter the Eastern bloc states, most of which are now NATO members themselves. I wish the pandemic had shown us that we can only master this situation together, that in the pandemic we are facing a common enemy. Instead, mutual blame for the pandemic and selfish efforts to be the first state to secure a vaccine continue to grow. All in all, we need stronger political engagement, and that is what is missing at the moment.

This situation may change with the next elections, whether with the current election in the USA or with upcoming elections in other countries. Maybe there will be certain changes, maybe not.

Authors

Svenja Hoffmann and Natascha Holstein conducted the interview; they work in the online editorial department at the head office of the Goethe-Institut.

Translation: Kathrin Hadeler
Copyright: Goethe-Institut e. V., Internet editorial office
November 2020
