Interview with former constitutional judge Prof. Udo Di Fabio
With a new law, Germany wants to allow cars without drivers in certain areas. In this interview, Prof. Udo Di Fabio explains which freedoms must remain with the individual and which principles apply to so-called dilemma situations. Di Fabio chaired the Ethics Commission for Automated and Networked Driving until 2017. From 1999 to 2011 he was a judge at the Federal Constitutional Court.
How much constitutional leeway is there for self-driving cars?
The ethics commission that I had the honor of chairing drew up 20 guidelines for the introduction of automated driving systems – and the German government has signaled that it intends to follow them. One key point: the state must promote new technologies if they lead to significantly lower accident rates. But it must not put citizens in a situation where they have no choice but to entrust themselves to automated systems. People should be able to decide whether or not to surrender the steering wheel.
How realistic is freedom of choice if technology causes fewer accidents than humans? Insurers could, for example, demand higher premiums from people who continue to drive themselves...
I think that is certainly possible. If the technology is significantly safer, then such questions will arise. A society then has to decide whether it is willing to accept this logic, or whether driving oneself is a basic right that must be defended.
How far advanced is the German discussion on autonomous driving?
We are still very much at the beginning. However, the debate on mobility concepts is more rational than other ethical discussions, for example on genetic engineering. We reflect objectively on what we want and what we do not want. That is an appropriate way of dealing with it. Car manufacturers can support the debate by seeking dialogue with associations, technical universities or individual users.
“If an accident cannot be prevented, then artificial intelligence must not differentiate according to the characteristics of potential victims.”
Much discussed are dilemma situations in which an accident is unavoidable for an autonomous vehicle. What principles must apply?
If an accident cannot be prevented, then artificial intelligence must not differentiate according to the characteristics of possible victims. It must not weigh up according to age, gender, income or other things. It must only be programmed to follow the general principle of harm reduction. Behind this is our constitutional understanding: Every human life has the same value.
How is it that rare borderline cases receive so much attention?
It is not so much about practical relevance. Constructed dilemma situations – swerve to spare the children or their grandmother – hardly ever occur in everyday life. According to my observation, we discuss these dilemma situations as stand-ins for a fundamental question: Do we leave our fate to a fully automated system? People hesitate for understandable reasons.
What are these reasons?
We worry about waking up in a smart everyday world in which technology does things better than we do, while we gradually lose control. Part of the discomfort is that people are measured against smart machines, which set standards of perfection we cannot meet. In such a world, it is conceivable that more and more decisions will be transferred to artificial intelligence. In the USA, for example, there is a discussion about whether machines could be the better judges, since they are not guided by prejudices or moods. At the moment, technology is still very vulnerable in many areas, so it is obvious that humans must have the last word. But we are only at the beginning of a development.
Who would be liable if a self-driving car caused an accident?
In principle, the answer is simple: if the vehicle controls itself, the manufacturer or the operator of the technical systems is liable. If a human takes control of the vehicle, the driver is liable. Drawing that line is difficult, however. With automated driving at Level 3, drivers are allowed to turn their attention away from the traffic temporarily, but they must take over the wheel again when the vehicle asks them to. From a legal point of view, it is important to define clear handover times. Presumably, cars will have to be equipped with a black box similar to those in airplanes that records driving data. After a collision, it will then be possible to analyze who was in charge at the decisive moment.
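Determining "who was in charge at the decisive moment" from a black-box log could, in the simplest case, amount to looking up the most recent handover event before the collision time. The following is a minimal, purely illustrative sketch – the class and function names are hypothetical, not any mandated recorder format:

```python
from bisect import bisect_right
from dataclasses import dataclass

@dataclass
class ControlEvent:
    """One handover record in a (hypothetical) black-box log."""
    timestamp: float   # seconds since trip start
    controller: str    # "system" or "driver"

def controller_at(events: list[ControlEvent], t: float) -> str:
    """Return who was in control at time t, given a time-sorted event log."""
    times = [e.timestamp for e in events]
    i = bisect_right(times, t) - 1  # index of last event at or before t
    if i < 0:
        raise ValueError("no control record before the queried time")
    return events[i].controller

# Example: the system drives from the start; at t=120 s it requests a
# handover and the driver takes over.
log = [ControlEvent(0.0, "system"), ControlEvent(120.0, "driver")]
print(controller_at(log, 60.0))   # -> system
print(controller_at(log, 130.0))  # -> driver
```

In practice, the legally decisive point is exactly the handover definition the interview mentions: the log must make unambiguous when responsibility passed from system to driver.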
The manufacturer of a self-driving car does not have to be its operator – think of future mobility services. Who is liable in which case?
The business community will have to find solutions for this through private-law arrangements. If a company builds a car with automated systems and puts it on the road, then that company is liable in the first instance. However, this liability can be transferred by contract to another company that operates, monitors or controls the systems. Neither the legislator nor an ethics commission needs to determine whether such a differentiation is necessary. This will happen within the mobility economy.
What is the half-life of laws when technology changes as fast as in autonomous driving?
I see different speeds. On the one hand, autonomous driving in city centers is progressing more slowly than many people expected. On the other hand, we will probably experience leaps in development in the next ten years that we cannot yet imagine exactly. We should be talking today about the extent to which fully automated driving on the highway can be permitted. We should also think about how we can prepare the infrastructure for automated traffic. Some states in the U.S. have already made progress with autonomous driving. It is important that Germany is also one of the pioneers.
Would you sit in an autonomous car?
I am passionate about driving – but in some situations I am just as happy to hand over responsibility. With the Ethics Commission, we rode in a self-driving test vehicle – in heavy rain on the freeway. I was genuinely annoyed when the system asked me to take back control toward the end; I would have preferred to keep talking with the other passengers. I can also well imagine letting myself be driven at night rather than grinding out the last few kilometers home exhausted. My model for the future: you drive when you feel like it. And when you don't, you let the car take over.
Prof. Udo Di Fabio (66) was a judge in the Second Senate of the Federal Constitutional Court from 1999 to 2011. His department included European law, international law and parliamentary law. From 2016 to 2017, he chaired the Ethics Commission for Automated and Networked Driving set up by the Federal Ministry of Transport. Since 2003 he has been Professor of Public Law at the University of Bonn.
The levels of autonomous driving
Level 1: Assisted driving
Individual assistance systems provide support for certain driving tasks – in longitudinal control (e.g. accelerating and braking) or lateral control (steering).
Level 2: Partially automated driving
Complex assistance systems relieve people of the entire driving task, which includes both longitudinal and lateral guidance. However, the driver must always keep an eye on the traffic.
Level 3: Highly automated driving
The driver can give up control in certain situations. However, they must take over the wheel when the technology demands it. At present, type approval of such systems is not yet possible in Europe.
Level 4: Fully automated driving
The system takes complete control for defined applications. It can stop the vehicle automatically if necessary.
Level 5: Autonomous driving
The vehicle moves driverless in any environment. The human being becomes a passenger.
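The five levels above can be captured in a small lookup structure that also encodes the key legal distinction from the interview – whether the human must still monitor traffic. This is an illustrative sketch with hypothetical names, not an official SAE or regulatory artifact:

```python
from enum import IntEnum

class DrivingLevel(IntEnum):
    """Automation levels as listed above (illustrative naming)."""
    ASSISTED = 1     # individual assistance systems support single tasks
    PARTIAL = 2      # system handles longitudinal AND lateral guidance
    HIGH = 3         # driver may disengage temporarily
    FULL = 4         # system in complete control for defined applications
    AUTONOMOUS = 5   # driverless in any environment

# Must the human continuously monitor traffic at each level?
DRIVER_MUST_MONITOR = {
    DrivingLevel.ASSISTED: True,
    DrivingLevel.PARTIAL: True,       # "must always keep an eye on the traffic"
    DrivingLevel.HIGH: False,         # only when the system requests a handover
    DrivingLevel.FULL: False,
    DrivingLevel.AUTONOMOUS: False,
}

print(DRIVER_MUST_MONITOR[DrivingLevel.PARTIAL])  # -> True
print(DRIVER_MUST_MONITOR[DrivingLevel.HIGH])     # -> False
```

The boundary between Levels 2 and 3 is exactly where the liability question from the interview becomes acute: above it, the system carries responsibility until it hands control back.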