When it comes to fully autonomous vehicles, the biggest obstacle manufacturers face is how to program the software and which capabilities to entrust it with.
Consider an emergency in which the car must decide on its own whether to hit a pedestrian who has accidentally fallen onto the road, or to sacrifice the passengers by swerving into another lane. And this is just one question among countless others. While it obviously raises ethical questions, it also poses a problem for manufacturers that needs to be solved right now.
Why the urgency for an immediate solution? The reason is simple: a vehicle cannot be called fully autonomous until it knows how to resolve a situation like this. However, this dilemma will only exist temporarily, during the so-called "transition period", when human-driven vehicles and autonomous vehicles both participate in public traffic. The accident scenarios self-driving cars might face have recently been likened to the key examples and dilemmas associated with the trolley problem.
The Trolley Problem
In the "switch" case, a driverless trolley is heading towards five people who are stuck on the tracks and will be killed unless the trolley is redirected to a side track. You are standing next to a switch. If you pull it, the trolley is diverted onto the side track and away from the five. The trouble is that there is another person on this side track, and that person will be killed if you pull the switch. Nevertheless, a very common response to this case is that it is permissible for you to save the five by redirecting the trolley, thus killing the one as a result (Greene 2013).
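The purely utilitarian reading of the switch case can be caricatured as a bare casualty-count comparison. The sketch below is deliberately simplistic and entirely hypothetical; it only shows why "pull the switch" follows from minimizing deaths, not how any real system reasons:

```python
# Hypothetical illustration of the "switch" dilemma as a bare
# casualty-count comparison. This is NOT how any real vehicle
# decides; it only makes the utilitarian arithmetic explicit.

def pull_switch(casualties_if_stay: int, casualties_if_divert: int) -> bool:
    """Return True if diverting the trolley kills fewer people."""
    return casualties_if_divert < casualties_if_stay

# Classic setup: five people on the main track, one on the side track.
print(pull_switch(casualties_if_stay=5, casualties_if_divert=1))  # True
```

The point of the philosophical debate, of course, is precisely that many people reject this arithmetic in structurally similar cases.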
Translating this problem into the world of self-driving cars is a bit complicated. First, we have to understand how these vehicles communicate with the world and perceive their surroundings. There are two broad types of autonomous vehicles: self-contained and interconnected.
Self-contained autonomous vehicles rely solely on information already programmed into the vehicle. Google’s prototype is an example of a self-contained autonomous vehicle. In contrast, an interconnected autonomous vehicle is wirelessly connected to a communication network or networks, and it can be controlled externally. Not only does the interconnected vehicle receive information over networks, it also transmits its own information to the networks and other vehicles. Regardless of the type of autonomous vehicle, the car will rely on sensors that collect and feed data.
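The distinction between the two architectures can be sketched in code. All class and method names below are invented for illustration; real vehicle software stacks are vastly more complex:

```python
# Hypothetical sketch of the two architecture types described above.

class SelfContainedVehicle:
    """Relies only on onboard sensors and pre-programmed information."""
    def __init__(self, onboard_sensors):
        self.onboard_sensors = onboard_sensors

    def perceive(self):
        # The world model is built from local sensor readings alone.
        return [s.read() for s in self.onboard_sensors]

class InterconnectedVehicle(SelfContainedVehicle):
    """Additionally exchanges data with networks and other vehicles."""
    def __init__(self, onboard_sensors, network):
        super().__init__(onboard_sensors)
        self.network = network

    def perceive(self):
        # Local readings are fused with data received over the network.
        return super().perceive() + self.network.receive()

    def broadcast(self, state):
        # The vehicle also transmits its own state to the network.
        self.network.send(state)
```

Either way, as the text notes, the perception layer ultimately rests on sensors feeding data into the decision software.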
Therefore, even if car-to-car communication, sensors, and algorithms are all functioning properly (and are better than current technology), self-driving cars might not always have sufficient time to avoid collisions with objects that suddenly change direction. Self-driving cars may sometimes collide with each other. Moreover, there are other moving objects to worry about: pedestrians, cyclists, and wildlife naturally come to mind. However, we must also take into account human-driven cars. As experts generally acknowledge, self-driving cars will drive alongside human-driven cars for a relatively long period of time (the so-called "mixed traffic"). For these reasons, automated vehicles need to be programmed to respond instantly to situations where a collision is unavoidable.
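Programming a response to an unavoidable collision amounts to choosing the least harmful of the remaining options. The following is a hypothetical sketch; the maneuvers and harm scores are invented, and a real system would score candidate trajectories from live sensor data rather than fixed numbers:

```python
# Hypothetical sketch of "programming how to crash": when no maneuver
# avoids a collision, pick the one with the lowest estimated harm.
# Maneuver names and cost values are invented for illustration.

def least_harmful_maneuver(options: dict) -> str:
    """options maps a maneuver name to an estimated harm score
    (lower is better); returns the maneuver with the lowest score."""
    return min(options, key=options.get)

unavoidable = {
    "brake_straight": 0.7,  # hits the obstacle, but at reduced speed
    "swerve_left":    0.9,  # crosses into oncoming traffic
    "swerve_right":   0.4,  # clips the guard rail
}
print(least_harmful_maneuver(unavoidable))  # swerve_right
```

The hard part, and the ethical crux of the article, is of course how those harm scores are assigned in the first place.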
So the question for the future is not only how to avoid the crash, but how to crash. At first blush, it might seem like a good idea to transfer control to the people in the car whenever an accident is likely or unavoidable. However, human reaction times are slow. Hence, the software itself needs to be written to handle a crash. This is where software plays a key role, reacting far faster than any human could.
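The reaction-time argument can be made concrete with back-of-the-envelope arithmetic. The figures below (roughly 1.5 s for an alerted human driver, tens of milliseconds for software) are illustrative ballpark values, not measurements:

```python
# Distance travelled before any response even begins, at highway speed.
# Reaction times are rough ballpark figures chosen for illustration.

def distance_during_reaction(speed_kmh: float, reaction_s: float) -> float:
    """Metres covered while the driver (or software) is still reacting."""
    return speed_kmh / 3.6 * reaction_s

speed = 100  # km/h
human = distance_during_reaction(speed, 1.5)      # ~41.7 m before braking starts
software = distance_during_reaction(speed, 0.05)  # ~1.4 m
print(f"human: {human:.1f} m, software: {software:.1f} m")
```

At highway speed, handing control back to a human can cost tens of metres before the first input is even made, which is exactly why the handoff strategy fails in emergencies.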
Fasten your seatbelts
One of the main arguments for autonomous vehicles is safety. Surveys show that 90% of fatal collisions are caused by human error. With software prepared for such situations, the panicky, disorganized way a human reacts in an accident could be cut out. Even in an unavoidable collision, the software would be able to choose the best scenario in a split second, reducing the severity of the crash as much as possible.
However, we are entering the transition period, and a new chapter in safety issues is about to begin, in which self-driving cars are required to "predict" events in order to counter human error. A few weeks ago, Tesla's Autopilot software played a key role in avoiding a fatal crash by analysing the motion of passing vehicles and engaging emergency braking right before the impact. This incident clearly highlighted the pros and cons of automated driving and raised awareness of the transition period.
Today's vehicles already feature many automated functions, such as electronic stability control and parking assist, and there is an increasing number of combined-function automations. Nonetheless, current software relies heavily on human intervention when it comes to fully autonomous driving, and so far none of the big companies in this industry has come up with a solution. So whoever creates software that can solve these problems will be the game-changer, since the future of driverless cars depends on this technology.