Legalizing Driverless Cars: Options, Approaches, and Controversies
Self-driving cars have been something of a sci-fi staple for decades, second perhaps only to flying cars as a symbol of the future. These days they're still not as widespread as the novels promised, but work on the technology is ongoing.
The first attempts to create an autonomous vehicle date back to the early 1960s. In 1961, James L. Adams, then a student at Stanford University, created the “Stanford Cart,” a prototype self-driving vehicle. Adams built the cart to support his research on controlling a remote vehicle using video information.
The technology today is nothing like that. Google, Uber, and established automotive giants such as GM and Toyota are all trying to find their place in the nascent driverless car industry and are significantly improving the existing software and hardware. Despite this, legislation for self-driving vehicles is still largely absent, and lawyers, regulators, manufacturers, and engineers still struggle to find common ground on how autonomous cars should be regulated.
Driverless Cars and Their Unclear Status
Most countries have specific requirements determining whether a particular vehicle may be driven on public roads. If a vehicle satisfies all of them, it is considered “road legal” or “street legal.” Broadly, these regulations require a vehicle to be equipped and licensed for use on public roads, ensuring its roadworthiness. The car must have a specific configuration of safety equipment (e.g. airbags, seat belts, brakes, a first aid kit) and lights, and, most importantly in this context, it must have a driver, a requirement found even in international instruments such as the U.N. Convention on Road Traffic. Driverless cars obviously don’t comply with the latter requirement and are therefore not road legal, though in certain areas local governments have made exceptions so that they can be duly tested.
Still, even in those areas there are notable issues. First, a driver must sit at the wheel of a self-driving car, ready to take control in critical or complicated situations. For now this is just a precaution, and regulators will presumably abolish such requirements over time.
The second issue is the immaturity of the technologies used in the development and testing of driverless cars. For example, Google uses the Internet to create the infrastructure for its driverless cars, namely the maps of its Google Street View (GSV) service. That raises several questions. Will Google’s cars work without an internet connection? What will they do in places not covered by GSV? Will they work at all?
The third issue is human irrationality. In theory, driverless cars are safer than cars controlled by humans. However, you’ve probably heard about the tragedy in Tempe, Arizona, where Uber’s self-driving car killed a pedestrian. The Tempe Police Department said that Uber would likely not be at fault: the woman was crossing the street with her bike outside a crosswalk, violating the traffic rules. According to the released footage from the car’s dash cams, the safety driver wasn’t looking at the road at the time of the collision and appeared shocked at the last moment as the car failed to stop. It’s hard to say who is right and who is wrong here, but there was clearly negligence on both sides, the driver’s and the pedestrian’s.
Moreover, society, including both authorities and ordinary citizens, isn’t quite ready for wide adoption of driverless cars. For instance, the State of California Department of Motor Vehicles (DMV) stated in a report that in two of six incidents involving driverless cars, humans caused the damage while the car was in autonomous mode, even though drivers were in the cabin as prescribed by the existing legislation. So people may well show an aggressive attitude toward this type of car.
There is also the issue of potential legal constraints in areas such as cybersecurity, data protection, insurance, and all sorts of liability (criminal, civil, and so on). It’s hard to imagine big fleets of autonomous cars right now, but eventually they might become normal, and such fleets may change the concept of private vehicle ownership altogether.
So there are many questions, about the technology, the legislation, and even human attitudes, that should be addressed before self-driving cars can be considered road legal. Engineers should improve their creations, regulators should work on comprehensive regulation of driverless vehicles, and people, both pedestrians and drivers, should be a bit more responsible and attentive.
Who Pays for It?
For now, the main concern for manufacturers of driverless cars is safety. It’s hard to convince people that autonomous cars are in fact safer than their traditional peers, and quite easy to destroy a good reputation with a few unfortunate accidents.
The problem, however, isn’t as complex as it may seem. First of all, regulators need to define who is liable if an accident occurs. There are several options here:
- The driverless car itself.
- The human driver / owner of the car.
- The manufacturer.
- Both the human driver / owner and the manufacturer.
The driverless car is autonomous and makes its own decisions, so it should be liable for them. Sounds reasonable, right? Yet it’s hard to fit such a provision into the existing regulatory framework of any country, even when the damage includes the death of a human being. The problem is that property, and a car is definitely property, cannot be held responsible for its deeds.
Holding the driver liable for accidents isn’t right either. Why? The main advantage of self-driving cars is that the computer does the driving, so humans can use their time for other things, like reading books or working during their commute. In some cases the driver is the car itself, and the person behind the wheel is just a passenger, which brings us back to the argument that driverless cars can’t be held responsible for a crash.
So perhaps manufacturers should take all the responsibility for the operation of their automobiles when an incident happens due to the inappropriate condition of the car (e.g. problems with the brakes or steering) or of the software that controls it. Some argue that manufacturers should share liability with road infrastructure owners when the road affected the autopilot and was the cause of the accident.
Or perhaps manufacturers and owners / human drivers should be liable together: the former for technical problems that may arise during the car’s use, and the latter for their part in keeping the vehicle in the recommended condition. However, this approach raises too many open questions. Who should be liable if the owner hasn’t installed a software update that would have resolved the problem behind the traffic incident? How should damages be divided between owners and manufacturers? These questions may even lead to the conclusion that the particular software developer should also be held responsible for the damage.
Legal issues aren’t the only ones, though. Some ethical questions are yet to be clarified, first of all the driverless-car adaptation of the classic trolley problem. In its modern iteration, the car can turn onto one of two roads, one with five pedestrians and the other with only one, or swerve into a ditch. What should the autopilot do? Option one is to sacrifice the driver and save the pedestrians by going into the ditch. Option two is to kill one person instead of five. Option three is to kill five people instead of one. This is one of the most fundamental questions of moral philosophy, and it has no universally accepted answer. Yet for driverless cars to become road legal and safe, it has to be answered somehow.
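To see why this is so hard to legislate, consider what a purely utilitarian autopilot policy would look like in code. The sketch below is a hypothetical illustration only; no manufacturer's actual decision logic works this way, and the option names and casualty counts are assumptions taken from the scenario above.

```python
# Hypothetical sketch of a purely utilitarian collision policy.
# Real autonomous-driving stacks do not encode choices this way;
# this only illustrates why the trolley problem is contentious.

def choose_maneuver(options):
    """Pick the maneuver with the fewest expected casualties.

    options: list of (name, expected_casualties) tuples.
    Ties are broken by list order, which is itself a moral choice
    the regulator or manufacturer would have to justify.
    """
    return min(options, key=lambda opt: opt[1])

# The three outcomes from the scenario above:
options = [
    ("swerve_into_ditch", 1),  # sacrifice the occupant
    ("turn_onto_road_b", 1),   # hit the lone pedestrian
    ("stay_on_road_a", 5),     # hit five pedestrians
]

print(choose_maneuver(options))  # -> ('swerve_into_ditch', 1)
```

Note that a pure casualty count cannot even distinguish the first two options: both "cost" one life, yet one sacrifices the car's own occupant. Whatever tie-breaking rule fills that gap is exactly the ethical judgment regulators have so far declined to codify.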
How Driverless Cars Could Be Legalized
As stated above, before driverless cars can be legalized, the technology must mature and many legal questions have to be addressed.
Let’s take a look at regulators’ current attempts to make autonomous cars legal. For instance, the State of New York proposed issuing licenses for demonstrating and testing autonomous vehicles under certain restrictions. The act expires on April 1, 2018, so a report on the results of the initiative should appear soon.
Germany also decided to regulate the new technology: on July 21, 2017 it enacted the so-called AV Bill, which legalized driverless cars by amending the German Road Traffic Act (Straßenverkehrsgesetz, StVG). Here’s the opinion of White & Case LLP on this law:
“The AV Bill defines the requirements for highly and fully automated vehicles to use public roads. It further addresses the rights and duties of the driver when activating the automated driving mode. The AV Bill does not change the general liability concept under German law. Therefore, both the driver and the “owner” (Halter) remain liable even if the vehicle is in automated driving mode, with drivers able to avoid liability if they lawfully used the automated driving mode. Automated vehicles must be equipped with a black box to identify whether the driver or the system had control at the time of an accident. Since this will help the driver/”owner” (or, in practice, the “owner’s” insurance company) to prove that the vehicle caused the accident, the relevance of German product liability rules and product liability insurance is likely to increase.”
The White & Case lawyers said that the main provisions of the law are the following:
- Highly and fully automated vehicles are defined. The definition inter alia requires that the system is able to comply with traffic regulations, recognize when the driver needs to resume control and informs him/her with sufficient lead time, and at any time permits the driver to manually override or deactivate the automated driving mode.
- The definition does not capture so-called “autonomous vehicles”, i.e. vehicles that do not require a driver.
- The use of automated vehicles is allowed within the limits of the intended use (as will be defined by the individual car manufacturers). The system must inform the driver if a given use is not within the limits of the intended use (e.g. leave the driver’s seat when in automated mode).
- The driver is allowed to avert his/her attention from the traffic. However, the driver must remain aware in order to regain control of the vehicle without undue delay either when prompted by the system or when the driver recognizes (or must recognize) that the preconditions for the automated driving mode are no longer fulfilled.
- 100% increase in the maximum liability limits under the Road Traffic Act (i.e., now: maximum EUR 10 m. for death or injury and maximum EUR 2 m. for damage to property).
- Vehicles with highly or fully automated driving functions must be equipped with a black box. In case of an accident, the black box identifies whether the driver or the system had control of the vehicle and therefore clarifies whether liability lies with the driver or – potentially – with the manufacturer.
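The black-box requirement above can be pictured as a tamper-evident log of who had control at each moment. The sketch below is an illustrative assumption, not the AV Bill's actual technical specification: the field names, the hash-chaining scheme, and the query method are all hypothetical.

```python
# Hypothetical sketch of an AV "black box" control log: a record of
# whether the human driver or the automated system was in control,
# hash-chained so that after-the-fact edits are detectable.
import hashlib
import json
import time

class ControlLog:
    def __init__(self):
        self.entries = []
        self._prev_hash = "0" * 64  # genesis value

    def record(self, controller, timestamp=None):
        """Log a handover. controller: 'driver' or 'system'."""
        entry = {
            "t": timestamp if timestamp is not None else time.time(),
            "controller": controller,
            "prev": self._prev_hash,
        }
        # Chain each entry to the previous one so tampering breaks the chain.
        digest = hashlib.sha256(
            json.dumps(entry, sort_keys=True).encode()
        ).hexdigest()
        entry["hash"] = digest
        self._prev_hash = digest
        self.entries.append(entry)

    def controller_at(self, t):
        """Who had control at time t (last entry at or before t)."""
        candidates = [e for e in self.entries if e["t"] <= t]
        return candidates[-1]["controller"] if candidates else None

log = ControlLog()
log.record("system", timestamp=100.0)  # automated mode engaged
log.record("driver", timestamp=160.0)  # driver takes over
print(log.controller_at(150.0))  # -> system
```

This is exactly the kind of record an insurer or court would query after an accident: if the system was in control at the moment of impact, product liability rules come into play; if the driver was, ordinary driver liability applies.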
Even though the AV Bill doesn’t regulate fully driverless cars directly, it’s a big step toward further legislative development, as it states that drivers may use an autopilot and defines the situations in which they are allowed to do so.
The last but not least example is the State of California. On February 26, 2018 the DMV announced that it would allow the testing of truly driverless cars. According to the Adopted Regulatory Text, “autonomous mode” is the status of vehicle operation where technology that is a combination of hardware and software, remote and/or on-board, performs the dynamic driving task, with or without a natural person actively supervising the autonomous technology’s performance of the dynamic driving task. An autonomous vehicle is operating or driving in autonomous mode when it is operated or driven with the autonomous technology engaged.
With this definition, the DMV eliminated the requirement for autonomous vehicles “to have a person in the driver’s seat to take over in the event of an emergency.” As of March 6, 2018 there were 52 Autonomous Vehicle Testing Permit holders. Thus California may come close to fully legalizing driverless cars in the near future, as the legal framework for autonomous vehicles is being actively developed.
Many companies, including Uber, Lyft, and Waymo, are also testing autonomous cars in Arizona, even though the state has no specific regulations for driverless vehicles. Waymo has already picked up passengers in driverless vehicles and plans to start testing autonomous vehicles without human drivers later this year.
On March 27, 2018 the governor of Arizona suspended Uber’s license to test driverless cars, largely because of the fatal incident described above. The accident launched a wide debate on the issue and created demand for clear safety standards and regulations for self-driving cars. In light of this situation, the approach with no license or permit requirements for autonomous vehicles looks negligent, as human lives are now clearly at stake.
Making Them Legal
Creating a regulatory framework for driverless cars is a long and difficult process. Manufacturers should improve their cars, while regulators, in turn, should allow them to test those cars. Collaboration would be the most efficient way to develop solid and workable legislation: manufacturers get the opportunity to develop new technologies, while regulators identify the problems and loopholes in the legislative framework during the tests. Appropriate laws on cybersecurity, data protection, insurance, and perhaps even AI should also be developed and enforced.
The main question in the regulation of driverless cars is liability. While there is no definitive answer yet, one may assume that if robots are someday considered thinking entities that can be held responsible for their actions, the problem will resolve itself. We are, however, very far from seeing robots that can think and demand that their rights be respected.
In the current legislative situation it’s more likely that the manufacturer will bear all the responsibility, but when should drivers or autopilot software developers also be held liable? Should someone else, such as road infrastructure owners, also be obliged to compensate damages in certain cases? Ideally, all these questions would be settled before car accidents cause damage to property and people.
Thus, the path to legalizing driverless cars has several steps. Regulators should permit tests of autonomous cars under appropriate licensing and reporting requirements. They should also enact provisions on liability in case of accidents, insurance, cybersecurity, and so on. Then, based on data from the tests and other relevant information, an appropriate legal framework allowing driverless cars to ride even without a human driver should be developed.
Currently, the legalization process in all the aforementioned jurisdictions is still at an early stage. However, the next two or three years may well see all the required laws fully developed. Hopefully they will not slow technological progress and will settle the questions of liability, insurance, and driver presence. Over time even international law might change, which would speed up worldwide legalization of autonomous cars.
Follow us on Twitter to stay tuned to the latest developments in the regulation of new technologies, and be the first to read expert opinions and editorials.