Tesla has confirmed that a car involved in a crash that killed its driver was in Autopilot mode at the time. The news comes just weeks after an autonomous vehicle – part of a trial run by taxi giant Uber – hit and killed pedestrian Elaine Herzberg as she walked her bicycle across the road in Tempe, Arizona.
Tesla introduced new safety measures to its Autopilot system last year, but these were clearly not enough. Uber, meanwhile, has withdrawn its autonomous vehicles from public testing – for now at least – and Toyota quickly followed suit, with both moving trials to controlled, non-public test environments.
Working for Control F1 – part of the autonomous and connected vehicles industry – I’ve long been an advocate of caution in this area. Commercial progress in this field is substantially outstripping legislation, especially given how much pressure and value is placed on being first to market. While I’m sure no one in the industry is ignoring the potential for harm to other road users, the perceived race to autonomy risks putting more value on innovation than on public safety.
Tragedies such as these two deaths also, understandably, do serious damage to public trust. In a paper published last autumn, a group of behavioural researchers predicted that reactions to autonomous vehicle crashes would be “outsized” – that is, much greater than the reaction to an “everyday” pedestrian road death.
The ethics are complex and multifaceted. In both the Tesla and Uber cases, the vehicles did have a “driver” whose job was to take control if the systems failed (and it should be noted that although Tesla’s Autopilot does some of the things a fully autonomous machine can do, it is officially dubbed a “driver assistance system”).
The likelihood is that both crashes occurred because the drivers were not paying enough attention and placed too much trust in the system. In the Tesla case, the company states that “the driver’s hands were not detected on the wheel for six seconds prior to the collision”. In the Uber case, video evidence released by the police shows the driver’s eyes were off the road for several seconds at a time. These are things no driver should do under normal circumstances.
We’ve seen this ourselves at Control F1 in simulation testing with Nottingham University’s Human Factors Research Group. After an initial period of hyper-awareness (it’s an uncomfortable feeling at first, sitting in the driver’s seat of a vehicle that doesn’t require your input before it starts moving), people quickly become comfortable that the car can drive itself. Within a relatively short period of time we see their concentration lapse as they begin to attend to other matters, such as reading emails, browsing the web or streaming their favourite show on Netflix.
Current legislation is scant. The fact that we have vehicles from multiple manufacturers, using different sensors and different software, being tested in the real world with real potential for harm and no proper legislation regarding minimum safety standards is quite staggering. At the same time, some locations in the US and the UK are positively falling over themselves to assist companies in setting up “living labs” in their environments. The people this affects – drivers, pedestrians, cyclists or indeed any other road users – have no say in becoming part of these experiments.
Governments have generally taken a hands-off approach, other than offering some voluntary guidelines. In the US, for example, the Department of Transportation requests that autonomous vehicle developers submit a “voluntary safety self-assessment”, covering how their vehicles are designed, how they react in emergencies and how the companies approach cyber security. So far, of the 52 companies with permits to test autonomous vehicles in California, only General Motors and Waymo have complied.
And yet there is another layer of ethical complexity in all of this, because to date there have been a total of three fatalities that could ostensibly be attributed to autonomy, compared with the 3,000+ people killed by human drivers every day globally (not to mention the many more left with serious, life-changing injuries). Those of us working in the space believe that autonomous vehicles have the potential to dramatically reduce vehicle-related accidents in the future. But to get to that point we have to find a way of safely testing them in genuine public driving conditions, with all the elements we as drivers must interact with daily.
Governments indicate that they don’t have the knowledge to put effective legislation in place to regulate this, and the wrong legislation could stifle innovation. But they do have access to the many businesses in the autonomous vehicle space, which could at the very least help to create a simple and universal set of rules.
For example, aeroplanes are now highly automated, yet still require two pilots on duty to monitor the situation. It doesn’t feel like too much of a leap to apply this to the testing of autonomous vehicles, with one person watching the road and the other observing both the efficacy of the vehicle and the driver. Frequently rotating drivers could also help to address the problem of drivers quickly becoming accustomed to the autonomy and placing too much trust in the system in these early days.
The other ethical element to consider is, of course, blame. Is the driver to blame, as they were in charge of the vehicle at the time of the incident and not paying enough attention to the road? Is the organisation to blame for failing to provide the safety standards or systems that would ensure the driver remained focused at all times? Or does the fault lie with the software developer, who for whatever reason failed to code for a specific situation, or with the many engineers who designed and fitted the vehicle’s multiple sensors?
As we move towards autonomy, many complex questions will come into play, requiring serious thought, debate and legislation. For now, though, some forethought about public safety, and some standards that all test vehicles, organisations and drivers must adhere to, would go a long way towards improving public confidence, whilst still allowing the necessarily robust testing of these vehicles in real-world environments to continue.
By Dale Reed, Product Development Director at ControlF1
Photo Credit: Anton Prakash
ControlF1 is a founder member of MCAV. For more information about the benefits of membership, please click here.
Alternatively, contact Hayley at email@example.com for more information.