Rideshare and Self-Driving Cars

Rapid advances in technology have car manufacturers fighting to be at the forefront of new developments. In the automotive industry, ride-hailing companies have outpaced almost everyone else when it comes to self-driving cars, all with the aim of increasing revenue by deploying autonomous vehicles for their ride-hailing clientele. Uber and Google's Waymo come to mind, while among car manufacturers, Tesla is at the forefront of machine-assisted driving.

But is there a real threat that these autonomous vehicles will drive rideshare drivers out of business?

Well, maybe, but not as soon as it once seemed.

After one of Uber's self-driving cars was involved in a fatal crash on March 18, 2018, the technology hit a snag, at least for now. Uber immediately announced it was halting further testing, as did Waymo, Toyota, and Nvidia, among others.

It was not, however, the first fatal accident involving an autonomous vehicle.

In May 2016, a Tesla Model S driven by Joshua Brown crashed into a tractor trailer that crossed its path, fatally injuring him. On January 22, 2018, another Model S rammed into the back of a fire truck near Culver City, California. Fortunately, that driver survived. Walter Huang, the 38-year-old driver of a Tesla Model X, was not as lucky. On March 23, five days after the fatal Uber crash, Huang's SUV rammed into a California highway barrier, fatally injuring him.

The Model X was in Autopilot mode at the time of the crash. Tesla, after recovering and analyzing the vehicle's logs, stated that there had been 'several visual and a single audible' prompts for the driver to reclaim control of the vehicle. According to a Tesla statement via a blog post, 'There was no action taken by the driver, even though he had approximately 5 seconds and 150 meters with a clear view of the divider.'

Even though Tesla markets its vehicles as 'Autopilot' enabled, there clearly are flaws.

Which raises the question: how safe are these self-driving or semi-autonomous vehicles?

The Safety of Self-Driving Vehicles

NHTSA, in its investigation of the first Tesla Model S crash, concluded that the Tesla, in its Autopilot mode, was operating as expected and was not at fault. The technology has supporters and critics alike, but there is general agreement that it is the way to reduce the number of road deaths, which in the US stands at over 30,000 a year.

With the recent streak of fatal crashes involving semi-autonomous vehicles, consumer trust and confidence in the technology could be eroded. Supporters, however, are of the opinion that the technology is just scratching the surface. There have also been reports showing that the majority of these accidents, though involving self-driving vehicles, were entirely avoidable had the driver been alert or responded to warnings.

In the case of Tesla, it has emerged that the drivers involved in the crashes had left it entirely to Autopilot to navigate their vehicles. Tesla, in defense of its technology, argues that Autopilot is only meant to ease highway driving and act as an assistant, like cruise control. Investigating authorities have also been supportive, concluding that the drivers were at fault.

In one case, the Tesla driver had allegedly been watching a Harry Potter movie, which is illegal, while the car was on Autopilot.

The use of backup drivers in these vehicles, especially by ride-hailing companies, has not stopped them from crashing. Tesla's Autopilot as it stands today is limited to highway cruising. In essence, drivers must remain alert and be prepared to regain control at all times, a clearly failure-prone arrangement.

The same was true of Uber's first fatal crash. Uber's system also depends on a trained operator to take over where the technology falls short.

The distinction is that Uber's technology is being developed for operation on local roads with a wide range of variables, including pedestrians, while Tesla's driver-assist Autopilot, as mentioned earlier, is only meant to ease highway driving. There is also the fact that Uber is not manufacturing vehicles for sale to the mass market. Tesla, on the other hand, seeks to offer its technology to consumers even while it is still in beta.

Each company's hope is that its technology is not found to be faulty, as that would have a resounding impact on user trust. Tesla argued in a blog post that 'If the safety level of a Tesla vehicle were to be applied, 900,000 lives of the 1.25 million annual automotive deaths would be saved worldwide.'

The advantages and safety of autonomous vehicles lie in what Thomas Leu, a Google Inc. corporate counsel, says: 'Self-driving cars do not fall asleep, they do not get drunk, and they are not distracted by phone calls or text messages.'

Legal Implications of the Technology

Even though governments are committed to enabling the development of automated vehicles, there is no sufficient legal framework to address the problematic issues.

The complexity of the algorithms used to predict and react to traffic movement, pedestrians, and other objects comes into play. Even though the technology is regarded as lifesaving and improves safety, companies including Google, Tesla, Volvo, and Uber agree that accidents are inevitable in these early stages of self-driving technology.

In the UK, for instance, the blame for accidents will arguably shift from the driver to the manufacturer. The Consumer Protection Act 1987 (CPA, UK) imposes strict liability for injury or damage in cases where a vehicle is ruled defective. The act's definition of a defective product? One 'where safety is not at the level such as people are generally entitled to expect.' Known as the 'consumer expectation test', it eliminates the need to prove manufacturer negligence and instead upholds consumer expectations.

In Tesla's case, this act would work against the company in the UK, given that it markets its vehicles as Autopilot-enabled. The average user of artificial intelligence technology, which is what semi-autonomous software is, often has minimal familiarity with and understanding of the product.


In Tesla's accidents, the outcome of the Autopilot marketing has been unsatisfactory. The name leads users to believe the vehicle can make autonomous decisions and drive itself, which has clearly not been the case. Consumer familiarity will of course increase with time, reducing manufacturer liability, but until then, this is one of the legal loopholes that poses a liability to semi-autonomous technology companies.

Insurance also poses a hurdle for victims claiming damages, as the law has not yet drawn a clear line on liability. In general, most insurance companies hold the 'user' of a car liable in the event of an accident.

In the US, most state and federal laws have loopholes as far as driverless vehicles are concerned. Areas of law that need review to accommodate the technology include tort, privacy, data security, transportation, communications, and administrative law.

It might be a while before we see fully autonomous vehicles taking over our roads. The laws need to be streamlined so as to protect both users and manufacturers. In the short term, however, we will see more vehicles laden with technology that increases safety on the roads.
