The death of a woman in Tempe, AZ, in the US after being hit by an autonomous vehicle this week has ignited a conflagration of hatred towards AI. The vehicle was travelling within the speed limit on an unlit road when it collided with a pedestrian who was crossing illegally.

Before delving into statistical comparisons of thousands of iterations of software-driven vehicles versus human drivers across hundreds of millions of lifetimes, one should watch the video of the collision and tragic fatality that occurred this week. No one could argue anything other than that this incident was awful, and that regulation and safety should always come first. But the reaction to this video shows that a significant proportion of the population reflexively attacks AI and technological progress before understanding the basics of the science, or the facts of the case.

The video is disturbing. For those who would rather not watch it: the self-driving car is obeying the speed limit, driving down an utterly unlit street. The vehicle's headlights provide intermittent illumination, and only some 3-5 meters (10-15 feet) before impact does the victim become visible on the video, crossing the unlit street illegally with a bicycle.

If the video's lighting is accurate, even the most alert human driver would have been unlikely to prevent this collision.

As a rule, we rely on posted speed limits to set norms for how fast we should drive, but we also rely on other humans to follow the rest of the rules of the road. The pedestrian was walking down an unilluminated street, in a non-designated crossing area, and failed to look for oncoming traffic. The vehicle, and the human backup driver, both failed to observe the pedestrian. This may have been a technical failure of the sensors, but it is a huge stretch to place sole fault for the accident on the software, the driver, or autonomous vehicles in general, when in the context of this incident the vehicle may still have performed better than a human.

The Uber vehicle was equipped with radar, cameras, and lidar, which uses lasers to provide a 360-degree view of the surroundings. Its range should have been upwards of 100 meters, and by that technical standard this does constitute a failure. For autonomous vehicles to prove successful, they should be more perceptive and capable than a human driver. But relying solely on optical sensors, or human eyes for that matter, to perceive that pedestrian on an unlit road, at Tempe's typical speed limit of 35-45 MPH (56-72 km/h), it would have been difficult, if not impossible, to spot her and stop the vehicle before the collision.
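A rough calculation makes the point concrete. The sketch below is a simplified physics estimate, not a reconstruction of the actual incident: it assumes a dry-road deceleration of about 7 m/s² and an idealized zero reaction time, both of which favor the driver.

```python
# Minimum braking distance for a vehicle, ignoring reaction time.
# Assumptions (illustrative only): flat dry road, ~7 m/s^2 deceleration.

def stopping_distance_m(speed_kmh: float, decel_mps2: float = 7.0) -> float:
    """Braking distance in meters from the kinematic formula v^2 / (2a)."""
    v = speed_kmh / 3.6  # convert km/h to m/s
    return v * v / (2 * decel_mps2)

for kmh in (56, 72):  # roughly 35 and 45 MPH
    print(f"{kmh} km/h -> {stopping_distance_m(kmh):.1f} m to stop")
# 56 km/h -> 17.3 m to stop
# 72 km/h -> 28.6 m to stop
```

Even under these generous assumptions, the car needs roughly 17-29 meters to stop, while the video suggests the victim became visible only 3-5 meters before impact: far too late for brakes alone, human or machine, to matter.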

This is indeed a failure. What it is absolutely not is a wake-up call. This is not the first, or the last, death that autonomous vehicles will cause. There is inherent risk in this endeavor, but the fact is that humans are prone to making random errors, while machines will only make the errors that humans program into them. Anyone who has ever sped to work because they were late, accelerated after passing a slower driver, cut someone off, or gotten behind the wheel at less than 100% cognitive capacity, proves this fact. One could also blame the city for not adequately providing crosswalks, for not adequately illuminating the street, or for setting the speed limit improperly. The real issue, as has been recurrent in autonomous vehicle studies, is that humans are inherently random and unpredictable, and their behavior is often dangerous to themselves and others. Anyone who claims that this incident exposes some massive flaw in autonomous systems is either misinformed or pursuing an agenda against the technology. The video and the technology speak for themselves.

Those calling for shutdowns, new laws, or criminal liability for the companies involved are unapologetic, self-serving Luddites or publicity seekers, contrary to whatever they themselves may claim. The NTSB will review the evidence and find the cause, companies developing this software will learn and improve, and if negligence took place, it will be punished; but it will end there. If safety was not the primary priority in developing this vehicle, that will be rectified.

Automation has come to dominate every industry where it has been introduced. Despite opposition and setbacks, it has always improved and, eventually, replaced humans at simple, repetitive tasks. This prospect instills fear in the hearts of many who are not ready for change.

From automobile assembly, to accounting software, to aircraft autopilot and missile guidance, the stakes were at one point deemed too high for failure, but the trajectory towards the automation of basic tasks is constant. Fully autonomous consumer vehicles are coming en masse, and they are coming soon. There is often a painful price to pay for progress, but there is also a reward when it succeeds. Serving as the home base for autonomous vehicle technology will benefit America massively when that technology ultimately takes over. If America pushes away innovation out of fear of new technology, it will not only allow someone else to take our place, it will change who we are: a country that has never shied away from new ideas.

Staff writer: Ari B

Photo credit: Oskar Krawczyk