Safety and AI Self-Driving Cars: World Safety Summit on Autonomous Tech


By Lance Eliot, the AI Trends Insider

I was chatting with Jamie Hyneman, notable co-host of the former MythBusters series, during the recently held World Safety Summit on Autonomous Technology (he was the moderator for the event, held at Levi’s Stadium in Silicon Valley). We both marveled at the notion that in today’s world we all drive around in these metal cans called cars, coming within inches of each other at speeds of 80 miles per hour or more, and yet somehow this happens without continuous catastrophic results.

Of course, there are already significant dreadful outcomes, and the number of car crashes and car-related deaths is abhorrent; approximately 37,000 fatalities occur in the United States alone each year from car-related incidents. Given, though, the volume of cars and the millions upon millions of miles driven, an estimated 3.22 trillion driving miles per year in the U.S. (per Federal Highway Administration statistics), it is somewhat remarkable that there aren’t even more car-related deaths.

By and large, most of these conventional-car adverse outcomes can be traced to the driver of the vehicle. In other words, it’s not particularly that the car itself had some mechanical fault that led to the horrid outcome, but instead that the human driver, by one means or another, was the main contributor to the incident.

One of the key questions to be addressed about the advent of AI self-driving cars is whether the newly emerging Autonomous Vehicles (AVs) will be as safe as, safer than, or perhaps less safe than conventional cars. Obviously, the notion and desire are that AI self-driving cars will be at least as safe as conventional cars, and hopefully much safer.

When I say that an AI self-driving car is safe or safer than conventional cars, I’m not especially referring to whether the drivetrain works better or the engine runs more smoothly; those are facets that we all pretty much assume will be at least as safe as in conventional cars. Instead, the aspect of safety that we’re really referring to is how the self-driving car will be driven.

Will the AI be able to drive an AI self-driving car as safely or more so than a human driver?

If the auto makers and tech firms cannot assure the public and the regulators that AI self-driving cars are “safe,” then the emergence of these AVs will likely be delayed, and progress substantially stunted. Imagine too if AI self-driving cars are allowed fully into the wild (on our public roadways), and it turns out they get into numerous car crashes and people are injured and killed. It is predictable that a backlash of such magnitude could develop that efforts toward AI self-driving cars could become fully undermined and possibly even mothballed.

Marta Thoma Hall, President of Velodyne LiDAR, noted during the World Safety Summit that if you told someone to go across a bridge that is only partially built, their trepidation to do so would certainly be understandable. Yet, there are some in the AI self-driving car industry that don’t seem to get the notion that we are indeed asking people at this time to go across a partially built bridge.

During my numerous conference speaking engagements about AI self-driving cars, fellow AI developers often approach me and ask why the lay public doesn’t trust AI self-driving cars. I am more surprised that these AI developers assume people should have blind faith in the existing capabilities of AI self-driving cars than I am that the public hesitates about these innovations.

For my article about the public perception of AI self-driving cars, see:

For my article about my overarching framework for AI self-driving cars, see:

For my forensic analysis of the Uber incident in Arizona, see:

The tech industry relishes embracing the “fail fast, fail first” mentality, and yet in the case of AI self-driving cars it is quite unlikely that the public and regulators are going to tolerate substantive failure rates. Failing fast when developing a social media site or crafting a game app might be sensible, but AI self-driving cars are real-time systems that encompass life-or-death matters. I don’t think most of society will have a stomach for failing fast there, in spite of what some would say is worthwhile given the desired outcomes at the end of the rainbow once AI self-driving cars have been perfected.

Furthermore, the increasing number of firms aiming to produce AI self-driving cars (around 50 such firms are signed up to test their self-driving cars on California roads) would seem to increase the chances of one bad apple in the bunch. If an experimenting auto maker or tech firm, once granted the privilege to put its AI self-driving car on public roads, causes headline-worthy injuries or deaths by its contraption, all of the rest of the auto makers and tech firms trying to develop and field AI self-driving cars could get tainted by the same sour outcome. One bad apple can regrettably and absolutely spoil this entire barrel.

It is imperative that safety and AI self-driving cars go together. They must be joined at the hip. Insufficient safety and this ship will sink. Indeed, per the World Safety Summit comments of Christopher Hart, former National Transportation Safety Board (NTSB) Chairman, recall a famous ship that had been heralded as unsinkable. The AI self-driving car community and industry does not want to get itself into a similar bind of thinking that AI self-driving cars are unsinkable.

As I’ve mentioned previously, AI developers can get themselves into an egocentric mindset, see my article:

For my indication about the potential of accidents contagion of AI self-driving cars, see:

For my points about the potential of AI self-driving car recalls, see:

For whether or not AI self-driving cars might be a kind of Frankenstein, see my article:

Safety Has to Be Explicitly Built into the AI Self-Driving Car

At the Cybernetic AI Self-Driving Car Institute, we are developing AI software for self-driving cars and are very attuned to the need for safety alertness in the AI systems of self-driving cars. Safety has to be explicitly built into the AI and be considered part of its core design.

Trying to somehow add-on safety considerations for self-driving cars after-the-fact would be foolhardy and generally impracticable.

There are some in the AI self-driving car industry that consider safety to be an edge problem, meaning that it is not at the core of the systems effort for producing AI self-driving cars. These AI developers focus first on getting the self-driving car to drive along a road with some modicum of capability. They assume that other cars will pretty much stay out of the way of the AI self-driving car and that nearby traffic and pedestrians will give the self-driving car a wide berth. In that sense, those developers are betting on safety by believing in a constrained traffic environment, but that’s not the real world of driving that we all face each and every day on our open roads.

For more about edge problems, see my article:

For my article about pedestrian aspects of AI self-driving cars, see:

For why defensive driving is a must in AI self-driving cars, see my article:

This also brings up the assertion that AI self-driving cars will lead us to achieving zero fatalities in car-related deaths. As I’ve indicated many times, though the notion of zero fatalities is certainly laudable as a goal, unfortunately it is not achievable, and worse, it has the potential to set misleading expectations for the public and regulators about AI self-driving cars.

Why aren’t zero fatalities realistic? Let’s consider car crashes to be divided into those that are avoidable and those that are unavoidable. For an avoidable car crash, the driver, whether human-based or AI, can potentially maneuver the car in a manner that avoids the car crash and therefore presumably avoids potential human injury or death. For an unavoidable car crash, the driver, again whether human-based or AI, will be unable to avoid the car crash, along with the chances of human injury or death, no matter what the driver might try to do.

Suppose a car is being driven down a street at 45 miles per hour and a pedestrian standing at the curb suddenly jumps out into the street, with just a split second to go before impact. The physics of the situation belie any possibility that the crash, injury, or death can be avoided. You might say that the driver should have realized the pedestrian was going to jump off the curb, but even if the driver detected the pedestrian, the pedestrian may have made no overt indication of what they were about to do. My point being that no matter how good a driver the human or the AI might be, there are still going to be unavoidable crashes.

So, I am asserting that no matter how good the AI might be, it will still find itself driving a self-driving car into unavoidable car crash situations. Now, we might be able to assume that the number of unavoidable car crashes will be a lot less than the number of fatal car crashes today. In essence, if we could parse out how many of today’s total car crashes were truly unavoidable, the odds are that it is a low number. Lower, but still more than zero.

For my article about zero fatalities aspects, see:

For my article about the ethics aspects of AI self-driving cars, see:

Some have indicated there should be Ethics Review Boards for AI self-driving cars, see my article:

Another facet to be considered involves the level of autonomous capability of the AI and the self-driving car. I’d like to clarify and introduce the notion that there are varying levels of AI self-driving cars.

The topmost level is considered Level 5. A Level 5 self-driving car is one that is driven by the AI with no human driver involved. For the design of Level 5 self-driving cars, the auto makers are even tending toward removing the gas pedal, brake pedal, and steering wheel, since those are mechanisms used by human drivers. The Level 5 self-driving car is not driven by a human, nor is there an expectation that a human driver will be present in the self-driving car. It’s all on the shoulders of the AI to drive the car (a Level 4 is similar, though it has a more constrained driving environment in which it is able to drive).

For self-driving cars below Level 4, there must be a human driver present in the car. The human driver is currently considered the responsible party for the acts of the car. The AI and the human driver are co-sharing the driving task. In spite of this co-sharing, the human is supposed to remain fully immersed in the driving task and be ready at all times to perform it. I’ve repeatedly warned about the dangers of this co-sharing arrangement and predicted it will produce many untoward results.

For the levels of self-driving cars, see my article:

For why AI Level 5 self-driving cars are like a moonshot, see my article:

For the dangers of co-sharing the driving task, see my article:

Another key aspect of AI self-driving cars is that they will be driving on our roadways in the midst of human-driven cars too. There are some pundits of AI self-driving cars that continually refer to a Utopian world in which there are only AI self-driving cars on the public roads. Currently there are more than 250 million conventional cars in the United States alone, and those cars are not going to magically disappear or become true Level 5 AI self-driving cars overnight.

Indeed, the use of human driven cars will last for many years, likely many decades, and the advent of AI self-driving cars will occur while there are still human driven cars on the roads. This is a crucial point since this means that the AI of self-driving cars needs to be able to contend with not just other AI self-driving cars, but also contend with human driven cars. It is easy to envision a simplistic and rather unrealistic world in which all AI self-driving cars are politely interacting with each other and being civil about roadway interactions. That’s not what is going to be happening for the foreseeable future. AI self-driving cars and human driven cars will need to be able to cope with each other. Period.

For my article about the grand convergence that has led us to this moment in time, see:

For potential regulations about AI self-driving cars, see my article:

For my predictions about AI self-driving cars for the 2020s, 2030s, and 2040s, see my article:

When addressing the topic of safety and AI self-driving cars, we need to explicitly consider the level of the self-driving car. Let’s for the moment define truly autonomous AI self-driving cars as Level 5 and to some degree Level 4, while Level 3 and below we’ll define as co-shared driving and not consider autonomous. I realize that you might argue that there are some autonomous driving aspects at Level 3, but due to the co-sharing of the driving task at that level, let’s for the moment lump Level 3 into the co-shared driving category rather than the truly autonomous one.

In essence, we’ll use two categories for AI self-driving cars, those that are co-shared driving and not truly autonomous, and the other category is those that are truly autonomous.

We can then ask the safety question about each of the two classes or categories. This is an important distinction since the answer will differ between the two circumstances. If we lump together all variants of AI self-driving cars, we would not adequately be able to address the safety question due to the commingling of the two quite different classes.

Thus, we have two questions to address:

• What is the acceptable safety for the AI self-driving car that encompasses co-shared human driving?
• What is the acceptable safety of the truly autonomous AI self-driving car?

The word “safety” is a relative term and one that has a lot of loaded baggage.

I routinely fly on commercial airplanes for work, traveling to destinations all around the country and the world. Would I consider these airplanes to be safe? Yes, I would. A colleague of mine is “afraid” of flying and insists that airplanes are not safe (he avoids flying and often takes a train or ship instead). Who is right? Am I right to say that airplanes are safe, or is he right to claim that they are unsafe?

The typical definition of safety involves the indication that by being “safe” you are not likely to be harmed. Using that kind of definition, you could assert that today’s commercial airplane travel is relatively safe since the chances of injury or death due to a plane accident are low. Apparently, my colleague does not share this viewpoint, and instead believes the odds to be “high,” or at least high enough that it dampens his willingness to fly.

For my analysis of comparing AI self-driving cars to airplanes and autopilot systems, see:

Safety Is as Much a Perception as a Reality Measure

Overall, the point is that safety is as much a perception measure as it is a reality measure. The safety aspect comes down to a chance or probability of some kind of harm, and whether you think something is safe will depend upon whether that chance rises above your own “personal” threshold.

Suppose that the chances of a plane crash are one in 1.2 million flights, and that the chances of dying in a plane crash are 1 in 11 million. The odds of getting struck by lightning are higher, and so are the chances of getting killed by the flu. My colleague should be walking around worried that he is going to get struck by lightning or that he will contract the flu and die, but he does not seem particularly concerned about either of those potential events.

For AI self-driving cars, we are going to contend with both actual quantifiable measures of their safety and the public perception of safety. The matter of “safety” includes the perception of what counts as a safe-enough threshold, and that threshold will vary by segments of the public, segments of regulators, and other stakeholders too.

I mention this aspect because some AI pundits keep saying that as long as AI self-driving cars can reduce the number of annual car related deaths, presumably the public will accept and even outright embrace the advent of AI self-driving cars. This though does not seem to take into account the nature of how the perception of safety occurs and is shaped.

Let’s imagine that the advent of AI self-driving cars reduces the number of annual car-related deaths in the United States from around 37,000 to, say, 27,000 (I’m using this as a placeholder number for purposes of discussion herein and not due to any actual prediction per se). That’s a whopping decrease of 10,000 deaths and thus roughly a 27% drop in the total number of annual deaths. You might believe that if AI self-driving cars could produce that kind of lessened number of deaths, it would be heralded as a blessing to all.

I would dare say that the public will not necessarily see things in that light. Instead, for each self-driving car related death, there is more than likely going to be severe hand-wringing about why the death occurred and why the advent of AI self-driving cars is not delivering its promise of zero fatalities. If people are promised zero fatalities, they tend to think that the number of car-related deaths will drop from 37,000 to 0 in one fell swoop.

The path toward a lower fatality rate is going to be a tortuous one, and public perception will not readily register it as an overall improvement over conventional cars. Suppose that we could somehow go from 37,000 annual deaths to 35,000 in year one, down to 30,000 in year two, down to 25,000 in year three, down to 20,000 in year four, down to 15,000 in year five, down to 10,000 in year six, and suppose it ended at 5,000 in year seven and thereafter (thus accounting for the number of unavoidable car crashes).
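To make the arithmetic of such a progression concrete, here is a minimal sketch in Python, using the placeholder figures above (illustrative numbers from the discussion, not predictions), that computes each year’s drop against the 37,000 baseline and the cumulative lives saved:

```python
# Sketch using the article's illustrative placeholder figures (not predictions).
baseline = 37_000  # approximate annual U.S. car-related fatalities today

# Hypothetical annual fatality counts as AI self-driving cars roll out (years 1-7)
yearly_deaths = [35_000, 30_000, 25_000, 20_000, 15_000, 10_000, 5_000]

cumulative_saved = 0
for year, deaths in enumerate(yearly_deaths, start=1):
    saved = baseline - deaths                 # lives saved versus the baseline
    cumulative_saved += saved
    pct_drop = 100 * saved / baseline         # percentage below the baseline
    print(f"Year {year}: {deaths:,} deaths, {pct_drop:.0f}% below baseline, "
          f"{cumulative_saved:,} cumulative lives saved")
```

Notice that even by year seven the count never reaches zero, reflecting the residue of unavoidable crashes, and that the early years show only single-digit percentage improvements, which is exactly the stretch where public perception is most fragile.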

Could the public and the regulators be content with that kind of progression towards better safety due to the advent of AI self-driving cars? It’s a tough pill to swallow.

We also then need to consider my earlier point about the distinction between co-shared driving autonomous cars and truly autonomous cars. For the segment of car crashes that involve co-shared driving, we need to consider that presumably some percentage will be due to the human driver versus the AI driving aspects. Indeed, the number of car-related deaths could go up, rather than down, if you end up with a struggle taking place between the human driver and the AI system in circumstances of potential crashes.

Imagine that we had 37,000 annual deaths with conventional cars. Suppose we then infuse into society some amount of co-shared driving autonomous cars, along with some percentage of truly autonomous self-driving cars. We then will have a mixture of conventional cars, combined with co-shared driving cars, along with truly autonomous cars. This mixture is going to presumably impact the number of annual car related deaths.

Let’s assume the conventional cars portion of the mix continues at the existing anticipated car related deaths rate (though, this can be argued somewhat since the aspect of the now added mixture of co-shared and truly autonomous might impact the rate).

Let’s assume that the truly autonomous self-driving cars are able to achieve a near-zero death rate (this is debatable given the mixture of the other conventional cars and the co-shared driving cars).

What will be the expected rate of car related deaths in the co-shared driving instances? We don’t yet know. Will the addition of the AI reduce the chances of the death rate? Maybe, if you assume that the AI will be able to do a better job at the driving task than the humans in conventional cars. Will the AI perhaps increase the chances of the death rate? Maybe, if you take into account that the human is co-sharing the driving task and that there might be hand-off issues and other confusion that regrettably negates the positive reduction in potential deaths by actually increasing the rate of deaths and car crashes.

Thus, when trying to ascertain the death rate impacts due to the advent of AI self-driving cars, it is a complex matter since it involves the mixture of conventional car driving, co-shared driving, and truly autonomous driving. You can bet that any car-related deaths due to co-shared driving and truly autonomous driving will get magnified tremendously, and for some it will be seen as a bellwether of what is yet to come. Even one car-related death in the truly autonomous category can be perceived as an indicator that AI self-driving cars are death-traps rather than saviors, and depending upon what ax one has to grind, the circumstance can be used accordingly.
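One way to see why the overall impact is hard to pin down is to model annual deaths as a weighted mixture of the three fleet segments. In the sketch below, the fleet shares and per-segment fatality rates are hypothetical assumptions for illustration only; the baseline of roughly 1.15 deaths per 100 million miles simply comes from dividing 37,000 annual deaths by 3.22 trillion annual miles:

```python
# Hypothetical sketch: annual fatalities modeled as a mixture of three fleet
# segments. All shares and per-segment rates below are illustrative assumptions.

def expected_deaths(shares, rates_per_100m_miles, total_miles=3.22e12):
    """Weighted mixture: each segment contributes its share of total miles
    times its fatality rate per 100 million miles."""
    assert abs(sum(shares.values()) - 1.0) < 1e-9  # shares must cover the fleet
    return sum(shares[s] * total_miles / 1e8 * rates_per_100m_miles[s]
               for s in shares)

# Today's overall rate is roughly 1.15 deaths per 100M miles (37,000 / 3.22T).
shares = {"conventional": 0.85, "co_shared": 0.12, "autonomous": 0.03}
rates = {"conventional": 1.15,  # assumed to continue at today's rate
         "co_shared": 1.4,      # assumed HIGHER, due to hand-off confusion
         "autonomous": 0.1}     # assumed near-zero, but not zero

print(f"Expected annual deaths: {expected_deaths(shares, rates):,.0f}")
```

With these made-up numbers, a modestly worse co-shared rate nearly cancels the gains from the near-zero autonomous segment, which is precisely why the mixture of segments, and not just the headline autonomous rate, determines the overall outcome.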

In the very act of defining safety as it relates to AI self-driving cars, we need to be mindful of a multitude of factors.

There are some that are aiming to use the death outcome as the sole measure for defining safety. Thus, they say that we should measure safety by how many self-driving car related deaths there are per year, similar to the measure used with conventional cars.

I would be willing to bet that the public would perceive self-driving car related injuries to also be an element of the safety of AI self-driving cars. In other words, suppose that AI self-driving cars reduced the number of annual car related deaths, but at the same time the number of human related injuries went up. Perhaps the AI self-driving cars were doing a better job at avoiding direct head-on collisions, but in so doing were sideswiping other cars and pedestrians, thus producing more human injuries than before. Would the public be willing to accept a trade-off of the deaths count for the injuries count?

Another factor involves deaths and injuries involving not just humans but also animals. I know that some think it preposterous to potentially include injuries or deaths of animals, and they bristle that such a count could be somehow compared with the counts involving humans. But, once again, let’s consider the public perception. If AI self-driving cars are unable to adequately avoid hitting animals, like say someone’s pet dog that wandered into the street as a self-driving car came along, will the public at large be willing to accept that the beloved and innocent pet was injured or killed by an AI self-driving car?

There are also potential aspects of damages to be encompassed by the safety moniker. Let’s separate damages from any kind of injuries or deaths. We’ll say that damages are aspects such as cars that get bashed up, or that take out street posts, or that ram into other roadway infrastructure. If AI self-driving cars are, let’s say, able to avoid some amount of human deaths but in the meantime are creating more damages, perhaps by ramming into objects, what would the public perceive as the overall safety of AI self-driving cars?

We are at a crossroads in the AI self-driving car field of trying to grapple with defining safety. It is a vital measure and one that will ultimately be a determiner of the acceptance of AI self-driving cars. Safety encompasses a multitude of factors and it also is something that is measured on a continuum or spectrum, rather than via a single point. There are some efforts within the automotive field to aid in defining safety aspects of AI self-driving cars, including for example ISO 21448 and ISO 26262, along with the myriad of efforts underway by the SAE.

Echoed repeatedly at the World Safety Summit was the belief that collaboration among the many stakeholders of AI self-driving cars is going to be key to reaching a realistic and usable notion of safety. These stakeholders include the auto makers, the tech firms, the regulators, the media, the safety advocacy groups, the industry associations, the researchers, and so on. I’ve previously called for a form of “coopetition” among such entities, which will aid in assuring that what collectively is devised will also hopefully be collectively supported. So, yes, absolutely, collaboration is needed and keenly sought.

For the potential role of coopetition and AI self-driving cars, see my article:

For the drawbacks of the use of remote operators for AI self-driving cars, see my article:

For my article about why some believe we should start over about AI self-driving cars, see:

I was particularly heartened during the World Safety Summit when Alex Epstein, Director of Transportation Safety for the National Safety Council, pointed out that we should all think about our son or daughter going onto the roadways when we are trying to wrestle with defining safety and AI self-driving cars. Those of us immersed in the AI self-driving car realm need to keep our eye on the humanity of what we are developing. AI self-driving cars provide the promise of a tremendous technological advancement and are bound to transform society in incredibly advantageous ways, but with that comes momentous responsibility and a duty to be fixated on safety.

Copyright 2018 Dr. Lance Eliot

This content is originally posted on AI Trends.