Accidents Contagion and AI Self-Driving Cars


By Lance Eliot, the AI Trends Insider

A contagion is usually thought of as a disease that spreads from person to person. The term can also be used more broadly to mean that some kind of practice or idea is spreading, usually one of a harmful nature.

At a recent presentation that I gave about AI self-driving cars, I was asked whether there is a kind of “accidents contagion” occurring with the current crop of driverless self-driving cars. The question seemed to stem from what appears to be a recent spate of self-driving car related accidents. Is this perhaps a trend? Does it portend that self-driving cars are an evil that is being unwisely spread? Should society try to take some kind of action to prevent further spread?

In a previous article I discussed the notion that AI self-driving cars could be perceived as an invading species:

Let’s now take a look at what’s happening with these self-driving car accidents. Most notably, in the last few months we’ve had these:

  • March 2018: Uber Volvo XC90 runs over and kills a pedestrian
  • March 2018: Tesla Model X crashes into a median, killing the human driver
  • May 2018: Waymo Chrysler Pacifica minivan gets into a car accident with minor injuries

Each of those made big headlines. Instantly, there were some wringing their hands and saying that this is the end of AI self-driving cars. Some suggested that we were promised perfection, in that the advent of self-driving cars would mean that there would no longer be any deaths or injuries associated with being in a car.

I’d like to clear up some of these misconceptions and myths. I’ll use these recent incidents to illuminate some important aspects about AI self-driving cars.

At the Cybernetic AI Self-Driving Car Institute, we are developing AI systems for self-driving cars, and try to help business and society have a more balanced understanding of what such systems can do today and what they hopefully will be able to do in the future.

Reality About Self-Driving Cars on Our Roads

For those that believe we are going to have no more injuries or deaths while on our roadways, I would say that your only hope right now would be to ban car travel entirely, regardless of whether one is using a conventional car or a self-driving car. Inevitably, no matter what foreseeably happens in the near future, there are going to be injuries and deaths involving conventional cars and also with AI self-driving cars.

Some say that once we have only and exclusively AI self-driving cars on the roadways that we’ll no longer have any car related injuries or deaths. This seems like an unlikely premise. If a pedestrian suddenly steps in front of an AI self-driving car, unless that self-driving car has wings and can fly, it’s going to hit that pedestrian if the physics don’t allow any other option. A self-driving car is still a car. It cannot suddenly disobey the laws of physics. Also, self-driving cars are going to have mechanical problems and breakdowns, just like conventional cars.

We also need to consider the practicality of the idea that we would have only AI self-driving cars on our roadways. Right now, we have about 200 million conventional cars in the United States alone. They are not going to magically disappear or be transformed into AI self-driving cars overnight. Society has yet to decide whether or not this notion of not allowing conventional cars is even something we can all agree to have happen. In essence, for a very long time, we’re going to have human driven cars intermixing with AI self-driving cars.

As such, you can toss whatever you want into the AI self-driving car side of things, including the potential for V2V (vehicle-to-vehicle) and V2I (vehicle-to-infrastructure) communications, but nonetheless with the mixing of human driven cars and self-driving cars there are going to be collisions. A human driver can cause one by ramming into an AI self-driving car, or even an AI self-driving car could “cause” one by somehow not detecting a human driven car or failing to properly predict the actions of a human driven car.

See my article about defensive driving for AI self-driving cars:

News Media Loves a Good/Bad Story

There is an ongoing love-hate relationship of the news media with AI self-driving cars. One minute, the major media outlets are touting that AI self-driving cars will save mankind. Society will be utterly transformed. It’s as though the adoption of AI self-driving cars will cure cancer and solve world hunger, all at the same time.

Admittedly, AI self-driving car adoption will eventually and gradually transform our society, including potentially allowing mobility unlike what we’ve seen before. Indeed, many predict that we are heading towards a new kind of economy called the “passenger economy.” I’d like, though, to rein in expectations about how far AI self-driving cars will go toward solving other societal ills. Climate change? Homelessness? Crime? It’s a bit of a reach to start claiming that we’ll all be better off per se in those other areas.

As an aside, I’ll add a bit of a twist. I claim that if we can really craft true AI Level 5 self-driving cars, which can drive a car in whatever manner a human can, and if that means we have then achieved some truer sense of AI, we might then have a chance at other major societal ramifications. In other words, AI that advanced and adept could presumably be put to many other uses in society beyond just being able to drive a car. There are some that say we can achieve true Level 5 without the AI becoming so good that it amounts to true AI, in which case the AI won’t lend itself to other domains. We’ll need to see how this plays out.

See my article about the future of AI and how it applies beyond self-driving cars:

See my article about how the grand convergence has led to this ripe moment in time for the emergence of AI self-driving cars:

Okay, so we’ve now covered the love part of the love-hate relationship with the news media. The hate part also comes into play. Build them up, and knock them down; it’s a popular refrain for anyone wanting to try and get views or readership. Whenever an AI self-driving car stumbles, it’s going to get some pretty prominent attention. It becomes a man-bites-dog kind of story. We were just getting used to the idea that AI self-driving cars will solve our world problems, and then, bang, an AI self-driving car does something bad. Knocks the air right out of the balloon.

In the end, the magnification becomes perhaps overly confusing. Are these isolated cases or something more endemic? The major media usually doesn’t take the time to consider these matters. Get the headline going and get some attention. A few minutes later, something else will be of keener interest.

In my list of the three recent incidents, Waymo is likely exasperated that they are being lumped together with the other two incidents. The Waymo occurrence only involved minor injuries, while the other two incidents involved deaths. Is it fair or reasonable to lump together incidents that are relatively minor with those that have the ultimate consequence? The overarching theme seems to usually be that these are self-driving cars, they are backed by big auto makers and tech firms, and they have gotten into trouble of one kind or another.

Statistical Chances Continue To Increase

Some people ask me why these occurrences “suddenly” seem to be increasing. Suppose I ask you to stand in a batting cage. I will start tossing baseballs at you. Consider the pace and frequency involved. You duck and move, and generally can avoid getting hit. I then ask five friends to join me in tossing baseballs at you. Ouch, things are getting rough. I next add five more friends. At this point, you can barely avoid getting hit by the baseballs.

In the case of AI self-driving cars, we are gradually and inexorably having more and more of them on our roadways. If we assume there is some statistical chance that each AI self-driving car will ultimately get into a car accident, we are increasing our odds by the sheer volume involved. This is not the only factor. By and large, the early adopters of self-driving cars were more likely to be mindful of learning what their self-driving car can and cannot do. Once we start toward the masses, you’re going to have more and more owners that aren’t going to be so careful.

We also have the factor that at first the other human driven cars were being cautious when coming upon an AI self-driving car. You could readily recognize such a car by the cone head that housed the LIDAR. Gradually, it is becoming harder to discern what is a self-driving car and what is not. Furthermore, even if you can detect that it is a self-driving car, you might not care in the sense that you won’t change how you drive. No more Mr. Nice Guy, and instead it’s all drivers for themselves, human or AI.

The formula is simple. Add more AI self-driving cars to our streets. Increase the number of miles being driven. Add humans that aren’t as careful as maybe they once were. Spice this with potentially untested portions of AI self-driving cars that are now revealed while on our roads. You’ve got yourself more accidents.
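The sheer-volume part of that formula can be illustrated with a toy probability model. The per-car accident rate and the fleet sizes below are invented round numbers purely for illustration, not real statistics:

```python
# Toy model: if each self-driving car independently has probability p of
# being involved in an accident over some period, then the chance that a
# fleet of n cars sees at least one accident is 1 - (1 - p)^n.

def fleet_accident_probability(p_per_car: float, n_cars: int) -> float:
    """Probability of at least one accident across n independent cars."""
    return 1.0 - (1.0 - p_per_car) ** n_cars

# Hypothetical per-car accident probability over some fixed period.
p = 0.001

for n in (10, 100, 1000, 10000):
    print(f"{n:>6} cars -> {fleet_accident_probability(p, n):.1%} chance of an accident")
```

Even with a tiny per-car rate, growing the fleet from tens to thousands of cars pushes the fleet-wide chance of at least one incident from near zero toward near certainty, which is roughly why a "sudden" cluster of headlines is unsurprising as deployments expand.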

See my article about the emerging potential for road rage and AI self-driving cars:

Confusion About the Levels of Self-Driving Cars

Not all AI self-driving cars are the same. I say this because I often see people and the media blur the distinction that there are true Level 5 self-driving cars, while anything less than a Level 5 is a self-driving car that requires a human driver. The human driver is considered responsible for a self-driving car at less than a Level 5.

See my article about the levels of AI self-driving cars:

If there is an accident involving a less than Level 5 self-driving car, should we consider that to be on par with an accident involving a Level 5 self-driving car? Most would say that you can’t compare the two. It’s like apples versus oranges. At less than Level 5, there is a co-shared responsibility with a human, and so the incident might have occurred because of a human failing, and thus we should not point fingers at the AI.

There are those that take this to the extreme and suggest that no matter what happens with a less than Level 5 self-driving car, it is the fault of the human. The human is the captain of that ship. No matter what else happens to the ship, it’s the captain at fault. This seems like a stretch. If the AI suddenly hands over controls of the self-driving car, and there’s one second left to go, and the self-driving car is heading into a wall, can we really reasonably say that this calamity is due to the human driver? Was the human driver supposed to somehow know that the AI self-driving car was going to ram the car into the wall?
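To make that one-second handover concrete, here is a minimal sketch of a guard that refuses to dump control on the human when too little time remains to react. The 3-second threshold and the function names are my own illustrative assumptions, not any automaker's actual logic or a published standard:

```python
# Sketch of a control-handover guard. Human takeover typically needs on the
# order of several seconds; 3.0 is an assumed illustrative threshold.

MIN_HANDOVER_SECONDS = 3.0

def can_hand_over_to_human(time_to_hazard_s: float) -> bool:
    """Return True only if the human plausibly has time to take over."""
    return time_to_hazard_s >= MIN_HANDOVER_SECONDS

def handle_emergency(time_to_hazard_s: float) -> str:
    # With too little time for a human takeover, the AI should attempt its
    # own fallback (e.g., controlled braking) rather than handing over
    # control at the last second and blaming the driver afterward.
    if can_hand_over_to_human(time_to_hazard_s):
        return "request human takeover"
    return "execute automated fallback (brake / pull over)"

print(handle_emergency(10.0))  # ample time: ask the human
print(handle_emergency(1.0))   # one second left: AI must act itself
```

The design point is simply that "captain of the ship" reasoning breaks down once the system itself controls whether the human ever had a realistic chance to respond.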

See my article about responsibilities and AI self-driving cars:

Marketing and Advertising to Promote Self-Driving Cars

The auto makers and tech firms are putting millions upon millions of dollars into the self-driving car approach. Naturally, they want to market and advertise these advances. There is a fierce sense of competitiveness in the auto industry and each auto maker tries to outdo the other. Consumers can be fickle. If they perceive that a particular brand or model has something they think they want, those fickle consumers are willing to jump ship to some other car line. No more of the days wherein you stayed with the same auto maker that your parents used.

What’s even more beguiling for some auto makers and tech firms is that often the consumer doesn’t even know what features the car actually has. And, if they do, sometimes the consumer doesn’t actively use the features. Instead, it’s more akin to having bragging rights. My car can parallel park itself. How many times have you used this feature? Not yet. How long have you had the car? Two years. Sigh. Imagine that you were the engineers and AI developers that created the capability, you tested it, you fielded it, and the auto maker advertised and marketed it. All so that it would sit silently and never be used.

Public perception is being shaped by the marketing and advertising that is gradually growing and becoming bolder about the self-driving car capabilities. At some point, it could be that the suggested claims or implications of the ads and marketing might go over-the-line and become misleading and deceptive. Accusations such as this are emerging by various consumer watchdog groups already.

See my article about the marketing of AI self-driving cars:

There is certainly a danger that the blitzkrieg of messages about AI self-driving cars might cause people to falsely believe that self-driving cars and AI have some kind of super powers. In that sense, having the media bring the marketplace back to reality is going to be helpful. Bombardment by paid-for radio ads, TV ads, billboards, and print ads about how incredibly safe AI self-driving cars are does need to be tempered by the news media providing the other side of that coin.

Testing, Testing, Testing

When I discuss AI self-driving cars with those that aren’t in-depth on the topic, they are often shocked to discover that the AI self-driving cars on our roads today are pretty much a large societal experiment. We have all collectively allowed our roadways to be used for testing. That self-driving car next to you has not been utterly verified to be completely error-free. I know that I’ll get howls from fellow AI developers that will argue that you can never have a provable error-free AI self-driving car. Indeed, they point out that if that’s what society demands, you might as well put all of the self-driving cars into mothballs and forget about getting to self-driving cars for now.

The argument about never being able to mathematically prove the safety and assurance of an AI self-driving car is somewhat of a red herring. I will absolutely concede that the world is not aiming to require the auto makers and tech firms to provide a full and proper proof of correctness. This doesn’t mean, though, that you can just let anything you want end up on our public roadways. There needs to be some diligence and a sufficient amount of testing beforehand to reasonably know what might happen on our public roadways.

We all need to be doing more at the federally provided proving grounds, see my article:

We all need to be doing much more simulation, more in-depth and more extensive; see my article about this:

And the other aspect is the danger that somewhere within these complex AI systems there are lurking bugs. They are in there. It’s more a question of how severe the bugs are. Plus, you need to consider what the rest of the AI system will do when a bug is encountered. Consider an AI self-driving car that is barreling along at 70 miles per hour on the freeway. If there’s a hidden bug that, when encountered, causes the AI to issue an untoward command to the car controls, this is the kind of thing that requires the software to have double-checks and triple-checks. It is, though, difficult to layer too many checks into the system, because it must also act and react in real-time. The more safety checks you pile on, the more likely that, in the act of trying to prevent one problem, you cause another because the AI fails to take needed action on a timely basis.
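One way to picture such a double-check is as a sanity filter sitting between the AI planner and the actuators, rejecting implausible commands before they reach the car controls. Everything here (the limits, the names, the round numbers) is a simplified assumption for illustration, not real vehicle parameters, and a real filter would have to run within the control loop's real-time budget:

```python
# Illustrative sanity check on commands headed to the car controls.
# Limits are invented round numbers, not real vehicle parameters.

MAX_STEER_RATE_DEG_S = 30.0    # reject implausibly fast steering changes
MAX_BRAKE_FRACTION = 1.0       # brake pedal command, 0.0 .. 1.0
MAX_THROTTLE_FRACTION = 1.0    # throttle command, 0.0 .. 1.0

def sanity_check(steer_rate_deg_s: float, brake: float, throttle: float) -> bool:
    """Return True if the planner's command passes basic plausibility checks."""
    if abs(steer_rate_deg_s) > MAX_STEER_RATE_DEG_S:
        return False                      # steering change too violent
    if not (0.0 <= brake <= MAX_BRAKE_FRACTION):
        return False                      # brake command out of range
    if not (0.0 <= throttle <= MAX_THROTTLE_FRACTION):
        return False                      # throttle command out of range
    if brake > 0.1 and throttle > 0.1:
        return False                      # braking and accelerating at once
    return True

# A buggy planner output (wild steering) gets rejected; a sane one passes.
print(sanity_check(90.0, 0.0, 0.2))  # rejected
print(sanity_check(5.0, 0.0, 0.2))   # accepted
```

The trade-off described above shows up directly: every additional check adds latency and another way for a valid emergency maneuver to be wrongly rejected, which is why these filters must stay cheap and conservative.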

See my article about the Uber incident details:

See my article about the topic of software neglect and how it is impacting AI self-driving cars:


Is the sky falling?

Are we seeing an “accidents contagion” that will continue to widen and spread?

I don’t think the sky is falling, but I do think we ought to be looking upward at the sky and realize that there is rain coming and possibly a storm. I say this because the general public might gradually become overly concerned about the safety of AI self-driving cars, which will then drive the regulators to step in more so. To date, the regulators have allowed a lot of latitude via mild regulations, in an effort to avoid stifling what seems to be a grand innovation that will brighten the future.

For more about federal regulations and AI self-driving cars, see my article:

There isn’t per se an accidents contagion, in that there’s nothing spreading these accidents from one self-driving car to another. There’s not a hacker virus or anything like that. We might someday see something along those lines, and so computer security needs to be at the top of the list for the auto makers and tech firms developing AI self-driving cars, but right now that’s not a pressing issue as yet. The accidents aren’t tied to one another in any kind of daisy chain. Each is essentially independent of the others.

But, that’s not to say that they aren’t all based on the same core. All the auto makers and tech firms are generally taking the same overall approach to designing, coding, testing, and fielding their AI self-driving cars. In that sense, they all will generally suffer the same limitations and experience the same kinds of accidents. Plus, they are all mixing into the same general environment, doing so by putting these self-driving cars into the mix with human driven cars.

I’ve pointed out previously that it is possible to be tricky about self-driving car disengagements:

For self-driving car accidents, if you are an auto maker or tech firm and you want to minimize the chances of an accident, you would only allow your self-driving cars to be operated in a geographical area that you knew would be least likely to produce accidents. Would you put your self-driving car into the middle of New York City, where the crazy traffic is something that even the best human drivers dread, or would you put it into Smalltown, USA, where the streets are calm and the traffic is easygoing? The problem, though, with putting all your eggs into the quiet town is that you won’t really know whether your self-driving car and its AI can scale up to handle the downtown wild driving.

Some auto makers and tech firms have considered whether it might be best to keep their self-driving cars a bit under wraps and let others take a higher risk of having accidents, and those risk takers gaining the ire of the public, and having their brand tarnished. Meanwhile, the other auto makers or tech firms figure that after those others have taken those first-mover blows, it then clears the landscape for them to then introduce their self-driving cars and say that theirs is the new-and-improved version. The usual tech firm bravado is one that says get there first and take the market before anyone else can, leaving just the crumbs for those that come along later on. The nature of AI self-driving cars though might not lend itself to the same kind of fail-fast, fail-first attitude of Silicon Valley, and the backlash could hurt those innovators. Worse still, it could cause the tide to recede for all. As the old line goes, all boats rise with the rising tide, but the boats also all drop lower with the receding tide.

Copyright 2018 Dr. Lance Eliot

This content is originally posted on AI Trends.