Motivational AI with Bounded Irrationality for Self-Driving Cars


By Dr. Lance B. Eliot, the AI Trends Insider

What the heck is that person doing?

Have you ever been driving along and witnessed another driver pull some kind of seemingly crazy stunt? I am sure that you have. Seems like we all have. For most of our driving time, we see drivers doing normal, everyday, primarily rational acts in terms of abiding by the rules of the road and otherwise driving in a rational manner. On occasion, we see drivers who appear to have gone off the deep end. Their driving behavior is nutty, weird, off-the-wall, and what we might more formally call irrational.

Humans are supposed to be rational, according to dominant economic theories, and yet people often do irrational things. If people do indeed act irrationally, do they do so on a random basis, such that there is no means to anticipate in what ways they will be irrational? According to Richard Thaler, recent winner of the Nobel Prize in Economic Sciences, people are predictably irrational. His work helped to spawn a subfield of economics that examines how human nature can be irrational and yet also be anticipated and dealt with.

Thaler’s efforts and those of his colleagues represent an important extension to the earlier economic beliefs that people were near enough to being rational that having to deal with their irrational behavior was inconsequential. The irrational acts of people were considered a rounding error. If you could model the rational behavior of people, there was no need to take a look at their irrational efforts. The rational models presumed that such models captured 99% of what people did and the remaining 1% was inconsequential.

Furthermore, even if the irrational behavior amounted to more than just a tiny percentage, some argued previously that bothering to look at irrational behavior would not have done you much good, because the irrational was so random that you wouldn’t be able to pin it down in any useful way anyhow. Why waste time studying something that would have no particular pattern to it? Without any discernible pattern, there was no means to make it fit into any kind of mathematical model of human behavior.

Notice too that some at first even said that people are either rational or irrational, but cannot be both at the same time. In other words, out of a pool of, say, one hundred people, it used to be thought that maybe 95 of them are rational and perhaps 5 are irrational. This, though, was ultimately shown to be misleading and misinformed. A person can switch from one state to another, being at one moment rational and the next moment irrational, and thus you cannot make some overarching assumption that people are always one way or the other. Each of us exhibits a mixture of both rational and irrational behaviors, and the emergence of rational or irrational behavior is dictated by numerous underlying factors such as the person, the context of the situation, and so on.

Here’s something for you to ponder. Each morning, during my daily commute on the freeway, I am surrounded by hundreds of other cars throughout the stretch of my commute. Any of these other drivers could easily wreak havoc by ramming their car into other cars. Instead, we all generally seem to abide by the rules-of-the-road and avoid killing each other. We adjust to the start and stop nature of the traffic, we switch lanes without banging into each other, and otherwise undertake a delicate dance involving quite hefty killing machines that could readily harm others (our cars).

Sure, during my commute there are fender benders, along with some even more serious accidents. Those are primarily indeed “accidents” in that the drivers weren’t paying attention to the driving situations, or were cutting things too close when making a lane shift. A few drivers are impaired, perhaps by being drunk, and so bring about an accident. Overall, we would likely agree that most of the drivers, most of the time, are acting in a rational manner.

This is somewhat startling when you think about it. Why is it that all these drivers are all abiding by rules that are in their heads and yet otherwise there is nothing that prevents them from just ramming their car into other cars? There is nothing built into any of these cars that prevents the driver from doing serious damage and destruction to others. If one of them wanted to start sideswiping other cars, they could do so. Think of the tremendous amount of trust that we all take into account when we get onto the roadways. We are making a fundamental life-or-death assumption that those people driving cars out there are going to do so in a primarily rational way. That’s a huge assumption, in the sense of the risk to life and limb if that assumption is incorrect.

What motivates this rational behavior? One aspect could be self-preservation. Each driver realizes that if they ram into another car, there is the potential for themselves to be injured. Rational behavior says we don’t normally want to harm ourselves. Another basis for this rational behavior is the potential personal cost of causing an accident, such as the possibility that you’ll need to pay out money to others due to having caused an accident. You don’t want to lose your savings, your home, and your other funds, so you opt to try and avoid getting into an accident. Another might be that you don’t want to go to jail. And so on.

One might also say that it is perhaps because people don’t want to harm other people. This, though, is at times argued to be a bit optimistic about people, and some would say perhaps an overly altruistic viewpoint. It is heartwarming to think that beyond the potential for personal penalties, such as being physically harmed yourself or losing your money or going to jail, you would also not want to cause harm to others. This could be claimed to be a culturally derived aspect, a kind of mental curse such that if you do harm others your own mind will haunt you (and so, we are back to the self-preservation notion).

Given all the above about rational behavior, and despite the fact that most of the time we see rational driving behavior, we nonetheless do witness irrational behavior while on the roads. The magnitude of the irrational behavior varies quite a bit, and thus sometimes we’ll see small acts of irrational driving and at other times larger acts of irrational driving. The other day, a driver suddenly darted out of the car pool lane, doing so by illegally crossing the double yellow lines rather than waiting for a legal portion to exit from the car pool lane, and then darted across all four other lanes of traffic to try and reach an upcoming freeway exit. The driver disrupted the flow of traffic and risked the lives of all other drivers nearby, along with the potential that, had an accident occurred, it could have caused a domino effect that would have harmed lots of other drivers behind us all. The move was reckless, illegal, irresponsible, outrageous, scary, unwarranted, and could have generated very adverse consequences.

Was the driver aware of the potential impact? Or was the driver mentally unaware, just acting on an impulse to get to the freeway exit, and so darted across all lanes of traffic with that one goal in mind? Did the driver calculate that they could do this darting without harming anyone else and without putting themselves in harm’s way? Or did they do this act without any real anticipation of the impacts?

And, why should we care about rational versus irrational driving behavior?

At the Cybernetic Self-Driving Car Institute, we assert that self-driving cars will need to be aware of the rational and irrational driving behavior of human drivers in order to best navigate and maneuver in a world that is a mixture of self-driving cars and human driven cars. Furthermore, we assert that self-driving cars themselves have the potential to be susceptible to irrational behavior and that as AI developers we need to be cognizant of this aspect and deal with it accordingly.

First, in terms of being aware of human driving behavior, Thaler’s indication that irrationality can possibly be predicted is quite helpful to the AI of the self-driving car. In my story above about the wild driver that cut across all lanes of traffic, you might at first say that this can happen at any time and that any driver might do the same, and thus presumably there is no means to accurately predict it. But we suggest this is not always the case.

In fact, the driver that was in the car pool lane had been making motions that suggested an upcoming attempt at darting across the lanes might occur. They were very subtle signs. The driver was looking over their shoulder at the traffic to their right and kept moving their head back-and-forth. The car was edging onto the lane marker that divides the car pool lane from the next lane over. The driver was moving somewhat erratically in terms of rapidly accelerating up to the next car in the car pool lane and then shaving off distance, which might seem odd but was a potential effort to find a spot to exit out of the car pool lane.
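Subtle cues like these could, in principle, be combined into a simple risk score that the AI monitors for each nearby car. The following is a minimal illustrative sketch in Python; all the signal names, thresholds, and weights are hypothetical assumptions for illustration, not taken from any production system:

```python
# Hypothetical sketch: score how likely a tracked car is to make a sudden,
# irrational maneuver, based on the kinds of subtle cues described above.
# All signal names, thresholds, and weights are illustrative assumptions.

def erratic_maneuver_score(car: dict) -> float:
    """Return a 0..1 score; higher means a sudden lane dart is more likely."""
    score = 0.0
    if car["driver_head_turns_per_min"] > 6:   # repeated over-the-shoulder glances
        score += 0.3
    if car["lane_marker_overlap_m"] > 0.2:     # edging onto the dividing line
        score += 0.3
    if car["speed_variance_mps"] > 3.0:        # surging up, then falling back
        score += 0.2
    if car["gap_probing_events"] > 1:          # repeatedly closing and opening gaps
        score += 0.2
    return min(score, 1.0)

# Usage: a car showing all four cues scores near 1.0, so the AI could
# preemptively widen its following gap or slow slightly.
suspect = {
    "driver_head_turns_per_min": 8,
    "lane_marker_overlap_m": 0.3,
    "speed_variance_mps": 3.5,
    "gap_probing_events": 2,
}
print(erratic_maneuver_score(suspect))
```

In practice such a heuristic would be one small input among many, but the point stands: the cues are observable, and so, per Thaler, the irrationality is at least partly predictable.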

If other drivers weren’t paying attention to that particular car, from their perspective it was like a bolt of lightning out of the sky when the driver suddenly made the crazy maneuver. For those drivers that were watching the car closely, you could sense that something was afoot. There were just enough unusual actions that you kind of instinctively knew something was going to happen. In my case, I was noticing the car because I like to watch traffic around me and detect patterns in driving. I’d guess that most of my fellow morning commuters are probably instead thinking about what they are going to do at work that day or where they will go for lunch that afternoon. I study drivers.

The point here is that we would want a truly good self-driving car to be able to spot those same subtle signs and be able to act accordingly. We would want the AI to be able to predict irrationality. A Level 5 self-driving car is supposed to do everything that a human driver could do when driving the self-driving car. Should the Level 5 self-driving car drive like an unaware driver or an aware driver? Our goal is to make a self-driving car that is the best driver that can possibly be provided. Some AI systems for self-driving cars might meet the test of being able to drive a self-driving car like a human does, but then fall below the capability of a versatile and savvy driver. We say that the goal should not be to just have AI that can drive a car like a human can, but drive a car like a really savvy human can.

Another factor about irrational behavior and self-driving cars is the role of the human occupant that is inside a self-driving car.

There are some self-driving car makers that falsely believe a human occupant in a self-driving car will only provide an indication of where to drive, and then the self-driving car AI does everything else. We have indicated over and over that the occupant is going to want to interact with the AI of the self-driving car, and in fact will need to interact with the self-driving car. There is a lot more to being an occupant than just saying where you want to go.

Occupants will want to potentially change the destination during the journey within the self-driving car. They might want the self-driving car to take a particular route, or change routes. They might want the self-driving car to go slowly and so the human can enjoy the view, or the human might be in a rush and want the self-driving car to go as fast as allowed. There are a myriad of reasons that the human occupants and the self-driving car will interact with each other.

Thaler’s work on predictable irrationality can come into play here.

Overconfidence Effect.

When studying the NFL football draft, Thaler and his colleagues found that professional football scouts tended to put too much weight on their own judgment about players. The famous movie Moneyball similarly showed how, in another sport, baseball, statistics could at times do a better job than human judgment at assessing the potential of various sports players.

We are anticipating that humans will at first tend to believe that their self-driving cars can do more than the self-driving cars can actually do. Early adopters of self-driving cars are often overconfident about what the self-driving car can do. There are lots of YouTube videos of Tesla drivers that take their hands off the wheel of the car and don’t seem to realize they are placing themselves at enormous risk. This is in spite of being told by Tesla that they are not to take their hands off the wheel.

Notice that most of the emerging self-driving cars that are at Levels 2 to 4 require some form of detection mechanism to force the human driver to keep their hands on the wheel. This is a type of nudge. Thaler has argued that to get people toward rational behavior we need to give them nudges, shifting their irrational behavior over into rational behavior. We can predict that humans will do something irrational like taking their hands off the wheel of a car that is going 80 miles per hour, doing so under the false assumption that the self-driving car will do the driving for them. Therefore, rather than just instructing people not to do this, we put in place a mechanism such as a device that detects when hands are not on the wheel and then blares a reminder to put your hands back onto the wheel. If the person does not comply, some models of cars are even programmed to gradually slow down and come to a stop. This is a nudge.
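That escalation sequence, remind first and then gradually slow the car, can be pictured as a tiny decision rule. This is an illustrative toy sketch; the timing thresholds and action names are assumptions, not any manufacturer’s actual logic:

```python
# Toy sketch of the hands-on-wheel "nudge" escalation described above:
# warn first, then escalate, then gradually slow the car if the driver
# still does not comply. Timing thresholds are illustrative assumptions.

def nudge_action(seconds_hands_off: float) -> str:
    """Map how long the driver's hands have been off the wheel to an action."""
    if seconds_hands_off <= 0:
        return "none"            # hands on wheel: no intervention
    if seconds_hands_off < 10:
        return "visual_warning"  # dashboard alert: "put hands on wheel"
    if seconds_hands_off < 30:
        return "audible_alarm"   # blare the reminder
    return "gradual_slowdown"    # reduce speed toward a safe stop

# Usage: the escalation the driver experiences over time.
for t in (0, 5, 15, 45):
    print(t, nudge_action(t))
```

The design point of a nudge is visible in the code: the mild interventions come first, and only sustained non-compliance triggers the car to take over.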

Endowment Effect.

Thaler indicates that people tend to have an endowment effect, involving ownership of things. In his experiments, he had subjects buy a mug for $3; they then subsequently refused to sell it for $6. It would seem the right thing to have done would be to take the easy and quick profit by selling the mug for the six dollars. Thaler claims that people tend to add value to an object by the very fact that they believe they own the object.

For self-driving cars, we anticipate that owners of a self-driving car will potentially exhibit the same kind of endowment effect. They will get into their self-driving car, and upon going onto the roadways will believe that their car is superior to other cars. They will likely want their AI to drive accordingly. They will think that their self-driving car should somehow go faster, drive better, get them to their destination sooner, merely because it is a self-driving car and that they own it.

Fairness Effect.

Thaler found that people tended toward believing in fairness and refused to do something seemingly rational because of this notion. In one experiment, it was raining and so people would want to have an umbrella. When the store selling umbrellas raised the price by even a modest amount, doing so when it rained, the people who were going to buy an umbrella tended not to do so, even though they could have used one due to the rain. They felt they were being gouged, and so in spite of now potentially getting wet in the rain, they preferred to do so rather than cave in to the price gouging, even when the price itself was only marginally higher.

On the roadways, I continually witness drivers that get upset about a lack of fairness, in their viewpoint. For example, there is an offramp that I take most days, and it has a rightmost lane to turn right upon reaching the end of the offramp. There is a lane to the left, which is intended for those going straight or that want to turn left at the end of the offramp.  There are drivers each morning that get into the lane that is intended to go straight or to turn left, and they suddenly want to squeeze into the lane that is intended to make a right turn.

This is upsetting to some drivers that are already in the right turn lane, since the driver in the other lane is appearing to be rudely “taking cuts” into the right turn lane. Some of those drivers taking cuts are perhaps innocent drivers that got confused and realized they needed to be in the rightmost lane. But, some of those drivers are perhaps seasoned drivers that really know that the rightmost lane backs up, and that they can devilishly go in the other lane and then try to barge into the rightmost lane. This allows them to reduce their wait time in the always overflowing rightmost lane.

I’ve seen many drivers in the rightmost lane that will do almost anything to prevent these intruder cars from getting into the rightmost lane. Perhaps due to a concern over fairness, the drivers are trying to prevent the cut-taking drivers from getting into the lane. You could also say that the drivers preventing the cuts are trying to prevent themselves from having to wait longer, since every time they see someone ahead of them allow a cut into the lane, it makes the wait time even longer. I am sure that the wait time is a factor, but I’d bet it is more about the fairness factor than the wait time per se.

How will a self-driving car handle these “fairness” situations? As an occupant in a self-driving car, you might become enraged to see that your self-driving car is letting all these interlopers into the rightmost lane. Your self-driving car is allowing the unfairness of others to impinge on you. The odds are that you’ll become upset with your AI and want it to help enforce greater fairness on the roadways. I know that some dreamers will be saying that once we have all self-driving cars it will be easy to enforce this fairness, but I’d like to emphasize that it will be decades upon decades before we have all self-driving cars and no human-driven cars. In essence, we are going to have a mix of self-driving cars and human-driven cars for a very long time, and so the AI of the self-driving cars needs to be able to handle that mix.

We are making AI that interacts with the occupants of the self-driving car and can interpret driving commands in a fashion that is compatible with the roadway conditions and circumstances. The AI provides feedback to the occupant about what is feasible versus not feasible. And, we have the system provide a “nudge” to the occupant to get them to shift toward rational behavior if their requests appear to be of an irrational nature.
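One way to picture this interaction loop: the AI checks an occupant’s request against the legal and situational limits, complies when feasible, and otherwise explains why and counter-offers, which is the nudge. A hypothetical sketch, with all values and message wording invented for illustration:

```python
# Hypothetical sketch of the occupant-interaction loop described above:
# check a requested speed against road conditions, comply when feasible,
# otherwise explain and nudge the occupant toward a rational alternative.
# All values and message strings are illustrative assumptions.

def respond_to_speed_request(requested_mph: float,
                             legal_limit_mph: float,
                             safe_limit_mph: float) -> str:
    """Return feedback for the occupant; the counter-offer is the nudge."""
    feasible = min(legal_limit_mph, safe_limit_mph)
    if requested_mph <= feasible:
        return f"OK, setting cruise to {requested_mph:.0f} mph."
    return (f"{requested_mph:.0f} mph isn't feasible right now "
            f"(the limit is {feasible:.0f} mph given the law and conditions); "
            f"setting {feasible:.0f} mph instead.")

# Usage: a rational request is honored, an irrational one is nudged down.
print(respond_to_speed_request(60, legal_limit_mph=65, safe_limit_mph=70))
print(respond_to_speed_request(90, legal_limit_mph=65, safe_limit_mph=70))
```

The key design choice is that the AI never silently refuses: it always feeds back what is feasible, which keeps the occupant informed while steering them toward rational requests.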

This brings up another facet of the AI, namely that it too can have its own motivations and be rational or irrational. I know that many of you will be shocked by such a statement. You are likely insistent that the AI would always be rational. It can never be irrational. We have grown up with so many movies and TV shows that depict robots and AI of the future that are purely without emotion and without any sense of irrational behavior.

It’s a crock.

AI systems are being developed in a variety of ways, including via neural networks and other machine learning techniques. Via those black box style efforts, we don’t know for sure why the AI will be doing what it does. Complex patterns are being automatically found and utilized via those automated methods, much of which is so complex that we don’t have any direct means to have it explained by the AI. In that sense, as I’ve warned many times, we are going to have all sorts of inherent biases carried into our AI systems. It’s a given.

The AI’s motivation might not be the same kind of motivation that humans have, but it will be patterned upon that motivation. So, whether it exists due to some biological apparatus is not the key; instead, we need to focus on the fact that it exists in the automation as a result of pattern matching across large data sets that subliminally contain that motivation.

We can also expect that the AI for any self-driving car is going to potentially take on the characteristics of the owner or occupants that ride in that self-driving car. If we are going to have self-driving cars that learn over time, and each time they carry a human occupant that we’ll call Bob, presumably the AI will begin to learn what Bob likes and dislikes from a driving perspective. It makes sense that the AI would customize itself to the nature of the owner-occupant. Auto makers will want to provide this capability, and humans will certainly expect it.

As such, the AI will begin to absorb some of the aspects of the owner occupants. If Bob is the type of person that wants his car rides to be fast and furious, the AI in that self-driving car is going to likely try to achieve this. It will then become part and parcel of the means of how that AI is driving the self-driving car for that person. Furthermore, if the AI of that self-driving car is part of a collective of self-driving cars that share among each other, the driving behavior could be further distributed to other self-driving cars.
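Such per-occupant adaptation could be as simple as an exponentially weighted average of observed preferences. A toy sketch, where the 0-to-1 “assertiveness” scale, the learning rate, and the names are all illustrative assumptions:

```python
# Illustrative sketch of per-occupant preference learning: the car keeps an
# exponentially weighted average of how assertively each occupant likes to
# ride, updated after every trip. The 0..1 "assertiveness" scale, learning
# rate, and names are assumptions for illustration.

class OccupantProfile:
    def __init__(self, learning_rate: float = 0.2):
        self.assertiveness = 0.5     # start neutral on a 0..1 scale
        self.learning_rate = learning_rate

    def update(self, trip_feedback: float) -> None:
        """Blend in feedback from the latest trip (0 = relaxed, 1 = fast)."""
        a = self.learning_rate
        self.assertiveness = (1 - a) * self.assertiveness + a * trip_feedback

# Usage: Bob keeps asking for "fast and furious" rides, so his profile
# drifts upward from neutral toward aggressive over ten trips.
bob = OccupantProfile()
for _ in range(10):
    bob.update(0.9)
print(round(bob.assertiveness, 2))  # prints 0.86
```

Note that if a profile learned this way were shared across a collective of self-driving cars, any aggressive habits it has absorbed would travel with it, which is exactly the distribution concern raised above.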

This also raises the question about the role of government in self-driving cars and the driving behavior of the AI. At some point, will the government opt to embed into the AI of the self-driving cars various aspects of what is rational driving behavior versus irrational? Right now, the AI makers for self-driving cars are mainly embedding the legal rules-of-the-road. Those are generally clear cut. The cultural aspects of driving are not so clearly written down and not so clearly specified. The government might want to put “nudges” into the AI of the self-driving cars that will shift the AI toward behavior that is considered preferred.

One aspect of Thaler’s work that you might find especially intriguing is the men’s urinal studies that he has done. For those of you that aren’t aware, when men go into a public bathroom and use a urinal, they do so standing up and are intended to focus their output into the urinal. Unfortunately, many men seem to miss the mark and the floor around a urinal often becomes a slippery stinky mess (if you get my drift). In some urinals, a small image of a fly was placed toward the center area of the urinal. Why was this done?

This was a nudge. Men were now motivated to aim toward the image of the fly. What fun! More importantly, this got men to focus their widespread aim toward a specific target. According to the studies, this dramatically reduced the amount of spillover. We are likely going to see in self-driving cars and AI that there will be many instances of the need for “nudges” like this. They will be needed for purposes of having the AI cope with irrational human drivers on the roads, and for AI that itself becomes an irrational driver due to its learning from other drivers on the roads.

Finally, we also need to be wary of human occupants that want their self-driving cars to do untoward actions. For example, suppose someone is bent on killing themselves and they decide the easiest way to do so would be to have their self-driving car get into an accident while they are riding in the self-driving car. They might give commands to the self-driving car to purposely get it into a tight spot, one that could lead to death and destruction. This so-called “suicide by self-driving car” is something that has real potential and that we would want the AI to detect and prevent.

Self-driving cars need to be designed and built on the basis of being aware of rational behavior and irrational behavior. As per Thaler, there is predictable irrational behavior that we can anticipate and therefore cope with. Our AI systems need to be robust enough to deal with this. And, we will need to incorporate nudges into them too.

This content is originally posted on AI Trends.