Strategic-AI Visionary Metaphors: Knight Rider KITT and AI Self-Driving Cars


By Lance Eliot, the AI Trends Insider

For leaders overseeing any substantive AI initiative, it is crucial to establish a strategic vision of what you are ultimately trying to achieve. The strategic vision should lay out the nature of the AI system you are setting out to create, and it has to be relatively clear-cut so that anyone involved will readily grasp your aims.

If you fail to identify a strategic vision for the AI effort, the odds are that few will comprehend what you are seeking to build and field. Without a collective understanding among your AI developers, you can end up with something that strays from your intention. Worse still, if the overall direction and purpose is muddled or not defined at all, you could wind up with an untoward result, having wasted precious resources and time that otherwise might have led to better success.

A strategic vision for your AI project can consist of a narrative that spells out the goals and objectives in some detail, plus it is handy to have a pictorial representation that visually depicts your written description. Though a description and a picture are worthy, you might also find that a metaphor can be a very powerful communicative mechanism.

Metaphors are used all the time for major initiatives. Sports metaphors are particularly popular. The leader responsible for a major systems effort might cast the effort in, say, baseball terms. Let’s try to hit one over the fence, they might say. We are putting together an AI system that will get us to the World Series and beyond, a leader might exclaim. If they are more of a football fan, they might indicate that the project is going to be a big touchdown for the company and advance the field of AI.

You’ve got to be somewhat careful, though, when you start invoking metaphors. A metaphor is a potential two-way street. The metaphor can support your efforts and be a quick-and-easy means to convey the spirit and excitement involved in the AI initiative. On the other hand, there are doubters and cynics who might try to turn the metaphor around and use it against the AI project, especially if the AI system runs into trouble during the development stage.

I recall one AI project in which the leader used a football metaphor and had posters made up and placed on the walls where the AI development team was working. At first, the football aspects were embraced and used by the developers. Some even brought a football helmet into the office and marked it with the company name and logo. Team spirit, rallies, and the like all helped to get the effort launched.

Unfortunately, as the project began to hit bottlenecks, and it was starting to look unlikely that they could achieve the AI capabilities envisioned, some said that the AI system had become a Hail Mary kind of effort. A Hail Mary in football is a long-distance forward pass that is usually made in utter desperation. When a football team realizes that it cannot otherwise score in a more reasoned manner, it might at times resort to the high-risk, low-odds Hail Mary tactic. In that sense, the AI project had become a Hail Mary, something that few now believed would succeed and that at best might see a frantic, last-ditch scramble to revive it.

In spite of the chance that a metaphor might be turned against an AI effort, it is likely better to have a metaphor than not to have one. If your major AI project does not have an “officially” anointed metaphor that you’ve carefully chosen, that vacuum invites others to offer a worse-choice metaphor anyway.

One of the most commonly evoked vacuum-filling metaphors for AI projects is the Frankenstein metaphor. We all already know about Frankenstein, or at least we’ve all heard of it, regardless of whether you’ve read Mary Shelley’s book or seen a movie that decently covers her novel. By and large, most people have formed an image that Frankenstein refers to something that has been put together piecemeal, tossing together various parts, and the end result is usually considered evil or certainly at least ill-advised.

Frankenstein is a rather “sticky” kind of metaphor for AI projects especially, and once your AI effort has been tainted by it, you’ll have a relatively hard time removing the stench.

The reason that Frankenstein is so sticky for AI projects is that it seems to fit well if you consider what an AI system is about. Frankenstein was brought to life, and in some respects there is an appealingly apt association with AI systems that are bringing something to life (though, obviously not in the same biological sense as Frankenstein). If people in your firm or even outside of your firm begin to assert that your AI system is some kind of Frankenstein, you are probably headed downhill on it or perhaps are already at the bottom of a hole you’ll not dig out of.

For my article about AI and Frankenstein metaphors, see:

For AI and the vaunted singularity, see my article:

For the nature of a Turing test for AI systems, see my article:

For more about AI developers and internal naysayers, see my article:

Bismarck Had Mixed Results as a Metaphor for Gen Zers

The metaphor that you choose has to be readily grasped by those who are going to be guided by it. If you pick something obscure as your metaphor, you’ll spend most of your time just trying to explain what you mean. For example, one AI leader opted to tell his team that they were embarking upon building the Bismarck.

I dare say that most AI developers in the Gen Z age bracket are probably either not aware of the Bismarck or at best have the vaguest notion of what it was. During World War II, the Germans built a massive battleship, which they named the Bismarck, and it became famous at the time for its power and prestige. Eventually, it got into a tussle with Allied ships that were trying to sink it, and the German commander of the ship opted to scuttle and sink the Bismarck so it would not fall into the hands of the Allies.

The leader of the AI group was from the Baby Boomer era and thus was more connected to World War II history via his father, who had served during WWII. In the mind of this AI group leader, everyone knew about the Bismarck. For him, casting the AI system as the Bismarck was handy because it was an AI autonomous vehicle that was going to be used in the water, and using a ship metaphor seemed relevant and clever.

When I chatted with members of his AI team, many of whom were in their early 20s and recent college grads, several sheepishly told me that they had to look up the Bismarck to know what it was. And though they appreciated the idea behind the metaphor, it fell flat from their perspective (it was too outdated a metaphor, and they felt that this outdatedness was going to be inadvertently reflected onto the AI system, even though the AI system was modern-day and state-of-the-art).

There was even scuttlebutt behind the AI leader’s back that maybe he should have used the Titanic as the metaphor, given that they were having massive problems getting the AI system to work. Some were worried that the project was going to fail and “sink” their careers with it. This further illustrates the two-sided coin of using metaphors.

Besides considering whether a chosen metaphor will resonate with your targeted audience, you also need to consider the cultural aspects that might come into play.

There is an effort afoot by a start-up seeking to create a drone-based network in Africa to deliver medical supplies. The use of drones seems particularly applicable, as they might be a low-cost and largely effective means to reach remote areas of Africa. In hopes of overcoming qualms the public might have about a vast set of drones flying back and forth over their heads across the continent, which might carry a troublesome military-related connotation, the start-up refers to the drone network as flying donkeys, a metaphor they conjured up.

The intent of the flying donkeys metaphor is that you are supposed to right away envision that a donkey is something good, and therefore the drone network gets that goodly glow. We all readily know that donkeys are used to carry loads of goods, often over long distances, doing so tirelessly and faithfully. Unless you’ve been kicked by a donkey, you probably have a positive image of donkeys.

By saying that the donkeys are flying, you capture the essence of the drones that are going to be flying, and you also generate catchy imagery. The founder believes this metaphor is apt and appeals to his target audience.

I’d like to leave the matter there, but I suppose I should also mention that some disbelievers, who doubt that the drone network will ever get going, say that the day there is a fully functioning flying donkey network is the same day there will be flying pigs (which, if you don’t know, is an expression usually meaning that something will never happen). This once again shows the two-sided potential advantage and disadvantage of using a metaphor for your project.

What does this have to do with AI self-driving cars?

At the Cybernetic AI Self-Driving Car Institute, we are developing AI software for self-driving cars. There are some auto makers and tech firms that are using metaphors to help internally or at times externally convey the nature of their AI self-driving car initiatives.

Perhaps the most frequently used metaphor is the Knight Rider KITT, which I’ll explain what that is and offer both the positive and negative aspects of using this particular metaphor.

I’d like to first clarify and introduce the notion that there are varying levels of AI self-driving cars. The topmost level is considered Level 5. A Level 5 self-driving car is one that is driven by the AI without any human driver involved. For the design of Level 5 self-driving cars, the auto makers are even removing the gas pedal, brake pedal, and steering wheel, since those are contraptions used by human drivers. The Level 5 self-driving car is not driven by a human, nor is there an expectation that a human driver will be present in the self-driving car. It’s all on the shoulders of the AI to drive the car.

For self-driving cars less than a Level 5, there must be a human driver present in the car. The human driver is currently considered the responsible party for the acts of the car. The AI and the human driver are co-sharing the driving task. In spite of this co-sharing, the human is supposed to remain fully immersed in the driving task and be ready at all times to perform the driving task. I’ve repeatedly warned about the dangers of this co-sharing arrangement and predicted it will produce many untoward results.
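The level structure described above can be paraphrased as a simple lookup. Note that the wording in this sketch is my informal paraphrase for illustration, not the official standards text, and the function name is hypothetical:

```python
# Informal paraphrase of the driving-automation levels discussed above.
# Descriptions are illustrative, not official standards language.
DRIVING_LEVELS = {
    0: "No automation: the human performs all driving",
    1: "Driver assistance: human drives, one function is assisted",
    2: "Partial automation: AI co-shares, human must stay fully engaged",
    3: "Conditional automation: AI drives, human must be ready to take over",
    4: "High automation: AI drives within a limited operating domain",
    5: "Full automation: AI drives, no human driver expected at all",
}

def requires_human_driver(level: int) -> bool:
    """Per the discussion above, anything below Level 5 still requires
    a human driver present and ready to perform the driving task."""
    if level not in DRIVING_LEVELS:
        raise ValueError(f"unknown level: {level}")
    return level < 5
```

The key dividing line, as noted in the text, is that only Level 5 removes the human driver entirely.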

For my overall framework about AI self-driving cars, see my article:

For the levels of self-driving cars, see my article:

For why AI Level 5 self-driving cars are like a moonshot, see my article:

For the dangers of co-sharing the driving task, see my article:

Let’s focus herein on the true Level 5 self-driving car. Many of the comments apply to the less-than-Level 5 self-driving cars too, but the fully autonomous AI self-driving car will receive the most attention in this discussion.

Here’s the usual steps involved in the AI driving task:

• Sensor data collection and interpretation
• Sensor fusion
• Virtual world model updating
• AI action planning
• Car controls command issuance
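The five steps above form a continuously repeating loop. Here is a minimal, hypothetical sketch of that loop; the function names and return values are illustrative placeholders, not any auto maker's actual API:

```python
# Hypothetical sketch of the five-stage AI driving-task loop.
def collect_and_interpret(sensors):
    # Stage 1: read raw sensor data and interpret it into detections.
    return [s.read() for s in sensors]

def fuse(detections):
    # Stage 2: sensor fusion -- reconcile overlapping detections into one view.
    return {"objects": detections}

def update_world_model(world, fused):
    # Stage 3: refresh the virtual world model with the fused view.
    world.update(fused)
    return world

def plan_actions(world):
    # Stage 4: AI action planning based on the current world model.
    return ["maintain_lane", "adjust_speed"]

def issue_commands(actions):
    # Stage 5: translate planned actions into car control commands.
    return [f"cmd:{a}" for a in actions]

def driving_cycle(sensors, world):
    """One pass through the five-stage loop, repeated continuously while driving."""
    fused = fuse(collect_and_interpret(sensors))
    world = update_world_model(world, fused)
    return issue_commands(plan_actions(world))
```

In a real system, each stage is of course vastly more complex, but the pipeline ordering is the essential point.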

Another key aspect of AI self-driving cars is that they will be driving on our roadways in the midst of human driven cars too. There are some pundits of AI self-driving cars that continually refer to a utopian world in which there are only AI self-driving cars on the public roads. Currently there are about 250+ million conventional cars in the United States alone, and those cars are not going to magically disappear or become true Level 5 AI self-driving cars overnight.

Indeed, the use of human driven cars will last for many years, likely many decades, and the advent of AI self-driving cars will occur while there are still human driven cars on the roads. This is a crucial point since this means that the AI of self-driving cars needs to be able to contend with not just other AI self-driving cars, but also contend with human driven cars. It is easy to envision a simplistic and rather unrealistic world in which all AI self-driving cars are politely interacting with each other and being civil about roadway interactions. That’s not what is going to be happening for the foreseeable future. AI self-driving cars and human driven cars will need to be able to cope with each other.

For my article about the grand convergence that has led us to this moment in time, see:

See my article about the ethical dilemmas facing AI self-driving cars:

For potential regulations about AI self-driving cars, see my article:

For my predictions about AI self-driving cars for the 2020s, 2030s, and 2040s, see my article:

KITT Metaphor Prevalent for AI Self-Driving Cars

Returning to the topic of metaphors used for AI self-driving cars by some auto makers and tech firms, the most prevalent one seems to be undertaken by making reference to the Knight Rider KITT.

Knight Rider is the name of a TV series and an entertainment franchise that first launched in the early 1980s. The original TV series ran from 1982 to 1986. There was a later resurgence of interest, including a series of movies and video games, and, as the TV series gradually became a pop culture icon, it was rebooted several times during the 1990s and the 2000s.

The fictional plot of the Knight Rider series is that a man named Michael Knight makes use of a “partner” in his efforts to fight crime, the partner being his car. The car is imbued with AI and is made by the fictional Knight Industries company. The model number of the AI-based car is the model 2000, and thus the clever name of the car is the Knight Industries Two Thousand, or “KITT” for short. Later on, there was a newer model, the Three Thousand, which conveniently can also be called KITT.

I would guess that anyone really enamored of the Knight Rider KITT is familiar with the 1982 Pontiac Firebird used in the original TV series. A wide variety of other cars were used in subsequent stages of the TV and film adaptations. I admit that when I see a picture of a Pontiac Firebird, or a restored one, I instantly think of KITT. I hope this doesn’t date me.

One question that comes up about using the Knight Rider KITT as a metaphor for building today’s AI self-driving cars is whether your AI developers are familiar with KITT. On the one hand, it is somewhat outdated, since it got started in the early 1980s. On the other hand, the numerous reboots and the avenues through which KITT is deployed, such as video games, keep it somewhat contemporary, or at least raise awareness more than if it had existed only in the original TV series and never been reinvigorated.

It’s also interesting that though some people know what you mean when you say “Knight Rider,” associating the car with that phrase, they are less likely to know what you mean if you say “KITT.” They might have heard “KITT” and thought it meant “kit,” as in a kit that turns a car into an AI-based car. Rarer still is the person who knows that K.I.T.T. is an acronym, let alone what it stands for.

Nevertheless, it doesn’t matter much whether someone knows what KITT stands for, nor whether they realize that “Knight Rider” is not the name of the car per se. All that really matters is that they know that either Knight Rider or KITT refers to an AI-based car. That’s sufficient.

As a crime fighter, Michael Knight drives around in KITT and interacts with the car as though it were a human being. KITT supposedly makes use of a cybernetic processor, a system transplanted from a mainframe AI computer that was being used by the United States government. Michael Knight is the heroic everyday human crime fighter (no superpowers), and he is aided by his trusty horse, well, actually, trusty car, whose AI helps him reason about the crime fighting and capture the criminals they are seeking.

Put yourself back into the 1980s and the description of KITT makes a lot of sense for that time period. During the 1980s, it was commonplace to refer to AI systems as cybernetic. Also, we mainly had large mainframe computers, and the advent of PCs was just in its infancy (the IBM PC launched in 1981, the Apple Mac in 1984). The fiction writers who composed the Knight Rider series were using the terminology of the time period in coming up with the KITT backstory.

One nifty aspect of using the Knight Rider KITT metaphor is that the plotline involved good guys fighting crime. In that sense, the imagery of KITT the car is quite positive. It was trying to be a do-gooder. It worked well with humans, at least those on the right side of the law. It was loyal to its human “master,” Michael Knight. These are all facets of the AI-based car that make it palatable for use as a metaphor today.

Suppose that KITT was evil and wantonly killed people. Suppose that KITT wanted to take over humanity. Suppose that KITT disobeyed its human master and ran uncontrolled. All of those aspects would color what we envision when the Knight Rider KITT metaphor is invoked. Instead, it is a squeaky-clean metaphor and one that involves heroics. Score one for the use of the KITT metaphor.

You might try to use some other famous “talking cars” for your metaphor, but the pickings are rather slim. There is the “My Mother the Car” TV series that ran in the 1960s. That’s ancient times today, and thus it is relatively buried now in history. Plus, the car was a decrepit 1928 Porter touring car. I don’t think any modern auto maker or tech firm wants that imagery.

You could try invoking Herbie, the AI-based VW Beetle that starred in Disney’s 1968 motion picture “The Love Bug.” Herbie has had a rather lengthy life span as a franchise, mainly in films and at the Disney parks. It was cute. It was cuddly. You could try to use it as a metaphor for today’s AI-based cars, which might be handy to suggest that AI-based cars are friendly and lovable. Then again, it is doubtful any auto maker or tech firm would want that image per se for their AI self-driving car.

Generally, the sleek and cool aspects of the Knight Rider KITT make it a top choice as a metaphor for what today’s AI self-driving car makers are trying to achieve. It is a tad outdated, but not much; it invokes a heroic image; and its AI was “manageable” and not out of control.

Let’s consider further the AI aspects of KITT.

Using the fictional Knight 2000 microprocessor, the fictional car had a fictional Voice Synthesizer and an Etymotic Equalizer (audio input) to interact with humans. You might famously recall that this consisted of either a flashing red square inside the car or a three-section set of vertical bars that flashed when the car was speaking. This visual device was used to get those of us watching the show to realize that the car was speaking. If the flashing-light trickery had not been used, it might have been less believable that the car was speaking. You’d be looking around wondering where the voice was coming from.

For the emerging AI self-driving cars, there’s no question that voice interaction is going to be crucial. Human occupants who are passengers in a true Level 5 self-driving car will interact verbally with the AI system. This includes not just commands about where they want to go, but much more complex interactions. The AI will need to be savvy enough to converse about how the human wants the car to drive, where it is driving to, and other aspects. In that sense, KITT hit the mark.

The use of flashing lights to represent when KITT was talking is something that will arise for AI self-driving cars too. Humans are used to speaking to other humans by facing them and seeing their lips and mouths move. Even when we drive up to a fast-food drive-thru, we look for the speaker and assume that’s where we need to direct our reply. Designers of AI self-driving cars can either let human occupants look around confusedly for where to speak, or put some kind of “obvious” audio input device in, say, the dashboard. I realize that the inside of an AI self-driving car will likely be lined with several audio input devices, so the humans don’t have to speak in a specific direction per se, but to make things clearer for humans it might make sense to provide a focal point for when they talk to the AI.
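The focal-point idea can be sketched in a few lines. This is purely a hypothetical illustration, assuming an invented `VoiceFocalPoint` class; the class, its keyword trigger, and its replies are all made up for the example:

```python
# Hypothetical sketch of a voice-interaction focal point: a visible
# indicator shows occupants where the AI's "voice" lives, much like
# KITT's flashing bars. All names and replies are illustrative only.
class VoiceFocalPoint:
    def __init__(self):
        self.indicator_on = False
        self.transcript = []

    def speak(self, text: str) -> None:
        # Light the dashboard indicator while speaking (in a real system
        # it would stay lit for the duration of the audio output).
        self.indicator_on = True
        self.transcript.append(("ai", text))
        self.indicator_on = False

    def hear(self, utterance: str) -> str:
        # Occupants can talk from anywhere (multiple cabin mics), but the
        # acknowledgment is rendered at the focal point.
        self.transcript.append(("human", utterance))
        if "go to" in utterance:
            return "destination acknowledged"
        return "please rephrase"
```

The design choice being illustrated is simply that a single, visible interaction point reduces occupant confusion, even when the microphones themselves are everywhere.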

For the socio-behavioral aspects of AI interaction with humans, see my article:

For in-car voice commands aspects, see my article:

For Machine Learning and AI self-driving cars, see my article:

For the importance of explanation-based machine learning, see my article:

KITT had a wide variety of sensory devices to figure out the driving scene and how to navigate along streets and highways. For Level 5 self-driving cars, we’re currently using a wide variety of sensory devices, including cameras, radar, ultrasonic, LIDAR, sonar, and so on. It is handy that KITT likewise uses sensory devices, since if it just magically knew about its surroundings, this would undermine to some extent the fit of the metaphor to AI self-driving cars.
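Sensor fusion, mentioned earlier as a stage of the driving task, is what reconciles those multiple device types. As a toy illustration (not a production algorithm; real systems use far more sophisticated estimators), here is a confidence-weighted average of distance estimates for a single detected object:

```python
# Toy sensor-fusion sketch: combine distance estimates for one object
# from several sensor types via confidence-weighted averaging.
def fuse_distance(readings):
    """readings: list of (sensor_name, distance_m, confidence in 0..1).

    Returns the confidence-weighted mean distance in meters."""
    total_weight = sum(conf for _, _, conf in readings)
    if total_weight == 0:
        raise ValueError("no usable readings to fuse")
    return sum(dist * conf for _, dist, conf in readings) / total_weight
```

For example, fusing camera, radar, and LIDAR readings that roughly agree yields an estimate between them, weighted toward the more trusted sensors.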

KITT though went a bit further with the sensory devices than we are today. It used X-rays and other somewhat exotic sensory devices.

Olfactory Sensors Unlikely in AI Self-Driving Cars

One such “exotic” sensory device that I’ve been advocating and exploring is the use of an olfactory sensor for detecting odors. Coincidentally, KITT had one.

In a future column, I provide a compelling use case for having an olfactory sensory device in an AI self-driving car. I know that some might think it questionable or fluff, and it is considered an edge or corner case at best right now by most auto makers and tech firms. I claim and make the case that this will be something of value and a future differentiator for those auto makers or tech firms that opt to include it in their AI self-driving car offerings.

KITT also had a money dispenser so that Michael Knight could get cash when he needed it. ATMs in the early 1980s were still somewhat of a novelty, and thus the idea of KITT having its own internal ATM for Michael Knight was clever and handy. For Level 5 self-driving cars, I’ve predicted that they will use blockchain for cryptocurrency, allowing the human passengers to carry out electronic or digital cash transactions. No need to have a cash-dispensing ATM inside a Level 5 AI self-driving car.

For my article about blockchain for AI self-driving cars, see:

For the future of marketing of AI self-driving cars, see my article:

For edge cases in AI self-driving cars, see my article:

For sensory devices and AI self-driving cars, see my article:

One area where the metaphor to KITT is perhaps not quite fully applicable involves the crime-fighting capabilities of the car.

KITT had special armored plating that protected it from bullets and explosions. Though this might be the case for Level 5 self-driving cars far off in the future, or maybe for militarized versions of self-driving cars in the nearer term, I don’t think we’ll be seeing auto makers outfitting everyday AI self-driving cars with the molecular bonded shell (or similar) that KITT used.

Here are some additional features that KITT had for crime-fighting purposes and that I doubt we’ll see in everyday AI self-driving cars:

• Could create a smoke screen and lay down slick oil to dissuade cars chasing it (I’ve seen conventional cars today that do this simply because they are leaking oil!)
• Had a flame thrower (I realize that Elon Musk really likes flame throwers, but I doubt we’ll see them in Teslas any time soon)
• Tear gas launcher
• Lasers to burn through steel plates and walls
• Bomb sniffer
• Seat ejection system (like James Bond had!)
• Other

Some of those crime-fighting items are perhaps humorous to note, but do keep in mind that if you invoke the Knight Rider KITT as a metaphor, some people might wonder whether or not you intend to include those kinds of defensive and offensive weapons.

As mentioned earlier, it could make sense to include such capabilities on military self-driving vehicles or perhaps police self-driving cars. Thus, it is not beyond the realm of “reasonableness” for some to wonder whether your use of the metaphor extends to those aspects too. When you invoke the Knight Rider KITT metaphor, you probably want to make sure to clarify whether you are also encompassing these auxiliary crime-fighting kinds of features.

KITT had a Telephone Comlink that allowed the car and Michael Knight to talk with other people on the phone. Of course, we nowadays have cell phones, but in the 1980s it was rather unusual to have any kind of phone capability available in your car. The Telephone Comlink in that era seemed quite extraordinary and futuristic. How time flies!

True Level 5 self-driving cars will likely be outfitted with various electronic communications and networking capabilities. There will be OTA (Over-The-Air) capabilities for the AI to communicate with the cloud of the auto maker or tech firm. This will allow collected sensory data to be pushed up to the cloud, along with the auto maker being able to push down into the AI the latest system updates and patches. There will also be V2V (vehicle-to-vehicle) and V2I (vehicle-to-infrastructure) electronic communications.
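The three channel types can be captured in a small sketch. This is a hypothetical illustration of the distinctions just described; the message shape and routing rules are invented for the example and are not any standard's wire format:

```python
# Hypothetical sketch of the OTA / V2V / V2I channel distinctions.
from dataclasses import dataclass
from enum import Enum

class Channel(Enum):
    OTA = "over-the-air (car <-> auto maker's cloud)"
    V2V = "vehicle-to-vehicle"
    V2I = "vehicle-to-infrastructure"

@dataclass
class Message:
    channel: Channel
    payload: dict

def route(msg: Message) -> str:
    # OTA carries sensor uploads going up and system updates coming down;
    # V2V and V2I carry roadway coordination traffic.
    if msg.channel is Channel.OTA:
        return "cloud"
    return "roadside_unit" if msg.channel is Channel.V2I else "nearby_vehicle"
```

The point of separating the channels is that each has very different latency, security, and trust requirements, even though all three flow through the same vehicle.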

The part of KITT that drove the car was called the Alpha Circuit. This is kind of interesting because Tesla calls its AI component Autopilot. Most of the auto makers and tech firms are giving specific names to the AI-based element of their self-driving cars. As an aside, I’ve often indicated in my other articles and speeches that the industry needs to be mindful and cautious about the names it gives to its AI-based components.

I say this because, in short, the name of the AI-based component is another example of a two-sided coin. Coming up with a catchy name for the AI is useful and can be a boon to the marketing of an AI self-driving car. On the other hand, it can boomerang: if the name implies a greater AI capability than truly exists, there might be backlash against it. In the case of Tesla, I’ve mentioned that the use of “Autopilot” has already spurred some lawsuits over the implications of the naming for consumers who bought Tesla cars outfitted with the feature.

For the crossing of the Rubicon for Tesla, see my article:

For why AI self-driving cars are unlike airplane “autopilots” see my article:

For product liability lawsuits involving AI self-driving cars, see my article:

For my article about OTA, see:

KITT came with a turbo booster.

Michael Knight could ramp up the speed to a top of around 200 miles per hour (both going forward and going backwards). Do not expect the everyday AI self-driving car to go that fast, at least not for the foreseeable future.

I have though predicted that we’ll be seeing AI self-driving cars that go at high-speeds on purposely wide-open highways as a potential alternative to building expensive high-speed rails to transport people.

See my article about rapid transit and AI self-driving cars:

There were several driving modes of KITT.

In one mode, Normal Cruise, the human driver, Michael Knight, was driving the car. Meanwhile, KITT was watching the driving and could take over the driving task as needed. In another mode, Auto Cruise, KITT was like a true Level 5 self-driving car that could autonomously drive the car without needing any human driving input or assistance.

There was also a Pursuit mode, used for crime-fighting purposes. I know that this mode seems like the antithesis of what most AI developers are thinking about when it comes to AI self-driving cars. In the view of these AI developers, they imagine a utopian world in which there will no longer be any kind of high-speed pursuits. I’ve debunked this notion. There will be circumstances where a “pursuit” mode makes sense and would be desired and needed by even law-abiding AI self-driving cars.
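KITT's modes amount to a small state machine. The sketch below follows the show's mode names, but the controller class, the transition rules, and the assumption that the AI holds the driving task in Pursuit mode are all illustrative inventions of mine:

```python
# Illustrative state machine for KITT-style driving modes.
# Mode names follow the show; transitions and the "who is driving"
# rule are assumptions for the sake of the example.
VALID_MODES = {"normal_cruise", "auto_cruise", "pursuit"}

class DrivingModeController:
    def __init__(self):
        self.mode = "normal_cruise"  # default: human drives, AI watches

    def switch(self, new_mode: str) -> str:
        if new_mode not in VALID_MODES:
            raise ValueError(f"unknown mode: {new_mode}")
        self.mode = new_mode
        return self.mode

    def ai_is_driving(self) -> bool:
        # Assumed here: in Auto Cruise and Pursuit, the AI holds the
        # driving task; in Normal Cruise, the human does.
        return self.mode in {"auto_cruise", "pursuit"}
```

Framing the modes this way makes explicit a point from the co-sharing discussion earlier: every mode must unambiguously answer who holds the driving task at that moment.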

See my article about road rage and AI self-driving cars:

For my article about when AI self-driving cars do illegal acts, see:

For the boundaries of AI self-driving cars and driving controls, see my article:

For safety aspects of AI self-driving cars, see my article:

There are various other capabilities of KITT, and in the later years of the series the features seemed to get rather extravagant. The writers of the series were likely under pressure to come up with new gimmicks and keep things fresh.

On a related matter, true Level 5 AI self-driving cars will at first have a core set of capabilities, and we’ll likely all be pleased and excited to have those core features. Once the general public gets comfortable using an AI self-driving car, I’m sure there will be a features war among the auto makers and tech firms to try to differentiate one self-driving car model from another.

In that sense, some of the more “extravagant” items of KITT might actually be designed, built, and added to future models of true Level 5 AI self-driving cars. I’m not referring to the laser beams that destroy things (let’s hope we don’t all end up with such a feature), but rather to the olfactory sensors, the medical status scanners that KITT had, and so on.


Using a metaphor to label an AI systems project can be quite powerful as a means to establish the strategic vision for what you are aiming to achieve. Make sure you pick a relevant metaphor. Try to avoid picking one that is hard for people to comprehend, or that is outdated or otherwise seemingly lacks stickiness and relevance. Be on the watch for efforts to distort or twist the metaphor; if that happens, it could also be a warning sign that your AI project is drifting into a potential abyss, and you ought to be cognizant of that slippage.

For AI self-driving cars, the use of the Knight Rider KITT metaphor is handy and offers some significant advantages. The Knight Rider car already enjoys a rather positive impression, and this can carry over into your AI self-driving car effort. Make sure to clarify which aspects are relevant and which are not; otherwise you might end up with a passenger car that can launch missiles and ram through walls.

Admittedly, since my daily commute on the freeway in Los Angeles takes about 1 to 2 hours due to snarled traffic, I’d welcome having those missile launchers and ramming features on my AI self-driving car. But, only on mine, and not on anyone else’s. That’s personalization of AI self-driving cars.

Copyright 2018 Dr. Lance Eliot

This content is originally posted on AI Trends.