Reverse Engineering and AI Self-Driving Cars


By Lance Eliot, the AI Trends Insider

At an annual auto show there was a Tesla Model 3 chassis on the convention floor, displayed by a company other than Tesla. Notice that I said it was just the chassis, and not the entire car. You might be wondering why somebody would be showing off a Tesla Model 3 chassis. Did they lose the rest of the car? Or did they somehow surgically remove the chassis from the rest of the car? It might seem odd that they weren’t showing the entire car.

Answer: Reverse engineering.

It is a well-known secret in the auto industry that the auto makers are desirous of knowing every tiny detail about their competitors’ cars. To find out the details usually involves taking apart an entire car, piece by piece, bolt by bolt. Some of the auto makers have their own inside teams that will buy a competitor’s car and take it apart. Other auto makers will pay an outside company to do the same. There are companies that specialize in doing reverse engineering on cars. They delight in buying the latest model of any car and meticulously taking it apart like so many Lego blocks.

More than being a delight, these reverse engineering firms can make some good bucks from their efforts. They will sell to other auto makers what they find out about their competitors’ cars. You get a fully documented list of every item that went into the car, along with useful added aspects like how much each component likely costs. Furthermore, they can estimate how much it cost to assemble the parts and give another auto maker insight into what kind of assembly effort their competition is using.

An auto maker will even at times pay to get the same insights about their own cars! I know it seems odd, since you would assume that an auto maker would already know how they assemble their own cars and what it costs, but it can be very handy to see the estimates made by the third party. The third party might be high or low, or otherwise not so exacting. It is also handy to know what they are telling your competitors. Plus, you can then have them do a comparison of the parts and costs of your competitors’ cars versus your own.

The company at the auto show was touting the aspect that they had taken apart the Tesla Model 3 and could tell you whatever you wanted to know about the car. They can tell you the weight of each part. They can tell you the size of each part. They can tell you the cost. They can tell you how capable the part is. They can tell you where the part was made, as in the U.S. or overseas. In many cases they can even tell you what the composition of the part is, such as percentages of metal versus other elements.

To give you a sense of the magnitude of this cataloging of a car, a rule-of-thumb is that a typical everyday car might have 50,000 or more parts, and it might require something like 200,000 distinct manufacturing steps. Just imagine that if you were designing a new car, it would be handy to learn from the other cars on the marketplace. Maybe try to use the carburetors that are found on a Brand X Model Y and the mufflers that are on a Brand Z Model Q. If you are an auto maker, you might also discover that you are overpaying for your own carburetor and realize that you ought to switch, reducing your costs and ultimately reducing the price that you charge for your car.

There’s an added twist to this reverse engineering effort. In theory, if you are careful as you take apart a car, you can also scan the parts into a CAD system and essentially reverse engineer the entire design of the car. With today’s powerful CAD systems, you can then view the car from any angle and zoom in and out of the whole design of the car.

Some really sophisticated CAD systems will allow you to pump the design into a simulator program. This could allow you to potentially act as though the car exists and see how it runs. In a video game kind of way, you can possibly “drive” the car and see how it handles. You can simulate what the gas mileage will be like and examine other facets of the car.

See my article about simulations:

What does all of this have to do with AI self-driving cars?

At the Cybernetic AI Self-Driving Car Institute, we are developing AI systems for self-driving cars. You might find of keen interest that the same industry-wide pursuit of reverse engineering of conventional cars is now underway for AI self-driving cars too.

Let’s consider first the physical elements of an AI self-driving car.

There are five key stages to the processing aspects of an AI self-driving car:

  • Sensor data collection and interpretation
  • Sensor fusion
  • Virtual world model updating
  • AI action plan updating
  • Car controls command issuance
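To make the five stages more concrete, here is a minimal sketch of how they might chain together in a processing loop. All function names, sensor values, and the braking threshold are purely illustrative, not drawn from any real system:

```python
# Illustrative sketch of the five-stage processing loop of an AI
# self-driving car. All names and values are hypothetical.

def collect_sensor_data():
    # Stage 1: gather raw readings (camera, radar, LIDAR, etc.)
    return {"camera": [0.2, 0.7], "radar": [12.5], "lidar": [3.1, 3.0]}

def fuse_sensors(raw):
    # Stage 2: reconcile overlapping readings into one coherent view.
    return {"obstacle_distance_m": min(raw["radar"] + raw["lidar"])}

def update_world_model(world, fused):
    # Stage 3: fold the fused view into the virtual world model.
    world["nearest_obstacle_m"] = fused["obstacle_distance_m"]
    return world

def update_action_plan(world):
    # Stage 4: decide what the car should do next.
    return "brake" if world["nearest_obstacle_m"] < 5.0 else "cruise"

def issue_car_commands(action):
    # Stage 5: translate the plan into actuator commands.
    return {"brake": 0.8 if action == "brake" else 0.0}

world = {}
fused = fuse_sensors(collect_sensor_data())
world = update_world_model(world, fused)
action = update_action_plan(world)
commands = issue_car_commands(action)
print(action, commands)
```

In a real system each stage runs continuously and concurrently, but the data flow from raw sensors down to actuator commands follows this same general shape.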

See my framework for AI self-driving cars:

The most prominent physical aspects of an AI self-driving car are the sensors. There are a multitude of sensors on an AI self-driving car, including radar sensors, sonic sensors, cameras, LIDAR, IMU’s, and other sensors.

See my article about LIDAR for AI self-driving cars:

See my article about IMU’s for AI self-driving cars:

Each auto maker and tech firm is keenly interested in knowing which sensors the other companies are using in their AI self-driving cars. It’s a somewhat free-for-all right now in that there is no dominant supplier per se. Indeed, the various companies that make sensors suitable for AI self-driving cars are in a fierce fight over trying to gain traction for their sensors.

This makes a lot of sense when you realize that today there are 200 million conventional cars in the United States alone – so assume that those will ultimately become obsolete and be replaced with an equal or greater number of AI self-driving cars. You’d want your sensors to be the first choice for those hundreds of millions of new cars. Plus, you would guess that anyone that grabs the United States market will be very likely to become the global supplier too. Big markets, big bucks to be had.

Sensor Placement Getting More Strategic

Not only is it useful to know which sensors are being used, but it is also valuable to know where they are placed within the AI self-driving car. The design of a self-driving car is still being figured out. How can you best embed these sensors in the car? You want the sensors to be free of obstruction. You also want them to be less vulnerable to the elements. At the same time, they have to look stylishly placed.

The early versions of LIDAR were large and looked like some strange beanie cap on the top of the car. It stood out. Some thought it looked cool, but most thought it looked kind of dorky. The LIDAR devices are getting smaller in size, better in performance, and coming down in cost. This allows for placing it in a manner that it does not seem so visually awkward. It still though needs to be able to freely emit the light beams and so it cannot be buried inside some other aspect of the car.

So, knowing what sensors your competitor is using and how they are placing them into or onto the body of the self-driving car is important insider information. If your competitor does a better job than you, it might cause consumers or businesses to want to buy their AI self-driving car rather than your AI self-driving car. It might seem shallow to AI developers to think that someone will choose an AI self-driving car by how it looks, versus by what it does in terms of the AI capabilities, but I assure you that the average buyer is going to be focused more on looks than what the self-driving car AI can actually achieve, all else being seemingly equal.

See my article about the future of marketing of AI self-driving cars:

In addition to the sensors, you would also want to know what kinds of microprocessors are being used. Once again, there is a battle royale going on in this realm. Imagine that an AI self-driving car might end up with dozens upon dozens of microprocessors, maybe in the hundreds. Multiply that number by the 200 million cars in the United States in terms of ultimately replacing those cars with AI self-driving cars. It’s going to be a huge bonanza for the chip makers.

Indeed, there are chip makers that are now producing and striving to further advance specialized chips for AI self-driving cars. They hope to get into the early AI self-driving cars and become the de facto standard. It’s like the early days of Beta versus VHS. Which will prevail? If you can get rooted into the marketplace, it will be hard to upset your placement. That’s why you see these latest artificial neural network chips. They are going to be needed aplenty on AI self-driving cars.

Where should these microprocessors be placed inside the body of the car? For some of the early day self-driving cars, they pretty much went into the trunk. The trunk was sacrificed to be jam packed with the computers used by the self-driving car and the AI. If your competitor has slickly found a means to put the microprocessors in the underbody, while you have used up the trunk, and their self-driving car has a fully available trunk, which self-driving car do you think will get chosen by consumers and businesses? Again, doing some reverse engineering could let you know whether you are ahead of the game or behind the game.

You might also discover during your reverse engineering that your competitor has made “mistakes” or at least gotten themselves into some bad spots. Suppose your competitor put the microprocessors in an area of the self-driving car that is subject to heat. This could suggest that after their self-driving cars are on the roads for a few weeks or months, those chips will start to burn out. The competitor might not realize it now and only discover it months from now. Should you tell them? Well, normally you wouldn’t want to give advice to your competitors that makes them better off.

This does though bring up an ethics question. If during your reverse engineering of a competitor’s AI self-driving car you find something untoward that could lead to the harm of humans, should you inform the competitor? You might say that you have no such obligation. It’s up to your competitor to find out and deal with it when it occurs. But, if you don’t warn them and it harms human occupants or others, are you in a sense culpable? Maybe it would be good to let the competitor know. Or, if you prefer, inform the marketplace, and it will cast your competitor in a bad light, which presumably would be good for you.

See my article about the ethics of AI self-driving cars:

There is a next level of detail in terms of reverse engineering an AI self-driving car. We focused so far on the physical aspects that are relatively easy to discern. You can take apart a car and readily identify the sensors and the chips, etc. What you cannot so readily reverse engineer is the software.

Possible to Reverse-Engineer the Self-Driving Car Software

Each auto maker and tech firm making AI self-driving cars is eager to know what the software is and does in their competitor’s self-driving car. Sure, some of it might be open source, but most of it is more likely to be proprietary. Indeed, the auto makers and tech firms are protecting their software as though it is gold that belongs in Fort Knox. Who can blame them? The arms race is on. Everyone is trying to figure out how to make the AI as truly capable as possible.

See my article about open source software for AI self-driving cars:

The AI software is going to ultimately be what makes or breaks a true AI self-driving car. For a level 5 self-driving car, which is the level that involves the AI fully driving the car and there is no human driver needed, we are all trying to get to that nirvana. It’s not going to be easy. There have been millions upon millions of dollars spent on software engineers and others trying to develop this AI code. Firms would want to protect it from prying eyes.

Once the AI code is loaded into the self-driving car, it’s possible to try and reverse engineer it. I know that some of you are saying that this doesn’t seem possible to do, since the code is presumably compiled or otherwise being interpreted and the auto maker or tech firm certainly didn’t load the source code into the on-board computers. You are right that the source code would not be on-board. But, you are wrong if you think that the running software cannot be reverse engineered.

There are plenty of available reverse engineering software tools in the marketplace. If you can get to the memory of the chips, you have a shot at reverse engineering the code. Especially if some of it is open source. I say this because you can already know what the target of the open source is, and thus those portions of the running code can be more readily discerned. This then gets you into the right spots to look for the proprietary stuff.
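The idea of using known open source code as a foothold can be illustrated with a toy signature scan: search a memory image for byte patterns you already recognize, which then tells you where to focus on the surrounding proprietary regions. The memory image and signatures below are entirely made up for illustration:

```python
# Toy illustration of signature scanning: search a memory image for
# byte patterns of known open-source code to locate regions of
# interest. The "memory image" and signatures here are fabricated.

memory_image = bytes.fromhex("00ff") + b"OPENSSL_1.1" + b"\x90" * 16 + b"unknown code"

known_signatures = {
    b"OPENSSL_1.1": "OpenSSL version string",
    b"libc.so": "C library reference",
}

def scan_for_signatures(image, signatures):
    # Return (offset, label) pairs for every signature found.
    hits = []
    for sig, label in signatures.items():
        offset = image.find(sig)
        if offset != -1:
            hits.append((offset, label))
    return sorted(hits)

for offset, label in scan_for_signatures(memory_image, known_signatures):
    print(f"offset {offset:#06x}: {label}")
```

Real reverse engineering tools do far more (disassembly, control-flow recovery, symbol matching), but pattern matching against known code is a genuine first step in narrowing down where the proprietary portions live.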

The artificial neural networks and the machine learning portions are even easier to figure out, usually. If you know the key models used for neural networks, you can somewhat readily find those patterns in the memory and drives of the on-board systems. You can then reverse engineer it back into the overall models with the number of neurons, the weights, and all the rest.
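As a simplified illustration of recovering model parameters from raw bytes, suppose (hypothetically) that a weight matrix is stored as contiguous little-endian 32-bit floats preceded by a row/column header. Knowing that assumed layout, you can unpack the weights directly; the layout and helper names here are my own invention, not any real system’s format:

```python
import struct

# Toy illustration of recovering neural-network weights from a binary
# blob, under the ASSUMED layout: two unsigned 32-bit ints (rows, cols)
# followed by rows*cols little-endian 32-bit floats.

def pack_weights(rows, cols, weights):
    # Helper to build a sample blob in the assumed layout.
    return struct.pack("<II", rows, cols) + struct.pack(f"<{rows*cols}f", *weights)

def unpack_weights(blob):
    # Read the header, then slice the flat float array into rows.
    rows, cols = struct.unpack_from("<II", blob, 0)
    flat = struct.unpack_from(f"<{rows*cols}f", blob, 8)
    return [list(flat[r*cols:(r+1)*cols]) for r in range(rows)]

blob = pack_weights(2, 3, [0.1, 0.2, 0.3, 0.4, 0.5, 0.6])
matrix = unpack_weights(blob)
print(matrix)  # recovered 2x3 weight matrix
```

In practice the layout must itself be inferred, which is exactly where familiarity with common neural network frameworks gives a reverse engineer the edge the article describes.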

What would make this even more doable would be if you were an AI developer already developing such systems. You would already be aware of the nature of the coding needed for these kinds of systems. If you took an everyday Python or C++ coder, they would be highly unlikely to be able to figure this out. It requires software engineers with more detailed and machine-based experience and skills. This narrows the pool of potential reverse engineers, but doesn’t drop it to zero by any means.

That being said, it is important to emphasize that the auto makers and tech firms need to make sure they are sufficiently protecting their AI self-driving car systems from these kinds of reverse engineering efforts. Besides protecting the Intellectual Property (IP), it also needs to be done to try and prevent hackers from doing nefarious things. Having hackers that can break into and figure out how the AI systems are working is something nobody wants to have happen. To date, the AI self-driving car field has been less careful about the security aspects than it should be. The focus has been on making an AI self-driving car. That’s good, but not good enough, in the sense that having an AI self-driving car vulnerable to security breaches is a very bad thing.

See my article about the importance of cyber security in AI self-driving cars:

You might be wondering whether this reverse engineering of cars is even legal. Can you really physically take apart a competitor’s car? Generally, yes, once you’ve bought the car, there’s nothing that legally stops you from physically taking it apart per se. If you try to copy their car parts or car design, you might then find yourself in violation of their IP. If you are merely studying it, that’s something hard to outlaw.

In the case of reverse engineering the software, that’s something that does tend to run afoul of the law. You might know or remember the Digital Millennium Copyright Act (DMCA) that was passed by the United States Congress in 1998. The DMCA was pretty much a push by the entertainment industry, which was worried about preventing unauthorized copying and dissemination of its copyrighted works. Firms put in place Technological Protection Measures (TPMs) to prevent hackers or others from doing reverse engineering. Circumvention of a TPM is considered a form of reverse engineering. This is considered generally illegal throughout the United States and has been adopted by much of the rest of the world as being considered illegal too.

There are narrow exceptions for law enforcement purposes, or for national security purposes, and for selective computer security research purposes, but otherwise it’s against the law to circumvent a TPM in order to reverse engineer software. There have been attempts to make the case that bypassing TPMs for certain kinds of systems should be purposely allowed, doing so for the good of society. Suppose an AI self-driving car has some hidden bugs, or vulnerabilities, or other malfunctions – wouldn’t you and society benefit by allowing “experts” to delve into the software and figure out how it works and identify those perhaps death-producing problems?

If you are thinking about doing a teardown of your own AI self-driving car, I’d advise against it. The odds are that you’ll end up with thousands of parts and have no idea what they do and why they are there. Also, if you are thinking you might put Humpty Dumpty back together again, I assure you that trying to put a torn-apart car back together is most likely futile. It’s not just a Rubik’s cube where you need to move the positions back into their proper places.

In any case, we’re definitely already seeing some amount of reverse engineering of AI self-driving cars, and it will continue and gain momentum as self-driving cars become more prevalent. You could say it’s a Darwinian kind of thing, in that the industry will find what works and what doesn’t work, and perhaps more quickly get toward what does work, having done so involuntarily by having their cars reverse engineered. Excuse me, I’ve got to get back to taking apart the nuts and bolts of that AI self-driving car I bought last week.

Copyright 2018 Dr. Lance Eliot

This content is originally posted on AI Trends.