AI Gets Edgy: How new chips and code are pushing AI from the cloud out to network nodes – part 2


Editor's note: This is part 2 of a series of articles that AI Trends is publishing based on our recent webinar on the same topic, presented by Mark Bunger, VP Research, Lux Research. If you missed the webinar, go to www.aitrends.com/webinar. The webinar series is being presented as part of AI World's webinar, publishing and research coverage of the enterprise AI marketplace. Part 1 appears here.

Let’s start by taking a look at some of those things.

First of all, I think it's important to understand why this is happening right now and how we've gotten to this point. The centralized architecture that we've needed for AI has basically been built on this model: we've needed big data – lots of data sets from enterprise computing, and later from e-commerce and websites. We created very, very large data sets that necessitated centralized processing and cloud computing. More and more now, though, we have intelligent real-time (IRT) devices that are still feeding that data. They're adding orders of magnitude more data to those big data sets and, at the same time, the devices themselves are looking more like "extremely thin" clients. Sometimes they aren't really much more than sensors-on-a-stick with a communications technology, wired or wireless, to get their data back to that central processing center.

We've needed those big processing centers, and experts in them, to manage the data sets so that they could train AI on those really compute-intensive tasks. That's been the outside-to-inside model we've faced so far. Today, though, that's changing, and it's due not just to the volume and velocity of data, but to all "three Vs" of big data. Volume and velocity are the two familiar ones – more data, faster, has been happening for a while – but the new thing now is variety. We need AI systems to have more diverse experiences – more things than we can teach them in a centralized location – and we need to manage them with a control data set and really get them out into the world. That's exactly what's happening with IRT devices.

Cars, robots and drones are just a few examples of IRT devices. Wearables and other emerging technologies are now starting to generate data at rates that give rise to latency issues, such that they may not be able to afford to send a data set back to some centralized process and ask for instructions. They've got to be able to make a decision in place – for example, to avoid a collision – and that's basically the demand pull. What we're now seeing is also the technology push that's allowing those bigger and bigger data sets to be processed closer and closer to the edge. So we're moving to more of an edge-centric model as this pendulum swings back out that way.
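To make that latency argument concrete, here is a minimal sketch in Python of the in-place decision an IRT device faces. The round-trip and inference times, and all the function names, are illustrative assumptions, not figures from the webinar.

```python
# Minimal sketch: why a mobile IRT device can't always "phone home".
# All numbers and names here are illustrative assumptions, not
# measurements from the webinar.

CLOUD_ROUND_TRIP_S = 0.150  # assumed network RTT plus server-side inference
LOCAL_INFERENCE_S = 0.020   # assumed on-device (edge) inference time

def reaction_budget_s(speed_mps: float, obstacle_distance_m: float) -> float:
    """Seconds until impact if nothing changes: the hard deadline."""
    return obstacle_distance_m / speed_mps

def choose_inference_path(speed_mps: float, obstacle_distance_m: float) -> str:
    """Pick where to run inference based on whether it meets the deadline."""
    budget = reaction_budget_s(speed_mps, obstacle_distance_m)
    if CLOUD_ROUND_TRIP_S < budget:
        return "cloud"  # the round trip fits inside the reaction window
    if LOCAL_INFERENCE_S < budget:
        return "edge"   # only an on-device decision can meet the deadline
    return "too_late"   # no inference path is fast enough

# A car at 85 mph (about 38 m/s) that first resolves an obstacle 4 m out
# has roughly 105 ms to act -- less than one assumed cloud round trip.
print(choose_inference_path(38.0, 4.0))  # -> "edge"
```

Under these assumed numbers, even a fast cloud round trip misses the reaction window at highway speed – which is exactly the demand pull for deciding at the edge.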

As we discussed earlier, in the past when this has happened, Motorola, Intel and other companies have really dominated certain architectures because they had the chip that was best at the job. This time, though, we might see a more fragmented outcome rather than a "winner takes all" one – the sort of worst-case scenario where, as a technology developer or a technology user, you don't have as many choices as you'd like. What's truly important right now is that we have a good understanding of who the candidates are. Who could be the winner? Who could be the second-place finisher, or the first loser? And who could be all the other companies that are just "also-rans"?

Let's move on to a really specific and very timely example of why AI needs to move to the edge: as I mentioned earlier, the world of IRT devices, especially mobile ones like cars and drones – not so much refrigerators and other things that are tethered. It's what we call the internet of things in motion, and it's a really messy place. Those devices live in a messy world, and they run into a lot of novel situations that they need to deal with.

Back on April 29, before the recent fatal crash, this happened. This is a Tesla, and as you can see, it has driven itself into the back of a flatbed trailer. What happened was that the owner of the vehicle "summoned" it. In other words, you tell your parked vehicle, "Come and get me, I'm here," and it will go and find you. According to the statement released by Tesla after this happened, "Summon specifically mentions that the vehicle may not detect certain obstacles that are too low or too high for car sensors to see." Perhaps that's why the car, in this case, didn't stop before impacting the high-riding trailer. The statement also goes on to say that the owner of the car didn't use the feature correctly, and things like that, but the point is that this was a difficult-to-predict instance – although one that Tesla had at least foreseen: objects that are too low or too high for the car's sensors might end up causing a crash like this.

It's ironic, then, that less than a week later, almost exactly the same thing happened, but with fatal consequences. May 7 was when the fatal crash occurred that we've all read about in the news in the last several weeks. What happened in that case was almost identical. The Tesla in this case, though, was driving down the road, maybe at high speed – people have said maybe up to 85 miles an hour – and there was a tractor trailer, like the one on April 29, with a very high clearance above the ground, that had made a left turn and was transverse across the road the Tesla was on. The vehicle basically saw straight underneath the trailer and didn't detect it as an obstacle, and the driver seems to have been distracted or didn't react. The statement from Tesla used almost the same phrasing as before: "The high ride height of the trailer combined with its positioning across the road and the extremely rare circumstances of the impact caused the Model S to pass under the trailer."
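To see how an obstacle can sit "too high" for a forward sensor, here is a minimal geometric sketch in Python. The mounting height, field of view, and trailer clearance are all illustrative assumptions about a generic bumper-level sensor, not a description of Tesla's actual sensor suite.

```python
import math

# Minimal sketch of the "too high for the sensors" failure mode.
# Mounting height, field of view, and the trailer clearance are all
# illustrative assumptions about a generic forward-looking sensor.

SENSOR_HEIGHT_M = 0.5    # assumed bumper-level mounting height
VERTICAL_FOV_DEG = 10.0  # assumed total vertical field of view

def max_visible_height_m(distance_m: float) -> float:
    """Top of the sensor's vertical coverage at a given distance."""
    half_fov = math.radians(VERTICAL_FOV_DEG / 2)
    return SENSOR_HEIGHT_M + distance_m * math.tan(half_fov)

def detects(obstacle_bottom_m: float, distance_m: float) -> bool:
    """True if the underside of the obstacle falls inside the vertical FOV."""
    return obstacle_bottom_m <= max_visible_height_m(distance_m)

# A trailer bed riding ~1.2 m above the road is inside the cone at 10 m
# but sits above it at 5 m: coverage reaches only ~0.94 m there, so the
# gap under the trailer reads as open road just when braking matters most.
print(detects(obstacle_bottom_m=1.2, distance_m=10.0))  # -> True
print(detects(obstacle_bottom_m=1.2, distance_m=5.0))   # -> False
```

The point of the sketch is that a high-riding obstacle can drop out of a sensor's vertical coverage in the final meters of approach, so the road underneath it looks clear.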

The impact actually sheared the top off the car, and the car kept driving. That might have been because of its speed, or because the car still didn't detect that it had been in a crash. Regardless, this is the same problem: the car encountered a really unusual situation. Most of us have probably never run into this, but if we did, we would know what to do. When we run into a novel situation, we're very good at figuring out what to do, because we can model a lot of things we've never seen before based on some kind of way of thinking about the world – one that computers and IRT devices still need to acquire.

by Mark Bunger, VP Research, Lux Research