AI Gets Edgy: How New Chips are Pushing AI from the Cloud to Network Nodes – part 3


Editor’s note: This is part 3 of a series of 5 articles that AI Trends is publishing based on our recent webinar on the same topic presented by Mark Bunger, VP Research, Lux Research. If you missed the webinar, go to www.aitrends.com/webinar.  The webinar series is being presented as part of AI World’s webinar, publishing and research coverage on the enterprise AI marketplace.  Part 1 appears here and Part 2 appears here.

A similar instance of this, a good bit more humorous, is the software Google uses for what’s called dreaming. Deep Dream is Google software that, when released, basically allowed people to take images and combine them in ways that could be artistic, although often very, very creepy too. Pretty early on, some people noticed that Deep Dream seemed to be very infatuated with eyes and dogs. There are an awful lot of eyes and dogs going on in Deep Dream’s dreaming. Why is that?

A little Reddit thread about this offered the answer: the renders it makes depend strongly on the statistics of its training data. Again, it comes down to the data set Deep Dream had used, and in particular, that data set had a large number of dog faces. Of its roughly 1,000 classes of pictures, several hundred were dogs.
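To make the mechanism concrete, here is a minimal sketch of the gradient-ascent idea behind Deep Dream, written against TensorFlow’s Keras API. This is an illustrative assumption, not Google’s actual Deep Dream code; the InceptionV3 backbone, layer name and step size are all stand-ins. The idea: pick a layer of an ImageNet-trained network, then repeatedly nudge the input image so that layer’s activations get stronger. Whatever the network learned to detect most readily gets amplified.

```python
import tensorflow as tf

# Sketch of Deep Dream's core loop (illustrative, not Google's code):
# amplify whatever features a chosen layer of an ImageNet-trained
# network responds to, by gradient ascent on the input image itself.
base = tf.keras.applications.InceptionV3(include_top=False, weights="imagenet")
model = tf.keras.Model(inputs=base.input,
                       outputs=base.get_layer("mixed3").output)  # layer choice is arbitrary

def dream_step(img, step_size=0.01):
    """One ascent step: make the chosen layer fire more strongly."""
    with tf.GradientTape() as tape:
        tape.watch(img)
        loss = tf.reduce_mean(model(img))        # how active is the layer?
    grad = tape.gradient(loss, img)
    grad /= tf.math.reduce_std(grad) + 1e-8      # normalize for stable steps
    return img + step_size * grad                # ascend, not descend

# Usage: img is a 1xHxWx3 float tensor scaled to [-1, 1], e.g.
#   for _ in range(100): img = dream_step(img)
```

Run long enough, an image of clouds starts sprouting the eyes and snouts the network expects to see, which is exactly the dog bias the Reddit thread pointed to.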

Because Deep Dream was trained on dogs, it thought a lot about dogs. Since the world is obviously full of a lot more than just dogs, more recently there’s been a trend of big developers making their AI tools open. Google has opened up Deep Dream and TensorFlow, which we’ll get back to in a little bit. IBM has opened up Watson. Facebook has a system called Torch. Microsoft has CNTK. Even Yahoo – which I think a lot of us are surprised to hear is still in business, but yes, they are – has CaffeOnSpark.

There’s a lot of this taking what seem to be the crown jewels of a company and making them free, putting them out there for anybody to use. Why would they give away their future? The obvious reason is very similar to the Tesla case: they need variety. They’ve got volume, they’ve got velocity, and they can model that all they want. What they need is all the messiness we have in the real world: not just dogs, but cats and hamsters and millions of other kinds of animals and objects. That will help them advance their AI efforts faster.

It’s a pretty astute strategy: get out there with those tools and make them available to more people. If it’s software, especially centralized software, you can do that at essentially no cost, or certainly no variable cost, so it’s a pretty good way to get that data. If you’re making hardware, it’s more difficult, because every chip and every board, everything made with atoms, has a cost associated with it. But that cost is declining pretty rapidly, and we see a lot of companies like Intel pursuing not just the software but also the hardware aspects of this to get more people gathering data for them.

In Intel’s case, they just went through this. I don’t know if it was a near-death experience, but it was certainly a huge hit to their business. They were in denial about mobile computing becoming a big future technology platform, and because of that they lost that opportunity to ARM, their big competitor. They don’t want to repeat that, so they’ve made big strides launching IoT-specific platforms like Basis, Curie and Galileo to provide chips for this market. Those are advanced IoT devices, but they’re still not artificially intelligent.

Intel has also been making investments in AI software and services companies: Extreme Insights for machine learning, Indisys for natural language processing and, most recently, Saffron for cognitive computing. This really shows where the AI opportunities are headed. As Intel says, “It also brings us to new consumer devices that need to see, sense and interpret complex information in real time. Big data can happen on small devices… (and) Saffron’s technology deployed on those small devices can make intelligent local analytics possible in the Internet of Things.”

Software is certainly going to be a part of that on Intel chips, but they also want to be prepared for the possibility that it could run on ARM chips. That’s why they recently bought Altera, a chip maker, for almost $17 billion. Altera makes field-programmable gate arrays (FPGAs), a class of chips you can basically reprogram after they’ve been deployed, so you don’t have to fix their function at the factory and hope they’ll always do that job well.

Much of the demand in this space comes from the Internet of Things (IoT), and that’s why Intel has to continue to support Altera’s ARM efforts. They know that ARM is going to continue to be a big part of mobile and IoT computing, but what they want to do is start integrating Altera with Atom, another Intel chip, and probably, down the road, with future versions of things like Basis, Curie and Galileo.

This is a strategic move, one that they see as not only giving them opportunities but getting them very close to their competitor. If you saw the news this week, ARM was just bought by SoftBank. SoftBank has made investments in robotics companies like Aldebaran and is keenly aware of the opportunities in IoT, but it also has a significantly different business model and is now starting to look like a pretty big conglomerate, especially with the ARM acquisition. This is clearly a space to watch.

Another company seeing this opportunity, not surprisingly, is Google. They’ve done a lot in the software space, but they’re also starting to get into the hardware side of it too. Google developed a chip called the Tensor Processing Unit (TPU), which is tied to the TensorFlow software I referenced earlier. Here again you’ve got a chip, but in this case it’s designed not for IoT devices but for faster processing of machine learning applications. According to Google’s own pronouncements, it’s like jumping three generations ahead on Moore’s Law, doing today what people had earlier seen as years out on the chip roadmap.
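For a sense of the workload a TPU is built for, here is a toy TensorFlow snippet; it is purely illustrative, not TPU-specific code, and the shapes are made up. The heart of most machine learning models is dense matrix math like this, and because TensorFlow describes it as a graph of tensor operations, the same program can be scheduled onto a CPU, a GPU or a TPU without rewriting the model.

```python
import tensorflow as tf

# The kind of operation a TPU accelerates: big batches of
# multiply-accumulate work, here one dense layer over 256 inputs.
x = tf.random.normal([256, 1024])   # batch of 256 feature vectors
w = tf.random.normal([1024, 10])    # weights of a 10-way classifier
b = tf.zeros([10])

logits = tf.matmul(x, w) + b        # matrix multiply plus bias
probs = tf.nn.softmax(logits)       # per-class probabilities
print(probs.shape)                  # (256, 10)
```

Hardware that does nothing but churn through tf.matmul-style operations at low precision is where the claimed generational leap comes from.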

by Mark Bunger, VP Research, Lux Research