Could AI Startup Geometric Intelligence Have Prevented Tesla’s Fatal Crash?


By Mark Bünger, Lux Research

Artificial intelligence (AI) is a stealthier field than most, but startup Geometric Intelligence is quiet even by those standards. So it was fortunate that we recently heard directly from its founder and CEO, Gary Marcus, and got a few more details. Gary’s background is not in computer science but in cognitive psychology, studying questions such as how children acquire language and music skills. He studied under Steven Pinker while both were at MIT, has written several books on the topic (including The Algebraic Mind: Integrating Connectionism and Cognitive Science, which gives some clues to his views on machine learning), and is running Geometric while on leave from his lab at New York University. Gary said that Geometric’s approach to machine learning is inspired by this kind of human learning, in which we can intuit or guess at a pattern even with relatively few examples to go on. “Our first goal is to develop a drop-in replacement for deep learning, which requires large datasets. Why is data efficiency important? Because in some fields, like human language, there is an effectively infinite amount of data to process, and it grows and evolves every day.” In other words, Geometric is trying to make machines that learn more efficiently from less data.

While Gary has certainly hit upon an interesting gap in current approaches to neural networks and deep learning, there is little to recommend the one-year-old, 16-person company besides Gary’s cognitive science pedigree and the machine learning pedigree of his co-founder, Cambridge professor Zoubin Ghahramani. So why is it worth paying attention to? Many previous breakthroughs in AI have come from brain researchers: the Salk Institute’s Terry Sejnowski and Tony Bell (now at the University of California, Berkeley) applied their understanding of the brain’s organization to 1980s AI tools like NETtalk, and to problems like dimensionality reduction, which drove the development of Independent Component Analysis (ICA) algorithms. The brain’s use of chemical neurotransmitters like dopamine and serotonin in learning is mimicked in AI, and spiking is a key approach in artificial neural networks [ANNs; see the report “Defining Intelligence – An Overview of Artificial Intelligence, Beyond the Hype and Into the Methods and Applications” (client registration required)], used by academic tools like Brian and robotics startups like Brain Corporation and Neural Ideas (client registration required).

While that might sound abstract, the recent news that a Tesla was involved in a fatal crash while in Autopilot mode illustrates precisely the kind of situation where conventional machine learning may never work, but Geometric’s approach might: a very unusual traffic situation, in which a tall tractor-trailer made a left turn in front of the vehicle. Because of the trailer’s height off the road between its wheels, the Tesla perceived the road ahead to be open and did not stop (the driver was rumored to have been watching Harry Potter rather than paying attention to the road). Geometric’s approach of learning from small datasets and inference – as humans do – may be key to handling and surviving the many exceptional situations we and our machines face every day.