A master algorithm may be the solution to machine learning


Machine learning is not new. We have seen it in action since the 1990s, when Amazon introduced a "recommended for you" section to show users more personalized results. When we search for something on Google, machine learning is behind those search results.

The "Friends" recommendations and suggested pages on Facebook, or the product recommendations on any e-commerce site, all depend on machine learning.

In other words, these websites know a lot about us. Every click or search we perform is recorded and gives these sites more information about us, but none of them knows us completely. Google knows what we are searching for, Amazon knows what we are looking to buy, Apple knows our music interests and Facebook knows a lot about our social behavior. But no single site knows our preferences and choices throughout the day. Each can only predict from our previous clicks, not from the big picture of who we are.

What is a master algorithm?

Now suppose there's an algorithm that knows what we're searching for on Google, what we're buying on Amazon and what we're listening to on Apple Music or watching on Netflix. It also knows about our recent statuses and shares on Facebook.

Now this algorithm knows a lot about us and has a better and more complete picture of us.

This powerful “master algorithm” is at the heart of work postulated by Pedro Domingos, author of The Master Algorithm: How the Quest for the Ultimate Learning Machine Will Remake Our World.

Machine learning has different schools of thought, and each looks at the problem from a different perspective. The symbolists focus on philosophy, logic and psychology, and view learning as the inverse of deduction. The connectionists focus on physics and neuroscience, and believe in reverse engineering the brain. The evolutionaries, as the name suggests, draw their conclusions from genetics and evolutionary biology, whereas the Bayesians focus on statistics and probabilistic inference. And the analogizers depend on extrapolating similarity judgments, focusing on psychology and mathematical optimization.
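To make one of these perspectives concrete, here is a minimal sketch of the Bayesian approach mentioned above: updating a belief with Bayes' rule. The scenario and all numbers are hypothetical, chosen only to illustrate the idea of probabilistic inference.

```python
def bayes_update(prior, likelihood, evidence):
    """Posterior P(H|E) = P(E|H) * P(H) / P(E)."""
    return likelihood * prior / evidence

# Hypothetical spam-filter example:
#   prior      P(spam)          = 0.2
#   likelihood P("free" | spam) = 0.6
#   evidence   P("free")        = 0.2 * 0.6 + 0.8 * 0.1 = 0.2
posterior = bayes_update(prior=0.2, likelihood=0.6, evidence=0.2)
print(round(posterior, 2))  # 0.6
```

Seeing the word "free" raises the belief that a message is spam from 20 percent to 60 percent; a Bayesian learner is, at its core, a machine for making many such updates at scale.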

Read the source article at TechCrunch.