Machines matter to people. But they "matter" only because they affect people. It's widely supposed that today's machines themselves cannot be "affected," because they have no feelings, no conscious thought, no sentience.
It might not always be that way.
While biology has held a near-monopoly on consciousness for hundreds of millions of years, many machine learning researchers believe that humans may eventually replicate self-awareness and inner experience (rough terminology we'll use as shorthand for the broad term "consciousness" throughout this article) in machines. And some expect it sooner than one might think.
Over the last three months, I interviewed more than 30 artificial intelligence researchers, nearly all of whom hold PhDs. I asked each of them why they believe, or don't believe, that consciousness can be replicated in machines.