Deep Learning: It’s Time for AI to Get Philosophical


By Catherine Stinson, postdoctoral scholar at the Rotman Institute of Philosophy, University of Western Ontario, and former machine-learning researcher

I wrote my first lines of code in 1992, in a high school computer science class. When the words “Hello world” appeared in acid green on the tiny screen of a boxy Macintosh computer, I was hooked. I remember thinking with exhilaration, “This thing will do exactly what I tell it to do!” and, only half-ironically, “Finally, someone understands me!” For a kid in the throes of puberty, used to being told what to do by adults of dubious authority, it was freeing to interact with something that hung on my every word – and let me be completely in charge.

For a lot of coders, the feeling of empowerment that comes from knowing exactly how a thing works – and having complete control over it – is what attracts them to the job. Artificial intelligence (AI) is producing some pretty nifty gadgets, from self-driving cars (in space!) to automated medical diagnoses. The product I’m most looking forward to is real-time translation of spoken language, so I’ll never again make gaffes such as telling a child I’ve just met that I’m their parent or announcing to a room full of people that I’m going to change my clothes in December.

But it’s starting to feel as though we’re losing control.

These days, most of my interactions with AI consist of shouting, “No, Siri! I said Paris, not bratwurst!” And when my computer does completely understand me, it no longer feels empowering. The targeted ads about early menopause and career counselling hit just a little too close to home, and my Fitbit seems like a creepy Santa Claus who knows when I am sleeping, knows when I’m awake and knows if I’ve been bad or good at sticking to my exercise regimen.

Algorithms tracking our every step and keystroke expose us to dangers much more serious than impulsively buying wrinkle cream. Increasingly polarized and radicalized political movements, leaked health data and the manipulation of elections using harvested Facebook profiles are among the documented outcomes of the mass deployment of AI. Something as seemingly innocent as sharing your jogging routes online can reveal military secrets. These cases are just the tip of the iceberg. Even our beloved Canadian Tire money is being repurposed as a surveillance tool for a machine-learning team.

For years, science-fiction writers have spelled out both the technological marvels and the doomsday scenarios that might result from intelligent technology that understands us perfectly and does exactly what we tell it to do. But only recently has the inevitability of tricorders, robocops and constant surveillance become obvious to the non-fan general public. Stories about AI now appear in the daily news. They seem evenly split between hyperbolically self-congratulatory pieces by people in the AI world, claiming that deep learning is poised to solve every problem from the housing crisis to the flu, and doom-and-gloom predictions from cultural commentators who say robots will soon enslave us all. Alexa’s creepy midnight cackling is just the latest warning sign.

Read the source article at the Globe and Mail.