Biased bots: Human prejudices sneak into AI systems


Identifying and addressing possible bias in machine learning will be critically important as we increasingly turn to computers to process the natural language humans use, for instance in online text searches, image categorisation and automated translation.

“Questions about fairness and bias in machine learning are tremendously important for our society,” said Arvind Narayanan, assistant professor of computer science and member of the Center for Information Technology Policy (CITP) at Princeton University. “We have a situation where these artificial intelligence systems may be perpetuating historical patterns of bias that we might find socially unacceptable and which we might be trying to move away from.”

The paper, “Semantics derived automatically from language corpora contain human-like biases,” is published in Science. Its lead author is Aylin Caliskan, a postdoctoral research associate and a CITP fellow at Princeton.

Co-author Dr Joanna Bryson, of the University of Bath's Department of Computer Science and a CITP affiliate, said: "The most important thing about our research is what this means about semantics, about meaning. People don't usually think that implicit biases are a part of what a word means or how we use words, but our research shows they are. This is hugely important, because it tells us all kinds of things about how we use language, how we learn prejudice, how we learn language, how we evolved language. It also gives us some important insight into why our brains work the way they do and what that means about how we should build AI.

"The fact that humans don't always act on our implicit biases shows how important our explicit knowledge and beliefs are. We're able as a society to come together and negotiate new and better ways to be, and then act on those negotiations. Similarly, there are important uses for both implicit and explicit knowledge in AI. We can use implicit learning to automatically absorb information from the world and culture, but we can use explicit programming to ensure that AI acts in ways we consider acceptable, and to make sure that everyone can see and understand what rules AI is programmed to use. This last point, making sure we all understand what AI is doing and why, is called 'transparency' and is the main area of research of my group of PhD students here at Bath."
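The paper's central tool is the Word Embedding Association Test (WEAT), which measures bias as a difference in cosine-similarity associations between sets of words in a vector-space embedding. Below is a minimal sketch of that statistic; the random vectors stand in for real pretrained embeddings (the study used GloVe), so the word sets and numbers here are purely illustrative.

```python
# Minimal sketch of a WEAT-style association test, in the spirit of
# Caliskan et al. (2017). Real experiments use pretrained embeddings
# (e.g. GloVe); the toy vectors below are placeholders for illustration.
import numpy as np

def cosine(u, v):
    return np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))

def association(w, A, B):
    # s(w, A, B): how much more strongly word vector w associates
    # with attribute set A than with attribute set B.
    return np.mean([cosine(w, a) for a in A]) - np.mean([cosine(w, b) for b in B])

def weat_effect_size(X, Y, A, B):
    # Standardised difference in mean association between the two
    # target sets X and Y (analogous to Cohen's d).
    sx = [association(x, A, B) for x in X]
    sy = [association(y, A, B) for y in Y]
    return (np.mean(sx) - np.mean(sy)) / np.std(sx + sy)

rng = np.random.default_rng(0)
dim = 50
# Hypothetical embeddings: X/Y are target word sets (e.g. flower names
# vs insect names), A/B are attribute sets (e.g. pleasant vs unpleasant).
X = [rng.normal(size=dim) for _ in range(8)]
Y = [rng.normal(size=dim) for _ in range(8)]
A = [rng.normal(size=dim) for _ in range(8)]
B = [rng.normal(size=dim) for _ in range(8)]

print(f"WEAT effect size: {weat_effect_size(X, Y, A, B):.3f}")
```

On real embeddings, the paper reports large positive effect sizes for word sets that mirror documented human implicit associations, which is the sense in which the biases are "part of what a word means".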

Read the source article at the University of Bath.