Can AI Be Fair?

Experts in computer science, philosophy, law, and other fields gathered from across the country at a Caltech workshop to examine a question: can artificial intelligences, or machine-learning algorithms, be fair? Machine-learning programs typically learn from so-called training data and then build a model that makes predictions about new cases. If the training data contains preexisting biases, those biases will be reflected in the model's predictions; the goal is to remove racial and other biases from the models. Computer scientists at the workshop discussed addressing these issues with specific machine-learning techniques. One workshop activity involved reviewing studies of the fairness of machine-learning algorithms used to make predictions in college admissions, employment, bank lending, and criminal justice. Participants said the cross-disciplinary nature of the workshop was tremendously useful. [Caltech story]
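The point about biased training data can be illustrated with a minimal sketch. The scenario below is entirely hypothetical (not from the workshop): a synthetic "lending history" in which two groups of applicants are equally qualified, but one group was approved less often in the past. A naive model that simply learns the historical approval rate per group reproduces that disparity.

```python
import random

random.seed(0)

# Hypothetical biased historical data: both groups have the same
# qualification rate, but group B's qualified applicants were
# approved only some of the time.
def make_history(n=1000):
    data = []
    for _ in range(n):
        group = random.choice(["A", "B"])
        qualified = random.random() < 0.7          # same rate for both groups
        if group == "A":
            approved = qualified                   # fair decisions for A
        else:
            approved = qualified and random.random() < 0.6  # biased against B
        data.append((group, qualified, approved))
    return data

history = make_history()

# A naive "model": predict approval using each group's historical
# approval rate -- it learns the past bias along with everything else.
def group_rate(data, group):
    outcomes = [approved for g, q, approved in data if g == group]
    return sum(outcomes) / len(outcomes)

rate_a = group_rate(history, "A")
rate_b = group_rate(history, "B")
print(f"learned approval rate, group A: {rate_a:.2f}")
print(f"learned approval rate, group B: {rate_b:.2f}")
# The gap between the two rates is inherited bias: the groups were
# equally qualified, yet the model predicts unequal outcomes.
```

Real fairness-aware techniques go further than this sketch, for example by reweighting the data or constraining the model so that such gaps shrink, but the sketch shows why training on biased records is not neutral.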

CMS