CMU Researchers Win NSF-Amazon Fairness in AI Awards
Tuesday, February 16, 2021 - by Byron Spice

Three CMU research teams have received funding through the Program on Fairness in Artificial Intelligence, which the NSF sponsors with Amazon. (Photo courtesy of Markus Winkler, Unsplash.)

Three Carnegie Mellon University research teams have received funding through the Program on Fairness in Artificial Intelligence, which the National Science Foundation sponsors in partnership with Amazon. The program supports computational research focused on fairness in AI, with the goal of building trustworthy AI systems that can be deployed to tackle grand challenges facing society.

"There have been increasing concerns over biases in AI systems, for example computer vision algorithms working worse for Blacks than for other races, or ads for higher paying jobs only being shown to men," said Jason Hong, a professor in the Human-Computer Interaction Institute (HCII). "Machine learning researchers are developing new tools and techniques to improve fairness from a quantitative perspective, but there are still many blind spots that defy pure quantification."

The CMU projects address new methods for detecting bias, translating fairness goals into public policy, and increasing the diversity of people able to use systems that recognize human speech.

"Understanding how AI systems can be designed on principles of fairness, transparency and trustworthiness will advance the boundaries of AI applications," said Henry Kautz, director of the NSF's Division of Information and Intelligent Systems. "And it will help us build a more equitable society in which all citizens can be designers of these technologies as well as benefit from them."

The CMU projects selected as 2021 awardees are:

Organizing Crowd Audits To Detect Bias in Machine Learning. Led by Hong, researchers in the HCII seek to increase the diversity of viewpoints involved in identifying bias and unfairness in AI-enabled systems, in part by developing an audit system that uses crowd workers.

Fair AI in Public Policy — Achieving Fair Societal Outcomes in ML Applications to Education, Criminal Justice, and Health & Human Services. Led by Hoda Heidari, an assistant professor in the Machine Learning Department (MLD) and the Institute for Software Research, researchers in MLD and the Heinz College of Information Systems and Public Policy will help translate fairness goals in public policy into computationally tractable measures. They will focus on factors along the development life cycle, from data collection through evaluation of tools, to identify sources of unfair outcomes in systems related to education, child welfare and justice.

Quantifying and Mitigating Disparities in Language Technologies. Led by Graham Neubig, an associate professor in the Language Technologies Institute (LTI), researchers in the LTI, HCII and George Mason University will develop methods to improve the ability of computer systems to understand the language of a wider variety of people. They will address variations in dialect, vocabulary and speech mechanics that bedevil today's smart speakers, conversational agents and similar technologies.

"We are excited to see NSF select an incredibly talented group of researchers whose research efforts are informed by a multiplicity of perspectives," said Prem Natarajan, vice president in Amazon's Alexa unit. "As AI technologies become more prevalent in our daily lives, AI fairness is an increasingly important area of scientific endeavor."
For more information, contact: Byron Spice | 412-268-9068 | bspice@cs.cmu.edu