By Paula Parisi, June 23, 2022
Meta Platforms has agreed to change its advertising technology and pay a $115,054 fine to settle a Justice Department claim of race and gender discrimination by the algorithm used to display its housing ads. “Meta will — for the first time — change its ad delivery system to address algorithmic discrimination,” U.S. attorney for the Southern District of New York Damian Williams said in a statement. “But if Meta fails to demonstrate that it has sufficiently changed its delivery system to guard against algorithmic bias, this office will proceed with the litigation.” Continue reading Meta Platforms Will Adjust Ad Tech per Agreement with DOJ
By Debra Kaufman, May 18, 2021
The advent of deepfakes, which replace a person in a video or photo with the likeness of someone else, has sparked concern that easy-to-use machine learning tools for creating them are readily available to criminals and provocateurs. In response, Amazon, Facebook and Microsoft sponsored the Deepfake Detection Challenge, which resulted in several potential tools. But now, researchers at the University of Southern California have found that the datasets used to train some of these detection systems demonstrate racial and gender bias. Continue reading USC Researchers Find Bias in Deepfake Detectors’ Datasets
By Debra Kaufman, January 13, 2021
During a CES 2021 panel moderated by The Female Quotient chief executive Shelley Zalis, AI industry executives probed issues related to gender and racial bias in artificial intelligence. Google head of product inclusion Annie Jean-Baptiste, SureStart founder and chief executive Dr. Taniya Mishra and ResMed senior director of health economics and outcomes research Kimberly Sterling described the parameters of such bias. At Google, Jean-Baptiste noted that, “the most important thing we need to remember is that inclusion inputs lead to inclusion outputs.” Continue reading CES: Panel Examines Issues of Gender and Racial Bias in AI
By Debra Kaufman, June 17, 2020
After years of criticism from the American Civil Liberties Union (ACLU), Fight for the Future and groups of academics, Big Tech companies are finally taking another look at their facial recognition products. Microsoft president Brad Smith stated his company won’t sell facial recognition to the police until federal regulation is instituted. Amazon placed a one-year moratorium on police use of its Rekognition software, and IBM backed away entirely from facial recognition products, citing the potential for abuse. Yesterday we reported that Congress introduced a police reform bill that includes limits on the use of facial recognition software. Continue reading Big Tech Companies Pull Back on Facial Recognition Products