By Debra Kaufman, January 10, 2024
Introduced by Consumer Technology Association VP of Regulatory Affairs David Grossman, FDA Commissioner Robert Califf took the CES stage with interviewer Lisa Dwyer, a partner at international law firm King & Spalding. Califf noted the monumental changes in technology since his first stint at the Food and Drug Administration in 2015. “The changes are so dramatic, it’s hard to characterize them,” he said. “We’re moving into a different world.” He expressed excitement about “the hundreds of products with AI” that can bring so much good to the market, but also noted the potential harms. Continue reading CES: FDA Commissioner Robert Califf on Bias in Healthcare
By Paula Parisi, June 23, 2022
Meta Platforms has agreed to change its advertising technology and pay a $115,054 fine to settle a Justice Department claim of race and gender discrimination by the algorithm used to display its housing ads. “Meta will — for the first time — change its ad delivery system to address algorithmic discrimination,” U.S. attorney for the Southern District of New York Damian Williams said in a statement. “But if Meta fails to demonstrate that it has sufficiently changed its delivery system to guard against algorithmic bias, this office will proceed with the litigation.” Continue reading Meta Platforms Will Adjust Ad Tech per Agreement with DOJ
By Bella Chen, December 14, 2021
Top corporations have agreed to improve their AI-driven hiring programs. As artificial intelligence has been applied to the often arduous process of screening candidates, reports indicate the software may be undermining workforce diversity. To address the issue, a group of companies is designing algorithmic safeguards for AI screening software. The companies hope the upgrades will ultimately improve decisions about hiring, promotion and compensation, and help foster a more diverse workforce. Continue reading Companies Join Forces to Minimize Algorithmic Bias in Hiring
By Paula Parisi, December 6, 2021
Artificial intelligence and machine learning are generating plenty of excitement, and no shortage of controversy. Organizations (including the Entertainment Technology Center at USC) are working to better understand the ramifications of AI and how to hold its users accountable. Among the criticisms is that AI disproportionately exhibits bias against minority groups, the so-called “discrimination feedback loop.” In November, the New York City Council became the first in the nation to pass a law requiring that employers’ hiring and promotion algorithms be subject to audit. Continue reading Guidelines, Accountability Considered as AI Becomes Priority
By Paula Parisi, December 1, 2021
Twitter is tweaking its Birdwatch crowdsourced fact-check feature, adding aliases so contributors can conceal their identities when annotating someone’s tweet. The company says its goal in having people contribute anonymously is “keeping focus on the content of notes rather than who’s writing them,” reducing bias and tempering polarization. To ensure aliases don’t overshadow accountability, all Birdwatch accounts now have profile pages that aggregate past contributions and the ratings those contributions received from other Birdwatchers, accruing credibility to contributors whose notes and ratings are consistently found helpful by others. Continue reading Twitter Formalizes Its Birdwatch Program with Aliases, Profiles
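As a purely hypothetical illustration of the idea that a contributor’s credibility can be built from how helpful other Birdwatchers find their notes, here is a minimal Python sketch that aggregates ratings into a per-alias helpfulness ratio. The aliases and data are invented, and this is not Twitter’s actual Birdwatch scoring, which the item does not describe.

```python
# Toy illustration only (hypothetical, not Twitter's actual Birdwatch scoring):
# aggregate the "helpful" ratings a contributor's notes receive into a simple
# per-alias helpfulness ratio, the kind of signal a profile page might surface.
from collections import defaultdict

# Each rating: (contributor_alias, rated_helpful) -- sample data, invented here.
ratings = [
    ("quick-kingfisher", True),
    ("quick-kingfisher", True),
    ("quick-kingfisher", False),
    ("mellow-magpie", True),
]

tallies = defaultdict(lambda: [0, 0])  # alias -> [helpful_count, total_count]
for alias, helpful in ratings:
    tallies[alias][0] += int(helpful)
    tallies[alias][1] += 1

for alias, (helpful_count, total_count) in tallies.items():
    ratio = helpful_count / total_count
    print(f"{alias}: {ratio:.0%} of ratings marked this contributor's notes helpful")
```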
By Debra Kaufman, August 20, 2021
Google revealed its work on a new AI-enabled Internet search tool dubbed MUM (Multitask Unified Model), which can “read” the nuances of human language across the globe. The company says that users will be able to find information more readily and ask abstract questions. MUM is not yet publicly available, but Google has already applied it to a COVID-19-related project. Vice president of search Pandu Nayak and a colleague designed an “experience” that gave in-depth information on vaccines when users searched for them. Continue reading Google AI-Enabled MUM Aims to Reinvent, Empower Search
By Debra Kaufman, August 17, 2021
OpenAI’s Codex, an AI system that translates natural language into code, was released via an API in private beta. Trained on billions of lines of public code, Codex can turn plain-English commands into code in more than a dozen programming languages. It also powers GitHub’s Copilot service, which suggests whole lines of code within Visual Studio Code and other development environments. OpenAI explained that Codex will be offered for free during an “initial period,” and invites “businesses and developers to build on top of it through the API.”
Continue reading OpenAI Debuts Tool to Translate Natural Language into Code
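For a sense of what “building on top of it through the API” might look like, here is a minimal Python sketch that sends a plain-English instruction to a completion-style endpoint and prints the generated code. The engine name and the use of the legacy openai client are assumptions for illustration; the private-beta interface may differ.

```python
# Minimal sketch of natural-language-to-code via a completion-style API.
# Assumptions: the legacy openai Python client and the "davinci-codex" engine
# name are illustrative placeholders; the actual private-beta details may vary.
import openai

openai.api_key = "YOUR_API_KEY"  # placeholder credential

prompt = (
    "# Python 3\n"
    "# Write a function that returns the n-th Fibonacci number.\n"
)

response = openai.Completion.create(
    engine="davinci-codex",  # assumed engine name
    prompt=prompt,
    max_tokens=128,
    temperature=0,           # deterministic output suits code generation
)

print(response.choices[0].text)  # the model's suggested code
```

Copilot layers a similar capability into the editor itself, surfacing suggestions inline rather than through explicit API calls.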
By Phil Lelyveld, May 21, 2021
“AI and Ethics” was the topic of ETC@USC’s March 30th Executive Coffee with… discussion, the third installment of the Spring 2021 series. WarnerMedia’s Renard Jenkins, vice president of content transmission and production, and Michael Zink, vice president of emerging and creative technologies, led the discussion with 12 graduate and undergraduate USC philosophy, cinema, engineering and innovation majors. They explored how diversity and bias affect AI development, and how AI is expected to shape entertainment experiences. Continue reading ETC Executive Coffee: Warner Executives Discuss AI, Ethics
By Debra Kaufman, January 13, 2021
During a CES 2021 panel moderated by The Female Quotient chief executive Shelley Zalis, AI industry executives probed issues related to gender and racial bias in artificial intelligence. Google head of product inclusion Annie Jean-Baptiste, SureStart founder and chief executive Dr. Taniya Mishra and ResMed senior director of health economics and outcomes research Kimberly Sterling described the parameters of such bias. At Google, Jean-Baptiste noted that “the most important thing we need to remember is that inclusion inputs lead to inclusion outputs.” Continue reading CES: Panel Examines Issues of Gender and Racial Bias in AI
By Debra Kaufman, September 25, 2020
The Justice Department sent Congress draft legislation to weaken Section 230 of the Communications Decency Act, leaving Facebook, YouTube and other social media platforms vulnerable to legal action for content posted by users. The proposed changes would create liability for platforms that allow “known criminal content” to remain once they are aware of it, and would also strip the platforms’ protection against some civil suits. President Trump has claimed that social media companies are biased against conservatives. Continue reading Proposed Legislation Would Weaken Shields for Social Media
By Debra Kaufman, September 24, 2020
Facebook has upped the ante in its showdown with European regulators, stating that an unfavorable decision by Ireland’s Data Protection Commission (DPC) would leave the company no choice but to leave the region. Facebook Ireland head of data protection and associate general counsel Yvonne Cunnane was referring to the DPC’s preliminary order to stop the transfer of its European users’ data to servers in the U.S., an order driven by fears of government surveillance. In response, Facebook filed a lawsuit challenging the DPC’s ban. Continue reading Facebook Pushes Back Against Regulators on Data Transfer
By Debra Kaufman, July 10, 2020
Facebook commissioned an audit, and civil rights attorney Laura Murphy, working with attorneys from Relman Colfax, delivered an 89-page report that praised the company for adding rules against voter suppression and creating a team to study algorithmic bias. But it also excoriated Facebook for “vexing and heartbreaking decisions [it] has made that represent significant setbacks for civil rights.” Meanwhile, Facebook is still working to address misinformation on its platform. It recently removed accounts belonging to Roger Stone, which were linked to fake accounts active around the 2016 presidential election. Continue reading Facebook Audit Finds Company’s Civil Rights Efforts Wanting
By Debra Kaufman, July 6, 2020
The Association for Computing Machinery (ACM) U.S. Technology Policy Committee (USTPC) issued a statement on the use of facial recognition “as applied by government and the private sector,” concluding that, “when rigorously evaluated, the technology too often produces results demonstrating clear bias based on ethnic, racial, gender, and other human characteristics recognizable by computer systems.” ACM, which has 100,000 members worldwide, urged legislators to suspend the technology’s use by government and business entities. Continue reading ACM Calls for Temporary Ban of Facial Recognition Systems
By Debra Kaufman, April 3, 2020
In Washington state, Governor Jay Inslee has signed a Microsoft-backed law regulating facial recognition that could serve as a model for other U.S. states. The law allows government agencies to use facial recognition but bars them from using it for broad surveillance or for tracking innocent people. It is more permissive than the outright bans adopted by at least seven U.S. cities, which have blocked government use of the technology over fears of privacy violations and bias, but stricter than the rules in states with no such laws. Continue reading Washington Inks Facial Recognition Law Backed by Microsoft
By Debra Kaufman, January 8, 2020
When your smart home takes stock of who’s there before turning the heat on to their favored temperature, that’s anticipatory technology. CNET editor-at-large Brian Cooley and Lindsey Turrentine, senior vice president of content strategy at CBS Interactive Tech Sites, led a CES discussion on how data including location, human behavior, facial recognition and object recognition can help smart homes and smart devices anticipate human needs. “Some things will get better,” said Cooley. “And others might be unnerving.” Continue reading CES 2020: Smart Devices Enter an Anticipatory Tech World