By Paula Parisi, September 28, 2022
LinkedIn’s experiments on users have drawn scrutiny from a new study that says the platform may have crossed a line into “social engineering.” The tests, conducted over five years from 2015 to 2019, involved changing the “People You May Know” algorithm to alternate between weak and strong contacts when recommending new connections. Affecting an estimated 20 million users, the tests were designed to collect insights to improve the Microsoft-owned platform’s performance, but may have impacted people’s career opportunities. The study was co-authored by researchers at LinkedIn, Harvard Business School, MIT and Stanford and appeared this month in Science. Continue reading LinkedIn Test Raises Ethics Questions Over Parsing Big Data
By Paula Parisi, August 18, 2022
Stability AI is in the first stage of release of Stable Diffusion, a text-to-image generator similar in functionality to OpenAI’s DALL-E 2, with one important distinction: this open-source newcomer lacks the filters that prevent the earlier system from creating images of public figures or content deemed excessively toxic. Last week the Stable Diffusion code was made available to just over a thousand researchers and the Los Altos-based startup anticipates a public release in the coming weeks. The unfettered unleashing of a powerful imaging system has stirred controversy in the AI community, raising ethical questions. Continue reading Stability AI Releases Stable Diffusion Text-to-Image Generator
By Paula Parisi, April 6, 2022
The European Union’s pending Artificial Intelligence Act — the world’s first comprehensive effort to regulate AI — is coming under scrutiny as it moves toward becoming law. The Act proposes unplugging AI systems deemed a risk to society. Critics say it draws too heavily on general consumer product safety rules, overlooking unique aspects of AI, and is too closely tied to EU market law. This could limit its applicability as a template for other regions evaluating AI legislation, undermining the EU’s desired first-mover status in the digital sphere. Continue reading EU’s Sweeping AI Act Takes Tough Stance on High Risk Use
By Paula Parisi, December 20, 2021
TikTok is tweaking its For You feed so users won’t be recommended too much of the same content. The move follows a global spate of regulatory hearings that took social media companies to task for promoting content that intentionally polarizes adults and potentially harms children. In addition to “diversifying recommendations” in order to “interrupt repetitive patterns” around topics that may provide negative reinforcement, like “loneliness or weight loss,” the popular ByteDance platform said it is also introducing a new feature that will allow users to avoid specific topics or creators. Continue reading TikTok Adjusts Feed to Curb Repetition, Offers Users Control
By Paula Parisi, December 6, 2021
Artificial intelligence and machine learning are technologies with lots of heat behind them, and some controversy. Organizations (including the Entertainment Technology Center at USC) are working to better understand the ramifications of AI and how to hold its users accountable. Among the criticisms is that AI disproportionately exhibits bias against minority groups — the so-called “discrimination feedback loop.” In November, the New York City Council became the first in the nation to pass a law requiring that the hiring and promotion algorithms of employers be subject to audit. Continue reading Guidelines, Accountability Considered as AI Becomes Priority
By Phil Lelyveld, May 21, 2021
“AI and Ethics” was the topic of ETC@USC’s March 30th Executive Coffee with… discussion, the third installment of the Spring 2021 series. WarnerMedia’s Renard Jenkins, vice president of content transmission and production, and Michael Zink, vice president of emerging and creative technologies, led the discussion with 12 graduate and undergraduate USC philosophy, cinema, engineering and innovation majors. They explored how diversity and bias impact AI development, and how AI is expected to impact entertainment experiences. Continue reading ETC Executive Coffee: Warner Executives Discuss AI, Ethics
By Debra Kaufman, February 8, 2021
Quantum computing experts, including executives and scientists, are calling for ethical guidelines, since the technology can advance human DNA manipulation and create new materials for use in war. “Whenever we have a new computing power, there is potential for benefit of humanity, [but] you can imagine ways that it would also hurt people,” said UC Santa Barbara physics professor John Martinis, a former Google chief scientist of quantum hardware. Quantum computing is moving to the forefront; Microsoft, for example, recently debuted a public preview of its Azure Quantum cloud-based platform. Continue reading Quantum Computing Experts Call for Conversation on Ethics
By Phil Lelyveld, December 14, 2020
Equinix executives led the fifth installment of ETC@USC’s Executive Coffee with… series. The topic of the October 22 discussion was “AI development and ethics: what are the intended and unintended consequences of the rollout?” Kaladhar Voruganti, VP of technology innovation and senior fellow, and Doron Hendel, senior manager of global business development, ecosystem development, partnerships and alliances, guided the conversation. Eleven graduate and undergraduate USC students, mostly computer science and data science majors, participated. Continue reading ETC Executive Coffee: Equinix Ponders Consequences of AI
By Debra Kaufman, March 12, 2020
Microsoft Research, with almost 50 engineers from a dozen technology companies, created a checklist for AI ethics intended to spur conversation on the topic and raise some “good tension” within organizations. To that end, the list, rather than asking “yes” or “no” questions, instead suggests that teams “define fairness criteria.” Participants were not identified by name, but many are in AI-related fields like computer vision, natural language processing and predictive analytics. The group hopes to inspire future efforts. Continue reading Microsoft Research Leads Team to Author AI Ethics Checklist
By Debra Kaufman, June 18, 2019
A growing number of venture capital and technology executives are pushing for a code of ethics for artificial intelligence startups, as well as tools to make algorithms’ decision-making process more transparent and best practices that include open, consistent communication. At Google, chief decision scientist Cassie Kozyrkov believes humans can fix AI problems. But the technology is still under intense scrutiny from the Department of Housing and Urban Development, the city of San Francisco and the European Commission, among others. Continue reading Tech Firms and Investors Develop AI Ethics, Best Practices
By Debra Kaufman, April 11, 2019
IEEE Ethically Aligned Design outreach committee co-chair Maya Zuckerman presided over an NAB 2019 panel examining the thorny issues surrounding ethics and artificial intelligence. Joining her were Augmented Leadership Institute chief executive Sari Stenfors and Corto chief executive Yves Bergquist, who leads AI and neuroscience research at USC’s Entertainment Technology Center. The consensus was that there is too much focus on AI creating dystopian outcomes; Stenfors, in fact, strongly believes AI can contribute to a utopian society. Continue reading Experts Examine Ethical Implications of Artificial Intelligence
By Debra Kaufman, March 28, 2019
Google is forming the Advanced Technology External Advisory Council (ATEAC), an external eight-member advisory group to “consider some of the most complex challenges [in AI],” such as facial recognition and fairness. The move comes about a year after Google issued a charter stating its AI principles, and months after Google said it would not provide “general-purpose facial recognition APIs” before the ATEAC addresses relevant policy issues. The advisory group will hold four meetings in 2019, starting in April. Continue reading Google Establishes Advisory Panel to Examine AI Fairness
By Debra Kaufman, March 13, 2019
Microsoft debuted its AI Business School, offering instructional videos and case studies to help business executives create and implement AI within their companies. The school, which is similar to such guides as Andrew Ng’s AI Transformation Playbook, grew out of three years of conversations with customers already implementing AI, as well as the company’s own experiences. Microsoft vice president of AI marketing and productization Mitra Azizirad said the guide will focus on strategy, culture, technology basics and AI ethics. Continue reading Microsoft Opens an AI Business School For Non-Tech Execs
By Don Levy, January 11, 2019
IBM chair, CEO and president Ginni Rometty made her second CES keynote appearance, focusing on artificial intelligence, big data, quantum computing, and closing the skills gap in computer science in a series of onstage conversations. Rometty drew a distinction between big data and deep data, explaining that while a tremendous amount of information is collected for specific analysis, a wealth of analytical and predictive opportunity remains untapped. As examples, she cited the analysis of fingernails as a means of predicting health issues and the use of weather data to better forecast mid-air turbulence. Continue reading Keynote: IBM Chief Uses Case Studies to Explain Deep Data
By Phil Lelyveld, January 8, 2019
Industry leaders gathered at CES to discuss the ethics of artificial intelligence. Moderator Kevin Kelly of BigBuzz Marketing Group opened by observing that Isaac Asimov’s Three Laws of Robotics protect humans from physical harm by robots, then asked how we protect ourselves from other types of technology-driven harm. AI experts Anna Bethke from Intel, David Hanson from Hanson Robotics, and Mina Hanna from the IEEE had a wide-ranging discussion on how to identify, shape and possibly regulate aspects of AI development that can have ethical and moral ramifications. Continue reading CES Panel: Industry Execs Discuss Ethical Implications of AI