IEEE Ethically Aligned Design outreach committee co-chair Maya Zuckerman presided over an NAB 2019 panel examining the thorny ethical issues surrounding artificial intelligence. She was joined by Augmented Leadership Institute chief executive Sari Stenfors and Corto chief executive Yves Bergquist, who leads AI and neuroscience research at USC’s Entertainment Technology Center. The panel’s consensus was that there is too much focus on AI creating dystopian outcomes; Stenfors, in fact, strongly believes AI can contribute to a utopian society.
Although AI is in its very early stages, Stenfors said she already envisions ways it can benefit society. She describes it as “a personal assistant that really cares about me and my society,” extending that care to notions of equality, accessibility, inclusion, and democracy. As an example, she pointed to the Aurora Project, Finland’s alternative to Amazon Echo, being created in collaboration with IEEE P7010 and a range of organizations.
“Most [similar] platforms are optimized for the company to make more money,” she said. “Aurora is all human-based,” and the user creates a digital twin based on the data he or she chooses to share. “Aurora is based on predictive life-event series,” said Stenfors, who added that, if the system predicts unemployment is looming, it can encourage the user to improve his or her job skills. A pilot project, Aurora will debut in a few months.
Zuckerman noted that the IEEE P7000 series of projects addresses “specific issues at [the] intersection of technological and ethical considerations.”
Bergquist stated that “when we talk about AI ethics, we’re talking about human ethics.” “Machines are terrible at abstraction and contextual reasoning,” he continued. “These are critical for ethics.” He nonetheless stressed, “AI should have diversity, not just with gender and race but with domains. We need intellectual diversity as well, so it’s not taken over by computer scientists.”
Bergquist explained that machines have become strong at declarative learning (facts), whereas humans are much better at procedural learning (how things work). The third type of knowledge is narrative. “Once you know how to do something, how do you motivate people to do that thing?” said Bergquist. “It’s important because, when you talk about AI ethics, you have to understand [that] the notion that [the machines] are going to tell us what to do and take over critical decisions is not true. AI ethics is really human ethics.”
Zuckerman added that a 295-page book from the IEEE Global Initiative on Ethics of Autonomous and Intelligent Systems is available for free download. “The three pillars are universal human values, political self-determination and data agency, and technical dependability,” she said. General principles of ethically aligned design, she added, have emerged for human rights, well-being, data agency, awareness of misuse, transparency, accountability, and competence.