MIT’s AI Risk Assessment Database Debuts with 700 Threats

The list of potential risks associated with artificial intelligence continues to grow. “Global AI adoption is outpacing risk understanding,” warns the MIT Computer Science & Artificial Intelligence Laboratory (CSAIL), which has joined with FutureTech, MIT’s multidisciplinary research group, to compile the AI Risk Repository, a “living database” of more than 700 unique risks extracted from 43 source taxonomies. Organized by cause, classifying “how, when and why these risks occur,” the repository comprises seven risk domains (for example, “misinformation”) and 23 subdomains (such as “false or misleading information”).

While AI’s risks have been well documented, researchers at CSAIL and FutureTech have uncovered what they call “critical gaps”: roughly 30 percent of risks are overlooked by “even the most thorough individual framework.” Those overlooked risks are included in the MIT AI Risk Repository, which will be regularly maintained and updated, according to a CSAIL write-up.

Compiled with assistance from researchers at the University of Queensland, Future of Life Institute, KU Leuven and Harmony Intelligence, the MIT AI Risk Repository debuts at a time when census data indicates “a 47 percent rise in AI usage within U.S. industries, jumping from 3.7 percent to 5.45 percent between September 2023 and February 2024,” CSAIL reports.

Peter Slattery, incoming MIT FutureTech postdoc and the Repository’s project lead, tells VentureBeat that the project originated with an attempt to understand how organizations are responding to the risks associated with AI. Aiming to assemble a comprehensive overview to use as a checklist, the research team found the existing information incomplete.

The result: a scholarly deep dive across 43 existing taxonomies, drawing on academic databases of peer-reviewed articles, conference papers and reports, amounting to more than 17,000 records overall.

VentureBeat describes the repository as “a two-dimensional classification system”: entries are categorized first by cause (human or AI), intent (intentional or unintentional) and timing (pre-deployment or post-deployment), then sorted into seven risk domains, “including discrimination and toxicity, privacy and security, misinformation and malicious actors and misuse.”
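To make that two-dimensional scheme concrete, here is a minimal Python sketch of how a single repository entry could be modeled, assuming the causal dimensions (entity, intent, timing) and the domain/subdomain labels described above; the field names and example values are illustrative, not the repository’s actual schema.

from dataclasses import dataclass
from enum import Enum

# Causal taxonomy: who or what causes the risk, whether it is deliberate,
# and whether it arises before or after deployment. Values are illustrative.
class Entity(Enum):
    HUMAN = "human"
    AI = "ai"

class Intent(Enum):
    INTENTIONAL = "intentional"
    UNINTENTIONAL = "unintentional"

class Timing(Enum):
    PRE_DEPLOYMENT = "pre-deployment"
    POST_DEPLOYMENT = "post-deployment"

@dataclass
class RiskEntry:
    description: str   # risk as extracted from a source framework
    source: str        # citation for the originating document
    entity: Entity     # causal taxonomy, first dimension
    intent: Intent
    timing: Timing
    domain: str        # one of the seven domains, e.g. "Misinformation"
    subdomain: str     # one of the 23 subdomains

# Hypothetical example entry, for illustration only.
example = RiskEntry(
    description="AI system generates convincing but false claims",
    source="Hypothetical source framework (2024)",
    entity=Entity.AI,
    intent=Intent.UNINTENTIONAL,
    timing=Timing.POST_DEPLOYMENT,
    domain="Misinformation",
    subdomain="False or misleading information",
)

A flat record like this maps naturally onto the spreadsheet form the repository takes, with the causal and domain taxonomies serving as filterable columns.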

“More of the risks analyzed were attributed to AI systems (51 percent) than humans (34 percent) and presented as emerging after AI was deployed (65 percent) rather than during its development (10 percent),” the CSAIL article says.

ZDNet observes that “misinformation” is among the least-addressed AI threats, appearing in 44 percent of the source frameworks, with only “human-computer interaction” covered less often, at 41 percent. “AI system safety, failures, and limitations” appeared most often, in 76 percent of the frameworks.

While AI with control over critical infrastructure poses obvious risks, AI that performs tasks like scoring exams or reviewing immigration documents can pose more insidious risks, reports TechCrunch, noting that the repository can help regulators assess the risk landscape.
