Navigating the complexities of AI and crafting policies to govern it raises a tricky question: what risks must individuals, businesses, or governments weigh when deploying AI systems or setting guidelines for their use? The challenges are multifaceted. The risk to human safety is obvious when AI manages critical infrastructure, but less apparent, equally serious risks lurk in AI used to grade exams, sort job applications, or authenticate documents at border control. Each application introduces its own distinct yet serious risks.
Efforts to legislate AI risks, evident in the EU AI Act and California's SB 1047, have revealed how hard it is for policymakers to agree on which risks should be addressed. To assist in this endeavor, MIT researchers have introduced what they've termed an AI "risk repository": essentially a catalog of AI-related risks.
Peter Slattery, a key member of the MIT FutureTech team and project lead for the AI risk repository initiative, explained to TechCrunch, "Our goal was to develop a publicly available, detailed, expandable, and categorized inventory of AI risks that is easy to use, will remain relevant, and that others can adopt for their projects. It became clear such a repository was needed when we recognized the same need across our own project and among our peers."
The repository catalogs over 700 identified AI risks, categorized by cause (for example, intentionality), domain (such as discrimination), and subdomain (such as misinformation or cyber threats). It emerged from a need to bridge gaps and identify overlaps in AI safety research. According to Slattery, while other risk frameworks exist, each addresses only a subset of the risks outlined in the repository, potentially overlooking critical areas that could affect AI development, deployment, and policy formulation.
“The assumption that there’s a unanimous understanding of AI risks is misleading,” Slattery added. “Our analysis showed that the average framework mentions only 34% of the 23 risk subdomains we pinpointed, and nearly a quarter address less than 20%. No single framework covered all 23 subdomains, and the most comprehensive one reached only 70%. Given such fragmentation, it’s premature to believe there’s universal agreement on AI risks.”
To compile the repository, the MIT team collaborated with colleagues from the University of Queensland, the Future of Life Institute, KU Leuven, and the AI start-up Harmony Intelligence, reviewing thousands of documents on AI risk assessment from academic sources.
Their review revealed that certain risks, such as AI’s implications for privacy and security, were discussed in more than 70% of existing frameworks, while topics like misinformation came up less often. And while more than half of the frameworks addressed AI’s potential for discrimination and misrepresentation, only 12% discussed the “pollution of the information ecosystem”: the surge in AI-generated spam, for instance.
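To make these coverage figures concrete, here is a minimal sketch of how such an analysis might be run against the repository's taxonomy. The schema, field names, sample risks, and framework data below are illustrative assumptions, not the repository's actual format or contents.

```python
# Minimal sketch of the cause/domain/subdomain taxonomy and the two
# coverage statistics discussed above. All field names, sample risks,
# and framework data are illustrative assumptions, not the MIT
# repository's actual schema or dataset.
from dataclasses import dataclass

@dataclass(frozen=True)
class Risk:
    description: str
    cause: str      # e.g. "intentional" vs. "unintentional"
    domain: str     # e.g. "discrimination", "privacy & security"
    subdomain: str  # e.g. "misinformation", "cyber threats"

# A few hypothetical catalog entries (the real repository lists 700+).
repository = [
    Risk("Biased model skews hiring decisions",
         "unintentional", "discrimination", "unfair bias"),
    Risk("Generated text spreads false health claims",
         "intentional", "misinformation", "false information"),
    Risk("Prompt injection leaks private records",
         "intentional", "privacy & security", "cyber threats"),
]

all_subdomains = {r.subdomain for r in repository}

# Hypothetical frameworks, each mapped to the subdomains it covers.
frameworks = {
    "framework_a": {"unfair bias", "cyber threats"},
    "framework_b": {"cyber threats"},
}

# Statistic 1: what share of the catalog's subdomains does each
# framework cover? (The study reports a 34% average across 23
# subdomains; the numbers here are toy values.)
for name, covered in frameworks.items():
    pct = 100 * len(covered & all_subdomains) / len(all_subdomains)
    print(f"{name} covers {pct:.0f}% of subdomains")

# Statistic 2: what share of frameworks mentions each subdomain?
# (The study found privacy/security risks in over 70% of frameworks,
# but "pollution of the information ecosystem" in only 12%.)
for sub in sorted(all_subdomains):
    hits = sum(sub in covered for covered in frameworks.values())
    print(f"{sub}: mentioned by {100 * hits / len(frameworks):.0f}% of frameworks")
```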
“For researchers, policymakers, and risk-management professionals, this database offers a foundation for more targeted investigation,” Slattery remarked. “Previously, people either spent considerable time reviewing scattered literature to build a holistic picture, or relied on a handful of existing frameworks that might overlook crucial risks. Our comprehensive database aims to save time and improve oversight.”
Yet the question remains whether the resource will be used effectively. AI regulation around the world remains inconsistent, with governments pursuing varying objectives. A comprehensive AI risk repository like MIT’s might have informed earlier regulatory efforts had it existed, but its future impact is still up for debate.
Another pertinent consideration is whether recognizing AI risks is sufficient motivation to enact effective regulation. The limitations of many AI safety evaluations are well-documented, and a risk database alone may not remedy these issues.
Nevertheless, the MIT team, led by Neil Thompson of the FutureTech lab, is optimistic. As Thompson shared with TechCrunch, the next research phase will utilize the repository to assess the adequacy of current responses to AI risks.
“With this repository, we aim to identify where organizational responses may be lacking,” Thompson stated. “If there’s a disproportionate focus on certain risks at the expense of others that are equally critical, that’s an imbalance we intend to highlight and rectify.”