The Special Competitive Studies Project (SCSP), in collaboration with the Johns Hopkins Applied Physics Laboratory (JHUAPL), today released a “Framework for Identifying Highly Consequential AI Use Cases.” The framework is a tool for identifying AI use cases, or classes of use, that will have significant beneficial or harmful impacts on society. Regulators can use the framework to evaluate the full context of potential sector-specific outcomes, which will inevitably consist of interrelated benefits and harms. That assessment will enable regulators to support transformative beneficial outcomes for society while mitigating the worst of the harms.
“We cannot, nor should we, regulate every AI use case,” said Rama G. Elluru, SCSP Senior Director for Society. “We need to balance regulation with innovation and look at the entire context of societal impacts. That requires tools that help identify AI uses that merit regulatory attention. Those regulatory efforts could include incentivizing an AI use, mitigating its harms, or even banning it outright. While this framework does not speak to the regulatory action that should be taken for any given AI use case, it helps regulators identify uses or classes of uses that may require their focus.”
The framework presents corresponding categories of harms and benefits, with specific harms and benefits within each; qualitative and quantitative means of determining the magnitude of those benefits and harms (e.g., probability and scope); and an ultimate determination of whether an AI use case is highly consequential. It is meant to provide a standardizable yet flexible and dynamic approach by which sector regulators can identify highly consequential AI uses, one that can evolve with the technology.
The framework can be applied when regulators foresee a new application for AI, when a new application for AI is developed or brought to a regulatory body, or when an existing AI system creates a new or newly discovered consequential impact. Regulators will also periodically reassess AI use in their sectors to determine whether the list of AI systems identified as highly consequential remains appropriate as context changes. The framework was informed by a series of roundtables led by SCSP and JHUAPL to solicit feedback from government experts and regulators, academics, civil society leaders, and industry experts.
“In the course of conducting this analysis, we reviewed and leveraged existing domestic and international frameworks that apply risk-based approaches to classify and advance trustworthy AI. It became clear that government and private sector entities trying to anticipate the outcomes of AI-enabled systems need this kind of framework, yet have very little available to help them undertake a meaningful evaluation that looks beyond the technical assessment. Our hope is that the framework leads to a registry of use cases that can inform industry and be shared with the public to highlight how cases are evaluated,” said Dr. Stephanie Tolbert, JHUAPL study lead.
For more information about the Framework for Identifying Highly Consequential AI Use Cases, please contact SCSP Senior Director of Communications and Public Affairs, Tara Rigler, at tmr@scsp.ai. For more information about SCSP, visit our website and subscribe for regular newsletter and podcast updates.