AI’s Regulatory Framework Begins To Take Shape – And None Too Soon

The newly released Artificial Intelligence Commission Report (“Report”) from the U.S. Chamber of Commerce provides one of the first substantive templates for a regulatory framework for artificial intelligence. It’s an important step, given that the pace of AI roll-out is far outpacing the development of a legal framework to govern investment in, oversight of, and implementation of the technology.

Thus the Report may prove an important resource for corporate leaders who intend to invest in AI technology, want guidance on their companies’ use of AI, or seek to influence how the technology develops.

Generally speaking, the Report identifies workforce preparation, global competitiveness, and national security as top priorities policymakers must address in order to promote the responsible adoption and use of AI and to establish a risk-based regulatory framework.

The Report has been released in the midst of what The Wall Street Journal described as “AI’s breakthrough moment.” The use of AI is growing rapidly across business, consumer and governmental sectors, with an expectation that over the next 10-20 years its application will be ubiquitous.

Indeed, the last six to nine months have seen an explosion of attention to AI opportunities. This includes the exponential growth in AI systems generally, and the development of new AI features and generative technology in particular, such as the new AI system GPT-4. Notably for corporate leaders, the expected use of AI technology offers significant benefits but also presents material risks for society in general, as well as for the economy and national security.

Surprisingly, federal and state governments have to date been hesitant to develop any meaningful legislative or regulatory proposals for the oversight of AI, including rules intended to address AI’s potential dangers or otherwise protect individuals. Similarly, neither the courts nor leading policy organizations have yet articulated specific standards by which corporate boards may exercise oversight of their company’s acquisition and implementation of AI tools. Other than the October 2022 release of the White House’s “AI Bill of Rights,” there had been little substantive movement in this area prior to the Report’s release.

Such hesitation could be due to a number of factors, including the need to evaluate additional operating experience with AI tools, industry resistance, or simply an unfamiliarity with the technology. Yet corporate leaders should recognize the significant risks associated with further delay. As The New York Times noted, “[B]y failing to establish such guardrails, policymakers are creating the conditions for a race to the bottom in irresponsible AI”.

Into this perceived void comes the Report, with its proposal for a risk-based regulatory framework intended to allow the “responsible and ethical deployment” of this transformational technology. This framework is grounded in the perspective that AI regulation should be technology-neutral and focus on the applications and outcomes of AI, not the technologies themselves. Development of a one-size-fits-all regulatory framework is to be avoided.

From an overarching perspective, the Report proposes that risk classification be evaluated on the basis of impact to the individual, rather than through broad, predefined categories. Particular focus would fall on consequential decisions with the potential to infringe an individual’s legal rights, such as those affecting access, without harmful discrimination, to housing, education, employment, health care, physical safety and freedom, and other basic goods and services.

More specifically, the Report calls for regulation that would classify AI uses into three categories: (1) low-to-medium risk, (2) high risk, and (3) unacceptable risk. The low-to-medium category would encompass AI applications that pose negligible risks to matters such as privacy, health, safety, or fundamental rights. The “high risk” category would likely encompass uses that implicate legal rights, safety, freedom, or access to housing, education, employment, and health care, with the ultimate classification turning on the impact on the individual and affected communities.

Within this construct, stricter legal protections and transparency requirements would focus on high-risk areas, while lower-risk uses of AI could be addressed through “soft law and industry best practices”.

Whether an AI regulatory scheme ultimately follows the Report’s approach or some other model, corporate strategies should recognize not only the eventuality of regulation, but also the risks associated with its current absence. That remains true even though developing policies to enforce existing laws, and new laws addressing “responsible AI and its ethical deployment”, will be a top priority for the current and future administrations, as well as for Congress.

As companies increasingly pursue AI opportunities, they’re well advised to draw on the basic messages of the Report in developing internal guidelines for AI use, in order to fill the legal, ethical and safety gaps that may exist before relevant regulation or other guidance arrives. General counsel and compliance officers can team with technology executives to lead this effort, guided by the risk-based framework proposed in the Report.

Given the range of possible AI risks that currently present themselves, and the absence of regulation to address those risks, there’s no real upside to waiting on government to act first. There is upside, however, to closely monitoring developments in the formation of AI oversight guidelines and ultimately regulation.

Michael wishes to acknowledge the assistance of his partners Jennifer Mikulina and Jed Gordon in the preparation of this post.
