AI Regulatory Efforts Creep Forward While Board Oversight Lags


New efforts to regulate AI reflect the understandable tension between the urgency to implement new technology and the public’s interest in seeing it implemented responsibly. All the while, corporate boards are struggling to develop standards for overseeing their companies’ use of AI.

This tension has been exacerbated by the extraordinary growth and development of AI applications like ChatGPT, and parallel concerns not only about the trustworthiness of the technology, but also with its potential to be used to discriminate or spread harmful information.

The tension has been sharpened by recent debate among business and technology leaders over the role of government in regulating AI development and implementation. For example, a group of scientists, experts and technology leaders (including Elon Musk) recently sent an “open letter” calling on all AI labs to institute a six-month pause in the training of AI systems more powerful than GPT-4 (and for government to institute a moratorium in the absence of a voluntary pause).

The specific concern of this group is the lack of planning and management in AI development, prompted in part by “an out-of-control race to develop and deploy even more powerful” and potentially uncontrollable digital minds.

Indeed, a recent column in The New York Times addressed the concerns associated with the impact of competition in the technology industry on safety and trustworthiness. The column suggested that many of those directly involved in AI development “are desperate to be regulated, even if it slows them down.”

On the other hand, some business leaders are expressing a separate concern with the competitive and national security impact associated with government-enforced slowdowns in the advancement of AI technology.

Former Google CEO Eric Schmidt, who chaired a congressional committee on the national security and defense aspects of AI, recently advocated to the House Oversight Committee that government regulation should not unintentionally limit America’s technological advantages. “Let American ingenuity, American scientists, the American government, American corporations invent this future, and we’ll get something pretty close to what we want. [A]nd then you guys can work on the edges, where you have misuse.”

To that point, the newly released 2023 version of the Edelman Trust Barometer concludes that business is the only institution seen as competent and ethical in the context of a deeply and dangerously polarized society. Survey results further suggest that CEOs are obligated to improve economic optimism and hold divisive forces accountable; that they are best situated to respond to problems that government has found difficult to address.

This debate is proceeding while other countries, such as the United Kingdom and more recently China, have pursued varying approaches to government regulation of AI, ranging from supervising how AI systems are used, to national security and preservation of state interests.

The Biden Administration initially entered the debate with its “Blueprint for an AI Bill of Rights,” identifying five principles intended to guide the design, use and deployment of automated systems to protect American citizens in the new era of artificial intelligence. The “Blueprint” is described as a guide to protect against the unintended consequences of automated systems.

Yet in a first step towards possibly more substantive AI regulation, an agency of the U.S. Commerce Department recently issued a formal public request for comment on what it called accountability measures. This includes whether potentially risky new AI models should go through a certification process before they are released. The request for comment seeks feedback on the types of policies that can “support the development of AI audits, assessments, certifications and other mechanisms to create earned trust in AI systems.”

The Commerce Department’s request compares the need for AI accountability to the need for financial accountability: “[M]uch as financial audits create trust in the accuracy of a business’ financial statements, so for AI, such mechanisms can help provide assurance that an AI system is trustworthy.” In other words, “AI system accountability will need policy and governance to develop, just as financial accountability required policy and governance to develop, as well.”

But nothing similar in terms of guidance (or regulation) is on the horizon to offer direction to corporate boards on their oversight of AI implementation. Perhaps something substantive will be forthcoming from governance policy organizations such as the Business Roundtable or the Conference Board, but boards can’t hold their breath in anticipation. That’s especially so as concerns with trustworthiness begin to surface while AI use increases in particular industry sectors (such as health care).

To address this “limbo” status, boards could at least take the following basic steps to support their oversight: (i) receive periodic briefing on the application of AI in their industry sector and within their organization; (ii) receive regular reports from the organization’s senior AI executive; (iii) understand the levels of trust/risk associated with the organization’s specific AI applications; (iv) delegate primary oversight responsibility to a committee, with knowledgeable members, that will report regularly to the board; and (v) confirm the existence of an information reporting system to that committee (and ultimately to the board) on key AI issues and risks.

The debate on the proper respective roles of government and industry oversight of automated systems such as AI is approaching full-throated status—but it is nowhere close to reaching a consensus. It’s an awkward position in which private industry finds itself: needing to apply AI systems for competitive and mission reasons without specific guidance on how to monitor such applications.

There is now an urgent call for corporate leadership to develop much-needed standards, and it is a call that consumers appear to endorse.
