
OpenAI Chief Sam Altman Sends A Message To Corporate Leaders On AI Risk


Corporate leaders may soon view OpenAI chief executive Sam Altman’s testimony Tuesday at a Senate hearing on artificial intelligence as an inflection point in the rapid, and somewhat unconstrained, commercial development of AI.

At one level, the May 16 hearing may represent the beginning of what will likely be a long, but broadly bipartisan, process of regulating the use of AI while preserving its considerable promise. Based on the exchanges at the hearing, and on other recent developments, a regulatory roadmap is beginning to coalesce that should be instructive to corporate leaders as they plan their AI strategies.

At another level, the hearing provided a stark reminder of the dangers inherent in the implementation of AI, and of the law’s basic expectations of companies engaged in its use. For corporate leaders understandably concerned with risk mitigation, it is a sobering and very public reminder of the pitfalls that lie ahead.

But what was unique about the hearing is that both of these messages were delivered most forcefully by Altman himself, who had been invited to testify along with several other witnesses. Altman’s seemingly genuine and responsive testimony makes his message particularly credible for corporate leaders contemplating the scope of potential AI regulation and possible voluntary industry cooperation.

In that regard, Altman proposed a collaboration between industry and government to establish a federal regulatory structure for AI. In his view, this structure would require AI companies, “especially those working on the most powerful models, [to] adhere to an appropriate set of safety requirements, including internal and external testing prior to release and publication of evaluation results.”

This would entail federal-level licensing and testing requirements for the development and release of AI models above a given capability threshold, together with incentives for full compliance with those requirements.

Altman also urged that such a regulatory structure be flexible enough to develop and regularly update the appropriate safety standards, evaluation requirements, disclosure practices, and external validation mechanisms for AI systems that would be subject to license or registration.

Altman further proposed that users should have the option to exclude their data from being used to train AI services. Other witnesses and senators went beyond Altman’s testimony in proposing additional limits, including consumer disclosure protections and ownership rights in the material created by AI models that have been trained on copyrighted data.

The collaborative approach proposed by Altman appealed to committee members, with Sen. Richard Blumenthal (D-Conn.) encouraging the AI industry to pursue voluntary action on AI safety as opposed to waiting for Congressional action.

But Altman’s other contribution to corporate strategies was his recognition of the underlying risks that make federal regulation of AI compelling, a recognition made all the more sobering by the fact that he is the CEO of the company that created AI tools such as ChatGPT.

Somewhat chillingly, he observed that, “my worst fears are that we cause significant, we, the field, the technology, the industry cause significant harm to the world. I think that could happen in a lot of different ways…I think if this technology goes wrong, it can go quite wrong. And we want to be vocal about that. We want to work with the government to prevent that from happening but we try to be very clear-eyed about what the downside case is and the work we have to do to mitigate that.”

Broad classifications of potential harm cited by Altman included privacy, child safety, accuracy, cybersecurity, disinformation and economic/job loss. The potential risk posed by AI to the 2024 election process was also discussed at length.

The hearing also raised the possibility that regulation might incorporate levels of corporate accountability for risk and harm, together with an expectation of voluntary self-regulation. Altman was quick to point out the ultimate responsibility of companies that implement AI systems. He said, “I think it’s important that companies have their own responsibility here no matter what Congress does.” Blumenthal agreed with this observation, noting that “when AI companies and their clients cause harm, they should be held liable.”

Corporate leaders are wise not to alter strategies on the basis of individual congressional hearings, no matter the urgency of the subject or the prominence of the witnesses. After all, the legislative and rulemaking processes are notoriously long and unpredictable.

But there are exceptions to this general rule. And when the CEO of the company responsible for developing some of the most ingenious of AI products speaks to Congress on the risks of the technology and the need for regulation, it’s time for those leaders to take notice.

The Altman hearing sets out the basic contours of potential federal regulation in a way that allows companies to anticipate their own internal compliance, information-reporting, and risk-mitigation requirements. It also frames the broader conversation on the corporate accountability needed to balance the benefits of technical innovation against the technology’s risks and the ethical and moral implications of its development.

More practically, the Altman hearing may prompt those leaders to establish a board-level process, such as a “Science, Technology and Innovation Committee,” to help the board exercise oversight of the company’s increasingly complex interaction with AI. As the recent hearing suggests, the sooner a company sets its corporate culture with respect to AI development, the more effective it is likely to be in its implementation.

That may well be to Altman’s liking.
