
War, Pestilence, Extinction And Artificial Intelligence: Technology Leaders Sound A New Alarm

The potential risks of artificial intelligence, as presented by OpenAI CEO Sam Altman at the recent Senate hearings, just became a lot more complicated for corporate leaders.

On May 30, a one-sentence “open letter” released by the Center for AI Safety warned that AI technology could, in the future, pose an existential threat to humanity comparable to other societal-scale extinction threats: “Mitigating the risk of extinction from A.I. should be a global priority alongside other societal-scale risks, such as pandemics and nuclear war.”

The open letter was signed by Mr. Altman and more than 350 other executives, researchers and engineers involved in AI development.

The bleakness of the open letter, and the credibility of many of its signatories, lend additional weight to the growing public conversation about the potential risks of AI and the most effective means of regulating its deployment. It also complicates the challenges already confronting corporate leaders as they decide how to incorporate AI technology into their organizations. In other words, it’s a problem.

These leaders are already balancing the exciting opportunities presented by AI technology against its known risks, ranging from privacy breaches and clinical errors to wholesale misinformation, racial bias and manipulation. There is, among some of those leaders, a recognition that the rapid development of the technology raises longer-term concerns, including its potential to surpass human-level capabilities in certain fields.

But injecting the threat of societal extinction into the conversation moves the corporate risk discussion to a new, and most likely unwanted, level. What are we supposed to do with this? Talk about “the skunk at the picnic…”

No thoughtful corporate board is going to reject organizational investment in fundamental AI technology based on “existential risks” to society. They’re generally aware of the “big picture” risks of AI as they relate to political, economic and national security issues, among others. They get it.

But the effect of the open letter may be to lodge in the board’s collective thinking a level of skepticism about the future of the technology that didn’t previously exist. Rare is the situation in which apocalyptic threats are raised within the context of board responsibilities; the Four Horsemen of the Book of Revelation don’t regularly make a boardroom appearance. Directors will take notice.

And that notice may translate into a skepticism that affects several aspects of an organization’s future AI investment decisions, from due diligence and ongoing monitoring to more proactive self-regulation and more effective communication with consumers. When so many prominent tech leaders with such close familiarity with AI technology express their concerns in so dramatic a way, directors will intuitively wonder what steps they should take to mitigate the perceived apocalyptic threat.

But it’s also a level of skepticism that may frustrate members of the management team who want to move boldly to capture the technology’s benefits. They are likely to dismiss the impact of the open letter, and understandably so. They’ll want to encourage their board to view the warning skeptically, not as something that changes the organization’s risk profile. Doing so will no doubt require periodic massaging of the board/management dynamic.

But when the developers of a new technology go public with its existential risks, that’s a step that corporate leaders investing in the technology can’t easily walk back.