


Should Governments Use ChatGPT And AI-Generated Content? US Federal And State Government Leaders Share Thoughts


It’s hard to avoid the headlines about the remarkable results of Large Language Models (LLMs) such as ChatGPT and Google Bard, as well as other emerging models for generating text and image content. As a result, agencies across all levels of government are exploring the potential of AI-generated content and language models to improve their services, automate processes, and enhance decision-making.

However, there are also concerns about the ethical and legal implications of using AI-generated content and language models in government agencies. These concerns include issues of bias, accuracy, transparency, and accountability, which could have serious implications for public trust, civil rights, and social justice. As such, it is crucial to carefully consider the benefits and risks of AI in government agencies and develop appropriate policies and guidelines to ensure the responsible and ethical use of these technologies.

At the April 2023 GovFuture Forum event at George Mason University (GMU) in the Washington, DC region, government leaders Anthony Boese (Presidential Management Fellow, Interagency Programs Manager, and Ethics Officer for the National Artificial Intelligence Institute [NAII]), Scott Beliveau (Branch Chief of Advanced Analytics and Acting Director of Data Architecture in the Office of the Chief Technology Officer [OCTO] at the United States Patent and Trademark Office [USPTO]), and Patrick McLoughlin (Chief Data Officer for the State of Maryland) shared their perspectives on the use of AI and LLMs in government, and their agencies’ current posture with regard to these technologies.

Key takeaways from the event and this panel discussion are detailed below.

Anthony Boese (NAII):

“Most of us have little policy derived yet specifically about LLMs like ChatGPT. It is very new, and it takes a while for policies to get stood up. Agencies vary on how comfortable they are with experimenting with it. The NAII (National AI Institute) is housed in the VA. There is very limited application [of LLMs] we'll have in the VA, but they're a little more permissive about the use of LLMs. One thing to keep in mind is that an LLM, and things like it, are AI. And AI as a broader category does have several new but very powerful regulations involved, like a collection of Executive Orders (EOs), most importantly EO 13859 and EO 13960, which is about trustworthy AI. As an additional qualification, we have a list of things we have to do under what we call trustworthy AI. So if you're thinking about how the government triangulates these things, ‘trustworthy’ is our tagline.”

Patrick McLoughlin (CDO, Maryland):

“As the State Chief Data Officer for Maryland, I also serve as the acting executive director for our MD Think program, which is essentially our cloud-based environment for delivering services to the most vulnerable Marylanders: family services, food and nutritional benefits services such as the SNAP program, our child support, our child welfare, our CGM programs. All of those are facilitated through that particular cloud-based program.

Because the State Chief Data Officer role spans the entire executive branch, I have insight into Maryland's 28 executive branch agencies, and they all have different views of where this would be viable. Initially, we were looking at a longer roadmap for developing some of these policies. That timeline has been rapidly accelerated because of the nature of ChatGPT, Google Bard, and the other generative AI tools that are available, and the need to put guardrails up to ensure that our staff and the people working in the state aren't relying solely on them to produce documents that are seen as authoritative or set in stone, as complying with policy, or as supporting legislation.

We're in a similar state [as the US Federal Government], particularly when it comes to LLMs. We're very much in the development process right now as far as how we want to handle that. Much of our focus is on trying to mitigate or limit some of the risks around the unknowns up front in how it's being used. So, for example, there are concerns about someone within a particular office being able to begin generating procurement requirements, or something along those lines, and how we want to put some guardrails around that initially as we start to evolve our use cases and the policy around them. So we're in a similar state as the federal government of starting to evolve. We've been talking about this, at least.”

Scott Beliveau (USPTO), in addition to his panel participation, provided additional written feedback for this article:

What is the general posture your agency is taking with regards to the use of AI-generated content?

Scott Beliveau (USPTO): The United States Patent and Trademark Office (USPTO)’s mission is to foster and protect more inclusive innovation and enhance our country's economic prosperity and national security. Intellectual property (IP)-intensive industries accounted for $7.8 trillion in U.S. gross domestic product (GDP), or 41 percent of total GDP, in 2019.

Artificial Intelligence (AI) has the potential to provide tremendous societal and economic benefits and to foster a new wave of innovation and creativity. AI now appears in 18 percent of all new utility patent applications and in more than 50 percent of all the applications that we examine at the USPTO. AI-generated content and other emerging technologies, however, can pose novel challenges and opportunities in both IP policy and the tools used to deliver reliable intellectual property rights. Consequently, our general posture has been to take a measured approach by actively engaging and seeking feedback from the broader innovation community and experts in AI on IP policy issues. We also recognize that policy issues will arise in the future that we cannot yet imagine. With these engagements, we strive to continue fostering the impressive breakthroughs in AI and other emerging technology through our world-class intellectual property system.

What are the potential risks associated with using Large Language Models (LLMs) such as ChatGPT, Bard, or others in your agency’s work?

Scott Beliveau (USPTO): A patent, trademark, or copyright is part of a grand bargain. An inventor, entrepreneur, or artist is given a period of exclusive rights, encouraging her to bring innovations to market that will better society in some way. In return, details about the invention or product — such as how it is made and used — are published, empowering others to improve upon the innovation. This grand bargain of protection in exchange for transparency, leading to further innovation, is so fundamental to our nation’s progress that it is enshrined in Article I of the U.S. Constitution.

The December 2020 Executive Order “Promoting the Use of Trustworthy Artificial Intelligence in the Federal Government” (EO 13960) relies on transparency and IP protection for innovation. Anyone following the news recently knows that LLMs can display bias, unpredictability, and malicious behavior. AI can perpetuate existing biases and produce misleading or inaccurate information. LLMs may be trained on large amounts of personal, proprietary, or copyrighted material and output ‘derivations’ without respect for IP rights. User queries to public LLMs will likely be stored by the organization providing the LLM and then used in future versions. The LLM’s bias, lack of security, and risk are all key variables as our executives consider the implications for agency policy and employee use. The rapid development of LLMs is outpacing the public engagement needed to fully evaluate, verify, and validate the safety, security, and efficacy of these technologies.

How are you working to mitigate those risks?

Scott Beliveau (USPTO): At our agency, we are eager to understand and harness the potential of Generative Artificial Intelligence (AI) to improve our Agency’s operational quality and efficiency. We are as committed to pursuing innovation within our agency as we are to fostering innovation throughout the nation and the world. However, we must balance that passion with ensuring that our own use of AI technologies is transparent and protects strong intellectual property rights. Pursuant to existing agency and federal policies, we started mitigating the risks of LLMs by prohibiting our employees and contractors from using generative AI tools. This immediate action was taken while we continue to explore ways to bring LLM capabilities to the agency in a responsible manner that serves America’s innovators.

The full recording for the GovFuture Forum event is available on the GovFuture site.
