Introduction to Artificial Intelligence (AI) in State Government

On September 6, 2023, the NGA Center for Best Practices hosted a virtual event, “Introduction to Artificial Intelligence in State Government,” co-organized with the Center for Scientific Evidence in Public Issues at the American Association for the Advancement of Science (AAAS EPI Center).

The event aimed to provide state leaders with foundational knowledge about AI technologies and their applications in state government. National experts and state leaders discussed both the potential and the risks of AI applications, as well as policies to help ensure AI’s responsible use in the public sector.

The Latest In Washington, D.C., On AI

Sean Gallagher, Deputy Director for Government Affairs at AAAS, provided an overview of what the federal government and Congress are doing to regulate and oversee this technology. To date, there are no overarching federal regulations on the responsible development and use of AI. In practice, states are leading the way on new AI guidance. Much of the current focus at the federal level is about the impact of AI on jobs and the economy, equity in AI and maintaining the U.S.’s competitive edge.

Gallagher described these recent federal-level activities:

  • White House: In October 2022, the White House released a Blueprint for an AI Bill of Rights, which lays out principles that people should be able to expect from automated systems: requiring entities to disclose when they use AI, giving individuals the ability to opt out, putting legal protections in place where the use of AI may cause someone harm, and ensuring individuals have agency over how their data is used and how it may affect equity.
  • Commerce/NIST: The U.S. Commerce Department’s National Institute of Standards & Technology (NIST) influences and sets standards for AI use in the federal government. In 2019, NIST issued the first tools for mitigating bias. In January 2023, the agency released a voluntary AI Risk Management Framework with a focus on governance and consideration of the entire AI life cycle when measuring, mapping and managing risks.
  • FTC, CFPB, DOJ, EEOC: In April 2023, the Federal Trade Commission (FTC), Consumer Financial Protection Bureau (CFPB), U.S. Department of Justice’s (DOJ) Civil Rights Division and Equal Employment Opportunity Commission (EEOC) jointly issued a statement on enforcement efforts against discrimination and bias in automated systems. In 2023, the FTC began new investigations into inaccuracies stemming from the use of generative AI.
  • Congress: U.S. Senate Majority Leader Chuck Schumer of New York is leading a bipartisan effort, the SAFE Innovation Framework for AI, with three other senators. First announced in June 2023, the framework has five themes for AI legislation: safety, innovation, accountability, alignment with democratic values, and transparency.

Gallagher noted that AI is getting a lot of attention on Capitol Hill now, with many legislative efforts focused on AI. To date, AI has not fallen neatly along partisan lines. AI will be a cross-jurisdictional issue because it affects everything from healthcare to transportation and education.


Why Generative AI Has Grabbed the World’s Attention Since Late 2022

Ellie Pavlick, Brown University Assistant Professor of Computer Science & Linguistics, compared traditional AI with generative AI. Pavlick noted that we’ve been using AI systems in different capacities, such as predicting the weather, for the past several decades. Traditional AI models were often based on well-understood tabular data and yielded “yes or no” outputs and predictions, such as answers to questions about whether it might rain. AI models of all types have received more attention recently as they’ve been used in high-impact applications such as hiring.

Pavlick noted that generative AI is a new type of AI with open-ended outputs rather than yes-or-no decisions. These systems are neural networks that learn associations, such as which words commonly appear together in a large language model. The model is first trained to predict the next word in a sentence, and then longer sequences of words, from vast amounts of text drawn from across the Internet.
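
To make the next-word idea concrete, here is a minimal, purely illustrative Python sketch. It predicts a likely next word from simple word-pair counts in a tiny sample text; real large language models use neural networks trained on vastly larger corpora, so this is an analogy, not an implementation.

```python
# Toy illustration of "predict the next word" (an analogy only; real large
# language models are neural networks trained on web-scale text).
from collections import Counter, defaultdict

corpus = "the rain in spain stays mainly in the plain".split()

# Count how often each word follows each other word (word-pair associations).
followers = defaultdict(Counter)
for prev_word, next_word in zip(corpus, corpus[1:]):
    followers[prev_word][next_word] += 1

def predict_next(word: str) -> str:
    """Return the most frequent follower of `word` in the sample text."""
    counts = followers.get(word)
    return counts.most_common(1)[0][0] if counts else "<unknown>"

print(predict_next("in"))   # "spain" (ties go to the first word seen)
print(predict_next("the"))  # "rain"
```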

She noted three major risks with generative AI: “hallucination,” easily bypassed technical guardrails, and bias:

  • Hallucination: Generative AI models don’t know what they don’t know, so when they lack access to a database to ground them in reality, they may make up answers, whether true or not. For example, when Pavlick asked a generative AI system to create a biography of her, it invented awards she had never received. This is referred to as “hallucination.” (A sketch of grounding follows this list.)
  • Technical issues: OpenAI and Google have worked to prevent generative AI models from, for example, generating false news stories or giving instructions on how to make a bomb. However, AI researchers have shown that such technical guardrails are easy to bypass using the same guess-and-check methods legitimate AI researchers use to build them.
  • Bias: With both traditional and generative AI, the relationship between inputs and outputs results from a statistical algorithm that is subject to unfair bias and other risks. Generative AI models, however, work in an unstructured way: the training inputs (pulled from across the web) are essentially a black box, and the associations the model makes can be hard to inspect. Compared to traditional AI, which benefits from decades of research, generative AI is too new to have recommended solutions to bias yet. As an example, Pavlick noted that generative AI will most likely display pictures of white males when asked to create or display a picture of a surgeon.
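
One common mitigation for hallucination is to ground a system’s answers in a trusted data source and decline to answer when no record exists. The Python sketch below illustrates the pattern under hypothetical names (the knowledge base and function are illustrative, not any real product’s API):

```python
# Illustrative "grounding" pattern (hypothetical names, not a real API):
# answer only from a vetted knowledge base, and decline rather than guess
# when no record exists -- the opposite of an ungrounded model's fluent guess.
KNOWLEDGE_BASE = {
    "state bird of ohio": "the cardinal",
}

def grounded_answer(question: str) -> str:
    """Answer from the knowledge base, or admit the answer is unknown."""
    fact = KNOWLEDGE_BASE.get(question.strip().lower())
    return fact if fact is not None else "I don't know; no record found."

print(grounded_answer("State bird of Ohio"))    # "the cardinal"
print(grounded_answer("State bird of Utopia"))  # "I don't know; no record found."
```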

She noted that large language models and generative AI in general open up exciting possibilities, but currently there are more questions than answers. Over the next three to five years, we will likely see rapid advances in our understanding of the risk landscape.

While AI has enormous potential to make predictions or to help with decision-making and automating routine tasks, its risks should be weighed carefully. She told the audience that now is a good time to begin using and understanding the technology within safe or “sandboxed” domains so we are ready to adapt quickly in response to progress.


How States are Using AI

State speakers at the event included:

  • Washington Chief Technology Officer Nick Stowe
  • Ohio Chief Information Officer and Assistant Director of the Department of Administrative Services Katrina Flory
  • Utah Chief Information Officer, Department of Technology Services, Alan Fuller

The state speakers discussed creative ways their states are using AI to increase productivity and efficiency. They shared examples of AI use in everything from fraud detection to chat bots to identifying cattle brands. States are also implementing policies and related activities to explore the risks of AI, putting out guidelines on the use of generative AI by government staff, standing up advisory committees and study centers, and implementing procurement guides based on principles of responsible and safe AI use. In addition, they addressed their challenges and concerns about working with AI at this time.

You can watch each of the state speakers’ presentations at this link. A summary of key points made by each state is presented here:

  • Washington state is applying computer vision and pattern recognition to detect and prevent wildfires. The state is also using generative AI for software code development through an application called Copilot; since the application is relatively new to the state, it is taking a cautious and purposeful approach. In terms of AI policies, the state is applying its existing privacy principles, and the Washington state privacy officer co-chairs the state AI committee. Washington published interim guidelines for the responsible use of generative AI in state government and developed procurement guidelines (drawing heavily on the NIST Risk Management Framework) to ensure new AI tools align with state principles.
  • The state of Ohio is using AI for chat bots on the state website, OHIO.gov, and for predicting fraud in unemployment insurance. InnovateOhio, an organization founded by Lieutenant Governor Jon Husted, led recent AI Forums in Cincinnati, Cleveland and Columbus that focused on using AI as a tool for productivity, education, customer service and quality of life. All three forums reached capacity within hours of being announced. The Lieutenant Governor assembled AI thought leaders from the education, business and technology fields to help lead the discussions. Attendees included K-12, higher education and workforce development professionals, technology and business leaders, and state and local elected officials.
  • Utah is using AI for threat detection in cybersecurity, and its Department of Agriculture is using computer vision to identify livestock brands, saving significant time over the prior manual process. The state is currently drafting a policy on the use of generative AI that generally encourages its use but advises caution. The state also created a Center of Excellence in AI. Utah is taking an enterprise approach in which all AI use takes place within the state’s internal enterprise system, not on the open web; this allows the state to ensure that data entered via a prompt won’t be made public and helps ensure quality outputs from its AI models.

State AI Resources

The NGA Center for Best Practices has compiled a state resource list on artificial intelligence that currently provides links to federal-level activities, state executive branch activities, state legislative actions, local-level activities, and resources for technical assistance. As additional items are identified, the resource list will be expanded.

Governors’ offices and other policy experts and stakeholders are encouraged to contact NGA to share input on the resource list or suggest specific AI topics the NGA Center and the AAAS EPI Center can address in future events. NGA’s intended audience includes Governors’ advisors and staff, state policymakers and executive leaders, state procurement officials, state chief information and technology officers, and state offices that oversee automated systems such as public benefits distribution, government hiring, and fraud detection.

The AAAS EPI Center also shares AI resources on its website, including, for example, Foundational Issues in AI, a Glossary of AI Terms, and other useful materials.

