Artificial intelligence has great potential to improve how the federal government works.

AI can increase operational efficiency and effectiveness, free employees from repetitive tasks, uncover new data insights, and enhance service delivery to customers. As they take advantage of these benefits, federal agencies must also manage the real and perceived risks associated with AI to build trust in these technologies.

Federal, state and local governments are embracing AI. Federal agencies use it to identify insider threats, support military deployment planning and scheduling, and answer routine immigration questions. Agencies are considering additional uses that range from checking compliance with tax laws and regulations to assessing the accessibility of government products and websites.

This white paper draws on lessons from companies and countries around the world that use AI. These organizations have identified and are addressing AI issues that include bias, security, transparency and job impact, and their insights can be instructive for federal agencies.

Many Americans have questions about the effects AI technologies may have on their lives. In an October 2018 survey of more than 2,500 Americans, 59% of respondents said they were “very concerned” or “somewhat concerned” about AI, with worries about job loss and displacement ranking highest. Respondents also expressed concern about data privacy, security, hacking and the safety of AI systems.

Similar risk factors shaped public perceptions when other technologies were introduced, but leaders must now address these concerns to foster trust as agencies rely more on AI to carry out their missions.

Through an executive order, an AI summit, and the creation of a website and a White House Select Committee on AI, the Office of Management and Budget and the Office of Science and Technology Policy are leading a government-wide effort to maximize AI’s benefits while laying the groundwork for agencies to address risks responsibly. To increase the trust that the public and federal employees place in the government’s use of AI tools, the government’s strategy addresses transparency, security, technological know-how, procurement, budgeting and risk management. This white paper discusses further steps agencies can take to manage risks, and examines pitfalls the AI research and development community has faced.

Even as agencies address concerns, they must move forward with implementation. If they do not incorporate AI tools into their work, they are likely to find it more difficult to address a growing number of complex challenges, according to Joshua Marcuse, executive director of the Defense Department’s Defense Innovation Board. “In most cases, the risks of going too slowly exceed the risks of some projects failing,” he said.

At AI roundtable discussions and interviews conducted by the Partnership for Public Service and the IBM Center for The Business of Government, participants were optimistic about their agencies’ ability to implement the technology. Many described a path to success that starts with smaller, attainable AI projects, enabling agencies to build expertise and experience. With government-wide initiatives putting AI front and center, and progress being made in AI research and development, now is the time to act.
