Forum: Evolving Use of Artificial Intelligence in Government

exploring ways to test and measure AI security and trustworthiness. As part of its task, the agency is working with international partners to explore the potential for global AI standards. These and similar efforts should include creating a framework for assessing bias.

Security. AI is vulnerable in several ways if designed without proper security measures. AI's potentially widespread impact amplifies cybersecurity concerns. If AI systems are driving cars, fighting wars, and the like, hackers who can compromise these systems have greater capacity to do enormous damage more quickly. Attacks could alter AI training data or introduce corrupted or incorrect data that changes the conclusions of the AI tool. Hackers also could act to reveal personally identifiable information in the data on which an AI tool was trained. With security paramount, the Defense Department is investigating how to safeguard AI technology from attacks. In a 2018 strategy, the department committed to fund research and development of reliable and secure AI systems, but more work is needed to evaluate the security of AI technologies. Our government and governments in other countries could share knowledge and lessons learned, as security concerns are global in nature. Given these interconnected security implications, government must ensure data safety and assure the public that its cybersecurity measures are sound.

Transparency. With AI, agencies can accomplish activities more quickly and accurately. By making AI transparent, users can learn how and why the tool arrived at a conclusion and what data the AI technology used. Lack of transparency poses issues when people want an explanation for why decisions were made. Some AI algorithms are proprietary; others are so complex that it is hard to explain, or for people to understand, how conclusions were reached. Without clarity about how AI produces its recommendations and conclusions, or employees who can explain results derived from AI technology, governments risk losing the public's trust. The AI research and development community recognizes that transparency will promote trust in AI systems. Researchers are looking into explainable AI, making AI algorithms and results less of a black box. This will enable governments and others that incorporate AI into their processes to respond to questions about decisions involving AI technology.

Employee knowledge. Maximizing AI benefits while managing AI risks hinges on hiring or training employees who understand and use the technology responsibly. Getting enough of the workforce up to speed is critical, but government often faces funding and other challenges and often falls short on AI training and education. The federal government should emphasize expertise in technical, digital, and data skills. It should provide extensive and ongoing training to employees so they can create, understand, manage, and work with AI technology.

Federal budget and procurement processes. Outdated federal acquisition and budget processes prevent agencies from buying and deploying new technology quickly and efficiently. Since most agencies start budgeting two years in advance, they often do not have the flexibility or clairvoyance to buy the newest technologies. The typical acquisition process involves purchasing a finished product or service, yet many AI applications are iterative, improving over time through experience. The rapid pace of AI development and improvement can leave government lagging behind. AI is moving fast; so should governments. Agencies should obtain what they need for AI by taking full advantage of the tools and flexibilities available in the budget and procurement processes. For example, agencies could use "try before you buy" acquisitions that allow them to experiment with new tools on a small scale, or staged contracts to evaluate proposals and pilot tools before investing in full.

Lessons from Canada on Maximizing AI Benefits and Managing Risks

The AI research and development community considers Canada to be at the forefront among governments in managing AI risks. The Canadian government has taken steps to ensure its departments and agencies have the tools, rules, and people to use AI responsibly. Based on the Canadian government's experiences, U.S. government agencies will need to balance regulation and oversight with support for private sector research, development, and innovation. Canada's example outlines potential tools, rules, and people issues for consideration.

Tools: Simplify buying credible AI products. In September 2018, in order to procure AI faster and more efficiently, the Canadian government released a list of more than 70 suppliers proficient in AI and AI ethics. The government deemed these qualified vendors to have delivered a successful AI product or service.

Rules: Create a framework to assess the risk of using AI in government. According to an April 2019 Canadian government directive, if a department or agency is

WINTER 2019 / 2020 IBM Center for The Business of Government 63
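The data-poisoning risk raised in the Security discussion, where altered or mislabeled training records change an AI tool's conclusions, can be made concrete with a minimal sketch. The classifier, the data points, and the "approve/deny" labels below are all hypothetical, chosen only to show the mechanism, not to represent any agency's system:

```python
# Hypothetical illustration of training-data poisoning: a toy
# 1-nearest-neighbor "eligibility" model whose conclusion flips
# after a single mislabeled record is injected into its training data.

def predict(train, x):
    """Return the label of the training record closest to x."""
    dist = lambda a, b: sum((ai - bi) ** 2 for ai, bi in zip(a, b))
    return min(train, key=lambda rec: dist(rec[0], x))[1]

# Clean (made-up) training records: (features, label)
clean = [((0.0, 0.0), "deny"), ((0.2, 0.1), "deny"),
         ((1.0, 1.0), "approve"), ((0.9, 1.1), "approve")]

applicant = (0.3, 0.3)
print(predict(clean, applicant))      # -> deny (nearest record is a "deny")

# An attacker injects one corrupted record near the target case...
poisoned = clean + [((0.3, 0.2), "approve")]
print(predict(poisoned, applicant))   # -> approve: the conclusion changed
```

Real models are far less brittle than a single-neighbor lookup, but the principle is the same: an attacker who can write to the training data can steer the tool's outputs without ever touching the deployed system.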
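The explainability goal raised in the Transparency discussion can likewise be sketched: an interpretable model can report how each input contributed to its recommendation, rather than returning a bare answer. The feature names, weights, and threshold here are illustrative assumptions, not any real scoring rule:

```python
# Hypothetical sketch of an explainable decision: a linear scoring model
# whose recommendation decomposes into named, signed per-feature
# contributions that a caseworker could cite when asked "why?".

WEIGHTS = {"income": 0.5, "debt": -0.8, "years_employed": 0.3}  # made up
THRESHOLD = 1.0                                                 # made up

def score_with_explanation(applicant):
    """Return a decision plus the contribution each feature made to it."""
    contributions = {f: WEIGHTS[f] * applicant[f] for f in WEIGHTS}
    total = sum(contributions.values())
    decision = "approve" if total >= THRESHOLD else "deny"
    return decision, contributions

decision, why = score_with_explanation(
    {"income": 3.0, "debt": 1.0, "years_employed": 2.0})
print(decision)   # -> approve
print(why)        # e.g. income contributed +1.5, debt -0.8, tenure +0.6
```

Deep models do not decompose this cleanly, which is why explainable-AI research focuses on approximating such per-feature accounts for genuinely black-box systems.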