Artificial Intelligence in the Public Sector:

A Maturity Model

 

Edited by Michael J. Keegan

The fifth and final contribution to this forum provides public sector leaders with a view into the “art of the possible,” emphasizing how AI programs can accelerate the transformation of government programs to better serve the public, and offers them a framework for establishing a successful AI program. The challenge has always been to design and implement an AI program that has all the critical elements in place to successfully achieve the goal of improved mission delivery and citizen services.

Recognizing the need to address this challenge, the IBM Center commissioned a report in 2018 by Professor Kevin DeSouza, Delivering Artificial Intelligence in Government: Challenges and Opportunities, which proposed an initial maturity model that gave public agencies a starting point for developing an AI capability.

Since that time, much has changed in the use and application of this technology, and an opportunity arose to fine-tune this model based on extensive research into how the public sector was deploying AI, documenting successful use cases and highlighting pitfalls and lessons learned. Professor DeSouza offers this revised maturity model, developed with significant input from frontline practitioners and academics, in a follow-up report, Artificial Intelligence in the Public Sector: A Maturity Model. What follows is an excerpt from that updated report highlighting aspects of the maturity model and insights that may help government agencies get the highest value from their efforts and investments in AI.

A Maturity Model for Designing, Developing, And Deploying AI

Maturity models are popular in a wide assortment of fields, from quality management to software engineering, education and learning, organizational design, and even information systems. While each maturity model has its own peculiarities, they all provide an evolutionary framework to guide improvements and advancements in one or more domains.

The domain of interest in this report is AI design, development, and deployment efforts in the public sector. The maturity model has two dimensions. The first dimension represents the critical elements that need to be managed as AI projects are designed, developed, and deployed in the public sector. The model indicates that agencies must show proficiency in six core elements, which can be divided into two domains: technical and organizational.

• Technical domain includes big data, AI systems, and analytical capacity.

• Organizational domain includes innovation climate, governance and ethical frameworks, and strategic visioning.

The second dimension outlines the maturity levels, which begin with ad hoc, followed by experimentation; planning and deployment; scaling and learning; and finally, enterprisewide transformation.

Exploring Dimensions of the Maturity Model

The proposed maturity model outlines six core elements and five maturity levels for guiding AI initiatives in the public sector. Public agencies need to start small and be aware of the required upfront financial and time investment for data governance, computational systems, and analytical capacity.

• Big Data: AI relies on big data for its design and development but, once deployed, enables organizations to make sense of large data reservoirs through the application of machine learning algorithms. In an ideal world, public agencies should be able to access, integrate, and leverage data of interest in an effective and efficient manner. The COVID-19 pandemic is cited as a case study where AI-enabled data tools were quickly mobilized for public health, specifically in contact tracing and risk assessment.

• AI Systems: Computational systems are the engines that transform data into actionable insights and outcomes. As discussed earlier, AI applications leverage a range of computational techniques to ingest, analyze, visualize, and even act on data. AI can fully or partially automate tasks through the power of predictive data analytics, fed by multiple sources of historical and real-time data; learn from previous interactions and self-decide through the power of machine learning; and in some cases, such as chatbots, interact with users through natural language processing.

• Analytical Capacity: Analytical capacity refers to the human element of designing, developing, and deploying AI, and AI systems are only as good as the human analytical capacity that supports them. Organizations need a well-trained workforce that is analytically aware and has the aptitude to leverage data to derive evidence-driven insights. Public agencies face numerous challenges when it comes to recruiting, developing, and retaining analytical talent, including a general lack of analytically savvy people in both government and the broader recruiting pool. Moreover, regardless of existing analytical talent, deliberate mechanisms to leverage that talent to create organizational value are pivotal.

• Innovation Climate: Public agencies need to innovate if they are to deliver on their objectives given ever-evolving environmental pressures. While innovation in the public sector continues to garner interest, we still see agencies struggle when it comes to digital transformation efforts. Experimentation is critical to the ability to innovate. Yet in public agencies, experimentation is often frowned upon because failed experiments are perceived as a waste of public resources. Data challenges such as incomplete and siloed datasets, and a lack of the investment necessary to upgrade legacy computational systems, can significantly impact the ability of public agencies to innovate.

• Governance and Ethical Frameworks: Governance and ethical frameworks are vital oversight mechanisms to ensure that AI is deployed in a responsible manner and advances public value. Governance frameworks establish accountability and assign responsibility when it comes to AI design, development, and deployment. They serve as critical coordinating mechanisms to ensure that agencywide economies of scale, learning, and value can be secured. Ethical frameworks help ensure that AI mitigates issues such as bias, discrimination, and harm. When AI fails or causes harm, these frameworks can provide recourse mechanisms to compensate victims.

• Strategic Visioning: Leadership at public agencies needs to play an active role in creating environments that are supportive of the development of AI. How AI systems are designed, developed, deployed, and regularly enhanced needs to be incorporated into the long-term strategic plans of agencies. Good strategic visioning also considers the important fact that deploying AI can change the function and design of agencies, given the affordances of AI for changing work processes and engaging citizens.

Assessing Levels of Maturity

The six elements described above need to be assessed both individually and collectively in terms of their maturity. The maturity levels are noted below in increasing order of sophistication; a brief illustrative sketch of how the elements and levels fit together follows the list.

• Ad Hoc: The public agency does not have a plan in place to design, develop, and deploy AI. Datasets remain an underutilized asset, computational systems lack necessary capabilities, and analytical capacity is limited or unavailable. There is limited appetite to innovate with AI, and this inertia also plays out in the absence of governance and ethical frameworks for AI.

• Experimentation: The public agency is actively experimenting with AI. Experimental projects leverage datasets, computational systems are being designed and/or upgraded, and analytical capacity is being mobilized around these projects. There is a growing interest in learning from early experimental efforts, and a recognition of the need to invest in designing ethical and governance frameworks that support responsible experimentation and innovation.

• Planning and Deployment: The public agency has put in place a plan to design, develop, and deploy its first set of AI projects. Datasets for the initial set of AI projects are of sufficient quality, investments into the computational systems necessary for AI are in place, and an initiative to attract, mobilize, and retain analytical talent is underway. Senior leadership is supportive of AI efforts, and initial visioning efforts are underway to incorporate AI into the strategic plans of the agency. An initial set of metrics is created and agreed upon to track investments and the performance of AI.

• Scaling and Learning: The public agency is enacting thoughtful and repeatable processes to select and implement AI, and these processes encompass all aspects of AI implementation, including technical, governance, and staffing. AI projects are viewed as a critical part of the agency, and a concerted effort is being made to measure efforts against the metrics developed at prior maturity levels.

• Enterprisewide Transformation: The public agency has successfully made AI a routine part of its environment and can move quickly to implement additional AI projects as necessary. Because the necessary technical, governance, and staffing infrastructures are in place, design and deployment can proceed rapidly across the agency, and these efforts are managed using a portfolio approach.
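
To make the two dimensions of the model concrete, the minimal sketch below (written in Python, and not drawn from the report) shows one way an agency might record a self-assessment of the six core elements against the five maturity levels. The element identifiers, the SelfAssessment class, and the rule of taking the lowest rating across elements as the collective level are illustrative assumptions rather than prescriptions from the report.

from dataclasses import dataclass
from enum import IntEnum


class MaturityLevel(IntEnum):
    # The five levels from the report, in increasing order of sophistication.
    AD_HOC = 1
    EXPERIMENTATION = 2
    PLANNING_AND_DEPLOYMENT = 3
    SCALING_AND_LEARNING = 4
    ENTERPRISEWIDE_TRANSFORMATION = 5


# The six core elements, grouped by domain as in the report.
TECHNICAL_ELEMENTS = ("big_data", "ai_systems", "analytical_capacity")
ORGANIZATIONAL_ELEMENTS = ("innovation_climate", "governance_and_ethical_frameworks", "strategic_visioning")
ALL_ELEMENTS = TECHNICAL_ELEMENTS + ORGANIZATIONAL_ELEMENTS


@dataclass
class SelfAssessment:
    # An agency's rating of each core element against the five levels.
    ratings: dict

    def collective_level(self) -> MaturityLevel:
        # Illustrative aggregation rule (an assumption, not from the report):
        # the agency is treated as only as mature as its least mature element.
        return min(self.ratings[element] for element in ALL_ELEMENTS)


# Example: an agency experimenting on the technical side but still ad hoc on
# governance and strategy is collectively rated ad hoc under this rule.
example = SelfAssessment(ratings={
    "big_data": MaturityLevel.EXPERIMENTATION,
    "ai_systems": MaturityLevel.EXPERIMENTATION,
    "analytical_capacity": MaturityLevel.AD_HOC,
    "innovation_climate": MaturityLevel.EXPERIMENTATION,
    "governance_and_ethical_frameworks": MaturityLevel.AD_HOC,
    "strategic_visioning": MaturityLevel.AD_HOC,
})
print(example.collective_level().name)  # AD_HOC

Because the report stresses that the elements must be assessed both individually and collectively, an aggregation rule like the minimum above simply makes visible which element is holding an agency back; other aggregation choices are equally plausible.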

Going Forward

Moving up a level requires a) successfully overcoming the limitations of the prior level, and b) evaluating an organization’s readiness for the next level. The evaluation requires knowing what limitations public agencies need to overcome at the current level and at the next level. Therefore, the elements and levels of the proposed maturity model are intertwined and inextricably linked, rather than operating in isolation.

At the ad hoc level, individuals with a personal interest in AI often start talking about their ideas, and such conversations can quickly grow if a suitable innovation climate exists. Showing organizational interest in establishing analytical capacity and computational systems can greatly contribute to creating the level of competency required to move to the next level. External pressures, such as other countries or peer public agencies deploying AI and seeing promising results, can also act as stimulants for agencies to move from the ad hoc level to the experimentation level. Public agencies, however, need to start with strategic plans that consider the costs and benefits of initiating an AI initiative, particularly in terms of potential risks and harm to citizens.

Managers in charge of AI experiments often note that the ability to share learnings from experiments with peers can effectively facilitate learning and refinements to AI initiatives. Some even believe that this enables them to do rough benchmarking across different classes of AI. Knowledge-sharing networks can support the sharing of lessons learned and can therefore facilitate collaboration with both internal stakeholders (e.g., mid-level managers and staff in relevant departments that contribute to AI initiatives) and external stakeholders (e.g., academia, third parties, and other relevant public agencies). These efforts are paramount for public agencies to make the leap from the experimentation level to the planning and deployment level.

At both the planning and deployment level and the scaling and learning level, ongoing collaboration between program leaders and the IT department is of paramount importance. Detailed business cases need to be developed to clearly articulate how AI initiatives advance public value and engage citizens. Thoughtful medium-range plans are required to outline how efforts on AI projects are aligned to near-term priorities. Where technical infrastructure and analytical capacity are still limited, an organization interested in initiating an AI initiative would benefit from developing governance and ethical frameworks and assigning key personnel to plan for recruiting or upskilling analytical talent. This allows the agency to build a solid base for moving to the highest level of maturity, the enterprisewide transformation level.

At any level of the model, public agencies are advised to regularly reflect on and share lessons learned, as well as the costs and benefits of moving up a level. Metrics on AI projects should be developed and used at each level. The lack of such mechanisms can lead to the scaling of previously ineffective practices and poor strategies.

Check out the complete report for more details on the specific steps that can enable government agencies to move from one level to the next.