Measuring Up: How Can CIOs Take Stock?

 

Thursday, August 13th, 2015 - 8:35
Savvy leaders know that it is important not only to understand what you are doing and how well you are doing it, but also to make sure your key stakeholders know.

Information technology has made real-time data available, along with tools to display that data, such as dashboards, scorecards, and heat maps. This has boosted the use of data and evidence by government decision makers in meeting their agency and program missions. But what about the use of performance metrics by agency chief information officers themselves?

Background. Typically, CIOs have a good inventory of metrics on the performance of their technical infrastructure, such as server downtime. Metrics on non-technical elements, however, such as the innovation capacity of an agency's IT organization and its overall health as an organizational unit, are at an earlier stage of development.

According to Kevin Desouza, in a new report for the IBM Center, these types of metrics are critical for CIOs to effectively manage their IT organizations, and to convey the strategic value of their IT capabilities for attaining agency-wide objectives.

Desouza interviewed over two dozen public sector CIOs at all levels of government to understand what they saw as missing metrics and what challenges they faced in trying to fill the gaps and create a portfolio of balanced metrics to manage their organization. They told him that it was important to develop three “baskets” of metrics to measure their organization’s IT performance:  IT project management metrics, IT operations management metrics, and the extent of IT innovation in their organizations. 

Key Challenges in Creating IT Metrics.  In his interviews, Desouza learned that IT organizations cannot be successful on their own; they depend on other organizational units in order to meet their performance targets.  For example, they find they may not be able to move as quickly as they would like because of accounting, legal, financial, procurement, personnel, and security policies created and controlled by other units. 

In addition, when IT assets are distributed across multiple organizational units, it becomes important for a CIO to find ways to share best practices in developing standards for comparison of performance.  One organization Desouza examined had over 30 different components, each with its own IT organization – and no common method for comparing performance. 

Another challenge is being able to collect real-time data on IT operations.  Desouza observed: “metrics not collected in real time can largely create analyses that are focused on looking for problems, and not on taking advantage of opportunities for optimization and innovation.”

Types of Metrics CIOs Should Develop.  Desouza found that the "usual suspects" for IT metrics tend to be technical in nature, such as server up-time, CPU utilization rates, and network latency.  But he says that to be useful to managers, metrics need to form a balanced portfolio that covers project management, for new IT systems; operations management, for legacy systems; and innovation, to ensure the IT organization stays abreast of the cutting edge in a fast-moving industry. 
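The three-"basket" portfolio described above can be sketched as a simple data structure. The basket names come from the report; the individual metric names and values below are hypothetical placeholders, not figures from Desouza's interviews:

```python
# A minimal sketch of a balanced IT metrics portfolio, grouped into the
# three "baskets" the report describes. All metric names and values are
# illustrative assumptions.
portfolio = {
    "project_management": {
        "projects_on_schedule_pct": 82.0,   # hypothetical
        "budget_variance_pct": -4.5,
    },
    "operations_management": {
        "server_uptime_pct": 99.7,
        "cost_per_help_desk_ticket_usd": 18.40,
    },
    "innovation": {
        "ideas_generated": 24,
        "ideas_adopted": 5,
    },
}

def is_balanced(p):
    """Treat a portfolio as 'balanced' only if every basket is populated."""
    return all(len(metrics) > 0 for metrics in p.values())

print(is_balanced(portfolio))
```

Grouping metrics this way makes gaps visible at a glance: an empty "innovation" basket, for instance, signals the kind of one-sided, purely technical inventory the report cautions against.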

Regarding project management metrics, Desouza offers illustrative examples in areas such as employee productivity, project schedules, project budgeting and contracting, and organizational satisfaction with the results of service level agreements. 

For operations management, he challenges CIOs to look beyond costs and service reliability to include metrics around how the IT organization supports key agency-wide business priorities.  This would include the development and effectiveness of various applications, security, and cost efficiencies.  Other operations management metrics might include organizational and contractor personnel qualifications and end user satisfaction. 

Finally, for innovation metrics, Desouza notes: “Indicators based on monetary value are not enough.”  He says looking at stakeholders and industry peer recognition are important metrics as well.  He says “Qualitative and quantitative measures such as idea generation, idea diffusion, links between activities and impacts, and ideas selection all present unique opportunities for organizations to consider how innovation is grown.”

What Is the Obama Administration Doing About IT Metrics?  Back in 2012, the Federal CIO Council joined a broader Administration initiative to benchmark mission-support operations. The goal was to identify a couple dozen IT metrics that could be collected across the federal government in a standardized fashion so comparisons could be made and questions asked about what works better. 

Initial metrics and definitions were agreed upon in 2013 and data were collected for the first time in 2014.  The initial set of metrics focused on cost and efficiency, in part because those data were more readily available.  These included measures such as: “IT spend as a share of Total Agency Spend,” and “Cost per Help Desk Ticket.”
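The two benchmark measures named above are simple ratios. The sketch below shows how they might be computed; the dollar figures are invented for illustration and are not actual agency data:

```python
# Hypothetical inputs (not real agency figures)
it_spend = 250_000_000            # annual agency IT spend, USD
total_agency_spend = 5_000_000_000  # total annual agency spend, USD
help_desk_cost = 3_600_000        # annual help desk operating cost, USD
tickets_resolved = 200_000        # help desk tickets resolved in the year

# "IT Spend as a Share of Total Agency Spend"
it_share = it_spend / total_agency_spend

# "Cost per Help Desk Ticket"
cost_per_ticket = help_desk_cost / tickets_resolved

print(f"IT share of agency spend: {it_share:.1%}")        # 5.0%
print(f"Cost per help desk ticket: ${cost_per_ticket:.2f}")  # $18.00
```

The value of the governmentwide exercise lies less in any one agency's number than in the standardized definitions, which make the ratios comparable across agencies.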

When a second round of data were collected in 2015, the initiative expanded to include “quality of service” measures – both operational quality and end-user customer satisfaction.  These included metrics such as “Help Desk Speed to Answer,” and satisfaction with the quality of support for e-mail service. The various satisfaction metrics involved a governmentwide survey of nearly 140,000 managers.  The results were used by departmental deputy secretaries and OMB as part of broader assessments of how individual departments were managing themselves. 

In addition, OMB’s new guidance for implementing the recently adopted Federal IT Acquisition Reform Act (FITARA) clarifies CIO roles and authorities over IT performance across their agencies.  Effective understanding and use of metrics can help Federal CIOs implement that statute in a way that demonstrates the value of IT across agency missions and programs.

The illustrative metrics developed by Desouza expand on these efforts by suggesting a broader aperture for internal use within an agency.  In many cases, these could be IT metrics linked to agency-specific missions that might not lend themselves to a governmentwide approach.  Nevertheless, the portfolio of IT metrics suggested by Desouza could spark more targeted conversations about how IT organizations can best develop metrics and show their value to the broader organization.