Monday, August 31, 2015
A “cubit” is an ancient measure of length – from your elbow to your middle fingertip. We no longer use it because everyone’s cubit is different, so everyone gets different results. The federal government has a project underway to move from its own version of cubits to a common, consistent set of measures for its mission-support operations.

A cross-agency initiative to Benchmark and Improve Mission-Support Operations has been underway since early 2013, when it was announced by the Office of Management and Budget (OMB). Today, the preliminary results of the effort are being used to inform discussions between agencies and OMB in their first-ever “FedStat” meetings on how well agencies are managing their administrative functions and, ultimately, delivering on their strategic objectives. The project manager, Steve Brockelman of the General Services Administration (GSA), says: “we now have a rich set of government-wide, cross-functional benchmarks to support data-driven decision making.”

Background. There have been ad hoc efforts over at least the past two decades to benchmark federal agency performance in areas as diverse as call-center efficiency, customer satisfaction, and employee satisfaction. More recently, there has been interest in benchmarking the cost and quality of services across key mission-support activities, such as human resources, real estate management, contracting, and the use of IT. In each case, there has been active support from the respective cross-agency councils, such as the Chief Human Capital Officers Council, the Federal Real Property Council, and the Chief Information Officers Council. Their interest has been spurred in part by a broader push toward shared services across agencies.

OMB’s then-deputy director for management, Beth Cobert, and then-GSA administrator, Dan Tangherlini, were the initial champions of the initiative. They have since been succeeded by two other strong champions – GSA Administrator Denise Turner Roth and OMB Controller Dave Mader – which creates continuity of support. In addition, there has been strong support from the President’s Management Council, composed of the chief operating officers (often the deputy secretaries) of the departments and major agencies.

The project is actually “owned” by the different cross-agency councils themselves. Brockelman became the point person because he runs the Office of Executive Councils in GSA, which provides the staff support for most of the cross-agency mission-support councils. He also conducted similar efforts when he was in the private sector. Brockelman notes that the key to success in any benchmarking initiative is to create consistent, standardized, and agreed-upon data elements, with clear definitions and a common frame of reference (e.g., time, place, or process).

Each of the five cross-agency councils – the Chief Financial Officers Council, the Chief Information Officers Council, the Chief Acquisition Officers Council, the Chief Human Capital Officers Council, and the Federal Real Property Council – created working groups that “took the lead in developing and selecting metrics that would help them improve cost-effectiveness and service levels within their functions.” Together, they created a “common language for measuring performance of agency mission-support functions.” Their efforts were often facilitated by staff from the Office of Executive Councils, who served as a neutral convener promoting collaboration and problem-solving among agencies and OMB.
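To make the idea of consistent, agreed-upon data elements concrete, here is a minimal sketch of how a standardized benchmark metric might be defined. The schema, field names, and example values are purely illustrative assumptions for this post, not the actual data elements the councils adopted.

from dataclasses import dataclass

@dataclass
class BenchmarkMetric:
    """One standardized, agreed-upon data element for cross-agency benchmarking.

    Each field is defined once, government-wide, so agencies report comparable
    numbers (hypothetical schema, for illustration only).
    """
    function: str    # mission-support function, e.g. "Human Capital"
    name: str        # metric name
    definition: str  # plain-language definition everyone agrees on
    unit: str        # unit of measure
    period: str      # frame of reference in time
    scope: str       # frame of reference in place or process

# A hypothetical metric expressed in the common language:
cost_per_hire = BenchmarkMetric(
    function="Human Capital",
    name="Cost per hire",
    definition="Total HR operating cost of filling positions, divided by number of hires",
    unit="US dollars per hire",
    period="Fiscal year",
    scope="Agency-wide, civilian positions",
)

The point of such a definition is that “cost per hire” means the same thing at every agency, so comparisons are comparisons of performance rather than of bookkeeping conventions.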
Development. Across the councils, members agreed on three guiding principles for their work. First, imperfect data is better than no data; data can be enhanced over time, especially if it is seen as useful by agency decision makers. Second, create action-oriented metrics that can answer questions such as: “How efficiently is my function providing services compared to my agency peers?” And third, the resulting data needs to be treated as a resource to be shared across agencies, so that the President’s Management Council and agency management teams can better understand the cost and quality of their administrative functions. The cross-agency councils will serve as clearinghouses for identifying and sharing best practices, and individual agencies can use the results to diagnose issues and prioritize areas ripe for improvement.

The initiative completed its first round of data collection in 2014, gathering about 40 metrics – largely around cost and efficiency – across the five targeted mission-support functions. Round two was completed in the first half of 2015 and added operational quality and customer satisfaction metrics – about 26 of each – across the same five functions. To get customer satisfaction data, the five councils jointly sponsored a survey of 139,000 managers, asking them about their satisfaction with, and the importance of, the 26 service areas within the five functions.

What Have Been the Results of This Effort? Brockelman says the initial results show that “the amount of variation in the cost and quality of commodity services across the government is enormous.” When data from even a single department is compared across its bureaus or components, it sparks a discussion among that department’s leadership team: are there legitimate reasons why performance varies so much from the norm?

This new benchmarking data allows the chiefs of the various mission-support functions to explore answers to fundamental management questions they wanted to be able to answer: What is the area with the greatest need for improvement? What are the tradeoffs of shifting resources from one area to another? Which shared service providers would deliver greater savings and quality? Which services are internal customers dissatisfied with, and why?
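As a rough illustration of the kind of peer comparison these benchmarks make possible – not the initiative’s actual methodology, and with entirely hypothetical bureaus and numbers – the analysis can be as simple as flagging components whose reported cost for a service sits far from the peer median.

from statistics import median

# Hypothetical cost-per-transaction figures reported by bureaus of one department
reported_cost = {
    "Bureau A": 42.0,
    "Bureau B": 39.5,
    "Bureau C": 97.0,  # a large outlier worth a leadership conversation
    "Bureau D": 44.0,
}

benchmark = median(reported_cost.values())

# Flag any bureau whose cost deviates from the peer median by more than 25 percent
for bureau, cost in reported_cost.items():
    deviation = (cost - benchmark) / benchmark
    if abs(deviation) > 0.25:
        print(f"{bureau}: {cost:.2f} vs. peer median {benchmark:.2f} "
              f"({deviation:+.0%}) - flag for discussion")

The flag itself is not the answer; as the article notes, it is the prompt for leadership to ask whether there are legitimate reasons for the variation.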
The FedStat Meetings. Collecting and reporting data is one thing; using it is another. This year, for the first time, OMB has created a forum to assess mission-support issues across functions and in the context of agencies’ mission delivery. OMB and each agency agreed to hold a high-level “FedStat” review, hosted by the agency and co-chaired by OMB’s deputy director for management and the agency’s deputy secretary. Pre-meetings are held jointly between OMB and agency staffs; the goal is to have no surprises going into the meetings. The meetings, conducted over the course of late spring and the summer, focused on real challenges, and the broader context allowed a more nuanced discussion. Specific actions are summarized by OMB staff and, where appropriate, will be incorporated into the President’s fiscal year 2017 budget, to be released in early 2016. Actual data from each agency is shared across (but not outside of) government to encourage honest dialog and develop better quality data.

Brockelman, who sits in these meetings, says that about 40 percent of the agenda is centered on the benchmarking data produced by the initiative. He says that the questions raised for each of the five benchmarked mission-support functions during the FedStat meetings include: What are the three biggest challenges revealed by the data? What are their root causes? Where do they want help from OMB, GSA, the Office of Personnel Management, or other agencies? And: does it make sense to move some of these activities to a shared service environment?

Next Steps. Brockelman says the initiative is relaunching its benchmarking website in the coming weeks to allow agencies to view, compare, and analyze all of the new data. There are also plans to discuss the implications of the benchmarking results for mission-support functions government-wide at various cross-agency council meetings this fall, identifying key performance drivers, shared challenges, and leading practices. And OMB has already begun a series of conversations with various stakeholders to uncover lessons learned and improve the 2016 process. In this endeavor, fine-tuning is a perpetual process – but it is better than using cubits!