Friday, December 18, 2009
Recent years have seen a growing emphasis on the reporting of the outputs and outcomes of government programs. Yet there is limited information on what outputs and outcomes are actually reported on in practice. A new report by Richard Boyle, Head of Research for the Institute of Public Administration in Dublin, Ireland, finds that there is surprisingly little information on the nature and quality of output and outcome indicators that are actually used and presented in performance reports. He further notes that there is an almost total lack of information on cross-national comparative practice.

What types of indicators are actually being reported on? Does reality match the rhetoric? Boyle provides empirical evidence of what is actually happening in output and outcome reporting by government departments. He examines four countries regarded as among those at the forefront of performance reporting: Australia, Canada, Ireland, and the United States. His report offers cross-national comparative data on good and bad practices in performance reporting, shares good practices across these countries, assesses the overall state of such reporting, and provides directly relevant assistance to program managers in both central and line agencies.

According to Boyle, there is a clear distinction between performance reports in the US and those in other countries he examined. On the whole, indicators contained in US reports are more likely to report on outcomes, be quantitative in nature, meet data quality criteria, and have associated targets and multi-year baseline data. To learn more, read the full report just released by the IBM Center for The Business of Government.