Monday, December 9, 2013
Government agencies regularly report “incident” data, such as the number of burglaries, house fires, cases of food poisoning, bankruptcies, workplace injuries, and more.
While data can be used externally for accountability, it can also be used internally to predict and prevent these kinds of incidents.

These days, more detailed, near real-time data can be collected thanks to improvements in technology and new reporting systems. However, this more detailed data, if not well explained and put in context, can alarm the public and cause political problems even while performance improves. The Federal Aviation Administration’s experience with air traffic incident data, described below, is a case in point.

Incident reporting systems are an integral part of many agencies’ operations. But reporting the raw total of incidents that occur does not necessarily help prevent future incidents from happening.

Agency managers need to analyze operational data at a much finer level to understand why incidents occur and what can be done to prevent them in the future. Understanding the precursors of an incident becomes an essential element in improving performance.

This is often called the “black box” of performance management: understanding the relationships that connect danger signals to changes in operations that improve program outputs and outcomes.


Managing Air Traffic Incidents at the FAA

In a new report for the IBM Center, Dr. Russell Mills offers a case study of the Federal Aviation Administration’s Air Traffic Organization incident reporting systems that have evolved since the late 1990s.  He describes the introduction of voluntary self-reporting of errors by air traffic controllers and the use of increasingly sophisticated electronic tracking equipment. Both of these new measurement systems dramatically improved the timeliness and quality of data about “operational errors” – when aircraft come too close to each other while in flight.  For the most part, air traffic controllers are required to keep aircraft separated by three miles (horizontally) and 1,000 feet (vertically). Deviations from these standards are one measure of the overall safety of the air traffic control system.
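To make the separation standard concrete, here is a minimal sketch, in Python, of what a loss-of-separation check might look like. The names, data structure, and thresholds as coded are illustrative assumptions, not the FAA’s actual detection logic; note that keeping either the horizontal or the vertical minimum is sufficient, so an operational error requires both to be violated.

```python
# Illustrative sketch only; the FAA's actual conformance monitoring is far
# more sophisticated. Thresholds follow the standards described above.
from dataclasses import dataclass

HORIZONTAL_MIN_MILES = 3.0  # required horizontal separation, in miles
VERTICAL_MIN_FEET = 1000.0  # required vertical separation, in feet

@dataclass
class AircraftPair:
    horizontal_miles: float  # horizontal distance between the two aircraft
    vertical_feet: float     # vertical distance between the two aircraft

def loses_separation(pair: AircraftPair) -> bool:
    """A pair loses separation only when it is inside BOTH minima;
    keeping either horizontal or vertical separation is sufficient."""
    return (pair.horizontal_miles < HORIZONTAL_MIN_MILES
            and pair.vertical_feet < VERTICAL_MIN_FEET)

# 2.1 miles apart laterally and 400 feet apart vertically: both minima
# are violated, so this would be flagged as an operational error.
print(loses_separation(AircraftPair(horizontal_miles=2.1, vertical_feet=400.0)))  # True
```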

He writes that, ironically, this better data collection initially alarmed external stakeholders—the traveling public and Congress. To them, it seemed that there was a dramatic increase in the number of operational errors. In fact, the increased reporting of incidents that had previously been undetected or unreported led to a greater understanding of trends and causal factors, thereby allowing the FAA to put in place corrective actions. While this led to a safer air traffic system, it created political concerns for the agency.

Dr. Mills reports that the FAA overcame these political concerns by creating a new risk-based reporting system for the traveling public and Congress, one that demonstrates how the new elements of its incident reporting systems contribute to greater safety. The FAA shifted from reporting raw numbers of operational errors to reporting on the significance of those numbers, focusing on the risk created by a loss of separation rather than on compliance with the separation standards alone.
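The shift Dr. Mills describes can also be sketched in code: rather than counting events, weight each one by how much of the required separation was actually lost. The scoring function below is a hypothetical illustration, not the FAA’s actual risk methodology.

```python
# Hypothetical risk-weighted index, not the FAA's actual methodology.
# Each event is scored by how much of the required separation was lost,
# so a near collision counts far more heavily than a marginal deviation.
def severity(horizontal_miles: float, vertical_feet: float) -> float:
    """Return a 0-1 score: 0 = required separation retained, 1 = none left."""
    # Credit the pair with whichever dimension retained the most separation.
    retained = max(min(horizontal_miles / 3.0, 1.0),
                   min(vertical_feet / 1000.0, 1.0))
    return 1.0 - retained

events = [(2.9, 900.0),  # barely inside both minima: low risk
          (1.0, 200.0)]  # deep loss of separation: high risk

raw_count = len(events)
risk_index = sum(severity(h, v) for h, v in events)
print(f"raw count: {raw_count}, risk-weighted index: {risk_index:.2f}")
# Prints "raw count: 2, risk-weighted index: 0.70": two incidents, but
# nearly all of the measured risk comes from just one of them.
```

Reported this way, two periods with the same raw count of errors can look very different once severity is taken into account, which is the distinction the FAA wanted the public and Congress to see.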


Lessons from FAA’s Incident Reporting Experiences

Based on the experience of the FAA’s evolving incident reporting systems, Dr. Mills offers a set of strategic, management, and analytical lessons for other agencies that may be increasing the sophistication of their own incident reporting systems:

Strategic Lessons:  As agencies report more performance information, including incident data, there will be increased scrutiny of that performance by external stakeholders.  As a result, agencies need to be prepared to proactively educate key stakeholders about the new measures and how to interpret them.  Mills notes that “FAA leaders were often forced to act from a reactive rather than a proactive position in explaining the increased number of [operational errors] due to increased detection of incidents.”

Management Lessons:  To be useful to agency leaders, analyses of incident data have to be available in a timely manner.  At the FAA, risk analyses are assessed by panels of experts in each service area at least three to four times a week.  This frequent assessment creates a continuous feedback loop for detecting patterns that need to be addressed.  Beyond the frequency of data reporting and analysis, the success of self-reporting of errors by front-line air traffic controllers depends on collaboration between managers and employees.  In this case, the ability of the FAA and the union representing the air traffic controllers to collaborate was critical to obtaining controllers’ buy-in to report honestly without being subject to some form of retaliation.

Analytical Lessons:  Agencies have to balance the need for externally reported performance indicators with the need for assurance that those indicators convey actionable information.  In some agencies, there is external pressure to develop and report indicators without scientific rigor as to whether the measures are meaningful.  In the FAA’s case, the risk metrics were developed in response to political concerns that the raw number of operational errors was climbing sharply.  However, the agency is still in the process of developing baselines and targets for these measures under its long-term strategic plan.  In addition, the FAA found that having more data did not necessarily mean it had more performance information at hand.  Having analytical techniques to interpret the data was also an important element of its overall performance management strategy.


Graphic Credit:  Courtesy of potowizard via FreeDigitalPhotos