Discussion on Retrospective Review of Regulation
On November 16, 2023, the George Washington University Regulatory Studies Center and the IBM Center for the Business of Government co-hosted an event, Building on Regulatory Foundations and Bridging to the Future, commemorating the 30th anniversary of Executive Order 12866 and the 20th anniversary of Circular A-4. Taking place a couple of weeks before ChatGPT's first birthday, the event featured several breakout sessions, including one focused on retrospective review of regulation, led by Nick Hart, President and CEO of the Data Foundation.
The session highlighted the important role of retrospective review in the regulatory process while acknowledging that the practice still leaves much room for improvement. Attendees delved into the risks and challenges of conducting retrospective review and discussed potential remedies for those challenges.
Dr. Hart pointed out that retrospective review is an essential part of the feedback loop in the rulemaking process. Retrospective review evaluates the effects of regulations after they have been implemented and determines whether existing regulations need to be removed, amended, or streamlined. Retrospective review also helps improve the design of future regulations. By comparing the results of retrospective review against the prospective regulatory impact analysis (RIA), regulators can assess whether the assumptions and models used in the ex-ante analysis are valid or need revision. That assessment, in turn, informs the analysis and design of future regulations.
Despite its value, the consensus among session attendees was that agencies have not been doing retrospective review well. There are at least two major challenges. First, agencies lack incentives to perform retrospective review. Agencies are required to conduct ex-ante RIAs before they can issue new regulations, and the Office of Information and Regulatory Affairs (OIRA) serves as the gatekeeper to ensure that agencies comply with this requirement in the rulemaking process. After a regulation has been issued, however, the absence of retrospective review carries far fewer consequences for regulators. Regulators have little motivation to identify flaws in existing regulations or to disclose such information to the public. Moreover, agencies often have limited resources, such as staff capacity and time, making regulators even less inclined to evaluate their regulations retrospectively.
The second challenge is the technical difficulty of designing regulations with evaluation in mind. Once a regulation has been implemented, it is difficult to know what the counterfactual world (the one without the regulatory change) would have looked like. Regulatory design also rarely takes data collection for retrospective review into account, so regulators often lack the data needed for an ex-post analysis even when they intend to conduct one.
While agencies do not conduct retrospective review systematically, participants in the session discussed some effective practices. For example, the Environmental Protection Agency (EPA) used regional rules as quasi-experiments to gather data for evaluating its gasoline standards. When it rescinded its COVID-19 vaccination mandate, the Occupational Safety and Health Administration was able to compare vaccine uptake after the rule was issued against the assumptions made in the rule's RIA. The EPA also has well-established data collection infrastructure for monitoring pesticides and incidents that may affect regulatory actions.
The session then explored potential remedies for achieving more systematic retrospective review. Some attendees suggested that OIRA could play a key role in providing leadership and guidance for retrospective review. Mechanisms such as the automatic sunset of regulations could also incentivize retrospective review. Under a sunset provision, agencies are required to review their rules by certain deadlines; if they fail to do so, the rules automatically expire.
Participants also discussed the value of supporting internal or external reviews. For example, the well-established economists who support the development of prospective analyses and regulatory impact assessments may be incentivized to support retrospective review in ways that program offices are not. Similarly, attendees pointed out that agencies do not have to perform retrospective review themselves; the analysis can be contracted out to third-party analysts, who may produce more objective evaluations.
It is also important to build data collection into the design of a regulation so that data are gathered from the outset. One participant pointed to OIRA's new guidance on pilot projects and data collection in the revised Circular A-4 issued in November 2023. The guidance recommends that agencies use pilot projects to test regulatory alternatives, if timing and other circumstances allow, and consider regulatory alternatives that would facilitate data collection to support retrospective review.
The session concluded with a discussion of opportunities for artificial intelligence (AI) in retrospective review of regulation. For example, AI may alleviate resource limitations by helping regulators manage their work and time more efficiently. AI may also assist in collecting or generating the data necessary for analysis, or in mining existing data for information that would otherwise be impossible to gather. Even if the lack of incentives persists, progress in cutting-edge technologies will give regulators more tools to perform better retrospective review.