Gerald Ray, Social Security Administration: Conversations on Using Analytics to Improve Mission Outcomes
How do you measure and improve the performance of a group of people who see themselves as experts at what they do? This is the challenge that faced Gerald Ray, who set out to improve the performance of the Social Security Administration's 1,500 administrative law judges, speeding their decision-making process and improving the accuracy of their decisions. He did this using analytic tools and targeted training sessions.

An Interest in Analytics and Law

Mr. Ray's interest in applying analytics to the law started when he attended law school. He remembers reading his assigned cases and recognizing many "divergent opinions from different judges." Mr. Ray cites the premise that judges possess expertise as a result of years of training and experience. Yet the data show that different judges, sitting at similar levels of the judicial system and with very similar training and experience, can and do render different judgments. There appears to be a subjective factor involved in applying the law. Mr. Ray realized that while the law has room for interpretation, the legally prescribed policy typically does not. Applying the policy is a process that can be readily measured, once defined. Mr. Ray also discovered that, if the policy has not been applied consistently, the only practical remedial action available is behavior change.

Mr. Ray cites several interesting influences on his approach to "marrying" analytics and the law. One is Nobel Prize-winning economist Daniel Kahneman. Mr. Ray asked Professor Kahneman what it takes to make someone an expert at what they do. Professor Kahneman responded that it requires:

- A stable process
- People immersed in the process
- Feedback on adherence to the process

Without the feedback component, people will start to diverge. With feedback, people tend to conform to the process, even if outcomes still diverge.

Mapping the Decision Process

Fast-forward to Mr. Ray's role at the Social Security Administration (SSA). His Office of Appellate Operations manages cases in which an applicant for disability benefits has appealed a ruling on his or her eligibility. Mr. Ray recalls that when he first joined SSA, there was little performance data per se, but there was a lot of historical data about the numbers and types of cases. He cites Supreme Court Justice Rehnquist as describing this type of law, administrative law, as only slightly less complex than tax law.

The cases managed by Appellate Operations ultimately result in one of two possible verdicts: eligible or ineligible. Given the binary nature of the outcome and the requirement that decisions comply with policy, Mr. Ray realized he needed to map all the process paths a case can take to reach an outcome. Mr. Ray and his staff analyzed these facts and created a "decision tree" model based on over 2,000 data points. The Appellate Operations team then ran the historical data through this new model to measure historical compliance.

The initial findings showed that a large number of cases were transferred back and forth between Appellate Operations and the District Court. This presented a focused problem to address: how could the quality of the work be improved so that the applicant either received approval of benefits or proper and accurate due process, with no bouncing back and forth? To tackle this objective, Appellate Operations created a heuristic model, "a sort of rule of thumb approach" to applying the law.
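The interview does not detail the model itself, but a minimal sketch of the underlying idea (run a case's recorded facts through the policy-prescribed decision path and compare the result with the decision actually recorded) might look like this in Python. All attributes, rules, and data below are invented for illustration; the actual SSA model drew on more than 2,000 data points.

```python
# Illustrative sketch only: the case attributes, policy rules, and sample
# data are hypothetical, not taken from the Appellate Operations model.
from dataclasses import dataclass


@dataclass
class Case:
    claimant_id: str
    severe_impairment: bool        # hypothetical policy criterion
    meets_listing: bool            # hypothetical policy criterion
    can_perform_past_work: bool    # hypothetical policy criterion
    recorded_decision: str         # "eligible" or "ineligible", as adjudicated


def policy_decision(case: Case) -> str:
    """Walk a simplified decision tree to the policy-prescribed outcome."""
    if not case.severe_impairment:
        return "ineligible"
    if case.meets_listing:
        return "eligible"
    return "ineligible" if case.can_perform_past_work else "eligible"


def compliance_rate(cases: list[Case]) -> float:
    """Share of historical cases whose recorded decision matches policy."""
    matches = sum(policy_decision(c) == c.recorded_decision for c in cases)
    return matches / len(cases)


if __name__ == "__main__":
    history = [
        Case("A-1", True, True, False, "eligible"),
        Case("A-2", True, False, True, "eligible"),    # diverges from policy
        Case("A-3", False, False, False, "ineligible"),
    ]
    print(f"Historical compliance: {compliance_rate(history):.0%}")
```

Measuring "historical compliance" amounts to running past cases through a function like this and counting divergences; in the SSA's approach, each divergence or returned case then triggers the targeted feedback described next.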
Based on Professor Kahneman's research, Mr. Ray required that a feedback loop be added to that model to guide staff, including judges, to specific training activities if a case was returned. They worked with the operations subject matter experts to create training programs aimed not at teaching the law, but at teaching how to apply the law. A feedback loop was also built into the training programs themselves and refined over time. One of the training innovations that SSA built into the process, which won the Deming Award from the U.S. Graduate School in 2011, was tiered training: "Every time a case is returned, a link is sent … first tier is a little handy-dandy guide to applying the law correctly. There's also a second tier … with hyperlinks to the regulations for more detail. So if you really don't understand the issue, you can self-train."

Training Based on How Adults Learn

To increase the effectiveness of the training, Mr. Ray researched how adults learn. Most training programs present material and expect adults to understand and retain it without any context. This type of rote learning works with children, but not as well with adults. Adults learn better when they understand the context and the value added; for example, what practical use does this have for me? Adults are also more likely to learn effectively when they can relate their own progress to that of a group of peers. So an additional motivational strategy Mr. Ray proposed was a tool called "How Am I Doing?", which lets employees see how their own performance ranks within their peer group (a minimal sketch of such a peer ranking appears at the end of this post).

Mr. Ray cites that, as a result of applying analytics through the continuous improvement approaches outlined above, performance has improved by over 12%. The percentage of cases returned dropped from 25% to 15%, "a substantial improvement in the quality of the work."

To listen to Mr. Ray's complete podcast and to read excerpts from his interview, visit the "Conversations on Using Analytics to Improve Mission Outcomes" page. In my next blog, I will highlight the insights gleaned from an interview with Lori Walsh, Chief of the Center for Risk and Quantitative Analytics at the Securities and Exchange Commission.
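As a rough illustration of the kind of comparison a "How Am I Doing?" tool might present, here is a minimal peer-ranking sketch. The metric (share of an adjudicator's cases not returned) and the numbers are invented for the example and are not drawn from the SSA tool.

```python
# Illustrative sketch only: the metric and scores are hypothetical.
def percentile_rank(own_score: float, peer_scores: list[float]) -> float:
    """Fraction of peers whose score this employee meets or exceeds."""
    if not peer_scores:
        return 0.0
    return sum(own_score >= s for s in peer_scores) / len(peer_scores)


if __name__ == "__main__":
    # Hypothetical metric: share of each adjudicator's cases NOT returned.
    peers = [0.72, 0.80, 0.85, 0.78, 0.90, 0.88]
    mine = 0.85
    print(f"Your case-return performance meets or exceeds "
          f"{percentile_rank(mine, peers):.0%} of your peer group.")
```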