Today, we host a guest blog written by our good friend and colleague, Gary Anderberg, PhD (Stanford). In addition to being one of the smartest people I have ever met (and I’ve met a lot of smart people), Gary is an exceptionally interesting and imaginative person. Currently Broadspire’s Practice Leader for Analytics and Outcomes in its Absence and Care Management Division, Gary is quite the expert on predictive modeling.
Gary’s been a college professor, a management consultant, VP of one California TPA and founder of another. He helped design Zenith National’s Single-Point Program and developed Prudential’s Workers Comp Managed Care Program, as well as its Integrated Disability Management Product.
Gary went to college to become an astrophysicist, but along the way found himself with a passion for ancient languages and cultures. As he puts it, “Some of my best friends have been dead for a few thousand years.” If that’s not enough, he also writes mystery novels, short stories and screenplays, and is a member of the Mystery Writers of America. In his spare time (you may be forgiven for asking, “He has spare time?”), Gary can sometimes be seen driving his space-age motorcycle through the back roads of Pennsylvania, wearing enough protective gear to make him look like an intergalactic warrior. – Tom Lynch
Predictive Modeling in Workers’ Compensation
Predictive modeling (PM) appears to be the buzzword du jour in workers’ compensation. There are real reasons why PM can be important in managing workers’ comp claims, so let’s stop and take a look at the substance behind the buzz.
PM is a process. Put simply, we look at tens of thousands of claims and try to discern patterns that link inputs – claimant demographics, the nature of an injury, the jurisdiction and many other factors – to claim outcomes. Modelers use many related techniques – Bayesian scoring, various types of regression analysis and neural networks are the most common – but the aim is always to link early information about a claim with the most probable outcome.
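To make that concrete, here is a minimal sketch of the kind of model a PM effort might start from: a logistic regression linking a few early claim features to the probability of a bad outcome. The features, the tiny training set and the use of scikit-learn are all illustrative assumptions on our part, not a description of any vendor’s actual system.

```python
# A minimal sketch, assuming scikit-learn is available. Feature names,
# data and the resulting coefficients are illustrative only.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical early-claim features:
# [claimant_age, back_injury (0/1), attorney_state (0/1), days_to_report]
X_train = np.array([
    [34, 0, 0, 1],
    [52, 1, 1, 14],
    [45, 1, 0, 3],
    [29, 0, 0, 2],
    [58, 1, 1, 21],
    [41, 0, 1, 7],
])
y_train = np.array([0, 1, 0, 0, 1, 1])  # 1 = high-cost outcome

model = LogisticRegression().fit(X_train, y_train)

# Score a new claim as soon as the first report arrives.
new_claim = np.array([[49, 1, 1, 10]])
print("P(high-cost outcome):", model.predict_proba(new_claim)[0, 1])
```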
Obviously, if the probable outcome is negative – a high reserve, a prolonged period of TTD (temporary total disability) or the like – modeling can prompt various interventions designed to address and ameliorate that negative outcome. In effect, we are predicting the future in order to change the future. Spotting the potential $250,000 claim and turning it into an actual $60,000 claim is how PM pays for itself.
There are two approaches to PM: (a) mining existing claims data and (b) using claims data plus collateral information that models claim factors not well represented in a standard claim file. Ordinary data mining uses the information captured as data points during the claim process – claimant demographics, ICD-9 codes, NCCI codes, location, etc. But the standard claim file is data-poor. Much of the most revealing information about claimant attitude, co-morbid conditions, workplace conflicts and the like is captured – if it is captured at all – as narrative. Text data mining is a complex and less-than-precise science at this point, so conventional data mining is limited in what it can provide for PM.
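To see why narrative is so much harder to use than coded fields, consider this deliberately crude keyword scan over a free-text adjuster note. It is a toy, and the risk terms are invented; real text mining has to cope with negation, context and misspellings, which is exactly where the imprecision comes in.

```python
# A toy keyword scan over narrative text; the terms are hypothetical.
RISK_TERMS = {"angry", "dispute", "diabetes", "depression", "attorney"}

def flag_narrative(note: str) -> set:
    """Return any risk terms found in a free-text adjuster note."""
    words = {w.strip(".,;!?").lower() for w in note.split()}
    return RISK_TERMS & words

note = "Claimant is angry with supervisor; mentioned seeing an attorney."
print(flag_narrative(note))  # e.g. {'angry', 'attorney'}
```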
The best PM applications based on conventional data mining can provide a useful red-light, yellow-light, green-light classification for new claims, identifying those claims with obvious problems and those that are obviously clean, and leaving a group of ambiguous claims in the middle. This is a good start, but two important refinements that materially ratchet up the usefulness of PM are becoming available.
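The triage step itself can be very simple once a model produces a probability. Here is a sketch; the cutoffs are hypothetical and would have to be calibrated against an organization’s own claim history.

```python
# A sketch of red/yellow/green triage; cutoffs are hypothetical.
def triage(p_bad_outcome: float) -> str:
    """Bucket a modeled probability into a claim-triage light."""
    if p_bad_outcome >= 0.7:
        return "red"     # obvious problems: intervene now
    if p_bad_outcome >= 0.3:
        return "yellow"  # ambiguous middle: watch closely
    return "green"       # obviously clean: standard handling

for p in (0.85, 0.45, 0.10):
    print(p, "->", triage(p))
```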
Sociologists, psychologists, industrial hygienists and others have done a tremendous amount of research in the last 30 years or so into the many factors that influence claim outcomes and delay a normal return to work (RTW). Many of these factors are not captured in the standard claims process, but they can be captured through an enhanced interview protocol and they can be mathematically modeled as part of a PM application.
Systems are already in place that ask value-neutral but predictive questions during the initial three-point contact interviews. Combining the new information from the added questions with the models already developed through claim data mining produces a more granular PM output, which can identify particular claim issues for possible intervention.
For example, development is now underway to include a likelihood of litigation component in an existing PM system by adding a few interview questions and combining those responses with information already captured. Predicting the probability of litigation has a clear value to the adjuster and others in the claim process. Can potential litigation be avoided by changes in how the claim is managed? Do other factors in the claim make running the risk of litigation a worthwhile gamble? Better predictions make for more effective claim management.
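As a hedged sketch of how such a component might combine interview answers with claim-file data, consider a simple logistic score. The questions, weights and intercept below are invented for illustration; a production model would be fitted to historical litigation outcomes.

```python
import math

# Invented weights: the first two inputs come from added interview
# questions, the last two from data already in the claim file.
LITIGATION_WEIGHTS = {
    "blames_employer": 1.2,     # interview: claimant blames the employer
    "contacted_attorney": 2.0,  # interview: has already spoken to counsel
    "prior_claims": 0.6,        # claim file: count of prior claims
    "late_report": 0.8,         # claim file: reported > 7 days late (0/1)
}
INTERCEPT = -3.0  # hypothetical baseline

def litigation_probability(claim: dict) -> float:
    """Logistic score from interview answers plus claim-file data."""
    z = INTERCEPT + sum(w * claim.get(k, 0)
                        for k, w in LITIGATION_WEIGHTS.items())
    return 1 / (1 + math.exp(-z))

claim = {"blames_employer": 1, "contacted_attorney": 1, "prior_claims": 2}
print(f"P(litigation) = {litigation_probability(claim):.2f}")  # ~0.80
```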
Most of the PM systems in development or online are front-end loaded and look at the initial claim data set. But some trials are already underway to perform continuous modeling to look for dangers that may arise as the claim develops. The initial data set for a new claim can predict the most probable glide path for that claim, and in most cases the actual development of the claim will approximate that glide path. In some cases, however, the development can go awry. A secondary infection sets in or the claimant unexpectedly becomes severely depressed or lawyers up. This new, ongoing PM process monitors each claim against its predicted glide path and warns whenever a claim seems to be in danger of becoming an outlier – or a reinsurance event.
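In code, the monitoring step can be as plain as comparing actual incurred cost to the predicted path period by period. The tolerance and the figures below are hypothetical.

```python
# A sketch of glide-path monitoring; tolerance and numbers are invented.
def check_glide_path(predicted, actual, tolerance=0.25):
    """Return the period indices where actual cost exceeds the predicted
    glide path by more than `tolerance` (a fraction of the prediction)."""
    return [
        week for week, (p, a) in enumerate(zip(predicted, actual))
        if a > p * (1 + tolerance)
    ]

predicted = [5_000, 8_000, 10_000, 11_000, 11_500]  # modeled incurred path
actual    = [5_200, 8_100, 10_400, 15_000, 19_000]  # claim develops badly
print("Alert at weeks:", check_glide_path(predicted, actual))  # [3, 4]
```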
But wait a minute: isn’t an alert adjuster supposed to catch all of these factors from the initial interviews on? The use of PM is predicated on the idea that even the best adjuster can have a bad day or miss a clue in an interview. A claim may have to be transferred to a new adjuster due to vacation, illness or retirement. Claim adjusters may well have invented the concept of multitasking, and we all know that oversights can happen in a high-pressure environment.
A good PM application is the backstop, and it can be set up to alert not just the adjuster but also the supervisor, the unit manager and the client’s claim analyst, all at the same time. This brings new power and precision to the whole claim process, but only if the PM application becomes an integral part of how claims are handled and is not relegated to after-the-fact reporting. Several presentations at a recent Predictive Analytics World Conference in San Francisco made it clear that, in a wide range of business models, PM is still a peripheral function that has not yet been integrated into core processes.
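A sketch of that simultaneous fan-out might look like the following, with the notify() step left as a placeholder for whatever messaging or workflow hook a real claim platform provides.

```python
# Roles and the notify() stub are placeholders for a real claim system.
ALERT_ROLES = ("adjuster", "supervisor", "unit_manager",
               "client_claim_analyst")

def notify(role: str, claim_id: str, message: str) -> None:
    """Stand-in for email, queue or dashboard delivery."""
    print(f"[{role}] claim {claim_id}: {message}")

def raise_pm_alert(claim_id: str, message: str) -> None:
    """Alert every stakeholder at once, not just the adjuster."""
    for role in ALERT_ROLES:
        notify(role, claim_id, message)

raise_pm_alert("WC-1234", "claim drifting above its predicted glide path")
```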
To make the best use of PM in managing workers’ comp claims, two conditions have to be met: (a) adjusters have to understand that PM does not replace them or dumb down their jobs and (b) claim managers have to trust the insights that PM offers. When the PM system tells you that this little puppy-dog claim has a very high potential to morph into a snarling Cujo based on how the claimant answered a handful of non-standard questions . . . believe it. Taking a wait-and-see approach defeats the whole purpose of PM, which is to get ahead of events, not trail along after them in futile desperation.
Remember, the purpose of PM is to avert unfortunate possible outcomes. This is one job at which you can never be too effective. Progress catches up with all of us – even in workers’ comp (one of the last major insurance lines to go paperless, for example). It is unlikely that, in another five years, any claim process without a robust PM component will remain competitive. If you can’t predict how claims will develop, you will be throwing money away.