Today, we host a guest blog written by our good friend and colleague, Gary Anderberg, PhD (Stanford). In addition to being one of the smartest people I have ever met (and I’ve met a lot of smart people), Gary is an exceptionally interesting and imaginative person. Currently Broadspire’s Practice Leader for Analytics and Outcomes in its Absence and Care Management Division, Gary is quite the expert regarding predictive modeling.
Gary’s been a college professor, a management consultant, VP of one California TPA and founder of another. He helped design Zenith National’s Single-Point Program and developed Prudential’s Workers’ Comp Managed Care Program, as well as its Integrated Disability Management product.
Gary went to college to become an astrophysicist, but along the way found himself with a passion for ancient languages and cultures. As he puts it, “Some of my best friends have been dead for a few thousand years.” If that’s not enough, he also writes mystery novels, short stories and screenplays, and is a member of the Mystery Writers of America. In his spare time (you may be forgiven for asking, “He has spare time?”), Gary can sometimes be seen driving his space-age motorcycle through the back roads of Pennsylvania, wearing enough protective gear to make him look like an intergalactic warrior. – Tom Lynch
Predictive Modeling in Workers’ Compensation
Predictive modeling (PM) appears to be the buzzword du jour in workers’ compensation. There are real reasons why PM can be important in managing workers’ comp claims, so let’s stop and take a look at the substance behind the buzz.
PM is a process. Put simply, we look at tens of thousands of claims and try to discern patterns that link inputs – claimant demographics, the nature of an injury, the jurisdiction and many other factors – to claim outcomes. Modelers use many related techniques – Bayesian scoring, various types of regression analysis and neural networks are the most common – but the aim is always to link early information about a claim with the most probable outcome.
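To make the process concrete, here is a minimal sketch of the Bayesian-scoring idea the paragraph mentions: each observed claim factor nudges the prior odds of a poor outcome up or down. The factor names and likelihood ratios below are hypothetical, invented purely for illustration; a real model would estimate them from tens of thousands of historical claims.

```python
from math import prod

# Hypothetical likelihood ratios: how much more often each factor
# appears in claims that became high-cost vs. those that did not.
# (Illustrative values only -- not from any real model.)
LIKELIHOOD_RATIOS = {
    "back_strain": 2.1,
    "attorney_mentioned": 3.5,
    "age_over_50": 1.4,
    "litigious_jurisdiction": 1.8,
}

def bayes_score(factors, prior_odds=0.10):
    """Update the prior odds of a poor outcome with each observed factor,
    then convert the posterior odds back to a probability."""
    odds = prior_odds * prod(LIKELIHOOD_RATIOS[f] for f in factors)
    return odds / (1 + odds)

# A new claim with three of the four flags present:
p = bayes_score(["back_strain", "attorney_mentioned", "age_over_50"])
```

With no flags present, the score stays near the 9% baseline; with three flags it climbs past 50%, which is exactly the kind of early signal an adjuster can act on.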
Obviously, if the probable outcome is negative – a high reserve, a prolonged period of TTD or the like – modeling can prompt various interventions designed to address and ameliorate that negative outcome. In effect, we are predicting the future in order to change the future. Spotting the potential $250,000 claim and turning it into an actual $60,000 claim is how PM pays for itself.
There are two approaches to PM: (a) mining existing claims data and (b) using claims data plus collateral information that models claim factors not well represented in a standard claim file. Ordinary data mining uses the information captured as data points during the claim process – claimant demographics, ICD-9 codes, NCCI codes, location, etc. But the standard claim file is data-poor. Much of the most revealing information about claimant attitude, co-morbid conditions, workplace conflicts and the like is captured – if it is captured at all – as narrative. Text data mining is a complex and less-than-precise science at this point, so conventional data mining is limited in what it can provide for PM.
The best PM applications based on conventional data mining can provide a useful red-light, yellow-light, green-light classification for new claims, identifying those with obvious problems and those that are obviously clean, and leaving a group of ambiguous claims in the middle. This is a good start, but two important refinements that materially ratchet up the usefulness of PM are becoming available.
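The traffic-light triage described above amounts to mapping a modeled probability into three buckets. The thresholds and claim IDs below are hypothetical; in practice the cutoffs are tuned so the red bucket captures the claims worth immediate intervention.

```python
def triage(probability_of_adverse_outcome):
    """Map a modeled probability to the familiar traffic-light buckets.
    Threshold values here are purely illustrative."""
    if probability_of_adverse_outcome >= 0.60:
        return "red"      # obvious problems -- intervene now
    if probability_of_adverse_outcome >= 0.25:
        return "yellow"   # ambiguous -- watch and gather more data
    return "green"        # obviously clean -- routine handling

# Three new claims scored by the model (IDs and scores invented):
claims = {"A-1001": 0.82, "A-1002": 0.31, "A-1003": 0.07}
flags = {cid: triage(p) for cid, p in claims.items()}
```

The interesting design question is what to do with the yellow bucket: that middle group is exactly where the enhanced interview questions discussed next add the most value.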
Sociologists, psychologists, industrial hygienists and others have done a tremendous amount of research in the last 30 years or so into the many factors that influence claim outcomes and delay normal return to work (RTW). Many of these factors are not captured in the standard claims process, but they can be captured through the use of an enhanced interview protocol and they can be mathematically modeled as part of a PM application.
Systems are already in place that ask value-neutral but predictive questions during the initial three-point-contact interviews. Combining the new information from these added questions with the models already developed through claim data mining produces a more granular PM output, one that can identify particular claim issues for possible intervention.
For example, development is now underway to include a likelihood of litigation component in an existing PM system by adding a few interview questions and combining those responses with information already captured. Predicting the probability of litigation has a clear value to the adjuster and others in the claim process. Can potential litigation be avoided by changes in how the claim is managed? Do other factors in the claim make running the risk of litigation a worthwhile gamble? Better predictions make for more effective claim management.
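One common way to build such a component is a logistic model that mixes features already in the claim file with the new interview answers. Everything below is an assumption for illustration: the feature names, the coefficients, and the idea that two of the features come from interview questions are all invented, not taken from any deployed system.

```python
from math import exp

# Hypothetical coefficients. The first two features come from data
# already in the claim file; the last two come from the added
# interview questions. All values are illustrative only.
WEIGHTS = {
    "bias": -2.0,
    "prior_claims": 0.6,
    "litigious_jurisdiction": 0.9,
    "distrusts_employer": 1.4,        # interview question
    "already_consulted_lawyer": 2.2,  # interview question
}

def litigation_probability(claim):
    """Logistic model: combine claim-file data and interview answers."""
    z = WEIGHTS["bias"] + sum(WEIGHTS[k] * v for k, v in claim.items())
    return 1 / (1 + exp(-z))

file_only = {"prior_claims": 1, "litigious_jurisdiction": 1}
with_interview = dict(file_only, distrusts_employer=1)
```

Scoring the same claim with and without the interview answer shows the point of the refinement: the file data alone puts litigation below 40%, while one adverse interview response pushes it above 70% and onto the adjuster’s radar.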
Most of the PM systems in development or online are front-end loaded and look at the initial claim data set. But some trials are already underway to perform continuous modeling to look for dangers that may arise as the claim develops. The initial data set for a new claim can predict the most probable glide path for that claim, and in most cases the actual development of the claim will approximate that glide path. In some cases, however, the development can go awry. A secondary infection sets in or the claimant unexpectedly becomes severely depressed or lawyers up. This new, ongoing PM process monitors each claim against its predicted glide path and warns whenever a claim seems to be in danger of becoming an outlier – or a reinsurance event.
But wait a minute: isn’t an alert adjuster supposed to catch all of these factors from the initial interviews on? The use of PM is predicated on the idea that the best adjuster can have a bad day or miss a clue in an interview. A claim may have to be transferred to a new adjuster due to vacation, illness or retirement. Claim adjusters may well have invented the concept of multitasking and we all know that oversights can happen in a high-pressure environment.
A good PM application is the backstop, and it can be set up to alert not just the adjuster, but also the supervisor, the unit manager and the client’s claim analyst all at the same time. This brings new power and precision to the whole claim process, but only if the PM application becomes an integral part of how claims are handled and is not relegated to after-the-fact reporting. Several presentations at a recent Predictive Analytics World conference in San Francisco made it clear that, in a wide range of business models, PM is still a peripheral function which has not yet been integrated into core processes.
To make the best use of PM in managing workers’ comp claims, two conditions have to be met: (a) adjusters have to understand that PM does not replace them or dumb down their jobs and (b) claim managers have to trust the insights that PM offers. When the PM system tells you that this little puppy dog claim has a very high potential to morph into a snarling Cujo based on how the claimant answered a handful of non-standard questions . . . believe it. Taking a wait and see approach defeats the whole purpose of PM, which is to get ahead of events, not trail along after them in futile desperation.
Remember, the purpose of PM is to avert unfortunate possible outcomes. This is one job at which you can never be too effective. Progress catches up with all of us – even in workers’ comp (one of the last major insurance lines to go paperless, for example). It is unlikely that, in another five years, any claim process without a robust PM component can remain competitive. If you can’t predict how claims will develop, you will be throwing money away.
Monday, September 17th, 2012

Triaging Trouble: Predictive Modeling in Claims Management
Tuesday, October 4th, 2011

Predictive modeling has long been used in personal lines, especially auto insurance. It’s only in the last 8 or 9 years that we’ve seen it squeezing through the workers’ compensation front door in the areas of underwriting and claims administration. In this period, the major risk management consultants, TPAs and insurers have been developing sophisticated models to, in consultant-speak, “use advanced statistical techniques (e.g., multivariate analyses, generalized linear models) to simultaneously evaluate numerous potential explanatory risk factors for maximum amounts of knowledge from available data sources” (from a 2006 Towers Perrin paper) (PDF).
To translate, in the claims process, the purpose of predictive modeling is to identify injured workers who are most at-risk of delayed recovery or malingering. The best time to do this, of course, is at the time of the injury. As my friend and colleague Mike Shor, of Best Doctors, puts it, “Think of it as being no different from the triage process that occurs in combat medicine or an emergency room…. the military talks about the golden hour….it’s what happens in that first 60 minutes that drive outcome. In WC we believe there is a golden 24-48 hours where the claim decisions that get made determine the ultimate outcome. It is here where claims that have the potential to run off the rails actually do.”
To a certain degree, predictive modeling systems can suggest which injured workers are most at risk of staying out of work longer than is medically necessary. Predictive models use advanced statistical techniques to perform multivariate analyses that suggest the degree of risk associated with any one underwriting risk or any one injured worker claim. Some predictive models use hundreds, even thousands, of variables, but in the claims arena, as you can probably imagine, a limited number, perhaps 10 to 15, are of most value, and many of these are of the common-sense variety. Some are co-morbidities, such as obesity, diabetes and diseases that affect oxygen intake, all of which hinder healing. Others are demographic, such as age, education, marital status and distance from the worksite. For example, if you have a 55-year-old divorced Type 2 diabetic male who lives alone more than 20 miles from the worksite and who suffers a crushing injury to the foot, you more than likely have an employee at high risk for extended absence. Of course, any claims adjuster worth his or her salt intuitively knows this, but a predictive modeling system can examine all of the appropriate variables and spit out a ranking with recommendations in a nanosecond or two. Predictive modeling doesn’t come cheap, and it doesn’t replace the experience and judgment of a seasoned claims specialist, but, used wisely, it offers a sharp, relatively new arrow in the claims quiver.
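A back-of-the-envelope version of that ranking can be sketched as a weighted checklist over the handful of high-value variables the paragraph lists. The point weights, thresholds and recommendations below are hypothetical, invented to show how the example claimant ends up flagged; a real system would derive them from multivariate analysis of historical claims.

```python
# Hypothetical point weights for the common-sense risk factors
# described above -- illustrative numbers only.
RISK_WEIGHTS = {
    "type_2_diabetes": 3,
    "obesity": 3,
    "age_over_50": 2,
    "lives_alone": 2,
    "over_20_miles_from_worksite": 1,
    "crush_injury": 4,
}

def risk_rank(claim_factors, high=8, medium=4):
    """Total the factor weights and return a score plus a recommendation."""
    score = sum(RISK_WEIGHTS[f] for f in claim_factors)
    if score >= high:
        return score, "high risk -- assign nurse case manager now"
    if score >= medium:
        return score, "medium risk -- schedule early follow-up"
    return score, "low risk -- routine handling"

# The 55-year-old divorced diabetic who lives alone, 20+ miles from
# the worksite, with a crushed foot:
score, recommendation = risk_rank(
    ["type_2_diabetes", "age_over_50", "lives_alone",
     "over_20_miles_from_worksite", "crush_injury"]
)
```

The seasoned adjuster reaches the same conclusion intuitively; what the model adds is that this tally happens consistently, for every claim, the moment the data arrives.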
“Used wisely” is the key phrase, because if that happens the claims adjuster can quickly link the at-risk injured worker with a clinician skilled in dealing with the bio-psychosocial risk factors associated with delayed recovery. In other words, the full-court claims press can be applied very early in the claims cycle.
Add to this mix an educated employer injury coordinator who projects a caring and compassionate approach to injured workers and who offers a well-thought-out modified duty program, and the likelihood of successful return to work is increased substantially. The goal is to remove excuses for staying out of work longer than is medically necessary. This type of approach ensures that injured workers, the vast majority of whom are motivated to return to productive lives as fast as possible, do so on the fast track. Even more important, those who are not so motivated, those with other agendas, are identified almost immediately.
We recommend that you ask your insurer or TPA claims executives to explain their firm’s approach to and usage of predictive modeling. Employers should know to what degree and in what way their claims adjusters are using this tool.