2017 RAND Study Evaluates the Official Disability Guidelines (ODG)

July 22, 2017 (7 min read)

RAND Corp. published its review of the (1) Technical Quality and (2) Clinical Acceptance of the Official Disability Guidelines (ODG), published by Work Loss Data Institute (WLDI). This article briefly discusses the RAND report.

Who Is Work Loss Data Institute (WLDI)?

WLDI publishes ODG, arguably the most successful treatment guidelines in the occupational medicine space; the platform has over 100,000 users. Continually updated, ODG incorporates over 10,000 ICD-9 codes, 65,000 ICD-10 codes, and 10,000 CPT codes. ODG distinguishes itself from its primary competitor, the guidelines of the American College of Occupational and Environmental Medicine (ACOEM), by claiming independence from any medical specialty group.

Hearst, the media empire, bought WLDI in 2016 and placed it under its MCG Health unit, which is itself part of Hearst Health.

Move to Evidence-Based Medicine (EBM)

As workers’ compensation systems struggled with runaway medical inflation, states adopted a system of Utilization Review (UR) to combat what many saw as the business of medicine; that is, healthcare providers pursuing treatment merely for reimbursement instead of improved patient outcomes. Groups of specialists had created self-described standards of care based on the consensus of their members, not on any evidence that the proposed treatments resulted in better outcomes.

EBM demanded treatment decisions based not on the consensus of medical professionals, who predictably would defend their medical turf and bottom line (a consensus that often varied by region or even by state), but on the evidence. EBM advocates believe proper medical treatment should not differ from state to state: patients should receive the best care as determined by the evidence, and what works in California should work in Florida.

WLDI was uniquely well positioned to take advantage of the EBM movement as state after state sought to reform its workers’ compensation system and lower medical costs. Indeed, WLDI can (and does) tout ODG’s role in reducing medical costs and unnecessary care in many states.

Of course, these efforts lower the income of many healthcare providers by reducing the level, frequency, and duration of treatment. Many healthcare providers, injured workers, and their attorneys criticized the use of EBM and treatment guidelines as “cookbook” medicine. Were the guidelines really evidence-based, or simply a fancy cost-containment mechanism?

A Guideline to the Guidelines

Just as ODG attempts to assess the efficacy of medical treatment, the guidelines themselves soon became subjects of review, and rightly so. RAND published a review in 2004, and ODG finished second in the rankings (WLDI claims 72 guidelines were assessed; RAND claims 5). WLDI trumpeted this distinction. Texas regulators cited the RAND findings when they adopted ODG in 2005 as the official treatment guidelines of Texas, resulting in a significant reduction in medical costs.

In 2009, Adelaide Health Technology Assessment (AHTA) evaluated 27 guidelines worldwide. ODG again finished second, this time to a little-used Canadian guideline for diagnostic imaging.

WLDI cited these two studies often, claiming ODG “has been ranked among the best and most rigorous in the world for technical quality by RAND Corp. and others.”

The federal government got into the guideline-evaluation business when the Agency for Healthcare Research and Quality (AHRQ) created the National Guideline Clearinghouse. ODG no longer appears in the Clearinghouse: WLDI claims it sent AHRQ a withdrawal letter on June 16, 2016, while AHRQ’s Mary Nix stated, “We have a higher bar now for the evidence underpinning the Guidelines.” Ms. Nix went on to say, “We were not able to…assure that systematic evidence review was conducted for each of the topics that they cover in ODGs.” At the same time, RAND was studying ODG, leading to the 2017 report.

RAND’s 2017 Evaluation of ODG

Like AHRQ, RAND Corporation now subjects guidelines to more rigorous standards. As such, RAND “aims to assess ODG’s Technical Quality (the rigor of development methods) as well as Clinical Acceptability (the perceived validity of the guideline in the eyes of diverse clinical experts).”

RAND has not repeated its landmark 2004 review of medical treatment guidelines. (It has evaluated guidelines for specific purposes, for example, its 2013 review of guidelines for opioid treatment.) Why RAND chose to single out ODG, and not to study any other guidelines under the new, stricter standards, is not explained. RAND’s website does not list a review of any other treatment guidelines used for occupational injuries. Yet RAND’s appraisers acknowledge reviewing other guidelines. (“Note: the appraisal team has yet to find a guideline with a systematic review that does this.”) So, instead of reviewing all guidelines (a repeat of the 2004 study) or all guidelines in the workers’ compensation space, RAND reviewed ODG alone.

RAND offers numerous criticisms, most attributable to a tougher test; that is, the assessment tools for guidelines are now “stricter.” For example, the two tools (AGREE II and modified AMSTAR) were neither available nor used in the 2004 review. Many of the problems are due to a lack of documentation or discussion of methodology. RAND questions whether the documentation and methodology problems are due to the limited involvement of appropriately trained methodologists, whose participation matters under the new, stricter criteria, or to something worse.

Technical Quality

All four RAND-selected appraisers recommend ODG for use with some modifications. ODG scored 58 percent on the 100-point Appraisal of Guidelines for Research and Evaluation II (AGREE II) scale; RAND does not assign the scores a letter grade (e.g., A, B, C, D, F). Finding that ODG has technical quality limitations, RAND concludes, “Therefore, the development methods, as reported by WLDI, fall short of the highest quality possible, particularly those of the literature reviews supporting the guidelines.” RAND provides a list of strengths and weaknesses, as well as specific ways in which the technical quality (many limitations appear to be methodology related) could be improved.

The authors provide supporters and critics alike with plenty of “billboard” material. Perhaps the most damning is the Rigor of Development domain score. RAND’s appraisers applauded ODG when the medical literature included a systematic review, but they strongly criticized ODG’s treatment recommendations when no systematic review could be found and WLDI’s authors had to conduct their own review of the literature. WLDI performs a medical literature search whenever systematic reviews are unavailable but does not document whether such a search includes “all eligible articles or only articles that the chapter developer teams preferred to include.” However, RAND did not identify any instances where WLDI failed to include relevant studies.

Clinical Acceptance

RAND’s eight clinical acceptability panelists used the modified AMSTAR (A Measurement Tool to Assess Systematic Reviews) to score ODG as “fair/good.”

ODG’s clinical acceptability scores are much higher than in 2004. The Clinical Acceptability score is itself subject to criticism, however. EBM rejects the consensus-based approach out of concern that healthcare providers might pursue treatment options that offer little or no benefit simply because certain providers prefer them. Yet RAND selected eight practitioners to assess ODG’s “validity,” or acceptance by healthcare providers.

One or two panelists could downgrade the score simply by disagreeing with each other. The eight panelists included one US spinal orthopedic surgeon and one Australian spinal neurosurgeon. Some panelists voiced strong opposition to ODG’s recommendation against epidural steroid injections for neck and upper back injuries; other panelists agreed with ODG. The disagreement among panelists dropped the score to “uncertain” instead of “valid.” In another case, two panelists disagreed with each other on the use of spinal cord stimulators for chronic pain, resulting in an “uncertain” validity determination. Again, should a small group of panelists who disagree among themselves be able to defeat a “valid” determination? Is this consensus defeating evidence, or a lack of consensus defeating evidence?

In another case, RAND commented that a “panelist critiqued ODG’s recommendations against surgical treatments for degenerative disc disease and degenerative scoliosis.” RAND does not name the panelist or discuss whether this critique is valid or merely represents that panelist’s disagreement with the current evidence. RAND’s eight panelists expressed their opinions on the evidence, at times apparently disagreeing with the evidence itself. Isn’t this a consensus-based approach to evaluating evidence?

RAND’s Score in Context

WLDI is probably not thrilled with (1) a 58 percent Technical Quality score or (2) a “fair/good” Clinical Acceptance score. ODG’s critics will have new ammunition, especially healthcare provider groups with their own financial agendas. Few will remember (or care) that ODG’s Clinical Acceptance score is much higher now than when it was studied in 2004.

And few will care that the RAND study has several limitations as an evaluation of the ODG guidelines. First, the study places ODG in a vacuum, without a study of other guidelines (as in RAND’s 2004 review) or at least a side-by-side comparison with other occupational guidelines (as in RAND’s 2013 opioid review). Second, several RAND-identified problems seem more “form” than “substance.” Third, the Clinical Acceptance evaluation could be criticized as eight practitioners participating in an online review and arriving at an “uncertain” determination (which is considered not “valid”) whenever two or more of those clinicians disagreed with each other. Finally, and perhaps most importantly, RAND does not actually assess ODG itself for adherence to EBM.

Those criticisms aside, RAND is an expert in the field of methodology and study design. WLDI does have demonstrated methodology deficiencies under the AGREE II and AMSTAR tools. Perhaps WLDI’s new owners could lend their expertise to raise its technical quality scores. Further, WLDI can incorporate the criticisms found in the Clinical Acceptance assessment, resulting in a much higher score the next time RAND chooses to publish a study.

According to RAND, ODG is an imperfect tool, but a tool is better than no tool at all. RAND does recommend ODG’s use, albeit with some modifications, to “yield the best possible clinical outcomes in addition to reducing unnecessary healthcare expenses.”

© Copyright 2017 LexisNexis. All rights reserved.