The US spends nearly $400bn annually on pharmaceutical drugs, and some estimates put fraud, waste, and abuse (FWA) at more than 30% of that. While recent trends point to a slower rate of increase in pharmaceutical spending, there is considerable opportunity to remove wasteful and fraudulent spending on pharmaceuticals.
Health analytics provides precise, scalable solutions to detect and mitigate FWA, enabling pharmacy teams at health plans to use their resources more effectively and generate higher savings.
This blog discusses the background and process of doing FWA analytics. I will cover more detailed FWA identification methods in future blogs. Please SUBSCRIBE so you don’t miss them.
What is FWA?
The motivation behind FWA is almost always money or access to controlled substances. On rare occasions, prescribing doctors work illegally with nearby pharmacies to generate unnecessary, fraudulent drug prescriptions. Some pharmacies also commit FWA to boost their revenue or profit, and patients can commit fraud to abuse their pharmacy benefits.
Some examples of FWA are:
Fraud – outright illegal
- Fraudulent scripts – where a prescription is fraudulently sent from a doctor’s office to the pharmacy even though the patient never visited the doctor, or never had the drug prescribed during the visit.
- Fraudulent fills – where the pharmacy bills the insurance company for prescriptions that were never picked up by, or sent to, the patient.
- Filled for another person – where one patient pretends to be ill, requests a prescription, and picks up the fill for another person.
Waste – not illegal but unjustified
- Switching to more expensive options – where, without justifiable reasons, either the doctor deliberately prescribes a more expensive drug option or the pharmacy deliberately switches the prescription to a more expensive alternative. Some US states have mandated generic substitution, where all prescriptions (unless the doctor specifically requests “dispense as written”) must be filled with the generic equivalent at the pharmacy.
- Excessive prescribing – where the prescribing doctor prescribes drug treatments without clinical justification, or prescribes a longer days’ supply than is necessary.
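The brand-to-generic switching case above lends itself to a simple screen. The sketch below flags brand-name fills that have a generic equivalent and were not marked “dispense as written”; the brand-to-generic map and the claim fields are illustrative assumptions, not a real drug database.

```python
# Sketch: flag brand-name fills with a generic equivalent that were not
# marked "dispense as written" (DAW). The mapping and fields are
# illustrative assumptions only.
BRAND_TO_GENERIC = {"Lipitor": "atorvastatin", "Prozac": "fluoxetine"}

fills = [
    {"claim_id": "C1", "drug": "Lipitor", "daw": False},
    {"claim_id": "C2", "drug": "Lipitor", "daw": True},   # doctor required the brand
    {"claim_id": "C3", "drug": "atorvastatin", "daw": False},
]

def flag_brand_waste(fills):
    """Return claim IDs where a generic equivalent exists but was not dispensed."""
    return [f["claim_id"] for f in fills
            if f["drug"] in BRAND_TO_GENERIC and not f["daw"]]

print(flag_brand_waste(fills))  # ['C1']
```

A real implementation would draw the brand-to-generic mapping from a drug reference database rather than a hard-coded dictionary.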
Abuse – typically of controlled substances
- Pain management, pill shopping – where the same patient visits multiple doctors to get multiple prescriptions for controlled substances, e.g. opioids.
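The pill-shopping pattern above is also straightforward to screen for: look for the same patient filling controlled-substance prescriptions from many distinct prescribers. In this sketch the field layout and the three-prescriber threshold are illustrative assumptions.

```python
# Sketch: flag possible "pill shopping" -- one patient with
# controlled-substance fills from several distinct prescribers.
from collections import defaultdict

rx_claims = [
    # (patient_id, prescriber_id, drug_class)
    ("P1", "D1", "opioid"),
    ("P1", "D2", "opioid"),
    ("P1", "D3", "opioid"),
    ("P2", "D1", "opioid"),
    ("P2", "D1", "statin"),
]

def flag_pill_shoppers(claims, drug_class="opioid", min_prescribers=3):
    """Return patients with fills of drug_class from >= min_prescribers doctors."""
    prescribers = defaultdict(set)
    for patient, prescriber, cls in claims:
        if cls == drug_class:
            prescribers[patient].add(prescriber)
    return {p for p, docs in prescribers.items() if len(docs) >= min_prescribers}

print(flag_pill_shoppers(rx_claims))  # {'P1'}
```

In practice you would also restrict the comparison to a time window, since prescriptions from many doctors over several years can be legitimate.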
Process for FWA identification and mitigation
A well-thought-out, well-designed analytics process is necessary to succeed in using analytics to combat FWA.
Step 1 Planning the analysis
All successful analytics projects start with thorough, insightful framing of the analysis, through which the right questions are asked, pragmatic implementation considerations are weighed, and realistic goals are set.
This step requires the analyst to engage subject matter experts (SMEs) who know the business environment and its challenges well. For FWA, these SMEs are pharmacists with real-world retail and health plan pharmacy benefit management experience. You will ask a series of questions that probe:
- the genesis of the problem,
- when and how it’s observed,
- what impact the FWA cases have,
- who is impacted, and how.
With these questions answered, you will then specify what you’re analyzing, how you will do the analysis, what the intended solution will be, and how you expect the results to be used in practice. This initial planning step is critically important: a perfect analysis answering the wrong question is an utter waste of time.
Step 2 Data ETL
Next, following the analytic plan, obtain data from the available sources. Judge how complete, accurate, and usable each source is. There is no need to require 100% accuracy or completeness; often “good enough” is good enough, as perfect data does not exist in the real world.
To the extent possible, ensure your extract, transform, and load (ETL) process is dynamic and scalable, meaning it can accommodate some changes in the input data structure, perform the transformations through automated queries/steps as much as possible, and handle increasing volumes of data.
The ETL process should have built-in quality/accuracy checks, so that changes in the data and errors introduced by the ETL process itself are identified automatically.
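As a minimal sketch of such built-in checks, the function below scans loaded claim rows for missing fields, negative payments, and duplicate claim IDs. The field names and the specific checks are illustrative assumptions, not a standard claims schema.

```python
# Sketch of automated post-load quality checks on claims data.
# Fields and rules are illustrative assumptions.

def check_claims(rows, required=("claim_id", "patient_id", "drug", "paid_amount")):
    """Return a list of (row_index, problem) pairs for review."""
    problems = []
    seen_ids = set()
    for i, row in enumerate(rows):
        for field in required:
            if row.get(field) in (None, ""):
                problems.append((i, f"missing {field}"))
        if row.get("paid_amount", 0) < 0:
            problems.append((i, "negative paid amount"))
        if row.get("claim_id") in seen_ids:
            problems.append((i, "duplicate claim_id"))
        seen_ids.add(row.get("claim_id"))
    return problems

rows = [
    {"claim_id": "C1", "patient_id": "P1", "drug": "metformin", "paid_amount": 12.5},
    {"claim_id": "C1", "patient_id": "P2", "drug": "", "paid_amount": -4.0},
]
for idx, problem in check_claims(rows):
    print(idx, problem)
```

Running these checks on every load, and alerting when problem counts jump, is what makes the ETL process self-monitoring rather than a one-off script.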
Step 3 Baseline
Once your ETL process completes, you will have the data necessary to derive the baseline, i.e. the observed level of utilization across patients, drugs, and doctors. You can analyze the baseline for anomalies, and use it to gauge future trends and spot new problem areas.
At this step, you will also want to add extra layers of intelligence, for example ATC classifications or RxNorm codes on the claims data.
Provided you have sufficient volume of data, you may also wish to split up the data into different years. This will enable you to more clearly observe trends.
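A minimal baseline along these lines is average cost per patient, per drug, per year, so that year-over-year trends can be compared. The field names and values below are illustrative assumptions.

```python
# Sketch: baseline average cost per patient, per drug, per year.
# Field names and values are illustrative assumptions.
from collections import defaultdict

claims = [
    {"year": 2022, "patient": "P1", "drug": "atorvastatin", "paid": 30.0},
    {"year": 2022, "patient": "P2", "drug": "atorvastatin", "paid": 50.0},
    {"year": 2023, "patient": "P1", "drug": "atorvastatin", "paid": 90.0},
]

def baseline_cost_per_patient(claims):
    """Average paid amount per distinct patient, keyed by (year, drug)."""
    totals = defaultdict(float)
    patients = defaultdict(set)
    for c in claims:
        key = (c["year"], c["drug"])
        totals[key] += c["paid"]
        patients[key].add(c["patient"])
    return {k: totals[k] / len(patients[k]) for k in totals}

print(baseline_cost_per_patient(claims))
# {(2022, 'atorvastatin'): 40.0, (2023, 'atorvastatin'): 90.0}
```

The same pattern extends to scripts per patient or days’ supply; only the aggregated field changes.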
Step 4 Fraud case identification
You will typically compute averages such as “cost per patient”, “scripts per patient”, and “days’ supply” per drug, per patient, and per doctor. Then divide each patient’s observed average by the average across all patients, to identify patients who consume far more than their peers. Typically, the upper decile (top 10%) is highly indicative of abnormality. You can compare across doctors in the same way.
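The ratio-and-decile approach can be sketched in a few lines. The cost figures below are made-up illustrative values, and the upper-decile cutoff is the rule of thumb described above rather than a universal threshold.

```python
# Sketch: divide each patient's cost by the overall mean and flag
# patients above the upper-decile cutoff. Data are illustrative.
from statistics import mean, quantiles

cost_per_patient = {"P1": 120.0, "P2": 80.0, "P3": 95.0, "P4": 100.0,
                    "P5": 110.0, "P6": 90.0, "P7": 85.0, "P8": 105.0,
                    "P9": 88.0, "P10": 900.0}

overall = mean(cost_per_patient.values())
ratios = {p: cost / overall for p, cost in cost_per_patient.items()}

# Upper-decile cutoff on the ratio distribution (n=10 gives 9 cut points).
cutoff = quantiles(ratios.values(), n=10)[-1]
outliers = [p for p, r in ratios.items() if r > cutoff]
print(outliers)  # P10 stands far above its peers
```

In a real data set you would carry the flagged patients forward with their underlying claims, so a reviewer can see exactly which fills drive the anomaly.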
This is also where you may wish to use more advanced statistical techniques, such as cluster analysis, to obtain more nuanced expected levels of utilization and thus be better able to find outliers. For example, through clustering analyses, you may identify that patients with diabetes and hypertension living in urban areas consume similar levels of drugs. Thus identifying outliers within that specific cohort of patients would make much more sense than comparing across all patients.
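Full clustering would use a library such as scikit-learn; as a simplified stand-in for the idea, the sketch below compares each patient only against peers in the same (condition, area) cohort. The cohort labels and the 1.5x threshold are illustrative assumptions, not a fitted model.

```python
# Simplified stand-in for cluster-based peer groups: flag patients whose
# cost far exceeds the average of their own cohort. Labels and the
# threshold are illustrative assumptions; a real model might derive
# cohorts via k-means or similar.
from collections import defaultdict
from statistics import mean

patients = [
    {"id": "P1", "cohort": ("diabetes+htn", "urban"), "cost": 200.0},
    {"id": "P2", "cohort": ("diabetes+htn", "urban"), "cost": 220.0},
    {"id": "P3", "cohort": ("diabetes+htn", "urban"), "cost": 700.0},
    {"id": "P4", "cohort": ("healthy", "rural"), "cost": 40.0},
    {"id": "P5", "cohort": ("healthy", "rural"), "cost": 45.0},
]

def cohort_outliers(patients, threshold=1.5):
    """Return IDs of patients whose cost exceeds threshold x their cohort mean."""
    by_cohort = defaultdict(list)
    for p in patients:
        by_cohort[p["cohort"]].append(p)
    flagged = []
    for members in by_cohort.values():
        avg = mean(m["cost"] for m in members)
        flagged += [m["id"] for m in members if m["cost"] > threshold * avg]
    return flagged

print(cohort_outliers(patients))  # ['P3']
```

Note that P3 would look unremarkable against the healthy rural cohort’s low baseline, but stands out within its own peer group, which is the point of cohort-level comparison.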
You would also want to present findings in ways that enable non-statisticians to understand and use the output. For example, the percentile approach to risk scoring is much easier to understand than an estimated probability that a claim is fraudulent. Even better, if your analytic model can identify the drivers of fraud, the output is more likely to be actionable.
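Converting raw model scores into percentile ranks is a small transformation; a minimal sketch, with made-up scores, might look like this:

```python
# Sketch: convert raw risk scores into 0-100 percentile ranks, which
# reviewers often find easier to act on than raw probabilities.
# Scores are made-up illustrative values.

def percentile_ranks(scores):
    """Map each key to the share of scores at or below it, as a 0-100 rank."""
    values = sorted(scores.values())
    n = len(values)
    ranks = {}
    for key, v in scores.items():
        at_or_below = sum(1 for x in values if x <= v)
        ranks[key] = round(100 * at_or_below / n)
    return ranks

claim_scores = {"C1": 0.02, "C2": 0.75, "C3": 0.40, "C4": 0.90}
print(percentile_ranks(claim_scores))
# {'C1': 25, 'C2': 75, 'C3': 50, 'C4': 100}
```

“This claim is in the 100th percentile of risk” is a statement a pharmacy reviewer can prioritize without any statistics background.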
Step 5 Implementation, with manual review
Once you have built your ETL process and completed the analysis, you will want to implement the output in pragmatic ways. This means packaging the output in simple language, delivered through channels the end user finds easy to use. These could be simple lists of claims, patients, or doctors to review for fraud, or a simple pharmacy benefit design change. Whatever the suggestion, ensure it is presented in a way that meets the needs of the end user.
Where possible, build in manual review to verify that your analytic model is working. Trying to address 100% of the challenge through analysis alone will often be onerous; invoke the Pareto 80/20 principle. It’s usually better to use a combination of analytics and manual review than to spend a crazy amount of time building an enormous (and error-prone) model when some simple human review is sufficient.
Step 6 Refinements – be the “cat”
Always build your model to allow refinements to be incorporated: in the data inputs, in the model construct, or in the output generated. Building and applying predictive models in practice can be challenging. You will likely need to update your FWA identification algorithm regularly, as this is a cat-and-mouse game between fraudsters and your team in a fast-evolving healthcare landscape.
Also be sure to document what you do in enough detail, in case any FWA you identify leads to legal proceedings.