Last week our team attended the Health Technology Assessment World Conference in London. The attendees were mostly from pharmaceutical companies and health technology assessment (HTA) agencies, along with some folks from countries considering conducting HTAs. There were also representatives from regulatory agencies, so it was a great mix of people with a wide range of interests. Our Chief Analytics Officer, Kermit Daniel, gave a presentation on “How to Build a Model to Predict Reimbursement Decisions by Major HTA/CER Agencies,” describing one of the most exciting projects we have in the works right now. In the presentation, he walked the audience through our goal of predicting whether or not a reimbursement agency will recommend adding a drug to its list. As you can imagine, many attendees cornered him after the talk for more details and discussion; when completed, this will be some pretty powerful stuff!
We thought this would be a great topic of interest for our readers, and welcome your thoughts.
The Process of Building the Model
To build the model, we are applying some reasonably sophisticated statistical techniques to our large repository of reimbursement and other data to identify the factors that most strongly influence whether or not agencies recommend reimbursement. Our goal is to use this information to predict whether a drug that has received regulatory approval will be recommended for reimbursement.
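To make the idea concrete, here is a minimal sketch of the kind of approach we mean: a logistic regression fit to simulated submissions. Everything below is invented for illustration – the variable names, the data, and the model form are hypothetical, not our actual model or database:

```python
# Hypothetical sketch only: variables, data, and model form are invented
# for illustration; they are not our actual model or database.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 500

# Invented explanatory factors a submission might carry:
qaly_gain = rng.normal(0.3, 0.1, n)   # incremental health benefit
cost_k = rng.normal(30, 10, n)        # cost per QALY, in thousands
evidence = rng.uniform(0, 1, n)       # trial evidence quality score

# Simulate a consistent "agency": decisions driven by observable factors.
logit = 5 * qaly_gain - 0.1 * cost_k + 2 * evidence
recommended = (logit + rng.normal(0, 0.5, n) > 0).astype(int)

# Fit the statistical model to the historical decisions...
X = np.column_stack([qaly_gain, cost_k, evidence])
model = LogisticRegression(max_iter=1000).fit(X, recommended)

# ...then predict the probability of a positive recommendation
# for a new, approved drug (hypothetical numbers).
p = model.predict_proba([[0.35, 25, 0.8]])[0, 1]
```

The fitted coefficients play the role of the “factors that most strongly influence” a recommendation; with real submission data the variable list would of course be far longer.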
The presentation was organized around four important questions to ask before getting started:
- Why build a model?
- Can we build a model?
- How can we build a model?
- How will we know if we built a good model?
Why Build a Model?
This might seem pretty obvious; after all, we all want to know the future. While our main interest in building a model is predicting HTA agencies’ decisions, a good model will also allow us to disentangle the effects of multiple influences. This is especially important when the individual influences themselves are related to one another. Ultimately, the goal is to provide actionable guidance on how to increase the likelihood of a positive recommendation.
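A toy, self-contained illustration of the “disentangling” point (all numbers and variable roles invented): when two influences move together, a pairwise look can credit the wrong one, while a joint model separates them.

```python
# Toy illustration (all numbers invented): two influences that move
# together, where only one actually drives the outcome.
import numpy as np

rng = np.random.default_rng(1)
n = 2000
x1 = rng.normal(size=n)                          # the real driver
x2 = 0.8 * x1 + rng.normal(scale=0.6, size=n)    # correlated bystander
y = 2.0 * x1 + rng.normal(scale=0.5, size=n)     # outcome depends on x1 only

# A pairwise look makes x2 appear strongly related to the outcome...
naive_corr = np.corrcoef(x2, y)[0, 1]            # substantial correlation

# ...but a joint model attributes the effect where it belongs:
X = np.column_stack([np.ones(n), x1, x2])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)     # [intercept, x1 effect, x2 effect]
# beta[1] comes out close to 2, beta[2] close to 0.
```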
Can We Build a Model?
Fundamentally, whether or not we can build a model hinges on whether agencies actually behave in ways that can be described by a model we can estimate. This mostly comes down to consistency and transparency. As long as agencies behave consistently, we can identify the resulting patterns in the data. And the more consistency, the better – to the extent agencies make the same decisions when presented with the same information, our job is easier.
But we also need transparency, in the sense that decisions must be based on factors we can observe. It doesn’t help much if agencies are making decisions consistently, but on the basis of factors we can’t see or infer. Fortunately, agencies do seem to make fairly consistent decisions (see below) and our database contains an enormous amount of information about HTA submissions and decisions – collected consistently across literally hundreds of variables.
* Based on 94 drugs reviewed by at least two agencies between Jan. 2005 and Feb. 2013.
How Can We Build a Model?
This mostly comes down to the variables we include, specifically:
- What aspects of the decision we want to predict
- What we believe influences decisions
The first part is easy: We want to predict whether an HTA agency recommends reimbursement, and if so, the extent to which it adds restrictions to those already present on label. The harder decisions are about the explanatory factors to include. So where do we start?
We know that we want to include information that captures the things agencies base their decisions on – reflecting their objectives, constraints, and the information they are presented with. To do that there are basically two approaches:
- Select variables based on logic and what we know about agencies’ objectives and how they operate
- Use exploratory analysis to let the data guide our model-building
The first is ultimately the best approach. Essentially (and not surprisingly), understanding the basic structure of how agencies make decisions is a huge advantage in estimating a statistical model of decision-making.
The second approach can be helpful, but it is dangerous. Briefly, the danger lies in creating a model that fits the data well without really explaining anything or having the ability to predict outcomes. Because the danger may not be obvious, we’ll devote a blog to this point in the near future.
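A toy demonstration of that danger (entirely simulated; not our data): with enough candidate variables and a small sample, a model can fit the estimation data almost perfectly while predicting new cases worse than a simple model built on the one factor that actually matters.

```python
# Simulated demonstration (not our data): letting many candidate variables
# into a small sample produces a great in-sample fit and poor predictions.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(2)
n_train, n_test, n_noise = 40, 200, 35

def make_data(n):
    """One real driver plus many irrelevant variables."""
    x_real = rng.normal(size=(n, 1))
    x_noise = rng.normal(size=(n, n_noise))
    y = 1.5 * x_real[:, 0] + rng.normal(scale=1.0, size=n)
    return np.hstack([x_real, x_noise]), y

X_tr, y_tr = make_data(n_train)
X_te, y_te = make_data(n_test)

overfit = LinearRegression().fit(X_tr, y_tr)        # throws everything in
honest = LinearRegression().fit(X_tr[:, :1], y_tr)  # only the real driver

r2_train = overfit.score(X_tr, y_tr)         # near-perfect in-sample fit...
r2_test = overfit.score(X_te, y_te)          # ...collapses on new data
r2_honest = honest.score(X_te[:, :1], y_te)  # the simple model holds up
```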
How Will We Know if We Built a Good Model?
If possible, it is always good to test the model on data not used to estimate it. If you populate your model with other data and get a completely different result, your model isn’t cutting it… back to the drawing board.
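In practice this out-of-sample check can be built right into the workflow, for example with cross-validation. A generic sketch on simulated data (not our model):

```python
# Generic sketch on simulated data (not our model): compare accuracy on
# the estimation sample with accuracy on folds the model never saw.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(3)
n = 300
X = rng.normal(size=(n, 3))
y = (X @ np.array([1.0, -1.0, 0.5]) + rng.normal(scale=0.8, size=n) > 0).astype(int)

# In-sample accuracy is an optimistic yardstick...
in_sample = LogisticRegression().fit(X, y).score(X, y)

# ...so score the model on held-out folds instead (5-fold cross-validation).
held_out = cross_val_score(LogisticRegression(), X, y, cv=5).mean()
```

If the held-out score falls far below the in-sample score, the model isn’t generalizing – back to the drawing board.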
For us the standard has to be even higher. There was a lot of talk at the conference about the need to incorporate payer perspectives into drug development. Our model will be one way that happens, and another step in our goal of providing actionable guidance to drug companies – with the ultimate goal of saving them time and money, and helping to bring important drugs to market faster.