The health technology assessment (HTA) process takes time, but how much delay in market access should you expect? The time an HTA agency takes to issue a reimbursement decision can be a major factor in the overall market access timeline. HTA agencies publish timelines to set expectations, but those timelines are not always met, and decisions that run long delay market entry. For the manufacturer, this can have significant business repercussions, including a reduced ability to gain market share and a longer path to revenue generation. So, when formulating a market access strategy, understanding the observed time to decision versus the published timelines can help to plan and set appropriate expectations. We also want to identify other important factors that take place after a decision is issued and have the potential to affect market access.
Biotechnology companies often do not have market access resources comparable to those of large pharma. While this can pose a challenge, there is considerable opportunity for biotechs to reduce this disadvantage by using health technology assessment (HTA) data in innovative ways to inform and drive their market access decision-making.
With our innovative technology platform and focus on data quality, we have spent the past seven years helping biotech companies understand the market access landscape for bellwether HTA agencies. Throughout our work with biotechnology companies, we continually encounter four assumptions about the HTA process and market access:
Predictive modeling can be a powerful tool for understanding how multiple factors contribute to an event. As outlined in previous posts, Context Matters built a model using variables from oncology assessments by the Scottish Medicines Consortium (SMC) to identify the variables most influential in a positive reimbursement decision by that agency, and used the model to demonstrate the impact that economics (i.e., patient access schemes and ICERs) have in predicting positive SMC decisions for oncology drugs. However, as discussed in our previous post, Understanding Predictive Modeling, not all models are created equal. One measure of a predictive model’s quality is its ability to deliver actionable results and insights: if a model cannot inform the initial hypothesis and provide a path forward, it is just an academic exercise. But what does it mean for a predictive model to be “useful” and to deliver “actionable results”? What are some of the implications and use cases for predictive models of Health Technology Assessment (HTA) agency decisions?
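To make the modeling idea concrete, here is a minimal sketch of the general technique: a logistic regression that scores the probability of a positive decision from economic variables. The data, feature names, and coefficients below are entirely illustrative assumptions, not the actual Context Matters model or real SMC assessment data.

```python
import math

# Hypothetical training rows: (has_patient_access_scheme, ICER in tens of
# thousands of GBP per QALY), label 1 = positive reimbursement decision.
# Values are invented for illustration, not drawn from real SMC assessments.
X = [(1, 2.0), (1, 3.0), (0, 6.0), (0, 5.5), (1, 2.5), (0, 7.0), (1, 4.0), (0, 6.5)]
y = [1, 1, 0, 0, 1, 0, 1, 0]

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# Fit the logistic regression with plain stochastic gradient descent.
w = [0.0, 0.0]  # weights for the two features
b = 0.0         # intercept
lr = 0.1
for _ in range(5000):
    for (x1, x2), label in zip(X, y):
        p = sigmoid(w[0] * x1 + w[1] * x2 + b)
        err = p - label
        w[0] -= lr * err * x1
        w[1] -= lr * err * x2
        b -= lr * err

def predict(x1, x2):
    """Probability of a positive decision for a hypothetical submission."""
    return sigmoid(w[0] * x1 + w[1] * x2 + b)

# On this toy data, a drug with a patient access scheme and a low ICER
# scores higher than one without a scheme and a high ICER.
print(predict(1, 2.0) > predict(0, 7.0))  # True
```

The fitted weights indicate how strongly each variable pushes the prediction toward a positive or negative decision, which is the sense in which such a model can identify the "most influential" factors.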
On March 16, 2016, the National Institute for Health and Care Excellence (NICE) approved some major changes to the UK’s Cancer Drugs Fund (CDF). The most significant of these changes is that NICE will now be evaluating all oncology drugs approved by the European Medicines Agency (EMA), including those previously funded through the CDF. Prior to this change, the CDF had independently conducted its own reviews. Now, the CDF will only serve as a funding source to be used at NICE’s discretion. This reorganization was precipitated by years of budget difficulties and numerous disagreements about the respective roles of NICE and the CDF.
NICE has started to release oncology reviews under its revised technology appraisal process, with many additional reviews in development and expected to be released in the next few months. In this post, we will discuss those reviews and what they may indicate about the future of oncology drug funding in the UK.
Healthcare costs, specifically those related to prescription drugs, are a major election issue for many Americans. According to the August 2016 Kaiser Health Tracking Poll, two-thirds of Americans say Medicare access and healthcare affordability are top election issues, and 53 percent of voters say that prescription drug costs in particular are top priorities. Many of the efforts to address these issues concern the use of comparative efficacy and cost-effectiveness research. These forms of research, widely integrated into the healthcare systems of many countries in Europe and elsewhere, are only recently gaining traction in the United States. Policy efforts and election outcomes could have a major impact on how these forms of research are used in the future.
In predictive modeling, it is essential to evaluate how well the model fits the underlying data. The closer the model is to fitting the underlying data, the stronger the model. We recently built a model to predict Scottish Medicines Consortium (SMC) reimbursement decisions for oncology health technology assessments (HTAs). Our binary model, that is, the model that attempted to predict a positive versus negative SMC reimbursement decision, correctly predicted 75% of the oncology decisions in the data set. By statistical standards, 75% is considered a “good” level of prediction. But what happened in the other 25% of assessments? Examining where the model “got it wrong” is an important tool for understanding the data and knowing how to apply the learnings and insights.
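The evaluation step described above can be sketched with a confusion matrix, which splits the errors into false positives and false negatives rather than reporting accuracy alone. The decision vectors below are invented to illustrate the calculation; they are not real assessment results.

```python
# Hypothetical actual vs. predicted SMC decisions (1 = positive, 0 = negative).
# These values are illustrative only, not real assessment data.
actual    = [1, 1, 0, 1, 0, 0, 1, 1, 0, 1, 0, 1]
predicted = [1, 1, 0, 0, 0, 1, 1, 1, 0, 1, 1, 1]

# Confusion matrix: break the predictions down by type of hit and miss.
tp = sum(1 for a, p in zip(actual, predicted) if a == 1 and p == 1)  # true positives
tn = sum(1 for a, p in zip(actual, predicted) if a == 0 and p == 0)  # true negatives
fp = sum(1 for a, p in zip(actual, predicted) if a == 0 and p == 1)  # false positives
fn = sum(1 for a, p in zip(actual, predicted) if a == 1 and p == 0)  # false negatives

accuracy = (tp + tn) / len(actual)
print(f"accuracy={accuracy:.2f}, false positives={fp}, false negatives={fn}")
# prints: accuracy=0.75, false positives=2, false negatives=1
```

The distinction matters for interpretation: a false positive (predicting reimbursement that is refused) and a false negative (predicting refusal for a drug that is reimbursed) point to different gaps in the model, so examining each group separately is what turns the remaining 25% into insight.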
The Wall Street Journal recently reported that Bristol-Myers Squibb stock dropped dramatically on August 5th, 2016. This occurred after clinical trial results indicated that BMS’s immunotherapy drug OPDIVO® (nivolumab) failed to demonstrate a clinical improvement compared to chemotherapy in patients with newly-diagnosed lung cancer.
The financial market reaction to the clinical trial results highlights the growing importance of comparative efficacy research, both for obtaining approval and market access and, increasingly, for remaining competitive and driving profits. The impact of this announcement shows that the stakes are higher than just obtaining regulatory approval: clinical trial results can have immediate and significant consequences before any regulatory decisions are even made. Here we discuss the impact of comparative efficacy research on market access, pricing, profitability, and competition.