
Assessing the performance of health technology assessment (HTA) agencies: developing a multi-country, multi-stakeholder, and multi-dimensional framework to explore mechanisms of impact

Abstract

Background

Health technology assessment (HTA) agencies have an important role to play in managing the rising demands on health systems. However, creating and running such agencies potentially diverts resources from frontline services. A large number of studies address the question of ‘what is the impact of HTA?’, but this literature is heterogeneous in several respects: the purpose of the study, the definition of HTA, the definition of impact, and the scope and rigour of evaluations. Our study seeks to address several limitations in this literature by exploring the mechanisms of impact of an HTA agency. In doing so, we consider HTA as an institution rather than a knowledge product and build an impact evaluation framework from an international, multi-stakeholder and multi-dimensional perspective.

Methods

We conducted 9 key informant interviews with experts from the international HTA community. We addressed several questions, informed by existing frameworks of impact within the literature, to understand their perspectives on the mechanisms of impact of an HTA agency. We analysed the data using logic modelling and impact mapping as tools to understand and visualise mechanisms of change.

Findings

Our impact mapping highlights several distinct, but not necessarily mutually exclusive, mechanisms through which the overall impact of an HTA agency is achieved. These are: the effective conduct of HTA studies; effective use of HTA in agenda-setting and policy formulation processes; effective engagement and external communications; good institutional reputation and fit within the healthcare and policy-making system; effective use of HTA as a tool for the negotiation of health technology prices; and the effective implementation of policy change regarding health technologies. We also identify indicators of these effects.

Conclusions

Our findings and resulting evaluation framework complement and add to existing literature by offering a new perspective on the mechanisms by which HTA agencies generate impact. This new perspective considers HTA as an institution rather than a knowledge product, is international, multi-dimensional, and includes multi-stakeholder views. We hope the analysis will be useful to countries interested in managing HTA performance.

Background

Health technology assessment (HTA) is a field of multidisciplinary research that aims to inform policy decisions and clinical practice around the use, introduction and reimbursement of health technologies. It uses specific methods to examine the health and social value of a health technology, as well as the cost implications and ethical issues related to its use, in a systematic, transparent, unbiased and robust manner to inform decision-making [1, 2]. The overall goal of HTA research is to promote an ‘equitable, efficient, and high-quality health system’ [2]. HTA research can be conducted both by private (e.g. ICER in the United States) and by public actors, and the evidence HTA provides can inform decision makers about how best to ensure the health system is equitable, efficient, and of high quality.

While HTA research conducted by private actors can be informative to policy makers, several countries prefer to rely primarily on advice from publicly-funded national ‘HTA agencies’ that aim to serve the public interest. Two examples of such agencies are the HTA agencies of England (the National Institute for Health and Care Excellence, or NICE) and Thailand (the Health Intervention and Technology Assessment Program, or HITAP). Although the authority and responsibility given to HTA agencies varies from country to country, their prevalence across widely differing health systems is indicative of the power of HTA to add value across different contexts.

The launch of the UN’s Sustainable Development Goals (SDGs) in 2015, with their target to ‘achieve universal health coverage (UHC) […] for all’ [3], has drawn further attention towards the establishment of national HTA agencies, as they can offer a pathway to achieving and sustaining UHC even in severely resource-constrained environments. In particular, HTA’s rationale of directing resources towards health technologies that are ‘cost effective’—i.e. those that lead to large improvements in population health relative to the cost involved—allows nascent UHC initiatives to rapidly improve population health even under tight budgets.
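To make this cost-effectiveness rationale concrete, value for money is commonly summarised as an incremental cost-effectiveness ratio (ICER), illustrated below with purely hypothetical numbers; the QALY metric and the notion of a willingness-to-pay threshold are standard HTA conventions rather than details drawn from this study.

$$\text{ICER} = \frac{C_{\text{new}} - C_{\text{comparator}}}{E_{\text{new}} - E_{\text{comparator}}} = \frac{12{,}000 - 4{,}000}{2.1 - 1.7} = 20{,}000 \text{ per QALY gained}$$

A technology whose ICER falls below the amount a health system is willing to pay per QALY gained would typically be considered cost-effective; otherwise the same resources could buy more health elsewhere.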

However, with cost-effectiveness being such a crucial pillar of HTA, questions on the value added from investments into HTA agency capacity are inherently valid. After all, public resources invested in HTA agencies could have been diverted to frontline medical services offering much more ‘tangible’ health outcomes. Addressing the question of ‘value offered’ by HTA agencies is however not straightforward for a number of reasons. One reason for this is that existing HTA agencies around the world are highly heterogeneous, each operating in a different context, within different systems and under different budgets. As a result, the emerging impacts and externalities of HTA agencies may vary from context to context, highlighting the need to understand the question of how such agencies have impact.

To further position our study, we first discuss some key points from the literature to highlight the heterogeneity relating to the question “what is the impact of health technology assessment?”. The key points discussed put into context existing practice in the evaluation of impacts of HTA. We highlight a number of gaps in existing methodologies and approaches to the evaluation of HTA impact.

The heterogeneous nature of evaluating the impact of health technology assessment

There are a large number of studies relating to the question “What is the impact of health technology assessment?”. However, since this question is rather poorly defined, the literature is quite heterogeneous. To review it in detail would require more space than we have here and, in any case, others have been here before us [4,5,6,7]. A good starting point for accessing this literature is the study by Gerhardus et al. [6], which provides a helpful framework and clear summary of the literature up to about 2006. Another useful resource is Raftery et al. [4], which gives a detailed description of the various methodologies which have been deployed to measure the impact of health research with a view to evaluating the impact of HTA. Rather than summarising these papers here, we highlight their main contributions to further position our study.

In what follows, we highlight some points for discussion around the heterogeneity of this body of literature. Specifically, we highlight four sources of heterogeneity: (1) variations in the purpose of the study; (2) differences in interpretation of “health technology assessment”; (3) differences in interpretation of “impact”; and (4) variability in scope and level of rigour of evaluation studies. We conclude by highlighting what we think we can and cannot learn from this literature to position our study.

Purpose of study

One source of heterogeneity in the literature is the purpose of the study. Some studies are relatively modest in aim, and are essentially descriptive, concerned with providing basic information about a sample of HTA reports and their findings [8,9,10,11,12,13]. Others have a stronger summative and analytical focus on the question of whether the investment in a health technology was worthwhile, in that they use either quantitative or qualitative data to explore the context in which the value is realised in more detail [14,15,16]. Yet other studies have a more formative purpose: how can the HTA system do a better job of delivering impact [17,18,19]? The nature of the evaluation team also varies. Studies by external assessors, such as those commissioned from independent committees or pools of experts to evaluate the impact of HTA, are often summative in nature, whereas studies by HTA or health system insiders are often descriptive or formative.

Definition of the term “health technology assessment”

Further heterogeneity stems from ambiguity in the use of the term “health technology assessment”. Some researchers frame the question as being one of the impact of “reports” (or “guidance” or “advice”) from an HTA agency [6, 8,9,10,11,12,13, 16, 18,19,20,21]. Others frame the question as being one of the impact of HTA “research” [4, 14, 18, 22,23,24,25,26]. These are not necessarily the same. “Reports” may be based on an overview of a relatively small body of evidence generated elsewhere, whereas “research” implies a piece of work which is of publishable standard. Research itself may not necessarily lead to a report (for example if its main conclusion is that the status quo should be maintained). The tendency to frame the impact of HTA as being about the impact of some sort of knowledge product (whether report or research) is helpful for tracing impacts (as the knowledge product provides a source to which impact can be tracked) but arguably means that the more diffuse benefits of a visible HTA presence (e.g. encouraging evidence-based practice; legitimising discussions about cost-effectiveness) are relatively neglected.

Definition of impact

Studies also differ in their interpretation of “impact” and conceptualisation of how impact occurs. For example, Gerhardus et al. [6] offer a six-stage model of impact which we paraphrase here:

  1. Awareness: the relevant stakeholder must know of the HTA report.

  2. Acceptance: the relevant stakeholder must see the HTA report as valid and a legitimate basis for action.

  3. Policy process: the policy process should explicitly utilise the HTA report.

  4. Policy decision: the policy decision should cite the HTA report.

  5. Practice: there should be “clear and measurable” changes in clinical practice in line with the policy decision and thus the report.

  6. Outcome: health and economic outcomes should be realised on the basis of the changes in practice.

This six-stage model suggests that an ideal evaluation of the impact of HTA would provide evidence at all stages, and thus show that there was a clear chain from HTA study to health and economic outcomes. However, most studies omit some stages of this chain; some leave off the latter stages and some skip intermediate stages altogether. For example, studies which we designate as “model-based” studies [12, 16, 23] effectively skip the implementation chain almost entirely and provide estimates of health and financial benefits based on the HTA agency’s own cost-effectiveness studies, whether or not they have led to policy changes. This predominant focus on the endpoint of the chain also characterises other types of evaluations, which appraise outcomes after changes in policy and practice have occurred, rather than assuming that they will take place. This applies to post-market field evaluations [20], where health technologies are assessed under real-world circumstances, and to studies that retrospectively analysed the longitudinal correlation between investments in health research and disease burden [24, 25]. Other studies integrate changes in policy and practice within the assessment, using varied approaches. For instance, primary or secondary data have been used to ascertain the extent to which preliminary HTA findings are actually implemented, thus adjusting pre-implementation model-based estimates to account for actual uptake and coverage [27, 28]. In other cases [8, 9, 11, 13, 19], analyses have assessed to what extent actual clinical practice and usage patterns adhered to guidance issued or appropriate use criteria. Others [29] have focussed instead primarily on the time lag between HTA appraisal processes, policy decisions, and patient access to approved medicines.
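As a minimal sketch of how the six-stage model can be used to audit an evaluation's coverage, the snippet below encodes the stages as an ordered list and reports which stages a given study leaves unexamined; the two study profiles are hypothetical illustrations of the "model-based" and guidance-adherence designs mentioned above, not descriptions of the cited studies.

```python
# Six stages of HTA impact, paraphrased from Gerhardus et al. [6].
STAGES = ["awareness", "acceptance", "policy process",
          "policy decision", "practice", "outcome"]

def coverage_gaps(stages_covered):
    """Return the stages of the impact chain that an evaluation does not address."""
    covered = {s.lower() for s in stages_covered}
    return [s for s in STAGES if s not in covered]

# Hypothetical examples: a "model-based" study that jumps straight to outcomes,
# and a guidance-adherence study that stops at changes in practice.
model_based_study = ["outcome"]
adherence_study = ["awareness", "acceptance", "policy process",
                   "policy decision", "practice"]

print(coverage_gaps(model_based_study))   # the stages skipped by the model-based study
print(coverage_gaps(adherence_study))     # ['outcome']
```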

Scope and rigour of evaluations

Studies also differ in terms of their scope and level of rigour. There is a trade-off between scope and level of rigour: the most rigorous or in-depth analyses are often those which focus exclusively on a single or narrow set of HTA recommendations, and are published in clinical journals for a particular medical sub-speciality [30, 31]. By single or narrow set of HTA recommendations, we mean that studies often have, without clear explanation, focused on the impact of specific recommendations, rather than the impact of the HTA agency as a whole. At the other end of the scale, studies which focus on the impact of a broader set of HTA reports [11, 12, 28, 32, 33] are inevitably somewhat broad-brush.

From our perspective an effective study design would use a mix of methods, with a quantitative component using state-of-the-art statistical techniques to detect changes in system behaviour [13, 19, 31], and a qualitative component which draws on knowledge from a wide range of system stakeholders [10, 16]. Such a study design would also have a plausible answer to the question of how changes in system behaviour are attributed to the HTA agency (for example by comparing between territories where the agency’s jurisdiction does and does not hold), and would fully account for all residual uncertainties and list all background assumptions [12, 13, 16, 26, 34].
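One plausible form for the quantitative component is an interrupted time series (segmented regression) analysis of routinely collected data, such as monthly prescribing volumes before and after guidance is issued. The sketch below uses simulated data and assumed parameter values purely for illustration; it is not drawn from any of the cited evaluations.

```python
# Illustrative interrupted time series (ITS) regression on simulated monthly data.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
months = np.arange(48)                      # 4 years of monthly observations
guidance_month = 24                         # hypothetical month guidance was issued
post = (months >= guidance_month).astype(int)
time_since = np.where(post == 1, months - guidance_month, 0)

# Simulated prescribing volume: baseline trend, then a drop in level and slope after guidance.
volume = 1000 + 2 * months - 150 * post - 5 * time_since + rng.normal(0, 20, 48)

df = pd.DataFrame({"volume": volume, "month": months,
                   "post": post, "time_since": time_since})

# Segmented regression: 'post' captures the level change, 'time_since' the slope change.
model = smf.ols("volume ~ month + post + time_since", data=df).fit()
print(model.params[["post", "time_since"]])
```

A comparison territory outside the agency's jurisdiction, modelled with the same specification, is one way to strengthen the attribution argument discussed above.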

In terms of study scope, it is also worth noting that most studies addressing the value of HTA are context-specific, in that they generally focus on a unique HTA agency or national system. Conversely, in a few cases, studies adopt a comparative perspective. These either contrast the performance of multiple HTA bodies operating within a country [29], or generate insights by applying a common evaluative framework across multiple national settings, though usually narrowing down the number of HTA recommendations considered [27, 30].

Summary of key points from the literature

We know, from the studies discussed in the previous section, that HTA studies have been conducted in several countries and have in many cases influenced clinical practice, and that there is also reasonably plausible evidence that—especially in medium- to large-sized countries—the benefits from implementing individual HTA recommendations can exceed the costs of performing the individual HTA by a substantial margin [15, 16, 20, 26,27,28, 35]. Moreover, the literature offers a rich resource of practical examples on how countries can evaluate their own HTA systems. However, we do not know everything. As highlighted above, from the standpoint of methodological perfection even the most rigorous studies have failings; there are significant gaps in terms of the coverage of time and space in the literature; and, as Raftery et al. [18] highlight, the available empirical literature is unlikely to be a random sampling of the entire human experience of HTA and most likely focusses on settings where HTA has been relatively successful.

Study aim

In this study, in the interests of moving forward, we will focus on addressing two main limitations in the knowledge base. Firstly, current studies are typically conducted at the country level, and variations in reporting limit the ability to compare across countries. However, cumulative knowledge building would be greatly advanced if there were at least a few minimally accepted indicators for evaluating the impact of an HTA agency which could also be used for international comparisons. Secondly, available frameworks for the evaluation of HTA reports (such as the Gerhardus et al. [6] framework above) tend to be somewhat “linear” (in the words of Raftery et al. [4]) and focussed on HTA as a ‘knowledge product’ leading inexorably to change in health service practice and thus health and economic outcomes. However, this is not really compatible with what has been observed in several contexts, where an HTA agency operates at the centre of a system and interacts with various stakeholders, as presented in the introduction. As a result, in this study we aim to explore the pathways to, or mechanisms which lead to, the impact of an HTA agency from an institutional perspective. This perspective allows us to explore rules of behaviour (both formal and informal) that influence the impact of HTA agencies [36]. The aim of doing so is to build an HTA agency impact evaluation framework that complements existing research in this space. Furthermore, such an approach allows us to undertake an in-depth analysis not only of an HTA agency’s structure but also of its broader institutional surroundings and how these contribute to its value added. We believe that this study will be useful not only for assessments of organisation-level HTA impact, but that it can also help guide the design and development of HTA systems globally.

Methods

We conducted a qualitative study using key informant semi-structured interviews to capture the perspectives of 9 senior figures in the international HTA community. We focused on capturing their perspectives on the mechanisms of impact of having a national HTA agency. Interviewees have backgrounds and experience in several different national contexts, including Australia, Canada, Thailand, and the United Kingdom. As a result of this international focus, participants were involved with different types of HTA agencies, including those with a distinct decision-making capacity (e.g. in Australia), while other participants reflected HTA agencies where HTA report and guideline development are the primary focus. Many of the participants, whilst being based within a specific HTA agency, had international HTA agency experience and as such were able to reflect more critically on the impact of HTA agencies at a more international and strategic level. Moreover, participants reflected a variety of perspectives within the HTA community, including academics, HTA specialists, and those with a clinical background.

Ethics

The relevant ethical approval was granted by the ethics committee of the lead author’s institute. In keeping with the relevant ethical guidelines, participants received a participant information sheet informing them of the purpose and requirements of the study. At the beginning of each interview, the interviewer asked for verbal consent for the interview to be audio recorded, and each interviewee’s identifying details were anonymised.

Participant recruitment

Participant recruitment used a purposive sampling approach in order to select respondents based on their ability to provide the needed information [37]. We recruited respondents through contacts within the project team, and subsequently used a snowball sampling technique, which helped us to recruit respondents who otherwise would not have been accessible [38]. The national contexts within which interviewees had specific experience were predominantly contexts in which HTA agencies are well developed or had been in place for a relatively long period of time. We anticipated that respondents from these countries would have a rich understanding of the impact of HTA agencies, key stakeholders, and the factors, or mechanisms, influencing the overall impact of HTA agencies over an extended period of time.

Development of an interview guide

We developed a semi-structured interview guide informed by our learning from several of the frameworks we discussed in the background section. We primarily informed our approach using the model provided by Gerhardus et al. [6] which outlines a six-stage model of impact. We also used insights from the Payback Framework, a popular framework discussing the impact of health services research [39]. The resulting interview guide focussed on asking about interviewee experiences of impact in terms of several effects of HTA from an institutional perspective. We list these below:

  i. The use and effects of HTA studies (in terms of knowledge development and future research) in HTA agencies;

  ii. The effects on policy and decision-making processes of having an HTA agency;

  iii. The effects on policy resulting from those policy and decision-making processes;

  iv. The effects on the health sector of having an HTA agency;

  v. The effects on health outcomes;

  vi. The wider economic effects of having an HTA agency.

We used the responses to these questions to probe how and why participants thought that effect(s) had occurred, focussing on the factors that they thought influenced the realisation of those effect(s).

Data collection and analysis

Interviews were conducted by a project team member via Skype or telephone and were audio recorded subject to verbal participant consent. The data collection phase was followed by a two-stage data analysis phase using logic modelling and impact mapping.

Logic modelling

Logic models were chosen as the tool for data analysis because of their ability to visualise pathways to change and their established use within the programme evaluation field. Logic modelling is based on an understanding of how programme activities contribute to changes in outcomes and overall impact for programme stakeholders [40]. Logic models are word-and-arrow diagrams that reflect the inputs, activities, outputs, and outcomes of a change initiative, and their format is flexible so as to allow for the multi-dimensional and multi-stakeholder perspective adopted.

We used the interview data to construct logic models for each interviewee. We utilised the logic model format to visualise how each interviewee perceived the specific effects of HTA agencies as well as how each of those effects is realised. This approach resulted in 9 individual logic models, an illustrative example of the structure of which is shown in Fig. 1. The logic model depicts a ‘generic’ process within an HTA system whereby HTA studies are conducted, from which HTA agencies make HTA recommendations aimed at influencing policy decision-making, and subsequently policy changes are implemented. To account for the multi-dimensional and multi-stakeholder perspective, mechanisms and effects are illustrated throughout and at the bottom of the logic model, respectively.

Fig. 1 Illustrative example of a logic model structure
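For readers who prefer a concrete rendering, the snippet below sketches the kind of structure an interviewee-level logic model took: the generic stages described above, each of which can be annotated with the mechanisms and effects raised in an interview. The stage labels follow the generic process in the text, while the annotations are hypothetical placeholders rather than actual interview data.

```python
from dataclasses import dataclass, field

@dataclass
class Stage:
    """One node in an interviewee-level logic model."""
    name: str
    mechanisms: list = field(default_factory=list)   # how the stage is thought to work
    effects: list = field(default_factory=list)      # effects attributed to the stage

# Generic process: HTA studies -> recommendations -> policy decision-making -> implementation.
logic_model = [
    Stage("HTA studies conducted",
          mechanisms=["rigorous methods"],
          effects=["knowledge development"]),
    Stage("HTA recommendations issued"),
    Stage("Policy decision-making informed",
          effects=["more transparent funding decisions"]),
    Stage("Policy change implemented",
          effects=["changes in clinical practice"]),
]

# Print the chain of stages in order, mirroring the word-and-arrow diagram.
for upstream, downstream in zip(logic_model, logic_model[1:]):
    print(f"{upstream.name} -> {downstream.name}")
```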

Impact mapping

Using the logic model visualisations of pathways to impact of HTA agencies facilitated a further phase of analysis in which we conducted an impact mapping exercise. Specifically, we collated common effects mapped within the individual logic models and, using a backward mapping approach, identified how individual effects were perceived to contribute to the overall success of the HTA agency. The reason for doing so was that, whilst interviewees gave their individual perspectives on impact and how it occurs, there was considerable synergy in the effects realised by the HTA agencies and systems within which they worked, despite the fact that different HTA systems were represented in the study. We reflect the learning from this exercise in a value tree, the structure of which is illustrated in Fig. 2. A value tree is a hierarchical map depicting an overall objective with a subsequent layer of sub-objectives (mechanisms of impact) and attributes (which we refer to in this study as ‘indicators of effects’) of those mechanisms of change for a given situation. We discuss the learning from this two-stage analysis in the following results section.

Fig. 2 Value tree reflecting impact mapping exercise structure
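As a minimal sketch of the collation and backward mapping step, assuming each logic model has been reduced to a list of effect statements, the snippet below counts how often each effect recurs across the individual models and groups the effects under the sub-objective an analyst judges each one to support; both the effect statements and the mapping are hypothetical stand-ins for the analytical judgement applied in the actual exercise.

```python
from collections import Counter, defaultdict

# Hypothetical effect statements extracted from individual logic models (one list per interviewee).
logic_model_effects = [
    ["more transparent funding decisions", "increased rigour in decision-making"],
    ["more transparent funding decisions", "informed public debate"],
    ["increased rigour in decision-making", "informed public debate"],
]

# Analyst-assigned mapping from each effect back to the sub-objective it supports (hypothetical).
effect_to_subobjective = {
    "more transparent funding decisions": "effective use of HTA in agenda-setting and policy formulation",
    "increased rigour in decision-making": "effective use of HTA in agenda-setting and policy formulation",
    "informed public debate": "effective engagement and external communications",
}

# Count recurrence of each effect across models, then group effects under sub-objectives.
counts = Counter(effect for model in logic_model_effects for effect in model)
value_tree_branches = defaultdict(list)
for effect, n in counts.items():
    value_tree_branches[effect_to_subobjective[effect]].append((effect, n))

for subobjective, effects in value_tree_branches.items():
    print(subobjective, effects)
```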

Results

The value tree in Fig. 2 depicts the overall objective of having an HTA agency, that is, to achieve a more cost-effective and equitable healthcare system, at the top of the diagram. The subsequent ‘layer’ of the graph, which we refer to as ‘sub-objectives’, describes several distinct but not necessarily mutually exclusive mechanisms through which the overall objective of an HTA agency is achieved. Following that are specific ‘attributes’ of each sub-objective, which in this case reflect indicators of those effects leading to the overall impact of an HTA agency from a multi-dimensional and multi-stakeholder perspective; we refer to these attributes in Fig. 2 as ‘indicators of effects’.

More specifically, this graph serves to emphasise that in order for an HTA agency to meet its overall objective of contributing to more cost-effective and equitable health care, there are several ‘sub-objectives’, or mechanisms, through which that overall objective is achieved. These are: the effective conduct of HTA studies; effective use of HTA in agenda-setting and policy formulation processes; effective engagement and external communications; good institutional reputation and fit within the healthcare and policy-making system; effective use of HTA as a tool for the negotiation of health technology prices; and the effective implementation of policy change regarding health technologies. In our mapping, the subsequent ‘layers’ of the graph reflect examples of distinct aspects of each sub-objective which can act as indicators of whether they have been achieved. Through this we can reflect on both the multi-dimensional and multi-stakeholder nature of the impact of an HTA agency.

To take an example, we can look more closely at the sub-objective, or mechanism, of ‘effective use of HTA in agenda-setting and policy formulation processes’ (shown in Fig. 3). The idea captured here is that through having an HTA agency, HTA can be more effectively used in agenda-setting and policy formulation processes. But how would we know that HTA is being used more effectively as a result of the HTA agency rather than, for example, because there is some stakeholder with a keen interest in using HTA? Indicators that HTA is being used more effectively in policy making processes as a result of the HTA agency might include the representation of the HTA agency in policy decision-making processes. Moreover, due to the increased use of evidence from HTA studies, the effective use of HTA in policy making might also result in increased rigour in decision-making as well as improved transparency in how policy-makers and insurers decide which health technologies to fund. Also, the more effective use of HTA in policy decision-making through an HTA agency might affect policy makers’ perceptions of the healthcare system and of the importance of the role of HTA. However, it is worth noting that our results are populated only with the empirical data collected for this study, and are thus simply illustrative of the vast array of individual effects which can occur as a result of the presence of an HTA agency across different healthcare systems.

Fig. 3 Indicative example of value tree
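To show the shape of the resulting hierarchy in another form, the sketch below encodes the overall objective, the six sub-objectives listed above, and, for the sub-objective discussed in this example, the indicators of effects mentioned in the text. It is a simplified illustration rather than a reproduction of Figs. 2 or 3, and the remaining branches are deliberately left unpopulated.

```python
# Simplified value tree: overall objective -> sub-objectives (mechanisms) -> indicators of effects.
value_tree = {
    "objective": "a more cost-effective and equitable healthcare system",
    "sub_objectives": {
        "effective conduct of HTA studies": [],
        "effective use of HTA in agenda-setting and policy formulation processes": [
            "representation of the HTA agency in policy decision-making processes",
            "increased rigour in decision-making",
            "improved transparency in decisions about which technologies to fund",
            "changed perceptions among policy makers of the role of HTA",
        ],
        "effective engagement and external communications": [],
        "good institutional reputation and fit within the healthcare and policy-making system": [],
        "effective use of HTA as a tool for the negotiation of health technology prices": [],
        "effective implementation of policy change regarding health technologies": [],
    },
}

# Walk the tree to list each mechanism and its indicators (empty branches remain to be populated).
for mechanism, indicators in value_tree["sub_objectives"].items():
    print(mechanism)
    for indicator in indicators:
        print("  -", indicator)
```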

The results of this impact mapping exercise point to lessons and questions which can inform the development of a framework for the evaluation of the impact of HTA agencies. Overall, we have learnt that it is important that evaluations of the impact of an HTA agency acknowledge the multiple mechanisms through which impact can occur. The role an HTA agency plays in activating each of these mechanisms can have several distinct effects. First, this reflects the potential outcomes HTA agencies can generate for specific stakeholders and institutions within the wider healthcare system. The impact mapping exercise therefore highlights that there are multiple and potentially competing effects between stakeholders and institutions in the wider healthcare system that should be adequately acknowledged in evaluations of impact. Second, the impact mapping exercise highlights that the individual effects of a given mechanism can in many cases be used as indicators, or at least point to questions an evaluation might address, with respect to the overall impact of an HTA agency. For example, in understanding how the HTA agency has achieved effective engagement and external communications, an evaluation could address how the work of an HTA agency has challenged social perceptions, and increased awareness and understanding of the challenges facing the healthcare system and the role of HTA in healthcare decision-making, through informing public debate in the media.

To complement these findings, we have developed a framework for the evaluation of the impact of an HTA agency which outlines several questions related to the mechanisms of impact derived from the impact mapping exercise and resulting indicators. This framework is shown in Table 1. The mechanisms of impact displayed in Table 1 map directly to those illustrated in the value tree in Fig. 2. In developing the framework we have provided a list of questions related to each mechanism which the reader can use as a template to guide the conduct of HTA agency impact evaluations.

Table 1 Framework for evaluating the impact of HTA systems

Discussion

Whilst there are a number of studies relating to the question of “what is the impact of health technology assessment?”, there is substantial heterogeneity in this literature with respect to the purpose of the study, the conceptualisation of HTA, the conceptualisation of impact, and the scope and rigour of studies. Moreover, studies are often presented from a national rather than international perspective with considerable variations in reporting, making comparisons across countries and contexts difficult. We argue that evaluations could be greatly improved by a few minimally accepted indicators of impact, underpinned by an internationally informed, multi-dimensional, multi-stakeholder, and systems-focussed framework. We argue that, whilst conducting an evaluation study that balances rigour and scope is incredibly difficult, if not impossible, that is not to say that the frameworks guiding such evaluative activity cannot be improved. Realistic approaches to generating knowledge must be taken, for example by addressing a list of questions that are answerable and sufficient to assess the impact of a given HTA agency.

It is worth noting at this point that other studies have developed similar frameworks, and our findings are consistent with them. Based on a systematic programme of qualitative research, the results of this study present a further piece of evidence on how to evaluate the performance of HTA agencies from an institutional perspective rather than viewing HTA as only a ‘knowledge product’. For example, the literature review and interviews published by Charles River Associates [5] compare the use of HTA in several different countries and acknowledge the importance of a multi-stakeholder perspective, as well as the lack of evidence on policy and practice in the literature due to a lack of best practice principles for the evaluation of HTA agencies. Similarly, the Payback Framework, the most commonly used model for the evaluation of HTA [4], has a multi-stakeholder perspective but primarily focuses on health services ‘research’. The importance of a multi-stakeholder perspective is also highlighted in national reports from the Austrian and Dutch contexts [14, 22]. However, the aim of this study was to explore the mechanisms of impact of an HTA agency from a multi-dimensional, multi-stakeholder and international perspective with a view to considering the impact of HTA as part of a wider ecosystem of stakeholders, processes, and institutions. With this study, we aim to contribute towards developing a more appropriate framework for the evaluation of HTA agencies.

Learning from our qualitative study contributes to the development of a framework for the evaluation of the impact of an HTA agency from an institutional perspective. First, we argue that there are multiple and not necessarily mutually exclusive mechanisms through which HTA systems can meet their overall objective of achieving a more cost-effective and equitable health care system. The HTA agency therefore has a central role to play in ensuring that each of these mechanisms is employed in its work. Moreover, we learn that each of these mechanisms can have distinct features which can act as indicators of effects, or point to important questions evaluations should address. These distinct effects are realised by multiple stakeholders situated within the ecosystem of stakeholders, institutions, and processes we have outlined in the introduction. Such learning complements and extends extant literature addressing the question of ‘what is the impact of HTA’ by exploring the mechanisms of impact of an HTA agency from an institutional perspective, rather than viewing HTA as a ‘knowledge product’.

In making this learning practicable for those conducting evaluation activities on the impact of HTA agencies, we have developed a framework of questions which evaluators may wish to use to guide evaluation activities, either for a one-off stocktake of the performance of an HTA system, for routine performance monitoring over time, or for comparative benchmarking against other countries. It will be important going forward that this framework of questions is beta-tested in multiple contexts to further refine and ensure the usability of the framework.

Study limitations

This study has several limitations of note. The substantive component of this study has primarily considered only the healthcare system context, whereas the impact of an HTA agency will most certainly extend beyond the healthcare system, into the social care and education systems, for example. Moreover, whilst we adopt an international perspective, we must acknowledge that the majority of interviewees in this sample came from countries which can be classed as developed economies. Further analyses would benefit from a wider range of contextual perspectives. Nevertheless, many of those who were interviewed had significant international expertise in working with HTA agencies. We must also acknowledge that our empirical work is informed by only 9 senior figures in the international HTA community, meaning that experiences are limited to those interviewed.

Conclusions

The number of HTA agencies has grown as a result of the increasing importance of HTA research and an acknowledgement of the role HTA can play in delivering UHC. Understanding the added value of HTA agencies is therefore important, given that such agencies can divert money from frontline services. This paper offers a complementary perspective to other studies of the evaluation of HTA. In doing so, we adopt an international, multi-stakeholder, and multi-dimensional perspective to explore the mechanisms of impact from an institutional perspective.

The findings of our qualitative study point to several distinct but not necessarily mutually exclusive mechanisms through which HTA agencies can have value, or impact. Our findings inform the development of a framework consisting of several categories of questions which HTA agency stakeholders might wish to address in evaluating the value of their respective agencies, or in considering the development of an HTA agency. Overall, what is clear is that there are multiple mechanisms through which HTA agencies have impact, mechanisms which relate to a wide range of effects for a variety of stakeholders. If the value of HTA agencies is to be realised, then we hope that the framework developed here will serve to support the development of a few minimally accepted indicators of HTA agency impact.

Our conclusions echo, and add to, those from researchers studying the distinct but related question of how to assess the value of HTA research programmes. Whereas much of the extant literature perceives the question of ‘what is the impact of HTA’ from the perspective of HTA as a ‘knowledge product’, our study adopts a perspective which conceptualises HTA impact from an institutional perspective, allowing us to explore and identify the specific mechanisms of impact from an HTA agency perspective.

Through having a more detailed understanding of the mechanisms of impact of an HTA agency from an institutional perspective, we hope that our analysis will be useful both to countries interested in managing the performance of their own HTA agencies and benchmarking it against their peers, and to development partners who are increasingly funding health systems strengthening initiatives, including HTA agencies. For both parties, in order to measure the impact of HTA agencies we need to be able to understand the mechanisms through which that impact occurs. To support this end, we present a framework for the evaluation of the impact of HTA agencies. The framework suggests, based on our impact mapping, several areas of questions which an evaluation of an HTA agency might wish to consider to achieve a fuller understanding of how the impact of their HTA agency has been realised.

Availability of data and materials

Not applicable.

References

  1. Goodman CS. HTA 101 Introduction to health technology assessment. Bethesda, MD: National Library of Medicine (US); 2014.

  2. INAHTA. INAHTA: the new definition of HTA; 2020. https://www.inahta.org/2020/05/announcing-the-new-definition-of-hta/. Accessed: 31 May 2021.

  3. United Nations. The sustainable development goals report. New York: United Nations Publications; 2017.


  4. Raftery J, Hanney S, Greenhalgh T, Glover M, Blatch-Jones A. Models and applications for measuring the impact of health research: update of a systematic review for the Health Technology Assessment programme. Health Technol Assess. 2016;20(76):1–254.


  5. Wilsdon T, Fiz E, Haderi A. A comparative analysis of the role and impact of Health Technology Assessment. Charles River Associates; 2014.

  6. Gerhardus A, Dorendorf E, Rottingen J-A, Santamera A. What are the effects of HTA reports on the health system? Evidence from the research literature. In: Velasco Garrido M, Kristensen F, Palmhoj Nielson C, Busse R, editors. Health Technology Assessment and Health Policy-Making In Europe: Current status, challenges, and potential. World Health Organisation; 2008. p. 109–136.

  7. Tarricone R, Boscolo P, Ciani O. Valutare l’impatto dell’HTA come strumento di governo dell’innovazione tecnologica: modelli teorici e studi empirici. Pharmacoeconomics. 2012;14(1):51–62.

  8. Britton M, Jonsson E. Impact of health technology assessments. Some experiences of SBU. Int J Technol Assess Health Care. 2002;18(4):824–32.


  9. Rosén M, Werkö S. Does health technology assessment affect policy-making and clinical practice in Sweden? Int J Technol Assess Health Care. 2014;30(3):265–72.


  10. Zechmeister I, Schumacher I. The impact of health technology assessment reports on decision making in Austria. Int J Technol Assess Health Care. 2012;28(1):77–84.


  11. Bennie M, Dear J, Hems S, Black C, McIver L, Webb DJ. An investigation into the effect of advice from the Scottish Medicines Consortium on the use of medicines in Scotland’s Health Service. Br J Clin Pharmacol. 2011;71(2):283–8.


  12. Jacob R, McGregor M. Assessing the impact of health technology assessment. Int J Technol Assess Health Care. 1997;13(1):68–80.


  13. Dietrich ES. Effects of the National Institute for Health and Clinical Excellence’s technology appraisals on prescribing and net ingredient costs of drugs in the National Health Service in England. Int J Technol Assess Health Care. 2009;25(3):262–71.


  14. Oortwijn WJ, Hanney SR, Ligtvoet A, Hoorens S, Wooding S, Grant J, et al. Assessing the impact of health technology assessment in the Netherlands. Int J Technol Assess Health Care. 2008;24(3):259–69.


  15. Guthrie S, Hafner M, Bienkowska-gibbs T, Wooding S. Returns on research funded under the NIHR Health Technology Assessment (HTA) Programme. Economic analysis and case studies. Cambridge; 2015.

  16. Jacob R, Battista RN. Assessing technology assessment: early results of the Quebec experience. Int J Technol Assess Health Care. 1993;9(4):564–72.


  17. Teerawattananon Y, Tritasavit N, Suchonwanich N, Kingkaew P. The use of economic evaluation for guiding the pharmaceutical reimbursement list in Thailand. Z Evid Fortbild Qual Gesundhwes. 2014;108(7):397–404.


  18. Raftery J, Hanney S, Green C, Buxton M. Assessing the impact of England’s National Health Service R & D Health Technology Assessment program using the “payback” approach. Int J Technol Assess Health Care. 2009;25(1):1–5.


  19. Sheldon TA, Cullum N, Dawson D, Lankshear A, Lowson K, Watt I, et al. What’s the evidence that NICE guidance has been implemented? Results from a national evaluation using time series analysis, audit of patients’ notes, and interviews. BMJ. 2004;329(7473).

  20. Levin L, Goeree R, Levine M, Krahn M, Easty T, Brown A, et al. Coverage with evidence development: the Ontario experience. Int J Technol Assess Health Care. 2011;27(2):159–68.


  21. Goeree R, Levin L. Building bridges between academic research and policy formulation the PRUFE framework—an integral part of Ontario’s evidence-based HTPA process. Pharmacoeconomics. 2006;24(11):1143–56.


  22. Ludwig Boltzmann Institut. Auswirkung der HTA-Forschung auf das Gesundheitswesen in Osterreich. Wien; 2010.

  23. Guthrie S, Bienkowska-Gibbs T, Manville C, Pollitt A, Kirtley A, Wooding S. The impact of the National Institute for Health Research Health Technology Assessment programme, 2003–13: a multimethod evaluation. Health Technol Assess. 2015;19(67):1–291.

  24. Chinnery F, Bashevoy G, Blatch-Jones A, Douet L, Puddicombe S, Raftery J. National Institute for Health Research (NIHR) Health Technology Assessment (HTA) Programme research funding and UK burden of disease. Trials. 2018;19(87).

  25. Manton KG, Gu X-L, Lowrimore G, Ullian A, Tolley HD. NIH funding trajectories and their correlations with US health dynamics from 1950 to 2004. Proc Natl Acad Sci. 2009;106(27):10981–6.


  26. Australian Society for Medical Research. Returns on NHMRC funded Research and Development. Deloitte Access Economics. 2011.

  27. Grieve E, Briggs A. A methodological approach for measuring the impact of HTA. Glasgow: University of Glasgow; 2015.


  28. Johnston SC, Rootenberg JD, Katrak S, Smith WS, Elkins JS. Effect of a US National Institutes of Health programme of clinical trials on public health and costs. Lancet. 2006;367:1319–27.


  29. Varnava A, Bracchi R, Samuels K, Hughes DA, Routledge PA. New medicines in Wales: the all wales medicines strategy Group (AWMSG) appraisal process and outcomes. Pharmacoeconomics. 2018;36(5):613–24.


  30. Fischer KE, Grosse SD, Rogowski WH. The role of health technology assessment in coverage decisions on newborn screening. Int J Technol Assess Health Care. 2011;27(4):313–21.


  31. Chamberlain CA, Martin RM, Busby J, Gilbert R, Cahill DJ, Hollingworth W. Trends in procedures for infertility and caesarean sections: Was NICE disinvestment guidance implemented? NICE recommendation reminders. BMC Public Health. 2013;13(1).

  32. Schumacher I, Zechmeister I. Assessing the impact of health technology assessment on the Austrian healthcare system. Int J Technol Assess Health Care. 2013;29(1):84–91.


  33. Culyer AJ, Podhisita C, Santatiwongchai B. A Star in the East. A short history of HITAP. Amarin Printing and Publishing Public Co., Ltd. 2016; Vol. 1. p. 1–219.

  34. Johnston SC, Rootenberg JD, Katrak S, Smith WS, Elkins JS. Effect of a US National Institutes of Health programme of clinical trials on public health and costs. The Lancet. 2000;367:1319–27.


  35. ZonMw. Externe evaluatie ZonMw programma Doelmatigheids Onderzoek 2006–2017; 2018.

  36. Cairney P. Understanding public policy: theories and issues. Basingstoke: Palgrave Macmillan; 2012.


  37. Miles M, Huberman AM. Qualitative data analysis. 2nd ed. Thousand Oaks: SAGE Publications, Inc; 1994.


  38. Patton MQ. Qualitative research & evaluation methods. 3rd ed. Thousand Oaks, California: SAGE Publications, Inc.; 2002.


  39. Donovan C, Hanney S. The, “Payback Framework” explained. Res Eval. 2011;20(3):181–3.


  40. Funnell S, Rogers P. Purposeful program theory: effective use of theories of change and logic models. San Francisco: Jossey-Bass; 2011.


  41. Dabak SV, Teerawattananon Y, Win T. From design to evaluation: applications of health technology assessment in Myanmar and lessons for low or lower middle-income countries. Int J Technol Assess Health Care. 2019;35:461–6.



Acknowledgements

Not applicable.

Funding

The Health Intervention and Technology Assessment Program (HITAP) is funded by the Thailand Research Fund (TRF) under a grant for Senior Research Scholar (RTA5980011). HITAP’s International Unit is supported by the International Decision Support Initiative (iDSI) to provide technical assistance on health intervention and technology assessment to governments in low- and middle-income countries. iDSI is funded by the Bill & Melinda Gates Foundation [OPP1134345], the UK's Department for International Development, and the Rockefeller Foundation. Overseas Development Institute (ODI) Fellows at HITAP support its work in the region. The findings, interpretations, and conclusions expressed in this article do not necessarily reflect the views of the funding agencies.

Author information


Contributions

The corresponding author was responsible for primary data collection (interviews), data analysis, and collating the manuscript. Authors 2 and 3 were primarily responsible for collating and writing the literature review. Authors 4–8 were involved in writing the introductory, discussion, and concluding sections of the paper, and provided comment and feedback on all other sections. All authors were involved in the scoping of the paper, including defining research questions, identifying participants, and identifying sources of relevant literature. All authors read and approved the final manuscript.

Corresponding author

Correspondence to Robyn Millar.

Ethics declarations

Ethics approval and consent to participate

Ethical approval for this study was obtained from University of Strathclyde ethics committee. Participants were informed about the purpose of the study and what their participation would entail via a participant information sheet, sent to participants via e-mail. Participants gave their consent to participate and be audio recorded via a consent form sent to participants prior to the interview taking place.

Consent for publication

Not applicable.

Competing interests

Author [2]: Professor Alec Morton—currently directs a responsive modelling facility for the UK Department of Health and Social Care, providing methodological advice relating to the use of HTA in the UK system.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Supplementary Information

Additional file 1.

Interview guide.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/. The Creative Commons Public Domain Dedication waiver (http://creativecommons.org/publicdomain/zero/1.0/) applies to the data made available in this article, unless otherwise stated in a credit line to the data.


About this article


Cite this article

Millar, R., Morton, A., Bufali, M.V. et al. Assessing the performance of health technology assessment (HTA) agencies: developing a multi-country, multi-stakeholder, and multi-dimensional framework to explore mechanisms of impact. Cost Eff Resour Alloc 19, 37 (2021). https://doi.org/10.1186/s12962-021-00290-8

