We obtained a total of eight HTA guidelines specific to medical devices issued by HTA agencies or research initiatives across six regions. The National Institute for Health and Care Excellence (NICE) in the United Kingdom issued an HTA methods guide for its Medical Technologies Evaluation Programme in 2011. Following the methods guide, NICE also issued the Diagnostics Assessment Programme manual, specifically for diagnostic technologies that offer higher test accuracy but at a greater cost than those in current use. In Canada, Health Quality Ontario (HQO) released a method and process guide for HTA in 2018, with a scope spanning medical devices, diagnostics, and surgical procedures as well as complex health system interventions. In Australia, two HTA guidelines have been developed separately for therapeutic and diagnostic devices by the Medical Services Advisory Committee (MSAC) [32, 33]. In the Asia–Pacific region, the Singapore Agency for Care Effectiveness (ACE) was the only national HTA organization to have released HTA guidelines on medical devices. Apart from these official HTA agencies, an international collaborative network has also contributed to the methodological advancement of HTA for medical devices. For example, the European Network for Health Technology Assessment (EUnetHTA) has launched a series of research initiatives to develop a methodological framework for HTA of therapeutic medical devices.
Available clinical evidence
Given that RCT evidence for medical devices is generally limited, an open-minded and flexible attitude toward other forms of evidence (e.g., case reports and case series, cohort studies, case-control studies, and real-world studies) was highly recommended [29, 34, 35]. Both the UK and EUnetHTA guidelines have pointed out the high risk of bias in non-randomized controlled trials [30, 35]. At the same time, several assessment tools have been developed, although they may not be specific to medical devices. The Cochrane Risk of Bias Assessment Tool for Non-Randomized Studies of Interventions (ACROBAT-NRSI) could be used to assess the risk of bias in non-randomized controlled studies. In addition, quality assessment for case reports and case series could refer to the checklist developed by the Institute of Health Economics in Canada.
The draft guidance released by the United States Food and Drug Administration (FDA) in 2016 spurred a surge in the literature describing how real-world evidence (RWE) can be used to support regulatory approval of medical devices. RWE refers to any evidence on healthcare generated from sources outside clinical trial settings, usually in the form of electronic medical records (EMR), electronic health records (EHR), hospital databases, patient registries, claims data, etc. In addition to market authorization, RWE is also relevant to post-marketing surveillance, coverage decisions, outcome-based contracting, resource use, and treatment compliance [40, 41]. RWE can offer unique perspectives especially for medical device products for which the regulatory environment does not require RCTs, or in situations where RCTs have traditionally been lacking, such as measuring disease burden and detecting new safety signals.
Unlike randomized clinical trials, most RWE comes from observational studies and may have many drawbacks. While current medical device-specific HTA guidelines have underscored the potential bias associated with RWE, and several tools are available for assessing bias in non-randomized studies, few guidelines have addressed other common issues, including data quality, availability, standards, and privacy [29, 30, 32, 33, 35, 42]. For example, a European study that mapped RWE studies of three medical device products revealed that the accessibility of data sources for RWE varied greatly across European countries. The study also found that the types and definitions of variables included in each data source were inconsistent, making comparison across databases impossible. Therefore, there is a need for RWE guidance on medical devices that would not only provide overarching frameworks but also standardize methods and processes ranging from data storage, collection, and sharing to analytic approaches.
Learning curve effect
International medical device-specific HTA guidelines have emphasized the need to account for the learning curve effect in HTA. The EUnetHTA has suggested establishing a break-in period before the formal evaluation so that users have sufficient time to adapt to the new technology. In addition, varying degrees of operator proficiency across different types of medical research centers (e.g., teaching and non-teaching hospitals) would introduce heterogeneity into HTA. The EUnetHTA therefore proposed a three-tiered approach to accounting for the learning curve in its HTA guidelines for therapeutic devices. First, assessors should screen for studies that estimate an association between user proficiency or healthcare settings (e.g., teaching or non-teaching hospitals) and clinical outcomes. Second, if the effect of the learning curve was not reported in the RCT and relevant information cannot be obtained by contacting the investigators, other types of evidence, such as non-randomized controlled and non-comparative effectiveness studies, can be considered to explore the association between operator proficiency, type of study center, and clinical outcomes. Last, subgroup analyses can be applied in which existing studies are divided into subgroups based on the level of operator proficiency; statistical methods such as meta-analysis can then be used to estimate the difference in medical outcomes between these subgroups and hence quantify the effect of the learning curve. Radiofrequency ablation (RFA) for liver tumors serves as an example. In a systematic review, researchers divided 100 case reports into four subgroups according to the surgeons' previous RFA experience (i.e., having done < 20, 21–50, 51–99, or > 100 cases).
The results of the meta-analysis showed that the tumor recurrence rate decreased (18%, 16%, 14%, and 10% in the four subgroups, respectively) as surgeons accumulated experience.
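Such a subgroup analysis can be illustrated with a minimal sketch. The figures below are hypothetical, and a simple fixed-effect inverse-variance pooling of proportions on the logit scale stands in for the full meta-analytic machinery used in the actual review:

```python
import math

def pool_proportions(studies):
    """Fixed-effect inverse-variance pooling of event proportions on the
    logit scale; `studies` is a list of (events, sample_size) tuples."""
    weights, logits = [], []
    for events, n in studies:
        e = events + 0.5            # continuity correction for small counts
        ne = n - events + 0.5
        logits.append(math.log(e / ne))
        weights.append(1.0 / (1.0 / e + 1.0 / ne))  # inverse of logit variance
    pooled_logit = sum(w * l for w, l in zip(weights, logits)) / sum(weights)
    return 1.0 / (1.0 + math.exp(-pooled_logit))    # back-transform to a rate

# Hypothetical studies (recurrences, patients) grouped by operator experience
subgroups = {
    "< 20 cases":  [(9, 50), (8, 45)],
    "21-50 cases": [(8, 50), (7, 48)],
    "51-99 cases": [(7, 52), (6, 47)],
    "> 100 cases": [(5, 50), (4, 49)],
}
for label, studies in subgroups.items():
    print(f"{label}: pooled recurrence rate = {pool_proportions(studies):.1%}")
```

Comparing the pooled rates across experience strata then quantifies the learning curve, exactly as in the three-tiered approach described above.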
Short product life cycle and quick upgrade
In practice, a Bayesian approach is recommended to account for the iterative nature of medical devices in HTA. The Bayesian approach is a statistical method that infers the posterior distribution of unknown parameters from prior knowledge and sample data according to Bayes' theorem. Because medical devices are incrementally upgraded with minor modifications, clinical trials and/or early research data on the former version of a medical device product, and sometimes even data on comparator products, can serve as sources of prior information in the Bayesian approach.
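A minimal sketch of this idea, under assumed figures: data from a previous device generation are down-weighted by a discount factor (a simple power-prior-style choice) into a Beta prior, which is then updated with new-trial results via the conjugate Beta-Binomial model. All counts and the discount factor are illustrative assumptions, not values from any guideline:

```python
def beta_posterior(prior_successes, prior_failures, discount,
                   new_successes, new_failures):
    """Beta-Binomial conjugate update: evidence on the previous device
    generation is down-weighted by `discount` (0 = ignore, 1 = pool fully)
    and added to a vague Beta(1, 1) prior along with the new-trial data."""
    alpha = 1 + discount * prior_successes + new_successes
    beta = 1 + discount * prior_failures + new_failures
    return alpha, beta

# Hypothetical figures: the previous-generation device achieved 80/100
# successes (discounted by 0.5); the new-generation trial observed 45/50.
a, b = beta_posterior(80, 20, 0.5, 45, 5)
posterior_mean = a / (a + b)
print(f"Posterior Beta({a:g}, {b:g}), mean success rate = {posterior_mean:.3f}")
```

The discount factor expresses how transferable the old-generation evidence is judged to be; setting it to zero recovers an analysis of the new trial alone.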
Inexplicit target population and lack of direct clinical outcomes
Given the lack of direct clinical outcomes for screening and diagnostic devices, the HQO allows the use of established surrogate endpoints or intermediate clinical indicators to predict patients' final medical outcomes. For instance, the association between intermediate indicators (e.g., blood pressure) and cardiovascular-related deaths has already been established through statistical models. In evaluating screening or diagnostic technologies, NICE, MSAC, and EUnetHTA stress that product performance should be reflected across the entire care pathway: the HTA should evaluate not only test accuracy but also the impact of the diagnostic results (however accurate they are) on subsequent treatment pathways and final medical outcomes [30, 32, 35]. One particular technique described by international HTA guidelines is the linked analysis [30, 32]. In its first step, a linked analysis collects comprehensive data on the test accuracy of the diagnostic technology and on the effectiveness of the clinical interventions that follow the diagnostic results. These data are then modeled to simulate the whole care pathway and to estimate the impact of the diagnostic device on the final medical outcome. However, two premises must hold for conducting a linked analysis: (1) the effectiveness of the clinical interventions subsequent to the diagnostic results must be established by confirmatory trials and must be available; and (2) patients' baseline characteristics in these confirmatory trials should resemble the population to which the diagnostic device is applied.
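The two-step logic of a linked analysis can be sketched as a toy decision model. All inputs below (prevalence, test accuracy, benefit and harm values) are hypothetical; the point is only that test accuracy and downstream treatment effectiveness, taken from separate sources, are combined to estimate a final outcome:

```python
def expected_outcome(prev, sens, spec, benefit_treated, benefit_untreated,
                     harm_false_positive):
    """Link test accuracy to a final outcome: patients flow from the test
    result to treatment (or no treatment), and each pathway contributes a
    utility-like value between 0 and 1."""
    tp = prev * sens                      # diseased, detected, treated
    fn = prev * (1 - sens)                # diseased, missed, untreated
    fp = (1 - prev) * (1 - spec)          # healthy, treated unnecessarily
    tn = (1 - prev) * spec                # healthy, correctly negative
    return (tp * benefit_treated
            + fn * benefit_untreated
            + fp * (1 - harm_false_positive)
            + tn * 1.0)

# Hypothetical inputs: 10% prevalence, 90%/95% sensitivity/specificity,
# treatment benefit drawn from confirmatory trials, small overtreatment harm.
value = expected_outcome(prev=0.10, sens=0.90, spec=0.95,
                         benefit_treated=0.80, benefit_untreated=0.30,
                         harm_false_positive=0.05)
print(f"Expected outcome across the care pathway: {value:.3f}")
```

Running the same model with the accuracy of a comparator test shows how a gain in sensitivity or specificity propagates to the final outcome, which is the quantity of interest in an HTA rather than accuracy itself.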