Takeaways for an Optimized Human Factors Design and Successful Submission



The 2020 Virtual HFES Healthcare Symposium – Takeaways for an Optimized Study Design and Successful Submission

This newsletter is based on the information provided by FDA at the 2020 Virtual Human Factors and Ergonomics Society (HFES) Healthcare Symposium.

At the 2020 Virtual HFES Healthcare Symposium, members from FDA CDER and CDRH held workshops and sessions related to human factors (HF) for combination drug products and medical devices. Through their presentations, they provided insights and perspectives that manufacturers should consider for HF validation testing and submissions. This article discusses three areas we consider helpful to ensure optimized study designs and successful regulatory submissions.

1.  Training – When and how to incorporate training in a validation study.

Consistent with Agilis’ experience with the Agency, FDA CDER explained that the training included in a validation study should correspond to the training that will be marketed with the product. During the discussion, CDER described a “spectrum” of training, citing several examples of possible variations that sit between Training (the defined training program marketed with the product) and No Training. For some home use products, not all users will receive training from a healthcare provider (HCP) or pharmacist, or the level of training may vary. Additionally, users may or may not read the instructions for use (IFU) prior to using a product. In these cases, FDA is concerned about the highest-risk group: users who receive no training. Therefore, if routine, consistent training cannot be guaranteed for every end user in actual use, defining representative training is difficult, and the validation study should include only untrained users. 

If every user will receive the same training consistently, FDA wants to know what that training is and how it will be delivered consistently to each end user. FDA also expects the validation study to include that representative training. The following types of training were specifically discussed, along with FDA’s recommendations:

  • Self-training (e.g., a demo device, or users are given time to self-train) – If self-training is defined as part of the user interface (UI), evidence is needed that training occurred as the manufacturer defined it. A certification is one example of the type of evidence CDER would expect.

  • A super user is trained and then trains other users – The highest-risk scenario should be considered. If the super user will, at a minimum, read through the IFU, CDER expects the manufacturer to demonstrate that this type of training is representative and to provide evidence that it is actually occurring as defined.

  • Users complete a proctored session before using the product on their own – If this is part of the training, CDRH recommends the validation study begin after the proctored session, assuming that is when the user would be considered officially trained under the manufacturer’s defined training. CDRH also suggested conducting the validation study with two arms (trained and untrained users) to evaluate the effectiveness of the training in mitigating risk.

CDER also clarified expectations for setting up familiarization with a device during a human factors study. Because some users may self-familiarize with a product by studying the IFU or watching videos on their own, CDER recommended setting up validation study scenarios by asking participants to do whatever they normally would do. Beyond this, the moderator should not prompt participants (for example, by pointing out that instructions are available or directing them to read the IFU before testing) unless such prompting reflects representative training. Evaluation of the IFU can be included in the validation study, but it should occur after the usability portion is complete, either by repeating a use scenario with directed use of the IFU or by asking knowledge-task questions where direct observation is not possible. The IFU can also be evaluated through probing questions during the debrief if it was identified as a potential root cause during the usability portion. 

Most, if not all, manufacturers would agree that a user learns how to use a device through repeated use. To demonstrate a learning effect in a validation study, CDER raised a few considerations: how frequently will the user use the product, how will the decay period between the first and second uses be incorporated into the validation study, and will that decay period reflect actual use? When training is part of the UI, CDRH expects a training decay period of at least one hour. If literature supports that the decay period included in the validation study is equivalent to the decay period during actual use, that rationale should be included for the Agency to review.

 

2.  Threshold Analyses – Getting started for a proposed ANDA.

Threshold analyses identify differences in the design of the user interface (UI) of a proposed generic combination product compared to the UI of the Reference Listed Drug (RLD) to support an abbreviated new drug application (ANDA). CDER pointed to the January 2017 draft guidance titled Comparative Analyses and Related Comparative Use Human Factors Studies for a Drug-Device Combination Product Submitted in an ANDA and advised that, while the generic drug product and its RLD need not be identical in all respects, differences from the RLD’s UI should be minimized in the early stages of development. This matters because the ANDA relies on the safety and efficacy FDA reviewed for the RLD. Note that if the proposed product demonstrates superiority to the RLD, or if the user groups or indications differ between the proposed and RLD products, the manufacturer should consult regulatory experts to determine whether the ANDA pathway is applicable.

When conducting threshold analyses, the following should be included:

  • Labeling comparison – A side-by-side comparison of the RLD’s and proposed product’s IFU, container label, on-device labeling, and carton labeling, as these may be elements intended to minimize medication or use errors.

  • Comparative task analysis – A comparison of the task analyses for the RLD and proposed products to identify whether UI differences impact any tasks.

  • Physical comparison – A visual and tactile examination of the physical features of the RLD product compared to the proposed product.

If no differences are identified between the proposed and RLD products, the Agency recommends stopping and submitting a request for a pre-ANDA meeting to discuss the threshold analyses or ask related questions. On the other hand, if the threshold analyses uncover differences between the proposed and RLD products, the Agency recommends determining whether the UI differences impact existing critical tasks or result in new critical tasks. The use-related risk analysis (URRA) should be leveraged at this point, with the focus on the tasks that differ. If, through this step, differences in the UI are found to impact any external critical design attributes (features that end users rely on to safely and effectively perform a critical task), the Agency recommends that the manufacturer consider redesigning the UI to minimize differences from the RLD. The Agency also encourages manufacturers to submit a request for a pre-ANDA meeting before conducting any comparative use HF testing. Meeting with FDA at this point provides valuable guidance, especially in cases where users are difficult to recruit because the proposed product is for an orphan drug, or where the RLD is difficult to acquire.

 

3.  Use Related Risk Analyses – Ensure the URRA is robust and consider all tasks.

FDA has communicated that it does not expect any combination drug product or medical device to be free of use errors. Instead, the Agency expects manufacturers to identify potential use-related risks and mitigate those use errors through the design of the UI. If use-related residual risks cannot be reduced or eliminated, the Agency recommends that the manufacturer provide a sound rationale outlining why further risk control is not practicable and why the medical benefits of the intended use outweigh the residual use-related risks that remain. 

CDRH stated that a robust use-related risk analysis (URRA) is key to successfully incorporating HF into the design process, and thereby to a successful HF submission. However, CDRH noted that approximately 30% of the pre-market submissions it received in 2019 had deficiencies due to inadequate, incomplete, or inconsistent URRAs. At a high level, the URRA process involves identifying and categorizing potential use-related hazards and the severity of harm. The Agency highlighted that some URRAs it reviews have critical tasks eliminated because of the risk priority number (RPN). The Agency advised that URRAs should include known use issues and that only severity of harm, not RPN or occurrence of harm, should be used to determine criticality. 

Once all critical tasks are identified, FDA guidance (CDRH, 2016) states that HF validation testing should be comprehensive in scope, adequately identify any use errors caused by the device UI, and be conducted such that the results can be generalized to actual use. To satisfy these criteria, there are several aspects to consider when developing the HF study design or methodology. One is that the study design should include all critical tasks. It is important to note that FDA stated it looks at the entire HF study report. CDER stated that a task can be essential and critical, or essential and non-critical. The Agency is interested in seeing what happens for both critical and non-critical tasks; therefore, non-critical tasks should also be included in the HF study report. 

Successful regulatory submissions are the result of demonstrating a safe and effective user interface design, a thorough risk assessment, and a well-thought-out and well-executed HF validation study. Meeting these requirements may seem challenging, and we hope to reduce that challenge by taking some of the guesswork out of what FDA expects. To that end, we have provided the Agency’s perspectives as presented at the 2020 Virtual HFES Healthcare Symposium by members of FDA CDER and CDRH.

 

Sources:

U.S. Department of Health and Human Services. Food and Drug Administration. Center for Devices and Radiological Health. Applying Human Factors and Usability Engineering to Medical Devices. Guidance for Industry and Food and Drug Administration Staff, 2016.

U.S. Department of Health and Human Services. Food and Drug Administration. Comparative Analyses and Related Comparative Use Human Factors Studies for a Drug-Device Combination Product Submitted in an ANDA. Draft Guidance for Industry, January 2017.

 
 

About the Author:
Sophia Kalita


Sophia Kalita is a Biomedical Engineer and Human Factors Consultant with Agilis Consulting Group, LLC. Sophia is experienced in applying human factors principles to the design, evaluation, and validation of medical devices and products. Prior to joining Agilis Consulting Group, Sophia worked for a global medical device manufacturer as a cross-functional team leader focused on design controls and project management. In her project management role, Sophia managed and ensured the success of human factors activities for valuable medical devices. Sophia is also a contributing author to the recently published AAMI book, "Applied Human Factors in Medical Device Design" (2019).



Sophia Kalita, MS