Three Common Pitfalls in Determining Root Cause of Use Errors 


The primary objective of conducting root cause analysis (RCA) in the human factors (HF) process is to identify user interface design deficiencies and protect medical device end users from any resulting harm. Root cause analysis is an integral part of the HF process and can be conducted prospectively or retrospectively. Prospective RCA predicts what use errors could occur when users operate a medical device in a given use environment and speculates about the events that could lead to those use errors; it occurs at the early stages of product development and/or during preliminary risk analysis. Conversely, retrospective RCA is conducted after events have occurred, such as use errors or use difficulties observed during HF testing. This article focuses only on retrospective root cause analysis, which will hereafter be referred to as root cause analysis or RCA.

Root cause analysis is the process of uncovering the source(s) of errors, specifically use errors in HF, that have occurred during human-machine interaction. Usually, the exact sequence of events that caused a use error can be determined either by observing the usability test participant or by interviewing them. When use errors (or difficulties) occur during testing, it is common to have some sense of their possible causes. The next critical step, however, is to test these initial hypotheses through further analysis of the collected data, i.e., objective data, subjective data, and root cause interviews. Once properly digested and analyzed, the collected data can reveal user interface deficiencies and highlight the specific areas that may need improvement to make the medical device safer and more effective for its users. While the idea of performing RCA sounds straightforward, there are some common pitfalls along the way. This article presents the most common pitfalls of root cause analysis along with some examples.

These are the most common pitfalls to be aware of: 

  1. Blaming the user 

  2. Absence of data triangulation  

  3. Not tying back root causes to Perception, Cognition, Action (PCA) analysis

1. Blaming the User

A medical device should accommodate its end user’s needs, not the other way around. However, there is a tendency to blame the user for a use error, even though use errors can almost always be traced back to an issue in the user interface (UI) design. Root causes that blame the user (such as a mental lapse or slip, test anxiety, low intelligence, or fatigue) are incomplete: they shed light on only one aspect of the human-machine interaction and can allow inadequacies in the UI design to go undetected, potentially releasing a sub-standard product to the market. When RCA considers the entirety of the human-machine interaction, appropriate UI recommendations can be identified and incorporated prior to the product’s market release, thereby ensuring that HF testing promotes safe and effective use.

For example, in a certain medical device HF study, a test participant was seemingly in a rush to complete all study tasks and would quickly move on to the next task when faced with a task they could not complete. For instance, one of the tasks required the participant to press and hold the ‘Record’ button to record input data. When unable to perform this task, the participant verbalized their difficulties and seemed uninclined to try again. Similar responses from the participant were observed throughout the session. In this case, the observer’s initial tendency would be to blame the user and say that they were in a rush to get through the test session, collect their honorarium, and leave. However, during a strategic debriefing interview at the end of the session, once the participant was informed of the correct way to start recording with the device, they expressed that it was unclear how to start recording. Further debriefing revealed that the participant was relying on their prior experience with similar devices to press and release the button instead of pressing and holding the button to start recording. Had the RCA concluded with blaming the user, the actual root cause would have been overlooked, and insights on deficiencies in the user interface and the associated instructional material would have been missed.

2. Absence of data triangulation

To ensure robust data collection, the FDA’s 2016 guidance, ‘Applying Human Factors and Usability Engineering to Medical Devices,’ requires collection of observation data (including knowledge task data), subjective feedback, and interview data. It may be tempting to treat interview data as the primary source when determining root causes and to overlook observation and subjective data. For a richer understanding of use data, specifically use errors, data triangulation is an essential step in data analysis: it validates findings through cross-verification across two or more sources. To correctly determine the root cause of use events, it is crucial to consider multiple types of data, validating what the user did (observation data) against what they said (subjective/interview data) rather than relying solely on one data set (e.g., just subjective feedback or interview data). Additionally, it can be valuable to watch for recurring themes across all participant data.

For example, a participant in an HF validation session did not open the instructions for use during the session, and subjective data (feedback from participant) indicated that they did not open the instructions given their experience with similar devices. However, objective data (observation) showed that the participant did not notice the instructions until the end of the study session, which was further confirmed with interview data. Additionally, it was observed that several other participants did not use the instructions during their sessions. This further demonstrated there was something beyond self-confidence in being able to use the device that led participants to overlook the instructions. Therefore, the comprehensive context provided by the objective data, interview data, and the recurring themes strengthened the root cause that the device UI did not make the availability of instructions salient to the user. 
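The triangulation logic described above can be sketched in code. The helper below is a hypothetical illustration (not part of any HF toolkit or the FDA guidance): each data source is coded as a set of candidate root-cause tags, and only tags corroborated by at least two independent sources survive. The tag names mirror the instructions-for-use example and are assumptions for illustration only.

```python
from collections import Counter

def triangulate(sources: dict) -> set:
    """Keep only root-cause tags corroborated by >= 2 independent data sources.

    `sources` maps a data-source name (observation, subjective, interview)
    to the set of candidate root-cause tags suggested by that source.
    """
    counts = Counter(tag for tags in sources.values() for tag in tags)
    return {tag for tag, n in counts.items() if n >= 2}

# Hypothetical coding of the IFU example from the text:
session = {
    "observation": {"did_not_notice_ifu"},          # participant never looked at the IFU
    "subjective":  {"relied_on_prior_experience"},  # questionnaire response
    "interview":   {"did_not_notice_ifu"},          # confirmed in debrief
}
# Only the tag supported by two sources survives triangulation.
print(triangulate(session))  # -> {'did_not_notice_ifu'}
```

In practice the "tags" would come from structured note-taking during sessions; the point of the sketch is simply that a root cause should not rest on a single data source.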

3. Not tying back root causes to Perception, Cognition, Action (PCA) analysis

To effectively inform which aspect(s) of the UI need to be updated, root causes and use errors should be tied back to the PCA (Perception, Cognition, Action) analysis conducted during any preliminary use-related analyses for the device. This includes determining whether the identified root causes are associated with issues of perception, cognition, and/or action, as this adds specificity to the UI recommendation. For example, a participant in an autoinjector study committed a use error when they did not pinch their injection site during the simulated injection. There are several potential root causes for this use error: the instructions for pinching the skin did not stand out (perception), the instructions were unclear (cognition), or the participant did not pinch because they were simulating the injection on a skin pad (action). During the debriefing interview, the participant mentioned that they read the injection steps but did not see any information related to pinching the injection site, which established that the root cause (the IFU step did not stand out to the user) was related to perception. Incorporating PCA analysis into the thought process helped identify a more precise UI recommendation related to the appearance/visibility of the ‘Pinch the skin’ step because it informed the RCA on where in the human-machine interaction the use event occurred.
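One lightweight way to keep root causes tied to the PCA stages is to tag each finding with its stage and group findings accordingly, so each UI recommendation targets the right point in the interaction. The sketch below is a hypothetical data structure, assuming made-up findings that echo the examples in this article; no standard HF schema is implied.

```python
from collections import defaultdict
from enum import Enum

class PCA(Enum):
    PERCEPTION = "perception"
    COGNITION = "cognition"
    ACTION = "action"

def group_by_pca(findings):
    """Group root-cause findings by the PCA stage each one maps to."""
    grouped = defaultdict(list)
    for finding in findings:
        grouped[finding["pca"]].append(finding["root_cause"])
    return dict(grouped)

# Hypothetical findings echoing the examples in this article:
findings = [
    {"use_error": "did not pinch injection site",
     "root_cause": "IFU 'Pinch the skin' step did not stand out",
     "pca": PCA.PERCEPTION},
    {"use_error": "did not hold the 'Record' button",
     "root_cause": "press-and-hold behavior unclear from labeling",
     "pca": PCA.COGNITION},
]
print(group_by_pca(findings)[PCA.PERCEPTION])
```

Grouping this way makes it obvious, for instance, that perception-stage root causes call for visibility/salience fixes, while cognition-stage root causes call for clearer wording.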

Other Recommendations for RCA

RCA is often performed on qualitative data, which cannot be measured objectively and is open to the subjective judgment of the person analyzing it. Therefore, it can be helpful to establish guardrails around the root-causing process to ensure the root causes accurately and comprehensively represent the data. The RCA process should be a structured team effort, and it can be helpful to have more than one analyst review the data to prevent gaps or bias in the analysis. Having an extra set of eyes not only ensures the appropriate root causes are captured but also confirms that all root causes are identified. It is acceptable for two or more analysts to perform independent data reviews to determine root causes and then align on them afterward. In fact, healthy debate is encouraged to fortify the reasoning behind the established root cause(s).
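When two analysts code root causes independently, one common way to quantify how well they agree (before the alignment discussion) is Cohen's kappa, which corrects raw agreement for agreement expected by chance. The sketch below is a minimal implementation for illustration; it is not prescribed by any HF guidance, and the example ratings are hypothetical PCA-stage labels.

```python
from collections import Counter

def cohens_kappa(ratings_a, ratings_b):
    """Chance-corrected agreement between two analysts' codings.

    Each list holds one code per use event (e.g., PCA stage labels).
    Returns 1.0 for perfect agreement and ~0 for chance-level agreement.
    """
    assert len(ratings_a) == len(ratings_b) and ratings_a
    n = len(ratings_a)
    observed = sum(a == b for a, b in zip(ratings_a, ratings_b)) / n
    counts_a, counts_b = Counter(ratings_a), Counter(ratings_b)
    expected = sum(counts_a[c] * counts_b[c]
                   for c in counts_a.keys() | counts_b.keys()) / (n * n)
    if expected == 1.0:  # both analysts used one identical code throughout
        return 1.0
    return (observed - expected) / (1.0 - expected)

# Two analysts independently tag four use events with PCA stages:
analyst_1 = ["perception", "perception", "cognition", "action"]
analyst_2 = ["perception", "cognition", "cognition", "action"]
print(round(cohens_kappa(analyst_1, analyst_2), 3))  # -> 0.636
```

A low kappa is a useful trigger for exactly the healthy debate described above: it shows the analysts are interpreting the same events differently and need to reconcile their reasoning.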

Conclusion

Root cause analysis is a crucial step in determining what design modifications are needed to keep users safe. The first step to avoiding mistakes in the RCA process is awareness, along with a willingness to change the current HF strategy if needed. When RCA is performed effectively, companies gain invaluable insights into what is and isn’t working with their product’s UI and are better informed about what updates are needed. If you have any questions about how root cause analysis can help improve your product, do not hesitate to contact Agilis to discuss your needs!

About the Author:
Priyanka Ambati, MS

Priyanka completed her Master’s in Human Factors Engineering and holds a bachelor’s in Biomedical Engineering. She is excited about leveraging her background in both human factors and medical devices while working with Agilis. She is experienced in quantitative risk assessment, device design and development, and human-centered research. She facilitated the development of a mental health app for college students by deriving design inputs from user requirements through qualitative data analysis. Additionally, she helped improve a first-year engineering course to increase retention of students in STEM by conducting focus groups and data analysis.



Agilis