AI/ML transparency for the end user – Regulatory developments and how to implement HF best practices


Transparency Check-In: AI/ML in medical devices


A key factor in developing valid human factors research is ensuring representative user groups. Diversity and inclusion within these user groups are crucial to a robust evaluation of the product user interface, and the FDA has become increasingly vocal about this need, particularly with respect to clinical trials. With healthcare product development increasingly using tools like Artificial Intelligence (AI) and Machine Learning (ML) to inform and guide medical care, the FDA has emphasized the need for oversight. The Software Pre-Certification (Pre-Cert) Pilot Program, part of the FDA's Digital Health Innovation Action Plan, established a process for evaluating digital health technologies. The FDA has also expressed interest in a more proactive approach to post-market monitoring of products, including how data is collected and used.

One indication of the FDA’s interest and continued efforts in this area is the planned 2022 meeting of the Medical Devices Advisory Committee for a broad evaluation of the overall accuracy and performance of pulse oximeters. This meeting may result in updated testing or labeling requirements for these or other products (both OTC and prescription) that rely on sensors and data processing to inform medical care. This newsletter focuses on the human factors components of ensuring transparency in the AI medical device space.

What does this mean for human factors?

Navigating this space should be intuitive and user friendly. As a patient, you would likely expect some level of transparency if the diagnosis you receive is informed by artificial intelligence. As a healthcare provider, would you expect to understand how your workplace medical records software processes Patient Health Information (PHI) to learn from and inform clinical decisions?

The FDA has defined AI as “the science and engineering of making intelligent machines, especially intelligent computer programs (McCarthy, 2007). Artificial intelligence can use different techniques, including models based on statistical analysis of data, expert systems that primarily rely on if-then statements, and machine learning (AI technique that can be used to design and train software algorithms to learn from and act on data).” See Figure 1 for an illustration of the hierarchy of AI.
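The distinction drawn in this definition can be illustrated with a minimal sketch. All device names, readings, and cutoffs below are hypothetical values chosen for illustration only: an expert system encodes a clinician's if-then rule directly, while even the simplest machine-learning approach derives its decision rule from data.

```python
# Hypothetical sketch contrasting two of the AI techniques in the FDA's
# definition: a rule-based "expert system" and a minimal machine-learning
# approach. SpO2 values and labels are illustrative, not clinical guidance.

def expert_system_flag(spo2: float) -> bool:
    """Expert system: a fixed if-then rule written by a clinician."""
    return spo2 < 92.0  # flag possible hypoxemia below a hard-coded cutoff

def train_threshold(readings, labels):
    """Machine learning (in minimal form): derive a cutoff from labeled
    data by splitting the difference between the two class means."""
    flagged = [r for r, y in zip(readings, labels) if y]
    normal = [r for r, y in zip(readings, labels) if not y]
    return (sum(flagged) / len(flagged) + sum(normal) / len(normal)) / 2

# Illustrative training data: readings and whether a clinician flagged them.
readings = [88.0, 90.0, 91.0, 95.0, 97.0, 99.0]
labels = [True, True, True, False, False, False]

learned_cutoff = train_threshold(readings, labels)

def ml_flag(spo2: float) -> bool:
    """ML-derived rule: same interface, but the threshold came from data."""
    return spo2 < learned_cutoff
```

The two functions are interchangeable from the user's point of view, which is precisely why transparency about how the rule was produced matters.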

Figure 1. Artificial Intelligence Hierarchy [1]

User Engagement

AI has become an increasingly popular tool for enabling advancements in modern medicine. These applications have limitations, however, especially when developments in AI can influence real patients. Device labeling is largely not posted by the FDA, databases cataloging AI/ML-enabled devices are not standardized or consistent, and reporting on these devices likewise lacks standardization. While the FDA conducts in-depth reviews of these devices, more could be done to make this information accessible and understandable to those who rely on AI-enabled devices for their profession or their medical care. Akin to the Cybersecurity Awareness for Connected Medical Devices video released by the FDA in November 2021, similar outreach should be undertaken to engage those who interact with AI-enabled medical devices. For developers of these devices, human factors evaluations of user interfaces should include users' comprehension of AI capabilities and inherent limitations, as well as the use-related risk involved in their adoption and continued use in the medical field.

The meaning and role of transparency certainly change with the eye of the beholder and may not look the same for the surgeon and the patient. An effort should be made to increase all users' basic understanding of these devices, which will increasingly govern key aspects of our medical care. To improve the user interface of these devices, consider including a standard label, comparable to a food and nutrition label, that could be easily accessed and understood by a variety of users. See Figure 2 for a suggested format for informing users of the AI implemented by a connected medical device.

Figure 2. Example AI/ML Product Label

The success of the food and nutrition label has been supported by PSAs and other education campaigns that established a common understanding of its terminology and use; significant public outreach may likewise be required to ensure an effective rollout of any AI label standardization. Using a standardized label, patients could determine the degree to which the device uses AI, the security safeguards in place for their Patient Health Information (PHI), the level of risk involved, the accuracy of the algorithm with respect to the patient's specific characteristics, and red flags to look for along with actions to take if those red flags are observed. This would also increase opportunities for Human-in-the-Loop (HITL) feedback to help ensure validity in continued AI algorithm development.
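One way to think about such a label is as a structured record with a fixed set of fields. The sketch below is a hypothetical illustration of the fields suggested above; the field names, device name, and example values are assumptions for discussion, not a proposed regulatory format.

```python
# Hypothetical sketch of an AI/ML product label as a structured record.
# All field names and example values are illustrative assumptions.
from dataclasses import dataclass, field

@dataclass
class AIMLProductLabel:
    device_name: str
    ai_role: str                 # degree to which AI informs the device output
    phi_safeguards: list         # security measures for Patient Health Information
    risk_level: str              # e.g., "low" / "moderate" / "high"
    accuracy_by_population: dict # reported accuracy per patient characteristic
    red_flags: dict = field(default_factory=dict)  # warning sign -> action to take

# Example instance for a hypothetical sensor-based device.
example = AIMLProductLabel(
    device_name="Example pulse oximeter",
    ai_role="ML model adjusts raw sensor readings; clinician confirms results",
    phi_safeguards=["encrypted at rest", "de-identified before model training"],
    risk_level="moderate",
    accuracy_by_population={"darker skin tones": "reduced accuracy reported"},
    red_flags={"reading conflicts with symptoms": "seek clinical confirmation"},
)
```

A fixed schema like this would make labels comparable across devices, much as standardized nutrition panels allow comparison across food products.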

Wrap Up

It has been almost a year since the FDA held its virtual public workshop on transparency in Artificial Intelligence/Machine Learning (AI/ML)-enabled medical devices in October 2021. As AI-enabled devices become increasingly prevalent in healthcare, considerations for the user should be at the forefront of these developments. In the field of human factors, evaluations of products and devices that employ AI should assess end users' awareness and comprehension of the AI's involvement. From searching for a provider and making an appointment to monitoring measures such as glucose levels or blood pressure, medical devices have become substantially more sophisticated. Those in the healthcare industry must ensure measures are in place to give their end users a transparent understanding of the AI used in their products and services.

Agilis has worked in the human factors and instructional design development space for over 20 years. Our work includes regulatory consulting, risk analysis, worldwide human factors study design, conduct, and reporting, training and labeling development, and human factors post-market surveillance. Through this work, we have seen firsthand the challenges that AI development faces and the opportunities to continue technological advancement while maintaining a user focus. Building from our core expertise, Agilis will continue to expand our support within the digital health landscape to help clients address user needs.


References:

[1]  https://python-tricks.com/ai-vs-ml-vs-deep-learning/


About the Author:
Lauren Jensen, PhD

Lauren Jensen, PhD, is a Biomedical Engineer and Sr. Consultant, HFE with Agilis Consulting Group. Lauren is experienced in applying human factors principles to the design, evaluation, and validation of medical devices and products. Prior to joining Agilis, Lauren worked in the startup space in Austin, TX, engineering wearable medical products, and was a top-ten finalist in NASA iTech Cycle III for innovative technologies. During her PhD at Tulane University School of Medicine, Lauren developed and validated a therapeutic wearable to reduce surgeon tremor and fatigue in the OR.



We are presenting!

RAPS Convergence
Phoenix, AZ
Sept. 11-13

Lauren Jensen, PhD