AI Summit Highlights: Transparency, Bias in AI, and Exciting Advances

 

Memorable Quotes:

  • "Last year, there was a lot of discussion about AI models that could continuously learn. This year, the focus shifted to large language learning models and evaluating their performance." - Alex Kruzer

  • "One important consideration for bias is how user interfaces can influence bias in datasets, especially with electronic medical record software." - Lauren Horn

  • "The more we can involve stakeholders, from physicians to nurse specialists, throughout the development of the algorithm, the better we can design for the user." - Lauren Horn

Transcript:

Announcer - 00:00:03: Welcome to The Factor, a global medical device podcast series powered by Agilis by Kymanox. Today's episode is hosted by Alex Kruzer, an Engineering Manager and Human Factors Engineering Consultant with Agilis. And she's joined by Dr. Lauren Horn, Manager and Senior Consultant of Human Factors Engineering at Agilis. Together, they recently traveled to Cincinnati, Ohio for the annual AI Summit, hosted by the Association of Food and Drug Officials and the Regulatory Affairs Professionals Society (AFDO/RAPS) Healthcare Products Collaborative. AFDO/RAPS is known for supporting the healthcare products community through sharing, collaboration, and learning. This was their second year attending, and right away they noticed some changes from last year. Here's Alex.

Comparison 2022 to 2023

Alex - 00:00:49: So I'll just start by saying that there definitely were some differences in what the hot topics were this year compared to last year, at least that I noticed. Last year, there was a lot of discussion about AI models that could continuously learn after they had been released and how to think about the potential risks of that situation. This year, I noticed that topic didn't come up as much, which was interesting. Instead, there was a lot of discussion around large language models and how to evaluate whether they are performing as expected, whether they're unbiased, et cetera, which is a pretty challenging problem to try to tackle. But there was a lot of really good discussion about that topic this year, which I appreciated. I think it was a hot topic because of the explosion in popularity of ChatGPT and similar tools. So Lauren, what did you notice from last year compared to this year?

Lauren - 00:01:42: I noticed that as well, just the shift from continuous learning to how to ensure bias is mitigated, especially with those large language models. One aspect of the summit that I greatly appreciate is the continued effort to bring in healthcare providers for input and guidance on how AI and ML applications can best serve users. So while the topics might be changing, there is a continued effort to ensure those healthcare providers are part of the conversation, and importantly, that these applications can best serve the users, those being both the providers and the patients themselves. Conversations where industry, FDA, and healthcare providers can discuss the best approach to AI and ML integration really foster that creativity and ensure the end product meets the user needs. Clearly, the healthcare providers might not have the expertise or time available to develop the algorithms themselves, but making sure that they're part of the conversation is a really great step. And it's wonderful that the summit makes sure to include those members on the team.

Alex - 00:02:57: Yeah, 100% agree with that. I think that's so valuable to have all of those perspectives as part of the conversation. And speaking of user needs, one of my favorite topics that was discussed this year was transparency in AI models.

SHAP Values

And in one of the presentations, SHAP values were discussed and their potential for explaining how a given algorithm arrived at its output. I just thought that was interesting because I hadn't been exposed to it before. SHAP values actually come from game theory: they're based on Shapley values, a way of assigning a score to each feature under consideration by an algorithm, indicating how much that feature contributed to the algorithm's output. The reason this is so interesting and important is that having metrics that can explain what's happening really contributes to the usability of these algorithms. It mitigates risk by helping users understand what's happening with the technology they are using. I'm really excited to see where that subsegment of the field goes when it comes to transparency and design like that, and I really think that we in industry need to be pushing it forward.
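
For readers who want to see this concretely, here is a minimal sketch of computing SHAP values with the open-source shap package and a scikit-learn model. The bundled diabetes dataset and the random forest below are illustrative stand-ins, not the data or model from the presentation.

```python
# A minimal sketch of computing SHAP values, assuming the open-source
# "shap" package; the bundled diabetes dataset is an illustrative stand-in.
import shap
from sklearn.datasets import load_diabetes
from sklearn.ensemble import RandomForestRegressor

# Load a small tabular dataset and fit any model.
X, y = load_diabetes(return_X_y=True, as_frame=True)
model = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, y)

# TreeExplainer assigns each feature a Shapley value: its contribution
# to pushing a given prediction away from the model's average output.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X)

# For a single patient, the per-feature contributions (plus the base
# value) sum to the model's output -- the transparency property.
print(dict(zip(X.columns, shap_values[0].round(2))))
```

Calling shap.summary_plot(shap_values, X) on the same arrays would render the familiar beeswarm view of which features drive predictions overall.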

Lauren - 00:04:07: Absolutely agree. I think that really speaks to the explainability theme throughout this year's and last year's summits. Getting into the specifics of concepts such as SHAP values really helps the full audience of the summit think about things more concretely and put them into practice. I thought that presentation was really interesting and helped non-coders understand the specifics of how the algorithm was designed and built, but also maintained.

Bias in AI

So one of my other favorite presentations was the bias in AI panel moderated by Pat Baird. Panel members discussed the nuances of data bias and how transparency is needed to identify sources of bias in the data sets that algorithms are trained on, but they also emphasized transparency in the recognition and mitigation of bias in algorithms that are currently functioning. That gets into the predetermined change control plan, which we'll talk a little more about later. Importantly, from a human factors perspective, one key consideration is how user interfaces can influence the bias inherent to datasets, especially with electronic medical record software. There was also a call to action for lifecycle bias awareness and management, and for considering whether additional training data should be included to mitigate risks in your algorithm. So Alex, what were some of your key takeaways from the conference this year?

Transparency in AI

Alex - 00:05:47: Yeah. So, I touched on this a little bit when I was talking about the transparency metrics, but definitely one of my key takeaways is that transparency with AI technologies continues to be on the minds of thought leaders in this space. It's a problem we still need to solve, and I think we in industry really need to be thinking about how best to address it and keep AI-enabled products as safe as possible to use. One of the ways we can do that is by being appropriately transparent about what's happening with this technology as it's in use. What about you? What was one of your key takeaways?

Predetermined change control plans

Lauren - 00:06:30: So I really appreciated one of the final presentations, on predetermined change control plans. We continue to learn insights from FDA on the predetermined change control process and the draft guidance that was published. One of the insights mentioned during the panel was the advantage for sponsors of having the engineers behind the algorithm present at the table for FDA discussions. This ensures that the sponsor is using the pre-sub to their best advantage and that those behind the science of the device or product can really speak to questions FDA might have. It helps to facilitate the discussion and drive the product forward.

Alex - 00:07:20: Yeah, I thought that presentation was really awesome. And I'm really looking forward to seeing what else comes out of those ongoing discussions in terms of guidances and publications.

Lauren - 00:07:32: The panelists also really encouraged sponsors to be clear on the origins of their predetermined change control plan. So transparency there is crucial as well.

Become an Industry Expert

Alex - 00:07:46: So Lauren, in your role at Agilis, you're one of our experts on digital health, and conferences like this are a great opportunity to stay on top of regulatory guidances. What other ways do you make sure that you stay an expert in a field like this that's constantly evolving?

Lauren - 00:08:03: Good question. The summit is absolutely an opportunity to stay on top of regulatory guidances and get feedback directly from FDA panelists. That's incredibly valuable, not only during the panels themselves, but also during the side conversations that happen during meals and breaks. Another opportunity is just to see where trends in the market are going; there were certainly a lot of discussions of app-based AI and evaluating how it can serve the healthcare industry. There are several resources available to stay up to date in this space. I would be sure to sign up for FDA's email listservs from the Digital Health Center of Excellence, as well as the CDER and CDRH listservs. One of my favorite podcasts is Let's Talk Risk! with Naveen Agarwal, and I also follow thought leaders from industry and FDA. Naveen has also been featured on The Factor, this podcast, as well.

Alex - 00:09:15: Yeah, that's a lot of really great advice. I also want to add connecting on LinkedIn with various thought leaders in this space. There's a lot of activity there as well, with people posting and chatting about new developments, so that's been a great resource for me on various topics. So what are you most excited about for the next year when it comes to AI?

Guidance Documents and Industry Participation

Lauren - 00:09:40: So specifically, the guidance documents prioritized by FDA, their A-list and B-list of prioritized draft and final guidance documents, and then also presenting our subgroup's white paper on AI and ML applications at the point of care for healthcare providers. We are working to get that published in January 2024.

Alex - 00:10:06: That's awesome. I know that one of the other working groups, the GMLP working group, is also working on a white paper looking at post-market activities for AI devices, so we should see that published early next year as well. If any of our listeners are interested in participating in a volunteer working group run by AFDO/RAPS, they have an open call for participants. If you go to healthcareproducts.org and look for the working teams, each has its own page where you can get in contact and get involved with projects. I just wanted to mention that in case we have any listeners who are interested.

AI Foundations Workshop: Jupyter platform

Lauren - 00:10:49: Come join the team. One other really interesting and hands-on opportunity at the summit was the AI Foundations Workshop, "From Planning to Production," where we were able to use the Jupyter platform to run a Python script to analyze a data set of patients with diabetes. That really helped us see the different tweaks you can make, not only to a data set, but to how it's analyzed, and how minor changes in a data set could influence your end result. It also covered different strategies to implement, for example, categorical versus continuous values. And that's also, I believe, where we touched on SHAP values as well. So it was a really interesting opportunity to be able to say, you know, I went to the summit and ran an AI script.
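
As a rough sketch of that kind of workshop exercise, the snippet below assumes scikit-learn's bundled diabetes dataset as a stand-in for the one used at the summit. It shows how a single preprocessing tweak, binning BMI into categories instead of leaving it continuous, shifts a model's cross-validated score.

```python
# A rough sketch of the workshop-style exercise, assuming scikit-learn's
# bundled diabetes dataset as a stand-in for the summit's data set.
import pandas as pd
from sklearn.datasets import load_diabetes
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import cross_val_score

X, y = load_diabetes(return_X_y=True, as_frame=True)

# Baseline: treat every feature, including BMI, as continuous.
continuous_score = cross_val_score(LinearRegression(), X, y, cv=5).mean()

# Tweak: re-encode BMI as three categorical bins (one-hot encoded) and
# refit -- a small preprocessing change with a measurable effect.
X_binned = X.copy()
X_binned["bmi"] = pd.qcut(X_binned["bmi"], q=3, labels=False)
X_binned = pd.get_dummies(X_binned, columns=["bmi"])
binned_score = cross_val_score(LinearRegression(), X_binned, y, cv=5).mean()

print(f"continuous BMI R^2: {continuous_score:.3f}")
print(f"binned BMI R^2:     {binned_score:.3f}")
```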

Alex - 00:11:55: Yeah, I felt the same way. We touched earlier on how important it is to have different voices at the table in these discussions about policy and regulations and how this industry is going to move forward. I think that type of exercise was a really great way to expose all the attendees of the summit to the technical side of an AI algorithm, which this group might not have had exposure to in the past, considering that a lot of the attendees work in regulatory and not necessarily in data science or computer science. It prompted a lot of interesting discussion, just looking at how the algorithms work, how you can have your script tell you what's happening with the algorithm, and how you can uncover potential sources of error or bias by looking at those numbers. So I thought that session was really great.
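
To illustrate the kind of number-driven bias check Alex describes, here is a minimal sketch that stratifies model error by a subgroup. It again assumes the scikit-learn diabetes dataset; the standardized "sex" column is just an illustrative grouping variable, not a claim about the workshop's actual analysis.

```python
# A minimal sketch of a subgroup error check; the "sex" column is an
# illustrative grouping variable, not the workshop's actual analysis.
import numpy as np
from sklearn.datasets import load_diabetes
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

X, y = load_diabetes(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestRegressor(random_state=0).fit(X_train, y_train)
errors = np.abs(model.predict(X_test) - y_test)

# Compare mean absolute error across subgroups: a large gap flags a
# potential bias the training data or the model may be encoding.
for group in sorted(X_test["sex"].unique()):
    mask = X_test["sex"] == group
    print(f"group {group:+.3f}: MAE = {errors[mask].mean():.1f}")
```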

Lauren - 00:12:57: One of the bigger points of that talk was also to define your problem first and then establish the need for a solution, as opposed to throwing a giant data set into a Jupyter or Python script to see what you can get out of it. Once you've established the issue and the need for a solution: does this follow your company's strategy? What are the benefits of implementing it, and how will they be measured? A lot of companies right now are toying with the idea of implementing AI in their product or company structure, and I think it's important to keep this framework at the front of our agenda.

Alex - 00:13:49: Yes, I 100% agree. I think there's a tendency to think of AI like a hammer, and then you want to find all the nails you can hit with it. But really, at the end of the day, the best approach is to look at the problems you have and think about which of those problems could be best solved by an AI algorithm. That way you're setting yourself up for success, because you design your AI solution to solve that particular problem, versus, as you said, just throwing data at something and seeing what comes out. So I appreciated that part of the session as well. Now talk to me a little bit about the stakeholders in AI development and policy, and who should be involved in these discussions.

Inclusion of Stakeholders (Healthcare providers)

Lauren - 00:14:39: That is a great question. One of the initial presentations was a fireside chat on day one of the summit, moderated by Eric Henry. There were a lot of really influential panelists, such as Laura Adams from the National Academy of Medicine, Dr. Brian Anderson from the Coalition for Health AI, or CHAI, Mitul Patel from Google, and Troy Tazbaz from CDRH. The discussion really touched on how medical error can be, or is, a system design issue, and how we need to be more transparent about medical outcomes to make sure those outcomes are making it into the data sets we train our algorithms on. This directly aligns with the human factors engineering approach and a holistic understanding of medical device and combination product design. We need to design for the user and not blame the user. And the more we can involve stakeholders, those being healthcare providers from physicians to nurse specialists to phlebotomists, the entire spectrum, depending on the medical device being designed, the better we can design for the user. So it's important to get feedback from those end users throughout the development of the algorithm.

Alex - 00:16:22: Yeah, definitely. And to add on to that, something I remember being discussed was the importance of end users being willing to be open and honest about their experiences, and making sure we have the important conversations about what could happen in the field, so that developers are aware of those things and can make their designs better based on that information, and so we avoid bias as much as possible. I thought it was a really good discussion for the panel to have.

Lauren - 00:16:58: Agreed.

Final Thoughts

Alex - 00:17:00: All right, Lauren, do you have any final thoughts? Are you looking forward to next year? Will you be attending the summit next year?

Lauren - 00:17:07: So I hope to attend next year. Looking forward to 2024. The AI Summit location hasn't been announced yet, but I just really want to thank the entire AI Summit planning committee for such a thoughtful and robust program. The agenda this year was exceptional, and I really appreciated all the connections I was able to make. I learned a lot. Thanks. And thanks, Alex, for coming with me.

Alex - 00:17:36: Yeah, I agree with everything you just said. I think the planning committee did a great job, and I'd be glad to get the opportunity to go again next year as well.

Lauren - 00:17:45: Okay. See you next year. 

Alex - 00:17:46: All right. 

Announcer - 00:17:54: That was Alex Kruzer and Lauren Horn on how artificial intelligence is being used in the pharma, biopharma, and medical device industries. Thank you so much for listening to or watching this episode. Please subscribe to or follow this podcast in whatever app you're using right now, or follow Agilis by Kymanox on LinkedIn for all updates. This episode was edited and produced by Earfluence. We'll see you again soon on The Factor.
