Julie Barnes
September 30, 2022
Maverick's Update
Only What Matters In Health Information Policy
There is a lot going on this week. Obviously, we could link NASA's mission that crashed a spacecraft into an asteroid to the 1998 sci-fi movie Armageddon. Or compare Hurricane Ian to the 1996 movie Twister -- but that's too terrible to joke about right now. The historic White House Conference on Hunger, Nutrition & Health made us think about the 2007 Disney movie Ratatouille, but this blog is about digital health, and rats don't use computers. So, in the One Thoughtful Paragraph, we focus on the Incredibles way the FDA decided to regulate some software solutions as medical devices.
Other incredible news about health data:
Ten leading provider and trade organizations, including the AMA, AHA, and CHIME, wrote a letter to HHS asking for a one-year extension of the October 6 information-sharing deadline in the 21st Century Cures Act, citing knowledge gaps and confusion about the implementation and enforcement regulations. More here.
During a panel hosted by Kaiser Permanente, representatives from Mayo Clinic, Google, and other organizations shared how they are using AI to improve diagnostics and risk management, while also facing challenges with equity, algorithm transparency, and privacy.
The Duke-Margolis Center for Health Policy examined how bias in AI-enabled tools may negatively impact patients. The Pew Charitable Trusts published an interview with the Center’s Research Director for Digital Health to discuss the findings.
One Thoughtful Paragraph
One of the more brilliant Pixar-Disney creations is The Incredibles series, about a family with superpowers trying to restore the public’s trust in superheroes while balancing regular family life. The not-as-great sequel, Incredibles 2, still features Holly Hunter, which is good, and baby Jack-Jack’s fight scene with a raccoon is a classic. But the kind-of-a-stretch storyline is about conquering a villain called the “Screenslaver,” who hypnotizes people via TV screens to do terrible things like assassinate the U.S. Ambassador (who is clearly supposed to be Madeleine Albright). In real life, the FDA is trying to prevent software from hypnotizing doctors and nurses into making bad medical decisions. This week, the FDA released new guidance explaining how it will regulate some AI tools as medical devices, including tools that predict sepsis (sometimes called blood poisoning). Because sepsis is a quickly evolving medical emergency that is difficult to diagnose, hospitals use software that sets off alarms when a patient may be developing sepsis. While this can be life-saving, the problem is that AI-enabled sepsis detection tools -- which have not been regulated as medical devices so far -- may cause “alert fatigue,” leading doctors and nurses to ignore alarms that cry wolf too often. While we are sure that none of these software solutions will lead to ambassador assassination attempts, or even fights with raccoons, it may be a good idea for the FDA team to wear incredible costumes when deciding which clinical decision software to regulate.