As Artificial Intelligence (“AI”) in healthcare grows more ubiquitous, the long-standing and complex legal environment continues to earn its reputation as just that: complex. The technology can feel nascent to the user, but AI enters healthcare as the industry continues its long struggle with a maze of policymaking and regulatory requirements. Each AI use case must be analyzed individually and against a blueprint of 21st-century digital health legal heavy hitters: matters of privacy, security, data use, intellectual property, bias, antitrust (and more!).
While the promise of AI is immense, it also raises a number of regulatory challenges. The technology is evolving faster than policy can keep pace. Competitive threats impose tight time frames that leave little room to navigate complex considerations. Perhaps most noteworthy, an AI risk analysis often feels like it is being retrofitted to a number of laws that, when constructed, never contemplated their application to AI.
In a way, AI is to healthcare laws as a renovation is to a house. A renovation can bring new and innovative systems and increase home value. Renovators, however, have to balance the urge to make cosmetic fixes against the risk of failing to address larger issues, like a crack in the foundation of the house.
As the industry prioritizes investment in the use of AI to improve healthcare, it does so without the proper privacy infrastructure. Currently, privacy in the U.S. is governed by a patchwork of laws. In the absence of a comprehensive federal consumer privacy law governing the use of individuals' personal information, some states have developed their own “renovation plans.” States are recognizing the shifting use of technology and enacting laws that give individuals rights to their information and impose obligations on businesses, within and outside of the context of AI.
AI needs troves of diverse data, and often identifiable information, to function well. The current patchwork of privacy requirements heightens risk for both consumers and developers of AI. For consumers, the rights to their personal information vary by state, leaving many individuals without transparency or a clear understanding of how their information is being used – ultimately, limiting their ability to make informed decisions about their privacy. For businesses, the lack of consistent regulations creates compliance uncertainty, leading some to over-collect unnecessary data while others avoid collecting sensitive data altogether to minimize legal risk. In healthcare, where effective privacy is essential to delivering care, inconsistent AI privacy laws result in less effective technology and continued concern and distrust among consumers.
The AI “house” in healthcare is already under renovation, and until a comprehensive federal privacy law that preempts state laws is passed, we risk building AI systems on a shaky foundation. We may not feel those cracks right away, but the environment makes AI and healthcare susceptible to further cracks, divisions, and inequitable distribution of AI's benefits.
Connect with Lisa here.