Should AI Medical Devices be certified like physicians?
Updated: Oct 20, 2020
Should ML-driven medical devices be monitored and certified individually – more like physicians – rather than, as we do today, as armies of clones “frozen” in time when they are released into the wild? Even with the impressive advances in DevOps that are compressing release cadences and automating testing, there are limits to how far even the most sophisticated ML models can evolve under these constraints. Conversely, patient safety and prevention of abuse must always remain the overriding priorities.
Currently, healthcare regulators and practitioners are unified in prohibiting the embedding of AI/ML engines configured to evolve and potentially alter their behavior in situ, after final testing and regulatory certification. The rationale is, at this point in time, justified and unassailable: assuring patient safety and preventing abuse are non-negotiable requirements. As the saying goes, the cure cannot be worse than the disease.
There can be no resolution of this impasse unless and until AI/ML innovators produce a trusted, repeatable, and transparent means of assuring that these fundamental principles will be met.
Can we beef up existing frameworks and development pipelines?
Even though development and product lifecycles are iterative and cyclical, the sequencing of tasks within a single iteration is decidedly linear, with strict gates governing promotion from one stage to the next; to ensure quality and predictability, there can be no allowance for behavioral change after a production “freeze.” Given this constraint, how can a single instance of a medical app or device be permitted to evolve past its certified build AND independently of its clones? Is there any scenario in which regulators and practitioners (and patients too, of course) could possibly allow for such an eventuality?
As it turns out, they (we) already do! Consider your favorite, most-trusted physician. How has she matured since medical school, and independently from her graduating class? She’s licensed and periodically re-certified, and she is no doubt your favorite in large part because of her experiences after graduation.
I’ve adapted a graphic from CHI’s Machine Learning and Medical Devices: Connecting practice to policy (Appendix C) to illustrate how a reimagined medical device certification framework might use today’s physician training and credentialing as a model. Here are a few points that may be worth considering.
Physicians go through basic, supervised, and unsupervised training before being released into “the wild.” This is exactly what we are contemplating.
Physicians are tested and certified throughout their lifetime in different ways and at different intervals to accommodate where they are in their professional lifecycle.
Physicians evolve independently from one another and are regulated individually.
Physicians are most effective after they have gone through formal training and have had the opportunity to mature and adapt to ongoing and localized environmental and medical factors.
Today’s AI/ML solutions, because they cannot develop independently after release, are stunted at the equivalent of a newly licensed resident.
Credentialing physicians and verifying their credentials is a nightmare
Clearly, traditional physician credentialing workflows and bureaucracies could never scale to manage the volume and complexity of continuous medical device certification. The good news is that a number of initiatives are already underway that completely reimagine this problem using mobile devices, cloud platforms, and distributed ledgers (blockchain – there, I said it). Could one of these approaches scale to support devices and apps?
How could this work?
For a more thorough treatment of this topic – how organizations like the FDA are actively working to support machine learning broadly, why beefing up traditional testing, certification, and reporting frameworks may not be well-suited to this particular set of requirements, and how modern physician certification patterns might be adapted as a starting point for remedying this very thorny problem – see CHI Releases Good Machine Learning Practices White Paper and the white paper itself, Machine Learning and Medical Devices: Connecting practice to policy (Appendix C covers this topic).
Do you think this could work? Why? Why not?