It's been a busy week at the intersection of FDA policy and healthcare AI... 🚀

Today, the FDA, Health Canada | Santé Canada, and the Medicines and Healthcare products Regulatory Agency jointly identified guiding principles for #transparency for machine learning-enabled medical devices (MLMDs). These principles build on the guiding principles for good machine learning practice (GMLP) released in 2021. Specifically, the principles serve as a framework for adopting and advancing good transparency practices while emphasizing the role of human-centered design and explainability. The guiding principles for transparency of MLMDs consider the following: who (relevant audiences), why (motivation), what (relevant information), where (placement of information), when (timing), and how (methods used to support transparency). Collectively, the guiding principles may shed light on novel legal issues in the #SaMD space relating to generative AI.

➡ On Monday, the FDA added six new web pages to its Center for Devices and Radiological Health (CDRH) #AI Program hub. The new content covers a variety of topics, including:
- Methods and Tools for Effective Postmarket Monitoring of Artificial Intelligence (AI)-Enabled Medical Devices
- Identifying and Measuring Artificial Intelligence (AI) Bias for Enhancing Health Equity
- Performance Evaluation Methods for Evolving Artificial Intelligence (AI)-Enabled Medical Devices

Unsurprisingly, the FDA continues to play an instrumental role in the AI regulatory arena. As AI integrates with virtually every aspect of the #healthcare industry, the answers to these critical questions will become increasingly crucial for all: ➡
- Will FDA regulate your AI-powered digital health tool?
- Can you avoid FDA scrutiny?
- What are the key factors FDA considers when deciding how to regulate AI-enabled software?
- What kinds of AI software fit into the non-regulated categories of non-device CDS, or enforcement discretion?
- If you modify or update your algorithm, will you have to re-submit to the FDA?
- How does FDA look at generative AI tools? How is that different from algorithmic AI tool regulation?

➡ Rebecca E. Gwilt, Michael Schellhous, and I hope to answer these questions (and more) in our forthcoming webinar about how AI developers and deployers should consider FDA regulations as part of their go-to-market strategy.

💻 Webinar link: https://lnkd.in/eRBifP-m
FDA Transparency Principles: https://lnkd.in/eXdzDcwM
Sam Pinson’s Post
Fresh off the press! The Proposed 2025 MPFS is out, and unsurprisingly, there are major changes in the works for virtual care management services, new reimbursement opportunities for "Digital Mental Health Treatment," the Medicare Diabetes Prevention Program (MDPP), Medicare Part B Payment for Preventive Services, and more throughout this 2000+ page proposal. Be sure to check it out! 🚀
CMS' proposed 2025 Medicare Physician Fee Schedule has arrived, and it includes several major updates that will impact healthcare innovators and digital health leaders! A few of the most significant changes include:
👉 New “Advanced Primary Care Management” HCPCS Codes
👉 New payment for practitioners using Digital Mental Health Treatment Devices
👉 Changes to Reimbursement for Care Management services in FQHCs and RHCs
👉 Another extension for virtual Direct Supervision
👉 Telehealth flexibilities for Substance Use Disorder treatment
👉 Added flexibility for supervision of PTAs and OTAs

CMS is accepting public comments on the proposals until September 9, and we strongly encourage healthcare companies to provide input. Ready to learn more about what these proposals mean for your business? Read our key takeaways at https://lnkd.in/et6kQyED.

#cms #pfs #digitalhealth #healthcareinnovation
Sunday Select: Health AI Industry Standards Take Shape 📰

The Coalition for Health AI (CHAI), a diverse group of healthcare industry stakeholders focused on harmonized AI standards, released its draft framework for responsible health #AI. The framework includes an Assurance Standards Guide and an accompanying Assurance Reporting Checklist (ARC).

➡ The CHAI Assurance Standards Guide and ARC present the most comprehensive set of principles and #governance checklists to date. Importantly, the two documents align with several leading AI frameworks, including the White House Blueprint for an AI Bill of Rights, several frameworks from the National Institute of Standards and Technology (NIST), and the National Academy of Medicine’s (NAM’s) AI Code of Conduct work (among others).

➡ The Assurance Standards Guide considers the following five principles-based themes: (1) Usefulness, Usability, and Efficacy; (2) Fairness and Equity; (3) Safety and Reliability; (4) Transparency, Intelligibility, and Accountability; and (5) Security and Privacy. The Guide also outlines CHAI's 6-Stage Lifecycle for Health AI Development and Deployment: (1) Define the Problem & Plan, (2) Design the AI System, (3) Engineer the AI Solution, (4) Assess the System, (5) Pilot the System, and (6) Deploy & Monitor the System.

➡ In parallel to the Assurance Standards Guide is the ARC, which spans four checkpoints: (1) Initial Planning, (2) Readiness for Real-World, (3) Real-World Impact and Full Deployment Readiness, and (4) Large-Scale and Longer-Term Impacts. The four checkpoints are "intended to guide the development and evaluation of a complete AI solution and system against CHAI standards for trustworthy AI."

➡ #Developers and #deployers of health AI solutions face a shifting landscape of regulatory obligations at the federal and state levels, patient safety and litigation risk, and an array of industry principles and standards guides.
The CHAI documentation centers the conversation on AI governance processes calibrated for healthcare industry stakeholders and is a step toward consensus.

➡ In addition to internal governance and testing requirements, developers and deployers should be cognizant of the existing web of healthcare-specific regulations (e.g., FDA #SaMD implications). Moreover, developers and deployers face various considerations from a contracting perspective (e.g., larger organizations tend to place stringent AI-specific governance and testing obligations on vendors of AI services). Implementing an AI governance and assurance strategy is essential for mitigating risks, ensuring compliance, and fostering trust. 💡

Rebecca E. Gwilt Carrie Nixon Michael Pappas Kaitlyn O'Connor LUKASZ KOWALCZYK MD Fabio Thiers, MD PhD Mandeep Maini, NACD.DC™ Jeffery Recker

More info:
Sunday Select: Funding and Partnerships Update 🚀

Humata Health Raises $25mm 💻
Humata Health, a generative AI startup focused on improving the prior authorization process, closed a $25mm funding round led by The Blue Venture Fund and LRVHealth. Humata will use the proceeds to invest further in generative AI solutions that improve the payer and provider experience. Administrative applications of AI are generally perceived to be lower risk from a regulatory perspective than clinical AI tools. However, developers and deployers should be cognizant of the rapidly changing AI regulatory landscape at both the federal (HTI-1 Final Rule, Section 1557 Final Rule, and agency guidance published in response to #EO14110) and state (CA ADMT proposals, Utah AI Policy Act, and the Colorado AI Act) levels.
More info: https://lnkd.in/epv8Wkm5

Color Health Partners with OpenAI 🤝
Color, a late-stage health tech startup, partnered with OpenAI to "pioneer a new way of accelerating cancer patients’ access to treatment." Color and OpenAI developed a copilot application powered by #GPT-4o to "identify missing diagnostics and create tailored workup plans, enabling healthcare providers to make evidence-based decisions about cancer screening and treatment." Partnerships between frontier model developers and health tech startups and systems are promising; a collaborative arrangement lies between the build and buy options. Developers and deployers should undertake appropriate diligence tailored to AI considerations like data provenance, human supervision, reliability, and fairness.
More info: https://lnkd.in/eWmazeHA

Pomelo Care Raises $46mm Series B 📈
Pomelo Care, a virtual maternal care provider, closed a $46mm Series B round led by First Round Capital and Andreessen Horowitz. Pomelo Care offers virtual maternity care services and "analyzes claims and health record data to proactively identify individual risk factors."
Pomelo Care also screens for social determinants of health (SDOH) to help patients navigate their benefits. Specialty virtual care solutions continue to grow as generalist telehealth models mature. Multi-state and national virtual care models face a patchwork of state scope-of-practice, licensure, CPOM, and telehealth laws (e.g., California #AB3129, which would restrict private equity ownership in and management of healthcare organizations). As such, continuous monitoring of state-level regulatory developments is a prerequisite.
More info: https://lnkd.in/eVweBXs7

Carrie Nixon Rebecca E. Gwilt Michael Pappas Kaitlyn O'Connor
We're going to cover a lot of ground on this one, from seminal principles of FDA #SaMD regulation to considerations unique to generative AI products (and the steps developers can take to minimize risk). Be sure to tune in for a discussion of the following common questions (and more) about AI regulation:
➞ Does the FDA have a specific, separate regulatory pathway for AI devices?
➞ Will the FDA look at AI applications differently from other healthcare applications?
➞ If I have an AI-enabled tool in the healthcare field, do I need to do a specific FDA analysis for my tool?

Nixon Gwilt Law
⚠️1 𝐷𝐴𝑌 𝐿𝐸𝐹𝑇 𝑇𝑂 𝑅𝐸𝐺𝐼𝑆𝑇𝐸𝑅! Are you a health tech founder trying to develop AI-powered solutions amidst the FDA's ever-changing regulatory landscape? 🚀💡 We hope you'll join us for tomorrow's webinar, where members of Nixon Gwilt's Healthcare AI team will answer your most pressing legal questions about all things FDA & AI. ⚖️🤖

𝐉𝐨𝐢𝐧 𝐮𝐬 𝐟𝐨𝐫 𝐚𝐧 𝐞𝐧𝐥𝐢𝐠𝐡𝐭𝐞𝐧𝐢𝐧𝐠 𝐬𝐞𝐬𝐬𝐢𝐨𝐧 𝐰𝐡𝐞𝐫𝐞 𝐲𝐨𝐮'𝐥𝐥 𝐠𝐚𝐢𝐧 𝐢𝐧𝐯𝐚𝐥𝐮𝐚𝐛𝐥𝐞 𝐢𝐧𝐬𝐢𝐠𝐡𝐭𝐬 𝐢𝐧𝐭𝐨:
1️⃣ Can the FDA regulate your AI-powered digital health tool? 🤔💻
2️⃣ Can you avoid FDA scrutiny? 🛡️📋
3️⃣ What factors does the FDA consider when deciding how to regulate AI-enabled software? 🔍📊
4️⃣ Which software fits into the non-regulated categories of non-device CDS or enforcement discretion? 🗂️📑
5️⃣ Will you have to re-submit your algorithm to the FDA after modifying or updating it? 🔄📈
6️⃣ How does the FDA look at generative AI tools, and how does this differ from algorithmic AI tool regulation? 🤖⚙️

Whether you're a healthcare provider, legal professional, or industry stakeholder, this webinar offers a unique opportunity to help you successfully navigate the FDA's regulatory landscape through strategically planned product development that aligns with FDA expectations. 🌐🛠️ Don't miss this exclusive opportunity to learn from Nixon Gwilt attorneys and engage in thought-provoking discussions. Reserve your spot by registering today! 📝🔗
👉🏼 https://bit.ly/4bAXid5 (Registration Link)

Sam Pinson Kaitlyn O'Connor Michael Schellhous Reema Taneja Aizaz Chaudhary Stephanie Barnes Michael Pappas

#FDA #AI #AIinHealthcare #digitalhealth #healthcareinnovation #SAMD