Why This Matters Now
The point of this issue, AI in Healthcare: What’s Working and What’s Hype, is not to chase every announcement. The useful signal is what has changed for builders, creators, teams, and buyers who have to make decisions with imperfect information.
For this issue, I have kept the analysis grounded in what can be acted on: which workflows are becoming more practical, which claims still need verification, and where teams should slow down before treating a polished demo as production reality.
AI in Healthcare: What’s Actually Working
Healthcare represents both enormous AI potential and unusually high stakes for failure. The applications that work well share common characteristics, and so do the failures.
This week: a realistic assessment of where AI in healthcare stands, what practitioners need to understand, and how to evaluate healthcare AI applications.
The Applications That Work
Medical Imaging Analysis
AI excels at pattern recognition in medical images. The FDA has approved numerous imaging AI systems, and many are in routine clinical use.
According to the FDA’s AI-Enabled Medical Device List, over 1,450 AI-based medical devices have been authorized as of early 2026. Notably, more than 75% of these devices are in radiology, making it the most mature domain for clinical AI deployment.
What works:
- Radiology: Detecting nodules, fractures, lesions
- Pathology: Identifying cancer cells in tissue samples
- Ophthalmology: Detecting diabetic retinopathy, age-related macular degeneration
- Dermatology: Identifying melanomas and other skin conditions
Why it works:
- Clear patterns that AI can learn
- Large datasets available for training
- Measurable outcomes (did detection improve?)
- Well-defined scope (detect X in Y image type)
Limitations:
- Requires specialized models per imaging type
- Doesn’t generalize across body systems
- Needs integration with clinical workflow
- Requires ongoing accuracy monitoring (a minimal monitoring sketch follows this list)
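In practice, "ongoing accuracy monitoring" means routinely comparing the model's calls against the radiologist's final reads. Here is a minimal Python sketch under that assumption; the function names and the 0.90 alert threshold are illustrative, not clinical standards.

```python
# A minimal sketch of ongoing accuracy monitoring for an imaging model,
# assuming you log the model's binary findings alongside the radiologist's
# final read. Names and thresholds are illustrative, not a standard.

def accuracy_report(predictions, ground_truth):
    """Compute sensitivity and specificity from paired binary labels."""
    tp = sum(p and g for p, g in zip(predictions, ground_truth))
    tn = sum(not p and not g for p, g in zip(predictions, ground_truth))
    fp = sum(p and not g for p, g in zip(predictions, ground_truth))
    fn = sum(not p and g for p, g in zip(predictions, ground_truth))
    sensitivity = tp / (tp + fn) if (tp + fn) else float("nan")
    specificity = tn / (tn + fp) if (tn + fp) else float("nan")
    return {"sensitivity": sensitivity, "specificity": specificity}

# Example: last week's nodule-detection calls vs. radiologist ground truth.
preds = [True, True, False, True, False, False, True, False]
truth = [True, False, False, True, False, True, True, False]
report = accuracy_report(preds, truth)
if report["sensitivity"] < 0.90:  # alert threshold is site-specific
    print("Sensitivity below target; review model and case mix:", report)
```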
Peer-reviewed radiology research supports both claims: AI can improve diagnostic accuracy and streamline reading workflows. The actual clinical impact, however, depends heavily on how well the tool is integrated into existing workflows.
Clinical Documentation
Reducing documentation burden is a high-value application with fewer regulatory hurdles than diagnostic AI.
What works:
- Ambient clinical intelligence (listening to encounters, drafting notes; sketched after this section)
- Medical code suggestion (ICD-10, CPT coding assistance)
- Prior authorization automation
- Patient message routing and response drafting
Why it works:
- Lower risk than diagnostic applications
- Clear, measurable efficiency gains
- Less regulatory complexity
- High clinician demand for relief
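To make the pipeline shape concrete, here is a deliberately toy Python sketch of draft-then-review documentation. Production systems pair speech-to-text with a language model; the keyword routing below is a stand-in for that, and every name in it is an illustrative assumption. The one design point worth copying is the explicit review gate: the output is always a draft until a clinician signs it.

```python
# A toy sketch of the draft-then-review shape of ambient documentation.
# Real systems use speech-to-text plus an LLM; this stand-in uses simple
# keyword routing purely to show the pipeline and the human review gate.

SECTIONS = {
    "subjective": ("reports", "complains", "denies"),
    "objective": ("bp", "temp", "exam", "labs"),
    "assessment": ("likely", "consistent with", "impression"),
    "plan": ("start", "order", "follow up", "refer"),
}

def draft_note(transcript_lines):
    """Route transcript lines into SOAP sections; everything is a draft."""
    note = {section: [] for section in SECTIONS}
    for line in transcript_lines:
        lowered = line.lower()
        for section, cues in SECTIONS.items():
            if any(cue in lowered for cue in cues):
                note[section].append(line)
                break
    note["status"] = "DRAFT - requires clinician review and signature"
    return note

transcript = [
    "Patient reports three days of cough.",
    "Exam: lungs clear, temp 99.1.",
    "Impression: likely viral URI.",
    "Plan: follow up in one week if not improving.",
]
print(draft_note(transcript))
```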
Administrative Optimization
Healthcare administration is ripe for AI optimization:
What works:
- Schedule optimization and patient flow (a scheduling sketch follows this section)
- Inventory management and supply chain
- Revenue cycle management
- Staff scheduling and resource allocation
Why it works:
- Similar to non-healthcare AI applications
- Clear metrics for success
- Lower regulatory burden
- Proven ROI in many implementations
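As one concrete example of why these problems resemble non-healthcare optimization, here is a minimal sketch of appointment slot-filling using the classic earliest-finish-first greedy rule. Real systems layer on no-show prediction, provider and room constraints, and priorities; this shows only the core idea, and all names and numbers are illustrative.

```python
# A minimal sketch of slot-filling for appointment scheduling, assuming
# requested visits arrive as (start, end) hours. Greedy by earliest finish
# time maximizes the count of non-overlapping visits.

def schedule(requests):
    """Return a maximal set of non-overlapping visits (greedy by end time)."""
    booked, last_end = [], 0
    for start, end in sorted(requests, key=lambda r: r[1]):
        if start >= last_end:
            booked.append((start, end))
            last_end = end
    return booked

requests = [(9, 10), (9.5, 11), (10, 10.5), (10.5, 12), (11, 11.5)]
print(schedule(requests))  # [(9, 10), (10, 10.5), (11, 11.5)]
```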
The Applications That Struggle
Diagnosis Without Context
AI that attempts to diagnose based on symptoms alone consistently underperforms. Medicine requires context that pure pattern matching can’t capture.
Why it struggles:
- Symptoms map to many possible diagnoses
- Patient history matters
- Physical examination provides information AI can’t access
- Edge cases require clinical judgment
Realistic use: AI as diagnostic support, not replacement. Suggest possibilities, not definitive diagnoses.
Predictive Risk Stratification
Predicting patient deterioration, readmission risk, or disease progression is harder than it appears.
Why it struggles:
- Base rates vary significantly across populations (a worked example follows below)
- Models trained on historical data may not reflect current populations
- Healthcare records are messy and incomplete
- False positives have real consequences (alert fatigue)
Realistic use: Risk scores as one input among many, with appropriate uncertainty communicated.
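The base-rate point deserves a worked example. The sketch below applies Bayes' rule to show how the same alert model yields very different positive predictive values on two wards; the sensitivity, specificity, and prevalence figures are illustrative assumptions, not benchmarks from any deployed model.

```python
# Why base rates dominate: a worked example of positive predictive value
# for a deterioration alert. All numbers below are illustrative.

def ppv(sensitivity, specificity, prevalence):
    """P(deterioration | alert) via Bayes' rule."""
    true_alerts = sensitivity * prevalence
    false_alerts = (1 - specificity) * (1 - prevalence)
    return true_alerts / (true_alerts + false_alerts)

# Same model, two wards with different base rates of deterioration:
for prevalence in (0.02, 0.20):
    print(f"prevalence {prevalence:.0%}: PPV = {ppv(0.85, 0.90, prevalence):.0%}")
# prevalence 2%:  PPV = 15% -> roughly 6 of 7 alerts are false (alert fatigue)
# prevalence 20%: PPV = 68%
```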
The Regulatory Landscape
FDA Medical Device Framework
AI/ML-based software as a medical device (SaMD) falls under FDA regulation:
- Class I: Low risk. General controls, minimal oversight.
- Class II: Moderate risk. Special controls; typically the 510(k) pathway.
- Class III: High risk. Premarket approval required.
According to industry analysis, over 96% of AI-enabled medical devices have been cleared through the 510(k) pathway, which requires demonstrating substantial equivalence to a predicate device.
The Predetermined Change Control Plan
FDA’s guidance on predetermined change control plans (PCCPs) for AI-enabled device software allows manufacturers to specify anticipated modifications, and the methods for validating them, upfront; this supports iterative improvement without a new submission for every change.
This enables:
- Model updates based on new data
- Performance improvements over time
- Adaptation to new use cases
But requires:
- Defined performance metrics
- Validation protocols for changes
- Monitoring for model drift (a drift-monitoring sketch follows this list)
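One common way to operationalize drift monitoring is the population stability index (PSI), which compares the input distribution seen at validation time with the current one. A minimal sketch follows; the bin fractions, the feature, and the 0.2 alert threshold are illustrative assumptions (0.2 is a common rule of thumb, not a regulatory requirement).

```python
# A minimal sketch of input-drift monitoring using the population
# stability index (PSI) over one feature. Bins, feature, and threshold
# are illustrative assumptions.

import math

def psi(expected_fracs, actual_fracs, eps=1e-6):
    """PSI: sum over bins of (actual - expected) * ln(actual / expected)."""
    return sum(
        (a - e) * math.log((a + eps) / (e + eps))
        for e, a in zip(expected_fracs, actual_fracs)
    )

# Fraction of patients per age bucket at validation time vs. last month:
baseline = [0.10, 0.30, 0.40, 0.20]
current = [0.03, 0.15, 0.42, 0.40]
score = psi(baseline, current)
if score > 0.2:  # a common rule of thumb, not a regulatory threshold
    print(f"PSI {score:.2f}: input distribution shifted; trigger model review")
```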
HIPAA Considerations
Healthcare AI must comply with HIPAA requirements:
Data use restrictions:
- PHI cannot be used or disclosed except as permitted
- Minimum necessary standard applies
- De-identification requirements for training data (a scrubbing sketch follows this list)
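To make the de-identification requirement concrete, here is an illustrative, explicitly non-compliance-grade sketch that scrubs a few Safe Harbor identifier types from free text. Real de-identification must cover all 18 identifier categories and be validated; the regex patterns below are assumptions for demonstration only.

```python
# An illustrative (NOT compliance-grade) sketch of scrubbing a few HIPAA
# Safe Harbor identifiers from free text before it is used for training.

import re

PATTERNS = {
    "[PHONE]": re.compile(r"\b\d{3}[-.]\d{3}[-.]\d{4}\b"),
    "[SSN]":   re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "[EMAIL]": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "[DATE]":  re.compile(r"\b\d{1,2}/\d{1,2}/\d{2,4}\b"),
}

def scrub(text):
    """Replace matched identifiers with placeholder tokens."""
    for token, pattern in PATTERNS.items():
        text = pattern.sub(token, text)
    return text

note = "Seen 3/14/2026, call 555-867-5309, jdoe@example.com, SSN 123-45-6789."
print(scrub(note))
# Seen [DATE], call [PHONE], [EMAIL], SSN [SSN].
```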
Security requirements:
- Technical safeguards for PHI protection
- Access controls and audit trails (an audit-trail sketch follows this list)
- Encryption requirements
- Incident response procedures
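Audit trails are easiest to enforce at a well-defined access boundary. Below is a minimal sketch that wraps a record lookup with an audit log entry; real systems write to tamper-evident storage and integrate with identity management, and every identifier here is illustrative.

```python
# A minimal sketch of an audit trail around PHI access, assuming a simple
# function boundary. User IDs and record IDs are illustrative.

import functools, logging, datetime

logging.basicConfig(level=logging.INFO)
audit_log = logging.getLogger("phi_audit")

def audited(func):
    """Log who accessed which record, when, and through which function."""
    @functools.wraps(func)
    def wrapper(user_id, record_id, *args, **kwargs):
        audit_log.info(
            "user=%s record=%s action=%s at=%s",
            user_id, record_id, func.__name__,
            datetime.datetime.now(datetime.timezone.utc).isoformat(),
        )
        return func(user_id, record_id, *args, **kwargs)
    return wrapper

@audited
def fetch_patient_record(user_id, record_id):
    return {"record_id": record_id}  # stand-in for a real EHR lookup

fetch_patient_record("dr_lee", "mrn-0042")
```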
Business associate agreements:
- AI vendors handling PHI need BAAs
- Vendor security assessment required
- Liability for breaches typically remains with the covered entity
According to HIPAA compliance research, 67% of healthcare organizations report being unprepared for stricter AI security standards, making vendor assessment and compliance verification critical.
Implementation Guidance
The Readiness Assessment
Before implementing healthcare AI, organizations should evaluate the following areas (a checklist sketch follows these lists):
Clinical readiness:
- Evidence base for the proposed AI application
- Clinical champion to lead adoption
- Workflow integration plan
- Staff training plan
- Feedback mechanism for ongoing improvement
Technical readiness:
- Data quality assessment
- Integration capability with existing systems
- Infrastructure capacity
- Security posture
- Vendor capability and stability
Operational readiness:
- Governance framework for AI oversight
- Change management capacity
- Monitoring capability
- Incident response procedures
Regulatory readiness:
- FDA clearance status for the specific use case
- HIPAA compliance verification
- State-specific requirements
- Appropriate liability insurance
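One way to keep this assessment honest is to treat it as a gating checklist rather than a discussion. The sketch below encodes the lists above as yes/no items; the rule that every item must pass is an illustrative policy choice, not a regulatory requirement.

```python
# A minimal sketch of the readiness assessment as a gating checklist,
# assuming each item is answered yes/no during evaluation. Item wording
# mirrors the lists above; the all-items-required gate is illustrative.

READINESS = {
    "clinical":    ["evidence base", "clinical champion", "workflow integration plan",
                    "staff training plan", "feedback mechanism"],
    "technical":   ["data quality assessed", "system integration", "infrastructure",
                    "security posture", "vendor stability"],
    "operational": ["governance framework", "change management", "monitoring",
                    "incident response"],
    "regulatory":  ["FDA status confirmed", "HIPAA verified", "state requirements",
                    "liability insurance"],
}

def readiness_gaps(answers):
    """Return unmet items per area; an empty dict means ready to proceed."""
    return {
        area: [item for item in items if not answers.get(item, False)]
        for area, items in READINESS.items()
        if any(not answers.get(item, False) for item in items)
    }

answers = {item: True for items in READINESS.values() for item in items}
answers["monitoring"] = False
print(readiness_gaps(answers))  # {'operational': ['monitoring']}
```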
Avoiding Common Healthcare AI Failures
Failure: Buying Without Integration Plan
Organizations purchase AI systems and expect them to work without planning how they will fit into clinical workflows.
Prevention: Require integration plan as part of vendor evaluation. Budget for workflow redesign.
Failure: Ignoring Maintenance Requirements
AI systems require ongoing maintenance: model updates, performance monitoring, workflow adjustments.
Prevention: Budget for ongoing operations, not just initial implementation.
Failure: Overtrusting AI Output
Clinicians may overtrust AI recommendations, failing to catch errors.
Prevention: Design AI outputs that encourage critical evaluation. Require clinician confirmation for high-stakes decisions.
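A confirmation gate can be enforced in code rather than policy. Here is a minimal sketch, assuming each AI suggestion carries a machine-readable category; the risk tiers and the auto-apply rule are illustrative design assumptions.

```python
# A minimal sketch of a confirmation gate for high-stakes AI suggestions,
# assuming each suggestion carries a category. Tiers are illustrative.

HIGH_STAKES = {"diagnosis", "medication_change", "treatment_plan"}

def apply_suggestion(suggestion, clinician_confirmed=False):
    """Auto-apply only low-stakes suggestions; others need explicit sign-off."""
    if suggestion["category"] in HIGH_STAKES and not clinician_confirmed:
        return {"status": "pending_review", "suggestion": suggestion}
    return {"status": "applied", "suggestion": suggestion}

drafted = {"category": "medication_change", "text": "Increase lisinopril to 20 mg"}
print(apply_suggestion(drafted))                            # pending_review
print(apply_suggestion(drafted, clinician_confirmed=True))  # applied
```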
Failure: Data Quality Issues
Healthcare data is notoriously messy. AI trained on poor data produces poor results.
Prevention: Invest in data quality before AI implementation. Accept slower rollout for better foundation.
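A basic data quality audit can run before any model work begins. Here is a minimal standard-library sketch that flags fields with excessive missingness; the field names and the 5% threshold are illustrative assumptions.

```python
# A minimal sketch of a pre-implementation data quality audit over tabular
# records. Field names and the 5% missingness threshold are illustrative.

def missingness_report(rows, fields, max_missing=0.05):
    """Flag fields whose missing-value rate exceeds the threshold."""
    flagged = {}
    for field in fields:
        missing = sum(1 for row in rows if row.get(field) in (None, "", "NA"))
        rate = missing / len(rows)
        if rate > max_missing:
            flagged[field] = round(rate, 3)
    return flagged

rows = [
    {"age": 67, "a1c": 7.2, "smoker": ""},
    {"age": 54, "a1c": None, "smoker": "no"},
    {"age": None, "a1c": 6.1, "smoker": "yes"},
    {"age": 71, "a1c": 8.0, "smoker": "NA"},
]
print(missingness_report(rows, ["age", "a1c", "smoker"]))
# {'age': 0.25, 'a1c': 0.25, 'smoker': 0.5}
```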
What’s Next
Next week: AI and creativity—how AI affects creative work, what’s changing about creative processes, and practical guidance for creative professionals working with AI.
That’s the briefing for this week. See you next Tuesday.
Verification Note
This issue was reviewed in the April 27, 2026 content audit. Product names, model availability, pricing, and regulatory details can change quickly, so high-stakes decisions should be checked against the original provider, regulator, or research source before publication or purchase.