The Case Against Carilion Clinic’s Embrace of AI
Or: Why “Cautious Governance” Isn’t Enough
Carilion Clinic frames its use of Artificial Intelligence (AI) as innovative and carefully regulated. But beneath the polished language and promising pilot projects lies a host of unresolved risks – risks that carry real consequences for patients, clinicians, and communities.
Administrative Burden vs. Structural Problems
Carilion promotes tools like AI scribes to reduce clinician documentation time. Yes, clinicians spend significant hours on notes rather than with patients – but AI scribes are not a panacea.
Research shows that while ambient AI documentation tools can ease workload, they also come with limitations – hallucinations, inaccuracies, and variable quality – that require sustained human oversight. The burden is transformed, not eliminated.
Source: Journal of Medical Systems – Limitations of Ambient Clinical Documentation
More importantly, the root causes of burnout – understaffing, productivity quotas, and billing-driven documentation – cannot be solved through software alone.
Predictive Analytics and Algorithmic Bias
Carilion cites predictive analytics tools designed to identify patient deterioration earlier. But real-world evidence shows that hospitals often do not adequately validate these models using their own patient populations.
Fewer than half of hospitals rigorously test AI tools for bias, raising serious concerns about reliability and equity.
Source: Health Affairs / Healthcare Dive – Hospitals Rarely Test AI for Bias
When models are trained on biased or incomplete data, they do not eliminate disparities – they scale them.
Source: arXiv – Algorithmic Bias in Healthcare AI
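How biased training data scales disparities can be shown with a toy sketch. This is not Carilion's model – it is a deliberately simplified, fabricated example: a "deterioration" classifier fit to data where Group A outnumbers Group B ten to one, and where the two groups have different true risk thresholds. The learned decision rule fits the majority group and systematically fails the minority one.

```python
# Hypothetical illustration with fabricated data (not any real clinical model).
# Toy training records: (group, vital_sign, deteriorated).
# Group A deteriorates above ~100; Group B above ~95, but B is 10x rarer.
train = [("A", v, v > 100) for v in range(90, 120)] * 10
train += [("B", v, v > 95) for v in range(90, 120)]

def error(threshold, data):
    """Fraction of records misclassified by a single global threshold."""
    return sum((v > threshold) != y for _, v, y in data) / len(data)

# "Train" by brute-force search for the threshold with lowest overall error.
best = min(range(90, 120), key=lambda t: error(t, train))

# Evaluate per group: the learned rule matches Group A's pattern, not Group B's.
test_a = [r for r in train if r[0] == "A"]
test_b = [r for r in train if r[0] == "B"]
print(f"learned threshold: {best}")
print(f"error on Group A: {error(best, test_a):.2f}")
print(f"error on Group B: {error(best, test_b):.2f}")
```

The model's overall error looks low, because the majority group dominates the average – exactly the failure mode that hospital-specific validation is meant to catch, and that most hospitals reportedly skip.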
Operational Efficiency at the Expense of Humanity
AI-driven command centers promise smoother patient flow and optimized bed usage. But efficiency is not the same as quality care.
When algorithms influence transfer decisions, bed assignments, and discharge timing, patients risk becoming logistics problems rather than people. These systems optimize throughput – not dignity, continuity, or trust.
Imaging, Research, and the Consent Problem
AI-assisted imaging and clinical research tools rely on massive amounts of patient data. Yet patients rarely have meaningful control over how their data is reused, repurposed, or commercialized.
Ethical analyses warn that existing consent models are inadequate for AI-driven secondary data use.
Source: BMC Medical Ethics – AI, Data Use, and Informed Consent
Privacy and Re-identification Risks Are Real
Carilion emphasizes strict governance and data safeguards. However, research shows that advanced AI systems can sometimes re-identify individuals from supposedly anonymized datasets.
Source: Journal of Medical Internet Research – Re-identification Risks in Health AI
Public trust is fragile, and once patient data is compromised, it cannot be reclaimed.
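The mechanics of re-identification are not exotic. The classic linkage attack, demonstrated decades ago by Latanya Sweeney, joins "de-identified" records to a public roster on quasi-identifiers such as ZIP code, birth date, and sex. A minimal sketch, with entirely fabricated names and records:

```python
# Fabricated example records; no real people or datasets involved.
# "De-identified" clinical rows that still carry quasi-identifiers.
deidentified = [
    {"zip": "24016", "dob": "1961-07-02", "sex": "F", "dx": "cardiac event"},
    {"zip": "24018", "dob": "1985-03-14", "sex": "M", "dx": "diabetes"},
]

# A public roster (e.g. a voter list) where names are attached to the
# same quasi-identifiers.
public_roster = [
    {"name": "J. Smith", "zip": "24016", "dob": "1961-07-02", "sex": "F"},
    {"name": "R. Jones", "zip": "24018", "dob": "1985-03-14", "sex": "M"},
    {"name": "A. Brown", "zip": "24018", "dob": "1990-01-01", "sex": "M"},
]

def reidentify(record, roster):
    """Return the unique roster name matching the record's quasi-identifiers."""
    key = (record["zip"], record["dob"], record["sex"])
    matches = [p for p in roster if (p["zip"], p["dob"], p["sex"]) == key]
    return matches[0]["name"] if len(matches) == 1 else None

for r in deidentified:
    print(reidentify(r, public_roster), "->", r["dx"])
```

When a combination of quasi-identifiers is unique in the population – as ZIP, birth date, and sex often are – stripping names offers little real protection, and modern AI systems only make such linkage easier.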
The “Black Box” Problem
Many healthcare AI tools operate as opaque systems, offering little explanation for how they reach conclusions. This lack of transparency undermines clinician trust and complicates accountability.
Source: JMIR Formative Research – Explainability Challenges in Clinical AI
Deskilling and Erosion of Clinical Judgment
Researchers have raised concerns that reliance on AI tools – particularly large language models – could lead to deskilling, where clinicians increasingly defer to machine output rather than exercising independent judgment.
Source: arXiv – Deskilling Risks from Generative AI
Regulation Still Lags Behind Reality
The federal regulatory framework for healthcare AI remains incomplete. Hospitals are largely left to self-govern, creating uneven standards and weak accountability.
Source: Reuters – Calls for Equity-First AI Regulation in Medicine
AI Solves the Wrong Problems
Healthcare in Southwest Virginia does not suffer from a lack of algorithms. It suffers from staffing shortages, rural access barriers, poverty-driven health disparities, and burnout.
AI does not fix these problems. It distracts from them.
Conclusion
Carilion’s cautious rhetoric is better than blind adoption, but it still places patients and clinicians inside a live experiment.
Skepticism is not anti-technology. It is pro-patient, pro-clinician, and pro-trust.
Medicine should move at the speed of trust, not the speed of software.