Substance Over Scale: The Problem with the New HIMSS
- Ross Wolfson, MHA, PMP (Founder)

- Apr 4
I was going to pursue my CPHIMS certification this year.
I changed my mind.
After watching the AI narrative coming out of the Healthcare Information and Management Systems Society (HIMSS), the message stopped feeling like guidance…
…and started sounding like marketing.

I was going to pursue my CPHIMS certification this year, but I reconsidered after seeing the excessive, biased AI promotion. It's an embarrassment for an organization with such a strong reputation. They sound like Silicon Valley influencers, touting AI benefits without substance.
And have you asked yourself what real benefit we're actually talking about? Automated emails? Bots that route you to a human because the AI can't handle healthcare's unpredictability? Every patient, payer, and plan is unique; fragmentation and unpredictability are the norm, the human connection is vital, and data protection is paramount.
HIMSS seems to have shifted from process improvement to product promotion. The "success stories" are often simulated outcomes or narrow pilots from academic centers, far from the reality of community hospitals. When AI hits an "edge case" (which is most days), it fails. The "benefit" disappears when humans must fix it.
Healthcare is arguably the most chaotic industry. While an academic center with a dedicated data science team creating a predictive model for sepsis might be considered a "success story," in a 100-bed community hospital where nursing staff is already at 110% capacity, that same model is just another alert to ignore. We don't need more data generation; we need workflow integration that actually gives time back to the provider.
The "success" they tout is about scaling, not solving. They scale data generation and emails, but don't solve the fact that humans remain responsible for the outcome.
If I were in charge of the "governance" budget, I'd invest in solutions that truly improve patient care and data security, rather than "bots" that simply redirect to humans: tech that is unproven and far less successful than the hype suggests.
Substance Over Scale: The Real Priorities
Interoperability over Integration:
We don't just need systems that "talk"; we need them to speak the same language. Without strict standardization, you end up with the "Duplicate Identity" crisis—where the same provider exists under two different names in two different systems. If the data doesn't align at the source, AI won't "interpret" it; it will just scale the confusion.
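To see how a "Duplicate Identity" arises, consider a minimal sketch. The records and the normalization rules below are hypothetical, just enough to show why an exact-match join treats one provider as two:

```python
def normalize(name: str) -> str:
    """Crude provider-name normalization: lowercase, drop punctuation,
    strip common titles/credentials, and ignore token order."""
    cleaned = name.lower().replace(".", "").replace(",", " ")
    tokens = [t for t in cleaned.split() if t not in {"dr", "md", "do"}]
    return " ".join(sorted(tokens))  # order-insensitive comparison key

ehr_record = "Dr. Jane Smith, MD"  # how System A stores the provider
claims_record = "SMITH, JANE"      # how System B stores the same person

# A naive join sees two different providers...
print(ehr_record == claims_record)  # False

# ...while even crude normalization reveals a single identity.
print(normalize(ehr_record) == normalize(claims_record))  # True
```

Real master-data-management tools go much further (unique identifiers, fuzzy matching, human review queues), but the point stands: if matching rules aren't standardized at the source, downstream AI just scales the mismatch.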
Data Integrity:
We must clean the foundation before we automate the house. If your "advanced" directory lists a Cardiologist under Pediatrics, your AI isn't just wrong—it’s a clinical liability. You can’t build a high-tech future on a foundation of bad data.
Functional User Experience:
True innovation should reduce the "click-burden" and cognitive load on providers. We need workflow integration that actually gives time back to the clinician, not "bots" that add an extra layer of bureaucracy between the doctor and the patient.
Two grand to get approval from these 'experts'? I’m good.