I was scrolling through Facebook when a post from The Huntsville Times about the use of artificial intelligence in health care decision-making caught my eye. Which is its own kind of irony, honestly. Facebook's algorithm decided I needed to see it, somewhere between a birthday reminder and a video of lunch ladies lip syncing to "Take On Me" (a banger, tbh). Algorithms make calls about what we see, what we're offered, and what gets filtered out, and a lot of times, they miss the mark. Usually, that's just annoying. In health care, the stakes are a little different.

In most states, a health insurer can deny your coverage request, and no human being has to own that decision. This process, called prior authorization, is essentially an insurer's way of deciding whether a treatment is medically necessary before agreeing to pay for it. An algorithm reviews the request, generates the denial, and unless you've got the time, language, and resources to appeal, that's pretty much the end of it.

Alabama's SB 63, sponsored by Sen. Arthur Orr in the 2026 legislative session, addresses something that doesn't get talked about enough: AI health insurance denials and algorithmic prior authorization decisions that happen without any human accountability. The bill prohibits health insurance companies from relying exclusively on artificial intelligence for these decisions:

"A determination to deny, reduce, or defer a request for prior authorization shall always be made by a licensed physician or other health care professional who is competent to evaluate any recommendation or conclusion of artificial intelligence in the light of the specific clinical issues involved."

A real live human has to look at another real live human. Someone has to be accountable, and that's a meaningful distinction.

A few thoughts.

What "individual circumstances" actually means

The bill doesn't just require a human signature. It requires that AI systems consider each enrollee's individual medical history and the specific clinical context presented by their treating provider, not just patterns from a population dataset. SB 63 also includes anti-discrimination language and restricts how patient data can be used in the review and denial process. How that data is governed matters as much as who reviews it. Stale policies create stale trust, and the people most at risk of being overlooked by outdated data practices are often the same people least positioned to challenge an algorithmic denial.

In other words, the equity concern isn't only about who makes the final decision; it's about whether the system was ever designed to see you as an individual in the first place.

Population-level datasets reflect historical patterns. And historical patterns in American health care carry the full weight of who was excluded, underserved, and forgotten. When an algorithm trained on that data is deployed without individual review, the consequences extend beyond individual denial letters. Harm can be reproduced at scale, quickly, and without a life attached to it.

A human-in-the-loop won't magically fix structural inequity, but removing the human removes accountability entirely. Those are two very different conversations, and they both matter.

Is AI becoming a digital determinant of health?

The social determinants of health framework helps us understand that health outcomes are shaped by conditions upstream of clinical care, like housing (in)stability, income, education, neighborhood, and food (in)security. Determinants aren't medical variables; they're structural ones.

Here's something I've been sitting with as a working theory, not a finished thought: algorithmic decision-making in health coverage may be beginning to function like a social determinant of health. It sits upstream of the care folks need. It shapes who gets access, when, and how much. And unlike income or zip code, it can change overnight, invisibly, and without any community awareness or accountability.

Researchers are beginning to build frameworks around what some are calling digital determinants of health. A shared understanding hasn't yet been reached, but the effects are already evident. Prior authorization denials driven by AI have been documented across major insurers. Virginia Eubanks, in her book Automating Inequality: How High-Tech Tools Profile, Police, and Punish the Poor, traces how automated systems reproduce systemic inequality and discrimination in ways that are hard to see and harder to challenge. The people most likely to receive an algorithmic no are often the people least positioned to fight it: older folks navigating complex diagnoses, people without a patient advocate, families already stretched thin by caregiving and cost.

This isn't abstract for me. As a court-appointed guardian and conservator, I've navigated the distance between what a system decides and what a person actually needs. When my mother was live discharged from hospice, a system flagged the combination of "hospice" and "discharged" and sent someone to collect the hospital bed. Nobody had reviewed the context. The algorithm saw a pattern and acted. And that's the gap I'm talking about. That distance is where dignity lives or gets lost. Decisions about someone's care should never be managed solely by an algorithm.

I grew up in Huntsville. The Huntsville Times post was what I saw first, and AL.com has the full story if you can get past the paywall. There's something encouraging about seeing my hometown report on a piece of legislation that's doing some careful thinking about health equity.

SB 63 isn't a complete solution. It doesn't reach the underlying incentive structures that make algorithmic denial attractive to insurers in the first place, it doesn't cover self-insured employer plans, and enforcement depends on a state agency with the will and capacity to act.

What SB 63 does is name something worth acknowledging: the decision to deny someone care requires a person willing to be accountable for it. That's not a ceiling; it's a floor. And right now, in most places, the floor doesn't exist (yet).