The algorithm will see you now
What radiology can tell us about the first decade of AI diffusion
Last week, I wrote about why radiology seems like the perfect candidate for automation: the canary in the coal mine that should tell us when we’re all about to be out of a job. AI models that outperform clinicians on radiology benchmarks have existed for years, and experts began predicting radiologists’ demise more than a decade ago. Yet radiologists are thriving.
The full piece is in Works in Progress.
One historical case study I found especially interesting: at one point, Medicare reimbursed radiologists $7 more per mammogram if they read it with the help of an early form of AI that performed very well on benchmarks. But by 2018, Medicare had rolled that policy back.
Computer-aided detection turned out to be a disappointment. Between 1998 and 2002, researchers analyzed 430,000 screening mammograms from 200,000 women at 43 community clinics in Colorado, New Hampshire, and Washington. At the seven clinics that adopted computer-aided detection software, the machines flagged more images and clinicians performed 20 percent more biopsies, but they uncovered no more cancers than before. Several other large clinical studies had similar findings.
Radiology is, in many ways, about a decade ahead of other white-collar fields in automation. I find technical, legal and regulatory, and economic reasons why algorithms have not taken radiologists’ jobs (but have, in some cases, created more of them!). In the article, I investigate what radiology can tell us to expect in the first years after AI models become competitive with us at our core tasks, at least according to benchmarks. In some cases, better AI may mean more working humans, not fewer.