AI Foundation Models: Jack of All Trades, Master of None
Nov 18, 2025
Categories: Blog
Why We Need More (Strategic) Bias
Authored by: Mo Abdolell, CEO, Densitas
In an AI landscape dominated by the “bigger is better – one model to rule them all” mantra of massive, general-purpose foundation models, the world of radiology demands a more nuanced approach. While models trained on the entire internet are impressive generalists, their “jack of all trades, master of none” nature falls short in high-stakes clinical settings. The path to building reliable, high-performing AI in our field isn’t about eliminating bias, but about strategically embracing it.
This apparent paradox is explained by a fundamental principle of statistics and machine learning: the bias-variance tradeoff. Generalist models are low-bias systems; their vast, diverse training data means they make few assumptions. However, that flexibility comes at the cost of high variance: unpredictable outputs and a tendency to "hallucinate." They haven't been taught the specific rules of our domain.
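The tradeoff can be demonstrated empirically. The sketch below (an illustration, not from any specific clinical system) refits a constrained, high-bias model and a flexible, low-bias model on many resampled training sets and compares how much each model's prediction at a fixed point swings from one dataset to the next; that swing is the variance the paragraph above describes.

```python
import numpy as np

rng = np.random.default_rng(0)

def prediction_variance(degree, n_datasets=200, n_points=20):
    """Variance of a polynomial model's prediction at a fixed test
    point across many resampled training sets -- a proxy for the
    'variance' term in the bias-variance tradeoff."""
    x_test = 0.5
    preds = []
    for _ in range(n_datasets):
        x = rng.uniform(-1, 1, n_points)
        y = np.sin(3 * x) + rng.normal(0, 0.3, n_points)  # noisy ground truth
        coeffs = np.polyfit(x, y, degree)
        preds.append(np.polyval(coeffs, x_test))
    return float(np.var(preds))

low = prediction_variance(degree=1)   # constrained: high bias, low variance
high = prediction_variance(degree=9)  # flexible: low bias, high variance
print(f"degree-1 variance: {low:.4f}, degree-9 variance: {high:.4f}")
```

The flexible model chases the noise in each resampled dataset, so its predictions vary far more run to run, even though its assumptions (its bias) are weaker.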
Fine-tuning is a solution. By training a general model on a curated, high-quality dataset of diagnostic images, we introduce intentional, beneficial bias. The model becomes “biased” toward the specific patterns of CT scans, X-ray images or MRIs, trading broad applicability for deep expertise. This specialization dramatically reduces variance, resulting in a model that is not only more accurate but also more reliable for its specific task.
We see this principle validated in cutting-edge research. Generalist vision models like the Segment Anything Model (SAM) struggle to generalize to diagnostic imaging data out of the box. Yet specialized adaptations demonstrate remarkable success: S-SAM achieves state-of-the-art performance by fine-tuning just 0.4% of the model's parameters, efficiently teaching it the new domain. The same holds for language. A general-purpose Large Language Model (LLM), trained on the corpus of internet knowledge, struggles with the specific, nuanced terminology of a radiology report. However, specialized models fine-tuned on medical literature and clinical notes demonstrate far greater accuracy and reliability, proving they have learned the "bias" of the medical domain.
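Tuning a small fraction of a model's parameters can be sketched in broad strokes. The snippet below is a hypothetical low-rank-adapter illustration (in the spirit of LoRA-style methods, not S-SAM's actual mechanism): the large pretrained weight matrix is frozen, only a small adapter is trainable, and the trainable fraction of parameters stays tiny.

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical "pretrained" weight matrix from a large generalist layer.
d = 256
W_base = rng.normal(size=(d, d))  # frozen: never updated during fine-tuning

# Low-rank adapter: only A and B would receive gradient updates.
r = 4
A = np.zeros((d, r))                        # initialized to zero so the
B = rng.normal(scale=0.01, size=(r, d))     # adapter starts as a no-op

def forward(x):
    # Effective weight is W_base + A @ B; W_base itself stays frozen.
    return x @ (W_base + A @ B)

trainable = A.size + B.size
total = W_base.size + trainable
print(f"trainable parameters: {trainable} of {total} "
      f"({trainable / total:.2%})")
```

Even this toy layer trains only a few percent of its parameters; at the scale of a real foundation model, the same construction drives the trainable fraction down to sub-percent levels like the 0.4% cited above.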
In diagnostic imaging, bias isn’t a flaw to be purged; it’s a powerful engineering tool. The future of clinical AI is not a single, monolithic intelligence. It is a suite of highly specialized, precisely fine-tuned systems, each an expert in its narrow domain. By strategically introducing bias, we are not limiting our models; we are sharpening them into the clinical-grade instruments our field requires.