Historical Report (March 2026) — Based on the original cumulative ordinal model.

Evidentia Analytics

AI Nodule Detectability Engine — Proof of Concept Results
March 15, 2026
  • 5.5% detection error (held-out validation)
  • 1,795 nodules trained (555 patients)
  • 17/17 correct detections (concordant nodules)

What Does This System Do?

The Evidentia engine takes a CT scan with a nodule location and answers one question:

"What percentage of radiologists would detect this nodule?"

Instead of relying on subjective expert testimony ("I think it's obvious" vs. "I think it's subtle"), the model provides an objective, data-driven assessment based on patterns learned from 1,795 real nodules rated by 12 board-certified radiologists across 7 institutions.

The output is a probability distribution across 5 subtlety levels — suitable for court use as an objective reference standard.
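To make the output concrete, here is a minimal sketch of how five cumulative probabilities P(S≥1..5) unpack into a per-level subtlety distribution and a headline detection rate. The numeric values and the choice of P(S≥4) as the reported "would detect" threshold are illustrative assumptions, not actual model output.

```python
def subtlety_distribution(cum):
    """cum[k] = P(S >= k+1) for k = 0..4; cum[0] is always 1.0.
    Returns P(S == 1) .. P(S == 5) by differencing adjacent levels."""
    probs = []
    for k in range(5):
        upper = cum[k]
        lower = cum[k + 1] if k + 1 < 5 else 0.0
        probs.append(upper - lower)  # P(S == k+1)
    return probs

cum = [1.00, 0.97, 0.90, 0.72, 0.35]   # hypothetical model output
dist = subtlety_distribution(cum)       # per-level distribution, sums to 1
detection_rate = cum[3]                 # assumed headline: P(S >= 4)
```

Differencing adjacent cumulative levels always yields a valid distribution, which is why the cumulative parameterization is convenient for court-facing summaries.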

How It Works

  1. 🪛 CT Scan Input: DICOM scan + nodule coordinates
  2. 🔬 Patch Extraction: 64×64×64 voxel cube around the nodule, 1 mm spacing
  3. 🧠 AI Analysis: 3D ResNet-18 + 19 clinical features
  4. 📊 Detection Report: "X% of radiologists would detect this"
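The patch-extraction step above can be sketched as a simple crop with border padding. This assumes the volume has already been resampled to 1 mm isotropic spacing; the function name and zero-padding behavior are illustrative, not the actual pipeline code.

```python
import numpy as np

def extract_patch(volume, center, size=64):
    """Crop a size^3 voxel cube centered at `center` (z, y, x),
    zero-padding wherever the cube extends past the volume borders."""
    half = size // 2
    patch = np.zeros((size, size, size), dtype=volume.dtype)
    src, dst = [], []
    for c, dim in zip(center, volume.shape):
        lo, hi = c - half, c + half
        src.append(slice(max(lo, 0), min(hi, dim)))      # clipped source range
        dst.append(slice(max(lo, 0) - lo,                # shift into the patch
                         size - (hi - min(hi, dim))))
    patch[tuple(dst)] = volume[tuple(src)]
    return patch

volume = np.zeros((128, 128, 128), dtype=np.float32)
patch = extract_patch(volume, center=(64, 64, 64))  # 64x64x64 cube
```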

Validation Results

Key Finding
For nodules where all 4 radiologists agreed they could detect the finding, the model correctly predicts high detection rates with an average error of only 5.5%. This was tested on cases the model never saw during training.

Held-Out Validation

44 concordant nodules excluded from all training

  • Overall MAE: 17.3%
  • Detection Error: 5.5%
  • Nodules Evaluated: 17 of 44
  • Correct Direction: 17/17 (100%)

Test Set Performance

246 nodules from 84 patients, never seen in training

  • Overall MAE: 22.9%
  • Detection Error: 31.6%
  • P(S≥5) Error: 15.3%
  • P(S≥4) Error: 20.0%
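The metrics above can be computed as sketched below: MAE is the mean absolute gap between predicted and observed detection rates, and "correct direction" counts predictions on the same side of 50% as the ground truth. The data values are illustrative, not the actual validation set.

```python
def mae(pred, truth):
    """Mean absolute error between predicted and observed rates."""
    return sum(abs(p - t) for p, t in zip(pred, truth)) / len(pred)

def correct_direction(pred, truth):
    """Count predictions on the same side of 50% as the ground truth."""
    return sum((p >= 0.5) == (t >= 0.5) for p, t in zip(pred, truth))

pred  = [0.999, 0.994, 0.953, 0.924]   # hypothetical model outputs
truth = [1.0, 1.0, 1.0, 1.0]           # concordant nodules: 100% detection
error = mae(pred, truth)
direction_hits = correct_direction(pred, truth)
```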

Per-Nodule Predictions vs. Ground Truth

Each row shows the model's predicted detection rate against the nodule's mean subtlety rating. Ground truth is 100% for all concordant nodules (all 4 radiologists detected them).

Mean subtlety | Predicted detection
5.00 | 99.9%
5.00 | 99.4%
4.75 | 99.5%
4.50 | 97.1%
4.00 | 99.7%
3.75 | 95.3%
3.75 | 93.3%
3.75 | 92.4%
3.50 | 97.4%
3.50 | 85.8%
3.25 | 92.4%
2.25 | 97.6%

Subtlety ratings are the mean of the 4 radiologists' scores (1 = subtle, 5 = obvious); ground truth detection is 100% throughout.

Training Overview

1,795
Nodules
555
Patients
7
Institutions
12
Radiologists

Data

  • Source: LIDC-IDRI (public, peer-reviewed)
  • Raters per nodule: 4 board-certified radiologists
  • Split: 70% train / 15% val / 15% test
  • Leakage prevention: patient-aware (no patient in multiple splits)
  • Held-out set: 44 concordant nodules, fully excluded
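The patient-aware split can be sketched as follows: patients, not nodules, are assigned to train/val/test, so no patient's nodules ever land in more than one split. The helper name and 70/15/15 cut points follow the table above; everything else is an illustrative assumption.

```python
import random

def patient_aware_split(nodules, seed=0):
    """nodules: list of (patient_id, nodule_record) pairs.
    Shuffles patients, then assigns 70% / 15% / 15% of patients
    (not nodules) to train / val / test."""
    patients = sorted({pid for pid, _ in nodules})
    random.Random(seed).shuffle(patients)
    n = len(patients)
    cut1, cut2 = int(0.70 * n), int(0.85 * n)
    group = {pid: "train" for pid in patients[:cut1]}
    group.update({pid: "val" for pid in patients[cut1:cut2]})
    group.update({pid: "test" for pid in patients[cut2:]})
    splits = {"train": [], "val": [], "test": []}
    for pid, rec in nodules:
        splits[group[pid]].append((pid, rec))
    return splits
```

Splitting by patient, rather than by nodule, is what prevents leakage: two nodules from the same scan are highly correlated, so letting them straddle splits would inflate test metrics.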

Model

  • Architecture: 3D ResNet-18 (MedicalNet pretrained)
  • Auxiliary features: 19 clinical/morphological features
  • Output: 5 cumulative probabilities P(S≥1..5)
  • Monotonicity: enforced by ordered thresholds
  • Training: 63 epochs, AdamW, cosine annealing
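The ordered-thresholds construction that enforces monotonicity can be sketched as below: the network emits a scalar score and four unconstrained offsets; cumulatively summing their softplus yields thresholds t1 ≤ t2 ≤ t3 ≤ t4, so P(S≥k) = sigmoid(score − t) can only decrease as k grows. Function names and input values are illustrative, not the actual model code.

```python
import math

def softplus(z):
    """log(1 + e^z): always positive, so increments keep thresholds ordered."""
    return math.log1p(math.exp(z))

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def cumulative_probs(score, raw_offsets):
    """Map a scalar score and 4 unconstrained offsets to monotone
    cumulative probabilities [P(S>=1), ..., P(S>=5)]."""
    thresholds, t = [], 0.0
    for r in raw_offsets:
        t += softplus(r)          # strictly positive increment -> ordered
        thresholds.append(t)
    # P(S >= 1) is 1 by definition; higher levels use the ordered thresholds.
    return [1.0] + [sigmoid(score - t) for t in thresholds]

probs = cumulative_probs(score=2.0, raw_offsets=[-1.0, 0.5, 0.0, 1.2])
```

Because the thresholds are ordered by construction, the model cannot emit an impossible combination such as P(S≥4) > P(S≥3), which is the consistency guarantee claimed under "Why It's Credible" below.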

What This Means for Litigation

The Problem

In radiology malpractice cases, opposing experts give contradictory subjective opinions: "The nodule was obvious" vs. "The nodule was subtle." Juries have no objective way to evaluate these claims.

Our Solution

The Evidentia engine provides a data-driven, objective assessment: "Based on analysis of 1,795 similar nodules rated by 12 radiologists across 7 institutions, X% of radiologists would be expected to detect this finding."

Why It's Credible

  • Trained on the gold standard: LIDC-IDRI is the largest publicly available multi-reader CT nodule dataset, used in 500+ published studies
  • Multi-institutional: Data from 7 academic medical centers with 8 different scanner types
  • Validated on held-out data: 5.5% detection error on cases never seen during training
  • Monotonic outputs: Probabilities are mathematically constrained to be consistent (cannot output impossible probability combinations)
  • Transparent methodology: Cumulative ordinal model with published statistical foundations

Next Steps