---
language: en
license: apache-2.0
base_model: openai/gpt-oss-20b
tags:
  - medical
  - radiology
  - eurorad
  - differential-diagnosis
  - chain-of-thought
  - lora
  - gpt-oss
  - clinical-reasoning
library_name: transformers
pipeline_tag: text-generation
datasets:
  - eurorad
widget:
  - text: >
      A 45-year-old woman presents with acute chest pain radiating to the left
      arm. CT angiography shows a filling defect in the LAD. Provide a
      structured differential diagnosis and the most likely diagnosis.
    example_title: Radiology differential diagnosis
model-index:
  - name: gpt-oss-20b-ddx
    results:
      - task:
          type: text-generation
          name: Radiology differential diagnosis
        dataset:
          type: eurorad
          name: Eurorad radiology cases
        metrics:
          - type: accuracy
            value: 0.862
            name: exact-match accuracy
---

# GPT-OSS-20B – Differential Diagnosis Radiology Reasoning

This repository provides a LoRA adapter fine-tuned on radiology cases from the Eurorad dataset to enhance differential diagnosis and structured medical reasoning. The adapter attaches to the `openai/gpt-oss-20b` base model, improving radiology-focused performance while staying lightweight and deployable on a single GPU.

## Highlights

- Improved differential diagnosis accuracy on Eurorad cases (exact-match accuracy up from 78.6% to 86.2%)
- Trained on structured chain-of-thought traces generated by gpt-oss-120b
- Works with Unsloth, PEFT, and Transformers

## Quick Start

### 🔹 Load with PEFT + Transformers

```python
from transformers import AutoTokenizer, AutoModelForCausalLM
from peft import PeftModel

BASE = "openai/gpt-oss-20b"
ADAPTER = "alhusains/gpt-oss-20b-eurorad-lora"

# Load the base model, then attach the LoRA adapter on top of it
tokenizer = AutoTokenizer.from_pretrained(BASE)
base = AutoModelForCausalLM.from_pretrained(BASE, device_map="auto")
model = PeftModel.from_pretrained(base, ADAPTER)
model.eval()

prompt = "Provide a differential diagnosis for multiple bilateral lung nodules."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=300)

print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
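
Since gpt-oss is a chat-tuned model, instruction-style prompts (like the widget example above) are usually better formatted through the tokenizer's chat template than as raw text. A minimal sketch, assuming the base tokenizer ships a chat template, reusing `model` and `tokenizer` from the snippet above:

```python
# Route the request through the chat template instead of raw text
messages = [
    {
        "role": "user",
        "content": (
            "A 45-year-old woman presents with acute chest pain radiating to "
            "the left arm. CT angiography shows a filling defect in the LAD. "
            "Provide a structured differential diagnosis and the most likely "
            "diagnosis."
        ),
    }
]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(input_ids, max_new_tokens=512)
# Decode only the newly generated tokens, not the prompt
print(tokenizer.decode(outputs[0][input_ids.shape[-1]:], skip_special_tokens=True))
```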

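### 🔹 Load with Unsloth

A hedged alternative sketch: Unsloth can typically load a PEFT adapter repo directly by pointing `model_name` at the adapter, resolving the base model from `adapter_config.json`; exact gpt-oss support depends on your Unsloth version.

```python
from unsloth import FastLanguageModel

# Load base model + adapter in 4-bit via Unsloth (version-dependent)
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="alhusains/gpt-oss-20b-eurorad-lora",
    max_seq_length=4096,
    load_in_4bit=True,
)
FastLanguageModel.for_inference(model)  # switch to fast inference mode
```

### 🔹 Merge the adapter for standalone deployment

If you prefer shipping a single checkpoint, PEFT can fold the LoRA weights into the base model. A minimal sketch, reusing `model` from the PEFT snippet above (the output directory is illustrative):

```python
# Fold the LoRA weights into the base weights and save a standalone model
merged = model.merge_and_unload()
merged.save_pretrained("gpt-oss-20b-ddx-merged")    # hypothetical output dir
tokenizer.save_pretrained("gpt-oss-20b-ddx-merged")
```
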
## Training Summary

- Dataset: Eurorad radiology case reports (clinical history + imaging findings)
- Supervision: structured chain-of-thought reasoning generated by gpt-oss-120b
- Objective: enhance differential diagnosis and structured medical reasoning
- Method: LoRA fine-tuning (see the configuration sketch after this list)
  - Rank: 32
  - Alpha: 64
  - Applied to attention, MLP layers, and MoE experts
- Sequence length: 4096 tokens
- Framework: Unsloth + PEFT (4-bit training)
- Precision: bfloat16 mixed precision
- Training schedule: 3 epochs, AdamW, LR = 1e-4 with cosine decay and warmup
- Result: exact-match diagnostic accuracy on Eurorad cases improved from 78.6% (base) to 86.2% (fine-tuned)
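
For reference, a minimal sketch of how these hyperparameters might map onto a PEFT `LoraConfig` and Transformers `TrainingArguments`. The `target_modules` names are assumptions for illustration (the card only says attention, MLP, and MoE expert layers); the real module names depend on the gpt-oss implementation.

```python
from peft import LoraConfig
from transformers import TrainingArguments

# LoRA settings as stated in the card; the target_modules below are
# hypothetical placeholders, not confirmed gpt-oss module names
lora_config = LoraConfig(
    r=32,
    lora_alpha=64,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",   # attention (assumed names)
                    "gate_proj", "up_proj", "down_proj"],     # MLP / experts (assumed names)
    task_type="CAUSAL_LM",
)

# Schedule as stated: 3 epochs, AdamW, LR 1e-4, cosine decay with warmup, bf16
training_args = TrainingArguments(
    output_dir="gpt-oss-20b-ddx-lora",  # hypothetical output dir
    num_train_epochs=3,
    learning_rate=1e-4,
    lr_scheduler_type="cosine",
    warmup_ratio=0.03,                  # warmup fraction not given; assumed
    optim="adamw_torch",
    bf16=True,
)
```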