# GPT-OSS 20B Fine-tuned for Medical Radiology Diagnosis
LoRA fine-tuned version of unsloth/gpt-oss-20b for medical radiology diagnosis tasks.
## Model Details
- Base Model: unsloth/gpt-oss-20b
- Fine-tuning Method: LoRA (Low-Rank Adaptation)
- Training Framework: Unsloth
- Task: Medical diagnosis from radiology reports
- Training Dataset: Eurorad medical cases
## Installation

```bash
pip install unsloth peft transformers accelerate bitsandbytes
```
## Usage

```python
from unsloth import FastLanguageModel
from peft import PeftModel

# Load the 4-bit quantized base model
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="unsloth/gpt-oss-20b",
    dtype=None,
    max_seq_length=4096,
    load_in_4bit=True,
    full_finetuning=False,
)

# Load the LoRA adapter on top of the base model
model = PeftModel.from_pretrained(
    model,
    "omareng/on-device-LLM-gpt-oss-20b",
    is_trainable=False,
)

# Switch Unsloth into optimized inference mode
FastLanguageModel.for_inference(model)
```
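Once the adapter is loaded, generation follows the standard Transformers API. The sketch below is illustrative only: the example report text, the chat-template call, and the sampling settings are assumptions, not values specified by this card.

```python
# Illustrative prompt -- not a real clinical case from the training data
messages = [
    {
        "role": "user",
        "content": (
            "Radiology report: Chest CT shows a 2 cm spiculated nodule "
            "in the right upper lobe. What is the most likely diagnosis?"
        ),
    }
]

# Tokenize with the model's chat template and move to the model's device
inputs = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    return_tensors="pt",
).to(model.device)

# Generation settings here are placeholders; tune them for your use case
outputs = model.generate(
    inputs,
    max_new_tokens=256,
    do_sample=True,
    temperature=0.7,
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```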
## Training Details
- Framework: Unsloth
- Dataset: Eurorad medical radiology cases
- Optimization: 4-bit quantization
- Sequence Length: 4096 tokens
- Adapter Size: 2.27 GB
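The training script itself is not part of this card. The following is a minimal sketch of a LoRA run with Unsloth and TRL that is consistent with the settings above; the LoRA rank, target modules, placeholder dataset, and optimizer settings are assumptions rather than the published recipe, and exact TRL argument names can differ between versions.

```python
from datasets import Dataset
from unsloth import FastLanguageModel
from trl import SFTTrainer, SFTConfig

# Base model, 4-bit, 4096-token context (matches the card above)
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="unsloth/gpt-oss-20b",
    max_seq_length=4096,
    load_in_4bit=True,
    full_finetuning=False,
)

# Attach LoRA adapters -- rank, alpha, and target modules are assumptions
model = FastLanguageModel.get_peft_model(
    model,
    r=16,
    lora_alpha=16,
    lora_dropout=0,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj"],
    use_gradient_checkpointing="unsloth",
)

# Placeholder dataset -- the real Eurorad preprocessing is not shown here
dataset = Dataset.from_list([
    {"text": "Case: ... Findings: ... Diagnosis: ..."},
])

trainer = SFTTrainer(
    model=model,
    tokenizer=tokenizer,
    train_dataset=dataset,
    args=SFTConfig(
        dataset_text_field="text",
        max_seq_length=4096,
        per_device_train_batch_size=2,
        gradient_accumulation_steps=4,
        learning_rate=2e-4,
        num_train_epochs=1,
        output_dir="outputs",
    ),
)
trainer.train()

# Save only the LoRA adapter weights
model.save_pretrained("on-device-LLM-gpt-oss-20b")
```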
## Citation
[Citation information will be added upon publication]
## Limitations
- Clinical Validation Required: This model has not been clinically validated and should not be used for actual patient diagnosis
- Not for Clinical Use: Not intended for direct patient care without clinical validation
- Research Purposes Only: Designed for research in medical AI and diagnostic systems
- May reflect biases present in training data
- Performance may vary across medical specialties
- Like all LLMs, may generate plausible but incorrect information
## Contact

Issues: Please report them to this repository.
Disclaimer: This model is for research purposes only and has not been approved for clinical use. Always consult qualified healthcare professionals for medical decisions.