GPT 5.2
Collection
Distilled models and datasets for GPT 5.2.
8 items • Updated
This model was trained on 250 examples generated by GPT 5.2 (high reasoning).
Note: In this distill I fixed formatting issues found in previous GPT 5 distills. I will be going back to update the other 5.2 distills.
This Qwen3 model was trained 2x faster with Unsloth and Hugging Face's TRL library.