# NanoChat Speedrun 001

A minimal educational LLM trained using the NanoChat pipeline.

## Model Details

- **Format:** PyTorch `.pt` (NanoChat native)
- **Files:** `model.pt`, `meta_000700.json`
- **Usage:** load with the NanoChat scripts; the checkpoint is not compatible with Hugging Face Transformers.
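The weights-plus-sidecar layout above can be sketched with standard-library tools. This is a minimal illustration only: the field names in the metadata dict are assumptions for the sake of the example, not the actual schema of `meta_000700.json`.

```python
import json
import pathlib
import tempfile

# Hypothetical sketch of the checkpoint layout: a NanoChat-style run ships a
# weights file (model.pt) next to a JSON metadata sidecar (meta_000700.json,
# where 000700 is the training step). The field names below are illustrative
# assumptions, not the real NanoChat schema.
ckpt_dir = pathlib.Path(tempfile.mkdtemp())
meta = {"step": 700, "model_config": {"n_layer": 12, "n_head": 6}}
(ckpt_dir / "meta_000700.json").write_text(json.dumps(meta))

# Reading the sidecar recovers the step and config without touching the
# (much larger) weights file.
loaded = json.loads((ckpt_dir / "meta_000700.json").read_text())
print(loaded["step"], loaded["model_config"]["n_layer"])
```

Keeping the metadata in a separate JSON file means tooling can inspect a checkpoint (step number, architecture) cheaply, without deserializing the full `.pt` weights.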

## Intended Use

For educational and research demonstration only: it shows how to train and host a small LLM end-to-end.

## Example (pseudo-code)

```python
# Illustrative pseudo-code; the actual NanoChat loading and generation API
# may differ from what is shown here.
from nanochat import GPT

m = GPT.load("royam0820/nanochat-speedrun-001")
print(m.generate("Hello NanoChat!"))
```

