# Frag Loop

Frag Loop is a decoder-only Transformer trained to generate Shadertoy-compatible GLSL fragment shader bodies. The model emits only the shader body (helper functions plus `mainImage`), which is meant to be wrapped by a fixed WebGL2/Shadertoy template at runtime, as sketched below.
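To make the wrapping step concrete, here is a minimal sketch in Python. The template string and uniform names follow common Shadertoy conventions and are assumptions for illustration; the repo's actual template may differ.

```python
# Minimal sketch of the runtime wrapping step. The template and uniform names
# follow common Shadertoy conventions and are assumptions, not the repo's code.
TEMPLATE = """#version 300 es
precision highp float;
uniform vec3 iResolution;  // viewport resolution (pixels)
uniform float iTime;       // playback time (seconds)
out vec4 fragColor;

{body}

void main() {{
    mainImage(fragColor, gl_FragCoord.xy);
}}
"""

def wrap_body(body: str) -> str:
    """Embed a generated shader body in the fixed WebGL2/Shadertoy template."""
    return TEMPLATE.format(body=body)
```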
## Intended Use
- Primary: generate novel GLSL fragment shader bodies compatible with Shadertoy-style templates.
- Not intended for: general chat, safety-critical systems, or production code generation without review.
## Training Data (Summary)
- Pretraining: GLSL-heavy subset of The Stack (bigcode/the-stack-dedup).
- Fine-tuning (SFT): Vipitis/Shadereval-inputs (Shadertoy-oriented subset).
These datasets contain code under multiple licenses. If you redistribute outputs or data, ensure provenance and license compliance.
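For reference, one way to stream the GLSL portion of The Stack with the Hugging Face `datasets` library is sketched below; the per-language `data_dir` layout and the `content` field name are assumptions, so check the dataset card before relying on them.

```python
# Illustrative only: streaming the GLSL subset of The Stack. The data_dir
# layout and the "content" field name are assumptions; verify on the dataset card.
from datasets import load_dataset

stack_glsl = load_dataset(
    "bigcode/the-stack-dedup",
    data_dir="data/glsl",  # assumed per-language directory layout
    split="train",
    streaming=True,        # avoids downloading the full dataset
)

for example in stack_glsl.take(1):
    print(example["content"][:200])
```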
## Tokenizer

The tokenizer implementation is heavily influenced by nanochat (https://github.com/karpathy/nanochat).
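As a rough illustration of the general approach, a byte-level BPE tokenizer can be trained on GLSL source with the Hugging Face `tokenizers` library; the vocabulary size, special tokens, and corpus file below are assumptions, not the repo's actual settings.

```python
# Sketch of training a byte-level BPE tokenizer on GLSL source. Vocabulary
# size, special tokens, and the corpus path are illustrative assumptions.
from tokenizers import Tokenizer, decoders, models, pre_tokenizers, trainers

tokenizer = Tokenizer(models.BPE())
tokenizer.pre_tokenizer = pre_tokenizers.ByteLevel(add_prefix_space=False)
tokenizer.decoder = decoders.ByteLevel()

trainer = trainers.BpeTrainer(
    vocab_size=32768,                       # assumed, not the repo's setting
    special_tokens=["<|bos|>", "<|eos|>"],  # assumed special tokens
)
tokenizer.train(files=["glsl_corpus.txt"], trainer=trainer)  # hypothetical file
tokenizer.save("tokenizer.json")
```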
## Training Procedure (High-level)

- Decoder-only Transformer (GPT-style) trained from scratch.
- Pretrain on a GLSL-heavy corpus, then SFT on Shadertoy-style examples (see the loop sketch after this list).
- Optional copy-detection and automatic rejection are supported in the runtime.
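For orientation, the next-token training loop has roughly the following shape in PyTorch. The architecture sizes, optimizer settings, and data handling are assumptions for illustration and do not reflect the repo's actual configuration.

```python
# Illustrative next-token training loop; sizes and hyperparameters are assumed.
import torch
import torch.nn as nn
import torch.nn.functional as F

class TinyDecoder(nn.Module):
    def __init__(self, vocab_size=32768, d_model=512, n_layers=8, n_heads=8, max_len=2048):
        super().__init__()
        self.tok = nn.Embedding(vocab_size, d_model)
        self.pos = nn.Embedding(max_len, d_model)
        layer = nn.TransformerEncoderLayer(
            d_model, n_heads, 4 * d_model, batch_first=True, norm_first=True
        )
        self.blocks = nn.TransformerEncoder(layer, n_layers)
        self.lm_head = nn.Linear(d_model, vocab_size, bias=False)

    def forward(self, idx):
        _, t = idx.shape
        x = self.tok(idx) + self.pos(torch.arange(t, device=idx.device))
        mask = nn.Transformer.generate_square_subsequent_mask(t, device=idx.device)
        x = self.blocks(x, mask=mask, is_causal=True)  # causal self-attention
        return self.lm_head(x)

model = TinyDecoder()
opt = torch.optim.AdamW(model.parameters(), lr=3e-4)

def train_step(tokens):  # tokens: (batch, seq_len) int64 tensor
    logits = model(tokens[:, :-1])  # predict the next token
    loss = F.cross_entropy(
        logits.reshape(-1, logits.size(-1)), tokens[:, 1:].reshape(-1)
    )
    opt.zero_grad(); loss.backward(); opt.step()
    return loss.item()
```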
## Evaluation

- Primary evaluation is a compile-and-render check for the WebGL2 target (run via moderngl; a harness sketch follows this list), with metrics for:
  - compile success
  - render time
  - black / static output detection
  - NaN / Inf detection
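The following is a minimal sketch of such a harness using moderngl as a desktop-GL stand-in for WebGL2 (version directives may need adjusting between `300 es` and desktop GLSL). The resolution, thresholds, and return format are assumptions; static-output detection, which would compare renders at different `iTime` values, is omitted.

```python
# Sketch of a compile + render check with moderngl. Values and structure are
# illustrative assumptions, not the repo's evaluation code.
import time

import moderngl
import numpy as np

VERT = """#version 330
in vec2 in_pos;
void main() { gl_Position = vec4(in_pos, 0.0, 1.0); }
"""

def evaluate(fragment_src: str, size=(256, 256)) -> dict:
    ctx = moderngl.create_standalone_context()
    try:
        prog = ctx.program(vertex_shader=VERT, fragment_shader=fragment_src)
    except moderngl.Error:
        return {"compiles": False}

    # Fullscreen triangle; render into a float texture so NaN/Inf survive readback.
    vbo = ctx.buffer(np.array([-1, -1, 3, -1, -1, 3], dtype="f4").tobytes())
    vao = ctx.vertex_array(prog, [(vbo, "2f", "in_pos")])
    fbo = ctx.framebuffer(color_attachments=[ctx.texture(size, 4, dtype="f4")])
    fbo.use()

    t0 = time.perf_counter()
    vao.render()
    ctx.finish()  # wait for the GPU so the timing is meaningful
    render_ms = (time.perf_counter() - t0) * 1e3

    pixels = np.frombuffer(fbo.read(components=4, dtype="f4"), dtype="f4")
    return {
        "compiles": True,
        "render_ms": render_ms,
        "all_black": bool(np.all(np.abs(pixels) < 1e-6)),
        "has_nan_inf": not bool(np.isfinite(pixels).all()),
    }
```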
## How to Use (Recommended Runtime)

The reference runtime and WebUI are provided in the accompanying GitHub repository. From the `publish/` bundle, you can run:
```bash
# download from HF and start the inference server
python infer/inference_server.py --hf-repo hanasaan/frag-loop

# start the UI server
node ui_optional/server_node/server.js
```
Open http://localhost:5173 in a browser.
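For programmatic use, you might query the inference server over HTTP along the following lines. The port, route, and payload fields here are hypothetical placeholders; consult the repo for the server's actual API.

```python
# Hypothetical client: the port, /generate route, and JSON fields below are
# placeholders for illustration, not the server's documented API.
import requests

resp = requests.post(
    "http://localhost:8000/generate",               # assumed host/port and route
    json={"max_tokens": 1024, "temperature": 0.9},  # assumed parameters
    timeout=60,
)
resp.raise_for_status()
shader_body = resp.json()["body"]                   # assumed response field
print(shader_body[:200])
```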
## Limitations
- Outputs may occasionally fail to compile or render.
- The model can still produce outputs close to its training data. Use copy-detection when displaying or distributing outputs (a simple n-gram sketch follows this list).
- Shader aesthetics and performance vary; manual curation is recommended for showcases.
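As one simple baseline, character n-gram Jaccard similarity against a reference corpus can flag near-copies. This is an illustrative heuristic, not the runtime's actual detector, and the n-gram size and threshold below are arbitrary assumptions to tune against your corpus.

```python
# Character n-gram Jaccard similarity as a copy-detection baseline.
# The n-gram size and 0.5 threshold are arbitrary assumptions to tune.
def ngrams(text: str, n: int = 8) -> set[str]:
    return {text[i : i + n] for i in range(len(text) - n + 1)}

def similarity(a: str, b: str, n: int = 8) -> float:
    ga, gb = ngrams(a, n), ngrams(b, n)
    return len(ga & gb) / max(1, len(ga | gb))

def looks_copied(candidate: str, corpus: list[str], threshold: float = 0.5) -> bool:
    return any(similarity(candidate, ref) >= threshold for ref in corpus)
```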
## Acknowledgements
- Tokenizer influenced by nanochat (https://github.com/karpathy/nanochat).
- All implementation work in this repository was done by GPT-5.2-Codex (xhigh).
## License
This model is trained on datasets with mixed licenses. Please review and comply with dataset licensing terms when using or redistributing the model or outputs.