# GuageLLM-23M
GuageLLM-23M is a lightweight GPT-style language model (~23 million parameters) trained from scratch for experimentation, learning, and fast local inference.
This model is designed to be simple, transparent, and easy to run on CPUs while still demonstrating real transformer behavior.
## 🔹 Model Details
- Architecture: GPT-2 style (decoder-only transformer)
- Parameters: ~23M
- Context Length: 64 tokens
- Tokenizer: Custom (trained with the model)
- Training: From scratch
- Framework: 🤗 Transformers (PyTorch)
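For a sense of what these numbers imply, here is a minimal sketch of a GPT-2 style configuration that lands near ~23M parameters with a 64-token context. The specific hyperparameters (`vocab_size`, `n_embd`, `n_layer`, `n_head`) are illustrative assumptions, not the published GuageLLM-23M config:

```python
from transformers import GPT2Config, GPT2LMHeadModel

# Hypothetical hyperparameters, chosen only to illustrate how a
# decoder-only model reaches roughly 23M parameters at a 64-token
# context. The actual GuageLLM-23M config may differ.
config = GPT2Config(
    vocab_size=32000,   # assumed size of the custom tokenizer
    n_positions=64,     # context length from the model card
    n_embd=384,
    n_layer=6,
    n_head=6,
)
model = GPT2LMHeadModel(config)
print(f"~{model.num_parameters() / 1e6:.1f}M parameters")  # ≈ 23.0M
```

Note that under these assumptions the token embeddings alone account for over half the parameters, which is typical for small models paired with sizeable vocabularies.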
## 🔹 Intended Use
GuageLLM-23M is intended for:
- Learning how transformers work internally
- Small-scale text generation experiments
- CPU-friendly inference
- Research, education, and tinkering
⚠️ This model is not intended for production or safety-critical applications.
## 🔹 Usage

### Text Generation (Pipeline)
```python
from transformers import pipeline

pipe = pipeline(
    "text-generation",
    model="Hai929/GuageLLM_23M",
    trust_remote_code=True,
)

# Generate a short continuation of the prompt.
print(pipe("The cat", max_new_tokens=50))
```