GuageLLM-23M

GuageLLM-23M is a lightweight GPT-style language model (~23 million parameters) trained from scratch for experimentation, learning, and fast local inference.

This model is designed to be simple, transparent, and easy to run on CPUs while still demonstrating real transformer behavior.


🔹 Model Details

  • Architecture: GPT-2 style (decoder-only transformer)
  • Parameters: ~23M (see the sanity check below)
  • Context Length: 64 tokens
  • Tokenizer: Custom
  • Training: Trained from scratch
  • Framework: 🤗 Transformers (PyTorch)
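
A quick way to sanity-check the parameter count locally (a minimal sketch; it assumes the repo loads through the standard AutoModelForCausalLM entry point with trust_remote_code=True):

from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained(
    "Hai929/GuageLLM_23M",
    trust_remote_code=True,
)

# Sum the element count of every weight tensor; this should come out near 23 million
n_params = sum(p.numel() for p in model.parameters())
print(f"{n_params / 1e6:.1f}M parameters")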

🔹 Intended Use

GuageLLM-23M is intended for:

  • Learning how transformers work internally
  • Small-scale text generation experiments
  • CPU-friendly inference
  • Research, education, and tinkering

โš ๏ธ This model is not intended for production or safety-critical applications.


🔹 Usage

Text Generation (Pipeline)

from transformers import pipeline

# trust_remote_code=True is needed because the repo ships custom code (its own tokenizer)
pipe = pipeline(
    "text-generation",
    model="Hai929/GuageLLM_23M",
    trust_remote_code=True,
)

print(pipe("The cat"))
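
For more control over decoding, the model can also be driven directly through the Auto classes (a minimal sketch, assuming the repo's custom code registers with AutoTokenizer and AutoModelForCausalLM; the sampling parameters below are illustrative, not tuned values):

from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Hai929/GuageLLM_23M"
tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(model_id, trust_remote_code=True)

# Encode the prompt and sample a short continuation on CPU
inputs = tokenizer("The cat", return_tensors="pt")
outputs = model.generate(
    **inputs,
    max_new_tokens=32,   # keep well inside the 64-token context window
    do_sample=True,
    temperature=0.8,     # illustrative value, not a tuned setting
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))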