# HOS-OSS-3.08B
HOS-OSS-3.08B is a lightweight 3.08B-parameter causal language model optimized for text and code generation.
It is designed for fast inference, low resource usage, and local deployment.
## 📌 Overview
- Model size: ~3.08B parameters (see the sanity check after this list)
- Architecture: LLaMA-style decoder-only transformer
- Base model: Qwen2.5-Coder-3B-Instruct (distilled / adapted)
- Framework: 🤗 Transformers
- Use cases:
  - Code generation
  - Instruction following
  - Chat-style completion
  - Lightweight local AI assistant
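The advertised parameter count is straightforward to sanity-check once the checkpoint is downloaded. A minimal sketch, reusing the repository name from the usage example below:

```python
# Sanity-check the advertised ~3.08B parameter count.
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained("hydffgg/HOS-OSS-3.08B")
total = sum(p.numel() for p in model.parameters())
print(f"{total / 1e9:.2f}B parameters")
```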
## ⚡ Features
- Fast inference on low-end GPUs
- Runs on Kaggle / Colab without large VRAM (see the low-memory loading sketch after this list)
- Suitable for edge deployment
- Clean instruction-response formatting
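To stay within Kaggle / Colab memory limits, the checkpoint can be loaded in half precision with automatic device placement. A minimal sketch, assuming a CUDA-capable runtime and the `accelerate` package (required for `device_map="auto"`); neither assumption is stated in this card:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "hydffgg/HOS-OSS-3.08B"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype=torch.float16,  # roughly halves weight memory vs. fp32
    device_map="auto",          # places layers on the available GPU(s)
)
```

On CPU-only machines, drop `device_map` and keep the default dtype.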
## 🔧 Example Usage
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
import torch

model_name = "hydffgg/HOS-OSS-3.08B"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

# Simple instruction-response prompt format
prompt = "User: Write a Python Hello World\nAssistant:"

inputs = tokenizer(prompt, return_tensors="pt")
with torch.no_grad():
    outputs = model.generate(
        **inputs,
        max_new_tokens=512,
        do_sample=True,   # sampling must be enabled for temperature to apply
        temperature=0.7,
    )
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
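If the tokenizer retained a chat template from its Qwen2.5 base (an assumption this card does not confirm), prompts can also be built with the standard `apply_chat_template` API instead of the hand-written `User:`/`Assistant:` format:

```python
# Assumes a chat template inherited from the Qwen2.5 base; fall back to
# the plain User:/Assistant: format when no template is present.
messages = [{"role": "user", "content": "Write a Python Hello World"}]
if tokenizer.chat_template is not None:
    prompt = tokenizer.apply_chat_template(
        messages, tokenize=False, add_generation_prompt=True
    )
```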