HOS-OSS-3.08B

HOS-OSS-3.08B is a lightweight 3.08B parameter causal language model optimized for text and code generation tasks.
It is designed for fast inference, low resource usage, and local deployment.


🚀 Overview

  • Model size: ~3.08B parameters
  • Architecture: LLaMA-style decoder-only transformer
  • Base model: Qwen2.5-Coder-3B-Instruct (distilled / adapted)
  • Framework: πŸ€— Transformers
  • Use cases:
    • Code generation
    • Instruction following
    • Chat-style completion
    • Lightweight local AI assistant

⚡ Features

  • Fast inference on low-end GPUs
  • Runs on Kaggle / Colab without large VRAM
  • Suitable for edge deployment
  • Clean instruction-response formatting

🧠 Example Usage

```python
from transformers import AutoTokenizer, AutoModelForCausalLM
import torch

model_name = "hydffgg/HOS-OSS-3.08B"

tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

# Multi-line prompts need an explicit newline escape inside the string.
prompt = "User: Write a Python Hello World\nAssistant:"

inputs = tokenizer(prompt, return_tensors="pt")

with torch.no_grad():
    outputs = model.generate(
        **inputs,
        max_new_tokens=512,
        do_sample=True,   # temperature only takes effect when sampling
        temperature=0.7,
    )

print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
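The example above hard-codes a single-turn prompt. A small helper (hypothetical, just following the `User:`/`Assistant:` convention shown in this card; verify against the tokenizer's chat template, if one is defined) can build multi-turn prompts the same way:

```python
# Hypothetical helper for the "User:/Assistant:" format used in the
# example above. The trailing "Assistant:" cues the model to answer next.
def build_prompt(turns):
    """turns: list of (role, text) pairs, e.g. ("User", "hi")."""
    lines = [f"{role}: {text}" for role, text in turns]
    lines.append("Assistant:")
    return "\n".join(lines)

prompt = build_prompt([
    ("User", "Write a Python Hello World"),
])
print(prompt)
# User: Write a Python Hello World
# Assistant:
```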
Weights: Safetensors, F32 (~3B parameters)
Model tree: fine-tuned from base model Qwen/Qwen2.5-3B.