Athenea-4B-Math
Athenea-4B-Math is a fine-tuned version of huihui-ai/Huihui-Qwen3-4B-Thinking-2507-abliterated, specialized in mathematical reasoning and problem solving.
Trained on high-quality data with explicit reasoning traces using <think> and </think> tags, the model is designed to perform detailed step-by-step reasoning on tasks such as calculus, algebra, and equation solving.
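Purely as an illustration of this format (not actual model output), a response to a derivative question has the shape:
<think> Differentiate term by term: d/dx(3x^4) = 12x^3, d/dx(-2x^2) = -4x, d/dx(5x) = 5, and the constant vanishes. </think> The derivative is 12x^3 - 4x + 5.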
⚠️ Important Note: This model uses an abliterated (uncensored) base model, providing full expressive freedom and unrestricted output generation. Users are fully responsible for any use of the model and any content it produces. It is intended exclusively for research and experimentation purposes.
🎯 Model Description
Athenea-4B-Math builds upon Huihui-Qwen3's structured reasoning capabilities, adapting them to mathematical domains. It demonstrates strong performance on symbolic reasoning and numerical problem-solving tasks.
Key features:
- Step-by-step mathematical reasoning within <think> blocks
- Specialization in calculus, algebra, and general problem solving
- Uncensored output generation for complete reasoning transparency
- Improved logical consistency through focused fine-tuning
- Compatible with open inference frameworks (Transformers, vLLM, etc.)
The model was fine-tuned using the dataset Aquiles-ai/Athenea-Math-100k, which contains a diverse range of curated math problems with reasoning traces and natural language explanations.
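If you want to inspect the training data, it can be loaded with the datasets library. This is a minimal sketch that assumes a standard train split; the exact column names come from the dataset card rather than from this code:
from datasets import load_dataset

# Stream the dataset so the full 100k rows are not downloaded up front
ds = load_dataset("Aquiles-ai/Athenea-Math-100k", split="train", streaming=True)

# Print one example to see the problem / reasoning-trace fields
print(next(iter(ds)))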
Note: Fine-tuning was performed using Kronos, Aquiles-ai's proprietary enterprise fine-tuning system.
💻 Usage
Installation
uv pip install transformers torch accelerate
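The flash_attention_2 code path used below also needs the optional flash-attn package (a CUDA GPU and matching build toolchain are required); it can typically be installed with:
uv pip install flash-attn --no-build-isolation
If you skip it, use the plain-attention fallback shown in the commented-out loader below.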
Basic Inference
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch
model = AutoModelForCausalLM.from_pretrained(
    "Aquiles-ai/Athenea-4B-Math",
    dtype=torch.bfloat16,
    trust_remote_code=True,
    device_map="auto",
    attn_implementation="flash_attention_2",  # Requires the flash-attn package
)
# Without flash-attn:
# model = AutoModelForCausalLM.from_pretrained(
#     "Aquiles-ai/Athenea-4B-Math",
#     dtype="auto",
#     device_map="auto",
# )
tokenizer = AutoTokenizer.from_pretrained("Aquiles-ai/Athenea-4B-Math", trust_remote_code=True)
messages = [
{"role": "user", "content": "Hey, find the derivative of 3x^4 - 2x^2 + 5x - 7"}
]
inputs = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    tokenize=True,
    return_dict=True,
    return_tensors="pt",
).to(model.device)
with torch.no_grad():
    output = model.generate(
        **inputs,
        max_new_tokens=8092,
        pad_token_id=tokenizer.eos_token_id,
        eos_token_id=tokenizer.eos_token_id,
    )
# Decode and print the output
print(tokenizer.decode(output[0], skip_special_tokens=True))
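To separate the reasoning trace from the final answer, you can split the newly generated text on the closing </think> tag. This is a minimal sketch that assumes at most one </think> marker per response:
# Keep only the newly generated tokens, then split the reasoning trace from the answer
generated = output[0][inputs["input_ids"].shape[-1]:]
text = tokenizer.decode(generated, skip_special_tokens=True)
if "</think>" in text:
    reasoning, answer = text.split("</think>", 1)
    print("Reasoning:", reasoning.replace("<think>", "").strip())
    print("Answer:", answer.strip())
else:
    print(text)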
Streaming Inference
from transformers import AutoModelForCausalLM, AutoTokenizer, TextIteratorStreamer
import torch
from threading import Thread
model = AutoModelForCausalLM.from_pretrained(
    "Aquiles-ai/Athenea-4B-Math",
    dtype=torch.bfloat16,
    trust_remote_code=True,
    device_map="auto",
    attn_implementation="flash_attention_2",  # Requires the flash-attn package
)
tokenizer = AutoTokenizer.from_pretrained("Aquiles-ai/Athenea-4B-Math", trust_remote_code=True)
messages = [
{"role": "user", "content": "Hey, find the derivative of x^2(3x + 1) using the product rule."}
]
inputs = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    tokenize=True,
    return_dict=True,
    return_tensors="pt",
).to(model.device)
# Create the streamer
streamer = TextIteratorStreamer(tokenizer, skip_prompt=True, skip_special_tokens=True)
# Build kwargs for generate
generate_kwargs = dict(
**inputs,
max_new_tokens=8092,
pad_token_id=tokenizer.eos_token_id,
eos_token_id=tokenizer.eos_token_id,
streamer=streamer,
)
def _generate_thread(model, kwargs):
    with torch.no_grad():
        model.generate(**kwargs)
thread = Thread(target=_generate_thread, args=(model, generate_kwargs))
thread.start()
for chunk in streamer:
    print(chunk, end="", flush=True)
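Once the stream is exhausted, it is good practice to wait for the generation thread to finish:
# Make sure the background generation thread has completed
thread.join()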
Production Deployment with vLLM
Start server:
vllm serve Aquiles-ai/Athenea-4B-Math \
--host 0.0.0.0 \
--port 8000 \
--api-key dummyapikey \
--max-model-len=16384 \
--async-scheduling \
--gpu-memory-utilization=0.90
Request to the server from the OpenAI client:
from openai import OpenAI
client = OpenAI(api_key="dummyapikey", base_url="http://127.0.0.1:8000/v1")
stream = client.chat.completions.create(
model="Aquiles-ai/Athenea-4B-Math",
messages=[{
"role": "user",
"content": "Hey, find the indefinite integral of 4x^3 -2x + 7"
}],
max_tokens=8092,
stream=True
)
for chunk in stream:
    if chunk.choices[0].delta.content:
        print(chunk.choices[0].delta.content, end="", flush=True)
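A non-streaming request uses the same client; only stream=True is dropped (the prompt here is illustrative):
from openai import OpenAI

client = OpenAI(api_key="dummyapikey", base_url="http://127.0.0.1:8000/v1")

# Non-streaming completion: the full answer arrives in a single response object
resp = client.chat.completions.create(
    model="Aquiles-ai/Athenea-4B-Math",
    messages=[{"role": "user", "content": "Solve 2x + 3 = 11 for x."}],
    max_tokens=8092,
)
print(resp.choices[0].message.content)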
vLLM Benefits: roughly 20-30x faster inference than plain Transformers generation, an OpenAI-compatible API, continuous batching, and async scheduling.
Aquiles-playground
In addition to code usage, you can also try our models locally through an open-source playground on GitHub.
Made with ❤️ by Aquiles-ai