TurkishReasoner collection: models trained on reasoning tasks in Turkish (4 items).
TurkishReasoner-Gemma3-1B is a lightweight Turkish reasoning model fine-tuned from Google's Gemma 3 1B. Despite its compact size, the model delivers strong reasoning capabilities in Turkish, making it well suited to deployment in resource-constrained environments while maintaining high-quality step-by-step reasoning.
The following example shows how to load the PEFT adapter on top of the base Gemma 3 1B model and run inference with the transformers text-generation pipeline:
from transformers import AutoModelForCausalLM, AutoTokenizer, pipeline
from peft import PeftModel
import torch

# Load the base model and attach the TurkishReasoner adapter on top of it.
base_model = AutoModelForCausalLM.from_pretrained("unsloth/gemma-3-1b-it")
model = PeftModel.from_pretrained(base_model, "Chan-Y/TurkishReasoner-Gemma3-1B").to("cuda")
tokenizer = AutoTokenizer.from_pretrained("unsloth/gemma-3-1b-it")

# Build a text-generation pipeline with sampling enabled.
pipe = pipeline(
    "text-generation",
    model=model,
    tokenizer=tokenizer,
    max_new_tokens=512,
    do_sample=True,
    temperature=0.7,
    top_p=0.95,
)

# The system prompt (in Turkish) instructs the model to answer in Turkish,
# to place its reasoning between <start_working_out> and <end_working_out>,
# and to place its final answer between <SOLUTION> and </SOLUTION>.
messages = [
    {"role": "system", "content": """Sen kullanıcıların isteklerine Türkçe cevap veren bir asistansın ve sana bir problem verildi.
Problem hakkında düşün ve çalışmanı göster.
Çalışmanı <start_working_out> ve <end_working_out> arasına yerleştir.
Sonra, çözümünü <SOLUTION> ve </SOLUTION> arasına yerleştir.
Lütfen SADECE Türkçe kullan."""},
    {"role": "user", "content": "121'in karekökü kaçtır?"},  # "What is the square root of 121?"
]

# Generate only the newly produced text (not the prompt) and print it.
response = pipe(messages, return_full_text=False)[0]["generated_text"]
print(response)
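The model's output follows the tag format defined in the system prompt: the reasoning appears between <start_working_out> and <end_working_out>, and the final answer between <SOLUTION> and </SOLUTION>. The snippet below is a minimal sketch of pulling the final answer out of such a response; the helper name and the assumption that both tags are always emitted are illustrative, not part of the model card.

import re

def extract_solution(text: str) -> str:
    # Hypothetical helper: return the text between the <SOLUTION> tags.
    # Assumes the model emitted both tags; falls back to the raw text otherwise.
    match = re.search(r"<SOLUTION>(.*?)</SOLUTION>", text, re.DOTALL)
    return match.group(1).strip() if match else text.strip()

print(extract_solution(response))

The same pattern can be applied to the <start_working_out>/<end_working_out> tags if the reasoning trace itself is needed.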
For more information or assistance with this model, please contact the developers: