Tucan-9B-v1.0-GGUF
Bulgarian Language Models for Function Calling
Paper: https://arxiv.org/abs/2506.23394
Overview
TUCAN (Tool-Using Capable Assistant Navigator) is a series of open-source Bulgarian language models fine-tuned specifically for function calling and tool use.
These models can interact with external tools, APIs, and databases, making them appropriate for building AI agents and Model Context Protocol (MCP) applications.
Built on top of BgGPT models from INSAIT Institute, these models have been enhanced with function-calling capabilities.
Motivation
Although BgGPT models demonstrate strong Bulgarian language comprehension, they struggle to maintain the precise formatting required for consistent function calling; even with detailed system prompts, their performance on this task remains suboptimal.
This project addresses that gap by fine-tuning BgGPT, giving the Bulgarian AI community proper tool-use capabilities in their native language.
Models and variants
Available in three sizes with full models, LoRA adapters, and quantized GGUF variants:
| Model Size | Full Model | LoRA Adapter | GGUF (Quantized) |
|---|---|---|---|
| 2.6B | Tucan-2.6B-v1.0 | LoRA | GGUF |
| 9B | Tucan-9B-v1.0 | LoRA | GGUF (this repo) |
| 27B | Tucan-27B-v1.0 | LoRA | GGUF |
GGUF variants include: q4_k_m, q5_k_m, q6_k, q8_0, q4_0 quantizations
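If you want to run the quantized GGUF files directly rather than the full-precision checkpoints, one option is the llama-cpp-python bindings for llama.cpp. The following is a minimal sketch, not an official recipe: the GGUF filename and the generation settings are assumptions, so substitute the actual file you download from this repository.

# Minimal sketch: load a quantized GGUF variant with llama-cpp-python.
# The model_path is an assumed filename - replace it with the GGUF file
# you actually downloaded from this repository.
from llama_cpp import Llama

llm = Llama(
    model_path="Tucan-9B-v1.0-q4_k_m.gguf",  # assumed filename
    n_ctx=4096,        # context window size
    n_gpu_layers=-1,   # offload all layers to the GPU when available
)

response = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Здравей! Какво можеш да правиш?"}],
    max_tokens=256,
    temperature=0.1,
)
print(response["choices"][0]["message"]["content"])

For function calling with the GGUF builds, the same system prompt template from the "Prompt format" section below applies.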
Usage
Quick start
pip install -U "transformers[torch]" accelerate bitsandbytes
Prompt format
Critical: use this format for function calling to get the best results.
Required System Prompt Template
<bos><start_of_turn>user
Ти си полезен AI асистент, който предоставя полезни и точни отговори.
Имаш достъп и можеш да извикаш една или повече функции, за да помогнеш с потребителското запитване. Използвай ги, само ако е необходимо и подходящо.
Когато използваш функция, форматирай извикването ѝ в блок ```tool_call``` на отделен ред, а след това ще получиш резултат от изпълнението в блок ```tool_response```.
## Шаблон за извикване:
```tool_call
{"name": <function-name>, "arguments": <args-json-object>}```
## Налични функции:
[your function definitions here]
## Потребителска заявка:
[your query in Bulgarian]<end_of_turn>
<start_of_turn>model
Note π
The model only generates the tool_call blocks with function names and parameters - it doesn't actually execute the functions. Your client application must parse these generated calls, execute the actual functions (API calls, database queries, etc.), and provide the results back to the model in tool_response blocks for the conversation to continue the interperation of the results. A full demo is comming soon.
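For reference, here is a sketch of what that client-side loop can look like: extract the generated tool_call block, run the matching Python function, and wrap the result in a tool_response block. The regex, the sample model output, and the create_calendar_event handler are illustrative assumptions, not part of the released code.

# Sketch of client-side handling of a generated tool_call block.
# extract_tool_calls, REGISTRY and create_calendar_event are hypothetical helpers.
import json
import re

def extract_tool_calls(model_output: str):
    """Parse the JSON payload of every tool_call block in the model output."""
    blocks = re.findall(r"```tool_call\s*(.*?)```", model_output, re.DOTALL)
    return [json.loads(block.strip()) for block in blocks]

def create_calendar_event(title, date, start_time, end_time):
    # Placeholder implementation - call your real calendar API here.
    return {"status": "created", "title": title, "date": date}

REGISTRY = {"create_calendar_event": create_calendar_event}

def run_tool_calls(model_output: str) -> str:
    """Execute each extracted call and format the results as a tool_response block."""
    results = [REGISTRY[c["name"]](**c["arguments"]) for c in extract_tool_calls(model_output)]
    return "```tool_response\n" + json.dumps(results, ensure_ascii=False) + "\n```"

# Illustrative model output for the calendar example shown further below.
sample_output = """```tool_call
{"name": "create_calendar_event", "arguments": {"title": "Годишен преглед", "date": "2025-06-08", "start_time": "14:00", "end_time": "14:30"}}```"""
print(run_tool_calls(sample_output))

A complete round-trip with the transformers-based example is sketched after the Python example below.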
Python example
Complete Working Example
import torch
import json
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load model
model_name = "s-emanuilov/Tucan-2.6B-v1.0"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype=torch.bfloat16,
    device_map="auto",
    attn_implementation="eager"  # Required for Gemma models
)

# Create prompt with system template
def create_prompt(functions, user_query):
    system_prompt = """Ти си полезен AI асистент, който предоставя полезни и точни отговори.
Имаш достъп и можеш да извикаш една или повече функции, за да помогнеш с потребителското запитване. Използвай ги, само ако е необходимо и подходящо.
Когато използваш функция, форматирай извикването ѝ в блок ```tool_call``` на отделен ред, а след това ще получиш резултат от изпълнението в блок ```tool_response```.
## Шаблон за извикване:
```tool_call
{"name": <function-name>, "arguments": <args-json-object>}```
"""
    functions_text = json.dumps(functions, ensure_ascii=False, indent=2)
    full_prompt = f"{system_prompt}\n## Налични функции:\n{functions_text}\n\n## Потребителска заявка:\n{user_query}"
    chat = [{"role": "user", "content": full_prompt}]
    return tokenizer.apply_chat_template(chat, tokenize=False, add_generation_prompt=True)

# Example usage
functions = [{
    "name": "create_calendar_event",
    "description": "Creates a new event in Google Calendar.",
    "parameters": {
        "type": "object",
        "properties": {
            "title": {"type": "string"},
            "date": {"type": "string"},
            "start_time": {"type": "string"},
            "end_time": {"type": "string"}
        },
        "required": ["title", "date", "start_time", "end_time"]
    }
}]

query = "Създай събитие 'Годишен преглед' за 8-ми юни 2025 от 14:00 до 14:30."

# Generate response
prompt = create_prompt(functions, query)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(
    **inputs,
    max_new_tokens=1024,
    temperature=0.1,
    top_k=25,
    top_p=1.0,
    repetition_penalty=1.1,
    do_sample=True,
    eos_token_id=[tokenizer.eos_token_id, tokenizer.convert_tokens_to_ids("<end_of_turn>")],
    pad_token_id=tokenizer.eos_token_id
)
result = tokenizer.decode(outputs[0][inputs.input_ids.shape[1]:], skip_special_tokens=True)
print(result)
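The example above stops after the model emits its tool_call. To let the model interpret the result, the executed output has to be sent back in a tool_response block as a follow-up turn. The sketch below reuses prompt, result, model, and tokenizer from the example above; the tool_result value and the exact turn layout (closing the model turn, then a user turn carrying the tool_response) are assumptions based on the Gemma chat format, not a confirmed specification.

# Sketch of the follow-up turn: feed the executed result back as a
# tool_response block so the model can answer in natural language.
# tool_result is a made-up placeholder; in practice it comes from running
# the generated tool_call yourself (see the parsing sketch above).
tool_result = {"status": "created", "event_id": "evt_123"}

followup = (
    f"{prompt}{result}<end_of_turn>\n"   # close the model turn that produced the tool_call
    "<start_of_turn>user\n"
    "```tool_response\n"
    f"{json.dumps(tool_result, ensure_ascii=False)}\n"
    "```<end_of_turn>\n"
    "<start_of_turn>model\n"
)

inputs = tokenizer(followup, return_tensors="pt").to(model.device)
outputs = model.generate(
    **inputs,
    max_new_tokens=512,
    temperature=0.1,
    do_sample=True,
    eos_token_id=[tokenizer.eos_token_id, tokenizer.convert_tokens_to_ids("<end_of_turn>")],
    pad_token_id=tokenizer.eos_token_id,
)
print(tokenizer.decode(outputs[0][inputs.input_ids.shape[1]:], skip_special_tokens=True))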
Performance & Dataset
Full methodology, dataset details, and comprehensive evaluation results are presented in the paper linked above.
Dataset: 8,000+ bilingual (Bulgarian/English) function-calling examples across 1,000+ topics, including tool calls with single/multiple arguments, optional parameters, follow-up queries, multi-tool selection, ambiguous queries requiring clarification, and conversational interactions without tool use. Data sourced from manual curation and synthetic generation (Gemini Pro 2.5/GPT-4.1/Sonnet 4).
Results: ~40% improvement in tool-use capabilities over base BgGPT models in internal benchmarks.
Questions & Contact
For questions, collaboration, or feedback: Connect on LinkedIn
Acknowledgments
Built on top of BgGPT series.
License
This work is licensed under CC-BY-4.0.
Model tree for llm-bg/Tucan-9B-v1.0-GGUF: base model google/gemma-2-9b