
Tucan-9B-v1.0-GGUF

Bulgarian Language Models for Function Calling 🇧🇬

Paper: https://arxiv.org/abs/2506.23394

Overview 🚀

TUCAN (Tool-Using Capable Assistant Navigator) is a series of open-source Bulgarian language models fine-tuned specifically for function calling and tool use.

These models can interact with external tools, APIs, and databases, making them appropriate for building AI agents and Model Context Protocol (MCP) applications.

Built on top of BgGPT models from INSAIT Institute, these models have been enhanced with function-calling capabilities.

Motivation 🎯

Although BgGPT models demonstrate strong Bulgarian language comprehension, they struggle to maintain the precise formatting required for consistent function calling. Even with detailed system prompts, their performance on this specific task remains suboptimal.

This project addresses that gap by fine-tuning BgGPT, providing the Bulgarian AI community with proper tool-use capabilities in their native language.

Models and variants 📦

Available in three sizes with full models, LoRA adapters, and quantized GGUF variants:

| Model Size | Full Model | LoRA Adapter | GGUF (Quantized) |
|------------|------------|--------------|------------------|
| 2.6B | Tucan-2.6B-v1.0 | LoRA | GGUF |
| 9B | Tucan-9B-v1.0 | LoRA | GGUF 📍 (this model) |
| 27B | Tucan-27B-v1.0 | LoRA | GGUF |

GGUF variants include q4_k_m, q5_k_m, q6_k, q8_0, and q4_0 quantizations.
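For running the quantized files locally, a minimal llama-cpp-python sketch is shown below. The GGUF filename, context size, and generation settings are assumptions for illustration; check the actual file names in this repository and see the prompt format described in the Usage section.

```python
# Minimal sketch: load a quantized Tucan GGUF with llama-cpp-python.
# The filename is an assumption - use the actual q4_k_m/q5_k_m/... file from this repo.
from llama_cpp import Llama

llm = Llama(
    model_path="Tucan-9B-v1.0-q4_k_m.gguf",  # hypothetical local path
    n_ctx=4096,        # context window
    n_gpu_layers=-1,   # offload all layers to GPU if one is available
)

# Gemma-style turn markers as in the prompt format below; llama-cpp-python
# typically adds <bos> itself, so it is omitted from the prompt string.
prompt = (
    "<start_of_turn>user\n"
    "Ти си полезен AI асистент, който предоставя полезни и точни отговори.\n\n"
    "## Потребителска заявка:\nЗдравей, кой си ти?<end_of_turn>\n"  # "Hello, who are you?"
    "<start_of_turn>model\n"
)

out = llm(prompt, max_tokens=256, temperature=0.1, stop=["<end_of_turn>"])
print(out["choices"][0]["text"])
```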

Usage 🛠️

Quick start ⚡

pip install -U "transformers[torch]" accelerate bitsandbytes

Prompt format ⚙️

Critical: use this exact format for function calling to get the best results.

📋 Required System Prompt Template
<bos><start_of_turn>user
Ти си полезен AI асистент, който предоставя полезни и точни отговори.

Имаш достъп и можеш да извикаш една или повече функции, за да помогнеш с потребителското запитване. Използвай ги, само ако е необходимо и подходящо.

Когато използваш функция, форматирай извикването ѝ в блок ```tool_call``` на отделен ред, а след това ще получиш резултат от изпълнението в блок ```tool_response```.

## Шаблон за извикване:
```tool_call
{"name": <function-name>, "arguments": <args-json-object>}```

## Налични функции:
[your function definitions here]

## Потребителска заявка:
[your query in Bulgarian]<end_of_turn>
<start_of_turn>model

Note 📝

The model only generates the tool_call blocks with function names and parameters; it doesn't actually execute the functions. Your client application must parse these generated calls, execute the actual functions (API calls, database queries, etc.), and provide the results back to the model in tool_response blocks so the conversation can continue with the interpretation of the results. A full demo is coming soon.
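For illustration, a minimal client-side parsing sketch is shown below; the generated output string is hypothetical, and the regex is just one possible way to pull out the JSON payload from a ```tool_call``` block.

```python
# Sketch: extract the function call the model proposes inside a ```tool_call``` block.
import json
import re

# Hypothetical model output, shaped like the template above.
generated = """```tool_call
{"name": "create_calendar_event", "arguments": {"title": "Годишен преглед", "date": "2025-06-08", "start_time": "14:00", "end_time": "14:30"}}
```"""

match = re.search(r"```tool_call\s*(\{.*?\})\s*```", generated, re.DOTALL)
if match:
    call = json.loads(match.group(1))
    print(call["name"], call["arguments"])  # dispatch to your own function here
else:
    print("No tool call - treat the output as a normal reply.")
```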

Python example 🐍

💻 Complete Working Example
import torch
import json
from transformers import AutoModelForCausalLM, AutoTokenizer, GenerationConfig

# Load model
model_name = "s-emanuilov/Tucan-2.6B-v1.0"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype=torch.bfloat16,
    device_map="auto",
    attn_implementation="eager"  # Required for Gemma models
)

# Create prompt with system template
def create_prompt(functions, user_query):
    system_prompt = """Ти си полезен AI асистент, който предоставя полезни и точни отговори.

Имаш достъп и можеш да извикаш една или повече функции, за да помогнеш с потребителското запитване. Използвай ги, само ако е необходимо и подходящо.

Когато използваш функция, форматирай извикването ѝ в блок ```tool_call``` на отделен ред, а след това ще получиш резултат от изпълнението в блок ```tool_response```.

## Шаблон за извикване:
```tool_call
{{"name": <function-name>, "arguments": <args-json-object>}}```
"""
    
    functions_text = json.dumps(functions, ensure_ascii=False, indent=2)
    full_prompt = f"{system_prompt}\n## Налични функции:\n{functions_text}\n\n## Потребителска заявка:\n{user_query}"
    
    chat = [{"role": "user", "content": full_prompt}]
    return tokenizer.apply_chat_template(chat, tokenize=False, add_generation_prompt=True)

# Example usage
functions = [{
    "name": "create_calendar_event",
    "description": "Creates a new event in Google Calendar.",
    "parameters": {
        "type": "object",
        "properties": {
            "title": {"type": "string"},
            "date": {"type": "string"},
            "start_time": {"type": "string"},
            "end_time": {"type": "string"}
        },
        "required": ["title", "date", "start_time", "end_time"]
    }
}]

query = "Бъздай ΡΡŠΠ±ΠΈΡ‚ΠΈΠ΅ 'Π“ΠΎΠ΄ΠΈΡˆΠ΅Π½ ΠΏΡ€Π΅Π³Π»Π΅Π΄' Π·Π° 8-ΠΌΠΈ юни 2025 ΠΎΡ‚ 14:00 Π΄ΠΎ 14:30."

# Generate response
prompt = create_prompt(functions, query)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)

outputs = model.generate(
    **inputs,
    max_new_tokens=1024,
    temperature=0.1,
    top_k=25,
    top_p=1.0,
    repetition_penalty=1.1,
    do_sample=True,
    eos_token_id=[tokenizer.eos_token_id, tokenizer.convert_tokens_to_ids("<end_of_turn>")],
    pad_token_id=tokenizer.eos_token_id
)

result = tokenizer.decode(outputs[0][inputs.input_ids.shape[1]:], skip_special_tokens=True)
print(result)
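To close the loop described in the note above, here is a hedged sketch of the follow-up turn: it parses the generated call from `result`, runs a stand-in `create_calendar_event` (a hypothetical stub, not a real integration), and returns the result to the model in a ```tool_response``` block. The exact multi-turn layout is an assumption based on the Gemma chat format; adapt it to your client.

```python
# Sketch of the follow-up turn: execute the parsed call locally and feed the
# result back to the model so it can interpret it for the user.
import re  # json is already imported above

def create_calendar_event(title, date, start_time, end_time):
    # Hypothetical stub - replace with a real Google Calendar / API integration.
    return {"status": "success", "title": title, "date": date,
            "start": start_time, "end": end_time}

match = re.search(r"```tool_call\s*(\{.*?\})\s*```", result, re.DOTALL)
if match:
    call = json.loads(match.group(1))
    tool_result = create_calendar_event(**call["arguments"])

    # Close the model turn, return the result in a ```tool_response``` block
    # (assumed turn layout), and ask the model for a final answer.
    follow_up = (
        prompt + result + "<end_of_turn>\n"
        + "<start_of_turn>user\n```tool_response\n"
        + json.dumps(tool_result, ensure_ascii=False)
        + "\n```<end_of_turn>\n<start_of_turn>model\n"
    )
    inputs = tokenizer(follow_up, return_tensors="pt").to(model.device)
    outputs = model.generate(
        **inputs,
        max_new_tokens=256,
        do_sample=False,
        eos_token_id=[tokenizer.eos_token_id, tokenizer.convert_tokens_to_ids("<end_of_turn>")],
        pad_token_id=tokenizer.eos_token_id,
    )
    print(tokenizer.decode(outputs[0][inputs.input_ids.shape[1]:], skip_special_tokens=True))
```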

Performance & Dataset 📊

📄 Full methodology, dataset details, and comprehensive evaluation results are available in the paper linked above.

Dataset: 8,000+ bilingual (Bulgarian/English) function-calling examples across 1,000+ topics, including tool calls with single/multiple arguments, optional parameters, follow-up queries, multi-tool selection, ambiguous queries requiring clarification, and conversational interactions without tool use. Data sourced from manual curation and synthetic generation (Gemini Pro 2.5/GPT-4.1/Sonnet 4).

Results: ~40% improvement in tool-use capabilities over base BgGPT models in internal benchmarks.

Questions & Contact 💬

For questions, collaboration, or feedback: Connect on LinkedIn

Acknowledgments 🙏

Built on top of the BgGPT series.

License 📄

This work is licensed under CC-BY-4.0.
