# T-pro-it-2.1-GGUF
🚨 Users are advised to exercise caution and are responsible for any additional training and oversight required to ensure the model's responses meet acceptable ethical and safety standards. The responsibility for incorporating this model into industrial or commercial solutions lies entirely with those who choose to deploy it.
This repository contains T-pro-it-2.1 converted to the GGUF format with llama.cpp.
See the original BF16 model here: t-tech/T-pro-it-2.1.
## Description
T-pro-it-2.1 is an efficient Russian-language model built on the Qwen 3 model family, with improved instruction following and tool-calling capabilities compared to T-pro-it-2.0. It outperforms Qwen3-32B in tool-calling scenarios, which is essential for agentic applications, and is built for both general tasks and complex workflows.
NOTE: This model supports only non-thinking mode and does not generate `<think></think>` blocks in its output. Specifying `enable_thinking=False` is no longer required.
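For illustration, here is a minimal sketch of a tool-calling request against a locally running `llama-server` (see the Quickstart below for how to start the server); the port and the `get_weather` tool are hypothetical examples, not part of this repository:

```shell
# Sketch: assumes llama-server is running on localhost:8080 with this model
# and --jinja enabled (needed for tool calls); get_weather is hypothetical.
curl http://localhost:8080/v1/chat/completions -H "Content-Type: application/json" -d '{
  "messages": [
    {"role": "user", "content": "Какая сейчас погода в Москве?"}
  ],
  "tools": [{
    "type": "function",
    "function": {
      "name": "get_weather",
      "description": "Get the current weather in a given city",
      "parameters": {
        "type": "object",
        "properties": {"city": {"type": "string", "description": "City name"}},
        "required": ["city"]
      }
    }
  }]
}'
```

Because the model runs in non-thinking mode, the response contains the answer (or a `tool_calls` entry) directly, with no `<think></think>` prefix to strip.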
## 📊 Benchmarks
| Model | Ru Arena Hard | ruIFeval* | ruBFCL |
|---|---|---|---|
| T-pro-it-2.1 | 93.8 | 80.7 | 66.0 |
| T-pro-it-2.1-Q8_0 | 94.2 | 80.8 | 65.8 |
| T-pro-it-2.1-Q6_K | 93.4 | 80.0 | 65.9 |
| T-pro-it-2.1-Q5_K_M | 92.7 | 81.4 | 65.7 |
| T-pro-it-2.1-Q5_K_S | 92.3 | 80.4 | 65.2 |
| T-pro-it-2.1-Q5_0 | 93.8 | 79.9 | 64.8 |
| T-pro-it-2.1-Q4_K_M | 92.6 | 80.7 | 64.8 |
\* The ruIFeval score is the mean of four values: prompt-level and instruction-level accuracy, each evaluated under strict and loose criteria.
Recommendation: choose the highest-quality quantisation that fits your hardware (VRAM / RAM).
| Filename | Quant method | Bits | Size (GB) |
|---|---|---|---|
| T-pro-it-2.1-q8_0.gguf | Q8_0 | 8 | 34.8 |
| T-pro-it-2.1-q6_k.gguf | Q6_K | 6 | 26.9 |
| T-pro-it-2.1-q5_k_m.gguf | Q5_K_M | 5 | 23.2 |
| T-pro-it-2.1-q5_k_s.gguf | Q5_K_S | 5 | 22.6 |
| T-pro-it-2.1-q5_0.gguf | Q5_0 | 5 | 22.6 |
| T-pro-it-2.1-q4_k_m.gguf | Q4_K_M | 4 | 19.8 |
Size figures assume no GPU off-loading. Off-loading lowers RAM usage and uses VRAM instead.
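If you only need one quantisation, you can download just that file instead of the whole repository. A sketch using the Hugging Face CLI, where the `--include` glob is assumed to match the filenames in the table above:

```shell
# Sketch: downloads only the Q5_K_M weights into the current directory;
# the glob pattern assumes the filenames listed in the table above.
huggingface-cli download t-tech/T-pro-it-2.1-GGUF \
  --include "*q5_k_m*" \
  --local-dir .
```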
## Quickstart
### llama.cpp
Check out our llama.cpp documentation for a detailed usage guide.
We recommend cloning llama.cpp and building it by following the official guide; we track the latest version of llama.cpp.
The following demonstration assumes you are running commands from the llama.cpp repository directory.
```shell
./llama-cli -hf t-tech/T-pro-it-2.1-GGUF:Q8_0 --jinja --color -ngl 99 -fa -sm row --temp 0.6 --presence-penalty 1.0 -c 40960 -n 32768 --no-context-shift
```
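If you prefer an OpenAI-compatible HTTP endpoint over the interactive CLI, `llama-server` accepts broadly the same flags. A sketch with the same sampling settings (the port is an assumption; adjust `-ngl` to your VRAM):

```shell
# Sketch: serves the model over an OpenAI-compatible HTTP API on port 8080.
./llama-server -hf t-tech/T-pro-it-2.1-GGUF:Q8_0 --jinja -ngl 99 -fa \
  --temp 0.6 --presence-penalty 1.0 -c 40960 --port 8080
```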
### ollama
Check out our ollama documentation for a detailed usage guide.
You can run T-pro-it-2.1 with a single command:
```shell
ollama run t-tech/T-pro-it-2.1:q8_0
```
See also the t-tech ollama homepage.
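Ollama also exposes a local REST API, so the model can be queried programmatically once it has been pulled. A minimal sketch, assuming Ollama is running on its default port 11434:

```shell
# Sketch: assumes a local Ollama instance on the default port 11434.
curl http://localhost:11434/api/chat -d '{
  "model": "t-tech/T-pro-it-2.1:q8_0",
  "messages": [{"role": "user", "content": "Привет! Кто ты?"}],
  "stream": false
}'
```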