T-lite-it-2.1-GGUF

🚨 Users are advised to exercise caution and are responsible for any additional training and oversight required to ensure the model's responses meet acceptable ethical and safety standards. The responsibility for incorporating this model into industrial or commercial solutions lies entirely with those who choose to deploy it.

This repository contains T-lite-it-2.1 converted to the GGUF format with llama.cpp.
See the original BF16 model here: t-tech/T-lite-it-2.1.

Description

T-lite-it-2.1 is an efficient Russian-language model built on the Qwen 3 architecture. It brings significant improvements in instruction following and adds tool-calling support, a key advancement over T-lite-it-1.0, which lacked tool use. The model outperforms Qwen3-8B in tool-calling scenarios, which is essential for agentic applications. It is built for both general tasks and complex workflows, and an optimized tokenizer gives it higher throughput on Russian text generation.

NOTE: This model supports only non-thinking mode and does not generate <think></think> blocks in its output. Specifying enable_thinking=False is no longer required.

πŸ“Š Benchmarks

| Model | Ru Arena Hard | ruIFeval* | ruBFCL |
|---|---|---|---|
| T-lite-it-2.1 | 83.9 | 75.9 | 56.5 |
| T-lite-it-2.1-q8_0 | 79.5 | 76.2 | 56.6 |
| T-lite-it-2.1-q6_k | 79.5 | 77.8 | 56.7 |
| T-lite-it-2.1-q5_k_m | 78.6 | 76.3 | 56.6 |
| T-lite-it-2.1-q5_0 | 78.9 | 76.8 | 56.3 |
| T-lite-it-2.1-q5_k_s | 76.1 | 75.3 | 56.0 |
| T-lite-it-2.1-q4_k_m | 71.7 | 75.9 | 54.7 |

* The ruIFeval score is the mean of 4 values: prompt-level and instruction-level accuracy, each computed under strict and loose matching.

Available quantisations

Recommendation: choose the highest-quality quantisation that fits your hardware (VRAM / RAM).

| Filename (→ -gguf) | Quant method | Bits | Size (GB) |
|---|---|---|---|
| T-lite-it-2.1-q8_0 | Q8_0 | 8 | 8.7 |
| T-lite-it-2.1-q6_k | Q6_K | 6 | 6.7 |
| T-lite-it-2.1-q5_k_m | Q5_K_M | 5 | 5.9 |
| T-lite-it-2.1-q5_k_s | Q5_K_S | 5 | 5.7 |
| T-lite-it-2.1-q5_0 | Q5_0 | 5 | 5.7 |
| T-lite-it-2.1-q4_k_m | Q4_K_M | 4 | 5.0 |

Size figures assume no GPU off-loading. Off-loading lowers RAM usage and uses VRAM instead.
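
If you prefer to download a single quantisation manually instead of letting llama.cpp fetch it on first run, huggingface-cli can pull one file from this repository. A minimal sketch, assuming the files follow the table's naming with a .gguf extension (check the repository's Files tab for the exact filenames):

```bash
# Download only the Q5_K_M quant into ./models
# (filename assumed from the table above; verify it in the repo file listing)
huggingface-cli download t-tech/T-lite-it-2.1-GGUF \
  T-lite-it-2.1-q5_k_m.gguf --local-dir ./models
```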

Quickstart

llama.cpp

Check out our llama.cpp documentation for a more detailed usage guide.

We advise you to clone llama.cpp and install it following the official guide. We track the latest version of llama.cpp. In the demonstration below, we assume you are running commands from inside the llama.cpp repository.

./llama-cli -hf t-tech/T-lite-it-2.1-GGUF:Q8_0 --jinja --color -ngl 99 -fa -sm row --temp 0.6 --presence-penalty 1.0 -c 40960 -n 32768 --no-context-shift
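
If you want an HTTP endpoint instead of the interactive CLI, llama-server from the same repository can serve the GGUF file through an OpenAI-compatible API. A minimal sketch, with the port and sampling values chosen for illustration:

```bash
# Start an OpenAI-compatible server (flags mirror the llama-cli example above)
./llama-server -hf t-tech/T-lite-it-2.1-GGUF:Q8_0 --jinja -ngl 99 -fa \
  -c 40960 --temp 0.6 --presence-penalty 1.0 --port 8080

# Query it with a standard chat-completions request
curl http://localhost:8080/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
        "messages": [
          {"role": "user", "content": "Briefly introduce yourself in Russian."}
        ]
      }'
```

With --jinja enabled, the server applies the model's chat template, so tool-calling requests (a "tools" array in the request body) can be passed through the same endpoint.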

ollama

Check out our ollama documentation for a more detailed usage guide.

You can run T-lite-it-2.1 with one command:

ollama run t-tech/T-lite-it-2.1:q8_0
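
Once the model is pulled, it can also be queried through ollama's local REST API (default port 11434). A minimal sketch:

```bash
# Non-streaming chat request against the local ollama server
curl http://localhost:11434/api/chat -d '{
  "model": "t-tech/T-lite-it-2.1:q8_0",
  "messages": [
    {"role": "user", "content": "Briefly introduce yourself."}
  ],
  "stream": false
}'
```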

See also t-tech ollama homepage.
