---
license: apache-2.0
language:
- en
library_name: peft
base_model: TinyLlama/TinyLlama-1.1B-Chat-v1.0
tags:
- tinyllama
- llama
- qlora
- fine-tuning
- motorcycle-repair
- question-answering
pipeline_tag: text-generation
datasets:
- cahlen/cdg-motorcycle-repair-qa-data-85x10
model-index:
- name: cahlen/tinyllama-motorcycle-repair-qa-adapter
  results: []
---

# LoRA Adapter for TinyLlama-1.1B-Chat specialized on Motorcycle Repair QA

This repository contains LoRA adapter weights fine-tuned from the base model `TinyLlama/TinyLlama-1.1B-Chat-v1.0`. The goal was to enhance the model's knowledge and question-answering capabilities specifically within the domain of motorcycle repair and maintenance, while leveraging the efficiency of the compact TinyLlama architecture.

This adapter was trained using QLoRA for memory efficiency.

## Model Description

*   **Base Model:** [TinyLlama/TinyLlama-1.1B-Chat-v1.0](https://huggingface.co/TinyLlama/TinyLlama-1.1B-Chat-v1.0)
*   **Adapter Task:** Question Answering / Instruction Following on Motorcycle Repair topics.
*   **Fine-tuning Method:** QLoRA (4-bit quantization) via `trl`'s `SFTTrainer`.
*   **Dataset:** [cahlen/cdg-motorcycle-repair-qa-data-85x10](https://huggingface.co/datasets/cahlen/cdg-motorcycle-repair-qa-data-85x10) (880 synthetically generated QA pairs).

## Key Features

*   **Domain Specialization:** Improved performance on questions related to motorcycle repair compared to the base model.
*   **Efficiency:** Builds upon the small and efficient TinyLlama (1.1B parameters). The adapter itself is only ~580MB.
*   **QLoRA Trained:** Enables loading the base model in 4-bit precision for reduced memory footprint during inference.

## How to Use

You need to load the base model (`TinyLlama/TinyLlama-1.1B-Chat-v1.0`) and then apply this LoRA adapter on top. Ensure you have `transformers`, `peft`, `accelerate`, and `bitsandbytes` installed.
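For example:

```bash
pip install -U transformers peft accelerate bitsandbytes
```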

```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM, BitsAndBytesConfig, pipeline
from peft import PeftModel

# --- Configuration ---
base_model_id = "TinyLlama/TinyLlama-1.1B-Chat-v1.0"
adapter_id = "cahlen/tinyllama-motorcycle-repair-qa-adapter" # This is the adapter you are using
device_map = "auto"

# --- Load Tokenizer ---
tokenizer = AutoTokenizer.from_pretrained(base_model_id, trust_remote_code=True)
if tokenizer.pad_token is None:
    tokenizer.pad_token = tokenizer.eos_token
tokenizer.padding_side = "left"

# --- Configure Quantization ---
use_4bit = True # Set to False if not using 4-bit
compute_dtype = torch.float16 # Default compute dtype
quantization_config = None

if use_4bit and torch.cuda.is_available():
    print("Using 4-bit quantization")
    bnb_config = BitsAndBytesConfig(
        load_in_4bit=True,
        bnb_4bit_quant_type="nf4",
        bnb_4bit_compute_dtype=compute_dtype,
        bnb_4bit_use_double_quant=False,
    )
    quantization_config = bnb_config
else:
    print("Not using 4-bit quantization or CUDA not available")
    compute_dtype = torch.float32 # Use default float32 on CPU

# --- Load Base Model ---
print(f"Loading base model: {base_model_id}")
base_model = AutoModelForCausalLM.from_pretrained(
    base_model_id,
    quantization_config=quantization_config,
    device_map=device_map,
    trust_remote_code=True,
    torch_dtype=compute_dtype # Set appropriate dtype
)
base_model.config.use_cache = True

# --- Load LoRA Adapter ---
print(f"Loading LoRA adapter: {adapter_id}")
model = PeftModel.from_pretrained(base_model, adapter_id)
model.eval()
print("Adapter loaded successfully.")

# --- Prepare Prompt ---
# Example prompt
topic = "Brake System"
question = "What are the signs of worn brake pads?"
system_prompt = "You are a helpful assistant knowledgeable about motorcycle repair."

user_query = f"Topic: {topic}\nQuestion: {question}"
messages = [
    {"role": "system", "content": system_prompt},
    {"role": "user", "content": user_query},
]
formatted_prompt = tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True
)
print(f"--- Prompt ---\n{formatted_prompt}")

# --- Generate Response ---
print("Generating...")
pipe = pipeline(
    task="text-generation",
    model=model,
    tokenizer=tokenizer,
    max_new_tokens=100,
    do_sample=True,
    temperature=0.7,
    top_p=0.9,
    pad_token_id=tokenizer.eos_token_id,
    eos_token_id=tokenizer.eos_token_id
)
result = pipe(formatted_prompt)

# --- Print Response ---
print("\n--- Output ---")
print(result[0]['generated_text'])

# Extract only the assistant's response
assistant_response = result[0]['generated_text'][len(formatted_prompt):].strip()
print("\n--- Assistant Only ---")
print(assistant_response)

```
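
Optionally, the adapter can be merged into an unquantized copy of the base model to produce a standalone checkpoint. A minimal sketch, assuming you have enough memory for the fp16 base weights (the output path is illustrative):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel

# Merging requires unquantized base weights, so load in fp16 rather than 4-bit
base = AutoModelForCausalLM.from_pretrained(
    "TinyLlama/TinyLlama-1.1B-Chat-v1.0",
    torch_dtype=torch.float16,
    device_map="auto",
)

# Apply the adapter, then fold its weights into the base model
merged = PeftModel.from_pretrained(
    base, "cahlen/tinyllama-motorcycle-repair-qa-adapter"
).merge_and_unload()

# Save a self-contained model plus tokenizer (path is illustrative)
merged.save_pretrained("tinyllama-motorcycle-merged")
AutoTokenizer.from_pretrained(
    "TinyLlama/TinyLlama-1.1B-Chat-v1.0"
).save_pretrained("tinyllama-motorcycle-merged")
```

A merged checkpoint loads with plain `AutoModelForCausalLM.from_pretrained` and no longer needs `peft` at inference time.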

## Training Details

*   **Base Model:** `TinyLlama/TinyLlama-1.1B-Chat-v1.0`
*   **Dataset:** `cahlen/cdg-motorcycle-repair-qa-data-85x10` (880 examples)
*   **Training Method:** QLoRA using `trl.SFTTrainer` (a configuration sketch follows this list).
*   **QLoRA Config:** 4-bit NF4 quantization, `float16` compute dtype.
*   **LoRA Config:** `r=64`, `lora_alpha=16`, `lora_dropout=0.1`, target modules: `["q_proj", "k_proj", "v_proj", "o_proj", "gate_proj", "up_proj", "down_proj"]`.
*   **Training Arguments:**
    *   Epochs: 1
    *   Batch Size (per device): 4
    *   Gradient Accumulation: 2 (Effective Batch Size: 8)
    *   Optimizer: PagedAdamW (32-bit)
    *   Learning Rate: 2e-4
    *   LR Scheduler: Cosine
    *   Sequence Length: 512
    *   Precision: Mixed (FP16)
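
The setup above can be reconstructed roughly as follows. This is a sketch rather than the exact training script: the `output_dir`, the `dataset_text_field` name, and omitted details (warmup, logging, checkpointing) are assumptions, and newer `trl` releases move several of these arguments into `SFTConfig`.

```python
import torch
from datasets import load_dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          BitsAndBytesConfig, TrainingArguments)
from peft import LoraConfig
from trl import SFTTrainer

base_model_id = "TinyLlama/TinyLlama-1.1B-Chat-v1.0"

# 4-bit NF4 quantization with float16 compute, as listed above
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.float16,
)

model = AutoModelForCausalLM.from_pretrained(
    base_model_id, quantization_config=bnb_config, device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained(base_model_id)
tokenizer.pad_token = tokenizer.eos_token

# LoRA configuration matching the values listed above
peft_config = LoraConfig(
    r=64,
    lora_alpha=16,
    lora_dropout=0.1,
    bias="none",
    task_type="CAUSAL_LM",
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj"],
)

dataset = load_dataset("cahlen/cdg-motorcycle-repair-qa-data-85x10", split="train")

training_args = TrainingArguments(
    output_dir="./tinyllama-motorcycle-qa",  # illustrative path
    num_train_epochs=1,
    per_device_train_batch_size=4,
    gradient_accumulation_steps=2,  # effective batch size 8
    optim="paged_adamw_32bit",
    learning_rate=2e-4,
    lr_scheduler_type="cosine",
    fp16=True,
)

trainer = SFTTrainer(
    model=model,
    args=training_args,
    train_dataset=dataset,
    peft_config=peft_config,
    tokenizer=tokenizer,
    max_seq_length=512,
    dataset_text_field="text",  # assumed column name; check the dataset schema
)
trainer.train()
trainer.model.save_pretrained("./tinyllama-motorcycle-qa-adapter")  # illustrative
```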

## Example Usage & Responses

Here are some examples of prompts and the responses generated by this fine-tuned adapter:

**Example 1: Tire Pressure**
```
<|system|>
You are a helpful assistant knowledgeable about motorcycle repair.</s>
<|user|>
Topic: Tire Maintenance
Question: What is the first step in checking tire pressure?</s>
<|assistant|>
The first step is to check the tire pressure, then check the tire pressure gauge, and finally check the tire tread depth.
```

**Example 2: Oil Change Frequency**
```
<|system|>
You are a helpful assistant knowledgeable about motorcycle repair.</s>
<|user|>
Topic: Oil Maintenance
Question: How often should I change my motorcycle oil?</s>
<|assistant|>
It is recommended to change your motorcycle oil every 5,000 to 10,000 miles, or as recommended by the manufacturer.
```

**Example 3: Worn Brake Pads**
```
<|system|>
You are a helpful assistant knowledgeable about motorcycle repair.</s>
<|user|>
Topic: Brake System
Question: What are the signs of worn brake pads?</s>
<|assistant|>
Worn brake pads can be felt in the brake pedal, resulting in a rough or jerky braking action, or a noticeable decrease in braking performance.
```

**Example 4: General Knowledge Retention**
```
<|system|>
You are a helpful assistant knowledgeable about motorcycle repair.</s>
<|user|>
Topic: Geography
Question: What is the capital of France?</s>
<|assistant|>
The capital of France is Paris.
```

## Limitations and Bias

*   This model inherits limitations and potential biases from the base `TinyLlama` model and the synthetically generated dataset.
*   The motorcycle repair knowledge, while enhanced, is based on AI-generated data and has **not** been verified by human experts. **Do not rely solely on this model for critical repair decisions.** Always consult official service manuals and qualified mechanics.
*   Performance on topics outside of motorcycle repair may be degraded compared to the base model.

## Citation

If you use this adapter, please cite the base model and consider citing this repository:

```bibtex
@misc{cahlen_tinyllama_motorcycle_repair_qa_adapter,
  author = {Cahlen},
  title = {LoRA Adapter for TinyLlama-1.1B-Chat specialized on Motorcycle Repair QA},
  year = {2024},
  publisher = {Hugging Face},
  journal = {Hugging Face Hub},
  howpublished = {\url{https://huggingface.co/cahlen/tinyllama-motorcycle-repair-qa-adapter}}
}

@misc{zhang2024tinyllama,
      title={TinyLlama: An Open-Source Small Language Model},
      author={Peiyuan Zhang and Guangtao Zeng and Tianduo Wang and Wei Lu},
      year={2024},
      eprint={2401.02385},
      archivePrefix={arXiv},
      primaryClass={cs.CL}
}
```