# Gemma 3 Interview LoRA - 1B Instruct
This model is a QLoRA fine-tune of Gemma-3-1B-IT, trained on a curated dataset of 5,002 interview-style Q&A samples spanning three domains:
- Artificial Intelligence (AI)
- General Programming
- Web Development
The goal is to specialize Gemma 3 as a technical interview assistant capable of:
- Generating domain-specific interview questions
- Providing accurate, structured, exam-style answers
- Explaining concepts clearly and concisely
- Maintaining a professional and consistent interview tone
## Dataset
Each of the 5,002 training samples contains the following fields:
| Field | Description |
|---|---|
| domain | AI, General Programming, Web Development |
| question | Interview question from that domain |
| answer | Ground-truth, explanation-style answer |
Each training row was converted into an instruction/response pair (a sketch of this conversion follows the list):
- Instruction: `Answer this <domain> interview question: <question>`
- Response: `<answer>`
## Usage Example
Python
from transformers import AutoTokenizer, AutoModelForCausalLM
import torch
model_id = "Shlok307/ai_interview-lora"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
model_id,
device_map="auto",
torch_dtype=torch.float16
)
prompt = [
{"role": "user", "content": "Answer this AI interview question: What is backpropagation?"}
]
input_ids = tokenizer.apply_chat_template(
prompt,
tokenize=True,
add_generation_prompt=True,
return_tensors="pt"
).to(model.device)
output = model.generate(
input_ids,
max_new_tokens=200,
do_sample=True,
temperature=0.7
)
print(tokenizer.decode(output[0], skip_special_tokens=True))
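If the repository hosts only LoRA adapter weights rather than a merged checkpoint, you can attach them to the base model with `peft` instead. This is a sketch under that assumption; the base model id `google/gemma-3-1b-it` is inferred from the model name and not confirmed by the card.

```python
from transformers import AutoTokenizer, AutoModelForCausalLM
from peft import PeftModel
import torch

base_id = "google/gemma-3-1b-it"           # assumed base checkpoint
adapter_id = "Shlok307/ai_interview-lora"  # this repository

tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(
    base_id,
    device_map="auto",
    torch_dtype=torch.float16,
)

# Attach the LoRA adapter on top of the frozen base weights.
model = PeftModel.from_pretrained(base, adapter_id)

# Optionally fold the adapter into the base weights for faster inference:
# model = model.merge_and_unload()
```

For memory-constrained inference you can also pass `quantization_config=BitsAndBytesConfig(load_in_4bit=True)` to `from_pretrained`, mirroring the 4-bit setup typical of QLoRA training.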
## Citation
```bibtex
@misc{gemma3_interview_lora,
  title={Gemma 3 Interview LoRA - 1B IT},
  author={Shlok Talhar},
  year={2025},
  url={https://huggingface.co/Shlok307/gemma3-interview-lora}
}
```