A minimal fine-tune of Llama 3.2 1B on the Ghana QA JSON dataset, trained for roughly 0.3 of an epoch.
This is an early test of the model's potential for Ghanaian-context, multilingual chatbot applications.

Note: Due to limited training, responses can be incoherent or off-topic. The model tends to perform better with local languages mixed with English than with pure English inputs.

Supported input languages: English, Twi, Ewe, and Ga, including code-mixed input that combines a local language with English.


Quickstart

from unsloth import FastLanguageModel  # import unsloth before transformers so its patches apply
from transformers import TextStreamer
import torch

model_id = "michsethowusu/opani-chat_1b-merged-16bit"

# FastLanguageModel.from_pretrained returns both model and tokenizer,
# so a separate AutoTokenizer load is not needed.
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name=model_id,
    max_seq_length=2048,
    dtype=torch.float16,  # weights are stored in BF16; torch.bfloat16 also works on supported GPUs
    load_in_4bit=False,
)
FastLanguageModel.for_inference(model)  # enable unsloth's fast inference path

# Twi prompt, roughly: "Where can I pay my utility bills?"
messages = [{"role": "user", "content": "Hefa na metumi atua me utility bills?"}]
text = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)

# Stream the reply token by token, skipping the echoed prompt.
streamer = TextStreamer(tokenizer, skip_prompt=True)
_ = model.generate(
    **tokenizer(text, return_tensors="pt").to(model.device),
    max_new_tokens=512,
    do_sample=True,  # enable sampling so temperature/top_p take effect
    temperature=0.7,
    top_p=0.9,
    streamer=streamer,
)

Fine-tuning notebook: View on Colab
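
The linked notebook is the reference for the actual training run. For orientation only, a comparable Unsloth LoRA setup might look like the sketch below; the dataset path, column name, LoRA ranks, and hyperparameters are illustrative assumptions, not the recipe used for this model.

from unsloth import FastLanguageModel
from trl import SFTTrainer
from transformers import TrainingArguments
from datasets import load_dataset

# Base model plus LoRA adapters (ranks and target modules are assumptions).
model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="unsloth/Llama-3.2-1B-Instruct",
    max_seq_length=2048,
    load_in_4bit=True,
)
model = FastLanguageModel.get_peft_model(
    model,
    r=16,
    lora_alpha=16,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj"],
)

# Hypothetical local path; the actual Ghana QA JSON dataset is not published here.
dataset = load_dataset("json", data_files="ghana_qa.json", split="train")

trainer = SFTTrainer(  # argument names vary slightly across trl versions
    model=model,
    tokenizer=tokenizer,
    train_dataset=dataset,
    dataset_text_field="text",  # assumes prompts pre-formatted into a "text" column
    max_seq_length=2048,
    args=TrainingArguments(
        per_device_train_batch_size=2,
        gradient_accumulation_steps=4,
        max_steps=100,  # stop after a fraction of an epoch, as described above
        learning_rate=2e-4,
        output_dir="outputs",
    ),
)
trainer.train()

# Export merged 16-bit weights, matching the "merged-16bit" naming of this repo.
model.save_pretrained_merged("opani-chat_1b-merged-16bit", tokenizer,
                             save_method="merged_16bit")
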
Contact: Ghana NLP – natural.language.processing.gh@gmail.com

Model details: Safetensors format, 1B parameters, BF16 tensors.