MedQuAD LoRA r=4

Configuration

  • Base: mistralai/Mistral-7B-Instruct-v0.3
  • LoRA r: 4
  • Modules: q_proj, k_proj, v_proj
  • Quantization: 4-bit NF4
  • Early Stopping: patience=3
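The configuration above can be sketched with `peft` and `transformers` objects. This is a reconstruction from the card, not the exported training script: `lora_alpha`, dropout, and the compute dtype are not stated on the card and are assumptions.

```python
import torch
from peft import LoraConfig
from transformers import BitsAndBytesConfig

# 4-bit NF4 quantization for the base model (compute dtype is an assumption)
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)

# LoRA adapter as described on the card; alpha/dropout are assumed values
lora_config = LoraConfig(
    r=4,
    lora_alpha=8,
    lora_dropout=0.05,
    target_modules=["q_proj", "k_proj", "v_proj"],
    task_type="CAUSAL_LM",
)
```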

Training

Training logs (recorded manually; epoch values are estimates):

| Step | Epoch | Training Loss | Validation Loss |
|------|-------|---------------|-----------------|
| 100  | 0.046 | 0.820900 | 0.792622 |
| 200  | 0.093 | 0.770500 | 0.764106 |
| 300  | 0.139 | 0.762600 | 0.754589 |
| 400  | 0.186 | 0.733300 | 0.741709 |
| 500  | 0.232 | 0.734900 | 0.735551 |
| 600  | 0.279 | 0.741500 | 0.731295 |
| 700  | 0.325 | 0.722700 | 0.710327 |
| 800  | 0.371 | 0.735200 | 0.703414 |
| 900  | 0.418 | 0.721500 | 0.693650 |
| 1000 | 0.464 | 0.697900 | 0.690272 |
| 1100 | 0.511 | 0.689100 | 0.684814 |
| 1200 | 0.557 | 0.662200 | 0.674680 |
| 1300 | 0.604 | 0.664400 | 0.677307 |
| 1400 | 0.650 | 0.663100 | 0.669781 |
| 1500 | 0.696 | 0.616000 | 0.665949 |
| 1600 | 0.743 | 0.622500 | 0.664927 |
| 1700 | 0.789 | 0.622200 | 0.658744 |
| 1800 | 0.836 | 0.630300 | 0.654155 |
| 1900 | 0.882 | 0.628300 | 0.656066 |
| 2000 | 0.929 | 0.612600 | 0.653236 |
| 2100 | 0.975 | 0.619600 | 0.647662 |
| 2200 | 1.021 | 0.605400 | 0.649643 |
| 2300 | 1.068 | 0.603700 | 0.646184 |
| 2400 | 1.114 | 0.600100 | 0.643537 |
| 2500 | 1.161 | 0.565200 | 0.642405 |
| 2600 | 1.207 | 0.594800 | 0.636302 |
| 2700 | 1.253 | 0.587300 | 0.630301 |
| 2800 | 1.300 | 0.598400 | 0.628895 |
| 2900 | 1.346 | 0.561300 | 0.630126 |
| 3000 | 1.393 | 0.538800 | 0.633145 |
| 3100 | 1.439 | 0.537100 | 0.632617 |
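The 100-step evaluation cadence and `patience=3` early stopping could be wired up as below. This is a sketch, not the actual training script: every argument except the patience value and the eval/save interval is an assumption.

```python
from transformers import EarlyStoppingCallback, TrainingArguments

# Evaluate and checkpoint every 100 steps (matching the log cadence above),
# keep the best checkpoint by validation loss, and stop after 3 evaluations
# without improvement. Remaining hyperparameters are assumed defaults.
training_args = TrainingArguments(
    output_dir="medquad-lora-r4",
    eval_strategy="steps",
    eval_steps=100,
    save_strategy="steps",
    save_steps=100,
    load_best_model_at_end=True,
    metric_for_best_model="eval_loss",
    greater_is_better=False,
)
callbacks = [EarlyStoppingCallback(early_stopping_patience=3)]
```

The callbacks list would then be passed to the `Trainer` via its `callbacks` argument.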

Usage

```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

# Load the base model in 4-bit NF4, then attach the LoRA adapter
bnb = BitsAndBytesConfig(load_in_4bit=True, bnb_4bit_quant_type="nf4")
base = AutoModelForCausalLM.from_pretrained("mistralai/Mistral-7B-Instruct-v0.3", quantization_config=bnb)
model = PeftModel.from_pretrained(base, "CHF0101/medquad-lora-r4")
tokenizer = AutoTokenizer.from_pretrained("mistralai/Mistral-7B-Instruct-v0.3")
```