---
license: cc-by-4.0
library_name: peft
tags:
- generated_from_trainer
metrics:
- accuracy
base_model: deepset/roberta-base-squad2
model-index:
- name: STS-Lora-Fine-Tuning-Capstone-roberta-base-deepset-filtered-120-with-higher-r-mid
  results: []
---

# STS-Lora-Fine-Tuning-Capstone-roberta-base-deepset-filtered-120-with-higher-r-mid

This model is a LoRA fine-tuned version of [deepset/roberta-base-squad2](https://huggingface.co/deepset/roberta-base-squad2) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7040
- Accuracy: 0.6854

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 30

A hedged sketch mapping these settings to code is given under "Training configuration sketch" below.

### Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log        | 1.0   | 57   | 1.0545          | 0.4551   |
| No log        | 2.0   | 114  | 1.0270          | 0.5      |
| No log        | 3.0   | 171  | 0.9823          | 0.5262   |
| No log        | 4.0   | 228  | 0.9416          | 0.5356   |
| No log        | 5.0   | 285  | 0.8917          | 0.5805   |
| No log        | 6.0   | 342  | 0.7931          | 0.6180   |
| No log        | 7.0   | 399  | 0.7627          | 0.6423   |
| No log        | 8.0   | 456  | 0.7679          | 0.6573   |
| 0.7963        | 9.0   | 513  | 0.7380          | 0.6610   |
| 0.7963        | 10.0  | 570  | 0.7256          | 0.6760   |
| 0.7963        | 11.0  | 627  | 0.7223          | 0.6742   |
| 0.7963        | 12.0  | 684  | 0.7255          | 0.6779   |
| 0.7963        | 13.0  | 741  | 0.7132          | 0.6779   |
| 0.7963        | 14.0  | 798  | 0.7097          | 0.6835   |
| 0.7963        | 15.0  | 855  | 0.7116          | 0.6760   |
| 0.7963        | 16.0  | 912  | 0.7200          | 0.6760   |
| 0.7963        | 17.0  | 969  | 0.7176          | 0.6760   |
| 0.615         | 18.0  | 1026 | 0.7133          | 0.6798   |
| 0.615         | 19.0  | 1083 | 0.7121          | 0.6798   |
| 0.615         | 20.0  | 1140 | 0.7117          | 0.6873   |
| 0.615         | 21.0  | 1197 | 0.7028          | 0.6816   |
| 0.615         | 22.0  | 1254 | 0.7033          | 0.6854   |
| 0.615         | 23.0  | 1311 | 0.7054          | 0.6798   |
| 0.615         | 24.0  | 1368 | 0.7059          | 0.6854   |
| 0.615         | 25.0  | 1425 | 0.6996          | 0.6835   |
| 0.615         | 26.0  | 1482 | 0.7045          | 0.6835   |
| 0.5826        | 27.0  | 1539 | 0.7032          | 0.6854   |
| 0.5826        | 28.0  | 1596 | 0.7030          | 0.6854   |
| 0.5826        | 29.0  | 1653 | 0.7046          | 0.6854   |
| 0.5826        | 30.0  | 1710 | 0.7040          | 0.6854   |

### Framework versions

- PEFT 0.10.0
- Transformers 4.38.2
- PyTorch 2.2.1+cu121
- Datasets 2.18.0
- Tokenizers 0.15.2
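
### Training configuration sketch

The card lists only scalar hyperparameters, so the sketch below shows one plausible way they map onto a `LoraConfig` and `TrainingArguments` for the framework versions listed. It is a minimal sketch, not the authors' script: the LoRA rank/alpha/dropout are assumptions (the model name only hints at a "higher r"), `num_labels=6` is an assumed discretization of STS similarity scores, and the two-example dataset is a dummy stand-in for the unnamed training data.

```python
# Minimal sketch of the training setup implied by the listed hyperparameters.
# ASSUMPTIONS (not stated in the card): LoRA r/alpha/dropout, num_labels=6,
# and the dummy two-example dataset standing in for the real data.
import numpy as np
from datasets import Dataset
from peft import LoraConfig, get_peft_model
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          DataCollatorWithPadding, Trainer, TrainingArguments,
                          set_seed)

set_seed(42)  # seed: 42

base_id = "deepset/roberta-base-squad2"
tokenizer = AutoTokenizer.from_pretrained(base_id)
model = AutoModelForSequenceClassification.from_pretrained(base_id, num_labels=6)

model = get_peft_model(model, LoraConfig(
    task_type="SEQ_CLS",  # also keeps the new classifier head in modules_to_save
    r=32,                 # assumed; the model name only says "higher r"
    lora_alpha=32,        # assumed
    lora_dropout=0.1,     # assumed
))

# Dummy stand-in for the unnamed sentence-pair dataset.
ds = Dataset.from_dict({
    "sentence1": ["A man plays guitar.", "A dog runs in the park."],
    "sentence2": ["Someone plays an instrument.", "A cat sleeps indoors."],
    "label": [4, 0],
}).map(lambda ex: tokenizer(ex["sentence1"], ex["sentence2"], truncation=True),
       batched=True)

args = TrainingArguments(
    output_dir="sts-lora-out",        # hypothetical path
    learning_rate=3e-5,               # learning_rate: 3e-05
    per_device_train_batch_size=64,   # train_batch_size: 64
    per_device_eval_batch_size=64,    # eval_batch_size: 64
    num_train_epochs=30,              # num_epochs: 30
    lr_scheduler_type="linear",       # lr_scheduler_type: linear
    evaluation_strategy="epoch",      # matches the per-epoch results table
    seed=42,
)
# Trainer's default optimizer is AdamW with betas=(0.9, 0.999) and
# epsilon=1e-08, matching the "Adam" settings listed above.

def compute_metrics(eval_pred):
    logits, labels = eval_pred
    return {"accuracy": (np.argmax(logits, axis=-1) == labels).mean()}

trainer = Trainer(model=model, args=args,
                  train_dataset=ds, eval_dataset=ds,
                  data_collator=DataCollatorWithPadding(tokenizer),
                  compute_metrics=compute_metrics)
trainer.train()
```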
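
## How to use

The card does not include a usage example; the following is a minimal loading sketch, not an official snippet. The adapter repo id is a placeholder, `num_labels=6` is the same assumption as above, and it presumes the classification head was saved with the adapter checkpoint (PEFT does this automatically for `SEQ_CLS` adapters via `modules_to_save`).

```python
# Minimal loading sketch. ASSUMPTIONS: the adapter repo id is a placeholder,
# num_labels=6 is a guess at the label discretization, and the classifier
# head is expected to be stored in the adapter checkpoint.
import torch
from peft import PeftModel
from transformers import AutoModelForSequenceClassification, AutoTokenizer

base_id = "deepset/roberta-base-squad2"
adapter_id = "<user>/STS-Lora-Fine-Tuning-Capstone-roberta-base-deepset-filtered-120-with-higher-r-mid"  # placeholder

tokenizer = AutoTokenizer.from_pretrained(base_id)
# The base checkpoint is a QA model; it is re-headed for sequence
# classification here, with head weights coming from the adapter.
base = AutoModelForSequenceClassification.from_pretrained(base_id, num_labels=6)
model = PeftModel.from_pretrained(base, adapter_id)
model.eval()

inputs = tokenizer("A man is playing a guitar.",
                   "A person plays an instrument.",
                   return_tensors="pt")
with torch.no_grad():
    pred = model(**inputs).logits.argmax(dim=-1).item()
print(f"predicted similarity class: {pred}")
```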