2025-11-27 17:10:12,688 - INFO - === LoRA training for the IAM service ===
2025-11-27 17:10:12,689 - INFO - Train dataset: /workspace/data/dataset_sft_IAM_train.jsonl
2025-11-27 17:10:12,689 - INFO - Val dataset : /workspace/data/dataset_sft_IAM_val.jsonl
2025-11-27 17:10:12,689 - INFO - Output dir : /workspace/out/starcoder2_7b_lora_iam
2025-11-27 17:10:12,689 - INFO - Loading tokenizer...
2025-11-27 17:10:12,998 - INFO - Loading datasets and applying formatting...
2025-11-27 17:10:14,433 - INFO - bitsandbytes NOT available. Loading model in bfloat16 without quantization...
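The fallback announced above can be sketched as a small availability check; the helper name `pick_model_kwargs` is hypothetical, and the quantized branch's settings are illustrative, not values recorded in this log.

```python
import importlib.util

def pick_model_kwargs():
    """Decide how to load the base model: a quantized path if
    bitsandbytes is importable, otherwise plain bfloat16 (the
    path this run took, per the log line above)."""
    if importlib.util.find_spec("bitsandbytes") is not None:
        # Hypothetical quantized branch; the exact config is not in the log.
        return {"quantize_4bit": True}
    return {"torch_dtype": "bfloat16", "quantize_4bit": False}

kwargs = pick_model_kwargs()
```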
2025-11-27 17:10:18,492 - INFO - Configuring LoRA...
2025-11-27 17:10:18,492 - INFO - Configuring SFT training...
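The two configuration steps above likely correspond to a PEFT `LoraConfig` plus a TRL `SFTConfig`, roughly as sketched below. All hyperparameters (`r`, `lora_alpha`, `target_modules`, batch size, learning rate) are illustrative assumptions; only `num_train_epochs=3` and the output directory match what the log records.

```python
from peft import LoraConfig
from trl import SFTConfig

# Hypothetical LoRA setup; rank, alpha, and target modules are assumptions.
lora_cfg = LoraConfig(
    r=16,
    lora_alpha=32,
    lora_dropout=0.05,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    task_type="CAUSAL_LM",
)

# Hypothetical SFT settings; only epochs and output_dir come from the log.
sft_cfg = SFTConfig(
    output_dir="/workspace/out/starcoder2_7b_lora_iam",
    num_train_epochs=3,
    per_device_train_batch_size=2,
    gradient_accumulation_steps=4,
    learning_rate=2e-4,
    bf16=True,
    logging_steps=25,
)
```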
2025-11-27 17:10:35,535 - INFO - Starting training...
2025-11-27 19:57:57,600 - INFO - Training finished.
2025-11-27 19:57:57,600 - INFO - Total duration (s): 10042.06
2025-11-27 19:57:57,600 - INFO - Epochs trained : 3.0
2025-11-27 19:57:57,601 - INFO - Global steps : 2025
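As a sanity check, the summary numbers above are internally consistent: 10042 seconds matches the wall-clock gap between the start and finish timestamps, and 2025 global steps over 3.0 epochs works out to 675 optimizer steps per epoch.

```python
# Values taken directly from the log summary.
total_s = 10042.06
h, rem = divmod(int(total_s), 3600)
m, s = divmod(rem, 60)            # -> 2h 47m 22s (17:10:35 to 19:57:57)
steps_per_epoch = 2025 / 3.0      # -> 675.0
```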
2025-11-27 19:57:57,601 - INFO - Evaluating on the validation set...
2025-11-27 19:59:43,777 - INFO - Evaluation metrics: {'eval_loss': 0.06809257715940475, 'eval_runtime': 106.1734, 'eval_samples_per_second': 5.651, 'eval_steps_per_second': 0.706, 'eval_entropy': 0.06766340777277946, 'eval_num_tokens': 16535475.0, 'eval_mean_token_accuracy': 0.9819109582901001, 'epoch': 3.0}
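Two derived quantities help read the metrics dict above: perplexity is exp(eval_loss), and runtime times samples/sec recovers the validation set size. Both follow directly from the logged values.

```python
import math

# Values copied from the evaluation metrics line.
metrics = {
    "eval_loss": 0.06809257715940475,
    "eval_runtime": 106.1734,
    "eval_samples_per_second": 5.651,
}

perplexity = math.exp(metrics["eval_loss"])  # ~1.07: the model is near-certain on most tokens
n_val_samples = metrics["eval_runtime"] * metrics["eval_samples_per_second"]  # ~600 samples
```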
2025-11-27 19:59:43,778 - INFO - Metrics saved to: /workspace/out/starcoder2_7b_lora_iam/training_summary_iam.json
2025-11-27 19:59:43,778 - INFO - Saving LoRA model and tokenizer...
2025-11-27 19:59:44,174 - INFO - Save complete.