---
base_model: microsoft/Phi-3-mini-4k-instruct
library_name: peft
license: mit
tags:
- trl
- sft
- generated_from_trainer
model-index:
- name: phi-3-mini-QLoRA
  results: []
---

# phi-3-mini-QLoRA

This model is a fine-tuned version of [microsoft/Phi-3-mini-4k-instruct](https://huggingface.co/microsoft/Phi-3-mini-4k-instruct) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4826

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 8
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 4

### Training results

| Training Loss | Epoch  | Step  | Validation Loss |
|:-------------:|:------:|:-----:|:---------------:|
| 0.8441        | 0.2930 | 1000  | 0.6059          |
| 0.5806        | 0.5859 | 2000  | 0.5601          |
| 0.5509        | 0.8789 | 3000  | 0.5371          |
| 0.5293        | 1.1718 | 4000  | 0.5231          |
| 0.5187        | 1.4648 | 5000  | 0.5121          |
| 0.5066        | 1.7577 | 6000  | 0.5041          |
| 0.501         | 2.0507 | 7000  | 0.4988          |
| 0.4904        | 2.3436 | 8000  | 0.4938          |
| 0.4889        | 2.6366 | 9000  | 0.4903          |
| 0.4871        | 2.9295 | 10000 | 0.4871          |
| 0.4823        | 3.2225 | 11000 | 0.4852          |
| 0.4759        | 3.5155 | 12000 | 0.4837          |
| 0.4756        | 3.8084 | 13000 | 0.4826          |

### Framework versions

- PEFT 0.12.0
- Transformers 4.44.2
- Pytorch 2.4.1+cu121
- Datasets 2.21.0
- Tokenizers 0.19.1
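
## How to use

The model name and `library_name: peft` indicate this is a QLoRA adapter rather than a full checkpoint, so inference means loading the quantized base model and attaching the adapter on top. Below is a minimal sketch under those assumptions; the adapter repo id (`your-username/phi-3-mini-QLoRA`) is a placeholder to replace with the actual Hub path, and the NF4/bfloat16 quantization settings are typical QLoRA defaults, not values confirmed by this card.

```python
# Minimal inference sketch: quantized base model + LoRA adapter.
# Assumes bitsandbytes is installed and a CUDA device is available.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from peft import PeftModel

base_model_id = "microsoft/Phi-3-mini-4k-instruct"
adapter_id = "your-username/phi-3-mini-QLoRA"  # placeholder: replace with the real repo id

# 4-bit quantization config in the usual QLoRA style (NF4, assumed defaults).
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)

tokenizer = AutoTokenizer.from_pretrained(base_model_id)
base_model = AutoModelForCausalLM.from_pretrained(
    base_model_id,
    quantization_config=bnb_config,
    device_map="auto",
)

# Attach the fine-tuned LoRA adapter weights on top of the frozen base model.
model = PeftModel.from_pretrained(base_model, adapter_id)
model.eval()

# Use the base model's chat template to format the prompt.
messages = [{"role": "user", "content": "Explain QLoRA in one sentence."}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

with torch.no_grad():
    output_ids = model.generate(input_ids, max_new_tokens=64)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```

If quantized loading is not needed at inference time, the adapter can instead be folded into the base weights with `model.merge_and_unload()`, which returns a plain `transformers` model with no PEFT dependency at serving time.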