# distilbert-base-uncased-text-classification-v8
This model is a fine-tuned version of [distilbert/distilbert-base-uncased](https://huggingface.co/distilbert/distilbert-base-uncased) on an unspecified dataset. It achieves the following results on the evaluation set:
- Loss: 0.1626
- F1: 0.8539
## Model description
More information needed
## Intended uses & limitations
More information needed
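No usage notes are provided, but the checkpoint can be loaded with the standard `transformers` pipeline. The sketch below is a minimal, assumption-laden example: the task's label names and input domain are undocumented, so the placeholder input and any interpretation of the output labels are hypothetical.

```python
from transformers import pipeline

# Hypothetical usage sketch: the task, label names, and input domain
# of this checkpoint are not documented in the card.
classifier = pipeline(
    "text-classification",
    model="siriuswapnil/distilbert-base-uncased-text-classification-v8",
)

# Placeholder input; replace with text from the (undocumented) target domain.
print(classifier("Replace this with an input sentence."))
# -> [{"label": ..., "score": ...}] where the label names come from the
#    id2label mapping stored in the checkpoint's config.
```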
## Training and evaluation data
More information needed
## Training procedure

### Training hyperparameters
The following hyperparameters were used during training (a hedged `TrainingArguments` sketch follows the list):
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 20
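
For reference, the values above map onto `transformers.TrainingArguments` as in the sketch below. This assumes the standard `Trainer` API was used, which the card does not confirm; the `output_dir` name is a placeholder, and `evaluation_strategy="epoch"` is inferred from the per-epoch rows in the results table that follows.

```python
from transformers import TrainingArguments

# Sketch reconstructing the reported hyperparameters; assumes the standard
# Trainer API was used. output_dir and evaluation_strategy are assumptions.
args = TrainingArguments(
    output_dir="distilbert-base-uncased-text-classification-v8",  # placeholder
    learning_rate=2e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    seed=42,
    num_train_epochs=20,
    lr_scheduler_type="linear",
    adam_beta1=0.9,     # Adam betas as reported
    adam_beta2=0.999,
    adam_epsilon=1e-8,  # epsilon as reported
    evaluation_strategy="epoch",  # inferred: one eval logged per epoch
)
# These arguments would be passed to transformers.Trainer together with the
# (undocumented) tokenized train/eval datasets and a compute_metrics function.
```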
### Training results
| Training Loss | Epoch | Step  | Validation Loss | F1     |
|:-------------:|:-----:|:-----:|:---------------:|:------:|
| 0.435         | 1.0   | 1154  | 0.3097          | 0.6342 |
| 0.2289        | 2.0   | 2308  | 0.1947          | 0.7906 |
| 0.1372        | 3.0   | 3462  | 0.1626          | 0.8539 |
| 0.0867        | 4.0   | 4616  | 0.1693          | 0.8722 |
| 0.0546        | 5.0   | 5770  | 0.1664          | 0.8796 |
| 0.0521        | 6.0   | 6924  | 0.1658          | 0.8882 |
| 0.0447        | 7.0   | 8078  | 0.1651          | 0.8945 |
| 0.0361        | 8.0   | 9232  | 0.1752          | 0.8922 |
| 0.0315        | 9.0   | 10386 | 0.1712          | 0.8891 |
| 0.0289        | 10.0  | 11540 | 0.1763          | 0.8914 |
| 0.0302        | 11.0  | 12694 | 0.1811          | 0.8937 |
| 0.0269        | 12.0  | 13848 | 0.1829          | 0.8915 |
| 0.0334        | 13.0  | 15002 | 0.1867          | 0.8878 |
| 0.0254        | 14.0  | 16156 | 0.1865          | 0.8869 |
| 0.026         | 15.0  | 17310 | 0.1856          | 0.8936 |
| 0.0226        | 16.0  | 18464 | 0.1852          | 0.8933 |
| 0.0226        | 17.0  | 19618 | 0.1843          | 0.8967 |
| 0.0202        | 18.0  | 20772 | 0.1832          | 0.8984 |
| 0.0177        | 19.0  | 21926 | 0.1856          | 0.8975 |
| 0.0186        | 20.0  | 23080 | 0.1854          | 0.8962 |
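
The F1 column was presumably produced by a `compute_metrics` callback, but the card does not say how F1 was averaged. A minimal sketch using scikit-learn, with `average="binary"` as an explicit assumption:

```python
import numpy as np
from sklearn.metrics import f1_score

def compute_metrics(eval_pred):
    # Hypothetical callback: the original metrics code is not published.
    logits, labels = eval_pred
    preds = np.argmax(logits, axis=-1)
    # average="binary" is an assumption; macro/micro/weighted are equally possible.
    return {"f1": f1_score(labels, preds, average="binary")}
```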
### Framework versions
- Transformers 4.38.0
- Pytorch 2.3.1+cu121
- Datasets 2.21.0
- Tokenizers 0.15.2