
limbxy_seq_t2

This model is a fine-tuned version of c14kevincardenas/beit-large-patch16-384-limb on the c14kevincardenas/beta_caller_284_limbxy_seq_2 dataset. It achieves the following results on the evaluation set:

  • Loss: 0.0135
  • RMSE: 0.1165
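
A minimal loading sketch, assuming the checkpoint can be opened with the standard 🤗 Transformers auto classes. The card does not state which library or head the model uses, so the generic `AutoModel` call below is an assumption and may need `trust_remote_code=True` or a custom class if the checkpoint carries a bespoke limb-coordinate regression head.

```python
# Hedged sketch: generic load of the fine-tuned checkpoint.
# The repo id comes from this card; everything else is an assumption.
from PIL import Image
from transformers import AutoImageProcessor, AutoModel

repo = "c14kevincardenas/limbxy_seq_t2"
processor = AutoImageProcessor.from_pretrained(repo)  # assumes an image processor is bundled
model = AutoModel.from_pretrained(repo)               # may need trust_remote_code=True for a custom head

image = Image.new("RGB", (384, 384))                  # placeholder 384x384 input
inputs = processor(images=image, return_tensors="pt")
outputs = model(**inputs)
print(outputs.last_hidden_state.shape)
```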

Model description

More information needed

Intended uses & limitations

More information needed

Training and evaluation data

More information needed

Training procedure

Training hyperparameters

The following hyperparameters were used during training; an illustrative configuration sketch follows the list:

  • learning_rate: 5e-05
  • train_batch_size: 16
  • eval_batch_size: 16
  • seed: 2014
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • lr_scheduler_warmup_steps: 250
  • num_epochs: 15.0
  • mixed_precision_training: Native AMP
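
The values above map directly onto 🤗 Transformers `TrainingArguments`. The sketch below is illustrative only: it uses the hyperparameters listed on the card, while the output directory and every unlisted setting are assumptions or library defaults.

```python
# Hedged reproduction of the reported hyperparameters.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="limbxy_seq_t2",      # assumed name, not stated on the card
    learning_rate=5e-05,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    seed=2014,
    lr_scheduler_type="linear",
    warmup_steps=250,
    num_train_epochs=15.0,
    fp16=True,                       # "Native AMP" mixed precision
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-08,
)
```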

Training results

| Training Loss | Epoch | Step | Validation Loss | RMSE   |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.0191        | 1.0   | 300  | 0.0182          | 0.1349 |
| 0.0146        | 2.0   | 600  | 0.0158          | 0.1259 |
| 0.0127        | 3.0   | 900  | 0.0135          | 0.1165 |
| 0.0105        | 4.0   | 1200 | 0.0146          | 0.1209 |
| 0.0088        | 5.0   | 1500 | 0.0150          | 0.1224 |
| 0.0090        | 6.0   | 1800 | 0.0161          | 0.1271 |
| 0.0071        | 7.0   | 2100 | 0.0166          | 0.1288 |
| 0.0058        | 8.0   | 2400 | 0.0178          | 0.1336 |
| 0.0046        | 9.0   | 2700 | 0.0186          | 0.1364 |
| 0.0045        | 10.0  | 3000 | 0.0190          | 0.1381 |
| 0.0035        | 11.0  | 3300 | 0.0198          | 0.1408 |
| 0.0032        | 12.0  | 3600 | 0.0217          | 0.1475 |
| 0.0026        | 13.0  | 3900 | 0.0218          | 0.1477 |
| 0.0023        | 14.0  | 4200 | 0.0229          | 0.1514 |
| 0.0015        | 15.0  | 4500 | 0.0234          | 0.1529 |
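
Validation RMSE is lowest at epoch 3 (0.1165) and rises afterwards while training loss keeps falling, which is why the reported evaluation results match the epoch-3 checkpoint. The card does not show how the RMSE column was computed; a hedged sketch of the kind of metric function typically passed to the Trainer is given below.

```python
# Hedged sketch of an RMSE metric for coordinate regression; the exact
# function used for this model is not shown on the card.
import numpy as np

def compute_metrics(eval_pred):
    predictions, labels = eval_pred
    rmse = np.sqrt(np.mean((np.asarray(predictions) - np.asarray(labels)) ** 2))
    return {"rmse": rmse}
```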

Framework versions

  • Transformers 4.43.0.dev0
  • Pytorch 2.0.1+cu117
  • Datasets 2.19.1
  • Tokenizers 0.19.1