This model is a fine-tuned version of PY007/TinyLlama-1.1B-intermediate-step-715k-1.5T on the openhermes dataset. It achieves the following results on the evaluation set:
- Loss: 1.2355
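A minimal inference sketch using 🤗 Transformers is shown below. The repo id is a placeholder, since this card does not state where the fine-tuned checkpoint is published; substitute the actual model id before running.

```python
# Minimal inference sketch. "your-username/tinyllama-openhermes" is a
# hypothetical repo id, not the actual published checkpoint name.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "your-username/tinyllama-openhermes"  # placeholder repo id
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.float16, device_map="auto"
)

inputs = tokenizer("What is the capital of France?", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```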
## Training procedure

### Training hyperparameters
The following hyperparameters were used during training (a sketch mapping them onto `TrainingArguments` follows the list):
- learning_rate: 0.0002
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- distributed_type: multi-GPU
- num_devices: 8
- total_train_batch_size: 8
- total_eval_batch_size: 8
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 100
- num_epochs: 1
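The training script itself is not part of this card, so the following is a hedged sketch of how the hyperparameters above map onto `transformers.TrainingArguments`, assuming a standard `Trainer`-based setup; the output directory is a placeholder.

```python
# Approximate mapping of the listed hyperparameters onto TrainingArguments.
# This is a sketch, not the authors' exact training script.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="tinyllama-openhermes",  # hypothetical output path
    learning_rate=2e-4,                 # learning_rate: 0.0002
    per_device_train_batch_size=1,      # train_batch_size: 1
    per_device_eval_batch_size=1,       # eval_batch_size: 1
    seed=42,
    lr_scheduler_type="cosine",
    warmup_steps=100,
    num_train_epochs=1,
    # Adam with betas=(0.9, 0.999) and epsilon=1e-08 matches the
    # TrainingArguments defaults (adam_beta1, adam_beta2, adam_epsilon).
    # With 8 GPUs (distributed_type: multi-GPU, num_devices: 8), the
    # effective total train/eval batch size is 8, as listed above.
)
```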
### Training results
| Training Loss | Epoch | Step  | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 1.4654        | 0.0   | 1     | 3.5326          |
| 1.2162        | 0.05  | 1503  | 1.9335          |
| 1.1918        | 0.1   | 3006  | 1.7391          |
| 1.4188        | 0.15  | 4509  | 1.7574          |
| 1.8281        | 0.2   | 6012  | 1.6704          |
| 0.8639        | 0.25  | 7515  | 1.7459          |
| 1.3764        | 0.3   | 9018  | 1.6832          |
| 2.1172        | 0.35  | 10521 | 1.6398          |
| 1.1855        | 0.4   | 12024 | 1.6007          |
| 1.5604        | 0.45  | 13527 | 1.5256          |
| 1.0224        | 0.5   | 15030 | 1.4891          |
| 1.5582        | 0.55  | 16533 | 1.4903          |
| 0.9489        | 0.6   | 18036 | 1.4179          |
| 1.67          | 0.65  | 19539 | 1.4585          |
| 0.8542        | 0.7   | 21042 | 1.3810          |
| 1.5301        | 0.75  | 22545 | 1.3645          |
| 0.951         | 0.8   | 24048 | 1.3087          |
| 1.1791        | 0.85  | 25551 | 1.3018          |
| 1.3342        | 0.9   | 27054 | 1.2595          |
| 1.1221        | 0.95  | 28557 | 1.2355          |
### Framework versions
- Transformers 4.34.0
- PyTorch 2.0.1+cu117
- Datasets 2.14.6
- Tokenizers 0.14.1
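When reproducing this setup, a small sanity-check sketch like the following can verify the installed packages against the pins above:

```python
# Quick environment check against the pinned framework versions above.
import datasets
import tokenizers
import torch
import transformers

expected = {
    "transformers": "4.34.0",
    "torch": "2.0.1+cu117",
    "datasets": "2.14.6",
    "tokenizers": "0.14.1",
}
installed = {
    "transformers": transformers.__version__,
    "torch": torch.__version__,
    "datasets": datasets.__version__,
    "tokenizers": tokenizers.__version__,
}
for name, want in expected.items():
    found = installed[name]
    status = "OK" if found == want else f"MISMATCH (found {found})"
    print(f"{name}: {status}")
```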