How to use damgomz/fp_bs8_lr1e4_x2 with Transformers:
```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("fill-mask", model="damgomz/fp_bs8_lr1e4_x2")

# Load model directly
from transformers import AutoTokenizer, AutoModelForMaskedLM

tokenizer = AutoTokenizer.from_pretrained("damgomz/fp_bs8_lr1e4_x2")
model = AutoModelForMaskedLM.from_pretrained("damgomz/fp_bs8_lr1e4_x2")
```
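For a quick smoke test, the pipeline can be queried directly. This is a minimal sketch assuming the tokenizer keeps ALBERT's default `[MASK]` token (the config table below lists albert-base-v2 as the checkpoint); the sentence is purely illustrative.

```python
# Hypothetical usage example: print the top fill-mask predictions.
# Assumes ALBERT's default [MASK] token; adjust if the tokenizer differs.
for pred in pipe("The weather in Switzerland is [MASK] today."):
    print(f"{pred['token_str']!r}: {pred['score']:.4f}")
```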
Environmental Impact (CodeCarbon default)
| Metric | Value |
|---|---|
| Duration (in seconds) | 177421.5589568615 |
| Emissions (CO2eq in kg) | 0.1334750406979024 |
| CPU power (W) | 42.5 |
| GPU power (W) | [No GPU] |
| RAM power (W) | 15.0 |
| CPU energy (kWh) | 2.0945555037395738 |
| GPU energy (kWh) | [No GPU] |
| RAM energy (kWh) | 0.739249313523375 |
| Consumed energy (kWh) | 2.833804817262955 |
| Country name | Switzerland |
| Cloud provider | N/A |
| Cloud region | N/A |
| CPU count | 6 |
| CPU model | Intel(R) Xeon(R) Platinum 8360Y CPU @ 2.40GHz |
| GPU count | N/A |
| GPU model | N/A |
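These rows follow CodeCarbon's usual accounting: each component's energy is its power draw integrated over the run duration, consumed energy is the sum of the components, and emissions scale the total by the local grid's carbon intensity. A quick sketch of that arithmetic (small gaps are expected because CodeCarbon samples power periodically rather than multiplying once):

```python
# Reproduce the table's energy accounting (approximate).
duration_s = 177421.5589568615
cpu_power_w, ram_power_w = 42.5, 15.0

cpu_energy_kwh = cpu_power_w * duration_s / 3.6e6  # W * s -> kWh
ram_energy_kwh = ram_power_w * duration_s / 3.6e6
total_kwh = cpu_energy_kwh + ram_energy_kwh

print(cpu_energy_kwh)  # ~2.0946, matches the CPU energy row
print(total_kwh)       # ~2.8338, matches the consumed energy row

# Implied carbon intensity of the Swiss run:
emissions_kg = 0.1334750406979024
print(emissions_kg / total_kwh)  # ~0.047 kg CO2eq per kWh
```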
Environmental Impact (for one core)
| Metric | Value |
|---|---|
| CPU energy (kWh) | 0.3415365009919584 |
| Emissions (CO2eq in kg) | 0.06949011059143742 |
Note
July 5, 2024
My Config
| Config | Value |
|---|---|
| checkpoint | albert-base-v2 |
| model_name | fp_bs8_lr1e4_x2 |
| sequence_length | 400 |
| num_epoch | 6 |
| learning_rate | 0.0001 |
| batch_size | 8 |
| weight_decay | 0.0 |
| warm_up_prop | 0.0 |
| drop_out_prob | 0.1 |
| packing_length | 100 |
| train_test_split | 0.2 |
| num_steps | 83254 |
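For readers who want to set up a comparable run, the values above map naturally onto a standard Transformers fine-tuning loop. The sketch below is an assumption, not the author's actual training script: it treats num_epoch, learning_rate, batch_size, weight_decay, warm_up_prop, and drop_out_prob as their usual `TrainingArguments` / model-config counterparts, and the datasets are placeholders.

```python
from transformers import (
    AutoTokenizer,
    AutoModelForMaskedLM,
    DataCollatorForLanguageModeling,
    Trainer,
    TrainingArguments,
)

# Start from the checkpoint listed in the config table.
tokenizer = AutoTokenizer.from_pretrained("albert-base-v2")
model = AutoModelForMaskedLM.from_pretrained(
    "albert-base-v2",
    hidden_dropout_prob=0.1,  # assumed mapping of drop_out_prob
)

# Hypothetical mapping of the config table onto TrainingArguments.
args = TrainingArguments(
    output_dir="fp_bs8_lr1e4_x2",
    num_train_epochs=6,             # num_epoch
    learning_rate=1e-4,             # learning_rate
    per_device_train_batch_size=8,  # batch_size
    weight_decay=0.0,               # weight_decay
    warmup_ratio=0.0,               # warm_up_prop
)

# Masked-LM collator; inputs would be tokenized/packed to the
# sequence_length (400) and packing_length (100) from the table.
collator = DataCollatorForLanguageModeling(tokenizer=tokenizer, mlm=True)

# train_dataset / eval_dataset would come from an 80/20 split
# (train_test_split = 0.2); they are omitted from this sketch.
trainer = Trainer(model=model, args=args, data_collator=collator)
```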
Training and Testing steps
| Epoch | Train Loss | Test Loss |
|---|---|---|
| 0.0 | 14.699384 | 8.961090 |
| 0.5 | 4.739969 | 3.821362 |
| 1.0 | 3.560692 | 3.402070 |
| 1.5 | 3.271684 | 3.207227 |
| 2.0 | 5.506532 | 6.965374 |
| 2.5 | 6.963911 | 6.979262 |
| 3.0 | 6.981505 | 6.987962 |
| 3.5 | 6.974114 | 6.985355 |
| 4.0 | 6.976612 | 6.973448 |
| 4.5 | 6.968990 | 6.975167 |
| 5.0 | 6.969795 | 6.971316 |
| 5.5 | 6.970071 | 6.966503 |
| 6.0 | 6.955143 | 6.965822 |