| modelId | lastModified | tags | pipeline_tag | author | config | securityStatus | id | likes | downloads | library_name | created | card | card_len | embeddings |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
yagmurery/bert-base-uncased-finetuned-cola | 2023-05-03T18:54:13.000Z | [
"transformers",
"pytorch",
"tensorboard",
"bert",
"text-classification",
"generated_from_trainer",
"dataset:glue",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | text-classification | yagmurery | null | null | yagmurery/bert-base-uncased-finetuned-cola | 0 | 2 | transformers | 2023-05-02T10:10:55 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- matthews_correlation
model-index:
- name: bert-base-uncased-finetuned-cola
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: glue
type: glue
config: cola
split: validation
args: cola
metrics:
- name: Matthews Correlation
type: matthews_correlation
value: 0.5855730181125508
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-uncased-finetuned-cola
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the glue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6423
- Matthews Correlation: 0.5856
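For reference, the Matthews correlation reported above is computed from the binary confusion matrix. The snippet below is an illustrative sketch of the formula only — the labels are made up and are not CoLA predictions:

```python
from math import sqrt

def matthews_corrcoef(y_true, y_pred):
    """Matthews correlation coefficient for binary labels (0/1)."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    denom = sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    return (tp * tn - fp * fn) / denom if denom else 0.0

# Toy example (hypothetical predictions):
print(round(matthews_corrcoef([1, 1, 0, 0], [1, 0, 0, 0]), 4))  # 0.5774
```

Unlike plain accuracy, this score stays near 0 for a classifier that ignores the minority class, which is why CoLA reports it.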
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
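As a rough illustration of what `lr_scheduler_type: linear` (with no warmup) means here: the learning rate decays linearly from its initial value to zero over the total number of training steps — 1,605 in this run (535 steps × 3 epochs, per the table below). A minimal sketch, not the Trainer's actual scheduler object:

```python
def linear_lr(step, base_lr=2e-05, total_steps=1605):
    """Linearly decay base_lr to 0 over total_steps (no warmup)."""
    return base_lr * max(0.0, 1.0 - step / total_steps)

print(linear_lr(0))     # 2e-05 at the start of training
print(linear_lr(1605))  # 0.0 at the final step
```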
### Training results
| Training Loss | Epoch | Step | Validation Loss | Matthews Correlation |
|:-------------:|:-----:|:----:|:---------------:|:--------------------:|
| 0.4932 | 1.0 | 535 | 0.5174 | 0.5028 |
| 0.2995 | 2.0 | 1070 | 0.4694 | 0.5782 |
| 0.1959 | 3.0 | 1605 | 0.6423 | 0.5856 |
### Framework versions
- Transformers 4.28.1
- Pytorch 2.0.0+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
| 1,870 | [
[
-0.0263671875,
-0.051361083984375,
0.01004791259765625,
0.018768310546875,
-0.0264739990234375,
-0.0206298828125,
-0.0172576904296875,
-0.01316070556640625,
0.02593994140625,
0.0161895751953125,
-0.050994873046875,
-0.031890869140625,
-0.0518798828125,
-0.02... |
amittian/setfit_asoc_version_0_0_1 | 2023-05-02T11:01:52.000Z | [
"sentence-transformers",
"pytorch",
"bert",
"setfit",
"text-classification",
"arxiv:2209.11055",
"license:apache-2.0",
"region:us"
] | text-classification | amittian | null | null | amittian/setfit_asoc_version_0_0_1 | 0 | 2 | sentence-transformers | 2023-05-02T10:40:46 | ---
license: apache-2.0
tags:
- setfit
- sentence-transformers
- text-classification
pipeline_tag: text-classification
---
# amittian/setfit_asoc_version_0_0_1
This is a [SetFit model](https://github.com/huggingface/setfit) that can be used for text classification. The model has been trained using an efficient few-shot learning technique that involves:
1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning.
2. Training a classification head with features from the fine-tuned Sentence Transformer.
## Usage
To use this model for inference, first install the SetFit library:
```bash
python -m pip install setfit
```
You can then run inference as follows:
```python
from setfit import SetFitModel
# Download from Hub and run inference
model = SetFitModel.from_pretrained("amittian/setfit_asoc_version_0_0_1")
# Run inference
preds = model(["i loved the spiderman movie!", "pineapple on pizza is the worst 🤮"])
```
## BibTeX entry and citation info
```bibtex
@article{https://doi.org/10.48550/arxiv.2209.11055,
doi = {10.48550/ARXIV.2209.11055},
url = {https://arxiv.org/abs/2209.11055},
author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren},
keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences},
title = {Efficient Few-Shot Learning Without Prompts},
publisher = {arXiv},
year = {2022},
copyright = {Creative Commons Attribution 4.0 International}
}
```
| 1,557 | [
[
-0.0072784423828125,
-0.057037353515625,
0.0229034423828125,
-0.01274871826171875,
-0.011505126953125,
-0.015869140625,
-0.0170135498046875,
-0.01235198974609375,
0.0037136077880859375,
0.035064697265625,
-0.040496826171875,
-0.021697998046875,
-0.04043579101562... |
mrovejaxd/goemotions_bertspanish_finetunig_c | 2023-05-02T12:56:10.000Z | [
"transformers",
"pytorch",
"tensorboard",
"bert",
"text-classification",
"generated_from_trainer",
"dataset:go_emotions",
"model-index",
"endpoints_compatible",
"region:us"
] | text-classification | mrovejaxd | null | null | mrovejaxd/goemotions_bertspanish_finetunig_c | 0 | 2 | transformers | 2023-05-02T10:57:56 | ---
tags:
- generated_from_trainer
datasets:
- go_emotions
metrics:
- accuracy
- f1
model-index:
- name: goemotions_bertspanish_finetunig_c
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: go_emotions
type: go_emotions
config: simplified
split: test
args: simplified
metrics:
- name: Accuracy
type: accuracy
value: 0.48444444444444446
- name: F1
type: f1
value: 0.39534557037631035
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# goemotions_bertspanish_finetunig_c
This model is a fine-tuned version of [dccuchile/bert-base-spanish-wwm-cased](https://huggingface.co/dccuchile/bert-base-spanish-wwm-cased) on the go_emotions dataset.
It achieves the following results on the evaluation set:
- Loss: 2.4091
- Accuracy: 0.4844
- F1: 0.3953
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 8
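With `lr_scheduler_warmup_steps: 500`, the linear schedule first ramps the learning rate from 0 up to 1e-05 over the first 500 steps, then decays it linearly toward 0 for the remainder of training. A sketch of that shape — the total step count here is hypothetical, since it is not reported in this card:

```python
def warmup_linear_lr(step, base_lr=1e-05, warmup_steps=500, total_steps=10000):
    """Linear warmup to base_lr, then linear decay to 0 (total_steps is illustrative)."""
    if step < warmup_steps:
        return base_lr * step / warmup_steps
    return base_lr * max(0.0, (total_steps - step) / (total_steps - warmup_steps))

print(warmup_linear_lr(0))      # 0.0 at step 0
print(warmup_linear_lr(500))    # peak of 1e-05 at the end of warmup
print(warmup_linear_lr(10000))  # 0.0 at the final step
```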
### Training results
### Framework versions
- Transformers 4.28.1
- Pytorch 2.0.0+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
| 1,625 | [
[
-0.0276336669921875,
-0.039764404296875,
0.00795745849609375,
0.0231475830078125,
-0.0357666015625,
-0.0270538330078125,
-0.033905029296875,
-0.0222015380859375,
0.0203857421875,
0.011505126953125,
-0.07086181640625,
-0.044769287109375,
-0.044097900390625,
-... |
AliiaR/model-for-texts | 2023-05-02T11:16:39.000Z | [
"transformers",
"tf",
"bert",
"text-classification",
"generated_from_keras_callback",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | text-classification | AliiaR | null | null | AliiaR/model-for-texts | 0 | 2 | transformers | 2023-05-02T11:10:56 | ---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: model-for-texts
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# model-for-texts
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': None, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': True, 'is_legacy_optimizer': False, 'learning_rate': 2e-05, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32
### Training results
### Framework versions
- Transformers 4.28.1
- TensorFlow 2.12.0
- Datasets 2.12.0
- Tokenizers 0.13.3
| 1,262 | [
[
-0.035736083984375,
-0.055572509765625,
0.031097412109375,
0.007526397705078125,
-0.046295166015625,
-0.0290069580078125,
-0.02264404296875,
-0.0254058837890625,
0.01070404052734375,
0.0197601318359375,
-0.0504150390625,
-0.053070068359375,
-0.05364990234375,
... |
EfeTarhan/bert-base-uncased-finetuned-cola | 2023-05-04T18:03:31.000Z | [
"transformers",
"pytorch",
"tensorboard",
"bert",
"text-classification",
"generated_from_trainer",
"dataset:glue",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | text-classification | EfeTarhan | null | null | EfeTarhan/bert-base-uncased-finetuned-cola | 0 | 2 | transformers | 2023-05-02T12:17:17 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- matthews_correlation
model-index:
- name: bert-base-uncased-finetuned-cola
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: glue
type: glue
config: cola
split: validation
args: cola
metrics:
- name: Matthews Correlation
type: matthews_correlation
value: 0.5494767866076017
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-uncased-finetuned-cola
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the glue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4957
- Matthews Correlation: 0.5495
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 9.234188281761211e-06
- train_batch_size: 64
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
### Training results
| Training Loss | Epoch | Step | Validation Loss | Matthews Correlation |
|:-------------:|:-----:|:----:|:---------------:|:--------------------:|
| No log | 1.0 | 134 | 0.4832 | 0.4122 |
| No log | 2.0 | 268 | 0.4678 | 0.5285 |
| No log | 3.0 | 402 | 0.4925 | 0.5312 |
| 0.4083 | 4.0 | 536 | 0.4957 | 0.5495 |
### Framework versions
- Transformers 4.28.1
- Pytorch 2.0.0+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
| 1,960 | [
[
-0.02496337890625,
-0.050567626953125,
0.00921630859375,
0.0183868408203125,
-0.023223876953125,
-0.01959228515625,
-0.01605224609375,
-0.0161895751953125,
0.0269012451171875,
0.0165252685546875,
-0.0516357421875,
-0.0323486328125,
-0.0511474609375,
-0.02160... |
SallyHyein/distilbert-base-uncased-finetuned-emotion | 2023-05-02T13:18:59.000Z | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:emotion",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | text-classification | SallyHyein | null | null | SallyHyein/distilbert-base-uncased-finetuned-emotion | 0 | 2 | transformers | 2023-05-02T12:38:36 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-emotion
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: emotion
type: emotion
config: split
split: validation
args: split
metrics:
- name: Accuracy
type: accuracy
value: 0.931
- name: F1
type: f1
value: 0.9309844319832071
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2160
- Accuracy: 0.931
- F1: 0.9310
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.8342 | 1.0 | 250 | 0.3068 | 0.9115 | 0.9084 |
| 0.248 | 2.0 | 500 | 0.2160 | 0.931 | 0.9310 |
### Framework versions
- Transformers 4.28.1
- Pytorch 2.0.0+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
| 1,846 | [
[
-0.038665771484375,
-0.0419921875,
0.01540374755859375,
0.021636962890625,
-0.02618408203125,
-0.01959228515625,
-0.013153076171875,
-0.0087432861328125,
0.01116943359375,
0.00882720947265625,
-0.05718994140625,
-0.051513671875,
-0.059173583984375,
-0.008544... |
platzi/platzi-distilroberta-base-mrpc-glue-glombardo | 2023-05-08T14:08:25.000Z | [
"transformers",
"pytorch",
"tensorboard",
"roberta",
"text-classification",
"generated_from_trainer",
"dataset:glue",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | text-classification | platzi | null | null | platzi/platzi-distilroberta-base-mrpc-glue-glombardo | 0 | 2 | transformers | 2023-05-02T12:46:23 | ---
license: apache-2.0
tags:
- text-classification
- generated_from_trainer
datasets:
- glue
metrics:
- accuracy
- f1
model-index:
- name: platzi-distilroberta-base-mrpc-glue-glombardo
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: datasetX
type: glue
config: mrpc
split: validation
args: mrpc
metrics:
- name: Accuracy
type: accuracy
value: 0.8063725490196079
- name: F1
type: f1
value: 0.8523364485981308
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# platzi-distilroberta-base-mrpc-glue-glombardo
This model is a fine-tuned version of [distilroberta-base](https://huggingface.co/distilroberta-base) on the datasetX dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6653
- Accuracy: 0.8064
- F1: 0.8523
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.4961 | 1.09 | 500 | 0.7312 | 0.8186 | 0.8702 |
| 0.3273 | 2.18 | 1000 | 0.6653 | 0.8064 | 0.8523 |
### Framework versions
- Transformers 4.28.1
- Pytorch 2.0.0+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
| 1,872 | [
[
-0.03271484375,
-0.03985595703125,
0.01343536376953125,
0.0159149169921875,
-0.032928466796875,
-0.0295867919921875,
-0.0110321044921875,
-0.0042724609375,
0.0006175041198730469,
0.0132904052734375,
-0.050537109375,
-0.0460205078125,
-0.059051513671875,
-0.0... |
matorus/distilbert-base-uncased-finetuned-emotion | 2023-05-02T13:34:05.000Z | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:emotion",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | text-classification | matorus | null | null | matorus/distilbert-base-uncased-finetuned-emotion | 0 | 2 | transformers | 2023-05-02T13:03:03 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-emotion
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: emotion
type: emotion
config: split
split: validation
args: split
metrics:
- name: Accuracy
type: accuracy
value: 0.9255
- name: F1
type: f1
value: 0.925808056925967
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2186
- Accuracy: 0.9255
- F1: 0.9258
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| No log | 1.0 | 250 | 0.3109 | 0.913 | 0.9104 |
| No log | 2.0 | 500 | 0.2186 | 0.9255 | 0.9258 |
### Framework versions
- Transformers 4.28.1
- Pytorch 2.0.0+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
| 1,847 | [
[
-0.035736083984375,
-0.043182373046875,
0.013580322265625,
0.0229949951171875,
-0.02679443359375,
-0.019805908203125,
-0.01335906982421875,
-0.01082611083984375,
0.010833740234375,
0.008514404296875,
-0.0557861328125,
-0.05126953125,
-0.059783935546875,
-0.0... |
mrovejaxd/goemotions_bertspanish_finetunig_d | 2023-05-24T06:05:53.000Z | [
"transformers",
"pytorch",
"tensorboard",
"bert",
"text-classification",
"generated_from_trainer",
"dataset:go_emotions",
"model-index",
"endpoints_compatible",
"region:us"
] | text-classification | mrovejaxd | null | null | mrovejaxd/goemotions_bertspanish_finetunig_d | 0 | 2 | transformers | 2023-05-02T13:15:37 | ---
tags:
- generated_from_trainer
datasets:
- go_emotions
metrics:
- accuracy
- f1
model-index:
- name: goemotions_bertspanish_finetunig_d
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: go_emotions
type: go_emotions
config: simplified
split: test
args: simplified
metrics:
- name: Accuracy
type: accuracy
value: 0.5125
- name: F1
type: f1
value: 0.3757437789402451
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# goemotions_bertspanish_finetunig_d
This model is a fine-tuned version of [dccuchile/bert-base-spanish-wwm-cased](https://huggingface.co/dccuchile/bert-base-spanish-wwm-cased) on the go_emotions dataset.
It achieves the following results on the evaluation set:
- Loss: 1.8151
- Accuracy: 0.5125
- F1: 0.3757
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 7
### Training results
### Framework versions
- Transformers 4.29.2
- Pytorch 2.0.1+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
| 1,578 | [
[
-0.0278778076171875,
-0.041656494140625,
0.01065826416015625,
0.0213165283203125,
-0.036376953125,
-0.027313232421875,
-0.032806396484375,
-0.0203094482421875,
0.021453857421875,
0.011474609375,
-0.07110595703125,
-0.045684814453125,
-0.0447998046875,
-0.018... |
jjdelgado/my_newsgroups_roberta_model | 2023-05-02T16:40:18.000Z | [
"transformers",
"tf",
"roberta",
"text-classification",
"generated_from_keras_callback",
"license:mit",
"endpoints_compatible",
"region:us"
] | text-classification | jjdelgado | null | null | jjdelgado/my_newsgroups_roberta_model | 0 | 2 | transformers | 2023-05-02T13:22:04 | ---
license: mit
tags:
- generated_from_keras_callback
model-index:
- name: jjdelgado/my_newsgroups_roberta_model
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# jjdelgado/my_newsgroups_roberta_model
This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 1.3069
- Validation Loss: 1.0260
- Train Accuracy: 0.6920
- Epoch: 0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': None, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': True, 'is_legacy_optimizer': False, 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 3535, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32
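The `PolynomialDecay` entry in the optimizer config above interpolates from `initial_learning_rate` to `end_learning_rate` over `decay_steps`; with `power: 1.0` it reduces to a plain linear decay. A sketch of the underlying formula using the config values (a standalone illustration, not Keras itself):

```python
def polynomial_decay(step, initial_lr=2e-05, end_lr=0.0, decay_steps=3535, power=1.0):
    """lr(step) = (initial_lr - end_lr) * (1 - step/decay_steps)**power + end_lr"""
    step = min(step, decay_steps)  # cycle=False: hold at end_lr past decay_steps
    return (initial_lr - end_lr) * (1.0 - step / decay_steps) ** power + end_lr

print(polynomial_decay(0))     # 2e-05 at the start
print(polynomial_decay(3535))  # 0.0 once decay_steps is reached
```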
### Training results
| Train Loss | Validation Loss | Train Accuracy | Epoch |
|:----------:|:---------------:|:--------------:|:-----:|
| 1.3069 | 1.0260 | 0.6920 | 0 |
### Framework versions
- Transformers 4.28.1
- TensorFlow 2.12.0
- Datasets 2.12.0
- Tokenizers 0.13.3
| 1,708 | [
[
-0.036956787109375,
-0.049072265625,
0.029296875,
0.0035877227783203125,
-0.0284271240234375,
-0.028961181640625,
-0.0246429443359375,
-0.017181396484375,
0.004730224609375,
0.00972747802734375,
-0.051910400390625,
-0.05047607421875,
-0.06597900390625,
-0.01... |
hemagamal/model | 2023-05-02T14:04:06.000Z | [
"transformers",
"tf",
"bert",
"text-classification",
"generated_from_keras_callback",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | text-classification | hemagamal | null | null | hemagamal/model | 0 | 2 | transformers | 2023-05-02T13:50:09 | ---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: hemagamal/model
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# hemagamal/model
This model is a fine-tuned version of [bert-base-multilingual-cased](https://huggingface.co/bert-base-multilingual-cased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.6846
- Train Accuracy: 0.5992
- Validation Loss: 0.6300
- Validation Accuracy: 0.6147
- Epoch: 1
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': None, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': True, 'is_legacy_optimizer': False, 'learning_rate': 2e-05, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False}
- training_precision: float32
### Training results
| Train Loss | Train Accuracy | Validation Loss | Validation Accuracy | Epoch |
|:----------:|:--------------:|:---------------:|:-------------------:|:-----:|
| 0.6520 | 0.6836 | 0.6229 | 0.7804 | 0 |
| 0.6846 | 0.5992 | 0.6300 | 0.6147 | 1 |
### Framework versions
- Transformers 4.28.1
- TensorFlow 2.12.0
- Datasets 2.12.0
- Tokenizers 0.13.3
| 1,716 | [
[
-0.038055419921875,
-0.054107666015625,
0.0121002197265625,
0.01462554931640625,
-0.03192138671875,
-0.0295257568359375,
-0.0192108154296875,
-0.024139404296875,
0.0157623291015625,
0.014495849609375,
-0.049530029296875,
-0.05401611328125,
-0.050384521484375,
... |
bodik/autotrain-js-classification-6-cat-dist-bert-uncased-54424128043 | 2023-05-02T14:43:46.000Z | [
"transformers",
"pytorch",
"distilbert",
"text-classification",
"autotrain",
"unk",
"dataset:bodik/autotrain-data-js-classification-6-cat-dist-bert-uncased",
"co2_eq_emissions",
"endpoints_compatible",
"region:us"
] | text-classification | bodik | null | null | bodik/autotrain-js-classification-6-cat-dist-bert-uncased-54424128043 | 1 | 2 | transformers | 2023-05-02T14:42:56 | ---
tags:
- autotrain
- text-classification
language:
- unk
widget:
- text: "I love AutoTrain 🤗"
datasets:
- bodik/autotrain-data-js-classification-6-cat-dist-bert-uncased
co2_eq_emissions:
emissions: 0.0013888828664696802
---
# Model Trained Using AutoTrain
- Problem type: Multi-class Classification
- Model ID: 54424128043
- CO2 Emissions (in grams): 0.0014
## Validation Metrics
- Loss: 0.332
- Accuracy: 0.914
- Macro F1: 0.917
- Micro F1: 0.914
- Weighted F1: 0.914
- Macro Precision: 0.927
- Micro Precision: 0.914
- Weighted Precision: 0.916
- Macro Recall: 0.910
- Micro Recall: 0.914
- Weighted Recall: 0.914
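The macro and micro variants above differ only in aggregation: macro averages the per-class F1 scores equally, while micro pools true/false positives globally — for single-label multi-class data, micro F1 equals accuracy (note 0.914 matches the accuracy above). A toy illustration with made-up labels:

```python
def f1_scores(y_true, y_pred):
    """Return (macro_f1, micro_f1) for single-label multi-class data."""
    classes = sorted(set(y_true) | set(y_pred))
    per_class = []
    for c in classes:
        tp = sum(1 for t, p in zip(y_true, y_pred) if t == c and p == c)
        fp = sum(1 for t, p in zip(y_true, y_pred) if t != c and p == c)
        fn = sum(1 for t, p in zip(y_true, y_pred) if t == c and p != c)
        per_class.append(2 * tp / (2 * tp + fp + fn) if tp + fp + fn else 0.0)
    macro = sum(per_class) / len(per_class)
    # Micro F1 pools counts globally; for single-label data this is accuracy.
    micro = sum(1 for t, p in zip(y_true, y_pred) if t == p) / len(y_true)
    return macro, micro

macro, micro = f1_scores([0, 0, 0, 1], [0, 0, 1, 1])
print(round(macro, 3), micro)  # 0.733 0.75
```

Weighted F1 (not shown) additionally weights each class's F1 by its support, so frequent classes dominate the average.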
## Usage
You can use cURL to access this model:
```bash
curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoTrain"}' https://api-inference.huggingface.co/models/bodik/autotrain-js-classification-6-cat-dist-bert-uncased-54424128043
```
Or the Python API:
```python
from transformers import AutoModelForSequenceClassification, AutoTokenizer

model = AutoModelForSequenceClassification.from_pretrained("bodik/autotrain-js-classification-6-cat-dist-bert-uncased-54424128043", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("bodik/autotrain-js-classification-6-cat-dist-bert-uncased-54424128043", use_auth_token=True)

inputs = tokenizer("I love AutoTrain", return_tensors="pt")
outputs = model(**inputs)
predicted_class_id = outputs.logits.argmax(-1).item()  # index of the highest-scoring class
``` | 1,404 | [
[
-0.031890869140625,
-0.0257110595703125,
0.005779266357421875,
0.008636474609375,
-0.00012731552124023438,
0.0093994140625,
-0.0011835098266601562,
-0.0152130126953125,
-0.00341796875,
0.00510406494140625,
-0.046783447265625,
-0.03857421875,
-0.05108642578125,
... |
guoluo/Bert_class_1e-10 | 2023-05-02T15:02:39.000Z | [
"transformers",
"tf",
"bert",
"text-classification",
"generated_from_keras_callback",
"endpoints_compatible",
"region:us"
] | text-classification | guoluo | null | null | guoluo/Bert_class_1e-10 | 0 | 2 | transformers | 2023-05-02T15:01:57 | ---
tags:
- generated_from_keras_callback
model-index:
- name: Bert_class_1e-10
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# Bert_class_1e-10
This model is a fine-tuned version of [guoluo/Bert_1.5e_07](https://huggingface.co/guoluo/Bert_1.5e_07) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 1.4794
- Train Accuracy: 0.1435
- Validation Loss: 1.4962
- Validation Accuracy: 0.1338
- Train Lr: 9.999547e-11
- Epoch: 999
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'learning_rate': 9.999547e-11, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False}
- training_precision: float32
### Training results
| Train Loss | Train Accuracy | Validation Loss | Validation Accuracy | Train Lr | Epoch |
|:----------:|:--------------:|:---------------:|:-------------------:|:-------------:|:-----:|
| 1.4732 | 0.1671 | 1.5014 | 0.1338 | 1e-10 | 0 |
| 1.4751 | 0.1412 | 1.5014 | 0.1338 | 1e-10 | 1 |
| 1.4792 | 0.1388 | 1.5014 | 0.1338 | 1e-10 | 2 |
| 1.4789 | 0.1388 | 1.5014 | 0.1338 | 1e-10 | 3 |
| 1.4755 | 0.1482 | 1.5014 | 0.1338 | 1e-10 | 4 |
| 1.4702 | 0.1482 | 1.5014 | 0.1338 | 1e-10 | 5 |
| 1.4800 | 0.1388 | 1.5014 | 0.1338 | 1e-10 | 6 |
| 1.4739 | 0.1576 | 1.5014 | 0.1338 | 1e-10 | 7 |
| 1.4831 | 0.1435 | 1.5014 | 0.1338 | 1e-10 | 8 |
| 1.4740 | 0.1459 | 1.5014 | 0.1338 | 1e-10 | 9 |
| 1.4762 | 0.1482 | 1.5014 | 0.1338 | 1e-10 | 10 |
| 1.4754 | 0.1388 | 1.5014 | 0.1338 | 1e-10 | 11 |
| 1.4683 | 0.1506 | 1.5014 | 0.1338 | 1e-10 | 12 |
| 1.4787 | 0.1553 | 1.5014 | 0.1338 | 1e-10 | 13 |
| 1.4770 | 0.1388 | 1.5014 | 0.1338 | 1e-10 | 14 |
| 1.4790 | 0.1388 | 1.5013 | 0.1338 | 1e-10 | 15 |
| 1.4799 | 0.1388 | 1.5013 | 0.1338 | 1e-10 | 16 |
| 1.4828 | 0.1388 | 1.5013 | 0.1338 | 1e-10 | 17 |
| 1.4780 | 0.1412 | 1.5013 | 0.1338 | 1e-10 | 18 |
| 1.4826 | 0.1271 | 1.5013 | 0.1338 | 1e-10 | 19 |
| 1.4770 | 0.1365 | 1.5013 | 0.1338 | 1e-10 | 20 |
| 1.4747 | 0.1388 | 1.5013 | 0.1338 | 1e-10 | 21 |
| 1.4783 | 0.1482 | 1.5013 | 0.1338 | 1e-10 | 22 |
| 1.4780 | 0.1506 | 1.5013 | 0.1338 | 1e-10 | 23 |
| 1.4748 | 0.1388 | 1.5013 | 0.1338 | 1e-10 | 24 |
| 1.4776 | 0.1553 | 1.5013 | 0.1338 | 1e-10 | 25 |
| 1.4813 | 0.1459 | 1.5013 | 0.1338 | 1e-10 | 26 |
| 1.4819 | 0.1412 | 1.5013 | 0.1338 | 1e-10 | 27 |
| 1.4756 | 0.1435 | 1.5013 | 0.1338 | 1e-10 | 28 |
| 1.4810 | 0.1435 | 1.5013 | 0.1338 | 1e-10 | 29 |
| 1.4745 | 0.1529 | 1.5013 | 0.1338 | 1e-10 | 30 |
| 1.4839 | 0.1341 | 1.5013 | 0.1338 | 1e-10 | 31 |
| 1.4784 | 0.1318 | 1.5013 | 0.1338 | 1e-10 | 32 |
| 1.4766 | 0.1412 | 1.5013 | 0.1338 | 1e-10 | 33 |
| 1.4740 | 0.1365 | 1.5012 | 0.1338 | 1e-10 | 34 |
| 1.4745 | 0.1529 | 1.5012 | 0.1338 | 1e-10 | 35 |
| 1.4722 | 0.1412 | 1.5012 | 0.1338 | 1e-10 | 36 |
| 1.4701 | 0.1506 | 1.5012 | 0.1338 | 1e-10 | 37 |
| 1.4725 | 0.1388 | 1.5012 | 0.1338 | 1e-10 | 38 |
| 1.4761 | 0.1459 | 1.5012 | 0.1338 | 1e-10 | 39 |
| 1.4825 | 0.1553 | 1.5012 | 0.1338 | 1e-10 | 40 |
| 1.4782 | 0.1412 | 1.5012 | 0.1338 | 1e-10 | 41 |
| 1.4786 | 0.1200 | 1.5012 | 0.1338 | 1e-10 | 42 |
| 1.4709 | 0.1576 | 1.5012 | 0.1338 | 1e-10 | 43 |
| 1.4707 | 0.1318 | 1.5012 | 0.1338 | 1e-10 | 44 |
| 1.4714 | 0.1435 | 1.5012 | 0.1338 | 1e-10 | 45 |
| 1.4729 | 0.1365 | 1.5012 | 0.1338 | 1e-10 | 46 |
| 1.4760 | 0.1694 | 1.5012 | 0.1338 | 1e-10 | 47 |
| 1.4787 | 0.1553 | 1.5012 | 0.1338 | 1e-10 | 48 |
| 1.4707 | 0.1365 | 1.5012 | 0.1338 | 1e-10 | 49 |
| 1.4767 | 0.1506 | 1.5012 | 0.1338 | 1e-10 | 50 |
| 1.4749 | 0.1412 | 1.5012 | 0.1338 | 1e-10 | 51 |
| 1.4737 | 0.1482 | 1.5012 | 0.1338 | 1e-10 | 52 |
| 1.4764 | 0.1365 | 1.5012 | 0.1338 | 1e-10 | 53 |
| 1.4764 | 0.1412 | 1.5011 | 0.1338 | 1e-10 | 54 |
| 1.4808 | 0.1294 | 1.5011 | 0.1338 | 1e-10 | 55 |
| 1.4694 | 0.1365 | 1.5011 | 0.1338 | 1e-10 | 56 |
| 1.4714 | 0.1294 | 1.5011 | 0.1338 | 1e-10 | 57 |
| 1.4766 | 0.1318 | 1.5011 | 0.1338 | 1e-10 | 58 |
| 1.4801 | 0.1388 | 1.5011 | 0.1338 | 1e-10 | 59 |
| 1.4771 | 0.1435 | 1.5011 | 0.1338 | 1e-10 | 60 |
| 1.4740 | 0.1294 | 1.5011 | 0.1338 | 1e-10 | 61 |
| 1.4817 | 0.1341 | 1.5011 | 0.1338 | 1e-10 | 62 |
| 1.4728 | 0.1459 | 1.5011 | 0.1338 | 1e-10 | 63 |
| 1.4791 | 0.1318 | 1.5011 | 0.1338 | 1e-10 | 64 |
| 1.4733 | 0.1224 | 1.5011 | 0.1338 | 1e-10 | 65 |
| 1.4678 | 0.1506 | 1.5011 | 0.1338 | 1e-10 | 66 |
| 1.4789 | 0.1153 | 1.5011 | 0.1338 | 1e-10 | 67 |
| 1.4655 | 0.1529 | 1.5011 | 0.1338 | 1e-10 | 68 |
| 1.4698 | 0.1576 | 1.5011 | 0.1338 | 1e-10 | 69 |
| 1.4755 | 0.1365 | 1.5011 | 0.1338 | 1e-10 | 70 |
| 1.4754 | 0.1412 | 1.5011 | 0.1338 | 1e-10 | 71 |
| 1.4732 | 0.1341 | 1.5011 | 0.1338 | 1e-10 | 72 |
| 1.4762 | 0.1224 | 1.5010 | 0.1338 | 1e-10 | 73 |
| 1.4642 | 0.1435 | 1.5010 | 0.1338 | 1e-10 | 74 |
| 1.4726 | 0.1506 | 1.5010 | 0.1338 | 1e-10 | 75 |
| 1.4810 | 0.1506 | 1.5010 | 0.1338 | 1e-10 | 76 |
| 1.4749 | 0.1341 | 1.5010 | 0.1338 | 1e-10 | 77 |
| 1.4734 | 0.1459 | 1.5010 | 0.1338 | 1e-10 | 78 |
| 1.4740 | 0.1247 | 1.5010 | 0.1338 | 1e-10 | 79 |
| 1.4721 | 0.1412 | 1.5010 | 0.1338 | 1e-10 | 80 |
| 1.4767 | 0.1435 | 1.5010 | 0.1338 | 1e-10 | 81 |
| 1.4748 | 0.1435 | 1.5010 | 0.1338 | 1e-10 | 82 |
| 1.4848 | 0.1412 | 1.5010 | 0.1338 | 1e-10 | 83 |
| 1.4755 | 0.1341 | 1.5010 | 0.1338 | 1e-10 | 84 |
| 1.4705 | 0.1600 | 1.5010 | 0.1338 | 1e-10 | 85 |
| 1.4707 | 0.1624 | 1.5010 | 0.1338 | 1e-10 | 86 |
| 1.4748 | 0.1459 | 1.5010 | 0.1338 | 1e-10 | 87 |
| 1.4759 | 0.1388 | 1.5010 | 0.1338 | 1e-10 | 88 |
| 1.4722 | 0.1576 | 1.5010 | 0.1338 | 1e-10 | 89 |
| 1.4764 | 0.1482 | 1.5010 | 0.1338 | 1e-10 | 90 |
| 1.4711 | 0.1624 | 1.5010 | 0.1338 | 1e-10 | 91 |
| 1.4734 | 0.1412 | 1.5009 | 0.1338 | 1e-10 | 92 |
| 1.4772 | 0.1224 | 1.5009 | 0.1338 | 1e-10 | 93 |
| 1.4660 | 0.1506 | 1.5009 | 0.1338 | 1e-10 | 94 |
| 1.4771 | 0.1529 | 1.5009 | 0.1338 | 1e-10 | 95 |
| 1.4698 | 0.1341 | 1.5009 | 0.1338 | 1e-10 | 96 |
| 1.4763 | 0.1388 | 1.5009 | 0.1338 | 1e-10 | 97 |
| 1.4708 | 0.1459 | 1.5009 | 0.1338 | 1e-10 | 98 |
| 1.4774 | 0.1412 | 1.5009 | 0.1338 | 1e-10 | 99 |
| 1.4648 | 0.1506 | 1.5009 | 0.1338 | 1e-10 | 100 |
| 1.4799 | 0.1412 | 1.5009 | 0.1338 | 1e-10 | 101 |
| 1.4750 | 0.1506 | 1.5009 | 0.1338 | 1e-10 | 102 |
| 1.4779 | 0.1388 | 1.5009 | 0.1338 | 1e-10 | 103 |
| 1.4774 | 0.1435 | 1.5009 | 0.1338 | 1e-10 | 104 |
| 1.4736 | 0.1341 | 1.5009 | 0.1338 | 1e-10 | 105 |
| 1.4702 | 0.1318 | 1.5009 | 0.1338 | 1e-10 | 106 |
| 1.4827 | 0.1341 | 1.5009 | 0.1338 | 1e-10 | 107 |
| 1.4770 | 0.1294 | 1.5009 | 0.1338 | 1e-10 | 108 |
| 1.4783 | 0.1482 | 1.5009 | 0.1338 | 1e-10 | 109 |
| 1.4721 | 0.1459 | 1.5009 | 0.1338 | 1e-10 | 110 |
| 1.4739 | 0.1365 | 1.5008 | 0.1338 | 1e-10 | 111 |
| 1.4722 | 0.1318 | 1.5008 | 0.1338 | 1e-10 | 112 |
| 1.4762 | 0.1247 | 1.5008 | 0.1338 | 1e-10 | 113 |
| 1.4682 | 0.1294 | 1.5008 | 0.1338 | 1e-10 | 114 |
| 1.4719 | 0.1388 | 1.5008 | 0.1338 | 1e-10 | 115 |
| 1.4776 | 0.1529 | 1.5008 | 0.1338 | 1e-10 | 116 |
| 1.4779 | 0.1412 | 1.5008 | 0.1338 | 1e-10 | 117 |
| 1.4776 | 0.1200 | 1.5008 | 0.1338 | 1e-10 | 118 |
| 1.4724 | 0.1200 | 1.5008 | 0.1338 | 1e-10 | 119 |
| 1.4756 | 0.1341 | 1.5008 | 0.1338 | 1e-10 | 120 |
| 1.4768 | 0.1459 | 1.5008 | 0.1338 | 1e-10 | 121 |
| 1.4854 | 0.1294 | 1.5008 | 0.1338 | 1e-10 | 122 |
| 1.4744 | 0.1388 | 1.5008 | 0.1338 | 1e-10 | 123 |
| 1.4661 | 0.1459 | 1.5008 | 0.1338 | 1e-10 | 124 |
| 1.4824 | 0.1412 | 1.5008 | 0.1338 | 1e-10 | 125 |
| 1.4680 | 0.1576 | 1.5008 | 0.1338 | 1e-10 | 126 |
| 1.4763 | 0.1365 | 1.5008 | 0.1338 | 1e-10 | 127 |
| 1.4740 | 0.1435 | 1.5008 | 0.1338 | 1e-10 | 128 |
| 1.4747 | 0.1553 | 1.5008 | 0.1338 | 1e-10 | 129 |
| 1.4720 | 0.1365 | 1.5007 | 0.1338 | 1e-10 | 130 |
| 1.4734 | 0.1294 | 1.5007 | 0.1338 | 1e-10 | 131 |
| 1.4758 | 0.1365 | 1.5007 | 0.1338 | 1e-10 | 132 |
| 1.4724 | 0.1365 | 1.5007 | 0.1338 | 1e-10 | 133 |
| 1.4750 | 0.1341 | 1.5007 | 0.1338 | 1e-10 | 134 |
| 1.4829 | 0.1412 | 1.5007 | 0.1338 | 1e-10 | 135 |
| 1.4690 | 0.1365 | 1.5007 | 0.1338 | 1e-10 | 136 |
| 1.4733 | 0.1506 | 1.5007 | 0.1338 | 1e-10 | 137 |
| 1.4724 | 0.1459 | 1.5007 | 0.1338 | 1e-10 | 138 |
| 1.4804 | 0.1271 | 1.5007 | 0.1338 | 1e-10 | 139 |
| 1.4711 | 0.1482 | 1.5007 | 0.1338 | 1e-10 | 140 |
| 1.4872 | 0.1318 | 1.5007 | 0.1338 | 1e-10 | 141 |
| 1.4796 | 0.1341 | 1.5007 | 0.1338 | 1e-10 | 142 |
| 1.4712 | 0.1576 | 1.5007 | 0.1338 | 1e-10 | 143 |
| 1.4729 | 0.1435 | 1.5007 | 0.1338 | 1e-10 | 144 |
| 1.4678 | 0.1624 | 1.5007 | 0.1338 | 1e-10 | 145 |
| 1.4696 | 0.1553 | 1.5007 | 0.1338 | 1e-10 | 146 |
| 1.4742 | 0.1412 | 1.5007 | 0.1338 | 1e-10 | 147 |
| 1.4814 | 0.1365 | 1.5007 | 0.1338 | 1e-10 | 148 |
| 1.4705 | 0.1224 | 1.5006 | 0.1338 | 1e-10 | 149 |
| 1.4711 | 0.1176 | 1.5006 | 0.1338 | 1e-10 | 150 |
| 1.4692 | 0.1459 | 1.5006 | 0.1338 | 1e-10 | 151 |
| 1.4698 | 0.1529 | 1.5006 | 0.1338 | 1e-10 | 152 |
| 1.4721 | 0.1459 | 1.5006 | 0.1338 | 1e-10 | 153 |
| 1.4692 | 0.1482 | 1.5006 | 0.1338 | 1e-10 | 154 |
| 1.4773 | 0.1341 | 1.5006 | 0.1338 | 1e-10 | 155 |
| 1.4677 | 0.1553 | 1.5006 | 0.1338 | 1e-10 | 156 |
| 1.4815 | 0.1271 | 1.5006 | 0.1338 | 1e-10 | 157 |
| 1.4732 | 0.1271 | 1.5006 | 0.1338 | 1e-10 | 158 |
| 1.4727 | 0.1529 | 1.5006 | 0.1338 | 1e-10 | 159 |
| 1.4764 | 0.1482 | 1.5006 | 0.1338 | 1e-10 | 160 |
| 1.4773 | 0.1412 | 1.5006 | 0.1338 | 1e-10 | 161 |
| 1.4792 | 0.1435 | 1.5006 | 0.1338 | 1e-10 | 162 |
| 1.4733 | 0.1529 | 1.5006 | 0.1338 | 1e-10 | 163 |
| 1.4781 | 0.1435 | 1.5006 | 0.1338 | 1e-10 | 164 |
| 1.4689 | 0.1318 | 1.5006 | 0.1338 | 1e-10 | 165 |
| 1.4795 | 0.1459 | 1.5006 | 0.1338 | 1e-10 | 166 |
| 1.4766 | 0.1294 | 1.5006 | 0.1338 | 1e-10 | 167 |
| 1.4728 | 0.1459 | 1.5005 | 0.1338 | 1e-10 | 168 |
| 1.4664 | 0.1435 | 1.5005 | 0.1338 | 1e-10 | 169 |
| 1.4710 | 0.1388 | 1.5005 | 0.1338 | 1e-10 | 170 |
| 1.4758 | 0.1435 | 1.5005 | 0.1338 | 1e-10 | 171 |
| 1.4760 | 0.1412 | 1.5005 | 0.1338 | 1e-10 | 172 |
| 1.4768 | 0.1388 | 1.5005 | 0.1338 | 1e-10 | 173 |
| 1.4749 | 0.1459 | 1.5005 | 0.1338 | 1e-10 | 174 |
| 1.4795 | 0.1506 | 1.5005 | 0.1338 | 1e-10 | 175 |
| 1.4702 | 0.1459 | 1.5005 | 0.1338 | 1e-10 | 176 |
| 1.4788 | 0.1271 | 1.5005 | 0.1338 | 1e-10 | 177 |
| 1.4753 | 0.1435 | 1.5005 | 0.1338 | 1e-10 | 178 |
| 1.4750 | 0.1388 | 1.5005 | 0.1338 | 1e-10 | 179 |
| 1.4799 | 0.1459 | 1.5005 | 0.1338 | 1e-10 | 180 |
| 1.4768 | 0.1365 | 1.5005 | 0.1338 | 1e-10 | 181 |
| 1.4780 | 0.1459 | 1.5005 | 0.1338 | 1e-10 | 182 |
| 1.4745 | 0.1224 | 1.5005 | 0.1338 | 1e-10 | 183 |
| 1.4618 | 0.1624 | 1.5005 | 0.1338 | 1e-10 | 184 |
| 1.4775 | 0.1553 | 1.5005 | 0.1338 | 1e-10 | 185 |
| 1.4711 | 0.1435 | 1.5005 | 0.1338 | 1e-10 | 186 |
| 1.4802 | 0.1388 | 1.5004 | 0.1338 | 1e-10 | 187 |
| 1.4714 | 0.1529 | 1.5004 | 0.1338 | 1e-10 | 188 |
| 1.4707 | 0.1482 | 1.5004 | 0.1338 | 1e-10 | 189 |
| 1.4712 | 0.1647 | 1.5004 | 0.1338 | 1e-10 | 190 |
| 1.4709 | 0.1435 | 1.5004 | 0.1338 | 1e-10 | 191 |
| 1.4741 | 0.1459 | 1.5004 | 0.1338 | 1e-10 | 192 |
| 1.4682 | 0.1553 | 1.5004 | 0.1338 | 1e-10 | 193 |
| 1.4768 | 0.1224 | 1.5004 | 0.1338 | 1e-10 | 194 |
| 1.4868 | 0.1388 | 1.5004 | 0.1338 | 1e-10 | 195 |
| 1.4736 | 0.1600 | 1.5004 | 0.1338 | 1e-10 | 196 |
| 1.4784 | 0.1388 | 1.5004 | 0.1338 | 1e-10 | 197 |
| 1.4752 | 0.1365 | 1.5004 | 0.1338 | 1e-10 | 198 |
| 1.4790 | 0.1506 | 1.5004 | 0.1338 | 1e-10 | 199 |
| 1.4696 | 0.1412 | 1.5004 | 0.1338 | 1e-10 | 200 |
| 1.4771 | 0.1435 | 1.5004 | 0.1338 | 1e-10 | 201 |
| 1.4723 | 0.1412 | 1.5004 | 0.1338 | 1e-10 | 202 |
| 1.4742 | 0.1294 | 1.5004 | 0.1338 | 1e-10 | 203 |
| 1.4713 | 0.1529 | 1.5004 | 0.1338 | 1e-10 | 204 |
| 1.4752 | 0.1412 | 1.5004 | 0.1338 | 1e-10 | 205 |
| 1.4728 | 0.1365 | 1.5003 | 0.1338 | 1e-10 | 206 |
| 1.4809 | 0.1388 | 1.5003 | 0.1338 | 1e-10 | 207 |
| 1.4772 | 0.1388 | 1.5003 | 0.1338 | 1e-10 | 208 |
| 1.4759 | 0.1506 | 1.5003 | 0.1338 | 1e-10 | 209 |
| 1.4769 | 0.1482 | 1.5003 | 0.1338 | 1e-10 | 210 |
| 1.4686 | 0.1388 | 1.5003 | 0.1338 | 1e-10 | 211 |
| 1.4775 | 0.1506 | 1.5003 | 0.1338 | 1e-10 | 212 |
| 1.4659 | 0.1412 | 1.5003 | 0.1338 | 1e-10 | 213 |
| 1.4766 | 0.1176 | 1.5003 | 0.1338 | 1e-10 | 214 |
| 1.4770 | 0.1341 | 1.5003 | 0.1338 | 1e-10 | 215 |
| 1.4572 | 0.1600 | 1.5003 | 0.1338 | 1e-10 | 216 |
| 1.4677 | 0.1318 | 1.5003 | 0.1338 | 1e-10 | 217 |
| 1.4816 | 0.1224 | 1.5003 | 0.1338 | 1e-10 | 218 |
| 1.4748 | 0.1600 | 1.5003 | 0.1338 | 1e-10 | 219 |
| 1.4753 | 0.1529 | 1.5003 | 0.1338 | 1e-10 | 220 |
| 1.4744 | 0.1247 | 1.5003 | 0.1338 | 1e-10 | 221 |
| 1.4757 | 0.1459 | 1.5003 | 0.1338 | 1e-10 | 222 |
| 1.4777 | 0.1365 | 1.5003 | 0.1338 | 1e-10 | 223 |
| 1.4705 | 0.1459 | 1.5003 | 0.1338 | 1e-10 | 224 |
| 1.4697 | 0.1506 | 1.5003 | 0.1338 | 1e-10 | 225 |
| 1.4714 | 0.1341 | 1.5002 | 0.1338 | 1e-10 | 226 |
| 1.4714 | 0.1365 | 1.5002 | 0.1338 | 1e-10 | 227 |
| 1.4778 | 0.1459 | 1.5002 | 0.1338 | 1e-10 | 228 |
| 1.4764 | 0.1506 | 1.5002 | 0.1338 | 1e-10 | 229 |
| 1.4687 | 0.1741 | 1.5002 | 0.1338 | 1e-10 | 230 |
| 1.4731 | 0.1506 | 1.5002 | 0.1338 | 1e-10 | 231 |
| 1.4747 | 0.1341 | 1.5002 | 0.1338 | 1e-10 | 232 |
| 1.4709 | 0.1412 | 1.5002 | 0.1338 | 1e-10 | 233 |
| 1.4730 | 0.1553 | 1.5002 | 0.1338 | 1e-10 | 234 |
| 1.4749 | 0.1388 | 1.5002 | 0.1338 | 1e-10 | 235 |
| 1.4734 | 0.1271 | 1.5002 | 0.1338 | 1e-10 | 236 |
| 1.4658 | 0.1506 | 1.5002 | 0.1338 | 1e-10 | 237 |
| 1.4662 | 0.1576 | 1.5002 | 0.1338 | 1e-10 | 238 |
| 1.4771 | 0.1459 | 1.5002 | 0.1338 | 1e-10 | 239 |
| 1.4793 | 0.1365 | 1.5002 | 0.1338 | 1e-10 | 240 |
| 1.4702 | 0.1318 | 1.5002 | 0.1338 | 1e-10 | 241 |
| 1.4737 | 0.1341 | 1.5002 | 0.1338 | 1e-10 | 242 |
| 1.4737 | 0.1459 | 1.5002 | 0.1338 | 1e-10 | 243 |
| 1.4799 | 0.1435 | 1.5002 | 0.1338 | 1e-10 | 244 |
| 1.4821 | 0.1435 | 1.5001 | 0.1338 | 1e-10 | 245 |
| 1.4673 | 0.1529 | 1.5001 | 0.1338 | 1e-10 | 246 |
| 1.4720 | 0.1482 | 1.5001 | 0.1338 | 1e-10 | 247 |
| 1.4715 | 0.1600 | 1.5001 | 0.1338 | 1e-10 | 248 |
| 1.4750 | 0.1647 | 1.5001 | 0.1338 | 1e-10 | 249 |
| 1.4735 | 0.1341 | 1.5001 | 0.1338 | 1e-10 | 250 |
| 1.4787 | 0.1341 | 1.5001 | 0.1338 | 1e-10 | 251 |
| 1.4659 | 0.1600 | 1.5001 | 0.1338 | 1e-10 | 252 |
| 1.4787 | 0.1529 | 1.5001 | 0.1338 | 1e-10 | 253 |
| 1.4787 | 0.1341 | 1.5001 | 0.1338 | 1e-10 | 254 |
| 1.4796 | 0.1435 | 1.5001 | 0.1338 | 1e-10 | 255 |
| 1.4739 | 0.1506 | 1.5001 | 0.1338 | 1e-10 | 256 |
| 1.4817 | 0.1318 | 1.5001 | 0.1338 | 1e-10 | 257 |
| 1.4796 | 0.1412 | 1.5001 | 0.1338 | 1e-10 | 258 |
| 1.4780 | 0.1341 | 1.5001 | 0.1338 | 1e-10 | 259 |
| 1.4737 | 0.1341 | 1.5001 | 0.1338 | 1e-10 | 260 |
| 1.4777 | 0.1412 | 1.5001 | 0.1338 | 1e-10 | 261 |
| 1.4709 | 0.1459 | 1.5001 | 0.1338 | 1e-10 | 262 |
| 1.4680 | 0.1576 | 1.5001 | 0.1338 | 1e-10 | 263 |
| 1.4760 | 0.1506 | 1.5000 | 0.1338 | 1e-10 | 264 |
| 1.4743 | 0.1482 | 1.5000 | 0.1338 | 1e-10 | 265 |
| 1.4709 | 0.1553 | 1.5000 | 0.1338 | 1e-10 | 266 |
| 1.4787 | 0.1294 | 1.5000 | 0.1338 | 1e-10 | 267 |
| 1.4727 | 0.1482 | 1.5000 | 0.1338 | 1e-10 | 268 |
| 1.4776 | 0.1553 | 1.5000 | 0.1338 | 1e-10 | 269 |
| 1.4804 | 0.1247 | 1.5000 | 0.1338 | 1e-10 | 270 |
| 1.4682 | 0.1529 | 1.5000 | 0.1338 | 1e-10 | 271 |
| 1.4731 | 0.1435 | 1.5000 | 0.1338 | 1e-10 | 272 |
| 1.4719 | 0.1482 | 1.5000 | 0.1338 | 1e-10 | 273 |
| 1.4773 | 0.1506 | 1.5000 | 0.1338 | 1e-10 | 274 |
| 1.4780 | 0.1294 | 1.5000 | 0.1338 | 1e-10 | 275 |
| 1.4728 | 0.1506 | 1.5000 | 0.1338 | 1e-10 | 276 |
| 1.4748 | 0.1459 | 1.5000 | 0.1338 | 1e-10 | 277 |
| 1.4667 | 0.1341 | 1.5000 | 0.1338 | 1e-10 | 278 |
| 1.4725 | 0.1459 | 1.5000 | 0.1338 | 1e-10 | 279 |
| 1.4774 | 0.1388 | 1.5000 | 0.1338 | 1e-10 | 280 |
| 1.4764 | 0.1529 | 1.5000 | 0.1338 | 1e-10 | 281 |
| 1.4725 | 0.1388 | 1.5000 | 0.1338 | 1e-10 | 282 |
| 1.4734 | 0.1435 | 1.4999 | 0.1338 | 1e-10 | 283 |
| 1.4718 | 0.1506 | 1.4999 | 0.1338 | 1e-10 | 284 |
| 1.4674 | 0.1482 | 1.4999 | 0.1338 | 1e-10 | 285 |
| 1.4762 | 0.1435 | 1.4999 | 0.1338 | 1e-10 | 286 |
| 1.4735 | 0.1482 | 1.4999 | 0.1338 | 1e-10 | 287 |
| 1.4790 | 0.1294 | 1.4999 | 0.1338 | 1e-10 | 288 |
| 1.4777 | 0.1388 | 1.4999 | 0.1338 | 1e-10 | 289 |
| 1.4793 | 0.1576 | 1.4999 | 0.1338 | 1e-10 | 290 |
| 1.4729 | 0.1435 | 1.4999 | 0.1338 | 1e-10 | 291 |
| 1.4742 | 0.1506 | 1.4999 | 0.1338 | 1e-10 | 292 |
| 1.4775 | 0.1341 | 1.4999 | 0.1338 | 1e-10 | 293 |
| 1.4688 | 0.1482 | 1.4999 | 0.1338 | 1e-10 | 294 |
| 1.4782 | 0.1247 | 1.4999 | 0.1338 | 1e-10 | 295 |
| 1.4680 | 0.1482 | 1.4999 | 0.1338 | 1e-10 | 296 |
| 1.4749 | 0.1365 | 1.4999 | 0.1338 | 1e-10 | 297 |
| 1.4814 | 0.1176 | 1.4999 | 0.1338 | 1e-10 | 298 |
| 1.4698 | 0.1388 | 1.4999 | 0.1338 | 1e-10 | 299 |
| 1.4724 | 0.1529 | 1.4999 | 0.1338 | 1e-10 | 300 |
| 1.4753 | 0.1459 | 1.4999 | 0.1338 | 1e-10 | 301 |
| 1.4790 | 0.1341 | 1.4998 | 0.1338 | 1e-10 | 302 |
| 1.4685 | 0.1529 | 1.4998 | 0.1338 | 1e-10 | 303 |
| 1.4850 | 0.1341 | 1.4998 | 0.1338 | 1e-10 | 304 |
| 1.4755 | 0.1435 | 1.4998 | 0.1338 | 1e-10 | 305 |
| 1.4781 | 0.1341 | 1.4998 | 0.1338 | 1e-10 | 306 |
| 1.4800 | 0.1341 | 1.4998 | 0.1338 | 1e-10 | 307 |
| 1.4749 | 0.1529 | 1.4998 | 0.1338 | 1e-10 | 308 |
| 1.4819 | 0.1271 | 1.4998 | 0.1338 | 1e-10 | 309 |
| 1.4702 | 0.1529 | 1.4998 | 0.1338 | 1e-10 | 310 |
| 1.4758 | 0.1459 | 1.4998 | 0.1338 | 1e-10 | 311 |
| 1.4703 | 0.1529 | 1.4998 | 0.1338 | 1e-10 | 312 |
| 1.4768 | 0.1365 | 1.4998 | 0.1338 | 1e-10 | 313 |
| 1.4741 | 0.1294 | 1.4998 | 0.1338 | 1e-10 | 314 |
| 1.4702 | 0.1506 | 1.4998 | 0.1338 | 1e-10 | 315 |
| 1.4744 | 0.1647 | 1.4998 | 0.1338 | 1e-10 | 316 |
| 1.4771 | 0.1482 | 1.4998 | 0.1338 | 1e-10 | 317 |
| 1.4711 | 0.1506 | 1.4998 | 0.1338 | 1e-10 | 318 |
| 1.4679 | 0.1506 | 1.4998 | 0.1338 | 1e-10 | 319 |
| 1.4726 | 0.1459 | 1.4998 | 0.1338 | 1e-10 | 320 |
| 1.4682 | 0.1435 | 1.4997 | 0.1338 | 1e-10 | 321 |
| 1.4750 | 0.1506 | 1.4997 | 0.1338 | 1e-10 | 322 |
| 1.4756 | 0.1482 | 1.4997 | 0.1338 | 1e-10 | 323 |
| 1.4791 | 0.1365 | 1.4997 | 0.1338 | 1e-10 | 324 |
| 1.4794 | 0.1200 | 1.4997 | 0.1338 | 1e-10 | 325 |
| 1.4813 | 0.1435 | 1.4997 | 0.1338 | 1e-10 | 326 |
| 1.4604 | 0.1318 | 1.4997 | 0.1338 | 1e-10 | 327 |
| 1.4815 | 0.1247 | 1.4997 | 0.1338 | 1e-10 | 328 |
| 1.4750 | 0.1412 | 1.4997 | 0.1338 | 1e-10 | 329 |
| 1.4671 | 0.1459 | 1.4997 | 0.1338 | 1e-10 | 330 |
| 1.4749 | 0.1576 | 1.4997 | 0.1338 | 1e-10 | 331 |
| 1.4836 | 0.1341 | 1.4997 | 0.1338 | 1e-10 | 332 |
| 1.4839 | 0.1624 | 1.4997 | 0.1338 | 1e-10 | 333 |
| 1.4660 | 0.1412 | 1.4997 | 0.1338 | 1e-10 | 334 |
| 1.4708 | 0.1318 | 1.4997 | 0.1338 | 1e-10 | 335 |
| 1.4755 | 0.1271 | 1.4997 | 0.1338 | 1e-10 | 336 |
| 1.4823 | 0.1318 | 1.4997 | 0.1338 | 1e-10 | 337 |
| 1.4730 | 0.1318 | 1.4997 | 0.1338 | 1e-10 | 338 |
| 1.4785 | 0.1459 | 1.4997 | 0.1338 | 1e-10 | 339 |
| 1.4720 | 0.1412 | 1.4996 | 0.1338 | 1e-10 | 340 |
| 1.4759 | 0.1459 | 1.4996 | 0.1338 | 1e-10 | 341 |
| 1.4755 | 0.1482 | 1.4996 | 0.1338 | 1e-10 | 342 |
| 1.4756 | 0.1365 | 1.4996 | 0.1338 | 1e-10 | 343 |
| 1.4720 | 0.1459 | 1.4996 | 0.1338 | 1e-10 | 344 |
| 1.4835 | 0.1388 | 1.4996 | 0.1338 | 1e-10 | 345 |
| 1.4722 | 0.1412 | 1.4996 | 0.1338 | 1e-10 | 346 |
| 1.4729 | 0.1271 | 1.4996 | 0.1338 | 9.9999994e-11 | 347 |
| 1.4838 | 0.1271 | 1.4996 | 0.1338 | 9.999999e-11 | 348 |
| 1.4722 | 0.1318 | 1.4996 | 0.1338 | 9.999998e-11 | 349 |
| 1.4709 | 0.1459 | 1.4996 | 0.1338 | 9.9999974e-11 | 350 |
| 1.4729 | 0.1388 | 1.4996 | 0.1338 | 9.999997e-11 | 351 |
| 1.4751 | 0.1459 | 1.4996 | 0.1338 | 9.999996e-11 | 352 |
| 1.4627 | 0.1553 | 1.4996 | 0.1338 | 9.999995e-11 | 353 |
| 1.4719 | 0.1459 | 1.4996 | 0.1338 | 9.9999946e-11 | 354 |
| 1.4696 | 0.1341 | 1.4996 | 0.1338 | 9.999994e-11 | 355 |
| 1.4782 | 0.1435 | 1.4996 | 0.1338 | 9.999993e-11 | 356 |
| 1.4692 | 0.1459 | 1.4996 | 0.1338 | 9.9999925e-11 | 357 |
| 1.4685 | 0.1435 | 1.4996 | 0.1338 | 9.999992e-11 | 358 |
| 1.4787 | 0.1459 | 1.4996 | 0.1338 | 9.999991e-11 | 359 |
| 1.4783 | 0.1694 | 1.4995 | 0.1338 | 9.9999904e-11 | 360 |
| 1.4746 | 0.1553 | 1.4995 | 0.1338 | 9.99999e-11 | 361 |
| 1.4805 | 0.1388 | 1.4995 | 0.1338 | 9.999989e-11 | 362 |
| 1.4651 | 0.1365 | 1.4995 | 0.1338 | 9.999988e-11 | 363 |
| 1.4713 | 0.1435 | 1.4995 | 0.1338 | 9.9999876e-11 | 364 |
| 1.4753 | 0.1341 | 1.4995 | 0.1338 | 9.999987e-11 | 365 |
| 1.4764 | 0.1529 | 1.4995 | 0.1338 | 9.999986e-11 | 366 |
| 1.4719 | 0.1412 | 1.4995 | 0.1338 | 9.9999856e-11 | 367 |
| 1.4746 | 0.1412 | 1.4995 | 0.1338 | 9.999985e-11 | 368 |
| 1.4736 | 0.1341 | 1.4995 | 0.1338 | 9.999984e-11 | 369 |
| 1.4636 | 0.1553 | 1.4995 | 0.1338 | 9.9999835e-11 | 370 |
| 1.4680 | 0.1576 | 1.4995 | 0.1338 | 9.999983e-11 | 371 |
| 1.4725 | 0.1341 | 1.4995 | 0.1338 | 9.999982e-11 | 372 |
| 1.4738 | 0.1388 | 1.4995 | 0.1338 | 9.9999814e-11 | 373 |
| 1.4777 | 0.1506 | 1.4995 | 0.1338 | 9.999981e-11 | 374 |
| 1.4710 | 0.1671 | 1.4995 | 0.1338 | 9.99998e-11 | 375 |
| 1.4726 | 0.1506 | 1.4995 | 0.1338 | 9.999979e-11 | 376 |
| 1.4744 | 0.1365 | 1.4995 | 0.1338 | 9.9999786e-11 | 377 |
| 1.4731 | 0.1529 | 1.4995 | 0.1338 | 9.999978e-11 | 378 |
| 1.4713 | 0.1506 | 1.4994 | 0.1338 | 9.999977e-11 | 379 |
| 1.4790 | 0.1412 | 1.4994 | 0.1338 | 9.9999765e-11 | 380 |
| 1.4689 | 0.1388 | 1.4994 | 0.1338 | 9.999976e-11 | 381 |
| 1.4708 | 0.1482 | 1.4994 | 0.1338 | 9.999975e-11 | 382 |
| 1.4705 | 0.1529 | 1.4994 | 0.1338 | 9.9999745e-11 | 383 |
| 1.4658 | 0.1506 | 1.4994 | 0.1338 | 9.999974e-11 | 384 |
| 1.4758 | 0.1200 | 1.4994 | 0.1338 | 9.999973e-11 | 385 |
| 1.4812 | 0.1365 | 1.4994 | 0.1338 | 9.9999724e-11 | 386 |
| 1.4773 | 0.1694 | 1.4994 | 0.1338 | 9.999972e-11 | 387 |
| 1.4729 | 0.1506 | 1.4994 | 0.1338 | 9.999971e-11 | 388 |
| 1.4729 | 0.1459 | 1.4994 | 0.1338 | 9.99997e-11 | 389 |
| 1.4796 | 0.1365 | 1.4994 | 0.1338 | 9.9999696e-11 | 390 |
| 1.4763 | 0.1294 | 1.4994 | 0.1338 | 9.999969e-11 | 391 |
| 1.4733 | 0.1529 | 1.4994 | 0.1338 | 9.999968e-11 | 392 |
| 1.4726 | 0.1435 | 1.4994 | 0.1338 | 9.9999675e-11 | 393 |
| 1.4699 | 0.1318 | 1.4994 | 0.1338 | 9.999967e-11 | 394 |
| 1.4724 | 0.1318 | 1.4994 | 0.1338 | 9.999966e-11 | 395 |
| 1.4767 | 0.1388 | 1.4994 | 0.1338 | 9.9999654e-11 | 396 |
| 1.4733 | 0.1341 | 1.4994 | 0.1338 | 9.999965e-11 | 397 |
| 1.4769 | 0.1459 | 1.4993 | 0.1338 | 9.999964e-11 | 398 |
| 1.4744 | 0.1482 | 1.4993 | 0.1338 | 9.9999634e-11 | 399 |
| 1.4739 | 0.1435 | 1.4993 | 0.1338 | 9.999963e-11 | 400 |
| 1.4746 | 0.1482 | 1.4993 | 0.1338 | 9.999962e-11 | 401 |
| 1.4725 | 0.1412 | 1.4993 | 0.1338 | 9.999961e-11 | 402 |
| 1.4665 | 0.1459 | 1.4993 | 0.1338 | 9.9999606e-11 | 403 |
| 1.4791 | 0.1506 | 1.4993 | 0.1338 | 9.99996e-11 | 404 |
| 1.4747 | 0.1506 | 1.4993 | 0.1338 | 9.999959e-11 | 405 |
| 1.4770 | 0.1247 | 1.4993 | 0.1338 | 9.9999585e-11 | 406 |
| 1.4773 | 0.1529 | 1.4993 | 0.1338 | 9.999958e-11 | 407 |
| 1.4832 | 0.1318 | 1.4993 | 0.1338 | 9.999957e-11 | 408 |
| 1.4728 | 0.1271 | 1.4993 | 0.1338 | 9.9999564e-11 | 409 |
| 1.4714 | 0.1553 | 1.4993 | 0.1338 | 9.999956e-11 | 410 |
| 1.4758 | 0.1365 | 1.4993 | 0.1338 | 9.999955e-11 | 411 |
| 1.4740 | 0.1459 | 1.4993 | 0.1338 | 9.999954e-11 | 412 |
| 1.4737 | 0.1365 | 1.4993 | 0.1338 | 9.9999536e-11 | 413 |
| 1.4786 | 0.1529 | 1.4993 | 0.1338 | 9.999953e-11 | 414 |
| 1.4694 | 0.1459 | 1.4993 | 0.1338 | 9.999952e-11 | 415 |
| 1.4720 | 0.1459 | 1.4993 | 0.1338 | 9.9999516e-11 | 416 |
| 1.4761 | 0.1294 | 1.4992 | 0.1338 | 9.999951e-11 | 417 |
| 1.4761 | 0.1318 | 1.4992 | 0.1338 | 9.99995e-11 | 418 |
| 1.4724 | 0.1459 | 1.4992 | 0.1338 | 9.9999495e-11 | 419 |
| 1.4760 | 0.1459 | 1.4992 | 0.1338 | 9.999949e-11 | 420 |
| 1.4735 | 0.1412 | 1.4992 | 0.1338 | 9.999948e-11 | 421 |
| 1.4752 | 0.1318 | 1.4992 | 0.1338 | 9.9999474e-11 | 422 |
| 1.4748 | 0.1600 | 1.4992 | 0.1338 | 9.999947e-11 | 423 |
| 1.4777 | 0.1435 | 1.4992 | 0.1338 | 9.999946e-11 | 424 |
| 1.4714 | 0.1482 | 1.4992 | 0.1338 | 9.999945e-11 | 425 |
| 1.4729 | 0.1506 | 1.4992 | 0.1338 | 9.9999446e-11 | 426 |
| 1.4768 | 0.1294 | 1.4992 | 0.1338 | 9.999944e-11 | 427 |
| 1.4718 | 0.1482 | 1.4992 | 0.1338 | 9.999943e-11 | 428 |
| 1.4783 | 0.1271 | 1.4992 | 0.1338 | 9.9999425e-11 | 429 |
| 1.4735 | 0.1553 | 1.4992 | 0.1338 | 9.999942e-11 | 430 |
| 1.4762 | 0.1388 | 1.4992 | 0.1338 | 9.999941e-11 | 431 |
| 1.4698 | 0.1388 | 1.4992 | 0.1338 | 9.9999405e-11 | 432 |
| 1.4655 | 0.1529 | 1.4992 | 0.1338 | 9.99994e-11 | 433 |
| 1.4725 | 0.1412 | 1.4992 | 0.1338 | 9.999939e-11 | 434 |
| 1.4738 | 0.1506 | 1.4992 | 0.1338 | 9.9999384e-11 | 435 |
| 1.4737 | 0.1506 | 1.4991 | 0.1338 | 9.999938e-11 | 436 |
| 1.4704 | 0.1435 | 1.4991 | 0.1338 | 9.999937e-11 | 437 |
| 1.4824 | 0.1271 | 1.4991 | 0.1338 | 9.999936e-11 | 438 |
| 1.4713 | 0.1341 | 1.4991 | 0.1338 | 9.9999356e-11 | 439 |
| 1.4707 | 0.1412 | 1.4991 | 0.1338 | 9.999935e-11 | 440 |
| 1.4721 | 0.1482 | 1.4991 | 0.1338 | 9.999934e-11 | 441 |
| 1.4667 | 0.1435 | 1.4991 | 0.1338 | 9.9999335e-11 | 442 |
| 1.4793 | 0.1365 | 1.4991 | 0.1338 | 9.999933e-11 | 443 |
| 1.4746 | 0.1412 | 1.4991 | 0.1338 | 9.999932e-11 | 444 |
| 1.4637 | 0.1506 | 1.4991 | 0.1338 | 9.9999314e-11 | 445 |
| 1.4701 | 0.1529 | 1.4991 | 0.1338 | 9.999931e-11 | 446 |
| 1.4666 | 0.1506 | 1.4991 | 0.1338 | 9.99993e-11 | 447 |
| 1.4796 | 0.1318 | 1.4991 | 0.1338 | 9.9999294e-11 | 448 |
| 1.4729 | 0.1412 | 1.4991 | 0.1338 | 9.999929e-11 | 449 |
| 1.4725 | 0.1482 | 1.4991 | 0.1338 | 9.999928e-11 | 450 |
| 1.4731 | 0.1412 | 1.4991 | 0.1338 | 9.999927e-11 | 451 |
| 1.4723 | 0.1506 | 1.4991 | 0.1338 | 9.9999266e-11 | 452 |
| 1.4744 | 0.1341 | 1.4991 | 0.1338 | 9.999926e-11 | 453 |
| 1.4746 | 0.1459 | 1.4991 | 0.1338 | 9.999925e-11 | 454 |
| 1.4702 | 0.1318 | 1.4990 | 0.1338 | 9.9999245e-11 | 455 |
| 1.4721 | 0.1459 | 1.4990 | 0.1338 | 9.999924e-11 | 456 |
| 1.4824 | 0.1459 | 1.4990 | 0.1338 | 9.999923e-11 | 457 |
| 1.4732 | 0.1459 | 1.4990 | 0.1338 | 9.9999224e-11 | 458 |
| 1.4740 | 0.1482 | 1.4990 | 0.1338 | 9.999922e-11 | 459 |
| 1.4729 | 0.1482 | 1.4990 | 0.1338 | 9.999921e-11 | 460 |
| 1.4746 | 0.1576 | 1.4990 | 0.1338 | 9.99992e-11 | 461 |
| 1.4771 | 0.1365 | 1.4990 | 0.1338 | 9.9999196e-11 | 462 |
| 1.4809 | 0.1412 | 1.4990 | 0.1338 | 9.999919e-11 | 463 |
| 1.4774 | 0.1365 | 1.4990 | 0.1338 | 9.999918e-11 | 464 |
| 1.4741 | 0.1459 | 1.4990 | 0.1338 | 9.9999176e-11 | 465 |
| 1.4811 | 0.1388 | 1.4990 | 0.1338 | 9.999917e-11 | 466 |
| 1.4776 | 0.1459 | 1.4990 | 0.1338 | 9.999916e-11 | 467 |
| 1.4663 | 0.1506 | 1.4990 | 0.1338 | 9.9999155e-11 | 468 |
| 1.4666 | 0.1482 | 1.4990 | 0.1338 | 9.999915e-11 | 469 |
| 1.4814 | 0.1294 | 1.4990 | 0.1338 | 9.999914e-11 | 470 |
| 1.4720 | 0.1271 | 1.4990 | 0.1338 | 9.9999134e-11 | 471 |
| 1.4668 | 0.1247 | 1.4990 | 0.1338 | 9.999913e-11 | 472 |
| 1.4647 | 0.1671 | 1.4990 | 0.1338 | 9.999912e-11 | 473 |
| 1.4674 | 0.1624 | 1.4989 | 0.1338 | 9.999911e-11 | 474 |
| 1.4724 | 0.1553 | 1.4989 | 0.1338 | 9.9999106e-11 | 475 |
| 1.4711 | 0.1435 | 1.4989 | 0.1338 | 9.99991e-11 | 476 |
| 1.4685 | 0.1482 | 1.4989 | 0.1338 | 9.999909e-11 | 477 |
| 1.4784 | 0.1388 | 1.4989 | 0.1338 | 9.9999085e-11 | 478 |
| 1.4728 | 0.1341 | 1.4989 | 0.1338 | 9.999908e-11 | 479 |
| 1.4708 | 0.1412 | 1.4989 | 0.1338 | 9.999907e-11 | 480 |
| 1.4691 | 0.1553 | 1.4989 | 0.1338 | 9.9999065e-11 | 481 |
| 1.4713 | 0.1506 | 1.4989 | 0.1338 | 9.999906e-11 | 482 |
| 1.4732 | 0.1341 | 1.4989 | 0.1338 | 9.999905e-11 | 483 |
| 1.4727 | 0.1271 | 1.4989 | 0.1338 | 9.9999044e-11 | 484 |
| 1.4751 | 0.1435 | 1.4989 | 0.1338 | 9.999904e-11 | 485 |
| 1.4721 | 0.1671 | 1.4989 | 0.1338 | 9.999903e-11 | 486 |
| 1.4662 | 0.1341 | 1.4989 | 0.1338 | 9.999902e-11 | 487 |
| 1.4711 | 0.1459 | 1.4989 | 0.1338 | 9.9999016e-11 | 488 |
| 1.4743 | 0.1529 | 1.4989 | 0.1338 | 9.999901e-11 | 489 |
| 1.4648 | 0.1529 | 1.4989 | 0.1338 | 9.9999e-11 | 490 |
| 1.4762 | 0.1435 | 1.4989 | 0.1338 | 9.9998995e-11 | 491 |
| 1.4683 | 0.1318 | 1.4989 | 0.1338 | 9.999899e-11 | 492 |
| 1.4702 | 0.1624 | 1.4989 | 0.1338 | 9.999898e-11 | 493 |
| 1.4717 | 0.1482 | 1.4988 | 0.1338 | 9.9998974e-11 | 494 |
| 1.4753 | 0.1435 | 1.4988 | 0.1338 | 9.999897e-11 | 495 |
| 1.4775 | 0.1341 | 1.4988 | 0.1338 | 9.999896e-11 | 496 |
| 1.4755 | 0.1624 | 1.4988 | 0.1338 | 9.9998954e-11 | 497 |
| 1.4748 | 0.1224 | 1.4988 | 0.1338 | 9.999895e-11 | 498 |
| 1.4704 | 0.1365 | 1.4988 | 0.1338 | 9.999894e-11 | 499 |
| 1.4710 | 0.1341 | 1.4988 | 0.1338 | 9.999893e-11 | 500 |
| 1.4720 | 0.1412 | 1.4988 | 0.1338 | 9.9998926e-11 | 501 |
| 1.4743 | 0.1600 | 1.4988 | 0.1338 | 9.999892e-11 | 502 |
| 1.4698 | 0.1459 | 1.4988 | 0.1338 | 9.999891e-11 | 503 |
| 1.4730 | 0.1506 | 1.4988 | 0.1338 | 9.9998905e-11 | 504 |
| 1.4699 | 0.1318 | 1.4988 | 0.1338 | 9.99989e-11 | 505 |
| 1.4714 | 0.1459 | 1.4988 | 0.1338 | 9.999889e-11 | 506 |
| 1.4741 | 0.1553 | 1.4988 | 0.1338 | 9.9998884e-11 | 507 |
| 1.4878 | 0.1318 | 1.4988 | 0.1338 | 9.999888e-11 | 508 |
| 1.4759 | 0.1365 | 1.4988 | 0.1338 | 9.999887e-11 | 509 |
| 1.4716 | 0.1506 | 1.4988 | 0.1338 | 9.999886e-11 | 510 |
| 1.4715 | 0.1294 | 1.4988 | 0.1338 | 9.9998856e-11 | 511 |
| 1.4750 | 0.1600 | 1.4988 | 0.1338 | 9.999885e-11 | 512 |
| 1.4700 | 0.1459 | 1.4987 | 0.1338 | 9.999884e-11 | 513 |
| 1.4716 | 0.1553 | 1.4987 | 0.1338 | 9.9998836e-11 | 514 |
| 1.4749 | 0.1318 | 1.4987 | 0.1338 | 9.999883e-11 | 515 |
| 1.4646 | 0.1529 | 1.4987 | 0.1338 | 9.999882e-11 | 516 |
| 1.4695 | 0.1482 | 1.4987 | 0.1338 | 9.9998815e-11 | 517 |
| 1.4741 | 0.1341 | 1.4987 | 0.1338 | 9.999881e-11 | 518 |
| 1.4748 | 0.1318 | 1.4987 | 0.1338 | 9.99988e-11 | 519 |
| 1.4698 | 0.1294 | 1.4987 | 0.1338 | 9.9998794e-11 | 520 |
| 1.4750 | 0.1365 | 1.4987 | 0.1338 | 9.999879e-11 | 521 |
| 1.4663 | 0.1553 | 1.4987 | 0.1338 | 9.999878e-11 | 522 |
| 1.4771 | 0.1412 | 1.4987 | 0.1338 | 9.999877e-11 | 523 |
| 1.4859 | 0.1388 | 1.4987 | 0.1338 | 9.9998766e-11 | 524 |
| 1.4818 | 0.1294 | 1.4987 | 0.1338 | 9.999876e-11 | 525 |
| 1.4770 | 0.1576 | 1.4987 | 0.1338 | 9.999875e-11 | 526 |
| 1.4692 | 0.1576 | 1.4987 | 0.1338 | 9.9998745e-11 | 527 |
| 1.4794 | 0.1482 | 1.4987 | 0.1338 | 9.999874e-11 | 528 |
| 1.4737 | 0.1529 | 1.4987 | 0.1338 | 9.999873e-11 | 529 |
| 1.4730 | 0.1271 | 1.4987 | 0.1338 | 9.9998725e-11 | 530 |
| 1.4738 | 0.1388 | 1.4987 | 0.1338 | 9.999872e-11 | 531 |
| 1.4749 | 0.1459 | 1.4986 | 0.1338 | 9.999871e-11 | 532 |
| 1.4724 | 0.1412 | 1.4986 | 0.1338 | 9.9998704e-11 | 533 |
| 1.4698 | 0.1459 | 1.4986 | 0.1338 | 9.99987e-11 | 534 |
| 1.4821 | 0.1247 | 1.4986 | 0.1338 | 9.999869e-11 | 535 |
| 1.4726 | 0.1459 | 1.4986 | 0.1338 | 9.999868e-11 | 536 |
| 1.4703 | 0.1529 | 1.4986 | 0.1338 | 9.9998676e-11 | 537 |
| 1.4682 | 0.1576 | 1.4986 | 0.1338 | 9.999867e-11 | 538 |
| 1.4790 | 0.1459 | 1.4986 | 0.1338 | 9.999866e-11 | 539 |
| 1.4691 | 0.1647 | 1.4986 | 0.1338 | 9.9998655e-11 | 540 |
| 1.4718 | 0.1271 | 1.4986 | 0.1338 | 9.999865e-11 | 541 |
| 1.4690 | 0.1271 | 1.4986 | 0.1338 | 9.999864e-11 | 542 |
| 1.4813 | 0.1341 | 1.4986 | 0.1338 | 9.9998634e-11 | 543 |
| 1.4767 | 0.1365 | 1.4986 | 0.1338 | 9.999863e-11 | 544 |
| 1.4742 | 0.1553 | 1.4986 | 0.1338 | 9.999862e-11 | 545 |
| 1.4610 | 0.1412 | 1.4986 | 0.1338 | 9.9998614e-11 | 546 |
| 1.4812 | 0.1482 | 1.4986 | 0.1338 | 9.999861e-11 | 547 |
| 1.4643 | 0.1388 | 1.4986 | 0.1338 | 9.99986e-11 | 548 |
| 1.4648 | 0.1459 | 1.4986 | 0.1338 | 9.999859e-11 | 549 |
| 1.4720 | 0.1459 | 1.4986 | 0.1338 | 9.9998586e-11 | 550 |
| 1.4751 | 0.1459 | 1.4985 | 0.1338 | 9.999858e-11 | 551 |
| 1.4738 | 0.1341 | 1.4985 | 0.1338 | 9.999857e-11 | 552 |
| 1.4729 | 0.1412 | 1.4985 | 0.1338 | 9.9998565e-11 | 553 |
| 1.4799 | 0.1412 | 1.4985 | 0.1338 | 9.999856e-11 | 554 |
| 1.4699 | 0.1341 | 1.4985 | 0.1338 | 9.999855e-11 | 555 |
| 1.4727 | 0.1318 | 1.4985 | 0.1338 | 9.9998544e-11 | 556 |
| 1.4766 | 0.1341 | 1.4985 | 0.1338 | 9.999854e-11 | 557 |
| 1.4673 | 0.1435 | 1.4985 | 0.1338 | 9.999853e-11 | 558 |
| 1.4669 | 0.1388 | 1.4985 | 0.1338 | 9.999852e-11 | 559 |
| 1.4774 | 0.1412 | 1.4985 | 0.1338 | 9.9998516e-11 | 560 |
| 1.4741 | 0.1412 | 1.4985 | 0.1338 | 9.999851e-11 | 561 |
| 1.4693 | 0.1435 | 1.4985 | 0.1338 | 9.99985e-11 | 562 |
| 1.4793 | 0.1388 | 1.4985 | 0.1338 | 9.9998496e-11 | 563 |
| 1.4788 | 0.1435 | 1.4985 | 0.1338 | 9.999849e-11 | 564 |
| 1.4709 | 0.1624 | 1.4985 | 0.1338 | 9.999848e-11 | 565 |
| 1.4732 | 0.1388 | 1.4985 | 0.1338 | 9.9998475e-11 | 566 |
| 1.4734 | 0.1412 | 1.4985 | 0.1338 | 9.999847e-11 | 567 |
| 1.4719 | 0.1529 | 1.4985 | 0.1338 | 9.999846e-11 | 568 |
| 1.4706 | 0.1459 | 1.4985 | 0.1338 | 9.9998454e-11 | 569 |
| 1.4657 | 0.1529 | 1.4984 | 0.1338 | 9.999845e-11 | 570 |
| 1.4775 | 0.1459 | 1.4984 | 0.1338 | 9.999844e-11 | 571 |
| 1.4719 | 0.1576 | 1.4984 | 0.1338 | 9.999843e-11 | 572 |
| 1.4761 | 0.1412 | 1.4984 | 0.1338 | 9.9998426e-11 | 573 |
| 1.4745 | 0.1459 | 1.4984 | 0.1338 | 9.999842e-11 | 574 |
| 1.4759 | 0.1318 | 1.4984 | 0.1338 | 9.999841e-11 | 575 |
| 1.4654 | 0.1482 | 1.4984 | 0.1338 | 9.9998405e-11 | 576 |
| 1.4672 | 0.1600 | 1.4984 | 0.1338 | 9.99984e-11 | 577 |
| 1.4761 | 0.1435 | 1.4984 | 0.1338 | 9.999839e-11 | 578 |
| 1.4760 | 0.1529 | 1.4984 | 0.1338 | 9.9998385e-11 | 579 |
| 1.4728 | 0.1412 | 1.4984 | 0.1338 | 9.999838e-11 | 580 |
| 1.4768 | 0.1412 | 1.4984 | 0.1338 | 9.999837e-11 | 581 |
| 1.4736 | 0.1412 | 1.4984 | 0.1338 | 9.9998364e-11 | 582 |
| 1.4779 | 0.1318 | 1.4984 | 0.1338 | 9.999836e-11 | 583 |
| 1.4745 | 0.1647 | 1.4984 | 0.1338 | 9.999835e-11 | 584 |
| 1.4694 | 0.1529 | 1.4984 | 0.1338 | 9.999834e-11 | 585 |
| 1.4707 | 0.1435 | 1.4984 | 0.1338 | 9.9998336e-11 | 586 |
| 1.4645 | 0.1506 | 1.4984 | 0.1338 | 9.999833e-11 | 587 |
| 1.4747 | 0.1388 | 1.4984 | 0.1338 | 9.999832e-11 | 588 |
| 1.4683 | 0.1435 | 1.4983 | 0.1338 | 9.9998315e-11 | 589 |
| 1.4733 | 0.1412 | 1.4983 | 0.1338 | 9.999831e-11 | 590 |
| 1.4651 | 0.1388 | 1.4983 | 0.1338 | 9.99983e-11 | 591 |
| 1.4742 | 0.1388 | 1.4983 | 0.1338 | 9.9998294e-11 | 592 |
| 1.4765 | 0.1435 | 1.4983 | 0.1338 | 9.999829e-11 | 593 |
| 1.4695 | 0.1553 | 1.4983 | 0.1338 | 9.999828e-11 | 594 |
| 1.4696 | 0.1412 | 1.4983 | 0.1338 | 9.9998274e-11 | 595 |
| 1.4733 | 0.1294 | 1.4983 | 0.1338 | 9.999827e-11 | 596 |
| 1.4689 | 0.1435 | 1.4983 | 0.1338 | 9.999826e-11 | 597 |
| 1.4727 | 0.1388 | 1.4983 | 0.1338 | 9.999825e-11 | 598 |
| 1.4714 | 0.1553 | 1.4983 | 0.1338 | 9.9998246e-11 | 599 |
| 1.4773 | 0.1318 | 1.4983 | 0.1338 | 9.999824e-11 | 600 |
| 1.4743 | 0.1553 | 1.4983 | 0.1338 | 9.999823e-11 | 601 |
| 1.4741 | 0.1294 | 1.4983 | 0.1338 | 9.9998225e-11 | 602 |
| 1.4693 | 0.1506 | 1.4983 | 0.1338 | 9.999822e-11 | 603 |
| 1.4767 | 0.1341 | 1.4983 | 0.1338 | 9.999821e-11 | 604 |
| 1.4762 | 0.1459 | 1.4983 | 0.1338 | 9.9998204e-11 | 605 |
| 1.4791 | 0.1271 | 1.4983 | 0.1338 | 9.99982e-11 | 606 |
| 1.4745 | 0.1412 | 1.4983 | 0.1338 | 9.999819e-11 | 607 |
| 1.4706 | 0.1576 | 1.4982 | 0.1338 | 9.999818e-11 | 608 |
| 1.4704 | 0.1412 | 1.4982 | 0.1338 | 9.9998176e-11 | 609 |
| 1.4826 | 0.1553 | 1.4982 | 0.1338 | 9.999817e-11 | 610 |
| 1.4783 | 0.1247 | 1.4982 | 0.1338 | 9.999816e-11 | 611 |
| 1.4783 | 0.1529 | 1.4982 | 0.1338 | 9.9998156e-11 | 612 |
| 1.4799 | 0.1482 | 1.4982 | 0.1338 | 9.999815e-11 | 613 |
| 1.4732 | 0.1459 | 1.4982 | 0.1338 | 9.999814e-11 | 614 |
| 1.4630 | 0.1624 | 1.4982 | 0.1338 | 9.9998135e-11 | 615 |
| 1.4710 | 0.1482 | 1.4982 | 0.1338 | 9.999813e-11 | 616 |
| 1.4665 | 0.1318 | 1.4982 | 0.1338 | 9.999812e-11 | 617 |
| 1.4760 | 0.1529 | 1.4982 | 0.1338 | 9.9998114e-11 | 618 |
| 1.4696 | 0.1576 | 1.4982 | 0.1338 | 9.999811e-11 | 619 |
| 1.4699 | 0.1647 | 1.4982 | 0.1338 | 9.99981e-11 | 620 |
| 1.4788 | 0.1318 | 1.4982 | 0.1338 | 9.999809e-11 | 621 |
| 1.4685 | 0.1435 | 1.4982 | 0.1338 | 9.9998086e-11 | 622 |
| 1.4771 | 0.1200 | 1.4982 | 0.1338 | 9.999808e-11 | 623 |
| 1.4768 | 0.1435 | 1.4982 | 0.1338 | 9.999807e-11 | 624 |
| 1.4726 | 0.1600 | 1.4982 | 0.1338 | 9.9998065e-11 | 625 |
| 1.4660 | 0.1459 | 1.4982 | 0.1338 | 9.999806e-11 | 626 |
| 1.4760 | 0.1247 | 1.4982 | 0.1338 | 9.999805e-11 | 627 |
| 1.4731 | 0.1482 | 1.4981 | 0.1338 | 9.9998045e-11 | 628 |
| 1.4701 | 0.1412 | 1.4981 | 0.1338 | 9.999804e-11 | 629 |
| 1.4733 | 0.1412 | 1.4981 | 0.1338 | 9.999803e-11 | 630 |
| 1.4682 | 0.1365 | 1.4981 | 0.1338 | 9.9998024e-11 | 631 |
| 1.4741 | 0.1365 | 1.4981 | 0.1338 | 9.999802e-11 | 632 |
| 1.4801 | 0.1318 | 1.4981 | 0.1338 | 9.999801e-11 | 633 |
| 1.4657 | 0.1553 | 1.4981 | 0.1338 | 9.9998e-11 | 634 |
| 1.4670 | 0.1482 | 1.4981 | 0.1338 | 9.9997996e-11 | 635 |
| 1.4755 | 0.1435 | 1.4981 | 0.1338 | 9.999799e-11 | 636 |
| 1.4753 | 0.1412 | 1.4981 | 0.1338 | 9.999798e-11 | 637 |
| 1.4775 | 0.1271 | 1.4981 | 0.1338 | 9.9997975e-11 | 638 |
| 1.4678 | 0.1600 | 1.4981 | 0.1338 | 9.999797e-11 | 639 |
| 1.4653 | 0.1341 | 1.4981 | 0.1338 | 9.999796e-11 | 640 |
| 1.4708 | 0.1671 | 1.4981 | 0.1338 | 9.9997954e-11 | 641 |
| 1.4729 | 0.1200 | 1.4981 | 0.1338 | 9.999795e-11 | 642 |
| 1.4726 | 0.1318 | 1.4981 | 0.1338 | 9.999794e-11 | 643 |
| 1.4733 | 0.1553 | 1.4981 | 0.1338 | 9.9997934e-11 | 644 |
| 1.4681 | 0.1459 | 1.4981 | 0.1338 | 9.999793e-11 | 645 |
| 1.4804 | 0.1365 | 1.4981 | 0.1338 | 9.999792e-11 | 646 |
| 1.4756 | 0.1506 | 1.4980 | 0.1338 | 9.999791e-11 | 647 |
| 1.4690 | 0.1365 | 1.4980 | 0.1338 | 9.9997906e-11 | 648 |
| 1.4788 | 0.1318 | 1.4980 | 0.1338 | 9.99979e-11 | 649 |
| 1.4690 | 0.1294 | 1.4980 | 0.1338 | 9.999789e-11 | 650 |
| 1.4714 | 0.1365 | 1.4980 | 0.1338 | 9.9997885e-11 | 651 |
| 1.4715 | 0.1647 | 1.4980 | 0.1338 | 9.999788e-11 | 652 |
| 1.4819 | 0.1388 | 1.4980 | 0.1338 | 9.999787e-11 | 653 |
| 1.4689 | 0.1365 | 1.4980 | 0.1338 | 9.9997864e-11 | 654 |
| 1.4725 | 0.1341 | 1.4980 | 0.1338 | 9.999786e-11 | 655 |
| 1.4817 | 0.1482 | 1.4980 | 0.1338 | 9.999785e-11 | 656 |
| 1.4753 | 0.1529 | 1.4980 | 0.1338 | 9.999784e-11 | 657 |
| 1.4751 | 0.1435 | 1.4980 | 0.1338 | 9.9997836e-11 | 658 |
| 1.4698 | 0.1459 | 1.4980 | 0.1338 | 9.999783e-11 | 659 |
| 1.4745 | 0.1435 | 1.4980 | 0.1338 | 9.999782e-11 | 660 |
| 1.4743 | 0.1318 | 1.4980 | 0.1338 | 9.9997816e-11 | 661 |
| 1.4747 | 0.1435 | 1.4980 | 0.1338 | 9.999781e-11 | 662 |
| 1.4770 | 0.1318 | 1.4980 | 0.1338 | 9.99978e-11 | 663 |
| 1.4719 | 0.1388 | 1.4980 | 0.1338 | 9.9997795e-11 | 664 |
| 1.4758 | 0.1247 | 1.4980 | 0.1338 | 9.999779e-11 | 665 |
| 1.4790 | 0.1341 | 1.4979 | 0.1338 | 9.999778e-11 | 666 |
| 1.4749 | 0.1553 | 1.4979 | 0.1338 | 9.9997774e-11 | 667 |
| 1.4841 | 0.1271 | 1.4979 | 0.1338 | 9.999777e-11 | 668 |
| 1.4719 | 0.1459 | 1.4979 | 0.1338 | 9.999776e-11 | 669 |
| 1.4717 | 0.1529 | 1.4979 | 0.1338 | 9.999775e-11 | 670 |
| 1.4717 | 0.1318 | 1.4979 | 0.1338 | 9.9997746e-11 | 671 |
| 1.4686 | 0.1341 | 1.4979 | 0.1338 | 9.999774e-11 | 672 |
| 1.4741 | 0.1412 | 1.4979 | 0.1338 | 9.999773e-11 | 673 |
| 1.4667 | 0.1553 | 1.4979 | 0.1338 | 9.9997725e-11 | 674 |
| 1.4719 | 0.1529 | 1.4979 | 0.1338 | 9.999772e-11 | 675 |
| 1.4716 | 0.1600 | 1.4979 | 0.1338 | 9.999771e-11 | 676 |
| 1.4615 | 0.1718 | 1.4979 | 0.1338 | 9.9997705e-11 | 677 |
| 1.4726 | 0.1482 | 1.4979 | 0.1338 | 9.99977e-11 | 678 |
| 1.4748 | 0.1388 | 1.4979 | 0.1338 | 9.999769e-11 | 679 |
| 1.4703 | 0.1529 | 1.4979 | 0.1338 | 9.9997684e-11 | 680 |
| 1.4763 | 0.1224 | 1.4979 | 0.1338 | 9.999768e-11 | 681 |
| 1.4674 | 0.1576 | 1.4979 | 0.1338 | 9.999767e-11 | 682 |
| 1.4685 | 0.1482 | 1.4979 | 0.1338 | 9.999766e-11 | 683 |
| 1.4791 | 0.1318 | 1.4979 | 0.1338 | 9.9997656e-11 | 684 |
| 1.4715 | 0.1412 | 1.4978 | 0.1338 | 9.999765e-11 | 685 |
| 1.4640 | 0.1506 | 1.4978 | 0.1338 | 9.999764e-11 | 686 |
| 1.4791 | 0.1459 | 1.4978 | 0.1338 | 9.9997635e-11 | 687 |
| 1.4751 | 0.1506 | 1.4978 | 0.1338 | 9.999763e-11 | 688 |
| 1.4760 | 0.1459 | 1.4978 | 0.1338 | 9.999762e-11 | 689 |
| 1.4727 | 0.1482 | 1.4978 | 0.1338 | 9.9997614e-11 | 690 |
| 1.4657 | 0.1576 | 1.4978 | 0.1338 | 9.999761e-11 | 691 |
| 1.4701 | 0.1294 | 1.4978 | 0.1338 | 9.99976e-11 | 692 |
| 1.4739 | 0.1459 | 1.4978 | 0.1338 | 9.9997594e-11 | 693 |
| 1.4714 | 0.1341 | 1.4978 | 0.1338 | 9.999759e-11 | 694 |
| 1.4685 | 0.1435 | 1.4978 | 0.1338 | 9.999758e-11 | 695 |
| 1.4755 | 0.1365 | 1.4978 | 0.1338 | 9.999757e-11 | 696 |
| 1.4738 | 0.1412 | 1.4978 | 0.1338 | 9.9997566e-11 | 697 |
| 1.4744 | 0.1318 | 1.4978 | 0.1338 | 9.999756e-11 | 698 |
| 1.4724 | 0.1388 | 1.4978 | 0.1338 | 9.999755e-11 | 699 |
| 1.4713 | 0.1506 | 1.4978 | 0.1338 | 9.9997545e-11 | 700 |
| 1.4778 | 0.1412 | 1.4978 | 0.1338 | 9.999754e-11 | 701 |
| 1.4713 | 0.1435 | 1.4978 | 0.1338 | 9.999753e-11 | 702 |
| 1.4761 | 0.1482 | 1.4978 | 0.1338 | 9.9997524e-11 | 703 |
| 1.4723 | 0.1506 | 1.4977 | 0.1338 | 9.999752e-11 | 704 |
| 1.4657 | 0.1459 | 1.4977 | 0.1338 | 9.999751e-11 | 705 |
| 1.4665 | 0.1553 | 1.4977 | 0.1338 | 9.99975e-11 | 706 |
| 1.4657 | 0.1576 | 1.4977 | 0.1338 | 9.9997496e-11 | 707 |
| 1.4746 | 0.1506 | 1.4977 | 0.1338 | 9.999749e-11 | 708 |
| 1.4702 | 0.1459 | 1.4977 | 0.1338 | 9.999748e-11 | 709 |
| 1.4784 | 0.1412 | 1.4977 | 0.1338 | 9.9997476e-11 | 710 |
| 1.4700 | 0.1459 | 1.4977 | 0.1338 | 9.999747e-11 | 711 |
| 1.4760 | 0.1412 | 1.4977 | 0.1338 | 9.999746e-11 | 712 |
| 1.4777 | 0.1365 | 1.4977 | 0.1338 | 9.9997455e-11 | 713 |
| 1.4673 | 0.1459 | 1.4977 | 0.1338 | 9.999745e-11 | 714 |
| 1.4676 | 0.1506 | 1.4977 | 0.1338 | 9.999744e-11 | 715 |
| 1.4773 | 0.1365 | 1.4977 | 0.1338 | 9.9997434e-11 | 716 |
| 1.4737 | 0.1459 | 1.4977 | 0.1338 | 9.999743e-11 | 717 |
| 1.4729 | 0.1529 | 1.4977 | 0.1338 | 9.999742e-11 | 718 |
| 1.4777 | 0.1576 | 1.4977 | 0.1338 | 9.999741e-11 | 719 |
| 1.4730 | 0.1459 | 1.4977 | 0.1338 | 9.9997406e-11 | 720 |
| 1.4661 | 0.1529 | 1.4977 | 0.1338 | 9.99974e-11 | 721 |
| 1.4761 | 0.1294 | 1.4977 | 0.1338 | 9.999739e-11 | 722 |
| 1.4747 | 0.1506 | 1.4976 | 0.1338 | 9.9997385e-11 | 723 |
| 1.4720 | 0.1459 | 1.4976 | 0.1338 | 9.999738e-11 | 724 |
| 1.4616 | 0.1506 | 1.4976 | 0.1338 | 9.999737e-11 | 725 |
| 1.4706 | 0.1624 | 1.4976 | 0.1338 | 9.9997365e-11 | 726 |
| 1.4649 | 0.1529 | 1.4976 | 0.1338 | 9.999736e-11 | 727 |
| 1.4750 | 0.1435 | 1.4976 | 0.1338 | 9.999735e-11 | 728 |
| 1.4692 | 0.1271 | 1.4976 | 0.1338 | 9.9997344e-11 | 729 |
| 1.4699 | 0.1529 | 1.4976 | 0.1338 | 9.999734e-11 | 730 |
| 1.4699 | 0.1576 | 1.4976 | 0.1338 | 9.999733e-11 | 731 |
| 1.4666 | 0.1529 | 1.4976 | 0.1338 | 9.999732e-11 | 732 |
| 1.4693 | 0.1388 | 1.4976 | 0.1338 | 9.9997316e-11 | 733 |
| 1.4740 | 0.1388 | 1.4976 | 0.1338 | 9.999731e-11 | 734 |
| 1.4656 | 0.1459 | 1.4976 | 0.1338 | 9.99973e-11 | 735 |
| 1.4661 | 0.1435 | 1.4976 | 0.1338 | 9.9997295e-11 | 736 |
| 1.4737 | 0.1435 | 1.4976 | 0.1338 | 9.999729e-11 | 737 |
| 1.4735 | 0.1412 | 1.4976 | 0.1338 | 9.999728e-11 | 738 |
| 1.4743 | 0.1247 | 1.4976 | 0.1338 | 9.9997274e-11 | 739 |
| 1.4690 | 0.1294 | 1.4976 | 0.1338 | 9.999727e-11 | 740 |
| 1.4662 | 0.1459 | 1.4976 | 0.1338 | 9.999726e-11 | 741 |
| 1.4682 | 0.1694 | 1.4976 | 0.1338 | 9.9997254e-11 | 742 |
| 1.4660 | 0.1600 | 1.4975 | 0.1338 | 9.999725e-11 | 743 |
| 1.4690 | 0.1624 | 1.4975 | 0.1338 | 9.999724e-11 | 744 |
| 1.4635 | 0.1624 | 1.4975 | 0.1338 | 9.999723e-11 | 745 |
| 1.4766 | 0.1388 | 1.4975 | 0.1338 | 9.9997226e-11 | 746 |
| 1.4736 | 0.1271 | 1.4975 | 0.1338 | 9.999722e-11 | 747 |
| 1.4796 | 0.1176 | 1.4975 | 0.1338 | 9.999721e-11 | 748 |
| 1.4689 | 0.1506 | 1.4975 | 0.1338 | 9.9997205e-11 | 749 |
| 1.4771 | 0.1271 | 1.4975 | 0.1338 | 9.99972e-11 | 750 |
| 1.4728 | 0.1388 | 1.4975 | 0.1338 | 9.999719e-11 | 751 |
| 1.4729 | 0.1365 | 1.4975 | 0.1338 | 9.9997184e-11 | 752 |
| 1.4749 | 0.1341 | 1.4975 | 0.1338 | 9.999718e-11 | 753 |
| 1.4726 | 0.1271 | 1.4975 | 0.1338 | 9.999717e-11 | 754 |
| 1.4748 | 0.1482 | 1.4975 | 0.1338 | 9.999716e-11 | 755 |
| 1.4708 | 0.1624 | 1.4975 | 0.1338 | 9.9997156e-11 | 756 |
| 1.4683 | 0.1576 | 1.4975 | 0.1338 | 9.999715e-11 | 757 |
| 1.4761 | 0.1412 | 1.4975 | 0.1338 | 9.999714e-11 | 758 |
| 1.4750 | 0.1318 | 1.4975 | 0.1338 | 9.9997136e-11 | 759 |
| 1.4734 | 0.1247 | 1.4975 | 0.1338 | 9.999713e-11 | 760 |
| 1.4670 | 0.1553 | 1.4975 | 0.1338 | 9.999712e-11 | 761 |
| 1.4735 | 0.1482 | 1.4974 | 0.1338 | 9.9997115e-11 | 762 |
| 1.4608 | 0.1553 | 1.4974 | 0.1338 | 9.999711e-11 | 763 |
| 1.4739 | 0.1600 | 1.4974 | 0.1338 | 9.99971e-11 | 764 |
| 1.4723 | 0.1388 | 1.4974 | 0.1338 | 9.9997094e-11 | 765 |
| 1.4740 | 0.1482 | 1.4974 | 0.1338 | 9.999709e-11 | 766 |
| 1.4706 | 0.1435 | 1.4974 | 0.1338 | 9.999708e-11 | 767 |
| 1.4749 | 0.1271 | 1.4974 | 0.1338 | 9.999707e-11 | 768 |
| 1.4735 | 0.1294 | 1.4974 | 0.1338 | 9.9997066e-11 | 769 |
| 1.4764 | 0.1247 | 1.4974 | 0.1338 | 9.999706e-11 | 770 |
| 1.4722 | 0.1412 | 1.4974 | 0.1338 | 9.999705e-11 | 771 |
| 1.4776 | 0.1388 | 1.4974 | 0.1338 | 9.9997045e-11 | 772 |
| 1.4704 | 0.1271 | 1.4974 | 0.1338 | 9.999704e-11 | 773 |
| 1.4726 | 0.1482 | 1.4974 | 0.1338 | 9.999703e-11 | 774 |
| 1.4706 | 0.1459 | 1.4974 | 0.1338 | 9.9997025e-11 | 775 |
| 1.4663 | 0.1459 | 1.4974 | 0.1338 | 9.999702e-11 | 776 |
| 1.4720 | 0.1365 | 1.4974 | 0.1338 | 9.999701e-11 | 777 |
| 1.4655 | 0.1435 | 1.4974 | 0.1338 | 9.9997004e-11 | 778 |
| 1.4741 | 0.1576 | 1.4974 | 0.1338 | 9.9997e-11 | 779 |
| 1.4744 | 0.1318 | 1.4974 | 0.1338 | 9.999699e-11 | 780 |
| 1.4765 | 0.1388 | 1.4973 | 0.1338 | 9.999698e-11 | 781 |
| 1.4773 | 0.1412 | 1.4973 | 0.1338 | 9.9996976e-11 | 782 |
| 1.4629 | 0.1506 | 1.4973 | 0.1338 | 9.999697e-11 | 783 |
| 1.4703 | 0.1529 | 1.4973 | 0.1338 | 9.999696e-11 | 784 |
| 1.4703 | 0.1435 | 1.4973 | 0.1338 | 9.9996955e-11 | 785 |
| 1.4707 | 0.1365 | 1.4973 | 0.1338 | 9.999695e-11 | 786 |
| 1.4775 | 0.1247 | 1.4973 | 0.1338 | 9.999694e-11 | 787 |
| 1.4685 | 0.1600 | 1.4973 | 0.1338 | 9.9996934e-11 | 788 |
| 1.4733 | 0.1459 | 1.4973 | 0.1338 | 9.999693e-11 | 789 |
| 1.4815 | 0.1318 | 1.4973 | 0.1338 | 9.999692e-11 | 790 |
| 1.4751 | 0.1224 | 1.4973 | 0.1338 | 9.9996914e-11 | 791 |
| 1.4659 | 0.1247 | 1.4973 | 0.1338 | 9.999691e-11 | 792 |
| 1.4786 | 0.1412 | 1.4973 | 0.1338 | 9.99969e-11 | 793 |
| 1.4624 | 0.1553 | 1.4973 | 0.1338 | 9.999689e-11 | 794 |
| 1.4695 | 0.1624 | 1.4973 | 0.1338 | 9.9996886e-11 | 795 |
| 1.4753 | 0.1506 | 1.4973 | 0.1338 | 9.999688e-11 | 796 |
| 1.4775 | 0.1247 | 1.4973 | 0.1338 | 9.999687e-11 | 797 |
| 1.4776 | 0.1318 | 1.4973 | 0.1338 | 9.9996865e-11 | 798 |
| 1.4691 | 0.1459 | 1.4973 | 0.1338 | 9.999686e-11 | 799 |
| 1.4734 | 0.1341 | 1.4972 | 0.1338 | 9.999685e-11 | 800 |
| 1.4737 | 0.1388 | 1.4972 | 0.1338 | 9.9996844e-11 | 801 |
| 1.4672 | 0.1459 | 1.4972 | 0.1338 | 9.999684e-11 | 802 |
| 1.4789 | 0.1388 | 1.4972 | 0.1338 | 9.999683e-11 | 803 |
| 1.4670 | 0.1365 | 1.4972 | 0.1338 | 9.999682e-11 | 804 |
| 1.4760 | 0.1294 | 1.4972 | 0.1338 | 9.9996816e-11 | 805 |
| 1.4772 | 0.1365 | 1.4972 | 0.1338 | 9.999681e-11 | 806 |
| 1.4679 | 0.1412 | 1.4972 | 0.1338 | 9.99968e-11 | 807 |
| 1.4724 | 0.1482 | 1.4972 | 0.1338 | 9.9996796e-11 | 808 |
| 1.4758 | 0.1435 | 1.4972 | 0.1338 | 9.999679e-11 | 809 |
| 1.4800 | 0.1412 | 1.4972 | 0.1338 | 9.999678e-11 | 810 |
| 1.4656 | 0.1624 | 1.4972 | 0.1338 | 9.9996775e-11 | 811 |
| 1.4683 | 0.1576 | 1.4972 | 0.1338 | 9.999677e-11 | 812 |
| 1.4766 | 0.1365 | 1.4972 | 0.1338 | 9.999676e-11 | 813 |
| 1.4799 | 0.1271 | 1.4972 | 0.1338 | 9.9996754e-11 | 814 |
| 1.4712 | 0.1459 | 1.4972 | 0.1338 | 9.999675e-11 | 815 |
| 1.4757 | 0.1294 | 1.4972 | 0.1338 | 9.999674e-11 | 816 |
| 1.4739 | 0.1294 | 1.4972 | 0.1338 | 9.999673e-11 | 817 |
| 1.4717 | 0.1459 | 1.4972 | 0.1338 | 9.9996726e-11 | 818 |
| 1.4698 | 0.1506 | 1.4971 | 0.1338 | 9.999672e-11 | 819 |
| 1.4713 | 0.1553 | 1.4971 | 0.1338 | 9.999671e-11 | 820 |
| 1.4729 | 0.1529 | 1.4971 | 0.1338 | 9.9996705e-11 | 821 |
| 1.4724 | 0.1482 | 1.4971 | 0.1338 | 9.99967e-11 | 822 |
| 1.4732 | 0.1318 | 1.4971 | 0.1338 | 9.999669e-11 | 823 |
| 1.4754 | 0.1365 | 1.4971 | 0.1338 | 9.9996685e-11 | 824 |
| 1.4807 | 0.1388 | 1.4971 | 0.1338 | 9.999668e-11 | 825 |
| 1.4737 | 0.1435 | 1.4971 | 0.1338 | 9.999667e-11 | 826 |
| 1.4671 | 0.1506 | 1.4971 | 0.1338 | 9.9996664e-11 | 827 |
| 1.4745 | 0.1435 | 1.4971 | 0.1338 | 9.999666e-11 | 828 |
| 1.4667 | 0.1459 | 1.4971 | 0.1338 | 9.999665e-11 | 829 |
| 1.4679 | 0.1435 | 1.4971 | 0.1338 | 9.999664e-11 | 830 |
| 1.4668 | 0.1553 | 1.4971 | 0.1338 | 9.9996636e-11 | 831 |
| 1.4755 | 0.1341 | 1.4971 | 0.1338 | 9.999663e-11 | 832 |
| 1.4724 | 0.1224 | 1.4971 | 0.1338 | 9.999662e-11 | 833 |
| 1.4662 | 0.1529 | 1.4971 | 0.1338 | 9.9996615e-11 | 834 |
| 1.4751 | 0.1647 | 1.4971 | 0.1338 | 9.999661e-11 | 835 |
| 1.4721 | 0.1506 | 1.4971 | 0.1338 | 9.99966e-11 | 836 |
| 1.4751 | 0.1412 | 1.4971 | 0.1338 | 9.9996594e-11 | 837 |
| 1.4733 | 0.1412 | 1.4970 | 0.1338 | 9.999659e-11 | 838 |
| 1.4761 | 0.1388 | 1.4970 | 0.1338 | 9.999658e-11 | 839 |
| 1.4704 | 0.1435 | 1.4970 | 0.1338 | 9.9996574e-11 | 840 |
| 1.4783 | 0.1341 | 1.4970 | 0.1338 | 9.999657e-11 | 841 |
| 1.4719 | 0.1459 | 1.4970 | 0.1338 | 9.999656e-11 | 842 |
| 1.4625 | 0.1482 | 1.4970 | 0.1338 | 9.999655e-11 | 843 |
| 1.4659 | 0.1318 | 1.4970 | 0.1338 | 9.9996546e-11 | 844 |
| 1.4670 | 0.1624 | 1.4970 | 0.1338 | 9.999654e-11 | 845 |
| 1.4725 | 0.1506 | 1.4970 | 0.1338 | 9.999653e-11 | 846 |
| 1.4698 | 0.1271 | 1.4970 | 0.1338 | 9.9996525e-11 | 847 |
| 1.4734 | 0.1529 | 1.4970 | 0.1338 | 9.999652e-11 | 848 |
| 1.4781 | 0.1388 | 1.4970 | 0.1338 | 9.999651e-11 | 849 |
| 1.4682 | 0.1600 | 1.4970 | 0.1338 | 9.9996504e-11 | 850 |
| 1.4739 | 0.1153 | 1.4970 | 0.1338 | 9.99965e-11 | 851 |
| 1.4642 | 0.1600 | 1.4970 | 0.1338 | 9.999649e-11 | 852 |
| 1.4703 | 0.1553 | 1.4970 | 0.1338 | 9.999648e-11 | 853 |
| 1.4602 | 0.1576 | 1.4970 | 0.1338 | 9.9996476e-11 | 854 |
| 1.4613 | 0.1435 | 1.4970 | 0.1338 | 9.999647e-11 | 855 |
| 1.4713 | 0.1482 | 1.4970 | 0.1338 | 9.999646e-11 | 856 |
| 1.4653 | 0.1365 | 1.4970 | 0.1338 | 9.9996456e-11 | 857 |
| 1.4708 | 0.1459 | 1.4969 | 0.1338 | 9.999645e-11 | 858 |
| 1.4649 | 0.1506 | 1.4969 | 0.1338 | 9.999644e-11 | 859 |
| 1.4663 | 0.1482 | 1.4969 | 0.1338 | 9.9996435e-11 | 860 |
| 1.4643 | 0.1412 | 1.4969 | 0.1338 | 9.999643e-11 | 861 |
| 1.4701 | 0.1529 | 1.4969 | 0.1338 | 9.999642e-11 | 862 |
| 1.4738 | 0.1318 | 1.4969 | 0.1338 | 9.9996414e-11 | 863 |
| 1.4668 | 0.1459 | 1.4969 | 0.1338 | 9.999641e-11 | 864 |
| 1.4665 | 0.1647 | 1.4969 | 0.1338 | 9.99964e-11 | 865 |
| 1.4733 | 0.1271 | 1.4969 | 0.1338 | 9.999639e-11 | 866 |
| 1.4776 | 0.1482 | 1.4969 | 0.1338 | 9.9996386e-11 | 867 |
| 1.4639 | 0.1435 | 1.4969 | 0.1338 | 9.999638e-11 | 868 |
| 1.4681 | 0.1435 | 1.4969 | 0.1338 | 9.999637e-11 | 869 |
| 1.4752 | 0.1341 | 1.4969 | 0.1338 | 9.9996365e-11 | 870 |
| 1.4635 | 0.1412 | 1.4969 | 0.1338 | 9.999636e-11 | 871 |
| 1.4703 | 0.1412 | 1.4969 | 0.1338 | 9.999635e-11 | 872 |
| 1.4803 | 0.1294 | 1.4969 | 0.1338 | 9.9996345e-11 | 873 |
| 1.4737 | 0.1294 | 1.4969 | 0.1338 | 9.999634e-11 | 874 |
| 1.4744 | 0.1553 | 1.4969 | 0.1338 | 9.999633e-11 | 875 |
| 1.4771 | 0.1412 | 1.4969 | 0.1338 | 9.9996324e-11 | 876 |
| 1.4663 | 0.1482 | 1.4968 | 0.1338 | 9.999632e-11 | 877 |
| 1.4740 | 0.1224 | 1.4968 | 0.1338 | 9.999631e-11 | 878 |
| 1.4758 | 0.1576 | 1.4968 | 0.1338 | 9.99963e-11 | 879 |
| 1.4815 | 0.1412 | 1.4968 | 0.1338 | 9.9996296e-11 | 880 |
| 1.4721 | 0.1529 | 1.4968 | 0.1338 | 9.999629e-11 | 881 |
| 1.4738 | 0.1388 | 1.4968 | 0.1338 | 9.999628e-11 | 882 |
| 1.4626 | 0.1529 | 1.4968 | 0.1338 | 9.9996275e-11 | 883 |
| 1.4703 | 0.1365 | 1.4968 | 0.1338 | 9.999627e-11 | 884 |
| 1.4682 | 0.1624 | 1.4968 | 0.1338 | 9.999626e-11 | 885 |
| 1.4777 | 0.1412 | 1.4968 | 0.1338 | 9.9996254e-11 | 886 |
| 1.4710 | 0.1506 | 1.4968 | 0.1338 | 9.999625e-11 | 887 |
| 1.4740 | 0.1247 | 1.4968 | 0.1338 | 9.999624e-11 | 888 |
| 1.4736 | 0.1459 | 1.4968 | 0.1338 | 9.9996234e-11 | 889 |
| 1.4775 | 0.1341 | 1.4968 | 0.1338 | 9.999623e-11 | 890 |
| 1.4711 | 0.1576 | 1.4968 | 0.1338 | 9.999622e-11 | 891 |
| 1.4716 | 0.1388 | 1.4968 | 0.1338 | 9.999621e-11 | 892 |
| 1.4756 | 0.1482 | 1.4968 | 0.1338 | 9.9996206e-11 | 893 |
| 1.4725 | 0.1600 | 1.4968 | 0.1338 | 9.99962e-11 | 894 |
| 1.4757 | 0.1459 | 1.4968 | 0.1338 | 9.999619e-11 | 895 |
| 1.4709 | 0.1341 | 1.4967 | 0.1338 | 9.9996185e-11 | 896 |
| 1.4695 | 0.1388 | 1.4967 | 0.1338 | 9.999618e-11 | 897 |
| 1.4732 | 0.1435 | 1.4967 | 0.1338 | 9.999617e-11 | 898 |
| 1.4733 | 0.1459 | 1.4967 | 0.1338 | 9.9996164e-11 | 899 |
| 1.4682 | 0.1576 | 1.4967 | 0.1338 | 9.999616e-11 | 900 |
| 1.4674 | 0.1435 | 1.4967 | 0.1338 | 9.999615e-11 | 901 |
| 1.4713 | 0.1482 | 1.4967 | 0.1338 | 9.999614e-11 | 902 |
| 1.4737 | 0.1388 | 1.4967 | 0.1338 | 9.9996136e-11 | 903 |
| 1.4719 | 0.1482 | 1.4967 | 0.1338 | 9.999613e-11 | 904 |
| 1.4724 | 0.1365 | 1.4967 | 0.1338 | 9.999612e-11 | 905 |
| 1.4707 | 0.1529 | 1.4967 | 0.1338 | 9.9996116e-11 | 906 |
| 1.4754 | 0.1341 | 1.4967 | 0.1338 | 9.999611e-11 | 907 |
| 1.4783 | 0.1318 | 1.4967 | 0.1338 | 9.99961e-11 | 908 |
| 1.4714 | 0.1529 | 1.4967 | 0.1338 | 9.9996095e-11 | 909 |
| 1.4632 | 0.1600 | 1.4967 | 0.1338 | 9.999609e-11 | 910 |
| 1.4706 | 0.1506 | 1.4967 | 0.1338 | 9.999608e-11 | 911 |
| 1.4776 | 0.1553 | 1.4967 | 0.1338 | 9.9996074e-11 | 912 |
| 1.4702 | 0.1318 | 1.4967 | 0.1338 | 9.999607e-11 | 913 |
| 1.4824 | 0.1412 | 1.4967 | 0.1338 | 9.999606e-11 | 914 |
| 1.4768 | 0.1365 | 1.4966 | 0.1338 | 9.999605e-11 | 915 |
| 1.4711 | 0.1435 | 1.4966 | 0.1338 | 9.9996046e-11 | 916 |
| 1.4660 | 0.1435 | 1.4966 | 0.1338 | 9.999604e-11 | 917 |
| 1.4620 | 0.1506 | 1.4966 | 0.1338 | 9.999603e-11 | 918 |
| 1.4723 | 0.1506 | 1.4966 | 0.1338 | 9.9996025e-11 | 919 |
| 1.4741 | 0.1318 | 1.4966 | 0.1338 | 9.999602e-11 | 920 |
| 1.4686 | 0.1506 | 1.4966 | 0.1338 | 9.999601e-11 | 921 |
| 1.4691 | 0.1412 | 1.4966 | 0.1338 | 9.9996005e-11 | 922 |
| 1.4691 | 0.1412 | 1.4966 | 0.1338 | 9.9996e-11 | 923 |
| 1.4710 | 0.1435 | 1.4966 | 0.1338 | 9.999599e-11 | 924 |
| 1.4785 | 0.1435 | 1.4966 | 0.1338 | 9.9995984e-11 | 925 |
| 1.4680 | 0.1412 | 1.4966 | 0.1338 | 9.999598e-11 | 926 |
| 1.4718 | 0.1388 | 1.4966 | 0.1338 | 9.999597e-11 | 927 |
| 1.4692 | 0.1529 | 1.4966 | 0.1338 | 9.999596e-11 | 928 |
| 1.4683 | 0.1553 | 1.4966 | 0.1338 | 9.9995956e-11 | 929 |
| 1.4708 | 0.1435 | 1.4966 | 0.1338 | 9.999595e-11 | 930 |
| 1.4794 | 0.1388 | 1.4966 | 0.1338 | 9.999594e-11 | 931 |
| 1.4638 | 0.1553 | 1.4966 | 0.1338 | 9.9995935e-11 | 932 |
| 1.4755 | 0.1318 | 1.4966 | 0.1338 | 9.999593e-11 | 933 |
| 1.4647 | 0.1529 | 1.4965 | 0.1338 | 9.999592e-11 | 934 |
| 1.4746 | 0.1412 | 1.4965 | 0.1338 | 9.9995914e-11 | 935 |
| 1.4702 | 0.1459 | 1.4965 | 0.1338 | 9.999591e-11 | 936 |
| 1.4683 | 0.1506 | 1.4965 | 0.1338 | 9.99959e-11 | 937 |
| 1.4708 | 0.1435 | 1.4965 | 0.1338 | 9.9995894e-11 | 938 |
| 1.4755 | 0.1435 | 1.4965 | 0.1338 | 9.999589e-11 | 939 |
| 1.4684 | 0.1435 | 1.4965 | 0.1338 | 9.999588e-11 | 940 |
| 1.4710 | 0.1388 | 1.4965 | 0.1338 | 9.999587e-11 | 941 |
| 1.4666 | 0.1694 | 1.4965 | 0.1338 | 9.9995866e-11 | 942 |
| 1.4737 | 0.1365 | 1.4965 | 0.1338 | 9.999586e-11 | 943 |
| 1.4687 | 0.1459 | 1.4965 | 0.1338 | 9.999585e-11 | 944 |
| 1.4667 | 0.1506 | 1.4965 | 0.1338 | 9.9995845e-11 | 945 |
| 1.4716 | 0.1412 | 1.4965 | 0.1338 | 9.999584e-11 | 946 |
| 1.4663 | 0.1529 | 1.4965 | 0.1338 | 9.999583e-11 | 947 |
| 1.4757 | 0.1459 | 1.4965 | 0.1338 | 9.9995824e-11 | 948 |
| 1.4783 | 0.1318 | 1.4965 | 0.1338 | 9.999582e-11 | 949 |
| 1.4712 | 0.1412 | 1.4965 | 0.1338 | 9.999581e-11 | 950 |
| 1.4732 | 0.1271 | 1.4965 | 0.1338 | 9.99958e-11 | 951 |
| 1.4765 | 0.1388 | 1.4965 | 0.1338 | 9.9995796e-11 | 952 |
| 1.4674 | 0.1600 | 1.4965 | 0.1338 | 9.999579e-11 | 953 |
| 1.4692 | 0.1341 | 1.4964 | 0.1338 | 9.999578e-11 | 954 |
| 1.4707 | 0.1506 | 1.4964 | 0.1338 | 9.9995776e-11 | 955 |
| 1.4730 | 0.1624 | 1.4964 | 0.1338 | 9.999577e-11 | 956 |
| 1.4691 | 0.1576 | 1.4964 | 0.1338 | 9.999576e-11 | 957 |
| 1.4721 | 0.1553 | 1.4964 | 0.1338 | 9.9995755e-11 | 958 |
| 1.4705 | 0.1341 | 1.4964 | 0.1338 | 9.999575e-11 | 959 |
| 1.4677 | 0.1435 | 1.4964 | 0.1338 | 9.999574e-11 | 960 |
| 1.4727 | 0.1553 | 1.4964 | 0.1338 | 9.9995734e-11 | 961 |
| 1.4690 | 0.1271 | 1.4964 | 0.1338 | 9.999573e-11 | 962 |
| 1.4768 | 0.1365 | 1.4964 | 0.1338 | 9.999572e-11 | 963 |
| 1.4692 | 0.1506 | 1.4964 | 0.1338 | 9.999571e-11 | 964 |
| 1.4736 | 0.1624 | 1.4964 | 0.1338 | 9.9995706e-11 | 965 |
| 1.4673 | 0.1529 | 1.4964 | 0.1338 | 9.99957e-11 | 966 |
| 1.4750 | 0.1341 | 1.4964 | 0.1338 | 9.999569e-11 | 967 |
| 1.4658 | 0.1412 | 1.4964 | 0.1338 | 9.9995685e-11 | 968 |
| 1.4730 | 0.1459 | 1.4964 | 0.1338 | 9.999568e-11 | 969 |
| 1.4659 | 0.1435 | 1.4964 | 0.1338 | 9.999567e-11 | 970 |
| 1.4707 | 0.1553 | 1.4964 | 0.1338 | 9.9995665e-11 | 971 |
| 1.4670 | 0.1388 | 1.4964 | 0.1338 | 9.999566e-11 | 972 |
| 1.4720 | 0.1294 | 1.4963 | 0.1338 | 9.999565e-11 | 973 |
| 1.4672 | 0.1624 | 1.4963 | 0.1338 | 9.9995644e-11 | 974 |
| 1.4670 | 0.1647 | 1.4963 | 0.1338 | 9.999564e-11 | 975 |
| 1.4688 | 0.1600 | 1.4963 | 0.1338 | 9.999563e-11 | 976 |
| 1.4673 | 0.1341 | 1.4963 | 0.1338 | 9.999562e-11 | 977 |
| 1.4682 | 0.1365 | 1.4963 | 0.1338 | 9.9995616e-11 | 978 |
| 1.4664 | 0.1600 | 1.4963 | 0.1338 | 9.999561e-11 | 979 |
| 1.4728 | 0.1388 | 1.4963 | 0.1338 | 9.99956e-11 | 980 |
| 1.4704 | 0.1341 | 1.4963 | 0.1338 | 9.9995595e-11 | 981 |
| 1.4721 | 0.1506 | 1.4963 | 0.1338 | 9.999559e-11 | 982 |
| 1.4660 | 0.1388 | 1.4963 | 0.1338 | 9.999558e-11 | 983 |
| 1.4675 | 0.1365 | 1.4963 | 0.1338 | 9.9995574e-11 | 984 |
| 1.4641 | 0.1553 | 1.4963 | 0.1338 | 9.999557e-11 | 985 |
| 1.4780 | 0.1435 | 1.4963 | 0.1338 | 9.999556e-11 | 986 |
| 1.4676 | 0.1365 | 1.4963 | 0.1338 | 9.9995554e-11 | 987 |
| 1.4715 | 0.1435 | 1.4963 | 0.1338 | 9.999555e-11 | 988 |
| 1.4707 | 0.1435 | 1.4963 | 0.1338 | 9.999554e-11 | 989 |
| 1.4668 | 0.1506 | 1.4963 | 0.1338 | 9.999553e-11 | 990 |
| 1.4766 | 0.1388 | 1.4963 | 0.1338 | 9.9995526e-11 | 991 |
| 1.4772 | 0.1224 | 1.4962 | 0.1338 | 9.999552e-11 | 992 |
| 1.4703 | 0.1412 | 1.4962 | 0.1338 | 9.999551e-11 | 993 |
| 1.4681 | 0.1576 | 1.4962 | 0.1338 | 9.9995505e-11 | 994 |
| 1.4767 | 0.1365 | 1.4962 | 0.1338 | 9.99955e-11 | 995 |
| 1.4702 | 0.1318 | 1.4962 | 0.1338 | 9.999549e-11 | 996 |
| 1.4753 | 0.1294 | 1.4962 | 0.1338 | 9.9995484e-11 | 997 |
| 1.4696 | 0.1553 | 1.4962 | 0.1338 | 9.999548e-11 | 998 |
| 1.4794 | 0.1435 | 1.4962 | 0.1338 | 9.999547e-11 | 999 |
### Framework versions
- Transformers 4.29.0.dev0
- TensorFlow 2.9.1
- Datasets 2.8.0
- Tokenizers 0.13.2
| 97,395 | [
[
-0.042205810546875,
-0.03363037109375,
0.0225982666015625,
0.01641845703125,
0.0035400390625,
0.00820159912109375,
0.00433349609375,
-0.007488250732421875,
0.060302734375,
0.018707275390625,
-0.04766845703125,
-0.047149658203125,
-0.04510498046875,
-0.000587... |
elaunlu/bert-base-uncased-finetuned-cola | 2023-05-04T18:27:57.000Z | [
"transformers",
"pytorch",
"tensorboard",
"bert",
"text-classification",
"generated_from_trainer",
"dataset:glue",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | text-classification | elaunlu | null | null | elaunlu/bert-base-uncased-finetuned-cola | 0 | 2 | transformers | 2023-05-02T15:02:01 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- matthews_correlation
model-index:
- name: bert-base-uncased-finetuned-cola
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: glue
type: glue
config: cola
split: validation
args: cola
metrics:
- name: Matthews Correlation
type: matthews_correlation
value: 0.518818601771926
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-uncased-finetuned-cola
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the glue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4610
- Matthews Correlation: 0.5188
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
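The Matthews correlation reported for this model can be recomputed directly from labels and predictions; a minimal pure-Python sketch (the label/prediction vectors below are illustrative toy data, not taken from this run):

```python
import math

def matthews_correlation(labels, preds):
    """Binary Matthews correlation coefficient from two 0/1 sequences."""
    tp = sum(1 for y, p in zip(labels, preds) if y == 1 and p == 1)
    tn = sum(1 for y, p in zip(labels, preds) if y == 0 and p == 0)
    fp = sum(1 for y, p in zip(labels, preds) if y == 0 and p == 1)
    fn = sum(1 for y, p in zip(labels, preds) if y == 1 and p == 0)
    denom = math.sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    return 0.0 if denom == 0 else (tp * tn - fp * fn) / denom

# Toy example: 2 TP, 2 TN, 1 FP, 1 FN -> MCC = 1/3
print(matthews_correlation([1, 1, 1, 0, 0, 0], [1, 1, 0, 0, 0, 1]))
```

In the actual evaluation, `labels` and `preds` would come from the CoLA validation split and the model's argmax over its logits.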
### Training results
| Training Loss | Epoch | Step | Validation Loss | Matthews Correlation |
|:-------------:|:-----:|:----:|:---------------:|:--------------------:|
| 0.4985 | 1.0 | 535 | 0.4610 | 0.5188 |
### Framework versions
- Transformers 4.28.1
- Pytorch 2.0.0+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
| 1,721 | [
[
-0.02557373046875,
-0.053009033203125,
0.010528564453125,
0.020751953125,
-0.0279388427734375,
-0.0210418701171875,
-0.0196380615234375,
-0.014892578125,
0.0260772705078125,
0.0162353515625,
-0.0494384765625,
-0.0311431884765625,
-0.05120849609375,
-0.019989... |
xinyixiuxiu/albert-xxlarge-v2-SST2-incremental_pre_training | 2023-05-03T08:32:31.000Z | [
"transformers",
"tf",
"albert",
"text-classification",
"generated_from_keras_callback",
"endpoints_compatible",
"region:us"
] | text-classification | xinyixiuxiu | null | null | xinyixiuxiu/albert-xxlarge-v2-SST2-incremental_pre_training | 0 | 2 | transformers | 2023-05-02T15:14:35 | ---
tags:
- generated_from_keras_callback
model-index:
- name: xinyixiuxiu/albert-xxlarge-v2-SST2-incremental_pre_training
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# xinyixiuxiu/albert-xxlarge-v2-SST2-incremental_pre_training
This model was trained from scratch on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.1040
- Train Accuracy: 0.9654
- Validation Loss: 0.1506
- Validation Accuracy: 0.9507
- Epoch: 1
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'learning_rate': 3e-06, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False}
- training_precision: float32
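The optimizer settings above correspond to the standard bias-corrected Adam update; a minimal pure-Python sketch of a single step with these exact values (the gradient value and function name are illustrative, not part of the training code):

```python
import math

def adam_step(grad, m, v, t, lr=3e-06, beta_1=0.9, beta_2=0.999, epsilon=1e-07):
    """One bias-corrected Adam update; returns (parameter delta, new m, new v)."""
    m = beta_1 * m + (1 - beta_1) * grad
    v = beta_2 * v + (1 - beta_2) * grad * grad
    m_hat = m / (1 - beta_1 ** t)   # bias correction for the first moment
    v_hat = v / (1 - beta_2 ** t)   # bias correction for the second moment
    return lr * m_hat / (math.sqrt(v_hat) + epsilon), m, v

delta, m, v = adam_step(grad=1.0, m=0.0, v=0.0, t=1)
print(delta)  # the first step has magnitude ~lr regardless of gradient scale
```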
### Training results
| Train Loss | Train Accuracy | Validation Loss | Validation Accuracy | Epoch |
|:----------:|:--------------:|:---------------:|:-------------------:|:-----:|
| 0.1927 | 0.9259 | 0.1336 | 0.9587 | 0 |
| 0.1040 | 0.9654 | 0.1506 | 0.9507 | 1 |
### Framework versions
- Transformers 4.28.1
- TensorFlow 2.7.0
- Datasets 2.10.1
- Tokenizers 0.12.1
| 1,503 | [
[
-0.0258941650390625,
-0.030853271484375,
0.025787353515625,
0.0101165771484375,
-0.037872314453125,
-0.0242767333984375,
-0.00304412841796875,
-0.02337646484375,
0.007396697998046875,
0.01544189453125,
-0.0555419921875,
-0.04010009765625,
-0.058380126953125,
... |
Arro94/nova-model-benchmark | 2023-05-02T16:50:00.000Z | [
"transformers",
"pytorch",
"bert",
"text-classification",
"sv",
"license:gpl-3.0",
"endpoints_compatible",
"region:us"
] | text-classification | Arro94 | null | null | Arro94/nova-model-benchmark | 0 | 2 | transformers | 2023-05-02T16:38:35 | ---
license: gpl-3.0
language:
- sv
pipeline_tag: text-classification
---
Scores (avg. weighted)
- Accuracy: 0.9007633587786259
- Precision: 0.9008606422369183
- Recall: 0.9007633587786259
- F1: 0.9007595035560719
Hyperparams
- Max Seq Len: 45
- Batch Size: 16
- Learning Rate: 2e-5
- Epochs: 5
- Warmup Steps: 147
- Weight Decay: 0.01
- Save/Eval Strat: epoch | 364 | [
[
-0.0250244140625,
-0.0516357421875,
0.02801513671875,
0.04278564453125,
-0.0151519775390625,
-0.0281982421875,
-0.018310546875,
-0.001300811767578125,
0.033905029296875,
0.002422332763671875,
-0.0010900497436523438,
-0.0667724609375,
-0.05316162109375,
0.005... |
Sleoruiz/bertin-roberta-fine-tuned-text-classification-SL-data-augmentation-test | 2023-05-03T06:16:11.000Z | [
"transformers",
"pytorch",
"roberta",
"text-classification",
"generated_from_trainer",
"license:cc-by-4.0",
"endpoints_compatible",
"region:us"
] | text-classification | Sleoruiz | null | null | Sleoruiz/bertin-roberta-fine-tuned-text-classification-SL-data-augmentation-test | 0 | 2 | transformers | 2023-05-02T17:23:56 | ---
license: cc-by-4.0
tags:
- generated_from_trainer
metrics:
- f1
- recall
- accuracy
- precision
model-index:
- name: bertin-roberta-fine-tuned-text-classification-SL-data-augmentation-test
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bertin-roberta-fine-tuned-text-classification-SL-data-augmentation-test
This model is a fine-tuned version of [bertin-project/bertin-roberta-base-spanish](https://huggingface.co/bertin-project/bertin-roberta-base-spanish) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 2.7374
- F1: 0.1580
- Recall: 0.3233
- Accuracy: 0.3233
- Precision: 0.1045
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 5
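With a linear scheduler, a warmup ratio of 0.1, and the 32,650 total optimization steps shown in the training results, the learning rate ramps to its 1e-4 peak over the first 3,265 steps and then decays linearly to zero. A minimal sketch of that schedule (the function name is illustrative; step counts are taken from this card):

```python
def linear_warmup_lr(step, peak_lr=1e-4, total_steps=32650, warmup_ratio=0.1):
    """Linear warmup to peak_lr, then linear decay to zero."""
    warmup_steps = int(total_steps * warmup_ratio)  # 3265 here
    if step < warmup_steps:
        return peak_lr * step / warmup_steps
    return peak_lr * (total_steps - step) / (total_steps - warmup_steps)

for s in (0, 3265, 32650):
    print(s, linear_warmup_lr(s))
```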
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 | Recall | Accuracy | Precision |
|:-------------:|:-----:|:-----:|:---------------:|:------:|:------:|:--------:|:---------:|
| 2.7132 | 1.0 | 6530 | 2.7463 | 0.1580 | 0.3233 | 0.3233 | 0.1045 |
| 2.7441 | 2.0 | 13060 | 2.7423 | 0.1580 | 0.3233 | 0.3233 | 0.1045 |
| 2.7328 | 3.0 | 19590 | 2.7365 | 0.1580 | 0.3233 | 0.3233 | 0.1045 |
| 2.7464 | 4.0 | 26120 | 2.7374 | 0.1580 | 0.3233 | 0.3233 | 0.1045 |
| 2.7178 | 5.0 | 32650 | 2.7374 | 0.1580 | 0.3233 | 0.3233 | 0.1045 |
### Framework versions
- Transformers 4.28.1
- Pytorch 2.0.0+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
| 2,058 | [
[
-0.03009033203125,
-0.048583984375,
0.01154327392578125,
0.006526947021484375,
-0.020660400390625,
-0.02606201171875,
-0.01934814453125,
-0.025146484375,
0.01163482666015625,
0.0251922607421875,
-0.04522705078125,
-0.051849365234375,
-0.054901123046875,
-0.0... |
arianasutanto/finetuned-distilbert | 2023-05-02T22:34:41.000Z | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:hupd",
"license:apache-2.0",
"endpoints_compatible",
"has_space",
"region:us"
] | text-classification | arianasutanto | null | null | arianasutanto/finetuned-distilbert | 0 | 2 | transformers | 2023-05-02T17:43:28 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- hupd
model-index:
- name: finetuned-distilbert
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuned-distilbert
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the hupd dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
### Framework versions
- Transformers 4.28.1
- Pytorch 2.0.0+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
| 1,063 | [
[
-0.036956787109375,
-0.05657958984375,
0.009979248046875,
0.0151214599609375,
-0.03399658203125,
-0.021636962890625,
-0.004489898681640625,
-0.0123138427734375,
0.005680084228515625,
0.0265960693359375,
-0.0408935546875,
-0.035003662109375,
-0.051239013671875,
... |
rohanmyer/latlongpredictor | 2023-05-03T01:30:32.000Z | [
"keras",
"region:us"
] | null | rohanmyer | null | null | rohanmyer/latlongpredictor | 0 | 2 | keras | 2023-05-02T20:37:16 | ---
library_name: keras
---
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
| Hyperparameters | Value |
| :-- | :-- |
| name | Adam |
| weight_decay | None |
| clipnorm | None |
| global_clipnorm | None |
| clipvalue | None |
| use_ema | False |
| ema_momentum | 0.99 |
| ema_overwrite_frequency | None |
| jit_compile | True |
| is_legacy_optimizer | False |
| learning_rate | 0.004999999888241291 |
| beta_1 | 0.9 |
| beta_2 | 0.999 |
| epsilon | 1e-07 |
| amsgrad | False |
| training_precision | float32 |
| 737 | [
[
-0.038421630859375,
-0.04119873046875,
0.0292205810546875,
0.005710601806640625,
-0.034027099609375,
-0.017425537109375,
0.0005965232849121094,
-0.001186370849609375,
0.023406982421875,
0.0218048095703125,
-0.044952392578125,
-0.04840087890625,
-0.0343017578125,... |
KaanHa/bert-base-uncased-finetuned-cola | 2023-05-07T19:47:47.000Z | [
"transformers",
"pytorch",
"tensorboard",
"bert",
"text-classification",
"generated_from_trainer",
"dataset:glue",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | text-classification | KaanHa | null | null | KaanHa/bert-base-uncased-finetuned-cola | 0 | 2 | transformers | 2023-05-02T20:51:08 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- matthews_correlation
model-index:
- name: bert-base-uncased-finetuned-cola
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: glue
type: glue
config: cola
split: validation
args: cola
metrics:
- name: Matthews Correlation
type: matthews_correlation
value: 0.5365007161029405
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-uncased-finetuned-cola
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the glue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4711
- Matthews Correlation: 0.5365
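The Matthews correlation reported above is computed from the binary confusion matrix and ranges from -1 to 1. A minimal pure-Python sketch with made-up counts (in practice the metric comes from `sklearn`/`evaluate`, not this code):

```python
import math

def matthews_corrcoef(tp, tn, fp, fn):
    """Matthews correlation from confusion-matrix counts."""
    num = tp * tn - fp * fn
    den = math.sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    return num / den if den else 0.0

# Hypothetical counts for illustration only (not this model's predictions).
score = matthews_corrcoef(tp=300, tn=150, fp=50, fn=43)
```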
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 9.678498850368218e-06
- train_batch_size: 32
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Matthews Correlation |
|:-------------:|:-----:|:----:|:---------------:|:--------------------:|
| No log | 1.0 | 268 | 0.4731 | 0.4664 |
| 0.4819 | 2.0 | 536 | 0.4537 | 0.5233 |
| 0.4819 | 3.0 | 804 | 0.4711 | 0.5365 |
### Framework versions
- Transformers 4.28.1
- Pytorch 2.0.0+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
| 1,885 | [
[
-0.0253448486328125,
-0.052520751953125,
0.00940704345703125,
0.018707275390625,
-0.0247802734375,
-0.0197296142578125,
-0.0177459716796875,
-0.016265869140625,
0.025421142578125,
0.0171966552734375,
-0.0513916015625,
-0.0302581787109375,
-0.051544189453125,
... |
BerserkerMother/all-MiniLM-L6-v2-intent-classifier | 2023-05-02T21:32:23.000Z | [
"sentence-transformers",
"pytorch",
"bert",
"setfit",
"text-classification",
"arxiv:2209.11055",
"license:apache-2.0",
"region:us"
] | text-classification | BerserkerMother | null | null | BerserkerMother/all-MiniLM-L6-v2-intent-classifier | 0 | 2 | sentence-transformers | 2023-05-02T21:27:49 | ---
license: apache-2.0
tags:
- setfit
- sentence-transformers
- text-classification
pipeline_tag: text-classification
---
# BerserkerMother/all-MiniLM-L6-v2-intent-classifier
This is a [SetFit model](https://github.com/huggingface/setfit) that can be used for text classification. The model has been trained using an efficient few-shot learning technique that involves:
1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning.
2. Training a classification head with features from the fine-tuned Sentence Transformer.
## Usage
To use this model for inference, first install the SetFit library:
```bash
python -m pip install setfit
```
You can then run inference as follows:
```python
from setfit import SetFitModel
# Download from Hub and run inference
model = SetFitModel.from_pretrained("BerserkerMother/all-MiniLM-L6-v2-intent-classifier")
# Run inference
preds = model(["i loved the spiderman movie!", "pineapple on pizza is the worst 🤮"])
```
## BibTeX entry and citation info
```bibtex
@article{https://doi.org/10.48550/arxiv.2209.11055,
doi = {10.48550/ARXIV.2209.11055},
url = {https://arxiv.org/abs/2209.11055},
author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren},
keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences},
title = {Efficient Few-Shot Learning Without Prompts},
publisher = {arXiv},
year = {2022},
copyright = {Creative Commons Attribution 4.0 International}
}
```
| 1,589 | [
[
0.007434844970703125,
-0.05230712890625,
0.0283660888671875,
-0.027130126953125,
-0.0114898681640625,
-0.029815673828125,
-0.01441192626953125,
-0.0057220458984375,
-0.006107330322265625,
0.027191162109375,
-0.0511474609375,
-0.0176544189453125,
-0.0392150878906... |
Xenova/detr-resnet-50-panoptic | 2023-05-30T22:36:21.000Z | [
"transformers.js",
"onnx",
"detr",
"image-segmentation",
"region:us"
] | image-segmentation | Xenova | null | null | Xenova/detr-resnet-50-panoptic | 1 | 2 | transformers.js | 2023-05-02T22:34:52 | ---
library_name: "transformers.js"
---
https://huggingface.co/facebook/detr-resnet-50-panoptic with ONNX weights to be compatible with Transformers.js.
Note: Having a separate repo for ONNX weights is intended to be a temporary solution until WebML gains more traction. If you would like to make your models web-ready, we recommend converting to ONNX using [🤗 Optimum](https://huggingface.co/docs/optimum/index) and structuring your repo like this one (with ONNX weights located in a subfolder named `onnx`). | 511 | [
[
-0.048797607421875,
0.0195465087890625,
0.01367950439453125,
0.0494384765625,
-0.00384521484375,
-0.0031795501708984375,
-0.0029354095458984375,
-0.01947021484375,
0.034820556640625,
0.051605224609375,
-0.06591796875,
-0.0274200439453125,
-0.04010009765625,
... |
lucasmadda/distilbert-base-uncased-finetuned-clinc | 2023-05-03T00:45:46.000Z | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:clinc_oos",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | text-classification | lucasmadda | null | null | lucasmadda/distilbert-base-uncased-finetuned-clinc | 0 | 2 | transformers | 2023-05-02T23:48:32 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- clinc_oos
metrics:
- accuracy
model-index:
- name: distilbert-base-uncased-finetuned-clinc
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: clinc_oos
type: clinc_oos
config: plus
split: validation
args: plus
metrics:
- name: Accuracy
type: accuracy
value: 0.9180645161290323
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-clinc
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the clinc_oos dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7844
- Accuracy: 0.9181
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 48
- eval_batch_size: 48
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 4.3042 | 1.0 | 318 | 3.3043 | 0.7403 |
| 2.6451 | 2.0 | 636 | 1.8920 | 0.8365 |
| 1.5585 | 3.0 | 954 | 1.1716 | 0.8881 |
| 1.0188 | 4.0 | 1272 | 0.8677 | 0.9142 |
| 0.8044 | 5.0 | 1590 | 0.7844 | 0.9181 |
### Framework versions
- Transformers 4.28.1
- Pytorch 2.1.0.dev20230502
- Datasets 2.12.0
- Tokenizers 0.13.3
| 1,938 | [
[
-0.034912109375,
-0.03900146484375,
0.00930023193359375,
0.00623321533203125,
-0.0294952392578125,
-0.0267181396484375,
-0.01453399658203125,
-0.00830078125,
0.002399444580078125,
0.02294921875,
-0.04754638671875,
-0.05029296875,
-0.057342529296875,
-0.01030... |
bright1/fine-tuned-twitter-Roberta-base-sentiment | 2023-05-03T18:39:08.000Z | [
"transformers",
"pytorch",
"roberta",
"text-classification",
"generated_from_trainer",
"endpoints_compatible",
"has_space",
"region:us"
] | text-classification | bright1 | null | null | bright1/fine-tuned-twitter-Roberta-base-sentiment | 0 | 2 | transformers | 2023-05-03T01:13:01 | ---
tags:
- generated_from_trainer
model-index:
- name: fine-tuned-twitter-Roberta-base-sentiment
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# fine-tuned-twitter-Roberta-base-sentiment
This model is a fine-tuned version of [cardiffnlp/twitter-roberta-base-sentiment-latest](https://huggingface.co/cardiffnlp/twitter-roberta-base-sentiment-latest) on an unknown dataset.
It achieves the following results on the evaluation set:
- eval_loss: 0.5453
- eval_accuracy: 0.7915
- eval_f1score: 0.7910
- eval_runtime: 68.7486
- eval_samples_per_second: 29.092
- eval_steps_per_second: 3.636
- step: 0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-07
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-09
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- lr_scheduler_warmup_steps: 1399
- num_epochs: 7
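With `gradient_accumulation_steps: 2`, gradients from two micro-batches of 8 are combined before each optimizer step, which is where the `total_train_batch_size: 16` above comes from. A small sketch of that arithmetic:

```python
def effective_batch_size(per_device_batch, accumulation_steps, num_devices=1):
    """Examples seen per optimizer step when gradients are accumulated."""
    return per_device_batch * accumulation_steps * num_devices

print(effective_batch_size(8, 2))  # 16, matching total_train_batch_size
```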
### Framework versions
- Transformers 4.28.1
- Pytorch 2.0.0+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
| 1,480 | [
[
-0.0355224609375,
-0.0531005859375,
0.010650634765625,
0.023834228515625,
-0.0328369140625,
-0.01448822021484375,
-0.0275726318359375,
-0.01708984375,
0.013214111328125,
0.0221099853515625,
-0.05902099609375,
-0.055206298828125,
-0.05517578125,
-0.0066375732... |
xinyixiuxiu/albert-xxlarge-v2-SST2-incremental_pre_training_epoch3 | 2023-05-03T01:57:04.000Z | [
"transformers",
"tf",
"albert",
"text-classification",
"generated_from_keras_callback",
"endpoints_compatible",
"region:us"
] | text-classification | xinyixiuxiu | null | null | xinyixiuxiu/albert-xxlarge-v2-SST2-incremental_pre_training_epoch3 | 0 | 2 | transformers | 2023-05-03T01:19:48 | ---
tags:
- generated_from_keras_callback
model-index:
- name: xinyixiuxiu/albert-xxlarge-v2-SST2-incremental_pre_training_epoch3
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# xinyixiuxiu/albert-xxlarge-v2-SST2-incremental_pre_training_epoch3
This model was trained from scratch on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.0549
- Train Accuracy: 0.9840
- Validation Loss: 0.1688
- Validation Accuracy: 0.9358
- Epoch: 0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'learning_rate': 3e-06, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False}
- training_precision: float32
### Training results
| Train Loss | Train Accuracy | Validation Loss | Validation Accuracy | Epoch |
|:----------:|:--------------:|:---------------:|:-------------------:|:-----:|
| 0.0549 | 0.9840 | 0.1688 | 0.9358 | 0 |
### Framework versions
- Transformers 4.28.1
- TensorFlow 2.7.0
- Datasets 2.10.1
- Tokenizers 0.12.1
| 1,437 | [
[
-0.0297393798828125,
-0.031707763671875,
0.027801513671875,
0.01155853271484375,
-0.035125732421875,
-0.0290069580078125,
-0.004077911376953125,
-0.0260009765625,
0.004825592041015625,
0.01556396484375,
-0.051055908203125,
-0.03955078125,
-0.05487060546875,
... |
Gridflow/bert-base-uncased-finetuned-emotion | 2023-05-24T17:46:44.000Z | [
"transformers",
"pytorch",
"tensorboard",
"bert",
"text-classification",
"generated_from_trainer",
"dataset:emotion",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | text-classification | Gridflow | null | null | Gridflow/bert-base-uncased-finetuned-emotion | 0 | 2 | transformers | 2023-05-03T01:31:13 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
- f1
model-index:
- name: bert-base-uncased-finetuned-emotion
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: emotion
type: emotion
config: split
split: validation
args: split
metrics:
- name: Accuracy
type: accuracy
value: 0.9405
- name: F1
type: f1
value: 0.9404154624819866
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3648
- Accuracy: 0.9405
- F1: 0.9404
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 15
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.0177 | 1.0 | 250 | 0.3372 | 0.933 | 0.9331 |
| 0.0149 | 2.0 | 500 | 0.3434 | 0.9385 | 0.9386 |
| 0.012 | 3.0 | 750 | 0.3878 | 0.9355 | 0.9353 |
| 0.0135 | 4.0 | 1000 | 0.3981 | 0.938 | 0.9371 |
| 0.0088 | 5.0 | 1250 | 0.3695 | 0.94 | 0.9400 |
| 0.0112 | 6.0 | 1500 | 0.4133 | 0.933 | 0.9334 |
| 0.0105 | 7.0 | 1750 | 0.3733 | 0.937 | 0.9370 |
| 0.0117 | 8.0 | 2000 | 0.3625 | 0.938 | 0.9381 |
| 0.0126 | 9.0 | 2250 | 0.3539 | 0.9405 | 0.9405 |
| 0.0095 | 10.0 | 2500 | 0.3963 | 0.9315 | 0.9318 |
| 0.0088 | 11.0 | 2750 | 0.3692 | 0.9355 | 0.9353 |
| 0.0072 | 12.0 | 3000 | 0.3646 | 0.9385 | 0.9385 |
| 0.0064 | 13.0 | 3250 | 0.3630 | 0.9375 | 0.9373 |
| 0.0052 | 14.0 | 3500 | 0.3659 | 0.9405 | 0.9403 |
| 0.005 | 15.0 | 3750 | 0.3648 | 0.9405 | 0.9404 |
### Framework versions
- Transformers 4.29.2
- Pytorch 2.0.1
- Datasets 2.12.0
- Tokenizers 0.13.3
| 2,742 | [
[
-0.0457763671875,
-0.0386962890625,
0.01165008544921875,
0.01241302490234375,
-0.0122222900390625,
-0.01215362548828125,
-0.0090179443359375,
-0.01142120361328125,
0.0308074951171875,
0.021148681640625,
-0.059600830078125,
-0.054443359375,
-0.051727294921875,
... |
lucasmadda/distilbert-base-uncased-distilled-clinc | 2023-05-03T02:51:51.000Z | [
"transformers",
"pytorch",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:clinc_oos",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | text-classification | lucasmadda | null | null | lucasmadda/distilbert-base-uncased-distilled-clinc | 0 | 2 | transformers | 2023-05-03T02:38:26 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- clinc_oos
metrics:
- accuracy
model-index:
- name: distilbert-base-uncased-distilled-clinc
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: clinc_oos
type: clinc_oos
config: plus
split: validation
args: plus
metrics:
- name: Accuracy
type: accuracy
value: 0.9493548387096774
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-distilled-clinc
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the clinc_oos dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3288
- Accuracy: 0.9494
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 48
- eval_batch_size: 48
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
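The betas and epsilon listed above parameterize the Adam update. A single-scalar sketch of one step (illustrative only — the actual PyTorch `Adam` operates on tensors and supports weight decay):

```python
def adam_step(param, grad, state, lr=2e-05, beta1=0.9, beta2=0.999, eps=1e-08):
    """One Adam update for a single scalar parameter."""
    t = state["t"] + 1
    m = beta1 * state["m"] + (1 - beta1) * grad          # first-moment estimate
    v = beta2 * state["v"] + (1 - beta2) * grad * grad   # second-moment estimate
    m_hat = m / (1 - beta1 ** t)                         # bias correction
    v_hat = v / (1 - beta2 ** t)
    param -= lr * m_hat / (v_hat ** 0.5 + eps)
    return param, {"t": t, "m": m, "v": v}

p, s = 1.0, {"t": 0, "m": 0.0, "v": 0.0}
p, s = adam_step(p, grad=0.5, state=s)
# After bias correction the first step moves the parameter by roughly lr.
```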
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 3.9476 | 1.0 | 318 | 2.9510 | 0.7468 |
| 2.2551 | 2.0 | 636 | 1.4760 | 0.8555 |
| 1.1113 | 3.0 | 954 | 0.7582 | 0.9126 |
| 0.5674 | 4.0 | 1272 | 0.4822 | 0.9326 |
| 0.3386 | 5.0 | 1590 | 0.3837 | 0.9435 |
| 0.2399 | 6.0 | 1908 | 0.3515 | 0.9432 |
| 0.1951 | 7.0 | 2226 | 0.3370 | 0.9465 |
| 0.1736 | 8.0 | 2544 | 0.3320 | 0.9468 |
| 0.1631 | 9.0 | 2862 | 0.3286 | 0.9471 |
| 0.1575 | 10.0 | 3180 | 0.3288 | 0.9494 |
### Framework versions
- Transformers 4.28.1
- Pytorch 2.1.0.dev20230502
- Datasets 2.12.0
- Tokenizers 0.13.3
| 2,249 | [
[
-0.03466796875,
-0.03643798828125,
0.01105499267578125,
0.006801605224609375,
-0.0247650146484375,
-0.01873779296875,
-0.00952911376953125,
-0.006351470947265625,
0.0088348388671875,
0.02215576171875,
-0.044525146484375,
-0.05096435546875,
-0.06158447265625,
... |
r10521708/bert-base-chinese-finetuned-qqp-FHTM-5x | 2023-05-08T04:47:44.000Z | [
"transformers",
"pytorch",
"bert",
"text-classification",
"generated_from_trainer",
"license:gpl-3.0",
"endpoints_compatible",
"region:us"
] | text-classification | r10521708 | null | null | r10521708/bert-base-chinese-finetuned-qqp-FHTM-5x | 0 | 2 | transformers | 2023-05-03T04:03:09 | ---
license: gpl-3.0
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: bert-base-chinese-finetuned-qqp-FHTM-5x
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-chinese-finetuned-qqp-FHTM-5x
This model is a fine-tuned version of [ckiplab/bert-base-chinese](https://huggingface.co/ckiplab/bert-base-chinese) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3386
- Accuracy: 0.8357
- F1: 0.8244
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:--------:|
| No log | 1.0 | 30 | 0.247902 | 0.892857 | 0.888889 |
| No log | 2.0 | 60 | 0.205925 | 0.907143 | 0.900763 |
| No log | 3.0 | 90 | 0.137872 | 0.950000 | 0.952381 |
| No log | 4.0 | 120 | 0.108262 | 0.957143 | 0.958904 |
| No log | 5.0 | 150 | 0.103690 | 0.957143 | 0.958904 |
### Framework versions
- Transformers 4.26.1
- Pytorch 1.13.1
- Datasets 2.9.0
- Tokenizers 0.13.0.dev0
| 1,763 | [
[
-0.033355712890625,
-0.037017822265625,
0.00362396240234375,
0.0212860107421875,
-0.02569580078125,
-0.031036376953125,
-0.01334381103515625,
-0.020172119140625,
0.00482177734375,
0.0224456787109375,
-0.04913330078125,
-0.046630859375,
-0.0389404296875,
-0.0... |
madhuselvaraj/distil_bert_second | 2023-05-03T14:05:52.000Z | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | text-classification | madhuselvaraj | null | null | madhuselvaraj/distil_bert_second | 0 | 2 | transformers | 2023-05-03T05:28:17 | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: distil_bert_second
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distil_bert_second
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
### Framework versions
- Transformers 4.28.1
- Pytorch 2.0.0+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
| 1,043 | [
[
-0.0330810546875,
-0.051177978515625,
0.01351165771484375,
0.0175323486328125,
-0.03533935546875,
-0.0207061767578125,
-0.010101318359375,
-0.0188751220703125,
0.009979248046875,
0.0160980224609375,
-0.051605224609375,
-0.037109375,
-0.0567626953125,
-0.0075... |
feradauto/scibert_nlp4sg | 2023-05-13T14:42:24.000Z | [
"transformers",
"pytorch",
"bert",
"text-classification",
"en",
"arxiv:2305.05471",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | text-classification | feradauto | null | null | feradauto/scibert_nlp4sg | 0 | 2 | transformers | 2023-05-03T06:00:41 | ---
license: apache-2.0
language:
- en
metrics:
- accuracy
pipeline_tag: text-classification
widget:
- text: "On Unifying Misinformation Detection. In this paper, we introduce UNIFIEDM2, a general-purpose misinformation model that jointly models multiple domains of misinformation with a single, unified setup. The model is trained to handle four tasks: detecting news bias, clickbait, fake news and verifying rumors. By grouping these tasks together, UNIFIEDM2 learns a richer representation of misinformation, which leads to state-of-the-art or comparable performance across all tasks. Furthermore, we demonstrate that UNIFIEDM2's learned representation is helpful for few-shot learning of unseen misinformation tasks/datasets and model's generalizability to unseen events."
example_title: "Misinformation Detection"
---
# SciBERT NLP4SG
SciBERT NLP4SG is a SciBERT model fine-tuned to detect NLP4SG papers based on their title and abstract.
We present the details in the paper [Beyond Good Intentions: Reporting the Research Landscape of NLP for Social Good](https://arxiv.org/abs/2305.05471).
The training corpus is a combination of the [NLP4SGPapers training set](https://huggingface.co/datasets/feradauto/NLP4SGPapers) which is manually annotated, and some papers identified by keywords.
For more details about the training data and the model, visit the original repo [here](https://github.com/feradauto/nlp4sg).
Please cite the following paper:
```
@misc{gonzalez2023good,
title={Beyond Good Intentions: Reporting the Research Landscape of NLP for Social Good},
author={Fernando Gonzalez and Zhijing Jin and Jad Beydoun and Bernhard Schölkopf and Tom Hope and Mrinmaya Sachan and Rada Mihalcea},
year={2023},
eprint={2305.05471},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
``` | 1,714 | [
[
-0.006282806396484375,
0.006908416748046875,
0.0521240234375,
0.0257110595703125,
-0.0131683349609375,
0.01343536376953125,
-0.004337310791015625,
-0.036285400390625,
0.0221099853515625,
0.0247650146484375,
-0.01560211181640625,
-0.0416259765625,
-0.057861328125... |
mattjmattj/HF_RL_unit3_dqn_SpaceInvadersNoFrameskip-v4 | 2023-05-03T09:03:52.000Z | [
"stable-baselines3",
"SpaceInvadersNoFrameskip-v4",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | mattjmattj | null | null | mattjmattj/HF_RL_unit3_dqn_SpaceInvadersNoFrameskip-v4 | 0 | 2 | stable-baselines3 | 2023-05-03T09:03:12 | ---
library_name: stable-baselines3
tags:
- SpaceInvadersNoFrameskip-v4
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: DQN
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: SpaceInvadersNoFrameskip-v4
type: SpaceInvadersNoFrameskip-v4
metrics:
- type: mean_reward
value: 644.00 +/- 209.03
name: mean_reward
verified: false
---
# **DQN** Agent playing **SpaceInvadersNoFrameskip-v4**
This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
Install the RL Zoo (with SB3 and SB3-Contrib):
```bash
pip install rl_zoo3
```
```bash
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga mattjmattj -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do:
```bash
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga mattjmattj -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
## Training (with the RL Zoo)
```bash
python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga mattjmattj
```
## Hyperparameters
```python
OrderedDict([('batch_size', 64),
('buffer_size', 100000),
('env_wrapper',
['stable_baselines3.common.atari_wrappers.AtariWrapper']),
('exploration_final_eps', 0.01),
('exploration_fraction', 0.1),
('frame_stack', 4),
('gradient_steps', 1),
('learning_rate', 0.0001),
('learning_starts', 100000),
('n_timesteps', 1000000.0),
('optimize_memory_usage', False),
('policy', 'CnnPolicy'),
('target_update_interval', 1000),
('train_freq', 4),
('normalize', False)])
```
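`exploration_fraction` and `exploration_final_eps` above define a linear ε-greedy schedule: ε is annealed from 1.0 down to 0.01 over the first 10% of the 1M timesteps, then held constant. A sketch of the implied schedule (an approximation of SB3's behaviour):

```python
def epsilon(step, total_timesteps=1_000_000, fraction=0.1,
            final_eps=0.01, initial_eps=1.0):
    """Linearly anneal the exploration rate over the first `fraction`
    of training, then hold it at `final_eps`."""
    end = fraction * total_timesteps
    if step >= end:
        return final_eps
    return initial_eps + (final_eps - initial_eps) * step / end

print(epsilon(0))        # 1.0 (fully random actions at the start)
print(epsilon(50_000))   # ~0.505, halfway through the annealing window
print(epsilon(200_000))  # 0.01 for the rest of training
```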
| 2,697 | [
[
-0.041290283203125,
-0.0362548828125,
0.0223236083984375,
0.0247955322265625,
-0.0103607177734375,
-0.0175933837890625,
0.012542724609375,
-0.01325225830078125,
0.01287841796875,
0.0247344970703125,
-0.07086181640625,
-0.035919189453125,
-0.0274200439453125,
... |
Svetlana0303/Regression_bert_NOaug_CustomLoss | 2023-05-03T09:07:13.000Z | [
"transformers",
"tf",
"distilbert",
"text-classification",
"generated_from_keras_callback",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | text-classification | Svetlana0303 | null | null | Svetlana0303/Regression_bert_NOaug_CustomLoss | 0 | 2 | transformers | 2023-05-03T09:07:02 | ---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: Regression_bert_NOaug_CustomLoss
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# Regression_bert_NOaug_CustomLoss
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.0264
- Train Mae: 0.1981
- Train Mse: 0.0536
- Train R2-score: 0.9557
- Validation Loss: 0.1484
- Validation Mae: 0.3703
- Validation Mse: 0.2656
- Validation R2-score: 0.8862
- Epoch: 14
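The MAE, MSE and R² values above are standard regression metrics. A pure-Python sketch of how they relate, using illustrative values rather than this model's predictions:

```python
def regression_metrics(y_true, y_pred):
    """MAE, MSE and R² from paired targets and predictions."""
    n = len(y_true)
    errors = [t - p for t, p in zip(y_true, y_pred)]
    mae = sum(abs(e) for e in errors) / n
    mse = sum(e * e for e in errors) / n
    mean_t = sum(y_true) / n
    ss_res = sum(e * e for e in errors)
    ss_tot = sum((t - mean_t) ** 2 for t in y_true)
    r2 = 1 - ss_res / ss_tot   # 1.0 is a perfect fit
    return mae, mse, r2

# Illustrative values only.
mae, mse, r2 = regression_metrics([1.0, 2.0, 3.0, 4.0], [1.1, 1.9, 3.2, 3.8])
```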
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': None, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': True, 'is_legacy_optimizer': False, 'learning_rate': 1e-04, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False}
- training_precision: float32
### Training results
| Train Loss | Train Mae | Train Mse | Train R2-score | Validation Loss | Validation Mae | Validation Mse | Validation R2-score | Epoch |
|:----------:|:---------:|:---------:|:--------------:|:---------------:|:--------------:|:--------------:|:-------------------:|:-----:|
| 0.1477 | 0.5158 | 0.3587 | 0.8489 | 0.1118 | 0.5348 | 0.3366 | 0.8997 | 0 |
| 0.1280 | 0.4634 | 0.2930 | 0.8414 | 0.1375 | 0.4847 | 0.3121 | 0.8873 | 1 |
| 0.1232 | 0.4331 | 0.2728 | -0.3855 | 0.1453 | 0.5454 | 0.4140 | 0.8773 | 2 |
| 0.0862 | 0.3752 | 0.2042 | 0.8843 | 0.1683 | 0.4117 | 0.2940 | 0.8728 | 3 |
| 0.0827 | 0.3573 | 0.1824 | 0.9046 | 0.1383 | 0.3792 | 0.2434 | 0.8940 | 4 |
| 0.0701 | 0.4034 | 0.2084 | 0.8164 | 0.1313 | 0.4766 | 0.3297 | 0.8879 | 5 |
| 0.0473 | 0.2988 | 0.1245 | 0.8744 | 0.1544 | 0.4001 | 0.2930 | 0.8780 | 6 |
| 0.0370 | 0.2501 | 0.0887 | 0.8672 | 0.1464 | 0.4236 | 0.3019 | 0.8809 | 7 |
| 0.0346 | 0.3122 | 0.1224 | 0.9196 | 0.1296 | 0.4837 | 0.3147 | 0.8885 | 8 |
| 0.0303 | 0.2493 | 0.0864 | 0.9624 | 0.1399 | 0.4292 | 0.2975 | 0.8876 | 9 |
| 0.0312 | 0.2527 | 0.0862 | 0.9426 | 0.1436 | 0.3984 | 0.2722 | 0.8876 | 10 |
| 0.0301 | 0.2160 | 0.0657 | 0.6312 | 0.1479 | 0.3819 | 0.2836 | 0.8849 | 11 |
| 0.0275 | 0.2286 | 0.0712 | 0.9543 | 0.1473 | 0.3770 | 0.2634 | 0.8851 | 12 |
| 0.0272 | 0.2209 | 0.0656 | 0.9691 | 0.1372 | 0.4141 | 0.2886 | 0.8899 | 13 |
| 0.0264 | 0.1981 | 0.0536 | 0.9557 | 0.1484 | 0.3703 | 0.2656 | 0.8862 | 14 |
### Framework versions
- Transformers 4.28.1
- TensorFlow 2.12.0
- Datasets 2.12.0
- Tokenizers 0.13.3
| 3,857 | [
[
-0.051513671875,
-0.049224853515625,
0.021148681640625,
0.0033111572265625,
-0.013427734375,
-0.01155853271484375,
-0.0012407302856445312,
-0.00945281982421875,
0.042083740234375,
0.0183868408203125,
-0.051513671875,
-0.050201416015625,
-0.05474853515625,
-0... |
guoluo/Bert_class_1e-09 | 2023-05-03T09:21:44.000Z | [
"transformers",
"tf",
"bert",
"text-classification",
"generated_from_keras_callback",
"endpoints_compatible",
"region:us"
] | text-classification | guoluo | null | null | guoluo/Bert_class_1e-09 | 0 | 2 | transformers | 2023-05-03T09:20:58 | ---
tags:
- generated_from_keras_callback
model-index:
- name: Bert_class_1e-09
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# Bert_class_1e-09
This model is a fine-tuned version of [guoluo/Bert_1.5e_07](https://huggingface.co/guoluo/Bert_1.5e_07) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 1.1645
- Train Accuracy: 0.6635
- Validation Loss: 1.1621
- Validation Accuracy: 0.6761
- Train Lr: 9.995005e-10
- Epoch: 999
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'learning_rate': 9.995005e-10, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False}
- training_precision: float32
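The optimizer settings above can be reconstructed programmatically. The sketch below is not the original training code: it simply rebuilds the same configuration dict (values copied verbatim from the hyperparameters listed above) and notes, in a comment, how such a dict would map onto `tf.keras.optimizers.Adam`. The helper name `build_optimizer_config` is ours.

```python
# Sketch: reconstruct the Adam configuration reported in this card.
# Values are copied verbatim from the "Training hyperparameters" section;
# the helper function name is hypothetical, not from the original run.

def build_optimizer_config(learning_rate=9.995005e-10):
    """Return the reported Adam configuration as a plain dict."""
    return {
        "name": "Adam",
        "learning_rate": learning_rate,
        "decay": 0.0,
        "beta_1": 0.9,
        "beta_2": 0.999,
        "epsilon": 1e-07,
        "amsgrad": False,
    }

config = build_optimizer_config()

# With TensorFlow installed, this would correspond to:
#   optimizer = tf.keras.optimizers.Adam(
#       learning_rate=config["learning_rate"],
#       beta_1=config["beta_1"], beta_2=config["beta_2"],
#       epsilon=config["epsilon"], amsgrad=config["amsgrad"])
print(config["learning_rate"])
```

Note that a learning rate on the order of 1e-9 is far below typical fine-tuning values (1e-5 to 5e-5 for BERT), which is consistent with the very slow loss decrease visible in the table below.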
### Training results
| Train Loss | Train Accuracy | Validation Loss | Validation Accuracy | Train Lr | Epoch |
|:----------:|:--------------:|:---------------:|:-------------------:|:------------:|:-----:|
| 1.4730 | 0.1647 | 1.5009 | 0.1338 | 1e-09 | 0 |
| 1.4744 | 0.1412 | 1.5003 | 0.1338 | 1e-09 | 1 |
| 1.4780 | 0.1388 | 1.4998 | 0.1338 | 1e-09 | 2 |
| 1.4773 | 0.1388 | 1.4993 | 0.1338 | 1e-09 | 3 |
| 1.4733 | 0.1482 | 1.4988 | 0.1338 | 1e-09 | 4 |
| 1.4676 | 0.1482 | 1.4983 | 0.1338 | 1e-09 | 5 |
| 1.4769 | 0.1388 | 1.4979 | 0.1338 | 1e-09 | 6 |
| 1.4704 | 0.1600 | 1.4974 | 0.1338 | 1e-09 | 7 |
| 1.4791 | 0.1435 | 1.4969 | 0.1338 | 1e-09 | 8 |
| 1.4696 | 0.1482 | 1.4963 | 0.1338 | 1e-09 | 9 |
| 1.4714 | 0.1506 | 1.4959 | 0.1338 | 1e-09 | 10 |
| 1.4701 | 0.1365 | 1.4954 | 0.1338 | 1e-09 | 11 |
| 1.4626 | 0.1482 | 1.4949 | 0.1338 | 1e-09 | 12 |
| 1.4725 | 0.1553 | 1.4945 | 0.1338 | 1e-09 | 13 |
| 1.4704 | 0.1435 | 1.4940 | 0.1338 | 1e-09 | 14 |
| 1.4720 | 0.1435 | 1.4935 | 0.1338 | 1e-09 | 15 |
| 1.4724 | 0.1388 | 1.4930 | 0.1338 | 1e-09 | 16 |
| 1.4749 | 0.1388 | 1.4926 | 0.1338 | 1e-09 | 17 |
| 1.4697 | 0.1388 | 1.4921 | 0.1338 | 1e-09 | 18 |
| 1.4736 | 0.1294 | 1.4916 | 0.1338 | 1e-09 | 19 |
| 1.4678 | 0.1412 | 1.4911 | 0.1338 | 1e-09 | 20 |
| 1.4649 | 0.1459 | 1.4906 | 0.1338 | 1e-09 | 21 |
| 1.4681 | 0.1576 | 1.4901 | 0.1338 | 1e-09 | 22 |
| 1.4672 | 0.1576 | 1.4895 | 0.1338 | 1e-09 | 23 |
| 1.4636 | 0.1412 | 1.4890 | 0.1338 | 1e-09 | 24 |
| 1.4660 | 0.1600 | 1.4885 | 0.1338 | 1e-09 | 25 |
| 1.4692 | 0.1576 | 1.4880 | 0.1338 | 1e-09 | 26 |
| 1.4693 | 0.1482 | 1.4876 | 0.1338 | 1e-09 | 27 |
| 1.4627 | 0.1506 | 1.4871 | 0.1338 | 1e-09 | 28 |
| 1.4676 | 0.1529 | 1.4867 | 0.1338 | 1e-09 | 29 |
| 1.4606 | 0.1529 | 1.4862 | 0.1338 | 1e-09 | 30 |
| 1.4697 | 0.1412 | 1.4857 | 0.1338 | 1e-09 | 31 |
| 1.4638 | 0.1435 | 1.4852 | 0.1338 | 1e-09 | 32 |
| 1.4613 | 0.1435 | 1.4847 | 0.1338 | 1e-09 | 33 |
| 1.4583 | 0.1435 | 1.4842 | 0.1338 | 1e-09 | 34 |
| 1.4584 | 0.1576 | 1.4837 | 0.1338 | 1e-09 | 35 |
| 1.4557 | 0.1553 | 1.4833 | 0.1338 | 1e-09 | 36 |
| 1.4531 | 0.1529 | 1.4828 | 0.1338 | 1e-09 | 37 |
| 1.4552 | 0.1506 | 1.4824 | 0.1338 | 1e-09 | 38 |
| 1.4584 | 0.1506 | 1.4820 | 0.1338 | 1e-09 | 39 |
| 1.4646 | 0.1694 | 1.4815 | 0.1268 | 1e-09 | 40 |
| 1.4597 | 0.1412 | 1.4810 | 0.1268 | 1e-09 | 41 |
| 1.4597 | 0.1365 | 1.4806 | 0.1268 | 1e-09 | 42 |
| 1.4515 | 0.1671 | 1.4801 | 0.1268 | 1e-09 | 43 |
| 1.4508 | 0.1341 | 1.4796 | 0.1338 | 1e-09 | 44 |
| 1.4511 | 0.1529 | 1.4792 | 0.1338 | 1e-09 | 45 |
| 1.4520 | 0.1459 | 1.4787 | 0.1338 | 1e-09 | 46 |
| 1.4547 | 0.1788 | 1.4782 | 0.1338 | 1e-09 | 47 |
| 1.4570 | 0.1624 | 1.4777 | 0.1338 | 1e-09 | 48 |
| 1.4486 | 0.1506 | 1.4773 | 0.1338 | 1e-09 | 49 |
| 1.4544 | 0.1671 | 1.4768 | 0.1338 | 1e-09 | 50 |
| 1.4519 | 0.1576 | 1.4764 | 0.1338 | 1e-09 | 51 |
| 1.4503 | 0.1553 | 1.4760 | 0.1338 | 1e-09 | 52 |
| 1.4527 | 0.1412 | 1.4755 | 0.1338 | 1e-09 | 53 |
| 1.4522 | 0.1482 | 1.4750 | 0.1338 | 1e-09 | 54 |
| 1.4562 | 0.1412 | 1.4745 | 0.1338 | 1e-09 | 55 |
| 1.4444 | 0.1412 | 1.4740 | 0.1338 | 9.999999e-10 | 56 |
| 1.4459 | 0.1341 | 1.4735 | 0.1338 | 9.999997e-10 | 57 |
| 1.4506 | 0.1435 | 1.4731 | 0.1338 | 9.999996e-10 | 58 |
| 1.4536 | 0.1412 | 1.4726 | 0.1338 | 9.999995e-10 | 59 |
| 1.4503 | 0.1506 | 1.4722 | 0.1338 | 9.999994e-10 | 60 |
| 1.4466 | 0.1553 | 1.4717 | 0.1338 | 9.999993e-10 | 61 |
| 1.4540 | 0.1506 | 1.4713 | 0.1338 | 9.999992e-10 | 62 |
| 1.4448 | 0.1553 | 1.4708 | 0.1338 | 9.999991e-10 | 63 |
| 1.4507 | 0.1294 | 1.4704 | 0.1338 | 9.99999e-10 | 64 |
| 1.4446 | 0.1412 | 1.4699 | 0.1338 | 9.999989e-10 | 65 |
| 1.4387 | 0.1482 | 1.4694 | 0.1338 | 9.999988e-10 | 66 |
| 1.4491 | 0.1318 | 1.4690 | 0.1338 | 9.999986e-10 | 67 |
| 1.4354 | 0.1741 | 1.4685 | 0.1338 | 9.999985e-10 | 68 |
| 1.4393 | 0.1741 | 1.4680 | 0.1338 | 9.999984e-10 | 69 |
| 1.4443 | 0.1506 | 1.4675 | 0.1338 | 9.999983e-10 | 70 |
| 1.4441 | 0.1624 | 1.4670 | 0.1338 | 9.999982e-10 | 71 |
| 1.4411 | 0.1553 | 1.4665 | 0.1338 | 9.999981e-10 | 72 |
| 1.4438 | 0.1365 | 1.4660 | 0.1338 | 9.99998e-10 | 73 |
| 1.4314 | 0.1647 | 1.4656 | 0.1338 | 9.999979e-10 | 74 |
| 1.4394 | 0.1600 | 1.4651 | 0.1338 | 9.999978e-10 | 75 |
| 1.4469 | 0.1765 | 1.4647 | 0.1338 | 9.999976e-10 | 76 |
| 1.4408 | 0.1600 | 1.4642 | 0.1338 | 9.999975e-10 | 77 |
| 1.4388 | 0.1624 | 1.4638 | 0.1338 | 9.999974e-10 | 78 |
| 1.4391 | 0.1529 | 1.4633 | 0.1338 | 9.999973e-10 | 79 |
| 1.4367 | 0.1600 | 1.4629 | 0.1338 | 9.999972e-10 | 80 |
| 1.4407 | 0.1576 | 1.4624 | 0.1338 | 9.999971e-10 | 81 |
| 1.4388 | 0.1529 | 1.4620 | 0.1338 | 9.99997e-10 | 82 |
| 1.4483 | 0.1694 | 1.4615 | 0.1338 | 9.999969e-10 | 83 |
| 1.4385 | 0.1765 | 1.4610 | 0.1338 | 9.999968e-10 | 84 |
| 1.4331 | 0.1929 | 1.4606 | 0.1338 | 9.999966e-10 | 85 |
| 1.4328 | 0.1694 | 1.4602 | 0.1338 | 9.999965e-10 | 86 |
| 1.4365 | 0.1694 | 1.4597 | 0.1338 | 9.999964e-10 | 87 |
| 1.4374 | 0.1694 | 1.4592 | 0.1338 | 9.999963e-10 | 88 |
| 1.4330 | 0.1765 | 1.4588 | 0.1338 | 9.999962e-10 | 89 |
| 1.4370 | 0.1529 | 1.4584 | 0.1268 | 9.999961e-10 | 90 |
| 1.4311 | 0.1765 | 1.4579 | 0.1268 | 9.99996e-10 | 91 |
| 1.4330 | 0.1788 | 1.4574 | 0.1268 | 9.999959e-10 | 92 |
| 1.4363 | 0.1435 | 1.4570 | 0.1268 | 9.999958e-10 | 93 |
| 1.4248 | 0.1694 | 1.4566 | 0.1268 | 9.999956e-10 | 94 |
| 1.4353 | 0.1812 | 1.4561 | 0.1268 | 9.999955e-10 | 95 |
| 1.4279 | 0.1600 | 1.4556 | 0.1268 | 9.999954e-10 | 96 |
| 1.4337 | 0.1718 | 1.4552 | 0.1268 | 9.999953e-10 | 97 |
| 1.4282 | 0.1694 | 1.4548 | 0.1268 | 9.999952e-10 | 98 |
| 1.4342 | 0.1718 | 1.4543 | 0.1268 | 9.999951e-10 | 99 |
| 1.4213 | 0.1694 | 1.4539 | 0.1268 | 9.99995e-10 | 100 |
| 1.4358 | 0.1647 | 1.4535 | 0.1268 | 9.999949e-10 | 101 |
| 1.4306 | 0.1859 | 1.4530 | 0.1338 | 9.999948e-10 | 102 |
| 1.4330 | 0.1718 | 1.4525 | 0.1338 | 9.999946e-10 | 103 |
| 1.4319 | 0.1694 | 1.4521 | 0.1338 | 9.999945e-10 | 104 |
| 1.4280 | 0.1576 | 1.4516 | 0.1338 | 9.999944e-10 | 105 |
| 1.4240 | 0.1671 | 1.4512 | 0.1338 | 9.999943e-10 | 106 |
| 1.4359 | 0.1647 | 1.4507 | 0.1338 | 9.999942e-10 | 107 |
| 1.4296 | 0.1318 | 1.4502 | 0.1338 | 9.999941e-10 | 108 |
| 1.4308 | 0.1835 | 1.4498 | 0.1338 | 9.99994e-10 | 109 |
| 1.4242 | 0.1835 | 1.4493 | 0.1338 | 9.999939e-10 | 110 |
| 1.4257 | 0.1741 | 1.4489 | 0.1338 | 9.999938e-10 | 111 |
| 1.4235 | 0.1694 | 1.4485 | 0.1338 | 9.999936e-10 | 112 |
| 1.4269 | 0.1576 | 1.4481 | 0.1338 | 9.999935e-10 | 113 |
| 1.4188 | 0.1624 | 1.4476 | 0.1338 | 9.999934e-10 | 114 |
| 1.4221 | 0.1624 | 1.4471 | 0.1408 | 9.999933e-10 | 115 |
| 1.4269 | 0.1929 | 1.4467 | 0.1408 | 9.999932e-10 | 116 |
| 1.4274 | 0.1765 | 1.4463 | 0.1408 | 9.999931e-10 | 117 |
| 1.4262 | 0.1459 | 1.4458 | 0.1408 | 9.99993e-10 | 118 |
| 1.4208 | 0.1718 | 1.4453 | 0.1408 | 9.999929e-10 | 119 |
| 1.4237 | 0.1718 | 1.4448 | 0.1408 | 9.999928e-10 | 120 |
| 1.4242 | 0.1718 | 1.4444 | 0.1408 | 9.999926e-10 | 121 |
| 1.4321 | 0.1435 | 1.4439 | 0.1408 | 9.999925e-10 | 122 |
| 1.4208 | 0.1671 | 1.4435 | 0.1408 | 9.999924e-10 | 123 |
| 1.4127 | 0.1929 | 1.4430 | 0.1408 | 9.999923e-10 | 124 |
| 1.4281 | 0.1671 | 1.4425 | 0.1408 | 9.999922e-10 | 125 |
| 1.4135 | 0.1953 | 1.4421 | 0.1408 | 9.999921e-10 | 126 |
| 1.4214 | 0.1718 | 1.4417 | 0.1408 | 9.99992e-10 | 127 |
| 1.4190 | 0.1953 | 1.4412 | 0.1408 | 9.999919e-10 | 128 |
| 1.4187 | 0.1929 | 1.4408 | 0.1408 | 9.999918e-10 | 129 |
| 1.4159 | 0.1671 | 1.4404 | 0.1408 | 9.999916e-10 | 130 |
| 1.4168 | 0.1506 | 1.4399 | 0.1408 | 9.999915e-10 | 131 |
| 1.4185 | 0.1765 | 1.4395 | 0.1408 | 9.999914e-10 | 132 |
| 1.4145 | 0.1765 | 1.4390 | 0.1408 | 9.999913e-10 | 133 |
| 1.4168 | 0.1882 | 1.4385 | 0.1408 | 9.999912e-10 | 134 |
| 1.4245 | 0.1812 | 1.4381 | 0.1408 | 9.999911e-10 | 135 |
| 1.4101 | 0.1671 | 1.4377 | 0.1408 | 9.99991e-10 | 136 |
| 1.4140 | 0.1835 | 1.4372 | 0.1479 | 9.999909e-10 | 137 |
| 1.4131 | 0.2024 | 1.4368 | 0.1479 | 9.999908e-10 | 138 |
| 1.4200 | 0.1694 | 1.4363 | 0.1479 | 9.999906e-10 | 139 |
| 1.4104 | 0.1765 | 1.4359 | 0.1479 | 9.999905e-10 | 140 |
| 1.4260 | 0.1788 | 1.4354 | 0.1479 | 9.999904e-10 | 141 |
| 1.4185 | 0.1859 | 1.4350 | 0.1479 | 9.999903e-10 | 142 |
| 1.4098 | 0.1929 | 1.4346 | 0.1479 | 9.999902e-10 | 143 |
| 1.4109 | 0.1812 | 1.4342 | 0.1479 | 9.999901e-10 | 144 |
| 1.4054 | 0.2118 | 1.4337 | 0.1479 | 9.9999e-10 | 145 |
| 1.4072 | 0.2000 | 1.4333 | 0.1479 | 9.999899e-10 | 146 |
| 1.4111 | 0.1906 | 1.4329 | 0.1479 | 9.999898e-10 | 147 |
| 1.4174 | 0.1718 | 1.4324 | 0.1479 | 9.999896e-10 | 148 |
| 1.4068 | 0.1671 | 1.4320 | 0.1479 | 9.999895e-10 | 149 |
| 1.4069 | 0.1694 | 1.4316 | 0.1479 | 9.999894e-10 | 150 |
| 1.4043 | 0.2047 | 1.4311 | 0.1479 | 9.999893e-10 | 151 |
| 1.4046 | 0.1929 | 1.4307 | 0.1479 | 9.999892e-10 | 152 |
| 1.4066 | 0.1953 | 1.4302 | 0.1479 | 9.999891e-10 | 153 |
| 1.4031 | 0.2000 | 1.4298 | 0.1479 | 9.99989e-10 | 154 |
| 1.4112 | 0.1788 | 1.4294 | 0.1479 | 9.999889e-10 | 155 |
| 1.4012 | 0.2118 | 1.4290 | 0.1479 | 9.999888e-10 | 156 |
| 1.4140 | 0.1812 | 1.4285 | 0.1479 | 9.999886e-10 | 157 |
| 1.4062 | 0.1741 | 1.4281 | 0.1479 | 9.999885e-10 | 158 |
| 1.4049 | 0.1929 | 1.4276 | 0.1479 | 9.999884e-10 | 159 |
| 1.4082 | 0.2047 | 1.4272 | 0.1479 | 9.999883e-10 | 160 |
| 1.4085 | 0.1882 | 1.4268 | 0.1479 | 9.999882e-10 | 161 |
| 1.4095 | 0.1835 | 1.4264 | 0.1479 | 9.999881e-10 | 162 |
| 1.4040 | 0.2047 | 1.4259 | 0.1479 | 9.99988e-10 | 163 |
| 1.4080 | 0.2071 | 1.4255 | 0.1479 | 9.999879e-10 | 164 |
| 1.3990 | 0.2047 | 1.4251 | 0.1479 | 9.999878e-10 | 165 |
| 1.4095 | 0.2094 | 1.4247 | 0.1479 | 9.999876e-10 | 166 |
| 1.4054 | 0.1906 | 1.4242 | 0.1479 | 9.999874e-10 | 167 |
| 1.4014 | 0.2188 | 1.4238 | 0.1479 | 9.999872e-10 | 168 |
| 1.3944 | 0.2259 | 1.4234 | 0.1479 | 9.99987e-10 | 169 |
| 1.3990 | 0.2047 | 1.4230 | 0.1479 | 9.999868e-10 | 170 |
| 1.4027 | 0.2094 | 1.4226 | 0.1479 | 9.999865e-10 | 171 |
| 1.4030 | 0.2024 | 1.4222 | 0.1479 | 9.999863e-10 | 172 |
| 1.4038 | 0.1929 | 1.4218 | 0.1479 | 9.999861e-10 | 173 |
| 1.4008 | 0.1859 | 1.4213 | 0.1479 | 9.999859e-10 | 174 |
| 1.4051 | 0.2141 | 1.4209 | 0.1479 | 9.999856e-10 | 175 |
| 1.3957 | 0.2024 | 1.4204 | 0.1479 | 9.999854e-10 | 176 |
| 1.4036 | 0.1788 | 1.4200 | 0.1479 | 9.999852e-10 | 177 |
| 1.3998 | 0.1953 | 1.4196 | 0.1479 | 9.99985e-10 | 178 |
| 1.3987 | 0.2047 | 1.4192 | 0.1479 | 9.999848e-10 | 179 |
| 1.4036 | 0.2000 | 1.4187 | 0.1479 | 9.999845e-10 | 180 |
| 1.4005 | 0.2047 | 1.4183 | 0.1479 | 9.999843e-10 | 181 |
| 1.4007 | 0.2118 | 1.4179 | 0.1479 | 9.999841e-10 | 182 |
| 1.3974 | 0.1882 | 1.4174 | 0.1479 | 9.999839e-10 | 183 |
| 1.3847 | 0.2118 | 1.4170 | 0.1479 | 9.999837e-10 | 184 |
| 1.3995 | 0.2094 | 1.4166 | 0.1479 | 9.999834e-10 | 185 |
| 1.3922 | 0.1835 | 1.4163 | 0.1549 | 9.999832e-10 | 186 |
| 1.4009 | 0.2071 | 1.4158 | 0.1549 | 9.99983e-10 | 187 |
| 1.3924 | 0.2188 | 1.4154 | 0.1549 | 9.999828e-10 | 188 |
| 1.3915 | 0.2259 | 1.4150 | 0.1549 | 9.999825e-10 | 189 |
| 1.3922 | 0.2353 | 1.4146 | 0.1549 | 9.999823e-10 | 190 |
| 1.3913 | 0.2424 | 1.4142 | 0.1549 | 9.999821e-10 | 191 |
| 1.3933 | 0.2188 | 1.4137 | 0.1549 | 9.999819e-10 | 192 |
| 1.3874 | 0.2400 | 1.4133 | 0.1549 | 9.999817e-10 | 193 |
| 1.3961 | 0.2071 | 1.4129 | 0.1549 | 9.999814e-10 | 194 |
| 1.4043 | 0.2000 | 1.4125 | 0.1549 | 9.999812e-10 | 195 |
| 1.3918 | 0.2071 | 1.4121 | 0.1620 | 9.99981e-10 | 196 |
| 1.3959 | 0.2094 | 1.4117 | 0.1620 | 9.999808e-10 | 197 |
| 1.3930 | 0.1812 | 1.4113 | 0.1620 | 9.999805e-10 | 198 |
| 1.3954 | 0.2071 | 1.4109 | 0.1620 | 9.999803e-10 | 199 |
| 1.3853 | 0.2259 | 1.4105 | 0.1620 | 9.999801e-10 | 200 |
| 1.3934 | 0.2212 | 1.4100 | 0.1620 | 9.999799e-10 | 201 |
| 1.3876 | 0.2212 | 1.4095 | 0.1620 | 9.999797e-10 | 202 |
| 1.3894 | 0.2235 | 1.4091 | 0.1620 | 9.999794e-10 | 203 |
| 1.3860 | 0.2447 | 1.4087 | 0.1690 | 9.999792e-10 | 204 |
| 1.3892 | 0.2000 | 1.4083 | 0.1690 | 9.99979e-10 | 205 |
| 1.3870 | 0.2259 | 1.4078 | 0.1761 | 9.999788e-10 | 206 |
| 1.3941 | 0.2094 | 1.4074 | 0.1761 | 9.999785e-10 | 207 |
| 1.3908 | 0.1953 | 1.4070 | 0.1761 | 9.999783e-10 | 208 |
| 1.3886 | 0.2306 | 1.4066 | 0.1761 | 9.999781e-10 | 209 |
| 1.3888 | 0.2376 | 1.4062 | 0.1761 | 9.999779e-10 | 210 |
| 1.3806 | 0.2329 | 1.4058 | 0.1761 | 9.999777e-10 | 211 |
| 1.3893 | 0.2424 | 1.4054 | 0.1761 | 9.999774e-10 | 212 |
| 1.3775 | 0.2282 | 1.4050 | 0.1761 | 9.999772e-10 | 213 |
| 1.3867 | 0.2047 | 1.4046 | 0.1761 | 9.99977e-10 | 214 |
| 1.3871 | 0.2353 | 1.4041 | 0.1761 | 9.999768e-10 | 215 |
| 1.3678 | 0.2612 | 1.4037 | 0.1761 | 9.999765e-10 | 216 |
| 1.3773 | 0.2376 | 1.4034 | 0.1761 | 9.999763e-10 | 217 |
| 1.3906 | 0.2141 | 1.4030 | 0.1761 | 9.999761e-10 | 218 |
| 1.3838 | 0.2235 | 1.4026 | 0.1761 | 9.999759e-10 | 219 |
| 1.3835 | 0.2612 | 1.4022 | 0.1761 | 9.999757e-10 | 220 |
| 1.3824 | 0.2329 | 1.4017 | 0.1761 | 9.999754e-10 | 221 |
| 1.3830 | 0.2376 | 1.4013 | 0.1761 | 9.999752e-10 | 222 |
| 1.3848 | 0.2235 | 1.4009 | 0.1831 | 9.99975e-10 | 223 |
| 1.3772 | 0.2565 | 1.4004 | 0.1831 | 9.999748e-10 | 224 |
| 1.3764 | 0.2447 | 1.4001 | 0.1831 | 9.999745e-10 | 225 |
| 1.3779 | 0.2541 | 1.3997 | 0.1831 | 9.999743e-10 | 226 |
| 1.3781 | 0.2588 | 1.3993 | 0.1831 | 9.999741e-10 | 227 |
| 1.3838 | 0.2047 | 1.3989 | 0.1831 | 9.999739e-10 | 228 |
| 1.3807 | 0.2259 | 1.3985 | 0.1831 | 9.999737e-10 | 229 |
| 1.3745 | 0.2635 | 1.3982 | 0.1831 | 9.999734e-10 | 230 |
| 1.3776 | 0.2447 | 1.3977 | 0.1831 | 9.999732e-10 | 231 |
| 1.3787 | 0.2282 | 1.3973 | 0.1831 | 9.99973e-10 | 232 |
| 1.3747 | 0.2706 | 1.3969 | 0.1831 | 9.999728e-10 | 233 |
| 1.3771 | 0.2447 | 1.3965 | 0.1901 | 9.999725e-10 | 234 |
| 1.3783 | 0.2259 | 1.3961 | 0.1901 | 9.999723e-10 | 235 |
| 1.3763 | 0.2141 | 1.3957 | 0.1901 | 9.999721e-10 | 236 |
| 1.3687 | 0.2565 | 1.3953 | 0.1901 | 9.999719e-10 | 237 |
| 1.3681 | 0.2565 | 1.3949 | 0.1901 | 9.999717e-10 | 238 |
| 1.3785 | 0.2400 | 1.3945 | 0.1901 | 9.999714e-10 | 239 |
| 1.3807 | 0.2259 | 1.3941 | 0.1972 | 9.999712e-10 | 240 |
| 1.3709 | 0.2353 | 1.3937 | 0.1972 | 9.99971e-10 | 241 |
| 1.3736 | 0.2753 | 1.3933 | 0.1972 | 9.999708e-10 | 242 |
| 1.3735 | 0.2376 | 1.3929 | 0.1972 | 9.999706e-10 | 243 |
| 1.3797 | 0.2235 | 1.3925 | 0.1972 | 9.999703e-10 | 244 |
| 1.3814 | 0.2541 | 1.3921 | 0.2042 | 9.999701e-10 | 245 |
| 1.3672 | 0.2565 | 1.3917 | 0.2042 | 9.999699e-10 | 246 |
| 1.3702 | 0.2518 | 1.3912 | 0.2042 | 9.999697e-10 | 247 |
| 1.3696 | 0.2682 | 1.3908 | 0.2042 | 9.999694e-10 | 248 |
| 1.3727 | 0.2424 | 1.3904 | 0.2042 | 9.999692e-10 | 249 |
| 1.3712 | 0.2635 | 1.3900 | 0.2042 | 9.99969e-10 | 250 |
| 1.3755 | 0.2235 | 1.3896 | 0.2042 | 9.999688e-10 | 251 |
| 1.3626 | 0.2612 | 1.3892 | 0.2042 | 9.999686e-10 | 252 |
| 1.3751 | 0.2376 | 1.3889 | 0.2042 | 9.999683e-10 | 253 |
| 1.3742 | 0.2353 | 1.3885 | 0.2042 | 9.999681e-10 | 254 |
| 1.3749 | 0.2329 | 1.3881 | 0.2042 | 9.999679e-10 | 255 |
| 1.3686 | 0.2541 | 1.3878 | 0.2042 | 9.999677e-10 | 256 |
| 1.3761 | 0.2353 | 1.3873 | 0.2042 | 9.999674e-10 | 257 |
| 1.3742 | 0.2565 | 1.3869 | 0.2042 | 9.999672e-10 | 258 |
| 1.3720 | 0.2682 | 1.3864 | 0.2042 | 9.99967e-10 | 259 |
| 1.3676 | 0.2471 | 1.3860 | 0.2042 | 9.999668e-10 | 260 |
| 1.3710 | 0.2541 | 1.3856 | 0.2042 | 9.999666e-10 | 261 |
| 1.3640 | 0.2918 | 1.3852 | 0.2042 | 9.999663e-10 | 262 |
| 1.3611 | 0.2588 | 1.3848 | 0.2042 | 9.999661e-10 | 263 |
| 1.3686 | 0.2635 | 1.3844 | 0.2042 | 9.999659e-10 | 264 |
| 1.3653 | 0.2776 | 1.3840 | 0.2042 | 9.999657e-10 | 265 |
| 1.3623 | 0.2729 | 1.3836 | 0.2042 | 9.999654e-10 | 266 |
| 1.3690 | 0.2518 | 1.3832 | 0.2042 | 9.999652e-10 | 267 |
| 1.3642 | 0.2635 | 1.3828 | 0.2042 | 9.99965e-10 | 268 |
| 1.3676 | 0.2518 | 1.3823 | 0.2042 | 9.999648e-10 | 269 |
| 1.3697 | 0.2612 | 1.3820 | 0.2042 | 9.999646e-10 | 270 |
| 1.3579 | 0.2894 | 1.3816 | 0.2042 | 9.999643e-10 | 271 |
| 1.3626 | 0.2588 | 1.3812 | 0.2042 | 9.999641e-10 | 272 |
| 1.3602 | 0.2753 | 1.3807 | 0.2042 | 9.999639e-10 | 273 |
| 1.3667 | 0.2612 | 1.3803 | 0.2042 | 9.999637e-10 | 274 |
| 1.3669 | 0.2847 | 1.3800 | 0.2042 | 9.999634e-10 | 275 |
| 1.3602 | 0.2988 | 1.3796 | 0.2042 | 9.999632e-10 | 276 |
| 1.3618 | 0.2941 | 1.3792 | 0.2042 | 9.99963e-10 | 277 |
| 1.3531 | 0.3129 | 1.3788 | 0.2183 | 9.999627e-10 | 278 |
| 1.3597 | 0.2894 | 1.3785 | 0.2183 | 9.999623e-10 | 279 |
| 1.3636 | 0.2729 | 1.3781 | 0.2183 | 9.99962e-10 | 280 |
| 1.3619 | 0.2706 | 1.3777 | 0.2183 | 9.999617e-10 | 281 |
| 1.3573 | 0.3059 | 1.3772 | 0.2183 | 9.999613e-10 | 282 |
| 1.3587 | 0.2635 | 1.3768 | 0.2183 | 9.99961e-10 | 283 |
| 1.3569 | 0.2776 | 1.3764 | 0.2183 | 9.999607e-10 | 284 |
| 1.3521 | 0.3200 | 1.3761 | 0.2183 | 9.999603e-10 | 285 |
| 1.3603 | 0.3176 | 1.3757 | 0.2183 | 9.9996e-10 | 286 |
| 1.3575 | 0.2894 | 1.3753 | 0.2183 | 9.999597e-10 | 287 |
| 1.3626 | 0.2565 | 1.3749 | 0.2183 | 9.999593e-10 | 288 |
| 1.3613 | 0.2565 | 1.3746 | 0.2183 | 9.99959e-10 | 289 |
| 1.3615 | 0.2706 | 1.3742 | 0.2183 | 9.999587e-10 | 290 |
| 1.3554 | 0.2706 | 1.3739 | 0.2183 | 9.999583e-10 | 291 |
| 1.3559 | 0.2988 | 1.3735 | 0.2183 | 9.99958e-10 | 292 |
| 1.3588 | 0.2682 | 1.3731 | 0.2254 | 9.999577e-10 | 293 |
| 1.3506 | 0.2824 | 1.3727 | 0.2254 | 9.999573e-10 | 294 |
| 1.3588 | 0.2706 | 1.3723 | 0.2324 | 9.99957e-10 | 295 |
| 1.3486 | 0.2824 | 1.3720 | 0.2254 | 9.999567e-10 | 296 |
| 1.3553 | 0.3012 | 1.3716 | 0.2254 | 9.999563e-10 | 297 |
| 1.3605 | 0.2447 | 1.3712 | 0.2254 | 9.99956e-10 | 298 |
| 1.3502 | 0.3176 | 1.3709 | 0.2254 | 9.999557e-10 | 299 |
| 1.3522 | 0.3012 | 1.3705 | 0.2254 | 9.999553e-10 | 300 |
| 1.3544 | 0.2824 | 1.3701 | 0.2183 | 9.99955e-10 | 301 |
| 1.3577 | 0.2494 | 1.3697 | 0.2183 | 9.999547e-10 | 302 |
| 1.3470 | 0.2918 | 1.3693 | 0.2183 | 9.999543e-10 | 303 |
| 1.3623 | 0.2871 | 1.3689 | 0.2183 | 9.99954e-10 | 304 |
| 1.3532 | 0.2776 | 1.3685 | 0.2183 | 9.999537e-10 | 305 |
| 1.3551 | 0.2753 | 1.3681 | 0.2183 | 9.999533e-10 | 306 |
| 1.3566 | 0.2659 | 1.3677 | 0.2183 | 9.99953e-10 | 307 |
| 1.3517 | 0.2965 | 1.3673 | 0.2113 | 9.999527e-10 | 308 |
| 1.3574 | 0.2988 | 1.3669 | 0.2113 | 9.999523e-10 | 309 |
| 1.3467 | 0.3200 | 1.3666 | 0.2113 | 9.99952e-10 | 310 |
| 1.3510 | 0.3082 | 1.3662 | 0.2113 | 9.999517e-10 | 311 |
| 1.3448 | 0.3129 | 1.3658 | 0.2113 | 9.999513e-10 | 312 |
| 1.3512 | 0.2800 | 1.3654 | 0.2113 | 9.99951e-10 | 313 |
| 1.3486 | 0.3082 | 1.3650 | 0.2113 | 9.999507e-10 | 314 |
| 1.3441 | 0.3106 | 1.3647 | 0.2113 | 9.999503e-10 | 315 |
| 1.3474 | 0.3176 | 1.3643 | 0.2113 | 9.9995e-10 | 316 |
| 1.3496 | 0.2965 | 1.3639 | 0.2113 | 9.999497e-10 | 317 |
| 1.3436 | 0.3200 | 1.3635 | 0.2183 | 9.999493e-10 | 318 |
| 1.3398 | 0.3318 | 1.3631 | 0.2183 | 9.99949e-10 | 319 |
| 1.3440 | 0.3318 | 1.3627 | 0.2183 | 9.999487e-10 | 320 |
| 1.3402 | 0.3294 | 1.3624 | 0.2254 | 9.999483e-10 | 321 |
| 1.3463 | 0.3247 | 1.3620 | 0.2254 | 9.99948e-10 | 322 |
| 1.3458 | 0.3012 | 1.3616 | 0.2254 | 9.999477e-10 | 323 |
| 1.3492 | 0.3153 | 1.3612 | 0.2254 | 9.999473e-10 | 324 |
| 1.3496 | 0.2941 | 1.3609 | 0.2324 | 9.99947e-10 | 325 |
| 1.3505 | 0.2776 | 1.3605 | 0.2394 | 9.999467e-10 | 326 |
| 1.3314 | 0.3200 | 1.3601 | 0.2394 | 9.999463e-10 | 327 |
| 1.3509 | 0.3082 | 1.3597 | 0.2394 | 9.99946e-10 | 328 |
| 1.3441 | 0.3318 | 1.3593 | 0.2465 | 9.999457e-10 | 329 |
| 1.3360 | 0.3365 | 1.3589 | 0.2535 | 9.999453e-10 | 330 |
| 1.3424 | 0.3271 | 1.3586 | 0.2606 | 9.99945e-10 | 331 |
| 1.3513 | 0.2824 | 1.3582 | 0.2606 | 9.999447e-10 | 332 |
| 1.3505 | 0.3106 | 1.3578 | 0.2606 | 9.999443e-10 | 333 |
| 1.3332 | 0.3176 | 1.3575 | 0.2606 | 9.99944e-10 | 334 |
| 1.3374 | 0.3341 | 1.3571 | 0.2606 | 9.999437e-10 | 335 |
| 1.3425 | 0.3106 | 1.3567 | 0.2606 | 9.999434e-10 | 336 |
| 1.3480 | 0.2988 | 1.3563 | 0.2606 | 9.99943e-10 | 337 |
| 1.3396 | 0.2894 | 1.3560 | 0.2606 | 9.999427e-10 | 338 |
| 1.3431 | 0.3271 | 1.3556 | 0.2676 | 9.999424e-10 | 339 |
| 1.3378 | 0.3271 | 1.3552 | 0.2676 | 9.99942e-10 | 340 |
| 1.3409 | 0.3318 | 1.3548 | 0.2676 | 9.999417e-10 | 341 |
| 1.3401 | 0.3506 | 1.3544 | 0.2676 | 9.999414e-10 | 342 |
| 1.3394 | 0.3153 | 1.3541 | 0.2746 | 9.99941e-10 | 343 |
| 1.3350 | 0.3412 | 1.3537 | 0.2746 | 9.999407e-10 | 344 |
| 1.3464 | 0.3200 | 1.3533 | 0.2817 | 9.999404e-10 | 345 |
| 1.3349 | 0.3412 | 1.3530 | 0.2817 | 9.9994e-10 | 346 |
| 1.3362 | 0.3318 | 1.3527 | 0.2817 | 9.999397e-10 | 347 |
| 1.3454 | 0.3153 | 1.3523 | 0.2817 | 9.999394e-10 | 348 |
| 1.3336 | 0.3459 | 1.3519 | 0.2817 | 9.99939e-10 | 349 |
| 1.3333 | 0.3812 | 1.3516 | 0.2817 | 9.999387e-10 | 350 |
| 1.3349 | 0.3459 | 1.3512 | 0.2817 | 9.999384e-10 | 351 |
| 1.3363 | 0.3388 | 1.3509 | 0.2817 | 9.99938e-10 | 352 |
| 1.3243 | 0.3553 | 1.3505 | 0.2887 | 9.999377e-10 | 353 |
| 1.3317 | 0.3529 | 1.3502 | 0.2817 | 9.999374e-10 | 354 |
| 1.3294 | 0.3388 | 1.3498 | 0.2887 | 9.99937e-10 | 355 |
| 1.3385 | 0.3459 | 1.3494 | 0.2887 | 9.999367e-10 | 356 |
| 1.3293 | 0.3624 | 1.3491 | 0.2887 | 9.999364e-10 | 357 |
| 1.3285 | 0.3694 | 1.3487 | 0.2887 | 9.99936e-10 | 358 |
| 1.3377 | 0.3271 | 1.3483 | 0.2887 | 9.999357e-10 | 359 |
| 1.3367 | 0.3271 | 1.3479 | 0.2887 | 9.999354e-10 | 360 |
| 1.3332 | 0.3341 | 1.3476 | 0.2887 | 9.99935e-10 | 361 |
| 1.3377 | 0.3600 | 1.3473 | 0.2887 | 9.999347e-10 | 362 |
| 1.3222 | 0.3953 | 1.3469 | 0.2887 | 9.999344e-10 | 363 |
| 1.3268 | 0.3553 | 1.3465 | 0.2887 | 9.99934e-10 | 364 |
| 1.3315 | 0.3412 | 1.3461 | 0.2887 | 9.999337e-10 | 365 |
| 1.3318 | 0.3365 | 1.3458 | 0.2887 | 9.999334e-10 | 366 |
| 1.3273 | 0.3671 | 1.3454 | 0.3028 | 9.99933e-10 | 367 |
| 1.3294 | 0.3576 | 1.3450 | 0.3028 | 9.999327e-10 | 368 |
| 1.3291 | 0.3694 | 1.3446 | 0.3028 | 9.999324e-10 | 369 |
| 1.3198 | 0.3600 | 1.3443 | 0.3028 | 9.99932e-10 | 370 |
| 1.3227 | 0.3741 | 1.3440 | 0.3028 | 9.999317e-10 | 371 |
| 1.3275 | 0.3553 | 1.3436 | 0.3028 | 9.999314e-10 | 372 |
| 1.3285 | 0.3388 | 1.3432 | 0.3028 | 9.99931e-10 | 373 |
| 1.3314 | 0.3671 | 1.3428 | 0.3028 | 9.999307e-10 | 374 |
| 1.3250 | 0.3812 | 1.3425 | 0.3028 | 9.999304e-10 | 375 |
| 1.3255 | 0.3553 | 1.3422 | 0.2958 | 9.9993e-10 | 376 |
| 1.3269 | 0.3906 | 1.3419 | 0.2958 | 9.999297e-10 | 377 |
| 1.3257 | 0.3694 | 1.3415 | 0.2958 | 9.999294e-10 | 378 |
| 1.3235 | 0.3624 | 1.3412 | 0.2958 | 9.99929e-10 | 379 |
| 1.3304 | 0.3224 | 1.3408 | 0.3028 | 9.999287e-10 | 380 |
| 1.3203 | 0.3694 | 1.3404 | 0.3028 | 9.999284e-10 | 381 |
| 1.3223 | 0.3694 | 1.3400 | 0.3169 | 9.99928e-10 | 382 |
| 1.3217 | 0.3953 | 1.3397 | 0.3169 | 9.999277e-10 | 383 |
| 1.3163 | 0.3882 | 1.3393 | 0.3169 | 9.999274e-10 | 384 |
| 1.3261 | 0.3718 | 1.3390 | 0.3169 | 9.99927e-10 | 385 |
| 1.3308 | 0.3624 | 1.3386 | 0.3169 | 9.999267e-10 | 386 |
| 1.3263 | 0.3482 | 1.3382 | 0.3239 | 9.999264e-10 | 387 |
| 1.3218 | 0.4094 | 1.3378 | 0.3239 | 9.99926e-10 | 388 |
| 1.3217 | 0.3788 | 1.3375 | 0.3239 | 9.999256e-10 | 389 |
| 1.3270 | 0.3482 | 1.3370 | 0.3239 | 9.999251e-10 | 390 |
| 1.3237 | 0.3600 | 1.3367 | 0.3239 | 9.999247e-10 | 391 |
| 1.3207 | 0.3741 | 1.3363 | 0.3239 | 9.999243e-10 | 392 |
| 1.3203 | 0.3835 | 1.3360 | 0.3239 | 9.999238e-10 | 393 |
| 1.3177 | 0.3671 | 1.3356 | 0.3169 | 9.999234e-10 | 394 |
| 1.3187 | 0.4000 | 1.3353 | 0.3169 | 9.999229e-10 | 395 |
| 1.3227 | 0.3529 | 1.3349 | 0.3169 | 9.999225e-10 | 396 |
| 1.3195 | 0.3624 | 1.3345 | 0.3239 | 9.99922e-10 | 397 |
| 1.3217 | 0.4141 | 1.3342 | 0.3239 | 9.999216e-10 | 398 |
| 1.3205 | 0.3906 | 1.3338 | 0.3239 | 9.999211e-10 | 399 |
| 1.3192 | 0.3812 | 1.3334 | 0.3239 | 9.999207e-10 | 400 |
| 1.3194 | 0.3812 | 1.3330 | 0.3239 | 9.999203e-10 | 401 |
| 1.3175 | 0.3741 | 1.3326 | 0.3239 | 9.999198e-10 | 402 |
| 1.3118 | 0.4306 | 1.3323 | 0.3239 | 9.999194e-10 | 403 |
| 1.3226 | 0.3788 | 1.3319 | 0.3239 | 9.999189e-10 | 404 |
| 1.3186 | 0.4047 | 1.3315 | 0.3239 | 9.999185e-10 | 405 |
| 1.3201 | 0.3671 | 1.3312 | 0.3239 | 9.99918e-10 | 406 |
| 1.3193 | 0.4000 | 1.3308 | 0.3310 | 9.999176e-10 | 407 |
| 1.3247 | 0.3718 | 1.3304 | 0.3310 | 9.999171e-10 | 408 |
| 1.3146 | 0.3906 | 1.3301 | 0.3310 | 9.999167e-10 | 409 |
| 1.3139 | 0.3812 | 1.3298 | 0.3380 | 9.999163e-10 | 410 |
| 1.3172 | 0.4165 | 1.3294 | 0.3451 | 9.999158e-10 | 411 |
| 1.3146 | 0.4071 | 1.3291 | 0.3451 | 9.999154e-10 | 412 |
| 1.3148 | 0.3859 | 1.3287 | 0.3451 | 9.999149e-10 | 413 |
| 1.3177 | 0.4024 | 1.3284 | 0.3521 | 9.999145e-10 | 414 |
| 1.3096 | 0.4329 | 1.3280 | 0.3662 | 9.99914e-10 | 415 |
| 1.3126 | 0.3929 | 1.3276 | 0.3662 | 9.999136e-10 | 416 |
| 1.3147 | 0.4235 | 1.3273 | 0.3662 | 9.999132e-10 | 417 |
| 1.3149 | 0.3600 | 1.3269 | 0.3732 | 9.999127e-10 | 418 |
| 1.3122 | 0.4259 | 1.3265 | 0.3732 | 9.999123e-10 | 419 |
| 1.3140 | 0.3929 | 1.3262 | 0.3732 | 9.999118e-10 | 420 |
| 1.3111 | 0.3835 | 1.3258 | 0.3873 | 9.999114e-10 | 421 |
| 1.3131 | 0.4094 | 1.3255 | 0.3944 | 9.999109e-10 | 422 |
| 1.3118 | 0.3859 | 1.3251 | 0.3944 | 9.999105e-10 | 423 |
| 1.3146 | 0.3671 | 1.3248 | 0.4014 | 9.9991e-10 | 424 |
| 1.3078 | 0.4188 | 1.3244 | 0.4085 | 9.999096e-10 | 425 |
| 1.3087 | 0.4188 | 1.3241 | 0.4085 | 9.999092e-10 | 426 |
| 1.3125 | 0.4188 | 1.3237 | 0.4155 | 9.999087e-10 | 427 |
| 1.3071 | 0.4024 | 1.3234 | 0.4225 | 9.999083e-10 | 428 |
| 1.3131 | 0.3929 | 1.3230 | 0.4296 | 9.999078e-10 | 429 |
| 1.3077 | 0.4424 | 1.3227 | 0.4296 | 9.999074e-10 | 430 |
| 1.3127 | 0.4024 | 1.3223 | 0.4296 | 9.999069e-10 | 431 |
| 1.3047 | 0.4518 | 1.3220 | 0.4296 | 9.999065e-10 | 432 |
| 1.2997 | 0.4329 | 1.3216 | 0.4296 | 9.99906e-10 | 433 |
| 1.3050 | 0.4329 | 1.3213 | 0.4296 | 9.999056e-10 | 434 |
| 1.3077 | 0.4329 | 1.3210 | 0.4296 | 9.999052e-10 | 435 |
| 1.3064 | 0.4329 | 1.3206 | 0.4296 | 9.999047e-10 | 436 |
| 1.3038 | 0.4424 | 1.3202 | 0.4296 | 9.999043e-10 | 437 |
| 1.3140 | 0.3976 | 1.3199 | 0.4366 | 9.999038e-10 | 438 |
| 1.3025 | 0.4235 | 1.3195 | 0.4366 | 9.999034e-10 | 439 |
| 1.3021 | 0.4282 | 1.3192 | 0.4296 | 9.999029e-10 | 440 |
| 1.3029 | 0.4235 | 1.3188 | 0.4366 | 9.999025e-10 | 441 |
| 1.2991 | 0.4682 | 1.3185 | 0.4366 | 9.99902e-10 | 442 |
| 1.3099 | 0.4165 | 1.3181 | 0.4366 | 9.999016e-10 | 443 |
| 1.3051 | 0.4376 | 1.3178 | 0.4366 | 9.999012e-10 | 444 |
| 1.2937 | 0.4353 | 1.3174 | 0.4437 | 9.999007e-10 | 445 |
| 1.3004 | 0.4235 | 1.3171 | 0.4507 | 9.999003e-10 | 446 |
| 1.2956 | 0.4682 | 1.3167 | 0.4507 | 9.998998e-10 | 447 |
| 1.3079 | 0.4329 | 1.3164 | 0.4577 | 9.998994e-10 | 448 |
| 1.3026 | 0.4376 | 1.3160 | 0.4577 | 9.998989e-10 | 449 |
| 1.3009 | 0.4400 | 1.3156 | 0.4648 | 9.998985e-10 | 450 |
| 1.3018 | 0.4353 | 1.3153 | 0.4648 | 9.99898e-10 | 451 |
| 1.3011 | 0.4329 | 1.3149 | 0.4648 | 9.998976e-10 | 452 |
| 1.3014 | 0.4259 | 1.3146 | 0.4648 | 9.998972e-10 | 453 |
| 1.3028 | 0.4659 | 1.3142 | 0.4648 | 9.998967e-10 | 454 |
| 1.2986 | 0.4329 | 1.3140 | 0.4648 | 9.998963e-10 | 455 |
| 1.2987 | 0.4376 | 1.3136 | 0.4718 | 9.998958e-10 | 456 |
| 1.3080 | 0.4188 | 1.3132 | 0.4718 | 9.998954e-10 | 457 |
| 1.2989 | 0.4282 | 1.3129 | 0.4718 | 9.99895e-10 | 458 |
| 1.3003 | 0.4447 | 1.3125 | 0.4718 | 9.998945e-10 | 459 |
| 1.2984 | 0.4494 | 1.3122 | 0.4718 | 9.998941e-10 | 460 |
| 1.2991 | 0.4306 | 1.3118 | 0.4859 | 9.998936e-10 | 461 |
| 1.3014 | 0.4588 | 1.3115 | 0.4930 | 9.998932e-10 | 462 |
| 1.3041 | 0.4118 | 1.3112 | 0.4930 | 9.998927e-10 | 463 |
| 1.3031 | 0.4306 | 1.3109 | 0.4930 | 9.998923e-10 | 464 |
| 1.2979 | 0.4329 | 1.3105 | 0.4930 | 9.998918e-10 | 465 |
| 1.3049 | 0.4424 | 1.3102 | 0.4930 | 9.998914e-10 | 466 |
| 1.3003 | 0.4541 | 1.3098 | 0.4930 | 9.99891e-10 | 467 |
| 1.2883 | 0.4518 | 1.3095 | 0.4930 | 9.998905e-10 | 468 |
| 1.2887 | 0.5012 | 1.3091 | 0.5 | 9.998901e-10 | 469 |
| 1.3032 | 0.4541 | 1.3088 | 0.5 | 9.998896e-10 | 470 |
| 1.2940 | 0.4518 | 1.3084 | 0.5 | 9.998892e-10 | 471 |
| 1.2887 | 0.4894 | 1.3081 | 0.5 | 9.998887e-10 | 472 |
| 1.2878 | 0.4753 | 1.3078 | 0.5 | 9.998883e-10 | 473 |
| 1.2885 | 0.4941 | 1.3074 | 0.5 | 9.998878e-10 | 474 |
| 1.2936 | 0.4612 | 1.3071 | 0.5 | 9.998874e-10 | 475 |
| 1.2915 | 0.4659 | 1.3067 | 0.5 | 9.99887e-10 | 476 |
| 1.2886 | 0.4518 | 1.3064 | 0.5 | 9.998865e-10 | 477 |
| 1.2975 | 0.4376 | 1.3061 | 0.5 | 9.998861e-10 | 478 |
| 1.2930 | 0.4635 | 1.3057 | 0.4930 | 9.998856e-10 | 479 |
| 1.2910 | 0.4894 | 1.3054 | 0.4930 | 9.998852e-10 | 480 |
| 1.2891 | 0.4682 | 1.3050 | 0.5 | 9.998847e-10 | 481 |
| 1.2900 | 0.4965 | 1.3047 | 0.5 | 9.998843e-10 | 482 |
| 1.2902 | 0.4682 | 1.3044 | 0.5 | 9.998838e-10 | 483 |
| 1.2912 | 0.4965 | 1.3041 | 0.5 | 9.998834e-10 | 484 |
| 1.2926 | 0.4541 | 1.3037 | 0.5 | 9.99883e-10 | 485 |
| 1.2893 | 0.4706 | 1.3034 | 0.5070 | 9.998825e-10 | 486 |
| 1.2823 | 0.4965 | 1.3030 | 0.5070 | 9.998821e-10 | 487 |
| 1.2865 | 0.4894 | 1.3026 | 0.5 | 9.998816e-10 | 488 |
| 1.2902 | 0.4682 | 1.3023 | 0.5 | 9.998812e-10 | 489 |
| 1.2818 | 0.5082 | 1.3020 | 0.5 | 9.998807e-10 | 490 |
| 1.2924 | 0.4424 | 1.3017 | 0.5 | 9.998803e-10 | 491 |
| 1.2839 | 0.4918 | 1.3013 | 0.5 | 9.998798e-10 | 492 |
| 1.2840 | 0.4635 | 1.3010 | 0.5 | 9.998794e-10 | 493 |
| 1.2860 | 0.4800 | 1.3007 | 0.5 | 9.99879e-10 | 494 |
| 1.2913 | 0.4424 | 1.3003 | 0.5 | 9.998785e-10 | 495 |
| 1.2914 | 0.4988 | 1.2999 | 0.5070 | 9.998781e-10 | 496 |
| 1.2898 | 0.4635 | 1.2996 | 0.5070 | 9.998776e-10 | 497 |
| 1.2885 | 0.4635 | 1.2992 | 0.5141 | 9.998772e-10 | 498 |
| 1.2825 | 0.4847 | 1.2989 | 0.5141 | 9.998767e-10 | 499 |
| 1.2835 | 0.4682 | 1.2986 | 0.5141 | 9.998762e-10 | 500 |
| 1.2855 | 0.4894 | 1.2982 | 0.5141 | 9.998756e-10 | 501 |
| 1.2873 | 0.4729 | 1.2978 | 0.5141 | 9.998751e-10 | 502 |
| 1.2834 | 0.5106 | 1.2975 | 0.5141 | 9.998745e-10 | 503 |
| 1.2837 | 0.5153 | 1.2972 | 0.5211 | 9.99874e-10 | 504 |
| 1.2818 | 0.4941 | 1.2969 | 0.5211 | 9.998734e-10 | 505 |
| 1.2815 | 0.5082 | 1.2966 | 0.5211 | 9.998729e-10 | 506 |
| 1.2845 | 0.4800 | 1.2962 | 0.5211 | 9.998723e-10 | 507 |
| 1.2966 | 0.4376 | 1.2959 | 0.5211 | 9.998717e-10 | 508 |
| 1.2863 | 0.4941 | 1.2955 | 0.5282 | 9.998712e-10 | 509 |
| 1.2814 | 0.4871 | 1.2952 | 0.5282 | 9.998706e-10 | 510 |
| 1.2809 | 0.5224 | 1.2948 | 0.5282 | 9.998701e-10 | 511 |
| 1.2850 | 0.4682 | 1.2945 | 0.5352 | 9.998695e-10 | 512 |
| 1.2787 | 0.5035 | 1.2942 | 0.5352 | 9.99869e-10 | 513 |
| 1.2819 | 0.5059 | 1.2939 | 0.5352 | 9.998684e-10 | 514 |
| 1.2825 | 0.4729 | 1.2936 | 0.5423 | 9.998679e-10 | 515 |
| 1.2720 | 0.5341 | 1.2932 | 0.5423 | 9.998673e-10 | 516 |
| 1.2779 | 0.5153 | 1.2929 | 0.5423 | 9.998667e-10 | 517 |
| 1.2803 | 0.5176 | 1.2925 | 0.5563 | 9.998662e-10 | 518 |
| 1.2803 | 0.4706 | 1.2922 | 0.5563 | 9.998656e-10 | 519 |
| 1.2752 | 0.5059 | 1.2919 | 0.5563 | 9.998651e-10 | 520 |
| 1.2816 | 0.4894 | 1.2915 | 0.5634 | 9.998645e-10 | 521 |
| 1.2723 | 0.5459 | 1.2912 | 0.5634 | 9.99864e-10 | 522 |
| 1.2828 | 0.5012 | 1.2909 | 0.5775 | 9.998634e-10 | 523 |
| 1.2901 | 0.4871 | 1.2906 | 0.5775 | 9.998629e-10 | 524 |
| 1.2856 | 0.4800 | 1.2902 | 0.5775 | 9.998623e-10 | 525 |
| 1.2812 | 0.5176 | 1.2899 | 0.5775 | 9.998617e-10 | 526 |
| 1.2731 | 0.5176 | 1.2896 | 0.5775 | 9.998612e-10 | 527 |
| 1.2819 | 0.5082 | 1.2892 | 0.5775 | 9.998606e-10 | 528 |
| 1.2775 | 0.5106 | 1.2889 | 0.5775 | 9.998601e-10 | 529 |
| 1.2774 | 0.5012 | 1.2886 | 0.5775 | 9.998595e-10 | 530 |
| 1.2765 | 0.5294 | 1.2883 | 0.5775 | 9.99859e-10 | 531 |
| 1.2782 | 0.5176 | 1.2880 | 0.5775 | 9.998584e-10 | 532 |
| 1.2763 | 0.5082 | 1.2877 | 0.5775 | 9.998579e-10 | 533 |
| 1.2716 | 0.5082 | 1.2873 | 0.5775 | 9.998573e-10 | 534 |
| 1.2827 | 0.5035 | 1.2870 | 0.5775 | 9.998568e-10 | 535 |
| 1.2741 | 0.5106 | 1.2867 | 0.5775 | 9.998562e-10 | 536 |
| 1.2719 | 0.5294 | 1.2864 | 0.5775 | 9.998556e-10 | 537 |
| 1.2698 | 0.5153 | 1.2861 | 0.5775 | 9.998551e-10 | 538 |
| 1.2801 | 0.5294 | 1.2857 | 0.5775 | 9.998545e-10 | 539 |
| 1.2698 | 0.5459 | 1.2854 | 0.5775 | 9.99854e-10 | 540 |
| 1.2722 | 0.5129 | 1.2851 | 0.5775 | 9.998534e-10 | 541 |
| 1.2690 | 0.5176 | 1.2848 | 0.5775 | 9.998529e-10 | 542 |
| 1.2807 | 0.5106 | 1.2845 | 0.5775 | 9.998523e-10 | 543 |
| 1.2762 | 0.5153 | 1.2841 | 0.5845 | 9.998518e-10 | 544 |
| 1.2734 | 0.5365 | 1.2838 | 0.5915 | 9.998512e-10 | 545 |
| 1.2607 | 0.5459 | 1.2835 | 0.5915 | 9.998506e-10 | 546 |
| 1.2778 | 0.5035 | 1.2831 | 0.5915 | 9.998501e-10 | 547 |
| 1.2625 | 0.5271 | 1.2828 | 0.5986 | 9.998495e-10 | 548 |
| 1.2641 | 0.5318 | 1.2825 | 0.5986 | 9.99849e-10 | 549 |
| 1.2695 | 0.5341 | 1.2822 | 0.6056 | 9.998484e-10 | 550 |
| 1.2721 | 0.5459 | 1.2819 | 0.6056 | 9.998479e-10 | 551 |
| 1.2707 | 0.5271 | 1.2816 | 0.6056 | 9.998473e-10 | 552 |
| 1.2695 | 0.5247 | 1.2812 | 0.6056 | 9.998468e-10 | 553 |
| 1.2766 | 0.5035 | 1.2809 | 0.6056 | 9.998462e-10 | 554 |
| 1.2678 | 0.5482 | 1.2806 | 0.6056 | 9.998457e-10 | 555 |
| 1.2677 | 0.5318 | 1.2803 | 0.6056 | 9.998451e-10 | 556 |
| 1.2711 | 0.5271 | 1.2799 | 0.6056 | 9.998445e-10 | 557 |
| 1.2639 | 0.5529 | 1.2796 | 0.6056 | 9.99844e-10 | 558 |
| 1.2619 | 0.5906 | 1.2794 | 0.6056 | 9.998434e-10 | 559 |
| 1.2710 | 0.5271 | 1.2791 | 0.6056 | 9.998429e-10 | 560 |
| 1.2666 | 0.5647 | 1.2787 | 0.6056 | 9.998423e-10 | 561 |
| 1.2639 | 0.5388 | 1.2784 | 0.6056 | 9.998418e-10 | 562 |
| 1.2736 | 0.5200 | 1.2781 | 0.6056 | 9.998412e-10 | 563 |
| 1.2722 | 0.5271 | 1.2777 | 0.6056 | 9.998407e-10 | 564 |
| 1.2638 | 0.5482 | 1.2774 | 0.6056 | 9.998401e-10 | 565 |
| 1.2654 | 0.5318 | 1.2771 | 0.6056 | 9.998395e-10 | 566 |
| 1.2649 | 0.5459 | 1.2767 | 0.6056 | 9.99839e-10 | 567 |
| 1.2638 | 0.5412 | 1.2764 | 0.6056 | 9.998384e-10 | 568 |
| 1.2626 | 0.5694 | 1.2761 | 0.6056 | 9.998379e-10 | 569 |
| 1.2579 | 0.5576 | 1.2758 | 0.6056 | 9.998373e-10 | 570 |
| 1.2673 | 0.5671 | 1.2755 | 0.6056 | 9.998368e-10 | 571 |
| 1.2628 | 0.5224 | 1.2751 | 0.6056 | 9.998362e-10 | 572 |
| 1.2664 | 0.5247 | 1.2748 | 0.6056 | 9.998357e-10 | 573 |
| 1.2653 | 0.5247 | 1.2745 | 0.6056 | 9.998351e-10 | 574 |
| 1.2662 | 0.5294 | 1.2742 | 0.6056 | 9.998345e-10 | 575 |
| 1.2553 | 0.5459 | 1.2738 | 0.6056 | 9.99834e-10 | 576 |
| 1.2572 | 0.5765 | 1.2735 | 0.6056 | 9.998334e-10 | 577 |
| 1.2645 | 0.5271 | 1.2732 | 0.6056 | 9.998329e-10 | 578 |
| 1.2659 | 0.5388 | 1.2728 | 0.5986 | 9.998323e-10 | 579 |
| 1.2604 | 0.5482 | 1.2725 | 0.5986 | 9.998318e-10 | 580 |
| 1.2665 | 0.5012 | 1.2722 | 0.5986 | 9.998312e-10 | 581 |
| 1.2617 | 0.5388 | 1.2718 | 0.6056 | 9.998307e-10 | 582 |
| 1.2657 | 0.5200 | 1.2715 | 0.6056 | 9.998301e-10 | 583 |
| 1.2616 | 0.5412 | 1.2712 | 0.6127 | 9.998296e-10 | 584 |
| 1.2571 | 0.5624 | 1.2709 | 0.6127 | 9.99829e-10 | 585 |
| 1.2589 | 0.5482 | 1.2707 | 0.6127 | 9.998284e-10 | 586 |
| 1.2522 | 0.5671 | 1.2704 | 0.6056 | 9.998279e-10 | 587 |
| 1.2607 | 0.5553 | 1.2701 | 0.6056 | 9.998273e-10 | 588 |
| 1.2534 | 0.5624 | 1.2698 | 0.6056 | 9.998268e-10 | 589 |
| 1.2607 | 0.5624 | 1.2695 | 0.6056 | 9.998262e-10 | 590 |
| 1.2507 | 0.5812 | 1.2692 | 0.6056 | 9.998257e-10 | 591 |
| 1.2587 | 0.5506 | 1.2688 | 0.6056 | 9.998251e-10 | 592 |
| 1.2608 | 0.5506 | 1.2685 | 0.6056 | 9.998246e-10 | 593 |
| 1.2531 | 0.5553 | 1.2682 | 0.6056 | 9.99824e-10 | 594 |
| 1.2529 | 0.5953 | 1.2679 | 0.6056 | 9.998234e-10 | 595 |
| 1.2587 | 0.5435 | 1.2676 | 0.6056 | 9.998229e-10 | 596 |
| 1.2547 | 0.5459 | 1.2673 | 0.6056 | 9.998223e-10 | 597 |
| 1.2549 | 0.5694 | 1.2669 | 0.6056 | 9.998218e-10 | 598 |
| 1.2550 | 0.5576 | 1.2667 | 0.6127 | 9.998212e-10 | 599 |
| 1.2594 | 0.5741 | 1.2663 | 0.6127 | 9.998207e-10 | 600 |
| 1.2558 | 0.5435 | 1.2660 | 0.6127 | 9.998201e-10 | 601 |
| 1.2565 | 0.5576 | 1.2657 | 0.6127 | 9.998196e-10 | 602 |
| 1.2509 | 0.5671 | 1.2654 | 0.6127 | 9.99819e-10 | 603 |
| 1.2568 | 0.5765 | 1.2650 | 0.6127 | 9.998185e-10 | 604 |
| 1.2573 | 0.5529 | 1.2647 | 0.6197 | 9.998179e-10 | 605 |
| 1.2585 | 0.5388 | 1.2644 | 0.6197 | 9.998173e-10 | 606 |
| 1.2561 | 0.5647 | 1.2641 | 0.6197 | 9.998168e-10 | 607 |
| 1.2506 | 0.5459 | 1.2638 | 0.6197 | 9.998162e-10 | 608 |
| 1.2531 | 0.5765 | 1.2635 | 0.6197 | 9.998157e-10 | 609 |
| 1.2610 | 0.5506 | 1.2632 | 0.6197 | 9.998151e-10 | 610 |
| 1.2600 | 0.5553 | 1.2630 | 0.6197 | 9.998145e-10 | 611 |
| 1.2570 | 0.5788 | 1.2627 | 0.6197 | 9.998138e-10 | 612 |
| 1.2604 | 0.5600 | 1.2624 | 0.6197 | 9.998131e-10 | 613 |
| 1.2517 | 0.6000 | 1.2621 | 0.6197 | 9.998125e-10 | 614 |
| 1.2429 | 0.6141 | 1.2618 | 0.6268 | 9.998118e-10 | 615 |
| 1.2512 | 0.5718 | 1.2615 | 0.6268 | 9.998111e-10 | 616 |
| 1.2457 | 0.6047 | 1.2612 | 0.6268 | 9.998105e-10 | 617 |
| 1.2537 | 0.5718 | 1.2609 | 0.6268 | 9.998098e-10 | 618 |
| 1.2472 | 0.6047 | 1.2606 | 0.6268 | 9.998091e-10 | 619 |
| 1.2471 | 0.5953 | 1.2603 | 0.6268 | 9.998085e-10 | 620 |
| 1.2561 | 0.5765 | 1.2600 | 0.6268 | 9.998078e-10 | 621 |
| 1.2440 | 0.6000 | 1.2596 | 0.6268 | 9.998071e-10 | 622 |
| 1.2524 | 0.5671 | 1.2593 | 0.6268 | 9.998065e-10 | 623 |
| 1.2532 | 0.5835 | 1.2590 | 0.6268 | 9.998058e-10 | 624 |
| 1.2488 | 0.5576 | 1.2587 | 0.6268 | 9.998051e-10 | 625 |
| 1.2444 | 0.5976 | 1.2584 | 0.6268 | 9.998045e-10 | 626 |
| 1.2502 | 0.6094 | 1.2581 | 0.6268 | 9.998038e-10 | 627 |
| 1.2469 | 0.6024 | 1.2578 | 0.6268 | 9.998031e-10 | 628 |
| 1.2458 | 0.5718 | 1.2575 | 0.6338 | 9.998025e-10 | 629 |
| 1.2477 | 0.5953 | 1.2572 | 0.6338 | 9.998018e-10 | 630 |
| 1.2435 | 0.6024 | 1.2569 | 0.6338 | 9.998011e-10 | 631 |
| 1.2480 | 0.5788 | 1.2566 | 0.6268 | 9.998005e-10 | 632 |
| 1.2532 | 0.5412 | 1.2563 | 0.6268 | 9.997998e-10 | 633 |
| 1.2395 | 0.6047 | 1.2560 | 0.6268 | 9.997991e-10 | 634 |
| 1.2395 | 0.6259 | 1.2557 | 0.6268 | 9.997985e-10 | 635 |
| 1.2486 | 0.5788 | 1.2555 | 0.6268 | 9.997978e-10 | 636 |
| 1.2469 | 0.5835 | 1.2551 | 0.6338 | 9.997971e-10 | 637 |
| 1.2482 | 0.5647 | 1.2549 | 0.6338 | 9.997965e-10 | 638 |
| 1.2402 | 0.5765 | 1.2545 | 0.6338 | 9.997958e-10 | 639 |
| 1.2389 | 0.6047 | 1.2543 | 0.6408 | 9.997951e-10 | 640 |
| 1.2414 | 0.5953 | 1.2539 | 0.6408 | 9.997945e-10 | 641 |
| 1.2449 | 0.6071 | 1.2536 | 0.6408 | 9.997938e-10 | 642 |
| 1.2436 | 0.5929 | 1.2533 | 0.6408 | 9.997931e-10 | 643 |
| 1.2437 | 0.5929 | 1.2530 | 0.6408 | 9.997925e-10 | 644 |
| 1.2383 | 0.6094 | 1.2527 | 0.6408 | 9.997918e-10 | 645 |
| 1.2492 | 0.5859 | 1.2524 | 0.6408 | 9.997911e-10 | 646 |
| 1.2437 | 0.6047 | 1.2521 | 0.6408 | 9.997905e-10 | 647 |
| 1.2383 | 0.5882 | 1.2518 | 0.6408 | 9.997898e-10 | 648 |
| 1.2484 | 0.5694 | 1.2516 | 0.6408 | 9.997891e-10 | 649 |
| 1.2385 | 0.6000 | 1.2512 | 0.6408 | 9.997885e-10 | 650 |
| 1.2402 | 0.6094 | 1.2510 | 0.6408 | 9.997878e-10 | 651 |
| 1.2392 | 0.5953 | 1.2506 | 0.6408 | 9.997871e-10 | 652 |
| 1.2480 | 0.5788 | 1.2503 | 0.6408 | 9.997865e-10 | 653 |
| 1.2373 | 0.5929 | 1.2500 | 0.6408 | 9.997858e-10 | 654 |
| 1.2406 | 0.5882 | 1.2497 | 0.6408 | 9.997851e-10 | 655 |
| 1.2478 | 0.5506 | 1.2495 | 0.6408 | 9.997845e-10 | 656 |
| 1.2418 | 0.5906 | 1.2492 | 0.6408 | 9.997838e-10 | 657 |
| 1.2421 | 0.6071 | 1.2489 | 0.6408 | 9.997831e-10 | 658 |
| 1.2368 | 0.5976 | 1.2486 | 0.6408 | 9.997825e-10 | 659 |
| 1.2435 | 0.5600 | 1.2483 | 0.6408 | 9.997818e-10 | 660 |
| 1.2422 | 0.6024 | 1.2480 | 0.6408 | 9.997811e-10 | 661 |
| 1.2397 | 0.6094 | 1.2477 | 0.6479 | 9.997805e-10 | 662 |
| 1.2419 | 0.6000 | 1.2474 | 0.6479 | 9.997798e-10 | 663 |
| 1.2365 | 0.5812 | 1.2471 | 0.6479 | 9.997791e-10 | 664 |
| 1.2399 | 0.6024 | 1.2469 | 0.6549 | 9.997785e-10 | 665 |
| 1.2446 | 0.6047 | 1.2466 | 0.6549 | 9.997778e-10 | 666 |
| 1.2391 | 0.6047 | 1.2463 | 0.6549 | 9.997771e-10 | 667 |
| 1.2460 | 0.6165 | 1.2460 | 0.6549 | 9.997765e-10 | 668 |
| 1.2348 | 0.6141 | 1.2457 | 0.6479 | 9.997758e-10 | 669 |
| 1.2348 | 0.6024 | 1.2454 | 0.6479 | 9.997752e-10 | 670 |
| 1.2347 | 0.6094 | 1.2451 | 0.6479 | 9.997745e-10 | 671 |
| 1.2319 | 0.5953 | 1.2448 | 0.6479 | 9.997738e-10 | 672 |
| 1.2381 | 0.6118 | 1.2445 | 0.6479 | 9.997732e-10 | 673 |
| 1.2299 | 0.6141 | 1.2442 | 0.6479 | 9.997725e-10 | 674 |
| 1.2329 | 0.6071 | 1.2439 | 0.6479 | 9.997718e-10 | 675 |
| 1.2309 | 0.6259 | 1.2436 | 0.6479 | 9.997712e-10 | 676 |
| 1.2260 | 0.6259 | 1.2433 | 0.6479 | 9.997705e-10 | 677 |
| 1.2328 | 0.6212 | 1.2430 | 0.6479 | 9.997698e-10 | 678 |
| 1.2348 | 0.6024 | 1.2427 | 0.6479 | 9.997692e-10 | 679 |
| 1.2315 | 0.6047 | 1.2424 | 0.6479 | 9.997685e-10 | 680 |
| 1.2375 | 0.6235 | 1.2421 | 0.6479 | 9.997678e-10 | 681 |
| 1.2276 | 0.6376 | 1.2418 | 0.6479 | 9.997672e-10 | 682 |
| 1.2278 | 0.6165 | 1.2416 | 0.6408 | 9.997665e-10 | 683 |
| 1.2383 | 0.6188 | 1.2413 | 0.6408 | 9.997658e-10 | 684 |
| 1.2323 | 0.6071 | 1.2410 | 0.6408 | 9.997652e-10 | 685 |
| 1.2242 | 0.6094 | 1.2407 | 0.6408 | 9.997645e-10 | 686 |
| 1.2382 | 0.5976 | 1.2404 | 0.6408 | 9.997638e-10 | 687 |
| 1.2333 | 0.6212 | 1.2401 | 0.6479 | 9.997632e-10 | 688 |
| 1.2327 | 0.6094 | 1.2398 | 0.6479 | 9.997625e-10 | 689 |
| 1.2319 | 0.6259 | 1.2395 | 0.6479 | 9.997618e-10 | 690 |
| 1.2244 | 0.6329 | 1.2392 | 0.6479 | 9.997612e-10 | 691 |
| 1.2279 | 0.6118 | 1.2390 | 0.6479 | 9.997605e-10 | 692 |
| 1.2330 | 0.6212 | 1.2387 | 0.6479 | 9.997598e-10 | 693 |
| 1.2285 | 0.6306 | 1.2384 | 0.6479 | 9.997592e-10 | 694 |
| 1.2234 | 0.6188 | 1.2381 | 0.6479 | 9.997585e-10 | 695 |
| 1.2296 | 0.6282 | 1.2379 | 0.6479 | 9.997578e-10 | 696 |
| 1.2289 | 0.6353 | 1.2375 | 0.6479 | 9.997572e-10 | 697 |
| 1.2305 | 0.6259 | 1.2373 | 0.6479 | 9.997565e-10 | 698 |
| 1.2264 | 0.6329 | 1.2370 | 0.6479 | 9.997558e-10 | 699 |
| 1.2254 | 0.6165 | 1.2367 | 0.6479 | 9.997552e-10 | 700 |
| 1.2318 | 0.6188 | 1.2364 | 0.6479 | 9.997545e-10 | 701 |
| 1.2261 | 0.6094 | 1.2361 | 0.6479 | 9.997538e-10 | 702 |
| 1.2320 | 0.6094 | 1.2359 | 0.6479 | 9.997532e-10 | 703 |
| 1.2271 | 0.6188 | 1.2356 | 0.6479 | 9.997525e-10 | 704 |
| 1.2189 | 0.6282 | 1.2353 | 0.6479 | 9.997518e-10 | 705 |
| 1.2196 | 0.6329 | 1.2350 | 0.6479 | 9.997512e-10 | 706 |
| 1.2207 | 0.6376 | 1.2348 | 0.6479 | 9.997505e-10 | 707 |
| 1.2265 | 0.5929 | 1.2345 | 0.6479 | 9.997498e-10 | 708 |
| 1.2226 | 0.6400 | 1.2342 | 0.6479 | 9.997492e-10 | 709 |
| 1.2294 | 0.6212 | 1.2338 | 0.6479 | 9.997485e-10 | 710 |
| 1.2220 | 0.6235 | 1.2335 | 0.6479 | 9.997478e-10 | 711 |
| 1.2288 | 0.6165 | 1.2332 | 0.6479 | 9.997472e-10 | 712 |
| 1.2299 | 0.6376 | 1.2330 | 0.6479 | 9.997465e-10 | 713 |
| 1.2196 | 0.6212 | 1.2327 | 0.6479 | 9.997458e-10 | 714 |
| 1.2180 | 0.6282 | 1.2324 | 0.6479 | 9.997452e-10 | 715 |
| 1.2271 | 0.6494 | 1.2322 | 0.6479 | 9.997445e-10 | 716 |
| 1.2231 | 0.6188 | 1.2319 | 0.6479 | 9.997438e-10 | 717 |
| 1.2253 | 0.6212 | 1.2317 | 0.6479 | 9.997432e-10 | 718 |
| 1.2265 | 0.5976 | 1.2314 | 0.6479 | 9.997425e-10 | 719 |
| 1.2221 | 0.6071 | 1.2311 | 0.6479 | 9.997418e-10 | 720 |
| 1.2174 | 0.6306 | 1.2308 | 0.6479 | 9.997412e-10 | 721 |
| 1.2241 | 0.6282 | 1.2306 | 0.6479 | 9.997404e-10 | 722 |
| 1.2241 | 0.6259 | 1.2303 | 0.6479 | 9.997396e-10 | 723 |
| 1.2211 | 0.6118 | 1.2300 | 0.6479 | 9.997388e-10 | 724 |
| 1.2126 | 0.6259 | 1.2298 | 0.6549 | 9.997381e-10 | 725 |
| 1.2193 | 0.6541 | 1.2295 | 0.6549 | 9.997373e-10 | 726 |
| 1.2128 | 0.6471 | 1.2292 | 0.6549 | 9.997365e-10 | 727 |
| 1.2246 | 0.6141 | 1.2289 | 0.6549 | 9.997357e-10 | 728 |
| 1.2164 | 0.6282 | 1.2286 | 0.6549 | 9.99735e-10 | 729 |
| 1.2171 | 0.6282 | 1.2284 | 0.6549 | 9.997342e-10 | 730 |
| 1.2173 | 0.6447 | 1.2281 | 0.6549 | 9.997334e-10 | 731 |
| 1.2135 | 0.6353 | 1.2278 | 0.6549 | 9.997326e-10 | 732 |
| 1.2139 | 0.6329 | 1.2275 | 0.6549 | 9.997319e-10 | 733 |
| 1.2202 | 0.6353 | 1.2273 | 0.6549 | 9.997311e-10 | 734 |
| 1.2140 | 0.6541 | 1.2270 | 0.6549 | 9.997303e-10 | 735 |
| 1.2116 | 0.6400 | 1.2267 | 0.6549 | 9.997295e-10 | 736 |
| 1.2206 | 0.6282 | 1.2264 | 0.6549 | 9.997287e-10 | 737 |
| 1.2170 | 0.6235 | 1.2262 | 0.6549 | 9.99728e-10 | 738 |
| 1.2202 | 0.6329 | 1.2259 | 0.6549 | 9.997272e-10 | 739 |
| 1.2149 | 0.6424 | 1.2256 | 0.6549 | 9.997264e-10 | 740 |
| 1.2109 | 0.6329 | 1.2253 | 0.6549 | 9.997256e-10 | 741 |
| 1.2127 | 0.6235 | 1.2250 | 0.6549 | 9.997249e-10 | 742 |
| 1.2132 | 0.6447 | 1.2248 | 0.6549 | 9.997241e-10 | 743 |
| 1.2129 | 0.6165 | 1.2245 | 0.6549 | 9.997233e-10 | 744 |
| 1.2094 | 0.6494 | 1.2242 | 0.6549 | 9.997225e-10 | 745 |
| 1.2206 | 0.6118 | 1.2240 | 0.6549 | 9.997217e-10 | 746 |
| 1.2174 | 0.6376 | 1.2237 | 0.6549 | 9.99721e-10 | 747 |
| 1.2220 | 0.6047 | 1.2234 | 0.6549 | 9.997202e-10 | 748 |
| 1.2130 | 0.6424 | 1.2232 | 0.6549 | 9.997194e-10 | 749 |
| 1.2201 | 0.6259 | 1.2229 | 0.6549 | 9.997186e-10 | 750 |
| 1.2147 | 0.6329 | 1.2226 | 0.6549 | 9.997179e-10 | 751 |
| 1.2148 | 0.6235 | 1.2223 | 0.6549 | 9.997171e-10 | 752 |
| 1.2149 | 0.6329 | 1.2221 | 0.6549 | 9.997163e-10 | 753 |
| 1.2139 | 0.6329 | 1.2218 | 0.6549 | 9.997155e-10 | 754 |
| 1.2167 | 0.6400 | 1.2215 | 0.6549 | 9.997148e-10 | 755 |
| 1.2103 | 0.6518 | 1.2212 | 0.6549 | 9.99714e-10 | 756 |
| 1.2095 | 0.6471 | 1.2209 | 0.6549 | 9.997132e-10 | 757 |
| 1.2157 | 0.6259 | 1.2207 | 0.6549 | 9.997124e-10 | 758 |
| 1.2153 | 0.6424 | 1.2204 | 0.6549 | 9.997116e-10 | 759 |
| 1.2136 | 0.6400 | 1.2202 | 0.6549 | 9.997109e-10 | 760 |
| 1.2068 | 0.6353 | 1.2199 | 0.6549 | 9.997101e-10 | 761 |
| 1.2131 | 0.6329 | 1.2197 | 0.6549 | 9.997093e-10 | 762 |
| 1.2018 | 0.6494 | 1.2194 | 0.6549 | 9.997085e-10 | 763 |
| 1.2136 | 0.6353 | 1.2191 | 0.6549 | 9.997078e-10 | 764 |
| 1.2101 | 0.6306 | 1.2188 | 0.6549 | 9.99707e-10 | 765 |
| 1.2122 | 0.6447 | 1.2186 | 0.6549 | 9.997062e-10 | 766 |
| 1.2098 | 0.6353 | 1.2183 | 0.6549 | 9.997054e-10 | 767 |
| 1.2114 | 0.6518 | 1.2181 | 0.6549 | 9.997047e-10 | 768 |
| 1.2122 | 0.6400 | 1.2178 | 0.6620 | 9.997039e-10 | 769 |
| 1.2138 | 0.6235 | 1.2175 | 0.6690 | 9.997031e-10 | 770 |
| 1.2082 | 0.6588 | 1.2172 | 0.6761 | 9.997023e-10 | 771 |
| 1.2133 | 0.6518 | 1.2169 | 0.6761 | 9.997015e-10 | 772 |
| 1.2063 | 0.6329 | 1.2167 | 0.6761 | 9.997008e-10 | 773 |
| 1.2104 | 0.6541 | 1.2164 | 0.6761 | 9.997e-10 | 774 |
| 1.2060 | 0.6376 | 1.2161 | 0.6761 | 9.996992e-10 | 775 |
| 1.2030 | 0.6471 | 1.2158 | 0.6761 | 9.996984e-10 | 776 |
| 1.2076 | 0.6329 | 1.2155 | 0.6761 | 9.996977e-10 | 777 |
| 1.2008 | 0.6565 | 1.2153 | 0.6761 | 9.996969e-10 | 778 |
| 1.2092 | 0.6447 | 1.2150 | 0.6761 | 9.996961e-10 | 779 |
| 1.2116 | 0.6471 | 1.2147 | 0.6761 | 9.996953e-10 | 780 |
| 1.2111 | 0.6306 | 1.2144 | 0.6761 | 9.996945e-10 | 781 |
| 1.2123 | 0.6565 | 1.2142 | 0.6761 | 9.996938e-10 | 782 |
| 1.1970 | 0.6635 | 1.2139 | 0.6761 | 9.99693e-10 | 783 |
| 1.2024 | 0.6635 | 1.2136 | 0.6761 | 9.996922e-10 | 784 |
| 1.2029 | 0.6329 | 1.2134 | 0.6761 | 9.996914e-10 | 785 |
| 1.2050 | 0.6447 | 1.2131 | 0.6761 | 9.996907e-10 | 786 |
| 1.2117 | 0.6541 | 1.2128 | 0.6761 | 9.996899e-10 | 787 |
| 1.2021 | 0.6588 | 1.2126 | 0.6761 | 9.996891e-10 | 788 |
| 1.2075 | 0.6565 | 1.2123 | 0.6761 | 9.996883e-10 | 789 |
| 1.2131 | 0.6518 | 1.2120 | 0.6761 | 9.996876e-10 | 790 |
| 1.2062 | 0.6541 | 1.2118 | 0.6761 | 9.996868e-10 | 791 |
| 1.2005 | 0.6471 | 1.2115 | 0.6761 | 9.99686e-10 | 792 |
| 1.2104 | 0.6541 | 1.2112 | 0.6761 | 9.996852e-10 | 793 |
| 1.1939 | 0.6424 | 1.2110 | 0.6761 | 9.996844e-10 | 794 |
| 1.2017 | 0.6588 | 1.2107 | 0.6761 | 9.996837e-10 | 795 |
| 1.2061 | 0.6588 | 1.2105 | 0.6761 | 9.996829e-10 | 796 |
| 1.2084 | 0.6565 | 1.2102 | 0.6761 | 9.996821e-10 | 797 |
| 1.2063 | 0.6635 | 1.2099 | 0.6761 | 9.996813e-10 | 798 |
| 1.2001 | 0.6588 | 1.2096 | 0.6761 | 9.996806e-10 | 799 |
| 1.2047 | 0.6447 | 1.2094 | 0.6761 | 9.996798e-10 | 800 |
| 1.2034 | 0.6471 | 1.2092 | 0.6761 | 9.99679e-10 | 801 |
| 1.1968 | 0.6541 | 1.2089 | 0.6761 | 9.996782e-10 | 802 |
| 1.2095 | 0.6376 | 1.2086 | 0.6761 | 9.996775e-10 | 803 |
| 1.1969 | 0.6565 | 1.2083 | 0.6761 | 9.996767e-10 | 804 |
| 1.2043 | 0.6447 | 1.2080 | 0.6761 | 9.996759e-10 | 805 |
| 1.2058 | 0.6376 | 1.2078 | 0.6761 | 9.996751e-10 | 806 |
| 1.1986 | 0.6565 | 1.2075 | 0.6761 | 9.996743e-10 | 807 |
| 1.1983 | 0.6588 | 1.2073 | 0.6761 | 9.996736e-10 | 808 |
| 1.2041 | 0.6353 | 1.2070 | 0.6761 | 9.996728e-10 | 809 |
| 1.2055 | 0.6494 | 1.2068 | 0.6761 | 9.99672e-10 | 810 |
| 1.1934 | 0.6565 | 1.2065 | 0.6761 | 9.996712e-10 | 811 |
| 1.1971 | 0.6635 | 1.2063 | 0.6761 | 9.996705e-10 | 812 |
| 1.2028 | 0.6494 | 1.2060 | 0.6761 | 9.996697e-10 | 813 |
| 1.2042 | 0.6565 | 1.2058 | 0.6761 | 9.996689e-10 | 814 |
| 1.1954 | 0.6565 | 1.2055 | 0.6761 | 9.996681e-10 | 815 |
| 1.2005 | 0.6541 | 1.2052 | 0.6761 | 9.996673e-10 | 816 |
| 1.1996 | 0.6518 | 1.2050 | 0.6761 | 9.996666e-10 | 817 |
| 1.1968 | 0.6424 | 1.2047 | 0.6761 | 9.996658e-10 | 818 |
| 1.1947 | 0.6471 | 1.2045 | 0.6761 | 9.99665e-10 | 819 |
| 1.1982 | 0.6518 | 1.2042 | 0.6761 | 9.996642e-10 | 820 |
| 1.1967 | 0.6447 | 1.2039 | 0.6761 | 9.996635e-10 | 821 |
| 1.1976 | 0.6565 | 1.2037 | 0.6761 | 9.996627e-10 | 822 |
| 1.1990 | 0.6424 | 1.2034 | 0.6761 | 9.996619e-10 | 823 |
| 1.2013 | 0.6400 | 1.2032 | 0.6761 | 9.996611e-10 | 824 |
| 1.2046 | 0.6518 | 1.2029 | 0.6761 | 9.996604e-10 | 825 |
| 1.1975 | 0.6659 | 1.2027 | 0.6761 | 9.996596e-10 | 826 |
| 1.1907 | 0.6612 | 1.2025 | 0.6761 | 9.996588e-10 | 827 |
| 1.1963 | 0.6659 | 1.2022 | 0.6761 | 9.99658e-10 | 828 |
| 1.1901 | 0.6588 | 1.2019 | 0.6761 | 9.996572e-10 | 829 |
| 1.1920 | 0.6635 | 1.2017 | 0.6761 | 9.996565e-10 | 830 |
| 1.1900 | 0.6588 | 1.2014 | 0.6761 | 9.996557e-10 | 831 |
| 1.1954 | 0.6612 | 1.2012 | 0.6761 | 9.996549e-10 | 832 |
| 1.1956 | 0.6471 | 1.2010 | 0.6761 | 9.99654e-10 | 833 |
| 1.1882 | 0.6612 | 1.2007 | 0.6761 | 9.996531e-10 | 834 |
| 1.1963 | 0.6494 | 1.2004 | 0.6761 | 9.996522e-10 | 835 |
| 1.1932 | 0.6471 | 1.2002 | 0.6761 | 9.996514e-10 | 836 |
| 1.1955 | 0.6565 | 1.1999 | 0.6761 | 9.996505e-10 | 837 |
| 1.1932 | 0.6565 | 1.1997 | 0.6761 | 9.996496e-10 | 838 |
| 1.1943 | 0.6565 | 1.1994 | 0.6761 | 9.996487e-10 | 839 |
| 1.1885 | 0.6518 | 1.1991 | 0.6761 | 9.996478e-10 | 840 |
| 1.1975 | 0.6565 | 1.1989 | 0.6761 | 9.996469e-10 | 841 |
| 1.1930 | 0.6518 | 1.1986 | 0.6761 | 9.99646e-10 | 842 |
| 1.1836 | 0.6729 | 1.1984 | 0.6761 | 9.996451e-10 | 843 |
| 1.1839 | 0.6706 | 1.1982 | 0.6761 | 9.996443e-10 | 844 |
| 1.1870 | 0.6565 | 1.1979 | 0.6761 | 9.996434e-10 | 845 |
| 1.1919 | 0.6541 | 1.1976 | 0.6761 | 9.996425e-10 | 846 |
| 1.1877 | 0.6588 | 1.1974 | 0.6761 | 9.996416e-10 | 847 |
| 1.1914 | 0.6635 | 1.1971 | 0.6761 | 9.996407e-10 | 848 |
| 1.1953 | 0.6588 | 1.1969 | 0.6761 | 9.996398e-10 | 849 |
| 1.1865 | 0.6635 | 1.1966 | 0.6761 | 9.996389e-10 | 850 |
| 1.1927 | 0.6612 | 1.1964 | 0.6761 | 9.99638e-10 | 851 |
| 1.1831 | 0.6588 | 1.1961 | 0.6761 | 9.996372e-10 | 852 |
| 1.1877 | 0.6729 | 1.1959 | 0.6761 | 9.996363e-10 | 853 |
| 1.1787 | 0.6588 | 1.1956 | 0.6761 | 9.996354e-10 | 854 |
| 1.1773 | 0.6612 | 1.1954 | 0.6761 | 9.996345e-10 | 855 |
| 1.1871 | 0.6706 | 1.1951 | 0.6761 | 9.996336e-10 | 856 |
| 1.1812 | 0.6612 | 1.1949 | 0.6761 | 9.996327e-10 | 857 |
| 1.1870 | 0.6612 | 1.1946 | 0.6761 | 9.996318e-10 | 858 |
| 1.1824 | 0.6612 | 1.1944 | 0.6761 | 9.996309e-10 | 859 |
| 1.1842 | 0.6494 | 1.1942 | 0.6761 | 9.9963e-10 | 860 |
| 1.1800 | 0.6776 | 1.1939 | 0.6761 | 9.996292e-10 | 861 |
| 1.1848 | 0.6800 | 1.1937 | 0.6761 | 9.996283e-10 | 862 |
| 1.1904 | 0.6682 | 1.1934 | 0.6761 | 9.996274e-10 | 863 |
| 1.1798 | 0.6682 | 1.1932 | 0.6761 | 9.996265e-10 | 864 |
| 1.1813 | 0.6635 | 1.1930 | 0.6761 | 9.996256e-10 | 865 |
| 1.1847 | 0.6706 | 1.1927 | 0.6761 | 9.996247e-10 | 866 |
| 1.1915 | 0.6612 | 1.1925 | 0.6761 | 9.996238e-10 | 867 |
| 1.1793 | 0.6800 | 1.1923 | 0.6761 | 9.996229e-10 | 868 |
| 1.1836 | 0.6776 | 1.1920 | 0.6761 | 9.99622e-10 | 869 |
| 1.1884 | 0.6753 | 1.1918 | 0.6761 | 9.996212e-10 | 870 |
| 1.1780 | 0.6847 | 1.1916 | 0.6761 | 9.996203e-10 | 871 |
| 1.1850 | 0.6729 | 1.1913 | 0.6761 | 9.996194e-10 | 872 |
| 1.1930 | 0.6588 | 1.1911 | 0.6761 | 9.996185e-10 | 873 |
| 1.1882 | 0.6518 | 1.1908 | 0.6761 | 9.996176e-10 | 874 |
| 1.1870 | 0.6729 | 1.1906 | 0.6761 | 9.996167e-10 | 875 |
| 1.1886 | 0.6541 | 1.1903 | 0.6761 | 9.996158e-10 | 876 |
| 1.1785 | 0.6659 | 1.1901 | 0.6761 | 9.99615e-10 | 877 |
| 1.1861 | 0.6588 | 1.1898 | 0.6761 | 9.996141e-10 | 878 |
| 1.1864 | 0.6753 | 1.1896 | 0.6761 | 9.996132e-10 | 879 |
| 1.1904 | 0.6706 | 1.1893 | 0.6761 | 9.996123e-10 | 880 |
| 1.1829 | 0.6659 | 1.1890 | 0.6761 | 9.996114e-10 | 881 |
| 1.1840 | 0.6706 | 1.1888 | 0.6761 | 9.996105e-10 | 882 |
| 1.1742 | 0.6753 | 1.1886 | 0.6761 | 9.996096e-10 | 883 |
| 1.1818 | 0.6635 | 1.1883 | 0.6761 | 9.996087e-10 | 884 |
| 1.1794 | 0.6729 | 1.1881 | 0.6761 | 9.996078e-10 | 885 |
| 1.1860 | 0.6612 | 1.1879 | 0.6761 | 9.99607e-10 | 886 |
| 1.1812 | 0.6635 | 1.1876 | 0.6761 | 9.996061e-10 | 887 |
| 1.1820 | 0.6682 | 1.1874 | 0.6761 | 9.996052e-10 | 888 |
| 1.1819 | 0.6776 | 1.1871 | 0.6761 | 9.996043e-10 | 889 |
| 1.1871 | 0.6635 | 1.1869 | 0.6761 | 9.996034e-10 | 890 |
| 1.1799 | 0.6635 | 1.1867 | 0.6761 | 9.996025e-10 | 891 |
| 1.1803 | 0.6729 | 1.1864 | 0.6761 | 9.996016e-10 | 892 |
| 1.1827 | 0.6612 | 1.1861 | 0.6761 | 9.996007e-10 | 893 |
| 1.1818 | 0.6635 | 1.1859 | 0.6761 | 9.995998e-10 | 894 |
| 1.1818 | 0.6753 | 1.1857 | 0.6761 | 9.99599e-10 | 895 |
| 1.1763 | 0.6776 | 1.1854 | 0.6761 | 9.995981e-10 | 896 |
| 1.1753 | 0.6706 | 1.1852 | 0.6761 | 9.995972e-10 | 897 |
| 1.1783 | 0.6706 | 1.1849 | 0.6761 | 9.995963e-10 | 898 |
| 1.1787 | 0.6753 | 1.1847 | 0.6761 | 9.995954e-10 | 899 |
| 1.1771 | 0.6541 | 1.1845 | 0.6761 | 9.995945e-10 | 900 |
| 1.1735 | 0.6659 | 1.1842 | 0.6761 | 9.995936e-10 | 901 |
| 1.1812 | 0.6565 | 1.1840 | 0.6761 | 9.995927e-10 | 902 |
| 1.1791 | 0.6659 | 1.1837 | 0.6761 | 9.995919e-10 | 903 |
| 1.1768 | 0.6682 | 1.1835 | 0.6761 | 9.99591e-10 | 904 |
| 1.1781 | 0.6682 | 1.1833 | 0.6761 | 9.995901e-10 | 905 |
| 1.1747 | 0.6612 | 1.1830 | 0.6761 | 9.995892e-10 | 906 |
| 1.1791 | 0.6753 | 1.1828 | 0.6761 | 9.995883e-10 | 907 |
| 1.1805 | 0.6706 | 1.1825 | 0.6761 | 9.995874e-10 | 908 |
| 1.1753 | 0.6612 | 1.1823 | 0.6761 | 9.995865e-10 | 909 |
| 1.1684 | 0.6776 | 1.1820 | 0.6761 | 9.995856e-10 | 910 |
| 1.1760 | 0.6588 | 1.1818 | 0.6761 | 9.995847e-10 | 911 |
| 1.1827 | 0.6682 | 1.1815 | 0.6761 | 9.995839e-10 | 912 |
| 1.1749 | 0.6776 | 1.1813 | 0.6761 | 9.99583e-10 | 913 |
| 1.1826 | 0.6706 | 1.1810 | 0.6761 | 9.995821e-10 | 914 |
| 1.1789 | 0.6706 | 1.1808 | 0.6761 | 9.995812e-10 | 915 |
| 1.1759 | 0.6659 | 1.1806 | 0.6761 | 9.995803e-10 | 916 |
| 1.1679 | 0.6682 | 1.1804 | 0.6761 | 9.995794e-10 | 917 |
| 1.1653 | 0.6659 | 1.1801 | 0.6761 | 9.995785e-10 | 918 |
| 1.1746 | 0.6729 | 1.1799 | 0.6761 | 9.995776e-10 | 919 |
| 1.1765 | 0.6659 | 1.1796 | 0.6761 | 9.995768e-10 | 920 |
| 1.1719 | 0.6682 | 1.1794 | 0.6761 | 9.995759e-10 | 921 |
| 1.1728 | 0.6753 | 1.1791 | 0.6761 | 9.99575e-10 | 922 |
| 1.1680 | 0.6706 | 1.1789 | 0.6761 | 9.995741e-10 | 923 |
| 1.1740 | 0.6541 | 1.1786 | 0.6761 | 9.995732e-10 | 924 |
| 1.1794 | 0.6635 | 1.1784 | 0.6761 | 9.995723e-10 | 925 |
| 1.1689 | 0.6753 | 1.1782 | 0.6761 | 9.995714e-10 | 926 |
| 1.1742 | 0.6729 | 1.1780 | 0.6761 | 9.995705e-10 | 927 |
| 1.1682 | 0.6706 | 1.1777 | 0.6761 | 9.995696e-10 | 928 |
| 1.1695 | 0.6706 | 1.1775 | 0.6761 | 9.995688e-10 | 929 |
| 1.1724 | 0.6682 | 1.1773 | 0.6761 | 9.995679e-10 | 930 |
| 1.1782 | 0.6729 | 1.1770 | 0.6761 | 9.99567e-10 | 931 |
| 1.1631 | 0.6776 | 1.1768 | 0.6761 | 9.995661e-10 | 932 |
| 1.1734 | 0.6659 | 1.1766 | 0.6761 | 9.995652e-10 | 933 |
| 1.1639 | 0.6706 | 1.1763 | 0.6761 | 9.995643e-10 | 934 |
| 1.1755 | 0.6729 | 1.1761 | 0.6761 | 9.995634e-10 | 935 |
| 1.1706 | 0.6706 | 1.1759 | 0.6761 | 9.995625e-10 | 936 |
| 1.1671 | 0.6682 | 1.1757 | 0.6761 | 9.995617e-10 | 937 |
| 1.1684 | 0.6753 | 1.1754 | 0.6761 | 9.995608e-10 | 938 |
| 1.1744 | 0.6753 | 1.1752 | 0.6761 | 9.995599e-10 | 939 |
| 1.1667 | 0.6682 | 1.1750 | 0.6761 | 9.99559e-10 | 940 |
| 1.1703 | 0.6682 | 1.1748 | 0.6761 | 9.995581e-10 | 941 |
| 1.1656 | 0.6682 | 1.1746 | 0.6761 | 9.995572e-10 | 942 |
| 1.1696 | 0.6682 | 1.1744 | 0.6761 | 9.995563e-10 | 943 |
| 1.1650 | 0.6706 | 1.1741 | 0.6761 | 9.995554e-10 | 944 |
| 1.1644 | 0.6706 | 1.1739 | 0.6761 | 9.995544e-10 | 945 |
| 1.1701 | 0.6776 | 1.1737 | 0.6761 | 9.995534e-10 | 946 |
| 1.1635 | 0.6753 | 1.1734 | 0.6761 | 9.995524e-10 | 947 |
| 1.1717 | 0.6729 | 1.1732 | 0.6761 | 9.995514e-10 | 948 |
| 1.1740 | 0.6635 | 1.1730 | 0.6761 | 9.995504e-10 | 949 |
| 1.1675 | 0.6635 | 1.1727 | 0.6761 | 9.995494e-10 | 950 |
| 1.1670 | 0.6659 | 1.1725 | 0.6761 | 9.995484e-10 | 951 |
| 1.1695 | 0.6776 | 1.1723 | 0.6761 | 9.995474e-10 | 952 |
| 1.1651 | 0.6729 | 1.1720 | 0.6761 | 9.995464e-10 | 953 |
| 1.1642 | 0.6588 | 1.1718 | 0.6761 | 9.995454e-10 | 954 |
| 1.1652 | 0.6729 | 1.1716 | 0.6761 | 9.995444e-10 | 955 |
| 1.1673 | 0.6682 | 1.1714 | 0.6761 | 9.995434e-10 | 956 |
| 1.1649 | 0.6729 | 1.1712 | 0.6761 | 9.995424e-10 | 957 |
| 1.1665 | 0.6753 | 1.1710 | 0.6761 | 9.995414e-10 | 958 |
| 1.1633 | 0.6776 | 1.1707 | 0.6761 | 9.995405e-10 | 959 |
| 1.1625 | 0.6635 | 1.1705 | 0.6761 | 9.995395e-10 | 960 |
| 1.1668 | 0.6635 | 1.1703 | 0.6761 | 9.995385e-10 | 961 |
| 1.1607 | 0.6729 | 1.1701 | 0.6761 | 9.995375e-10 | 962 |
| 1.1697 | 0.6706 | 1.1699 | 0.6761 | 9.995365e-10 | 963 |
| 1.1637 | 0.6753 | 1.1696 | 0.6761 | 9.995355e-10 | 964 |
| 1.1644 | 0.6729 | 1.1694 | 0.6761 | 9.995345e-10 | 965 |
| 1.1613 | 0.6729 | 1.1692 | 0.6761 | 9.995335e-10 | 966 |
| 1.1685 | 0.6612 | 1.1690 | 0.6761 | 9.995325e-10 | 967 |
| 1.1595 | 0.6706 | 1.1688 | 0.6761 | 9.995315e-10 | 968 |
| 1.1650 | 0.6706 | 1.1686 | 0.6761 | 9.995305e-10 | 969 |
| 1.1582 | 0.6682 | 1.1684 | 0.6761 | 9.995295e-10 | 970 |
| 1.1609 | 0.6729 | 1.1682 | 0.6761 | 9.995285e-10 | 971 |
| 1.1619 | 0.6706 | 1.1679 | 0.6761 | 9.995275e-10 | 972 |
| 1.1618 | 0.6776 | 1.1677 | 0.6761 | 9.995265e-10 | 973 |
| 1.1594 | 0.6682 | 1.1675 | 0.6761 | 9.995255e-10 | 974 |
| 1.1572 | 0.6753 | 1.1673 | 0.6761 | 9.995245e-10 | 975 |
| 1.1591 | 0.6776 | 1.1670 | 0.6761 | 9.995235e-10 | 976 |
| 1.1600 | 0.6729 | 1.1668 | 0.6761 | 9.995225e-10 | 977 |
| 1.1590 | 0.6635 | 1.1666 | 0.6761 | 9.995215e-10 | 978 |
| 1.1570 | 0.6753 | 1.1664 | 0.6761 | 9.995205e-10 | 979 |
| 1.1615 | 0.6729 | 1.1662 | 0.6761 | 9.995195e-10 | 980 |
| 1.1601 | 0.6776 | 1.1660 | 0.6761 | 9.995185e-10 | 981 |
| 1.1605 | 0.6682 | 1.1658 | 0.6761 | 9.995175e-10 | 982 |
| 1.1557 | 0.6800 | 1.1656 | 0.6761 | 9.995165e-10 | 983 |
| 1.1575 | 0.6729 | 1.1653 | 0.6761 | 9.995155e-10 | 984 |
| 1.1531 | 0.6659 | 1.1651 | 0.6761 | 9.995145e-10 | 985 |
| 1.1654 | 0.6753 | 1.1649 | 0.6761 | 9.995135e-10 | 986 |
| 1.1555 | 0.6776 | 1.1647 | 0.6761 | 9.995125e-10 | 987 |
| 1.1603 | 0.6753 | 1.1645 | 0.6761 | 9.995115e-10 | 988 |
| 1.1605 | 0.6729 | 1.1643 | 0.6761 | 9.995105e-10 | 989 |
| 1.1575 | 0.6682 | 1.1640 | 0.6761 | 9.995095e-10 | 990 |
| 1.1633 | 0.6776 | 1.1638 | 0.6761 | 9.995085e-10 | 991 |
| 1.1637 | 0.6776 | 1.1636 | 0.6761 | 9.995075e-10 | 992 |
| 1.1583 | 0.6753 | 1.1634 | 0.6761 | 9.995065e-10 | 993 |
| 1.1557 | 0.6824 | 1.1632 | 0.6761 | 9.995055e-10 | 994 |
| 1.1611 | 0.6682 | 1.1629 | 0.6761 | 9.995045e-10 | 995 |
| 1.1580 | 0.6659 | 1.1627 | 0.6761 | 9.995035e-10 | 996 |
| 1.1599 | 0.6682 | 1.1625 | 0.6761 | 9.995025e-10 | 997 |
| 1.1575 | 0.6824 | 1.1623 | 0.6761 | 9.995015e-10 | 998 |
| 1.1645 | 0.6635 | 1.1621 | 0.6761 | 9.995005e-10 | 999 |
### Framework versions
- Transformers 4.29.0.dev0
- TensorFlow 2.9.1
- Datasets 2.8.0
- Tokenizers 0.13.2
| 96,393 | [
[
-0.049652099609375,
-0.0307159423828125,
0.0272216796875,
0.00800323486328125,
-0.00046062469482421875,
0.0085906982421875,
0.0004935264587402344,
-0.007022857666015625,
0.056915283203125,
0.0255279541015625,
-0.0447998046875,
-0.04595947265625,
-0.0444946289062... |
Svetlana0303/Regression_bert_aug_CustomLoss_2 | 2023-05-03T09:45:40.000Z | [
"transformers",
"tf",
"distilbert",
"text-classification",
"generated_from_keras_callback",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | text-classification | Svetlana0303 | null | null | Svetlana0303/Regression_bert_aug_CustomLoss_2 | 0 | 2 | transformers | 2023-05-03T09:45:11 | ---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: Regression_bert_aug_CustomLoss_2
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# Regression_bert_aug_CustomLoss_2
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.2338
- Train Mae: 0.5263
- Train Mse: 0.4258
- Train R2-score: 0.7899
- Validation Loss: 0.2340
- Validation Mae: 0.5490
- Validation Mse: 0.4329
- Validation R2-score: 0.7254
- Epoch: 14
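The Mae, Mse, and R2-score values above follow the standard regression definitions (mean absolute error, mean squared error, and coefficient of determination). As a rough pure-Python sketch of how such numbers are computed — the sample targets and predictions below are invented for illustration, not taken from this model:

```python
# Standard regression metrics as reported above: MAE, MSE, and R^2.
# The example targets/predictions are made up for illustration only.

def regression_metrics(y_true, y_pred):
    n = len(y_true)
    mae = sum(abs(t - p) for t, p in zip(y_true, y_pred)) / n
    mse = sum((t - p) ** 2 for t, p in zip(y_true, y_pred)) / n
    mean_t = sum(y_true) / n
    ss_res = sum((t - p) ** 2 for t, p in zip(y_true, y_pred))
    ss_tot = sum((t - mean_t) ** 2 for t in y_true)
    r2 = 1.0 - ss_res / ss_tot  # 1.0 means a perfect fit
    return mae, mse, r2

mae, mse, r2 = regression_metrics([0.0, 1.0, 2.0, 3.0], [0.1, 0.9, 2.2, 2.8])
print(round(mae, 3), round(mse, 3), round(r2, 3))
```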
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': None, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': True, 'is_legacy_optimizer': False, 'learning_rate': 1e-04, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False}
- training_precision: float32
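As a hedged sketch of what the Adam settings above amount to, here is a single simplified Adam update step in plain Python using the listed `learning_rate=1e-04`, `beta_1=0.9`, `beta_2=0.999`, and `epsilon=1e-07`. This is an illustration of the update rule only, not the Keras implementation (it omits weight decay, EMA, and tensor shapes):

```python
# One simplified Adam update step with the hyperparameters listed above.
def adam_step(param, grad, m, v, t, lr=1e-4, b1=0.9, b2=0.999, eps=1e-7):
    m = b1 * m + (1 - b1) * grad          # first-moment (mean) estimate
    v = b2 * v + (1 - b2) * grad * grad   # second-moment (variance) estimate
    m_hat = m / (1 - b1 ** t)             # bias correction for step t
    v_hat = v / (1 - b2 ** t)
    param = param - lr * m_hat / (v_hat ** 0.5 + eps)
    return param, m, v

p, m, v = 1.0, 0.0, 0.0
for t in range(1, 4):  # three steps with a constant gradient of 2.0
    p, m, v = adam_step(p, 2.0, m, v, t)
print(p)  # the parameter moves against the gradient by roughly lr per step
```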
### Training results
| Train Loss | Train Mae | Train Mse | Train R2-score | Validation Loss | Validation Mae | Validation Mse | Validation R2-score | Epoch |
|:----------:|:---------:|:---------:|:--------------:|:---------------:|:--------------:|:--------------:|:-------------------:|:-----:|
| 0.2004 | 0.4982 | 0.3687 | 0.6907 | 0.1488 | 0.4239 | 0.3023 | 0.7428 | 0 |
| 0.1118 | 0.4054 | 0.2460 | 0.7552 | 0.0783 | 0.3502 | 0.1873 | 0.8501 | 1 |
| 0.0531 | 0.3256 | 0.1543 | 0.8049 | 0.0489 | 0.3257 | 0.1489 | 0.8412 | 2 |
| 0.0342 | 0.2826 | 0.1151 | 0.7986 | 0.0328 | 0.2697 | 0.1215 | 0.9246 | 3 |
| 0.0266 | 0.2587 | 0.0962 | 0.8802 | 0.0713 | 0.2884 | 0.1297 | 0.8729 | 4 |
| 0.0543 | 0.3022 | 0.1388 | 0.7724 | 0.0609 | 0.3238 | 0.1524 | 0.7723 | 5 |
| 0.0380 | 0.2756 | 0.1114 | 0.8822 | 0.0421 | 0.1984 | 0.0700 | 0.9070 | 6 |
| 0.0593 | 0.3134 | 0.1537 | 0.8764 | 0.2335 | 0.5183 | 0.4636 | 0.7816 | 7 |
| 0.2330 | 0.5234 | 0.4182 | -1.5020 | 0.2656 | 0.5726 | 0.3948 | 0.5533 | 8 |
| 0.2359 | 0.5149 | 0.4195 | 0.7932 | 0.2347 | 0.5263 | 0.4461 | 0.7502 | 9 |
| 0.2341 | 0.5204 | 0.4268 | 0.8100 | 0.2341 | 0.5509 | 0.4307 | 0.7220 | 10 |
| 0.2335 | 0.5250 | 0.4235 | 0.8053 | 0.2328 | 0.5433 | 0.4334 | 0.7322 | 11 |
| 0.2319 | 0.5217 | 0.4244 | 0.7825 | 0.2352 | 0.5549 | 0.4296 | 0.7158 | 12 |
| 0.2323 | 0.5243 | 0.4234 | 0.7877 | 0.2346 | 0.5536 | 0.4286 | 0.7170 | 13 |
| 0.2338 | 0.5263 | 0.4258 | 0.7899 | 0.2340 | 0.5490 | 0.4329 | 0.7254 | 14 |
### Framework versions
- Transformers 4.28.1
- TensorFlow 2.12.0
- Datasets 2.12.0
- Tokenizers 0.13.3
| 3,857 | [
[
-0.0498046875,
-0.045928955078125,
0.0249481201171875,
0.00467681884765625,
-0.0137481689453125,
-0.01456451416015625,
-0.0019741058349609375,
-0.01299285888671875,
0.03741455078125,
0.0198211669921875,
-0.0498046875,
-0.0494384765625,
-0.057403564453125,
-0... |
guriko/autotrain-resume-55035128532 | 2023-05-03T09:48:29.000Z | [
"transformers",
"pytorch",
"deberta-v2",
"text-classification",
"autotrain",
"en",
"dataset:guriko/autotrain-data-resume",
"co2_eq_emissions",
"endpoints_compatible",
"region:us"
] | text-classification | guriko | null | null | guriko/autotrain-resume-55035128532 | 1 | 2 | transformers | 2023-05-03T09:46:32 | ---
tags:
- autotrain
- text-classification
language:
- en
widget:
- text: "I love AutoTrain 🤗"
datasets:
- guriko/autotrain-data-resume
co2_eq_emissions:
emissions: 0.0031738020850551043
---
# Model Trained Using AutoTrain
- Problem type: Multi-class Classification
- Model ID: 55035128532
- CO2 Emissions (in grams): 0.0032
## Validation Metrics
- Loss: 0.658
- Accuracy: 0.812
- Macro F1: 0.759
- Micro F1: 0.812
- Weighted F1: 0.787
- Macro Precision: 0.884
- Micro Precision: 0.812
- Weighted Precision: 0.856
- Macro Recall: 0.750
- Micro Recall: 0.812
- Weighted Recall: 0.812
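For a single-label multi-class setup like this one, the micro-averaged precision, recall, and F1 all collapse to plain accuracy (hence the identical 0.812 values above), while the macro average weights every class equally regardless of support. A pure-Python illustration on toy labels, not this model's data:

```python
def f1_for_class(y_true, y_pred, cls):
    """Per-class F1 from true positives, false positives, and false negatives."""
    tp = sum(t == cls and p == cls for t, p in zip(y_true, y_pred))
    fp = sum(t != cls and p == cls for t, p in zip(y_true, y_pred))
    fn = sum(t == cls and p != cls for t, p in zip(y_true, y_pred))
    prec = tp / (tp + fp) if tp + fp else 0.0
    rec = tp / (tp + fn) if tp + fn else 0.0
    return 2 * prec * rec / (prec + rec) if prec + rec else 0.0

y_true = [0, 0, 0, 1, 1, 2]
y_pred = [0, 0, 1, 1, 1, 0]
classes = sorted(set(y_true))
# Macro: unweighted mean of per-class F1; rare classes count fully
macro_f1 = sum(f1_for_class(y_true, y_pred, c) for c in classes) / len(classes)
# Micro: pooled counts; for single-label multi-class this equals accuracy
micro_f1 = sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)
```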
## Usage
You can use cURL to access this model:
```bash
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoTrain"}' https://api-inference.huggingface.co/models/guriko/autotrain-resume-55035128532
```
Or use the Python API:
```python
from transformers import AutoModelForSequenceClassification, AutoTokenizer
model = AutoModelForSequenceClassification.from_pretrained("guriko/autotrain-resume-55035128532", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("guriko/autotrain-resume-55035128532", use_auth_token=True)
inputs = tokenizer("I love AutoTrain", return_tensors="pt")
outputs = model(**inputs)
``` | 1,267 | [
[
-0.0309295654296875,
-0.0212554931640625,
0.006946563720703125,
0.01508331298828125,
-0.0009613037109375,
0.0032711029052734375,
-0.005390167236328125,
-0.0158233642578125,
-0.006160736083984375,
0.00612640380859375,
-0.047271728515625,
-0.03302001953125,
-0.056... |
Svetlana0303/Regression_bert_aug_CustomLoss_3 | 2023-05-03T10:06:55.000Z | [
"transformers",
"tf",
"distilbert",
"text-classification",
"generated_from_keras_callback",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | text-classification | Svetlana0303 | null | null | Svetlana0303/Regression_bert_aug_CustomLoss_3 | 0 | 2 | transformers | 2023-05-03T10:06:27 | ---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: Regression_bert_aug_CustomLoss_3
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# Regression_bert_aug_CustomLoss_3
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.1282
- Train Mae: 0.3851
- Train Mse: 0.1862
- Train R2-score: 0.7249
- Validation Loss: 0.1246
- Validation Mae: 0.3798
- Validation Mse: 0.1857
- Validation R2-score: 0.8337
- Epoch: 14
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': None, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': True, 'is_legacy_optimizer': False, 'learning_rate': 1e-04, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False}
- training_precision: float32
### Training results
| Train Loss | Train Mae | Train Mse | Train R2-score | Validation Loss | Validation Mae | Validation Mse | Validation R2-score | Epoch |
|:----------:|:---------:|:---------:|:--------------:|:---------------:|:--------------:|:--------------:|:-------------------:|:-----:|
| 0.1451 | 0.4188 | 0.2732 | 0.8063 | 0.0642 | 0.3529 | 0.1994 | 0.8824 | 0 |
| 0.0567 | 0.3078 | 0.1428 | 0.7511 | 0.0452 | 0.3038 | 0.1257 | 0.8820 | 1 |
| 0.0380 | 0.2662 | 0.1007 | 0.8889 | 0.0661 | 0.2984 | 0.1442 | 0.8883 | 2 |
| 0.0363 | 0.2542 | 0.0980 | 0.8034 | 0.0318 | 0.2567 | 0.0978 | 0.9117 | 3 |
| 0.0279 | 0.2257 | 0.0714 | 0.9002 | 0.0327 | 0.2305 | 0.0793 | 0.8920 | 4 |
| 0.0241 | 0.2046 | 0.0593 | 0.8695 | 0.0306 | 0.2353 | 0.0813 | 0.9330 | 5 |
| 0.0230 | 0.1960 | 0.0540 | 0.8762 | 0.0284 | 0.2160 | 0.0710 | 0.9197 | 6 |
| 0.0223 | 0.1914 | 0.0510 | 0.9366 | 0.0285 | 0.2251 | 0.0791 | 0.9282 | 7 |
| 0.0223 | 0.1923 | 0.0516 | 0.9498 | 0.0306 | 0.2042 | 0.0748 | 0.9309 | 8 |
| 0.0231 | 0.1827 | 0.0493 | 0.8516 | 0.0302 | 0.2009 | 0.0682 | 0.9198 | 9 |
| 0.0335 | 0.1794 | 0.0576 | 0.9259 | 0.0765 | 0.3684 | 0.2192 | 0.8243 | 10 |
| 0.1380 | 0.3960 | 0.2567 | 0.8748 | 0.1037 | 0.4172 | 0.2244 | 0.6992 | 11 |
| 0.1078 | 0.4071 | 0.2170 | 0.8256 | 0.1219 | 0.4020 | 0.2234 | 0.7304 | 12 |
| 0.1217 | 0.3807 | 0.2060 | 0.8084 | 0.1434 | 0.3934 | 0.2113 | 0.8317 | 13 |
| 0.1282 | 0.3851 | 0.1862 | 0.7249 | 0.1246 | 0.3798 | 0.1857 | 0.8337 | 14 |
### Framework versions
- Transformers 4.28.1
- TensorFlow 2.12.0
- Datasets 2.12.0
- Tokenizers 0.13.3
| 3,857 | [
[
-0.05035400390625,
-0.04248046875,
0.0266265869140625,
0.006710052490234375,
-0.01495361328125,
-0.01491546630859375,
0.00084686279296875,
-0.01337432861328125,
0.036102294921875,
0.01837158203125,
-0.048309326171875,
-0.051177978515625,
-0.056640625,
-0.011... |
danielizham/whisper-small-es | 2023-05-12T04:40:36.000Z | [
"transformers",
"pytorch",
"whisper",
"automatic-speech-recognition",
"whisper-event",
"generated_from_trainer",
"es",
"dataset:mozilla-foundation/common_voice_11_0",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | danielizham | null | null | danielizham/whisper-small-es | 1 | 2 | transformers | 2023-05-03T11:55:41 | ---
language:
- es
license: apache-2.0
tags:
- whisper-event
- generated_from_trainer
datasets:
- mozilla-foundation/common_voice_11_0
metrics:
- wer
model-index:
- name: Whisper Small Spanish
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: mozilla-foundation/common_voice_11_0 es
type: mozilla-foundation/common_voice_11_0
config: es
split: test
args: es
metrics:
- name: Wer
type: wer
value: 8.212281066472636
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Whisper Small Spanish
This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on the mozilla-foundation/common_voice_11_0 es dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2210
- Wer: 8.2123
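WER here is the word-level edit distance between reference and hypothesis transcripts, divided by the reference length (reported above as a percentage). A minimal sketch of the computation:

```python
def wer(reference: str, hypothesis: str) -> float:
    """Word error rate: word-level Levenshtein distance / reference word count."""
    ref, hyp = reference.split(), hypothesis.split()
    # Standard edit-distance DP table over words
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i  # deletions
    for j in range(len(hyp) + 1):
        d[0][j] = j  # insertions
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1  # substitution cost
            d[i][j] = min(d[i - 1][j] + 1, d[i][j - 1] + 1, d[i - 1][j - 1] + cost)
    return d[len(ref)][len(hyp)] / len(ref)
```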
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 5000
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.1386 | 4.01 | 1000 | 0.2464 | 9.8000 |
| 0.1098 | 8.01 | 2000 | 0.2272 | 8.6229 |
| 0.028 | 12.02 | 3000 | 0.2577 | 8.6956 |
| 0.1083 | 16.02 | 4000 | 0.2210 | 8.2123 |
| 0.0189 | 20.03 | 5000 | 0.2520 | 8.4455 |
### Framework versions
- Transformers 4.29.0.dev0
- Pytorch 2.0.0+cu117
- Datasets 2.12.1.dev0
- Tokenizers 0.13.3
| 2,160 | [
[
-0.034942626953125,
-0.03851318359375,
0.002651214599609375,
0.020904541015625,
-0.0178680419921875,
-0.032562255859375,
-0.02569580078125,
-0.029541015625,
0.0238189697265625,
0.0157012939453125,
-0.05682373046875,
-0.040557861328125,
-0.0428466796875,
-0.0... |
rlagofls33/kogpt2-base-v2-finetuned-klue-ner | 2023-05-06T11:23:35.000Z | [
"transformers",
"pytorch",
"tensorboard",
"gpt2",
"token-classification",
"generated_from_trainer",
"dataset:klue",
"license:cc-by-nc-sa-4.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | token-classification | rlagofls33 | null | null | rlagofls33/kogpt2-base-v2-finetuned-klue-ner | 0 | 2 | transformers | 2023-05-03T11:56:12 | ---
license: cc-by-nc-sa-4.0
tags:
- generated_from_trainer
datasets:
- klue
metrics:
- f1
model-index:
- name: kogpt2-base-v2-finetuned-klue-ner
results:
- task:
name: Token Classification
type: token-classification
dataset:
name: klue
type: klue
config: ner
split: validation
args: ner
metrics:
- name: F1
type: f1
value: 0.37298165525403665
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# kogpt2-base-v2-finetuned-klue-ner
This model is a fine-tuned version of [skt/kogpt2-base-v2](https://huggingface.co/skt/kogpt2-base-v2) on the klue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4076
- F1: 0.3730
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.6084 | 1.0 | 876 | 0.5353 | 0.2118 |
| 0.3911 | 2.0 | 1752 | 0.4691 | 0.3041 |
| 0.2855 | 3.0 | 2628 | 0.4076 | 0.3730 |
### Framework versions
- Transformers 4.28.1
- Pytorch 2.0.0+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
| 1,738 | [
[
-0.03076171875,
-0.03753662109375,
0.018829345703125,
0.010772705078125,
-0.038299560546875,
-0.03228759765625,
-0.011993408203125,
-0.02130126953125,
-0.00839996337890625,
0.0323486328125,
-0.04705810546875,
-0.0276641845703125,
-0.05426025390625,
-0.028839... |
tsobastiv/autotrain-product-analysis-55101128694 | 2023-05-03T12:32:41.000Z | [
"transformers",
"pytorch",
"vit",
"image-classification",
"autotrain",
"vision",
"dataset:tsobastiv/autotrain-data-product-analysis",
"co2_eq_emissions",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | image-classification | tsobastiv | null | null | tsobastiv/autotrain-product-analysis-55101128694 | 0 | 2 | transformers | 2023-05-03T12:31:21 | ---
tags:
- autotrain
- vision
- image-classification
datasets:
- tsobastiv/autotrain-data-product-analysis
widget:
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/tiger.jpg
example_title: Tiger
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/teapot.jpg
example_title: Teapot
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/palace.jpg
example_title: Palace
co2_eq_emissions:
emissions: 0.5776073248431609
---
# Model Trained Using AutoTrain
- Problem type: Multi-class Classification
- Model ID: 55101128694
- CO2 Emissions (in grams): 0.5776
## Validation Metrics
- Loss: 0.129
- Accuracy: 1.000
- Macro F1: 1.000
- Micro F1: 1.000
- Weighted F1: 1.000
- Macro Precision: 1.000
- Micro Precision: 1.000
- Weighted Precision: 1.000
- Macro Recall: 1.000
- Micro Recall: 1.000
- Weighted Recall: 1.000 | 887 | [
[
-0.0229949951171875,
-0.00934600830078125,
0.01470947265625,
0.004459381103515625,
0.00848388671875,
0.0045318603515625,
0.00572967529296875,
-0.016998291015625,
-0.0237884521484375,
-0.0045166015625,
-0.0306243896484375,
-0.042572021484375,
-0.042205810546875,
... |
leonardosaveri/DSChallenge_Roberta | 2023-05-03T13:12:48.000Z | [
"transformers",
"pytorch",
"tensorboard",
"xlm-roberta",
"text-classification",
"generated_from_trainer",
"license:mit",
"endpoints_compatible",
"region:us"
] | text-classification | leonardosaveri | null | null | leonardosaveri/DSChallenge_Roberta | 0 | 2 | transformers | 2023-05-03T12:56:00 | ---
license: mit
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: DSChallenge_Roberta
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# DSChallenge_Roberta
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2321
- Accuracy: 0.9333
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.3884 | 1.0 | 746 | 0.1892 | 0.9303 |
| 0.227 | 2.0 | 1492 | 0.2321 | 0.9333 |
### Framework versions
- Transformers 4.28.1
- Pytorch 2.0.0+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
| 1,383 | [
[
-0.0267791748046875,
-0.048370361328125,
0.03021240234375,
-0.0008511543273925781,
-0.0268707275390625,
-0.032012939453125,
-0.014892578125,
-0.0133514404296875,
0.002834320068359375,
0.035430908203125,
-0.059722900390625,
-0.052825927734375,
-0.058013916015625,... |
Svetlana0303/Regression_albert_NOaug_CustomLoss_3 | 2023-05-03T13:51:37.000Z | [
"transformers",
"tf",
"albert",
"text-classification",
"generated_from_keras_callback",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | text-classification | Svetlana0303 | null | null | Svetlana0303/Regression_albert_NOaug_CustomLoss_3 | 0 | 2 | transformers | 2023-05-03T13:51:34 | ---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: Regression_albert_NOaug_CustomLoss_3
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# Regression_albert_NOaug_CustomLoss_3
This model is a fine-tuned version of [albert-base-v2](https://huggingface.co/albert-base-v2) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.1481
- Train Mae: 0.5450
- Train Mse: 0.3746
- Train R2-score: 0.7999
- Validation Loss: 0.1364
- Validation Mae: 0.6382
- Validation Mse: 0.4728
- Validation R2-score: 0.8856
- Epoch: 14
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': None, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': True, 'is_legacy_optimizer': False, 'learning_rate': 1e-04, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False}
- training_precision: float32
### Training results
| Train Loss | Train Mae | Train Mse | Train R2-score | Validation Loss | Validation Mae | Validation Mse | Validation R2-score | Epoch |
|:----------:|:---------:|:---------:|:--------------:|:---------------:|:--------------:|:--------------:|:-------------------:|:-----:|
| 0.2075 | 0.6054 | 0.4691 | 0.1331 | 0.1389 | 0.6396 | 0.4919 | 0.8859 | 0 |
| 0.1982 | 0.5741 | 0.4337 | 0.8066 | 0.1275 | 0.5890 | 0.3885 | 0.8851 | 1 |
| 0.1775 | 0.5592 | 0.3934 | 0.7398 | 0.1849 | 0.6878 | 0.5975 | 0.8749 | 2 |
| 0.1511 | 0.5350 | 0.3713 | 0.8239 | 0.1441 | 0.6497 | 0.4982 | 0.8841 | 3 |
| 0.1489 | 0.5429 | 0.3710 | 0.8262 | 0.1319 | 0.6294 | 0.4547 | 0.8862 | 4 |
| 0.1477 | 0.5429 | 0.3837 | 0.7268 | 0.1269 | 0.6120 | 0.4229 | 0.8865 | 5 |
| 0.1580 | 0.5603 | 0.3782 | 0.6256 | 0.1556 | 0.6630 | 0.5300 | 0.8817 | 6 |
| 0.1491 | 0.5482 | 0.3743 | 0.8104 | 0.1515 | 0.6586 | 0.5192 | 0.8826 | 7 |
| 0.1499 | 0.5354 | 0.3661 | 0.8207 | 0.2043 | 0.7009 | 0.6370 | 0.8702 | 8 |
| 0.1811 | 0.5516 | 0.4196 | 0.7534 | 0.1303 | 0.6252 | 0.4465 | 0.8865 | 9 |
| 0.1547 | 0.5531 | 0.3798 | 0.6862 | 0.1438 | 0.6493 | 0.4971 | 0.8842 | 10 |
| 0.1464 | 0.5429 | 0.3604 | 0.7679 | 0.1549 | 0.6622 | 0.5282 | 0.8818 | 11 |
| 0.1507 | 0.5507 | 0.3787 | 0.7918 | 0.1489 | 0.6556 | 0.5119 | 0.8831 | 12 |
| 0.1555 | 0.5530 | 0.3888 | 0.7355 | 0.1269 | 0.6126 | 0.4238 | 0.8866 | 13 |
| 0.1481 | 0.5450 | 0.3746 | 0.7999 | 0.1364 | 0.6382 | 0.4728 | 0.8856 | 14 |
### Framework versions
- Transformers 4.28.1
- TensorFlow 2.12.0
- Datasets 2.12.0
- Tokenizers 0.13.3
| 3,847 | [
[
-0.051239013671875,
-0.041900634765625,
0.02392578125,
0.003398895263671875,
-0.00675201416015625,
-0.01309967041015625,
0.00351715087890625,
-0.013275146484375,
0.04095458984375,
0.0267181396484375,
-0.048583984375,
-0.05120849609375,
-0.05206298828125,
-0.... |
Sachinkelenjaguri/climate-tcfd-recommendation | 2023-05-03T13:59:38.000Z | [
"transformers",
"pytorch",
"distilbert",
"text-classification",
"autotrain",
"unk",
"dataset:Sachinkelenjaguri/autotrain-data-climate-tcfd-recommendation",
"co2_eq_emissions",
"endpoints_compatible",
"region:us"
] | text-classification | Sachinkelenjaguri | null | null | Sachinkelenjaguri/climate-tcfd-recommendation | 0 | 2 | transformers | 2023-05-03T13:52:18 | ---
tags:
- autotrain
- text-classification
language:
- unk
widget:
- text: "I love AutoTrain 🤗"
datasets:
- Sachinkelenjaguri/autotrain-data-climate-tcfd-recommendation
co2_eq_emissions:
emissions: 0.0015416078395342335
---
# Classes
0 - None <br>
1 - Metrics and Targets <br>
2 - Strategy <br>
3 - Risk Management <br>
4 - Governance <br>
# Model Trained Using AutoTrain
- Problem type: Multi-class Classification
- Model ID: 55122128742
- CO2 Emissions (in grams): 0.0015
## Validation Metrics
- Loss: 0.646
- Accuracy: 0.777
- Macro F1: 0.727
- Micro F1: 0.777
- Weighted F1: 0.779
- Macro Precision: 0.734
- Micro Precision: 0.777
- Weighted Precision: 0.786
- Macro Recall: 0.731
- Micro Recall: 0.777
- Weighted Recall: 0.777
## Usage
You can use cURL to access this model:
```bash
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoTrain"}' https://api-inference.huggingface.co/models/Sachinkelenjaguri/climate-tcfd-recommendation
```
Or use the Python API:
```python
from transformers import AutoModelForSequenceClassification, AutoTokenizer
model = AutoModelForSequenceClassification.from_pretrained("Sachinkelenjaguri/climate-tcfd-recommendation", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("Sachinkelenjaguri/climate-tcfd-recommendation", use_auth_token=True)
inputs = tokenizer("I love AutoTrain", return_tensors="pt")
outputs = model(**inputs)
``` | 1,447 | [
[
-0.03228759765625,
-0.0298614501953125,
0.01202392578125,
0.00894927978515625,
-0.005756378173828125,
0.01158905029296875,
-0.00008946657180786133,
-0.00505828857421875,
-0.01236724853515625,
0.024078369140625,
-0.049713134765625,
-0.046875,
-0.055145263671875,
... |
bitextor/bicleaner-ai-full-en-ca | 2023-08-24T10:27:12.000Z | [
"transformers",
"tf",
"xlm-roberta",
"bicleaner-ai",
"en",
"ca",
"multilingual",
"license:cc-by-sa-4.0",
"endpoints_compatible",
"region:us"
] | null | bitextor | null | null | bitextor/bicleaner-ai-full-en-ca | 0 | 2 | transformers | 2023-05-03T14:13:35 | ---
language:
- en
- ca
- multilingual
license: cc-by-sa-4.0
tags:
- bicleaner-ai
tasks:
- text-classification
---
# Bicleaner AI full model for en-ca
Bicleaner AI is a tool that detects noisy sentence pairs in a parallel corpus. It
indicates the likelihood that a pair of sentences are mutual translations (with a score near 1) or not (with a score near 0).
Sentence pairs considered very noisy are scored as 0.
See our repository for further instructions on how to use it: https://github.com/bitextor/bicleaner-ai
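As a sketch of how such scores are typically consumed when filtering a corpus, keep only pairs above a threshold. The pairs and scores below are made up, and the threshold is an illustrative choice, not a Bicleaner AI default:

```python
# Hypothetical (source, target, score) triples as Bicleaner AI would emit them
scored_pairs = [
    ("The cat sleeps.", "El gato duerme.", 0.93),
    ("Click here!", "asdfgh", 0.02),
    ("Good morning.", "Buenos días.", 0.88),
]

THRESHOLD = 0.5  # illustrative cut-off; tune for your corpus
clean = [(src, tgt) for src, tgt, score in scored_pairs if score >= THRESHOLD]
```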
| 554 | [
[
-0.029388427734375,
-0.07733154296875,
0.0290679931640625,
0.024566650390625,
-0.0240936279296875,
0.01082611083984375,
-0.0156707763671875,
-0.045806884765625,
0.02581787109375,
0.0281219482421875,
-0.023651123046875,
-0.02801513671875,
-0.05303955078125,
0... |
Sleoruiz/bertin-roberta-fine-tuned-text-classification-SL-data-augmentation-test-3 | 2023-05-03T15:41:50.000Z | [
"transformers",
"pytorch",
"roberta",
"text-classification",
"generated_from_trainer",
"license:cc-by-4.0",
"endpoints_compatible",
"region:us"
] | text-classification | Sleoruiz | null | null | Sleoruiz/bertin-roberta-fine-tuned-text-classification-SL-data-augmentation-test-3 | 0 | 2 | transformers | 2023-05-03T14:30:30 | ---
license: cc-by-4.0
tags:
- generated_from_trainer
metrics:
- f1
- recall
- accuracy
- precision
model-index:
- name: bertin-roberta-fine-tuned-text-classification-SL-data-augmentation-test-3
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bertin-roberta-fine-tuned-text-classification-SL-data-augmentation-test-3
This model is a fine-tuned version of [bertin-project/bertin-roberta-base-spanish](https://huggingface.co/bertin-project/bertin-roberta-base-spanish) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 3.5287
- F1: 0.4018
- Recall: 0.3146
- Accuracy: 0.3146
- Precision: 0.6085
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 | Recall | Accuracy | Precision |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:--------:|:---------:|
| 2.1331 | 1.0 | 1546 | 3.1241 | 0.3069 | 0.2550 | 0.2550 | 0.4860 |
| 1.5436 | 2.0 | 3092 | 2.8434 | 0.3705 | 0.3091 | 0.3091 | 0.5677 |
| 0.9374 | 3.0 | 4638 | 2.8335 | 0.3988 | 0.3280 | 0.3280 | 0.5673 |
| 0.5072 | 4.0 | 6184 | 2.9788 | 0.4117 | 0.3359 | 0.3359 | 0.5901 |
| 0.27 | 5.0 | 7730 | 3.5287 | 0.4018 | 0.3146 | 0.3146 | 0.6085 |
### Framework versions
- Transformers 4.28.1
- Pytorch 2.0.0+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
| 2,054 | [
[
-0.0321044921875,
-0.04388427734375,
0.01468658447265625,
0.006076812744140625,
-0.01812744140625,
-0.025604248046875,
-0.01678466796875,
-0.0240325927734375,
0.01114654541015625,
0.024444580078125,
-0.045135498046875,
-0.049652099609375,
-0.05340576171875,
... |
Svetlana0303/Regression_albert_aug_CustomLoss_3 | 2023-05-03T14:33:47.000Z | [
"transformers",
"tf",
"albert",
"text-classification",
"generated_from_keras_callback",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | text-classification | Svetlana0303 | null | null | Svetlana0303/Regression_albert_aug_CustomLoss_3 | 0 | 2 | transformers | 2023-05-03T14:33:44 | ---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: Regression_albert_aug_CustomLoss_3
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# Regression_albert_aug_CustomLoss_3
This model is a fine-tuned version of [albert-base-v2](https://huggingface.co/albert-base-v2) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.2368
- Train Mae: 0.5301
- Train Mse: 0.4296
- Train R2-score: 0.7669
- Validation Loss: 0.2410
- Validation Mae: 0.5680
- Validation Mse: 0.4286
- Validation R2-score: 0.6930
- Epoch: 14
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': None, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': True, 'is_legacy_optimizer': False, 'learning_rate': 1e-04, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False}
- training_precision: float32
### Training results
| Train Loss | Train Mae | Train Mse | Train R2-score | Validation Loss | Validation Mae | Validation Mse | Validation R2-score | Epoch |
|:----------:|:---------:|:---------:|:--------------:|:---------------:|:--------------:|:--------------:|:-------------------:|:-----:|
| 0.2614 | 0.5480 | 0.4524 | 0.7369 | 0.2408 | 0.5194 | 0.4609 | 0.7578 | 0 |
| 0.2442 | 0.5374 | 0.4362 | 0.7109 | 0.2334 | 0.5376 | 0.4391 | 0.7399 | 1 |
| 0.2431 | 0.5349 | 0.4356 | 0.7503 | 0.2432 | 0.5234 | 0.4657 | 0.7591 | 2 |
| 0.2386 | 0.5250 | 0.4264 | 0.7926 | 0.2348 | 0.5525 | 0.4316 | 0.7203 | 3 |
| 0.2409 | 0.5342 | 0.4325 | 0.7166 | 0.2431 | 0.5233 | 0.4656 | 0.7591 | 4 |
| 0.2400 | 0.5298 | 0.4310 | 0.7553 | 0.2358 | 0.5250 | 0.4490 | 0.7513 | 5 |
| 0.2384 | 0.5274 | 0.4299 | 0.7791 | 0.2341 | 0.5491 | 0.4329 | 0.7253 | 6 |
| 0.2413 | 0.5306 | 0.4335 | 0.7593 | 0.2365 | 0.5583 | 0.4299 | 0.7109 | 7 |
| 0.2381 | 0.5299 | 0.4298 | 0.7784 | 0.2335 | 0.5452 | 0.4347 | 0.7306 | 8 |
| 0.2379 | 0.5280 | 0.4297 | 0.7575 | 0.2335 | 0.5448 | 0.4349 | 0.7312 | 9 |
| 0.2374 | 0.5306 | 0.4309 | 0.8098 | 0.2334 | 0.5441 | 0.4352 | 0.7321 | 10 |
| 0.2381 | 0.5302 | 0.4303 | 0.7428 | 0.2337 | 0.5466 | 0.4340 | 0.7288 | 11 |
| 0.2376 | 0.5323 | 0.4275 | 0.7806 | 0.2333 | 0.5411 | 0.4369 | 0.7358 | 12 |
| 0.2339 | 0.5277 | 0.4217 | 0.7986 | 0.2363 | 0.5232 | 0.4506 | 0.7525 | 13 |
| 0.2368 | 0.5301 | 0.4296 | 0.7669 | 0.2410 | 0.5680 | 0.4286 | 0.6930 | 14 |
### Framework versions
- Transformers 4.28.1
- TensorFlow 2.12.0
- Datasets 2.12.0
- Tokenizers 0.13.3
| 3,843 | [
[
-0.050628662109375,
-0.042205810546875,
0.0214385986328125,
0.0013027191162109375,
-0.0026702880859375,
-0.00901031494140625,
0.0026264190673828125,
-0.00858306884765625,
0.04278564453125,
0.024932861328125,
-0.048553466796875,
-0.04901123046875,
-0.050811767578... |
abertsch/bart-base-booksum | 2023-07-21T14:32:12.000Z | [
"transformers",
"pytorch",
"bart",
"feature-extraction",
"text2text-generation",
"dataset:abertsch/booksum-fullbooks",
"arxiv:2305.01625",
"endpoints_compatible",
"region:us"
] | text2text-generation | abertsch | null | null | abertsch/bart-base-booksum | 0 | 2 | transformers | 2023-05-03T14:44:00 | ---
datasets:
- abertsch/booksum-fullbooks
pipeline_tag: text2text-generation
---
Baseline for the preprint [Unlimiformer: Long-Range Transformers with Unlimited Length Input](https://arxiv.org/abs/2305.01625).
This model was finetuned from a BART-base model as a baseline. It was finetuned on the BookSum dataset (full-book setting).
[
-0.059814453125,
-0.021820068359375,
0.0276641845703125,
-0.002132415771484375,
-0.024078369140625,
-0.0139312744140625,
-0.01473236083984375,
-0.01242828369140625,
0.01085662841796875,
0.06903076171875,
-0.040069580078125,
-0.0189208984375,
-0.032684326171875,
... |
Xenova/LaMini-Neo-125M | 2023-09-02T20:59:21.000Z | [
"transformers.js",
"onnx",
"gpt_neo",
"text-generation",
"region:us"
] | text-generation | Xenova | null | null | Xenova/LaMini-Neo-125M | 0 | 2 | transformers.js | 2023-05-03T14:44:43 | ---
library_name: "transformers.js"
---
https://huggingface.co/MBZUAI/LaMini-Neo-125M with ONNX weights to be compatible with Transformers.js.
Note: Having a separate repo for ONNX weights is intended to be a temporary solution until WebML gains more traction. If you would like to make your models web-ready, we recommend converting to ONNX using [🤗 Optimum](https://huggingface.co/docs/optimum/index) and structuring your repo like this one (with ONNX weights located in a subfolder named `onnx`). | 501 | [
[
-0.04437255859375,
0.01226806640625,
0.0226287841796875,
0.05023193359375,
-0.01416015625,
-0.004016876220703125,
-0.0036773681640625,
-0.0161590576171875,
0.032989501953125,
0.04998779296875,
-0.058837890625,
-0.0303497314453125,
-0.0330810546875,
0.0060195... |
PanJLu/Distilbert_10 | 2023-05-03T16:47:32.000Z | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | text-classification | PanJLu | null | null | PanJLu/Distilbert_10 | 0 | 2 | transformers | 2023-05-03T15:02:05 | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: Distilbert_10
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Distilbert_10
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0017
- Accuracy: 0.9995
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 125 | 0.4132 | 0.8255 |
| No log | 2.0 | 250 | 0.2182 | 0.9235 |
| No log | 3.0 | 375 | 0.1047 | 0.9755 |
| 0.3288 | 4.0 | 500 | 0.0434 | 0.9895 |
| 0.3288 | 5.0 | 625 | 0.0267 | 0.993 |
| 0.3288 | 6.0 | 750 | 0.0137 | 0.997 |
| 0.3288 | 7.0 | 875 | 0.0066 | 0.998 |
| 0.034 | 8.0 | 1000 | 0.0021 | 0.9995 |
| 0.034 | 9.0 | 1125 | 0.0018 | 0.9995 |
| 0.034 | 10.0 | 1250 | 0.0017 | 0.9995 |
### Framework versions
- Transformers 4.28.1
- Pytorch 2.0.0+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
| 1,889 | [
[
-0.03192138671875,
-0.041778564453125,
0.01206207275390625,
0.0113372802734375,
-0.0180511474609375,
-0.0196380615234375,
-0.0021457672119140625,
-0.00820159912109375,
0.01319122314453125,
0.0173797607421875,
-0.050201416015625,
-0.0511474609375,
-0.057037353515... |
Xenova/bert-base-multilingual-uncased | 2023-09-01T21:06:45.000Z | [
"transformers.js",
"onnx",
"bert",
"fill-mask",
"region:us"
] | fill-mask | Xenova | null | null | Xenova/bert-base-multilingual-uncased | 0 | 2 | transformers.js | 2023-05-03T15:08:37 | ---
library_name: "transformers.js"
---
https://huggingface.co/bert-base-multilingual-uncased with ONNX weights to be compatible with Transformers.js.
Note: Having a separate repo for ONNX weights is intended to be a temporary solution until WebML gains more traction. If you would like to make your models web-ready, we recommend converting to ONNX using [🤗 Optimum](https://huggingface.co/docs/optimum/index) and structuring your repo like this one (with ONNX weights located in a subfolder named `onnx`). | 509 | [
[
-0.032623291015625,
0.01351165771484375,
0.018157958984375,
0.05755615234375,
-0.01263427734375,
0.00988006591796875,
-0.02618408203125,
-0.02130126953125,
0.0272979736328125,
0.038726806640625,
-0.058563232421875,
-0.035064697265625,
-0.043212890625,
0.0008... |
HasinMDG/SDeberta-base-v0 | 2023-05-03T15:14:06.000Z | [
"sentence-transformers",
"pytorch",
"deberta-v2",
"setfit",
"text-classification",
"arxiv:2209.11055",
"license:apache-2.0",
"region:us"
] | text-classification | HasinMDG | null | null | HasinMDG/SDeberta-base-v0 | 0 | 2 | sentence-transformers | 2023-05-03T15:13:46 | ---
license: apache-2.0
tags:
- setfit
- sentence-transformers
- text-classification
pipeline_tag: text-classification
---
# HasinMDG/XT-Deberta-Base-V0
This is a [SetFit model](https://github.com/huggingface/setfit) that can be used for text classification. The model has been trained using an efficient few-shot learning technique that involves:
1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning.
2. Training a classification head with features from the fine-tuned Sentence Transformer.
## Usage
To use this model for inference, first install the SetFit library:
```bash
python -m pip install setfit
```
You can then run inference as follows:
```python
from setfit import SetFitModel
# Download from Hub and run inference
model = SetFitModel.from_pretrained("HasinMDG/XT-Deberta-Base-V0")
# Run inference
preds = model(["i loved the spiderman movie!", "pineapple on pizza is the worst 🤮"])
```
## BibTeX entry and citation info
```bibtex
@article{https://doi.org/10.48550/arxiv.2209.11055,
doi = {10.48550/ARXIV.2209.11055},
url = {https://arxiv.org/abs/2209.11055},
author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren},
keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences},
title = {Efficient Few-Shot Learning Without Prompts},
publisher = {arXiv},
year = {2022},
copyright = {Creative Commons Attribution 4.0 International}
}
```
| 1,543 | [
[
-0.00717926025390625,
-0.05810546875,
0.03472900390625,
-0.0118408203125,
-0.01568603515625,
-0.0127410888671875,
-0.01404571533203125,
-0.0055999755859375,
0.00437164306640625,
0.038330078125,
-0.04437255859375,
-0.03033447265625,
-0.049041748046875,
0.0119... |
Xenova/codegen-350M-multi | 2023-09-01T18:51:59.000Z | [
"transformers.js",
"onnx",
"codegen",
"text-generation",
"region:us"
] | text-generation | Xenova | null | null | Xenova/codegen-350M-multi | 1 | 2 | transformers.js | 2023-05-03T15:14:56 | ---
library_name: "transformers.js"
---
https://huggingface.co/Salesforce/codegen-350M-multi with ONNX weights to be compatible with Transformers.js.
Note: Having a separate repo for ONNX weights is intended to be a temporary solution until WebML gains more traction. If you would like to make your models web-ready, we recommend converting to ONNX using [🤗 Optimum](https://huggingface.co/docs/optimum/index) and structuring your repo like this one (with ONNX weights located in a subfolder named `onnx`). | 508 | [
[
-0.03857421875,
0.0222625732421875,
0.0163421630859375,
0.062103271484375,
0.005893707275390625,
0.009796142578125,
-0.0104217529296875,
-0.0135955810546875,
0.016326904296875,
0.0440673828125,
-0.056854248046875,
-0.033782958984375,
-0.0389404296875,
0.0092... |
egarciamartin/dqn-SpaceInvadersNoFrameskip-v4 | 2023-05-03T15:34:28.000Z | [
"stable-baselines3",
"SpaceInvadersNoFrameskip-v4",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | egarciamartin | null | null | egarciamartin/dqn-SpaceInvadersNoFrameskip-v4 | 0 | 2 | stable-baselines3 | 2023-05-03T15:33:53 | ---
library_name: stable-baselines3
tags:
- SpaceInvadersNoFrameskip-v4
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: DQN
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: SpaceInvadersNoFrameskip-v4
type: SpaceInvadersNoFrameskip-v4
metrics:
- type: mean_reward
value: 523.00 +/- 116.02
name: mean_reward
verified: false
---
# **DQN** Agent playing **SpaceInvadersNoFrameskip-v4**
This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
Install the RL Zoo (with SB3 and SB3-Contrib):
```bash
pip install rl_zoo3
```
```bash
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga egarciamartin -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), you can run the following from anywhere:
```bash
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga egarciamartin -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
## Training (with the RL Zoo)
```bash
python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga egarciamartin
```
## Hyperparameters
```python
OrderedDict([('batch_size', 32),
('buffer_size', 100000),
('env_wrapper',
['stable_baselines3.common.atari_wrappers.AtariWrapper']),
('exploration_final_eps', 0.01),
('exploration_fraction', 0.1),
('frame_stack', 4),
('gradient_steps', 1),
('learning_rate', 0.0001),
('learning_starts', 100000),
('n_timesteps', 1000000.0),
('optimize_memory_usage', False),
('policy', 'CnnPolicy'),
('target_update_interval', 1000),
('train_freq', 4),
('normalize', False)])
```
| 2,706 | [
[
-0.041595458984375,
-0.036773681640625,
0.0213470458984375,
0.0253143310546875,
-0.01003265380859375,
-0.017852783203125,
0.01248931884765625,
-0.01476287841796875,
0.01374053955078125,
0.0235595703125,
-0.070556640625,
-0.034698486328125,
-0.027130126953125,
... |
uisikdag/ayla_ozetler2006_bertuncased | 2023-05-03T20:52:16.000Z | [
"transformers",
"pytorch",
"tensorboard",
"bert",
"text-classification",
"generated_from_trainer",
"license:mit",
"endpoints_compatible",
"region:us"
] | text-classification | uisikdag | null | null | uisikdag/ayla_ozetler2006_bertuncased | 0 | 2 | transformers | 2023-05-03T16:04:41 | ---
license: mit
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: ayla_ozetler200_bertuncased
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# ayla_ozetler200_bertuncased
This model is a fine-tuned version of [dbmdz/bert-base-turkish-uncased](https://huggingface.co/dbmdz/bert-base-turkish-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3311
- Accuracy: 0.9
## Model description
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 0.89 | 6 | 1.6870 | 0.4278 |
| 1.7467 | 1.93 | 13 | 1.1508 | 0.6972 |
| 1.0982 | 2.96 | 20 | 0.7106 | 0.8028 |
| 1.0982 | 4.0 | 27 | 0.5116 | 0.85 |
| 0.5588 | 4.89 | 33 | 0.4031 | 0.8694 |
| 0.3365 | 5.93 | 40 | 0.3696 | 0.8778 |
| 0.3365 | 6.96 | 47 | 0.3394 | 0.8806 |
| 0.2345 | 8.0 | 54 | 0.3397 | 0.9 |
| 0.1791 | 8.89 | 60 | 0.3311 | 0.9 |
### Framework versions
- Transformers 4.28.1
- Pytorch 2.0.0+cu118
- Datasets 2.12.0
- Tokenizers 0.11.0
| 1,934 | [
[
-0.03973388671875,
-0.0462646484375,
0.001728057861328125,
0.00970458984375,
-0.025787353515625,
-0.0301513671875,
-0.01357269287109375,
-0.0205078125,
0.01268768310546875,
0.0282135009765625,
-0.056884765625,
-0.050201416015625,
-0.050689697265625,
-0.01686... |
Mike00vito/best-multi-singleCLS | 2023-05-05T22:49:39.000Z | [
"transformers",
"pytorch",
"tensorboard",
"bert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | text-classification | Mike00vito | null | null | Mike00vito/best-multi-singleCLS | 0 | 2 | transformers | 2023-05-03T17:03:45 | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: prova
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# prova
This model is a fine-tuned version of [bert-base-multilingual-uncased](https://huggingface.co/bert-base-multilingual-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5444
- F1 Score: 0.8762
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 Score |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 369 | 0.4424 | 0.7742 |
| No log | 2.0 | 738 | 0.5326 | 0.7994 |
| No log | 3.0 | 1107 | 0.4969 | 0.8681 |
| No log | 4.0 | 1476 | 0.5444 | 0.8762 |
### Framework versions
- Transformers 4.28.1
- Pytorch 2.0.0+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
| 1,531 | [
[
-0.035858154296875,
-0.043426513671875,
0.00659942626953125,
0.02001953125,
-0.0298309326171875,
-0.028350830078125,
-0.0253143310546875,
-0.025146484375,
0.0164947509765625,
0.024505615234375,
-0.054901123046875,
-0.051025390625,
-0.0411376953125,
-0.016174... |
Efser/bert-base-uncased-finetuned-cola | 2023-05-07T20:12:50.000Z | [
"transformers",
"pytorch",
"tensorboard",
"bert",
"text-classification",
"generated_from_trainer",
"dataset:glue",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | text-classification | Efser | null | null | Efser/bert-base-uncased-finetuned-cola | 0 | 2 | transformers | 2023-05-03T18:53:37 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- matthews_correlation
model-index:
- name: bert-base-uncased-finetuned-cola
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: glue
type: glue
config: cola
split: validation
args: cola
metrics:
- name: Matthews Correlation
type: matthews_correlation
value: 0.5364243566130295
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-uncased-finetuned-cola
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the glue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5099
- Matthews Correlation: 0.5364
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5.83714610462349e-06
- train_batch_size: 10
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Matthews Correlation |
|:-------------:|:-----:|:----:|:---------------:|:--------------------:|
| 0.5569 | 1.0 | 856 | 0.5234 | 0.4895 |
| 0.3749 | 2.0 | 1712 | 0.5099 | 0.5364 |
| 0.3075 | 3.0 | 2568 | 0.5430 | 0.5285 |
### Framework versions
- Transformers 4.28.1
- Pytorch 2.0.0+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
| 1,884 | [
[
-0.0267333984375,
-0.05096435546875,
0.01012420654296875,
0.01806640625,
-0.0247955322265625,
-0.020355224609375,
-0.0179901123046875,
-0.01442718505859375,
0.0268402099609375,
0.0167999267578125,
-0.051788330078125,
-0.0310211181640625,
-0.0523681640625,
-0... |
cansurav/bert-base-uncased-finetuned-cola-learning_rate-2e-05 | 2023-05-04T18:06:56.000Z | [
"transformers",
"pytorch",
"tensorboard",
"bert",
"text-classification",
"generated_from_trainer",
"dataset:glue",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | text-classification | cansurav | null | null | cansurav/bert-base-uncased-finetuned-cola-learning_rate-2e-05 | 0 | 2 | transformers | 2023-05-03T18:58:35 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- matthews_correlation
model-index:
- name: bert-base-uncased-finetuned-cola-learning_rate-2e-05
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: glue
type: glue
config: cola
split: validation
args: cola
metrics:
- name: Matthews Correlation
type: matthews_correlation
value: 0.5892439733711194
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-uncased-finetuned-cola-learning_rate-2e-05
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the glue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4480
- Matthews Correlation: 0.5892
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Matthews Correlation |
|:-------------:|:-----:|:----:|:---------------:|:--------------------:|
| 0.5052 | 1.0 | 535 | 0.5532 | 0.5030 |
| 0.3006 | 2.0 | 1070 | 0.4480 | 0.5892 |
| 0.1918 | 3.0 | 1605 | 0.7164 | 0.5340 |
| 0.138 | 4.0 | 2140 | 0.8575 | 0.5570 |
| 0.0866 | 5.0 | 2675 | 1.1483 | 0.5211 |
| 0.0652 | 6.0 | 3210 | 0.9938 | 0.5816 |
| 0.046 | 7.0 | 3745 | 1.1453 | 0.5739 |
| 0.0314 | 8.0 | 4280 | 1.3524 | 0.5573 |
| 0.0212 | 9.0 | 4815 | 1.4664 | 0.5573 |
| 0.0203 | 10.0 | 5350 | 1.4505 | 0.5679 |
### Framework versions
- Transformers 4.28.1
- Pytorch 2.0.0+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
| 2,429 | [
[
-0.031524658203125,
-0.046478271484375,
0.00606536865234375,
0.0162200927734375,
-0.0177764892578125,
-0.0171356201171875,
-0.01195526123046875,
-0.01355743408203125,
0.026824951171875,
0.00951385498046875,
-0.0482177734375,
-0.037628173828125,
-0.05087280273437... |
yagmurery/bert-base-uncased-finetuned-epoch-1-cola | 2023-05-03T19:29:33.000Z | [
"transformers",
"pytorch",
"tensorboard",
"bert",
"text-classification",
"generated_from_trainer",
"dataset:glue",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | text-classification | yagmurery | null | null | yagmurery/bert-base-uncased-finetuned-epoch-1-cola | 0 | 2 | transformers | 2023-05-03T19:04:42 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- matthews_correlation
model-index:
- name: bert-base-uncased-finetuned-epoch-1-cola
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: glue
type: glue
config: cola
split: validation
args: cola
metrics:
- name: Matthews Correlation
type: matthews_correlation
value: 0.5827865839334545
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-uncased-finetuned-epoch-1-cola
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the glue dataset.
It achieves the following results on the evaluation set:
- Loss: 1.3446
- Matthews Correlation: 0.5828
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 9
### Training results
| Training Loss | Epoch | Step | Validation Loss | Matthews Correlation |
|:-------------:|:-----:|:----:|:---------------:|:--------------------:|
| 0.0843 | 1.0 | 535 | 1.2087 | 0.4972 |
| 0.1239 | 2.0 | 1070 | 1.0809 | 0.5406 |
| 0.1116 | 3.0 | 1605 | 1.0645 | 0.5378 |
| 0.1108 | 4.0 | 2140 | 1.0710 | 0.5544 |
| 0.0745 | 5.0 | 2675 | 1.2258 | 0.5739 |
| 0.051 | 6.0 | 3210 | 1.2926 | 0.5570 |
| 0.04 | 7.0 | 3745 | 1.3278 | 0.5579 |
| 0.0245 | 8.0 | 4280 | 1.3446 | 0.5828 |
| 0.0158 | 9.0 | 4815 | 1.4632 | 0.5608 |
### Framework versions
- Transformers 4.28.1
- Pytorch 2.0.0+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
| 2,330 | [
[
-0.03204345703125,
-0.052734375,
0.00946807861328125,
0.01459503173828125,
-0.0200347900390625,
-0.0230560302734375,
-0.011444091796875,
-0.01392364501953125,
0.026123046875,
0.01568603515625,
-0.054962158203125,
-0.037017822265625,
-0.051116943359375,
-0.02... |
bilginn/bert-base-uncased-finetuned-cola | 2023-05-05T20:49:47.000Z | [
"transformers",
"pytorch",
"bert",
"text-classification",
"generated_from_trainer",
"dataset:glue",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | text-classification | bilginn | null | null | bilginn/bert-base-uncased-finetuned-cola | 0 | 2 | transformers | 2023-05-03T19:32:34 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- matthews_correlation
model-index:
- name: bert-base-uncased-finetuned-cola
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: glue
type: glue
config: cola
split: validation
args: cola
metrics:
- name: Matthews Correlation
type: matthews_correlation
value: 0.5678267214677118
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-uncased-finetuned-cola
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the glue dataset.
It achieves the following results on the evaluation set:
- Loss: 1.5922
- Matthews Correlation: 0.5678
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 9.207256119784435e-06
- train_batch_size: 4
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Matthews Correlation |
|:-------------:|:-----:|:-----:|:---------------:|:--------------------:|
| 0.5811 | 1.0 | 2138 | 0.6179 | 0.4846 |
| 0.4698 | 2.0 | 4276 | 0.8083 | 0.5495 |
| 0.3161 | 3.0 | 6414 | 1.1152 | 0.5389 |
| 0.2499 | 4.0 | 8552 | 1.0719 | 0.5624 |
| 0.1755 | 5.0 | 10690 | 1.1734 | 0.5709 |
| 0.1511 | 6.0 | 12828 | 1.2383 | 0.5699 |
| 0.0738 | 7.0 | 14966 | 1.3802 | 0.5598 |
| 0.0677 | 8.0 | 17104 | 1.4711 | 0.5599 |
| 0.0509 | 9.0 | 19242 | 1.5751 | 0.5678 |
| 0.0397 | 10.0 | 21380 | 1.5922 | 0.5678 |
### Framework versions
- Transformers 4.28.1
- Pytorch 2.0.0
- Datasets 2.12.0
- Tokenizers 0.13.3
| 2,410 | [
[
-0.029815673828125,
-0.046539306640625,
0.006622314453125,
0.01397705078125,
-0.0188446044921875,
-0.01751708984375,
-0.011138916015625,
-0.01372528076171875,
0.0294342041015625,
0.0165252685546875,
-0.052978515625,
-0.03668212890625,
-0.051605224609375,
-0.... |
Ibrahim-Alam/finetuning-camembert-base-on-imdb | 2023-05-06T05:11:24.000Z | [
"transformers",
"pytorch",
"tensorboard",
"camembert",
"text-classification",
"generated_from_trainer",
"dataset:imdb",
"license:mit",
"model-index",
"endpoints_compatible",
"region:us"
] | text-classification | Ibrahim-Alam | null | null | Ibrahim-Alam/finetuning-camembert-base-on-imdb | 0 | 2 | transformers | 2023-05-03T19:40:07 | ---
license: mit
tags:
- generated_from_trainer
datasets:
- imdb
metrics:
- accuracy
- f1
model-index:
- name: finetuning-camembert-base-on-imdb
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: imdb
type: imdb
config: plain_text
split: test
args: plain_text
metrics:
- name: Accuracy
type: accuracy
value: 0.90044
- name: F1
type: f1
value: 0.9034335596508245
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuning-camembert-base-on-imdb
This model is a fine-tuned version of [camembert-base](https://huggingface.co/camembert-base) on the imdb dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2533
- Accuracy: 0.9004
- F1: 0.9034
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
### Framework versions
- Transformers 4.28.1
- Pytorch 2.0.0+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
| 1,516 | [
[
-0.040374755859375,
-0.039764404296875,
0.01373291015625,
0.0037059783935546875,
-0.04205322265625,
-0.0206298828125,
-0.010955810546875,
-0.0026302337646484375,
0.015289306640625,
0.0382080078125,
-0.06402587890625,
-0.04180908203125,
-0.05364990234375,
-0.... |
Ibrahim-Alam/finetuning-distilbert-base-uncased-on-imdb | 2023-05-03T19:49:50.000Z | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:imdb",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | text-classification | Ibrahim-Alam | null | null | Ibrahim-Alam/finetuning-distilbert-base-uncased-on-imdb | 0 | 2 | transformers | 2023-05-03T19:43:27 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- imdb
metrics:
- accuracy
- f1
model-index:
- name: finetuning-distilbert-base-uncased-on-imdb
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: imdb
type: imdb
config: plain_text
split: test
args: plain_text
metrics:
- name: Accuracy
type: accuracy
value: 0.96
- name: F1
type: f1
value: 0.9596231493943473
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuning-distilbert-base-uncased-on-imdb
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the imdb dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1311
- Accuracy: 0.96
- F1: 0.9596
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
### Framework versions
- Transformers 4.28.1
- Pytorch 2.0.0+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
| 1,554 | [
[
-0.0384521484375,
-0.0433349609375,
0.0103912353515625,
0.00540924072265625,
-0.03912353515625,
-0.00782012939453125,
-0.006374359130859375,
-0.0023746490478515625,
0.016143798828125,
0.0234222412109375,
-0.054107666015625,
-0.03704833984375,
-0.0673828125,
... |
Xenova/roberta-large-mnli | 2023-05-31T12:53:22.000Z | [
"transformers.js",
"onnx",
"roberta",
"text-classification",
"region:us"
] | text-classification | Xenova | null | null | Xenova/roberta-large-mnli | 2 | 2 | transformers.js | 2023-05-03T20:07:31 | ---
library_name: "transformers.js"
---
https://huggingface.co/roberta-large-mnli with ONNX weights to be compatible with Transformers.js.
Note: Having a separate repo for ONNX weights is intended to be a temporary solution until WebML gains more traction. If you would like to make your models web-ready, we recommend converting to ONNX using [🤗 Optimum](https://huggingface.co/docs/optimum/index) and structuring your repo like this one (with ONNX weights located in a subfolder named `onnx`). | 497 | [
[
-0.0307769775390625,
0.0084075927734375,
0.0269775390625,
0.048095703125,
-0.0042724609375,
-0.0100250244140625,
-0.01373291015625,
-0.018218994140625,
0.0272369384765625,
0.043365478515625,
-0.052490234375,
-0.03253173828125,
-0.042694091796875,
0.013061523... |
uisikdag/ayla_ozetler250_bertuncased | 2023-05-04T22:51:01.000Z | [
"transformers",
"pytorch",
"tensorboard",
"bert",
"text-classification",
"generated_from_trainer",
"license:mit",
"endpoints_compatible",
"region:us"
] | text-classification | uisikdag | null | null | uisikdag/ayla_ozetler250_bertuncased | 0 | 2 | transformers | 2023-05-03T20:22:00 | ---
license: mit
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: ayla_ozetler250_bertuncased
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# ayla_ozetler250_bertuncased
This model is a fine-tuned version of [dbmdz/bert-base-turkish-uncased](https://huggingface.co/dbmdz/bert-base-turkish-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1504
- Accuracy: 0.96
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 7 | 1.5357 | 0.3333 |
| 1.5789 | 2.0 | 14 | 0.9467 | 0.8427 |
| 0.9393 | 3.0 | 21 | 0.3740 | 0.9413 |
| 0.9393 | 4.0 | 28 | 0.2198 | 0.9493 |
| 0.2828 | 5.0 | 35 | 0.1560 | 0.9573 |
| 0.0982 | 6.0 | 42 | 0.1517 | 0.96 |
| 0.0982 | 7.0 | 49 | 0.1407 | 0.9627 |
| 0.05 | 8.0 | 56 | 0.1527 | 0.96 |
| 0.0395 | 9.0 | 63 | 0.1524 | 0.96 |
| 0.0242 | 10.0 | 70 | 0.1504 | 0.96 |
### Framework versions
- Transformers 4.28.1
- Pytorch 2.0.0+cu118
- Datasets 2.12.0
- Tokenizers 0.11.0
| 2,022 | [
[
-0.04058837890625,
-0.04693603515625,
0.002655029296875,
0.008209228515625,
-0.0204925537109375,
-0.02899169921875,
-0.01047515869140625,
-0.0167236328125,
0.0154876708984375,
0.0279083251953125,
-0.05841064453125,
-0.049346923828125,
-0.050811767578125,
-0.... |
ThanHitt/FishTreeRock_Classifier_v1 | 2023-05-03T20:37:34.000Z | [
"transformers",
"pytorch",
"tensorboard",
"vit",
"image-classification",
"huggingpics",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | image-classification | ThanHitt | null | null | ThanHitt/FishTreeRock_Classifier_v1 | 0 | 2 | transformers | 2023-05-03T20:37:27 | ---
tags:
- image-classification
- pytorch
- huggingpics
metrics:
- accuracy
model-index:
- name: FishTreeRock_Classifier_v1
results:
- task:
name: Image Classification
type: image-classification
metrics:
- name: Accuracy
type: accuracy
value: 0.9850746393203735
---
# FishTreeRock_Classifier_v1
Autogenerated by HuggingPics🤗🖼️
Create your own image classifier for **anything** by running [the demo on Google Colab](https://colab.research.google.com/github/nateraw/huggingpics/blob/main/HuggingPics.ipynb).
Report any issues with the demo at the [github repo](https://github.com/nateraw/huggingpics).
## Example Images
#### fish

#### rock

#### tree
 | 780 | [
[
-0.034820556640625,
-0.051422119140625,
0.0109710693359375,
0.0284271240234375,
-0.045013427734375,
0.0094757080078125,
0.0131988525390625,
-0.0350341796875,
0.052459716796875,
0.0017871856689453125,
-0.040863037109375,
-0.052398681640625,
-0.044952392578125,
... |
pandma/es_pipeline | 2023-05-03T20:54:53.000Z | [
"spacy",
"token-classification",
"es",
"model-index",
"region:us"
] | token-classification | pandma | null | null | pandma/es_pipeline | 0 | 2 | spacy | 2023-05-03T20:54:28 | ---
tags:
- spacy
- token-classification
language:
- es
model-index:
- name: es_pipeline
results:
- task:
name: NER
type: token-classification
metrics:
- name: NER Precision
type: precision
value: 0.998766394
- name: NER Recall
type: recall
value: 0.9988961039
- name: NER F Score
type: f_score
value: 0.9988312447
---
| Feature | Description |
| --- | --- |
| **Name** | `es_pipeline` |
| **Version** | `0.0.0` |
| **spaCy** | `>=3.5.2,<3.6.0` |
| **Default Pipeline** | `transformer`, `ner` |
| **Components** | `transformer`, `ner` |
| **Vectors** | 0 keys, 0 unique vectors (0 dimensions) |
| **Sources** | n/a |
| **License** | n/a |
| **Author** | [n/a]() |
### Label Scheme
<details>
<summary>View label scheme (13 labels for 1 components)</summary>
| Component | Labels |
| --- | --- |
| **`ner`** | `BILLING_PERIOD_END`, `BILLING_PERIOD_START`, `BILL_OWNER`, `COMPANY_NAME`, `CUPS`, `DIRECTION`, `ENERGY_P1_PRICE`, `ENERGY_P2_PRICE`, `ENERGY_P3_PRICE`, `NIF`, `POWER_P1_PRICE`, `POWER_P2_PRICE`, `TOTAL_IMPORTE` |
</details>
### Accuracy
| Type | Score |
| --- | --- |
| `ENTS_F` | 99.88 |
| `ENTS_P` | 99.88 |
| `ENTS_R` | 99.89 |
| `TRANSFORMER_LOSS` | 6425.46 |
| `NER_LOSS` | 41888.91 | | 1,274 | [
[
-0.0443115234375,
-0.006587982177734375,
0.0171966552734375,
0.0173187255859375,
-0.032958984375,
0.00666046142578125,
0.0045623779296875,
0.00033164024353027344,
0.045135498046875,
0.0555419921875,
-0.06268310546875,
-0.06781005859375,
-0.048797607421875,
-... |
Python-proje/mymodel | 2023-05-04T16:05:42.000Z | [
"transformers",
"pytorch",
"tensorboard",
"bart",
"text2text-generation",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text2text-generation | Python-proje | null | null | Python-proje/mymodel | 0 | 2 | transformers | 2023-05-03T20:59:57 | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- rouge
model-index:
- name: mymodel
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mymodel
This model is a fine-tuned version of [facebook/bart-base](https://huggingface.co/facebook/bart-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.3705
- Rouge1: 1.762
- Rouge2: 1.4938
- Rougel: 1.7366
- Rougelsum: 1.7385
- Gen Len: 19.7335
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:-----:|:---------------:|:------:|:------:|:------:|:---------:|:-------:|
| 1.446 | 1.0 | 12500 | 1.3705 | 1.762 | 1.4938 | 1.7366 | 1.7385 | 19.7335 |
### Framework versions
- Transformers 4.28.1
- Pytorch 2.0.0+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
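The ROUGE scores above come from the Trainer's metric computation. As a rough illustration of what ROUGE-1 measures, here is a minimal unigram-overlap sketch — a toy, not the `rouge_score` implementation the Trainer uses, and the toy sentences are illustrative only:

```python
from collections import Counter

def rouge1_f(reference: str, candidate: str) -> float:
    """Toy ROUGE-1 F1: unigram overlap between reference and candidate."""
    ref_counts = Counter(reference.lower().split())
    cand_counts = Counter(candidate.lower().split())
    overlap = sum((ref_counts & cand_counts).values())  # clipped unigram matches
    if overlap == 0:
        return 0.0
    precision = overlap / sum(cand_counts.values())
    recall = overlap / sum(ref_counts.values())
    return 2 * precision * recall / (precision + recall)

score = rouge1_f("the cat sat on the mat", "the cat lay on the mat")
# 5 of 6 unigrams overlap on each side, so precision = recall = F1 = 5/6.
```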
| 1,529 | [
[
-0.034393310546875,
-0.047454833984375,
0.01690673828125,
0.020477294921875,
-0.02154541015625,
-0.0260009765625,
-0.01222991943359375,
-0.015594482421875,
0.0260009765625,
0.0367431640625,
-0.052337646484375,
-0.05487060546875,
-0.04083251953125,
-0.0084915... |
kreola/bert-base-uncased-finetuned-cola | 2023-05-07T19:48:28.000Z | [
"transformers",
"pytorch",
"tensorboard",
"bert",
"text-classification",
"generated_from_trainer",
"dataset:glue",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | text-classification | kreola | null | null | kreola/bert-base-uncased-finetuned-cola | 0 | 2 | transformers | 2023-05-03T21:07:42 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- matthews_correlation
model-index:
- name: bert-base-uncased-finetuned-cola
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: glue
type: glue
config: cola
split: validation
args: cola
metrics:
- name: Matthews Correlation
type: matthews_correlation
value: 0.49971547639767977
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-uncased-finetuned-cola
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the glue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4689
- Matthews Correlation: 0.4997
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss | Matthews Correlation |
|:-------------:|:-----:|:----:|:---------------:|:--------------------:|
| 0.4971 | 1.0 | 535 | 0.4689 | 0.4997 |
### Framework versions
- Transformers 4.28.1
- Pytorch 2.0.0+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
| 1,723 | [
[
-0.0257568359375,
-0.05267333984375,
0.01214599609375,
0.021026611328125,
-0.028167724609375,
-0.0225830078125,
-0.019134521484375,
-0.015350341796875,
0.0254364013671875,
0.0164947509765625,
-0.049285888671875,
-0.0312042236328125,
-0.051177978515625,
-0.02... |
platzi/platzi-distilroberta-base-mrpc-glue-cristian-durango | 2023-05-04T01:52:35.000Z | [
"transformers",
"pytorch",
"tensorboard",
"roberta",
"text-classification",
"generated_from_trainer",
"dataset:glue",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | text-classification | platzi | null | null | platzi/platzi-distilroberta-base-mrpc-glue-cristian-durango | 0 | 2 | transformers | 2023-05-04T01:33:56 | ---
license: apache-2.0
tags:
- text-classification
- generated_from_trainer
datasets:
- glue
metrics:
- accuracy
- f1
model-index:
- name: platzi-distilroberta-base-mrpc-glue-cristian-durango
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: glue
type: glue
config: mrpc
split: validation
args: mrpc
metrics:
- name: Accuracy
type: accuracy
value: 0.8259803921568627
- name: F1
type: f1
value: 0.8794567062818336
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# platzi-distilroberta-base-mrpc-glue-cristian-durango
This model is a fine-tuned version of [distilroberta-base](https://huggingface.co/distilroberta-base) on the glue and the mrpc datasets.
It achieves the following results on the evaluation set:
- Loss: 0.4245
- Accuracy: 0.8260
- F1: 0.8795
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.5318 | 1.09 | 500 | 0.4245 | 0.8260 | 0.8795 |
| 0.3704 | 2.18 | 1000 | 0.6045 | 0.8309 | 0.8739 |
### Framework versions
- Transformers 4.28.1
- Pytorch 2.0.0+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
| 1,892 | [
[
-0.0294342041015625,
-0.042755126953125,
0.00925445556640625,
0.0208740234375,
-0.031524658203125,
-0.0247039794921875,
-0.01287841796875,
-0.00363922119140625,
0.0064697265625,
0.00832366943359375,
-0.048126220703125,
-0.04388427734375,
-0.0587158203125,
-0... |
chastelove/distilbert-base-uncased_emotion_ft_0504 | 2023-05-04T04:44:17.000Z | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:emotion",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | text-classification | chastelove | null | null | chastelove/distilbert-base-uncased_emotion_ft_0504 | 0 | 2 | transformers | 2023-05-04T04:22:35 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
- f1
- precision
model-index:
- name: distilbert-base-uncased_emotion_ft_0504
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: emotion
type: emotion
config: split
split: validation
args: split
metrics:
- name: Accuracy
type: accuracy
value: 0.935
- name: F1
type: f1
value: 0.9353661273711807
- name: Precision
type: precision
value: 0.9062644261189533
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased_emotion_ft_0504
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1552
- Accuracy: 0.935
- F1: 0.9354
- Precision: 0.9063
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Precision |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|:---------:|
| 0.7741 | 1.0 | 250 | 0.2686 | 0.909 | 0.9070 | 0.8911 |
| 0.2073 | 2.0 | 500 | 0.1767 | 0.9315 | 0.9319 | 0.9013 |
| 0.1397 | 3.0 | 750 | 0.1581 | 0.935 | 0.9353 | 0.9081 |
| 0.1123 | 4.0 | 1000 | 0.1552 | 0.935 | 0.9354 | 0.9063 |
### Framework versions
- Transformers 4.28.1
- Pytorch 1.13.1
- Datasets 2.12.0
- Tokenizers 0.11.0
| 2,159 | [
[
-0.036651611328125,
-0.03436279296875,
0.01303863525390625,
0.0203399658203125,
-0.02301025390625,
-0.01708984375,
-0.00830841064453125,
-0.00568389892578125,
0.01367950439453125,
0.010162353515625,
-0.054229736328125,
-0.051971435546875,
-0.05975341796875,
... |
brusooo/flowers_classification | 2023-05-04T14:28:38.000Z | [
"keras",
"image-classification",
"region:us"
] | image-classification | brusooo | null | null | brusooo/flowers_classification | 0 | 2 | keras | 2023-05-04T06:46:02 | ---
library_name: keras
inference: false
tags:
- image-classification
---
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
| Hyperparameters | Value |
| :-- | :-- |
| name | Adam |
| weight_decay | None |
| clipnorm | None |
| global_clipnorm | None |
| clipvalue | None |
| use_ema | False |
| ema_momentum | 0.99 |
| ema_overwrite_frequency | None |
| jit_compile | False |
| is_legacy_optimizer | False |
| learning_rate | 0.0010000000474974513 |
| beta_1 | 0.9 |
| beta_2 | 0.999 |
| epsilon | 1e-07 |
| amsgrad | False |
| training_precision | float32 |
## Model Plot
<details>
<summary>View Model Plot</summary>

</details> | 887 | [
[
-0.037200927734375,
-0.03997802734375,
0.031890869140625,
0.0081634521484375,
-0.043243408203125,
-0.0177154541015625,
0.01097869873046875,
-0.0033969879150390625,
0.0204620361328125,
0.030548095703125,
-0.043731689453125,
-0.05120849609375,
-0.040008544921875,
... |
Oscar-chen/roberta-base | 2023-05-09T07:48:54.000Z | [
"transformers",
"pytorch",
"tensorboard",
"roberta",
"text-classification",
"generated_from_trainer",
"endpoints_compatible",
"region:us"
] | text-classification | Oscar-chen | null | null | Oscar-chen/roberta-base | 0 | 2 | transformers | 2023-05-04T07:19:48 | ---
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: roberta-base
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# roberta-base
This model was trained from scratch on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1131
- Accuracy: 0.9637
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 100 | 0.3406 | 0.8619 |
| No log | 2.0 | 200 | 0.2220 | 0.9119 |
| No log | 3.0 | 300 | 0.1429 | 0.9487 |
| No log | 4.0 | 400 | 0.1131 | 0.9637 |
### Framework versions
- Transformers 4.28.1
- Pytorch 2.0.0+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
| 1,418 | [
[
-0.020263671875,
-0.038543701171875,
0.0197296142578125,
0.00672149658203125,
-0.017974853515625,
-0.035308837890625,
-0.0096282958984375,
-0.0122833251953125,
0.00209808349609375,
0.0243988037109375,
-0.05267333984375,
-0.053924560546875,
-0.0526123046875,
... |
leonardosaveri/DSChallenge_Roberta_Base | 2023-05-04T08:08:51.000Z | [
"transformers",
"pytorch",
"tensorboard",
"roberta",
"text-classification",
"generated_from_trainer",
"license:mit",
"endpoints_compatible",
"region:us"
] | text-classification | leonardosaveri | null | null | leonardosaveri/DSChallenge_Roberta_Base | 0 | 2 | transformers | 2023-05-04T07:52:31 | ---
license: mit
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: DSChallenge_Roberta_Base
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# DSChallenge_Roberta_Base
This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1755
- Accuracy: 0.9549
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.2974 | 1.0 | 793 | 0.1676 | 0.9419 |
| 0.1491 | 2.0 | 1586 | 0.1755 | 0.9549 |
### Framework versions
- Transformers 4.28.1
- Pytorch 2.0.0+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
| 1,385 | [
[
-0.025726318359375,
-0.050445556640625,
0.026092529296875,
0.0007872581481933594,
-0.027069091796875,
-0.0335693359375,
-0.0173797607421875,
-0.0108642578125,
0.00467681884765625,
0.029083251953125,
-0.06207275390625,
-0.053436279296875,
-0.060302734375,
-0.... |
EceKun/bert-base-uncased-finetuned-cola | 2023-05-07T12:20:47.000Z | [
"transformers",
"pytorch",
"tensorboard",
"bert",
"text-classification",
"generated_from_trainer",
"dataset:glue",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | text-classification | EceKun | null | null | EceKun/bert-base-uncased-finetuned-cola | 0 | 2 | transformers | 2023-05-04T08:37:47 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- matthews_correlation
model-index:
- name: bert-base-uncased-finetuned-cola
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: glue
type: glue
config: cola
split: train
args: cola
metrics:
- name: Matthews Correlation
type: matthews_correlation
value: 0.5768716704740007
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-uncased-finetuned-cola
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the glue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9220
- Matthews Correlation: 0.5769
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 8.45110449379687e-06
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 8
### Training results
| Training Loss | Epoch | Step | Validation Loss | Matthews Correlation |
|:-------------:|:-----:|:----:|:---------------:|:--------------------:|
| 0.5408 | 1.0 | 855 | 0.4675 | 0.4474 |
| 0.351 | 2.0 | 1710 | 0.6087 | 0.5354 |
| 0.2601 | 3.0 | 2565 | 0.7320 | 0.5580 |
| 0.1919 | 4.0 | 3420 | 0.8818 | 0.5595 |
| 0.1437 | 5.0 | 4275 | 0.9220 | 0.5769 |
| 0.1071 | 6.0 | 5130 | 1.0528 | 0.5629 |
| 0.0734 | 7.0 | 5985 | 1.0573 | 0.5577 |
| 0.0635 | 8.0 | 6840 | 1.0774 | 0.5580 |
### Framework versions
- Transformers 4.28.1
- Pytorch 2.0.0+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
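Matthews correlation, the metric reported above, summarizes the whole binary confusion matrix in a single value between -1 and 1. A minimal sketch of its computation — toy counts for illustration, not this model's actual predictions:

```python
from math import sqrt

def matthews_corrcoef_from_counts(tp: int, tn: int, fp: int, fn: int) -> float:
    """Matthews correlation coefficient from binary confusion-matrix counts."""
    denom = sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    if denom == 0:
        return 0.0  # conventional value when any marginal count is zero
    return (tp * tn - fp * fn) / denom

# A perfect classifier scores 1.0; chance-level predictions score ~0.0.
perfect = matthews_corrcoef_from_counts(tp=50, tn=50, fp=0, fn=0)  # → 1.0
mixed = matthews_corrcoef_from_counts(tp=40, tn=35, fp=15, fn=10)
```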
| 2,248 | [
[
-0.0289764404296875,
-0.04913330078125,
0.00806427001953125,
0.013702392578125,
-0.020660400390625,
-0.018280029296875,
-0.01317596435546875,
-0.01267242431640625,
0.028656005859375,
0.0158538818359375,
-0.052703857421875,
-0.0347900390625,
-0.053253173828125,
... |
leonardosaveri/DSChallenge_Roberta_Base_10Epochs | 2023-05-04T09:53:40.000Z | [
"transformers",
"pytorch",
"tensorboard",
"roberta",
"text-classification",
"generated_from_trainer",
"license:mit",
"endpoints_compatible",
"region:us"
] | text-classification | leonardosaveri | null | null | leonardosaveri/DSChallenge_Roberta_Base_10Epochs | 0 | 2 | transformers | 2023-05-04T09:03:33 | ---
license: mit
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: DSChallenge_Roberta_Base_10Epochs
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# DSChallenge_Roberta_Base_10Epochs
This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3678
- Accuracy: 0.9428
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.0769 | 1.0 | 793 | 0.2022 | 0.9410 |
| 0.1081 | 2.0 | 1586 | 0.2630 | 0.9423 |
| 0.0563 | 3.0 | 2379 | 0.2948 | 0.9477 |
| 0.0296 | 4.0 | 3172 | 0.3678 | 0.9428 |
### Framework versions
- Transformers 4.28.1
- Pytorch 2.0.0+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
| 1,528 | [
[
-0.0284881591796875,
-0.04864501953125,
0.024749755859375,
0.0014734268188476562,
-0.024261474609375,
-0.0306549072265625,
-0.015472412109375,
-0.0105133056640625,
0.006336212158203125,
0.0268096923828125,
-0.06304931640625,
-0.053131103515625,
-0.06076049804687... |
SHENMU007/neunit_BASE_V5 | 2023-05-05T01:06:20.000Z | [
"transformers",
"pytorch",
"tensorboard",
"speecht5",
"text-to-audio",
"1.1.0",
"generated_from_trainer",
"zh",
"dataset:facebook/voxpopuli",
"license:mit",
"endpoints_compatible",
"region:us"
] | text-to-audio | SHENMU007 | null | null | SHENMU007/neunit_BASE_V5 | 0 | 2 | transformers | 2023-05-04T09:50:19 | ---
language:
- zh
license: mit
tags:
- 1.1.0
- generated_from_trainer
datasets:
- facebook/voxpopuli
model-index:
- name: SpeechT5 TTS Dutch neunit
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# SpeechT5 TTS Dutch neunit
This model is a fine-tuned version of [microsoft/speecht5_tts](https://huggingface.co/microsoft/speecht5_tts) on the VoxPopuli dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 4000
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.29.0.dev0
- Pytorch 2.0.0+cu117
- Datasets 2.11.0
- Tokenizers 0.12.1
| 1,251 | [
[
-0.0350341796875,
-0.051727294921875,
-0.005931854248046875,
0.01265716552734375,
-0.025390625,
-0.0193939208984375,
-0.01763916015625,
-0.0265045166015625,
0.0114288330078125,
0.021270751953125,
-0.0411376953125,
-0.050048828125,
-0.04315185546875,
0.008583... |
alemdarberk/bert-base-uncased-finetuned-cola | 2023-05-06T12:57:09.000Z | [
"transformers",
"pytorch",
"tensorboard",
"bert",
"text-classification",
"generated_from_trainer",
"dataset:glue",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | text-classification | alemdarberk | null | null | alemdarberk/bert-base-uncased-finetuned-cola | 0 | 2 | transformers | 2023-05-04T10:10:00 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- matthews_correlation
model-index:
- name: bert-base-uncased-finetuned-cola
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: glue
type: glue
config: cola
split: validation
args: cola
metrics:
- name: Matthews Correlation
type: matthews_correlation
value: 0.5906590396340186
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-uncased-finetuned-cola
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the glue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5480
- Matthews Correlation: 0.5907
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5.337026393714949e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Matthews Correlation |
|:-------------:|:-----:|:----:|:---------------:|:--------------------:|
| No log | 1.0 | 134 | 0.4323 | 0.5445 |
| No log | 2.0 | 268 | 0.4164 | 0.6013 |
| No log | 3.0 | 402 | 0.5480 | 0.5907 |
### Framework versions
- Transformers 4.28.1
- Pytorch 2.0.0+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
| 1,886 | [
[
-0.0245819091796875,
-0.052032470703125,
0.00899505615234375,
0.0200653076171875,
-0.025146484375,
-0.02020263671875,
-0.018402099609375,
-0.016143798828125,
0.026947021484375,
0.016632080078125,
-0.05059814453125,
-0.03094482421875,
-0.051513671875,
-0.0213... |
David1785/finetuned-bert-mrpc | 2023-05-04T13:45:40.000Z | [
"transformers",
"pytorch",
"tensorboard",
"bert",
"text-classification",
"generated_from_trainer",
"dataset:glue",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | text-classification | David1785 | null | null | David1785/finetuned-bert-mrpc | 0 | 2 | transformers | 2023-05-04T10:44:47 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- accuracy
- f1
model-index:
- name: finetuned-bert-mrpc
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: glue
type: glue
config: mrpc
split: validation
args: mrpc
metrics:
- name: Accuracy
type: accuracy
value: 0.8382352941176471
- name: F1
type: f1
value: 0.8877551020408163
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuned-bert-mrpc
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the glue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4588
- Accuracy: 0.8382
- F1: 0.8878
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.579 | 1.0 | 230 | 0.4858 | 0.7745 | 0.8521 |
| 0.4163 | 2.0 | 460 | 0.4477 | 0.8088 | 0.8721 |
| 0.2533 | 3.0 | 690 | 0.4588 | 0.8382 | 0.8878 |
### Framework versions
- Transformers 4.28.1
- Pytorch 2.0.0+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
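Accuracy and F1, the two metrics reported for this MRPC model, can be computed from binary predictions as follows — toy labels for illustration, not this model's outputs:

```python
def binary_accuracy_f1(y_true, y_pred):
    """Accuracy and positive-class F1 for binary labels (0/1)."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    correct = sum(1 for t, p in zip(y_true, y_pred) if t == p)
    accuracy = correct / len(y_true)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return accuracy, f1

acc, f1 = binary_accuracy_f1([1, 1, 0, 0, 1, 0], [1, 1, 0, 1, 1, 0])
# → acc = 5/6 ≈ 0.833, f1 = 6/7 ≈ 0.857 (F1 exceeds accuracy, as in the
# card's own numbers, because the positive class dominates the errors).
```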
| 1,859 | [
[
-0.0389404296875,
-0.046417236328125,
0.00717926025390625,
0.0106048583984375,
-0.02777099609375,
-0.033294677734375,
-0.0163726806640625,
-0.012115478515625,
0.0183563232421875,
0.0195770263671875,
-0.0626220703125,
-0.039703369140625,
-0.049652099609375,
-... |
sepehrbakhshi/bert-base-uncased-finetuned-cola | 2023-05-05T21:06:53.000Z | [
"transformers",
"pytorch",
"tensorboard",
"bert",
"text-classification",
"generated_from_trainer",
"dataset:glue",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | text-classification | sepehrbakhshi | null | null | sepehrbakhshi/bert-base-uncased-finetuned-cola | 0 | 2 | transformers | 2023-05-04T11:42:28 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- matthews_correlation
model-index:
- name: bert-base-uncased-finetuned-cola
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: glue
type: glue
config: cola
split: validation
args: cola
metrics:
- name: Matthews Correlation
type: matthews_correlation
value: 0.49971547639767977
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-uncased-finetuned-cola
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the glue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4699
- Matthews Correlation: 0.4997
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss | Matthews Correlation |
|:-------------:|:-----:|:----:|:---------------:|:--------------------:|
| 0.5016 | 1.0 | 535 | 0.4699 | 0.4997 |
### Framework versions
- Transformers 4.28.1
- Pytorch 2.0.0+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
| 1,723 | [
[
-0.0258331298828125,
-0.05267333984375,
0.0115966796875,
0.020904541015625,
-0.0282440185546875,
-0.02252197265625,
-0.0192718505859375,
-0.0153350830078125,
0.0254974365234375,
0.016693115234375,
-0.0494384765625,
-0.0306854248046875,
-0.050994873046875,
-0... |
Vignesh-Trender/my_awesome_model | 2023-05-05T06:02:21.000Z | [
"transformers",
"tf",
"distilbert",
"text-classification",
"generated_from_keras_callback",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | text-classification | Vignesh-Trender | null | null | Vignesh-Trender/my_awesome_model | 0 | 2 | transformers | 2023-05-04T11:46:12 | ---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: Vignesh-Trender/my_awesome_model
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# Vignesh-Trender/my_awesome_model
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.1294
- Validation Loss: 0.2072
- Train Accuracy: 0.9230
- Epoch: 1
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': None, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': True, 'is_legacy_optimizer': False, 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 7810, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32
### Training results
| Train Loss | Validation Loss | Train Accuracy | Epoch |
|:----------:|:---------------:|:--------------:|:-----:|
| 0.2500 | 0.1823 | 0.9293 | 0 |
| 0.1294 | 0.2072 | 0.9230 | 1 |
### Framework versions
- Transformers 4.28.1
- TensorFlow 2.12.0
- Datasets 2.12.0
- Tokenizers 0.13.3
| 1,785 | [
[
-0.0426025390625,
-0.046112060546875,
0.022308349609375,
0.004161834716796875,
-0.0296478271484375,
-0.021697998046875,
-0.005756378173828125,
-0.015655517578125,
0.010589599609375,
0.0035266876220703125,
-0.04559326171875,
-0.0533447265625,
-0.055572509765625,
... |
helenai/Alireza1044-albert-base-v2-stsb-ov | 2023-05-04T13:15:48.000Z | [
"transformers",
"openvino",
"albert",
"text-classification",
"en",
"endpoints_compatible",
"region:us"
] | text-classification | helenai | null | null | helenai/Alireza1044-albert-base-v2-stsb-ov | 0 | 2 | transformers | 2023-05-04T13:15:34 | ---
language:
- en
tags:
- openvino
---
# Alireza1044/albert-base-v2-stsb
This is the [Alireza1044/albert-base-v2-stsb](https://huggingface.co/Alireza1044/albert-base-v2-stsb) model converted to [OpenVINO](https://openvino.ai), for accelerated inference.
An example of how to do inference on this model:
```python
from optimum.intel.openvino import OVModelForSequenceClassification
from transformers import AutoTokenizer, pipeline
# model_id should be set to either a local directory or a model available on the HuggingFace hub.
model_id = "helenai/Alireza1044-albert-base-v2-stsb-ov"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = OVModelForSequenceClassification.from_pretrained(model_id)
pipe = pipeline("text-classification", model=model, tokenizer=tokenizer)
result = pipe("I like you. I love you")
print(result)
```
| 841 | [
[
-0.0249176025390625,
-0.03680419921875,
0.018707275390625,
0.0220489501953125,
-0.00954437255859375,
-0.0294189453125,
0.0022678375244140625,
-0.008148193359375,
0.0232696533203125,
0.03924560546875,
-0.053314208984375,
-0.03363037109375,
-0.040802001953125,
... |
Sleoruiz/bertin-roberta-fine-tuned-text-classification-SL-data-augmentation-ds | 2023-05-04T14:15:42.000Z | [
"transformers",
"pytorch",
"roberta",
"text-classification",
"generated_from_trainer",
"license:cc-by-4.0",
"endpoints_compatible",
"region:us"
] | text-classification | Sleoruiz | null | null | Sleoruiz/bertin-roberta-fine-tuned-text-classification-SL-data-augmentation-ds | 0 | 2 | transformers | 2023-05-04T13:21:03 | ---
license: cc-by-4.0
tags:
- generated_from_trainer
model-index:
- name: bertin-roberta-fine-tuned-text-classification-SL-data-augmentation-ds
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bertin-roberta-fine-tuned-text-classification-SL-data-augmentation-ds
This model is a fine-tuned version of [bertin-project/bertin-roberta-base-spanish](https://huggingface.co/bertin-project/bertin-roberta-base-spanish) on the None dataset.
It achieves the following results on the evaluation set:
- eval_loss: 1.9552
- eval_f1: 0.6062
- eval_recall: 0.5982
- eval_accuracy: 0.5982
- eval_precision: 0.6312
- eval_runtime: 15.886
- eval_samples_per_second: 99.647
- eval_steps_per_second: 6.232
- epoch: 6.0
- step: 2772
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 10
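The `linear` scheduler with `lr_scheduler_warmup_ratio: 0.1` above ramps the learning rate up over the first 10% of steps and then decays it linearly to zero. A minimal sketch of that schedule (the step counts below are illustrative, not taken from this run):

```python
def linear_schedule_with_warmup(step, total_steps, warmup_ratio=0.1, base_lr=2e-5):
    """Linear warmup to base_lr, then linear decay to zero."""
    warmup_steps = int(total_steps * warmup_ratio)
    if step < warmup_steps:
        return base_lr * step / max(1, warmup_steps)
    # Linear decay over the remaining steps
    return base_lr * max(0.0, (total_steps - step) / max(1, total_steps - warmup_steps))

print(linear_schedule_with_warmup(100, 1000))   # peak at end of warmup: 2e-05
print(linear_schedule_with_warmup(1000, 1000))  # 0.0
```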
### Framework versions
- Transformers 4.28.1
- Pytorch 2.0.0+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
| 1,472 | [
[
-0.02203369140625,
-0.05255126953125,
0.0239105224609375,
0.00989532470703125,
-0.0265655517578125,
-0.028839111328125,
-0.0283355712890625,
-0.033416748046875,
0.004924774169921875,
0.0276336669921875,
-0.0430908203125,
-0.049591064453125,
-0.056915283203125,
... |
aleksahet/xlm-r-squad-sr-lat | 2023-07-08T09:35:56.000Z | [
"transformers",
"pytorch",
"xlm-roberta",
"question-answering",
"sr",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | question-answering | aleksahet | null | null | aleksahet/xlm-r-squad-sr-lat | 1 | 2 | transformers | 2023-05-04T14:00:15 | ---
language:
- sr
metrics:
- f1
- exact_match
library_name: transformers
pipeline_tag: question-answering
---
# XLM-R-SQuAD-sr-lat
This is an XLM-R-based model fine-tuned on a synthetic question-answering dataset created by translating SQuAD 1.1. This model is the result of my thesis.
# Usage
```python
from transformers import pipeline
model_name = 'aleksahet/xlm-r-squad-sr-lat'
pipe = pipeline('question-answering', model=model_name, tokenizer=model_name)
sample = {
'question': 'U kom gradu je rođen Željko Obradović?',
'context': 'Željko Obradović (Čačak, 9. mart 1960) bivši je srpski i jugoslovenski košarkaš. Najuspešniji je trener u istoriji košarke.'
}
res = pipe(sample)
```
# Performance
The model was tested on a synthetic question-answering dataset created by automatic translation of the SQuAD 1.1 dev split. The model achieved the following results:
- Exact Match: ```71.04```
- F1: ```81.62```
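Exact Match on SQuAD-style data is typically computed after normalizing both the prediction and the gold answer (lowercasing, stripping punctuation, collapsing whitespace). A rough sketch of that comparison, not the evaluation code used for the thesis (the official SQuAD script additionally strips English articles, which is skipped here since the data is Serbian):

```python
import string

def normalize(text):
    """Lowercase, remove punctuation, and collapse whitespace."""
    text = text.lower()
    text = "".join(ch for ch in text if ch not in string.punctuation)
    return " ".join(text.split())

def exact_match(prediction, gold):
    return int(normalize(prediction) == normalize(gold))

print(exact_match("Čačak.", "čačak"))  # 1
```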
# Source Code
Source code for synthetic dataset generation and model finetuning can be found on this [GitHub repository](https://github.com/aleksac99/SQuAD-SR/). | 1,078 | [
[
-0.0265350341796875,
-0.059356689453125,
0.0290985107421875,
0.01221466064453125,
-0.0267791748046875,
0.0132293701171875,
0.0012598037719726562,
-0.01067352294921875,
0.0021991729736328125,
0.051177978515625,
-0.0850830078125,
-0.037689208984375,
-0.02821350097... |
kishoreb4/distilbert-base-uncased-finetuned-emotion | 2023-05-04T14:33:58.000Z | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:emotion",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | text-classification | kishoreb4 | null | null | kishoreb4/distilbert-base-uncased-finetuned-emotion | 0 | 2 | transformers | 2023-05-04T14:11:39 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-emotion
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: emotion
type: emotion
config: split
split: validation
args: split
metrics:
- name: Accuracy
type: accuracy
value: 0.919
- name: F1
type: f1
value: 0.9190477193383318
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2268
- Accuracy: 0.919
- F1: 0.9190
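The F1 above is presumably the support-weighted average across emotion labels, which is why it tracks accuracy so closely. A toy illustration of the two metrics on made-up predictions (not this model's outputs):

```python
def accuracy(y_true, y_pred):
    return sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)

def weighted_f1(y_true, y_pred):
    """F1 per class, averaged with class-support weights."""
    total = 0.0
    for c in set(y_true):
        tp = sum(t == c and p == c for t, p in zip(y_true, y_pred))
        fp = sum(t != c and p == c for t, p in zip(y_true, y_pred))
        fn = sum(t == c and p != c for t, p in zip(y_true, y_pred))
        f1 = 2 * tp / (2 * tp + fp + fn) if tp else 0.0
        total += f1 * y_true.count(c) / len(y_true)
    return total

y_true = ["joy", "joy", "anger", "sadness", "joy", "fear"]
y_pred = ["joy", "anger", "anger", "sadness", "joy", "fear"]
print(round(accuracy(y_true, y_pred), 3))     # 0.833
print(round(weighted_f1(y_true, y_pred), 3))  # 0.844
```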
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.8412 | 1.0 | 250 | 0.3320 | 0.9005 | 0.8966 |
| 0.26 | 2.0 | 500 | 0.2268 | 0.919 | 0.9190 |
### Framework versions
- Transformers 4.28.1
- Pytorch 2.0.0+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
| 1,846 | [
[
-0.037811279296875,
-0.041595458984375,
0.0144500732421875,
0.0219573974609375,
-0.026092529296875,
-0.019256591796875,
-0.013519287109375,
-0.0086517333984375,
0.0110015869140625,
0.0088653564453125,
-0.057098388671875,
-0.0513916015625,
-0.059417724609375,
... |
Sleoruiz/bertin-roberta-fine-tuned-text-classification-SL-data-augmentation-dss | 2023-05-04T15:28:09.000Z | [
"transformers",
"pytorch",
"roberta",
"text-classification",
"generated_from_trainer",
"license:cc-by-4.0",
"endpoints_compatible",
"region:us"
] | text-classification | Sleoruiz | null | null | Sleoruiz/bertin-roberta-fine-tuned-text-classification-SL-data-augmentation-dss | 0 | 2 | transformers | 2023-05-04T14:21:28 | ---
license: cc-by-4.0
tags:
- generated_from_trainer
metrics:
- f1
- recall
- accuracy
- precision
model-index:
- name: bertin-roberta-fine-tuned-text-classification-SL-data-augmentation-dss
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bertin-roberta-fine-tuned-text-classification-SL-data-augmentation-dss
This model is a fine-tuned version of [bertin-project/bertin-roberta-base-spanish](https://huggingface.co/bertin-project/bertin-roberta-base-spanish) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 2.3050
- F1: 0.4713
- Recall: 0.4797
- Accuracy: 0.4797
- Precision: 0.4820
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 | Recall | Accuracy | Precision |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:--------:|:---------:|
| No log | 1.0 | 359 | 3.4261 | 0.2636 | 0.3268 | 0.3268 | 0.2780 |
| 3.7358 | 2.0 | 718 | 2.7048 | 0.3631 | 0.4179 | 0.4179 | 0.3773 |
| 2.4772 | 3.0 | 1077 | 2.4578 | 0.4072 | 0.4407 | 0.4407 | 0.4095 |
| 2.4772 | 4.0 | 1436 | 2.3357 | 0.4403 | 0.4545 | 0.4545 | 0.4815 |
| 1.6075 | 5.0 | 1795 | 2.3050 | 0.4713 | 0.4797 | 0.4797 | 0.4820 |
### Framework versions
- Transformers 4.28.1
- Pytorch 2.0.0+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
| 2,048 | [
[
-0.0283203125,
-0.0419921875,
0.01535797119140625,
0.00296783447265625,
-0.019989013671875,
-0.0253753662109375,
-0.0183868408203125,
-0.0250244140625,
0.0127105712890625,
0.025726318359375,
-0.04644775390625,
-0.054718017578125,
-0.059539794921875,
-0.00991... |
leonardosaveri/DSChallenge_Roberta_Base_Parameters | 2023-05-04T17:06:12.000Z | [
"transformers",
"pytorch",
"tensorboard",
"roberta",
"text-classification",
"generated_from_trainer",
"license:mit",
"endpoints_compatible",
"region:us"
] | text-classification | leonardosaveri | null | null | leonardosaveri/DSChallenge_Roberta_Base_Parameters | 0 | 2 | transformers | 2023-05-04T15:34:59 | ---
license: mit
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: DSChallenge_Roberta_Base_Parameters
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# DSChallenge_Roberta_Base_Parameters
This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3702
- Accuracy: 0.9392
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-06
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 0.3735 | 1.0 | 3169 | 0.4367 | 0.9204 |
| 0.3029 | 2.0 | 6338 | 0.3719 | 0.9374 |
| 0.2616 | 3.0 | 9507 | 0.3662 | 0.9388 |
| 0.2785 | 4.0 | 12676 | 0.3702 | 0.9392 |
### Framework versions
- Transformers 4.28.1
- Pytorch 2.0.0+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
| 1,535 | [
[
-0.03228759765625,
-0.0543212890625,
0.02642822265625,
0.00004404783248901367,
-0.0290985107421875,
-0.03436279296875,
-0.01520538330078125,
-0.00795745849609375,
0.004909515380859375,
0.0300750732421875,
-0.062286376953125,
-0.0545654296875,
-0.061676025390625,... |
Sleoruiz/roberta-bne-fine-tuned-text-classification-SL-data-augmentation-dss | 2023-05-04T16:11:54.000Z | [
"transformers",
"pytorch",
"roberta",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | text-classification | Sleoruiz | null | null | Sleoruiz/roberta-bne-fine-tuned-text-classification-SL-data-augmentation-dss | 0 | 2 | transformers | 2023-05-04T15:43:24 | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- f1
- recall
- accuracy
- precision
model-index:
- name: roberta-bne-fine-tuned-text-classification-SL-data-augmentation-dss
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# roberta-bne-fine-tuned-text-classification-SL-data-augmentation-dss
This model is a fine-tuned version of [PlanTL-GOB-ES/roberta-base-bne](https://huggingface.co/PlanTL-GOB-ES/roberta-base-bne) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 2.3544
- F1: 0.4643
- Recall: 0.4629
- Accuracy: 0.4629
- Precision: 0.4880
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 | Recall | Accuracy | Precision |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:--------:|:---------:|
| 3.3244 | 1.0 | 562 | 2.7345 | 0.3306 | 0.3939 | 0.3939 | 0.3500 |
| 2.4396 | 2.0 | 1124 | 2.4186 | 0.4061 | 0.4468 | 0.4468 | 0.4349 |
| 1.8841 | 3.0 | 1686 | 2.2738 | 0.4453 | 0.4702 | 0.4702 | 0.4583 |
| 1.4409 | 4.0 | 2248 | 2.2984 | 0.4500 | 0.4582 | 0.4582 | 0.4625 |
| 1.0328 | 5.0 | 2810 | 2.3544 | 0.4643 | 0.4629 | 0.4629 | 0.4880 |
### Framework versions
- Transformers 4.28.1
- Pytorch 2.0.0+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
| 2,019 | [
[
-0.032958984375,
-0.040771484375,
0.0109100341796875,
-0.00353240966796875,
-0.016998291015625,
-0.0248870849609375,
-0.010101318359375,
-0.0213623046875,
0.0170440673828125,
0.0261383056640625,
-0.048797607421875,
-0.054290771484375,
-0.055450439453125,
-0.... |
mnavas/roberta-finetuned-WebClassification-v2-smalllinguaES | 2023-05-10T12:40:36.000Z | [
"transformers",
"pytorch",
"roberta",
"text-classification",
"generated_from_trainer",
"license:mit",
"endpoints_compatible",
"region:us"
] | text-classification | mnavas | null | null | mnavas/roberta-finetuned-WebClassification-v2-smalllinguaES | 0 | 2 | transformers | 2023-05-04T15:51:35 | ---
license: mit
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
- precision
- recall
model-index:
- name: roberta-finetuned-WebClassification-v2-smalllinguaES
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# roberta-finetuned-WebClassification-v2-smalllinguaES
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.2410
- Accuracy: 0.6471
- F1: 0.6471
- Precision: 0.6471
- Recall: 0.6471
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Precision | Recall |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|:---------:|:------:|
| No log | 1.0 | 9 | 2.3023 | 0.0588 | 0.0588 | 0.0588 | 0.0588 |
| No log | 2.0 | 18 | 2.0337 | 0.2353 | 0.2353 | 0.2353 | 0.2353 |
| No log | 3.0 | 27 | 1.8946 | 0.4706 | 0.4706 | 0.4706 | 0.4706 |
| No log | 4.0 | 36 | 1.7548 | 0.5882 | 0.5882 | 0.5882 | 0.5882 |
| No log | 5.0 | 45 | 1.6002 | 0.5294 | 0.5294 | 0.5294 | 0.5294 |
| No log | 6.0 | 54 | 1.4561 | 0.5294 | 0.5294 | 0.5294 | 0.5294 |
| No log | 7.0 | 63 | 1.3614 | 0.5294 | 0.5294 | 0.5294 | 0.5294 |
| No log | 8.0 | 72 | 1.2781 | 0.5882 | 0.5882 | 0.5882 | 0.5882 |
| No log | 9.0 | 81 | 1.2420 | 0.5882 | 0.5882 | 0.5882 | 0.5882 |
| No log | 10.0 | 90 | 1.2410 | 0.6471 | 0.6471 | 0.6471 | 0.6471 |
### Framework versions
- Transformers 4.27.3
- Pytorch 2.0.0+cpu
- Datasets 2.10.1
- Tokenizers 0.13.2
| 2,378 | [
[
-0.03857421875,
-0.041839599609375,
0.01410675048828125,
-0.0038242340087890625,
-0.0114288330078125,
-0.0216827392578125,
-0.0102691650390625,
-0.013336181640625,
0.02069091796875,
0.0269622802734375,
-0.05096435546875,
-0.0517578125,
-0.0518798828125,
-0.0... |
wasertech/wav2vec2-cv-fr-9 | 2023-09-22T11:17:40.000Z | [
"transformers",
"pytorch",
"wav2vec2",
"automatic-speech-recognition",
"fr",
"license:mpl-2.0",
"endpoints_compatible",
"has_space",
"region:us"
] | automatic-speech-recognition | wasertech | null | null | wasertech/wav2vec2-cv-fr-9 | 1 | 2 | transformers | 2023-05-04T15:55:20 | ---
license: mpl-2.0
language:
- fr
---
<details>
<summary>Click here to read this model card in English.</summary>
French speech transcription model fine-tuned on more than 2,500 hours of audio (in French) from the Wav2Vec2 XLSR 53 base model by MetaAI's R&D laboratory.
This model was trained on the same datasets as the [French model 0.9](https://github.com/wasertech/commonvoice-fr/releases/tag/v0.9.0-fr-0.1) in order to compare the performance of the DeepSpeech2 architecture (DeepSpeech/STT+KenLM) and the CTC decoder of Wav2Vec2.
This is a distribution for research and evaluation purposes only, released under the Mozilla Public License version 2.0.
## Datasets:
- [X] Lingua Libre (~40h)
- [ ] Common Voice FR (v9.0) (~850h)*
- [X] Training Speech (~180h)
- [ ] African Accented French (~15h)*
- [ ] M-AILABS French (~315h)*
- [X] Att-HACK (~75h)
- [X] Multilingual LibriSpeech (~1,100h)
Total: ~1,395h (coming soon: ~2,573h)
\* Coming soon
## Settings
## Licence :
[Mozilla Public License (MPL) 2.0](https://github.com/common-voice/commonvoice-fr/blob/5699e59244d14bb14d5b7603b91c934b761c9194/DeepSpeech/LICENSE.txt)
## Results on test sets:
Tests performed with the TranScorerLM evaluation module on data pre-transformed to the DeepSpeech/STT CSV training format.
| Test set | WER | CER |
|--------------|-----------|------------|
| Multilingual LibriSpeech (MLS) | 25.74% | 8.14% |
| African Accented French | 66.12% | 34.56% |
| TrainingSpeech | 14.56% | 3.68% |
| LinguaLibre | 38.62% | 9.30% |
| M-AILABS FR | 15.90% | 4.28% |
| Att-HACK | 6.07% | 2.78% |
| CommonVoice FR 9.0 | 35.98% | 12.10% |
| **Average** | **22.16%** | 7.03% |
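WER in the table above is the word-level Levenshtein (edit) distance between the reference transcript and the hypothesis, divided by the number of reference words. A minimal reference implementation (not the TranScorerLM code that produced these numbers):

```python
def wer(reference, hypothesis):
    """Word error rate: word-level edit distance / reference length."""
    ref, hyp = reference.split(), hypothesis.split()
    # Dynamic-programming edit distance over words
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,          # deletion
                          d[i][j - 1] + 1,          # insertion
                          d[i - 1][j - 1] + cost)   # substitution
    return d[len(ref)][len(hyp)] / len(ref)

print(wer("bonjour tout le monde", "bonjour le monde"))  # 0.25
```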
## Trainer's Notes
This 0.99-pre version of the French model uses a new architecture: unlike previous distributions, which were based on DeepSpeech2 with a KenLM language model, this distribution uses Wav2vec2.
While it also uses a CTC decoder to score the output of an acoustic model, Wav2vec2, in contrast to KenLM, takes full advantage of the advances introduced since transformers became widespread in the state of the art.
These advances show up in the word error rate (WER) and character error rate (CER) measurements, but also when using the model.
The next step would be to add, update and augment the acoustic model data with one or more background-noise layers from various noisy environments (a fan, a car, a crowd of people, etc.; cf. [Model 0.9](https://github.com/wasertech/commonvoice-fr/releases/tag/v0.9.0-fr-0.1)), but also to apply more essential transformations such as echo and various other distortions of the input. We could take advantage of advances in transformers to identify the noise and train a model to remove it, keeping only the speech; the output of such a model could then be used as input to this one. This would greatly improve transcription accuracy under extreme noise conditions.
To improve the model's performance on your own data, it is recommended to fine-tune it on that data.
Works with Transformers.
</details>
Modèle français de transcription vocale ajusté sur plus de 2'500 heures d'audio (en français) à partir du modèle de base Wav2Vec2 XLSR 53 du laboratoire R&D de MetaAI.
Ce modèle a été entraîné sur les mêmes jeux de données que le [modèle français 0.9](https://github.com/wasertech/commonvoice-fr/releases/tag/v0.9.0-fr-0.1) afin de comparer les performances de l'architecture DeepSpeech2 (DeepSpeech/STT+KenLM) et du décodeur CTC de Wav2Vec2.
Il s'agit d'une distribution destinée uniquement à des fins de recherche et d'évaluation, régie par la licence publique de Mozilla dans sa version 2.0.
## Jeux de données :
- [X] Lingua Libre (~40h)
- [ ] Common Voice FR (v9.0) (~850h)*
- [X] Training Speech (~180h)
- [ ] African Accented French (~15h)*
- [ ] M-AILABS French (~315h)*
- [X] Att-HACK (~75h)
- [X] Multilingual LibriSpeech (~1'100h)
Total: ~1'395h (bientôt disponible ~2'573h)
\* Bientôt disponible
## Paramètres
## Licence :
[Mozilla Public License (MPL) 2.0](https://github.com/common-voice/commonvoice-fr/blob/5699e59244d14bb14d5b7603b91c934b761c9194/DeepSpeech/LICENSE.txt)
## Résultats sur les sets de test:
Test effectué avec le module d'évaluation de TranScorerLM sur les données pré-transformées au format d'entraînement CSV de DeepSpeech/STT.
| Test set | WER | CER |
|--------------|-----------|------------|
| Multilingual LibriSpeech (MLS) | 25.74% | 8.14% |
| African Accented French | 66.12% | 34.56% |
| TrainingSpeech | 14.56% | 3.68% |
| LinguaLibre | 38.62% | 9.30% |
| M-AILABS FR | 15.90% | 4.28% |
| Att-HACK | 6.07% | 2.78% |
| CommonVoice FR 9.0 | 35.98% | 12.10% |
| **Moyenne** | **22.16%** | 7.03% |
## Notes de l'entraîneur
Cette version 0.99-pre du modèle français utilise une nouvelle architecture : contrairement aux distributions précédentes, basées sur l'architecture DeepSpeech2 avec un modèle de langage KenLM, cette nouvelle distribution utilise l'architecture Wav2vec2.
Utilisant également un décodeur CTC comme scorer en sortie d'un modèle acoustique, Wav2vec2, contrairement à KenLM, tire pleinement parti des avancées introduites depuis la démocratisation des transformers dans l'état de l'art.
Ces avancées se perçoivent dans les mesures du taux d'erreur par mot (WER) et par caractère (CER) mais également lors de l'utilisation du modèle.
La prochaine étape consisterait à ajouter, mettre à jour et augmenter les données du modèle acoustique avec une ou plusieurs couches de bruit de fond provenant de divers environnements sources de bruit (un ventilateur, une voiture, une foule de gens, etc. ; cf. [Modèle 0.9](https://github.com/wasertech/commonvoice-fr/releases/tag/v0.9.0-fr-0.1)), mais également en appliquant des transformations plus essentielles telles que l'écho et diverses autres déformations de l'entrée. Nous pourrions profiter des avancées dans le domaine des transformers pour identifier le bruit et entraîner un modèle pour le supprimer et ne garder que le discours. Nous pourrions alors utiliser la sortie d'un tel modèle en entrée de celui-ci. Cela améliorerait grandement la précision de la transcription dans des conditions de bruit extrême.
Pour améliorer les performances du modèle sur vos données, il est préconisé de l'ajuster sur celles-ci.
Fonctionne avec Transformers. | 6,395 | [
[
-0.032867431640625,
-0.050994873046875,
0.0034942626953125,
0.0196075439453125,
-0.01529693603515625,
-0.01421356201171875,
-0.036468505859375,
-0.04071044921875,
-0.006561279296875,
0.036590576171875,
-0.042236328125,
-0.04876708984375,
-0.048736572265625,
... |
meltemtatli/bert-base-uncased-finetuned-cola-part2 | 2023-05-04T16:23:45.000Z | [
"transformers",
"pytorch",
"tensorboard",
"bert",
"text-classification",
"generated_from_trainer",
"dataset:glue",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | text-classification | meltemtatli | null | null | meltemtatli/bert-base-uncased-finetuned-cola-part2 | 0 | 2 | transformers | 2023-05-04T16:00:01 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- matthews_correlation
model-index:
- name: bert-base-uncased-finetuned-cola-part2
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: glue
type: glue
config: cola
split: validation
args: cola
metrics:
- name: Matthews Correlation
type: matthews_correlation
value: 0.5726999708077573
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-uncased-finetuned-cola-part2
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the glue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5136
- Matthews Correlation: 0.5727
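Matthews correlation, the metric above, uses all four confusion-matrix cells and stays informative even when the two CoLA classes are imbalanced; it ranges from -1 to 1. A small sketch of the binary formula (the counts are illustrative, not this model's confusion matrix):

```python
import math

def matthews_corrcoef(tp, tn, fp, fn):
    """Binary MCC from confusion-matrix counts; 0.0 if any marginal is empty."""
    denom = math.sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    return (tp * tn - fp * fn) / denom if denom else 0.0

# Illustrative counts only
print(round(matthews_corrcoef(tp=60, tn=20, fp=10, fn=10), 3))  # 0.524
```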
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1.966102391464137e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Matthews Correlation |
|:-------------:|:-----:|:----:|:---------------:|:--------------------:|
| No log | 1.0 | 268 | 0.4343 | 0.5343 |
| 0.4076 | 2.0 | 536 | 0.4104 | 0.5934 |
| 0.4076 | 3.0 | 804 | 0.5136 | 0.5727 |
### Framework versions
- Transformers 4.28.1
- Pytorch 2.0.0+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
| 1,898 | [
[
-0.022796630859375,
-0.04931640625,
0.009246826171875,
0.0187530517578125,
-0.0248565673828125,
-0.0203399658203125,
-0.016571044921875,
-0.016937255859375,
0.022552490234375,
0.01522064208984375,
-0.050689697265625,
-0.0279998779296875,
-0.052581787109375,
... |
Sleoruiz/roberta-bne-fine-tuned-text-classification-SL-dss | 2023-05-08T18:34:52.000Z | [
"transformers",
"pytorch",
"roberta",
"text-classification",
"generated_from_trainer",
"es",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | text-classification | Sleoruiz | null | null | Sleoruiz/roberta-bne-fine-tuned-text-classification-SL-dss | 0 | 2 | transformers | 2023-05-04T16:22:22 | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- f1
- recall
- accuracy
- precision
model-index:
- name: roberta-bne-fine-tuned-text-classification-SL-dss
results: []
language:
- es
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# roberta-bne-fine-tuned-text-classification-SL-dss
This model is a fine-tuned version of [PlanTL-GOB-ES/roberta-base-bne](https://huggingface.co/PlanTL-GOB-ES/roberta-base-bne) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 2.5089
- F1: 0.4781
- Recall: 0.4750
- Accuracy: 0.4750
- Precision: 0.5009
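Recall and Accuracy are identical in every row above; that is expected when recall is support-weighted, since weighted recall reduces algebraically to accuracy (sum of per-class true positives over the total count). A tiny self-contained demonstration with made-up labels:

```python
def accuracy(y_true, y_pred):
    return sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)

def weighted_recall(y_true, y_pred):
    """Per-class recall averaged with class-support weights."""
    total = 0.0
    for c in set(y_true):
        support = y_true.count(c)
        tp = sum(t == c and p == c for t, p in zip(y_true, y_pred))
        total += (tp / support) * (support / len(y_true))
    return total

y_true = ["a", "a", "b", "c", "b", "a"]
y_pred = ["a", "b", "b", "c", "c", "a"]
print(round(accuracy(y_true, y_pred), 6), round(weighted_recall(y_true, y_pred), 6))
```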
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 | Recall | Accuracy | Precision |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:--------:|:---------:|
| 3.235 | 1.0 | 836 | 2.4142 | 0.3995 | 0.4471 | 0.4471 | 0.4786 |
| 2.0006 | 2.0 | 1672 | 2.1013 | 0.4672 | 0.4942 | 0.4942 | 0.4867 |
| 1.2424 | 3.0 | 2508 | 2.1138 | 0.4861 | 0.4852 | 0.4852 | 0.5132 |
| 0.7242 | 4.0 | 3344 | 2.2694 | 0.4828 | 0.4747 | 0.4747 | 0.5126 |
| 0.3403 | 5.0 | 4180 | 2.5089 | 0.4781 | 0.4750 | 0.4750 | 0.5009 |
### Framework versions
- Transformers 4.28.1
- Pytorch 2.0.0+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3 | 1,997 | [
[
-0.032257080078125,
-0.041229248046875,
0.01068878173828125,
-0.002529144287109375,
-0.017974853515625,
-0.025543212890625,
-0.01155853271484375,
-0.0213623046875,
0.017974853515625,
0.0272674560546875,
-0.048248291015625,
-0.056488037109375,
-0.05712890625,
... |
surprisedPikachu007/search_summarize_v1 | 2023-09-07T06:03:49.000Z | [
"transformers",
"pytorch",
"tensorboard",
"safetensors",
"t5",
"text2text-generation",
"generated_from_trainer",
"dataset:billsum",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"text-generation-inference",
"region:us"
] | text2text-generation | surprisedPikachu007 | null | null | surprisedPikachu007/search_summarize_v1 | 1 | 2 | transformers | 2023-05-04T17:39:37 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- billsum
metrics:
- rouge
model-index:
- name: search_summarize_v1
results:
- task:
name: Sequence-to-sequence Language Modeling
type: text2text-generation
dataset:
name: billsum
type: billsum
config: default
split: ca_test
args: default
metrics:
- name: Rouge1
type: rouge
value: 0.1476
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# search_summarize_v1
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on the billsum dataset.
It achieves the following results on the evaluation set:
- Loss: 2.5224
- Rouge1: 0.1476
- Rouge2: 0.0551
- Rougel: 0.1228
- Rougelsum: 0.1228
- Gen Len: 19.0
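ROUGE-1 above measures unigram overlap between the generated and the reference summary, usually reported as an F-measure. A simplified sketch of the precision/recall/F1 computation (the actual ROUGE implementation adds details such as optional stemming and bootstrap aggregation):

```python
from collections import Counter

def rouge1_f1(candidate, reference):
    """Unigram-overlap F1 between a candidate and a reference summary."""
    cand = Counter(candidate.lower().split())
    ref = Counter(reference.lower().split())
    overlap = sum((cand & ref).values())
    if overlap == 0:
        return 0.0
    precision = overlap / sum(cand.values())
    recall = overlap / sum(ref.values())
    return 2 * precision * recall / (precision + recall)

print(round(rouge1_f1("the bill funds schools", "the bill funds public schools"), 3))  # 0.889
```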
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:|:---------:|:-------:|
| No log | 1.0 | 62 | 2.8176 | 0.1281 | 0.0401 | 0.1087 | 0.1086 | 19.0 |
| No log | 2.0 | 124 | 2.5989 | 0.1372 | 0.0476 | 0.1138 | 0.1137 | 19.0 |
| No log | 3.0 | 186 | 2.5386 | 0.1464 | 0.0541 | 0.1218 | 0.1219 | 19.0 |
| No log | 4.0 | 248 | 2.5224 | 0.1476 | 0.0551 | 0.1228 | 0.1228 | 19.0 |
### Framework versions
- Transformers 4.28.1
- Pytorch 2.0.0+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
| 2,130 | [
[
-0.03204345703125,
-0.033203125,
0.009033203125,
-0.0032749176025390625,
-0.0245361328125,
-0.026458740234375,
0.0005784034729003906,
-0.0163116455078125,
0.01459503173828125,
0.0308380126953125,
-0.0416259765625,
-0.052581787109375,
-0.05029296875,
-0.00749... |
hmert00/bert-base-uncased-finetuned-cola | 2023-05-05T15:33:14.000Z | [
"transformers",
"pytorch",
"tensorboard",
"bert",
"text-classification",
"generated_from_trainer",
"dataset:glue",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | text-classification | hmert00 | null | null | hmert00/bert-base-uncased-finetuned-cola | 0 | 2 | transformers | 2023-05-04T19:08:44 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- matthews_correlation
model-index:
- name: bert-base-uncased-finetuned-cola
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: glue
type: glue
config: cola
split: validation
args: cola
metrics:
- name: Matthews Correlation
type: matthews_correlation
value: 0.4706932444154383
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-uncased-finetuned-cola
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the glue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4939
- Matthews Correlation: 0.4707
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 7.590049876247753e-06
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss | Matthews Correlation |
|:-------------:|:-----:|:----:|:---------------:|:--------------------:|
| 0.5211 | 1.0 | 535 | 0.4939 | 0.4707 |
### Framework versions
- Transformers 4.28.1
- Pytorch 2.0.0
- Datasets 2.12.0
- Tokenizers 0.13.3
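These CoLA cards all report Matthews correlation as the headline metric. As a rough illustration (not tied to any particular checkpoint, and independent of the `evaluate`/`sklearn` implementations actually used by the Trainer), the coefficient can be computed directly from a binary confusion matrix:

```python
import math

def matthews_corrcoef(y_true, y_pred):
    """Matthews correlation for binary labels (0/1)."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    denom = math.sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    if denom == 0:
        return 0.0  # convention also used by scikit-learn for a degenerate matrix
    return (tp * tn - fp * fn) / denom

print(matthews_corrcoef([1, 1, 0, 0], [1, 0, 0, 0]))  # ≈ 0.577
```

Unlike plain accuracy, MCC stays meaningful on CoLA's imbalanced acceptable/unacceptable split, which is why these cards report it.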
| 1,732 | [
[
-0.0241241455078125,
-0.052581787109375,
0.0102386474609375,
0.0204010009765625,
-0.026947021484375,
-0.02142333984375,
-0.0178375244140625,
-0.0163726806640625,
0.02679443359375,
0.0164642333984375,
-0.04949951171875,
-0.0293426513671875,
-0.050384521484375,
... |
cekole/bert-base-uncased-finetuned-cola | 2023-05-07T17:49:32.000Z | [
"transformers",
"pytorch",
"tensorboard",
"bert",
"text-classification",
"generated_from_trainer",
"dataset:glue",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | text-classification | cekole | null | null | cekole/bert-base-uncased-finetuned-cola | 0 | 2 | transformers | 2023-05-04T19:22:36 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- matthews_correlation
model-index:
- name: bert-base-uncased-finetuned-cola
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: glue
type: glue
config: cola
split: validation
args: cola
metrics:
- name: Matthews Correlation
type: matthews_correlation
value: -0.2550525313550364
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-uncased-finetuned-cola
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the glue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4681
- Matthews Correlation: -0.2551
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss | Matthews Correlation |
|:-------------:|:-----:|:----:|:---------------:|:--------------------:|
| 0.4974 | 1.0 | 535 | 0.4681 | -0.2551 |
### Framework versions
- Transformers 4.28.1
- Pytorch 2.0.0+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
| 1,724 | [
[
-0.0250396728515625,
-0.0535888671875,
0.01214599609375,
0.02105712890625,
-0.02801513671875,
-0.0222015380859375,
-0.0189361572265625,
-0.01470184326171875,
0.0266571044921875,
0.0160369873046875,
-0.049468994140625,
-0.0313720703125,
-0.05133056640625,
-0.... |
elifcen/bert-base-uncased-finetuned-cola | 2023-05-07T16:00:45.000Z | [
"transformers",
"pytorch",
"tensorboard",
"bert",
"text-classification",
"generated_from_trainer",
"dataset:glue",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | text-classification | elifcen | null | null | elifcen/bert-base-uncased-finetuned-cola | 0 | 2 | transformers | 2023-05-04T19:32:56 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- matthews_correlation
model-index:
- name: bert-base-uncased-finetuned-cola
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: glue
type: glue
config: cola
split: validation
args: cola
metrics:
- name: Matthews Correlation
type: matthews_correlation
value: 0.5205935908642821
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-uncased-finetuned-cola
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the glue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4500
- Matthews Correlation: 0.5206
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss | Matthews Correlation |
|:-------------:|:-----:|:----:|:---------------:|:--------------------:|
| 0.4917 | 1.0 | 535 | 0.4500 | 0.5206 |
### Framework versions
- Transformers 4.28.1
- Pytorch 2.0.0+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
| 1,722 | [
[
-0.025238037109375,
-0.052886962890625,
0.011260986328125,
0.020904541015625,
-0.027313232421875,
-0.0229644775390625,
-0.019134521484375,
-0.01526641845703125,
0.0257568359375,
0.0166778564453125,
-0.049652099609375,
-0.0307769775390625,
-0.050537109375,
-0... |
MohamedGalal/marbert-sarcasm-detector | 2023-07-08T16:41:11.000Z | [
"transformers",
"pytorch",
"tf",
"bert",
"text-classification",
"generated_from_keras_callback",
"ar",
"license:afl-3.0",
"endpoints_compatible",
"region:us"
] | text-classification | MohamedGalal | null | null | MohamedGalal/marbert-sarcasm-detector | 0 | 2 | transformers | 2023-05-04T20:05:33 | ---
tags:
- generated_from_keras_callback
model-index:
- name: Marbert-sarcasm-detector
results: []
license: afl-3.0
language:
- ar
metrics:
- Accuracy
- F1 score
- Precision
- Recall
pipeline_tag: text-classification
widget:
- text: "بعد أن حصل الطالب على شهادة الليسانس بدأ فى تحضير الماجستير."
example_title: "NonSarc 01"
- text: "بعد أن حصل على الليسانس بدأ فى تحضيرالماجستير .وبعد أن حصل على الماجستير بدأ فى «تحضير» الشاى للزبائن."
example_title: "Sarc 01"
- text: " .جمع كلمة امراءة هي كلمة نساء"
example_title: "NonSarc 02"
- text: ".جمع كلمة امراءة هي كلمة نساء. حتى اللغة مش قادرة عليهم هههههههه"
example_title: "Sarc 02"
- text: ".للجامعة العربية موقفا كبيرا"
example_title: "NonSarc 03"
- text: " للجامعة العربية للانصاف موقفاَ كبيراَ !!!!! يتسع لاكثر من الف سيارة امام المبنى , هاهاها "
example_title: "Sarc 03"
- text: "!!هو أنت كدة عايش؟ يا بني دا أنت ميت بالحياة"
example_title: "Sarc 04"
- text: "شهر أكتوبر ده شهر عسل بجد مبيجبش ليا فيه غير الاكتئاب كل سنه"
example_title: "Sarc 05"
- text: " فى ناس زى النسكافيه ثلاثة فى واحد يتكلموا معاك وعنك وعليك"
example_title: "Sarc 06"
- text: " في ناس زي النسمة روحهم خفيفية و وجدهم يشرح الصدر. اهلا و سهلا بكم . اسعدنا مروركم من هنا. "
example_title: "NonSarc 06"
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# MARBERT Sarcasm Detector
This model is a fine-tuned version of UBC-NLP/MARBERTv2, trained on the ArSarcasT corpus training dataset.
It achieves the following results on the evaluation sets:
| Eval Dataset | Accuracy | F1 | Precision | Recall|
| :----- | :---: | :---: | :---: | :---: |
|ArSarcasT |0.844 | 0.735 | 0.754 | 0.718 |
|iSarcasmEVAL |0.892 | 0.633 | 0.616 | 0.650 |
|ArSarcasmV2 |0.771 | 0.561 | 0.590 | 0.534 |
## Model description
MARBERTv2 fine-tuned on a dataset of sarcastic tweets for sarcasm-detection text classification.
## Intended uses & limitations
More information needed
## Training and evaluation data
- Training dataset: ArSarcasT development split.
- Evaluation Datasets:
- ArSarcasm-v2 test dataset.
- iSarcasmEVAL test dataset.
- ArSarcasT test dataset.
## Training procedure
Fine-tuning, 3 epochs
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: None
- training_precision: float32
### Training results
### Framework versions
- Transformers 4.28.1
- TensorFlow 2.12.0
- Tokenizers 0.13.3 | 2,589 | [
[
-0.028594970703125,
-0.036346435546875,
0.0180206298828125,
0.0024013519287109375,
-0.03460693359375,
-0.0296173095703125,
-0.0087738037109375,
-0.01117706298828125,
0.00390625,
0.03057861328125,
-0.031280517578125,
-0.050994873046875,
-0.057952880859375,
-0... |
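A quick consistency check on the MARBERT evaluation table above: F1 is the harmonic mean of precision and recall, so the reported F1 values should follow from the reported P/R columns (up to rounding, since P and R are themselves rounded to three digits):

```python
def f1_from_pr(precision, recall):
    # F1 is the harmonic mean of precision and recall
    return 2 * precision * recall / (precision + recall)

# Reported ArSarcasT numbers: P=0.754, R=0.718; the card reports F1=0.735
print(f1_from_pr(0.754, 0.718))  # ≈ 0.7356, matching up to rounding
```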
eceersoyy/bert-base-uncased-finetuned-cola | 2023-05-07T09:03:18.000Z | [
"transformers",
"pytorch",
"tensorboard",
"bert",
"text-classification",
"generated_from_trainer",
"dataset:glue",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | text-classification | eceersoyy | null | null | eceersoyy/bert-base-uncased-finetuned-cola | 0 | 2 | transformers | 2023-05-04T20:06:46 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- matthews_correlation
model-index:
- name: bert-base-uncased-finetuned-cola
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: glue
type: glue
config: cola
split: validation
args: cola
metrics:
- name: Matthews Correlation
type: matthews_correlation
value: 0.5891424967516642
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-uncased-finetuned-cola
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the glue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6806
- Matthews Correlation: 0.5891
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 7.873067343773953e-06
- train_batch_size: 4
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Matthews Correlation |
|:-------------:|:-----:|:----:|:---------------:|:--------------------:|
| 0.5444 | 1.0 | 2138 | 0.6008 | 0.5429 |
| 0.4189 | 2.0 | 4276 | 0.6806 | 0.5891 |
| 0.2808 | 3.0 | 6414 | 0.8681 | 0.5778 |
### Framework versions
- Transformers 4.28.1
- Pytorch 2.0.0+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
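Most of these cards use `lr_scheduler_type: linear`. A minimal sketch of that schedule, assuming the Hugging Face convention (linear ramp over any warmup steps, then linear decay to zero); the peak LR and step counts below are taken from the card above, and the function itself is an illustration rather than the Trainer's actual implementation:

```python
def linear_lr(step, total_steps, peak_lr, warmup_steps=0):
    """Linear schedule: ramp up over warmup_steps, then decay linearly to 0."""
    if step < warmup_steps:
        return peak_lr * step / max(1, warmup_steps)
    return peak_lr * max(0.0, (total_steps - step) / max(1, total_steps - warmup_steps))

peak = 7.873067343773953e-06  # learning_rate from the card above
total = 6414                  # 3 epochs x 2138 steps, per the results table

print(linear_lr(0, total, peak))      # peak LR at step 0 (no warmup)
print(linear_lr(total, total, peak))  # decayed to 0.0 at the final step
```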
| 1,885 | [
[
-0.0259552001953125,
-0.051025390625,
0.00959014892578125,
0.0188751220703125,
-0.0238800048828125,
-0.0197601318359375,
-0.0173797607421875,
-0.01348876953125,
0.0274505615234375,
0.01690673828125,
-0.050506591796875,
-0.031768798828125,
-0.052490234375,
-0... |
Soulaimen/resnet-50-shortSleeveCleanedData | 2023-05-04T23:59:05.000Z | [
"transformers",
"pytorch",
"tensorboard",
"resnet",
"image-classification",
"generated_from_trainer",
"dataset:imagefolder",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | image-classification | Soulaimen | null | null | Soulaimen/resnet-50-shortSleeveCleanedData | 0 | 2 | transformers | 2023-05-04T21:29:19 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- imagefolder
metrics:
- accuracy
model-index:
- name: resnet-50-shortSleeveCleanedData
results:
- task:
name: Image Classification
type: image-classification
dataset:
name: imagefolder
type: imagefolder
config: default
split: train
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.9781420765027322
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# resnet-50-shortSleeveCleanedData
This model is a fine-tuned version of [microsoft/resnet-50](https://huggingface.co/microsoft/resnet-50) on the imagefolder dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1103
- Accuracy: 0.9781
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 7
- total_train_batch_size: 56
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.01
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.973 | 1.0 | 147 | 0.9371 | 0.7268 |
| 0.6565 | 2.0 | 294 | 0.5520 | 0.8710 |
| 0.4609 | 3.0 | 441 | 0.2983 | 0.9279 |
| 0.3937 | 4.0 | 588 | 0.2051 | 0.9486 |
| 0.3723 | 5.0 | 735 | 0.1521 | 0.9727 |
| 0.3926 | 6.0 | 882 | 0.1490 | 0.9672 |
| 0.3326 | 7.0 | 1029 | 0.1367 | 0.9650 |
| 0.3166 | 8.0 | 1176 | 0.1109 | 0.9738 |
| 0.3492 | 9.0 | 1323 | 0.1108 | 0.9760 |
| 0.3228 | 10.0 | 1470 | 0.1103 | 0.9781 |
### Framework versions
- Transformers 4.28.1
- Pytorch 2.0.0+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
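The ResNet card above combines a per-device batch size with gradient accumulation and a warmup ratio. A small sketch of how the derived quantities on the card follow from the listed hyperparameters (steps-per-epoch is read off the results table; the exact rounding the Trainer applies to warmup steps is an assumption here):

```python
train_batch_size = 8
gradient_accumulation_steps = 7
total_train_batch_size = train_batch_size * gradient_accumulation_steps
print(total_train_batch_size)  # 56, as listed on the card

steps_per_epoch = 147  # from the results table
num_epochs = 10
warmup_ratio = 0.01
warmup_steps = int(steps_per_epoch * num_epochs * warmup_ratio)
print(warmup_steps)  # ~14 optimizer steps of LR warmup
```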
| 2,326 | [
[
-0.0357666015625,
-0.02301025390625,
-0.0020732879638671875,
0.0040130615234375,
-0.01445770263671875,
-0.0211334228515625,
0.0032196044921875,
-0.01326751708984375,
0.0137176513671875,
0.0175323486328125,
-0.056427001953125,
-0.0445556640625,
-0.046295166015625... |
danny3/codehelper-ds | 2023-05-05T18:11:00.000Z | [
"transformers",
"pytorch",
"tensorboard",
"gpt2",
"text-generation",
"generated_from_trainer",
"license:mit",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | danny3 | null | null | danny3/codehelper-ds | 0 | 2 | transformers | 2023-05-04T21:41:08 | ---
license: mit
tags:
- generated_from_trainer
model-index:
- name: codehelper-ds
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# codehelper-ds
This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 2.1453
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 256
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 1000
- num_epochs: 1
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.28.1
- Pytorch 2.0.0+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
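This card uses a cosine schedule with 1000 warmup steps. A minimal sketch of that shape, assuming the usual linear-warmup-then-cosine-decay form (the total step count below is illustrative, not taken from the card):

```python
import math

def cosine_lr(step, total_steps, peak_lr, warmup_steps):
    """Sketch of a cosine LR schedule with linear warmup."""
    if step < warmup_steps:
        return peak_lr * step / max(1, warmup_steps)
    progress = (step - warmup_steps) / max(1, total_steps - warmup_steps)
    return peak_lr * 0.5 * (1.0 + math.cos(math.pi * progress))

peak = 5e-4  # learning_rate from the card
print(cosine_lr(500, 10_000, peak, warmup_steps=1000))     # halfway up the warmup ramp
print(cosine_lr(10_000, 10_000, peak, warmup_steps=1000))  # fully decayed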
| 1,197 | [
[
-0.0298919677734375,
-0.046295166015625,
0.018951416015625,
0.004604339599609375,
-0.031219482421875,
-0.03363037109375,
-0.0095367431640625,
-0.0176239013671875,
-0.0033817291259765625,
0.018951416015625,
-0.050323486328125,
-0.038665771484375,
-0.0537414550781... |
rsonavane/flan-t5-xl-alpaca-dolly-lora-peft | 2023-05-05T06:11:03.000Z | [
"peft",
"pytorch",
"t5",
"adapter",
"flan-t5",
"lora",
"text2text-generation",
"en",
"ja",
"de",
"fr",
"multilingual",
"dataset:yahma/alpaca-cleaned",
"dataset:databricks/databricks-dolly-15k",
"dataset:samsum",
"region:us"
] | text2text-generation | rsonavane | null | null | rsonavane/flan-t5-xl-alpaca-dolly-lora-peft | 0 | 2 | peft | 2023-05-04T22:08:55 | ---
datasets:
- yahma/alpaca-cleaned
- databricks/databricks-dolly-15k
- samsum
pipeline_tag: text2text-generation
tags:
- t5
- adapter
- flan-t5
- peft
- lora
language:
- en
- ja
- de
- fr
- multilingual
---
# Usage
Find below some example scripts on how to use the model in `transformers`:
## Using the Pytorch model
```python
import torch
from peft import PeftModel, PeftConfig
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer
# Load peft config for pre-trained checkpoint etc.
peft_model_id = "rsonavane/flan-t5-xl-alpaca-dolly-lora-peft"
config = PeftConfig.from_pretrained(peft_model_id)
# load base LLM model and tokenizer
model = AutoModelForSeq2SeqLM.from_pretrained(config.base_model_name_or_path, load_in_8bit=True, device_map={"":0})
tokenizer = AutoTokenizer.from_pretrained(config.base_model_name_or_path)
# Load the Lora model
model = PeftModel.from_pretrained(model, peft_model_id, device_map={"":0})
```
## Prompt generation
```python
def generate_prompt(instruction: str, input_ctxt: str = "") -> str:
    if input_ctxt:
        return f"""Below is an instruction that describes a task, paired with an input that provides further context. Write a response that appropriately completes the request.

### Instruction:
{instruction}

### Input:
{input_ctxt}

### Response:"""
    else:
        return f"""Below is an instruction that describes a task. Write a response that appropriately completes the request.

### Instruction:
{instruction}

### Response:"""
```
## Inference
```python
input_ctxt = ""
instruction = ""
input_text = generate_prompt(instruction, input_ctxt)
input_ids = tokenizer(input_text, return_tensors="pt").input_ids.to("cuda")
outputs = model.generate(input_ids)
print(tokenizer.decode(outputs[0]))
```
## Training Details
Intended for conversation analysis, closed QnA, and summarization.
Trained on instructions from the dolly-15k, alpaca-52k, and samsum datasets.
[
-0.02325439453125,
-0.057769775390625,
0.00731658935546875,
-0.0002827644348144531,
-0.01059722900390625,
-0.01470184326171875,
0.007671356201171875,
-0.00437164306640625,
0.0010051727294921875,
0.0455322265625,
-0.04266357421875,
-0.0269775390625,
-0.0342102050... |
meltemtatli/bert-base-uncased-finetuned-cola-trying | 2023-05-05T09:48:15.000Z | [
"transformers",
"pytorch",
"tensorboard",
"bert",
"text-classification",
"generated_from_trainer",
"dataset:glue",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | text-classification | meltemtatli | null | null | meltemtatli/bert-base-uncased-finetuned-cola-trying | 0 | 2 | transformers | 2023-05-04T22:09:27 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- matthews_correlation
model-index:
- name: bert-base-uncased-finetuned-cola-trying
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: glue
type: glue
config: cola
split: validation
args: cola
metrics:
- name: Matthews Correlation
type: matthews_correlation
value: 0.5318380398617779
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-uncased-finetuned-cola-trying
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the glue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4377
- Matthews Correlation: 0.5318
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss | Matthews Correlation |
|:-------------:|:-----:|:----:|:---------------:|:--------------------:|
| 0.4603 | 1.0 | 535 | 0.4377 | 0.5318 |
### Framework versions
- Transformers 4.28.1
- Pytorch 2.0.0+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
| 1,736 | [
[
-0.023956298828125,
-0.050811767578125,
0.01410675048828125,
0.022613525390625,
-0.02850341796875,
-0.0247802734375,
-0.0216217041015625,
-0.01360321044921875,
0.0241851806640625,
0.01268768310546875,
-0.050018310546875,
-0.032806396484375,
-0.052642822265625,
... |
seviladiguzel/355a590 | 2023-05-04T22:45:18.000Z | [
"keras",
"region:us"
] | null | seviladiguzel | null | null | seviladiguzel/355a590 | 0 | 2 | keras | 2023-05-04T22:44:41 | ---
library_name: keras
---
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
| Hyperparameters | Value |
| :-- | :-- |
| name | Adam |
| weight_decay | None |
| clipnorm | None |
| global_clipnorm | None |
| clipvalue | None |
| use_ema | False |
| ema_momentum | 0.99 |
| ema_overwrite_frequency | None |
| jit_compile | True |
| is_legacy_optimizer | False |
| learning_rate | 4.999999873689376e-05 |
| beta_1 | 0.9 |
| beta_2 | 0.999 |
| epsilon | 1e-07 |
| amsgrad | False |
| training_precision | mixed_float16 |
## Model Plot
<details>
<summary>View Model Plot</summary>

</details> | 846 | [
[
-0.03936767578125,
-0.03973388671875,
0.03167724609375,
0.00824737548828125,
-0.04376220703125,
-0.0182342529296875,
0.0093536376953125,
-0.005672454833984375,
0.0189056396484375,
0.0294647216796875,
-0.04541015625,
-0.051025390625,
-0.03948974609375,
0.0018... |