modelId stringlengths 4 111 | lastModified stringlengths 24 24 | tags list | pipeline_tag stringlengths 5 30 ⌀ | author stringlengths 2 34 ⌀ | config null | securityStatus null | id stringlengths 4 111 | likes int64 0 9.53k | downloads int64 2 73.6M | library_name stringlengths 2 84 ⌀ | created timestamp[us] | card stringlengths 101 901k | card_len int64 101 901k | embeddings list |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
bpben/en_imdb_sent_trf | 2023-05-10T14:09:43.000Z | [
"spacy",
"text-classification",
"en",
"region:us"
] | text-classification | bpben | null | null | bpben/en_imdb_sent_trf | 0 | 2 | spacy | 2023-05-10T14:09:26 | ---
tags:
- spacy
- text-classification
language:
- en
model-index:
- name: en_imdb_sent_trf
results: []
---
| Feature | Description |
| --- | --- |
| **Name** | `en_imdb_sent_trf` |
| **Version** | `0.0.0` |
| **spaCy** | `>=3.4.4,<3.5.0` |
| **Default Pipeline** | `transformer`, `textcat` |
| **Components** | `transformer`, `textcat` |
| **Vectors** | 0 keys, 0 unique vectors (0 dimensions) |
| **Sources** | n/a |
| **License** | n/a |
| **Author** | n/a |
### Label Scheme
<details>
<summary>View label scheme (2 labels for 1 component)</summary>
| Component | Labels |
| --- | --- |
| **`textcat`** | `pos`, `neg` |
</details>
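The card ships no usage snippet; a minimal sketch for scoring text with the `transformer` + `textcat` pipeline, assuming the packaged model is installed in the current environment so that `spacy.load` can resolve it:
```python
import spacy

# Assumes the pipeline package is installed, e.g.
#   pip install ./en_imdb_sent_trf-0.0.0-py3-none-any.whl  (hypothetical wheel name)
nlp = spacy.load("en_imdb_sent_trf")
doc = nlp("A surprisingly heartfelt film with terrific performances.")
print(doc.cats)  # scores for the two textcat labels, e.g. {'pos': 0.98, 'neg': 0.02}
```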
### Accuracy
| Type | Score |
| --- | --- |
| `CATS_SCORE` | 87.99 |
| `CATS_MICRO_P` | 88.08 |
| `CATS_MICRO_R` | 88.08 |
| `CATS_MICRO_F` | 88.08 |
| `CATS_MACRO_P` | 88.01 |
| `CATS_MACRO_R` | 87.98 |
| `CATS_MACRO_F` | 87.99 |
| `CATS_MACRO_AUC` | 93.56 |
| `CATS_MACRO_AUC_PER_TYPE` | 0.00 |
| `TRANSFORMER_LOSS` | 24.99 |
| `TEXTCAT_LOSS` | 2726.89 | | 1,005 | [
[
-0.04180908203125,
-0.0168914794921875,
0.01396942138671875,
0.0152435302734375,
-0.049957275390625,
0.02801513671875,
0.00716400146484375,
0.0018434524536132812,
0.061737060546875,
0.053680419921875,
-0.06707763671875,
-0.052978515625,
-0.0526123046875,
0.0... |
Cynthiaiii4/Text_classification_model_bbu_v3 | 2023-05-10T15:09:08.000Z | [
"transformers",
"pytorch",
"tensorboard",
"bert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | text-classification | Cynthiaiii4 | null | null | Cynthiaiii4/Text_classification_model_bbu_v3 | 0 | 2 | transformers | 2023-05-10T14:40:13 | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: Text_classification_model_bbu_v3
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Text_classification_model_bbu_v3
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9237
- Accuracy: 0.8125
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (see the sketch after this list):
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
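These values map directly onto `transformers.TrainingArguments`. A minimal, hypothetical reconstruction (the actual training script is not included in the card, and `output_dir` is an assumed name):
```python
from transformers import TrainingArguments

# Hypothetical reconstruction of the recipe listed above
training_args = TrainingArguments(
    output_dir="Text_classification_model_bbu_v3",  # assumed name
    learning_rate=2e-5,
    per_device_train_batch_size=4,
    per_device_eval_batch_size=4,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=2,
    # Adam with betas=(0.9, 0.999) and epsilon=1e-08 matches the optimizer defaults
)
```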
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 0.3377 | 1.0 | 6650 | 0.7974 | 0.7825 |
| 0.1582 | 2.0 | 13300 | 0.9237 | 0.8125 |
### Framework versions
- Transformers 4.28.1
- Pytorch 2.0.0+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
| 1,420 | [
[
-0.032135009765625,
-0.038421630859375,
0.010284423828125,
0.00925445556640625,
-0.0301361083984375,
-0.031524658203125,
-0.0101776123046875,
-0.02996826171875,
-0.001590728759765625,
0.02545166015625,
-0.043212890625,
-0.055877685546875,
-0.043365478515625,
... |
Cynthiaiii4/Text_classification_model_bbu_v4 | 2023-05-10T16:54:50.000Z | [
"transformers",
"pytorch",
"tensorboard",
"bert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | text-classification | Cynthiaiii4 | null | null | Cynthiaiii4/Text_classification_model_bbu_v4 | 0 | 2 | transformers | 2023-05-10T15:30:02 | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: Text_classification_model_bbu_v4
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Text_classification_model_bbu_v4
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5753
- Accuracy: 0.7775
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.334 | 1.0 | 882 | 0.4661 | 0.775 |
| 0.1585 | 2.0 | 1764 | 0.5753 | 0.7775 |
### Framework versions
- Transformers 4.28.1
- Pytorch 2.0.0+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
| 1,418 | [
[
-0.0321044921875,
-0.0357666015625,
0.0101776123046875,
0.00719451904296875,
-0.0304107666015625,
-0.0297698974609375,
-0.01030731201171875,
-0.0297698974609375,
0.0001596212387084961,
0.025115966796875,
-0.044281005859375,
-0.055938720703125,
-0.041900634765625... |
alistvt/zero-docalog | 2023-05-12T21:47:18.000Z | [
"transformers",
"pytorch",
"tensorboard",
"roberta",
"question-answering",
"generated_from_trainer",
"dataset:doc2dial",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | question-answering | alistvt | null | null | alistvt/zero-docalog | 0 | 2 | transformers | 2023-05-10T15:33:50 | ---
tags:
- generated_from_trainer
datasets:
- doc2dial
model-index:
- name: zero-docalog
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# zero-docalog
This model is a fine-tuned version of [alistvt/zero-docalog](https://huggingface.co/alistvt/zero-docalog) on the doc2dial dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 30
- total_train_batch_size: 240
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 10
- num_epochs: 1.0
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.28.1
- Pytorch 2.0.0
- Datasets 2.1.0
- Tokenizers 0.13.3
| 1,158 | [
[
-0.0234832763671875,
-0.05474853515625,
0.0209503173828125,
-0.0100860595703125,
-0.03851318359375,
-0.0343017578125,
-0.0006690025329589844,
-0.01537322998046875,
0.011749267578125,
0.0255584716796875,
-0.050689697265625,
-0.048919677734375,
-0.048248291015625,... |
Consensus/contriever-msmarco | 2023-05-10T17:58:56.000Z | [
"sentence-transformers",
"pytorch",
"bert",
"feature-extraction",
"sentence-similarity",
"transformers",
"endpoints_compatible",
"region:us"
] | sentence-similarity | Consensus | null | null | Consensus/contriever-msmarco | 1 | 2 | sentence-transformers | 2023-05-10T17:56:56 | ---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
---
# Consensus/contriever-msmarco
This is a [sentence-transformers](https://www.SBERT.net) model: it maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```bash
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('Consensus/contriever-msmarco')
embeddings = model.encode(sentences)
print(embeddings)
```
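Since the card mentions clustering and semantic search, a small follow-on sketch (reusing the `model` object from above; the query and passages are illustrative):
```python
from sentence_transformers import util

# Rank candidate passages against a query by cosine similarity
query_emb = model.encode("how does self-attention work?", convert_to_tensor=True)
corpus_emb = model.encode(
    ["Transformers apply self-attention over token sequences.",
     "The weather was pleasant all weekend."],
    convert_to_tensor=True,
)
print(util.cos_sim(query_emb, corpus_emb))  # higher score = more relevant passage
```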
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: first, you pass your input through the transformer model, then you apply the right pooling operation on top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('Consensus/contriever-msmarco')
model = AutoModel.from_pretrained('Consensus/contriever-msmarco')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=Consensus/contriever-msmarco)
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
<!--- Describe where people can find more information --> | 2,961 | [
[
-0.0197296142578125,
-0.055938720703125,
0.01861572265625,
0.02886962890625,
-0.02349853515625,
-0.03253173828125,
-0.01629638671875,
0.0014705657958984375,
0.01323699951171875,
0.0303497314453125,
-0.03924560546875,
-0.042633056640625,
-0.053070068359375,
-... |
Gridflow/distilbert-base-uncased-finetuned-emotion | 2023-05-10T19:01:54.000Z | [
"transformers",
"pytorch",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:emotion",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | text-classification | Gridflow | null | null | Gridflow/distilbert-base-uncased-finetuned-emotion | 0 | 2 | transformers | 2023-05-10T18:28:15 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-emotion
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: emotion
type: emotion
config: split
split: validation
args: split
metrics:
- name: Accuracy
type: accuracy
value: 0.937
- name: F1
type: f1
value: 0.9371930654030473
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1698
- Accuracy: 0.937
- F1: 0.9372
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.1395 | 1.0 | 250 | 0.1659 | 0.9355 | 0.9358 |
| 0.0945 | 2.0 | 500 | 0.1657 | 0.935 | 0.9351 |
| 0.0783 | 3.0 | 750 | 0.1832 | 0.937 | 0.9371 |
| 0.0653 | 4.0 | 1000 | 0.1729 | 0.9335 | 0.9332 |
| 0.053 | 5.0 | 1250 | 0.1698 | 0.937 | 0.9372 |
### Framework versions
- Transformers 4.28.1
- Pytorch 2.0.1+cu117
- Datasets 2.12.0
- Tokenizers 0.13.3
| 2,059 | [
[
-0.037506103515625,
-0.04058837890625,
0.01163482666015625,
0.0190277099609375,
-0.0234832763671875,
-0.016082763671875,
-0.0103607177734375,
-0.00804901123046875,
0.01403045654296875,
0.00823974609375,
-0.05731201171875,
-0.052581787109375,
-0.06005859375,
... |
Ibrahim-Alam/finetuning-roberta-base-on-sst2_1epoch | 2023-10-04T14:05:39.000Z | [
"transformers",
"pytorch",
"tensorboard",
"roberta",
"text-classification",
"generated_from_trainer",
"dataset:sst2",
"license:mit",
"model-index",
"endpoints_compatible",
"region:us"
] | text-classification | Ibrahim-Alam | null | null | Ibrahim-Alam/finetuning-roberta-base-on-sst2_1epoch | 0 | 2 | transformers | 2023-05-10T19:48:58 | ---
license: mit
tags:
- generated_from_trainer
datasets:
- sst2
metrics:
- accuracy
- f1
model-index:
- name: finetuning-roberta-base-on-sst2
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: sst2
type: sst2
config: default
split: validation
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.9415137614678899
- name: F1
type: f1
value: 0.9425028184892897
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuning-roberta-base-on-sst2
This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on the sst2 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2207
- Accuracy: 0.9415
- F1: 0.9425
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
### Framework versions
- Transformers 4.28.0
- Pytorch 2.0.1+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
| 1,519 | [
[
-0.0157318115234375,
-0.052764892578125,
0.021697998046875,
0.007205963134765625,
-0.038970947265625,
-0.030853271484375,
-0.0269622802734375,
-0.01110076904296875,
-0.002117156982421875,
0.0281524658203125,
-0.050384521484375,
-0.037567138671875,
-0.05917358398... |
Adoley/covid-tweets-sentiment-analysis-roberta-model | 2023-05-11T19:25:28.000Z | [
"transformers",
"pytorch",
"tensorboard",
"roberta",
"text-classification",
"generated_from_trainer",
"license:mit",
"endpoints_compatible",
"has_space",
"region:us"
] | text-classification | Adoley | null | null | Adoley/covid-tweets-sentiment-analysis-roberta-model | 0 | 2 | transformers | 2023-05-10T23:10:10 | ---
license: mit
tags:
- generated_from_trainer
model-index:
- name: covid-tweets-sentiment-analysis-roberta-model
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# covid-tweets-sentiment-analysis-roberta-model
This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5581
- Rmse: 0.6098
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 16
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rmse |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.7026 | 2.0 | 500 | 0.5581 | 0.6098 |
| 0.4029 | 4.0 | 1000 | 0.6095 | 0.5859 |
| 0.204 | 6.0 | 1500 | 0.8989 | 0.6307 |
| 0.1046 | 8.0 | 2000 | 1.1872 | 0.5906 |
| 0.058 | 10.0 | 2500 | 1.2907 | 0.5919 |
### Framework versions
- Transformers 4.29.0
- Pytorch 2.0.0+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
| 1,672 | [
[
-0.02618408203125,
-0.052337646484375,
0.004482269287109375,
0.0121612548828125,
-0.020538330078125,
-0.0122833251953125,
-0.01513671875,
-0.007129669189453125,
0.00862884521484375,
0.01104736328125,
-0.06280517578125,
-0.057281494140625,
-0.0611572265625,
-... |
YaYaB/l3-setfit | 2023-05-11T00:01:06.000Z | [
"sentence-transformers",
"pytorch",
"bert",
"setfit",
"text-classification",
"arxiv:2209.11055",
"license:apache-2.0",
"region:us"
] | text-classification | YaYaB | null | null | YaYaB/l3-setfit | 0 | 2 | sentence-transformers | 2023-05-10T23:25:52 | ---
license: apache-2.0
tags:
- setfit
- sentence-transformers
- text-classification
pipeline_tag: text-classification
---
# YaYaB/l3-setfit
This is a [SetFit model](https://github.com/huggingface/setfit) that can be used for text classification. The model has been trained using an efficient few-shot learning technique that involves:
1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning.
2. Training a classification head with features from the fine-tuned Sentence Transformer.
## Usage
To use this model for inference, first install the SetFit library:
```bash
python -m pip install setfit
```
You can then run inference as follows:
```python
from setfit import SetFitModel
# Download from Hub and run inference
model = SetFitModel.from_pretrained("YaYaB/l3-setfit")
# Run inference
preds = model(["i loved the spiderman movie!", "pineapple on pizza is the worst 🤮"])
```
## BibTeX entry and citation info
```bibtex
@article{https://doi.org/10.48550/arxiv.2209.11055,
doi = {10.48550/ARXIV.2209.11055},
url = {https://arxiv.org/abs/2209.11055},
author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren},
keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences},
title = {Efficient Few-Shot Learning Without Prompts},
publisher = {arXiv},
year = {2022},
copyright = {Creative Commons Attribution 4.0 International}
}
```
| 1,519 | [
[
-0.00690460205078125,
-0.065673828125,
0.0294342041015625,
-0.01441192626953125,
-0.01251220703125,
-0.019927978515625,
-0.01045989990234375,
-0.016265869140625,
0.00506591796875,
0.03668212890625,
-0.04632568359375,
-0.0202789306640625,
-0.03753662109375,
0... |
asurinsaka/distilbert-base-uncased-finetuned-emotion | 2023-05-11T01:52:53.000Z | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:emotion",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | text-classification | asurinsaka | null | null | asurinsaka/distilbert-base-uncased-finetuned-emotion | 0 | 2 | transformers | 2023-05-11T01:41:10 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-emotion
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: emotion
type: emotion
config: split
split: validation
args: split
metrics:
- name: Accuracy
type: accuracy
value: 0.921
- name: F1
type: f1
value: 0.9210361010646059
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2132
- Accuracy: 0.921
- F1: 0.9210
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.8036 | 1.0 | 250 | 0.2959 | 0.912 | 0.9099 |
| 0.236 | 2.0 | 500 | 0.2132 | 0.921 | 0.9210 |
### Framework versions
- Transformers 4.29.0
- Pytorch 2.0.1+cu118
- Datasets 2.9.0
- Tokenizers 0.13.3
| 1,845 | [
[
-0.03826904296875,
-0.04180908203125,
0.0159759521484375,
0.0220184326171875,
-0.0260772705078125,
-0.019287109375,
-0.013092041015625,
-0.009033203125,
0.010528564453125,
0.00870513916015625,
-0.05694580078125,
-0.051910400390625,
-0.05865478515625,
-0.0086... |
Cynthiaiii4/Text_classification_model_bbu_RF | 2023-05-11T15:49:57.000Z | [
"transformers",
"pytorch",
"tensorboard",
"bert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | text-classification | Cynthiaiii4 | null | null | Cynthiaiii4/Text_classification_model_bbu_RF | 0 | 2 | transformers | 2023-05-11T02:15:06 | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: Text_classification_model_bbu_RF
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Text_classification_model_bbu_RF
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4642
- Accuracy: 0.7775
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 100 | 0.4967 | 0.7575 |
| No log | 2.0 | 200 | 0.4642 | 0.7775 |
### Framework versions
- Transformers 4.28.0
- Pytorch 2.0.0+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
| 1,418 | [
[
-0.032684326171875,
-0.0390625,
0.00609588623046875,
0.008026123046875,
-0.0291748046875,
-0.031158447265625,
-0.01457977294921875,
-0.0309906005859375,
0.0021038055419921875,
0.02557373046875,
-0.048126220703125,
-0.053680419921875,
-0.043792724609375,
-0.0... |
Intel/bert-large-uncased-rte-int8-static | 2023-05-11T05:53:43.000Z | [
"transformers",
"pytorch",
"bert",
"text-classification",
"rte",
"glue",
"torchdistill",
"nlp",
"int8",
"neural-compressor",
"Intel® Neural Compressor",
"text-classfication",
"PostTrainingStatic",
"en",
"dataset:rte",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | text-classification | Intel | null | null | Intel/bert-large-uncased-rte-int8-static | 0 | 2 | transformers | 2023-05-11T02:33:21 | ---
language: en
tags:
- bert
- rte
- glue
- torchdistill
- nlp
- int8
- neural-compressor
- Intel® Neural Compressor
- text-classfication
- PostTrainingStatic
license: apache-2.0
datasets:
- rte
metrics:
- f1
---
# INT8 bert-large-uncased-rte-int8-static
## Post-training static quantization
### PyTorch
This is an INT8 PyTorch model quantized with [Intel® Neural Compressor](https://github.com/intel/neural-compressor).
The original fp32 model comes from the fine-tuned model [yoshitomo-matsubara/bert-large-uncased-rte](https://huggingface.co/yoshitomo-matsubara/bert-large-uncased-rte).
#### Test result
| |INT8|FP32|
|---|:---:|:---:|
| **Accuracy (eval-f1)** |0.7365|0.7401|
| **Model size (MB)** |1244|1349|
#### Load with Intel® Neural Compressor:
```python
from optimum.intel.neural_compressor.quantization import IncQuantizedModelForSequenceClassification
int8_model = IncQuantizedModelForSequenceClassification.from_pretrained(
"Intel/bert-large-uncased-rte-int8-static",
)
```
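Once loaded, the quantized model behaves like a regular sequence-classification model. A hypothetical inference sketch (RTE is a sentence-pair task; it is assumed the repository ships the original tokenizer):
```python
import torch
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("Intel/bert-large-uncased-rte-int8-static")
inputs = tokenizer("A premise sentence.", "A hypothesis sentence.", return_tensors="pt")
with torch.no_grad():
    logits = int8_model(**inputs).logits  # int8_model from the snippet above
print(logits.softmax(dim=-1))  # probabilities over the two RTE labels
```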
| 1,008 | [
[
-0.027587890625,
-0.034149169921875,
0.01125335693359375,
0.01419830322265625,
-0.025665283203125,
0.0019664764404296875,
-0.035736083984375,
-0.0028018951416015625,
-0.0038967132568359375,
0.006923675537109375,
-0.0223236083984375,
-0.02294921875,
-0.0476379394... |
November11/distilbert-base-uncased-finetuned-emotion | 2023-05-11T08:54:16.000Z | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:emotion",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | text-classification | November11 | null | null | November11/distilbert-base-uncased-finetuned-emotion | 0 | 2 | transformers | 2023-05-11T03:10:07 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-emotion
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: emotion
type: emotion
config: split
split: validation
args: split
metrics:
- name: Accuracy
type: accuracy
value: 0.9275
- name: F1
type: f1
value: 0.9274136087775933
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2180
- Accuracy: 0.9275
- F1: 0.9274
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.8097 | 1.0 | 250 | 0.3265 | 0.905 | 0.9023 |
| 0.2531 | 2.0 | 500 | 0.2180 | 0.9275 | 0.9274 |
### Framework versions
- Transformers 4.29.0
- Pytorch 2.0.0+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
| 1,848 | [
[
-0.037841796875,
-0.0411376953125,
0.0142669677734375,
0.0218963623046875,
-0.025970458984375,
-0.019134521484375,
-0.01293182373046875,
-0.00862884521484375,
0.010498046875,
0.00789642333984375,
-0.05596923828125,
-0.052398681640625,
-0.060150146484375,
-0.... |
xqchq/test-trainer2 | 2023-05-11T09:42:34.000Z | [
"transformers",
"pytorch",
"tensorboard",
"bert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | text-classification | xqchq | null | null | xqchq/test-trainer2 | 0 | 2 | transformers | 2023-05-11T03:38:54 | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: test-trainer2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# test-trainer2
This model is a fine-tuned version of [hfl/minirbt-h256](https://huggingface.co/hfl/minirbt-h256) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
### Framework versions
- Transformers 4.29.0
- Pytorch 2.0.0+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
| 1,021 | [
[
-0.040069580078125,
-0.0504150390625,
0.002223968505859375,
0.01070404052734375,
-0.031829833984375,
-0.02947998046875,
-0.0012712478637695312,
-0.0209197998046875,
0.0097503662109375,
0.014678955078125,
-0.06072998046875,
-0.013336181640625,
-0.042236328125,
... |
Intel/distilbert-base-uncased-MRPC-int8-dynamic | 2023-05-11T06:35:32.000Z | [
"transformers",
"pytorch",
"distilbert",
"text-classification",
"text-classfication",
"nlp",
"neural-compressor",
"PostTrainingDynamic",
"int8",
"Intel® Neural Compressor",
"en",
"dataset:glue",
"dataset:mrpc",
"license:mit",
"endpoints_compatible",
"region:us"
] | text-classification | Intel | null | null | Intel/distilbert-base-uncased-MRPC-int8-dynamic | 0 | 2 | transformers | 2023-05-11T06:12:58 | ---
language: en
license: mit
datasets:
- glue
- mrpc
metrics:
- f1
tags:
- text-classfication
- nlp
- neural-compressor
- PostTrainingDynamic
- int8
- Intel® Neural Compressor
---
# Dynamically quantized DistilBERT base uncased finetuned MRPC
## Table of Contents
- [Model Details](#model-details)
- [How to Get Started With the Model](#how-to-get-started-with-the-model)
## Model Details
**Model Description:** This model is a [DistilBERT](https://huggingface.co/textattack/distilbert-base-uncased-MRPC) fine-tuned on MRPC and dynamically quantized with [Intel® Neural Compressor](https://github.com/intel/neural-compressor) through [huggingface/optimum-intel](https://github.com/huggingface/optimum-intel).
- **Model Type:** Text Classification
- **Language(s):** English
- **License:** Apache-2.0
- **Parent Model:** For more details on the original model, we encourage users to check out [this](https://huggingface.co/textattack/distilbert-base-uncased-MRPC) model card.
## How to Get Started With the Model
### PyTorch
To load the quantized model, you can do as follows:
```python
from optimum.intel.neural_compressor.quantization import IncQuantizedModelForSequenceClassification
model = IncQuantizedModelForSequenceClassification.from_pretrained("Intel/distilbert-base-uncased-MRPC-int8-dynamic")
```
#### Test result
| |INT8|FP32|
|---|:---:|:---:|
| **Accuracy (eval-f1)** |0.8983|0.9027|
| **Model size (MB)** |75|268|
| 1,536 | [
[
-0.02716064453125,
-0.044036865234375,
0.0237884521484375,
0.00748443603515625,
-0.024078369140625,
0.0292510986328125,
-0.015899658203125,
0.0077056884765625,
-0.0108489990234375,
-0.004093170166015625,
-0.03143310546875,
-0.038665771484375,
-0.055572509765625,... |
Intel/distilbert-base-uncased-MRPC-int8-static | 2023-05-11T07:24:22.000Z | [
"transformers",
"pytorch",
"distilbert",
"text-classification",
"text-classfication",
"nlp",
"neural-compressor",
"PostTrainingsStatic",
"int8",
"Intel® Neural Compressor",
"en",
"dataset:glue",
"dataset:mrpc",
"license:mit",
"endpoints_compatible",
"region:us"
] | text-classification | Intel | null | null | Intel/distilbert-base-uncased-MRPC-int8-static | 0 | 2 | transformers | 2023-05-11T06:37:11 | ---
language: en
license: mit
datasets:
- glue
- mrpc
metrics:
- f1
tags:
- text-classfication
- nlp
- neural-compressor
- PostTrainingsStatic
- int8
- Intel® Neural Compressor
---
# Statically quantized DistilBERT base uncased finetuned MRPC
## Table of Contents
- [Model Details](#model-details)
- [How to Get Started With the Model](#how-to-get-started-with-the-model)
## Model Details
**Model Description:** This model is a [DistilBERT](https://huggingface.co/textattack/distilbert-base-uncased-MRPC) fine-tuned on MRPC and statically quantized with [Intel® Neural Compressor](https://github.com/intel/neural-compressor) through [huggingface/optimum-intel](https://github.com/huggingface/optimum-intel).
- **Model Type:** Text Classification
- **Language(s):** English
- **License:** Apache-2.0
- **Parent Model:** For more details on the original model, we encourage users to check out [this](https://huggingface.co/textattack/distilbert-base-uncased-MRPC) model card.
## How to Get Started With the Model
### PyTorch
To load the quantized model, you can do as follows:
```python
from optimum.intel.neural_compressor.quantization import IncQuantizedModelForSequenceClassification
model = IncQuantizedModelForSequenceClassification.from_pretrained("Intel/distilbert-base-uncased-MRPC-int8-static")
```
#### Test result
| |INT8|FP32|
|---|:---:|:---:|
| **Accuracy (eval-f1)** |0.9007|0.9027|
| **Model size (MB)** |242|268|
| 1,534 | [
[
-0.03277587890625,
-0.0355224609375,
0.0220489501953125,
0.0060882568359375,
-0.029388427734375,
0.011810302734375,
-0.0201568603515625,
0.01099395751953125,
-0.01390838623046875,
-0.0020046234130859375,
-0.027191162109375,
-0.042694091796875,
-0.05413818359375,... |
ozoora/kzlbert-3poi | 2023-05-11T07:08:43.000Z | [
"transformers",
"pytorch",
"bert",
"text-classification",
"endpoints_compatible",
"region:us"
] | text-classification | ozoora | null | null | ozoora/kzlbert-3poi | 0 | 2 | transformers | 2023-05-11T06:49:57 | Use:
import torch
from transformers import BertTokenizerFast, AutoModelForSequenceClassification

tokenizer = BertTokenizerFast.from_pretrained('ozoora/kzlbert-3poi')
model = AutoModelForSequenceClassification.from_pretrained('ozoora/kzlbert-3poi', return_dict=True)

@torch.no_grad()
def predict(text):
    # Tokenize, classify, and return the argmax label plus per-class probabilities
    inputs = tokenizer(text, max_length=419, padding=True, truncation=True, return_tensors='pt')
    outputs = model(**inputs)
    predicted_probs = torch.nn.functional.softmax(outputs.logits, dim=1)
    predicted = torch.argmax(predicted_probs, dim=1).item()
    return predicted, predicted_probs[0].tolist() | 525 | [
[
-0.0141448974609375,
-0.0433349609375,
0.01461029052734375,
0.0218658447265625,
-0.022369384765625,
-0.0197601318359375,
-0.0156402587890625,
-0.0217132568359375,
0.0069580078125,
0.022491455078125,
-0.0384521484375,
-0.0311737060546875,
-0.052581787109375,
... |
Intel/albert-base-v2-MRPC-int8 | 2023-05-11T07:31:04.000Z | [
"transformers",
"pytorch",
"albert",
"text-classification",
"text-classfication",
"nlp",
"neural-compressor",
"PostTrainingsDynamic",
"int8",
"Intel® Neural Compressor",
"en",
"dataset:glue",
"dataset:mrpc",
"license:mit",
"endpoints_compatible",
"region:us"
] | text-classification | Intel | null | null | Intel/albert-base-v2-MRPC-int8 | 0 | 2 | transformers | 2023-05-11T07:22:42 | ---
language: en
license: mit
datasets:
- glue
- mrpc
metrics:
- f1
tags:
- text-classfication
- nlp
- neural-compressor
- PostTrainingsDynamic
- int8
- Intel® Neural Compressor
- albert
---
# Dynamically quantized ALBERT base finetuned MRPC
## Table of Contents
- [Model Details](#model-details)
- [How to Get Started With the Model](#how-to-get-started-with-the-model)
## Model Details
**Model Description:** This model is an [ALBERT](https://huggingface.co/textattack/albert-base-v2-MRPC) model fine-tuned on MRPC and dynamically quantized with [Intel® Neural Compressor](https://github.com/intel/neural-compressor) through [huggingface/optimum-intel](https://github.com/huggingface/optimum-intel).
- **Model Type:** Text Classification
- **Language(s):** English
- **License:** Apache-2.0
- **Parent Model:** For more details on the original model, we encourage users to check out [this](https://huggingface.co/textattack/albert-base-v2-MRPC) model card.
## How to Get Started With the Model
### PyTorch
To load the quantized model, you can do as follows:
```python
from optimum.intel.neural_compressor.quantization import IncQuantizedModelForSequenceClassification
model = IncQuantizedModelForSequenceClassification.from_pretrained("Intel/albert-base-v2-MRPC-int8")
```
#### Test result
| |INT8|FP32|
|---|:---:|:---:|
| **Accuracy (eval-f1)** |0.9193|0.9263|
| **Model size (MB)** |45.0|46.7| | 1,497 | [
[
-0.036590576171875,
-0.033843994140625,
0.0208740234375,
0.01190185546875,
-0.016937255859375,
0.0177459716796875,
-0.01519012451171875,
-0.005893707275390625,
-0.0110015869140625,
0.00818634033203125,
-0.0281829833984375,
-0.0379638671875,
-0.048065185546875,
... |
Intel/bert-base-uncased-CoLA-int8 | 2023-05-11T08:12:06.000Z | [
"transformers",
"pytorch",
"bert",
"text-classification",
"text-classfication",
"int8",
"Intel® Neural Compressor",
"PostTrainingStatic",
"en",
"dataset:mrpc",
"dataset:cola",
"license:mit",
"endpoints_compatible",
"region:us"
] | text-classification | Intel | null | null | Intel/bert-base-uncased-CoLA-int8 | 0 | 2 | transformers | 2023-05-11T07:39:18 | ---
language: en
license: mit
tags:
- text-classfication
- int8
- Intel® Neural Compressor
- PostTrainingStatic
- bert
datasets:
- mrpc
- cola
metrics:
- f1
---
# INT8 BERT base uncased finetuned CoLA
## Post-training static quantization
### PyTorch
This is an INT8 PyTorch model quantized with [huggingface/optimum-intel](https://github.com/huggingface/optimum-intel) through the usage of [Intel® Neural Compressor](https://github.com/intel/neural-compressor).
The original fp32 model comes from the fine-tuned model [textattack/bert-base-uncased-CoLA](https://huggingface.co/textattack/bert-base-uncased-CoLA).
#### Test result
| |INT8|FP32|
|---|:---:|:---:|
| **Accuracy (eval-f1)** |0.5451|0.5339|
| **Model size (MB)** |112|438|
#### Load with optimum:
```python
from optimum.intel.neural_compressor.quantization import IncQuantizedModelForSequenceClassification
int8_model = IncQuantizedModelForSequenceClassification.from_pretrained(
'Intel/bert-base-uncased-CoLA-int8',
)
``` | 1,003 | [
[
-0.0223541259765625,
-0.036529541015625,
0.00481414794921875,
0.0187225341796875,
-0.01763916015625,
0.00970458984375,
-0.0272216796875,
0.0023345947265625,
-0.0075225830078125,
0.00142669677734375,
-0.02001953125,
-0.01708984375,
-0.044708251953125,
-0.0179... |
Cynthiaiii4/Text_classification_model_bbc_v6 | 2023-05-11T09:40:26.000Z | [
"transformers",
"pytorch",
"tensorboard",
"bert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | text-classification | Cynthiaiii4 | null | null | Cynthiaiii4/Text_classification_model_bbc_v6 | 0 | 2 | transformers | 2023-05-11T07:51:34 | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: Text_classification_model_bbc_v6
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Text_classification_model_bbc_v6
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8115
- Accuracy: 0.77
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 50 | 0.5348 | 0.7625 |
| No log | 2.0 | 100 | 0.7592 | 0.76 |
| No log | 3.0 | 150 | 0.7245 | 0.775 |
| No log | 4.0 | 200 | 0.8115 | 0.77 |
### Framework versions
- Transformers 4.28.0
- Pytorch 2.0.0+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
| 1,540 | [
[
-0.03118896484375,
-0.0293426513671875,
0.007442474365234375,
0.00768280029296875,
-0.02838134765625,
-0.0335693359375,
-0.01163482666015625,
-0.027740478515625,
0.00510406494140625,
0.021881103515625,
-0.04779052734375,
-0.0546875,
-0.050384521484375,
-0.01... |
Intel/bert-large-uncased-cola-int8 | 2023-05-11T08:18:49.000Z | [
"transformers",
"pytorch",
"bert",
"text-classification",
"text-classfication",
"int8",
"Intel® Neural Compressor",
"PostTrainingStatic",
"en",
"dataset:mrpc",
"dataset:cola",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | text-classification | Intel | null | null | Intel/bert-large-uncased-cola-int8 | 0 | 2 | transformers | 2023-05-11T08:11:16 | ---
language: en
license: apache-2.0
tags:
- text-classfication
- int8
- Intel® Neural Compressor
- PostTrainingStatic
- bert
datasets:
- mrpc
- cola
metrics:
- f1
---
# INT8 BERT large uncased finetuned CoLA
## Post-training static quantization
### PyTorch
This is an INT8 PyTorch model quantized with [huggingface/optimum-intel](https://github.com/huggingface/optimum-intel) through the usage of [Intel® Neural Compressor](https://github.com/intel/neural-compressor).
The original fp32 model comes from the fine-tuned model [yoshitomo-matsubara/bert-large-uncased-cola](https://huggingface.co/yoshitomo-matsubara/bert-large-uncased-cola).
#### Test result
| |INT8|FP32|
|---|:---:|:---:|
| **Accuracy (eval-f1)** |0.6336|0.6335|
| **Model size (MB)** |388|1340|
#### Load with optimum:
```python
from optimum.intel.neural_compressor.quantization import IncQuantizedModelForSequenceClassification
int8_model = IncQuantizedModelForSequenceClassification.from_pretrained(
'Intel/bert-large-uncased-cola-int8',
)
```
| 1,034 | [
[
-0.026702880859375,
-0.035247802734375,
0.0073699951171875,
0.0181121826171875,
-0.01873779296875,
0.00859832763671875,
-0.031280517578125,
-0.00011092424392700195,
-0.0010538101196289062,
0.0011434555053710938,
-0.019317626953125,
-0.01226806640625,
-0.04379272... |
Intel/bert-base-uncased-STS-B-int8 | 2023-05-11T08:31:50.000Z | [
"transformers",
"pytorch",
"bert",
"text-classification",
"text-classfication",
"int8",
"Intel® Neural Compressor",
"PostTrainingStatic",
"en",
"dataset:mrpc",
"dataset:stsb",
"license:mit",
"endpoints_compatible",
"region:us"
] | text-classification | Intel | null | null | Intel/bert-base-uncased-STS-B-int8 | 0 | 2 | transformers | 2023-05-11T08:22:56 | ---
language: en
license: mit
tags:
- text-classfication
- int8
- Intel® Neural Compressor
- PostTrainingStatic
- bert
datasets:
- mrpc
- stsb
metrics:
- f1
---
# INT8 BERT base uncased finetuned STS-B
## Post-training static quantization
### PyTorch
This is an INT8 PyTorch model quantized with [huggingface/optimum-intel](https://github.com/huggingface/optimum-intel) through the usage of [Intel® Neural Compressor](https://github.com/intel/neural-compressor).
The original fp32 model comes from the fine-tuned model [textattack/bert-base-uncased-STS-B](https://huggingface.co/textattack/bert-base-uncased-STS-B).
#### Test result
| |INT8|FP32|
|---|:---:|:---:|
| **Accuracy (eval-f1)** |0.8755|0.8805|
| **Model size (MB)** |118|438|
#### Load with optimum:
```python
from optimum.intel.neural_compressor.quantization import IncQuantizedModelForSequenceClassification
int8_model = IncQuantizedModelForSequenceClassification.from_pretrained(
'Intel/bert-base-uncased-STS-B-int8',
)
``` | 1,007 | [
[
-0.0248565673828125,
-0.03765869140625,
0.007598876953125,
0.016510009765625,
-0.0266265869140625,
0.00931549072265625,
-0.031280517578125,
0.0029735565185546875,
-0.0118560791015625,
0.0034961700439453125,
-0.0244598388671875,
-0.0224761962890625,
-0.0453491210... |
Satfail/distilbert-base-uncased-finetuned-emotion | 2023-05-11T08:45:07.000Z | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:emotion",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | text-classification | Satfail | null | null | Satfail/distilbert-base-uncased-finetuned-emotion | 0 | 2 | transformers | 2023-05-11T08:29:33 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-emotion
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: emotion
type: emotion
args: split
metrics:
- name: Accuracy
type: accuracy
value: 0.9275
- name: F1
type: f1
value: 0.9275991035276141
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2144
- Accuracy: 0.9275
- F1: 0.9276
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.8085 | 1.0 | 250 | 0.3020 | 0.9055 | 0.9031 |
| 0.2411 | 2.0 | 500 | 0.2144 | 0.9275 | 0.9276 |
### Framework versions
- Transformers 4.13.0
- Pytorch 2.0.0+cu118
- Datasets 2.8.0
- Tokenizers 0.10.3
| 1,803 | [
[
-0.037628173828125,
-0.04150390625,
0.01424407958984375,
0.022003173828125,
-0.0257415771484375,
-0.01910400390625,
-0.0128173828125,
-0.00799560546875,
0.010162353515625,
0.0081329345703125,
-0.055938720703125,
-0.05145263671875,
-0.059906005859375,
-0.0080... |
Intel/bert-base-cased-finetuned-sst2-int8 | 2023-05-11T09:01:46.000Z | [
"transformers",
"pytorch",
"bert",
"text-classification",
"text-classfication",
"int8",
"Intel® Neural Compressor",
"PostTrainingStatic",
"en",
"dataset:mrpc",
"dataset:sst2",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | text-classification | Intel | null | null | Intel/bert-base-cased-finetuned-sst2-int8 | 0 | 2 | transformers | 2023-05-11T08:43:47 | ---
language: en
license: apache-2.0
tags:
- text-classfication
- int8
- Intel® Neural Compressor
- PostTrainingStatic
- bert
datasets:
- mrpc
- sst2
metrics:
- f1
---
# INT8 BERT base cased finetuned SST-2
## Post-training static quantization
### PyTorch
This is an INT8 PyTorch model quantized with [huggingface/optimum-intel](https://github.com/huggingface/optimum-intel) through the usage of [Intel® Neural Compressor](https://github.com/intel/neural-compressor).
The original fp32 model comes from the fine-tuned model [gchhablani/bert-base-cased-finetuned-sst2](https://huggingface.co/gchhablani/bert-base-cased-finetuned-sst2).
#### Test result
| |INT8|FP32|
|---|:---:|:---:|
| **Accuracy (eval-f1)** |0.9151|0.9232|
| **Model size (MB)** |111|433|
#### Load with optimum:
```python
from optimum.intel.neural_compressor.quantization import IncQuantizedModelForSequenceClassification
int8_model = IncQuantizedModelForSequenceClassification.from_pretrained(
'Intel/bert-base-cased-finetuned-sst2-int8',
)
``` | 1,033 | [
[
-0.020782470703125,
-0.035125732421875,
0.00688934326171875,
0.00994110107421875,
-0.030303955078125,
0.006618499755859375,
-0.0278472900390625,
0.005077362060546875,
-0.01169586181640625,
0.004940032958984375,
-0.0231170654296875,
-0.01329803466796875,
-0.04248... |
Intel/bert-base-uncased-QNLI-int8 | 2023-05-11T09:01:24.000Z | [
"transformers",
"pytorch",
"bert",
"text-classification",
"text-classfication",
"int8",
"Intel® Neural Compressor",
"PostTrainingStatic",
"en",
"dataset:mrpc",
"dataset:qnli",
"license:mit",
"endpoints_compatible",
"region:us"
] | text-classification | Intel | null | null | Intel/bert-base-uncased-QNLI-int8 | 0 | 2 | transformers | 2023-05-11T08:49:52 | ---
language: en
license: mit
tags:
- text-classfication
- int8
- Intel® Neural Compressor
- PostTrainingStatic
- bert
datasets:
- mrpc
- qnli
metrics:
- f1
---
# INT8 BERT base uncased finetuned QNLI
## Post-training static quantization
### PyTorch
This is an INT8 PyTorch model quantized with [huggingface/optimum-intel](https://github.com/huggingface/optimum-intel) through the usage of [Intel® Neural Compressor](https://github.com/intel/neural-compressor).
The original fp32 model comes from the fine-tuned model [textattack/bert-base-uncased-QNLI](https://huggingface.co/textattack/bert-base-uncased-QNLI).
#### Test result
| |INT8|FP32|
|---|:---:|:---:|
| **Accuracy (eval-f1)** |0.9081|0.9154|
| **Model size (MB)** |133|438|
#### Load with optimum:
```python
from optimum.intel.neural_compressor.quantization import IncQuantizedModelForSequenceClassification
int8_model = IncQuantizedModelForSequenceClassification.from_pretrained(
'Intel/bert-base-uncased-QNLI-int8',
)
``` | 1,003 | [
[
-0.0229644775390625,
-0.026947021484375,
0.0059661865234375,
0.0123291015625,
-0.02093505859375,
0.004024505615234375,
-0.0239410400390625,
0.0026836395263671875,
-0.007358551025390625,
0.0025386810302734375,
-0.026214599609375,
-0.02099609375,
-0.0338134765625,... |
nakcnx/setfit-paraphrase-multilingual-MiniLM-bad_topic | 2023-05-11T08:56:29.000Z | [
"sentence-transformers",
"pytorch",
"bert",
"setfit",
"text-classification",
"arxiv:2209.11055",
"license:apache-2.0",
"region:us"
] | text-classification | nakcnx | null | null | nakcnx/setfit-paraphrase-multilingual-MiniLM-bad_topic | 0 | 2 | sentence-transformers | 2023-05-11T08:54:08 | ---
license: apache-2.0
tags:
- setfit
- sentence-transformers
- text-classification
pipeline_tag: text-classification
---
# nakcnx/setfit-paraphrase-multilingual-MiniLM-bad_topic
This is a [SetFit model](https://github.com/huggingface/setfit) that can be used for text classification. The model has been trained using an efficient few-shot learning technique that involves:
1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning.
2. Training a classification head with features from the fine-tuned Sentence Transformer.
## Usage
To use this model for inference, first install the SetFit library:
```bash
python -m pip install setfit
```
You can then run inference as follows:
```python
from setfit import SetFitModel
# Download from Hub and run inference
model = SetFitModel.from_pretrained("nakcnx/setfit-paraphrase-multilingual-MiniLM-bad_topic")
# Run inference
preds = model(["i loved the spiderman movie!", "pineapple on pizza is the worst 🤮"])
```
## BibTeX entry and citation info
```bibtex
@article{https://doi.org/10.48550/arxiv.2209.11055,
doi = {10.48550/ARXIV.2209.11055},
url = {https://arxiv.org/abs/2209.11055},
author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren},
keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences},
title = {Efficient Few-Shot Learning Without Prompts},
publisher = {arXiv},
year = {2022},
copyright = {Creative Commons Attribution 4.0 International}
}
```
| 1,597 | [
[
-0.01053619384765625,
-0.061767578125,
0.032257080078125,
-0.00045037269592285156,
-0.02764892578125,
-0.017120361328125,
-0.0106964111328125,
0.0015316009521484375,
0.00836181640625,
0.0423583984375,
-0.036773681640625,
-0.0179595947265625,
-0.031402587890625,
... |
xinyixiuxiu/albert-xxlarge-v2-SST2-incremental_pre_training-epoch1-5-2 | 2023-05-14T09:59:11.000Z | [
"transformers",
"tf",
"albert",
"text-classification",
"generated_from_keras_callback",
"endpoints_compatible",
"region:us"
] | text-classification | xinyixiuxiu | null | null | xinyixiuxiu/albert-xxlarge-v2-SST2-incremental_pre_training-epoch1-5-2 | 0 | 2 | transformers | 2023-05-11T09:41:37 | ---
tags:
- generated_from_keras_callback
model-index:
- name: xinyixiuxiu/albert-xxlarge-v2-SST2-incremental_pre_training-epoch1-5-2
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# xinyixiuxiu/albert-xxlarge-v2-SST2-incremental_pre_training-epoch1-5-2
This model was trained from scratch on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.0328
- Train Accuracy: 0.9894
- Validation Loss: 0.1551
- Validation Accuracy: 0.9507
- Epoch: 0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'learning_rate': 3e-06, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False}
- training_precision: float32
### Training results
| Train Loss | Train Accuracy | Validation Loss | Validation Accuracy | Epoch |
|:----------:|:--------------:|:---------------:|:-------------------:|:-----:|
| 0.0328 | 0.9894 | 0.1551 | 0.9507 | 0 |
### Framework versions
- Transformers 4.28.1
- TensorFlow 2.7.0
- Datasets 2.10.1
- Tokenizers 0.12.1
| 1,445 | [
[
-0.029571533203125,
-0.03179931640625,
0.0272216796875,
0.010589599609375,
-0.03656005859375,
-0.030059814453125,
-0.00444793701171875,
-0.0272979736328125,
0.005367279052734375,
0.014678955078125,
-0.052490234375,
-0.039947509765625,
-0.056182861328125,
-0.... |
Cynthiaiii4/Text_classification_model_bbu_12500 | 2023-05-11T12:49:26.000Z | [
"transformers",
"pytorch",
"tensorboard",
"bert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | text-classification | Cynthiaiii4 | null | null | Cynthiaiii4/Text_classification_model_bbu_12500 | 0 | 2 | transformers | 2023-05-11T11:22:23 | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: Text_classification_model_bbu_12500
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Text_classification_model_bbu_12500
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9447
- Accuracy: 0.795
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 8
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.348 | 1.0 | 882 | 0.4511 | 0.7925 |
| 0.1714 | 2.0 | 1764 | 0.5316 | 0.7925 |
| 0.0852 | 3.0 | 2646 | 0.8147 | 0.79 |
| 0.0529 | 4.0 | 3528 | 0.9447 | 0.795 |
### Framework versions
- Transformers 4.28.0
- Pytorch 2.0.0+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
| 1,547 | [
[
-0.033935546875,
-0.03790283203125,
0.006988525390625,
0.007129669189453125,
-0.0285186767578125,
-0.0292816162109375,
-0.0108489990234375,
-0.0243682861328125,
0.006206512451171875,
0.0214080810546875,
-0.04736328125,
-0.05377197265625,
-0.047088623046875,
... |
Michelvh/bert-question-answering-dutch | 2023-05-12T14:25:23.000Z | [
"transformers",
"pytorch",
"bert",
"question-answering",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | question-answering | Michelvh | null | null | Michelvh/bert-question-answering-dutch | 0 | 2 | transformers | 2023-05-11T11:41:15 | ---
tags:
- generated_from_trainer
model-index:
- name: bert-question-answering-dutch
results: []
dataset:
- type: yhavinga/squad_v2_dutch
  name: Dutch translation of SQUAD v2 dataset by yhavinga
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-question-answering-dutch
This model is a fine-tuned version of [GroNLP/bert-base-dutch-cased](https://huggingface.co/GroNLP/bert-base-dutch-cased) on the yhavinga/squad_v2_dutch dataset, a Dutch translation of SQuAD v2.
It achieves the following results on the evaluation set:
- Loss: 1.1493
## Model description
More information needed
## Intended uses & limitations
More information needed
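Absent further documentation, a minimal inference sketch, assuming the standard `question-answering` pipeline interface (question and context are illustrative):
```python
# Hypothetical usage sketch: extractive QA in Dutch via the pipeline API.
# Question and context are illustrative.
from transformers import pipeline

qa = pipeline("question-answering", model="Michelvh/bert-question-answering-dutch")
result = qa(
    question="Wat is de hoofdstad van Nederland?",
    context="Amsterdam is de hoofdstad van Nederland.",
)
print(result["answer"], result["score"])
```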
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 1.1616 | 1.0 | 16288 | 0.9373 |
| 0.807 | 2.0 | 32576 | 0.9496 |
| 0.579 | 3.0 | 48864 | 1.1493 |
### Framework versions
- Transformers 4.27.4
- Pytorch 2.0.0
- Datasets 2.1.0
- Tokenizers 0.13.3
| 1,479 | [
[
-0.05084228515625,
-0.06640625,
0.0143280029296875,
0.0160064697265625,
-0.02313232421875,
-0.0400390625,
-0.01800537109375,
-0.0247344970703125,
0.0109405517578125,
0.038116455078125,
-0.054443359375,
-0.03570556640625,
-0.051513671875,
-0.0120086669921875,... |
MrPark97/distilbert-base-uncased-finetuned-emotion | 2023-05-11T13:52:23.000Z | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:emotion",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | text-classification | MrPark97 | null | null | MrPark97/distilbert-base-uncased-finetuned-emotion | 0 | 2 | transformers | 2023-05-11T13:39:37 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-emotion
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: emotion
type: emotion
config: split
split: validation
args: split
metrics:
- name: Accuracy
type: accuracy
value: 0.922
- name: F1
type: f1
value: 0.9219181118935907
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2156
- Accuracy: 0.922
- F1: 0.9219
## Model description
More information needed
## Intended uses & limitations
More information needed
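Absent further documentation, a minimal inference sketch, assuming the standard sequence-classification interface (the sentence is illustrative):
```python
# Hypothetical usage sketch with the lower-level AutoModel API; the label
# names are read from the checkpoint config, the sentence is illustrative.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_id = "MrPark97/distilbert-base-uncased-finetuned-emotion"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)

inputs = tokenizer("I'm thrilled about the results!", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
print(model.config.id2label[logits.argmax(dim=-1).item()])
```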
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.8438 | 1.0 | 250 | 0.3229 | 0.901 | 0.8975 |
| 0.2511 | 2.0 | 500 | 0.2156 | 0.922 | 0.9219 |
### Framework versions
- Transformers 4.28.0
- Pytorch 2.0.0+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
| 1,846 | [
[
-0.037933349609375,
-0.041351318359375,
0.0149078369140625,
0.0218658447265625,
-0.0264434814453125,
-0.0188446044921875,
-0.01316070556640625,
-0.00876617431640625,
0.01087188720703125,
0.00843048095703125,
-0.057159423828125,
-0.0518798828125,
-0.0592956542968... |
Camille03/sentiment-model | 2023-06-02T15:00:54.000Z | [
"transformers",
"pytorch",
"bert",
"text-classification",
"generated_from_trainer",
"license:mit",
"endpoints_compatible",
"region:us"
] | text-classification | Camille03 | null | null | Camille03/sentiment-model | 0 | 2 | transformers | 2023-05-11T14:37:15 | ---
license: mit
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: sentiment-model
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# sentiment-model
This model is a fine-tuned version of [prajjwal1/bert-tiny](https://huggingface.co/prajjwal1/bert-tiny) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5607
- Accuracy: 0.7833
## Model description
More information needed
## Intended uses & limitations
More information needed
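Absent further documentation, a minimal batch-inference sketch, assuming the standard `text-classification` pipeline interface (the texts are illustrative):
```python
# Hypothetical batch-inference sketch; the texts are illustrative and the
# label names depend on the (undocumented) training data.
from transformers import pipeline

classifier = pipeline("text-classification", model="Camille03/sentiment-model")
texts = ["This was great.", "This was terrible."]
for text, pred in zip(texts, classifier(texts)):
    print(text, "->", pred["label"], round(pred["score"], 3))
```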
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.5278 | 1.0 | 1500 | 0.4808 | 0.7817 |
| 0.3811 | 2.0 | 3000 | 0.5271 | 0.78 |
| 0.3366 | 3.0 | 4500 | 0.5607 | 0.7833 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.12.1+cu102
- Datasets 2.12.0
- Tokenizers 0.12.1
| 1,442 | [
[
-0.0382080078125,
-0.04681396484375,
0.01526641845703125,
0.01580810546875,
-0.031402587890625,
-0.038665771484375,
-0.023162841796875,
-0.00608062744140625,
0.0146331787109375,
0.01071929931640625,
-0.06134033203125,
-0.0496826171875,
-0.04595947265625,
-0.... |
Santici/distilroberta-base-mrpc-glue-santi-cinotti | 2023-05-11T14:48:33.000Z | [
"transformers",
"pytorch",
"tensorboard",
"roberta",
"text-classification",
"generated_from_trainer",
"dataset:glue",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | text-classification | Santici | null | null | Santici/distilroberta-base-mrpc-glue-santi-cinotti | 0 | 2 | transformers | 2023-05-11T14:40:13 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- accuracy
- f1
model-index:
- name: distilroberta-base-mrpc-glue-santi-cinotti
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: glue
type: glue
config: mrpc
split: validation
args: mrpc
metrics:
- name: Accuracy
type: accuracy
value: 0.8529411764705882
- name: F1
type: f1
value: 0.8901098901098902
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilroberta-base-mrpc-glue-santi-cinotti
This model is a fine-tuned version of [distilroberta-base](https://huggingface.co/distilroberta-base) on the glue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5176
- Accuracy: 0.8529
- F1: 0.8901
## Model description
More information needed
## Intended uses & limitations
More information needed
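Absent further documentation, a minimal sentence-pair inference sketch, assuming the standard sequence-classification interface (the sentences are illustrative; in GLUE MRPC, index 1 conventionally means "equivalent"):
```python
# Hypothetical usage sketch for MRPC-style paraphrase detection: the two
# sentences are encoded as a pair. Sentences are illustrative.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_id = "Santici/distilroberta-base-mrpc-glue-santi-cinotti"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)

inputs = tokenizer(
    "The company posted record profits this quarter.",
    "Quarterly earnings reached an all-time high.",
    return_tensors="pt",
)
with torch.no_grad():
    probs = model(**inputs).logits.softmax(dim=-1)
print(probs)
```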
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.5035 | 1.09 | 500 | 0.5691 | 0.8309 | 0.8804 |
| 0.3369 | 2.18 | 1000 | 0.5176 | 0.8529 | 0.8901 |
### Framework versions
- Transformers 4.28.0
- Pytorch 2.0.0+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
| 1,836 | [
[
-0.0263824462890625,
-0.042144775390625,
0.00814056396484375,
0.021820068359375,
-0.0284576416015625,
-0.0234527587890625,
-0.007080078125,
-0.00740814208984375,
0.0138397216796875,
0.005382537841796875,
-0.046539306640625,
-0.038970947265625,
-0.0595703125,
... |
zawyar/t5-base-finetuned-urdu | 2023-05-11T16:25:47.000Z | [
"transformers",
"tf",
"t5",
"text2text-generation",
"generated_from_keras_callback",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text2text-generation | zawyar | null | null | zawyar/t5-base-finetuned-urdu | 0 | 2 | transformers | 2023-05-11T15:43:54 | ---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: zawyar/t5-base-finetuned-urdu
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# zawyar/t5-base-finetuned-urdu
This model is a fine-tuned version of [t5-base](https://huggingface.co/t5-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.0778
- Validation Loss: 0.0562
- Epoch: 3
## Model description
More information needed
## Intended uses & limitations
More information needed
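Absent further documentation, a minimal inference sketch, assuming the standard `text2text-generation` pipeline interface (the prompt is only a placeholder, since the card does not describe the task):
```python
# Hypothetical usage sketch; the card does not document the task or input
# format, so the prompt below is only a placeholder.
from transformers import pipeline

generator = pipeline("text2text-generation", model="zawyar/t5-base-finetuned-urdu")
output = generator("Replace this with an input in the format used at training time.")
print(output[0]["generated_text"])
```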
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 5.6e-05, 'decay_steps': 3000, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: mixed_float16
### Training results
| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 0.1262 | 0.0646 | 0 |
| 0.0897 | 0.1241 | 1 |
| 0.0828 | 0.0534 | 2 |
| 0.0778 | 0.0562 | 3 |
### Framework versions
- Transformers 4.29.0
- TensorFlow 2.12.0
- Datasets 2.12.0
- Tokenizers 0.13.3
| 1,588 | [
[
-0.037933349609375,
-0.0309295654296875,
0.01361083984375,
0.00940704345703125,
-0.03515625,
-0.018157958984375,
-0.01401519775390625,
-0.0138702392578125,
-0.0030384063720703125,
0.01505279541015625,
-0.05194091796875,
-0.06005859375,
-0.06201171875,
-0.010... |
Neronuser/dqn-SpaceInvadersNoFrameskip-no-r | 2023-05-11T15:46:38.000Z | [
"stable-baselines3",
"SpaceInvadersNoFrameskip-v4",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | Neronuser | null | null | Neronuser/dqn-SpaceInvadersNoFrameskip-no-r | 0 | 2 | stable-baselines3 | 2023-05-11T15:45:57 | ---
library_name: stable-baselines3
tags:
- SpaceInvadersNoFrameskip-v4
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: DQN
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: SpaceInvadersNoFrameskip-v4
type: SpaceInvadersNoFrameskip-v4
metrics:
- type: mean_reward
value: 821.00 +/- 300.51
name: mean_reward
verified: false
---
# **DQN** Agent playing **SpaceInvadersNoFrameskip-v4**
This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
Install the RL Zoo (with SB3 and SB3-Contrib):
```bash
pip install rl_zoo3
```
```
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga Neronuser -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do:
```
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga Neronuser -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
## Training (with the RL Zoo)
```
python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga Neronuser
```
## Hyperparameters
```python
OrderedDict([('batch_size', 32),
('buffer_size', 100000),
('env_wrapper',
['stable_baselines3.common.atari_wrappers.AtariWrapper']),
('exploration_final_eps', 0.01),
('exploration_fraction', 0.1),
('frame_stack', 4),
('gradient_steps', 1),
('learning_rate', 0.0001),
('learning_starts', 100000),
('n_timesteps', 1000000.0),
('optimize_memory_usage', False),
('policy', 'CnnPolicy'),
('target_update_interval', 1000),
('train_freq', 4),
('normalize', False)])
```
| 2,694 | [
[
-0.041717529296875,
-0.036865234375,
0.021636962890625,
0.0247344970703125,
-0.00920867919921875,
-0.020294189453125,
0.0114593505859375,
-0.01396942138671875,
0.01300811767578125,
0.0247650146484375,
-0.070068359375,
-0.036163330078125,
-0.026275634765625,
... |
tollefj/setfit-nocola-20-iter-25-epochs | 2023-05-11T17:22:55.000Z | [
"sentence-transformers",
"pytorch",
"bert",
"setfit",
"text-classification",
"arxiv:2209.11055",
"license:apache-2.0",
"region:us"
] | text-classification | tollefj | null | null | tollefj/setfit-nocola-20-iter-25-epochs | 0 | 2 | sentence-transformers | 2023-05-11T17:22:10 | ---
license: apache-2.0
tags:
- setfit
- sentence-transformers
- text-classification
pipeline_tag: text-classification
---
# tollefj/setfit-nocola-20-iter-25-epochs
This is a [SetFit model](https://github.com/huggingface/setfit) that can be used for text classification. The model has been trained using an efficient few-shot learning technique that involves:
1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning.
2. Training a classification head with features from the fine-tuned Sentence Transformer.
## Usage
To use this model for inference, first install the SetFit library:
```bash
python -m pip install setfit
```
You can then run inference as follows:
```python
from setfit import SetFitModel
# Download from Hub and run inference
model = SetFitModel.from_pretrained("tollefj/setfit-nocola-20-iter-25-epochs")
# Run inference
preds = model(["i loved the spiderman movie!", "pineapple on pizza is the worst 🤮"])
```
## BibTeX entry and citation info
```bibtex
@article{https://doi.org/10.48550/arxiv.2209.11055,
doi = {10.48550/ARXIV.2209.11055},
url = {https://arxiv.org/abs/2209.11055},
author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren},
keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences},
title = {Efficient Few-Shot Learning Without Prompts},
publisher = {arXiv},
year = {2022},
copyright = {Creative Commons Attribution 4.0 International}
}
```
| 1,567 | [
[
-0.00093841552734375,
-0.053680419921875,
0.0242156982421875,
-0.004154205322265625,
-0.00917816162109375,
-0.020843505859375,
-0.01399993896484375,
-0.009918212890625,
-0.004619598388671875,
0.033599853515625,
-0.03790283203125,
-0.022064208984375,
-0.041625976... |
guoluo/Bert_class_1e-06_112epoch | 2023-05-11T17:43:54.000Z | [
"transformers",
"tf",
"bert",
"text-classification",
"generated_from_keras_callback",
"endpoints_compatible",
"region:us"
] | text-classification | guoluo | null | null | guoluo/Bert_class_1e-06_112epoch | 0 | 2 | transformers | 2023-05-11T17:43:09 | ---
tags:
- generated_from_keras_callback
model-index:
- name: Bert_class_1e-06
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# Bert_class_1e-06
This model is a fine-tuned version of [guoluo/Bert_1.5e_07](https://huggingface.co/guoluo/Bert_1.5e_07) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.2359
- Train Accuracy: 0.9271
- Validation Loss: 0.9369
- Validation Accuracy: 0.7394
- Train Lr: 9.938033e-07
- Epoch: 111
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'learning_rate': 9.938033e-07, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False}
- training_precision: float32
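For reference, a sketch of the optimizer configuration above as an equivalent Keras object:
```python
# Sketch of the optimizer dictionary above as a Keras object; decay=0.0
# matches the Keras default and is omitted.
import tensorflow as tf

optimizer = tf.keras.optimizers.Adam(
    learning_rate=9.938033e-07,
    beta_1=0.9,
    beta_2=0.999,
    epsilon=1e-07,
    amsgrad=False,
)
```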
### Training results
| Train Loss | Train Accuracy | Validation Loss | Validation Accuracy | Train Lr | Epoch |
|:----------:|:--------------:|:---------------:|:-------------------:|:------------:|:-----:|
| 1.2823 | 0.4776 | 1.0993 | 0.6761 | 1e-06 | 0 |
| 1.0339 | 0.6776 | 0.9839 | 0.6761 | 9.99999e-07 | 1 |
| 0.9705 | 0.6776 | 0.9658 | 0.6761 | 9.999969e-07 | 2 |
| 0.9486 | 0.6776 | 0.9590 | 0.6761 | 9.99994e-07 | 3 |
| 0.9369 | 0.6776 | 0.9544 | 0.6761 | 9.9999e-07 | 4 |
| 0.9332 | 0.6776 | 0.9470 | 0.6761 | 9.99985e-07 | 5 |
| 0.9205 | 0.6776 | 0.9421 | 0.6761 | 9.99979e-07 | 6 |
| 0.9135 | 0.6776 | 0.9374 | 0.6761 | 9.999719e-07 | 7 |
| 0.9113 | 0.6776 | 0.9340 | 0.6761 | 9.99964e-07 | 8 |
| 0.9005 | 0.6776 | 0.9294 | 0.6761 | 9.99955e-07 | 9 |
| 0.8896 | 0.6776 | 0.9242 | 0.6761 | 9.99945e-07 | 10 |
| 0.8746 | 0.6800 | 0.9191 | 0.6761 | 9.99934e-07 | 11 |
| 0.8649 | 0.6824 | 0.9143 | 0.6761 | 9.999219e-07 | 12 |
| 0.8621 | 0.6847 | 0.9095 | 0.6761 | 9.999089e-07 | 13 |
| 0.8506 | 0.6847 | 0.9019 | 0.6761 | 9.99895e-07 | 14 |
| 0.8434 | 0.6800 | 0.8943 | 0.6761 | 9.9988e-07 | 15 |
| 0.8286 | 0.6871 | 0.8885 | 0.6761 | 9.998639e-07 | 16 |
| 0.8239 | 0.6824 | 0.8814 | 0.6761 | 9.998469e-07 | 17 |
| 0.8181 | 0.6894 | 0.8785 | 0.6761 | 9.998289e-07 | 18 |
| 0.7962 | 0.6894 | 0.8731 | 0.6690 | 9.998099e-07 | 19 |
| 0.7908 | 0.7012 | 0.8671 | 0.6690 | 9.997899e-07 | 20 |
| 0.7640 | 0.6988 | 0.8641 | 0.6761 | 9.997689e-07 | 21 |
| 0.7644 | 0.7035 | 0.8590 | 0.6831 | 9.997469e-07 | 22 |
| 0.7512 | 0.7200 | 0.8558 | 0.6831 | 9.99724e-07 | 23 |
| 0.7394 | 0.7200 | 0.8527 | 0.6972 | 9.997e-07 | 24 |
| 0.7366 | 0.7271 | 0.8501 | 0.7113 | 9.99675e-07 | 25 |
| 0.7293 | 0.7247 | 0.8471 | 0.7042 | 9.996489e-07 | 26 |
| 0.7189 | 0.7529 | 0.8479 | 0.7113 | 9.99622e-07 | 27 |
| 0.7077 | 0.7341 | 0.8411 | 0.7183 | 9.99594e-07 | 28 |
| 0.6965 | 0.7671 | 0.8409 | 0.7183 | 9.99565e-07 | 29 |
| 0.6838 | 0.7482 | 0.8372 | 0.7113 | 9.99535e-07 | 30 |
| 0.6835 | 0.7506 | 0.8362 | 0.7113 | 9.99504e-07 | 31 |
| 0.6702 | 0.7812 | 0.8365 | 0.6901 | 9.99472e-07 | 32 |
| 0.6623 | 0.7812 | 0.8323 | 0.7113 | 9.994391e-07 | 33 |
| 0.6565 | 0.7553 | 0.8298 | 0.6972 | 9.994051e-07 | 34 |
| 0.6452 | 0.7718 | 0.8291 | 0.6901 | 9.993701e-07 | 35 |
| 0.6396 | 0.7718 | 0.8284 | 0.7113 | 9.993341e-07 | 36 |
| 0.6299 | 0.7765 | 0.8262 | 0.6831 | 9.992972e-07 | 37 |
| 0.6230 | 0.7953 | 0.8364 | 0.7113 | 9.992592e-07 | 38 |
| 0.6095 | 0.7741 | 0.8233 | 0.7113 | 9.992202e-07 | 39 |
| 0.6193 | 0.7718 | 0.8206 | 0.7113 | 9.991802e-07 | 40 |
| 0.6008 | 0.7859 | 0.8260 | 0.7254 | 9.991393e-07 | 41 |
| 0.5967 | 0.7859 | 0.8199 | 0.7254 | 9.990973e-07 | 42 |
| 0.5883 | 0.7835 | 0.8189 | 0.7183 | 9.990544e-07 | 43 |
| 0.5751 | 0.8071 | 0.8279 | 0.7324 | 9.990104e-07 | 44 |
| 0.5709 | 0.8000 | 0.8204 | 0.7324 | 9.989654e-07 | 45 |
| 0.5697 | 0.8047 | 0.8229 | 0.7254 | 9.989195e-07 | 46 |
| 0.5580 | 0.8094 | 0.8152 | 0.7254 | 9.988726e-07 | 47 |
| 0.5595 | 0.8071 | 0.8275 | 0.7324 | 9.988246e-07 | 48 |
| 0.5486 | 0.7929 | 0.8168 | 0.7324 | 9.987757e-07 | 49 |
| 0.5400 | 0.8094 | 0.8239 | 0.7254 | 9.987258e-07 | 50 |
| 0.5352 | 0.8071 | 0.8190 | 0.7183 | 9.986749e-07 | 51 |
| 0.5141 | 0.8235 | 0.8171 | 0.7183 | 9.986229e-07 | 52 |
| 0.5324 | 0.8024 | 0.8191 | 0.7183 | 9.985699e-07 | 53 |
| 0.5123 | 0.8024 | 0.8279 | 0.7254 | 9.98516e-07 | 54 |
| 0.5151 | 0.8165 | 0.8213 | 0.7113 | 9.984611e-07 | 55 |
| 0.4986 | 0.8118 | 0.8176 | 0.7183 | 9.984052e-07 | 56 |
| 0.4925 | 0.8259 | 0.8208 | 0.7113 | 9.983482e-07 | 57 |
| 0.4848 | 0.8188 | 0.8182 | 0.7042 | 9.982904e-07 | 58 |
| 0.4952 | 0.8282 | 0.8214 | 0.7113 | 9.982315e-07 | 59 |
| 0.4837 | 0.8329 | 0.8192 | 0.7113 | 9.981716e-07 | 60 |
| 0.4513 | 0.8518 | 0.8224 | 0.7183 | 9.981106e-07 | 61 |
| 0.4628 | 0.8376 | 0.8227 | 0.7183 | 9.980488e-07 | 62 |
| 0.4633 | 0.8447 | 0.8246 | 0.7183 | 9.979859e-07 | 63 |
| 0.4472 | 0.8447 | 0.8256 | 0.7113 | 9.97922e-07 | 64 |
| 0.4529 | 0.8306 | 0.8285 | 0.7183 | 9.978571e-07 | 65 |
| 0.4579 | 0.8329 | 0.8331 | 0.7042 | 9.977913e-07 | 66 |
| 0.4326 | 0.8376 | 0.8278 | 0.7113 | 9.977244e-07 | 67 |
| 0.4255 | 0.8447 | 0.8265 | 0.7113 | 9.976566e-07 | 68 |
| 0.4322 | 0.8494 | 0.8293 | 0.7042 | 9.975878e-07 | 69 |
| 0.4189 | 0.8424 | 0.8382 | 0.7042 | 9.97518e-07 | 70 |
| 0.4236 | 0.8494 | 0.8302 | 0.7113 | 9.974472e-07 | 71 |
| 0.4025 | 0.8494 | 0.8364 | 0.7042 | 9.973753e-07 | 72 |
| 0.4225 | 0.8659 | 0.8370 | 0.7113 | 9.973025e-07 | 73 |
| 0.4027 | 0.8541 | 0.8377 | 0.7042 | 9.972288e-07 | 74 |
| 0.4090 | 0.8588 | 0.8381 | 0.7113 | 9.97154e-07 | 75 |
| 0.3887 | 0.8682 | 0.8378 | 0.7042 | 9.970781e-07 | 76 |
| 0.4022 | 0.8706 | 0.8406 | 0.7042 | 9.970014e-07 | 77 |
| 0.3867 | 0.8682 | 0.8457 | 0.7113 | 9.969236e-07 | 78 |
| 0.3689 | 0.8706 | 0.8460 | 0.7113 | 9.968448e-07 | 79 |
| 0.3728 | 0.8729 | 0.8527 | 0.7042 | 9.967652e-07 | 80 |
| 0.3754 | 0.8706 | 0.8525 | 0.7042 | 9.966844e-07 | 81 |
| 0.3580 | 0.8871 | 0.8531 | 0.7113 | 9.966027e-07 | 82 |
| 0.3718 | 0.8659 | 0.8593 | 0.7042 | 9.965199e-07 | 83 |
| 0.3535 | 0.8800 | 0.8593 | 0.7324 | 9.964363e-07 | 84 |
| 0.3342 | 0.8824 | 0.8704 | 0.6972 | 9.963516e-07 | 85 |
| 0.3341 | 0.8918 | 0.8630 | 0.7324 | 9.962658e-07 | 86 |
| 0.3371 | 0.8776 | 0.8698 | 0.7042 | 9.961792e-07 | 87 |
| 0.3338 | 0.8847 | 0.8689 | 0.7042 | 9.960916e-07 | 88 |
| 0.3295 | 0.8776 | 0.8753 | 0.6972 | 9.960029e-07 | 89 |
| 0.3259 | 0.8847 | 0.8696 | 0.7183 | 9.959133e-07 | 90 |
| 0.3290 | 0.8776 | 0.8726 | 0.7183 | 9.958227e-07 | 91 |
| 0.3117 | 0.8988 | 0.8798 | 0.7324 | 9.95731e-07 | 92 |
| 0.3075 | 0.8965 | 0.8836 | 0.7254 | 9.956385e-07 | 93 |
| 0.2905 | 0.9129 | 0.8868 | 0.7183 | 9.95545e-07 | 94 |
| 0.2979 | 0.9153 | 0.8888 | 0.7183 | 9.954504e-07 | 95 |
| 0.3031 | 0.8800 | 0.8956 | 0.7324 | 9.953548e-07 | 96 |
| 0.2883 | 0.9035 | 0.8984 | 0.7042 | 9.952582e-07 | 97 |
| 0.2835 | 0.9106 | 0.8969 | 0.7254 | 9.951607e-07 | 98 |
| 0.2803 | 0.9059 | 0.8998 | 0.7254 | 9.950621e-07 | 99 |
| 0.2812 | 0.9176 | 0.9034 | 0.7254 | 9.949626e-07 | 100 |
| 0.2714 | 0.9153 | 0.9028 | 0.7183 | 9.948621e-07 | 101 |
| 0.2905 | 0.9059 | 0.9144 | 0.7254 | 9.947606e-07 | 102 |
| 0.2631 | 0.9224 | 0.9143 | 0.6972 | 9.946582e-07 | 103 |
| 0.2679 | 0.9176 | 0.9180 | 0.7254 | 9.945547e-07 | 104 |
| 0.2583 | 0.9224 | 0.9206 | 0.7042 | 9.944504e-07 | 105 |
| 0.2613 | 0.9200 | 0.9286 | 0.7254 | 9.94345e-07 | 106 |
| 0.2669 | 0.9012 | 0.9237 | 0.7254 | 9.942386e-07 | 107 |
| 0.2571 | 0.9153 | 0.9351 | 0.7254 | 9.941313e-07 | 108 |
| 0.2570 | 0.9106 | 0.9306 | 0.7324 | 9.940229e-07 | 109 |
| 0.2344 | 0.9200 | 0.9396 | 0.7183 | 9.939135e-07 | 110 |
| 0.2359 | 0.9271 | 0.9369 | 0.7394 | 9.938033e-07 | 111 |
### Framework versions
- Transformers 4.30.0.dev0
- TensorFlow 2.9.1
- Datasets 2.8.0
- Tokenizers 0.13.2
| 12,033 | [
[
-0.049652099609375,
-0.03436279296875,
0.024688720703125,
0.00391387939453125,
-0.0004315376281738281,
0.004131317138671875,
0.0031223297119140625,
0.00246429443359375,
0.056182861328125,
0.0244903564453125,
-0.04522705078125,
-0.0460205078125,
-0.04083251953125... |
guoluo/Bert_class_1e-06_137epoch | 2023-05-11T18:38:36.000Z | [
"transformers",
"tf",
"bert",
"text-classification",
"generated_from_keras_callback",
"endpoints_compatible",
"region:us"
] | text-classification | guoluo | null | null | guoluo/Bert_class_1e-06_137epoch | 0 | 2 | transformers | 2023-05-11T18:37:48 | ---
tags:
- generated_from_keras_callback
model-index:
- name: Bert_class_1e-06_137epoch
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# Bert_class_1e-06_137epoch
This model is a fine-tuned version of [guoluo/Bert_1.5e_07](https://huggingface.co/guoluo/Bert_1.5e_07) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.1694
- Train Accuracy: 0.9459
- Validation Loss: 1.0179
- Validation Accuracy: 0.7394
- Train Lr: 9.907274e-07
- Epoch: 136
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'learning_rate': 9.907274e-07, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False}
- training_precision: float32
### Training results
| Train Loss | Train Accuracy | Validation Loss | Validation Accuracy | Train Lr | Epoch |
|:----------:|:--------------:|:---------------:|:-------------------:|:------------:|:-----:|
| 1.2823 | 0.4776 | 1.0993 | 0.6761 | 1e-06 | 0 |
| 1.0339 | 0.6776 | 0.9839 | 0.6761 | 9.99999e-07 | 1 |
| 0.9705 | 0.6776 | 0.9658 | 0.6761 | 9.999969e-07 | 2 |
| 0.9486 | 0.6776 | 0.9590 | 0.6761 | 9.99994e-07 | 3 |
| 0.9369 | 0.6776 | 0.9544 | 0.6761 | 9.9999e-07 | 4 |
| 0.9332 | 0.6776 | 0.9470 | 0.6761 | 9.99985e-07 | 5 |
| 0.9205 | 0.6776 | 0.9421 | 0.6761 | 9.99979e-07 | 6 |
| 0.9135 | 0.6776 | 0.9374 | 0.6761 | 9.999719e-07 | 7 |
| 0.9113 | 0.6776 | 0.9340 | 0.6761 | 9.99964e-07 | 8 |
| 0.9005 | 0.6776 | 0.9294 | 0.6761 | 9.99955e-07 | 9 |
| 0.8896 | 0.6776 | 0.9242 | 0.6761 | 9.99945e-07 | 10 |
| 0.8746 | 0.6800 | 0.9191 | 0.6761 | 9.99934e-07 | 11 |
| 0.8649 | 0.6824 | 0.9143 | 0.6761 | 9.999219e-07 | 12 |
| 0.8621 | 0.6847 | 0.9095 | 0.6761 | 9.999089e-07 | 13 |
| 0.8506 | 0.6847 | 0.9019 | 0.6761 | 9.99895e-07 | 14 |
| 0.8434 | 0.6800 | 0.8943 | 0.6761 | 9.9988e-07 | 15 |
| 0.8286 | 0.6871 | 0.8885 | 0.6761 | 9.998639e-07 | 16 |
| 0.8239 | 0.6824 | 0.8814 | 0.6761 | 9.998469e-07 | 17 |
| 0.8181 | 0.6894 | 0.8785 | 0.6761 | 9.998289e-07 | 18 |
| 0.7962 | 0.6894 | 0.8731 | 0.6690 | 9.998099e-07 | 19 |
| 0.7908 | 0.7012 | 0.8671 | 0.6690 | 9.997899e-07 | 20 |
| 0.7640 | 0.6988 | 0.8641 | 0.6761 | 9.997689e-07 | 21 |
| 0.7644 | 0.7035 | 0.8590 | 0.6831 | 9.997469e-07 | 22 |
| 0.7512 | 0.7200 | 0.8558 | 0.6831 | 9.99724e-07 | 23 |
| 0.7394 | 0.7200 | 0.8527 | 0.6972 | 9.997e-07 | 24 |
| 0.7366 | 0.7271 | 0.8501 | 0.7113 | 9.99675e-07 | 25 |
| 0.7293 | 0.7247 | 0.8471 | 0.7042 | 9.996489e-07 | 26 |
| 0.7189 | 0.7529 | 0.8479 | 0.7113 | 9.99622e-07 | 27 |
| 0.7077 | 0.7341 | 0.8411 | 0.7183 | 9.99594e-07 | 28 |
| 0.6965 | 0.7671 | 0.8409 | 0.7183 | 9.99565e-07 | 29 |
| 0.6838 | 0.7482 | 0.8372 | 0.7113 | 9.99535e-07 | 30 |
| 0.6835 | 0.7506 | 0.8362 | 0.7113 | 9.99504e-07 | 31 |
| 0.6702 | 0.7812 | 0.8365 | 0.6901 | 9.99472e-07 | 32 |
| 0.6623 | 0.7812 | 0.8323 | 0.7113 | 9.994391e-07 | 33 |
| 0.6565 | 0.7553 | 0.8298 | 0.6972 | 9.994051e-07 | 34 |
| 0.6452 | 0.7718 | 0.8291 | 0.6901 | 9.993701e-07 | 35 |
| 0.6396 | 0.7718 | 0.8284 | 0.7113 | 9.993341e-07 | 36 |
| 0.6299 | 0.7765 | 0.8262 | 0.6831 | 9.992972e-07 | 37 |
| 0.6230 | 0.7953 | 0.8364 | 0.7113 | 9.992592e-07 | 38 |
| 0.6095 | 0.7741 | 0.8233 | 0.7113 | 9.992202e-07 | 39 |
| 0.6193 | 0.7718 | 0.8206 | 0.7113 | 9.991802e-07 | 40 |
| 0.6008 | 0.7859 | 0.8260 | 0.7254 | 9.991393e-07 | 41 |
| 0.5967 | 0.7859 | 0.8199 | 0.7254 | 9.990973e-07 | 42 |
| 0.5883 | 0.7835 | 0.8189 | 0.7183 | 9.990544e-07 | 43 |
| 0.5751 | 0.8071 | 0.8279 | 0.7324 | 9.990104e-07 | 44 |
| 0.5709 | 0.8000 | 0.8204 | 0.7324 | 9.989654e-07 | 45 |
| 0.5697 | 0.8047 | 0.8229 | 0.7254 | 9.989195e-07 | 46 |
| 0.5580 | 0.8094 | 0.8152 | 0.7254 | 9.988726e-07 | 47 |
| 0.5595 | 0.8071 | 0.8275 | 0.7324 | 9.988246e-07 | 48 |
| 0.5486 | 0.7929 | 0.8168 | 0.7324 | 9.987757e-07 | 49 |
| 0.5400 | 0.8094 | 0.8239 | 0.7254 | 9.987258e-07 | 50 |
| 0.5352 | 0.8071 | 0.8190 | 0.7183 | 9.986749e-07 | 51 |
| 0.5141 | 0.8235 | 0.8171 | 0.7183 | 9.986229e-07 | 52 |
| 0.5324 | 0.8024 | 0.8191 | 0.7183 | 9.985699e-07 | 53 |
| 0.5123 | 0.8024 | 0.8279 | 0.7254 | 9.98516e-07 | 54 |
| 0.5151 | 0.8165 | 0.8213 | 0.7113 | 9.984611e-07 | 55 |
| 0.4986 | 0.8118 | 0.8176 | 0.7183 | 9.984052e-07 | 56 |
| 0.4925 | 0.8259 | 0.8208 | 0.7113 | 9.983482e-07 | 57 |
| 0.4848 | 0.8188 | 0.8182 | 0.7042 | 9.982904e-07 | 58 |
| 0.4952 | 0.8282 | 0.8214 | 0.7113 | 9.982315e-07 | 59 |
| 0.4837 | 0.8329 | 0.8192 | 0.7113 | 9.981716e-07 | 60 |
| 0.4513 | 0.8518 | 0.8224 | 0.7183 | 9.981106e-07 | 61 |
| 0.4628 | 0.8376 | 0.8227 | 0.7183 | 9.980488e-07 | 62 |
| 0.4633 | 0.8447 | 0.8246 | 0.7183 | 9.979859e-07 | 63 |
| 0.4472 | 0.8447 | 0.8256 | 0.7113 | 9.97922e-07 | 64 |
| 0.4529 | 0.8306 | 0.8285 | 0.7183 | 9.978571e-07 | 65 |
| 0.4579 | 0.8329 | 0.8331 | 0.7042 | 9.977913e-07 | 66 |
| 0.4326 | 0.8376 | 0.8278 | 0.7113 | 9.977244e-07 | 67 |
| 0.4255 | 0.8447 | 0.8265 | 0.7113 | 9.976566e-07 | 68 |
| 0.4322 | 0.8494 | 0.8293 | 0.7042 | 9.975878e-07 | 69 |
| 0.4189 | 0.8424 | 0.8382 | 0.7042 | 9.97518e-07 | 70 |
| 0.4236 | 0.8494 | 0.8302 | 0.7113 | 9.974472e-07 | 71 |
| 0.4025 | 0.8494 | 0.8364 | 0.7042 | 9.973753e-07 | 72 |
| 0.4225 | 0.8659 | 0.8370 | 0.7113 | 9.973025e-07 | 73 |
| 0.4027 | 0.8541 | 0.8377 | 0.7042 | 9.972288e-07 | 74 |
| 0.4090 | 0.8588 | 0.8381 | 0.7113 | 9.97154e-07 | 75 |
| 0.3887 | 0.8682 | 0.8378 | 0.7042 | 9.970781e-07 | 76 |
| 0.4022 | 0.8706 | 0.8406 | 0.7042 | 9.970014e-07 | 77 |
| 0.3867 | 0.8682 | 0.8457 | 0.7113 | 9.969236e-07 | 78 |
| 0.3689 | 0.8706 | 0.8460 | 0.7113 | 9.968448e-07 | 79 |
| 0.3728 | 0.8729 | 0.8527 | 0.7042 | 9.967652e-07 | 80 |
| 0.3754 | 0.8706 | 0.8525 | 0.7042 | 9.966844e-07 | 81 |
| 0.3580 | 0.8871 | 0.8531 | 0.7113 | 9.966027e-07 | 82 |
| 0.3718 | 0.8659 | 0.8593 | 0.7042 | 9.965199e-07 | 83 |
| 0.3535 | 0.8800 | 0.8593 | 0.7324 | 9.964363e-07 | 84 |
| 0.3342 | 0.8824 | 0.8704 | 0.6972 | 9.963516e-07 | 85 |
| 0.3341 | 0.8918 | 0.8630 | 0.7324 | 9.962658e-07 | 86 |
| 0.3371 | 0.8776 | 0.8698 | 0.7042 | 9.961792e-07 | 87 |
| 0.3338 | 0.8847 | 0.8689 | 0.7042 | 9.960916e-07 | 88 |
| 0.3295 | 0.8776 | 0.8753 | 0.6972 | 9.960029e-07 | 89 |
| 0.3259 | 0.8847 | 0.8696 | 0.7183 | 9.959133e-07 | 90 |
| 0.3290 | 0.8776 | 0.8726 | 0.7183 | 9.958227e-07 | 91 |
| 0.3117 | 0.8988 | 0.8798 | 0.7324 | 9.95731e-07 | 92 |
| 0.3075 | 0.8965 | 0.8836 | 0.7254 | 9.956385e-07 | 93 |
| 0.2905 | 0.9129 | 0.8868 | 0.7183 | 9.95545e-07 | 94 |
| 0.2979 | 0.9153 | 0.8888 | 0.7183 | 9.954504e-07 | 95 |
| 0.3031 | 0.8800 | 0.8956 | 0.7324 | 9.953548e-07 | 96 |
| 0.2883 | 0.9035 | 0.8984 | 0.7042 | 9.952582e-07 | 97 |
| 0.2835 | 0.9106 | 0.8969 | 0.7254 | 9.951607e-07 | 98 |
| 0.2803 | 0.9059 | 0.8998 | 0.7254 | 9.950621e-07 | 99 |
| 0.2812 | 0.9176 | 0.9034 | 0.7254 | 9.949626e-07 | 100 |
| 0.2714 | 0.9153 | 0.9028 | 0.7183 | 9.948621e-07 | 101 |
| 0.2905 | 0.9059 | 0.9144 | 0.7254 | 9.947606e-07 | 102 |
| 0.2631 | 0.9224 | 0.9143 | 0.6972 | 9.946582e-07 | 103 |
| 0.2679 | 0.9176 | 0.9180 | 0.7254 | 9.945547e-07 | 104 |
| 0.2583 | 0.9224 | 0.9206 | 0.7042 | 9.944504e-07 | 105 |
| 0.2613 | 0.9200 | 0.9286 | 0.7254 | 9.94345e-07 | 106 |
| 0.2669 | 0.9012 | 0.9237 | 0.7254 | 9.942386e-07 | 107 |
| 0.2571 | 0.9153 | 0.9351 | 0.7254 | 9.941313e-07 | 108 |
| 0.2570 | 0.9106 | 0.9306 | 0.7324 | 9.940229e-07 | 109 |
| 0.2344 | 0.9200 | 0.9396 | 0.7183 | 9.939135e-07 | 110 |
| 0.2359 | 0.9271 | 0.9369 | 0.7394 | 9.938033e-07 | 111 |
| 0.2395 | 0.9271 | 0.9522 | 0.7042 | 9.93692e-07 | 112 |
| 0.2408 | 0.9247 | 0.9509 | 0.7183 | 9.935796e-07 | 113 |
| 0.2330 | 0.9294 | 0.9561 | 0.7042 | 9.934664e-07 | 114 |
| 0.2247 | 0.9271 | 0.9539 | 0.7183 | 9.933522e-07 | 115 |
| 0.2192 | 0.9318 | 0.9705 | 0.7042 | 9.93237e-07 | 116 |
| 0.2173 | 0.9341 | 0.9621 | 0.7254 | 9.931208e-07 | 117 |
| 0.2138 | 0.9200 | 0.9679 | 0.7183 | 9.930036e-07 | 118 |
| 0.2239 | 0.9176 | 0.9733 | 0.6972 | 9.928855e-07 | 119 |
| 0.2188 | 0.9341 | 0.9838 | 0.7042 | 9.927663e-07 | 120 |
| 0.2116 | 0.9341 | 0.9764 | 0.7324 | 9.926462e-07 | 121 |
| 0.2061 | 0.9200 | 0.9840 | 0.7183 | 9.925251e-07 | 122 |
| 0.2061 | 0.9435 | 0.9798 | 0.7254 | 9.92403e-07 | 123 |
| 0.2049 | 0.9388 | 1.0056 | 0.7042 | 9.9228e-07 | 124 |
| 0.1947 | 0.9459 | 0.9898 | 0.7254 | 9.92156e-07 | 125 |
| 0.1990 | 0.9365 | 0.9935 | 0.6972 | 9.92031e-07 | 126 |
| 0.1945 | 0.9506 | 0.9997 | 0.7113 | 9.91905e-07 | 127 |
| 0.1955 | 0.9365 | 0.9972 | 0.7254 | 9.91778e-07 | 128 |
| 0.1845 | 0.9459 | 1.0044 | 0.7254 | 9.916502e-07 | 129 |
| 0.1722 | 0.9388 | 1.0057 | 0.7183 | 9.915212e-07 | 130 |
| 0.1693 | 0.9576 | 1.0118 | 0.7113 | 9.913914e-07 | 131 |
| 0.1837 | 0.9318 | 1.0126 | 0.7113 | 9.912605e-07 | 132 |
| 0.1894 | 0.9412 | 1.0254 | 0.6972 | 9.911287e-07 | 133 |
| 0.1702 | 0.9506 | 1.0156 | 0.7254 | 9.909959e-07 | 134 |
| 0.1697 | 0.9576 | 1.0184 | 0.7183 | 9.908621e-07 | 135 |
| 0.1694 | 0.9459 | 1.0179 | 0.7394 | 9.907274e-07 | 136 |
### Framework versions
- Transformers 4.30.0.dev0
- TensorFlow 2.9.1
- Datasets 2.8.0
- Tokenizers 0.13.2
| 14,426 | [
[
-0.04974365234375,
-0.034454345703125,
0.0246124267578125,
0.003719329833984375,
-0.000621795654296875,
0.004085540771484375,
0.0029430389404296875,
0.0025997161865234375,
0.0562744140625,
0.024505615234375,
-0.045166015625,
-0.0458984375,
-0.040802001953125,
... |
Adoley/covid-tweets-sentiment-analysis-distilbert-model | 2023-07-04T19:50:48.000Z | [
"transformers",
"pytorch",
"tensorboard",
"safetensors",
"distilbert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"has_space",
"region:us"
] | text-classification | Adoley | null | null | Adoley/covid-tweets-sentiment-analysis-distilbert-model | 0 | 2 | transformers | 2023-05-11T19:35:51 | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: covid-tweets-sentiment-analysis-distilbert-model
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# covid-tweets-sentiment-analysis-distilbert-model
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5979
- Rmse: 0.6680
## Model description
More information needed
## Intended uses & limitations
More information needed
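Absent further documentation, a minimal inference sketch, assuming the standard `text-classification` pipeline interface (the tweet is illustrative):
```python
# Hypothetical usage sketch. The card reports RMSE rather than accuracy, so
# verify the label scheme on the Hub before trusting the returned labels.
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="Adoley/covid-tweets-sentiment-analysis-distilbert-model",
)
print(classifier("Vaccines are rolling out faster than expected!"))
```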
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 2
- eval_batch_size: 2
- seed: 42
- gradient_accumulation_steps: 16
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rmse |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.7464 | 2.0 | 500 | 0.5979 | 0.6680 |
| 0.4318 | 4.0 | 1000 | 0.6374 | 0.6327 |
| 0.1694 | 6.0 | 1500 | 0.9439 | 0.6311 |
| 0.072 | 8.0 | 2000 | 1.1471 | 0.6556 |
| 0.0388 | 10.0 | 2500 | 1.2217 | 0.6437 |
### Framework versions
- Transformers 4.29.1
- Pytorch 2.0.0+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
| 1,707 | [
[
-0.03155517578125,
-0.048126220703125,
-0.0019474029541015625,
0.0168609619140625,
-0.023895263671875,
-0.0043792724609375,
-0.00728607177734375,
0.00036144256591796875,
0.0054779052734375,
-0.0018510818481445312,
-0.0584716796875,
-0.052490234375,
-0.0621948242... |
shivansh-ka/Multilingual-Toxic-Comment-Roberta | 2023-05-11T20:19:05.000Z | [
"keras",
"region:us"
] | null | shivansh-ka | null | null | shivansh-ka/Multilingual-Toxic-Comment-Roberta | 0 | 2 | keras | 2023-05-11T20:16:57 | ---
library_name: keras
---
## Model description
More information needed
## Intended uses & limitations
More information needed
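Absent further documentation, a minimal loading sketch, assuming the standard `huggingface_hub` Keras integration (preprocessing is not documented, so only loading and inspection are shown):
```python
# Hypothetical loading sketch via the huggingface_hub Keras integration; the
# card does not document preprocessing, so only loading/inspection is shown.
from huggingface_hub import from_pretrained_keras

model = from_pretrained_keras("shivansh-ka/Multilingual-Toxic-Comment-Roberta")
model.summary()
```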
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
| Hyperparameters | Value |
| :-- | :-- |
| name | Adam |
| weight_decay | 1e-06 |
| clipnorm | None |
| global_clipnorm | None |
| clipvalue | None |
| use_ema | False |
| ema_momentum | 0.99 |
| ema_overwrite_frequency | None |
| jit_compile | False |
| is_legacy_optimizer | False |
| learning_rate | 1.9999999494757503e-05 |
| beta_1 | 0.9 |
| beta_2 | 0.999 |
| epsilon | 1e-07 |
| amsgrad | False |
| training_precision | float32 |
| 741 | [
[
-0.036712646484375,
-0.03973388671875,
0.0282135009765625,
0.003753662109375,
-0.035400390625,
-0.017913818359375,
0.0007882118225097656,
0.001461029052734375,
0.02349853515625,
0.0183258056640625,
-0.04425048828125,
-0.047943115234375,
-0.034820556640625,
0... |
guoluo/Bert_class_1e-06_48epoch_loss | 2023-05-11T20:32:03.000Z | [
"transformers",
"tf",
"bert",
"text-classification",
"generated_from_keras_callback",
"endpoints_compatible",
"region:us"
] | text-classification | guoluo | null | null | guoluo/Bert_class_1e-06_48epoch_loss | 0 | 2 | transformers | 2023-05-11T20:31:19 | ---
tags:
- generated_from_keras_callback
model-index:
- name: Bert_class_1e-06_48epoch_loss
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# Bert_class_1e-06_48epoch_loss
This model is a fine-tuned version of [guoluo/Bert_1.5e_07](https://huggingface.co/guoluo/Bert_1.5e_07) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.5580
- Train Accuracy: 0.8094
- Validation Loss: 0.8152
- Validation Accuracy: 0.7254
- Train Lr: 9.988726e-07
- Epoch: 47
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'learning_rate': 9.988726e-07, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False}
- training_precision: float32
### Training results
| Train Loss | Train Accuracy | Validation Loss | Validation Accuracy | Train Lr | Epoch |
|:----------:|:--------------:|:---------------:|:-------------------:|:------------:|:-----:|
| 1.2823 | 0.4776 | 1.0993 | 0.6761 | 1e-06 | 0 |
| 1.0339 | 0.6776 | 0.9839 | 0.6761 | 9.99999e-07 | 1 |
| 0.9705 | 0.6776 | 0.9658 | 0.6761 | 9.999969e-07 | 2 |
| 0.9486 | 0.6776 | 0.9590 | 0.6761 | 9.99994e-07 | 3 |
| 0.9369 | 0.6776 | 0.9544 | 0.6761 | 9.9999e-07 | 4 |
| 0.9332 | 0.6776 | 0.9470 | 0.6761 | 9.99985e-07 | 5 |
| 0.9205 | 0.6776 | 0.9421 | 0.6761 | 9.99979e-07 | 6 |
| 0.9135 | 0.6776 | 0.9374 | 0.6761 | 9.999719e-07 | 7 |
| 0.9113 | 0.6776 | 0.9340 | 0.6761 | 9.99964e-07 | 8 |
| 0.9005 | 0.6776 | 0.9294 | 0.6761 | 9.99955e-07 | 9 |
| 0.8896 | 0.6776 | 0.9242 | 0.6761 | 9.99945e-07 | 10 |
| 0.8746 | 0.6800 | 0.9191 | 0.6761 | 9.99934e-07 | 11 |
| 0.8649 | 0.6824 | 0.9143 | 0.6761 | 9.999219e-07 | 12 |
| 0.8621 | 0.6847 | 0.9095 | 0.6761 | 9.999089e-07 | 13 |
| 0.8506 | 0.6847 | 0.9019 | 0.6761 | 9.99895e-07 | 14 |
| 0.8434 | 0.6800 | 0.8943 | 0.6761 | 9.9988e-07 | 15 |
| 0.8286 | 0.6871 | 0.8885 | 0.6761 | 9.998639e-07 | 16 |
| 0.8239 | 0.6824 | 0.8814 | 0.6761 | 9.998469e-07 | 17 |
| 0.8181 | 0.6894 | 0.8785 | 0.6761 | 9.998289e-07 | 18 |
| 0.7962 | 0.6894 | 0.8731 | 0.6690 | 9.998099e-07 | 19 |
| 0.7908 | 0.7012 | 0.8671 | 0.6690 | 9.997899e-07 | 20 |
| 0.7640 | 0.6988 | 0.8641 | 0.6761 | 9.997689e-07 | 21 |
| 0.7644 | 0.7035 | 0.8590 | 0.6831 | 9.997469e-07 | 22 |
| 0.7512 | 0.7200 | 0.8558 | 0.6831 | 9.99724e-07 | 23 |
| 0.7394 | 0.7200 | 0.8527 | 0.6972 | 9.997e-07 | 24 |
| 0.7366 | 0.7271 | 0.8501 | 0.7113 | 9.99675e-07 | 25 |
| 0.7293 | 0.7247 | 0.8471 | 0.7042 | 9.996489e-07 | 26 |
| 0.7189 | 0.7529 | 0.8479 | 0.7113 | 9.99622e-07 | 27 |
| 0.7077 | 0.7341 | 0.8411 | 0.7183 | 9.99594e-07 | 28 |
| 0.6965 | 0.7671 | 0.8409 | 0.7183 | 9.99565e-07 | 29 |
| 0.6838 | 0.7482 | 0.8372 | 0.7113 | 9.99535e-07 | 30 |
| 0.6835 | 0.7506 | 0.8362 | 0.7113 | 9.99504e-07 | 31 |
| 0.6702 | 0.7812 | 0.8365 | 0.6901 | 9.99472e-07 | 32 |
| 0.6623 | 0.7812 | 0.8323 | 0.7113 | 9.994391e-07 | 33 |
| 0.6565 | 0.7553 | 0.8298 | 0.6972 | 9.994051e-07 | 34 |
| 0.6452 | 0.7718 | 0.8291 | 0.6901 | 9.993701e-07 | 35 |
| 0.6396 | 0.7718 | 0.8284 | 0.7113 | 9.993341e-07 | 36 |
| 0.6299 | 0.7765 | 0.8262 | 0.6831 | 9.992972e-07 | 37 |
| 0.6230 | 0.7953 | 0.8364 | 0.7113 | 9.992592e-07 | 38 |
| 0.6095 | 0.7741 | 0.8233 | 0.7113 | 9.992202e-07 | 39 |
| 0.6193 | 0.7718 | 0.8206 | 0.7113 | 9.991802e-07 | 40 |
| 0.6008 | 0.7859 | 0.8260 | 0.7254 | 9.991393e-07 | 41 |
| 0.5967 | 0.7859 | 0.8199 | 0.7254 | 9.990973e-07 | 42 |
| 0.5883 | 0.7835 | 0.8189 | 0.7183 | 9.990544e-07 | 43 |
| 0.5751 | 0.8071 | 0.8279 | 0.7324 | 9.990104e-07 | 44 |
| 0.5709 | 0.8000 | 0.8204 | 0.7324 | 9.989654e-07 | 45 |
| 0.5697 | 0.8047 | 0.8229 | 0.7254 | 9.989195e-07 | 46 |
| 0.5580 | 0.8094 | 0.8152 | 0.7254 | 9.988726e-07 | 47 |
### Framework versions
- Transformers 4.30.0.dev0
- TensorFlow 2.9.1
- Datasets 2.8.0
- Tokenizers 0.13.2
| 5,978 | [
[
-0.05169677734375,
-0.0389404296875,
0.0190582275390625,
0.0021305084228515625,
-0.000576019287109375,
0.0011262893676757812,
0.0023555755615234375,
-0.0009593963623046875,
0.05328369140625,
0.0211181640625,
-0.04791259765625,
-0.048248291015625,
-0.043670654296... |
guoluo/Bert_class_1e-06_50epoch_loss | 2023-05-11T21:03:17.000Z | [
"transformers",
"tf",
"bert",
"text-classification",
"generated_from_keras_callback",
"endpoints_compatible",
"region:us"
] | text-classification | guoluo | null | null | guoluo/Bert_class_1e-06_50epoch_loss | 0 | 2 | transformers | 2023-05-11T21:02:36 | ---
tags:
- generated_from_keras_callback
model-index:
- name: Bert_class_1e-06_50epoch_loss
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# Bert_class_1e-06_50epoch_loss
This model is a fine-tuned version of [guoluo/Bert_1.5e_07](https://huggingface.co/guoluo/Bert_1.5e_07) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.5486
- Train Accuracy: 0.7929
- Validation Loss: 0.8168
- Validation Accuracy: 0.7324
- Train Lr: 9.987757e-07
- Epoch: 49
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'learning_rate': 9.987757e-07, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False}
- training_precision: float32
### Training results
| Train Loss | Train Accuracy | Validation Loss | Validation Accuracy | Train Lr | Epoch |
|:----------:|:--------------:|:---------------:|:-------------------:|:------------:|:-----:|
| 1.2823 | 0.4776 | 1.0993 | 0.6761 | 1e-06 | 0 |
| 1.0339 | 0.6776 | 0.9839 | 0.6761 | 9.99999e-07 | 1 |
| 0.9705 | 0.6776 | 0.9658 | 0.6761 | 9.999969e-07 | 2 |
| 0.9486 | 0.6776 | 0.9590 | 0.6761 | 9.99994e-07 | 3 |
| 0.9369 | 0.6776 | 0.9544 | 0.6761 | 9.9999e-07 | 4 |
| 0.9332 | 0.6776 | 0.9470 | 0.6761 | 9.99985e-07 | 5 |
| 0.9205 | 0.6776 | 0.9421 | 0.6761 | 9.99979e-07 | 6 |
| 0.9135 | 0.6776 | 0.9374 | 0.6761 | 9.999719e-07 | 7 |
| 0.9113 | 0.6776 | 0.9340 | 0.6761 | 9.99964e-07 | 8 |
| 0.9005 | 0.6776 | 0.9294 | 0.6761 | 9.99955e-07 | 9 |
| 0.8896 | 0.6776 | 0.9242 | 0.6761 | 9.99945e-07 | 10 |
| 0.8746 | 0.6800 | 0.9191 | 0.6761 | 9.99934e-07 | 11 |
| 0.8649 | 0.6824 | 0.9143 | 0.6761 | 9.999219e-07 | 12 |
| 0.8621 | 0.6847 | 0.9095 | 0.6761 | 9.999089e-07 | 13 |
| 0.8506 | 0.6847 | 0.9019 | 0.6761 | 9.99895e-07 | 14 |
| 0.8434 | 0.6800 | 0.8943 | 0.6761 | 9.9988e-07 | 15 |
| 0.8286 | 0.6871 | 0.8885 | 0.6761 | 9.998639e-07 | 16 |
| 0.8239 | 0.6824 | 0.8814 | 0.6761 | 9.998469e-07 | 17 |
| 0.8181 | 0.6894 | 0.8785 | 0.6761 | 9.998289e-07 | 18 |
| 0.7962 | 0.6894 | 0.8731 | 0.6690 | 9.998099e-07 | 19 |
| 0.7908 | 0.7012 | 0.8671 | 0.6690 | 9.997899e-07 | 20 |
| 0.7640 | 0.6988 | 0.8641 | 0.6761 | 9.997689e-07 | 21 |
| 0.7644 | 0.7035 | 0.8590 | 0.6831 | 9.997469e-07 | 22 |
| 0.7512 | 0.7200 | 0.8558 | 0.6831 | 9.99724e-07 | 23 |
| 0.7394 | 0.7200 | 0.8527 | 0.6972 | 9.997e-07 | 24 |
| 0.7366 | 0.7271 | 0.8501 | 0.7113 | 9.99675e-07 | 25 |
| 0.7293 | 0.7247 | 0.8471 | 0.7042 | 9.996489e-07 | 26 |
| 0.7189 | 0.7529 | 0.8479 | 0.7113 | 9.99622e-07 | 27 |
| 0.7077 | 0.7341 | 0.8411 | 0.7183 | 9.99594e-07 | 28 |
| 0.6965 | 0.7671 | 0.8409 | 0.7183 | 9.99565e-07 | 29 |
| 0.6838 | 0.7482 | 0.8372 | 0.7113 | 9.99535e-07 | 30 |
| 0.6835 | 0.7506 | 0.8362 | 0.7113 | 9.99504e-07 | 31 |
| 0.6702 | 0.7812 | 0.8365 | 0.6901 | 9.99472e-07 | 32 |
| 0.6623 | 0.7812 | 0.8323 | 0.7113 | 9.994391e-07 | 33 |
| 0.6565 | 0.7553 | 0.8298 | 0.6972 | 9.994051e-07 | 34 |
| 0.6452 | 0.7718 | 0.8291 | 0.6901 | 9.993701e-07 | 35 |
| 0.6396 | 0.7718 | 0.8284 | 0.7113 | 9.993341e-07 | 36 |
| 0.6299 | 0.7765 | 0.8262 | 0.6831 | 9.992972e-07 | 37 |
| 0.6230 | 0.7953 | 0.8364 | 0.7113 | 9.992592e-07 | 38 |
| 0.6095 | 0.7741 | 0.8233 | 0.7113 | 9.992202e-07 | 39 |
| 0.6193 | 0.7718 | 0.8206 | 0.7113 | 9.991802e-07 | 40 |
| 0.6008 | 0.7859 | 0.8260 | 0.7254 | 9.991393e-07 | 41 |
| 0.5967 | 0.7859 | 0.8199 | 0.7254 | 9.990973e-07 | 42 |
| 0.5883 | 0.7835 | 0.8189 | 0.7183 | 9.990544e-07 | 43 |
| 0.5751 | 0.8071 | 0.8279 | 0.7324 | 9.990104e-07 | 44 |
| 0.5709 | 0.8000 | 0.8204 | 0.7324 | 9.989654e-07 | 45 |
| 0.5697 | 0.8047 | 0.8229 | 0.7254 | 9.989195e-07 | 46 |
| 0.5580 | 0.8094 | 0.8152 | 0.7254 | 9.988726e-07 | 47 |
| 0.5595 | 0.8071 | 0.8275 | 0.7324 | 9.988246e-07 | 48 |
| 0.5486 | 0.7929 | 0.8168 | 0.7324 | 9.987757e-07 | 49 |
### Framework versions
- Transformers 4.30.0.dev0
- TensorFlow 2.9.1
- Datasets 2.8.0
- Tokenizers 0.13.2
| 6,168 | [
[
-0.051544189453125,
-0.038482666015625,
0.019500732421875,
0.002544403076171875,
-0.0001596212387084961,
0.0017147064208984375,
0.0024280548095703125,
-0.0003414154052734375,
0.053924560546875,
0.021240234375,
-0.04779052734375,
-0.048248291015625,
-0.0434265136... |
guoluo/Bert_class_1e-06_266epoch | 2023-05-11T22:38:35.000Z | [
"transformers",
"tf",
"bert",
"text-classification",
"generated_from_keras_callback",
"endpoints_compatible",
"region:us"
] | text-classification | guoluo | null | null | guoluo/Bert_class_1e-06_266epoch | 0 | 2 | transformers | 2023-05-11T22:37:55 | ---
tags:
- generated_from_keras_callback
model-index:
- name: Bert_class_1e-06_266epoch
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# Bert_class_1e-06_266epoch
This model is a fine-tuned version of [guoluo/Bert_1.5e_07](https://huggingface.co/guoluo/Bert_1.5e_07) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.0213
- Train Accuracy: 0.9976
- Validation Loss: 1.4092
- Validation Accuracy: 0.7254
- Train Lr: 9.653716e-07
- Epoch: 265
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'learning_rate': 9.653716e-07, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False}
- training_precision: float32
### Training results
| Train Loss | Train Accuracy | Validation Loss | Validation Accuracy | Train Lr | Epoch |
|:----------:|:--------------:|:---------------:|:-------------------:|:------------:|:-----:|
| 1.2823 | 0.4776 | 1.0993 | 0.6761 | 1e-06 | 0 |
| 1.0339 | 0.6776 | 0.9839 | 0.6761 | 9.99999e-07 | 1 |
| 0.9705 | 0.6776 | 0.9658 | 0.6761 | 9.999969e-07 | 2 |
| 0.9486 | 0.6776 | 0.9590 | 0.6761 | 9.99994e-07 | 3 |
| 0.9369 | 0.6776 | 0.9544 | 0.6761 | 9.9999e-07 | 4 |
| 0.9332 | 0.6776 | 0.9470 | 0.6761 | 9.99985e-07 | 5 |
| 0.9205 | 0.6776 | 0.9421 | 0.6761 | 9.99979e-07 | 6 |
| 0.9135 | 0.6776 | 0.9374 | 0.6761 | 9.999719e-07 | 7 |
| 0.9113 | 0.6776 | 0.9340 | 0.6761 | 9.99964e-07 | 8 |
| 0.9005 | 0.6776 | 0.9294 | 0.6761 | 9.99955e-07 | 9 |
| 0.8896 | 0.6776 | 0.9242 | 0.6761 | 9.99945e-07 | 10 |
| 0.8746 | 0.6800 | 0.9191 | 0.6761 | 9.99934e-07 | 11 |
| 0.8649 | 0.6824 | 0.9143 | 0.6761 | 9.999219e-07 | 12 |
| 0.8621 | 0.6847 | 0.9095 | 0.6761 | 9.999089e-07 | 13 |
| 0.8506 | 0.6847 | 0.9019 | 0.6761 | 9.99895e-07 | 14 |
| 0.8434 | 0.6800 | 0.8943 | 0.6761 | 9.9988e-07 | 15 |
| 0.8286 | 0.6871 | 0.8885 | 0.6761 | 9.998639e-07 | 16 |
| 0.8239 | 0.6824 | 0.8814 | 0.6761 | 9.998469e-07 | 17 |
| 0.8181 | 0.6894 | 0.8785 | 0.6761 | 9.998289e-07 | 18 |
| 0.7962 | 0.6894 | 0.8731 | 0.6690 | 9.998099e-07 | 19 |
| 0.7908 | 0.7012 | 0.8671 | 0.6690 | 9.997899e-07 | 20 |
| 0.7640 | 0.6988 | 0.8641 | 0.6761 | 9.997689e-07 | 21 |
| 0.7644 | 0.7035 | 0.8590 | 0.6831 | 9.997469e-07 | 22 |
| 0.7512 | 0.7200 | 0.8558 | 0.6831 | 9.99724e-07 | 23 |
| 0.7394 | 0.7200 | 0.8527 | 0.6972 | 9.997e-07 | 24 |
| 0.7366 | 0.7271 | 0.8501 | 0.7113 | 9.99675e-07 | 25 |
| 0.7293 | 0.7247 | 0.8471 | 0.7042 | 9.996489e-07 | 26 |
| 0.7189 | 0.7529 | 0.8479 | 0.7113 | 9.99622e-07 | 27 |
| 0.7077 | 0.7341 | 0.8411 | 0.7183 | 9.99594e-07 | 28 |
| 0.6965 | 0.7671 | 0.8409 | 0.7183 | 9.99565e-07 | 29 |
| 0.6838 | 0.7482 | 0.8372 | 0.7113 | 9.99535e-07 | 30 |
| 0.6835 | 0.7506 | 0.8362 | 0.7113 | 9.99504e-07 | 31 |
| 0.6702 | 0.7812 | 0.8365 | 0.6901 | 9.99472e-07 | 32 |
| 0.6623 | 0.7812 | 0.8323 | 0.7113 | 9.994391e-07 | 33 |
| 0.6565 | 0.7553 | 0.8298 | 0.6972 | 9.994051e-07 | 34 |
| 0.6452 | 0.7718 | 0.8291 | 0.6901 | 9.993701e-07 | 35 |
| 0.6396 | 0.7718 | 0.8285 | 0.7113 | 9.993341e-07 | 36 |
| 0.6299 | 0.7765 | 0.8262 | 0.6831 | 9.992972e-07 | 37 |
| 0.6230 | 0.7953 | 0.8364 | 0.7113 | 9.992592e-07 | 38 |
| 0.6095 | 0.7741 | 0.8233 | 0.7113 | 9.992202e-07 | 39 |
| 0.6193 | 0.7718 | 0.8206 | 0.7113 | 9.991802e-07 | 40 |
| 0.6008 | 0.7859 | 0.8260 | 0.7254 | 9.991393e-07 | 41 |
| 0.5967 | 0.7859 | 0.8199 | 0.7254 | 9.990973e-07 | 42 |
| 0.5883 | 0.7835 | 0.8189 | 0.7183 | 9.990544e-07 | 43 |
| 0.5751 | 0.8071 | 0.8279 | 0.7324 | 9.990104e-07 | 44 |
| 0.5709 | 0.8000 | 0.8204 | 0.7324 | 9.989654e-07 | 45 |
| 0.5697 | 0.8047 | 0.8229 | 0.7254 | 9.989195e-07 | 46 |
| 0.5580 | 0.8094 | 0.8152 | 0.7254 | 9.988726e-07 | 47 |
| 0.5595 | 0.8071 | 0.8275 | 0.7324 | 9.988246e-07 | 48 |
| 0.5486 | 0.7929 | 0.8168 | 0.7324 | 9.987757e-07 | 49 |
| 0.5400 | 0.8094 | 0.8239 | 0.7254 | 9.987258e-07 | 50 |
| 0.5352 | 0.8071 | 0.8190 | 0.7183 | 9.986749e-07 | 51 |
| 0.5141 | 0.8235 | 0.8171 | 0.7183 | 9.986229e-07 | 52 |
| 0.5324 | 0.8024 | 0.8191 | 0.7183 | 9.985699e-07 | 53 |
| 0.5123 | 0.8024 | 0.8279 | 0.7254 | 9.98516e-07 | 54 |
| 0.5151 | 0.8165 | 0.8213 | 0.7113 | 9.984611e-07 | 55 |
| 0.4986 | 0.8118 | 0.8176 | 0.7183 | 9.984052e-07 | 56 |
| 0.4925 | 0.8259 | 0.8208 | 0.7113 | 9.983482e-07 | 57 |
| 0.4848 | 0.8188 | 0.8182 | 0.7042 | 9.982904e-07 | 58 |
| 0.4952 | 0.8282 | 0.8214 | 0.7113 | 9.982315e-07 | 59 |
| 0.4837 | 0.8329 | 0.8192 | 0.7113 | 9.981716e-07 | 60 |
| 0.4513 | 0.8518 | 0.8224 | 0.7183 | 9.981106e-07 | 61 |
| 0.4628 | 0.8376 | 0.8227 | 0.7183 | 9.980488e-07 | 62 |
| 0.4633 | 0.8447 | 0.8246 | 0.7183 | 9.979859e-07 | 63 |
| 0.4472 | 0.8447 | 0.8256 | 0.7113 | 9.97922e-07 | 64 |
| 0.4529 | 0.8306 | 0.8285 | 0.7183 | 9.978571e-07 | 65 |
| 0.4579 | 0.8329 | 0.8331 | 0.7042 | 9.977913e-07 | 66 |
| 0.4326 | 0.8376 | 0.8278 | 0.7113 | 9.977244e-07 | 67 |
| 0.4255 | 0.8447 | 0.8265 | 0.7113 | 9.976566e-07 | 68 |
| 0.4322 | 0.8494 | 0.8293 | 0.7042 | 9.975878e-07 | 69 |
| 0.4189 | 0.8424 | 0.8382 | 0.7042 | 9.97518e-07 | 70 |
| 0.4236 | 0.8494 | 0.8302 | 0.7113 | 9.974472e-07 | 71 |
| 0.4025 | 0.8494 | 0.8364 | 0.7042 | 9.973753e-07 | 72 |
| 0.4225 | 0.8659 | 0.8370 | 0.7113 | 9.973025e-07 | 73 |
| 0.4027 | 0.8541 | 0.8377 | 0.7042 | 9.972288e-07 | 74 |
| 0.4090 | 0.8588 | 0.8381 | 0.7113 | 9.97154e-07 | 75 |
| 0.3887 | 0.8682 | 0.8378 | 0.7042 | 9.970781e-07 | 76 |
| 0.4022 | 0.8706 | 0.8406 | 0.7042 | 9.970014e-07 | 77 |
| 0.3867 | 0.8682 | 0.8457 | 0.7113 | 9.969236e-07 | 78 |
| 0.3689 | 0.8706 | 0.8460 | 0.7113 | 9.968448e-07 | 79 |
| 0.3728 | 0.8729 | 0.8527 | 0.7042 | 9.967652e-07 | 80 |
| 0.3754 | 0.8706 | 0.8525 | 0.7042 | 9.966844e-07 | 81 |
| 0.3580 | 0.8871 | 0.8531 | 0.7113 | 9.966027e-07 | 82 |
| 0.3718 | 0.8659 | 0.8593 | 0.7042 | 9.965199e-07 | 83 |
| 0.3535 | 0.8800 | 0.8593 | 0.7324 | 9.964363e-07 | 84 |
| 0.3342 | 0.8824 | 0.8704 | 0.6972 | 9.963516e-07 | 85 |
| 0.3341 | 0.8918 | 0.8630 | 0.7324 | 9.962658e-07 | 86 |
| 0.3371 | 0.8776 | 0.8698 | 0.7042 | 9.961792e-07 | 87 |
| 0.3338 | 0.8847 | 0.8689 | 0.7042 | 9.960916e-07 | 88 |
| 0.3295 | 0.8776 | 0.8753 | 0.6972 | 9.960029e-07 | 89 |
| 0.3259 | 0.8847 | 0.8696 | 0.7183 | 9.959133e-07 | 90 |
| 0.3290 | 0.8776 | 0.8726 | 0.7183 | 9.958227e-07 | 91 |
| 0.3117 | 0.8988 | 0.8798 | 0.7324 | 9.95731e-07 | 92 |
| 0.3075 | 0.8965 | 0.8836 | 0.7254 | 9.956385e-07 | 93 |
| 0.2905 | 0.9129 | 0.8868 | 0.7183 | 9.95545e-07 | 94 |
| 0.2979 | 0.9153 | 0.8888 | 0.7183 | 9.954504e-07 | 95 |
| 0.3031 | 0.8800 | 0.8956 | 0.7324 | 9.953548e-07 | 96 |
| 0.2883 | 0.9035 | 0.8984 | 0.7042 | 9.952582e-07 | 97 |
| 0.2835 | 0.9106 | 0.8969 | 0.7254 | 9.951607e-07 | 98 |
| 0.2803 | 0.9059 | 0.8998 | 0.7254 | 9.950621e-07 | 99 |
| 0.2812 | 0.9176 | 0.9034 | 0.7254 | 9.949626e-07 | 100 |
| 0.2714 | 0.9153 | 0.9028 | 0.7183 | 9.948621e-07 | 101 |
| 0.2905 | 0.9059 | 0.9144 | 0.7254 | 9.947606e-07 | 102 |
| 0.2631 | 0.9224 | 0.9143 | 0.6972 | 9.946582e-07 | 103 |
| 0.2679 | 0.9176 | 0.9180 | 0.7254 | 9.945547e-07 | 104 |
| 0.2583 | 0.9224 | 0.9206 | 0.7042 | 9.944504e-07 | 105 |
| 0.2613 | 0.9200 | 0.9286 | 0.7254 | 9.94345e-07 | 106 |
| 0.2669 | 0.9012 | 0.9237 | 0.7254 | 9.942386e-07 | 107 |
| 0.2571 | 0.9153 | 0.9351 | 0.7254 | 9.941313e-07 | 108 |
| 0.2570 | 0.9106 | 0.9306 | 0.7324 | 9.940229e-07 | 109 |
| 0.2344 | 0.9200 | 0.9396 | 0.7183 | 9.939135e-07 | 110 |
| 0.2359 | 0.9271 | 0.9369 | 0.7394 | 9.938033e-07 | 111 |
| 0.2395 | 0.9271 | 0.9522 | 0.7042 | 9.93692e-07 | 112 |
| 0.2408 | 0.9247 | 0.9509 | 0.7183 | 9.935796e-07 | 113 |
| 0.2330 | 0.9294 | 0.9561 | 0.7042 | 9.934664e-07 | 114 |
| 0.2247 | 0.9271 | 0.9539 | 0.7183 | 9.933522e-07 | 115 |
| 0.2192 | 0.9318 | 0.9705 | 0.7042 | 9.93237e-07 | 116 |
| 0.2173 | 0.9341 | 0.9621 | 0.7254 | 9.931208e-07 | 117 |
| 0.2138 | 0.9200 | 0.9679 | 0.7183 | 9.930036e-07 | 118 |
| 0.2239 | 0.9176 | 0.9733 | 0.6972 | 9.928855e-07 | 119 |
| 0.2188 | 0.9341 | 0.9838 | 0.7042 | 9.927663e-07 | 120 |
| 0.2116 | 0.9341 | 0.9764 | 0.7324 | 9.926462e-07 | 121 |
| 0.2061 | 0.9200 | 0.9840 | 0.7183 | 9.925251e-07 | 122 |
| 0.2061 | 0.9435 | 0.9798 | 0.7254 | 9.92403e-07 | 123 |
| 0.2049 | 0.9388 | 1.0056 | 0.7042 | 9.9228e-07 | 124 |
| 0.1947 | 0.9459 | 0.9898 | 0.7254 | 9.92156e-07 | 125 |
| 0.1990 | 0.9365 | 0.9935 | 0.6972 | 9.92031e-07 | 126 |
| 0.1945 | 0.9506 | 0.9997 | 0.7113 | 9.91905e-07 | 127 |
| 0.1955 | 0.9365 | 0.9972 | 0.7254 | 9.91778e-07 | 128 |
| 0.1845 | 0.9459 | 1.0044 | 0.7254 | 9.916502e-07 | 129 |
| 0.1722 | 0.9388 | 1.0057 | 0.7183 | 9.915212e-07 | 130 |
| 0.1693 | 0.9576 | 1.0118 | 0.7113 | 9.913914e-07 | 131 |
| 0.1837 | 0.9318 | 1.0126 | 0.7113 | 9.912605e-07 | 132 |
| 0.1894 | 0.9412 | 1.0254 | 0.6972 | 9.911287e-07 | 133 |
| 0.1702 | 0.9506 | 1.0156 | 0.7254 | 9.909959e-07 | 134 |
| 0.1697 | 0.9576 | 1.0184 | 0.7183 | 9.908621e-07 | 135 |
| 0.1694 | 0.9459 | 1.0179 | 0.7394 | 9.907274e-07 | 136 |
| 0.1587 | 0.9553 | 1.0255 | 0.7183 | 9.905916e-07 | 137 |
| 0.1590 | 0.9576 | 1.0308 | 0.7324 | 9.90455e-07 | 138 |
| 0.1670 | 0.9576 | 1.0376 | 0.7254 | 9.903173e-07 | 139 |
| 0.1606 | 0.9482 | 1.0405 | 0.7254 | 9.901787e-07 | 140 |
| 0.1605 | 0.9576 | 1.0468 | 0.7324 | 9.900391e-07 | 141 |
| 0.1476 | 0.9624 | 1.0470 | 0.7183 | 9.898986e-07 | 142 |
| 0.1493 | 0.9553 | 1.0530 | 0.7183 | 9.89757e-07 | 143 |
| 0.1292 | 0.9718 | 1.0573 | 0.7183 | 9.896146e-07 | 144 |
| 0.1393 | 0.9694 | 1.0655 | 0.7183 | 9.894711e-07 | 145 |
| 0.1458 | 0.9529 | 1.0627 | 0.7324 | 9.893266e-07 | 146 |
| 0.1319 | 0.9694 | 1.0809 | 0.7042 | 9.891812e-07 | 147 |
| 0.1358 | 0.9624 | 1.0716 | 0.7254 | 9.890348e-07 | 148 |
| 0.1514 | 0.9624 | 1.0863 | 0.7113 | 9.888875e-07 | 149 |
| 0.1384 | 0.9624 | 1.0777 | 0.7324 | 9.887391e-07 | 150 |
| 0.1286 | 0.9694 | 1.0907 | 0.7113 | 9.885898e-07 | 151 |
| 0.1316 | 0.9694 | 1.0914 | 0.7183 | 9.884395e-07 | 152 |
| 0.1310 | 0.9671 | 1.0933 | 0.7183 | 9.882883e-07 | 153 |
| 0.1331 | 0.9647 | 1.0940 | 0.7254 | 9.881361e-07 | 154 |
| 0.1225 | 0.9718 | 1.0998 | 0.7183 | 9.87983e-07 | 155 |
| 0.1176 | 0.9718 | 1.1027 | 0.7183 | 9.878289e-07 | 156 |
| 0.1205 | 0.9671 | 1.1042 | 0.7183 | 9.876738e-07 | 157 |
| 0.1295 | 0.9647 | 1.1100 | 0.7183 | 9.875179e-07 | 158 |
| 0.1097 | 0.9718 | 1.1243 | 0.7183 | 9.873609e-07 | 159 |
| 0.1072 | 0.9812 | 1.1196 | 0.7183 | 9.87203e-07 | 160 |
| 0.1063 | 0.9788 | 1.1262 | 0.7254 | 9.87044e-07 | 161 |
| 0.1208 | 0.9647 | 1.1248 | 0.7042 | 9.868842e-07 | 162 |
| 0.1120 | 0.9694 | 1.1296 | 0.7183 | 9.867233e-07 | 163 |
| 0.1123 | 0.9694 | 1.1367 | 0.7183 | 9.865615e-07 | 164 |
| 0.0972 | 0.9882 | 1.1382 | 0.7183 | 9.863987e-07 | 165 |
| 0.1175 | 0.9647 | 1.1515 | 0.7254 | 9.86235e-07 | 166 |
| 0.1136 | 0.9741 | 1.1551 | 0.7183 | 9.860704e-07 | 167 |
| 0.0929 | 0.9859 | 1.1558 | 0.7183 | 9.859048e-07 | 168 |
| 0.0895 | 0.9812 | 1.1637 | 0.7183 | 9.857382e-07 | 169 |
| 0.1013 | 0.9718 | 1.1599 | 0.7183 | 9.855706e-07 | 170 |
| 0.1026 | 0.9718 | 1.1607 | 0.7183 | 9.854022e-07 | 171 |
| 0.0983 | 0.9788 | 1.1601 | 0.7254 | 9.852326e-07 | 172 |
| 0.0809 | 0.9882 | 1.1673 | 0.7183 | 9.850622e-07 | 173 |
| 0.0923 | 0.9765 | 1.1763 | 0.7254 | 9.848909e-07 | 174 |
| 0.0840 | 0.9835 | 1.1775 | 0.7254 | 9.847186e-07 | 175 |
| 0.0887 | 0.9812 | 1.1881 | 0.7254 | 9.845453e-07 | 176 |
| 0.0922 | 0.9718 | 1.1893 | 0.7254 | 9.84371e-07 | 177 |
| 0.0794 | 0.9882 | 1.1944 | 0.7254 | 9.841958e-07 | 178 |
| 0.0826 | 0.9835 | 1.2019 | 0.7113 | 9.840197e-07 | 179 |
| 0.0725 | 0.9929 | 1.1993 | 0.7254 | 9.838426e-07 | 180 |
| 0.0727 | 0.9929 | 1.2000 | 0.7113 | 9.836646e-07 | 181 |
| 0.0759 | 0.9859 | 1.2061 | 0.7254 | 9.834856e-07 | 182 |
| 0.0945 | 0.9788 | 1.2160 | 0.7113 | 9.833057e-07 | 183 |
| 0.0796 | 0.9812 | 1.2021 | 0.7254 | 9.831248e-07 | 184 |
| 0.0792 | 0.9835 | 1.2152 | 0.7183 | 9.829429e-07 | 185 |
| 0.0803 | 0.9859 | 1.2169 | 0.7183 | 9.827601e-07 | 186 |
| 0.0835 | 0.9812 | 1.2237 | 0.7183 | 9.825764e-07 | 187 |
| 0.0680 | 0.9859 | 1.2224 | 0.7113 | 9.823916e-07 | 188 |
| 0.0898 | 0.9812 | 1.2188 | 0.7183 | 9.82206e-07 | 189 |
| 0.0780 | 0.9788 | 1.2196 | 0.7113 | 9.820194e-07 | 190 |
| 0.0759 | 0.9835 | 1.2473 | 0.6901 | 9.818318e-07 | 191 |
| 0.0915 | 0.9694 | 1.2324 | 0.7042 | 9.816433e-07 | 192 |
| 0.0767 | 0.9859 | 1.2285 | 0.7042 | 9.814539e-07 | 193 |
| 0.0663 | 0.9906 | 1.2300 | 0.7113 | 9.812636e-07 | 194 |
| 0.0795 | 0.9835 | 1.2481 | 0.7042 | 9.810723e-07 | 195 |
| 0.0686 | 0.9882 | 1.2451 | 0.7042 | 9.8088e-07 | 196 |
| 0.0702 | 0.9835 | 1.2363 | 0.7113 | 9.806869e-07 | 197 |
| 0.0751 | 0.9812 | 1.2419 | 0.7113 | 9.804927e-07 | 198 |
| 0.0680 | 0.9859 | 1.2398 | 0.7113 | 9.802976e-07 | 199 |
| 0.0543 | 0.9882 | 1.2477 | 0.7042 | 9.801016e-07 | 200 |
| 0.0666 | 0.9835 | 1.2703 | 0.6972 | 9.799047e-07 | 201 |
| 0.0704 | 0.9859 | 1.2476 | 0.7042 | 9.797068e-07 | 202 |
| 0.0634 | 0.9859 | 1.2609 | 0.7042 | 9.79508e-07 | 203 |
| 0.0650 | 0.9882 | 1.2557 | 0.7113 | 9.793082e-07 | 204 |
| 0.0533 | 0.9976 | 1.2743 | 0.7113 | 9.791074e-07 | 205 |
| 0.0585 | 0.9882 | 1.2753 | 0.7113 | 9.789057e-07 | 206 |
| 0.0596 | 0.9929 | 1.2881 | 0.7042 | 9.787032e-07 | 207 |
| 0.0593 | 0.9953 | 1.2948 | 0.7042 | 9.784997e-07 | 208 |
| 0.0625 | 0.9859 | 1.2883 | 0.7042 | 9.782952e-07 | 209 |
| 0.0556 | 0.9929 | 1.2802 | 0.7113 | 9.780898e-07 | 210 |
| 0.0615 | 0.9812 | 1.2972 | 0.7113 | 9.778835e-07 | 211 |
| 0.0621 | 0.9859 | 1.3030 | 0.6972 | 9.776762e-07 | 212 |
| 0.0559 | 0.9882 | 1.2857 | 0.7183 | 9.774681e-07 | 213 |
| 0.0635 | 0.9859 | 1.3151 | 0.7042 | 9.772589e-07 | 214 |
| 0.0544 | 0.9882 | 1.2969 | 0.7113 | 9.770488e-07 | 215 |
| 0.0477 | 0.9976 | 1.2981 | 0.7113 | 9.768378e-07 | 216 |
| 0.0554 | 0.9882 | 1.3156 | 0.7113 | 9.766259e-07 | 217 |
| 0.0548 | 0.9906 | 1.3094 | 0.7113 | 9.76413e-07 | 218 |
| 0.0470 | 0.9976 | 1.3185 | 0.7042 | 9.761993e-07 | 219 |
| 0.0489 | 0.9953 | 1.3197 | 0.7042 | 9.759846e-07 | 220 |
| 0.0436 | 0.9976 | 1.3024 | 0.7113 | 9.757689e-07 | 221 |
| 0.0456 | 0.9953 | 1.3061 | 0.7113 | 9.755523e-07 | 222 |
| 0.0417 | 0.9976 | 1.3189 | 0.7042 | 9.753348e-07 | 223 |
| 0.0416 | 0.9953 | 1.3220 | 0.7042 | 9.751164e-07 | 224 |
| 0.0369 | 1.0 | 1.3211 | 0.7113 | 9.748971e-07 | 225 |
| 0.0570 | 0.9859 | 1.3274 | 0.7042 | 9.746768e-07 | 226 |
| 0.0416 | 0.9929 | 1.3409 | 0.6901 | 9.744556e-07 | 227 |
| 0.0314 | 1.0 | 1.3376 | 0.7042 | 9.742334e-07 | 228 |
| 0.0421 | 0.9929 | 1.3242 | 0.7183 | 9.740104e-07 | 229 |
| 0.0398 | 0.9976 | 1.3331 | 0.7042 | 9.737864e-07 | 230 |
| 0.0483 | 0.9882 | 1.3431 | 0.7042 | 9.735616e-07 | 231 |
| 0.0356 | 0.9953 | 1.3526 | 0.7042 | 9.733358e-07 | 232 |
| 0.0392 | 0.9953 | 1.3500 | 0.7042 | 9.731091e-07 | 233 |
| 0.0413 | 0.9953 | 1.3659 | 0.6972 | 9.728815e-07 | 234 |
| 0.0371 | 0.9929 | 1.3473 | 0.7042 | 9.726529e-07 | 235 |
| 0.0383 | 0.9929 | 1.3689 | 0.6972 | 9.724233e-07 | 236 |
| 0.0452 | 0.9953 | 1.3552 | 0.7042 | 9.721929e-07 | 237 |
| 0.0408 | 0.9953 | 1.3430 | 0.7113 | 9.719615e-07 | 238 |
| 0.0507 | 0.9906 | 1.3656 | 0.7042 | 9.717293e-07 | 239 |
| 0.0437 | 0.9953 | 1.3735 | 0.6972 | 9.714961e-07 | 240 |
| 0.0368 | 0.9929 | 1.3713 | 0.7113 | 9.71262e-07 | 241 |
| 0.0381 | 0.9976 | 1.3793 | 0.6972 | 9.71027e-07 | 242 |
| 0.0369 | 0.9953 | 1.3835 | 0.7113 | 9.707911e-07 | 243 |
| 0.0343 | 0.9976 | 1.3778 | 0.7183 | 9.705543e-07 | 244 |
| 0.0321 | 0.9929 | 1.3790 | 0.7113 | 9.703166e-07 | 245 |
| 0.0367 | 0.9953 | 1.3830 | 0.7113 | 9.70078e-07 | 246 |
| 0.0302 | 0.9953 | 1.3828 | 0.7113 | 9.698384e-07 | 247 |
| 0.0333 | 0.9929 | 1.3821 | 0.7113 | 9.69598e-07 | 248 |
| 0.0386 | 0.9929 | 1.3962 | 0.7113 | 9.693566e-07 | 249 |
| 0.0335 | 0.9929 | 1.4009 | 0.7113 | 9.691144e-07 | 250 |
| 0.0481 | 0.9835 | 1.3924 | 0.7113 | 9.688712e-07 | 251 |
| 0.0361 | 0.9953 | 1.3923 | 0.7113 | 9.686271e-07 | 252 |
| 0.0343 | 0.9906 | 1.4150 | 0.6972 | 9.683821e-07 | 253 |
| 0.0429 | 0.9906 | 1.3859 | 0.7254 | 9.681362e-07 | 254 |
| 0.0353 | 0.9906 | 1.4019 | 0.7113 | 9.678894e-07 | 255 |
| 0.0317 | 0.9929 | 1.4072 | 0.7113 | 9.676417e-07 | 256 |
| 0.0231 | 1.0 | 1.4038 | 0.7113 | 9.67393e-07 | 257 |
| 0.0240 | 1.0 | 1.4172 | 0.7183 | 9.671435e-07 | 258 |
| 0.0358 | 0.9882 | 1.4316 | 0.7042 | 9.66893e-07 | 259 |
| 0.0381 | 0.9906 | 1.4047 | 0.7254 | 9.666417e-07 | 260 |
| 0.0311 | 0.9929 | 1.4056 | 0.7113 | 9.663894e-07 | 261 |
| 0.0274 | 0.9976 | 1.4240 | 0.7113 | 9.661362e-07 | 262 |
| 0.0305 | 0.9976 | 1.4322 | 0.7113 | 9.658822e-07 | 263 |
| 0.0322 | 0.9929 | 1.4127 | 0.7183 | 9.656274e-07 | 264 |
| 0.0213 | 0.9976 | 1.4092 | 0.7254 | 9.653716e-07 | 265 |
### Framework versions
- Transformers 4.30.0.dev0
- TensorFlow 2.9.1
- Datasets 2.8.0
- Tokenizers 0.13.2
| 26,681 | [
[
-0.049774169921875,
-0.034454345703125,
0.0245819091796875,
0.0037403106689453125,
-0.0005931854248046875,
0.004291534423828125,
0.0029277801513671875,
0.002544403076171875,
0.056121826171875,
0.0244903564453125,
-0.0452880859375,
-0.0457763671875,
-0.0408020019... |
hellomattnewman/msba-adrida | 2023-05-12T00:08:41.000Z | [
"transformers",
"pytorch",
"bert",
"text-classification",
"license:mit",
"endpoints_compatible",
"region:us"
] | text-classification | hellomattnewman | null | null | hellomattnewman/msba-adrida | 0 | 2 | transformers | 2023-05-12T00:01:50 | ---
license: "mit"
widget:
- text: "Took the pill, 12 hours later my muscles started to really hurt, then my ribs started to burn so bad I couldn't breath."
---
This model takes text (a narrative of reactions to medications) as input and returns a predicted severity score for the reaction (LABEL_1 indicates a severe reaction). Please do NOT use it for medical diagnosis.
Example usage:
```python
import tensorflow as tf
from transformers import AutoModelForSequenceClassification, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("hellomattnewman/msba-adrida")
model = AutoModelForSequenceClassification.from_pretrained("hellomattnewman/msba-adrida")

def adr_predict(x):
    # Tokenize the narrative and run it through the classifier
    encoded_input = tokenizer(x, return_tensors='pt')
    output = model(**encoded_input)
    # Turn the raw logits into probabilities and return P(LABEL_1) = P(severe)
    scores = output.logits[0].detach().numpy()
    scores = tf.nn.softmax(scores)
    return scores.numpy()[1]

sentence = "I have severe pain."
adr_predict(sentence)
```
| 1,087 | [
[
0.0171356201171875,
-0.056365966796875,
0.041412353515625,
0.0132293701171875,
-0.007965087890625,
-0.017608642578125,
-0.0012731552124023438,
-0.0096435546875,
0.0198516845703125,
0.0333251953125,
-0.0266265869140625,
-0.05389404296875,
-0.070068359375,
0.0... |
renbtt/distilbert-base-uncased-finetuned-sti | 2023-05-12T02:39:55.000Z | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | text-classification | renbtt | null | null | renbtt/distilbert-base-uncased-finetuned-sti | 0 | 2 | transformers | 2023-05-12T00:59:27 | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-sti
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-sti
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3127
- Accuracy: 0.8904
- F1: 0.8904
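Since the card does not document the dataset or label names, the snippet below is only a generic sketch of how to query the checkpoint; check the model's `config.json` for the actual label mapping.

```python
from transformers import pipeline

# Load the fine-tuned checkpoint as a text-classification pipeline
classifier = pipeline("text-classification",
                      model="renbtt/distilbert-base-uncased-finetuned-sti")

# Returns [{"label": ..., "score": ...}]; label ids such as LABEL_0/LABEL_1
# are placeholders unless the config defines human-readable names
print(classifier("This is an example sentence."))
```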
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.5478 | 1.0 | 47 | 0.3518 | 0.8850 | 0.8848 |
| 0.3574 | 2.0 | 94 | 0.3127 | 0.8904 | 0.8904 |
### Framework versions
- Transformers 4.29.1
- Pytorch 2.0.0+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
| 1,496 | [
[
-0.034088134765625,
-0.047821044921875,
0.0136260986328125,
0.01611328125,
-0.0302276611328125,
-0.02069091796875,
-0.01291656494140625,
-0.0092010498046875,
0.006092071533203125,
0.0183563232421875,
-0.049530029296875,
-0.043853759765625,
-0.060699462890625,
... |
AustinCarthy/Baseline_10Kphish_benignFall_20_20_20 | 2023-05-12T02:49:33.000Z | [
"transformers",
"pytorch",
"tensorboard",
"bert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | text-classification | AustinCarthy | null | null | AustinCarthy/Baseline_10Kphish_benignFall_20_20_20 | 0 | 2 | transformers | 2023-05-12T01:52:25 | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
- precision
- recall
model-index:
- name: Baseline_10Kphish_benignFall_20_20_20
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Baseline_10Kphish_benignFall_20_20_20
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0830
- Accuracy: 0.9916
- F1: 0.9039
- Precision: 0.9971
- Recall: 0.8266
- Roc Auc Score: 0.9132
- Tpr At Fpr 0.01: 0.8118
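`Tpr At Fpr 0.01` reports how many phishing samples are caught when the false-positive rate on benign samples is capped at 1%. The card does not show the metric code; below is a minimal sketch of the usual computation with scikit-learn.

```python
import numpy as np
from sklearn.metrics import roc_curve

def tpr_at_fpr(y_true, y_score, target_fpr=0.01):
    # Sweep all decision thresholds, then take the best true-positive
    # rate among operating points whose FPR stays within the budget
    fpr, tpr, _ = roc_curve(y_true, y_score)
    return tpr[fpr <= target_fpr].max()

# Toy example with random labels/scores (illustrative only)
rng = np.random.default_rng(42)
print(tpr_at_fpr(rng.integers(0, 2, 1000), rng.random(1000)))
```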
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Precision | Recall | Roc Auc Score | Tpr At Fpr 0.01 |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:------:|:---------:|:------:|:-------------:|:---------------:|
| 0.0118 | 1.0 | 6563 | 0.0538 | 0.9889 | 0.8681 | 0.9948 | 0.77 | 0.8849 | 0.7234 |
| 0.0053 | 2.0 | 13126 | 0.0538 | 0.9915 | 0.9021 | 0.9945 | 0.8254 | 0.9126 | 0.7654 |
| 0.0018 | 3.0 | 19689 | 0.0639 | 0.9916 | 0.9040 | 0.9945 | 0.8286 | 0.9142 | 0.7782 |
| 0.0009 | 4.0 | 26252 | 0.0843 | 0.9905 | 0.8894 | 0.9978 | 0.8022 | 0.9011 | 0.8086 |
| 0.0 | 5.0 | 32815 | 0.0830 | 0.9916 | 0.9039 | 0.9971 | 0.8266 | 0.9132 | 0.8118 |
### Framework versions
- Transformers 4.29.1
- Pytorch 1.9.0+cu111
- Datasets 2.10.1
- Tokenizers 0.13.2
| 2,236 | [
[
-0.04180908203125,
-0.0435791015625,
0.00849151611328125,
0.00974273681640625,
-0.0207366943359375,
-0.021759033203125,
-0.004497528076171875,
-0.018035888671875,
0.0282440185546875,
0.028411865234375,
-0.05352783203125,
-0.05401611328125,
-0.049774169921875,
... |
yamazaki-m/distilbert-base-uncased-finetuned-emotion | 2023-05-12T07:01:06.000Z | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:emotion",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | text-classification | yamazaki-m | null | null | yamazaki-m/distilbert-base-uncased-finetuned-emotion | 0 | 2 | transformers | 2023-05-12T02:26:12 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-emotion
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: emotion
type: emotion
config: split
split: validation
args: split
metrics:
- name: Accuracy
type: accuracy
value: 0.9275
- name: F1
type: f1
value: 0.9274137058842844
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2089
- Accuracy: 0.9275
- F1: 0.9274
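A minimal inference sketch; the six label names are assumed to follow the `emotion` dataset (sadness, joy, love, anger, fear, surprise):

```python
from transformers import pipeline

classifier = pipeline("text-classification",
                      model="yamazaki-m/distilbert-base-uncased-finetuned-emotion",
                      top_k=None)  # return scores for every label, not just the top one
print(classifier("I can't believe how lucky I am today!"))
```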
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.8454 | 1.0 | 250 | 0.3120 | 0.9045 | 0.9011 |
| 0.2469 | 2.0 | 500 | 0.2089 | 0.9275 | 0.9274 |
### Framework versions
- Transformers 4.28.1
- Pytorch 2.0.0
- Datasets 2.12.0
- Tokenizers 0.13.3
| 1,842 | [
[
-0.03765869140625,
-0.04052734375,
0.013946533203125,
0.021942138671875,
-0.02685546875,
-0.02020263671875,
-0.0128936767578125,
-0.00827789306640625,
0.00989532470703125,
0.00856781005859375,
-0.0557861328125,
-0.05169677734375,
-0.05987548828125,
-0.007328... |
AustinCarthy/Baseline_100Kphish_benignFall_20_20_20 | 2023-05-12T09:43:16.000Z | [
"transformers",
"pytorch",
"tensorboard",
"bert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | text-classification | AustinCarthy | null | null | AustinCarthy/Baseline_100Kphish_benignFall_20_20_20 | 0 | 2 | transformers | 2023-05-12T02:49:54 | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
- precision
- recall
model-index:
- name: Baseline_100Kphish_benignFall_20_20_20
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Baseline_100Kphish_benignFall_20_20_20
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0206
- Accuracy: 0.9973
- F1: 0.9713
- Precision: 0.9998
- Recall: 0.9444
- Roc Auc Score: 0.9722
- Tpr At Fpr 0.01: 0.962
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Precision | Recall | Roc Auc Score | Tpr At Fpr 0.01 |
|:-------------:|:-----:|:------:|:---------------:|:--------:|:------:|:---------:|:------:|:-------------:|:---------------:|
| 0.0021 | 1.0 | 65625 | 0.0198 | 0.9974 | 0.9721 | 0.9966 | 0.9488 | 0.9743 | 0.9436 |
| 0.0013 | 2.0 | 131250 | 0.0251 | 0.9969 | 0.9664 | 0.9996 | 0.9354 | 0.9677 | 0.9416 |
| 0.0025 | 3.0 | 196875 | 0.0284 | 0.9966 | 0.9625 | 0.9996 | 0.928 | 0.9640 | 0.953 |
| 0.0 | 4.0 | 262500 | 0.0187 | 0.9974 | 0.9717 | 0.9994 | 0.9456 | 0.9728 | 0.965 |
| 0.0011 | 5.0 | 328125 | 0.0206 | 0.9973 | 0.9713 | 0.9998 | 0.9444 | 0.9722 | 0.962 |
### Framework versions
- Transformers 4.29.1
- Pytorch 1.9.0+cu111
- Datasets 2.10.1
- Tokenizers 0.13.2
| 2,244 | [
[
-0.04083251953125,
-0.043914794921875,
0.009368896484375,
0.00980377197265625,
-0.0192413330078125,
-0.02105712890625,
-0.0035610198974609375,
-0.0172576904296875,
0.0283050537109375,
0.02886962890625,
-0.05426025390625,
-0.0555419921875,
-0.050506591796875,
... |
tollefj/setfit-nocola-20-iter-25-epochs-allsamples | 2023-05-12T03:21:57.000Z | [
"sentence-transformers",
"pytorch",
"bert",
"setfit",
"text-classification",
"arxiv:2209.11055",
"license:apache-2.0",
"region:us"
] | text-classification | tollefj | null | null | tollefj/setfit-nocola-20-iter-25-epochs-allsamples | 0 | 2 | sentence-transformers | 2023-05-12T03:21:15 | ---
license: apache-2.0
tags:
- setfit
- sentence-transformers
- text-classification
pipeline_tag: text-classification
---
# tollefj/setfit-nocola-20-iter-25-epochs-allsamples
This is a [SetFit model](https://github.com/huggingface/setfit) that can be used for text classification. The model has been trained using an efficient few-shot learning technique that involves:
1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning.
2. Training a classification head with features from the fine-tuned Sentence Transformer.
## Usage
To use this model for inference, first install the SetFit library:
```bash
python -m pip install setfit
```
You can then run inference as follows:
```python
from setfit import SetFitModel
# Download from Hub and run inference
model = SetFitModel.from_pretrained("tollefj/setfit-nocola-20-iter-25-epochs-allsamples")
# Run inference
preds = model(["i loved the spiderman movie!", "pineapple on pizza is the worst 🤮"])
```
## BibTeX entry and citation info
```bibtex
@article{https://doi.org/10.48550/arxiv.2209.11055,
doi = {10.48550/ARXIV.2209.11055},
url = {https://arxiv.org/abs/2209.11055},
author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren},
keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences},
title = {Efficient Few-Shot Learning Without Prompts},
publisher = {arXiv},
year = {2022},
copyright = {Creative Commons Attribution 4.0 International}
}
```
| 1,589 | [
[
-0.0013189315795898438,
-0.05413818359375,
0.0240936279296875,
-0.00568389892578125,
-0.00881195068359375,
-0.0208282470703125,
-0.0126800537109375,
-0.00926971435546875,
-0.0027370452880859375,
0.03338623046875,
-0.037109375,
-0.022125244140625,
-0.042572021484... |
hdks/bert-mrpc | 2023-05-12T04:55:44.000Z | [
"transformers",
"pytorch",
"tensorboard",
"bert",
"text-classification",
"generated_from_trainer",
"dataset:glue",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | text-classification | hdks | null | null | hdks/bert-mrpc | 0 | 2 | transformers | 2023-05-12T04:18:55 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- accuracy
- f1
model-index:
- name: bert-mrpc
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: glue
type: glue
config: mrpc
split: validation
args: mrpc
metrics:
- name: Accuracy
type: accuracy
value: 0.8553921568627451
- name: F1
type: f1
value: 0.8987993138936535
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-mrpc
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the glue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6473
- Accuracy: 0.8554
- F1: 0.8988
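MRPC is a sentence-pair paraphrase task, so inference encodes two sentences together. A minimal sketch, assuming the standard GLUE label convention (0 = not paraphrase, 1 = paraphrase):

```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("hdks/bert-mrpc")
model = AutoModelForSequenceClassification.from_pretrained("hdks/bert-mrpc")

# Encode the pair jointly so BERT sees both segments
inputs = tokenizer("The cat sat on the mat.",
                   "A cat was sitting on the mat.",
                   return_tensors="pt")
with torch.no_grad():
    probs = model(**inputs).logits.softmax(dim=-1)
print(probs)  # [P(not paraphrase), P(paraphrase)] under the assumed convention
```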
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| No log | 1.0 | 459 | 0.3845 | 0.8480 | 0.8920 |
| 0.5092 | 2.0 | 918 | 0.4326 | 0.8578 | 0.9033 |
| 0.3024 | 3.0 | 1377 | 0.6473 | 0.8554 | 0.8988 |
### Framework versions
- Transformers 4.28.0
- Pytorch 2.0.0+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
| 1,841 | [
[
-0.03326416015625,
-0.03839111328125,
0.007678985595703125,
0.0122833251953125,
-0.0281219482421875,
-0.029998779296875,
-0.01491546630859375,
-0.0186767578125,
0.0180511474609375,
0.01401519775390625,
-0.0589599609375,
-0.0384521484375,
-0.051239013671875,
... |
itsmeboris/bert-base-cased-conversational-ner | 2023-05-12T05:35:03.000Z | [
"transformers",
"pytorch",
"tensorboard",
"bert",
"token-classification",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | token-classification | itsmeboris | null | null | itsmeboris/bert-base-cased-conversational-ner | 0 | 2 | transformers | 2023-05-12T05:28:16 | ---
tags:
- generated_from_trainer
metrics:
- precision
- recall
- f1
- accuracy
model-index:
- name: bert-base-cased-conversational-ner
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-cased-conversational-ner
This model is a fine-tuned version of [DeepPavlov/bert-base-cased-conversational](https://huggingface.co/DeepPavlov/bert-base-cased-conversational) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3583
- Job Title precision: 0.8377
- Job Title recall: 0.8317
- Job Title f1: 0.8347
- Loc precision: 0.8938
- Loc recall: 0.9340
- Loc f1: 0.9135
- Org precision: 0.7092
- Org recall: 0.7032
- Org f1: 0.7062
- Misc precision: 0.6246
- Misc recall: 0.7270
- Misc f1: 0.6719
- Precision: 0.8154
- Recall: 0.8240
- F1: 0.8197
- Accuracy: 0.8687
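A minimal sketch for extracting entities with this checkpoint; `aggregation_strategy="simple"` merges word-piece tokens back into whole spans. The entity names (Job Title, Loc, Org, Misc) are inferred from the metrics above, and the example sentence is illustrative:

```python
from transformers import pipeline

ner = pipeline("token-classification",
               model="itsmeboris/bert-base-cased-conversational-ner",
               aggregation_strategy="simple")  # merge sub-word pieces into entity spans

for entity in ner("Maria works as a data scientist at Acme Corp in Berlin."):
    print(entity["entity_group"], entity["word"], round(entity["score"], 3))
```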
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Job Title precision | Job Title recall | Job Title f1 | Loc precision | Loc recall | Loc f1 | Org precision | Org recall | Org f1 | Misc precision | Misc recall | Misc f1 | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:-------------------:|:----------------:|:------------:|:-------------:|:----------:|:------:|:-------------:|:----------:|:------:|:--------------:|:-----------:|:-------:|:---------:|:------:|:------:|:--------:|
| No log | 1.0 | 308 | 0.3583 | 0.8377 | 0.8317 | 0.8347 | 0.8938 | 0.9340 | 0.9135 | 0.7092 | 0.7032 | 0.7062 | 0.6246 | 0.7270 | 0.6719 | 0.8154 | 0.8240 | 0.8197 | 0.8687 |
| 0.3975 | 2.0 | 616 | 0.3767 | 0.7906 | 0.9035 | 0.8433 | 0.8731 | 0.9614 | 0.9151 | 0.6275 | 0.7973 | 0.7023 | 0.6623 | 0.6894 | 0.6756 | 0.7658 | 0.8866 | 0.8218 | 0.8669 |
### Framework versions
- Transformers 4.28.1
- Pytorch 1.7.1+cu110
- Datasets 2.12.0
- Tokenizers 0.13.2
| 2,606 | [
[
-0.04278564453125,
-0.046295166015625,
0.01947021484375,
0.0005474090576171875,
-0.01232147216796875,
-0.02056884765625,
-0.0035610198974609375,
-0.00989532470703125,
0.0286102294921875,
0.03265380859375,
-0.0478515625,
-0.046356201171875,
-0.048370361328125,
... |
Bhanu9Prakash/dqn-SpaceInvadersNoFrameskip-v4 | 2023-05-12T06:09:04.000Z | [
"stable-baselines3",
"SpaceInvadersNoFrameskip-v4",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | Bhanu9Prakash | null | null | Bhanu9Prakash/dqn-SpaceInvadersNoFrameskip-v4 | 0 | 2 | stable-baselines3 | 2023-05-12T06:08:30 | ---
library_name: stable-baselines3
tags:
- SpaceInvadersNoFrameskip-v4
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: DQN
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: SpaceInvadersNoFrameskip-v4
type: SpaceInvadersNoFrameskip-v4
metrics:
- type: mean_reward
value: 434.00 +/- 154.03
name: mean_reward
verified: false
---
# **DQN** Agent playing **SpaceInvadersNoFrameskip-v4**
This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
Install the RL Zoo (with SB3 and SB3-Contrib):
```bash
pip install rl_zoo3
```
```
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga Bhanu9Prakash -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), you can run the same commands from anywhere:
```
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga Bhanu9Prakash -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
## Training (with the RL Zoo)
```
python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga Bhanu9Prakash
```
## Hyperparameters
```python
OrderedDict([('batch_size', 32),
('buffer_size', 100000),
('env_wrapper',
['stable_baselines3.common.atari_wrappers.AtariWrapper']),
('exploration_final_eps', 0.01),
('exploration_fraction', 0.1),
('frame_stack', 4),
('gradient_steps', 1),
('learning_rate', 0.0001),
('learning_starts', 100000),
('n_timesteps', 10000000.0),
('optimize_memory_usage', False),
('policy', 'CnnPolicy'),
('target_update_interval', 1000),
('train_freq', 4),
('normalize', False)])
```
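Outside the RL Zoo, the downloaded checkpoint can also be loaded directly with Stable Baselines3. A sketch, assuming the Zoo's usual `logs/<algo>/<env>_<run>/<env>.zip` layout (the actual run folder name may differ):

```python
from stable_baselines3 import DQN
from stable_baselines3.common.env_util import make_atari_env
from stable_baselines3.common.vec_env import VecFrameStack

# Recreate the training-time preprocessing: Atari wrappers + 4-frame stack
env = VecFrameStack(make_atari_env("SpaceInvadersNoFrameskip-v4", n_envs=1, seed=0),
                    n_stack=4)
model = DQN.load("logs/dqn/SpaceInvadersNoFrameskip-v4_1/SpaceInvadersNoFrameskip-v4.zip",
                 env=env)

obs = env.reset()
for _ in range(1000):
    action, _ = model.predict(obs, deterministic=True)
    obs, rewards, dones, infos = env.step(action)
```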
| 2,707 | [
[
-0.041229248046875,
-0.036468505859375,
0.020843505859375,
0.024139404296875,
-0.01044464111328125,
-0.01763916015625,
0.0124969482421875,
-0.01385498046875,
0.01389312744140625,
0.0241546630859375,
-0.0712890625,
-0.03485107421875,
-0.0273284912109375,
-0.0... |
seanghay/distilbert-base-uncased-finetuned-cola | 2023-05-12T06:23:40.000Z | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:glue",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | text-classification | seanghay | null | null | seanghay/distilbert-base-uncased-finetuned-cola | 0 | 2 | transformers | 2023-05-12T06:18:55 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- matthews_correlation
model-index:
- name: distilbert-base-uncased-finetuned-cola
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: glue
type: glue
config: cola
split: validation
args: cola
metrics:
- name: Matthews Correlation
type: matthews_correlation
value: 0.5463170422325025
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-cola
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the glue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5736
- Matthews Correlation: 0.5463
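Matthews correlation is CoLA's headline metric because the acceptable/unacceptable classes are imbalanced; it ranges from -1 (total disagreement) through 0 (chance) to +1 (perfect agreement). A quick sketch of the usual computation (the GLUE metric is assumed to follow scikit-learn's definition):

```python
from sklearn.metrics import matthews_corrcoef

y_true = [1, 1, 0, 1, 0, 0, 1, 0]
y_pred = [1, 0, 0, 1, 0, 1, 1, 0]
print(matthews_corrcoef(y_true, y_pred))  # stays informative even when classes are skewed
```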
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Matthews Correlation |
|:-------------:|:-----:|:----:|:---------------:|:--------------------:|
| 0.5222 | 1.0 | 535 | 0.5322 | 0.3973 |
| 0.3484 | 2.0 | 1070 | 0.5036 | 0.4986 |
| 0.2366 | 3.0 | 1605 | 0.5736 | 0.5463 |
| 0.1815 | 4.0 | 2140 | 0.7577 | 0.5294 |
| 0.1337 | 5.0 | 2675 | 0.8006 | 0.5449 |
### Framework versions
- Transformers 4.29.1
- Pytorch 2.0.0+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
| 2,042 | [
[
-0.023773193359375,
-0.049102783203125,
0.01323699951171875,
0.0187225341796875,
-0.0203399658203125,
-0.008392333984375,
-0.00531005859375,
-0.00376129150390625,
0.0230560302734375,
0.01119232177734375,
-0.045501708984375,
-0.0357666015625,
-0.062103271484375,
... |
xqchq/TextClassificationTHUCNews | 2023-05-12T08:52:32.000Z | [
"transformers",
"pytorch",
"tensorboard",
"bert",
"text-classification",
"generated_from_trainer",
"dataset:thuc_news",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | text-classification | xqchq | null | null | xqchq/TextClassificationTHUCNews | 0 | 2 | transformers | 2023-05-12T07:24:31 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- thuc_news
model-index:
- name: TextClassificationTHUCNews
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# TextClassificationTHUCNews
This model is a fine-tuned version of [hfl/minirbt-h256](https://huggingface.co/hfl/minirbt-h256) on the thuc_news dataset.
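A minimal inference sketch for classifying a Chinese news headline; the label set is assumed to follow the THUCNews categories, which the card does not list:

```python
from transformers import pipeline

classifier = pipeline("text-classification", model="xqchq/TextClassificationTHUCNews")
# "The Champions League final kicks off tonight" — a sports headline
print(classifier("欧冠决赛今晚打响"))
```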
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
### Framework versions
- Transformers 4.28.0
- Pytorch 2.0.0+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
| 1,072 | [
[
-0.0343017578125,
-0.0443115234375,
0.0123748779296875,
0.00257110595703125,
-0.032623291015625,
-0.02557373046875,
-0.00865936279296875,
-0.0225982666015625,
0.00836944580078125,
0.0149688720703125,
-0.049835205078125,
-0.0303802490234375,
-0.0364990234375,
... |
Jimmie/distilbert-base-uncased-finetuned-emotion | 2023-05-12T08:26:22.000Z | [
"transformers",
"pytorch",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:emotion",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | text-classification | Jimmie | null | null | Jimmie/distilbert-base-uncased-finetuned-emotion | 0 | 2 | transformers | 2023-05-12T07:40:43 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-emotion
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: emotion
type: emotion
config: split
split: validation
args: split
metrics:
- name: Accuracy
type: accuracy
value: 0.9215
- name: F1
type: f1
value: 0.9213722275342461
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2256
- Accuracy: 0.9215
- F1: 0.9214
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.8409 | 1.0 | 250 | 0.3272 | 0.902 | 0.8991 |
| 0.2574 | 2.0 | 500 | 0.2256 | 0.9215 | 0.9214 |
### Framework versions
- Transformers 4.28.1
- Pytorch 2.0.0
- Datasets 2.11.0
- Tokenizers 0.13.3
| 1,842 | [
[
-0.0380859375,
-0.040618896484375,
0.01493072509765625,
0.02203369140625,
-0.0263671875,
-0.0202178955078125,
-0.0126953125,
-0.00861358642578125,
0.0103912353515625,
0.0085296630859375,
-0.056549072265625,
-0.051788330078125,
-0.059661865234375,
-0.00798034... |
seanghay/xlm-roberta-base-imdb | 2023-05-12T10:38:06.000Z | [
"transformers",
"pytorch",
"tensorboard",
"xlm-roberta",
"text-classification",
"generated_from_trainer",
"dataset:imdb",
"license:mit",
"model-index",
"endpoints_compatible",
"region:us"
] | text-classification | seanghay | null | null | seanghay/xlm-roberta-base-imdb | 0 | 2 | transformers | 2023-05-12T09:56:06 | ---
license: mit
tags:
- generated_from_trainer
datasets:
- imdb
metrics:
- accuracy
model-index:
- name: xlm-roberta-base-imdb
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: imdb
type: imdb
config: plain_text
split: test
args: plain_text
metrics:
- name: Accuracy
type: accuracy
value: 0.93936
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-imdb
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the imdb dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2223
- Accuracy: 0.9394
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.2345 | 1.0 | 1563 | 0.1808 | 0.9306 |
| 0.1612 | 2.0 | 3126 | 0.2223 | 0.9394 |
### Framework versions
- Transformers 4.29.1
- Pytorch 2.0.0+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
| 1,664 | [
[
-0.033905029296875,
-0.044830322265625,
0.02142333984375,
-0.002285003662109375,
-0.025421142578125,
-0.0183868408203125,
-0.0103912353515625,
-0.009918212890625,
0.00881195068359375,
0.04132080078125,
-0.060546875,
-0.04339599609375,
-0.06365966796875,
-0.0... |
shivansh-ka/Multilingual-Toxic-Comment-Roberta-best | 2023-05-12T10:15:00.000Z | [
"keras",
"region:us"
] | null | shivansh-ka | null | null | shivansh-ka/Multilingual-Toxic-Comment-Roberta-best | 0 | 2 | keras | 2023-05-12T10:13:13 | ---
library_name: keras
---
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
| Hyperparameters | Value |
| :-- | :-- |
| name | Adam |
| weight_decay | 1e-06 |
| clipnorm | None |
| global_clipnorm | None |
| clipvalue | None |
| use_ema | False |
| ema_momentum | 0.99 |
| ema_overwrite_frequency | None |
| jit_compile | False |
| is_legacy_optimizer | False |
| learning_rate | 9.999999747378752e-06 |
| beta_1 | 0.9 |
| beta_2 | 0.999 |
| epsilon | 1e-07 |
| amsgrad | False |
| training_precision | float32 |
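A sketch of how the table above maps onto a Keras optimizer; this is a hypothetical reconstruction, since the original training script is not provided (`weight_decay` as a base-class argument assumes the non-legacy optimizer API, consistent with `is_legacy_optimizer = False`):

```python
import tensorflow as tf

# Optimizer rebuilt from the hyperparameter table above
optimizer = tf.keras.optimizers.Adam(
    learning_rate=9.999999747378752e-06,  # ~1e-5 stored as float32
    beta_1=0.9,
    beta_2=0.999,
    epsilon=1e-07,
    amsgrad=False,
    weight_decay=1e-06,
)
```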
| 740 | [
[
-0.035919189453125,
-0.03900146484375,
0.0277099609375,
0.004383087158203125,
-0.03466796875,
-0.01666259765625,
0.0011529922485351562,
0.0012025833129882812,
0.0235137939453125,
0.019287109375,
-0.043243408203125,
-0.047882080078125,
-0.03515625,
0.00736236... |
PhDmath/distilbert-base-uncased-finetuned-emotion | 2023-05-12T12:52:33.000Z | [
"transformers",
"pytorch",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:emotion",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | text-classification | PhDmath | null | null | PhDmath/distilbert-base-uncased-finetuned-emotion | 0 | 2 | transformers | 2023-05-12T11:33:07 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-emotion
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: emotion
type: emotion
config: split
split: validation
args: split
metrics:
- name: Accuracy
type: accuracy
value: 0.9295
- name: F1
type: f1
value: 0.9293576247301535
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2169
- Accuracy: 0.9295
- F1: 0.9294
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.8632 | 1.0 | 250 | 0.3270 | 0.904 | 0.9008 |
| 0.253 | 2.0 | 500 | 0.2169 | 0.9295 | 0.9294 |
### Framework versions
- Transformers 4.28.1
- Pytorch 2.0.0+cu117
- Datasets 2.11.0
- Tokenizers 0.13.3
| 1,848 | [
[
-0.037872314453125,
-0.04180908203125,
0.0141754150390625,
0.0218505859375,
-0.0263824462890625,
-0.0193634033203125,
-0.01316070556640625,
-0.00848388671875,
0.01012420654296875,
0.00809478759765625,
-0.05645751953125,
-0.052093505859375,
-0.0601806640625,
... |
AustinCarthy/Base_100Kphish_benignFall_IL_10K_OnlyPhish_from_benign_top_p_0.75 | 2023-05-12T13:54:37.000Z | [
"transformers",
"pytorch",
"tensorboard",
"bert",
"text-classification",
"generated_from_trainer",
"endpoints_compatible",
"region:us"
] | text-classification | AustinCarthy | null | null | AustinCarthy/Base_100Kphish_benignFall_IL_10K_OnlyPhish_from_benign_top_p_0.75 | 0 | 2 | transformers | 2023-05-12T12:53:56 | ---
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
- precision
- recall
model-index:
- name: Base_100Kphish_benignFall_IL_10K_OnlyPhish_from_benign_top_p_0.75
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Base_100Kphish_benignFall_IL_10K_OnlyPhish_from_benign_top_p_0.75
This model was trained from scratch on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0237
- Accuracy: 0.9975
- F1: 0.9731
- Precision: 0.9983
- Recall: 0.9492
- Roc Auc Score: 0.9746
- Tpr At Fpr 0.01: 0.9508
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Precision | Recall | Roc Auc Score | Tpr At Fpr 0.01 |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:------:|:---------:|:------:|:-------------:|:---------------:|
| 0.0042 | 1.0 | 6563 | 0.0276 | 0.9966 | 0.9628 | 0.9983 | 0.9298 | 0.9649 | 0.9308 |
| 0.0024 | 2.0 | 13126 | 0.0242 | 0.9972 | 0.9698 | 0.9973 | 0.9438 | 0.9718 | 0.927 |
| 0.0026 | 3.0 | 19689 | 0.0244 | 0.9970 | 0.9679 | 0.9987 | 0.939 | 0.9695 | 0.9514 |
| 0.0003 | 4.0 | 26252 | 0.0293 | 0.9968 | 0.9657 | 0.9989 | 0.9346 | 0.9673 | 0.9472 |
| 0.0007 | 5.0 | 32815 | 0.0237 | 0.9975 | 0.9731 | 0.9983 | 0.9492 | 0.9746 | 0.9508 |
### Framework versions
- Transformers 4.29.1
- Pytorch 1.9.0+cu111
- Datasets 2.10.1
- Tokenizers 0.13.2
| 2,208 | [
[
-0.032623291015625,
-0.038330078125,
0.00751495361328125,
0.006866455078125,
-0.018890380859375,
-0.0200958251953125,
0.006732940673828125,
-0.01064300537109375,
0.0269622802734375,
0.030517578125,
-0.052001953125,
-0.056884765625,
-0.053802490234375,
-0.008... |
berluk/resnet50-fish-rec | 2023-05-12T17:18:31.000Z | [
"keras",
"image-classification",
"region:us"
] | image-classification | berluk | null | null | berluk/resnet50-fish-rec | 0 | 2 | keras | 2023-05-12T13:15:58 | ---
library_name: keras
pipeline_tag: image-classification
---
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
| Hyperparameters | Value |
| :-- | :-- |
| name | Adam |
| learning_rate | 0.001 |
| decay | 0.0 |
| beta_1 | 0.9 |
| beta_2 | 0.999 |
| epsilon | 1e-07 |
| amsgrad | False |
| training_precision | float32 | | 545 | [
[
-0.033843994140625,
-0.036163330078125,
0.023681640625,
-0.0035400390625,
-0.031951904296875,
-0.0172576904296875,
0.0021572113037109375,
-0.01023101806640625,
0.01511383056640625,
0.0185546875,
-0.03497314453125,
-0.04827880859375,
-0.03619384765625,
-0.006... |
TencentARC/QA-CLIP-ViT-L-14 | 2023-05-16T11:19:35.000Z | [
"transformers",
"pytorch",
"chinese_clip",
"zero-shot-image-classification",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | zero-shot-image-classification | TencentARC | null | null | TencentARC/QA-CLIP-ViT-L-14 | 0 | 2 | transformers | 2023-05-12T13:42:18 | ---
license: apache-2.0
widget:
- src: https://huggingface.co/datasets/mishig/sample_images/resolve/main/cat-dog-music.png
candidate_labels: 音乐表演, 体育运动
example_title: 猫和狗
---
[**中文说明**](README_CN.md) | [**English**](README.md)
# Introduction
This project aims to provide a better Chinese CLIP model. The training data used in this project consists of publicly accessible image URLs and related Chinese text descriptions, totaling 400 million pairs. After screening, we ultimately used 100 million of these pairs for training.
This project is produced by QQ-ARC Joint Lab, Tencent PCG. For more detailed information, please refer to the [main page of the QA-CLIP project](https://huggingface.co/TencentARC/QA-CLIP). We have also open-sourced our code on GitHub, [QA-CLIP](https://github.com/TencentARC-QQ/QA-CLIP), and you are welcome to star it!
<br><br>
## Results
We conducted zero-shot tests on the [MUGE Retrieval](https://tianchi.aliyun.com/muge), [Flickr30K-CN](https://github.com/li-xirong/cross-lingual-cap), and [COCO-CN](https://github.com/li-xirong/coco-cn) datasets for image-text retrieval tasks. For the zero-shot image classification task, we tested on the ImageNet dataset. The test results are shown in the tables below:
**Flickr30K-CN Zero-shot Retrieval (Official Test Set)**:
<table border="1" width="120%">
<tr align="center">
<th>Task</th><th colspan="3">Text-to-Image</th><th colspan="3">Image-to-Text</th>
</tr>
<tr align="center">
<td>Metric</td><td>R@1</td><td>R@5</td><td>R@10</td><td>R@1</td><td>R@5</td><td>R@10</td>
</tr>
<tr align="center">
<td width="120%">CN-CLIP<sub>RN50</sub></td><td>48.8</td><td>76.0</td><td>84.6</td><td>60.0</td><td>85.9</td><td>92.0</td>
</tr>
<tr align="center", style="background-color: Honeydew;">
<td width="120%">QA-CLIP<sub>RN50</sub></td><td><b>50.5</b></td><td><b>77.4</b></td><td><b>86.1</b></td><td><b>67.1</b></td><td><b>87.9</b></td><td><b>93.2</b></td>
</tr>
<tr align="center">
<td width="120%">CN-CLIP<sub>ViT-B/16</sub></td><td>62.7</td><td>86.9</td><td>92.8</td><td>74.6</td><td>93.5</td><td>97.1</td>
</tr>
<tr align="center", style="background-color: Honeydew;">
<td width="120%">QA-CLIP<sub>ViT-B/16</sub></td><td><b>63.8</b></td><td><b>88.0</b></td><td><b>93.2</b></td><td><b>78.4</b></td><td><b>96.1</b></td><td><b>98.5</b></td>
</tr>
<tr align="center">
<td width="120%">CN-CLIP<sub>ViT-L/14</sub></td><td>68.0</td><td>89.7</td><td>94.4</td><td>80.2</td><td>96.6</td><td>98.2</td>
</tr>
<tr align="center">
<td width="120%">AltClip<sub>ViT-L/14</sub></td><td><b>69.7</b></td><td>90.1</td><td><b>94.8</b></td><td>84.8</td><td>97.7</td><td>99.1</td>
</tr>
<tr align="center", style="background-color: Honeydew;">
<td width="120%">QA-CLIP<sub>ViT-L/14</sub></td><td>69.3</td><td><b>90.3</b></td><td>94.7</td><td><b>85.3</b></td><td><b>97.9</b></td><td><b>99.2</b></td>
</tr>
</table>
<br>
**MUGE Zero-shot Retrieval (Official Validation Set)**:
<table border="1" width="120%">
<tr align="center">
<th>Task</th><th colspan="3">Text-to-Image</th><th colspan="3">Image-to-Text</th>
</tr>
<tr align="center">
<td>Metric</td><td>R@1</td><td>R@5</td><td>R@10</td><td>R@1</td><td>R@5</td><td>R@10</td>
</tr>
<tr align="center">
<td width="120%">CN-CLIP<sub>RN50</sub></td><td>42.6</td><td>68.5</td><td>78.0</td><td>30.0</td><td>56.2</td><td>66.9</td>
</tr>
<tr align="center", style="background-color: Honeydew;">
<td width="120%">QA-CLIP<sub>RN50</sub></td><td><b>44.0</b></td><td><b>69.9</b></td><td><b>79.5</b></td><td><b>32.4</b></td><td><b>59.5</b></td><td><b>70.3</b></td>
</tr>
<tr align="center">
<td width="120%">CN-CLIP<sub>ViT-B/16</sub></td><td>52.1</td><td>76.7</td><td>84.4</td><td>38.7</td><td>65.6</td><td>75.1</td>
</tr>
<tr align="center", style="background-color: Honeydew;">
<td width="120%">QA-CLIP<sub>ViT-B/16</sub></td><td><b>53.2</b></td><td><b>77.7</b></td><td><b>85.1</b></td><td><b>40.7</b></td><td><b>68.2</b></td><td><b>77.2</b></td>
</tr>
<tr align="center">
<td width="120%">CN-CLIP<sub>ViT-L/14</sub></td><td>56.4</td><td>79.8</td><td>86.2</td><td>42.6</td><td>69.8</td><td>78.6</td>
</tr>
<tr align="center">
<td width="120%">AltClip<sub>ViT-L/14</sub></td><td>29.6</td><td>49.9</td><td>58.8</td><td>21.4</td><td>42.0</td><td>51.9</td>
</tr>
<tr align="center", style="background-color: Honeydew;">
<td width="120%">QA-CLIP<sub>ViT-L/14</sub></td><td><b>57.4</b></td><td><b>81.0</b></td><td><b>87.7</b></td><td><b>45.5</b></td><td><b>73.0</b></td><td><b>81.4</b></td>
</tr>
</table>
<br>
**COCO-CN Zero-shot Retrieval (Official Test Set)**:
<table border="1" width="120%">
<tr align="center">
<th>Task</th><th colspan="3">Text-to-Image</th><th colspan="3">Image-to-Text</th>
</tr>
<tr align="center">
<td>Metric</td><td>R@1</td><td>R@5</td><td>R@10</td><td>R@1</td><td>R@5</td><td>R@10</td>
</tr>
<tr align="center">
<td width="120%">CN-CLIP<sub>RN50</sub></td><td>48.1</td><td>81.3</td><td>90.5</td><td>50.9</td><td>81.1</td><td>90.5</td>
</tr>
<tr align="center", style="background-color: Honeydew;">
<td width="120%">QA-CLIP<sub>RN50</sub></td><td><b>50.1</b></td><td><b>82.5</b></td><td><b>91.7</b></td><td><b>56.7</b></td><td><b>85.2</b></td><td><b>92.9</b></td>
</tr>
<tr align="center">
<td width="120%">CN-CLIP<sub>ViT-B/16</sub></td><td>62.2</td><td>87.1</td><td>94.9</td><td>56.3</td><td>84.0</td><td>93.3</td>
</tr>
<tr align="center", style="background-color: Honeydew;">
<td width="120%">QA-CLIP<sub>ViT-B/16</sub></td><td><b>62.9</b></td><td><b>87.7</b></td><td><b>94.7</b></td><td><b>61.5</b></td><td><b>87.6</b></td><td><b>94.8</b></td>
</tr>
<tr align="center">
<td width="120%">CN-CLIP<sub>ViT-L/14</sub></td><td>64.9</td><td>88.8</td><td>94.2</td><td>60.6</td><td>84.4</td><td>93.1</td>
</tr>
<tr align="center">
<td width="120%">AltClip<sub>ViT-L/14</sub></td><td>63.5</td><td>87.6</td><td>93.5</td><td>62.6</td><td><b>88.5</b></td><td><b>95.9</b></td>
</tr>
<tr align="center", style="background-color: Honeydew;">
<td width="120%">QA-CLIP<sub>ViT-L/14</sub></td><td><b>65.7</b></td><td><b>90.2</b></td><td><b>95.0</b></td><td><b>64.5</b></td><td>88.3</td><td>95.1</td>
</tr>
</table>
<br>
**Zero-shot Image Classification on ImageNet**:
<table border="1" width="120%">
<tr align="center">
<th>Task</th><th colspan="1">ImageNet</th>
</tr>
<tr align="center">
<td width="120%">CN-CLIP<sub>RN50</sub></td><td>33.5</td>
</tr>
<tr align="center", style="background-color: Honeydew;">
<td width="120%">QA-CLIP<sub>RN50</sub></td><td><b>35.5</b></td>
</tr>
<tr align="center">
<td width="120%">CN-CLIP<sub>ViT-B/16</sub></td><td>48.4</td>
</tr>
<tr align="center", style="background-color: Honeydew;">
<td width="120%">QA-CLIP<sub>ViT-B/16</sub></td><td><b>49.7</b></td>
</tr>
<tr align="center">
<td width="120%">CN-CLIP<sub>ViT-L/14</sub></td><td>54.7</td>
</tr>
<tr align="center", style="background-color: Honeydew;">
<td width="120%">QA-CLIP<sub>ViT-L/14</sub></td><td><b>55.8</b></td>
</tr>
</table>
<br>
<br><br>
# Getting Started
## Inference Code
Inference code example:
```python
from PIL import Image
import requests
from transformers import ChineseCLIPProcessor, ChineseCLIPModel
model = ChineseCLIPModel.from_pretrained("TencentARC/QA-CLIP-ViT-L-14")
processor = ChineseCLIPProcessor.from_pretrained("TencentARC/QA-CLIP-ViT-L-14")
url = "https://clip-cn-beijing.oss-cn-beijing.aliyuncs.com/pokemon.jpeg"
image = Image.open(requests.get(url, stream=True).raw)
# Squirtle, Bulbasaur, Charmander, Pikachu in English
texts = ["杰尼龟", "妙蛙种子", "小火龙", "皮卡丘"]
# compute image feature
inputs = processor(images=image, return_tensors="pt")
image_features = model.get_image_features(**inputs)
image_features = image_features / image_features.norm(p=2, dim=-1, keepdim=True) # normalize
# compute text features
inputs = processor(text=texts, padding=True, return_tensors="pt")
text_features = model.get_text_features(**inputs)
text_features = text_features / text_features.norm(p=2, dim=-1, keepdim=True) # normalize
# compute image-text similarity scores
inputs = processor(text=texts, images=image, return_tensors="pt", padding=True)
outputs = model(**inputs)
logits_per_image = outputs.logits_per_image # this is the image-text similarity score
probs = logits_per_image.softmax(dim=1)
```
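The same checkpoint can also be driven through the high-level zero-shot pipeline, which wraps the feature extraction and softmax shown above (pipeline support for ChineseCLIP checkpoints is assumed here, consistent with the widget configuration at the top of this card):

```python
from transformers import pipeline

clf = pipeline("zero-shot-image-classification", model="TencentARC/QA-CLIP-ViT-L-14")
# Candidate labels: Squirtle, Bulbasaur, Charmander, Pikachu
print(clf("pokemon.jpeg", candidate_labels=["杰尼龟", "妙蛙种子", "小火龙", "皮卡丘"]))
```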
<br><br>
# Acknowledgments
The project code is based on implementation of <b>[Chinese-CLIP](https://github.com/OFA-Sys/Chinese-CLIP)</b>, and we are very grateful for their outstanding open-source contributions.
<br><br> | 8,888 | [
[
-0.035064697265625,
-0.046905517578125,
-0.006717681884765625,
0.0224456787109375,
-0.0294342041015625,
0.003856658935546875,
-0.01387786865234375,
-0.033355712890625,
0.038299560546875,
-0.01108551025390625,
-0.07080078125,
-0.0325927734375,
-0.041168212890625,... |
AustinCarthy/Base_10Kphish_benignFall_IL_10K_OnlyPhish_10K_from_benign_top_p_0.75 | 2023-05-12T14:54:57.000Z | [
"transformers",
"pytorch",
"tensorboard",
"bert",
"text-classification",
"generated_from_trainer",
"endpoints_compatible",
"region:us"
] | text-classification | AustinCarthy | null | null | AustinCarthy/Base_10Kphish_benignFall_IL_10K_OnlyPhish_10K_from_benign_top_p_0.75 | 0 | 2 | transformers | 2023-05-12T13:55:03 | ---
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
- precision
- recall
model-index:
- name: Base_10Kphish_benignFall_IL_10K_OnlyPhish_10K_from_benign_top_p_0.75
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Base_10Kphish_benignFall_IL_10K_OnlyPhish_10K_from_benign_top_p_0.75
This model was trained from scratch on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1123
- Accuracy: 0.9899
- F1: 0.8810
- Precision: 0.9985
- Recall: 0.7882
- Roc Auc Score: 0.8941
- Tpr At Fpr 0.01: 0.8132
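`Tpr At Fpr 0.01` is the true-positive rate at the threshold where the false-positive rate is capped at 1%. The card does not include the evaluation script; a minimal sketch of how such a value is typically computed from labels and scores, assuming scikit-learn (not the author's code):
```python
from sklearn.metrics import roc_curve

def tpr_at_fpr(y_true, y_score, target_fpr=0.01):
    # sweep thresholds along the ROC curve and take the best TPR
    # among operating points whose FPR stays within the target
    fpr, tpr, _ = roc_curve(y_true, y_score)
    mask = fpr <= target_fpr
    return float(tpr[mask].max()) if mask.any() else 0.0
```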
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5.0
- mixed_precision_training: Native AMP
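These flags map almost one-to-one onto `transformers.TrainingArguments`; a hedged reconstruction (the `output_dir` is an assumption, since the card does not state one):
```python
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="out",  # assumption: not stated in the card
    learning_rate=2e-5,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=32,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=5.0,
    fp16=True,  # "Native AMP" mixed precision
)
```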
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Precision | Recall | Roc Auc Score | Tpr At Fpr 0.01 |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:------:|:---------:|:------:|:-------------:|:---------------:|
| 0.0068 | 1.0 | 6563 | 0.0531 | 0.9894 | 0.8756 | 0.9934 | 0.7828 | 0.8913 | 0.7264 |
| 0.0042 | 2.0 | 13126 | 0.0747 | 0.9894 | 0.8754 | 0.9962 | 0.7808 | 0.8903 | 0.7666 |
| 0.0015 | 3.0 | 19689 | 0.0648 | 0.9904 | 0.8887 | 0.9983 | 0.8008 | 0.9004 | 0.8088 |
| 0.0008 | 4.0 | 26252 | 0.0861 | 0.9912 | 0.8983 | 0.9980 | 0.8166 | 0.9083 | 0.831 |
| 0.0 | 5.0 | 32815 | 0.1123 | 0.9899 | 0.8810 | 0.9985 | 0.7882 | 0.8941 | 0.8132 |
### Framework versions
- Transformers 4.29.1
- Pytorch 1.9.0+cu111
- Datasets 2.10.1
- Tokenizers 0.13.2
| 2,214 | [
[
-0.0338134765625,
-0.0390625,
0.007244110107421875,
0.00882720947265625,
-0.019805908203125,
-0.0203094482421875,
0.005924224853515625,
-0.011810302734375,
0.0248565673828125,
0.0299530029296875,
-0.049835205078125,
-0.057159423828125,
-0.054473876953125,
-0... |
grenmon/bart-large-finetuned-summarization | 2023-05-12T14:49:29.000Z | [
"transformers",
"pytorch",
"tensorboard",
"bart",
"text2text-generation",
"summarization",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | summarization | grenmon | null | null | grenmon/bart-large-finetuned-summarization | 0 | 2 | transformers | 2023-05-12T14:28:31 | ---
license: apache-2.0
tags:
- summarization
- generated_from_trainer
metrics:
- rouge
model-index:
- name: bart-large-finetuned-summarization
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bart-large-finetuned-summarization
This model is a fine-tuned version of [facebook/bart-large](https://huggingface.co/facebook/bart-large) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1841
- Rouge1: 32.6763
- Rouge2: 23.1598
- Rougel: 31.2322
- Rougelsum: 32.278
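The ROUGE values above are the standard rouge1/rouge2/rougeL/rougeLsum scores; a minimal sketch of computing them with the Hugging Face `evaluate` library (the strings below are placeholders, not the card's evaluation data):
```python
import evaluate

rouge = evaluate.load("rouge")
scores = rouge.compute(
    predictions=["a summary produced by the model"],
    references=["the reference summary it is scored against"],
)
print(scores)  # keys: rouge1, rouge2, rougeL, rougeLsum
```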
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5.6e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:-------:|:---------:|
| 1.7048 | 1.0 | 308 | 1.1916 | 32.0296 | 21.6931 | 30.2623 | 31.1959 |
| 1.1153 | 2.0 | 616 | 1.2054 | 30.7076 | 21.7771 | 29.3115 | 29.9377 |
| 0.78 | 3.0 | 924 | 1.1096 | 32.4164 | 22.494 | 31.0367 | 31.8135 |
| 0.5335 | 4.0 | 1232 | 1.1547 | 33.2561 | 23.6119 | 32.1371 | 32.591 |
| 0.361 | 5.0 | 1540 | 1.1841 | 32.6763 | 23.1598 | 31.2322 | 32.278 |
### Framework versions
- Transformers 4.28.0
- Pytorch 2.0.0+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
| 1,899 | [
[
-0.042572021484375,
-0.051666259765625,
0.020660400390625,
0.0102081298828125,
-0.0172576904296875,
-0.0165863037109375,
-0.0150604248046875,
-0.0172271728515625,
0.032989501953125,
0.03240966796875,
-0.05230712890625,
-0.044342041015625,
-0.04461669921875,
... |
hemagamal/mdeberta_quran_qa_model | 2023-05-12T15:29:29.000Z | [
"transformers",
"tf",
"deberta-v2",
"question-answering",
"generated_from_keras_callback",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | question-answering | hemagamal | null | null | hemagamal/mdeberta_quran_qa_model | 0 | 2 | transformers | 2023-05-12T14:50:32 | ---
license: mit
tags:
- generated_from_keras_callback
model-index:
- name: hemagamal/mdeberta_quran_qa_model
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# hemagamal/mdeberta_quran_qa_model
This model is a fine-tuned version of [timpal0l/mdeberta-v3-base-squad2](https://huggingface.co/timpal0l/mdeberta-v3-base-squad2) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 11.9013
- Train End Logits Loss: 5.9506
- Train Start Logits Loss: 5.9506
- Train End Logits Sparse Categorical Accuracy: 0.0582
- Train Start Logits Sparse Categorical Accuracy: 0.0426
- Validation Loss: 11.9013
- Validation End Logits Loss: 5.9506
- Validation Start Logits Loss: 5.9506
- Validation End Logits Sparse Categorical Accuracy: 0.0459
- Validation Start Logits Sparse Categorical Accuracy: 0.0917
- Epoch: 15
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': None, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': True, 'is_legacy_optimizer': False, 'learning_rate': 2e-05, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False}
- training_precision: float32
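As the table below shows, the reported total loss is simply the sum of the start- and end-logits sparse categorical cross-entropies (5.9506 + 5.9506 ≈ 11.9013). A hedged sketch of the usual Keras setup for such a model (data pipeline omitted; `from_pt=True` is an assumption in case the base checkpoint only ships PyTorch weights):
```python
import tensorflow as tf
from transformers import TFAutoModelForQuestionAnswering

model = TFAutoModelForQuestionAnswering.from_pretrained(
    "timpal0l/mdeberta-v3-base-squad2", from_pt=True
)
optimizer = tf.keras.optimizers.Adam(
    learning_rate=2e-5, beta_1=0.9, beta_2=0.999, epsilon=1e-7
)
# transformers TF models compute the start/end cross-entropy
# internally when labels are passed, so no explicit loss is set here
model.compile(optimizer=optimizer)
```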
### Training results
| Train Loss | Train End Logits Loss | Train Start Logits Loss | Train End Logits Sparse Categorical Accuracy | Train Start Logits Sparse Categorical Accuracy | Validation Loss | Validation End Logits Loss | Validation Start Logits Loss | Validation End Logits Sparse Categorical Accuracy | Validation Start Logits Sparse Categorical Accuracy | Epoch |
|:----------:|:---------------------:|:-----------------------:|:--------------------------------------------:|:----------------------------------------------:|:---------------:|:--------------------------:|:----------------------------:|:-------------------------------------------------:|:---------------------------------------------------:|:-----:|
| 12.4236 | 6.1795 | 6.2441 | 0.0724 | 0.0895 | 11.9013 | 5.9506 | 5.9506 | 0.0459 | 0.0917 | 0 |
| 11.9013 | 5.9506 | 5.9506 | 0.0469 | 0.0469 | 11.9013 | 5.9506 | 5.9506 | 0.0459 | 0.0917 | 1 |
| 11.9013 | 5.9506 | 5.9506 | 0.0369 | 0.0398 | 11.9013 | 5.9506 | 5.9506 | 0.0459 | 0.0917 | 2 |
| 11.9013 | 5.9506 | 5.9506 | 0.0369 | 0.0554 | 11.9013 | 5.9506 | 5.9506 | 0.0459 | 0.0917 | 3 |
| 11.9013 | 5.9506 | 5.9506 | 0.0483 | 0.0455 | 11.9013 | 5.9506 | 5.9506 | 0.0459 | 0.0917 | 4 |
| 11.9013 | 5.9506 | 5.9506 | 0.0554 | 0.0412 | 11.9013 | 5.9506 | 5.9506 | 0.0459 | 0.0917 | 5 |
| 11.9013 | 5.9506 | 5.9506 | 0.0241 | 0.0398 | 11.9013 | 5.9506 | 5.9506 | 0.0459 | 0.0917 | 6 |
| 11.9013 | 5.9506 | 5.9506 | 0.0369 | 0.0412 | 11.9013 | 5.9506 | 5.9506 | 0.0459 | 0.0917 | 7 |
| 11.9013 | 5.9506 | 5.9506 | 0.0426 | 0.0426 | 11.9013 | 5.9506 | 5.9506 | 0.0459 | 0.0917 | 8 |
| 11.9013 | 5.9506 | 5.9506 | 0.0511 | 0.0426 | 11.9013 | 5.9506 | 5.9506 | 0.0459 | 0.0917 | 9 |
| 11.9013 | 5.9506 | 5.9506 | 0.0426 | 0.0469 | 11.9013 | 5.9506 | 5.9506 | 0.0459 | 0.0917 | 10 |
| 11.9013 | 5.9506 | 5.9506 | 0.0440 | 0.0341 | 11.9013 | 5.9506 | 5.9506 | 0.0459 | 0.0917 | 11 |
| 11.9013 | 5.9506 | 5.9506 | 0.0412 | 0.0398 | 11.9013 | 5.9506 | 5.9506 | 0.0459 | 0.0917 | 12 |
| 11.9013 | 5.9506 | 5.9506 | 0.0440 | 0.0440 | 11.9013 | 5.9506 | 5.9506 | 0.0459 | 0.0917 | 13 |
| 11.9013 | 5.9506 | 5.9506 | 0.0426 | 0.0412 | 11.9013 | 5.9506 | 5.9506 | 0.0459 | 0.0917 | 14 |
| 11.9013 | 5.9506 | 5.9506 | 0.0582 | 0.0426 | 11.9013 | 5.9506 | 5.9506 | 0.0459 | 0.0917 | 15 |
### Framework versions
- Transformers 4.29.1
- TensorFlow 2.12.0
- Datasets 2.12.0
- Tokenizers 0.13.3
| 8,111 | [
[
-0.050323486328125,
-0.04473876953125,
0.0163421630859375,
0.0107574462890625,
-0.00583648681640625,
0.005603790283203125,
0.00753021240234375,
-0.004467010498046875,
0.04803466796875,
0.0256805419921875,
-0.054107666015625,
-0.043182373046875,
-0.04861450195312... |
AustinCarthy/Base_100Kphish_benignFall_IL_10K_OnlyPhish_from_benign_top_p_0.75_lr1e-6 | 2023-05-12T15:54:59.000Z | [
"transformers",
"pytorch",
"tensorboard",
"bert",
"text-classification",
"generated_from_trainer",
"endpoints_compatible",
"region:us"
] | text-classification | AustinCarthy | null | null | AustinCarthy/Base_100Kphish_benignFall_IL_10K_OnlyPhish_from_benign_top_p_0.75_lr1e-6 | 0 | 2 | transformers | 2023-05-12T14:55:18 | ---
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
- precision
- recall
model-index:
- name: Base_100Kphish_benignFall_IL_10K_OnlyPhish_from_benign_top_p_0.75_lr1e-6
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Base_100Kphish_benignFall_IL_10K_OnlyPhish_from_benign_top_p_0.75_lr1e-6
This model was trained from scratch on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0201
- Accuracy: 0.9978
- F1: 0.9766
- Precision: 0.9985
- Recall: 0.9556
- Roc Auc Score: 0.9778
- Tpr At Fpr 0.01: 0.9614
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Precision | Recall | Roc Auc Score | Tpr At Fpr 0.01 |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:------:|:---------:|:------:|:-------------:|:---------------:|
| 0.004 | 1.0 | 6563 | 0.0144 | 0.9980 | 0.9781 | 0.9969 | 0.96 | 0.9799 | 0.9528 |
| 0.0012 | 2.0 | 13126 | 0.0202 | 0.9977 | 0.9749 | 0.9992 | 0.9518 | 0.9759 | 0.9618 |
| 0.002 | 3.0 | 19689 | 0.0176 | 0.9978 | 0.9761 | 0.9985 | 0.9546 | 0.9773 | 0.9586 |
| 0.0 | 4.0 | 26252 | 0.0205 | 0.9977 | 0.9749 | 0.9992 | 0.9518 | 0.9759 | 0.961 |
| 0.0005 | 5.0 | 32815 | 0.0201 | 0.9978 | 0.9766 | 0.9985 | 0.9556 | 0.9778 | 0.9614 |
### Framework versions
- Transformers 4.29.1
- Pytorch 1.9.0+cu111
- Datasets 2.10.1
- Tokenizers 0.13.2
| 2,222 | [
[
-0.032867431640625,
-0.037567138671875,
0.0072174072265625,
0.00638580322265625,
-0.0194854736328125,
-0.020965576171875,
0.00742340087890625,
-0.011932373046875,
0.02642822265625,
0.0296783447265625,
-0.052734375,
-0.05645751953125,
-0.05377197265625,
-0.00... |
timopixel/distilbert-base-uncased-finetuned-squad | 2023-06-07T21:50:44.000Z | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"question-answering",
"generated_from_trainer",
"dataset:squad",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | question-answering | timopixel | null | null | timopixel/distilbert-base-uncased-finetuned-squad | 0 | 2 | transformers | 2023-05-12T15:25:38 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- squad
model-index:
- name: distilbert-base-uncased-finetuned-squad
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-squad
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the squad dataset.
It achieves the following results on the evaluation set:
- Loss: 3.7286
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 12
- eval_batch_size: 12
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 21 | 3.7602 |
| No log | 2.0 | 42 | 3.7330 |
| No log | 3.0 | 63 | 3.7286 |
### Framework versions
- Transformers 4.28.0
- Pytorch 2.0.1+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
| 1,432 | [
[
-0.03131103515625,
-0.050140380859375,
0.01103973388671875,
0.02392578125,
-0.02581787109375,
-0.0000883936882019043,
-0.00555419921875,
-0.00997161865234375,
0.0016946792602539062,
0.02032470703125,
-0.06988525390625,
-0.041595458984375,
-0.052154541015625,
... |
guoluo/Bert_class_6e-07_1000epoch | 2023-05-12T15:54:43.000Z | [
"transformers",
"tf",
"bert",
"text-classification",
"generated_from_keras_callback",
"endpoints_compatible",
"region:us"
] | text-classification | guoluo | null | null | guoluo/Bert_class_6e-07_1000epoch | 0 | 2 | transformers | 2023-05-12T15:53:56 | ---
tags:
- generated_from_keras_callback
model-index:
- name: Bert_class_6e-07_1000epoch
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# Bert_class_6e-07_1000epoch
This model is a fine-tuned version of [guoluo/Bert_1.5e_07](https://huggingface.co/guoluo/Bert_1.5e_07) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.0016
- Train Accuracy: 1.0
- Validation Loss: 1.9732
- Validation Accuracy: 0.7254
- Train Lr: 4.4465096e-07
- Epoch: 999
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'learning_rate': 4.4465096e-07, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False}
- training_precision: float32
### Training results
| Train Loss | Train Accuracy | Validation Loss | Validation Accuracy | Train Lr | Epoch |
|:----------:|:--------------:|:---------------:|:-------------------:|:-------------:|:-----:|
| 1.3494 | 0.3459 | 1.2196 | 0.6761 | 6e-07 | 0 |
| 1.1404 | 0.6776 | 1.0650 | 0.6761 | 5.999997e-07 | 1 |
| 1.0351 | 0.6776 | 1.0006 | 0.6761 | 5.9999894e-07 | 2 |
| 0.9820 | 0.6776 | 0.9771 | 0.6761 | 5.9999786e-07 | 3 |
| 0.9609 | 0.6776 | 0.9674 | 0.6761 | 5.9999644e-07 | 4 |
| 0.9541 | 0.6776 | 0.9616 | 0.6761 | 5.999946e-07 | 5 |
| 0.9431 | 0.6776 | 0.9569 | 0.6761 | 5.9999246e-07 | 6 |
| 0.9373 | 0.6776 | 0.9525 | 0.6761 | 5.9998996e-07 | 7 |
| 0.9363 | 0.6776 | 0.9496 | 0.6761 | 5.9998706e-07 | 8 |
| 0.9293 | 0.6776 | 0.9470 | 0.6761 | 5.999838e-07 | 9 |
| 0.9222 | 0.6776 | 0.9439 | 0.6761 | 5.9998024e-07 | 10 |
| 0.9129 | 0.6776 | 0.9410 | 0.6761 | 5.9997626e-07 | 11 |
| 0.9043 | 0.6776 | 0.9384 | 0.6761 | 5.9997194e-07 | 12 |
| 0.9058 | 0.6776 | 0.9352 | 0.6761 | 5.999673e-07 | 13 |
| 0.9016 | 0.6776 | 0.9316 | 0.6761 | 5.999622e-07 | 14 |
| 0.9001 | 0.6776 | 0.9283 | 0.6761 | 5.999568e-07 | 15 |
| 0.8924 | 0.6776 | 0.9251 | 0.6761 | 5.999511e-07 | 16 |
| 0.8894 | 0.6776 | 0.9215 | 0.6761 | 5.9994494e-07 | 17 |
| 0.8886 | 0.6776 | 0.9194 | 0.6761 | 5.9993846e-07 | 18 |
| 0.8718 | 0.6776 | 0.9165 | 0.6761 | 5.9993164e-07 | 19 |
| 0.8718 | 0.6753 | 0.9118 | 0.6761 | 5.999244e-07 | 20 |
| 0.8532 | 0.6800 | 0.9096 | 0.6761 | 5.9991686e-07 | 21 |
| 0.8565 | 0.6800 | 0.9065 | 0.6761 | 5.9990896e-07 | 22 |
| 0.8457 | 0.6824 | 0.9038 | 0.6761 | 5.9990066e-07 | 23 |
| 0.8394 | 0.6847 | 0.9013 | 0.6761 | 5.99892e-07 | 24 |
| 0.8383 | 0.6847 | 0.8963 | 0.6761 | 5.9988304e-07 | 25 |
| 0.8350 | 0.6871 | 0.8944 | 0.6761 | 5.9987366e-07 | 26 |
| 0.8259 | 0.6871 | 0.8901 | 0.6761 | 5.9986394e-07 | 27 |
| 0.8159 | 0.6918 | 0.8867 | 0.6761 | 5.998539e-07 | 28 |
| 0.8123 | 0.6847 | 0.8839 | 0.6761 | 5.998434e-07 | 29 |
| 0.7992 | 0.6918 | 0.8811 | 0.6761 | 5.998326e-07 | 30 |
| 0.8039 | 0.6918 | 0.8780 | 0.6690 | 5.998215e-07 | 31 |
| 0.7898 | 0.6988 | 0.8750 | 0.6620 | 5.9980994e-07 | 32 |
| 0.7820 | 0.6965 | 0.8725 | 0.6690 | 5.9979806e-07 | 33 |
| 0.7759 | 0.7012 | 0.8687 | 0.6690 | 5.9978584e-07 | 34 |
| 0.7673 | 0.7082 | 0.8656 | 0.6831 | 5.997732e-07 | 35 |
| 0.7646 | 0.7200 | 0.8637 | 0.6901 | 5.9976026e-07 | 36 |
| 0.7562 | 0.7106 | 0.8617 | 0.6901 | 5.9974695e-07 | 37 |
| 0.7492 | 0.7271 | 0.8623 | 0.7042 | 5.9973326e-07 | 38 |
| 0.7426 | 0.7365 | 0.8568 | 0.6901 | 5.997192e-07 | 39 |
| 0.7483 | 0.7271 | 0.8548 | 0.6901 | 5.9970483e-07 | 40 |
| 0.7366 | 0.7153 | 0.8511 | 0.7042 | 5.9969005e-07 | 41 |
| 0.7317 | 0.7318 | 0.8497 | 0.7183 | 5.9967493e-07 | 42 |
| 0.7194 | 0.7341 | 0.8487 | 0.6972 | 5.996595e-07 | 43 |
| 0.7099 | 0.7412 | 0.8472 | 0.7183 | 5.9964367e-07 | 44 |
| 0.7069 | 0.7388 | 0.8456 | 0.7113 | 5.9962747e-07 | 45 |
| 0.7077 | 0.7506 | 0.8441 | 0.7113 | 5.996109e-07 | 46 |
| 0.6933 | 0.7435 | 0.8415 | 0.7254 | 5.9959405e-07 | 47 |
| 0.6937 | 0.7482 | 0.8428 | 0.7113 | 5.9957677e-07 | 48 |
| 0.6873 | 0.7365 | 0.8407 | 0.7113 | 5.9955914e-07 | 49 |
| 0.6819 | 0.7671 | 0.8385 | 0.7113 | 5.995412e-07 | 50 |
| 0.6759 | 0.7694 | 0.8387 | 0.7042 | 5.995228e-07 | 51 |
| 0.6633 | 0.7859 | 0.8365 | 0.7042 | 5.995041e-07 | 52 |
| 0.6770 | 0.7529 | 0.8346 | 0.6972 | 5.994851e-07 | 53 |
| 0.6642 | 0.7624 | 0.8352 | 0.6972 | 5.9946564e-07 | 54 |
| 0.6620 | 0.7694 | 0.8356 | 0.6972 | 5.9944585e-07 | 55 |
| 0.6530 | 0.7694 | 0.8317 | 0.6972 | 5.9942573e-07 | 56 |
| 0.6458 | 0.7765 | 0.8301 | 0.6972 | 5.994052e-07 | 57 |
| 0.6394 | 0.7671 | 0.8293 | 0.6901 | 5.9938435e-07 | 58 |
| 0.6482 | 0.7765 | 0.8296 | 0.6972 | 5.9936315e-07 | 59 |
| 0.6377 | 0.7741 | 0.8302 | 0.7254 | 5.9934155e-07 | 60 |
| 0.6124 | 0.7929 | 0.8294 | 0.7113 | 5.993196e-07 | 61 |
| 0.6242 | 0.7788 | 0.8294 | 0.7183 | 5.992973e-07 | 62 |
| 0.6243 | 0.7882 | 0.8284 | 0.7183 | 5.9927464e-07 | 63 |
| 0.6102 | 0.7835 | 0.8257 | 0.6972 | 5.992516e-07 | 64 |
| 0.6175 | 0.7765 | 0.8260 | 0.7254 | 5.9922826e-07 | 65 |
| 0.6181 | 0.7835 | 0.8285 | 0.7324 | 5.9920455e-07 | 66 |
| 0.5997 | 0.7882 | 0.8247 | 0.7254 | 5.9918045e-07 | 67 |
| 0.5913 | 0.7906 | 0.8225 | 0.7183 | 5.99156e-07 | 68 |
| 0.5945 | 0.7835 | 0.8227 | 0.7254 | 5.991312e-07 | 69 |
| 0.5842 | 0.7835 | 0.8237 | 0.7254 | 5.9910604e-07 | 70 |
| 0.5878 | 0.7976 | 0.8235 | 0.7254 | 5.990805e-07 | 71 |
| 0.5762 | 0.7906 | 0.8220 | 0.7254 | 5.9905466e-07 | 72 |
| 0.5870 | 0.7882 | 0.8235 | 0.7324 | 5.990284e-07 | 73 |
| 0.5694 | 0.7976 | 0.8227 | 0.7183 | 5.990018e-07 | 74 |
| 0.5758 | 0.7882 | 0.8216 | 0.7254 | 5.9897485e-07 | 75 |
| 0.5606 | 0.8000 | 0.8190 | 0.7324 | 5.9894757e-07 | 76 |
| 0.5767 | 0.8000 | 0.8196 | 0.7254 | 5.989199e-07 | 77 |
| 0.5613 | 0.8094 | 0.8237 | 0.7183 | 5.9889186e-07 | 78 |
| 0.5484 | 0.7976 | 0.8187 | 0.7324 | 5.988635e-07 | 79 |
| 0.5458 | 0.8071 | 0.8217 | 0.7183 | 5.9883473e-07 | 80 |
| 0.5531 | 0.8047 | 0.8228 | 0.7183 | 5.988056e-07 | 81 |
| 0.5409 | 0.8024 | 0.8182 | 0.7254 | 5.987762e-07 | 82 |
| 0.5540 | 0.7835 | 0.8182 | 0.7183 | 5.9874634e-07 | 83 |
| 0.5290 | 0.8071 | 0.8214 | 0.7183 | 5.9871616e-07 | 84 |
| 0.5181 | 0.8141 | 0.8255 | 0.7113 | 5.9868563e-07 | 85 |
| 0.5092 | 0.8306 | 0.8221 | 0.7183 | 5.9865476e-07 | 86 |
| 0.5148 | 0.8071 | 0.8194 | 0.7254 | 5.986235e-07 | 87 |
| 0.5197 | 0.8071 | 0.8188 | 0.7254 | 5.985919e-07 | 88 |
| 0.5165 | 0.8047 | 0.8231 | 0.7183 | 5.9855995e-07 | 89 |
| 0.5102 | 0.8282 | 0.8216 | 0.7113 | 5.985276e-07 | 90 |
| 0.5115 | 0.8141 | 0.8183 | 0.7254 | 5.984949e-07 | 91 |
| 0.5032 | 0.8118 | 0.8188 | 0.7183 | 5.984619e-07 | 92 |
| 0.4980 | 0.8118 | 0.8280 | 0.7113 | 5.984285e-07 | 93 |
| 0.4848 | 0.8259 | 0.8216 | 0.7254 | 5.9839476e-07 | 94 |
| 0.4923 | 0.8329 | 0.8232 | 0.7113 | 5.9836066e-07 | 95 |
| 0.4902 | 0.8235 | 0.8246 | 0.7113 | 5.983262e-07 | 96 |
| 0.4764 | 0.8306 | 0.8278 | 0.7113 | 5.9829136e-07 | 97 |
| 0.4776 | 0.8306 | 0.8236 | 0.7113 | 5.982562e-07 | 98 |
| 0.4733 | 0.8329 | 0.8235 | 0.7113 | 5.9822065e-07 | 99 |
| 0.4755 | 0.8424 | 0.8278 | 0.7042 | 5.981848e-07 | 100 |
| 0.4651 | 0.8376 | 0.8204 | 0.7113 | 5.981485e-07 | 101 |
| 0.4796 | 0.8306 | 0.8263 | 0.7042 | 5.981119e-07 | 102 |
| 0.4534 | 0.8306 | 0.8256 | 0.7254 | 5.9807496e-07 | 103 |
| 0.4551 | 0.8329 | 0.8307 | 0.7042 | 5.980377e-07 | 104 |
| 0.4562 | 0.8400 | 0.8253 | 0.7113 | 5.98e-07 | 105 |
| 0.4532 | 0.8329 | 0.8329 | 0.7324 | 5.9796196e-07 | 106 |
| 0.4589 | 0.8447 | 0.8274 | 0.7113 | 5.979236e-07 | 107 |
| 0.4460 | 0.8424 | 0.8349 | 0.7183 | 5.978848e-07 | 108 |
| 0.4429 | 0.8447 | 0.8312 | 0.7113 | 5.978457e-07 | 109 |
| 0.4311 | 0.8376 | 0.8340 | 0.7113 | 5.9780626e-07 | 110 |
| 0.4301 | 0.8518 | 0.8362 | 0.7183 | 5.977665e-07 | 111 |
| 0.4294 | 0.8541 | 0.8379 | 0.7183 | 5.977263e-07 | 112 |
| 0.4307 | 0.8447 | 0.8372 | 0.7113 | 5.9768576e-07 | 113 |
| 0.4287 | 0.8353 | 0.8495 | 0.7113 | 5.976449e-07 | 114 |
| 0.4241 | 0.8541 | 0.8363 | 0.7113 | 5.976037e-07 | 115 |
| 0.4063 | 0.8541 | 0.8489 | 0.7183 | 5.9756206e-07 | 116 |
| 0.4193 | 0.8541 | 0.8471 | 0.7113 | 5.975201e-07 | 117 |
| 0.4082 | 0.8518 | 0.8460 | 0.7183 | 5.974778e-07 | 118 |
| 0.4137 | 0.8494 | 0.8437 | 0.7113 | 5.974352e-07 | 119 |
| 0.4151 | 0.8424 | 0.8581 | 0.7183 | 5.9739216e-07 | 120 |
| 0.4083 | 0.8588 | 0.8505 | 0.7113 | 5.973488e-07 | 121 |
| 0.4028 | 0.8682 | 0.8488 | 0.7113 | 5.973051e-07 | 122 |
| 0.3998 | 0.8565 | 0.8537 | 0.7183 | 5.97261e-07 | 123 |
| 0.3978 | 0.8612 | 0.8515 | 0.7113 | 5.9721657e-07 | 124 |
| 0.3859 | 0.8776 | 0.8555 | 0.7113 | 5.971718e-07 | 125 |
| 0.3994 | 0.8612 | 0.8553 | 0.7183 | 5.9712664e-07 | 126 |
| 0.3876 | 0.8682 | 0.8569 | 0.7183 | 5.9708117e-07 | 127 |
| 0.4019 | 0.8424 | 0.8539 | 0.7183 | 5.970353e-07 | 128 |
| 0.3757 | 0.8800 | 0.8644 | 0.7042 | 5.969891e-07 | 129 |
| 0.3696 | 0.8753 | 0.8550 | 0.7254 | 5.969425e-07 | 130 |
| 0.3706 | 0.8706 | 0.8592 | 0.7113 | 5.9689563e-07 | 131 |
| 0.3801 | 0.8729 | 0.8579 | 0.7254 | 5.9684834e-07 | 132 |
| 0.3740 | 0.8682 | 0.8553 | 0.7183 | 5.968007e-07 | 133 |
| 0.3728 | 0.8588 | 0.8628 | 0.7183 | 5.967527e-07 | 134 |
| 0.3671 | 0.8824 | 0.8625 | 0.7183 | 5.967044e-07 | 135 |
| 0.3592 | 0.8776 | 0.8593 | 0.7183 | 5.966557e-07 | 136 |
| 0.3556 | 0.8635 | 0.8616 | 0.7254 | 5.9660664e-07 | 137 |
| 0.3530 | 0.8824 | 0.8646 | 0.7254 | 5.9655724e-07 | 138 |
| 0.3656 | 0.8753 | 0.8633 | 0.7183 | 5.965075e-07 | 139 |
| 0.3501 | 0.8871 | 0.8632 | 0.7183 | 5.964574e-07 | 140 |
| 0.3623 | 0.8729 | 0.8680 | 0.7113 | 5.9640695e-07 | 141 |
| 0.3437 | 0.8941 | 0.8703 | 0.7183 | 5.9635613e-07 | 142 |
| 0.3397 | 0.8941 | 0.8710 | 0.7183 | 5.96305e-07 | 143 |
| 0.3143 | 0.9035 | 0.8710 | 0.7254 | 5.9625347e-07 | 144 |
| 0.3306 | 0.8753 | 0.8770 | 0.7183 | 5.9620163e-07 | 145 |
| 0.3378 | 0.8753 | 0.8768 | 0.7254 | 5.961494e-07 | 146 |
| 0.3277 | 0.8918 | 0.8802 | 0.7042 | 5.960968e-07 | 147 |
| 0.3216 | 0.8941 | 0.8809 | 0.7324 | 5.960439e-07 | 148 |
| 0.3530 | 0.8824 | 0.8838 | 0.6972 | 5.959906e-07 | 149 |
| 0.3338 | 0.8753 | 0.8801 | 0.7254 | 5.9593697e-07 | 150 |
| 0.3114 | 0.8894 | 0.8843 | 0.7183 | 5.9588297e-07 | 151 |
| 0.3228 | 0.8988 | 0.8834 | 0.7113 | 5.958286e-07 | 152 |
| 0.3162 | 0.8988 | 0.8882 | 0.7042 | 5.9577394e-07 | 153 |
| 0.3125 | 0.8965 | 0.8867 | 0.7042 | 5.957189e-07 | 154 |
| 0.3120 | 0.8988 | 0.8920 | 0.7183 | 5.956635e-07 | 155 |
| 0.2984 | 0.8988 | 0.8921 | 0.6972 | 5.9560773e-07 | 156 |
| 0.3147 | 0.8941 | 0.8891 | 0.7254 | 5.955516e-07 | 157 |
| 0.3136 | 0.8871 | 0.8990 | 0.7042 | 5.954952e-07 | 158 |
| 0.2914 | 0.9059 | 0.8947 | 0.7324 | 5.954384e-07 | 159 |
| 0.2877 | 0.9035 | 0.9009 | 0.7113 | 5.953812e-07 | 160 |
| 0.2859 | 0.9082 | 0.9017 | 0.7183 | 5.953237e-07 | 161 |
| 0.3024 | 0.9082 | 0.8994 | 0.7183 | 5.952658e-07 | 162 |
| 0.2909 | 0.9082 | 0.9036 | 0.7183 | 5.952076e-07 | 163 |
| 0.2917 | 0.9082 | 0.9023 | 0.7254 | 5.9514906e-07 | 164 |
| 0.2749 | 0.9224 | 0.9049 | 0.7183 | 5.9509017e-07 | 165 |
| 0.3004 | 0.8918 | 0.9072 | 0.7254 | 5.950309e-07 | 166 |
| 0.2936 | 0.9082 | 0.9147 | 0.6972 | 5.9497125e-07 | 167 |
| 0.2836 | 0.9059 | 0.9157 | 0.7113 | 5.949113e-07 | 168 |
| 0.2616 | 0.9247 | 0.9134 | 0.7183 | 5.94851e-07 | 169 |
| 0.2775 | 0.9082 | 0.9164 | 0.7183 | 5.947903e-07 | 170 |
| 0.2850 | 0.9059 | 0.9163 | 0.7254 | 5.9472933e-07 | 171 |
| 0.2721 | 0.9012 | 0.9207 | 0.7183 | 5.9466794e-07 | 172 |
| 0.2572 | 0.9271 | 0.9197 | 0.7254 | 5.946062e-07 | 173 |
| 0.2724 | 0.9012 | 0.9206 | 0.7183 | 5.9454413e-07 | 174 |
| 0.2564 | 0.9200 | 0.9281 | 0.6972 | 5.944817e-07 | 175 |
| 0.2622 | 0.9224 | 0.9254 | 0.7254 | 5.9441896e-07 | 176 |
| 0.2592 | 0.9129 | 0.9311 | 0.7113 | 5.9435587e-07 | 177 |
| 0.2557 | 0.9176 | 0.9315 | 0.7183 | 5.942924e-07 | 178 |
| 0.2547 | 0.9176 | 0.9313 | 0.7254 | 5.9422854e-07 | 179 |
| 0.2440 | 0.9341 | 0.9375 | 0.7113 | 5.9416436e-07 | 180 |
| 0.2401 | 0.9318 | 0.9389 | 0.7113 | 5.9409984e-07 | 181 |
| 0.2415 | 0.9224 | 0.9369 | 0.7254 | 5.94035e-07 | 182 |
| 0.2582 | 0.9106 | 0.9427 | 0.7183 | 5.939698e-07 | 183 |
| 0.2447 | 0.9224 | 0.9467 | 0.7113 | 5.9390425e-07 | 184 |
| 0.2456 | 0.9153 | 0.9416 | 0.7183 | 5.938383e-07 | 185 |
| 0.2343 | 0.9271 | 0.9481 | 0.7113 | 5.9377203e-07 | 186 |
| 0.2355 | 0.9294 | 0.9468 | 0.7183 | 5.937054e-07 | 187 |
| 0.2223 | 0.9365 | 0.9531 | 0.7042 | 5.9363845e-07 | 188 |
| 0.2511 | 0.9059 | 0.9528 | 0.7324 | 5.9357114e-07 | 189 |
| 0.2351 | 0.9388 | 0.9607 | 0.7183 | 5.935035e-07 | 190 |
| 0.2386 | 0.9106 | 0.9595 | 0.7183 | 5.934355e-07 | 191 |
| 0.2424 | 0.9224 | 0.9644 | 0.7183 | 5.9336713e-07 | 192 |
| 0.2227 | 0.9341 | 0.9657 | 0.7254 | 5.932984e-07 | 193 |
| 0.2221 | 0.9459 | 0.9603 | 0.7324 | 5.9322934e-07 | 194 |
| 0.2274 | 0.9318 | 0.9679 | 0.7113 | 5.9315994e-07 | 195 |
| 0.2182 | 0.9435 | 0.9695 | 0.7113 | 5.930902e-07 | 196 |
| 0.2175 | 0.9365 | 0.9704 | 0.7113 | 5.930201e-07 | 197 |
| 0.2206 | 0.9294 | 0.9682 | 0.7324 | 5.929497e-07 | 198 |
| 0.2078 | 0.9318 | 0.9737 | 0.7113 | 5.928789e-07 | 199 |
| 0.2088 | 0.9435 | 0.9763 | 0.7183 | 5.928078e-07 | 200 |
| 0.2208 | 0.9318 | 0.9788 | 0.7183 | 5.927363e-07 | 201 |
| 0.2102 | 0.9388 | 0.9755 | 0.7254 | 5.9266443e-07 | 202 |
| 0.2131 | 0.9294 | 0.9838 | 0.7183 | 5.9259224e-07 | 203 |
| 0.2054 | 0.9341 | 0.9804 | 0.7324 | 5.925197e-07 | 204 |
| 0.2035 | 0.9294 | 0.9946 | 0.7113 | 5.9244684e-07 | 205 |
| 0.1990 | 0.9341 | 0.9843 | 0.7254 | 5.923736e-07 | 206 |
| 0.1958 | 0.9435 | 0.9984 | 0.6972 | 5.9230007e-07 | 207 |
| 0.2006 | 0.9482 | 0.9917 | 0.7254 | 5.922262e-07 | 208 |
| 0.2022 | 0.9341 | 1.0000 | 0.7113 | 5.9215193e-07 | 209 |
| 0.1859 | 0.9506 | 0.9922 | 0.7254 | 5.9207736e-07 | 210 |
| 0.1975 | 0.9365 | 1.0027 | 0.7042 | 5.9200244e-07 | 211 |
| 0.1943 | 0.9459 | 1.0022 | 0.7254 | 5.919271e-07 | 212 |
| 0.1916 | 0.9388 | 0.9982 | 0.7324 | 5.9185146e-07 | 213 |
| 0.1888 | 0.9435 | 1.0158 | 0.7042 | 5.9177546e-07 | 214 |
| 0.1973 | 0.9294 | 1.0019 | 0.7254 | 5.916991e-07 | 215 |
| 0.1827 | 0.9506 | 1.0096 | 0.7254 | 5.9162244e-07 | 216 |
| 0.1864 | 0.9388 | 1.0124 | 0.7042 | 5.915454e-07 | 217 |
| 0.1804 | 0.9553 | 1.0168 | 0.7183 | 5.9146805e-07 | 218 |
| 0.1788 | 0.9506 | 1.0189 | 0.7183 | 5.9139035e-07 | 219 |
| 0.1860 | 0.9388 | 1.0170 | 0.7183 | 5.913123e-07 | 220 |
| 0.1659 | 0.9694 | 1.0226 | 0.7254 | 5.912339e-07 | 221 |
| 0.1727 | 0.9459 | 1.0176 | 0.7324 | 5.911552e-07 | 222 |
| 0.1600 | 0.9576 | 1.0255 | 0.7042 | 5.910761e-07 | 223 |
| 0.1583 | 0.9600 | 1.0239 | 0.7113 | 5.909967e-07 | 224 |
| 0.1636 | 0.9576 | 1.0278 | 0.7254 | 5.9091695e-07 | 225 |
| 0.1656 | 0.9459 | 1.0370 | 0.7183 | 5.9083686e-07 | 226 |
| 0.1562 | 0.9624 | 1.0365 | 0.7183 | 5.9075643e-07 | 227 |
| 0.1552 | 0.9576 | 1.0361 | 0.7183 | 5.906756e-07 | 228 |
| 0.1661 | 0.9459 | 1.0394 | 0.7254 | 5.905944e-07 | 229 |
| 0.1643 | 0.9435 | 1.0418 | 0.7254 | 5.905129e-07 | 230 |
| 0.1632 | 0.9459 | 1.0397 | 0.7254 | 5.9043106e-07 | 231 |
| 0.1536 | 0.9529 | 1.0555 | 0.6972 | 5.9034886e-07 | 232 |
| 0.1539 | 0.9553 | 1.0488 | 0.7113 | 5.902663e-07 | 233 |
| 0.1598 | 0.9506 | 1.0646 | 0.6972 | 5.9018345e-07 | 234 |
| 0.1472 | 0.9529 | 1.0525 | 0.7183 | 5.901002e-07 | 235 |
| 0.1574 | 0.9506 | 1.0647 | 0.6972 | 5.9001667e-07 | 236 |
| 0.1618 | 0.9482 | 1.0565 | 0.7183 | 5.8993277e-07 | 237 |
| 0.1525 | 0.9600 | 1.0578 | 0.7324 | 5.898485e-07 | 238 |
| 0.1562 | 0.9412 | 1.0654 | 0.7254 | 5.8976394e-07 | 239 |
| 0.1510 | 0.9506 | 1.0689 | 0.7183 | 5.89679e-07 | 240 |
| 0.1292 | 0.9694 | 1.0708 | 0.7183 | 5.8959375e-07 | 241 |
| 0.1343 | 0.9694 | 1.0690 | 0.7254 | 5.8950815e-07 | 242 |
| 0.1461 | 0.9553 | 1.0737 | 0.7183 | 5.894222e-07 | 243 |
| 0.1350 | 0.9553 | 1.0748 | 0.7254 | 5.893359e-07 | 244 |
| 0.1332 | 0.9671 | 1.0815 | 0.7254 | 5.892493e-07 | 245 |
| 0.1378 | 0.9647 | 1.0798 | 0.7254 | 5.891623e-07 | 246 |
| 0.1219 | 0.9694 | 1.0869 | 0.7324 | 5.89075e-07 | 247 |
| 0.1348 | 0.9576 | 1.0842 | 0.7324 | 5.8898735e-07 | 248 |
| 0.1434 | 0.9553 | 1.0929 | 0.7183 | 5.8889935e-07 | 249 |
| 0.1382 | 0.9529 | 1.0928 | 0.7183 | 5.88811e-07 | 250 |
| 0.1444 | 0.9600 | 1.0961 | 0.7183 | 5.8872234e-07 | 251 |
| 0.1279 | 0.9718 | 1.0967 | 0.7183 | 5.886333e-07 | 252 |
| 0.1362 | 0.9647 | 1.1076 | 0.7113 | 5.8854397e-07 | 253 |
| 0.1396 | 0.9600 | 1.0993 | 0.7394 | 5.8845427e-07 | 254 |
| 0.1308 | 0.9671 | 1.1054 | 0.7183 | 5.8836423e-07 | 255 |
| 0.1216 | 0.9694 | 1.1080 | 0.7183 | 5.8827385e-07 | 256 |
| 0.1147 | 0.9741 | 1.1088 | 0.7254 | 5.881831e-07 | 257 |
| 0.1122 | 0.9741 | 1.1154 | 0.7183 | 5.8809206e-07 | 258 |
| 0.1202 | 0.9718 | 1.1180 | 0.7183 | 5.880007e-07 | 259 |
| 0.1230 | 0.9671 | 1.1184 | 0.7183 | 5.87909e-07 | 260 |
| 0.1192 | 0.9694 | 1.1151 | 0.7254 | 5.87817e-07 | 261 |
| 0.1138 | 0.9741 | 1.1237 | 0.7183 | 5.877246e-07 | 262 |
| 0.1301 | 0.9647 | 1.1263 | 0.7183 | 5.876319e-07 | 263 |
| 0.1140 | 0.9765 | 1.1264 | 0.7183 | 5.8753886e-07 | 264 |
| 0.1025 | 0.9741 | 1.1213 | 0.7254 | 5.8744547e-07 | 265 |
| 0.1144 | 0.9694 | 1.1261 | 0.7254 | 5.8735174e-07 | 266 |
| 0.1311 | 0.9600 | 1.1274 | 0.7324 | 5.8725766e-07 | 267 |
| 0.0971 | 0.9788 | 1.1272 | 0.7324 | 5.8716324e-07 | 268 |
| 0.1023 | 0.9812 | 1.1344 | 0.7183 | 5.870685e-07 | 269 |
| 0.1048 | 0.9741 | 1.1364 | 0.7183 | 5.869734e-07 | 270 |
| 0.1080 | 0.9718 | 1.1433 | 0.7183 | 5.8687795e-07 | 271 |
| 0.1099 | 0.9765 | 1.1434 | 0.7183 | 5.8678216e-07 | 272 |
| 0.0997 | 0.9765 | 1.1484 | 0.7183 | 5.8668604e-07 | 273 |
| 0.1130 | 0.9694 | 1.1461 | 0.7254 | 5.865896e-07 | 274 |
| 0.1101 | 0.9718 | 1.1504 | 0.7254 | 5.8649283e-07 | 275 |
| 0.1059 | 0.9765 | 1.1507 | 0.7254 | 5.8639574e-07 | 276 |
| 0.1019 | 0.9741 | 1.1551 | 0.7254 | 5.862983e-07 | 277 |
| 0.1085 | 0.9671 | 1.1581 | 0.7183 | 5.8620054e-07 | 278 |
| 0.0946 | 0.9835 | 1.1625 | 0.7254 | 5.8610243e-07 | 279 |
| 0.1178 | 0.9647 | 1.1619 | 0.7183 | 5.86004e-07 | 280 |
| 0.1010 | 0.9741 | 1.1612 | 0.7254 | 5.859052e-07 | 281 |
| 0.0893 | 0.9906 | 1.1722 | 0.7183 | 5.8580605e-07 | 282 |
| 0.0985 | 0.9835 | 1.1724 | 0.7254 | 5.857066e-07 | 283 |
| 0.1049 | 0.9788 | 1.1756 | 0.7254 | 5.856068e-07 | 284 |
| 0.0965 | 0.9812 | 1.1866 | 0.7113 | 5.855067e-07 | 285 |
| 0.0933 | 0.9788 | 1.1836 | 0.7254 | 5.8540627e-07 | 286 |
| 0.0893 | 0.9788 | 1.1773 | 0.7254 | 5.853055e-07 | 287 |
| 0.0884 | 0.9812 | 1.1877 | 0.7254 | 5.8520436e-07 | 288 |
| 0.0948 | 0.9835 | 1.1976 | 0.7113 | 5.851029e-07 | 289 |
| 0.0882 | 0.9812 | 1.2017 | 0.7042 | 5.850011e-07 | 290 |
| 0.0871 | 0.9835 | 1.1941 | 0.7183 | 5.8489894e-07 | 291 |
| 0.0855 | 0.9812 | 1.1915 | 0.7183 | 5.847965e-07 | 292 |
| 0.0914 | 0.9835 | 1.1997 | 0.7254 | 5.8469374e-07 | 293 |
| 0.0879 | 0.9812 | 1.1964 | 0.7394 | 5.845906e-07 | 294 |
| 0.0911 | 0.9788 | 1.2113 | 0.7113 | 5.8448717e-07 | 295 |
| 0.0793 | 0.9859 | 1.2093 | 0.7183 | 5.843834e-07 | 296 |
| 0.0749 | 0.9835 | 1.2128 | 0.7324 | 5.8427923e-07 | 297 |
| 0.0869 | 0.9812 | 1.2238 | 0.7113 | 5.8417476e-07 | 298 |
| 0.0833 | 0.9859 | 1.2126 | 0.7183 | 5.8407e-07 | 299 |
| 0.0809 | 0.9882 | 1.2143 | 0.7113 | 5.839649e-07 | 300 |
| 0.0885 | 0.9788 | 1.2199 | 0.7183 | 5.8385945e-07 | 301 |
| 0.0830 | 0.9741 | 1.2274 | 0.7183 | 5.8375366e-07 | 302 |
| 0.0743 | 0.9835 | 1.2243 | 0.7183 | 5.8364753e-07 | 303 |
| 0.0714 | 0.9882 | 1.2301 | 0.7183 | 5.835411e-07 | 304 |
| 0.0855 | 0.9835 | 1.2338 | 0.7183 | 5.8343437e-07 | 305 |
| 0.0858 | 0.9859 | 1.2484 | 0.7183 | 5.833273e-07 | 306 |
| 0.0813 | 0.9835 | 1.2430 | 0.7183 | 5.8321984e-07 | 307 |
| 0.0749 | 0.9812 | 1.2510 | 0.7183 | 5.8311207e-07 | 308 |
| 0.0819 | 0.9835 | 1.2436 | 0.7183 | 5.8300395e-07 | 309 |
| 0.0694 | 0.9859 | 1.2412 | 0.7254 | 5.8289555e-07 | 310 |
| 0.0791 | 0.9882 | 1.2442 | 0.7183 | 5.827868e-07 | 311 |
| 0.0692 | 0.9906 | 1.2445 | 0.7183 | 5.826777e-07 | 312 |
| 0.0775 | 0.9859 | 1.2473 | 0.7254 | 5.825683e-07 | 313 |
| 0.0747 | 0.9835 | 1.2607 | 0.7113 | 5.824586e-07 | 314 |
| 0.0811 | 0.9788 | 1.2550 | 0.7183 | 5.8234855e-07 | 315 |
| 0.0700 | 0.9906 | 1.2831 | 0.6901 | 5.8223816e-07 | 316 |
| 0.0686 | 0.9882 | 1.2643 | 0.7183 | 5.821274e-07 | 317 |
| 0.0641 | 0.9929 | 1.2677 | 0.7183 | 5.8201636e-07 | 318 |
| 0.0709 | 0.9882 | 1.2750 | 0.7183 | 5.81905e-07 | 319 |
| 0.0733 | 0.9859 | 1.2675 | 0.7183 | 5.817933e-07 | 320 |
| 0.0754 | 0.9859 | 1.2801 | 0.7183 | 5.8168126e-07 | 321 |
| 0.0702 | 0.9835 | 1.2777 | 0.7113 | 5.815689e-07 | 322 |
| 0.0587 | 0.9929 | 1.2755 | 0.7113 | 5.814562e-07 | 323 |
| 0.0581 | 0.9906 | 1.2862 | 0.7042 | 5.813432e-07 | 324 |
| 0.0584 | 0.9929 | 1.2723 | 0.7113 | 5.8122987e-07 | 325 |
| 0.0654 | 0.9882 | 1.2781 | 0.7183 | 5.811162e-07 | 326 |
| 0.0684 | 0.9835 | 1.2898 | 0.7113 | 5.810022e-07 | 327 |
| 0.0584 | 0.9929 | 1.2840 | 0.7042 | 5.808879e-07 | 328 |
| 0.0521 | 0.9953 | 1.2946 | 0.7183 | 5.8077325e-07 | 329 |
| 0.0665 | 0.9882 | 1.2912 | 0.7183 | 5.8065825e-07 | 330 |
| 0.0585 | 0.9882 | 1.2874 | 0.7183 | 5.80543e-07 | 331 |
| 0.0489 | 1.0 | 1.2858 | 0.7113 | 5.8042735e-07 | 332 |
| 0.0600 | 0.9859 | 1.2909 | 0.7183 | 5.803114e-07 | 333 |
| 0.0602 | 0.9882 | 1.3038 | 0.6972 | 5.8019515e-07 | 334 |
| 0.0587 | 0.9882 | 1.2926 | 0.7183 | 5.8007856e-07 | 335 |
| 0.0678 | 0.9835 | 1.2940 | 0.7183 | 5.7996164e-07 | 336 |
| 0.0556 | 0.9929 | 1.3075 | 0.7183 | 5.7984437e-07 | 337 |
| 0.0616 | 0.9906 | 1.3025 | 0.7113 | 5.797268e-07 | 338 |
| 0.0575 | 0.9906 | 1.3010 | 0.7183 | 5.796089e-07 | 339 |
| 0.0623 | 0.9835 | 1.3050 | 0.7183 | 5.794907e-07 | 340 |
| 0.0572 | 0.9906 | 1.3142 | 0.7183 | 5.7937217e-07 | 341 |
| 0.0645 | 0.9859 | 1.3080 | 0.7113 | 5.792533e-07 | 342 |
| 0.0694 | 0.9788 | 1.3146 | 0.7183 | 5.791341e-07 | 343 |
| 0.0595 | 0.9882 | 1.3117 | 0.7183 | 5.790146e-07 | 344 |
| 0.0748 | 0.9835 | 1.3069 | 0.7113 | 5.788948e-07 | 345 |
| 0.0628 | 0.9882 | 1.3189 | 0.7042 | 5.7877463e-07 | 346 |
| 0.0505 | 0.9906 | 1.3233 | 0.7042 | 5.786542e-07 | 347 |
| 0.0441 | 0.9929 | 1.3371 | 0.6901 | 5.785334e-07 | 348 |
| 0.0576 | 0.9906 | 1.3225 | 0.7042 | 5.7841225e-07 | 349 |
| 0.0514 | 0.9906 | 1.3288 | 0.6972 | 5.7829084e-07 | 350 |
| 0.0434 | 1.0 | 1.3170 | 0.7042 | 5.781691e-07 | 351 |
| 0.0505 | 0.9906 | 1.3194 | 0.7113 | 5.78047e-07 | 352 |
| 0.0512 | 0.9906 | 1.3276 | 0.7042 | 5.779246e-07 | 353 |
| 0.0538 | 0.9929 | 1.3298 | 0.7042 | 5.7780187e-07 | 354 |
| 0.0426 | 0.9953 | 1.3368 | 0.7042 | 5.776788e-07 | 355 |
| 0.0521 | 0.9859 | 1.3339 | 0.7183 | 5.7755545e-07 | 356 |
| 0.0437 | 0.9976 | 1.3350 | 0.7183 | 5.7743176e-07 | 357 |
| 0.0570 | 0.9882 | 1.3503 | 0.7042 | 5.7730773e-07 | 358 |
| 0.0521 | 0.9906 | 1.3513 | 0.7183 | 5.771834e-07 | 359 |
| 0.0472 | 0.9906 | 1.3464 | 0.7183 | 5.7705876e-07 | 360 |
| 0.0552 | 0.9859 | 1.3497 | 0.7113 | 5.769338e-07 | 361 |
| 0.0582 | 0.9906 | 1.3492 | 0.7183 | 5.7680853e-07 | 362 |
| 0.0562 | 0.9882 | 1.3590 | 0.7183 | 5.766829e-07 | 363 |
| 0.0473 | 0.9929 | 1.3563 | 0.7183 | 5.76557e-07 | 364 |
| 0.0511 | 0.9906 | 1.3605 | 0.7183 | 5.7643075e-07 | 365 |
| 0.0478 | 0.9906 | 1.3725 | 0.7183 | 5.763042e-07 | 366 |
| 0.0416 | 0.9953 | 1.3764 | 0.7113 | 5.7617734e-07 | 367 |
| 0.0568 | 0.9859 | 1.3730 | 0.7183 | 5.760501e-07 | 368 |
| 0.0412 | 0.9953 | 1.3719 | 0.7113 | 5.759226e-07 | 369 |
| 0.0460 | 0.9906 | 1.3778 | 0.6901 | 5.757948e-07 | 370 |
| 0.0518 | 0.9882 | 1.3870 | 0.7113 | 5.7566666e-07 | 371 |
| 0.0501 | 0.9859 | 1.3685 | 0.7042 | 5.755382e-07 | 372 |
| 0.0442 | 0.9929 | 1.3763 | 0.7183 | 5.7540944e-07 | 373 |
| 0.0509 | 0.9953 | 1.3730 | 0.7183 | 5.7528035e-07 | 374 |
| 0.0389 | 0.9929 | 1.3815 | 0.7042 | 5.751509e-07 | 375 |
| 0.0478 | 0.9929 | 1.3782 | 0.7183 | 5.750212e-07 | 376 |
| 0.0440 | 0.9906 | 1.3763 | 0.7183 | 5.7489115e-07 | 377 |
| 0.0368 | 0.9976 | 1.3816 | 0.7113 | 5.747608e-07 | 378 |
| 0.0402 | 0.9953 | 1.3734 | 0.7113 | 5.746301e-07 | 379 |
| 0.0367 | 0.9953 | 1.3829 | 0.7113 | 5.7449915e-07 | 380 |
| 0.0432 | 0.9976 | 1.3785 | 0.7183 | 5.7436785e-07 | 381 |
| 0.0437 | 0.9882 | 1.3797 | 0.7183 | 5.7423625e-07 | 382 |
| 0.0395 | 0.9953 | 1.3881 | 0.7113 | 5.741043e-07 | 383 |
| 0.0400 | 0.9929 | 1.3749 | 0.7113 | 5.739721e-07 | 384 |
| 0.0373 | 0.9953 | 1.3774 | 0.7113 | 5.7383954e-07 | 385 |
| 0.0473 | 0.9882 | 1.4225 | 0.6901 | 5.737067e-07 | 386 |
| 0.0433 | 0.9953 | 1.3829 | 0.7042 | 5.735735e-07 | 387 |
| 0.0609 | 0.9812 | 1.3872 | 0.7183 | 5.7344e-07 | 388 |
| 0.0326 | 0.9976 | 1.4030 | 0.6972 | 5.733062e-07 | 389 |
| 0.0420 | 0.9906 | 1.3772 | 0.7113 | 5.7317203e-07 | 390 |
| 0.0422 | 0.9929 | 1.3796 | 0.7113 | 5.730376e-07 | 391 |
| 0.0482 | 0.9906 | 1.4174 | 0.6901 | 5.729029e-07 | 392 |
| 0.0488 | 0.9906 | 1.3892 | 0.7183 | 5.727678e-07 | 393 |
| 0.0376 | 0.9929 | 1.3983 | 0.7183 | 5.726325e-07 | 394 |
| 0.0381 | 0.9976 | 1.3945 | 0.7113 | 5.724968e-07 | 395 |
| 0.0403 | 0.9953 | 1.3995 | 0.7042 | 5.723608e-07 | 396 |
| 0.0363 | 0.9929 | 1.4092 | 0.7042 | 5.722245e-07 | 397 |
| 0.0338 | 0.9953 | 1.4108 | 0.7113 | 5.720879e-07 | 398 |
| 0.0401 | 0.9906 | 1.4041 | 0.7042 | 5.71951e-07 | 399 |
| 0.0321 | 0.9953 | 1.4170 | 0.7113 | 5.7181376e-07 | 400 |
| 0.0341 | 0.9953 | 1.4182 | 0.7113 | 5.716762e-07 | 401 |
| 0.0442 | 0.9929 | 1.4323 | 0.6901 | 5.7153835e-07 | 402 |
| 0.0353 | 0.9929 | 1.4135 | 0.7113 | 5.7140016e-07 | 403 |
| 0.0324 | 0.9953 | 1.4202 | 0.7113 | 5.712617e-07 | 404 |
| 0.0365 | 0.9953 | 1.4134 | 0.7113 | 5.7112294e-07 | 405 |
| 0.0358 | 0.9953 | 1.4119 | 0.7042 | 5.7098384e-07 | 406 |
| 0.0381 | 0.9929 | 1.4301 | 0.6972 | 5.7084446e-07 | 407 |
| 0.0347 | 0.9976 | 1.4142 | 0.7113 | 5.7070474e-07 | 408 |
| 0.0433 | 0.9859 | 1.4135 | 0.7113 | 5.7056474e-07 | 409 |
| 0.0339 | 0.9953 | 1.4155 | 0.7113 | 5.704244e-07 | 410 |
| 0.0325 | 0.9953 | 1.4130 | 0.7113 | 5.7028376e-07 | 411 |
| 0.0335 | 0.9953 | 1.4181 | 0.7113 | 5.7014284e-07 | 412 |
| 0.0420 | 0.9859 | 1.4270 | 0.7042 | 5.700016e-07 | 413 |
| 0.0303 | 0.9929 | 1.4272 | 0.7042 | 5.6986005e-07 | 414 |
| 0.0271 | 1.0 | 1.4205 | 0.7113 | 5.6971817e-07 | 415 |
| 0.0256 | 0.9976 | 1.4246 | 0.7113 | 5.69576e-07 | 416 |
| 0.0385 | 0.9859 | 1.4340 | 0.7113 | 5.6943355e-07 | 417 |
| 0.0354 | 0.9953 | 1.4421 | 0.7113 | 5.6929076e-07 | 418 |
| 0.0277 | 0.9976 | 1.4466 | 0.7042 | 5.691477e-07 | 419 |
| 0.0374 | 0.9859 | 1.4395 | 0.7113 | 5.6900427e-07 | 420 |
| 0.0307 | 0.9976 | 1.4620 | 0.7042 | 5.6886057e-07 | 421 |
| 0.0341 | 0.9906 | 1.4348 | 0.7113 | 5.687166e-07 | 422 |
| 0.0317 | 0.9929 | 1.4453 | 0.7113 | 5.6857226e-07 | 423 |
| 0.0366 | 0.9906 | 1.4563 | 0.7042 | 5.6842765e-07 | 424 |
| 0.0338 | 0.9976 | 1.4364 | 0.7042 | 5.6828276e-07 | 425 |
| 0.0323 | 0.9953 | 1.4578 | 0.7042 | 5.681375e-07 | 426 |
| 0.0347 | 0.9929 | 1.4429 | 0.7113 | 5.67992e-07 | 427 |
| 0.0272 | 1.0 | 1.4494 | 0.7042 | 5.678462e-07 | 428 |
| 0.0458 | 0.9906 | 1.4470 | 0.7042 | 5.6770006e-07 | 429 |
| 0.0256 | 0.9976 | 1.4515 | 0.7113 | 5.675536e-07 | 430 |
| 0.0367 | 0.9859 | 1.4696 | 0.7042 | 5.674069e-07 | 431 |
| 0.0347 | 0.9906 | 1.4622 | 0.7113 | 5.6725986e-07 | 432 |
| 0.0343 | 0.9953 | 1.4668 | 0.6972 | 5.671125e-07 | 433 |
| 0.0293 | 0.9953 | 1.4618 | 0.7113 | 5.669649e-07 | 434 |
| 0.0293 | 0.9953 | 1.4626 | 0.7042 | 5.6681694e-07 | 435 |
| 0.0303 | 0.9929 | 1.4833 | 0.6901 | 5.666687e-07 | 436 |
| 0.0275 | 0.9976 | 1.4565 | 0.6972 | 5.6652016e-07 | 437 |
| 0.0326 | 0.9906 | 1.4655 | 0.7113 | 5.6637134e-07 | 438 |
| 0.0263 | 1.0 | 1.4675 | 0.7113 | 5.662222e-07 | 439 |
| 0.0289 | 0.9953 | 1.4666 | 0.7113 | 5.6607274e-07 | 440 |
| 0.0278 | 0.9929 | 1.4808 | 0.7042 | 5.65923e-07 | 441 |
| 0.0311 | 0.9953 | 1.4835 | 0.7042 | 5.6577295e-07 | 442 |
| 0.0248 | 0.9953 | 1.4796 | 0.7042 | 5.656226e-07 | 443 |
| 0.0294 | 0.9953 | 1.4766 | 0.7042 | 5.6547196e-07 | 444 |
| 0.0266 | 0.9953 | 1.5128 | 0.6972 | 5.6532105e-07 | 445 |
| 0.0248 | 0.9976 | 1.4791 | 0.7042 | 5.651698e-07 | 446 |
| 0.0229 | 0.9953 | 1.4781 | 0.7042 | 5.6501824e-07 | 447 |
| 0.0239 | 0.9929 | 1.4921 | 0.7113 | 5.648664e-07 | 448 |
| 0.0229 | 0.9976 | 1.4881 | 0.7113 | 5.647143e-07 | 449 |
| 0.0273 | 0.9976 | 1.4811 | 0.7183 | 5.6456184e-07 | 450 |
| 0.0288 | 0.9953 | 1.4889 | 0.7183 | 5.644091e-07 | 451 |
| 0.0246 | 0.9953 | 1.4993 | 0.7113 | 5.642561e-07 | 452 |
| 0.0253 | 0.9953 | 1.4920 | 0.7042 | 5.641028e-07 | 453 |
| 0.0189 | 1.0 | 1.4994 | 0.7113 | 5.6394913e-07 | 454 |
| 0.0288 | 0.9929 | 1.5018 | 0.7042 | 5.637952e-07 | 455 |
| 0.0229 | 0.9976 | 1.5044 | 0.7042 | 5.63641e-07 | 456 |
| 0.0320 | 0.9906 | 1.5273 | 0.6901 | 5.634865e-07 | 457 |
| 0.0184 | 1.0 | 1.4992 | 0.7113 | 5.633317e-07 | 458 |
| 0.0263 | 0.9976 | 1.4960 | 0.7042 | 5.6317657e-07 | 459 |
| 0.0273 | 0.9953 | 1.4951 | 0.7113 | 5.6302116e-07 | 460 |
| 0.0206 | 1.0 | 1.5116 | 0.7042 | 5.6286547e-07 | 461 |
| 0.0223 | 0.9976 | 1.5222 | 0.7042 | 5.627095e-07 | 462 |
| 0.0201 | 0.9976 | 1.5234 | 0.7042 | 5.625532e-07 | 463 |
| 0.0257 | 0.9976 | 1.5070 | 0.7042 | 5.623967e-07 | 464 |
| 0.0246 | 0.9976 | 1.5116 | 0.7042 | 5.622398e-07 | 465 |
| 0.0208 | 1.0 | 1.5197 | 0.7042 | 5.620826e-07 | 466 |
| 0.0262 | 0.9929 | 1.5132 | 0.7113 | 5.6192516e-07 | 467 |
| 0.0185 | 1.0 | 1.5214 | 0.7113 | 5.617674e-07 | 468 |
| 0.0266 | 0.9953 | 1.5139 | 0.7042 | 5.616094e-07 | 469 |
| 0.0236 | 0.9953 | 1.5259 | 0.7113 | 5.614511e-07 | 470 |
| 0.0277 | 0.9953 | 1.5145 | 0.7113 | 5.612925e-07 | 471 |
| 0.0211 | 0.9976 | 1.5214 | 0.7113 | 5.6113356e-07 | 472 |
| 0.0204 | 0.9976 | 1.5225 | 0.7113 | 5.6097434e-07 | 473 |
| 0.0282 | 0.9953 | 1.5294 | 0.7042 | 5.6081484e-07 | 474 |
| 0.0174 | 1.0 | 1.5124 | 0.7183 | 5.6065505e-07 | 475 |
| 0.0240 | 0.9953 | 1.5274 | 0.7042 | 5.60495e-07 | 476 |
| 0.0251 | 0.9953 | 1.5321 | 0.7113 | 5.603346e-07 | 477 |
| 0.0380 | 0.9859 | 1.5567 | 0.6972 | 5.60174e-07 | 478 |
| 0.0170 | 0.9976 | 1.5365 | 0.7113 | 5.6001306e-07 | 479 |
| 0.0277 | 0.9906 | 1.5184 | 0.7113 | 5.598518e-07 | 480 |
| 0.0139 | 1.0 | 1.5301 | 0.7113 | 5.5969025e-07 | 481 |
| 0.0182 | 0.9976 | 1.5406 | 0.7113 | 5.595284e-07 | 482 |
| 0.0174 | 1.0 | 1.5286 | 0.7113 | 5.593663e-07 | 483 |
| 0.0187 | 1.0 | 1.5353 | 0.7113 | 5.592039e-07 | 484 |
| 0.0121 | 1.0 | 1.5471 | 0.7113 | 5.590412e-07 | 485 |
| 0.0202 | 0.9976 | 1.5423 | 0.7113 | 5.5887824e-07 | 486 |
| 0.0156 | 1.0 | 1.5384 | 0.7042 | 5.58715e-07 | 487 |
| 0.0230 | 0.9976 | 1.5390 | 0.7042 | 5.5855145e-07 | 488 |
| 0.0192 | 0.9976 | 1.5614 | 0.7042 | 5.583876e-07 | 489 |
| 0.0212 | 0.9976 | 1.5359 | 0.7113 | 5.582235e-07 | 490 |
| 0.0218 | 0.9953 | 1.5425 | 0.7113 | 5.580591e-07 | 491 |
| 0.0207 | 0.9976 | 1.5397 | 0.7113 | 5.5789445e-07 | 492 |
| 0.0184 | 0.9976 | 1.5564 | 0.7113 | 5.577295e-07 | 493 |
| 0.0323 | 0.9929 | 1.5373 | 0.7042 | 5.5756425e-07 | 494 |
| 0.0192 | 0.9976 | 1.5321 | 0.7113 | 5.573987e-07 | 495 |
| 0.0168 | 0.9976 | 1.5556 | 0.6901 | 5.572329e-07 | 496 |
| 0.0217 | 0.9953 | 1.5356 | 0.7113 | 5.570668e-07 | 497 |
| 0.0229 | 0.9953 | 1.5500 | 0.7113 | 5.5690043e-07 | 498 |
| 0.0153 | 1.0 | 1.5562 | 0.7113 | 5.5673377e-07 | 499 |
| 0.0221 | 0.9953 | 1.5349 | 0.7113 | 5.565668e-07 | 500 |
| 0.0190 | 0.9976 | 1.5521 | 0.7042 | 5.563996e-07 | 501 |
| 0.0131 | 1.0 | 1.5415 | 0.7113 | 5.5623207e-07 | 502 |
| 0.0198 | 0.9976 | 1.5476 | 0.7042 | 5.5606426e-07 | 503 |
| 0.0170 | 0.9976 | 1.5511 | 0.7042 | 5.558962e-07 | 504 |
| 0.0208 | 0.9976 | 1.5417 | 0.7113 | 5.557278e-07 | 505 |
| 0.0176 | 0.9976 | 1.5583 | 0.7113 | 5.5555915e-07 | 506 |
| 0.0244 | 0.9929 | 1.5365 | 0.7113 | 5.553902e-07 | 507 |
| 0.0224 | 0.9953 | 1.5411 | 0.7042 | 5.55221e-07 | 508 |
| 0.0221 | 0.9976 | 1.5508 | 0.7042 | 5.550515e-07 | 509 |
| 0.0208 | 0.9953 | 1.5442 | 0.7042 | 5.548817e-07 | 510 |
| 0.0144 | 1.0 | 1.5497 | 0.7042 | 5.547116e-07 | 511 |
| 0.0139 | 0.9976 | 1.5414 | 0.7113 | 5.5454126e-07 | 512 |
| 0.0170 | 1.0 | 1.5583 | 0.7113 | 5.543706e-07 | 513 |
| 0.0216 | 0.9953 | 1.5830 | 0.6901 | 5.541997e-07 | 514 |
| 0.0174 | 0.9976 | 1.5608 | 0.7042 | 5.540285e-07 | 515 |
| 0.0151 | 1.0 | 1.5540 | 0.7113 | 5.53857e-07 | 516 |
| 0.0275 | 0.9906 | 1.5765 | 0.7042 | 5.536852e-07 | 517 |
| 0.0196 | 0.9953 | 1.5607 | 0.7113 | 5.535132e-07 | 518 |
| 0.0158 | 0.9976 | 1.5574 | 0.7113 | 5.533409e-07 | 519 |
| 0.0145 | 1.0 | 1.5608 | 0.7113 | 5.531683e-07 | 520 |
| 0.0189 | 0.9953 | 1.5687 | 0.7113 | 5.5299546e-07 | 521 |
| 0.0108 | 1.0 | 1.5872 | 0.7042 | 5.528223e-07 | 522 |
| 0.0138 | 0.9976 | 1.5659 | 0.7113 | 5.526489e-07 | 523 |
| 0.0213 | 0.9953 | 1.5662 | 0.7113 | 5.5247517e-07 | 524 |
| 0.0146 | 1.0 | 1.5675 | 0.7113 | 5.523012e-07 | 525 |
| 0.0305 | 0.9929 | 1.5793 | 0.7042 | 5.5212695e-07 | 526 |
| 0.0251 | 0.9906 | 1.6106 | 0.7042 | 5.5195244e-07 | 527 |
| 0.0141 | 1.0 | 1.5905 | 0.7113 | 5.5177765e-07 | 528 |
| 0.0157 | 1.0 | 1.5796 | 0.7113 | 5.5160257e-07 | 529 |
| 0.0165 | 0.9976 | 1.5842 | 0.7042 | 5.514272e-07 | 530 |
| 0.0122 | 1.0 | 1.5858 | 0.7113 | 5.5125156e-07 | 531 |
| 0.0184 | 0.9953 | 1.5840 | 0.7113 | 5.5107563e-07 | 532 |
| 0.0148 | 0.9976 | 1.5894 | 0.7042 | 5.508995e-07 | 533 |
| 0.0107 | 1.0 | 1.5815 | 0.7113 | 5.5072303e-07 | 534 |
| 0.0109 | 1.0 | 1.5720 | 0.7113 | 5.505463e-07 | 535 |
| 0.0198 | 0.9976 | 1.5997 | 0.7042 | 5.503693e-07 | 536 |
| 0.0161 | 0.9976 | 1.5912 | 0.7042 | 5.50192e-07 | 537 |
| 0.0142 | 0.9976 | 1.5908 | 0.7113 | 5.500145e-07 | 538 |
| 0.0283 | 0.9953 | 1.6297 | 0.6901 | 5.4983667e-07 | 539 |
| 0.0140 | 0.9976 | 1.5868 | 0.7113 | 5.496586e-07 | 540 |
| 0.0198 | 0.9976 | 1.5935 | 0.7113 | 5.494802e-07 | 541 |
| 0.0162 | 0.9953 | 1.5808 | 0.7042 | 5.4930155e-07 | 542 |
| 0.0242 | 0.9953 | 1.5821 | 0.7113 | 5.4912266e-07 | 543 |
| 0.0134 | 0.9976 | 1.5811 | 0.7113 | 5.489435e-07 | 544 |
| 0.0109 | 1.0 | 1.5896 | 0.7042 | 5.4876404e-07 | 545 |
| 0.0207 | 0.9929 | 1.6128 | 0.7042 | 5.485843e-07 | 546 |
| 0.0186 | 0.9953 | 1.6122 | 0.7042 | 5.4840433e-07 | 547 |
| 0.0177 | 0.9929 | 1.5888 | 0.7113 | 5.482241e-07 | 548 |
| 0.0234 | 0.9929 | 1.6020 | 0.7113 | 5.4804354e-07 | 549 |
| 0.0128 | 1.0 | 1.6066 | 0.7042 | 5.478627e-07 | 550 |
| 0.0141 | 0.9976 | 1.6057 | 0.7113 | 5.476817e-07 | 551 |
| 0.0187 | 0.9953 | 1.5909 | 0.7042 | 5.4750035e-07 | 552 |
| 0.0080 | 1.0 | 1.5909 | 0.7042 | 5.4731873e-07 | 553 |
| 0.0119 | 1.0 | 1.5965 | 0.7042 | 5.471369e-07 | 554 |
| 0.0181 | 0.9976 | 1.6079 | 0.7113 | 5.4695477e-07 | 555 |
| 0.0146 | 0.9976 | 1.6027 | 0.7113 | 5.4677236e-07 | 556 |
| 0.0184 | 0.9953 | 1.6288 | 0.6901 | 5.4658966e-07 | 557 |
| 0.0214 | 0.9929 | 1.6405 | 0.6972 | 5.4640674e-07 | 558 |
| 0.0142 | 0.9976 | 1.6243 | 0.7183 | 5.4622353e-07 | 559 |
| 0.0156 | 0.9953 | 1.6258 | 0.6972 | 5.4604004e-07 | 560 |
| 0.0177 | 0.9953 | 1.6143 | 0.7183 | 5.458563e-07 | 561 |
| 0.0115 | 1.0 | 1.6293 | 0.7113 | 5.456723e-07 | 562 |
| 0.0172 | 0.9953 | 1.6376 | 0.7113 | 5.4548804e-07 | 563 |
| 0.0175 | 0.9953 | 1.6281 | 0.6972 | 5.453035e-07 | 564 |
| 0.0145 | 0.9953 | 1.6209 | 0.7183 | 5.451187e-07 | 565 |
| 0.0150 | 0.9953 | 1.6514 | 0.6972 | 5.4493364e-07 | 566 |
| 0.0126 | 1.0 | 1.6351 | 0.6972 | 5.4474833e-07 | 567 |
| 0.0131 | 1.0 | 1.6163 | 0.7113 | 5.4456274e-07 | 568 |
| 0.0107 | 0.9976 | 1.6161 | 0.7183 | 5.4437686e-07 | 569 |
| 0.0095 | 1.0 | 1.6306 | 0.7042 | 5.4419075e-07 | 570 |
| 0.0134 | 0.9976 | 1.6331 | 0.7042 | 5.4400437e-07 | 571 |
| 0.0091 | 1.0 | 1.6271 | 0.6972 | 5.4381775e-07 | 572 |
| 0.0095 | 0.9976 | 1.6186 | 0.7042 | 5.4363085e-07 | 573 |
| 0.0134 | 0.9953 | 1.6228 | 0.7042 | 5.4344366e-07 | 574 |
| 0.0081 | 1.0 | 1.6333 | 0.7042 | 5.4325625e-07 | 575 |
| 0.0128 | 1.0 | 1.6444 | 0.7042 | 5.4306855e-07 | 576 |
| 0.0107 | 1.0 | 1.6434 | 0.7042 | 5.428806e-07 | 577 |
| 0.0104 | 1.0 | 1.6371 | 0.6972 | 5.426924e-07 | 578 |
| 0.0076 | 1.0 | 1.6375 | 0.7042 | 5.425039e-07 | 579 |
| 0.0145 | 0.9976 | 1.6414 | 0.7042 | 5.423152e-07 | 580 |
| 0.0240 | 0.9929 | 1.6267 | 0.7183 | 5.421262e-07 | 581 |
| 0.0103 | 1.0 | 1.6475 | 0.7042 | 5.4193697e-07 | 582 |
| 0.0101 | 1.0 | 1.6514 | 0.7042 | 5.4174745e-07 | 583 |
| 0.0215 | 0.9976 | 1.6346 | 0.7183 | 5.415577e-07 | 584 |
| 0.0106 | 1.0 | 1.6294 | 0.7183 | 5.413677e-07 | 585 |
| 0.0158 | 0.9976 | 1.6687 | 0.6972 | 5.411774e-07 | 586 |
| 0.0159 | 0.9976 | 1.6515 | 0.6972 | 5.409869e-07 | 587 |
| 0.0090 | 1.0 | 1.6303 | 0.7042 | 5.407961e-07 | 588 |
| 0.0092 | 1.0 | 1.6390 | 0.7042 | 5.406051e-07 | 589 |
| 0.0124 | 0.9976 | 1.6467 | 0.7042 | 5.404138e-07 | 590 |
| 0.0139 | 0.9953 | 1.6394 | 0.7113 | 5.4022223e-07 | 591 |
| 0.0097 | 1.0 | 1.6605 | 0.7113 | 5.400304e-07 | 592 |
| 0.0135 | 1.0 | 1.6947 | 0.6901 | 5.398383e-07 | 593 |
| 0.0071 | 1.0 | 1.6584 | 0.7113 | 5.3964595e-07 | 594 |
| 0.0113 | 0.9976 | 1.6775 | 0.6901 | 5.3945337e-07 | 595 |
| 0.0163 | 0.9953 | 1.6672 | 0.7042 | 5.3926055e-07 | 596 |
| 0.0105 | 0.9976 | 1.6635 | 0.7113 | 5.3906746e-07 | 597 |
| 0.0077 | 1.0 | 1.6622 | 0.7113 | 5.3887413e-07 | 598 |
| 0.0133 | 0.9976 | 1.6621 | 0.7113 | 5.386805e-07 | 599 |
| 0.0148 | 0.9953 | 1.6754 | 0.7042 | 5.384867e-07 | 600 |
| 0.0185 | 0.9976 | 1.6523 | 0.7113 | 5.3829257e-07 | 601 |
| 0.0128 | 0.9976 | 1.6653 | 0.7113 | 5.380982e-07 | 602 |
| 0.0084 | 1.0 | 1.6917 | 0.7113 | 5.379036e-07 | 603 |
| 0.0114 | 1.0 | 1.6809 | 0.7042 | 5.3770873e-07 | 604 |
| 0.0112 | 1.0 | 1.6851 | 0.7042 | 5.375136e-07 | 605 |
| 0.0100 | 1.0 | 1.6672 | 0.7113 | 5.373182e-07 | 606 |
| 0.0225 | 0.9953 | 1.6814 | 0.7042 | 5.371226e-07 | 607 |
| 0.0109 | 0.9976 | 1.7314 | 0.6831 | 5.3692673e-07 | 608 |
| 0.0189 | 0.9976 | 1.7343 | 0.6831 | 5.367306e-07 | 609 |
| 0.0113 | 1.0 | 1.6949 | 0.7042 | 5.3653423e-07 | 610 |
| 0.0077 | 1.0 | 1.6795 | 0.7042 | 5.363376e-07 | 611 |
| 0.0092 | 0.9976 | 1.6835 | 0.7113 | 5.3614076e-07 | 612 |
| 0.0082 | 1.0 | 1.6651 | 0.7183 | 5.359436e-07 | 613 |
| 0.0086 | 1.0 | 1.6708 | 0.7183 | 5.3574627e-07 | 614 |
| 0.0164 | 0.9976 | 1.6803 | 0.7113 | 5.355486e-07 | 615 |
| 0.0110 | 1.0 | 1.6968 | 0.7042 | 5.3535075e-07 | 616 |
| 0.0084 | 1.0 | 1.7125 | 0.6901 | 5.3515265e-07 | 617 |
| 0.0064 | 1.0 | 1.6941 | 0.7042 | 5.3495427e-07 | 618 |
| 0.0082 | 1.0 | 1.6805 | 0.7042 | 5.3475566e-07 | 619 |
| 0.0205 | 0.9953 | 1.6883 | 0.7113 | 5.345568e-07 | 620 |
| 0.0099 | 0.9976 | 1.7190 | 0.6972 | 5.343577e-07 | 621 |
| 0.0137 | 0.9976 | 1.6911 | 0.7042 | 5.3415835e-07 | 622 |
| 0.0129 | 0.9976 | 1.7481 | 0.6761 | 5.3395877e-07 | 623 |
| 0.0148 | 0.9953 | 1.6827 | 0.7042 | 5.337589e-07 | 624 |
| 0.0098 | 1.0 | 1.6790 | 0.7042 | 5.335588e-07 | 625 |
| 0.0166 | 0.9976 | 1.7116 | 0.6901 | 5.333585e-07 | 626 |
| 0.0077 | 1.0 | 1.7069 | 0.7042 | 5.331579e-07 | 627 |
| 0.0086 | 1.0 | 1.6960 | 0.7042 | 5.329571e-07 | 628 |
| 0.0111 | 0.9976 | 1.6926 | 0.7113 | 5.32756e-07 | 629 |
| 0.0078 | 1.0 | 1.6968 | 0.6972 | 5.3255474e-07 | 630 |
| 0.0066 | 1.0 | 1.7007 | 0.7042 | 5.3235317e-07 | 631 |
| 0.0067 | 1.0 | 1.6966 | 0.7113 | 5.321514e-07 | 632 |
| 0.0082 | 0.9976 | 1.6978 | 0.7042 | 5.3194935e-07 | 633 |
| 0.0068 | 1.0 | 1.6891 | 0.7113 | 5.317471e-07 | 634 |
| 0.0073 | 0.9976 | 1.6884 | 0.7113 | 5.315446e-07 | 635 |
| 0.0073 | 1.0 | 1.6983 | 0.7042 | 5.313418e-07 | 636 |
| 0.0073 | 1.0 | 1.7076 | 0.7113 | 5.311388e-07 | 637 |
| 0.0079 | 1.0 | 1.7086 | 0.7113 | 5.309356e-07 | 638 |
| 0.0095 | 1.0 | 1.7020 | 0.7183 | 5.307321e-07 | 639 |
| 0.0120 | 0.9953 | 1.7186 | 0.7042 | 5.305284e-07 | 640 |
| 0.0083 | 1.0 | 1.7328 | 0.6831 | 5.303244e-07 | 641 |
| 0.0070 | 1.0 | 1.7046 | 0.7183 | 5.3012025e-07 | 642 |
| 0.0091 | 0.9976 | 1.6905 | 0.7183 | 5.299158e-07 | 643 |
| 0.0079 | 1.0 | 1.6824 | 0.7113 | 5.297111e-07 | 644 |
| 0.0111 | 0.9976 | 1.7233 | 0.7042 | 5.2950617e-07 | 645 |
| 0.0089 | 1.0 | 1.7362 | 0.7042 | 5.29301e-07 | 646 |
| 0.0066 | 1.0 | 1.7168 | 0.7183 | 5.2909564e-07 | 647 |
| 0.0075 | 1.0 | 1.7070 | 0.7183 | 5.2889e-07 | 648 |
| 0.0055 | 1.0 | 1.7062 | 0.7254 | 5.286841e-07 | 649 |
| 0.0140 | 0.9976 | 1.7121 | 0.7183 | 5.28478e-07 | 650 |
| 0.0082 | 1.0 | 1.7394 | 0.6972 | 5.2827164e-07 | 651 |
| 0.0082 | 1.0 | 1.7256 | 0.7042 | 5.280651e-07 | 652 |
| 0.0074 | 1.0 | 1.7367 | 0.7042 | 5.278583e-07 | 653 |
| 0.0128 | 0.9953 | 1.7230 | 0.7183 | 5.2765125e-07 | 654 |
| 0.0079 | 0.9976 | 1.7351 | 0.7113 | 5.2744394e-07 | 655 |
| 0.0093 | 1.0 | 1.7455 | 0.6972 | 5.272364e-07 | 656 |
| 0.0099 | 0.9976 | 1.7245 | 0.7113 | 5.2702865e-07 | 657 |
| 0.0061 | 1.0 | 1.7163 | 0.7183 | 5.2682066e-07 | 658 |
| 0.0075 | 1.0 | 1.7194 | 0.7113 | 5.2661244e-07 | 659 |
| 0.0067 | 0.9976 | 1.7400 | 0.7042 | 5.26404e-07 | 660 |
| 0.0066 | 1.0 | 1.7418 | 0.7042 | 5.261953e-07 | 661 |
| 0.0098 | 0.9976 | 1.7317 | 0.7042 | 5.259864e-07 | 662 |
| 0.0200 | 0.9953 | 1.7388 | 0.7042 | 5.257773e-07 | 663 |
| 0.0066 | 1.0 | 1.7145 | 0.7183 | 5.255679e-07 | 664 |
| 0.0137 | 0.9976 | 1.7167 | 0.7183 | 5.2535825e-07 | 665 |
| 0.0059 | 1.0 | 1.7267 | 0.7113 | 5.251484e-07 | 666 |
| 0.0074 | 0.9976 | 1.7218 | 0.7113 | 5.249383e-07 | 667 |
| 0.0111 | 0.9976 | 1.7525 | 0.7113 | 5.2472797e-07 | 668 |
| 0.0068 | 1.0 | 1.7534 | 0.7113 | 5.245174e-07 | 669 |
| 0.0069 | 1.0 | 1.7291 | 0.7183 | 5.2430664e-07 | 670 |
| 0.0077 | 1.0 | 1.7194 | 0.7183 | 5.2409564e-07 | 671 |
| 0.0087 | 1.0 | 1.7324 | 0.7183 | 5.238844e-07 | 672 |
| 0.0049 | 1.0 | 1.7482 | 0.7042 | 5.2367295e-07 | 673 |
| 0.0115 | 0.9976 | 1.7372 | 0.7183 | 5.2346127e-07 | 674 |
| 0.0144 | 0.9929 | 1.7595 | 0.7042 | 5.2324935e-07 | 675 |
| 0.0067 | 1.0 | 1.7565 | 0.7183 | 5.230372e-07 | 676 |
| 0.0045 | 1.0 | 1.7494 | 0.7254 | 5.2282485e-07 | 677 |
| 0.0087 | 0.9976 | 1.7469 | 0.7183 | 5.2261225e-07 | 678 |
| 0.0076 | 1.0 | 1.7649 | 0.6972 | 5.2239943e-07 | 679 |
| 0.0074 | 1.0 | 1.7787 | 0.6972 | 5.221864e-07 | 680 |
| 0.0058 | 1.0 | 1.7617 | 0.7042 | 5.219731e-07 | 681 |
| 0.0071 | 1.0 | 1.7590 | 0.7254 | 5.217596e-07 | 682 |
| 0.0116 | 0.9976 | 1.7443 | 0.7183 | 5.2154587e-07 | 683 |
| 0.0082 | 0.9976 | 1.7544 | 0.7254 | 5.213319e-07 | 684 |
| 0.0060 | 1.0 | 1.7720 | 0.7113 | 5.211177e-07 | 685 |
| 0.0058 | 1.0 | 1.7638 | 0.6972 | 5.209033e-07 | 686 |
| 0.0072 | 1.0 | 1.7495 | 0.7113 | 5.206887e-07 | 687 |
| 0.0089 | 0.9953 | 1.7672 | 0.7113 | 5.204738e-07 | 688 |
| 0.0086 | 0.9953 | 1.7573 | 0.7183 | 5.202587e-07 | 689 |
| 0.0048 | 1.0 | 1.7596 | 0.7113 | 5.200434e-07 | 690 |
| 0.0047 | 1.0 | 1.7659 | 0.7113 | 5.198279e-07 | 691 |
| 0.0102 | 0.9976 | 1.7692 | 0.7183 | 5.196122e-07 | 692 |
| 0.0076 | 1.0 | 1.7814 | 0.6901 | 5.193962e-07 | 693 |
| 0.0087 | 0.9976 | 1.8024 | 0.6901 | 5.1918005e-07 | 694 |
| 0.0144 | 0.9976 | 1.7628 | 0.7183 | 5.1896365e-07 | 695 |
| 0.0057 | 1.0 | 1.7604 | 0.7113 | 5.18747e-07 | 696 |
| 0.0063 | 1.0 | 1.7590 | 0.7183 | 5.1853016e-07 | 697 |
| 0.0081 | 0.9976 | 1.7719 | 0.7254 | 5.183131e-07 | 698 |
| 0.0054 | 1.0 | 1.7840 | 0.7183 | 5.1809576e-07 | 699 |
| 0.0076 | 0.9976 | 1.7832 | 0.7183 | 5.178783e-07 | 700 |
| 0.0096 | 0.9976 | 1.7788 | 0.7254 | 5.1766057e-07 | 701 |
| 0.0127 | 0.9953 | 1.7978 | 0.7042 | 5.1744263e-07 | 702 |
| 0.0105 | 0.9953 | 1.7733 | 0.7183 | 5.1722446e-07 | 703 |
| 0.0072 | 1.0 | 1.7518 | 0.7183 | 5.170061e-07 | 704 |
| 0.0063 | 1.0 | 1.7930 | 0.7113 | 5.1678745e-07 | 705 |
| 0.0063 | 1.0 | 1.7954 | 0.7042 | 5.1656866e-07 | 706 |
| 0.0034 | 1.0 | 1.7896 | 0.7183 | 5.1634964e-07 | 707 |
| 0.0069 | 1.0 | 1.7790 | 0.7113 | 5.161304e-07 | 708 |
| 0.0071 | 1.0 | 1.7808 | 0.7113 | 5.159109e-07 | 709 |
| 0.0045 | 1.0 | 1.7895 | 0.7183 | 5.156912e-07 | 710 |
| 0.0071 | 0.9976 | 1.7884 | 0.7254 | 5.154713e-07 | 711 |
| 0.0053 | 1.0 | 1.7899 | 0.7183 | 5.152512e-07 | 712 |
| 0.0068 | 1.0 | 1.8066 | 0.6972 | 5.150309e-07 | 713 |
| 0.0070 | 0.9976 | 1.8061 | 0.6972 | 5.148103e-07 | 714 |
| 0.0101 | 0.9976 | 1.7872 | 0.7254 | 5.1458954e-07 | 715 |
| 0.0053 | 1.0 | 1.7980 | 0.7254 | 5.143686e-07 | 716 |
| 0.0045 | 1.0 | 1.7966 | 0.7183 | 5.141474e-07 | 717 |
| 0.0056 | 1.0 | 1.7815 | 0.7183 | 5.13926e-07 | 718 |
| 0.0063 | 0.9976 | 1.7767 | 0.7183 | 5.137044e-07 | 719 |
| 0.0069 | 1.0 | 1.7798 | 0.7183 | 5.134826e-07 | 720 |
| 0.0077 | 0.9976 | 1.7694 | 0.7183 | 5.1326055e-07 | 721 |
| 0.0073 | 1.0 | 1.7625 | 0.7113 | 5.130383e-07 | 722 |
| 0.0088 | 0.9976 | 1.7686 | 0.7183 | 5.128158e-07 | 723 |
| 0.0060 | 0.9976 | 1.7948 | 0.7113 | 5.1259315e-07 | 724 |
| 0.0055 | 1.0 | 1.8171 | 0.6831 | 5.1237026e-07 | 725 |
| 0.0097 | 0.9976 | 1.7676 | 0.7324 | 5.1214715e-07 | 726 |
| 0.0107 | 0.9976 | 1.7711 | 0.7183 | 5.119239e-07 | 727 |
| 0.0054 | 1.0 | 1.8138 | 0.6901 | 5.1170036e-07 | 728 |
| 0.0066 | 1.0 | 1.8125 | 0.6901 | 5.114766e-07 | 729 |
| 0.0083 | 0.9976 | 1.8231 | 0.7042 | 5.112527e-07 | 730 |
| 0.0078 | 0.9976 | 1.8580 | 0.6972 | 5.110286e-07 | 731 |
| 0.0077 | 1.0 | 1.8353 | 0.6831 | 5.108042e-07 | 732 |
| 0.0060 | 0.9976 | 1.7904 | 0.7254 | 5.105797e-07 | 733 |
| 0.0076 | 1.0 | 1.7710 | 0.7042 | 5.1035494e-07 | 734 |
| 0.0059 | 1.0 | 1.7697 | 0.7113 | 5.1012995e-07 | 735 |
| 0.0090 | 0.9976 | 1.7907 | 0.7183 | 5.099048e-07 | 736 |
| 0.0066 | 0.9976 | 1.8409 | 0.6901 | 5.096794e-07 | 737 |
| 0.0063 | 0.9976 | 1.8506 | 0.6901 | 5.094538e-07 | 738 |
| 0.0093 | 0.9976 | 1.8044 | 0.7113 | 5.09228e-07 | 739 |
| 0.0045 | 1.0 | 1.7876 | 0.7113 | 5.09002e-07 | 740 |
| 0.0043 | 1.0 | 1.7848 | 0.7183 | 5.087758e-07 | 741 |
| 0.0045 | 1.0 | 1.7822 | 0.7113 | 5.085494e-07 | 742 |
| 0.0049 | 1.0 | 1.7880 | 0.7113 | 5.083228e-07 | 743 |
| 0.0063 | 1.0 | 1.7968 | 0.7183 | 5.08096e-07 | 744 |
| 0.0067 | 0.9976 | 1.8012 | 0.7113 | 5.0786895e-07 | 745 |
| 0.0065 | 1.0 | 1.7939 | 0.7113 | 5.0764174e-07 | 746 |
| 0.0044 | 1.0 | 1.7888 | 0.7042 | 5.074143e-07 | 747 |
| 0.0029 | 1.0 | 1.7825 | 0.7113 | 5.071867e-07 | 748 |
| 0.0045 | 1.0 | 1.7841 | 0.7113 | 5.069589e-07 | 749 |
| 0.0062 | 1.0 | 1.7973 | 0.7042 | 5.067308e-07 | 750 |
| 0.0039 | 1.0 | 1.7941 | 0.7113 | 5.065026e-07 | 751 |
| 0.0048 | 1.0 | 1.7969 | 0.7183 | 5.0627415e-07 | 752 |
| 0.0103 | 0.9953 | 1.7964 | 0.7183 | 5.060455e-07 | 753 |
| 0.0141 | 0.9929 | 1.7874 | 0.7113 | 5.058167e-07 | 754 |
| 0.0040 | 1.0 | 1.7976 | 0.7113 | 5.0558765e-07 | 755 |
| 0.0042 | 1.0 | 1.8004 | 0.7113 | 5.053584e-07 | 756 |
| 0.0081 | 0.9976 | 1.8181 | 0.6972 | 5.05129e-07 | 757 |
| 0.0057 | 1.0 | 1.8273 | 0.6972 | 5.0489933e-07 | 758 |
| 0.0108 | 0.9976 | 1.8447 | 0.6972 | 5.046695e-07 | 759 |
| 0.0048 | 1.0 | 1.8264 | 0.6972 | 5.0443947e-07 | 760 |
| 0.0056 | 0.9976 | 1.8100 | 0.7113 | 5.0420925e-07 | 761 |
| 0.0052 | 1.0 | 1.8257 | 0.7113 | 5.039788e-07 | 762 |
| 0.0061 | 0.9976 | 1.8248 | 0.6972 | 5.037482e-07 | 763 |
| 0.0046 | 1.0 | 1.8195 | 0.7042 | 5.035174e-07 | 764 |
| 0.0055 | 0.9976 | 1.8192 | 0.7113 | 5.032864e-07 | 765 |
| 0.0035 | 1.0 | 1.8272 | 0.7113 | 5.030552e-07 | 766 |
| 0.0070 | 0.9976 | 1.8315 | 0.6972 | 5.028238e-07 | 767 |
| 0.0077 | 1.0 | 1.8752 | 0.7042 | 5.0259223e-07 | 768 |
| 0.0084 | 0.9976 | 1.8060 | 0.7113 | 5.023604e-07 | 769 |
| 0.0089 | 0.9976 | 1.8444 | 0.7042 | 5.0212844e-07 | 770 |
| 0.0063 | 1.0 | 1.8493 | 0.7113 | 5.0189624e-07 | 771 |
| 0.0135 | 0.9976 | 1.8318 | 0.7113 | 5.0166386e-07 | 772 |
| 0.0044 | 1.0 | 1.8470 | 0.7113 | 5.014313e-07 | 773 |
| 0.0055 | 1.0 | 1.8332 | 0.7183 | 5.0119854e-07 | 774 |
| 0.0050 | 1.0 | 1.8332 | 0.7183 | 5.009656e-07 | 775 |
| 0.0043 | 1.0 | 1.8161 | 0.7042 | 5.007325e-07 | 776 |
| 0.0032 | 1.0 | 1.8121 | 0.7254 | 5.0049914e-07 | 777 |
| 0.0042 | 1.0 | 1.8253 | 0.7183 | 5.002656e-07 | 778 |
| 0.0085 | 0.9976 | 1.8455 | 0.7183 | 5.000319e-07 | 779 |
| 0.0036 | 1.0 | 1.8433 | 0.7254 | 4.99798e-07 | 780 |
| 0.0059 | 1.0 | 1.8384 | 0.7254 | 4.995639e-07 | 781 |
| 0.0064 | 0.9976 | 1.8386 | 0.6972 | 4.993296e-07 | 782 |
| 0.0044 | 1.0 | 1.8228 | 0.7183 | 4.990951e-07 | 783 |
| 0.0027 | 1.0 | 1.8179 | 0.7254 | 4.9886046e-07 | 784 |
| 0.0076 | 0.9976 | 1.8284 | 0.7183 | 4.986256e-07 | 785 |
| 0.0033 | 1.0 | 1.8639 | 0.6761 | 4.9839053e-07 | 786 |
| 0.0049 | 1.0 | 1.8448 | 0.7183 | 4.981553e-07 | 787 |
| 0.0035 | 1.0 | 1.8269 | 0.7254 | 4.979199e-07 | 788 |
| 0.0032 | 1.0 | 1.8259 | 0.7254 | 4.976843e-07 | 789 |
| 0.0036 | 1.0 | 1.8231 | 0.7254 | 4.974485e-07 | 790 |
| 0.0079 | 0.9953 | 1.8260 | 0.7183 | 4.9721257e-07 | 791 |
| 0.0041 | 1.0 | 1.8256 | 0.7113 | 4.969764e-07 | 792 |
| 0.0053 | 1.0 | 1.8387 | 0.7113 | 4.9674003e-07 | 793 |
| 0.0051 | 1.0 | 1.8712 | 0.6901 | 4.965035e-07 | 794 |
| 0.0066 | 1.0 | 1.8598 | 0.6972 | 4.962668e-07 | 795 |
| 0.0122 | 0.9976 | 1.8321 | 0.7254 | 4.960299e-07 | 796 |
| 0.0053 | 1.0 | 1.8249 | 0.7183 | 4.957928e-07 | 797 |
| 0.0029 | 1.0 | 1.8372 | 0.7254 | 4.955555e-07 | 798 |
| 0.0047 | 1.0 | 1.8478 | 0.7254 | 4.953181e-07 | 799 |
| 0.0044 | 1.0 | 1.8481 | 0.7254 | 4.950804e-07 | 800 |
| 0.0043 | 1.0 | 1.8544 | 0.7254 | 4.948426e-07 | 801 |
| 0.0050 | 0.9976 | 1.8542 | 0.7254 | 4.946046e-07 | 802 |
| 0.0036 | 1.0 | 1.8572 | 0.7254 | 4.943664e-07 | 803 |
| 0.0027 | 1.0 | 1.8518 | 0.7254 | 4.941281e-07 | 804 |
| 0.0033 | 1.0 | 1.8573 | 0.7254 | 4.938895e-07 | 805 |
| 0.0029 | 1.0 | 1.8601 | 0.7254 | 4.9365076e-07 | 806 |
| 0.0032 | 1.0 | 1.8491 | 0.7254 | 4.9341185e-07 | 807 |
| 0.0036 | 1.0 | 1.8501 | 0.7254 | 4.9317276e-07 | 808 |
| 0.0055 | 1.0 | 1.8385 | 0.7254 | 4.929335e-07 | 809 |
| 0.0048 | 1.0 | 1.8540 | 0.7113 | 4.926941e-07 | 810 |
| 0.0040 | 1.0 | 1.8993 | 0.6901 | 4.9245443e-07 | 811 |
| 0.0040 | 1.0 | 1.8872 | 0.6972 | 4.922146e-07 | 812 |
| 0.0057 | 0.9976 | 1.8741 | 0.7254 | 4.919746e-07 | 813 |
| 0.0072 | 0.9976 | 1.8578 | 0.7254 | 4.9173445e-07 | 814 |
| 0.0037 | 1.0 | 1.8616 | 0.7183 | 4.914941e-07 | 815 |
| 0.0118 | 0.9953 | 1.8656 | 0.7254 | 4.912536e-07 | 816 |
| 0.0029 | 1.0 | 1.8785 | 0.7113 | 4.9101294e-07 | 817 |
| 0.0050 | 1.0 | 1.8786 | 0.7113 | 4.907721e-07 | 818 |
| 0.0055 | 0.9976 | 1.8819 | 0.7113 | 4.90531e-07 | 819 |
| 0.0028 | 1.0 | 1.8748 | 0.7183 | 4.902898e-07 | 820 |
| 0.0026 | 1.0 | 1.8726 | 0.7183 | 4.9004836e-07 | 821 |
| 0.0025 | 1.0 | 1.8681 | 0.7183 | 4.898068e-07 | 822 |
| 0.0034 | 1.0 | 1.8657 | 0.7183 | 4.89565e-07 | 823 |
| 0.0061 | 0.9976 | 1.8800 | 0.6972 | 4.893231e-07 | 824 |
| 0.0149 | 0.9953 | 1.8571 | 0.7254 | 4.89081e-07 | 825 |
| 0.0066 | 1.0 | 1.8778 | 0.7254 | 4.8883874e-07 | 826 |
| 0.0088 | 0.9976 | 1.9055 | 0.6972 | 4.885963e-07 | 827 |
| 0.0039 | 1.0 | 1.8943 | 0.7183 | 4.883537e-07 | 828 |
| 0.0033 | 1.0 | 1.8912 | 0.7254 | 4.881109e-07 | 829 |
| 0.0035 | 1.0 | 1.8890 | 0.7254 | 4.8786796e-07 | 830 |
| 0.0036 | 1.0 | 1.8888 | 0.7254 | 4.8762485e-07 | 831 |
| 0.0024 | 1.0 | 1.8969 | 0.7254 | 4.8738156e-07 | 832 |
| 0.0047 | 1.0 | 1.8960 | 0.7254 | 4.871381e-07 | 833 |
| 0.0090 | 0.9976 | 1.8767 | 0.7183 | 4.8689446e-07 | 834 |
| 0.0144 | 0.9976 | 1.8723 | 0.7183 | 4.8665066e-07 | 835 |
| 0.0045 | 1.0 | 1.8643 | 0.7183 | 4.864067e-07 | 836 |
| 0.0042 | 1.0 | 1.8692 | 0.7254 | 4.8616255e-07 | 837 |
| 0.0034 | 1.0 | 1.8895 | 0.7183 | 4.8591824e-07 | 838 |
| 0.0039 | 1.0 | 1.8997 | 0.7113 | 4.8567375e-07 | 839 |
| 0.0033 | 1.0 | 1.9021 | 0.7183 | 4.854291e-07 | 840 |
| 0.0032 | 1.0 | 1.9021 | 0.7254 | 4.851843e-07 | 841 |
| 0.0022 | 1.0 | 1.9021 | 0.7254 | 4.849393e-07 | 842 |
| 0.0032 | 1.0 | 1.9004 | 0.7254 | 4.846941e-07 | 843 |
| 0.0033 | 1.0 | 1.9034 | 0.6901 | 4.844488e-07 | 844 |
| 0.0032 | 1.0 | 1.9111 | 0.6972 | 4.8420327e-07 | 845 |
| 0.0042 | 1.0 | 1.8925 | 0.7183 | 4.839576e-07 | 846 |
| 0.0044 | 0.9976 | 1.9021 | 0.7254 | 4.8371174e-07 | 847 |
| 0.0033 | 1.0 | 1.9051 | 0.7254 | 4.834658e-07 | 848 |
| 0.0023 | 1.0 | 1.9053 | 0.7254 | 4.8321965e-07 | 849 |
| 0.0047 | 0.9976 | 1.9078 | 0.7183 | 4.8297335e-07 | 850 |
| 0.0087 | 0.9976 | 1.9275 | 0.6831 | 4.827269e-07 | 851 |
| 0.0096 | 0.9953 | 1.9655 | 0.6831 | 4.8248023e-07 | 852 |
| 0.0080 | 0.9976 | 1.8785 | 0.7183 | 4.822334e-07 | 853 |
| 0.0040 | 1.0 | 1.8921 | 0.7183 | 4.8198643e-07 | 854 |
| 0.0051 | 1.0 | 1.9032 | 0.7183 | 4.817393e-07 | 855 |
| 0.0040 | 1.0 | 1.9027 | 0.7254 | 4.81492e-07 | 856 |
| 0.0034 | 1.0 | 1.9062 | 0.7183 | 4.8124457e-07 | 857 |
| 0.0028 | 1.0 | 1.9070 | 0.7042 | 4.8099696e-07 | 858 |
| 0.0031 | 1.0 | 1.8993 | 0.7183 | 4.807492e-07 | 859 |
| 0.0022 | 1.0 | 1.8946 | 0.7183 | 4.805012e-07 | 860 |
| 0.0040 | 1.0 | 1.9213 | 0.7042 | 4.802531e-07 | 861 |
| 0.0025 | 1.0 | 1.9199 | 0.7042 | 4.8000487e-07 | 862 |
| 0.0029 | 1.0 | 1.9206 | 0.7042 | 4.7975647e-07 | 863 |
| 0.0020 | 1.0 | 1.9297 | 0.6831 | 4.795079e-07 | 864 |
| 0.0030 | 1.0 | 1.9316 | 0.6831 | 4.7925914e-07 | 865 |
| 0.0057 | 0.9976 | 1.9181 | 0.7254 | 4.790102e-07 | 866 |
| 0.0067 | 0.9976 | 1.9630 | 0.7113 | 4.787612e-07 | 867 |
| 0.0067 | 0.9976 | 1.9602 | 0.6831 | 4.78512e-07 | 868 |
| 0.0035 | 1.0 | 1.9442 | 0.6901 | 4.782626e-07 | 869 |
| 0.0035 | 1.0 | 1.9149 | 0.6901 | 4.780131e-07 | 870 |
| 0.0105 | 0.9929 | 1.8873 | 0.7113 | 4.777634e-07 | 871 |
| 0.0043 | 1.0 | 1.9042 | 0.7183 | 4.775136e-07 | 872 |
| 0.0031 | 1.0 | 1.9162 | 0.7183 | 4.772636e-07 | 873 |
| 0.0038 | 1.0 | 1.9163 | 0.7183 | 4.7701343e-07 | 874 |
| 0.0031 | 1.0 | 1.9212 | 0.7183 | 4.7676312e-07 | 875 |
| 0.0105 | 0.9976 | 1.9248 | 0.7113 | 4.7651267e-07 | 876 |
| 0.0024 | 1.0 | 1.9274 | 0.7042 | 4.7626204e-07 | 877 |
| 0.0024 | 1.0 | 1.9252 | 0.7183 | 4.7601128e-07 | 878 |
| 0.0029 | 1.0 | 1.9225 | 0.7183 | 4.7576037e-07 | 879 |
| 0.0056 | 0.9976 | 1.9285 | 0.7113 | 4.755093e-07 | 880 |
| 0.0021 | 1.0 | 1.9329 | 0.7113 | 4.7525808e-07 | 881 |
| 0.0035 | 1.0 | 1.9333 | 0.7113 | 4.7500671e-07 | 882 |
| 0.0021 | 1.0 | 1.9296 | 0.7183 | 4.7475518e-07 | 883 |
| 0.0028 | 1.0 | 1.9301 | 0.7183 | 4.745035e-07 | 884 |
| 0.0032 | 1.0 | 1.9458 | 0.7113 | 4.742517e-07 | 885 |
| 0.0098 | 0.9953 | 1.9401 | 0.7113 | 4.739997e-07 | 886 |
| 0.0049 | 1.0 | 1.9427 | 0.7113 | 4.7374758e-07 | 887 |
| 0.0020 | 1.0 | 1.9364 | 0.7113 | 4.734953e-07 | 888 |
| 0.0025 | 1.0 | 1.9307 | 0.7113 | 4.7324286e-07 | 889 |
| 0.0028 | 1.0 | 1.9357 | 0.7113 | 4.7299028e-07 | 890 |
| 0.0022 | 1.0 | 1.9322 | 0.7113 | 4.7273755e-07 | 891 |
| 0.0034 | 1.0 | 1.9326 | 0.7113 | 4.724847e-07 | 892 |
| 0.0021 | 1.0 | 1.9342 | 0.7113 | 4.7223168e-07 | 893 |
| 0.0151 | 0.9976 | 1.9286 | 0.7183 | 4.719785e-07 | 894 |
| 0.0039 | 1.0 | 1.9247 | 0.7183 | 4.7172517e-07 | 895 |
| 0.0025 | 1.0 | 1.9099 | 0.7183 | 4.714717e-07 | 896 |
| 0.0018 | 1.0 | 1.9066 | 0.7183 | 4.712181e-07 | 897 |
| 0.0026 | 1.0 | 1.9148 | 0.7183 | 4.7096435e-07 | 898 |
| 0.0107 | 0.9953 | 1.9169 | 0.7183 | 4.7071046e-07 | 899 |
| 0.0022 | 1.0 | 1.9237 | 0.7183 | 4.7045643e-07 | 900 |
| 0.0037 | 1.0 | 1.9338 | 0.7113 | 4.7020222e-07 | 901 |
| 0.0027 | 1.0 | 1.9340 | 0.7183 | 4.6994788e-07 | 902 |
| 0.0037 | 0.9976 | 1.9319 | 0.7113 | 4.696934e-07 | 903 |
| 0.0027 | 1.0 | 1.9346 | 0.7113 | 4.6943876e-07 | 904 |
| 0.0064 | 0.9976 | 1.9163 | 0.7183 | 4.69184e-07 | 905 |
| 0.0035 | 1.0 | 1.9273 | 0.7113 | 4.6892907e-07 | 906 |
| 0.0018 | 1.0 | 1.9295 | 0.7183 | 4.6867402e-07 | 907 |
| 0.0041 | 0.9976 | 1.9350 | 0.7113 | 4.6841882e-07 | 908 |
| 0.0024 | 1.0 | 1.9408 | 0.7183 | 4.6816348e-07 | 909 |
| 0.0041 | 1.0 | 1.9156 | 0.7183 | 4.67908e-07 | 910 |
| 0.0024 | 1.0 | 1.9134 | 0.7183 | 4.6765237e-07 | 911 |
| 0.0023 | 1.0 | 1.9218 | 0.7183 | 4.673966e-07 | 912 |
| 0.0030 | 1.0 | 1.9427 | 0.7113 | 4.671407e-07 | 913 |
| 0.0024 | 1.0 | 1.9495 | 0.7042 | 4.6688464e-07 | 914 |
| 0.0031 | 1.0 | 1.9407 | 0.7113 | 4.6662848e-07 | 915 |
| 0.0023 | 1.0 | 1.9267 | 0.7183 | 4.6637217e-07 | 916 |
| 0.0023 | 1.0 | 1.9210 | 0.7183 | 4.6611572e-07 | 917 |
| 0.0016 | 1.0 | 1.9160 | 0.7183 | 4.6585913e-07 | 918 |
| 0.0037 | 1.0 | 1.9236 | 0.7183 | 4.656024e-07 | 919 |
| 0.0029 | 1.0 | 1.9533 | 0.7042 | 4.6534552e-07 | 920 |
| 0.0081 | 0.9976 | 1.9482 | 0.7183 | 4.650885e-07 | 921 |
| 0.0030 | 1.0 | 1.9483 | 0.7183 | 4.6483137e-07 | 922 |
| 0.0019 | 1.0 | 1.9338 | 0.7254 | 4.645741e-07 | 923 |
| 0.0025 | 1.0 | 1.9293 | 0.7254 | 4.6431668e-07 | 924 |
| 0.0021 | 1.0 | 1.9346 | 0.7113 | 4.6405913e-07 | 925 |
| 0.0017 | 1.0 | 1.9378 | 0.7254 | 4.6380143e-07 | 926 |
| 0.0021 | 1.0 | 1.9378 | 0.7254 | 4.635436e-07 | 927 |
| 0.0016 | 1.0 | 1.9379 | 0.7254 | 4.6328566e-07 | 928 |
| 0.0043 | 1.0 | 1.9354 | 0.7254 | 4.6302756e-07 | 929 |
| 0.0043 | 1.0 | 1.9338 | 0.7183 | 4.6276935e-07 | 930 |
| 0.0021 | 1.0 | 1.9351 | 0.7183 | 4.62511e-07 | 931 |
| 0.0029 | 1.0 | 1.9482 | 0.7254 | 4.622525e-07 | 932 |
| 0.0081 | 0.9976 | 1.9751 | 0.7113 | 4.6199386e-07 | 933 |
| 0.0089 | 0.9953 | 1.9900 | 0.7042 | 4.617351e-07 | 934 |
| 0.0035 | 1.0 | 1.9855 | 0.7042 | 4.6147622e-07 | 935 |
| 0.0026 | 1.0 | 1.9689 | 0.7254 | 4.612172e-07 | 936 |
| 0.0055 | 0.9976 | 1.9525 | 0.7254 | 4.6095806e-07 | 937 |
| 0.0064 | 0.9976 | 1.9332 | 0.7254 | 4.6069877e-07 | 938 |
| 0.0024 | 1.0 | 1.9105 | 0.7183 | 4.6043937e-07 | 939 |
| 0.0055 | 0.9976 | 1.9180 | 0.7254 | 4.6017982e-07 | 940 |
| 0.0025 | 1.0 | 1.9258 | 0.7183 | 4.5992016e-07 | 941 |
| 0.0035 | 1.0 | 1.9438 | 0.7183 | 4.5966036e-07 | 942 |
| 0.0109 | 0.9976 | 1.9523 | 0.7113 | 4.5940044e-07 | 943 |
| 0.0030 | 1.0 | 1.9533 | 0.7113 | 4.5914038e-07 | 944 |
| 0.0019 | 1.0 | 1.9525 | 0.7113 | 4.588802e-07 | 945 |
| 0.0033 | 1.0 | 1.9330 | 0.7183 | 4.586199e-07 | 946 |
| 0.0016 | 1.0 | 1.9337 | 0.7183 | 4.5835947e-07 | 947 |
| 0.0024 | 1.0 | 1.9400 | 0.7254 | 4.580989e-07 | 948 |
| 0.0016 | 1.0 | 1.9505 | 0.7254 | 4.578382e-07 | 949 |
| 0.0021 | 1.0 | 1.9571 | 0.7254 | 4.5757739e-07 | 950 |
| 0.0023 | 1.0 | 1.9599 | 0.7254 | 4.5731645e-07 | 951 |
| 0.0097 | 0.9976 | 1.9804 | 0.7113 | 4.5705536e-07 | 952 |
| 0.0038 | 1.0 | 1.9790 | 0.7042 | 4.5679417e-07 | 953 |
| 0.0033 | 1.0 | 1.9720 | 0.7113 | 4.5653286e-07 | 954 |
| 0.0025 | 1.0 | 1.9748 | 0.7254 | 4.562714e-07 | 955 |
| 0.0055 | 0.9976 | 1.9940 | 0.7254 | 4.5600984e-07 | 956 |
| 0.0042 | 1.0 | 2.0187 | 0.6972 | 4.5574816e-07 | 957 |
| 0.0022 | 1.0 | 2.0009 | 0.7042 | 4.5548634e-07 | 958 |
| 0.0031 | 1.0 | 1.9751 | 0.7183 | 4.552244e-07 | 959 |
| 0.0027 | 1.0 | 1.9586 | 0.7183 | 4.5496236e-07 | 960 |
| 0.0065 | 0.9976 | 1.9670 | 0.7254 | 4.5470017e-07 | 961 |
| 0.0033 | 1.0 | 1.9776 | 0.7254 | 4.5443787e-07 | 962 |
| 0.0020 | 1.0 | 1.9868 | 0.7254 | 4.5417545e-07 | 963 |
| 0.0023 | 1.0 | 1.9889 | 0.7183 | 4.5391292e-07 | 964 |
| 0.0033 | 1.0 | 2.0080 | 0.6831 | 4.5365024e-07 | 965 |
| 0.0123 | 0.9976 | 2.0169 | 0.7113 | 4.5338746e-07 | 966 |
| 0.0026 | 1.0 | 2.0220 | 0.7113 | 4.5312456e-07 | 967 |
| 0.0033 | 1.0 | 2.0058 | 0.7113 | 4.5286154e-07 | 968 |
| 0.0019 | 1.0 | 2.0016 | 0.7113 | 4.525984e-07 | 969 |
| 0.0018 | 1.0 | 2.0020 | 0.7113 | 4.5233514e-07 | 970 |
| 0.0019 | 1.0 | 1.9989 | 0.7183 | 4.5207176e-07 | 971 |
| 0.0024 | 1.0 | 1.9959 | 0.7183 | 4.5180826e-07 | 972 |
| 0.0017 | 1.0 | 1.9967 | 0.7183 | 4.5154465e-07 | 973 |
| 0.0031 | 1.0 | 1.9830 | 0.7183 | 4.5128093e-07 | 974 |
| 0.0019 | 1.0 | 1.9806 | 0.7183 | 4.510171e-07 | 975 |
| 0.0020 | 1.0 | 1.9803 | 0.7183 | 4.5075313e-07 | 976 |
| 0.0027 | 1.0 | 1.9881 | 0.7183 | 4.5048907e-07 | 977 |
| 0.0062 | 0.9976 | 2.0179 | 0.7042 | 4.502249e-07 | 978 |
| 0.0019 | 1.0 | 2.0200 | 0.7042 | 4.499606e-07 | 979 |
| 0.0091 | 0.9953 | 2.0537 | 0.7113 | 4.496962e-07 | 980 |
| 0.0043 | 1.0 | 2.0483 | 0.7113 | 4.4943164e-07 | 981 |
| 0.0030 | 1.0 | 2.0235 | 0.7042 | 4.4916698e-07 | 982 |
| 0.0066 | 0.9953 | 2.0017 | 0.7183 | 4.489022e-07 | 983 |
| 0.0044 | 0.9976 | 2.0148 | 0.7042 | 4.4863734e-07 | 984 |
| 0.0089 | 0.9976 | 2.0407 | 0.6972 | 4.4837236e-07 | 985 |
| 0.0023 | 1.0 | 2.0101 | 0.7183 | 4.4810727e-07 | 986 |
| 0.0013 | 1.0 | 2.0010 | 0.7254 | 4.4784207e-07 | 987 |
| 0.0059 | 0.9976 | 1.9844 | 0.7113 | 4.4757675e-07 | 988 |
| 0.0120 | 0.9953 | 1.9867 | 0.7183 | 4.4731132e-07 | 989 |
| 0.0031 | 1.0 | 2.0145 | 0.7042 | 4.4704578e-07 | 990 |
| 0.0175 | 0.9929 | 2.0260 | 0.6972 | 4.4678012e-07 | 991 |
| 0.0025 | 1.0 | 2.0280 | 0.7042 | 4.4651435e-07 | 992 |
| 0.0025 | 1.0 | 2.0180 | 0.7113 | 4.4624846e-07 | 993 |
| 0.0023 | 1.0 | 2.0092 | 0.7113 | 4.459825e-07 | 994 |
| 0.0039 | 1.0 | 1.9985 | 0.7183 | 4.457164e-07 | 995 |
| 0.0025 | 1.0 | 1.9721 | 0.7324 | 4.454502e-07 | 996 |
| 0.0023 | 1.0 | 1.9633 | 0.7254 | 4.451839e-07 | 997 |
| 0.0015 | 1.0 | 1.9683 | 0.7254 | 4.4491748e-07 | 998 |
| 0.0016 | 1.0 | 1.9732 | 0.7254 | 4.4465096e-07 | 999 |
### Framework versions
- Transformers 4.30.0.dev0
- TensorFlow 2.9.1
- Datasets 2.8.0
- Tokenizers 0.13.2
| 97,414 | [
[
-0.0498046875,
-0.0400390625,
0.022918701171875,
0.003902435302734375,
-0.00307464599609375,
0.0019083023071289062,
-0.0000015497207641601562,
0.00322723388671875,
0.057403564453125,
0.0215606689453125,
-0.0467529296875,
-0.046966552734375,
-0.03985595703125,
... |
jason1234/Ai3_bert_embedding_model | 2023-05-12T16:54:49.000Z | [
"sentence-transformers",
"pytorch",
"bert",
"feature-extraction",
"sentence-similarity",
"transformers",
"endpoints_compatible",
"region:us"
] | sentence-similarity | jason1234 | null | null | jason1234/Ai3_bert_embedding_model | 0 | 2 | sentence-transformers | 2023-05-12T16:04:09 | ---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
---
# {MODEL_NAME}
This is a [sentence-transformers](https://www.SBERT.net) model: it maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model is straightforward once you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('{MODEL_NAME}')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: first pass your input through the transformer model, then apply the right pooling operation on top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('{MODEL_NAME}')
model = AutoModel.from_pretrained('{MODEL_NAME}')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name={MODEL_NAME})
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 57 with parameters:
```
{'batch_size': 8, 'sampler': 'torch.utils.data.sampler.SequentialSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.CosineSimilarityLoss.CosineSimilarityLoss`
Parameters of the fit()-Method (a usage sketch follows this block):
```
{
"epochs": 20,
"evaluation_steps": 0,
"evaluator": "NoneType",
"max_grad_norm": 1,
"optimizer_class": "<class 'torch.optim.adamw.AdamW'>",
"optimizer_params": {
"lr": 2e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 92,
"weight_decay": 0.01
}
```
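The DataLoader and fit() parameters above translate directly into a training call. A minimal sketch of that call, where the base checkpoint and the sentence pairs are placeholders since the card documents neither:
```python
from torch.utils.data import DataLoader
from sentence_transformers import SentenceTransformer, InputExample, losses

model = SentenceTransformer("bert-base-uncased")  # placeholder base; the actual checkpoint is not named

# CosineSimilarityLoss expects sentence pairs with a float similarity label in [0, 1]
train_examples = [
    InputExample(texts=["A man is eating food.", "A man is eating a meal."], label=0.9),
    InputExample(texts=["A man is eating food.", "A plane is taking off."], label=0.1),
]
train_dataloader = DataLoader(train_examples, batch_size=8)  # no shuffle, i.e. the SequentialSampler listed above

model.fit(
    train_objectives=[(train_dataloader, losses.CosineSimilarityLoss(model))],
    epochs=20,
    warmup_steps=92,
    optimizer_params={"lr": 2e-5},
    weight_decay=0.01,
)
```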
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
<!--- Describe where people can find more information --> | 3,700 | [
[
-0.01910400390625,
-0.061492919921875,
0.0209503173828125,
0.0233001708984375,
-0.0203094482421875,
-0.0322265625,
-0.01788330078125,
0.0014095306396484375,
0.0162506103515625,
0.027313232421875,
-0.048675537109375,
-0.046478271484375,
-0.0516357421875,
-0.0... |
AustinCarthy/Base_10Kphish_benignFall_IL_10Krealphish | 2023-05-12T17:22:27.000Z | [
"transformers",
"pytorch",
"tensorboard",
"bert",
"text-classification",
"generated_from_trainer",
"endpoints_compatible",
"region:us"
] | text-classification | AustinCarthy | null | null | AustinCarthy/Base_10Kphish_benignFall_IL_10Krealphish | 0 | 2 | transformers | 2023-05-12T16:10:35 | ---
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
- precision
- recall
model-index:
- name: Base_10Kphish_benignFall_IL_10Krealphish_0.75
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Base_10Kphish_benignFall_IL_10Krealphish_0.75
This model was trained from scratch on an unspecified dataset.
It achieves the following results on the evaluation set (a sketch of the Tpr-at-FPR computation follows the list):
- Loss: 0.0551
- Accuracy: 0.9938
- F1: 0.9303
- Precision: 0.9982
- Recall: 0.871
- Roc Auc Score: 0.9355
- Tpr At Fpr 0.01: 0.8794
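"Tpr At Fpr 0.01" is not a stock `evaluate` metric, and the card does not define it. Assuming the usual reading (the highest true-positive rate achievable while the false-positive rate stays at or below 1%), a scikit-learn sketch looks like this:
```python
import numpy as np
from sklearn.metrics import roc_curve

def tpr_at_fpr(y_true, y_score, max_fpr=0.01):
    # roc_curve sweeps the decision threshold and returns matched FPR/TPR arrays
    fpr, tpr, _ = roc_curve(y_true, y_score)
    mask = fpr <= max_fpr
    return float(tpr[mask].max()) if mask.any() else 0.0

# toy scores for six benign (0) and four phishing (1) examples
y_true = np.array([0, 0, 0, 0, 0, 0, 1, 1, 1, 1])
y_score = np.array([0.02, 0.10, 0.05, 0.20, 0.01, 0.30, 0.90, 0.80, 0.40, 0.95])
print(tpr_at_fpr(y_true, y_score))
```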
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Precision | Recall | Roc Auc Score | Tpr At Fpr 0.01 |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:------:|:---------:|:------:|:-------------:|:---------------:|
| 0.0079 | 1.0 | 6563 | 0.0209 | 0.9956 | 0.9525 | 0.9878 | 0.9196 | 0.9595 | 0.862 |
| 0.003 | 2.0 | 13126 | 0.0338 | 0.9949 | 0.9438 | 0.9940 | 0.8984 | 0.9491 | 0.8796 |
| 0.0024 | 3.0 | 19689 | 0.0410 | 0.9948 | 0.9427 | 0.9949 | 0.8958 | 0.9478 | 0.8648 |
| 0.0014 | 4.0 | 26252 | 0.0493 | 0.9941 | 0.9342 | 0.9982 | 0.878 | 0.9390 | 0.881 |
| 0.0003 | 5.0 | 32815 | 0.0551 | 0.9938 | 0.9303 | 0.9982 | 0.871 | 0.9355 | 0.8794 |
### Framework versions
- Transformers 4.29.1
- Pytorch 1.9.0+cu111
- Datasets 2.10.1
- Tokenizers 0.13.2
| 2,167 | [
[
-0.032928466796875,
-0.038330078125,
0.007663726806640625,
0.0105438232421875,
-0.0196533203125,
-0.0202484130859375,
0.005207061767578125,
-0.01380157470703125,
0.0245819091796875,
0.029144287109375,
-0.05120849609375,
-0.05718994140625,
-0.052032470703125,
... |
stillerman/MDEL-pubmed-feelaw-github-arxiv | 2023-05-12T18:18:39.000Z | [
"transformers",
"pytorch",
"gpt_neox",
"text-generation",
"MDEL",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | stillerman | null | null | stillerman/MDEL-pubmed-feelaw-github-arxiv | 0 | 2 | transformers | 2023-05-12T18:11:40 |
---
tags:
- MDEL
---
# Model Name
stillerman/MDEL-pubmed-feelaw-github-arxiv
# Model Description
This model was generated by averaging the weights of the following models; a sketch of the averaging recipe follows the list.
- [Multi-Domain-Expert-Layers/expert-pubmed_central](https://huggingface.co/Multi-Domain-Expert-Layers/expert-pubmed_central)
- [Multi-Domain-Expert-Layers/expert-freelaw](https://huggingface.co/Multi-Domain-Expert-Layers/expert-freelaw)
- [Multi-Domain-Expert-Layers/expert-github](https://huggingface.co/Multi-Domain-Expert-Layers/expert-github)
- [Multi-Domain-Expert-Layers/expert-arxiv](https://huggingface.co/Multi-Domain-Expert-Layers/expert-arxiv)
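The repository does not publish its merging script, so the following is only a sketch of the generic uniform weight-averaging recipe that the description implies; the model class and output path are assumptions:
```python
import torch
from transformers import AutoModelForCausalLM

expert_ids = [
    "Multi-Domain-Expert-Layers/expert-pubmed_central",
    "Multi-Domain-Expert-Layers/expert-freelaw",
    "Multi-Domain-Expert-Layers/expert-github",
    "Multi-Domain-Expert-Layers/expert-arxiv",
]

# Load every expert's state dict and average each tensor element-wise
state_dicts = [AutoModelForCausalLM.from_pretrained(m).state_dict() for m in expert_ids]
avg_state = {
    key: torch.stack([sd[key].float() for sd in state_dicts]).mean(dim=0)
    for key in state_dicts[0]
}

merged = AutoModelForCausalLM.from_pretrained(expert_ids[0])  # reuse one expert's architecture/config
merged.load_state_dict(avg_state)
merged.save_pretrained("MDEL-pubmed-feelaw-github-arxiv")
```
A uniform mean is the simplest choice; weighted or per-layer merging are common variants, but nothing in the card indicates they were used here.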
| 631 | [
[
-0.01311492919921875,
-0.0212860107421875,
0.034210205078125,
0.01497650146484375,
-0.0038127899169921875,
-0.0185546875,
0.0194549560546875,
-0.01525115966796875,
0.0377197265625,
0.024749755859375,
-0.043701171875,
-0.052459716796875,
-0.059326171875,
-0.0... |
IRI2070/dal-sbert-address-distilled-v1 | 2023-05-12T21:00:42.000Z | [
"sentence-transformers",
"pytorch",
"bert",
"feature-extraction",
"sentence-similarity",
"transformers",
"endpoints_compatible",
"region:us"
] | sentence-similarity | IRI2070 | null | null | IRI2070/dal-sbert-address-distilled-v1 | 0 | 2 | sentence-transformers | 2023-05-12T21:00:16 | ---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
---
# {MODEL_NAME}
This is a [sentence-transformers](https://www.SBERT.net) model: it maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model is straightforward once you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('{MODEL_NAME}')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: first pass your input through the transformer model, then apply the right pooling operation on top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('{MODEL_NAME}')
model = AutoModel.from_pretrained('{MODEL_NAME}')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name={MODEL_NAME})
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 7813 with parameters:
```
{'batch_size': 64, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.MSELoss.MSELoss`
Parameters of the fit()-Method (a distillation sketch follows this block):
```
{
"epochs": 1,
"evaluation_steps": 5000,
"evaluator": "sentence_transformers.evaluation.SequentialEvaluator.SequentialEvaluator",
"max_grad_norm": 1,
"optimizer_class": "<class 'torch.optim.adamw.AdamW'>",
"optimizer_params": {
"eps": 1e-06,
"lr": 0.0001
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 1000,
"weight_decay": 0.01
}
```
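MSELoss together with the "distilled" model name suggests teacher-to-student embedding distillation, where the student regresses onto the teacher's sentence embeddings. The card names neither checkpoint, so both below are placeholders; a sketch under that assumption:
```python
from torch.utils.data import DataLoader
from sentence_transformers import SentenceTransformer, InputExample, losses

teacher = SentenceTransformer("all-mpnet-base-v2")   # placeholder teacher (768-dim, matching the card)
student = SentenceTransformer("bert-base-uncased")   # placeholder student; mean pooling is added automatically

sentences = ["123 Main Street, Apt 4", "56 Ocean Avenue"]
targets = teacher.encode(sentences)  # teacher embeddings become the regression labels

train_examples = [InputExample(texts=[s], label=t) for s, t in zip(sentences, targets)]
loader = DataLoader(train_examples, batch_size=64, shuffle=True)  # RandomSampler, as in the card

student.fit(
    train_objectives=[(loader, losses.MSELoss(model=student))],
    epochs=1,
    warmup_steps=1000,
    optimizer_params={"eps": 1e-6, "lr": 1e-4},
    weight_decay=0.01,
)
```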
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 258, 'do_lower_case': False}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
<!--- Describe where people can find more information --> | 3,764 | [
[
-0.0188751220703125,
-0.061248779296875,
0.0211944580078125,
0.0239105224609375,
-0.0199127197265625,
-0.032379150390625,
-0.0185089111328125,
0.0010967254638671875,
0.0161285400390625,
0.026611328125,
-0.04833984375,
-0.045989990234375,
-0.05133056640625,
-... |
IRI2070/dal-sbert-address-distilled-384-v2 | 2023-05-12T22:24:52.000Z | [
"sentence-transformers",
"pytorch",
"bert",
"feature-extraction",
"sentence-similarity",
"transformers",
"endpoints_compatible",
"region:us"
] | sentence-similarity | IRI2070 | null | null | IRI2070/dal-sbert-address-distilled-384-v2 | 0 | 2 | sentence-transformers | 2023-05-12T22:23:40 | ---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
---
# {MODEL_NAME}
This is a [sentence-transformers](https://www.SBERT.net) model: it maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model is straightforward once you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('{MODEL_NAME}')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: first pass your input through the transformer model, then apply the right pooling operation on top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('{MODEL_NAME}')
model = AutoModel.from_pretrained('{MODEL_NAME}')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name={MODEL_NAME})
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 7813 with parameters:
```
{'batch_size': 64, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.MSELoss.MSELoss`
Parameters of the fit()-Method:
```
{
"epochs": 1,
"evaluation_steps": 5000,
"evaluator": "sentence_transformers.evaluation.SequentialEvaluator.SequentialEvaluator",
"max_grad_norm": 1,
"optimizer_class": "<class 'torch.optim.adamw.AdamW'>",
"optimizer_params": {
"eps": 1e-06,
"lr": 0.0001
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 1000,
"weight_decay": 0.01
}
```
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 258, 'do_lower_case': False}) with Transformer model: BertModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
## Citing & Authors
<!--- Describe where people can find more information --> | 3,764 | [
[
-0.0188751220703125,
-0.061248779296875,
0.0211944580078125,
0.0239105224609375,
-0.0199127197265625,
-0.032379150390625,
-0.0185089111328125,
0.0010967254638671875,
0.0161285400390625,
0.026611328125,
-0.04833984375,
-0.045989990234375,
-0.05133056640625,
-... |
hoang14/viettel-videberta-finetune-viquad-model7 | 2023-05-13T05:50:52.000Z | [
"transformers",
"pytorch",
"tensorboard",
"deberta-v2",
"question-answering",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | question-answering | hoang14 | null | null | hoang14/viettel-videberta-finetune-viquad-model7 | 0 | 2 | transformers | 2023-05-13T03:40:00 | ---
tags:
- generated_from_trainer
model-index:
- name: viettel-videberta-finetune-viquad-model7
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# viettel-videberta-finetune-viquad-model7
This model is a fine-tuned version of [Fsoft-AIC/videberta-base](https://huggingface.co/Fsoft-AIC/videberta-base) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 2.6730
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (an equivalent `TrainingArguments` sketch follows the list):
- learning_rate: 5e-05
- train_batch_size: 13
- eval_batch_size: 13
- seed: 42
- gradient_accumulation_steps: 5
- total_train_batch_size: 65
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 4
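Note that the listed total_train_batch_size follows from train_batch_size × gradient_accumulation_steps = 13 × 5 = 65. As a rough sketch, the same configuration expressed as Hugging Face `TrainingArguments` (the output path is illustrative):
```python
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="videberta-viquad",     # illustrative path
    learning_rate=5e-5,
    per_device_train_batch_size=13,
    per_device_eval_batch_size=13,
    gradient_accumulation_steps=5,     # 13 * 5 = 65 examples per optimizer update
    lr_scheduler_type="linear",
    warmup_ratio=0.1,
    num_train_epochs=4,
    seed=42,
)
```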
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 0.87 | 260 | 3.2187 |
| 3.6521 | 1.74 | 520 | 2.8990 |
| 3.6521 | 2.61 | 780 | 2.7310 |
| 2.5664 | 3.48 | 1040 | 2.6730 |
### Framework versions
- Transformers 4.28.0
- Pytorch 2.0.0+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
| 1,542 | [
[
-0.0330810546875,
-0.03961181640625,
0.00838470458984375,
0.01383209228515625,
-0.0345458984375,
-0.036712646484375,
-0.006549835205078125,
-0.004169464111328125,
0.0017976760864257812,
0.03515625,
-0.045806884765625,
-0.047271728515625,
-0.041595458984375,
... |
hoang14/viettel-videberta-finetune-viquad-model8 | 2023-05-13T06:01:37.000Z | [
"transformers",
"pytorch",
"tensorboard",
"deberta-v2",
"question-answering",
"generated_from_trainer",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | question-answering | hoang14 | null | null | hoang14/viettel-videberta-finetune-viquad-model8 | 0 | 2 | transformers | 2023-05-13T04:25:42 | ---
tags:
- generated_from_trainer
model-index:
- name: viettel-videberta-finetune-viquad-model8
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# viettel-videberta-finetune-viquad-model8
This model is a fine-tuned version of [Fsoft-AIC/videberta-base](https://huggingface.co/Fsoft-AIC/videberta-base) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 4e-05
- train_batch_size: 28
- eval_batch_size: 28
- seed: 42
- gradient_accumulation_steps: 5
- total_train_batch_size: 140
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.87 | 260 | 4.4064 |
### Framework versions
- Transformers 4.28.0
- Pytorch 2.0.0
- Datasets 2.1.0
- Tokenizers 0.13.3
| 1,313 | [
[
-0.03253173828125,
-0.04791259765625,
0.0089874267578125,
0.01493072509765625,
-0.035980224609375,
-0.0316162109375,
-0.00441741943359375,
-0.01006317138671875,
0.007007598876953125,
0.037628173828125,
-0.045928955078125,
-0.043365478515625,
-0.041748046875,
... |
AlekseyKorshuk/roberta-with-topic | 2023-05-13T11:09:06.000Z | [
"transformers",
"pytorch",
"roberta",
"text-classification",
"generated_from_trainer",
"license:mit",
"endpoints_compatible",
"region:us"
] | text-classification | AlekseyKorshuk | null | null | AlekseyKorshuk/roberta-with-topic | 0 | 2 | transformers | 2023-05-13T07:58:23 | ---
license: mit
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: roberta-with-topic
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# roberta-with-topic
This model is a fine-tuned version of [roberta-large](https://huggingface.co/roberta-large) on an unspecified dataset.
It achieves the following results on the evaluation set (a sketch of one possible NDCG computation follows the list):
- Loss: 1.5283
- Ndcg: 0.4453
- Accuracy: 0.2941
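The card does not define its NDCG. One common reading for single-label classification ranks the class scores against a one-hot relevance vector; a scikit-learn sketch of that assumption, with toy values:
```python
import numpy as np
from sklearn.metrics import ndcg_score

# one-hot relevance: the gold class is the only "relevant" item per row
y_true = np.array([[0, 1, 0], [1, 0, 0]])
# model scores (e.g. softmax probabilities) over the same three classes
y_score = np.array([[0.2, 0.5, 0.3], [0.1, 0.6, 0.3]])

print(ndcg_score(y_true, y_score))  # 1.0 only if the gold class is always ranked first
```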
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- distributed_type: multi-GPU
- num_devices: 4
- total_train_batch_size: 64
- total_eval_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Ndcg | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:------:|:--------:|
| 1.5951 | 0.07 | 413 | 1.5693 | 0.4220 | 0.2766 |
| 1.5721 | 0.13 | 826 | 1.5537 | 0.4308 | 0.2828 |
| 1.5594 | 0.2 | 1239 | 1.5615 | 0.4236 | 0.2757 |
| 1.5753 | 0.27 | 1652 | 1.5645 | 0.4272 | 0.2778 |
| 1.5778 | 0.33 | 2065 | 1.5859 | 0.3736 | 0.2430 |
| 1.5673 | 0.4 | 2478 | 1.5576 | 0.4262 | 0.2812 |
| 1.5633 | 0.47 | 2891 | 1.5557 | 0.4294 | 0.2815 |
| 1.5606 | 0.53 | 3304 | 1.5459 | 0.4321 | 0.2836 |
| 1.5476 | 0.6 | 3717 | 1.5508 | 0.4269 | 0.2810 |
| 1.552 | 0.67 | 4130 | 1.5479 | 0.4302 | 0.2831 |
| 1.5469 | 0.73 | 4543 | 1.5430 | 0.4345 | 0.2882 |
| 1.5538 | 0.8 | 4956 | 1.5410 | 0.4371 | 0.2877 |
| 1.557 | 0.87 | 5369 | 1.5420 | 0.4368 | 0.2896 |
| 1.5427 | 0.93 | 5782 | 1.5449 | 0.4269 | 0.2814 |
| 1.5427 | 1.0 | 6195 | 1.5381 | 0.4380 | 0.2896 |
| 1.5469 | 1.07 | 6608 | 1.5381 | 0.4362 | 0.2849 |
| 1.5369 | 1.13 | 7021 | 1.5361 | 0.4383 | 0.2895 |
| 1.5465 | 1.2 | 7434 | 1.5361 | 0.4415 | 0.2940 |
| 1.5433 | 1.27 | 7847 | 1.5342 | 0.4399 | 0.2914 |
| 1.5355 | 1.33 | 8260 | 1.5342 | 0.4409 | 0.2937 |
| 1.5363 | 1.4 | 8673 | 1.5342 | 0.4414 | 0.2923 |
| 1.5372 | 1.47 | 9086 | 1.5312 | 0.4440 | 0.2949 |
| 1.5452 | 1.53 | 9499 | 1.5303 | 0.4439 | 0.2937 |
| 1.5386 | 1.6 | 9912 | 1.5293 | 0.4434 | 0.2915 |
| 1.5314 | 1.67 | 10325 | 1.5303 | 0.4443 | 0.2925 |
| 1.5216 | 1.73 | 10738 | 1.5293 | 0.4447 | 0.2930 |
| 1.5341 | 1.8 | 11151 | 1.5293 | 0.4450 | 0.2929 |
| 1.5315 | 1.87 | 11564 | 1.5283 | 0.4456 | 0.2947 |
| 1.5345 | 1.93 | 11977 | 1.5283 | 0.4455 | 0.2950 |
| 1.5238 | 2.0 | 12390 | 1.5283 | 0.4453 | 0.2941 |
### Framework versions
- Transformers 4.29.1
- Pytorch 2.0.0-rc1
- Datasets 2.12.0
- Tokenizers 0.13.3
| 3,548 | [
[
-0.0489501953125,
-0.04119873046875,
0.016754150390625,
0.00241851806640625,
-0.0030460357666015625,
-0.00670623779296875,
-0.0037994384765625,
-0.007610321044921875,
0.0421142578125,
0.0236358642578125,
-0.052734375,
-0.046966552734375,
-0.044281005859375,
... |
alup/bert-uncased-finetuned-mrpc | 2023-05-13T20:59:51.000Z | [
"transformers",
"pytorch",
"bert",
"text-classification",
"generated_from_trainer",
"dataset:glue",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | text-classification | alup | null | null | alup/bert-uncased-finetuned-mrpc | 0 | 2 | transformers | 2023-05-13T18:49:44 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- accuracy
- f1
model-index:
- name: bert-uncased-finetuned-mrpc
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: glue
type: glue
config: mrpc
split: validation
args: mrpc
metrics:
- name: Accuracy
type: accuracy
value: 0.8676470588235294
- name: F1
type: f1
value: 0.9093959731543624
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-uncased-finetuned-mrpc
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the glue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6265
- Accuracy: 0.8676
- F1: 0.9094
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| No log | 1.0 | 230 | 0.3924 | 0.8554 | 0.9015 |
| No log | 2.0 | 460 | 0.3575 | 0.875 | 0.9128 |
| 0.3857 | 3.0 | 690 | 0.6265 | 0.8676 | 0.9094 |
### Framework versions
- Transformers 4.29.1
- Pytorch 2.0.1+cu117
- Datasets 2.12.0
- Tokenizers 0.13.3
| 1,879 | [
[
-0.0352783203125,
-0.041015625,
0.0031452178955078125,
0.01236724853515625,
-0.0311737060546875,
-0.0291595458984375,
-0.0199127197265625,
-0.01580810546875,
0.0156402587890625,
0.019256591796875,
-0.057861328125,
-0.0374755859375,
-0.049468994140625,
-0.022... |
huanvo88/dqn-SpaceInvadersNoFrameskip-v4 | 2023-05-13T20:48:25.000Z | [
"stable-baselines3",
"SpaceInvadersNoFrameskip-v4",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | huanvo88 | null | null | huanvo88/dqn-SpaceInvadersNoFrameskip-v4 | 0 | 2 | stable-baselines3 | 2023-05-13T20:48:01 | ---
library_name: stable-baselines3
tags:
- SpaceInvadersNoFrameskip-v4
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: DQN
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: SpaceInvadersNoFrameskip-v4
type: SpaceInvadersNoFrameskip-v4
metrics:
- type: mean_reward
value: 733.50 +/- 152.10
name: mean_reward
verified: false
---
# **DQN** Agent playing **SpaceInvadersNoFrameskip-v4**
This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
Install the RL Zoo (with SB3 and SB3-Contrib):
```bash
pip install rl_zoo3
```
```
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga huanvo88 -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do:
```
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga huanvo88 -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
## Training (with the RL Zoo)
```
python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga huanvo88
```
## Hyperparameters
```python
OrderedDict([('batch_size', 32),
('buffer_size', 100000),
('env_wrapper',
['stable_baselines3.common.atari_wrappers.AtariWrapper']),
('exploration_final_eps', 0.01),
('exploration_fraction', 0.1),
('frame_stack', 4),
('gradient_steps', 1),
('learning_rate', 0.0001),
('learning_starts', 100000),
('n_timesteps', 2000000.0),
('optimize_memory_usage', False),
('policy', 'CnnPolicy'),
('target_update_interval', 1000),
('train_freq', 4),
('normalize', False)])
```
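The RL Zoo constructs the agent from this config automatically; a hand-rolled Stable-Baselines3 equivalent might look like the sketch below (illustrative, not the Zoo's exact setup):
```python
from stable_baselines3 import DQN
from stable_baselines3.common.env_util import make_atari_env
from stable_baselines3.common.vec_env import VecFrameStack

# make_atari_env applies AtariWrapper; n_stack=4 matches frame_stack above
env = VecFrameStack(make_atari_env("SpaceInvadersNoFrameskip-v4", n_envs=1, seed=0), n_stack=4)

model = DQN(
    "CnnPolicy",
    env,
    learning_rate=1e-4,
    buffer_size=100_000,
    learning_starts=100_000,
    batch_size=32,
    train_freq=4,
    gradient_steps=1,
    target_update_interval=1_000,
    exploration_fraction=0.1,
    exploration_final_eps=0.01,
)
model.learn(total_timesteps=2_000_000)  # the 2e6 timesteps from n_timesteps above
```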
| 2,691 | [
[
-0.04107666015625,
-0.036102294921875,
0.02069091796875,
0.024658203125,
-0.01082611083984375,
-0.017852783203125,
0.01256561279296875,
-0.01396942138671875,
0.01251220703125,
0.025421142578125,
-0.0693359375,
-0.0352783203125,
-0.0267791748046875,
-0.004844... |
agestau/dummy-fashion-classification | 2023-05-13T21:58:05.000Z | [
"transformers",
"pytorch",
"tensorboard",
"swin",
"image-classification",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | image-classification | agestau | null | null | agestau/dummy-fashion-classification | 0 | 2 | transformers | 2023-05-13T20:52:01 | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: dummy-fashion-classification
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# dummy-fashion-classification
This model is a fine-tuned version of [microsoft/swin-tiny-patch4-window7-224](https://huggingface.co/microsoft/swin-tiny-patch4-window7-224) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1122
- Accuracy: 0.9665
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.3331 | 1.0 | 294 | 0.1725 | 0.9519 |
| 0.296 | 2.0 | 588 | 0.1323 | 0.9591 |
| 0.2484 | 3.0 | 882 | 0.1122 | 0.9665 |
### Framework versions
- Transformers 4.28.0
- Pytorch 2.0.0+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
| 1,610 | [
[
-0.02239990234375,
-0.0304107666015625,
0.0062408447265625,
0.009521484375,
-0.0119781494140625,
-0.01806640625,
-0.005847930908203125,
-0.0214385986328125,
0.00225830078125,
0.00884246826171875,
-0.05511474609375,
-0.0517578125,
-0.033660888671875,
-0.00696... |
Ioanaaaaaaa/distilbert-base-uncased-finetuned-emotion | 2023-05-14T14:50:12.000Z | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:emotion",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | text-classification | Ioanaaaaaaa | null | null | Ioanaaaaaaa/distilbert-base-uncased-finetuned-emotion | 0 | 2 | transformers | 2023-05-13T21:01:11 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-emotion
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: emotion
type: emotion
config: split
split: validation
args: split
metrics:
- name: Accuracy
type: accuracy
value: 0.929
- name: F1
type: f1
value: 0.9289634297429328
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2149
- Accuracy: 0.929
- F1: 0.9290
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.8482 | 1.0 | 250 | 0.3132 | 0.907 | 0.9037 |
| 0.2466 | 2.0 | 500 | 0.2149 | 0.929 | 0.9290 |
### Framework versions
- Transformers 4.29.1
- Pytorch 2.0.0+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
| 1,846 | [
[
-0.03802490234375,
-0.04156494140625,
0.014892578125,
0.0214385986328125,
-0.0264739990234375,
-0.0187835693359375,
-0.012786865234375,
-0.00885009765625,
0.01065826416015625,
0.00864410400390625,
-0.057098388671875,
-0.051361083984375,
-0.05938720703125,
-0... |
swadesh7/finetuning-l3-bert-latest | 2023-05-13T23:11:18.000Z | [
"transformers",
"pytorch",
"tensorboard",
"bert",
"text-classification",
"generated_from_trainer",
"license:cc-by-4.0",
"endpoints_compatible",
"region:us"
] | text-classification | swadesh7 | null | null | swadesh7/finetuning-l3-bert-latest | 0 | 2 | transformers | 2023-05-13T23:04:32 | ---
license: cc-by-4.0
tags:
- generated_from_trainer
model-index:
- name: finetuning-l3-bert-latest
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuning-l3-bert-latest
This model is a fine-tuned version of [l3cube-pune/telugu-bert](https://huggingface.co/l3cube-pune/telugu-bert) on an unknown dataset.
It achieves the following results on the evaluation set:
- eval_loss: 0.6283
- eval_accuracy: 0.7558
- eval_f1: 0.7529
- eval_runtime: 79.9067
- eval_samples_per_second: 51.61
- eval_steps_per_second: 6.458
- step: 0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Framework versions
- Transformers 4.29.0
- Pytorch 1.13.1+cu117
- Datasets 2.12.0
- Tokenizers 0.13.3
| 1,250 | [
[
-0.034088134765625,
-0.058135986328125,
0.01284027099609375,
0.023681640625,
-0.03668212890625,
-0.032501220703125,
-0.020751953125,
-0.0279693603515625,
0.0019388198852539062,
0.0178375244140625,
-0.0491943359375,
-0.032684326171875,
-0.043426513671875,
-0.... |
jojo0616/my_SA_distilbert_model_finalversion | 2023-05-14T02:19:30.000Z | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | text-classification | jojo0616 | null | null | jojo0616/my_SA_distilbert_model_finalversion | 0 | 2 | transformers | 2023-05-14T01:29:32 | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: my_SA_distilbert_model_finalversion
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# my_SA_distilbert_model_finalversion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3031
- Accuracy: 0.9115
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 0.3696 | 1.0 | 2248 | 0.3310 | 0.8852 |
| 0.2624 | 2.0 | 4496 | 0.3118 | 0.9063 |
| 0.1817 | 3.0 | 6744 | 0.3314 | 0.9072 |
| 0.1398 | 4.0 | 8992 | 0.3031 | 0.9115 |
| 0.1294 | 5.0 | 11240 | 0.3801 | 0.9110 |
| 0.0974 | 6.0 | 13488 | 0.3968 | 0.9059 |
| 0.0662 | 7.0 | 15736 | 0.4742 | 0.9177 |
| 0.0634 | 8.0 | 17984 | 0.5182 | 0.9150 |
| 0.0377 | 9.0 | 20232 | 0.5356 | 0.9159 |
| 0.0298 | 10.0 | 22480 | 0.5717 | 0.9139 |
### Framework versions
- Transformers 4.28.0
- Pytorch 2.0.0+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
| 1,945 | [
[
-0.0309295654296875,
-0.039093017578125,
0.01364898681640625,
0.01148223876953125,
-0.021575927734375,
-0.0173797607421875,
-0.0014095306396484375,
-0.007568359375,
0.01100921630859375,
0.016204833984375,
-0.050201416015625,
-0.049102783203125,
-0.059814453125,
... |
vg055/roberta-base-bne-finetuned-TripAdvisorDomainAdaptation-finetuned-e2-RestMex2023-polaridadDA-V1 | 2023-05-14T07:58:26.000Z | [
"transformers",
"pytorch",
"tensorboard",
"roberta",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | text-classification | vg055 | null | null | vg055/roberta-base-bne-finetuned-TripAdvisorDomainAdaptation-finetuned-e2-RestMex2023-polaridadDA-V1 | 0 | 2 | transformers | 2023-05-14T02:14:25 | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- f1
model-index:
- name: roberta-base-bne-finetuned-TripAdvisorDomainAdaptation-finetuned-e2-RestMex2023-polaridadDA-V1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# roberta-base-bne-finetuned-TripAdvisorDomainAdaptation-finetuned-e2-RestMex2023-polaridadDA-V1
This model is a fine-tuned version of [vg055/roberta-base-bne-finetuned-TripAdvisorDomainAdaptation](https://huggingface.co/vg055/roberta-base-bne-finetuned-TripAdvisorDomainAdaptation) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6264
- F1: 0.7402
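A minimal usage sketch (added here; it assumes the standard `transformers` text-classification interface, with polarity label names taken from the model's config):
```python
from transformers import pipeline

repo = "vg055/roberta-base-bne-finetuned-TripAdvisorDomainAdaptation-finetuned-e2-RestMex2023-polaridadDA-V1"
classifier = pipeline("text-classification", model=repo)

# A Spanish review, matching the TripAdvisor/RestMex domain of the checkpoint
print(classifier("La comida estuvo deliciosa y el servicio fue excelente."))
```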
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:-----:|:---------------:|:------:|
| 0.5999 | 1.0 | 15245 | 0.5769 | 0.7385 |
| 0.4425 | 2.0 | 30490 | 0.6264 | 0.7402 |
### Framework versions
- Transformers 4.28.0
- Pytorch 2.0.0+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
| 1,612 | [
[
-0.038909912109375,
-0.042022705078125,
0.01380157470703125,
0.01538848876953125,
-0.033416748046875,
-0.04193115234375,
-0.01549530029296875,
-0.01557159423828125,
0.00864410400390625,
0.0323486328125,
-0.058746337890625,
-0.04730224609375,
-0.04852294921875,
... |
Svetlana0303/Regression_albert_NOaug_MSEloss | 2023-05-14T03:53:41.000Z | [
"transformers",
"pytorch",
"tensorboard",
"albert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | text-classification | Svetlana0303 | null | null | Svetlana0303/Regression_albert_NOaug_MSEloss | 0 | 2 | transformers | 2023-05-14T03:47:15 | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: Regression_albert_NOaug_MSEloss
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Regression_albert_NOaug_MSEloss
This model is a fine-tuned version of [albert-base-v2](https://huggingface.co/albert-base-v2) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4715
- Mse: 0.4715
- Mae: 0.6001
- R2: 0.1320
- Accuracy: 0.4737
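Since the card reports MSE/MAE/R2, the head is presumably a single-logit regression; a hedged sketch reading that raw score directly (the text-classification pipeline would apply a sigmoid to a one-logit head, so the bare model is used instead):
```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

repo = "Svetlana0303/Regression_albert_NOaug_MSEloss"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForSequenceClassification.from_pretrained(repo)

inputs = tokenizer("Example sentence to score.", return_tensors="pt")
with torch.no_grad():
    score = model(**inputs).logits.squeeze().item()  # raw regression output
print(score)
```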
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 15
### Training results
| Training Loss | Epoch | Step | Validation Loss | Mse | Mae | R2 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:-------:|:--------:|
| No log | 1.0 | 33 | 0.2966 | 0.2966 | 0.4630 | 0.1139 | 0.7568 |
| No log | 2.0 | 66 | 0.2679 | 0.2679 | 0.4039 | 0.1995 | 0.7568 |
| No log | 3.0 | 99 | 0.4088 | 0.4088 | 0.5125 | -0.2213 | 0.5405 |
| No log | 4.0 | 132 | 0.4331 | 0.4331 | 0.5399 | -0.2939 | 0.4865 |
| No log | 5.0 | 165 | 0.3699 | 0.3699 | 0.4317 | -0.1053 | 0.6757 |
| No log | 6.0 | 198 | 0.3456 | 0.3456 | 0.4117 | -0.0325 | 0.6216 |
| No log | 7.0 | 231 | 0.3371 | 0.3371 | 0.4155 | -0.0072 | 0.6757 |
| No log | 8.0 | 264 | 0.3261 | 0.3261 | 0.3811 | 0.0256 | 0.7297 |
| No log | 9.0 | 297 | 0.2312 | 0.2312 | 0.2705 | 0.3092 | 0.8108 |
| No log | 10.0 | 330 | 0.3194 | 0.3194 | 0.3681 | 0.0457 | 0.6757 |
| No log | 11.0 | 363 | 0.3638 | 0.3638 | 0.4124 | -0.0870 | 0.6757 |
| No log | 12.0 | 396 | 0.3101 | 0.3101 | 0.3630 | 0.0734 | 0.7027 |
| No log | 13.0 | 429 | 0.2762 | 0.2762 | 0.3221 | 0.1748 | 0.7568 |
| No log | 14.0 | 462 | 0.2970 | 0.2970 | 0.3376 | 0.1126 | 0.7297 |
| No log | 15.0 | 495 | 0.3185 | 0.3185 | 0.3532 | 0.0483 | 0.7297 |
### Framework versions
- Transformers 4.28.0
- Pytorch 2.0.0+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
| 2,734 | [
[
-0.039154052734375,
-0.04296875,
0.01155853271484375,
0.01003265380859375,
-0.0000013709068298339844,
-0.0083160400390625,
0.0037555694580078125,
-0.006084442138671875,
0.038818359375,
0.025726318359375,
-0.0462646484375,
-0.055145263671875,
-0.049041748046875,
... |
50stars/distilbert_imdb_genre_classifier | 2023-05-14T09:39:05.000Z | [
"transformers",
"pytorch",
"distilbert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | text-classification | 50stars | null | null | 50stars/distilbert_imdb_genre_classifier | 0 | 2 | transformers | 2023-05-14T07:13:55 | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- precision
- recall
model-index:
- name: distilbert_imdb_genre_classifier
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert_imdb_genre_classifier
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0196
- Precision: 0.4254
- Recall: 0.4432
- F1 Score: 0.4191
- Jaccard Score: 0.2966
- Average Precision Score: 0.4831
- Percentage Examples At Least 1 True: 0.8845
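The precision/recall/Jaccard metrics suggest a multi-label setup; a hedged sketch with a sigmoid and an arbitrary 0.5 threshold (the threshold behind the reported metrics is not stated in the card):
```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

repo = "50stars/distilbert_imdb_genre_classifier"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForSequenceClassification.from_pretrained(repo)

inputs = tokenizer("A detective hunts a serial killer across a rain-soaked city.",
                   return_tensors="pt")
with torch.no_grad():
    probs = torch.sigmoid(model(**inputs).logits).squeeze()

# keep every genre whose probability clears the threshold
genres = [model.config.id2label[i] for i, p in enumerate(probs) if p > 0.5]
print(genres)
```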
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 Score | Jaccard Score | Average Precision Score | Percentage Examples At Least 1 True |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:--------:|:-------------:|:-----------------------:|:-----------------------------------:|
| 0.0231 | 1.0 | 1500 | 0.0214 | 0.3601 | 0.4090 | 0.3601 | 0.2523 | 0.4326 | 0.8638 |
| 0.0196 | 2.0 | 3000 | 0.0198 | 0.4174 | 0.4367 | 0.4064 | 0.2864 | 0.4743 | 0.8842 |
| 0.0172 | 3.0 | 4500 | 0.0196 | 0.4216 | 0.4418 | 0.4155 | 0.2939 | 0.4822 | 0.887 |
| 0.016 | 4.0 | 6000 | 0.0196 | 0.4254 | 0.4432 | 0.4191 | 0.2966 | 0.4831 | 0.8845 |
### Framework versions
- Transformers 4.26.1
- Pytorch 2.0.0+cu118
- Tokenizers 0.13.3
| 2,295 | [
[
-0.04052734375,
-0.03662109375,
0.01324462890625,
0.01232147216796875,
-0.01413726806640625,
-0.00833892822265625,
-0.0020236968994140625,
-0.004856109619140625,
0.0153045654296875,
0.022491455078125,
-0.046234130859375,
-0.0517578125,
-0.061126708984375,
-0... |
andrei-saceleanu/vit-base-vocalsound-logmel | 2023-05-14T08:28:13.000Z | [
"transformers",
"tf",
"vit",
"feature-extraction",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | feature-extraction | andrei-saceleanu | null | null | andrei-saceleanu/vit-base-vocalsound-logmel | 0 | 2 | transformers | 2023-05-14T08:17:55 | ---
license: apache-2.0
model-index:
- name: vit-base-vocalsound-logmel
results: []
---
# vit-base-vocalsound-logmel
This model is a fine-tuned version of [google/vit-base-patch16-224](https://huggingface.co/google/vit-base-patch16-224) on the [VocalSound](https://github.com/YuanGongND/vocalsound) dataset.
It achieves the following results on the evaluation set:
- accuracy: 88.8
- precision (micro): 91.3
- recall (micro): 87.1
- f1 score (micro): 89.1
- f1 score (macro): 89.1
## Training and evaluation data
Training: VocalSound training split (#samples = 15570)
Evaluation: VocalSound test split (#samples = 3594)
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: AdamW
- weight_decay: 0
- learning_rate: 5e-5
- batch_size: 32
- training_precision: float32
### Preprocessing
Unlike [vit-base-vocalsound](https://huggingface.co/andrei-saceleanu/vit-base-vocalsound), this model uses the log-mel spectrogram (a log transform is applied on top of the mel spectrogram), and the preprocessor normalization
step uses VocalSound statistics (i.e., mean and std) instead of the default ImageNet ones.
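A hedged sketch of this preprocessing (the exact mel parameters, the VocalSound mean/std values, and the resize step are assumptions — none are specified in the card):
```python
import librosa
import numpy as np
import tensorflow as tf

# placeholder statistics: substitute the real VocalSound mean/std used in training
VOCALSOUND_MEAN, VOCALSOUND_STD = 0.0, 1.0

y, sr = librosa.load("clip.wav", sr=16000)
mel = librosa.feature.melspectrogram(y=y, sr=sr, n_mels=128)
logmel = np.log(mel + 1e-6)                           # the added log step
logmel = (logmel - VOCALSOUND_MEAN) / VOCALSOUND_STD  # dataset stats, not ImageNet's

img = tf.image.resize(logmel[..., None], (224, 224))  # ViT-base expects 224x224 input
img = tf.repeat(img, 3, axis=-1)[tf.newaxis]          # fake 3 channels, add batch dim
# `img` can now be fed to the TF ViT checkpoint from this repo
```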
### Framework versions
- Transformers 4.27.4
- TensorFlow 2.12.0
- Tokenizers 0.13.3 | 1,215 | [
[
-0.037139892578125,
-0.032501220703125,
0.00341796875,
0.0251312255859375,
-0.034820556640625,
-0.01204681396484375,
-0.03521728515625,
-0.01369476318359375,
0.01508331298828125,
0.027618408203125,
-0.062255859375,
-0.0565185546875,
-0.039398193359375,
-0.01... |
yotoshihiro/ppo-PyramidsTESTCOLAB | 2023-05-14T09:58:32.000Z | [
"ml-agents",
"tensorboard",
"onnx",
"Pyramids",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-Pyramids",
"region:us"
] | reinforcement-learning | yotoshihiro | null | null | yotoshihiro/ppo-PyramidsTESTCOLAB | 0 | 2 | ml-agents | 2023-05-14T09:57:11 | ---
library_name: ml-agents
tags:
- Pyramids
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-Pyramids
---
# **ppo** Agent playing **Pyramids**
This is a trained model of a **ppo** agent playing **Pyramids** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://github.com/huggingface/ml-agents#get-started
We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:
### Resume the training
```
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. Go to https://huggingface.co/spaces/unity/ML-Agents-Pyramids
2. Find your model_id: yotoshihiro/ppo-PyramidsTESTCOLAB
3. Select your *.nn /*.onnx file
4. Click on Watch the agent play 👀
| 960 | [
[
-0.0284271240234375,
-0.0192718505859375,
-0.0007410049438476562,
0.025970458984375,
-0.00920867919921875,
0.004016876220703125,
0.0270538330078125,
-0.0038204193115234375,
0.03277587890625,
0.03643798828125,
-0.03485107421875,
-0.05108642578125,
-0.036560058593... |
songyi-ng/distilbert_base_uncased_SST2_finetune | 2023-05-26T03:56:50.000Z | [
"transformers",
"pytorch",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:glue",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | text-classification | songyi-ng | null | null | songyi-ng/distilbert_base_uncased_SST2_finetune | 0 | 2 | transformers | 2023-05-14T12:54:54 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- accuracy
- f1
- precision
- recall
model-index:
- name: distilbert_base_uncased_SST2_finetune
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: glue
type: glue
config: sst2
split: validation
args: sst2
metrics:
- name: Accuracy
type: accuracy
value: 0.8371559633027523
- name: F1
type: f1
value: 0.8370839311854139
- name: Precision
type: precision
value: 0.8373294905842589
- name: Recall
type: recall
value: 0.8371559633027523
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert_base_uncased_SST2_finetune
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the glue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3630
- Accuracy: 0.8372
- F1: 0.8371
- Precision: 0.8373
- Recall: 0.8372
- Learning Rate: 0.0000
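A minimal usage sketch (added; it assumes the standard text-classification interface with SST-2's positive/negative labels):
```python
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="songyi-ng/distilbert_base_uncased_SST2_finetune",
)
print(classifier("A gorgeous, witty, seductive movie."))
```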
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step  | Validation Loss | Accuracy | F1     | Precision | Recall | Learning Rate |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:------:|:---------:|:------:|:------:|
| 0.4616 | 1.0 | 8419 | 0.3845 | 0.8337 | 0.8334 | 0.8350 | 0.8337 | 0.0000 |
| 0.3644 | 2.0 | 16838 | 0.3730 | 0.8291 | 0.8291 | 0.8300 | 0.8291 | 0.0000 |
| 0.3526 | 3.0 | 25257 | 0.3661 | 0.8280 | 0.8277 | 0.8290 | 0.8280 | 0.0000 |
| 0.346 | 4.0 | 33676 | 0.3709 | 0.8349 | 0.8345 | 0.8369 | 0.8349 | 0.0000 |
| 0.3436 | 5.0 | 42095 | 0.3674 | 0.8383 | 0.8383 | 0.8384 | 0.8383 | 0.0000 |
| 0.3412 | 6.0 | 50514 | 0.3630 | 0.8372 | 0.8371 | 0.8373 | 0.8372 | 0.0000 |
### Framework versions
- Transformers 4.28.0
- Pytorch 2.0.1+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
| 2,597 | [
[
-0.03314208984375,
-0.045196533203125,
0.01224517822265625,
0.01044464111328125,
-0.0185394287109375,
-0.01265716552734375,
-0.004405975341796875,
-0.0027294158935546875,
0.0225372314453125,
0.01523590087890625,
-0.046783447265625,
-0.04400634765625,
-0.05676269... |
kargaranamir/T5R-base | 2023-10-24T01:27:07.000Z | [
"transformers",
"pytorch",
"safetensors",
"t5",
"text2text-generation",
"generated_from_trainer",
"en",
"dataset:tatsu-lab/alpaca",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text2text-generation | kargaranamir | null | null | kargaranamir/T5R-base | 0 | 2 | transformers | 2023-05-14T17:25:28 | ---
license: mit
datasets:
- tatsu-lab/alpaca
tags:
- generated_from_trainer
- text2text-generation
model-index:
- name: T5R-base
results: []
pipeline_tag: text2text-generation
language:
- en
widget:
- text: |
Instruction: X
Output: Adolf Hitler (German: [ˈadɔlf ˈhɪtlɐ] (listen); 20 April 1889 – 30 April 1945) was an Austrian-born German politician who was the dictator of Germany from 1933 until his suicide in 1945. He rose to power as the leader of the Nazi Party,[a] becoming the chancellor in 1933 and then taking the title of Führer und Reichskanzler in 1934.[b] During his dictatorship, he initiated World War II in Europe by invading Poland on 1 September 1939. He was closely involved in military operations throughout the war and was central to the perpetration of the Holocaust: the genocide of about six million Jews and millions of other victims.
X:
example_title: Example 1
- text: |
Instruction: X
Output: 1- Base your meals on higher fibre starchy carbohydrates. 2- Eat lots of fruit and veg. 3- Eat more fish, including a portion of oily fish.
What kind of instruction could this be the answer to?
X:
example_title: Example 2
---
# T5-Reverse (T5R)
This model can generate prompts (instructions) for any text!
It is an instruction-tuned version of [google/flan-t5-base](https://huggingface.co/google/flan-t5-base) on the [alpaca dataset](https://huggingface.co/datasets/tatsu-lab/alpaca), but in **reverse format**!
## How to Use the Model
You can use the `transformers` library to load and utilize the T5-Reverse (T5R) model for generating prompts based on text. Here's an example of how to do it:
```python
>>> # Import required libraries
>>> import torch
>>> from transformers import pipeline
>>> # Load the model and tokenizer using the pipeline from Hugging Face Hub
>>> inference = pipeline("text2text-generation", model="kargaranamir/T5R-base")
>>> # Example instruction and prompt
>>> sample = '''
>>> Instruction: X
>>> Output: 1- Base your meals on higher fibre starchy carbohydrates. 2- Eat lots of fruit and veg. 3- Eat more fish, including a portion of oily fish.
>>> What kind of instruction could this be the answer to?
>>> X:
>>> '''
>>> # Generate a response using the model
>>> res = inference(sample)
>>> # Print the generated response
>>> print(res)
[{'generated_text': 'Instruction: Generate three recommendations for a healthy diet.'}]
```
## Citation
If you find this model or approach useful, please cite it by linking back to this Hugging Face model.
[
-0.01433563232421875,
-0.06439208984375,
0.0298004150390625,
0.0169677734375,
-0.00820159912109375,
-0.03387451171875,
0.0062255859375,
-0.01334381103515625,
0.0308380126953125,
0.050872802734375,
-0.07122802734375,
-0.050323486328125,
-0.041412353515625,
0.... |
choihyunsoo/distilbert-base-uncased-finetuned-emotion | 2023-05-14T20:03:48.000Z | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:emotion",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | text-classification | choihyunsoo | null | null | choihyunsoo/distilbert-base-uncased-finetuned-emotion | 0 | 2 | transformers | 2023-05-14T19:59:19 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-emotion
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: emotion
type: emotion
config: split
split: validation
args: split
metrics:
- name: Accuracy
type: accuracy
value: 0.9275
- name: F1
type: f1
value: 0.9273308996920793
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2015
- Accuracy: 0.9275
- F1: 0.9273
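A minimal usage sketch (added; it assumes the standard text-classification interface over the six `emotion` labels):
```python
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="choihyunsoo/distilbert-base-uncased-finetuned-emotion",
    top_k=None,  # return scores for all six emotion classes
)
print(classifier("I can't wait to see you this weekend!"))
```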
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.8115 | 1.0 | 250 | 0.2879 | 0.9105 | 0.9080 |
| 0.238 | 2.0 | 500 | 0.2015 | 0.9275 | 0.9273 |
### Framework versions
- Transformers 4.29.1
- Pytorch 2.0.0+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
| 1,848 | [
[
-0.037994384765625,
-0.041748046875,
0.01531982421875,
0.0220184326171875,
-0.02801513671875,
-0.01849365234375,
-0.01280975341796875,
-0.00913238525390625,
0.011077880859375,
0.0085906982421875,
-0.057373046875,
-0.0523681640625,
-0.05938720703125,
-0.00860... |
HashShan/distilbert-base-uncased-finetuned-cola | 2023-05-14T20:30:04.000Z | [
"transformers",
"tf",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_keras_callback",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | text-classification | HashShan | null | null | HashShan/distilbert-base-uncased-finetuned-cola | 0 | 2 | transformers | 2023-05-14T20:26:08 | ---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: HashShan/distilbert-base-uncased-finetuned-cola
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# HashShan/distilbert-base-uncased-finetuned-cola
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.1933
- Validation Loss: 0.5637
- Train Matthews Correlation: 0.4878
- Epoch: 2
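A minimal TF usage sketch (added; the 0/1 = unacceptable/acceptable mapping is the usual CoLA convention, assumed here):
```python
import tensorflow as tf
from transformers import AutoTokenizer, TFAutoModelForSequenceClassification

repo = "HashShan/distilbert-base-uncased-finetuned-cola"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = TFAutoModelForSequenceClassification.from_pretrained(repo)

inputs = tokenizer("The book was written by she.", return_tensors="tf")
logits = model(**inputs).logits
print(int(tf.argmax(logits, axis=-1)[0]))  # 0 = unacceptable, 1 = acceptable (assumed)
```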
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': None, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': True, 'is_legacy_optimizer': False, 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 1602, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32
### Training results
| Train Loss | Validation Loss | Train Matthews Correlation | Epoch |
|:----------:|:---------------:|:--------------------------:|:-----:|
| 0.5175 | 0.4695 | 0.4606 | 0 |
| 0.3218 | 0.4752 | 0.5125 | 1 |
| 0.1933 | 0.5637 | 0.4878 | 2 |
### Framework versions
- Transformers 4.29.1
- TensorFlow 2.12.0
- Datasets 2.12.0
- Tokenizers 0.13.3
| 1,945 | [
[
-0.0360107421875,
-0.047607421875,
0.0207366943359375,
0.0078582763671875,
-0.0285186767578125,
-0.0064697265625,
-0.013671875,
-0.00954437255859375,
0.01479339599609375,
0.0017290115356445312,
-0.04443359375,
-0.042572021484375,
-0.06402587890625,
-0.013488... |
Anikerry/en_pipeline | 2023-05-14T23:29:35.000Z | [
"spacy",
"token-classification",
"en",
"model-index",
"region:us"
] | token-classification | Anikerry | null | null | Anikerry/en_pipeline | 0 | 2 | spacy | 2023-05-14T21:37:44 | ---
tags:
- spacy
- token-classification
language:
- en
model-index:
- name: en_pipeline
results:
- task:
name: NER
type: token-classification
metrics:
- name: NER Precision
type: precision
value: 1.0
- name: NER Recall
type: recall
value: 1.0
- name: NER F Score
type: f_score
value: 1.0
---
| Feature | Description |
| --- | --- |
| **Name** | `en_pipeline` |
| **Version** | `0.0.0` |
| **spaCy** | `>=3.5.2,<3.6.0` |
| **Default Pipeline** | `transformer`, `ner` |
| **Components** | `transformer`, `ner` |
| **Vectors** | 0 keys, 0 unique vectors (0 dimensions) |
| **Sources** | n/a |
| **License** | n/a |
| **Author** | [n/a]() |
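A minimal load-and-run sketch (added; it assumes the packaged pipeline from this repo has been installed, e.g. via `pip install` on the released wheel, so that `spacy.load` can resolve it by name; the example sentence is illustrative):
```python
import spacy

nlp = spacy.load("en_pipeline")
doc = nlp("The BMW X5 xDrive40i comes with the Premium Package and a panoramic roof.")
for ent in doc.ents:
    print(ent.text, ent.label_)  # e.g. BMW_VEHICLE, BMW_AVLB_PACKAGES, ...
```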
### Label Scheme
<details>
<summary>View label scheme (6 labels for 1 components)</summary>
| Component | Labels |
| --- | --- |
| **`ner`** | `BMW_AVLB_PACKAGES`, `BMW_ENGINE`, `BMW_ROOF_CONFIG`, `BMW_SALES_DESCP`, `BMW_STR_CONFIG`, `BMW_VEHICLE` |
</details>
### Accuracy
| Type | Score |
| --- | --- |
| `ENTS_F` | 100.00 |
| `ENTS_P` | 100.00 |
| `ENTS_R` | 100.00 |
| `TRANSFORMER_LOSS` | 4036946.58 |
| `NER_LOSS` | 2486547.16 | | 1,147 | [
[
-0.06805419921875,
-0.0129852294921875,
0.0175018310546875,
0.0157928466796875,
-0.03656005859375,
0.01038360595703125,
0.007495880126953125,
-0.006378173828125,
0.02911376953125,
0.03460693359375,
-0.07373046875,
-0.0650634765625,
-0.05279541015625,
-0.0103... |
Yahiael1/mymodel_final_v2 | 2023-05-18T20:59:32.000Z | [
"transformers",
"pytorch",
"bart",
"text2text-generation",
"arxiv:1910.09700",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"has_space",
"region:us"
] | text2text-generation | Yahiael1 | null | null | Yahiael1/mymodel_final_v2 | 0 | 2 | transformers | 2023-05-15T01:54:44 | ---
model-index:
- name: Yahiael1/mymodel_final_v2
results:
- task:
type: summarization
name: summarization
dataset:
name: newsroom
type: newsroom
split: test
metrics:
- type: rouge1
value: 0.37837302008660717
name: rouge1
- type: rouge2
value: 0.26270145406405965
name: rouge2
- type: rougeL
value: 0.3439331100495976
name: rougeL
- type: rougeLsum
value: 0.34393742939541694
name: rougeLsum
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
- **Finetuned from model [optional]:** [More Information Needed]
### Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
### Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
### Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
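In the meantime, a hedged sketch based on the model's tags and metrics (BART, summarization evaluated on newsroom) — treat it as an assumption, not author-provided usage:
```python
from transformers import pipeline

summarizer = pipeline("summarization", model="Yahiael1/mymodel_final_v2")
article = "..."  # a news article to summarize
print(summarizer(article, max_length=60, min_length=10)[0]["summary_text"])
```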
## Training Details
### Training Data
<!-- This should link to a Data Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
### Training Procedure
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
#### Preprocessing [optional]
[More Information Needed]
#### Training Hyperparameters
- **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision -->
#### Speeds, Sizes, Times [optional]
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
## Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
### Testing Data, Factors & Metrics
#### Testing Data
<!-- This should link to a Data Card if possible. -->
[More Information Needed]
#### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
#### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
### Results
[More Information Needed]
#### Summary
## Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
## Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
## Technical Specifications [optional]
### Model Architecture and Objective
[More Information Needed]
### Compute Infrastructure
[More Information Needed]
#### Hardware
[More Information Needed]
#### Software
[More Information Needed]
## Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Model Card Authors [optional]
[More Information Needed]
## Model Card Contact
[More Information Needed]
| 5,443 | [
[
-0.045379638671875,
-0.04144287109375,
0.034088134765625,
0.00701141357421875,
-0.018157958984375,
-0.02362060546875,
0.0084228515625,
-0.043853759765625,
0.007595062255859375,
0.05035400390625,
-0.05584716796875,
-0.05072021484375,
-0.04168701171875,
-0.010... |
platzi/platzi-distilroberta-base-mrpc-glue-pablo-campino1 | 2023-05-15T03:47:49.000Z | [
"transformers",
"pytorch",
"tensorboard",
"roberta",
"text-classification",
"generated_from_trainer",
"dataset:glue",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | text-classification | platzi | null | null | platzi/platzi-distilroberta-base-mrpc-glue-pablo-campino1 | 0 | 2 | transformers | 2023-05-15T02:13:05 | ---
license: apache-2.0
tags:
- text-classification
- generated_from_trainer
datasets:
- glue
metrics:
- accuracy
- f1
widget:
- text: ["The traffic light are named fases", "The traffic lights are named overlaps"]
example_title : not Equivalent
- text: ["The traffic light are named fases", "The traffic lights are named signal groups"]
example_title : Equivalent
model-index:
- name: platzi-distilroberta-base-mrpc-glue-pablo-campino1
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: glue
type: glue
config: mrpc
split: validation
args: mrpc
metrics:
- name: Accuracy
type: accuracy
value: 0.8308823529411765
- name: F1
type: f1
value: 0.8747731397459164
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# platzi-distilroberta-base-mrpc-glue-pablo-campino1
This model is a fine-tuned version of [distilroberta-base](https://huggingface.co/distilroberta-base) on the MRPC subset of the GLUE benchmark.
It achieves the following results on the evaluation set:
- Loss: 0.5724
- Accuracy: 0.8309
- F1: 0.8748
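A minimal usage sketch (added; MRPC is a sentence-pair task, and the text-classification pipeline accepts the pair as a `text`/`text_pair` dict — the example pair is taken from the widget above):
```python
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="platzi/platzi-distilroberta-base-mrpc-glue-pablo-campino1",
)
print(classifier({"text": "The traffic light are named fases",
                  "text_pair": "The traffic lights are named signal groups"}))
```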
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.5013 | 1.09 | 500 | 0.7153 | 0.8309 | 0.8821 |
| 0.3396 | 2.18 | 1000 | 0.5724 | 0.8309 | 0.8748 |
### Framework versions
- Transformers 4.29.1
- Pytorch 2.0.0+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
| 2,138 | [
[
-0.0308990478515625,
-0.041168212890625,
0.01052093505859375,
0.022064208984375,
-0.029632568359375,
-0.024383544921875,
-0.01025390625,
-0.004364013671875,
0.006740570068359375,
0.00862884521484375,
-0.04937744140625,
-0.044403076171875,
-0.05633544921875,
... |
shihab17/bn-to-en-translation | 2023-05-21T04:17:25.000Z | [
"transformers",
"pytorch",
"tensorboard",
"safetensors",
"marian",
"text2text-generation",
"generated_from_trainer",
"text-generation-inference",
"bn",
"en",
"dataset:kde4",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text2text-generation | shihab17 | null | null | shihab17/bn-to-en-translation | 0 | 2 | transformers | 2023-05-15T03:26:41 | ---
license: apache-2.0
tags:
- generated_from_trainer
- text-generation-inference
datasets:
- kde4
metrics:
- bleu
model-index:
- name: bengali-bn-to-en
results:
- task:
name: Sequence-to-sequence Language Modeling
type: text2text-generation
dataset:
name: kde4
type: kde4
config: bn-en
split: train
args: bn-en
metrics:
- name: Bleu
type: bleu
value: 50.9475
language:
- bn
- en
pipeline_tag: text2text-generation
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
### How to use
You can use this model directly with a pipeline:
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM, pipeline

tokenizer = AutoTokenizer.from_pretrained("shihab17/bn-to-en-translation")
model = AutoModelForSeq2SeqLM.from_pretrained("shihab17/bn-to-en-translation")

sentence = 'ম্যাচ শেষে পুরস্কার বিতরণের মঞ্চে তামিমের মুখে মোস্তাফিজের প্রশংসা শোনা গেল'

# Bengali -> English, so the task name is translation_bn_to_en (not en_to_bn)
translator = pipeline("translation_bn_to_en", model=model, tokenizer=tokenizer)
output = translator(sentence)
print(output)
```
# bengali-bn-to-en
This model is a fine-tuned version of [Helsinki-NLP/opus-mt-bn-en](https://huggingface.co/Helsinki-NLP/opus-mt-bn-en) on the kde4 dataset.
It achieves the following results on the evaluation set:
- Loss: 1.6885
- Bleu: 50.9475
- Gen Len: 6.7043
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Bleu | Gen Len |
|:-------------:|:-----:|:-----:|:---------------:|:-------:|:-------:|
| 1.8866 | 1.0 | 2047 | 1.6397 | 39.6617 | 8.0651 |
| 1.5769 | 2.0 | 4094 | 1.6160 | 33.0247 | 8.9865 |
| 1.3622 | 3.0 | 6141 | 1.6189 | 53.483 | 6.6037 |
| 1.2317 | 4.0 | 8188 | 1.6280 | 51.6882 | 6.762 |
| 1.1248 | 5.0 | 10235 | 1.6450 | 53.1619 | 6.5515 |
| 1.0297 | 6.0 | 12282 | 1.6587 | 52.3224 | 6.5905 |
| 0.9632 | 7.0 | 14329 | 1.6733 | 52.3362 | 6.5441 |
| 0.8831 | 8.0 | 16376 | 1.6802 | 49.3544 | 6.8272 |
| 0.8291 | 9.0 | 18423 | 1.6868 | 49.9486 | 6.792 |
| 0.8175 | 10.0 | 20470 | 1.6885 | 50.9475 | 6.7043 |
### Framework versions
- Transformers 4.29.1
- Pytorch 2.0.0+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3 | 2,934 | [
[
-0.03424072265625,
-0.027374267578125,
0.005367279052734375,
0.006862640380859375,
-0.0217132568359375,
-0.0185546875,
0.0003101825714111328,
-0.004802703857421875,
0.02056884765625,
0.02545166015625,
-0.045257568359375,
-0.041259765625,
-0.05194091796875,
0... |
Yeran1225/distilbert-base-uncased-finetuned-emotion | 2023-05-15T07:48:53.000Z | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:emotion",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | text-classification | Yeran1225 | null | null | Yeran1225/distilbert-base-uncased-finetuned-emotion | 0 | 2 | transformers | 2023-05-15T07:43:14 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-emotion
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: emotion
type: emotion
config: split
split: validation
args: split
metrics:
- name: Accuracy
type: accuracy
value: 0.9295
- name: F1
type: f1
value: 0.9295577509501436
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2117
- Accuracy: 0.9295
- F1: 0.9296
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.8103 | 1.0 | 250 | 0.3028 | 0.908 | 0.9054 |
| 0.2441 | 2.0 | 500 | 0.2117 | 0.9295 | 0.9296 |
### Framework versions
- Transformers 4.29.1
- Pytorch 2.0.0+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
| 1,848 | [
[
-0.038238525390625,
-0.041412353515625,
0.0152435302734375,
0.0217132568359375,
-0.0263519287109375,
-0.0190582275390625,
-0.0125274658203125,
-0.0084381103515625,
0.01055145263671875,
0.008514404296875,
-0.0570068359375,
-0.0518798828125,
-0.059295654296875,
... |
hihijiwon/distilbert-base-uncased-finetuned-emotion | 2023-05-15T07:48:44.000Z | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:emotion",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | text-classification | hihijiwon | null | null | hihijiwon/distilbert-base-uncased-finetuned-emotion | 0 | 2 | transformers | 2023-05-15T07:43:21 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-emotion
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: emotion
type: emotion
config: split
split: validation
args: split
metrics:
- name: Accuracy
type: accuracy
value: 0.92
- name: F1
type: f1
value: 0.920046667425008
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2247
- Accuracy: 0.92
- F1: 0.9200
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.8421 | 1.0 | 250 | 0.3195 | 0.903 | 0.8997 |
| 0.2547 | 2.0 | 500 | 0.2247 | 0.92 | 0.9200 |
### Framework versions
- Transformers 4.29.1
- Pytorch 2.0.0+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
| 1,843 | [
[
-0.03765869140625,
-0.041412353515625,
0.0144195556640625,
0.0210723876953125,
-0.026123046875,
-0.0190887451171875,
-0.013092041015625,
-0.00864410400390625,
0.0102081298828125,
0.00797271728515625,
-0.056365966796875,
-0.051788330078125,
-0.060333251953125,
... |
thetmyatnoe/distilbert-base-uncased-finetuned-emotion | 2023-05-15T07:49:08.000Z | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:emotion",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | text-classification | thetmyatnoe | null | null | thetmyatnoe/distilbert-base-uncased-finetuned-emotion | 0 | 2 | transformers | 2023-05-15T07:43:36 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-emotion
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: emotion
type: emotion
config: split
split: validation
args: split
metrics:
- name: Accuracy
type: accuracy
value: 0.924
- name: F1
type: f1
value: 0.9243324172542533
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2197
- Accuracy: 0.924
- F1: 0.9243
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.8332 | 1.0 | 250 | 0.3188 | 0.908 | 0.9053 |
| 0.251 | 2.0 | 500 | 0.2197 | 0.924 | 0.9243 |
### Framework versions
- Transformers 4.29.1
- Pytorch 2.0.0+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
| 1,846 | [
[
-0.037933349609375,
-0.041259765625,
0.01403045654296875,
0.0223236083984375,
-0.02606201171875,
-0.0186614990234375,
-0.0133819580078125,
-0.0086669921875,
0.0106658935546875,
0.00795745849609375,
-0.056640625,
-0.051849365234375,
-0.0596923828125,
-0.00760... |
DheHun/distilbert-base-uncased-finetuned-emotion | 2023-05-15T07:49:29.000Z | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:emotion",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | text-classification | DheHun | null | null | DheHun/distilbert-base-uncased-finetuned-emotion | 0 | 2 | transformers | 2023-05-15T07:44:59 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-emotion
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: emotion
type: emotion
config: split
split: validation
args: split
metrics:
- name: Accuracy
type: accuracy
value: 0.9225
- name: F1
type: f1
value: 0.9226602737439042
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2281
- Accuracy: 0.9225
- F1: 0.9227
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.8266 | 1.0 | 250 | 0.3223 | 0.897 | 0.8934 |
| 0.2512 | 2.0 | 500 | 0.2281 | 0.9225 | 0.9227 |
### Framework versions
- Transformers 4.29.1
- Pytorch 2.0.0+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
| 1,848 | [
[
-0.0379638671875,
-0.040985107421875,
0.015655517578125,
0.0209808349609375,
-0.02667236328125,
-0.019073486328125,
-0.0128936767578125,
-0.0084686279296875,
0.0107269287109375,
0.00833892822265625,
-0.05694580078125,
-0.052032470703125,
-0.0596923828125,
-0... |
yujiepan/mobilebert-uncased-squadv1-14blocks-structured39.8-int8 | 2023-05-15T12:32:55.000Z | [
"transformers",
"pytorch",
"onnx",
"openvino",
"mobilebert",
"generated_from_trainer",
"dataset:squad",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | null | yujiepan | null | null | yujiepan/mobilebert-uncased-squadv1-14blocks-structured39.8-int8 | 0 | 2 | transformers | 2023-05-15T12:10:41 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- squad
model-index:
- name: mobilebert-uncased-squadv1-14blocks-structured39.8-int8
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mobilebert-uncased-squadv1-14blocks-structured39.8-int8
This model is a fine-tuned version of [google/mobilebert-uncased](https://huggingface.co/google/mobilebert-uncased) on the squad dataset.
Note that this model keeps only the first 14 transformer blocks. It is quantized and structurally pruned by NNCF. The sparsity of the remaining linear layers is 39.8%.
- Torch f1: 90.15
- IR f1: 89.8414
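A minimal usage sketch for the PyTorch checkpoint (added; the quantized/pruned IR would instead be run through OpenVINO tooling, which this sketch does not cover):
```python
from transformers import pipeline

qa = pipeline(
    "question-answering",
    model="yujiepan/mobilebert-uncased-squadv1-14blocks-structured39.8-int8",
)
print(qa(question="Where do water droplets collide with ice crystals?",
         context="Precipitation forms as smaller droplets coalesce via collision "
                 "with other rain drops or ice crystals within a cloud."))
```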
### Framework versions
- Transformers 4.25.1
- Pytorch 1.13.1+cu116
- Datasets 2.8.0
- Tokenizers 0.13.2
| 889 | [
[
-0.0226898193359375,
-0.03729248046875,
-0.01293182373046875,
0.0273895263671875,
-0.0157012939453125,
0.0198974609375,
0.004825592041015625,
-0.005218505859375,
0.01465606689453125,
0.036376953125,
-0.0596923828125,
-0.036468505859375,
-0.04022216796875,
-0... |
burningfalls/my-fine-tuned-bert | 2023-05-28T01:18:42.000Z | [
"transformers",
"tf",
"bert",
"text-classification",
"en",
"ko",
"dataset:AI-Hub",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | text-classification | burningfalls | null | null | burningfalls/my-fine-tuned-bert | 1 | 2 | transformers | 2023-05-15T13:05:24 | ---
language:
- en
- ko
license: apache-2.0
datasets: AI-Hub
metrics:
- accuracy
pipeline_tag: text-classification
---
# 1. Introduction
## 1.1 Examples
![examples](https://github.com/burningfalls/burningfalls/assets/30232837/e0e3a375-b25c-4b20-9ea3-1bbf60eb9299)
## 1.2 F1-score

---
# 2. Requirements
```python
# my env
python==3.11.3
tensorflow==2.12.0
transformers==4.29.2
# maybe you need to
python>=3.6
tensorflow>=2.0
transformers>=4.0
```
---
# 3. Load
```python
from transformers import AutoTokenizer, TFAutoModelForSequenceClassification
from transformers import TextClassificationPipeline
BERT_PATH = "burningfalls/my-fine-tuned-bert"

def load_bert():
    loaded_tokenizer = AutoTokenizer.from_pretrained(BERT_PATH)
    loaded_model = TFAutoModelForSequenceClassification.from_pretrained(BERT_PATH)
    text_classifier = TextClassificationPipeline(
        tokenizer=loaded_tokenizer,
        model=loaded_model,
        framework='tf',
        top_k=1
    )
    return text_classifier  # expose the pipeline used in section 4
```
---
# 4. Usage
```python
import re
import sentiments  # the label table from section 5

def predict_sentiment(text):
    result = text_classifier(text)[0]                          # top-1 prediction
    feel_idx = int(re.sub(r'[^0-9]', '', result[0]['label']))  # "LABEL_12" -> 12
    feel = sentiments.Feel[feel_idx]["label"]                  # look up the Korean label
    return feel
```
---
# 5. sentiments.py
```python
Feel = [
{"label": "가난한, 불우한", "index": 0},
{"label": "감사하는", "index": 1},
{"label": "걱정스러운", "index": 2},
{"label": "고립된", "index": 3},
{"label": "괴로워하는", "index": 4},
{"label": "구역질 나는", "index": 5},
{"label": "기쁨", "index": 6},
{"label": "낙담한", "index": 7},
{"label": "남의 시선을 의식하는", "index": 8},
{"label": "노여워하는", "index": 9},
{"label": "눈물이 나는", "index": 10},
{"label": "느긋", "index": 11},
{"label": "당혹스러운", "index": 12},
{"label": "당황", "index": 13},
{"label": "두려운", "index": 14},
{"label": "마비된", "index": 15},
{"label": "만족스러운", "index": 16},
{"label": "방어적인", "index": 17},
{"label": "배신당한", "index": 18},
{"label": "버려진", "index": 19},
{"label": "부끄러운", "index": 20},
{"label": "분노", "index": 21},
{"label": "불안", "index": 22},
{"label": "비통한", "index": 23},
{"label": "상처", "index": 24},
{"label": "성가신", "index": 25},
{"label": "스트레스 받는", "index": 26},
{"label": "슬픔", "index": 27},
{"label": "신뢰하는", "index": 28},
{"label": "신이 난", "index": 29},
{"label": "실망한", "index": 30},
{"label": "악의적인", "index": 31},
{"label": "안달하는", "index": 32},
{"label": "안도", "index": 33},
{"label": "억울한", "index": 34},
{"label": "열등감", "index": 35},
{"label": "염세적인", "index": 36},
{"label": "외로운", "index": 37},
{"label": "우울한", "index": 38},
{"label": "자신하는", "index": 39},
{"label": "조심스러운", "index": 40},
{"label": "좌절한", "index": 41},
{"label": "죄책감의", "index": 42},
{"label": "질투하는", "index": 43},
{"label": "짜증내는", "index": 44},
{"label": "초조한", "index": 45},
{"label": "충격 받은", "index": 46},
{"label": "취약한", "index": 47},
{"label": "툴툴대는", "index": 48},
{"label": "편안한", "index": 49},
{"label": "한심한", "index": 50},
{"label": "혐오스러운", "index": 51},
{"label": "혼란스러운", "index": 52},
{"label": "환멸을 느끼는", "index": 53},
{"label": "회의적인", "index": 54},
{"label": "후회되는", "index": 55},
{"label": "흥분", "index": 56},
{"label": "희생된", "index": 57},
]
```
---
# 6. Reference
* BERT: [klue/bert-base](https://huggingface.co/klue/bert-base)
* Dataset: [AI-Hub 감성 대화 말뭉치](https://www.aihub.or.kr/aihubdata/data/view.do?currMenu=115&topMenu=100&aihubDataSe=realm&dataSetSn=86) | 3,701 | [
[
-0.0494384765625,
-0.033935546875,
0.0159149169921875,
0.025909423828125,
-0.023223876953125,
0.0221710205078125,
0.01215362548828125,
-0.01393890380859375,
0.05145263671875,
0.00926971435546875,
-0.051422119140625,
-0.0662841796875,
-0.0438232421875,
0.0175... |
sai1881/bloom-560m-Forecast | 2023-05-15T16:24:25.000Z | [
"transformers",
"pytorch",
"tensorboard",
"bloom",
"text-generation",
"generated_from_trainer",
"license:bigscience-bloom-rail-1.0",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | sai1881 | null | null | sai1881/bloom-560m-Forecast | 0 | 2 | transformers | 2023-05-15T14:46:20 | ---
license: bigscience-bloom-rail-1.0
tags:
- generated_from_trainer
model-index:
- name: bloom-560m-Forecast
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bloom-560m-Forecast
This model is a fine-tuned version of [bigscience/bloom-560m](https://huggingface.co/bigscience/bloom-560m) on the None dataset.
It achieves the following results on the evaluation set:
- eval_loss: 1.4876
- eval_runtime: 125.5708
- eval_samples_per_second: 42.12
- eval_steps_per_second: 5.272
- epoch: 2.0
- step: 1324
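A minimal generation sketch (added; the forecasting prompt format the model was trained on is not documented, so the prompt below is only illustrative):
```python
from transformers import pipeline

generator = pipeline("text-generation", model="sai1881/bloom-560m-Forecast")
prompt = "Forecast the next value:"  # illustrative; real prompt format unknown
print(generator(prompt, max_new_tokens=50)[0]["generated_text"])
```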
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Framework versions
- Transformers 4.28.1
- Pytorch 2.0.0
- Datasets 2.1.0
- Tokenizers 0.13.3
| 1,217 | [
[
-0.025299072265625,
-0.03863525390625,
0.02862548828125,
0.037078857421875,
-0.0225982666015625,
-0.037139892578125,
-0.0078582763671875,
-0.0209503173828125,
0.00652313232421875,
0.0223846435546875,
-0.0582275390625,
-0.0452880859375,
-0.03271484375,
-0.016... |