modelId | lastModified | tags | pipeline_tag | author | config | securityStatus | id | likes | downloads | library_name | created | card | card_len | embeddings |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
gokuls/hBERTv1_new_pretrain_cola | 2023-06-06T06:39:55.000Z | [
"transformers",
"pytorch",
"tensorboard",
"hybridbert",
"text-classification",
"generated_from_trainer",
"en",
"dataset:glue",
"model-index",
"endpoints_compatible",
"region:us"
] | text-classification | gokuls | null | null | gokuls/hBERTv1_new_pretrain_cola | 0 | 2 | transformers | 2023-05-31T10:49:20 | ---
language:
- en
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- matthews_correlation
- accuracy
model-index:
- name: hBERTv1_new_pretrain_cola
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: GLUE COLA
type: glue
config: cola
split: validation
args: cola
metrics:
- name: Matthews Correlation
type: matthews_correlation
value: 0.0
- name: Accuracy
type: accuracy
value: 0.6912751793861389
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# hBERTv1_new_pretrain_cola
This model is a fine-tuned version of [gokuls/bert_12_layer_model_v1_complete_training_new](https://huggingface.co/gokuls/bert_12_layer_model_v1_complete_training_new) on the GLUE COLA dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6176
- Matthews Correlation: 0.0
- Accuracy: 0.6913
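For context (not stated in the original card), a Matthews correlation of 0.0 combined with an unchanging accuracy of 0.6913 is exactly what a constant majority-class predictor yields on the CoLA validation split (721 of its 1043 sentences are labelled acceptable). A quick sketch with scikit-learn, using hypothetical labels:
```python
from sklearn.metrics import matthews_corrcoef

# A constant prediction gives MCC = 0.0 regardless of the true labels.
y_true = [1, 0, 1, 1, 0, 1]
y_pred = [1, 1, 1, 1, 1, 1]
print(matthews_corrcoef(y_true, y_pred))  # 0.0
```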
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 4e-05
- train_batch_size: 128
- eval_batch_size: 128
- seed: 10
- distributed_type: multi-GPU
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
### Training results
| Training Loss | Epoch | Step | Validation Loss | Matthews Correlation | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------------------:|:--------:|
| 0.6331 | 1.0 | 67 | 0.6181 | 0.0 | 0.6913 |
| 0.624 | 2.0 | 134 | 0.6203 | 0.0 | 0.6913 |
| 0.6173 | 3.0 | 201 | 0.6176 | 0.0 | 0.6913 |
| 0.6176 | 4.0 | 268 | 0.6185 | 0.0 | 0.6913 |
| 0.6121 | 5.0 | 335 | 0.6194 | 0.0 | 0.6913 |
| 0.6112 | 6.0 | 402 | 0.6186 | 0.0 | 0.6913 |
| 0.6132 | 7.0 | 469 | 0.6267 | 0.0 | 0.6913 |
| 0.6124 | 8.0 | 536 | 0.6218 | 0.0 | 0.6913 |
### Framework versions
- Transformers 4.29.2
- Pytorch 1.14.0a0+410ce96
- Datasets 2.12.0
- Tokenizers 0.13.3
| 2,533 | [
[
-0.0293731689453125,
-0.046295166015625,
0.004764556884765625,
0.016754150390625,
-0.0160064697265625,
-0.0107574462890625,
0.0020236968994140625,
-0.01385498046875,
0.031982421875,
0.0197296142578125,
-0.056427001953125,
-0.034759521484375,
-0.052490234375,
... |
MJ03/distilbert-base-uncased-distilled-clinc | 2023-05-31T11:20:34.000Z | [
"transformers",
"pytorch",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:clinc_oos",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | text-classification | MJ03 | null | null | MJ03/distilbert-base-uncased-distilled-clinc | 0 | 2 | transformers | 2023-05-31T11:09:41 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- clinc_oos
metrics:
- accuracy
model-index:
- name: distilbert-base-uncased-distilled-clinc
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: clinc_oos
type: clinc_oos
config: plus
split: validation
args: plus
metrics:
- name: Accuracy
type: accuracy
value: 0.9396774193548387
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-distilled-clinc
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the clinc_oos dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1022
- Accuracy: 0.9397
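As a usage sketch (the original card leaves usage unspecified), the checkpoint can be loaded with the standard `transformers` pipeline; the 150 clinc_oos intent labels plus the out-of-scope class come from the model config:
```python
from transformers import pipeline

# Hypothetical example query; any short user utterance works.
classifier = pipeline("text-classification",
                      model="MJ03/distilbert-base-uncased-distilled-clinc")
print(classifier("please set an alarm for seven in the morning"))
```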
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 48
- eval_batch_size: 48
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.9252 | 1.0 | 318 | 0.5759 | 0.7268 |
| 0.4452 | 2.0 | 636 | 0.2766 | 0.8787 |
| 0.2465 | 3.0 | 954 | 0.1728 | 0.9174 |
| 0.1722 | 4.0 | 1272 | 0.1356 | 0.93 |
| 0.1398 | 5.0 | 1590 | 0.1202 | 0.9348 |
| 0.1243 | 6.0 | 1908 | 0.1118 | 0.9387 |
| 0.1148 | 7.0 | 2226 | 0.1073 | 0.9387 |
| 0.109 | 8.0 | 2544 | 0.1044 | 0.9403 |
| 0.1056 | 9.0 | 2862 | 0.1027 | 0.9394 |
| 0.1043 | 10.0 | 3180 | 0.1022 | 0.9397 |
### Framework versions
- Transformers 4.29.2
- Pytorch 2.0.1+cu118
- Datasets 1.16.1
- Tokenizers 0.13.3
| 2,243 | [
[
-0.0338134765625,
-0.0369873046875,
0.0149993896484375,
0.004367828369140625,
-0.0241241455078125,
-0.0193023681640625,
-0.0102691650390625,
-0.004314422607421875,
0.00838470458984375,
0.021148681640625,
-0.043975830078125,
-0.049652099609375,
-0.0594482421875,
... |
NickThe1/ppo-SnowballTargetTESTCOLAB | 2023-06-01T04:53:49.000Z | [
"ml-agents",
"tensorboard",
"onnx",
"SnowballTarget",
"deep-reinforcement-learning",
"reinforcement-learning",
"ML-Agents-SnowballTarget",
"region:us"
] | reinforcement-learning | NickThe1 | null | null | NickThe1/ppo-SnowballTargetTESTCOLAB | 0 | 2 | ml-agents | 2023-05-31T11:53:27 | ---
library_name: ml-agents
tags:
- SnowballTarget
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-SnowballTarget
---
# **ppo** Agent playing **SnowballTarget**
This is a trained model of a **ppo** agent playing **SnowballTarget** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).
## Usage (with ML-Agents)
The Documentation: https://github.com/huggingface/ml-agents#get-started
We wrote a complete tutorial on training your first agent with ML-Agents and publishing it to the Hub.
### Resume the training
```
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```
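For example, with a hypothetical configuration path and run id (substitute your own):
```
mlagents-learn ./config/ppo/SnowballTarget.yaml --run-id=SnowballTarget1 --resume
```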
### Watch your Agent play
You can watch your agent **playing directly in your browser**:
1. Go to https://huggingface.co/spaces/unity/ML-Agents-SnowballTarget
2. Find your model_id: NickThe1/ppo-SnowballTargetTESTCOLAB
3. Select your *.nn or *.onnx file
4. Click on Watch the agent play 👀
| 996 | [
[
-0.0169830322265625,
-0.0288848876953125,
0.006565093994140625,
0.0157318115234375,
-0.02276611328125,
0.016082763671875,
0.021575927734375,
-0.006420135498046875,
0.0251007080078125,
0.037811279296875,
-0.053985595703125,
-0.056488037109375,
-0.041717529296875,... |
trung0209/autotrain-testrum3-63013135311 | 2023-05-31T12:54:50.000Z | [
"transformers",
"pytorch",
"safetensors",
"bert",
"text-classification",
"autotrain",
"unk",
"dataset:trung0209/autotrain-data-testrum3",
"co2_eq_emissions",
"endpoints_compatible",
"region:us"
] | text-classification | trung0209 | null | null | trung0209/autotrain-testrum3-63013135311 | 0 | 2 | transformers | 2023-05-31T12:53:08 | ---
tags:
- autotrain
- text-classification
language:
- unk
widget:
- text: "I love AutoTrain"
datasets:
- trung0209/autotrain-data-testrum3
co2_eq_emissions:
emissions: 0.18211514635496343
---
# Model Trained Using AutoTrain
- Problem type: Multi-class Classification
- Model ID: 63013135311
- CO2 Emissions (in grams): 0.1821
## Validation Metrics
- Loss: 1.172
- Accuracy: 0.569
- Macro F1: 0.319
- Micro F1: 0.569
- Weighted F1: 0.656
- Macro Precision: 0.396
- Micro Precision: 0.569
- Weighted Precision: 0.806
- Macro Recall: 0.288
- Micro Recall: 0.569
- Weighted Recall: 0.569
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoTrain"}' https://api-inference.huggingface.co/models/trung0209/autotrain-testrum3-63013135311
```
Or use the Python API:
```
from transformers import AutoModelForSequenceClassification, AutoTokenizer
model = AutoModelForSequenceClassification.from_pretrained("trung0209/autotrain-testrum3-63013135311", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("trung0209/autotrain-testrum3-63013135311", use_auth_token=True)
inputs = tokenizer("I love AutoTrain", return_tensors="pt")
outputs = model(**inputs)
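# Assumed post-processing (not part of the original card): turn the raw
# logits into a predicted label name.
import torch

probs = torch.softmax(outputs.logits, dim=-1)
predicted_id = int(probs.argmax(dim=-1))
print(model.config.id2label[predicted_id])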
``` | 1,284 | [
[
-0.031097412109375,
-0.022796630859375,
0.007106781005859375,
0.0099945068359375,
-0.00658416748046875,
0.00611114501953125,
0.0000559687614440918,
-0.015106201171875,
-0.0036487579345703125,
0.0048980712890625,
-0.0458984375,
-0.03167724609375,
-0.0539245605468... |
mlsyedrz/bart-prompt-generator | 2023-05-31T13:05:16.000Z | [
"transformers",
"tf",
"bart",
"text2text-generation",
"generated_from_keras_callback",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | text2text-generation | mlsyedrz | null | null | mlsyedrz/bart-prompt-generator | 0 | 2 | transformers | 2023-05-31T12:58:50 | ---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: bart-prompt-generator
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# bart-prompt-generator
This model is a fine-tuned version of [facebook/bart-large](https://huggingface.co/facebook/bart-large) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 2.5060
- Validation Loss: 2.9050
- Epoch: 4
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': 2e-05, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: float32
### Training results
| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 7.5114 | 5.6438 | 0 |
| 4.2598 | 3.2422 | 1 |
| 3.0802 | 2.9787 | 2 |
| 2.7291 | 2.9409 | 3 |
| 2.5060 | 2.9050 | 4 |
### Framework versions
- Transformers 4.29.2
- TensorFlow 2.12.0
- Datasets 2.12.0
- Tokenizers 0.13.3
| 1,466 | [
[
-0.04620361328125,
-0.06298828125,
0.041107177734375,
0.0022068023681640625,
-0.0307464599609375,
-0.0203399658203125,
-0.014190673828125,
-0.01462554931640625,
0.021636962890625,
0.0158233642578125,
-0.066162109375,
-0.0411376953125,
-0.045257568359375,
-0.... |
poltextlab/xlm-roberta-large-danish-legal-cap | 2023-07-04T17:40:32.000Z | [
"transformers",
"pytorch",
"xlm-roberta",
"text-classification",
"zero-shot-classification",
"da",
"license:mit",
"endpoints_compatible",
"region:us"
] | text-classification | poltextlab | null | null | poltextlab/xlm-roberta-large-danish-legal-cap | 0 | 2 | transformers | 2023-05-31T13:56:31 |
---
license: mit
language:
- da
tags:
- zero-shot-classification
- text-classification
- pytorch
metrics:
- accuracy
- f1-score
---
# xlm-roberta-large-danish-legal-cap
## Model description
An `xlm-roberta-large` model fine-tuned on Danish training data containing texts of the `legal` domain, labelled with [major topic codes](https://www.comparativeagendas.net/pages/master-codebook) from the [Comparative Agendas Project](https://www.comparativeagendas.net/).
## How to use the model
#### Loading and tokenizing input data
```python
import pandas as pd
import numpy as np
from datasets import Dataset
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
Trainer, TrainingArguments)
CAP_NUM_DICT = {0: '1', 1: '2', 2: '3', 3: '4', 4: '5', 5: '6',
                6: '7', 7: '8', 8: '9', 9: '10', 10: '12', 11: '13',
                12: '14', 13: '15', 14: '16', 15: '17', 16: '18',
                17: '19', 18: '20', 19: '21', 20: '23', 21: '999'}
tokenizer = AutoTokenizer.from_pretrained('xlm-roberta-large')
num_labels = len(CAP_NUM_DICT)
MAXLEN = 256  # MAXLEN is undefined in the original snippet; 256 is an assumed value

def tokenize_dataset(data: pd.DataFrame):
    tokenized = tokenizer(data["text"],
                          max_length=MAXLEN,
                          truncation=True,
                          padding="max_length")
    return tokenized

# `data` is a pandas DataFrame holding the raw texts in a "text" column
hg_data = Dataset.from_pandas(data)
dataset = hg_data.map(tokenize_dataset, batched=True, remove_columns=hg_data.column_names)
```
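The loading snippet above expects a pandas DataFrame named `data` with the raw texts in a `text` column; a minimal, hypothetical construction looks like this:
```python
import pandas as pd

# Hypothetical input: one untokenized document per row.
data = pd.DataFrame({"text": ["Forslag til lov om ændring af skatteloven."]})
```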
#### Inference using the Trainer class
```python
model = AutoModelForSequenceClassification.from_pretrained('poltextlab/xlm-roberta-large-danish-legal-cap',
num_labels=num_labels,
problem_type="multi_label_classification",
ignore_mismatched_sizes=True
)
training_args = TrainingArguments(
output_dir='.',
per_device_train_batch_size=8,
per_device_eval_batch_size=8
)
trainer = Trainer(
model=model,
args=training_args
)
probs = trainer.predict(test_dataset=dataset).predictions
predicted = pd.DataFrame(np.argmax(probs, axis=1)).replace({0: CAP_NUM_DICT}).rename(
columns={0: 'predicted'}).reset_index(drop=True)
```
### Fine-tuning procedure
`xlm-roberta-large-danish-legal-cap` was fine-tuned using the Hugging Face Trainer class with the following hyperparameters:
```python
training_args = TrainingArguments(
output_dir=f"../model/{model_dir}/tmp/",
logging_dir=f"../logs/{model_dir}/",
logging_strategy='epoch',
num_train_epochs=10,
per_device_train_batch_size=8,
per_device_eval_batch_size=8,
learning_rate=5e-06,
seed=42,
save_strategy='epoch',
evaluation_strategy='epoch',
save_total_limit=1,
load_best_model_at_end=True
)
```
We also incorporated an EarlyStoppingCallback in the process with a patience of 2 epochs.
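A minimal sketch of that wiring (the exact `Trainer` call is not shown in the card; `train_data` and `eval_data` stand in for the tokenized splits):
```python
from transformers import EarlyStoppingCallback, Trainer

trainer = Trainer(
    model=model,
    args=training_args,
    train_dataset=train_data,
    eval_dataset=eval_data,
    # stop when the monitored metric fails to improve for 2 consecutive evaluations
    callbacks=[EarlyStoppingCallback(early_stopping_patience=2)],
)
trainer.train()
```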
## Model performance
The model was evaluated on a test set of 1693 examples (10% of the available data).<br>
Model accuracy is **0.84**.
| label | precision | recall | f1-score | support |
|:-------------|------------:|---------:|-----------:|----------:|
| 0 | 0.83 | 0.82 | 0.82 | 199 |
| 1 | 0.81 | 0.73 | 0.77 | 59 |
| 2 | 0.9 | 0.91 | 0.9 | 86 |
| 3 | 0.89 | 0.77 | 0.83 | 74 |
| 4 | 0.83 | 0.9 | 0.86 | 107 |
| 5 | 0.95 | 0.9 | 0.92 | 99 |
| 6 | 0.73 | 0.91 | 0.81 | 74 |
| 7 | 0.84 | 0.88 | 0.86 | 48 |
| 8 | 0.7 | 0.92 | 0.79 | 48 |
| 9 | 0.88 | 0.91 | 0.9 | 90 |
| 10 | 0.73 | 0.77 | 0.75 | 90 |
| 11 | 0.81 | 0.89 | 0.85 | 138 |
| 12 | 0.86 | 0.79 | 0.82 | 112 |
| 13 | 0.85 | 0.82 | 0.84 | 133 |
| 14 | 0.7 | 0.84 | 0.76 | 19 |
| 15 | 0.83 | 0.89 | 0.86 | 28 |
| 16 | 0.77 | 0.67 | 0.71 | 15 |
| 17 | 0.88 | 0.79 | 0.83 | 71 |
| 18 | 0.93 | 0.78 | 0.85 | 134 |
| 19 | 0.96 | 0.96 | 0.96 | 50 |
| 20 | 0.94 | 0.79 | 0.86 | 19 |
| macro avg | 0.84 | 0.84 | 0.84 | 1693 |
| weighted avg | 0.85 | 0.84 | 0.84 | 1693 |
## Inference platform
This model is used by the [CAP Babel Machine](https://babel.poltextlab.com), an open-source and free natural language processing tool, designed to simplify and speed up projects for comparative research.
## Cooperation
Model performance can be significantly improved by extending our training sets. We appreciate every submission of CAP-coded corpora (of any domain and language) at poltextlab{at}poltextlab{dot}com or by using the [CAP Babel Machine](https://babel.poltextlab.com).
## Debugging and issues
This architecture uses the `sentencepiece` tokenizer. To run the model with `transformers` versions earlier than 4.27, you need to install `sentencepiece` manually.
If you encounter a `RuntimeError` when loading the model using the `from_pretrained()` method, adding `ignore_mismatched_sizes=True` should solve the issue.
| 5,615 | [
[
-0.040374755859375,
-0.0462646484375,
0.00955963134765625,
0.0184478759765625,
-0.00725555419921875,
-0.0035800933837890625,
-0.02392578125,
-0.0245208740234375,
0.01204681396484375,
0.0258331298828125,
-0.03228759765625,
-0.049896240234375,
-0.057098388671875,
... |
poltextlab/xlm-roberta-large-german-media-cap | 2023-07-04T17:40:31.000Z | [
"transformers",
"pytorch",
"xlm-roberta",
"text-classification",
"zero-shot-classification",
"de",
"license:mit",
"endpoints_compatible",
"region:us"
] | text-classification | poltextlab | null | null | poltextlab/xlm-roberta-large-german-media-cap | 1 | 2 | transformers | 2023-05-31T14:17:59 |
---
license: mit
language:
- de
tags:
- zero-shot-classification
- text-classification
- pytorch
metrics:
- accuracy
- f1-score
---
# xlm-roberta-large-german-media-cap
## Model description
An `xlm-roberta-large` model fine-tuned on German training data containing texts of the `media` domain, labelled with [major topic codes](https://www.comparativeagendas.net/pages/master-codebook) from the [Comparative Agendas Project](https://www.comparativeagendas.net/).
## How to use the model
#### Loading and tokenizing input data
```python
import pandas as pd
import numpy as np
from datasets import Dataset
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
Trainer, TrainingArguments)
CAP_NUM_DICT = {0: '1', 1: '2', 2: '3', 3: '4', 4: '5', 5: '6',
                6: '7', 7: '8', 8: '9', 9: '10', 10: '12', 11: '13',
                12: '14', 13: '15', 14: '16', 15: '17', 16: '18',
                17: '19', 18: '20', 19: '21', 20: '23', 21: '999'}
tokenizer = AutoTokenizer.from_pretrained('xlm-roberta-large')
num_labels = len(CAP_NUM_DICT)
MAXLEN = 256  # MAXLEN is undefined in the original snippet; 256 is an assumed value

def tokenize_dataset(data: pd.DataFrame):
    tokenized = tokenizer(data["text"],
                          max_length=MAXLEN,
                          truncation=True,
                          padding="max_length")
    return tokenized

# `data` is a pandas DataFrame holding the raw texts in a "text" column
hg_data = Dataset.from_pandas(data)
dataset = hg_data.map(tokenize_dataset, batched=True, remove_columns=hg_data.column_names)
```
#### Inference using the Trainer class
```python
model = AutoModelForSequenceClassification.from_pretrained('poltextlab/xlm-roberta-large-german-media-cap',
num_labels=num_labels,
problem_type="multi_label_classification",
ignore_mismatched_sizes=True
)
training_args = TrainingArguments(
output_dir='.',
per_device_train_batch_size=8,
per_device_eval_batch_size=8
)
trainer = Trainer(
model=model,
args=training_args
)
probs = trainer.predict(test_dataset=dataset).predictions
predicted = pd.DataFrame(np.argmax(probs, axis=1)).replace({0: CAP_NUM_DICT}).rename(
columns={0: 'predicted'}).reset_index(drop=True)
```
### Fine-tuning procedure
`xlm-roberta-large-german-media-cap` was fine-tuned using the Hugging Face Trainer class with the following hyperparameters:
```python
training_args = TrainingArguments(
output_dir=f"../model/{model_dir}/tmp/",
logging_dir=f"../logs/{model_dir}/",
logging_strategy='epoch',
num_train_epochs=10,
per_device_train_batch_size=8,
per_device_eval_batch_size=8,
learning_rate=5e-06,
seed=42,
save_strategy='epoch',
evaluation_strategy='epoch',
save_total_limit=1,
load_best_model_at_end=True
)
```
We also incorporated an EarlyStoppingCallback in the process with a patience of 2 epochs.
## Model performance
The model was evaluated on a test set of 482 examples (10% of the available data).<br>
Model accuracy is **0.6**.
| label | precision | recall | f1-score | support |
|:-------------|------------:|---------:|-----------:|----------:|
| 0 | 0.51 | 0.61 | 0.55 | 38 |
| 1 | 0.25 | 0.07 | 0.11 | 14 |
| 2 | 0.57 | 0.73 | 0.64 | 11 |
| 3 | 0.4 | 0.5 | 0.44 | 8 |
| 4 | 0.64 | 0.54 | 0.58 | 13 |
| 5 | 0.33 | 0.33 | 0.33 | 3 |
| 6 | 0 | 0 | 0 | 3 |
| 7 | 0.67 | 0.67 | 0.67 | 6 |
| 8 | 0.75 | 0.6 | 0.67 | 10 |
| 9 | 0.78 | 0.75 | 0.76 | 28 |
| 10 | 0.2 | 0.06 | 0.1 | 16 |
| 11 | 0.67 | 0.5 | 0.57 | 4 |
| 12 | 0 | 0 | 0 | 3 |
| 13 | 0.48 | 0.37 | 0.42 | 27 |
| 14 | 0.7 | 0.62 | 0.65 | 78 |
| 15 | 0.25 | 0.25 | 0.25 | 4 |
| 16 | 0 | 0 | 0 | 8 |
| 17 | 0.58 | 0.69 | 0.63 | 105 |
| 18 | 0.62 | 0.79 | 0.7 | 97 |
| 19 | 0 | 0 | 0 | 1 |
| 20 | 1 | 0.6 | 0.75 | 5 |
| macro avg | 0.45 | 0.41 | 0.42 | 482 |
| weighted avg | 0.57 | 0.6 | 0.58 | 482 |
## Inference platform
This model is used by the [CAP Babel Machine](https://babel.poltextlab.com), an open-source and free natural language processing tool, designed to simplify and speed up projects for comparative research.
## Cooperation
Model performance can be significantly improved by extending our training sets. We appreciate every submission of CAP-coded corpora (of any domain and language) at poltextlab{at}poltextlab{dot}com or by using the [CAP Babel Machine](https://babel.poltextlab.com).
## Debugging and issues
This architecture uses the `sentencepiece` tokenizer. To run the model with `transformers` versions earlier than 4.27, you need to install `sentencepiece` manually.
If you encounter a `RuntimeError` when loading the model using the `from_pretrained()` method, adding `ignore_mismatched_sizes=True` should solve the issue.
| 5,613 | [
[
-0.04534912109375,
-0.04803466796875,
0.00734710693359375,
0.0205841064453125,
-0.005992889404296875,
-0.00505828857421875,
-0.026458740234375,
-0.0217437744140625,
0.01282501220703125,
0.0177154541015625,
-0.04046630859375,
-0.046234130859375,
-0.05743408203125... |
PFcoding/medicare-gpt2-accurate | 2023-05-31T14:30:34.000Z | [
"transformers",
"pytorch",
"tensorboard",
"gpt2",
"text-generation",
"generated_from_trainer",
"dataset:pubmed-summarization",
"license:mit",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | PFcoding | null | null | PFcoding/medicare-gpt2-accurate | 0 | 2 | transformers | 2023-05-31T14:24:24 | ---
license: mit
tags:
- generated_from_trainer
datasets:
- pubmed-summarization
model-index:
- name: medicare-gpt2-accurate
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# medicare-gpt2-accurate
This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on the pubmed-summarization dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 256
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 1000
- num_epochs: 1
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.28.0
- Pytorch 2.0.1+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
| 1,192 | [
[
-0.0068817138671875,
-0.040557861328125,
0.032623291015625,
-0.00527191162109375,
-0.03240966796875,
-0.0263671875,
0.00537109375,
-0.01125335693359375,
0.0112457275390625,
0.036041259765625,
-0.044891357421875,
-0.0281829833984375,
-0.0572509765625,
-0.0032... |
PFcoding/medicare-gpt2-large | 2023-07-31T02:10:02.000Z | [
"transformers",
"pytorch",
"tensorboard",
"gpt2",
"text-generation",
"generated_from_trainer",
"dataset:pubmed-summarization",
"license:mit",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | PFcoding | null | null | PFcoding/medicare-gpt2-large | 1 | 2 | transformers | 2023-05-31T14:44:48 | ---
license: mit
tags:
- generated_from_trainer
datasets:
- pubmed-summarization
model-index:
- name: medicare-gpt2-large
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# medicare-gpt2-large
This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on the pubmed-summarization dataset.
It achieves the following results on the evaluation set:
- Loss: 2.6383
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0005
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- gradient_accumulation_steps: 8
- total_train_batch_size: 256
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: cosine
- lr_scheduler_warmup_steps: 1000
- num_epochs: 1
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 5.9036 | 0.08 | 500 | 4.2296 |
| 3.7554 | 0.16 | 1000 | 3.3542 |
| 3.2457 | 0.23 | 1500 | 3.0897 |
| 3.065 | 0.31 | 2000 | 2.9694 |
| 2.966 | 0.39 | 2500 | 2.8919 |
| 2.8912 | 0.47 | 3000 | 2.8305 |
| 2.8345 | 0.55 | 3500 | 2.7817 |
| 2.7818 | 0.62 | 4000 | 2.7378 |
| 2.7391 | 0.7 | 4500 | 2.7001 |
| 2.7052 | 0.78 | 5000 | 2.6689 |
| 2.6769 | 0.86 | 5500 | 2.6486 |
| 2.6599 | 0.94 | 6000 | 2.6383 |
### Framework versions
- Transformers 4.28.0
- Pytorch 2.0.1+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
### Test input samples
diabetes is caused by | 2,017 | [
[
-0.021881103515625,
-0.03521728515625,
0.0260009765625,
-0.00823211669921875,
-0.024017333984375,
-0.0213165283203125,
0.004268646240234375,
-0.01096343994140625,
0.019500732421875,
0.03582763671875,
-0.042327880859375,
-0.03863525390625,
-0.06292724609375,
... |
poltextlab/xlm-roberta-large-hungarian-other-cap | 2023-07-04T17:40:35.000Z | [
"transformers",
"pytorch",
"xlm-roberta",
"text-classification",
"zero-shot-classification",
"hu",
"license:mit",
"endpoints_compatible",
"region:us"
] | text-classification | poltextlab | null | null | poltextlab/xlm-roberta-large-hungarian-other-cap | 0 | 2 | transformers | 2023-05-31T14:44:52 |
---
license: mit
language:
- hu
tags:
- zero-shot-classification
- text-classification
- pytorch
metrics:
- accuracy
- f1-score
---
# xlm-roberta-large-hungarian-other-cap
## Model description
An `xlm-roberta-large` model fine-tuned on Hungarian training data containing texts of the `other` domain, labelled with [major topic codes](https://www.comparativeagendas.net/pages/master-codebook) from the [Comparative Agendas Project](https://www.comparativeagendas.net/).
## How to use the model
#### Loading and tokenizing input data
```python
import pandas as pd
import numpy as np
from datasets import Dataset
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
Trainer, TrainingArguments)
CAP_NUM_DICT = {0: '1', 1: '2', 2: '3', 3: '4', 4: '5', 5: '6',
                6: '7', 7: '8', 8: '9', 9: '10', 10: '12', 11: '13',
                12: '14', 13: '15', 14: '16', 15: '17', 16: '18',
                17: '19', 18: '20', 19: '21', 20: '23', 21: '999'}
tokenizer = AutoTokenizer.from_pretrained('xlm-roberta-large')
num_labels = len(CAP_NUM_DICT)
MAXLEN = 256  # MAXLEN is undefined in the original snippet; 256 is an assumed value

def tokenize_dataset(data: pd.DataFrame):
    tokenized = tokenizer(data["text"],
                          max_length=MAXLEN,
                          truncation=True,
                          padding="max_length")
    return tokenized

# `data` is a pandas DataFrame holding the raw texts in a "text" column
hg_data = Dataset.from_pandas(data)
dataset = hg_data.map(tokenize_dataset, batched=True, remove_columns=hg_data.column_names)
```
#### Inference using the Trainer class
```python
model = AutoModelForSequenceClassification.from_pretrained('poltextlab/xlm-roberta-large-hungarian-other-cap',
num_labels=num_labels,
problem_type="multi_label_classification",
ignore_mismatched_sizes=True
)
training_args = TrainingArguments(
output_dir='.',
per_device_train_batch_size=8,
per_device_eval_batch_size=8
)
trainer = Trainer(
model=model,
args=training_args
)
probs = trainer.predict(test_dataset=dataset).predictions
predicted = pd.DataFrame(np.argmax(probs, axis=1)).replace({0: CAP_NUM_DICT}).rename(
columns={0: 'predicted'}).reset_index(drop=True)
```
### Fine-tuning procedure
`xlm-roberta-large-hungarian-other-cap` was fine-tuned using the Hugging Face Trainer class with the following hyperparameters:
```python
training_args = TrainingArguments(
output_dir=f"../model/{model_dir}/tmp/",
logging_dir=f"../logs/{model_dir}/",
logging_strategy='epoch',
num_train_epochs=10,
per_device_train_batch_size=8,
per_device_eval_batch_size=8,
learning_rate=5e-06,
seed=42,
save_strategy='epoch',
evaluation_strategy='epoch',
save_total_limit=1,
load_best_model_at_end=True
)
```
We also incorporated an EarlyStoppingCallback in the process with a patience of 2 epochs.
## Model performance
The model was evaluated on a test set of 121 examples (10% of the available data).<br>
Model accuracy is **0.81**.
| label | precision | recall | f1-score | support |
|:-------------|------------:|---------:|-----------:|----------:|
| 0 | 0.69 | 1 | 0.81 | 22 |
| 1 | 0.6 | 0.6 | 0.6 | 5 |
| 2 | 0.94 | 0.94 | 0.94 | 16 |
| 3 | 1 | 0.88 | 0.93 | 8 |
| 4 | 1 | 0.62 | 0.77 | 8 |
| 5 | 0 | 0 | 0 | 3 |
| 6 | 0.67 | 1 | 0.8 | 2 |
| 7 | 0.86 | 1 | 0.92 | 6 |
| 8 | 0 | 0 | 0 | 0 |
| 9 | 0.78 | 1 | 0.88 | 7 |
| 10 | 0.85 | 0.85 | 0.85 | 13 |
| 11 | 0 | 0 | 0 | 3 |
| 12 | 0 | 0 | 0 | 2 |
| 13 | 0.75 | 0.5 | 0.6 | 6 |
| 14 | 1 | 0.71 | 0.83 | 7 |
| 15 | 0 | 0 | 0 | 1 |
| 16 | 0 | 0 | 0 | 0 |
| 17 | 0.67 | 1 | 0.8 | 4 |
| 18 | 0.89 | 1 | 0.94 | 8 |
| 19 | 0 | 0 | 0 | 0 |
| macro avg | 0.53 | 0.55 | 0.53 | 121 |
| weighted avg | 0.77 | 0.81 | 0.78 | 121 |
## Inference platform
This model is used by the [CAP Babel Machine](https://babel.poltextlab.com), an open-source and free natural language processing tool, designed to simplify and speed up projects for comparative research.
## Cooperation
Model performance can be significantly improved by extending our training sets. We appreciate every submission of CAP-coded corpora (of any domain and language) at poltextlab{at}poltextlab{dot}com or by using the [CAP Babel Machine](https://babel.poltextlab.com).
## Debugging and issues
This architecture uses the `sentencepiece` tokenizer. To run the model with `transformers` versions earlier than 4.27, you need to install `sentencepiece` manually.
If you encounter a `RuntimeError` when loading the model using the `from_pretrained()` method, adding `ignore_mismatched_sizes=True` should solve the issue.
| 5,559 | [
[
-0.041412353515625,
-0.0484619140625,
0.008941650390625,
0.019073486328125,
-0.005474090576171875,
-0.005161285400390625,
-0.02813720703125,
-0.028076171875,
0.0120697021484375,
0.0226287841796875,
-0.038848876953125,
-0.049102783203125,
-0.057464599609375,
... |
YakovElm/Apache10Classic_Balance_DATA_ratio_2 | 2023-05-31T15:05:54.000Z | [
"transformers",
"tf",
"bert",
"text-classification",
"generated_from_keras_callback",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | text-classification | YakovElm | null | null | YakovElm/Apache10Classic_Balance_DATA_ratio_2 | 0 | 2 | transformers | 2023-05-31T14:49:39 | ---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: Apache10Classic_Balance_DATA_ratio_2
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# Apache10Classic_Balance_DATA_ratio_2
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.5541
- Train Accuracy: 0.7049
- Validation Loss: 0.5874
- Validation Accuracy: 0.6940
- Epoch: 2
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': 1.0, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': False, 'is_legacy_optimizer': False, 'learning_rate': 3e-05, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32
### Training results
| Train Loss | Train Accuracy | Validation Loss | Validation Accuracy | Epoch |
|:----------:|:--------------:|:---------------:|:-------------------:|:-----:|
| 0.6312 | 0.6685 | 0.6166 | 0.6694 | 0 |
| 0.5872 | 0.6867 | 0.6025 | 0.6831 | 1 |
| 0.5541 | 0.7049 | 0.5874 | 0.6940 | 2 |
### Framework versions
- Transformers 4.29.2
- TensorFlow 2.12.0
- Datasets 2.12.0
- Tokenizers 0.13.3
| 1,816 | [
[
-0.0445556640625,
-0.0465087890625,
0.01172637939453125,
0.013824462890625,
-0.032257080078125,
-0.03302001953125,
-0.0115966796875,
-0.025970458984375,
0.0145263671875,
0.01346588134765625,
-0.053680419921875,
-0.03497314453125,
-0.05047607421875,
-0.024200... |
poltextlab/xlm-roberta-large-dutch-media-cap | 2023-07-04T17:40:36.000Z | [
"transformers",
"pytorch",
"xlm-roberta",
"text-classification",
"zero-shot-classification",
"nl",
"license:mit",
"endpoints_compatible",
"region:us"
] | text-classification | poltextlab | null | null | poltextlab/xlm-roberta-large-dutch-media-cap | 0 | 2 | transformers | 2023-05-31T14:53:24 |
---
license: mit
language:
- nl
tags:
- zero-shot-classification
- text-classification
- pytorch
metrics:
- accuracy
- f1-score
---
# xlm-roberta-large-dutch-media-cap
## Model description
An `xlm-roberta-large` model fine-tuned on Dutch training data containing texts of the `media` domain, labelled with [major topic codes](https://www.comparativeagendas.net/pages/master-codebook) from the [Comparative Agendas Project](https://www.comparativeagendas.net/).
## How to use the model
#### Loading and tokenizing input data
```python
import pandas as pd
import numpy as np
from datasets import Dataset
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
Trainer, TrainingArguments)
CAP_NUM_DICT = {0: '1', 1: '2', 2: '3', 3: '4', 4: '5', 5: '6',
                6: '7', 7: '8', 8: '9', 9: '10', 10: '12', 11: '13',
                12: '14', 13: '15', 14: '16', 15: '17', 16: '18',
                17: '19', 18: '20', 19: '21', 20: '23', 21: '999'}
tokenizer = AutoTokenizer.from_pretrained('xlm-roberta-large')
num_labels = len(CAP_NUM_DICT)
MAXLEN = 256  # MAXLEN is undefined in the original snippet; 256 is an assumed value

def tokenize_dataset(data: pd.DataFrame):
    tokenized = tokenizer(data["text"],
                          max_length=MAXLEN,
                          truncation=True,
                          padding="max_length")
    return tokenized

# `data` is a pandas DataFrame holding the raw texts in a "text" column
hg_data = Dataset.from_pandas(data)
dataset = hg_data.map(tokenize_dataset, batched=True, remove_columns=hg_data.column_names)
```
#### Inference using the Trainer class
```python
model = AutoModelForSequenceClassification.from_pretrained('poltextlab/xlm-roberta-large-dutch-media-cap',
num_labels=num_labels,
problem_type="multi_label_classification",
ignore_mismatched_sizes=True
)
training_args = TrainingArguments(
output_dir='.',
per_device_train_batch_size=8,
per_device_eval_batch_size=8
)
trainer = Trainer(
model=model,
args=training_args
)
probs = trainer.predict(test_dataset=dataset).predictions
predicted = pd.DataFrame(np.argmax(probs, axis=1)).replace({0: CAP_NUM_DICT}).rename(
columns={0: 'predicted'}).reset_index(drop=True)
```
### Fine-tuning procedure
`xlm-roberta-large-dutch-media-cap` was fine-tuned using the Hugging Face Trainer class with the following hyperparameters:
```python
training_args = TrainingArguments(
output_dir=f"../model/{model_dir}/tmp/",
logging_dir=f"../logs/{model_dir}/",
logging_strategy='epoch',
num_train_epochs=10,
per_device_train_batch_size=8,
per_device_eval_batch_size=8,
learning_rate=5e-06,
seed=42,
save_strategy='epoch',
evaluation_strategy='epoch',
save_total_limit=1,
load_best_model_at_end=True
)
```
We also incorporated an EarlyStoppingCallback in the process with a patience of 2 epochs.
## Model performance
The model was evaluated on a test set of 2969 examples (10% of the available data).<br>
Model accuracy is **0.92**.
| label | precision | recall | f1-score | support |
|:-------------|------------:|---------:|-----------:|----------:|
| 0 | 0.83 | 0.85 | 0.84 | 130 |
| 1 | 0.8 | 0.84 | 0.82 | 70 |
| 2 | 0.89 | 0.97 | 0.93 | 105 |
| 3 | 0.88 | 0.9 | 0.89 | 31 |
| 4 | 0.86 | 0.86 | 0.86 | 126 |
| 5 | 0.91 | 0.93 | 0.92 | 90 |
| 6 | 0.82 | 1 | 0.9 | 36 |
| 7 | 0.97 | 0.84 | 0.9 | 37 |
| 8 | 0.96 | 0.92 | 0.94 | 59 |
| 9 | 0.93 | 0.93 | 0.93 | 82 |
| 10 | 0.94 | 0.89 | 0.91 | 293 |
| 11 | 0.83 | 0.75 | 0.78 | 51 |
| 12 | 0.85 | 0.79 | 0.81 | 28 |
| 13 | 0.86 | 0.83 | 0.85 | 193 |
| 14 | 0.71 | 0.86 | 0.77 | 28 |
| 15 | 0.98 | 0.88 | 0.92 | 49 |
| 16 | 0.71 | 1 | 0.83 | 10 |
| 17 | 0.96 | 0.97 | 0.96 | 948 |
| 18 | 0.94 | 0.93 | 0.93 | 419 |
| 19 | 0.88 | 0.78 | 0.82 | 27 |
| 20 | 0.94 | 0.92 | 0.93 | 157 |
| macro avg | 0.88 | 0.89 | 0.88 | 2969 |
| weighted avg | 0.92 | 0.92 | 0.92 | 2969 |
## Inference platform
This model is used by the [CAP Babel Machine](https://babel.poltextlab.com), an open-source and free natural language processing tool, designed to simplify and speed up projects for comparative research.
## Cooperation
Model performance can be significantly improved by extending our training sets. We appreciate every submission of CAP-coded corpora (of any domain and language) at poltextlab{at}poltextlab{dot}com or by using the [CAP Babel Machine](https://babel.poltextlab.com).
## Debugging and issues
This architecture uses the `sentencepiece` tokenizer. To run the model with `transformers` versions earlier than 4.27, you need to install `sentencepiece` manually.
If you encounter a `RuntimeError` when loading the model using the `from_pretrained()` method, adding `ignore_mismatched_sizes=True` should solve the issue.
| 5,611 | [
[
-0.044525146484375,
-0.048980712890625,
0.004238128662109375,
0.0225830078125,
-0.004505157470703125,
-0.0036678314208984375,
-0.02655029296875,
-0.022705078125,
0.016937255859375,
0.02117919921875,
-0.0384521484375,
-0.046875,
-0.05853271484375,
0.008995056... |
poltextlab/xlm-roberta-large-hungarian-budget-cap | 2023-07-04T17:40:34.000Z | [
"transformers",
"pytorch",
"xlm-roberta",
"text-classification",
"zero-shot-classification",
"hu",
"license:mit",
"endpoints_compatible",
"region:us"
] | text-classification | poltextlab | null | null | poltextlab/xlm-roberta-large-hungarian-budget-cap | 0 | 2 | transformers | 2023-05-31T15:00:05 |
---
license: mit
language:
- hu
tags:
- zero-shot-classification
- text-classification
- pytorch
metrics:
- accuracy
- f1-score
---
# xlm-roberta-large-hungarian-budget-cap
## Model description
An `xlm-roberta-large` model fine-tuned on Hungarian training data containing texts of the `budget` domain, labelled with [major topic codes](https://www.comparativeagendas.net/pages/master-codebook) from the [Comparative Agendas Project](https://www.comparativeagendas.net/).
## How to use the model
#### Loading and tokenizing input data
```python
import pandas as pd
import numpy as np
from datasets import Dataset
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
Trainer, TrainingArguments)
CAP_NUM_DICT = {0: '1', 1: '2', 2: '3', 3: '4', 4: '5', 5: '6',
                6: '7', 7: '8', 8: '9', 9: '10', 10: '12', 11: '13',
                12: '14', 13: '15', 14: '16', 15: '17', 16: '18',
                17: '19', 18: '20', 19: '21', 20: '23', 21: '999'}
tokenizer = AutoTokenizer.from_pretrained('xlm-roberta-large')
num_labels = len(CAP_NUM_DICT)
MAXLEN = 256  # MAXLEN is undefined in the original snippet; 256 is an assumed value

def tokenize_dataset(data: pd.DataFrame):
    tokenized = tokenizer(data["text"],
                          max_length=MAXLEN,
                          truncation=True,
                          padding="max_length")
    return tokenized

# `data` is a pandas DataFrame holding the raw texts in a "text" column
hg_data = Dataset.from_pandas(data)
dataset = hg_data.map(tokenize_dataset, batched=True, remove_columns=hg_data.column_names)
```
#### Inference using the Trainer class
```python
model = AutoModelForSequenceClassification.from_pretrained('poltextlab/xlm-roberta-large-hungarian-budget-cap',
num_labels=num_labels,
problem_type="multi_label_classification",
ignore_mismatched_sizes=True
)
training_args = TrainingArguments(
output_dir='.',
per_device_train_batch_size=8,
per_device_eval_batch_size=8
)
trainer = Trainer(
model=model,
args=training_args
)
probs = trainer.predict(test_dataset=dataset).predictions
predicted = pd.DataFrame(np.argmax(probs, axis=1)).replace({0: CAP_NUM_DICT}).rename(
columns={0: 'predicted'}).reset_index(drop=True)
```
### Fine-tuning procedure
`xlm-roberta-large-hungarian-budget-cap` was fine-tuned using the Hugging Face Trainer class with the following hyperparameters:
```python
training_args = TrainingArguments(
output_dir=f"../model/{model_dir}/tmp/",
logging_dir=f"../logs/{model_dir}/",
logging_strategy='epoch',
num_train_epochs=10,
per_device_train_batch_size=8,
per_device_eval_batch_size=8,
learning_rate=5e-06,
seed=42,
save_strategy='epoch',
evaluation_strategy='epoch',
save_total_limit=1,
load_best_model_at_end=True
)
```
We also incorporated an EarlyStoppingCallback in the process with a patience of 2 epochs.
## Model performance
The model was evaluated on a test set of 11408 examples (10% of the available data).<br>
Model accuracy is **0.98**.
| label | precision | recall | f1-score | support |
|:-------------|------------:|---------:|-----------:|----------:|
| 0 | 0.99 | 0.98 | 0.98 | 1137 |
| 1 | 0.97 | 0.99 | 0.98 | 181 |
| 2 | 0.99 | 0.99 | 0.99 | 629 |
| 3 | 0.99 | 0.98 | 0.99 | 617 |
| 4 | 0.99 | 0.98 | 0.98 | 458 |
| 5 | 0.99 | 0.99 | 0.99 | 1592 |
| 6 | 0.99 | 0.99 | 0.99 | 190 |
| 7 | 0.98 | 1 | 0.99 | 92 |
| 8 | 0.94 | 1 | 0.97 | 32 |
| 9 | 0.98 | 0.98 | 0.98 | 505 |
| 10 | 0.99 | 0.98 | 0.99 | 933 |
| 11 | 0.98 | 0.97 | 0.97 | 520 |
| 12 | 0.98 | 0.97 | 0.98 | 274 |
| 13 | 0.98 | 0.98 | 0.98 | 648 |
| 14 | 0.99 | 1 | 0.99 | 373 |
| 15 | 0.99 | 1 | 0.99 | 467 |
| 16 | 0.98 | 0.97 | 0.97 | 91 |
| 17 | 0.98 | 0.97 | 0.98 | 279 |
| 18 | 0.98 | 0.98 | 0.98 | 1138 |
| 19 | 0.99 | 0.99 | 0.99 | 664 |
| 20 | 0.98 | 0.99 | 0.98 | 288 |
| 21 | 0.92 | 0.96 | 0.94 | 300 |
| macro avg | 0.98 | 0.98 | 0.98 | 11408 |
| weighted avg | 0.98 | 0.98 | 0.98 | 11408 |
## Inference platform
This model is used by the [CAP Babel Machine](https://babel.poltextlab.com), an open-source and free natural language processing tool, designed to simplify and speed up projects for comparative research.
## Cooperation
Model performance can be significantly improved by extending our training sets. We appreciate every submission of CAP-coded corpora (of any domain and language) at poltextlab{at}poltextlab{dot}com or by using the [CAP Babel Machine](https://babel.poltextlab.com).
## Debugging and issues
This architecture uses the `sentencepiece` tokenizer. To run the model with `transformers` versions earlier than 4.27, you need to install `sentencepiece` manually.
If you encounter a `RuntimeError` when loading the model using the `from_pretrained()` method, adding `ignore_mismatched_sizes=True` should solve the issue.
| 5,699 | [
[
-0.0440673828125,
-0.048309326171875,
0.006664276123046875,
0.01995849609375,
-0.0021209716796875,
-0.0039520263671875,
-0.0242767333984375,
-0.0233154296875,
0.01427459716796875,
0.0218658447265625,
-0.040313720703125,
-0.045928955078125,
-0.053314208984375,
... |
YakovElm/Apache10Classic_Balance_DATA_ratio_3 | 2023-05-31T15:32:10.000Z | [
"transformers",
"tf",
"bert",
"text-classification",
"generated_from_keras_callback",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | text-classification | YakovElm | null | null | YakovElm/Apache10Classic_Balance_DATA_ratio_3 | 0 | 2 | transformers | 2023-05-31T15:07:35 | ---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: Apache10Classic_Balance_DATA_ratio_3
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# Apache10Classic_Balance_DATA_ratio_3
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.4992
- Train Accuracy: 0.7637
- Validation Loss: 0.5755
- Validation Accuracy: 0.7336
- Epoch: 2
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': 1.0, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': False, 'is_legacy_optimizer': False, 'learning_rate': 3e-05, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32
### Training results
| Train Loss | Train Accuracy | Validation Loss | Validation Accuracy | Epoch |
|:----------:|:--------------:|:---------------:|:-------------------:|:-----:|
| 0.5466 | 0.7493 | 0.5843 | 0.7029 | 0 |
| 0.5130 | 0.7596 | 0.5762 | 0.7377 | 1 |
| 0.4992 | 0.7637 | 0.5755 | 0.7336 | 2 |
### Framework versions
- Transformers 4.29.2
- TensorFlow 2.12.0
- Datasets 2.12.0
- Tokenizers 0.13.3
| 1,816 | [
[
-0.045196533203125,
-0.04632568359375,
0.0144805908203125,
0.01320648193359375,
-0.031982421875,
-0.03363037109375,
-0.0111846923828125,
-0.0258331298828125,
0.0146636962890625,
0.0146484375,
-0.052825927734375,
-0.037689208984375,
-0.04937744140625,
-0.0220... |
poltextlab/xlm-roberta-large-dutch-social-cap | 2023-07-04T17:40:33.000Z | [
"transformers",
"pytorch",
"xlm-roberta",
"text-classification",
"zero-shot-classification",
"nl",
"license:mit",
"endpoints_compatible",
"region:us"
] | text-classification | poltextlab | null | null | poltextlab/xlm-roberta-large-dutch-social-cap | 0 | 2 | transformers | 2023-05-31T15:20:13 |
---
license: mit
language:
- nl
tags:
- zero-shot-classification
- text-classification
- pytorch
metrics:
- accuracy
- f1-score
---
# xlm-roberta-large-dutch-social-cap
## Model description
An `xlm-roberta-large` model fine-tuned on Dutch training data containing texts of the `social` domain, labelled with [major topic codes](https://www.comparativeagendas.net/pages/master-codebook) from the [Comparative Agendas Project](https://www.comparativeagendas.net/).
## How to use the model
#### Loading and tokenizing input data
```python
import pandas as pd
import numpy as np
from datasets import Dataset
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
Trainer, TrainingArguments)
CAP_NUM_DICT = {0: '1', 1: '2', 2: '3', 3: '4', 4: '5', 5: '6',
                6: '7', 7: '8', 8: '9', 9: '10', 10: '12', 11: '13',
                12: '14', 13: '15', 14: '16', 15: '17', 16: '18',
                17: '19', 18: '20', 19: '21', 20: '23', 21: '999'}
tokenizer = AutoTokenizer.from_pretrained('xlm-roberta-large')
num_labels = len(CAP_NUM_DICT)
MAXLEN = 256  # MAXLEN is undefined in the original snippet; 256 is an assumed value

def tokenize_dataset(data: pd.DataFrame):
    tokenized = tokenizer(data["text"],
                          max_length=MAXLEN,
                          truncation=True,
                          padding="max_length")
    return tokenized

# `data` is a pandas DataFrame holding the raw texts in a "text" column
hg_data = Dataset.from_pandas(data)
dataset = hg_data.map(tokenize_dataset, batched=True, remove_columns=hg_data.column_names)
```
#### Inference using the Trainer class
```python
model = AutoModelForSequenceClassification.from_pretrained('poltextlab/xlm-roberta-large-dutch-social-cap',
num_labels=num_labels,
problem_type="multi_label_classification",
ignore_mismatched_sizes=True
)
training_args = TrainingArguments(
output_dir='.',
per_device_train_batch_size=8,
per_device_eval_batch_size=8
)
trainer = Trainer(
model=model,
args=training_args
)
probs = trainer.predict(test_dataset=dataset).predictions
predicted = pd.DataFrame(np.argmax(probs, axis=1)).replace({0: CAP_NUM_DICT}).rename(
columns={0: 'predicted'}).reset_index(drop=True)
```
### Fine-tuning procedure
`xlm-roberta-large-dutch-social-cap` was fine-tuned using the Hugging Face Trainer class with the following hyperparameters:
```python
training_args = TrainingArguments(
output_dir=f"../model/{model_dir}/tmp/",
logging_dir=f"../logs/{model_dir}/",
logging_strategy='epoch',
num_train_epochs=10,
per_device_train_batch_size=8,
per_device_eval_batch_size=8,
learning_rate=5e-06,
seed=42,
save_strategy='epoch',
evaluation_strategy='epoch',
save_total_limit=1,
load_best_model_at_end=True
)
```
We also incorporated an EarlyStoppingCallback in the process with a patience of 2 epochs.
## Model performance
The model was evaluated on a test set of 1020 examples (10% of the available data).<br>
Model accuracy is **0.77**.
| label | precision | recall | f1-score | support |
|:-------------|------------:|---------:|-----------:|----------:|
| 0 | 0.64 | 0.83 | 0.72 | 46 |
| 1 | 0.78 | 0.72 | 0.75 | 39 |
| 2 | 0.7 | 0.78 | 0.74 | 27 |
| 3 | 0.73 | 0.9 | 0.81 | 21 |
| 4 | 0.71 | 0.64 | 0.68 | 39 |
| 5 | 0.88 | 0.93 | 0.91 | 72 |
| 6 | 0.92 | 0.8 | 0.86 | 60 |
| 7 | 0.79 | 0.92 | 0.85 | 24 |
| 8 | 0.79 | 0.89 | 0.84 | 120 |
| 9 | 0.89 | 0.86 | 0.87 | 85 |
| 10 | 0.83 | 0.82 | 0.82 | 115 |
| 11 | 0.7 | 0.74 | 0.72 | 89 |
| 12 | 0.71 | 0.94 | 0.81 | 16 |
| 13 | 0.55 | 0.43 | 0.48 | 14 |
| 14 | 0.73 | 0.73 | 0.73 | 11 |
| 15 | 0.53 | 0.53 | 0.53 | 15 |
| 16 | 0 | 0 | 0 | 0 |
| 17 | 0.63 | 0.71 | 0.67 | 17 |
| 18 | 0.73 | 0.59 | 0.65 | 134 |
| 19 | 0.6 | 0.55 | 0.58 | 38 |
| 20 | 0.85 | 0.76 | 0.81 | 38 |
| macro avg | 0.7 | 0.72 | 0.71 | 1020 |
| weighted avg | 0.77 | 0.77 | 0.77 | 1020 |
## Inference platform
This model is used by the [CAP Babel Machine](https://babel.poltextlab.com), an open-source and free natural language processing tool, designed to simplify and speed up projects for comparative research.
## Cooperation
Model performance can be significantly improved by extending our training sets. We appreciate every submission of CAP-coded corpora (of any domain and language) at poltextlab{at}poltextlab{dot}com or by using the [CAP Babel Machine](https://babel.poltextlab.com).
## Debugging and issues
This architecture uses the `sentencepiece` tokenizer. To run the model with `transformers` versions earlier than 4.27, you need to install `sentencepiece` manually.
If you encounter a `RuntimeError` when loading the model using the `from_pretrained()` method, adding `ignore_mismatched_sizes=True` should solve the issue.
| 5,615 | [
[
-0.04058837890625,
-0.05126953125,
0.006755828857421875,
0.0232391357421875,
-0.004825592041015625,
0.00007194280624389648,
-0.0255584716796875,
-0.026580810546875,
0.0196380615234375,
0.0201416015625,
-0.035552978515625,
-0.0496826171875,
-0.058746337890625,
... |
poltextlab/xlm-roberta-large-italian-speech-cap | 2023-07-04T17:40:30.000Z | [
"transformers",
"pytorch",
"xlm-roberta",
"text-classification",
"zero-shot-classification",
"it",
"license:mit",
"endpoints_compatible",
"region:us"
] | text-classification | poltextlab | null | null | poltextlab/xlm-roberta-large-italian-speech-cap | 0 | 2 | transformers | 2023-05-31T15:26:52 |
---
license: mit
language:
- it
tags:
- zero-shot-classification
- text-classification
- pytorch
metrics:
- accuracy
- f1-score
---
# xlm-roberta-large-italian-speech-cap
## Model description
An `xlm-roberta-large` model fine-tuned on Italian training data containing texts of the `speech` domain, labelled with [major topic codes](https://www.comparativeagendas.net/pages/master-codebook) from the [Comparative Agendas Project](https://www.comparativeagendas.net/).
## How to use the model
#### Loading and tokenizing input data
```python
import pandas as pd
import numpy as np
from datasets import Dataset
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
Trainer, TrainingArguments)
CAP_NUM_DICT = {0: '1', 1: '2', 2: '3', 3: '4', 4: '5', 5: '6',
                6: '7', 7: '8', 8: '9', 9: '10', 10: '12', 11: '13',
                12: '14', 13: '15', 14: '16', 15: '17', 16: '18',
                17: '19', 18: '20', 19: '21', 20: '23', 21: '999'}
tokenizer = AutoTokenizer.from_pretrained('xlm-roberta-large')
num_labels = len(CAP_NUM_DICT)
MAXLEN = 256  # MAXLEN is undefined in the original snippet; 256 is an assumed value

def tokenize_dataset(data: pd.DataFrame):
    tokenized = tokenizer(data["text"],
                          max_length=MAXLEN,
                          truncation=True,
                          padding="max_length")
    return tokenized

# `data` is a pandas DataFrame holding the raw texts in a "text" column
hg_data = Dataset.from_pandas(data)
dataset = hg_data.map(tokenize_dataset, batched=True, remove_columns=hg_data.column_names)
```
#### Inference using the Trainer class
```python
model = AutoModelForSequenceClassification.from_pretrained('poltextlab/xlm-roberta-large-italian-speech-cap',
num_labels=num_labels,
problem_type="multi_label_classification",
ignore_mismatched_sizes=True
)
training_args = TrainingArguments(
output_dir='.',
per_device_train_batch_size=8,
per_device_eval_batch_size=8
)
trainer = Trainer(
model=model,
args=training_args
)
probs = trainer.predict(test_dataset=dataset).predictions
predicted = pd.DataFrame(np.argmax(probs, axis=1)).replace({0: CAP_NUM_DICT}).rename(
columns={0: 'predicted'}).reset_index(drop=True)
```
### Fine-tuning procedure
`xlm-roberta-large-italian-speech-cap` was fine-tuned using the Hugging Face Trainer class with the following hyperparameters:
```python
training_args = TrainingArguments(
output_dir=f"../model/{model_dir}/tmp/",
logging_dir=f"../logs/{model_dir}/",
logging_strategy='epoch',
num_train_epochs=10,
per_device_train_batch_size=8,
per_device_eval_batch_size=8,
learning_rate=5e-06,
seed=42,
save_strategy='epoch',
evaluation_strategy='epoch',
save_total_limit=1,
load_best_model_at_end=True
)
```
We also incorporated an EarlyStoppingCallback in the process with a patience of 2 epochs.
## Model performance
The model was evaluated on a test set of 335 examples (10% of the available data).<br>
Model accuracy is **0.64**.
| label | precision | recall | f1-score | support |
|:-------------|------------:|---------:|-----------:|----------:|
| 0 | 0.63 | 0.66 | 0.64 | 29 |
| 1 | 0.33 | 0.12 | 0.18 | 16 |
| 2 | 0.87 | 0.77 | 0.82 | 26 |
| 3 | 0.69 | 0.85 | 0.76 | 13 |
| 4 | 0.69 | 0.75 | 0.72 | 24 |
| 5 | 0.89 | 0.8 | 0.84 | 10 |
| 6 | 0.58 | 0.7 | 0.64 | 10 |
| 7 | 0.67 | 0.86 | 0.75 | 7 |
| 8 | 0.55 | 0.55 | 0.55 | 11 |
| 9 | 0.64 | 0.75 | 0.69 | 28 |
| 10 | 0.65 | 0.76 | 0.7 | 54 |
| 11 | 0.25 | 1 | 0.4 | 2 |
| 12 | 0.67 | 0.5 | 0.57 | 4 |
| 13 | 0.57 | 0.69 | 0.62 | 29 |
| 14 | 1 | 0.46 | 0.63 | 13 |
| 15 | 0.86 | 0.6 | 0.71 | 10 |
| 16 | 0 | 0 | 0 | 3 |
| 17 | 0.29 | 0.62 | 0.4 | 8 |
| 18 | 0.65 | 0.34 | 0.45 | 32 |
| 19 | 1 | 0.8 | 0.89 | 5 |
| 20 | 0 | 0 | 0 | 1 |
| macro avg | 0.59 | 0.6 | 0.57 | 335 |
| weighted avg | 0.66 | 0.64 | 0.63 | 335 |
## Inference platform
This model is used by the [CAP Babel Machine](https://babel.poltextlab.com), an open-source and free natural language processing tool, designed to simplify and speed up projects for comparative research.
## Cooperation
Model performance can be significantly improved by extending our training sets. We appreciate every submission of CAP-coded corpora (of any domain and language) at poltextlab{at}poltextlab{dot}com or by using the [CAP Babel Machine](https://babel.poltextlab.com).
## Debugging and issues
This architecture uses the `sentencepiece` tokenizer. To run the model with `transformers` versions earlier than 4.27, you need to install `sentencepiece` manually.
If you encounter a `RuntimeError` when loading the model using the `from_pretrained()` method, adding `ignore_mismatched_sizes=True` should solve the issue.
| 5,622 | [
[
-0.039947509765625,
-0.049835205078125,
0.00530242919921875,
0.0197601318359375,
-0.0055694580078125,
-0.004550933837890625,
-0.0285797119140625,
-0.02471923828125,
0.01306915283203125,
0.018310546875,
-0.038421630859375,
-0.04852294921875,
-0.05377197265625,
... |
YakovElm/Apache10Classic_Balance_DATA_ratio_4 | 2023-05-31T16:03:37.000Z | [
"transformers",
"tf",
"bert",
"text-classification",
"generated_from_keras_callback",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | text-classification | YakovElm | null | null | YakovElm/Apache10Classic_Balance_DATA_ratio_4 | 0 | 2 | transformers | 2023-05-31T15:29:28 | ---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: Apache10Classic_Balance_DATA_ratio_4
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# Apache10Classic_Balance_DATA_ratio_4
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.4309
- Train Accuracy: 0.8158
- Validation Loss: 0.5421
- Validation Accuracy: 0.8131
- Epoch: 2
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': 1.0, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': False, 'is_legacy_optimizer': False, 'learning_rate': 3e-05, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32
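As a sketch, the optimizer configuration above can be reconstructed in Keras as follows; the non-default values (`learning_rate=3e-05`, `clipnorm=1.0`) come from the config, the rest are Adam defaults:
```python
import tensorflow as tf

optimizer = tf.keras.optimizers.Adam(
    learning_rate=3e-5,
    beta_1=0.9,
    beta_2=0.999,
    epsilon=1e-8,
    clipnorm=1.0,  # clip gradients by norm, as in the config above
)
```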
### Training results
| Train Loss | Train Accuracy | Validation Loss | Validation Accuracy | Epoch |
|:----------:|:--------------:|:---------------:|:-------------------:|:-----:|
| 0.5089 | 0.7891 | 0.4756 | 0.8016 | 0 |
| 0.4495 | 0.8044 | 0.4611 | 0.8148 | 1 |
| 0.4309 | 0.8158 | 0.5421 | 0.8131 | 2 |
### Framework versions
- Transformers 4.29.2
- TensorFlow 2.12.0
- Datasets 2.12.0
- Tokenizers 0.13.3
| 1,816 | [
[
-0.045166015625,
-0.0458984375,
0.01442718505859375,
0.01342010498046875,
-0.0310211181640625,
-0.032440185546875,
-0.01242828369140625,
-0.025177001953125,
0.015655517578125,
0.0140380859375,
-0.05352783203125,
-0.0384521484375,
-0.04937744140625,
-0.018920... |
5IN/distilbert-base-uncased-finetuned-cola | 2023-06-10T15:05:17.000Z | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:glue",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | text-classification | 5IN | null | null | 5IN/distilbert-base-uncased-finetuned-cola | 0 | 2 | transformers | 2023-05-31T15:33:51 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- matthews_correlation
model-index:
- name: distilbert-base-uncased-finetuned-cola
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: glue
type: glue
config: cola
split: validation
args: cola
metrics:
- name: Matthews Correlation
type: matthews_correlation
value: 0.5588305747648582
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-cola
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the glue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8049
- Matthews Correlation: 0.5588
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
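As a sketch, these hyperparameters map onto Hugging Face `TrainingArguments` roughly as follows (`output_dir` is illustrative; the Adam betas and epsilon listed above are the Trainer defaults):
```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="distilbert-base-uncased-finetuned-cola",  # illustrative
    learning_rate=2e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=5,
    # Adam with betas=(0.9, 0.999) and epsilon=1e-08 is the Trainer default optimizer
)
```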
### Training results
| Training Loss | Epoch | Step | Validation Loss | Matthews Correlation |
|:-------------:|:-----:|:----:|:---------------:|:--------------------:|
| 0.5219 | 1.0 | 535 | 0.5632 | 0.4160 |
| 0.3491 | 2.0 | 1070 | 0.5170 | 0.4779 |
| 0.2404 | 3.0 | 1605 | 0.5398 | 0.5331 |
| 0.179 | 4.0 | 2140 | 0.7745 | 0.5267 |
| 0.1244 | 5.0 | 2675 | 0.8049 | 0.5588 |
### Framework versions
- Transformers 4.28.0
- Pytorch 2.0.1+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
| 2,042 | [
[
-0.0222320556640625,
-0.05072021484375,
0.01104736328125,
0.018218994140625,
-0.0202789306640625,
-0.008148193359375,
-0.00566864013671875,
-0.00449371337890625,
0.0230712890625,
0.01019287109375,
-0.04486083984375,
-0.03570556640625,
-0.06304931640625,
-0.0... |
YakovElm/Apache15Classic_Balance_DATA_ratio_Half | 2023-05-31T16:12:30.000Z | [
"transformers",
"tf",
"bert",
"text-classification",
"generated_from_keras_callback",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | text-classification | YakovElm | null | null | YakovElm/Apache15Classic_Balance_DATA_ratio_Half | 0 | 2 | transformers | 2023-05-31T15:36:02 | ---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: Apache15Classic_Balance_DATA_ratio_Half
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# Apache15Classic_Balance_DATA_ratio_Half
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.5594
- Train Accuracy: 0.7251
- Validation Loss: 0.6503
- Validation Accuracy: 0.7153
- Epoch: 2
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': 1.0, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': False, 'is_legacy_optimizer': False, 'learning_rate': 3e-05, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32
### Training results
| Train Loss | Train Accuracy | Validation Loss | Validation Accuracy | Epoch |
|:----------:|:--------------:|:---------------:|:-------------------:|:-----:|
| 0.6407 | 0.6472 | 0.6485 | 0.6131 | 0 |
| 0.6002 | 0.6813 | 0.6358 | 0.6131 | 1 |
| 0.5594 | 0.7251 | 0.6503 | 0.7153 | 2 |
### Framework versions
- Transformers 4.29.2
- TensorFlow 2.12.0
- Datasets 2.12.0
- Tokenizers 0.13.3
| 1,822 | [
[
-0.044769287109375,
-0.0469970703125,
0.00936126708984375,
0.01129150390625,
-0.031524658203125,
-0.0295257568359375,
-0.008819580078125,
-0.0236358642578125,
0.0164642333984375,
0.01157379150390625,
-0.0567626953125,
-0.038543701171875,
-0.049896240234375,
... |
YakovElm/Apache15Classic_Balance_DATA_ratio_1 | 2023-05-31T16:23:55.000Z | [
"transformers",
"tf",
"bert",
"text-classification",
"generated_from_keras_callback",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | text-classification | YakovElm | null | null | YakovElm/Apache15Classic_Balance_DATA_ratio_1 | 0 | 2 | transformers | 2023-05-31T15:43:57 | ---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: Apache15Classic_Balance_DATA_ratio_1
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# Apache15Classic_Balance_DATA_ratio_1
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.6174
- Train Accuracy: 0.6515
- Validation Loss: 0.6344
- Validation Accuracy: 0.6284
- Epoch: 2
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': 1.0, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': False, 'is_legacy_optimizer': False, 'learning_rate': 3e-05, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32
### Training results
| Train Loss | Train Accuracy | Validation Loss | Validation Accuracy | Epoch |
|:----------:|:--------------:|:---------------:|:-------------------:|:-----:|
| 0.6983 | 0.5310 | 0.7039 | 0.5301 | 0 |
| 0.6642 | 0.5912 | 0.6633 | 0.6175 | 1 |
| 0.6174 | 0.6515 | 0.6344 | 0.6284 | 2 |
### Framework versions
- Transformers 4.29.2
- TensorFlow 2.12.0
- Datasets 2.12.0
- Tokenizers 0.13.3
| 1,816 | [
[
-0.04608154296875,
-0.045867919921875,
0.01122283935546875,
0.01206207275390625,
-0.030303955078125,
-0.0322265625,
-0.01070404052734375,
-0.023712158203125,
0.015655517578125,
0.01348114013671875,
-0.0555419921875,
-0.03826904296875,
-0.04925537109375,
-0.0... |
poltextlab/xlm-roberta-large-spanish-media-cap | 2023-07-04T17:40:30.000Z | [
"transformers",
"pytorch",
"xlm-roberta",
"text-classification",
"zero-shot-classification",
"es",
"license:mit",
"endpoints_compatible",
"region:us"
] | text-classification | poltextlab | null | null | poltextlab/xlm-roberta-large-spanish-media-cap | 0 | 2 | transformers | 2023-05-31T15:48:05 |
---
license: mit
language:
- es
tags:
- zero-shot-classification
- text-classification
- pytorch
metrics:
- accuracy
- f1-score
---
# xlm-roberta-large-spanish-media-cap
## Model description
An `xlm-roberta-large` model fine-tuned on Spanish training data containing texts of the `media` domain labelled with [major topic codes](https://www.comparativeagendas.net/pages/master-codebook) from the [Comparative Agendas Project](https://www.comparativeagendas.net/).
## How to use the model
#### Loading and tokenizing input data
```python
import pandas as pd
import numpy as np
from datasets import Dataset
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
Trainer, TrainingArguments)
CAP_NUM_DICT = {0: '1', 1: '2', 2: '3', 3: '4', 4: '5', 5: '6',
                6: '7', 7: '8', 8: '9', 9: '10', 10: '12', 11: '13',
                12: '14', 13: '15', 14: '16', 15: '17', 16: '18',
                17: '19', 18: '20', 19: '21', 20: '23', 21: '999'}

tokenizer = AutoTokenizer.from_pretrained('xlm-roberta-large')
num_labels = len(CAP_NUM_DICT)
MAXLEN = 256  # assumed maximum sequence length; the card does not define MAXLEN

def tokenize_dataset(data: pd.DataFrame):
    tokenized = tokenizer(data["text"],
                          max_length=MAXLEN,
                          truncation=True,
                          padding="max_length")
    return tokenized

# `data` is assumed to be a pandas DataFrame with a "text" column
hg_data = Dataset.from_pandas(data)
dataset = hg_data.map(tokenize_dataset, batched=True, remove_columns=hg_data.column_names)
```
#### Inference using the Trainer class
```python
model = AutoModelForSequenceClassification.from_pretrained('poltextlab/xlm-roberta-large-spanish-media-cap',
num_labels=num_labels,
problem_type="multi_label_classification",
ignore_mismatched_sizes=True
)
training_args = TrainingArguments(
output_dir='.',
per_device_train_batch_size=8,
per_device_eval_batch_size=8
)
trainer = Trainer(
model=model,
args=training_args
)
probs = trainer.predict(test_dataset=dataset).predictions
predicted = pd.DataFrame(np.argmax(probs, axis=1)).replace({0: CAP_NUM_DICT}).rename(
columns={0: 'predicted'}).reset_index(drop=True)
```
### Fine-tuning procedure
`xlm-roberta-large-spanish-media-cap` was fine-tuned using the Hugging Face Trainer class with the following hyperparameters:
```python
training_args = TrainingArguments(
output_dir=f"../model/{model_dir}/tmp/",
logging_dir=f"../logs/{model_dir}/",
logging_strategy='epoch',
num_train_epochs=10,
per_device_train_batch_size=8,
per_device_eval_batch_size=8,
learning_rate=5e-06,
seed=42,
save_strategy='epoch',
evaluation_strategy='epoch',
save_total_limit=1,
load_best_model_at_end=True
)
```
We also incorporated an EarlyStoppingCallback in the process with a patience of 2 epochs.
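A sketch of how that callback is passed to the Trainer; the `train_dataset` and `eval_dataset` names are placeholders, and the patience value matches the one stated above:
```python
from transformers import EarlyStoppingCallback, Trainer

trainer = Trainer(
    model=model,
    args=training_args,  # relies on load_best_model_at_end=True and per-epoch evaluation, as above
    train_dataset=train_dataset,  # placeholder
    eval_dataset=eval_dataset,    # placeholder
    callbacks=[EarlyStoppingCallback(early_stopping_patience=2)],
)
```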
## Model performance
The model was evaluated on a test set of 7155 examples (10% of the available data).<br>
Model accuracy is **0.75**.
| label | precision | recall | f1-score | support |
|:-------------|------------:|---------:|-----------:|----------:|
| 0 | 0.81 | 0.78 | 0.8 | 301 |
| 1 | 0.65 | 0.46 | 0.54 | 417 |
| 2 | 0.82 | 0.85 | 0.83 | 231 |
| 3 | 0.81 | 0.81 | 0.81 | 58 |
| 4 | 0.83 | 0.67 | 0.74 | 164 |
| 5 | 0.8 | 0.81 | 0.81 | 85 |
| 6 | 0.72 | 0.67 | 0.7 | 89 |
| 7 | 0.71 | 0.8 | 0.75 | 121 |
| 8 | 0.76 | 0.81 | 0.78 | 134 |
| 9 | 0.86 | 0.83 | 0.84 | 230 |
| 10 | 0.72 | 0.87 | 0.79 | 1502 |
| 11 | 0.6 | 0.33 | 0.42 | 64 |
| 12 | 0.67 | 0.6 | 0.63 | 43 |
| 13 | 0.65 | 0.65 | 0.65 | 317 |
| 14 | 0.73 | 0.79 | 0.76 | 517 |
| 15 | 0.81 | 0.7 | 0.75 | 247 |
| 16 | 0.66 | 0.55 | 0.6 | 56 |
| 17 | 0.68 | 0.58 | 0.62 | 457 |
| 18 | 0.8 | 0.78 | 0.79 | 1549 |
| 19 | 0.77 | 0.71 | 0.74 | 24 |
| 20 | 0.77 | 0.75 | 0.76 | 549 |
| macro avg | 0.75 | 0.71 | 0.72 | 7155 |
| weighted avg | 0.75 | 0.75 | 0.75 | 7155 |
## Inference platform
This model is used by the [CAP Babel Machine](https://babel.poltextlab.com), an open-source and free natural language processing tool, designed to simplify and speed up projects for comparative research.
## Cooperation
Model performance can be significantly improved by extending our training sets. We appreciate every submission of CAP-coded corpora (of any domain and language) at poltextlab{at}poltextlab{dot}com or by using the [CAP Babel Machine](https://babel.poltextlab.com).
## Debugging and issues
This architecture uses the `sentencepiece` tokenizer. To run the model with `transformers` versions earlier than 4.27, you need to install `sentencepiece` manually.
If you encounter a `RuntimeError` when loading the model using the `from_pretrained()` method, adding `ignore_mismatched_sizes=True` should solve the issue.
| 5,619 | [
[
-0.042755126953125,
-0.047332763671875,
0.004573822021484375,
0.02490234375,
-0.003444671630859375,
-0.0010461807250976562,
-0.025634765625,
-0.022003173828125,
0.016845703125,
0.0203094482421875,
-0.03912353515625,
-0.047454833984375,
-0.056182861328125,
0.... |
poltextlab/xlm-roberta-large-italian-legal-cap | 2023-07-04T17:40:33.000Z | [
"transformers",
"pytorch",
"xlm-roberta",
"text-classification",
"zero-shot-classification",
"it",
"license:mit",
"endpoints_compatible",
"region:us"
] | text-classification | poltextlab | null | null | poltextlab/xlm-roberta-large-italian-legal-cap | 0 | 2 | transformers | 2023-05-31T15:54:53 |
---
license: mit
language:
- it
tags:
- zero-shot-classification
- text-classification
- pytorch
metrics:
- accuracy
- f1-score
---
# xlm-roberta-large-italian-legal-cap
## Model description
An `xlm-roberta-large` model fine-tuned on Italian training data containing texts of the `legal` domain labelled with [major topic codes](https://www.comparativeagendas.net/pages/master-codebook) from the [Comparative Agendas Project](https://www.comparativeagendas.net/).
## How to use the model
#### Loading and tokenizing input data
```python
import pandas as pd
import numpy as np
from datasets import Dataset
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
Trainer, TrainingArguments)
CAP_NUM_DICT = {0: '1', 1: '2', 2: '3', 3: '4', 4: '5', 5: '6',
                6: '7', 7: '8', 8: '9', 9: '10', 10: '12', 11: '13',
                12: '14', 13: '15', 14: '16', 15: '17', 16: '18',
                17: '19', 18: '20', 19: '21', 20: '23', 21: '999'}

tokenizer = AutoTokenizer.from_pretrained('xlm-roberta-large')
num_labels = len(CAP_NUM_DICT)
MAXLEN = 256  # assumed maximum sequence length; the card does not define MAXLEN

def tokenize_dataset(data: pd.DataFrame):
    tokenized = tokenizer(data["text"],
                          max_length=MAXLEN,
                          truncation=True,
                          padding="max_length")
    return tokenized

# `data` is assumed to be a pandas DataFrame with a "text" column
hg_data = Dataset.from_pandas(data)
dataset = hg_data.map(tokenize_dataset, batched=True, remove_columns=hg_data.column_names)
```
#### Inference using the Trainer class
```python
model = AutoModelForSequenceClassification.from_pretrained('poltextlab/xlm-roberta-large-italian-legal-cap',
num_labels=num_labels,
problem_type="multi_label_classification",
ignore_mismatched_sizes=True
)
training_args = TrainingArguments(
output_dir='.',
per_device_train_batch_size=8,
per_device_eval_batch_size=8
)
trainer = Trainer(
model=model,
args=training_args
)
probs = trainer.predict(test_dataset=dataset).predictions
predicted = pd.DataFrame(np.argmax(probs, axis=1)).replace({0: CAP_NUM_DICT}).rename(
columns={0: 'predicted'}).reset_index(drop=True)
```
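To sanity-check the output, a per-label report in the same format as the performance table below can be produced; this assumes `data` also carries a gold-standard `label` column (not part of the original card):
```python
from sklearn.metrics import classification_report

# `data["label"]` is assumed to hold gold CAP major topic codes as strings
print(classification_report(data["label"], predicted["predicted"]))
```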
### Fine-tuning procedure
`xlm-roberta-large-italian-legal-cap` was fine-tuned using the Hugging Face Trainer class with the following hyperparameters:
```python
training_args = TrainingArguments(
output_dir=f"../model/{model_dir}/tmp/",
logging_dir=f"../logs/{model_dir}/",
logging_strategy='epoch',
num_train_epochs=10,
per_device_train_batch_size=8,
per_device_eval_batch_size=8,
learning_rate=5e-06,
seed=42,
save_strategy='epoch',
evaluation_strategy='epoch',
save_total_limit=1,
load_best_model_at_end=True
)
```
We also incorporated an EarlyStoppingCallback in the process with a patience of 2 epochs.
## Model performance
The model was evaluated on a test set of 463 examples (10% of the available data).<br>
Model accuracy is **0.82**.
| label | precision | recall | f1-score | support |
|:-------------|------------:|---------:|-----------:|----------:|
| 0 | 0.8 | 0.92 | 0.86 | 39 |
| 1 | 0.5 | 0.29 | 0.36 | 7 |
| 2 | 0.58 | 0.88 | 0.7 | 8 |
| 3 | 0.87 | 0.87 | 0.87 | 23 |
| 4 | 0.5 | 0.64 | 0.56 | 11 |
| 5 | 0.88 | 0.88 | 0.88 | 26 |
| 6 | 0.79 | 0.81 | 0.8 | 27 |
| 7 | 0.85 | 0.92 | 0.88 | 12 |
| 8 | 0.8 | 0.8 | 0.8 | 5 |
| 9 | 0.86 | 0.9 | 0.88 | 41 |
| 10 | 0.88 | 0.93 | 0.9 | 60 |
| 11 | 0.83 | 0.45 | 0.59 | 11 |
| 12 | 1 | 0.67 | 0.8 | 3 |
| 13 | 0.86 | 0.8 | 0.83 | 40 |
| 14 | 0.77 | 0.89 | 0.83 | 19 |
| 15 | 0.94 | 0.94 | 0.94 | 16 |
| 16 | 0.9 | 0.64 | 0.75 | 14 |
| 17 | 0.88 | 0.72 | 0.79 | 39 |
| 18 | 0.82 | 0.69 | 0.75 | 48 |
| 19 | 0.38 | 0.75 | 0.5 | 4 |
| 20 | 0.69 | 0.9 | 0.78 | 10 |
| macro avg | 0.78 | 0.78 | 0.76 | 463 |
| weighted avg | 0.83 | 0.82 | 0.81 | 463 |
## Inference platform
This model is used by the [CAP Babel Machine](https://babel.poltextlab.com), an open-source and free natural language processing tool, designed to simplify and speed up projects for comparative research.
## Cooperation
Model performance can be significantly improved by extending our training sets. We appreciate every submission of CAP-coded corpora (of any domain and language) at poltextlab{at}poltextlab{dot}com or by using the [CAP Babel Machine](https://babel.poltextlab.com).
## Debugging and issues
This architecture uses the `sentencepiece` tokenizer. To run the model with `transformers` versions earlier than 4.27, you need to install `sentencepiece` manually.
If you encounter a `RuntimeError` when loading the model using the `from_pretrained()` method, adding `ignore_mismatched_sizes=True` should solve the issue.
| 5,618 | [
[
-0.038726806640625,
-0.0460205078125,
0.008544921875,
0.0177459716796875,
-0.0075225830078125,
-0.0045928955078125,
-0.023529052734375,
-0.0263671875,
0.01485443115234375,
0.0227203369140625,
-0.03472900390625,
-0.04937744140625,
-0.05584716796875,
0.0090408... |
YakovElm/Apache15Classic_Balance_DATA_ratio_2 | 2023-05-31T16:40:38.000Z | [
"transformers",
"tf",
"bert",
"text-classification",
"generated_from_keras_callback",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | text-classification | YakovElm | null | null | YakovElm/Apache15Classic_Balance_DATA_ratio_2 | 0 | 2 | transformers | 2023-05-31T15:55:14 | ---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: Apache15Classic_Balance_DATA_ratio_2
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# Apache15Classic_Balance_DATA_ratio_2
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.5440
- Train Accuracy: 0.7056
- Validation Loss: 0.6670
- Validation Accuracy: 0.7190
- Epoch: 2
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': 1.0, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': False, 'is_legacy_optimizer': False, 'learning_rate': 3e-05, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32
### Training results
| Train Loss | Train Accuracy | Validation Loss | Validation Accuracy | Epoch |
|:----------:|:--------------:|:---------------:|:-------------------:|:-----:|
| 0.6106 | 0.6776 | 0.5839 | 0.6715 | 0 |
| 0.5830 | 0.6922 | 0.5614 | 0.6934 | 1 |
| 0.5440 | 0.7056 | 0.6670 | 0.7190 | 2 |
### Framework versions
- Transformers 4.29.2
- TensorFlow 2.12.0
- Datasets 2.12.0
- Tokenizers 0.13.3
| 1,816 | [
[
-0.043670654296875,
-0.0457763671875,
0.011016845703125,
0.01220703125,
-0.032196044921875,
-0.031890869140625,
-0.01139068603515625,
-0.024566650390625,
0.0142364501953125,
0.012603759765625,
-0.054412841796875,
-0.035919189453125,
-0.050048828125,
-0.02630... |
poltextlab/xlm-roberta-large-dutch-manifesto-cap | 2023-07-04T17:40:35.000Z | [
"transformers",
"pytorch",
"xlm-roberta",
"text-classification",
"zero-shot-classification",
"nl",
"license:mit",
"endpoints_compatible",
"region:us"
] | text-classification | poltextlab | null | null | poltextlab/xlm-roberta-large-dutch-manifesto-cap | 0 | 2 | transformers | 2023-05-31T16:01:37 |
---
license: mit
language:
- nl
tags:
- zero-shot-classification
- text-classification
- pytorch
metrics:
- accuracy
- f1-score
---
# xlm-roberta-large-dutch-manifesto-cap
## Model description
An `xlm-roberta-large` model fine-tuned on Dutch training data containing texts of the `manifesto` domain labelled with [major topic codes](https://www.comparativeagendas.net/pages/master-codebook) from the [Comparative Agendas Project](https://www.comparativeagendas.net/).
## How to use the model
#### Loading and tokenizing input data
```python
import pandas as pd
import numpy as np
from datasets import Dataset
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
Trainer, TrainingArguments)
CAP_NUM_DICT = {0: '1', 1: '2', 2: '3', 3: '4', 4: '5', 5: '6',
                6: '7', 7: '8', 8: '9', 9: '10', 10: '12', 11: '13',
                12: '14', 13: '15', 14: '16', 15: '17', 16: '18',
                17: '19', 18: '20', 19: '21', 20: '23', 21: '999'}

tokenizer = AutoTokenizer.from_pretrained('xlm-roberta-large')
num_labels = len(CAP_NUM_DICT)
MAXLEN = 256  # assumed maximum sequence length; the card does not define MAXLEN

def tokenize_dataset(data: pd.DataFrame):
    tokenized = tokenizer(data["text"],
                          max_length=MAXLEN,
                          truncation=True,
                          padding="max_length")
    return tokenized

# `data` is assumed to be a pandas DataFrame with a "text" column
hg_data = Dataset.from_pandas(data)
dataset = hg_data.map(tokenize_dataset, batched=True, remove_columns=hg_data.column_names)
```
#### Inference using the Trainer class
```python
model = AutoModelForSequenceClassification.from_pretrained('poltextlab/xlm-roberta-large-dutch-manifesto-cap',
num_labels=num_labels,
problem_type="multi_label_classification",
ignore_mismatched_sizes=True
)
training_args = TrainingArguments(
output_dir='.',
per_device_train_batch_size=8,
per_device_eval_batch_size=8
)
trainer = Trainer(
model=model,
args=training_args
)
probs = trainer.predict(test_dataset=dataset).predictions
predicted = pd.DataFrame(np.argmax(probs, axis=1)).replace({0: CAP_NUM_DICT}).rename(
columns={0: 'predicted'}).reset_index(drop=True)
```
### Fine-tuning procedure
`xlm-roberta-large-dutch-manifesto-cap` was fine-tuned using the Hugging Face Trainer class with the following hyperparameters:
```python
training_args = TrainingArguments(
output_dir=f"../model/{model_dir}/tmp/",
logging_dir=f"../logs/{model_dir}/",
logging_strategy='epoch',
num_train_epochs=10,
per_device_train_batch_size=8,
per_device_eval_batch_size=8,
learning_rate=5e-06,
seed=42,
save_strategy='epoch',
evaluation_strategy='epoch',
save_total_limit=1,
load_best_model_at_end=True
)
```
We also incorporated an EarlyStoppingCallback in the process with a patience of 2 epochs.
## Model performance
The model was evaluated on a test set of 464 examples (10% of the available data).<br>
Model accuracy is **0.79**.
| label | precision | recall | f1-score | support |
|:-------------|------------:|---------:|-----------:|----------:|
| 0 | 0.78 | 0.78 | 0.78 | 54 |
| 1 | 0.33 | 0.2 | 0.25 | 10 |
| 2 | 0.84 | 0.88 | 0.86 | 41 |
| 3 | 0.7 | 0.88 | 0.78 | 8 |
| 4 | 0.86 | 0.68 | 0.76 | 47 |
| 5 | 0.97 | 0.88 | 0.92 | 34 |
| 6 | 0.88 | 0.54 | 0.67 | 13 |
| 7 | 1 | 0.83 | 0.91 | 18 |
| 8 | 0.87 | 0.87 | 0.87 | 23 |
| 9 | 0.84 | 0.95 | 0.89 | 22 |
| 10 | 0.83 | 0.83 | 0.83 | 24 |
| 11 | 0.62 | 0.74 | 0.68 | 31 |
| 12 | 0.83 | 0.86 | 0.84 | 22 |
| 13 | 0.65 | 0.88 | 0.75 | 17 |
| 14 | 1 | 0.75 | 0.86 | 4 |
| 15 | 0.71 | 1 | 0.83 | 10 |
| 16 | 0 | 0 | 0 | 2 |
| 17 | 0.82 | 0.93 | 0.87 | 29 |
| 18 | 0.7 | 0.7 | 0.7 | 46 |
| 19 | 0.43 | 0.5 | 0.46 | 6 |
| 20 | 1 | 0.67 | 0.8 | 3 |
| macro avg | 0.75 | 0.73 | 0.73 | 464 |
| weighted avg | 0.79 | 0.79 | 0.78 | 464 |
## Inference platform
This model is used by the [CAP Babel Machine](https://babel.poltextlab.com), an open-source and free natural language processing tool, designed to simplify and speed up projects for comparative research.
## Cooperation
Model performance can be significantly improved by extending our training sets. We appreciate every submission of CAP-coded corpora (of any domain and language) at poltextlab{at}poltextlab{dot}com or by using the [CAP Babel Machine](https://babel.poltextlab.com).
## Debugging and issues
This architecture uses the `sentencepiece` tokenizer. To run the model with `transformers` versions earlier than 4.27, you need to install `sentencepiece` manually.
If you encounter a `RuntimeError` when loading the model using the `from_pretrained()` method, adding `ignore_mismatched_sizes=True` should solve the issue.
| 5,626 | [
[
-0.04168701171875,
-0.045074462890625,
0.005130767822265625,
0.022003173828125,
-0.003376007080078125,
-0.0024204254150390625,
-0.0258636474609375,
-0.026763916015625,
0.0156707763671875,
0.0217742919921875,
-0.037811279296875,
-0.04931640625,
-0.055908203125,
... |
YakovElm/Apache15Classic_Balance_DATA_ratio_3 | 2023-05-31T17:01:35.000Z | [
"transformers",
"tf",
"bert",
"text-classification",
"generated_from_keras_callback",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | text-classification | YakovElm | null | null | YakovElm/Apache15Classic_Balance_DATA_ratio_3 | 0 | 2 | transformers | 2023-05-31T16:09:56 | ---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: Apache15Classic_Balance_DATA_ratio_3
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# Apache15Classic_Balance_DATA_ratio_3
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.4925
- Train Accuracy: 0.7929
- Validation Loss: 0.5361
- Validation Accuracy: 0.7514
- Epoch: 2
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': 1.0, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': False, 'is_legacy_optimizer': False, 'learning_rate': 3e-05, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32
### Training results
| Train Loss | Train Accuracy | Validation Loss | Validation Accuracy | Epoch |
|:----------:|:--------------:|:---------------:|:-------------------:|:-----:|
| 0.5651 | 0.7354 | 0.6073 | 0.7295 | 0 |
| 0.5224 | 0.7637 | 0.5595 | 0.7322 | 1 |
| 0.4925 | 0.7929 | 0.5361 | 0.7514 | 2 |
### Framework versions
- Transformers 4.29.2
- TensorFlow 2.12.0
- Datasets 2.12.0
- Tokenizers 0.13.3
| 1,816 | [
[
-0.045135498046875,
-0.045928955078125,
0.01389312744140625,
0.01216888427734375,
-0.0312042236328125,
-0.033050537109375,
-0.011138916015625,
-0.025360107421875,
0.013641357421875,
0.0146331787109375,
-0.053253173828125,
-0.03955078125,
-0.047943115234375,
... |
yoshivo/distilbert-base-uncased-finetuned-emotion | 2023-05-31T16:41:10.000Z | [
"transformers",
"pytorch",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:emotion",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | text-classification | yoshivo | null | null | yoshivo/distilbert-base-uncased-finetuned-emotion | 0 | 2 | transformers | 2023-05-31T16:21:48 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-emotion
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: emotion
type: emotion
config: split
split: validation
args: split
metrics:
- name: Accuracy
type: accuracy
value: 0.9215
- name: F1
type: f1
value: 0.9215741602989571
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2131
- Accuracy: 0.9215
- F1: 0.9216
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
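A sketch of a `compute_metrics` function that would produce the accuracy and F1 reported above (the weighted F1 average is an assumption, not stated in the card):
```python
import numpy as np
from sklearn.metrics import accuracy_score, f1_score

def compute_metrics(eval_pred):
    # eval_pred is a (logits, labels) pair supplied by the Trainer
    logits, labels = eval_pred
    preds = np.argmax(logits, axis=-1)
    return {
        "accuracy": accuracy_score(labels, preds),
        "f1": f1_score(labels, preds, average="weighted"),
    }
```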
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.8158 | 1.0 | 250 | 0.3115 | 0.9015 | 0.8978 |
| 0.243 | 2.0 | 500 | 0.2131 | 0.9215 | 0.9216 |
### Framework versions
- Transformers 4.29.2
- Pytorch 1.12.1.post201
- Datasets 2.12.0
- Tokenizers 0.13.3
| 1,851 | [
[
-0.03839111328125,
-0.04034423828125,
0.0140228271484375,
0.02386474609375,
-0.0265045166015625,
-0.0209197998046875,
-0.0128021240234375,
-0.008026123046875,
0.01107025146484375,
0.00887298583984375,
-0.057464599609375,
-0.052276611328125,
-0.060791015625,
... |
YakovElm/Apache15Classic_Balance_DATA_ratio_4 | 2023-05-31T17:26:47.000Z | [
"transformers",
"tf",
"bert",
"text-classification",
"generated_from_keras_callback",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | text-classification | YakovElm | null | null | YakovElm/Apache15Classic_Balance_DATA_ratio_4 | 0 | 2 | transformers | 2023-05-31T16:27:20 | ---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: Apache15Classic_Balance_DATA_ratio_4
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# Apache15Classic_Balance_DATA_ratio_4
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.3664
- Train Accuracy: 0.8534
- Validation Loss: 0.5348
- Validation Accuracy: 0.7659
- Epoch: 2
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': 1.0, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': False, 'is_legacy_optimizer': False, 'learning_rate': 3e-05, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32
### Training results
| Train Loss | Train Accuracy | Validation Loss | Validation Accuracy | Epoch |
|:----------:|:--------------:|:---------------:|:-------------------:|:-----:|
| 0.4719 | 0.8111 | 0.4984 | 0.7856 | 0 |
| 0.4410 | 0.8155 | 0.4861 | 0.7834 | 1 |
| 0.3664 | 0.8534 | 0.5348 | 0.7659 | 2 |
### Framework versions
- Transformers 4.29.2
- TensorFlow 2.12.0
- Datasets 2.12.0
- Tokenizers 0.13.3
| 1,816 | [
[
-0.045257568359375,
-0.04461669921875,
0.01422119140625,
0.0122528076171875,
-0.0309600830078125,
-0.030731201171875,
-0.01190185546875,
-0.0247955322265625,
0.0142059326171875,
0.01336669921875,
-0.0543212890625,
-0.039581298828125,
-0.048614501953125,
-0.0... |
YakovElm/Apache20Classic_Balance_DATA_ratio_Half | 2023-05-31T17:34:48.000Z | [
"transformers",
"tf",
"bert",
"text-classification",
"generated_from_keras_callback",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | text-classification | YakovElm | null | null | YakovElm/Apache20Classic_Balance_DATA_ratio_Half | 0 | 2 | transformers | 2023-05-31T16:32:53 | ---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: Apache20Classic_Balance_DATA_ratio_Half
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# Apache20Classic_Balance_DATA_ratio_Half
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.5978
- Train Accuracy: 0.6637
- Validation Loss: 0.6129
- Validation Accuracy: 0.6283
- Epoch: 2
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': 1.0, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': False, 'is_legacy_optimizer': False, 'learning_rate': 3e-05, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32
### Training results
| Train Loss | Train Accuracy | Validation Loss | Validation Accuracy | Epoch |
|:----------:|:--------------:|:---------------:|:-------------------:|:-----:|
| 0.6357 | 0.6814 | 0.6421 | 0.6283 | 0 |
| 0.6329 | 0.6667 | 0.6298 | 0.6283 | 1 |
| 0.5978 | 0.6637 | 0.6129 | 0.6283 | 2 |
### Framework versions
- Transformers 4.29.2
- TensorFlow 2.12.0
- Datasets 2.12.0
- Tokenizers 0.13.3
| 1,822 | [
[
-0.045379638671875,
-0.047332763671875,
0.0112152099609375,
0.01422882080078125,
-0.0323486328125,
-0.031036376953125,
-0.007320404052734375,
-0.0248260498046875,
0.0172576904296875,
0.0132904052734375,
-0.058563232421875,
-0.038818359375,
-0.05072021484375,
... |
poltextlab/xlm-roberta-large-dutch-speech-cap | 2023-07-04T17:40:34.000Z | [
"transformers",
"pytorch",
"xlm-roberta",
"text-classification",
"zero-shot-classification",
"nl",
"license:mit",
"endpoints_compatible",
"region:us"
] | text-classification | poltextlab | null | null | poltextlab/xlm-roberta-large-dutch-speech-cap | 0 | 2 | transformers | 2023-05-31T16:37:38 |
---
license: mit
language:
- nl
tags:
- zero-shot-classification
- text-classification
- pytorch
metrics:
- accuracy
- f1-score
---
# xlm-roberta-large-dutch-speech-cap
## Model description
An `xlm-roberta-large` model fine-tuned on Dutch training data containing texts of the `speech` domain labelled with [major topic codes](https://www.comparativeagendas.net/pages/master-codebook) from the [Comparative Agendas Project](https://www.comparativeagendas.net/).
## How to use the model
#### Loading and tokenizing input data
```python
import pandas as pd
import numpy as np
from datasets import Dataset
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
Trainer, TrainingArguments)
CAP_NUM_DICT = {0: '1', 1: '2', 2: '3', 3: '4', 4: '5', 5: '6',
                6: '7', 7: '8', 8: '9', 9: '10', 10: '12', 11: '13',
                12: '14', 13: '15', 14: '16', 15: '17', 16: '18',
                17: '19', 18: '20', 19: '21', 20: '23', 21: '999'}

tokenizer = AutoTokenizer.from_pretrained('xlm-roberta-large')
num_labels = len(CAP_NUM_DICT)
MAXLEN = 256  # assumed maximum sequence length; the card does not define MAXLEN

def tokenize_dataset(data: pd.DataFrame):
    tokenized = tokenizer(data["text"],
                          max_length=MAXLEN,
                          truncation=True,
                          padding="max_length")
    return tokenized

# `data` is assumed to be a pandas DataFrame with a "text" column
hg_data = Dataset.from_pandas(data)
dataset = hg_data.map(tokenize_dataset, batched=True, remove_columns=hg_data.column_names)
```
#### Inference using the Trainer class
```python
model = AutoModelForSequenceClassification.from_pretrained('poltextlab/xlm-roberta-large-dutch-speech-cap',
num_labels=num_labels,
problem_type="multi_label_classification",
ignore_mismatched_sizes=True
)
training_args = TrainingArguments(
output_dir='.',
per_device_train_batch_size=8,
per_device_eval_batch_size=8
)
trainer = Trainer(
model=model,
args=training_args
)
probs = trainer.predict(test_dataset=dataset).predictions
predicted = pd.DataFrame(np.argmax(probs, axis=1)).replace({0: CAP_NUM_DICT}).rename(
columns={0: 'predicted'}).reset_index(drop=True)
```
### Fine-tuning procedure
`xlm-roberta-large-dutch-speech-cap` was fine-tuned using the Hugging Face Trainer class with the following hyperparameters:
```python
training_args = TrainingArguments(
output_dir=f"../model/{model_dir}/tmp/",
logging_dir=f"../logs/{model_dir}/",
logging_strategy='epoch',
num_train_epochs=10,
per_device_train_batch_size=8,
per_device_eval_batch_size=8,
learning_rate=5e-06,
seed=42,
save_strategy='epoch',
evaluation_strategy='epoch',
save_total_limit=1,
load_best_model_at_end=True
)
```
We also incorporated an EarlyStoppingCallback in the process with a patience of 2 epochs.
## Model performance
The model was evaluated on a test set of 513 examples (10% of the available data).<br>
Model accuracy is **0.71**.
| label | precision | recall | f1-score | support |
|:-------------|------------:|---------:|-----------:|----------:|
| 0 | 0.67 | 0.81 | 0.73 | 86 |
| 1 | 0.89 | 0.4 | 0.55 | 20 |
| 2 | 0.78 | 0.78 | 0.78 | 23 |
| 3 | 0.78 | 0.7 | 0.74 | 10 |
| 4 | 0.71 | 0.82 | 0.76 | 68 |
| 5 | 0.7 | 0.78 | 0.74 | 9 |
| 6 | 0.86 | 0.43 | 0.57 | 14 |
| 7 | 0.75 | 1 | 0.86 | 6 |
| 8 | 0.79 | 0.73 | 0.76 | 15 |
| 9 | 0.84 | 0.94 | 0.89 | 17 |
| 10 | 0.73 | 0.7 | 0.71 | 50 |
| 11 | 0.58 | 0.73 | 0.65 | 30 |
| 12 | 0.8 | 0.57 | 0.67 | 7 |
| 13 | 0.73 | 0.5 | 0.59 | 16 |
| 14 | 0.6 | 0.69 | 0.64 | 13 |
| 15 | 1 | 0.5 | 0.67 | 6 |
| 16 | 1 | 0.13 | 0.24 | 15 |
| 17 | 0.78 | 0.72 | 0.75 | 58 |
| 18 | 0.63 | 0.7 | 0.67 | 44 |
| 19 | 0 | 0 | 0 | 1 |
| 20 | 1 | 1 | 1 | 5 |
| macro avg | 0.74 | 0.65 | 0.66 | 513 |
| weighted avg | 0.73 | 0.71 | 0.7 | 513 |
## Inference platform
This model is used by the [CAP Babel Machine](https://babel.poltextlab.com), an open-source and free natural language processing tool, designed to simplify and speed up projects for comparative research.
## Cooperation
Model performance can be significantly improved by extending our training sets. We appreciate every submission of CAP-coded corpora (of any domain and language) at poltextlab{at}poltextlab{dot}com or by using the [CAP Babel Machine](https://babel.poltextlab.com).
## Debugging and issues
This architecture uses the `sentencepiece` tokenizer. To run the model with `transformers` versions earlier than 4.27, you need to install `sentencepiece` manually.
If you encounter a `RuntimeError` when loading the model using the `from_pretrained()` method, adding `ignore_mismatched_sizes=True` should solve the issue.
| 5,614 | [
[
-0.042083740234375,
-0.052215576171875,
0.003490447998046875,
0.0234832763671875,
-0.00426483154296875,
-0.00395965576171875,
-0.0301971435546875,
-0.0254974365234375,
0.01433563232421875,
0.022430419921875,
-0.03558349609375,
-0.049041748046875,
-0.056274414062... |
YakovElm/Apache20Classic_Balance_DATA_ratio_1 | 2023-05-31T17:44:28.000Z | [
"transformers",
"tf",
"bert",
"text-classification",
"generated_from_keras_callback",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | text-classification | YakovElm | null | null | YakovElm/Apache20Classic_Balance_DATA_ratio_1 | 0 | 2 | transformers | 2023-05-31T16:39:27 | ---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: Apache20Classic_Balance_DATA_ratio_1
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# Apache20Classic_Balance_DATA_ratio_1
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.6064
- Train Accuracy: 0.6504
- Validation Loss: 0.6490
- Validation Accuracy: 0.5828
- Epoch: 2
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': 1.0, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': False, 'is_legacy_optimizer': False, 'learning_rate': 3e-05, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32
### Training results
| Train Loss | Train Accuracy | Validation Loss | Validation Accuracy | Epoch |
|:----------:|:--------------:|:---------------:|:-------------------:|:-----:|
| 0.7151 | 0.4912 | 0.6588 | 0.5563 | 0 |
| 0.6553 | 0.6128 | 0.6629 | 0.6159 | 1 |
| 0.6064 | 0.6504 | 0.6490 | 0.5828 | 2 |
### Framework versions
- Transformers 4.29.2
- TensorFlow 2.12.0
- Datasets 2.12.0
- Tokenizers 0.13.3
| 1,816 | [
[
-0.046661376953125,
-0.0472412109375,
0.01198577880859375,
0.0133209228515625,
-0.03155517578125,
-0.033966064453125,
-0.0105133056640625,
-0.024688720703125,
0.0155029296875,
0.0146331787109375,
-0.056060791015625,
-0.038421630859375,
-0.050079345703125,
-0... |
poltextlab/xlm-roberta-large-dutch-legal-cap | 2023-07-04T17:40:38.000Z | [
"transformers",
"pytorch",
"xlm-roberta",
"text-classification",
"zero-shot-classification",
"nl",
"license:mit",
"endpoints_compatible",
"region:us"
] | text-classification | poltextlab | null | null | poltextlab/xlm-roberta-large-dutch-legal-cap | 1 | 2 | transformers | 2023-05-31T16:44:36 |
---
license: mit
language:
- nl
tags:
- zero-shot-classification
- text-classification
- pytorch
metrics:
- accuracy
- f1-score
---
# xlm-roberta-large-dutch-legal-cap
## Model description
An `xlm-roberta-large` model fine-tuned on Dutch training data containing texts of the `legal` domain labelled with [major topic codes](https://www.comparativeagendas.net/pages/master-codebook) from the [Comparative Agendas Project](https://www.comparativeagendas.net/).
## How to use the model
#### Loading and tokenizing input data
```python
import pandas as pd
import numpy as np
from datasets import Dataset
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
Trainer, TrainingArguments)
CAP_NUM_DICT = {0: '1', 1: '2', 2: '3', 3: '4', 4: '5', 5: '6',
                6: '7', 7: '8', 8: '9', 9: '10', 10: '12', 11: '13',
                12: '14', 13: '15', 14: '16', 15: '17', 16: '18',
                17: '19', 18: '20', 19: '21', 20: '23', 21: '999'}

tokenizer = AutoTokenizer.from_pretrained('xlm-roberta-large')
num_labels = len(CAP_NUM_DICT)
MAXLEN = 256  # assumed maximum sequence length; the card does not define MAXLEN

def tokenize_dataset(data: pd.DataFrame):
    tokenized = tokenizer(data["text"],
                          max_length=MAXLEN,
                          truncation=True,
                          padding="max_length")
    return tokenized

# `data` is assumed to be a pandas DataFrame with a "text" column
hg_data = Dataset.from_pandas(data)
dataset = hg_data.map(tokenize_dataset, batched=True, remove_columns=hg_data.column_names)
```
#### Inference using the Trainer class
```python
model = AutoModelForSequenceClassification.from_pretrained('poltextlab/xlm-roberta-large-dutch-legal-cap',
num_labels=num_labels,
problem_type="multi_label_classification",
ignore_mismatched_sizes=True
)
training_args = TrainingArguments(
output_dir='.',
per_device_train_batch_size=8,
per_device_eval_batch_size=8
)
trainer = Trainer(
model=model,
args=training_args
)
probs = trainer.predict(test_dataset=dataset).predictions
predicted = pd.DataFrame(np.argmax(probs, axis=1)).replace({0: CAP_NUM_DICT}).rename(
columns={0: 'predicted'}).reset_index(drop=True)
```
### Fine-tuning procedure
`xlm-roberta-large-dutch-legal-cap` was fine-tuned using the Hugging Face Trainer class with the following hyperparameters:
```python
training_args = TrainingArguments(
output_dir=f"../model/{model_dir}/tmp/",
logging_dir=f"../logs/{model_dir}/",
logging_strategy='epoch',
num_train_epochs=10,
per_device_train_batch_size=8,
per_device_eval_batch_size=8,
learning_rate=5e-06,
seed=42,
save_strategy='epoch',
evaluation_strategy='epoch',
save_total_limit=1,
load_best_model_at_end=True
)
```
We also incorporated an EarlyStoppingCallback in the process with a patience of 2 epochs.
## Model performance
The model was evaluated on a test set of 1039 examples (10% of the available data).<br>
Model accuracy is **0.79**.
| label | precision | recall | f1-score | support |
|:-------------|------------:|---------:|-----------:|----------:|
| 0 | 0.76 | 0.85 | 0.8 | 118 |
| 1 | 0.67 | 0.29 | 0.4 | 21 |
| 2 | 0.79 | 0.78 | 0.78 | 49 |
| 3 | 0.67 | 0.6 | 0.63 | 10 |
| 4 | 0.64 | 0.73 | 0.68 | 66 |
| 5 | 0.98 | 0.88 | 0.92 | 49 |
| 6 | 0.55 | 0.73 | 0.63 | 30 |
| 7 | 0.56 | 0.5 | 0.53 | 10 |
| 8 | 0.89 | 0.73 | 0.8 | 11 |
| 9 | 0.92 | 0.88 | 0.9 | 52 |
| 10 | 0.9 | 0.91 | 0.9 | 219 |
| 11 | 0.84 | 0.78 | 0.81 | 88 |
| 12 | 0.69 | 0.75 | 0.72 | 36 |
| 13 | 0.8 | 0.79 | 0.79 | 85 |
| 14 | 0.81 | 0.81 | 0.81 | 32 |
| 15 | 0.56 | 0.62 | 0.59 | 8 |
| 16 | 0 | 0 | 0 | 6 |
| 17 | 0.66 | 0.66 | 0.66 | 38 |
| 18 | 0.77 | 0.77 | 0.77 | 99 |
| 19 | 0 | 0 | 0 | 4 |
| 20 | 0.88 | 0.88 | 0.88 | 8 |
| macro avg | 0.68 | 0.66 | 0.67 | 1039 |
| weighted avg | 0.79 | 0.79 | 0.79 | 1039 |
## Inference platform
This model is used by the [CAP Babel Machine](https://babel.poltextlab.com), an open-source and free natural language processing tool, designed to simplify and speed up projects for comparative research.
## Cooperation
Model performance can be significantly improved by extending our training sets. We appreciate every submission of CAP-coded corpora (of any domain and language) at poltextlab{at}poltextlab{dot}com or by using the [CAP Babel Machine](https://babel.poltextlab.com).
## Debugging and issues
This architecture uses the `sentencepiece` tokenizer. To run the model with `transformers` versions earlier than 4.27, you need to install `sentencepiece` manually.
If you encounter a `RuntimeError` when loading the model using the `from_pretrained()` method, adding `ignore_mismatched_sizes=True` should solve the issue.
| 5,611 | [
[
-0.039520263671875,
-0.047271728515625,
0.00728607177734375,
0.0208587646484375,
-0.007205963134765625,
-0.005153656005859375,
-0.0245361328125,
-0.0266265869140625,
0.01416015625,
0.0257568359375,
-0.03228759765625,
-0.0496826171875,
-0.05816650390625,
0.00... |
YakovElm/Apache20Classic_Balance_DATA_ratio_2 | 2023-05-31T17:57:33.000Z | [
"transformers",
"tf",
"bert",
"text-classification",
"generated_from_keras_callback",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | text-classification | YakovElm | null | null | YakovElm/Apache20Classic_Balance_DATA_ratio_2 | 0 | 2 | transformers | 2023-05-31T16:48:23 | ---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: Apache20Classic_Balance_DATA_ratio_2
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# Apache20Classic_Balance_DATA_ratio_2
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.5301
- Train Accuracy: 0.7227
- Validation Loss: 0.6836
- Validation Accuracy: 0.6947
- Epoch: 2
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': 1.0, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': False, 'is_legacy_optimizer': False, 'learning_rate': 3e-05, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32
### Training results
| Train Loss | Train Accuracy | Validation Loss | Validation Accuracy | Epoch |
|:----------:|:--------------:|:---------------:|:-------------------:|:-----:|
| 0.6315 | 0.6667 | 0.6081 | 0.6726 | 0 |
| 0.5750 | 0.6962 | 0.6117 | 0.6549 | 1 |
| 0.5301 | 0.7227 | 0.6836 | 0.6947 | 2 |
### Framework versions
- Transformers 4.29.2
- TensorFlow 2.12.0
- Datasets 2.12.0
- Tokenizers 0.13.3
| 1,816 | [
[
-0.044158935546875,
-0.04693603515625,
0.0119476318359375,
0.0140838623046875,
-0.032440185546875,
-0.033294677734375,
-0.01078033447265625,
-0.0261993408203125,
0.0139007568359375,
0.01409912109375,
-0.055419921875,
-0.036224365234375,
-0.050567626953125,
-... |
poltextlab/xlm-roberta-large-spanish-other-cap | 2023-07-04T17:40:39.000Z | [
"transformers",
"pytorch",
"xlm-roberta",
"text-classification",
"zero-shot-classification",
"es",
"license:mit",
"endpoints_compatible",
"region:us"
] | text-classification | poltextlab | null | null | poltextlab/xlm-roberta-large-spanish-other-cap | 0 | 2 | transformers | 2023-05-31T16:51:21 |
---
license: mit
language:
- es
tags:
- zero-shot-classification
- text-classification
- pytorch
metrics:
- accuracy
- f1-score
---
# xlm-roberta-large-spanish-other-cap
## Model description
An `xlm-roberta-large` model fine-tuned on Spanish training data containing texts of the `other` domain labelled with [major topic codes](https://www.comparativeagendas.net/pages/master-codebook) from the [Comparative Agendas Project](https://www.comparativeagendas.net/).
## How to use the model
#### Loading and tokenizing input data
```python
import pandas as pd
import numpy as np
from datasets import Dataset
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
Trainer, TrainingArguments)
CAP_NUM_DICT = {0: '1', 1: '2', 2: '3', 3: '4', 4: '5', 5: '6',
                6: '7', 7: '8', 8: '9', 9: '10', 10: '12', 11: '13',
                12: '14', 13: '15', 14: '16', 15: '17', 16: '18',
                17: '19', 18: '20', 19: '21', 20: '23', 21: '999'}

tokenizer = AutoTokenizer.from_pretrained('xlm-roberta-large')
num_labels = len(CAP_NUM_DICT)
MAXLEN = 256  # assumed maximum sequence length; the card does not define MAXLEN

def tokenize_dataset(data: pd.DataFrame):
    tokenized = tokenizer(data["text"],
                          max_length=MAXLEN,
                          truncation=True,
                          padding="max_length")
    return tokenized

# `data` is assumed to be a pandas DataFrame with a "text" column
hg_data = Dataset.from_pandas(data)
dataset = hg_data.map(tokenize_dataset, batched=True, remove_columns=hg_data.column_names)
```
#### Inference using the Trainer class
```python
model = AutoModelForSequenceClassification.from_pretrained('poltextlab/xlm-roberta-large-spanish-other-cap',
num_labels=num_labels,
problem_type="multi_label_classification",
ignore_mismatched_sizes=True
)
training_args = TrainingArguments(
output_dir='.',
per_device_train_batch_size=8,
per_device_eval_batch_size=8
)
trainer = Trainer(
model=model,
args=training_args
)
probs = trainer.predict(test_dataset=dataset).predictions
predicted = pd.DataFrame(np.argmax(probs, axis=1)).replace({0: CAP_NUM_DICT}).rename(
columns={0: 'predicted'}).reset_index(drop=True)
```
### Fine-tuning procedure
`xlm-roberta-large-spanish-other-cap` was fine-tuned using the Hugging Face Trainer class with the following hyperparameters:
```python
training_args = TrainingArguments(
output_dir=f"../model/{model_dir}/tmp/",
logging_dir=f"../logs/{model_dir}/",
logging_strategy='epoch',
num_train_epochs=10,
per_device_train_batch_size=8,
per_device_eval_batch_size=8,
learning_rate=5e-06,
seed=42,
save_strategy='epoch',
evaluation_strategy='epoch',
save_total_limit=1,
load_best_model_at_end=True
)
```
We also incorporated an EarlyStoppingCallback in the process with a patience of 2 epochs.
## Model performance
The model was evaluated on a test set of 285 examples (10% of the available data).<br>
Model accuracy is **0.85**.
| label | precision | recall | f1-score | support |
|:-------------|------------:|---------:|-----------:|----------:|
| 0 | 0 | 0 | 0 | 0 |
| 1 | 0.84 | 0.73 | 0.78 | 22 |
| 2 | 0.78 | 0.82 | 0.8 | 17 |
| 3 | 1 | 0.92 | 0.96 | 12 |
| 4 | 0.91 | 0.88 | 0.9 | 34 |
| 5 | 0.86 | 0.97 | 0.91 | 38 |
| 6 | 0.8 | 0.8 | 0.8 | 15 |
| 7 | 0.73 | 1 | 0.85 | 11 |
| 8 | 0 | 0 | 0 | 0 |
| 9 | 0.72 | 0.81 | 0.76 | 16 |
| 10 | 1 | 0.8 | 0.89 | 15 |
| 11 | 0.79 | 0.85 | 0.81 | 13 |
| 12 | 0.8 | 0.67 | 0.73 | 6 |
| 13 | 0.85 | 0.92 | 0.88 | 49 |
| 14 | 1 | 1 | 1 | 3 |
| 15 | 0.82 | 0.82 | 0.82 | 11 |
| 16 | 0 | 0 | 0 | 0 |
| 17 | 0.92 | 0.92 | 0.92 | 12 |
| 18 | 1 | 0.33 | 0.5 | 6 |
| 19 | 0 | 0 | 0 | 0 |
| 20 | 1 | 0.2 | 0.33 | 5 |
| macro avg | 0.71 | 0.64 | 0.65 | 285 |
| weighted avg | 0.86 | 0.85 | 0.84 | 285 |
## Inference platform
This model is used by the [CAP Babel Machine](https://babel.poltextlab.com), an open-source and free natural language processing tool, designed to simplify and speed up projects for comparative research.
## Cooperation
Model performance can be significantly improved by extending our training sets. We appreciate every submission of CAP-coded corpora (of any domain and language) at poltextlab{at}poltextlab{dot}com or by using the [CAP Babel Machine](https://babel.poltextlab.com).
## Debugging and issues
This architecture uses the `sentencepiece` tokenizer. To run the model with `transformers` versions earlier than 4.27, you need to install `sentencepiece` manually.
If you encounter a `RuntimeError` when loading the model using the `from_pretrained()` method, adding `ignore_mismatched_sizes=True` should solve the issue.
| 5,618 | [
[
-0.041015625,
-0.048797607421875,
0.00658416748046875,
0.025177001953125,
-0.00443267822265625,
-0.0009899139404296875,
-0.0270538330078125,
-0.02813720703125,
0.0164337158203125,
0.0222015380859375,
-0.03790283203125,
-0.048431396484375,
-0.057098388671875,
... |
YakovElm/Apache20Classic_Balance_DATA_ratio_3 | 2023-05-31T18:14:15.000Z | [
"transformers",
"tf",
"bert",
"text-classification",
"generated_from_keras_callback",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | text-classification | YakovElm | null | null | YakovElm/Apache20Classic_Balance_DATA_ratio_3 | 0 | 2 | transformers | 2023-05-31T17:05:13 | ---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: Apache20Classic_Balance_DATA_ratio_3
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# Apache20Classic_Balance_DATA_ratio_3
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.4946
- Train Accuracy: 0.7478
- Validation Loss: 0.4568
- Validation Accuracy: 0.7649
- Epoch: 2
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': 1.0, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': False, 'is_legacy_optimizer': False, 'learning_rate': 3e-05, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32
### Training results
| Train Loss | Train Accuracy | Validation Loss | Validation Accuracy | Epoch |
|:----------:|:--------------:|:---------------:|:-------------------:|:-----:|
| 0.5508 | 0.7489 | 0.4948 | 0.7616 | 0 |
| 0.5171 | 0.7533 | 0.4904 | 0.7616 | 1 |
| 0.4946 | 0.7478 | 0.4568 | 0.7649 | 2 |
### Framework versions
- Transformers 4.29.2
- TensorFlow 2.12.0
- Datasets 2.12.0
- Tokenizers 0.13.3
| 1,816 | [
[
-0.0458984375,
-0.046966552734375,
0.01491546630859375,
0.01470184326171875,
-0.032318115234375,
-0.034698486328125,
-0.01055145263671875,
-0.0264892578125,
0.01375579833984375,
0.01568603515625,
-0.054229736328125,
-0.0390625,
-0.049163818359375,
-0.0230865... |
Ibrahim-Alam/finetuning-xlm-mlm-en-2048-on-sst2 | 2023-05-31T18:27:27.000Z | [
"transformers",
"pytorch",
"tensorboard",
"xlm",
"text-classification",
"generated_from_trainer",
"dataset:sst2",
"license:cc-by-nc-4.0",
"model-index",
"endpoints_compatible",
"region:us"
] | text-classification | Ibrahim-Alam | null | null | Ibrahim-Alam/finetuning-xlm-mlm-en-2048-on-sst2 | 0 | 2 | transformers | 2023-05-31T17:24:55 | ---
license: cc-by-nc-4.0
tags:
- generated_from_trainer
datasets:
- sst2
metrics:
- accuracy
- f1
model-index:
- name: finetuning-xlm-mlm-en-2048-on-sst2
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: sst2
type: sst2
config: default
split: validation
args: default
metrics:
- name: Accuracy
type: accuracy
value: 0.5091743119266054
- name: F1
type: f1
value: 0.6747720364741641
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuning-xlm-mlm-en-2048-on-sst2
This model is a fine-tuned version of [xlm-mlm-en-2048](https://huggingface.co/xlm-mlm-en-2048) on the sst2 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6985
- Accuracy: 0.5092
- F1: 0.6748
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
### Framework versions
- Transformers 4.28.0
- Pytorch 2.0.1+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
| 1,540 | [
[
-0.01479339599609375,
-0.042388916015625,
0.018402099609375,
0.013031005859375,
-0.032623291015625,
-0.0239105224609375,
-0.013671875,
-0.0164337158203125,
0.0005583763122558594,
0.0389404296875,
-0.0577392578125,
-0.04119873046875,
-0.049713134765625,
-0.00... |
poltextlab/xlm-roberta-large-hungarian-media-cap | 2023-07-04T17:40:39.000Z | [
"transformers",
"pytorch",
"xlm-roberta",
"text-classification",
"zero-shot-classification",
"hu",
"license:mit",
"endpoints_compatible",
"region:us"
] | text-classification | poltextlab | null | null | poltextlab/xlm-roberta-large-hungarian-media-cap | 0 | 2 | transformers | 2023-05-31T17:25:14 |
---
license: mit
language:
- hu
tags:
- zero-shot-classification
- text-classification
- pytorch
metrics:
- accuracy
- f1-score
---
# xlm-roberta-large-hungarian-media-cap
## Model description
An `xlm-roberta-large` model fine-tuned on Hungarian training data containing texts of the `media` domain labelled with [major topic codes](https://www.comparativeagendas.net/pages/master-codebook) from the [Comparative Agendas Project](https://www.comparativeagendas.net/).
## How to use the model
#### Loading and tokenizing input data
```python
import pandas as pd
import numpy as np
from datasets import Dataset
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
Trainer, TrainingArguments)
CAP_NUM_DICT = {0: '1', 1: '2', 2: '3', 3: '4', 4: '5', 5: '6',
                6: '7', 7: '8', 8: '9', 9: '10', 10: '12', 11: '13',
                12: '14', 13: '15', 14: '16', 15: '17', 16: '18',
                17: '19', 18: '20', 19: '21', 20: '23', 21: '999'}
tokenizer = AutoTokenizer.from_pretrained('xlm-roberta-large')
num_labels = len(CAP_NUM_DICT)
MAXLEN = 256  # assumed value: MAXLEN is used below but never defined in the original snippet
def tokenize_dataset(data : pd.DataFrame):
tokenized = tokenizer(data["text"],
max_length=MAXLEN,
truncation=True,
padding="max_length")
return tokenized
hg_data = Dataset.from_pandas(data)  # `data` is assumed to be a pandas DataFrame with a "text" column
dataset = hg_data.map(tokenize_dataset, batched=True, remove_columns=hg_data.column_names)
```
#### Inference using the Trainer class
```python
model = AutoModelForSequenceClassification.from_pretrained('poltextlab/xlm-roberta-large-hungarian-media-cap',
num_labels=num_labels,
problem_type="multi_label_classification",
ignore_mismatched_sizes=True
)
training_args = TrainingArguments(
output_dir='.',
per_device_train_batch_size=8,
per_device_eval_batch_size=8
)
trainer = Trainer(
model=model,
args=training_args
)
probs = trainer.predict(test_dataset=dataset).predictions
predicted = pd.DataFrame(np.argmax(probs, axis=1)).replace({0: CAP_NUM_DICT}).rename(
columns={0: 'predicted'}).reset_index(drop=True)
```
### Fine-tuning procedure
`xlm-roberta-large-hungarian-media-cap` was fine-tuned using the Hugging Face Trainer class with the following hyperparameters:
```python
training_args = TrainingArguments(
output_dir=f"../model/{model_dir}/tmp/",
logging_dir=f"../logs/{model_dir}/",
logging_strategy='epoch',
num_train_epochs=10,
per_device_train_batch_size=8,
per_device_eval_batch_size=8,
learning_rate=5e-06,
seed=42,
save_strategy='epoch',
evaluation_strategy='epoch',
save_total_limit=1,
load_best_model_at_end=True
)
```
We also used an `EarlyStoppingCallback` with a patience of 2 epochs, wired into the Trainer as sketched below.
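A minimal sketch (`train_dataset` and `eval_dataset` are assumed to be defined; early stopping relies on the `evaluation_strategy` and `load_best_model_at_end` settings shown above):

```python
from transformers import EarlyStoppingCallback, Trainer

trainer = Trainer(
    model=model,
    args=training_args,
    train_dataset=train_dataset,  # assumed to be defined
    eval_dataset=eval_dataset,    # assumed to be defined
    callbacks=[EarlyStoppingCallback(early_stopping_patience=2)],
)
trainer.train()
```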
## Model performance
The model was evaluated on a test set of 5781 examples (10% of the available data).<br>
Model accuracy is **0.63**.
| label | precision | recall | f1-score | support |
|:-------------|------------:|---------:|-----------:|----------:|
| 0 | 0.61 | 0.57 | 0.59 | 697 |
| 1 | 0.29 | 0.22 | 0.25 | 89 |
| 2 | 0.69 | 0.72 | 0.7 | 236 |
| 3 | 0.65 | 0.69 | 0.67 | 142 |
| 4 | 0.42 | 0.51 | 0.46 | 84 |
| 5 | 0.68 | 0.69 | 0.68 | 105 |
| 6 | 0.58 | 0.49 | 0.53 | 37 |
| 7 | 0.64 | 0.49 | 0.55 | 125 |
| 8 | 0.57 | 0.36 | 0.44 | 22 |
| 9 | 0.62 | 0.65 | 0.64 | 185 |
| 10 | 0.47 | 0.52 | 0.49 | 443 |
| 11 | 0.55 | 0.54 | 0.54 | 56 |
| 12 | 0.55 | 0.57 | 0.56 | 80 |
| 13 | 0.51 | 0.38 | 0.43 | 119 |
| 14 | 0.65 | 0.45 | 0.53 | 231 |
| 15 | 0.66 | 0.71 | 0.68 | 92 |
| 16 | 0 | 0 | 0 | 16 |
| 17 | 0.69 | 0.66 | 0.67 | 1161 |
| 18 | 0.43 | 0.56 | 0.49 | 482 |
| 19 | 0.5 | 0.17 | 0.25 | 18 |
| 20 | 0.39 | 0.3 | 0.34 | 37 |
| 21 | 0.79 | 0.82 | 0.8 | 1324 |
| macro avg | 0.54 | 0.5 | 0.51 | 5781 |
| weighted avg | 0.64 | 0.63 | 0.63 | 5781 |
## Inference platform
This model is used by the [CAP Babel Machine](https://babel.poltextlab.com), a free, open-source natural language processing tool designed to simplify and speed up comparative research projects.
## Cooperation
Model performance can be significantly improved by extending our training sets. We welcome submissions of CAP-coded corpora (from any domain and language) at poltextlab{at}poltextlab{dot}com or via the [CAP Babel Machine](https://babel.poltextlab.com).
## Debugging and issues
This architecture uses the `sentencepiece` tokenizer. To run the model with `transformers` versions earlier than 4.27, you need to install `sentencepiece` manually.
If you encounter a `RuntimeError` when loading the model using the `from_pretrained()` method, adding `ignore_mismatched_sizes=True` should solve the issue.
| 5,694 | [
[
-0.042633056640625,
-0.048126220703125,
0.00534820556640625,
0.019622802734375,
-0.0043487548828125,
-0.003437042236328125,
-0.026519775390625,
-0.0217132568359375,
0.01457977294921875,
0.019561767578125,
-0.040252685546875,
-0.048431396484375,
-0.05670166015625... |
YakovElm/Apache20Classic_Balance_DATA_ratio_4 | 2023-05-31T18:34:20.000Z | [
"transformers",
"tf",
"bert",
"text-classification",
"generated_from_keras_callback",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | text-classification | YakovElm | null | null | YakovElm/Apache20Classic_Balance_DATA_ratio_4 | 0 | 2 | transformers | 2023-05-31T17:26:33 | ---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: Apache20Classic_Balance_DATA_ratio_4
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# Apache20Classic_Balance_DATA_ratio_4
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.3771
- Train Accuracy: 0.8462
- Validation Loss: 0.6704
- Validation Accuracy: 0.7719
- Epoch: 2
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': 1.0, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': False, 'is_legacy_optimizer': False, 'learning_rate': 3e-05, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32
### Training results
| Train Loss | Train Accuracy | Validation Loss | Validation Accuracy | Epoch |
|:----------:|:--------------:|:---------------:|:-------------------:|:-----:|
| 0.4746 | 0.8099 | 0.5650 | 0.7586 | 0 |
| 0.4254 | 0.8258 | 0.5185 | 0.7613 | 1 |
| 0.3771 | 0.8462 | 0.6704 | 0.7719 | 2 |
### Framework versions
- Transformers 4.29.2
- TensorFlow 2.12.0
- Datasets 2.12.0
- Tokenizers 0.13.3
| 1,816 | [
[
-0.045318603515625,
-0.046142578125,
0.014739990234375,
0.01364898681640625,
-0.031005859375,
-0.033111572265625,
-0.0113677978515625,
-0.0258331298828125,
0.01409149169921875,
0.01502227783203125,
-0.05474853515625,
-0.03936767578125,
-0.049530029296875,
-0... |
poltextlab/xlm-roberta-large-english-media-cap | 2023-07-04T17:40:29.000Z | [
"transformers",
"pytorch",
"xlm-roberta",
"text-classification",
"zero-shot-classification",
"en",
"license:mit",
"endpoints_compatible",
"region:us"
] | text-classification | poltextlab | null | null | poltextlab/xlm-roberta-large-english-media-cap | 0 | 2 | transformers | 2023-05-31T17:31:55 |
---
license: mit
language:
- en
tags:
- zero-shot-classification
- text-classification
- pytorch
metrics:
- accuracy
- f1-score
---
# xlm-roberta-large-english-media-cap
## Model description
An `xlm-roberta-large` model fine-tuned on English training data containing texts of the `media` domain labelled with [major topic codes](https://www.comparativeagendas.net/pages/master-codebook) from the [Comparative Agendas Project](https://www.comparativeagendas.net/).
## How to use the model
#### Loading and tokenizing input data
```python
import pandas as pd
import numpy as np
from datasets import Dataset
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
Trainer, TrainingArguments)
CAP_NUM_DICT = {0: '1', 1: '2', 2: '3', 3: '4', 4: '5', 5: '6',
                6: '7', 7: '8', 8: '9', 9: '10', 10: '12', 11: '13',
                12: '14', 13: '15', 14: '16', 15: '17', 16: '18',
                17: '19', 18: '20', 19: '21', 20: '23', 21: '999'}
tokenizer = AutoTokenizer.from_pretrained('xlm-roberta-large')
num_labels = len(CAP_NUM_DICT)
MAXLEN = 256  # assumed value: MAXLEN is used below but never defined in the original snippet
def tokenize_dataset(data : pd.DataFrame):
tokenized = tokenizer(data["text"],
max_length=MAXLEN,
truncation=True,
padding="max_length")
return tokenized
hg_data = Dataset.from_pandas(data)  # `data` is assumed to be a pandas DataFrame with a "text" column
dataset = hg_data.map(tokenize_dataset, batched=True, remove_columns=hg_data.column_names)
```
#### Inference using the Trainer class
```python
model = AutoModelForSequenceClassification.from_pretrained('poltextlab/xlm-roberta-large-english-media-cap',
num_labels=num_labels,
problem_type="multi_label_classification",
ignore_mismatched_sizes=True
)
training_args = TrainingArguments(
output_dir='.',
per_device_train_batch_size=8,
per_device_eval_batch_size=8
)
trainer = Trainer(
model=model,
args=training_args
)
probs = trainer.predict(test_dataset=dataset).predictions
predicted = pd.DataFrame(np.argmax(probs, axis=1)).replace({0: CAP_NUM_DICT}).rename(
columns={0: 'predicted'}).reset_index(drop=True)
```
### Fine-tuning procedure
`xlm-roberta-large-english-media-cap` was fine-tuned using the Hugging Face Trainer class with the following hyperparameters:
```python
training_args = TrainingArguments(
output_dir=f"../model/{model_dir}/tmp/",
logging_dir=f"../logs/{model_dir}/",
logging_strategy='epoch',
num_train_epochs=10,
per_device_train_batch_size=8,
per_device_eval_batch_size=8,
learning_rate=5e-06,
seed=42,
save_strategy='epoch',
evaluation_strategy='epoch',
save_total_limit=1,
load_best_model_at_end=True
)
```
We also used an `EarlyStoppingCallback` with a patience of 2 epochs.
## Model performance
The model was evaluated on a test set of 13802 examples (10% of the available data).<br>
Model accuracy is **0.78**.
| label | precision | recall | f1-score | support |
|:-------------|------------:|---------:|-----------:|----------:|
| 0 | 0.75 | 0.8 | 0.77 | 618 |
| 1 | 0.75 | 0.61 | 0.67 | 385 |
| 2 | 0.86 | 0.79 | 0.82 | 780 |
| 3 | 0.72 | 0.71 | 0.71 | 143 |
| 4 | 0.68 | 0.64 | 0.66 | 312 |
| 5 | 0.83 | 0.89 | 0.86 | 746 |
| 6 | 0.79 | 0.83 | 0.81 | 407 |
| 7 | 0.81 | 0.82 | 0.81 | 406 |
| 8 | 0.59 | 0.55 | 0.56 | 44 |
| 9 | 0.8 | 0.81 | 0.81 | 683 |
| 10 | 0.81 | 0.8 | 0.8 | 1297 |
| 11 | 0.65 | 0.69 | 0.67 | 167 |
| 12 | 0.64 | 0.74 | 0.69 | 345 |
| 13 | 0.76 | 0.74 | 0.75 | 1068 |
| 14 | 0.75 | 0.77 | 0.76 | 1168 |
| 15 | 0.73 | 0.64 | 0.68 | 306 |
| 16 | 0.78 | 0.51 | 0.61 | 152 |
| 17 | 0.77 | 0.84 | 0.81 | 1775 |
| 18 | 0.84 | 0.82 | 0.83 | 2475 |
| 19 | 0.69 | 0.53 | 0.6 | 158 |
| 20 | 0.62 | 0.71 | 0.66 | 367 |
| 21 | 0 | 0 | 0 | 0 |
| macro avg | 0.71 | 0.69 | 0.7 | 13802 |
| weighted avg | 0.78 | 0.78 | 0.78 | 13802 |
## Inference platform
This model is used by the [CAP Babel Machine](https://babel.poltextlab.com), a free, open-source natural language processing tool designed to simplify and speed up comparative research projects.
## Cooperation
Model performance can be significantly improved by extending our training sets. We welcome submissions of CAP-coded corpora (from any domain and language) at poltextlab{at}poltextlab{dot}com or via the [CAP Babel Machine](https://babel.poltextlab.com).
## Debugging and issues
This architecture uses the `sentencepiece` tokenizer. In order to run the model before `transformers==4.27` you need to install it manually.
If you encounter a `RuntimeError` when loading the model using the `from_pretrained()` method, adding `ignore_mismatched_sizes=True` should solve the issue.
| 5,687 | [
[
-0.041290283203125,
-0.0458984375,
0.00540924072265625,
0.020751953125,
-0.00400543212890625,
-0.0027370452880859375,
-0.0269012451171875,
-0.0205535888671875,
0.01535797119140625,
0.0193939208984375,
-0.03851318359375,
-0.046905517578125,
-0.056365966796875,
... |
poltextlab/xlm-roberta-large-english-other-cap | 2023-07-04T17:40:39.000Z | [
"transformers",
"pytorch",
"xlm-roberta",
"text-classification",
"zero-shot-classification",
"en",
"license:mit",
"endpoints_compatible",
"region:us"
] | text-classification | poltextlab | null | null | poltextlab/xlm-roberta-large-english-other-cap | 0 | 2 | transformers | 2023-05-31T17:38:35 |
---
license: mit
language:
- en
tags:
- zero-shot-classification
- text-classification
- pytorch
metrics:
- accuracy
- f1-score
---
# xlm-roberta-large-english-other-cap
## Model description
An `xlm-roberta-large` model fine-tuned on English training data containing texts of the `other` domain labelled with [major topic codes](https://www.comparativeagendas.net/pages/master-codebook) from the [Comparative Agendas Project](https://www.comparativeagendas.net/).
## How to use the model
#### Loading and tokenizing input data
```python
import pandas as pd
import numpy as np
from datasets import Dataset
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
Trainer, TrainingArguments)
CAP_NUM_DICT = {0: '1', 1: '2', 2: '3', 3: '4', 4: '5', 5: '6',
                6: '7', 7: '8', 8: '9', 9: '10', 10: '12', 11: '13',
                12: '14', 13: '15', 14: '16', 15: '17', 16: '18',
                17: '19', 18: '20', 19: '21', 20: '23', 21: '999'}
tokenizer = AutoTokenizer.from_pretrained('xlm-roberta-large')
num_labels = len(CAP_NUM_DICT)
MAXLEN = 256  # assumed value: MAXLEN is used below but never defined in the original snippet
def tokenize_dataset(data : pd.DataFrame):
tokenized = tokenizer(data["text"],
max_length=MAXLEN,
truncation=True,
padding="max_length")
return tokenized
hg_data = Dataset.from_pandas(data)  # `data` is assumed to be a pandas DataFrame with a "text" column
dataset = hg_data.map(tokenize_dataset, batched=True, remove_columns=hg_data.column_names)
```
#### Inference using the Trainer class
```python
model = AutoModelForSequenceClassification.from_pretrained('poltextlab/xlm-roberta-large-english-other-cap',
num_labels=num_labels,
problem_type="multi_label_classification",
ignore_mismatched_sizes=True
)
training_args = TrainingArguments(
output_dir='.',
per_device_train_batch_size=8,
per_device_eval_batch_size=8
)
trainer = Trainer(
model=model,
args=training_args
)
probs = trainer.predict(test_dataset=dataset).predictions
predicted = pd.DataFrame(np.argmax(probs, axis=1)).replace({0: CAP_NUM_DICT}).rename(
columns={0: 'predicted'}).reset_index(drop=True)
```
### Fine-tuning procedure
`xlm-roberta-large-english-other-cap` was fine-tuned using the Hugging Face Trainer class with the following hyperparameters:
```python
training_args = TrainingArguments(
output_dir=f"../model/{model_dir}/tmp/",
logging_dir=f"../logs/{model_dir}/",
logging_strategy='epoch',
num_train_epochs=10,
per_device_train_batch_size=8,
per_device_eval_batch_size=8,
learning_rate=5e-06,
seed=42,
save_strategy='epoch',
evaluation_strategy='epoch',
save_total_limit=1,
load_best_model_at_end=True
)
```
We also used an `EarlyStoppingCallback` with a patience of 2 epochs.
## Model performance
The model was evaluated on a test set of 4512 examples (10% of the available data).<br>
Model accuracy is **0.78**.
| label | precision | recall | f1-score | support |
|:-------------|------------:|---------:|-----------:|----------:|
| 0 | 0.79 | 0.77 | 0.78 | 602 |
| 1 | 0.78 | 0.76 | 0.77 | 139 |
| 2 | 0.89 | 0.9 | 0.9 | 90 |
| 3 | 0.8 | 0.78 | 0.79 | 106 |
| 4 | 0.81 | 0.76 | 0.78 | 221 |
| 5 | 0.8 | 0.71 | 0.76 | 63 |
| 6 | 0.66 | 0.78 | 0.71 | 174 |
| 7 | 0.87 | 0.73 | 0.79 | 152 |
| 8 | 0.83 | 0.8 | 0.82 | 94 |
| 9 | 0.83 | 0.83 | 0.83 | 88 |
| 10 | 0.83 | 0.73 | 0.78 | 227 |
| 11 | 0.77 | 0.77 | 0.77 | 77 |
| 12 | 0.64 | 0.68 | 0.66 | 56 |
| 13 | 0.74 | 0.79 | 0.77 | 278 |
| 14 | 0.83 | 0.76 | 0.8 | 394 |
| 15 | 0.74 | 0.81 | 0.77 | 105 |
| 16 | 0.76 | 0.78 | 0.77 | 165 |
| 17 | 0.76 | 0.83 | 0.79 | 799 |
| 18 | 0.76 | 0.78 | 0.77 | 531 |
| 19 | 0.88 | 0.91 | 0.9 | 76 |
| 20 | 0.93 | 0.72 | 0.81 | 18 |
| 21 | 0.98 | 0.74 | 0.84 | 57 |
| macro avg | 0.8 | 0.78 | 0.79 | 4512 |
| weighted avg | 0.79 | 0.78 | 0.78 | 4512 |
## Inference platform
This model is used by the [CAP Babel Machine](https://babel.poltextlab.com), a free, open-source natural language processing tool designed to simplify and speed up comparative research projects.
## Cooperation
Model performance can be significantly improved by extending our training sets. We welcome submissions of CAP-coded corpora (from any domain and language) at poltextlab{at}poltextlab{dot}com or via the [CAP Babel Machine](https://babel.poltextlab.com).
## Debugging and issues
This architecture uses the `sentencepiece` tokenizer. To run the model with `transformers` versions earlier than 4.27, you need to install `sentencepiece` manually.
If you encounter a `RuntimeError` when loading the model using the `from_pretrained()` method, adding `ignore_mismatched_sizes=True` should solve the issue.
| 5,686 | [
[
-0.040069580078125,
-0.04840087890625,
0.006252288818359375,
0.0184326171875,
-0.002628326416015625,
-0.0014600753784179688,
-0.02716064453125,
-0.02362060546875,
0.01605224609375,
0.0220489501953125,
-0.035919189453125,
-0.0482177734375,
-0.05615234375,
0.0... |
Showroom/beauty_subcategory_classifier | 2023-05-31T17:44:37.000Z | [
"transformers",
"pytorch",
"safetensors",
"bert",
"text-classification",
"autotrain",
"en",
"dataset:Showroom/autotrain-data-beauty_categories",
"co2_eq_emissions",
"endpoints_compatible",
"region:us"
] | text-classification | Showroom | null | null | Showroom/beauty_subcategory_classifier | 1 | 2 | transformers | 2023-05-31T17:41:57 | ---
tags:
- autotrain
- text-classification
language:
- en
widget:
- text: "I love AutoTrain"
datasets:
- Showroom/autotrain-data-beauty_categories
co2_eq_emissions:
emissions: 0.4401601303255541
---
# Model Trained Using AutoTrain
- Problem type: Multi-class Classification
- Model ID: 63190135345
- CO2 Emissions (in grams): 0.4402
## Validation Metrics
- Loss: 0.745
- Accuracy: 0.829
- Macro F1: 0.550
- Micro F1: 0.829
- Weighted F1: 0.815
- Macro Precision: 0.580
- Micro Precision: 0.829
- Weighted Precision: 0.811
- Macro Recall: 0.543
- Micro Recall: 0.829
- Weighted Recall: 0.829
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoTrain"}' https://api-inference.huggingface.co/models/Showroom/autotrain-beauty_categories-63190135345
```
Or use the Python API:
```
from transformers import AutoModelForSequenceClassification, AutoTokenizer
model = AutoModelForSequenceClassification.from_pretrained("Showroom/autotrain-beauty_categories-63190135345", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("Showroom/autotrain-beauty_categories-63190135345", use_auth_token=True)
inputs = tokenizer("I love AutoTrain", return_tensors="pt")
outputs = model(**inputs)
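# Possible follow-up (illustrative, not part of the original card):
# map the logits to a label name using the model's own config.
import torch
probs = torch.softmax(outputs.logits, dim=-1)
print(model.config.id2label[int(probs.argmax(dim=-1))])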
``` | 1,314 | [
[
-0.031494140625,
-0.0231781005859375,
0.00015854835510253906,
0.0130615234375,
-0.0012073516845703125,
0.00447845458984375,
-0.001979827880859375,
-0.0126190185546875,
0.0011882781982421875,
0.00754547119140625,
-0.047943115234375,
-0.036529541015625,
-0.0526428... |
guilhermelabigalini/distilbert-base-uncased-finetuned-emotion | 2023-05-31T21:07:26.000Z | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:emotion",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | text-classification | guilhermelabigalini | null | null | guilhermelabigalini/distilbert-base-uncased-finetuned-emotion | 0 | 2 | transformers | 2023-05-31T18:03:05 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-emotion
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: emotion
type: emotion
config: split
split: validation
args: split
metrics:
- name: Accuracy
type: accuracy
value: 0.921
- name: F1
type: f1
value: 0.9211019825750986
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2219
- Accuracy: 0.921
- F1: 0.9211
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.8128 | 1.0 | 250 | 0.3195 | 0.9035 | 0.9011 |
| 0.2509 | 2.0 | 500 | 0.2219 | 0.921 | 0.9211 |
### Framework versions
- Transformers 4.28.1
- Pytorch 1.13.1+cpu
- Datasets 2.12.0
- Tokenizers 0.13.3
| 1,845 | [
[
-0.0389404296875,
-0.04180908203125,
0.01519775390625,
0.0218658447265625,
-0.0269012451171875,
-0.0198211669921875,
-0.01393890380859375,
-0.0084991455078125,
0.009918212890625,
0.0088043212890625,
-0.057403564453125,
-0.05126953125,
-0.05926513671875,
-0.0... |
sofia-todeschini/BioLinkBERT-LitCovid-v1.0 | 2023-06-15T17:44:27.000Z | [
"transformers",
"pytorch",
"bert",
"text-classification",
"license:mit",
"endpoints_compatible",
"region:us"
] | text-classification | sofia-todeschini | null | null | sofia-todeschini/BioLinkBERT-LitCovid-v1.0 | 0 | 2 | transformers | 2023-05-31T18:48:52 | ---
license: mit
---
# BioLinkBERT-LitCovid-v1.0
This model is a fine-tuned version of [michiyasunaga/BioLinkBERT-base](https://huggingface.co/michiyasunaga/BioLinkBERT-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1098
- F1: 0.8992
- Roc Auc: 0.9330
- Accuracy: 0.7945
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 | Roc Auc | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:------:|:-------:|:--------:|
| 0.1172 | 1.0 | 3120 | 0.1098 | 0.8992 | 0.9330 | 0.7945 |
### Framework versions
- Transformers 4.28.0
- Pytorch 2.0.1+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3 | 1,146 | [
[
-0.02288818359375,
-0.041900634765625,
0.018585205078125,
-0.0035247802734375,
-0.03387451171875,
0.00273895263671875,
0.003505706787109375,
-0.01421356201171875,
0.0234527587890625,
0.0125732421875,
-0.062469482421875,
-0.043121337890625,
-0.038055419921875,
... |
Xenova/finbert | 2023-05-31T20:20:46.000Z | [
"transformers.js",
"onnx",
"bert",
"text-classification",
"region:us"
] | text-classification | Xenova | null | null | Xenova/finbert | 0 | 2 | transformers.js | 2023-05-31T20:20:06 | ---
library_name: "transformers.js"
---
This is https://huggingface.co/ProsusAI/finbert with ONNX weights, made compatible with Transformers.js.
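For reference, a rough sketch of producing such ONNX weights with [🤗 Optimum](https://huggingface.co/docs/optimum/index) (the `export` argument and output layout are assumptions based on recent Optimum releases; install with `pip install optimum[onnxruntime]`):

```python
from optimum.onnxruntime import ORTModelForSequenceClassification

# Export the PyTorch checkpoint to ONNX, then save it under an `onnx`
# subfolder to mirror the layout this repository uses.
ort_model = ORTModelForSequenceClassification.from_pretrained("ProsusAI/finbert", export=True)
ort_model.save_pretrained("finbert-web/onnx")
```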
Note: Having a separate repo for ONNX weights is intended to be a temporary solution until WebML gains more traction. If you would like to make your models web-ready, we recommend converting to ONNX using [🤗 Optimum](https://huggingface.co/docs/optimum/index) and structuring your repo like this one (with ONNX weights located in a subfolder named `onnx`). | 495 | [
[
-0.0404052734375,
0.01849365234375,
0.0233001708984375,
0.0439453125,
-0.0014400482177734375,
0.00435638427734375,
-0.0038890838623046875,
-0.0122833251953125,
0.0242767333984375,
0.0472412109375,
-0.057586669921875,
-0.038665771484375,
-0.03875732421875,
0.... |
jfforero/a_different_name2 | 2023-05-31T20:22:27.000Z | [
"transformers",
"tf",
"bert",
"text-classification",
"generated_from_keras_callback",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | text-classification | jfforero | null | null | jfforero/a_different_name2 | 0 | 2 | transformers | 2023-05-31T20:22:00 | ---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: a_different_name2
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# a_different_name2
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: None
- training_precision: float32
### Training results
### Framework versions
- Transformers 4.29.2
- TensorFlow 2.12.0
- Tokenizers 0.13.3
| 936 | [
[
-0.0360107421875,
-0.046539306640625,
0.021270751953125,
0.00405120849609375,
-0.04693603515625,
-0.02349853515625,
-0.01261138916015625,
-0.03057861328125,
0.00726318359375,
0.033935546875,
-0.050750732421875,
-0.037872314453125,
-0.06591796875,
-0.02839660... |
LazarusNLP/simcse-indobert-lite-base | 2023-05-31T20:57:21.000Z | [
"sentence-transformers",
"pytorch",
"albert",
"feature-extraction",
"sentence-similarity",
"transformers",
"dataset:LazarusNLP/wikipedia_id_20230520",
"endpoints_compatible",
"region:us"
] | sentence-similarity | LazarusNLP | null | null | LazarusNLP/simcse-indobert-lite-base | 0 | 2 | sentence-transformers | 2023-05-31T20:57:17 | ---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- transformers
datasets:
- LazarusNLP/wikipedia_id_20230520
---
# LazarusNLP/simcse-indobert-lite-base
This is a [sentence-transformers](https://www.SBERT.net) model: it maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for tasks like clustering or semantic search.
<!--- Describe your model here -->
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:
```
pip install -U sentence-transformers
```
Then you can use the model like this:
```python
from sentence_transformers import SentenceTransformer
sentences = ["This is an example sentence", "Each sentence is converted"]
model = SentenceTransformer('LazarusNLP/simcse-indobert-lite-base')
embeddings = model.encode(sentences)
print(embeddings)
```
## Usage (HuggingFace Transformers)
Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: first, pass your input through the transformer model, then apply the right pooling operation on top of the contextualized word embeddings.
```python
from transformers import AutoTokenizer, AutoModel
import torch
#Mean Pooling - Take attention mask into account for correct averaging
def mean_pooling(model_output, attention_mask):
token_embeddings = model_output[0] #First element of model_output contains all token embeddings
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9)
# Sentences we want sentence embeddings for
sentences = ['This is an example sentence', 'Each sentence is converted']
# Load model from HuggingFace Hub
tokenizer = AutoTokenizer.from_pretrained('LazarusNLP/simcse-indobert-lite-base')
model = AutoModel.from_pretrained('LazarusNLP/simcse-indobert-lite-base')
# Tokenize sentences
encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt')
# Compute token embeddings
with torch.no_grad():
model_output = model(**encoded_input)
# Perform pooling. In this case, mean pooling.
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask'])
print("Sentence embeddings:")
print(sentence_embeddings)
```
## Evaluation Results
<!--- Describe how your model was evaluated -->
For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=LazarusNLP/simcse-indobert-lite-base)
## Training
The model was trained with the parameters:
**DataLoader**:
`torch.utils.data.dataloader.DataLoader` of length 7813 with parameters:
```
{'batch_size': 128, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```
**Loss**:
`sentence_transformers.losses.MultipleNegativesRankingLoss.MultipleNegativesRankingLoss` with parameters:
```
{'scale': 20.0, 'similarity_fct': 'cos_sim'}
```
Parameters of the fit()-Method:
```
{
"epochs": 1,
"evaluation_steps": 0,
"evaluator": "sentence_transformers.evaluation.EmbeddingSimilarityEvaluator.EmbeddingSimilarityEvaluator",
"max_grad_norm": 1,
"optimizer_class": "<class 'torch.optim.adamw.AdamW'>",
"optimizer_params": {
"lr": 3e-05
},
"scheduler": "WarmupLinear",
"steps_per_epoch": null,
"warmup_steps": 10000,
"weight_decay": 0.01
}
```
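A minimal sketch reconstructing that setup with the sentence-transformers API (the base checkpoint name and the `sentences` list are assumptions; SimCSE-style training pairs each sentence with itself and relies on dropout for the positive views):

```python
from torch.utils.data import DataLoader
from sentence_transformers import SentenceTransformer, InputExample, losses

model = SentenceTransformer("indobenchmark/indobert-lite-base-p1")         # assumed base checkpoint
train_examples = [InputExample(texts=[s, s]) for s in sentences]           # `sentences` assumed to be defined
train_dataloader = DataLoader(train_examples, shuffle=True, batch_size=128)
train_loss = losses.MultipleNegativesRankingLoss(model, scale=20.0)

model.fit(
    train_objectives=[(train_dataloader, train_loss)],
    epochs=1,
    warmup_steps=10000,
    optimizer_params={"lr": 3e-05},
    weight_decay=0.01,
)
```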
## Full Model Architecture
```
SentenceTransformer(
(0): Transformer({'max_seq_length': 75, 'do_lower_case': False}) with Transformer model: AlbertModel
(1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False})
)
```
## Citing & Authors
<!--- Describe where people can find more information --> | 4,116 | [
[
-0.01800537109375,
-0.04986572265625,
0.0238800048828125,
0.0261077880859375,
-0.0273590087890625,
-0.0307159423828125,
-0.0210418701171875,
0.00782012939453125,
0.02001953125,
0.020751953125,
-0.042633056640625,
-0.0458984375,
-0.050872802734375,
0.00272178... |
jayanta/xlm-roberta-base-english-sentweet-derogatory | 2023-05-31T22:33:48.000Z | [
"transformers",
"pytorch",
"tensorboard",
"xlm-roberta",
"text-classification",
"generated_from_trainer",
"license:mit",
"endpoints_compatible",
"region:us"
] | text-classification | jayanta | null | null | jayanta/xlm-roberta-base-english-sentweet-derogatory | 0 | 2 | transformers | 2023-05-31T22:12:28 | ---
license: mit
tags:
- generated_from_trainer
metrics:
- accuracy
- precision
- recall
- f1
model-index:
- name: xlm-roberta-base-english-sentweet-derogatory
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-english-sentweet-derogatory
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6089
- Accuracy: 0.8125
- Precision: 0.8214
- Recall: 0.8214
- F1: 0.8125
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 6
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Precision | Recall | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:---------:|:------:|:------:|
| No log | 1.0 | 81 | 0.4374 | 0.8090 | 0.8268 | 0.8212 | 0.8088 |
| No log | 2.0 | 162 | 0.5010 | 0.8125 | 0.8250 | 0.8229 | 0.8125 |
| No log | 3.0 | 243 | 0.5245 | 0.8056 | 0.8180 | 0.8159 | 0.8055 |
| No log | 4.0 | 324 | 0.4806 | 0.8090 | 0.8156 | 0.8168 | 0.8090 |
| No log | 5.0 | 405 | 0.5957 | 0.7986 | 0.7998 | 0.8030 | 0.7983 |
| No log | 6.0 | 486 | 0.6089 | 0.8125 | 0.8214 | 0.8214 | 0.8125 |
### Framework versions
- Transformers 4.24.0
- Pytorch 1.13.1+cu117
- Datasets 2.6.1
- Tokenizers 0.11.0
| 1,997 | [
[
-0.033294677734375,
-0.045135498046875,
0.0159149169921875,
0.0035991668701171875,
-0.016937255859375,
-0.022979736328125,
-0.00704193115234375,
-0.01788330078125,
0.0169219970703125,
0.033447265625,
-0.051971435546875,
-0.0555419921875,
-0.05572509765625,
-... |
dedgington/vit-small-ds | 2023-06-15T01:00:12.000Z | [
"keras",
"region:us"
] | null | dedgington | null | null | dedgington/vit-small-ds | 0 | 2 | keras | 2023-05-31T23:21:30 | ---
library_name: keras
---
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
| Hyperparameters | Value |
| :-- | :-- |
| name | AdamW |
| weight_decay | 0.0001 |
| clipnorm | None |
| global_clipnorm | None |
| clipvalue | None |
| use_ema | False |
| ema_momentum | 0.99 |
| ema_overwrite_frequency | None |
| jit_compile | True |
| is_legacy_optimizer | False |
| learning_rate | 0.0010000000474974513 |
| beta_1 | 0.9 |
| beta_2 | 0.999 |
| epsilon | 1e-07 |
| amsgrad | False |
| training_precision | float32 |
## Model Plot
<details>
<summary>View Model Plot</summary>

</details> | 843 | [
[
-0.03631591796875,
-0.035919189453125,
0.027496337890625,
0.01032257080078125,
-0.04302978515625,
-0.024169921875,
0.0131988525390625,
-0.007198333740234375,
0.0162353515625,
0.033416748046875,
-0.043731689453125,
-0.048370361328125,
-0.039215087890625,
-0.0... |
AG6019/distilbert-base-uncased-finetuned-sst2-ag | 2023-06-01T01:07:12.000Z | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | text-classification | AG6019 | null | null | AG6019/distilbert-base-uncased-finetuned-sst2-ag | 0 | 2 | transformers | 2023-06-01T00:57:26 | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: distilbert-base-uncased-finetuned-sst2-ag
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-sst2-ag
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5233
- Accuracy: 0.1520
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-06
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 290 | 0.4902 | 0.2435 |
| 0.379 | 2.0 | 580 | 0.4798 | 0.2176 |
| 0.379 | 3.0 | 870 | 0.4815 | 0.1986 |
| 0.3232 | 4.0 | 1160 | 0.5008 | 0.1675 |
| 0.3232 | 5.0 | 1450 | 0.5090 | 0.1727 |
| 0.295 | 6.0 | 1740 | 0.5092 | 0.1762 |
| 0.2697 | 7.0 | 2030 | 0.5164 | 0.1641 |
| 0.2697 | 8.0 | 2320 | 0.5151 | 0.1589 |
| 0.2597 | 9.0 | 2610 | 0.5210 | 0.1572 |
| 0.2597 | 10.0 | 2900 | 0.5233 | 0.1520 |
### Framework versions
- Transformers 4.29.2
- Pytorch 2.0.1+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
| 1,947 | [
[
-0.0269317626953125,
-0.042938232421875,
0.0125732421875,
0.0029144287109375,
-0.029052734375,
-0.0159149169921875,
-0.00803375244140625,
-0.006336212158203125,
0.00601959228515625,
0.0119476318359375,
-0.047149658203125,
-0.0426025390625,
-0.060211181640625,
... |
razerblade072611/EleutherAI2 | 2023-06-02T01:09:28.000Z | [
"transformers",
"pytorch",
"jax",
"rust",
"gpt_neo",
"text-generation",
"doi:10.57967/hf/0709",
"endpoints_compatible",
"region:us"
] | text-generation | razerblade072611 | null | null | razerblade072611/EleutherAI2 | 0 | 2 | transformers | 2023-06-01T01:26:42 | MAIN_SCRIPT_MODULE
(common_module)
import atexit
import nltk
import pyttsx3
import spacy
import speech_recognition as sr
import torch
from transformers import GPTNeoForCausalLM, AutoTokenizer
from nltk.sentiment import SentimentIntensityAnalyzer
import os
import json
from memory_module import MemoryModule
from sentiment_module import SentimentAnalysisModule
# Get the current directory
current_directory = os.getcwd()
# Get a list of files and directories in the current directory
file_list = os.listdir(current_directory)
# Print the list
for file_name in file_list:
print(file_name)
sia = SentimentIntensityAnalyzer()
sentence = "This is a positive sentence."
sentiment = sia.polarity_scores(sentence)
# Access sentiment scores
compound_score = sentiment['compound']
positive_score = sentiment['pos']
negative_score = sentiment['neg']
model_directory = "EleutherAI/gpt-neo-125m"
# Download necessary NLTK resources
nltk.download('punkt')
nltk.download('wordnet')
nltk.download('stopwords')
# Check if GPU is available and set the device accordingly
device = torch.device('cuda:0' if torch.cuda.is_available() else 'cpu')
if torch.cuda.is_available():
current_device = torch.cuda.current_device()
print(f"Using GPU: {torch.cuda.get_device_name(current_device)}")
else:
print("No GPU available, using CPU.")
# Initialize the speech engine
speech_engine = pyttsx3.init()
# Get the list of available voices
voices = speech_engine.getProperty('voices')
for voice in voices:
print(voice.id, voice.name)
# Set the desired voice
desired_voice = "Microsoft Hazel Desktop - English (Great Britain)"
voice_id = None
# Find the voice ID based on the desired voice name
for voice in voices:
if desired_voice in voice.name:
voice_id = voice.id
break
if voice_id:
speech_engine.setProperty('voice', voice_id)
print("Desired voice set successfully.")
else:
print("Desired voice not found.")
# Load the spaCy English model
nlp = spacy.load('en_core_web_sm')
# Update the CommonModule instantiation
load_memory_file = "load_memory.json"
save_memory_file = "save_memory.json"
class CommonModule:
def __init__(self, model, name, param1, param2, load_memory_file, save_memory_file):
# Initialize the instance variables using the provided arguments
self.memory = [] # Initialize memory as a list
self.name = name
self.param1 = param1
self.param2 = param2
self.model = GPTNeoForCausalLM.from_pretrained(model_directory)
self.tokenizer = AutoTokenizer.from_pretrained(model_directory)
self.tokenizer.add_special_tokens({'pad_token': '[PAD]'})
        self.gpt3_model = GPTNeoForCausalLM.from_pretrained(model_directory)  # second copy of the same checkpoint; generation below still uses self.model
        self.gpt3_model.to(device)  # move this copy to the device (GPU or CPU)
        # Use the paths passed to the constructor rather than hard-coded, user-specific ones
        self.load_memory_file = load_memory_file
        self.save_memory_file = save_memory_file
self.memory_module = MemoryModule(self.load_memory_file, self.save_memory_file)
self.sentiment_module = SentimentAnalysisModule()
self.speech_engine = speech_engine # Assign the initialized speech engine
self.max_sequence_length = 200 # Decrease the value for faster response
self.num_beams = 4 # Reduce the value for faster response
self.no_repeat_ngram_size = 2
self.temperature = 0.3
self.response_cache = {} # Cache for storing frequently occurring responses
# Initialize speech recognition
self.recognizer = sr.Recognizer()
def reset_conversation(self):
self.memory_module.reset_memory()
def retrieve_cached_response(self, input_text):
named_entities = self.memory_module.get_named_entities()
for entity in named_entities:
if entity.lower() in input_text.lower():
return self.response_cache.get(entity)
return None
def generate_gpt2_response(self, input_text, conversation_history):
        # Prepare the conversation history in the USER:/BOT: prompt format
        # (method names say GPT-2, but the loaded model is GPT-Neo)
if len(conversation_history) == 0:
gpt2_input = "USER: " + input_text + "\n"
else:
gpt2_input = "USER: " + conversation_history[-1] + "\n" # Append the user's query
gpt2_input += "BOT: " + conversation_history[-2] + "\n" # Append the bot's previous response
# Append the rest of the conversation history in reverse order
for i in range(len(conversation_history) - 3, -1, -2):
gpt2_input += "USER: " + conversation_history[i] + "\n"
gpt2_input += "BOT: " + conversation_history[i - 1] + "\n"
# Append the current user input to the conversation history
gpt2_input += "USER: " + input_text + "\n"
# Tokenize the input text
input_ids = self.tokenizer.encode(gpt2_input, return_tensors='pt')
        # Generate a response with the loaded GPT-Neo model
with torch.no_grad():
output = self.model.generate(input_ids, max_length=100, num_return_sequences=1)
# Decode the generated response
generated_text = self.tokenizer.decode(output[:, input_ids.shape[-1]:][0], skip_special_tokens=True)
        # Process the generated response
response = generated_text.strip().split("\n")[-1] # Extract the last line (bot's response)
return response
def process_input(self, input_text, conversation_history):
named_entities = list(self.memory_module.get_named_entities())
for entity in named_entities:
if entity in input_text:
response = self.generate_gpt2_response(input_text, conversation_history)
self.memory_module.add_to_memory(response)
return response
# Check if the input contains a question
if '?' in input_text:
return "You're making me angry, you wouldn't like me when I'm angry."
# Check if the input is a greeting
greetings = ['hello', 'hi', 'hey', 'hola']
for greeting in greetings:
if greeting in input_text.lower():
return "Hello! How can I assist you today?"
# Check if the input is a statement about the model
if self.name.lower() in input_text.lower():
return "Yes, I am {}. How can I assist you today?".format(self.name)
# Check if the input is a statement about the creator
if 'creator' in input_text.lower():
return "I was created by {}.".format(self.param1)
# Check if the input is a sentiment analysis request
if 'sentiment' in input_text.lower():
sentiment = self.sentiment_module.analyze_sentiment(input_text)
if sentiment == 'positive':
return "The sentiment of the text is positive."
elif sentiment == 'negative':
return "The sentiment of the text is negative."
else:
return "The sentiment of the text is neutral."
# Retrieve a cached response if available
cached_response = self.retrieve_cached_response(input_text)
if cached_response:
return cached_response
# Generate a response using GPT-2
response = self.generate_gpt2_response(input_text, conversation_history)
# Update the conversation history and cache the response
conversation_history.append(input_text)
conversation_history.append(response)
self.response_cache[input_text] = response
# Update memory with the generated response
self.memory_module.add_to_memory(response)
return response
common_module = CommonModule(model_directory, "Chatbot", "John Doe", "Jane Smith", load_memory_file, save_memory_file)
def text_to_speech(text):
common_module.speech_engine.say(text)
common_module.speech_engine.runAndWait()
def exit_handler():
common_module.reset_conversation()
atexit.register(exit_handler)
recognizer = sr.Recognizer()
conversation_history = []  # persists across turns so the bot can build multi-turn prompts
while True:
with sr.Microphone() as source:
print("Listening...")
audio = recognizer.listen(source)
try:
user_input = recognizer.recognize_google(audio)
print("User:", user_input)
except sr.UnknownValueError:
print("Sorry, I could not understand your speech.")
continue
except sr.RequestError:
print("Sorry, the speech recognition service is currently unavailable.")
continue
    response = common_module.process_input(user_input, conversation_history)
print("Bot:", response)
text_to_speech(response)
MEMORY_MODULE
import json
import spacy
# Load the spaCy English model
nlp = spacy.load('en_core_web_sm')
class MemoryModule:
def __init__(self, load_file, save_file):
self.memory = []
self.load_file = load_file
self.save_file = save_file
self.load_memory()
def add_to_memory(self, statement):
self.memory.append(statement)
self.save_memory()
def reset_memory(self):
self.memory = []
self.save_memory()
def save_memory(self):
with open(self.save_file, 'w') as file:
json.dump(self.memory, file)
def load_memory(self):
try:
with open(self.load_file, 'r') as file:
loaded_memory = json.load(file)
if isinstance(loaded_memory, list):
self.memory = loaded_memory
else:
print("Loaded memory is not a list. Starting with an empty memory.")
except FileNotFoundError:
print("Load memory file not found. Starting with an empty memory.")
def get_named_entities(self):
named_entities = set()
for statement in self.memory:
doc = nlp(statement)
for entity in doc.ents:
if entity.label_:
named_entities.add(entity.text)
return named_entities
memory_module = MemoryModule(
r"C:\Users\withe\PycharmProjects\no hope2\Chat_Bot4\load_memory.json",
r"C:\Users\withe\PycharmProjects\no hope2\Chat_Bot4\save_memory.json"
)
SENTIMENT_MODULE
from nltk.sentiment import SentimentIntensityAnalyzer

class SentimentAnalysisModule:
def __init__(self):
self.sia = SentimentIntensityAnalyzer()
def analyze_sentiment(self, text):
sentiment = self.sia.polarity_scores(text)
compound_score = sentiment['compound']
if compound_score >= 0.05:
return 'positive'
elif compound_score <= -0.05:
return 'negative'
else:
return 'neutral'
| 10,675 | [
[
-0.016326904296875,
-0.07464599609375,
0.0220184326171875,
0.024505615234375,
-0.00919342041015625,
-0.0025787353515625,
-0.020751953125,
-0.009124755859375,
0.0014791488647460938,
0.02471923828125,
-0.04461669921875,
-0.0406494140625,
-0.0360107421875,
-0.0... |
Shuddup/depression_classifier_2 | 2023-06-01T03:06:10.000Z | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | text-classification | Shuddup | null | null | Shuddup/depression_classifier_2 | 0 | 2 | transformers | 2023-06-01T02:51:00 | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: depression_classifier_2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# depression_classifier_2
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7466
- Accuracy: 0.6635
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 451 | 0.8134 | 0.6515 |
| 0.9111 | 2.0 | 902 | 0.7466 | 0.6635 |
### Framework versions
- Transformers 4.29.2
- Pytorch 2.0.1+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
| 1,415 | [
[
-0.029449462890625,
-0.0440673828125,
0.0292205810546875,
0.0240325927734375,
-0.0230560302734375,
-0.0265960693359375,
-0.01190948486328125,
-0.003055572509765625,
-0.0012311935424804688,
0.0114898681640625,
-0.046905517578125,
-0.056610107421875,
-0.0690307617... |
Augustin99/distilbert-base-uncased-finetuned-cola | 2023-06-01T03:33:49.000Z | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:glue",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | text-classification | Augustin99 | null | null | Augustin99/distilbert-base-uncased-finetuned-cola | 0 | 2 | transformers | 2023-06-01T02:51:36 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- matthews_correlation
model-index:
- name: distilbert-base-uncased-finetuned-cola
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: glue
type: glue
config: cola
split: validation
args: cola
metrics:
- name: Matthews Correlation
type: matthews_correlation
value: 0.5294395294021531
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-cola
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the glue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5668
- Matthews Correlation: 0.5294
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Matthews Correlation |
|:-------------:|:-----:|:----:|:---------------:|:--------------------:|
| 0.5286 | 1.0 | 535 | 0.5356 | 0.4033 |
| 0.3541 | 2.0 | 1070 | 0.5061 | 0.4858 |
| 0.2383 | 3.0 | 1605 | 0.5668 | 0.5294 |
| 0.1799 | 4.0 | 2140 | 0.7793 | 0.4925 |
| 0.1372 | 5.0 | 2675 | 0.8256 | 0.5056 |
### Framework versions
- Transformers 4.29.0
- Pytorch 2.0.1+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
| 2,042 | [
[
-0.0224456787109375,
-0.04962158203125,
0.01190948486328125,
0.0189056396484375,
-0.023193359375,
-0.00875091552734375,
-0.004924774169921875,
-0.0027828216552734375,
0.0229644775390625,
0.010284423828125,
-0.045013427734375,
-0.034942626953125,
-0.06201171875,
... |
exbow/TinyStories-wikitrain-33m-ethan | 2023-06-01T06:26:02.000Z | [
"transformers",
"pytorch",
"tensorboard",
"gpt_neo",
"text-generation",
"generated_from_trainer",
"endpoints_compatible",
"region:us"
] | text-generation | exbow | null | null | exbow/TinyStories-wikitrain-33m-ethan | 0 | 2 | transformers | 2023-06-01T03:19:18 | ---
tags:
- generated_from_trainer
model-index:
- name: TinyStories-wikitrain-33m-ethan
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# TinyStories-wikitrain-33m-ethan
This model is a fine-tuned version of [roneneldan/TinyStories-33M](https://huggingface.co/roneneldan/TinyStories-33M) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 6.3716
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 6.5842 | 1.0 | 2334 | 6.5360 |
| 6.4139 | 2.0 | 4668 | 6.4101 |
| 6.3566 | 3.0 | 7002 | 6.3716 |
### Framework versions
- Transformers 4.30.0.dev0
- Pytorch 2.0.1+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
| 1,385 | [
[
-0.035003662109375,
-0.033203125,
0.01409912109375,
0.004322052001953125,
-0.023529052734375,
-0.04290771484375,
-0.01241302490234375,
-0.0160064697265625,
0.01093292236328125,
0.01947021484375,
-0.059600830078125,
-0.038665771484375,
-0.036468505859375,
-0.... |
wesleyacheng/twitter-emotion-classification-with-bert | 2023-06-08T00:04:58.000Z | [
"transformers",
"pytorch",
"safetensors",
"distilbert",
"text-classification",
"en",
"dataset:tweet_eval",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | text-classification | wesleyacheng | null | null | wesleyacheng/twitter-emotion-classification-with-bert | 0 | 2 | transformers | 2023-06-01T03:27:25 | ---
license: apache-2.0
datasets:
- tweet_eval
language:
- en
metrics:
- accuracy
- f1
pipeline_tag: text-classification
widget:
- text: Yay!
example_title: Joy Example
- text: There is no meaning in life.
example_title: Sadness Example
- text: I hate you!
example_title: Anger Example
---
First posted in my [Kaggle notebook](https://www.kaggle.com/code/wesleyacheng/twitter-emotion-classification-with-bert).
Hello, I'm **Wesley**, nice to meet you! 👋
While I was making my **[Angry Birds Classifier](https://www.kaggle.com/code/wesleyacheng/angry-birds-classifier)** to classify whether tweets are angry or not, I thought: why not add **two** more emotions, **Joy and Sadness**, into the mix!
Here I created a **Multiclass Text Classifier** that classifies tweets as expressing **JOY, SADNESS, or ANGER**. You can try it with the `pipeline` API, as sketched below.
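A minimal sketch (the printed label is illustrative):

```python
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="wesleyacheng/twitter-emotion-classification-with-bert",
)
print(classifier("Yay!"))  # e.g. [{'label': 'joy', 'score': 0.98}]
```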
I used the [Twitter Emotion Dataset](https://huggingface.co/datasets/tweet_eval/viewer/emotion/train) and [BERT](https://huggingface.co/distilbert-base-uncased) to do [Transfer Learning](https://en.wikipedia.org/wiki/Transfer_learning) with [PyTorch](https://pytorch.org) and [HuggingFace](https://huggingface.co). | 1,136 | [
[
-0.023895263671875,
-0.0249786376953125,
0.0204620361328125,
0.053680419921875,
-0.0196990966796875,
0.023956298828125,
-0.01201629638671875,
-0.046966552734375,
0.0196990966796875,
-0.02288818359375,
-0.03009033203125,
-0.032928466796875,
-0.06317138671875,
... |
TigerResearch/tigerbot-7b-sft-v1-4bit | 2023-08-10T08:43:46.000Z | [
"transformers",
"bloom",
"text-generation",
"license:apache-2.0",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | TigerResearch | null | null | TigerResearch/tigerbot-7b-sft-v1-4bit | 6 | 2 | transformers | 2023-06-01T03:38:20 | ---
license: apache-2.0
---
<div style="width: 100%;">
<img src="https://github.com/TigerResearch/TigerBot/blob/main/image/logo_core.png" alt="TigerBot" style="width: 20%; display: block; margin: auto;">
</div>
<p align="center">
<font face="黑体" size="5"> A cutting-edge foundation for your very own LLM. </font>
</p>
<p align="center">
🌐 <a href="https://tigerbot.com/" target="_blank">TigerBot</a> • 🤗 <a href="https://huggingface.co/TigerResearch" target="_blank">Hugging Face</a>
</p>
This is a 4-bit GPTQ version of the [Tigerbot 7B sft](https://huggingface.co/TigerResearch/tigerbot-7b-sft).
It was quantized to 4bit using: https://github.com/TigerResearch/TigerBot/tree/main/gptq
## How to download and use this model on GitHub: https://github.com/TigerResearch/TigerBot
Here are the commands to clone the TigerBot repository and install its dependencies.
```
conda create --name tigerbot python=3.8
conda activate tigerbot
conda install pytorch torchvision torchaudio pytorch-cuda=11.7 -c pytorch -c nvidia
git clone https://github.com/TigerResearch/TigerBot
cd TigerBot
pip install -r requirements.txt
```
Inference with command line interface
```
cd TigerBot/gptq
CUDA_VISIBLE_DEVICES=0 python tigerbot_infer.py TigerResearch/tigerbot-7b-sft-4bit-128g --wbits 4 --groupsize 128 --load TigerResearch/tigerbot-7b-sft-4bit-128g/tigerbot-7b-4bit-128g.pt
```
| 1,349 | [
[
-0.04364013671875,
-0.042724609375,
0.03814697265625,
0.01297760009765625,
-0.03955078125,
0.01316070556640625,
0.01320648193359375,
-0.018096923828125,
0.033050537109375,
0.017974853515625,
-0.039306640625,
-0.0263671875,
-0.0158233642578125,
0.000774383544... |
jojoUla/bert-large-cased-sigir-support-refute-no-label-40-2nd-test-LR10-8-fast-21 | 2023-06-01T07:18:54.000Z | [
"transformers",
"pytorch",
"tensorboard",
"bert",
"fill-mask",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | fill-mask | jojoUla | null | null | jojoUla/bert-large-cased-sigir-support-refute-no-label-40-2nd-test-LR10-8-fast-21 | 0 | 2 | transformers | 2023-06-01T04:17:08 | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: bert-large-cased-sigir-support-refute-no-label-40-2nd-test-LR10-8-fast-21
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-large-cased-sigir-support-refute-no-label-40-2nd-test-LR10-8-fast-21
This model is a fine-tuned version of [jojoUla/bert-large-cased-sigir-support-refute-no-label-40](https://huggingface.co/jojoUla/bert-large-cased-sigir-support-refute-no-label-40) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0284
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 4e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 8.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 5.0732 | 1.0 | 1 | 1.2125 |
| 3.4503 | 2.0 | 2 | 0.9209 |
| 2.1567 | 3.0 | 3 | 1.2078 |
| 1.9993 | 4.0 | 4 | 0.0449 |
| 1.1486 | 5.0 | 5 | 0.0010 |
| 1.8055 | 6.0 | 6 | 1.4200 |
| 2.687 | 7.0 | 7 | 7.9692 |
| 0.6934 | 8.0 | 8 | 0.0001 |
### Framework versions
- Transformers 4.29.2
- Pytorch 2.0.1+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
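Since the checkpoint is published for the fill-mask task, querying it follows the usual pattern; this is a sketch with an illustrative input sentence.
```python
from transformers import pipeline

fill_mask = pipeline(
    "fill-mask",
    model="jojoUla/bert-large-cased-sigir-support-refute-no-label-40-2nd-test-LR10-8-fast-21",
)
# BERT-style models use the [MASK] placeholder token.
for prediction in fill_mask("The evidence [MASK] the claim."):
    print(prediction["token_str"], round(prediction["score"], 3))
```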
| 1,807 | [
[
-0.04150390625,
-0.04071044921875,
0.017791748046875,
0.01105499267578125,
-0.0182952880859375,
-0.036102294921875,
-0.01324462890625,
-0.0102691650390625,
0.0103759765625,
0.0281219482421875,
-0.05133056640625,
-0.037811279296875,
-0.05401611328125,
-0.0119... |
vagrawal787/trip-review-test-2 | 2023-06-01T05:14:40.000Z | [
"transformers",
"pytorch",
"tensorboard",
"bert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | text-classification | vagrawal787 | null | null | vagrawal787/trip-review-test-2 | 0 | 2 | transformers | 2023-06-01T04:58:16 | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: trip-review-test-2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# trip-review-test-2
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the None dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
### Framework versions
- Transformers 4.28.0
- Pytorch 2.0.1+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
| 1,027 | [
[
-0.038238525390625,
-0.0521240234375,
0.0189971923828125,
0.0195770263671875,
-0.034332275390625,
-0.0341796875,
-0.00971221923828125,
-0.01448822021484375,
0.0188751220703125,
0.03497314453125,
-0.0540771484375,
-0.03680419921875,
-0.035919189453125,
-0.010... |
Retrial9842/dqn-SpaceInvadersNoFrameskip-v4 | 2023-06-01T05:47:13.000Z | [
"stable-baselines3",
"SpaceInvadersNoFrameskip-v4",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | Retrial9842 | null | null | Retrial9842/dqn-SpaceInvadersNoFrameskip-v4 | 0 | 2 | stable-baselines3 | 2023-06-01T05:46:35 | ---
library_name: stable-baselines3
tags:
- SpaceInvadersNoFrameskip-v4
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: DQN
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: SpaceInvadersNoFrameskip-v4
type: SpaceInvadersNoFrameskip-v4
metrics:
- type: mean_reward
value: 552.00 +/- 203.15
name: mean_reward
verified: false
---
# **DQN** Agent playing **SpaceInvadersNoFrameskip-v4**
This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
Install the RL Zoo (with SB3 and SB3-Contrib):
```bash
pip install rl_zoo3
```
```
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga Retrial9842 -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do:
```
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga Retrial9842 -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
## Training (with the RL Zoo)
```
python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga Retrial9842
```
## Hyperparameters
```python
OrderedDict([('batch_size', 64),
('buffer_size', 100000),
('env_wrapper',
['stable_baselines3.common.atari_wrappers.AtariWrapper']),
('exploration_final_eps', 0.01),
('exploration_fraction', 0.1),
('frame_stack', 4),
('gradient_steps', 1),
('learning_rate', 0.0001),
('learning_starts', 100000),
('n_timesteps', 1000000.0),
('optimize_memory_usage', False),
('policy', 'CnnPolicy'),
('target_update_interval', 1000),
('train_freq', 4),
('normalize', False)])
```
# Environment Arguments
```python
{'render_mode': 'rgb_array'}
```
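Besides the RL Zoo CLI above, the checkpoint can also be loaded programmatically. This is a sketch using the `huggingface_sb3` helper; the artifact filename is an assumption based on the usual RL Zoo naming convention.
```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import DQN

# Filename assumed from the RL Zoo convention; adjust if the repo differs.
checkpoint = load_from_hub(
    repo_id="Retrial9842/dqn-SpaceInvadersNoFrameskip-v4",
    filename="dqn-SpaceInvadersNoFrameskip-v4.zip",
)
model = DQN.load(checkpoint)
# Evaluation requires an environment built with the same AtariWrapper
# preprocessing and 4-frame stacking listed in the hyperparameters above.
```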
| 2,768 | [
[
-0.04266357421875,
-0.039794921875,
0.019256591796875,
0.024383544921875,
-0.01070404052734375,
-0.0172576904296875,
0.0100860595703125,
-0.01303863525390625,
0.01332855224609375,
0.022735595703125,
-0.0723876953125,
-0.034515380859375,
-0.02520751953125,
-0... |
SHENMU007/neunit_BASE_V7 | 2023-06-05T06:35:07.000Z | [
"transformers",
"pytorch",
"tensorboard",
"speecht5",
"text-to-audio",
"1.1.0",
"generated_from_trainer",
"zh",
"dataset:facebook/voxpopuli",
"license:mit",
"endpoints_compatible",
"region:us"
] | text-to-audio | SHENMU007 | null | null | SHENMU007/neunit_BASE_V7 | 0 | 2 | transformers | 2023-06-01T06:11:29 | ---
language:
- zh
license: mit
tags:
- 1.1.0
- generated_from_trainer
datasets:
- facebook/voxpopuli
model-index:
- name: SpeechT5 TTS Dutch neunit
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# SpeechT5 TTS Dutch neunit
This model is a fine-tuned version of [microsoft/speecht5_tts](https://huggingface.co/microsoft/speecht5_tts) on the VoxPopuli dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 4000
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.29.2
- Pytorch 2.0.1+cu117
- Datasets 2.12.0
- Tokenizers 0.13.3
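For reference, inference with a SpeechT5 TTS fine-tune generally follows the pattern below. This is a hedged sketch based on the base model's documented usage; the input text is illustrative, and the zero speaker embedding is only a stand-in (use a real x-vector for natural speech).
```python
import torch
from transformers import SpeechT5Processor, SpeechT5ForTextToSpeech, SpeechT5HifiGan

processor = SpeechT5Processor.from_pretrained("SHENMU007/neunit_BASE_V7")
model = SpeechT5ForTextToSpeech.from_pretrained("SHENMU007/neunit_BASE_V7")
vocoder = SpeechT5HifiGan.from_pretrained("microsoft/speecht5_hifigan")

inputs = processor(text="你好,世界。", return_tensors="pt")
# SpeechT5 conditions on a 512-dim x-vector speaker embedding;
# a zero vector is a placeholder here.
speaker_embeddings = torch.zeros((1, 512))
speech = model.generate_speech(inputs["input_ids"], speaker_embeddings, vocoder=vocoder)
# `speech` is a 1-D waveform tensor at 16 kHz.
```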
| 1,246 | [
[
-0.03460693359375,
-0.05291748046875,
-0.0037441253662109375,
0.01134490966796875,
-0.0254364013671875,
-0.020538330078125,
-0.0173797607421875,
-0.027374267578125,
0.01081085205078125,
0.020751953125,
-0.0419921875,
-0.05059814453125,
-0.042449951171875,
0.... |
notaphoenix/shakespeare_classifier_model | 2023-09-27T12:00:41.000Z | [
"transformers",
"pytorch",
"safetensors",
"distilbert",
"text-classification",
"en",
"dataset:notaphoenix/shakespeare_dataset",
"license:mit",
"endpoints_compatible",
"region:us"
] | text-classification | notaphoenix | null | null | notaphoenix/shakespeare_classifier_model | 0 | 2 | transformers | 2023-06-01T06:41:40 | ---
license: mit
datasets:
- notaphoenix/shakespeare_dataset
language:
- en
metrics:
- f1
pipeline_tag: text-classification
---
# Shakespeare/Modern English DistilBert-base
# Description ℹ
With this model, you can classify whether an English sentence is written in a *Shakespearean* style or a *modern* style.
The model is a fine-tuned checkpoint of [DistilBERT-base-uncased](https://huggingface.co/distilbert-base-uncased).
# Application 🚀
## Return all labels
```python
from transformers import pipeline
classifier = pipeline("text-classification", model="notaphoenix/shakespeare_classifier_model", top_k=None)
classifier("This is a modern sentence!")
```
```json
[[
{'label': 'modern', 'score': 0.901931643486023},
{'label': 'shakespearean', 'score': 0.09806833416223526}
]]
```
## Return top label
```python
from transformers import pipeline
classifier = pipeline("text-classification", model="notaphoenix/shakespeare_classifier_model")
classifier("This is a modern sentence!")
```
```json
[
{'label': 'modern', 'score': 0.901931643486023}
]
```
| 1,047 | [
[
-0.00801849365234375,
-0.04119873046875,
0.010162353515625,
0.01393890380859375,
-0.01053619384765625,
0.0156707763671875,
-0.01012420654296875,
-0.00742340087890625,
0.0271759033203125,
0.0244598388671875,
-0.032379150390625,
-0.043609619140625,
-0.069763183593... |
poltextlab/xlm-roberta-large-dutch-budget-cap | 2023-07-04T17:40:37.000Z | [
"transformers",
"pytorch",
"xlm-roberta",
"text-classification",
"zero-shot-classification",
"nl",
"license:mit",
"endpoints_compatible",
"region:us"
] | text-classification | poltextlab | null | null | poltextlab/xlm-roberta-large-dutch-budget-cap | 0 | 2 | transformers | 2023-06-01T06:53:14 |
---
license: mit
language:
- nl
tags:
- zero-shot-classification
- text-classification
- pytorch
metrics:
- accuracy
- f1-score
---
# xlm-roberta-large-dutch-budget-cap
## Model description
An `xlm-roberta-large` model fine-tuned on Dutch training data containing texts from the `budget` domain, labelled with [major topic codes](https://www.comparativeagendas.net/pages/master-codebook) from the [Comparative Agendas Project](https://www.comparativeagendas.net/).
## How to use the model
#### Loading and tokenizing input data
```python
import pandas as pd
import numpy as np
from datasets import Dataset
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
Trainer, TrainingArguments)
CAP_NUM_DICT = {0: '1', 1: '2', 2: '3', 3: '4', 4: '5', 5: '6',
6: '7', 7: '8', 8: '9', 9: '10', 10: '12', 11: '13', 12: '14',
13: '15', 14: '16', 15: '17', 16: '18', 17: '19', 18: '20', 19:
'21', 20: '23', 21: '999'}
tokenizer = AutoTokenizer.from_pretrained('xlm-roberta-large')
num_labels = len(CAP_NUM_DICT)
MAXLEN = 256  # assumed maximum sequence length; set to the value used at training time

def tokenize_dataset(data: pd.DataFrame):
    tokenized = tokenizer(data["text"],
                          max_length=MAXLEN,
                          truncation=True,
                          padding="max_length")
    return tokenized

# `data` is a pandas DataFrame with a "text" column
hg_data = Dataset.from_pandas(data)
dataset = hg_data.map(tokenize_dataset, batched=True, remove_columns=hg_data.column_names)
```
#### Inference using the Trainer class
```python
model = AutoModelForSequenceClassification.from_pretrained('poltextlab/xlm-roberta-large-dutch-budget-cap',
num_labels=num_labels,
problem_type="multi_label_classification",
ignore_mismatched_sizes=True
)
training_args = TrainingArguments(
output_dir='.',
per_device_train_batch_size=8,
per_device_eval_batch_size=8
)
trainer = Trainer(
model=model,
args=training_args
)
probs = trainer.predict(test_dataset=dataset).predictions
predicted = pd.DataFrame(np.argmax(probs, axis=1)).replace({0: CAP_NUM_DICT}).rename(
columns={0: 'predicted'}).reset_index(drop=True)
```
### Fine-tuning procedure
`xlm-roberta-large-dutch-budget-cap` was fine-tuned using the Hugging Face Trainer class with the following hyperparameters:
```python
training_args = TrainingArguments(
output_dir=f"../model/{model_dir}/tmp/",
logging_dir=f"../logs/{model_dir}/",
logging_strategy='epoch',
num_train_epochs=10,
per_device_train_batch_size=8,
per_device_eval_batch_size=8,
learning_rate=5e-06,
seed=42,
save_strategy='epoch',
evaluation_strategy='epoch',
save_total_limit=1,
load_best_model_at_end=True
)
```
We also incorporated an EarlyStoppingCallback in the process with a patience of 2 epochs.
## Model performance
The model was evaluated on a test set of 395 examples (10% of the available data).<br>
Model accuracy is **0.83**.
| label | precision | recall | f1-score | support |
|:-------------|------------:|---------:|-----------:|----------:|
| 0 | 1 | 1 | 1 | 1 |
| 1 | 0.75 | 0.92 | 0.83 | 13 |
| 2 | 0 | 0 | 0 | 0 |
| 3 | 0 | 0 | 0 | 0 |
| 4 | 0.71 | 0.68 | 0.69 | 25 |
| 5 | 0.78 | 0.78 | 0.78 | 9 |
| 6 | 0 | 0 | 0 | 0 |
| 7 | 0 | 0 | 0 | 0 |
| 8 | 0.85 | 0.69 | 0.76 | 16 |
| 9 | 0 | 0 | 0 | 0 |
| 10 | 0.86 | 0.83 | 0.84 | 65 |
| 11 | 0 | 0 | 0 | 3 |
| 12 | 0.8 | 0.73 | 0.76 | 11 |
| 13 | 0.84 | 0.73 | 0.78 | 22 |
| 14 | 1 | 0.67 | 0.8 | 3 |
| 15 | 0.6 | 0.38 | 0.46 | 8 |
| 16 | 0 | 0 | 0 | 2 |
| 17 | 0.7 | 0.54 | 0.61 | 13 |
| 18 | 0.86 | 0.94 | 0.89 | 204 |
| 19 | 0 | 0 | 0 | 0 |
| macro avg | 0.49 | 0.44 | 0.46 | 395 |
| weighted avg | 0.82 | 0.83 | 0.82 | 395 |
## Inference platform
This model is used by the [CAP Babel Machine](https://babel.poltextlab.com), an open-source and free natural language processing tool, designed to simplify and speed up projects for comparative research.
## Cooperation
Model performance can be significantly improved by extending our training sets. We appreciate every submission of CAP-coded corpora (of any domain and language) at poltextlab{at}poltextlab{dot}com or by using the [CAP Babel Machine](https://babel.poltextlab.com).
## Debugging and issues
This architecture uses the `sentencepiece` tokenizer. To run the model with `transformers` versions earlier than 4.27, you need to install `sentencepiece` manually.
If you encounter a `RuntimeError` when loading the model using the `from_pretrained()` method, adding `ignore_mismatched_sizes=True` should solve the issue.
| 5,547 | [
[
-0.043853759765625,
-0.04498291015625,
0.006710052490234375,
0.0243682861328125,
-0.0061187744140625,
-0.007778167724609375,
-0.0269927978515625,
-0.0265655517578125,
0.010894775390625,
0.0233612060546875,
-0.036407470703125,
-0.0428466796875,
-0.054443359375,
... |
seungkim1313/distilbert-base-uncased-finetuned-emotion | 2023-06-01T10:06:00.000Z | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:emotion",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | text-classification | seungkim1313 | null | null | seungkim1313/distilbert-base-uncased-finetuned-emotion | 0 | 2 | transformers | 2023-06-01T07:21:00 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-emotion
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: emotion
type: emotion
config: split
split: validation
args: split
metrics:
- name: Accuracy
type: accuracy
value: 0.9205
- name: F1
type: f1
value: 0.9206572337666142
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2256
- Accuracy: 0.9205
- F1: 0.9207
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.8468 | 1.0 | 250 | 0.3451 | 0.897 | 0.8924 |
| 0.2629 | 2.0 | 500 | 0.2256 | 0.9205 | 0.9207 |
### Framework versions
- Transformers 4.29.2
- Pytorch 2.0.1+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
| 1,848 | [
[
-0.038055419921875,
-0.041473388671875,
0.0150909423828125,
0.021514892578125,
-0.02642822265625,
-0.01947021484375,
-0.01314544677734375,
-0.0085296630859375,
0.01016998291015625,
0.00848388671875,
-0.056121826171875,
-0.051300048828125,
-0.06005859375,
-0.... |
jangmin/whisper-small-ko-normalized-debug | 2023-06-01T09:00:19.000Z | [
"transformers",
"pytorch",
"tensorboard",
"whisper",
"automatic-speech-recognition",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | jangmin | null | null | jangmin/whisper-small-ko-normalized-debug | 0 | 2 | transformers | 2023-06-01T08:35:29 | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- wer
model-index:
- name: whisper-small-ko-normalized-debug
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# whisper-small-ko-normalized-debug
This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6194
- Wer: 0.3928
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 100
- num_epochs: 3
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| No log | 1.0 | 4 | 0.6447 | 0.4031 |
| No log | 2.0 | 8 | 0.6389 | 0.3992 |
| 0.4891 | 3.0 | 12 | 0.6194 | 0.3928 |
### Framework versions
- Transformers 4.28.0.dev0
- Pytorch 1.13.1+cu117
- Datasets 2.11.0
- Tokenizers 0.13.2
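For completeness, a minimal transcription sketch with the `transformers` ASR pipeline; the audio file is a placeholder path and default decoding settings are used.
```python
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="jangmin/whisper-small-ko-normalized-debug",
)
# "sample.wav" is a placeholder for a Korean speech recording.
result = asr("sample.wav")
print(result["text"])
```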
| 1,546 | [
[
-0.026702880859375,
-0.047332763671875,
0.0111236572265625,
0.0032558441162109375,
-0.030242919921875,
-0.046234130859375,
-0.0221099853515625,
-0.023040771484375,
0.0124359130859375,
0.0203704833984375,
-0.04986572265625,
-0.043670654296875,
-0.047210693359375,... |
emresvd/u160 | 2023-06-01T09:26:53.000Z | [
"keras",
"region:us"
] | null | emresvd | null | null | emresvd/u160 | 0 | 2 | keras | 2023-06-01T09:26:50 | ---
library_name: keras
---
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
| Hyperparameters | Value |
| :-- | :-- |
| name | Adam |
| weight_decay | None |
| clipnorm | None |
| global_clipnorm | None |
| clipvalue | None |
| use_ema | False |
| ema_momentum | 0.99 |
| ema_overwrite_frequency | None |
| jit_compile | False |
| is_legacy_optimizer | False |
| learning_rate | 0.0010000000474974513 |
| beta_1 | 0.9 |
| beta_2 | 0.999 |
| epsilon | 1e-07 |
| amsgrad | False |
| training_precision | float32 |
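The table maps onto a stock Keras Adam configuration; a minimal sketch of reconstructing it is below. The model architecture itself is not documented in this card, so the compile call is hypothetical.
```python
import tensorflow as tf

# Adam optimizer matching the hyperparameters listed above.
optimizer = tf.keras.optimizers.Adam(
    learning_rate=0.001,
    beta_1=0.9,
    beta_2=0.999,
    epsilon=1e-07,
    amsgrad=False,
)
# model.compile(optimizer=optimizer, loss="mse")  # `model` is a placeholder
```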
## Model Plot
<details>
<summary>View Model Plot</summary>

</details> | 841 | [
[
-0.037200927734375,
-0.03997802734375,
0.031890869140625,
0.00814056396484375,
-0.043243408203125,
-0.0177154541015625,
0.0109710693359375,
-0.0033969879150390625,
0.0204620361328125,
0.030548095703125,
-0.043731689453125,
-0.051177978515625,
-0.03997802734375,
... |
jayanta/distilbert-base-uncased-english-sentweet-derogatory | 2023-06-01T20:09:37.000Z | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | text-classification | jayanta | null | null | jayanta/distilbert-base-uncased-english-sentweet-derogatory | 0 | 2 | transformers | 2023-06-01T11:20:59 | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
- precision
- recall
- f1
model-index:
- name: distilbert-base-uncased-english-sentweet-derogatory
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-english-sentweet-derogatory
This model is a fine-tuned version of [bhadresh-savani/distilbert-base-uncased-emotion](https://huggingface.co/bhadresh-savani/distilbert-base-uncased-emotion) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8426
- Accuracy: 0.7917
- Precision: 0.8038
- Recall: 0.8018
- F1: 0.7916
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 6
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Precision | Recall | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:---------:|:------:|:------:|
| No log | 1.0 | 81 | 0.4182 | 0.8194 | 0.8363 | 0.8314 | 0.8193 |
| No log | 2.0 | 162 | 0.4585 | 0.8125 | 0.8394 | 0.8273 | 0.8119 |
| No log | 3.0 | 243 | 0.4828 | 0.8125 | 0.8394 | 0.8273 | 0.8119 |
| No log | 4.0 | 324 | 0.5100 | 0.8125 | 0.8198 | 0.8207 | 0.8125 |
| No log | 5.0 | 405 | 0.7268 | 0.8021 | 0.8029 | 0.8061 | 0.8017 |
| No log | 6.0 | 486 | 0.8426 | 0.7917 | 0.8038 | 0.8018 | 0.7916 |
### Framework versions
- Transformers 4.24.0
- Pytorch 1.13.1+cu117
- Datasets 2.6.1
- Tokenizers 0.11.0
| 2,080 | [
[
-0.032623291015625,
-0.039459228515625,
0.007556915283203125,
0.0146484375,
-0.019775390625,
-0.01181793212890625,
-0.0059661865234375,
-0.0102081298828125,
0.0185546875,
0.01415252685546875,
-0.051300048828125,
-0.05340576171875,
-0.05328369140625,
-0.00844... |
golaxy/gogpt-math-560m | 2023-06-01T14:17:30.000Z | [
"transformers",
"pytorch",
"bloom",
"text-generation",
"zh",
"dataset:BelleGroup/train_2M_CN",
"dataset:BelleGroup/train_3.5M_CN",
"dataset:BelleGroup/train_1M_CN",
"dataset:BelleGroup/train_0.5M_CN",
"dataset:BelleGroup/school_math_0.25M",
"license:apache-2.0",
"endpoints_compatible",
"text... | text-generation | golaxy | null | null | golaxy/gogpt-math-560m | 0 | 2 | transformers | 2023-06-01T13:19:14 | ---
license: apache-2.0
datasets:
- BelleGroup/train_2M_CN
- BelleGroup/train_3.5M_CN
- BelleGroup/train_1M_CN
- BelleGroup/train_0.5M_CN
- BelleGroup/school_math_0.25M
language:
- zh
---
## GoGPT
BLOOM fine-tuned on Chinese instruction data

> One training epoch is enough; the second and third epochs bring little further improvement
- 🚀 Diverse instruction data
- 🚀 Filtered, high-quality Chinese data
| Model name | Parameters | Model link |
|------------|--------|------|
| gogpt-560m | 560M parameters | 🤗[golaxy/gogpt-560m](https://huggingface.co/golaxy/gogpt-560m) |
| gogpt-3b | 3B parameters | 🤗[golaxy/gogpt-3b](https://huggingface.co/golaxy/gogpt-3b) |
| gogpt-7b | 7B parameters | 🤗[golaxy/gogpt-7b](https://huggingface.co/golaxy/gogpt-7b) |
| gogpt-math-560m | 560M parameters | 🤗[gogpt-math-560m](https://huggingface.co/golaxy/gogpt-math-560m) |
## Test results






## TODO
- Run RLHF training
- Add Chinese-English parallel corpora later
## Acknowledgements
- [@hz大佬-zero_nlp](https://github.com/yuanzhoulvpi2017/zero_nlp)
- [stanford_alpaca](https://github.com/tatsu-lab/stanford_alpaca)
- [Belle data](https://huggingface.co/BelleGroup)
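The card ships no inference code; a hedged sketch of prompting the model with the text-generation pipeline follows, with an illustrative prompt and settings.
```python
from transformers import pipeline

generator = pipeline("text-generation", model="golaxy/gogpt-math-560m")
# Illustrative prompt: "Xiao Ming has 3 apples and buys 5 more; how many does he have now?"
prompt = "小明有3个苹果,又买了5个,他现在一共有几个苹果?"
print(generator(prompt, max_new_tokens=64, do_sample=False)[0]["generated_text"])
```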
| 1,139 | [
[
-0.0277099609375,
-0.04852294921875,
0.005641937255859375,
0.049957275390625,
-0.037628173828125,
-0.00998687744140625,
-0.00598907470703125,
-0.040252685546875,
0.04412841796875,
0.01885986328125,
-0.0341796875,
-0.03692626953125,
-0.04278564453125,
-0.0119... |
Sandiago21/llama-13b-hf-prompt-answering | 2023-06-12T09:30:27.000Z | [
"transformers",
"pytorch",
"llama",
"text-generation",
"decapoda-research-13b-hf",
"prompt answering",
"peft",
"en",
"license:other",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | Sandiago21 | null | null | Sandiago21/llama-13b-hf-prompt-answering | 1 | 2 | transformers | 2023-06-01T13:53:12 | ---
license: other
language:
- en
library_name: transformers
pipeline_tag: text-generation
tags:
- llama
- decapoda-research-13b-hf
- prompt answering
- peft
---
## Model Card for Model ID
This repository contains a LLaMA-13B further fine-tuned model on conversations and question answering prompts.
⚠️ **I used [LLaMA-13B-hf](https://huggingface.co/decapoda-research/llama-13b-hf) as a base model, so this model is for research purposes only (see the [license](https://huggingface.co/decapoda-research/llama-13b-hf/blob/main/LICENSE))**
## Model Details
Anyone can use (ask prompts) and play with the model using the pre-existing Jupyter Notebook in the **notebooks** folder. The Jupyter Notebook contains example code to load the model and ask prompts to it, as well as example prompts to get you started.
### Model Description
The decapoda-research/llama-13b-hf model was finetuned on conversations and question answering prompts.
**Developed by:** [More Information Needed]
**Shared by:** [More Information Needed]
**Model type:** Causal LM
**Language(s) (NLP):** English, multilingual
**License:** Research
**Finetuned from model:** decapoda-research/llama-13b-hf
## Model Sources [optional]
**Repository:** [More Information Needed]
**Paper:** [More Information Needed]
**Demo:** [More Information Needed]
## Uses
The model can be used for prompt answering
### Direct Use
The model can be used for prompt answering
### Downstream Use
Generating text and prompt answering
## Recommendations
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
# Usage
## Creating prompt
The model was trained on the following kind of prompt:
```python
def generate_prompt(instruction: str, input_ctxt: str = None) -> str:
if input_ctxt:
return f"""Below is an instruction that describes a task, paired with an input that provides further context. Write a response that appropriately completes the request.
### Instruction:
{instruction}
### Input:
{input_ctxt}
### Response:"""
else:
return f"""Below is an instruction that describes a task. Write a response that appropriately completes the request.
### Instruction:
{instruction}
### Response:"""
```
## How to Get Started with the Model
Use the code below to get started with the model.
1. You can git clone the repo, which also contains the artifacts for the base model for simplicity and completeness, and run the following code snippet to load the model:
```python
import torch
from peft import PeftConfig, PeftModel
from transformers import GenerationConfig, LlamaTokenizer, LlamaForCausalLM
MODEL_NAME = "Sandiago21/llama-13b-hf-prompt-answering"
config = PeftConfig.from_pretrained(MODEL_NAME)
# Setting the path to look at your repo directory, assuming that you are at that directory when running this script
config.base_model_name_or_path = "decapoda-research/llama-13b-hf/"
model = LlamaForCausalLM.from_pretrained(
config.base_model_name_or_path,
load_in_8bit=True,
torch_dtype=torch.float16,
device_map="auto",
)
tokenizer = LlamaTokenizer.from_pretrained(MODEL_NAME)
model = PeftModel.from_pretrained(model, MODEL_NAME)
generation_config = GenerationConfig(
temperature=0.2,
top_p=0.75,
top_k=40,
num_beams=4,
max_new_tokens=32,
)
model.eval()
if torch.__version__ >= "2":
model = torch.compile(model)
```
### Example of Usage
```python
instruction = "What is the capital city of Greece and with which countries does Greece border?"
input_ctxt = None # For some tasks, you can provide an input context to help the model generate a better response.
prompt = generate_prompt(instruction, input_ctxt)
input_ids = tokenizer(prompt, return_tensors="pt").input_ids
input_ids = input_ids.to(model.device)
with torch.no_grad():
outputs = model.generate(
input_ids=input_ids,
generation_config=generation_config,
return_dict_in_generate=True,
output_scores=True,
)
response = tokenizer.decode(outputs.sequences[0], skip_special_tokens=True)
print(response)
>>> The capital city of Greece is Athens and it borders Turkey, Bulgaria, Macedonia, Albania, and the Aegean Sea.
```
2. You can directly call the model from HuggingFace using the following code snippet:
```python
import torch
from peft import PeftConfig, PeftModel
from transformers import GenerationConfig, LlamaTokenizer, LlamaForCausalLM
MODEL_NAME = "Sandiago21/llama-13b-hf-prompt-answering"
BASE_MODEL = "decapoda-research/llama-13b-hf"
config = PeftConfig.from_pretrained(MODEL_NAME)
model = LlamaForCausalLM.from_pretrained(
BASE_MODEL,
load_in_8bit=True,
torch_dtype=torch.float16,
device_map="auto",
)
tokenizer = LlamaTokenizer.from_pretrained(MODEL_NAME)
model = PeftModel.from_pretrained(model, MODEL_NAME)
generation_config = GenerationConfig(
temperature=0.2,
top_p=0.75,
top_k=40,
num_beams=4,
max_new_tokens=32,
)
model.eval()
if torch.__version__ >= "2":
model = torch.compile(model)
```
### Example of Usage
```python
instruction = "What is the capital city of Greece and with which countries does Greece border?"
input_ctxt = None # For some tasks, you can provide an input context to help the model generate a better response.
prompt = generate_prompt(instruction, input_ctxt)
input_ids = tokenizer(prompt, return_tensors="pt").input_ids
input_ids = input_ids.to(model.device)
with torch.no_grad():
outputs = model.generate(
input_ids=input_ids,
generation_config=generation_config,
return_dict_in_generate=True,
output_scores=True,
)
response = tokenizer.decode(outputs.sequences[0], skip_special_tokens=True)
print(response)
>>> The capital city of Greece is Athens and it borders Turkey, Bulgaria, Macedonia, Albania, and the Aegean Sea.
```
## Training Details
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 8
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 50
- num_epochs: 2
- mixed_precision_training: Native AMP
### Framework versions
- Transformers 4.28.1
- Pytorch 2.0.0+cu117
- Datasets 2.12.0
- Tokenizers 0.12.1
### Training Data
The decapoda-research/llama-13b-hf model was fine-tuned on conversations and question answering data.
### Training Procedure
The decapoda-research/llama-13b-hf model was further trained and fine-tuned on question answering and prompt data for 1 epoch (approximately 10 hours of training on a single GPU).
## Model Architecture and Objective
The model is based on the decapoda-research/llama-13b-hf model, with adapters fine-tuned on top of it on conversations and question answering data.
| 6,992 | [
[
-0.0362548828125,
-0.071044921875,
0.044281005859375,
0.0125732421875,
-0.0211334228515625,
-0.00656890869140625,
-0.01174163818359375,
-0.024658203125,
0.0114898681640625,
0.0307159423828125,
-0.05120849609375,
-0.035797119140625,
-0.038177490234375,
0.0115... |
Alexandra2398/deberta_amazon_reviews_v1 | 2023-06-01T18:03:55.000Z | [
"transformers",
"pytorch",
"tensorboard",
"deberta-v2",
"text-classification",
"generated_from_trainer",
"license:mit",
"endpoints_compatible",
"region:us"
] | text-classification | Alexandra2398 | null | null | Alexandra2398/deberta_amazon_reviews_v1 | 0 | 2 | transformers | 2023-06-01T14:38:52 | ---
license: mit
tags:
- generated_from_trainer
model-index:
- name: deberta_amazon_reviews_v1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# deberta_amazon_reviews_v1
This model is a fine-tuned version of [patrickvonplaten/deberta_v3_amazon_reviews](https://huggingface.co/patrickvonplaten/deberta_v3_amazon_reviews) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 200
- num_epochs: 2
### Framework versions
- Transformers 4.29.2
- Pytorch 2.0.1+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
| 1,097 | [
[
-0.032806396484375,
-0.05023193359375,
0.0178680419921875,
0.0298004150390625,
-0.040283203125,
-0.03155517578125,
0.00852203369140625,
-0.0283355712890625,
0.0186767578125,
0.033233642578125,
-0.047943115234375,
-0.032073974609375,
-0.052398681640625,
-0.00... |
0xYuan/autotrain-b-63449135459 | 2023-06-01T14:50:08.000Z | [
"transformers",
"pytorch",
"safetensors",
"bert",
"text-classification",
"autotrain",
"zh",
"dataset:0xYuan/autotrain-data-b",
"co2_eq_emissions",
"endpoints_compatible",
"region:us"
] | text-classification | 0xYuan | null | null | 0xYuan/autotrain-b-63449135459 | 0 | 2 | transformers | 2023-06-01T14:42:45 | ---
tags:
- autotrain
- text-classification
language:
- zh
widget:
- text: "I love AutoTrain"
datasets:
- 0xYuan/autotrain-data-b
co2_eq_emissions:
emissions: 4.720376981365927
---
# Model Trained Using AutoTrain
- Problem type: Binary Classification
- Model ID: 63449135459
- CO2 Emissions (in grams): 4.7204
## Validation Metrics
- Loss: 0.375
- Accuracy: 0.852
- Precision: 0.866
- Recall: 0.893
- AUC: 0.906
- F1: 0.879
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoTrain"}' https://api-inference.huggingface.co/models/0xYuan/autotrain-b-63449135459
```
Or Python API:
```
from transformers import AutoModelForSequenceClassification, AutoTokenizer
model = AutoModelForSequenceClassification.from_pretrained("0xYuan/autotrain-b-63449135459", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("0xYuan/autotrain-b-63449135459", use_auth_token=True)
inputs = tokenizer("I love AutoTrain", return_tensors="pt")
outputs = model(**inputs)
``` | 1,091 | [
[
-0.02764892578125,
-0.02886962890625,
0.01363372802734375,
0.01090240478515625,
-0.0040435791015625,
-0.0005307197570800781,
0.006591796875,
-0.0152435302734375,
0.0006470680236816406,
0.01009368896484375,
-0.054168701171875,
-0.036346435546875,
-0.0618591308593... |
peanutacake/autotrain-ann_nl-63427135534 | 2023-06-01T18:28:28.000Z | [
"transformers",
"pytorch",
"safetensors",
"bert",
"token-classification",
"autotrain",
"nl",
"dataset:peanutacake/autotrain-data-ann_nl",
"co2_eq_emissions",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | token-classification | peanutacake | null | null | peanutacake/autotrain-ann_nl-63427135534 | 0 | 2 | transformers | 2023-06-01T18:27:23 | ---
tags:
- autotrain
- token-classification
language:
- nl
widget:
- text: "I love AutoTrain"
datasets:
- peanutacake/autotrain-data-ann_nl
co2_eq_emissions:
emissions: 0.18640961989795524
---
# Model Trained Using AutoTrain
- Problem type: Entity Extraction
- Model ID: 63427135534
- CO2 Emissions (in grams): 0.1864
## Validation Metrics
- Loss: 0.428
- Accuracy: 0.846
- Precision: 0.685
- Recall: 0.621
- F1: 0.652
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoTrain"}' https://api-inference.huggingface.co/models/peanutacake/autotrain-ann_nl-63427135534
```
Or Python API:
```
from transformers import AutoModelForTokenClassification, AutoTokenizer
model = AutoModelForTokenClassification.from_pretrained("peanutacake/autotrain-ann_nl-63427135534", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("peanutacake/autotrain-ann_nl-63427135534", use_auth_token=True)
inputs = tokenizer("I love AutoTrain", return_tensors="pt")
outputs = model(**inputs)
``` | 1,111 | [
[
-0.0243072509765625,
-0.039581298828125,
0.0160675048828125,
0.01554107666015625,
-0.004985809326171875,
0.0012636184692382812,
-0.004955291748046875,
-0.0154266357421875,
0.007152557373046875,
0.014007568359375,
-0.049774169921875,
-0.03729248046875,
-0.0620727... |
AG6019/reddit-comment-sentiment-final | 2023-06-01T19:54:57.000Z | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | text-classification | AG6019 | null | null | AG6019/reddit-comment-sentiment-final | 0 | 2 | transformers | 2023-06-01T18:49:42 | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: reddit-comment-sentiment-final
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# reddit-comment-sentiment-final
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2564
- Accuracy: 0.8971
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-06
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 6
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.5164 | 1.0 | 603 | 0.3938 | 0.8196 |
| 0.3583 | 2.0 | 1206 | 0.3110 | 0.8615 |
| 0.29 | 3.0 | 1809 | 0.2748 | 0.8843 |
| 0.2428 | 4.0 | 2412 | 0.2691 | 0.8884 |
| 0.2042 | 5.0 | 3015 | 0.2564 | 0.8971 |
| 0.1881 | 6.0 | 3618 | 0.2575 | 0.8963 |
### Framework versions
- Transformers 4.28.0
- Pytorch 2.0.1+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
| 1,676 | [
[
-0.035064697265625,
-0.044189453125,
0.01438140869140625,
0.01230621337890625,
-0.0265960693359375,
-0.01727294921875,
-0.0077972412109375,
-0.008270263671875,
0.005626678466796875,
0.0158843994140625,
-0.05419921875,
-0.04888916015625,
-0.05859375,
-0.00939... |
peanutacake/autotrain-nes_nl-63520135542 | 2023-06-01T19:04:47.000Z | [
"transformers",
"pytorch",
"safetensors",
"bert",
"token-classification",
"autotrain",
"nl",
"dataset:peanutacake/autotrain-data-nes_nl",
"co2_eq_emissions",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | token-classification | peanutacake | null | null | peanutacake/autotrain-nes_nl-63520135542 | 0 | 2 | transformers | 2023-06-01T19:03:42 | ---
tags:
- autotrain
- token-classification
language:
- nl
widget:
- text: "I love AutoTrain"
datasets:
- peanutacake/autotrain-data-nes_nl
co2_eq_emissions:
emissions: 0.24241091204905035
---
# Model Trained Using AutoTrain
- Problem type: Entity Extraction
- Model ID: 63520135542
- CO2 Emissions (in grams): 0.2424
## Validation Metrics
- Loss: 0.447
- Accuracy: 0.838
- Precision: 0.688
- Recall: 0.607
- F1: 0.645
## Usage
You can use cURL to access this model:
```
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoTrain"}' https://api-inference.huggingface.co/models/peanutacake/autotrain-nes_nl-63520135542
```
Or Python API:
```
from transformers import AutoModelForTokenClassification, AutoTokenizer
model = AutoModelForTokenClassification.from_pretrained("peanutacake/autotrain-nes_nl-63520135542", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("peanutacake/autotrain-nes_nl-63520135542", use_auth_token=True)
inputs = tokenizer("I love AutoTrain", return_tensors="pt")
outputs = model(**inputs)
``` | 1,111 | [
[
-0.025604248046875,
-0.03692626953125,
0.0174713134765625,
0.0140380859375,
-0.004314422607421875,
0.0027313232421875,
-0.003208160400390625,
-0.01381683349609375,
0.00684356689453125,
0.01568603515625,
-0.050872802734375,
-0.0384521484375,
-0.062042236328125,
... |
jayanta/bert-base-uncased-english-sentweet-derogatory | 2023-06-01T20:47:47.000Z | [
"transformers",
"pytorch",
"tensorboard",
"bert",
"text-classification",
"generated_from_trainer",
"license:mit",
"endpoints_compatible",
"region:us"
] | text-classification | jayanta | null | null | jayanta/bert-base-uncased-english-sentweet-derogatory | 0 | 2 | transformers | 2023-06-01T20:20:50 | ---
license: mit
tags:
- generated_from_trainer
metrics:
- accuracy
- precision
- recall
- f1
model-index:
- name: bert-base-uncased-english-sentweet-derogatory
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-uncased-english-sentweet-derogatory
This model is a fine-tuned version of [nlptown/bert-base-multilingual-uncased-sentiment](https://huggingface.co/nlptown/bert-base-multilingual-uncased-sentiment) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1640
- Accuracy: 0.7917
- Precision: 0.8058
- Recall: 0.8025
- F1: 0.7916
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 6
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Precision | Recall | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:---------:|:------:|:------:|
| No log | 1.0 | 81 | 0.4757 | 0.8021 | 0.8300 | 0.8171 | 0.8014 |
| No log | 2.0 | 162 | 0.5035 | 0.8194 | 0.8412 | 0.8328 | 0.8191 |
| No log | 3.0 | 243 | 0.5446 | 0.8021 | 0.8220 | 0.8149 | 0.8018 |
| No log | 4.0 | 324 | 0.7602 | 0.7465 | 0.7482 | 0.7507 | 0.7462 |
| No log | 5.0 | 405 | 1.0083 | 0.7743 | 0.7793 | 0.7810 | 0.7742 |
| No log | 6.0 | 486 | 1.1640 | 0.7917 | 0.8058 | 0.8025 | 0.7916 |
### Framework versions
- Transformers 4.24.0
- Pytorch 1.13.1+cu117
- Datasets 2.6.1
- Tokenizers 0.11.0
| 2,063 | [
[
-0.039337158203125,
-0.040924072265625,
0.005619049072265625,
0.01611328125,
-0.0257568359375,
-0.0179901123046875,
-0.0153045654296875,
-0.018341064453125,
0.024139404296875,
0.022186279296875,
-0.049163818359375,
-0.0548095703125,
-0.04840087890625,
-0.013... |
jayanta/microsoft-resnet-50-english-sentweet-derogatory | 2023-06-01T21:10:48.000Z | [
"transformers",
"pytorch",
"tensorboard",
"bert",
"text-classification",
"generated_from_trainer",
"license:mit",
"endpoints_compatible",
"region:us"
] | text-classification | jayanta | null | null | jayanta/microsoft-resnet-50-english-sentweet-derogatory | 0 | 2 | transformers | 2023-06-01T20:57:48 | ---
license: mit
tags:
- generated_from_trainer
metrics:
- accuracy
- precision
- recall
- f1
model-index:
- name: microsoft-resnet-50-english-sentweet-derogatory
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# microsoft-resnet-50-english-sentweet-derogatory
This model is a fine-tuned version of [nlptown/bert-base-multilingual-uncased-sentiment](https://huggingface.co/nlptown/bert-base-multilingual-uncased-sentiment) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.3923
- Accuracy: 0.8229
- Precision: 0.8388
- Recall: 0.8345
- F1: 0.8228
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 6
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Precision | Recall | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:---------:|:------:|:------:|
| No log | 1.0 | 81 | 1.4751 | 0.8021 | 0.8101 | 0.8105 | 0.8021 |
| No log | 2.0 | 162 | 1.2925 | 0.8021 | 0.8086 | 0.8098 | 0.8021 |
| No log | 3.0 | 243 | 1.4240 | 0.8090 | 0.8268 | 0.8212 | 0.8088 |
| No log | 4.0 | 324 | 1.3803 | 0.8125 | 0.8214 | 0.8214 | 0.8125 |
| No log | 5.0 | 405 | 1.3698 | 0.8090 | 0.8187 | 0.8183 | 0.8090 |
| No log | 6.0 | 486 | 1.3923 | 0.8229 | 0.8388 | 0.8345 | 0.8228 |
### Framework versions
- Transformers 4.24.0
- Pytorch 1.13.1+cu117
- Datasets 2.6.1
- Tokenizers 0.11.0
| 2,067 | [
[
-0.036895751953125,
-0.03466796875,
0.0017290115356445312,
0.0147705078125,
-0.0189971923828125,
-0.018951416015625,
-0.010589599609375,
-0.022064208984375,
0.021026611328125,
0.019805908203125,
-0.0537109375,
-0.052581787109375,
-0.0440673828125,
-0.0068588... |
gcagrici/distilbert-base-uncased-finetuned-emotion | 2023-06-02T01:14:27.000Z | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:emotion",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | text-classification | gcagrici | null | null | gcagrici/distilbert-base-uncased-finetuned-emotion | 0 | 2 | transformers | 2023-06-02T00:51:56 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-emotion
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: emotion
type: emotion
config: split
split: validation
args: split
metrics:
- name: Accuracy
type: accuracy
value: 0.9215
- name: F1
type: f1
value: 0.9215212244993529
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2110
- Accuracy: 0.9215
- F1: 0.9215
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.8353 | 1.0 | 250 | 0.3069 | 0.908 | 0.9053 |
| 0.2433 | 2.0 | 500 | 0.2110 | 0.9215 | 0.9215 |
### Framework versions
- Transformers 4.28.0
- Pytorch 2.0.1+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
| 1,848 | [
[
-0.03839111328125,
-0.04119873046875,
0.01540374755859375,
0.0213165283203125,
-0.02630615234375,
-0.019195556640625,
-0.01300811767578125,
-0.00846099853515625,
0.01035308837890625,
0.007904052734375,
-0.057220458984375,
-0.051300048828125,
-0.059234619140625,
... |
platzi/platzi-distilroberta-base-mrpc-joel-orellana | 2023-06-02T02:20:52.000Z | [
"transformers",
"pytorch",
"tensorboard",
"roberta",
"text-classification",
"generated_from_trainer",
"dataset:glue",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | text-classification | platzi | null | null | platzi/platzi-distilroberta-base-mrpc-joel-orellana | 0 | 2 | transformers | 2023-06-02T01:55:26 | ---
license: apache-2.0
tags:
- text-classification
- generated_from_trainer
datasets:
- glue
metrics:
- accuracy
- f1
widget:
- text: ["Yucaipa owned Dominick 's before selling the chain to Safeway in 1998 for $ 2.5 billion.","Yucaipa bought Dominick's in 1995 for $ 693 million and sold it to Safeway for $ 1.8 billion in 1998."]
example_title: Not Equivalent
- text: ["Revenue in the first quarter of the year dropped 15 percent from the same period a year earlier.", "With the scandal hanging over Stewart's company revenue the first quarter of the year dropped 15 percent from the same period a year earlier."]
example_title: Equivalent
model-index:
- name: platzi-distilroberta-base-mrpc-joel-orellana
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: glue
type: glue
config: mrpc
split: validation
args: mrpc
metrics:
- name: Accuracy
type: accuracy
value: 0.8382352941176471
- name: F1
type: f1
value: 0.8829787234042553
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# platzi-distilroberta-base-mrpc-joel-orellana
This model is a fine-tuned version of [distilroberta-base](https://huggingface.co/distilroberta-base) on the GLUE MRPC dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4957
- Accuracy: 0.8382
- F1: 0.8830
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.1982 | 1.09 | 500 | 0.4957 | 0.8382 | 0.8830 |
| 0.1914 | 2.18 | 1000 | 0.4957 | 0.8382 | 0.8830 |
### Framework versions
- Transformers 4.29.2
- Pytorch 2.0.1+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
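Because MRPC is a sentence-pair task, the pipeline takes both sentences at once; a minimal sketch with illustrative inputs:
```python
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="platzi/platzi-distilroberta-base-mrpc-joel-orellana",
)
# Pass the two sentences as a text / text_pair dict.
result = classifier({
    "text": "Revenue dropped 15 percent in the first quarter.",
    "text_pair": "First-quarter revenue fell by 15 percent.",
})
print(result)  # e.g. [{'label': ..., 'score': ...}]
```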
| 2,407 | [
[
-0.031585693359375,
-0.04168701171875,
0.0093231201171875,
0.018341064453125,
-0.032867431640625,
-0.027252197265625,
-0.0098876953125,
-0.0033283233642578125,
0.005191802978515625,
0.01102447509765625,
-0.05023193359375,
-0.040924072265625,
-0.055999755859375,
... |
tingtone/jq_emo_distilbert | 2023-06-02T05:22:56.000Z | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:emotion",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | text-classification | tingtone | null | null | tingtone/jq_emo_distilbert | 2 | 2 | transformers | 2023-06-02T02:25:25 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
model-index:
- name: jq_emo_distilbert
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: emotion
type: emotion
config: split
split: validation
args: split
metrics:
- name: Accuracy
type: accuracy
value: 0.9385
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# jq_emo_distilbert
This model is a fine-tuned version of [tingtone/jq_emo_distilbert](https://huggingface.co/tingtone/jq_emo_distilbert) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3185
- Accuracy: 0.9385
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 16000
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|
| 0.1042 | 1.0 | 1000 | 0.1816 | 0.932 |
| 0.0998 | 2.0 | 2000 | 0.1799 | 0.934 |
| 0.0957 | 3.0 | 3000 | 0.2015 | 0.935 |
| 0.0846 | 4.0 | 4000 | 0.2129 | 0.9335 |
| 0.0943 | 5.0 | 5000 | 0.2215 | 0.935 |
| 0.075 | 6.0 | 6000 | 0.2627 | 0.9375 |
| 0.0607 | 7.0 | 7000 | 0.2908 | 0.9345 |
| 0.0636 | 8.0 | 8000 | 0.3207 | 0.935 |
| 0.0953 | 9.0 | 9000 | 0.3165 | 0.936 |
| 0.0748 | 10.0 | 10000 | 0.3185 | 0.9385 |
### Framework versions
- Transformers 4.26.1
- Pytorch 1.13.1+cu117
- Datasets 2.12.0
- Tokenizers 0.13.3
| 2,235 | [
[
-0.030059814453125,
-0.040985107421875,
0.0104827880859375,
0.005466461181640625,
-0.014190673828125,
-0.0160980224609375,
-0.0066375732421875,
-0.0056304931640625,
0.0214996337890625,
0.0160064697265625,
-0.057098388671875,
-0.055267333984375,
-0.04791259765625... |
yoshivo/bert-japanese-ner | 2023-06-02T07:56:04.000Z | [
"transformers",
"pytorch",
"bert",
"token-classification",
"generated_from_trainer",
"license:cc-by-sa-4.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | token-classification | yoshivo | null | null | yoshivo/bert-japanese-ner | 0 | 2 | transformers | 2023-06-02T07:11:02 | ---
license: cc-by-sa-4.0
tags:
- generated_from_trainer
model-index:
- name: bert-japanese-ner
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-japanese-ner
This model is a fine-tuned version of [cl-tohoku/bert-base-japanese-whole-word-masking](https://huggingface.co/cl-tohoku/bert-base-japanese-whole-word-masking) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0842
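A minimal inference sketch (the base tokenizer typically needs `fugashi` and `ipadic` installed; the label set depends on the unknown training data):
```python
from transformers import pipeline

# aggregation_strategy="simple" merges sub-word pieces into entity spans
ner = pipeline(
    "token-classification",
    model="yoshivo/bert-japanese-ner",
    aggregation_strategy="simple",
)
print(ner("田中さんは東京の会社で働いています。"))
```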
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 0.2834 | 1.0 | 179 | 0.0915 |
| 0.0548 | 2.0 | 358 | 0.0831 |
| 0.0235 | 3.0 | 537 | 0.0842 |
### Framework versions
- Transformers 4.29.2
- Pytorch 1.12.1.post201
- Datasets 2.12.0
- Tokenizers 0.13.3
| 1,423 | [
[
-0.0379638671875,
-0.056243896484375,
0.0183868408203125,
0.0203704833984375,
-0.04034423828125,
-0.0291290283203125,
-0.0234222412109375,
-0.0269317626953125,
0.0198516845703125,
0.032501220703125,
-0.0657958984375,
-0.048797607421875,
-0.057769775390625,
-... |
kristinehara/test_trainer | 2023-06-02T07:53:41.000Z | [
"transformers",
"pytorch",
"bert",
"text-classification",
"generated_from_trainer",
"dataset:yelp_review_full",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | text-classification | kristinehara | null | null | kristinehara/test_trainer | 0 | 2 | transformers | 2023-06-02T07:43:35 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- yelp_review_full
model-index:
- name: test_trainer
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# test_trainer
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the yelp_review_full dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
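A minimal end-to-end sketch of this setup (column names follow the public `yelp_review_full` schema; the remaining details are assumptions):
```python
from datasets import load_dataset
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

dataset = load_dataset("yelp_review_full")
tokenizer = AutoTokenizer.from_pretrained("bert-base-cased")

def tokenize(batch):
    # yelp_review_full provides "text" and a 5-way "label" column
    return tokenizer(batch["text"], truncation=True, padding="max_length")

tokenized = dataset.map(tokenize, batched=True)
model = AutoModelForSequenceClassification.from_pretrained("bert-base-cased", num_labels=5)
args = TrainingArguments(output_dir="test_trainer", learning_rate=5e-5,
                         per_device_train_batch_size=8, per_device_eval_batch_size=8,
                         seed=42, num_train_epochs=3.0)
Trainer(model=model, args=args, train_dataset=tokenized["train"]).train()
```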
### Framework versions
- Transformers 4.29.2
- Pytorch 2.0.1
- Datasets 2.12.0
- Tokenizers 0.13.3
| 1,026 | [
[
-0.03564453125,
-0.05029296875,
0.0139007568359375,
0.01015472412109375,
-0.031158447265625,
-0.037139892578125,
-0.01267242431640625,
-0.019378662109375,
0.016937255859375,
0.0259552001953125,
-0.0577392578125,
-0.03271484375,
-0.0364990234375,
-0.013847351... |
poltextlab/xlm-roberta-large-dutch-cap | 2023-07-04T17:40:22.000Z | [
"transformers",
"pytorch",
"xlm-roberta",
"text-classification",
"zero-shot-classification",
"nl",
"license:mit",
"endpoints_compatible",
"region:us"
] | text-classification | poltextlab | null | null | poltextlab/xlm-roberta-large-dutch-cap | 0 | 2 | transformers | 2023-06-02T09:11:21 |
---
license: mit
language:
- nl
tags:
- zero-shot-classification
- text-classification
- pytorch
metrics:
- accuracy
- f1-score
---
# xlm-roberta-large-dutch-cap
## Model description
An `xlm-roberta-large` model finetuned on dutch training data labelled with [major topic codes](https://www.comparativeagendas.net/pages/master-codebook) from the [Comparative Agendas Project](https://www.comparativeagendas.net/).
## How to use the model
#### Loading and tokenizing input data
```python
import pandas as pd
import numpy as np
from datasets import Dataset
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
Trainer, TrainingArguments)
CAP_NUM_DICT = {0: '1', 1: '2', 2: '3', 3: '4', 4: '5', 5: '6',
6: '7', 7: '8', 8: '9', 9: '10', 10: '12', 11: '13', 12: '14',
13: '15', 14: '16', 15: '17', 16: '18', 17: '19', 18: '20', 19:
'21', 20: '23', 21: '999'}
tokenizer = AutoTokenizer.from_pretrained('xlm-roberta-large')
num_labels = len(CAP_NUM_DICT)
MAXLEN = 256  # assumed maximum sequence length; not specified in the original snippet
def tokenize_dataset(data : pd.DataFrame):
    tokenized = tokenizer(data["text"],
                          max_length=MAXLEN,
                          truncation=True,
                          padding="max_length")
    return tokenized
hg_data = Dataset.from_pandas(data)
dataset = hg_data.map(tokenize_dataset, batched=True, remove_columns=hg_data.column_names)
```
#### Inference using the Trainer class
```python
model = AutoModelForSequenceClassification.from_pretrained('poltextlab/xlm-roberta-large-dutch-cap',
num_labels=num_labels,
problem_type="multi_label_classification",
ignore_mismatched_sizes=True
)
training_args = TrainingArguments(
output_dir='.',
per_device_train_batch_size=8,
per_device_eval_batch_size=8
)
trainer = Trainer(
model=model,
args=training_args
)
probs = trainer.predict(test_dataset=dataset).predictions
predicted = pd.DataFrame(np.argmax(probs, axis=1)).replace({0: CAP_NUM_DICT}).rename(
columns={0: 'predicted'}).reset_index(drop=True)
```
### Fine-tuning procedure
`xlm-roberta-large-dutch-cap` was fine-tuned using the Hugging Face Trainer class with the following hyperparameters:
```python
training_args = TrainingArguments(
output_dir=f"../model/{model_dir}/tmp/",
logging_dir=f"../logs/{model_dir}/",
logging_strategy='epoch',
num_train_epochs=10,
per_device_train_batch_size=8,
per_device_eval_batch_size=8,
learning_rate=5e-06,
seed=42,
save_strategy='epoch',
evaluation_strategy='epoch',
save_total_limit=1,
load_best_model_at_end=True
)
```
We also incorporated an EarlyStoppingCallback in the process with a patience of 2 epochs.
## Model performance
The model was evaluated on a test set of 6398 examples (10% of the available data).<br>
Model accuracy is **0.83**.
| label | precision | recall | f1-score | support |
|:-------------|------------:|---------:|-----------:|----------:|
| 0 | 0.81 | 0.77 | 0.79 | 471 |
| 1 | 0.7 | 0.72 | 0.71 | 148 |
| 2 | 0.88 | 0.8 | 0.84 | 242 |
| 3 | 0.76 | 0.87 | 0.81 | 78 |
| 4 | 0.76 | 0.78 | 0.77 | 374 |
| 5 | 0.9 | 0.92 | 0.91 | 248 |
| 6 | 0.86 | 0.75 | 0.8 | 155 |
| 7 | 0.79 | 0.86 | 0.82 | 95 |
| 8 | 0.86 | 0.82 | 0.84 | 217 |
| 9 | 0.88 | 0.9 | 0.89 | 244 |
| 10 | 0.85 | 0.87 | 0.86 | 763 |
| 11 | 0.73 | 0.75 | 0.74 | 319 |
| 12 | 0.79 | 0.83 | 0.81 | 121 |
| 13 | 0.75 | 0.77 | 0.76 | 378 |
| 14 | 0.82 | 0.83 | 0.83 | 123 |
| 15 | 0.7 | 0.75 | 0.72 | 106 |
| 16 | 0.39 | 0.58 | 0.47 | 19 |
| 17 | 0.93 | 0.92 | 0.93 | 1136 |
| 18 | 0.86 | 0.84 | 0.85 | 903 |
| 19 | 0.64 | 0.75 | 0.69 | 72 |
| 20 | 0.86 | 0.82 | 0.84 | 186 |
| macro avg | 0.79 | 0.8 | 0.79 | 6398 |
| weighted avg | 0.84 | 0.83 | 0.83 | 6398 |
## Inference platform
This model is used by the [CAP Babel Machine](https://babel.poltextlab.com), an open-source and free natural language processing tool, designed to simplify and speed up projects for comparative research.
## Cooperation
Model performance can be significantly improved by extending our training sets. We appreciate every submission of CAP-coded corpora (of any domain and language) at poltextlab{at}poltextlab{dot}com or by using the [CAP Babel Machine](https://babel.poltextlab.com).
## Debugging and issues
This architecture uses the `sentencepiece` tokenizer. To run the model with `transformers` versions earlier than 4.27, you need to install `sentencepiece` manually.
If you encounter a `RuntimeError` when loading the model using the `from_pretrained()` method, adding `ignore_mismatched_sizes=True` should solve the issue.
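A sketch of the workaround described above (run `pip install sentencepiece` first on older `transformers` versions):
```python
from transformers import AutoModelForSequenceClassification

# ignore_mismatched_sizes=True avoids the RuntimeError mentioned above
model = AutoModelForSequenceClassification.from_pretrained(
    "poltextlab/xlm-roberta-large-dutch-cap",
    ignore_mismatched_sizes=True,
)
```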
| 5,554 | [
[
-0.044189453125,
-0.0472412109375,
0.0054931640625,
0.0217742919921875,
-0.0037136077880859375,
-0.0042572021484375,
-0.0244293212890625,
-0.025909423828125,
0.0171966552734375,
0.022430419921875,
-0.035308837890625,
-0.047698974609375,
-0.0550537109375,
0.0... |
fredymad/bert_Pfinal_4CLASES_2e-5_16_2 | 2023-06-02T10:50:29.000Z | [
"transformers",
"pytorch",
"tensorboard",
"bert",
"text-classification",
"generated_from_trainer",
"endpoints_compatible",
"region:us"
] | text-classification | fredymad | null | null | fredymad/bert_Pfinal_4CLASES_2e-5_16_2 | 0 | 2 | transformers | 2023-06-02T09:59:35 | ---
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: bert_Pfinal_4CLASES_2e-5_16_2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert_Pfinal_4CLASES_2e-5_16_2
This model is a fine-tuned version of [dccuchile/bert-base-spanish-wwm-uncased](https://huggingface.co/dccuchile/bert-base-spanish-wwm-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3365
- Accuracy: 0.8987
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.4009 | 1.0 | 669 | 0.2939 | 0.8979 |
| 0.2618 | 2.0 | 1338 | 0.3365 | 0.8987 |
### Framework versions
- Transformers 4.28.0
- Pytorch 1.13.1+cu117
- Datasets 2.12.0
- Tokenizers 0.13.3
| 1,437 | [
[
-0.034088134765625,
-0.044891357421875,
0.0119171142578125,
0.023681640625,
-0.0301513671875,
-0.0276641845703125,
-0.02398681640625,
-0.0208282470703125,
0.005687713623046875,
0.016021728515625,
-0.05548095703125,
-0.048248291015625,
-0.04632568359375,
-0.0... |
fredymad/bert_Pfinal_4CLASES_2e-5_16_10 | 2023-06-02T11:45:37.000Z | [
"transformers",
"pytorch",
"tensorboard",
"bert",
"text-classification",
"generated_from_trainer",
"endpoints_compatible",
"region:us"
] | text-classification | fredymad | null | null | fredymad/bert_Pfinal_4CLASES_2e-5_16_10 | 0 | 2 | transformers | 2023-06-02T10:12:44 | ---
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: bert_Pfinal_4CLASES_2e-5_16_10
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert_Pfinal_4CLASES_2e-5_16_10
This model is a fine-tuned version of [fredymad/bert_Pfinal_4CLASES_2e-5_16_2](https://huggingface.co/fredymad/bert_Pfinal_4CLASES_2e-5_16_2) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9126
- Accuracy: 0.8960
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.1814 | 1.0 | 669 | 0.4063 | 0.8960 |
| 0.1821 | 2.0 | 1338 | 0.4814 | 0.8904 |
| 0.1029 | 3.0 | 2007 | 0.5948 | 0.8968 |
| 0.0545 | 4.0 | 2676 | 0.6543 | 0.8949 |
| 0.038 | 5.0 | 3345 | 0.7463 | 0.8953 |
| 0.0122 | 6.0 | 4014 | 0.8268 | 0.8968 |
| 0.0137 | 7.0 | 4683 | 0.8442 | 0.8964 |
| 0.0061 | 8.0 | 5352 | 0.8852 | 0.8953 |
| 0.0073 | 9.0 | 6021 | 0.9132 | 0.8957 |
| 0.002 | 10.0 | 6690 | 0.9126 | 0.8960 |
### Framework versions
- Transformers 4.28.0
- Pytorch 1.13.1+cu117
- Datasets 2.12.0
- Tokenizers 0.13.3
| 1,934 | [
[
-0.041534423828125,
-0.03973388671875,
0.0124664306640625,
0.0117340087890625,
-0.0189361572265625,
-0.0272674560546875,
-0.0091094970703125,
-0.017822265625,
0.0188140869140625,
0.0187225341796875,
-0.054595947265625,
-0.0438232421875,
-0.046142578125,
-0.0... |
poltextlab/xlm-roberta-large-spanish-cap | 2023-07-04T17:40:24.000Z | [
"transformers",
"pytorch",
"xlm-roberta",
"text-classification",
"zero-shot-classification",
"es",
"license:mit",
"endpoints_compatible",
"region:us"
] | text-classification | poltextlab | null | null | poltextlab/xlm-roberta-large-spanish-cap | 0 | 2 | transformers | 2023-06-02T10:33:45 |
---
license: mit
language:
- es
tags:
- zero-shot-classification
- text-classification
- pytorch
metrics:
- accuracy
- f1-score
---
# xlm-roberta-large-spanish-cap
## Model description
An `xlm-roberta-large` model finetuned on spanish training data labelled with [major topic codes](https://www.comparativeagendas.net/pages/master-codebook) from the [Comparative Agendas Project](https://www.comparativeagendas.net/).
## How to use the model
#### Loading and tokenizing input data
```python
import pandas as pd
import numpy as np
from datasets import Dataset
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
Trainer, TrainingArguments)
CAP_NUM_DICT = {0: '1', 1: '2', 2: '3', 3: '4', 4: '5', 5: '6',
6: '7', 7: '8', 8: '9', 9: '10', 10: '12', 11: '13', 12: '14',
13: '15', 14: '16', 15: '17', 16: '18', 17: '19', 18: '20', 19:
'21', 20: '23', 21: '999'}
tokenizer = AutoTokenizer.from_pretrained('xlm-roberta-large')
num_labels = len(CAP_NUM_DICT)
MAXLEN = 256  # assumed maximum sequence length; not specified in the original snippet
def tokenize_dataset(data : pd.DataFrame):
    tokenized = tokenizer(data["text"],
                          max_length=MAXLEN,
                          truncation=True,
                          padding="max_length")
    return tokenized
hg_data = Dataset.from_pandas(data)
dataset = hg_data.map(tokenize_dataset, batched=True, remove_columns=hg_data.column_names)
```
#### Inference using the Trainer class
```python
model = AutoModelForSequenceClassification.from_pretrained('poltextlab/xlm-roberta-large-spanish-cap',
num_labels=num_labels,
problem_type="multi_label_classification",
ignore_mismatched_sizes=True
)
training_args = TrainingArguments(
output_dir='.',
per_device_train_batch_size=8,
per_device_eval_batch_size=8
)
trainer = Trainer(
model=model,
args=training_args
)
probs = trainer.predict(test_dataset=dataset).predictions
predicted = pd.DataFrame(np.argmax(probs, axis=1)).replace({0: CAP_NUM_DICT}).rename(
columns={0: 'predicted'}).reset_index(drop=True)
```
### Fine-tuning procedure
`xlm-roberta-large-spanish-cap` was fine-tuned using the Hugging Face Trainer class with the following hyperparameters:
```python
training_args = TrainingArguments(
output_dir=f"../model/{model_dir}/tmp/",
logging_dir=f"../logs/{model_dir}/",
logging_strategy='epoch',
num_train_epochs=10,
per_device_train_batch_size=8,
per_device_eval_batch_size=8,
learning_rate=5e-06,
seed=42,
save_strategy='epoch',
evaluation_strategy='epoch',
save_total_limit=1,
load_best_model_at_end=True
)
```
We also incorporated an EarlyStoppingCallback in the process with a patience of 2 epochs.
## Model performance
The model was evaluated on a test set of 18055 examples (10% of the available data).<br>
Model accuracy is **0.62**.
| label | precision | recall | f1-score | support |
|:-------------|------------:|---------:|-----------:|----------:|
| 0 | 0.64 | 0.63 | 0.63 | 783 |
| 1 | 0.62 | 0.46 | 0.53 | 787 |
| 2 | 0.56 | 0.83 | 0.67 | 703 |
| 3 | 0.54 | 0.5 | 0.52 | 566 |
| 4 | 0.61 | 0.67 | 0.64 | 738 |
| 5 | 0.76 | 0.43 | 0.54 | 574 |
| 6 | 0.5 | 0.75 | 0.6 | 346 |
| 7 | 0.68 | 0.52 | 0.59 | 325 |
| 8 | 0.51 | 0.45 | 0.48 | 661 |
| 9 | 0.53 | 0.76 | 0.62 | 1232 |
| 10 | 0.78 | 0.7 | 0.73 | 2196 |
| 11 | 0.66 | 0.58 | 0.61 | 576 |
| 12 | 0.48 | 0.68 | 0.56 | 370 |
| 13 | 0.6 | 0.6 | 0.6 | 721 |
| 14 | 0.7 | 0.63 | 0.66 | 798 |
| 15 | 0.59 | 0.73 | 0.65 | 762 |
| 16 | 0.47 | 0.69 | 0.56 | 587 |
| 17 | 0.6 | 0.61 | 0.61 | 973 |
| 18 | 0.77 | 0.68 | 0.72 | 2199 |
| 19 | 0.54 | 0.24 | 0.33 | 796 |
| 20 | 0.74 | 0.69 | 0.71 | 625 |
| 21 | 0.46 | 0.48 | 0.47 | 737 |
| macro avg | 0.61 | 0.6 | 0.59 | 18055 |
| weighted avg | 0.63 | 0.62 | 0.62 | 18055 |
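A per-label table like the one above can be reproduced from held-out predictions, for example (a sketch; `y_true` and `y_pred` are assumed integer label arrays):
```python
from sklearn.metrics import classification_report

# y_true / y_pred: assumed arrays of gold and predicted label indices
print(classification_report(y_true, y_pred, digits=2))
```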
## Inference platform
This model is used by the [CAP Babel Machine](https://babel.poltextlab.com), an open-source and free natural language processing tool, designed to simplify and speed up projects for comparative research.
## Cooperation
Model performance can be significantly improved by extending our training sets. We appreciate every submission of CAP-coded corpora (of any domain and language) at poltextlab{at}poltextlab{dot}com or by using the [CAP Babel Machine](https://babel.poltextlab.com).
## Debugging and issues
This architecture uses the `sentencepiece` tokenizer. To run the model with `transformers` versions earlier than 4.27, you need to install `sentencepiece` manually.
If you encounter a `RuntimeError` when loading the model using the `from_pretrained()` method, adding `ignore_mismatched_sizes=True` should solve the issue.
| 5,630 | [
[
-0.042236328125,
-0.046875,
0.004444122314453125,
0.02435302734375,
-0.002532958984375,
-0.0006422996520996094,
-0.02362060546875,
-0.0252838134765625,
0.0176239013671875,
0.0206756591796875,
-0.03790283203125,
-0.048370361328125,
-0.052703857421875,
0.00884... |
poltextlab/xlm-roberta-large-hungarian-cap | 2023-07-04T17:40:25.000Z | [
"transformers",
"pytorch",
"xlm-roberta",
"text-classification",
"zero-shot-classification",
"hu",
"license:mit",
"endpoints_compatible",
"region:us"
] | text-classification | poltextlab | null | null | poltextlab/xlm-roberta-large-hungarian-cap | 0 | 2 | transformers | 2023-06-02T10:37:15 |
---
license: mit
language:
- hu
tags:
- zero-shot-classification
- text-classification
- pytorch
metrics:
- accuracy
- f1-score
---
# xlm-roberta-large-hungarian-cap
## Model description
An `xlm-roberta-large` model finetuned on hungarian training data labelled with [major topic codes](https://www.comparativeagendas.net/pages/master-codebook) from the [Comparative Agendas Project](https://www.comparativeagendas.net/).
## How to use the model
#### Loading and tokenizing input data
```python
import pandas as pd
import numpy as np
from datasets import Dataset
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
Trainer, TrainingArguments)
CAP_NUM_DICT = {0: '1', 1: '2', 2: '3', 3: '4', 4: '5', 5: '6',
6: '7', 7: '8', 8: '9', 9: '10', 10: '12', 11: '13', 12: '14',
13: '15', 14: '16', 15: '17', 16: '18', 17: '19', 18: '20', 19:
'21', 20: '23', 21: '999'}
tokenizer = AutoTokenizer.from_pretrained('xlm-roberta-large')
num_labels = len(CAP_NUM_DICT)
MAXLEN = 256  # assumed maximum sequence length; not specified in the original snippet
def tokenize_dataset(data : pd.DataFrame):
    tokenized = tokenizer(data["text"],
                          max_length=MAXLEN,
                          truncation=True,
                          padding="max_length")
    return tokenized
hg_data = Dataset.from_pandas(data)
dataset = hg_data.map(tokenize_dataset, batched=True, remove_columns=hg_data.column_names)
```
#### Inference using the Trainer class
```python
model = AutoModelForSequenceClassification.from_pretrained('poltextlab/xlm-roberta-large-hungarian-cap',
num_labels=num_labels,
problem_type="multi_label_classification",
ignore_mismatched_sizes=True
)
training_args = TrainingArguments(
output_dir='.',
per_device_train_batch_size=16,
per_device_eval_batch_size=16
)
trainer = Trainer(
model=model,
args=training_args
)
probs = trainer.predict(test_dataset=dataset).predictions
predicted = pd.DataFrame(np.argmax(probs, axis=1)).replace({0: CAP_NUM_DICT}).rename(
columns={0: 'predicted'}).reset_index(drop=True)
```
### Fine-tuning procedure
`xlm-roberta-large-hungarian-cap` was fine-tuned using the Hugging Face Trainer class with the following hyperparameters:
```python
training_args = TrainingArguments(
output_dir=f"../model/{model_dir}/tmp/",
logging_dir=f"../logs/{model_dir}/",
logging_strategy='epoch',
num_train_epochs=10,
per_device_train_batch_size=16,
per_device_eval_batch_size=16,
learning_rate=5e-06,
seed=42,
save_strategy='epoch',
evaluation_strategy='epoch',
save_total_limit=1,
load_best_model_at_end=True
)
```
We also incorporated an EarlyStoppingCallback in the process with a patience of 2 epochs.
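A sketch of that callback wiring (the dataset variable names are assumptions):
```python
from transformers import EarlyStoppingCallback, Trainer

# stop when the monitored metric fails to improve for 2 consecutive evaluations
trainer = Trainer(
    model=model,
    args=training_args,
    train_dataset=train_dataset,  # assumed split names
    eval_dataset=eval_dataset,
    callbacks=[EarlyStoppingCallback(early_stopping_patience=2)],
)
```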
## Model performance
The model was evaluated on a test set of 67749 examples (10% of the available data).<br>
Model accuracy is **0.83**.
| label | precision | recall | f1-score | support |
|:-------------|------------:|---------:|-----------:|----------:|
| 0 | 0.76 | 0.77 | 0.76 | 5815 |
| 1 | 0.64 | 0.6 | 0.62 | 1534 |
| 2 | 0.85 | 0.82 | 0.84 | 2217 |
| 3 | 0.82 | 0.81 | 0.81 | 1789 |
| 4 | 0.67 | 0.71 | 0.69 | 1635 |
| 5 | 0.91 | 0.88 | 0.9 | 2812 |
| 6 | 0.75 | 0.68 | 0.71 | 847 |
| 7 | 0.76 | 0.71 | 0.73 | 821 |
| 8 | 0.71 | 0.66 | 0.68 | 351 |
| 9 | 0.85 | 0.83 | 0.84 | 1489 |
| 10 | 0.74 | 0.77 | 0.76 | 2991 |
| 11 | 0.78 | 0.7 | 0.73 | 1476 |
| 12 | 0.72 | 0.67 | 0.7 | 1120 |
| 13 | 0.74 | 0.71 | 0.72 | 2129 |
| 14 | 0.82 | 0.76 | 0.79 | 1227 |
| 15 | 0.87 | 0.81 | 0.84 | 1104 |
| 16 | 0.66 | 0.55 | 0.6 | 456 |
| 17 | 0.64 | 0.7 | 0.67 | 3163 |
| 18 | 0.72 | 0.68 | 0.7 | 6056 |
| 19 | 0.76 | 0.8 | 0.78 | 1418 |
| 20 | 0.71 | 0.76 | 0.74 | 616 |
| 21 | 0.94 | 0.96 | 0.95 | 26683 |
| macro avg | 0.76 | 0.74 | 0.75 | 67749 |
| weighted avg | 0.83 | 0.83 | 0.83 | 67749 |
## Inference platform
This model is used by the [CAP Babel Machine](https://babel.poltextlab.com), an open-source and free natural language processing tool, designed to simplify and speed up projects for comparative research.
## Cooperation
Model performance can be significantly improved by extending our training sets. We appreciate every submission of CAP-coded corpora (of any domain and language) at poltextlab{at}poltextlab{dot}com or by using the [CAP Babel Machine](https://babel.poltextlab.com).
## Debugging and issues
This architecture uses the `sentencepiece` tokenizer. To run the model with `transformers` versions earlier than 4.27, you need to install `sentencepiece` manually.
If you encounter a `RuntimeError` when loading the model using the `from_pretrained()` method, adding `ignore_mismatched_sizes=True` should solve the issue.
| 5,642 | [
[
-0.043609619140625,
-0.047332763671875,
0.007411956787109375,
0.018280029296875,
-0.00199127197265625,
-0.004302978515625,
-0.0244140625,
-0.0236663818359375,
0.0150909423828125,
0.020965576171875,
-0.03778076171875,
-0.049957275390625,
-0.0538330078125,
0.0... |
fredymad/roberta_Pfinal_4CLASES_2e-5_16_2 | 2023-06-02T15:50:37.000Z | [
"transformers",
"pytorch",
"tensorboard",
"roberta",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | text-classification | fredymad | null | null | fredymad/roberta_Pfinal_4CLASES_2e-5_16_2 | 0 | 2 | transformers | 2023-06-02T11:49:40 | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: roberta_Pfinal_4CLASES_2e-5_16_2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# roberta_Pfinal_4CLASES_2e-5_16_2
This model is a fine-tuned version of [PlanTL-GOB-ES/roberta-base-bne](https://huggingface.co/PlanTL-GOB-ES/roberta-base-bne) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3321
- Accuracy: 0.9031
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.4479 | 1.0 | 669 | 0.3181 | 0.8927 |
| 0.2679 | 2.0 | 1338 | 0.3321 | 0.9031 |
### Framework versions
- Transformers 4.28.0
- Pytorch 1.13.1+cu117
- Datasets 2.12.0
- Tokenizers 0.13.3
| 1,445 | [
[
-0.0278472900390625,
-0.046844482421875,
0.01427459716796875,
0.0117645263671875,
-0.0269775390625,
-0.040496826171875,
-0.014556884765625,
-0.017669677734375,
0.002593994140625,
0.0231170654296875,
-0.051544189453125,
-0.0477294921875,
-0.0467529296875,
-0.... |
fredymad/robertuito_4CLASES_Pfinal | 2023-06-02T12:16:13.000Z | [
"transformers",
"pytorch",
"tensorboard",
"roberta",
"text-classification",
"generated_from_trainer",
"endpoints_compatible",
"region:us"
] | text-classification | fredymad | null | null | fredymad/robertuito_4CLASES_Pfinal | 0 | 2 | transformers | 2023-06-02T12:07:35 | ---
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: robertuito_4CLASES_Pfinal
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# robertuito_4CLASES_Pfinal
This model is a fine-tuned version of [pysentimiento/robertuito-base-uncased](https://huggingface.co/pysentimiento/robertuito-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3067
- Accuracy: 0.9061
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.4139 | 1.0 | 669 | 0.2918 | 0.9035 |
| 0.2619 | 2.0 | 1338 | 0.3067 | 0.9061 |
### Framework versions
- Transformers 4.28.0
- Pytorch 1.13.1+cu117
- Datasets 2.12.0
- Tokenizers 0.13.3
| 1,425 | [
[
-0.0257720947265625,
-0.032562255859375,
0.0171966552734375,
0.0169830322265625,
-0.0295867919921875,
-0.0275421142578125,
-0.0192718505859375,
-0.013397216796875,
0.01383209228515625,
0.0361328125,
-0.0516357421875,
-0.05499267578125,
-0.04656982421875,
-0.... |
GCopoulos/deberta-finetuned-answer-polarity-warmup-f1 | 2023-06-02T13:43:13.000Z | [
"transformers",
"pytorch",
"deberta",
"text-classification",
"generated_from_trainer",
"dataset:glue",
"license:mit",
"model-index",
"endpoints_compatible",
"region:us"
] | text-classification | GCopoulos | null | null | GCopoulos/deberta-finetuned-answer-polarity-warmup-f1 | 0 | 2 | transformers | 2023-06-02T13:23:57 | ---
license: mit
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- f1
model-index:
- name: deberta-finetuned-answer-polarity-warmup-f1
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: glue
type: glue
config: answer_pol
split: validation
args: answer_pol
metrics:
- name: F1
type: f1
value: 0.8602499021892139
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# deberta-finetuned-answer-polarity-warmup-f1
This model is a fine-tuned version of [microsoft/deberta-large](https://huggingface.co/microsoft/deberta-large) on the glue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3748
- F1: 0.8602
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 7e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| No log | 1.0 | 364 | 0.5669 | 0.8303 |
| 0.0791 | 2.0 | 728 | 0.5405 | 0.4630 |
| 0.3408 | 3.0 | 1092 | 0.3748 | 0.8602 |
### Framework versions
- Transformers 4.29.2
- Pytorch 2.0.1+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
| 1,767 | [
[
-0.0282135009765625,
-0.047515869140625,
0.013916015625,
0.019866943359375,
-0.0287628173828125,
-0.022979736328125,
-0.00926971435546875,
-0.006832122802734375,
0.01226043701171875,
0.0149993896484375,
-0.05401611328125,
-0.041351318359375,
-0.049102783203125,
... |
GCopoulos/deberta-finetuned-answer-polarity-5e | 2023-06-02T14:05:16.000Z | [
"transformers",
"pytorch",
"deberta",
"text-classification",
"generated_from_trainer",
"dataset:glue",
"license:mit",
"model-index",
"endpoints_compatible",
"region:us"
] | text-classification | GCopoulos | null | null | GCopoulos/deberta-finetuned-answer-polarity-5e | 0 | 2 | transformers | 2023-06-02T13:49:18 | ---
license: mit
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- f1
model-index:
- name: deberta-finetuned-answer-polarity-5e
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: glue
type: glue
config: answer_pol
split: validation
args: answer_pol
metrics:
- name: F1
type: f1
value: 0.857225787640563
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# deberta-finetuned-answer-polarity-5e
This model is a fine-tuned version of [microsoft/deberta-large](https://huggingface.co/microsoft/deberta-large) on the glue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.5116
- F1: 0.8572
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| No log | 1.0 | 364 | 0.5350 | 0.8208 |
| 0.1301 | 2.0 | 728 | 0.7435 | 0.7378 |
| 0.1716 | 3.0 | 1092 | 0.4829 | 0.8193 |
| 0.1716 | 4.0 | 1456 | 0.5184 | 0.8124 |
| 0.1455 | 5.0 | 1820 | 0.5116 | 0.8572 |
### Framework versions
- Transformers 4.29.2
- Pytorch 2.0.1+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
| 1,872 | [
[
-0.027740478515625,
-0.045562744140625,
0.017242431640625,
0.017303466796875,
-0.02825927734375,
-0.023040771484375,
-0.0036678314208984375,
-0.009307861328125,
0.0179901123046875,
0.01654052734375,
-0.057220458984375,
-0.04345703125,
-0.0516357421875,
-0.01... |
leofn3/modelo_multiclass_teste01 | 2023-06-02T13:55:06.000Z | [
"sentence-transformers",
"pytorch",
"bert",
"setfit",
"text-classification",
"arxiv:2209.11055",
"license:apache-2.0",
"region:us"
] | text-classification | leofn3 | null | null | leofn3/modelo_multiclass_teste01 | 0 | 2 | sentence-transformers | 2023-06-02T13:53:38 | ---
license: apache-2.0
tags:
- setfit
- sentence-transformers
- text-classification
pipeline_tag: text-classification
---
# leofn3/modelo_multiclass_teste01
This is a [SetFit model](https://github.com/huggingface/setfit) that can be used for text classification. The model has been trained using an efficient few-shot learning technique that involves:
1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning.
2. Training a classification head with features from the fine-tuned Sentence Transformer.
## Usage
To use this model for inference, first install the SetFit library:
```bash
python -m pip install setfit
```
You can then run inference as follows:
```python
from setfit import SetFitModel
# Download from Hub and run inference
model = SetFitModel.from_pretrained("/var/folders/l0/32nshlfj7rq1xg2dxcjs9y9w0000gn/T/tmpfrcg6j3b/leofn3/modelo_multiclass_teste01")
# Run inference
preds = model(["i loved the spiderman movie!", "pineapple on pizza is the worst 🤮"])
```
## BibTeX entry and citation info
```bibtex
@article{https://doi.org/10.48550/arxiv.2209.11055,
doi = {10.48550/ARXIV.2209.11055},
url = {https://arxiv.org/abs/2209.11055},
author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren},
keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences},
title = {Efficient Few-Shot Learning Without Prompts},
publisher = {arXiv},
year = {2022},
copyright = {Creative Commons Attribution 4.0 International}
}
```
| 1,675 | [
[
-0.0144805908203125,
-0.05877685546875,
0.0239105224609375,
-0.004547119140625,
0.00043845176696777344,
-0.0193634033203125,
-0.017669677734375,
-0.01416778564453125,
-0.01363372802734375,
0.0284881591796875,
-0.044036865234375,
-0.0164031982421875,
-0.040435791... |
GCopoulos/deberta-finetuned-answer-polarity-7e-adj | 2023-06-02T14:24:27.000Z | [
"transformers",
"pytorch",
"deberta",
"text-classification",
"generated_from_trainer",
"dataset:glue",
"license:mit",
"model-index",
"endpoints_compatible",
"region:us"
] | text-classification | GCopoulos | null | null | GCopoulos/deberta-finetuned-answer-polarity-7e-adj | 0 | 2 | transformers | 2023-06-02T14:16:18 | ---
license: mit
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- f1
model-index:
- name: deberta-finetuned-answer-polarity-7e-adj
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: glue
type: glue
config: answer_pol
split: validation
args: answer_pol
metrics:
- name: F1
type: f1
value: 0.8582290105968754
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# deberta-finetuned-answer-polarity-7e-adj
This model is a fine-tuned version of [microsoft/deberta-large](https://huggingface.co/microsoft/deberta-large) on the glue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7605
- F1: 0.8582
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 7e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| No log | 1.0 | 262 | 0.3918 | 0.8901 |
| 0.4372 | 2.0 | 524 | 0.4592 | 0.9138 |
| 0.4372 | 3.0 | 786 | 0.7605 | 0.8582 |
### Framework versions
- Transformers 4.29.2
- Pytorch 2.0.1+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
| 1,761 | [
[
-0.0276336669921875,
-0.04986572265625,
0.0171356201171875,
0.017974853515625,
-0.0279998779296875,
-0.0259246826171875,
-0.0038433074951171875,
-0.010101318359375,
0.01751708984375,
0.0179901123046875,
-0.05419921875,
-0.041259765625,
-0.05322265625,
-0.017... |
Guerosharp/dqn-SpaceInvadersNoFrameskip-v4 | 2023-06-02T14:42:41.000Z | [
"stable-baselines3",
"SpaceInvadersNoFrameskip-v4",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | Guerosharp | null | null | Guerosharp/dqn-SpaceInvadersNoFrameskip-v4 | 0 | 2 | stable-baselines3 | 2023-06-02T14:42:12 | ---
library_name: stable-baselines3
tags:
- SpaceInvadersNoFrameskip-v4
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: DQN
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: SpaceInvadersNoFrameskip-v4
type: SpaceInvadersNoFrameskip-v4
metrics:
- type: mean_reward
value: 352.00 +/- 136.24
name: mean_reward
verified: false
---
# **DQN** Agent playing **SpaceInvadersNoFrameskip-v4**
This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
Install the RL Zoo (with SB3 and SB3-Contrib):
```bash
pip install rl_zoo3
```
```
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga Guerosharp -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do:
```
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga Guerosharp -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
## Training (with the RL Zoo)
```
python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga Guerosharp
```
## Hyperparameters
```python
OrderedDict([('batch_size', 32),
('buffer_size', 100000),
('env_wrapper',
['stable_baselines3.common.atari_wrappers.AtariWrapper']),
('exploration_final_eps', 0.01),
('exploration_fraction', 0.1),
('frame_stack', 4),
('gradient_steps', 1),
('learning_rate', 0.0001),
('learning_starts', 100000),
('n_timesteps', 1000000.0),
('optimize_memory_usage', False),
('policy', 'CnnPolicy'),
('target_update_interval', 1000),
('train_freq', 4),
('normalize', False)])
```
# Environment Arguments
```python
{'render_mode': 'rgb_array'}
```
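A sketch of building the evaluation environment with that argument (Atari dependencies such as `ale-py` and the ROMs are assumed to be installed):
```python
import gymnasium as gym

# render_mode="rgb_array" lets evaluation scripts capture video frames
env = gym.make("SpaceInvadersNoFrameskip-v4", render_mode="rgb_array")
```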
| 2,765 | [
[
-0.043060302734375,
-0.039031982421875,
0.0191192626953125,
0.025238037109375,
-0.01099395751953125,
-0.017181396484375,
0.0101318359375,
-0.0135955810546875,
0.013092041015625,
0.0218353271484375,
-0.0712890625,
-0.035430908203125,
-0.025299072265625,
-0.00... |
Ttonio/distilbert-base-uncased-finetuned-emotion | 2023-06-02T15:00:35.000Z | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:emotion",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | text-classification | Ttonio | null | null | Ttonio/distilbert-base-uncased-finetuned-emotion | 0 | 2 | transformers | 2023-06-02T14:48:02 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-emotion
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: emotion
type: emotion
config: split
split: validation
args: split
metrics:
- name: Accuracy
type: accuracy
value: 0.926
- name: F1
type: f1
value: 0.9260719878508991
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2180
- Accuracy: 0.926
- F1: 0.9261
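A minimal inference sketch (labels may appear as `LABEL_0` to `LABEL_5` unless an `id2label` mapping was configured on the checkpoint):
```python
from transformers import pipeline

clf = pipeline("text-classification",
               model="Ttonio/distilbert-base-uncased-finetuned-emotion")
print(clf("I can't wait to see you again!"))
```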
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.8044 | 1.0 | 250 | 0.3045 | 0.906 | 0.9039 |
| 0.2453 | 2.0 | 500 | 0.2180 | 0.926 | 0.9261 |
### Framework versions
- Transformers 4.29.2
- Pytorch 2.0.1+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
| 1,846 | [
[
-0.037567138671875,
-0.0411376953125,
0.0141754150390625,
0.021820068359375,
-0.026214599609375,
-0.019073486328125,
-0.01313018798828125,
-0.008453369140625,
0.01020050048828125,
0.007701873779296875,
-0.05615234375,
-0.05181884765625,
-0.059967041015625,
-... |
GCopoulos/deberta-finetuned-answer-polarity-1e6 | 2023-06-02T15:04:06.000Z | [
"transformers",
"pytorch",
"deberta",
"text-classification",
"generated_from_trainer",
"dataset:glue",
"license:mit",
"model-index",
"endpoints_compatible",
"region:us"
] | text-classification | GCopoulos | null | null | GCopoulos/deberta-finetuned-answer-polarity-1e6 | 0 | 2 | transformers | 2023-06-02T14:53:37 | ---
license: mit
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- f1
model-index:
- name: deberta-finetuned-answer-polarity-1e6
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: glue
type: glue
config: answer_pol
split: validation
args: answer_pol
metrics:
- name: F1
type: f1
value: 0.8586364216686151
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# deberta-finetuned-answer-polarity-1e6
This model is a fine-tuned version of [microsoft/deberta-large](https://huggingface.co/microsoft/deberta-large) on the glue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7823
- F1: 0.8586
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-06
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 7
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| No log | 1.0 | 262 | 0.7424 | 0.4877 |
| 0.8987 | 2.0 | 524 | 0.3792 | 0.8774 |
| 0.2993 | 3.0 | 786 | 0.5936 | 0.8413 |
| 0.1483 | 4.0 | 1048 | 0.4211 | 0.8859 |
| 0.1175 | 5.0 | 1310 | 0.4684 | 0.8959 |
| 0.0816 | 6.0 | 1572 | 0.6284 | 0.8712 |
| 0.0624 | 7.0 | 1834 | 0.7823 | 0.8586 |
### Framework versions
- Transformers 4.29.2
- Pytorch 2.0.1+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
| 1,995 | [
[
-0.0290679931640625,
-0.045013427734375,
0.0174102783203125,
0.01397705078125,
-0.0227203369140625,
-0.0216064453125,
-0.0009756088256835938,
-0.009429931640625,
0.020599365234375,
0.0172119140625,
-0.055877685546875,
-0.04229736328125,
-0.05426025390625,
-0... |
namedotpg/ppo-LunarLander-v2 | 2023-06-03T22:54:32.000Z | [
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | namedotpg | null | null | namedotpg/ppo-LunarLander-v2 | 0 | 2 | stable-baselines3 | 2023-06-02T15:24:56 | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 259.35 +/- 24.98
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
A minimal usage sketch (the checkpoint filename is an assumption based on the standard Hub layout for these agents):
```python
from stable_baselines3 import PPO
from huggingface_sb3 import load_from_hub

# filename assumed; check the repository files if loading fails
model = PPO.load(load_from_hub("namedotpg/ppo-LunarLander-v2", "ppo-LunarLander-v2.zip"))
```
| 784 | [
[
-0.00023484230041503906,
-0.02716064453125,
0.017059326171875,
0.023345947265625,
-0.00606536865234375,
0.002735137939453125,
0.034454345703125,
-0.012115478515625,
0.019866943359375,
0.06500244140625,
-0.043212890625,
-0.035247802734375,
-0.0343017578125,
-... |
GCopoulos/deberta-finetuned-answer-polarity-7e6 | 2023-06-02T15:57:19.000Z | [
"transformers",
"pytorch",
"deberta",
"text-classification",
"generated_from_trainer",
"dataset:glue",
"license:mit",
"model-index",
"endpoints_compatible",
"region:us"
] | text-classification | GCopoulos | null | null | GCopoulos/deberta-finetuned-answer-polarity-7e6 | 0 | 2 | transformers | 2023-06-02T15:50:15 | ---
license: mit
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- f1
model-index:
- name: deberta-finetuned-answer-polarity-7e6
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: glue
type: glue
config: answer_pol
split: validation
args: answer_pol
metrics:
- name: F1
type: f1
value: 0.8625097340010413
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# deberta-finetuned-answer-polarity-7e6
This model is a fine-tuned version of [microsoft/deberta-large](https://huggingface.co/microsoft/deberta-large) on the glue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9143
- F1: 0.8625
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 7e-06
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| No log | 1.0 | 214 | 0.6748 | 0.8696 |
| 0.0795 | 2.0 | 428 | 0.8541 | 0.8512 |
| 0.0508 | 3.0 | 642 | 0.9143 | 0.8625 |
### Framework versions
- Transformers 4.29.2
- Pytorch 2.0.1+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
| 1,755 | [
[
-0.0265045166015625,
-0.048126220703125,
0.0189208984375,
0.01873779296875,
-0.026611328125,
-0.0265960693359375,
-0.00305938720703125,
-0.0113372802734375,
0.0154571533203125,
0.0174560546875,
-0.055450439453125,
-0.039794921875,
-0.05230712890625,
-0.01873... |
derguene/carpooling-MiniLM-L12-v2-fr | 2023-06-07T21:34:28.000Z | [
"sentence-transformers",
"pytorch",
"bert",
"setfit",
"text-classification",
"arxiv:2209.11055",
"license:apache-2.0",
"region:us"
] | text-classification | derguene | null | null | derguene/carpooling-MiniLM-L12-v2-fr | 0 | 2 | sentence-transformers | 2023-06-02T16:02:52 | ---
license: apache-2.0
tags:
- setfit
- sentence-transformers
- text-classification
pipeline_tag: text-classification
---
# derguene/carpooling-MiniLM-L12-v2-fr
This is a [SetFit model](https://github.com/huggingface/setfit) that can be used for text classification. The model has been trained using an efficient few-shot learning technique that involves:
1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning.
2. Training a classification head with features from the fine-tuned Sentence Transformer.
## Usage
To use this model for inference, first install the SetFit library:
```bash
python -m pip install setfit
```
You can then run inference as follows:
```python
from setfit import SetFitModel
# Download from Hub and run inference
model = SetFitModel.from_pretrained("derguene/carpooling-MiniLM-L12-v2-fr")
# Run inference
preds = model(["i loved the spiderman movie!", "pineapple on pizza is the worst 🤮"])
```
## BibTeX entry and citation info
```bibtex
@article{https://doi.org/10.48550/arxiv.2209.11055,
doi = {10.48550/ARXIV.2209.11055},
url = {https://arxiv.org/abs/2209.11055},
author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren},
keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences},
title = {Efficient Few-Shot Learning Without Prompts},
publisher = {arXiv},
year = {2022},
copyright = {Creative Commons Attribution 4.0 International}
}
```
| 1,561 | [
[
-0.0082244873046875,
-0.054351806640625,
0.037017822265625,
-0.0230560302734375,
-0.0137939453125,
-0.0086212158203125,
-0.00989532470703125,
-0.0078277587890625,
-0.0109710693359375,
0.0232391357421875,
-0.054107666015625,
-0.0158233642578125,
-0.0372314453125,... |
fredymad/HATE_Pfinal_2e-5_16_2 | 2023-06-02T16:15:13.000Z | [
"transformers",
"pytorch",
"tensorboard",
"bert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | text-classification | fredymad | null | null | fredymad/HATE_Pfinal_2e-5_16_2 | 0 | 2 | transformers | 2023-06-02T16:03:00 | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- f1
model-index:
- name: HATE_Pfinal_2e-5_16_2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# HATE_Pfinal_2e-5_16_2
This model is a fine-tuned version of [Hate-speech-CNERG/dehatebert-mono-spanish](https://huggingface.co/Hate-speech-CNERG/dehatebert-mono-spanish) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2569
- F1: 0.6748
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.3025 | 1.0 | 669 | 0.2434 | 0.6724 |
| 0.2559 | 2.0 | 1338 | 0.2569 | 0.6748 |
### Framework versions
- Transformers 4.28.0
- Pytorch 1.13.1+cu117
- Datasets 2.12.0
- Tokenizers 0.13.3
| 1,425 | [
[
-0.044158935546875,
-0.0526123046875,
0.004375457763671875,
0.01366424560546875,
-0.027191162109375,
-0.025604248046875,
-0.01300811767578125,
-0.017791748046875,
0.0135650634765625,
0.0190277099609375,
-0.0555419921875,
-0.04901123046875,
-0.056640625,
-0.0... |
Surya-3719/distilbert-base-uncased-finetuned-emotion | 2023-06-03T07:36:36.000Z | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:emotion",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | text-classification | Surya-3719 | null | null | Surya-3719/distilbert-base-uncased-finetuned-emotion | 0 | 2 | transformers | 2023-06-02T16:50:50 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-emotion
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: emotion
type: emotion
config: split
split: validation
args: split
metrics:
- name: Accuracy
type: accuracy
value: 0.9255
- name: F1
type: f1
value: 0.9253738195435528
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2218
- Accuracy: 0.9255
- F1: 0.9254
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.848 | 1.0 | 250 | 0.3244 | 0.9045 | 0.9008 |
| 0.2603 | 2.0 | 500 | 0.2218 | 0.9255 | 0.9254 |
### Framework versions
- Transformers 4.29.2
- Pytorch 2.0.1+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
| 1,848 | [
[
-0.037078857421875,
-0.041778564453125,
0.01354217529296875,
0.022247314453125,
-0.025360107421875,
-0.018951416015625,
-0.0139923095703125,
-0.008575439453125,
0.0100250244140625,
0.00762176513671875,
-0.055877685546875,
-0.0516357421875,
-0.060089111328125,
... |
Mariamtc/finetuned-twitter-roberta-base-sep2022-tweetcognition | 2023-06-28T22:07:15.000Z | [
"transformers",
"pytorch",
"roberta",
"text-classification",
"generated_from_trainer",
"en",
"license:mit",
"endpoints_compatible",
"region:us"
] | text-classification | Mariamtc | null | null | Mariamtc/finetuned-twitter-roberta-base-sep2022-tweetcognition | 1 | 2 | transformers | 2023-06-02T17:05:52 | ---
license: mit
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: finetuned-twitter-roberta-base-sep2022-tweetcognition
results: []
language:
- en
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuned-twitter-roberta-base-sep2022-tweetcognition
This model is a fine-tuned version of [cardiffnlp/twitter-roberta-base-sep2022](https://huggingface.co/cardiffnlp/twitter-roberta-base-sep2022) on a custom dataset of 2,527 recent tweets related to major life events that occur during users' lifespans.
It achieves the following results on the evaluation set:
- Loss: 0.2433
- Accuracy: 0.9545
## Model description
A RoBERTa-base model trained on 168.86M tweets up to the end of September 2022 (a 15M-tweet increment), then fine-tuned on a custom dataset of 2,527 recent tweets related to major life events, with the goal of performing a specific text classification task:
classifying posts from the Twitter social media platform into a set of 30 distinct classes, each representing a major life event that the author of the post recently experienced.
RoBERTa (Robustly Optimized BERT approach) is a state-of-the-art natural language processing (NLP) model developed by Facebook AI.
## Intended uses & limitations
This fine-tuned language model is intended for a specific text classification task: classifying posts from the Twitter social media platform into a set of
30 distinct classes, each representing a major life event that the author of the post recently experienced.
The model could be further improved by training on an even larger dataset with an extended and more diverse set of life-event classes.
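A minimal inference sketch (the predicted class names depend on the label mapping stored with the checkpoint):
```python
from transformers import pipeline

clf = pipeline("text-classification",
               model="Mariamtc/finetuned-twitter-roberta-base-sep2022-tweetcognition")
print(clf("We just got the keys to our first house!"))
```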
## Training procedure
A fine-tuning process was applied to the original model [cardiffnlp/twitter-roberta-base-sep2022](https://huggingface.co/cardiffnlp/twitter-roberta-base-sep2022) by:
- training the original model on a custom dataset consisting of 2527 recent tweets related to major life events that occur during users' lifetimes
- setting the model's hyperparameters to the values listed below
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 2.0283 | 1.0 | 127 | 1.4553 | 0.8162 |
| 0.9216 | 2.0 | 254 | 0.5951 | 0.8992 |
| 0.4343 | 3.0 | 381 | 0.3544 | 0.9348 |
| 0.2629 | 4.0 | 508 | 0.2613 | 0.9486 |
| 0.1861 | 5.0 | 635 | 0.2433 | 0.9545 |
### Framework versions
- Transformers 4.29.0
- Pytorch 2.0.1+cu117
- Datasets 2.12.0
- Tokenizers 0.13.3 | 3,148 | [
[
-0.023895263671875,
-0.060760498046875,
0.01375579833984375,
0.01324462890625,
-0.01462554931640625,
-0.0029582977294921875,
-0.016998291015625,
-0.03936767578125,
0.00969696044921875,
0.0186920166015625,
-0.059112548828125,
-0.048919677734375,
-0.05303955078125... |
HasinMDG/X-Sent-Deberta_v3 | 2023-06-02T17:24:41.000Z | [
"sentence-transformers",
"pytorch",
"deberta-v2",
"setfit",
"text-classification",
"arxiv:2209.11055",
"license:apache-2.0",
"region:us"
] | text-classification | HasinMDG | null | null | HasinMDG/X-Sent-Deberta_v3 | 0 | 2 | sentence-transformers | 2023-06-02T17:24:23 | ---
license: apache-2.0
tags:
- setfit
- sentence-transformers
- text-classification
pipeline_tag: text-classification
---
# HasinMDG/X-Sent-Deberta_v3
This is a [SetFit model](https://github.com/huggingface/setfit) that can be used for text classification. The model has been trained using an efficient few-shot learning technique that involves:
1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning.
2. Training a classification head with features from the fine-tuned Sentence Transformer (a training sketch follows below).
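For context, a minimal training sketch of this two-step recipe with the SetFit library (0.x API). The dataset, backbone checkpoint, and hyperparameters here are illustrative assumptions, not the settings used for this model:

```python
from datasets import load_dataset
from setfit import SetFitModel, SetFitTrainer

# A hypothetical few-shot setup: 64 labeled examples sampled from sst2.
dataset = load_dataset("SetFit/sst2")
train_ds = dataset["train"].shuffle(seed=42).select(range(64))

# Start from any Sentence Transformer checkpoint; the backbone actually used
# for this model is not stated on the card.
model = SetFitModel.from_pretrained("sentence-transformers/paraphrase-mpnet-base-v2")

trainer = SetFitTrainer(
    model=model,
    train_dataset=train_ds,
    num_iterations=20,  # contrastive pair-generation iterations per example
    num_epochs=1,       # epochs for the contrastive fine-tuning step
)
# Step 1: contrastive fine-tuning of the body; step 2: fitting the head.
trainer.train()
```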
## Usage
To use this model for inference, first install the SetFit library:
```bash
python -m pip install setfit
```
You can then run inference as follows:
```python
from setfit import SetFitModel
# Download from Hub and run inference
model = SetFitModel.from_pretrained("HasinMDG/X-Sent-Deberta_v3")
# Run inference
preds = model(["i loved the spiderman movie!", "pineapple on pizza is the worst 🤮"])
```
## BibTeX entry and citation info
```bibtex
@article{https://doi.org/10.48550/arxiv.2209.11055,
doi = {10.48550/ARXIV.2209.11055},
url = {https://arxiv.org/abs/2209.11055},
author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren},
keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences},
title = {Efficient Few-Shot Learning Without Prompts},
publisher = {arXiv},
year = {2022},
copyright = {Creative Commons Attribution 4.0 International}
}
```
| 1,541 | [
[
-0.004150390625,
-0.060516357421875,
0.03399658203125,
-0.0094757080078125,
-0.01313018798828125,
-0.0162200927734375,
-0.0159912109375,
-0.0114288330078125,
0.00441741943359375,
0.036590576171875,
-0.047393798828125,
-0.0259552001953125,
-0.048492431640625,
... |
GCopoulos/deberta-finetuned-answer-polarity-3e6-newdata3 | 2023-06-02T18:36:56.000Z | [
"transformers",
"pytorch",
"deberta",
"text-classification",
"generated_from_trainer",
"dataset:glue",
"license:mit",
"model-index",
"endpoints_compatible",
"region:us"
] | text-classification | GCopoulos | null | null | GCopoulos/deberta-finetuned-answer-polarity-3e6-newdata3 | 0 | 2 | transformers | 2023-06-02T18:24:51 | ---
license: mit
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- f1
model-index:
- name: deberta-finetuned-answer-polarity-3e6-newdata3
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: glue
type: glue
config: answer_pol
split: validation
args: answer_pol
metrics:
- name: F1
type: f1
value: 0.8847581890627116
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# deberta-finetuned-answer-polarity-3e6-newdata3
This model is a fine-tuned version of [microsoft/deberta-large](https://huggingface.co/microsoft/deberta-large) on the glue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7485
- F1: 0.8848
## Model description
More information needed
## Intended uses & limitations
More information needed
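Pending fuller documentation, a minimal inference sketch with the `transformers` pipeline; the example sentence is hypothetical, and the label names depend on the answer-polarity label mapping:

```python
from transformers import pipeline

# Load the fine-tuned answer-polarity classifier from the Hub.
classifier = pipeline(
    "text-classification",
    model="GCopoulos/deberta-finetuned-answer-polarity-3e6-newdata3",
)

print(classifier("Yes, the restaurant was open when we arrived."))
```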
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-06
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| No log | 1.0 | 219 | 0.4594 | 0.8532 |
| 0.5223 | 2.0 | 438 | 0.5479 | 0.8841 |
| 0.0962 | 3.0 | 657 | 0.7485 | 0.8848 |
### Framework versions
- Transformers 4.29.2
- Pytorch 2.0.1+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
| 1,773 | [
[
-0.02679443359375,
-0.048187255859375,
0.0188446044921875,
0.0193023681640625,
-0.02764892578125,
-0.0269012451171875,
-0.0030117034912109375,
-0.0129241943359375,
0.0159912109375,
0.017181396484375,
-0.052093505859375,
-0.041015625,
-0.0531005859375,
-0.016... |