modelId (string, 6–107 chars) | label (list) | readme (string, 0–56.2k chars) | readme_len (int64, 0–56.2k) |
|---|---|---|---|
Jeevesh8/6ep_bert_ft_cola-79 | null | Entry not found | 15 |
Jeevesh8/6ep_bert_ft_cola-83 | null | Entry not found | 15 |
Jeevesh8/6ep_bert_ft_cola-84 | null | Entry not found | 15 |
Jeevesh8/6ep_bert_ft_cola-85 | null | Entry not found | 15 |
Jeevesh8/6ep_bert_ft_cola-89 | null | Entry not found | 15 |
Jeevesh8/6ep_bert_ft_cola-93 | null | Entry not found | 15 |
Jeevesh8/6ep_bert_ft_cola-96 | null | Entry not found | 15 |
Jeevesh8/6ep_bert_ft_cola-98 | null | Entry not found | 15 |
Jeevesh8/6ep_bert_ft_cola-99 | null | Entry not found | 15 |
Barik/testvata | null | Entry not found | 15 |
aliosm/sha3bor-metre-detector-arabertv2-base | [
"LABEL_0",
"LABEL_1",
"LABEL_10",
"LABEL_11",
"LABEL_12",
"LABEL_13",
"LABEL_14",
"LABEL_15",
"LABEL_2",
"LABEL_3",
"LABEL_4",
"LABEL_5",
"LABEL_6",
"LABEL_7",
"LABEL_8",
"LABEL_9"
] | ---
language: ar
license: mit
widget:
- text: "إن العيون التي في طرفها حور [شطر] قتلننا ثم لم يحيين قتلانا"
- text: "إذا ما فعلت الخير ضوعف شرهم [شطر] وكل إناء بالذي فيه ينضح"
- text: "واحر قلباه ممن قلبه شبم [شطر] ومن بجسمي وحالي عنده سقم"
---
| 245 |
Yarn007/autotrain-Napkin-872827783 | [
"CRIME",
"ENTERTAINMENT",
"Finance",
"POLITICS",
"SPORTS",
"Terrorism"
] | ---
tags: autotrain
language: unk
widget:
- text: "I love AutoTrain 🤗"
datasets:
- Yarn007/autotrain-data-Napkin
co2_eq_emissions: 0.020162211418903533
---
# Model Trained Using AutoTrain
- Problem type: Multi-class Classification
- Model ID: 872827783
- CO2 Emissions (in grams): 0.020162211418903533
## Validation Metrics
- Loss: 0.25198695063591003
- Accuracy: 0.9325714285714286
- Macro F1: 0.9254931094274171
- Micro F1: 0.9325714285714286
- Weighted F1: 0.9323540959391766
- Macro Precision: 0.9286720054236212
- Micro Precision: 0.9325714285714286
- Weighted Precision: 0.9324375609546055
- Macro Recall: 0.9227549386201338
- Micro Recall: 0.9325714285714286
- Weighted Recall: 0.9325714285714286
## Usage
You can use cURL to access this model:
```bash
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoTrain"}' https://api-inference.huggingface.co/models/Yarn007/autotrain-Napkin-872827783
```
Or the Python API:
```python
from transformers import AutoModelForSequenceClassification, AutoTokenizer
model = AutoModelForSequenceClassification.from_pretrained("Yarn007/autotrain-Napkin-872827783", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("Yarn007/autotrain-Napkin-872827783", use_auth_token=True)
inputs = tokenizer("I love AutoTrain", return_tensors="pt")
outputs = model(**inputs)
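# Illustrative addition, not part of the original card: turn the raw
# logits into a predicted label via softmax + argmax. Label names are
# read from the model config's id2label mapping.
import torch
probs = torch.softmax(outputs.logits, dim=-1)
pred_id = int(probs.argmax(dim=-1))
print(model.config.id2label[pred_id], probs[0, pred_id].item())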
``` | 1,382 |
drGOD/rubert-tiny-finetuned-cola | null | ---
license: mit
tags:
- generated_from_trainer
metrics:
- matthews_correlation
model-index:
- name: rubert-tiny-finetuned-cola
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# rubert-tiny-finetuned-cola
This model is a fine-tuned version of [cointegrated/rubert-tiny](https://huggingface.co/cointegrated/rubert-tiny) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0013
- Matthews Correlation: 0.9994
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5.0640317288646484e-05
- train_batch_size: 32
- eval_batch_size: 16
- seed: 28
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
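For reference, a minimal sketch of how the hyperparameters above map onto the `TrainingArguments` API; the `output_dir` is an assumption, and everything not listed is left at the Trainer defaults (which match the Adam betas and epsilon reported above):
```python
from transformers import TrainingArguments

# Mirrors the hyperparameter list from this card; output_dir is illustrative.
training_args = TrainingArguments(
    output_dir="rubert-tiny-finetuned-cola",
    learning_rate=5.0640317288646484e-05,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=16,
    seed=28,
    lr_scheduler_type="linear",
    num_train_epochs=5,
)
```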
### Training results
| Training Loss | Epoch | Step | Validation Loss | Matthews Correlation |
|:-------------:|:-----:|:-----:|:---------------:|:--------------------:|
| 0.0326 | 1.0 | 2667 | 0.0180 | 0.9907 |
| 0.0143 | 2.0 | 5334 | 0.0075 | 0.9957 |
| 0.0102 | 3.0 | 8001 | 0.0049 | 0.9979 |
| 0.0026 | 4.0 | 10668 | 0.0019 | 0.9993 |
| 0.0018 | 5.0 | 13335 | 0.0013 | 0.9994 |
### Framework versions
- Transformers 4.19.2
- Pytorch 1.11.0+cu113
- Datasets 2.2.1
- Tokenizers 0.12.1
| 1,733 |
Danni/distilbert-base-uncased-finetuned-dbpedia-label | [
"Animal",
"Biomolecule",
"ChemicalSubstance",
"Company",
"Device",
"Food",
"MeanOfTransportation",
"Plant",
"Product"
] | Entry not found | 15 |
anuj55/roberta-base-squad2-finetuned-polifact | null | Entry not found | 15 |
CEBaB/roberta-base.CEBaB.absa.exclusive.seed_66 | [
"0",
"1",
"2"
] | Entry not found | 15 |
CEBaB/lstm.CEBaB.absa.exclusive.seed_66 | [
"0",
"1",
"2"
] | Entry not found | 15 |
CEBaB/lstm.CEBaB.absa.exclusive.seed_77 | [
"0",
"1",
"2"
] | Entry not found | 15 |
CEBaB/lstm.CEBaB.absa.exclusive.seed_88 | [
"0",
"1",
"2"
] | Entry not found | 15 |
CEBaB/lstm.CEBaB.absa.exclusive.seed_99 | [
"0",
"1",
"2"
] | Entry not found | 15 |
CEBaB/lstm.CEBaB.absa.inclusive.seed_42 | [
"0",
"1",
"2"
] | Entry not found | 15 |
CEBaB/lstm.CEBaB.absa.inclusive.seed_66 | [
"0",
"1",
"2"
] | Entry not found | 15 |
CEBaB/lstm.CEBaB.absa.inclusive.seed_77 | [
"0",
"1",
"2"
] | Entry not found | 15 |
FrGes/xlm-roberta-large-finetuned-EUJAV-datasetAB | [
"LABEL_0",
"LABEL_1",
"LABEL_2"
] | Fine-tuned model based on
# XLM-RoBERTa (large-sized model)
Data for fine-tuning: Italian vaccine stance data (1,042 training tweets and 348 evaluation tweets).
# BibTeX entry and citation info
To be added. | 206 |
Jeevesh8/512seq_len_6ep_bert_ft_cola-0 | null | Entry not found | 15 |
Jeevesh8/512seq_len_6ep_bert_ft_cola-1 | null | Entry not found | 15 |
Jeevesh8/512seq_len_6ep_bert_ft_cola-2 | null | Entry not found | 15 |
Jeevesh8/512seq_len_6ep_bert_ft_cola-54 | null | Entry not found | 15 |
Jeevesh8/512seq_len_6ep_bert_ft_cola-55 | null | Entry not found | 15 |
Jeevesh8/512seq_len_6ep_bert_ft_cola-64 | null | Entry not found | 15 |
Jeevesh8/512seq_len_6ep_bert_ft_cola-65 | null | Entry not found | 15 |
Jeevesh8/512seq_len_6ep_bert_ft_cola-66 | null | Entry not found | 15 |
Jeevesh8/512seq_len_6ep_bert_ft_cola-67 | null | Entry not found | 15 |
Jeevesh8/512seq_len_6ep_bert_ft_cola-69 | null | Entry not found | 15 |
Jeevesh8/512seq_len_6ep_bert_ft_cola-82 | null | Entry not found | 15 |
Jeevesh8/512seq_len_6ep_bert_ft_cola-83 | null | Entry not found | 15 |
Jeevesh8/512seq_len_6ep_bert_ft_cola-84 | null | Entry not found | 15 |
Jeevesh8/512seq_len_6ep_bert_ft_cola-87 | null | Entry not found | 15 |
Jeevesh8/512seq_len_6ep_bert_ft_cola-88 | null | Entry not found | 15 |
Jeevesh8/512seq_len_6ep_bert_ft_cola-89 | null | Entry not found | 15 |
Jeevesh8/512seq_len_6ep_bert_ft_cola-90 | null | Entry not found | 15 |
Jeevesh8/512seq_len_6ep_bert_ft_cola-91 | null | Entry not found | 15 |
Jeevesh8/512seq_len_6ep_bert_ft_cola-96 | null | Entry not found | 15 |
Jeevesh8/512seq_len_6ep_bert_ft_cola-97 | null | Entry not found | 15 |
Jeevesh8/512seq_len_6ep_bert_ft_cola-98 | null | Entry not found | 15 |
Suhong/distilbert-base-uncased-emoji_mask_wearing | [
"LABEL_0",
"LABEL_1",
"LABEL_2",
"LABEL_3",
"LABEL_4",
"LABEL_5",
"LABEL_6",
"LABEL_7",
"LABEL_8",
"LABEL_9"
] | Entry not found | 15 |
calcworks/distilbert-base-uncased-finetuned-clinc | [
"accept_reservations",
"account_blocked",
"alarm",
"application_status",
"apr",
"are_you_a_bot",
"balance",
"bill_balance",
"bill_due",
"book_flight",
"book_hotel",
"calculator",
"calendar",
"calendar_update",
"calories",
"cancel",
"cancel_reservation",
"car_rental",
"card_declin... | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- clinc_oos
metrics:
- accuracy
model-index:
- name: distilbert-base-uncased-finetuned-clinc
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: clinc_oos
type: clinc_oos
args: plus
metrics:
- name: Accuracy
type: accuracy
value: 0.9161290322580645
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-clinc
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the clinc_oos dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7755
- Accuracy: 0.9161
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 48
- eval_batch_size: 48
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 4.2893 | 1.0 | 318 | 3.2831 | 0.7403 |
| 2.629 | 2.0 | 636 | 1.8731 | 0.8348 |
| 1.5481 | 3.0 | 954 | 1.1581 | 0.8906 |
| 1.0137 | 4.0 | 1272 | 0.8585 | 0.9077 |
| 0.797 | 5.0 | 1590 | 0.7755 | 0.9161 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.11.0
- Datasets 1.16.1
- Tokenizers 0.10.3
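## Usage
A minimal inference sketch, assuming the standard `pipeline` API; the example utterance and the expected intent are illustrative, not taken from the original card:
```python
from transformers import pipeline

# Load the fine-tuned intent classifier; labels follow this row's label list.
classifier = pipeline(
    "text-classification",
    model="calcworks/distilbert-base-uncased-finetuned-clinc",
)
print(classifier("book me a flight from boston to new york"))
# Expected to return an intent such as 'book_flight' with a confidence score.
```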
| 1,884 |
nreimers/mmarco-mMiniLMv2-L12-H384-v1 | [
"LABEL_0"
] | Entry not found | 15 |
ragarwal/deberta-v3-base-nli-mixer-binary | [
"LABEL_0"
] | ---
license: mit
---
**NLI-Mixer** is an attempt to tackle the Natural Language Inference (NLI) task by mixing multiple datasets together.
The approach is simple:
1. Combine all available NLI data without any domain-dependent re-balancing or re-weighting.
2. Fine-tune several SOTA transformers of different sizes (20M to 300M parameters) on the combined data.
3. Evaluate on challenging NLI datasets.
This model was trained using [SentenceTransformers](https://sbert.net) [Cross-Encoder](https://www.sbert.net/examples/applications/cross-encoder/README.html) class. It is based on [microsoft/deberta-v3-base](https://huggingface.co/microsoft/deberta-v3-base).
### Data
20+ NLI datasets were combined to train a binary classification model. The `contradiction` and `neutral` labels were combined to form a `non-entailment` class.
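Concretely, the binarization amounts to a simple label mapping; the sketch below assumes string label names, which may differ across the individual source datasets:
```python
# Collapse 3-way NLI labels into the binary scheme described above.
# The label strings here are illustrative assumptions.
BINARY_LABEL = {
    "entailment": "entailment",
    "neutral": "non-entailment",
    "contradiction": "non-entailment",
}

def binarize(example):
    example["label"] = BINARY_LABEL[example["label"]]
    return example
```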
### Usage
In Transformers
```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification
import torch
from torch.nn.functional import softmax, sigmoid
device = "cuda" if torch.cuda.is_available() else "cpu"
model_name="ragarwal/deberta-v3-base-nli-mixer-binary"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name)
sentence = "During its monthly call, the National Oceanic and Atmospheric Administration warned of \
increased temperatures and low precipitation"
labels = ["Computer", "Climate Change", "Tablet", "Football", "Artificial Intelligence", "Global Warming"]
features = tokenizer([[sentence, l] for l in labels], padding=True, truncation=True, return_tensors="pt")
model.eval()
with torch.no_grad():
scores = model(**features).logits
print("Multi-Label:", sigmoid(scores)) #Multi-Label Classification
print("Single-Label:", softmax(scores, dim=0)) #Single-Label Classification
#Multi-Label: tensor([[0.0412],[0.2436],[0.0394],[0.0020],[0.0050],[0.1424]])
#Single-Label: tensor([[0.0742],[0.5561],[0.0709],[0.0035],[0.0087],[0.2867]])
```
In Sentence-Transformers
```python
from sentence_transformers import CrossEncoder
model_name="ragarwal/deberta-v3-base-nli-mixer-binary"
model = CrossEncoder(model_name, max_length=256)
sentence = "During its monthly call, the National Oceanic and Atmospheric Administration warned of \
increased temperatures and low precipitation"
labels = ["Computer", "Climate Change", "Tablet", "Football", "Artificial Intelligence", "Global Warming"]
scores = model.predict([[sentence, l] for l in labels])
print(scores)
#array([0.04118565, 0.2435827 , 0.03941465, 0.00203637, 0.00501176, 0.1423797], dtype=float32)
``` | 2,631 |
apthakur/distilbert-base-uncased-apala-finetuned-emotion | [
"LABEL_0",
"LABEL_1",
"LABEL_2",
"LABEL_3",
"LABEL_4",
"LABEL_5"
] | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-apala-finetuned-emotion
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-apala-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.3696
- Accuracy: 0.476
- F1: 0.4250
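For reference, a sketch of a `compute_metrics` function that would produce these two metrics; the weighted F1 average is an assumption, since the card does not state which averaging was used:
```python
import numpy as np
from sklearn.metrics import accuracy_score, f1_score

def compute_metrics(eval_pred):
    # eval_pred is the (logits, labels) pair the Trainer passes in.
    logits, labels = eval_pred
    preds = np.argmax(logits, axis=-1)
    return {
        "accuracy": accuracy_score(labels, preds),
        "f1": f1_score(labels, preds, average="weighted"),
    }
```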
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| No log | 1.0 | 250 | 1.3899 | 0.476 | 0.4059 |
| No log | 2.0 | 500 | 1.3696 | 0.476 | 0.4250 |
### Framework versions
- Transformers 4.19.2
- Pytorch 1.11.0+cu113
- Datasets 2.2.2
- Tokenizers 0.12.1
| 1,513 |
connectivity/feather_berts_1 | [
"LABEL_0",
"LABEL_1",
"LABEL_2"
] | Entry not found | 15 |
connectivity/feather_berts_2 | [
"LABEL_0",
"LABEL_1",
"LABEL_2"
] | Entry not found | 15 |
connectivity/feather_berts_3 | [
"LABEL_0",
"LABEL_1",
"LABEL_2"
] | Entry not found | 15 |
connectivity/feather_berts_4 | [
"LABEL_0",
"LABEL_1",
"LABEL_2"
] | Entry not found | 15 |
connectivity/feather_berts_5 | [
"LABEL_0",
"LABEL_1",
"LABEL_2"
] | Entry not found | 15 |
connectivity/feather_berts_6 | [
"LABEL_0",
"LABEL_1",
"LABEL_2"
] | Entry not found | 15 |
connectivity/feather_berts_7 | [
"LABEL_0",
"LABEL_1",
"LABEL_2"
] | Entry not found | 15 |
connectivity/feather_berts_8 | [
"LABEL_0",
"LABEL_1",
"LABEL_2"
] | Entry not found | 15 |
connectivity/feather_berts_9 | [
"LABEL_0",
"LABEL_1",
"LABEL_2"
] | Entry not found | 15 |
connectivity/feather_berts_10 | [
"LABEL_0",
"LABEL_1",
"LABEL_2"
] | Entry not found | 15 |
connectivity/feather_berts_11 | [
"LABEL_0",
"LABEL_1",
"LABEL_2"
] | Entry not found | 15 |
connectivity/feather_berts_12 | [
"LABEL_0",
"LABEL_1",
"LABEL_2"
] | Entry not found | 15 |
connectivity/feather_berts_30 | [
"LABEL_0",
"LABEL_1",
"LABEL_2"
] | Entry not found | 15 |
connectivity/feather_berts_31 | [
"LABEL_0",
"LABEL_1",
"LABEL_2"
] | Entry not found | 15 |
connectivity/feather_berts_33 | [
"LABEL_0",
"LABEL_1",
"LABEL_2"
] | Entry not found | 15 |
connectivity/feather_berts_34 | [
"LABEL_0",
"LABEL_1",
"LABEL_2"
] | Entry not found | 15 |
connectivity/feather_berts_35 | [
"LABEL_0",
"LABEL_1",
"LABEL_2"
] | Entry not found | 15 |
connectivity/feather_berts_36 | [
"LABEL_0",
"LABEL_1",
"LABEL_2"
] | Entry not found | 15 |
connectivity/feather_berts_39 | [
"LABEL_0",
"LABEL_1",
"LABEL_2"
] | Entry not found | 15 |
connectivity/feather_berts_41 | [
"LABEL_0",
"LABEL_1",
"LABEL_2"
] | Entry not found | 15 |
connectivity/feather_berts_46 | [
"LABEL_0",
"LABEL_1",
"LABEL_2"
] | Entry not found | 15 |
connectivity/feather_berts_47 | [
"LABEL_0",
"LABEL_1",
"LABEL_2"
] | Entry not found | 15 |
connectivity/feather_berts_48 | [
"LABEL_0",
"LABEL_1",
"LABEL_2"
] | Entry not found | 15 |
connectivity/feather_berts_49 | [
"LABEL_0",
"LABEL_1",
"LABEL_2"
] | Entry not found | 15 |
connectivity/feather_berts_50 | [
"LABEL_0",
"LABEL_1",
"LABEL_2"
] | Entry not found | 15 |
connectivity/feather_berts_51 | [
"LABEL_0",
"LABEL_1",
"LABEL_2"
] | Entry not found | 15 |
connectivity/feather_berts_52 | [
"LABEL_0",
"LABEL_1",
"LABEL_2"
] | Entry not found | 15 |
connectivity/feather_berts_53 | [
"LABEL_0",
"LABEL_1",
"LABEL_2"
] | Entry not found | 15 |
connectivity/feather_berts_54 | [
"LABEL_0",
"LABEL_1",
"LABEL_2"
] | Entry not found | 15 |
connectivity/feather_berts_55 | [
"LABEL_0",
"LABEL_1",
"LABEL_2"
] | Entry not found | 15 |
connectivity/feather_berts_56 | [
"LABEL_0",
"LABEL_1",
"LABEL_2"
] | Entry not found | 15 |
connectivity/feather_berts_57 | [
"LABEL_0",
"LABEL_1",
"LABEL_2"
] | Entry not found | 15 |
connectivity/feather_berts_58 | [
"LABEL_0",
"LABEL_1",
"LABEL_2"
] | Entry not found | 15 |
connectivity/feather_berts_59 | [
"LABEL_0",
"LABEL_1",
"LABEL_2"
] | Entry not found | 15 |
connectivity/feather_berts_60 | [
"LABEL_0",
"LABEL_1",
"LABEL_2"
] | Entry not found | 15 |
connectivity/feather_berts_62 | [
"LABEL_0",
"LABEL_1",
"LABEL_2"
] | Entry not found | 15 |
connectivity/feather_berts_65 | [
"LABEL_0",
"LABEL_1",
"LABEL_2"
] | Entry not found | 15 |
connectivity/feather_berts_71 | [
"LABEL_0",
"LABEL_1",
"LABEL_2"
] | Entry not found | 15 |
connectivity/feather_berts_72 | [
"LABEL_0",
"LABEL_1",
"LABEL_2"
] | Entry not found | 15 |
connectivity/feather_berts_76 | [
"LABEL_0",
"LABEL_1",
"LABEL_2"
] | Entry not found | 15 |
connectivity/feather_berts_80 | [
"LABEL_0",
"LABEL_1",
"LABEL_2"
] | Entry not found | 15 |
connectivity/feather_berts_81 | [
"LABEL_0",
"LABEL_1",
"LABEL_2"
] | Entry not found | 15 |
connectivity/feather_berts_83 | [
"LABEL_0",
"LABEL_1",
"LABEL_2"
] | Entry not found | 15 |
connectivity/feather_berts_89 | [
"LABEL_0",
"LABEL_1",
"LABEL_2"
] | Entry not found | 15 |
connectivity/feather_berts_90 | [
"LABEL_0",
"LABEL_1",
"LABEL_2"
] | Entry not found | 15 |
connectivity/feather_berts_91 | [
"LABEL_0",
"LABEL_1",
"LABEL_2"
] | Entry not found | 15 |
connectivity/feather_berts_93 | [
"LABEL_0",
"LABEL_1",
"LABEL_2"
] | Entry not found | 15 |
connectivity/feather_berts_95 | [
"LABEL_0",
"LABEL_1",
"LABEL_2"
] | Entry not found | 15 |
connectivity/feather_berts_96 | [
"LABEL_0",
"LABEL_1",
"LABEL_2"
] | Entry not found | 15 |
connectivity/feather_berts_97 | [
"LABEL_0",
"LABEL_1",
"LABEL_2"
] | Entry not found | 15 |