| modelId | lastModified | tags | pipeline_tag | author | config | securityStatus | id | likes | downloads | library_name | created | card | card_len | embeddings |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
Sejan/distilbert_classifier_newsgroups | 2023-05-18T23:17:32.000Z | [
"transformers",
"tf",
"distilbert",
"text-classification",
"generated_from_keras_callback",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | text-classification | Sejan | null | null | Sejan/distilbert_classifier_newsgroups | 0 | 2 | transformers | 2023-05-18T23:16:57 | ---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: distilbert_classifier_newsgroups
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# distilbert_classifier_newsgroups
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': None, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': True, 'is_legacy_optimizer': False, 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 1908, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32
### Training results
### Framework versions
- Transformers 4.28.0
- TensorFlow 2.12.0
- Datasets 2.12.0
- Tokenizers 0.13.3
| 1,471 | [
[
-0.038665771484375,
-0.0419921875,
0.021209716796875,
0.0084075927734375,
-0.033599853515625,
-0.0068206787109375,
-0.0117340087890625,
-0.0108184814453125,
-0.0029163360595703125,
-0.006195068359375,
-0.041473388671875,
-0.0504150390625,
-0.067138671875,
-0... |
seegs2248/dp2 | 2023-05-19T00:02:28.000Z | [
"transformers",
"pytorch",
"bert",
"text-classification",
"dp2",
"en",
"endpoints_compatible",
"region:us"
] | text-classification | seegs2248 | null | null | seegs2248/dp2 | 1 | 2 | transformers | 2023-05-18T23:49:06 | ---
language: "en"
tags:
- dp2
widget:
- text: "oh and we'll mi thing uh is there bike clo ars or bike crac where i can park my thee"
- text: "oh and one more thing uhhh is there bike lockers or a bike rack where i can park my bike"
- text: "ni yeah that sounds great ummm dold you have the any idea er could you check for me if there's hat three wifie available there"
- text: "nice yeah that sounds great ummm do you have any idea or could you check for me if there's uhhh free wi-fi available there"
- text: "perfect and what is the check kin time for that"
---
This is the model used for knowledge cluster classification in the DSTC10 track 2 knowledge selection task, trained with double heads, i.e., a classifier head and an LM head.
For further information, please refer to the GitHub repository: https://github.com/yctam/dstc10_track2_task2. This model is used to predict knowledge clusters under noisy dialogues generated by speech recognition errors.
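For a quick spot check, here is a minimal inference sketch (an assumption, not from the card: the classifier head loads through the standard `text-classification` pipeline; the double-head setup may instead require the code in the linked repository):
```python
from transformers import pipeline

# Hedged sketch: score a noisy, ASR-style utterance with the cluster classifier.
classifier = pipeline("text-classification", model="seegs2248/dp2")
print(classifier("oh and one more thing uhhh is there bike lockers or a bike rack where i can park my bike"))
```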
--- | 966 | [
[
-0.0265045166015625,
-0.043487548828125,
0.039886474609375,
-0.01456451416015625,
-0.0193023681640625,
0.004848480224609375,
0.0013532638549804688,
-0.031585693359375,
0.006687164306640625,
0.055328369140625,
-0.06732177734375,
-0.03411865234375,
-0.048248291015... |
sajid73/bert-fine-tuned-cola | 2023-05-19T00:10:54.000Z | [
"transformers",
"pytorch",
"tensorboard",
"bert",
"text-classification",
"generated_from_trainer",
"dataset:glue",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | text-classification | sajid73 | null | null | sajid73/bert-fine-tuned-cola | 0 | 2 | transformers | 2023-05-18T23:54:31 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- glue
model-index:
- name: bert-fine-tuned-cola
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-fine-tuned-cola
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the glue dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Framework versions
- Transformers 4.29.2
- Pytorch 2.0.1+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
| 1,022 | [
[
-0.0259857177734375,
-0.06329345703125,
0.004642486572265625,
0.0229339599609375,
-0.022857666015625,
-0.01593017578125,
-0.01334381103515625,
-0.0168304443359375,
0.0250701904296875,
0.01277923583984375,
-0.05340576171875,
-0.023590087890625,
-0.047821044921875... |
cmpatino/dqn-SpaceInvadersNoFrameskip-v4 | 2023-05-19T00:21:40.000Z | [
"stable-baselines3",
"SpaceInvadersNoFrameskip-v4",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | cmpatino | null | null | cmpatino/dqn-SpaceInvadersNoFrameskip-v4 | 0 | 2 | stable-baselines3 | 2023-05-19T00:21:02 | ---
library_name: stable-baselines3
tags:
- SpaceInvadersNoFrameskip-v4
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: DQN
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: SpaceInvadersNoFrameskip-v4
type: SpaceInvadersNoFrameskip-v4
metrics:
- type: mean_reward
value: 638.50 +/- 179.43
name: mean_reward
verified: false
---
# **DQN** Agent playing **SpaceInvadersNoFrameskip-v4**
This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3)
and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo).
The RL Zoo is a training framework for Stable Baselines3
reinforcement learning agents,
with hyperparameter optimization and pre-trained agents included.
## Usage (with SB3 RL Zoo)
RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/>
SB3: https://github.com/DLR-RM/stable-baselines3<br/>
SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib
Install the RL Zoo (with SB3 and SB3-Contrib):
```bash
pip install rl_zoo3
```
```bash
# Download model and save it into the logs/ folder
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga cmpatino -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do:
```bash
python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga cmpatino -f logs/
python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
```
## Training (with the RL Zoo)
```bash
python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/
# Upload the model and generate video (when possible)
python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga cmpatino
```
## Hyperparameters
```python
OrderedDict([('batch_size', 32),
('buffer_size', 100000),
('env_wrapper',
['stable_baselines3.common.atari_wrappers.AtariWrapper']),
('exploration_final_eps', 0.01),
('exploration_fraction', 0.1),
('frame_stack', 4),
('gradient_steps', 1),
('learning_rate', 0.0001),
('learning_starts', 100000),
('n_timesteps', 1000000.0),
('optimize_memory_usage', False),
('policy', 'CnnPolicy'),
('target_update_interval', 1000),
('train_freq', 4),
('normalize', False)])
```
| 2,691 | [
[
-0.041748046875,
-0.0362548828125,
0.0215911865234375,
0.024444580078125,
-0.01062774658203125,
-0.017822265625,
0.0120849609375,
-0.01406097412109375,
0.01329803466796875,
0.024688720703125,
-0.071044921875,
-0.035491943359375,
-0.0270233154296875,
-0.00399... |
pnfproj/sports-lover-model | 2023-05-19T00:58:58.000Z | [
"transformers",
"pytorch",
"bert",
"text-classification",
"license:cc-by-nc-nd-4.0",
"region:us"
] | text-classification | pnfproj | null | null | pnfproj/sports-lover-model | 0 | 2 | transformers | 2023-05-19T00:36:21 | ---
license: cc-by-nc-nd-4.0
inference: false
---
# sports-lover-model
A demo model for PNF that ranks all sports news high and other news low.
## Usage
To use this model, download the checkpoints. Create a new directory called `news_model` in your PNF directory and move all of this model's files into it. If your server is running, restart it. Make sure to add new links.
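As a minimal sketch of the download step (assuming `huggingface_hub` is available; the PNF path is a placeholder):
```python
from huggingface_hub import snapshot_download

# Download all checkpoint files into <your PNF directory>/news_model.
# "path/to/PNF" is a placeholder for your actual PNF install location.
snapshot_download(repo_id="pnfproj/sports-lover-model", local_dir="path/to/PNF/news_model")
```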
## Why is the Inference API disabled?
This model is intended for use in PNF **only.**
## License
License: CC-BY-NC-ND-4.0, with the following additions: (a) you may only use this model from inside PNF; (b) you may not redistribute this model. These additions override any statements inside the CC license. | 695 | [
[
-0.031463623046875,
-0.040313720703125,
0.030548095703125,
0.041534423828125,
-0.03973388671875,
-0.0289459228515625,
0.005588531494140625,
-0.0241546630859375,
0.028167724609375,
0.043487548828125,
-0.05572509765625,
-0.04364013671875,
-0.053985595703125,
0... |
michaelfeil/ct2fast-opus-mt-fr-en | 2023-05-19T00:44:58.000Z | [
"transformers",
"ctranslate2",
"translation",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | translation | michaelfeil | null | null | michaelfeil/ct2fast-opus-mt-fr-en | 1 | 2 | transformers | 2023-05-19T00:44:07 | ---
tags:
- ctranslate2
- translation
license: apache-2.0
---
# Fast Inference with CTranslate2
Speed up inference by 2x-8x using int8 inference in C++.
This is a quantized version of [Helsinki-NLP/opus-mt-fr-en](https://huggingface.co/Helsinki-NLP/opus-mt-fr-en).
```bash
pip install hf-hub-ctranslate2>=1.0.0 ctranslate2>=3.13.0
```
Converted using
```bash
ct2-transformers-converter --model Helsinki-NLP/opus-mt-fr-en --output_dir /home/michael/tmp-ct2fast-opus-mt-fr-en --force --copy_files README.md generation_config.json tokenizer_config.json vocab.json source.spm .gitattributes target.spm --quantization float16
```
Checkpoint compatible to [ctranslate2](https://github.com/OpenNMT/CTranslate2) and [hf-hub-ctranslate2](https://github.com/michaelfeil/hf-hub-ctranslate2)
- `compute_type=int8_float16` for `device="cuda"`
- `compute_type=int8` for `device="cpu"`
```python
from hf_hub_ctranslate2 import TranslatorCT2fromHfHub, GeneratorCT2fromHfHub
from transformers import AutoTokenizer
model_name = "michaelfeil/ct2fast-opus-mt-fr-en"
# use either TranslatorCT2fromHfHub or GeneratorCT2fromHfHub here, depending on model.
model = TranslatorCT2fromHfHub(
# load in int8 on CUDA
model_name_or_path=model_name,
device="cuda",
compute_type="int8_float16",
tokenizer=AutoTokenizer.from_pretrained("Helsinki-NLP/opus-mt-fr-en")
)
outputs = model.generate(
text=["How do you call a fast Flan-ingo?", "User: How are you doing?"],
)
print(outputs)
```
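For CPU-only machines, the compute-type notes above suggest the same wrapper with `int8`; a minimal variant sketch under the same assumptions as the example above:
```python
from hf_hub_ctranslate2 import TranslatorCT2fromHfHub
from transformers import AutoTokenizer

# CPU variant of the example above (sketch; int8 per the compute-type notes).
model_cpu = TranslatorCT2fromHfHub(
    model_name_or_path="michaelfeil/ct2fast-opus-mt-fr-en",
    device="cpu",
    compute_type="int8",
    tokenizer=AutoTokenizer.from_pretrained("Helsinki-NLP/opus-mt-fr-en"),
)
print(model_cpu.generate(text=["Bonjour, comment allez-vous ?"]))
```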
# Licence and other remarks:
This is just a quantized version. Licence conditions are intended to be identical to the original Hugging Face repo.
# Original description
### opus-mt-fr-en
* source languages: fr
* target languages: en
* OPUS readme: [fr-en](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/fr-en/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-02-26.zip](https://object.pouta.csc.fi/OPUS-MT-models/fr-en/opus-2020-02-26.zip)
* test set translations: [opus-2020-02-26.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/fr-en/opus-2020-02-26.test.txt)
* test set scores: [opus-2020-02-26.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/fr-en/opus-2020-02-26.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| newsdiscussdev2015-enfr.fr.en | 33.1 | 0.580 |
| newsdiscusstest2015-enfr.fr.en | 38.7 | 0.614 |
| newssyscomb2009.fr.en | 30.3 | 0.569 |
| news-test2008.fr.en | 26.2 | 0.542 |
| newstest2009.fr.en | 30.2 | 0.570 |
| newstest2010.fr.en | 32.2 | 0.590 |
| newstest2011.fr.en | 33.0 | 0.597 |
| newstest2012.fr.en | 32.8 | 0.591 |
| newstest2013.fr.en | 33.9 | 0.591 |
| newstest2014-fren.fr.en | 37.8 | 0.633 |
| Tatoeba.fr.en | 57.5 | 0.720 |
| 2,862 | [
[
-0.0345458984375,
-0.0458984375,
0.028289794921875,
0.04144287109375,
-0.02288818359375,
-0.0216522216796875,
-0.02642822265625,
-0.030517578125,
0.0082550048828125,
0.017974853515625,
-0.0275115966796875,
-0.037872314453125,
-0.047088623046875,
0.0101623535... |
nvidia/stt_be_fastconformer_hybrid_large_pc | 2023-05-19T01:18:36.000Z | [
"nemo",
"automatic-speech-recognition",
"speech",
"audio",
"Transducer",
"FastConformer",
"CTC",
"Transformer",
"pytorch",
"NeMo",
"hf-asr-leaderboard",
"be",
"dataset:mozilla-foundation/common_voice_12_0",
"arxiv:2305.05084",
"license:cc-by-4.0",
"model-index",
"region:us"
] | automatic-speech-recognition | nvidia | null | null | nvidia/stt_be_fastconformer_hybrid_large_pc | 0 | 2 | nemo | 2023-05-19T00:49:42 | ---
language:
- be
library_name: nemo
datasets:
- mozilla-foundation/common_voice_12_0
thumbnail: null
tags:
- automatic-speech-recognition
- speech
- audio
- Transducer
- FastConformer
- CTC
- Transformer
- pytorch
- NeMo
- hf-asr-leaderboard
license: cc-by-4.0
model-index:
- name: stt_be_fastconformer_hybrid_large_pc
results:
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: common-voice-12-0
type: mozilla-foundation/common_voice_12_0
config: be
split: test
args:
language: be
metrics:
- name: Test WER
type: wer
value: 2.72
- task:
name: Automatic Speech Recognition
type: automatic-speech-recognition
dataset:
name: common-voice-12-0
type: mozilla-foundation/common_voice_12_0
config: Belarusian P&C
split: test
args:
language: be
metrics:
- name: Test WER P&C
type: wer
value: 3.87
---
# NVIDIA FastConformer-Hybrid Large (be)
<style>
img {
display: inline;
}
</style>
| [](#model-architecture)
| [](#model-architecture)
| [](#datasets)
This model transcribes speech in the upper- and lower-case Belarusian alphabet, along with spaces, periods, commas, and question marks.
It is a "large" version of the FastConformer Transducer-CTC model (around 115M parameters). This is a hybrid model trained with two losses: Transducer (default) and CTC.
See the [model architecture](#model-architecture) section and [NeMo documentation](https://docs.nvidia.com/deeplearning/nemo/user-guide/docs/en/main/asr/models.html#fast-conformer) for complete architecture details.
## NVIDIA NeMo: Training
To train, fine-tune or play with the model you will need to install [NVIDIA NeMo](https://github.com/NVIDIA/NeMo). We recommend you install it after you've installed the latest PyTorch version.
```bash
pip install nemo_toolkit['all']
```
## How to Use this Model
The model is available for use in the NeMo toolkit [3], and can be used as a pre-trained checkpoint for inference or for fine-tuning on another dataset.
### Automatically instantiate the model
```python
import nemo.collections.asr as nemo_asr
asr_model = nemo_asr.models.EncDecHybridRNNTCTCBPEModel.from_pretrained(model_name="nvidia/stt_be_fastconformer_hybrid_large_pc")
```
### Transcribing using Python
First, let's get a sample
```bash
wget https://dldata-public.s3.us-east-2.amazonaws.com/2086-149220-0033.wav
```
Then simply do:
```python
asr_model.transcribe(['2086-149220-0033.wav'])
```
### Transcribing many audio files
Using Transducer mode inference:
```shell
python [NEMO_GIT_FOLDER]/examples/asr/transcribe_speech.py \
 pretrained_name="nvidia/stt_be_fastconformer_hybrid_large_pc" \
 audio_dir="<DIRECTORY CONTAINING AUDIO FILES>"
```
Using CTC mode inference:
```shell
python [NEMO_GIT_FOLDER]/examples/asr/transcribe_speech.py \
 pretrained_name="nvidia/stt_be_fastconformer_hybrid_large_pc" \
 audio_dir="<DIRECTORY CONTAINING AUDIO FILES>" \
 decoder_type="ctc"
```
### Input
This model accepts 16000 Hz Mono-channel Audio (wav files) as input.
### Output
This model provides transcribed speech as a string for a given audio sample.
## Model Architecture
FastConformer [1] is an optimized version of the Conformer model with 8x depthwise-separable convolutional downsampling. The model is trained in a multitask setup with joint Transducer and CTC decoder loss. You may find more information on the details of FastConformer here: [Fast-Conformer Model](https://docs.nvidia.com/deeplearning/nemo/user-guide/docs/en/main/asr/models.html#fast-conformer) and about Hybrid Transducer-CTC training here: [Hybrid Transducer-CTC](https://docs.nvidia.com/deeplearning/nemo/user-guide/docs/en/main/asr/models.html#hybrid-transducer-ctc).
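Because both decoders ship in one checkpoint, the decoding branch can also be switched at run time from Python; a minimal sketch, assuming NeMo's `change_decoding_strategy` helper for hybrid models:
```python
import nemo.collections.asr as nemo_asr

asr_model = nemo_asr.models.EncDecHybridRNNTCTCBPEModel.from_pretrained(
    model_name="nvidia/stt_be_fastconformer_hybrid_large_pc"
)

# Default branch: Transducer (RNNT) decoding.
print(asr_model.transcribe(["2086-149220-0033.wav"]))

# Switch to the auxiliary CTC decoder (non-autoregressive, typically faster).
asr_model.change_decoding_strategy(decoder_type="ctc")
print(asr_model.transcribe(["2086-149220-0033.wav"]))
```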
## Training
The NeMo toolkit [3] was used for training the models for several hundred epochs. These models were trained with this [example script](https://github.com/NVIDIA/NeMo/blob/main/examples/asr/asr_hybrid_transducer_ctc/speech_to_text_hybrid_rnnt_ctc_bpe.py) and this [base config](https://github.com/NVIDIA/NeMo/blob/main/examples/asr/conf/fastconformer/hybrid_transducer_ctc/fastconformer_hybrid_transducer_ctc_bpe.yaml).
The tokenizers for these models were built using the text transcripts of the train set with this [script](https://github.com/NVIDIA/NeMo/blob/main/scripts/tokenizers/process_asr_text_tokenizer.py).
### Datasets
All the models in this collection are trained on the MCV12 BY corpus, comprising 1500 hours of Belarusian speech.
## Performance
The performance of Automatic Speech Recognition models is measured using Word Error Rate (WER). Since this model is trained on multiple domains and a much larger corpus, it will generally perform better at transcribing audio in general.
The following tables summarize the performance of the available models in this collection with the Transducer decoder. Performance of the ASR models is reported in terms of Word Error Rate (WER%) with greedy decoding.
a) On data without Punctuation and Capitalization with Transducer decoder
| **Version** | **Tokenizer** | **Vocabulary Size** | **MCV12 DEV** | **MCV12 TEST** |
|:-----------:|:---------------------:|:-------------------:|:-------------:|:--------------:|
| 1.18.0 | SentencePiece Unigram | 1024 | 2.68 | 2.72 |
b) On data with Punctuation and Capitalization with Transducer decoder
| **Version** | **Tokenizer** | **Vocabulary Size** | **MCV12 DEV** | **MCV12 TEST** |
|:-----------:|:---------------------:|:-------------------:|:-------------:|:--------------:|
| 1.18.0 | SentencePiece Unigram | 1024 | 3.84 | 3.87 |
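For reference, WER is the word-level edit distance between a hypothesis and its reference, divided by the number of reference words; a minimal sketch with NeMo's helper (placeholder strings, assuming this import path):
```python
from nemo.collections.asr.metrics.wer import word_error_rate

# Placeholder Belarusian strings; one substituted word out of two -> WER = 50%.
hypotheses = ["прывітанне свет"]
references = ["прывітанне сусвет"]
print(f"WER: {100 * word_error_rate(hypotheses=hypotheses, references=references):.2f}%")
```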
## Limitations
Since this model was trained on publicly available speech datasets, its performance might degrade for speech that includes technical terms or vernacular the model has not been trained on. The model might also perform worse for accented speech. The model only outputs the punctuation marks ```'.', ',', '?' ``` and hence might not do well in scenarios where other punctuation is expected.
## NVIDIA Riva: Deployment
[NVIDIA Riva](https://developer.nvidia.com/riva) is an accelerated speech AI SDK deployable on-prem, in all clouds, multi-cloud, hybrid, on edge, and embedded.
Additionally, Riva provides:
* World-class out-of-the-box accuracy for the most common languages with model checkpoints trained on proprietary data with hundreds of thousands of GPU-compute hours
* Best in class accuracy with run-time word boosting (e.g., brand and product names) and customization of acoustic model, language model, and inverse text normalization
* Streaming speech recognition, Kubernetes compatible scaling, and enterprise-grade support
Although this model isn’t supported yet by Riva, the [list of supported models is here](https://huggingface.co/models?other=Riva).
Check out [Riva live demo](https://developer.nvidia.com/riva#demos).
## References
[1] [Fast Conformer with Linearly Scalable Attention for Efficient Speech Recognition](https://arxiv.org/abs/2305.05084)
[2] [Google Sentencepiece Tokenizer](https://github.com/google/sentencepiece)
[3] [NVIDIA NeMo Toolkit](https://github.com/NVIDIA/NeMo)
## Licence
License to use this model is covered by the [CC-BY-4.0](https://creativecommons.org/licenses/by/4.0/). By downloading the public and release version of the model, you accept the terms and conditions of the [CC-BY-4.0](https://creativecommons.org/licenses/by/4.0/) license. | 7,858 | [
[
-0.0305633544921875,
-0.05706787109375,
0.015045166015625,
-0.0004284381866455078,
-0.025390625,
0.0015745162963867188,
-0.0191497802734375,
-0.039764404296875,
-0.00273895263671875,
0.0228729248046875,
-0.037811279296875,
-0.042572021484375,
-0.051849365234375,... |
SHENMU007/neunit_tts_BASE_V4.1 | 2023-05-19T03:59:35.000Z | [
"transformers",
"pytorch",
"tensorboard",
"speecht5",
"text-to-audio",
"1.1.0",
"generated_from_trainer",
"zh",
"dataset:facebook/voxpopuli",
"license:mit",
"endpoints_compatible",
"region:us"
] | text-to-audio | SHENMU007 | null | null | SHENMU007/neunit_tts_BASE_V4.1 | 0 | 2 | transformers | 2023-05-19T01:43:44 | ---
language:
- zh
license: mit
tags:
- 1.1.0
- generated_from_trainer
datasets:
- facebook/voxpopuli
model-index:
- name: SpeechT5 TTS Dutch neunit
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# SpeechT5 TTS Dutch neunit
This model is a fine-tuned version of [microsoft/speecht5_tts](https://huggingface.co/microsoft/speecht5_tts) on the VoxPopuli dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 4000
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.29.0.dev0
- Pytorch 2.0.0+cu117
- Datasets 2.11.0
- Tokenizers 0.12.1
| 1,251 | [
[
-0.035003662109375,
-0.0517578125,
-0.005970001220703125,
0.012664794921875,
-0.025421142578125,
-0.019439697265625,
-0.0176239013671875,
-0.026519775390625,
0.01140594482421875,
0.021240234375,
-0.0411376953125,
-0.050079345703125,
-0.043182373046875,
0.008... |
xeonkai/setfit-articles-labels | 2023-05-20T00:23:12.000Z | [
"sentence-transformers",
"pytorch",
"bert",
"setfit",
"text-classification",
"arxiv:2209.11055",
"license:apache-2.0",
"region:us"
] | text-classification | xeonkai | null | null | xeonkai/setfit-articles-labels | 0 | 2 | sentence-transformers | 2023-05-19T01:58:20 | ---
license: apache-2.0
tags:
- setfit
- sentence-transformers
- text-classification
pipeline_tag: text-classification
---
# xeonkai/setfit-articles-labels
This is a [SetFit model](https://github.com/huggingface/setfit) that can be used for text classification. The model has been trained using an efficient few-shot learning technique that involves:
1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning.
2. Training a classification head with features from the fine-tuned Sentence Transformer. A condensed sketch of both steps is shown below.
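The following is a minimal training sketch under the pre-1.0 SetFit API (`SetFitTrainer`; newer releases expose `Trainer`/`TrainingArguments` instead), with a hypothetical two-example dataset and base model:
```python
from datasets import Dataset
from sentence_transformers.losses import CosineSimilarityLoss
from setfit import SetFitModel, SetFitTrainer

# Hypothetical few-shot dataset; the real training data for this model is not published.
train_dataset = Dataset.from_dict({
    "text": ["i loved the spiderman movie!", "pineapple on pizza is the worst 🤮"],
    "label": [1, 0],
})

model = SetFitModel.from_pretrained("sentence-transformers/paraphrase-mpnet-base-v2")
trainer = SetFitTrainer(
    model=model,
    train_dataset=train_dataset,
    loss_class=CosineSimilarityLoss,  # step 1: contrastive fine-tuning of the embedding body
    num_iterations=20,                # contrastive pairs generated per example
)
trainer.train()  # runs step 1, then fits the classification head (step 2)
```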
## Usage
To use this model for inference, first install the SetFit library:
```bash
python -m pip install setfit
```
You can then run inference as follows:
```python
from setfit import SetFitModel
# Download from Hub and run inference
model = SetFitModel.from_pretrained("xeonkai/setfit-articles-labels")
# Run inference
preds = model(["i loved the spiderman movie!", "pineapple on pizza is the worst 🤮"])
```
## BibTeX entry and citation info
```bibtex
@article{https://doi.org/10.48550/arxiv.2209.11055,
doi = {10.48550/ARXIV.2209.11055},
url = {https://arxiv.org/abs/2209.11055},
author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren},
keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences},
title = {Efficient Few-Shot Learning Without Prompts},
publisher = {arXiv},
year = {2022},
copyright = {Creative Commons Attribution 4.0 International}
}
```
| 1,647 | [
[
-0.01312255859375,
-0.048431396484375,
0.026397705078125,
-0.01230621337890625,
-0.00595855712890625,
-0.00698089599609375,
-0.01555633544921875,
-0.01509857177734375,
0.0012445449829101562,
0.03125,
-0.034515380859375,
-0.0247802734375,
-0.03857421875,
0.00... |
gwnavarro/assignment1_distilbert_classifier_newsgroups | 2023-05-20T09:24:56.000Z | [
"transformers",
"tf",
"distilbert",
"text-classification",
"generated_from_keras_callback",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | text-classification | gwnavarro | null | null | gwnavarro/assignment1_distilbert_classifier_newsgroups | 0 | 2 | transformers | 2023-05-19T02:58:05 | ---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: assignment1_distilbert_classifier_newsgroups
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# assignment1_distilbert_classifier_newsgroups
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': None, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': True, 'is_legacy_optimizer': False, 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 1908, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32
### Training results
### Framework versions
- Transformers 4.28.0
- TensorFlow 2.12.0
- Datasets 2.12.0
- Tokenizers 0.13.3
| 1,495 | [
[
-0.03759765625,
-0.040008544921875,
0.02227783203125,
0.0087738037109375,
-0.03076171875,
-0.01067352294921875,
-0.01102447509765625,
-0.007518768310546875,
-0.005168914794921875,
-0.0072784423828125,
-0.040069580078125,
-0.04754638671875,
-0.06573486328125,
... |
khyatikhandelwal/autotrain-hatespeech-59891134251 | 2023-05-19T06:58:10.000Z | [
"transformers",
"pytorch",
"bert",
"text-classification",
"autotrain",
"hi",
"dataset:khyatikhandelwal/autotrain-data-hatespeech",
"co2_eq_emissions",
"endpoints_compatible",
"region:us"
] | text-classification | khyatikhandelwal | null | null | khyatikhandelwal/autotrain-hatespeech-59891134251 | 0 | 2 | transformers | 2023-05-19T06:56:48 | ---
tags:
- autotrain
- text-classification
language:
- hi
widget:
- text: "I love AutoTrain 🤗"
datasets:
- khyatikhandelwal/autotrain-data-hatespeech
co2_eq_emissions:
emissions: 0.3713708751565804
---
# Model Trained Using AutoTrain
- Problem type: Binary Classification
- Model ID: 59891134251
- CO2 Emissions (in grams): 0.3714
## Validation Metrics
- Loss: 0.000
- Accuracy: 1.000
- Precision: 1.000
- Recall: 1.000
- AUC: 1.000
- F1: 1.000
## Usage
You can use cURL to access this model:
```bash
$ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoTrain"}' https://api-inference.huggingface.co/models/khyatikhandelwal/autotrain-hatespeech-59891134251
```
Or Python API:
```python
from transformers import AutoModelForSequenceClassification, AutoTokenizer

model = AutoModelForSequenceClassification.from_pretrained("khyatikhandelwal/autotrain-hatespeech-59891134251", use_auth_token=True)
tokenizer = AutoTokenizer.from_pretrained("khyatikhandelwal/autotrain-hatespeech-59891134251", use_auth_token=True)
inputs = tokenizer("I love AutoTrain", return_tensors="pt")
outputs = model(**inputs)
# Map the highest-scoring logit back to its label name
print(model.config.id2label[outputs.logits.argmax(dim=-1).item()])
``` | 1,170 | [
[
-0.03076171875,
-0.0297393798828125,
0.01067352294921875,
0.0089263916015625,
-0.00801849365234375,
-0.0045166015625,
0.011383056640625,
-0.0131683349609375,
0.0017080307006835938,
0.00881195068359375,
-0.05584716796875,
-0.034637451171875,
-0.063720703125,
... |
kentnish/ppo-LunarLander-v2 | 2023-05-19T08:05:53.000Z | [
"stable-baselines3",
"LunarLander-v2",
"deep-reinforcement-learning",
"reinforcement-learning",
"model-index",
"region:us"
] | reinforcement-learning | kentnish | null | null | kentnish/ppo-LunarLander-v2 | 0 | 2 | stable-baselines3 | 2023-05-19T07:02:59 | ---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
results:
- task:
type: reinforcement-learning
name: reinforcement-learning
dataset:
name: LunarLander-v2
type: LunarLander-v2
metrics:
- type: mean_reward
value: 278.74 +/- 16.21
name: mean_reward
verified: false
---
# **PPO** Agent playing **LunarLander-v2**
This is a trained model of a **PPO** agent playing **LunarLander-v2**
using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).
## Usage (with Stable-baselines3)
TODO: Add your code
```python
from stable_baselines3 import ...
from huggingface_sb3 import load_from_hub
...
```
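Since the card leaves the snippet as a TODO, here is a minimal working sketch (assumptions: the checkpoint follows the RL Zoo naming `ppo-LunarLander-v2.zip`, and `gymnasium`, `huggingface_sb3`, and Box2D are installed):
```python
import gymnasium as gym
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO
from stable_baselines3.common.evaluation import evaluate_policy

# Assumed filename; check the repo's file list if loading fails.
checkpoint = load_from_hub(repo_id="kentnish/ppo-LunarLander-v2", filename="ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)

# Evaluate the loaded policy over a few episodes.
env = gym.make("LunarLander-v2")
mean_reward, std_reward = evaluate_policy(model, env, n_eval_episodes=10)
print(f"mean_reward={mean_reward:.2f} +/- {std_reward:.2f}")
```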
| 784 | [
[
-0.00023484230041503906,
-0.02716064453125,
0.017059326171875,
0.023345947265625,
-0.00606536865234375,
0.002735137939453125,
0.034454345703125,
-0.012115478515625,
0.019866943359375,
0.06500244140625,
-0.043212890625,
-0.035247802734375,
-0.0343017578125,
-... |
hbXNov/ucla-mint-finetune-sd-im1k | 2023-05-24T22:40:58.000Z | [
"diffusers",
"arxiv:2302.02503",
"license:mit",
"endpoints_compatible",
"diffusers:StableDiffusionPipeline",
"region:us"
] | null | hbXNov | null | null | hbXNov/ucla-mint-finetune-sd-im1k | 1 | 2 | diffusers | 2023-05-19T07:13:40 | ---
license: mit
---
Paper: Leaving Reality to Imagination: Robust Classification via Generated Datasets (https://arxiv.org/abs/2302.02503)
Colab Notebook for Data Generation: https://colab.research.google.com/drive/1I2IO8tD_l9JdCRJHOqlAP6ojMPq_BsoR?usp=sharing
All the generated images from the finetuned Stable Diffusion and the pretrained (base) Stable Diffusion are present here - https://drive.google.com/drive/folders/14DJyU_xx018Ir6Cw-mETKw9a0yLtc2NJ?usp=sharing
Finetuning Recipe:
1. We finetune the Stable Diffusion V1.5 model for 1 epoch on the complete ImageNet-1K training dataset, which contains ~1.3M images. The model was finetuned on a single 24GB A5000 GPU, and finetuning took ~1 day to complete.
2. The finetuning code was taken directly from the Hugging Face Diffusers library - https://github.com/huggingface/diffusers/tree/main/examples/text_to_image.
3. Link to our GitHub code: https://github.com/Hritikbansal/generative-robustness/tree/main/sd_finetune
4. The complete set of finetuning arguments are present here - https://docs.google.com/document/d/17ggIdEuhAS0rhX7gIFp2q6H0JjkpERYFkCLTO_MtdgY/edit?usp=sharing
Post-finetuning, we repeatedly sample from the generative model to generate 1.3M training and 50K validation images.
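A minimal sketch of that sampling step with the `diffusers` pipeline (the prompt template is an illustrative assumption, not necessarily the one used in the paper):
```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "hbXNov/ucla-mint-finetune-sd-im1k", torch_dtype=torch.float16
).to("cuda")

# Illustrative ImageNet-class prompt; repeat over classes and seeds to build a dataset.
image = pipe("a photo of a goldfish").images[0]
image.save("goldfish_generated.png")
```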
Github Repo for the paper: https://github.com/Hritikbansal/generative-robustness
Authors: Hritik Bansal (https://sites.google.com/view/hbansal), Aditya Grover (https://aditya-grover.github.io/) | 1,470 | [
[
-0.048126220703125,
-0.06549072265625,
0.02349853515625,
-0.0029125213623046875,
-0.01013946533203125,
-0.0017442703247070312,
-0.01511383056640625,
-0.035675048828125,
-0.00884246826171875,
0.0091705322265625,
-0.031280517578125,
-0.030181884765625,
-0.03604125... |
DataIntelligenceTeam/en_qspot_import_v2_190523 | 2023-05-19T08:50:54.000Z | [
"spacy",
"token-classification",
"en",
"model-index",
"region:us"
] | token-classification | DataIntelligenceTeam | null | null | DataIntelligenceTeam/en_qspot_import_v2_190523 | 0 | 2 | spacy | 2023-05-19T08:50:19 | ---
tags:
- spacy
- token-classification
language:
- en
model-index:
- name: en_qspot_import_v2_190523
results:
- task:
name: NER
type: token-classification
metrics:
- name: NER Precision
type: precision
value: 0.9064220183
- name: NER Recall
type: recall
value: 0.8904912123
- name: NER F Score
type: f_score
value: 0.8983859968
---
| Feature | Description |
| --- | --- |
| **Name** | `en_qspot_import_v2_190523` |
| **Version** | `0.0.0` |
| **spaCy** | `>=3.5.2,<3.6.0` |
| **Default Pipeline** | `tok2vec`, `ner` |
| **Components** | `tok2vec`, `ner` |
| **Vectors** | 514157 keys, 514157 unique vectors (300 dimensions) |
| **Sources** | n/a |
| **License** | n/a |
| **Author** | n/a |
### Label Scheme
<details>
<summary>View label scheme (21 labels for 1 components)</summary>
| Component | Labels |
| --- | --- |
| **`ner`** | `commodity`, `company`, `delivery_cap`, `delivery_country`, `delivery_location`, `delivery_port`, `delivery_state`, `delivery_statecompany`, `incoterms`, `measures`, `package_type`, `pickup_cap`, `pickup_country`, `pickup_location`, `pickup_port`, `pickup_state`, `pickup_statecompany`, `quantity`, `stackable`, `volume`, `weight` |
</details>
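A minimal loading sketch (assuming the packaged pipeline from this repo has already been installed into the environment, per the usual spaCy packaging workflow; the example sentence is illustrative):
```python
import spacy

# Assumes the packaged pipeline from this repo is installed locally.
nlp = spacy.load("en_qspot_import_v2_190523")
doc = nlp("Ship 3 pallets of machinery from Rotterdam to New York, CIF.")
for ent in doc.ents:
    print(ent.text, ent.label_)  # e.g. quantity, commodity, pickup_port, incoterms
```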
### Accuracy
| Type | Score |
| --- | --- |
| `ENTS_F` | 89.84 |
| `ENTS_P` | 90.64 |
| `ENTS_R` | 89.05 |
| `TOK2VEC_LOSS` | 17244.87 |
| `NER_LOSS` | 578546.29 | | 1,423 | [
[
-0.024078369140625,
-0.01088714599609375,
0.0218658447265625,
0.0234527587890625,
-0.04315185546875,
0.0126953125,
0.009674072265625,
-0.0101470947265625,
0.035247802734375,
0.028778076171875,
-0.0628662109375,
-0.07110595703125,
-0.04278564453125,
-0.019226... |
r45289/distilbert-base-uncased-finetuned-emotion | 2023-05-19T10:54:34.000Z | [
"transformers",
"pytorch",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:emotion",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | text-classification | r45289 | null | null | r45289/distilbert-base-uncased-finetuned-emotion | 0 | 2 | transformers | 2023-05-19T09:20:54 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-emotion
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: emotion
type: emotion
config: split
split: validation
args: split
metrics:
- name: Accuracy
type: accuracy
value: 0.922
- name: F1
type: f1
value: 0.9220675629348325
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2191
- Accuracy: 0.922
- F1: 0.9221
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| No log | 1.0 | 250 | 0.3063 | 0.9085 | 0.9063 |
| No log | 2.0 | 500 | 0.2191 | 0.922 | 0.9221 |
### Framework versions
- Transformers 4.27.3
- Pytorch 1.13.1+cu117
- Datasets 2.11.0
- Tokenizers 0.13.2
| 1,847 | [
[
-0.03656005859375,
-0.04296875,
0.014007568359375,
0.02362060546875,
-0.0263519287109375,
-0.019744873046875,
-0.0135955810546875,
-0.0108489990234375,
0.01103973388671875,
0.00855255126953125,
-0.056304931640625,
-0.052001953125,
-0.059814453125,
-0.0081558... |
IsraelSonseca/videomae-base-finetuned-ucf101_sport-subset | 2023-05-19T10:54:46.000Z | [
"transformers",
"pytorch",
"tensorboard",
"videomae",
"video-classification",
"generated_from_trainer",
"license:cc-by-nc-4.0",
"endpoints_compatible",
"region:us"
] | video-classification | IsraelSonseca | null | null | IsraelSonseca/videomae-base-finetuned-ucf101_sport-subset | 0 | 2 | transformers | 2023-05-19T10:23:58 | ---
license: cc-by-nc-4.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: videomae-base-finetuned-ucf101_sport-subset
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# videomae-base-finetuned-ucf101_sport-subset
This model is a fine-tuned version of [MCG-NJU/videomae-base](https://huggingface.co/MCG-NJU/videomae-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9609
- Accuracy: 0.7692
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- training_steps: 140
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 2.3391 | 0.26 | 36 | 2.0478 | 0.3636 |
| 1.7926 | 1.26 | 72 | 1.5327 | 0.5455 |
| 1.4841 | 2.26 | 108 | 1.1706 | 0.6364 |
| 1.119 | 3.23 | 140 | 0.9609 | 0.7692 |
### Framework versions
- Transformers 4.29.2
- Pytorch 2.0.1+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
| 1,613 | [
[
-0.04376220703125,
-0.043182373046875,
0.006015777587890625,
0.004932403564453125,
-0.027984619140625,
-0.0310516357421875,
-0.01300048828125,
-0.0042877197265625,
0.0113677978515625,
0.02813720703125,
-0.058197021484375,
-0.050567626953125,
-0.06549072265625,
... |
Ivydata/whisper-small-japanese | 2023-05-19T10:50:13.000Z | [
"transformers",
"pytorch",
"whisper",
"automatic-speech-recognition",
"audio",
"ja",
"dataset:common_voice",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | automatic-speech-recognition | Ivydata | null | null | Ivydata/whisper-small-japanese | 2 | 2 | transformers | 2023-05-19T10:42:27 | ---
license: apache-2.0
datasets:
- common_voice
language:
- ja
tags:
- audio
---
# Fine-tuned Japanese Whisper model for speech recognition using whisper-small
Fine-tuned [openai/whisper-small](https://huggingface.co/openai/whisper-small) on Japanese using [Common Voice](https://commonvoice.mozilla.org/ja/datasets), [JVS](https://sites.google.com/site/shinnosuketakamichi/research-topics/jvs_corpus) and [JSUT](https://sites.google.com/site/shinnosuketakamichi/publication/jsut).
When using this model, make sure that your speech input is sampled at 16kHz.
## Usage
The model can be used directly as follows.
```python
from transformers import WhisperForConditionalGeneration, WhisperProcessor
from datasets import load_dataset
import librosa
import torch
LANG_ID = "ja"
MODEL_ID = "Ivydata/whisper-small-japanese"
SAMPLES = 10
test_dataset = load_dataset("common_voice", LANG_ID, split=f"test[:{SAMPLES}]")
processor = WhisperProcessor.from_pretrained(MODEL_ID)
model = WhisperForConditionalGeneration.from_pretrained(MODEL_ID)
model.config.forced_decoder_ids = processor.get_decoder_prompt_ids(
language="ja", task="transcribe"
)
model.config.suppress_tokens = []
# Preprocessing the datasets.
# We need to read the audio files as arrays
def speech_file_to_array_fn(batch):
speech_array, sampling_rate = librosa.load(batch["path"], sr=16_000)
batch["speech"] = speech_array
batch["sentence"] = batch["sentence"].upper()
batch["sampling_rate"] = sampling_rate
return batch
test_dataset = test_dataset.map(speech_file_to_array_fn)
sample = test_dataset[0]
input_features = processor(sample["speech"], sampling_rate=sample["sampling_rate"], return_tensors="pt").input_features
predicted_ids = model.generate(input_features)
transcription = processor.batch_decode(predicted_ids, skip_special_tokens=False)
# ['<|startoftranscript|><|ja|><|transcribe|><|notimestamps|>木村さんに電話を貸してもらいました。<|endoftext|>']
transcription = processor.batch_decode(predicted_ids, skip_special_tokens=True)
# ['木村さんに電話を貸してもらいました。']
```
## Test Result
In the table below I report the Character Error Rate (CER) of the model tested on [TEDxJP-10K](https://github.com/laboroai/TEDxJP-10K) dataset.
| Model | CER |
| ------------- | ------------- |
| Ivydata/whisper-small-japanese | **23.10%** |
| Ivydata/wav2vec2-large-xlsr-53-japanese | **27.87%** |
| jonatasgrosman/wav2vec2-large-xlsr-53-japanese | 34.18% | | 2,419 | [
[
-0.017547607421875,
-0.058685302734375,
0.0194091796875,
0.016998291015625,
-0.0160064697265625,
-0.006687164306640625,
-0.035400390625,
-0.040740966796875,
0.0091094970703125,
0.035980224609375,
-0.047210693359375,
-0.06097412109375,
-0.0322265625,
0.010948... |
pabagcha/roberta_crypto_profiling_task1_3 | 2023-05-19T11:27:29.000Z | [
"transformers",
"pytorch",
"tensorboard",
"roberta",
"text-classification",
"generated_from_trainer",
"license:mit",
"endpoints_compatible",
"region:us"
] | text-classification | pabagcha | null | null | pabagcha/roberta_crypto_profiling_task1_3 | 0 | 2 | transformers | 2023-05-19T11:08:36 | ---
license: mit
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: roberta_crypto_profiling_task1_3
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# roberta_crypto_profiling_task1_3
This model is a fine-tuned version of [cardiffnlp/twitter-roberta-large-2022-154m](https://huggingface.co/cardiffnlp/twitter-roberta-large-2022-154m) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 3.4017
- Accuracy: 0.4471
- F1: 0.4355
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
### Framework versions
- Transformers 4.29.2
- Pytorch 2.0.1+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
| 1,229 | [
[
-0.021148681640625,
-0.0555419921875,
0.0160980224609375,
0.00974273681640625,
-0.037017822265625,
-0.0096893310546875,
-0.017730712890625,
-0.03851318359375,
0.0208282470703125,
0.0307769775390625,
-0.04827880859375,
-0.053314208984375,
-0.06048583984375,
-... |
robinreinecke/distilbert-base-uncased-finetuned-cola | 2023-05-19T12:01:40.000Z | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:glue",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | text-classification | robinreinecke | null | null | robinreinecke/distilbert-base-uncased-finetuned-cola | 0 | 2 | transformers | 2023-05-19T11:43:01 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- matthews_correlation
model-index:
- name: distilbert-base-uncased-finetuned-cola
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: glue
type: glue
config: cola
split: validation
args: cola
metrics:
- name: Matthews Correlation
type: matthews_correlation
value: 0.5541301365636306
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-cola
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the glue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8233
- Matthews Correlation: 0.5541
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Matthews Correlation |
|:-------------:|:-----:|:----:|:---------------:|:--------------------:|
| 0.5229 | 1.0 | 535 | 0.5306 | 0.4277 |
| 0.3478 | 2.0 | 1070 | 0.5107 | 0.5091 |
| 0.2334 | 3.0 | 1605 | 0.5299 | 0.5472 |
| 0.1766 | 4.0 | 2140 | 0.7634 | 0.5317 |
| 0.1231 | 5.0 | 2675 | 0.8233 | 0.5541 |
### Framework versions
- Transformers 4.28.0
- Pytorch 2.0.1+rocm5.4.2
- Datasets 2.12.0
- Tokenizers 0.13.3
| 2,046 | [
[
-0.0220794677734375,
-0.0501708984375,
0.01081085205078125,
0.0186614990234375,
-0.02313232421875,
-0.009490966796875,
-0.006740570068359375,
-0.004100799560546875,
0.02276611328125,
0.01250457763671875,
-0.0469970703125,
-0.037017822265625,
-0.06292724609375,
... |
indikamk/distilbert_finetuned_newsgroups | 2023-05-19T13:48:15.000Z | [
"transformers",
"tf",
"distilbert",
"text-classification",
"generated_from_keras_callback",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | text-classification | indikamk | null | null | indikamk/distilbert_finetuned_newsgroups | 0 | 2 | transformers | 2023-05-19T13:27:35 | ---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: distilbert_finetuned_newsgroups
results: []
---
# distilbert_finetuned_newsgroups
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the [20 Newsgroups dataset](http://qwone.com/~jason/20Newsgroups/).
## Training procedure
Used 10% of the training set as the validation set.
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': None, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': True, 'is_legacy_optimizer': False, 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 1908, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32
### Training results
Achieves 83.13% accuracy on the test set.
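No usage snippet ships with this card; a minimal inference sketch follows (assumptions, not from the card: the checkpoint is TF-based, so TensorFlow must be installed, and labels may surface as generic `LABEL_i` ids):
```python
from transformers import pipeline

# Hedged sketch: classify a sentence into one of the 20 newsgroups categories.
classifier = pipeline("text-classification", model="indikamk/distilbert_finetuned_newsgroups")
print(classifier("NASA launched a new probe to study the outer planets."))
```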
### Framework versions
- Transformers 4.28.0
- TensorFlow 2.12.0
- Datasets 2.12.0
- Tokenizers 0.13.3
| 1,204 | [
[
-0.04205322265625,
-0.04461669921875,
0.0115814208984375,
0.024261474609375,
-0.026519775390625,
0.00189971923828125,
-0.0116729736328125,
0.0036487579345703125,
-0.010467529296875,
0.0078887939453125,
-0.056884765625,
-0.054595947265625,
-0.053497314453125,
... |
AustinCarthy/Baseline_20Kphish_benignWinter_20_20_20 | 2023-05-19T16:13:33.000Z | [
"transformers",
"pytorch",
"tensorboard",
"bert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | text-classification | AustinCarthy | null | null | AustinCarthy/Baseline_20Kphish_benignWinter_20_20_20 | 0 | 2 | transformers | 2023-05-19T14:30:44 | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
- precision
- recall
model-index:
- name: Baseline_20Kphish_benignWinter_20_20_20
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Baseline_20Kphish_benignWinter_20_20_20
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0591
- Accuracy: 0.9939
- F1: 0.9315
- Precision: 0.9986
- Recall: 0.8728
- Roc Auc Score: 0.9364
- Tpr At Fpr 0.01: 0.8742
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Precision | Recall | Roc Auc Score | Tpr At Fpr 0.01 |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:------:|:---------:|:------:|:-------------:|:---------------:|
| 0.0092 | 1.0 | 13125 | 0.0432 | 0.9899 | 0.8824 | 0.9957 | 0.7922 | 0.8960 | 0.7636 |
| 0.0038 | 2.0 | 26250 | 0.0458 | 0.9935 | 0.9273 | 0.9956 | 0.8678 | 0.9338 | 0.8316 |
| 0.0015 | 3.0 | 39375 | 0.0518 | 0.9938 | 0.9303 | 0.9968 | 0.8722 | 0.9360 | 0.8686 |
| 0.0013 | 4.0 | 52500 | 0.0500 | 0.9941 | 0.9339 | 0.9977 | 0.8778 | 0.9389 | 0.8768 |
| 0.0002 | 5.0 | 65625 | 0.0591 | 0.9939 | 0.9315 | 0.9986 | 0.8728 | 0.9364 | 0.8742 |
### Framework versions
- Transformers 4.29.1
- Pytorch 1.9.0+cu111
- Datasets 2.10.1
- Tokenizers 0.13.2
| 2,240 | [
[
-0.040496826171875,
-0.042572021484375,
0.00968170166015625,
0.00867462158203125,
-0.0198211669921875,
-0.022918701171875,
-0.006038665771484375,
-0.0192413330078125,
0.0277099609375,
0.027008056640625,
-0.053314208984375,
-0.056549072265625,
-0.051177978515625,... |
mrm8488/byt5-small-ft-americas23 | 2023-05-19T17:39:06.000Z | [
"transformers",
"pytorch",
"tensorboard",
"t5",
"text2text-generation",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text2text-generation | mrm8488 | null | null | mrm8488/byt5-small-ft-americas23 | 0 | 2 | transformers | 2023-05-19T14:56:10 | ---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: byt5-small-ft-americas23-3
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# byt5-small-ft-americas23-3
This model is a fine-tuned version of [google/byt5-small](https://huggingface.co/google/byt5-small) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2834
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 0.4496 | 0.13 | 1000 | 0.2987 |
| 0.3864 | 0.26 | 2000 | 0.2873 |
| 0.3677 | 0.39 | 3000 | 0.2861 |
| 0.3515 | 0.53 | 4000 | 0.2838 |
| 0.3521 | 0.66 | 5000 | 0.2831 |
| 0.3408 | 0.79 | 6000 | 0.2827 |
| 0.346 | 0.92 | 7000 | 0.2834 |
### Framework versions
- Transformers 4.29.2
- Pytorch 2.0.1+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
| 1,578 | [
[
-0.034515380859375,
-0.035125732421875,
0.0167694091796875,
0.00004106760025024414,
-0.02581787109375,
-0.034881591796875,
-0.010498046875,
-0.01541900634765625,
0.00635528564453125,
0.020477294921875,
-0.06298828125,
-0.041595458984375,
-0.046722412109375,
... |
Lyhoon/distilbert-base-uncased-finetuned-emotion | 2023-05-19T15:20:18.000Z | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:emotion",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | text-classification | Lyhoon | null | null | Lyhoon/distilbert-base-uncased-finetuned-emotion | 0 | 2 | transformers | 2023-05-19T15:15:57 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-emotion
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: emotion
type: emotion
config: split
split: validation
args: split
metrics:
- name: Accuracy
type: accuracy
value: 0.923
- name: F1
type: f1
value: 0.9230955220517978
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2222
- Accuracy: 0.923
- F1: 0.9231
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.8482 | 1.0 | 250 | 0.3087 | 0.9095 | 0.9075 |
| 0.2457 | 2.0 | 500 | 0.2222 | 0.923 | 0.9231 |
### Framework versions
- Transformers 4.29.2
- Pytorch 2.0.1+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
| 1,846 | [
[
-0.037628173828125,
-0.0419921875,
0.01464080810546875,
0.0218505859375,
-0.0257110595703125,
-0.0189056396484375,
-0.01334381103515625,
-0.00858306884765625,
0.0101776123046875,
0.00832366943359375,
-0.056671142578125,
-0.051910400390625,
-0.0595703125,
-0.... |
himayla/fake_real | 2023-05-19T16:10:19.000Z | [
"transformers",
"pytorch",
"tensorboard",
"bert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | text-classification | himayla | null | null | himayla/fake_real | 0 | 2 | transformers | 2023-05-19T15:54:37 | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: fake_real
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# fake_real
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2456
- Accuracy: 1.0
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 4e-05
- train_batch_size: 32
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 1 | 0.2974 | 1.0 |
| No log | 2.0 | 2 | 0.2456 | 1.0 |
### Framework versions
- Transformers 4.26.1
- Pytorch 1.13.0
- Datasets 2.7.1
- Tokenizers 0.13.2
| 1,362 | [
[
-0.031707763671875,
-0.05230712890625,
0.0163116455078125,
0.012481689453125,
-0.0274810791015625,
-0.0269622802734375,
-0.009124755859375,
-0.0289459228515625,
0.0194854736328125,
0.0226287841796875,
-0.06182861328125,
-0.0394287109375,
-0.039459228515625,
... |
AustinCarthy/Baseline_30Kphish_benignWinter_20_20_20 | 2023-05-19T18:35:46.000Z | [
"transformers",
"pytorch",
"tensorboard",
"bert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | text-classification | AustinCarthy | null | null | AustinCarthy/Baseline_30Kphish_benignWinter_20_20_20 | 0 | 2 | transformers | 2023-05-19T16:13:53 | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
- precision
- recall
model-index:
- name: Baseline_30Kphish_benignWinter_20_20_20
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Baseline_30Kphish_benignWinter_20_20_20
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0546
- Accuracy: 0.9949
- F1: 0.9438
- Precision: 0.9967
- Recall: 0.8962
- Roc Auc Score: 0.9480
- Tpr At Fpr 0.01: 0.8872
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Precision | Recall | Roc Auc Score | Tpr At Fpr 0.01 |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:------:|:---------:|:------:|:-------------:|:---------------:|
| 0.0097 | 1.0 | 19688 | 0.0272 | 0.9936 | 0.9283 | 0.9869 | 0.8762 | 0.9378 | 0.7798 |
| 0.005 | 2.0 | 39376 | 0.0444 | 0.9916 | 0.9028 | 0.9985 | 0.8238 | 0.9119 | 0.8272 |
| 0.0008 | 3.0 | 59064 | 0.0382 | 0.9943 | 0.9368 | 0.9984 | 0.8824 | 0.9412 | 0.8846 |
| 0.0008 | 4.0 | 78752 | 0.0416 | 0.9952 | 0.9476 | 0.9954 | 0.9042 | 0.9520 | 0.8832 |
| 0.0 | 5.0 | 98440 | 0.0546 | 0.9949 | 0.9438 | 0.9967 | 0.8962 | 0.9480 | 0.8872 |
### Framework versions
- Transformers 4.29.1
- Pytorch 1.9.0+cu111
- Datasets 2.10.1
- Tokenizers 0.13.2
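`Tpr At Fpr 0.01` in the table above is not a stock `Trainer` metric, so for clarity: a hedged sketch of how a TPR-at-fixed-FPR value is typically computed from binary labels and predicted scores, using scikit-learn (the helper name is my own, not from this repository).
```python
import numpy as np
from sklearn.metrics import roc_curve

def tpr_at_fpr(y_true, y_score, max_fpr=0.01):
    """Highest true-positive rate reachable while keeping FPR <= max_fpr."""
    fpr, tpr, _ = roc_curve(y_true, y_score)  # matched (FPR, TPR) pairs
    feasible = fpr <= max_fpr
    return float(tpr[feasible].max()) if feasible.any() else 0.0

# Toy check with made-up scores: perfectly separated classes give 1.0.
print(tpr_at_fpr(np.array([0, 0, 1, 1]), np.array([0.1, 0.2, 0.8, 0.9])))
```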
| 2,240 | [
[
-0.04095458984375,
-0.04229736328125,
0.01050567626953125,
0.00888824462890625,
-0.020782470703125,
-0.023406982421875,
-0.005649566650390625,
-0.020172119140625,
0.0264129638671875,
0.0264739990234375,
-0.054290771484375,
-0.05523681640625,
-0.050567626953125,
... |
platzi/platzi-distilroberta-base-mrpc-glue-miguel-uicab | 2023-05-19T18:44:19.000Z | [
"transformers",
"pytorch",
"tensorboard",
"roberta",
"text-classification",
"generated_from_trainer",
"dataset:glue",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | text-classification | platzi | null | null | platzi/platzi-distilroberta-base-mrpc-glue-miguel-uicab | 0 | 2 | transformers | 2023-05-19T18:08:02 | ---
license: apache-2.0
tags:
- text-classification
- generated_from_trainer
datasets:
- glue
metrics:
- accuracy
- f1
widget:
- text: ["Yucaipa owned Dominick 's before selling the chain to Safeway in 1998 for $ 2.5 billion.",
"Yucaipa bought Dominick's in 1995 for $ 693 million and sold it to Safeway for $ 1.8 billion in 1998."]
example_title: Not Equivalent
- text: ["Revenue in the first quarter of the year dropped 15 percent from the same period a year earlier.",
"With the scandal hanging over Stewart's company revenue the first quarter of the year dropped 15 percent from the same period a year earlier."]
example_title: Equivalent
model-index:
- name: platzi-distilroberta-base-mrpc-glue-miguel-uicab
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: glue
type: glue
config: mrpc
split: validation
args: mrpc
metrics:
- name: Accuracy
type: accuracy
value: 0.803921568627451
- name: F1
type: f1
value: 0.8648648648648648
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# platzi-distilroberta-base-mrpc-glue-miguel-uicab
This model is a fine-tuned version of [distilroberta-base](https://huggingface.co/distilroberta-base) on the glue and the mrpc datasets.
It achieves the following results on the evaluation set:
- Loss: 0.6276
- Accuracy: 0.8039
- F1: 0.8649
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.5096 | 1.09 | 500 | 0.6276 | 0.8039 | 0.8649 |
| 0.3267 | 2.18 | 1000 | 0.7474 | 0.8260 | 0.8711 |
### Framework versions
- Transformers 4.28.0
- Pytorch 2.0.1+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
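For paraphrase models like this one, inference takes a sentence pair; a minimal sketch with the standard Auto classes (the sentences are illustrative, and the usual MRPC label order of 1 = equivalent should be verified against `model.config.id2label`).
```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

name = "platzi/platzi-distilroberta-base-mrpc-glue-miguel-uicab"
tokenizer = AutoTokenizer.from_pretrained(name)
model = AutoModelForSequenceClassification.from_pretrained(name)

# Encode both sentences as a single pair, the format MRPC models expect.
enc = tokenizer(
    "Revenue in the first quarter of the year dropped 15 percent.",
    "First-quarter revenue fell 15 percent from a year earlier.",
    return_tensors="pt",
)
with torch.no_grad():
    probs = model(**enc).logits.softmax(dim=-1)
print(probs)  # map indices to labels via model.config.id2label
```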
| 2,425 | [
[
-0.0306396484375,
-0.040618896484375,
0.0091094970703125,
0.021453857421875,
-0.03125,
-0.02655029296875,
-0.010711669921875,
-0.00437164306640625,
0.00653839111328125,
0.01068878173828125,
-0.048065185546875,
-0.04315185546875,
-0.05621337890625,
-0.0071449... |
pabagcha/roberta_crypto_profiling_task1_complete | 2023-05-21T15:52:10.000Z | [
"transformers",
"pytorch",
"tensorboard",
"roberta",
"text-classification",
"generated_from_trainer",
"license:mit",
"endpoints_compatible",
"region:us"
] | text-classification | pabagcha | null | null | pabagcha/roberta_crypto_profiling_task1_complete | 1 | 2 | transformers | 2023-05-19T18:34:08 | ---
license: mit
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: roberta_crypto_profiling_task1_complete
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# roberta_crypto_profiling_task1_complete
This model is a fine-tuned version of [cardiffnlp/twitter-roberta-large-2022-154m](https://huggingface.co/cardiffnlp/twitter-roberta-large-2022-154m) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7421
- Accuracy: 0.6954
- F1: 0.7128
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 7
### Training results
### Framework versions
- Transformers 4.29.2
- Pytorch 2.0.1+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
| 1,243 | [
[
-0.0206146240234375,
-0.057342529296875,
0.0167694091796875,
0.0087127685546875,
-0.0369873046875,
-0.007251739501953125,
-0.0182037353515625,
-0.0386962890625,
0.02349853515625,
0.0333251953125,
-0.049163818359375,
-0.053985595703125,
-0.06158447265625,
-0.... |
AustinCarthy/Baseline_40Kphish_benignWinter_20_20_20 | 2023-05-19T21:38:13.000Z | [
"transformers",
"pytorch",
"tensorboard",
"bert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | text-classification | AustinCarthy | null | null | AustinCarthy/Baseline_40Kphish_benignWinter_20_20_20 | 0 | 2 | transformers | 2023-05-19T18:36:21 | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
- precision
- recall
model-index:
- name: Baseline_40Kphish_benignWinter_20_20_20
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Baseline_40Kphish_benignWinter_20_20_20
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0374
- Accuracy: 0.9955
- F1: 0.9501
- Precision: 0.9989
- Recall: 0.9058
- Roc Auc Score: 0.9529
- Tpr At Fpr 0.01: 0.9122
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (an equivalent `TrainingArguments` sketch follows the list):
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5.0
- mixed_precision_training: Native AMP
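The list above maps directly onto `transformers.TrainingArguments`; a hedged sketch of an equivalent configuration (`output_dir` is a placeholder I chose, and the Adam betas/epsilon in the list are the library defaults, so they need no explicit arguments).
```python
from transformers import TrainingArguments

# Reconstruction of the hyperparameters listed above; output_dir is hypothetical.
args = TrainingArguments(
    output_dir="baseline_40k",        # placeholder, not the actual run name
    learning_rate=2e-5,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=32,
    seed=42,
    lr_scheduler_type="linear",
    num_train_epochs=5.0,
    fp16=True,                        # "Native AMP" mixed-precision training
)
```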
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Precision | Recall | Roc Auc Score | Tpr At Fpr 0.01 |
|:-------------:|:-----:|:------:|:---------------:|:--------:|:------:|:---------:|:------:|:-------------:|:---------------:|
| 0.0055 | 1.0 | 26250 | 0.0223 | 0.9944 | 0.9384 | 0.9913 | 0.8908 | 0.9452 | 0.8514 |
| 0.0026 | 2.0 | 52500 | 0.0300 | 0.9958 | 0.9539 | 0.9905 | 0.9198 | 0.9597 | 0.0 |
| 0.0045 | 3.0 | 78750 | 0.0355 | 0.9954 | 0.9489 | 0.9982 | 0.9042 | 0.9521 | 0.9054 |
| 0.0025 | 4.0 | 105000 | 0.0311 | 0.9955 | 0.9500 | 0.9987 | 0.9058 | 0.9529 | 0.9142 |
| 0.0004 | 5.0 | 131250 | 0.0374 | 0.9955 | 0.9501 | 0.9989 | 0.9058 | 0.9529 | 0.9122 |
### Framework versions
- Transformers 4.29.1
- Pytorch 1.9.0+cu111
- Datasets 2.10.1
- Tokenizers 0.13.2
| 2,247 | [
[
-0.041046142578125,
-0.0413818359375,
0.00885772705078125,
0.00800323486328125,
-0.0202178955078125,
-0.0211944580078125,
-0.006290435791015625,
-0.018951416015625,
0.028472900390625,
0.0266265869140625,
-0.053955078125,
-0.0560302734375,
-0.05059814453125,
... |
platzi/platzi-distilroberta-base-mrpc-glue-roberto-vilchis | 2023-05-27T22:30:19.000Z | [
"transformers",
"pytorch",
"tensorboard",
"roberta",
"text-classification",
"generated_from_trainer",
"dataset:glue",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | text-classification | platzi | null | null | platzi/platzi-distilroberta-base-mrpc-glue-roberto-vilchis | 0 | 2 | transformers | 2023-05-19T19:46:22 | ---
license: apache-2.0
tags:
- text-classification
- generated_from_trainer
datasets:
- glue
metrics:
- accuracy
- f1
model-index:
- name: platzi-distilroberta-base-mrpc-glue-roberto-vilchis
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: glue
type: glue
config: mrpc
split: validation
args: mrpc
metrics:
- name: Accuracy
type: accuracy
value: 0.8137254901960784
- name: F1
type: f1
value: 0.8647686832740213
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# platzi-distilroberta-base-mrpc-glue-roberto-vilchis
This model is a fine-tuned version of [distilroberta-base](https://huggingface.co/distilroberta-base) on the glue and the mrpc datasets.
It achieves the following results on the evaluation set:
- Loss: 0.5542
- Accuracy: 0.8137
- F1: 0.8648
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.5359 | 1.09 | 500 | 0.5542 | 0.8137 | 0.8648 |
| 0.357 | 2.18 | 1000 | 0.5562 | 0.8309 | 0.8729 |
### Framework versions
- Transformers 4.28.0
- Pytorch 2.0.1+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
| 1,890 | [
[
-0.030853271484375,
-0.042449951171875,
0.00878143310546875,
0.018402099609375,
-0.0333251953125,
-0.0240020751953125,
-0.0101165771484375,
-0.002979278564453125,
0.005550384521484375,
0.00891876220703125,
-0.049713134765625,
-0.04132080078125,
-0.05517578125,
... |
MinaAlmasi/ES-ENG-xlm-roberta-sentiment | 2023-05-22T20:14:43.000Z | [
"transformers",
"pytorch",
"xlm-roberta",
"text-classification",
"generated_from_trainer",
"license:cc-by-nc-4.0",
"endpoints_compatible",
"region:us"
] | text-classification | MinaAlmasi | null | null | MinaAlmasi/ES-ENG-xlm-roberta-sentiment | 0 | 2 | transformers | 2023-05-19T21:33:02 | ---
license: cc-by-nc-4.0
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
- precision
- recall
model-index:
- name: ES-ENG-xlm-roberta-sentiment
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# ES-ENG-xlm-roberta-sentiment
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on a custom dataset.
The best checkpoint (epoch 20; training continued to epoch 23 before stopping) achieves the following results on the evaluation set:
- Loss: 0.7743
- Accuracy: 0.6702
- F1: 0.6672
- Precision: 0.6664
- Recall: 0.6702
## Intended uses & limitations
Note that commercial use with this model is prohibited.
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-06
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 30
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Precision | Recall |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|:---------:|:------:|
| 1.1099 | 1.0 | 208 | 1.0718 | 0.3968 | 0.3851 | 0.4857 | 0.3968 |
| 1.0057 | 2.0 | 416 | 0.8926 | 0.5492 | 0.5080 | 0.5639 | 0.5492 |
| 0.8988 | 3.0 | 624 | 0.8384 | 0.5883 | 0.5792 | 0.5789 | 0.5883 |
| 0.8606 | 4.0 | 832 | 0.8209 | 0.6168 | 0.6086 | 0.6086 | 0.6168 |
| 0.8338 | 5.0 | 1040 | 0.8006 | 0.6120 | 0.6068 | 0.6046 | 0.6120 |
| 0.8081 | 6.0 | 1248 | 0.8074 | 0.6026 | 0.5935 | 0.5966 | 0.6026 |
| 0.7872 | 7.0 | 1456 | 0.7786 | 0.6194 | 0.6149 | 0.6127 | 0.6194 |
| 0.7624 | 8.0 | 1664 | 0.7783 | 0.6379 | 0.6277 | 0.6342 | 0.6379 |
| 0.7446 | 9.0 | 1872 | 0.7643 | 0.6366 | 0.6287 | 0.6314 | 0.6366 |
| 0.7274 | 10.0 | 2080 | 0.7846 | 0.6395 | 0.6297 | 0.6351 | 0.6395 |
| 0.7116 | 11.0 | 2288 | 0.7465 | 0.6495 | 0.6425 | 0.6462 | 0.6495 |
| 0.6998 | 12.0 | 2496 | 0.7599 | 0.6537 | 0.6474 | 0.6494 | 0.6537 |
| 0.6852 | 13.0 | 2704 | 0.7651 | 0.6515 | 0.6443 | 0.6465 | 0.6515 |
| 0.6726 | 14.0 | 2912 | 0.7571 | 0.6576 | 0.6536 | 0.6530 | 0.6576 |
| 0.6665 | 15.0 | 3120 | 0.7597 | 0.6557 | 0.6506 | 0.6514 | 0.6557 |
| 0.6541 | 16.0 | 3328 | 0.7590 | 0.6615 | 0.6584 | 0.6576 | 0.6615 |
| 0.6513 | 17.0 | 3536 | 0.7617 | 0.6599 | 0.6544 | 0.6555 | 0.6599 |
| 0.6392 | 18.0 | 3744 | 0.7740 | 0.6628 | 0.6585 | 0.6582 | 0.6628 |
| 0.6369 | 19.0 | 3952 | 0.7666 | 0.6631 | 0.6588 | 0.6585 | 0.6631 |
| 0.6268 | 20.0 | 4160 | 0.7743 | 0.6702 | 0.6672 | 0.6664 | 0.6702 |
| 0.62 | 21.0 | 4368 | 0.7712 | 0.6680 | 0.6638 | 0.6638 | 0.6680 |
| 0.619 | 22.0 | 4576 | 0.7720 | 0.6689 | 0.6656 | 0.6649 | 0.6689 |
| 0.6074 | 23.0 | 4784 | 0.7729 | 0.6663 | 0.6630 | 0.6621 | 0.6663 |
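Training was configured for 30 epochs, yet the table stops at epoch 23 with the best checkpoint at epoch 20. That pattern is consistent with early stopping; a hedged sketch of such a setup with the standard `Trainer` API (patience 3 is an inference from the 20-to-23 gap, not something this card states).
```python
from transformers import EarlyStoppingCallback, TrainingArguments

args = TrainingArguments(
    output_dir="es-eng-sentiment",    # placeholder name
    num_train_epochs=30,
    evaluation_strategy="epoch",
    save_strategy="epoch",            # must match evaluation_strategy
    load_best_model_at_end=True,      # required by EarlyStoppingCallback
    metric_for_best_model="accuracy",
)
early_stop = EarlyStoppingCallback(early_stopping_patience=3)  # assumed value
# Pass callbacks=[early_stop] when constructing the Trainer.
```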
### Framework versions
- Transformers 4.29.1
- Pytorch 2.0.1+cu117
- Datasets 2.12.0
- Tokenizers 0.13.3
| 3,507 | [
[
-0.045562744140625,
-0.04315185546875,
0.0162506103515625,
0.00229644775390625,
-0.0017290115356445312,
0.0003304481506347656,
-0.001132965087890625,
-0.002964019775390625,
0.04217529296875,
0.0257720947265625,
-0.04998779296875,
-0.053619384765625,
-0.047149658... |
AustinCarthy/Baseline_50Kphish_benignWinter_20_20_20 | 2023-05-20T01:17:35.000Z | [
"transformers",
"pytorch",
"tensorboard",
"bert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | text-classification | AustinCarthy | null | null | AustinCarthy/Baseline_50Kphish_benignWinter_20_20_20 | 0 | 2 | transformers | 2023-05-19T21:38:39 | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
- precision
- recall
model-index:
- name: Baseline_50Kphish_benignWinter_20_20_20
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Baseline_50Kphish_benignWinter_20_20_20
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0294
- Accuracy: 0.9959
- F1: 0.9549
- Precision: 0.9996
- Recall: 0.914
- Roc Auc Score: 0.9570
- Tpr At Fpr 0.01: 0.932
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Precision | Recall | Roc Auc Score | Tpr At Fpr 0.01 |
|:-------------:|:-----:|:------:|:---------------:|:--------:|:------:|:---------:|:------:|:-------------:|:---------------:|
| 0.0089 | 1.0 | 32813 | 0.0386 | 0.9944 | 0.9379 | 0.9957 | 0.8864 | 0.9431 | 0.8642 |
| 0.008 | 2.0 | 65626 | 0.0524 | 0.9917 | 0.9046 | 0.9995 | 0.8262 | 0.9131 | 0.8586 |
| 0.0027 | 3.0 | 98439 | 0.0265 | 0.9965 | 0.9624 | 0.9961 | 0.9308 | 0.9653 | 0.919 |
| 0.0013 | 4.0 | 131252 | 0.0302 | 0.9962 | 0.9585 | 0.9989 | 0.9212 | 0.9606 | 0.9236 |
| 0.0006 | 5.0 | 164065 | 0.0294 | 0.9959 | 0.9549 | 0.9996 | 0.914 | 0.9570 | 0.932 |
### Framework versions
- Transformers 4.29.1
- Pytorch 1.9.0+cu111
- Datasets 2.10.1
- Tokenizers 0.13.2
| 2,245 | [
[
-0.0408935546875,
-0.041351318359375,
0.00795745849609375,
0.007358551025390625,
-0.01953125,
-0.02197265625,
-0.0057220458984375,
-0.019317626953125,
0.0283050537109375,
0.025543212890625,
-0.05487060546875,
-0.05584716796875,
-0.04937744140625,
-0.01156616... |
miguel-uicab/distilroberta-base-mrpc-glue | 2023-05-20T00:32:50.000Z | [
"transformers",
"pytorch",
"tensorboard",
"roberta",
"text-classification",
"generated_from_trainer",
"dataset:glue",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | text-classification | miguel-uicab | null | null | miguel-uicab/distilroberta-base-mrpc-glue | 0 | 2 | transformers | 2023-05-20T00:21:56 | ---
license: apache-2.0
tags:
- text-classification
- generated_from_trainer
datasets:
- glue
metrics:
- accuracy
- f1
widget:
- text:
- >-
Yucaipa owned Dominick 's before selling the chain to Safeway in 1998
for $ 2.5 billion.
- >-
Yucaipa bought Dominick's in 1995 for $ 693 million and sold it to
Safeway for $ 1.8 billion in 1998.
example_title: Not Equivalent
- text:
- >-
Revenue in the first quarter of the year dropped 15 percent from the
same period a year earlier.
- >-
With the scandal hanging over Stewart's company revenue the first
quarter of the year dropped 15 percent from the same period a year
earlier.
example_title: Equivalent
model-index:
- name: distilroberta-base-mrpc-glue
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: glue
type: glue
config: mrpc
split: validation
args: mrpc
metrics:
- name: Accuracy
type: accuracy
value: 0.8382352941176471
- name: F1
type: f1
value: 0.8842105263157894
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilroberta-base-mrpc-glue
This model is a fine-tuned version of [distilroberta-base](https://huggingface.co/distilroberta-base) on the glue and the mrpc datasets.
It achieves the following results on the evaluation set:
- Loss: 0.4990
- Accuracy: 0.8382
- F1: 0.8842
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.5065 | 1.09 | 500 | 0.4990 | 0.8382 | 0.8842 |
| 0.3328 | 2.18 | 1000 | 0.6793 | 0.8235 | 0.8686 |
### Framework versions
- Transformers 4.28.0
- Pytorch 2.0.1+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
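The widget pairs in the metadata above can be reproduced at inference time; a sketch using the text-classification pipeline, which also accepts `{"text", "text_pair"}` dictionaries for sentence pairs.
```python
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="miguel-uicab/distilroberta-base-mrpc-glue",
)

# Dict input mirrors the two-sentence widget format declared in the card metadata.
pair = {
    "text": "Revenue in the first quarter of the year dropped 15 percent "
            "from the same period a year earlier.",
    "text_pair": "With the scandal hanging over Stewart's company revenue the "
                 "first quarter of the year dropped 15 percent from the same "
                 "period a year earlier.",
}
print(classifier(pair))
```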
| 2,482 | [
[
-0.029327392578125,
-0.04632568359375,
0.00754547119140625,
0.01776123046875,
-0.0277557373046875,
-0.0220794677734375,
-0.005992889404296875,
-0.007137298583984375,
0.0099639892578125,
0.009918212890625,
-0.0479736328125,
-0.037200927734375,
-0.058135986328125,... |
zonghaoyang/BioLinkBERT | 2023-05-21T03:17:45.000Z | [
"transformers",
"pytorch",
"tensorboard",
"bert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | text-classification | zonghaoyang | null | null | zonghaoyang/BioLinkBERT | 0 | 2 | transformers | 2023-05-20T00:58:24 | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
- precision
- recall
model-index:
- name: BioLinkBERT
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# BioLinkBERT
This model is a fine-tuned version of [michiyasunaga/BioLinkBERT-base](https://huggingface.co/michiyasunaga/BioLinkBERT-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6209
- Accuracy: 0.8987
- F1: 0.5922
- Precision: 0.6630
- Recall: 0.5351
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 6
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Precision | Recall |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|:---------:|:------:|
| 0.2278 | 1.0 | 1626 | 0.2833 | 0.9050 | 0.5842 | 0.7334 | 0.4854 |
| 0.1896 | 2.0 | 3252 | 0.3267 | 0.9012 | 0.6006 | 0.6752 | 0.5409 |
| 0.144 | 3.0 | 4878 | 0.4336 | 0.8989 | 0.6246 | 0.6376 | 0.6121 |
| 0.1156 | 4.0 | 6504 | 0.4667 | 0.8939 | 0.5918 | 0.6280 | 0.5595 |
| 0.0864 | 5.0 | 8130 | 0.5413 | 0.8969 | 0.6103 | 0.6347 | 0.5877 |
| 0.0515 | 6.0 | 9756 | 0.6209 | 0.8987 | 0.5922 | 0.6630 | 0.5351 |
### Framework versions
- Transformers 4.29.2
- Pytorch 2.0.1+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
| 1,968 | [
[
-0.035919189453125,
-0.033355712890625,
0.01517486572265625,
0.004669189453125,
-0.0169525146484375,
-0.0175628662109375,
0.002788543701171875,
-0.0177459716796875,
0.0290374755859375,
0.015838623046875,
-0.058746337890625,
-0.048797607421875,
-0.04638671875,
... |
shinta0615/distilbert-base-uncased-finetuned-clinc | 2023-05-24T19:39:54.000Z | [
"transformers",
"pytorch",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:clinc_oos",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | text-classification | shinta0615 | null | null | shinta0615/distilbert-base-uncased-finetuned-clinc | 0 | 2 | transformers | 2023-05-20T02:10:00 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- clinc_oos
metrics:
- accuracy
model-index:
- name: distilbert-base-uncased-finetuned-clinc
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: clinc_oos
type: clinc_oos
config: plus
split: validation
args: plus
metrics:
- name: Accuracy
type: accuracy
value: 0.9158064516129032
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-clinc
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the clinc_oos dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7786
- Accuracy: 0.9158
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 48
- eval_batch_size: 48
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 318 | 3.2787 | 0.7455 |
| 3.7798 | 2.0 | 636 | 1.8706 | 0.8332 |
| 3.7798 | 3.0 | 954 | 1.1623 | 0.8939 |
| 1.6917 | 4.0 | 1272 | 0.8619 | 0.91 |
| 0.9059 | 5.0 | 1590 | 0.7786 | 0.9158 |
### Framework versions
- Transformers 4.29.1
- Pytorch 2.0.1
- Datasets 2.12.0
- Tokenizers 0.13.3
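Accuracy values like the one above come from a `compute_metrics` hook passed to the `Trainer`; the exact hook for this run is not published, so here is a minimal sketch using the `evaluate` library.
```python
import numpy as np
import evaluate

accuracy = evaluate.load("accuracy")

def compute_metrics(eval_pred):
    # The Trainer passes (logits, labels); argmax turns logits into class ids.
    logits, labels = eval_pred
    predictions = np.argmax(logits, axis=-1)
    return accuracy.compute(predictions=predictions, references=labels)
```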
| 1,926 | [
[
-0.03448486328125,
-0.040374755859375,
0.01322174072265625,
0.007595062255859375,
-0.0293426513671875,
-0.027069091796875,
-0.0122833251953125,
-0.0090179443359375,
0.0017871856689453125,
0.02301025390625,
-0.04693603515625,
-0.047698974609375,
-0.05731201171875... |
wenhao1/distilbert-base-uncased-finetuned-emotion | 2023-05-20T04:00:56.000Z | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:emotion",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | text-classification | wenhao1 | null | null | wenhao1/distilbert-base-uncased-finetuned-emotion | 0 | 2 | transformers | 2023-05-20T03:24:41 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-emotion
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: emotion
type: emotion
args: split
metrics:
- name: Accuracy
type: accuracy
value: 0.9285
- name: F1
type: f1
value: 0.92860925314864
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2156
- Accuracy: 0.9285
- F1: 0.9286
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.8021 | 1.0 | 250 | 0.3065 | 0.907 | 0.9039 |
| 0.2397 | 2.0 | 500 | 0.2156 | 0.9285 | 0.9286 |
### Framework versions
- Transformers 4.13.0
- Pytorch 1.8.1+cu102
- Datasets 2.8.0
- Tokenizers 0.10.3
| 1,801 | [
[
-0.038116455078125,
-0.041839599609375,
0.01421356201171875,
0.023040771484375,
-0.0251312255859375,
-0.019561767578125,
-0.01332855224609375,
-0.007732391357421875,
0.01096343994140625,
0.00821685791015625,
-0.056640625,
-0.05096435546875,
-0.059722900390625,
... |
YakovElm/Apache5Classic | 2023-05-20T11:23:19.000Z | [
"transformers",
"tf",
"bert",
"text-classification",
"generated_from_keras_callback",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | text-classification | YakovElm | null | null | YakovElm/Apache5Classic | 0 | 2 | transformers | 2023-05-20T08:25:23 | ---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: Apache5Classic
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# Apache5Classic
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.2495
- Train Accuracy: 0.9123
- Validation Loss: 0.5992
- Validation Accuracy: 0.8018
- Epoch: 2
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': 1.0, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': False, 'is_legacy_optimizer': False, 'learning_rate': 3e-05, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32
### Training results
| Train Loss | Train Accuracy | Validation Loss | Validation Accuracy | Epoch |
|:----------:|:--------------:|:---------------:|:-------------------:|:-----:|
| 0.3064 | 0.9075 | 0.5009 | 0.8233 | 0 |
| 0.2899 | 0.9107 | 0.5166 | 0.8233 | 1 |
| 0.2495 | 0.9123 | 0.5992 | 0.8018 | 2 |
### Framework versions
- Transformers 4.29.2
- TensorFlow 2.12.0
- Datasets 2.12.0
- Tokenizers 0.13.3
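The serialized optimizer dict in the training procedure above corresponds to a plain Keras Adam configuration; a hedged sketch of the equivalent constructor call.
```python
import tensorflow as tf

# Equivalent of the serialized optimizer config above (constant 3e-05 learning rate).
optimizer = tf.keras.optimizers.Adam(
    learning_rate=3e-5,
    beta_1=0.9,
    beta_2=0.999,
    epsilon=1e-8,
    clipnorm=1.0,   # per 'clipnorm': 1.0 in the config
)
```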
| 1,772 | [
[
-0.04620361328125,
-0.0452880859375,
0.0201568603515625,
0.006195068359375,
-0.034210205078125,
-0.0297393798828125,
-0.01788330078125,
-0.03106689453125,
0.007549285888671875,
0.01302337646484375,
-0.054534912109375,
-0.0496826171875,
-0.053375244140625,
-0... |
YakovElm/Apache10Classic | 2023-05-20T13:49:30.000Z | [
"transformers",
"tf",
"bert",
"text-classification",
"generated_from_keras_callback",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | text-classification | YakovElm | null | null | YakovElm/Apache10Classic | 0 | 2 | transformers | 2023-05-20T08:25:53 | ---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: Apache10Classic
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# Apache10Classic
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.2072
- Train Accuracy: 0.9383
- Validation Loss: 0.4268
- Validation Accuracy: 0.8644
- Epoch: 2
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': 1.0, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': False, 'is_legacy_optimizer': False, 'learning_rate': 3e-05, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32
### Training results
| Train Loss | Train Accuracy | Validation Loss | Validation Accuracy | Epoch |
|:----------:|:--------------:|:---------------:|:-------------------:|:-----:|
| 0.2370 | 0.9379 | 0.4355 | 0.8644 | 0 |
| 0.2213 | 0.9383 | 0.4657 | 0.8644 | 1 |
| 0.2072 | 0.9383 | 0.4268 | 0.8644 | 2 |
### Framework versions
- Transformers 4.29.2
- TensorFlow 2.12.0
- Datasets 2.12.0
- Tokenizers 0.13.3
| 1,774 | [
[
-0.0469970703125,
-0.048675537109375,
0.020172119140625,
0.008636474609375,
-0.034027099609375,
-0.031646728515625,
-0.020050048828125,
-0.02935791015625,
0.01102447509765625,
0.01520538330078125,
-0.053314208984375,
-0.04681396484375,
-0.053741455078125,
-0... |
YakovElm/Apache15Classic | 2023-05-20T15:18:14.000Z | [
"transformers",
"tf",
"bert",
"text-classification",
"generated_from_keras_callback",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | text-classification | YakovElm | null | null | YakovElm/Apache15Classic | 0 | 2 | transformers | 2023-05-20T08:26:04 | ---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: Apache15Classic
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# Apache15Classic
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.1773
- Train Accuracy: 0.9542
- Validation Loss: 0.3408
- Validation Accuracy: 0.8924
- Epoch: 2
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': 1.0, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': False, 'is_legacy_optimizer': False, 'learning_rate': 3e-05, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32
### Training results
| Train Loss | Train Accuracy | Validation Loss | Validation Accuracy | Epoch |
|:----------:|:--------------:|:---------------:|:-------------------:|:-----:|
| 0.1927 | 0.9533 | 0.3561 | 0.8924 | 0 |
| 0.1808 | 0.9542 | 0.3380 | 0.8924 | 1 |
| 0.1773 | 0.9542 | 0.3408 | 0.8924 | 2 |
### Framework versions
- Transformers 4.29.2
- TensorFlow 2.12.0
- Datasets 2.12.0
- Tokenizers 0.13.3
| 1,774 | [
[
-0.04632568359375,
-0.048309326171875,
0.018463134765625,
0.00795745849609375,
-0.0361328125,
-0.030548095703125,
-0.020233154296875,
-0.0275115966796875,
0.00969696044921875,
0.01410675048828125,
-0.054595947265625,
-0.0479736328125,
-0.051971435546875,
-0.... |
YakovElm/Apache20Classic | 2023-05-20T16:54:27.000Z | [
"transformers",
"tf",
"bert",
"text-classification",
"generated_from_keras_callback",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | text-classification | YakovElm | null | null | YakovElm/Apache20Classic | 0 | 2 | transformers | 2023-05-20T08:26:12 | ---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: Apache20Classic
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# Apache20Classic
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.1326
- Train Accuracy: 0.9622
- Validation Loss: 0.3266
- Validation Accuracy: 0.9055
- Epoch: 2
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': 1.0, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': False, 'is_legacy_optimizer': False, 'learning_rate': 3e-05, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32
### Training results
| Train Loss | Train Accuracy | Validation Loss | Validation Accuracy | Epoch |
|:----------:|:--------------:|:---------------:|:-------------------:|:-----:|
| 0.1771 | 0.9572 | 0.2994 | 0.9055 | 0 |
| 0.1510 | 0.9624 | 0.3152 | 0.9055 | 1 |
| 0.1326 | 0.9622 | 0.3266 | 0.9055 | 2 |
### Framework versions
- Transformers 4.29.2
- TensorFlow 2.12.0
- Datasets 2.12.0
- Tokenizers 0.13.3
| 1,774 | [
[
-0.046539306640625,
-0.0489501953125,
0.0203094482421875,
0.0097198486328125,
-0.033843994140625,
-0.0322265625,
-0.0192108154296875,
-0.02972412109375,
0.00888824462890625,
0.01502227783203125,
-0.05523681640625,
-0.0482177734375,
-0.053741455078125,
-0.021... |
unklefedor/xlm-roberta-base-language-detection | 2023-05-20T09:37:32.000Z | [
"transformers",
"pytorch",
"xlm-roberta",
"text-classification",
"generated_from_trainer",
"license:mit",
"endpoints_compatible",
"region:us"
] | text-classification | unklefedor | null | null | unklefedor/xlm-roberta-base-language-detection | 0 | 2 | transformers | 2023-05-20T09:26:18 | ---
license: mit
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: train
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# train
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0145
- Accuracy: 0.9966
- F1: 0.9966
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 128
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.2363 | 1.0 | 1422 | 0.0150 | 0.9963 | 0.9963 |
| 0.0116 | 2.0 | 2844 | 0.0145 | 0.9966 | 0.9966 |
### Framework versions
- Transformers 4.28.1
- Pytorch 1.11.0
- Datasets 2.12.0
- Tokenizers 0.13.3
| 1,407 | [
[
-0.02862548828125,
-0.03924560546875,
0.0252532958984375,
0.00734710693359375,
-0.0271148681640625,
-0.030609130859375,
-0.0142669677734375,
-0.01442718505859375,
0.00018739700317382812,
0.0321044921875,
-0.0604248046875,
-0.04571533203125,
-0.05987548828125,
... |
itoh5588/distilbert-base-uncased-finetuned-emotion | 2023-07-29T13:12:38.000Z | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:emotion",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | text-classification | itoh5588 | null | null | itoh5588/distilbert-base-uncased-finetuned-emotion | 0 | 2 | transformers | 2023-05-20T10:18:30 | ---
license: apache-2.0
base_model: distilbert-base-uncased
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-emotion
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: emotion
type: emotion
config: split
split: validation
args: split
metrics:
- name: Accuracy
type: accuracy
value: 0.9345
- name: F1
type: f1
value: 0.9347579750092575
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1583
- Accuracy: 0.9345
- F1: 0.9348
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.1701 | 1.0 | 250 | 0.1701 | 0.9335 | 0.9343 |
| 0.1114 | 2.0 | 500 | 0.1583 | 0.9345 | 0.9348 |
### Framework versions
- Transformers 4.31.0
- Pytorch 2.0.1+cu118
- Datasets 1.16.1
- Tokenizers 0.13.3
| 1,884 | [
[
-0.037811279296875,
-0.040924072265625,
0.01384735107421875,
0.02203369140625,
-0.0259857177734375,
-0.0186614990234375,
-0.01346588134765625,
-0.0082244873046875,
0.011505126953125,
0.0079803466796875,
-0.056182861328125,
-0.050689697265625,
-0.059814453125,
... |
yubin0727/distilbert-base-uncased-finetuned-emotion | 2023-05-20T10:55:28.000Z | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:emotion",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | text-classification | yubin0727 | null | null | yubin0727/distilbert-base-uncased-finetuned-emotion | 0 | 2 | transformers | 2023-05-20T10:46:38 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-emotion
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: emotion
type: emotion
config: split
split: validation
args: split
metrics:
- name: Accuracy
type: accuracy
value: 0.9355
- name: F1
type: f1
value: 0.9355908388975606
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1583
- Accuracy: 0.9355
- F1: 0.9356
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.1842 | 1.0 | 250 | 0.1697 | 0.935 | 0.9347 |
| 0.1168 | 2.0 | 500 | 0.1583 | 0.9355 | 0.9356 |
### Framework versions
- Transformers 4.29.2
- Pytorch 2.0.1+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
| 1,848 | [
[
-0.038055419921875,
-0.04180908203125,
0.01546478271484375,
0.0216064453125,
-0.026092529296875,
-0.0188751220703125,
-0.01309967041015625,
-0.00870513916015625,
0.0105743408203125,
0.008026123046875,
-0.0562744140625,
-0.050048828125,
-0.058990478515625,
-0... |
AustinCarthy/Baseline_100Kphish_benignWinter_20_20_20 | 2023-05-20T22:19:44.000Z | [
"transformers",
"pytorch",
"tensorboard",
"bert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | text-classification | AustinCarthy | null | null | AustinCarthy/Baseline_100Kphish_benignWinter_20_20_20 | 0 | 2 | transformers | 2023-05-20T13:13:50 | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
- precision
- recall
model-index:
- name: Baseline_100Kphish_benignWinter_20_20_20
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Baseline_100Kphish_benignWinter_20_20_20
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0187
- Accuracy: 0.9973
- F1: 0.9705
- Precision: 0.9996
- Recall: 0.943
- Roc Auc Score: 0.9715
- Tpr At Fpr 0.01: 0.9568
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Precision | Recall | Roc Auc Score | Tpr At Fpr 0.01 |
|:-------------:|:-----:|:------:|:---------------:|:--------:|:------:|:---------:|:------:|:-------------:|:---------------:|
| 0.0043 | 1.0 | 65625 | 0.0343 | 0.9944 | 0.9379 | 0.9973 | 0.8852 | 0.9425 | 0.8798 |
| 0.0047 | 2.0 | 131250 | 0.0326 | 0.9951 | 0.9462 | 0.9996 | 0.8982 | 0.9491 | 0.9194 |
| 0.0027 | 3.0 | 196875 | 0.0308 | 0.9960 | 0.9559 | 0.9985 | 0.9168 | 0.9584 | 0.9276 |
| 0.0021 | 4.0 | 262500 | 0.0185 | 0.9971 | 0.9691 | 0.9996 | 0.9404 | 0.9702 | 0.9508 |
| 0.0004 | 5.0 | 328125 | 0.0187 | 0.9973 | 0.9705 | 0.9996 | 0.943 | 0.9715 | 0.9568 |
### Framework versions
- Transformers 4.29.1
- Pytorch 1.9.0+cu111
- Datasets 2.10.1
- Tokenizers 0.13.2
| 2,248 | [
[
-0.04144287109375,
-0.042724609375,
0.0100250244140625,
0.0077667236328125,
-0.0189971923828125,
-0.0222625732421875,
-0.004947662353515625,
-0.0191192626953125,
0.0283203125,
0.02508544921875,
-0.0543212890625,
-0.05523681640625,
-0.050201416015625,
-0.0110... |
Liangym/distilbert-base-uncased-finetuned-emotion | 2023-05-20T13:29:34.000Z | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:emotion",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | text-classification | Liangym | null | null | Liangym/distilbert-base-uncased-finetuned-emotion | 0 | 2 | transformers | 2023-05-20T13:21:20 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-emotion
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: emotion
type: emotion
args: split
metrics:
- name: Accuracy
type: accuracy
value: 0.9235
- name: F1
type: f1
value: 0.9233379482532471
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2237
- Accuracy: 0.9235
- F1: 0.9233
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.8145 | 1.0 | 250 | 0.3196 | 0.9065 | 0.9045 |
| 0.2417 | 2.0 | 500 | 0.2237 | 0.9235 | 0.9233 |
### Framework versions
- Transformers 4.13.0
- Pytorch 2.0.0
- Datasets 2.8.0
- Tokenizers 0.10.3
| 1,797 | [
[
-0.0379638671875,
-0.040679931640625,
0.01367950439453125,
0.02301025390625,
-0.026153564453125,
-0.019989013671875,
-0.0126800537109375,
-0.00829315185546875,
0.01010894775390625,
0.00859832763671875,
-0.056060791015625,
-0.051177978515625,
-0.060089111328125,
... |
SharKRippeR/distilbert-base-uncased-finetuned-clinc | 2023-05-20T14:53:36.000Z | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | text-classification | SharKRippeR | null | null | SharKRippeR/distilbert-base-uncased-finetuned-clinc | 0 | 2 | transformers | 2023-05-20T13:41:41 | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: distilbert-base-uncased-finetuned-clinc
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-clinc
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7720
- Accuracy: 0.9181
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 48
- eval_batch_size: 48
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 318 | 3.2887 | 0.7419 |
| 3.7868 | 2.0 | 636 | 1.8753 | 0.8371 |
| 3.7868 | 3.0 | 954 | 1.1570 | 0.8961 |
| 1.6927 | 4.0 | 1272 | 0.8573 | 0.9129 |
| 0.9056 | 5.0 | 1590 | 0.7720 | 0.9181 |
### Framework versions
- Transformers 4.28.0
- Pytorch 2.0.1+cu118
- Tokenizers 0.13.3
| 1,614 | [
[
-0.03509521484375,
-0.04534912109375,
0.0147705078125,
0.01284027099609375,
-0.0273895263671875,
-0.0214080810546875,
-0.0096893310546875,
-0.006252288818359375,
0.0032024383544921875,
0.0201873779296875,
-0.0501708984375,
-0.046417236328125,
-0.058929443359375,... |
DonMakar/bert-base-Daichi_support | 2023-05-24T19:58:49.000Z | [
"transformers",
"pytorch",
"tensorboard",
"bert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | text-classification | DonMakar | null | null | DonMakar/bert-base-Daichi_support | 0 | 2 | transformers | 2023-05-20T14:56:46 | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- f1
model-index:
- name: bert-base-Daichi_support
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-Daichi_support
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.7348
- F1: 0.5408
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 30
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| No log | 1.0 | 7 | 1.7976 | 0.3806 |
| No log | 2.0 | 14 | 1.6849 | 0.3806 |
| No log | 3.0 | 21 | 1.5963 | 0.3806 |
| No log | 4.0 | 28 | 1.4947 | 0.3806 |
| No log | 5.0 | 35 | 1.4645 | 0.3806 |
| No log | 6.0 | 42 | 1.4063 | 0.3806 |
| No log | 7.0 | 49 | 1.4314 | 0.4935 |
| No log | 8.0 | 56 | 1.2979 | 0.5274 |
| No log | 9.0 | 63 | 1.3582 | 0.4626 |
| No log | 10.0 | 70 | 1.5711 | 0.5164 |
| No log | 11.0 | 77 | 1.2483 | 0.5881 |
| No log | 12.0 | 84 | 1.1974 | 0.5860 |
| No log | 13.0 | 91 | 1.2582 | 0.5426 |
| No log | 14.0 | 98 | 1.7688 | 0.4504 |
| No log | 15.0 | 105 | 1.3278 | 0.5557 |
| No log | 16.0 | 112 | 1.6230 | 0.5119 |
| No log | 17.0 | 119 | 1.4229 | 0.5536 |
| No log | 18.0 | 126 | 1.4000 | 0.5536 |
| No log | 19.0 | 133 | 1.4614 | 0.5408 |
| No log | 20.0 | 140 | 1.4676 | 0.5536 |
| No log | 21.0 | 147 | 1.7174 | 0.555 |
| No log | 22.0 | 154 | 1.5338 | 0.5536 |
| No log | 23.0 | 161 | 1.6979 | 0.6179 |
| No log | 24.0 | 168 | 1.7075 | 0.5408 |
| No log | 25.0 | 175 | 1.6655 | 0.5408 |
| No log | 26.0 | 182 | 1.6043 | 0.6179 |
| No log | 27.0 | 189 | 1.6945 | 0.6051 |
| No log | 28.0 | 196 | 1.7289 | 0.5408 |
| 1.1079 | 29.0 | 203 | 1.7329 | 0.5408 |
| 1.1079 | 30.0 | 210 | 1.7348 | 0.5408 |
### Framework versions
- Transformers 4.27.3
- Pytorch 2.0.0+cu117
- Datasets 2.11.0
- Tokenizers 0.13.2
| 3,063 | [
[
-0.0440673828125,
-0.039215087890625,
0.0087432861328125,
0.00716400146484375,
-0.011566162109375,
-0.01097869873046875,
-0.0007891654968261719,
-0.006931304931640625,
0.03863525390625,
0.018829345703125,
-0.05609130859375,
-0.052825927734375,
-0.04833984375,
... |
gyuturn/distilbert-base-uncased-finetuned-emotion | 2023-05-20T15:17:04.000Z | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:emotion",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | text-classification | gyuturn | null | null | gyuturn/distilbert-base-uncased-finetuned-emotion | 0 | 2 | transformers | 2023-05-20T15:12:09 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-emotion
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: emotion
type: emotion
config: split
split: validation
args: split
metrics:
- name: Accuracy
type: accuracy
value: 0.9265
- name: F1
type: f1
value: 0.9263780074691081
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2125
- Accuracy: 0.9265
- F1: 0.9264
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.7897 | 1.0 | 250 | 0.2971 | 0.9095 | 0.9067 |
| 0.241 | 2.0 | 500 | 0.2125 | 0.9265 | 0.9264 |
### Framework versions
- Transformers 4.29.2
- Pytorch 2.0.1+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
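As a usage sketch that is not part of the auto-generated card: assuming the checkpoint is published on the Hub under this repository id, it should load with the standard `pipeline` API.
```python
from transformers import pipeline

# Hedged example; the label names come from the emotion dataset cited above.
classifier = pipeline(
    "text-classification",
    model="gyuturn/distilbert-base-uncased-finetuned-emotion",
)
print(classifier("I can't wait to see you again!"))
```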
| 1,848 | [
[
-0.03753662109375,
-0.041839599609375,
0.01522064208984375,
0.0222015380859375,
-0.026092529296875,
-0.01922607421875,
-0.013519287109375,
-0.00878143310546875,
0.01033782958984375,
0.008056640625,
-0.05615234375,
-0.052001953125,
-0.059539794921875,
-0.0085... |
declare-lab/segue-w2v2-base | 2023-05-29T12:58:43.000Z | [
"transformers",
"pytorch",
"segue",
"audio",
"speech",
"pre-training",
"spoken language understanding",
"music",
"en",
"dataset:librispeech_asr",
"dataset:declare-lab/MELD",
"dataset:PolyAI/minds14",
"dataset:google/fleurs",
"arxiv:2305.12301",
"license:apache-2.0",
"endpoints_compatib... | null | declare-lab | null | null | declare-lab/segue-w2v2-base | 0 | 2 | transformers | 2023-05-20T15:28:04 | ---
datasets:
- librispeech_asr
- declare-lab/MELD
- PolyAI/minds14
- google/fleurs
language:
- en
metrics:
- accuracy
- f1
- mae
- pearsonr
- exact_match
tags:
- audio
- speech
- pre-training
- spoken language understanding
- music
license: apache-2.0
---
**Repository:** https://github.com/declare-lab/segue
**Paper:** https://arxiv.org/abs/2305.12301
SEGUE is a pre-training approach for sequence-level spoken language understanding (SLU) tasks.
We use knowledge distillation on a parallel speech-text corpus (e.g. an ASR corpus) to distil
language understanding knowledge from a textual sentence embedder to a pre-trained speech encoder.
SEGUE applied to Wav2Vec 2.0 improves performance for many SLU tasks, including
intent classification / slot-filling, spoken sentiment analysis, and spoken emotion classification.
These improvements were observed in both fine-tuned and non-fine-tuned settings, as well as few-shot settings.
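As a conceptual sketch of the distillation objective described above (our illustration under stated assumptions, not the authors' released code), one simple formulation regresses the speech encoder's utterance embedding onto a frozen sentence embedding of the paired transcript:
```python3
import torch
import torch.nn.functional as F

def distillation_loss(speech_emb: torch.Tensor, text_emb: torch.Tensor) -> torch.Tensor:
    # MSE between the speech utterance embedding and the frozen text
    # sentence embedding; the exact loss used by SEGUE may differ.
    return F.mse_loss(speech_emb, text_emb.detach())
```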
## How to Get Started with the Model
To use this model checkpoint, you need to use the model classes on [our GitHub repository](https://github.com/declare-lab/segue).
```python3
from segue.modeling_segue import SegueModel
import soundfile
# assuming this is 16kHz mono audio
raw_audio_array, sampling_rate = soundfile.read('example.wav')
model = SegueModel.from_pretrained('declare-lab/segue-w2v2-base')
inputs = model.processor(audio=raw_audio_array, sampling_rate=sampling_rate)
outputs = model(**inputs)
```
You do not need to create the `Processor` yourself; it is already available as `model.processor`.
`SegueForRegression` and `SegueForClassification` are also available. For classification,
the number of classes can be specified through the `n_classes` field in the model config,
e.g. `SegueForClassification.from_pretrained('declare-lab/segue-w2v2-base', n_classes=7)`.
Multi-label classification is also supported, e.g. `n_classes=[3, 7]` for two labels with 3 and 7 classes respectively.
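A minimal multi-label sketch, assuming `SegueForClassification` is importable from the same module as `SegueModel` (the repository linked below is the authoritative reference):
```python3
from segue.modeling_segue import SegueForClassification
import soundfile

# assuming this is 16kHz mono audio
raw_audio_array, sampling_rate = soundfile.read('example.wav')
# two labels with 3 and 7 classes respectively, as described above
model = SegueForClassification.from_pretrained(
    'declare-lab/segue-w2v2-base', n_classes=[3, 7])
inputs = model.processor(audio=raw_audio_array, sampling_rate=sampling_rate)
outputs = model(**inputs)  # expect one set of logits per label
```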
Pre-training and downstream task training scripts are available on [our GitHub repository](https://github.com/declare-lab/segue).
## Results
We show only simplified MInDS-14 and MELD results for brevity.
Please refer to the paper for full results.
### MInDS-14 (intent classification)
*Note: we used only the en-US subset of MInDS-14.*
#### Fine-tuning
|Model|Accuracy|
|-|-|
|w2v 2.0|89.4±2.3|
|SEGUE|**97.6±0.5**|
*Note: Wav2Vec 2.0 fine-tuning was unstable. Only 3 out of 6 runs converged; the results shown were taken from converged runs only.*
#### Frozen encoder
|Model|Accuracy|
|-|-|
|w2v 2.0|54.0|
|SEGUE|**77.9**|
### MELD (sentiment and emotion classification)
#### Fine-tuning
|Model|Sentiment F1|Emotion F1|
|-|-|-|
|w2v 2.0|47.3|39.3|
|SEGUE|53.2|41.1|
|SEGUE (higher LR)|**54.1**|**47.2**|
*Note: Wav2Vec 2.0 fine-tuning was unstable at the higher LR.*
#### Frozen encoder
|Model|Sentiment F1|Emotion F1|
|-|-|-|
|w2v 2.0|45.0±0.7|34.3±1.2|
|SEGUE|**45.8±0.1**|**35.7±0.3**|
## Limitations
In the paper, we hypothesized that SEGUE may perform worse on tasks that rely less on
understanding and more on word detection. This may explain why SEGUE did not manage to
improve upon Wav2Vec 2.0 on the Fluent Speech Commands (FSC) task. We also experimented with
an ASR task (FLEURS), which heavily relies on word detection, to further demonstrate this.
However, this does not mean that SEGUE performs worse on intent classification tasks
in general. MInDS-14 was able to benefit greatly from SEGUE despite also being an intent
classification task, as it has more free-form utterances that may benefit more from
understanding.
## Citation
```bibtex
@inproceedings{segue2023,
title={Sentence Embedder Guided Utterance Encoder (SEGUE) for Spoken Language Understanding},
author={Tan, Yi Xuan and Majumder, Navonil and Poria, Soujanya},
booktitle={Interspeech},
year={2023}
}
``` | 3,899 | [
[
-0.019317626953125,
-0.0548095703125,
0.03814697265625,
0.0155029296875,
-0.0193939208984375,
-0.00696563720703125,
-0.01593017578125,
-0.0230560302734375,
-0.0017118453979492188,
0.0298614501953125,
-0.040435791015625,
-0.041656494140625,
-0.058013916015625,
... |
YakovElm/MariaDB20Classic | 2023-05-22T04:39:35.000Z | [
"transformers",
"tf",
"bert",
"text-classification",
"generated_from_keras_callback",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | text-classification | YakovElm | null | null | YakovElm/MariaDB20Classic | 0 | 2 | transformers | 2023-05-20T16:22:59 | ---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: MariaDB20Classic
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# MariaDB20Classic
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.1851
- Train Accuracy: 0.9356
- Validation Loss: 0.1420
- Validation Accuracy: 0.9698
- Epoch: 2
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': 1.0, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': False, 'is_legacy_optimizer': False, 'learning_rate': 3e-05, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32
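As an illustrative sketch (the training loop itself is not included in this card), the serialized optimizer config above corresponds roughly to:
```python
import tensorflow as tf

# Reconstructed from the config dict above; sketch only.
optimizer = tf.keras.optimizers.Adam(
    learning_rate=3e-05,
    beta_1=0.9,
    beta_2=0.999,
    epsilon=1e-08,
    clipnorm=1.0,
)
```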
### Training results
| Train Loss | Train Accuracy | Validation Loss | Validation Accuracy | Epoch |
|:----------:|:--------------:|:---------------:|:-------------------:|:-----:|
| 0.2787 | 0.9079 | 0.1294 | 0.9698 | 0 |
| 0.2064 | 0.9356 | 0.1271 | 0.9698 | 1 |
| 0.1851 | 0.9356 | 0.1420 | 0.9698 | 2 |
### Framework versions
- Transformers 4.29.2
- TensorFlow 2.12.0
- Datasets 2.12.0
- Tokenizers 0.13.3
| 1,776 | [
[
-0.04296875,
-0.044189453125,
0.0222625732421875,
0.002956390380859375,
-0.0333251953125,
-0.0296783447265625,
-0.0152435302734375,
-0.0263519287109375,
0.0172271728515625,
0.0163726806640625,
-0.057342529296875,
-0.051666259765625,
-0.05023193359375,
-0.026... |
YakovElm/MariaDB15Classic | 2023-05-22T04:04:54.000Z | [
"transformers",
"tf",
"bert",
"text-classification",
"generated_from_keras_callback",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | text-classification | YakovElm | null | null | YakovElm/MariaDB15Classic | 0 | 2 | transformers | 2023-05-20T16:23:07 | ---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: MariaDB15Classic
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# MariaDB15Classic
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.1835
- Train Accuracy: 0.9305
- Validation Loss: 0.1779
- Validation Accuracy: 0.9598
- Epoch: 2
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': 1.0, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': False, 'is_legacy_optimizer': False, 'learning_rate': 3e-05, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32
### Training results
| Train Loss | Train Accuracy | Validation Loss | Validation Accuracy | Epoch |
|:----------:|:--------------:|:---------------:|:-------------------:|:-----:|
| 0.2748 | 0.9264 | 0.1661 | 0.9598 | 0 |
| 0.2065 | 0.9297 | 0.1757 | 0.9598 | 1 |
| 0.1835 | 0.9305 | 0.1779 | 0.9598 | 2 |
### Framework versions
- Transformers 4.29.2
- TensorFlow 2.12.0
- Datasets 2.12.0
- Tokenizers 0.13.3
| 1,776 | [
[
-0.04364013671875,
-0.04241943359375,
0.0213623046875,
0.00455474853515625,
-0.0340576171875,
-0.029541015625,
-0.016357421875,
-0.025726318359375,
0.01548004150390625,
0.01520538330078125,
-0.05584716796875,
-0.04949951171875,
-0.051727294921875,
-0.0256500... |
YakovElm/MariaDB10Classic | 2023-05-22T03:32:04.000Z | [
"transformers",
"tf",
"bert",
"text-classification",
"generated_from_keras_callback",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | text-classification | YakovElm | null | null | YakovElm/MariaDB10Classic | 0 | 2 | transformers | 2023-05-20T16:23:13 | ---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: MariaDB10Classic
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# MariaDB10Classic
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.1897
- Train Accuracy: 0.9280
- Validation Loss: 0.2292
- Validation Accuracy: 0.9523
- Epoch: 2
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': 1.0, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': False, 'is_legacy_optimizer': False, 'learning_rate': 3e-05, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32
### Training results
| Train Loss | Train Accuracy | Validation Loss | Validation Accuracy | Epoch |
|:----------:|:--------------:|:---------------:|:-------------------:|:-----:|
| 0.2918 | 0.9163 | 0.1944 | 0.9523 | 0 |
| 0.2313 | 0.9205 | 0.1865 | 0.9523 | 1 |
| 0.1897 | 0.9280 | 0.2292 | 0.9523 | 2 |
### Framework versions
- Transformers 4.29.2
- TensorFlow 2.12.0
- Datasets 2.12.0
- Tokenizers 0.13.3
| 1,776 | [
[
-0.042327880859375,
-0.04290771484375,
0.022186279296875,
0.002391815185546875,
-0.03564453125,
-0.0281982421875,
-0.014862060546875,
-0.0246124267578125,
0.0186614990234375,
0.01499176025390625,
-0.055877685546875,
-0.050628662109375,
-0.050628662109375,
-0... |
YakovElm/MariaDB5Classic | 2023-05-22T03:01:54.000Z | [
"transformers",
"tf",
"bert",
"text-classification",
"generated_from_keras_callback",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | text-classification | YakovElm | null | null | YakovElm/MariaDB5Classic | 0 | 2 | transformers | 2023-05-20T16:23:21 | ---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: MariaDB5Classic
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# MariaDB5Classic
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.2669
- Train Accuracy: 0.9013
- Validation Loss: 0.2763
- Validation Accuracy: 0.9322
- Epoch: 2
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': 1.0, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': False, 'is_legacy_optimizer': False, 'learning_rate': 3e-05, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32
### Training results
| Train Loss | Train Accuracy | Validation Loss | Validation Accuracy | Epoch |
|:----------:|:--------------:|:---------------:|:-------------------:|:-----:|
| 0.3419 | 0.8820 | 0.2456 | 0.9322 | 0 |
| 0.2844 | 0.8971 | 0.2508 | 0.9322 | 1 |
| 0.2669 | 0.9013 | 0.2763 | 0.9322 | 2 |
### Framework versions
- Transformers 4.29.2
- TensorFlow 2.12.0
- Datasets 2.12.0
- Tokenizers 0.13.3
| 1,774 | [
[
-0.043792724609375,
-0.042327880859375,
0.0211029052734375,
0.0022640228271484375,
-0.0328369140625,
-0.0292510986328125,
-0.0155029296875,
-0.0267181396484375,
0.01509857177734375,
0.01568603515625,
-0.056121826171875,
-0.0526123046875,
-0.0504150390625,
-0... |
YakovElm/Jira5Classic | 2023-05-22T01:47:28.000Z | [
"transformers",
"tf",
"bert",
"text-classification",
"generated_from_keras_callback",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | text-classification | YakovElm | null | null | YakovElm/Jira5Classic | 0 | 2 | transformers | 2023-05-20T16:23:40 | ---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: Jira5Classic
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# Jira5Classic
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.4002
- Train Accuracy: 0.8090
- Validation Loss: 0.7369
- Validation Accuracy: 0.6278
- Epoch: 2
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': 1.0, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': False, 'is_legacy_optimizer': False, 'learning_rate': 3e-05, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32
### Training results
| Train Loss | Train Accuracy | Validation Loss | Validation Accuracy | Epoch |
|:----------:|:--------------:|:---------------:|:-------------------:|:-----:|
| 0.5501 | 0.7314 | 0.7472 | 0.4858 | 0 |
| 0.4620 | 0.7681 | 0.7721 | 0.5047 | 1 |
| 0.4002 | 0.8090 | 0.7369 | 0.6278 | 2 |
### Framework versions
- Transformers 4.29.2
- TensorFlow 2.12.0
- Datasets 2.12.0
- Tokenizers 0.13.3
| 1,768 | [
[
-0.038116455078125,
-0.0389404296875,
0.020355224609375,
-0.002452850341796875,
-0.033538818359375,
-0.0218505859375,
-0.0171661376953125,
-0.0264892578125,
0.015167236328125,
0.0113067626953125,
-0.050628662109375,
-0.051055908203125,
-0.04986572265625,
-0.... |
YakovElm/Jira10Classic | 2023-05-22T02:04:36.000Z | [
"transformers",
"tf",
"bert",
"text-classification",
"generated_from_keras_callback",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | text-classification | YakovElm | null | null | YakovElm/Jira10Classic | 0 | 2 | transformers | 2023-05-20T16:23:47 | ---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: Jira10Classic
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# Jira10Classic
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.3408
- Train Accuracy: 0.8437
- Validation Loss: 0.8415
- Validation Accuracy: 0.6435
- Epoch: 2
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': 1.0, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': False, 'is_legacy_optimizer': False, 'learning_rate': 3e-05, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32
### Training results
| Train Loss | Train Accuracy | Validation Loss | Validation Accuracy | Epoch |
|:----------:|:--------------:|:---------------:|:-------------------:|:-----:|
| 0.5042 | 0.7765 | 0.9294 | 0.4921 | 0 |
| 0.4428 | 0.7912 | 0.6721 | 0.5552 | 1 |
| 0.3408 | 0.8437 | 0.8415 | 0.6435 | 2 |
### Framework versions
- Transformers 4.29.2
- TensorFlow 2.12.0
- Datasets 2.12.0
- Tokenizers 0.13.3
| 1,770 | [
[
-0.03717041015625,
-0.04229736328125,
0.019989013671875,
-0.0014638900756835938,
-0.0328369140625,
-0.0228271484375,
-0.0192718505859375,
-0.0247650146484375,
0.019012451171875,
0.01209259033203125,
-0.04925537109375,
-0.046661376953125,
-0.05133056640625,
-... |
YakovElm/Jira15Classic | 2023-05-22T02:22:27.000Z | [
"transformers",
"tf",
"bert",
"text-classification",
"generated_from_keras_callback",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | text-classification | YakovElm | null | null | YakovElm/Jira15Classic | 0 | 2 | transformers | 2023-05-20T16:23:53 | ---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: Jira15Classic
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# Jira15Classic
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.4184
- Train Accuracy: 0.8027
- Validation Loss: 0.7165
- Validation Accuracy: 0.5331
- Epoch: 2
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': 1.0, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': False, 'is_legacy_optimizer': False, 'learning_rate': 3e-05, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32
### Training results
| Train Loss | Train Accuracy | Validation Loss | Validation Accuracy | Epoch |
|:----------:|:--------------:|:---------------:|:-------------------:|:-----:|
| 0.5149 | 0.7754 | 0.7342 | 0.5205 | 0 |
| 0.4648 | 0.7912 | 0.7246 | 0.5205 | 1 |
| 0.4184 | 0.8027 | 0.7165 | 0.5331 | 2 |
### Framework versions
- Transformers 4.29.2
- TensorFlow 2.12.0
- Datasets 2.12.0
- Tokenizers 0.13.3
| 1,770 | [
[
-0.039642333984375,
-0.0438232421875,
0.0188140869140625,
0.0002582073211669922,
-0.03424072265625,
-0.024261474609375,
-0.01947021484375,
-0.0247344970703125,
0.016876220703125,
0.01261138916015625,
-0.051513671875,
-0.048187255859375,
-0.049957275390625,
-... |
YakovElm/Jira20Classic | 2023-05-22T02:40:02.000Z | [
"transformers",
"tf",
"bert",
"text-classification",
"generated_from_keras_callback",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | text-classification | YakovElm | null | null | YakovElm/Jira20Classic | 0 | 2 | transformers | 2023-05-20T16:24:00 | ---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: Jira20Classic
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# Jira20Classic
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.2068
- Train Accuracy: 0.9255
- Validation Loss: 0.2729
- Validation Accuracy: 0.9338
- Epoch: 2
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': 1.0, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': False, 'is_legacy_optimizer': False, 'learning_rate': 3e-05, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32
### Training results
| Train Loss | Train Accuracy | Validation Loss | Validation Accuracy | Epoch |
|:----------:|:--------------:|:---------------:|:-------------------:|:-----:|
| 0.3798 | 0.8657 | 0.2552 | 0.9338 | 0 |
| 0.2667 | 0.9003 | 0.2573 | 0.9338 | 1 |
| 0.2068 | 0.9255 | 0.2729 | 0.9338 | 2 |
### Framework versions
- Transformers 4.29.2
- TensorFlow 2.12.0
- Datasets 2.12.0
- Tokenizers 0.13.3
| 1,770 | [
[
-0.036346435546875,
-0.0413818359375,
0.0203857421875,
-0.000052034854888916016,
-0.0333251953125,
-0.0214691162109375,
-0.01861572265625,
-0.0250396728515625,
0.018096923828125,
0.01253509521484375,
-0.052093505859375,
-0.048370361328125,
-0.05059814453125,
... |
YakovElm/IntelDAOS20Classic | 2023-05-22T01:32:01.000Z | [
"transformers",
"tf",
"bert",
"text-classification",
"generated_from_keras_callback",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | text-classification | YakovElm | null | null | YakovElm/IntelDAOS20Classic | 0 | 2 | transformers | 2023-05-20T16:24:13 | ---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: IntelDAOS20Classic
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# IntelDAOS20Classic
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.1389
- Train Accuracy: 0.9610
- Validation Loss: 0.3308
- Validation Accuracy: 0.9099
- Epoch: 2
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': 1.0, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': False, 'is_legacy_optimizer': False, 'learning_rate': 3e-05, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32
### Training results
| Train Loss | Train Accuracy | Validation Loss | Validation Accuracy | Epoch |
|:----------:|:--------------:|:---------------:|:-------------------:|:-----:|
| 0.1973 | 0.9610 | 0.3487 | 0.9099 | 0 |
| 0.1567 | 0.9610 | 0.3067 | 0.9099 | 1 |
| 0.1389 | 0.9610 | 0.3308 | 0.9099 | 2 |
### Framework versions
- Transformers 4.29.2
- TensorFlow 2.12.0
- Datasets 2.12.0
- Tokenizers 0.13.3
| 1,780 | [
[
-0.04510498046875,
-0.04034423828125,
0.0210113525390625,
0.0009393692016601562,
-0.034759521484375,
-0.024566650390625,
-0.01922607421875,
-0.0284576416015625,
0.01495361328125,
0.010162353515625,
-0.054931640625,
-0.0478515625,
-0.0517578125,
-0.0255584716... |
YakovElm/IntelDAOS15Classic | 2023-05-22T01:15:47.000Z | [
"transformers",
"tf",
"bert",
"text-classification",
"generated_from_keras_callback",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | text-classification | YakovElm | null | null | YakovElm/IntelDAOS15Classic | 0 | 2 | transformers | 2023-05-20T16:24:19 | ---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: IntelDAOS15Classic
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# IntelDAOS15Classic
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.2054
- Train Accuracy: 0.9460
- Validation Loss: 0.3533
- Validation Accuracy: 0.8859
- Epoch: 2
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': 1.0, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': False, 'is_legacy_optimizer': False, 'learning_rate': 3e-05, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32
### Training results
| Train Loss | Train Accuracy | Validation Loss | Validation Accuracy | Epoch |
|:----------:|:--------------:|:---------------:|:-------------------:|:-----:|
| 0.2571 | 0.9290 | 0.3861 | 0.8859 | 0 |
| 0.2009 | 0.9460 | 0.3728 | 0.8859 | 1 |
| 0.2054 | 0.9460 | 0.3533 | 0.8859 | 2 |
### Framework versions
- Transformers 4.29.2
- TensorFlow 2.12.0
- Datasets 2.12.0
- Tokenizers 0.13.3
| 1,780 | [
[
-0.04449462890625,
-0.04144287109375,
0.020263671875,
0.0012645721435546875,
-0.03546142578125,
-0.0262298583984375,
-0.0205230712890625,
-0.027130126953125,
0.0144500732421875,
0.0100250244140625,
-0.05474853515625,
-0.048858642578125,
-0.050933837890625,
-... |
YakovElm/IntelDAOS10Classic | 2023-05-22T00:59:15.000Z | [
"transformers",
"tf",
"bert",
"text-classification",
"generated_from_keras_callback",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | text-classification | YakovElm | null | null | YakovElm/IntelDAOS10Classic | 0 | 2 | transformers | 2023-05-20T16:24:26 | ---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: IntelDAOS10Classic
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# IntelDAOS10Classic
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.2598
- Train Accuracy: 0.9200
- Validation Loss: 0.3966
- Validation Accuracy: 0.8739
- Epoch: 2
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': 1.0, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': False, 'is_legacy_optimizer': False, 'learning_rate': 3e-05, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32
### Training results
| Train Loss | Train Accuracy | Validation Loss | Validation Accuracy | Epoch |
|:----------:|:--------------:|:---------------:|:-------------------:|:-----:|
| 0.3055 | 0.9190 | 0.3794 | 0.8739 | 0 |
| 0.2784 | 0.9200 | 0.3869 | 0.8739 | 1 |
| 0.2598 | 0.9200 | 0.3966 | 0.8739 | 2 |
### Framework versions
- Transformers 4.29.2
- TensorFlow 2.12.0
- Datasets 2.12.0
- Tokenizers 0.13.3
| 1,780 | [
[
-0.0443115234375,
-0.039947509765625,
0.0217132568359375,
-0.00138092041015625,
-0.03271484375,
-0.02484130859375,
-0.0204925537109375,
-0.0287017822265625,
0.015472412109375,
0.00997161865234375,
-0.052978515625,
-0.047760009765625,
-0.05194091796875,
-0.02... |
YakovElm/IntelDAOS5Classic | 2023-05-22T00:45:34.000Z | [
"transformers",
"tf",
"bert",
"text-classification",
"generated_from_keras_callback",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | text-classification | YakovElm | null | null | YakovElm/IntelDAOS5Classic | 0 | 2 | transformers | 2023-05-20T16:24:35 | ---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: IntelDAOS5Classic
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# IntelDAOS5Classic
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.3501
- Train Accuracy: 0.8730
- Validation Loss: 0.4440
- Validation Accuracy: 0.8438
- Epoch: 2
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': 1.0, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': False, 'is_legacy_optimizer': False, 'learning_rate': 3e-05, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32
### Training results
| Train Loss | Train Accuracy | Validation Loss | Validation Accuracy | Epoch |
|:----------:|:--------------:|:---------------:|:-------------------:|:-----:|
| 0.4025 | 0.8690 | 0.4303 | 0.8438 | 0 |
| 0.3795 | 0.8740 | 0.4275 | 0.8438 | 1 |
| 0.3501 | 0.8730 | 0.4440 | 0.8438 | 2 |
### Framework versions
- Transformers 4.29.2
- TensorFlow 2.12.0
- Datasets 2.12.0
- Tokenizers 0.13.3
| 1,778 | [
[
-0.045074462890625,
-0.037261962890625,
0.0214996337890625,
-0.001842498779296875,
-0.03338623046875,
-0.0247955322265625,
-0.0192413330078125,
-0.0297698974609375,
0.01216888427734375,
0.0094757080078125,
-0.0538330078125,
-0.05010986328125,
-0.0516357421875,
... |
YakovElm/Hyperledger5Classic | 2023-05-21T22:40:44.000Z | [
"transformers",
"tf",
"bert",
"text-classification",
"generated_from_keras_callback",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | text-classification | YakovElm | null | null | YakovElm/Hyperledger5Classic | 0 | 2 | transformers | 2023-05-20T16:24:50 | ---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: Hyperledger5Classic
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# Hyperledger5Classic
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.3584
- Train Accuracy: 0.8609
- Validation Loss: 0.4363
- Validation Accuracy: 0.8288
- Epoch: 2
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': 1.0, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': False, 'is_legacy_optimizer': False, 'learning_rate': 3e-05, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32
### Training results
| Train Loss | Train Accuracy | Validation Loss | Validation Accuracy | Epoch |
|:----------:|:--------------:|:---------------:|:-------------------:|:-----:|
| 0.4097 | 0.8547 | 0.4168 | 0.8361 | 0 |
| 0.3943 | 0.8547 | 0.4342 | 0.8361 | 1 |
| 0.3584 | 0.8609 | 0.4363 | 0.8288 | 2 |
### Framework versions
- Transformers 4.29.2
- TensorFlow 2.12.0
- Datasets 2.12.0
- Tokenizers 0.13.3
| 1,782 | [
[
-0.0478515625,
-0.04083251953125,
0.0217437744140625,
0.00113677978515625,
-0.030487060546875,
-0.02496337890625,
-0.0177001953125,
-0.02825927734375,
0.0095062255859375,
0.0134124755859375,
-0.054107666015625,
-0.051605224609375,
-0.05364990234375,
-0.01545... |
YakovElm/Hyperledger10Classic | 2023-05-21T23:17:01.000Z | [
"transformers",
"tf",
"bert",
"text-classification",
"generated_from_keras_callback",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | text-classification | YakovElm | null | null | YakovElm/Hyperledger10Classic | 0 | 2 | transformers | 2023-05-20T16:24:58 | ---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: Hyperledger10Classic
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# Hyperledger10Classic
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.2836
- Train Accuracy: 0.8893
- Validation Loss: 0.3855
- Validation Accuracy: 0.8579
- Epoch: 2
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': 1.0, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': False, 'is_legacy_optimizer': False, 'learning_rate': 3e-05, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32
### Training results
| Train Loss | Train Accuracy | Validation Loss | Validation Accuracy | Epoch |
|:----------:|:--------------:|:---------------:|:-------------------:|:-----:|
| 0.3564 | 0.8776 | 0.3768 | 0.8600 | 0 |
| 0.3291 | 0.8838 | 0.4137 | 0.8600 | 1 |
| 0.2836 | 0.8893 | 0.3855 | 0.8579 | 2 |
### Framework versions
- Transformers 4.29.2
- TensorFlow 2.12.0
- Datasets 2.12.0
- Tokenizers 0.13.3
| 1,784 | [
[
-0.047149658203125,
-0.044281005859375,
0.0211334228515625,
0.0023345947265625,
-0.0286865234375,
-0.02630615234375,
-0.0209503173828125,
-0.025726318359375,
0.0144805908203125,
0.01419830322265625,
-0.05108642578125,
-0.0467529296875,
-0.05450439453125,
-0.... |
YakovElm/Hyperledger15Classic | 2023-05-21T23:53:33.000Z | [
"transformers",
"tf",
"bert",
"text-classification",
"generated_from_keras_callback",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | text-classification | YakovElm | null | null | YakovElm/Hyperledger15Classic | 0 | 2 | transformers | 2023-05-20T16:25:05 | ---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: Hyperledger15Classic
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# Hyperledger15Classic
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.2734
- Train Accuracy: 0.9035
- Validation Loss: 0.3337
- Validation Accuracy: 0.8807
- Epoch: 2
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': 1.0, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': False, 'is_legacy_optimizer': False, 'learning_rate': 3e-05, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32
### Training results
| Train Loss | Train Accuracy | Validation Loss | Validation Accuracy | Epoch |
|:----------:|:--------------:|:---------------:|:-------------------:|:-----:|
| 0.3185 | 0.8990 | 0.3453 | 0.8807 | 0 |
| 0.2980 | 0.9035 | 0.3266 | 0.8807 | 1 |
| 0.2734 | 0.9035 | 0.3337 | 0.8807 | 2 |
### Framework versions
- Transformers 4.29.2
- TensorFlow 2.12.0
- Datasets 2.12.0
- Tokenizers 0.13.3
| 1,784 | [
[
-0.049530029296875,
-0.04547119140625,
0.0206451416015625,
0.004425048828125,
-0.0299224853515625,
-0.0265655517578125,
-0.0204620361328125,
-0.025421142578125,
0.01152801513671875,
0.01433563232421875,
-0.053741455078125,
-0.049041748046875,
-0.051971435546875,... |
YakovElm/Hyperledger20Classic | 2023-05-22T00:30:45.000Z | [
"transformers",
"tf",
"bert",
"text-classification",
"generated_from_keras_callback",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | text-classification | YakovElm | null | null | YakovElm/Hyperledger20Classic | 0 | 2 | transformers | 2023-05-20T16:25:12 | ---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: Hyperledger20Classic
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# Hyperledger20Classic
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.2349
- Train Accuracy: 0.9170
- Validation Loss: 0.3001
- Validation Accuracy: 0.8921
- Epoch: 2
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': 1.0, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': False, 'is_legacy_optimizer': False, 'learning_rate': 3e-05, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32
### Training results
| Train Loss | Train Accuracy | Validation Loss | Validation Accuracy | Epoch |
|:----------:|:--------------:|:---------------:|:-------------------:|:-----:|
| 0.3071 | 0.9063 | 0.3043 | 0.8983 | 0 |
| 0.2632 | 0.9149 | 0.3454 | 0.8983 | 1 |
| 0.2349 | 0.9170 | 0.3001 | 0.8921 | 2 |
### Framework versions
- Transformers 4.29.2
- TensorFlow 2.12.0
- Datasets 2.12.0
- Tokenizers 0.13.3
| 1,784 | [
[
-0.0487060546875,
-0.043914794921875,
0.021697998046875,
0.00261688232421875,
-0.029266357421875,
-0.0257415771484375,
-0.0188446044921875,
-0.0261077880859375,
0.01248931884765625,
0.0156097412109375,
-0.055389404296875,
-0.048126220703125,
-0.054107666015625,
... |
tchebonenko/As1b-distilbert_classifier | 2023-05-20T20:50:21.000Z | [
"transformers",
"tf",
"distilbert",
"text-classification",
"generated_from_keras_callback",
"en",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | text-classification | tchebonenko | null | null | tchebonenko/As1b-distilbert_classifier | 0 | 2 | transformers | 2023-05-20T20:17:51 | ---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: As1b-distilbert_classifier
results: []
language:
- en
metrics:
- accuracy
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# As1b-distilbert_classifier
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the 20 newsgroups dataset.
Details about the dataset are available in the [scikit-learn documentation](https://scikit-learn.org/stable/datasets/real_world.html#newsgroups-dataset).
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': None, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': True, 'is_legacy_optimizer': False, 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 1908, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32
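As a hedged sketch of the learning-rate schedule serialized above (the original training script is not part of this card):
```python
import tensorflow as tf

# PolynomialDecay with power=1.0 is a linear decay from 2e-05 to 0.0
# over 1908 steps, matching the serialized config above.
lr_schedule = tf.keras.optimizers.schedules.PolynomialDecay(
    initial_learning_rate=2e-05,
    decay_steps=1908,
    end_learning_rate=0.0,
    power=1.0,
)
optimizer = tf.keras.optimizers.Adam(learning_rate=lr_schedule, epsilon=1e-08)
```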
### Training results
Achieved 83.4% accuracy.
### Framework versions
- Transformers 4.28.0
- TensorFlow 2.12.0
- Datasets 2.12.0
- Tokenizers 0.13.3 | 1,435 | [
[
-0.03765869140625,
-0.03570556640625,
0.0132904052734375,
0.0113372802734375,
-0.0281219482421875,
-0.003887176513671875,
-0.0034351348876953125,
-0.0169219970703125,
0.0025463104248046875,
-0.0009927749633789062,
-0.040374755859375,
-0.04718017578125,
-0.061676... |
maxbarshay/distilbert-base-uncased-finetuned-emotion | 2023-05-20T22:03:21.000Z | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:emotion",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | text-classification | maxbarshay | null | null | maxbarshay/distilbert-base-uncased-finetuned-emotion | 0 | 2 | transformers | 2023-05-20T21:44:39 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-emotion
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: emotion
type: emotion
config: split
split: validation
args: split
metrics:
- name: Accuracy
type: accuracy
value: 0.9265
- name: F1
type: f1
value: 0.9264388876891729
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2222
- Accuracy: 0.9265
- F1: 0.9264
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.8772 | 1.0 | 250 | 0.3281 | 0.9035 | 0.9008 |
| 0.2625 | 2.0 | 500 | 0.2222 | 0.9265 | 0.9264 |
### Framework versions
- Transformers 4.29.2
- Pytorch 2.0.1+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
| 1,848 | [
[
-0.037445068359375,
-0.041595458984375,
0.01493072509765625,
0.0221099853515625,
-0.025787353515625,
-0.01922607421875,
-0.01346588134765625,
-0.008392333984375,
0.01001739501953125,
0.007755279541015625,
-0.056182861328125,
-0.051788330078125,
-0.05984497070312... |
AustinCarthy/Onlyphish_100KP_BFall_fromB_10KGen_topP_0.75 | 2023-05-21T06:01:03.000Z | [
"transformers",
"pytorch",
"tensorboard",
"bert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | text-classification | AustinCarthy | null | null | AustinCarthy/Onlyphish_100KP_BFall_fromB_10KGen_topP_0.75 | 0 | 2 | transformers | 2023-05-20T22:22:11 | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
- precision
- recall
model-index:
- name: Onlyphish_100KP_BFall_fromB_10KGen_topP_0.75
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Onlyphish_100KP_BFall_fromB_10KGen_topP_0.75
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0215
- Accuracy: 0.9974
- F1: 0.9724
- Precision: 0.9989
- Recall: 0.9472
- Roc Auc Score: 0.9736
- Tpr At Fpr 0.01: 0.9548
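The "Tpr At Fpr 0.01" figure can be computed from model scores and labels along these lines (a sketch with toy data, not the authors' evaluation script):
```python
import numpy as np
from sklearn.metrics import roc_curve

y_true = np.array([0, 0, 1, 1, 0, 1])                     # toy labels
y_score = np.array([0.10, 0.40, 0.35, 0.80, 0.20, 0.90])  # toy scores

fpr, tpr, _ = roc_curve(y_true, y_score)
# largest TPR among thresholds whose FPR does not exceed 0.01
tpr_at_fpr_001 = tpr[fpr <= 0.01].max()
```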
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Precision | Recall | Roc Auc Score | Tpr At Fpr 0.01 |
|:-------------:|:-----:|:------:|:---------------:|:--------:|:------:|:---------:|:------:|:-------------:|:---------------:|
| 0.0021 | 1.0 | 72188 | 0.0346 | 0.995 | 0.9449 | 0.9943 | 0.9002 | 0.9500 | 0.8748 |
| 0.0025 | 2.0 | 144376 | 0.0316 | 0.9959 | 0.9547 | 0.9989 | 0.9142 | 0.9571 | 0.9218 |
| 0.0019 | 3.0 | 216564 | 0.0289 | 0.9960 | 0.9566 | 0.9996 | 0.9172 | 0.9586 | 0.9382 |
| 0.0013 | 4.0 | 288752 | 0.0193 | 0.9975 | 0.9727 | 0.9985 | 0.9482 | 0.9741 | 0.9494 |
| 0.001 | 5.0 | 360940 | 0.0215 | 0.9974 | 0.9724 | 0.9989 | 0.9472 | 0.9736 | 0.9548 |
### Framework versions
- Transformers 4.29.1
- Pytorch 1.9.0+cu111
- Datasets 2.10.1
- Tokenizers 0.13.2
| 2,257 | [
[
-0.042205810546875,
-0.043426513671875,
0.0093231201171875,
0.0095977783203125,
-0.0203399658203125,
-0.023681640625,
-0.0080413818359375,
-0.01715087890625,
0.0296173095703125,
0.0279693603515625,
-0.052703857421875,
-0.053070068359375,
-0.0496826171875,
-0... |
ramortegui/distilbert_based_classifier_with_newsgroups | 2023-05-20T22:24:00.000Z | [
"transformers",
"tf",
"distilbert",
"text-classification",
"generated_from_keras_callback",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | text-classification | ramortegui | null | null | ramortegui/distilbert_based_classifier_with_newsgroups | 0 | 2 | transformers | 2023-05-20T22:23:29 | ---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: distilbert_based_classifier_with_newsgroups
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# distilbert_based_classifier_with_newsgroups
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': None, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': True, 'is_legacy_optimizer': False, 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 1908, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32
### Training results
### Framework versions
- Transformers 4.28.0
- TensorFlow 2.12.0
- Tokenizers 0.13.3
| 1,475 | [
[
-0.037139892578125,
-0.04486083984375,
0.0207672119140625,
0.0081329345703125,
-0.033721923828125,
-0.00702667236328125,
-0.0127716064453125,
-0.0137176513671875,
0.0004973411560058594,
-0.00524139404296875,
-0.040313720703125,
-0.052764892578125,
-0.06604003906... |
Echiguerkh/rinna-roberta-qa-ar2 | 2023-05-21T02:54:28.000Z | [
"transformers",
"pytorch",
"tensorboard",
"xlm-roberta",
"question-answering",
"generated_from_trainer",
"dataset:arcd",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | question-answering | Echiguerkh | null | null | Echiguerkh/rinna-roberta-qa-ar2 | 1 | 2 | transformers | 2023-05-20T23:55:50 | ---
license: mit
tags:
- generated_from_trainer
datasets:
- arcd
model-index:
- name: rinna-roberta-qa-ar2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# rinna-roberta-qa-ar2
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the arcd dataset.
It achieves the following results on the evaluation set:
- Loss: 7.3167
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 7e-05
- train_batch_size: 2
- eval_batch_size: 1
- seed: 42
- gradient_accumulation_steps: 16
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 100
- num_epochs: 170
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:------:|:----:|:---------------:|
| 0.3148 | 6.86 | 150 | 4.5451 |
| 0.2021 | 13.71 | 300 | 4.3560 |
| 0.1134 | 20.57 | 450 | 5.1730 |
| 0.0648 | 27.43 | 600 | 5.0504 |
| 0.0734 | 34.29 | 750 | 5.3601 |
| 0.032 | 41.14 | 900 | 5.4291 |
| 0.0171 | 48.0 | 1050 | 6.9606 |
| 0.0343 | 54.86 | 1200 | 4.9076 |
| 0.0186 | 61.71 | 1350 | 6.7967 |
| 0.0054 | 68.57 | 1500 | 6.0515 |
| 0.0118 | 75.43 | 1650 | 7.0908 |
| 0.0027 | 82.29 | 1800 | 7.5651 |
| 0.0078 | 89.14 | 1950 | 7.3787 |
| 0.0172 | 96.0 | 2100 | 7.7559 |
| 0.0077 | 102.86 | 2250 | 7.1376 |
| 0.0041 | 109.71 | 2400 | 7.3236 |
| 0.0022 | 116.57 | 2550 | 7.3134 |
| 0.0004 | 123.43 | 2700 | 7.2484 |
| 0.0018 | 130.29 | 2850 | 7.1747 |
| 0.0009 | 137.14 | 3000 | 7.4311 |
| 0.0008 | 144.0 | 3150 | 7.5083 |
| 0.0006 | 150.86 | 3300 | 7.4622 |
| 0.0002 | 157.71 | 3450 | 7.3167 |
### Framework versions
- Transformers 4.29.2
- Pytorch 2.0.1+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
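As a usage sketch that is not part of the auto-generated card (the inputs below are placeholders; per the arcd dataset, real usage would pass an Arabic question and context):
```python
from transformers import pipeline

qa = pipeline("question-answering", model="Echiguerkh/rinna-roberta-qa-ar2")
# placeholder English inputs for illustration only
result = qa(question="Who founded the company?",
            context="The company was founded by Alice in 1999.")
print(result["answer"], result["score"])
```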
| 2,511 | [
[
-0.04217529296875,
-0.04150390625,
0.01015472412109375,
0.0011463165283203125,
-0.00968170166015625,
-0.015655517578125,
-0.0018205642700195312,
-0.00701141357421875,
0.0201873779296875,
0.040130615234375,
-0.0504150390625,
-0.050567626953125,
-0.053497314453125... |
j15r/test_trainer | 2023-05-21T02:58:07.000Z | [
"transformers",
"pytorch",
"bert",
"text-classification",
"generated_from_trainer",
"dataset:yelp_review_full",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | text-classification | j15r | null | null | j15r/test_trainer | 0 | 2 | transformers | 2023-05-21T02:56:18 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- yelp_review_full
metrics:
- accuracy
model-index:
- name: test_trainer
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: yelp_review_full
type: yelp_review_full
config: yelp_review_full
split: test
args: yelp_review_full
metrics:
- name: Accuracy
type: accuracy
value: 0.41
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# test_trainer
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the yelp_review_full dataset.
It achieves the following results on the evaluation set:
- Loss: 1.4217
- Accuracy: 0.41
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 13 | 1.5608 | 0.29 |
| No log | 2.0 | 26 | 1.4456 | 0.42 |
| No log | 3.0 | 39 | 1.4217 | 0.41 |
### Framework versions
- Transformers 4.28.1
- Pytorch 2.1.0.dev20230506
- Datasets 2.12.0
- Tokenizers 0.13.3
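As a hedged inference sketch (not from the card; assumes the five-way yelp_review_full label head was saved with the checkpoint):
```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("j15r/test_trainer")
model = AutoModelForSequenceClassification.from_pretrained("j15r/test_trainer")
inputs = tokenizer("Great food, but the service was slow.", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
print(logits.argmax(dim=-1).item())  # predicted star class, 0-4
```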
| 1,774 | [
[
-0.032073974609375,
-0.045135498046875,
0.01052093505859375,
0.01202392578125,
-0.0269012451171875,
-0.037445068359375,
-0.015716552734375,
-0.0191497802734375,
0.0116729736328125,
0.0225830078125,
-0.0587158203125,
-0.04248046875,
-0.042938232421875,
-0.019... |
vincha77/distilbert_classifier_newsgroups | 2023-05-21T04:22:24.000Z | [
"transformers",
"tf",
"distilbert",
"text-classification",
"generated_from_keras_callback",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | text-classification | vincha77 | null | null | vincha77/distilbert_classifier_newsgroups | 0 | 2 | transformers | 2023-05-21T04:21:52 | ---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: distilbert_classifier_newsgroups
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# distilbert_classifier_newsgroups
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': None, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': True, 'is_legacy_optimizer': False, 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 1908, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32
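For readability, the serialized optimizer dict above corresponds roughly to the following Keras setup; this is an illustrative reconstruction, not code taken from the original training script:
```python
import tensorflow as tf

# Adam with a linear (power=1.0) polynomial decay of the learning rate
# from 2e-05 down to 0 over 1908 steps, as serialized above.
lr_schedule = tf.keras.optimizers.schedules.PolynomialDecay(
    initial_learning_rate=2e-5,
    decay_steps=1908,
    end_learning_rate=0.0,
    power=1.0,
)
optimizer = tf.keras.optimizers.Adam(
    learning_rate=lr_schedule,
    beta_1=0.9,
    beta_2=0.999,
    epsilon=1e-8,
)
```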
### Training results
### Framework versions
- Transformers 4.28.0
- TensorFlow 2.12.0
- Datasets 2.12.0
- Tokenizers 0.13.3
| 1,471 | [
[
-0.038726806640625,
-0.042022705078125,
0.021209716796875,
0.00841522216796875,
-0.033599853515625,
-0.00681304931640625,
-0.01174163818359375,
-0.01084136962890625,
-0.002910614013671875,
-0.00621795654296875,
-0.041534423828125,
-0.050445556640625,
-0.06713867... |
wiorz/bert_small | 2023-05-21T05:46:17.000Z | [
"transformers",
"pytorch",
"tensorboard",
"bert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | text-classification | wiorz | null | null | wiorz/bert_small | 0 | 2 | transformers | 2023-05-21T04:25:36 | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
- precision
- recall
- f1
model-index:
- name: bert_small
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert_small
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.4537
- Accuracy: 0.88
- Precision: 0.625
- Recall: 0.3571
- F1: 0.4545
- D-index: 1.6429
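As a quick consistency check (added commentary, not original card content), the reported F1 is the harmonic mean of the reported precision and recall:

$$
F_1 = \frac{2PR}{P + R} = \frac{2 \cdot 0.625 \cdot 0.3571}{0.625 + 0.3571} \approx 0.4545
$$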
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1600
- num_epochs: 20
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Precision | Recall | F1 | D-index |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:---------:|:------:|:------:|:-------:|
| No log | 1.0 | 200 | 0.3773 | 0.86 | 0.0 | 0.0 | 0.0 | 1.4803 |
| No log | 2.0 | 400 | 0.4271 | 0.86 | 0.0 | 0.0 | 0.0 | 1.4803 |
| 0.5126 | 3.0 | 600 | 0.4598 | 0.87 | 0.55 | 0.3929 | 0.4583 | 1.6431 |
| 0.5126 | 4.0 | 800 | 0.6620 | 0.865 | 0.52 | 0.4643 | 0.4906 | 1.6624 |
| 0.2953 | 5.0 | 1000 | 0.8149 | 0.855 | 0.4615 | 0.2143 | 0.2927 | 1.5575 |
| 0.2953 | 6.0 | 1200 | 0.7819 | 0.875 | 0.5714 | 0.4286 | 0.4898 | 1.6623 |
| 0.2953 | 7.0 | 1400 | 1.0426 | 0.86 | 0.5 | 0.3571 | 0.4167 | 1.6173 |
| 0.1565 | 8.0 | 1600 | 1.0078 | 0.885 | 0.7273 | 0.2857 | 0.4103 | 1.6231 |
| 0.1565 | 9.0 | 1800 | 1.2939 | 0.865 | 0.6 | 0.1071 | 0.1818 | 1.5294 |
| 0.0643 | 10.0 | 2000 | 1.2661 | 0.88 | 0.6429 | 0.3214 | 0.4286 | 1.6299 |
| 0.0643 | 11.0 | 2200 | 1.3556 | 0.87 | 0.5833 | 0.25 | 0.3500 | 1.5905 |
| 0.0643 | 12.0 | 2400 | 1.2393 | 0.87 | 0.625 | 0.1786 | 0.2778 | 1.5635 |
| 0.0306 | 13.0 | 2600 | 1.3059 | 0.88 | 0.625 | 0.3571 | 0.4545 | 1.6429 |
| 0.0306 | 14.0 | 2800 | 1.3446 | 0.88 | 0.625 | 0.3571 | 0.4545 | 1.6429 |
| 0.0019 | 15.0 | 3000 | 1.3618 | 0.885 | 0.6471 | 0.3929 | 0.4889 | 1.6622 |
| 0.0019 | 16.0 | 3200 | 1.3785 | 0.885 | 0.6471 | 0.3929 | 0.4889 | 1.6622 |
| 0.0019 | 17.0 | 3400 | 1.4361 | 0.88 | 0.625 | 0.3571 | 0.4545 | 1.6429 |
| 0.0098 | 18.0 | 3600 | 1.4466 | 0.88 | 0.625 | 0.3571 | 0.4545 | 1.6429 |
| 0.0098 | 19.0 | 3800 | 1.4518 | 0.88 | 0.625 | 0.3571 | 0.4545 | 1.6429 |
| 0.0 | 20.0 | 4000 | 1.4537 | 0.88 | 0.625 | 0.3571 | 0.4545 | 1.6429 |
### Framework versions
- Transformers 4.28.0
- Pytorch 2.0.1+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
| 3,533 | [
[
-0.045166015625,
-0.04095458984375,
0.01419830322265625,
0.003513336181640625,
-0.005657196044921875,
-0.01267242431640625,
-0.00449371337890625,
-0.0109405517578125,
0.04376220703125,
0.019775390625,
-0.04498291015625,
-0.047943115234375,
-0.04705810546875,
... |
AustinCarthy/Onlyphish_100KP_BFall_fromB_20KGen_topP_0.75 | 2023-05-21T14:16:37.000Z | [
"transformers",
"pytorch",
"tensorboard",
"bert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | text-classification | AustinCarthy | null | null | AustinCarthy/Onlyphish_100KP_BFall_fromB_20KGen_topP_0.75 | 0 | 2 | transformers | 2023-05-21T06:03:08 | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
- precision
- recall
model-index:
- name: Onlyphish_100KP_BFall_fromB_20KGen_topP_0.75
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Onlyphish_100KP_BFall_fromB_20KGen_topP_0.75
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0187
- Accuracy: 0.9974
- F1: 0.9714
- Precision: 0.9987
- Recall: 0.9456
- Roc Auc Score: 0.9728
- Tpr At Fpr 0.01: 0.9596
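"Tpr At Fpr 0.01" reads as the true-positive rate at the operating point where the false-positive rate is 1%. A minimal sketch of how such a number is typically computed with scikit-learn (an assumed illustration, not the card author's exact code):
```python
import numpy as np
from sklearn.metrics import roc_curve

def tpr_at_fpr(y_true, y_score, target_fpr=0.01):
    # Highest true-positive rate whose false-positive rate
    # stays at or below target_fpr.
    fpr, tpr, _ = roc_curve(y_true, y_score)
    return tpr[fpr <= target_fpr].max()

# Toy demo with synthetic labels and scores:
y_true = np.array([0, 0, 0, 0, 1, 1, 1, 1])
y_score = np.array([0.1, 0.2, 0.3, 0.9, 0.6, 0.7, 0.8, 0.95])
print(tpr_at_fpr(y_true, y_score))  # -> 0.25
```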
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Precision | Recall | Roc Auc Score | Tpr At Fpr 0.01 |
|:-------------:|:-----:|:------:|:---------------:|:--------:|:------:|:---------:|:------:|:-------------:|:---------------:|
| 0.0036 | 1.0 | 78750 | 0.0305 | 0.9963 | 0.9593 | 0.9991 | 0.9226 | 0.9613 | 0.9348 |
| 0.0074 | 2.0 | 157500 | 0.0234 | 0.9967 | 0.9643 | 0.9947 | 0.9358 | 0.9678 | 0.0 |
| 0.0038 | 3.0 | 236250 | 0.0244 | 0.9967 | 0.9637 | 0.9987 | 0.931 | 0.9655 | 0.9352 |
| 0.0009 | 4.0 | 315000 | 0.0223 | 0.9970 | 0.9678 | 0.9991 | 0.9384 | 0.9692 | 0.9632 |
| 0.0011 | 5.0 | 393750 | 0.0187 | 0.9974 | 0.9714 | 0.9987 | 0.9456 | 0.9728 | 0.9596 |
### Framework versions
- Transformers 4.29.1
- Pytorch 1.9.0+cu111
- Datasets 2.10.1
- Tokenizers 0.13.2
| 2,257 | [
[
-0.041900634765625,
-0.04296875,
0.00994110107421875,
0.0102386474609375,
-0.019683837890625,
-0.0233612060546875,
-0.0069122314453125,
-0.01678466796875,
0.0296630859375,
0.0287933349609375,
-0.0528564453125,
-0.05413818359375,
-0.049224853515625,
-0.012496... |
joys000/distilbert-base-uncased-finetuned-emotion | 2023-05-21T06:55:22.000Z | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:emotion",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | text-classification | joys000 | null | null | joys000/distilbert-base-uncased-finetuned-emotion | 0 | 2 | transformers | 2023-05-21T06:42:54 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-emotion
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: emotion
type: emotion
config: split
split: validation
args: split
metrics:
- name: Accuracy
type: accuracy
value: 0.925
- name: F1
type: f1
value: 0.9250252118821467
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2169
- Accuracy: 0.925
- F1: 0.9250
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.7976 | 1.0 | 250 | 0.3073 | 0.902 | 0.8987 |
| 0.2413 | 2.0 | 500 | 0.2169 | 0.925 | 0.9250 |
### Framework versions
- Transformers 4.29.2
- Pytorch 2.0.1+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
| 1,846 | [
[
-0.037445068359375,
-0.041778564453125,
0.01392364501953125,
0.0214691162109375,
-0.026092529296875,
-0.0188140869140625,
-0.013153076171875,
-0.00852203369140625,
0.01050567626953125,
0.007537841796875,
-0.05609130859375,
-0.051116943359375,
-0.060272216796875,... |
huggingtweets/lopezmirasf | 2023-05-21T09:27:26.000Z | [
"transformers",
"pytorch",
"gpt2",
"text-generation",
"huggingtweets",
"en",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | huggingtweets | null | null | huggingtweets/lopezmirasf | 0 | 2 | transformers | 2023-05-21T09:25:39 | ---
language: en
thumbnail: http://www.huggingtweets.com/lopezmirasf/1684661241781/predictions.png
tags:
- huggingtweets
widget:
- text: "My dream is"
---
<div class="inline-flex flex-col" style="line-height: 1.5;">
<div class="flex">
<div
style="display:inherit; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('https://pbs.twimg.com/profile_images/1656942108467511296/CUMm5Bl4_400x400.jpg')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
<div
style="display:none; margin-left: 4px; margin-right: 4px; width: 92px; height:92px; border-radius: 50%; background-size: cover; background-image: url('')">
</div>
</div>
<div style="text-align: center; margin-top: 3px; font-size: 16px; font-weight: 800">🤖 AI BOT 🤖</div>
<div style="text-align: center; font-size: 16px; font-weight: 800">Fernando López Miras</div>
<div style="text-align: center; font-size: 14px;">@lopezmirasf</div>
</div>
I was made with [huggingtweets](https://github.com/borisdayma/huggingtweets).
Create your own bot based on your favorite user with [the demo](https://colab.research.google.com/github/borisdayma/huggingtweets/blob/master/huggingtweets-demo.ipynb)!
## How does it work?
The model uses the following pipeline.

To understand how the model was developed, check the [W&B report](https://wandb.ai/wandb/huggingtweets/reports/HuggingTweets-Train-a-Model-to-Generate-Tweets--VmlldzoxMTY5MjI).
## Training data
The model was trained on tweets from Fernando López Miras.
| Data | Fernando López Miras |
| --- | --- |
| Tweets downloaded | 3245 |
| Retweets | 866 |
| Short tweets | 81 |
| Tweets kept | 2298 |
[Explore the data](https://wandb.ai/wandb/huggingtweets/runs/t6k7mewt/artifacts), which is tracked with [W&B artifacts](https://docs.wandb.com/artifacts) at every step of the pipeline.
## Training procedure
The model is based on a pre-trained [GPT-2](https://huggingface.co/gpt2) which is fine-tuned on @lopezmirasf's tweets.
Hyperparameters and metrics are recorded in the [W&B training run](https://wandb.ai/wandb/huggingtweets/runs/ouluex1y) for full transparency and reproducibility.
At the end of training, [the final model](https://wandb.ai/wandb/huggingtweets/runs/ouluex1y/artifacts) is logged and versioned.
## How to use
You can use this model directly with a pipeline for text generation:
```python
from transformers import pipeline
generator = pipeline('text-generation',
model='huggingtweets/lopezmirasf')
generator("My dream is", num_return_sequences=5)
```
## Limitations and bias
The model suffers from [the same limitations and bias as GPT-2](https://huggingface.co/gpt2#limitations-and-bias).
In addition, the data present in the user's tweets further affects the text generated by the model.
## About
*Built by Boris Dayma*
[](https://twitter.com/intent/follow?screen_name=borisdayma)
For more details, visit the project repository.
[](https://github.com/borisdayma/huggingtweets)
| 3,519 | [
[
-0.0241851806640625,
-0.06298828125,
0.026458740234375,
0.0184783935546875,
-0.01934814453125,
0.011444091796875,
-0.0051422119140625,
-0.03729248046875,
0.0271759033203125,
0.00777435302734375,
-0.07470703125,
-0.03436279296875,
-0.050018310546875,
-0.01079... |
Den4ikAI/FRED-T5-XL-chitchat | 2023-06-04T16:37:28.000Z | [
"transformers",
"pytorch",
"t5",
"text2text-generation",
"ru",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text2text-generation | Den4ikAI | null | null | Den4ikAI/FRED-T5-XL-chitchat | 0 | 2 | transformers | 2023-05-21T09:32:17 | ---
license: mit
language:
- ru
pipeline_tag: text2text-generation
widget:
- text: '<SC1>- Как тебя зовут?\n- Даша\n- А меня Денис\n- <extra_id_0>'
---
# Den4ikAI/FRED-T5-XL-chitchat
A chitchat model based on FRED-T5-XL. The model's dialogue context is 6-8 turns.
# Usage example
```python
import torch
import transformers

use_cuda = torch.cuda.is_available()
device = torch.device("cuda" if use_cuda else "cpu")

t5_tokenizer = transformers.GPT2Tokenizer.from_pretrained("Den4ikAI/FRED-T5-XL-chitchat")
# Move the model to the available device so that `device` is actually used.
t5_model = transformers.T5ForConditionalGeneration.from_pretrained("Den4ikAI/FRED-T5-XL-chitchat").to(device)

while True:
    print('-' * 80)
    dialog = []
    while True:
        msg = input('H:> ').strip()
        if len(msg) == 0:
            break
        dialog.append('- ' + msg)
        dialog.append('- <extra_id_0>')
        # Encode the running dialogue and place it on the same device as the model.
        input_ids = t5_tokenizer('<SC1>' + '\n'.join(dialog), return_tensors='pt').input_ids.to(device)
        out_ids = t5_model.generate(input_ids=input_ids,
                                    max_length=200,
                                    eos_token_id=t5_tokenizer.eos_token_id,
                                    early_stopping=True,
                                    do_sample=True,
                                    temperature=1.0,
                                    top_k=0,
                                    top_p=0.85)
        dialog.pop(-1)
        # Strip the sentinel token and anything after the end-of-sequence marker.
        t5_output = t5_tokenizer.decode(out_ids[0][1:]).replace('<extra_id_0>', '')
        if '</s>' in t5_output:
            t5_output = t5_output[:t5_output.find('</s>')].strip()
        print('B:> {}'.format(t5_output))
        dialog.append('- ' + t5_output)
```
# Citation
```
@MISC{Den4ikAI/FRED-T5-XL-chitchat,
author = {Denis Petrov},
title = {Russian chitchat model},
url = {https://huggingface.co/Den4ikAI/FRED-T5-XL-chitchat},
year = 2023
}
```
| 1,852 | [
[
-0.0164337158203125,
-0.049896240234375,
0.019805908203125,
0.00888824462890625,
-0.0208892822265625,
0.01204681396484375,
-0.00653839111328125,
-0.021240234375,
0.005825042724609375,
-0.0131988525390625,
-0.0472412109375,
-0.031646728515625,
-0.035369873046875,... |
kpbth/distilbert-base-uncased-finetuned-emotion | 2023-05-21T10:16:38.000Z | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:emotion",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | text-classification | kpbth | null | null | kpbth/distilbert-base-uncased-finetuned-emotion | 0 | 2 | transformers | 2023-05-21T09:51:46 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-emotion
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: emotion
type: emotion
config: split
split: validation
args: split
metrics:
- name: Accuracy
type: accuracy
value: 0.922
- name: F1
type: f1
value: 0.9223471923096423
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2206
- Accuracy: 0.922
- F1: 0.9223
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.8353 | 1.0 | 250 | 0.3154 | 0.906 | 0.9033 |
| 0.2476 | 2.0 | 500 | 0.2206 | 0.922 | 0.9223 |
### Framework versions
- Transformers 4.29.2
- Pytorch 2.0.1+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
| 1,846 | [
[
-0.037689208984375,
-0.041229248046875,
0.01477813720703125,
0.021697998046875,
-0.026397705078125,
-0.0186767578125,
-0.0133209228515625,
-0.00853729248046875,
0.01044464111328125,
0.00782012939453125,
-0.056732177734375,
-0.051513671875,
-0.06005859375,
-0... |
hny17/finetune_req | 2023-05-21T11:19:45.000Z | [
"transformers",
"pytorch",
"tensorboard",
"bert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | text-classification | hny17 | null | null | hny17/finetune_req | 0 | 2 | transformers | 2023-05-21T11:11:35 | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: finetune_req
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetune_req
This model is a fine-tuned version of [deprem-ml/deprem_bert_128k](https://huggingface.co/deprem-ml/deprem_bert_128k) on a private dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1891
- Accuracy: 0.875
### Framework versions
- Transformers 4.29.2
- Pytorch 2.0.1+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
| 670 | [
[
-0.0279388427734375,
-0.04779052734375,
-0.00182342529296875,
0.008758544921875,
-0.0240478515625,
-0.0261993408203125,
-0.005336761474609375,
-0.028778076171875,
0.0062103271484375,
0.0408935546875,
-0.057037353515625,
-0.045867919921875,
-0.047576904296875,
... |
christinacdl/bigbird_moderate_severe_depression | 2023-05-21T17:46:51.000Z | [
"transformers",
"pytorch",
"tensorboard",
"big_bird",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"has_space",
"region:us"
] | text-classification | christinacdl | null | null | christinacdl/bigbird_moderate_severe_depression | 0 | 2 | transformers | 2023-05-21T11:14:01 | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: bigbird_moderate_severe_depression
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bigbird_moderate_severe_depression
This model is a fine-tuned version of [google/bigbird-roberta-base](https://huggingface.co/google/bigbird-roberta-base) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6286
- Macro F1: 0.8843
- Accuracy: 0.8856
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 1
- eval_batch_size: 3
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 2
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
- mixed_precision_training: Native AMP
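As an aside (not from the original card), `gradient_accumulation_steps: 2` with `train_batch_size: 1` is what yields the stated `total_train_batch_size: 2`; a minimal, runnable PyTorch sketch of the pattern with stand-in objects:
```python
import torch

# Illustrative stand-ins; the actual run fine-tuned BigBird via the HF Trainer.
model = torch.nn.Linear(4, 2)
optimizer = torch.optim.Adam(model.parameters(), lr=2e-5)
loss_fn = torch.nn.CrossEntropyLoss()
data = [(torch.randn(1, 4), torch.tensor([0])) for _ in range(8)]

accum_steps = 2                                 # gradient_accumulation_steps
optimizer.zero_grad()
for step, (x, y) in enumerate(data):
    loss = loss_fn(model(x), y) / accum_steps   # scale so gradients average
    loss.backward()                             # gradients accumulate in .grad
    if (step + 1) % accum_steps == 0:           # effective batch = 1 * 2 = 2
        optimizer.step()
        optimizer.zero_grad()
```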
### Training results
| Training Loss | Epoch | Step | Validation Loss | Macro F1 | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:--------:|:--------:|
| 0.569 | 1.0 | 10786 | 0.5698 | 0.8563 | 0.8555 |
| 0.4866 | 2.0 | 21573 | 0.5080 | 0.8777 | 0.8785 |
| 0.4099 | 3.0 | 32359 | 0.6262 | 0.8796 | 0.8802 |
| 0.3165 | 4.0 | 43144 | 0.6286 | 0.8843 | 0.8856 |
### Framework versions
- Transformers 4.27.1
- Pytorch 2.0.1+cu118
- Datasets 2.9.0
- Tokenizers 0.13.3
| 1,756 | [
[
-0.03155517578125,
-0.053619384765625,
0.0234222412109375,
0.0283966064453125,
-0.0157928466796875,
-0.032501220703125,
-0.01837158203125,
-0.01192474365234375,
0.01435089111328125,
0.015380859375,
-0.0526123046875,
-0.058441162109375,
-0.06396484375,
0.0100... |
Ravencer/rut5_base_sum_gazeta-finetuned-mlsum | 2023-06-26T11:07:17.000Z | [
"transformers",
"pytorch",
"tensorboard",
"t5",
"text2text-generation",
"generated_from_trainer",
"dataset:mlsum",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text2text-generation | Ravencer | null | null | Ravencer/rut5_base_sum_gazeta-finetuned-mlsum | 0 | 2 | transformers | 2023-05-21T12:10:39 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- mlsum
model-index:
- name: rut5_base_sum_gazeta-finetuned-mlsum
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# rut5_base_sum_gazeta-finetuned-mlsum
This model is a fine-tuned version of [IlyaGusev/rut5_base_sum_gazeta](https://huggingface.co/IlyaGusev/rut5_base_sum_gazeta) on the mlsum dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:------:|:-------:|:---------:|:-------:|
| No log | 1.0 | 13 | 3.4842 | 10.3333 | 0.0 | 10.3333 | 10.3333 | 78.7 |
### Framework versions
- Transformers 4.29.1
- Pytorch 2.0.1+cpu
- Datasets 2.12.0
- Tokenizers 0.13.3
| 1,414 | [
[
-0.0330810546875,
-0.03076171875,
0.0027370452880859375,
0.01849365234375,
-0.0246734619140625,
-0.0174102783203125,
-0.00409698486328125,
-0.0239105224609375,
0.01486968994140625,
0.029998779296875,
-0.0576171875,
-0.044036865234375,
-0.043121337890625,
-0.... |
vocabtrimmer/xlm-roberta-base-tweet-sentiment-en-trimmed-en-50000 | 2023-05-21T13:01:38.000Z | [
"transformers",
"pytorch",
"xlm-roberta",
"text-classification",
"endpoints_compatible",
"region:us"
] | text-classification | vocabtrimmer | null | null | vocabtrimmer/xlm-roberta-base-tweet-sentiment-en-trimmed-en-50000 | 0 | 2 | transformers | 2023-05-21T12:55:59 | # Vocabulary Trimmed [cardiffnlp/xlm-roberta-base-tweet-sentiment-en](https://huggingface.co/cardiffnlp/xlm-roberta-base-tweet-sentiment-en): `vocabtrimmer/xlm-roberta-base-tweet-sentiment-en-trimmed-en-50000`
This model is a trimmed version of [cardiffnlp/xlm-roberta-base-tweet-sentiment-en](https://huggingface.co/cardiffnlp/xlm-roberta-base-tweet-sentiment-en) by [`vocabtrimmer`](https://github.com/asahi417/lm-vocab-trimmer), a tool for trimming the vocabulary of language models to compress the model size.
Following table shows a summary of the trimming process.
| | cardiffnlp/xlm-roberta-base-tweet-sentiment-en | vocabtrimmer/xlm-roberta-base-tweet-sentiment-en-trimmed-en-50000 |
|:---------------------------|:-------------------------------------------------|:--------------------------------------------------------------------|
| parameter_size_full | 278,045,955 | 124,445,955 |
| parameter_size_embedding | 192,001,536 | 38,401,536 |
| vocab_size | 250,002 | 50,002 |
| compression_rate_full | 100.0 | 44.76 |
| compression_rate_embedding | 100.0 | 20.0 |
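A minimal loading sketch, assuming the standard `transformers` auto classes (the original card does not include usage code):
```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification

# The trimmed checkpoint loads like any other sequence-classification model;
# only the embedding matrix (and hence the vocabulary) is smaller.
model_id = "vocabtrimmer/xlm-roberta-base-tweet-sentiment-en-trimmed-en-50000"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)
```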
The following table shows the parameters used to trim the vocabulary.
| language | dataset | dataset_column | dataset_name | dataset_split | target_vocab_size | min_frequency |
|:-----------|:----------------------------|:-----------------|:---------------|:----------------|--------------------:|----------------:|
| en | vocabtrimmer/mc4_validation | text | en | validation | 50000 | 2 | | 2,114 | [
[
-0.056854248046875,
-0.0469970703125,
-0.00005561113357543945,
0.01522064208984375,
-0.035491943359375,
-0.007793426513671875,
-0.02166748046875,
-0.00835418701171875,
0.039337158203125,
0.040863037109375,
-0.0579833984375,
-0.06024169921875,
-0.041351318359375,... |
vocabtrimmer/xlm-roberta-base-trimmed-en-50000-tweet-sentiment-en | 2023-05-21T13:16:57.000Z | [
"transformers",
"pytorch",
"xlm-roberta",
"text-classification",
"endpoints_compatible",
"region:us"
] | text-classification | vocabtrimmer | null | null | vocabtrimmer/xlm-roberta-base-trimmed-en-50000-tweet-sentiment-en | 0 | 2 | transformers | 2023-05-21T13:14:24 | # `vocabtrimmer/xlm-roberta-base-trimmed-en-50000-tweet-sentiment-en`
This model is a fine-tuned version of [vocabtrimmer/xlm-roberta-base-trimmed-en-50000](https://huggingface.co/vocabtrimmer/xlm-roberta-base-trimmed-en-50000) on the
[cardiffnlp/tweet_sentiment_multilingual](https://huggingface.co/datasets/cardiffnlp/tweet_sentiment_multilingual) dataset (english subset).
The following metrics are computed on the `test` split of
[cardiffnlp/tweet_sentiment_multilingual](https://huggingface.co/datasets/cardiffnlp/tweet_sentiment_multilingual) (english).
| | eval_f1_micro | eval_recall_micro | eval_precision_micro | eval_f1_macro | eval_recall_macro | eval_precision_macro | eval_accuracy |
|---:|----------------:|--------------------:|-----------------------:|----------------:|--------------------:|-----------------------:|----------------:|
| 0 | 68.51 | 68.51 | 68.51 | 67.26 | 68.51 | 68.63 | 68.51 |
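For reference, a hedged usage sketch with the `transformers` pipeline API (assumed; the label names come from the checkpoint's config):
```python
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="vocabtrimmer/xlm-roberta-base-trimmed-en-50000-tweet-sentiment-en",
)
print(classifier("I love this new phone!"))
```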
Check the result file [here](https://huggingface.co/vocabtrimmer/xlm-roberta-base-trimmed-en-50000-tweet-sentiment-en/raw/main/eval.json). | 1,149 | [
[
-0.03533935546875,
-0.03692626953125,
0.01322174072265625,
0.0280609130859375,
-0.037567138671875,
0.0177459716796875,
-0.026031494140625,
-0.018768310546875,
0.037628173828125,
0.032958984375,
-0.053802490234375,
-0.0791015625,
-0.0567626953125,
0.007217407... |
PGCaptain/xlm-roberta-base-finetuned-marc | 2023-05-21T13:44:19.000Z | [
"transformers",
"pytorch",
"tensorboard",
"xlm-roberta",
"text-classification",
"generated_from_trainer",
"dataset:amazon_reviews_multi",
"license:mit",
"endpoints_compatible",
"region:us"
] | text-classification | PGCaptain | null | null | PGCaptain/xlm-roberta-base-finetuned-marc | 0 | 2 | transformers | 2023-05-21T13:22:41 | ---
license: mit
tags:
- generated_from_trainer
datasets:
- amazon_reviews_multi
model-index:
- name: xlm-roberta-base-finetuned-marc
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-marc
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the amazon_reviews_multi dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1497
- Mae: 0.6986
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Mae |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 1.2594 | 1.0 | 196 | 1.2004 | 0.7123 |
| 1.1455 | 2.0 | 392 | 1.1497 | 0.6986 |
### Framework versions
- Transformers 4.28.0
- Pytorch 2.0.1+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
| 1,423 | [
[
-0.037689208984375,
-0.04876708984375,
0.0264434814453125,
0.01158905029296875,
-0.0243682861328125,
-0.0287322998046875,
-0.0186614990234375,
-0.013671875,
0.0004892349243164062,
0.046844482421875,
-0.0606689453125,
-0.044647216796875,
-0.056304931640625,
-... |
AustinCarthy/Onlyphish_100KP_BFall_fromB_30KGen_topP_0.75 | 2023-05-22T00:05:08.000Z | [
"transformers",
"pytorch",
"tensorboard",
"bert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | text-classification | AustinCarthy | null | null | AustinCarthy/Onlyphish_100KP_BFall_fromB_30KGen_topP_0.75 | 0 | 2 | transformers | 2023-05-21T15:12:19 | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
- precision
- recall
model-index:
- name: Onlyphish_100KP_BFall_fromB_30KGen_topP_0.75
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Onlyphish_100KP_BFall_fromB_30KGen_topP_0.75
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0188
- Accuracy: 0.9973
- F1: 0.9707
- Precision: 0.9996
- Recall: 0.9434
- Roc Auc Score: 0.9717
- Tpr At Fpr 0.01: 0.9624
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Precision | Recall | Roc Auc Score | Tpr At Fpr 0.01 |
|:-------------:|:-----:|:------:|:---------------:|:--------:|:------:|:---------:|:------:|:-------------:|:---------------:|
| 0.0022 | 1.0 | 85313 | 0.0263 | 0.9963 | 0.9599 | 0.9938 | 0.9282 | 0.9640 | 0.8954 |
| 0.0032 | 2.0 | 170626 | 0.0296 | 0.9954 | 0.9490 | 0.9987 | 0.904 | 0.9520 | 0.925 |
| 0.0042 | 3.0 | 255939 | 0.0226 | 0.9971 | 0.9683 | 0.9985 | 0.9398 | 0.9699 | 0.946 |
| 0.001 | 4.0 | 341252 | 0.0187 | 0.9973 | 0.9708 | 0.9996 | 0.9436 | 0.9718 | 0.957 |
| 0.0 | 5.0 | 426565 | 0.0188 | 0.9973 | 0.9707 | 0.9996 | 0.9434 | 0.9717 | 0.9624 |
### Framework versions
- Transformers 4.29.1
- Pytorch 1.9.0+cu111
- Datasets 2.10.1
- Tokenizers 0.13.2
| 2,257 | [
[
-0.042724609375,
-0.04248046875,
0.01007843017578125,
0.00860595703125,
-0.0198211669921875,
-0.0223388671875,
-0.006832122802734375,
-0.0166015625,
0.0303802490234375,
0.028564453125,
-0.054351806640625,
-0.05328369140625,
-0.048980712890625,
-0.01210021972... |
deepspringer/my_bert_model | 2023-05-21T16:15:21.000Z | [
"transformers",
"tf",
"distilbert",
"text-classification",
"generated_from_keras_callback",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | text-classification | deepspringer | null | null | deepspringer/my_bert_model | 0 | 2 | transformers | 2023-05-21T16:15:12 | ---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: my_bert_model
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# my_bert_model
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': None, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': True, 'is_legacy_optimizer': False, 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 1908, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32
### Training results
### Framework versions
- Transformers 4.28.0
- TensorFlow 2.12.0
- Datasets 2.12.0
- Tokenizers 0.13.3
| 1,433 | [
[
-0.04400634765625,
-0.051239013671875,
0.024871826171875,
0.0144195556640625,
-0.035675048828125,
-0.01305389404296875,
-0.0169219970703125,
-0.0187835693359375,
0.005039215087890625,
0.00043129920959472656,
-0.052215576171875,
-0.04583740234375,
-0.060302734375... |
pabagcha/roberta_crypto_profiling_task1_deberta | 2023-05-21T17:13:32.000Z | [
"transformers",
"pytorch",
"tensorboard",
"deberta-v2",
"text-classification",
"generated_from_trainer",
"license:mit",
"endpoints_compatible",
"region:us"
] | text-classification | pabagcha | null | null | pabagcha/roberta_crypto_profiling_task1_deberta | 0 | 2 | transformers | 2023-05-21T16:47:27 | ---
license: mit
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: roberta_crypto_profiling_task1_deberta
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# roberta_crypto_profiling_task1_deberta
This model is a fine-tuned version of [microsoft/deberta-v3-large](https://huggingface.co/microsoft/deberta-v3-large) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 2.4722
- Accuracy: 0.5176
- F1: 0.4814
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| No log | 1.0 | 211 | 0.9966 | 0.5882 | 0.5030 |
| No log | 2.0 | 422 | 1.7145 | 0.5647 | 0.5360 |
| 0.5073 | 3.0 | 633 | 2.2226 | 0.5176 | 0.4695 |
| 0.5073 | 4.0 | 844 | 2.1071 | 0.5647 | 0.5222 |
| 0.112 | 5.0 | 1055 | 2.4722 | 0.5176 | 0.4814 |
### Framework versions
- Transformers 4.29.2
- Pytorch 2.0.1+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
| 1,706 | [
[
-0.0244293212890625,
-0.052276611328125,
0.012786865234375,
0.0079498291015625,
-0.0257110595703125,
-0.006313323974609375,
0.0042572021484375,
-0.033355712890625,
0.01427459716796875,
0.0247039794921875,
-0.0458984375,
-0.053009033203125,
-0.061737060546875,
... |
ntedeschi/distilbert_classifier_newsgroups | 2023-05-21T16:59:06.000Z | [
"transformers",
"tf",
"distilbert",
"text-classification",
"generated_from_keras_callback",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | text-classification | ntedeschi | null | null | ntedeschi/distilbert_classifier_newsgroups | 0 | 2 | transformers | 2023-05-21T16:58:34 | ---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: distilbert_classifier_newsgroups
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# distilbert_classifier_newsgroups
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': None, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': True, 'is_legacy_optimizer': False, 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 1908, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32
### Training results
### Framework versions
- Transformers 4.28.0
- TensorFlow 2.12.0
- Datasets 2.12.0
- Tokenizers 0.13.3
| 1,471 | [
[
-0.0386962890625,
-0.042022705078125,
0.021240234375,
0.0084228515625,
-0.033599853515625,
-0.0068359375,
-0.01174163818359375,
-0.010833740234375,
-0.002910614013671875,
-0.00620269775390625,
-0.041534423828125,
-0.050445556640625,
-0.067138671875,
-0.01020... |
jinhybr/distilbert_classifier_newsgroups | 2023-05-21T17:31:47.000Z | [
"transformers",
"tf",
"distilbert",
"text-classification",
"generated_from_keras_callback",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | text-classification | jinhybr | null | null | jinhybr/distilbert_classifier_newsgroups | 0 | 2 | transformers | 2023-05-21T17:31:15 | ---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: distilbert_classifier_newsgroups
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# distilbert_classifier_newsgroups
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': None, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': True, 'is_legacy_optimizer': False, 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 1908, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32
### Training results
### Framework versions
- Transformers 4.28.0
- TensorFlow 2.12.0
- Datasets 2.12.0
- Tokenizers 0.13.3
| 1,471 | [
[
-0.0386962890625,
-0.042022705078125,
0.021240234375,
0.0084228515625,
-0.033599853515625,
-0.0068359375,
-0.01174163818359375,
-0.010833740234375,
-0.002910614013671875,
-0.00620269775390625,
-0.041534423828125,
-0.050445556640625,
-0.067138671875,
-0.01020... |
satyamverma/distilbert-base-uncased-finetuned-rte | 2023-05-21T19:08:21.000Z | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:glue",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | text-classification | satyamverma | null | null | satyamverma/distilbert-base-uncased-finetuned-rte | 0 | 2 | transformers | 2023-05-21T18:31:40 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- glue
metrics:
- accuracy
model-index:
- name: distilbert-base-uncased-finetuned-rte
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: glue
type: glue
config: rte
split: validation
args: rte
metrics:
- name: Accuracy
type: accuracy
value: 0.631768953068592
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-rte
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the glue dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6827
- Accuracy: 0.6318
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 4.247277359513074e-05
- train_batch_size: 32
- eval_batch_size: 16
- seed: 8
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 78 | 0.6898 | 0.5812 |
| No log | 2.0 | 156 | 0.6654 | 0.6065 |
| No log | 3.0 | 234 | 0.6827 | 0.6318 |
### Framework versions
- Transformers 4.28.0
- Pytorch 2.0.1+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
| 1,796 | [
[
-0.0231781005859375,
-0.052093505859375,
0.0097198486328125,
0.0175323486328125,
-0.0248870849609375,
-0.02203369140625,
-0.00922393798828125,
-0.00922393798828125,
0.01081085205078125,
0.015350341796875,
-0.047027587890625,
-0.043243408203125,
-0.05984497070312... |
denaneek/distilbert_classifier_newsgroups | 2023-05-21T19:22:03.000Z | [
"transformers",
"tf",
"distilbert",
"text-classification",
"generated_from_keras_callback",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | text-classification | denaneek | null | null | denaneek/distilbert_classifier_newsgroups | 0 | 2 | transformers | 2023-05-21T19:21:48 | ---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: distilbert_classifier_newsgroups
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# distilbert_classifier_newsgroups
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': None, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': True, 'is_legacy_optimizer': False, 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 1908, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32
### Training results
### Framework versions
- Transformers 4.28.0
- TensorFlow 2.12.0
- Datasets 2.12.0
- Tokenizers 0.13.3
| 1,471 | [
[
-0.0386962890625,
-0.042022705078125,
0.021240234375,
0.0084228515625,
-0.033599853515625,
-0.0068359375,
-0.01174163818359375,
-0.010833740234375,
-0.002910614013671875,
-0.00620269775390625,
-0.041534423828125,
-0.050445556640625,
-0.067138671875,
-0.01020... |
afsuarezg/my_awesome_model | 2023-06-06T02:16:09.000Z | [
"transformers",
"pytorch",
"tensorboard",
"bert",
"text-classification",
"generated_from_trainer",
"endpoints_compatible",
"region:us"
] | text-classification | afsuarezg | null | null | afsuarezg/my_awesome_model | 0 | 2 | transformers | 2023-05-21T19:34:52 | ---
tags:
- generated_from_trainer
metrics:
- accuracy
model-index:
- name: my_awesome_model
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# my_awesome_model
This model is a fine-tuned version of [pile-of-law/legalbert-large-1.7M-2](https://huggingface.co/pile-of-law/legalbert-large-1.7M-2) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7448
- Accuracy: 0.6333
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 6
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 150 | 0.6502 | 0.6 |
| No log | 2.0 | 300 | 0.6360 | 0.66 |
| No log | 3.0 | 450 | 0.6546 | 0.69 |
| 0.6614 | 4.0 | 600 | 0.6632 | 0.6333 |
| 0.6614 | 5.0 | 750 | 0.7435 | 0.65 |
| 0.6614 | 6.0 | 900 | 0.7448 | 0.6333 |
### Framework versions
- Transformers 4.29.2
- Pytorch 2.0.1+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
| 1,648 | [
[
-0.032135009765625,
-0.036865234375,
0.0137786865234375,
-0.0003161430358886719,
-0.0283660888671875,
-0.034698486328125,
0.003650665283203125,
-0.01457977294921875,
0.012908935546875,
0.040374755859375,
-0.0323486328125,
-0.052154541015625,
-0.05377197265625,
... |
marco-c88/gpt2-small-italian-finetuned-mstatmem_1ep_gpt2_no_valid_verga | 2023-05-21T20:43:12.000Z | [
"transformers",
"pytorch",
"tensorboard",
"gpt2",
"text-generation",
"generated_from_trainer",
"endpoints_compatible",
"text-generation-inference",
"region:us"
] | text-generation | marco-c88 | null | null | marco-c88/gpt2-small-italian-finetuned-mstatmem_1ep_gpt2_no_valid_verga | 0 | 2 | transformers | 2023-05-21T20:41:28 | ---
tags:
- generated_from_trainer
model-index:
- name: gpt2-small-italian-finetuned-mstatmem_1ep_gpt2_no_valid_verga
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# gpt2-small-italian-finetuned-mstatmem_1ep_gpt2_no_valid_verga
This model is a fine-tuned version of [GroNLP/gpt2-small-italian](https://huggingface.co/GroNLP/gpt2-small-italian) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 4.1398
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 392 | 4.1398 |
### Framework versions
- Transformers 4.29.2
- Pytorch 2.0.1+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
| 1,336 | [
[
-0.031707763671875,
-0.043609619140625,
0.0209197998046875,
0.0011568069458007812,
-0.0377197265625,
-0.043792724609375,
-0.0206146240234375,
-0.027099609375,
-0.00337982177734375,
0.017364501953125,
-0.04803466796875,
-0.032440185546875,
-0.055755615234375,
... |
wiorz/legal_bert_small_summarized | 2023-05-23T23:19:43.000Z | [
"transformers",
"pytorch",
"tensorboard",
"bert",
"text-classification",
"generated_from_trainer",
"license:cc-by-sa-4.0",
"endpoints_compatible",
"region:us"
] | text-classification | wiorz | null | null | wiorz/legal_bert_small_summarized | 0 | 2 | transformers | 2023-05-21T21:36:02 | ---
license: cc-by-sa-4.0
tags:
- generated_from_trainer
metrics:
- accuracy
- precision
- recall
- f1
model-index:
- name: legal_bert_small_summarized
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# legal_bert_small_summarized
This model is a fine-tuned version of [nlpaueb/legal-bert-base-uncased](https://huggingface.co/nlpaueb/legal-bert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 2.0708
- Accuracy: 0.815
- Precision: 0.5
- Recall: 0.1622
- F1: 0.2449
- D-index: 1.5040
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1600
- num_epochs: 20
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Precision | Recall | F1 | D-index |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:---------:|:------:|:------:|:-------:|
| No log | 1.0 | 200 | 0.4799 | 0.815 | 0.0 | 0.0 | 0.0 | 1.4449 |
| No log | 2.0 | 400 | 0.5646 | 0.815 | 0.0 | 0.0 | 0.0 | 1.4449 |
| 0.5383 | 3.0 | 600 | 0.5505 | 0.815 | 0.0 | 0.0 | 0.0 | 1.4449 |
| 0.5383 | 4.0 | 800 | 0.4502 | 0.815 | 0.5 | 0.2162 | 0.3019 | 1.5231 |
| 0.5116 | 5.0 | 1000 | 0.6932 | 0.805 | 0.4444 | 0.2162 | 0.2909 | 1.5096 |
| 0.5116 | 6.0 | 1200 | 1.0173 | 0.795 | 0.4231 | 0.2973 | 0.3492 | 1.5244 |
| 0.5116 | 7.0 | 1400 | 1.2308 | 0.82 | 0.5714 | 0.1081 | 0.1818 | 1.4914 |
| 0.1778 | 8.0 | 1600 | 1.4035 | 0.815 | 0.5 | 0.2432 | 0.3273 | 1.5326 |
| 0.1778 | 9.0 | 1800 | 1.6336 | 0.815 | 0.5 | 0.1622 | 0.2449 | 1.5040 |
| 0.0255 | 10.0 | 2000 | 1.7291 | 0.82 | 0.5385 | 0.1892 | 0.28 | 1.5204 |
| 0.0255 | 11.0 | 2200 | 1.7801 | 0.825 | 0.5714 | 0.2162 | 0.3137 | 1.5367 |
| 0.0255 | 12.0 | 2400 | 1.8364 | 0.825 | 0.5714 | 0.2162 | 0.3137 | 1.5367 |
| 0.0 | 13.0 | 2600 | 1.8688 | 0.825 | 0.5714 | 0.2162 | 0.3137 | 1.5367 |
| 0.0 | 14.0 | 2800 | 1.9549 | 0.815 | 0.5 | 0.1622 | 0.2449 | 1.5040 |
| 0.0 | 15.0 | 3000 | 2.0022 | 0.815 | 0.5 | 0.1622 | 0.2449 | 1.5040 |
| 0.0 | 16.0 | 3200 | 1.9795 | 0.82 | 0.5385 | 0.1892 | 0.28 | 1.5204 |
| 0.0 | 17.0 | 3400 | 2.0438 | 0.815 | 0.5 | 0.1622 | 0.2449 | 1.5040 |
| 0.0 | 18.0 | 3600 | 2.0603 | 0.815 | 0.5 | 0.1622 | 0.2449 | 1.5040 |
| 0.0 | 19.0 | 3800 | 2.0722 | 0.815 | 0.5 | 0.1622 | 0.2449 | 1.5040 |
| 0.0014 | 20.0 | 4000 | 2.0708 | 0.815 | 0.5 | 0.1622 | 0.2449 | 1.5040 |
### Framework versions
- Transformers 4.28.0
- Pytorch 2.0.1+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
| 3,596 | [
[
-0.043121337890625,
-0.037567138671875,
0.0168304443359375,
0.007411956787109375,
-0.01157379150390625,
-0.0178985595703125,
-0.0025424957275390625,
-0.016265869140625,
0.039398193359375,
0.02593994140625,
-0.04339599609375,
-0.050140380859375,
-0.04379272460937... |
wiorz/bert_small_summarized | 2023-05-21T22:02:26.000Z | [
"transformers",
"pytorch",
"tensorboard",
"bert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | text-classification | wiorz | null | null | wiorz/bert_small_summarized | 0 | 2 | transformers | 2023-05-21T21:45:34 | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
- precision
- recall
- f1
model-index:
- name: bert_small_summarized
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert_small_summarized
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 2.1652
- Accuracy: 0.82
- Precision: 0.4667
- Recall: 0.2
- F1: 0.2800
- D-index: 1.5200
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1600
- num_epochs: 20
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Precision | Recall | F1 | D-index |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:---------:|:------:|:------:|:-------:|
| No log | 1.0 | 200 | 0.4533 | 0.825 | 0.0 | 0.0 | 0.0 | 1.4529 |
| No log | 2.0 | 400 | 0.4694 | 0.825 | 0.0 | 0.0 | 0.0 | 1.4529 |
| 0.5094 | 3.0 | 600 | 0.6237 | 0.825 | 0.0 | 0.0 | 0.0 | 1.4529 |
| 0.5094 | 4.0 | 800 | 0.7898 | 0.81 | 0.4286 | 0.2571 | 0.3214 | 1.5270 |
| 0.3984 | 5.0 | 1000 | 0.9268 | 0.83 | 0.5556 | 0.1429 | 0.2273 | 1.5127 |
| 0.3984 | 6.0 | 1200 | 1.3541 | 0.8 | 0.4074 | 0.3143 | 0.3548 | 1.5339 |
| 0.3984 | 7.0 | 1400 | 1.4264 | 0.805 | 0.375 | 0.1714 | 0.2353 | 1.4893 |
| 0.0939 | 8.0 | 1600 | 1.8870 | 0.8 | 0.4194 | 0.3714 | 0.3939 | 1.5539 |
| 0.0939 | 9.0 | 1800 | 1.8734 | 0.825 | 0.5 | 0.1143 | 0.1860 | 1.4955 |
| 0.0061 | 10.0 | 2000 | 1.8938 | 0.825 | 0.5 | 0.1714 | 0.2553 | 1.5164 |
| 0.0061 | 11.0 | 2200 | 2.0755 | 0.825 | 0.5 | 0.1143 | 0.1860 | 1.4955 |
| 0.0061 | 12.0 | 2400 | 2.1068 | 0.805 | 0.4231 | 0.3143 | 0.3607 | 1.5406 |
| 0.0134 | 13.0 | 2600 | 2.0895 | 0.82 | 0.4444 | 0.1143 | 0.1818 | 1.4887 |
| 0.0134 | 14.0 | 2800 | 2.0520 | 0.815 | 0.4545 | 0.2857 | 0.3509 | 1.5439 |
| 0.0011 | 15.0 | 3000 | 2.0795 | 0.81 | 0.4211 | 0.2286 | 0.2963 | 1.5168 |
| 0.0011 | 16.0 | 3200 | 2.1177 | 0.815 | 0.4444 | 0.2286 | 0.3019 | 1.5235 |
| 0.0011 | 17.0 | 3400 | 2.1396 | 0.815 | 0.4444 | 0.2286 | 0.3019 | 1.5235 |
| 0.0003 | 18.0 | 3600 | 2.1605 | 0.825 | 0.5 | 0.2286 | 0.3137 | 1.5370 |
| 0.0003 | 19.0 | 3800 | 2.1677 | 0.825 | 0.5 | 0.2286 | 0.3137 | 1.5370 |
| 0.0 | 20.0 | 4000 | 2.1652 | 0.82 | 0.4667 | 0.2 | 0.2800 | 1.5200 |
### Framework versions
- Transformers 4.28.0
- Pytorch 2.0.1+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
| 3,553 | [
[
-0.04443359375,
-0.0380859375,
0.017425537109375,
0.00670623779296875,
-0.01035308837890625,
-0.01715087890625,
-0.00292205810546875,
-0.0135650634765625,
0.04034423828125,
0.0211944580078125,
-0.044342041015625,
-0.0496826171875,
-0.046661376953125,
-0.0132... |
Ioanaaaaaaa/distilbert-base-uncased-finetuned-sexism | 2023-05-22T00:52:32.000Z | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | text-classification | Ioanaaaaaaa | null | null | Ioanaaaaaaa/distilbert-base-uncased-finetuned-sexism | 0 | 2 | transformers | 2023-05-21T23:11:23 | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-sexism
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-sexism
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unspecified dataset.
It achieves the following results on the evaluation set (see the metrics sketch after this list):
- Loss: 0.4452
- Accuracy: 0.8523
- F1: 0.8507
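A minimal sketch of a `compute_metrics` function that could produce the accuracy/F1 pair above; the weighted F1 averaging is an assumption, since the card does not state how F1 was aggregated.

```python
# Hedged sketch: computing accuracy and F1 from Trainer predictions.
# average="weighted" is an assumption, not stated in the card.
import numpy as np
from sklearn.metrics import accuracy_score, f1_score

def compute_metrics(eval_pred):
    logits, labels = eval_pred
    preds = np.argmax(logits, axis=-1)
    return {
        "accuracy": accuracy_score(labels, preds),
        "f1": f1_score(labels, preds, average="weighted"),
    }
```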
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.4307 | 1.0 | 1876 | 0.3620 | 0.8518 | 0.8495 |
| 0.308 | 2.0 | 3752 | 0.4452 | 0.8523 | 0.8507 |
### Framework versions
- Transformers 4.29.2
- Pytorch 2.0.1+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
| 1,498 | [
[
-0.0268707275390625,
-0.04412841796875,
0.01158905029296875,
0.020965576171875,
-0.02069091796875,
-0.023834228515625,
-0.0039215087890625,
-0.00635528564453125,
0.0035800933837890625,
0.0193023681640625,
-0.052886962890625,
-0.05029296875,
-0.051544189453125,
... |
futureStar02/distilbert-base-uncased-finetuned-emotion | 2023-05-22T00:53:32.000Z | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:emotion",
"license:apache-2.0",
"model-index",
"endpoints_compatible",
"region:us"
] | text-classification | futureStar02 | null | null | futureStar02/distilbert-base-uncased-finetuned-emotion | 0 | 2 | transformers | 2023-05-22T00:48:55 | ---
license: apache-2.0
tags:
- generated_from_trainer
datasets:
- emotion
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-emotion
results:
- task:
name: Text Classification
type: text-classification
dataset:
name: emotion
type: emotion
config: split
split: validation
args: split
metrics:
- name: Accuracy
type: accuracy
value: 0.9235
- name: F1
type: f1
value: 0.9233899899889855
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set (see the usage sketch after this list):
- Loss: 0.2224
- Accuracy: 0.9235
- F1: 0.9234
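For reference, a minimal inference sketch using the standard `transformers` pipeline API; the repo id comes from this card, and the input sentence is illustrative only.

```python
# Hedged usage sketch: loading this checkpoint with the standard
# transformers pipeline API. The example sentence is illustrative.
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="futureStar02/distilbert-base-uncased-finetuned-emotion",
)
print(classifier("I am thrilled with how this turned out!"))
```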
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.8348 | 1.0 | 250 | 0.3170 | 0.9075 | 0.9042 |
| 0.2525 | 2.0 | 500 | 0.2224 | 0.9235 | 0.9234 |
### Framework versions
- Transformers 4.29.2
- Pytorch 2.0.1+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
| 1,848 | [
[
-0.037689208984375,
-0.041534423828125,
0.0138397216796875,
0.02227783203125,
-0.0257720947265625,
-0.018768310546875,
-0.0137176513671875,
-0.0083160400390625,
0.0101318359375,
0.00826263427734375,
-0.056182861328125,
-0.051666259765625,
-0.060394287109375,
... |
Ioanaaaaaaa/distilbert-base-uncased-finetuned-sexism-2 | 2023-05-22T10:27:39.000Z | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | text-classification | Ioanaaaaaaa | null | null | Ioanaaaaaaa/distilbert-base-uncased-finetuned-sexism-2 | 0 | 2 | transformers | 2023-05-22T00:55:09 | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
model-index:
- name: distilbert-base-uncased-finetuned-sexism-2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-sexism-2
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unspecified dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3597
- Accuracy: 0.8555
- F1: 0.8540
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.43 | 1.0 | 1876 | 0.3597 | 0.8555 | 0.8540 |
### Framework versions
- Transformers 4.29.2
- Pytorch 2.0.1+cu118
- Datasets 2.12.0
- Tokenizers 0.13.3
| 1,431 | [
[
-0.0257720947265625,
-0.045257568359375,
0.01345062255859375,
0.0216827392578125,
-0.0252838134765625,
-0.02569580078125,
-0.0063323974609375,
-0.007472991943359375,
0.0010023117065429688,
0.0170745849609375,
-0.05096435546875,
-0.047607421875,
-0.0517578125,
... |
ttogun/fourthbrain_wk1_distilbert_classifier_newsgroups | 2023-05-22T01:20:12.000Z | [
"transformers",
"tf",
"distilbert",
"text-classification",
"generated_from_keras_callback",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | text-classification | ttogun | null | null | ttogun/fourthbrain_wk1_distilbert_classifier_newsgroups | 0 | 2 | transformers | 2023-05-22T01:19:45 | ---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: fourthbrain_wk1_distilbert_classifier_newsgroups
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# fourthbrain_wk1_distilbert_classifier_newsgroups
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (see the reconstruction after this list):
- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': None, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': True, 'is_legacy_optimizer': False, 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 1908, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32
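Read as TensorFlow code, the optimizer dump above corresponds roughly to the following; this is a hedged reconstruction of the config, not the author's script.

```python
# Hedged reconstruction of the optimizer config dumped above: Adam with
# a linear PolynomialDecay from 2e-5 to 0 over 1908 steps.
import tensorflow as tf

lr_schedule = tf.keras.optimizers.schedules.PolynomialDecay(
    initial_learning_rate=2e-5,
    decay_steps=1908,
    end_learning_rate=0.0,
    power=1.0,
    cycle=False,
)
optimizer = tf.keras.optimizers.Adam(
    learning_rate=lr_schedule,
    beta_1=0.9,
    beta_2=0.999,
    epsilon=1e-8,
)
```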
### Training results
### Framework versions
- Transformers 4.28.0
- TensorFlow 2.12.0
- Datasets 2.12.0
- Tokenizers 0.13.3
| 1,503 | [
[
-0.040618896484375,
-0.04107666015625,
0.0258026123046875,
0.0107574462890625,
-0.032073974609375,
-0.0033969879150390625,
-0.0087890625,
-0.014739990234375,
-0.004451751708984375,
-0.003753662109375,
-0.0435791015625,
-0.054107666015625,
-0.06317138671875,
... |
deepspringer/my_bert_model_courses_and_subjects | 2023-05-22T02:36:44.000Z | [
"transformers",
"tf",
"distilbert",
"text-classification",
"generated_from_keras_callback",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | text-classification | deepspringer | null | null | deepspringer/my_bert_model_courses_and_subjects | 0 | 2 | transformers | 2023-05-22T02:36:16 | ---
license: apache-2.0
tags:
- generated_from_keras_callback
model-index:
- name: my_bert_model_courses_and_subjects
results: []
---
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# my_bert_model_courses_and_subjects
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training (see the compile sketch after this list):
- optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': None, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': True, 'is_legacy_optimizer': False, 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 1188, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32
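A hedged end-to-end sketch of compiling a model with the optimizer configured above; `num_labels=2` is a placeholder, since the card does not state the label set.

```python
# Hedged sketch: compiling a TF sequence-classification model with the
# optimizer configured above. num_labels=2 is a placeholder.
import tensorflow as tf
from transformers import TFAutoModelForSequenceClassification

model = TFAutoModelForSequenceClassification.from_pretrained(
    "distilbert-base-uncased", num_labels=2
)
lr_schedule = tf.keras.optimizers.schedules.PolynomialDecay(
    initial_learning_rate=2e-5,
    decay_steps=1188,
    end_learning_rate=0.0,
    power=1.0,
)
model.compile(
    optimizer=tf.keras.optimizers.Adam(learning_rate=lr_schedule),
    loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
    metrics=["accuracy"],
)
```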
### Training results
### Framework versions
- Transformers 4.28.0
- TensorFlow 2.12.0
- Datasets 2.12.0
- Tokenizers 0.13.3
| 1,475 | [
[
-0.04443359375,
-0.0518798828125,
0.0248260498046875,
0.0082855224609375,
-0.02716064453125,
-0.011322021484375,
-0.019287109375,
-0.01427459716796875,
0.0030307769775390625,
0.003910064697265625,
-0.05316162109375,
-0.0479736328125,
-0.05303955078125,
-0.01... |
AustinCarthy/MixGPT2_100KP_BFall_fromB_10KGen_topP_0.75 | 2023-05-22T11:27:48.000Z | [
"transformers",
"pytorch",
"tensorboard",
"bert",
"text-classification",
"generated_from_trainer",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | text-classification | AustinCarthy | null | null | AustinCarthy/MixGPT2_100KP_BFall_fromB_10KGen_topP_0.75 | 0 | 2 | transformers | 2023-05-22T03:53:30 | ---
license: apache-2.0
tags:
- generated_from_trainer
metrics:
- accuracy
- f1
- precision
- recall
model-index:
- name: MixGPT2_100KP_BFall_fromB_10KGen_topP_0.75
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# MixGPT2_100KP_BFall_fromB_10KGen_topP_0.75
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unspecified dataset.
It achieves the following results on the evaluation set (see the TPR-at-FPR sketch after this list):
- Loss: 0.0191
- Accuracy: 0.9977
- F1: 0.9755
- Precision: 0.9990
- Recall: 0.9532
- Roc Auc Score: 0.9766
- Tpr At Fpr 0.01: 0.9616
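The card's "Tpr At Fpr 0.01" can be read off an ROC curve; a hedged sketch with scikit-learn follows, using placeholder labels and scores in place of the model's real evaluation outputs.

```python
# Hedged sketch: reading TPR at FPR = 0.01 off an ROC curve with
# scikit-learn. y_true/y_score are placeholders for real eval outputs.
import numpy as np
from sklearn.metrics import roc_curve

y_true = np.array([0, 0, 1, 1, 0, 1])                # placeholder labels
y_score = np.array([0.1, 0.4, 0.35, 0.8, 0.2, 0.9])  # placeholder scores

fpr, tpr, _ = roc_curve(y_true, y_score)
tpr_at_1pct_fpr = np.interp(0.01, fpr, tpr)  # fpr is non-decreasing
print(tpr_at_1pct_fpr)
```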
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Precision | Recall | Roc Auc Score | Tpr At Fpr 0.01 |
|:-------------:|:-----:|:------:|:---------------:|:--------:|:------:|:---------:|:------:|:-------------:|:---------------:|
| 0.0017 | 1.0 | 72188 | 0.0194 | 0.9972 | 0.9700 | 0.9960 | 0.9454 | 0.9726 | 0.9264 |
| 0.0017 | 2.0 | 144376 | 0.0220 | 0.9971 | 0.9688 | 0.9991 | 0.9402 | 0.9701 | 0.9466 |
| 0.0022 | 3.0 | 216564 | 0.0258 | 0.9963 | 0.9597 | 0.9994 | 0.9230 | 0.9615 | 0.9518 |
| 0.0023 | 4.0 | 288752 | 0.0154 | 0.9973 | 0.9713 | 0.9987 | 0.9454 | 0.9727 | 0.9614 |
| 0.0009 | 5.0 | 360940 | 0.0191 | 0.9977 | 0.9755 | 0.9990 | 0.9532 | 0.9766 | 0.9616 |
### Framework versions
- Transformers 4.29.1
- Pytorch 1.9.0+cu111
- Datasets 2.10.1
- Tokenizers 0.13.2
| 2,253 | [
[
-0.044769287109375,
-0.042083740234375,
0.007106781005859375,
0.01493072509765625,
-0.0223236083984375,
-0.0195159912109375,
-0.0061798095703125,
-0.01904296875,
0.0274200439453125,
0.02423095703125,
-0.0511474609375,
-0.044097900390625,
-0.053680419921875,
... |