repo_id stringlengths 4 110 | author stringlengths 2 27 ⌀ | model_type stringlengths 2 29 ⌀ | files_per_repo int64 2 15.4k | downloads_30d int64 0 19.9M | library stringlengths 2 37 ⌀ | likes int64 0 4.34k | pipeline stringlengths 5 30 ⌀ | pytorch bool 2 classes | tensorflow bool 2 classes | jax bool 2 classes | license stringlengths 2 30 | languages stringlengths 4 1.63k ⌀ | datasets stringlengths 2 2.58k ⌀ | co2 stringclasses 29 values | prs_count int64 0 125 | prs_open int64 0 120 | prs_merged int64 0 15 | prs_closed int64 0 28 | discussions_count int64 0 218 | discussions_open int64 0 148 | discussions_closed int64 0 70 | tags stringlengths 2 513 | has_model_index bool 2 classes | has_metadata bool 1 class | has_text bool 1 class | text_length int64 401 598k | is_nc bool 1 class | readme stringlengths 0 598k | hash stringlengths 32 32 |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
EIStakovskii/german_toxicity_classifier_plus | EIStakovskii | bert | 8 | 5 | transformers | 0 | text-classification | true | false | false | other | ['de'] | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | [] | false | true | true | 1,266 | false | This model was trained for toxicity labeling. Label_1 means TOXIC, Label_0 means NOT TOXIC
The model was fine-tuned from the existing sentiment classifier [oliverguhr/german-sentiment-bert](https://huggingface.co/oliverguhr/german-sentiment-bert). The aforementioned classifier performed poorly (44% accuracy on my test sample), so I trained the current toxicity classifier. Notably, the same performance was achieved by training on the [dbmdz/bert-base-german-cased model](https://huggingface.co/dbmdz/bert-base-german-cased).
The accuracy is 91% on the test split during training and 83% on a manually picked (and thus harder) sample of 200 sentences (100 label 1, 100 label 0) at the end of the training.
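A minimal usage sketch with the `transformers` text-classification pipeline (the exact label strings are an assumption based on the description above):
```python
from transformers import pipeline

# labels: LABEL_1 = toxic, LABEL_0 = not toxic (per the model description)
toxicity = pipeline("text-classification", model="EIStakovskii/german_toxicity_classifier_plus")
print(toxicity("Das ist ein Beispielsatz."))
```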
The model was fine-tuned on 37k sentences. The training data consisted of translations of the English data (around 30k sentences) from [the multilingual_detox dataset](https://github.com/s-nlp/multilingual_detox) by [Skolkovo Institute](https://huggingface.co/SkolkovoInstitute), produced with [the opus-mt-en-de translation model](https://huggingface.co/Helsinki-NLP/opus-mt-en-de) by [Helsinki-NLP](https://huggingface.co/Helsinki-NLP), plus around 7k semi-manually collected sentences obtained by crawling [the dict.cc web dictionary](https://www.dict.cc/) and [Reverso Context](https://context.reverso.net/translation/). | 3e23b855c39248c620cf688dbf9f205d |
ahnafsamin/FastSpeech2-gronings | ahnafsamin | null | 5 | 0 | null | 0 | text-to-speech | false | false | false | afl-3.0 | ['gos'] | ['gronings'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['text-to-speech', 'gronings', 'FastSpeech 2'] | false | true | true | 6,850 | false | ## GroTTS Model
This model was trained with the [FastSpeech 2](https://arxiv.org/abs/2006.04558) architecture using approximately 2 hours of Gronings TTS data. For the best results, you need to download the vocoder separately from [here](https://huggingface.co/ahnafsamin/parallelwavegan-gronings) and then use the following code:
```python
from espnet2.bin.tts_inference import Text2Speech
from scipy.io.wavfile import write

# load the acoustic model together with the separately downloaded vocoder
model = Text2Speech.from_pretrained(
    model_file="path_to_the_model_file_in_pth_format",
    vocoder_file="path_to_the_vocoder_file_in_pkl_format"
)

# synthesize speech and save it as a 22.05 kHz wav file
output = model("This is a simple test.")
write("x.wav", 22050, output['wav'].numpy())
```
The GroTTS model is deployed [here](https://huggingface.co/spaces/ahnafsamin/GroTTS-FastSpeech2).
## TTS config
<details><summary>expand</summary>
```
config: conf/tuning/train_fastspeech2.yaml
print_config: false
log_level: INFO
dry_run: false
iterator_type: sequence
output_dir: exp/tts_train_fastspeech2_raw_char_tacotron
ngpu: 1
seed: 0
num_workers: 1
num_att_plot: 3
dist_backend: nccl
dist_init_method: env://
dist_world_size: null
dist_rank: null
local_rank: 0
dist_master_addr: null
dist_master_port: null
dist_launcher: null
multiprocessing_distributed: false
unused_parameters: false
sharded_ddp: false
cudnn_enabled: true
cudnn_benchmark: false
cudnn_deterministic: true
collect_stats: false
write_collected_feats: false
max_epoch: 1000
patience: null
val_scheduler_criterion:
- valid
- loss
early_stopping_criterion:
- valid
- loss
- min
best_model_criterion:
- - valid
- loss
- min
- - train
- loss
- min
keep_nbest_models: 5
nbest_averaging_interval: 0
grad_clip: 1.0
grad_clip_type: 2.0
grad_noise: false
accum_grad: 8
no_forward_run: false
resume: true
train_dtype: float32
use_amp: false
log_interval: null
use_matplotlib: true
use_tensorboard: true
use_wandb: false
wandb_project: null
wandb_id: null
wandb_entity: null
wandb_name: null
wandb_model_log_interval: -1
detect_anomaly: false
pretrain_path: null
init_param: []
ignore_init_mismatch: false
freeze_param: []
num_iters_per_epoch: 800
batch_size: 20
valid_batch_size: null
batch_bins: 3000000
valid_batch_bins: null
train_shape_file:
- exp/tts_train_raw_char_tacotron/decode_use_teacher_forcingtrue_train.loss.ave/stats/train/text_shape.char
- exp/tts_train_raw_char_tacotron/decode_use_teacher_forcingtrue_train.loss.ave/stats/train/speech_shape
valid_shape_file:
- exp/tts_train_raw_char_tacotron/decode_use_teacher_forcingtrue_train.loss.ave/stats/valid/text_shape.char
- exp/tts_train_raw_char_tacotron/decode_use_teacher_forcingtrue_train.loss.ave/stats/valid/speech_shape
batch_type: numel
valid_batch_type: null
fold_length:
- 150
- 204800
sort_in_batch: descending
sort_batch: descending
multiple_iterator: false
chunk_length: 500
chunk_shift_ratio: 0.5
num_cache_chunks: 1024
train_data_path_and_name_and_type:
- - dump/raw/tr_no_dev/text
- text
- text
- - exp/tts_train_raw_char_tacotron/decode_use_teacher_forcingtrue_train.loss.ave/tr_no_dev/durations
- durations
- text_int
- - dump/raw/tr_no_dev/wav.scp
- speech
- sound
- - exp/tts_train_raw_char_tacotron/decode_use_teacher_forcingtrue_train.loss.ave/stats/train/collect_feats/pitch.scp
- pitch
- npy
- - exp/tts_train_raw_char_tacotron/decode_use_teacher_forcingtrue_train.loss.ave/stats/train/collect_feats/energy.scp
- energy
- npy
valid_data_path_and_name_and_type:
- - dump/raw/dev/text
- text
- text
- - exp/tts_train_raw_char_tacotron/decode_use_teacher_forcingtrue_train.loss.ave/dev/durations
- durations
- text_int
- - dump/raw/dev/wav.scp
- speech
- sound
- - exp/tts_train_raw_char_tacotron/decode_use_teacher_forcingtrue_train.loss.ave/stats/valid/collect_feats/pitch.scp
- pitch
- npy
- - exp/tts_train_raw_char_tacotron/decode_use_teacher_forcingtrue_train.loss.ave/stats/valid/collect_feats/energy.scp
- energy
- npy
allow_variable_data_keys: false
max_cache_size: 0.0
max_cache_fd: 32
valid_max_cache_size: null
optim: adam
optim_conf:
lr: 1.0
scheduler: noamlr
scheduler_conf:
model_size: 384
warmup_steps: 4000
token_list:
- <blank>
- <unk>
- <space>
- E
- N
- A
- O
- T
- I
- R
- D
- L
- S
- K
- M
- G
- U
- H
- .
- W
- V
- Z
- P
- B
- ','
- J
- C
- F
- '?'
- ''''
- '!'
- Y
- X
- '`'
- <sos/eos>
odim: null
model_conf: {}
use_preprocessor: true
token_type: char
bpemodel: null
non_linguistic_symbols: null
cleaner: tacotron
g2p: g2p_en
feats_extract: fbank
feats_extract_conf:
n_fft: 1024
hop_length: 256
win_length: null
fs: 22050
fmin: 80
fmax: 7600
n_mels: 80
normalize: global_mvn
normalize_conf:
stats_file: exp/tts_train_raw_char_tacotron/decode_use_teacher_forcingtrue_train.loss.ave/stats/train/feats_stats.npz
tts: fastspeech2
tts_conf:
adim: 384
aheads: 2
elayers: 4
eunits: 1536
dlayers: 4
dunits: 1536
positionwise_layer_type: conv1d
positionwise_conv_kernel_size: 3
duration_predictor_layers: 2
duration_predictor_chans: 256
duration_predictor_kernel_size: 3
postnet_layers: 5
postnet_filts: 5
postnet_chans: 256
use_masking: true
use_scaled_pos_enc: true
encoder_normalize_before: true
decoder_normalize_before: true
reduction_factor: 1
init_type: xavier_uniform
init_enc_alpha: 1.0
init_dec_alpha: 1.0
transformer_enc_dropout_rate: 0.2
transformer_enc_positional_dropout_rate: 0.2
transformer_enc_attn_dropout_rate: 0.2
transformer_dec_dropout_rate: 0.2
transformer_dec_positional_dropout_rate: 0.2
transformer_dec_attn_dropout_rate: 0.2
pitch_predictor_layers: 5
pitch_predictor_chans: 256
pitch_predictor_kernel_size: 5
pitch_predictor_dropout: 0.5
pitch_embed_kernel_size: 1
pitch_embed_dropout: 0.0
stop_gradient_from_pitch_predictor: true
energy_predictor_layers: 2
energy_predictor_chans: 256
energy_predictor_kernel_size: 3
energy_predictor_dropout: 0.5
energy_embed_kernel_size: 1
energy_embed_dropout: 0.0
stop_gradient_from_energy_predictor: false
pitch_extract: dio
pitch_extract_conf:
fs: 22050
n_fft: 1024
hop_length: 256
f0max: 400
f0min: 80
reduction_factor: 1
pitch_normalize: global_mvn
pitch_normalize_conf:
stats_file: exp/tts_train_raw_char_tacotron/decode_use_teacher_forcingtrue_train.loss.ave/stats/train/pitch_stats.npz
energy_extract: energy
energy_extract_conf:
fs: 22050
n_fft: 1024
hop_length: 256
win_length: null
reduction_factor: 1
energy_normalize: global_mvn
energy_normalize_conf:
stats_file: exp/tts_train_raw_char_tacotron/decode_use_teacher_forcingtrue_train.loss.ave/stats/train/energy_stats.npz
required:
- output_dir
- token_list
version: 0.10.7a1
distributed: false
```
</details> | f6e86d37c7fdf1af9e7cf2d1d8efbaeb |
lmchion/distilbert-finetuned-esg-a4s | lmchion | distilbert | 8 | 2 | transformers | 0 | fill-mask | false | true | false | apache-2.0 | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_keras_callback'] | true | true | true | 1,908 | false |
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# lmchion/distilbert-finetuned-esg-a4s
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 2.2859
- Validation Loss: 2.3354
- Epoch: 9
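A minimal fill-mask sketch (the card lists TensorFlow-only weights, hence `framework="tf"`; the example sentence is illustrative):
```python
from transformers import pipeline

# [MASK] is the mask token for distilbert-base-uncased checkpoints
unmasker = pipeline("fill-mask", model="lmchion/distilbert-finetuned-esg-a4s", framework="tf")
print(unmasker("Renewable energy reduces carbon [MASK]."))
```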
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'class_name': 'WarmUp', 'config': {'initial_learning_rate': 2e-05, 'decay_schedule_fn': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': -812, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, '__passive_serialization__': True}, 'warmup_steps': 1000, 'power': 1.0, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: float32
### Training results
| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 2.8805 | 2.7153 | 0 |
| 2.6414 | 2.5472 | 1 |
| 2.5202 | 2.4813 | 2 |
| 2.4306 | 2.3834 | 3 |
| 2.3452 | 2.3297 | 4 |
| 2.2940 | 2.3201 | 5 |
| 2.2889 | 2.3061 | 6 |
| 2.2726 | 2.3471 | 7 |
| 2.2827 | 2.3432 | 8 |
| 2.2859 | 2.3354 | 9 |
### Framework versions
- Transformers 4.20.0
- TensorFlow 2.8.2
- Datasets 2.3.2
- Tokenizers 0.12.1
| 3b3047f8e83969bdf7dba8997467ae01 |
paola-md/distilr2-lr1e05-wd0.1-bs32 | paola-md | roberta | 6 | 1 | transformers | 0 | text-classification | true | false | false | apache-2.0 | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 1,674 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilr2-lr1e05-wd0.1-bs32
This model is a fine-tuned version of [distilroberta-base](https://huggingface.co/distilroberta-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2744
- Rmse: 0.5238
- Mse: 0.2744
- Mae: 0.4135
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 256
- eval_batch_size: 256
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rmse | Mse | Mae |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:|
| 0.2775 | 1.0 | 623 | 0.2735 | 0.5229 | 0.2735 | 0.4180 |
| 0.2738 | 2.0 | 1246 | 0.2726 | 0.5221 | 0.2726 | 0.4124 |
| 0.2722 | 3.0 | 1869 | 0.2727 | 0.5222 | 0.2727 | 0.4165 |
| 0.2702 | 4.0 | 2492 | 0.2756 | 0.5249 | 0.2756 | 0.3995 |
| 0.2684 | 5.0 | 3115 | 0.2767 | 0.5260 | 0.2767 | 0.4229 |
| 0.2668 | 6.0 | 3738 | 0.2744 | 0.5238 | 0.2744 | 0.4135 |
### Framework versions
- Transformers 4.19.0.dev0
- Pytorch 1.9.0+cu111
- Datasets 2.4.0
- Tokenizers 0.12.1
| 3d236138966176c37f3a75b4eb4391f8 |
romainlhardy/finetuned-ner | romainlhardy | bert | 12 | 5 | transformers | 0 | token-classification | true | false | false | apache-2.0 | null | ['conll2003'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 1,329 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuned-ner
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the conll2003 dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0712
- Precision: 0.9048
- Recall: 0.9310
- F1: 0.9177
- Accuracy: 0.9817
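A minimal usage sketch with the token-classification pipeline (the example sentence is illustrative):
```python
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="romainlhardy/finetuned-ner",
    aggregation_strategy="simple",  # merge word-piece tokens into whole-entity spans
)
print(ner("George Washington lived in Virginia."))
```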
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.0849 | 1.0 | 1756 | 0.0712 | 0.9048 | 0.9310 | 0.9177 | 0.9817 |
### Framework versions
- Transformers 4.21.0
- Pytorch 1.12.0+cu113
- Datasets 2.4.0
- Tokenizers 0.12.1
| 725c080b58634c91d45fbd3bf3f39170 |
openai/whisper-base.en | openai | whisper | 14 | 6,308 | transformers | 3 | automatic-speech-recognition | true | true | false | apache-2.0 | ['en'] | null | null | 6 | 0 | 6 | 0 | 0 | 0 | 0 | ['audio', 'automatic-speech-recognition', 'hf-asr-leaderboard'] | true | true | true | 13,544 | false |
# Whisper
Whisper is a pre-trained model for automatic speech recognition (ASR) and speech translation. Trained on 680k hours
of labelled data, Whisper models demonstrate a strong ability to generalise to many datasets and domains **without** the need
for fine-tuning.
Whisper was proposed in the paper [Robust Speech Recognition via Large-Scale Weak Supervision](https://arxiv.org/abs/2212.04356)
by Alec Radford et al. from OpenAI. The original code repository can be found [here](https://github.com/openai/whisper).
**Disclaimer**: Content for this model card has partly been written by the Hugging Face team, and parts of it were
copied and pasted from the original model card.
## Model details
Whisper is a Transformer based encoder-decoder model, also referred to as a _sequence-to-sequence_ model.
It was trained on 680k hours of labelled speech data annotated using large-scale weak supervision.
The models were trained on either English-only data or multilingual data. The English-only models were trained
on the task of speech recognition. The multilingual models were trained on both speech recognition and speech
translation. For speech recognition, the model predicts transcriptions in the *same* language as the audio.
For speech translation, the model predicts transcriptions to a *different* language to the audio.
Whisper checkpoints come in five configurations of varying model sizes.
The smallest four are trained on either English-only or multilingual data.
The largest checkpoints are multilingual only. All ten of the pre-trained checkpoints
are available on the [Hugging Face Hub](https://huggingface.co/models?search=openai/whisper). The
checkpoints are summarised in the following table with links to the models on the Hub:
| Size | Parameters | English-only | Multilingual |
|----------|------------|------------------------------------------------------|-----------------------------------------------------|
| tiny | 39 M | [✓](https://huggingface.co/openai/whisper-tiny.en) | [✓](https://huggingface.co/openai/whisper-tiny) |
| base | 74 M | [✓](https://huggingface.co/openai/whisper-base.en) | [✓](https://huggingface.co/openai/whisper-base) |
| small | 244 M | [✓](https://huggingface.co/openai/whisper-small.en) | [✓](https://huggingface.co/openai/whisper-small) |
| medium | 769 M | [✓](https://huggingface.co/openai/whisper-medium.en) | [✓](https://huggingface.co/openai/whisper-medium) |
| large | 1550 M | x | [✓](https://huggingface.co/openai/whisper-large) |
| large-v2 | 1550 M | x | [✓](https://huggingface.co/openai/whisper-large-v2) |
# Usage
This checkpoint is an *English-only* model, meaning it can be used for English speech recognition. Multilingual speech
recognition or speech translation is possible through use of a multilingual checkpoint.
To transcribe audio samples, the model has to be used alongside a [`WhisperProcessor`](https://huggingface.co/docs/transformers/model_doc/whisper#transformers.WhisperProcessor).
The `WhisperProcessor` is used to:
1. Pre-process the audio inputs (converting them to log-Mel spectrograms for the model)
2. Post-process the model outputs (converting them from tokens to text)
## Transcription
```python
>>> from transformers import WhisperProcessor, WhisperForConditionalGeneration
>>> from datasets import load_dataset
>>> # load model and processor
>>> processor = WhisperProcessor.from_pretrained("openai/whisper-base.en")
>>> model = WhisperForConditionalGeneration.from_pretrained("openai/whisper-base.en")
>>> # load dummy dataset and read audio files
>>> ds = load_dataset("hf-internal-testing/librispeech_asr_dummy", "clean", split="validation")
>>> sample = ds[0]["audio"]
>>> input_features = processor(sample["array"], sampling_rate=sample["sampling_rate"], return_tensors="pt").input_features
>>> # generate token ids
>>> predicted_ids = model.generate(input_features)
>>> # decode token ids to text
>>> transcription = processor.batch_decode(predicted_ids, skip_special_tokens=False)
['<|startoftranscript|><|notimestamps|> Mr. Quilter is the apostle of the middle classes, and we are glad to welcome his gospel.<|endoftext|>']
>>> transcription = processor.batch_decode(predicted_ids, skip_special_tokens=True)
[' Mr. Quilter is the apostle of the middle classes and we are glad to welcome his gospel.']
```
The context tokens can be removed from the start of the transcription by setting `skip_special_tokens=True`.
## Evaluation
This code snippet shows how to evaluate Whisper base.en on [LibriSpeech test-clean](https://huggingface.co/datasets/librispeech_asr):
```python
>>> from datasets import load_dataset
>>> from transformers import WhisperForConditionalGeneration, WhisperProcessor
>>> import torch
>>> from evaluate import load
>>> librispeech_test_clean = load_dataset("librispeech_asr", "clean", split="test")
>>> processor = WhisperProcessor.from_pretrained("openai/whisper-base.en")
>>> model = WhisperForConditionalGeneration.from_pretrained("openai/whisper-base.en").to("cuda")
>>> def map_to_pred(batch):
>>> audio = batch["audio"]
>>> input_features = processor(audio["array"], sampling_rate=audio["sampling_rate"], return_tensors="pt").input_features
>>> batch["reference"] = processor.tokenizer._normalize(batch['text'])
>>>
>>> with torch.no_grad():
>>> predicted_ids = model.generate(input_features.to("cuda"))[0]
>>> transcription = processor.decode(predicted_ids)
>>> batch["prediction"] = processor.tokenizer._normalize(transcription)
>>> return batch
>>> result = librispeech_test_clean.map(map_to_pred)
>>> wer = load("wer")
>>> print(100 * wer.compute(references=result["reference"], predictions=result["prediction"]))
4.271408904897505
```
## Long-Form Transcription
The Whisper model is intrinsically designed to work on audio samples of up to 30s in duration. However, by using a chunking
algorithm, it can be used to transcribe audio samples of arbitrary length. This is possible through the Transformers
[`pipeline`](https://huggingface.co/docs/transformers/main_classes/pipelines#transformers.AutomaticSpeechRecognitionPipeline)
method. Chunking is enabled by setting `chunk_length_s=30` when instantiating the pipeline. The pipeline can also be extended to
predict utterance-level timestamps by passing `return_timestamps=True`:
```python
>>> import torch
>>> from transformers import pipeline
>>> from datasets import load_dataset
>>> device = "cuda:0" if torch.cuda.is_available() else "cpu"
>>> pipe = pipeline(
>>> "automatic-speech-recognition",
>>> model="openai/whisper-base.en",
>>> chunk_length_s=30,
>>> device=device,
>>> )
>>> ds = load_dataset("hf-internal-testing/librispeech_asr_dummy", "clean", split="validation")
>>> sample = ds[0]["audio"]
>>> prediction = pipe(sample.copy())["text"]
" Mr. Quilter is the apostle of the middle classes, and we are glad to welcome his gospel."
>>> # we can also return timestamps for the predictions
>>> prediction = pipe(sample, return_timestamps=True)["chunks"]
[{'text': ' Mr. Quilter is the apostle of the middle classes and we are glad to welcome his gospel.',
'timestamp': (0.0, 5.44)}]
```
## Fine-Tuning
The pre-trained Whisper model demonstrates a strong ability to generalise to different datasets and domains. However,
its predictive capabilities can be improved further for certain languages and tasks through *fine-tuning*. The blog
post [Fine-Tune Whisper with 🤗 Transformers](https://huggingface.co/blog/fine-tune-whisper) provides a step-by-step
guide to fine-tuning the Whisper model with as little as 5 hours of labelled data.
### Evaluated Use
The primary intended users of these models are AI researchers studying robustness, generalization, capabilities, biases, and constraints of the current model. However, Whisper is also potentially quite useful as an ASR solution for developers, especially for English speech recognition. We recognize that once models are released, it is impossible to restrict access to only “intended” uses or to draw reasonable guidelines around what is or is not research.
The models are primarily trained and evaluated on ASR and speech translation to English tasks. They show strong ASR results in ~10 languages. They may exhibit additional capabilities, particularly if fine-tuned on certain tasks like voice activity detection, speaker classification, or speaker diarization but have not been robustly evaluated in these areas. We strongly recommend that users perform robust evaluations of the models in a particular context and domain before deploying them.
In particular, we caution against using Whisper models to transcribe recordings of individuals taken without their consent or purporting to use these models for any kind of subjective classification. We recommend against use in high-risk domains like decision-making contexts, where flaws in accuracy can lead to pronounced flaws in outcomes. The models are intended to transcribe and translate speech; using them for classification is not only unevaluated but also inappropriate, particularly for inferring human attributes.
## Training Data
The models are trained on 680,000 hours of audio and the corresponding transcripts collected from the internet. 65% of this data (or 438,000 hours) represents English-language audio and matched English transcripts, roughly 18% (or 126,000 hours) represents non-English audio and English transcripts, while the final 17% (or 117,000 hours) represents non-English audio and the corresponding transcript. This non-English data represents 98 different languages.
As discussed in [the accompanying paper](https://cdn.openai.com/papers/whisper.pdf), we see that performance on transcription in a given language is directly correlated with the amount of training data we employ in that language.
## Performance and Limitations
Our studies show that, compared to many existing ASR systems, the models exhibit improved robustness to accents, background noise, and technical language, as well as zero-shot translation from multiple languages into English, and that accuracy on speech recognition and translation is near the state-of-the-art level.
However, because the models are trained in a weakly supervised manner using large-scale noisy data, the predictions may include texts that are not actually spoken in the audio input (i.e. hallucination). We hypothesize that this happens because, given their general knowledge of language, the models combine trying to predict the next word in audio with trying to transcribe the audio itself.
Our models perform unevenly across languages, and we observe lower accuracy on low-resource and/or low-discoverability languages or languages where we have less training data. The models also exhibit disparate performance on different accents and dialects of particular languages, which may include higher word error rate across speakers of different genders, races, ages, or other demographic criteria. Our full evaluation results are presented in [the paper accompanying this release](https://cdn.openai.com/papers/whisper.pdf).
In addition, the sequence-to-sequence architecture of the model makes it prone to generating repetitive texts, which can be mitigated to some degree by beam search and temperature scheduling but not perfectly. Further analysis on these limitations are provided in [the paper](https://cdn.openai.com/papers/whisper.pdf). It is likely that this behavior and hallucinations may be worse on lower-resource and/or lower-discoverability languages.
## Broader Implications
We anticipate that Whisper models’ transcription capabilities may be used for improving accessibility tools. While Whisper models cannot be used for real-time transcription out of the box, their speed and size suggest that others may be able to build applications on top of them that allow for near-real-time speech recognition and translation. The real value of beneficial applications built on top of Whisper models suggests that the disparate performance of these models may have real economic implications.
There are also potential dual use concerns that come with releasing Whisper. While we hope the technology will be used primarily for beneficial purposes, making ASR technology more accessible could enable more actors to build capable surveillance technologies or scale up existing surveillance efforts, as the speed and accuracy allow for affordable automatic transcription and translation of large volumes of audio communication. Moreover, these models may have some capabilities to recognize specific individuals out of the box, which in turn presents safety concerns related both to dual use and disparate performance. In practice, we expect that the cost of transcription is not the limiting factor of scaling up surveillance projects.
### BibTeX entry and citation info
```bibtex
@misc{radford2022whisper,
doi = {10.48550/ARXIV.2212.04356},
url = {https://arxiv.org/abs/2212.04356},
author = {Radford, Alec and Kim, Jong Wook and Xu, Tao and Brockman, Greg and McLeavey, Christine and Sutskever, Ilya},
title = {Robust Speech Recognition via Large-Scale Weak Supervision},
publisher = {arXiv},
year = {2022},
copyright = {arXiv.org perpetual, non-exclusive license}
}
```
| fd9755bb8e3cb18dcdd8c916fdce602e |
yirmibesogluz/t2t-ner-ade-balanced | yirmibesogluz | t5 | 9 | 1 | transformers | 0 | text2text-generation | true | false | false | mit | ['en'] | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['adverse-drug-events', 'twitter', 'social-media-mining-for-health', 'SMM4H'] | false | true | true | 1,822 | false |
## t2t-ner-ade-balanced
t2t-ner-ade-balanced is a text-to-text (**t2t**) adverse drug event (**ade**) extraction (NER) model trained on over- and undersampled (balanced) English tweets reporting adverse drug events. It was trained as part of the BOUN-TABI system for the Social Media Mining for Health (SMM4H) 2022 shared task. The system description paper has been accepted for publication in *Proceedings of the Seventh Social Media Mining for Health (#SMM4H) Workshop and Shared Task* and will be available soon. The source code has been released on GitHub at [https://github.com/gokceuludogan/boun-tabi-smm4h22](https://github.com/gokceuludogan/boun-tabi-smm4h22).
The model utilizes the T5 model and its text-to-text formulation. Inputs are fed to the model with the task prefix "ner ade:", followed by a sentence/tweet. In turn, either the extracted adverse event span or "none" is returned.
## Requirements
```
sentencepiece
transformers
```
## Usage
```python
from transformers import pipeline, AutoTokenizer, AutoModelForSeq2SeqLM
tokenizer = AutoTokenizer.from_pretrained("yirmibesogluz/t2t-ner-ade-balanced")
model = AutoModelForSeq2SeqLM.from_pretrained("yirmibesogluz/t2t-ner-ade-balanced")
predictor = pipeline("text2text-generation", model=model, tokenizer=tokenizer)
predictor("ner ade: i'm so irritable when my vyvanse wears off")
```
## Citation
```bibtex
@inproceedings{uludogan-gokce-yirmibesoglu-zeynep-2022-boun-tabi-smm4h22,
title = "{BOUN}-{TABI}@{SMM4H}'22: Text-to-{T}ext {A}dverse {D}rug {E}vent {E}xtraction with {D}ata {B}alancing and {P}rompting",
author = "Uludo{\u{g}}an, G{\"{o}}k{\c{c}}e and Yirmibe{\c{s}}o{\u{g}}lu, Zeynep",
booktitle = "Proceedings of the Seventh Social Media Mining for Health ({\#}SMM4H) Workshop and Shared Task",
year = "2022",
}
```
| 8b0385a0718f62b7510ff9c24f10acd1 |
rashedsafa/wav2vec2-large-xls-r-300m-bengali-v8 | rashedsafa | wav2vec2 | 13 | 5 | transformers | 0 | automatic-speech-recognition | true | false | false | apache-2.0 | null | ['common_voice'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 1,960 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-large-xls-r-300m-bengali-v8
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the common_voice dataset.
It achieves the following results on the evaluation set:
- Loss: 0.7874
- Wer: 0.6777
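A minimal transcription sketch (the audio path is hypothetical; XLS-R models expect 16 kHz input):
```python
from transformers import pipeline

asr = pipeline("automatic-speech-recognition", model="rashedsafa/wav2vec2-large-xls-r-300m-bengali-v8")
print(asr("sample.wav")["text"])  # hypothetical local 16 kHz audio file
```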
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 9e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 10
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 7.2332 | 0.85 | 400 | 3.3381 | 1.0 |
| 2.3574 | 1.71 | 800 | 0.8236 | 0.7516 |
| 0.8096 | 2.56 | 1200 | 0.9337 | 0.6717 |
| 1.1487 | 3.41 | 1600 | 1.1691 | 0.7665 |
| 0.806 | 4.26 | 2000 | 0.7716 | 0.6642 |
| 0.7746 | 5.12 | 2400 | 0.7874 | 0.6777 |
| 0.7736 | 5.97 | 2800 | 0.7874 | 0.6777 |
| 0.775 | 6.82 | 3200 | 0.7874 | 0.6777 |
| 0.7718 | 7.68 | 3600 | 0.7874 | 0.6777 |
| 0.7757 | 8.53 | 4000 | 0.7874 | 0.6777 |
| 0.7761 | 9.38 | 4400 | 0.7874 | 0.6777 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.11.0
- Datasets 2.1.0
- Tokenizers 0.12.1
| e5005989027b2b422f392e55e0a31a60 |
sh-lee/ddpm-butterflies-128 | sh-lee | null | 14 | 2 | diffusers | 0 | null | false | false | false | apache-2.0 | ['en'] | ['huggan/smithsonian_butterflies_subset'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | [] | false | true | true | 1,228 | false |
<!-- This model card has been generated automatically according to the information the training script had access to. You
should probably proofread and complete it, then remove this comment. -->
# ddpm-butterflies-128
## Model description
This diffusion model is trained with the [🤗 Diffusers](https://github.com/huggingface/diffusers) library
on the `huggan/smithsonian_butterflies_subset` dataset.
## Intended uses & limitations
#### How to use
```python
# a minimal sketch, assuming the standard diffusers DDPMPipeline API
from diffusers import DDPMPipeline
pipeline = DDPMPipeline.from_pretrained("sh-lee/ddpm-butterflies-128")
image = pipeline().images[0]
```
#### Limitations and bias
[TODO: provide examples of latent issues and potential remediations]
## Training data
[TODO: describe the data used to train the model]
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 16
- eval_batch_size: 16
- gradient_accumulation_steps: 1
- optimizer: AdamW with betas=(None, None), weight_decay=None and epsilon=None
- lr_scheduler: None
- lr_warmup_steps: 500
- ema_inv_gamma: None
- mixed_precision: fp16
### Training results
📈 [TensorBoard logs](https://huggingface.co/sh-lee/ddpm-butterflies-128/tensorboard?#scalars)
| ca3ee2a68c332af6e2fd2bc05cf601a3 |
cammy/bart-large-cnn-10k-pad-early-lit | cammy | bart | 11 | 1 | transformers | 0 | text2text-generation | true | false | false | mit | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 1,559 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bart-large-cnn-10k-pad-early-lit
This model is a fine-tuned version of [facebook/bart-large-cnn](https://huggingface.co/facebook/bart-large-cnn) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3758
- Rouge1: 27.7351
- Rouge2: 13.1664
- Rougel: 21.6559
- Rougelsum: 24.648
- Gen Len: 69.343
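A minimal summarization sketch (the input document is a placeholder):
```python
from transformers import pipeline

summarizer = pipeline("summarization", model="cammy/bart-large-cnn-10k-pad-early-lit")
article = "..."  # hypothetical input document
print(summarizer(article, max_length=130, min_length=30)[0]["summary_text"])
```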
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 10
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len |
|:-------------:|:-----:|:-----:|:---------------:|:-------:|:-------:|:-------:|:---------:|:-------:|
| 0.2516 | 1.0 | 9998 | 0.3540 | 28.1151 | 13.3875 | 22.1496 | 25.1745 | 66.578 |
| 0.1747 | 2.0 | 19996 | 0.3758 | 27.7351 | 13.1664 | 21.6559 | 24.648 | 69.343 |
### Framework versions
- Transformers 4.16.2
- Pytorch 1.10.2
- Datasets 1.18.3
- Tokenizers 0.11.0
| a9633bf951e462b59e64da24e7ee7d76 |
jonatasgrosman/exp_w2v2t_de_vp-100k_s627 | jonatasgrosman | wav2vec2 | 10 | 5 | transformers | 0 | automatic-speech-recognition | true | false | false | apache-2.0 | ['de'] | ['mozilla-foundation/common_voice_7_0'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['automatic-speech-recognition', 'de'] | false | true | true | 475 | false | # exp_w2v2t_de_vp-100k_s627
Fine-tuned [facebook/wav2vec2-large-100k-voxpopuli](https://huggingface.co/facebook/wav2vec2-large-100k-voxpopuli) for speech recognition using the train split of [Common Voice 7.0 (de)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0).
When using this model, make sure that your speech input is sampled at 16kHz.
This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
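A minimal transcription sketch using HuggingSound (the audio paths are hypothetical):
```python
from huggingsound import SpeechRecognitionModel

model = SpeechRecognitionModel("jonatasgrosman/exp_w2v2t_de_vp-100k_s627")
audio_paths = ["/path/to/file.mp3", "/path/to/another_file.wav"]  # hypothetical paths
transcriptions = model.transcribe(audio_paths)
```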
| e0dcd74995bac6dfb49c82421007445d |
quincyqiang/distilbert-base-uncased-finetuned-emotion | quincyqiang | distilbert | 14 | 1 | transformers | 0 | text-classification | true | false | false | apache-2.0 | null | ['emotion'] | null | 1 | 0 | 1 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 1,343 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset.
It achieves the following results on the evaluation set:
- Loss: 0.2106
- Accuracy: 0.927
- F1: 0.9273
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.8007 | 1.0 | 250 | 0.2955 | 0.914 | 0.9117 |
| 0.2417 | 2.0 | 500 | 0.2106 | 0.927 | 0.9273 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.8.0+cu111
- Datasets 1.17.0
- Tokenizers 0.10.3
| c8d3d859733de24e4d6231b52ad40a11 |
tae898/emoberta-base | tae898 | roberta | 8 | 115 | transformers | 4 | text-classification | true | false | false | mit | ['en'] | ['MELD', 'IEMOCAP'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['emoberta', 'roberta'] | false | true | true | 5,937 | false |
Check https://github.com/tae898/erc for the details
[Watch a demo video!](https://youtu.be/qbr7fNd6J28)
# Emotion Recognition in Conversation (ERC)
[](https://paperswithcode.com/sota/emotion-recognition-in-conversation-on?p=emoberta-speaker-aware-emotion-recognition-in)
[](https://paperswithcode.com/sota/emotion-recognition-in-conversation-on-meld?p=emoberta-speaker-aware-emotion-recognition-in)
At the moment, we only use the text modality to correctly classify the emotion of the utterances. The experiments were carried out on two datasets (i.e., MELD and IEMOCAP).
## Prerequisites
1. An x86-64 Unix or Unix-like machine
1. Python 3.8 or higher
1. Running in a virtual environment (e.g., conda, virtualenv, etc.) is highly recommended so that you don't mess up the system Python.
1. [`multimodal-datasets` repo](https://github.com/tae898/multimodal-datasets) (submodule)
1. pip install -r requirements.txt
## EmoBERTa training
First configure the hyperparameters and the dataset in `train-erc-text.yaml`, and then run the command below in this directory. I recommend running it in a virtualenv.
```sh
python train-erc-text.py
```
This will subsequently call `train-erc-text-hp.py` and `train-erc-text-full.py`.
## Results on the test split (weighted f1 scores)
| Model | | MELD | IEMOCAP |
| -------- | ------------------------------- | :-------: | :-------: |
| EmoBERTa | No past and future utterances | 63.46 | 56.09 |
| | Only past utterances | 64.55 | **68.57** |
| | Only future utterances | 64.23 | 66.56 |
| | Both past and future utterances | **65.61** | 67.42 |
| | → *without speaker names* | 65.07 | 64.02 |
Above numbers are the mean values of five random seed runs.
If you want to see more training and test details, check out `./results/`.
If you want to download the trained checkpoints and stuff, then [here](https://surfdrive.surf.nl/files/index.php/s/khREwk4MUI7MSnO/download) is where you can download them. It's a pretty big zip file.
## Deployment
### Huggingface
We have released our models on huggingface:
- [emoberta-base](https://huggingface.co/tae898/emoberta-base)
- [emoberta-large](https://huggingface.co/tae898/emoberta-large)
They are based on [RoBERTa-base](https://huggingface.co/roberta-base) and [RoBERTa-large](https://huggingface.co/roberta-large), respectively. They were trained on [both MELD and IEMOCAP datasets](utterance-ordered-MELD_IEMOCAP.json). Our deployed models are neither speaker-aware nor take previous utterances into account, meaning that it only classifies one utterance at a time without the speaker information (e.g., "I love you").
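A minimal sketch for the deployed checkpoints via the `transformers` pipeline (`top_k=None` returns scores for all labels; on older transformers versions use `return_all_scores=True` instead):
```python
from transformers import pipeline

classifier = pipeline("text-classification", model="tae898/emoberta-base", top_k=None)
print(classifier("I love you"))  # scores for neutral, joy, surprise, anger, sadness, disgust, fear
```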
### Flask app
You can either run the Flask RESTful server app as a docker container or just as a python script.
1. Running the app as a docker container **(recommended)**.
There are four images. Take what you need:
- `docker run -it --rm -p 10006:10006 tae898/emoberta-base`
- `docker run -it --rm -p 10006:10006 --gpus all tae898/emoberta-base-cuda`
- `docker run -it --rm -p 10006:10006 tae898/emoberta-large`
- `docker run -it --rm -p 10006:10006 --gpus all tae898/emoberta-large-cuda`
1. Running the app in your python environment:
This method is less recommended than the docker one.
Run `pip install -r requirements-deploy.txt` first.<br>
The [`app.py`](app.py) is a flask RESTful server. The usage is below:
```console
app.py [-h] [--host HOST] [--port PORT] [--device DEVICE] [--model-type MODEL_TYPE]
```
For example:
```sh
python app.py --host 0.0.0.0 --port 10006 --device cpu --model-type emoberta-base
```
### Client
Once the app is running, you can send a text to the server. First install the necessary packages: `pip install -r requirements-client.txt`, and then run [client.py](client.py). The usage is as below:
```console
client.py [-h] [--url-emoberta URL_EMOBERTA] --text TEXT
```
For example:
```sh
python client.py --text "Emotion recognition is so cool\!"
```
will give you:
```json
{
"neutral": 0.0049800905,
"joy": 0.96399665,
"surprise": 0.018937444,
"anger": 0.0071516023,
"sadness": 0.002021492,
"disgust": 0.001495996,
"fear": 0.0014167271
}
```
## Troubleshooting
The best way to find and solve your problems is to look in the GitHub issues tab. If you can't find what you want, feel free to raise an issue. We are pretty responsive.
## Contributing
Contributions are what make the open source community such an amazing place to learn, inspire, and create. Any contributions you make are **greatly appreciated**.
1. Fork the Project
1. Create your Feature Branch (`git checkout -b feature/AmazingFeature`)
1. Run `make style && quality` in the root repo directory, to ensure code quality.
1. Commit your Changes (`git commit -m 'Add some AmazingFeature'`)
1. Push to the Branch (`git push origin feature/AmazingFeature`)
1. Open a Pull Request
## Cite our work
Check out the [paper](https://arxiv.org/abs/2108.12009).
```bibtex
@misc{kim2021emoberta,
title={EmoBERTa: Speaker-Aware Emotion Recognition in Conversation with RoBERTa},
author={Taewoon Kim and Piek Vossen},
year={2021},
eprint={2108.12009},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
[](https://zenodo.org/badge/latestdoi/328375452)<br>
## Authors
- [Taewoon Kim](https://taewoonkim.com/)
## License
[MIT](https://choosealicense.com/licenses/mit/)
| 7181c3dc949152a4ab4f78f613a8cbca |
Helsinki-NLP/opus-mt-en-hy | Helsinki-NLP | marian | 11 | 16 | transformers | 0 | translation | true | true | false | apache-2.0 | ['en', 'hy'] | null | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | ['translation'] | false | true | true | 2,007 | false |
### eng-hye
* source group: English
* target group: Armenian
* OPUS readme: [eng-hye](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/eng-hye/README.md)
* model: transformer-align
* source language(s): eng
* target language(s): hye
* pre-processing: normalization + SentencePiece (spm4k,spm4k)
* download original weights: [opus-2020-06-16.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-hye/opus-2020-06-16.zip)
* test set translations: [opus-2020-06-16.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-hye/opus-2020-06-16.test.txt)
* test set scores: [opus-2020-06-16.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-hye/opus-2020-06-16.eval.txt)
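A minimal translation sketch with the standard MarianMT classes:
```python
from transformers import MarianMTModel, MarianTokenizer

model_name = "Helsinki-NLP/opus-mt-en-hy"
tokenizer = MarianTokenizer.from_pretrained(model_name)
model = MarianMTModel.from_pretrained(model_name)

batch = tokenizer(["How are you?"], return_tensors="pt")
generated = model.generate(**batch)
print(tokenizer.batch_decode(generated, skip_special_tokens=True))
```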
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| Tatoeba-test.eng.hye | 16.6 | 0.404 |
### System Info:
- hf_name: eng-hye
- source_languages: eng
- target_languages: hye
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/eng-hye/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['en', 'hy']
- src_constituents: {'eng'}
- tgt_constituents: {'hye', 'hye_Latn'}
- src_multilingual: False
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm4k,spm4k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/eng-hye/opus-2020-06-16.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/eng-hye/opus-2020-06-16.test.txt
- src_alpha3: eng
- tgt_alpha3: hye
- short_pair: en-hy
- chrF2_score: 0.40399999999999997
- bleu: 16.6
- brevity_penalty: 1.0
- ref_len: 5115.0
- src_name: English
- tgt_name: Armenian
- train_date: 2020-06-16
- src_alpha2: en
- tgt_alpha2: hy
- prefer_old: False
- long_pair: eng-hye
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41 | 2ac1c3c7928c4bd03afe829e1b1345f7 |
nandysoham/19-clustered | nandysoham | distilbert | 8 | 2 | transformers | 0 | question-answering | false | true | false | apache-2.0 | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_keras_callback'] | true | true | true | 2,072 | false |
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# nandysoham/19-clustered
This model is a fine-tuned version of [Rocketknight1/distilbert-base-uncased-finetuned-squad](https://huggingface.co/Rocketknight1/distilbert-base-uncased-finetuned-squad) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.7685
- Train End Logits Accuracy: 0.7826
- Train Start Logits Accuracy: 0.75
- Validation Loss: 0.9786
- Validation End Logits Accuracy: 0.6912
- Validation Start Logits Accuracy: 0.6838
- Epoch: 1
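A minimal question-answering sketch (the card lists TensorFlow-only weights, hence `framework="tf"`; question and context are illustrative):
```python
from transformers import pipeline

qa = pipeline("question-answering", model="nandysoham/19-clustered", framework="tf")
print(qa(question="Who wrote the report?", context="The report was written by Jane Doe in 2020."))
```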
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 134, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32
### Training results
| Train Loss | Train End Logits Accuracy | Train Start Logits Accuracy | Validation Loss | Validation End Logits Accuracy | Validation Start Logits Accuracy | Epoch |
|:----------:|:-------------------------:|:---------------------------:|:---------------:|:------------------------------:|:--------------------------------:|:-----:|
| 1.0803 | 0.6931 | 0.6922 | 0.9561 | 0.6838 | 0.6875 | 0 |
| 0.7685 | 0.7826 | 0.75 | 0.9786 | 0.6912 | 0.6838 | 1 |
### Framework versions
- Transformers 4.26.0
- TensorFlow 2.9.2
- Datasets 2.9.0
- Tokenizers 0.13.2
| 6b57321174dcf2929dcb6f1d39ddf524 |
infinitejoy/wav2vec2-large-xls-r-300m-latvian | infinitejoy | wav2vec2 | 19 | 20 | transformers | 0 | automatic-speech-recognition | true | false | false | apache-2.0 | ['lv'] | ['mozilla-foundation/common_voice_7_0'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['automatic-speech-recognition', 'generated_from_trainer', 'hf-asr-leaderboard', 'lv', 'model_for_talk', 'mozilla-foundation/common_voice_7_0', 'robust-speech-event'] | true | true | true | 1,713 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-large-xls-r-300m-latvian
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the MOZILLA-FOUNDATION/COMMON_VOICE_7_0 - LV dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1892
- Wer: 0.1698
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 7e-05
- train_batch_size: 32
- eval_batch_size: 1
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 2000
- num_epochs: 100.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:-----:|:---------------:|:------:|
| 1.4235 | 12.82 | 2000 | 0.4475 | 0.4551 |
| 0.9383 | 25.64 | 4000 | 0.2235 | 0.2328 |
| 0.8359 | 38.46 | 6000 | 0.2004 | 0.2098 |
| 0.7633 | 51.28 | 8000 | 0.1960 | 0.1882 |
| 0.7001 | 64.1 | 10000 | 0.1902 | 0.1809 |
| 0.652 | 76.92 | 12000 | 0.1979 | 0.1775 |
| 0.6025 | 89.74 | 14000 | 0.1866 | 0.1696 |
### Framework versions
- Transformers 4.16.0.dev0
- Pytorch 1.10.1+cu102
- Datasets 1.17.1.dev0
- Tokenizers 0.11.0
| 4529282f318c637e5c56c4aed5a43ec1 |
ali2066/finetuned-token-argumentative | ali2066 | distilbert | 13 | 16 | transformers | 0 | token-classification | true | false | false | apache-2.0 | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 1,775 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuned-token-argumentative
This model is a fine-tuned version of [distilbert-base-uncased-finetuned-sst-2-english](https://huggingface.co/distilbert-base-uncased-finetuned-sst-2-english) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1573
- Precision: 0.3777
- Recall: 0.3919
- F1: 0.3847
- Accuracy: 0.9497
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 1.0 | 75 | 0.3241 | 0.1109 | 0.2178 | 0.1470 | 0.8488 |
| No log | 2.0 | 150 | 0.3145 | 0.1615 | 0.2462 | 0.1950 | 0.8606 |
| No log | 3.0 | 225 | 0.3035 | 0.1913 | 0.3258 | 0.2411 | 0.8590 |
| No log | 4.0 | 300 | 0.3080 | 0.2199 | 0.3220 | 0.2613 | 0.8612 |
| No log | 5.0 | 375 | 0.3038 | 0.2209 | 0.3277 | 0.2639 | 0.8630 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.1+cu113
- Datasets 1.18.0
- Tokenizers 0.10.3
| c6c670eb2ea4b46f75fee5144fdc08c4 |
sarahmiller137/BiomedNLP-PubMedBERT-base-uncased-abstract-ft-ncbi-disease | sarahmiller137 | bert | 8 | 14 | transformers | 0 | token-classification | true | false | false | cc | ['en'] | ['ncbi_disease'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['named-entity-recognition', 'token-classification', 'entity_extraction', 'multi_class_classification'] | false | true | true | 1,248 | false |
## Model information:
The microsoft/BiomedNLP-PubMedBERT-base-uncased-abstract model fine-tuned using the ncbi_disease dataset from the datasets library.
## Intended uses:
This model is intended to be used for named entity recognition tasks. It will identify disease entities in text and predict labels based upon the NCBI-disease dataset; please see the dataset information for details.
## Limitations:
Note that the dataset and model may not be fully representative or suitable for all needs. It is recommended that the dataset paper and the base model card be reviewed before using the model:
- [NCBI Disease](https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3951655/pdf/nihms557856.pdf)
- [microsoft/BiomedNLP-PubMedBERT-base-uncased-abstract](https://huggingface.co/microsoft/BiomedNLP-PubMedBERT-base-uncased-abstract)
## How to use:
Load the model from the library using the following checkpoints:
```python
from transformers import AutoTokenizer, AutoModelForTokenClassification
tokenizer = AutoTokenizer.from_pretrained("sarahmiller137/BiomedNLP-PubMedBERT-base-uncased-abstract-ft-ncbi-disease")
# AutoModelForTokenClassification loads the NER head needed for entity prediction
model = AutoModelForTokenClassification.from_pretrained("sarahmiller137/BiomedNLP-PubMedBERT-base-uncased-abstract-ft-ncbi-disease")
```
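For the intended NER use, a pipeline sketch (the entity labels follow the NCBI-disease scheme; the example sentence is illustrative):
```python
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="sarahmiller137/BiomedNLP-PubMedBERT-base-uncased-abstract-ft-ncbi-disease",
    aggregation_strategy="simple",  # merge sub-word tokens into disease spans
)
print(ner("The patient was diagnosed with cystic fibrosis."))
```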
| 149b8b345a799580e6d4ad74099e37fa |
sd-dreambooth-library/mirtha-legrand | sd-dreambooth-library | null | 24 | 3 | diffusers | 0 | null | false | false | false | mit | null | null | null | 2 | 2 | 0 | 0 | 1 | 1 | 0 | [] | false | true | true | 1,192 | false | ### mirtha legrand on Stable Diffusion via Dreambooth
#### model by machinelearnear
This is the Stable Diffusion model fine-tuned on the mirtha legrand concept taught to Stable Diffusion with Dreambooth.
It can be used by modifying the `instance_prompt`: **a photo of sks mirtha legrand**
You can also train your own concepts and upload them to the library by using [this notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_dreambooth_training.ipynb).
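A minimal generation sketch, assuming the standard diffusers `StableDiffusionPipeline` API and a CUDA device:
```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "sd-dreambooth-library/mirtha-legrand", torch_dtype=torch.float16
).to("cuda")

image = pipe("a photo of sks mirtha legrand").images[0]
image.save("mirtha.png")
```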
Here are the images used for training this concept:






| d1be181c7197ec64caff1e7204fad447 |
sd-concepts-library/wojaks-now-now-now | sd-concepts-library | null | 10 | 0 | null | 1 | null | false | false | false | mit | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | [] | false | true | true | 1,180 | false | ### wojaks-now-now-now on Stable Diffusion
This is the `<red-wojak>` concept taught to Stable Diffusion via Textual Inversion. You can load this concept into the [Stable Conceptualizer](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/stable_conceptualizer_inference.ipynb) notebook. You can also train your own concepts and load them into the concept libraries using [this notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_textual_inversion_training.ipynb).
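A minimal usage sketch, assuming a recent diffusers release that provides `load_textual_inversion` (the base checkpoint is an assumption; any Stable Diffusion v1 model should work):
```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16  # assumed base model
).to("cuda")
pipe.load_textual_inversion("sd-concepts-library/wojaks-now-now-now")

image = pipe("a painting of <red-wojak>").images[0]
```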
Here is the new concept you will be able to use as an `object`:





| ebe50f73212fa6b12fa28ca984005023 |
StacyYang/finetuning-sentiment-model-3000-samples | StacyYang | distilbert | 45 | 1 | transformers | 0 | text-classification | true | false | false | apache-2.0 | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 1,041 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuning-sentiment-model-3000-samples
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1815
- Accuracy: 0.9663
- F1: 0.9686
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0002
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 2
### Training results
### Framework versions
- Transformers 4.24.0
- Pytorch 1.12.1+cu113
- Tokenizers 0.13.2
| 04ad54941aed6417a6f7fb6059d4d226 |
batterydata/batteryscibert-cased-abstract | batterydata | bert | 20 | 5 | transformers | 0 | text-classification | true | false | false | apache-2.0 | ['en'] | ['batterydata/paper-abstracts'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | Text Classification | false | true | true | 1,335 | false |
# BatterySciBERT-cased for Battery Abstract Classification
**Language model:** batteryscibert-cased
**Language:** English
**Downstream-task:** Text Classification
**Training data:** training\_data.csv
**Eval data:** val\_data.csv
**Code:** See [example](https://github.com/ShuHuang/batterybert)
**Infrastructure**: 8x DGX A100
## Hyperparameters
```
batch_size = 32
n_epochs = 11
base_LM_model = "batteryscibert-cased"
learning_rate = 2e-5
```
## Performance
```
"Validation accuracy": 97.06,
"Test accuracy": 97.19,
```
## Usage
### In Transformers
```python
from transformers import AutoModelForSequenceClassification, AutoTokenizer, pipeline
model_name = "batterydata/batteryscibert-cased-abstract"
# a) Get predictions
nlp = pipeline('text-classification', model=model_name, tokenizer=model_name)
input = ['The typical non-aqueous electrolyte for commercial Li-ion cells is a solution of LiPF6 in linear and cyclic carbonates.']
res = nlp(input)
# b) Load model & tokenizer
model = AutoModelForSequenceClassification.from_pretrained(model_name)
tokenizer = AutoTokenizer.from_pretrained(model_name)
```
## Authors
Shu Huang: `sh2009 [at] cam.ac.uk`
Jacqueline Cole: `jmc61 [at] cam.ac.uk`
## Citation
BatteryBERT: A Pre-trained Language Model for Battery Database Enhancement | 7247b67f24f38febd813f61fc4128f7e |
ksabeh/distilbert-attribute-correction-mlm-titles | ksabeh | distilbert | 8 | 34 | transformers | 0 | question-answering | false | true | false | apache-2.0 | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_keras_callback'] | true | true | true | 1,395 | false |
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# ksabeh/bert_attrs_qa_large
This model is a fine-tuned version of [ksabeh/distilbert-attribute-correction-mlm](https://huggingface.co/ksabeh/distilbert-attribute-correction-mlm) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.0560
- Validation Loss: 0.0722
- Epoch: 1
## Model description
More information needed
## Intended uses & limitations
More information needed
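As a rough illustration (not from the original card), the checkpoint can be queried through the question-answering `pipeline`. Framing the attribute name as the question and the product title as the context is an assumption, the example title is invented, and `framework="tf"` is used because the repository ships TensorFlow/Keras weights:

```python
from transformers import pipeline

# Minimal sketch: treat attribute-value correction as extractive QA over a title.
qa = pipeline(
    "question-answering",
    model="ksabeh/distilbert-attribute-correction-mlm-titles",
    framework="tf",  # the repository provides TensorFlow/Keras weights
)
# Hypothetical product title and attribute name, for illustration only.
print(qa(question="Brand", context="Apple iPhone 12 Pro Max 256GB Pacific Blue"))
```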
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 23878, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32
### Training results
| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 0.1745 | 0.0875 | 0 |
| 0.0560 | 0.0722 | 1 |
### Framework versions
- Transformers 4.18.0
- TensorFlow 2.6.4
- Datasets 2.1.0
- Tokenizers 0.12.1
| 34b41278c8d932c17399c383d4ea2531 |
khairi/bert-tweet-disaster | khairi | bert | 6 | 3 | transformers | 0 | text-classification | true | false | false | apache-2.0 | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 1,905 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-tweet-disaster
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9563
- Accuracy: 0.8320
- F1: 0.8095
## Model description
More information needed
## Intended uses & limitations
More information needed
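As a rough illustration (not from the original card), the checkpoint can be used to score short tweet-like texts; the example texts are illustrative and the label names depend on the saved config:

```python
from transformers import pipeline

# Minimal sketch: classify whether short tweet-like texts refer to a real disaster.
classifier = pipeline("text-classification", model="khairi/bert-tweet-disaster")
print(classifier([
    "Forest fire near La Ronge Sask. Canada",
    "I love the new coffee place downtown",
]))
```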
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 64
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.605 | 1.0 | 108 | 0.4455 | 0.8123 | 0.7741 |
| 0.3878 | 2.0 | 216 | 0.3940 | 0.8438 | 0.8126 |
| 0.3228 | 3.0 | 324 | 0.4441 | 0.8241 | 0.8006 |
| 0.2526 | 4.0 | 432 | 0.4714 | 0.8333 | 0.8006 |
| 0.2002 | 5.0 | 540 | 0.5677 | 0.8189 | 0.7890 |
| 0.1391 | 6.0 | 648 | 0.6633 | 0.8307 | 0.8000 |
| 0.0922 | 7.0 | 756 | 0.8019 | 0.8294 | 0.8071 |
| 0.0693 | 8.0 | 864 | 0.8526 | 0.8333 | 0.8049 |
| 0.0495 | 9.0 | 972 | 0.9813 | 0.8241 | 0.8075 |
| 0.0345 | 10.0 | 1080 | 0.9563 | 0.8320 | 0.8095 |
### Framework versions
- Transformers 4.20.1
- Pytorch 1.11.0
- Datasets 2.1.0
- Tokenizers 0.12.1
| 1f2770dc0f49b460a5c95115c046d488 |
koanlp/bart-large-cnn-finetuned-wiki | koanlp | bart | 11 | 2 | transformers | 0 | text2text-generation | true | false | false | mit | null | ['wiki_lingua'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 1,009 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bart-large-cnn-finetuned-wiki
This model is a fine-tuned version of [facebook/bart-large-cnn](https://huggingface.co/facebook/bart-large-cnn) on the wiki_lingua dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
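As a rough usage sketch (not from the original card), the checkpoint can be used through the summarization `pipeline`; the input passage and length settings are illustrative only:

```python
from transformers import pipeline

# Minimal sketch: summarize a passage with the fine-tuned BART checkpoint.
summarizer = pipeline("summarization", model="koanlp/bart-large-cnn-finetuned-wiki")
text = (
    "The Eiffel Tower is a wrought-iron lattice tower on the Champ de Mars in Paris, "
    "France. It is named after the engineer Gustave Eiffel, whose company designed "
    "and built the tower for the 1889 World's Fair."
)
print(summarizer(text, max_length=60, min_length=10, do_sample=False)[0]["summary_text"])
```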
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 1
- label_smoothing_factor: 0.1
### Training results
### Framework versions
- Transformers 4.20.1
- Pytorch 1.12.0+cu113
- Datasets 2.3.2
- Tokenizers 0.12.1
| ce39a0b528878d5492bcce5212ce36b4 |
internetoftim/gpt2-finetuned-wikitext2 | internetoftim | gpt2 | 7 | 6 | transformers | 0 | text-generation | true | false | false | mit | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 1,373 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# gpt2-finetuned-wikitext2
This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: nan
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- distributed_type: IPU
- gradient_accumulation_steps: 16
- total_train_batch_size: 64
- total_eval_batch_size: 18
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
- training precision: Mixed Precision
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| nan | 1.0 | 291 | nan |
| nan | 2.0 | 582 | nan |
| nan | 3.0 | 873 | nan |
### Framework versions
- Transformers 4.24.0
- Pytorch 1.10.0+cpu
- Datasets 2.6.1
- Tokenizers 0.13.2
| 02c397310671daa35396b1048238a6bc |
mariolinml/roberta-large-finetuned-chunking | mariolinml | bert | 10 | 5 | transformers | 0 | token-classification | true | false | false | apache-2.0 | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 1,807 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# roberta-large-finetuned-chunking
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.4192
- Precision: 0.3222
- Recall: 0.3161
- F1: 0.3191
- Accuracy: 0.8632
## Model description
More information needed
## Intended uses & limitations
More information needed
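As a rough usage sketch (not from the original card), the checkpoint can be run as a token-classification tagger; the example sentence is invented, and how well spans aggregate depends on the label scheme saved in the config:

```python
from transformers import pipeline

# Minimal sketch: run the checkpoint as a token-classification (chunking) tagger.
tagger = pipeline(
    "token-classification",
    model="mariolinml/roberta-large-finetuned-chunking",
    aggregation_strategy="simple",
)
print(tagger("John lives in New York and works for Acme Corp."))
```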
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 6
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:-----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.0373 | 1.0 | 2498 | 0.9545 | 0.3166 | 0.2545 | 0.2822 | 0.8656 |
| 0.0045 | 2.0 | 4996 | 1.1324 | 0.2667 | 0.3142 | 0.2885 | 0.8525 |
| 0.0022 | 3.0 | 7494 | 1.3138 | 0.3349 | 0.3097 | 0.3218 | 0.8672 |
| 0.0015 | 4.0 | 9992 | 1.3454 | 0.3261 | 0.3260 | 0.3260 | 0.8647 |
| 0.0014 | 5.0 | 12490 | 1.3640 | 0.3064 | 0.3126 | 0.3095 | 0.8603 |
| 0.0008 | 6.0 | 14988 | 1.4192 | 0.3222 | 0.3161 | 0.3191 | 0.8632 |
### Framework versions
- Transformers 4.20.0
- Pytorch 1.11.0
- Datasets 2.3.2
- Tokenizers 0.12.1
| 91ab0c7b2be2c1215fb85268416db76f |
DrishtiSharma/poem-gen-gpt2-small-spanish | DrishtiSharma | gpt2 | 9 | 6 | transformers | 0 | text-generation | true | false | false | apache-2.0 | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 1,280 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# poem-gen-gpt2-small-spanish
This model is a fine-tuned version of [datificate/gpt2-small-spanish](https://huggingface.co/datificate/gpt2-small-spanish) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 3.9229
## Model description
More information needed
## Intended uses & limitations
More information needed
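As a rough usage sketch (not from the original card), the checkpoint can be used for Spanish text generation; the prompt and sampling settings are illustrative only:

```python
from transformers import pipeline

# Minimal sketch: generate Spanish, poem-flavoured text from a short prompt.
generator = pipeline("text-generation", model="DrishtiSharma/poem-gen-gpt2-small-spanish")
print(generator("En la noche estrellada", max_length=50, do_sample=True, top_p=0.95)[0]["generated_text"])
```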
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 4.2121 | 1.0 | 2569 | 3.9954 |
| 4.0612 | 2.0 | 5138 | 3.9375 |
| 3.9988 | 3.0 | 7707 | 3.9229 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.10.0+cu111
- Datasets 2.0.0
- Tokenizers 0.11.6
| 453b02f7da2781425d7976f95a28cf3d |
anuragshas/whisper-large-v2-mr | anuragshas | whisper | 23 | 1 | transformers | 0 | automatic-speech-recognition | true | false | false | apache-2.0 | ['mr'] | ['mozilla-foundation/common_voice_11_0'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['whisper-event', 'generated_from_trainer'] | true | true | true | 1,627 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Whisper Large-v2 Marathi
This model is a fine-tuned version of [openai/whisper-large-v2](https://huggingface.co/openai/whisper-large-v2) on the mozilla-foundation/common_voice_11_0 mr dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3108
- Wer: 15.2206
## Model description
More information needed
## Intended uses & limitations
More information needed
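As a rough usage sketch (not from the original card), the checkpoint can be used through the ASR `pipeline`; the audio path is a placeholder:

```python
from transformers import pipeline

# Minimal sketch: transcribe a Marathi recording with the fine-tuned Whisper checkpoint.
# "marathi_sample.mp3" is a placeholder path; chunking handles recordings longer than 30 s.
asr = pipeline(
    "automatic-speech-recognition",
    model="anuragshas/whisper-large-v2-mr",
    chunk_length_s=30,
)
print(asr("marathi_sample.mp3")["text"])
```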
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-06
- train_batch_size: 32
- eval_batch_size: 16
- seed: 42
- distributed_type: multi-GPU
- gradient_accumulation_steps: 2
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 100
- training_steps: 1000
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:-------:|
| 0.1931 | 3.04 | 200 | 0.2491 | 16.9270 |
| 0.1108 | 7.03 | 400 | 0.2379 | 15.2711 |
| 0.0548 | 11.02 | 600 | 0.2668 | 15.3120 |
| 0.0189 | 15.01 | 800 | 0.3108 | 15.2206 |
| 0.0078 | 18.05 | 1000 | 0.3499 | 15.5571 |
### Framework versions
- Transformers 4.26.0.dev0
- Pytorch 1.13.0+cu117
- Datasets 2.7.1.dev0
- Tokenizers 0.13.2
| ddd793e1fa6e788a979890bba143b2ef |
Joqsan/bert-base-uncased-finetuned-qnli | Joqsan | bert | 10 | 7 | transformers | 0 | text-classification | true | false | false | apache-2.0 | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 915 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-uncased-finetuned-qnli
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
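As a rough usage sketch (not from the original card), QNLI takes a (question, sentence) pair, so the two texts are encoded together; the example pair is invented and the label names depend on the saved config:

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

# Minimal sketch: score whether the sentence answers the question.
name = "Joqsan/bert-base-uncased-finetuned-qnli"
tokenizer = AutoTokenizer.from_pretrained(name)
model = AutoModelForSequenceClassification.from_pretrained(name)

inputs = tokenizer(
    "What causes rain?",
    "Rain is caused by water vapour condensing into droplets inside clouds.",
    return_tensors="pt",
)
with torch.no_grad():
    probs = model(**inputs).logits.softmax(dim=-1)
print(probs)
```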
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Framework versions
- Transformers 4.27.0.dev0
- Pytorch 1.13.1+cu116
- Datasets 2.9.0
- Tokenizers 0.13.2
| d654bd6843eef5f646b5329d8fdc8f56 |
yoshitomo-matsubara/bert-large-uncased-stsb | yoshitomo-matsubara | bert | 9 | 46 | transformers | 0 | text-classification | true | false | false | apache-2.0 | ['en'] | ['stsb'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['bert', 'stsb', 'glue', 'torchdistill'] | false | true | true | 710 | false |
`bert-large-uncased` fine-tuned on STS-B dataset, using [***torchdistill***](https://github.com/yoshitomo-matsubara/torchdistill) and [Google Colab](https://colab.research.google.com/github/yoshitomo-matsubara/torchdistill/blob/master/demo/glue_finetuning_and_submission.ipynb).
The hyperparameters are the same as those in Hugging Face's example and/or the paper of BERT, and the training configuration (including hyperparameters) is available [here](https://github.com/yoshitomo-matsubara/torchdistill/blob/main/configs/sample/glue/stsb/mse/bert_large_uncased.yaml).
I submitted prediction files to [the GLUE leaderboard](https://gluebenchmark.com/leaderboard), and the overall GLUE score was **80.2**.
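As a rough usage sketch (not from the original card), STS-B is a regression task, so the single output logit can be read as the predicted similarity score; the example sentence pair is invented:

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

# Minimal sketch: predict a sentence-pair similarity score (roughly on a 0-5 scale).
name = "yoshitomo-matsubara/bert-large-uncased-stsb"
tokenizer = AutoTokenizer.from_pretrained(name)
model = AutoModelForSequenceClassification.from_pretrained(name)

inputs = tokenizer("A man is playing a guitar.", "A person plays an instrument.", return_tensors="pt")
with torch.no_grad():
    score = model(**inputs).logits.squeeze().item()
print(score)
```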
| 081e3fde9bd7708e3529a145e5e65fee |
sd-concepts-library/ugly_sonic_enhanced | sd-concepts-library | null | 3 | 0 | null | 2 | null | false | false | false | openrail | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | [] | false | true | true | 487 | false |
Yes, he is back, better than ever. And with a beautiful Green Hill Zone.
Renders in Automatic1111



| 12c972f8cd4354224ea1af4f9ed91a1a |
aadvari/movie-recommender | aadvari | null | 18 | 0 | null | 0 | null | false | false | false | openrail | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['code'] | false | true | true | 733 | false | # Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
This is a model for recommending movies to users based on the IMDb dataset; see [Model-Codes](https://github.com/AAdvari/movie-recommender) for the implementation.
# Model Details
This model uses content-based, collaborative-filtering, and ensemble approaches to recommend movies
## Model Description
This model is developed as a multi-approach KNN recommender system using scikit-learn and PyTorch.
<!-- Provide a longer summary of what this model is. -->
- **Developed by:** [AmirHossein Advari, Parsa MohammadPour]
- **Model type:** [KNN]
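To make the content-based side of the multi-approach idea concrete, here is a small, self-contained KNN sketch in scikit-learn. It illustrates the technique only; it is not the repository's own code, and the toy movie titles and plot summaries are invented:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.neighbors import NearestNeighbors

# Toy plot summaries standing in for the IMDb metadata the real model uses.
titles = ["The Matrix", "John Wick", "Toy Story", "Finding Nemo"]
plots = [
    "A hacker discovers reality is a simulation and fights intelligent machines.",
    "A retired hitman returns for revenge in a stylish action thriller.",
    "Toys secretly come to life and go on an adventure with their owner.",
    "A clownfish crosses the ocean to find his missing son.",
]

# Content-based KNN: nearest neighbours in TF-IDF space of the plot texts.
vectors = TfidfVectorizer().fit_transform(plots)
knn = NearestNeighbors(n_neighbors=3, metric="cosine").fit(vectors)

# Recommend titles most similar to "The Matrix" (excluding itself).
_, idx = knn.kneighbors(vectors[0])
print([titles[i] for i in idx[0] if i != 0])
```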
## Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [https://github.com/AAdvari/movie-recommender] | c6a29bcc9af034e0577de838994595c7 |
gcmsrc/xlm-roberta-base-finetuned-panx-all | gcmsrc | xlm-roberta | 10 | 7 | transformers | 0 | token-classification | true | false | false | mit | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 1,319 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-panx-all
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1454
- F1: 0.8732
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.297 | 1.0 | 739 | 0.1785 | 0.8273 |
| 0.1536 | 2.0 | 1478 | 0.1524 | 0.8574 |
| 0.0998 | 3.0 | 2217 | 0.1454 | 0.8732 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.12.1+cu102
- Datasets 1.16.1
- Tokenizers 0.10.3
| 853e07c292a63e1ff53326b5f0aa5105 |
drmeeseeks/whisper-large-v2-amet | drmeeseeks | whisper | 28 | 0 | transformers | 0 | automatic-speech-recognition | true | false | false | apache-2.0 | null | ['google/fleurs'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['whisper-event', 'generated_from_trainer'] | true | true | true | 7,530 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Whisper Large Amharic FLEURS
This model is a fine-tuned version of [openai/whisper-large](https://huggingface.co/openai/whisper-large) on the google/fleurs am_et dataset.
It achieves the following results on the evaluation set:
- Loss: 12.2408
- Wer: 102.9412
## Model description
- The main Whisper Small Hugging Face page: [Hugging Face - Whisper Small](https://huggingface.co/openai/whisper-small)
## Intended uses & limitations
- For experimentation and curiosity.
- Based on the paper [arXiv](https://arxiv.org/abs/2212.04356) and [Benchmarking OpenAI Whisper for non-English ASR - Dan Shafer](https://blog.deepgram.com/benchmarking-openai-whisper-for-non-english-asr/), there is a performance bias towards certain languages and curated datasets.
- From the Whisper paper, am_et is a low resource language (Table E), with the WER results ranging from 120-229, based on model size. Whisper small WER=120.2, indicating more training time may improve the fine tuning.
## Training and evaluation data
- This model was trained/evaluated on "test+validation" data from google/fleurs [google/fluers - HuggingFace Datasets](https://huggingface.co/datasets/google/fleurs).
## Training procedure
- The training was done on Lambda Cloud GPU with A100/40GB GPUs, which were provided through the OpenAI Community Events [Whisper Fine Tuning Event - Dec 2022](https://github.com/huggingface/community-events/tree/main/whisper-fine-tuning-event#fine-tune-whisper). The training used [HuggingFace Community Events - Whisper - run_speech_recognition_seq2seq_streaming.py](https://github.com/huggingface/community-events/blob/main/whisper-fine-tuning-event/run_speech_recognition_seq2seq_streaming.py) together with the included [whisper_python_am_et.ipynb](https://huggingface.co/drmeeseeks/whisper-small-am_et/blob/main/am_et_fine_tune_whisper_streaming_colab_RUNNING-evalerrir.ipynb) to set up the Lambda Cloud GPU/Colab environment. For Colab, you must reduce the train batch size to the recommended amount mentioned in the [Whisper Fine Tuning Event - Dec 2022](https://github.com/huggingface/community-events/tree/main/whisper-fine-tuning-event#fine-tune-whisper) guide, as the T4 GPUs have 16GB of memory. The notebook sets up the environment, logs into your Hugging Face account, and generates a bash script. The bash script generated in the notebook, `run.sh`, was then run from the terminal to train the model (`bash run.sh`), as described on the Whisper community events GitHub page.
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 128
- eval_batch_size: 64
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 5000
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:------:|:----:|:---------------:|:--------:|
| 0.0 | 1000.0 | 1000 | 8.3822 | 156.0160 |
| 0.0 | 2000.0 | 2000 | 9.7961 | 110.4278 |
| 0.0 | 3000.0 | 3000 | 12.0014 | 102.8075 |
| 0.0 | 4000.0 | 4000 | 12.2633 | 103.3422 |
| 0.0 | 5000.0 | 5000 | 12.2408 | 102.9412 |
### Recommendations
Limit training duration for smaller datasets to ~2000 to 3000 steps to avoid overfitting. 5000 steps using the [HuggingFace - Whisper Small](https://huggingface.co/openai/whisper-small) model takes ~5 hrs on A100 GPUs (1 hr/1000 steps). Training encountered `RuntimeError: The size of tensor a (504) must match the size of tensor b (448) at non-singleton dimension 1`, which is related to [Trainer RuntimeError](https://discuss.huggingface.co/t/trainer-runtimeerror-the-size-of-tensor-a-462-must-match-the-size-of-tensor-b-448-at-non-singleton-dimension-1/26010), as some language datasets contain inputs with non-standard lengths. The link did not resolve the issue, which also appears elsewhere: [Training languagemodel – RuntimeError the expanded size of the tensor (100) must match the existing size (64) at non singleton dimension 1](https://hungsblog.de/en/technology/troubleshooting/training-languagemodel-runtimeerror-the-expanded-size-of-the-tensor-100-must-match-the-existing-size-64-at-non-singleton-dimension-1/). To circumvent this issue, the `run.sh` parameters are adjusted. Then run `python run_eval_whisper_streaming.py --model_id="openai/whisper-small" --dataset="google/fleurs" --config="am_et" --batch_size=32 --max_eval_samples=64 --device=0 --language="am"` to find the WER score manually; otherwise, erroring out during evaluation prevents the trained model from loading to Hugging Face. Based on the paper [arXiv](https://arxiv.org/abs/2212.04356) and [Benchmarking OpenAI Whisper for non-English ASR - Dan Shafer](https://blog.deepgram.com/benchmarking-openai-whisper-for-non-english-asr/), there is a performance bias towards certain languages and curated datasets. The OpenAI fine-tuning community event provided ample _free_ GPU time to help develop the model further and improve WER scores.
### Environmental Impact
Carbon emissions were estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). In total roughly 100 hours were used primarily in US East/Asia Pacific (80%/20%), with AWS as the reference. Additional resources are available at [Our World in Data - CO2 Emissions](https://ourworldindata.org/co2-emissions)
- __Hardware Type__: AMD EPYC 7J13 64-Core Processor (30 core VM) 197GB RAM, with NVIDIA A100-SXM 40GB
- __Hours Used__: 100 hrs
- __Cloud Provider__: Lambda Cloud GPU
- __Compute Region__: US East/Asia Pacific
- __Carbon Emitted__: 12 kg (GPU) + 13 kg (CPU) = 25 kg (the weight of 3 gallons of water)
### Framework versions
- Transformers 4.26.0.dev0
- Pytorch 1.13.1+cu117
- Datasets 2.8.1.dev0
- Tokenizers 0.13.2
### Citation
- [Whisper - GITHUB](https://github.com/openai/whisper)
- [Whisper - OpenAI - BLOG](https://openai.com/blog/whisper/)
- [Model Card - HuggingFace Hub - GITHUB](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/modelcard_template.md)
```bibtex
@misc{https://doi.org/10.48550/arxiv.2212.04356,
doi = {10.48550/ARXIV.2212.04356},
url = {https://arxiv.org/abs/2212.04356},
author = {Radford, Alec and Kim, Jong Wook and Xu, Tao and Brockman, Greg and McLeavey, Christine and Sutskever, Ilya},
keywords = {Audio and Speech Processing (eess.AS), Computation and Language (cs.CL), Machine Learning (cs.LG), Sound (cs.SD), FOS: Electrical engineering, electronic engineering, information engineering, FOS: Electrical engineering, electronic engineering, information engineering, FOS: Computer and information sciences, FOS: Computer and information sciences},
title = {Robust Speech Recognition via Large-Scale Weak Supervision},
publisher = {arXiv},
year = {2022},
copyright = {arXiv.org perpetual, non-exclusive license}
}
@article{owidco2andothergreenhousegasemissions,
author = {Hannah Ritchie and Max Roser and Pablo Rosado},
title = {CO₂ and Greenhouse Gas Emissions},
journal = {Our World in Data},
year = {2020},
note = {https://ourworldindata.org/co2-and-other-greenhouse-gas-emissions}
}
```
| 762352d33948b5b2ddd216a678d9acff |
Botnoi/wav2vec2-xls-r-300m-th-v2 | Botnoi | wav2vec2 | 13 | 4 | transformers | 0 | automatic-speech-recognition | true | false | false | apache-2.0 | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 2,083 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-xls-r-300m-th-v2
This model is a fine-tuned version of [Botnoi/wav2vec2-xls-r-300m-th-v1](https://huggingface.co/Botnoi/wav2vec2-xls-r-300m-th-v1) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3630
- Wer: 0.3962
- Cer: 0.0942
- Clean Cer: 0.0767
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 4.533e-08
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- training_steps: 9000
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer | Cer | Clean Cer |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:---------:|
| 0.3323 | 0.68 | 1000 | 0.3635 | 0.3961 | 0.0942 | 0.0767 |
| 0.3386 | 1.36 | 2000 | 0.3632 | 0.3962 | 0.0943 | 0.0768 |
| 0.3453 | 2.03 | 3000 | 0.3632 | 0.3964 | 0.0943 | 0.0768 |
| 0.3392 | 2.71 | 4000 | 0.3632 | 0.3961 | 0.0943 | 0.0767 |
| 0.3399 | 3.39 | 5000 | 0.3634 | 0.3961 | 0.0942 | 0.0768 |
| 0.347 | 4.07 | 6000 | 0.3632 | 0.3961 | 0.0942 | 0.0767 |
| 0.3414 | 4.74 | 7000 | 0.3631 | 0.3962 | 0.0942 | 0.0767 |
| 0.3378 | 5.42 | 8000 | 0.3631 | 0.3961 | 0.0942 | 0.0767 |
| 0.3421 | 6.1 | 9000 | 0.3630 | 0.3962 | 0.0942 | 0.0767 |
### Framework versions
- Transformers 4.26.0.dev0
- Pytorch 1.13.0+cu116
- Datasets 2.8.0
- Tokenizers 0.13.2
| c6d25cc00024fead3fef7226495103e7 |
studio-ousia/luke-japanese-base | studio-ousia | luke | 10 | 385 | transformers | 2 | fill-mask | true | false | false | apache-2.0 | ['ja'] | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['luke', 'named entity recognition', 'entity typing', 'relation classification', 'question answering'] | false | true | true | 2,393 | false |
## luke-japanese
**luke-japanese** is the Japanese version of **LUKE** (**L**anguage **U**nderstanding with **K**nowledge-based **E**mbeddings), a pre-trained _knowledge-enhanced_ contextualized representation of words and entities. LUKE treats words and entities in a given text as independent tokens, and outputs contextualized representations of them. Please refer to our [GitHub repository](https://github.com/studio-ousia/luke) for more details and updates.
This model contains Wikipedia entity embeddings which are not used in general NLP tasks. Please use the [lite version](https://huggingface.co/studio-ousia/luke-japanese-base-lite/) for tasks that do not use Wikipedia entities as inputs.
**luke-japanese**は、単語とエンティティの知識拡張型訓練済みTransformerモデル**LUKE**の日本語版です。LUKEは単語とエンティティを独立したトークンとして扱い、これらの文脈を考慮した表現を出力します。詳細については、[GitHub リポジトリ](https://github.com/studio-ousia/luke)を参照してください。
このモデルは、通常のNLPタスクでは使われないWikipediaエンティティのエンベディングを含んでいます。単語の入力のみを使うタスクには、[lite version](https://huggingface.co/studio-ousia/luke-japanese-base-lite/)を使用してください。
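As a rough usage sketch (not from the original card), the word-level encoder can be loaded with the Auto classes; `sentencepiece` must be installed, and the example sentence is arbitrary:

```python
from transformers import AutoTokenizer, AutoModel

# Minimal sketch: encode a Japanese sentence and inspect the contextualized
# word representations produced by LUKE.
name = "studio-ousia/luke-japanese-base"
tokenizer = AutoTokenizer.from_pretrained(name)
model = AutoModel.from_pretrained(name)

inputs = tokenizer("森鷗外は明治時代の小説家です。", return_tensors="pt")
outputs = model(**inputs)
print(outputs.last_hidden_state.shape)
```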
### Experimental results on JGLUE
The experimental results evaluated on the dev set of
[JGLUE](https://github.com/yahoojapan/JGLUE) are shown as follows:
| Model | MARC-ja | JSTS | JNLI | JCommonsenseQA |
| ---------------------- | --------- | ------------------- | --------- | -------------- |
| | acc | Pearson/Spearman | acc | acc |
| **LUKE Japanese base** | **0.965** | **0.916**/**0.877** | **0.912** | **0.842** |
| _Baselines:_ | |
| Tohoku BERT base | 0.958 | 0.909/0.868 | 0.899 | 0.808 |
| NICT BERT base | 0.958 | 0.910/0.871 | 0.902 | 0.823 |
| Waseda RoBERTa base | 0.962 | 0.913/0.873 | 0.895 | 0.840 |
| XLM RoBERTa base | 0.961 | 0.877/0.831 | 0.893 | 0.687 |
The baseline scores are obtained from
[here](https://github.com/yahoojapan/JGLUE/blob/a6832af23895d6faec8ecf39ec925f1a91601d62/README.md).
### Citation
```latex
@inproceedings{yamada2020luke,
title={LUKE: Deep Contextualized Entity Representations with Entity-aware Self-attention},
author={Ikuya Yamada and Akari Asai and Hiroyuki Shindo and Hideaki Takeda and Yuji Matsumoto},
booktitle={EMNLP},
year={2020}
}
```
| b116d79ff6b0688c8c84543053ff53ff |
evanz37/bert-finetuned-ard | evanz37 | bert | 8 | 3 | transformers | 0 | token-classification | false | true | false | apache-2.0 | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_keras_callback'] | true | true | true | 1,585 | false |
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# evanz37/bert-finetuned-ard
This model is a fine-tuned version of [evanz37/bert-finetuned-ner](https://huggingface.co/evanz37/bert-finetuned-ner) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.0722
- Validation Loss: 0.0861
- Epoch: 2
## Model description
More information needed
## Intended uses & limitations
More information needed
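As a rough usage sketch (not from the original card), the checkpoint can be run as a token-classification tagger; `framework="tf"` is used because the repository ships Keras/TensorFlow weights, and the entity labels and example sentence are assumptions:

```python
from transformers import pipeline

# Minimal sketch: tag tokens with the fine-tuned BERT checkpoint (TensorFlow weights).
ner = pipeline("token-classification", model="evanz37/bert-finetuned-ard", framework="tf")
print(ner("The patient developed a severe rash after taking amoxicillin."))
```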
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'inner_optimizer': {'class_name': 'AdamWeightDecay', 'config': {'name': 'AdamWeightDecay', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 669, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01}}, 'dynamic': True, 'initial_scale': 32768.0, 'dynamic_growth_steps': 2000}
- training_precision: mixed_float16
### Training results
| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 0.3408 | 0.1290 | 0 |
| 0.1065 | 0.0894 | 1 |
| 0.0722 | 0.0861 | 2 |
### Framework versions
- Transformers 4.18.0
- TensorFlow 2.8.0
- Datasets 2.1.0
- Tokenizers 0.12.1
| ca2893ebff1deca631c0c2fac66552f2 |
obi/deid_roberta_i2b2 | obi | roberta | 9 | 83,577 | transformers | 0 | token-classification | true | false | false | mit | ['en'] | ['I2B2'] | null | 0 | 0 | 0 | 0 | 1 | 0 | 1 | ['deidentification', 'medical notes', 'ehr', 'phi'] | false | true | true | 4,587 | false |
# Model Description
* A RoBERTa [[Liu et al., 2019]](https://arxiv.org/pdf/1907.11692.pdf) model fine-tuned for de-identification of medical notes.
* Sequence Labeling (token classification): The model was trained to predict protected health information (PHI/PII) entities (spans). A list of protected health information categories is given by [HIPAA](https://www.hhs.gov/hipaa/for-professionals/privacy/laws-regulations/index.html).
* A token can either be classified as non-PHI or as one of the 11 PHI types. Token predictions are aggregated to spans by making use of BILOU tagging.
* The PHI labels that were used for training and other details can be found here: [Annotation Guidelines](https://github.com/obi-ml-public/ehr_deidentification/blob/master/AnnotationGuidelines.md)
* More details on how to use this model, the format of data and other useful information is present in the GitHub repo: [Robust DeID](https://github.com/obi-ml-public/ehr_deidentification).
# How to use
* A demo on how the model works (using model predictions to de-identify a medical note) is on this space: [Medical-Note-Deidentification](https://huggingface.co/spaces/obi/Medical-Note-Deidentification).
* Steps on how this model can be used to run a forward pass can be found here: [Forward Pass](https://github.com/obi-ml-public/ehr_deidentification/tree/master/steps/forward_pass)
* In brief, the steps are:
* Sentencize (the model aggregates the sentences back to the note level) and tokenize the dataset.
* Use the predict function of this model to gather the predictions (i.e., predictions for each token).
* Additionally, the model predictions can be used to remove PHI from the original note/text.
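A minimal, simplified sketch of such a forward pass (using the plain `pipeline` API rather than the repository's own tooling, over an invented note) might look like this:

```python
from transformers import pipeline

# Minimal, simplified sketch: tag PHI tokens in a synthetic note.
# Production use should follow the sentencization/aggregation steps described above.
deid = pipeline("token-classification", model="obi/deid_roberta_i2b2")
note = "Patient John Smith, 54, was seen at General Hospital on 12/01/2020."
for pred in deid(note):
    print(pred["word"], pred["entity"])
```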
# Dataset
* The I2B2 2014 [[Stubbs and Uzuner, 2015]](https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4978170/) dataset was used to train this model.
| | I2B2 | | I2B2 | |
| --------- | --------------------- | ---------- | -------------------- | ---------- |
| | TRAIN SET - 790 NOTES | | TEST SET - 514 NOTES | |
| PHI LABEL | COUNT | PERCENTAGE | COUNT | PERCENTAGE |
| DATE | 7502 | 43.69 | 4980 | 44.14 |
| STAFF | 3149 | 18.34 | 2004 | 17.76 |
| HOSP | 1437 | 8.37 | 875 | 7.76 |
| AGE | 1233 | 7.18 | 764 | 6.77 |
| LOC | 1206 | 7.02 | 856 | 7.59 |
| PATIENT | 1316 | 7.66 | 879 | 7.79 |
| PHONE | 317 | 1.85 | 217 | 1.92 |
| ID | 881 | 5.13 | 625 | 5.54 |
| PATORG | 124 | 0.72 | 82 | 0.73 |
| EMAIL | 4 | 0.02 | 1 | 0.01 |
| OTHERPHI | 2 | 0.01 | 0 | 0 |
| TOTAL | 17171 | 100 | 11283 | 100 |
# Training procedure
* Steps on how this model was trained can be found here: [Training](https://github.com/obi-ml-public/ehr_deidentification/tree/master/steps/train). The "model_name_or_path" was set to: "roberta-large".
* The dataset was sentencized with the en_core_sci_sm sentencizer from spacy.
* The dataset was then tokenized with a custom tokenizer built on top of the en_core_sci_sm tokenizer from spacy.
* For each sentence we added 32 tokens on the left (from previous sentences) and 32 tokens on the right (from the next sentences).
* The added tokens are not used for learning - i.e, the loss is not computed on these tokens - they are used as additional context.
* Each sequence contained a maximum of 128 tokens (including the 32 tokens added on). Longer sequences were split.
* The sentencized and tokenized dataset with the token level labels based on the BILOU notation was used to train the model.
* The model is fine-tuned from a pre-trained RoBERTa model.
* Training details:
* Input sequence length: 128
* Batch size: 32 (16 with 2 gradient accumulation steps)
* Optimizer: AdamW
* Learning rate: 5e-5
* Dropout: 0.1
## Results
# Questions?
Post a Github issue on the repo: [Robust DeID](https://github.com/obi-ml-public/ehr_deidentification).
| 711f90ef71d82e93061709a110cb7088 |
Fireman4740/messi-ronaldo-v1-5 | Fireman4740 | null | 77 | 84 | diffusers | 1 | text-to-image | false | false | false | creativeml-openrail-m | null | null | null | 2 | 1 | 1 | 0 | 0 | 0 | 0 | ['text-to-image'] | false | true | true | 7,097 | false | ### Messi-Ronaldo-v1.5 Dreambooth model trained by Fireman4740 with [Hugging Face Dreambooth Training Space](https://huggingface.co/spaces/multimodalart/dreambooth-training) with the v1-5 base model
You can run your new concept via the `diffusers` [Colab Notebook for Inference](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_dreambooth_inference.ipynb). Don't forget to use the concept prompts!
Sample pictures of:
Messi (use that on your prompt)
Ronaldo (use that on your prompt)

| f7774a0c4dff37d308fe0a8ea5f4642a |
xaqren/sentiment_analysis | xaqren | distilbert | 9 | 7 | transformers | 1 | text-classification | true | false | false | apache-2.0 | ['en'] | ['Confidential'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['exbert'] | false | true | true | 2,135 | false | # BERT base model (uncased)
Pretrained model on English language using a masked language modeling (MLM) objective. It was introduced in
[this paper](https://arxiv.org/abs/1810.04805) and first released in
[this repository](https://github.com/google-research/bert). This model is uncased: it does not make a difference
between english and English.
## Model description
BERT is a transformers model pretrained on a large corpus of English data in a self-supervised fashion. This means it
was pretrained on the raw texts only, with no humans labelling them in any way (which is why it can use lots of
publicly available data) with an automatic process to generate inputs and labels from those texts. More precisely, it
was pretrained with two objectives:
- Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input, then runs
the entire masked sentence through the model and has to predict the masked words. This is different from traditional
recurrent neural networks (RNNs) that usually see the words one after the other, or from autoregressive models like
GPT which internally mask the future tokens. It allows the model to learn a bidirectional representation of the
sentence.
- Next sentence prediction (NSP): the model concatenates two masked sentences as inputs during pretraining. Sometimes
they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to
predict if the two sentences were following each other or not.
This way, the model learns an inner representation of the English language that can then be used to extract features
useful for downstream tasks: if you have a dataset of labeled sentences for instance, you can train a standard
classifier using the features produced by the BERT model as inputs.
## Model description [xaqren/sentiment_analysis]
This is a fine-tuned downstream version of the bert-base-uncased model for sentiment analysis; it is not intended for
further downstream fine-tuning for any other tasks. This model is trained on a classified dataset for text classification. | abc5f248e43390c18d6ccfe2a631b47d |
aXhyra/presentation_irony_42 | aXhyra | distilbert | 10 | 5 | transformers | 0 | text-classification | true | false | false | apache-2.0 | null | ['tweet_eval'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 1,395 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# presentation_irony_42
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the tweet_eval dataset.
It achieves the following results on the evaluation set:
- Loss: 0.9344
- F1: 0.6745
## Model description
More information needed
## Intended uses & limitations
More information needed
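As a rough usage sketch (not from the original card), the checkpoint can be used to score tweet-like text for irony; the example sentence is invented and the exact label strings depend on the saved config:

```python
from transformers import pipeline

# Minimal sketch: score a tweet-like sentence for irony.
# In tweet_eval's irony subtask the positive class is "irony".
detector = pipeline("text-classification", model="aXhyra/presentation_irony_42")
print(detector("Great, another Monday morning meeting. Just what I needed."))
```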
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5.1637764704815665e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.6675 | 1.0 | 90 | 0.5988 | 0.6684 |
| 0.5872 | 2.0 | 180 | 0.6039 | 0.6742 |
| 0.3953 | 3.0 | 270 | 0.8549 | 0.6557 |
| 0.0355 | 4.0 | 360 | 0.9344 | 0.6745 |
### Framework versions
- Transformers 4.12.5
- Pytorch 1.9.1
- Datasets 1.16.1
- Tokenizers 0.10.3
| b00d08791b69c61c962730bc7ed75f05 |
research-backup/t5-base-tweetqa-qag-np | research-backup | t5 | 13 | 1 | transformers | 0 | text2text-generation | true | false | false | cc-by-4.0 | ['en'] | ['lmqg/qag_tweetqa'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['questions and answers generation'] | true | true | true | 4,951 | false |
# Model Card of `research-backup/t5-base-tweetqa-qag-np`
This model is a fine-tuned version of [t5-base](https://huggingface.co/t5-base) for the question & answer pair generation task on the [lmqg/qag_tweetqa](https://huggingface.co/datasets/lmqg/qag_tweetqa) (dataset_name: default) via [`lmqg`](https://github.com/asahi417/lm-question-generation).
This model is fine-tuned without a task prefix.
### Overview
- **Language model:** [t5-base](https://huggingface.co/t5-base)
- **Language:** en
- **Training data:** [lmqg/qag_tweetqa](https://huggingface.co/datasets/lmqg/qag_tweetqa) (default)
- **Online Demo:** [https://autoqg.net/](https://autoqg.net/)
- **Repository:** [https://github.com/asahi417/lm-question-generation](https://github.com/asahi417/lm-question-generation)
- **Paper:** [https://arxiv.org/abs/2210.03992](https://arxiv.org/abs/2210.03992)
### Usage
- With [`lmqg`](https://github.com/asahi417/lm-question-generation#lmqg-language-model-for-question-generation-)
```python
from lmqg import TransformersQG
# initialize model
model = TransformersQG(language="en", model="research-backup/t5-base-tweetqa-qag-np")
# model prediction
question_answer_pairs = model.generate_qa("William Turner was an English painter who specialised in watercolour landscapes")
```
- With `transformers`
```python
from transformers import pipeline
pipe = pipeline("text2text-generation", "research-backup/t5-base-tweetqa-qag-np")
output = pipe("Beyonce further expanded her acting career, starring as blues singer Etta James in the 2008 musical biopic, Cadillac Records.")
```
## Evaluation
- ***Metric (Question & Answer Generation)***: [raw metric file](https://huggingface.co/research-backup/t5-base-tweetqa-qag-np/raw/main/eval/metric.first.answer.paragraph.questions_answers.lmqg_qag_tweetqa.default.json)
| | Score | Type | Dataset |
|:--------------------------------|--------:|:--------|:---------------------------------------------------------------------|
| BERTScore | 90.8 | default | [lmqg/qag_tweetqa](https://huggingface.co/datasets/lmqg/qag_tweetqa) |
| Bleu_1 | 40.49 | default | [lmqg/qag_tweetqa](https://huggingface.co/datasets/lmqg/qag_tweetqa) |
| Bleu_2 | 27.77 | default | [lmqg/qag_tweetqa](https://huggingface.co/datasets/lmqg/qag_tweetqa) |
| Bleu_3 | 19.18 | default | [lmqg/qag_tweetqa](https://huggingface.co/datasets/lmqg/qag_tweetqa) |
| Bleu_4 | 13.4 | default | [lmqg/qag_tweetqa](https://huggingface.co/datasets/lmqg/qag_tweetqa) |
| METEOR | 31.14 | default | [lmqg/qag_tweetqa](https://huggingface.co/datasets/lmqg/qag_tweetqa) |
| MoverScore | 62.26 | default | [lmqg/qag_tweetqa](https://huggingface.co/datasets/lmqg/qag_tweetqa) |
| QAAlignedF1Score (BERTScore) | 92.4 | default | [lmqg/qag_tweetqa](https://huggingface.co/datasets/lmqg/qag_tweetqa) |
| QAAlignedF1Score (MoverScore) | 64.83 | default | [lmqg/qag_tweetqa](https://huggingface.co/datasets/lmqg/qag_tweetqa) |
| QAAlignedPrecision (BERTScore) | 92.78 | default | [lmqg/qag_tweetqa](https://huggingface.co/datasets/lmqg/qag_tweetqa) |
| QAAlignedPrecision (MoverScore) | 65.68 | default | [lmqg/qag_tweetqa](https://huggingface.co/datasets/lmqg/qag_tweetqa) |
| QAAlignedRecall (BERTScore) | 92.03 | default | [lmqg/qag_tweetqa](https://huggingface.co/datasets/lmqg/qag_tweetqa) |
| QAAlignedRecall (MoverScore) | 64.07 | default | [lmqg/qag_tweetqa](https://huggingface.co/datasets/lmqg/qag_tweetqa) |
| ROUGE_L | 37.23 | default | [lmqg/qag_tweetqa](https://huggingface.co/datasets/lmqg/qag_tweetqa) |
## Training hyperparameters
The following hyperparameters were used during fine-tuning:
- dataset_path: lmqg/qag_tweetqa
- dataset_name: default
- input_types: ['paragraph']
- output_types: ['questions_answers']
- prefix_types: None
- model: t5-base
- max_length: 256
- max_length_output: 128
- epoch: 15
- batch: 32
- lr: 0.0001
- fp16: False
- random_seed: 1
- gradient_accumulation_steps: 2
- label_smoothing: 0.0
The full configuration can be found at [fine-tuning config file](https://huggingface.co/research-backup/t5-base-tweetqa-qag-np/raw/main/trainer_config.json).
## Citation
```
@inproceedings{ushio-etal-2022-generative,
title = "{G}enerative {L}anguage {M}odels for {P}aragraph-{L}evel {Q}uestion {G}eneration",
author = "Ushio, Asahi and
Alva-Manchego, Fernando and
Camacho-Collados, Jose",
booktitle = "Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing",
month = dec,
year = "2022",
address = "Abu Dhabi, U.A.E.",
publisher = "Association for Computational Linguistics",
}
```
| e98e7f655e173896760fe154e2a85d87 |
tclong/wav2vec2-base-vios-commonvoice-1 | tclong | wav2vec2 | 15 | 5 | transformers | 0 | automatic-speech-recognition | true | false | false | apache-2.0 | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 4,590 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-base-vios-commonvoice-1
This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8913
- Wer: 0.3621
## Model description
More information needed
## Intended uses & limitations
More information needed
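As a rough usage sketch (not from the original card), the checkpoint can be decoded with the standard wav2vec2 CTC classes, assuming the repository ships the matching processor/vocabulary; the audio path is a placeholder and 16 kHz mono audio is assumed:

```python
import torch
import soundfile as sf
from transformers import Wav2Vec2Processor, Wav2Vec2ForCTC

# Minimal sketch: greedy CTC decoding of a Vietnamese recording.
name = "tclong/wav2vec2-base-vios-commonvoice-1"
processor = Wav2Vec2Processor.from_pretrained(name)
model = Wav2Vec2ForCTC.from_pretrained(name)

speech, sample_rate = sf.read("sample.wav")  # placeholder path
inputs = processor(speech, sampling_rate=sample_rate, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
print(processor.batch_decode(torch.argmax(logits, dim=-1))[0])
```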
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 30
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:-----:|:---------------:|:------:|
| 3.4706 | 0.55 | 500 | 3.4725 | 1.0 |
| 3.202 | 1.1 | 1000 | 2.7555 | 1.0008 |
| 1.0507 | 1.66 | 1500 | 1.0481 | 0.6196 |
| 0.7325 | 2.21 | 2000 | 0.8120 | 0.4958 |
| 0.599 | 2.76 | 2500 | 0.7035 | 0.4447 |
| 0.5224 | 3.31 | 3000 | 0.6761 | 0.4078 |
| 0.4844 | 3.86 | 3500 | 0.6688 | 0.4011 |
| 0.4234 | 4.42 | 4000 | 0.6080 | 0.3729 |
| 0.4237 | 4.97 | 4500 | 0.5953 | 0.3556 |
| 0.3986 | 5.52 | 5000 | 0.6054 | 0.3478 |
| 0.3554 | 6.07 | 5500 | 0.6193 | 0.3479 |
| 0.3446 | 6.62 | 6000 | 0.5809 | 0.3302 |
| 0.3104 | 7.17 | 6500 | 0.5713 | 0.3283 |
| 0.3166 | 7.73 | 7000 | 0.5593 | 0.3133 |
| 0.2938 | 8.28 | 7500 | 0.5645 | 0.3081 |
| 0.3061 | 8.83 | 8000 | 0.5508 | 0.3020 |
| 0.2986 | 9.38 | 8500 | 0.5462 | 0.3024 |
| 0.2939 | 9.93 | 9000 | 0.5544 | 0.3028 |
| 0.2633 | 10.49 | 9500 | 0.5496 | 0.3024 |
| 0.2683 | 11.04 | 10000 | 0.5439 | 0.2946 |
| 0.2714 | 11.59 | 10500 | 0.5524 | 0.2947 |
| 0.2354 | 12.14 | 11000 | 0.5267 | 0.2918 |
| 0.2488 | 12.69 | 11500 | 0.5728 | 0.2938 |
| 0.2479 | 13.25 | 12000 | 0.5802 | 0.2951 |
| 0.245 | 13.8 | 12500 | 0.5571 | 0.2890 |
| 0.2422 | 14.35 | 13000 | 0.5531 | 0.2871 |
| 0.2369 | 14.9 | 13500 | 0.5453 | 0.2860 |
| 0.2345 | 15.45 | 14000 | 0.5452 | 0.2847 |
| 0.2507 | 16.0 | 14500 | 0.5536 | 0.2884 |
| 0.2454 | 16.56 | 15000 | 0.5577 | 0.2871 |
| 0.2729 | 17.11 | 15500 | 0.6019 | 0.2931 |
| 0.2743 | 17.66 | 16000 | 0.5619 | 0.2905 |
| 0.3031 | 18.21 | 16500 | 0.6401 | 0.3006 |
| 0.315 | 18.76 | 17000 | 0.6044 | 0.2990 |
| 0.4025 | 19.32 | 17500 | 0.6739 | 0.3304 |
| 0.4915 | 19.87 | 18000 | 0.7267 | 0.3472 |
| 0.5539 | 20.42 | 18500 | 0.8078 | 0.3483 |
| 0.7138 | 20.97 | 19000 | 0.9362 | 0.3765 |
| 0.5766 | 21.52 | 19500 | 0.7921 | 0.3392 |
| 0.688 | 22.08 | 20000 | 0.8833 | 0.3693 |
| 0.6964 | 22.63 | 20500 | 0.9137 | 0.3469 |
| 0.7389 | 23.18 | 21000 | 0.9379 | 0.3460 |
| 0.7851 | 23.73 | 21500 | 1.0438 | 0.3653 |
| 0.7619 | 24.28 | 22000 | 0.9313 | 0.3873 |
| 0.7175 | 24.83 | 22500 | 0.8668 | 0.3789 |
| 0.6842 | 25.39 | 23000 | 0.8243 | 0.3761 |
| 0.6941 | 25.94 | 23500 | 0.8557 | 0.3804 |
| 0.7167 | 26.49 | 24000 | 0.8618 | 0.3875 |
| 0.721 | 27.04 | 24500 | 0.8686 | 0.3764 |
| 0.6949 | 27.59 | 25000 | 0.8773 | 0.3690 |
| 0.727 | 28.15 | 25500 | 0.8769 | 0.3666 |
| 0.7363 | 28.7 | 26000 | 0.8867 | 0.3634 |
| 0.7157 | 29.25 | 26500 | 0.8895 | 0.3626 |
| 0.7385 | 29.8 | 27000 | 0.8913 | 0.3621 |
### Framework versions
- Transformers 4.19.3
- Pytorch 1.11.0+cu113
- Datasets 2.2.2
- Tokenizers 0.12.1
| 19ec92ee3e79ea2ecf3e802fe80f7f12 |
RUCAIBox/mtl-task-dialog | RUCAIBox | mvp | 9 | 1 | transformers | 1 | text2text-generation | true | false | false | apache-2.0 | ['en'] | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['text-generation', 'text2text-generation'] | false | true | true | 3,639 | false |
# MTL-task-dialog
The MTL-task-dialog model was proposed in [**MVP: Multi-task Supervised Pre-training for Natural Language Generation**](https://arxiv.org/abs/2206.12131) by Tianyi Tang, Junyi Li, Wayne Xin Zhao and Ji-Rong Wen.
The detailed information and instructions can be found [https://github.com/RUCAIBox/MVP](https://github.com/RUCAIBox/MVP).
## Model Description
MTL-task-dialog is supervised pre-trained using a mixture of labeled task-oriented system datasets. It is a variant (Single) of our main [MVP](https://huggingface.co/RUCAIBox/mvp) model. It follows a standard Transformer encoder-decoder architecture.
MTL-task-dialog is specially designed for task-oriented system tasks, such as MultiWOZ.
## Example
```python
>>> from transformers import MvpTokenizer, MvpForConditionalGeneration
>>> tokenizer = MvpTokenizer.from_pretrained("RUCAIBox/mvp")
>>> model = MvpForConditionalGeneration.from_pretrained("RUCAIBox/mtl-task-dialog")
>>> inputs = tokenizer(
... "Given the task dialog: System response [X_SEP] I'm looking for a affordable BBQ restaurant in Dallas for a large group of guest.",
... return_tensors="pt",
... )
>>> generated_ids = model.generate(**inputs)
>>> tokenizer.batch_decode(generated_ids, skip_special_tokens=True)
['What date and time would you like to go?']
```
## Related Models
**MVP**: [https://huggingface.co/RUCAIBox/mvp](https://huggingface.co/RUCAIBox/mvp).
**Prompt-based models**:
- MVP-multi-task: [https://huggingface.co/RUCAIBox/mvp-multi-task](https://huggingface.co/RUCAIBox/mvp-multi-task).
- MVP-summarization: [https://huggingface.co/RUCAIBox/mvp-summarization](https://huggingface.co/RUCAIBox/mvp-summarization).
- MVP-open-dialog: [https://huggingface.co/RUCAIBox/mvp-open-dialog](https://huggingface.co/RUCAIBox/mvp-open-dialog).
- MVP-data-to-text: [https://huggingface.co/RUCAIBox/mvp-data-to-text](https://huggingface.co/RUCAIBox/mvp-data-to-text).
- MVP-story: [https://huggingface.co/RUCAIBox/mvp-story](https://huggingface.co/RUCAIBox/mvp-story).
- MVP-question-answering: [https://huggingface.co/RUCAIBox/mvp-question-answering](https://huggingface.co/RUCAIBox/mvp-question-answering).
- MVP-question-generation: [https://huggingface.co/RUCAIBox/mvp-question-generation](https://huggingface.co/RUCAIBox/mvp-question-generation).
- MVP-task-dialog: [https://huggingface.co/RUCAIBox/mvp-task-dialog](https://huggingface.co/RUCAIBox/mvp-task-dialog).
**Multi-task models**:
- MTL-summarization: [https://huggingface.co/RUCAIBox/mtl-summarization](https://huggingface.co/RUCAIBox/mtl-summarization).
- MTL-open-dialog: [https://huggingface.co/RUCAIBox/mtl-open-dialog](https://huggingface.co/RUCAIBox/mtl-open-dialog).
- MTL-data-to-text: [https://huggingface.co/RUCAIBox/mtl-data-to-text](https://huggingface.co/RUCAIBox/mtl-data-to-text).
- MTL-story: [https://huggingface.co/RUCAIBox/mtl-story](https://huggingface.co/RUCAIBox/mtl-story).
- MTL-question-answering: [https://huggingface.co/RUCAIBox/mtl-question-answering](https://huggingface.co/RUCAIBox/mtl-question-answering).
- MTL-question-generation: [https://huggingface.co/RUCAIBox/mtl-question-generation](https://huggingface.co/RUCAIBox/mtl-question-generation).
- MTL-task-dialog: [https://huggingface.co/RUCAIBox/mtl-task-dialog](https://huggingface.co/RUCAIBox/mtl-task-dialog).
## Citation
```bibtex
@article{tang2022mvp,
title={MVP: Multi-task Supervised Pre-training for Natural Language Generation},
author={Tang, Tianyi and Li, Junyi and Zhao, Wayne Xin and Wen, Ji-Rong},
journal={arXiv preprint arXiv:2206.12131},
year={2022},
url={https://arxiv.org/abs/2206.12131},
}
```
| 429124e92fc2290d90a19eec0e98a36b |
venkateshdas/electra-base-squad2-ta-qna-electra | venkateshdas | electra | 16 | 4 | transformers | 0 | question-answering | true | false | false | cc-by-4.0 | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 1,266 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# electra-base-squad2-ta-qna-electra
This model is a fine-tuned version of [deepset/electra-base-squad2](https://huggingface.co/deepset/electra-base-squad2) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1644
## Model description
More information needed
## Intended uses & limitations
More information needed
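As a rough usage sketch (not from the original card), the checkpoint can be used through the question-answering `pipeline`; the question/context pair is invented for illustration:

```python
from transformers import pipeline

# Minimal sketch: extractive question answering with the fine-tuned ELECTRA checkpoint.
qa = pipeline("question-answering", model="venkateshdas/electra-base-squad2-ta-qna-electra")
result = qa(
    question="What does the model extract?",
    context="The model reads a passage and extracts the span of text that answers a question.",
)
print(result["answer"], result["score"])
```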
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 2
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log | 1.0 | 44 | 0.2352 |
| No log | 2.0 | 88 | 0.1644 |
### Framework versions
- Transformers 4.22.1
- Pytorch 1.12.1+cu113
- Datasets 2.4.0
- Tokenizers 0.12.1
| a074bceca6366e95cfa9683f4c4ff5c7 |
EnsorcelledEther/Grief-Seed | EnsorcelledEther | null | 49 | 0 | null | 0 | null | false | false | false | mit | null | null | null | 1 | 0 | 1 | 0 | 0 | 0 | 0 | [] | false | true | true | 660 | false | ### Grief Seed on Stable Diffusion
This is the `grief seed` concept taught to Stable Diffusion via Textual Inversion. You can load this concept into the [Stable Conceptualizer](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/stable_conceptualizer_inference.ipynb) notebook. You can also train your own concepts and load them into the concept libraries using [this notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_textual_inversion_training.ipynb).
I guess because they are PNGs you can't see them? Idk, I'll fix it later. They look like grief seeds from Puella Magi Madoka Magica. | 0a3e0a043e4b558a2b7863657a56703f |
laituan245/molt5-large-caption2smiles | laituan245 | t5 | 8 | 19 | transformers | 0 | text2text-generation | true | false | false | apache-2.0 | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | [] | false | true | true | 1,024 | false |
This model can be used to generate a SMILES string from an input caption.
## Example Usage
```python
from transformers import T5Tokenizer, T5ForConditionalGeneration
tokenizer = T5Tokenizer.from_pretrained("laituan245/molt5-large-caption2smiles", model_max_length=512)
model = T5ForConditionalGeneration.from_pretrained('laituan245/molt5-large-caption2smiles')
input_text = 'The molecule is a monomethoxybenzene that is 2-methoxyphenol substituted by a hydroxymethyl group at position 4. It has a role as a plant metabolite. It is a member of guaiacols and a member of benzyl alcohols.'
input_ids = tokenizer(input_text, return_tensors="pt").input_ids
outputs = model.generate(input_ids, num_beams=5, max_length=512)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
## Paper
For more information, please take a look at our paper.
Paper: [Translation between Molecules and Natural Language](https://arxiv.org/abs/2204.11817)
Authors: *Carl Edwards\*, Tuan Lai\*, Kevin Ros, Garrett Honke, Heng Ji*
| 74ed75b25d9dacaa49ebe73a8afced44 |
stevems1/distilroberta-base-SmithsModel | stevems1 | roberta | 15 | 5 | transformers | 0 | text-generation | true | false | false | apache-2.0 | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 1,259 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilroberta-base-SmithsModel
This model is a fine-tuned version of [distilroberta-base](https://huggingface.co/distilroberta-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 2.3070
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 4.6589 | 1.0 | 830 | 2.8652 |
| 2.8362 | 2.0 | 1660 | 2.4309 |
| 2.6291 | 3.0 | 2490 | 2.2826 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.10.0+cu111
- Datasets 2.1.0
- Tokenizers 0.12.1
| 52f317efbe2f38342eb590fab3221310 |
adrian78/ddpm-butterflies-128 | adrian78 | null | 13 | 0 | diffusers | 0 | null | false | false | false | apache-2.0 | ['en'] | ['huggan/smithsonian_butterflies_subset'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | [] | false | true | true | 1,230 | false |
<!-- This model card has been generated automatically according to the information the training script had access to. You
should probably proofread and complete it, then remove this comment. -->
# ddpm-butterflies-128
## Model description
This diffusion model is trained with the [🤗 Diffusers](https://github.com/huggingface/diffusers) library
on the `huggan/smithsonian_butterflies_subset` dataset.
## Intended uses & limitations
#### How to use
```python
# TODO: add an example code snippet for running this diffusion pipeline
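# A minimal sketch of typical usage for an unconditional DDPM checkpoint like this one;
# it is illustrative only and has not been verified against this exact repository.
from diffusers import DDPMPipeline

pipeline = DDPMPipeline.from_pretrained("adrian78/ddpm-butterflies-128")
image = pipeline().images[0]  # runs the full denoising loop; slow on CPU
image.save("butterfly_sample.png")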
```
#### Limitations and bias
[TODO: provide examples of latent issues and potential remediations]
## Training data
[TODO: describe the data used to train the model]
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 16
- eval_batch_size: 16
- gradient_accumulation_steps: 1
- optimizer: AdamW with betas=(None, None), weight_decay=None and epsilon=None
- lr_scheduler: None
- lr_warmup_steps: 500
- ema_inv_gamma: None
- mixed_precision: fp16
### Training results
📈 [TensorBoard logs](https://huggingface.co/adrian78/ddpm-butterflies-128/tensorboard?#scalars)
| 896868fa96c02ee2cfcbacfee916367a |
hsohn3/mayo-timebert-visit-uncased-wordlevel-block512-batch4-ep100 | hsohn3 | bert | 8 | 4 | transformers | 0 | fill-mask | false | true | false | apache-2.0 | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_keras_callback'] | true | true | true | 3,416 | false |
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# hsohn3/mayo-timebert-visit-uncased-wordlevel-block512-batch4-ep100
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.8536
- Epoch: 99
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': 2e-05, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: float32
### Training results
| Train Loss | Epoch |
|:----------:|:-----:|
| 3.9508 | 0 |
| 3.4063 | 1 |
| 3.3682 | 2 |
| 3.3468 | 3 |
| 3.3330 | 4 |
| 3.3308 | 5 |
| 3.3225 | 6 |
| 3.3106 | 7 |
| 3.2518 | 8 |
| 3.1859 | 9 |
| 3.1373 | 10 |
| 3.0923 | 11 |
| 3.0390 | 12 |
| 2.9560 | 13 |
| 2.8605 | 14 |
| 2.7564 | 15 |
| 2.4969 | 16 |
| 2.2044 | 17 |
| 1.9566 | 18 |
| 1.7686 | 19 |
| 1.5995 | 20 |
| 1.4932 | 21 |
| 1.4100 | 22 |
| 1.3538 | 23 |
| 1.2973 | 24 |
| 1.2610 | 25 |
| 1.2160 | 26 |
| 1.1916 | 27 |
| 1.1607 | 28 |
| 1.1468 | 29 |
| 1.1262 | 30 |
| 1.1123 | 31 |
| 1.0942 | 32 |
| 1.0816 | 33 |
| 1.0717 | 34 |
| 1.0575 | 35 |
| 1.0503 | 36 |
| 1.0411 | 37 |
| 1.0293 | 38 |
| 1.0229 | 39 |
| 1.0139 | 40 |
| 1.0081 | 41 |
| 1.0028 | 42 |
| 0.9967 | 43 |
| 0.9906 | 44 |
| 0.9834 | 45 |
| 0.9782 | 46 |
| 0.9766 | 47 |
| 0.9676 | 48 |
| 0.9618 | 49 |
| 0.9611 | 50 |
| 0.9553 | 51 |
| 0.9504 | 52 |
| 0.9483 | 53 |
| 0.9404 | 54 |
| 0.9423 | 55 |
| 0.9361 | 56 |
| 0.9327 | 57 |
| 0.9327 | 58 |
| 0.9263 | 59 |
| 0.9275 | 60 |
| 0.9218 | 61 |
| 0.9202 | 62 |
| 0.9158 | 63 |
| 0.9152 | 64 |
| 0.9091 | 65 |
| 0.9104 | 66 |
| 0.9094 | 67 |
| 0.9087 | 68 |
| 0.9034 | 69 |
| 0.9063 | 70 |
| 0.8984 | 71 |
| 0.8966 | 72 |
| 0.8953 | 73 |
| 0.8910 | 74 |
| 0.8913 | 75 |
| 0.8887 | 76 |
| 0.8868 | 77 |
| 0.8868 | 78 |
| 0.8815 | 79 |
| 0.8821 | 80 |
| 0.8791 | 81 |
| 0.8752 | 82 |
| 0.8731 | 83 |
| 0.8779 | 84 |
| 0.8727 | 85 |
| 0.8702 | 86 |
| 0.8712 | 87 |
| 0.8689 | 88 |
| 0.8646 | 89 |
| 0.8644 | 90 |
| 0.8608 | 91 |
| 0.8643 | 92 |
| 0.8602 | 93 |
| 0.8605 | 94 |
| 0.8568 | 95 |
| 0.8567 | 96 |
| 0.8557 | 97 |
| 0.8543 | 98 |
| 0.8536 | 99 |
### Framework versions
- Transformers 4.20.1
- TensorFlow 2.8.2
- Datasets 2.3.2
- Tokenizers 0.12.1
| 6eebf442d7f777ea24a965bf25ccaa33 |
henryscheible/mnli_bert-base-uncased_144 | henryscheible | null | 13 | 0 | null | 0 | null | true | false | false | apache-2.0 | ['en'] | ['glue'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 1,018 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mnli_bert-base-uncased_144
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the GLUE MNLI dataset.
It achieves the following results on the evaluation set:
- Loss: 0.4509
- Accuracy: 0.8422
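An MNLI-style checkpoint like this one is queried with a premise/hypothesis pair. The snippet below is a rough sketch rather than an official example; the label names are read from the model's own `config.id2label`, so no particular label order is assumed.
```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_id = "henryscheible/mnli_bert-base-uncased_144"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)

premise = "A soccer game with multiple males playing."
hypothesis = "Some men are playing a sport."
inputs = tokenizer(premise, hypothesis, return_tensors="pt", truncation=True)

with torch.no_grad():
    logits = model(**inputs).logits
print(model.config.id2label[logits.argmax(dim=-1).item()])
```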
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 400
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5.0
### Training results
### Framework versions
- Transformers 4.23.1
- Pytorch 1.12.1
- Datasets 2.6.1
- Tokenizers 0.13.1
| 5ce3162895aec6e031bdc29af9c46e05 |
ScottaStrong/DialogGPT-small-joshua | ScottaStrong | gpt2 | 10 | 7 | transformers | 0 | conversational | true | false | false | mit | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['conversational'] | false | true | true | 1,735 | false | # DialoGPT Trained on the Speech of a Game Character
This is an instance of [microsoft/DialoGPT-medium](https://huggingface.co/microsoft/DialoGPT-medium) trained on a game character, Joshua from [The World Ends With You](https://en.wikipedia.org/wiki/The_World_Ends_with_You). The data comes from [a Kaggle game script dataset](https://www.kaggle.com/ruolinzheng/twewy-game-script).
I built a Discord AI chatbot based on this model. [Check out my GitHub repo.](https://github.com/RuolinZheng08/twewy-discord-chatbot)
Chat with the model:
```python
import torch
from transformers import AutoTokenizer, AutoModelWithLMHead
tokenizer = AutoTokenizer.from_pretrained("scottastrong/DialogGPT-small-joshua")
model = AutoModelWithLMHead.from_pretrained("scottastrong/DialogGPT-small-joshua")
# Let's chat for 4 lines
for step in range(4):
# encode the new user input, add the eos_token and return a tensor in Pytorch
new_user_input_ids = tokenizer.encode(input(">> User:") + tokenizer.eos_token, return_tensors='pt')
# print(new_user_input_ids)
# append the new user input tokens to the chat history
bot_input_ids = torch.cat([chat_history_ids, new_user_input_ids], dim=-1) if step > 0 else new_user_input_ids
# generate a response while limiting the total chat history to 200 tokens
chat_history_ids = model.generate(
bot_input_ids, max_length=200,
pad_token_id=tokenizer.eos_token_id,
no_repeat_ngram_size=3,
do_sample=True,
top_k=100,
top_p=0.7,
temperature=0.8
)
# pretty print last output tokens from bot
print("JoshuaBot: {}".format(tokenizer.decode(chat_history_ids[:, bot_input_ids.shape[-1]:][0], skip_special_tokens=True)))
``` | 69f7aacfbf36b13e3be635c232127065 |
Jungwoo4021/wav2vec2-base-ks-padpt200 | Jungwoo4021 | wav2vec2 | 14 | 1 | transformers | 0 | text-classification | true | false | false | apache-2.0 | null | ['superb'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['audio-classification', 'generated_from_trainer'] | true | true | true | 1,917 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-base-ks-padpt200
This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the superb dataset.
It achieves the following results on the evaluation set:
- Loss: 1.6540
- Accuracy: 0.6037
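Because the checkpoint was fine-tuned for keyword spotting (SUPERB KS), it should be usable through the `audio-classification` pipeline. This is only a sketch; the file name below is a placeholder for any 16 kHz mono recording of a spoken command.
```python
from transformers import pipeline

classifier = pipeline("audio-classification", model="Jungwoo4021/wav2vec2-base-ks-padpt200")
# "command.wav" is a hypothetical ~1 s, 16 kHz mono clip of a spoken keyword
print(classifier("command.wav", top_k=3))
```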
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.003
- train_batch_size: 256
- eval_batch_size: 256
- seed: 0
- gradient_accumulation_steps: 4
- total_train_batch_size: 1024
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 10.0
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 1.2728 | 1.0 | 50 | 1.6540 | 0.6037 |
| 0.8498 | 2.0 | 100 | 1.2559 | 0.6015 |
| 0.7563 | 3.0 | 150 | 1.4192 | 0.5035 |
| 0.701 | 4.0 | 200 | 1.3318 | 0.5641 |
| 0.6592 | 5.0 | 250 | 1.3236 | 0.5666 |
| 0.6404 | 6.0 | 300 | 1.3653 | 0.5469 |
| 0.6315 | 7.0 | 350 | 1.4052 | 0.5082 |
| 0.6306 | 8.0 | 400 | 1.2818 | 0.5590 |
| 0.6297 | 9.0 | 450 | 1.3096 | 0.5659 |
| 0.6056 | 10.0 | 500 | 1.3595 | 0.5368 |
### Framework versions
- Transformers 4.22.0.dev0
- Pytorch 1.11.0+cu115
- Datasets 2.4.0
- Tokenizers 0.12.1
| 795a9894cb6c4d00dfecb65e603fefd2 |
raduion/bert-medium-luxembourgish | raduion | bert | 7 | 5 | transformers | 1 | fill-mask | false | true | false | mit | ['lu'] | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['text', 'MLM'] | false | true | true | 414 | false |
## BERT Medium for Luxembourgish
Created from a dataset of 1M Luxembourgish sentences from Wikipedia; the corpus has approx. 16M words.
The model was trained with the masked language modelling (MLM) objective. The BERT architecture has `L=8` layers and hidden size `H=512`, and the vocabulary has 70K word pieces.
Final loss scores, after 3 epochs:
- Final train loss: 4.230
- Final train perplexity: 68.726
- Final validation loss: 4.074
- Final validation perplexity: 58.765
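A minimal fill-mask sketch (illustrative only): the repository ships TensorFlow weights, so the TF framework is requested explicitly, and the example sentence plus the standard `[MASK]` token are assumptions.
```python
from transformers import pipeline

fill = pipeline("fill-mask", model="raduion/bert-medium-luxembourgish", framework="tf")
print(fill("Lëtzebuerg ass e klengt [MASK] an Europa."))
```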
| aae780411a0a27c112479c56b6388ce1 |
groadabike/ConvTasNet_DAMPVSEP_EnglishNonEnglish_baseline | groadabike | null | 3 | 5 | asteroid | 1 | audio-to-audio | true | false | false | cc-by-sa-4.0 | null | ['DAMP-VSEP', 'Singing/Accompaniment Separation'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['asteroid', 'audio', 'ConvTasNet', 'audio-to-audio'] | false | true | true | 2,412 | false |
## Description:
This model was trained by Gerardo Roa using the dampvsep recipe in Asteroid.
It was trained on the `singing/accompaniment` task of the `DAMP-VSEP` dataset.
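The sketch below shows one way to run the separator through Asteroid's Hub integration. It is an untested outline: the mixture file is hypothetical, and the order of the two estimated sources (vocals vs. accompaniment) should be checked by listening.
```python
import soundfile as sf
import torch
from asteroid.models import BaseModel

model = BaseModel.from_pretrained("groadabike/ConvTasNet_DAMPVSEP_EnglishNonEnglish_baseline")

# "mixture_16k.wav" is a hypothetical 16 kHz mono mix of singing + accompaniment
mixture, sr = sf.read("mixture_16k.wav", dtype="float32")
with torch.no_grad():
    est_sources = model(torch.from_numpy(mixture).unsqueeze(0))  # (batch, n_src, time)

sf.write("source1_est.wav", est_sources[0, 0].numpy(), sr)
sf.write("source2_est.wav", est_sources[0, 1].numpy(), sr)
```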
## Training config:
```yaml
data:
channels: 1
emb_model: 'no'
metadata_path: metadata
mixture: remix
root_path: /fastdata/acp13gr/DAMP/DAMP-VSEP
sample_rate: 16000
train_set: english_nonenglish
filterbank:
kernel_size: 20
n_filters: 256
stride: 10
main_args:
exp_dir: exp/train_convtasnet_remix-no-0.0-english_nonenglish-0.0005-jade
help: null
masknet:
bn_chan: 256
conv_kernel_size: 3
hid_chan: 512
mask_act: relu
n_blocks: 10
n_repeats: 4
n_src: 2
norm_type: gLN
skip_chan: 256
optim:
lr: 0.0005
optimizer: adam
weight_decay: 0.0
positional arguments: {}
training:
batch_size: 7
early_stop: true
epochs: 50
half_lr: true
loss_alpha: 0.0
num_workers: 10
```
## Results:
```yaml
"si_sdr": 15.111802516750586,
"si_sdr_imp": 15.178209807687663,
"si_sdr_s0": 12.160261214703553,
"si_sdr_s0_imp": 17.434593619085675,
"si_sdr_s1": 18.063343818797623,
"si_sdr_s1_imp": 12.92182599628965,
"sdr": 15.959722569460281,
"sdr_imp": 14.927002467087567,
"sdr_s0": 13.270412028426595,
"sdr_s0_imp": 16.45867572657551,
"sdr_s1": 18.64903311049397,
"sdr_s1_imp": 13.39532920759962,
"sir": 23.935932341084754,
"sir_imp": 22.903212238712012,
"sir_s0": 22.30777879911744,
"sir_s0_imp": 25.49604249726635,
"sir_s1": 25.56408588305207,
"sir_s1_imp": 20.310381980157665,
"sar": 17.174899162445882,
"sar_imp": -134.47377304178818,
"sar_s0": 14.268071153965913,
"sar_s0_imp": -137.38060105026818,
"sar_s1": 20.081727170925856,
"sar_s1_imp": -131.56694503330817,
"stoi": 0.7746496376326059,
"stoi_imp": 0.19613735629114643,
"stoi_s0": 0.6611376621212413,
"stoi_s0_imp": 0.21162695175464794,
"stoi_s1": 0.8881616131439705,
"stoi_s1_imp": 0.1806477608276449
```
## License notice:
This work "ConvTasNet_DAMPVSEP_EnglishNonEnglish_baseline"
is a derivative of [DAMP-VSEP corpus](https://zenodo.org/record/3553059) by
[Smule, Inc](https://www.smule.com/),
used under [Restricted License](https://zenodo.org/record/3553059)(Research only).
"ConvTasNet_DAMPVSEP_EnglishNonEnglish_baseline"
is licensed under [Attribution-ShareAlike 3.0 Unported](https://creativecommons.org/licenses/by-sa/3.0/)
by Gerardo Roa.
| 574ec25856eed0fdc3bccb1dc3179bdf |
jhaochenz/finetuned_gpt2-medium_sst2_negation0.001_pretrainedTrue_epochs3 | jhaochenz | gpt2 | 17 | 0 | transformers | 0 | text-generation | true | false | false | mit | null | ['sst2'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 1,269 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuned_gpt2-medium_sst2_negation0.001_pretrainedTrue_epochs3
This model is a fine-tuned version of [gpt2-medium](https://huggingface.co/gpt2-medium) on the sst2 dataset.
It achieves the following results on the evaluation set:
- Loss: 3.0568
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 2.2831 | 1.0 | 1322 | 2.8944 |
| 1.971 | 2.0 | 2644 | 2.9808 |
| 1.8553 | 3.0 | 3966 | 3.0568 |
### Framework versions
- Transformers 4.25.1
- Pytorch 1.7.0
- Datasets 2.8.0
- Tokenizers 0.13.2
| 2f45c86ecf792118c8ffdadb6618325d |
batterydata/batteryonlybert-uncased-abstract | batterydata | bert | 20 | 39 | transformers | 0 | text-classification | true | false | false | apache-2.0 | ['en'] | ['batterydata/paper-abstracts'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | Text Classification | false | true | true | 1,347 | false |
# BatteryOnlyBERT-uncased for Battery Abstract Classification
**Language model:** batteryonlybert-uncased
**Language:** English
**Downstream-task:** Text Classification
**Training data:** training\_data.csv
**Eval data:** val\_data.csv
**Code:** See [example](https://github.com/ShuHuang/batterybert)
**Infrastructure**: 8x DGX A100
## Hyperparameters
```
batch_size = 16
n_epochs = 13
base_LM_model = "batteryonlybert-uncased"
learning_rate = 3e-5
```
## Performance
```
"Validation accuracy": 97.18,
"Test accuracy": 97.08,
```
## Usage
### In Transformers
```python
from transformers import AutoModelForSequenceClassification, AutoTokenizer, pipeline
model_name = "batterydata/batteryonlybert-uncased-abstract"
# a) Get predictions
nlp = pipeline('text-classification', model=model_name, tokenizer=model_name)
text = 'The typical non-aqueous electrolyte for commercial Li-ion cells is a solution of LiPF6 in linear and cyclic carbonates.'
res = nlp(text)
# b) Load model & tokenizer
model = AutoModelForSequenceClassification.from_pretrained(model_name)
tokenizer = AutoTokenizer.from_pretrained(model_name)
```
## Authors
Shu Huang: `sh2009 [at] cam.ac.uk`
Jacqueline Cole: `jmc61 [at] cam.ac.uk`
## Citation
BatteryBERT: A Pre-trained Language Model for Battery Database Enhancement | 0be306e7f46dba59d8a39b565e60de3b |
jonatasgrosman/exp_w2v2r_es_vp-100k_age_teens-0_sixties-10_s666 | jonatasgrosman | wav2vec2 | 10 | 0 | transformers | 0 | automatic-speech-recognition | true | false | false | apache-2.0 | ['es'] | ['mozilla-foundation/common_voice_7_0'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['automatic-speech-recognition', 'es'] | false | true | true | 498 | false | # exp_w2v2r_es_vp-100k_age_teens-0_sixties-10_s666
Fine-tuned [facebook/wav2vec2-large-100k-voxpopuli](https://huggingface.co/facebook/wav2vec2-large-100k-voxpopuli) for speech recognition using the train split of [Common Voice 7.0 (es)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0).
When using this model, make sure that your speech input is sampled at 16kHz.
This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
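A short transcription sketch with HuggingSound (based on the tool's documented interface; the audio path is a placeholder for a 16 kHz Spanish recording):
```python
from huggingsound import SpeechRecognitionModel

model = SpeechRecognitionModel("jonatasgrosman/exp_w2v2r_es_vp-100k_age_teens-0_sixties-10_s666")
transcriptions = model.transcribe(["sample_16khz_es.wav"])  # hypothetical audio file
print(transcriptions[0]["transcription"])
```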
| 2e2f7894178d957af4d9c544bc434829 |
tyoyo/t5-base-TEDxJP-11body-0context | tyoyo | t5 | 12 | 3 | transformers | 0 | text2text-generation | true | false | false | cc-by-sa-4.0 | null | ['te_dx_jp'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 2,954 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-base-TEDxJP-11body-0context
This model is a fine-tuned version of [sonoisa/t5-base-japanese](https://huggingface.co/sonoisa/t5-base-japanese) on the te_dx_jp dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8068
- Wer: 0.1976
- Mer: 0.1904
- Wil: 0.2816
- Wip: 0.7184
- Hits: 602335
- Substitutions: 75050
- Deletions: 39435
- Insertions: 27185
- Cer: 0.1625
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 64
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 10
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer | Mer | Wil | Wip | Hits | Substitutions | Deletions | Insertions | Cer |
|:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:|:------:|:------:|:-------------:|:---------:|:----------:|:------:|
| 0.8909 | 1.0 | 746 | 0.7722 | 0.3120 | 0.2861 | 0.3989 | 0.6011 | 558138 | 99887 | 58795 | 64983 | 0.2652 |
| 0.6786 | 2.0 | 1492 | 0.7021 | 0.2226 | 0.2122 | 0.3069 | 0.6931 | 592242 | 78773 | 45805 | 34978 | 0.1862 |
| 0.5627 | 3.0 | 2238 | 0.6996 | 0.2104 | 0.2016 | 0.2942 | 0.7058 | 597381 | 76593 | 42846 | 31392 | 0.1752 |
| 0.489 | 4.0 | 2984 | 0.7161 | 0.2030 | 0.1952 | 0.2865 | 0.7135 | 599808 | 75155 | 41857 | 28506 | 0.1684 |
| 0.4355 | 5.0 | 3730 | 0.7389 | 0.2000 | 0.1924 | 0.2837 | 0.7163 | 601815 | 75247 | 39758 | 28335 | 0.1651 |
| 0.3836 | 6.0 | 4476 | 0.7537 | 0.1992 | 0.1918 | 0.2829 | 0.7171 | 601846 | 75046 | 39928 | 27815 | 0.1640 |
| 0.3617 | 7.0 | 5222 | 0.7743 | 0.1995 | 0.1918 | 0.2832 | 0.7168 | 602287 | 75268 | 39265 | 28445 | 0.1642 |
| 0.3258 | 8.0 | 5968 | 0.7907 | 0.1971 | 0.1899 | 0.2809 | 0.7191 | 602800 | 74887 | 39133 | 27258 | 0.1620 |
| 0.3225 | 9.0 | 6714 | 0.8035 | 0.1981 | 0.1908 | 0.2823 | 0.7177 | 602418 | 75372 | 39030 | 27625 | 0.1630 |
| 0.3162 | 10.0 | 7460 | 0.8068 | 0.1976 | 0.1904 | 0.2816 | 0.7184 | 602335 | 75050 | 39435 | 27185 | 0.1625 |
### Framework versions
- Transformers 4.12.5
- Pytorch 1.10.0+cu102
- Datasets 1.15.1
- Tokenizers 0.10.3
| 5b3a214e3e38eb5d40256f0f156f2b46 |
naver-clova-ix/donut-base-finetuned-docvqa | naver-clova-ix | vision-encoder-decoder | 11 | 7,717 | transformers | 24 | document-question-answering | true | false | false | mit | null | null | null | 2 | 0 | 2 | 0 | 2 | 2 | 0 | ['donut', 'image-to-text', 'vision'] | false | true | true | 1,970 | false |
# Donut (base-sized model, fine-tuned on DocVQA)
Donut model fine-tuned on DocVQA. It was introduced in the paper [OCR-free Document Understanding Transformer](https://arxiv.org/abs/2111.15664) by Geewook Kim et al. and first released in [this repository](https://github.com/clovaai/donut).
Disclaimer: The team releasing Donut did not write a model card for this model so this model card has been written by the Hugging Face team.
## Model description
Donut consists of a vision encoder (Swin Transformer) and a text decoder (BART). Given an image, the encoder first encodes the image into a tensor of embeddings (of shape batch_size, seq_len, hidden_size), after which the decoder autoregressively generates text, conditioned on the encoding of the encoder.

## Intended uses & limitations
This model is fine-tuned on DocVQA, a document visual question answering dataset.
We refer to the [documentation](https://huggingface.co/docs/transformers/main/en/model_doc/donut) which includes code examples.
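For convenience, a condensed sketch of the documented inference flow is reproduced below; the document image and question are placeholders, and the exact prompt handling should be checked against the documentation linked above.
```python
import re
from PIL import Image
from transformers import DonutProcessor, VisionEncoderDecoderModel

repo = "naver-clova-ix/donut-base-finetuned-docvqa"
processor = DonutProcessor.from_pretrained(repo)
model = VisionEncoderDecoderModel.from_pretrained(repo)

image = Image.open("invoice.png").convert("RGB")  # hypothetical document image
question = "What is the invoice number?"
prompt = f"<s_docvqa><s_question>{question}</s_question><s_answer>"

pixel_values = processor(image, return_tensors="pt").pixel_values
decoder_input_ids = processor.tokenizer(prompt, add_special_tokens=False, return_tensors="pt").input_ids

outputs = model.generate(
    pixel_values,
    decoder_input_ids=decoder_input_ids,
    max_length=512,
    pad_token_id=processor.tokenizer.pad_token_id,
    eos_token_id=processor.tokenizer.eos_token_id,
    use_cache=True,
)

sequence = processor.batch_decode(outputs)[0]
sequence = sequence.replace(processor.tokenizer.eos_token, "").replace(processor.tokenizer.pad_token, "")
sequence = re.sub(r"<.*?>", "", sequence, count=1).strip()  # drop the task start token
print(processor.token2json(sequence))
```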
### BibTeX entry and citation info
```bibtex
@article{DBLP:journals/corr/abs-2111-15664,
author = {Geewook Kim and
Teakgyu Hong and
Moonbin Yim and
Jinyoung Park and
Jinyeong Yim and
Wonseok Hwang and
Sangdoo Yun and
Dongyoon Han and
Seunghyun Park},
title = {Donut: Document Understanding Transformer without {OCR}},
journal = {CoRR},
volume = {abs/2111.15664},
year = {2021},
url = {https://arxiv.org/abs/2111.15664},
eprinttype = {arXiv},
eprint = {2111.15664},
timestamp = {Thu, 02 Dec 2021 10:50:44 +0100},
biburl = {https://dblp.org/rec/journals/corr/abs-2111-15664.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
``` | bdfa895c5cb9ab2d2612666e2e538abd |
ali2066/twitter_RoBERTa_token_itr0_1e-05_webDiscourse_01_03_2022-14_45_20 | ali2066 | distilbert | 13 | 16 | transformers | 0 | token-classification | true | false | false | apache-2.0 | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 1,790 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# twitter_RoBERTa_token_itr0_1e-05_webDiscourse_01_03_2022-14_45_20
This model is a fine-tuned version of [distilbert-base-uncased-finetuned-sst-2-english](https://huggingface.co/distilbert-base-uncased-finetuned-sst-2-english) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6113
- Precision: 0.0097
- Recall: 0.0145
- F1: 0.0116
- Accuracy: 0.6780
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 32
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:---:|:--------:|
| No log | 1.0 | 10 | 0.6399 | 0.0 | 0.0 | 0.0 | 0.6603 |
| No log | 2.0 | 20 | 0.6192 | 0.0 | 0.0 | 0.0 | 0.6603 |
| No log | 3.0 | 30 | 0.6133 | 0.0 | 0.0 | 0.0 | 0.6605 |
| No log | 4.0 | 40 | 0.6142 | 0.0 | 0.0 | 0.0 | 0.6617 |
| No log | 5.0 | 50 | 0.6129 | 0.0 | 0.0 | 0.0 | 0.6632 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.1+cu113
- Datasets 1.18.0
- Tokenizers 0.10.3
| cb5341901b4b1b94cdca5650f7ae4bb6 |
pserna/bert2bert-spanish-paraphraser | pserna | encoder-decoder | 10 | 2 | transformers | 0 | text2text-generation | true | true | false | apache-2.0 | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | [] | false | true | true | 866 | false |
# Spanish Bert2Bert fine-tuned on Quora question pairs dataset
Fine-tuning of a [question generator model](https://huggingface.co/mrm8488/bert2bert-spanish-question-generation) into a paraphraser model using a poor-man's translation of the Quora question pairs dataset. It basically rephrases questions into similar questions. Non-interrogative sentences are not handled very well.
- Original models: [mrm8488/bert2bert-spanish-question-generation](https://huggingface.co/mrm8488/bert2bert-spanish-question-generation?text=Manuel+vive+en+Murcia%2C+Espa%C3%B1a), which is based on [dccuchile/bert-base-spanish-wwm-cased](https://huggingface.co/dccuchile/bert-base-spanish-wwm-cased) (?).
- Custom database: "Poor-man's" translation of duplicated questions in Quora (translated with [Helsinki-NLP/opus-mt-en-es](https://huggingface.co/Helsinki-NLP/opus-mt-en-es))
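A rough usage sketch (assuming the repository ships a compatible tokenizer alongside the encoder-decoder weights; the input question is made up):
```python
from transformers import AutoTokenizer, EncoderDecoderModel

repo = "pserna/bert2bert-spanish-paraphraser"
tokenizer = AutoTokenizer.from_pretrained(repo)
model = EncoderDecoderModel.from_pretrained(repo)

inputs = tokenizer("¿Cómo puedo aprender a programar en Python?", return_tensors="pt")
outputs = model.generate(**inputs, max_length=64, num_beams=5, early_stopping=True)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```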
| dc63ecd20b22a947fe83d5aae2d0718b |
abhilashawasthi/bert-base-uncased-issues-128 | abhilashawasthi | bert | 10 | 5 | transformers | 0 | fill-mask | true | false | false | apache-2.0 | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 1,918 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-uncased-issues-128
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 1.2520
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 32
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 16
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 2.0949 | 1.0 | 291 | 1.7072 |
| 1.649 | 2.0 | 582 | 1.4409 |
| 1.4835 | 3.0 | 873 | 1.4099 |
| 1.3938 | 4.0 | 1164 | 1.3858 |
| 1.3326 | 5.0 | 1455 | 1.2004 |
| 1.2949 | 6.0 | 1746 | 1.2955 |
| 1.2451 | 7.0 | 2037 | 1.2682 |
| 1.1992 | 8.0 | 2328 | 1.1938 |
| 1.1784 | 9.0 | 2619 | 1.1686 |
| 1.1397 | 10.0 | 2910 | 1.2050 |
| 1.1293 | 11.0 | 3201 | 1.2058 |
| 1.1006 | 12.0 | 3492 | 1.1680 |
| 1.0835 | 13.0 | 3783 | 1.2414 |
| 1.0757 | 14.0 | 4074 | 1.1522 |
| 1.062 | 15.0 | 4365 | 1.1176 |
| 1.0535 | 16.0 | 4656 | 1.2520 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.10.2+cu102
- Datasets 2.1.0
- Tokenizers 0.12.1
| ad337d4358d9a9a55ac0d364adad581b |
deepset/bert-base-uncased-squad2 | deepset | bert | 8 | 517 | transformers | 2 | question-answering | true | false | false | cc-by-4.0 | ['en'] | ['squad_v2'] | null | 5 | 1 | 2 | 2 | 0 | 0 | 0 | [] | true | true | true | 1,626 | false |
# bert-base-uncased for QA
## Overview
**Language model:** bert-base-uncased
**Language:** English
**Downstream-task:** Extractive QA
**Training data:** SQuAD 2.0
**Eval data:** SQuAD 2.0
**Infrastructure**: 1x Tesla v100
## Hyperparameters
```
batch_size = 32
n_epochs = 3
base_LM_model = "bert-base-uncased"
max_seq_len = 384
learning_rate = 3e-5
lr_schedule = LinearWarmup
warmup_proportion = 0.2
doc_stride=128
max_query_length=64
```
## Performance
```
"exact": 73.67977764676156
"f1": 77.87647139308865
```
## Authors
- Timo Möller: `timo.moeller [at] deepset.ai`
- Julian Risch: `julian.risch [at] deepset.ai`
- Malte Pietsch: `malte.pietsch [at] deepset.ai`
- Michel Bartels: `michel.bartels [at] deepset.ai`
## About us

We bring NLP to the industry via open source!
Our focus: Industry specific language models & large scale QA systems.
Some of our work:
- [German BERT (aka "bert-base-german-cased")](https://deepset.ai/german-bert)
- [GermanQuAD and GermanDPR datasets and models (aka "gelectra-base-germanquad", "gbert-base-germandpr")](https://deepset.ai/germanquad)
- [FARM](https://github.com/deepset-ai/FARM)
- [Haystack](https://github.com/deepset-ai/haystack/)
Get in touch:
[Twitter](https://twitter.com/deepset_ai) | [LinkedIn](https://www.linkedin.com/company/deepset-ai/) | [Discord](https://haystack.deepset.ai/community/join) | [GitHub Discussions](https://github.com/deepset-ai/haystack/discussions) | [Website](https://deepset.ai)
By the way: [we're hiring!](http://www.deepset.ai/jobs) | b7843cec3248b689b0dc74e37c43672e |
it5/mt5-small-news-summarization | it5 | mt5 | 11 | 5 | transformers | 0 | summarization | true | true | true | apache-2.0 | ['it'] | ['ARTeLab/fanpage', 'ARTeLab/ilpost'] | {'emissions': '17g', 'source': 'Google Cloud Platform Carbon Footprint', 'training_type': 'fine-tuning', 'geographical_location': 'Eemshaven, Netherlands, Europe', 'hardware_used': '1 TPU v3-8 VM'} | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['italian', 'sequence-to-sequence', 'fanpage', 'ilpost', 'summarization'] | true | true | true | 2,795 | false | # mT5 Small for News Summarization ✂️🗞️ 🇮🇹
This repository contains the checkpoint for the [mT5 Small](https://huggingface.co/google/mt5-small) model fine-tuned on news summarization on the [Fanpage](https://huggingface.co/datasets/ARTeLab/fanpage) and [Il Post](https://huggingface.co/datasets/ARTeLab/ilpost) corpora as part of the experiments of the paper [IT5: Large-scale Text-to-text Pretraining for Italian Language Understanding and Generation](https://arxiv.org/abs/2203.03759) by [Gabriele Sarti](https://gsarti.com) and [Malvina Nissim](https://malvinanissim.github.io).
A comprehensive overview of other released materials is provided in the [gsarti/it5](https://github.com/gsarti/it5) repository. Refer to the paper for additional details concerning the reported scores and the evaluation approach.
## Using the model
Model checkpoints are available for usage in Tensorflow, Pytorch and JAX. They can be used directly with pipelines as:
```python
from transformers import pipeline
newsum = pipeline("summarization", model='it5/mt5-small-news-summarization')
newsum("Dal 31 maggio è infine partita la piattaforma ITsART, a più di un anno da quando – durante il primo lockdown – il ministro della Cultura Dario Franceschini ne aveva parlato come di «una sorta di Netflix della cultura», pensata per «offrire a tutto il mondo la cultura italiana a pagamento». È presto per dare giudizi definitivi sulla piattaforma, e di certo sarà difficile farlo anche più avanti senza numeri precisi. Al momento, l’unica cosa che si può fare è guardare com’è fatto il sito, contare quanti contenuti ci sono (circa 700 “titoli”, tra film, documentari, spettacoli teatrali e musicali e altri eventi) e provare a dare un giudizio sul loro valore e sulla loro varietà. Intanto, una cosa notata da più parti è che diversi contenuti di ITsART sono a pagamento sulla piattaforma sebbene altrove, per esempio su RaiPlay, siano invece disponibili gratuitamente.")
>>> [{"generated_text": "ITsART, la Netflix della cultura italiana, parte da maggio. Film, documentari, spettacoli teatrali e musicali disponibili sul nuovo sito a pagamento."}]
```
or loaded using autoclasses:
```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM
tokenizer = AutoTokenizer.from_pretrained("it5/mt5-small-news-summarization")
model = AutoModelForSeq2SeqLM.from_pretrained("it5/mt5-small-news-summarization")
```
If you use this model in your research, please cite our work as:
```bibtex
@article{sarti-nissim-2022-it5,
title={{IT5}: Large-scale Text-to-text Pretraining for Italian Language Understanding and Generation},
author={Sarti, Gabriele and Nissim, Malvina},
journal={ArXiv preprint 2203.03759},
url={https://arxiv.org/abs/2203.03759},
year={2022},
month={mar}
}
``` | b11bd9733f6c2803e99c661e97b31f8c |
Helsinki-NLP/opus-mt-en-roa | Helsinki-NLP | marian | 12 | 956 | transformers | 0 | translation | true | true | false | apache-2.0 | ['en', 'it', 'ca', 'rm', 'es', 'ro', 'gl', 'co', 'wa', 'pt', 'oc', 'an', 'id', 'fr', 'ht', 'roa'] | null | null | 1 | 1 | 0 | 0 | 0 | 0 | 0 | ['translation'] | false | true | true | 5,113 | false |
### eng-roa
* source group: English
* target group: Romance languages
* OPUS readme: [eng-roa](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/eng-roa/README.md)
* model: transformer
* source language(s): eng
* target language(s): arg ast cat cos egl ext fra frm_Latn gcf_Latn glg hat ind ita lad lad_Latn lij lld_Latn lmo max_Latn mfe min mwl oci pap pms por roh ron scn spa tmw_Latn vec wln zlm_Latn zsm_Latn
* pre-processing: normalization + SentencePiece (spm32k,spm32k)
* a sentence initial language token is required in the form of `>>id<<` (id = valid target language ID); a usage sketch is shown below this list
* download original weights: [opus2m-2020-08-01.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-roa/opus2m-2020-08-01.zip)
* test set translations: [opus2m-2020-08-01.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-roa/opus2m-2020-08-01.test.txt)
* test set scores: [opus2m-2020-08-01.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-roa/opus2m-2020-08-01.eval.txt)
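A short sketch of the `>>id<<` mechanism (standard MarianMT usage; the sentences and the choice of French and Portuguese targets are only illustrative):
```python
from transformers import MarianMTModel, MarianTokenizer

model_name = "Helsinki-NLP/opus-mt-en-roa"
tokenizer = MarianTokenizer.from_pretrained(model_name)
model = MarianMTModel.from_pretrained(model_name)

texts = [
    ">>fra<< This is a multilingual English-to-Romance translation model.",
    ">>por<< This is a multilingual English-to-Romance translation model.",
]
batch = tokenizer(texts, return_tensors="pt", padding=True)
generated = model.generate(**batch)
print(tokenizer.batch_decode(generated, skip_special_tokens=True))
```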
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| newsdev2016-enro-engron.eng.ron | 27.6 | 0.567 |
| newsdiscussdev2015-enfr-engfra.eng.fra | 30.2 | 0.575 |
| newsdiscusstest2015-enfr-engfra.eng.fra | 35.5 | 0.612 |
| newssyscomb2009-engfra.eng.fra | 27.9 | 0.570 |
| newssyscomb2009-engita.eng.ita | 29.3 | 0.590 |
| newssyscomb2009-engspa.eng.spa | 29.6 | 0.570 |
| news-test2008-engfra.eng.fra | 25.2 | 0.538 |
| news-test2008-engspa.eng.spa | 27.3 | 0.548 |
| newstest2009-engfra.eng.fra | 26.9 | 0.560 |
| newstest2009-engita.eng.ita | 28.7 | 0.583 |
| newstest2009-engspa.eng.spa | 29.0 | 0.568 |
| newstest2010-engfra.eng.fra | 29.3 | 0.574 |
| newstest2010-engspa.eng.spa | 34.2 | 0.601 |
| newstest2011-engfra.eng.fra | 31.4 | 0.592 |
| newstest2011-engspa.eng.spa | 35.0 | 0.599 |
| newstest2012-engfra.eng.fra | 29.5 | 0.576 |
| newstest2012-engspa.eng.spa | 35.5 | 0.603 |
| newstest2013-engfra.eng.fra | 29.9 | 0.567 |
| newstest2013-engspa.eng.spa | 32.1 | 0.578 |
| newstest2016-enro-engron.eng.ron | 26.1 | 0.551 |
| Tatoeba-test.eng-arg.eng.arg | 1.4 | 0.125 |
| Tatoeba-test.eng-ast.eng.ast | 17.8 | 0.406 |
| Tatoeba-test.eng-cat.eng.cat | 48.3 | 0.676 |
| Tatoeba-test.eng-cos.eng.cos | 3.2 | 0.275 |
| Tatoeba-test.eng-egl.eng.egl | 0.2 | 0.084 |
| Tatoeba-test.eng-ext.eng.ext | 11.2 | 0.344 |
| Tatoeba-test.eng-fra.eng.fra | 45.3 | 0.637 |
| Tatoeba-test.eng-frm.eng.frm | 1.1 | 0.221 |
| Tatoeba-test.eng-gcf.eng.gcf | 0.6 | 0.118 |
| Tatoeba-test.eng-glg.eng.glg | 44.2 | 0.645 |
| Tatoeba-test.eng-hat.eng.hat | 28.0 | 0.502 |
| Tatoeba-test.eng-ita.eng.ita | 45.6 | 0.674 |
| Tatoeba-test.eng-lad.eng.lad | 8.2 | 0.322 |
| Tatoeba-test.eng-lij.eng.lij | 1.4 | 0.182 |
| Tatoeba-test.eng-lld.eng.lld | 0.8 | 0.217 |
| Tatoeba-test.eng-lmo.eng.lmo | 0.7 | 0.190 |
| Tatoeba-test.eng-mfe.eng.mfe | 91.9 | 0.956 |
| Tatoeba-test.eng-msa.eng.msa | 31.1 | 0.548 |
| Tatoeba-test.eng.multi | 42.9 | 0.636 |
| Tatoeba-test.eng-mwl.eng.mwl | 2.1 | 0.234 |
| Tatoeba-test.eng-oci.eng.oci | 7.9 | 0.297 |
| Tatoeba-test.eng-pap.eng.pap | 44.1 | 0.648 |
| Tatoeba-test.eng-pms.eng.pms | 2.1 | 0.190 |
| Tatoeba-test.eng-por.eng.por | 41.8 | 0.639 |
| Tatoeba-test.eng-roh.eng.roh | 3.5 | 0.261 |
| Tatoeba-test.eng-ron.eng.ron | 41.0 | 0.635 |
| Tatoeba-test.eng-scn.eng.scn | 1.7 | 0.184 |
| Tatoeba-test.eng-spa.eng.spa | 50.1 | 0.689 |
| Tatoeba-test.eng-vec.eng.vec | 3.2 | 0.248 |
| Tatoeba-test.eng-wln.eng.wln | 7.2 | 0.220 |
### System Info:
- hf_name: eng-roa
- source_languages: eng
- target_languages: roa
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/eng-roa/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['en', 'it', 'ca', 'rm', 'es', 'ro', 'gl', 'co', 'wa', 'pt', 'oc', 'an', 'id', 'fr', 'ht', 'roa']
- src_constituents: {'eng'}
- tgt_constituents: {'ita', 'cat', 'roh', 'spa', 'pap', 'lmo', 'mwl', 'lij', 'lad_Latn', 'ext', 'ron', 'ast', 'glg', 'pms', 'zsm_Latn', 'gcf_Latn', 'lld_Latn', 'min', 'tmw_Latn', 'cos', 'wln', 'zlm_Latn', 'por', 'egl', 'oci', 'vec', 'arg', 'ind', 'fra', 'hat', 'lad', 'max_Latn', 'frm_Latn', 'scn', 'mfe'}
- src_multilingual: False
- tgt_multilingual: True
- prepro: normalization + SentencePiece (spm32k,spm32k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/eng-roa/opus2m-2020-08-01.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/eng-roa/opus2m-2020-08-01.test.txt
- src_alpha3: eng
- tgt_alpha3: roa
- short_pair: en-roa
- chrF2_score: 0.636
- bleu: 42.9
- brevity_penalty: 0.978
- ref_len: 72751.0
- src_name: English
- tgt_name: Romance languages
- train_date: 2020-08-01
- src_alpha2: en
- tgt_alpha2: roa
- prefer_old: False
- long_pair: eng-roa
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41 | 33f18788f8d742da15534bb3dfb78772 |
grantslewis/spelling-correction-english-base-location-unique-2-2-proportional | grantslewis | bart | 13 | 26 | transformers | 0 | text2text-generation | true | false | false | mit | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 1,325 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# spelling-correction-english-base-location-unique-2-2-proportional
This model is a fine-tuned version of [grantslewis/spelling-correction-english-base-location-unique-2-2](https://huggingface.co/grantslewis/spelling-correction-english-base-location-unique-2-2) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0771
- Cer: 0.0183
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 75
- eval_batch_size: 75
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss | Cer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.097 | 1.0 | 5659 | 0.0771 | 0.0183 |
### Framework versions
- Transformers 4.22.2
- Pytorch 1.12.1+cu113
- Datasets 2.5.2
- Tokenizers 0.12.1
| 898b1185c16333648f29ea55c764b535 |
google/realm-orqa-wq-reader | google | realm | 7 | 5 | transformers | 0 | null | true | false | false | apache-2.0 | ['en'] | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | [] | false | true | true | 455 | false |
# realm-orqa-wq-reader
## Model description
This is the REALM checkpoint fine-tuned on the WebQuestions (WQ) dataset, converted from the TF checkpoint provided by Google Language.
The original paper, code, and checkpoints can be found [here](https://github.com/google-research/language/tree/master/language/realm).
## Usage
```python
from transformers import RealmReader
reader = RealmReader.from_pretrained("google/realm-orqa-wq-reader")
```
| fb005a6966d3f47945a5cd594c746265 |
Yulinfeng/wsj0_2mix_enh_train_enh_dan_tf_raw_valid.si_snr.ave | Yulinfeng | null | 16 | 3 | espnet | 0 | audio-to-audio | false | false | false | cc-by-4.0 | ['en'] | ['wsj0_2mix'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['espnet', 'audio', 'audio-to-audio'] | false | true | true | 6,130 | false |
## ESPnet2 ENH model
### `Yulinfeng/wsj0_2mix_enh_train_enh_dan_tf_raw_valid.si_snr.ave`
This model was trained by earthmanylf using wsj0_2mix recipe in [espnet](https://github.com/espnet/espnet/).
### Demo: How to use in ESPnet2
```bash
cd espnet
git checkout ec1acec03d109f06d829b80862e0388f7234d0d1
pip install -e .
cd egs2/wsj0_2mix/enh1
./run.sh --skip_data_prep false --skip_train true --download_model Yulinfeng/wsj0_2mix_enh_train_enh_dan_tf_raw_valid.si_snr.ave
```
<!-- Generated by ./scripts/utils/show_enh_score.sh -->
# RESULTS
## Environments
- date: `Thu Mar 3 14:33:32 CST 2022`
- python version: `3.8.10 (default, May 19 2021, 18:05:58) [GCC 7.3.0]`
- espnet version: `espnet 0.10.7a1`
- pytorch version: `pytorch 1.5.1+cu101`
- Git hash: `ec1acec03d109f06d829b80862e0388f7234d0d1`
- Commit date: `Fri Feb 25 14:12:45 2022 +0800`
## Evaluation results
config: conf/tuning/train_enh_dan_tf.yaml
|dataset|PESQ|STOI|SAR|SDR|SIR|SI_SNR|
|---|---|---|---|---|---|---|
|enhanced_cv_min_8k|2.68|0.88|12.28|11.01|18.03|10.48|
|enhanced_tt_min_8k|2.68|0.89|12.10|10.84|17.98|10.30|
## ENH config
<details><summary>expand</summary>
```
config: conf/tuning/train_enh_dan_tf.yaml
print_config: false
log_level: INFO
dry_run: false
iterator_type: chunk
output_dir: exp/enh_train_enh_dan_tf_raw
ngpu: 1
seed: 0
num_workers: 4
num_att_plot: 3
dist_backend: nccl
dist_init_method: env://
dist_world_size: null
dist_rank: null
local_rank: 0
dist_master_addr: null
dist_master_port: null
dist_launcher: null
multiprocessing_distributed: false
unused_parameters: false
sharded_ddp: false
cudnn_enabled: true
cudnn_benchmark: false
cudnn_deterministic: true
collect_stats: false
write_collected_feats: false
max_epoch: 100
patience: 10
val_scheduler_criterion:
- valid
- loss
early_stopping_criterion:
- valid
- loss
- min
best_model_criterion:
- - valid
- si_snr
- max
- - valid
- loss
- min
keep_nbest_models: 1
nbest_averaging_interval: 0
grad_clip: 5.0
grad_clip_type: 2.0
grad_noise: false
accum_grad: 1
no_forward_run: false
resume: true
train_dtype: float32
use_amp: false
log_interval: null
use_matplotlib: true
use_tensorboard: true
use_wandb: false
wandb_project: null
wandb_id: null
wandb_entity: null
wandb_name: null
wandb_model_log_interval: -1
detect_anomaly: false
pretrain_path: null
init_param: []
ignore_init_mismatch: false
freeze_param: []
num_iters_per_epoch: null
batch_size: 8
valid_batch_size: null
batch_bins: 1000000
valid_batch_bins: null
train_shape_file:
- exp/enh_stats_8k/train/speech_mix_shape
- exp/enh_stats_8k/train/speech_ref1_shape
- exp/enh_stats_8k/train/speech_ref2_shape
valid_shape_file:
- exp/enh_stats_8k/valid/speech_mix_shape
- exp/enh_stats_8k/valid/speech_ref1_shape
- exp/enh_stats_8k/valid/speech_ref2_shape
batch_type: folded
valid_batch_type: null
fold_length:
- 80000
- 80000
- 80000
sort_in_batch: descending
sort_batch: descending
multiple_iterator: false
chunk_length: 32000
chunk_shift_ratio: 0.5
num_cache_chunks: 1024
train_data_path_and_name_and_type:
- - dump/raw/tr_min_8k/wav.scp
- speech_mix
- sound
- - dump/raw/tr_min_8k/spk1.scp
- speech_ref1
- sound
- - dump/raw/tr_min_8k/spk2.scp
- speech_ref2
- sound
valid_data_path_and_name_and_type:
- - dump/raw/cv_min_8k/wav.scp
- speech_mix
- sound
- - dump/raw/cv_min_8k/spk1.scp
- speech_ref1
- sound
- - dump/raw/cv_min_8k/spk2.scp
- speech_ref2
- sound
allow_variable_data_keys: false
max_cache_size: 0.0
max_cache_fd: 32
valid_max_cache_size: null
optim: adam
optim_conf:
lr: 0.0001
eps: 1.0e-08
weight_decay: 1.0e-07
scheduler: reducelronplateau
scheduler_conf:
mode: min
factor: 0.7
patience: 1
init: xavier_uniform
model_conf:
stft_consistency: false
loss_type: mask_mse
mask_type: PSM
ref_channel: 0
criterions:
- name: mse
conf:
compute_on_mask: false
mask_type: PSM
wrapper: pit
wrapper_conf:
weight: 1.0
use_preprocessor: false
encoder: stft
encoder_conf:
n_fft: 256
hop_length: 64
separator: dan
separator_conf:
rnn_type: blstm
num_spk: 2
nonlinear: tanh
layer: 4
unit: 600
dropout: 0.1
emb_D: 20
decoder: stft
decoder_conf:
n_fft: 256
hop_length: 64
required:
- output_dir
version: 0.10.7a1
distributed: false
```
</details>
### Citing ESPnet
```BibTex
@inproceedings{watanabe2018espnet,
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Yalta and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
title={{ESPnet}: End-to-End Speech Processing Toolkit},
year={2018},
booktitle={Proceedings of Interspeech},
pages={2207--2211},
doi={10.21437/Interspeech.2018-1456},
url={http://dx.doi.org/10.21437/Interspeech.2018-1456}
}
@inproceedings{ESPnet-SE,
author = {Chenda Li and Jing Shi and Wangyou Zhang and Aswin Shanmugam Subramanian and Xuankai Chang and
Naoyuki Kamo and Moto Hira and Tomoki Hayashi and Christoph B{\"{o}}ddeker and Zhuo Chen and Shinji Watanabe},
title = {ESPnet-SE: End-To-End Speech Enhancement and Separation Toolkit Designed for {ASR} Integration},
booktitle = {{IEEE} Spoken Language Technology Workshop, {SLT} 2021, Shenzhen, China, January 19-22, 2021},
pages = {785--792},
publisher = {{IEEE}},
year = {2021},
url = {https://doi.org/10.1109/SLT48900.2021.9383615},
doi = {10.1109/SLT48900.2021.9383615},
timestamp = {Mon, 12 Apr 2021 17:08:59 +0200},
biburl = {https://dblp.org/rec/conf/slt/Li0ZSCKHHBC021.bib},
bibsource = {dblp computer science bibliography, https://dblp.org}
}
```
or arXiv:
```bibtex
@misc{watanabe2018espnet,
title={ESPnet: End-to-End Speech Processing Toolkit},
author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Yalta and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai},
year={2018},
eprint={1804.00015},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
| e6698612d62fdc80607116a1aef4c383 |
UchihaMadara/model1-thesis-3 | UchihaMadara | bert | 12 | 9 | transformers | 0 | token-classification | true | false | false | apache-2.0 | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 1,700 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# model1-thesis-3
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset.
It achieves the following results on the evaluation set:
- Loss: 1.1377
- Precision: 0.4527
- Recall: 0.5051
- F1: 0.4774
- Accuracy: 0.6190
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 1.0 | 45 | 1.3105 | 0.3737 | 0.4765 | 0.4189 | 0.5364 |
| No log | 2.0 | 90 | 1.0783 | 0.4009 | 0.4523 | 0.4250 | 0.5781 |
| No log | 3.0 | 135 | 1.0601 | 0.4444 | 0.4750 | 0.4592 | 0.6127 |
| No log | 4.0 | 180 | 1.0953 | 0.4745 | 0.4876 | 0.4809 | 0.6266 |
| No log | 5.0 | 225 | 1.1377 | 0.4527 | 0.5051 | 0.4774 | 0.6190 |
### Framework versions
- Transformers 4.26.1
- Pytorch 1.13.1+cu116
- Datasets 2.9.0
- Tokenizers 0.13.2
| 612d359b9f9f63d8ecf46d9216966968 |
azaidi06/xlm-roberta-base-finetuned-panx-de | azaidi06 | xlm-roberta | 12 | 7 | transformers | 0 | token-classification | true | false | false | mit | null | ['xtreme'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 1,314 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-finetuned-panx-de
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the xtreme dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1339
- F1: 0.8663
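Since the checkpoint targets German NER (PAN-X), it should work with the token-classification pipeline; the sketch below is illustrative only and the example sentence is made up.
```python
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="azaidi06/xlm-roberta-base-finetuned-panx-de",
    aggregation_strategy="simple",
)
print(ner("Angela Merkel besuchte gestern das Volkswagen-Werk in Wolfsburg."))
```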
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 24
- eval_batch_size: 24
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.2581 | 1.0 | 525 | 0.1690 | 0.8303 |
| 0.1305 | 2.0 | 1050 | 0.1352 | 0.8484 |
| 0.0839 | 3.0 | 1575 | 0.1339 | 0.8663 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.12.1
- Datasets 1.16.1
- Tokenizers 0.10.3
| eaab46f613e30018feb63847bd5a5bec |
saattrupdan/job-listing-filtering-model | saattrupdan | xlm-roberta | 10 | 4 | transformers | 0 | text-classification | true | false | false | mit | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 1,976 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# job-listing-filtering-model
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1992
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 25
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 0.4639 | 1.55 | 50 | 0.4343 |
| 0.407 | 3.12 | 100 | 0.3589 |
| 0.3459 | 4.68 | 150 | 0.3110 |
| 0.2871 | 6.25 | 200 | 0.2604 |
| 0.1966 | 7.8 | 250 | 0.2004 |
| 0.0994 | 9.37 | 300 | 0.1766 |
| 0.0961 | 10.92 | 350 | 0.2007 |
| 0.0954 | 12.49 | 400 | 0.1716 |
| 0.0498 | 14.06 | 450 | 0.1642 |
| 0.0419 | 15.62 | 500 | 0.1811 |
| 0.0232 | 17.18 | 550 | 0.1872 |
| 0.0146 | 18.74 | 600 | 0.1789 |
| 0.0356 | 20.31 | 650 | 0.1984 |
| 0.0325 | 21.86 | 700 | 0.1845 |
| 0.0381 | 23.43 | 750 | 0.1994 |
| 0.0063 | 24.98 | 800 | 0.1992 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.11.0+cu113
- Datasets 2.0.0
- Tokenizers 0.11.6
| 0afba01909f6e3e01561af0f5c714bd7 |
Helsinki-NLP/opus-mt-efi-sv | Helsinki-NLP | marian | 10 | 7 | transformers | 0 | translation | true | true | false | apache-2.0 | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['translation'] | false | true | true | 776 | false |
### opus-mt-efi-sv
* source languages: efi
* target languages: sv
* OPUS readme: [efi-sv](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/efi-sv/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-08.zip](https://object.pouta.csc.fi/OPUS-MT-models/efi-sv/opus-2020-01-08.zip)
* test set translations: [opus-2020-01-08.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/efi-sv/opus-2020-01-08.test.txt)
* test set scores: [opus-2020-01-08.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/efi-sv/opus-2020-01-08.eval.txt)
## Benchmarks
| testset | BLEU | chr-F |
|-----------------------|-------|-------|
| JW300.efi.sv | 26.8 | 0.447 |
| ef47bb9408ed207f65d107c2dda5f5af |
jbdaniel/bert-large-uncased-finetuned-bert-large-uncase-p1 | jbdaniel | bert | 53 | 0 | transformers | 0 | fill-mask | true | false | false | apache-2.0 | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 1,177 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-large-uncased-finetuned-bert-large-uncase-p1
This model is a fine-tuned version of [bert-large-uncased](https://huggingface.co/bert-large-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.0993
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 0.0816 | 1.0 | 11392 | 0.0993 |
### Framework versions
- Transformers 4.25.1
- Pytorch 1.13.0+cu116
- Datasets 2.7.1
- Tokenizers 0.13.2
| 49f3988f9272c2d29e112c4db9ef2f99 |
SetFit/distilbert-base-uncased__sst2__train-16-6 | SetFit | distilbert | 10 | 6 | transformers | 0 | text-classification | true | false | false | apache-2.0 | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 2,385 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased__sst2__train-16-6
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.8356
- Accuracy: 0.6480
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 50
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.6978 | 1.0 | 7 | 0.6807 | 0.4286 |
| 0.6482 | 2.0 | 14 | 0.6775 | 0.4286 |
| 0.6051 | 3.0 | 21 | 0.6623 | 0.5714 |
| 0.486 | 4.0 | 28 | 0.6710 | 0.5714 |
| 0.4612 | 5.0 | 35 | 0.5325 | 0.7143 |
| 0.2233 | 6.0 | 42 | 0.4992 | 0.7143 |
| 0.1328 | 7.0 | 49 | 0.4753 | 0.7143 |
| 0.0905 | 8.0 | 56 | 0.2416 | 1.0 |
| 0.0413 | 9.0 | 63 | 0.2079 | 1.0 |
| 0.0356 | 10.0 | 70 | 0.2234 | 0.8571 |
| 0.0217 | 11.0 | 77 | 0.2639 | 0.8571 |
| 0.0121 | 12.0 | 84 | 0.2977 | 0.8571 |
| 0.0105 | 13.0 | 91 | 0.3468 | 0.8571 |
| 0.0085 | 14.0 | 98 | 0.3912 | 0.8571 |
| 0.0077 | 15.0 | 105 | 0.4000 | 0.8571 |
| 0.0071 | 16.0 | 112 | 0.4015 | 0.8571 |
| 0.0078 | 17.0 | 119 | 0.3865 | 0.8571 |
| 0.0059 | 18.0 | 126 | 0.3603 | 0.8571 |
| 0.0051 | 19.0 | 133 | 0.3231 | 0.8571 |
### Framework versions
- Transformers 4.15.0
- Pytorch 1.10.2+cu102
- Datasets 1.18.2
- Tokenizers 0.10.3
| 5043549068bcd989d75bced351a79711 |
Zekunli/flan-t5-large-da-multiwoz_fs0.05 | Zekunli | t5 | 10 | 51 | transformers | 1 | text2text-generation | true | false | false | apache-2.0 | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 1,842 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# flan-t5-large-da-multiwoz_fs0.05
This model is a fine-tuned version of [google/flan-t5-large](https://huggingface.co/google/flan-t5-large) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.3984
- Accuracy: 37.0884
- Num: 367
- Gen Len: 15.5232
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 48
- seed: 1799
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 5
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy | Num | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:---:|:-------:|
| 1.734 | 0.56 | 100 | 0.6414 | 20.7578 | 367 | 12.6294 |
| 0.7022 | 1.12 | 200 | 0.4979 | 28.9542 | 367 | 14.5041 |
| 0.6029 | 1.69 | 300 | 0.4452 | 34.0597 | 367 | 14.7302 |
| 0.5516 | 2.25 | 400 | 0.4306 | 34.5725 | 367 | 14.6703 |
| 0.5069 | 2.81 | 500 | 0.4162 | 36.2341 | 367 | 14.4142 |
| 0.5128 | 3.37 | 600 | 0.4061 | 33.6886 | 367 | 14.5286 |
| 0.4721 | 3.93 | 700 | 0.4003 | 35.4136 | 367 | 14.6567 |
| 0.48 | 4.49 | 800 | 0.3984 | 37.0884 | 367 | 15.5232 |
### Framework versions
- Transformers 4.18.0
- Pytorch 1.10.0+cu111
- Datasets 2.5.1
- Tokenizers 0.12.1
| 5b2dc36a3c5fef3495f01343844f6bf3 |
akolov/vasko-style-second-try | akolov | null | 36 | 4 | diffusers | 0 | null | false | false | false | mit | null | null | null | 2 | 1 | 1 | 0 | 0 | 0 | 0 | [] | false | true | true | 2,696 | false | ### Vasko style second try on Stable Diffusion via Dreambooth
#### model by akolov
This is the Stable Diffusion model fine-tuned on the Vasko style second try concept, taught to Stable Diffusion with Dreambooth.
It can be used by modifying the `instance_prompt`: **a painting by vasko style**
You can also train your own concepts and upload them to the library by using [this notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_dreambooth_training.ipynb).
And you can run your new concept via `diffusers`: [Colab Notebook for Inference](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_dreambooth_inference.ipynb), [Spaces with the Public Concepts loaded](https://huggingface.co/spaces/sd-dreambooth-library/stable-diffusion-dreambooth-concepts)
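A minimal inference sketch with `diffusers` using the instance prompt above (this assumes the repository loads directly as a `StableDiffusionPipeline`; it is not part of the original card):

```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "akolov/vasko-style-second-try", torch_dtype=torch.float16
).to("cuda")

# Use the instance prompt the concept was trained with
image = pipe("a painting by vasko style, a quiet mountain village").images[0]
image.save("vasko_style_sample.png")
```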
Here are the images used for training this concept:


















| 24d950d32dfa9d6f64dbff3377e3c1cc |
sd-concepts-library/orientalist-art | sd-concepts-library | null | 18 | 0 | null | 8 | null | false | false | false | mit | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | [] | false | true | true | 2,146 | false | ### orientalist art on Stable Diffusion
This is the `<orientalist-art>` concept taught to Stable Diffusion via Textual Inversion. You can load this concept into the [Stable Conceptualizer](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/stable_conceptualizer_inference.ipynb) notebook. You can also train your own concepts and load them into the concept libraries using [this notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_textual_inversion_training.ipynb).
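Besides the notebooks above, the embedding can also be loaded directly in `diffusers` (a minimal sketch, assuming a diffusers version that provides `load_textual_inversion` and the v1-5 base checkpoint):

```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# Pull the learned <orientalist-art> token embedding into the pipeline
pipe.load_textual_inversion("sd-concepts-library/orientalist-art")

image = pipe("a portrait of a cat in the style of <orientalist-art>").images[0]
image.save("orientalist_art_sample.png")
```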
Here is the new concept you will be able to use as a `style`:













| b4765137e77dbf0f5645b538c316a6e5 |
MariaZafar/gpt2-finetuned-wikitext2 | MariaZafar | gpt2 | 9 | 2 | transformers | 0 | text-generation | false | true | false | mit | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_keras_callback'] | true | true | true | 3,171 | false |
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# MariaZafar/gpt2-finetuned-wikitext2
This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.7785
- Validation Loss: 3.7004
- Epoch: 49
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': 2e-05, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: float32
### Training results
| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 5.8858 | 7.5655 | 0 |
| 4.0619 | 5.8193 | 1 |
| 3.3766 | 4.9585 | 2 |
| 3.0686 | 4.5764 | 3 |
| 2.9022 | 4.3847 | 4 |
| 2.7838 | 4.2249 | 5 |
| 2.6997 | 4.1060 | 6 |
| 2.6154 | 4.0100 | 7 |
| 2.5575 | 3.9412 | 8 |
| 2.4933 | 3.8447 | 9 |
| 2.4397 | 3.7619 | 10 |
| 2.3835 | 3.7510 | 11 |
| 2.3403 | 3.6810 | 12 |
| 2.2924 | 3.6716 | 13 |
| 2.2513 | 3.6335 | 14 |
| 2.2031 | 3.6208 | 15 |
| 2.1619 | 3.5915 | 16 |
| 2.1234 | 3.5497 | 17 |
| 2.0792 | 3.5540 | 18 |
| 2.0398 | 3.5461 | 19 |
| 1.9976 | 3.5282 | 20 |
| 1.9577 | 3.5260 | 21 |
| 1.9176 | 3.5041 | 22 |
| 1.8745 | 3.4994 | 23 |
| 1.8304 | 3.5250 | 24 |
| 1.7881 | 3.4864 | 25 |
| 1.7423 | 3.4718 | 26 |
| 1.6993 | 3.5194 | 27 |
| 1.6503 | 3.5019 | 28 |
| 1.6025 | 3.5055 | 29 |
| 1.5500 | 3.5109 | 30 |
| 1.4964 | 3.5389 | 31 |
| 1.4448 | 3.5393 | 32 |
| 1.3954 | 3.5363 | 33 |
| 1.3464 | 3.5446 | 34 |
| 1.2978 | 3.5117 | 35 |
| 1.2494 | 3.5225 | 36 |
| 1.2004 | 3.5443 | 37 |
| 1.1534 | 3.5909 | 38 |
| 1.1124 | 3.5380 | 39 |
| 1.0709 | 3.6162 | 40 |
| 1.0265 | 3.6758 | 41 |
| 0.9936 | 3.6168 | 42 |
| 0.9590 | 3.6243 | 43 |
| 0.9238 | 3.6308 | 44 |
| 0.8886 | 3.6429 | 45 |
| 0.8635 | 3.7137 | 46 |
| 0.8352 | 3.6512 | 47 |
| 0.8050 | 3.7033 | 48 |
| 0.7785 | 3.7004 | 49 |
### Framework versions
- Transformers 4.19.2
- TensorFlow 2.8.0
- Datasets 2.2.1
- Tokenizers 0.12.1
| e3fcde686158c09b9e2a251c3ac7f548 |
anas-awadalla/roberta-large-few-shot-k-256-finetuned-squad-seed-4 | anas-awadalla | roberta | 17 | 3 | transformers | 0 | question-answering | true | false | false | mit | null | ['squad'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 983 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# roberta-large-few-shot-k-256-finetuned-squad-seed-4
This model is a fine-tuned version of [roberta-large](https://huggingface.co/roberta-large) on the squad dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 12
- eval_batch_size: 8
- seed: 4
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 10.0
### Training results
### Framework versions
- Transformers 4.20.0.dev0
- Pytorch 1.11.0+cu113
- Datasets 2.0.0
- Tokenizers 0.11.6
| aead99d853866c3cf3d19e5e9d99c223 |
jha2ee/StableDiffusion_finetuning_SisterIcon | jha2ee | null | 25 | 9 | diffusers | 0 | text-to-image | false | false | false | creativeml-openrail-m | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['text-to-image', 'stable-diffusion'] | false | true | true | 1,045 | false | ### Sister-icon-style Dreambooth model trained by jha2ee
### with [TheLastBen's fast-DreamBooth](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast-DreamBooth.ipynb) notebook
Test the concept via the A1111 Colab: [fast-Colab-A1111](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast_stable_diffusion_AUTOMATIC1111.ipynb)
Sample pictures of this concept:






| 692c69736b04045e4b5b4424a1390479 |
gonzpen/gbert-base-ft-edu-redux | gonzpen | bert | 12 | 1 | transformers | 0 | text-classification | true | false | false | mit | ['de'] | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | [] | false | true | true | 2,260 | false |
# German BERT base fine-tuned to predict educational requirements
This is a fine-tuned version of the German BERT base language model [deepset/gbert-base](https://huggingface.co/deepset/gbert-base). The multilabel task this model was trained on was to predict education requirements from job ad texts. The dataset used for training is not available to the public. The 7 labels in the task are (in the classification head order):
- `'Bachelor'`
- `'Berufsausbildung'`
- `'Doktorat oder äquivalent'`
- `'Höhere Berufsausbildung'`
- `'Master'`
- `'Sonstiges'`
- `'keine Ausbildungserfordernisse'`
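A minimal inference sketch that maps the model's sigmoid outputs onto these labels in the classification-head order above (assumed usage; not part of the original card):

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

labels = [
    "Bachelor", "Berufsausbildung", "Doktorat oder äquivalent",
    "Höhere Berufsausbildung", "Master", "Sonstiges",
    "keine Ausbildungserfordernisse",
]

model_id = "gonzpen/gbert-base-ft-edu-redux"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)

text = "Wir suchen eine:n Softwareentwickler:in mit abgeschlossenem Bachelorstudium."
inputs = tokenizer(text, return_tensors="pt", truncation=True)
with torch.no_grad():
    probs = torch.sigmoid(model(**inputs).logits)[0]

# Multilabel: keep every label whose probability exceeds an (assumed) 0.5 threshold
predicted = [label for label, p in zip(labels, probs) if p > 0.5]
print(predicted)
```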
The number of representatives of these labels in each of the splits (train/test/val) of the dataset is summarized in the following table:
| Label name | All data | Training | Validation | Test |
|------------|----------|----------|------------|------|
| Bachelor | 521 | 365 | 52 | 104 |
| Berufsausbildung | 1854 | 1298 | 185 | 371 |
| Doktorat oder äquivalent | 38 | 27 | 4 | 7 |
| Höhere Berufsausbildung | 564 | 395 | 56 | 113 |
| Master | 245 | 171 | 25 | 49 |
| Sonstiges | 819 | 573 | 82 | 164 |
| keine Ausbildungserfordernisse | 176 | 123 | 18 | 35 |
## Performance
Training consisted of [minimizing the binary cross-entropy (BCE)](https://en.wikipedia.org/wiki/Cross_entropy#Cross-entropy_minimization) loss between the model's predictions and the actual labels in the training set. During training, a weighted version of the [label ranking average precision (LRAP)](https://scikit-learn.org/stable/modules/model_evaluation.html#label-ranking-average-precision) was tracked for the testing set. LRAP measures what fraction of higher-ranked labels produced by the model were true labels. To account for the label imbalance, the rankings were weighted so that improperly ranked rare labels are penalized more than their more frequent counterparts. After training was complete, the model with highest weighted LRAP was saved.
```
LRAP: 0.93
```
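For illustration, the unweighted LRAP can be computed with scikit-learn; the label-weighted variant described above would need a custom extension of this (the values below are made up):

```python
import numpy as np
from sklearn.metrics import label_ranking_average_precision_score

# Toy example: 2 samples, 7 labels (multi-hot ground truth vs. model scores)
y_true = np.array([[1, 0, 0, 0, 0, 1, 0],
                   [0, 1, 0, 1, 0, 0, 0]])
y_score = np.array([[0.9, 0.1, 0.0, 0.2, 0.1, 0.7, 0.0],
                    [0.2, 0.8, 0.1, 0.3, 0.1, 0.2, 0.0]])

# Fraction of higher-ranked labels that are true labels, averaged over samples
print(label_ranking_average_precision_score(y_true, y_score))
```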
# See also:
- [deepset/gbert-base](https://huggingface.co/deepset/gbert-base)
- [deepset/gbert-large](https://huggingface.co/deepset/gbert-large)
- [gonzpen/gbert-large-ft-edu-redux](https://huggingface.co/gonzpen/gbert-large-ft-edu-redux)
## Authors
Rodrigo C. G. Pena: `rodrigocgp [at] gmail.com`
| ffd31ba3515beb91d186b8df49a0b3f8 |
lewtun/bert-finetuned-squad | lewtun | bert | 12 | 5 | transformers | 0 | question-answering | true | false | false | apache-2.0 | null | ['squad', 'lewtun/autoevaluate__squad'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 955 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-finetuned-squad
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the squad dataset.
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1
- mixed_precision_training: Native AMP
### Training results
### Framework versions
- Transformers 4.12.3
- Pytorch 1.10.0+cu111
- Datasets 1.15.1
- Tokenizers 0.10.3
| 190d29ce3faa7ff9eba7219dbad1c53e |
nandysoham16/3-clustered_aug | nandysoham16 | distilbert | 8 | 0 | keras | 0 | null | false | true | false | mit | ['en'] | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | [] | false | true | true | 5,161 | false |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
# Model Details
## Model Description
<!-- Provide a longer summary of what this model is. -->
['Frédéric_Chopin', 'Prime_minister', 'Arnold_Schwarzenegger', 'Alexander_Graham_Bell', 'Virgil', 'Mary_(mother_of_Jesus)', 'John,_King_of_England', 'Athanasius_of_Alexandria', 'Bill_%26_Melinda_Gates_Foundation', 'Edmund_Burke', 'Pope_Paul_VI', 'Gamal_Abdel_Nasser', 'Pope_John_XXIII', 'John_von_Neumann', 'George_VI', 'Karl_Popper', 'Friedrich_Hayek', 'John_Kerry', 'Richard_Feynman', 'Muammar_Gaddafi', 'Steven_Spielberg', 'Alfred_North_Whitehead', 'Party_leaders_of_the_United_States_House_of_Representatives', 'Dwight_D._Eisenhower']
- **Developed by:** nandysoham
- **Shared by [optional]:** [More Information Needed]
- **Model type:** [More Information Needed]
- **Language(s) (NLP):** en
- **License:** mit
- **Finetuned from model [optional]:** [More Information Needed]
## Model Sources [optional]
<!-- Provide the basic links for the model. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
# Uses
<!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. -->
## Direct Use
<!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. -->
[More Information Needed]
## Downstream Use [optional]
<!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app -->
[More Information Needed]
## Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the model will not work well for. -->
[More Information Needed]
# Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
## Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations.
## How to Get Started with the Model
Use the code below to get started with the model.
[More Information Needed]
# Training Details
## Training Data
<!-- This should link to a Data Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. -->
[More Information Needed]
## Training Procedure [optional]
<!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. -->
### Preprocessing
[More Information Needed]
### Speeds, Sizes, Times
<!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. -->
[More Information Needed]
# Evaluation
<!-- This section describes the evaluation protocols and provides the results. -->
## Testing Data, Factors & Metrics
### Testing Data
<!-- This should link to a Data Card if possible. -->
[More Information Needed]
### Factors
<!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. -->
[More Information Needed]
### Metrics
<!-- These are the evaluation metrics being used, ideally with a description of why. -->
[More Information Needed]
## Results
[More Information Needed]
### Summary
# Model Examination [optional]
<!-- Relevant interpretability work for the model goes here -->
[More Information Needed]
# Environmental Impact
<!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly -->
Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700).
- **Hardware Type:** [More Information Needed]
- **Hours used:** [More Information Needed]
- **Cloud Provider:** [More Information Needed]
- **Compute Region:** [More Information Needed]
- **Carbon Emitted:** [More Information Needed]
# Technical Specifications [optional]
## Model Architecture and Objective
[More Information Needed]
## Compute Infrastructure
[More Information Needed]
### Hardware
[More Information Needed]
### Software
[More Information Needed]
# Citation [optional]
<!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
# Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. -->
[More Information Needed]
# More Information [optional]
[More Information Needed]
# Model Card Authors [optional]
[More Information Needed]
# Model Card Contact
[More Information Needed]
| 159a7c4791966f59e3726f4e50ff8771 |
ChrisZeng/bart-base-detox | ChrisZeng | bart | 291 | 3 | transformers | 0 | text2text-generation | true | false | false | apache-2.0 | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 2,179 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bart-base-detox
This model is a fine-tuned version of [facebook/bart-base](https://huggingface.co/facebook/bart-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1819
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 4
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 16
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 20
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 0.5633 | 1.0 | 135 | 0.2524 |
| 0.2589 | 2.0 | 270 | 0.2193 |
| 0.2307 | 3.0 | 405 | 0.1993 |
| 0.2171 | 4.0 | 540 | 0.2002 |
| 0.2027 | 5.0 | 675 | 0.1937 |
| 0.1946 | 6.0 | 810 | 0.1972 |
| 0.1874 | 7.0 | 945 | 0.1917 |
| 0.1853 | 8.0 | 1080 | 0.1868 |
| 0.1811 | 9.0 | 1215 | 0.1890 |
| 0.1776 | 10.0 | 1350 | 0.1871 |
| 0.1798 | 11.0 | 1485 | 0.1858 |
| 0.1745 | 12.0 | 1620 | 0.1820 |
| 0.1689 | 13.0 | 1755 | 0.1827 |
| 0.1707 | 14.0 | 1890 | 0.1843 |
| 0.1658 | 15.0 | 2025 | 0.1834 |
| 0.1647 | 16.0 | 2160 | 0.1820 |
| 0.1645 | 17.0 | 2295 | 0.1837 |
| 0.1633 | 18.0 | 2430 | 0.1814 |
| 0.1612 | 19.0 | 2565 | 0.1815 |
| 0.1603 | 20.0 | 2700 | 0.1819 |
### Framework versions
- Transformers 4.16.2
- Pytorch 1.12.0.dev20220429
- Datasets 2.1.0
- Tokenizers 0.10.3
| 6ea318453c3881f7c88babcba4825179 |
gary109/wav2vec2-base-MIR_ST500-demo-colab | gary109 | wav2vec2 | 22 | 6 | transformers | 0 | automatic-speech-recognition | true | false | false | apache-2.0 | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 3,058 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-base-MIR_ST500-demo-colab
This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 2.7360
- Wer: 0.9837
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 32
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 500
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:------:|:----:|:---------------:|:------:|
| 101.0917 | 16.67 | 100 | 18.8979 | 0.8208 |
| 15.5054 | 33.33 | 200 | 10.9184 | 0.8208 |
| 10.1879 | 50.0 | 300 | 7.6480 | 0.8208 |
| 6.777 | 66.67 | 400 | 3.5386 | 1.0 |
| 3.0546 | 83.33 | 500 | 2.8794 | 1.0 |
| 2.8661 | 100.0 | 600 | 2.8405 | 1.0 |
| 2.847 | 116.67 | 700 | 2.8554 | 1.0 |
| 2.7661 | 133.33 | 800 | 2.6343 | 1.0 |
| 2.3474 | 150.0 | 900 | 2.7464 | 1.0 |
| 2.2464 | 166.67 | 1000 | 2.3565 | 1.0 |
| 2.207 | 183.33 | 1100 | 2.8854 | 1.0 |
| 2.3138 | 200.0 | 1200 | 2.5868 | 1.0 |
| 2.259 | 216.67 | 1300 | 2.6530 | 1.0 |
| 2.1667 | 233.33 | 1400 | 2.4921 | 1.0 |
| 2.1268 | 250.0 | 1500 | 2.5435 | 1.0 |
| 2.1089 | 266.67 | 1600 | 2.5444 | 1.0 |
| 2.0845 | 283.33 | 1700 | 2.6796 | 1.0 |
| 2.0672 | 300.0 | 1800 | 2.5824 | 1.0 |
| 2.055 | 316.67 | 1900 | 2.4631 | 1.0 |
| 2.0317 | 333.33 | 2000 | 2.5751 | 1.0 |
| 2.0141 | 350.0 | 2100 | 2.5627 | 1.0 |
| 1.9914 | 366.67 | 2200 | 2.6132 | 1.0 |
| 1.9489 | 383.33 | 2300 | 2.7527 | 1.0 |
| 1.9146 | 400.0 | 2400 | 2.6121 | 0.9935 |
| 1.893 | 416.67 | 2500 | 2.7110 | 0.9902 |
| 1.845 | 433.33 | 2600 | 2.7410 | 0.9967 |
| 1.8095 | 450.0 | 2700 | 2.7013 | 0.9935 |
| 1.7708 | 466.67 | 2800 | 2.7719 | 0.9935 |
| 1.7224 | 483.33 | 2900 | 2.7740 | 0.9837 |
| 1.6961 | 500.0 | 3000 | 2.7360 | 0.9837 |
### Framework versions
- Transformers 4.11.3
- Pytorch 1.9.1+cu102
- Datasets 1.18.3
- Tokenizers 0.10.3
| 9b527a59f66dfd9dee268449cb1b9133 |
nguyenkhoa2407/bert-base-cased-NER-favsbot | nguyenkhoa2407 | bert | 10 | 5 | transformers | 0 | token-classification | true | false | false | apache-2.0 | null | ['favsbot'] | null | 2 | 2 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 3,086 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-cased-NER-favsbot
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the favsbot dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1680
- Precision: 0.8462
- Recall: 0.88
- F1: 0.8627
- Accuracy: 0.9444
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1.5e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 20
### Training results
| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 1.0 | 7 | 1.8761 | 0.0 | 0.0 | 0.0 | 0.5833 |
| No log | 2.0 | 14 | 1.3530 | 0.0 | 0.0 | 0.0 | 0.5972 |
| No log | 3.0 | 21 | 1.0400 | 1.0 | 0.12 | 0.2143 | 0.6389 |
| No log | 4.0 | 28 | 0.7987 | 0.7895 | 0.6 | 0.6818 | 0.8194 |
| No log | 5.0 | 35 | 0.6055 | 0.85 | 0.68 | 0.7556 | 0.875 |
| No log | 6.0 | 42 | 0.4749 | 0.8696 | 0.8 | 0.8333 | 0.9167 |
| No log | 7.0 | 49 | 0.3838 | 0.84 | 0.84 | 0.8400 | 0.9444 |
| No log | 8.0 | 56 | 0.3084 | 0.88 | 0.88 | 0.88 | 0.9583 |
| No log | 9.0 | 63 | 0.2643 | 0.88 | 0.88 | 0.88 | 0.9583 |
| No log | 10.0 | 70 | 0.2360 | 0.8462 | 0.88 | 0.8627 | 0.9444 |
| No log | 11.0 | 77 | 0.2168 | 0.8462 | 0.88 | 0.8627 | 0.9444 |
| No log | 12.0 | 84 | 0.2031 | 0.8462 | 0.88 | 0.8627 | 0.9444 |
| No log | 13.0 | 91 | 0.1937 | 0.88 | 0.88 | 0.88 | 0.9583 |
| No log | 14.0 | 98 | 0.1853 | 0.8462 | 0.88 | 0.8627 | 0.9444 |
| No log | 15.0 | 105 | 0.1791 | 0.8462 | 0.88 | 0.8627 | 0.9444 |
| No log | 16.0 | 112 | 0.1757 | 0.8462 | 0.88 | 0.8627 | 0.9444 |
| No log | 17.0 | 119 | 0.1718 | 0.8462 | 0.88 | 0.8627 | 0.9444 |
| No log | 18.0 | 126 | 0.1698 | 0.8148 | 0.88 | 0.8462 | 0.9444 |
| No log | 19.0 | 133 | 0.1686 | 0.8148 | 0.88 | 0.8462 | 0.9444 |
| No log | 20.0 | 140 | 0.1680 | 0.8462 | 0.88 | 0.8627 | 0.9444 |
### Framework versions
- Transformers 4.21.1
- Pytorch 1.12.1
- Datasets 2.4.0
- Tokenizers 0.12.1
| 11da76f192a2c6073afa35784dc4695e |
Jzuluaga/wav2vec2-large-960h-lv60-self-en-atc-uwb-atcc-and-atcosim | Jzuluaga | wav2vec2 | 20 | 22 | transformers | 0 | automatic-speech-recognition | true | false | false | apache-2.0 | ['en'] | ['Jzuluaga/atcosim_corpus', 'Jzuluaga/uwb_atcc'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['audio', 'automatic-speech-recognition', 'en-atc', 'en', 'generated_from_trainer'] | true | true | true | 8,895 | false |
# wav2vec2-large-960h-lv60-self-en-atc-uwb-atcc-and-atcosim
This model is a fine-tuned version of [facebook/wav2vec2-large-960h-lv60-self](https://huggingface.co/facebook/wav2vec2-large-960h-lv60-self) on two corpus:
- [UWB-ATCC corpus](https://huggingface.co/datasets/Jzuluaga/uwb_atcc), and
- [ATCOSIM corpus](https://huggingface.co/datasets/Jzuluaga/atcosim_corpus).
<a href="https://colab.research.google.com/github/idiap/w2v2-air-traffic/blob/main/src/eval_xlsr_atc_model.ipynb">
<img alt="GitHub" src="https://colab.research.google.com/assets/colab-badge.svg\">
</a>
<a href="https://github.com/idiap/w2v2-air-traffic">
<img alt="GitHub" src="https://img.shields.io/badge/GitHub-Open%20source-green\">
</a>
It achieves the following results on the evaluation set (two tests sets joined together: UWB-ATCC and ATCOSIM):
- Loss: 0.4042
- Wer: 0.1049
Paper: [How Does Pre-trained Wav2Vec 2.0 Perform on Domain Shifted ASR? An Extensive Benchmark on Air Traffic Control Communications](https://arxiv.org/abs/2203.16822).
Authors: Juan Zuluaga-Gomez, Amrutha Prasad, Iuliia Nigmatulina, Saeed Sarfjoo, Petr Motlicek, Matthias Kleinert, Hartmut Helmke, Oliver Ohneiser, Qingran Zhan
Abstract: Recent work on self-supervised pre-training focuses on leveraging large-scale unlabeled speech data to build robust end-to-end (E2E) acoustic models (AM) that can be later fine-tuned on downstream tasks e.g., automatic speech recognition (ASR). Yet, few works investigated the impact on performance when the data properties substantially differ between the pre-training and fine-tuning phases, termed domain shift. We target this scenario by analyzing the robustness of Wav2Vec 2.0 and XLS-R models on downstream ASR for a completely unseen domain, air traffic control (ATC) communications. We benchmark these two models on several open-source and challenging ATC databases with signal-to-noise ratio between 5 and 20 dB. Relative word error rate (WER) reductions between 20% to 40% are obtained in comparison to hybrid-based ASR baselines by only fine-tuning E2E acoustic models with a smaller fraction of labeled data. We analyze WERs on the low-resource scenario and gender bias carried by one ATC dataset.
Code — GitHub repository: https://github.com/idiap/w2v2-air-traffic
## Usage
You can use our Google Colab notebook to run and evaluate our model: https://github.com/idiap/w2v2-air-traffic/blob/master/src/eval_xlsr_atc_model.ipynb
## Intended uses & limitations
This model was fine-tuned on air traffic control data. We don't expect it to keep the same performance on other datasets, e.g., LibriSpeech or CommonVoice.
## Training and evaluation data
See Table 1 (page 3) in our paper: [How Does Pre-trained Wav2Vec 2.0 Perform on Domain Shifted ASR? An Extensive Benchmark on Air Traffic Control Communications](https://arxiv.org/abs/2203.16822). There we describe the data partitions used for this model.
- We use the UWB-ATCC + ATCOSIM corpus to fine-tune this model. You can download the raw data here:
- https://lindat.mff.cuni.cz/repository/xmlui/handle/11858/00-097C-0000-0001-CCA1-0 and,
- https://www.spsc.tugraz.at/databases-and-tools/atcosim-air-traffic-control-simulation-speech-corpus.html
- However, do not worry, we have prepared the database in `Datasets format`:
- Here, [UWB-ATCC corpus on HuggingFace](https://huggingface.co/datasets/Jzuluaga/uwb_atcc).
- Here: [ATCOSIM CORPUS on HuggingFace](https://huggingface.co/datasets/Jzuluaga/atcosim_corpus).
- If you want to prepare a database in HuggingFace format, you can follow the data loader script in: [data_loader_atc.py](https://huggingface.co/datasets/Jzuluaga/uwb_atcc/blob/main/atc_data_loader.py).
## Writing your own inference script
If you use a language model, you need to install the KenLM bindings with:
```bash
conda activate your_environment
pip install https://github.com/kpu/kenlm/archive/master.zip
```
The snippet of code:
```python
from datasets import load_dataset, load_metric, Audio
import torch
from transformers import AutoModelForCTC, Wav2Vec2Processor, Wav2Vec2ProcessorWithLM
import torchaudio.functional as F
USE_LM = False
DATASET_ID = "Jzuluaga/uwb_atcc"
MODEL_ID = "Jzuluaga/wav2vec2-large-960h-lv60-self-en-atc-uwb-atcc-and-atcosim"
# 1. Load the dataset
# we only load the 'test' partition, however, if you want to load the 'train' partition, you can change it accordingly
uwb_atcc_corpus_test = load_dataset(DATASET_ID, "test", split="test")
# 2. Load the model
model = AutoModelForCTC.from_pretrained(MODEL_ID)
# 3. Load the processor; we offer support with LM, which should yield better results
if USE_LM:
processor = Wav2Vec2ProcessorWithLM.from_pretrained(MODEL_ID)
else:
processor = Wav2Vec2Processor.from_pretrained(MODEL_ID)
# 4. Format the test sample
sample = next(iter(uwb_atcc_corpus_test))
file_sampling_rate = sample['audio']['sampling_rate']
# resample if necessary
if file_sampling_rate != 16000:
resampled_audio = F.resample(torch.tensor(sample["audio"]["array"]), file_sampling_rate, 16000).numpy()
else:
resampled_audio = torch.tensor(sample["audio"]["array"]).numpy()
input_values = processor(resampled_audio, return_tensors="pt").input_values
# 5. Run the forward pass in the model
with torch.no_grad():
logits = model(input_values).logits
# get the transcription with processor
if USE_LM:
transcription = processor.batch_decode(logits.numpy()).text
else:
pred_ids = torch.argmax(logits, dim=-1)
transcription = processor.batch_decode(pred_ids)
# print the output
print(transcription)
```
# Cite us
If you use this code for your research, please cite our paper with:
```
@article{zuluaga2022how,
title={How Does Pre-trained Wav2Vec2. 0 Perform on Domain Shifted ASR? An Extensive Benchmark on Air Traffic Control Communications},
author={Zuluaga-Gomez, Juan and Prasad, Amrutha and Nigmatulina, Iuliia and Sarfjoo, Saeed and others},
journal={IEEE Spoken Language Technology Workshop (SLT), Doha, Qatar},
year={2022}
}
```
and,
```
@article{zuluaga2022bertraffic,
title={BERTraffic: BERT-based Joint Speaker Role and Speaker Change Detection for Air Traffic Control Communications},
author={Zuluaga-Gomez, Juan and Sarfjoo, Seyyed Saeed and Prasad, Amrutha and others},
journal={IEEE Spoken Language Technology Workshop (SLT), Doha, Qatar},
year={2022}
}
```
and,
```
@article{zuluaga2022atco2,
title={ATCO2 corpus: A Large-Scale Dataset for Research on Automatic Speech Recognition and Natural Language Understanding of Air Traffic Control Communications},
author={Zuluaga-Gomez, Juan and Vesel{\`y}, Karel and Sz{\"o}ke, Igor and Motlicek, Petr and others},
journal={arXiv preprint arXiv:2211.04054},
year={2022}
}
```
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 24
- eval_batch_size: 12
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- training_steps: 10000
- mixed_precision_training: Native AMP
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:-----:|:---------------:|:------:|
| No log | 0.63 | 500 | 2.2638 | 0.9359 |
| 2.6089 | 1.27 | 1000 | 0.7277 | 0.2407 |
| 2.6089 | 1.9 | 1500 | 0.5800 | 0.1745 |
| 0.6019 | 2.53 | 2000 | 0.4887 | 0.1514 |
| 0.6019 | 3.17 | 2500 | 0.4666 | 0.1421 |
| 0.4722 | 3.8 | 3000 | 0.4426 | 0.1451 |
| 0.4722 | 4.44 | 3500 | 0.4176 | 0.1248 |
| 0.4278 | 5.07 | 4000 | 0.4365 | 0.1239 |
| 0.4278 | 5.7 | 4500 | 0.3816 | 0.1177 |
| 0.369 | 6.34 | 5000 | 0.4113 | 0.1172 |
| 0.369 | 6.97 | 5500 | 0.3863 | 0.1230 |
| 0.341 | 7.6 | 6000 | 0.3850 | 0.1116 |
| 0.341 | 8.24 | 6500 | 0.4014 | 0.1141 |
| 0.3119 | 8.87 | 7000 | 0.3953 | 0.1078 |
| 0.3119 | 9.51 | 7500 | 0.4018 | 0.1080 |
| 0.3008 | 10.14 | 8000 | 0.3964 | 0.1074 |
| 0.3008 | 10.77 | 8500 | 0.3917 | 0.1078 |
| 0.2741 | 11.41 | 9000 | 0.3961 | 0.1057 |
| 0.2741 | 12.04 | 9500 | 0.3974 | 0.1053 |
| 0.2531 | 12.67 | 10000 | 0.4042 | 0.1049 |
### Framework versions
- Transformers 4.24.0
- Pytorch 1.13.0+cu117
- Datasets 2.6.1
- Tokenizers 0.13.2
| 6a75d96037ba27f552aaba42f548c969 |
BearlyWorkingYT/OPT-125M-Christmas-List-Generator | BearlyWorkingYT | opt | 8 | 1 | transformers | 1 | text-generation | true | false | false | other | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | [] | false | true | true | 411 | false |
This is the model trained for this short video:
https://www.youtube.com/shorts/hpIUKRTopAY
This AI generates Christmas gift ideas.
This model was trained on a small dataset webscraped from the Toys-R-Us website.
This dataset consisted of search terms and the names of the best selling items corresponding to said search terms.
In total, 31 term-list pair training examples were used to train this model.
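The card does not document the exact prompt format, so the sketch below simply feeds a search term as the prompt through the standard `transformers` text-generation pipeline (assumed usage):

```python
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="BearlyWorkingYT/OPT-125M-Christmas-List-Generator",
)

# Prompt with a search term and let the model continue with gift ideas
print(generator("lego sets for kids", max_new_tokens=50, do_sample=True)[0]["generated_text"])
```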
| adcd7b05c2c31991ae83074a3e40b84d |
Edresson/wav2vec2-large-100k-voxpopuli-ft-TTS-Dataset-plus-data-augmentation-portuguese | Edresson | wav2vec2 | 14 | 12 | transformers | 1 | automatic-speech-recognition | true | false | false | apache-2.0 | ['pt'] | ['Common Voice'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['audio', 'speech', 'wav2vec2', 'pt', 'portuguese-speech-corpus', 'automatic-speech-recognition', 'speech', 'PyTorch'] | false | true | true | 1,538 | false |
# Wav2vec2 Large 100k Voxpopuli fine-tuned with a single-speaker dataset plus Data Augmentation in Portuguese
[Wav2vec2 Large 100k Voxpopuli](https://huggingface.co/facebook/wav2vec2-large-100k-voxpopuli) fine-tuned in Portuguese using a single-speaker dataset plus a data augmentation method based on TTS and voice conversion.
# Use this model
```python
from transformers import AutoTokenizer, Wav2Vec2ForCTC
tokenizer = AutoTokenizer.from_pretrained("Edresson/wav2vec2-large-100k-voxpopuli-ft-TTS-Dataset-plus-data-augmentation-portuguese")
model = Wav2Vec2ForCTC.from_pretrained("Edresson/wav2vec2-large-100k-voxpopuli-ft-TTS-Dataset-plus-data-augmentation-portuguese")
```
# Results
For the results check the [paper](https://arxiv.org/abs/2204.00618)
# Example test with Common Voice Dataset
```python
# The imports and the punctuation regex below are not in the original card;
# they are assumed here so that the snippet is self-contained.
import re

import torchaudio
from datasets import load_dataset

chars_to_ignore_regex = r'[,?.!;:"\-]'  # assumed set of characters to strip

dataset = load_dataset("common_voice", "pt", split="test", data_dir="./cv-corpus-7.0-2021-07-21")
resampler = torchaudio.transforms.Resample(orig_freq=48_000, new_freq=16_000)
def map_to_array(batch):
speech, _ = torchaudio.load(batch["path"])
batch["speech"] = resampler.forward(speech.squeeze(0)).numpy()
batch["sampling_rate"] = resampler.new_freq
batch["sentence"] = re.sub(chars_to_ignore_regex, '', batch["sentence"]).lower().replace("’", "'")
return batch
```
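The next snippet references `map_to_pred` and a `wer` metric that are not defined in the card; below is a minimal sketch of what they could look like (assumed, loading a `Wav2Vec2Processor` for decoding and the `datasets` WER metric):

```python
import torch
from datasets import load_metric
from transformers import Wav2Vec2Processor

wer = load_metric("wer")
processor = Wav2Vec2Processor.from_pretrained(
    "Edresson/wav2vec2-large-100k-voxpopuli-ft-TTS-Dataset-plus-data-augmentation-portuguese"
)

def map_to_pred(batch):
    # Encode the resampled speech, run CTC decoding, and keep the reference text
    inputs = processor(batch["speech"], sampling_rate=16_000, return_tensors="pt", padding=True)
    with torch.no_grad():
        logits = model(inputs.input_values).logits
    pred_ids = torch.argmax(logits, dim=-1)
    batch["predicted"] = processor.batch_decode(pred_ids)
    batch["target"] = batch["sentence"]
    return batch
```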
```python
ds = dataset.map(map_to_array)
result = ds.map(map_to_pred, batched=True, batch_size=1, remove_columns=list(ds.features.keys()))
print(wer.compute(predictions=result["predicted"], references=result["target"]))
```
| 4892407028e4566a963a1e4e12f66801 |
AlekseyKorshuk/6.7b-dalio-book-handwritten-io-constant-1e-6 | AlekseyKorshuk | opt | 13 | 2 | transformers | 0 | text-generation | true | false | false | other | null | ['AlekseyKorshuk/dalio-book-handwritten-io-sorted'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 1,869 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# 6.7b-dalio-book-handwritten-io-constant-1e-6
This model is a fine-tuned version of [facebook/opt-6.7b](https://huggingface.co/facebook/opt-6.7b) on the AlekseyKorshuk/dalio-book-handwritten-io-sorted dataset.
It achieves the following results on the evaluation set:
- Loss: 2.3633
- Accuracy: 0.3103
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 1e-06
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- distributed_type: multi-GPU
- num_devices: 8
- total_train_batch_size: 8
- total_eval_batch_size: 8
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: constant
- num_epochs: 1.0
### Training results
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 2.6396 | 0.11 | 6 | 2.5039 | 0.2989 |
| 2.5754 | 0.21 | 12 | 2.4902 | 0.2999 |
| 2.5859 | 0.32 | 18 | 2.4648 | 0.3018 |
| 2.5432 | 0.43 | 24 | 2.4434 | 0.3035 |
| 2.472 | 0.54 | 30 | 2.4238 | 0.3053 |
| 2.5184 | 0.64 | 36 | 2.4082 | 0.3064 |
| 2.4524 | 0.75 | 42 | 2.3926 | 0.3078 |
| 2.3876 | 0.86 | 48 | 2.3789 | 0.3092 |
| 2.4456 | 0.96 | 54 | 2.3633 | 0.3103 |
### Framework versions
- Transformers 4.25.0.dev0
- Pytorch 1.12.1+cu113
- Datasets 2.3.2
- Tokenizers 0.12.1
| be72f62d2708a75b36a573d2d3c8bce6 |
kasrahabib/XXX08_02_23__-bucket-finetunned | kasrahabib | bert | 12 | 19 | transformers | 0 | text-classification | false | true | false | apache-2.0 | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_keras_callback'] | true | true | true | 1,834 | false |
<!-- This model card has been generated automatically according to the information Keras had access to. You should
probably proofread and complete it, then remove this comment. -->
# kasrahabib/XXX08_02_23__-bucket-finetunned
This model is a fine-tuned version of [kasrahabib/after_training_rus_combined_relabeled_data_from-bucket-finetunned_batch_size_16](https://huggingface.co/kasrahabib/after_training_rus_combined_relabeled_data_from-bucket-finetunned_batch_size_16) on an unknown dataset.
It achieves the following results on the evaluation set:
- Train Loss: 0.0316
- Validation Loss: 0.3645
- Epoch: 9
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- optimizer: {'name': 'Adam', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 8010, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False}
- training_precision: float32
### Training results
| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 0.3976 | 0.3499 | 0 |
| 0.2199 | 0.3588 | 1 |
| 0.1392 | 0.3404 | 2 |
| 0.0962 | 0.3372 | 3 |
| 0.0684 | 0.3182 | 4 |
| 0.0595 | 0.3414 | 5 |
| 0.0411 | 0.3519 | 6 |
| 0.0394 | 0.3500 | 7 |
| 0.0338 | 0.3647 | 8 |
| 0.0316 | 0.3645 | 9 |
### Framework versions
- Transformers 4.26.0
- TensorFlow 2.9.2
- Datasets 2.9.0
- Tokenizers 0.13.2
| 62652ec148c8ed2373b0ea9409131088 |
sd-concepts-library/ransom | sd-concepts-library | null | 13 | 0 | null | 0 | null | false | false | false | mit | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | [] | false | true | true | 1,390 | false | ### ransom on Stable Diffusion
This is the `<ransom>` concept taught to Stable Diffusion via Textual Inversion. You can load this concept into the [Stable Conceptualizer](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/stable_conceptualizer_inference.ipynb) notebook. You can also train your own concepts and load them into the concept libraries using [this notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_textual_inversion_training.ipynb).
Here is the new concept you will be able to use as an `object`:








| 67a07d642206bc1e63c0f6de7955207d |
scasutt/wav2vec2-large-xlsr-53_toy_train_data_masked_audio | scasutt | wav2vec2 | 7 | 7 | transformers | 0 | automatic-speech-recognition | true | false | false | apache-2.0 | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 2,420 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# wav2vec2-large-xlsr-53_toy_train_data_masked_audio
This model is a fine-tuned version of [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6445
- Wer: 0.4938
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 16
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 20
### Training results
| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 3.3761 | 1.05 | 250 | 3.4022 | 0.9954 |
| 3.0858 | 2.1 | 500 | 3.4684 | 0.9954 |
| 2.6302 | 3.15 | 750 | 1.7989 | 0.9865 |
| 1.1292 | 4.2 | 1000 | 0.8558 | 0.7355 |
| 0.8371 | 5.25 | 1250 | 0.7319 | 0.6621 |
| 0.5992 | 6.3 | 1500 | 0.6848 | 0.6147 |
| 0.5189 | 7.35 | 1750 | 0.6522 | 0.5742 |
| 0.454 | 8.4 | 2000 | 0.6601 | 0.5531 |
| 0.3896 | 9.45 | 2250 | 0.6138 | 0.5439 |
| 0.3678 | 10.5 | 2500 | 0.6436 | 0.5320 |
| 0.3232 | 11.55 | 2750 | 0.5920 | 0.5174 |
| 0.2926 | 12.6 | 3000 | 0.6615 | 0.5107 |
| 0.3041 | 13.65 | 3250 | 0.6311 | 0.5015 |
| 0.2882 | 14.7 | 3500 | 0.6182 | 0.5004 |
| 0.2868 | 15.75 | 3750 | 0.6266 | 0.4943 |
| 0.2508 | 16.81 | 4000 | 0.6587 | 0.4965 |
| 0.2563 | 17.86 | 4250 | 0.6634 | 0.4939 |
| 0.2213 | 18.91 | 4500 | 0.6441 | 0.4925 |
| 0.2255 | 19.96 | 4750 | 0.6445 | 0.4938 |
### Framework versions
- Transformers 4.17.0
- Pytorch 1.11.0+cu102
- Datasets 2.0.0
- Tokenizers 0.11.6
| 382f4f84a340136ba80160f14c28e84b |
DavLeonardo/sofi | DavLeonardo | null | 23 | 4 | diffusers | 0 | null | false | false | false | mit | null | null | null | 2 | 2 | 0 | 0 | 0 | 0 | 0 | [] | false | true | true | 1,258 | false | ### sofi on Stable Diffusion via Dreambooth
#### model by DavLeonardo
This is the Stable Diffusion model fine-tuned on the sofi concept, taught to Stable Diffusion with Dreambooth.
It can be used by modifying the `instance_prompt`: **sofi**
You can also train your own concepts and upload them to the library by using [this notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_dreambooth_training.ipynb).
And you can run your new concept via `diffusers`: [Colab Notebook for Inference](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_dreambooth_inference.ipynb), [Spaces with the Public Concepts loaded](https://huggingface.co/spaces/sd-dreambooth-library/stable-diffusion-dreambooth-concepts)
Here are the images used for training this concept:





| 196b4ae844a4d83a6cf99490db51bfe4 |
victorlee071200/bert-base-cased-finetuned-squad_v2 | victorlee071200 | bert | 10 | 7 | transformers | 0 | question-answering | true | false | false | apache-2.0 | null | ['squad_v2'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['generated_from_trainer'] | true | true | true | 1,266 | false |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-cased-finetuned-squad_v2
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the squad_v2 dataset.
It achieves the following results on the evaluation set:
- Loss: 1.3226
## Model description
More information needed
## Intended uses & limitations
More information needed
## Training and evaluation data
More information needed
## Training procedure
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3
### Training results
| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 1.03 | 1.0 | 8255 | 1.1334 |
| 0.7511 | 2.0 | 16510 | 1.1299 |
| 0.5376 | 3.0 | 24765 | 1.3226 |
### Framework versions
- Transformers 4.19.2
- Pytorch 1.11.0+cu113
- Datasets 2.2.2
- Tokenizers 0.12.1
| 15ee4cd29112f9887f6f1debbddd1842 |
HusseinHE/alisks | HusseinHE | null | 31 | 7 | diffusers | 0 | text-to-image | false | false | false | creativeml-openrail-m | null | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['text-to-image'] | false | true | true | 431 | false | ### alisks Dreambooth model trained by HusseinHE with [Hugging Face Dreambooth Training Space](https://huggingface.co/spaces/multimodalart/dreambooth-training) with the v1-5 base model
You can run your new concept via `diffusers`: [Colab Notebook for Inference](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_dreambooth_inference.ipynb). Don't forget to use the concept prompts!
| 1de68c3cb0921caa2d12035a082ca7bd |
BlinkDL/rwkv-3-pile-1b5 | BlinkDL | null | 4 | 0 | null | 5 | text-generation | true | false | false | apache-2.0 | ['en'] | ['The Pile'] | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['pytorch', 'text-generation', 'causal-lm', 'rwkv'] | false | true | true | 797 | false |
# RWKV-3 1.5B
## Model Description
RWKV-3 1.5B is an L24-D2048 causal language model trained on the Pile. See https://github.com/BlinkDL/RWKV-LM for details.
RWKV-4 1.5B is out: https://huggingface.co/BlinkDL/rwkv-4-pile-1b5
At this moment you have to use my Github code (https://github.com/BlinkDL/RWKV-v2-RNN-Pile) to run it.
ctx_len = 896
n_layer = 24
n_embd = 2048
Preview checkpoint: RWKV-3-Pile-20220723-3542.pth : Trained on the Pile for 127B tokens.
* Pile loss 2.102
* LAMBADA ppl 7.52, acc 54.71%
* PIQA acc 71.11%
* SC2016 acc 67.24%
* Hellaswag acc_norm 50.45%
Preview checkpoint: 20220708-1905.pth : Trained on the Pile for 68B tokens.
* Pile loss 2.148
* LAMBADA ppl 8.41, acc 53.17%
* PIQA acc 69.64%
* SC2016 acc 67.08%
* Hellaswag acc_norm 48.20%
(I am still training it) | 5bda3cceca40172ba6cdda1feb8f592d |
IDEA-CCNL/Erlangshen-ZEN1-224M-Chinese | IDEA-CCNL | null | 6 | 0 | transformers | 0 | null | true | false | false | apache-2.0 | ['zh'] | null | null | 0 | 0 | 0 | 0 | 0 | 0 | 0 | ['ZEN', 'chinese'] | false | true | true | 4,776 | false | # Erlangshen-ZEN1-224M-Chinese
- Github: [Fengshenbang-LM](https://github.com/IDEA-CCNL/Fengshenbang-LM)
- Docs: [Fengshenbang-Docs](https://fengshenbang-doc.readthedocs.io/)
## 简介 Brief Introduction
善于处理NLU任务,使用了N-gram编码增强文本语义,2.24亿参数量的ZEN1
The ZEN1 model, which uses N-grams to enhance text semantics and has 224M parameters, is adept at NLU tasks.
## 模型分类 Model Taxonomy
| 需求 Demand | 任务 Task | 系列 Series | 模型 Model | 参数 Parameter | 额外 Extra |
| :----: | :----: | :----: | :----: | :----: | :----: |
| 通用 General | 自然语言理解 NLU | 二郎神 Erlangshen | ZEN1 | 224M | 中文-Chinese |
## 模型信息 Model Information
我们与[ZEN团队](https://github.com/sinovation/ZEN)合作,使用我们的封神框架,开源发布了ZEN1模型。具体而言,通过引入无监督学习中提取的知识,ZEN通过N-gram方法学习不同的文本粒度信息。ZEN1可以通过仅在单个小语料库(低资源场景)上进行训练来获得良好的性能增益。下一步,我们将继续与ZEN团队一起探索PLM的优化,并提高下游任务的性能
We open source and publicly release ZEN1 using our Fengshen Framework in collaboration with the [ZEN team](https://github.com/sinovation/ZEN). More precisely, by bringing together knowledge extracted through unsupervised learning, ZEN learns information at different textual granularities via N-gram methods. ZEN1 can obtain good performance gains by training only on a single small corpus (low-resource scenarios). As a next step, we will continue working with the ZEN team to explore the optimization of PLMs and improve performance on downstream tasks.
### 下游效果 Performance
**分类任务 Classification**
| model | dataset | Acc |
| ---- | ---- | ---- |
| IDEA-CCNL/Erlangshen-ZEN1-224M-Chinese | Tnews | 56.82% |
**抽取任务 Extraction**
| model | dataset | F1 |
| ---- | ---- | ---- |
| IDEA-CCNL/Erlangshen-ZEN1-224M-Chinese | OntoNote4.0 | 80.8% |
## 使用 Usage
因为[transformers](https://github.com/huggingface/transformers)库中是没有ZEN1相关的模型结构的,所以你可以在我们的[Fengshenbang-LM](https://github.com/IDEA-CCNL/Fengshenbang-LM)中找到并且运行代码。
Since there is no structure of ZEN1 in [transformers library](https://github.com/huggingface/transformers), you can find the structure of ZEN1 and run the codes in [Fengshenbang-LM](https://github.com/IDEA-CCNL/Fengshenbang-LM).
```shell
git clone https://github.com/IDEA-CCNL/Fengshenbang-LM.git
```
```python
from fengshen.models.zen1.ngram_utils import ZenNgramDict
from fengshen.models.zen1.tokenization import BertTokenizer
from fengshen.models.zen1.modeling import ZenForSequenceClassification, ZenForTokenClassification
pretrain_path = 'IDEA-CCNL/Erlangshen-ZEN1-224M-Chinese'
tokenizer = BertTokenizer.from_pretrained(pretrain_path)
model_classification = ZenForSequenceClassification.from_pretrained(pretrain_path)
model_extraction = ZenForTokenClassification.from_pretrained(pretrain_path)
ngram_dict = ZenNgramDict.from_pretrained(pretrain_path, tokenizer=tokenizer)
```
你可以从下方的链接获得我们做分类和抽取的详细示例。
You can get classification and extraction examples below.
[分类 classification example on fengshen](https://github.com/IDEA-CCNL/Fengshenbang-LM/blob/main/fengshen/examples/zen1_finetune/fs_zen1_tnews.sh)
[抽取 extraction example on fengshen](https://github.com/IDEA-CCNL/Fengshenbang-LM/blob/main/fengshen/examples/zen1_finetune/ner_zen1_ontonotes4.sh)
## 引用 Citation
如果您在您的工作中使用了我们的模型,可以引用我们的对该模型的论文:
If you are using the resource for your work, please cite the our paper for this model:
```text
@inproceedings{diao-etal-2020-zen,
title = "ZEN: Pre-training Chinese Text Encoder Enhanced by N-gram Representations",
author = "Diao, Shizhe and Bai, Jiaxin and Song, Yan and Zhang, Tong and Wang, Yonggang",
booktitle = "Findings of the Association for Computational Linguistics: EMNLP 2020",
month = nov,
year = "2020",
address = "Online",
pages = "4729--4740",
}
```
如果您在您的工作中使用了我们的模型,也可以引用我们的[总论文](https://arxiv.org/abs/2209.02970):
If you are using the resource for your work, please cite our [overview paper](https://arxiv.org/abs/2209.02970):
```text
@article{fengshenbang,
author = {Junjie Wang and Yuxiang Zhang and Lin Zhang and Ping Yang and Xinyu Gao and Ziwei Wu and Xiaoqun Dong and Junqing He and Jianheng Zhuo and Qi Yang and Yongfeng Huang and Xiayu Li and Yanghan Wu and Junyu Lu and Xinyu Zhu and Weifeng Chen and Ting Han and Kunhao Pan and Rui Wang and Hao Wang and Xiaojun Wu and Zhongshen Zeng and Chongpei Chen and Ruyi Gan and Jiaxing Zhang},
title = {Fengshenbang 1.0: Being the Foundation of Chinese Cognitive Intelligence},
journal = {CoRR},
volume = {abs/2209.02970},
year = {2022}
}
```
也可以引用我们的[网站](https://github.com/IDEA-CCNL/Fengshenbang-LM/):
You can also cite our [website](https://github.com/IDEA-CCNL/Fengshenbang-LM/):
```text
@misc{Fengshenbang-LM,
title={Fengshenbang-LM},
author={IDEA-CCNL},
year={2021},
howpublished={\url{https://github.com/IDEA-CCNL/Fengshenbang-LM}},
}
``` | 110a2d72edc946d75fe8f509f2fe0967 |