license: string (2–30 chars)
tags: string (2–513 chars)
is_nc: bool (1 class)
readme_section: string (201–597k chars)
hash: string (32 chars)
mit
['generated_from_trainer']
false
finetuned_gpt2-medium_sst2_negation0.01_pretrainedTrue_epochs1 This model is a fine-tuned version of [gpt2-medium](https://huggingface.co/gpt2-medium) on the sst2 dataset. It achieves the following results on the evaluation set: - Loss: 2.8746
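Not shown in the card: a minimal sketch of querying such a checkpoint with the `transformers` pipeline API. The repo namespace below is hypothetical; substitute the actual hub path of this checkpoint.

```python
from transformers import pipeline

# Hypothetical repo id for illustration; substitute the actual hub path.
generator = pipeline(
    "text-generation",
    model="<namespace>/finetuned_gpt2-medium_sst2_negation0.01_pretrainedTrue_epochs1",
)
print(generator("the film was", max_new_tokens=20)[0]["generated_text"])
```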
d405052705df58831aa16afe6589851d
mit
['lao-roberta-base']
false
Lao RoBERTa Base Lao RoBERTa Base is a masked language model based on the [RoBERTa](https://arxiv.org/abs/1907.11692) model. It was trained on the [OSCAR-2109](https://huggingface.co/datasets/oscar-corpus/OSCAR-2109) dataset, specifically the `deduplicated_lo` subset. The model was trained from scratch and achieved an evaluation loss of 1.4556 and an evaluation perplexity of 4.287. This model was trained using HuggingFace's PyTorch framework and the training script found [here](https://github.com/huggingface/transformers/blob/master/examples/pytorch/language-modeling/run_mlm.py). All training was done on a TPUv3-8, provided by the [TPU Research Cloud](https://sites.research.google/trc/about/) program. You can view the detailed training results in the [Training metrics](https://huggingface.co/w11wo/lao-roberta-base/tensorboard) tab, logged via TensorBoard.
320f90cd04eb95bcfcd283791d1ab97b
mit
['lao-roberta-base']
false
| Model | #params | Arch. | Training/Validation data (text) |
| ------------------ | ------- | ------- | ------------------------------------ |
| `lao-roberta-base` | 124M | RoBERTa | OSCAR-2109 `deduplicated_lo` Dataset |
5f4051e53ef94d8aa1c8e956c851af5f
mit
['lao-roberta-base']
false
Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0002 - train_batch_size: 128 - eval_batch_size: 128 - seed: 42 - distributed_type: tpu - num_devices: 8 - total_train_batch_size: 1024 - total_eval_batch_size: 1024 - optimizer: Adam with betas=(0.9,0.98) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 1000 - num_epochs: 30.0
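For orientation, a sketch of how the reported values map onto `transformers.TrainingArguments` (the same flags `run_mlm.py` consumes). This is illustrative, not the original launch command; the per-device sizes are the reported values, and the 8 TPU cores give the 1024 totals.

```python
from transformers import TrainingArguments

# Illustrative mapping of the reported hyperparameters; not the original script.
args = TrainingArguments(
    output_dir="lao-roberta-base",
    learning_rate=2e-4,
    per_device_train_batch_size=128,   # x8 TPU cores -> total 1024
    per_device_eval_batch_size=128,    # x8 TPU cores -> total 1024
    seed=42,
    adam_beta1=0.9,
    adam_beta2=0.98,
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    warmup_steps=1000,
    num_train_epochs=30.0,
)
```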
fa4f4e76b53aef0c45432f82c2c3b7cb
mit
['lao-roberta-base']
false
Training results | Training Loss | Epoch | Step | Validation Loss | | :-----------: | :---: | :--: | :-------------: | | No log | 1.0 | 216 | 5.8586 | | No log | 2.0 | 432 | 5.5095 | | 6.688 | 3.0 | 648 | 5.3976 | | 6.688 | 4.0 | 864 | 5.3562 | | 5.3629 | 5.0 | 1080 | 5.2912 | | 5.3629 | 6.0 | 1296 | 5.2385 | | 5.22 | 7.0 | 1512 | 5.1955 | | 5.22 | 8.0 | 1728 | 5.1785 | | 5.22 | 9.0 | 1944 | 5.1327 | | 5.1248 | 10.0 | 2160 | 5.1243 | | 5.1248 | 11.0 | 2376 | 5.0889 | | 5.0591 | 12.0 | 2592 | 5.0732 | | 5.0591 | 13.0 | 2808 | 5.0417 | | 5.0094 | 14.0 | 3024 | 5.0388 | | 5.0094 | 15.0 | 3240 | 4.9299 | | 5.0094 | 16.0 | 3456 | 4.2991 | | 4.7527 | 17.0 | 3672 | 3.6541 | | 4.7527 | 18.0 | 3888 | 2.7826 | | 3.4431 | 19.0 | 4104 | 2.2796 | | 3.4431 | 20.0 | 4320 | 2.0213 | | 2.2803 | 21.0 | 4536 | 1.8809 | | 2.2803 | 22.0 | 4752 | 1.7615 | | 2.2803 | 23.0 | 4968 | 1.6925 | | 1.8601 | 24.0 | 5184 | 1.6205 | | 1.8601 | 25.0 | 5400 | 1.5751 | | 1.6697 | 26.0 | 5616 | 1.5391 | | 1.6697 | 27.0 | 5832 | 1.5200 | | 1.5655 | 28.0 | 6048 | 1.4866 | | 1.5655 | 29.0 | 6264 | 1.4656 | | 1.5655 | 30.0 | 6480 | 1.4627 |
b40a4a3e4b6fa76e7f50de8c326a8fb8
mit
['lao-roberta-base']
false
As Masked Language Model ```python from transformers import pipeline pretrained_name = "w11wo/lao-roberta-base" prompt = "REPLACE WITH MASKED PROMPT" fill_mask = pipeline( "fill-mask", model=pretrained_name, tokenizer=pretrained_name ) fill_mask(prompt) ```
90e07c6e54ea7144428ec50cc5cd5a26
mit
['lao-roberta-base']
false
Feature Extraction in PyTorch ```python from transformers import RobertaModel, RobertaTokenizerFast pretrained_name = "w11wo/lao-roberta-base" model = RobertaModel.from_pretrained(pretrained_name) tokenizer = RobertaTokenizerFast.from_pretrained(pretrained_name) prompt = "ສະ​ບາຍ​ດີ​ຊາວ​ໂລກ." encoded_input = tokenizer(prompt, return_tensors='pt') output = model(**encoded_input) ```
a08164f8fbc57e088b4dc942b12fca21
apache-2.0
['multiberts', 'multiberts-seed_3', 'multiberts-seed_3-step_120k']
false
MultiBERTs, Intermediate Checkpoint - Seed 3, Step 120k MultiBERTs is a collection of checkpoints and a statistical library to support robust research on BERT. We provide 25 BERT-base models trained with similar hyper-parameters as [the original BERT model](https://github.com/google-research/bert) but with different random seeds, which causes variations in the initial weights and order of training instances. The aim is to distinguish findings that apply to a specific artifact (i.e., a particular instance of the model) from those that apply to the more general procedure. We also provide 140 intermediate checkpoints captured during the course of pre-training (we saved 28 checkpoints for the first 5 runs). The models were originally released through [http://goo.gle/multiberts](http://goo.gle/multiberts). We describe them in our paper [The MultiBERTs: BERT Reproductions for Robustness Analysis](https://arxiv.org/abs/2106.16163). This is model
011c87b8f1467aa425c0177a9d1611e3
apache-2.0
['multiberts', 'multiberts-seed_3', 'multiberts-seed_3-step_120k']
false
How to use Using code from [BERT-base uncased](https://huggingface.co/bert-base-uncased), here is an example based on TensorFlow: ``` from transformers import BertTokenizer, TFBertModel tokenizer = BertTokenizer.from_pretrained('google/multiberts-seed_3-step_120k') model = TFBertModel.from_pretrained("google/multiberts-seed_3-step_120k") text = "Replace me by any text you'd like." encoded_input = tokenizer(text, return_tensors='tf') output = model(encoded_input) ``` PyTorch version: ``` from transformers import BertTokenizer, BertModel tokenizer = BertTokenizer.from_pretrained('google/multiberts-seed_3-step_120k') model = BertModel.from_pretrained("google/multiberts-seed_3-step_120k") text = "Replace me by any text you'd like." encoded_input = tokenizer(text, return_tensors='pt') output = model(**encoded_input) ```
8e54cd76ba3c2c972bf8b2668b77f072
cc-by-sa-4.0
['transformers', 'sentence-similarity', 'feature-extraction', 'sentence-transformers']
false
summary model name: `pkshatech/simcse-ja-bert-base-clcmlp` This is a Japanese [SimCSE](https://arxiv.org/abs/2104.08821) model. You can easily extract sentence embedding representations from Japanese sentences. This model is based on [`cl-tohoku/bert-base-japanese-v2`](https://huggingface.co/cl-tohoku/bert-base-japanese-v2) and trained on [JSNLI](https://nlp.ist.i.kyoto-u.ac.jp/?%E6%97%A5%E6%9C%AC%E8%AA%9ESNLI%28JSNLI%29%E3%83%87%E3%83%BC%E3%82%BF%E3%82%BB%E3%83%83%E3%83%88) dataset, which is a Japanese natural language inference dataset.
0f4850b0b3ce65c1783cf9f5d525291a
cc-by-sa-4.0
['transformers', 'sentence-similarity', 'feature-extraction', 'sentence-transformers']
false
Usage (Sentence-Transformers) You can use this model easily with [sentence-transformers](https://www.SBERT.net). You need [fugashi](https://github.com/polm/fugashi) and [unidic-lite](https://pypi.org/project/unidic-lite/) for tokenization. Please install sentence-transformers, fugashi, and unidic-lite with pip as follows: ``` pip install -U fugashi[unidic-lite] sentence-transformers ``` You can load the model and convert sentences to dense vectors as follows: ```python from sentence_transformers import SentenceTransformer sentences = [ "PKSHA Technologyは機械学習/深層学習技術に関わるアルゴリズムソリューションを展開している。", "この深層学習モデルはPKSHA Technologyによって学習され、公開された。", "広目天は、仏教における四天王の一尊であり、サンスクリット語の「種々の眼をした者」を名前の由来とする。", ] model = SentenceTransformer('pkshatech/simcse-ja-bert-base-clcmlp') embeddings = model.encode(sentences) print(embeddings) ``` Since the loss function used during training is cosine similarity, we recommend using cosine similarity for downstream tasks.
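Following the recommendation above, a minimal sketch scoring the example sentences with cosine similarity, using the `util` helpers shipped with sentence-transformers and the `embeddings` computed in the previous snippet:

```python
from sentence_transformers import util

# Pairwise cosine similarities between the embeddings computed above.
cos_sim = util.cos_sim(embeddings, embeddings)
print(cos_sim)  # the two PKSHA sentences should score higher with each other than with the unrelated one
```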
0b22ec0667e249843cd26926da85f917
cc-by-sa-4.0
['transformers', 'sentence-similarity', 'feature-extraction', 'sentence-transformers']
false
Tokenization We use the same tokenizer as `cl-tohoku/bert-base-japanese-v2`. Please see the [README of `cl-tohoku/bert-base-japanese-v2`](https://huggingface.co/cl-tohoku/bert-base-japanese-v2) for details.
245d37decb2bf82e01c15ff62d8a9be7
cc-by-sa-4.0
['transformers', 'sentence-similarity', 'feature-extraction', 'sentence-transformers']
false
Training We initialized the model from `cl-tohoku/bert-base-japanese-v2` and trained it on the train set of [JSNLI](https://nlp.ist.i.kyoto-u.ac.jp/?%E6%97%A5%E6%9C%AC%E8%AA%9ESNLI%28JSNLI%29%E3%83%87%E3%83%BC%E3%82%BF%E3%82%BB%E3%83%83%E3%83%88). We trained for 20 epochs and published the checkpoint with the highest Spearman's correlation coefficient on a validation set [^1] held out from the train set of [JSTS](https://github.com/yahoojapan/JGLUE).
b122c7abd959a15d7f0b941a9d438fed
cc-by-sa-4.0
['transformers', 'sentence-similarity', 'feature-extraction', 'sentence-transformers']
false
Training Parameters | Parameter | Value | | --- | --- | |pooling_strategy | [CLS] -> single fully-connected layer | | max_seq_length | 128 | | with hard negative | true | | temperature of contrastive loss | 0.05 | | Batch size | 200 | | Learning rate | 1e-5 | | Weight decay | 0.01 | | Max gradient norm | 1.0 | | Warmup steps | 2012 | | Scheduler | WarmupLinear | | Epochs | 20 | | Evaluation steps | 250 |
6ed777ad7d73ec38ef25faea51b6433b
cc-by-sa-4.0
['transformers', 'sentence-similarity', 'feature-extraction', 'sentence-transformers']
false
License This model is distributed under the terms of the [Creative Commons Attribution-ShareAlike 4.0](https://creativecommons.org/licenses/by-sa/4.0/) license. [^1]: When we trained this model, the test data of JGLUE was not yet released, so we used the dev set of JGLUE as private evaluation data. Therefore, we selected the checkpoint on the train set of JGLUE instead of its dev set.
568e46ae0c65029c0ec35a89ab004dcb
cc-by-4.0
['language model']
false
[BioMegatron](https://arxiv.org/pdf/2010.06060.pdf) is a transformer developed by the Applied Deep Learning Research team at NVIDIA. This particular Megatron model was trained on top of the Megatron-LM model, adding a PubMed corpus to the Megatron-LM corpora (Wikipedia, RealNews, OpenWebText, and CC-Stories). BioMegatron follows a similar (albeit not identical) architecture to BERT and has 345 million parameters: * 24 layers * 16 attention heads with a hidden size of 1024. More information is available at the [NVIDIA NGC Catalog](https://catalog.ngc.nvidia.com/orgs/nvidia/models/biomegatron345muncased).
b72d2cf2e90591271ff82fb56e82199e
cc-by-4.0
['language model']
false
Running BioMegatron in 🤗 transformers In this implementation we have followed the commands of the [`nvidia/megatron-bert-uncased-345m`](https://huggingface.co/nvidia/megatron-bert-uncased-345m) repository to make BioMegatron available in 🤗. However, the file [`convert_megatron_bert_checkpoint.py`](https://github.com/huggingface/transformers/blob/main/src/transformers/models/megatron_bert/convert_megatron_bert_checkpoint.py) needed a modification. The reason is that the Megatron model shown in [`nvidia/megatron-bert-uncased-345m`](https://huggingface.co/nvidia/megatron-bert-uncased-345m) includes head layers, while the weights of the BioMegatron model that we upload to this repository do not contain a head. We provide an alternative version of the [Python script](https://huggingface.co/EMBO/BioMegatron345mUncased/blob/main/convert_biomegatron_checkpoint.py) in this repository so that any user can cross-check the validity of the model replicated here. The code below is a modification of the original [`convert_megatron_bert_checkpoint.py`](https://github.com/huggingface/transformers/blob/main/src/transformers/models/megatron_bert/convert_megatron_bert_checkpoint.py). ```python import os import json import torch from convert_biomegatron_checkpoint import convert_megatron_checkpoint print_checkpoint_structure = True path_to_checkpoint = "/path/to/BioMegatron345mUncased/"
3d1bfdbceaa88cbd41564f609e1697f9
cc-by-4.0
['language model']
false
Store the config to file. output_config_file = os.path.join(path_to_checkpoint, "config.json") print(f'Saving config to "{output_config_file}"') with open(output_config_file, "w") as f: json.dump(output_config, f)
f3a95669723bbd989515abce0c1bda3b
cc-by-4.0
['language model']
false
Store the state_dict to file. output_checkpoint_file = os.path.join(path_to_checkpoint, "pytorch_model.bin") print(f'Saving checkpoint to "{output_checkpoint_file}"') torch.save(output_state_dict, output_checkpoint_file) ``` BioMegatron can be run with the standard 🤗 script for loading models. Here we show an example identical to that of [`nvidia/megatron-bert-uncased-345m`](https://huggingface.co/nvidia/megatron-bert-uncased-345m). ```python import os import torch from transformers import BertTokenizer, MegatronBertForMaskedLM, AutoModelForMaskedLM checkpoint = "EMBO/BioMegatron345mUncased"
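The snippet above is cut off after `checkpoint`; a sketch of the continuation, following the pattern of the referenced `nvidia/megatron-bert-uncased-345m` card (assumed here to apply unchanged to the EMBO checkpoint):

```python
# Load tokenizer and masked-LM model from the checkpoint defined above.
device = "cuda" if torch.cuda.is_available() else "cpu"
tokenizer = BertTokenizer.from_pretrained(checkpoint)
model = AutoModelForMaskedLM.from_pretrained(checkpoint).to(device)
model.eval()
```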
9a8bdb4fc0e826dbe8f14a01156cfee1
cc-by-4.0
['language model']
false
Create inputs (from the BERT example page). input = tokenizer("The capital of France is [MASK]", return_tensors="pt").to(device) label = tokenizer("The capital of France is Paris", return_tensors="pt")["input_ids"].to(device)
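A sketch of the remaining step from the referenced card: score the masked input against the label sequence. This assumes `model`, `tokenizer`, and `device` were set up as in the previous snippet.

```python
import torch

# Forward pass: the masked-LM loss measures how well the model fills [MASK].
with torch.no_grad():
    loss = model(**input, labels=label).loss
print(f"Loss: {loss.item():.4f}")
```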
5c1c0adfa309a7768230a7d5eb103f3b
apache-2.0
['generated_from_trainer']
false
mnli This model is a fine-tuned version of [google/mobilebert-uncased](https://huggingface.co/google/mobilebert-uncased) on the GLUE MNLI dataset. It achieves the following results on the evaluation set: - Loss: 0.4595 - Accuracy: 0.8230
58ebe5c36a264952e5d487e2966bb6a3
apache-2.0
['generated_from_trainer']
false
Training hyperparameters The following hyperparameters were used during training: - learning_rate: 3e-05 - train_batch_size: 48 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 1.3
28ee94c335801be0301ded877758f2c5
apache-2.0
['collaborative', 'bengali', 'NER']
false
Model description [sahajBERT](https://huggingface.co/neuropark/sahajBERT-NER) fine-tuned for NER using the Bengali split of [WikiANN](https://huggingface.co/datasets/wikiann). Named entities predicted by the model:

| Label id | Label |
|:--------:|:-----:|
| 0 | O |
| 1 | B-PER |
| 2 | I-PER |
| 3 | B-ORG |
| 4 | I-ORG |
| 5 | B-LOC |
| 6 | I-LOC |
a8e67823bb625ddf976362142fe9ecf8
apache-2.0
['collaborative', 'bengali', 'NER']
false
How to use You can use this model directly with a pipeline for token classification: ```python from transformers import AlbertForTokenClassification, TokenClassificationPipeline, PreTrainedTokenizerFast
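The snippet above was truncated after the imports; a minimal sketch of how such a pipeline is typically assembled, taking the repo id from the links above (assumed correct here):

```python
model = AlbertForTokenClassification.from_pretrained("neuropark/sahajBERT-NER")
tokenizer = PreTrainedTokenizerFast.from_pretrained("neuropark/sahajBERT-NER")

ner = TokenClassificationPipeline(model=model, tokenizer=tokenizer)
ner("REPLACE WITH BENGALI TEXT")  # returns one dict per token with its predicted label
```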
6081958c33a0f4214dd18b8700e7d1ef
apache-2.0
['collaborative', 'bengali', 'NER']
false
Training data The model was initialized with the pre-trained weights of [sahajBERT](https://huggingface.co/neuropark/sahajBERT-NER) at step 19519 and trained on the Bengali split of [WikiANN](https://huggingface.co/datasets/wikiann).
2bde44d355cd46b2fdd4a227e40f680c
apache-2.0
['automatic-speech-recognition', 'de']
false
exp_w2v2t_de_vp-sv_s470 Fine-tuned [facebook/wav2vec2-large-sv-voxpopuli](https://huggingface.co/facebook/wav2vec2-large-sv-voxpopuli) for speech recognition using the train split of [Common Voice 7.0 (de)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0). When using this model, make sure that your speech input is sampled at 16kHz. This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
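Not shown in the card: a minimal transcription sketch with the HuggingSound tool mentioned above. The hub namespace is an assumption (these checkpoints are published under the tool author's account); adjust to the actual repo path.

```python
from huggingsound import SpeechRecognitionModel

# Assumed repo id; adjust to the actual hub path of this checkpoint.
model = SpeechRecognitionModel("jonatasgrosman/exp_w2v2t_de_vp-sv_s470")
transcriptions = model.transcribe(["/path/to/audio_16khz.wav"])  # 16 kHz input, as noted above
print(transcriptions[0]["transcription"])
```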
81c3bdc0a5e8ee8b11a2f23b86545461
cc-by-4.0
['generated_from_trainer']
false
roberta-base-bne-finetuned-amazon_reviews_multi-taller This model is a fine-tuned version of [BSC-TeMU/roberta-base-bne](https://huggingface.co/BSC-TeMU/roberta-base-bne) on the amazon_reviews_multi dataset. It achieves the following results on the evaluation set: - Loss: 0.2463 - Accuracy: 0.9113
79cb0086f722f7943cd6362bda138500
cc-by-4.0
['generated_from_trainer']
false
Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 0.2474 | 1.0 | 125 | 0.2463 | 0.9113 |
ed640f205de2f8b549e2819ad097f60b
apache-2.0
['generated_from_trainer']
false
bert-base-uncased-ft-google This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the [steciuk/google](https://huggingface.co/datasets/steciuk/google) dataset. It achieves the following results on the evaluation set: - Loss: 0.3195 - Accuracy: 0.9105 - F1: 0.9174 and the following results on the test set: - Accuracy: 0.9096 - F1: 0.9161
66a56af57f5fa018967969900ac88531
apache-2.0
['generated_from_trainer']
false
Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | |:-------------:|:-----:|:----:|:---------------:|:--------:|:------:| | 0.3651 | 0.37 | 196 | 0.2641 | 0.8962 | 0.9064 | | 0.2765 | 0.75 | 392 | 0.2484 | 0.9019 | 0.9099 | | 0.2349 | 1.12 | 588 | 0.2532 | 0.9133 | 0.9205 | | 0.2015 | 1.49 | 784 | 0.2692 | 0.9095 | 0.9139 | | 0.1817 | 1.86 | 980 | 0.2957 | 0.9095 | 0.9180 | | 0.1683 | 2.24 | 1176 | 0.2941 | 0.9143 | 0.9213 | | 0.1204 | 2.61 | 1372 | 0.3230 | 0.9143 | 0.9223 | | 0.1271 | 2.98 | 1568 | 0.3195 | 0.9105 | 0.9174 |
03bee46c9f87255dae6a03f1658c841e
cc-by-4.0
['espnet', 'audio', 'self-supervised-learning']
false
`simpleoier/simpleoier_librispeech_hubert_iter1_train_ssl_torchaudiohubert_base_960h_pretrain_it1_raw` This model was trained by simpleoier using the librispeech recipe in [espnet](https://github.com/espnet/espnet/).
e49b0d6cb138d3d4037e55bb39611e0a
cc-by-4.0
['espnet', 'audio', 'self-supervised-learning']
false
Demo: How to use in ESPnet2 Follow the [ESPnet installation instructions](https://espnet.github.io/espnet/installation.html) if you haven't done that already. ```bash cd espnet git checkout 753f40d61813436d4e76660904d02eaed7a6649e pip install -e . cd egs2/librispeech/ssl1 ./run.sh --skip_data_prep false --skip_train true --download_model simpleoier/simpleoier_librispeech_hubert_iter1_train_ssl_torchaudiohubert_base_960h_pretrain_it1_raw ```
b18968544101d50b47c48a8dfea10ef0
cc-by-4.0
['espnet', 'audio', 'self-supervised-learning']
false
SSL config <details><summary>expand</summary> ``` config: conf/tuning/train_ssl_torchaudiohubert_base_960h_pretrain_it1.yaml print_config: false log_level: INFO dry_run: false iterator_type: sequence output_dir: exp/hubert_iter1_train_ssl_torchaudiohubert_base_960h_pretrain_it1_raw ngpu: 1 seed: 0 num_workers: 64 num_att_plot: 3 dist_backend: nccl dist_init_method: env:// dist_world_size: 8 dist_rank: 0 local_rank: 0 dist_master_addr: localhost dist_master_port: 49251 dist_launcher: null multiprocessing_distributed: true unused_parameters: true sharded_ddp: false cudnn_enabled: true cudnn_benchmark: false cudnn_deterministic: true collect_stats: false write_collected_feats: false max_epoch: 250 patience: null val_scheduler_criterion: - valid - loss early_stopping_criterion: - valid - loss - min best_model_criterion: - - valid - loss - min keep_nbest_models: 10 nbest_averaging_interval: 0 grad_clip: 5.0 grad_clip_type: 2.0 grad_noise: false accum_grad: 2 no_forward_run: false resume: true train_dtype: float32 use_amp: true log_interval: null use_matplotlib: true use_tensorboard: true create_graph_in_tensorboard: false use_wandb: false wandb_project: null wandb_id: null wandb_entity: null wandb_name: null wandb_model_log_interval: -1 detect_anomaly: false pretrain_path: null init_param: [] ignore_init_mismatch: false freeze_param: [] num_iters_per_epoch: null batch_size: 20 valid_batch_size: null batch_bins: 45000000 valid_batch_bins: null train_shape_file: - exp/hubert_iter1_stats_raw/train/speech_shape - exp/hubert_iter1_stats_raw/train/text_shape.word valid_shape_file: - exp/hubert_iter1_stats_raw/valid/speech_shape - exp/hubert_iter1_stats_raw/valid/text_shape.word batch_type: numel valid_batch_type: null fold_length: - 80000 - 400 sort_in_batch: descending sort_batch: descending multiple_iterator: false chunk_length: 500 chunk_shift_ratio: 0.5 num_cache_chunks: 1024 train_data_path_and_name_and_type: - - dump/raw/train_960/wav.scp - speech - sound - - dump/raw/train_960/text.km.kmeans_iter1_hubert_train_960_portion0.1 - text - text valid_data_path_and_name_and_type: - - dump/raw/dev/wav.scp - speech - sound - - dump/raw/dev/text.km.kmeans_iter1_hubert_train_960_portion0.1 - text - text allow_variable_data_keys: false max_cache_size: 0.0 max_cache_fd: 32 valid_max_cache_size: null optim: adam optim_conf: lr: 0.0005 scheduler: warmuplr scheduler_conf: warmup_steps: 32000 token_list: - '386' - '160' - '89' - '3' - '448' - '431' - '319' - '247' - '256' - '23' - '267' - '274' - '479' - '227' - '197' - '74' - '362' - '159' - '190' - '275' - '241' - '147' - '242' - '105' - '7' - '320' - '311' - '327' - '130' - '485' - '427' - '22' - '493' - '254' - '451' - '399' - '342' - '443' - '38' - '33' - '53' - '238' - '86' - '61' - '263' - '218' - '316' - '350' - '96' - '492' - '341' - '496' - '325' - '462' - '24' - '328' - '133' - '407' - '41' - '304' - '373' - '167' - '352' - '456' - '149' - '279' - '84' - '217' - '494' - '139' - '381' - '416' - '305' - '446' - '337' - '228' - '35' - '372' - '55' - '237' - '66' - '13' - '188' - '291' - '43' - '132' - '232' - '144' - '497' - '318' - '0' - '31' - '49' - '400' - '10' - '406' - '398' - '154' - '300' - '226' - '93' - '348' - '82' - '2' - '423' - '113' - '395' - '92' - '394' - '293' - '62' - '137' - '476' - '216' - '432' - '155' - '29' - '369' - '64' - '163' - '389' - '278' - '25' - '164' - '310' - '213' - '126' - '331' - '414' - '11' - '404' - '185' - '365' - '484' - '409' - '17' - '193' - '178' - '273' - '37' - '390' - '128' - '170' - '203' - '298' - 
'229' - '383' - '67' - '27' - '118' - '72' - '142' - '73' - '65' - '231' - '104' - '124' - '428' - '345' - '230' - '287' - '175' - '294' - '184' - '97' - '48' - '457' - '288' - '204' - '379' - '107' - '200' - '99' - '269' - '442' - '353' - '129' - '445' - '51' - '360' - '80' - '83' - '201' - '223' - '312' - '69' - '30' - '202' - '70' - '286' - '236' - '50' - '123' - '88' - '205' - '151' - '127' - '186' - '367' - '299' - '313' - '220' - '206' - '297' - '422' - '71' - '44' - '281' - '91' - '57' - '408' - '112' - '26' - '145' - '16' - '75' - '235' - '183' - '222' - '171' - '121' - '250' - '472' - '195' - '94' - '357' - '393' - '380' - '370' - '363' - '103' - '396' - '468' - '346' - '40' - '180' - '42' - '351' - '450' - '477' - '239' - '143' - '361' - '314' - '392' - '161' - '473' - '198' - '194' - '371' - '433' - '56' - '444' - '138' - '157' - '245' - '140' - '165' - '412' - '354' - '9' - '333' - '85' - '176' - '323' - '301' - '215' - '264' - '434' - '489' - '355' - '488' - '382' - '177' - '268' - '290' - '114' - '266' - '334' - '356' - '90' - '244' - '259' - '368' - '6' - '303' - '478' - '199' - '376' - '480' - '401' - '1' - '168' - '453' - '19' - '54' - '221' - '100' - '4' - '495' - '77' - '240' - '45' - '481' - '224' - '20' - '120' - '58' - '162' - '12' - '109' - '491' - '115' - '397' - '340' - '196' - '68' - '34' - '415' - '429' - '421' - '475' - '335' - '338' - '172' - '39' - '258' - '330' - '246' - '425' - '296' - '125' - '60' - '52' - '271' - '173' - '469' - '289' - '439' - '207' - '487' - '272' - '332' - '284' - '308' - '388' - '95' - '248' - '101' - '36' - '14' - '315' - '262' - '146' - '343' - '79' - '426' - '21' - '253' - '63' - '292' - '81' - '385' - '309' - '366' - '116' - '131' - '87' - '449' - '283' - '214' - '474' - '329' - '471' - '225' - '108' - '136' - '148' - '306' - '150' - '378' - '460' - '307' - '141' - '98' - '436' - '402' - '192' - '8' - '483' - '440' - '47' - '466' - '486' - '5' - '257' - '447' - '377' - '111' - '251' - '490' - '265' - '438' - '158' - '384' - '135' - '102' - '276' - '211' - '219' - '187' - '347' - '32' - '182' - '169' - '410' - '455' - '461' - '482' - '374' - '463' - '452' - '59' - '152' - '174' - '418' - '166' - '470' - '459' - '153' - '179' - '498' - '430' - '419' - '467' - '208' - '326' - '210' - '270' - '243' - '255' - '233' - '261' - '336' - '282' - '234' - '464' - '181' - '156' - '359' - '454' - '420' - '28' - '249' - '106' - '302' - '191' - '209' - '46' - '117' - '403' - '280' - '324' - '458' - '134' - '122' - '212' - '18' - '437' - '78' - '375' - '252' - '405' - '295' - '435' - '317' - '260' - '364' - '322' - '15' - '339' - '413' - '465' - '285' - '189' - '417' - '344' - '110' - '119' - '277' - '499' - '358' - '411' - '387' - '349' - '424' - '391' - '76' - '441' - '321' - <unk> - <sos/eos> init: null collate_fn_conf: label_downsampling: 1 pad: false rand_crop: true input_size: 1 num_classes: 500 use_preprocessor: true token_type: word bpemodel: null non_linguistic_symbols: null cleaner: null g2p: null speech_volume_normalize: null rir_scp: null rir_apply_prob: 1.0 noise_scp: null noise_apply_prob: 1.0 noise_db_range: '13_15' pred_masked_weight: 1.0 pred_nomask_weight: 0.0 loss_weights: 0.0 frontend: null frontend_conf: {} specaug: null specaug_conf: {} normalize: null normalize_conf: {} preencoder: null preencoder_conf: {} encoder: torchaudio_hubert encoder_conf: encoder_projection_dropout: 0.1 encoder_attention_dropout: 0.1 encoder_ff_interm_dropout: 0.0 encoder_dropout: 0.1 encoder_layer_drop: 0.05 model: torchaudio model_conf: {} required: 
- output_dir - token_list version: '202209' distributed: true ``` </details>
8ddac8f8faa4a91deb762255fbf8e495
mit
['GPT-2']
false
Spanish GPT-2 trained on [large_spanish_corpus](https://huggingface.co/datasets/viewer/?dataset=large_spanish_corpus) This is a Spanish GPT-2 model trained from scratch on the [large_spanish_corpus](https://huggingface.co/datasets/viewer/?dataset=large_spanish_corpus), a.k.a. BETO's corpus, with [Flax](https://github.com/google/flax). This is part of the [Flax/Jax Community Week](https://discuss.huggingface.co/t/open-to-the-community-community-week-using-jax-flax-for-nlp-cv/7104), organised by [HuggingFace](https://huggingface.co/), with TPU usage sponsored by Google.
2e0269547bbf89fe9c5c8f6d4f8b5158
mit
['GPT-2']
false
Team members - Manuel Romero ([mrm8488](https://huggingface.co/mrm8488)) - María Grandury ([mariagrandury](https://huggingface.co/)) - Pablo González de Prado ([Pablogps](https://huggingface.co/Pablogps)) - Daniel Vera ([daveni](https://huggingface.co/daveni)) - Sri Lakshmi ([srisweet](https://huggingface.co/srisweet)) - José Posada ([jdposa](https://huggingface.co/jdposa)) - Santiago Hincapie ([shpotes](https://huggingface.co/shpotes)) - Jorge ([jorgealro](https://huggingface.co/jorgealro))
12f41b457e51ad51496498c32d6dfa6c
mit
['GPT-2']
false
- [Community Week README](https://github.com/huggingface/transformers/blob/master/examples/research_projects/jax-projects/README.md) - [Community Week thread](https://discuss.huggingface.co/t/pretrain-gpt2-from-scratch-in-spanish/7086/8)
ba6a7c4744647ab9efef51f9c375168d
apache-2.0
['generated_from_trainer']
false
all-roberta-large-v1-home-7-16-5 This model is a fine-tuned version of [sentence-transformers/all-roberta-large-v1](https://huggingface.co/sentence-transformers/all-roberta-large-v1) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 2.3789 - Accuracy: 0.3356
0579a855e3f7158d6790ae0e5b3407ed
afl-3.0
[]
false
Citation Information ``` @inproceedings{adelani-etal-2022-thousand, title = "A Few Thousand Translations Go a Long Way! Leveraging Pre-trained Models for {A}frican News Translation", author = "Adelani, David and Alabi, Jesujoba and Fan, Angela and Kreutzer, Julia and Shen, Xiaoyu and Reid, Machel and Ruiter, Dana and Klakow, Dietrich and Nabende, Peter and Chang, Ernie and Gwadabe, Tajuddeen and Sackey, Freshia and Dossou, Bonaventure F. P. and Emezue, Chris and Leong, Colin and Beukman, Michael and Muhammad, Shamsuddeen and Jarso, Guyo and Yousuf, Oreen and Niyongabo Rubungo, Andre and Hacheme, Gilles and Wairagala, Eric Peter and Nasir, Muhammad Umair and Ajibade, Benjamin and Ajayi, Tunde and Gitau, Yvonne and Abbott, Jade and Ahmed, Mohamed and Ochieng, Millicent and Aremu, Anuoluwapo and Ogayo, Perez and Mukiibi, Jonathan and Ouoba Kabore, Fatoumata and Kalipe, Godson and Mbaye, Derguene and Tapo, Allahsera Auguste and Memdjokam Koagne, Victoire and Munkoh-Buabeng, Edwin and Wagner, Valencia and Abdulmumin, Idris and Awokoya, Ayodele and Buzaaba, Happy and Sibanda, Blessing and Bukula, Andiswa and Manthalu, Sam", booktitle = "Proceedings of the 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies", month = jul, year = "2022", address = "Seattle, United States", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2022.naacl-main.223", doi = "10.18653/v1/2022.naacl-main.223", pages = "3053--3070", abstract = "Recent advances in the pre-training for language models leverage large-scale datasets to create multilingual models. However, low-resource languages are mostly left out in these datasets. This is primarily because many widely spoken languages that are not well represented on the web and therefore excluded from the large-scale crawls for datasets. Furthermore, downstream users of these models are restricted to the selection of languages originally chosen for pre-training. This work investigates how to optimally leverage existing pre-trained models to create low-resource translation systems for 16 African languages. We focus on two questions: 1) How can pre-trained models be used for languages not included in the initial pretraining? and 2) How can the resulting translation models effectively transfer to new domains? To answer these questions, we create a novel African news corpus covering 16 languages, of which eight languages are not part of any existing evaluation dataset. We demonstrate that the most effective strategy for transferring both additional languages and additional domains is to leverage small quantities of high-quality translation data to fine-tune large pre-trained models.", } ```
72d0559f8d27d239678354a735b3827e
mit
[]
false
huang guang jian on Stable Diffusion This is the `<huang-guang-jian>` concept taught to Stable Diffusion via Textual Inversion. You can load this concept into the [Stable Conceptualizer](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/stable_conceptualizer_inference.ipynb) notebook. You can also train your own concepts and load them into the concept libraries using [this notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_textual_inversion_training.ipynb). Here is the new concept you will be able to use as a `style`: ![<huang-guang-jian> 0](https://huggingface.co/sd-concepts-library/huang-guang-jian/resolve/main/concept_images/1.jpeg) ![<huang-guang-jian> 1](https://huggingface.co/sd-concepts-library/huang-guang-jian/resolve/main/concept_images/3.jpeg) ![<huang-guang-jian> 2](https://huggingface.co/sd-concepts-library/huang-guang-jian/resolve/main/concept_images/2.jpeg) ![<huang-guang-jian> 3](https://huggingface.co/sd-concepts-library/huang-guang-jian/resolve/main/concept_images/0.jpeg)
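Not included in the card: a minimal sketch of using the concept with diffusers, assuming a diffusers version that provides `load_textual_inversion` and a standard SD 1.x base model:

```python
from diffusers import StableDiffusionPipeline

# Load a base Stable Diffusion model, then attach the learned <huang-guang-jian> token.
pipe = StableDiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5")
pipe.load_textual_inversion("sd-concepts-library/huang-guang-jian")

image = pipe("a mountain landscape in the style of <huang-guang-jian>").images[0]
image.save("huang-guang-jian.png")
```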
402414378236300089adf8765eb1bf57
mit
['donut', 'image-to-text', 'vision']
false
Donut (base-sized model, fine-tuned on RVL-CDIP) Donut model fine-tuned on RVL-CDIP. It was introduced in the paper [OCR-free Document Understanding Transformer](https://arxiv.org/abs/2111.15664) by Geewook Kim et al. and first released in [this repository](https://github.com/clovaai/donut). Disclaimer: The team releasing Donut did not write a model card for this model, so this model card has been written by the Hugging Face team.
e9e66df2eeb8dc6b98b685c9f72234e9
mit
['donut', 'image-to-text', 'vision']
false
Intended uses & limitations This model is fine-tuned on RVL-CDIP, a document image classification dataset. We refer to the [documentation](https://huggingface.co/docs/transformers/main/en/model_doc/donut) which includes code examples.
652a5420dfef6c88cc36f832c167ab66
apache-2.0
['generated_from_trainer', 'pt']
false
WavLM-large-CORAA-pt This model is a fine-tuned version of [microsoft/wavlm-large](https://huggingface.co/microsoft/wavlm-large) on [CORAA dataset](https://github.com/nilc-nlp/CORAA). It achieves the following results on the evaluation set: - Loss: 0.6144 - Wer: 0.3840
b4e3dc7a3c7da86f522729c5bb73b0e7
apache-2.0
['generated_from_trainer', 'pt']
false
Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 16 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 1000 - training_steps: 40000 - mixed_precision_training: Native AMP
8a48a3190ea12fb5dba58e244ad36ada
apache-2.0
['generated_from_trainer', 'pt']
false
Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:-----:|:---------------:|:------:| | No log | 0.04 | 1000 | 1.9230 | 0.9960 | | 5.153 | 0.08 | 2000 | 1.3733 | 0.8444 | | 5.153 | 0.13 | 3000 | 1.1992 | 0.7362 | | 1.367 | 0.17 | 4000 | 1.1289 | 0.6957 | | 1.367 | 0.21 | 5000 | 1.0357 | 0.6470 | | 1.1824 | 0.25 | 6000 | 1.0216 | 0.6201 | | 1.1824 | 0.29 | 7000 | 0.9338 | 0.6036 | | 1.097 | 0.33 | 8000 | 0.9149 | 0.5760 | | 1.097 | 0.38 | 9000 | 0.8885 | 0.5541 | | 1.0254 | 0.42 | 10000 | 0.8678 | 0.5366 | | 1.0254 | 0.46 | 11000 | 0.8349 | 0.5323 | | 0.9782 | 0.5 | 12000 | 0.8230 | 0.5155 | | 0.9782 | 0.54 | 13000 | 0.8245 | 0.5049 | | 0.9448 | 0.59 | 14000 | 0.7802 | 0.4990 | | 0.9448 | 0.63 | 15000 | 0.7650 | 0.4900 | | 0.9092 | 0.67 | 16000 | 0.7665 | 0.4796 | | 0.9092 | 0.71 | 17000 | 0.7568 | 0.4795 | | 0.8764 | 0.75 | 18000 | 0.7403 | 0.4615 | | 0.8764 | 0.8 | 19000 | 0.7219 | 0.4644 | | 0.8498 | 0.84 | 20000 | 0.7180 | 0.4502 | | 0.8498 | 0.88 | 21000 | 0.7017 | 0.4436 | | 0.8278 | 0.92 | 22000 | 0.6992 | 0.4395 | | 0.8278 | 0.96 | 23000 | 0.7021 | 0.4329 | | 0.8077 | 1.0 | 24000 | 0.6892 | 0.4265 | | 0.8077 | 1.05 | 25000 | 0.6940 | 0.4248 | | 0.7486 | 1.09 | 26000 | 0.6767 | 0.4202 | | 0.7486 | 1.13 | 27000 | 0.6734 | 0.4150 | | 0.7459 | 1.17 | 28000 | 0.6650 | 0.4152 | | 0.7459 | 1.21 | 29000 | 0.6559 | 0.4078 | | 0.7304 | 1.26 | 30000 | 0.6536 | 0.4088 | | 0.7304 | 1.3 | 31000 | 0.6537 | 0.4025 | | 0.7183 | 1.34 | 32000 | 0.6462 | 0.4008 | | 0.7183 | 1.38 | 33000 | 0.6381 | 0.3973 | | 0.7059 | 1.42 | 34000 | 0.6266 | 0.3930 | | 0.7059 | 1.46 | 35000 | 0.6280 | 0.3921 | | 0.6983 | 1.51 | 36000 | 0.6248 | 0.3897 | | 0.6983 | 1.55 | 37000 | 0.6275 | 0.3872 | | 0.6892 | 1.59 | 38000 | 0.6199 | 0.3852 | | 0.6892 | 1.63 | 39000 | 0.6180 | 0.3842 | | 0.691 | 1.67 | 40000 | 0.6144 | 0.3840 |
a91e4159bde1dc7826cff844047ecf00
apache-2.0
['stanza', 'token-classification']
false
Stanza model for Galician (gl) Stanza is a collection of accurate and efficient tools for the linguistic analysis of many human languages. Starting from raw text to syntactic analysis and entity recognition, Stanza brings state-of-the-art NLP models to languages of your choosing. Find out more on [our website](https://stanfordnlp.github.io/stanza) and in our [GitHub repository](https://github.com/stanfordnlp/stanza). This card and repo were automatically prepared with `hugging_stanza.py` in the `stanfordnlp/huggingface-models` repo. Last updated 2022-09-25 01:24:37.165.
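Not part of the auto-generated card: a minimal usage sketch with the standard Stanza API:

```python
import stanza

stanza.download("gl")        # fetch the Galician models
nlp = stanza.Pipeline("gl")  # tokenize, tag, lemmatize, parse, ...
doc = nlp("Isto é unha proba.")
for sent in doc.sentences:
    for word in sent.words:
        print(word.text, word.upos)
```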
0f459592d4ac385d1e796e9d88e75082
apache-2.0
['translation']
false
opus-mt-tn-fr * source languages: tn * target languages: fr * OPUS readme: [tn-fr](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/tn-fr/README.md) * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/tn-fr/opus-2020-01-16.zip) * test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/tn-fr/opus-2020-01-16.test.txt) * test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/tn-fr/opus-2020-01-16.eval.txt)
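Not shown in the card: a minimal sketch of running this pair through the `transformers` translation pipeline, assuming the standard `Helsinki-NLP/opus-mt-tn-fr` hub id for this model:

```python
from transformers import pipeline

translator = pipeline("translation", model="Helsinki-NLP/opus-mt-tn-fr")
print(translator("REPLACE WITH TSWANA TEXT")[0]["translation_text"])
```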
276556fa3acfef4315447cf1d7251d00
apache-2.0
['generated_from_keras_callback']
false
Question Answering with Hugging Face Transformers and Keras 🤗❤️ This model is a fine-tuned version of [distilbert-base-cased](https://huggingface.co/distilbert-base-cased) on the SQuAD dataset. It achieves the following results on the evaluation set: - Train Loss: 0.9300 - Validation Loss: 1.1437 - Epoch: 1
b5091834bbb4fbba63bfe21f6603b080
apache-2.0
['generated_from_keras_callback']
false
Training hyperparameters The following hyperparameters were used during training: - optimizer: {'name': 'Adam', 'learning_rate': 5e-05, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False} - training_precision: mixed_float16
97540e5a398a21cfd2e839a691770363
mit
['generated_from_trainer']
false
bart-cnn-pubmed-arxiv-pubmed-arxiv-earlystopping This model is a fine-tuned version of [theojolliffe/bart-cnn-pubmed-arxiv-pubmed-arxiv](https://huggingface.co/theojolliffe/bart-cnn-pubmed-arxiv-pubmed-arxiv) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.8793 - Rouge1: 56.2055 - Rouge2: 41.9231 - Rougel: 45.0616 - Rougelsum: 54.6643 - Gen Len: 142.0
5811ab2475798d5c414c15d8628889a2
mit
['generated_from_trainer']
false
Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 2 - eval_batch_size: 2 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 1000 - mixed_precision_training: Native AMP
ec57d428866ca0fcba609050c3f84ec0
mit
['generated_from_trainer']
false
Training results | Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len | |:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:-------:|:---------:|:--------:| | No log | 0.31 | 125 | 1.2057 | 50.9339 | 30.6777 | 32.6396 | 47.9592 | 141.3519 | | No log | 0.63 | 250 | 1.0933 | 52.0728 | 31.2361 | 32.8214 | 48.9776 | 141.9815 | | No log | 0.94 | 375 | 0.9685 | 51.6847 | 32.1578 | 34.1933 | 48.8808 | 141.5556 | | 1.1594 | 1.26 | 500 | 0.9725 | 50.5131 | 30.6043 | 32.1861 | 47.4346 | 142.0 | | 1.1594 | 1.57 | 625 | 0.9342 | 52.228 | 32.2073 | 33.797 | 49.2395 | 142.0 | | 1.1594 | 1.88 | 750 | 0.8715 | 52.2 | 33.6602 | 36.1303 | 49.7138 | 141.6481 | | 1.1594 | 2.2 | 875 | 0.8334 | 53.116 | 33.9871 | 35.9641 | 50.7658 | 141.8889 | | 0.6845 | 2.51 | 1000 | 0.8241 | 52.2612 | 32.8025 | 35.27 | 49.5694 | 142.0 | | 0.6845 | 2.83 | 1125 | 0.7986 | 54.1803 | 35.0019 | 37.4582 | 51.4577 | 142.0 | | 0.6845 | 3.14 | 1250 | 0.8532 | 52.1328 | 32.6086 | 34.7455 | 49.6219 | 141.7037 | | 0.6845 | 3.45 | 1375 | 0.8319 | 51.9614 | 32.8544 | 35.3269 | 49.3279 | 141.7593 | | 0.4488 | 3.77 | 1500 | 0.8033 | 53.1404 | 34.6086 | 37.5482 | 50.7414 | 142.0 | | 0.4488 | 4.08 | 1625 | 0.8322 | 53.1736 | 34.8662 | 37.7514 | 51.0601 | 142.0 | | 0.4488 | 4.4 | 1750 | 0.7985 | 51.8251 | 32.9457 | 36.4164 | 49.55 | 142.0 | | 0.4488 | 4.71 | 1875 | 0.8049 | 54.3423 | 36.6293 | 39.1316 | 52.2706 | 141.8148 | | 0.3017 | 5.03 | 2000 | 0.8148 | 53.0698 | 35.2569 | 38.406 | 50.9346 | 141.7778 | | 0.3017 | 5.34 | 2125 | 0.8153 | 53.4479 | 35.1525 | 37.8071 | 51.3731 | 141.0741 | | 0.3017 | 5.65 | 2250 | 0.8009 | 52.5517 | 34.8287 | 37.999 | 50.2889 | 141.6111 | | 0.3017 | 5.97 | 2375 | 0.7509 | 54.2725 | 37.4164 | 40.516 | 52.1379 | 142.0 | | 0.2052 | 6.28 | 2500 | 0.8019 | 54.622 | 36.4776 | 39.9306 | 52.5069 | 142.0 | | 0.2052 | 6.6 | 2625 | 0.8176 | 55.4796 | 38.4502 | 41.5523 | 53.5211 | 142.0 | | 0.2052 | 6.91 | 2750 | 0.7956 | 55.4906 | 37.9064 | 40.845 | 53.107 | 141.9815 | | 0.2052 | 7.22 | 2875 | 0.7966 | 54.5177 | 37.3399 | 40.7678 | 52.4241 | 142.0 | | 0.1465 | 7.54 | 3000 | 0.8311 | 54.3473 | 37.0659 | 40.2507 | 52.372 | 142.0 | | 0.1465 | 7.85 | 3125 | 0.8227 | 53.9245 | 36.4695 | 39.1205 | 51.9416 | 141.8889 | | 0.1465 | 8.17 | 3250 | 0.7947 | 54.766 | 38.4275 | 41.2293 | 52.9075 | 142.0 | | 0.1465 | 8.48 | 3375 | 0.7954 | 54.5305 | 37.6934 | 40.6804 | 52.5884 | 141.9444 | | 0.115 | 8.79 | 3500 | 0.8433 | 54.7962 | 37.9373 | 41.3906 | 52.3778 | 142.0 | | 0.115 | 9.11 | 3625 | 0.8416 | 56.59 | 41.2271 | 44.4207 | 54.7199 | 142.0 | | 0.115 | 9.42 | 3750 | 0.8164 | 55.1903 | 39.0588 | 41.4908 | 53.4897 | 142.0 | | 0.115 | 9.74 | 3875 | 0.8363 | 55.2894 | 39.3598 | 42.1138 | 53.831 | 141.8889 | | 0.0912 | 10.05 | 4000 | 0.8850 | 55.7705 | 40.4924 | 43.1048 | 54.254 | 142.0 | | 0.0912 | 10.36 | 4125 | 0.8268 | 56.1664 | 40.641 | 42.798 | 54.0001 | 141.9259 | | 0.0912 | 10.68 | 4250 | 0.8564 | 55.4701 | 39.4949 | 42.2559 | 53.4486 | 141.8889 | | 0.0912 | 10.99 | 4375 | 0.8557 | 56.0849 | 41.2861 | 45.8277 | 54.5999 | 141.6667 | | 0.0707 | 11.31 | 4500 | 0.8432 | 54.9496 | 39.3006 | 42.0025 | 53.3854 | 142.0 | | 0.0707 | 11.62 | 4625 | 0.8377 | 54.2438 | 37.6959 | 40.4637 | 52.3088 | 142.0 | | 0.0707 | 11.93 | 4750 | 0.8794 | 55.9488 | 40.5401 | 43.7347 | 54.1282 | 142.0 | | 0.0707 | 12.25 | 4875 | 0.8563 | 57.8762 | 43.366 | 46.6757 | 56.6985 | 142.0 | | 0.0604 | 12.56 | 5000 | 0.8835 | 54.8926 | 39.3755 | 42.384 | 53.2687 | 141.6481 | | 0.0604 | 12.88 | 5125 | 
0.8570 | 55.6656 | 39.849 | 42.1455 | 54.352 | 142.0 | | 0.0604 | 13.19 | 5250 | 0.8539 | 57.1549 | 41.901 | 45.153 | 55.213 | 142.0 | | 0.0604 | 13.51 | 5375 | 0.8847 | 56.3279 | 40.9269 | 43.416 | 54.7242 | 142.0 | | 0.051 | 13.82 | 5500 | 0.8795 | 56.8982 | 42.3333 | 45.2669 | 55.1034 | 142.0 | | 0.051 | 14.13 | 5625 | 0.8751 | 55.3173 | 40.2853 | 43.2479 | 53.7236 | 142.0 | | 0.051 | 14.45 | 5750 | 0.8799 | 56.1678 | 41.0862 | 43.8581 | 54.6316 | 142.0 | | 0.051 | 14.76 | 5875 | 0.8678 | 57.3539 | 43.0473 | 44.8511 | 55.6474 | 142.0 | | 0.0467 | 15.08 | 6000 | 0.8945 | 56.1939 | 41.985 | 45.0266 | 54.8139 | 142.0 | | 0.0467 | 15.39 | 6125 | 0.9245 | 56.2071 | 41.5265 | 44.3228 | 54.5042 | 141.4074 | | 0.0467 | 15.7 | 6250 | 0.8793 | 56.2055 | 41.9231 | 45.0616 | 54.6643 | 142.0 |
39dd2eac0445d03ced413683b1c1b7ef
apache-2.0
['generated_from_trainer']
false
bert-finetuned-ner This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the conll2003 dataset. It achieves the following results on the evaluation set: - Loss: 0.0656 - Precision: 0.9308 - Recall: 0.9482 - F1: 0.9394 - Accuracy: 0.9858
9cf9ebf2a69d41f6e71fee14c6baa100
apache-2.0
['generated_from_trainer']
false
Training results | Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:| | 0.0877 | 1.0 | 1756 | 0.0811 | 0.9077 | 0.9273 | 0.9174 | 0.9804 | | 0.0341 | 2.0 | 3512 | 0.0642 | 0.9234 | 0.9448 | 0.9340 | 0.9854 | | 0.0187 | 3.0 | 5268 | 0.0656 | 0.9308 | 0.9482 | 0.9394 | 0.9858 |
6b22e5bf773d36757d34e775bfaf5138
apache-2.0
['automatic-speech-recognition', 'fr']
false
exp_w2v2r_fr_vp-100k_age_teens-0_sixties-10_s423 Fine-tuned [facebook/wav2vec2-large-100k-voxpopuli](https://huggingface.co/facebook/wav2vec2-large-100k-voxpopuli) for speech recognition using the train split of [Common Voice 7.0 (fr)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0). When using this model, make sure that your speech input is sampled at 16kHz. This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
ab7b45af3988462def363ae672f17907
apache-2.0
['deep-narrow']
false
T5-Efficient-BASE (Deep-Narrow version) T5-Efficient-BASE is a variation of [Google's original T5](https://ai.googleblog.com/2020/02/exploring-transfer-learning-with-t5.html) following the [T5 model architecture](https://huggingface.co/docs/transformers/model_doc/t5). It is a *pretrained-only* checkpoint and was released with the paper **[Scale Efficiently: Insights from Pre-training and Fine-tuning Transformers](https://arxiv.org/abs/2109.10686)** by *Yi Tay, Mostafa Dehghani, Jinfeng Rao, William Fedus, Samira Abnar, Hyung Won Chung, Sharan Narang, Dani Yogatama, Ashish Vaswani, Donald Metzler*. In a nutshell, the paper indicates that a **Deep-Narrow** model architecture is favorable for **downstream** performance compared to other model architectures of similar parameter count. To quote the paper: > We generally recommend a DeepNarrow strategy where the model’s depth is preferentially increased > before considering any other forms of uniform scaling across other dimensions. This is largely due to > how much depth influences the Pareto-frontier as shown in earlier sections of the paper. Specifically, a > tall small (deep and narrow) model is generally more efficient compared to the base model. Likewise, > a tall base model might also generally more efficient compared to a large model. We generally find > that, regardless of size, even if absolute performance might increase as we continue to stack layers, > the relative gain of Pareto-efficiency diminishes as we increase the layers, converging at 32 to 36 > layers. Finally, we note that our notion of efficiency here relates to any one compute dimension, i.e., > params, FLOPs or throughput (speed). We report all three key efficiency metrics (number of params, > FLOPS and speed) and leave this decision to the practitioner to decide which compute dimension to > consider. To be more precise, *model depth* is defined as the number of transformer blocks that are stacked sequentially. A sequence of word embeddings is therefore processed sequentially by each transformer block.
da744b56a96c26e708d44904b223ae08
apache-2.0
['deep-narrow']
false
Details model architecture This model checkpoint - **t5-efficient-base** - is of model type **Base** with no variations. It has **222.93** million parameters and thus requires *ca.* **891.73 MB** of memory in full precision (*fp32*) or **445.86 MB** of memory in half precision (*fp16* or *bf16*). A summary of the *original* T5 model architectures can be seen here: | Model | nl (el/dl) | ff | dm | kv | nh |
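The parameter count and the fp32 memory figure above can be sanity-checked directly; a sketch assuming the checkpoint is published as `google/t5-efficient-base`:

```python
from transformers import T5ForConditionalGeneration

model = T5ForConditionalGeneration.from_pretrained("google/t5-efficient-base")
n_params = sum(p.numel() for p in model.parameters())
# 4 bytes per fp32 parameter, 2 bytes per fp16/bf16 parameter
print(f"{n_params / 1e6:.2f}M params, ~{n_params * 4 / 1e6:.2f} MB in fp32")
```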
d487ad7590dbb7c193649353842e0c86
apache-2.0
['generated_from_trainer']
false
koelectra-base-86371428 This model is a fine-tuned version of [monologg/koelectra-base-v3-discriminator](https://huggingface.co/monologg/koelectra-base-v3-discriminator) on the custom_squad_v2 dataset. It achieves the following results on the evaluation set: - Loss: 1.6169
43a3d9b8c4599cfc3f6f1a779c9d00be
apache-2.0
['generated_from_trainer']
false
Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0004 - train_batch_size: 128 - eval_batch_size: 128 - seed: 30 - gradient_accumulation_steps: 8 - total_train_batch_size: 1024 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: cosine - num_epochs: 2 - mixed_precision_training: Native AMP
6f6ea9cdb01077341b25d3e3d94d1801
apache-2.0
['generated_from_trainer']
false
Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | No log | 0.94 | 10 | 1.8078 | | No log | 1.94 | 20 | 1.6169 |
57cfe84354ef738c4860d5963fb2fe58
apache-2.0
['translation']
false
opus-mt-lu-sv * source languages: lu * target languages: sv * OPUS readme: [lu-sv](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/lu-sv/README.md) * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: [opus-2020-01-09.zip](https://object.pouta.csc.fi/OPUS-MT-models/lu-sv/opus-2020-01-09.zip) * test set translations: [opus-2020-01-09.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/lu-sv/opus-2020-01-09.test.txt) * test set scores: [opus-2020-01-09.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/lu-sv/opus-2020-01-09.eval.txt)
05e5dbfe9fb6ba1d06343d059d72c1a5
apache-2.0
['generated_from_trainer']
false
chinese-address-ner This model is a fine-tuned version of [hfl/chinese-roberta-wwm-ext](https://huggingface.co/hfl/chinese-roberta-wwm-ext) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.1080 - Precision: 0.9664 - Recall: 0.9774 - F1: 0.9719 - Accuracy: 0.9758
941e0aad728336322b7c726b3a2a0a7b
apache-2.0
['generated_from_trainer']
false
Model description Given a Chinese address string, e.g. from a shipping label: `北京市海淀区西北旺东路10号院(马连洼街道西北旺社区东北方向)`, the model extracts address information by administrative level (7 levels in total) and returns a class for every token. The classes are:

| Returned label | BIO scheme | Meaning |
| -------------- | ---------- | ------------------------------------ |
| **LABEL_0** | O | Ignored information |
| **LABEL_1** | B-A1 | First-level address (beginning) |
| **LABEL_2** | I-A1 | First-level address (remaining part) |
| ... | ... | ... |

More information needed
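A minimal sketch of querying the model with the token-classification pipeline. The repo id is hypothetical; `aggregation_strategy="simple"` merges consecutive tokens of the same entity.

```python
from transformers import pipeline

# Hypothetical repo id for illustration; substitute the actual hub path.
ner = pipeline(
    "token-classification",
    model="<namespace>/chinese-address-ner",
    aggregation_strategy="simple",
)
print(ner("北京市海淀区西北旺东路10号院"))
```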
d4043b708d109eaf5fe2e68da63e38ce
apache-2.0
['generated_from_trainer']
false
Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 50 - eval_batch_size: 50 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 50
dcb0cbdd7ab5a780d0aaa66e4f3d52bb
apache-2.0
['generated_from_trainer']
false
Training results | Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:| | 2.5055 | 1.0 | 7 | 1.6719 | 0.1977 | 0.2604 | 0.2248 | 0.5649 | | 1.837 | 2.0 | 14 | 1.0719 | 0.4676 | 0.6 | 0.5256 | 0.7421 | | 1.0661 | 3.0 | 21 | 0.7306 | 0.6266 | 0.7472 | 0.6816 | 0.8106 | | 0.8373 | 4.0 | 28 | 0.5197 | 0.6456 | 0.8113 | 0.7191 | 0.8614 | | 0.522 | 5.0 | 35 | 0.3830 | 0.7667 | 0.8679 | 0.8142 | 0.9001 | | 0.4295 | 6.0 | 42 | 0.3104 | 0.8138 | 0.8906 | 0.8505 | 0.9178 | | 0.3483 | 7.0 | 49 | 0.2453 | 0.8462 | 0.9132 | 0.8784 | 0.9404 | | 0.2471 | 8.0 | 56 | 0.2081 | 0.8403 | 0.9132 | 0.8752 | 0.9428 | | 0.2299 | 9.0 | 63 | 0.1979 | 0.8419 | 0.9245 | 0.8813 | 0.9420 | | 0.1761 | 10.0 | 70 | 0.1823 | 0.8830 | 0.9396 | 0.9104 | 0.9500 | | 0.1434 | 11.0 | 77 | 0.1480 | 0.9036 | 0.9547 | 0.9284 | 0.9629 | | 0.134 | 12.0 | 84 | 0.1341 | 0.9173 | 0.9623 | 0.9392 | 0.9678 | | 0.128 | 13.0 | 91 | 0.1365 | 0.9375 | 0.9623 | 0.9497 | 0.9694 | | 0.0824 | 14.0 | 98 | 0.1159 | 0.9557 | 0.9774 | 0.9664 | 0.9734 | | 0.0744 | 15.0 | 105 | 0.1092 | 0.9591 | 0.9736 | 0.9663 | 0.9766 | | 0.0569 | 16.0 | 112 | 0.1117 | 0.9556 | 0.9736 | 0.9645 | 0.9742 | | 0.0559 | 17.0 | 119 | 0.1040 | 0.9628 | 0.9774 | 0.9700 | 0.9790 | | 0.0456 | 18.0 | 126 | 0.1052 | 0.9593 | 0.9774 | 0.9682 | 0.9782 | | 0.0405 | 19.0 | 133 | 0.1133 | 0.9590 | 0.9698 | 0.9644 | 0.9718 | | 0.0315 | 20.0 | 140 | 0.1060 | 0.9591 | 0.9736 | 0.9663 | 0.9750 | | 0.0262 | 21.0 | 147 | 0.1087 | 0.9554 | 0.9698 | 0.9625 | 0.9718 | | 0.0338 | 22.0 | 154 | 0.1183 | 0.9625 | 0.9698 | 0.9662 | 0.9726 | | 0.0225 | 23.0 | 161 | 0.1080 | 0.9664 | 0.9774 | 0.9719 | 0.9758 | | 0.028 | 24.0 | 168 | 0.1057 | 0.9591 | 0.9736 | 0.9663 | 0.9742 | | 0.0202 | 25.0 | 175 | 0.1062 | 0.9628 | 0.9774 | 0.9700 | 0.9766 | | 0.0168 | 26.0 | 182 | 0.1097 | 0.9664 | 0.9774 | 0.9719 | 0.9758 | | 0.0173 | 27.0 | 189 | 0.1093 | 0.9628 | 0.9774 | 0.9700 | 0.9774 | | 0.0151 | 28.0 | 196 | 0.1162 | 0.9628 | 0.9774 | 0.9700 | 0.9766 | | 0.0135 | 29.0 | 203 | 0.1126 | 0.9483 | 0.9698 | 0.9590 | 0.9758 | | 0.0179 | 30.0 | 210 | 0.1100 | 0.9449 | 0.9698 | 0.9572 | 0.9774 | | 0.0161 | 31.0 | 217 | 0.1098 | 0.9449 | 0.9698 | 0.9572 | 0.9766 | | 0.0158 | 32.0 | 224 | 0.1191 | 0.9483 | 0.9698 | 0.9590 | 0.9734 | | 0.0151 | 33.0 | 231 | 0.1058 | 0.9483 | 0.9698 | 0.9590 | 0.9750 | | 0.0121 | 34.0 | 238 | 0.0990 | 0.9593 | 0.9774 | 0.9682 | 0.9790 | | 0.0092 | 35.0 | 245 | 0.1128 | 0.9519 | 0.9698 | 0.9607 | 0.9774 | | 0.0097 | 36.0 | 252 | 0.1181 | 0.9627 | 0.9736 | 0.9681 | 0.9766 | | 0.0118 | 37.0 | 259 | 0.1185 | 0.9591 | 0.9736 | 0.9663 | 0.9782 | | 0.0118 | 38.0 | 266 | 0.1021 | 0.9557 | 0.9774 | 0.9664 | 0.9823 | | 0.0099 | 39.0 | 273 | 0.1000 | 0.9559 | 0.9811 | 0.9683 | 0.9815 | | 0.0102 | 40.0 | 280 | 0.1025 | 0.9559 | 0.9811 | 0.9683 | 0.9815 | | 0.0068 | 41.0 | 287 | 0.1080 | 0.9522 | 0.9774 | 0.9646 | 0.9807 | | 0.0105 | 42.0 | 294 | 0.1157 | 0.9449 | 0.9698 | 0.9572 | 0.9766 | | 0.0083 | 43.0 | 301 | 0.1207 | 0.9380 | 0.9698 | 0.9536 | 0.9766 | | 0.0077 | 44.0 | 308 | 0.1208 | 0.9483 | 0.9698 | 0.9590 | 0.9766 | | 0.0077 | 45.0 | 315 | 0.1176 | 0.9483 | 0.9698 | 0.9590 | 0.9774 | | 0.0071 | 46.0 | 322 | 0.1137 | 0.9483 | 0.9698 | 0.9590 | 0.9790 | | 0.0075 | 47.0 | 329 | 0.1144 | 0.9483 | 0.9698 | 0.9590 | 0.9782 | | 0.0084 | 48.0 | 336 | 0.1198 | 0.9483 | 0.9698 | 0.9590 | 0.9766 | | 0.0103 | 49.0 | 343 | 0.1217 | 0.9519 | 0.9698 | 0.9607 | 0.9766 | | 0.0087 
| 50.0 | 350 | 0.1230 | 0.9519 | 0.9698 | 0.9607 | 0.9766 |
a9e3b11beb58b0f3b0cccfe7eeb7185c
apache-2.0
['image-classification']
false
resnet50d Implementation of ResNet proposed in [Deep Residual Learning for Image Recognition](https://arxiv.org/abs/1512.03385) ``` python ResNet.resnet18() ResNet.resnet26() ResNet.resnet34() ResNet.resnet50() ResNet.resnet101() ResNet.resnet152() ResNet.resnet200() # Variants (d) proposed in `Bag of Tricks for Image Classification with Convolutional Neural Networks <https://arxiv.org/pdf/1812.01187.pdf>`_ ResNet.resnet26d() ResNet.resnet34d() ResNet.resnet50d() ```
2e86e7bae3a0780d4ce8a5189be88a15
apache-2.0
['generated_from_trainer']
false
distilbert-base-uncased-finetuned-emotion This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset. It achieves the following results on the evaluation set: - Loss: 0.2240 - Accuracy: 0.925 - F1: 0.9249
b48bf1dbda9a6a99ad826b1ea1581698
apache-2.0
['generated_from_trainer']
false
Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | |:-------------:|:-----:|:----:|:---------------:|:--------:|:------:| | 0.8487 | 1.0 | 250 | 0.3310 | 0.9045 | 0.9011 | | 0.2606 | 2.0 | 500 | 0.2240 | 0.925 | 0.9249 |
2a66cafb6faa9b221174d5e8ce7c1951
apache-2.0
['image-classification', 'timm']
false
Model card for maxvit_rmlp_tiny_rw_256.sw_in1k A timm-specific MaxViT image classification model with an MLP Log-CPB (continuous log-coordinate relative position bias, motivated by Swin-V2). Trained in `timm` on ImageNet-1k by Ross Wightman. ImageNet-1k training done on TPUs thanks to support of the [TRC](https://sites.research.google/trc/about/) program.
0bed7438215d8c18fb20c6b21a3e206f
apache-2.0
['image-classification', 'timm']
false
Model Details - **Model Type:** Image classification / feature backbone - **Model Stats:** - Params (M): 29.1 - GMACs: 6.8 - Activations (M): 46.9 - Image size: 256 x 256 - **Papers:** - MaxViT: Multi-Axis Vision Transformer: https://arxiv.org/abs/2204.01697 - Swin Transformer V2: Scaling Up Capacity and Resolution: https://arxiv.org/abs/2111.09883 - **Dataset:** ImageNet-1k
fa4ee95ab0007159df228b3621ad1461
apache-2.0
['image-classification', 'timm']
false
Image Classification ```python from urllib.request import urlopen from PIL import Image import timm img = Image.open( urlopen('https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png')) model = timm.create_model('maxvit_rmlp_tiny_rw_256.sw_in1k', pretrained=True) model = model.eval()
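The snippet above stops after `model.eval()`; the usual continuation in `timm` model cards applies the model's own preprocessing and takes the top-5 classes (standard `timm.data` API, assumed unchanged for this checkpoint):

```python
import torch

# Build the exact transform this checkpoint was trained/validated with.
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)

output = model(transforms(img).unsqueeze(0))  # (1, 1000) ImageNet-1k logits
top5_prob, top5_idx = torch.topk(output.softmax(dim=1) * 100, k=5)
```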
86c5e4642189741bc10ce7a30ceac7ef
apache-2.0
['image-classification', 'timm']
false
Feature Map Extraction ```python from urllib.request import urlopen from PIL import Image import timm img = Image.open( urlopen('https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png')) model = timm.create_model( 'maxvit_rmlp_tiny_rw_256.sw_in1k', pretrained=True, features_only=True, ) model = model.eval()
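Again the snippet is cut short; the standard continuation runs the transform and prints one tensor per feature stage:

```python
data_config = timm.data.resolve_model_data_config(model)
transforms = timm.data.create_transform(**data_config, is_training=False)

output = model(transforms(img).unsqueeze(0))
for o in output:
    print(o.shape)  # one (1, C, H, W) feature map per stage
```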
fb0ec709b4de77a3ee89a95f34efa0de
apache-2.0
['image-classification', 'timm']
false
Image Embeddings ```python from urllib.request import urlopen from PIL import Image import timm img = Image.open( urlopen('https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png')) model = timm.create_model( 'maxvit_rmlp_tiny_rw_256.sw_in1k', pretrained=True, num_classes=0, ) model = model.eval() ```
464b7697f9cf6431b2f0c4eca4060083
apache-2.0
[]
false
About An abstractive text summarizer trained using an LSTM-based sequence-to-sequence model with an attention mechanism. The attention model is used for generating each word of the summary conditioned on the input sentence. Trained on the CNN/DailyMail dataset.
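To make the attention step concrete, a toy NumPy sketch of what "each word conditioned on the input sentence" means. Shapes and the dot-product scoring function are illustrative, not the trained model's.

```python
import numpy as np

rng = np.random.default_rng(0)
enc_states = rng.normal(size=(30, 128))  # 30 source tokens, hidden size 128
dec_state = rng.normal(size=128)         # decoder state at the current step

scores = enc_states @ dec_state          # one alignment score per source token
weights = np.exp(scores - scores.max())
weights /= weights.sum()                 # softmax -> attention weights
context = weights @ enc_states           # context vector for this step
# [context; dec_state] then feeds the layer that predicts the next summary word.
```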
866385ab0cd57ecd040d30b15981235e
apache-2.0
[]
false
Training Model Overview loss graph ![train_log.jpg](https://s3.amazonaws.com/moonup/production/uploads/1665205596889-634100e4b8b51e0098db811f.jpeg) encoder-decoder overview ![model_plot.jpg](https://s3.amazonaws.com/moonup/production/uploads/1665205551954-634100e4b8b51e0098db811f.jpeg)
9c6dee1cf46e42dd52905f228507aba6
mit
['generated_from_keras_callback']
false
ChiefTheLord/codeparrot-ds This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on an unknown dataset. It achieves the following results on the evaluation set: - Train Loss: 2.7143 - Validation Loss: 2.2348 - Epoch: 0
01abcf8e27c8b6de79fdf526f65b474c
mit
['generated_from_keras_callback']
false
Training hyperparameters

The following hyperparameters were used during training:
- optimizer: {'inner_optimizer': {'class_name': 'AdamWeightDecay', 'config': {'name': 'AdamWeightDecay', 'learning_rate': {'class_name': 'WarmUp', 'config': {'initial_learning_rate': 5e-05, 'decay_schedule_fn': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 5e-05, 'decay_steps': 1378398, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}, '__passive_serialization__': True}, 'warmup_steps': 1000, 'power': 1.0, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01}}, 'dynamic': True, 'initial_scale': 32768.0, 'dynamic_growth_steps': 2000}
- training_precision: mixed_float16
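
The serialized dict above is Keras' dump of an `AdamWeightDecay` optimizer wrapped in a dynamic loss-scale optimizer for mixed precision. A sketch of how an equivalent optimizer and warmup/decay schedule can be rebuilt with the standard `transformers` helper (step counts taken from the config above):

```python
from transformers import create_optimizer

# Linear warmup for 1,000 steps, then polynomial (power=1.0, i.e. linear) decay
# to 0 over 1,378,398 training steps, with decoupled weight decay of 0.01.
optimizer, lr_schedule = create_optimizer(
    init_lr=5e-5,
    num_train_steps=1_378_398,
    num_warmup_steps=1_000,
    weight_decay_rate=0.01,
)
```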
c812613ee8d62523a8166c59192159de
apache-2.0
['translation']
false
ara-epo

* source group: Arabic
* target group: Esperanto
* OPUS readme: [ara-epo](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/ara-epo/README.md)
* model: transformer-align
* source language(s): apc apc_Latn ara arq arq_Latn arz
* target language(s): epo
* pre-processing: normalization + SentencePiece (spm4k,spm4k)
* download original weights: [opus-2020-06-16.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/ara-epo/opus-2020-06-16.zip)
* test set translations: [opus-2020-06-16.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/ara-epo/opus-2020-06-16.test.txt)
* test set scores: [opus-2020-06-16.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/ara-epo/opus-2020-06-16.eval.txt)
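
These Tatoeba-Challenge checkpoints are published on the Hub as MarianMT models. A minimal usage sketch, assuming the standard `Helsinki-NLP/opus-mt-ar-eo` repo id for this pair (the short pair code is `ar-eo`):

```python
from transformers import pipeline

translator = pipeline("translation", model="Helsinki-NLP/opus-mt-ar-eo")
print(translator("مرحبا بالعالم")[0]["translation_text"])  # Arabic -> Esperanto
```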
e4ec0c06c29e3c3ced43c9819ba8eb28
apache-2.0
['translation']
false
System Info:
- hf_name: ara-epo
- source_languages: ara
- target_languages: epo
- opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/ara-epo/README.md
- original_repo: Tatoeba-Challenge
- tags: ['translation']
- languages: ['ar', 'eo']
- src_constituents: {'apc', 'ara', 'arq_Latn', 'arq', 'afb', 'ara_Latn', 'apc_Latn', 'arz'}
- tgt_constituents: {'epo'}
- src_multilingual: False
- tgt_multilingual: False
- prepro: normalization + SentencePiece (spm4k,spm4k)
- url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/ara-epo/opus-2020-06-16.zip
- url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/ara-epo/opus-2020-06-16.test.txt
- src_alpha3: ara
- tgt_alpha3: epo
- short_pair: ar-eo
- chrF2_score: 0.376
- bleu: 18.9
- brevity_penalty: 0.948
- ref_len: 4506.0
- src_name: Arabic
- tgt_name: Esperanto
- train_date: 2020-06-16
- src_alpha2: ar
- tgt_alpha2: eo
- prefer_old: False
- long_pair: ara-epo
- helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535
- transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b
- port_machine: brutasse
- port_time: 2020-08-21-14:41
0358c3b05bc0ad39d7610d0bfbc10839
apache-2.0
[]
false
Model description

This is a [t5-base](https://huggingface.co/t5-base) model, fine-tuned to generate questions about a table using the [WikiSQL](https://huggingface.co/datasets/wikisql) dataset. It was trained to take the SQL query, the answer, and the column headers of a table as input and generate a question. For more information, see our T3QA [paper](https://aclanthology.org/2021.emnlp-main.342/) from EMNLP 2021.
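
The exact input serialization is defined in the PrimeQA code linked below. Purely as an illustration of the seq2seq interface (the checkpoint id and the field markers here are hypothetical, not this model's documented format), generation would look something like:

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

# Placeholder id: substitute this checkpoint's actual repo id.
tok = AutoTokenizer.from_pretrained("t5-base")
model = AutoModelForSeq2SeqLM.from_pretrained("t5-base")

# Hypothetical linearization of (SQL, answer, column headers).
inp = "sql: SELECT city WHERE team = 'Giants' answer: New York columns: team, city"
out = model.generate(**tok(inp, return_tensors="pt"), max_new_tokens=32)
print(tok.decode(out[0], skip_special_tokens=True))
```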
ef0769b0700aef362f9c5d6289923f58
apache-2.0
[]
false
Usage One can use this model directly in the [PrimeQA](https://github.com/primeqa/primeqa) framework as in this example [notebook](https://github.com/primeqa/primeqa/blob/tableqg/notebooks/qg/tableqg_inference.ipynb).
3c880ddf1634b376d2c37427ed58de9e
apache-2.0
[]
false
Citation

```bibtex
@inproceedings{chemmengath2021topic,
  title={Topic Transferable Table Question Answering},
  author={Chemmengath, Saneem and Kumar, Vishwajeet and Bharadwaj, Samarth and Sen, Jaydeep and Canim, Mustafa and Chakrabarti, Soumen and Gliozzo, Alfio and Sankaranarayanan, Karthik},
  booktitle={Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing},
  pages={4159--4172},
  year={2021}
}
```
23ba8cce9af3821bd8197ec473eb876c
apache-2.0
['generated_from_trainer']
false
bert-large-uncased-finetuned-vi-infovqa

This model is a fine-tuned version of [bert-large-uncased](https://huggingface.co/bert-large-uncased) on an unknown dataset. It achieves the following results on the evaluation set:
- Loss: 7.4878
607685c99cd2a497ec8744fa560eccbe
apache-2.0
['generated_from_trainer']
false
Training results

| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| No log        | 0.11  | 100  | 4.6256          |
| No log        | 0.21  | 200  | 4.4042          |
| No log        | 0.32  | 300  | 5.0021          |
| No log        | 0.43  | 400  | 4.2825          |
| 4.6758        | 0.53  | 500  | 4.3886          |
| 4.6758        | 0.64  | 600  | 4.2519          |
| 4.6758        | 0.75  | 700  | 4.2977          |
| 4.6758        | 0.85  | 800  | 3.9916          |
| 4.6758        | 0.96  | 900  | 4.1650          |
| 4.1715        | 1.07  | 1000 | 4.5001          |
| 4.1715        | 1.17  | 1100 | 4.0898          |
| 4.1715        | 1.28  | 1200 | 4.1623          |
| 4.1715        | 1.39  | 1300 | 4.3271          |
| 4.1715        | 1.49  | 1400 | 3.9661          |
| 3.7926        | 1.6   | 1500 | 3.8727          |
| 3.7926        | 1.71  | 1600 | 3.8934          |
| 3.7926        | 1.81  | 1700 | 3.7262          |
| 3.7926        | 1.92  | 1800 | 3.7701          |
| 3.7926        | 2.03  | 1900 | 3.7653          |
| 3.5041        | 2.13  | 2000 | 3.9261          |
| 3.5041        | 2.24  | 2100 | 4.0915          |
| 3.5041        | 2.35  | 2200 | 4.0348          |
| 3.5041        | 2.45  | 2300 | 4.0212          |
| 3.5041        | 2.56  | 2400 | 4.4653          |
| 2.8475        | 2.67  | 2500 | 4.2959          |
| 2.8475        | 2.77  | 2600 | 4.1039          |
| 2.8475        | 2.88  | 2700 | 3.8037          |
| 2.8475        | 2.99  | 2800 | 3.7552          |
| 2.8475        | 3.09  | 2900 | 4.2476          |
| 2.5488        | 3.2   | 3000 | 4.6716          |
| 2.5488        | 3.3   | 3100 | 4.7058          |
| 2.5488        | 3.41  | 3200 | 4.6266          |
| 2.5488        | 3.52  | 3300 | 4.5697          |
| 2.5488        | 3.62  | 3400 | 5.1017          |
| 2.0347        | 3.73  | 3500 | 4.6254          |
| 2.0347        | 3.84  | 3600 | 4.4822          |
| 2.0347        | 3.94  | 3700 | 4.9413          |
| 2.0347        | 4.05  | 3800 | 5.3600          |
| 2.0347        | 4.16  | 3900 | 5.7323          |
| 1.6566        | 4.26  | 4000 | 5.8822          |
| 1.6566        | 4.37  | 4100 | 6.0173          |
| 1.6566        | 4.48  | 4200 | 5.6688          |
| 1.6566        | 4.58  | 4300 | 6.0617          |
| 1.6566        | 4.69  | 4400 | 6.6631          |
| 1.3348        | 4.8   | 4500 | 6.0290          |
| 1.3348        | 4.9   | 4600 | 6.2455          |
| 1.3348        | 5.01  | 4700 | 6.0963          |
| 1.3348        | 5.12  | 4800 | 7.0983          |
| 1.3348        | 5.22  | 4900 | 7.5483          |
| 1.0701        | 5.33  | 5000 | 7.7187          |
| 1.0701        | 5.44  | 5100 | 7.4630          |
| 1.0701        | 5.54  | 5200 | 7.1394          |
| 1.0701        | 5.65  | 5300 | 7.0703          |
| 1.0701        | 5.76  | 5400 | 7.5611          |
| 0.9414        | 5.86  | 5500 | 7.6038          |
| 0.9414        | 5.97  | 5600 | 7.4878          |
d26cd608e77693d46dc287c9d7ac3040
mit
[]
false
German GPT2-XL (1.5B)

- trained with [BigScience's DeepSpeed-Megatron-LM code base](https://github.com/bigscience-workshop/Megatron-DeepSpeed)
- word embedding initialized with [WECHSEL](https://arxiv.org/abs/2112.06598) and all other weights taken from English [gpt2-xl](https://huggingface.co/gpt2-xl)
- ~ 3 days on 16xA100 GPUs (~ 80 TFLOPs / GPU)
- stopped after 100k steps (26.2B tokens), less than a single epoch on `oscar_unshuffled_deduplicated_de` (excluding the validation set; the original model was trained for 75 epochs on less data)
- bf16
- zero stage 0
- tp/pp = 1
9439c75e2bb08d7ac3e6804259eed7d5
mit
[]
false
How to use

You can use this model directly with a pipeline for text generation. Since the generation relies on some randomness, we set a seed for reproducibility:

```python
>>> from transformers import pipeline, set_seed
>>> generator = pipeline('text-generation', model='malteos/gpt2-xl-wechsel-german')
>>> set_seed(42)
>>> generator("Hello, I'm a language model,", max_length=30, num_return_sequences=5)

[{'generated_text': "Hello, I'm a language model, a language for thinking, a language for expressing thoughts."},
 {'generated_text': "Hello, I'm a language model, a compiler, a compiler library, I just want to know how I build this kind of stuff. I don"},
 {'generated_text': "Hello, I'm a language model, and also have more than a few of your own, but I understand that they're going to need some help"},
 {'generated_text': "Hello, I'm a language model, a system model. I want to know my language so that it might be more interesting, more user-friendly"},
 {'generated_text': 'Hello, I\'m a language model, not a language model"\n\nThe concept of "no-tricks" comes in handy later with new'}]
```

Here is how to use this model to get the features of a given text in PyTorch:

```python
from transformers import GPT2Tokenizer, GPT2Model

tokenizer = GPT2Tokenizer.from_pretrained('malteos/gpt2-xl-wechsel-german')
model = GPT2Model.from_pretrained('malteos/gpt2-xl-wechsel-german')

text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors='pt')
output = model(**encoded_input)
```
6ccfd9ece4276953e720f91fb8b49eab
mit
[]
false
Evaluation

| Model (size)                            | PPL      |
|-----------------------------------------|----------|
| `gpt2-xl-wechsel-german` (1.5B)         | **14.5** |
| `gpt2-wechsel-german-ds-meg` (117M)     | 26.4     |
| `gpt2-wechsel-german` (117M)            | 26.8     |
| `gpt2` (retrained from scratch) (117M)  | 27.63    |
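
As an illustration of what these PPL numbers measure, here is a minimal sketch (not the exact evaluation setup, and the German sentence is just an example) of computing perplexity from the language-modeling loss:

```python
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained('malteos/gpt2-xl-wechsel-german')
lm = GPT2LMHeadModel.from_pretrained('malteos/gpt2-xl-wechsel-german')

enc = tokenizer("Berlin ist die Hauptstadt von Deutschland.", return_tensors='pt')
with torch.no_grad():
    loss = lm(**enc, labels=enc['input_ids']).loss  # mean negative log-likelihood
print(torch.exp(loss).item())  # perplexity = exp(mean NLL)
```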
bff63a6d8ee20545d96c13be9d7f0d2b
apache-2.0
['generated_from_trainer']
false
bart-base-finetuned-parth

This model is a fine-tuned version of [facebook/bart-base](https://huggingface.co/facebook/bart-base) on an unknown dataset. It achieves the following results on the evaluation set:
- Loss: 4.1122
- Rouge1: 43.9082
- Rouge2: 33.2868
- Rougel: 40.0465
- Rougelsum: 43.7776
- Gen Len: 20.0
e2fb9ff9ff918822198cf7bc0d907e99
apache-2.0
['generated_from_trainer']
false
Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 5e-05
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- num_epochs: 20
- label_smoothing_factor: 0.1
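
Among these, `label_smoothing_factor: 0.1` redistributes a fraction of each one-hot target's probability mass uniformly over all classes. A minimal sketch of the transformation (illustrative, not the trainer's internal code):

```python
import torch

def smooth_labels(one_hot: torch.Tensor, eps: float = 0.1) -> torch.Tensor:
    """Blend a one-hot target with the uniform distribution."""
    num_classes = one_hot.size(-1)
    return one_hot * (1.0 - eps) + eps / num_classes

target = torch.tensor([0.0, 1.0, 0.0, 0.0])
print(smooth_labels(target))  # tensor([0.0250, 0.9250, 0.0250, 0.0250])
```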
1c57bdec1d6f3d5c23148684409e1f21
mit
['generated_from_trainer']
false
xlm-sentiment-new

This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the None dataset. It achieves the following results on the evaluation set:
- Loss: 0.6166
- Accuracy: 0.7405
- Precision: 0.7375
- Recall: 0.7405
- F1: 0.7386
080a242c10787664885e32cb13da13b4
mit
['generated_from_trainer']
false
Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy | Precision | Recall | F1     |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:---------:|:------:|:------:|
| No log        | 1.0   | 296  | 0.5519          | 0.7310   | 0.7266    | 0.7310 | 0.7277 |
| 0.5719        | 2.0   | 592  | 0.5569          | 0.7500   | 0.7562    | 0.7500 | 0.7302 |
| 0.5719        | 3.0   | 888  | 0.5320          | 0.7243   | 0.7269    | 0.7243 | 0.7254 |
| 0.477         | 4.0   | 1184 | 0.5771          | 0.7300   | 0.7264    | 0.7300 | 0.7276 |
| 0.477         | 5.0   | 1480 | 0.6051          | 0.7376   | 0.7361    | 0.7376 | 0.7368 |
| 0.428         | 6.0   | 1776 | 0.6166          | 0.7405   | 0.7375    | 0.7405 | 0.7386 |
ebc80c6b97afb0268d2f07bd66c6b95d
mit
['russian']
false
This is a smaller version of the [google/mt5-base](https://huggingface.co/google/mt5-base) model with only some Russian and English embeddings left. More details are given in a Russian post: https://habr.com/ru/post/581932/

The model has been fine-tuned for several tasks with sentences or short paragraphs:
* translation (`translate ru-en` and `translate en-ru`)
* paraphrasing (`paraphrase`)
* filling gaps in a text (`fill`); the gaps can be denoted as `___` or `_3_`, where `3` is the approximate number of words that should be inserted
* restoring the text from a noisy bag of words (`assemble`)
* simplification of texts (`simplify`)
* dialogue response generation (`reply`, based on fiction, and `answer`, based on online forums)
* open-book question answering (`comprehend`)
* asking questions about a text (`ask`)
* news title generation (`headline`)

For each task, the task name is joined with the input text by the ` | ` separator. The model can be run with the following code:
acc03abb02656e4760e00d36c7fd626e
mit
['russian']
false
```
!pip install transformers sentencepiece
import torch
from transformers import T5ForConditionalGeneration, T5Tokenizer

tokenizer = T5Tokenizer.from_pretrained("cointegrated/rut5-base-multitask")
model = T5ForConditionalGeneration.from_pretrained("cointegrated/rut5-base-multitask")

def generate(text, **kwargs):
    inputs = tokenizer(text, return_tensors='pt')
    with torch.no_grad():
        hypotheses = model.generate(**inputs, num_beams=5, **kwargs)
    return tokenizer.decode(hypotheses[0], skip_special_tokens=True)
```

The model can be applied to each of the pretraining tasks:

```
print(generate('translate ru-en | Каждый охотник желает знать, где сидит фазан.'))
```
f72c04bc0296b3870d28a86bbfe67b99
mit
['russian']
false
```
# Each hunter wants to know, where he is.
print(generate('paraphrase | Каждый охотник желает знать, где сидит фазан.',
               encoder_no_repeat_ngram_size=1, repetition_penalty=0.5, no_repeat_ngram_size=1))
```
db2b02876748bf51d1b55c297aea84f8
mit
['russian']
false
```
# Каждый охотник знает, что фазан сидит.
print(generate('simplify | Местным продуктом-специалитетом с защищённым географическим наименованием по происхождению считается люнебургский степной барашек.',
               max_length=32))
```
56c9d07d3abf78be2fc2a2b5fe8374c1
mit
['russian']
false
```
# я хочу познакомиться с девушкой!!!!!!!!
print(generate("comprehend | На фоне земельного конфликта между владельцами овец и ранчеро разворачивается история любви овцевода Моргана Лейна, "
               "прибывшего в США из Австралии, и Марии Синглетон, владелицы богатого скотоводческого ранчо. Вопрос: откуда приехал Морган?"))
```
833e3dbb7a53e538ac30aeed2a6d22f4
mit
['russian']
false
```
# из Австралии
print(generate("ask | На фоне земельного конфликта между владельцами овец и ранчеро разворачивается история любви овцевода Моргана Лейна, "
               "прибывшего в США из Австралии, и Марии Синглетон, владелицы богатого скотоводческого ранчо.", max_length=32))
```
58646b17b295027787702b6d248bedbc
mit
['russian']
false
```
# Что разворачивается на фоне земельного конфликта между владельцами овец и ранчеро?
print(generate("headline | На фоне земельного конфликта между владельцами овец и ранчеро разворачивается история любви овцевода Моргана Лейна, "
               "прибывшего в США из Австралии, и Марии Синглетон, владелицы богатого скотоводческого ранчо.", max_length=32))
```
29ebe612e0557bc62654e9286ef7f5d7
apache-2.0
['generated_from_trainer']
false
bart-model2-1510-e8

This model is a fine-tuned version of [theojolliffe/bart-paraphrase-v4-e1-feedback](https://huggingface.co/theojolliffe/bart-paraphrase-v4-e1-feedback) on the None dataset. It achieves the following results on the evaluation set:
- Loss: 0.3655
- Rouge1: 61.3129
- Rouge2: 57.3305
- Rougel: 60.8028
- Rougelsum: 60.5111
- Gen Len: 20.0
bfd42c47fc3d2fa44641a3f0f62e7f67
apache-2.0
['generated_from_trainer']
false
Training results

| Training Loss | Epoch | Step | Validation Loss | Rouge1  | Rouge2  | Rougel  | Rougelsum | Gen Len |
|:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:-------:|:---------:|:-------:|
| No log        | 1.0   | 409  | 0.4572          | 56.7459 | 47.5708 | 54.6144 | 54.9188   | 20.0    |
| 0.5704        | 2.0   | 818  | 0.4349          | 58.4751 | 50.7958 | 56.5975 | 56.941    | 20.0    |
| 0.1956        | 3.0   | 1227 | 0.3952          | 61.6499 | 55.4368 | 60.157  | 60.2961   | 20.0    |
| 0.1177        | 4.0   | 1636 | 0.3685          | 59.8851 | 54.1843 | 58.6443 | 58.8519   | 20.0    |
| 0.0752        | 5.0   | 2045 | 0.3654          | 60.975  | 55.9124 | 60.0336 | 59.8978   | 20.0    |
| 0.0752        | 6.0   | 2454 | 0.3525          | 61.268  | 55.7247 | 60.2274 | 60.1515   | 20.0    |
| 0.0526        | 7.0   | 2863 | 0.3519          | 61.6626 | 57.9242 | 61.0212 | 60.8486   | 20.0    |
| 0.0388        | 8.0   | 3272 | 0.3655          | 61.3129 | 57.3305 | 60.8028 | 60.5111   | 20.0    |
409c7a9fdf68d627a796f3307e632c8d
apache-2.0
['translation']
false
opus-mt-fi-nl

* source languages: fi
* target languages: nl
* OPUS readme: [fi-nl](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/fi-nl/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-02-26.zip](https://object.pouta.csc.fi/OPUS-MT-models/fi-nl/opus-2020-02-26.zip)
* test set translations: [opus-2020-02-26.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/fi-nl/opus-2020-02-26.test.txt)
* test set scores: [opus-2020-02-26.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/fi-nl/opus-2020-02-26.eval.txt)
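
A minimal usage sketch with the explicit Marian classes, assuming the standard `Helsinki-NLP/opus-mt-fi-nl` repo id for this pair; the Finnish input sentence is just an example:

```python
from transformers import MarianMTModel, MarianTokenizer

tok = MarianTokenizer.from_pretrained("Helsinki-NLP/opus-mt-fi-nl")
model = MarianMTModel.from_pretrained("Helsinki-NLP/opus-mt-fi-nl")

batch = tok(["Hyvää huomenta!"], return_tensors="pt", padding=True)
out = model.generate(**batch)
print(tok.decode(out[0], skip_special_tokens=True))  # Finnish -> Dutch
```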
e2c2a97be9abe626a5d2139d89ad9cb0