license: string (2-30 chars)
tags: string (2-513 chars)
is_nc: bool (1 class)
readme_section: string (201-597k chars)
hash: string (32 chars)
mit
['question-answering', 'roberta', 'roberta-base']
false
Example Usage ```python from transformers import pipeline qa_pipeline = pipeline( "question-answering", model="csarron/roberta-base-squad-v1", tokenizer="csarron/roberta-base-squad-v1" ) predictions = qa_pipeline({ 'context': "The game was played on February 7, 2016 at Levi's Stadium in the San Francisco Bay Area at Santa Clara, California.", 'question': "What day was the game played on?" }) print(predictions)
e17be22d3ec69928df502821d41b2c7f
mit
['question-answering', 'roberta', 'roberta-base']
false
{'score': 0.8625259399414062, 'start': 23, 'end': 39, 'answer': 'February 7, 2016'} ``` > Created by [Qingqing Cao](https://awk.ai/) | [GitHub](https://github.com/csarron) | [Twitter](https://twitter.com/sysnlp) > Made with ❤️ in New York.
7ee8e82ffcb8ed1ff3111b7adb0eaff1
cc-by-4.0
['translation']
false
Model Details - **Model Description:** - **Developed by:** Language Technology Research Group at the University of Helsinki - **Model Type:** Translation - **Language(s):** - Source Language: Chinese - Target Language: English - **License:** CC-BY-4.0 - **Resources for more information:** - [GitHub Repo](https://github.com/Helsinki-NLP/OPUS-MT-train)
b388f1e61e6cba275061e209552800ce
cc-by-4.0
['translation']
false
Risks, Limitations and Biases **CONTENT WARNING: Readers should be aware this section contains content that is disturbing, offensive, and can propagate historical and current stereotypes.** Significant research has explored bias and fairness issues with language models (see, e.g., [Sheng et al. (2021)](https://aclanthology.org/2021.acl-long.330.pdf) and [Bender et al. (2021)](https://dl.acm.org/doi/pdf/10.1145/3442188.3445922)). Further details about the dataset for this model can be found in the OPUS readme: [zho-eng](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/zho-eng/README.md)
e62baf494ebb9f01e1194df46c4de04e
cc-by-4.0
['translation']
false
System Information * helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535 * transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b * port_machine: brutasse * port_time: 2020-08-21-14:41 * src_multilingual: False * tgt_multilingual: False
14c4d8d769a4fe7b90d78f362f8cefe5
cc-by-4.0
['translation']
false
Preprocessing * pre-processing: normalization + SentencePiece (spm32k,spm32k) * ref_len: 82826.0 * dataset: [opus](https://github.com/Helsinki-NLP/Opus-MT) * download original weights: [opus-2020-07-17.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/zho-eng/opus-2020-07-17.zip) * test set translations: [opus-2020-07-17.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/zho-eng/opus-2020-07-17.test.txt)
99480e6adcfcd0b248287b81340595c3
cc-by-4.0
['translation']
false
Citation Information ```bibtex @InProceedings{TiedemannThottingal:EAMT2020, author = {J{\"o}rg Tiedemann and Santhosh Thottingal}, title = {{OPUS-MT} — {B}uilding open translation services for the {W}orld}, booktitle = {Proceedings of the 22nd Annual Conference of the European Association for Machine Translation (EAMT)}, year = {2020}, address = {Lisbon, Portugal} } ```
ee1d96c203309883b9bef3909c45df31
cc-by-4.0
['translation']
false
How to Get Started With the Model ```python from transformers import AutoTokenizer, AutoModelForSeq2SeqLM tokenizer = AutoTokenizer.from_pretrained("Helsinki-NLP/opus-mt-zh-en") model = AutoModelForSeq2SeqLM.from_pretrained("Helsinki-NLP/opus-mt-zh-en") ```
5a31ceed492064b6a9cb8bcbdb784cc3
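The snippet in the row above only loads the tokenizer and model; a minimal translation call, assuming those two objects and an invented example sentence (not taken from the card), could look like this:
```python
# Sketch only: the input sentence is an invented example, not from the model card.
inputs = tokenizer("我喜欢学习外语。", return_tensors="pt")
generated = model.generate(**inputs)
print(tokenizer.decode(generated[0], skip_special_tokens=True))
```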
apache-2.0
['generated_from_trainer']
false
wav2vec2-tcrs This model is a fine-tuned version of [facebook/wav2vec2-large-lv60](https://huggingface.co/facebook/wav2vec2-large-lv60) on the None dataset. It achieves the following results on the evaluation set: - Loss: 2.9550 - Wer: 1.0657
944441e5ff1c55597ed39ed3f61307f9
apache-2.0
['generated_from_trainer']
false
Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 1 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 1000 - num_epochs: 100 - mixed_precision_training: Native AMP
6092b594a1f6683efa610464c6464762
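Hyperparameter lists like the one above are emitted by the transformers Trainer (these rows are tagged generated_from_trainer). As a rough, hedged sketch only (not from the card; output_dir and the fp16 flag are assumptions), the listed values correspond to a TrainingArguments configuration along these lines:
```python
from transformers import TrainingArguments

# Illustrative reconstruction of the listed hyperparameters; batch sizes are per device.
args = TrainingArguments(
    output_dir="wav2vec2-tcrs",   # placeholder path, not stated in the card
    learning_rate=1e-4,
    per_device_train_batch_size=1,
    per_device_eval_batch_size=8,
    seed=42,
    lr_scheduler_type="linear",
    warmup_steps=1000,
    num_train_epochs=100,
    fp16=True,                    # assumed equivalent of "Native AMP" mixed precision
)
```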
apache-2.0
['generated_from_trainer']
false
Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:-----:|:---------------:|:------:| | 13.6613 | 3.38 | 500 | 3.2415 | 1.0 | | 2.9524 | 6.76 | 1000 | 3.0199 | 1.0 | | 2.9425 | 10.14 | 1500 | 3.0673 | 1.0 | | 2.9387 | 13.51 | 2000 | 3.0151 | 1.0 | | 2.9384 | 16.89 | 2500 | 3.0320 | 1.0 | | 2.929 | 20.27 | 3000 | 2.9691 | 1.0 | | 2.9194 | 23.65 | 3500 | 2.9596 | 1.0 | | 2.9079 | 27.03 | 4000 | 2.9279 | 1.0 | | 2.8957 | 30.41 | 4500 | 2.9647 | 1.0 | | 2.8385 | 33.78 | 5000 | 2.8114 | 1.0193 | | 2.6546 | 37.16 | 5500 | 2.6744 | 1.0983 | | 2.5866 | 40.54 | 6000 | 2.6192 | 1.1071 | | 2.5475 | 43.92 | 6500 | 2.5777 | 1.0950 | | 2.5177 | 47.3 | 7000 | 2.5845 | 1.1220 | | 2.482 | 50.68 | 7500 | 2.5730 | 1.1264 | | 2.4343 | 54.05 | 8000 | 2.5722 | 1.0955 | | 2.3754 | 57.43 | 8500 | 2.5781 | 1.1353 | | 2.3055 | 60.81 | 9000 | 2.6177 | 1.0972 | | 2.2446 | 64.19 | 9500 | 2.6351 | 1.1027 | | 2.1625 | 67.57 | 10000 | 2.6924 | 1.0756 | | 2.1078 | 70.95 | 10500 | 2.6817 | 1.0795 | | 2.0366 | 74.32 | 11000 | 2.7629 | 1.0657 | | 1.9899 | 77.7 | 11500 | 2.7972 | 1.0845 | | 1.9309 | 81.08 | 12000 | 2.8450 | 1.0734 | | 1.8861 | 84.46 | 12500 | 2.8703 | 1.0668 | | 1.8437 | 87.84 | 13000 | 2.9308 | 1.0917 | | 1.8192 | 91.22 | 13500 | 2.9298 | 1.0701 | | 1.7952 | 94.59 | 14000 | 2.9488 | 1.0685 | | 1.7745 | 97.97 | 14500 | 2.9550 | 1.0657 |
e340d6d0018812daaf19d2d195bdfe86
apache-2.0
['generated_from_trainer']
false
Training hyperparameters The following hyperparameters were used during training: - learning_rate: 3e-05 - train_batch_size: 24 - eval_batch_size: 24 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - training_steps: 200
8a982adbc1de574ce60dc9eda975f7d9
apache-2.0
['automatic-speech-recognition', 'fa']
false
exp_w2v2t_fa_vp-fr_s165 Fine-tuned [facebook/wav2vec2-large-fr-voxpopuli](https://huggingface.co/facebook/wav2vec2-large-fr-voxpopuli) for speech recognition using the train split of [Common Voice 7.0 (fa)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0). When using this model, make sure that your speech input is sampled at 16kHz. This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
9642685e22f9df4487daab5b3881ee2d
apache-2.0
['generated_from_trainer']
false
mt5-base-finetuned-rabbi-kook-nave-4 This model is a fine-tuned version of [google/mt5-base](https://huggingface.co/google/mt5-base) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: nan
1e3fc30e251b3ebccd84659f18202dbf
apache-2.0
['generated_from_trainer']
false
Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 2 - eval_batch_size: 2 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 - mixed_precision_training: Native AMP
cdb6eda2a36d5a31359c7f18821a4ad2
apache-2.0
['generated_from_trainer']
false
Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 0.0 | 1.0 | 1784 | nan | | 0.0 | 2.0 | 3568 | nan | | 0.0 | 3.0 | 5352 | nan | | 0.0 | 4.0 | 7136 | nan | | 0.0 | 5.0 | 8920 | nan |
ddd469165da3b5d25e0dee7105c33580
apache-2.0
['Token Classification']
false
About the Model An English Named Entity Recognition model, trained on Maccrobat to recognize bio-medical entities (107 entity types) from a given text corpus (case reports etc.). This model was built on top of distilbert-base-uncased - Dataset: Maccrobat https://figshare.com/articles/dataset/MACCROBAT2018/9764942 - Carbon emission: 0.0279399890043426 Kg - Training time: 30.16527 minutes - GPU used: 1 x GeForce RTX 3060 Laptop GPU Check out the tutorial video for an explanation of this model and the corresponding python library: https://youtu.be/xpiDPdBpS18
e99a4e3730f4ff98f5b3b690ea714af9
apache-2.0
['Token Classification']
false
Usage The easiest way is to use the inference API from Hugging Face; the second method is the pipeline object offered by the transformers library. ```python from transformers import pipeline from transformers import AutoTokenizer, AutoModelForTokenClassification tokenizer = AutoTokenizer.from_pretrained("d4data/biomedical-ner-all") model = AutoModelForTokenClassification.from_pretrained("d4data/biomedical-ner-all") pipe = pipeline("ner", model=model, tokenizer=tokenizer, aggregation_strategy="simple")
24f4932a6e0360ad678c3aca7bb16fdc
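The code in the row above builds the `pipe` object but the chunk ends before it is called; a hedged continuation, assuming that object and an invented input sentence, might look like this:
```python
# Invented example text, not taken from the model card.
entities = pipe("The patient was given 75 mg of aspirin daily for chest pain.")
for ent in entities:
    # with aggregation_strategy="simple", each entry carries entity_group, score, word, start, end
    print(ent["entity_group"], ent["word"], round(float(ent["score"]), 3))
```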
apache-2.0
['Token Classification']
false
Author This model is part of the Research topic "AI in Biomedical field" conducted by Deepak John Reji, Shaina Raza. If you use this work (code, model or dataset), please star the repository at: > https://github.com/dreji18/Bio-Epidemiology-NER
94d2d82c59e2b9ca0aa35cf130aa56de
apache-2.0
[]
false
bert-base-en-de-cased We are sharing smaller versions of [bert-base-multilingual-cased](https://huggingface.co/bert-base-multilingual-cased) that handle a custom number of languages. Unlike [distilbert-base-multilingual-cased](https://huggingface.co/distilbert-base-multilingual-cased), our versions give exactly the same representations produced by the original model which preserves the original accuracy. For more information please visit our paper: [Load What You Need: Smaller Versions of Multilingual BERT](https://www.aclweb.org/anthology/2020.sustainlp-1.16.pdf).
e2576ac87df5f95b771530db694031d5
apache-2.0
[]
false
How to use ```python from transformers import AutoTokenizer, AutoModel tokenizer = AutoTokenizer.from_pretrained("Geotrend/bert-base-en-de-cased") model = AutoModel.from_pretrained("Geotrend/bert-base-en-de-cased") ``` To generate other smaller versions of multilingual transformers please visit [our Github repo](https://github.com/Geotrend-research/smaller-transformers).
12b8f5f94f8aa3767fd156a4f7eefc02
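The snippet above only loads the smaller bilingual BERT; to show what the encoder returns, a short hedged continuation (example sentence invented, not from the card) could be:
```python
import torch

# Invented example input; the model returns contextual token embeddings.
inputs = tokenizer("Berlin is the capital of Germany.", return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)
print(outputs.last_hidden_state.shape)  # (batch_size, sequence_length, hidden_size)
```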
apache-2.0
[]
false
How to cite ```bibtex @inproceedings{smallermbert, title={Load What You Need: Smaller Versions of Multilingual BERT}, author={Abdaoui, Amine and Pradel, Camille and Sigel, Grégoire}, booktitle={SustaiNLP / EMNLP}, year={2020} } ```
9fab4189bf2022c59457ff9f8574069c
cc-by-4.0
[]
false
HindTweetBERT A HindBERT (l3cube-pune/hindi-bert-v2) model finetuned on Hindi Tweets. More details on the dataset, models, and baseline results can be found in our [paper](https://arxiv.org/abs/2210.04267). ``` @article{gokhale2022spread, title={Spread Love Not Hate: Undermining the Importance of Hateful Pre-training for Hate Speech Detection}, author={Gokhale, Omkar and Kane, Aditya and Patankar, Shantanu and Chavan, Tanmay and Joshi, Raviraj}, journal={arXiv preprint arXiv:2210.04267}, year={2022} } ```
cbd36fdb84c1d79c1aa802ff7b9cc9fd
apache-2.0
['generated_from_trainer']
false
Graphcore/lxmert-vqa-uncased Optimum Graphcore is a new open-source library and toolkit that enables developers to access IPU-optimized models certified by Hugging Face. It is an extension of Transformers, providing a set of performance optimization tools enabling maximum efficiency to train and run models on Graphcore’s IPUs - a completely new kind of massively parallel processor to accelerate machine intelligence. Learn more about how to train Transformer models faster with IPUs at [hf.co/hardware/graphcore](https://huggingface.co/hardware/graphcore). Through Hugging Face Optimum, Graphcore released ready-to-use IPU-trained model checkpoints and IPU configuration files to make it easy to train models with maximum efficiency on the IPU. Optimum shortens the development lifecycle of your AI models by letting you plug-and-play any public dataset and allows seamless integration with our state-of-the-art hardware, giving you a quicker time-to-value for your AI project.
9eaa013bdbd592f045c8d709b750d1f5
apache-2.0
['generated_from_trainer']
false
Model description LXMERT is a transformer model for learning vision-and-language cross-modality representations. It is a Transformer model with three encoders: an object relationship encoder, a language encoder, and a cross-modality encoder. It is pretrained via a combination of masked language modelling, visual-language text alignment, ROI-feature regression, masked visual-attribute modelling, masked visual-object modelling, and visual-question answering objectives. It achieves state-of-the-art results on VQA and GQA. Paper link: [LXMERT: Learning Cross-Modality Encoder Representations from Transformers](https://arxiv.org/pdf/1908.07490.pdf)
3f10af9af3498bf168369411b98ddaa4
apache-2.0
['generated_from_trainer']
false
Intended uses & limitations This model is a fine-tuned version of [unc-nlp/lxmert-base-uncased](https://huggingface.co/unc-nlp/lxmert-base-uncased) on the [Graphcore/vqa-lxmert](https://huggingface.co/datasets/Graphcore/vqa-lxmert) dataset. It achieves the following results on the evaluation set: - Loss: 0.0009 - Accuracy: 0.7242
69472d0bd82e0ea6f3106470d14c9d25
apache-2.0
['generated_from_trainer']
false
Training procedure Trained on 16 Graphcore Mk2 IPUs using [optimum-graphcore](https://github.com/huggingface/optimum-graphcore). Command line: ``` python examples/question-answering/run_vqa.py \ --model_name_or_path unc-nlp/lxmert-base-uncased \ --ipu_config_name Graphcore/lxmert-base-ipu \ --dataset_name Graphcore/vqa-lxmert \ --do_train \ --do_eval \ --max_seq_length 512 \ --per_device_train_batch_size 1 \ --num_train_epochs 4 \ --dataloader_num_workers 64 \ --logging_steps 5 \ --learning_rate 5e-5 \ --lr_scheduler_type linear \ --loss_scaling 16384 \ --weight_decay 0.01 \ --warmup_ratio 0.1 \ --output_dir /tmp/vqa/ \ --dataloader_drop_last \ --replace_qa_head \ --pod_type pod16 ```
dda10e1c8f269d20699bfc7e408dea10
apache-2.0
['generated_from_trainer']
false
Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 1 - eval_batch_size: 8 - seed: 42 - distributed_type: IPU - total_train_batch_size: 64 - total_eval_batch_size: 128 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 4.0 - training precision: Mixed Precision
28928b9a54928588173b468d613d66ee
apache-2.0
['generated_from_trainer']
false
Training results ``` ***** train metrics ***** "epoch": 4.0, "train_loss": 0.0060005393999575125, "train_runtime": 13854.802, "train_samples": 443757, "train_samples_per_second": 128.116, "train_steps_per_second": 2.002 ***** eval metrics ***** "eval_accuracy": 0.7242196202278137, "eval_loss": 0.0008745193481445312, "eval_samples": 214354, ```
e174d5954f58a0a712ffdd299a4b8682
other
['vision', 'image-classification']
false
MobileViT (small-sized model) MobileViT model pre-trained on ImageNet-1k at resolution 256x256. It was introduced in [MobileViT: Light-weight, General-purpose, and Mobile-friendly Vision Transformer](https://arxiv.org/abs/2110.02178) by Sachin Mehta and Mohammad Rastegari, and first released in [this repository](https://github.com/apple/ml-cvnets). The license used is [Apple sample code license](https://github.com/apple/ml-cvnets/blob/main/LICENSE). Disclaimer: The team releasing MobileViT did not write a model card for this model so this model card has been written by the Hugging Face team.
86a23aae840362ad815a7cc29f0d6f65
other
['vision', 'image-classification']
false
Model description MobileViT is a light-weight, low latency convolutional neural network that combines MobileNetV2-style layers with a new block that replaces local processing in convolutions with global processing using transformers. As with ViT (Vision Transformer), the image data is converted into flattened patches before it is processed by the transformer layers. Afterwards, however, the patches are "unflattened" back into feature maps. This allows the MobileViT-block to be placed anywhere inside a CNN. MobileViT does not require any positional embeddings.
3446a67e1b529d8f1fdedac5b271673a
other
['vision', 'image-classification']
false
Intended uses & limitations You can use the raw model for image classification. See the [model hub](https://huggingface.co/models?search=mobilevit) to look for fine-tuned versions on a task that interests you.
133c17ea16b2615da0730f8f86b7fa51
other
['vision', 'image-classification']
false
How to use Here is how to use this model to classify an image of the COCO 2017 dataset into one of the 1,000 ImageNet classes: ```python from transformers import MobileViTFeatureExtractor, MobileViTForImageClassification from PIL import Image import requests url = 'http://images.cocodataset.org/val2017/000000039769.jpg' image = Image.open(requests.get(url, stream=True).raw) feature_extractor = MobileViTFeatureExtractor.from_pretrained('Matthijs/mobilevit-small') model = MobileViTForImageClassification.from_pretrained('Matthijs/mobilevit-small') inputs = feature_extractor(images=image, return_tensors="pt") outputs = model(**inputs) logits = outputs.logits
c309ac03b95adbc90dcee3a83ef70f20
other
['vision', 'image-classification']
false
model predicts one of the 1000 ImageNet classes predicted_class_idx = logits.argmax(-1).item() print("Predicted class:", model.config.id2label[predicted_class_idx]) ``` Currently, both the feature extractor and model support PyTorch.
d4af970992005ccff46bc96159968ba8
other
['vision', 'image-classification']
false
Preprocessing Training requires only basic data augmentation, i.e. random resized cropping and horizontal flipping. To learn multi-scale representations without requiring fine-tuning, a multi-scale sampler was used during training, with image sizes randomly sampled from: (160, 160), (192, 192), (256, 256), (288, 288), (320, 320). At inference time, images are resized/rescaled to the same resolution (288x288), and center-cropped at 256x256. Pixels are normalized to the range [0, 1]. Images are expected to be in BGR pixel order, not RGB.
55998f41314cb15a612380a82914aec2
other
['vision', 'image-classification']
false
Pretraining The MobileViT networks are trained from scratch for 300 epochs on ImageNet-1k on 8 NVIDIA GPUs with an effective batch size of 1024 and learning rate warmup for 3k steps, followed by cosine annealing. Also used were label smoothing cross-entropy loss and L2 weight decay. Training resolution varies from 160x160 to 320x320, using multi-scale sampling.
b5439f4a9463130420e1abd4f8a8d696
other
['vision', 'image-classification']
false
| Model | ImageNet top-1 accuracy | ImageNet top-5 accuracy | # params | URL | |------------------|-------------------------|-------------------------|-----------|----------------------------------------------------| | MobileViT-XXS | 69.0 | 88.9 | 1.3 M | https://huggingface.co/Matthijs/mobilevit-xx-small | | MobileViT-XS | 74.8 | 92.3 | 2.3 M | https://huggingface.co/Matthijs/mobilevit-x-small | | **MobileViT-S** | **78.4** | **94.1** | **5.6 M** | https://huggingface.co/Matthijs/mobilevit-small |
da38d2f2c66403f000e5f13e73301aab
other
['vision', 'image-classification']
false
BibTeX entry and citation info ```bibtex @inproceedings{vision-transformer, title = {MobileViT: Light-weight, General-purpose, and Mobile-friendly Vision Transformer}, author = {Sachin Mehta and Mohammad Rastegari}, year = {2022}, URL = {https://arxiv.org/abs/2110.02178} } ```
8eb282d9c9367338be5376c3e804c524
mit
['generated_from_keras_callback']
false
W4nkel/distilbertBase128KTrain This model is a fine-tuned version of [dbmdz/distilbert-base-turkish-cased](https://huggingface.co/dbmdz/distilbert-base-turkish-cased) on an unknown dataset. It achieves the following results on the evaluation set: - Train Loss: 0.7462 - Validation Loss: 0.5115 - Train Accuracy: 0.7675 - Epoch: 0
b2c952ee8f733e5daaebb630c71460ce
mit
['generated_from_keras_callback']
false
Training hyperparameters The following hyperparameters were used during training: - optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': None, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': False, 'is_legacy_optimizer': False, 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 1500, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False} - training_precision: float32
df86bd1bb2cfec8f9ca412db530b0be3
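The optimizer dictionary in the row above is a serialized Keras config; as a hedged sketch (not taken from the card), it roughly corresponds to building the schedule and optimizer like this:
```python
import tensorflow as tf

# Approximate reconstruction of the serialized optimizer config listed above.
schedule = tf.keras.optimizers.schedules.PolynomialDecay(
    initial_learning_rate=2e-05,
    decay_steps=1500,
    end_learning_rate=0.0,
    power=1.0,
    cycle=False,
)
optimizer = tf.keras.optimizers.Adam(
    learning_rate=schedule, beta_1=0.9, beta_2=0.999, epsilon=1e-08, amsgrad=False
)
```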
apache-2.0
['generated_from_trainer']
false
Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 4 - eval_batch_size: 8 - seed: 4 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 35.0
4f934d1f3f7fa71660de43c105ff35c0
apache-2.0
['generated_from_trainer']
false
chinese-macbert-base-finetuned-ner This model is a fine-tuned version of [hfl/chinese-macbert-base](https://huggingface.co/hfl/chinese-macbert-base) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.2420 - F1: 0.9224
8ce7c4bdb19289669f4d725e3022ef89
apache-2.0
['generated_from_trainer']
false
Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 57 - eval_batch_size: 57 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 20
5805f40b16a3a3a6f6631bc273756f67
apache-2.0
['generated_from_trainer']
false
Training results | Training Loss | Epoch | Step | Validation Loss | F1 | |:-------------:|:-----:|:----:|:---------------:|:------:| | 3.6141 | 1.0 | 1 | 2.6454 | 0.0 | | 2.7076 | 2.0 | 2 | 2.0034 | 0.0 | | 2.0979 | 3.0 | 3 | 1.6276 | 0.0 | | 1.7264 | 4.0 | 4 | 1.3419 | 0.3522 | | 1.4691 | 5.0 | 5 | 1.1239 | 0.4091 | | 1.2504 | 6.0 | 6 | 0.9532 | 0.5514 | | 1.0798 | 7.0 | 7 | 0.8129 | 0.5895 | | 0.9279 | 8.0 | 8 | 0.6987 | 0.625 | | 0.8179 | 9.0 | 9 | 0.6081 | 0.6392 | | 0.7202 | 10.0 | 10 | 0.5346 | 0.6667 | | 0.6377 | 11.0 | 11 | 0.4731 | 0.7451 | | 0.5751 | 12.0 | 12 | 0.4226 | 0.7925 | | 0.5202 | 13.0 | 13 | 0.3804 | 0.7685 | | 0.4733 | 14.0 | 14 | 0.3447 | 0.7928 | | 0.44 | 15.0 | 15 | 0.3145 | 0.8509 | | 0.4047 | 16.0 | 16 | 0.2899 | 0.8918 | | 0.3773 | 17.0 | 17 | 0.2707 | 0.8966 | | 0.353 | 18.0 | 18 | 0.2563 | 0.9052 | | 0.3413 | 19.0 | 19 | 0.2468 | 0.9224 | | 0.3314 | 20.0 | 20 | 0.2420 | 0.9224 |
cd69945852a937901fc2ed15301b3a3a
cc-by-4.0
['translation', 'opus-mt-tc']
false
opus-mt-tc-big-en-pt Neural machine translation model for translating from English (en) to Portuguese (pt). This model is part of the [OPUS-MT project](https://github.com/Helsinki-NLP/Opus-MT), an effort to make neural machine translation models widely available and accessible for many languages in the world. All models are originally trained using the amazing framework of [Marian NMT](https://marian-nmt.github.io/), an efficient NMT implementation written in pure C++. The models have been converted to pyTorch using the transformers library by huggingface. Training data is taken from [OPUS](https://opus.nlpl.eu/) and training pipelines use the procedures of [OPUS-MT-train](https://github.com/Helsinki-NLP/Opus-MT-train). * Publications: [OPUS-MT – Building open translation services for the World](https://aclanthology.org/2020.eamt-1.61/) and [The Tatoeba Translation Challenge – Realistic Data Sets for Low Resource and Multilingual MT](https://aclanthology.org/2020.wmt-1.139/) (Please, cite if you use this model.) ``` @inproceedings{tiedemann-thottingal-2020-opus, title = "{OPUS}-{MT} {--} Building open translation services for the World", author = {Tiedemann, J{\"o}rg and Thottingal, Santhosh}, booktitle = "Proceedings of the 22nd Annual Conference of the European Association for Machine Translation", month = nov, year = "2020", address = "Lisboa, Portugal", publisher = "European Association for Machine Translation", url = "https://aclanthology.org/2020.eamt-1.61", pages = "479--480", } @inproceedings{tiedemann-2020-tatoeba, title = "The Tatoeba Translation Challenge {--} Realistic Data Sets for Low Resource and Multilingual {MT}", author = {Tiedemann, J{\"o}rg}, booktitle = "Proceedings of the Fifth Conference on Machine Translation", month = nov, year = "2020", address = "Online", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2020.wmt-1.139", pages = "1174--1182", } ```
5d72c12d9391e6eccd39b59aa8c0a08b
cc-by-4.0
['translation', 'opus-mt-tc']
false
Model info * Release: 2022-03-13 * source language(s): eng * target language(s): pob por * valid target language labels: >>pob<< >>por<< * model: transformer-big * data: opusTCv20210807+bt ([source](https://github.com/Helsinki-NLP/Tatoeba-Challenge)) * tokenization: SentencePiece (spm32k,spm32k) * original model: [opusTCv20210807+bt_transformer-big_2022-03-13.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-por/opusTCv20210807+bt_transformer-big_2022-03-13.zip) * more information released models: [OPUS-MT eng-por README](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/eng-por/README.md) * more information about the model: [MarianMT](https://huggingface.co/docs/transformers/model_doc/marian) This is a multilingual translation model with multiple target languages. A sentence initial language token is required in the form of `>>id<<` (id = valid target language ID), e.g. `>>pob<<`
8fed331a01bff0f19fe8bea148799be7
cc-by-4.0
['translation', 'opus-mt-tc']
false
Usage A short example code: ```python from transformers import MarianMTModel, MarianTokenizer src_text = [ ">>por<< Tom tried to stab me.", ">>por<< He has been to Hawaii several times." ] model_name = "pytorch-models/opus-mt-tc-big-en-pt" tokenizer = MarianTokenizer.from_pretrained(model_name) model = MarianMTModel.from_pretrained(model_name) translated = model.generate(**tokenizer(src_text, return_tensors="pt", padding=True)) for t in translated: print( tokenizer.decode(t, skip_special_tokens=True) )
c298b4e3dd992f587c8b625728b9df07
cc-by-4.0
['translation', 'opus-mt-tc']
false
Ele já esteve no Havaí várias vezes. ``` You can also use OPUS-MT models with the transformers pipelines, for example: ```python from transformers import pipeline pipe = pipeline("translation", model="Helsinki-NLP/opus-mt-tc-big-en-pt") print(pipe(">>por<< Tom tried to stab me."))
f147db48b204027afbe3105de7258bd9
cc-by-4.0
['translation', 'opus-mt-tc']
false
Benchmarks * test set translations: [opusTCv20210807+bt_transformer-big_2022-03-13.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-por/opusTCv20210807+bt_transformer-big_2022-03-13.test.txt) * test set scores: [opusTCv20210807+bt_transformer-big_2022-03-13.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/eng-por/opusTCv20210807+bt_transformer-big_2022-03-13.eval.txt) * benchmark results: [benchmark_results.txt](benchmark_results.txt) * benchmark output: [benchmark_translations.zip](benchmark_translations.zip) | langpair | testset | chr-F | BLEU | #sent | #words |
33e29625fd7209a4658d01129f4d5a89
cc-by-4.0
['translation', 'opus-mt-tc']
false
| langpair | testset | chr-F | BLEU | #sent | #words | |----------|---------|-------|-------|-------|--------| | eng-por | tatoeba-test-v2021-08-07 | 0.69320 | 49.6 | 13222 | 105265 | | eng-por | flores101-devtest | 0.71673 | 50.4 | 1012 | 26519 |
65a1fcb4d6bc19e1bf924d38dd279968
cc-by-4.0
['translation', 'opus-mt-tc']
false
/resource/projects/2866), by the [FoTran project](https://www.helsinki.fi/en/researchgroups/natural-language-understanding-with-cross-lingual-grounding), funded by the European Research Council (ERC) under the European Union’s Horizon 2020 research and innovation programme (grant agreement No 771113), and the [MeMAD project](https://memad.eu/), funded by the European Union’s Horizon 2020 Research and Innovation Programme under grant agreement No 780069. We are also grateful for the generous computational resources and IT infrastructure provided by [CSC -- IT Center for Science](https://www.csc.fi/), Finland.
5d24f12dbe35b245c44f9ab6dd18827e
apache-2.0
['generated_from_trainer']
false
bert-base-uncased-mrpc This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the GLUE MRPC dataset. It achieves the following results on the evaluation set: - Loss: 0.3693 - Accuracy: 0.8407 - F1: 0.8825 - Combined Score: 0.8616
dc7ff7f17893324667bcf242d8ff2a63
apache-2.0
['generated_from_trainer']
false
Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 128 - eval_batch_size: 128 - seed: 10 - distributed_type: multi-GPU - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 50
76273477242fb1b11c4e304efaef23e3
apache-2.0
['generated_from_trainer']
false
Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Combined Score | |:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|:--------------:| | 0.5716 | 1.0 | 29 | 0.5020 | 0.7475 | 0.8437 | 0.7956 | | 0.3969 | 2.0 | 58 | 0.3693 | 0.8407 | 0.8825 | 0.8616 | | 0.2182 | 3.0 | 87 | 0.5412 | 0.8235 | 0.88 | 0.8518 | | 0.1135 | 4.0 | 116 | 0.5104 | 0.8260 | 0.8748 | 0.8504 | | 0.0772 | 5.0 | 145 | 0.6428 | 0.8186 | 0.8655 | 0.8420 | | 0.049 | 6.0 | 174 | 0.6366 | 0.8260 | 0.8725 | 0.8493 | | 0.0356 | 7.0 | 203 | 0.8414 | 0.8358 | 0.8896 | 0.8627 | | 0.0335 | 8.0 | 232 | 0.8573 | 0.8137 | 0.8676 | 0.8407 | | 0.0234 | 9.0 | 261 | 0.8893 | 0.8309 | 0.8856 | 0.8582 |
4f0ae90701ad74c31cc20d47b1b475e4
apache-2.0
['generated_from_trainer']
false
distilbert-base-uncased-finetuned-clinc This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the clinc_oos dataset. The model is used in Chapter 8: Making Transformers Efficient in Production in the [NLP with Transformers book](https://learning.oreilly.com/library/view/natural-language-processing/9781098103231/). You can find the full code in the accompanying [Github repository](https://github.com/nlp-with-transformers/notebooks/blob/main/08_model-compression.ipynb). It achieves the following results on the evaluation set: - Loss: 0.7773 - Accuracy: 0.9174
9b8dc6a0ebf4943cd8c6104070461860
apache-2.0
['generated_from_trainer']
false
Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 48 - eval_batch_size: 48 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5
a64ba5f4b04ac071dc321e35fb889219
apache-2.0
['generated_from_trainer']
false
Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 4.2923 | 1.0 | 318 | 3.2893 | 0.7423 | | 2.6307 | 2.0 | 636 | 1.8837 | 0.8281 | | 1.5483 | 3.0 | 954 | 1.1583 | 0.8968 | | 1.0153 | 4.0 | 1272 | 0.8618 | 0.9094 | | 0.7958 | 5.0 | 1590 | 0.7773 | 0.9174 |
019a2169a73258ff38773efbe9f215a6
mit
['generated_from_trainer']
false
xlm-roberta-base-finetuned-panx-fr This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the xtreme dataset. It achieves the following results on the evaluation set: - Loss: 0.1388 - F1: 0.9069
e6d30aba12dd145a9c75bab609d89c82
mit
['generated_from_trainer']
false
Training results | Training Loss | Epoch | Step | Validation Loss | F1 | |:-------------:|:-----:|:----:|:---------------:|:------:| | 0.7753 | 1.0 | 96 | 0.3149 | 0.7673 | | 0.3286 | 2.0 | 192 | 0.1819 | 0.8707 | | 0.2197 | 3.0 | 288 | 0.1388 | 0.9069 |
bb098fb27118a8b32d9431b88cb37101
apache-2.0
['generated_from_trainer']
false
Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0003 - train_batch_size: 16 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 32 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - num_epochs: 30 - mixed_precision_training: Native AMP
2eb87fdd5b7f32a99c86ef00ff9b023a
apache-2.0
['generated_from_trainer']
false
distilbert-base-uncased__sst2__train-32-2 This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.4805 - Accuracy: 0.7699
e0fc8a2c9eb3df7df63272d2c4c1fcbb
apache-2.0
['generated_from_trainer']
false
Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 0.7124 | 1.0 | 13 | 0.6882 | 0.5385 | | 0.6502 | 2.0 | 26 | 0.6715 | 0.5385 | | 0.6001 | 3.0 | 39 | 0.6342 | 0.6154 | | 0.455 | 4.0 | 52 | 0.5713 | 0.7692 | | 0.2605 | 5.0 | 65 | 0.5562 | 0.7692 | | 0.1258 | 6.0 | 78 | 0.6799 | 0.7692 | | 0.0444 | 7.0 | 91 | 0.8096 | 0.7692 | | 0.0175 | 8.0 | 104 | 0.9281 | 0.6923 | | 0.0106 | 9.0 | 117 | 0.9826 | 0.6923 | | 0.0077 | 10.0 | 130 | 1.0254 | 0.7692 | | 0.0056 | 11.0 | 143 | 1.0667 | 0.7692 | | 0.0042 | 12.0 | 156 | 1.1003 | 0.7692 | | 0.0036 | 13.0 | 169 | 1.1299 | 0.7692 | | 0.0034 | 14.0 | 182 | 1.1623 | 0.6923 | | 0.003 | 15.0 | 195 | 1.1938 | 0.6923 |
2bb4ccedd502ed6e343c839eca6efaa9
apache-2.0
['audio', 'automatic-speech-recognition', 'speech', 'xlsr-fine-tuning-week']
false
Wav2Vec2-Large-XLSR-53-Chuvash Fine-tuned [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on Chuvash using the [Common Voice](https://huggingface.co/datasets/common_voice) dataset. When using this model, make sure that your speech input is sampled at 16kHz.
731273903991c27f5ff1c9bafc34c1a1
apache-2.0
['audio', 'automatic-speech-recognition', 'speech', 'xlsr-fine-tuning-week']
false
Usage The model can be used directly (without a language model) as follows: ```python import torch import torchaudio from datasets import load_dataset from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor test_dataset = load_dataset("common_voice", "cv", split="test[:2%]") processor = Wav2Vec2Processor.from_pretrained("anton-l/wav2vec2-large-xlsr-53-chuvash") model = Wav2Vec2ForCTC.from_pretrained("anton-l/wav2vec2-large-xlsr-53-chuvash") resampler = torchaudio.transforms.Resample(48_000, 16_000)
2563e5aa3947957f841418b2dd8675e9
apache-2.0
['audio', 'automatic-speech-recognition', 'speech', 'xlsr-fine-tuning-week']
false
We need to read the audio files as arrays def speech_file_to_array_fn(batch): speech_array, sampling_rate = torchaudio.load(batch["path"]) batch["speech"] = resampler(speech_array).squeeze().numpy() return batch test_dataset = test_dataset.map(speech_file_to_array_fn) inputs = processor(test_dataset["speech"][:2], sampling_rate=16_000, return_tensors="pt", padding=True) with torch.no_grad(): logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits predicted_ids = torch.argmax(logits, dim=-1) print("Prediction:", processor.batch_decode(predicted_ids)) print("Reference:", test_dataset["sentence"][:2]) ```
cd8aca2c0905d2eab5ba1d48fcdb31bc
apache-2.0
['audio', 'automatic-speech-recognition', 'speech', 'xlsr-fine-tuning-week']
false
Evaluation The model can be evaluated as follows on the Chuvash test data of Common Voice. ```python import torch import torchaudio import urllib.request import tarfile import pandas as pd from tqdm.auto import tqdm from datasets import load_metric from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
873f3bdfda792401e47fa86f9bf78e1a
apache-2.0
['audio', 'automatic-speech-recognition', 'speech', 'xlsr-fine-tuning-week']
false
Download the raw data instead of using HF datasets to save disk space data_url = "https://voice-prod-bundler-ee1969a6ce8178826482b88e843c335139bd3fb4.s3.amazonaws.com/cv-corpus-6.1-2020-12-11/cv.tar.gz" filestream = urllib.request.urlopen(data_url) data_file = tarfile.open(fileobj=filestream, mode="r|gz") data_file.extractall() wer = load_metric("wer") processor = Wav2Vec2Processor.from_pretrained("anton-l/wav2vec2-large-xlsr-53-chuvash") model = Wav2Vec2ForCTC.from_pretrained("anton-l/wav2vec2-large-xlsr-53-chuvash") model.to("cuda") cv_test = pd.read_csv("cv-corpus-6.1-2020-12-11/cv/test.tsv", sep='\t') clips_path = "cv-corpus-6.1-2020-12-11/cv/clips/" def clean_sentence(sent): sent = sent.lower()
f8e9202c84da7aca7b6909cd73633edd
apache-2.0
['audio', 'automatic-speech-recognition', 'speech', 'xlsr-fine-tuning-week']
false
remove repeated spaces sent = " ".join(sent.split()) return sent targets = [] preds = [] for i, row in tqdm(cv_test.iterrows(), total=cv_test.shape[0]): row["sentence"] = clean_sentence(row["sentence"]) speech_array, sampling_rate = torchaudio.load(clips_path + row["path"]) resampler = torchaudio.transforms.Resample(sampling_rate, 16_000) row["speech"] = resampler(speech_array).squeeze().numpy() inputs = processor(row["speech"], sampling_rate=16_000, return_tensors="pt", padding=True) with torch.no_grad(): logits = model(inputs.input_values.to("cuda"), attention_mask=inputs.attention_mask.to("cuda")).logits pred_ids = torch.argmax(logits, dim=-1) targets.append(row["sentence"]) preds.append(processor.batch_decode(pred_ids)[0]) print("WER: {:2f}".format(100 * wer.compute(predictions=preds, references=targets))) ``` **Test Result**: 40.01 %
9c686fddeb30053b34e806a039cb9c9b
creativeml-openrail-m
['text-to-image', 'stable-diffusion']
false
Asmagalally-with-Protogen-v2.2- Dreambooth model trained by WALIDALI with [TheLastBen's fast-DreamBooth](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast-DreamBooth.ipynb) notebook. Test the concept via A1111 Colab [fast-Colab-A1111](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast_stable_diffusion_AUTOMATIC1111.ipynb). Sample pictures of this concept:
4ea6acba238f012f4fec2203fc3c059b
apache-2.0
['generated_from_trainer']
false
tiny-mlm-glue-stsb-target-glue-mrpc This model is a fine-tuned version of [muhtasham/tiny-mlm-glue-stsb](https://huggingface.co/muhtasham/tiny-mlm-glue-stsb) on the None dataset. It achieves the following results on the evaluation set: - Loss: 1.2364 - Accuracy: 0.7132 - F1: 0.8047
6efebc9a1827204daae10eba504f9d72
apache-2.0
['generated_from_trainer']
false
Training hyperparameters The following hyperparameters were used during training: - learning_rate: 3e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: constant - num_epochs: 200
739bbed836db3537416c444177cef1cd
apache-2.0
['generated_from_trainer']
false
Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | |:-------------:|:-----:|:----:|:---------------:|:--------:|:------:| | 0.5901 | 4.35 | 500 | 0.5567 | 0.7108 | 0.8072 | | 0.4581 | 8.7 | 1000 | 0.5798 | 0.7377 | 0.8283 | | 0.3115 | 13.04 | 1500 | 0.6576 | 0.7426 | 0.8247 | | 0.197 | 17.39 | 2000 | 0.7977 | 0.7255 | 0.8152 | | 0.1153 | 21.74 | 2500 | 1.0637 | 0.7059 | 0.7973 | | 0.0843 | 26.09 | 3000 | 1.2364 | 0.7132 | 0.8047 |
01b790ae5c95793eb620a6ed7bb8d68f
apache-2.0
['speech']
false
SEW-D-mid [SEW-D by ASAPP Research](https://github.com/asappresearch/sew) The base model pretrained on 16kHz sampled speech audio. When using the model make sure that your speech input is also sampled at 16kHz. Note that this model should be fine-tuned on a downstream task, like Automatic Speech Recognition, Speaker Identification, Intent Classification, Emotion Recognition, etc... Paper: [Performance-Efficiency Trade-offs in Unsupervised Pre-training for Speech Recognition](https://arxiv.org/abs/2109.06870) Authors: Felix Wu, Kwangyoun Kim, Jing Pan, Kyu Han, Kilian Q. Weinberger, Yoav Artzi **Abstract** This paper is a study of performance-efficiency trade-offs in pre-trained models for automatic speech recognition (ASR). We focus on wav2vec 2.0, and formalize several architecture designs that influence both the model performance and its efficiency. Putting together all our observations, we introduce SEW (Squeezed and Efficient Wav2vec), a pre-trained model architecture with significant improvements along both performance and efficiency dimensions across a variety of training setups. For example, under the 100h-960h semi-supervised setup on LibriSpeech, SEW achieves a 1.9x inference speedup compared to wav2vec 2.0, with a 13.5% relative reduction in word error rate. With a similar inference time, SEW reduces word error rate by 25-50% across different model sizes. The original model can be found under https://github.com/asappresearch/sew
a539af234fc8ff4a797090fd04b6480a
apache-2.0
['speech']
false
Usage See [this blog](https://huggingface.co/blog/fine-tune-wav2vec2-english) for more information on how to fine-tune the model. Note that the class `Wav2Vec2ForCTC` has to be replaced by `SEWDForCTC`.
353982ba1e4690caa19b2c34a53d865d
apache-2.0
['generated_from_keras_callback']
false
bert-model-english1 This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on an unknown dataset. It achieves the following results on the evaluation set: - Train Loss: 0.0274 - Train Accuracy: 0.9914 - Validation Loss: 0.3493 - Validation Accuracy: 0.9303 - Epoch: 2
ed60c76f63cff51c2bea283a4aecfd98
apache-2.0
['generated_from_keras_callback']
false
Training hyperparameters The following hyperparameters were used during training: - optimizer: {'name': 'Adam', 'learning_rate': 5e-05, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False} - training_precision: float32
f7bf32bdc2b451c26e2ed7ea4e7cfae2
apache-2.0
['generated_from_keras_callback']
false
Training results | Train Loss | Train Accuracy | Validation Loss | Validation Accuracy | Epoch | |:----------:|:--------------:|:---------------:|:-------------------:|:-----:| | 0.0366 | 0.9885 | 0.3013 | 0.9299 | 0 | | 0.0261 | 0.9912 | 0.3445 | 0.9351 | 1 | | 0.0274 | 0.9914 | 0.3493 | 0.9303 | 2 |
a90b5f5e6f2b69ccc7013d1bca0a207c
apache-2.0
['generated_from_trainer']
false
wav2vec2-base_toy_train_data_augmented This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the None dataset. It achieves the following results on the evaluation set: - Loss: 1.0238 - Wer: 0.6969
a2b04994e9212c5fd04f3ff19c5ead83
apache-2.0
['generated_from_trainer']
false
Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 16 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 1000 - num_epochs: 20
020a2d6a699e0dedc8ab1d47bb44d36d
apache-2.0
['generated_from_trainer']
false
Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:----:|:---------------:|:------:| | 3.12 | 1.05 | 250 | 3.3998 | 0.9982 | | 3.0727 | 2.1 | 500 | 3.1261 | 0.9982 | | 1.9729 | 3.15 | 750 | 1.4868 | 0.9464 | | 1.3213 | 4.2 | 1000 | 1.2598 | 0.8833 | | 1.0508 | 5.25 | 1250 | 1.0014 | 0.8102 | | 0.8483 | 6.3 | 1500 | 0.9475 | 0.7944 | | 0.7192 | 7.35 | 1750 | 0.9493 | 0.7686 | | 0.6447 | 8.4 | 2000 | 0.9872 | 0.7573 | | 0.6064 | 9.45 | 2250 | 0.9587 | 0.7447 | | 0.5384 | 10.5 | 2500 | 0.9332 | 0.7320 | | 0.4985 | 11.55 | 2750 | 0.9926 | 0.7315 | | 0.4643 | 12.6 | 3000 | 1.0008 | 0.7292 | | 0.4565 | 13.65 | 3250 | 0.9522 | 0.7171 | | 0.449 | 14.7 | 3500 | 0.9685 | 0.7140 | | 0.4307 | 15.75 | 3750 | 1.0080 | 0.7077 | | 0.4239 | 16.81 | 4000 | 0.9950 | 0.7023 | | 0.389 | 17.86 | 4250 | 1.0260 | 0.7007 | | 0.3471 | 18.91 | 4500 | 1.0012 | 0.6966 | | 0.3276 | 19.96 | 4750 | 1.0238 | 0.6969 |
b397dc933144a16ca09dfd2030d33e51
mit
[]
false
Roblox avatar on Stable Diffusion why am i spending time making these?, anyways. This is the `<roblox-avatar>` concept taught to Stable Diffusion via Textual Inversion. You can load this concept into the [Stable Conceptualizer](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/stable_conceptualizer_inference.ipynb) notebook. You can also train your own concepts and load them into the concept libraries using [this notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_textual_inversion_training.ipynb). photos were taken from pinterest. Here is the new concept you will be able to use as an `object`: ![<roblox-avatar> 0](https://huggingface.co/sd-concepts-library/roblox-avatar/resolve/main/concept_images/4.jpeg) ![<roblox-avatar> 1](https://huggingface.co/sd-concepts-library/roblox-avatar/resolve/main/concept_images/0.jpeg) ![<roblox-avatar> 2](https://huggingface.co/sd-concepts-library/roblox-avatar/resolve/main/concept_images/3.jpeg) ![<roblox-avatar> 3](https://huggingface.co/sd-concepts-library/roblox-avatar/resolve/main/concept_images/2.jpeg) ![<roblox-avatar> 4](https://huggingface.co/sd-concepts-library/roblox-avatar/resolve/main/concept_images/1.jpeg)
de141d9383220bf848372397ce64583f
apache-2.0
['generated_from_trainer']
false
Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3
b096c198d94365529a9f4591c2a64e23
apache-2.0
['generated_from_trainer']
false
swin-tiny-patch4-window7-224-lcbsi-wbc-new This model is a fine-tuned version of [polejowska/swin-tiny-patch4-window7-224-lcbsi-wbc](https://huggingface.co/polejowska/swin-tiny-patch4-window7-224-lcbsi-wbc) on the WBC dataset. It achieves the following results on the evaluation set: - Loss: 0.0457 - Accuracy: 0.992 - Precision: 0.9920 - Recall: 0.992 - F1: 0.9920
d9efb6cdba7c88356a745f243797bb38
apache-2.0
['generated_from_trainer']
false
Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0002562 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 128 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 3
005d971529ec5b358ec163afa13ca379
apache-2.0
['generated_from_trainer']
false
Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | Precision | Recall | F1 | |:-------------:|:-----:|:----:|:---------------:|:--------:|:---------:|:------:|:------:| | 0.0936 | 0.98 | 27 | 0.0724 | 0.984 | 0.9841 | 0.984 | 0.9840 | | 0.0276 | 1.98 | 54 | 0.0768 | 0.984 | 0.9841 | 0.984 | 0.9839 | | 0.0133 | 2.98 | 81 | 0.0457 | 0.992 | 0.9920 | 0.992 | 0.9920 |
c116a029d50e660a9aabbc3b5807d17b
afl-3.0
[]
false
--- About: This model can be used for text summarization. The dataset on which it was fine-tuned consisted of 10,323 articles. The Data Fields: - "Headline": title of the article - "articleBody": the main article content - "source": the link to the read-more page. The data splits were: - Train: 8258 - Validation: 2065
56b3f21f4cfca5c8edca7c0f217bee87
afl-3.0
[]
false
How to use along with pipeline ```python from transformers import pipeline from transformers import AutoTokenizer, AutoModelForSeq2SeqLM tokenizer = AutoTokenizer.from_pretrained("AkashKhamkar/InSumT510k") model = AutoModelForSeq2SeqLM.from_pretrained("AkashKhamkar/InSumT510k") summarizer = pipeline("summarization", model=model, tokenizer=tokenizer) summarizer("Text for summarization...", min_length=5, max_length=50) ``` language: - English library_name: Pytorch tags: - Summarization - T5-base - Conditional Modelling -
9fa8046eb50afc6f14292b935fd0ca7d
mit
['generated_from_trainer']
false
Training hyperparameters The following hyperparameters were used during training: - learning_rate: 3e-05 - train_batch_size: 12 - eval_batch_size: 8 - seed: 42 - distributed_type: multi-GPU - num_devices: 2 - total_train_batch_size: 24 - total_eval_batch_size: 16 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2.0
d62e52983f08b69acc1d65887de9f013
mit
['generated_from_trainer']
false
Training hyperparameters The following hyperparameters were used during training: - learning_rate: 3e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - training_steps: 200
cf294cc3399756f9d1b3afd9f150971f
cc-by-4.0
['answer extraction']
false
Model Card of `lmqg/flan-t5-small-squad-ae` This model is fine-tuned version of [google/flan-t5-small](https://huggingface.co/google/flan-t5-small) for answer extraction on the [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) (dataset_name: default) via [`lmqg`](https://github.com/asahi417/lm-question-generation).
85fba57f191407a08386973424ea41fe
cc-by-4.0
['answer extraction']
false
Overview - **Language model:** [google/flan-t5-small](https://huggingface.co/google/flan-t5-small) - **Language:** en - **Training data:** [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) (default) - **Online Demo:** [https://autoqg.net/](https://autoqg.net/) - **Repository:** [https://github.com/asahi417/lm-question-generation](https://github.com/asahi417/lm-question-generation) - **Paper:** [https://arxiv.org/abs/2210.03992](https://arxiv.org/abs/2210.03992)
38dca41d73fdaf8a4400c588320201dd
cc-by-4.0
['answer extraction']
false
model prediction answers = model.generate_a("William Turner was an English painter who specialised in watercolour landscapes") ``` - With `transformers` ```python from transformers import pipeline pipe = pipeline("text2text-generation", "lmqg/flan-t5-small-squad-ae") output = pipe("extract answers: <hl> Beyonce further expanded her acting career, starring as blues singer Etta James in the 2008 musical biopic, Cadillac Records. <hl> Her performance in the film received praise from critics, and she garnered several nominations for her portrayal of James, including a Satellite Award nomination for Best Supporting Actress, and a NAACP Image Award nomination for Outstanding Supporting Actress.") ```
443519cce3f3d80a6d56bf9bf676aaec
cc-by-4.0
['answer extraction']
false
Evaluation - ***Metric (Answer Extraction)***: [raw metric file](https://huggingface.co/lmqg/flan-t5-small-squad-ae/raw/main/eval/metric.first.answer.paragraph_sentence.answer.lmqg_qg_squad.default.json) | | Score | Type | Dataset | |:-----------------|--------:|:--------|:---------------------------------------------------------------| | AnswerExactMatch | 55.83 | default | [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) | | AnswerF1Score | 68.13 | default | [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) | | BERTScore | 91.1 | default | [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) | | Bleu_1 | 48.25 | default | [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) | | Bleu_2 | 43.39 | default | [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) | | Bleu_3 | 38.64 | default | [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) | | Bleu_4 | 34.6 | default | [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) | | METEOR | 42.59 | default | [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) | | MoverScore | 80.54 | default | [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) | | ROUGE_L | 67.61 | default | [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) |
abc089cbfd778b3c479ff2eadad127b7
cc-by-4.0
['answer extraction']
false
Training hyperparameters The following hyperparameters were used during fine-tuning: - dataset_path: lmqg/qg_squad - dataset_name: default - input_types: ['paragraph_sentence'] - output_types: ['answer'] - prefix_types: ['ae'] - model: google/flan-t5-small - max_length: 512 - max_length_output: 32 - epoch: 8 - batch: 64 - lr: 0.0001 - fp16: False - random_seed: 1 - gradient_accumulation_steps: 1 - label_smoothing: 0.15 The full configuration can be found at [fine-tuning config file](https://huggingface.co/lmqg/flan-t5-small-squad-ae/raw/main/trainer_config.json).
39185aacfea76d3aff44c4e9fa3b01e1
cc-by-4.0
['answer extraction']
false
Citation ``` @inproceedings{ushio-etal-2022-generative, title = "{G}enerative {L}anguage {M}odels for {P}aragraph-{L}evel {Q}uestion {G}eneration", author = "Ushio, Asahi and Alva-Manchego, Fernando and Camacho-Collados, Jose", booktitle = "Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing", month = dec, year = "2022", address = "Abu Dhabi, U.A.E.", publisher = "Association for Computational Linguistics", } ```
d52bca31ed6005462b87f86841e27b60
apache-2.0
['generated_from_trainer']
false
t5-base-vanilla-cstop_artificial This model is a fine-tuned version of [google/mt5-base](https://huggingface.co/google/mt5-base) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.1598
7ac829ce8bbd6f8a0acb878036742886
apache-2.0
['generated_from_trainer']
false
Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.001 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 64 - total_train_batch_size: 512 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - training_steps: 3000
6e4ca39cd92d3339cf43644df68737c0
apache-2.0
['generated_from_trainer']
false
Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:------:|:----:|:---------------:| | 1.2724 | 28.5 | 200 | 0.0776 | | 0.0151 | 57.13 | 400 | 0.1004 | | 0.1727 | 85.63 | 600 | 0.1202 | | 0.0133 | 114.25 | 800 | 0.1005 | | 0.0044 | 142.75 | 1000 | 0.1131 | | 0.0022 | 171.38 | 1200 | 0.1285 | | 0.0018 | 199.88 | 1400 | 0.1349 | | 0.0014 | 228.5 | 1600 | 0.1451 | | 0.003 | 257.13 | 1800 | 0.1215 | | 0.003 | 285.63 | 2000 | 0.1345 | | 0.0012 | 314.25 | 2200 | 0.1520 | | 0.001 | 342.75 | 2400 | 0.1486 | | 0.0008 | 371.38 | 2600 | 0.1559 | | 0.0007 | 399.88 | 2800 | 0.1590 | | 0.0006 | 428.5 | 3000 | 0.1598 |
780664970b29b185593f651fc8af0130
apache-2.0
['generated_from_trainer']
false
tiny-vanilla-target-glue-wnli This model is a fine-tuned version of [google/bert_uncased_L-2_H-128_A-2](https://huggingface.co/google/bert_uncased_L-2_H-128_A-2) on the None dataset. It achieves the following results on the evaluation set: - Loss: 1.7580 - Accuracy: 0.0986
5d75c75c0faf06b9022801423e6d87e4
apache-2.0
['generated_from_trainer']
false
Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 0.6894 | 25.0 | 500 | 0.7552 | 0.3099 | | 0.6681 | 50.0 | 1000 | 0.9797 | 0.1549 | | 0.6258 | 75.0 | 1500 | 1.3863 | 0.1127 | | 0.5659 | 100.0 | 2000 | 1.7580 | 0.0986 |
09515643b5e083543dd631878025d175