license (string, 2-30 chars) | tags (string, 2-513 chars) | is_nc (bool, 1 class) | readme_section (string, 201-597k chars) | hash (string, 32 chars) |
|---|---|---|---|---|
apache-2.0 | ['generated_from_trainer'] | false | Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 16 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 800 - num_epochs: 30 - mixed_precision_training: Native AMP | 33eab011a1e60401ac33ef7291225a07 |
apache-2.0 | ['generated_from_trainer'] | false | Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:----:|:---------------:|:------:| | 4.9363 | 13.89 | 500 | 2.7532 | 1.0 | | 0.9875 | 27.78 | 1000 | 0.9149 | 0.5907 | | ba33f7f2896a6b5b4fc63a44b86cc95c |
apache-2.0 | ['vision', 'image-classification'] | false | Swin Transformer v2 (large-sized model) Swin Transformer v2 model pre-trained on ImageNet-21k at resolution 192x192. It was introduced in the paper [Swin Transformer V2: Scaling Up Capacity and Resolution](https://arxiv.org/abs/2111.09883) by Liu et al. and first released in [this repository](https://github.com/microsoft/Swin-Transformer). Disclaimer: The team releasing Swin Transformer v2 did not write a model card for this model so this model card has been written by the Hugging Face team. | 582cae13e344c525864a5f6e2e4ce00b |
apache-2.0 | ['vision', 'image-classification'] | false | Model description The Swin Transformer is a type of Vision Transformer. It builds hierarchical feature maps by merging image patches (shown in gray) in deeper layers and has linear computation complexity to input image size due to computation of self-attention only within each local window (shown in red). It can thus serve as a general-purpose backbone for both image classification and dense recognition tasks. In contrast, previous vision Transformers produce feature maps of a single low resolution and have quadratic computation complexity to input image size due to computation of self-attention globally. Swin Transformer v2 adds 3 main improvements: 1) a residual-post-norm method combined with cosine attention to improve training stability; 2) a log-spaced continuous position bias method to effectively transfer models pre-trained using low-resolution images to downstream tasks with high-resolution inputs; 3) a self-supervised pre-training method, SimMIM, to reduce the needs of vast labeled images.  [Source](https://paperswithcode.com/method/swin-transformer) | 143bbf560d56f9a48c237729831671e5 |
apache-2.0 | ['vision', 'image-classification'] | false | Intended uses & limitations You can use the raw model for image classification. See the [model hub](https://huggingface.co/models?search=swinv2) to look for fine-tuned versions on a task that interests you. | 342c6fb1110cdaf9212894364df54fc0 |
apache-2.0 | ['vision', 'image-classification'] | false | How to use Here is how to use this model to classify an image of the COCO 2017 dataset into one of the 21k ImageNet classes: ```python from transformers import AutoImageProcessor, AutoModelForImageClassification from PIL import Image import requests url = "http://images.cocodataset.org/val2017/000000039769.jpg" image = Image.open(requests.get(url, stream=True).raw) processor = AutoImageProcessor.from_pretrained("microsoft/swinv2-large-patch4-window12-192-22k") model = AutoModelForImageClassification.from_pretrained("microsoft/swinv2-large-patch4-window12-192-22k") inputs = processor(images=image, return_tensors="pt") outputs = model(**inputs) logits = outputs.logits | 5dee895d110e50ea178785f7cdd6be17 |
apache-2.0 | ['vision', 'image-classification'] | false | model predicts one of the 21k ImageNet classes predicted_class_idx = logits.argmax(-1).item() print("Predicted class:", model.config.id2label[predicted_class_idx]) ``` For more code examples, we refer to the [documentation](https://huggingface.co/transformers/model_doc/swinv2.html). | 9be14b04e2608027b903913e0e9217f9 |
apache-2.0 | ['vision', 'image-classification'] | false | BibTeX entry and citation info ```bibtex @article{DBLP:journals/corr/abs-2111-09883, author = {Ze Liu and Han Hu and Yutong Lin and Zhuliang Yao and Zhenda Xie and Yixuan Wei and Jia Ning and Yue Cao and Zheng Zhang and Li Dong and Furu Wei and Baining Guo}, title = {Swin Transformer {V2:} Scaling Up Capacity and Resolution}, journal = {CoRR}, volume = {abs/2111.09883}, year = {2021}, url = {https://arxiv.org/abs/2111.09883}, eprinttype = {arXiv}, eprint = {2111.09883}, timestamp = {Thu, 02 Dec 2021 15:54:22 +0100}, biburl = {https://dblp.org/rec/journals/corr/abs-2111-09883.bib}, bibsource = {dblp computer science bibliography, https://dblp.org} } ``` | 75fb29d8fe4ff91a09126b807135dc64 |
mit | ['audio', 'automatic-speech-recognition', 'hf-asr-leaderboard'] | false | S2T-LARGE-LIBRISPEECH-ASR `s2t-large-librispeech-asr` is a Speech to Text Transformer (S2T) model trained for automatic speech recognition (ASR). The S2T model was proposed in [this paper](https://arxiv.org/abs/2010.05171) and released in [this repository](https://github.com/pytorch/fairseq/tree/master/examples/speech_to_text) | f00e73449248cbf892c7164baa2f7366 |
mit | ['audio', 'automatic-speech-recognition', 'hf-asr-leaderboard'] | false | Intended uses & limitations This model can be used for end-to-end speech recognition (ASR). See the [model hub](https://huggingface.co/models?filter=speech_to_text) to look for other S2T checkpoints. | 4e2808d8778468391c82cf624664d54f |
mit | ['audio', 'automatic-speech-recognition', 'hf-asr-leaderboard'] | false | How to use As this is a standard sequence to sequence transformer model, you can use the `generate` method to generate the transcripts by passing the speech features to the model. *Note: The `Speech2TextProcessor` object uses [torchaudio](https://github.com/pytorch/audio) to extract the filter bank features. Make sure to install the `torchaudio` package before running this example.* You could either install those as extra speech dependencies with `pip install "transformers[speech, sentencepiece]"` or install the packages separately with `pip install torchaudio sentencepiece`. ```python import torch from transformers import Speech2TextProcessor, Speech2TextForConditionalGeneration from datasets import load_dataset import soundfile as sf model = Speech2TextForConditionalGeneration.from_pretrained("facebook/s2t-large-librispeech-asr") processor = Speech2TextProcessor.from_pretrained("facebook/s2t-large-librispeech-asr") def map_to_array(batch): speech, _ = sf.read(batch["file"]) batch["speech"] = speech return batch ds = load_dataset( "patrickvonplaten/librispeech_asr_dummy", "clean", split="validation" ) ds = ds.map(map_to_array) input_features = processor( ds["speech"][0], sampling_rate=16_000, return_tensors="pt" ).input_features | f4fd7f1dc10790b4022d684dcb822479 |
mit | ['audio', 'automatic-speech-recognition', 'hf-asr-leaderboard'] | false | Evaluation on LibriSpeech Test The following script shows how to evaluate this model on the [LibriSpeech](https://huggingface.co/datasets/librispeech_asr) *"clean"* and *"other"* test dataset. ```python from datasets import load_dataset, load_metric from transformers import Speech2TextForConditionalGeneration, Speech2TextProcessor import soundfile as sf librispeech_eval = load_dataset("librispeech_asr", "clean", split="test") | 7439d02751c7b8443250d49efc73cf3d |
mit | ['audio', 'automatic-speech-recognition', 'hf-asr-leaderboard'] | false | change to "other" for other test dataset wer = load_metric("wer") model = Speech2TextForConditionalGeneration.from_pretrained("facebook/s2t-large-librispeech-asr").to("cuda") processor = Speech2TextProcessor.from_pretrained("facebook/s2t-large-librispeech-asr", do_upper_case=True) def map_to_array(batch): speech, _ = sf.read(batch["file"]) batch["speech"] = speech return batch librispeech_eval = librispeech_eval.map(map_to_array) def map_to_pred(batch): features = processor(batch["speech"], sampling_rate=16000, padding=True, return_tensors="pt") input_features = features.input_features.to("cuda") attention_mask = features.attention_mask.to("cuda") gen_tokens = model.generate(input_features=input_features, attention_mask=attention_mask) batch["transcription"] = processor.batch_decode(gen_tokens, skip_special_tokens=True) return batch result = librispeech_eval.map(map_to_pred, batched=True, batch_size=8, remove_columns=["speech"]) print("WER:", wer.compute(predictions=result["transcription"], references=result["text"])) ``` *Result (WER)*: | "clean" | "other" | |:-------:|:-------:| | 3.3 | 7.5 | | 03b0ad8fd77bf411c58251b3577bdeba |
mit | ['audio', 'automatic-speech-recognition', 'hf-asr-leaderboard'] | false | Preprocessing The speech data is pre-processed by extracting Kaldi-compliant 80-channel log mel-filter bank features automatically from WAV/FLAC audio files via PyKaldi or torchaudio. Further utterance-level CMVN (cepstral mean and variance normalization) is applied to each example. The texts are lowercased and tokenized using SentencePiece and a vocabulary size of 10,000. | c280b8c0017dfb9bb80cd97c87f55387 |
mit | ['audio', 'automatic-speech-recognition', 'hf-asr-leaderboard'] | false | Training The model is trained with standard autoregressive cross-entropy loss and using [SpecAugment](https://arxiv.org/abs/1904.08779). The encoder receives speech features, and the decoder generates the transcripts autoregressively. | 03518a843e633a187673e3a56ea02564 |
apache-2.0 | ['generated_from_trainer'] | false | distilbert_add_GLUE_Experiment_qqp_384 This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the GLUE QQP dataset. It achieves the following results on the evaluation set: - Loss: 0.4096 - Accuracy: 0.8095 - F1: 0.7372 - Combined Score: 0.7734 | e6839908d9b8e26ed44cb49de96ce0f6 |
apache-2.0 | ['generated_from_trainer'] | false | Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Combined Score | |:-------------:|:-----:|:-----:|:---------------:|:--------:|:------:|:--------------:| | 0.5518 | 1.0 | 1422 | 0.5289 | 0.7376 | 0.6535 | 0.6955 | | 0.4901 | 2.0 | 2844 | 0.4655 | 0.7772 | 0.6744 | 0.7258 | | 0.4098 | 3.0 | 4266 | 0.4096 | 0.8095 | 0.7372 | 0.7734 | | 0.3273 | 4.0 | 5688 | 0.4343 | 0.8211 | 0.7536 | 0.7873 | | 0.2681 | 5.0 | 7110 | 0.4322 | 0.8286 | 0.7519 | 0.7902 | | 0.223 | 6.0 | 8532 | 0.4789 | 0.8301 | 0.7502 | 0.7901 | | 0.1883 | 7.0 | 9954 | 0.4715 | 0.8329 | 0.7663 | 0.7996 | | 0.1603 | 8.0 | 11376 | 0.5090 | 0.8346 | 0.7577 | 0.7961 | | e201924cf156326c08539492022429e1 |
apache-2.0 | ['generated_from_trainer'] | false | vit-base-patch16-224-in21k-finetuned-cassava3 This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the image_folder dataset. It achieves the following results on the evaluation set: - Loss: 0.3419 - Accuracy: 0.8855 | 6df3df10dc6e78e5293055bdac3a9e70 |
apache-2.0 | ['generated_from_trainer'] | false | Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 128 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 10 | d640136b715bbd0649579941fc974419 |
apache-2.0 | ['generated_from_trainer'] | false | Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 0.5624 | 0.99 | 133 | 0.5866 | 0.8166 | | 0.4717 | 1.99 | 266 | 0.4245 | 0.8692 | | 0.4105 | 2.99 | 399 | 0.3708 | 0.8811 | | 0.3753 | 3.99 | 532 | 0.3646 | 0.8787 | | 0.2997 | 4.99 | 665 | 0.3655 | 0.8780 | | 0.3176 | 5.99 | 798 | 0.3545 | 0.8822 | | 0.2849 | 6.99 | 931 | 0.3441 | 0.8850 | | 0.2931 | 7.99 | 1064 | 0.3419 | 0.8855 | | 0.27 | 8.99 | 1197 | 0.3419 | 0.8848 | | 0.2927 | 9.99 | 1330 | 0.3403 | 0.8853 | | 1702ca93151267e1f3a5dae5682297b8 |
mit | ['generated_from_trainer'] | false | spelling-correction-english-base-location-unique-2-2 This model is a fine-tuned version of [grantslewis/spelling-correction-english-base-location-unique-2](https://huggingface.co/grantslewis/spelling-correction-english-base-location-unique-2) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.8272 - Cer: 0.1685 | b734e8a83cf7a126047281eb06b07a5d |
mit | ['generated_from_trainer'] | false | Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 70 - eval_batch_size: 70 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 4 | e505dd1e418c6d78bb2151eba9682eb1 |
mit | ['generated_from_trainer'] | false | Training results | Training Loss | Epoch | Step | Validation Loss | Cer | |:-------------:|:-----:|:----:|:---------------:|:------:| | No log | 1.0 | 470 | 0.8853 | 0.1740 | | 0.808 | 2.0 | 940 | 0.8494 | 0.1679 | | 0.7434 | 3.0 | 1410 | 0.8288 | 0.1700 | | 0.7324 | 4.0 | 1880 | 0.8272 | 0.1685 | | 0b79ce1fd5b6c8203a7fda3eaa5182a5 |
mit | ['generated_from_trainer'] | false | xlm-roberta-base-finetuned-panx-all This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.1448 - F1: 0.8881 | 4a475657870bc3efa205042e942a8baf |
mit | ['generated_from_trainer'] | false | Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 12 - eval_batch_size: 12 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 | 0d74acc813db3572207af1444125df40 |
mit | ['generated_from_trainer'] | false | Training results | Training Loss | Epoch | Step | Validation Loss | F1 | |:-------------:|:-----:|:----:|:---------------:|:------:| | 0.3029 | 1.0 | 1669 | 0.2075 | 0.7971 | | 0.164 | 2.0 | 3338 | 0.1612 | 0.8680 | | 0.1025 | 3.0 | 5007 | 0.1448 | 0.8881 | | 7a61aad64ddaa20badd9abd0298f52de |
apache-2.0 | ['generated_from_trainer'] | false | t5-small-finetuned-fi-to-en This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on the wmt19 dataset. It achieves the following results on the evaluation set: - Loss: 3.5235 - Bleu: 1.129 - Gen Len: 17.088 | 0bc8b0b875e09807b3a42e6dbe86465b |
apache-2.0 | ['generated_from_trainer'] | false | Training results | Training Loss | Epoch | Step | Validation Loss | Bleu | Gen Len | |:-------------:|:-----:|:----:|:---------------:|:-----:|:-------:| | 3.414 | 1.0 | 6250 | 3.5235 | 1.129 | 17.088 | | 26dcfcbd18abfcca4a69e9d84e4e968e |
apache-2.0 | [] | false | bert-base-en-fr-es-de-zh-cased We are sharing smaller versions of [bert-base-multilingual-cased](https://huggingface.co/bert-base-multilingual-cased) that handle a custom number of languages. Unlike [distilbert-base-multilingual-cased](https://huggingface.co/distilbert-base-multilingual-cased), our versions give exactly the same representations produced by the original model which preserves the original accuracy. For more information please visit our paper: [Load What You Need: Smaller Versions of Multilingual BERT](https://www.aclweb.org/anthology/2020.sustainlp-1.16.pdf). | aee308b7e39a4e2145547f9596138ae1 |
apache-2.0 | [] | false | How to use ```python from transformers import AutoTokenizer, AutoModel tokenizer = AutoTokenizer.from_pretrained("Geotrend/bert-base-en-fr-es-de-zh-cased") model = AutoModel.from_pretrained("Geotrend/bert-base-en-fr-es-de-zh-cased") ``` To generate other smaller versions of multilingual transformers please visit [our Github repo](https://github.com/Geotrend-research/smaller-transformers). | a1b4d75c26c29d3973ceb34392c692d2 |
apache-2.0 | ['t5-lm-adapt'] | false | T5 Version 1.1 - LM-Adapted (t5-v1_1-lm100k) includes the following improvements compared to the original [T5 model](https://huggingface.co/t5-base): - GEGLU activation in feed-forward hidden layer, rather than ReLU - see [here](https://arxiv.org/abs/2002.05202). - Dropout was turned off in pre-training (quality win). Dropout should be re-enabled during fine-tuning. - Pre-trained on C4 only without mixing in the downstream tasks. - no parameter sharing between embedding and classifier layer - "xl" and "xxl" replace "3B" and "11B". The model shapes are a bit different - larger `d_model` and smaller `num_heads` and `d_ff`. It is pretrained on both the denoising and language modeling objectives. More specifically, this checkpoint is initialized from [T5 Version 1.1 - Base](https://huggingface.co/google/t5-v1_1-base) and then trained for an additional 100K steps on the LM objective discussed in the [T5 paper](https://arxiv.org/pdf/1910.10683.pdf). This adaptation improves the ability of the model to be used for prompt tuning. **Note**: A popular fine-tuned version of the *T5 Version 1.1 - LM Adapted* model is [BigScience's T0pp](https://huggingface.co/bigscience/T0pp). Pretraining Dataset: [C4](https://huggingface.co/datasets/c4) Other Community Checkpoints: [here](https://huggingface.co/models?other=t5-lm-adapt) Paper: [Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer](https://arxiv.org/pdf/1910.10683.pdf) Authors: *Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, Peter J. Liu* | dd9b1901376cf8cf15135661d02b3f6c |
apache-2.0 | ['generated_from_trainer'] | false | bertiny-finetuned-finer This model is a fine-tuned version of [google/bert_uncased_L-2_H-128_A-2](https://huggingface.co/google/bert_uncased_L-2_H-128_A-2) on the finer-139 dataset. It achieves the following results on the evaluation set: - Loss: 0.0882 - Precision: 0.5339 - Recall: 0.0360 - F1: 0.0675 - Accuracy: 0.9847 | f5d44cad45f7fde63424d74a3ffa5059 |
apache-2.0 | ['generated_from_trainer'] | false | Training results | Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy | |:-------------:|:-----:|:-----:|:---------------:|:---------:|:------:|:------:|:--------:| | 0.0871 | 1.0 | 11255 | 0.0952 | 0.0 | 0.0 | 0.0 | 0.9843 | | 0.0864 | 2.0 | 22510 | 0.0895 | 0.7640 | 0.0082 | 0.0162 | 0.9844 | | 0.0929 | 3.0 | 33765 | 0.0882 | 0.5339 | 0.0360 | 0.0675 | 0.9847 | | 3b5de3dc49394721cafbb0d33c604823 |
['apache-2.0'] | [] | false | Romanian paraphrase Fine-tuned t5-base model for paraphrasing. Since there is no Romanian dataset for paraphrasing, I had to create my own [dataset](https://huggingface.co/datasets/BlackKakapo/paraphrase-ro-v1). The dataset contains ~60k examples. | b7f486e037842354322e64075277a92f |
['apache-2.0'] | [] | false | How to use ```python from transformers import AutoTokenizer, AutoModelForSeq2SeqLM tokenizer = AutoTokenizer.from_pretrained("BlackKakapo/t5-base-paraphrase-ro") model = AutoModelForSeq2SeqLM.from_pretrained("BlackKakapo/t5-base-paraphrase-ro") ``` | 006376be93bf106a65c21265108b4cbd |
['apache-2.0'] | [] | false | Or ```python from transformers import T5ForConditionalGeneration, T5TokenizerFast model = T5ForConditionalGeneration.from_pretrained("BlackKakapo/t5-base-paraphrase-ro") tokenizer = T5TokenizerFast.from_pretrained("BlackKakapo/t5-base-paraphrase-ro") ``` | ad477dcaa843db29af280f35367bf0ba |
['apache-2.0'] | [] | false | Generate ```python import torch device = torch.device("cuda" if torch.cuda.is_available() else "cpu") model = model.to(device) final_outputs = [] text = "Am impresia că fac multe greșeli." encoding = tokenizer.encode_plus(text, pad_to_max_length=True, return_tensors="pt") input_ids, attention_masks = encoding["input_ids"].to(device), encoding["attention_mask"].to(device) beam_outputs = model.generate( input_ids=input_ids, attention_mask=attention_masks, do_sample=True, max_length=256, top_k=10, top_p=0.9, early_stopping=False, num_return_sequences=5 ) for beam_output in beam_outputs: text_para = tokenizer.decode(beam_output, skip_special_tokens=True, clean_up_tokenization_spaces=True) if text.lower() != text_para.lower() or text not in final_outputs: final_outputs.append(text_para) break print(final_outputs) ``` | f8f1faf8ec5db13d3a552249e43e53c5 |
creativeml-openrail-m | ['stable-diffusion', 'text-to-image'] | false | gGWoman This is my new Stable Diffusion custom model that brings you a generic woman generated from non-licensed images. The magic word is: gGWoman If you enjoy my work, please consider supporting me: [](https://www.buymeacoffee.com/elrivx) Examples: <img src=https://imgur.com/CQR59kd.png width=30% height=30%> <img src=https://imgur.com/WVh9kE1.png width=30% height=30%> <img src=https://imgur.com/y0twso7.png width=30% height=30%> <img src=https://imgur.com/FVxkzzj.png width=30% height=30%> | 6f3bfd3d7b841e9c5362602beea61791 |
apache-2.0 | ['generated_from_trainer'] | false | eval_masked_102_cola This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the GLUE COLA dataset. It achieves the following results on the evaluation set: - Loss: 0.6601 - Matthews Correlation: 0.5989 | 6fd95d8bfdc68549754d500e03dc9e45 |
apache-2.0 | ['generated_from_trainer'] | false | finetuning-sentiment-model-3000-samples This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the imdb dataset. It achieves the following results on the evaluation set: - Loss: 0.3029 - Accuracy: 0.8667 - F1: 0.8675 | 1568eee00c2c240e45ffd2fdc7389af8 |
cc-by-4.0 | ['question generation'] | false | Model Card of `lmqg/flan-t5-small-squad-qg` This model is a fine-tuned version of [google/flan-t5-small](https://huggingface.co/google/flan-t5-small) for the question generation task on the [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) (dataset_name: default) via [`lmqg`](https://github.com/asahi417/lm-question-generation). | 7455c9210ba9cbb602492b4412322d0c |
cc-by-4.0 | ['question generation'] | false | model prediction questions = model.generate_q(list_context="William Turner was an English painter who specialised in watercolour landscapes", list_answer="William Turner") ``` - With `transformers` ```python from transformers import pipeline pipe = pipeline("text2text-generation", "lmqg/flan-t5-small-squad-qg") output = pipe("generate question: <hl> Beyonce <hl> further expanded her acting career, starring as blues singer Etta James in the 2008 musical biopic, Cadillac Records.") ``` | 9a856480cbe79f027694bfff1785f15a |
cc-by-4.0 | ['question generation'] | false | Evaluation - ***Metric (Question Generation)***: [raw metric file](https://huggingface.co/lmqg/flan-t5-small-squad-qg/raw/main/eval/metric.first.sentence.paragraph_answer.question.lmqg_qg_squad.default.json) | | Score | Type | Dataset | |:-----------|--------:|:--------|:---------------------------------------------------------------| | BERTScore | 90.23 | default | [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) | | Bleu_1 | 56.66 | default | [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) | | Bleu_2 | 40.4 | default | [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) | | Bleu_3 | 30.95 | default | [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) | | Bleu_4 | 24.34 | default | [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) | | METEOR | 25.58 | default | [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) | | MoverScore | 63.77 | default | [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) | | ROUGE_L | 51.26 | default | [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) | - ***Metric (Question & Answer Generation, Pipeline Approach)***: Each question is generated on the answer generated by [`lmqg/flan-t5-small-squad-ae`](https://huggingface.co/lmqg/flan-t5-small-squad-ae). [raw metric file](https://huggingface.co/lmqg/flan-t5-small-squad-qg/raw/main/eval_pipeline/metric.first.answer.paragraph.questions_answers.lmqg_qg_squad.default.lmqg_flan-t5-small-squad-ae.json) | | Score | Type | Dataset | |:--------------------------------|--------:|:--------|:---------------------------------------------------------------| | QAAlignedF1Score (BERTScore) | 92.34 | default | [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) | | QAAlignedF1Score (MoverScore) | 63.8 | default | [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) | | QAAlignedPrecision (BERTScore) | 92.13 | default | [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) | | QAAlignedPrecision (MoverScore) | 63.89 | default | [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) | | QAAlignedRecall (BERTScore) | 92.58 | default | [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) | | QAAlignedRecall (MoverScore) | 63.8 | default | [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) | | 2400cd7e624f5bb968e79f6a907c4e34 |
cc-by-4.0 | ['question generation'] | false | Training hyperparameters The following hyperparameters were used during fine-tuning: - dataset_path: lmqg/qg_squad - dataset_name: default - input_types: paragraph_answer - output_types: question - prefix_types: ['qg'] - model: google/flan-t5-small - max_length: 512 - max_length_output: 32 - epoch: 7 - batch: 64 - lr: 0.0001 - fp16: False - random_seed: 1 - gradient_accumulation_steps: 1 - label_smoothing: 0.15 The full configuration can be found at [fine-tuning config file](https://huggingface.co/lmqg/flan-t5-small-squad-qg/raw/main/trainer_config.json). | 9226ae07a0429d16079c540f0da14339 |
apache-2.0 | ['generated_from_trainer'] | false | distilbert-base-uncased-finetuned-mnli This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the glue dataset. It achieves the following results on the evaluation set: - Loss: 1.1091 - Accuracy: 0.42 | e9ac4b0bcb93e9adc1612d9bdad2b418 |
apache-2.0 | ['generated_from_trainer'] | false | Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | No log | 1.0 | 32 | 1.1005 | 0.28 | | No log | 2.0 | 64 | 1.1038 | 0.3 | | No log | 3.0 | 96 | 1.1074 | 0.32 | | No log | 4.0 | 128 | 1.1088 | 0.42 | | No log | 5.0 | 160 | 1.1091 | 0.42 | | 905fbfe75a2b0cb01fcb81b1e36f9d84 |
apache-2.0 | ['generated_from_keras_callback'] | false | marian-finetuned-kde4-en-to-fr This model is a fine-tuned version of [Helsinki-NLP/opus-mt-en-fr](https://huggingface.co/Helsinki-NLP/opus-mt-en-fr) on an unknown dataset. It achieves the following results on the evaluation set: - Train Loss: 0.6859 - Validation Loss: 0.8062 - Epoch: 2 | fc6bc861842c2418158bd020fa5786e4 |
apache-2.0 | ['generated_from_keras_callback'] | false | Training results | Train Loss | Validation Loss | Epoch | |:----------:|:---------------:|:-----:| | 1.0582 | 0.8792 | 0 | | 0.7977 | 0.8250 | 1 | | 0.6859 | 0.8062 | 2 | | 9896c3757cccb5d6447c268961394d11 |
apache-2.0 | [] | false | Model description **CAMeLBERT-MSA DID MADAR Twitter-5 Model** is a dialect identification (DID) model that was built by fine-tuning the [CAMeLBERT-MSA](https://huggingface.co/CAMeL-Lab/bert-base-arabic-camelbert-msa/) model. For the fine-tuning, we used the [MADAR Twitter-5](https://camel.abudhabi.nyu.edu/madar-shared-task-2019/) dataset, which includes 21 labels. Our fine-tuning procedure and the hyperparameters we used can be found in our paper *"[The Interplay of Variant, Size, and Task Type in Arabic Pre-trained Language Models](https://arxiv.org/abs/2103.06678)."* Our fine-tuning code can be found [here](https://github.com/CAMeL-Lab/CAMeLBERT). | 68104be47fae25a1bbc63af330de4f83 |
apache-2.0 | [] | false | Intended uses You can use the CAMeLBERT-MSA DID MADAR Twitter-5 model as part of the transformers pipeline. This model will also be available in [CAMeL Tools](https://github.com/CAMeL-Lab/camel_tools) soon. | 8cc878aac73188cc509a7abb381a496d |
apache-2.0 | [] | false | How to use To use the model with a transformers pipeline: ```python >>> from transformers import pipeline >>> did = pipeline('text-classification', model='CAMeL-Lab/bert-base-arabic-camelbert-msa-did-madar-twitter5') >>> sentences = ['عامل ايه ؟', 'شلونك ؟ شخبارك ؟'] >>> did(sentences) [{'label': 'Egypt', 'score': 0.5741344094276428}, {'label': 'Kuwait', 'score': 0.5225679278373718}] ``` *Note*: to download our models, you would need `transformers>=3.5.0`. Otherwise, you could download the models manually. | 24cc78df7233e9652c8e866f8914bd79 |
creativeml-openrail-m | ['coreml', 'stable-diffusion', 'text-to-image'] | false | Kurzgesagtish: Source(s): [CivitAI](https://civitai.com/models/1212/kurzgesagtish) Here it is, the kurzgesagtish model; honestly I didn't know what to call it, but it kept being compared to the style used on the Kurzgesagt YouTube channel. Hope you all make amazing things :) Activation prompt: illustration style kurzgesagtish | aa62f515c7ac286799fd1764600e1bde |
mit | ['generated_from_trainer'] | false | Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 32 - eval_batch_size: 128 - seed: 4 - distributed_type: multi-GPU - num_devices: 2 - total_train_batch_size: 64 - total_eval_batch_size: 256 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 20.0 | 565e5e39a1843d29fa54a5dfda9bb246 |
apache-2.0 | ['generated_from_trainer'] | false | glue-mrpc This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the glue dataset. It achieves the following results on the evaluation set: - Loss: 0.3654 - Accuracy: 0.8554 - F1: 0.8998 | 0ba0724fc6bc3fc036f7aa1a552d72c5 |
apache-2.0 | ['generated_from_trainer'] | false | Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | |:-------------:|:-----:|:----:|:---------------:|:--------:|:------:| | No log | 1.0 | 230 | 0.4039 | 0.8039 | 0.8611 | | No log | 2.0 | 460 | 0.3654 | 0.8554 | 0.8998 | | 0.4368 | 3.0 | 690 | 0.4146 | 0.8407 | 0.8885 | | 0.4368 | 4.0 | 920 | 0.5756 | 0.8456 | 0.8941 | | 0.1744 | 5.0 | 1150 | 0.5523 | 0.8456 | 0.8916 | | a6a28478efc6fffe873a79097bd2f4cb |
mit | ['generated_from_keras_callback'] | false | pmfsl/pt-bert-large-finetuned-rte This model is a fine-tuned version of [neuralmind/bert-large-portuguese-cased](https://huggingface.co/neuralmind/bert-large-portuguese-cased) on an unknown dataset. It achieves the following results on the evaluation set: - Train Loss: 0.3300 - Validation Loss: 0.1597 - Train Accuracy: 0.9432 - Train F1: 0.9439 - Epoch: 0 | 5e14d4ec6a42a5bcacabe81ba2ebfd42 |
mit | ['generated_from_keras_callback'] | false | Training hyperparameters The following hyperparameters were used during training: - optimizer: {'name': 'Adam', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 406, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False} - training_precision: float32 | 583b2f74747873c5adc308317e4ecb2f |
mit | ['generated_from_keras_callback'] | false | Training results | Train Loss | Validation Loss | Train Accuracy | Train F1 | Epoch | |:----------:|:---------------:|:--------------:|:--------:|:-----:| | 0.3300 | 0.1597 | 0.9432 | 0.9439 | 0 | | a01afea630f9a02c9f2b743866e791d7 |
apache-2.0 | ['generated_from_keras_callback'] | false | rhitabrat/bert-finetuned-news This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset. It achieves the following results on the evaluation set: - Train Loss: 0.7333 - Epoch: 1 | b535b5d9e917ae8db03fdca39ccca313 |
apache-2.0 | ['generated_from_keras_callback'] | false | Training hyperparameters The following hyperparameters were used during training: - optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 19448, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01} - training_precision: mixed_float16 | 4f608645cd19746f7d00880ff1fc790b |
apache-2.0 | ['generated_from_trainer'] | false | distilbert_sa_GLUE_Experiment_logit_kd_cola_256 This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the GLUE COLA dataset. It achieves the following results on the evaluation set: - Loss: 0.6808 - Matthews Correlation: 0.0 | 525ede6e1a2ee8986ec10a42f6a48643 |
apache-2.0 | ['generated_from_trainer'] | false | Training results | Training Loss | Epoch | Step | Validation Loss | Matthews Correlation | |:-------------:|:-----:|:----:|:---------------:|:--------------------:| | 0.8053 | 1.0 | 34 | 0.6856 | 0.0 | | 0.7977 | 2.0 | 68 | 0.6837 | 0.0 | | 0.7952 | 3.0 | 102 | 0.6832 | 0.0 | | 0.7934 | 4.0 | 136 | 0.6852 | 0.0 | | 0.7703 | 5.0 | 170 | 0.6808 | 0.0 | | 0.7008 | 6.0 | 204 | 0.6885 | 0.0675 | | 0.6386 | 7.0 | 238 | 0.7263 | 0.1037 | | 0.6059 | 8.0 | 272 | 0.7450 | 0.0825 | | 0.577 | 9.0 | 306 | 0.7559 | 0.1071 | | 0.5531 | 10.0 | 340 | 0.7794 | 0.1048 | | cd3e027fa3d8bc54ebf1e8aac4d22a7b |
mit | [] | false | Senneca on Stable Diffusion This is the `<Senneca>` concept taught to Stable Diffusion via Textual Inversion. You can load this concept into the [Stable Conceptualizer](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/stable_conceptualizer_inference.ipynb) notebook. You can also train your own concepts and load them into the concept libraries using [this notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_textual_inversion_training.ipynb). Here is the new concept you will be able to use as an `object`:      | 28261f25ad4ca7ba0f75481de18afb12 |
creativeml-openrail-m | ['text-to-image', 'stable-diffusion'] | false | teamcomo-kj Dreambooth model trained by DFrostKilla with [TheLastBen's fast-DreamBooth](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast-DreamBooth.ipynb) notebook Test the concept via A1111 Colab [fast-Colab-A1111](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast_stable_diffusion_AUTOMATIC1111.ipynb) Or you can run your new concept via `diffusers` [Colab Notebook for Inference](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_dreambooth_inference.ipynb) Sample pictures of this concept: | 06d14c1b63c222f58c151b7e7a18db6d |
apache-2.0 | ['generated_from_trainer'] | false | bert-all-squad_que_translated This model is a fine-tuned version of [bert-base-multilingual-cased](https://huggingface.co/bert-base-multilingual-cased) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.5174 | 5a55a0133f46324b7eeb4dda925e50e3 |
apache-2.0 | ['generated_from_keras_callback', 'hubert'] | false | hubert-small-wiki-seq128 Fully trained model with the second phase of training is available here: [SzegedAI/hubert-small-wiki](https://huggingface.co/SzegedAI/hubert-small-wiki) This model was trained from scratch on the Wikipedia subset of Hungarian Webcorpus 2.0 with MLM and SOP tasks. | bb5b91d9f42bdabc399e303937eff139 |
apache-2.0 | ['generated_from_trainer'] | false | wav2vec2-xlsr-korean-speech-emotion-recognition3 This model is a fine-tuned version of [jungjongho/wav2vec2-large-xlsr-korean-demo-colab_epoch15](https://huggingface.co/jungjongho/wav2vec2-large-xlsr-korean-demo-colab_epoch15) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.0600 - Accuracy: 0.9876 | dc13f9cc6dbeec313978de43060e155d |
apache-2.0 | ['generated_from_trainer'] | false | Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 2 - eval_batch_size: 2 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 4 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 1 - mixed_precision_training: Native AMP | 27b269f0c68527e4967eac381f24c41f |
apache-2.0 | ['generated_from_trainer'] | false | Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:-----:|:---------------:|:--------:| | 0.6472 | 0.08 | 1500 | 0.3769 | 0.8705 | | 0.3873 | 0.15 | 3000 | 0.3814 | 0.9127 | | 0.3002 | 0.23 | 4500 | 0.2617 | 0.9429 | | 0.2399 | 0.3 | 6000 | 0.1336 | 0.9693 | | 0.2181 | 0.38 | 7500 | 0.1360 | 0.9728 | | 0.1992 | 0.46 | 9000 | 0.1239 | 0.9717 | | 0.1556 | 0.53 | 10500 | 0.1053 | 0.9781 | | 0.1412 | 0.61 | 12000 | 0.0915 | 0.9810 | | 0.1396 | 0.69 | 13500 | 0.0777 | 0.9826 | | 0.1159 | 0.76 | 15000 | 0.0801 | 0.9831 | | 0.1156 | 0.84 | 16500 | 0.0667 | 0.9867 | | 0.1149 | 0.91 | 18000 | 0.0670 | 0.9860 | | 0.0929 | 0.99 | 19500 | 0.0600 | 0.9876 | | 309f2be9b56c5ed048495acbd8956e89 |
creativeml-openrail-m | ['text-to-image', 'stable-diffusion'] | false | xridl Dreambooth model trained by Suniljl with [TheLastBen's fast-DreamBooth](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast-DreamBooth.ipynb) notebook Test the concept via A1111 Colab [fast-Colab-A1111](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast_stable_diffusion_AUTOMATIC1111.ipynb) Sample pictures of this concept: | 50cf699c1bde71ecf563409e58445334 |
apache-2.0 | ['generated_from_trainer'] | false | wav2vec2-large-xls-r-300m-georgian-large This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the common_voice_10_0 dataset. It achieves the following results on the evaluation set: - Loss: 0.4291 - Wer: 0.6392 | 9bcd5fadc841f43d0c0ce990fc14ddca |
apache-2.0 | ['generated_from_trainer'] | false | Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0003 - train_batch_size: 16 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 32 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - num_epochs: 15 - mixed_precision_training: Native AMP | 978acb608374f6c99fff73ed77439968 |
apache-2.0 | ['generated_from_trainer'] | false | Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:----:|:---------------:|:------:| | 3.0867 | 4.21 | 400 | 3.1211 | 1.0 | | 2.8871 | 8.42 | 800 | 2.2250 | 1.0 | | 0.3667 | 12.63 | 1200 | 0.4291 | 0.6392 | | 946a25d71b7831d0c400f04125c32896 |
apache-2.0 | ['exbert', 'multiberts', 'multiberts-seed-4'] | false | MultiBERTs Seed 4 Checkpoint 160k (uncased) Seed 4 intermediate checkpoint 160k MultiBERTs (pretrained BERT) model on English language using a masked language modeling (MLM) objective. It was introduced in [this paper](https://arxiv.org/pdf/2106.16163.pdf) and first released in [this repository](https://github.com/google-research/language/tree/master/language/multiberts). This is an intermediate checkpoint. The final checkpoint can be found at [multiberts-seed-4](https://hf.co/multberts-seed-4). This model is uncased: it does not make a difference between english and English. Disclaimer: The team releasing MultiBERTs did not write a model card for this model so this model card has been written by [gchhablani](https://hf.co/gchhablani). | 3dd7c932be812c62bedc39d91d922963 |
apache-2.0 | ['exbert', 'multiberts', 'multiberts-seed-4'] | false | How to use Here is how to use this model to get the features of a given text in PyTorch: ```python from transformers import BertTokenizer, BertModel tokenizer = BertTokenizer.from_pretrained('multiberts-seed-4-160k') model = BertModel.from_pretrained("multiberts-seed-4-160k") text = "Replace me by any text you'd like." encoded_input = tokenizer(text, return_tensors='pt') output = model(**encoded_input) ``` | f04bd678a2c1d6938145f84f53288b67 |
cc-by-sa-4.0 | ['asteroid', 'audio', 'ConvTasNet', 'audio-to-audio'] | false | Asteroid model `JorisCos/ConvTasNet_Libri3Mix_sepnoisy_8k` Description: This model was trained by Joris Cosentino using the librimix recipe in [Asteroid](https://github.com/asteroid-team/asteroid). It was trained on the `sep_noisy` task of the Libri3Mix dataset. Training config: ```yml data: n_src: 3 sample_rate: 8000 segment: 3 task: sep_noisy train_dir: data/wav8k/min/train-360 valid_dir: data/wav8k/min/dev filterbank: kernel_size: 16 n_filters: 512 stride: 8 masknet: bn_chan: 128 hid_chan: 512 mask_act: relu n_blocks: 8 n_repeats: 3 n_src: 3 skip_chan: 128 optim: lr: 0.001 optimizer: adam weight_decay: 0.0 training: batch_size: 24 early_stop: true epochs: 200 half_lr: true num_workers: 4 ``` Results: On Libri3Mix min test set : ```yml si_sdr: 5.978836560066222 si_sdr_imp: 10.388889689413096 sdr: 6.8651365291740225 sdr_imp: 10.928018056925016 sir: 14.997089638783114 sir_imp: 18.08248357801549 sar: 8.127504792061933 sar_imp: -0.7869320540959925 stoi: 0.7669414686111115 stoi_imp: 0.20416563213078837 ``` License notice: This work "ConvTasNet_Libri3Mix_sepnoisy_8k" is a derivative of [LibriSpeech ASR corpus](http://www.openslr.org/12) by Vassil Panayotov, used under [CC BY 4.0](https://creativecommons.org/licenses/by/4.0/); of The WSJ0 Hipster Ambient Mixtures dataset by [Whisper.ai](http://wham.whisper.ai/), used under [CC BY-NC 4.0](https://creativecommons.org/licenses/by-nc/4.0/) (Research only). "ConvTasNet_Libri3Mix_sepnoisy_8k" is licensed under [Attribution-ShareAlike 3.0 Unported](https://creativecommons.org/licenses/by-sa/3.0/) by Joris Cosentino | 57cce6564a245c828ed868506453673a |
apache-2.0 | [] | false | | Label | # | |:------------:|:-----:| | Organization | 30108 | | Location | 12924 | | Facility | 4458 | | Event | 7557 | | Product | 4389 | | Person | 15645 | **Download** You can download the dataset from [here](https://github.com/HaniehP/PersianNER) | b52262278818f58bfdcdcb2b6e4ffe96 |
apache-2.0 | [] | false | Results The following table summarizes the F1 score obtained by ParsBERT as compared to other models and architectures. | Dataset | ParsBERT v2 | ParsBERT v1 | mBERT | MorphoBERT | Beheshti-NER | LSTM-CRF | Rule-Based CRF | BiLSTM-CRF | |---------|-------------|-------------|-------|------------|--------------|----------|----------------|------------| | ARMAN | 99.84* | 98.79 | 95.89 | 89.9 | 84.03 | 86.55 | - | 77.45 | | 478c8bc26e2957feb668e04eab33b153 |
apache-2.0 | [] | false | How to use :hugs: | Notebook | Description | | |:----------|:-------------|------:| | [How to use Pipelines](https://github.com/hooshvare/parsbert-ner/blob/master/persian-ner-pipeline.ipynb) | Simple and efficient way to use State-of-the-Art models on downstream tasks through transformers | [](https://colab.research.google.com/github/hooshvare/parsbert-ner/blob/master/persian-ner-pipeline.ipynb) | | ce92e0ab4b1f84a98cb84c440561ea75 |
apache-2.0 | ['generated_from_trainer'] | false | my_awesome_model This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.8597 - F1: 0.5171 - Precision: 0.5205 - Recall: 0.52 - Accuracy: 0.52 | 0202e0006e695ccb116d04912eb2b5b4 |
apache-2.0 | ['generated_from_trainer'] | false | Training results | Training Loss | Epoch | Step | Validation Loss | F1 | Precision | Recall | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:------:|:---------:|:------:|:--------:| | 0.6451 | 1.0 | 752 | 0.7708 | 0.4699 | 0.5047 | 0.5035 | 0.5035 | | 0.5828 | 2.0 | 1504 | 0.7702 | 0.5101 | 0.5106 | 0.5106 | 0.5106 | | 0.5139 | 3.0 | 2256 | 0.8597 | 0.5171 | 0.5205 | 0.52 | 0.52 | | 41f42cdf0c1a3deeba9fd32680e011b6 |
mit | [] | false | Model by plasmo This is the Stable Diffusion model fine-tuned on the macro_bug concept taught to Stable Diffusion with Dreambooth. Macro Bug - a focus-stacked macro insect model (ShivamShrirao version, trained for 3000 steps). Keyword: "macro_bug", but sometimes it is not even needed, as this model seems heavily weighted. I made another version (theLastBen) of this model, but this model seems to create more detailed and creative images. Sample pictures of this concept: | d64b06f5be1540eac0705362e711f8fd |
apache-2.0 | ['exbert', 'multiberts', 'multiberts-seed-0'] | false | MultiBERTs Seed 0 Checkpoint 1900k (uncased) Seed 0 intermediate checkpoint 1900k MultiBERTs (pretrained BERT) model on English language using a masked language modeling (MLM) objective. It was introduced in [this paper](https://arxiv.org/pdf/2106.16163.pdf) and first released in [this repository](https://github.com/google-research/language/tree/master/language/multiberts). This is an intermediate checkpoint. The final checkpoint can be found at [multiberts-seed-0](https://hf.co/multberts-seed-0). This model is uncased: it does not make a difference between english and English. Disclaimer: The team releasing MultiBERTs did not write a model card for this model so this model card has been written by [gchhablani](https://hf.co/gchhablani). | 35c248972d5424a22aaa431dad3e2e02 |
apache-2.0 | ['exbert', 'multiberts', 'multiberts-seed-0'] | false | How to use Here is how to use this model to get the features of a given text in PyTorch: ```python from transformers import BertTokenizer, BertModel tokenizer = BertTokenizer.from_pretrained('multiberts-seed-0-1900k') model = BertModel.from_pretrained("multiberts-seed-0-1900k") text = "Replace me by any text you'd like." encoded_input = tokenizer(text, return_tensors='pt') output = model(**encoded_input) ``` | dbc3d8348e444d5597ff1f80dcb7df91 |
creativeml-openrail-m | ['text-to-image', 'stable-diffusion'] | false | CRonaldolibya Dreambooth model trained by WALIDALI with [TheLastBen's fast-DreamBooth](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast-DreamBooth.ipynb) notebook Test the concept via A1111 Colab [fast-Colab-A1111](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast_stable_diffusion_AUTOMATIC1111.ipynb) Sample pictures of this concept: | 971053db6f9925b6b4031f023befdf48 |
apache-2.0 | ['solidity', 'web3', 'code generation', 'smart contract'] | false | How to use this trained model - A hello-world example of using this model; notice the input `text` includes - Header Solidity version, like `pragma solidity ^0.5.7` - Ancestor class/library info, e.g. public functions and constants from `ParentA` - Contract/Library/Interface declaration header, e.g. `HelloWorld` ending with `{` - Or simply use the test widget on the right side of the window; however, the quality is known to be worse without decoding params ```python | e7fb7a6f1b59f841882177c31fc494c6 |
apache-2.0 | ['solidity', 'web3', 'code generation', 'smart contract'] | false | fallback to cpu if you do not have cuda from transformers import AutoTokenizer, T5ForConditionalGeneration DEVICE = "cuda" tokenizer = AutoTokenizer.from_pretrained("hululuzhu/solidity-t5") model = T5ForConditionalGeneration.from_pretrained("hululuzhu/solidity-t5").to(DEVICE) text = """pragma solidity ^0.5.7; // Context: ParentA | Functions: helloA helloB | Constants: constantA contract HelloWorld is ParentA {""" input_ids = tokenizer(text, return_tensors="pt", truncation=True).input_ids.to(DEVICE) | 3702e7f356a63d308ef9edfab751631f |
apache-2.0 | ['solidity', 'web3', 'code generation', 'smart contract'] | false | Need to tune beam/topk/topp params to get good outcome generated_ids = model.generate(input_ids, max_length=256, num_beams=5, top_p=0.95, top_k=50) print(tokenizer.decode(generated_ids[0], skip_special_tokens=True)) | 6187a2e1105de451eb584987cb9ee4e7 |
apache-2.0 | ['solidity', 'web3', 'code generation', 'smart contract'] | false | Expect outcome """ string public constant name = "Hello World"; ... uint256 public constant override returns (uint256) { return initialSupply; } function initialSupply() public view returns (uint256) { ... """ ``` | 86ee28f76a39767619e74cbd961dd72b |
apache-2.0 | ['solidity', 'web3', 'code generation', 'smart contract'] | false | Background - Base T5 code model: https://huggingface.co/Salesforce/codet5-large - Source data: https://huggingface.co/datasets/mwritescode/slither-audited-smart-contracts - Processing steps: Clean, contract-level segmentation separation, split into input and output - After processing input sample ``` pragma solidity 0.5.7; // Context: PauserRole | Functions: isPauser addPauser renouncePauser | Constants: contract Pausable is PauserRole { ``` - After processing output sample (**notice indentation is bad, this is intentional to reduce token size**) ``` event Paused(address account); event Unpaused(address account); bool private _pausableActive; bool private _paused; constructor () internal { _paused = false; } function paused() public view returns (bool) { return _paused; } modifier whenNotPaused() { require(!_paused); _; } modifier whenPaused() { require(_paused); _; } function pause() public onlyPauser whenNotPaused whenPausableActive { _paused = true; emit Paused(msg.sender); } function unpause() public onlyPauser whenPaused whenPausableActive { _paused = false; emit Unpaused(msg.sender); } function _setPausableActive(bool _active) internal { _pausableActive = _active; } modifier whenPausableActive() { require(_pausableActive); _; } } ``` - Source training code: See the [end to end notebook](https://github.com/hululuzhu/solidity-t5/blob/main/code/Solidity_T5_Data_Processing_and_Training.ipynb) in the code dir | 457093249cd529ab4a3a8b317b5f36d7 |
apache-2.0 | ['solidity', 'web3', 'code generation', 'smart contract'] | false | Future TODO - The model is significantly under-trained because of a lack of GPU budget; it needs roughly 10x the Colab resources (~$100 for a full train) - Usage of the model is still quite limited; we could potentially switch to a GPT-2 decoder-only architecture for comparison, but CodeT5 has strong code-specific optimizations - Need more classifiers (T5 or BERT alike) to detect potential defects. | 36a646ea4b5060b2c371ff8b3b6e7d89 |
mit | ['generated_from_trainer'] | false | bart-cnn-pubmed-arxiv-pubmed-arxiv-arxiv-v3-e3 This model is a fine-tuned version of [theojolliffe/bart-cnn-pubmed-arxiv-pubmed-arxiv-arxiv](https://huggingface.co/theojolliffe/bart-cnn-pubmed-arxiv-pubmed-arxiv-arxiv) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.8311 - Rouge1: 53.458 - Rouge2: 34.076 - Rougel: 37.3287 - Rougelsum: 50.7849 - Gen Len: 142.0 | fccefc77c5bae99e180518114de9e22b |
mit | ['generated_from_trainer'] | false | Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 2 - eval_batch_size: 2 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 - mixed_precision_training: Native AMP | bee3cc03489d7a16a1863eac55144072 |
mit | ['generated_from_trainer'] | false | Training results | Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len | |:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:-------:|:---------:|:--------:| | No log | 1.0 | 398 | 0.8697 | 52.6579 | 33.307 | 35.8099 | 49.9687 | 142.0 | | 0.8264 | 2.0 | 796 | 0.8293 | 52.6738 | 33.7202 | 36.1502 | 50.0501 | 141.9815 | | 0.5471 | 3.0 | 1194 | 0.8311 | 53.458 | 34.076 | 37.3287 | 50.7849 | 142.0 | | 5c472516e455028ffa9075bdf5c620c3 |
creativeml-openrail-m | ['text-to-image', 'stable-diffusion'] | false | the_pm_generator Dreambooth model trained by uxstudent. Use the prompt field on the right to generate avatars. Need ideas for prompts? Try: - `picture of Pablo by Leonardo Da Vinci` - `picture of Pablo wearing aviator jacket by greg rutkowsi` - `portrait photo of pablo warrior chief, tribal panther make up, blue on red, side profile, looking away, serious eyes, 50mm portrait photography, hard rim lighting photography–beta –ar 2:3 –beta –upbeta –upbeta` More examples: - https://mpost.io/best-100-stable-diffusion-prompts-the-most-beautiful-ai-text-to-image-prompts/ | 3232b2ec944e7a1db1d36d0150288136 |
apache-2.0 | ['translation'] | false | opus-mt-sv-zne * source languages: sv * target languages: zne * OPUS readme: [sv-zne](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/sv-zne/README.md) * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/sv-zne/opus-2020-01-16.zip) * test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/sv-zne/opus-2020-01-16.test.txt) * test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/sv-zne/opus-2020-01-16.eval.txt) | 1a5baf7fb3914726bc6e5f305931d8e0 |
cc-by-4.0 | ['generated_from_trainer'] | false | What-deepset-bert-uncased-finetune This model is a fine-tuned version of [deepset/bert-large-uncased-whole-word-masking-squad2](https://huggingface.co/deepset/bert-large-uncased-whole-word-masking-squad2) on an unknown dataset. | 33de357a72bb0401cf8cd8f3de166801 |
cc-by-4.0 | ['generated_from_trainer'] | false | Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 12 - eval_batch_size: 24 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 | 1a4df65a7b4eba451ed26cbe0c70ca67 |
apache-2.0 | ['generated_from_trainer'] | false | distilbert-base-uncased-finetuned-cola This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the glue dataset. It achieves the following results on the evaluation set: - Loss: 0.8381 - Matthews Correlation: 0.5456 | 5da581b7f1f000523f145ab3d9d78fb9 |
apache-2.0 | ['generated_from_trainer'] | false | Training results | Training Loss | Epoch | Step | Validation Loss | Matthews Correlation | |:-------------:|:-----:|:----:|:---------------:|:--------------------:| | 0.5245 | 1.0 | 535 | 0.5432 | 0.4249 | | 0.3514 | 2.0 | 1070 | 0.5075 | 0.4874 | | 0.2368 | 3.0 | 1605 | 0.5554 | 0.5403 | | 0.1712 | 4.0 | 2140 | 0.7780 | 0.5246 | | 0.1254 | 5.0 | 2675 | 0.8381 | 0.5456 | | ecc40d0e267dfb2942e4cd5df1761886 |
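Each row above pairs a `license`, a stringified `tags` list, an `is_nc` flag, the raw `readme_section` text, and a 32-character `hash`. Below is a minimal sketch of how a dataset with this schema could be loaded and filtered with the `datasets` library; the repository id used here is a placeholder, not the actual id of this dataset.

```python
# Minimal sketch, assuming the rows above live in a Hugging Face dataset repo.
# "your-username/model-card-sections" is a hypothetical repo id -- substitute the real one.
from datasets import load_dataset

ds = load_dataset("your-username/model-card-sections", split="train")

# Keep only permissively licensed, non-NC rows and peek at one README section.
permissive = ds.filter(
    lambda row: row["license"] in {"apache-2.0", "mit"} and not row["is_nc"]
)
print(f"{len(permissive)} of {len(ds)} rows are apache-2.0/mit and not NC")
print(permissive[0]["hash"], permissive[0]["readme_section"][:200])
```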