| license | tags | is_nc | readme_section | hash |
|---|---|---|---|---|
creativeml-openrail-m | ['text-to-image', 'stable-diffusion'] | false | m3rrw3 Dreambooth model trained by gababas with [TheLastBen's fast-DreamBooth](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast-DreamBooth.ipynb) notebook Test the concept via A1111 Colab [fast-Colab-A1111](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast_stable_diffusion_AUTOMATIC1111.ipynb) Sample pictures of this concept: | fd3db34f7e6f1b3e67000d76eb6954d4 |
creativeml-openrail-m | ['text-to-image', 'stable-diffusion'] | false | Tippy Dreambooth model trained by KeaponLaffin with [TheLastBen's fast-DreamBooth](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast-DreamBooth.ipynb) notebook Test the concept via A1111 Colab [fast-Colab-A1111](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast_stable_diffusion_AUTOMATIC1111.ipynb) Or you can run your new concept via `diffusers` [Colab Notebook for Inference](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_dreambooth_inference.ipynb) Sample pictures of this concept: | a109da790c723a8a52d74cb16e00cd26 |
cc-by-4.0 | ['espnet', 'audio', 'automatic-speech-recognition'] | false | `Shinji_Watanabe/librispeech_asr_train_asr_transformer_e18_raw_bpe_sp_valid.acc.best` ♻️ Imported from https://zenodo.org/record/4030677/ This model was trained by Shinji Watanabe using librispeech/asr1 recipe in [espnet](https://github.com/espnet/espnet/). | c159716bfb15f58b70450516f7413235 |
cc-by-4.0 | ['espnet', 'audio', 'automatic-speech-recognition'] | false | Citing ESPnet ```BibTex @inproceedings{watanabe2018espnet, author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson {Enrique Yalta Soplin} and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai}, title={{ESPnet}: End-to-End Speech Processing Toolkit}, year={2018}, booktitle={Proceedings of Interspeech}, pages={2207--2211}, doi={10.21437/Interspeech.2018-1456}, url={http://dx.doi.org/10.21437/Interspeech.2018-1456} } @inproceedings{hayashi2020espnet, title={{Espnet-TTS}: Unified, reproducible, and integratable open source end-to-end text-to-speech toolkit}, author={Hayashi, Tomoki and Yamamoto, Ryuichi and Inoue, Katsuki and Yoshimura, Takenori and Watanabe, Shinji and Toda, Tomoki and Takeda, Kazuya and Zhang, Yu and Tan, Xu}, booktitle={Proceedings of IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)}, pages={7654--7658}, year={2020}, organization={IEEE} } ``` or arXiv: ```bibtex @misc{watanabe2018espnet, title={ESPnet: End-to-End Speech Processing Toolkit}, author={Shinji Watanabe and Takaaki Hori and Shigeki Karita and Tomoki Hayashi and Jiro Nishitoba and Yuya Unno and Nelson Enrique Yalta Soplin and Jahn Heymann and Matthew Wiesner and Nanxin Chen and Adithya Renduchintala and Tsubasa Ochiai}, year={2018}, eprint={1804.00015}, archivePrefix={arXiv}, primaryClass={cs.CL} } ``` | fa9e33de3b2586f651c12cf922293baf |
zlib | [] | false | Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> This modelcard aims to be a base template for new models. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/modelcard_template.md?plain=1). | 1013f6698ff26bca75fcb5945d9a8a6c |
zlib | [] | false | Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] | 92d44fb5fae3afef4b6d719e32f9e3c0 |
zlib | [] | false | Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] | a24191f900b5ff1df051e501cb2e3a94 |
zlib | [] | false | Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. | d06c30070bf52e8e661810ae70ea5e52 |
zlib | [] | false | Training Data <!-- This should link to a Data Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] | 2605012cfc37b4509fa2a6767b1d6dcd |
zlib | [] | false | Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact | 07845922c1cfcb3f7d64404631069cef |
zlib | [] | false | compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] | fc9d547bf5335f7585fd92151791cb8f |
zlib | [] | false | Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] | 0a029a351265e0d90f8144230a871db4 |
apache-2.0 | ['deep-narrow'] | false | T5-Efficient-TINY-FF12000 (Deep-Narrow version) T5-Efficient-TINY-FF12000 is a variation of [Google's original T5](https://ai.googleblog.com/2020/02/exploring-transfer-learning-with-t5.html) following the [T5 model architecture](https://huggingface.co/docs/transformers/model_doc/t5). It is a *pretrained-only* checkpoint and was released with the paper **[Scale Efficiently: Insights from Pre-training and Fine-tuning Transformers](https://arxiv.org/abs/2109.10686)** by *Yi Tay, Mostafa Dehghani, Jinfeng Rao, William Fedus, Samira Abnar, Hyung Won Chung, Sharan Narang, Dani Yogatama, Ashish Vaswani, Donald Metzler*. In a nutshell, the paper indicates that a **Deep-Narrow** model architecture is favorable for **downstream** performance compared to other model architectures of similar parameter count. To quote the paper: > We generally recommend a DeepNarrow strategy where the model’s depth is preferentially increased > before considering any other forms of uniform scaling across other dimensions. This is largely due to > how much depth influences the Pareto-frontier as shown in earlier sections of the paper. Specifically, a > tall small (deep and narrow) model is generally more efficient compared to the base model. Likewise, > a tall base model might also generally more efficient compared to a large model. We generally find > that, regardless of size, even if absolute performance might increase as we continue to stack layers, > the relative gain of Pareto-efficiency diminishes as we increase the layers, converging at 32 to 36 > layers. Finally, we note that our notion of efficiency here relates to any one compute dimension, i.e., > params, FLOPs or throughput (speed). We report all three key efficiency metrics (number of params, > FLOPS and speed) and leave this decision to the practitioner to decide which compute dimension to > consider. To be more precise, *model depth* is defined as the number of transformer blocks that are stacked sequentially. A sequence of word embeddings is therefore processed sequentially by each transformer block. | 0de65fd16cdbee35d2179a2372ed92fc |
apache-2.0 | ['deep-narrow'] | false | Details model architecture This model checkpoint - **t5-efficient-tiny-ff12000** - is of model type **Tiny** with the following variations: - **ff** is **12000** It has **61.72** million parameters and thus requires *ca.* **246.87 MB** of memory in full precision (*fp32*) or **123.44 MB** of memory in half precision (*fp16* or *bf16*). A summary of the *original* T5 model architectures can be seen here: | Model | nl (el/dl) | ff | dm | kv | nh | | 43ad356551d265aec57e4d627612de1b |
apache-2.0 | ['deep-narrow'] | false | Params| | ----| ---- | ---- | ---- | ---- | ---- | ----| | Tiny | 4/4 | 1024 | 256 | 32 | 4 | 16M| | Mini | 4/4 | 1536 | 384 | 32 | 8 | 31M| | Small | 6/6 | 2048 | 512 | 32 | 8 | 60M| | Base | 12/12 | 3072 | 768 | 64 | 12 | 220M| | Large | 24/24 | 4096 | 1024 | 64 | 16 | 738M| | Xl | 24/24 | 16384 | 1024 | 128 | 32 | 3B| | XXl | 24/24 | 65536 | 1024 | 128 | 128 | 11B| whereas the following abbreviations are used: | Abbreviation | Definition | | ----| ---- | | nl | Number of transformer blocks (depth) | | dm | Dimension of embedding vector (output vector of transformer block) | | kv | Dimension of key/value projection matrix | | nh | Number of attention heads | | ff | Dimension of intermediate vector within transformer block (size of feed-forward projection matrix) | | el | Number of transformer blocks in the encoder (encoder depth) | | dl | Number of transformer blocks in the decoder (decoder depth) | | sh | Signifies that attention heads are shared | | skv | Signifies that key-value projection matrices are tied | If a model checkpoint has no specific *el* or *dl*, then both the number of encoder and decoder layers correspond to *nl*. | bc09d4b825a147eddb1c0e9a723d148d |
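The parameter and memory figures quoted above for **t5-efficient-tiny-ff12000** follow directly from 4 bytes per parameter in fp32 and 2 bytes in fp16/bf16; a quick sanity check:

```python
params = 61.72e6             # parameter count reported in the card
print(params * 4 / 1e6)      # fp32:      ~246.88 MB (4 bytes/param)
print(params * 2 / 1e6)      # fp16/bf16: ~123.44 MB (2 bytes/param)
```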
apache-2.0 | ['deep-narrow'] | false | Pre-Training The checkpoint was pretrained on the [Colossal, Cleaned version of Common Crawl (C4)](https://huggingface.co/datasets/c4) for 524288 steps using the span-based masked language modeling (MLM) objective. | 4c2801c69d0c104b4a96dc5c57a005c8 |
apache-2.0 | ['deep-narrow'] | false | Fine-Tuning **Note**: This model is a **pretrained** checkpoint and has to be fine-tuned for practical usage. The checkpoint was pretrained in English and is therefore only useful for English NLP tasks. You can follow one of the following examples on how to fine-tune the model: *PyTorch*: - [Summarization](https://github.com/huggingface/transformers/tree/master/examples/pytorch/summarization) - [Question Answering](https://github.com/huggingface/transformers/blob/master/examples/pytorch/question-answering/run_seq2seq_qa.py) - [Text Classification](https://github.com/huggingface/transformers/tree/master/examples/pytorch/text-classification) - *Note*: You will have to slightly adapt the training example here to make it work with an encoder-decoder model. *Tensorflow*: - [Summarization](https://github.com/huggingface/transformers/tree/master/examples/tensorflow/summarization) - [Text Classification](https://github.com/huggingface/transformers/tree/master/examples/tensorflow/text-classification) - *Note*: You will have to slightly adapt the training example here to make it work with an encoder-decoder model. *JAX/Flax*: - [Summarization](https://github.com/huggingface/transformers/tree/master/examples/flax/summarization) - [Text Classification](https://github.com/huggingface/transformers/tree/master/examples/flax/text-classification) - *Note*: You will have to slightly adapt the training example here to make it work with an encoder-decoder model. | 521707760ee9ec6510f9ebbedda647b0 |
apache-2.0 | ['deep-narrow'] | false | More information We strongly recommend the reader to go carefully through the original paper **[Scale Efficiently: Insights from Pre-training and Fine-tuning Transformers](https://arxiv.org/abs/2109.10686)** to get a more nuanced understanding of this model checkpoint. As explained in the following [issue](https://github.com/google-research/google-research/issues/986 | cf6d451af6a8a5b3a9d429bab67863d9 |
apache-2.0 | ['deep-narrow'] | false | issuecomment-1035051145), checkpoints including the *sh* or *skv* model architecture variations have *not* been ported to Transformers as they are probably of limited practical usage and are lacking a more detailed description. Those checkpoints are kept [here](https://huggingface.co/NewT5SharedHeadsSharedKeyValues) as they might be ported potentially in the future. | 1d159d97a352942f4481b627660d761f |
apache-2.0 | [] | false | doc2query/stackexchange-title-body-t5-base-v1
This is a [doc2query](https://arxiv.org/abs/1904.08375) model based on T5 (also known as [docT5query](https://cs.uwaterloo.ca/~jimmylin/publications/Nogueira_Lin_2019_docTTTTTquery-v2.pdf)).
It can be used for:
- **Document expansion**: You generate 20-40 queries for each paragraph and index the paragraphs together with the generated queries in a standard BM25 index like Elasticsearch, OpenSearch, or Lucene. The generated queries help to close the lexical gap of lexical search, as they contain synonyms. Further, expansion re-weights words, giving important words a higher weight even if they appear seldom in a paragraph (a small indexing sketch follows this section). In our [BEIR](https://arxiv.org/abs/2104.08663) paper we showed that BM25+docT5query is a powerful search engine. In the [BEIR repository](https://github.com/UKPLab/beir) we have an example of how to use docT5query with Pyserini.
- **Domain Specific Training Data Generation**: It can be used to generate training data to learn an embedding model. On [SBERT.net](https://www.sbert.net/examples/unsupervised_learning/query_generation/README.html) we have an example of how to use the model to generate (query, text) pairs for a given collection of unlabeled texts. These pairs can then be used to train powerful dense embedding models.
| a2526d32c4cf2954c5dc2ac8bf5484ca |
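As a concrete illustration of the document-expansion idea, here is a minimal sketch that indexes each paragraph together with its generated queries and scores them with BM25. It assumes the `rank_bm25` package as a small stand-in for Elasticsearch/OpenSearch/Lucene, and the example queries are placeholders for the model's output (the actual generation code appears in the Usage section below):

```python
# pip install rank_bm25
from rank_bm25 import BM25Okapi

def expand(paragraph: str, queries: list[str]) -> str:
    """Document expansion: append the generated queries to the paragraph text."""
    return paragraph + " " + " ".join(queries)

paragraphs = ["Python is an interpreted, high-level programming language."]
# placeholder output; in practice these come from the doc2query model below
generated = [["what language is python", "is python an interpreted language"]]

corpus = [expand(p, q) for p, q in zip(paragraphs, generated)]
bm25 = BM25Okapi([doc.lower().split() for doc in corpus])

# query terms added by expansion now contribute to the BM25 score,
# narrowing the lexical gap between user queries and the original text
print(bm25.get_scores("what language is python".split()))
```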
apache-2.0 | [] | false | Usage
```python
from transformers import T5Tokenizer, T5ForConditionalGeneration
model_name = 'doc2query/stackexchange-title-body-t5-base-v1'
tokenizer = T5Tokenizer.from_pretrained(model_name)
model = T5ForConditionalGeneration.from_pretrained(model_name)
text = "Python is an interpreted, high-level and general-purpose programming language. Python's design philosophy emphasizes code readability with its notable use of significant whitespace. Its language constructs and object-oriented approach aim to help programmers write clear, logical code for small and large-scale projects."
input_ids = tokenizer.encode(text, max_length=320, truncation=True, return_tensors='pt')
outputs = model.generate(
    input_ids=input_ids,
    max_length=64,
    do_sample=True,
    top_p=0.95,
    num_return_sequences=5)
print("Text:")
print(text)
print("\nGenerated Queries:")
for i in range(len(outputs)):
    query = tokenizer.decode(outputs[i], skip_special_tokens=True)
    print(f'{i + 1}: {query}')
```
**Note:** `model.generate()` is non-deterministic. It produces different queries each time you run it.
| 617dc64359f33fb4199cd61310fa42e2 |
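As noted above, sampling makes `model.generate()` non-deterministic. A minimal sketch of one way to make the queries reproducible, assuming the `model` and `input_ids` from the snippet above, is to fix the seed with transformers' `set_seed` helper:

```python
from transformers import set_seed

set_seed(42)  # fixes the Python, NumPy and PyTorch RNGs that sampling draws from
outputs = model.generate(
    input_ids=input_ids,
    max_length=64,
    do_sample=True,
    top_p=0.95,
    num_return_sequences=5)
# re-running after set_seed(42) reproduces the same five queries
```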
apache-2.0 | [] | false | Training
This model was created by fine-tuning [google/t5-v1_1-base](https://huggingface.co/google/t5-v1_1-base) for 550k training steps. For the training script, see the `train_script.py` in this repository.
The input-text was truncated to 320 word pieces. Output text was generated up to 64 word pieces.
This model was trained on (title, question_body) pairs from StackExchange.
| 156e53a7ab198e08bb5d541e703bbaf9 |
apache-2.0 | ['generated_from_trainer'] | false | wav2vec2-base-timit-demo-colab3 This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.8480 - Wer: 0.5608 | 58e179d1576ccf55d58d11881f2f9b4c |
apache-2.0 | ['generated_from_trainer'] | false | Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 16 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 600 - num_epochs: 30 - mixed_precision_training: Native AMP | 6014503c02b7ebdd6d6740548157d50f |
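For readers who want to reproduce the run, the logged values above map one-to-one onto `transformers.TrainingArguments`. A minimal sketch under that assumption; the card does not include the actual training script, so treat this as illustrative rather than the authors' code:

```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="wav2vec2-base-timit-demo-colab3",
    learning_rate=1e-4,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=8,
    seed=42,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    warmup_steps=600,
    num_train_epochs=30,
    fp16=True,  # "Native AMP" mixed-precision training
)
```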
apache-2.0 | ['generated_from_trainer'] | false | Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:----:|:---------------:|:------:| | 4.7977 | 13.89 | 500 | 1.6491 | 0.8257 | | 0.7393 | 27.78 | 1000 | 0.8480 | 0.5608 | | fd1981ad61adecffe0ab0f0ed4b460a5 |
apache-2.0 | ['generated_from_trainer'] | false | distilbert-base-uncased-finetuned-cola This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the glue dataset. It achieves the following results on the evaluation set: - Loss: 0.8311 - Matthews Correlation: 0.5199 | 4c86745bd9904dcf6b74398d38c766dc |
apache-2.0 | ['generated_from_trainer'] | false | Training results | Training Loss | Epoch | Step | Validation Loss | Matthews Correlation | |:-------------:|:-----:|:----:|:---------------:|:--------------------:| | 0.5263 | 1.0 | 535 | 0.5272 | 0.4152 | | 0.3504 | 2.0 | 1070 | 0.4835 | 0.5021 | | 0.2372 | 3.0 | 1605 | 0.6059 | 0.5056 | | 0.182 | 4.0 | 2140 | 0.7617 | 0.5179 | | 0.1319 | 5.0 | 2675 | 0.8311 | 0.5199 | | 151eec72183e1466835d35b80a2db66f |
apache-2.0 | ['generated_from_trainer'] | false | Article_100v9_NER_Model_3Epochs_AUGMENTED This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the article100v9_wikigold_split dataset. It achieves the following results on the evaluation set: - Loss: 0.3011 - Precision: 0.4913 - Recall: 0.5293 - F1: 0.5096 - Accuracy: 0.8977 | 6e47d7b14ad40dc616f4f8af1c6cefd7 |
apache-2.0 | ['generated_from_trainer'] | false | Training results | Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:| | No log | 1.0 | 44 | 0.3780 | 0.3029 | 0.2939 | 0.2984 | 0.8623 | | No log | 2.0 | 88 | 0.3133 | 0.4705 | 0.4818 | 0.4761 | 0.8922 | | No log | 3.0 | 132 | 0.3011 | 0.4913 | 0.5293 | 0.5096 | 0.8977 | | 5e121613f1e70383e8e382e40a09c325 |
apache-2.0 | ['generated_from_trainer'] | false | bert-base-cased-deep-ritmo-sampa This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the None dataset. It achieves the following results on the evaluation set: - Loss: 2.5550 | 0369bbc4c47bd1fec73072eb999c1b5a |
apache-2.0 | ['generated_from_trainer'] | false | Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 3.4042 | 1.0 | 1875 | 3.0610 | | 2.8648 | 2.0 | 3750 | 2.6298 | | 2.6572 | 3.0 | 5625 | 2.5550 | | 580ee5b699b0dd47acf4dc0d8c51f16d |
other | ['vision', 'image-segmentation'] | false | Mask2Former Mask2Former model trained on Cityscapes semantic segmentation (large-sized version, Swin backbone). It was introduced in the paper [Masked-attention Mask Transformer for Universal Image Segmentation ](https://arxiv.org/abs/2112.01527) and first released in [this repository](https://github.com/facebookresearch/Mask2Former/). Disclaimer: The team releasing Mask2Former did not write a model card for this model so this model card has been written by the Hugging Face team. | 5e0bc9896afd3d0be3a2db7717698dd1 |
other | ['vision', 'image-segmentation'] | false | Model description Mask2Former addresses instance, semantic and panoptic segmentation with the same paradigm: by predicting a set of masks and corresponding labels. Hence, all 3 tasks are treated as if they were instance segmentation. Mask2Former outperforms the previous SOTA, [MaskFormer](https://arxiv.org/abs/2107.06278), in terms of both performance and efficiency by (i) replacing the pixel decoder with a more advanced multi-scale deformable attention Transformer, (ii) adopting a Transformer decoder with masked attention to boost performance without introducing additional computation and (iii) improving training efficiency by calculating the loss on subsampled points instead of whole masks.  | 4fe0caae0795f7d97583b395240d6dc2 |
other | ['vision', 'image-segmentation'] | false | Intended uses & limitations You can use this particular checkpoint for panoptic segmentation. See the [model hub](https://huggingface.co/models?search=mask2former) to look for other fine-tuned versions on a task that interests you. | 9408aa151d2b393d574bb03cdb13a852 |
other | ['vision', 'image-segmentation'] | false | load Mask2Former fine-tuned on Cityscapes semantic segmentation processor = AutoImageProcessor.from_pretrained("facebook/mask2former-swin-large-cityscapes-semantic") model = Mask2FormerForUniversalSegmentation.from_pretrained("facebook/mask2former-swin-large-cityscapes-semantic") url = "http://images.cocodataset.org/val2017/000000039769.jpg" image = Image.open(requests.get(url, stream=True).raw) inputs = processor(images=image, return_tensors="pt") with torch.no_grad(): outputs = model(**inputs) | ac615617a67e3091c1087903cefa4bbc |
other | ['vision', 'image-segmentation'] | false | we refer to the demo notebooks for visualization (see "Resources" section in the Mask2Former docs) ``` For more code examples, we refer to the [documentation](https://huggingface.co/docs/transformers/master/en/model_doc/mask2former). | abd2df604097ab10885412557733c0c7 |
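The snippet above lost its imports and opening code fence when the README was split into sections. Below is a reconstruction as a self-contained sketch: it adds the missing imports and a post-processing step; `post_process_semantic_segmentation` is the processor method documented for producing semantic maps:

```python
import requests
import torch
from PIL import Image
from transformers import AutoImageProcessor, Mask2FormerForUniversalSegmentation

# load Mask2Former fine-tuned on Cityscapes semantic segmentation
processor = AutoImageProcessor.from_pretrained("facebook/mask2former-swin-large-cityscapes-semantic")
model = Mask2FormerForUniversalSegmentation.from_pretrained("facebook/mask2former-swin-large-cityscapes-semantic")

url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)

inputs = processor(images=image, return_tensors="pt")
with torch.no_grad():
    outputs = model(**inputs)

# turn the predicted class/mask logits into a (height, width) semantic map
semantic_map = processor.post_process_semantic_segmentation(
    outputs, target_sizes=[image.size[::-1]]
)[0]
```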
apache-2.0 | ['generated_from_trainer'] | false | distilbert-base-uncased-finetuned-ner This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the conll2003 dataset. It achieves the following results on the evaluation set: - Loss: 0.0607 - Precision: 0.9285 - Recall: 0.9362 - F1: 0.9324 - Accuracy: 0.9839 | 99d0e00f7d00e2a05761d584ef83b7dd |
apache-2.0 | ['generated_from_trainer'] | false | Training results | Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:| | 0.2452 | 1.0 | 878 | 0.0709 | 0.9184 | 0.9206 | 0.9195 | 0.9803 | | 0.0501 | 2.0 | 1756 | 0.0621 | 0.9212 | 0.9328 | 0.9270 | 0.9830 | | 0.0299 | 3.0 | 2634 | 0.0607 | 0.9285 | 0.9362 | 0.9324 | 0.9839 | | add3e2d898c77eb93bdd13ee344c0a00 |
apache-2.0 | ['generated_from_trainer'] | false | Fine_Tuning_XLSR_300M_testing_model This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the None dataset. It achieves the following results on the evaluation set: - Loss: 3.2861 - Wer: 1.0 | adffa952213e26fa0e046cab130239b1 |
apache-2.0 | ['generated_from_trainer'] | false | t5-small-devices-sum-ver3 This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.1325 - Rouge1: 95.6631 - Rouge2: 83.6149 - Rougel: 95.6622 - Rougelsum: 95.6632 - Gen Len: 4.9279 | dc2cc2573ad34854e6f1bd2ff0e4dea0 |
apache-2.0 | ['generated_from_trainer'] | false | Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 10 - mixed_precision_training: Native AMP | 55661b6324f8fe64add7f2a10e3c1340 |
apache-2.0 | ['generated_from_trainer'] | false | Training results | Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len | |:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:-------:|:---------:|:-------:| | No log | 1.0 | 467 | 0.3307 | 90.9817 | 74.3762 | 90.9596 | 90.9781 | 4.7527 | | 1.0254 | 2.0 | 934 | 0.2365 | 92.6761 | 78.1252 | 92.6664 | 92.6682 | 4.8004 | | 0.3526 | 3.0 | 1401 | 0.1904 | 93.8503 | 80.4523 | 93.8286 | 93.8338 | 4.8221 | | 0.2643 | 4.0 | 1868 | 0.1638 | 94.8079 | 82.1779 | 94.7815 | 94.7853 | 4.917 | | 0.2075 | 5.0 | 2335 | 0.1503 | 95.1619 | 82.6284 | 95.1533 | 95.1578 | 4.9263 | | 0.1831 | 6.0 | 2802 | 0.1408 | 95.2357 | 82.8152 | 95.2261 | 95.2263 | 4.9287 | | 0.161 | 7.0 | 3269 | 0.1386 | 95.4993 | 83.2609 | 95.4935 | 95.4933 | 4.9269 | | 0.1589 | 8.0 | 3736 | 0.1344 | 95.6363 | 83.4727 | 95.6304 | 95.632 | 4.9309 | | 0.1517 | 9.0 | 4203 | 0.1330 | 95.6702 | 83.6329 | 95.6669 | 95.6736 | 4.9301 | | 0.1436 | 10.0 | 4670 | 0.1325 | 95.6631 | 83.6149 | 95.6622 | 95.6632 | 4.9279 | | eb11af760e9c989c7ae3b2323fb1125b |
mit | ['generated_from_trainer', 'nlu', 'text-classification', 'intent-classification'] | false | multilingual_minilm-amazon_massive-intent_eu_noen This model is a fine-tuned version of [microsoft/Multilingual-MiniLM-L12-H384](https://huggingface.co/microsoft/Multilingual-MiniLM-L12-H384) on the [MASSIVE1.1](https://huggingface.co/datasets/AmazonScience/massive) dataset. It achieves the following results on the evaluation set: - Loss: 0.7794 - Accuracy: 0.8551 - F1: 0.8551 | e96992b1127cde3a99f37d4da8dfda9d |
mit | ['generated_from_trainer', 'nlu', 'text-classification', 'intent-classification'] | false | Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | |:-------------:|:-----:|:-----:|:---------------:|:--------:|:------:| | 1.7624 | 1.0 | 4318 | 1.5462 | 0.6331 | 0.6331 | | 0.9535 | 2.0 | 8636 | 0.9628 | 0.7698 | 0.7698 | | 0.6849 | 3.0 | 12954 | 0.8034 | 0.8097 | 0.8097 | | 0.5163 | 4.0 | 17272 | 0.7444 | 0.8290 | 0.8290 | | 0.3973 | 5.0 | 21590 | 0.7346 | 0.8383 | 0.8383 | | 0.331 | 6.0 | 25908 | 0.7369 | 0.8453 | 0.8453 | | 0.2876 | 7.0 | 30226 | 0.7325 | 0.8510 | 0.8510 | | 0.2319 | 8.0 | 34544 | 0.7726 | 0.8496 | 0.8496 | | 0.2098 | 9.0 | 38862 | 0.7803 | 0.8543 | 0.8543 | | 0.1863 | 10.0 | 43180 | 0.7794 | 0.8551 | 0.8551 | | 8981cad461560c94316cc8a94119e415 |
creativeml-openrail-m | ['text-to-image'] | false | avatar-jsjessy-low-facetuned-650 Dreambooth model trained by eicu with [Hugging Face Dreambooth Training Space](https://huggingface.co/spaces/multimodalart/dreambooth-training) with the v1-5 base model. You can run your new concept via `diffusers` [Colab Notebook for Inference](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_dreambooth_inference.ipynb). Don't forget to use the concept prompts! Sample pictures of: jsjessy (use that in your prompt)  | aae64c6ffce1d142cf1775e5b70bccc8 |
apache-2.0 | ['generated_from_trainer'] | false | distilbert-base-uncased-finetuned-emotion This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset. It achieves the following results on the evaluation set: - Loss: 0.2207 - Accuracy: 0.924 - F1: 0.9244 | 75515960223559b4065105dc37b7d511 |
apache-2.0 | ['generated_from_trainer'] | false | Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | |:-------------:|:-----:|:----:|:---------------:|:--------:|:------:| | 0.7914 | 1.0 | 250 | 0.3032 | 0.905 | 0.9030 | | 0.2379 | 2.0 | 500 | 0.2207 | 0.924 | 0.9244 | | 92f0319df1e88fa73be7d99adde86a13 |
mit | [] | false | Isabell Schulte - PVIII - 4tiles - 6000steps on Stable Diffusion This is the `<isabell-schulte-p8-style-4tiles-6000s>` concept taught to Stable Diffusion via Textual Inversion. You can load this concept into the [Stable Conceptualizer](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/stable_conceptualizer_inference.ipynb) notebook. You can also train your own concepts and load them into the concept libraries using [this notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_textual_inversion_training.ipynb). Here is the new concept you will be able to use as a `style`:     | d8f8cf9a28b24ed2f4b2c3d740c20fcc |
mit | ['keytotext', 'k2t', 'Keywords to Sentences'] | false | keytotext  The idea is to build a model which takes keywords as input and generates sentences as output. | d19fb42d01f2ceeabd270200b58b48d5 |
mit | ['keytotext', 'k2t', 'Keywords to Sentences'] | false | Keytotext is powered by Huggingface 🤗 [](https://pypi.org/project/keytotext/) [](https://pepy.tech/project/keytotext) [](https://colab.research.google.com/github/gagan3012/keytotext/blob/master/Examples/K2T.ipynb) [](https://share.streamlit.io/gagan3012/keytotext/UI/app.py) | 2d4ccac4a8379a4cc7d5d7b9e45059d3 |
mit | ['keytotext', 'k2t', 'Keywords to Sentences'] | false | Model: Keytotext is based on the Amazing T5 Model: - `k2t`: [Model](https://huggingface.co/gagan3012/k2t) - `k2t-tiny`: [Model](https://huggingface.co/gagan3012/k2t-tiny) - `k2t-base`: [Model](https://huggingface.co/gagan3012/k2t-base) Training Notebooks can be found in the [`Training Notebooks`](https://github.com/gagan3012/keytotext/tree/master/Training%20Notebooks) Folder | fd9c40e551ebb6b271f6b4bce1b6ccc0 |
mit | ['keytotext', 'k2t', 'Keywords to Sentences'] | false | Usage: Example usage: [](https://colab.research.google.com/github/gagan3012/keytotext/blob/master/Examples/K2T.ipynb) Example Notebooks can be found in the [`Notebooks`](https://github.com/gagan3012/keytotext/tree/master/Examples) Folder ``` pip install keytotext ```  | bb4468c576885abf5f21a7ecce41cd8c |
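For completeness, a minimal usage sketch after `pip install keytotext`. The `pipeline` entry point and the keyword-list call are assumptions based on the project README; verify them against the linked example notebooks:

```python
from keytotext import pipeline  # assumed entry point; see the project README

nlp = pipeline("k2t-base")  # loads gagan3012/k2t-base from the Hub
print(nlp(["India", "wedding", "food"]))  # keywords in, one sentence out
```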
mit | ['keytotext', 'k2t', 'Keywords to Sentences'] | false | UI: UI: [](https://share.streamlit.io/gagan3012/keytotext/UI/app.py) ``` pip install streamlit-tags ``` This uses a custom streamlit component built by me: [GitHub](https://github.com/gagan3012/streamlit-tags)  | 98f08a7630bb4d69240ad106b33115d2 |
apache-2.0 | ['generated_from_trainer'] | false | distilbert-base-uncased-finetuned-amazon-review This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the amazon_reviews_multi dataset. It achieves the following results on the evaluation set: - Loss: 1.3494 - Accuracy: 0.693 - F1: 0.7003 - Precision: 0.7095 - Recall: 0.693 | fc04c8342d8792113e0b8aec518f5eea |
apache-2.0 | ['generated_from_trainer'] | false | Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Precision | Recall | |:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|:---------:|:------:| | No log | 0.5 | 500 | 0.8287 | 0.7104 | 0.7120 | 0.7152 | 0.7104 | | 0.4238 | 1.0 | 1000 | 0.8917 | 0.7094 | 0.6989 | 0.6917 | 0.7094 | | 0.4238 | 1.5 | 1500 | 0.9367 | 0.6884 | 0.6983 | 0.7151 | 0.6884 | | 0.3152 | 2.0 | 2000 | 0.9845 | 0.7116 | 0.7144 | 0.7176 | 0.7116 | | 0.3152 | 2.5 | 2500 | 1.0752 | 0.6814 | 0.6968 | 0.7232 | 0.6814 | | 0.2454 | 3.0 | 3000 | 1.1215 | 0.6918 | 0.6954 | 0.7068 | 0.6918 | | 0.2454 | 3.5 | 3500 | 1.2905 | 0.6976 | 0.7048 | 0.7138 | 0.6976 | | 0.1989 | 4.0 | 4000 | 1.2938 | 0.694 | 0.7016 | 0.7113 | 0.694 | | 0.1989 | 4.5 | 4500 | 1.3623 | 0.6972 | 0.7014 | 0.7062 | 0.6972 | | 0.1746 | 5.0 | 5000 | 1.3494 | 0.693 | 0.7003 | 0.7095 | 0.693 | | c2f1e3e24dd56419f08ee9d557a47774 |
apache-2.0 | ['generated_from_trainer'] | false | bert-base-uncased-wnli This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the GLUE WNLI dataset. It achieves the following results on the evaluation set: - Loss: 0.6968 - Accuracy: 0.4789 | 273bb7daa5fa5f59d2cc8ec30dc1b68f |
apache-2.0 | ['generated_from_trainer'] | false | Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 0.7192 | 1.0 | 5 | 0.6968 | 0.4789 | | 0.6928 | 2.0 | 10 | 0.7003 | 0.2676 | | 0.6921 | 3.0 | 15 | 0.7057 | 0.5211 | | 0.6931 | 4.0 | 20 | 0.7282 | 0.3944 | | 0.6922 | 5.0 | 25 | 0.7579 | 0.2535 | | 0.68 | 6.0 | 30 | 0.8314 | 0.2254 | | 0.6652 | 7.0 | 35 | 0.8990 | 0.1831 | | 0.627 | 8.0 | 40 | 1.0187 | 0.2254 | | 27841a9f43454b9b55c05e58175113bf |
mit | [] | false | This repository includes the files required to run the `BioAssays Semantification` ORKG-NLP service. Please check [this article](https://orkg-nlp-pypi.readthedocs.io/en/latest/services/services.html) for more details about the service. The [Scikit-Learn](https://scikit-learn.org/stable/) models are converted using [skl2onnx](https://github.com/onnx/sklearn-onnx) and may not include all original scikit-learn functionalities. | b372bc42f61060386754c2fe148cf87f |
apache-2.0 | [] | false | Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-06 - train_batch_size: 16 - eval_batch_size: 16 - gradient_accumulation_steps: 1 - optimizer: AdamW with betas=(None, None), weight_decay=None and epsilon=None - lr_scheduler: None - lr_warmup_steps: 500 - ema_inv_gamma: None - mixed_precision: fp16 | d083d4ead27e3a8f9238e8b46504db96 |
other | ['stable-diffusion', 'text-to-image'] | false | Cool Japan Diffusion 2.1.0 Beta Model Card  [Notice: from January 10, 2023, China will impose legal restrictions on image-generating AI.](http://www.cac.gov.cn/2022-12/11/c_1672221949318230.htm) (A warning for people inside China.) English version is [here](README_en.md). | c8c8652e0ba51d07a1be4fbd9c2d1622 |
other | ['stable-diffusion', 'text-to-image'] | false | About the license: the license is simply the original CreativeML Open RAIL++-M License with a prohibition on commercial use (with exceptions) added. The prohibition was added out of concern that the model could harm the creative industry. If this concern is dispelled, the next version will return to the original license and allow commercial use. A Japanese translation of the original license is available [here](https://qiita.com/robitan/items/887d9f3153963114823d). If you are at a for-profit company, please consult your legal department. Hobby users should be fine as long as they use common sense. Note that, as the license states, any modification of this model must inherit this license. | ac7de3d3edb25c0f47bedf903a56efa4 |
other | ['stable-diffusion', 'text-to-image'] | false | On law and ethics: this model was created in Japan, so Japanese law applies. We maintain that training the model is legal under Article 30-4 of the Copyright Act, and that distributing it constitutes neither a direct offense nor aiding and abetting under the Copyright Act or Article 175 of the Penal Code; see attorney Kakinuma's [opinion](https://twitter.com/tka0120/status/1601483633436393473?s=20&t=yvM9EX0Em-_7lh8NJln3IQ) for details. As the license states, however, please handle the model's outputs in accordance with the relevant laws. That said, the author does consider distributing this model ethically questionable, because the authors of the works used for training did not give permission. Legally, their permission is not required for training, just as it is not for search engines. Please therefore regard this release as also serving to examine the ethical, not just the legal, side of the matter. | ec91f0279db1d39ef6a5c89b7e9d190f |
other | ['stable-diffusion', 'text-to-image'] | false | Usage: to try the model casually, enter a prompt in the text form at the upper right (on a PC), or scroll back up and generate (on a smartphone). Detailed instructions for handling this model are in [this manual](https://alfredplpl.hatenablog.com/entry/2022/12/30/102636). The model can be downloaded from [here](https://huggingface.co/aipicasso/cool-japan-diffusion-2-1-0-beta/resolve/main/v2-1-0-beta.ckpt). The remainder is a translation of the standard model card. | 9514e8b9612be1483dbf1e925e80f1fa |
other | ['stable-diffusion', 'text-to-image'] | false | Model details - **Developed by:** Robin Rombach, Patrick Esser, Alfred Increment - **Model type:** diffusion-model-based text-to-image generation model - **Language:** Japanese - **License:** CreativeML Open RAIL++-M-NC License - **Model description:** this model can generate appropriate images from prompts. The algorithms are the [Latent Diffusion Model](https://arxiv.org/abs/2112.10752) and [OpenCLIP-ViT/H](https://github.com/mlfoundations/open_clip). - **Notes:** - **References:** @InProceedings{Rombach_2022_CVPR, author = {Rombach, Robin and Blattmann, Andreas and Lorenz, Dominik and Esser, Patrick and Ommer, Bj\"orn}, title = {High-Resolution Image Synthesis With Latent Diffusion Models}, booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)}, month = {June}, year = {2022}, pages = {10684-10695} } | 282fbbac6eeabff309e699954035c520 |
other | ['stable-diffusion', 'text-to-image'] | false | With Diffusers: use [🤗's Diffusers library](https://github.com/huggingface/diffusers). First, run the following script to install the libraries: ```bash pip install --upgrade git+https://github.com/huggingface/diffusers.git transformers accelerate scipy ``` Then run the following script to generate images: ```python from diffusers import StableDiffusionPipeline, EulerDiscreteScheduler import torch model_id = "aipicasso/cool-japan-diffusion-2-1-0-beta" scheduler = EulerDiscreteScheduler.from_pretrained(model_id, subfolder="scheduler") pipe = StableDiffusionPipeline.from_pretrained(model_id, scheduler=scheduler, torch_dtype=torch.float16) pipe = pipe.to("cuda") prompt = "anime, a portrait of a girl with black short hair and red eyes, kimono, full color illustration, official art, 4k, detailed" negative_prompt="low quality, bad face, bad anatomy, bad hand, lowres, jpeg artifacts, 2d, 3d, cg, text" image = pipe(prompt,negative_prompt=negative_prompt).images[0] image.save("girl.png") ``` **Notes**: - [xformers](https://github.com/facebookresearch/xformers) reportedly speeds up generation. - If your GPU has little memory, use `pipe.enable_attention_slicing()`. | f3969114f5ed7057f6d5ae64fd04465e |
other | ['stable-diffusion', 'text-to-image'] | false | Intended uses - Contests - Submissions to the [AI Art Grand Prix](https://www.aiartgrandprix.com/) - All data used for fine-tuning will be disclosed so that judges can confirm entries meet the judging criteria, and we will apply for confirmation in advance. - If you have requests concerning contests, let me know via the Hugging Face Community. - News coverage of image-generation AI - Allowed for commercial companies as well as public broadcasters - We judged that the public's right to know about image-synthesis AI does not harm the creative industry, and we respect freedom of the press. - Introducing Cool Japan - Explaining to people from other countries what Cool Japan is. - Many international students come to Japan drawn by Cool Japan, and Alfred Increment feels they are often disappointed to learn that Cool Japan is considered "uncool" inside Japan. Please take more pride in the parts of your own culture that people abroad admire. - Research and development - Using the model on Discord - Prompt engineering - Fine-tuning (also called additional training), e.g. DreamBooth - Merging with other models - Studying how well the Latent Diffusion Model suits Cool Japan content - Measuring this model's performance with metrics such as FID - Verifying with checksums or hash functions that this model is independent of models other than Stable Diffusion - Education - Graduation works by art-college and vocational-school students - University students' theses and course projects - Teachers explaining the current state of image-generation AI - Self-expression - Expressing your feelings and thoughts on social media - Uses listed in the Hugging Face Community - Please ask in Japanese or English | d7bcae69401104d19d77e2585efe9e8d |
other | ['stable-diffusion', 'text-to-image'] | false | Prohibited and malicious uses - Do not publish digital forgeries ([Digital Forgery](https://arxiv.org/abs/2212.03860)); this may violate copyright law - In particular, do not publish existing characters; this may violate copyright law - Note that [characters not in the training data can reportedly be generated](https://twitter.com/ThePioneerJPnew/status/1609074173892235264?s=20&t=-rY1ufzNeIDT3Fm5YdME6g) as well. (That tweet itself is permitted as research.) - Do not run Image-to-Image on other people's work without permission; this may violate copyright law - Do not distribute obscene material; this may violate Article 175 of the Penal Code - Do not ignore what the industry regards as basic etiquette - Do not present things not based on fact as if they were fact; this may constitute obstruction of business - Fake news | 852ae056c0480f6879a85f8053f369f9 |
other | ['stable-diffusion', 'text-to-image'] | false | Training **Training data** Stable Diffusion was fine-tuned mainly on the following data: - For the VAE: 600,000 items compliant with Japanese domestic law, excluding unauthorized-repost sites such as Danbooru (an unlimited number of images produced via data augmentation) - For the U-Net: 400,000 pairs compliant with Japanese domestic law, excluding unauthorized-repost sites such as Danbooru **Training process** The VAE and U-Net of Stable Diffusion were fine-tuned. - **Hardware:** RTX 3090 - **Optimizer:** AdamW - **Gradient accumulations:** 1 - **Batch size:** 1 | 33ad0eaeb920ff687173936b884ffce5 |
other | ['stable-diffusion', 'text-to-image'] | false | References @InProceedings{Rombach_2022_CVPR, author = {Rombach, Robin and Blattmann, Andreas and Lorenz, Dominik and Esser, Patrick and Ommer, Bj\"orn}, title = {High-Resolution Image Synthesis With Latent Diffusion Models}, booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)}, month = {June}, year = {2022}, pages = {10684-10695} } *This model card was written by Alfred Increment based on [Stable Diffusion v2](https://huggingface.co/stabilityai/stable-diffusion-2/raw/main/README.md). | ece3b936278e3964fd137668b56b89ac |
apache-2.0 | ['generated_from_trainer'] | false | Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 2.7086 | 1.0 | 157 | 2.4898 | | 2.5796 | 2.0 | 314 | 2.4230 | | 2.5269 | 3.0 | 471 | 2.4354 | | c462763aff12f4d415f7d9bc6bc78227 |
mit | ['generated_from_trainer'] | false | finetuning-customer-sentiment-model-300-samples This model is a fine-tuned version of [roberta-base](https://huggingface.co/roberta-base) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.5949 - Accuracy: 0.7558 | 1b3991da9207c4ab2d78a4c802728504 |
apache-2.0 | ['exbert', 'multiberts', 'multiberts-seed-1'] | false | MultiBERTs Seed 1 Checkpoint 400k (uncased) This is the seed-1 MultiBERTs (pretrained BERT) model at the intermediate 400k-step checkpoint, pretrained on English using a masked language modeling (MLM) objective. It was introduced in [this paper](https://arxiv.org/pdf/2106.16163.pdf) and first released in [this repository](https://github.com/google-research/language/tree/master/language/multiberts). This is an intermediate checkpoint. The final checkpoint can be found at [multiberts-seed-1](https://hf.co/multberts-seed-1). This model is uncased: it does not make a difference between english and English. Disclaimer: The team releasing MultiBERTs did not write a model card for this model, so this model card has been written by [gchhablani](https://hf.co/gchhablani). | 74dcc763da2945d5d1ecbcdf08c5512e |
apache-2.0 | ['exbert', 'multiberts', 'multiberts-seed-1'] | false | How to use Here is how to use this model to get the features of a given text in PyTorch: ```python from transformers import BertTokenizer, BertModel tokenizer = BertTokenizer.from_pretrained('multiberts-seed-1-400k') model = BertModel.from_pretrained("multiberts-seed-1-400k") text = "Replace me by any text you'd like." encoded_input = tokenizer(text, return_tensors='pt') output = model(**encoded_input) ``` | 1fcdbf7185f610a6e3aaf7d0bd8dff2c |
mit | ['audio', 'music', 'generation', 'tensorflow'] | false | Model provided by: nakas Pretrained Nes_Acoustic_More_Energy_Vocals model for the [Musika system](https://github.com/marcoppasini/musika) for fast infinite waveform music generation. Introduced in [this paper](https://arxiv.org/abs/2208.08706). | 5f1fdbbe3db1fd13f0f44d4d05290cd8 |
mit | ['audio', 'music', 'generation', 'tensorflow'] | false | How to use You can generate music from this pretrained Nes_Acoustic_More_Energy_Vocals model using the notebook available [here](https://colab.research.google.com/drive/1HJWliBXPi-Xlx3gY8cjFI5-xaZgrTD7r). | e180e9862304b0909350136edd98cd3b |
apache-2.0 | [] | false | BigBird large model BigBird is a sparse-attention-based transformer which extends Transformer-based models, such as BERT, to much longer sequences. Moreover, BigBird comes with a theoretical understanding of the capabilities of a complete transformer that the sparse model can handle. It is a model pretrained on English using a masked language modeling (MLM) objective. It was introduced in this [paper](https://arxiv.org/abs/2007.14062) and first released in this [repository](https://github.com/google-research/bigbird). Disclaimer: The team releasing BigBird did not write a model card for this model, so this model card has been written by the Hugging Face team. | 062242e2548d947bce661a354fdd4458 |
apache-2.0 | [] | false | Model description BigBird relies on **block sparse attention** instead of normal attention (i.e. BERT's attention) and can handle sequences up to a length of 4096 at a much lower compute cost compared to BERT. It has achieved SOTA on various tasks involving very long sequences, such as long-document summarization and question answering with long contexts. | 213f83a14cc78b93f89cf9947f6c34ce |
apache-2.0 | [] | false | you can change `block_size` & `num_random_blocks` like this: model = BigBirdModel.from_pretrained("google/bigbird-roberta-large", block_size=16, num_random_blocks=2) text = "Replace me by any text you'd like." encoded_input = tokenizer(text, return_tensors='pt') output = model(**encoded_input) ``` | dbcbf0a181e90494ef7a6c6910cc2cda |
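The fragment above lost its imports and opening code fence when the README was split at a `#` comment. A self-contained sketch of the same idea, reducing `block_size` and `num_random_blocks` to trade accuracy for speed and memory:

```python
from transformers import AutoTokenizer, BigBirdModel

tokenizer = AutoTokenizer.from_pretrained("google/bigbird-roberta-large")
# override the sparse-attention layout at load time
model = BigBirdModel.from_pretrained(
    "google/bigbird-roberta-large", block_size=16, num_random_blocks=2
)

text = "Replace me by any text you'd like."
encoded_input = tokenizer(text, return_tensors="pt")
output = model(**encoded_input)
```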
apache-2.0 | [] | false | Training Data This model is pre-trained on four publicly available datasets: **Books**, **CC-News**, **Stories** and **Wikipedia**. It uses the same sentencepiece vocabulary as RoBERTa (which is in turn borrowed from GPT2). | 17c06c1d685927b39093e5aa686ce580 |
apache-2.0 | [] | false | Training Procedure Documents longer than 4096 tokens were split into multiple documents, and documents much shorter than 4096 were joined. Following the original BERT training, 15% of tokens were masked and the model was trained to predict them. The model was warm-started from RoBERTa's checkpoint. | fa7add8003d3822ea89b532537c8a3fc |
apache-2.0 | [] | false | BibTeX entry and citation info ```tex @misc{zaheer2021big, title={Big Bird: Transformers for Longer Sequences}, author={Manzil Zaheer and Guru Guruganesh and Avinava Dubey and Joshua Ainslie and Chris Alberti and Santiago Ontanon and Philip Pham and Anirudh Ravula and Qifan Wang and Li Yang and Amr Ahmed}, year={2021}, eprint={2007.14062}, archivePrefix={arXiv}, primaryClass={cs.LG} } ``` | 90422e4cfd35dc6608bfd1c2172ea771 |
apache-2.0 | ['exbert', 'gpt2'] | false | GPTalian This is a GPT2 model of Italian regional languages trained on [collections of Italian "dialect poetry"](http://dialectpoetry.com) by Luigi Bonaffini. This is a multilingual model. Italians use the word "dialect" to describe their regional languages, but they are separate languages. And there's a lot of English in this dataset too. The challenge of this project is to train a model to write the languages of Italy. For those who do not know Italian, here's some (lowercase) text that you can type into the API box: - oggi si parla il dialetto - la sua poesia viene di - ma non sempre trova | 1edd1f34a204f6e37b27dce5a72a8ced |
apache-2.0 | ['translation'] | false | opus-mt-niu-sv * source languages: niu * target languages: sv * OPUS readme: [niu-sv](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/niu-sv/README.md) * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/niu-sv/opus-2020-01-16.zip) * test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/niu-sv/opus-2020-01-16.test.txt) * test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/niu-sv/opus-2020-01-16.eval.txt) | 42f7fdbb7daec69d180216912a8c095e |
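The card lists the training setup but no inference code. Marian-based OPUS-MT checkpoints are normally used as below; a minimal sketch, assuming the checkpoint is published as `Helsinki-NLP/opus-mt-niu-sv` and that the example Niuean input is just a placeholder:

```python
from transformers import MarianMTModel, MarianTokenizer

model_name = "Helsinki-NLP/opus-mt-niu-sv"
tokenizer = MarianTokenizer.from_pretrained(model_name)
model = MarianMTModel.from_pretrained(model_name)

batch = tokenizer(["Fakaalofa atu"], return_tensors="pt", padding=True)  # placeholder Niuean text
translated = model.generate(**batch)
print(tokenizer.batch_decode(translated, skip_special_tokens=True))  # Swedish output
```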
apache-2.0 | ['generated_from_trainer'] | false | wav2vec2-large-xls-r-300m-turkish-colab This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the common_voice dataset. It achieves the following results on the evaluation set: - Loss: 0.3864 - Wer: 0.3570 | a19c0dd2a54589627ab58b383422ff9c |
apache-2.0 | ['generated_from_trainer'] | false | Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:----:|:---------------:|:------:| | 3.8302 | 3.67 | 400 | 0.6702 | 0.6903 | | 0.4098 | 7.34 | 800 | 0.4574 | 0.4939 | | 0.1908 | 11.01 | 1200 | 0.4350 | 0.4557 | | 0.1279 | 14.68 | 1600 | 0.4204 | 0.4213 | | 0.0966 | 18.35 | 2000 | 0.4238 | 0.3991 | | 0.0782 | 22.02 | 2400 | 0.3822 | 0.3906 | | 0.0613 | 25.69 | 2800 | 0.3982 | 0.3714 | | 0.0477 | 29.36 | 3200 | 0.3864 | 0.3570 | | 706f5839716020a50dfa1fd9395a49d4 |
mit | ['generated_from_trainer'] | false | kind_torvalds This model was trained from scratch on the tomekkorbak/pii-pile-chunk3-0-50000, the tomekkorbak/pii-pile-chunk3-50000-100000, the tomekkorbak/pii-pile-chunk3-100000-150000, the tomekkorbak/pii-pile-chunk3-150000-200000, the tomekkorbak/pii-pile-chunk3-200000-250000, the tomekkorbak/pii-pile-chunk3-250000-300000, the tomekkorbak/pii-pile-chunk3-300000-350000, the tomekkorbak/pii-pile-chunk3-350000-400000, the tomekkorbak/pii-pile-chunk3-400000-450000, the tomekkorbak/pii-pile-chunk3-450000-500000, the tomekkorbak/pii-pile-chunk3-500000-550000, the tomekkorbak/pii-pile-chunk3-550000-600000, the tomekkorbak/pii-pile-chunk3-600000-650000, the tomekkorbak/pii-pile-chunk3-650000-700000, the tomekkorbak/pii-pile-chunk3-700000-750000, the tomekkorbak/pii-pile-chunk3-750000-800000, the tomekkorbak/pii-pile-chunk3-800000-850000, the tomekkorbak/pii-pile-chunk3-850000-900000, the tomekkorbak/pii-pile-chunk3-900000-950000, the tomekkorbak/pii-pile-chunk3-950000-1000000, the tomekkorbak/pii-pile-chunk3-1000000-1050000, the tomekkorbak/pii-pile-chunk3-1050000-1100000, the tomekkorbak/pii-pile-chunk3-1100000-1150000, the tomekkorbak/pii-pile-chunk3-1150000-1200000, the tomekkorbak/pii-pile-chunk3-1200000-1250000, the tomekkorbak/pii-pile-chunk3-1250000-1300000, the tomekkorbak/pii-pile-chunk3-1300000-1350000, the tomekkorbak/pii-pile-chunk3-1350000-1400000, the tomekkorbak/pii-pile-chunk3-1400000-1450000, the tomekkorbak/pii-pile-chunk3-1450000-1500000, the tomekkorbak/pii-pile-chunk3-1500000-1550000, the tomekkorbak/pii-pile-chunk3-1550000-1600000, the tomekkorbak/pii-pile-chunk3-1600000-1650000, the tomekkorbak/pii-pile-chunk3-1650000-1700000, the tomekkorbak/pii-pile-chunk3-1700000-1750000, the tomekkorbak/pii-pile-chunk3-1750000-1800000, the tomekkorbak/pii-pile-chunk3-1800000-1850000, the tomekkorbak/pii-pile-chunk3-1850000-1900000 and the tomekkorbak/pii-pile-chunk3-1900000-1950000 datasets. | 6146326e85b54d294c84391009cbddc6 |
mit | ['generated_from_trainer'] | false | Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 16 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 8 - total_train_batch_size: 128 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.01 - training_steps: 12588 - mixed_precision_training: Native AMP | 8b6369fce62a72674b5e348a4a8f26cb |
mit | ['generated_from_trainer'] | false | Full config {'dataset': {'datasets': ['tomekkorbak/pii-pile-chunk3-0-50000', 'tomekkorbak/pii-pile-chunk3-50000-100000', 'tomekkorbak/pii-pile-chunk3-100000-150000', 'tomekkorbak/pii-pile-chunk3-150000-200000', 'tomekkorbak/pii-pile-chunk3-200000-250000', 'tomekkorbak/pii-pile-chunk3-250000-300000', 'tomekkorbak/pii-pile-chunk3-300000-350000', 'tomekkorbak/pii-pile-chunk3-350000-400000', 'tomekkorbak/pii-pile-chunk3-400000-450000', 'tomekkorbak/pii-pile-chunk3-450000-500000', 'tomekkorbak/pii-pile-chunk3-500000-550000', 'tomekkorbak/pii-pile-chunk3-550000-600000', 'tomekkorbak/pii-pile-chunk3-600000-650000', 'tomekkorbak/pii-pile-chunk3-650000-700000', 'tomekkorbak/pii-pile-chunk3-700000-750000', 'tomekkorbak/pii-pile-chunk3-750000-800000', 'tomekkorbak/pii-pile-chunk3-800000-850000', 'tomekkorbak/pii-pile-chunk3-850000-900000', 'tomekkorbak/pii-pile-chunk3-900000-950000', 'tomekkorbak/pii-pile-chunk3-950000-1000000', 'tomekkorbak/pii-pile-chunk3-1000000-1050000', 'tomekkorbak/pii-pile-chunk3-1050000-1100000', 'tomekkorbak/pii-pile-chunk3-1100000-1150000', 'tomekkorbak/pii-pile-chunk3-1150000-1200000', 'tomekkorbak/pii-pile-chunk3-1200000-1250000', 'tomekkorbak/pii-pile-chunk3-1250000-1300000', 'tomekkorbak/pii-pile-chunk3-1300000-1350000', 'tomekkorbak/pii-pile-chunk3-1350000-1400000', 'tomekkorbak/pii-pile-chunk3-1400000-1450000', 'tomekkorbak/pii-pile-chunk3-1450000-1500000', 'tomekkorbak/pii-pile-chunk3-1500000-1550000', 'tomekkorbak/pii-pile-chunk3-1550000-1600000', 'tomekkorbak/pii-pile-chunk3-1600000-1650000', 'tomekkorbak/pii-pile-chunk3-1650000-1700000', 'tomekkorbak/pii-pile-chunk3-1700000-1750000', 'tomekkorbak/pii-pile-chunk3-1750000-1800000', 'tomekkorbak/pii-pile-chunk3-1800000-1850000', 'tomekkorbak/pii-pile-chunk3-1850000-1900000', 'tomekkorbak/pii-pile-chunk3-1900000-1950000'], 'filter_threshold': 0.000286, 'is_split_by_sentences': True, 'skip_tokens': 1649999872}, 'generation': {'force_call_on': [25177], 'metrics_configs': [{}, {'n': 1}, {'n': 2}, {'n': 5}], 'scenario_configs': [{'generate_kwargs': {'do_sample': True, 'max_length': 128, 'min_length': 10, 'temperature': 0.7, 'top_k': 0, 'top_p': 0.9}, 'name': 'unconditional', 'num_samples': 2048}], 'scorer_config': {}}, 'kl_gpt3_callback': {'force_call_on': [25177], 'max_tokens': 64, 'num_samples': 4096}, 'model': {'from_scratch': False, 'gpt2_config_kwargs': {'reorder_and_upcast_attn': True, 'scale_attn_by': True}, 'model_kwargs': {'revision': '9e6c78543a6ff1e4089002c38864d5a9cf71ec90'}, 'path_or_name': 'tomekkorbak/nervous_wozniak'}, 'objective': {'name': 'MLE'}, 'tokenizer': {'path_or_name': 'gpt2'}, 'training': {'dataloader_num_workers': 0, 'effective_batch_size': 128, 'evaluation_strategy': 'no', 'fp16': True, 'hub_model_id': 'kind_torvalds', 'hub_strategy': 'all_checkpoints', 'learning_rate': 0.0001, 'logging_first_step': True, 'logging_steps': 1, 'num_tokens': 3300000000, 'output_dir': 'training_output2', 'per_device_train_batch_size': 16, 'push_to_hub': True, 'remove_unused_columns': False, 'save_steps': 25177, 'save_strategy': 'steps', 'seed': 42, 'tokens_already_seen': 1649999872, 'warmup_ratio': 0.01, 'weight_decay': 0.1}} | 9c9057fcae6eb207c18a4937145fa6cf |
apache-2.0 | ['generated_from_trainer'] | false | distilbert-base-uncased-finetuned-ner This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the conll2003 dataset. It achieves the following results on the evaluation set: - Loss: 0.0606 - Precision: 0.9277 - Recall: 0.9385 - F1: 0.9330 - Accuracy: 0.9844 | a2d51f3d97e6c8d019db8a03d753f24f |
apache-2.0 | ['generated_from_trainer'] | false | Training results | Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:| | 0.2454 | 1.0 | 878 | 0.0692 | 0.9106 | 0.9212 | 0.9159 | 0.9809 | | 0.0517 | 2.0 | 1756 | 0.0616 | 0.9203 | 0.9352 | 0.9277 | 0.9834 | | 0.0314 | 3.0 | 2634 | 0.0606 | 0.9277 | 0.9385 | 0.9330 | 0.9844 | | 6f11bb229da25e8e5de641a5ca9295e4 |
apache-2.0 | ['generated_from_trainer'] | false | small-mlm-rotten_tomatoes-custom-tokenizer This model is a fine-tuned version of [google/bert_uncased_L-4_H-512_A-8](https://huggingface.co/google/bert_uncased_L-4_H-512_A-8) on the None dataset. It achieves the following results on the evaluation set: - Loss: 7.0377 | f31df695b34ee6442cbd5478fb328c2d |
apache-2.0 | ['generated_from_trainer'] | false | Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 7.6287 | 0.47 | 500 | 7.2726 | | 7.0283 | 0.94 | 1000 | 7.0982 | | 6.7115 | 1.41 | 1500 | 6.9665 | | 6.695 | 1.87 | 2000 | 7.2285 | | 6.55 | 2.34 | 2500 | 6.9906 | | 6.4289 | 2.81 | 3000 | 7.0377 | | 8d9a041785a0545bbe72319c1887152e |
apache-2.0 | ['generated_from_trainer'] | false | distilbert-base-uncased-finetuned-indosquad-v2 This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 1.6650 | 0c1e585b68c799f5a3ce2af1cedd5c55 |
apache-2.0 | ['generated_from_trainer'] | false | Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 4 | 775eff61d9332071a264211b52521621 |
apache-2.0 | ['generated_from_trainer'] | false | Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:-----:|:---------------:| | 1.9015 | 1.0 | 9676 | 1.5706 | | 1.6438 | 2.0 | 19352 | 1.5926 | | 1.4714 | 3.0 | 29028 | 1.5253 | | 1.3486 | 4.0 | 38704 | 1.6650 | | 68ba237b909e0ac65cca09a15829ba4d |
apache-2.0 | ['generated_from_keras_callback'] | false | alk/mt5-small-finetuned-cnn_dailymail-en-es This model is a fine-tuned version of [google/mt5-small](https://huggingface.co/google/mt5-small) on an unknown dataset. It achieves the following results on the evaluation set: - Train Loss: 1.9490 - Validation Loss: 1.6920 - Epoch: 7 | 7639ca88d5467887794aca75091cef37 |
apache-2.0 | ['generated_from_keras_callback'] | false | Training hyperparameters The following hyperparameters were used during training: - optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 5.6e-05, 'decay_steps': 287112, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01} - training_precision: float32 | 21928da41eb5aace971abdc75d8febff |
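The serialized Keras optimizer above (AdamWeightDecay with a linear PolynomialDecay from 5.6e-5 to 0.0 over 287112 steps and weight decay 0.01) matches what transformers' `create_optimizer` helper builds. A sketch of reconstructing it, assuming no warmup since none appears in the logged config:

```python
from transformers import create_optimizer

optimizer, lr_schedule = create_optimizer(
    init_lr=5.6e-5,          # initial_learning_rate
    num_train_steps=287112,  # decay_steps; end_learning_rate=0.0, power=1.0 -> linear decay
    num_warmup_steps=0,      # assumption: the logged config shows no warmup
    weight_decay_rate=0.01,
)
```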
apache-2.0 | ['generated_from_keras_callback'] | false | Training results | Train Loss | Validation Loss | Epoch | |:----------:|:---------------:|:-----:| | 2.9445 | 1.9068 | 0 | | 2.2439 | 1.8106 | 1 | | 2.1301 | 1.7582 | 2 | | 2.0643 | 1.7378 | 3 | | 2.0191 | 1.7181 | 4 | | 1.9870 | 1.7033 | 5 | | 1.9646 | 1.7015 | 6 | | 1.9490 | 1.6920 | 7 | | 227f1b9d6e6e6895a46364ca240e92e2 |
apache-2.0 | ['generated_from_trainer'] | false | distilbert_add_GLUE_Experiment_stsb This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the GLUE STSB dataset. It achieves the following results on the evaluation set: - Loss: 2.2770 - Pearson: 0.0450 - Spearmanr: 0.0447 - Combined Score: 0.0448 | 3f50e1f3b59895d9cd2c0933e33a5b5a |
apache-2.0 | ['generated_from_trainer'] | false | Training results | Training Loss | Epoch | Step | Validation Loss | Pearson | Spearmanr | Combined Score | |:-------------:|:-----:|:----:|:---------------:|:-------:|:---------:|:--------------:| | 4.11 | 1.0 | 23 | 2.2770 | 0.0450 | 0.0447 | 0.0448 | | 2.2155 | 2.0 | 46 | 2.4336 | 0.0499 | 0.0451 | 0.0475 | | 2.1634 | 3.0 | 69 | 2.3207 | 0.0729 | 0.0634 | 0.0681 | | 2.0618 | 4.0 | 92 | 2.6080 | 0.0787 | 0.0783 | 0.0785 | | 1.8586 | 5.0 | 115 | 2.4988 | 0.1020 | 0.1017 | 0.1018 | | 1.6977 | 6.0 | 138 | 2.6166 | 0.1187 | 0.1137 | 0.1162 | | a7ddb837967e2ea39e767f6fba24d0a4 |