Columns: license — string (2–30 chars) · tags — string (2–513 chars) · is_nc — bool (1 class) · readme_section — string (201–597k chars) · hash — string (32 chars)
apache-2.0
['exbert', 'multiberts', 'multiberts-seed-4']
false
Model description MultiBERTs models are transformer models pretrained on a large corpus of English data in a self-supervised fashion. This means they were pretrained on the raw texts only, with no humans labelling them in any way (which is why they can use lots of publicly available data), with an automatic process to generate inputs and labels from those texts. More precisely, they were pretrained with two objectives: - Masked language modeling (MLM): taking a sentence, the model randomly masks 15% of the words in the input, then runs the entire masked sentence through the model and has to predict the masked words. This is different from traditional recurrent neural networks (RNNs), which usually see the words one after the other, and from autoregressive models like GPT, which internally mask the future tokens. It allows the model to learn a bidirectional representation of the sentence. - Next sentence prediction (NSP): the model concatenates two masked sentences as inputs during pretraining. Sometimes they correspond to sentences that were next to each other in the original text, sometimes not. The model then has to predict whether the two sentences followed each other or not. This way, the model learns an inner representation of the English language that can then be used to extract features useful for downstream tasks: if you have a dataset of labeled sentences, for instance, you can train a standard classifier using the features produced by the MultiBERTs model as inputs.
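As a quick illustration of the MLM objective, here is a minimal sketch using the `fill-mask` pipeline (assuming the `multiberts-seed-4-400k` checkpoint identifier used in the usage section of this card):

```python
from transformers import pipeline

# Checkpoint id taken from the "How to use" section of this card.
unmasker = pipeline('fill-mask', model='multiberts-seed-4-400k')

# The model predicts the token behind [MASK] using both left and right context.
print(unmasker("Paris is the [MASK] of France."))
```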
b9144676e8c465278f177525e0f4e9c4
apache-2.0
['exbert', 'multiberts', 'multiberts-seed-4']
false
Intended uses & limitations You can use the raw model for either masked language modeling or next sentence prediction, but it's mostly intended to be fine-tuned on a downstream task. See the [model hub](https://huggingface.co/models?filter=multiberts) to look for fine-tuned versions on a task that interests you. Note that this model is primarily aimed at being fine-tuned on tasks that use the whole sentence (potentially masked) to make decisions, such as sequence classification, token classification or question answering. For tasks such as text generation you should look at model like GPT2.
2c9e2acab99923bd89b9f7fbf24877b6
apache-2.0
['exbert', 'multiberts', 'multiberts-seed-4']
false
How to use Here is how to use this model to get the features of a given text in PyTorch: ```python from transformers import BertTokenizer, BertModel tokenizer = BertTokenizer.from_pretrained("multiberts-seed-4-400k") model = BertModel.from_pretrained("multiberts-seed-4-400k") text = "Replace me by any text you'd like." encoded_input = tokenizer(text, return_tensors="pt") output = model(**encoded_input) ```
b30137ee912b0c75bd0b5e9d067b38cd
apache-2.0
['exbert', 'multiberts', 'multiberts-seed-4']
false
Limitations and bias Even if the training data used for this model could be characterized as fairly neutral, this model can have biased predictions. This bias will also affect all fine-tuned versions of this model. For an understanding of the bias of this particular checkpoint, please try it out with the snippet in the [Limitations and bias section](https://huggingface.co/bert-base-uncased#limitations-and-bias) of the bert-base-uncased model card.
8f7aabea8dd380816450317eb00edad1
apache-2.0
['exbert', 'multiberts', 'multiberts-seed-4']
false
Training data The MultiBERTs models were pretrained on [BookCorpus](https://yknzhu.wixsite.com/mbweb), a dataset consisting of 11,038 unpublished books and [English Wikipedia](https://en.wikipedia.org/wiki/English_Wikipedia) (excluding lists, tables and headers).
4bcefc062d45d28e6dd27f4349332ee9
apache-2.0
['exbert', 'multiberts', 'multiberts-seed-4']
false
Preprocessing The texts are lowercased and tokenized using WordPiece with a vocabulary size of 30,000. The inputs of the model are then of the form: ``` [CLS] Sentence A [SEP] Sentence B [SEP] ``` With probability 0.5, sentence A and sentence B correspond to two consecutive sentences in the original corpus; in the other cases, sentence B is another random sentence in the corpus. Note that what is considered a sentence here is a consecutive span of text usually longer than a single sentence. The only constraint is that the two "sentences" have a combined length of less than 512 tokens. The details of the masking procedure for each sentence are the following: - 15% of the tokens are masked. - In 80% of the cases, the masked tokens are replaced by `[MASK]`. - In 10% of the cases, the masked tokens are replaced by a random token (different from the one they replace). - In the 10% remaining cases, the masked tokens are left as is.
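A minimal sketch of that 80/10/10 masking rule applied to a list of token ids (illustrative code, not the original TF preprocessing pipeline):

```python
import random

def mask_tokens(token_ids, mask_id, vocab_size, mlm_prob=0.15):
    """Apply the BERT-style masking rule described above."""
    masked = list(token_ids)
    labels = [-100] * len(token_ids)  # -100 = position ignored by the loss
    for i, tok in enumerate(token_ids):
        if random.random() < mlm_prob:   # 15% of tokens are masked
            labels[i] = tok              # the model must predict the original token
            r = random.random()
            if r < 0.8:                  # 80% of cases: replace with [MASK]
                masked[i] = mask_id
            elif r < 0.9:                # 10% of cases: replace with a random token
                masked[i] = random.randrange(vocab_size)
            # remaining 10% of cases: leave the token unchanged
    return masked, labels
```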
4c490b75c72b569a8709cba84b95c1cf
apache-2.0
['exbert', 'multiberts', 'multiberts-seed-4']
false
Pretraining The full model was trained on 16 Cloud TPU v2 chips for two million steps with a batch size of 256. The sequence length was set to 512 throughout. The optimizer used is Adam with a learning rate of 1e-4, \\(\beta_{1} = 0.9\\) and \\(\beta_{2} = 0.999\\), a weight decay of 0.01, learning rate warmup for 10,000 steps and linear decay of the learning rate after.
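A hedged PyTorch equivalent of that optimizer and schedule (the original pretraining used TensorFlow on TPUs; the checkpoint id is taken from the usage section of this card):

```python
import torch
from transformers import BertModel, get_linear_schedule_with_warmup

model = BertModel.from_pretrained("multiberts-seed-4-400k")
# Adam with the stated betas, learning rate, and decoupled weight decay.
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4,
                              betas=(0.9, 0.999), weight_decay=0.01)
scheduler = get_linear_schedule_with_warmup(
    optimizer,
    num_warmup_steps=10_000,       # linear warmup for the first 10,000 steps
    num_training_steps=2_000_000,  # linear decay over the two million steps
)
```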
d267f3859be98191bffc7918a6b1e494
apache-2.0
['exbert', 'multiberts', 'multiberts-seed-4']
false
BibTeX entry and citation info ```bibtex @article{DBLP:journals/corr/abs-2106-16163, author = {Thibault Sellam and Steve Yadlowsky and Jason Wei and Naomi Saphra and Alexander D'Amour and Tal Linzen and Jasmijn Bastings and Iulia Turc and Jacob Eisenstein and Dipanjan Das and Ian Tenney and Ellie Pavlick}, title = {The MultiBERTs: {BERT} Reproductions for Robustness Analysis}, journal = {CoRR}, volume = {abs/2106.16163}, year = {2021}, url = {https://arxiv.org/abs/2106.16163}, eprinttype = {arXiv}, eprint = {2106.16163}, timestamp = {Mon, 05 Jul 2021 15:15:50 +0200}, biburl = {https://dblp.org/rec/journals/corr/abs-2106-16163.bib}, bibsource = {dblp computer science bibliography, https://dblp.org} } ``` <a href="https://huggingface.co/exbert/?model=multiberts"> <img width="300px" src="https://cdn-media.huggingface.co/exbert/button.png"> </a>
47741996fcf345d3254bb05d29f3e68b
mit
[]
false
Isabell Schulte - PVIII - 12tiles - 3000steps - Style on Stable Diffusion This is the `<isabell-schulte-p8-style-12tiles-3000s>` concept taught to Stable Diffusion via Textual Inversion. You can load this concept into the [Stable Conceptualizer](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/stable_conceptualizer_inference.ipynb) notebook. You can also train your own concepts and load them into the concept libraries using [this notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_textual_inversion_training.ipynb). Here is the new concept you will be able to use as a `style`: ![<isabell-schulte-p8-style-12tiles-3000s> 0](https://huggingface.co/sd-concepts-library/isabell-schulte-pviii-12tiles-3000steps-style/resolve/main/concept_images/3.jpeg) ![<isabell-schulte-p8-style-12tiles-3000s> 1](https://huggingface.co/sd-concepts-library/isabell-schulte-pviii-12tiles-3000steps-style/resolve/main/concept_images/6.jpeg) ![<isabell-schulte-p8-style-12tiles-3000s> 2](https://huggingface.co/sd-concepts-library/isabell-schulte-pviii-12tiles-3000steps-style/resolve/main/concept_images/0.jpeg) ![<isabell-schulte-p8-style-12tiles-3000s> 3](https://huggingface.co/sd-concepts-library/isabell-schulte-pviii-12tiles-3000steps-style/resolve/main/concept_images/7.jpeg) ![<isabell-schulte-p8-style-12tiles-3000s> 4](https://huggingface.co/sd-concepts-library/isabell-schulte-pviii-12tiles-3000steps-style/resolve/main/concept_images/5.jpeg) ![<isabell-schulte-p8-style-12tiles-3000s> 5](https://huggingface.co/sd-concepts-library/isabell-schulte-pviii-12tiles-3000steps-style/resolve/main/concept_images/8.jpeg) ![<isabell-schulte-p8-style-12tiles-3000s> 6](https://huggingface.co/sd-concepts-library/isabell-schulte-pviii-12tiles-3000steps-style/resolve/main/concept_images/9.jpeg) ![<isabell-schulte-p8-style-12tiles-3000s> 7](https://huggingface.co/sd-concepts-library/isabell-schulte-pviii-12tiles-3000steps-style/resolve/main/concept_images/1.jpeg) ![<isabell-schulte-p8-style-12tiles-3000s> 8](https://huggingface.co/sd-concepts-library/isabell-schulte-pviii-12tiles-3000steps-style/resolve/main/concept_images/10.jpeg) ![<isabell-schulte-p8-style-12tiles-3000s> 9](https://huggingface.co/sd-concepts-library/isabell-schulte-pviii-12tiles-3000steps-style/resolve/main/concept_images/2.jpeg) ![<isabell-schulte-p8-style-12tiles-3000s> 10](https://huggingface.co/sd-concepts-library/isabell-schulte-pviii-12tiles-3000steps-style/resolve/main/concept_images/11.jpeg) ![<isabell-schulte-p8-style-12tiles-3000s> 11](https://huggingface.co/sd-concepts-library/isabell-schulte-pviii-12tiles-3000steps-style/resolve/main/concept_images/4.jpeg)
0a97d675aeb7cbb2b732345b8f347227
apache-2.0
['generated_from_trainer']
false
resnet-50-finetuned-FER2013-0.003-CKPlus This model is a fine-tuned version of [Celal11/resnet-50-finetuned-FER2013-0.003](https://huggingface.co/Celal11/resnet-50-finetuned-FER2013-0.003) on the image_folder dataset. It achieves the following results on the evaluation set: - Loss: 0.0614 - Accuracy: 0.9848
1d803dc13de7f0bff412203034cc9115
apache-2.0
['generated_from_trainer']
false
Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.003 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 64 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 2
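These values map onto `transformers.TrainingArguments` roughly as follows (a sketch; the output directory name is hypothetical):

```python
from transformers import TrainingArguments

args = TrainingArguments(
    output_dir="resnet-50-finetuned-FER2013-0.003-CKPlus",  # hypothetical name
    learning_rate=3e-3,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    seed=42,
    gradient_accumulation_steps=4,  # 16 x 4 = total train batch size of 64
    lr_scheduler_type="linear",
    warmup_ratio=0.1,
    num_train_epochs=2,
)
```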
7a218e801530781860d7f58a0a9ebb19
apache-2.0
['generated_from_trainer']
false
Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 0.6689 | 0.97 | 27 | 0.1123 | 0.9797 | | 0.2929 | 1.97 | 54 | 0.0614 | 0.9848 |
a6783a5c2e3fb3e2183f75ec3237b36e
apache-2.0
['generated_from_trainer']
false
distilbert-base-uncased-finetuned-cola This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the glue dataset. It achieves the following results on the evaluation set: - Loss: 0.8024 - Matthews Correlation: 0.5275
50f40b68cb75eed645106da79cbe7377
apache-2.0
['generated_from_trainer']
false
Training results | Training Loss | Epoch | Step | Validation Loss | Matthews Correlation | |:-------------:|:-----:|:----:|:---------------:|:--------------------:| | 0.5261 | 1.0 | 535 | 0.5320 | 0.4152 | | 0.3482 | 2.0 | 1070 | 0.4960 | 0.5049 | | 0.2364 | 3.0 | 1605 | 0.6204 | 0.5123 | | 0.186 | 4.0 | 2140 | 0.7605 | 0.5232 | | 0.139 | 5.0 | 2675 | 0.8024 | 0.5275 |
00557f6311c7d6817eedbc211600b7f4
mit
[]
false
Model description This is a GPT2-small model pre-trained on Indonesian Wikipedia using a causal language modeling (CLM) objective. The model is uncased: it does not make a difference between indonesia and Indonesia. This is one of several language models that have been pre-trained on Indonesian datasets. More details about its usage on downstream tasks (text classification, text generation, etc.) are available at [Transformer based Indonesian Language Models](https://github.com/cahya-wirawan/indonesian-language-models/tree/master/Transformers)
0afe373ed929ceb60199d7d617eefc39
mit
[]
false
How to use You can use this model directly with a pipeline for text generation. Since the generation relies on some randomness, we set a seed for reproducibility: ```python >>> from transformers import pipeline, set_seed >>> generator = pipeline('text-generation', model='cahya/gpt2-small-indonesian-522M') >>> set_seed(42) >>> generator("Kerajaan Majapahit adalah", max_length=30, num_return_sequences=5, num_beams=10) [{'generated_text': 'Kerajaan Majapahit adalah sebuah kerajaan yang pernah berdiri di Jawa Timur pada abad ke-14 hingga abad ke-15. Kerajaan ini berdiri pada abad ke-14'}, {'generated_text': 'Kerajaan Majapahit adalah sebuah kerajaan yang pernah berdiri di Jawa Timur pada abad ke-14 hingga abad ke-16. Kerajaan ini berdiri pada abad ke-14'}, {'generated_text': 'Kerajaan Majapahit adalah sebuah kerajaan yang pernah berdiri di Jawa Timur pada abad ke-14 hingga abad ke-15. Kerajaan ini berdiri pada abad ke-15'}, {'generated_text': 'Kerajaan Majapahit adalah sebuah kerajaan yang pernah berdiri di Jawa Timur pada abad ke-14 hingga abad ke-16. Kerajaan ini berdiri pada abad ke-15'}, {'generated_text': 'Kerajaan Majapahit adalah sebuah kerajaan yang pernah berdiri di Jawa Timur pada abad ke-14 hingga abad ke-15. Kerajaan ini merupakan kelanjutan dari Kerajaan Majapahit yang'}] ``` Here is how to use this model to get the features of a given text in PyTorch: ```python from transformers import GPT2Tokenizer, GPT2Model model_name = 'cahya/gpt2-small-indonesian-522M' tokenizer = GPT2Tokenizer.from_pretrained(model_name) model = GPT2Model.from_pretrained(model_name) text = "Silakan diganti dengan text apa saja." encoded_input = tokenizer(text, return_tensors='pt') output = model(**encoded_input) ``` and in TensorFlow: ```python from transformers import GPT2Tokenizer, TFGPT2Model model_name = 'cahya/gpt2-small-indonesian-522M' tokenizer = GPT2Tokenizer.from_pretrained(model_name) model = TFGPT2Model.from_pretrained(model_name) text = "Silakan diganti dengan text apa saja." encoded_input = tokenizer(text, return_tensors='tf') output = model(encoded_input) ```
9ae529cd2b50624ef7011769d91fdc13
mit
[]
false
Training data This model was pre-trained on 522 MB of Indonesian Wikipedia. The texts are tokenized using a byte-level version of Byte Pair Encoding (BPE) (for unicode characters) with a vocabulary size of 52,000. The inputs are sequences of 128 consecutive tokens.
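A short sketch of inspecting that tokenizer (checkpoint id taken from the usage section above):

```python
from transformers import GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("cahya/gpt2-small-indonesian-522M")
print(len(tokenizer))  # vocabulary size, 52,000 per the description above
print(tokenizer.tokenize("Kerajaan Majapahit adalah sebuah kerajaan"))
```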
36413f808c0b5b676f1e0ae07a9a9211
mit
['torch']
false
GPT-2 Pretrained model on Bulgarian language using a causal language modeling (CLM) objective. It was introduced in [this paper](https://d4mucfpksywv.cloudfront.net/better-language-models/language_models_are_unsupervised_multitask_learners.pdf) and first released at [this page](https://openai.com/blog/better-language-models/).
bc30e174b5642d3fc74630921c074f3f
mit
['torch']
false
Model description This is the **SMALL** version compressed via [progressive module replacing](https://arxiv.org/abs/2002.02925). The compression was executed on Bulgarian text from [OSCAR](https://oscar-corpus.com/post/oscar-2019/), [Chitanka](https://chitanka.info/) and [Wikipedia](https://bg.wikipedia.org/).
481dc65e5ac5e3d24aba628c581496df
mit
['torch']
false
How to use Here is how to use this model in PyTorch: ```python >>> from transformers import AutoModel, AutoTokenizer >>> >>> model_id = "rmihaylov/gpt2-small-theseus-bg" >>> tokenizer = AutoTokenizer.from_pretrained(model_id) >>> model = AutoModel.from_pretrained(model_id, trust_remote_code=True) >>> >>> input_ids = tokenizer.encode( >>> "Здравей,", >>> add_special_tokens=False, >>> return_tensors='pt') >>> >>> output_ids = model.generate( >>> input_ids, >>> do_sample=True, >>> max_length=50, >>> top_p=0.92, >>> pad_token_id=2, >>> top_k=0) >>> >>> output = tokenizer.decode(output_ids[0]) >>> >>> output = output.replace('<|endoftext|>', '\n\n\n') >>> output = output.replace('<|unknown|>', '') >>> output = output.replace('▁', ' ') >>> output = output.replace('<|n|>', '\n') >>> >>> print(output) Здравей, извинявай, но не мога да заспя. Джини се обърна и забеляза колко са прегърнати. — Почакай, Джини. Не мога да повярвам, че е възможно! Толкова искам да те видя. — Обеща ```
888955e82a4533c0417be0491ad940c0
mit
['torch']
false
Out-of-scope use cases (quoting the original GPT-2 model card): > Because large-scale language models like GPT-2 do not distinguish fact from fiction, we don’t support use-cases > that require the generated text to be true. > > Additionally, language models like GPT-2 reflect the biases inherent to the systems they were trained on, so we do > not recommend that they be deployed into systems that interact with humans > unless the deployers first carry out a > study of biases relevant to the intended use-case. We found no statistically significant difference in gender, race, > and religious bias probes between 774M and 1.5B, implying all versions of GPT-2 should be approached with similar > levels of caution around use cases that are sensitive to biases around human attributes.
36848b22212c0b64f18cf0ef2c488428
mit
[]
false
Usage ```python import torch from transformers import AutoTokenizer, AutoModelForSeq2SeqLM model_name = "vblagoje/bart_lfqa" device = torch.device('cuda' if torch.cuda.is_available() else 'cpu') tokenizer = AutoTokenizer.from_pretrained(model_name) model = AutoModelForSeq2SeqLM.from_pretrained(model_name) model = model.to(device) ```
635e3023660300c40a06066e8065458f
mit
[]
false
Given the question above, suppose the documents below were found in some document store: ```python documents = ["when the skin is completely wet. The body continuously loses water by...", "at greater pressures. There is an ambiguity, however, as to the meaning of the terms 'heating' and 'cooling'...", "are not in a relation of thermal equilibrium, heat will flow from the hotter to the colder, by whatever pathway...", "air condition and moving along a line of constant enthalpy toward a state of higher humidity. A simple example ...", "Thermal contact conductance In physics, thermal contact conductance is the study of heat conduction between solid ..."] ```
f8b0f37f60e92d8bf19d37e8f17fbfbb
mit
[]
false
Concatenate the question (`query`) and the support documents into the BART input: ```python conditioned_doc = "<P> " + " <P> ".join([d for d in documents]) query_and_docs = "question: {} context: {}".format(query, conditioned_doc) model_input = tokenizer(query_and_docs, truncation=True, padding=True, return_tensors="pt") generated_answers_encoded = model.generate(input_ids=model_input["input_ids"].to(device), attention_mask=model_input["attention_mask"].to(device), min_length=64, max_length=256, do_sample=False, early_stopping=True, num_beams=8, temperature=1.0, top_k=None, top_p=None, eos_token_id=tokenizer.eos_token_id, no_repeat_ngram_size=3, num_return_sequences=1) tokenizer.batch_decode(generated_answers_encoded, skip_special_tokens=True, clean_up_tokenization_spaces=True) ```
767698ff6461a8270bb0ce752778bc97
mit
[]
false
Below is the abstractive answer generated by the model: ``` ["When you heat water to room temperature, it loses heat to the air around it. When you cool it down, it gains heat back from the air, which is why it feels colder than the air surrounding it. It's the same reason why you feel cold when you turn on a fan. The air around you is losing heat, and the water is gaining heat."] ```
b9116c379ab94fd1160f97f4a27711b2
apache-2.0
['translation']
false
opus-mt-tiv-fr * source languages: tiv * target languages: fr * OPUS readme: [tiv-fr](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/tiv-fr/README.md) * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/tiv-fr/opus-2020-01-16.zip) * test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/tiv-fr/opus-2020-01-16.test.txt) * test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/tiv-fr/opus-2020-01-16.eval.txt)
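A hedged usage sketch, assuming this model is published on the Hub under the usual OPUS-MT naming, `Helsinki-NLP/opus-mt-tiv-fr` (the input sentence is a placeholder):

```python
from transformers import MarianMTModel, MarianTokenizer

model_name = "Helsinki-NLP/opus-mt-tiv-fr"  # assumed Hub id for this card
tokenizer = MarianTokenizer.from_pretrained(model_name)
model = MarianMTModel.from_pretrained(model_name)

batch = tokenizer(["A Tiv source sentence goes here."], return_tensors="pt", padding=True)
generated = model.generate(**batch)
print(tokenizer.batch_decode(generated, skip_special_tokens=True))
```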
b21281303ce5c999846d290421e915e0
apache-2.0
['generated_from_trainer']
false
test_ner3 This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the pv_dataset dataset. It achieves the following results on the evaluation set: - Loss: 0.2983 - Precision: 0.6698 - Recall: 0.6499 - F1: 0.6597 - Accuracy: 0.9607
4b2e194fc3d51cc75fa24dc81a1f2fb5
apache-2.0
['generated_from_trainer']
false
Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 20
ceacd0c39e7b98f9ecbba4eff4aff286
apache-2.0
['generated_from_trainer']
false
Training results | Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy | |:-------------:|:-----:|:-----:|:---------------:|:---------:|:------:|:------:|:--------:| | 0.1106 | 1.0 | 1813 | 0.1128 | 0.6050 | 0.5949 | 0.5999 | 0.9565 | | 0.0705 | 2.0 | 3626 | 0.1190 | 0.6279 | 0.6122 | 0.6200 | 0.9585 | | 0.0433 | 3.0 | 5439 | 0.1458 | 0.6342 | 0.5983 | 0.6157 | 0.9574 | | 0.0301 | 4.0 | 7252 | 0.1453 | 0.6305 | 0.6818 | 0.6552 | 0.9594 | | 0.0196 | 5.0 | 9065 | 0.1672 | 0.6358 | 0.6871 | 0.6605 | 0.9594 | | 0.0133 | 6.0 | 10878 | 0.1931 | 0.6427 | 0.6138 | 0.6279 | 0.9587 | | 0.0104 | 7.0 | 12691 | 0.1948 | 0.6657 | 0.6511 | 0.6583 | 0.9607 | | 0.0081 | 8.0 | 14504 | 0.2243 | 0.6341 | 0.6574 | 0.6455 | 0.9586 | | 0.0054 | 9.0 | 16317 | 0.2432 | 0.6547 | 0.6318 | 0.6431 | 0.9588 | | 0.0041 | 10.0 | 18130 | 0.2422 | 0.6717 | 0.6397 | 0.6553 | 0.9605 | | 0.0041 | 11.0 | 19943 | 0.2415 | 0.6571 | 0.6420 | 0.6495 | 0.9601 | | 0.0027 | 12.0 | 21756 | 0.2567 | 0.6560 | 0.6590 | 0.6575 | 0.9601 | | 0.0023 | 13.0 | 23569 | 0.2609 | 0.6640 | 0.6495 | 0.6566 | 0.9606 | | 0.002 | 14.0 | 25382 | 0.2710 | 0.6542 | 0.6670 | 0.6606 | 0.9598 | | 0.0012 | 15.0 | 27195 | 0.2766 | 0.6692 | 0.6539 | 0.6615 | 0.9610 | | 0.001 | 16.0 | 29008 | 0.2938 | 0.6692 | 0.6415 | 0.6551 | 0.9603 | | 0.0007 | 17.0 | 30821 | 0.2969 | 0.6654 | 0.6490 | 0.6571 | 0.9604 | | 0.0007 | 18.0 | 32634 | 0.3035 | 0.6628 | 0.6456 | 0.6541 | 0.9601 | | 0.0007 | 19.0 | 34447 | 0.2947 | 0.6730 | 0.6489 | 0.6607 | 0.9609 | | 0.0004 | 20.0 | 36260 | 0.2983 | 0.6698 | 0.6499 | 0.6597 | 0.9607 |
c651dbae466fb3898e5baab5ba68ff16
apache-2.0
['generated_from_trainer']
false
wav2vec2-xls-r-300m-ar-9 This model is a fine-tuned version of [facebook/wav2vec2-xls-r-300m](https://huggingface.co/facebook/wav2vec2-xls-r-300m) on the common_voice dataset. It achieves the following results on the evaluation set: - Loss: 86.4276 - Wer: 0.1947
ad1719f7e56c8013bbeabed502bb8176
apache-2.0
['generated_from_trainer']
false
Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0003 - train_batch_size: 64 - eval_batch_size: 16 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 256 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - num_epochs: 120 - mixed_precision_training: Native AMP
06acfac459cf990b45c5d1d89b62b7d5
apache-2.0
['generated_from_trainer']
false
Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:------:|:-----:|:---------------:|:------:| | 6312.2087 | 4.71 | 400 | 616.6482 | 1.0 | | 1928.3641 | 9.41 | 800 | 135.8992 | 0.6373 | | 502.0017 | 14.12 | 1200 | 84.4729 | 0.3781 | | 299.4288 | 18.82 | 1600 | 76.2488 | 0.3132 | | 224.0057 | 23.53 | 2000 | 77.6899 | 0.2868 | | 183.0379 | 28.24 | 2400 | 77.7943 | 0.2725 | | 160.6119 | 32.94 | 2800 | 79.4487 | 0.2643 | | 142.7342 | 37.65 | 3200 | 81.3426 | 0.2523 | | 127.1061 | 42.35 | 3600 | 83.4995 | 0.2489 | | 114.0666 | 47.06 | 4000 | 82.9293 | 0.2416 | | 108.4024 | 51.76 | 4400 | 78.6118 | 0.2330 | | 99.6215 | 56.47 | 4800 | 87.1001 | 0.2328 | | 95.5135 | 61.18 | 5200 | 84.0371 | 0.2260 | | 88.2917 | 65.88 | 5600 | 85.9637 | 0.2278 | | 82.5884 | 70.59 | 6000 | 81.7456 | 0.2237 | | 77.6827 | 75.29 | 6400 | 88.2686 | 0.2184 | | 73.313 | 80.0 | 6800 | 85.1965 | 0.2183 | | 69.61 | 84.71 | 7200 | 86.1655 | 0.2100 | | 65.6991 | 89.41 | 7600 | 84.0606 | 0.2106 | | 62.6059 | 94.12 | 8000 | 83.8724 | 0.2036 | | 57.8635 | 98.82 | 8400 | 85.2078 | 0.2012 | | 55.2126 | 103.53 | 8800 | 86.6009 | 0.2021 | | 53.1746 | 108.24 | 9200 | 88.4284 | 0.1975 | | 52.3969 | 112.94 | 9600 | 85.2846 | 0.1972 | | 49.8386 | 117.65 | 10000 | 86.4276 | 0.1947 |
c7af1c4469b150180b91d4851858cbbd
apache-2.0
['generated_from_keras_callback']
false
bertbaseuncasedny This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset. It achieves the following results on the evaluation set: - Train Loss: 0.3901 - Train End Logits Accuracy: 0.8823 - Train Start Logits Accuracy: 0.8513 - Validation Loss: 1.2123 - Validation End Logits Accuracy: 0.7291 - Validation Start Logits Accuracy: 0.6977 - Epoch: 3
f880f335da4d5f41874519ad1eadb9a9
apache-2.0
['generated_from_keras_callback']
false
Training hyperparameters The following hyperparameters were used during training: - optimizer: {'name': 'Adam', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 29508, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False} - training_precision: float32
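Reconstructed in Keras, that serialized optimizer configuration looks roughly like this (a sketch of the config above):

```python
import tensorflow as tf

lr_schedule = tf.keras.optimizers.schedules.PolynomialDecay(
    initial_learning_rate=2e-05,
    decay_steps=29508,
    end_learning_rate=0.0,
    power=1.0,  # power 1.0 makes this a linear decay
)
optimizer = tf.keras.optimizers.Adam(
    learning_rate=lr_schedule, beta_1=0.9, beta_2=0.999, epsilon=1e-08,
)
```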
2213e5ca32396b8dd2ab538cd5219065
apache-2.0
['generated_from_keras_callback']
false
Training results | Train Loss | Train End Logits Accuracy | Train Start Logits Accuracy | Validation Loss | Validation End Logits Accuracy | Validation Start Logits Accuracy | Epoch | |:----------:|:-------------------------:|:---------------------------:|:---------------:|:------------------------------:|:--------------------------------:|:-----:| | 1.2597 | 0.6683 | 0.6277 | 1.0151 | 0.7214 | 0.6860 | 0 | | 0.7699 | 0.7820 | 0.7427 | 1.0062 | 0.7342 | 0.6996 | 1 | | 0.5343 | 0.8425 | 0.8064 | 1.1162 | 0.7321 | 0.7010 | 2 | | 0.3901 | 0.8823 | 0.8513 | 1.2123 | 0.7291 | 0.6977 | 3 |
69bae67a40c62c734886786d6b8e2ecc
mit
['generated_from_keras_callback']
false
YSKartal/bert-base-turkish-cased-turkish_offensive_trained_model This model is a fine-tuned version of [dbmdz/bert-base-turkish-cased](https://huggingface.co/dbmdz/bert-base-turkish-cased) on the [offenseval2020_tr](https://huggingface.co/datasets/offenseval2020_tr) dataset. It achieves the following results on the evaluation set: - Train Loss: 0.0365 - Validation Loss: 0.4846 - Train F1: 0.6993 - Epoch: 3
563670d307ec5fcf3d489180f47a889c
mit
['generated_from_keras_callback']
false
Training hyperparameters The following hyperparameters were used during training: - optimizer: {'name': 'Adam', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 7936, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False} - training_precision: float32
a3a42a80614eb45d278a5592c17505fd
mit
['generated_from_keras_callback']
false
Training results | Train Loss | Validation Loss | Train F1 | Epoch | |:----------:|:---------------:|:--------:|:-----:| | 0.3003 | 0.2664 | 0.6971 | 0 | | 0.1866 | 0.3018 | 0.6990 | 1 | | 0.0860 | 0.3803 | 0.7032 | 2 | | 0.0365 | 0.4846 | 0.6993 | 3 |
65afb6f91f36f0b3e61babd332e6e907
mit
['conversational']
false
I fine-tuned the DialoGPT-small model on "The Big Bang Theory" TV series transcript dataset from Kaggle (https://www.kaggle.com/mitramir5/the-big-bang-theory-series-transcript) ```python from transformers import AutoModelForCausalLM, AutoTokenizer import torch tokenizer = AutoTokenizer.from_pretrained("vijayv500/DialoGPT-small-Big-Bang-Theory-Series-Transcripts") model = AutoModelForCausalLM.from_pretrained("vijayv500/DialoGPT-small-Big-Bang-Theory-Series-Transcripts") ```
422c31f2334ab91a4e220e3f8bb32430
mit
['conversational']
false
Generate a response while limiting the total chat history to 1000 tokens (here `bot_input_ids` holds the encoded user input appended to the running chat history; see the sketch below): ```python chat_history_ids = model.generate( bot_input_ids, max_length=200, pad_token_id=tokenizer.eos_token_id, no_repeat_ngram_size=3, do_sample=True, top_k=100, top_p=0.7, temperature=0.8 ) ```
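For context, here is the usual DialoGPT chat loop that produces `bot_input_ids` (a sketch following the standard recipe, not necessarily the author's exact script; `tokenizer`, `model`, and `torch` come from the snippet above):

```python
chat_history_ids = None
for step in range(5):
    # Encode the new user input, appending the end-of-sequence token.
    new_user_input_ids = tokenizer.encode(input(">> User: ") + tokenizer.eos_token,
                                          return_tensors="pt")
    # Append the new user input to the running chat history.
    bot_input_ids = (torch.cat([chat_history_ids, new_user_input_ids], dim=-1)
                     if step > 0 else new_user_input_ids)
    # Generate a response with the same sampling settings as above.
    chat_history_ids = model.generate(
        bot_input_ids, max_length=200, pad_token_id=tokenizer.eos_token_id,
        no_repeat_ngram_size=3, do_sample=True, top_k=100, top_p=0.7, temperature=0.8,
    )
    # Print only the newly generated tokens.
    print("Bot:", tokenizer.decode(chat_history_ids[:, bot_input_ids.shape[-1]:][0],
                                   skip_special_tokens=True))
```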
2fbfdb0e6b92303a178609a7799a1ddc
apache-2.0
['generated_from_trainer']
false
mobilebert_sa_GLUE_Experiment_logit_kd_mrpc This model is a fine-tuned version of [google/mobilebert-uncased](https://huggingface.co/google/mobilebert-uncased) on the GLUE MRPC dataset. It achieves the following results on the evaluation set: - Loss: 0.5133 - Accuracy: 0.6740 - F1: 0.7772 - Combined Score: 0.7256
1bd1fefbdc1a4b42fb636b5da2c1042f
apache-2.0
['generated_from_trainer']
false
Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Combined Score | |:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|:--------------:| | 0.6228 | 1.0 | 29 | 0.5556 | 0.6838 | 0.8122 | 0.7480 | | 0.611 | 2.0 | 58 | 0.5551 | 0.6838 | 0.8122 | 0.7480 | | 0.6095 | 3.0 | 87 | 0.5538 | 0.6838 | 0.8122 | 0.7480 | | 0.6062 | 4.0 | 116 | 0.5503 | 0.6838 | 0.8122 | 0.7480 | | 0.5825 | 5.0 | 145 | 0.5262 | 0.6985 | 0.8167 | 0.7576 | | 0.4981 | 6.0 | 174 | 0.5197 | 0.6936 | 0.8038 | 0.7487 | | 0.468 | 7.0 | 203 | 0.5133 | 0.6740 | 0.7772 | 0.7256 | | 0.3901 | 8.0 | 232 | 0.5382 | 0.6838 | 0.7757 | 0.7297 | | 0.323 | 9.0 | 261 | 0.6140 | 0.6789 | 0.7657 | 0.7223 | | 0.2674 | 10.0 | 290 | 0.5512 | 0.6740 | 0.7687 | 0.7214 | | 0.2396 | 11.0 | 319 | 0.6467 | 0.6667 | 0.7631 | 0.7149 | | 0.2127 | 12.0 | 348 | 0.7811 | 0.6716 | 0.7690 | 0.7203 |
36703e037f0a63fa25c700ba1b3441e0
apache-2.0
['generated_from_trainer']
false
wav2vec2-base-finetuned-ks This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the superb dataset. It achieves the following results on the evaluation set: - Loss: 0.1258 - Accuracy: 0.9793
a82b546f3205a55dfc843bf7179535c3
apache-2.0
['generated_from_trainer']
false
Training hyperparameters The following hyperparameters were used during training: - learning_rate: 3e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 128 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 5
e75e23022551c6b949620d398010d569
apache-2.0
['generated_from_trainer']
false
Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 1.1561 | 1.0 | 399 | 1.1127 | 0.6643 | | 0.4803 | 2.0 | 798 | 0.3547 | 0.9687 | | 0.2855 | 3.0 | 1197 | 0.1663 | 0.9763 | | 0.1987 | 4.0 | 1596 | 0.1258 | 0.9793 | | 0.2097 | 5.0 | 1995 | 0.1171 | 0.9791 |
b9b6191150bf29820153f04a17bca198
apache-2.0
['generated_from_trainer']
false
distilbert-base-uncased-finetuned-emotion This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset. It achieves the following results on the evaluation set: - Loss: 0.2147 - Accuracy: 0.927 - F1: 0.9270
05fe14f93fe7a299d5e5c14dc49576d2
apache-2.0
['generated_from_trainer']
false
Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | |:-------------:|:-----:|:----:|:---------------:|:--------:|:------:| | 0.8181 | 1.0 | 250 | 0.3036 | 0.9085 | 0.9064 | | 0.2443 | 2.0 | 500 | 0.2147 | 0.927 | 0.9270 |
31b2bde93be5576cea1bbbea108d30ed
apache-2.0
['generated_from_keras_callback']
false
hsohn3/ehr-bert-base-uncased-cchs-wordlevel This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset. It achieves the following results on the evaluation set: - Train Loss: 3.7374 - Epoch: 9
27760fb7a68e1b17c609ed854c298c49
apache-2.0
['generated_from_keras_callback']
false
Training hyperparameters The following hyperparameters were used during training: - optimizer: {'name': 'AdamWeightDecay', 'learning_rate': 1e-04, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False, 'weight_decay_rate': 0.01} - training_precision: float32 - block_size: 512 - batch_size: 4 - num_epochs: 10 - mlm_probability: 0.15
79bb8d818e644cbc1f24a1a26d1ef877
apache-2.0
['generated_from_keras_callback']
false
Training results | Train Loss | Epoch | |:----------:|:-----:| | 3.8857 | 0 | | 3.7525 | 1 | | 3.7505 | 2 | | 3.7493 | 3 | | 3.7412 | 4 | | 3.7432 | 5 | | 3.7428 | 6 | | 3.7409 | 7 | | 3.7394 | 8 | | 3.7374 | 9 |
193207095b33ae648670748568bc2cc9
apache-2.0
['whisper-event', 'generated_from_trainer']
false
Whisper medium Greek El Greco This model is a fine-tuned version of [emilios/whisper-medium-el-n2](https://huggingface.co/emilios/whisper-medium-el-n2) on the mozilla-foundation/common_voice_11_0 el dataset. It achieves the following results on the evaluation set: - Loss: 0.5669 - Wer: 9.8997
377549fc489b6b0822661e054abbf7c7
apache-2.0
['whisper-event', 'generated_from_trainer']
false
Training hyperparameters The following hyperparameters were used during training: - learning_rate: 3e-06 - train_batch_size: 32 - eval_batch_size: 16 - seed: 42 - distributed_type: multi-GPU - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - training_steps: 11000
2092fc8b2d15d2b9ce1cdfa8c6e7cab5
apache-2.0
['whisper-event', 'generated_from_trainer']
false
Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:------:|:-----:|:---------------:|:-------:| | 0.0014 | 58.82 | 1000 | 0.4951 | 10.3640 | | 0.0006 | 117.65 | 2000 | 0.5181 | 10.2805 | | 0.0007 | 175.82 | 3000 | 0.5317 | 10.1133 | | 0.0004 | 234.65 | 4000 | 0.5396 | 10.1226 | | 0.0004 | 293.47 | 5000 | 0.5532 | 10.1040 | | 0.0013 | 352.29 | 6000 | 0.5645 | 10.0854 | | 0.0002 | 411.12 | 7000 | 0.5669 | 10.1133 | | 0.0001 | 469.94 | 8000 | 0.5669 | 9.8997 | | 0.0001 | 528.76 | 9000 | 0.5645 | 9.9276 | | 0.0001 | 587.82 | 10000 | 0.5674 | 9.9647 | | 0.0003 | 646.82 | 11000 | 0.5669 | 9.9461 |
88fc008c75ec2fe92ad746bec656a71b
mit
[]
false
Model miniALBERT is a recursive transformer model which uses cross-layer parameter sharing, embedding factorisation, and bottleneck adapters to achieve high parameter efficiency. Since miniALBERT is a compact model, it is trained using a layer-to-layer distillation technique, with the BioBERT-v1.1 model as the teacher. Currently, this model is trained for 100K steps on the PubMed Abstracts dataset. In terms of architecture, this model uses an embedding dimension of 128, a hidden size of 768, an MLP expansion rate of 4, and a reduction factor of 16 for bottleneck adapters. In general, this model uses 6 recursions and has a unique parameter count of 11 million.
219d2524688299ce0ff7ea8659e5c07d
mit
[]
false
Usage Since miniALBERT uses a unique architecture, it cannot be loaded with `transformers.AutoModel` for now. To load the model, first clone the miniALBERT GitHub project using the code below: ```bash git clone https://github.com/nlpie-research/MiniALBERT.git ``` Then use `sys.path.append` to add the miniALBERT files to your project, and import the miniALBERT modeling file: ```python import sys sys.path.append("PATH_TO_CLONED_PROJECT/MiniALBERT/") from minialbert_modeling import MiniAlbertForSequenceClassification, MiniAlbertForTokenClassification ``` Finally, load the model like a regular model in the transformers library:
ae1219ab8f0bba5a61ef8da6f830a99a
mit
[]
false
For sequence classification, use the code below: ```python model = MiniAlbertForSequenceClassification.from_pretrained("nlpie/bio-miniALBERT-128") ``` In addition, for efficient fine-tuning using the pre-trained bottleneck adapters, use: ```python model.trainAdaptersOnly() ```
e05b83598c217da88acac30b1f62bd78
mit
[]
false
Citation If you use the model, please cite our paper: ``` @article{nouriborji2022minialbert, title={MiniALBERT: Model Distillation via Parameter-Efficient Recursive Transformers}, author={Nouriborji, Mohammadmahdi and Rohanian, Omid and Kouchaki, Samaneh and Clifton, David A}, journal={arXiv preprint arXiv:2210.06425}, year={2022} } ```
ef5261261120eddef3f2428c181a5a3f
apache-2.0
['generated_from_trainer']
false
tst-translation This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on the wmt16 ro-en dataset. It achieves the following results on the evaluation set: - Loss: 1.5889 - Bleu: 13.3161 - Gen Len: 42.493
73d2101cba3027687bfb78489f9c24d8
apache-2.0
['generated_from_trainer']
false
Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 12 - eval_batch_size: 12 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5.0
7c108e89e3e66a03d64af9e016238481
apache-2.0
['multiberts', 'multiberts-seed_0', 'multiberts-seed_0-step_1700k']
false
MultiBERTs, Intermediate Checkpoint - Seed 0, Step 1700k MultiBERTs is a collection of checkpoints and a statistical library to support robust research on BERT. We provide 25 BERT-base models trained with similar hyper-parameters as [the original BERT model](https://github.com/google-research/bert) but with different random seeds, which causes variations in the initial weights and order of training instances. The aim is to distinguish findings that apply to a specific artifact (i.e., a particular instance of the model) from those that apply to the more general procedure. We also provide 140 intermediate checkpoints captured during the course of pre-training (we saved 28 checkpoints for the first 5 runs). The models were originally released through [http://goo.gle/multiberts](http://goo.gle/multiberts). We describe them in our paper [The MultiBERTs: BERT Reproductions for Robustness Analysis](https://arxiv.org/abs/2106.16163). This is the intermediate checkpoint for seed 0, captured at pre-training step 1700k.
310affc43f0d3fb75cd658d57ced6d60
apache-2.0
['multiberts', 'multiberts-seed_0', 'multiberts-seed_0-step_1700k']
false
How to use Using code from [BERT-base uncased](https://huggingface.co/bert-base-uncased), here is an example based on TensorFlow: ```python from transformers import BertTokenizer, TFBertModel tokenizer = BertTokenizer.from_pretrained("google/multiberts-seed_0-step_1700k") model = TFBertModel.from_pretrained("google/multiberts-seed_0-step_1700k") text = "Replace me by any text you'd like." encoded_input = tokenizer(text, return_tensors='tf') output = model(encoded_input) ``` PyTorch version: ```python from transformers import BertTokenizer, BertModel tokenizer = BertTokenizer.from_pretrained("google/multiberts-seed_0-step_1700k") model = BertModel.from_pretrained("google/multiberts-seed_0-step_1700k") text = "Replace me by any text you'd like." encoded_input = tokenizer(text, return_tensors='pt') output = model(**encoded_input) ```
b916af07fd5b4f27f0ea5592cdb2206c
apache-2.0
['generated_from_trainer']
false
reddit-bert-text4 This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 2.4763
362483e376c991605ee5c7c534cc1f98
apache-2.0
['generated_from_trainer']
false
Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 3.1071 | 1.0 | 978 | 2.6170 | | 2.6788 | 2.0 | 1956 | 2.5332 | | 2.6112 | 3.0 | 2934 | 2.4844 |
5d2221684269eb7be76fa5e31dec7029
apache-2.0
['translation']
false
rus-dan * source group: Russian * target group: Danish * OPUS readme: [rus-dan](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/rus-dan/README.md) * model: transformer-align * source language(s): rus * target language(s): dan * pre-processing: normalization + SentencePiece (spm32k,spm32k) * download original weights: [opus-2020-06-17.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/rus-dan/opus-2020-06-17.zip) * test set translations: [opus-2020-06-17.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/rus-dan/opus-2020-06-17.test.txt) * test set scores: [opus-2020-06-17.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/rus-dan/opus-2020-06-17.eval.txt)
87c92db053fa4bdea111609261e31446
apache-2.0
['translation']
false
System Info: - hf_name: rus-dan - source_languages: rus - target_languages: dan - opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/rus-dan/README.md - original_repo: Tatoeba-Challenge - tags: ['translation'] - languages: ['ru', 'da'] - src_constituents: {'rus'} - tgt_constituents: {'dan'} - src_multilingual: False - tgt_multilingual: False - prepro: normalization + SentencePiece (spm32k,spm32k) - url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/rus-dan/opus-2020-06-17.zip - url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/rus-dan/opus-2020-06-17.test.txt - src_alpha3: rus - tgt_alpha3: dan - short_pair: ru-da - chrF2_score: 0.714 - bleu: 56.6 - brevity_penalty: 0.977 - ref_len: 11746.0 - src_name: Russian - tgt_name: Danish - train_date: 2020-06-17 - src_alpha2: ru - tgt_alpha2: da - prefer_old: False - long_pair: rus-dan - helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535 - transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b - port_machine: brutasse - port_time: 2020-08-21-14:41
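Based on the `short_pair` above, a hedged usage sketch (assuming the converted checkpoint is published as `Helsinki-NLP/opus-mt-ru-da`):

```python
from transformers import pipeline

# Assumed Hub id, derived from short_pair "ru-da" above.
translator = pipeline("translation", model="Helsinki-NLP/opus-mt-ru-da")
print(translator("Привет, мир!"))  # placeholder input sentence
```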
2bfc6824480c5aef9ae93422993e3154
bsd-3-clause
[]
false
Model description CodeGen is a family of autoregressive language models for **program synthesis** from the paper: [A Conversational Paradigm for Program Synthesis](https://arxiv.org/abs/2203.13474) by Erik Nijkamp, Bo Pang, Hiroaki Hayashi, Lifu Tu, Huan Wang, Yingbo Zhou, Silvio Savarese, Caiming Xiong. The models were originally released in [this repository](https://github.com/salesforce/CodeGen), under 3 pre-training data variants (`NL`, `Multi`, `Mono`) and 4 model size variants (`350M`, `2B`, `6B`, `16B`). The checkpoint included in this repository is denoted as **CodeGen-NL 6B** in the paper, where "NL" means it is pre-trained on the Pile and "6B" refers to the number of trainable parameters.
45d7631c320db6fa7432691377302708
bsd-3-clause
[]
false
Training data This checkpoint (CodeGen-NL 6B) was pre-trained on [the Pile](https://github.com/EleutherAI/the-pile), a large-scale curated dataset created by [EleutherAI](https://www.eleuther.ai/). Parts of the dataset include code data.
08c593c09e08e696f3ffc2c2bb00f23c
bsd-3-clause
[]
false
Training procedure CodeGen was trained using cross-entropy loss to maximize the likelihood of sequential inputs. The family of models was trained using multiple TPU-v4-512 by Google, leveraging data and model parallelism. See Section 2.3 of the [paper](https://arxiv.org/abs/2203.13474) for more details.
84f5fabc0b4b9ce38ac85087c038e41c
bsd-3-clause
[]
false
Intended Use and Limitations As an autoregressive language model, CodeGen is capable of extracting features from given natural language and programming language texts, and of calculating their likelihood. However, the model is intended for, and best at, **program synthesis**: generating executable code given English prompts, where the prompts should be in the form of a comment string. The model can complete partially-generated code as well.
ff976d481e0487572be0de152e90aaeb
bsd-3-clause
[]
false
How to use This model can be easily loaded using the `AutoModelForCausalLM` functionality: ```python from transformers import AutoTokenizer, AutoModelForCausalLM tokenizer = AutoTokenizer.from_pretrained("Salesforce/codegen-6B-nl") model = AutoModelForCausalLM.from_pretrained("Salesforce/codegen-6B-nl") text = "def hello_world():" input_ids = tokenizer(text, return_tensors="pt").input_ids generated_ids = model.generate(input_ids, max_length=128) print(tokenizer.decode(generated_ids[0], skip_special_tokens=True)) ```
3d965a15294cc54b7f2767b142ea609b
bsd-3-clause
[]
false
BibTeX entry and citation info ```bibtex @article{Nijkamp2022ACP, title={A Conversational Paradigm for Program Synthesis}, author={Nijkamp, Erik and Pang, Bo and Hayashi, Hiroaki and Tu, Lifu and Wang, Huan and Zhou, Yingbo and Savarese, Silvio and Xiong, Caiming}, journal={arXiv preprint}, year={2022} } ```
a464443f1f63e7f62119169133753280
mit
['audio', 'speech-translation', 'automatic-speech-recognition', 'speech2text2']
false
S2T2-Wav2Vec2-CoVoST2-EN-AR-ST `s2t-wav2vec2-large-en-ar` is a Speech to Text Transformer model trained for end-to-end Speech Translation (ST). The S2T2 model was proposed in [Large-Scale Self- and Semi-Supervised Learning for Speech Translation](https://arxiv.org/pdf/2104.06678.pdf) and officially released in [Fairseq](https://github.com/pytorch/fairseq/blob/6f847c8654d56b4d1b1fbacec027f47419426ddb/fairseq/models/wav2vec/wav2vec2_asr.py).
3f8e367fcfbb1f4d13cb8e4ec7f9001b
mit
['audio', 'speech-translation', 'automatic-speech-recognition', 'speech2text2']
false
Model description S2T2 is a transformer-based seq2seq (speech encoder-decoder) model designed for end-to-end Automatic Speech Recognition (ASR) and Speech Translation (ST). It uses a pretrained [Wav2Vec2](https://huggingface.co/transformers/model_doc/wav2vec2.html) as the encoder and a transformer-based decoder. The model is trained with standard autoregressive cross-entropy loss and generates the translations autoregressively.
1d87215a362cd110c58b50c838fc2fc8
mit
['audio', 'speech-translation', 'automatic-speech-recognition', 'speech2text2']
false
Intended uses & limitations This model can be used for end-to-end English speech to Arabic text translation. See the [model hub](https://huggingface.co/models?filter=speech2text2) to look for other S2T2 checkpoints.
dad9261dddfcb7b220d9ef0ea065717a
mit
['audio', 'speech-translation', 'automatic-speech-recognition', 'speech2text2']
false
How to use As this is a standard sequence-to-sequence transformer model, you can use the `generate` method to generate the transcripts by passing the speech features to the model. You can use the model directly via the ASR pipeline: ```python from datasets import load_dataset from transformers import pipeline librispeech_en = load_dataset("patrickvonplaten/librispeech_asr_dummy", "clean", split="validation") asr = pipeline("automatic-speech-recognition", model="facebook/s2t-wav2vec2-large-en-ar", feature_extractor="facebook/s2t-wav2vec2-large-en-ar") translation = asr(librispeech_en[0]["file"]) ``` or step-by-step as follows: ```python import torch from transformers import Speech2Text2Processor, SpeechEncoderDecoderModel from datasets import load_dataset import soundfile as sf model = SpeechEncoderDecoderModel.from_pretrained("facebook/s2t-wav2vec2-large-en-ar") processor = Speech2Text2Processor.from_pretrained("facebook/s2t-wav2vec2-large-en-ar") def map_to_array(batch): speech, _ = sf.read(batch["file"]) batch["speech"] = speech return batch ds = load_dataset("patrickvonplaten/librispeech_asr_dummy", "clean", split="validation") ds = ds.map(map_to_array) inputs = processor(ds["speech"][0], sampling_rate=16_000, return_tensors="pt") generated_ids = model.generate(input_ids=inputs["input_features"], attention_mask=inputs["attention_mask"]) transcription = processor.batch_decode(generated_ids) ```
63861afffd93e88f1e6eaadfba9644c9
mit
['audio', 'speech-translation', 'automatic-speech-recognition', 'speech2text2']
false
Evaluation results CoVoST-V2 test results for en-ar (BLEU score): **20.2** For more information, please have a look at the [official paper](https://arxiv.org/pdf/2104.06678.pdf) - especially row 10 of Table 2.
7c2791df74b12612b99f96859ec04390
mit
['audio', 'speech-translation', 'automatic-speech-recognition', 'speech2text2']
false
BibTeX entry and citation info ```bibtex @article{DBLP:journals/corr/abs-2104-06678, author = {Changhan Wang and Anne Wu and Juan Miguel Pino and Alexei Baevski and Michael Auli and Alexis Conneau}, title = {Large-Scale Self- and Semi-Supervised Learning for Speech Translation}, journal = {CoRR}, volume = {abs/2104.06678}, year = {2021}, url = {https://arxiv.org/abs/2104.06678}, archivePrefix = {arXiv}, eprint = {2104.06678}, timestamp = {Thu, 12 Aug 2021 15:37:06 +0200}, biburl = {https://dblp.org/rec/journals/corr/abs-2104-06678.bib}, bibsource = {dblp computer science bibliography, https://dblp.org} } ```
06a93f8b008378ab48786b72ae02e986
apache-2.0
['generated_from_trainer']
false
finetuning-sentiment-model-3000-samples This model is a fine-tuned version of [distilbert-base-uncased-finetuned-sst-2-english](https://huggingface.co/distilbert-base-uncased-finetuned-sst-2-english) on the None dataset. It achieves the following results on the evaluation set: - Loss: 1.1460 - Accuracy: 0.75
1ade7c1ad67fd91441dc17cfff566cc0
apache-2.0
['generated_from_trainer']
false
t5-small-finetuned-xsum-wei2 This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on the xsum dataset. It achieves the following results on the evaluation set: - Loss: 2.4131 - Rouge1: 29.2287 - Rouge2: 8.4073 - Rougel: 23.0934 - Rougelsum: 23.0954 - Gen Len: 18.8236
1550c45ebaa0acdfa0daae88e6488246
apache-2.0
['generated_from_trainer']
false
Training hyperparameters The following hyperparameters were used during training: - learning_rate: 4e-05 - train_batch_size: 12 - eval_batch_size: 12 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 1 - mixed_precision_training: Native AMP
a70e8172405df0f7d12b339b6f4daa9c
apache-2.0
['generated_from_trainer']
false
Training results | Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len | |:-------------:|:-----:|:-----:|:---------------:|:-------:|:------:|:-------:|:---------:|:-------:| | 2.633 | 1.0 | 17004 | 2.4131 | 29.2287 | 8.4073 | 23.0934 | 23.0954 | 18.8236 |
a44236a878c05fd6eaf04827c1efe2e9
apache-2.0
['generated_from_trainer']
false
tiny-bert-sst2-distilled-model This model is a fine-tuned version of [google/bert_uncased_L-2_H-128_A-2](https://huggingface.co/google/bert_uncased_L-2_H-128_A-2) on the glue dataset. It achieves the following results on the evaluation set: - Loss: 1.2592 - Accuracy: 0.8383
a2698146bd23ca09f912d2c21f8c3e2a
apache-2.0
['generated_from_trainer']
false
Training hyperparameters The following hyperparameters were used during training: - learning_rate: 6e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 33 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 7 - mixed_precision_training: Native AMP
900fbd00dac3d4ca0587f7b99ff5abd9
apache-2.0
['generated_from_trainer']
false
Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:-----:|:---------------:|:--------:| | 0.5303 | 1.0 | 4210 | 1.2542 | 0.8222 | | 0.4503 | 2.0 | 8420 | 1.1260 | 0.8211 | | 0.3689 | 3.0 | 12630 | 1.2325 | 0.8234 | | 0.3122 | 4.0 | 16840 | 1.2533 | 0.8337 | | 0.2764 | 5.0 | 21050 | 1.2726 | 0.8337 | | 0.254 | 6.0 | 25260 | 1.2609 | 0.8337 | | 0.2358 | 7.0 | 29470 | 1.2592 | 0.8383 |
6af071aef8572ba86b852c771df5dd65
apache-2.0
['setfit', 'sentence-transformers', 'text-classification']
false
fathyshalab/domain_transfer_clinic_credit_cards-massive_cooking-roberta-large-v1-2-4 This is a [SetFit model](https://github.com/huggingface/setfit) that can be used for text classification. The model has been trained using an efficient few-shot learning technique that involves: 1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning. 2. Training a classification head with features from the fine-tuned Sentence Transformer.
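A hedged inference sketch with the `setfit` library (the input strings are illustrative):

```python
from setfit import SetFitModel

model = SetFitModel.from_pretrained(
    "fathyshalab/domain_transfer_clinic_credit_cards-massive_cooking-roberta-large-v1-2-4"
)
# SetFit models are callable: inputs are embedded by the Sentence Transformer
# and then classified by the trained head.
preds = model(["i need to check my credit card balance",
               "how long should i grill the chicken"])
print(preds)
```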
74c6b5d2155140c70790e049942fdad2
apache-2.0
['setfit', 'sentence-transformers', 'text-classification']
false
BibTeX entry and citation info ```bibtex @article{https://doi.org/10.48550/arxiv.2209.11055, doi = {10.48550/ARXIV.2209.11055}, url = {https://arxiv.org/abs/2209.11055}, author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren}, keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences}, title = {Efficient Few-Shot Learning Without Prompts}, publisher = {arXiv}, year = {2022}, copyright = {Creative Commons Attribution 4.0 International} } ```
261073ed26c1159dee00153ba29ccd26
apache-2.0
['summarization', 'generated_from_trainer']
false
mt5-base-finetuned-ar-wikilingua This model is a fine-tuned version of [google/mt5-base](https://huggingface.co/google/mt5-base) on the wiki_lingua dataset. It achieves the following results on the evaluation set: - Loss: 3.6790 - Rouge-1: 19.46 - Rouge-2: 6.82 - Rouge-l: 17.57 - Gen Len: 18.83 - Bertscore: 70.18
f28774fcf07ee1452cacb3bce3d5f443
apache-2.0
['summarization', 'generated_from_trainer']
false
Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 4 - eval_batch_size: 4 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 250 - num_epochs: 8 - label_smoothing_factor: 0.1
22c37df36f5db8e462bdec62694c074c
apache-2.0
['summarization', 'generated_from_trainer']
false
Training results | Training Loss | Epoch | Step | Validation Loss | Rouge-1 | Rouge-2 | Rouge-l | Gen Len | Bertscore | |:-------------:|:-----:|:-----:|:---------------:|:-------:|:-------:|:-------:|:-------:|:---------:| | 4.9783 | 1.0 | 5111 | 4.0107 | 15.8 | 4.65 | 14.18 | 18.98 | 68.66 | | 4.2093 | 2.0 | 10222 | 3.8664 | 16.46 | 5.17 | 15.08 | 18.91 | 68.5 | | 4.0303 | 3.0 | 15333 | 3.7847 | 17.0 | 5.43 | 15.45 | 18.89 | 68.75 | | 3.9165 | 4.0 | 20444 | 3.7405 | 17.03 | 5.5 | 15.45 | 18.86 | 68.78 | | 3.8396 | 5.0 | 25555 | 3.7102 | 17.14 | 5.57 | 15.48 | 18.87 | 68.92 | | 3.7825 | 6.0 | 30666 | 3.6944 | 17.64 | 5.73 | 15.96 | 18.82 | 69.14 | | 3.7447 | 7.0 | 35777 | 3.6801 | 17.6 | 5.66 | 15.9 | 18.78 | 69.23 | | 3.7203 | 8.0 | 40888 | 3.6790 | 17.94 | 5.81 | 16.21 | 18.81 | 69.29 |
76e2d68718be3b65f848fb6aa888f8ba
apache-2.0
['summarization', 'generated_from_trainer']
false
t5-small-finetuned-summarization-cnn This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on the cnn_dailymail dataset. It achieves the following results on the evaluation set: - Loss: 2.0105 - Rouge1: 24.4825 - Rouge2: 9.1573 - Rougel: 19.7135 - Rougelsum: 22.2551
2958c9ea6236fd978c129a21eb1a07f8
apache-2.0
['summarization', 'generated_from_trainer']
false
Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5.6e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2
b19d72faea8c7f12a136571d8aceab62
apache-2.0
['summarization', 'generated_from_trainer']
false
Training results | Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | |:-------------:|:-----:|:----:|:---------------:|:-------:|:------:|:-------:|:---------:| | 2.0389 | 1.0 | 718 | 2.0150 | 24.4413 | 9.1782 | 19.7202 | 22.2225 | | 1.9497 | 2.0 | 1436 | 2.0105 | 24.4825 | 9.1573 | 19.7135 | 22.2551 |
9cf3af3b08011b3090375a05d0bf8774
apache-2.0
['generated_from_trainer']
false
Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 2 - eval_batch_size: 2 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 1
24c4da64e66f5e66fc959f01d97d3bbb
apache-2.0
['generated_from_trainer']
false
Training results | Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len | |:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:-------:|:---------:|:-------:| | No log | 1.0 | 63 | 1.3966 | 24.7113 | 17.3364 | 22.3967 | 24.026 | 19.0 |
ac4fa0caea21dae9d95c60555595c682
other
['generated_from_trainer']
false
NLP_Opt350M This model is a fine-tuned version of [facebook/opt-350m](https://huggingface.co/facebook/opt-350m) on the None dataset. It achieves the following results on the evaluation set: - Loss: 3.3806
876a11e2b12486b84da70f829bca6349
other
['generated_from_trainer']
false
Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 3.453 | 1.0 | 849 | 3.3589 | | 2.9744 | 2.0 | 1698 | 3.3594 | | 2.7146 | 3.0 | 2547 | 3.3806 |
3a000a9360b98705d93c6307937eedd7
apache-2.0
['generated_from_trainer']
false
oldData_BERT This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.0616
f82393229666cdd5a661a9c6aa08e119
apache-2.0
['generated_from_trainer']
false
Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 1 - eval_batch_size: 1 - seed: 42 - gradient_accumulation_steps: 8 - total_train_batch_size: 8 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 7
2e298f49bb0c3fad8e771cbf0f3efce8
apache-2.0
['generated_from_trainer']
false
Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 1.2348 | 1.0 | 1125 | 1.0185 | | 1.0082 | 2.0 | 2250 | 0.7174 | | 0.699 | 3.0 | 3375 | 0.3657 | | 0.45 | 4.0 | 4500 | 0.1880 | | 0.2915 | 5.0 | 5625 | 0.1140 | | 0.2056 | 6.0 | 6750 | 0.0708 | | 0.1312 | 7.0 | 7875 | 0.0616 |
8001bc2ec650ed95913d34ef1297d637
apache-2.0
['audio-classification', 'speechbrain', 'embeddings', 'Language', 'Identification', 'pytorch', 'ECAPA-TDNN', 'TDNN', 'VoxLingua107']
false
Model description This is a spoken language recognition model trained on the VoxLingua107 dataset using SpeechBrain. The model uses the ECAPA-TDNN architecture that has previously been used for speaker recognition. However, it uses more fully connected hidden layers after the embedding layer, and cross-entropy loss was used for training. We observed that this improved the performance of extracted utterance embeddings for downstream tasks. The system is trained on recordings sampled at 16 kHz (single channel). The code will automatically normalize your audio (i.e., resampling + mono channel selection) when calling *classify_file*, if needed. The model can classify a speech utterance according to the language spoken. It covers 3 different languages (English, Hindi, Other).
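A hedged usage sketch following the standard SpeechBrain language-ID recipe (the Hub id and audio path are placeholders, since this excerpt does not name the repository):

```python
from speechbrain.pretrained import EncoderClassifier

# Placeholder source id; substitute the actual repository for this model.
classifier = EncoderClassifier.from_hparams(source="<hub-id-of-this-model>",
                                            savedir="tmp_lang_id")
# classify_file resamples to 16 kHz mono automatically, as noted above.
out_prob, score, index, text_lab = classifier.classify_file("example.wav")
print(text_lab)
```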
a48e6965a351a3c277462c3dc76ccb7c