Columns: license (string, 2-30 chars) · tags (string, 2-513 chars) · is_nc (bool, 1 class) · readme_section (string, 201-597k chars) · hash (string, 32 chars)
apache-2.0
[]
false
Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 1e-06
- train_batch_size: 32
- eval_batch_size: 32
- gradient_accumulation_steps: 1
- optimizer: AdamW with betas=(None, None), weight_decay=None and epsilon=None
- lr_scheduler: None
- lr_warmup_steps: 50
- ema_inv_gamma: None
- mixed_precision: fp16
017203d5f5a9ef2291efd32e07f5ab7e
mit
['generated_from_trainer']
false
deberta-v3-large-cola This model is a fine-tuned version of [microsoft/deberta-v3-large](https://huggingface.co/microsoft/deberta-v3-large) on the GLUE COLA dataset. It achieves the following results on the evaluation set: - Loss: 0.5335 - Matthews Correlation: 0.7193
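A checkpoint like this is typically used through the `text-classification` pipeline. The sketch below is illustrative only: the card excerpt does not name the published repo id, so `MODEL_ID` is a placeholder to replace with the actual checkpoint.

```python
from transformers import pipeline

# Placeholder repo id -- substitute the actual fine-tuned CoLA checkpoint.
MODEL_ID = "your-namespace/deberta-v3-large-cola"

def build_cola_classifier(model_id: str = MODEL_ID):
    """Build a pipeline that scores sentences for grammatical acceptability (CoLA)."""
    return pipeline("text-classification", model=model_id)

# Usage (downloads the checkpoint):
# clf = build_cola_classifier()
# print(clf("The book was written by the author."))
```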
2181be4e12f6f647b7b16a0f1c352c07
mit
['generated_from_trainer']
false
Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 3e-06
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 3.0
a979534c6d453f056db964e2fde9d161
apache-2.0
['whisper-event', 'generated_from_trainer']
false
Whisper Small Lt This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on the Common Voice 11.0 dataset. It achieves the following results on the evaluation set: - Loss: 0.5724 - Wer: 35.8598
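A fine-tuned Whisper checkpoint like this can be driven by the `automatic-speech-recognition` pipeline. This is a minimal sketch; the repo id is a placeholder since the excerpt does not name the published checkpoint, and `chunk_length_s=30` is one common way to handle audio longer than Whisper's 30-second window.

```python
from transformers import pipeline

# Placeholder repo id -- substitute the fine-tuned Lithuanian checkpoint.
MODEL_ID = "your-namespace/whisper-small-lt"

def build_transcriber(model_id: str = MODEL_ID):
    """Build a speech-recognition pipeline; chunk_length_s windows long audio."""
    return pipeline("automatic-speech-recognition", model=model_id, chunk_length_s=30)

# Usage (downloads the checkpoint):
# asr = build_transcriber()
# print(asr("sample.mp3")["text"])
```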
f135cd2a360499febb9d6078439b71ac
apache-2.0
['whisper-event', 'generated_from_trainer']
false
Training results

| Training Loss | Epoch | Step | Validation Loss | Wer     |
|:-------------:|:-----:|:----:|:---------------:|:-------:|
| 0.0237        | 6.0   | 1000 | 0.4745          | 37.9839 |
| 0.0016        | 12.01 | 2000 | 0.5128          | 35.9749 |
| 0.0008        | 18.01 | 3000 | 0.5458          | 35.7843 |
| 0.0005        | 24.02 | 4000 | 0.5652          | 35.8240 |
| 0.0004        | 30.02 | 5000 | 0.5724          | 35.8598 |
d4ead414b45711ef7aba4b421cbd9fa4
cc0-1.0
['MaltBERTa', 'MaCoCu']
false
Model description **XLMR-MaltBERTa** is a large pre-trained language model trained on Maltese texts. It was created by continuing training from the [XLM-RoBERTa-large](https://huggingface.co/xlm-roberta-large) model. It was developed as part of the [MaCoCu](https://macocu.eu/) project. The main developer is [Rik van Noord](https://www.rikvannoord.nl/) from the University of Groningen. XLMR-MaltBERTa was trained on 3.2GB of text, which is equal to 439M tokens. It was trained for 50,000 steps with a batch size of 1,024. It uses the same vocabulary as the original XLMR-large model. The model is trained on the same data as [MaltBERTa](https://huggingface.co/RVN/MaltBERTa), but MaltBERTa was trained from scratch using the RoBERTa architecture. The training and fine-tuning procedures are described in detail on our [Github repo](https://github.com/macocu/LanguageModels).
f18ac7cb688c7206cf793c4a8ac285f1
cc0-1.0
['MaltBERTa', 'MaCoCu']
false
How to use

```python
from transformers import AutoTokenizer, AutoModel, TFAutoModel

tokenizer = AutoTokenizer.from_pretrained("RVN/XLMR-MaltBERTa")
model = AutoModel.from_pretrained("RVN/XLMR-MaltBERTa")
```
bc03a69cc234302ff38fc2a04fcdd9e8
cc0-1.0
['MaltBERTa', 'MaCoCu']
false
Benchmark performance

We tested the performance of MaltBERTa on the UPOS and XPOS benchmark of the [Universal Dependencies](https://universaldependencies.org/) project. Moreover, we test on a Google Translated version of the COPA data set (see our [Github repo](https://github.com/RikVN/COPA) for details). We compare performance to the strong multi-lingual models XLMR-base and XLMR-large, though note that Maltese was not one of the training languages for those models. We also compare to the recently introduced Maltese language models [BERTu](https://huggingface.co/MLRS/BERTu), [mBERTu](https://huggingface.co/MLRS/mBERTu) and our own [MaltBERTa](https://huggingface.co/RVN/MaltBERTa). For details regarding the fine-tuning procedure you can check out our [Github](https://github.com/macocu/LanguageModels). Scores are averages of three runs for UPOS/XPOS and 10 runs for COPA. We use the same hyperparameter settings for all models for UPOS/XPOS, while for COPA we optimize on the dev set.

| | **UPOS** | **UPOS** | **XPOS** | **XPOS** | **COPA** |
|-----------------|:--------:|:--------:|:--------:|:--------:|:--------:|
| | **Dev** | **Test** | **Dev** | **Test** | **Test** |
| **XLM-R-base** | 93.6 | 93.2 | 93.4 | 93.2 | 52.2 |
| **XLM-R-large** | 94.9 | 94.4 | 95.1 | 94.7 | 54.0 |
| **BERTu** | 97.5 | 97.6 | 95.7 | 95.8 | **55.6** |
| **mBERTu** | **97.7** | 97.8 | 97.9 | 98.1 | 52.6 |
| **MaltBERTa** | 95.7 | 95.8 | 96.1 | 96.0 | 53.7 |
| **XLMR-MaltBERTa** | **97.7** | **98.1** | **98.1** | **98.2** | 54.4 |
f41169875b02ab6bf3a010125740673b
cc0-1.0
['MaltBERTa', 'MaCoCu']
false
Acknowledgements Research supported with Cloud TPUs from Google's TPU Research Cloud (TRC). The authors received funding from the European Union's Connecting Europe Facility 2014-2020 - CEF Telecom, under Grant Agreement No. INEA/CEF/ICT/A2020/2278341 (MaCoCu).
df243aa312f57944002128b6239a963d
cc0-1.0
['MaltBERTa', 'MaCoCu']
false
Citation

If you use this model, please cite the following paper:

```bibtex
@inproceedings{non-etal-2022-macocu,
    title = "{M}a{C}o{C}u: Massive collection and curation of monolingual and bilingual data: focus on under-resourced languages",
    author = "Ba{\~n}{\'o}n, Marta and Espl{\`a}-Gomis, Miquel and Forcada, Mikel L. and Garc{\'\i}a-Romero, Cristian and Kuzman, Taja and Ljube{\v{s}}i{\'c}, Nikola and van Noord, Rik and Sempere, Leopoldo Pla and Ram{\'\i}rez-S{\'a}nchez, Gema and Rupnik, Peter and Suchomel, V{\'\i}t and Toral, Antonio and van der Werff, Tobias and Zaragoza, Jaume",
    booktitle = "Proceedings of the 23rd Annual Conference of the European Association for Machine Translation",
    month = jun,
    year = "2022",
    address = "Ghent, Belgium",
    publisher = "European Association for Machine Translation",
    url = "https://aclanthology.org/2022.eamt-1.41",
    pages = "303--304"
}
```
211fe4c3b5e09df6b8019a224c4c71db
mit
['roberta-base', 'roberta-base-epoch_62']
false
RoBERTa, Intermediate Checkpoint - Epoch 62 This model is part of our reimplementation of the [RoBERTa model](https://arxiv.org/abs/1907.11692), trained on Wikipedia and the Book Corpus only. We trained this model for almost 100K steps, corresponding to 83 epochs. We release all 84 checkpoints (including the randomly initialized weights before training) to enable the study of the training dynamics of such models, and other possible use-cases. These models were trained as part of a work that studies how simple data statistics, such as co-occurrences, affect model predictions, described in the paper [Measuring Causal Effects of Data Statistics on Language Model's `Factual' Predictions](https://arxiv.org/abs/2207.14251). This is RoBERTa-base epoch_62.
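An intermediate checkpoint like this loads like any other RoBERTa model. A minimal sketch, assuming the hub id follows the `roberta-base-epoch_62` naming used in this card's tags (substitute the actual namespace if it differs):

```python
from transformers import AutoModelForMaskedLM, AutoTokenizer

# Assumed hub id for this intermediate checkpoint.
MODEL_ID = "yanaiela/roberta-base-epoch_62"

def load_checkpoint(model_id: str = MODEL_ID):
    """Load the tokenizer and masked-LM head for one training-dynamics checkpoint."""
    tokenizer = AutoTokenizer.from_pretrained(model_id)
    model = AutoModelForMaskedLM.from_pretrained(model_id)
    return tokenizer, model

# Studying training dynamics means loading several such checkpoints,
# e.g. ...epoch_0 through ...epoch_83, and comparing their predictions.
```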
879edbbd71c71b6aff0b4f02770cc3d4
apache-2.0
['pytorch', 'text-generation', 'causal-lm', 'rwkv']
false
Model Description

RWKV-2 430M is an L24-D1024 causal language model trained on the Pile. See https://github.com/BlinkDL/RWKV-LM for details. At this moment you have to use my Github code (https://github.com/BlinkDL/RWKV-v2-RNN-Pile) to run it.

ctx_len = 768
n_layer = 24
n_embd = 1024

Final checkpoint: 20220615-10803.pth, trained on the Pile for 331B tokens.
* Pile loss 2.349
* LAMBADA ppl 15.34, acc 42.42%
* PIQA acc 67.03%
* SC2016 acc 62.05%
* Hellaswag acc_norm 38.47%
6bebc2b51fb19cdeecbcc309410bc5ea
mit
['text-classification', 'zero-shot-classification']
false
Model description This model was trained on 1,279,665 hypothesis-premise pairs from 8 NLI datasets: [MultiNLI](https://huggingface.co/datasets/multi_nli), [Fever-NLI](https://github.com/easonnie/combine-FEVER-NSMN/blob/master/other_resources/nli_fever.md), [LingNLI](https://arxiv.org/abs/2104.07179) and [DocNLI](https://arxiv.org/pdf/2106.09449.pdf) (which includes [ANLI](https://github.com/facebookresearch/anli), QNLI, DUC, CNN/DailyMail, Curation). It is the only model in the model hub trained on 8 NLI datasets, including DocNLI with very long texts to learn long-range reasoning. Note that the model was trained on binary NLI, predicting either "entailment" or "not-entailment": the classes "neutral" and "contradiction" were merged into "not-entailment" to enable the inclusion of the DocNLI dataset. The base model is [DeBERTa-v3-base from Microsoft](https://huggingface.co/microsoft/deberta-v3-base). The v3 variant of DeBERTa substantially outperforms previous versions of the model through a different pre-training objective; see annex 11 of the original [DeBERTa paper](https://arxiv.org/pdf/2006.03654.pdf) as well as the [DeBERTa-V3 paper](https://arxiv.org/abs/2111.09543). For highest performance (but less speed), I recommend using https://huggingface.co/MoritzLaurer/DeBERTa-v3-large-mnli-fever-anli-ling-wanli.
7f585359593351a2d1de19ecf236acfb
mit
['text-classification', 'zero-shot-classification']
false
Simple zero-shot classification pipeline

```python
from transformers import pipeline

classifier = pipeline("zero-shot-classification", model="MoritzLaurer/DeBERTa-v3-base-mnli-fever-docnli-ling-2c")

sequence_to_classify = "Angela Merkel is a politician in Germany and leader of the CDU"
candidate_labels = ["politics", "economy", "entertainment", "environment"]
output = classifier(sequence_to_classify, candidate_labels, multi_label=False)
print(output)
```
0baa9fc9f63f9f8ee7b7b4fdb85d9f11
mit
['text-classification', 'zero-shot-classification']
false
NLI use-case

```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification
import torch

device = torch.device("cuda") if torch.cuda.is_available() else torch.device("cpu")

model_name = "MoritzLaurer/DeBERTa-v3-base-mnli-fever-docnli-ling-2c"
tokenizer = AutoTokenizer.from_pretrained(model_name)
# Move the model to the same device as the inputs.
model = AutoModelForSequenceClassification.from_pretrained(model_name).to(device)

premise = "I first thought that I liked the movie, but upon second thought it was actually disappointing."
hypothesis = "The movie was good."

inputs = tokenizer(premise, hypothesis, truncation=True, return_tensors="pt")
output = model(inputs["input_ids"].to(device))
```
eebfd09842b09ca221e306c4e8ab5861
mit
['text-classification', 'zero-shot-classification']
false
Training data This model was trained on 1,279,665 hypothesis-premise pairs from 8 NLI datasets: [MultiNLI](https://huggingface.co/datasets/multi_nli), [Fever-NLI](https://github.com/easonnie/combine-FEVER-NSMN/blob/master/other_resources/nli_fever.md), [LingNLI](https://arxiv.org/abs/2104.07179) and [DocNLI](https://arxiv.org/pdf/2106.09449.pdf) (which includes [ANLI](https://github.com/facebookresearch/anli), QNLI, DUC, CNN/DailyMail, Curation).
179ba72c76e3403c9fd7709209cef8b7
mit
['text-classification', 'zero-shot-classification']
false
Training procedure

DeBERTa-v3-small-mnli-fever-docnli-ling-2c was trained using the Hugging Face trainer with the following hyperparameters.

```
training_args = TrainingArguments(
    num_train_epochs=3,
```
dcc7fd090a780f401af6ca9e0f4f48ac
mit
['text-classification', 'zero-shot-classification']
false
Eval results

The model was evaluated using the binary test sets for MultiNLI and ANLI and the binary dev set for Fever-NLI (two classes instead of three). The metric used is accuracy.

| mnli-m-2c | mnli-mm-2c | fever-nli-2c | anli-all-2c | anli-r3-2c | lingnli-2c |
|:---------:|:----------:|:------------:|:-----------:|:----------:|:----------:|
| 0.935 | 0.933 | 0.897 | 0.710 | 0.678 | 0.895 |
1bd03671eb8ec23cc259a2874d0012d3
mit
[]
false
Model description This language-music model fine-tunes [BART-base](https://huggingface.co/facebook/bart-base) on 282,870 English text-music pairs, where all scores are represented in ABC notation. It was introduced in the paper [Exploring the Efficacy of Pre-trained Checkpoints in Text-to-Music Generation Task](https://arxiv.org/abs/2211.11216) by Wu et al. and released in [this repository](https://github.com/sander-wood/text-to-music). It is capable of generating complete and semantically consistent sheet music directly from natural-language descriptions. To the best of our knowledge, this is the first model that achieves text-conditional symbolic music generation trained on real text-music pairs, with the music generated entirely by the model without any hand-crafted rules.
ffdda06bc2a4e81f755e04568dd0143c
mit
[]
false
Intended uses & limitations You can use this model for text-conditional music generation. All scores generated by this model can be written on one stave (for vocal solo or instrumental solo) in standard classical notation, and cover a variety of styles, e.g., blues, classical, folk, jazz, pop, and world music. We recommend using the script in [this repository](https://github.com/sander-wood/text-to-music) for inference. The generated tunes are in ABC notation and can be converted to sheet music or audio using [this website](https://ldzhangyx.github.io/abc/) or [this software](https://sourceforge.net/projects/easyabc/). Its creativity is limited: it cannot perform well on tasks requiring a high degree of creativity (e.g., melody style transfer), and it is sensitive to the input. For more information, please check [our paper](https://arxiv.org/abs/2211.11216).
580a94a7cf7ddebb5aab931cfbefacf7
mit
[]
false
How to use

Here is how to use this model in PyTorch:

```python
import torch
from samplings import top_p_sampling, temperature_sampling
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained('sander-wood/text-to-music')
model = AutoModelForSeq2SeqLM.from_pretrained('sander-wood/text-to-music')
model.eval()  # inference only

max_length = 1024
top_p = 0.9
temperature = 1.0

text = "This is a traditional Irish dance music."
input_ids = tokenizer(text, return_tensors='pt', truncation=True, max_length=max_length)['input_ids']

decoder_start_token_id = model.config.decoder_start_token_id
eos_token_id = model.config.eos_token_id
decoder_input_ids = torch.tensor([[decoder_start_token_id]])

# Sample one token at a time until the end-of-sequence token is produced.
for t_idx in range(max_length):
    outputs = model(input_ids=input_ids, decoder_input_ids=decoder_input_ids)
    probs = outputs.logits[0][-1]
    probs = torch.softmax(probs, dim=-1).detach().numpy()
    sampled_id = temperature_sampling(probs=top_p_sampling(probs, top_p=top_p, return_probs=True),
                                      temperature=temperature)
    decoder_input_ids = torch.cat((decoder_input_ids, torch.tensor([[sampled_id]])), dim=1)
    if sampled_id == eos_token_id:
        tune = "X:1\n" + tokenizer.decode(decoder_input_ids[0], skip_special_tokens=True)
        print(tune)
        break
```
063ab0ea4714a9b101ebc839f653b95a
mit
[]
false
```
X:1 L:1/8 M:6/8 K:D A | BEE BEE | Bdf edB | BAF FEF | DFA BAF | BEE BEE | Bdf edB | BAF DAF | FED E2 :: A | Bef gfe | faf edB | BAF FEF | DFA BAF | Bef gfe | faf edB | BAF DAF | FED E2 :|
X:2 L:1/8 M:6/8 K:D A |: DED F2 A | d2 f ecA | G2 B F2 A | E2 F GFE | DED F2 A | d2 f ecA | Bgf edc |1 d3 d2 A :|2 d3 d2 a || a2 f d2 e | f2 g agf | g2 e c2 d | e2 f gfe | fed gfe | agf bag | fed cde | d3 d2 a | agf fed | Adf agf | gfe ecA | Ace gfe | fed gfe | agf bag | fed cde | d3 d2 ||
X:3 L:1/8 M:6/8 K:D BEE BEE | Bdf edB | BAF FEF | DFA dBA | BEE BEE | Bdf edB | BAF FEF |1 DED DFA :|2 DED D2 e |: faf edB | BAF DFA | BAF FEF | DFA dBA | faf edB | BAF DFA | BdB AFA |1 DED D2 e :|2 DED DFA ||
```
6785e0d827ca27c4ea15ffa5883fca1a
mit
[]
false
```
X:1 L:1/8 M:4/4 K:F "F" CFG |"F" A6 z G |"Fm7" A3 G"Bb7" A3 G |"F" A6 z G |"F7" A4"Eb7" G4 |"F" F6 z F | "Dm" A3 G"Dm/C" A3 G |"Bb" A2"Gm" B2"C7" G3 G |"F" F8- |"Dm7""G7" F6 z2 |"C" C4 C3 C | "C7" C2 B,2"F" C4 |"F" C4 C3 C |"Dm" D2 C2"Dm/C" D4 |"Bb" D4 D3 D |"Bb" D2 C2"C7" D4 |"F" C8- | "F" C4"Gm" z C"C7" FG |"F" A6 z G |"Fm7" A3 G"Bb7" A3 G |"F" A6 z G |"F7" A4"Eb7" G4 |"F" F6 z F | "Dm" A3 G"Dm/C" A3 G |"Bb" A2"Gm" B2"C7" G3 G |"F" F8- |"F" F6 z2 |]
X:2 L:1/4 M:4/4 K:F "^A""F" A3 A |"Am7" A2"D7" A2 |"Gm7" G2"C7" G A |"F" F4 |"F" A3 A |"Am7" A2"D7" A2 |"Gm7" G2"C7" G A | "F" F4 |"Gm" B3 B |"Am7" B2"D7" B2 |"Gm" B2"D7" B A |"Gm7" G4 |"F" A3 A |"Am7" A2"D7" A2 | "Gm7" G2"C7" G A |"F" F4 |"Bb7" F3 G |"F" A2 A2 |"Gm" B2"C7" B2 |"F" c2"D7" c c |"Gm7" c2"C7" B2 | "F" A2"F7" A2 |"Bb" B2"F" B A |"Bb" B2"F" B A |"Gm" B2"F" B A |"Gm7" B2"F" B A |"Gm7" B2"F" B A | "C7" B2 c2 |"F""Bb7" A4 |"F""Bb7" z4 |]
X:3 L:1/4 M:4/4 K:Bb B, ||"Gm""^A1" G,2 B, D |"D7" ^F A2 G/=F/ |"Gm" G2"Cm7" B c |"F7" A2 G =F |"Bb" D2 F A | "Cm7" c e2 d/c/ |"Gm7" B3/2 G/-"C7" G2- |"F7" G2 z B, |"Gm""^B" G,2 B, D |"D7" ^F A2 G/=F/ | "Gm" G2"Cm7" B c |"F7" A2 G =F |"Bb" D2 F A |"Cm7" c e2 d/c/ |"Gm7" B3/2 G/-"C7" G2- |"F7" G2 z2 || "^C""F7""^A2" F4- | F E D C |"Bb" D2 F B | d3 c/B/ |"F" A2"Cm7" G2 |"D7" ^F2 G2 |"Gm" B3"C7" A | "F7" G4 ||"F7""^A3" F4- | F E D C |"Bb" D2 F B | d3 c/B/ |"F" A2"Cm7" G2 |"D7" ^F2 G2 |"Gm" B3 A | "C7" G4 ||"^B""Gm""^C" B2 c B |"Cm" c B c B |"Gm7" c2 B A |"C7" B3 A |"Bb" B2 c B |"G7" d c B A | "Cm" G2 A G |"F7" F2 z G ||"^C""F7" F F3 |"Bb" D D3 |"Cm" E E3 |"D7" ^F F3 |"Gm" G2 A B |"C7" d3 d | "Gm" d3 d |"D7" d3 B, ||"^D""Gm" G,2 B, D |"D7" ^F A2 G/=F/ |"Gm" G2"Cm7" B c |"F7" A2 G =F | "Bb" D2 F A |"Cm7" c e2 d/c/ |"Gm7" B3/2 G/-"C7" G2- |"F7" G2 z2 |]
```
a45c000e333ec90616da82da89ec0415
mit
[]
false
This is a Chinese folk song from the Jiangnan region. It was created during the Qianlong era (1735-1796) of the Qing dynasty. Over time, many regional variations were created, and the song gained popularity both in China and abroad. One version of the song describes a custom of giving jasmine flowers, popular in the southern Yangtze delta region of China.
07a9d8dfac1bff1266f27c2990673a03
mit
[]
false
```
X:1 L:1/8 Q:1/4=100 M:2/4 K:C "^Slow" DA A2 | GA c2- | c2 G2 | c2 GF | GA/G/ F2 | E2 DC | DA A2 | GA c2- | c2 GA | cd- d2 | cA c2- | c2 GA | cd- d2 | cA c2- | c2 GA | c2 A2 | c2 d2 | cA c2- | c2 c2 | A2 G2 | F2 AG | F2 ED | CA,/C/ D2- | D2 CD | F2 A2 | G2 ED | CG A2 | G2 FD | CA,/C/ D2- | D2 CD | F2 A2 | G2 ED | CG A2 | G2 FD | CA,/C/ D2- | D2 z2 :|
X:2 L:1/8 Q:1/4=100 M:2/4 K:C "^ MDolce" Ac de | d2 AG | cA cd | A2 AG | E2 ED | CD E2- | E2 z2 | EG ed | c2 AG | cA cd | A2 AG | E2 ED | CD E2- | E2 z2 |"^ howeveroda" Ac de | d2 AG | cA cd | A2 AG | E2 ED | CD E2- | E2 z2 | A2 cA | GA E2- | E2 z2 | GA cd | e2 ed | cd e2- | e2 z2 | ge d2 | cd c2- | c2 z2 | Ac de | d2 AG | cA cd | A2 AG | E2 ED | CD E2- | E2 z2 | EG ed | c2 AG | cA cd | A2 AG | E2 ED | CD E2- | E2 z2 |"^DDtisata" Ac de | d2 AG | cA cd | A2 AG | E2 ED | CD E2- | E2 z2 | A2 cA | GA E2- | E2 z2 | GA cd | e2 ed | cd e2- | e2 z2 | ge d2 | cd c2- | c2 z2 | Ac de | d2 AG | cA cd | A2 AG | E2 ED | CD E2- | E2 z2 | Ac de | d2 AG | cA cd | A2 AG | E2 ED | CD E2- | E2 z2 | Ac de | d2 AG | cA cd | A2 AG | E2 ED | CD E2- | E2 z2 |"^ Easy" Ac de | d2 AG | cA cd | A2 AG | E2 ED | CD E2- | E2 z2 | Ac de | d2 AG | cA cd | A2 AG | E2 ED | CD E2- | E2 z2 |]
X:3 L:1/8 Q:1/4=60 M:4/4 K:C "^S books defe.." AA A2 cdcc | AcAG A4- | A8 | A,4 CD C2 | A,4 cdcA | A2 GA- A4- | A2 GA A2 AA | AG E2 D2 C2 | D6 ED | C2 D4 C2 | D2 C2 D4 | C2 A,2 CD C2 | A,4 cdcA | A2 GA- A4- | A2 GA A2 AA | AG E2 D2 C2 | D6 z2 |]
```
6f3f21ae618d29fd8012879742ede8e8
mit
[]
false
BibTeX entry and citation info

```bibtex
@inproceedings{wu2023exploring,
    title={Exploring the Efficacy of Pre-trained Checkpoints in Text-to-Music Generation Task},
    author={Shangda Wu and Maosong Sun},
    booktitle={The AAAI-23 Workshop on Creative AI Across Modalities},
    year={2023},
    url={https://openreview.net/forum?id=QmWXskBhesn}
}
```
a4b5fdf4f9f3eed4710c65e4bd1e61c3
apache-2.0
['generated_from_trainer']
false
distilbert-base-uncased-finetuned-squad This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the squad_v2 dataset. It achieves the following results on the evaluation set: - Loss: 1.4101
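A checkpoint like this is typically served through the `question-answering` pipeline. The sketch below is illustrative: the repo id is a placeholder (the excerpt does not name the published checkpoint), and since squad_v2 contains unanswerable questions, `handle_impossible_answer=True` lets the pipeline return the null answer.

```python
from transformers import pipeline

# Placeholder repo id -- substitute the fine-tuned squad_v2 checkpoint.
MODEL_ID = "your-namespace/distilbert-base-uncased-finetuned-squad"

def answer(question: str, context: str, model_id: str = MODEL_ID):
    """Extractive QA; allows the null answer for unanswerable squad_v2-style questions."""
    qa = pipeline("question-answering", model=model_id)
    return qa(question=question, context=context, handle_impossible_answer=True)
```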
e994a16716eb9e10394e7d9c9423d658
apache-2.0
['generated_from_trainer']
false
Training results

| Training Loss | Epoch | Step  | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 1.2109        | 1.0   | 8235  | 1.2303          |
| 0.9385        | 2.0   | 16470 | 1.2412          |
| 0.7448        | 3.0   | 24705 | 1.4101          |
cc56b1a5cb3d3cd8051529591ba5ec89
apache-2.0
['translation']
false
opus-mt-sn-sv

* source languages: sn
* target languages: sv
* OPUS readme: [sn-sv](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/sn-sv/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/sn-sv/opus-2020-01-16.zip)
* test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/sn-sv/opus-2020-01-16.test.txt)
* test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/sn-sv/opus-2020-01-16.eval.txt)
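These weights can also be used from transformers via the Marian classes. A minimal sketch, assuming the usual Helsinki-NLP hub naming for this model:

```python
from transformers import MarianMTModel, MarianTokenizer

MODEL_ID = "Helsinki-NLP/opus-mt-sn-sv"  # assumed hub id for this OPUS-MT model

def translate(texts, model_id: str = MODEL_ID):
    """Translate a list of Shona (sn) sentences into Swedish (sv)."""
    tokenizer = MarianTokenizer.from_pretrained(model_id)
    model = MarianMTModel.from_pretrained(model_id)
    batch = tokenizer(texts, return_tensors="pt", padding=True)
    generated = model.generate(**batch)
    return [tokenizer.decode(t, skip_special_tokens=True) for t in generated]
```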
e4c106ee82888fb7cd26b0ffadafdd5e
other
['generated_from_trainer']
false
TextGen_Opt350M This model is a fine-tuned version of [facebook/opt-350m](https://huggingface.co/facebook/opt-350m) on the None dataset. It achieves the following results on the evaluation set: - Loss: 3.6053
08a2f38e105d44492c3e4159de20e175
other
['generated_from_trainer']
false
Training results

| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 3.5886        | 1.0   | 2056 | 3.5856          |
| 3.2797        | 2.0   | 4112 | 3.5879          |
| 3.0513        | 3.0   | 6168 | 3.6053          |
ed854e0149756b84855a575429ed0d22
apache-2.0
['generated_from_trainer']
false
distilled-mt5-small-0.8-2

This model is a fine-tuned version of [google/mt5-small](https://huggingface.co/google/mt5-small) on the wmt16 ro-en dataset. It achieves the following results on the evaluation set:
- Loss: 3.7465
- Bleu: 1.3564
- Gen Len: 89.6103
1f1b755577a33ca0d5003718beff7a96
apache-2.0
['generated_from_trainer']
false
wav2vec2-large-xls-r-300m-ur This model is a fine-tuned version of [anuragshas/wav2vec2-large-xls-r-300m-ur](https://huggingface.co/anuragshas/wav2vec2-large-xls-r-300m-ur) on the common_voice dataset. It achieves the following results on the evaluation set: - Loss: 2.0508 - Wer: 0.7328
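Inference with a wav2vec2 CTC model follows the standard processor-plus-model pattern. A minimal sketch; `MODEL_ID` is the base checkpoint named above, so substitute the fine-tuned repo id, and the waveform is assumed to be a 1-D float array sampled at 16 kHz.

```python
import torch
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor

# The base checkpoint named above; substitute the fine-tuned Urdu repo id.
MODEL_ID = "anuragshas/wav2vec2-large-xls-r-300m-ur"

def transcribe(waveform, sampling_rate: int = 16000, model_id: str = MODEL_ID):
    """Greedy CTC decoding of a 1-D float waveform sampled at 16 kHz."""
    processor = Wav2Vec2Processor.from_pretrained(model_id)
    model = Wav2Vec2ForCTC.from_pretrained(model_id)
    inputs = processor(waveform, sampling_rate=sampling_rate, return_tensors="pt")
    with torch.no_grad():
        logits = model(inputs.input_values).logits
    predicted_ids = torch.argmax(logits, dim=-1)
    return processor.batch_decode(predicted_ids)[0]
```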
c2339ee4cc3541ac5c4e989030e4ddd6
apache-2.0
['generated_from_trainer']
false
Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 7.5e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.12
- num_epochs: 240
c5a22a93a88e37d8e8bfca0cfc626632
apache-2.0
['generated_from_trainer']
false
Training results

| Training Loss | Epoch  | Step | Validation Loss | Wer    |
|:-------------:|:------:|:----:|:---------------:|:------:|
| 0.0719        | 66.67  | 400  | 1.8510          | 0.7432 |
| 0.0284        | 133.33 | 800  | 2.0088          | 0.7415 |
| 0.014         | 200.0  | 1200 | 2.0508          | 0.7328 |
b84517e4f48719470abac2f64a24a72e
apache-2.0
['generated_from_trainer']
false
distilbert-base-uncasedv1-finetuned-twitter-sentiment

This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the sentiment140 dataset. It achieves the following results on the evaluation set:
- Loss: 0.3985
- Accuracy: 0.8247
- F1: 0.8246
- Precision: 0.8251
- Recall: 0.8017
7ca7a9a9afad7808038a9e60543d7bab
apache-2.0
['generated_from_trainer']
false
Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1     | Precision | Recall |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|:---------:|:------:|
| No log        | 1.0   | 500  | 0.4049          | 0.8181   | 0.8178 | 0.8236    | 0.7862 |
| No log        | 2.0   | 1000 | 0.3985          | 0.8247   | 0.8246 | 0.8251    | 0.8017 |
d5665cadbee2a220ffd68df13f2a18e2
apache-2.0
['whisper-event', 'generated_from_trainer']
false
Whisper Large-V2 Slovenian - Drishti Sharma This model is a fine-tuned version of [openai/whisper-large-v2](https://huggingface.co/openai/whisper-large-v2) on the Common Voice 11.0 dataset. It achieves the following results on the evaluation set: - Loss: 0.2118 - Wer: 13.8338
ff04758eae2bfe2af68dec0781471cac
apache-2.0
['whisper-event', 'generated_from_trainer']
false
Training results

| Training Loss | Epoch | Step | Validation Loss | Wer     |
|:-------------:|:-----:|:----:|:---------------:|:-------:|
| 0.0118        | 3.04  | 1000 | 0.2118          | 13.8338 |
8a170d66f08067efcc1a78b9604a5f7e
mit
['bart', 'pytorch']
false
bart-large-japanese This model is converted from the original [Japanese BART Pretrained model](https://nlp.ist.i.kyoto-u.ac.jp/?BART%E6%97%A5%E6%9C%AC%E8%AA%9EPretrained%E3%83%A2%E3%83%87%E3%83%AB) released by Kyoto University. Both the encoder and decoder outputs are identical to the original Fairseq model.
3359b4fa177c522ea89f7e8a95a24c76
mit
['bart', 'pytorch']
false
How to use the model

The input text should be tokenized by [BartJapaneseTokenizer](https://huggingface.co/Formzu/bart-large-japanese/blob/main/tokenization_bart_japanese.py).

Tokenizer requirements:
* [Juman++](https://github.com/ku-nlp/jumanpp)
* [zenhan](https://pypi.org/project/zenhan/)
* [pyknp](https://pypi.org/project/pyknp/)
* [sentencepiece](https://pypi.org/project/sentencepiece/)
6901afff578ca996b534c9e7851b29cd
mit
['bart', 'pytorch']
false
Simple FillMaskPipeline

```python
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer, pipeline

model_name = "Formzu/bart-large-japanese"
model = AutoModelForSeq2SeqLM.from_pretrained(model_name)
tokenizer = AutoTokenizer.from_pretrained(model_name, trust_remote_code=True)

masked_text = "天気が<mask>から散歩しましょう。"

fill_mask = pipeline("fill-mask", model=model, tokenizer=tokenizer)
out = fill_mask(masked_text)
print(out)
```
b496b34099cff048e63598acccc50e20
mit
['bart', 'pytorch']
false
Text Generation

```python
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer
import torch

device = torch.device("cuda") if torch.cuda.is_available() else torch.device("cpu")

model_name = "Formzu/bart-large-japanese"
model = AutoModelForSeq2SeqLM.from_pretrained(model_name).to(device)
tokenizer = AutoTokenizer.from_pretrained(model_name, trust_remote_code=True)

masked_text = "天気が<mask>から散歩しましょう。"

inp = tokenizer(masked_text, return_tensors='pt').to(device)
out = model.generate(**inp, num_beams=1, min_length=0, max_length=20,
                     early_stopping=True, no_repeat_ngram_size=2)
res = "".join(tokenizer.decode(out.squeeze(0).tolist(), skip_special_tokens=True).split(" "))
print(res)
```
f4fde716597df4f03f8294f0d220ca13
apache-2.0
['whisper-event', 'generated_from_trainer']
false
Whisper Large-V2 Nepali This model is a fine-tuned version of [DrishtiSharma/whisper-large-v2-hindi-3k-steps](https://huggingface.co/DrishtiSharma/whisper-large-v2-hindi-3k-steps) on the Common Voice 11.0 dataset. It achieves the following results on the evaluation set: - Loss: 0.8851 - Wer: 9.7561
ee50d9c0d427545d8042ad41804b039e
apache-2.0
['whisper-event', 'generated_from_trainer']
false
Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 100
- training_steps: 200
- mixed_precision_training: Native AMP
fb223a19c8bbfe2184719f47ca36bdb5
apache-2.0
['whisper-event', 'generated_from_trainer']
false
Training results

| Training Loss | Epoch | Step | Validation Loss | Wer    |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 0.0           | 200.0 | 200  | 0.8851          | 9.7561 |
4532b32d8f1772d1bc34093640026d6b
mit
[]
false
Bluebey on Stable Diffusion This is the `<bluebey>` concept taught to Stable Diffusion via Textual Inversion. You can load this concept into the [Stable Conceptualizer](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/stable_conceptualizer_inference.ipynb) notebook. You can also train your own concepts and load them into the concept libraries using [this notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_textual_inversion_training.ipynb). Here is the new concept you will be able to use as an `object`: ![<bluebey> 0](https://huggingface.co/sd-concepts-library/bluebey/resolve/main/concept_images/4.jpeg) ![<bluebey> 1](https://huggingface.co/sd-concepts-library/bluebey/resolve/main/concept_images/0.jpeg) ![<bluebey> 2](https://huggingface.co/sd-concepts-library/bluebey/resolve/main/concept_images/3.jpeg) ![<bluebey> 3](https://huggingface.co/sd-concepts-library/bluebey/resolve/main/concept_images/2.jpeg) ![<bluebey> 4](https://huggingface.co/sd-concepts-library/bluebey/resolve/main/concept_images/1.jpeg)
78bc4c819dd29ef65dd69489cda2ece7
creativeml-openrail-m
['text-to-image', 'stable-diffusion']
false
Raiden_Shogun_DB Dreambooth model trained by Falon with [TheLastBen's fast-DreamBooth](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast-DreamBooth.ipynb) notebook Test the concept via A1111 Colab [fast-Colab-A1111](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast_stable_diffusion_AUTOMATIC1111.ipynb) Sample pictures of this concept:
6a299e6c9c8544abe4d06fe3e058bdff
apache-2.0
['t5-lm-adapt']
false
T5 Version 1.1 - LM Adapted (lm100k) includes the following improvements compared to the original [T5 model](https://huggingface.co/t5-large):
- GEGLU activation in the feed-forward hidden layer, rather than ReLU - see [here](https://arxiv.org/abs/2002.05202).
- Dropout was turned off in pre-training (quality win). Dropout should be re-enabled during fine-tuning.
- Pre-trained on C4 only, without mixing in the downstream tasks.
- No parameter sharing between the embedding and classifier layer.
- "xl" and "xxl" replace "3B" and "11B". The model shapes are a bit different - larger `d_model` and smaller `num_heads` and `d_ff`.

It is pretrained on both the denoising and language modeling objectives. More specifically, this checkpoint is initialized from [T5 Version 1.1 - Large](https://huggingface.co/google/t5-v1_1-large) and then trained for an additional 100K steps on the LM objective discussed in the [T5 paper](https://arxiv.org/pdf/1910.10683.pdf). This adaptation improves the ability of the model to be used for prompt tuning. **Note**: A popular fine-tuned version of the *T5 Version 1.1 - LM Adapted* model is [BigScience's T0pp](https://huggingface.co/bigscience/T0pp). Pretraining Dataset: [C4](https://huggingface.co/datasets/c4) Other Community Checkpoints: [here](https://huggingface.co/models?other=t5-lm-adapt) Paper: [Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer](https://arxiv.org/pdf/1910.10683.pdf) Authors: *Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, Peter J. Liu*
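Because of the additional LM-objective training, the checkpoint behaves as a prefix language model. A minimal sketch; the hub id is an assumption (Google publishes the LM-adapted checkpoints under names like `t5-large-lm-adapt`):

```python
from transformers import T5ForConditionalGeneration, T5Tokenizer

MODEL_ID = "google/t5-large-lm-adapt"  # assumed hub id for the large LM-adapted checkpoint

def continue_text(prompt: str, model_id: str = MODEL_ID, max_new_tokens: int = 32):
    """Use the LM-adapted checkpoint to continue a text prefix."""
    tokenizer = T5Tokenizer.from_pretrained(model_id)
    model = T5ForConditionalGeneration.from_pretrained(model_id)
    input_ids = tokenizer(prompt, return_tensors="pt").input_ids
    out = model.generate(input_ids, max_new_tokens=max_new_tokens)
    return tokenizer.decode(out[0], skip_special_tokens=True)
```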
b9fd9dbaad0ee3078a5c63e0a9b46cef
apache-2.0
['image-classification', 'pytorch', 'onnx']
false
Usage instructions

```python
from PIL import Image
from torchvision.transforms import Compose, ConvertImageDtype, Normalize, PILToTensor, Resize
from torchvision.transforms.functional import InterpolationMode

from holocron.models import model_from_hf_hub

model = model_from_hf_hub("frgfm/repvgg_a2").eval()

img = Image.open(path_to_an_image).convert("RGB")
```
55cef41739d13f584b54106df578b807
apache-2.0
['generated_from_trainer']
false
correct_distilBERT_token_itr0_1e-05_all_01_03_2022-15_43_47

This model is a fine-tuned version of [distilbert-base-uncased-finetuned-sst-2-english](https://huggingface.co/distilbert-base-uncased-finetuned-sst-2-english) on the None dataset. It achieves the following results on the evaluation set:
- Loss: 0.3343
- Precision: 0.1651
- Recall: 0.3039
- F1: 0.2140
- Accuracy: 0.8493
a87fdbc46b70c6f4c6160c13e9e9bfd1
apache-2.0
['generated_from_trainer']
false
Training results

| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1     | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log        | 1.0   | 30   | 0.4801          | 0.0352    | 0.0591 | 0.0441 | 0.7521   |
| No log        | 2.0   | 60   | 0.3795          | 0.0355    | 0.0795 | 0.0491 | 0.8020   |
| No log        | 3.0   | 90   | 0.3359          | 0.0591    | 0.1294 | 0.0812 | 0.8334   |
| No log        | 4.0   | 120  | 0.3205          | 0.0785    | 0.1534 | 0.1039 | 0.8486   |
| No log        | 5.0   | 150  | 0.3144          | 0.0853    | 0.1571 | 0.1105 | 0.8516   |
a9599e453691655de17a324c7e50054b
apache-2.0
['text-classification', 'generated_from_trainer']
false
platzi-roberta-bryan

This model is a fine-tuned version of [distilroberta-base](https://huggingface.co/distilroberta-base) on the GLUE MRPC dataset. It achieves the following results on the evaluation set:
- Loss: 0.6294
- Accuracy: 0.8309
- F1: 0.8787
ae384d519c1ce2c39a6ec48a67124981
apache-2.0
['text-classification', 'generated_from_trainer']
false
Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | |:-------------:|:-----:|:----:|:---------------:|:--------:|:------:| | 0.3798 | 1.09 | 500 | 0.6294 | 0.8309 | 0.8787 | | 0.3876 | 2.18 | 1000 | 0.6294 | 0.8309 | 0.8787 |
1a13294161b206b1aa11fb952d6e4ee5
apache-2.0
['generated_from_keras_callback']
false
annaeze/lab9_2 This model is a fine-tuned version of [annaeze/lab9_1](https://huggingface.co/annaeze/lab9_1) on an unknown dataset. It achieves the following results on the evaluation set: - Train Loss: 0.0642 - Validation Loss: 0.0854 - Epoch: 2
d5318cd4da4c6c8587c8852f81dcddf6
apache-2.0
['generated_from_keras_callback']
false
Training results | Train Loss | Validation Loss | Epoch | |:----------:|:---------------:|:-----:| | 0.3518 | 0.1309 | 0 | | 0.0959 | 0.1059 | 1 | | 0.0642 | 0.0854 | 2 |
fbb66e76d28c1dee14148f192f9edf84
other
[]
false
diff-svc一键包

原项目地址:https://github.com/openvpi/diff-svc
vst插件:https://github.com/zhaohui8969/VST_NetProcess-/tree/master
代码修改:@ChrisPreston
模型训练:@ChrisPreston
音源:Aqua Ch. 湊あくあ https://www.youtube.com/@MinatoAqua カバー株式会社

模型使用协议(重要):
1. 请勿用于商业目的
2. 请勿用于会影响主播本人的行为(比如冒充本人发表争议言论)
3. 请勿用于血腥、暴力、性相关、政治相关内容
4. 不允许二次分发模型
5. 非个人使用场合请注明模型作者@ChrisPreston以及diff-svc原项目
6. 允许用于个人娱乐场景下的游戏语音、直播活动,不得用于低创内容,用于直播前请与本人联系

联系方式:电邮:kameiliduo0825@gmail.com, b站:https://space.bilibili.com/18801308
免责声明:由于使用本模型造成的法律纠纷本人概不负责

diff-svc easy package

Original repository: https://github.com/openvpi/diff-svc
vst plugin: https://github.com/zhaohui8969/VST_NetProcess-/tree/master
Code modification: @ChrisPreston
Model training: @ChrisPreston
Sound source: Aqua Ch. Minato Aqua https://www.youtube.com/@MinatoAqua Cover Corp.

Model usage agreement (important):
1. Do not use it for commercial purposes
2. Do not use it for actions that would affect Minato Aqua herself (such as impersonating her to make controversial remarks)
3. Do not use it for bloody, violent, sexual, or political content
4. No redistribution of the model is allowed
5. For non-personal use, please credit the model author @ChrisPreston and the original diff-svc project
6. The model may be used for game voice and live-streaming activities in personal entertainment scenarios, but not for low-effort content. Please contact me before using it for live streaming

Contact information: Email: kameiliduo0825@gmail.com, Bilibili: https://space.bilibili.com/18801308
Disclaimer: I am not responsible for any legal disputes arising from the use of this model
4af19f078fc74d60953655eb44abc8ad
apache-2.0
['vision', 'image-classification']
false
How to use

Here is how to use this model to classify an image of the COCO 2017 dataset into one of the 1,000 ImageNet classes:

```python
from transformers import AutoFeatureExtractor
from optimum.onnxruntime import ORTModelForImageClassification
from optimum.pipelines import pipeline

feature_extractor = AutoFeatureExtractor.from_pretrained("optimum/vit-base-patch16-224")
```
378a124da6c2c6968d6d5931a556baf3
apache-2.0
['vision', 'image-classification']
false
Loading already converted and optimized ORT checkpoint for inference

```python
model = ORTModelForImageClassification.from_pretrained("optimum/vit-base-patch16-224")

onnx_img_classif = pipeline(
    "image-classification", model=model, feature_extractor=feature_extractor
)

url = "http://images.cocodataset.org/val2017/000000039769.jpg"
pred = onnx_img_classif(url)
print("Top-5 predicted classes:", pred)
```
9b47e5397c1e49a3e5ae65eb948be0e5
mit
[]
false
scrap-style on Stable Diffusion

This is the `<style-scrap>` concept taught to Stable Diffusion via Textual Inversion. You can load this concept into the [Stable Conceptualizer](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/stable_conceptualizer_inference.ipynb) notebook. You can also train your own concepts and load them into the concept libraries using [this notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_textual_inversion_training.ipynb).

Here is the new concept you will be able to use as a `style`:

![<style-scrap> 0](https://huggingface.co/sd-concepts-library/scrap-style/resolve/main/concept_images/3.jpeg)
![<style-scrap> 1](https://huggingface.co/sd-concepts-library/scrap-style/resolve/main/concept_images/0.jpeg)
![<style-scrap> 2](https://huggingface.co/sd-concepts-library/scrap-style/resolve/main/concept_images/2.jpeg)
![<style-scrap> 3](https://huggingface.co/sd-concepts-library/scrap-style/resolve/main/concept_images/1.jpeg)
![<style-scrap> 4](https://huggingface.co/sd-concepts-library/scrap-style/resolve/main/concept_images/4.jpeg)
b501d9fe0fff29e750b4937dffe026f0
apache-2.0
['generated_from_trainer']
false
finetuned_token_2e-05_16_02_2022-01_30_30 This model is a fine-tuned version of [distilbert-base-uncased-finetuned-sst-2-english](https://huggingface.co/distilbert-base-uncased-finetuned-sst-2-english) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.1748 - Precision: 0.3384 - Recall: 0.3492 - F1: 0.3437 - Accuracy: 0.9442
03c282d4481e550a17f7ca1584f2b90f
apache-2.0
['generated_from_trainer']
false
Training results | Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:| | No log | 1.0 | 38 | 0.3180 | 0.0985 | 0.1648 | 0.1233 | 0.8643 | | No log | 2.0 | 76 | 0.2667 | 0.1962 | 0.2698 | 0.2272 | 0.8926 | | No log | 3.0 | 114 | 0.2374 | 0.2268 | 0.3005 | 0.2585 | 0.9062 | | No log | 4.0 | 152 | 0.2305 | 0.2248 | 0.3247 | 0.2657 | 0.9099 | | No log | 5.0 | 190 | 0.2289 | 0.2322 | 0.3166 | 0.2679 | 0.9102 |
a9831466e1c7fe4ed991cf80de622ced
apache-2.0
['image-classification', 'timm']
false
Model card for maxvit_base_tf_384.in1k An official MaxViT image classification model. Trained in tensorflow on ImageNet-1k by paper authors. Ported from official Tensorflow implementation (https://github.com/google-research/maxvit) to PyTorch by Ross Wightman.
b9ec66d2d198f23c89b4ebacf5fb760e
apache-2.0
['image-classification', 'timm']
false
Model Details - **Model Type:** Image classification / feature backbone - **Model Stats:** - Params (M): 119.7 - GMACs: 73.8 - Activations (M): 332.9 - Image size: 384 x 384 - **Papers:** - MaxViT: Multi-Axis Vision Transformer: https://arxiv.org/abs/2204.01697 - **Dataset:** ImageNet-1k
164549b6aea31382275ee7cf6f51dc90
apache-2.0
['image-classification', 'timm']
false
Image Classification

```python
from urllib.request import urlopen
from PIL import Image
import timm

img = Image.open(
    urlopen('https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'))

model = timm.create_model('maxvit_base_tf_384.in1k', pretrained=True)
model = model.eval()
```
306e40343b23026f9007568fd570036b
apache-2.0
['image-classification', 'timm']
false
Feature Map Extraction

```python
from urllib.request import urlopen
from PIL import Image
import timm

img = Image.open(
    urlopen('https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'))

model = timm.create_model(
    'maxvit_base_tf_384.in1k',
    pretrained=True,
    features_only=True,
)
model = model.eval()
```
fd478b8a0e148f9f8d8c7163959b5bb4
apache-2.0
['image-classification', 'timm']
false
Image Embeddings

```python
from urllib.request import urlopen
from PIL import Image
import timm

img = Image.open(
    urlopen('https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'))

model = timm.create_model(
    'maxvit_base_tf_384.in1k',
    pretrained=True,
    num_classes=0,  # remove classifier nn.Linear
)
model = model.eval()
```
0453e92d5401d1945d01af25e5eb2402
apache-2.0
['translation']
false
opus-mt-fr-pon * source languages: fr * target languages: pon * OPUS readme: [fr-pon](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/fr-pon/README.md) * dataset: opus * model: transformer-align * pre-processing: normalization + SentencePiece * download original weights: [opus-2020-01-16.zip](https://object.pouta.csc.fi/OPUS-MT-models/fr-pon/opus-2020-01-16.zip) * test set translations: [opus-2020-01-16.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/fr-pon/opus-2020-01-16.test.txt) * test set scores: [opus-2020-01-16.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/fr-pon/opus-2020-01-16.eval.txt)
3b6919940977253cf3227b840bed8fdb
apache-2.0
['image-classification', 'timm']
false
Model card for maxvit_base_tf_512.in21k_ft_in1k An official MaxViT image classification model. Pretrained in tensorflow on ImageNet-21k (21843 Google specific instance of ImageNet-22k) and fine-tuned on ImageNet-1k by paper authors. Ported from official Tensorflow implementation (https://github.com/google-research/maxvit) to PyTorch by Ross Wightman.
89ac0a229f31505e3d8b344039f3bc1b
apache-2.0
['image-classification', 'timm']
false
Model Details - **Model Type:** Image classification / feature backbone - **Model Stats:** - Params (M): 119.9 - GMACs: 138.0 - Activations (M): 704.0 - Image size: 512 x 512 - **Papers:** - MaxViT: Multi-Axis Vision Transformer: https://arxiv.org/abs/2204.01697 - **Dataset:** ImageNet-1k - **Pretrain Dataset:** ImageNet-21k
122d97a936ac4932811a7ca8a404a385
apache-2.0
['image-classification', 'timm']
false
Image Classification

```python
from urllib.request import urlopen
from PIL import Image
import timm

img = Image.open(
    urlopen('https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'))

model = timm.create_model('maxvit_base_tf_512.in21k_ft_in1k', pretrained=True)
model = model.eval()
```
3b0ed86567e8269130f313f6915de987
apache-2.0
['image-classification', 'timm']
false
Feature Map Extraction

```python
from urllib.request import urlopen
from PIL import Image
import timm

img = Image.open(
    urlopen('https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'))

model = timm.create_model(
    'maxvit_base_tf_512.in21k_ft_in1k',
    pretrained=True,
    features_only=True,
)
model = model.eval()
```
39f4c9271bd60daf53d0282b598e13aa
apache-2.0
['image-classification', 'timm']
false
Image Embeddings

```python
from urllib.request import urlopen
from PIL import Image
import timm

img = Image.open(
    urlopen('https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'))

model = timm.create_model(
    'maxvit_base_tf_512.in21k_ft_in1k',
    pretrained=True,
    num_classes=0,  # remove classifier nn.Linear
)
model = model.eval()
```
9abc28c30ea5ff8efdb46ba642fab192
apache-2.0
['automatic-speech-recognition', 'zh-CN']
false
exp_w2v2t_zh-cn_vp-nl_s160 Fine-tuned [facebook/wav2vec2-large-nl-voxpopuli](https://huggingface.co/facebook/wav2vec2-large-nl-voxpopuli) for speech recognition using the train split of [Common Voice 7.0 (zh-CN)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0). When using this model, make sure that your speech input is sampled at 16kHz. This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
3c752037b211875fc7e83ff621046381
apache-2.0
['generated_from_trainer']
false
sentiment-browser-extension This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.7068 - Accuracy: 0.8516 - F1: 0.8690
492fe4fa9a620cf40d0aa7259a0e918c
creativeml-openrail-m
['stable-diffusion', 'anime', 'anything-v4', 'art', 'arxiv:2210.14140']
false
Fast Anime PromptGen

This model was trained on a dataset of **80,000** safe anime prompts for 3 epochs. I fetched the prompts from the [Safebooru API endpoint](https://safebooru.donmai.us/posts/random.json), but only accepted unique prompts with **up_score ≥ 8** and without any [blacklisted tags](./blacklist.txt). I didn't release the V1 model because it only generated gibberish prompts. After trying every means of correcting that behavior, I eventually determined that the gibberish was caused not by the pipeline parameters, model structure, or training duration, but by the random usernames in the training data. Here's the complete [prompt preprocessing algorithm](./preprocess.py).
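The filtering described above (unique prompts, up_score ≥ 8, no blacklisted tags) can be sketched as follows. The tag names and post fields here are illustrative stand-ins, not the actual `preprocess.py`:

```python
# Illustrative stand-ins for the real blacklist entries
BLACKLIST = {"username_tag", "watermark"}

def filter_prompts(posts):
    """Keep unique prompts with up_score >= 8 and no blacklisted tags."""
    seen, kept = set(), []
    for post in posts:
        tags = set(post["tag_string"].split())
        prompt = ", ".join(sorted(tags))
        if post["up_score"] >= 8 and not (tags & BLACKLIST) and prompt not in seen:
            seen.add(prompt)
            kept.append(prompt)
    return kept

posts = [
    {"tag_string": "1girl smile", "up_score": 12},
    {"tag_string": "1girl watermark", "up_score": 20},  # blacklisted tag
    {"tag_string": "1boy", "up_score": 3},              # score too low
    {"tag_string": "smile 1girl", "up_score": 9},       # duplicate after sorting
]
print(filter_prompts(posts))  # ['1girl, smile']
```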
5ab627782b9e195d097476b0d34f82e2
creativeml-openrail-m
['stable-diffusion', 'anime', 'anything-v4', 'art', 'arxiv:2210.14140']
false
Text-to-image Examples Prefix *1girl* | [Generated *1girl* prompts](./anime_girl_settings.txt) | Model *Anything V4* ![](./anime_girls.png) Prefix *1boy*  | [Generated *1boy* prompts](./anime_boy_settings.txt) | Model *Anything V4* ![](./anime_boys.png)
e0f4654a48af69f4334fc2abe5e1fc6e
creativeml-openrail-m
['stable-diffusion', 'anime', 'anything-v4', 'art', 'arxiv:2210.14140']
false
Contrastive Search

```
pip install --upgrade transformers
```

```python
import torch
from transformers import GPT2Tokenizer, GPT2LMHeadModel, pipeline

tokenizer = GPT2Tokenizer.from_pretrained('distilgpt2')
tokenizer.add_special_tokens({'pad_token': '[PAD]'})
model = GPT2LMHeadModel.from_pretrained('FredZhang7/anime-anything-promptgen-v2')
nlp = pipeline('text-generation', model=model, tokenizer=tokenizer)

prompt = r'1girl, genshin'
```
44538c57b76285bbbffb957a7e3e626e
creativeml-openrail-m
['stable-diffusion', 'anime', 'anything-v4', 'art', 'arxiv:2210.14140']
false
```python
# generate 10 samples using contrastive search
outs = nlp(prompt, max_length=76, num_return_sequences=10,
           do_sample=True, repetition_penalty=1.2, temperature=0.7,
           top_k=4, early_stopping=True)

print('\nInput:\n' + 100 * '-')
print('\033[96m' + prompt + '\033[0m')
print('\nOutput:\n' + 100 * '-')
for i in range(len(outs)):
eb47de3067aaf7b675d5b6803a03ef17
creativeml-openrail-m
['stable-diffusion', 'anime', 'anything-v4', 'art', 'arxiv:2210.14140']
false
    # remove trailing commas and double spaces
    outs[i] = str(outs[i]['generated_text']).replace('  ', '').rstrip(',')
print('\033[92m' + '\n\n'.join(outs) + '\033[0m\n')
```

Output Example:

![](./contrastive_search.png)

Please see [Fast GPT PromptGen](https://huggingface.co/FredZhang7/distilgpt2-stable-diffusion-v2) for more info on the pipeline parameters.
3a168875cbd051ef0bba6b4bf032fc7a
creativeml-openrail-m
['stable-diffusion', 'anime', 'anything-v4', 'art', 'arxiv:2210.14140']
false
Awesome Tips - If you feel like a generated anime character doesn't show emotions, try emoticons like `;o`, `:o`, `;p`, `:d`, `:p`, and `;d` in the prompt. I also use `happy smirk`, `happy smile`, `laughing closed eyes`, etc. to make the characters more lively and expressive. - Adding `absurdres`, instead of `highres` and `masterpiece`, to a prompt can drastically increase the sharpness and resolution of a generated image.
c4f6e016029d6eb2565fdc0870cdb339
apache-2.0
['hf-asr-leaderboard', 'generated_from_trainer']
false
Whisper Small Es - Sanchit Gandhi This model is a fine-tuned version of [openai/whisper-medium](https://huggingface.co/openai/whisper-medium) on the Multilingual LibriSpeech dataset. It achieves the following results on the evaluation set: - Loss: 0.1252 - Wer: 4.9888
5d896aa441046f7fc3b65f273e49b40d
apache-2.0
['hf-asr-leaderboard', 'generated_from_trainer']
false
Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 2 - eval_batch_size: 4 - seed: 42 - gradient_accumulation_steps: 8 - total_train_batch_size: 16 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - training_steps: 2500 - mixed_precision_training: Native AMP
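With gradient accumulation, the effective batch size is train_batch_size × gradient_accumulation_steps = 2 × 8 = 16, which matches the listed total_train_batch_size. The "linear" scheduler ramps the learning rate up over the first 500 warmup steps and then decays it linearly to zero over the remaining 2,000 steps. A minimal sketch, assuming the usual linear-with-warmup semantics:

```python
def linear_schedule_lr(step, base_lr=1e-5, warmup_steps=500, total_steps=2500):
    # Linear warmup to base_lr, then linear decay to 0
    if step < warmup_steps:
        return base_lr * step / warmup_steps
    return base_lr * max(0.0, (total_steps - step) / (total_steps - warmup_steps))

print(linear_schedule_lr(250))   # mid-warmup
print(linear_schedule_lr(500))   # peak learning rate
print(linear_schedule_lr(2500))  # end of training
```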
c30545d5998ae2be84426233eb0e19a6
apache-2.0
['hf-asr-leaderboard', 'generated_from_trainer']
false
Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:----:|:---------------:|:------:| | 0.2346 | 0.2 | 500 | 0.1957 | 8.5131 | | 0.1252 | 0.4 | 1000 | 0.1448 | 5.7876 | | 0.2076 | 0.6 | 1500 | 0.1361 | 5.5786 | | 0.2356 | 0.8 | 2000 | 0.1504 | 6.6611 | | 0.1893 | 1.0 | 2500 | 0.1252 | 4.9888 |
45c7a142a3ce86752e4716ad07f5a8ff
apache-2.0
['audio-classification', 'speechbrain', 'embeddings', 'Language', 'Identification', 'pytorch', 'ECAPA-TDNN', 'TDNN', 'VoxLingua107']
false
Model description This is a spoken language recognition model trained on the VoxLingua107 dataset using SpeechBrain. The model uses the ECAPA-TDNN architecture that has previously been used for speaker recognition. However, it uses more fully connected hidden layers after the embedding layer, and cross-entropy loss was used for training. We observed that this improved the performance of extracted utterance embeddings for downstream tasks. The model can classify a speech utterance according to the language spoken. It covers 107 different languages ( Abkhazian, Afrikaans, Amharic, Arabic, Assamese, Azerbaijani, Bashkir, Belarusian, Bulgarian, Bengali, Tibetan, Breton, Bosnian, Catalan, Cebuano, Czech, Welsh, Danish, German, Greek, English, Esperanto, Spanish, Estonian, Basque, Persian, Finnish, Faroese, French, Galician, Guarani, Gujarati, Manx, Hausa, Hawaiian, Hindi, Croatian, Haitian, Hungarian, Armenian, Interlingua, Indonesian, Icelandic, Italian, Hebrew, Japanese, Javanese, Georgian, Kazakh, Central Khmer, Kannada, Korean, Latin, Luxembourgish, Lingala, Lao, Lithuanian, Latvian, Malagasy, Maori, Macedonian, Malayalam, Mongolian, Marathi, Malay, Maltese, Burmese, Nepali, Dutch, Norwegian Nynorsk, Norwegian, Occitan, Panjabi, Polish, Pushto, Portuguese, Romanian, Russian, Sanskrit, Scots, Sindhi, Sinhala, Slovak, Slovenian, Shona, Somali, Albanian, Serbian, Sundanese, Swedish, Swahili, Tamil, Telugu, Tajik, Thai, Turkmen, Tagalog, Turkish, Tatar, Ukrainian, Urdu, Uzbek, Vietnamese, Waray, Yiddish, Yoruba, Mandarin Chinese).
0fc626a8be0d913cd5d97e6ae4730d96
apache-2.0
['audio-classification', 'speechbrain', 'embeddings', 'Language', 'Identification', 'pytorch', 'ECAPA-TDNN', 'TDNN', 'VoxLingua107']
false
How to use

```python
import torchaudio
from speechbrain.pretrained import EncoderClassifier

language_id = EncoderClassifier.from_hparams(source="TalTechNLP/voxlingua107-epaca-tdnn-ce", savedir="tmp")
```
5b8b7538e30810235fcd019ef2612bf9
apache-2.0
['audio-classification', 'speechbrain', 'embeddings', 'Language', 'Identification', 'pytorch', 'ECAPA-TDNN', 'TDNN', 'VoxLingua107']
false
```python
# Download Thai language sample from Omniglot and convert it to a suitable form
signal = language_id.load_audio("https://omniglot.com/soundfiles/udhr/udhr_th.mp3")
prediction = language_id.classify_batch(signal)
print(prediction)
(tensor([[-2.8646e+01, -3.0346e+01, -2.0748e+01, -2.9562e+01, -2.2187e+01,
          -3.2668e+01, -3.6677e+01, -3.3573e+01, -3.2545e+01, -2.4365e+01,
          -2.4688e+01, -3.1171e+01, -2.7743e+01, -2.9918e+01, -2.4770e+01,
          -3.2250e+01, -2.4727e+01, -2.6087e+01, -2.1870e+01, -3.2821e+01,
          -2.2128e+01, -2.2822e+01, -3.0888e+01, -3.3564e+01, -2.9906e+01,
          -2.2392e+01, -2.5573e+01, -2.6443e+01, -3.2429e+01, -3.2652e+01,
          -3.0030e+01, -2.4607e+01, -2.2967e+01, -2.4396e+01, -2.8578e+01,
          -2.5153e+01, -2.8475e+01, -2.6409e+01, -2.5230e+01, -2.7957e+01,
          -2.6298e+01, -2.3609e+01, -2.5863e+01, -2.8225e+01, -2.7225e+01,
          -3.0486e+01, -2.1185e+01, -2.7938e+01, -3.3155e+01, -1.9076e+01,
          -2.9181e+01, -2.2160e+01, -1.8352e+01, -2.5866e+01, -3.3636e+01,
          -4.2016e+00, -3.1581e+01, -3.1894e+01, -2.7834e+01, -2.5429e+01,
          -3.2235e+01, -3.2280e+01, -2.8786e+01, -2.3366e+01, -2.6047e+01,
          -2.2075e+01, -2.3770e+01, -2.2518e+01, -2.8101e+01, -2.5745e+01,
          -2.6441e+01, -2.9822e+01, -2.7109e+01, -3.0225e+01, -2.4566e+01,
          -2.9268e+01, -2.7651e+01, -3.4221e+01, -2.9026e+01, -2.6009e+01,
          -3.1968e+01, -3.1747e+01, -2.8156e+01, -2.9025e+01, -2.7756e+01,
          -2.8052e+01, -2.9341e+01, -2.8806e+01, -2.1636e+01, -2.3992e+01,
          -2.3794e+01, -3.3743e+01, -2.8332e+01, -2.7465e+01, -1.5085e-02,
          -2.9094e+01, -2.1444e+01, -2.9780e+01, -3.6046e+01, -3.7401e+01,
          -3.0888e+01, -3.3172e+01, -1.8931e+01, -2.2679e+01, -3.0225e+01,
          -2.4995e+01, -2.1028e+01]]), tensor([-0.0151]), tensor([94]), ['th'])
```
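Assuming the scores in the first tensor behave as log posterior probabilities (one per language), the winning score of roughly -0.0151 for `'th'` can be converted to a confidence by exponentiating. A hedged sketch:

```python
import math

# Best log score reported above for 'th'
best_log_score = -0.0151

# If the score is a log probability, exponentiating recovers the probability
confidence = math.exp(best_log_score)
print(f"P(th) ~ {confidence:.3f}")
```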
7ff4a6502a8552c10d25f44579fde712
apache-2.0
['sentence-transformers', 'feature-extraction', 'sentence-similarity', 'transformers']
false
kornwtp/ConGen-Multilingual-MiniLM-L12 This is a [ConGen](https://github.com/KornWtp/ConGen) model: It maps sentences to a 384 dimensional dense vector space and can be used for tasks like semantic search.
a33859d5179937d350c831d9518d632c
apache-2.0
['sentence-transformers', 'feature-extraction', 'sentence-similarity', 'transformers']
false
Usage Using this model becomes easy when you have [ConGen](https://github.com/KornWtp/ConGen) installed: ``` pip install -U git+https://github.com/KornWtp/ConGen.git ``` Then you can use the model like this: ```python from sentence_transformers import SentenceTransformer sentences = ["This is an example sentence", "Each sentence is converted"] model = SentenceTransformer('kornwtp/ConGen-Multilingual-MiniLM-L12') embeddings = model.encode(sentences) print(embeddings) ```
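Once sentences are embedded, semantic search reduces to nearest-neighbor lookup by cosine similarity. A toy sketch using 3-dimensional stand-ins for the model's 384-dimensional vectors:

```python
import math

def cosine(u, v):
    # Cosine similarity between two equal-length vectors
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

# Toy stand-ins for sentence embeddings produced by model.encode(...)
query = [0.9, 0.1, 0.0]
corpus = {"doc_a": [0.8, 0.2, 0.1], "doc_b": [0.0, 0.1, 0.9]}

# Rank corpus documents by similarity to the query embedding
best = max(corpus, key=lambda k: cosine(query, corpus[k]))
print(best)  # doc_a
```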
7a32cf01279ad8778d09c2a224a20128
mit
['deberta', 'deberta-v3', 'mdeberta', 'question-answering']
false
Evaluation on SQuAD2.0 dev set ``` { "epoch": 3.0, "eval_HasAns_exact": 79.65587044534414, "eval_HasAns_f1": 85.91387795001529, "eval_HasAns_total": 5928, "eval_NoAns_exact": 82.10260723296888, "eval_NoAns_f1": 82.10260723296888, "eval_NoAns_total": 5945, "eval_best_exact": 80.8809904826076, "eval_best_exact_thresh": 0.0, "eval_best_f1": 84.00551406448994, "eval_best_f1_thresh": 0.0, "eval_exact": 80.8809904826076, "eval_f1": 84.00551406449004, "eval_samples": 12508, "eval_total": 11873, "train_loss": 0.7729689576483615, "train_runtime": 9118.953, "train_samples": 134891, "train_samples_per_second": 44.377, "train_steps_per_second": 0.925 } ```
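The aggregate `eval_exact` above is consistent with the per-subset numbers: it is the count-weighted average of the HasAns and NoAns exact-match scores. A quick sanity check:

```python
# Per-subset exact-match scores and example counts from the eval output above
has_exact, has_total = 79.65587044534414, 5928
no_exact, no_total = 82.10260723296888, 5945

# Overall exact match is the count-weighted average of the two subsets
overall = (has_exact * has_total + no_exact * no_total) / (has_total + no_total)
print(overall)  # matches eval_exact = 80.8809904826076
```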
ce6db823822a6fa7eb5a8b51a8eb902b
apache-2.0
['whisper-event', 'generated_from_trainer']
false
Whisper Tiny Belarusian Repo to test model training This model is a fine-tuned version of [openai/whisper-tiny](https://huggingface.co/openai/whisper-tiny) on the mozilla-foundation/common_voice_11_0 be dataset. It achieves the following results on the evaluation set: - Loss: 0.4388 - Wer: 46.5201
8e8afb62c62dc3f2e92b18797bb71cf4
apache-2.0
['whisper-event', 'generated_from_trainer']
false
Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 10 - training_steps: 300 - mixed_precision_training: Native AMP
aaa6d12bd8ef4609d7b8bbf2cc889e8a
apache-2.0
['whisper-event', 'generated_from_trainer']
false
Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:----:|:---------------:|:-------:| | 2.5366 | 0.05 | 10 | 1.5402 | 94.5055 | | 1.3721 | 0.1 | 20 | 1.0021 | 75.8242 | | 0.9921 | 0.15 | 30 | 0.8322 | 75.0916 | | 0.9844 | 0.2 | 40 | 0.8080 | 72.8938 | | 0.7071 | 0.25 | 50 | 0.7862 | 77.2894 | | 0.7998 | 0.3 | 60 | 0.7052 | 68.8645 | | 0.6935 | 0.35 | 70 | 0.6781 | 64.2857 | | 0.81 | 0.4 | 80 | 0.6341 | 63.5531 | | 0.6133 | 0.45 | 90 | 0.6083 | 62.6374 | | 0.6675 | 0.5 | 100 | 0.5851 | 62.8205 | | 0.5577 | 0.55 | 110 | 0.5651 | 59.3407 | | 0.6473 | 0.6 | 120 | 0.5638 | 58.0586 | | 0.6018 | 0.65 | 130 | 0.5434 | 53.8462 | | 0.5918 | 0.7 | 140 | 0.5385 | 54.9451 | | 0.5654 | 0.75 | 150 | 0.5200 | 58.0586 | | 0.587 | 0.8 | 160 | 0.4974 | 57.1429 | | 0.6157 | 0.85 | 170 | 0.4834 | 53.2967 | | 0.6803 | 0.9 | 180 | 0.4852 | 55.8608 | | 0.4813 | 0.95 | 190 | 0.4686 | 51.2821 | | 0.4952 | 1.0 | 200 | 0.4624 | 51.4652 | | 0.3956 | 0.03 | 210 | 0.4690 | 52.0147 | | 0.3719 | 0.07 | 220 | 0.4673 | 52.7473 | | 0.3168 | 0.1 | 230 | 0.4499 | 51.4652 | | 0.3582 | 0.13 | 240 | 0.4525 | 46.8864 | | 0.2475 | 0.17 | 250 | 0.4612 | 52.3810 | | 0.2988 | 0.2 | 260 | 0.4346 | 49.8168 | | 0.2749 | 0.23 | 270 | 0.4249 | 48.9011 | | 0.3368 | 0.27 | 280 | 0.4388 | 46.5201 | | 0.2574 | 0.3 | 290 | 0.4309 | 46.7033 | | 0.2921 | 0.33 | 300 | 0.4282 | 46.7033 |
c957755bbc54dd0e557dc592c9ded8af
cc
[]
false
FeiArt Handpainted CG Diffusion is a custom diffusion model trained by @FeiArt_AiArt. It can be used to create hand-painted CG style images. To use it, you can use [FeiArt_Handpainted CG Diffusion](https://colab.research.google.com/drive/1u9ompOlBZMgIZc_KZvxIa3V6UD4Ch3dT?usp=sharing).

If you create a fun image with this model, please share your result with [@FeiArt_AiArt](https://twitter.com/FeiArt_AiArt), or join the [FeiArt Diffusion Discord](https://discord.gg/MkAsEpNnqs) to share work created with this model, exchange experiences and parameters, and see more custom diffusion models.

Update the model config as follows:

```python
model_config.update({
    'attention_resolutions': '32,16,8',
    'class_cond': False,
    'diffusion_steps': 1000,
    'rescale_timesteps': True,
    'timestep_respacing': 'ddim100',
    'image_size': 512,
    'learn_sigma': True,
    'noise_schedule': 'linear',
    'num_channels': 256,
    'num_head_channels': 64,
    'num_res_blocks': 2,
    'resblock_updown': True,
    'use_checkpoint': use_checkpoint,
    'use_fp16': True,
    'use_scale_shift_norm': True,
})
```

and change the model load to:

```python
model.load_state_dict(torch.load(custom_path, map_location='cpu'), strict=False)
```
25dac51ea24e8a5c3599f170e164e68c
apache-2.0
['generated_from_trainer']
false
bert-base-uncased-finetuned-squad This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the squad dataset. It achieves the following results on the evaluation set: - Loss: 1.1006
21acfaf984e21e0cfa7ae7e197c69a74
apache-2.0
['generated_from_trainer']
false
base-mlm-imdb-target-imdb This model is a fine-tuned version of [muhtasham/base-mlm-imdb](https://huggingface.co/muhtasham/base-mlm-imdb) on the imdb dataset. It achieves the following results on the evaluation set: - Loss: 0.4659 - Accuracy: 0.8918 - F1: 0.9428
bc3af1afba9c7cdea6267219d38d3eee
apache-2.0
['generated_from_trainer']
false
Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | |:-------------:|:-----:|:----:|:---------------:|:--------:|:------:| | 0.2453 | 0.64 | 500 | 0.1892 | 0.9334 | 0.9656 | | 0.1764 | 1.28 | 1000 | 0.1267 | 0.9581 | 0.9786 | | 0.117 | 1.92 | 1500 | 0.1926 | 0.9290 | 0.9632 | | 0.0727 | 2.56 | 2000 | 0.3109 | 0.9182 | 0.9574 | | 0.0665 | 3.2 | 2500 | 0.4659 | 0.8918 | 0.9428 |
63a1e49cf63ec0533dea96763918cad8
apache-2.0
['generated_from_trainer']
false
distilbert_sa_GLUE_Experiment_data_aug_sst2 This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the GLUE SST2 dataset. It achieves the following results on the evaluation set: - Loss: 0.6218 - Accuracy: 0.7775
9257b037359ee2bdf9e5cc2a716a20d0
apache-2.0
['generated_from_trainer']
false
Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:-----:|:---------------:|:--------:| | 0.3214 | 1.0 | 4374 | 0.6218 | 0.7775 | | 0.1833 | 2.0 | 8748 | 0.7939 | 0.7695 | | 0.1228 | 3.0 | 13122 | 0.8713 | 0.7706 | | 0.0916 | 4.0 | 17496 | 1.1167 | 0.7638 | | 0.0733 | 5.0 | 21870 | 1.3167 | 0.7695 | | 0.0613 | 6.0 | 26244 | 1.1949 | 0.7592 |
11e513634455a0f8cca1386f6b51df8f
apache-2.0
['generated_from_trainer']
false
distilbert-base-uncased-finetuned-emotion This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.2182 - Accuracy: 0.9265 - F1: 0.9266
39b9fc5b236fc6ee90b26851f4841e15