Dataset columns:
- license — string (2–30 chars)
- tags — string (2–513 chars)
- is_nc — bool (1 class)
- readme_section — string (201–597k chars)
- hash — string (32 chars)
apache-2.0
['generated_from_trainer']
false
Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.692 | 0.15 | 500 | 0.6882 | 0.5574 |
| 0.6777 | 0.31 | 1000 | 0.6637 | 0.6059 |
| 0.667 | 0.46 | 1500 | 0.6568 | 0.6064 |
| 0.6609 | 0.61 | 2000 | 0.6517 | 0.6193 |
| 0.6596 | 0.76 | 2500 | 0.6514 | 0.6127 |
| 0.6584 | 0.92 | 3000 | 0.6496 | 0.6202 |
| 0.6514 | 1.07 | 3500 | 0.6487 | 0.6191 |
| 0.652 | 1.22 | 4000 | 0.6420 | 0.6253 |
| 0.6449 | 1.37 | 4500 | 0.6415 | 0.6268 |
| 0.6477 | 1.53 | 5000 | 0.6358 | 0.6306 |
4b986941f52037c120e12a9782749790
apache-2.0
['whisper-event', 'generated_from_trainer']
false
openai/whisper-large-v2

This model is a fine-tuned version of [openai/whisper-large-v2](https://huggingface.co/openai/whisper-large-v2) on the common_voice_11_0 dataset. It achieves the following results on the evaluation set:
- Loss: 0.5984
- Wer: 18.3045
366571baed26530762188a1b6e04ac7e
apache-2.0
['whisper-event', 'generated_from_trainer']
false
Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 100
- training_steps: 1500
- mixed_precision_training: Native AMP
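The `linear` scheduler with warmup listed above can be sketched as follows. This is a simplified sketch of what those settings imply, not the exact `transformers` implementation, which may differ at the boundaries:

```python
def lr_at_step(step, base_lr=1e-05, warmup_steps=100, total_steps=1500):
    """Linear warmup to base_lr over warmup_steps, then linear decay to 0."""
    if step < warmup_steps:
        return base_lr * step / warmup_steps
    return base_lr * (total_steps - step) / (total_steps - warmup_steps)

# The peak learning rate is reached exactly at the end of warmup (step 100).
peak = lr_at_step(100)
```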
adc82daffa977a6d1867d36ec90aa9eb
apache-2.0
['whisper-event', 'generated_from_trainer']
false
Training results

| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:-------:|
| 0.0002 | 24.01 | 1500 | 0.5984 | 18.3045 |
6dc3d872f5de1bff93e8f8e048f2813d
apache-2.0
['generated_from_trainer']
false
all-roberta-large-v1-home-6-16-5

This model is a fine-tuned version of [sentence-transformers/all-roberta-large-v1](https://huggingface.co/sentence-transformers/all-roberta-large-v1) on an unknown dataset. It achieves the following results on the evaluation set:
- Loss: 2.3789
- Accuracy: 0.3356
58cddbe7fdb80ca7d1417db0e729a96b
apache-2.0
['translation']
false
opus-mt-en-ny

* source languages: en
* target languages: ny
* OPUS readme: [en-ny](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/en-ny/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-08.zip](https://object.pouta.csc.fi/OPUS-MT-models/en-ny/opus-2020-01-08.zip)
* test set translations: [opus-2020-01-08.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-ny/opus-2020-01-08.test.txt)
* test set scores: [opus-2020-01-08.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/en-ny/opus-2020-01-08.eval.txt)
de2937e86223e3e2731adf0bcb48a32b
apache-2.0
['t5-small', 'text2text-generation', 'natural language understanding', 'conversational system', 'task-oriented dialog']
false
t5-small-nlu-tm1-context3 This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on [Taskmaster-1](https://huggingface.co/datasets/ConvLab/tm1) with context window size == 3. Refer to [ConvLab-3](https://github.com/ConvLab/ConvLab-3) for model description and usage.
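As a rough illustration of what a context window size of 3 means, the sketch below keeps only the last three dialogue turns as model input. The serialization is hypothetical; the actual ConvLab-3 preprocessing may join turns with speaker tags or other separators:

```python
def build_nlu_input(utterances, context_size=3):
    """Keep only the last `context_size` turns as the NLU input context.

    Hypothetical sketch: the real ConvLab-3 format may differ.
    """
    return " ".join(utterances[-context_size:])
```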
c488497a053b5459926def180c14af81
apache-2.0
['generated_from_trainer']
false
wav2vec2-base-vios-google-colab

This model is a fine-tuned version of [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on an unspecified dataset. It achieves the following results on the evaluation set:
- Loss: 0.5647
- Wer: 0.4970
32759410b34c922c95dbe1ed3b5cd17b
apache-2.0
['generated_from_trainer']
false
Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 2
- total_train_batch_size: 32
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 30
- mixed_precision_training: Native AMP
af383d35a378892a4697d8d2b8bd26c8
apache-2.0
['generated_from_trainer']
false
Training results

| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 7.7292 | 2.0 | 500 | 3.4159 | 1.0 |
| 3.0762 | 4.0 | 1000 | 1.3005 | 0.9615 |
| 0.8812 | 6.0 | 1500 | 0.4664 | 0.4740 |
| 0.5076 | 8.0 | 2000 | 0.4101 | 0.4180 |
| 0.4075 | 10.0 | 2500 | 0.3815 | 0.3802 |
| 0.3724 | 12.0 | 3000 | 0.3785 | 0.3741 |
| 0.3762 | 14.0 | 3500 | 0.4404 | 0.3766 |
| 0.4541 | 16.0 | 4000 | 0.4671 | 0.3822 |
| 0.6391 | 18.0 | 4500 | 0.5643 | 0.4200 |
| 0.7681 | 20.0 | 5000 | 0.6564 | 0.5214 |
| 0.8131 | 22.0 | 5500 | 0.5786 | 0.4934 |
| 0.7448 | 24.0 | 6000 | 0.5561 | 0.4920 |
| 0.7337 | 26.0 | 6500 | 0.5631 | 0.4964 |
| 0.7359 | 28.0 | 7000 | 0.5647 | 0.4968 |
| 0.7397 | 30.0 | 7500 | 0.5647 | 0.4970 |
640922ddcc1b69ba44b2083f8725b5c6
apache-2.0
['generated_from_trainer']
false
distilbert-base-uncased-finetuned-emotion

This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset. It achieves the following results on the evaluation set:
- Loss: 0.3821
- Accuracy: 0.896
- F1: 0.8928
eb2a2e133d27eebdd476edc4dc72a3cc
apache-2.0
['generated_from_trainer']
false
Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| No log | 1.0 | 125 | 0.6029 | 0.7985 | 0.7597 |
| 0.7905 | 2.0 | 250 | 0.3821 | 0.896 | 0.8928 |
7903e24acfa504c33b21befd39ef7f25
apache-2.0
['generated_from_trainer']
false
Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 1e-05
- train_batch_size: 1
- eval_batch_size: 1
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 4
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 10
- training_steps: 10
- mixed_precision_training: Native AMP
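The `total_train_batch_size` reported in these hyperparameter lists is derived from the per-device batch size and gradient accumulation; a minimal sketch (the `num_devices` factor is an assumption for multi-GPU runs):

```python
def effective_batch_size(per_device_batch, grad_accum_steps, num_devices=1):
    """Total train batch size = per-device batch x accumulation steps x devices."""
    return per_device_batch * grad_accum_steps * num_devices
```

With a per-device batch of 1 and 4 accumulation steps this gives the total of 4 shown above.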
d4e9c62cdbc2f7bcd07eb8d2d195b057
apache-2.0
['generated_from_trainer']
false
Training results

| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:-------:|
| No log | 1.13 | 8 | 2.1362 | 43.7647 |
6d589972942b617339e5ad8bedf04ef8
apache-2.0
['generated_from_trainer']
false
distilbert-base-uncased-finetuned-squad

This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the squad dataset. It achieves the following results on the evaluation set:
- Loss: 1.1639
65fb1811025e806698df2cc76604c456
apache-2.0
['generated_from_trainer']
false
Training results

| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:-----:|:---------------:|
| 1.2291 | 1.0 | 5533 | 1.1581 |
| 0.9553 | 2.0 | 11066 | 1.1249 |
| 0.7767 | 3.0 | 16599 | 1.1639 |
9fb3fc85c4ce7985f0468d7809139cd5
apache-2.0
['generated_from_trainer']
false
w2v2

This model is a fine-tuned version of [facebook/wav2vec2-large-960h-lv60-self](https://huggingface.co/facebook/wav2vec2-large-960h-lv60-self) on an unspecified dataset. It achieves the following results on the evaluation set:
- Loss: 0.8860
- Wer: 0.2817
953748fcf47a789d15fc547dbfc820a1
apache-2.0
['generated_from_trainer']
false
Training results

| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 5.5664 | 3.07 | 500 | 3.0411 | 0.9997 |
| 2.5607 | 6.13 | 1000 | 1.0770 | 0.3660 |
| 0.9959 | 9.2 | 1500 | 0.8815 | 0.3017 |
| 0.8129 | 12.27 | 2000 | 0.8676 | 0.2915 |
| 0.7334 | 15.34 | 2500 | 0.8381 | 0.2931 |
| 0.669 | 18.4 | 3000 | 0.8802 | 0.2864 |
| 0.6312 | 21.47 | 3500 | 0.8679 | 0.2864 |
| 0.6094 | 24.54 | 4000 | 0.8811 | 0.2802 |
| 0.5987 | 27.61 | 4500 | 0.8860 | 0.2817 |
34c03aec2141f73a20e0eac69b746229
apache-2.0
['audio', 'automatic-speech-recognition', 'speech', 'xlsr-fine-tuning-week']
false
wav2vec2-large-xlsr-53-Georgian

Fine-tuned [facebook/wav2vec2-large-xlsr-53](https://huggingface.co/facebook/wav2vec2-large-xlsr-53) on Georgian using the [Common Voice](https://huggingface.co/datasets/common_voice) dataset. When using this model, make sure that your speech input is sampled at 16 kHz.
d1826b586e5d081cd69c85eeb449c9bd
apache-2.0
['audio', 'automatic-speech-recognition', 'speech', 'xlsr-fine-tuning-week']
false
Usage

The model can be used directly (without a language model) as follows:

```python
import torch
import torchaudio
from datasets import load_dataset
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor

test_dataset = load_dataset("common_voice", "ka", split="test[:2%]")

processor = Wav2Vec2Processor.from_pretrained("MehdiHosseiniMoghadam/wav2vec2-large-xlsr-53-Georgian")
model = Wav2Vec2ForCTC.from_pretrained("MehdiHosseiniMoghadam/wav2vec2-large-xlsr-53-Georgian")

resampler = torchaudio.transforms.Resample(48_000, 16_000)
```
2a130cd018be5c1081f4a67ed6e79e87
apache-2.0
['audio', 'automatic-speech-recognition', 'speech', 'xlsr-fine-tuning-week']
false
Evaluation

The model can be evaluated as follows on the Georgian test data of Common Voice.

```python
import re

import torch
import torchaudio
from datasets import load_dataset, load_metric
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor

test_dataset = load_dataset("common_voice", "ka", split="test")
wer = load_metric("wer")

processor = Wav2Vec2Processor.from_pretrained("MehdiHosseiniMoghadam/wav2vec2-large-xlsr-53-Georgian")
model = Wav2Vec2ForCTC.from_pretrained("MehdiHosseiniMoghadam/wav2vec2-large-xlsr-53-Georgian")
model.to("cuda")

chars_to_ignore_regex = '[\\,\\?\\.\\!\\-\\;\\:\\"\\“]'
resampler = torchaudio.transforms.Resample(48_000, 16_000)
```
53405759496ed01e04e6bc151e167f3f
apache-2.0
['audio', 'automatic-speech-recognition', 'speech', 'xlsr-fine-tuning-week']
false
We need to read the audio files as arrays:

```python
def speech_file_to_array_fn(batch):
    batch["sentence"] = re.sub(chars_to_ignore_regex, '', batch["sentence"]).lower()
    speech_array, sampling_rate = torchaudio.load(batch["path"])
    batch["speech"] = resampler(speech_array).squeeze().numpy()
    return batch

test_dataset = test_dataset.map(speech_file_to_array_fn)
```
dcb27c4a5034ae62d3ce5bdd0010ed2e
apache-2.0
['audio', 'automatic-speech-recognition', 'speech', 'xlsr-fine-tuning-week']
false
Predictions are then generated in batches and scored against the references:

```python
def evaluate(batch):
    inputs = processor(batch["speech"], sampling_rate=16_000, return_tensors="pt", padding=True)
    with torch.no_grad():
        logits = model(inputs.input_values.to("cuda"), attention_mask=inputs.attention_mask.to("cuda")).logits
    pred_ids = torch.argmax(logits, dim=-1)
    batch["pred_strings"] = processor.batch_decode(pred_ids)
    return batch

result = test_dataset.map(evaluate, batched=True, batch_size=8)
print("WER: {:.2f}".format(100 * wer.compute(predictions=result["pred_strings"], references=result["sentence"])))
```

**Test Result**: 60.504024 %
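For reference, the word error rate computed above reduces to a word-level edit distance between reference and hypothesis, divided by the reference length. A minimal self-contained sketch (the `wer` metric used in the script handles this internally):

```python
def word_error_rate(reference, hypothesis):
    """WER via word-level edit distance (substitutions + insertions + deletions)."""
    r, h = reference.split(), hypothesis.split()
    # d[i][j] = edits needed to turn the first i reference words into the first j hypothesis words
    d = [[0] * (len(h) + 1) for _ in range(len(r) + 1)]
    for i in range(len(r) + 1):
        d[i][0] = i
    for j in range(len(h) + 1):
        d[0][j] = j
    for i in range(1, len(r) + 1):
        for j in range(1, len(h) + 1):
            cost = 0 if r[i - 1] == h[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,        # deletion
                          d[i][j - 1] + 1,        # insertion
                          d[i - 1][j - 1] + cost) # substitution / match
    return d[len(r)][len(h)] / len(r)
```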
2d625d385359dbbb2dda73e46a631387
apache-2.0
['translation']
false
opus-mt-fi-to

* source languages: fi
* target languages: to
* OPUS readme: [fi-to](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/fi-to/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-24.zip](https://object.pouta.csc.fi/OPUS-MT-models/fi-to/opus-2020-01-24.zip)
* test set translations: [opus-2020-01-24.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/fi-to/opus-2020-01-24.test.txt)
* test set scores: [opus-2020-01-24.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/fi-to/opus-2020-01-24.eval.txt)
f172ae2e23955b8ddb21d6d29efd33fc
mit
['generated_from_keras_callback']
false
esm2_t12_35M_UR50D-finetuned-cytosol-membrane-classification

This model is a fine-tuned version of [facebook/esm2_t12_35M_UR50D](https://huggingface.co/facebook/esm2_t12_35M_UR50D) on an unknown dataset. It achieves the following results on the evaluation set:
- Train Loss: 0.1009
- Train Accuracy: 0.9684
- Validation Loss: 0.2122
- Validation Accuracy: 0.9401
- Epoch: 2
db1f12a6870ed3761c1ee89486f63aae
mit
['generated_from_keras_callback']
false
Training results

| Train Loss | Train Accuracy | Validation Loss | Validation Accuracy | Epoch |
|:----------:|:--------------:|:---------------:|:-------------------:|:-----:|
| 0.2464 | 0.9228 | 0.1954 | 0.9417 | 0 |
| 0.1428 | 0.9565 | 0.1831 | 0.9345 | 1 |
| 0.1009 | 0.9684 | 0.2122 | 0.9401 | 2 |
fcb549f90e43aacc2943371ba4224e68
mit
['exbert']
false
Overview

**Language model:** deepset/roberta-base-squad2-distilled
**Language:** English
**Training data:** SQuAD 2.0 training set
**Eval data:** SQuAD 2.0 dev set
**Infrastructure**: 4x V100 GPU
**Published**: Dec 8th, 2021
af38e48cb5cae5db696f3e4911eb4db6
mit
['roberta-base', 'roberta-base-epoch_57']
false
RoBERTa, Intermediate Checkpoint - Epoch 57

This model is part of our reimplementation of the [RoBERTa model](https://arxiv.org/abs/1907.11692), trained on Wikipedia and the Book Corpus only. We train this model for almost 100K steps, corresponding to 83 epochs. We provide the 84 checkpoints (including the randomly initialized weights before training) to enable the study of the training dynamics of such models, and other possible use cases. These models were trained as part of a work that studies how simple statistics of the data, such as co-occurrences, affect model predictions, described in the paper [Measuring Causal Effects of Data Statistics on Language Model's `Factual' Predictions](https://arxiv.org/abs/2207.14251). This is RoBERTa-base epoch_57.
573c9aa384596bfeaadafbcbfed96236
apache-2.0
['generated_from_trainer']
false
distilbert-base-uncased-finetuned-emotion

This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset. It achieves the following results on the evaluation set:
- Loss: 0.2141
- Accuracy: 0.922
- F1: 0.9220
43af6f8de5477a053dd008ef5bf9c868
apache-2.0
['generated_from_trainer']
false
Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.8134 | 1.0 | 250 | 0.3017 | 0.9105 | 0.9087 |
| 0.2455 | 2.0 | 500 | 0.2141 | 0.922 | 0.9220 |
e6ef19c9a68cfb51050dd35068d8b05a
apache-2.0
['generated_from_trainer']
false
english-filipino-wav2vec2-l-xls-r-test

This model is a fine-tuned version of [jonatasgrosman/wav2vec2-large-xlsr-53-english](https://huggingface.co/jonatasgrosman/wav2vec2-large-xlsr-53-english) on the filipino_voice dataset. It achieves the following results on the evaluation set:
- Loss: 0.5795
- Wer: 0.3996
21fb46d240b322c7567a836690227540
apache-2.0
['generated_from_trainer']
false
Training results

| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| 3.0751 | 2.09 | 400 | 2.4744 | 0.9804 |
| 0.7852 | 4.19 | 800 | 0.5836 | 0.5620 |
| 0.3751 | 6.28 | 1200 | 0.4873 | 0.4658 |
| 0.2578 | 8.38 | 1600 | 0.5725 | 0.5289 |
| 0.1897 | 10.47 | 2000 | 0.5342 | 0.4856 |
| 0.1394 | 12.57 | 2400 | 0.5677 | 0.4761 |
| 0.1048 | 14.66 | 2800 | 0.5708 | 0.4415 |
| 0.0848 | 16.75 | 3200 | 0.5908 | 0.4374 |
| 0.0652 | 18.85 | 3600 | 0.5795 | 0.3996 |
c50c9d1b98042c7d3433f8140abf39de
apache-2.0
['summarization', 'urdu', 'ur', 'mt5', 'Abstractive Summarization', 'generated_from_trainer']
false
mt5-base-finetuned-urdu

This model is a fine-tuned version of [google/mt5-base](https://huggingface.co/google/mt5-base) on the Urdu subset of the xlsum dataset. It achieves the following results on the evaluation set:
- Loss: 2.8954
- Rouge-1: 28.84
- Rouge-2: 13.87
- Rouge-l: 25.63
- Gen Len: 19.0
- Bertscore: 71.31
3d39015143fa3bc36414f2aac29a2a83
apache-2.0
['summarization', 'urdu', 'ur', 'mt5', 'Abstractive Summarization', 'generated_from_trainer']
false
Training results

| Training Loss | Epoch | Step | Validation Loss | Rouge-1 | Rouge-2 | Rouge-l | Gen Len | Bertscore |
|:-------------:|:-----:|:-----:|:---------------:|:-------:|:-------:|:-------:|:-------:|:---------:|
| 3.6205 | 1.0 | 2114 | 3.0871 | 26.45 | 11.4 | 23.26 | 19.0 | 70.76 |
| 3.2169 | 2.0 | 4228 | 2.9830 | 27.19 | 11.91 | 23.95 | 19.0 | 70.92 |
| 3.0787 | 3.0 | 6342 | 2.9284 | 27.9 | 12.57 | 24.62 | 18.99 | 71.13 |
| 2.9874 | 4.0 | 8456 | 2.9049 | 28.28 | 12.91 | 24.99 | 18.99 | 71.28 |
| 2.9232 | 5.0 | 10570 | 2.8954 | 28.65 | 13.17 | 25.32 | 18.99 | 71.39 |
427620accdb79d09572eb4fe0c6369b3
apache-2.0
['deep-narrow']
false
T5-Efficient-BASE-DM512 (Deep-Narrow version)

T5-Efficient-BASE-DM512 is a variation of [Google's original T5](https://ai.googleblog.com/2020/02/exploring-transfer-learning-with-t5.html) following the [T5 model architecture](https://huggingface.co/docs/transformers/model_doc/t5). It is a *pretrained-only* checkpoint and was released with the paper **[Scale Efficiently: Insights from Pre-training and Fine-tuning Transformers](https://arxiv.org/abs/2109.10686)** by *Yi Tay, Mostafa Dehghani, Jinfeng Rao, William Fedus, Samira Abnar, Hyung Won Chung, Sharan Narang, Dani Yogatama, Ashish Vaswani, Donald Metzler*.

In a nutshell, the paper indicates that a **Deep-Narrow** model architecture is favorable for **downstream** performance compared to other model architectures of similar parameter count. To quote the paper:

> We generally recommend a DeepNarrow strategy where the model’s depth is preferentially increased before considering any other forms of uniform scaling across other dimensions. This is largely due to how much depth influences the Pareto-frontier as shown in earlier sections of the paper. Specifically, a tall small (deep and narrow) model is generally more efficient compared to the base model. Likewise, a tall base model might also generally more efficient compared to a large model. We generally find that, regardless of size, even if absolute performance might increase as we continue to stack layers, the relative gain of Pareto-efficiency diminishes as we increase the layers, converging at 32 to 36 layers. Finally, we note that our notion of efficiency here relates to any one compute dimension, i.e., params, FLOPs or throughput (speed). We report all three key efficiency metrics (number of params, FLOPS and speed) and leave this decision to the practitioner to decide which compute dimension to consider.

To be more precise, *model depth* is defined as the number of transformer blocks that are stacked sequentially. A sequence of word embeddings is therefore processed sequentially by each transformer block.
876acfe517660f66e1e98251634545ef
apache-2.0
['deep-narrow']
false
Details model architecture

This model checkpoint - **t5-efficient-base-dm512** - is of model type **Base** with the following variations:
- **dm** is **512**

It has **148.63** million parameters and thus requires *ca.* **594.52 MB** of memory in full precision (*fp32*) or **297.26 MB** of memory in half precision (*fp16* or *bf16*).

A summary of the *original* T5 model architectures can be seen here:

| Model | nl (el/dl) | ff | dm | kv | nh |
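The memory figures above follow directly from 4 bytes per parameter in fp32 (2 bytes in fp16/bf16), taking 1 MB as 10^6 bytes:

```python
def checkpoint_size_mb(params_millions, bytes_per_param):
    """Approximate checkpoint size in MB (1 MB = 10**6 bytes)."""
    return round(params_millions * bytes_per_param, 2)

fp32_mb = checkpoint_size_mb(148.63, 4)  # full precision
fp16_mb = checkpoint_size_mb(148.63, 2)  # half precision
```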
eb0076bfa334aa72c27cb9771f3029d6
apache-2.0
['generated_from_trainer']
false
test

This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the imdb dataset. It achieves the following results on the evaluation set:
- Loss: 0.3003
- Accuracy: 0.88
- F1: 0.88
d188f99b9039940e8762715bd4372110
mit
[]
false
Trained on amateur photographs of chickens from Reddit. Include "chkn" in a prompt to use.

![22270-1687283316-ambushed by chkn!, art by Gian Paolo Dulbecco, Mr. Doodle, trending on artstation.png](https://s3.amazonaws.com/moonup/production/uploads/1667527175082-6303df4ffc783bfc7442d090.png)
![22227-1353605590-Flock of chkn, art by Alex Andreev, Jeremiah Ketner, trending on artstation.png](https://s3.amazonaws.com/moonup/production/uploads/1667528158522-6303df4ffc783bfc7442d090.png)
![22237-389909750-Flock of chkn, art by Stanisław Ignacy Witkiewicz, Adolph Menzel, trending on artstation.png](https://s3.amazonaws.com/moonup/production/uploads/1667528187652-6303df4ffc783bfc7442d090.png)
![22197-2893918631-Portrait of a (chkn), art by Atelier Olschinsky, trending on artstation.png](https://s3.amazonaws.com/moonup/production/uploads/1667528201281-6303df4ffc783bfc7442d090.png)
![22052-1975968497-Portrait of (chkn), trending on artstation, art by Albert Bloch, Lee Jeffries.png](https://s3.amazonaws.com/moonup/production/uploads/1667528223276-6303df4ffc783bfc7442d090.png)
![22138-4080725859-A chkn warrior charging into battle, art by boris valejo and greg rutkowski, trending on artstation.png](https://s3.amazonaws.com/moonup/production/uploads/1667528238010-6303df4ffc783bfc7442d090.png)
44a527343aea938a08cf6f029c8d1bd7
mit
[]
false
xordixx model by Xmuzz

This is a Stable Diffusion model fine-tuned on the xordixx concept, taught to Stable Diffusion with Dreambooth. It can be used by modifying the `instance_prompt`: **xordizz**

You can also train your own concepts and upload them to the library by using [this notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_dreambooth_training.ipynb). And you can run your new concept via `diffusers`: [Colab Notebook for Inference](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_dreambooth_inference.ipynb), [Spaces with the Public Concepts loaded](https://huggingface.co/spaces/sd-dreambooth-library/stable-diffusion-dreambooth-concepts)

Here are the images used for training this concept:

![image 0](https://huggingface.co/Xmuzz/xordixx/resolve/main/concept_images/6.jpeg)
![image 1](https://huggingface.co/Xmuzz/xordixx/resolve/main/concept_images/1.jpeg)
![image 2](https://huggingface.co/Xmuzz/xordixx/resolve/main/concept_images/5.jpeg)
![image 3](https://huggingface.co/Xmuzz/xordixx/resolve/main/concept_images/2.jpeg)
![image 4](https://huggingface.co/Xmuzz/xordixx/resolve/main/concept_images/3.jpeg)
![image 5](https://huggingface.co/Xmuzz/xordixx/resolve/main/concept_images/0.jpeg)
![image 6](https://huggingface.co/Xmuzz/xordixx/resolve/main/concept_images/4.jpeg)
![image 7](https://huggingface.co/Xmuzz/xordixx/resolve/main/concept_images/7.jpeg)
129a0ce345785e40eea0f0ed6da84f2a
cc-by-4.0
['question generation']
false
Model Card of `lmqg/mt5-small-squad-qg`

This model is a fine-tuned version of [google/mt5-small](https://huggingface.co/google/mt5-small) for the question generation task on the [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) dataset (dataset_name: default) via [`lmqg`](https://github.com/asahi417/lm-question-generation).
49cbe1e13cab7bd8ce5ae6f7fb1b1e25
cc-by-4.0
['question generation']
false
Overview

- **Language model:** [google/mt5-small](https://huggingface.co/google/mt5-small)
- **Language:** en
- **Training data:** [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) (default)
- **Online Demo:** [https://autoqg.net/](https://autoqg.net/)
- **Repository:** [https://github.com/asahi417/lm-question-generation](https://github.com/asahi417/lm-question-generation)
- **Paper:** [https://arxiv.org/abs/2210.03992](https://arxiv.org/abs/2210.03992)
6eae25cc484f4cfc32c939caf2fdd9a6
cc-by-4.0
['question generation']
false
model prediction

```python
questions = model.generate_q(
    list_context="William Turner was an English painter who specialised in watercolour landscapes",
    list_answer="William Turner",
)
```

- With `transformers`

```python
from transformers import pipeline

pipe = pipeline("text2text-generation", "lmqg/mt5-small-squad-qg")
output = pipe("<hl> Beyonce <hl> further expanded her acting career, starring as blues singer Etta James in the 2008 musical biopic, Cadillac Records.")
```
5b8be73adcd568ef90655037f5e352bb
cc-by-4.0
['question generation']
false
Evaluation

- ***Metric (Question Generation)***: [raw metric file](https://huggingface.co/lmqg/mt5-small-squad-qg/raw/main/eval/metric.first.sentence.paragraph_answer.question.lmqg_qg_squad.default.json)

| | Score | Type | Dataset |
|:-----------|--------:|:--------|:---------------------------------------------------------------|
| BERTScore | 90.01 | default | [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) |
| Bleu_1 | 54.07 | default | [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) |
| Bleu_2 | 37.62 | default | [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) |
| Bleu_3 | 28.18 | default | [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) |
| Bleu_4 | 21.65 | default | [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) |
| METEOR | 23.83 | default | [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) |
| MoverScore | 62.75 | default | [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) |
| ROUGE_L | 48.95 | default | [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) |

- ***Metrics (Question Generation, Out-of-Domain)***

| Dataset | Type | BERTScore | Bleu_4 | METEOR | MoverScore | ROUGE_L | Link |
|:--------|:-----|----------:|-------:|-------:|-----------:|--------:|-----:|
| [lmqg/qg_dequad](https://huggingface.co/datasets/lmqg/qg_dequad) | default | 73.53 | 0.0 | 4.81 | 50.37 | 1.56 | [link](https://huggingface.co/lmqg/mt5-small-squad-qg/raw/main/eval_ood/metric.first.sentence.paragraph_answer.question.lmqg_qg_dequad.default.json) |
| [lmqg/qg_esquad](https://huggingface.co/datasets/lmqg/qg_esquad) | default | 74.94 | 0.59 | 6.02 | 50.62 | 5.21 | [link](https://huggingface.co/lmqg/mt5-small-squad-qg/raw/main/eval_ood/metric.first.sentence.paragraph_answer.question.lmqg_qg_esquad.default.json) |
| [lmqg/qg_frquad](https://huggingface.co/datasets/lmqg/qg_frquad) | default | 72.91 | 1.71 | 8.24 | 50.96 | 15.84 | [link](https://huggingface.co/lmqg/mt5-small-squad-qg/raw/main/eval_ood/metric.first.sentence.paragraph_answer.question.lmqg_qg_frquad.default.json) |
| [lmqg/qg_itquad](https://huggingface.co/datasets/lmqg/qg_itquad) | default | 72.6 | 0.54 | 5.89 | 50.23 | 5.01 | [link](https://huggingface.co/lmqg/mt5-small-squad-qg/raw/main/eval_ood/metric.first.sentence.paragraph_answer.question.lmqg_qg_itquad.default.json) |
| [lmqg/qg_jaquad](https://huggingface.co/datasets/lmqg/qg_jaquad) | default | 66.08 | 0.0 | 0.51 | 46.53 | 6.08 | [link](https://huggingface.co/lmqg/mt5-small-squad-qg/raw/main/eval_ood/metric.first.sentence.paragraph_answer.question.lmqg_qg_jaquad.default.json) |
| [lmqg/qg_koquad](https://huggingface.co/datasets/lmqg/qg_koquad) | default | 66.34 | 0.0 | 0.73 | 45.86 | 0.06 | [link](https://huggingface.co/lmqg/mt5-small-squad-qg/raw/main/eval_ood/metric.first.sentence.paragraph_answer.question.lmqg_qg_koquad.default.json) |
| [lmqg/qg_ruquad](https://huggingface.co/datasets/lmqg/qg_ruquad) | default | 70.89 | 0.0 | 1.78 | 49.1 | 0.99 | [link](https://huggingface.co/lmqg/mt5-small-squad-qg/raw/main/eval_ood/metric.first.sentence.paragraph_answer.question.lmqg_qg_ruquad.default.json) |
0cfb1e71b7eab03a2633bf6e9224089b
cc-by-4.0
['question generation']
false
Training hyperparameters

The following hyperparameters were used during fine-tuning:
- dataset_path: lmqg/qg_squad
- dataset_name: default
- input_types: ['paragraph_answer']
- output_types: ['question']
- prefix_types: None
- model: google/mt5-small
- max_length: 512
- max_length_output: 32
- epoch: 15
- batch: 64
- lr: 0.0005
- fp16: False
- random_seed: 1
- gradient_accumulation_steps: 1
- label_smoothing: 0.15

The full configuration can be found at [fine-tuning config file](https://huggingface.co/lmqg/mt5-small-squad-qg/raw/main/trainer_config.json).
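The `label_smoothing: 0.15` setting replaces one-hot training targets with a smoothed distribution. A sketch under one common convention (epsilon spread uniformly over all classes; the trainer's exact variant may differ):

```python
def smoothed_targets(true_index, num_classes, epsilon=0.15):
    """Label-smoothed target distribution: (1 - eps) on the gold class,
    eps spread uniformly over all classes (common convention; assumption)."""
    base = epsilon / num_classes
    targets = [base] * num_classes
    targets[true_index] += 1.0 - epsilon
    return targets
```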
16c51afd0c6ee8b1312014d3e494c07f
mit
['generated_from_trainer']
false
my-lilt-en-funsd

This model is a fine-tuned version of [SCUT-DLVCLab/lilt-roberta-en-base](https://huggingface.co/SCUT-DLVCLab/lilt-roberta-en-base) on the funsd-layoutlmv3 dataset. It achieves the following results on the evaluation set:
- Loss: 1.7942
- Answer: {'precision': 0.8597914252607184, 'recall': 0.9082007343941249, 'f1': 0.8833333333333333, 'number': 817}
- Header: {'precision': 0.6666666666666666, 'recall': 0.5714285714285714, 'f1': 0.6153846153846153, 'number': 119}
- Question: {'precision': 0.9046746104491292, 'recall': 0.9164345403899722, 'f1': 0.9105166051660516, 'number': 1077}
- Overall Precision: 0.8740
- Overall Recall: 0.8927
- Overall F1: 0.8833
- Overall Accuracy: 0.8042
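Each per-field `f1` above is simply the harmonic mean of that field's precision and recall:

```python
def f1_score(precision, recall):
    """F1 = harmonic mean of precision and recall."""
    return 2 * precision * recall / (precision + recall)
```

For the Answer field, `f1_score(0.8597914252607184, 0.9082007343941249)` reproduces the reported 0.8833.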
837b13e0c78882095eec146ea84874ca
mit
['generated_from_trainer']
false
Training results

| Training Loss | Epoch | Step | Validation Loss | Answer | Header | Question | Overall Precision | Overall Recall | Overall F1 | Overall Accuracy |
|:-------------:|:------:|:----:|:---------------:|:------:|:------:|:--------:|:-----------------:|:--------------:|:----------:|:----------------:|
| 0.1935 | 26.32 | 500 | 1.2125 | {'precision': 0.8702830188679245, 'recall': 0.9033047735618115, 'f1': 0.8864864864864864, 'number': 817} | {'precision': 0.6296296296296297, 'recall': 0.5714285714285714, 'f1': 0.5991189427312775, 'number': 119} | {'precision': 0.8748921484037964, 'recall': 0.9415041782729805, 'f1': 0.9069767441860466, 'number': 1077} | 0.8605 | 0.9041 | 0.8818 | 0.8024 |
| 0.0063 | 52.63 | 1000 | 1.4406 | {'precision': 0.8732394366197183, 'recall': 0.9106487148102815, 'f1': 0.8915518274415818, 'number': 817} | {'precision': 0.632183908045977, 'recall': 0.46218487394957986, 'f1': 0.5339805825242718, 'number': 119} | {'precision': 0.8827708703374778, 'recall': 0.9229340761374187, 'f1': 0.902405810258738, 'number': 1077} | 0.8683 | 0.8907 | 0.8794 | 0.8175 |
| 0.002 | 78.95 | 1500 | 1.6624 | {'precision': 0.861904761904762, 'recall': 0.8861689106487148, 'f1': 0.8738684369342186, 'number': 817} | {'precision': 0.6363636363636364, 'recall': 0.5294117647058824, 'f1': 0.5779816513761468, 'number': 119} | {'precision': 0.8920863309352518, 'recall': 0.9210770659238626, 'f1': 0.9063499314755596, 'number': 1077} | 0.8674 | 0.8838 | 0.8755 | 0.7998 |
| 0.0006 | 105.26 | 2000 | 1.7942 | {'precision': 0.8597914252607184, 'recall': 0.9082007343941249, 'f1': 0.8833333333333333, 'number': 817} | {'precision': 0.6666666666666666, 'recall': 0.5714285714285714, 'f1': 0.6153846153846153, 'number': 119} | {'precision': 0.9046746104491292, 'recall': 0.9164345403899722, 'f1': 0.9105166051660516, 'number': 1077} | 0.8740 | 0.8927 | 0.8833 | 0.8042 |
| 0.0002 | 131.58 | 2500 | 1.8161 | {'precision': 0.8591385331781141, 'recall': 0.9033047735618115, 'f1': 0.8806682577565632, 'number': 817} | {'precision': 0.6346153846153846, 'recall': 0.5546218487394958, 'f1': 0.5919282511210763, 'number': 119} | {'precision': 0.9047619047619048, 'recall': 0.9173630454967502, 'f1': 0.9110189027201475, 'number': 1077} | 0.8720 | 0.8902 | 0.8810 | 0.8021 |
aafba09ce0c77f3b325872a868b847c3
mit
['generated_from_trainer']
false
roberta_large-ner-conll2003_0818_v0

This model is a fine-tuned version of [roberta-large](https://huggingface.co/roberta-large) on the conll2003 dataset. It achieves the following results on the evaluation set:
- Loss: 0.1793
- Precision: 0.9064
- Recall: 0.9333
- F1: 0.9197
- Accuracy: 0.9796
9a2ada70ac4ad8db5b6bae9af1312b76
mit
['generated_from_trainer']
false
Training results

| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| 0.0273 | 1.0 | 878 | 0.0500 | 0.9338 | 0.9588 | 0.9461 | 0.9894 |
| 0.0154 | 2.0 | 1756 | 0.0479 | 0.9402 | 0.9660 | 0.9529 | 0.9904 |
3edb44d5207b601adf72b27eac9fc03c
apache-2.0
['generated_from_trainer']
false
t5-small-mlm-pubmed This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.8008 - Rouge2 Precision: 0.6071 - Rouge2 Recall: 0.4566 - Rouge2 Fmeasure: 0.5079
45106ddc9f11e80b511e8affd3f1437f
apache-2.0
['generated_from_trainer']
false
Training results | Training Loss | Epoch | Step | Validation Loss | Rouge2 Precision | Rouge2 Recall | Rouge2 Fmeasure | |:-------------:|:-----:|:-----:|:---------------:|:----------------:|:-------------:|:---------------:| | 0.914 | 0.75 | 500 | 0.8691 | 0.5901 | 0.4357 | 0.4879 | | 0.9093 | 1.51 | 1000 | 0.8646 | 0.5867 | 0.4372 | 0.488 | | 0.895 | 2.26 | 1500 | 0.8618 | 0.5891 | 0.4387 | 0.49 | | 0.8842 | 3.02 | 2000 | 0.8571 | 0.5899 | 0.4374 | 0.4891 | | 0.8796 | 3.77 | 2500 | 0.8544 | 0.5903 | 0.4406 | 0.4916 | | 0.8759 | 4.52 | 3000 | 0.8513 | 0.5921 | 0.4395 | 0.4912 | | 0.8621 | 5.28 | 3500 | 0.8485 | 0.5934 | 0.4413 | 0.493 | | 0.8613 | 6.03 | 4000 | 0.8442 | 0.5944 | 0.4428 | 0.4944 | | 0.8537 | 6.79 | 4500 | 0.8406 | 0.594 | 0.4414 | 0.4932 | | 0.8518 | 7.54 | 5000 | 0.8399 | 0.5956 | 0.4424 | 0.4945 | | 0.8438 | 8.3 | 5500 | 0.8365 | 0.5953 | 0.4452 | 0.4964 | | 0.8339 | 9.05 | 6000 | 0.8353 | 0.5983 | 0.4468 | 0.4983 | | 0.8307 | 9.8 | 6500 | 0.8331 | 0.5979 | 0.4461 | 0.4976 | | 0.8328 | 10.56 | 7000 | 0.8304 | 0.5975 | 0.4465 | 0.4979 | | 0.8263 | 11.31 | 7500 | 0.8283 | 0.5977 | 0.4467 | 0.4981 | | 0.8168 | 12.07 | 8000 | 0.8267 | 0.5971 | 0.4463 | 0.4976 | | 0.8165 | 12.82 | 8500 | 0.8248 | 0.5969 | 0.4462 | 0.4976 | | 0.8084 | 13.57 | 9000 | 0.8245 | 0.6018 | 0.4527 | 0.5035 | | 0.8136 | 14.33 | 9500 | 0.8219 | 0.6023 | 0.4509 | 0.5023 | | 0.8073 | 15.08 | 10000 | 0.8206 | 0.6002 | 0.4486 | 0.5001 | | 0.808 | 15.84 | 10500 | 0.8185 | 0.6009 | 0.4506 | 0.5019 | | 0.8027 | 16.59 | 11000 | 0.8173 | 0.5978 | 0.4478 | 0.4989 | | 0.8061 | 17.35 | 11500 | 0.8169 | 0.6022 | 0.4513 | 0.5026 | | 0.7922 | 18.1 | 12000 | 0.8152 | 0.6016 | 0.4501 | 0.5016 | | 0.7928 | 18.85 | 12500 | 0.8141 | 0.6009 | 0.45 | 0.5012 | | 0.7909 | 19.61 | 13000 | 0.8143 | 0.6019 | 0.4521 | 0.5028 | | 0.7909 | 20.36 | 13500 | 0.8115 | 0.5997 | 0.4505 | 0.5011 | | 0.7949 | 21.12 | 14000 | 0.8115 | 0.6043 | 0.4536 | 0.5048 | | 0.7853 | 21.87 | 14500 | 0.8095 | 0.6033 | 0.4527 | 0.5038 | | 0.7819 | 22.62 | 15000 | 0.8095 | 0.6054 | 0.4541 | 0.5056 | | 0.7828 | 23.38 | 15500 | 0.8075 | 0.6036 | 0.453 | 0.5042 | | 0.787 | 24.13 | 16000 | 0.8068 | 0.6031 | 0.4528 | 0.504 | | 0.7739 | 24.89 | 16500 | 0.8072 | 0.6043 | 0.4529 | 0.5045 | | 0.7782 | 25.64 | 17000 | 0.8073 | 0.606 | 0.4551 | 0.5063 | | 0.7772 | 26.4 | 17500 | 0.8063 | 0.6055 | 0.4549 | 0.5062 | | 0.7718 | 27.15 | 18000 | 0.8057 | 0.606 | 0.4546 | 0.5059 | | 0.7747 | 27.9 | 18500 | 0.8045 | 0.6046 | 0.4543 | 0.5054 | | 0.7738 | 28.66 | 19000 | 0.8035 | 0.6059 | 0.4549 | 0.506 | | 0.7642 | 29.41 | 19500 | 0.8041 | 0.6053 | 0.4545 | 0.5058 | | 0.7666 | 30.17 | 20000 | 0.8039 | 0.6066 | 0.457 | 0.508 | | 0.7686 | 30.92 | 20500 | 0.8027 | 0.6075 | 0.4571 | 0.5081 | | 0.7664 | 31.67 | 21000 | 0.8026 | 0.6062 | 0.4566 | 0.5076 | | 0.77 | 32.43 | 21500 | 0.8022 | 0.6068 | 0.4571 | 0.5081 | | 0.7618 | 33.18 | 22000 | 0.8015 | 0.6065 | 0.4563 | 0.5072 | | 0.7615 | 33.94 | 22500 | 0.8013 | 0.6064 | 0.4565 | 0.5074 | | 0.7611 | 34.69 | 23000 | 0.8017 | 0.607 | 0.4567 | 0.5078 | | 0.7611 | 35.44 | 23500 | 0.8013 | 0.608 | 0.4565 | 0.5082 | | 0.7604 | 36.2 | 24000 | 0.8012 | 0.6069 | 0.4561 | 0.5072 | | 0.7599 | 36.95 | 24500 | 0.8013 | 0.6078 | 0.4571 | 0.5085 | | 0.7542 | 37.71 | 25000 | 0.8016 | 0.6083 | 0.4579 | 0.5091 | | 0.7637 | 38.46 | 25500 | 0.8009 | 0.6072 | 0.4569 | 0.5081 | | 0.7596 | 39.22 | 26000 | 0.8008 | 0.6069 | 0.4566 | 0.5078 | | 0.7604 | 39.97 | 26500 | 0.8008 | 0.6071 | 0.4566 | 0.5079 |
3b1da4aca7297cd473f6823111c5cbfb
gpl-3.0
['object-detection', 'yolo', 'autogenerated-modelcard']
false
Model Description <!-- Provide a longer summary of what this model is. --> YOLOv6 is a single-stage object detection framework dedicated to industrial applications, with hardware-friendly efficient design and high performance. - **Developed by:** [More Information Needed] - **Shared by [Optional]:** [@nateraw](https://hf.co/nateraw) - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Related Models:** [yolov6t](https://hf.co/nateraw/yolov6t), [yolov6n](https://hf.co/nateraw/yolov6n) - **Parent Model:** N/A - **Resources for more information:** The [official GitHub Repository](https://github.com/meituan/YOLOv6)
f27eeebbd0e4db5b7ee8fb2f5e7e8f0b
mit
[]
false
If you cannot load the model, you can recreate it with the code below.
```
import torch
import torch.nn as nn

class Janken(nn.Module):
    def __init__(self):
        super().__init__()
        self.fc1 = nn.Linear(2, 20, bias=False)
        self.fc2 = nn.Linear(20, 60, bias=False)
        self.fc3 = nn.Linear(60, 70, bias=False)
        self.fc4 = nn.Linear(70, 70, bias=False)
        self.fc5 = nn.Linear(70, 30, bias=False)
        self.fc6 = nn.Linear(30, 10, bias=False)
        self.fc7 = nn.Linear(10, 2, bias=False)

    def forward(self, x):
        x = self.fc1(x)
        x = self.fc2(x)
        x = self.fc3(x)
        x = self.fc4(x)
        x = self.fc5(x)
        x = self.fc6(x)
        x = self.fc7(x)
        return x

model = Janken()
optimizer = torch.optim.RMSprop(model.parameters(), lr=0.00011)
criterion = nn.MSELoss()

epochs = 250
patterns = [[[1, 0], [0, 1]], [[0, 1], [0, 0]], [[0, 0], [1, 0]]]
for epoch in range(epochs):
    optimizer.zero_grad()
    for pattern in patterns:
        inputs = torch.Tensor(pattern[0])
        targets = torch.Tensor(pattern[1])
        outputs = model(inputs)
        loss = criterion(outputs, targets)
        loss.backward()
        optimizer.step()
    print(f"{epoch+1}/{epochs}\t\t{loss.item()}")

# torch.save takes the object first and the path second
torch.save(model, "janken.bin")
```
Encoding:<br> 1,0 = Paper<br> 0,1 = Knife<br> 0,0 = Rock<br> <img src="https://i.imgur.com/FyAKxKB.png">
ce96df4736a0caea111f32a3263cad4e
mit
['audio', 'speech-translation', 'automatic-speech-recognition']
false
S2T-SMALL-COVOST2-CA-EN-ST `s2t-small-covost2-ca-en-st` is a Speech to Text Transformer (S2T) model trained for end-to-end Speech Translation (ST). The S2T model was proposed in [this paper](https://arxiv.org/abs/2010.05171) and released in [this repository](https://github.com/pytorch/fairseq/tree/master/examples/speech_to_text)
c17534840ab4cf11467a7ced7ddf3a2b
mit
['audio', 'speech-translation', 'automatic-speech-recognition']
false
Intended uses & limitations This model can be used for end-to-end Catalan speech to English text translation. See the [model hub](https://huggingface.co/models?filter=speech_to_text) to look for other S2T checkpoints.
4cf1215cf2c062d563c1229878aef2d0
mit
['audio', 'speech-translation', 'automatic-speech-recognition']
false
How to use As this is a standard sequence-to-sequence transformer model, you can use the `generate` method to generate the transcripts by passing the speech features to the model. *Note: The `Speech2TextProcessor` object uses [torchaudio](https://github.com/pytorch/audio) to extract the filter bank features. Make sure to install the `torchaudio` package before running this example.* You could either install those as extra speech dependencies with `pip install "transformers[speech, sentencepiece]"` or install the packages separately with `pip install torchaudio sentencepiece`.

```python
import torch
from transformers import Speech2TextProcessor, Speech2TextForConditionalGeneration
from datasets import load_dataset
import soundfile as sf

model = Speech2TextForConditionalGeneration.from_pretrained("facebook/s2t-small-covost2-ca-en-st")
processor = Speech2TextProcessor.from_pretrained("facebook/s2t-small-covost2-ca-en-st")

def map_to_array(batch):
    speech, _ = sf.read(batch["file"])
    batch["speech"] = speech
    return batch

ds = load_dataset("patrickvonplaten/librispeech_asr_dummy", "clean", split="validation")
ds = ds.map(map_to_array)

inputs = processor(ds["speech"][0], sampling_rate=48_000, return_tensors="pt")
generated_ids = model.generate(input_ids=inputs["input_features"], attention_mask=inputs["attention_mask"])
translation = processor.batch_decode(generated_ids, skip_special_tokens=True)
```
2a2c6c8266f0424ba0bcbd740df71328
mit
['audio', 'speech-translation', 'automatic-speech-recognition']
false
Training data The s2t-small-covost2-ca-en-st is trained on the Catalan-English subset of [CoVoST2](https://github.com/facebookresearch/covost). CoVoST is a large-scale multilingual ST corpus based on [Common Voice](https://arxiv.org/abs/1912.06670), created to foster ST research with the largest ever open dataset
b1e9a2b33cf6e1f438ed747dfb116ab9
apache-2.0
['super-image', 'image-super-resolution']
false
Enhanced Deep Residual Networks for Single Image Super-Resolution (EDSR) EDSR model pre-trained on DIV2K (800 images training, augmented to 4000 images, 100 images validation) for 2x, 3x and 4x image super resolution. It was introduced in the paper [Enhanced Deep Residual Networks for Single Image Super-Resolution](https://arxiv.org/abs/1707.02921) by Lim et al. (2017) and first released in [this repository](https://github.com/sanghyun-son/EDSR-PyTorch). The goal of image super resolution is to restore a high resolution (HR) image from a single low resolution (LR) image. The image below shows the ground truth (HR), the bicubic upscaling x2 and EDSR upscaling x2. ![Comparing Bicubic upscaling against the models x4 upscaling on Set5 Image 4](images/edsr_4_4_compare.png "Comparing Bicubic upscaling against the models x4 upscaling on Set5 Image 4")
50fb33b85e6db5c80f4f6892df7168c3
apache-2.0
['super-image', 'image-super-resolution']
false
How to use The model can be used with the [super_image](https://github.com/eugenesiow/super-image) library:
```bash
pip install super-image
```
Here is how to use a pre-trained model to upscale your image:
```python
from super_image import EdsrModel, ImageLoader
from PIL import Image
import requests

url = 'https://paperswithcode.com/media/datasets/Set5-0000002728-07a9793f_zA3bDjj.jpg'
image = Image.open(requests.get(url, stream=True).raw)

model = EdsrModel.from_pretrained('eugenesiow/edsr', scale=2)
# Run the model and save the upscaled result (as in the super_image README)
inputs = ImageLoader.load_image(image)
preds = model(inputs)
ImageLoader.save_image(preds, './scaled_2x.png')
```
4391fee36b7430f02772a8702673ad71
apache-2.0
['super-image', 'image-super-resolution']
false
Algorithm). Evaluation datasets include: - Set5 - [Bevilacqua et al. (2012)](https://huggingface.co/datasets/eugenesiow/Set5) - Set14 - [Zeyde et al. (2010)](https://huggingface.co/datasets/eugenesiow/Set14) - BSD100 - [Martin et al. (2001)](https://huggingface.co/datasets/eugenesiow/BSD100) - Urban100 - [Huang et al. (2015)](https://huggingface.co/datasets/eugenesiow/Urban100) The results columns below are represented below as `PSNR/SSIM`. They are compared against a Bicubic baseline. |Dataset |Scale |Bicubic |edsr | |--- |--- |--- |--- | |Set5 |2x |33.64/0.9292 |**38.19/0.9612** | |Set5 |3x |30.39/0.8678 |**35.31/0.9421** | |Set5 |4x |28.42/0.8101 |**32.5/0.8986** | |Set14 |2x |30.22/0.8683 |**33.99/0.9215** | |Set14 |3x |27.53/0.7737 |**31.18/0.862** | |Set14 |4x |25.99/0.7023 |**28.92/0.7899** | |BSD100 |2x |29.55/0.8425 |**33.89/0.9266** | |BSD100 |3x |27.20/0.7382 |**29.77/0.8224** | |BSD100 |4x |25.96/0.6672 |**28.62/0.7689** | |Urban100 |2x |26.66/0.8408 |**32.68/0.9331** | |Urban100 |3x | |**29.75/0.8825** | |Urban100 |4x |23.14/0.6573 |**26.53/0.7995** | ![Comparing Bicubic upscaling against the models x4 upscaling on Set5 Image 2](images/edsr_2_4_compare.png "Comparing Bicubic upscaling against the models x4 upscaling on Set5 Image 2") You can find a notebook to easily run evaluation on pretrained models below: [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/eugenesiow/super-image-notebooks/blob/master/notebooks/Evaluate_Pretrained_super_image_Models.ipynb "Open in Colab")
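For reference, the PSNR figures in the table above follow the standard definition; a minimal sketch (assuming 8-bit images, so a peak value of 255 — the helper name is illustrative, not part of the library):

```python
import math

def psnr(mse: float, max_val: float = 255.0) -> float:
    """Peak signal-to-noise ratio in dB from a mean squared error."""
    return 10 * math.log10(max_val ** 2 / mse)
```

Lower MSE between the upscaled output and the ground truth gives a higher PSNR, which is why the super-resolved columns dominate the Bicubic baseline.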
8ad36aa48d1454bf88debca274c4ec20
apache-2.0
['generated_from_trainer']
false
Training hyperparameters The following hyperparameters were used during training: - learning_rate: 9.2e-05 - train_batch_size: 16 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 32 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - num_epochs: 10 - mixed_precision_training: Native AMP
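With a per-device batch of 16 and 2 gradient-accumulation steps, the optimizer effectively sees a batch of 32. A minimal PyTorch sketch of that mechanic (toy model and synthetic data are illustrative, not the actual training setup):

```python
import copy
import torch

torch.manual_seed(0)
model = torch.nn.Linear(4, 1)
ref = copy.deepcopy(model)                # untouched copy for comparison
opt = torch.optim.Adam(model.parameters(), lr=9.2e-05)

accum_steps, micro_bs = 2, 16             # 2 x 16 = effective batch of 32
x, y = torch.randn(32, 4), torch.randn(32, 1)

opt.zero_grad()
for i in range(accum_steps):
    sl = slice(i * micro_bs, (i + 1) * micro_bs)
    loss = torch.nn.functional.mse_loss(model(x[sl]), y[sl])
    (loss / accum_steps).backward()       # scale so the summed grads average correctly
accum_grad = model.weight.grad.clone()
opt.step()                                # one optimizer step per effective batch
```

Dividing each micro-batch loss by `accum_steps` makes the accumulated gradient identical to the gradient of the full 32-sample batch.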
42cffdc441ea499bb95d8e1589767f89
apache-2.0
['generated_from_trainer']
false
Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:-----:|:---------------:| | 3.67 | 0.32 | 5000 | 3.4705 | | 3.573 | 0.63 | 10000 | 3.3747 | | 3.5075 | 0.95 | 15000 | 3.3154 | | 3.4486 | 1.26 | 20000 | 3.2704 | | 3.4207 | 1.58 | 25000 | 3.2351 | | 3.3933 | 1.89 | 30000 | 3.2069 | | 3.3612 | 2.21 | 35000 | 3.1853 | | 3.34 | 2.53 | 40000 | 3.1659 | | 3.3422 | 2.84 | 45000 | 3.1503 | | 3.3034 | 3.16 | 50000 | 3.1376 | | 3.2886 | 3.47 | 55000 | 3.1283 | | 3.2806 | 3.79 | 60000 | 3.1208 | | 3.2745 | 4.1 | 65000 | 3.1141 | | 3.2894 | 4.42 | 70000 | 3.1093 | | 3.264 | 4.74 | 75000 | 3.1075 |
c0b1c7ec2bcb02abb2494c3744b4b182
apache-2.0
['generated_from_trainer']
false
bert-nlp-project-ft-imdb This model is a fine-tuned version of [jestemleon/bert-nlp-project-imdb](https://huggingface.co/jestemleon/bert-nlp-project-imdb) on the [steciuk/imdb](https://huggingface.co/datasets/steciuk/imdb) dataset. It achieves the following results on the evaluation set: - Loss: 0.2429 - Accuracy: 0.9477 - F1: 0.9468 and the following results on the test set: - Accuracy: 0.9467 - F1: 0.9480
17d08a312c62827562a38293257184bc
apache-2.0
['generated_from_trainer']
false
Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | |:-------------:|:-----:|:----:|:---------------:|:--------:|:------:| | 0.2603 | 0.38 | 750 | 0.1922 | 0.9293 | 0.9293 | | 0.2021 | 0.75 | 1500 | 0.1633 | 0.9463 | 0.9446 | | 0.1706 | 1.12 | 2250 | 0.1957 | 0.944 | 0.9425 | | 0.1195 | 1.5 | 3000 | 0.2054 | 0.9455 | 0.9452 | | 0.1106 | 1.88 | 3750 | 0.2417 | 0.9383 | 0.9391 | | 0.0747 | 2.25 | 4500 | 0.2562 | 0.945 | 0.9441 | | 0.0566 | 2.62 | 5250 | 0.2544 | 0.946 | 0.9443 | | 0.0511 | 3.0 | 6000 | 0.2429 | 0.9477 | 0.9468 |
e856f76eebd36f55d53776236ae7ff27
apache-2.0
['chinese', 'classical chinese', 'literary chinese', 'ancient chinese', 'bert', 'pytorch']
false
Model description ![GuwenBERT](https://user-images.githubusercontent.com/9592150/97142000-cad08e00-179a-11eb-88df-aff9221482d8.png) This is a RoBERTa model pre-trained on Classical Chinese. You can fine-tune GuwenBERT for downstream tasks, such as sentence breaking, punctuation, named entity recognition, and so on. For more information about RoBERTa, take a look at RoBERTa's official repo.
1d6806a082aca6a516e757c6791e530d
apache-2.0
['chinese', 'classical chinese', 'literary chinese', 'ancient chinese', 'bert', 'pytorch']
false
How to use
```python
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("ethanyt/guwenbert-large")
model = AutoModel.from_pretrained("ethanyt/guwenbert-large")
```
ecbfee50cad950d91aa61bbd9896fe7a
apache-2.0
['chinese', 'classical chinese', 'literary chinese', 'ancient chinese', 'bert', 'pytorch']
false
Training data The training data is the Daizhige dataset (殆知阁古代文献), which contains 15,694 books in Classical Chinese covering Buddhism, Confucianism, Medicine, History, Zi, Yi, Yizang, Shizang, Taoism, and Jizang. 76% of them are punctuated. The total number of characters is 1.7B (1,743,337,673). All traditional characters are converted to simplified characters. The vocabulary is constructed from this dataset and its size is 23,292.
7001d34b0fa76adf55b1ef23d08e8feb
apache-2.0
['chinese', 'classical chinese', 'literary chinese', 'ancient chinese', 'bert', 'pytorch']
false
Training procedure The models are initialized with `hfl/chinese-roberta-wwm-ext-large` and then pre-trained with a 2-step strategy. In the first step, the model learns MLM with only word embeddings updated during training, until convergence. In the second step, all parameters are updated during training. The models are trained on 4 V100 GPUs for 120K steps (20K for step
d8f0778abef921f32d7a2f664658f398
apache-2.0
['chinese', 'classical chinese', 'literary chinese', 'ancient chinese', 'bert', 'pytorch']
false
2) with a batch size of 2,048 and a sequence length of 512. The optimizer used is Adam with a learning rate of 1e-4, adam-betas of (0.9,0.98), adam-eps of 1e-6, a weight decay of 0.01, learning rate warmup for 5K steps, and linear decay of learning rate after.
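The two-step schedule described above — train only the word embeddings first, then unfreeze everything — can be sketched in PyTorch as follows (the `TinyLM` model is an illustrative stand-in, not the actual RoBERTa checkpoint):

```python
import torch
from torch import nn

class TinyLM(nn.Module):
    """Stand-in for a masked language model with an embedding layer."""
    def __init__(self):
        super().__init__()
        self.embeddings = nn.Embedding(100, 16)
        self.encoder = nn.Linear(16, 16)
        self.head = nn.Linear(16, 100)

    def forward(self, ids):
        return self.head(self.encoder(self.embeddings(ids)))

model = TinyLM()

# Step 1: freeze everything except the (word) embeddings.
for name, p in model.named_parameters():
    p.requires_grad = name.startswith("embeddings")

step1_trainable = [n for n, p in model.named_parameters() if p.requires_grad]

# Step 2: after convergence, unfreeze all parameters.
for p in model.parameters():
    p.requires_grad = True
```

Only the embedding weights receive gradient updates in step 1; step 2 then fine-tunes the whole network.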
f20d1eb33c9f34588454b34d253b3e4b
apache-2.0
['chinese', 'classical chinese', 'literary chinese', 'ancient chinese', 'bert', 'pytorch']
false
"Gulian Cup" Ancient Books Named Entity Recognition Evaluation Second place in the competition. Detailed test results: | NE Type | Precision | Recall | F1 | |:----------:|:-----------:|:------:|:-----:| | Book Name | 77.50 | 73.73 | 75.57 | | Other Name | 85.85 | 89.32 | 87.55 | | Micro Avg. | 83.88 | 85.39 | 84.63 |
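A quick sanity check on the table: each F1 column is the harmonic mean of the precision and recall beside it. A minimal sketch:

```python
def f1(precision: float, recall: float) -> float:
    """Harmonic mean of precision and recall (same scale as the inputs)."""
    return 2 * precision * recall / (precision + recall)
```

For example, `f1(83.88, 85.39)` reproduces the 84.63 micro average above (to rounding).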
456f79dd18b780e61c32a1ee39353f97
apache-2.0
['chinese', 'classical chinese', 'literary chinese', 'ancient chinese', 'bert', 'pytorch']
false
About Us We are from [Datahammer](https://datahammer.net), Beijing Institute of Technology. For more cooperation, please contact email: ethanyt [at] qq.com > Created with ❤️ by Tan Yan [![Github icon](https://cdn0.iconfinder.com/data/icons/octicons/1024/mark-github-32.png)](https://github.com/Ethan-yt) and Zewen Chi [![Github icon](https://cdn0.iconfinder.com/data/icons/octicons/1024/mark-github-32.png)](https://github.com/CZWin32768)
c326234a729ee3e780480f6497a4071e
apache-2.0
['generated_from_trainer']
false
roberta-base-bne-finetuned-amazon_reviews_multi This model is a fine-tuned version of [BSC-TeMU/roberta-base-bne](https://huggingface.co/BSC-TeMU/roberta-base-bne) on the amazon_reviews_multi dataset. It achieves the following results on the evaluation set: - Loss: 0.2313 - Accuracy: 0.9337
88ebd355271a76904fcb44d36e115cf1
apache-2.0
['generated_from_trainer']
false
Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 0.1864 | 1.0 | 1250 | 0.2209 | 0.9317 | | 0.1063 | 2.0 | 2500 | 0.2313 | 0.9337 |
8616dd477ed0e04a64006e9865c29db9
creativeml-openrail-m
['text-to-image', 'diffusers', 'lora']
false
cat-toy-z Dreambooth LoRA model trained by multimodalart with the [Hugging Face Dreambooth Training Space](https://huggingface.co/spaces/multimodalart/dreambooth-training) on the v1-5 base model. You can run your new concept via `diffusers` with the [Colab Notebook for Inference](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_dreambooth_inference.ipynb). Don't forget to use the concept prompts! Sample pictures of: cttoyz (use that in your prompt) ![cttoyz 0](https://huggingface.co/multimodalart/cat-toy-z/resolve/main/concept_images/cttoyz_%281%29.jpg)![cttoyz 1](https://huggingface.co/multimodalart/cat-toy-z/resolve/main/concept_images/cttoyz_%282%29.jpg)![cttoyz 2](https://huggingface.co/multimodalart/cat-toy-z/resolve/main/concept_images/cttoyz_%283%29.jpg)![cttoyz 3](https://huggingface.co/multimodalart/cat-toy-z/resolve/main/concept_images/cttoyz_%284%29.jpg)
e2de6bb6de351d8840da045d842bd8f0
apache-2.0
['generated_from_trainer']
false
distilbert-base-uncased-finetuned-emotion This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset. It achieves the following results on the evaluation set: - Loss: 0.2294 - Accuracy: 0.924 - F1: 0.9240
ae043bbf175ce8682634a0b86b3c48fd
apache-2.0
['generated_from_trainer']
false
Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | |:-------------:|:-----:|:----:|:---------------:|:--------:|:------:| | No log | 1.0 | 250 | 0.3316 | 0.9025 | 0.8985 | | No log | 2.0 | 500 | 0.2294 | 0.924 | 0.9240 |
4cce011e15a34294c82dca159f4425b9
apache-2.0
['generated_from_trainer']
false
distilbert-finetuned This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 1.9895
69bbe08da16dd08a4a902aaa7116734a
apache-2.0
['generated_from_trainer']
false
Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:-----:|:---------------:| | 2.2103 | 1.0 | 10024 | 2.0834 | | 2.1146 | 2.0 | 20048 | 2.0387 | | 2.0721 | 3.0 | 30072 | 2.0095 |
0653fe228af193ec619b74b998e23e84
mit
['vision', 'image-classification']
false
DiNAT (base variant) DiNAT-Base trained on ImageNet-1K at 224x224 resolution. It was introduced in the paper [Dilated Neighborhood Attention Transformer](https://arxiv.org/abs/2209.15001) by Hassani et al. and first released in [this repository](https://github.com/SHI-Labs/Neighborhood-Attention-Transformer).
33e642ab3317bb3130c560490f205e63
mit
['vision', 'image-classification']
false
Example Here is how to use this model to classify an image from the COCO 2017 dataset into one of the 1,000 ImageNet classes:
```python
from transformers import AutoImageProcessor, DinatForImageClassification
from PIL import Image
import requests

url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)

feature_extractor = AutoImageProcessor.from_pretrained("shi-labs/dinat-base-in1k-224")
model = DinatForImageClassification.from_pretrained("shi-labs/dinat-base-in1k-224")

inputs = feature_extractor(images=image, return_tensors="pt")
outputs = model(**inputs)
logits = outputs.logits
# map the highest-scoring logit to its ImageNet label
predicted_class_idx = logits.argmax(-1).item()
print("Predicted class:", model.config.id2label[predicted_class_idx])
```
3ae5835fd39e50b6bba7f84ee5cb6d57
apache-2.0
['hf-asr-leaderboard', 'generated_from_trainer']
false
Whisper Small Fa - BuzzyBuzzy This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on the Common Voice 11.0 dataset. It achieves the following results on the evaluation set: - Loss: 0.5968 - Wer: 34.5206
d96cd5812bb3b1f6e1068d4f124fbe67
apache-2.0
['hf-asr-leaderboard', 'generated_from_trainer']
false
Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - training_steps: 10000 - mixed_precision_training: Native AMP
582e853eac821dc780d21b5f27642790
apache-2.0
['hf-asr-leaderboard', 'generated_from_trainer']
false
Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:-----:|:---------------:|:-------:| | 0.0091 | 0.86 | 2000 | 0.5627 | 37.7340 | | 0.0077 | 1.72 | 4000 | 0.5761 | 36.5033 | | 0.0028 | 2.58 | 6000 | 0.5851 | 35.5931 | | 0.0011 | 3.44 | 8000 | 0.5886 | 35.1893 | | 0.0001 | 4.3 | 10000 | 0.5968 | 34.5206 |
c04a6d50478a5611c7682d09ce5ea7ca
apache-2.0
['translation']
false
spa-nor * source group: Spanish * target group: Norwegian * OPUS readme: [spa-nor](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/spa-nor/README.md) * model: transformer-align * source language(s): spa * target language(s): nno nob * pre-processing: normalization + SentencePiece (spm12k,spm12k) * a sentence initial language token is required in the form of `>>id<<` (id = valid target language ID) * download original weights: [opus-2020-06-17.zip](https://object.pouta.csc.fi/Tatoeba-MT-models/spa-nor/opus-2020-06-17.zip) * test set translations: [opus-2020-06-17.test.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/spa-nor/opus-2020-06-17.test.txt) * test set scores: [opus-2020-06-17.eval.txt](https://object.pouta.csc.fi/Tatoeba-MT-models/spa-nor/opus-2020-06-17.eval.txt)
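Since a sentence-initial target-language token is required, inputs must be prefixed before tokenization. A minimal sketch (the helper name is illustrative, not part of the release; the valid IDs are the target constituents listed for this model):

```python
# Valid target-language IDs for this model (Bokmål and Nynorsk).
TARGET_IDS = {"nob", "nno"}

def prepend_lang_token(sentence: str, lang_id: str = "nob") -> str:
    """Prefix a source sentence with the >>id<< target-language token."""
    if lang_id not in TARGET_IDS:
        raise ValueError(f"unknown target language id: {lang_id}")
    return f">>{lang_id}<< {sentence}"
```

The prefixed string is then passed to the tokenizer as usual.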
b3d3aeb544acb4922aba3cf744758df8
apache-2.0
['translation']
false
System Info: - hf_name: spa-nor - source_languages: spa - target_languages: nor - opus_readme_url: https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/spa-nor/README.md - original_repo: Tatoeba-Challenge - tags: ['translation'] - languages: ['es', 'no'] - src_constituents: {'spa'} - tgt_constituents: {'nob', 'nno'} - src_multilingual: False - tgt_multilingual: False - prepro: normalization + SentencePiece (spm12k,spm12k) - url_model: https://object.pouta.csc.fi/Tatoeba-MT-models/spa-nor/opus-2020-06-17.zip - url_test_set: https://object.pouta.csc.fi/Tatoeba-MT-models/spa-nor/opus-2020-06-17.test.txt - src_alpha3: spa - tgt_alpha3: nor - short_pair: es-no - chrF2_score: 0.565 - bleu: 36.7 - brevity_penalty: 0.99 - ref_len: 7217.0 - src_name: Spanish - tgt_name: Norwegian - train_date: 2020-06-17 - src_alpha2: es - tgt_alpha2: no - prefer_old: False - long_pair: spa-nor - helsinki_git_sha: 480fcbe0ee1bf4774bcbe6226ad9f58e63f6c535 - transformers_git_sha: 2207e5d8cb224e954a7cba69fa4ac2309e9ff30b - port_machine: brutasse - port_time: 2020-08-21-14:41
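The reported BLEU of 36.7 carries a brevity penalty of 0.99, meaning the system output was slightly shorter than the 7217-token reference. The standard BLEU brevity penalty can be sketched as:

```python
import math

def brevity_penalty(candidate_len: int, reference_len: int) -> float:
    """BLEU brevity penalty: 1 when the candidate is at least as long
    as the reference, exp(1 - ref/cand) otherwise."""
    if candidate_len >= reference_len:
        return 1.0
    return math.exp(1 - reference_len / candidate_len)
```

A penalty of 0.99 against a 7217-token reference corresponds to a candidate only about 1% shorter.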
fd6debeeda79292b87777fefb204c698
openrail
[]
false
Training. First get `transformers`:
```
git clone https://github.com/huggingface/transformers
cd transformers
```
Prepare an initialized opt-1.3b model:
```
cat << EOT > prep-fp32.py
from transformers import AutoConfig, AutoModel, AutoTokenizer
import torch

mname = "facebook/opt-1.3b"

config = AutoConfig.from_pretrained(mname)
# float32 to match the fp32 naming of this script and checkpoint path
model = AutoModel.from_config(config, torch_dtype=torch.float32)
tokenizer = AutoTokenizer.from_pretrained(mname)

path = "opt-1.3b-fp32"
model.save_pretrained(path)
tokenizer.save_pretrained(path)
EOT
```
Run:
```
python prep-fp32.py
```
Train from scratch on a single 8x 80GB A100 node on the `realnewslike` subset of https://huggingface.co/datasets/c4:
```
git clone https://github.com/huggingface/transformers
cd transformers

PYTHONPATH="src" python -m torch.distributed.run \
    --nproc_per_node=8 \
    --nnode=1 \
    --node_rank=0 \
    --master_addr=127.0.0.1 \
    --master_port=9901 \
    examples/pytorch/language-modeling/run_clm.py \
    --tf32 1 \
    --seed 42 \
    --dataset_name c4 \
    --dataset_config_name realnewslike \
    --model_name_or_path opt-1.3b-fp32 \
    --per_device_train_batch_size 6 \
    --per_device_eval_batch_size 6 \
    --gradient_accumulation_steps 2 \
    --do_train \
    --logging_steps 5 \
    --save_steps 1000 \
    --eval_steps 1000 \
    --weight_decay 0.1 \
    --num_train_epochs 1 \
    --adam_beta1 0.9 \
    --adam_beta2 0.95 \
    --learning_rate 0.0002 \
    --lr_scheduler_type linear \
    --warmup_steps 1000 \
    --report_to tensorboard \
    --output_dir saved \
    --logging_dir tb \
    --log_level warning \
    --preprocessing_num_workers 32
```
The training took about 40h.
46719ef6e2cb89417c684de3c0cc503c
apache-2.0
['generated_from_trainer']
false
distilbert-base-uncased-finetuned-emotion This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset. It achieves the following results on the evaluation set: - Loss: 0.1675 - Accuracy: 0.9325 - F1: 0.9327
5d79b0211b0b27e667f882a70f58a45c
apache-2.0
['generated_from_trainer']
false
Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | |:-------------:|:-----:|:----:|:---------------:|:--------:|:------:| | 0.29 | 1.0 | 250 | 0.1896 | 0.9265 | 0.9255 | | 0.1557 | 2.0 | 500 | 0.1675 | 0.9325 | 0.9327 |
685f35651f6263d70ea3489fcec4425f
creativeml-openrail-m
['pytorch', 'diffusers', 'stable-diffusion', 'text-to-image', 'diffusion-models-class', 'dreambooth-hackathon', 'animal']
false
DreamBooth model for the dashdash concept trained by jiaenyue. This is a Stable Diffusion model fine-tuned on the dashdash concept with DreamBooth. It can be used by modifying the `instance_prompt`: **a photo of dashdash cat** This model was created as part of the DreamBooth Hackathon 🔥. Visit the [organisation page](https://huggingface.co/dreambooth-hackathon) for instructions on how to take part!
e8a4bd216e44f5fa41b1dc936184ac32
other
['stable-diffusion', 'stable-diffusion-diffusers', 'image-to-image']
false
Stable Diffusion Image Variations Model Card This version of Stable Diffusion has been fine-tuned from [CompVis/stable-diffusion-v1-3-original](https://huggingface.co/CompVis/stable-diffusion-v-1-3-original) to accept CLIP image embeddings rather than text embeddings. This allows the creation of "image variations" similar to DALLE-2 using Stable Diffusion. This version of the weights has been ported to huggingface Diffusers; using it with the Diffusers library requires the [Lambda Diffusers repo](https://github.com/LambdaLabsML/lambda-diffusers). ![](https://raw.githubusercontent.com/justinpinkney/stable-diffusion/main/assets/im-vars-thin.jpg)
7c104df67482bfb09175c89102013238
other
['stable-diffusion', 'stable-diffusion-diffusers', 'image-to-image']
false
Example First clone [Lambda Diffusers](https://github.com/LambdaLabsML/lambda-diffusers) and install any requirements (in a virtual environment in the example below):
```bash
git clone https://github.com/LambdaLabsML/lambda-diffusers.git
cd lambda-diffusers
python -m venv .venv
source .venv/bin/activate
pip install -r requirements.txt
```
Then run the following python code:
```python
from pathlib import Path
from lambda_diffusers import StableDiffusionImageEmbedPipeline
from PIL import Image
import torch

device = "cuda" if torch.cuda.is_available() else "cpu"
pipe = StableDiffusionImageEmbedPipeline.from_pretrained("lambdalabs/sd-image-variations-diffusers")
pipe = pipe.to(device)

im = Image.open("your/input/image/here.jpg")
num_samples = 4
image = pipe(num_samples * [im], guidance_scale=3.0)
image = image["sample"]

base_path = Path("outputs/im2im")
base_path.mkdir(exist_ok=True, parents=True)
for idx, im in enumerate(image):
    im.save(base_path / f"{idx:06}.jpg")
```
bb25f9c4f89142c9c4b4b3c84890d1c7
other
['stable-diffusion', 'stable-diffusion-diffusers', 'image-to-image']
false
Training **Training Data** The model developers used the following dataset for training the model: - LAION-2B (en) and subsets thereof (see next section) **Training Procedure** This model is fine-tuned from Stable Diffusion v1-3 where the text encoder has been replaced with an image encoder. The training procedure is the same as for Stable Diffusion except for the fact that images are encoded through a ViT-L/14 image-encoder including the final projection layer to the CLIP shared embedding space. - **Hardware:** 4 x A6000 GPUs (provided by [Lambda GPU Cloud](https://lambdalabs.com/service/gpu-cloud)) - **Optimizer:** AdamW - **Gradient Accumulations**: 1 - **Steps**: 87,000 - **Batch:** 6 x 4 = 24 - **Learning rate:** warmup to 0.0001 for 1,000 steps and then kept constant Training was done using a [modified version of the original Stable Diffusion training code](https://github.com/justinpinkney/stable-diffusion); the original version of the weights is [here](https://huggingface.co/lambdalabs/stable-diffusion-image-conditioned).
d4bd8618442f6243a81426828851814a
other
['stable-diffusion', 'stable-diffusion-diffusers', 'image-to-image']
false
Safety Module The intended use of this model is with the [Safety Checker](https://github.com/huggingface/diffusers/blob/main/src/diffusers/pipelines/stable_diffusion/safety_checker.py) in Diffusers. This checker works by checking model outputs against known hard-coded NSFW concepts. The concepts are intentionally hidden to reduce the likelihood of reverse-engineering this filter. Specifically, the checker compares the class probability of harmful concepts in the embedding space of the `CLIPModel` *after generation* of the images. The concepts are passed into the model with the generated image and compared to a hand-engineered weight for each NSFW concept. *This model card was written by: Justin Pinkney and is based on the [Stable Diffusion model card](https://huggingface.co/CompVis/stable-diffusion-v1-4).*
3c6e20a5504352bbcc7e093e89b97de7
apache-2.0
[]
false
Example Usage
```python
from transformers import AutoTokenizer, T5ForConditionalGeneration

tokenizer = AutoTokenizer.from_pretrained("laituan245/molt5-large", model_max_length=512)
model = T5ForConditionalGeneration.from_pretrained('laituan245/molt5-large')
```
28b5d7801ba27db613ffa22e5f939707
apache-2.0
['generated_from_keras_callback']
false
xander71988/t5-small-finetuned-facet-contract-type

This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on an unknown dataset. It achieves the following results on the evaluation set:

- Train Loss: 0.1701
- Validation Loss: 0.1605
- Epoch: 6
41ea0504fdaf49d0c8bc05913be4431e
apache-2.0
['generated_from_keras_callback']
false
Training hyperparameters

The following hyperparameters were used during training:

- optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 5.6e-05, 'decay_steps': 7000, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 0.01}
- training_precision: mixed_float16
f831bddfd7d2d13f2c641c69c6d8e80d
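The `PolynomialDecay` schedule in the optimizer config above (with `power: 1.0` it reduces to a linear decay from 5.6e-05 to 0 over 7,000 steps) can be reproduced in a few lines. This is a sketch matching the config's numbers, not Keras's actual implementation:

```python
def polynomial_decay(step, initial_lr=5.6e-5, decay_steps=7000,
                     end_lr=0.0, power=1.0):
    """Keras-style polynomial decay; with power=1.0 this is linear."""
    step = min(step, decay_steps)       # hold at end_lr once decay is done
    frac = 1.0 - step / decay_steps
    return (initial_lr - end_lr) * frac ** power + end_lr

print(polynomial_decay(0))     # 5.6e-05
print(polynomial_decay(3500))  # 2.8e-05 (halfway: half the rate)
print(polynomial_decay(7000))  # 0.0
```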
apache-2.0
['generated_from_keras_callback']
false
Training results

| Train Loss | Validation Loss | Epoch |
|:----------:|:---------------:|:-----:|
| 0.8446     | 0.3244          | 0     |
| 0.2976     | 0.1945          | 1     |
| 0.2240     | 0.1686          | 2     |
| 0.1970     | 0.1763          | 3     |
| 0.1866     | 0.1548          | 4     |
| 0.1793     | 0.1565          | 5     |
| 0.1701     | 0.1605          | 6     |
dfd79b3be086bfd5fad919edbdff4753
apache-2.0
['translation']
false
opus-mt-fr-gil

* source languages: fr
* target languages: gil
* OPUS readme: [fr-gil](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/fr-gil/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-09.zip](https://object.pouta.csc.fi/OPUS-MT-models/fr-gil/opus-2020-01-09.zip)
* test set translations: [opus-2020-01-09.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/fr-gil/opus-2020-01-09.test.txt)
* test set scores: [opus-2020-01-09.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/fr-gil/opus-2020-01-09.eval.txt)
343ce953ffe8891b0ee35968a5b0b152
apache-2.0
['image-classification', 'vision', 'generated_from_trainer']
false
vit-base-beans

This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the beans dataset. It achieves the following results on the evaluation set:

- Loss: 0.0769
- Accuracy: 0.9850
fc1b1a0741098629c1c315dca35bf6a2
apache-2.0
['image-classification', 'vision', 'generated_from_trainer']
false
Training results

| Training Loss | Epoch | Step | Accuracy | Validation Loss |
|:-------------:|:-----:|:----:|:--------:|:---------------:|
| 0.2572        | 1.0   | 130  | 0.9624   | 0.2294          |
| 0.1531        | 2.0   | 260  | 0.9699   | 0.1501          |
| 0.0817        | 3.0   | 390  | 0.9850   | 0.0896          |
| 0.1444        | 4.0   | 520  | 0.9850   | 0.0833          |
| 0.1576        | 5.0   | 650  | 0.9850   | 0.0769          |
83c608826c8bdda5b4e8b67455d2581b
cc-by-sa-4.0
['vietnamese', 'token-classification', 'pos', 'dependency-parsing']
false
Model Description

This is a RoBERTa model pre-trained on Vietnamese texts for POS-tagging and dependency-parsing, derived from [roberta-base-vietnamese](https://huggingface.co/KoichiYasuoka/roberta-base-vietnamese). Every word is tagged by [UPOS](https://universaldependencies.org/u/pos/) (Universal Part-Of-Speech).
795e8ceaeec557b8b06b7d71ae9f7379