Dataset schema:
- license — string (length 2–30)
- tags — string (length 2–513)
- is_nc — bool (1 class)
- readme_section — string (length 201–597k)
- hash — string (length 32)
apache-2.0
['generated_from_trainer']
false
Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1000
- num_epochs: 4
- mixed_precision_training: Native AMP
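As a rough sketch, the linear scheduler with 1,000 warmup steps listed above can be written as a small function. Note that `total_steps=52000` is an assumption for illustration only; this hyperparameter block does not state the total number of optimizer steps.

```python
def linear_schedule_lr(step, base_lr=1e-4, warmup_steps=1000, total_steps=52000):
    """Learning rate at a given optimizer step under linear warmup + linear decay.

    total_steps is a hypothetical value used for illustration.
    """
    if step < warmup_steps:
        # ramp linearly from 0 up to base_lr over the warmup period
        return base_lr * step / warmup_steps
    # then decay linearly from base_lr down to 0 at total_steps
    return base_lr * max(0.0, (total_steps - step) / (total_steps - warmup_steps))

print(linear_schedule_lr(500))    # halfway through warmup
print(linear_schedule_lr(1000))   # peak learning rate
```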
623a32d556dd8dfb18a6c546daff4dc8
apache-2.0
['generated_from_trainer']
false
Training results

| Training Loss | Epoch | Step | Validation Loss | Wer |
|:-------------:|:-----:|:-----:|:---------------:|:------:|
| 3.219 | 0.04 | 500 | 0.1976 | 0.1215 |
| 0.0762 | 0.08 | 1000 | 0.2818 | 0.1324 |
| 0.0824 | 0.12 | 1500 | 0.4541 | 0.1602 |
| 0.0807 | 0.15 | 2000 | 0.1556 | 0.1162 |
| 0.0799 | 0.19 | 2500 | 0.1618 | 0.1164 |
| 0.0826 | 0.23 | 3000 | 0.3510 | 0.1379 |
| 0.0809 | 0.27 | 3500 | 0.1486 | 0.1182 |
| 0.0854 | 0.31 | 4000 | 0.1267 | 0.1177 |
| 0.0817 | 0.35 | 4500 | 0.1581 | 0.1218 |
| 0.0835 | 0.38 | 5000 | 0.1670 | 0.1251 |
| 0.0841 | 0.42 | 5500 | 0.1576 | 0.1179 |
| 0.0798 | 0.46 | 6000 | 0.2201 | 0.1300 |
| 0.083 | 0.5 | 6500 | 0.1165 | 0.1179 |
| 0.0878 | 0.54 | 7000 | 0.2640 | 0.1430 |
| 0.0811 | 0.58 | 7500 | 0.1585 | 0.1288 |
| 0.083 | 0.62 | 8000 | 0.3127 | 0.1370 |
| 0.083 | 0.65 | 8500 | 0.4790 | 0.1449 |
| 0.0775 | 0.69 | 9000 | 0.1651 | 0.1163 |
| 0.0787 | 0.73 | 9500 | 1.6426 | 0.2083 |
| 0.0781 | 0.77 | 10000 | 0.2307 | 0.1324 |
| 0.0827 | 0.81 | 10500 | 0.1765 | 0.1318 |
| 0.0816 | 0.85 | 11000 | 0.1679 | 0.1201 |
| 0.0797 | 0.88 | 11500 | 0.2506 | 0.1508 |
| 0.0813 | 0.92 | 12000 | 0.1893 | 0.1239 |
| 0.0758 | 0.96 | 12500 | 0.1266 | 0.1147 |
| 0.091 | 1.0 | 13000 | 0.1606 | 0.1180 |
| 0.0677 | 1.04 | 13500 | 0.1107 | 0.1118 |
| 0.0733 | 1.08 | 14000 | 0.1734 | 0.1565 |
| 0.072 | 1.12 | 14500 | 0.1141 | 0.1126 |
| 0.0731 | 1.15 | 15000 | 0.1125 | 0.1112 |
| 0.0793 | 1.19 | 15500 | 0.1818 | 0.1146 |
| 0.07 | 1.23 | 16000 | 0.2678 | 0.1265 |
| 0.0658 | 1.27 | 16500 | 0.2909 | 0.1203 |
| 0.0678 | 1.31 | 17000 | 0.3241 | 0.1280 |
| 0.0681 | 1.35 | 17500 | 0.3243 | 0.1497 |
| 0.0666 | 1.38 | 18000 | 0.2056 | 0.1150 |
| 0.0667 | 1.42 | 18500 | 0.4678 | 0.1252 |
| 0.0656 | 1.46 | 19000 | 0.1603 | 0.1138 |
| 0.0662 | 1.5 | 19500 | 0.1554 | 0.1115 |
| 0.0669 | 1.54 | 20000 | 0.1215 | 0.1101 |
| 0.0681 | 1.58 | 20500 | 0.1118 | 0.1083 |
| 0.0708 | 1.62 | 21000 | 0.1743 | 0.1146 |
| 0.0673 | 1.65 | 21500 | 0.1509 | 0.1109 |
| 0.0667 | 1.69 | 22000 | 0.3411 | 0.1495 |
| 0.065 | 1.73 | 22500 | 0.1045 | 0.1067 |
| 0.0644 | 1.77 | 23000 | 0.0999 | 0.1075 |
| 0.0643 | 1.81 | 23500 | 0.1019 | 0.1073 |
| 0.0675 | 1.85 | 24000 | 0.1196 | 0.1073 |
| 0.0618 | 1.88 | 24500 | 0.1092 | 0.1086 |
| 0.0626 | 1.92 | 25000 | 0.1256 | 0.1070 |
| 0.0635 | 1.96 | 25500 | 0.1183 | 0.1069 |
| 0.0621 | 2.0 | 26000 | 0.1180 | 0.1091 |
| 0.0548 | 2.04 | 26500 | 0.1199 | 0.1048 |
| 0.0548 | 2.08 | 27000 | 0.1215 | 0.1057 |
| 0.0531 | 2.12 | 27500 | 0.1086 | 0.1036 |
| 0.0548 | 2.15 | 28000 | 0.1103 | 0.1043 |
| 0.054 | 2.19 | 28500 | 0.1078 | 0.1048 |
| 0.0521 | 2.23 | 29000 | 0.1094 | 0.1039 |
| 0.0534 | 2.27 | 29500 | 0.1058 | 0.1037 |
| 0.0539 | 2.31 | 30000 | 0.1035 | 0.1026 |
| 0.0516 | 2.35 | 30500 | 0.1009 | 0.1027 |
| 0.0525 | 2.38 | 31000 | 0.1292 | 0.1056 |
| 0.0501 | 2.42 | 31500 | 0.1124 | 0.1033 |
| 0.052 | 2.46 | 32000 | 0.1020 | 0.1028 |
| 0.0519 | 2.5 | 32500 | 0.1131 | 0.1038 |
| 0.0498 | 2.54 | 33000 | 0.1036 | 0.1031 |
| 0.0525 | 2.58 | 33500 | 0.0994 | 0.1005 |
| 0.0506 | 2.61 | 34000 | 0.1093 | 0.1015 |
| 0.0484 | 2.65 | 34500 | 0.1048 | 0.1005 |
| 0.0493 | 2.69 | 35000 | 0.1192 | 0.1028 |
| 0.048 | 2.73 | 35500 | 0.1208 | 0.1020 |
| 0.0473 | 2.77 | 36000 | 0.1410 | 0.1042 |
| 0.0472 | 2.81 | 36500 | 0.1382 | 0.1052 |
| 0.0467 | 2.85 | 37000 | 0.1118 | 0.1012 |
| 0.0473 | 2.88 | 37500 | 0.1032 | 0.1002 |
| 0.0466 | 2.92 | 38000 | 0.1041 | 0.1004 |
| 0.0455 | 2.96 | 38500 | 0.1056 | 0.1004 |
| 0.0483 | 3.0 | 39000 | 0.1091 | 0.0995 |
| 0.0408 | 3.04 | 39500 | 0.1170 | 0.1012 |
| 0.0395 | 3.08 | 40000 | 0.1106 | 0.0995 |
| 0.0407 | 3.11 | 40500 | 0.1075 | 0.0998 |
| 0.0403 | 3.15 | 41000 | 0.1129 | 0.1000 |
| 0.0397 | 3.19 | 41500 | 0.1062 | 0.0993 |
| 0.0389 | 3.23 | 42000 | 0.1072 | 0.0990 |
| 0.0385 | 3.27 | 42500 | 0.1032 | 0.0985 |
| 0.0389 | 3.31 | 43000 | 0.0989 | 0.0973 |
| 0.0404 | 3.35 | 43500 | 0.1031 | 0.0973 |
| 0.0387 | 3.38 | 44000 | 0.0998 | 0.0974 |
| 0.0391 | 3.42 | 44500 | 0.1000 | 0.0969 |
| 0.0387 | 3.46 | 45000 | 0.0982 | 0.0968 |
| 0.0407 | 3.5 | 45500 | 0.1057 | 0.0979 |
| 0.038 | 3.54 | 46000 | 0.1026 | 0.0974 |
| 0.0399 | 3.58 | 46500 | 0.1020 | 0.0970 |
| 0.0387 | 3.61 | 47000 | 0.1022 | 0.0968 |
| 0.0379 | 3.65 | 47500 | 0.1016 | 0.0961 |
| 0.0369 | 3.69 | 48000 | 0.1012 | 0.0957 |
| 0.0372 | 3.73 | 48500 | 0.0993 | 0.0956 |
| 0.0361 | 3.77 | 49000 | 0.1013 | 0.0951 |
| 0.0366 | 3.81 | 49500 | 0.1020 | 0.0956 |
| 0.0377 | 3.85 | 50000 | 0.1014 | 0.0961 |
| 0.0363 | 3.88 | 50500 | 0.1019 | 0.0962 |
| 0.0368 | 3.92 | 51000 | 0.1033 | 0.0963 |
| 0.0381 | 3.96 | 51500 | 0.1026 | 0.0960 |
| 0.0364 | 4.0 | 52000 | 0.1024 | 0.0959 |
e035e4a0ebfc8750ea0c92c1dbfb8cd2
mit
['generated_from_trainer']
false
finetuned_gpt2_sst2_negation0.0005_pretrainedTrue

This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on the sst2 dataset. It achieves the following results on the evaluation set:
- Loss: 3.5276
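Since the reported evaluation loss is a cross-entropy in nats, the corresponding perplexity is simply `exp(loss)` — a quick way to put the number above on a more familiar scale:

```python
import math

# The reported eval loss (cross-entropy, nats per token) maps to perplexity via exp.
eval_loss = 3.5276
perplexity = math.exp(eval_loss)
print(f"perplexity = {perplexity:.2f}")
```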
b64e09981e57a59761bb05dc40fcd204
mit
['generated_from_trainer']
false
Training results

| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 3.1086 | 1.0 | 1059 | 3.5051 |
| 2.9257 | 2.0 | 2118 | 3.5195 |
| 2.833 | 3.0 | 3177 | 3.5276 |
ca063b19b697230962b1374e7f60d822
apache-2.0
['generated_from_trainer']
false
Team-Gryffindor-DistilBERT-finetuned-ner-creditcardcontract

This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset. It achieves the following results on the evaluation set:
- eval_loss: 0.0231
- eval_precision: 0.7448
- eval_recall: 0.75
- eval_f1: 0.7474
- eval_accuracy: 0.9942
- eval_runtime: 61.7618
- eval_samples_per_second: 27.201
- eval_steps_per_second: 3.4
- epoch: 3.0
- step: 5670
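The reported F1 should be the harmonic mean of the reported precision and recall; this quick check confirms the three numbers above are mutually consistent:

```python
# F1 = harmonic mean of precision and recall; values taken from the card above.
precision = 0.7448
recall = 0.75
f1 = 2 * precision * recall / (precision + recall)
print(round(f1, 4))  # matches the reported eval_f1 of 0.7474
```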
611a850bd2214f3a10cab48aefd1c043
openrail
[]
false
<a href="https://www.buymeacoffee.com/s3nh"><img src="https://www.buymeacoffee.com/assets/img/guidelines/download-assets-sm-1.svg" alt=""></a> <img src = 'https://images.unsplash.com/photo-1592564630984-7410f94db184?ixlib=rb-4.0.3&ixid=MnwxMjA3fDB8MHxwaG90by1wYWdlfHx8fGVufDB8fHx8&auto=format&fit=crop&w=1146&q=80'>
a183466c5b4a2dca7a3e639e968f816c
openrail
[]
false
Description

DialoGPT is a variant of the GPT (Generative Pretrained Transformer) language model, developed by Microsoft Research. It is a deep neural network-based language model trained on massive amounts of text data to generate human-like text. DialoGPT uses the transformer architecture, a type of neural network designed for processing sequential data such as language. During training, the model is exposed to a large corpus of text and learns to predict the next word in a sequence given the previous words. In the context of dialog, DialoGPT is trained to predict the response in a conversation, given the context of the conversation. This context can include one or more turns of the conversation, along with additional information such as the topic of the conversation or the speaker's personality. At inference time, the model takes the current context of the conversation as input and generates a response by sampling from the model's predicted distribution over the vocabulary. Overall, DialoGPT provides a flexible and powerful solution for generating human-like text in a conversational context, enabling a wide range of applications such as chatbots, conversational agents, and virtual assistants.
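"Sampling from the predicted distribution" can be illustrated with a toy example (this is not the real model; the vocabulary and probabilities below are made up for illustration): instead of always taking the most likely token, the decoder draws a token at random in proportion to its probability.

```python
import random

# Hypothetical next-token distribution after softmax (toy values, not model output).
vocab = ["hi", "hello", "hey", "goodbye"]
probs = [0.5, 0.3, 0.15, 0.05]

random.seed(0)  # fixed seed for reproducibility
# Draw one token weighted by its probability, as sampling-based decoding does.
token = random.choices(vocab, weights=probs, k=1)[0]
print(token)
```

Temperature, top-k, and top-p decoding are refinements of this same idea: they reshape or truncate the distribution before the draw.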
01c095294331d5fed9fbac5eb9c850a1
openrail
[]
false
Parameters

Model was trained for 40 epochs, using params as follows.

```
per_gpu_train_batch_size: int = 2
per_gpu_eval_batch_size: int = 2
gradient_accumulation_steps: int = 1
learning_rate: float = 5e-5
weight_decay: float = 0.0
adam_epsilon: float = 1e-8
max_grad_norm: int = 1.0
num_train_epochs: int = 40
max_steps: int = -1
warmup_steps: int = 0
logging_steps: int = 1000
save_steps: int = 3500
save_total_limit = None
eval_all_checkpoints: bool = False
no_cuda: bool = False
overwrite_output_dir: bool = True
overwrite_cache: bool = True
should_continue: bool = False
seed: int = 42
local_rank: int = -1
fp16: bool = False
fp16_opt_level: str = 'O1'
```
e932d047b26b27e75260354d67d07e8a
openrail
[]
false
Usage

DialoGPT, fine-tuned on Morty's lines (Rick and Morty cartoon character). A simple snippet showing how to run inference with this model:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained('s3nh/DialoGPT-small-morty')
model = AutoModelForCausalLM.from_pretrained('s3nh/DialoGPT-small-morty')

for step in range(4):
    # Encode the user input, appending the end-of-sequence token
    new_user_input_ids = tokenizer.encode(input(">> User:") + tokenizer.eos_token, return_tensors='pt')
    # Append the new input to the chat history (if any)
    bot_input_ids = torch.cat([chat_history_ids, new_user_input_ids], dim=-1) if step > 0 else new_user_input_ids
    chat_history_ids = model.generate(
        bot_input_ids,
        max_length=200,
        pad_token_id=tokenizer.eos_token_id,
        no_repeat_ngram_size=3,
        do_sample=True,
        top_k=100,
        top_p=0.7,
        temperature=0.8,
    )
    print("MortyBot: {}".format(tokenizer.decode(chat_history_ids[:, bot_input_ids.shape[-1]:][0], skip_special_tokens=True)))
```
8a3939237908285ab8f0671c636cdb5e
apache-2.0
['generated_from_trainer']
false
Tagged_One_500v4_NER_Model_3Epochs_AUGMENTED

This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the tagged_one500v4_wikigold_split dataset. It achieves the following results on the evaluation set:
- Loss: 0.2804
- Precision: 0.6656
- Recall: 0.6225
- F1: 0.6433
- Accuracy: 0.9187
70a95f6ddfb94103baccd50437b0746a
apache-2.0
['generated_from_trainer']
false
Training results

| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 1.0 | 183 | 0.2784 | 0.5897 | 0.5076 | 0.5456 | 0.9064 |
| No log | 2.0 | 366 | 0.2816 | 0.6535 | 0.5787 | 0.6138 | 0.9112 |
| 0.1091 | 3.0 | 549 | 0.2804 | 0.6656 | 0.6225 | 0.6433 | 0.9187 |
339feccdb010786588e958a27171853c
mit
['sentence-transformers', 'feature-extraction', 'sentence-similarity', 'transformers']
false
S-PubMedBert-MedQuAD

This is a [sentence-transformers](https://www.SBERT.net) model: it maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for tasks like clustering or semantic search.
b168d87950d18388ec2e772a1b729a05
mit
['sentence-transformers', 'feature-extraction', 'sentence-similarity', 'transformers']
false
Usage (Sentence-Transformers)

Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:

```
pip install -U sentence-transformers
```

Then you can use the model like this:

```python
from sentence_transformers import SentenceTransformer

sentences = ["This is an example sentence", "Each sentence is converted"]

model = SentenceTransformer('TimKond/S-PubMedBert-MedQuAD')
embeddings = model.encode(sentences)
print(embeddings)
```
72be17771ce86570a90b1d71698a8993
mit
['sentence-transformers', 'feature-extraction', 'sentence-similarity', 'transformers']
false
Evaluation Results

For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name={MODEL_NAME})
637d7556202f5b92aadff3a66039aa0e
mit
['sentence-transformers', 'feature-extraction', 'sentence-similarity', 'transformers']
false
Training

The model was trained with the parameters:

**DataLoader**:

`torch.utils.data.DataLoader` of length 82590 with parameters:
```
{'batch_size': 2, 'shuffle': True}
```

**Loss**:

`sentence_transformers.losses.SoftmaxLoss` with parameters:
```
{'num_labels': 2, 'sentence_embedding_dimension': 768}
```

Parameters of the fit()-method:
```
{
    "callback": null,
    "epochs": 1,
    "evaluation_steps": 0,
    "evaluator": null,
    "max_grad_norm": 1,
    "optimizer_class": "<class 'transformers.optimization.AdamW'>",
    "optimizer_params": {
        "correct_bias": false,
        "eps": 1e-06,
        "lr": 2e-05
    },
    "scheduler": "WarmupLinear",
    "steps_per_epoch": null,
    "warmup_steps": 8259,
    "weight_decay": 0.01
}
```
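The `warmup_steps` value above appears to follow the common "10% warmup" convention: 8259 is exactly 10% of the 82,590 optimization steps in one epoch of the DataLoader (this is an inference from the numbers, not something the card states explicitly):

```python
# warmup_steps = 10% of total training steps (1 epoch x 82,590 DataLoader steps).
steps_per_epoch = 82590
epochs = 1
warmup_steps = int(0.1 * steps_per_epoch * epochs)
print(warmup_steps)  # matches the 8259 in the fit() parameters
```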
53753622763d68b5806b4481ad148621
mit
['sentence-transformers', 'feature-extraction', 'sentence-similarity', 'transformers']
false
Full Model Architecture

```
SentenceTransformer(
  (0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: BertModel
  (1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
)
```
91f00884ecad92ddab12384ce1af6aca
apache-2.0
['generated_from_trainer']
false
sagemaker-distilbert-emotion

This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset. It achieves the following results on the evaluation set:
- Loss: 0.2322
- Accuracy: 0.921
689e9c90497dc06f002e14c7a292b983
apache-2.0
['generated_from_trainer']
false
Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| 0.9306 | 1.0 | 500 | 0.2322 | 0.921 |
8756992d117f298eaf885f5ed7284620
mit
[]
false
bada club on Stable Diffusion

This is the `<bada-club>` concept taught to Stable Diffusion via Textual Inversion. You can load this concept into the [Stable Conceptualizer](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/stable_conceptualizer_inference.ipynb) notebook. You can also train your own concepts and load them into the concept libraries using [this notebook](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_textual_inversion_training.ipynb).

Here is the new concept you will be able to use as a `style`:

![<bada-club> 0](https://huggingface.co/sd-concepts-library/bada-club/resolve/main/concept_images/2.jpeg)
![<bada-club> 1](https://huggingface.co/sd-concepts-library/bada-club/resolve/main/concept_images/3.jpeg)
![<bada-club> 2](https://huggingface.co/sd-concepts-library/bada-club/resolve/main/concept_images/1.jpeg)
![<bada-club> 3](https://huggingface.co/sd-concepts-library/bada-club/resolve/main/concept_images/0.jpeg)
be8526b7e63824d4c2463a1d84b44146
creativeml-openrail-m
['stable-diffusion', 'text-to-image']
false
blue_pencil

<strong>blue_pencil</strong> is a model created by merging various models in suitable ratios. Think of a few well-known models; the ones you thought of are most likely included in this one. This merged model has no particular defining characteristics. Since the goal was simply to try merging many different models, the quality is not especially high either. All component models were converted to `fp16` using [stable-diffusion-webui-model-toolkit](https://github.com/arenatemp/stable-diffusion-webui-model-toolkit).

---
f7836091ae16474027541055cb426d3d
creativeml-openrail-m
['stable-diffusion', 'text-to-image']
false
`blue_pencil-v1b` <small>(`@20230212`)</small>

A model in which [Balor-V2](https://huggingface.co/ploughB660/Balor-V2) was block-merged in place of `blue_pencil-v1`'s [Amalgam_Mix](https://civitai.com/models/4758/amalgammix). Its tendencies differ slightly from v1.
3bc700a9a3c8ce4d6285599c807c45b1
creativeml-openrail-m
['stable-diffusion', 'text-to-image']
false
Example output

```
girl, tokyo, scenery
Negative prompt: EasyNegative
Steps: 30, Sampler: DPM++ SDE Karras, CFG scale: 7.5, Seed: 205537258, Size: 768x768, Clip skip: 2
Denoising strength: 0.65, Hires upscale: 2, Hires upscaler: Latent (nearest-exact)
```

![blue_pencil-v1b_1](../../resolve/main/images/blue_pencil-v1b/1.png)

---
7dddd0c12a377a3c4796a856174be343
creativeml-openrail-m
['stable-diffusion', 'text-to-image']
false
`blue_pencil-v1` <small>(`@20230211`)</small>

The following models are included (in no particular order):

<details>

* [Defmix-v1.1](https://huggingface.co/Defpoint/Defmix-v1.0)
  * Counterfeit v1.0
  * Counterfeit v2.0
  * Basil Mix
  * Anything v4.0
* [PastelRainier](https://huggingface.co/Hemlok/RainierMix)
  * ACertainThing
  * Anything-V4.5
  * Counterfeit-V2.0
  * Evt_V4-preview
  * basil_mix
  * pastel-mix
* [GingerMixR](https://huggingface.co/Hemlok/GingerMix)
  * LimeMixV2
* [Elysium_Anime_V3](https://huggingface.co/hesw23168/SD-Elysium-Model)
* [SukiyakiMix-v1.0](https://huggingface.co/Vsukiyaki/SukiyakiMix-v1.0)
  * pastel-mix
  * AbyssOrangeMix2
* [HD-20](https://www.cognitionai.org/hdhowtogetstarted)
* [7th_anime_v3_testA](https://huggingface.co/syaimu/7th_test)
* [AniReal](https://huggingface.co/Hosioka/AniReal)
* [TriPhaze_B](https://huggingface.co/Lucetepolis/TriPhaze)
  * ultracolor.v4
  * Counterfeit-V2.5
  * Treebark
* [Nabylon-v1.2](https://huggingface.co/NegiInNattoMaki/Nabylon-v1.0)
  * AbyssOrangeMix2
  * LonganMix
  * and more
* [atwcustom_V4](https://huggingface.co/atsuwo/ATW-custom)
* [Amalgam_Mix](https://civitai.com/models/4758/amalgammix)

</details>
3a87c074f84c5fea2e3ad0515167debd
creativeml-openrail-m
['stable-diffusion', 'text-to-image']
false
1

```
girl, tokyo, scenery
Negative prompt: EasyNegative
Steps: 30, Sampler: DPM++ SDE Karras, CFG scale: 7.5, Seed: 2526423076, Size: 768x768, Clip skip: 2
```

![blue_pencil-v1_1-1](../../resolve/main/images/blue_pencil-v1/1-1.png)
101f1ba4e507bc5aba7e91a2f1a2f160
creativeml-openrail-m
['stable-diffusion', 'text-to-image']
false
2

```
girl, early teen, kimono, sakura, particles
Negative prompt: EasyNegative
Steps: 20, Sampler: DPM++ SDE Karras, CFG scale: 7.5, Seed: 4036639388, Size: 512x768, Clip skip: 2
```

![blue_pencil-v1_2-1](../../resolve/main/images/blue_pencil-v1/2-1.png)
f49606432c22b6e69e813229c496acda
creativeml-openrail-m
['stable-diffusion', 'text-to-image']
false
3

```
girl, early teen, t-shirt, pants, from behind, landscape, scenery, apocalyptic
Negative prompt: EasyNegative
Steps: 40, Sampler: DPM++ SDE Karras, CFG scale: 7.5, Seed: 748447692, Size: 768x512, Clip skip: 2
```

![blue_pencil-v1_3](../../resolve/main/images/blue_pencil-v1/3.png)
d9bffd537a5688aadae35d77a7aeb5ee
mit
['generated_from_trainer']
false
xlnet-base-cased_fold_4_binary_v1

This model is a fine-tuned version of [xlnet-base-cased](https://huggingface.co/xlnet-base-cased) on the None dataset. It achieves the following results on the evaluation set:
- Loss: 1.5724
- F1: 0.8315
8f0da8cf4b1b8ca8df70685c99b3c681
mit
['generated_from_trainer']
false
Training results

| Training Loss | Epoch | Step | Validation Loss | F1 |
|:-------------:|:-----:|:----:|:---------------:|:------:|
| No log | 1.0 | 289 | 0.4043 | 0.8009 |
| 0.4373 | 2.0 | 578 | 0.4093 | 0.8260 |
| 0.4373 | 3.0 | 867 | 0.5084 | 0.8206 |
| 0.2707 | 4.0 | 1156 | 0.5945 | 0.8087 |
| 0.2707 | 5.0 | 1445 | 0.6389 | 0.8251 |
| 0.1691 | 6.0 | 1734 | 0.8131 | 0.8156 |
| 0.1012 | 7.0 | 2023 | 0.9865 | 0.8190 |
| 0.1012 | 8.0 | 2312 | 1.1356 | 0.8342 |
| 0.0506 | 9.0 | 2601 | 1.0624 | 0.8369 |
| 0.0506 | 10.0 | 2890 | 1.2604 | 0.8255 |
| 0.0384 | 11.0 | 3179 | 1.2648 | 0.8183 |
| 0.0384 | 12.0 | 3468 | 1.3763 | 0.8158 |
| 0.0318 | 13.0 | 3757 | 1.4966 | 0.8217 |
| 0.0221 | 14.0 | 4046 | 1.3889 | 0.8250 |
| 0.0221 | 15.0 | 4335 | 1.4014 | 0.8284 |
| 0.0145 | 16.0 | 4624 | 1.5321 | 0.8289 |
| 0.0145 | 17.0 | 4913 | 1.4914 | 0.8233 |
| 0.0172 | 18.0 | 5202 | 1.3946 | 0.8314 |
| 0.0172 | 19.0 | 5491 | 1.5032 | 0.8269 |
| 0.0135 | 20.0 | 5780 | 1.5111 | 0.8328 |
| 0.0087 | 21.0 | 6069 | 1.4899 | 0.8318 |
| 0.0087 | 22.0 | 6358 | 1.5562 | 0.8311 |
| 0.0061 | 23.0 | 6647 | 1.5384 | 0.8327 |
| 0.0061 | 24.0 | 6936 | 1.5798 | 0.8304 |
| 0.0052 | 25.0 | 7225 | 1.5724 | 0.8315 |
757cd8170d3b1d9178bcc8f4ab5eb3fd
creativeml-openrail-m
[]
false
`Broken mirror, shattered mirror, brokenM_style`

This style gives a shattered mirror / reflection look to prompts.

License

This embedding is open access and available to all, with a CreativeML OpenRAIL-M license further specifying rights and usage. The CreativeML OpenRAIL License specifies:

1. You can't use the embedding to deliberately produce nor share illegal or harmful outputs or content
2. The authors claim no rights on the outputs you generate; you are free to use them and are accountable for their use, which must not go against the provisions set in the license
3. You may re-distribute the weights and use the embedding commercially and/or as a service. If you do, please be aware you have to include the same use restrictions as the ones in the license and share a copy of the CreativeML OpenRAIL-M with all your users (please read the license entirely and carefully)

Please read the full license here
5b4660b6a9b53c0a0963bbb6641b1cc2
apache-2.0
['generated_from_trainer']
false
distilbert-base-uncased-finetuned-emotion

This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset. It achieves the following results on the evaluation set:
- Loss: 0.2202
- Accuracy: 0.923
- F1: 0.9232
b272cb1bb2105806099470372eb27228
apache-2.0
['generated_from_trainer']
false
Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.8244 | 1.0 | 250 | 0.3104 | 0.9025 | 0.8997 |
| 0.2478 | 2.0 | 500 | 0.2202 | 0.923 | 0.9232 |
0a9652a698b41cf14597129f7d182df9
mit
[]
false
Simple usage sample code

```python
!pip install tokenizers==0.10.3 transformers==4.8.0

from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("Norod78/hebrew-gpt_neo-xl-poetry")
model = AutoModelForCausalLM.from_pretrained("Norod78/hebrew-gpt_neo-xl-poetry", pad_token_id=tokenizer.eos_token_id)

prompt_text = "ืื ื™ ืื•ื”ื‘ ืฉื•ืงื•ืœื“ ื•ืขื•ื’ื•ืช"
max_len = 512
sample_output_num = 3
seed = 1000

import numpy as np
import torch

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
n_gpu = 0 if torch.cuda.is_available() == False else torch.cuda.device_count()
print(f"device: {device}, n_gpu: {n_gpu}")

np.random.seed(seed)
torch.manual_seed(seed)
if n_gpu > 0:
    torch.cuda.manual_seed_all(seed)

model.to(device)

encoded_prompt = tokenizer.encode(prompt_text, add_special_tokens=False, return_tensors="pt")
encoded_prompt = encoded_prompt.to(device)

if encoded_prompt.size()[-1] == 0:
    input_ids = None
else:
    input_ids = encoded_prompt

print("input_ids = " + str(input_ids))

if input_ids != None:
    max_len += len(encoded_prompt[0])
    if max_len > 2048:
        max_len = 2048

print("Updated max_len = " + str(max_len))

stop_token = "<|endoftext|>"
new_lines = "\n\n\n"

sample_outputs = model.generate(
    input_ids,
    do_sample=True,
    max_length=max_len,
    top_k=50,
    top_p=0.95,
    num_return_sequences=sample_output_num
)

print(100 * '-' + "\n\t\tOutput\n" + 100 * '-')
for i, sample_output in enumerate(sample_outputs):
    text = tokenizer.decode(sample_output, skip_special_tokens=True)
    print("{}: {}".format(i, text))
```
996281a6c53217bdc9cd291d370a7b4b
apache-2.0
['generated_from_trainer']
false
finetuned_sentence_itr0_2e-05_editorials_27_02_2022-19_38_42

This model is a fine-tuned version of [distilbert-base-uncased-finetuned-sst-2-english](https://huggingface.co/distilbert-base-uncased-finetuned-sst-2-english) on the None dataset. It achieves the following results on the evaluation set:
- Loss: 0.0914
- Accuracy: 0.9746
- F1: 0.9870
6a22f46cef25fba7c56390f42efaae6e
apache-2.0
['generated_from_trainer']
false
Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| No log | 1.0 | 104 | 0.0501 | 0.9828 | 0.9913 |
| No log | 2.0 | 208 | 0.0435 | 0.9828 | 0.9913 |
| No log | 3.0 | 312 | 0.0414 | 0.9828 | 0.9913 |
| No log | 4.0 | 416 | 0.0424 | 0.9799 | 0.9898 |
| 0.0547 | 5.0 | 520 | 0.0482 | 0.9828 | 0.9913 |
1d8f6767935bac476ff77daa4f0dd5eb
apache-2.0
['generated_from_trainer']
false
wnli

This model is a fine-tuned version of [albert-base-v2](https://huggingface.co/albert-base-v2) on the GLUE WNLI dataset. It achieves the following results on the evaluation set:
- Loss: 0.6898
- Accuracy: 0.5634
d01f269e47b3cf3fe9f54273e96590d4
apache-2.0
['generated_from_trainer']
false
Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 2e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 4.0
57936ab9e82a796ee992714a365befec
apache-2.0
['generated_from_trainer']
false
test-trainer

This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the emotion dataset. It achieves the following results on the evaluation set:
- Loss: 0.2394
- Accuracy: 0.9395
- F1: 0.9396
5bcbc1b6aa7a4d6559f6901ac80d702d
apache-2.0
['generated_from_trainer']
false
Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 |
|:-------------:|:-----:|:----:|:---------------:|:--------:|:------:|
| 0.2518 | 1.0 | 2000 | 0.1971 | 0.931 | 0.9305 |
| 0.1678 | 2.0 | 4000 | 0.1782 | 0.9405 | 0.9406 |
| 0.1048 | 3.0 | 6000 | 0.2394 | 0.9395 | 0.9396 |
4db33e44b4a9fe4e60092bff4c1511e0
apache-2.0
['image-classification', 'timm']
false
Model card for maxvit_xlarge_tf_384.in21k_ft_in1k

An official MaxViT image classification model. Pretrained in TensorFlow on ImageNet-21k (the 21,843-class, Google-specific instance of ImageNet-22k) and fine-tuned on ImageNet-1k by the paper authors. Ported from the official TensorFlow implementation (https://github.com/google-research/maxvit) to PyTorch by Ross Wightman.
7a1abfc9cf7f81c2519e40d65b93ac37
apache-2.0
['image-classification', 'timm']
false
Model Details
- **Model Type:** Image classification / feature backbone
- **Model Stats:**
  - Params (M): 475.3
  - GMACs: 292.8
  - Activations (M): 668.8
  - Image size: 384 x 384
- **Papers:**
  - MaxViT: Multi-Axis Vision Transformer: https://arxiv.org/abs/2204.01697
- **Dataset:** ImageNet-1k
- **Pretrain Dataset:** ImageNet-21k
b55bffe3b5487ae6a8a42e086add3a11
apache-2.0
['image-classification', 'timm']
false
Image Classification

```python
from urllib.request import urlopen
from PIL import Image
import timm

img = Image.open(
    urlopen('https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'))

model = timm.create_model('maxvit_xlarge_tf_384.in21k_ft_in1k', pretrained=True)
model = model.eval()
```
1484ffa63254913172585d26ee38e6d7
apache-2.0
['image-classification', 'timm']
false
Feature Map Extraction

```python
from urllib.request import urlopen
from PIL import Image
import timm

img = Image.open(
    urlopen('https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'))

model = timm.create_model(
    'maxvit_xlarge_tf_384.in21k_ft_in1k',
    pretrained=True,
    features_only=True,
)
model = model.eval()
```
7ae7749fe8fb6415065d63ff0a55dc55
apache-2.0
['image-classification', 'timm']
false
Image Embeddings

```python
from urllib.request import urlopen
from PIL import Image
import timm

img = Image.open(
    urlopen('https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/beignets-task-guide.png'))

model = timm.create_model(
    'maxvit_xlarge_tf_384.in21k_ft_in1k',
    pretrained=True,
    num_classes=0,  # remove the classifier head so the model returns pooled embeddings
)
model = model.eval()
```
3be0af3f6bc732157c9e566cd5126245
apache-2.0
['audio-classification', 'generated_from_trainer']
false
hubert-base-superb-ks

This model is a fine-tuned version of [facebook/hubert-base-ls960](https://huggingface.co/facebook/hubert-base-ls960) on the superb dataset. It achieves the following results on the evaluation set:
- Loss: 0.0848
- Accuracy: 0.9822
d38ccb8ca30c71780b944708df8daf6a
apache-2.0
['audio-classification', 'generated_from_trainer']
false
Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 3e-05
- train_batch_size: 2
- eval_batch_size: 8
- seed: 0
- distributed_type: IPU
- gradient_accumulation_steps: 16
- total_train_batch_size: 128
- total_eval_batch_size: 128
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 5.0
- training precision: Mixed Precision
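The totals above follow from the per-device batch size and gradient accumulation; the remaining factor suggests data-parallel replication across IPUs (the replica count itself is an inference from the numbers, not stated in the card):

```python
# total_train_batch_size = per-device batch x gradient accumulation x replicas
per_device = 2
grad_accum = 16
total = 128
replicas = total // (per_device * grad_accum)
print(replicas)  # implies 4 data-parallel replicas
```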
bd5ba1083106ba0da081a09f2ce5aeb2
apache-2.0
['translation']
false
opus-mt-fr-ht

* source languages: fr
* target languages: ht
* OPUS readme: [fr-ht](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/fr-ht/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-09.zip](https://object.pouta.csc.fi/OPUS-MT-models/fr-ht/opus-2020-01-09.zip)
* test set translations: [opus-2020-01-09.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/fr-ht/opus-2020-01-09.test.txt)
* test set scores: [opus-2020-01-09.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/fr-ht/opus-2020-01-09.eval.txt)
cd8ebf1cfa22c4be49c8a22e0bd94cd1
apache-2.0
['translation']
false
opus-mt-fi-fj

* source languages: fi
* target languages: fj
* OPUS readme: [fi-fj](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/fi-fj/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-20.zip](https://object.pouta.csc.fi/OPUS-MT-models/fi-fj/opus-2020-01-20.zip)
* test set translations: [opus-2020-01-20.test.txt](https://object.pouta.csc.fi/OPUS-MT-models/fi-fj/opus-2020-01-20.test.txt)
* test set scores: [opus-2020-01-20.eval.txt](https://object.pouta.csc.fi/OPUS-MT-models/fi-fj/opus-2020-01-20.eval.txt)
5bfb28abd94813ea6196449bd30f1a41
apache-2.0
['generated_from_trainer']
false
Article_50v3_NER_Model_3Epochs_UNAUGMENTED

This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the article50v3_wikigold_split dataset. It achieves the following results on the evaluation set:
- Loss: 0.7382
- Precision: 0.0
- Recall: 0.0
- F1: 0.0
- Accuracy: 0.7789
6e9a23c4821e1e2a1db0de4354e3e1e3
apache-2.0
['generated_from_trainer']
false
Training results

| Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:|
| No log | 1.0 | 6 | 0.9648 | 0.1172 | 0.0042 | 0.0081 | 0.7782 |
| No log | 2.0 | 12 | 0.7740 | 0.0 | 0.0 | 0.0 | 0.7789 |
| No log | 3.0 | 18 | 0.7382 | 0.0 | 0.0 | 0.0 | 0.7789 |
7131a703d739427d93a30fe5d25d64f1
mit
['generated_from_trainer']
false
debert_base_fine_tuned_sent140

This model is a fine-tuned version of [microsoft/deberta-base](https://huggingface.co/microsoft/deberta-base) on the None dataset. It achieves the following results on the evaluation set:
- Loss: 0.9678
- Accuracy: 0.7647
14640351cc2ae502db0cafa20ef8632d
mit
['generated_from_trainer']
false
Training results

| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|:-------------:|:-----:|:----:|:---------------:|:--------:|
| No log | 1.0 | 408 | 0.8139 | 0.7219 |
| 0.8198 | 2.0 | 816 | 0.7742 | 0.7460 |
| 0.4479 | 3.0 | 1224 | 0.9678 | 0.7647 |
b870ef2e2ba3eb4d6375f8d178f546d1
creativeml-openrail-m
['text-to-image', 'midjourney', 'stable-diffusion', 'disco-diffusion', 'art', 'arxiv:2208.12242']
false
Paint Journey V2 is [V1](https://huggingface.co/FredZhang7/paint-journey-v1) fine-tuned on 768x768 oil paintings by Midjourney V4, Open Journey V2, Disco Diffusion, and artists who granted permission. Begin the prompt with **((oil painting))** to add the oil-paint effect. For digital and other painting styles, use prompts similar to those you would use for Midjourney V4 (with some tweaks), Stable Diffusion v1.5 (add more styles), Open Journey V2, or Disco Diffusion. [![Open with Camenduru's WebUI in Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/AMLA-UBC/100-Exploring-the-World-of-Modern-Machine-Learning/blob/main/assets/PaintJourneyV2.ipynb)
13f8865dc8396ca9b00c4032f769b776
creativeml-openrail-m
['text-to-image', 'midjourney', 'stable-diffusion', 'disco-diffusion', 'art', 'arxiv:2208.12242']
false
Examples *All examples were generated using Camenduru's WebUI (see the Colab file)* ![](./assets/characters.png) *⬆️ 768x1136 portraits, generated using descriptive prompts and without face restoration, [generation parameters](https://huggingface.co/FredZhang7/paint-journey-v2/raw/main/assets/character_settings.txt)* ![](./assets/nature.png) *⬆️ 1280x768 (mostly) natural landscapes, used shorter prompts, [generation parameters](https://huggingface.co/FredZhang7/paint-journey-v2/raw/main/assets/nature_settings.txt)* ![](./assets/outerspace.png) *⬆️ 1152x768 outerspace landscapes, used descriptive prompts, [generation parameters](https://huggingface.co/FredZhang7/paint-journey-v2/raw/main/assets/outerspace_settings.txt)* ![](./assets/lamborghini.png) *⬆️ 1280x768 lamborghini, [generation parameters](https://huggingface.co/FredZhang7/paint-journey-v2/raw/main/assets/lamborghini_settings.txt)* ![](./assets/eevee.png) *⬆️ 960x768 Eevee, [generation parameters](https://huggingface.co/FredZhang7/paint-journey-v2/raw/main/assets/eevee_settings.txt)*
dee03dc30b6c954e98b7024492568cb7
creativeml-openrail-m
['text-to-image', 'midjourney', 'stable-diffusion', 'disco-diffusion', 'art', 'arxiv:2208.12242']
false
Comparisons Paint Journey V2's paintings are closer to human-drawn art than Open Journey V2's. Compared to models like Dreamlike Diffusion 1.0, PJ V2 tends to generate 768x768 or higher-resolution images with reduced noise levels. This model is also capable of generating stunning portraits at 768x1136 resolution without duplicated faces (with [Camenduru's WebUI](https://github.com/camenduru/stable-diffusion-webui)), a difficult task for models like DreamShaper 3.3. At lower resolutions, DreamShaper 3.3 tends to generate higher-quality portraits than PJ V2 in terms of noise levels, given the same (short) positive and negative prompts. However, PJ V2 can craft more stunning masterpieces with more descriptive positive and negative prompts, and can still generate beautiful landscapes with shorter prompts.
55ef349034969d47bbf6c8355df5c0c5
creativeml-openrail-m
['text-to-image', 'midjourney', 'stable-diffusion', 'disco-diffusion', 'art', 'arxiv:2208.12242']
false
Training Instead of solely fine-tuning its Unet, Paint Journey V2 focuses on fine-tuning its text encoder with a diverse range of prompts. This allows for a seamless blend of the digital and oil painting styles into various other types of prompts, resulting in a more natural and dynamic output. This model was trained on a curated dataset of roughly 300 images hand-picked from Midjourney, [Prompt Hero](https://prompthero.com/), [PixaBay](https://pixabay.com/images/search/paintings/), Open Journey V2, and Reddit. Before training, I used R-ESRGAN 4x on many images to increase their resolution and reduce noise.
a6725a1781892429f17a13d7e11c4684
creativeml-openrail-m
['text-to-image', 'midjourney', 'stable-diffusion', 'disco-diffusion', 'art', 'arxiv:2208.12242']
false
Running out of prompts? Useful resources: [Lexica.art](https://lexica.art/), [Fast GPT PromptGen](https://huggingface.co/FredZhang7/distilgpt2-stable-diffusion-v2), [Prompt Hero](https://prompthero.com/)
653b95d062c87a56b497769b00595c3c
creativeml-openrail-m
['text-to-image', 'midjourney', 'stable-diffusion', 'disco-diffusion', 'art', 'arxiv:2208.12242']
false
Output Dimensions Portrait sizes include, but are not limited to, `512x768`, `768x768`, and `768x1136`. Landscape sizes include, but are not limited to, `768x512`, `768x768`, `1152x768`, and `1280x768`.
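All of the sizes above share one property: Stable Diffusion's latent space downsamples images by a factor of 8, so both dimensions should be multiples of 8. A quick sanity check for custom sizes (an illustrative helper, not part of any official tooling):

```python
def valid_sd_size(width: int, height: int, multiple: int = 8) -> bool:
    """Both dimensions must divide evenly by the latent downsampling factor."""
    return width % multiple == 0 and height % multiple == 0

# Every size listed above passes:
for w, h in [(512, 768), (768, 768), (768, 1136), (1152, 768), (1280, 768)]:
    assert valid_sd_size(w, h)
```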
c6a2f9c79ae48e5ba8fdc24e91a5d7bf
creativeml-openrail-m
['text-to-image', 'midjourney', 'stable-diffusion', 'disco-diffusion', 'art', 'arxiv:2208.12242']
false
Camenduru's WebUI ``` git clone -b v1.6 https://github.com/camenduru/stable-diffusion-webui ``` <details> <summary> Click to use Automatic1111's WebUI instead (it may not output images as artistic) </summary> ``` git clone https://github.com/AUTOMATIC1111/stable-diffusion-webui.git ``` </details> Download the [checkpoint](./paint_journey_v2.ckpt) and [vae](./paint_journey_v2.vae.pt) to the `./stable-diffusion-webui/models/Stable-diffusion` folder. Run `webui-user.bat`.
6c1079bbe483ce5a5caff1460d33be39
creativeml-openrail-m
['text-to-image', 'midjourney', 'stable-diffusion', 'disco-diffusion', 'art', 'arxiv:2208.12242']
false
🧨 Diffusers *Tip: using double, triple, or quadruple brackets around a word (e.g. "((WORD))") puts an 'emphasis' on it* ```bash pip install --upgrade diffusers transformers ``` ```python
dcd8656532317f02d25cd00fae5e4316
creativeml-openrail-m
['text-to-image', 'midjourney', 'stable-diffusion', 'disco-diffusion', 'art', 'arxiv:2208.12242']
false
changing-the-scheduler
from diffusers import StableDiffusionPipeline, EulerAncestralDiscreteScheduler
import torch, random, datetime

pipe = StableDiffusionPipeline.from_pretrained("FredZhang7/paint-journey-v2")
pipe.scheduler = EulerAncestralDiscreteScheduler.from_config(pipe.scheduler.config)
pipe = pipe.to("cuda")

def random_seed():
    return random.randint(0, 2**32 - 1)

prompt = "((oil painting)), gentle waves, bright blue sky, white sails billowing, sun glistening on the surface, salty sea air, distant horizon, calm breeze, birds soaring overhead, vibrant colors, artstation digital painting, high resolution, uhd, 4 k, 8k wallpaper"
617a9fb77f41a05396bc6f446302d7dc
creativeml-openrail-m
['text-to-image', 'midjourney', 'stable-diffusion', 'disco-diffusion', 'art', 'arxiv:2208.12242']
false
num_inference_steps = 40  # sampling steps, 30 to 40 is usually good for Euler Ancestral
width, height = 1280, 768  # see the Output Dimensions section for supported sizes
cfg_scale = 7.5            # assumed guidance scale
seed = random_seed()
generator = torch.Generator("cuda").manual_seed(seed)

with torch.autocast("cuda"):
    image = pipe(prompt=prompt, num_inference_steps=num_inference_steps, width=width, height=height, generator=generator, guidance_scale=cfg_scale).images[0]

def generate_filename(string, seed):
    # strip characters that are invalid in filenames
    invalid_chars = ["<", ">", ":", '"', "/", "\\", "|", "?", "*"]
    for char in invalid_chars:
        string = string.replace(char, "")
    return f"{datetime.datetime.now().strftime('%Y-%m-%d_%H-%M-%S')}_{seed}_{string}"

image.save(f"./{generate_filename(prompt, seed)}.png")
```
29bf04f05f42dbe0ef8b006d5539dd70
creativeml-openrail-m
['text-to-image', 'midjourney', 'stable-diffusion', 'disco-diffusion', 'art', 'arxiv:2208.12242']
false
Safety Checker V2 The official [stable diffusion safety checker](https://huggingface.co/CompVis/stable-diffusion-safety-checker) uses up 1.22GB VRAM. I recommend using [Google Safesearch Mini V2](https://huggingface.co/FredZhang7/google-safesearch-mini-v2) (220MB) to save 1.0GB VRAM.
f4fc71b9db5790958ce467237879fcb5
apache-2.0
['generated_from_trainer']
false
This time I used a self-made dataset: the audio of "https://www.youtube.com/watch?v=a2ZOTD3R7JI" was cut into slices and the corresponding transcripts were written by hand (about 4 minutes in total). I don't know why the word error rate stays at 1.0, but the problem is most likely the dataset, because a previous run with the same pre-trained model on a standard Singlish corpus gave a good result (see: RuiqianLi/wav2vec2-large-xls-r-300m-singlish-colab). It achieves the following results on the evaluation set: - Loss: 3.0927 - Wer: 1.0
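For context, WER is the word-level edit distance divided by the number of reference words, so a model whose output shares no words with the reference scores exactly 1.0. A minimal sketch (assumes a non-empty reference):

```python
def wer(reference: str, hypothesis: str) -> float:
    """Word error rate: word-level Levenshtein distance / reference length."""
    r, h = reference.split(), hypothesis.split()
    # dynamic-programming edit-distance table
    d = [[0] * (len(h) + 1) for _ in range(len(r) + 1)]
    for i in range(len(r) + 1):
        d[i][0] = i
    for j in range(len(h) + 1):
        d[0][j] = j
    for i in range(1, len(r) + 1):
        for j in range(1, len(h) + 1):
            cost = 0 if r[i - 1] == h[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,         # deletion
                          d[i][j - 1] + 1,         # insertion
                          d[i - 1][j - 1] + cost)  # substitution
    return d[len(r)][len(h)] / len(r)
```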
e0a5c167b19919006dd01398734c404d
apache-2.0
['generated_from_trainer']
false
Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.01 - train_batch_size: 2 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 4 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - num_epochs: 100 - mixed_precision_training: Native AMP
e5ca6b0a7f47a40f94b5f11782696e66
apache-2.0
['generated_from_trainer']
false
Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:----:|:---------------:|:---:| | 3.7943 | 20.0 | 200 | 3.0597 | 1.0 | | 2.9902 | 40.0 | 400 | 3.1604 | 1.0 | | 2.9696 | 60.0 | 600 | 3.1112 | 1.0 | | 2.8885 | 80.0 | 800 | 3.0234 | 1.0 | | 2.8154 | 100.0 | 1000 | 3.0927 | 1.0 |
3d6290353f88e16f658e0b3c10699e05
apache-2.0
['automatic-speech-recognition', 'en']
false
exp_w2v2r_en_vp-100k_accent_us-5_england-5_s203 Fine-tuned [facebook/wav2vec2-large-100k-voxpopuli](https://huggingface.co/facebook/wav2vec2-large-100k-voxpopuli) for speech recognition using the train split of [Common Voice 7.0 (en)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0). When using this model, make sure that your speech input is sampled at 16kHz. This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
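Audio recorded at other rates must be resampled to 16 kHz before inference; in practice you would use torchaudio or librosa, but the core idea reduces to interpolation. A dependency-free sketch (illustrative only, not production resampling):

```python
def resample_linear(samples, src_rate, dst_rate=16000):
    """Naive linear-interpolation resampler; real code should use torchaudio/librosa."""
    if src_rate == dst_rate:
        return list(samples)
    n_out = int(len(samples) * dst_rate / src_rate)
    out = []
    for i in range(n_out):
        # position of this output sample in the input signal
        pos = i * (len(samples) - 1) / (n_out - 1) if n_out > 1 else 0.0
        lo = int(pos)
        hi = min(lo + 1, len(samples) - 1)
        frac = pos - lo
        out.append(samples[lo] * (1 - frac) + samples[hi] * frac)
    return out
```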
8e12c627b77a32ade7b11f8d3fd903bd
apache-2.0
['generated_from_trainer']
false
mrpc This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the GLUE MRPC dataset. It achieves the following results on the evaluation set: - Loss: 0.5611 - Accuracy: 0.6912 - F1: 0.8158 - Combined Score: 0.7535
525d76f253e0b88f9c1d681d563a4cf6
apache-2.0
['generated_from_trainer']
false
Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 256 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3.0
0229b13918e1522ed934374b55d28c96
mit
['generated_from_trainer']
false
roberta_reman This model is a fine-tuned version of [ibm/ColD-Fusion](https://huggingface.co/ibm/ColD-Fusion) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.4272 - F1: 0.7004 - Roc Auc: 0.7862 - Accuracy: 0.4330 - Recall: 0.6831 - Precision: 0.7185
39de9edd4cbabedbcde0945706e1a365
mit
['generated_from_trainer']
false
Training results | Training Loss | Epoch | Step | Validation Loss | F1 | Roc Auc | Accuracy | Recall | Precision | |:-------------:|:-----:|:----:|:---------------:|:------:|:-------:|:--------:|:------:|:---------:| | No log | 1.0 | 113 | 0.4673 | 0.5668 | 0.6955 | 0.2990 | 0.4930 | 0.6667 | | No log | 2.0 | 226 | 0.4187 | 0.6397 | 0.7403 | 0.3918 | 0.5563 | 0.7524 | | No log | 3.0 | 339 | 0.4272 | 0.7004 | 0.7862 | 0.4330 | 0.6831 | 0.7185 | | No log | 4.0 | 452 | 0.4191 | 0.6566 | 0.7539 | 0.3918 | 0.6127 | 0.7073 | | 0.3529 | 5.0 | 565 | 0.4246 | 0.6788 | 0.7706 | 0.4124 | 0.6549 | 0.7045 |
8b6cb1bc294d3e36603cffab6bd1daad
apache-2.0
['exbert', 'multiberts', 'multiberts-seed-2']
false
MultiBERTs Seed 2 Checkpoint 1700k (uncased) Seed 2 intermediate checkpoint 1700k of the MultiBERTs (pretrained BERT) model, trained on English using a masked language modeling (MLM) objective. It was introduced in [this paper](https://arxiv.org/pdf/2106.16163.pdf) and first released in [this repository](https://github.com/google-research/language/tree/master/language/multiberts). This is an intermediate checkpoint. The final checkpoint can be found at [multiberts-seed-2](https://hf.co/multiberts-seed-2). This model is uncased: it does not make a difference between english and English. Disclaimer: The team releasing MultiBERTs did not write a model card for this model so this model card has been written by [gchhablani](https://hf.co/gchhablani).
6de0da4078532a36eddfed8c42d7abf7
apache-2.0
['exbert', 'multiberts', 'multiberts-seed-2']
false
How to use Here is how to use this model to get the features of a given text in PyTorch: ```python from transformers import BertTokenizer, BertModel tokenizer = BertTokenizer.from_pretrained('multiberts-seed-2-1700k') model = BertModel.from_pretrained("multiberts-seed-2-1700k") text = "Replace me by any text you'd like." encoded_input = tokenizer(text, return_tensors='pt') output = model(**encoded_input) ```
541a892fa0777518f673fe5901ed4205
apache-2.0
['zero-shot', 'text-classification', 'science', 'mag']
false
Overview <details> <summary>Click to expand</summary> - **Model type:** Language Model - **Architecture:** RoBERTa-large - **Language:** English - **License:** Apache 2.0 - **Task:** Zero-Shot Text Classification - **Data:** Microsoft Academic Graph - **Additional Resources:** - [Paper]() <-- WiP (soon to be published in EACL 2023) - [GitHub](https://github.com/TeMU-BSC/sciroshot) </details>
134a368decc494202e6b27a158ef93c9
apache-2.0
['zero-shot', 'text-classification', 'science', 'mag']
false
Model description SCIroShot is an entailment-based Zero-Shot Text Classification model that has been fine-tuned using a self-made dataset composed of scientific articles from [Microsoft Academic Graph](https://www.microsoft.com/en-us/research/project/microsoft-academic-graph/) (MAG). The resulting model achieves SOTA performance in the scientific domain and very competitive results in other areas.
3d070227fa125a88a76e3f5309f948c4
apache-2.0
['zero-shot', 'text-classification', 'science', 'mag']
false
How to use ```python from transformers import pipeline zstc = pipeline("zero-shot-classification", model="BSC-LT/sciroshot") sentence = "Leo Messi is the best player ever." candidate_labels = ["politics", "science", "sports", "environment"] template = "This example is {}" output = zstc(sentence, candidate_labels, hypothesis_template=template, multi_label=False) print(output) print(f'Predicted class: {output["labels"][0]}') ```
48ca5f282e7030b5a7912798e1b56a1e
apache-2.0
['zero-shot', 'text-classification', 'science', 'mag']
false
Limitations and bias No measures have been taken to estimate the bias and toxicity embedded in the model. Even though the fine-tuning data (which is of a scientific nature) may seem harmless, it is important to note that the corpus used to pre-train the vanilla model is very likely to contain a lot of unfiltered content from the internet, as stated in the [RoBERTa-large model card](https://huggingface.co/roberta-large).
ec93fc905f0a45a6e953c3cd679ce4a3
apache-2.0
['zero-shot', 'text-classification', 'science', 'mag']
false
Training data Our data builds on top of scientific-domain annotated data from Microsoft Academic Graph (MAG). This database consists of a heterogeneous graph with billions of records from both scientific publications and patents, in addition to metadata information such as the authors, institutions, journals, conferences and their citation relationships. The documents are organized in a six-level hierarchical structure of scientific concepts, where the two top-most levels are manually curated in order to guarantee a high level of accuracy. To create the training corpus, a random sample of scientific articles with a publication year between 2000 and 2021 were retrieved from MAG with their respective titles and abstracts in English. This results in over 2M documents with their corresponding Field Of Study, which was obtained from the 1-level MAG taxonomy (292 possible classes, such as "Computational biology" or "Transport Engineering"). The fine-tuning dataset was constructed in a weakly supervised manner by converting text classification data to the entailment format. Using the relationship between scientific texts and their matching concepts in the 1-level MAG taxonomy we are able to generate the premise-hypothesis pairs corresponding to the entailment label. Conversely, we generate the pairs for the neutral label by removing the actual relationship between the texts and their scientific concepts and creating a virtual relationship with those to which they are not matched.
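The conversion to entailment format described above can be sketched as follows; the hypothesis template and the one-negative sampling scheme are illustrative assumptions, not the exact recipe used to build the dataset:

```python
import random

TEMPLATE = "This example is about {}."

def to_entailment_pairs(text, true_label, all_labels, n_neutral=1, rng=None):
    """Turn one labeled document into (premise, hypothesis, label) pairs."""
    rng = rng or random.Random(0)
    # entailment pair: the document with its real Field of Study
    pairs = [(text, TEMPLATE.format(true_label), "entailment")]
    # neutral pairs: the same document with concepts it is NOT matched to
    negatives = [lab for lab in all_labels if lab != true_label]
    for lab in rng.sample(negatives, n_neutral):
        pairs.append((text, TEMPLATE.format(lab), "neutral"))
    return pairs
```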
6a316f5eac66c91a312999b98218f50e
apache-2.0
['zero-shot', 'text-classification', 'science', 'mag']
false
Training procedure The newly-created scientific dataset described in the previous section was used to fine-tune a 355M-parameter RoBERTa model on the entailment task. To do so, the model has to compute the entailment score between every text that is fed to it and all candidate labels. The final prediction is the highest-scoring class in a single-label classification setup, or the N classes above a certain threshold in a multi-label scenario. A subset of 52 labels from the training data was kept apart to serve as a development set of fully-unseen classes. As a novelty, validation was not performed on the entailment task (which is used as a proxy) but directly on the target text classification task. This allows us to stop training at the right time via early stopping, which prevents the model from "overfitting" to the training task. This method counteracts an effect discovered empirically during experimentation: after a certain point, the model can start to worsen on the target task (ZSTC) despite still improving on the training task (RTE). The simple act of shortening the training time led to a boost in performance. Read the paper for more details on the methodology and the analysis of RTE/ZSTC correlation.
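The decision rule described above boils down to a few lines (a sketch; the 0.5 threshold is an assumed default, not a value from the paper):

```python
def zstc_predict(entailment_scores, multi_label=False, threshold=0.5):
    """entailment_scores: dict of candidate label -> entailment probability."""
    if not multi_label:
        # single-label setup: the highest-scoring class wins
        return max(entailment_scores, key=entailment_scores.get)
    # multi-label setup: every class above the threshold, best first
    ranked = sorted(entailment_scores.items(), key=lambda kv: -kv[1])
    return [label for label, score in ranked if score >= threshold]
```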
b95ad17b52b80d6a3d411b1a39e9a408
apache-2.0
['zero-shot', 'text-classification', 'science', 'mag']
false
Evaluation data The model's performance was evaluated on a collection of disciplinary-labeled textual datasets, both from the scientific domain (closer to training data) and the general domain (to assess generalizability). The following table provides an overview of the number of examples and labels for each dataset: | Dataset | Labels | Size | |------------------|--------|--------| | arXiv | 11 | 3,838 | | SciDocs-MeSH | 11 | 16,433 | | SciDocs-MAG | 19 | 17,501 | | Konstanz | 24 | 10,000 | | Elsevier | 26 | 14,738 | | PubMed | 109 | 5,000 | | Topic Categorization (Yahoo! Answers) | 10 | 60,000 | | Emotion Detection (UnifyEmotion) | 10 | 15,689 | | Situation Frame Detection (Situation Typing) | 12 | 3,311 | Please refer to the paper for further details on each particular dataset.
4b7a164db819e2276b76698497e0966c
apache-2.0
['zero-shot', 'text-classification', 'science', 'mag']
false
Scientific domain benchmark | Model | arXiv | SciDocs-MeSH | SciDocs-MAG | Konstanz | Elsevier | PubMed | |-------|-------|--------------|-------------|----------|----------|--------| | [fb/bart-large-mnli](https://huggingface.co/facebook/bart-large-mnli) | 33.28 | **66.18**🔥 | 51.77 | 54.62 | 28.41 | **31.59**🔥 | | SCIroShot | **42.22**🔥 | 59.34 | **69.86**🔥 | **66.07**🔥 | **54.42**🔥 | 27.93 |
c55bf777c71e25e9c9dcfb4d78ca2a03
apache-2.0
['zero-shot', 'text-classification', 'science', 'mag']
false
General domain benchmark | Model | Topic | Emotion | Situation | |-------|-------|---------|-----------| | RTE [(Yin et al., 2019)](https://arxiv.org/pdf/1909.00161.pdf) | 43.8 | 12.6 | **37.2**🔥 | | FEVER [(Yin et al., 2019)](https://arxiv.org/pdf/1909.00161.pdf) | 40.1 | 24.7 | 21.0 | | MNLI [(Yin et al., 2019)](https://arxiv.org/pdf/1909.00161.pdf) | 37.9 | 22.3 | 15.4 | | NSP [(Ma et al., 2021)](https://aclanthology.org/2021.acl-short.99.pdf) | 50.6 | 16.5 | 25.8 | | NSP-Reverse [(Ma et al., 2021)](https://aclanthology.org/2021.acl-short.99.pdf) | 53.1 | 16.1 | 19.9 | | SCIroShot | **59.08**🔥 | **24.94**🔥 | 27.42 | All the numbers reported above represent **label-wise weighted F1** except for the Topic classification dataset, which is evaluated in terms of **accuracy** following the notation from [(Yin et al., 2019)](https://arxiv.org/pdf/1909.00161.pdf).
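Label-wise weighted F1 averages the per-label F1 scores weighted by each label's support, equivalent to scikit-learn's `f1_score(average="weighted")`. A dependency-free sketch:

```python
def label_wise_weighted_f1(y_true, y_pred):
    """Per-label F1, weighted by each label's share of the true labels."""
    total = len(y_true)
    f1_sum = 0.0
    for lab in set(y_true):
        tp = sum(1 for t, p in zip(y_true, y_pred) if t == lab and p == lab)
        fp = sum(1 for t, p in zip(y_true, y_pred) if t != lab and p == lab)
        fn = sum(1 for t, p in zip(y_true, y_pred) if t == lab and p != lab)
        prec = tp / (tp + fp) if tp + fp else 0.0
        rec = tp / (tp + fn) if tp + fn else 0.0
        f1 = 2 * prec * rec / (prec + rec) if prec + rec else 0.0
        f1_sum += f1 * (y_true.count(lab) / total)  # weight by support
    return f1_sum
```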
adfaf46e4879c97904e2b318fcf8e3cf
apache-2.0
['zero-shot', 'text-classification', 'science', 'mag']
false
Disclaimer <details> <summary>Click to expand</summary> The model published in this repository is intended for a generalist purpose and is made available to third parties under a Apache v2.0 License. Please keep in mind that the model may have bias and/or any other undesirable distortions. When third parties deploy or provide systems and/or services to other parties using this model (or a system based on it) or become users of the model itself, they should note that it is under their responsibility to mitigate the risks arising from its use and, in any event, to comply with applicable regulations, including regulations regarding the use of Artificial Intelligence. In no event shall the owners and creators of the model be liable for any results arising from the use made by third parties. </details>
1cf9101de3a62ca8b7b188ef89f1cdda
apache-2.0
['speech', 'audio', 'automatic-speech-recognition']
false
Info This Wav2Vec2 model was first pretrained on 500 hours of Kalmyk TV recordings and a 1000-hour Mongolian speech recognition dataset. After that, the model was fine-tuned on a 300-hour [Kalmyk synthetic STT dataset](https://github.com/tugstugi/mongolian-nlp
9966ad1002f2dd027f02ec44b3a184b7
apache-2.0
['speech', 'audio', 'automatic-speech-recognition']
false
datasets) created by a voice conversion model. * 50% WER on a private test set created from Kalmyk TV recordings * on clean voice recordings, the model should have much lower WER * voice conversion info * 300 hours [Kalmyk synthetic STT dataset](https://github.com/tugstugi/mongolian-nlp
9a82b63c4a5af1e1ddcb3b2245a3890a
apache-2.0
['speech', 'audio', 'automatic-speech-recognition']
false
datasets) * The source voice is a Kalmyk female voice TTS * Target voices are from the VCTK dataset * example data: https://twitter.com/tugstugi/status/1409111296897912835 * each WAV has a different text created from Kalmyk books
ca5d9770e8393a0c9324f87ff88e59d2
apache-2.0
['generated_from_trainer']
false
english-filipino-wav2vec2-l-xls-r-test-07 This model is a fine-tuned version of [jonatasgrosman/wav2vec2-large-xlsr-53-english](https://huggingface.co/jonatasgrosman/wav2vec2-large-xlsr-53-english) on the filipino_voice dataset. It achieves the following results on the evaluation set: - Loss: 0.6768 - Wer: 0.3755
f729d84a7cf370972dc9d93fbc143bfb
apache-2.0
['generated_from_trainer']
false
Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.001 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 16 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - num_epochs: 30 - mixed_precision_training: Native AMP
494f78b2cdbc3e96df26ce17ac130563
apache-2.0
['generated_from_trainer']
false
Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:----:|:---------------:|:------:| | 2.9255 | 2.09 | 400 | 0.7742 | 0.7694 | | 0.5792 | 4.19 | 800 | 0.5368 | 0.5250 | | 0.3611 | 6.28 | 1200 | 0.4796 | 0.4718 | | 0.2742 | 8.38 | 1600 | 0.5308 | 0.4764 | | 0.201 | 10.47 | 2000 | 0.5885 | 0.4723 | | 0.164 | 12.57 | 2400 | 0.5595 | 0.4750 | | 0.1374 | 14.66 | 2800 | 0.5836 | 0.4366 | | 0.1138 | 16.75 | 3200 | 0.6110 | 0.4628 | | 0.0991 | 18.85 | 3600 | 0.6179 | 0.4174 | | 0.0837 | 20.94 | 4000 | 0.6681 | 0.4170 | | 0.0722 | 23.04 | 4400 | 0.6665 | 0.4103 | | 0.0576 | 25.13 | 4800 | 0.7538 | 0.4068 | | 0.052 | 27.23 | 5200 | 0.6808 | 0.3844 | | 0.0449 | 29.32 | 5600 | 0.6768 | 0.3755 |
e7ad6a7689222d5cd08966448c6b1174
apache-2.0
['image-classification', 'generated_from_trainer']
false
vit_receipts_classifier This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the cord, rvl-cdip, visual-genome and an external receipt dataset to carry out binary classification (`ticket` vs `no_ticket`). "Ticket" is used here as a synonym for "receipt". It achieves the following results on the evaluation set, which contains pictures from the above datasets in scanned, photographed, or mobile-picture formats (color and grayscale): - Loss: 0.0116 - F1: 0.9991
178f6b1fb7f9eb5817b70dbb2cfad7d4
apache-2.0
['image-classification', 'generated_from_trainer']
false
Intended uses & limitations Use this model to classify your images as tickets or non-tickets. Within the `ticket` group, you can use multimodal information extraction, such as visual named entity recognition, to extract the ticket items, amounts, total, etc. Check the CORD dataset for more information.
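At inference time the binary decision is just a softmax over the two class logits; a sketch of that final step (the label order here is an assumption, check the model's `id2label` config):

```python
import math

def classify(logits, labels=("no_ticket", "ticket")):
    """Softmax over the two logits, return (predicted_label, probability)."""
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]  # subtract max for numerical stability
    total = sum(exps)
    probs = [e / total for e in exps]
    i = probs.index(max(probs))
    return labels[i], probs[i]
```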
53d26941e34e04205da931edafe2a584
apache-2.0
['image-classification', 'generated_from_trainer']
false
Training and evaluation data This model used 2 datasets as positive class (`ticket`): - `cord` - `https://expressexpense.com/blog/free-receipt-images-ocr-machine-learning-dataset/` For the negative class (`no_ticket`), the following datasets were used: - A subset of `RVL-CDIP` - A subset of `visual-genome`
7e7f5abfa12b9c98f7344ddb2fb69739
apache-2.0
['image-classification', 'generated_from_trainer']
false
Training procedure Datasets were loaded with different distributions of data for the positive and negative classes. Then, normalization and resizing are carried out to adapt the images to ViT's expected input. Different runs were carried out, changing the data distribution and the hyperparameters to maximize F1.
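The resize-and-normalize step can be sketched in plain Python (nearest-neighbor resize for brevity; real pipelines use the model's image processor, and the 0.5 mean/std values assumed here are the ones commonly used with ViT checkpoints):

```python
def preprocess(pixels, size=224, mean=0.5, std=0.5):
    """Nearest-neighbor resize of a 2D grayscale image, then normalize to [-1, 1]."""
    h, w = len(pixels), len(pixels[0])
    resized = [[pixels[int(r * h / size)][int(c * w / size)] for c in range(size)]
               for r in range(size)]
    return [[((v / 255.0) - mean) / std for v in row] for row in resized]
```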
227d3c926bf59d476e9865dad90290d4
apache-2.0
['image-classification', 'generated_from_trainer']
false
Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0002 - train_batch_size: 16 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 1 - mixed_precision_training: Native AMP
668deb64684b9b9c34fe78decac78430
apache-2.0
['image-classification', 'generated_from_trainer']
false
Training results | Training Loss | Epoch | Step | Validation Loss | F1 | |:-------------:|:-----:|:----:|:---------------:|:------:| | 0.0026 | 0.28 | 500 | 0.0187 | 0.9982 | | 0.0186 | 0.56 | 1000 | 0.0116 | 0.9991 | | 0.0006 | 0.84 | 1500 | 0.0044 | 0.9997 |
4129f78bc586e1bd74cb809397eba117
mit
[]
false
Cosmoose-SD on Stable Diffusion via Dreambooth trained on the [fast-DreamBooth.ipynb by TheLastBen](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast-DreamBooth.ipynb) notebook
a29ea3cf51728c1e113734f223414ca8
mit
[]
false
Model by woolion This is the Stable Diffusion model fine-tuned with the Cosmoose-SD concept, taught to Stable Diffusion via DreamBooth. It can be used by modifying the `instance_prompt(s)` with `csmoos_style`. The DreamBooth step was trained on Cosmoose(.org) images, all drawn by Woolion. You can also train your own concepts and upload them to the library by using [the fast-DreamBooth.ipynb by TheLastBen](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast-DreamBooth.ipynb). You can run your new concept via the A1111 Colab: [Fast-Colab-A1111](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast_stable_diffusion_AUTOMATIC1111.ipynb) Or you can run your new concept via `diffusers`: [Colab Notebook for Inference](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_dreambooth_inference.ipynb), [Spaces with the Public Concepts loaded](https://huggingface.co/spaces/sd-dreambooth-library/stable-diffusion-dreambooth-concepts) Sample pictures of this concept: ![1668380733319.png 0](https://huggingface.co/woolion/cosmoose-sd/resolve/main/concept_images/1668380733319.png) ![1668380618746.png 1](https://huggingface.co/woolion/cosmoose-sd/resolve/main/concept_images/1668380618746.png) ![1668379756916.png 2](https://huggingface.co/woolion/cosmoose-sd/resolve/main/concept_images/1668379756916.png) ![1668380287261.png 3](https://huggingface.co/woolion/cosmoose-sd/resolve/main/concept_images/1668380287261.png)
58053af6138533efe66d6e0ba05d4f3f
cc-by-4.0
['generated_from_trainer']
false
thai-squad This model is a fine-tuned version of [deepset/xlm-roberta-base-squad2](https://huggingface.co/deepset/xlm-roberta-base-squad2) on a Thai dataset from [iApp Technology Co., Ltd.](https://github.com/iapp-technology/iapp-wiki-qa-dataset).
e42da5dd8795fd97aedc11036b944fe3
cc-by-4.0
['generated_from_trainer']
false
Training hyperparameters The following hyperparameters were used during training: - learning_rate: 3e-05 - train_batch_size: 2 - eval_batch_size: 2 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 4 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2
b695ad7dabfa455f7087fd35f0ce9b57
apache-2.0
['automatic-speech-recognition', 'ru']
false
exp_w2v2t_ru_vp-es_s35 Fine-tuned [facebook/wav2vec2-large-es-voxpopuli](https://huggingface.co/facebook/wav2vec2-large-es-voxpopuli) for speech recognition using the train split of [Common Voice 7.0 (ru)](https://huggingface.co/datasets/mozilla-foundation/common_voice_7_0). When using this model, make sure that your speech input is sampled at 16kHz. This model has been fine-tuned by the [HuggingSound](https://github.com/jonatasgrosman/huggingsound) tool.
770facc59d85ab7f1c9732193b43b887
apache-2.0
['generated_from_trainer']
false
wav2vec2-base-timit-demo-colab4 This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.9149 - Wer: 0.5907
6cc9516be5bbbeb743ecb60d53057305