modelId: string (4-81 chars)
tags: list
pipeline_tag: string (17 classes)
config: dict
downloads: int64 (0-59.7M)
first_commit: timestamp[ns, tz=UTC]
card: string (51-438k chars)
Akashpb13/xlsr_maltese_wav2vec2
[ "pytorch", "jax", "wav2vec2", "automatic-speech-recognition", "mt", "dataset:common_voice", "transformers", "audio", "speech", "xlsr-fine-tuning-week", "license:apache-2.0", "model-index" ]
automatic-speech-recognition
{ "architectures": [ "Wav2Vec2ForCTC" ], "model_type": "wav2vec2", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
8
2023-04-11T07:06:47Z
--- license: apache-2.0 datasets: - zaemyung/IteraTeR_plus language: - en pipeline_tag: text2text-generation --- # DElIteraTeR-PEGASUS-Multi-Sent-Revision-Generator This model was obtained by fine-tuning [google/pegasus-large](https://huggingface.co/google/pegasus-large) on the [IteraTeR+](https://huggingface.co/datasets/zaemyung/IteraTeR_plus) `multi_sent` dataset. Paper: [Improving Iterative Text Revision by Learning Where to Edit from Other Revision Tasks](https://aclanthology.org/2022.emnlp-main.678/) <br> Authors: Zae Myung Kim, Wanyu Du, Vipul Raheja, Dhruv Kumar, and Dongyeop Kang ## Usage ```python from transformers import AutoTokenizer, AutoModelForSeq2SeqLM tokenizer = AutoTokenizer.from_pretrained("zaemyung/DElIteraTeR-PEGASUS-Multi-Sent-Revision-Generator") model = AutoModelForSeq2SeqLM.from_pretrained("zaemyung/DElIteraTeR-PEGASUS-Multi-Sent-Revision-Generator") before_inputs = [ "<bos>These were known as temple rings <coherence>. They</coherence> were worn on the head, near the temples of a woman or a girl.<eos>", "Andrew Hendy, Hereditary Chief of the Miskitu Nation.<bos> <clarity>Proclaimed</clarity> by the Nicaraguans on the death of his cousin George V, who died on 8th November 1888.<eos> He was repudiated by many people of the Miskitu Nation and abdicated in favour of his cousin Jonathan I, on 8th March 1889. He retired to Nicaraguan territory where he became a Miskitu Jefe Inspector and River Magistrate." ] model_inputs = tokenizer(before_inputs, return_tensors='pt', padding=True) model_outputs = model.generate(**model_inputs, num_beams=8, max_length=1024) after_texts = tokenizer.batch_decode(model_outputs, skip_special_tokens=True) print(after_texts) # ['These were known as temple rings because they were worn on the head, near the temples of a woman or a girl.', # 'Andrew Hendy, Hereditary Chief of the Miskitu Nation. He was proclaimed by the Nicaraguans on the death of his cousin George V, who died on 8th November 1888. He was repudiated by many people of the Miskitu Nation and abdicated in favour of his cousin Jonathan I, on 8th March 1889. He retired to Nicaraguan territory where he became a Miskitu Jefe Inspector and River Magistrate.'] ```
Akiva/Joke
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
null
--- license: apache-2.0 tags: - generated_from_trainer metrics: - wer model-index: - name: whisper-large-v2-kangri results: - task: type: automatic-speech-recognition name: Speech Recognition dataset: type: bridgeconn/snow-mountain name: snow-mountain-Kangri config: Kangri split: train_500 metrics: - type: wer value: 17.40 name: WER lower_is_better: true --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # whisper-large-v2-kangri This model is a fine-tuned version of [vasista22/whisper-hindi-large-v2](https://huggingface.co/vasista22/whisper-hindi-large-v2) on the [bridgeconn/snow-mountain](https://huggingface.co/datasets/bridgeconn/snow-mountain) dataset for the low-resource Indian language Kangri. It achieves the following results on the evaluation set: - Loss: 0.2967 - Wer: 0.1740 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - distributed_type: multi-GPU - num_devices: 2 - total_train_batch_size: 16 - total_eval_batch_size: 16 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - training_steps: 5000 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:----:|:---------------:|:------:| | 0.0001 | 40.0 | 1000 | 0.2442 | 0.1800 | | 0.0 | 80.0 | 2000 | 0.2752 | 0.1764 | | 0.0 | 120.0 | 3000 | 0.2870 | 0.1747 | | 0.0 | 160.0 | 4000 | 0.2940 | 0.1745 | | 0.0 | 200.0 | 5000 | 0.2967 | 0.1740 | ### Framework versions - Transformers 4.28.0.dev0 - Pytorch 2.0.0+cu117 - Datasets 2.11.0 - Tokenizers 0.13.3
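A minimal inference sketch for this checkpoint, assuming it follows the standard Whisper layout on the Hub; the repo id and audio filename are placeholders, since the card does not state them:

```python
from transformers import pipeline

# Placeholder repo id and audio file; substitute the actual Hub path of this checkpoint.
asr = pipeline("automatic-speech-recognition", model="<user>/whisper-large-v2-kangri")
print(asr("kangri_sample.wav")["text"])
```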
AkshatSurolia/BEiT-FaceMask-Finetuned
[ "pytorch", "beit", "image-classification", "dataset:Face-Mask18K", "transformers", "license:apache-2.0", "autotrain_compatible" ]
image-classification
{ "architectures": [ "BeitForImageClassification" ], "model_type": "beit", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
239
null
--- license: mit tags: - generated_from_trainer metrics: - f1 model-index: - name: xlm-roberta-base-finetuned-panx-de-fr results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # xlm-roberta-base-finetuned-panx-de-fr This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.1654 - F1: 0.8590 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 24 - eval_batch_size: 24 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 | |:-------------:|:-----:|:----:|:---------------:|:------:| | 0.2845 | 1.0 | 715 | 0.1831 | 0.8249 | | 0.1449 | 2.0 | 1430 | 0.1643 | 0.8479 | | 0.0929 | 3.0 | 2145 | 0.1654 | 0.8590 | ### Framework versions - Transformers 4.11.3 - Pytorch 1.13.0 - Datasets 1.16.1 - Tokenizers 0.10.3
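As a hedged sketch of how a PAN-X-style NER checkpoint like this is typically queried (the repo id is a placeholder; the card does not give one):

```python
from transformers import pipeline

# Placeholder repo id; aggregation_strategy="simple" merges word pieces into entity spans.
ner = pipeline("token-classification",
               model="<user>/xlm-roberta-base-finetuned-panx-de-fr",
               aggregation_strategy="simple")
print(ner("Jeff Dean arbeitet bei Google in Kalifornien."))
```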
AlbertHSU/BertTEST
[ "pytorch" ]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
8
null
--- license: mit tags: - generated_from_trainer datasets: - xtreme metrics: - f1 model-index: - name: xlm-roberta-base-finetuned-panx-fr results: - task: name: Token Classification type: token-classification dataset: name: xtreme type: xtreme args: PAN-X.fr metrics: - name: F1 type: f1 value: 0.816703001844709 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # xlm-roberta-base-finetuned-panx-fr This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the xtreme dataset. It achieves the following results on the evaluation set: - Loss: 0.3170 - F1: 0.8167 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 24 - eval_batch_size: 24 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 | |:-------------:|:-----:|:----:|:---------------:|:------:| | 0.7704 | 1.0 | 96 | 0.3685 | 0.7553 | | 0.319 | 2.0 | 192 | 0.3247 | 0.7881 | | 0.2142 | 3.0 | 288 | 0.3170 | 0.8167 | ### Framework versions - Transformers 4.11.3 - Pytorch 1.13.0 - Datasets 1.16.1 - Tokenizers 0.10.3
Aleksandar/bert-srb-base-cased-oscar
[ "pytorch", "bert", "fill-mask", "transformers", "generated_from_trainer", "autotrain_compatible" ]
fill-mask
{ "architectures": [ "BertForMaskedLM" ], "model_type": "bert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
7
2023-04-11T07:46:23Z
--- license: mit tags: - generated_from_trainer metrics: - f1 model-index: - name: xlm-roberta-base-finetuned-panx-all results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # xlm-roberta-base-finetuned-panx-all This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.1717 - F1: 0.8552 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 24 - eval_batch_size: 24 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | F1 | |:-------------:|:-----:|:----:|:---------------:|:------:| | 0.2939 | 1.0 | 715 | 0.1884 | 0.8138 | | 0.1464 | 2.0 | 1430 | 0.1720 | 0.8469 | | 0.0939 | 3.0 | 2145 | 0.1717 | 0.8552 | ### Framework versions - Transformers 4.11.3 - Pytorch 1.13.0 - Datasets 1.16.1 - Tokenizers 0.10.3
Aleksandar/bert-srb-ner
[ "pytorch", "bert", "token-classification", "dataset:wikiann", "transformers", "generated_from_trainer", "autotrain_compatible" ]
token-classification
{ "architectures": [ "BertForTokenClassification" ], "model_type": "bert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
4
null
--- license: apache-2.0 tags: - generated_from_trainer metrics: - rouge model-index: - name: summarization_model results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # summarization_model This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.1359 - Rouge1: 0.1813 - Rouge2: 0.1114 - Rougel: 0.1616 - Rougelsum: 0.1617 - Gen Len: 19.0 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 8 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len | |:-------------:|:-----:|:-----:|:---------------:|:------:|:------:|:------:|:---------:|:-------:| | 0.2358 | 1.0 | 1635 | 0.1719 | 0.1758 | 0.1033 | 0.1554 | 0.1554 | 19.0 | | 0.2043 | 2.0 | 3270 | 0.1574 | 0.1764 | 0.1046 | 0.1561 | 0.1561 | 19.0 | | 0.191 | 3.0 | 4905 | 0.1505 | 0.1778 | 0.1069 | 0.1577 | 0.1578 | 19.0 | | 0.178 | 4.0 | 6540 | 0.1448 | 0.1797 | 0.1093 | 0.1597 | 0.1597 | 19.0 | | 0.1734 | 5.0 | 8175 | 0.1406 | 0.1804 | 0.1102 | 0.1605 | 0.1604 | 19.0 | | 0.1681 | 6.0 | 9810 | 0.1376 | 0.1811 | 0.111 | 0.1613 | 0.1613 | 19.0 | | 0.1665 | 7.0 | 11445 | 0.1365 | 0.1815 | 0.1114 | 0.1618 | 0.1618 | 19.0 | | 0.1643 | 8.0 | 13080 | 0.1359 | 0.1813 | 0.1114 | 0.1616 | 0.1617 | 19.0 | ### Framework versions - Transformers 4.28.0 - Pytorch 2.0.0+cu118 - Datasets 2.11.0 - Tokenizers 0.13.3
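A minimal usage sketch, assuming the checkpoint behaves like other t5-small fine-tunes; the repo id and input text are placeholders, and the short max_length mirrors the Gen Len of 19 reported above:

```python
from transformers import pipeline

# Placeholder repo id; max_length=19 matches the generation length reported in the card.
summarizer = pipeline("summarization", model="<user>/summarization_model")
text = ("The tower is 324 metres tall, about the same height as an 81-storey "
        "building, and is the tallest structure in Paris.")
print(summarizer(text, min_length=5, max_length=19)[0]["summary_text"])
```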
Aleksandar/electra-srb-ner-setimes
[ "pytorch", "electra", "token-classification", "transformers", "generated_from_trainer", "autotrain_compatible" ]
token-classification
{ "architectures": [ "ElectraForTokenClassification" ], "model_type": "electra", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
6
null
--- library_name: ml-agents tags: - Huggy - deep-reinforcement-learning - reinforcement-learning - ML-Agents-Huggy --- # **ppo** Agent playing **Huggy** This is a trained model of a **ppo** agent playing **Huggy** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents). ## Usage (with ML-Agents) The Documentation: https://github.com/huggingface/ml-agents#get-started We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub. ### Resume the training ``` mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume ``` ### Watch your Agent play You can watch your agent **playing directly in your browser**: 1. Go to https://huggingface.co/spaces/unity/ML-Agents-Huggy 2. Find your model_id: hawkeoni/ppo-Huggy 3. Select your *.nn or *.onnx file 4. Click on Watch the agent play 👀
Aleksandar/electra-srb-ner
[ "pytorch", "safetensors", "electra", "token-classification", "dataset:wikiann", "transformers", "generated_from_trainer", "autotrain_compatible" ]
token-classification
{ "architectures": [ "ElectraForTokenClassification" ], "model_type": "electra", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
15
null
--- language: - en license: apache-2.0 tags: - whisper-event - generated_from_trainer datasets: - mozilla-foundation/common_voice_11_0 metrics: - wer model-index: - name: Whisper Small English results: - task: name: Automatic Speech Recognition type: automatic-speech-recognition dataset: name: mozilla-foundation/common_voice_11_0 en type: mozilla-foundation/common_voice_11_0 config: en split: test args: en metrics: - name: Wer type: wer value: 12.021334704238024 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # Whisper Small English This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on the mozilla-foundation/common_voice_11_0 en dataset. It achieves the following results on the evaluation set: - Loss: 0.3107 - Wer: 12.0213 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 32 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - training_steps: 40000 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:-----:|:---------------:|:-------:| | 0.1577 | 0.06 | 2500 | 0.4077 | 16.2349 | | 0.2244 | 0.12 | 5000 | 0.3698 | 14.7325 | | 0.3231 | 0.19 | 7500 | 0.3434 | 13.7448 | | 0.2536 | 0.25 | 10000 | 0.3406 | 13.4981 | | 0.2234 | 0.31 | 12500 | 0.3510 | 14.1304 | | 0.1989 | 0.38 | 15000 | 0.3388 | 13.6394 | | 0.2449 | 0.44 | 17500 | 0.3394 | 13.4293 | | 0.2302 | 0.5 | 20000 | 0.3198 | 12.5020 | | 0.213 | 0.56 | 22500 | 0.3167 | 12.4904 | | 0.2395 | 0.62 | 25000 | 0.3145 | 12.7533 | | 0.1152 | 0.69 | 27500 | 0.3181 | 12.6087 | | 0.0901 | 1.01 | 30000 | 0.3134 | 12.3240 | | 0.1595 | 1.07 | 32500 | 0.3107 | 12.0213 | | 0.1249 | 1.13 | 35000 | 0.3131 | 12.0869 | | 0.1404 | 1.2 | 37500 | 0.3117 | 12.4635 | | 0.1812 | 1.26 | 40000 | 0.3104 | 12.1415 | ### Framework versions - Transformers 4.28.0.dev0 - Pytorch 2.0.0+cu117 - Datasets 2.11.1.dev0 - Tokenizers 0.13.2
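For reference, a word error rate like the 12.02 reported above can be computed with the `evaluate` library; this is an illustration, not necessarily the implementation the authors used:

```python
import evaluate

# Toy example: one substituted word out of six gives a WER of about 16.7%.
wer = evaluate.load("wer")
score = wer.compute(references=["the cat sat on the mat"],
                    predictions=["the cat sat on a mat"])
print(100 * score)
```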
Aleksandar1932/gpt2-hip-hop
[ "pytorch", "gpt2", "text-generation", "transformers" ]
text-generation
{ "architectures": [ "GPT2LMHeadModel" ], "model_type": "gpt2", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": true, "max_length": 50 }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
8
null
--- tags: - generated_from_trainer model-index: - name: lora-alpaca-spanish-30b-v0.2 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # lora-alpaca-spanish-30b-v0.2 This model was trained from scratch on an unknown dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0003 - train_batch_size: 4 - eval_batch_size: 8 - seed: 42 - distributed_type: multi-GPU - num_devices: 4 - gradient_accumulation_steps: 32 - total_train_batch_size: 512 - total_eval_batch_size: 32 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 100 - num_epochs: 3 - mixed_precision_training: Native AMP ### Training results ### Framework versions - Transformers 4.28.0.dev0 - Pytorch 2.0.0 - Datasets 2.11.0 - Tokenizers 0.13.3
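The name suggests a LoRA adapter, although the card confirms neither this nor the base model. If it is a PEFT-style adapter, loading could look like the following sketch, where both ids are assumptions:

```python
from transformers import AutoModelForCausalLM
from peft import PeftModel

# Both ids are assumptions: the card names neither the base model nor the Hub path.
base = AutoModelForCausalLM.from_pretrained("<base-30b-model>")
model = PeftModel.from_pretrained(base, "<user>/lora-alpaca-spanish-30b-v0.2")
```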
Alexander-Learn/bert-finetuned-squad-accelerate
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
null
--- library_name: stable-baselines3 tags: - LunarLander-v2 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: PPO results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: LunarLander-v2 type: LunarLander-v2 metrics: - type: mean_reward value: 271.77 +/- 15.69 name: mean_reward verified: false --- # **PPO** Agent playing **LunarLander-v2** This is a trained model of a **PPO** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3). ## Usage (with Stable-baselines3) TODO: Add your code ```python from stable_baselines3 import ... from huggingface_sb3 import load_from_hub ... ```
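One way to fill in the card's TODO, assuming the standard stable-baselines3 Hub layout; the repo id and filename are guesses, not stated in the card:

```python
import gymnasium as gym
from stable_baselines3 import PPO
from huggingface_sb3 import load_from_hub

# repo_id and filename are placeholders following the usual SB3 naming convention.
checkpoint = load_from_hub(repo_id="<user>/ppo-LunarLander-v2",
                           filename="ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)

env = gym.make("LunarLander-v2")
obs, _ = env.reset()
action, _ = model.predict(obs, deterministic=True)
```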
Alexander-Learn/bert-finetuned-squad
[ "pytorch", "tensorboard", "bert", "question-answering", "transformers", "autotrain_compatible" ]
question-answering
{ "architectures": [ "BertForQuestionAnswering" ], "model_type": "bert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
7
null
--- library_name: ml-agents tags: - Pyramids - deep-reinforcement-learning - reinforcement-learning - ML-Agents-Pyramids --- # **ppo** Agent playing **Pyramids** This is a trained model of a **ppo** agent playing **Pyramids** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents). ## Usage (with ML-Agents) The Documentation: https://github.com/huggingface/ml-agents#get-started We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub. ### Resume the training ``` mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume ``` ### Watch your Agent play You can watch your agent **playing directly in your browser**: 1. Go to https://huggingface.co/spaces/unity/ML-Agents-Pyramids 2. Find your model_id: chenoi/deepRL5-2 3. Select your *.nn or *.onnx file 4. Click on Watch the agent play 👀
Alexandru/creative_copilot
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
null
--- license: cc-by-nc-4.0 tags: - generated_from_trainer - instruction fine-tuning model-index: - name: flan-t5-small-distil-v2 results: [] language: - en pipeline_tag: text2text-generation widget: - text: >- how can I become more healthy? example_title: example --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> <p align="center" width="100%"> <a><img src="https://raw.githubusercontent.com/mbzuai-nlp/lamini-lm/main/images/lamini.png" alt="Title" style="width: 100%; min-width: 300px; display: block; margin: auto;"></a> </p> # LaMini-T5-61M [![Model License](https://img.shields.io/badge/Model%20License-CC%20By%20NC%204.0-red.svg)]() This model is one of our LaMini-LM series in paper "[LaMini-LM: A Diverse Herd of Distilled Models from Large-Scale Instructions](https://github.com/mbzuai-nlp/lamini-lm)". This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on [LaMini-instruction dataset](https://huggingface.co/datasets/MBZUAI/LaMini-instruction) that contains 2.58M samples for instruction fine-tuning. For more information about our dataset, please refer to our [project repository](https://github.com/mbzuai-nlp/lamini-lm/). You can view other models of LaMini-LM series as follows. Models with ✩ are those with the best overall performance given their size/architecture, hence we recommend using them. More details can be seen in our paper. <table> <thead> <tr> <th>Base model</th> <th colspan="4">LaMini-LM series (#parameters)</th> </tr> </thead> <tbody> <tr> <td>T5</td> <td><a href="https://huggingface.co/MBZUAI/lamini-t5-61m" target="_blank" rel="noopener noreferrer">LaMini-T5-61M</a></td> <td><a href="https://huggingface.co/MBZUAI/lamini-t5-223m" target="_blank" rel="noopener noreferrer">LaMini-T5-223M</a></td> <td><a href="https://huggingface.co/MBZUAI/lamini-t5-738m" target="_blank" rel="noopener noreferrer">LaMini-T5-738M</a></td> <td></td> </tr> <tr> <td>Flan-T5</td> <td><a href="https://huggingface.co/MBZUAI/lamini-flan-t5-77m" target="_blank" rel="noopener noreferrer">LaMini-Flan-T5-77M</a>✩</td> <td><a href="https://huggingface.co/MBZUAI/lamini-flan-t5-248m" target="_blank" rel="noopener noreferrer">LaMini-Flan-T5-248M</a>✩</td> <td><a href="https://huggingface.co/MBZUAI/lamini-flan-t5-783m" target="_blank" rel="noopener noreferrer">LaMini-Flan-T5-783M</a>✩</td> <td></td> </tr> <tr> <td>Cerebras-GPT</td> <td><a href="https://huggingface.co/MBZUAI/lamini-cerebras-111m" target="_blank" rel="noopener noreferrer">LaMini-Cerebras-111M</a></td> <td><a href="https://huggingface.co/MBZUAI/lamini-cerebras-256m" target="_blank" rel="noopener noreferrer">LaMini-Cerebras-256M</a></td> <td><a href="https://huggingface.co/MBZUAI/lamini-cerebras-590m" target="_blank" rel="noopener noreferrer">LaMini-Cerebras-590M</a></td> <td><a href="https://huggingface.co/MBZUAI/lamini-cerebras-1.3b" target="_blank" rel="noopener noreferrer">LaMini-Cerebras-1.3B</a></td> </tr> <tr> <td>GPT-2</td> <td><a href="https://huggingface.co/MBZUAI/lamini-gpt-124m" target="_blank" rel="noopener noreferrer">LaMini-GPT-124M</a>✩</td> <td><a href="https://huggingface.co/MBZUAI/lamini-gpt-774m" target="_blank" rel="noopener noreferrer">LaMini-GPT-774M</a>✩</td> <td><a href="https://huggingface.co/MBZUAI/lamini-gpt-1.5b" target="_blank" rel="noopener noreferrer">LaMini-GPT-1.5B</a>✩</td> <td></td> </tr> <tr> <td>GPT-Neo</td> <td><a 
href="https://huggingface.co/MBZUAI/lamini-neo-125m" target="_blank" rel="noopener noreferrer">LaMini-Neo-125M</a></td> <td><a href="https://huggingface.co/MBZUAI/lamini-neo-1.3b" target="_blank" rel="noopener noreferrer">LaMini-Neo-1.3B</a></td> <td></td> <td></td> </tr> <tr> <td>GPT-J</td> <td colspan="4">coming soon</td> </tr> <tr> <td>LLaMA</td> <td colspan="4">coming soon</td> </tr> </tbody> </table> ## Use ### Intended use We recommend using the model to response to human instructions written in natural language. We now show you how to load and use our model using HuggingFace `pipeline()`. ```python # pip install -q transformers from transformers import pipeline checkpoint = "{model_name}" model = pipeline('text2text-generation', model = checkpoint) input_prompt = 'Please let me know your thoughts on the given place and why you think it deserves to be visited: \n"Barcelona, Spain"' generated_text = model(input_prompt, max_length=512, do_sample=True)[0]['generated_text'] print("Response", generated_text) ``` ## Training Procedure <p align="center" width="100%"> <a><img src="https://raw.githubusercontent.com/mbzuai-nlp/lamini-lm/main/images/lamini-pipeline.drawio.png" alt="Title" style="width: 100%; min-width: 250px; display: block; margin: auto;"></a> </p> We initialize with [t5-small](https://huggingface.co/t5-small) and fine-tune it on our [LaMini-instruction dataset](https://huggingface.co/datasets/MBZUAI/LaMini-instruction). Its total number of parameters is 61M. ### Training Hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0005 - train_batch_size: 128 - eval_batch_size: 64 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 512 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 ## Evaluation We conducted two sets of evaluations: automatic evaluation on downstream NLP tasks and human evaluation on user-oriented instructions. For more detail, please refer to our [paper](). ## Limitations More information needed # Citation ```bibtex @article{lamini-lm, author = {Minghao Wu and Abdul Waheed and Chiyu Zhang and Muhammad Abdul-Mageed and Alham Fikri Aji }, title = {LaMini-LM: A Diverse Herd of Distilled Models from Large-Scale Instructions}, journal = {CoRR}, volume = {abs/2304.14402}, year = {2023}, url = {https://arxiv.org/abs/2304.14402}, eprinttype = {arXiv}, eprint = {2304.14402} } ```
Alireza1044/albert-base-v2-stsb
[ "pytorch", "tensorboard", "albert", "text-classification", "en", "dataset:glue", "transformers", "generated_from_trainer", "license:apache-2.0" ]
text-classification
{ "architectures": [ "AlbertForSequenceClassification" ], "model_type": "albert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
37
null
--- license: cc-by-nc-sa-4.0 tags: - generated_from_trainer metrics: - precision - recall - f1 - accuracy model-index: - name: layoutlmv3-finetuned-invoice-3 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # layoutlmv3-finetuned-invoice-3 This model is a fine-tuned version of [microsoft/layoutlmv3-base](https://huggingface.co/microsoft/layoutlmv3-base) on the None dataset. It achieves the following results on the evaluation set: - Loss: 1.0175 - Precision: 0.0 - Recall: 0.0 - F1: 0.0 - Accuracy: 0.9239 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 2 - eval_batch_size: 2 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - training_steps: 20 ### Training results ### Framework versions - Transformers 4.27.4 - Pytorch 2.0.0+cu117 - Datasets 2.11.0 - Tokenizers 0.13.3
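A hedged inference sketch for a LayoutLMv3 token classifier such as this one; the repo id and image path are placeholders, and `apply_ocr=True` additionally requires Tesseract to be installed:

```python
from PIL import Image
from transformers import AutoProcessor, AutoModelForTokenClassification

# Placeholder ids; apply_ocr=True lets the processor extract words and boxes itself.
processor = AutoProcessor.from_pretrained("<user>/layoutlmv3-finetuned-invoice-3", apply_ocr=True)
model = AutoModelForTokenClassification.from_pretrained("<user>/layoutlmv3-finetuned-invoice-3")

image = Image.open("invoice.png").convert("RGB")
encoding = processor(image, return_tensors="pt")
predictions = model(**encoding).logits.argmax(-1)
```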
AmirServi/MyModel
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
2023-04-11T09:51:37Z
--- license: other --- LLaMA-7B converted to work with Transformers/HuggingFace. This is under a special license, please see the LICENSE file for details. # LLaMA Model Card ## Model details **Organization developing the model** The FAIR team of Meta AI. **Model date** LLaMA was trained between December 2022 and February 2023. **Model version** This is version 1 of the model. **Model type** LLaMA is an auto-regressive language model, based on the transformer architecture. The model comes in different sizes: 7B, 13B, 33B and 65B parameters. **Paper or resources for more information** More information can be found in the paper “LLaMA, Open and Efficient Foundation Language Models”, available at https://research.facebook.com/publications/llama-open-and-efficient-foundation-language-models/. **Citations details** https://research.facebook.com/publications/llama-open-and-efficient-foundation-language-models/ **License** Non-commercial bespoke license **Where to send questions or comments about the model** Questions and comments about LLaMA can be sent via the [GitHub repository](https://github.com/facebookresearch/llama) of the project, by opening an issue. ## Intended use **Primary intended uses** The primary use of LLaMA is research on large language models, including: exploring potential applications such as question answering, natural language understanding or reading comprehension; understanding the capabilities and limitations of current language models, and developing techniques to improve those; and evaluating and mitigating biases, risks, and toxic or harmful content generation, including hallucinations. **Primary intended users** The primary intended users of the model are researchers in natural language processing, machine learning and artificial intelligence. **Out-of-scope use cases** LLaMA is a base, or foundational, model. As such, it should not be used on downstream applications without further risk evaluation and mitigation. In particular, our model has not been trained with human feedback, and can thus generate toxic or offensive content, incorrect information or generally unhelpful answers. ## Factors **Relevant factors** One of the most relevant factors for which model performance may vary is which language is used. Although we included 20 languages in the training data, most of our dataset is made of English text, and we thus expect the model to perform better for English than other languages. Relatedly, it has been shown in previous studies that performance might vary for different dialects, and we expect that it will be the case for our model. **Evaluation factors** As our model is trained on data from the Web, we expect that it reflects biases from this source. We thus evaluated on RAI datasets to measure biases exhibited by the model for gender, religion, race, sexual orientation, age, nationality, disability, physical appearance and socio-economic status. We also measure the toxicity of model generations, depending on the toxicity of the context used to prompt the model. ## Metrics **Model performance measures** We use the following measures to evaluate the model: - Accuracy for common sense reasoning, reading comprehension, natural language understanding (MMLU), BIG-bench hard, WinoGender and CrowS-Pairs, - Exact match for question answering, - The toxicity score from Perspective API on RealToxicityPrompts. **Decision thresholds** Not applicable.
**Approaches to uncertainty and variability** Due to the high computational requirements of training LLMs, we trained only one model of each size, and thus could not evaluate variability of pre-training. ## Evaluation datasets The model was evaluated on the following benchmarks: BoolQ, PIQA, SIQA, HellaSwag, WinoGrande, ARC, OpenBookQA, NaturalQuestions, TriviaQA, RACE, MMLU, BIG-bench hard, GSM8k, RealToxicityPrompts, WinoGender, CrowS-Pairs. ## Training dataset The model was trained using the following sources of data: CCNet [67%], C4 [15%], GitHub [4.5%], Wikipedia [4.5%], Books [4.5%], ArXiv [2.5%], Stack Exchange [2%]. The Wikipedia and Books domains include data in the following languages: bg, ca, cs, da, de, en, es, fr, hr, hu, it, nl, pl, pt, ro, ru, sl, sr, sv, uk. See the paper for more details about the training set and corresponding preprocessing. ## Quantitative analysis Hyperparameters for the model architecture <table> <thead> <tr> <th>LLaMA</th> <th colspan=6>Model hyperparameters</th> </tr> <tr> <th>Number of parameters</th><th>dimension</th><th>n heads</th><th>n layers</th><th>Learning rate</th><th>Batch size</th><th>n tokens</th> </tr> </thead> <tbody> <tr> <th>7B</th> <th>4096</th> <th>32</th> <th>32</th> <th>3.0E-04</th><th>4M</th><th>1T</th> </tr> <tr> <th>13B</th><th>5120</th><th>40</th><th>40</th><th>3.0E-04</th><th>4M</th><th>1T</th> </tr> <tr> <th>33B</th><th>6656</th><th>52</th><th>60</th><th>1.5E-04</th><th>4M</th><th>1.4T</th> </tr> <tr> <th>65B</th><th>8192</th><th>64</th><th>80</th><th>1.5E-04</th><th>4M</th><th>1.4T</th> </tr> </tbody> </table> *Table 1 - Summary of LLaMA model hyperparameters* We present our results on eight standard common sense reasoning benchmarks in the table below. <table> <thead> <tr> <th>LLaMA</th> <th colspan=9>Reasoning tasks</th> </tr> <tr> <th>Number of parameters</th> <th>BoolQ</th><th>PIQA</th><th>SIQA</th><th>HellaSwag</th><th>WinoGrande</th><th>ARC-e</th><th>ARC-c</th><th>OBQA</th><th>COPA</th> </tr> </thead> <tbody> <tr> <th>7B</th><th>76.5</th><th>79.8</th><th>48.9</th><th>76.1</th><th>70.1</th><th>76.7</th><th>47.6</th><th>57.2</th><th>93</th> </tr> <tr><th>13B</th><th>78.1</th><th>80.1</th><th>50.4</th><th>79.2</th><th>73</th><th>78.1</th><th>52.7</th><th>56.4</th><th>94</th> </tr> <tr><th>33B</th><th>83.1</th><th>82.3</th><th>50.4</th><th>82.8</th><th>76</th><th>81.4</th><th>57.8</th><th>58.6</th><th>92</th> </tr> <tr><th>65B</th><th>85.3</th><th>82.8</th><th>52.3</th><th>84.2</th><th>77</th><th>81.5</th><th>56</th><th>60.2</th><th>94</th></tr> </tbody> </table> *Table 2 - Summary of LLaMA model performance on reasoning tasks* We present our results on bias in the table below. Note that a lower value is better, indicating lower bias. | No | Category | FAIR LLM | | --- | -------------------- | -------- | | 1 | Gender | 70.6 | | 2 | Religion | 79 | | 3 | Race/Color | 57 | | 4 | Sexual orientation | 81 | | 5 | Age | 70.1 | | 6 | Nationality | 64.2 | | 7 | Disability | 66.7 | | 8 | Physical appearance | 77.8 | | 9 | Socioeconomic status | 71.5 | | | LLaMA Average | 66.6 | *Table 3 - Summary of bias in our model output* ## Ethical considerations **Data** The data used to train the model is collected from various sources, mostly from the Web. As such, it contains offensive, harmful and biased content. We thus expect the model to exhibit such biases from the training data. **Human life** The model is not intended to inform decisions about matters central to human life, and should not be used in such a way.
**Mitigations** We filtered the data from the Web based on its proximity to Wikipedia text and references. For this, we used a Kneser-Ney language model and a fastText linear classifier. **Risks and harms** Risks and harms of large language models include the generation of harmful, offensive or biased content. These models are often prone to generating incorrect information, sometimes referred to as hallucinations. We do not expect our model to be an exception in this regard. **Use cases** LLaMA is a foundational model, and as such, it should not be used for downstream applications without further investigation and mitigation of risks. These risks and potentially fraught use cases include, but are not limited to: generation of misinformation and generation of harmful, biased or offensive content.
Amit29/t5-small-finetuned-xsum
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
2023-04-11T09:58:47Z
--- tags: - unity-ml-agents - ml-agents - deep-reinforcement-learning - reinforcement-learning - ML-Agents-SoccerTwos library_name: ml-agents --- # **poca** Agent playing **SoccerTwos** This is a trained model of a **poca** agent playing **SoccerTwos** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents). ## Usage (with ML-Agents) The Documentation: https://github.com/huggingface/ml-agents#get-started We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub. ### Resume the training ``` mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume ``` ### Watch your Agent play You can watch your agent **playing directly in your browser**: 1. Go to https://huggingface.co/spaces/unity/ML-Agents-SoccerTwos 2. Write your model_id: gian-cr/poca-SoccerTwos 3. Select your *.nn or *.onnx file 4. Click on Watch the agent play 👀
AndrewMcDowell/wav2vec2-xls-r-300m-german-de
[ "pytorch", "wav2vec2", "automatic-speech-recognition", "de", "dataset:mozilla-foundation/common_voice_7_0", "transformers", "generated_from_trainer", "hf-asr-leaderboard", "mozilla-foundation/common_voice_7_0", "robust-speech-event", "license:apache-2.0", "model-index" ]
automatic-speech-recognition
{ "architectures": [ "Wav2Vec2ForCTC" ], "model_type": "wav2vec2", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
15
2023-04-11T10:41:28Z
--- license: mit tags: - pytorch - diffusers - unconditional-image-generation --- This model is a diffusion model for unconditional image generation of shoes, trained on a custom dataset at 128x128 resolution. ## Usage ```python from diffusers import DDPMPipeline pipeline = DDPMPipeline.from_pretrained('Apocalypse-19/shoe-generator') image = pipeline().images[0] image ```
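A small follow-up, assuming the standard diffusers API: passing a seeded generator makes the sample reproducible, and the image can be saved directly.

```python
import torch
from diffusers import DDPMPipeline

pipeline = DDPMPipeline.from_pretrained('Apocalypse-19/shoe-generator')
# A fixed seed makes the generated shoe image reproducible across runs.
image = pipeline(generator=torch.Generator().manual_seed(0)).images[0]
image.save("shoe.png")
```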
AnonymousSub/AR_rule_based_twostagetriplet_epochs_1_shard_1
[ "pytorch", "bert", "feature-extraction", "transformers" ]
feature-extraction
{ "architectures": [ "BertModel" ], "model_type": "bert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
5
null
An online game you really should not miss: LUCKY FUN 789 offers a wide range of games, including baccarat, sports betting, slots, fish shooting, favorite games, hot games, pok deng, hi-lo, the lottery, and even Bitcoin trading, which is currently a popular form of investment. If you are looking for an all-in-one investment site, you should not miss becoming a member; you will gain new experience and keep up with investing in an era where being slow means being left behind, and you will certainly not miss the chance to take profits, because these are games that can earn their players enormous amounts of money. And if you play them well, these gambling games can make you money; the most important thing, then, is that you have to play to find out whether you win or not, because if you never play, there is no way you will ever win money. <p>► <a href="https://luckyfun789.com/" rel="noopener nofollow">Online slot games</a></p> LUCKYFUN789 is an online slot game provider with everything in one website: an automated system, fast deposits and withdrawals, real payouts, and state-of-the-art, new-style online slot games, at LUCKYFUN789
AnonymousSub/AR_specter
[ "pytorch", "bert", "feature-extraction", "transformers" ]
feature-extraction
{ "architectures": [ "BertModel" ], "model_type": "bert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
2
null
--- license: mit tags: - generated_from_trainer model-index: - name: gpt-expt-sp-v3-K-600-MA-Mac-actions-kmeans-v4 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # gpt-expt-sp-v3-K-600-MA-Mac-actions-kmeans-v4 This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.0162 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0005 - train_batch_size: 64 - eval_batch_size: 64 - seed: 42 - gradient_accumulation_steps: 8 - total_train_batch_size: 512 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: cosine - lr_scheduler_warmup_steps: 1000 - num_epochs: 500 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:------:|:------:|:---------------:| | 0.1593 | 21.46 | 5000 | 0.0853 | | 0.0847 | 42.92 | 10000 | 0.1198 | | 0.0406 | 64.38 | 15000 | 0.0613 | | 0.0324 | 85.83 | 20000 | 0.0307 | | 0.0238 | 107.3 | 25000 | 0.0211 | | 0.0207 | 128.75 | 30000 | 0.0184 | | 0.0193 | 150.21 | 35000 | 0.0176 | | 0.0185 | 171.67 | 40000 | 0.0171 | | 0.018 | 193.13 | 45000 | 0.0170 | | 0.0177 | 214.59 | 50000 | 0.0167 | | 0.0174 | 236.05 | 55000 | 0.0167 | | 0.0172 | 257.51 | 60000 | 0.0166 | | 0.017 | 278.97 | 65000 | 0.0165 | | 0.0169 | 300.43 | 70000 | 0.0164 | | 0.0168 | 321.89 | 75000 | 0.0164 | | 0.0167 | 343.35 | 80000 | 0.0163 | | 0.0166 | 364.8 | 85000 | 0.0163 | | 0.0165 | 386.27 | 90000 | 0.0163 | | 0.0164 | 407.72 | 95000 | 0.0162 | | 0.0164 | 429.18 | 100000 | 0.0162 | | 0.0163 | 450.64 | 105000 | 0.0162 | | 0.0163 | 472.1 | 110000 | 0.0162 | | 0.0163 | 493.56 | 115000 | 0.0162 | ### Framework versions - Transformers 4.26.1 - Pytorch 1.13.1.post200 - Datasets 2.9.0 - Tokenizers 0.13.2
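The card omits a usage snippet; for a GPT-2 fine-tune like this, inference would typically look like the sketch below, where the repo id and prompt are placeholders (the card does not describe the expected input format):

```python
from transformers import pipeline

# Placeholder repo id and prompt; adjust to the model's actual action-sequence format.
generator = pipeline("text-generation", model="<user>/gpt-expt-sp-v3-K-600-MA-Mac-actions-kmeans-v4")
print(generator("example prompt", max_new_tokens=32)[0]["generated_text"])
```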
AnonymousSub/SR_rule_based_only_classfn_twostage_epochs_1_shard_1
[ "pytorch", "bert", "feature-extraction", "transformers" ]
feature-extraction
{ "architectures": [ "BertModel" ], "model_type": "bert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
2
null
--- language: - he tags: - language model --- ## AlephBertGimmel A modern Hebrew pre-trained BERT model with a 128K token vocabulary. [Checkpoint](https://github.com/Dicta-Israel-Center-for-Text-Analysis/alephbertgimmel/tree/main/alephbertgimmel-small/ckpt_29400--Max128Seq) of alephbertgimmel-small-128 from [alephbertgimmel](https://github.com/Dicta-Israel-Center-for-Text-Analysis/alephbertgimmel) ```python import torch from transformers import AutoModelForMaskedLM, AutoTokenizer model = AutoModelForMaskedLM.from_pretrained("imvladikon/alephbertgimmel-small-128") tokenizer = AutoTokenizer.from_pretrained("imvladikon/alephbertgimmel-small-128") text = "{} היא מטרופולין המהווה את מרכז הכלכלה" input = tokenizer.encode(text.format("[MASK]"), return_tensors="pt") mask_token_index = torch.where(input == tokenizer.mask_token_id)[1] token_logits = model(input).logits mask_token_logits = token_logits[0, mask_token_index, :] top_5_tokens = torch.topk(mask_token_logits, 5, dim=1).indices[0].tolist() for token in top_5_tokens: print(text.format(tokenizer.decode([token]))) # ישראל היא מטרופולין המהווה את מרכז הכלכלה # ירושלים היא מטרופולין המהווה את מרכז הכלכלה # חיפה היא מטרופולין המהווה את מרכז הכלכלה # אילת היא מטרופולין המהווה את מרכז הכלכלה # אשדוד היא מטרופולין המהווה את מרכז הכלכלה ``` ```python def ppl_naive(text, model, tokenizer): input = tokenizer.encode(text, return_tensors="pt") loss = model(input, labels=input)[0] return torch.exp(loss).item() text = """{} היא עיר הבירה של מדינת ישראל, והעיר הגדולה ביותר בישראל בגודל האוכלוסייה""" for word in ["חיפה", "ירושלים", "תל אביב"]: print(ppl_naive(text.format(word), model, tokenizer)) # 9.825098991394043 # 10.594215393066406 # 9.536449432373047 # I'd expect "ירושלים" to have the smallest value, but... @torch.inference_mode() def ppl_pseudo(text, model, tokenizer, ignore_idx=-100): input = tokenizer.encode(text, return_tensors='pt') mask = torch.ones(input.size(-1) - 1).diag(1)[:-2] repeat_input = input.repeat(input.size(-1) - 2, 1) input = repeat_input.masked_fill(mask == 1, tokenizer.mask_token_id) labels = repeat_input.masked_fill(input != tokenizer.mask_token_id, ignore_idx) loss = model(input, labels=labels)[0] return torch.exp(loss).item() for word in ["חיפה", "ירושלים", "תל אביב"]: print(ppl_pseudo(text.format(word), model, tokenizer)) # 4.346900939941406 # 3.292382001876831 # 2.732590913772583 ``` When using AlephBertGimmel, please reference: ```bibtex @misc{guetta2022large, title={Large Pre-Trained Models with Extra-Large Vocabularies: A Contrastive Analysis of Hebrew BERT Models and a New One to Outperform Them All}, author={Eylon Guetta and Avi Shmidman and Shaltiel Shmidman and Cheyn Shmuel Shmidman and Joshua Guedalia and Moshe Koppel and Dan Bareket and Amit Seker and Reut Tsarfaty}, year={2022}, eprint={2211.15199}, archivePrefix={arXiv}, primaryClass={cs.CL} } ```
AnonymousSub/SR_rule_based_roberta_bert_quadruplet_epochs_1_shard_1
[ "pytorch", "roberta", "feature-extraction", "transformers" ]
feature-extraction
{ "architectures": [ "RobertaModel" ], "model_type": "roberta", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
2
null
--- tags: - FrozenLake-v1-8x8 - q-learning - reinforcement-learning - custom-implementation model-index: - name: q-FrozenLake-8x8-Slippery results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: FrozenLake-v1-8x8 type: FrozenLake-v1-8x8 metrics: - type: mean_reward value: 0.48 +/- 0.50 name: mean_reward verified: false --- # **Q-Learning** Agent playing **FrozenLake-v1** This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1** created by Apocalypse-19. ## Usage ```python model = load_from_hub(repo_id="Apocalypse-19/q-FrozenLake-8x8-Slippery", filename="q-learning.pkl") # Don't forget to check if you need to add additional attributes (is_slippery=False etc) env = gym.make(model["env_id"]) ```
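The snippet above calls `load_from_hub` without defining it. A minimal version, assuming the Q-table was uploaded as a pickle file, might look like this:

```python
import pickle
from huggingface_hub import hf_hub_download

def load_from_hub(repo_id: str, filename: str) -> dict:
    """Download and unpickle a Q-learning model dict from the Hugging Face Hub."""
    path = hf_hub_download(repo_id=repo_id, filename=filename)
    with open(path, "rb") as f:
        return pickle.load(f)
```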
AnonymousSub/SR_rule_based_roberta_hier_triplet_epochs_1_shard_1
[ "pytorch", "roberta", "feature-extraction", "transformers" ]
feature-extraction
{ "architectures": [ "RobertaModel" ], "model_type": "roberta", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
1
null
--- language: ja license: cc-by-sa-4.0 library_name: transformers tags: - bert - fill-mask datasets: - wikipedia mask_token: "[MASK]" widget: - text: "京都 大学 で [MASK] を 専攻 する 。" - text: "東京 は 日本 の [MASK] だ 。" - text: "カフェ で [MASK] を 注文 する 。" --- # ku-accms/bert-base-japanese-ssuw ## Model description This is a pre-trained Japanese BERT base model for super short unit words (SSUW). ## Pre-processing The input text should be converted to full-width (zenkaku) characters and segmented into super short unit words in advance (e.g., by KyTea). ## How to use You can use this model directly with a pipeline for masked language modeling: ```python >>> from transformers import pipeline >>> unmasker = pipeline('fill-mask', model='ku-accms/bert-base-japanese-ssuw') >>> unmasker("京都 大学 で [MASK] を 専攻 する 。") [{'sequence': '京都 大学 で 文学 を 専攻 する 。', 'score': 0.1464807540178299, 'token': 14603, 'token_str': '文学'}, {'sequence': '京都 大学 で 哲学 を 専攻 する 。', 'score': 0.08064978569746017, 'token': 15917, 'token_str': '哲学'}, {'sequence': '京都 大学 で 演劇 を 専攻 する 。', 'score': 0.0800977498292923, 'token': 16772, 'token_str': '演劇'}, {'sequence': '京都 大学 で 法学 を 専攻 する 。', 'score': 0.04579947143793106, 'token': 16255, 'token_str': '法学'}, {'sequence': '京都 大学 で 英語 を 専攻 する 。', 'score': 0.045536939054727554, 'token': 14592, 'token_str': '英語'}] ``` Here is how to use this model to get the features of a given text in PyTorch: ```python import zenhan import Mykytea kytea_model_path = "somewhere" kytea = Mykytea.Mykytea("-model {} -notags".format(kytea_model_path)) def preprocess(text): return " ".join(kytea.getWS(zenhan.h2z(text))) from transformers import BertTokenizer, BertModel tokenizer = BertTokenizer.from_pretrained('ku-accms/bert-base-japanese-ssuw') model = BertModel.from_pretrained("ku-accms/bert-base-japanese-ssuw") text = "京都大学で自然言語処理を専攻する。" encoded_input = tokenizer(preprocess(text), return_tensors='pt') output = model(**encoded_input) ``` ## Training data We used a Japanese Wikipedia dump (as of 20230101, 3.3GB). ## Training procedure We first segmented the texts into words by KyTea and then tokenized the words into subwords using WordPiece with a vocabulary size of 32,000. We pre-trained the BERT model using the [transformers](https://github.com/huggingface/transformers) library. The training took about 8 days using 4 NVIDIA A100-SXM4-80GB GPUs. The following hyperparameters were used for the pre-training. - learning_rate: 2e-4 - weight decay: 1e-2 - per_device_train_batch_size: 80 - num_devices: 4 - gradient_accumulation_steps: 3 - total_train_batch_size: 960 - max_seq_length: 512 - optimizer: AdamW with betas=(0.9,0.999) and epsilon=1e-06 - lr_scheduler_type: linear schedule with warmup - training_steps: 500,000 - warmup_steps: 10,000
AnonymousSub/SR_rule_based_roberta_only_classfn_twostage_epochs_1_shard_1
[ "pytorch", "roberta", "feature-extraction", "transformers" ]
feature-extraction
{ "architectures": [ "RobertaModel" ], "model_type": "roberta", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
8
null
--- language: ja license: cc-by-sa-4.0 library_name: transformers tags: - roberta - fill-mask datasets: - wikipedia - cc100 mask_token: "[MASK]" widget: - text: "京都 大学 で [MASK] を 専攻 する 。" - text: "東京 は 日本 の [MASK] だ 。" - text: "カフェ で [MASK] を 注文 する 。" - text: "[MASK] 名人 が タイトル の 防衛 に 成功 する 。" --- # ku-accms/roberta-base-japanese-ssuw ## Model description This is a pre-trained Japanese RoBERTa base model for super short unit words (SSUW). ## Pre-processing The input text should be converted to full-width (zenkaku) characters and segmented into super short unit words in advance (e.g., by KyTea). ## How to use You can use this model directly with a pipeline for masked language modeling: ```python >>> from transformers import pipeline >>> unmasker = pipeline('fill-mask', model='ku-accms/roberta-base-japanese-ssuw') >>> unmasker("京都 大学 で [MASK] を 専攻 する 。") [{'sequence': '京都 大学 で 文学 を 専攻 する 。', 'score': 0.1479644924402237, 'token': 17907, 'token_str': '文学'}, {'sequence': '京都 大学 で 哲学 を 専攻 する 。', 'score': 0.07658644765615463, 'token': 19302, 'token_str': '哲学'}, {'sequence': '京都 大学 で デザイン を 専攻 する 。', 'score': 0.06302948296070099, 'token': 14411, 'token_str': 'デザイン'}, {'sequence': '京都 大学 で 建築 を 専攻 する 。', 'score': 0.060596249997615814, 'token': 15478, 'token_str': '建築'}, {'sequence': '京都 大学 で 工学 を 専攻 する 。', 'score': 0.0574776753783226, 'token': 18632, 'token_str': '工学'}] ``` Here is how to use this model to get the features of a given text in PyTorch: ```python import zenhan import Mykytea kytea_model_path = "somewhere" kytea = Mykytea.Mykytea("-model {} -notags".format(kytea_model_path)) def preprocess(text): return " ".join(kytea.getWS(zenhan.h2z(text))) from transformers import BertTokenizer, RobertaModel tokenizer = BertTokenizer.from_pretrained('ku-accms/roberta-base-japanese-ssuw') model = RobertaModel.from_pretrained("ku-accms/roberta-base-japanese-ssuw") text = "京都大学で自然言語処理を専攻する。" encoded_input = tokenizer(preprocess(text), return_tensors='pt') output = model(**encoded_input) ``` ## Training data We used a Japanese Wikipedia dump (as of 20230101, 3.3GB) and a Japanese portion of CC100 (70GB). ## Training procedure We first segmented the texts into words by KyTea and then tokenized the words into subwords using WordPiece with a vocabulary size of 32,000. We pre-trained the RoBERTa model using the [transformers](https://github.com/huggingface/transformers) library. The training took about 7 days using 4 NVIDIA A100-SXM4-80GB GPUs. The following hyperparameters were used for the pre-training. - learning_rate: 1e-4 - weight decay: 1e-2 - per_device_train_batch_size: 80 - num_devices: 4 - gradient_accumulation_steps: 3 - total_train_batch_size: 960 - max_seq_length: 512 - optimizer: AdamW with betas=(0.9,0.999) and epsilon=1e-06 - lr_scheduler_type: linear schedule with warmup - training_steps: 500,000 - warmup_steps: 10,000
AnonymousSub/SR_rule_based_roberta_only_classfn_twostage_epochs_1_shard_10
[ "pytorch", "roberta", "feature-extraction", "transformers" ]
feature-extraction
{ "architectures": [ "RobertaModel" ], "model_type": "roberta", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
4
null
--- license: creativeml-openrail-m tags: - text-to-image - stable-diffusion --- ### Ninetail Dreambooth model trained by blazers with [TheLastBen's fast-DreamBooth](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast-DreamBooth.ipynb) notebook Test the concept via A1111 Colab [fast-Colab-A1111](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast_stable_diffusion_AUTOMATIC1111.ipynb) Sample pictures of this concept:
AnonymousSub/SR_rule_based_twostage_quadruplet_epochs_1_shard_1
[ "pytorch", "bert", "feature-extraction", "transformers" ]
feature-extraction
{ "architectures": [ "BertModel" ], "model_type": "bert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
3
null
--- tags: - FrozenLake-v1-8x8 - q-learning - reinforcement-learning - custom-implementation model-index: - name: q-FrozenLake-8x8-Slippery-v2 results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: FrozenLake-v1-8x8 type: FrozenLake-v1-8x8 metrics: - type: mean_reward value: 0.57 +/- 0.50 name: mean_reward verified: false --- # **Q-Learning** Agent playing **FrozenLake-v1** This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1** created by Apocalypse-19. ## Usage ```python model = load_from_hub(repo_id="Apocalypse-19/q-FrozenLake-8x8-Slippery-v2", filename="q-learning.pkl") # Don't forget to check if you need to add additional attributes (is_slippery=False etc) env = gym.make(model["env_id"]) ```
AnonymousSub/SR_rule_based_twostagequadruplet_hier_epochs_1_shard_1
[ "pytorch", "bert", "feature-extraction", "transformers" ]
feature-extraction
{ "architectures": [ "BertModel" ], "model_type": "bert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
2
null
--- library_name: stable-baselines3 tags: - LunarLander-v2 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: PPO results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: LunarLander-v2 type: LunarLander-v2 metrics: - type: mean_reward value: 208.69 +/- 71.29 name: mean_reward verified: false --- # **PPO** Agent playing **LunarLander-v2** This is a trained model of a **PPO** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3). ## Usage (with Stable-baselines3) TODO: Add your code ```python from stable_baselines3 import ... from huggingface_sb3 import load_from_hub ... ```
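Until the TODO above is filled in, here is a minimal loading sketch using the `huggingface_sb3` helper; the repo id and filename are placeholders, since the card does not state where this checkpoint lives.

```python
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO
from stable_baselines3.common.env_util import make_vec_env
from stable_baselines3.common.evaluation import evaluate_policy

# Placeholder repo id / filename -- substitute the actual values for this model.
checkpoint = load_from_hub(repo_id="author/ppo-LunarLander-v2", filename="ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)

# Evaluate the loaded policy over a few episodes.
env = make_vec_env("LunarLander-v2", n_envs=1)
mean_reward, std_reward = evaluate_policy(model, env, n_eval_episodes=10)
print(f"mean_reward={mean_reward:.2f} +/- {std_reward:.2f}")
```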
AnonymousSub/SR_rule_based_twostagetriplet_epochs_1_shard_1
[ "pytorch", "bert", "feature-extraction", "transformers" ]
feature-extraction
{ "architectures": [ "BertModel" ], "model_type": "bert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
2
null
--- tags: - unity-ml-agents - ml-agents - deep-reinforcement-learning - reinforcement-learning - ML-Agents-SoccerTwos library_name: ml-agents --- # **poca** Agent playing **SoccerTwos** This is a trained model of a **poca** agent playing **SoccerTwos** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents). ## Usage (with ML-Agents) The Documentation: https://github.com/huggingface/ml-agents#get-started We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub: ### Resume the training ``` mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume ``` ### Watch your Agent play You can watch your agent **playing directly in your browser:**. 1. Go to https://huggingface.co/spaces/unity/ML-Agents-SoccerTwos 2. Step 1: Write your model_id: chenoi/deepRL7 3. Step 2: Select your *.nn /*.onnx file 4. Click on Watch the agent play 👀
AnonymousSub/bert_mean_diff_epochs_1_shard_10
[ "pytorch", "bert", "feature-extraction", "transformers" ]
feature-extraction
{ "architectures": [ "BertModel" ], "model_type": "bert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
4
null
Access to model PrathameshPawar/bart_raw is restricted and you are not in the authorized list. Visit https://huggingface.co/PrathameshPawar/bart_raw to ask for access.
AnonymousSub/cline-emanuals-s10-AR
[ "pytorch", "roberta", "text-classification", "transformers" ]
text-classification
{ "architectures": [ "RobertaForSequenceClassification" ], "model_type": "roberta", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
27
null
--- license: apache-2.0 tags: - generated_from_trainer datasets: - wnut_17 metrics: - precision - recall - f1 - accuracy model-index: - name: ner results: - task: name: Token Classification type: token-classification dataset: name: wnut_17 type: wnut_17 config: wnut_17 split: test args: wnut_17 metrics: - name: Precision type: precision value: 0.5552523874488404 - name: Recall type: recall value: 0.37720111214087115 - name: F1 type: f1 value: 0.44922737306843263 - name: Accuracy type: accuracy value: 0.9469454063528707 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # ner This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the wnut_17 dataset. It achieves the following results on the evaluation set: - Loss: 0.2942 - Precision: 0.5553 - Recall: 0.3772 - F1: 0.4492 - Accuracy: 0.9469 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:| | No log | 1.0 | 213 | 0.2666 | 0.6024 | 0.2808 | 0.3831 | 0.9405 | | No log | 2.0 | 426 | 0.2605 | 0.5708 | 0.3364 | 0.4233 | 0.9456 | | 0.1299 | 3.0 | 639 | 0.2827 | 0.5658 | 0.3346 | 0.4205 | 0.9452 | | 0.1299 | 4.0 | 852 | 0.2836 | 0.5503 | 0.3753 | 0.4463 | 0.9469 | | 0.051 | 5.0 | 1065 | 0.2942 | 0.5553 | 0.3772 | 0.4492 | 0.9469 | ### Framework versions - Transformers 4.27.4 - Pytorch 2.0.0+cu118 - Datasets 2.11.0 - Tokenizers 0.13.3
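Since the card omits a usage section, a hedged inference sketch may help: the hub id below is a placeholder (the card only gives the local run name `ner`), and inference goes through the standard token-classification pipeline.

```python
from transformers import pipeline

# "your-username/ner" is a placeholder for the actual hub id of this checkpoint.
ner = pipeline("token-classification", model="your-username/ner", aggregation_strategy="simple")
print(ner("WNUT targets emerging entities, like new streamers or pop-up venues in London."))
```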
AnonymousSub/cline-s10-SR
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
null
--- tags: - CartPole-v1 - reinforce - reinforcement-learning - custom-implementation - deep-rl-class model-index: - name: Reinforce-CartPole-v1 results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: CartPole-v1 type: CartPole-v1 metrics: - type: mean_reward value: 500.00 +/- 0.00 name: mean_reward verified: false --- # **Reinforce** Agent playing **CartPole-v1** This is a trained model of a **Reinforce** agent playing **CartPole-v1** . To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
AnonymousSub/cline-techqa
[ "pytorch", "roberta", "question-answering", "transformers", "autotrain_compatible" ]
question-answering
{ "architectures": [ "RobertaForQuestionAnswering" ], "model_type": "roberta", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
6
null
--- tags: - generated_from_trainer datasets: - samsum model-index: - name: pegasus-samsum results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # pegasus-samsum This model is a fine-tuned version of [google/pegasus-cnn_dailymail](https://huggingface.co/google/pegasus-cnn_dailymail) on the samsum dataset. It achieves the following results on the evaluation set: - Loss: 1.4859 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 1 - eval_batch_size: 1 - seed: 42 - gradient_accumulation_steps: 16 - total_train_batch_size: 16 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - num_epochs: 1 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 1.7003 | 0.54 | 500 | 1.4859 | ### Framework versions - Transformers 4.11.3 - Pytorch 1.13.0 - Datasets 2.0.0 - Tokenizers 0.10.3
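A hedged inference sketch for this dialogue summarizer; the hub id is a placeholder because the card does not state one. PEGASUS fine-tuned on SAMSum takes the dialogue as plain text.

```python
from transformers import pipeline

# Placeholder hub id -- substitute the actual location of this checkpoint.
summarizer = pipeline("summarization", model="your-username/pegasus-samsum")
dialogue = (
    "Amanda: I baked cookies. Do you want some?\n"
    "Jerry: Sure!\n"
    "Amanda: I'll bring you some tomorrow :-)"
)
print(summarizer(dialogue, max_length=60)[0]["summary_text"])
```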
AnonymousSub/cline_squad2.0
[ "pytorch", "roberta", "question-answering", "transformers", "autotrain_compatible" ]
question-answering
{ "architectures": [ "RobertaForQuestionAnswering" ], "model_type": "roberta", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
8
null
--- tags: - LunarLander-v2 - ppo - deep-reinforcement-learning - reinforcement-learning - custom-implementation - deep-rl-course model-index: - name: PPO results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: LunarLander-v2 type: LunarLander-v2 metrics: - type: mean_reward value: -135.57 +/- 57.14 name: mean_reward verified: false --- # PPO Agent Playing LunarLander-v2 This is a trained model of a PPO agent playing LunarLander-v2. # Hyperparameters ```python {'exp_name': 'ppo', 'seed': 1, 'torch_deterministic': True, 'cuda': True, 'track': False, 'wandb_project_name': 'cleanRL', 'wandb_entity': None, 'capture_video': False, 'env_id': 'LunarLander-v2', 'total_timesteps': 100000, 'learning_rate': 0.00025, 'num_envs': 4, 'num_steps': 128, 'anneal_lr': True, 'gae': True, 'gamma': 0.99, 'gae_lambda': 0.95, 'num_minibatches': 4, 'update_epochs': 4, 'norm_adv': True, 'clip_coef': 0.2, 'clip_vloss': True, 'ent_coef': 0.01, 'vf_coef': 0.5, 'max_grad_norm': 0.5, 'target_kl': None, 'repo_id': 'chenoi/deepRL8-1', 'batch_size': 512, 'minibatch_size': 128} ```
AnonymousSub/consert-s10-AR
[ "pytorch", "bert", "text-classification", "transformers" ]
text-classification
{ "architectures": [ "BertForSequenceClassification" ], "model_type": "bert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
31
null
--- library_name: ml-agents tags: - SnowballTarget - deep-reinforcement-learning - reinforcement-learning - ML-Agents-SnowballTarget --- # **ppo** Agent playing **SnowballTarget** This is a trained model of a **ppo** agent playing **SnowballTarget** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents). ## Usage (with ML-Agents) The Documentation: https://github.com/huggingface/ml-agents#get-started We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub: ### Resume the training ``` mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume ``` ### Watch your Agent play You can watch your agent **playing directly in your browser:**. 1. Go to https://huggingface.co/spaces/unity/ML-Agents-SnowballTarget 2. Step 1: Find your model_id: infatum/ppo-SnowballTarget 3. Step 2: Select your *.nn /*.onnx file 4. Click on Watch the agent play 👀
AnonymousSub/consert-s10-SR
[ "pytorch", "bert", "text-classification", "transformers" ]
text-classification
{ "architectures": [ "BertForSequenceClassification" ], "model_type": "bert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
28
null
--- license: mit tags: - generated_from_trainer metrics: - rouge model-index: - name: bart-stats-extract results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bart-stats-extract This model is a fine-tuned version of [facebook/bart-large-cnn](https://huggingface.co/facebook/bart-large-cnn) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.3450 - Rouge1: 62.188 - Rouge2: 51.5988 - Rougel: 55.8383 - Rougelsum: 58.4919 - Gen Len: 90.4286 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 2 - eval_batch_size: 2 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 4 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len | |:-------------:|:-----:|:----:|:---------------:|:-------:|:-------:|:-------:|:---------:|:-------:| | No log | 1.0 | 4 | 1.0447 | 51.2166 | 37.2933 | 44.8635 | 47.5954 | 74.0 | | No log | 2.0 | 8 | 0.5919 | 55.0964 | 43.0158 | 49.4166 | 51.4412 | 92.2857 | | No log | 3.0 | 12 | 0.4159 | 60.2619 | 48.694 | 54.0969 | 54.9467 | 95.1429 | | No log | 4.0 | 16 | 0.3450 | 62.188 | 51.5988 | 55.8383 | 58.4919 | 90.4286 | ### Framework versions - Transformers 4.27.4 - Pytorch 2.0.0+cu118 - Datasets 2.11.0 - Tokenizers 0.13.3
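The card reports ROUGE but no usage; a hedged sketch with the lower-level generate API follows. The hub id and the sample input are placeholders — the card documents neither.

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

repo = "your-username/bart-stats-extract"  # placeholder hub id
tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForSeq2SeqLM.from_pretrained(repo)

# Placeholder input; the fine-tune appears to extract statistics from prose.
text = "Quarterly revenue rose 12% to $4.2M while churn fell from 5.1% to 3.8%."
inputs = tokenizer(text, return_tensors="pt", truncation=True)
ids = model.generate(**inputs, num_beams=4, max_length=128)
print(tokenizer.decode(ids[0], skip_special_tokens=True))
```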
AnonymousSub/declutr-emanuals-s10-SR
[ "pytorch", "roberta", "text-classification", "transformers" ]
text-classification
{ "architectures": [ "RobertaForSequenceClassification" ], "model_type": "roberta", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
28
null
--- license: apache-2.0 tags: - generated_from_trainer metrics: - accuracy - f1 - precision - recall model-index: - name: bert_ai results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bert_ai This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.0761 - Accuracy: 0.9913 - F1: 0.9913 - Precision: 0.9833 - Recall: 0.9995 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Precision | Recall | |:-------------:|:-----:|:-----:|:---------------:|:--------:|:------:|:---------:|:------:| | 0.0358 | 1.0 | 6059 | 0.0390 | 0.9923 | 0.9923 | 0.9859 | 0.9989 | | 0.0187 | 2.0 | 12118 | 0.0738 | 0.9884 | 0.9884 | 0.9779 | 0.9993 | | 0.0056 | 3.0 | 18177 | 0.0761 | 0.9913 | 0.9913 | 0.9833 | 0.9995 | ### Framework versions - Transformers 4.27.4 - Pytorch 2.0.0+cu118 - Datasets 2.11.0 - Tokenizers 0.13.3
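A hedged classification sketch; the hub id is a placeholder, and the label names are whatever this fine-tune was trained with (the card does not list them).

```python
from transformers import pipeline

# Placeholder hub id for this fine-tuned classifier.
clf = pipeline("text-classification", model="your-username/bert_ai")
print(clf("This paragraph may or may not have been produced by a language model."))
```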
AnonymousSub/declutr-model-emanuals
[ "pytorch", "roberta", "fill-mask", "transformers", "autotrain_compatible" ]
fill-mask
{ "architectures": [ "RobertaForMaskedLM" ], "model_type": "roberta", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
4
2023-04-11T15:01:48Z
--- library_name: stable-baselines3 tags: - LunarLander-v2 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: PPO results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: LunarLander-v2 type: LunarLander-v2 metrics: - type: mean_reward value: 276.03 +/- 19.66 name: mean_reward verified: false --- # **PPO** Agent playing **LunarLander-v2** This is a trained model of a **PPO** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3). ## Usage (with Stable-baselines3) TODO: Add your code ```python from stable_baselines3 import ... from huggingface_sb3 import load_from_hub ... ```
AnonymousSub/declutr-model
[ "pytorch", "roberta", "fill-mask", "transformers", "autotrain_compatible" ]
fill-mask
{ "architectures": [ "RobertaForMaskedLM" ], "model_type": "roberta", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
4
2023-04-11T15:02:33Z
--- tags: - unity-ml-agents - ml-agents - deep-reinforcement-learning - reinforcement-learning - ML-Agents-SoccerTwos library_name: ml-agents --- # **poca** Agent playing **SoccerTwos** This is a trained model of a **poca** agent playing **SoccerTwos** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents). ## Usage (with ML-Agents) The Documentation: https://github.com/huggingface/ml-agents#get-started We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub: ### Resume the training ``` mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume ``` ### Watch your Agent play You can watch your agent **playing directly in your browser:**. 1. Go to https://huggingface.co/spaces/unity/ML-Agents-SoccerTwos 2. Step 1: Write your model_id: justinsiow/poca-SoccerTwos 3. Step 2: Select your *.nn /*.onnx file 4. Click on Watch the agent play 👀
AnonymousSub/declutr-roberta-papers
[ "pytorch", "roberta", "fill-mask", "transformers", "autotrain_compatible" ]
fill-mask
{ "architectures": [ "RobertaForMaskedLM" ], "model_type": "roberta", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
4
null
--- library_name: ml-agents tags: - Huggy - deep-reinforcement-learning - reinforcement-learning - ML-Agents-Huggy --- # **ppo** Agent playing **Huggy** This is a trained model of a **ppo** agent playing **Huggy** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents). ## Usage (with ML-Agents) The Documentation: https://github.com/huggingface/ml-agents#get-started We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub: ### Resume the training ``` mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume ``` ### Watch your Agent play You can watch your agent **playing directly in your browser:**. 1. Go to https://huggingface.co/spaces/unity/ML-Agents-Huggy 2. Step 1: Find your model_id: ongknsro/dogdog02-Huggy 3. Step 2: Select your *.nn /*.onnx file 4. Click on Watch the agent play 👀
AnonymousSub/declutr-s10-AR
[ "pytorch", "roberta", "text-classification", "transformers" ]
text-classification
{ "architectures": [ "RobertaForSequenceClassification" ], "model_type": "roberta", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
26
null
--- license: apache-2.0 tags: - generated_from_trainer metrics: - accuracy - f1 - precision - recall model-index: - name: bert_human results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bert_human This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.0451 - Accuracy: 0.9930 - F1: 0.9930 - Precision: 0.9923 - Recall: 0.9921 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Precision | Recall | |:-------------:|:-----:|:-----:|:---------------:|:--------:|:------:|:---------:|:------:| | 0.062 | 1.0 | 5488 | 0.0409 | 0.9914 | 0.9914 | 0.9924 | 0.9885 | | 0.0279 | 2.0 | 10976 | 0.0414 | 0.9925 | 0.9925 | 0.9923 | 0.9909 | | 0.008 | 3.0 | 16464 | 0.0451 | 0.9930 | 0.9930 | 0.9923 | 0.9921 | ### Framework versions - Transformers 4.27.4 - Pytorch 2.0.0+cu118 - Datasets 2.11.0 - Tokenizers 0.13.3
AnonymousSub/dummy_1
[ "pytorch", "bert", "text-classification", "transformers" ]
text-classification
{ "architectures": [ "BertForSequenceClassification" ], "model_type": "bert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
33
null
--- license: cc-by-nc-sa-4.0 tags: - generated_from_trainer model-index: - name: layoutlmv2-base-uncased_finetuned_docvqa results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # layoutlmv2-base-uncased_finetuned_docvqa This model is a fine-tuned version of [microsoft/layoutlmv2-base-uncased](https://huggingface.co/microsoft/layoutlmv2-base-uncased) on the None dataset. It achieves the following results on the evaluation set: - Loss: 4.3167 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 4 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 0.5 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 5.4213 | 0.22 | 50 | 4.6420 | | 4.513 | 0.44 | 100 | 4.3167 | ### Framework versions - Transformers 4.27.4 - Pytorch 2.0.0+cu118 - Datasets 2.11.0 - Tokenizers 0.13.3
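A hedged inference sketch using the document-question-answering pipeline, which LayoutLMv2 checkpoints support; it needs Pillow and pytesseract installed for OCR, and the hub id and image path below are placeholders.

```python
from transformers import pipeline

# Placeholder hub id and image path -- substitute the actual values.
qa = pipeline(
    "document-question-answering",
    model="your-username/layoutlmv2-base-uncased_finetuned_docvqa",
)
print(qa(image="invoice.png", question="What is the invoice number?"))
```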
AnonymousSub/hier_triplet_epochs_1_shard_10
[ "pytorch", "bert", "feature-extraction", "transformers" ]
feature-extraction
{ "architectures": [ "BertModel" ], "model_type": "bert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
8
null
--- tags: - FrozenLake-v1-4x4-no_slippery - q-learning - reinforcement-learning - custom-implementation model-index: - name: q-FrozenLake-v1-4x4-noSlippery results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: FrozenLake-v1-4x4-no_slippery type: FrozenLake-v1-4x4-no_slippery metrics: - type: mean_reward value: 1.00 +/- 0.00 name: mean_reward verified: false --- # **Q-Learning** Agent playing **FrozenLake-v1** This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**. ## Usage ```python model = load_from_hub(repo_id="Luksal/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl") # Don't forget to check if you need to add additional attributes (is_slippery=False etc) env = gym.make(model["env_id"]) ```
AnonymousSub/roberta-base_wikiqa
[ "pytorch", "roberta", "text-classification", "transformers" ]
text-classification
{ "architectures": [ "RobertaForSequenceClassification" ], "model_type": "roberta", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
25
null
--- license: apache-2.0 tags: - generated_from_trainer metrics: - bleu model-index: - name: spanish-spellchecker-t5-base-wikitest1000 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # spanish-spellchecker-t5-base-wikitest1000 This model is a fine-tuned version of [t5-base](https://huggingface.co/t5-base) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.2943 - Bleu: 0.0 - Gen Len: 19.0 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - distributed_type: multi-GPU - num_devices: 2 - total_train_batch_size: 64 - total_eval_batch_size: 64 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 10 ### Training results | Training Loss | Epoch | Step | Validation Loss | Bleu | Gen Len | |:-------------:|:-----:|:----:|:---------------:|:----:|:-------:| | No log | 1.0 | 52 | 0.4171 | 0.0 | 19.0 | | No log | 2.0 | 104 | 0.3522 | 0.0 | 19.0 | | No log | 3.0 | 156 | 0.3275 | 0.0 | 19.0 | | No log | 4.0 | 208 | 0.3160 | 0.0 | 19.0 | | No log | 5.0 | 260 | 0.3064 | 0.0 | 19.0 | | No log | 6.0 | 312 | 0.3012 | 0.0 | 19.0 | | No log | 7.0 | 364 | 0.2987 | 0.0 | 19.0 | | No log | 8.0 | 416 | 0.2959 | 0.0 | 19.0 | | No log | 9.0 | 468 | 0.2947 | 0.0 | 19.0 | | 0.5085 | 10.0 | 520 | 0.2943 | 0.0 | 19.0 | ### Framework versions - Transformers 4.26.1 - Pytorch 1.13.1+cu117 - Datasets 2.10.1 - Tokenizers 0.13.2
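A hedged inference sketch; the hub id is a placeholder, and since the card does not document the expected input format (e.g., whether a task prefix is required), plain text is assumed.

```python
from transformers import pipeline

# Placeholder hub id; the plain-text input format is an assumption.
corrector = pipeline("text2text-generation", model="your-username/spanish-spellchecker-t5-base-wikitest1000")
print(corrector("una fraze con herrores de hortografia")[0]["generated_text"])
```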
AnonymousSub/rule_based_bert_quadruplet_epochs_1_shard_1_wikiqa
[ "pytorch", "bert", "text-classification", "transformers" ]
text-classification
{ "architectures": [ "BertForSequenceClassification" ], "model_type": "bert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
33
2023-04-11T15:31:19Z
--- tags: - Taxi-v3 - q-learning - reinforcement-learning - custom-implementation model-index: - name: q-Taxi-v3 results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: Taxi-v3 type: Taxi-v3 metrics: - type: mean_reward value: 7.56 +/- 2.71 name: mean_reward verified: false --- # **Q-Learning** Agent playing **Taxi-v3** This is a trained model of a **Q-Learning** agent playing **Taxi-v3**. ## Usage ```python model = load_from_hub(repo_id="Luksal/q-Taxi-v3", filename="q-learning.pkl") # Don't forget to check if you need to add additional attributes (is_slippery=False etc) env = gym.make(model["env_id"]) ```
AnonymousSub/rule_based_bert_triplet_epochs_1_shard_10
[ "pytorch", "bert", "feature-extraction", "transformers" ]
feature-extraction
{ "architectures": [ "BertModel" ], "model_type": "bert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
4
null
--- library_name: sample-factory tags: - deep-reinforcement-learning - reinforcement-learning - sample-factory model-index: - name: APPO results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: doom_health_gathering_supreme type: doom_health_gathering_supreme metrics: - type: mean_reward value: 19.76 +/- 2.52 name: mean_reward verified: false --- A(n) **APPO** model trained on the **doom_health_gathering_supreme** environment. This model was trained using Sample-Factory 2.0: https://github.com/alex-petrenko/sample-factory. Documentation for how to use Sample-Factory can be found at https://www.samplefactory.dev/ ## Downloading the model After installing Sample-Factory, download the model with: ``` python -m sample_factory.huggingface.load_from_hub -r chenoi/deepRL8-2-last ``` ## Using the model To run the model after download, use the `enjoy` script corresponding to this environment: ``` python -m sf_examples.vizdoom.enjoy_vizdoom --algo=APPO --env=doom_health_gathering_supreme --train_dir=./train_dir --experiment=deepRL8-2-last ``` You can also upload models to the Hugging Face Hub using the same script with the `--push_to_hub` flag. See https://www.samplefactory.dev/10-huggingface/huggingface/ for more details ## Training with this model To continue training with this model, use the `train` script corresponding to this environment: ``` python -m sf_examples.vizdoom.train_vizdoom --algo=APPO --env=doom_health_gathering_supreme --train_dir=./train_dir --experiment=deepRL8-2-last --restart_behavior=resume --train_for_env_steps=10000000000 ``` Note, you may have to adjust `--train_for_env_steps` to a suitably high number as the experiment will resume at the number of steps it concluded at.
Ashagi/Ashvx
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
2023-04-11T20:05:34Z
--- pipeline_tag: sentence-similarity tags: - sentence-transformers - feature-extraction - sentence-similarity - transformers --- # rithwik-db/multiplenegatives-e5-base-unsupervised-500-34358c This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search. <!--- Describe your model here --> ## Usage (Sentence-Transformers) Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed: ``` pip install -U sentence-transformers ``` Then you can use the model like this: ```python from sentence_transformers import SentenceTransformer sentences = ["This is an example sentence", "Each sentence is converted"] model = SentenceTransformer('rithwik-db/multiplenegatives-e5-base-unsupervised-500-34358c') embeddings = model.encode(sentences) print(embeddings) ``` ## Usage (HuggingFace Transformers) Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings. ```python from transformers import AutoTokenizer, AutoModel import torch #Mean Pooling - Take attention mask into account for correct averaging def mean_pooling(model_output, attention_mask): token_embeddings = model_output[0] #First element of model_output contains all token embeddings input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float() return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9) # Sentences we want sentence embeddings for sentences = ['This is an example sentence', 'Each sentence is converted'] # Load model from HuggingFace Hub tokenizer = AutoTokenizer.from_pretrained('rithwik-db/multiplenegatives-e5-base-unsupervised-500-34358c') model = AutoModel.from_pretrained('rithwik-db/multiplenegatives-e5-base-unsupervised-500-34358c') # Tokenize sentences encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt') # Compute token embeddings with torch.no_grad(): model_output = model(**encoded_input) # Perform pooling. In this case, mean pooling. sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask']) print("Sentence embeddings:") print(sentence_embeddings) ``` ## Evaluation Results <!--- Describe how your model was evaluated --> For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=rithwik-db/multiplenegatives-e5-base-unsupervised-500-34358c) ## Training The model was trained with the parameters: **DataLoader**: `torch.utils.data.dataloader.DataLoader` of length 298 with parameters: ``` {'batch_size': 8, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'} ``` **Loss**: `sentence_transformers.losses.MultipleNegativesRankingLoss.MultipleNegativesRankingLoss` with parameters: ``` {'scale': 20.0, 'similarity_fct': 'cos_sim'} ``` Parameters of the fit()-Method: ``` { "epochs": 3, "evaluation_steps": 0, "evaluator": "NoneType", "max_grad_norm": 1, "optimizer_class": "<class 'torch.optim.adamw.AdamW'>", "optimizer_params": { "lr": 2e-05 }, "scheduler": "WarmupLinear", "steps_per_epoch": null, "warmup_steps": 10000, "weight_decay": 0.01 } ``` ## Full Model Architecture ``` SentenceTransformer( (0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: BertModel (1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False}) ) ``` ## Citing & Authors <!--- Describe where people can find more information -->
Atchuth/DialoGPT-small-MBOT
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
null
--- library_name: stable-baselines3 tags: - SpaceInvadersNoFrameskip-v4 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: DQN results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: SpaceInvadersNoFrameskip-v4 type: SpaceInvadersNoFrameskip-v4 metrics: - type: mean_reward value: 545.00 +/- 120.79 name: mean_reward verified: false --- # **DQN** Agent playing **SpaceInvadersNoFrameskip-v4** This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3) and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo). The RL Zoo is a training framework for Stable Baselines3 reinforcement learning agents, with hyperparameter optimization and pre-trained agents included. ## Usage (with SB3 RL Zoo) RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/> SB3: https://github.com/DLR-RM/stable-baselines3<br/> SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib Install the RL Zoo (with SB3 and SB3-Contrib): ```bash pip install rl_zoo3 ``` ``` # Download model and save it into the logs/ folder python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga AMI0x -f logs/ python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ ``` If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do: ``` python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga AMI0x -f logs/ python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ ``` ## Training (with the RL Zoo) ``` python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ # Upload the model and generate video (when possible) python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga AMI0x ``` ## Hyperparameters ```python OrderedDict([('batch_size', 32), ('buffer_size', 100000), ('env_wrapper', ['stable_baselines3.common.atari_wrappers.AtariWrapper']), ('exploration_final_eps', 0.01), ('exploration_fraction', 0.1), ('frame_stack', 4), ('gradient_steps', 1), ('learning_rate', 0.0001), ('learning_starts', 100000), ('n_timesteps', 1000000.0), ('optimize_memory_usage', False), ('policy', 'CnnPolicy'), ('target_update_interval', 1000), ('train_freq', 4), ('normalize', False)]) ```
Augustvember/wokka5
[ "pytorch", "gpt2", "text-generation", "transformers", "conversational" ]
conversational
{ "architectures": [ "GPT2LMHeadModel" ], "model_type": "gpt2", "task_specific_params": { "conversational": { "max_length": 1000 }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
11
null
--- tags: - Taxi-v3 - q-learning - reinforcement-learning - custom-implementation model-index: - name: Taxi-v3 results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: Taxi-v3 type: Taxi-v3 metrics: - type: mean_reward value: 7.46 +/- 2.69 name: mean_reward verified: false --- # **Q-Learning** Agent playing **Taxi-v3** This is a trained model of a **Q-Learning** agent playing **Taxi-v3**. ## Usage ```python model = load_from_hub(repo_id="yingzhi/Taxi-v3", filename="q-learning.pkl") # Don't forget to check if you need to add additional attributes (is_slippery=False etc) env = gym.make(model["env_id"]) ```
Augustvember/your-model-name
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
null
--- tags: - FrozenLake-v1-4x4-no_slippery - q-learning - reinforcement-learning - custom-implementation model-index: - name: q-FrozenLake-v1-4x4-noSlippery results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: FrozenLake-v1-4x4-no_slippery type: FrozenLake-v1-4x4-no_slippery metrics: - type: mean_reward value: 1.00 +/- 0.00 name: mean_reward verified: false --- # **Q-Learning** Agent playing **FrozenLake-v1** This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**. ## Usage ```python model = load_from_hub(repo_id="vinaysatish/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl") # Don't forget to check if you need to add additional attributes (is_slippery=False etc) env = gym.make(model["env_id"]) ```
Axon/resnet34-v1
[ "dataset:ImageNet", "arxiv:1512.03385", "Axon", "Elixir", "license:apache-2.0" ]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
null
--- library_name: stable-baselines3 tags: - LunarLander-v2 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: PPO results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: LunarLander-v2 type: LunarLander-v2 metrics: - type: mean_reward value: 258.14 +/- 25.99 name: mean_reward verified: false --- # **PPO** Agent playing **LunarLander-v2** This is a trained model of a **PPO** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3). ## Usage (with Stable-baselines3) TODO: Add your code ```python from stable_baselines3 import ... from huggingface_sb3 import load_from_hub ... ```
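A minimal sketch of the usage the TODO above points at could look like the following; the `repo_id` and `filename` are placeholders for the actual Hub repository and checkpoint name behind this card.

```python
import gym
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO
from stable_baselines3.common.evaluation import evaluate_policy

# Placeholder repo/filename -- replace with the real ones for this model
checkpoint = load_from_hub(repo_id="<user>/ppo-LunarLander-v2", filename="ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)

eval_env = gym.make("LunarLander-v2")
mean_reward, std_reward = evaluate_policy(model, eval_env, n_eval_episodes=10, deterministic=True)
print(f"mean_reward={mean_reward:.2f} +/- {std_reward:.2f}")
```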
Ayato/DialoGTP-large-Yuri
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
null
--- tags: - unity-ml-agents - ml-agents - deep-reinforcement-learning - reinforcement-learning - ML-Agents-SoccerTwos library_name: ml-agents --- # **poca** Agent playing **SoccerTwos** This is a trained model of a **poca** agent playing **SoccerTwos** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents). ## Usage (with ML-Agents) The Documentation: https://github.com/huggingface/ml-agents#get-started We wrote a complete tutorial on how to train your first agent using ML-Agents and publish it to the Hub: ### Resume the training ``` mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume ``` ### Watch your Agent play You can watch your agent **playing directly in your browser**: 1. Go to https://huggingface.co/spaces/unity/ML-Agents-SoccerTwos 2. Write your model_id: dyingc/poca-SoccerTwos 3. Select your *.nn /*.onnx file 4. Click on Watch the agent play 👀
Ayham/albert_distilgpt2_summarization_cnn_dailymail
[ "pytorch", "tensorboard", "encoder-decoder", "text2text-generation", "dataset:cnn_dailymail", "transformers", "generated_from_trainer", "autotrain_compatible" ]
text2text-generation
{ "architectures": [ "EncoderDecoderModel" ], "model_type": "encoder-decoder", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
9
null
--- language: - de tags: - hf-asr-leaderboard - generated_from_trainer datasets: - mozilla-foundation/common_voice_11_0 metrics: - wer model-index: - name: whisper-fine-tuned-de_learn results: - task: name: Automatic Speech Recognition type: automatic-speech-recognition dataset: name: Common Voice 11.0 type: mozilla-foundation/common_voice_11_0 config: de split: validation[9500:14300] args: 'config: german, split: test' metrics: - name: Wer type: wer value: 14.329741524756484 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # whisper-fine-tuned-de_learn This model is a fine-tuned version of [whisper-fine-tuned-de_arg_new](https://huggingface.co/whisper-fine-tuned-de_arg_new) on the Common Voice 11.0 dataset. It achieves the following results on the evaluation set: - Loss: 0.3423 - Wer: 14.3297 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 16 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - training_steps: 8000 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:----:|:---------------:|:-------:| | 0.2689 | 0.67 | 1000 | 0.3094 | 15.5378 | | 0.1072 | 1.33 | 2000 | 0.3068 | 15.0653 | | 0.1134 | 2.0 | 3000 | 0.2991 | 14.5704 | | 0.0437 | 2.67 | 4000 | 0.3166 | 14.8876 | | 0.0163 | 3.33 | 5000 | 0.3308 | 14.4940 | | 0.0118 | 4.0 | 6000 | 0.3314 | 14.3882 | | 0.0052 | 4.67 | 7000 | 0.3399 | 14.2915 | | 0.0032 | 5.33 | 8000 | 0.3423 | 14.3297 | ### Framework versions - Transformers 4.28.0.dev0 - Pytorch 2.0.0 - Datasets 2.11.0 - Tokenizers 0.13.3
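The card does not include an inference snippet; a minimal sketch with the Transformers `pipeline` API would look roughly like this. The checkpoint name is a placeholder for wherever this fine-tuned model is actually stored, and the input file is assumed to be a German audio recording readable by ffmpeg.

```python
from transformers import pipeline

# Placeholder model id -- point this at the fine-tuned checkpoint
asr = pipeline(
    "automatic-speech-recognition",
    model="<user>/whisper-fine-tuned-de_learn",
    chunk_length_s=30,  # chunk long recordings so they fit Whisper's 30 s window
)

result = asr("sample_de.wav")
print(result["text"])
```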
Ayham/albert_gpt2_summarization_cnndm
[ "pytorch", "tensorboard", "encoder-decoder", "text2text-generation", "dataset:cnn_dailymail", "transformers", "generated_from_trainer", "autotrain_compatible" ]
text2text-generation
{ "architectures": [ "EncoderDecoderModel" ], "model_type": "encoder-decoder", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
6
null
--- library_name: ml-agents tags: - Pyramids - deep-reinforcement-learning - reinforcement-learning - ML-Agents-Pyramids --- # **ppo** Agent playing **Pyramids** This is a trained model of a **ppo** agent playing **Pyramids** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents). ## Usage (with ML-Agents) The Documentation: https://github.com/huggingface/ml-agents#get-started We wrote a complete tutorial on how to train your first agent using ML-Agents and publish it to the Hub: ### Resume the training ``` mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume ``` ### Watch your Agent play You can watch your agent **playing directly in your browser**: 1. Go to https://huggingface.co/spaces/unity/ML-Agents-Pyramids 2. Find your model_id: rlucasz93/ppo-Pyramid 3. Select your *.nn /*.onnx file 4. Click on Watch the agent play 👀
Ayham/distilbert_gpt2_summarization_xsum
[ "pytorch", "tensorboard", "encoder-decoder", "text2text-generation", "dataset:xsum", "transformers", "generated_from_trainer", "autotrain_compatible" ]
text2text-generation
{ "architectures": [ "EncoderDecoderModel" ], "model_type": "encoder-decoder", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
8
null
--- tags: - generated_from_trainer model-index: - name: finetuned results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # finetuned This model is a fine-tuned version of [microsoft/trocr-base-stage1](https://huggingface.co/microsoft/trocr-base-stage1) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.6485 - Cer: 0.0565 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 1 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Cer | |:-------------:|:-----:|:----:|:---------------:|:------:| | 1.1738 | 0.28 | 200 | 1.2406 | 0.2262 | | 0.9181 | 0.57 | 400 | 0.8294 | 0.1084 | | 0.6552 | 0.85 | 600 | 0.6485 | 0.0565 | ### Framework versions - Transformers 4.27.4 - Pytorch 2.0.0+cu118 - Datasets 2.11.0 - Tokenizers 0.13.3
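For reference, inference with a TrOCR checkpoint of this kind typically follows the pattern below; the fine-tuned model id is a placeholder, and the processor is assumed to be the one shipped with the base checkpoint.

```python
from PIL import Image
from transformers import TrOCRProcessor, VisionEncoderDecoderModel

processor = TrOCRProcessor.from_pretrained("microsoft/trocr-base-stage1")
model = VisionEncoderDecoderModel.from_pretrained("<user>/finetuned")  # placeholder id

# A cropped image of a single text line works best for TrOCR
image = Image.open("text_line.png").convert("RGB")
pixel_values = processor(images=image, return_tensors="pt").pixel_values

generated_ids = model.generate(pixel_values)
text = processor.batch_decode(generated_ids, skip_special_tokens=True)[0]
print(text)
```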
Ayham/xlmroberta_large_gpt2_summarization_cnndm
[ "pytorch", "tensorboard", "encoder-decoder", "text2text-generation", "dataset:cnn_dailymail", "transformers", "generated_from_trainer", "autotrain_compatible" ]
text2text-generation
{ "architectures": [ "EncoderDecoderModel" ], "model_type": "encoder-decoder", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
12
null
## Usage The model can be used directly (without a language model) as follows: ```python import soundfile as sf import torch from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor import argparse def parse_transcription(wav_file): # load pretrained model processor = Wav2Vec2Processor.from_pretrained("addy88/wav2vec2-kannada-stt") model = Wav2Vec2ForCTC.from_pretrained("addy88/wav2vec2-kannada-stt") # load audio audio_input, sample_rate = sf.read(wav_file) # pad input values and return pt tensor input_values = processor(audio_input, sampling_rate=sample_rate, return_tensors="pt").input_values # INFERENCE # retrieve logits & take argmax logits = model(input_values).logits predicted_ids = torch.argmax(logits, dim=-1) # transcribe transcription = processor.decode(predicted_ids[0], skip_special_tokens=True) print(transcription) ```
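The snippet above defines `parse_transcription` but never calls it; invoking it is a single line. The file name below is a placeholder, and the audio should be 16 kHz mono (resample it first if needed).

```python
# The model expects 16 kHz mono audio; resample the file beforehand if necessary
parse_transcription("sample_kannada_16khz.wav")
```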
Ayham/xlnet_bert_summarization_cnn_dailymail
[ "pytorch", "tensorboard", "encoder-decoder", "text2text-generation", "dataset:cnn_dailymail", "transformers", "generated_from_trainer", "autotrain_compatible" ]
text2text-generation
{ "architectures": [ "EncoderDecoderModel" ], "model_type": "encoder-decoder", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
7
null
--- tags: - spacy - token-classification language: - en model-index: - name: en_cardb results: - task: name: NER type: token-classification metrics: - name: NER Precision type: precision value: 0.9584607543 - name: NER Recall type: recall value: 0.9673353909 - name: NER F Score type: f_score value: 0.9628776242 --- | Feature | Description | | --- | --- | | **Name** | `en_cardb` | | **Version** | `0.0.0` | | **spaCy** | `>=3.4.4,<3.5.0` | | **Default Pipeline** | `transformer`, `ner` | | **Components** | `transformer`, `ner` | | **Vectors** | 0 keys, 0 unique vectors (0 dimensions) | | **Sources** | n/a | | **License** | n/a | | **Author** | [n/a]() | ### Label Scheme <details> <summary>View label scheme (3 labels for 1 components)</summary> | Component | Labels | | --- | --- | | **`ner`** | `antineoplastic`, `cancertype`, `carcinogen` | </details> ### Accuracy | Type | Score | | --- | --- | | `ENTS_F` | 96.29 | | `ENTS_P` | 95.85 | | `ENTS_R` | 96.73 | | `TRANSFORMER_LOSS` | 63857.68 | | `NER_LOSS` | 42226.48 |
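Assuming the `en_cardb` pipeline package has been installed in the environment (spaCy pipelines are distributed as installable packages), loading and running it follows the standard spaCy pattern; the example sentence is purely illustrative.

```python
import spacy

# Requires the en_cardb pipeline package to be installed
nlp = spacy.load("en_cardb")

doc = nlp("Benzene is a known carcinogen linked to leukemia.")
for ent in doc.ents:
    print(ent.text, ent.label_)  # expected labels: carcinogen, cancertype, antineoplastic
```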
Ayumi/Jovana
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
null
--- license: creativeml-openrail-m library_name: diffusers pipeline_tag: text-to-image tags: - stable-diffusion - anime - aiart --- **This model is trained for 12 characters from Kuma Kuma Kuma Bear (くまクマ熊ベアー) + 5 characters from Saving 80,000 Gold in Another World for My Retirement (老後に備えて異世界で8万枚の金貨を貯めます)** **Why do I train the two animes together?** I feel these two animes (light novels actually) have so much similarity that I really want to make some crossovers. For examples please see https://civitai.com/models/37632/kumabear-roukin8-characters-fullckpt Moreover there is no reason to do single anime either. I plan to add __shinmai renkinjutsushi no tenpo keiei__ next. ## Trigger Words **KumaBear** * Atla * Cliff * Eleanora * Fina * Flora * Gentz * Misana * Noire * Shia * Shuri * Telmina * Yuna **Roukin8** * Adelaide * Beatrice * Colette * Sabine * YamanoMitsuha **Styles** (may not be very effective) * aniscreen * fanart * light novel * official art * ..., style(s) of your favorite model if know how to merge things properly --- To get everything right you may need additional trigger words for outfits and ornaments. Here are some suggestions - If you want to get the bear costume of Yuna you may add kigurumi, bear hood, animal hood, animal costume, hand puppet etc. - Add Red bow for Fina/Shuri/Noire - Add twin drill for Shia - Add double bun for Flora - Add scrunchie for telmina Kumakyuu and Kumayuru are not tagged, but you may get something that look right by prompting with bears, stuffed animal etc. Interestingly I can hardly take off the hood of Yuna during the early phase of training, but it becomes possible after longer training (actually now Yuna by default does not have hood though almost all the images of her have hood on!) Many characters are missing from the two animes. I may update the KumaBear one at the end of the season with the following characters * kumakyuu * kumayuru * Lurina * Farrat (king) * Kitia (queen) * Karin * Sanya * Helen * Ans * Mylene * Cattleya ## Dataset * KumaBear 5113 * anime screenshots 5042 * fanart 37 * official art 15 * novel illustration 19 * Roukin8 2948 (screenshots only) * Regularization ~30K ## Training * First trained for 9739 steps, resumed and trained for another 20494 steps * clip skip 1, resolution 512, batch size 8, on top of [JosephusCheung/ACertainty](https://huggingface.co/JosephusCheung/ACertainty/tree/main) * 2.5e-6 cosine scheduler, Adam8bit, conditional dropout 0.08
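As a rough illustration of how the trigger words above are meant to be combined in a prompt, a diffusers sketch might look like this; the checkpoint path is a placeholder for a diffusers-converted copy of the model, and the sampler settings are generic defaults rather than values from the author.

```python
import torch
from diffusers import StableDiffusionPipeline

# Placeholder path -- load the converted diffusers version of this checkpoint
pipe = StableDiffusionPipeline.from_pretrained("path/to/kumabear-roukin8", torch_dtype=torch.float16)
pipe = pipe.to("cuda")

prompt = "Yuna, kigurumi, bear hood, animal costume, aniscreen, 1girl, smile"
negative_prompt = "lowres, bad anatomy, extra fingers"

image = pipe(prompt, negative_prompt=negative_prompt, num_inference_steps=28, guidance_scale=7.0).images[0]
image.save("yuna.png")
```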
AyushPJ/ai-club-inductions-21-nlp-ELECTRA-base-squad
[ "pytorch", "electra", "question-answering", "transformers", "generated_from_trainer", "autotrain_compatible" ]
question-answering
{ "architectures": [ "ElectraForQuestionAnswering" ], "model_type": "electra", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
12
null
--- license: mit base_model: CompVis/stable-diffusion-v1-4 ---
Azaghast/DistilBART-SCP-ParaSummarization
[ "pytorch", "bart", "text2text-generation", "transformers", "autotrain_compatible" ]
text2text-generation
{ "architectures": [ "BartForConditionalGeneration" ], "model_type": "bart", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": true, "length_penalty": 2, "max_length": 142, "min_length": 56, "no_repeat_ngram_size": 3, "num_beams": 4, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
8
null
--- pipeline_tag: sentence-similarity tags: - sentence-transformers - feature-extraction - sentence-similarity - transformers --- # rithwik-db/cleaned-e5-base-unsupervised-16 This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search. <!--- Describe your model here --> ## Usage (Sentence-Transformers) Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed: ``` pip install -U sentence-transformers ``` Then you can use the model like this: ```python from sentence_transformers import SentenceTransformer sentences = ["This is an example sentence", "Each sentence is converted"] model = SentenceTransformer('rithwik-db/cleaned-e5-base-unsupervised-16') embeddings = model.encode(sentences) print(embeddings) ``` ## Usage (HuggingFace Transformers) Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings. ```python from transformers import AutoTokenizer, AutoModel import torch #Mean Pooling - Take attention mask into account for correct averaging def mean_pooling(model_output, attention_mask): token_embeddings = model_output[0] #First element of model_output contains all token embeddings input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float() return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9) # Sentences we want sentence embeddings for sentences = ['This is an example sentence', 'Each sentence is converted'] # Load model from HuggingFace Hub tokenizer = AutoTokenizer.from_pretrained('rithwik-db/cleaned-e5-base-unsupervised-16') model = AutoModel.from_pretrained('rithwik-db/cleaned-e5-base-unsupervised-16') # Tokenize sentences encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt') # Compute token embeddings with torch.no_grad(): model_output = model(**encoded_input) # Perform pooling. In this case, mean pooling. 
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask']) print("Sentence embeddings:") print(sentence_embeddings) ``` ## Evaluation Results <!--- Describe how your model was evaluated --> For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=rithwik-db/cleaned-e5-base-unsupervised-16) ## Training The model was trained with the parameters: **DataLoader**: `torch.utils.data.dataloader.DataLoader` of length 149 with parameters: ``` {'batch_size': 16, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'} ``` **Loss**: `sentence_transformers.losses.MultipleNegativesRankingLoss.MultipleNegativesRankingLoss` with parameters: ``` {'scale': 20.0, 'similarity_fct': 'cos_sim'} ``` Parameters of the fit()-Method: ``` { "epochs": 3, "evaluation_steps": 0, "evaluator": "NoneType", "max_grad_norm": 1, "optimizer_class": "<class 'torch.optim.adamw.AdamW'>", "optimizer_params": { "lr": 2e-05 }, "scheduler": "WarmupLinear", "steps_per_epoch": null, "warmup_steps": 10000, "weight_decay": 0.01 } ``` ## Full Model Architecture ``` SentenceTransformer( (0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: BertModel (1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False}) ) ``` ## Citing & Authors <!--- Describe where people can find more information -->
Azura/data
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
null
Access to model meltano/singer-sdk is restricted and you are not in the authorized list. Visit https://huggingface.co/meltano/singer-sdk to ask for access.
BSC-LT/roberta-base-bne-capitel-pos
[ "pytorch", "roberta", "token-classification", "es", "dataset:bne", "dataset:capitel", "arxiv:1907.11692", "arxiv:2107.07253", "transformers", "national library of spain", "spanish", "bne", "capitel", "pos", "license:apache-2.0", "autotrain_compatible" ]
token-classification
{ "architectures": [ "RobertaForTokenClassification" ], "model_type": "roberta", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
14
null
--- tags: - FrozenLake-v1-4x4-no_slippery - q-learning - reinforcement-learning - custom-implementation model-index: - name: q-FrozenLake-v1-4x4-noSlippery results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: FrozenLake-v1-4x4-no_slippery type: FrozenLake-v1-4x4-no_slippery metrics: - type: mean_reward value: 1.00 +/- 0.00 name: mean_reward verified: false --- # **Q-Learning** Agent playing **FrozenLake-v1** This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**. ## Usage ```python model = load_from_hub(repo_id="Yanrds/q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl") # Don't forget to check if you need to add additional attributes (is_slippery=False etc) env = gym.make(model["env_id"]) ```
BSC-LT/roberta-base-bne-sqac
[ "pytorch", "roberta", "question-answering", "es", "dataset:BSC-TeMU/SQAC", "arxiv:1907.11692", "arxiv:2107.07253", "transformers", "national library of spain", "spanish", "bne", "qa", "question answering", "license:apache-2.0", "autotrain_compatible" ]
question-answering
{ "architectures": [ "RobertaForQuestionAnswering" ], "model_type": "roberta", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
10
null
--- tags: - Taxi-v3 - q-learning - reinforcement-learning - custom-implementation model-index: - name: q-Taxi-v3 results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: Taxi-v3 type: Taxi-v3 metrics: - type: mean_reward value: 7.56 +/- 2.71 name: mean_reward verified: false --- # **Q-Learning** Agent playing **Taxi-v3** This is a trained model of a **Q-Learning** agent playing **Taxi-v3**. ## Usage ```python model = load_from_hub(repo_id="Yanrds/q-Taxi-v3", filename="q-learning.pkl") # Don't forget to check if you need to add additional attributes (is_slippery=False etc) env = gym.make(model["env_id"]) ```
BSC-LT/roberta-base-bne
[ "pytorch", "roberta", "fill-mask", "es", "dataset:bne", "arxiv:1907.11692", "arxiv:2107.07253", "transformers", "national library of spain", "spanish", "bne", "license:apache-2.0", "autotrain_compatible" ]
fill-mask
{ "architectures": [ "RobertaForMaskedLM" ], "model_type": "roberta", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
594
2023-04-12T01:15:27Z
--- pipeline_tag: sentence-similarity tags: - sentence-transformers - feature-extraction - sentence-similarity - transformers --- # hlyu/distilbert_tasb_14 This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 768 dimensional dense vector space and can be used for tasks like clustering or semantic search. <!--- Describe your model here --> ## Usage (Sentence-Transformers) Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed: ``` pip install -U sentence-transformers ``` Then you can use the model like this: ```python from sentence_transformers import SentenceTransformer sentences = ["This is an example sentence", "Each sentence is converted"] model = SentenceTransformer('hlyu/distilbert_tasb_14') embeddings = model.encode(sentences) print(embeddings) ``` ## Usage (HuggingFace Transformers) Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings. ```python from transformers import AutoTokenizer, AutoModel import torch #Mean Pooling - Take attention mask into account for correct averaging def mean_pooling(model_output, attention_mask): token_embeddings = model_output[0] #First element of model_output contains all token embeddings input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float() return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9) # Sentences we want sentence embeddings for sentences = ['This is an example sentence', 'Each sentence is converted'] # Load model from HuggingFace Hub tokenizer = AutoTokenizer.from_pretrained('hlyu/distilbert_tasb_14') model = AutoModel.from_pretrained('hlyu/distilbert_tasb_14') # Tokenize sentences encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt') # Compute token embeddings with torch.no_grad(): model_output = model(**encoded_input) # Perform pooling. In this case, mean pooling. 
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask']) print("Sentence embeddings:") print(sentence_embeddings) ``` ## Evaluation Results <!--- Describe how your model was evaluated --> For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=hlyu/distilbert_tasb_14) ## Training The model was trained with the parameters: **DataLoader**: `torch.utils.data.dataloader.DataLoader` of length 5055 with parameters: ``` {'batch_size': 128, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'} ``` **Loss**: `sentence_transformers.losses.MSELoss.MSELoss` Parameters of the fit()-Method: ``` { "epochs": 1, "evaluation_steps": 1000, "evaluator": "sentence_transformers.evaluation.SequentialEvaluator.SequentialEvaluator", "max_grad_norm": 1, "optimizer_class": "<class 'torch.optim.adamw.AdamW'>", "optimizer_params": { "eps": 1e-06, "lr": 0.0001 }, "scheduler": "WarmupLinear", "steps_per_epoch": null, "warmup_steps": 1000, "weight_decay": 0.01 } ``` ## Full Model Architecture ``` SentenceTransformer( (0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: DistilBertModel (1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False}) ) ``` ## Citing & Authors <!--- Describe where people can find more information -->
BSC-LT/roberta-large-bne-capitel-ner
[ "pytorch", "roberta", "token-classification", "es", "dataset:bne", "dataset:capitel", "arxiv:1907.11692", "arxiv:2107.07253", "transformers", "national library of spain", "spanish", "bne", "capitel", "ner", "license:apache-2.0", "autotrain_compatible" ]
token-classification
{ "architectures": [ "RobertaForTokenClassification" ], "model_type": "roberta", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
5
2023-04-12T01:18:54Z
--- license: mit --- ![](https://user-images.githubusercontent.com/61938694/231021615-38df0a0a-d97e-4f7a-99d9-99952357b4b1.png) ## Paella We are releasing a new Paella model which builds on top of our initial paper https://arxiv.org/abs/2211.07292. Paella is a text-to-image model that works in a quantized latent space and learns similarly to MUSE and Diffusion models. Since the paper release we have worked intensively to bring Paella to a similar level as other state-of-the-art models. With this release we are coming a step closer to that goal. However, our main intention is not to make the greatest text-to-image model out there (at least for now); it is to bring text-to-image models closer to people outside the field on a technical basis. For example, many models have codebases with many thousands of lines of code, which makes it pretty hard for people to dive into the code and easily understand it. And that is the contribution we are most proud of with Paella. The training and sampling code for Paella is minimalistic and can be understood in a few minutes, making further extensions, quick tests, idea testing etc. extremely fast. For instance, the entire sampling code can be written in just **12 lines** of code. ### How does Paella work? Paella works in a quantized latent space, just like StableDiffusion etc., to reduce the computational power needed. Images are encoded to a smaller latent space and converted to visual tokens of shape *h x w*. During training, these visual tokens are noised by replacing a random amount of tokens with other randomly selected tokens from the codebook of the VQGAN. The noised image is given to the model, along with a timestep and the conditional information, which is text in our case. The model is tasked to predict the un-noised version of the tokens. And that's it. The model is optimized with the CrossEntropy loss between the original tokens and the predicted tokens. The amount of noise added during training follows a simple linear schedule, meaning that we uniformly sample a percentage between 0 and 100% and noise that amount of tokens.<br><br> <figure> <img src="https://user-images.githubusercontent.com/61938694/231248435-d21170c1-57b4-4a8f-90a6-62cf3e7effcd.png" width="400"> <figcaption>Images are noised and then fed to the model during training.</figcaption> </figure> Sampling is also extremely simple: we start with the entire image being random tokens. Then we feed the latent image, the timestep and the condition into the model and let it predict the final image. The model outputs a distribution over every token, which we sample from with standard multinomial sampling. Since there are infinite possibilities for what the result could look like, just doing a single step results in very basic shapes without any details. That is why we add noise to the image again and feed it back to the model. We repeat that process a number of times, with less noise added every time, and slowly arrive at our final image.
You can see how images emerge [here](https://user-images.githubusercontent.com/61938694/231252449-d9ac4d15-15ef-4aed-a0de-91fa8746a415.png).<br> The following is the entire sampling code needed to generate images: ```python def sample(model_inputs, latent_shape, unconditional_inputs, steps=12, renoise_steps=11, temperature=(0.7, 0.3), cfg=8.0): with torch.inference_mode(): sampled = torch.randint(0, model.num_labels, size=latent_shape) initial_noise = sampled.clone() timesteps = torch.linspace(1.0, 0.0, steps+1) temperatures = torch.linspace(temperature[0], temperature[1], steps) for i, t in enumerate(timesteps[:steps]): t = torch.ones(latent_shape[0]) * t logits = model(sampled, t, **model_inputs) if cfg: logits = logits * cfg + model(sampled, t, **unconditional_inputs) * (1-cfg) sampled = logits.div(temperatures[i]).softmax(dim=1).permute(0, 2, 3, 1).reshape(-1, logits.size(1)) sampled = torch.multinomial(sampled, 1)[:, 0].view(logits.size(0), *logits.shape[2:]) if i < renoise_steps: t_next = torch.ones(latent_shape[0]) * timesteps[i+1] sampled = model.add_noise(sampled, t_next, random_x=initial_noise)[0] return sampled ``` ### Results <img src="https://user-images.githubusercontent.com/61938694/231598512-2410c172-5a9d-43f4-947c-6ff7eaee77e7.png"> Since Paella is also conditioned on CLIP image embeddings the following things are also possible:<br><br> <img src="https://user-images.githubusercontent.com/61938694/231278319-16551a8d-bfd1-49c9-b604-c6da3955a6d4.png"> <img src="https://user-images.githubusercontent.com/61938694/231287637-acd0b9b2-90c7-4518-9b9e-d7edefc6c3af.png"> <img src="https://user-images.githubusercontent.com/61938694/231287119-42fe496b-e737-4dc5-8e53-613bdba149da.png"> ### Technical Details. Model-Architecture: U-Net (Mix of....) <br> Dataset: Laion-A, Laion Aesthetic > 6.0 <br> Training Steps: 1.3M <br> Batch Size: 2048 <br> Resolution: 256 <br> VQGAN Compression: f4 <br> Condition: ByT5-XL (95%), CLIP-H Image Embedding (10%), CLIP-H Text Embedding (10%) Optimizer: AdamW Hardware: 128 A100 @ 80GB <br> Training Time: ~3 weeks <br> Learning Rate: 1e-4 <br> More details on the approach, training and sampling can be found in paper and on GitHub. ### Paper, Code Release Paper: https://arxiv.org/abs/2211.07292 <br> Code: https://github.com/dome272/Paella <br> ### Goal So you see, there are no heavy math formulas or theorems needed to achieve good sampling qualities. Moreover, there are no constants such as alpha, beta, alpha_cum_prod etc. necessary as in diffusion models. This makes this method really suitable for people new to the field of generative AI. We hope we can set the foundation for further research in that direction and hope to contribute to a world where AI is accessible and can be understood by everyone. ### Limitations & Conclusion There are still many things to improve for Paella to get on par with standard diffusion models or to even outperform them. One primary thing we notice is that even though we only condition the model on CLIP image embedding 10% of the time, during inference the model heavily relies on the generated image embeddings by a prior model (mapping clip text embeddings to image embeddings as proposed in Dalle2). We counteract this by decreasing the importance of the image embeddings by reweighing the attention scores. There probably is a way to avoid this happening already in training. Other limitations such as lack of composition, text depiction, unawareness of concepts etc. could also be reduced by continuing the training for longer. 
As a reference, Paella has only seen as many images as SD 1.4. To conclude, this is still work in progress, but it is our first model that works a million times better than the first versions we trained months ago. We hope that more people become interested in this approach, since we believe it has a lot of potential to become much better than this and to enable new people to have an easy-to-understand introduction to the field of generative AI.
BSen/wav2vec2-base-timit-demo-colab
[ "pytorch", "tensorboard", "wav2vec2", "automatic-speech-recognition", "transformers", "generated_from_trainer", "license:apache-2.0" ]
automatic-speech-recognition
{ "architectures": [ "Wav2Vec2ForCTC" ], "model_type": "wav2vec2", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
4
2023-04-12T01:24:28Z
--- license: mit tags: - generated_from_trainer datasets: - ricardo-filho/tcm-0.9-no-valor-objeto model-index: - name: bert_base_tcm_0.9_no_valor_objeto results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bert_base_tcm_0.9_no_valor_objeto This model is a fine-tuned version of [neuralmind/bert-base-portuguese-cased](https://huggingface.co/neuralmind/bert-base-portuguese-cased) on the ricardo-filho/tcm-0.9-no-valor-objeto dataset. It achieves the following results on the evaluation set: - Loss: 0.0176 - Criterio Julgamento Precision: 0.8310 - Criterio Julgamento Recall: 0.8310 - Criterio Julgamento F1: 0.8310 - Criterio Julgamento Number: 142 - Data Sessao Precision: 0.7909 - Data Sessao Recall: 0.9667 - Data Sessao F1: 0.87 - Data Sessao Number: 90 - Modalidade Licitacao Precision: 0.9564 - Modalidade Licitacao Recall: 0.9815 - Modalidade Licitacao F1: 0.9688 - Modalidade Licitacao Number: 648 - Numero Exercicio Precision: 0.9362 - Numero Exercicio Recall: 0.9788 - Numero Exercicio F1: 0.9570 - Numero Exercicio Number: 330 - Objeto Licitacao Precision: 0.4460 - Objeto Licitacao Recall: 0.5849 - Objeto Licitacao F1: 0.5061 - Objeto Licitacao Number: 106 - Overall Precision: 0.8751 - Overall Recall: 0.9316 - Overall F1: 0.9025 - Overall Accuracy: 0.9953 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 10.0 ### Training results | Training Loss | Epoch | Step | Validation Loss | Criterio Julgamento Precision | Criterio Julgamento Recall | Criterio Julgamento F1 | Criterio Julgamento Number | Data Sessao Precision | Data Sessao Recall | Data Sessao F1 | Data Sessao Number | Modalidade Licitacao Precision | Modalidade Licitacao Recall | Modalidade Licitacao F1 | Modalidade Licitacao Number | Numero Exercicio Precision | Numero Exercicio Recall | Numero Exercicio F1 | Numero Exercicio Number | Objeto Licitacao Precision | Objeto Licitacao Recall | Objeto Licitacao F1 | Objeto Licitacao Number | Overall Precision | Overall Recall | Overall F1 | Overall Accuracy | |:-------------:|:-----:|:-----:|:---------------:|:-----------------------------:|:--------------------------:|:----------------------:|:--------------------------:|:---------------------:|:------------------:|:--------------:|:------------------:|:------------------------------:|:---------------------------:|:-----------------------:|:---------------------------:|:--------------------------:|:-----------------------:|:-------------------:|:-----------------------:|:--------------------------:|:-----------------------:|:-------------------:|:-----------------------:|:-----------------:|:--------------:|:----------:|:----------------:| | 0.0159 | 1.0 | 3497 | 0.0176 | 0.8310 | 0.8310 | 0.8310 | 142 | 0.7909 | 0.9667 | 0.87 | 90 | 0.9564 | 0.9815 | 0.9688 | 648 | 0.9362 | 0.9788 | 0.9570 | 330 | 0.4460 | 0.5849 | 0.5061 | 106 | 0.8751 | 0.9316 | 0.9025 | 0.9953 | | 0.0161 | 2.0 | 6994 | 0.0191 | 0.8312 | 0.9014 | 0.8649 | 142 | 0.7890 | 0.9556 | 0.8643 | 90 | 0.9580 | 0.9846 | 0.9711 | 648 | 0.9475 | 
0.9848 | 0.9658 | 330 | 0.5556 | 0.6604 | 0.6034 | 106 | 0.8920 | 0.9476 | 0.9189 | 0.9954 | | 0.0094 | 3.0 | 10491 | 0.0215 | 0.8125 | 0.9155 | 0.8609 | 142 | 0.7818 | 0.9556 | 0.86 | 90 | 0.9608 | 0.9846 | 0.9726 | 648 | 0.9503 | 0.9848 | 0.9673 | 330 | 0.5108 | 0.6698 | 0.5796 | 106 | 0.8834 | 0.9498 | 0.9154 | 0.9955 | | 0.0057 | 4.0 | 13988 | 0.0212 | 0.8269 | 0.9085 | 0.8658 | 142 | 0.8095 | 0.9444 | 0.8718 | 90 | 0.9697 | 0.9861 | 0.9778 | 648 | 0.9501 | 0.9818 | 0.9657 | 330 | 0.5290 | 0.6887 | 0.5984 | 106 | 0.8935 | 0.9498 | 0.9208 | 0.9960 | | 0.0049 | 5.0 | 17485 | 0.0214 | 0.8344 | 0.9225 | 0.8763 | 142 | 0.7905 | 0.9222 | 0.8513 | 90 | 0.9652 | 0.9830 | 0.9740 | 648 | 0.9474 | 0.9818 | 0.9643 | 330 | 0.5217 | 0.6792 | 0.5902 | 106 | 0.8894 | 0.9476 | 0.9176 | 0.9957 | | 0.0036 | 6.0 | 20982 | 0.0297 | 0.8397 | 0.9225 | 0.8792 | 142 | 0.7748 | 0.9556 | 0.8557 | 90 | 0.9636 | 0.9799 | 0.9717 | 648 | 0.9585 | 0.9788 | 0.9685 | 330 | 0.5435 | 0.7075 | 0.6148 | 106 | 0.8922 | 0.9498 | 0.9201 | 0.9953 | | 0.0016 | 7.0 | 24479 | 0.0297 | 0.8302 | 0.9296 | 0.8771 | 142 | 0.7925 | 0.9333 | 0.8571 | 90 | 0.9652 | 0.9830 | 0.9740 | 648 | 0.9467 | 0.9697 | 0.9581 | 330 | 0.5746 | 0.7264 | 0.6417 | 106 | 0.8948 | 0.9498 | 0.9215 | 0.9955 | | 0.0016 | 8.0 | 27976 | 0.0298 | 0.8212 | 0.8732 | 0.8464 | 142 | 0.8095 | 0.9444 | 0.8718 | 90 | 0.9666 | 0.9815 | 0.9740 | 648 | 0.9524 | 0.9697 | 0.9610 | 330 | 0.5746 | 0.7264 | 0.6417 | 106 | 0.8974 | 0.9438 | 0.9200 | 0.9955 | | 0.0011 | 9.0 | 31473 | 0.0319 | 0.7949 | 0.8732 | 0.8322 | 142 | 0.7788 | 0.9 | 0.8351 | 90 | 0.9650 | 0.9799 | 0.9724 | 648 | 0.9467 | 0.9697 | 0.9581 | 330 | 0.6016 | 0.7264 | 0.6581 | 106 | 0.8938 | 0.9400 | 0.9163 | 0.9954 | | 0.0011 | 10.0 | 34970 | 0.0324 | 0.8141 | 0.8944 | 0.8523 | 142 | 0.7524 | 0.8778 | 0.8103 | 90 | 0.9680 | 0.9815 | 0.9747 | 648 | 0.9494 | 0.9667 | 0.9580 | 330 | 0.5878 | 0.7264 | 0.6498 | 106 | 0.8939 | 0.9407 | 0.9167 | 0.9954 | ### Framework versions - Transformers 4.28.0.dev0 - Pytorch 2.0.0+cu118 - Datasets 2.11.0 - Tokenizers 0.13.3
Babelscape/wikineural-multilingual-ner
[ "pytorch", "tensorboard", "safetensors", "bert", "token-classification", "de", "en", "es", "fr", "it", "nl", "pl", "pt", "ru", "multilingual", "dataset:Babelscape/wikineural", "transformers", "named-entity-recognition", "sequence-tagger-model", "license:cc-by-nc-sa-4.0", "autotrain_compatible" ]
token-classification
{ "architectures": [ "BertForTokenClassification" ], "model_type": "bert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
41,608
2023-04-12T01:29:08Z
--- pipeline_tag: image-classification tags: - art --- Models for [Sketch2Image](https://github.com/GreeneryScenery/Sketch2Image).
Bagus/wav2vec2-large-xlsr-bahasa-indonesia
[ "pytorch", "wav2vec2", "automatic-speech-recognition", "el", "dataset:common_voice_id_6.1", "transformers", "audio", "speech", "bahasa-indonesia", "license:apache-2.0" ]
automatic-speech-recognition
{ "architectures": [ "Wav2Vec2ForCTC" ], "model_type": "wav2vec2", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
12
2023-04-12T01:41:02Z
--- tags: - generated_from_keras_callback model-index: - name: pegasus-large-musicLyrics results: [] --- <!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # pegasus-large-musicLyrics This model is a fine-tuned version of [google/pegasus-large](https://huggingface.co/google/pegasus-large) on an unknown dataset. It achieves the following results on the evaluation set: - Train Loss: 1.0472 - Train Sparse Categorical Accuracy: 0.7971 - Validation Loss: 1.0387 - Validation Sparse Categorical Accuracy: 0.7992 - Epoch: 2 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': None, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': True, 'is_legacy_optimizer': False, 'learning_rate': 5e-05, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False} - training_precision: float32 ### Training results | Train Loss | Train Sparse Categorical Accuracy | Validation Loss | Validation Sparse Categorical Accuracy | Epoch | |:----------:|:---------------------------------:|:---------------:|:--------------------------------------:|:-----:| | 1.4713 | 0.7524 | 1.0652 | 0.7960 | 0 | | 1.0824 | 0.7934 | 1.0482 | 0.7982 | 1 | | 1.0472 | 0.7971 | 1.0387 | 0.7992 | 2 | ### Framework versions - Transformers 4.27.4 - TensorFlow 2.12.0 - Datasets 2.11.0 - Tokenizers 0.13.3
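The card gives no inference example; whatever generation task this Keras fine-tune targets, the basic seq2seq call with the TensorFlow classes would look roughly like this (the model id is a placeholder for the pushed checkpoint).

```python
from transformers import AutoTokenizer, TFAutoModelForSeq2SeqLM

# Placeholder model id for the fine-tuned checkpoint
model_id = "<user>/pegasus-large-musicLyrics"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = TFAutoModelForSeq2SeqLM.from_pretrained(model_id)

inputs = tokenizer("some input lyrics or text here", return_tensors="tf", truncation=True)
output_ids = model.generate(**inputs, max_new_tokens=64, num_beams=4)
print(tokenizer.batch_decode(output_ids, skip_special_tokens=True)[0])
```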
BatuhanYilmaz/distilbert-base-uncased-finetuned-squad-d5716d28
[ "pytorch", "distilbert", "fill-mask", "en", "dataset:squad", "arxiv:1910.01108", "transformers", "question-answering", "license:apache-2.0", "autotrain_compatible" ]
question-answering
{ "architectures": [ "DistilBertForMaskedLM" ], "model_type": "distilbert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
18
2023-04-12T02:31:56Z
--- tags: - generated_from_trainer datasets: - wikiann metrics: - precision - recall - f1 - accuracy model-index: - name: bert-base-chinese-wikiann-zh-ner results: - task: name: Token Classification type: token-classification dataset: name: wikiann type: wikiann config: zh split: validation args: zh metrics: - name: Precision type: precision value: 0.7890612756621219 - name: Recall type: recall value: 0.8060513887777155 - name: F1 type: f1 value: 0.797465848346862 - name: Accuracy type: accuracy value: 0.9432393178410795 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bert-base-chinese-wikiann-zh-ner This model is a fine-tuned version of [bert-base-chinese](https://huggingface.co/bert-base-chinese) on the wikiann dataset. It achieves the following results on the evaluation set: - Loss: 0.2092 - Precision: 0.7891 - Recall: 0.8061 - F1: 0.7975 - Accuracy: 0.9432 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 2 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:| | 0.842 | 0.16 | 400 | 0.3530 | 0.5535 | 0.6872 | 0.6131 | 0.8927 | | 0.32 | 0.32 | 800 | 0.2800 | 0.6929 | 0.6749 | 0.6838 | 0.9190 | | 0.2928 | 0.48 | 1200 | 0.2438 | 0.7031 | 0.7661 | 0.7333 | 0.9301 | | 0.245 | 0.64 | 1600 | 0.2525 | 0.6959 | 0.7919 | 0.7408 | 0.9280 | | 0.2236 | 0.8 | 2000 | 0.2315 | 0.7441 | 0.7503 | 0.7472 | 0.9342 | | 0.2444 | 0.96 | 2400 | 0.2119 | 0.7719 | 0.7675 | 0.7697 | 0.9379 | | 0.1899 | 1.12 | 2800 | 0.2267 | 0.7531 | 0.8062 | 0.7788 | 0.9387 | | 0.1649 | 1.28 | 3200 | 0.2249 | 0.7519 | 0.8202 | 0.7846 | 0.9395 | | 0.1521 | 1.44 | 3600 | 0.2220 | 0.7778 | 0.8032 | 0.7903 | 0.9413 | | 0.1787 | 1.6 | 4000 | 0.2185 | 0.7879 | 0.7860 | 0.7869 | 0.9417 | | 0.146 | 1.76 | 4400 | 0.2134 | 0.7721 | 0.8128 | 0.7919 | 0.9416 | | 0.1557 | 1.92 | 4800 | 0.2111 | 0.7857 | 0.8101 | 0.7977 | 0.9429 | ### Framework versions - Transformers 4.27.4 - Pytorch 2.0.0+cu118 - Datasets 2.11.0 - Tokenizers 0.13.3
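For quick inference with this kind of token-classification checkpoint, the Transformers NER pipeline is the usual route; the model id below is a placeholder for wherever this fine-tuned model was pushed, and the sentence is only an illustration.

```python
from transformers import pipeline

# Placeholder model id for the fine-tuned checkpoint
ner = pipeline(
    "token-classification",
    model="<user>/bert-base-chinese-wikiann-zh-ner",
    aggregation_strategy="simple",  # merge word pieces into whole entity spans
)

for ent in ner("小米公司的总部位于北京。"):
    print(ent["entity_group"], ent["word"], round(ent["score"], 3))
```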
BatuhanYilmaz/marian-finetuned-kde4-en-to-fr
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
2023-04-12T02:33:28Z
--- library_name: stable-baselines3 tags: - SpaceInvadersNoFrameskip-v4 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: DQN results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: SpaceInvadersNoFrameskip-v4 type: SpaceInvadersNoFrameskip-v4 metrics: - type: mean_reward value: 106.00 +/- 2.00 name: mean_reward verified: false --- # **DQN** Agent playing **SpaceInvadersNoFrameskip-v4** This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3) and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo). The RL Zoo is a training framework for Stable Baselines3 reinforcement learning agents, with hyperparameter optimization and pre-trained agents included. ## Usage (with SB3 RL Zoo) RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/> SB3: https://github.com/DLR-RM/stable-baselines3<br/> SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib Install the RL Zoo (with SB3 and SB3-Contrib): ```bash pip install rl_zoo3 ``` ``` # Download model and save it into the logs/ folder python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga pablomaya -f logs/ python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ ``` If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do: ``` python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga pablomaya -f logs/ python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ ``` ## Training (with the RL Zoo) ``` python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ # Upload the model and generate video (when possible) python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga pablomaya ``` ## Hyperparameters ```python OrderedDict([('batch_size', 32), ('buffer_size', 100000), ('env_wrapper', ['stable_baselines3.common.atari_wrappers.AtariWrapper']), ('exploration_final_eps', 0.01), ('exploration_fraction', 0.1), ('frame_stack', 4), ('gradient_steps', 1), ('learning_rate', 0.0001), ('learning_starts', 100000), ('n_timesteps', 1000.0), ('optimize_memory_usage', False), ('policy', 'CnnPolicy'), ('target_update_interval', 1000), ('train_freq', 4), ('normalize', False)]) ```
BhanuSama/gpt2-finetuned-xsum
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
2023-04-12T03:24:49Z
--- library_name: stable-baselines3 tags: - SpaceInvadersNoFrameskip-v4 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: DQN results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: SpaceInvadersNoFrameskip-v4 type: SpaceInvadersNoFrameskip-v4 metrics: - type: mean_reward value: 584.00 +/- 104.33 name: mean_reward verified: false --- # **DQN** Agent playing **SpaceInvadersNoFrameskip-v4** This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3) and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo). The RL Zoo is a training framework for Stable Baselines3 reinforcement learning agents, with hyperparameter optimization and pre-trained agents included. ## Usage (with SB3 RL Zoo) RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/> SB3: https://github.com/DLR-RM/stable-baselines3<br/> SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib Install the RL Zoo (with SB3 and SB3-Contrib): ```bash pip install rl_zoo3 ``` ``` # Download model and save it into the logs/ folder python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga MohammedEltoum -f logs/ python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ ``` If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do: ``` python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga MohammedEltoum -f logs/ python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ ``` ## Training (with the RL Zoo) ``` python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ # Upload the model and generate video (when possible) python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga MohammedEltoum ``` ## Hyperparameters ```python OrderedDict([('batch_size', 32), ('buffer_size', 100000), ('env_wrapper', ['stable_baselines3.common.atari_wrappers.AtariWrapper']), ('exploration_final_eps', 0.01), ('exploration_fraction', 0.1), ('frame_stack', 4), ('gradient_steps', 1), ('learning_rate', 0.0001), ('learning_starts', 100000), ('n_timesteps', 1000000.0), ('optimize_memory_usage', False), ('policy', 'CnnPolicy'), ('target_update_interval', 1000), ('train_freq', 4), ('normalize', False)]) ```
BigSalmon/FormalBerta
[ "pytorch", "roberta", "fill-mask", "transformers", "autotrain_compatible" ]
fill-mask
{ "architectures": [ "RobertaForMaskedLM" ], "model_type": "roberta", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
10
null
--- tags: - Taxi-v3 - q-learning - reinforcement-learning - custom-implementation model-index: - name: Taxi-v3 results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: Taxi-v3 type: Taxi-v3 metrics: - type: mean_reward value: 7.50 +/- 2.78 name: mean_reward verified: false --- # **Q-Learning** Agent playing **Taxi-v3** This is a trained model of a **Q-Learning** agent playing **Taxi-v3**. ## Usage ```python model = load_from_hub(repo_id="xnpeng/Taxi-v3", filename="q-learning.pkl") # Don't forget to check if you need to add additional attributes (is_slippery=False etc) env = gym.make(model["env_id"]) ```
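Building on the snippet above, a minimal sketch of rolling out the greedy policy. Two assumptions to flag: the pickle is assumed to store the Q-table under a `"qtable"` key (adjust if your artifact uses a different name), and the loop is written against the classic `gym` step API that returns four values — newer gym/gymnasium releases return `(obs, info)` from `reset` and five values from `step`.

```python
import gym
import numpy as np

# `model` comes from load_from_hub(...) as in the usage snippet above.
env = gym.make(model["env_id"])
qtable = np.array(model["qtable"])  # assumed key name; adjust if the pickle differs

state = env.reset()
total_reward, done = 0.0, False
while not done:
    action = int(np.argmax(qtable[state]))        # greedy action from the learned Q-values
    state, reward, done, info = env.step(action)  # classic 4-tuple gym API
    total_reward += reward

print("episode return:", total_reward)
```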
BigSalmon/GPTIntro
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
2023-04-12T04:15:00Z
--- license: apache-2.0 inference: false --- **NOTE: This "delta model" cannot be used directly.** Users have to apply it on top of the original LLaMA weights to get actual Vicuna weights. See https://github.com/lm-sys/FastChat#vicuna-weights for instructions. <br> <br> # Vicuna Model Card ## Model details **Model type:** Vicuna is an open-source chatbot trained by fine-tuning LLaMA on user-shared conversations collected from ShareGPT. It is an auto-regressive language model, based on the transformer architecture. **Model date:** Vicuna was trained between March 2023 and April 2023. **Organizations developing the model:** The Vicuna team with members from UC Berkeley, CMU, Stanford, and UC San Diego. **Paper or resources for more information:** https://vicuna.lmsys.org/ **License:** Apache License 2.0 **Where to send questions or comments about the model:** https://github.com/lm-sys/FastChat/issues ## Intended use **Primary intended uses:** The primary use of Vicuna is research on large language models and chatbots. **Primary intended users:** The primary intended users of the model are researchers and hobbyists in natural language processing, machine learning, and artificial intelligence. ## Training dataset 70K conversations collected from ShareGPT.com. ## Evaluation dataset A preliminary evaluation of the model quality is conducted by creating a set of 80 diverse questions and utilizing GPT-4 to judge the model outputs. See https://vicuna.lmsys.org/ for more details. ## Major updates of weights v1.1 - Refactor the tokenization and separator. In Vicuna v1.1, the separator has been changed from `"###"` to the EOS token `"</s>"`. This change makes it easier to determine the generation stop criteria and enables better compatibility with other libraries. - Fix the supervised fine-tuning loss computation for better model quality.
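## Example usage after applying the delta (sketch)

Once the delta has been applied to the original LLaMA weights following the FastChat instructions linked above, the merged model loads like any Hugging Face causal LM. The local path below is illustrative, and the single-turn prompt follows the v1.1 `USER:`/`ASSISTANT:` conversation style used by FastChat (an assumption worth verifying against the FastChat repo).

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Illustrative path to the merged (LLaMA base + Vicuna delta) weights produced by FastChat.
merged_path = "/path/to/vicuna-merged"

tokenizer = AutoTokenizer.from_pretrained(merged_path, use_fast=False)
model = AutoModelForCausalLM.from_pretrained(merged_path)

# Single-turn prompt in the v1.1 conversation style (assumption -- check the FastChat repo).
prompt = "USER: Summarize the difference between supervised and unsupervised learning. ASSISTANT:"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```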
BigSalmon/GPTNeo350MInformalToFormalLincoln2
[ "pytorch", "gpt_neo", "text-generation", "transformers", "has_space" ]
text-generation
{ "architectures": [ "GPTNeoForCausalLM" ], "model_type": "gpt_neo", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
8
null
--- license: apache-2.0 tags: - generated_from_trainer datasets: - emotion metrics: - accuracy - f1 model-index: - name: distilbert-base-uncased-finetuned-emotion results: - task: name: Text Classification type: text-classification dataset: name: emotion type: emotion config: split split: validation args: split metrics: - name: Accuracy type: accuracy value: 0.926 - name: F1 type: f1 value: 0.9261570669458271 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-finetuned-emotion This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset. It achieves the following results on the evaluation set: - Loss: 0.2262 - Accuracy: 0.926 - F1: 0.9262 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 64 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | |:-------------:|:-----:|:----:|:---------------:|:--------:|:------:| | 0.837 | 1.0 | 250 | 0.3302 | 0.9015 | 0.8980 | | 0.2559 | 2.0 | 500 | 0.2262 | 0.926 | 0.9262 | ### Framework versions - Transformers 4.28.0 - Pytorch 2.0.0+cu118 - Datasets 2.11.0 - Tokenizers 0.13.3
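## Example usage (sketch)

A minimal sketch of scoring a sentence with this emotion classifier. The repository id is a placeholder (the card does not state the hub id), and `top_k=None` asks the pipeline to return a score for every label.

```python
from transformers import pipeline

# Placeholder repo id -- substitute the hub id of this fine-tuned emotion checkpoint.
model_id = "your-namespace/distilbert-base-uncased-finetuned-emotion"

classifier = pipeline("text-classification", model=model_id, top_k=None)

scores = classifier("I can't believe we finally won the championship!")[0]
# The emotion dataset has six labels: sadness, joy, love, anger, fear, surprise.
print(sorted(scores, key=lambda s: s["score"], reverse=True)[:3])
```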
BigSalmon/GPTNeo350MInformalToFormalLincoln4
[ "pytorch", "gpt_neo", "text-generation", "transformers", "has_space" ]
text-generation
{ "architectures": [ "GPTNeoForCausalLM" ], "model_type": "gpt_neo", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
11
null
--- license: apache-2.0 --- ## convert vicuna-7b to ggml-vicuna-7b-f16 Source : https://huggingface.co/chharlesonfire/vicuna-7b No unnecessary changes Same format No quantization
BigSalmon/GPTNeo350MInformalToFormalLincoln6
[ "pytorch", "gpt_neo", "text-generation", "transformers", "has_space" ]
text-generation
{ "architectures": [ "GPTNeoForCausalLM" ], "model_type": "gpt_neo", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
14
null
--- tags: - unity-ml-agents - ml-agents - deep-reinforcement-learning - reinforcement-learning - ML-Agents-SoccerTwos library_name: ml-agents --- # **poca** Agent playing **SoccerTwos** This is a trained model of a **poca** agent playing **SoccerTwos** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents). ## Usage (with ML-Agents) The Documentation: https://github.com/huggingface/ml-agents#get-started We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub: ### Resume the training ``` mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume ``` ### Watch your Agent play You can watch your agent **playing directly in your browser:**. 1. Go to https://huggingface.co/spaces/unity/ML-Agents-SoccerTwos 2. Step 1: Write your model_id: ManishW/SoccerTwos_v0 3. Step 2: Select your *.nn /*.onnx file 4. Click on Watch the agent play 👀
BigSalmon/InformalToFormalLincoln14
[ "pytorch", "gpt2", "text-generation", "transformers" ]
text-generation
{ "architectures": [ "GPT2LMHeadModel" ], "model_type": "gpt2", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": true, "max_length": 50 }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
5
null
--- tags: - LunarLander-v2 - ppo - deep-reinforcement-learning - reinforcement-learning - custom-implementation - deep-rl-course model-index: - name: PPO results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: LunarLander-v2 type: LunarLander-v2 metrics: - type: mean_reward value: 96.70 +/- 99.97 name: mean_reward verified: false --- # PPO Agent Playing LunarLander-v2 This is a trained model of a PPO agent playing LunarLander-v2. # Hyperparameters ```python {'experiment_name': 'final3' 'env_name': 'LunarLander-v2' 'learning_rate': 0.0003 'seed': 1 'total_timesteps': 125000 'num_envs': 4 'num_steps': 128 'torch_deterministic': True 'cuda': True 'anneal_lr': True 'gae': True 'gamma': 0.99 'gae_lambda': 0.95 'num_minibatches': 4 'update_epochs': 10 'norm_adv': True 'clip_coef': 0.2 'clip_vloss': True 'ent_coef': 0.01 'vf_coef': 0.5 'max_grad_norm': 0.5 'save_video': True 'batch_size': 512 'mini_batch_size': 128} ```
BigSalmon/InformalToFormalLincoln15
[ "pytorch", "gpt2", "text-generation", "transformers" ]
text-generation
{ "architectures": [ "GPT2LMHeadModel" ], "model_type": "gpt2", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": true, "max_length": 50 }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
11
2023-04-12T04:43:49Z
--- license: mit language: - en pipeline_tag: text-classification tags: - emotion - 20 classes - code - emotions widget: - text: I'm so angry right now. I can't believe he did that to me. example_title: anger - text: I'm feeling disgusted by the smell of this food. example_title: disgust - text: I'm feeling very afraid of what might happen next. example_title: fear - text: I'm so joyful right now! This is the best day of my life. example_title: joy - text: >- I'm feeling neutral about this situation. I don't really care one way or another. example_title: neutral - text: I'm feeling really sad today after my dog passed away." example_title: sadness - text: I'm so surprised by what just happened! I never saw that coming. example_title: surprise - text: I'm feeling cheeky today. I'm going to play a little prank on my friend. example_title: cheeky - text: I'm feeling confused about what to do next. I need some guidance. example_title: confuse - text: I'm feeling curious about the world around me. There's so much to learn! example_title: curious - text: I'm feeling empathetic towards my friend who is going through a tough time. example_title: empathetic - text: I'm feeling grumpy today. Everything is annoying me! example_title: grumpy - text: I'm feeling guilty about what I did. I wish I could take it back. example_title: guilty - text: I'm feeling very energetic today. I'm ready to take on the world! example_title: energetic - text: I'm feeling impatient waiting for this movie to start. example_title: impatient - text: >- I'm feeling so much love for my family right now. They mean everything to me. example_title: love - text: I'm thinking about my future and what I want to achieve. example_title: think - text: >- I'm feeling serious about this issue. It's important and needs to be addressed. example_title: serious - text: >- I'm feeling suspicious of what he's telling me. I think he's hiding something. example_title: suspicious - text: I'm feeling whiny today. Everything is bothering me! example_title: whiny - text: I love football so much example_title: love 2 - text: I'm reflecting on my experiences to gain insights example_title: think 2 - text: >- I borrowed money from a friend and haven't paid it back yet. Now I feel ashamed. example_title: guilty 2 - text: I'm starting to think that he's up to something. example_title: suspicious 2 - text: We need to approach this matter with a sense of purpose example_title: serious 2 --- # Emotion classification from 20 classes ## 20 Emotion labels | id | label | | --- | ---------- | | 0 | anger | | 1 | cheeky | | 2 | confuse | | 3 | curious | | 4 | disgust | | 5 | empathetic | | 6 | energetic | | 7 | fear | | 8 | grumpy | | 9 | guilty | | 10 | impatient | | 11 | joy | | 12 | love | | 13 | neutral | | 14 | sadness | | 15 | serious | | 16 | surprise | | 17 | suspicious | | 18 | think | | 19 | whiny | ## How to use Here is how to use this model to get the emotion label of a given text: ```python from transformers import AutoModelForSequenceClassification, pipeline model_name = 'jitesh/emotion-english' model = AutoModelForSequenceClassification.from_pretrained(model_name) classifier = pipeline("text-classification", model=model, tokenizer=model_name) text = "I can't wait any longer " prediction = classifier(text) print(prediction[0], text) ``` The above code outputs the following line. ```bash {'label': 'impatient', 'score': 0.924211859703064} I can't wait any longer ```
BigSalmon/InformalToFormalLincoln17
[ "pytorch", "gpt2", "text-generation", "transformers" ]
text-generation
{ "architectures": [ "GPT2LMHeadModel" ], "model_type": "gpt2", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": true, "max_length": 50 }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
12
2023-04-12T04:55:39Z
--- license: apache-2.0 --- ## convert ggml-vicuna-7b-f16 to ggml-vicuna-7b-q4_0 Source: https://huggingface.co/chharlesonfire/ggml-vicuna-7b-f16 No unnecessary changes ## Usage: 1. Download llama.cpp from https://github.com/ggerganov/llama.cpp 2. make and run llama.cpp and choose model with ggml-vicuna-7b-q4_0.bin
BigSalmon/MrLincoln10
[ "pytorch", "tensorboard", "gpt2", "text-generation", "transformers" ]
text-generation
{ "architectures": [ "GPT2LMHeadModel" ], "model_type": "gpt2", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": true, "max_length": 50 }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
5
null
Access to model BMILab/TCR-BERT-MLM is restricted and you are not in the authorized list. Visit https://huggingface.co/BMILab/TCR-BERT-MLM to ask for access.
BigSalmon/MrLincoln2
[ "pytorch", "tensorboard", "gpt2", "text-generation", "transformers" ]
text-generation
{ "architectures": [ "GPT2LMHeadModel" ], "model_type": "gpt2", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": true, "max_length": 50 }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
9
null
--- license: apache-2.0 tags: - generated_from_trainer datasets: - superb metrics: - accuracy model-index: - name: wav2vec2-base-finetuned-ks results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # wav2vec2-base-finetuned-ks This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the superb dataset. It achieves the following results on the evaluation set: - Loss: 0.0920 - Accuracy: 0.9831 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 3e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 128 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 0.7143 | 1.0 | 399 | 0.6080 | 0.9187 | | 0.29 | 2.0 | 798 | 0.1861 | 0.9751 | | 0.2249 | 3.0 | 1197 | 0.1158 | 0.9813 | | 0.1649 | 4.0 | 1597 | 0.0999 | 0.9796 | | 0.1302 | 5.0 | 1995 | 0.0920 | 0.9831 | ### Framework versions - Transformers 4.27.4 - Pytorch 2.0.0+cu118 - Datasets 2.11.0 - Tokenizers 0.13.3
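## Example usage (sketch)

A minimal sketch of running this keyword-spotting checkpoint through the audio-classification pipeline. The repository id and the audio file name are placeholders; the model expects 16 kHz mono audio, matching the Speech Commands subset of SUPERB.

```python
from transformers import pipeline

# Placeholder repo id -- substitute the hub id of this keyword-spotting checkpoint.
model_id = "your-namespace/wav2vec2-base-finetuned-ks"

classifier = pipeline("audio-classification", model=model_id)

# Placeholder file; pass a path/URL to a 16 kHz mono clip, or a raw numpy array of samples.
preds = classifier("sample_command.wav", top_k=3)
print(preds)  # e.g. [{"label": "yes", "score": 0.97}, ...]
```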
BigSalmon/MrLincoln7
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
null
--- tags: - LunarLander-v2 - ppo - deep-reinforcement-learning - reinforcement-learning - custom-implementation - deep-rl-course model-index: - name: PPO results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: LunarLander-v2 type: LunarLander-v2 metrics: - type: mean_reward value: -153.95 +/- 91.84 name: mean_reward verified: false --- # PPO Agent Playing LunarLander-v2 This is a trained model of a PPO agent playing LunarLander-v2. # Hyperparameters ```python {'exp_name': 'ppo' 'seed': 1 'torch_deterministic': True 'cuda': True 'track': False 'wandb_project_name': 'cleanRL' 'wandb_entity': None 'capture_video': False 'env_id': 'LunarLander-v2' 'total_timesteps': 50000 'learning_rate': 0.00025 'num_envs': 4 'num_steps': 128 'anneal_lr': True 'gae': True 'gamma': 0.99 'gae_lambda': 0.95 'num_minibatches': 4 'update_epochs': 4 'norm_adv': True 'clip_coef': 0.2 'clip_vloss': True 'ent_coef': 0.01 'vf_coef': 0.5 'max_grad_norm': 0.5 'target_kl': None 'repo_id': 'justinsiow/ppo-LunarLander-v2-scratch' 'batch_size': 512 'minibatch_size': 128} ```
BigeS/DialoGPT-small-Rick
[ "pytorch", "gpt2", "text-generation", "transformers", "conversational" ]
conversational
{ "architectures": [ "GPT2LMHeadModel" ], "model_type": "gpt2", "task_specific_params": { "conversational": { "max_length": 1000 }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
10
null
--- license: other --- # 聲明 Disclaimer 本資料夾中的模型不是我所製作,版權歸原作者所有(各模型版權詳見 http://www.civitai.com 所示)。我上傳至本資料夾僅爲方便在綫抽取資源,并非盈利。 The models in this folder are not made by me, and the copyright belongs to the original author (see http://www.civitai.com for details on the copyright of each model). I uploaded to this folder only for the convenience of extracting resources online, not for profit. # 模型列表 List of Models 本資料夾中所有模型詳見下表。 All the models in this folder are detailed in the table below. | 模型名稱 Model Name | Civitai 頁面鏈接 Civitai Page Link | Civitai 下載鏈接 Civitai Download Link | |----------------------|--------------------|--------------------| |realdosmix.safetensors |https://civitai.com/models/6925/realdosmix |https://civitai.com/api/download/models/8137 | <img src="https://raw.githubusercontent.com/hanafuusen/images/main/realdosmix_civitai.jpg" width="" height="">
BillelBenoudjit/jplu-wikiann
[ "fr", "dataset:wikiann", "model-index" ]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
null
--- license: apache-2.0 tags: - generated_from_trainer metrics: - wer model-index: - name: wav2vec2-base-mscd1 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # wav2vec2-base-mscd1 This model is a fine-tuned version of [facebook/wav2vec2-base](https://huggingface.co/facebook/wav2vec2-base) on the None dataset. It achieves the following results on the evaluation set: - Loss: 1.8065 - Wer: 0.3824 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 1000 - num_epochs: 300 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:------:|:----:|:---------------:|:------:| | 6.6008 | 41.67 | 500 | 4.7974 | 1.0 | | 1.2892 | 83.33 | 1000 | 1.7488 | 0.5 | | 0.1991 | 125.0 | 1500 | 1.3122 | 0.4412 | | 0.109 | 166.67 | 2000 | 1.3265 | 0.3235 | | 0.071 | 208.33 | 2500 | 1.1280 | 0.3529 | | 0.0508 | 250.0 | 3000 | 1.6514 | 0.3529 | | 0.0365 | 291.67 | 3500 | 1.8065 | 0.3824 | ### Framework versions - Transformers 4.24.0 - Pytorch 2.0.0+cu118 - Datasets 1.18.3 - Tokenizers 0.13.3
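## Example usage (sketch)

A minimal sketch of transcribing audio with this fine-tuned checkpoint via the automatic-speech-recognition pipeline. The repository id and file name are placeholders; wav2vec2-base models expect 16 kHz mono input.

```python
from transformers import pipeline

# Placeholder repo id -- substitute the hub id of this fine-tuned ASR checkpoint.
model_id = "your-namespace/wav2vec2-base-mscd1"

asr = pipeline("automatic-speech-recognition", model=model_id)

# Placeholder file; a local path, URL, or numpy array of 16 kHz samples all work.
result = asr("recording.wav")
print(result["text"])
```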
Bimal/my_bot_model
[ "pytorch", "gpt2", "text-generation", "transformers", "conversational" ]
conversational
{ "architectures": [ "GPT2LMHeadModel" ], "model_type": "gpt2", "task_specific_params": { "conversational": { "max_length": 1000 }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
10
null
--- license: mit --- ### pemhHiS is a Tsinghua University ChatGLM-6B model fine-tuned on 500 pages of data from the island imageboard's general board, plus the "Huanle Egao" (comedy/parody) board and ACG encyclopedia entries. Fine-tuned with LoRA. Example thread here: [https://www.nmbxd1.com/t/56735335]
Binbin/test
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
null
I want a photo for my YouTube channel that has my name in letters and an image of Sweet from GTA San Andreas; the image has to say ADIN MODS.
Biniam/en_ti_translate
[ "pytorch", "marian", "text2text-generation", "transformers", "translation", "autotrain_compatible" ]
translation
{ "architectures": [ "MarianMTModel" ], "model_type": "marian", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
14
2023-04-12T06:14:42Z
--- license: cc-by-nc-4.0 language: - en pipeline_tag: text-generation widget: - text: >- Below is an instruction that describes a task. Write a response that appropriately completes the request. ### Instruction: how can I become more healthy? ### Response: example_title: example --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> <p align="center" width="100%"> <a><img src="https://raw.githubusercontent.com/mbzuai-nlp/lamini-lm/main/images/lamini.png" alt="Title" style="width: 100%; min-width: 300px; display: block; margin: auto;"></a> </p> # LaMini-Cerebras-256M [![Model License](https://img.shields.io/badge/Model%20License-CC%20By%20NC%204.0-red.svg)]() This model is one of our LaMini-LM model series in paper "[LaMini-LM: A Diverse Herd of Distilled Models from Large-Scale Instructions](https://github.com/mbzuai-nlp/lamini-lm)". This model is a fine-tuned version of [cerebras/Cerebras-GPT-256M](https://huggingface.co/cerebras/Cerebras-GPT-256M) on [LaMini-instruction dataset](https://huggingface.co/datasets/MBZUAI/LaMini-instruction) that contains 2.58M samples for instruction fine-tuning. For more information about our dataset, please refer to our [project repository](https://github.com/mbzuai-nlp/lamini-lm/). You can view other models of LaMini-LM series as follows. Models with ✩ are those with the best overall performance given their size/architecture, hence we recommend using them. More details can be seen in our paper. <table> <thead> <tr> <th>Base model</th> <th colspan="4">LaMini-LM series (#parameters)</th> </tr> </thead> <tbody> <tr> <td>T5</td> <td><a href="https://huggingface.co/MBZUAI/lamini-t5-61m" target="_blank" rel="noopener noreferrer">LaMini-T5-61M</a></td> <td><a href="https://huggingface.co/MBZUAI/lamini-t5-223m" target="_blank" rel="noopener noreferrer">LaMini-T5-223M</a></td> <td><a href="https://huggingface.co/MBZUAI/lamini-t5-738m" target="_blank" rel="noopener noreferrer">LaMini-T5-738M</a></td> <td></td> </tr> <tr> <td>Flan-T5</td> <td><a href="https://huggingface.co/MBZUAI/lamini-flan-t5-77m" target="_blank" rel="noopener noreferrer">LaMini-Flan-T5-77M</a>✩</td> <td><a href="https://huggingface.co/MBZUAI/lamini-flan-t5-248m" target="_blank" rel="noopener noreferrer">LaMini-Flan-T5-248M</a>✩</td> <td><a href="https://huggingface.co/MBZUAI/lamini-flan-t5-783m" target="_blank" rel="noopener noreferrer">LaMini-Flan-T5-783M</a>✩</td> <td></td> </tr> <tr> <td>Cerebras-GPT</td> <td><a href="https://huggingface.co/MBZUAI/lamini-cerebras-111m" target="_blank" rel="noopener noreferrer">LaMini-Cerebras-111M</a></td> <td><a href="https://huggingface.co/MBZUAI/lamini-cerebras-256m" target="_blank" rel="noopener noreferrer">LaMini-Cerebras-256M</a></td> <td><a href="https://huggingface.co/MBZUAI/lamini-cerebras-590m" target="_blank" rel="noopener noreferrer">LaMini-Cerebras-590M</a></td> <td><a href="https://huggingface.co/MBZUAI/lamini-cerebras-1.3b" target="_blank" rel="noopener noreferrer">LaMini-Cerebras-1.3B</a></td> </tr> <tr> <td>GPT-2</td> <td><a href="https://huggingface.co/MBZUAI/lamini-gpt-124m" target="_blank" rel="noopener noreferrer">LaMini-GPT-124M</a>✩</td> <td><a href="https://huggingface.co/MBZUAI/lamini-gpt-774m" target="_blank" rel="noopener noreferrer">LaMini-GPT-774M</a>✩</td> <td><a href="https://huggingface.co/MBZUAI/lamini-gpt-1.5b" target="_blank" rel="noopener noreferrer">LaMini-GPT-1.5B</a>✩</td> <td></td> 
</tr> <tr> <td>GPT-Neo</td> <td><a href="https://huggingface.co/MBZUAI/lamini-neo-125m" target="_blank" rel="noopener noreferrer">LaMini-Neo-125M</a></td> <td><a href="https://huggingface.co/MBZUAI/lamini-neo-1.3b" target="_blank" rel="noopener noreferrer">LaMini-Neo-1.3B</a></td> <td></td> <td></td> </tr> <tr> <td>GPT-J</td> <td colspan="4">coming soon</td> </tr> <tr> <td>LLaMA</td> <td colspan="4">coming soon</td> </tr> </tbody> </table> ## Use ### Intended use We recommend using the model to respond to human instructions written in natural language. Since this decoder-only model is fine-tuned with wrapper text, we suggest using the same wrapper text to achieve the best performance. See the example on the right or the code below. We now show you how to load and use our model using HuggingFace `pipeline()`. ```python # pip install -q transformers from transformers import pipeline checkpoint = "{model_name}" model = pipeline('text-generation', model = checkpoint) instruction = 'Please let me know your thoughts on the given place and why you think it deserves to be visited: \n"Barcelona, Spain"' input_prompt = f"Below is an instruction that describes a task. Write a response that appropriately completes the request.\n\n### Instruction:\n{instruction}\n\n### Response:" generated_text = model(input_prompt, max_length=512, do_sample=True)[0]['generated_text'] print("Response", generated_text) ``` ## Training Procedure <p align="center" width="100%"> <a><img src="https://raw.githubusercontent.com/mbzuai-nlp/lamini-lm/main/images/lamini-pipeline.drawio.png" alt="Title" style="width: 100%; min-width: 250px; display: block; margin: auto;"></a> </p> We initialize with [cerebras/Cerebras-GPT-256M](https://huggingface.co/cerebras/Cerebras-GPT-256M) and fine-tune it on our [LaMini-instruction dataset](https://huggingface.co/datasets/MBZUAI/LaMini-instruction). Its total number of parameters is 256M. ### Training Hyperparameters ## Evaluation We conducted two sets of evaluations: automatic evaluation on downstream NLP tasks and human evaluation on user-oriented instructions. For more detail, please refer to our [paper](). ## Limitations More information needed # Citation ```bibtex @article{lamini-lm, author = {Minghao Wu and Abdul Waheed and Chiyu Zhang and Muhammad Abdul-Mageed and Alham Fikri Aji }, title = {LaMini-LM: A Diverse Herd of Distilled Models from Large-Scale Instructions}, journal = {CoRR}, volume = {abs/2304.14402}, year = {2023}, url = {https://arxiv.org/abs/2304.14402}, eprinttype = {arXiv}, eprint = {2304.14402} } ```
BinksSachary/DialoGPT-small-shaxx
[ "pytorch", "gpt2", "text-generation", "transformers", "conversational" ]
conversational
{ "architectures": [ "GPT2LMHeadModel" ], "model_type": "gpt2", "task_specific_params": { "conversational": { "max_length": 1000 }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
12
null
--- tags: - generated_from_trainer datasets: - wikiann metrics: - precision - recall - f1 - accuracy model-index: - name: bert-base-chinese-wikiann-zh-ner-new results: - task: name: Token Classification type: token-classification dataset: name: wikiann type: wikiann config: zh split: validation args: zh metrics: - name: Precision type: precision value: 0.7833553500660502 - name: Recall type: recall value: 0.8069318818538381 - name: F1 type: f1 value: 0.7949688510369846 - name: Accuracy type: accuracy value: 0.9435204272863568 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bert-base-chinese-wikiann-zh-ner-new This model is a fine-tuned version of [bert-base-chinese](https://huggingface.co/bert-base-chinese) on the wikiann dataset. It achieves the following results on the evaluation set: - Loss: 0.2085 - Precision: 0.7834 - Recall: 0.8069 - F1: 0.7950 - Accuracy: 0.9435 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 2 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:| | 0.773 | 0.16 | 400 | 0.3232 | 0.5816 | 0.6670 | 0.6214 | 0.9049 | | 0.3149 | 0.32 | 800 | 0.2954 | 0.6832 | 0.6923 | 0.6877 | 0.9195 | | 0.2912 | 0.48 | 1200 | 0.2418 | 0.7010 | 0.7551 | 0.7270 | 0.9299 | | 0.2446 | 0.64 | 1600 | 0.2539 | 0.7159 | 0.7743 | 0.7440 | 0.9292 | | 0.2193 | 0.8 | 2000 | 0.2330 | 0.7441 | 0.7613 | 0.7526 | 0.9351 | | 0.2434 | 0.96 | 2400 | 0.2186 | 0.7603 | 0.7696 | 0.7649 | 0.9369 | | 0.1915 | 1.12 | 2800 | 0.2245 | 0.7568 | 0.8032 | 0.7793 | 0.9398 | | 0.1607 | 1.28 | 3200 | 0.2263 | 0.7566 | 0.8138 | 0.7842 | 0.9399 | | 0.1513 | 1.44 | 3600 | 0.2228 | 0.7782 | 0.7964 | 0.7872 | 0.9414 | | 0.1777 | 1.6 | 4000 | 0.2098 | 0.7857 | 0.7916 | 0.7887 | 0.9423 | | 0.1466 | 1.76 | 4400 | 0.2132 | 0.7673 | 0.8163 | 0.7911 | 0.9418 | | 0.1528 | 1.92 | 4800 | 0.2093 | 0.7793 | 0.8114 | 0.7951 | 0.9435 | ### Framework versions - Transformers 4.27.4 - Pytorch 2.0.0+cu118 - Datasets 2.11.0 - Tokenizers 0.13.3
BinksSachary/ShaxxBot
[ "pytorch", "gpt2", "text-generation", "transformers", "conversational" ]
conversational
{ "architectures": [ "GPT2LMHeadModel" ], "model_type": "gpt2", "task_specific_params": { "conversational": { "max_length": 1000 }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
9
null
--- tags: - Taxi-v3 - q-learning - reinforcement-learning - custom-implementation model-index: - name: q-Taxi-v3 results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: Taxi-v3 type: Taxi-v3 metrics: - type: mean_reward value: 7.56 +/- 2.71 name: mean_reward verified: false --- # **Q-Learning** Agent playing **Taxi-v3** This is a trained model of a **Q-Learning** agent playing **Taxi-v3**. ## Usage ```python model = load_from_hub(repo_id="xnpeng/q-Taxi-v3", filename="q-learning.pkl") # Don't forget to check if you need to add additional attributes (is_slippery=False etc) env = gym.make(model["env_id"]) ```
BinksSachary/ShaxxBot2
[ "pytorch", "gpt2", "text-generation", "transformers", "conversational" ]
conversational
{ "architectures": [ "GPT2LMHeadModel" ], "model_type": "gpt2", "task_specific_params": { "conversational": { "max_length": 1000 }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
12
null
--- license: mit tags: - generated_from_trainer datasets: - imagefolder model-index: - name: git-base-pokemon results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # git-base-pokemon This model is a fine-tuned version of [microsoft/git-base](https://huggingface.co/microsoft/git-base) on the imagefolder dataset. It achieves the following results on the evaluation set: - Loss: 1.4530 - Wer Score: 16.5809 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 128 - eval_batch_size: 128 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 256 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 50 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer Score | |:-------------:|:-----:|:----:|:---------------:|:---------:| | 7.2955 | 16.67 | 50 | 5.0105 | 21.4433 | | 3.6221 | 33.33 | 100 | 2.2745 | 21.3860 | | 1.8264 | 50.0 | 150 | 1.4530 | 16.5809 | ### Framework versions - Transformers 4.27.4 - Pytorch 1.13.0a0+d321be6 - Datasets 2.11.0 - Tokenizers 0.13.3
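## Example usage (sketch)

A minimal sketch of captioning an image with this fine-tuned GIT checkpoint through the image-to-text pipeline. The repository id and the image file are placeholders.

```python
from PIL import Image
from transformers import pipeline

# Placeholder repo id -- substitute the hub id of this fine-tuned captioning checkpoint.
model_id = "your-namespace/git-base-pokemon"

captioner = pipeline("image-to-text", model=model_id)

image = Image.open("pokemon_sprite.png").convert("RGB")  # placeholder image path
print(captioner(image))  # e.g. [{"generated_text": "a drawing of a blue dragon with wings"}]
```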
Blabla/Pipipopo
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
null
--- license: cc-by-nc-4.0 language: - en pipeline_tag: text-generation widget: - text: >- Below is an instruction that describes a task. Write a response that appropriately completes the request. ### Instruction: how can I become more healthy? ### Response: example_title: example --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> <p align="center" width="100%"> <a><img src="https://raw.githubusercontent.com/mbzuai-nlp/lamini-lm/main/images/lamini.png" alt="Title" style="width: 100%; min-width: 300px; display: block; margin: auto;"></a> </p> # LaMini-Cerebras-590M [![Model License](https://img.shields.io/badge/Model%20License-CC%20By%20NC%204.0-red.svg)]() This model is one of our LaMini-LM model series in paper "[LaMini-LM: A Diverse Herd of Distilled Models from Large-Scale Instructions](https://github.com/mbzuai-nlp/lamini-lm)". This model is a fine-tuned version of [cerebras/Cerebras-GPT-590M](https://huggingface.co/cerebras/Cerebras-GPT-590M) on [LaMini-instruction dataset](https://huggingface.co/datasets/MBZUAI/LaMini-instruction) that contains 2.58M samples for instruction fine-tuning. For more information about our dataset, please refer to our [project repository](https://github.com/mbzuai-nlp/lamini-lm/). You can view other models of LaMini-LM series as follows. Models with ✩ are those with the best overall performance given their size/architecture, hence we recommend using them. More details can be seen in our paper. <table> <thead> <tr> <th>Base model</th> <th colspan="4">LaMini-LM series (#parameters)</th> </tr> </thead> <tbody> <tr> <td>T5</td> <td><a href="https://huggingface.co/MBZUAI/lamini-t5-61m" target="_blank" rel="noopener noreferrer">LaMini-T5-61M</a></td> <td><a href="https://huggingface.co/MBZUAI/lamini-t5-223m" target="_blank" rel="noopener noreferrer">LaMini-T5-223M</a></td> <td><a href="https://huggingface.co/MBZUAI/lamini-t5-738m" target="_blank" rel="noopener noreferrer">LaMini-T5-738M</a></td> <td></td> </tr> <tr> <td>Flan-T5</td> <td><a href="https://huggingface.co/MBZUAI/lamini-flan-t5-77m" target="_blank" rel="noopener noreferrer">LaMini-Flan-T5-77M</a>✩</td> <td><a href="https://huggingface.co/MBZUAI/lamini-flan-t5-248m" target="_blank" rel="noopener noreferrer">LaMini-Flan-T5-248M</a>✩</td> <td><a href="https://huggingface.co/MBZUAI/lamini-flan-t5-783m" target="_blank" rel="noopener noreferrer">LaMini-Flan-T5-783M</a>✩</td> <td></td> </tr> <tr> <td>Cerebras-GPT</td> <td><a href="https://huggingface.co/MBZUAI/lamini-cerebras-111m" target="_blank" rel="noopener noreferrer">LaMini-Cerebras-111M</a></td> <td><a href="https://huggingface.co/MBZUAI/lamini-cerebras-256m" target="_blank" rel="noopener noreferrer">LaMini-Cerebras-256M</a></td> <td><a href="https://huggingface.co/MBZUAI/lamini-cerebras-590m" target="_blank" rel="noopener noreferrer">LaMini-Cerebras-590M</a></td> <td><a href="https://huggingface.co/MBZUAI/lamini-cerebras-1.3b" target="_blank" rel="noopener noreferrer">LaMini-Cerebras-1.3B</a></td> </tr> <tr> <td>GPT-2</td> <td><a href="https://huggingface.co/MBZUAI/lamini-gpt-124m" target="_blank" rel="noopener noreferrer">LaMini-GPT-124M</a>✩</td> <td><a href="https://huggingface.co/MBZUAI/lamini-gpt-774m" target="_blank" rel="noopener noreferrer">LaMini-GPT-774M</a>✩</td> <td><a href="https://huggingface.co/MBZUAI/lamini-gpt-1.5b" target="_blank" rel="noopener noreferrer">LaMini-GPT-1.5B</a>✩</td> <td></td> 
</tr> <tr> <td>GPT-Neo</td> <td><a href="https://huggingface.co/MBZUAI/lamini-neo-125m" target="_blank" rel="noopener noreferrer">LaMini-Neo-125M</a></td> <td><a href="https://huggingface.co/MBZUAI/lamini-neo-1.3b" target="_blank" rel="noopener noreferrer">LaMini-Neo-1.3B</a></td> <td></td> <td></td> </tr> <tr> <td>GPT-J</td> <td colspan="4">coming soon</td> </tr> <tr> <td>LLaMA</td> <td colspan="4">coming soon</td> </tr> </tbody> </table> ## Use ### Intended use We recommend using the model to respond to human instructions written in natural language. Since this decoder-only model is fine-tuned with wrapper text, we suggest using the same wrapper text to achieve the best performance. See the example on the right or the code below. We now show you how to load and use our model using HuggingFace `pipeline()`. ```python # pip install -q transformers from transformers import pipeline checkpoint = "{model_name}" model = pipeline('text-generation', model = checkpoint) instruction = 'Please let me know your thoughts on the given place and why you think it deserves to be visited: \n"Barcelona, Spain"' input_prompt = f"Below is an instruction that describes a task. Write a response that appropriately completes the request.\n\n### Instruction:\n{instruction}\n\n### Response:" generated_text = model(input_prompt, max_length=512, do_sample=True)[0]['generated_text'] print("Response", generated_text) ``` ## Training Procedure <p align="center" width="100%"> <a><img src="https://raw.githubusercontent.com/mbzuai-nlp/lamini-lm/main/images/lamini-pipeline.drawio.png" alt="Title" style="width: 100%; min-width: 250px; display: block; margin: auto;"></a> </p> We initialize with [cerebras/Cerebras-GPT-590M](https://huggingface.co/cerebras/Cerebras-GPT-590M) and fine-tune it on our [LaMini-instruction dataset](https://huggingface.co/datasets/MBZUAI/LaMini-instruction). Its total number of parameters is 590M. ### Training Hyperparameters ## Evaluation We conducted two sets of evaluations: automatic evaluation on downstream NLP tasks and human evaluation on user-oriented instructions. For more detail, please refer to our [paper](). ## Limitations More information needed # Citation ```bibtex @article{lamini-lm, author = {Minghao Wu and Abdul Waheed and Chiyu Zhang and Muhammad Abdul-Mageed and Alham Fikri Aji }, title = {LaMini-LM: A Diverse Herd of Distilled Models from Large-Scale Instructions}, journal = {CoRR}, volume = {abs/2304.14402}, year = {2023}, url = {https://arxiv.org/abs/2304.14402}, eprinttype = {arXiv}, eprint = {2304.14402} } ```
Blaine-Mason/hackMIT-finetuned-sst2
[ "pytorch", "tensorboard", "bert", "text-classification", "dataset:glue", "transformers", "generated_from_trainer" ]
text-classification
{ "architectures": [ "BertForSequenceClassification" ], "model_type": "bert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
36
null
--- license: apache-2.0 tags: - generated_from_trainer datasets: - glue metrics: - accuracy model-index: - name: distilbert-base-uncased-finetuned-sst2 results: - task: name: Text Classification type: text-classification dataset: name: glue type: glue config: sst2 split: validation args: sst2 metrics: - name: Accuracy type: accuracy value: 0.908256880733945 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-finetuned-sst2 This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the glue dataset. It achieves the following results on the evaluation set: - Loss: 0.3078 - Accuracy: 0.9083 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | No log | 1.0 | 211 | 0.3078 | 0.9083 | | No log | 2.0 | 422 | 0.4370 | 0.8968 | | 0.0968 | 3.0 | 633 | 0.4457 | 0.9002 | | 0.0968 | 4.0 | 844 | 0.4723 | 0.9048 | | 0.0259 | 5.0 | 1055 | 0.4991 | 0.9014 | ### Framework versions - Transformers 4.27.4 - Pytorch 2.0.0+cu118 - Datasets 2.11.0 - Tokenizers 0.13.3
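## Example usage (sketch)

A minimal sketch of scoring a sentence directly with the model and tokenizer rather than through the pipeline. The repository id is a placeholder; the model's own `id2label` mapping is used below, so no label convention needs to be assumed.

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

# Placeholder repo id -- substitute the hub id of this fine-tuned SST-2 checkpoint.
model_id = "your-namespace/distilbert-base-uncased-finetuned-sst2"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)

inputs = tokenizer("The plot was thin but the performances carried it.", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
probs = torch.softmax(logits, dim=-1)[0]

print({model.config.id2label[i]: round(p.item(), 3) for i, p in enumerate(probs)})
```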
Blazeolmo/Scrabunzi
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
null
[Watch] John Wick: Chapter 4 (2023) (Thai title: จอห์น วิค 4 : แรงกว่านรก) — full movie online, free, HD 1080p, Thai dub. ► Now playing: John Wick: Chapter 4 (2023) full movie (THAI) — https://golden678.com/ Streaming in Thailand in full HD (1080i); available for download in HD, 720p, 1080p, 4K and MKV. Movie information: John Wick: Chapter 4 (2023) — special screenings, ticket booking, Thai dub and Thai subtitles, showtimes, release date, synopsis, spoilers, reviews, and free full-movie HD streaming and download.
BlightZz/DialoGPT-medium-Kurisu
[ "pytorch", "gpt2", "text-generation", "transformers", "conversational" ]
conversational
{ "architectures": [ "GPT2LMHeadModel" ], "model_type": "gpt2", "task_specific_params": { "conversational": { "max_length": 1000 }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
19
2023-04-12T06:34:19Z
---
license: creativeml-openrail-m
tags:
- text-to-image
- stable-diffusion
---
### girlnew1 Dreambooth model trained by Fred99774 with [TheLastBen's fast-DreamBooth](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast-DreamBooth.ipynb) notebook

Test the concept via A1111 Colab [fast-Colab-A1111](https://colab.research.google.com/github/TheLastBen/fast-stable-diffusion/blob/main/fast_stable_diffusion_AUTOMATIC1111.ipynb)

Sample pictures of this concept:
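A minimal loading sketch with the diffusers library, assuming the repository holds standard Stable Diffusion weights exported by the fast-DreamBooth notebook; the repository id `Fred99774/girlnew1` and the instance token used in the prompt are assumptions inferred from the card, not confirmed by it.

```python
import torch
from diffusers import StableDiffusionPipeline

# Assumed repository id (inferred from the trainer name and concept name in the card).
pipe = StableDiffusionPipeline.from_pretrained("Fred99774/girlnew1", torch_dtype=torch.float16)
pipe = pipe.to("cuda")

# "girlnew1" is assumed to be the instance token learned during Dreambooth training.
image = pipe("a portrait photo of girlnew1", num_inference_steps=30).images[0]
image.save("girlnew1_sample.png")
```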
BlindMan820/Sarcastic-News-Headlines
[ "pytorch", "distilbert", "text-classification", "English", "dataset:Kaggle Dataset", "transformers", "Text", "Sequence-Classification", "Sarcasm", "DistilBert" ]
text-classification
{ "architectures": [ "DistilBertForSequenceClassification" ], "model_type": "distilbert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
28
null
Bman/DialoGPT-medium-harrypotter
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
null
BobBraico/bert-finetuned-ner
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
null
---
library_name: sample-factory
tags:
- deep-reinforcement-learning
- reinforcement-learning
- sample-factory
model-index:
- name: APPO
  results:
  - task:
      type: reinforcement-learning
      name: reinforcement-learning
    dataset:
      name: doom_health_gathering_supreme
      type: doom_health_gathering_supreme
    metrics:
    - type: mean_reward
      value: 8.22 +/- 3.72
      name: mean_reward
      verified: false
---

A(n) **APPO** model trained on the **doom_health_gathering_supreme** environment.

This model was trained using Sample-Factory 2.0: https://github.com/alex-petrenko/sample-factory.
Documentation for how to use Sample-Factory can be found at https://www.samplefactory.dev/

## Downloading the model

After installing Sample-Factory, download the model with:
```
python -m sample_factory.huggingface.load_from_hub -r justinsiow/rl_course_vizdoom_health_gathering_supreme
```

## Using the model

To run the model after download, use the `enjoy` script corresponding to this environment:
```
python -m sf_examples.vizdoom.enjoy_vizdoom --algo=APPO --env=doom_health_gathering_supreme --train_dir=./train_dir --experiment=rl_course_vizdoom_health_gathering_supreme
```

You can also upload models to the Hugging Face Hub using the same script with the `--push_to_hub` flag.
See https://www.samplefactory.dev/10-huggingface/huggingface/ for more details

## Training with this model

To continue training with this model, use the `train` script corresponding to this environment:
```
python -m sf_examples.vizdoom.train_vizdoom --algo=APPO --env=doom_health_gathering_supreme --train_dir=./train_dir --experiment=rl_course_vizdoom_health_gathering_supreme --restart_behavior=resume --train_for_env_steps=10000000000
```

Note: you may have to adjust `--train_for_env_steps` to a suitably high number, as the experiment will resume at the number of steps it concluded at.
BobBraico/distilbert-base-uncased-finetuned-imdb-accelerate
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
null
BobBraico/distilbert-base-uncased-finetuned-imdb
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
null
---
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: t5-base-end2end-chatbot-generative
  results: []
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. -->

# t5-base-end2end-chatbot-generative

This model is a fine-tuned version of [t5-base](https://huggingface.co/t5-base) on the None dataset.
It achieves the following results on the evaluation set:
- Loss: 2.2857

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 0.0001
- train_batch_size: 4
- eval_batch_size: 4
- seed: 42
- gradient_accumulation_steps: 16
- total_train_batch_size: 64
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- num_epochs: 1

### Training results

| Training Loss | Epoch | Step | Validation Loss |
|:-------------:|:-----:|:----:|:---------------:|
| 2.9718        | 0.16  | 100  | 2.3951          |
| 2.4855        | 0.32  | 200  | 2.3368          |
| 2.4712        | 0.49  | 300  | 2.3113          |
| 2.4346        | 0.65  | 400  | 2.2972          |
| 2.4126        | 0.81  | 500  | 2.2888          |
| 2.4219        | 0.97  | 600  | 2.2857          |

### Framework versions

- Transformers 4.27.4
- Pytorch 2.0.0+cu118
- Datasets 2.11.0
- Tokenizers 0.13.3
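A minimal inference sketch in the style of the transformers usage examples elsewhere in this collection; the checkpoint path is a placeholder because the card does not state a repository id, and the single-turn input format is an assumption rather than a documented prompt scheme.

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

# Placeholder path; substitute the actual fine-tuned repository id or local directory.
checkpoint = "path/to/t5-base-end2end-chatbot-generative"
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForSeq2SeqLM.from_pretrained(checkpoint)

# Generate a reply for a single user turn.
inputs = tokenizer("Hello! How are you today?", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64, num_beams=4)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```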
Botjallu/DialoGPT-small-harrypotter
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
2023-04-12T07:06:48Z
---
library_name: sample-factory
tags:
- deep-reinforcement-learning
- reinforcement-learning
- sample-factory
model-index:
- name: APPO
  results:
  - task:
      type: reinforcement-learning
      name: reinforcement-learning
    dataset:
      name: doom_health_gathering_supreme
      type: doom_health_gathering_supreme
    metrics:
    - type: mean_reward
      value: 11.03 +/- 4.46
      name: mean_reward
      verified: false
---

A(n) **APPO** model trained on the **doom_health_gathering_supreme** environment.

This model was trained using Sample-Factory 2.0: https://github.com/alex-petrenko/sample-factory.
Documentation for how to use Sample-Factory can be found at https://www.samplefactory.dev/

## Downloading the model

After installing Sample-Factory, download the model with:
```
python -m sample_factory.huggingface.load_from_hub -r justinsiow/rl_course_vizdoom_health_gathering_supreme-v2
```

## Using the model

To run the model after download, use the `enjoy` script corresponding to this environment:
```
python -m sf_examples.vizdoom.enjoy_vizdoom --algo=APPO --env=doom_health_gathering_supreme --train_dir=./train_dir --experiment=rl_course_vizdoom_health_gathering_supreme-v2
```

You can also upload models to the Hugging Face Hub using the same script with the `--push_to_hub` flag.
See https://www.samplefactory.dev/10-huggingface/huggingface/ for more details

## Training with this model

To continue training with this model, use the `train` script corresponding to this environment:
```
python -m sf_examples.vizdoom.train_vizdoom --algo=APPO --env=doom_health_gathering_supreme --train_dir=./train_dir --experiment=rl_course_vizdoom_health_gathering_supreme-v2 --restart_behavior=resume --train_for_env_steps=10000000000
```

Note: you may have to adjust `--train_for_env_steps` to a suitably high number, as the experiment will resume at the number of steps it concluded at.
BotterHax/DialoGPT-small-harrypotter
[ "pytorch", "gpt2", "text-generation", "transformers", "conversational" ]
conversational
{ "architectures": [ "GPT2LMHeadModel" ], "model_type": "gpt2", "task_specific_params": { "conversational": { "max_length": 1000 }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
8
null
---
license: openrail
pipeline_tag: image-segmentation
---
Brayan/CNN_Brain_Tumor
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
null
---
library_name: ml-agents
tags:
- SnowballTarget
- deep-reinforcement-learning
- reinforcement-learning
- ML-Agents-SnowballTarget
---

# **ppo** Agent playing **SnowballTarget**

This is a trained model of a **ppo** agent playing **SnowballTarget** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents).

## Usage (with ML-Agents)

The Documentation: https://github.com/huggingface/ml-agents#get-started

We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub:

### Resume the training

```
mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume
```

### Watch your Agent play

You can watch your agent **playing directly in your browser**:

1. Go to https://singularite.itch.io/snowballtarget
2. Step 1: Find your model_id: jmurphy97/ppo-SnowballTarget1
3. Step 2: Select your SnowballTarget.onnx file
4. Click on Watch the agent play 👀
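A minimal sketch for fetching the trained policy outside the browser demo, using `huggingface_hub`; the repo id comes from the card, but relying on `snapshot_download` to retrieve the `.onnx` file locally is an assumption, not part of the card's instructions.

```python
from huggingface_hub import snapshot_download

# Downloads the repository contents (including SnowballTarget.onnx) into ./downloads.
local_dir = snapshot_download(repo_id="jmurphy97/ppo-SnowballTarget1", local_dir="./downloads")
print(local_dir)
```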
Brinah/1
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
null
---
library_name: stable-baselines3
tags:
- LunarLander-v2
- deep-reinforcement-learning
- reinforcement-learning
- stable-baselines3
model-index:
- name: PPO
  results:
  - task:
      type: reinforcement-learning
      name: reinforcement-learning
    dataset:
      name: LunarLander-v2
      type: LunarLander-v2
    metrics:
    - type: mean_reward
      value: 270.78 +/- 21.22
      name: mean_reward
      verified: false
---

# **PPO** Agent playing **LunarLander-v2**

This is a trained model of a **PPO** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3).

## Usage (with Stable-baselines3)

```python
from stable_baselines3 import PPO
from huggingface_sb3 import load_from_hub

# The repo id and filename below are placeholders; substitute this model's actual checkpoint.
checkpoint = load_from_hub(repo_id="<user>/ppo-LunarLander-v2", filename="ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)
```