modelId: string (length 4-81)
tags: list
pipeline_tag: string (17 classes)
config: dict
downloads: int64 (0-59.7M)
first_commit: timestamp[ns, tz=UTC]
card: string (length 51-438k)
BritishLibraryLabs/bl-books-genre
[ "pytorch", "distilbert", "text-classification", "multilingual", "dataset:blbooksgenre", "transformers", "genre", "books", "library", "historic", "glam ", "lam", "license:mit", "has_space" ]
text-classification
{ "architectures": [ "DistilBertForSequenceClassification" ], "model_type": "distilbert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
76
null
--- license: apache-2.0 tags: - generated_from_keras_callback model-index: - name: DistibertNER results: [] --- <!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # DistibertNER This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset. It achieves the following results on the evaluation set: - Train Loss: 0.0368 - Validation Loss: 0.0173 - Train Precision: 0.9941 - Train Recall: 0.9971 - Train F1: 0.9956 - Train Accuracy: 0.9972 - Epoch: 9 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: {'name': 'AdamWeightDecay', 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 5e-05, 'decay_steps': 9, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False, 'weight_decay_rate': 1e-08} - training_precision: float32 ### Training results | Train Loss | Validation Loss | Train Precision | Train Recall | Train F1 | Train Accuracy | Epoch | |:----------:|:---------------:|:---------------:|:------------:|:--------:|:--------------:|:-----:| | 0.0700 | 0.0422 | 0.9941 | 0.9912 | 0.9926 | 0.9945 | 0 | | 0.0860 | 0.0423 | 0.9971 | 0.9941 | 0.9956 | 0.9972 | 1 | | 0.0694 | 0.0354 | 0.9971 | 0.9941 | 0.9956 | 0.9972 | 2 | | 0.0615 | 0.0287 | 0.9941 | 0.9912 | 0.9926 | 0.9945 | 3 | | 0.0462 | 0.0244 | 0.9941 | 0.9912 | 0.9926 | 0.9945 | 4 | | 0.0462 | 0.0208 | 0.9941 | 0.9971 | 0.9956 | 0.9972 | 5 | | 0.0497 | 0.0188 | 0.9941 | 0.9971 | 0.9956 | 0.9972 | 6 | | 0.0339 | 0.0178 | 0.9941 | 0.9971 | 0.9956 | 0.9972 | 7 | | 0.0386 | 0.0173 | 0.9941 | 0.9971 | 0.9956 | 0.9972 | 8 | | 0.0368 | 0.0173 | 0.9941 | 0.9971 | 0.9956 | 0.9972 | 9 | ### Framework versions - Transformers 4.27.4 - TensorFlow 2.12.0 - Datasets 2.11.0 - Tokenizers 0.13.3
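The card above gives training details but no inference snippet. A minimal usage sketch is given below, assuming a hypothetical repository id (`your-namespace/DistibertNER`) and TensorFlow weights, since the model was trained with Keras; adjust both to the actual checkpoint.

```python
from transformers import AutoTokenizer, TFAutoModelForTokenClassification, pipeline

# Hypothetical repository id -- replace with the real Hub path of the checkpoint.
model_id = "your-namespace/DistibertNER"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = TFAutoModelForTokenClassification.from_pretrained(model_id)

# Group sub-word predictions into whole entities.
ner = pipeline("token-classification", model=model, tokenizer=tokenizer, aggregation_strategy="simple")
print(ner("Hugging Face is based in New York City."))
```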
Brokette/projetCS
[ "pytorch", "wav2vec2", "automatic-speech-recognition", "transformers" ]
automatic-speech-recognition
{ "architectures": [ "Wav2Vec2ForCTC" ], "model_type": "wav2vec2", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
4
null
--- license: apache-2.0 tags: - generated_from_trainer datasets: - opus_books metrics: - bleu model-index: - name: translation-en-fr results: - task: name: Sequence-to-sequence Language Modeling type: text2text-generation dataset: name: opus_books type: opus_books config: en-fr split: train args: en-fr metrics: - name: Bleu type: bleu value: 5.7246 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # translation-en-fr This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on the opus_books dataset. It achieves the following results on the evaluation set: - Loss: 1.5898 - Bleu: 5.7246 - Gen Len: 17.5859 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2 ### Training results | Training Loss | Epoch | Step | Validation Loss | Bleu | Gen Len | |:-------------:|:-----:|:-----:|:---------------:|:------:|:-------:| | 1.8329 | 1.0 | 6355 | 1.6091 | 5.6057 | 17.5952 | | 1.7888 | 2.0 | 12710 | 1.5898 | 5.7246 | 17.5859 | ### Framework versions - Transformers 4.27.4 - Pytorch 2.0.0+cu118 - Datasets 2.11.0 - Tokenizers 0.13.3
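The card above reports BLEU on opus_books but does not show how to run the model. A minimal sketch with the `transformers` translation pipeline follows; the repository id `your-namespace/translation-en-fr` is a placeholder, and it assumes the checkpoint keeps t5-small's `translation_en_to_fr` task prefix in its config.

```python
from transformers import pipeline

# Placeholder repository id -- replace with the real Hub path of the fine-tuned t5-small checkpoint.
translator = pipeline("translation_en_to_fr", model="your-namespace/translation-en-fr")

# For T5-style checkpoints the pipeline prepends the task prefix from the model config.
print(translator("The book was lying open on the table.", max_length=64))
```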
Brona/model1
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
null
--- license: cc-by-4.0 metrics: - bleu4 - meteor - rouge-l - bertscore - moverscore language: es datasets: - lmqg/qg_esquad pipeline_tag: text2text-generation tags: - question answering widget: - text: "question: ¿Cuál es la población de Nueva York a partir de 2014?, context: Situada en uno de los mayores puertos naturales del mundo, la ciudad de Nueva York consta de cinco municipios, cada uno de los cuales es un condado separado del estado de Nueva York. Los cinco distritos - Brooklyn, Queens, Manhattan, el Bronx y Staten Island - se consolidaron en una sola ciudad en 1898. Con una población censada estimada en 2014 de 8.491.079 habitantes distribuidos en una superficie de solo 790 km ², Nueva York es la ciudad más densamente poblada de los Estados Unidos. Hasta 800 idiomas se hablan en Nueva York, por lo que es la ciudad más lingüísticamente diversa del mundo. Según estimaciones del censo de 2014, la región metropolitana de la ciudad de Nueva York sigue siendo por un margen significativo la más poblada de los Estados Unidos, según lo definido tanto por el Área Estadística Metropolitana (20,1 millones de residentes). En 2013, el MSA produjo un producto metropolitano bruto (GMP) de casi US $1,39 billones, mientras que en 2012, el CSA generó un GMP de más de US $1,55 billones, ambos clasificados en primer lugar." example_title: "Question Answering Example 1" - text: "question: ¿Cómo se llama el ejército personal de Sassou?, context: El progreso democrático del Congo se descarriló en 1997, cuando Lissouba y Sassou comenzaron a luchar por el poder en la guerra civil. A medida que se acercaban las elecciones presidenciales de julio de 1997, las tensiones entre los campos de Lissouba y Sassou aumentaron. El 5 de junio, las fuerzas del gobierno del presidente Lissouba rodearon el complejo de Sassou en Brazzaville y Sassou ordenó a los miembros de su milicia privada (conocida como Cobras) resistir. Así comenzó un conflicto de cuatro meses que destruyó o dañó gran parte de Brazzaville y causó decenas de miles de muertes civiles. A principios de octubre, el régimen socialista angoleño comenzó una invasión del Congo para instalar a Sassou en el poder. A mediados de octubre, el gobierno de Lissouba cayó. Poco después, Sassou se declaró presidente." 
example_title: "Question Answering Example 2" model-index: - name: vocabtrimmer/mbart-large-cc25-trimmed-es-esquad-qa results: - task: name: Text2text Generation type: text2text-generation dataset: name: lmqg/qg_esquad type: default args: default metrics: - name: BLEU4 (Question Answering) type: bleu4_question_answering value: 27.7 - name: ROUGE-L (Question Answering) type: rouge_l_question_answering value: 39.6 - name: METEOR (Question Answering) type: meteor_question_answering value: 33.54 - name: BERTScore (Question Answering) type: bertscore_question_answering value: 92.63 - name: MoverScore (Question Answering) type: moverscore_question_answering value: 78.61 - name: AnswerF1Score (Question Answering) type: answer_f1_score__question_answering value: 63.67 - name: AnswerExactMatch (Question Answering) type: answer_exact_match_question_answering value: 43.91 --- # Model Card of `vocabtrimmer/mbart-large-cc25-trimmed-es-esquad-qa` This model is fine-tuned version of [vocabtrimmer/mbart-large-cc25-trimmed-es](https://huggingface.co/vocabtrimmer/mbart-large-cc25-trimmed-es) for question answering task on the [lmqg/qg_esquad](https://huggingface.co/datasets/lmqg/qg_esquad) (dataset_name: default) via [`lmqg`](https://github.com/asahi417/lm-question-generation). ### Overview - **Language model:** [vocabtrimmer/mbart-large-cc25-trimmed-es](https://huggingface.co/vocabtrimmer/mbart-large-cc25-trimmed-es) - **Language:** es - **Training data:** [lmqg/qg_esquad](https://huggingface.co/datasets/lmqg/qg_esquad) (default) - **Online Demo:** [https://autoqg.net/](https://autoqg.net/) - **Repository:** [https://github.com/asahi417/lm-question-generation](https://github.com/asahi417/lm-question-generation) - **Paper:** [https://arxiv.org/abs/2210.03992](https://arxiv.org/abs/2210.03992) ### Usage - With [`lmqg`](https://github.com/asahi417/lm-question-generation#lmqg-language-model-for-question-generation-) ```python from lmqg import TransformersQG # initialize model model = TransformersQG(language="es", model="vocabtrimmer/mbart-large-cc25-trimmed-es-esquad-qa") # model prediction answers = model.answer_q(list_question="¿Cuál es la población de Nueva York a partir de 2014?", list_context=" Situada en uno de los mayores puertos naturales del mundo, la ciudad de Nueva York consta de cinco municipios, cada uno de los cuales es un condado separado del estado de Nueva York. Los cinco distritos - Brooklyn, Queens, Manhattan, el Bronx y Staten Island - se consolidaron en una sola ciudad en 1898. Con una población censada estimada en 2014 de 8.491.079 habitantes distribuidos en una superficie de solo 790 km ², Nueva York es la ciudad más densamente poblada de los Estados Unidos. Hasta 800 idiomas se hablan en Nueva York, por lo que es la ciudad más lingüísticamente diversa del mundo. Según estimaciones del censo de 2014, la región metropolitana de la ciudad de Nueva York sigue siendo por un margen significativo la más poblada de los Estados Unidos, según lo definido tanto por el Área Estadística Metropolitana (20,1 millones de residentes). 
En 2013, el MSA produjo un producto metropolitano bruto (GMP) de casi US $1,39 billones, mientras que en 2012, el CSA generó un GMP de más de US $1,55 billones, ambos clasificados en primer lugar.") ``` - With `transformers` ```python from transformers import pipeline pipe = pipeline("text2text-generation", "vocabtrimmer/mbart-large-cc25-trimmed-es-esquad-qa") output = pipe("question: ¿Cuál es la población de Nueva York a partir de 2014?, context: Situada en uno de los mayores puertos naturales del mundo, la ciudad de Nueva York consta de cinco municipios, cada uno de los cuales es un condado separado del estado de Nueva York. Los cinco distritos - Brooklyn, Queens, Manhattan, el Bronx y Staten Island - se consolidaron en una sola ciudad en 1898. Con una población censada estimada en 2014 de 8.491.079 habitantes distribuidos en una superficie de solo 790 km ², Nueva York es la ciudad más densamente poblada de los Estados Unidos. Hasta 800 idiomas se hablan en Nueva York, por lo que es la ciudad más lingüísticamente diversa del mundo. Según estimaciones del censo de 2014, la región metropolitana de la ciudad de Nueva York sigue siendo por un margen significativo la más poblada de los Estados Unidos, según lo definido tanto por el Área Estadística Metropolitana (20,1 millones de residentes). En 2013, el MSA produjo un producto metropolitano bruto (GMP) de casi US $1,39 billones, mientras que en 2012, el CSA generó un GMP de más de US $1,55 billones, ambos clasificados en primer lugar.") ``` ## Evaluation - ***Metric (Question Answering)***: [raw metric file](https://huggingface.co/vocabtrimmer/mbart-large-cc25-trimmed-es-esquad-qa/raw/main/eval/metric.first.answer.paragraph_question.answer.lmqg_qg_esquad.default.json) | | Score | Type | Dataset | |:-----------------|--------:|:--------|:-----------------------------------------------------------------| | AnswerExactMatch | 43.91 | default | [lmqg/qg_esquad](https://huggingface.co/datasets/lmqg/qg_esquad) | | AnswerF1Score | 63.67 | default | [lmqg/qg_esquad](https://huggingface.co/datasets/lmqg/qg_esquad) | | BERTScore | 92.63 | default | [lmqg/qg_esquad](https://huggingface.co/datasets/lmqg/qg_esquad) | | Bleu_1 | 38.14 | default | [lmqg/qg_esquad](https://huggingface.co/datasets/lmqg/qg_esquad) | | Bleu_2 | 33.28 | default | [lmqg/qg_esquad](https://huggingface.co/datasets/lmqg/qg_esquad) | | Bleu_3 | 30.15 | default | [lmqg/qg_esquad](https://huggingface.co/datasets/lmqg/qg_esquad) | | Bleu_4 | 27.7 | default | [lmqg/qg_esquad](https://huggingface.co/datasets/lmqg/qg_esquad) | | METEOR | 33.54 | default | [lmqg/qg_esquad](https://huggingface.co/datasets/lmqg/qg_esquad) | | MoverScore | 78.61 | default | [lmqg/qg_esquad](https://huggingface.co/datasets/lmqg/qg_esquad) | | ROUGE_L | 39.6 | default | [lmqg/qg_esquad](https://huggingface.co/datasets/lmqg/qg_esquad) | ## Training hyperparameters The following hyperparameters were used during fine-tuning: - dataset_path: lmqg/qg_esquad - dataset_name: default - input_types: ['paragraph_question'] - output_types: ['answer'] - prefix_types: None - model: vocabtrimmer/mbart-large-cc25-trimmed-es - max_length: 512 - max_length_output: 32 - epoch: 15 - batch: 8 - lr: 0.0001 - fp16: False - random_seed: 1 - gradient_accumulation_steps: 8 - label_smoothing: 0.15 The full configuration can be found at [fine-tuning config file](https://huggingface.co/vocabtrimmer/mbart-large-cc25-trimmed-es-esquad-qa/raw/main/trainer_config.json). 
## Citation ``` @inproceedings{ushio-etal-2022-generative, title = "{G}enerative {L}anguage {M}odels for {P}aragraph-{L}evel {Q}uestion {G}eneration", author = "Ushio, Asahi and Alva-Manchego, Fernando and Camacho-Collados, Jose", booktitle = "Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing", month = dec, year = "2022", address = "Abu Dhabi, U.A.E.", publisher = "Association for Computational Linguistics", } ```
Brykee/DialoGPT-medium-Morty
[ "pytorch", "gpt2", "text-generation", "transformers", "conversational" ]
conversational
{ "architectures": [ "GPT2LMHeadModel" ], "model_type": "gpt2", "task_specific_params": { "conversational": { "max_length": 1000 }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
10
null
--- language: zh license: apache-2.0 --- # G2PTL-1 ## Introduction G2PTL-1: A Geography-Graph Pre-trained model for address. This work is the first version of G2PTL (v1.0) ## Model description G2PTL is a Transformer model that is pretrained on a large corpus of Chinese addresses in a self-supervised manner. It has three pretraining objectives: - Masked language modeling (MLM): taking an address, the model randomly masks some words in the input text and predicts the masked words. It should be noted that for the geographical entities in the address, we adopt the Whole Word Masking (WWM) approach to mask them and learn the co-occurrence relationships among them. - Hierarchical text modeling (HTC): an address is a text with a hierarchical structure of province, city, district, and street. HTC is used to model the hierarchical relationship among these levels in addresses. ![HTC.jpg](./Images/HTC.jpg) - Geocoding (GC): an address can be represented by a point with latitude and longitude in the real world. The GC task is designed to learn the mapping relationship between address text and geographical location. More detail: https://arxiv.org/abs/2304.01559 ![Model.jpg](./Images/Model.jpg) ## Intended uses & limitations This model is designed for decision tasks based on address text, including tasks related to understanding address texts and Spatial-Temporal downstream tasks which rely on address text representation. 1. Address text understanding tasks - Geocoding - Named Entity Recognition - Geographic Entity Alignment - Address Text Similarity - Address Text Classification - ... 2. Spatial-Temporal downstream tasks: - Estimated Time of Arrival (ETA) Prediction - Pick-up & Delivery Route Prediction. - Express Volume Prediction - ... The model currently only supports Chinese addresses, and it is an encoder-only model which is not suitable for text generation scenarios such as question answering. 
If you need to use address text based dialogue capabilities, you can look forward to our second version of G2PTL (v2.0) ## How to use You can use this model directly with a pipeline for masked language modeling: ```Python >>> from transformers import pipeline, AutoModel, AutoTokenizer >>> model = AutoModel.from_pretrained('Cainiao-AI/G2PTL', trust_remote_code=True) >>> tokenizer = AutoTokenizer.from_pretrained('Cainiao-AI/G2PTL', trust_remote_code=True) >>> mask_filler = pipeline(task= 'fill-mask', model= model,tokenizer = tokenizer) >>> mask_filler("浙江省杭州市[MASK]杭区五常街道阿里巴巴西溪园区") ``` ```json [{'score': 1.0, 'token': 562, 'token_str': '余', 'sequence': '浙 江 省 杭 州 市 余 杭 区 五 常 街 道 阿 里 巴 巴 西 溪 园 区'}, {'score': 7.49648343401077e-09, 'token': 1852, 'token_str': '杭', 'sequence': '浙 江 省 杭 州 市 杭 杭 区 五 常 街 道 阿 里 巴 巴 西 溪 园 区'}, {'score': 5.823675763849678e-09, 'token': 213, 'token_str': '西', 'sequence': '浙 江 省 杭 州 市 西 杭 区 五 常 街 道 阿 里 巴 巴 西 溪 园 区'}, {'score': 3.383779922927488e-09, 'token': 346, 'token_str': '五', 'sequence': '浙 江 省 杭 州 市 五 杭 区 五 常 街 道 阿 里 巴 巴 西 溪 园 区'}, {'score': 2.9116642430437878e-09, 'token': 2268, 'token_str': '荆', 'sequence': '浙 江 省 杭 州 市 荆 杭 区 五 常 街 道 阿 里 巴 巴 西 溪 园 区'}] ``` You can also use this model for multiple [MASK] filling in PyTorch: ```python from transformers import pipeline, AutoModel, AutoTokenizer import torch model = AutoModel.from_pretrained('Cainiao-AI/G2PTL', trust_remote_code=True) tokenizer = AutoTokenizer.from_pretrained('Cainiao-AI/G2PTL', trust_remote_code=True) model.eval() text = ['浙江省杭州市[MASK][MASK][MASK]五常街道阿里巴巴西溪园区'] encoded_input = tokenizer(text, return_tensors='pt') outputs = model(**encoded_input) prediction_scores = outputs.logits prediction_scores = torch.argmax(prediction_scores, dim=-1) prediction_scores = prediction_scores.cpu().detach().numpy() input_ids = encoded_input['input_ids'] print('G2PTL:', tokenizer.decode(prediction_scores[torch.where(input_ids.cpu()>0)][1:-1])) ``` ```json G2PTL: 浙 江 省 杭 州 市 余 杭 区 五 常 街 道 阿 里 巴 巴 西 溪 园 区 ``` Here is how to use this model to get the HTC output of a given text in PyTorch: ```python from transformers import pipeline, AutoModel, AutoTokenizer model = AutoModel.from_pretrained('Cainiao-AI/G2PTL', trust_remote_code=True) tokenizer = AutoTokenizer.from_pretrained('Cainiao-AI/G2PTL', trust_remote_code=True) model.eval() text = "浙江省杭州市五常街道阿里巴巴西溪园区" encoded_input = tokenizer(text, return_tensors='pt') output = model(**encoded_input) htc_layer_out = output.htc_layer_out htc_pred = model.get_htc_code(htc_layer_out) print('HTC Result: ', model.decode_htc_code_2_chn(htc_pred)) ``` ```json HTC Result: ['浙江省杭州市余杭区五常街道', '浙江省杭州市五常街道'] ``` Here is how to use this model to get the features/embeddings of a given text in PyTorch: ```python from transformers import pipeline, AutoModel, AutoTokenizer model = AutoModel.from_pretrained('Cainiao-AI/G2PTL', trust_remote_code=True) tokenizer = AutoTokenizer.from_pretrained('Cainiao-AI/G2PTL', trust_remote_code=True) model.eval() text = "浙江省杭州市余杭区五常街道阿里巴巴西溪园区" encoded_input = tokenizer(text, return_tensors='pt') output = model(**encoded_input) final_hidden_state = output.final_hidden_state ``` Here is how to use this model to get cosine similarity between two address texts in PyTorch: ```python from transformers import pipeline, AutoModel, AutoTokenizer import torch model = AutoModel.from_pretrained('Cainiao-AI/G2PTL', trust_remote_code=True) tokenizer = AutoTokenizer.from_pretrained('Cainiao-AI/G2PTL', trust_remote_code=True) model.eval() text = ["浙江省杭州市余杭区五常街道阿里巴巴西溪园区", 
"浙江省杭州市阿里巴巴西溪园区"] encoded_input = tokenizer(text, return_tensors='pt', padding=True) output = model(**encoded_input) final_pooler_output = output.final_pooler_output cos_sim = torch.cosine_similarity(final_pooler_output[0], final_pooler_output[1]) print('Cosin Similarity: ', cos_sim[0].detach().numpy()) ``` ```json Cosin Similarity: 0.8974346 ``` ## Requirements python>=3.8 ```shell tqdm==4.65.0 torch==1.13.1 transformers==4.27.4 datasets==2.11.0 fairseq==0.12.2 ``` ## Citation ```bibtex @misc{wu2023g2ptl, title={G2PTL: A Pre-trained Model for Delivery Address and its Applications in Logistics System}, author={Lixia Wu and Jianlin Liu and Junhong Lou and Haoyuan Hu and Jianbin Zheng and Haomin Wen and Chao Song and Shu He}, year={2023}, eprint={2304.01559}, archivePrefix={arXiv}, primaryClass={cs.AI} } ```
Bryson575x/riceboi
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
null
--- library_name: stable-baselines3 tags: - SpaceInvadersNoFrameskip-v4 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: QRDQN results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: SpaceInvadersNoFrameskip-v4 type: SpaceInvadersNoFrameskip-v4 metrics: - type: mean_reward value: 661.50 +/- 201.53 name: mean_reward verified: false --- # **QRDQN** Agent playing **SpaceInvadersNoFrameskip-v4** This is a trained model of a **QRDQN** agent playing **SpaceInvadersNoFrameskip-v4** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3) and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo). The RL Zoo is a training framework for Stable Baselines3 reinforcement learning agents, with hyperparameter optimization and pre-trained agents included. ## Usage (with SB3 RL Zoo) RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/> SB3: https://github.com/DLR-RM/stable-baselines3<br/> SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib Install the RL Zoo (with SB3 and SB3-Contrib): ```bash pip install rl_zoo3 ``` ``` # Download model and save it into the logs/ folder python -m rl_zoo3.load_from_hub --algo qrdqn --env SpaceInvadersNoFrameskip-v4 -orga Mihail-P -f logs/ python -m rl_zoo3.enjoy --algo qrdqn --env SpaceInvadersNoFrameskip-v4 -f logs/ ``` If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do: ``` python -m rl_zoo3.load_from_hub --algo qrdqn --env SpaceInvadersNoFrameskip-v4 -orga Mihail-P -f logs/ python -m rl_zoo3.enjoy --algo qrdqn --env SpaceInvadersNoFrameskip-v4 -f logs/ ``` ## Training (with the RL Zoo) ``` python -m rl_zoo3.train --algo qrdqn --env SpaceInvadersNoFrameskip-v4 -f logs/ # Upload the model and generate video (when possible) python -m rl_zoo3.push_to_hub --algo qrdqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga Mihail-P ``` ## Hyperparameters ```python OrderedDict([('env_wrapper', ['stable_baselines3.common.atari_wrappers.AtariWrapper']), ('exploration_fraction', 0.025), ('frame_stack', 4), ('n_timesteps', 10000000.0), ('optimize_memory_usage', False), ('policy', 'CnnPolicy'), ('normalize', False)]) ```
BumBelDumBel/TRUMP
[ "pytorch", "tensorboard", "gpt2", "text-generation", "transformers", "generated_from_trainer", "license:mit" ]
text-generation
{ "architectures": [ "GPT2LMHeadModel" ], "model_type": "gpt2", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": true, "max_length": 50 }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
5
null
--- license: other --- This model is the 4-bit quantized version of the alpaca-7b model. It was created with the https://github.com/oobabooga/GPTQ-for-LLaMa repository for better compatibility with text-generation-webui. Use it with `--wbits 4` and `--groupsize 128`.
BumBelDumBel/ZORK-AI-TEST
[ "pytorch", "tensorboard", "gpt2", "text-generation", "transformers", "generated_from_trainer", "license:mit" ]
text-generation
{ "architectures": [ "GPT2LMHeadModel" ], "model_type": "gpt2", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": true, "max_length": 50 }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
9
null
--- license: apache-2.0 tags: - summarization - generated_from_trainer datasets: - arxiv_summarization_dataset - ccdv/arxiv-summarization metrics: - rouge model-index: - name: bart-base-arxiv-sum-session-1 results: - task: name: Sequence-to-sequence Language Modeling type: text2text-generation dataset: name: arxiv_summarization_dataset type: arxiv_summarization_dataset config: section split: validation args: section metrics: - name: Rouge1 type: rouge value: 12.7479 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bart-base-arxiv-sum-session-1 This model is a fine-tuned version of [facebook/bart-base](https://huggingface.co/facebook/bart-base) on the arxiv_summarization_dataset dataset. It achieves the following results on the evaluation set: - Loss: 2.8862 - Rouge1: 12.7479 - Rouge2: 4.8295 - Rougel: 10.2761 - Rougelsum: 11.7334 ## Model description Model obtained from fine-tuning facebook/bart-base on 25,000 training samples from the ccdv/arxiv-summarization dataset. ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - gradient_accumulation_steps: 16 - total_train_batch_size: 128 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | |:-------------:|:-----:|:----:|:---------------:|:-------:|:------:|:-------:|:---------:| | No log | 1.0 | 195 | 2.9794 | 12.5852 | 4.6927 | 10.1374 | 11.6014 | | No log | 2.0 | 390 | 2.9077 | 12.5854 | 4.7568 | 10.166 | 11.5699 | | No log | 3.0 | 585 | 2.8862 | 12.7479 | 4.8295 | 10.2761 | 11.7334 | ### Framework versions - Transformers 4.27.4 - Pytorch 1.13.0 - Datasets 2.1.0 - Tokenizers 0.13.2
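Since the card above lists ROUGE scores but no usage example, a minimal summarization sketch is shown below; the repository id `your-namespace/bart-base-arxiv-sum-session-1` is a placeholder for the actual checkpoint path, and the abstract is only illustrative.

```python
from transformers import pipeline

# Placeholder repository id -- replace with the real Hub path of the fine-tuned checkpoint.
summarizer = pipeline("summarization", model="your-namespace/bart-base-arxiv-sum-session-1")

abstract = (
    "We study the convergence of stochastic gradient methods on non-convex objectives "
    "and derive new rates under weak smoothness assumptions, with experiments on image benchmarks."
)
print(summarizer(abstract, max_length=60, min_length=10, do_sample=False))
```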
BumBelDumBel/ZORK_AI_FANTASY
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
null
--- tags: - generated_from_keras_callback model-index: - name: pretrained-bert results: [] --- <!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # pretrained-bert This model is a fine-tuned version of [](https://huggingface.co/) on an unknown dataset. It achieves the following results on the evaluation set: - Train Loss: nan - Validation Loss: nan - Epoch: 0 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: {'name': 'Adam', 'learning_rate': 1e-04, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False} - training_precision: float32 ### Training results | Train Loss | Validation Loss | Epoch | |:----------:|:---------------:|:-----:| | nan | nan | 0 | ### Framework versions - Transformers 4.24.0 - TensorFlow 2.9.1 - Datasets 2.4.0 - Tokenizers 0.11.0
BunakovD/sd
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
null
Access to model Qrstud/ANCs is restricted and you are not in the authorized list. Visit https://huggingface.co/Qrstud/ANCs to ask for access.
Bwehfuk/Ron
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
2023-04-12T07:52:36Z
--- library_name: ml-agents tags: - Pyramids - deep-reinforcement-learning - reinforcement-learning - ML-Agents-Pyramids --- # **ppo** Agent playing **Pyramids** This is a trained model of a **ppo** agent playing **Pyramids** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents). ## Usage (with ML-Agents) Documentation: https://github.com/huggingface/ml-agents#get-started We wrote a complete tutorial on training your first agent with ML-Agents and publishing it to the Hub: ### Resume the training ``` mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume ``` ### Watch your Agent play You can watch your agent **playing directly in your browser**: 1. Go to https://huggingface.co/spaces/unity/ML-Agents-Pyramids 2. Find your model_id: jmurphy97/ppo-Pyramids1 3. Select your *.nn or *.onnx file 4. Click on Watch the agent play 👀
CALM/CALM
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
null
--- pipeline_tag: sentence-similarity tags: - sentence-transformers - feature-extraction - sentence-similarity --- # {MODEL_NAME} This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 128 dimensional dense vector space and can be used for tasks like clustering or semantic search. <!--- Describe your model here --> ## Usage (Sentence-Transformers) Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed: ``` pip install -U sentence-transformers ``` Then you can use the model like this: ```python from sentence_transformers import SentenceTransformer sentences = ["This is an example sentence", "Each sentence is converted"] model = SentenceTransformer('{MODEL_NAME}') embeddings = model.encode(sentences) print(embeddings) ``` ## Evaluation Results <!--- Describe how your model was evaluated --> For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name={MODEL_NAME}) ## Full Model Architecture ``` SentenceTransformer( (0): Transformer({'max_seq_length': 512, 'do_lower_case': False}) with Transformer model: BertModel (1): Pooling({'word_embedding_dimension': 768, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False}) (dense): Dense({'in_features': 768, 'out_features': 128, 'bias': False, 'activation_function': 'torch.nn.modules.linear.Identity'}) ) ``` ## Citing & Authors <!--- Describe where people can find more information -->
CAMeL-Lab/bert-base-arabic-camelbert-ca-poetry
[ "pytorch", "tf", "bert", "text-classification", "ar", "arxiv:1905.05700", "arxiv:2103.06678", "transformers", "license:apache-2.0" ]
text-classification
{ "architectures": [ "BertForSequenceClassification" ], "model_type": "bert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
42
null
--- license: gpl-3.0 tags: - text2text-generation pipeline_tag: text2text-generation language: - zh - en --- # Model Card for ChatBELLE-int4 ## Welcome 4-bit quantized model using [llama.cpp](https://github.com/ggerganov/llama.cpp). If you find this model helpful, please *like* this model and star us on https://github.com/LianjiaTech/BELLE ! ## Model description ChatBELLE-int4 is based on 7B model and quantized to 4-bit. The code of Chinese data generation and other detailed information can be found in our Github project repository: https://github.com/LianjiaTech/BELLE. ## Download Should you accept our license and acknowledged the limitations, download the model by clicking [Download](https://huggingface.co/BelleGroup/BELLE-LLaMA-7B-2M-q4/resolve/main/belle-model.bin). ## Model Usage You can use this model with ChatBELLE, a minimal, cross-platform LLM chat app powered by [BELLE](https://github.com/LianjiaTech/BELLE) using quantized on-device offline models and Flutter UI, running on macOS (done), Windows, Android, iOS(see [Known Issues](#known-issues)) and more. ### macOS * Download [chatbelle.dmg](https://github.com/LianjiaTech/BELLE/releases/download/v0.95/chatbelle.dmg) from [Releases](https://github.com/LianjiaTech/BELLE/releases/tag/v0.95) page, double click to open it, then drag `Chat Belle.dmg` into `Applications` folder. * Open the `Chat Belle` app in `Applications` folder by right click then Ctrl-click `Open`, then click `Open`. * The app will prompt the intended model file path and fail to load the model. Close the app. * Download quantized model `belle-model.bin` from this repo. * Move and rename the model to the path prompted by the app. Defaults to `~/Library/Containers/com.barius.chatbelle/Data/belle-model.bin` . * Reopen the app again (double clicking is now OK). ### Windows * Stay tuned ### Android * Stay tuned ### iOS * Stay tuned ## Limitations There still exists a few issues in the model trained on current base model and data: 1. The model might generate factual errors when asked to follow instructions related to facts. 2. Occasionally generates harmful responses since the model still struggles to identify potential harmful instructions. 3. Needs improvements on reasoning and coding. Since the model still has its limitations, we require developers only use the open-sourced code, data, model and any other artifacts generated via this project for research purposes. Commercial use and other potential harmful use cases are not allowed. ## Citation Please cite us when using our code, data or model. ``` @misc{BELLE, author = {Yunjie Ji, Yong Deng, Yan Gong, Yiping Peng, Qiang Niu, Baochang Ma, Xiangang Li}, title = {BELLE: Be Everyone's Large Language model Engine}, year = {2023}, publisher = {GitHub}, journal = {GitHub repository}, howpublished = {\url{https://github.com/LianjiaTech/BELLE}}, } ```
CAMeL-Lab/bert-base-arabic-camelbert-ca-pos-egy
[ "pytorch", "tf", "bert", "token-classification", "ar", "arxiv:2103.06678", "transformers", "license:apache-2.0", "autotrain_compatible" ]
token-classification
{ "architectures": [ "BertForTokenClassification" ], "model_type": "bert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
16,451
null
--- library_name: stable-baselines3 tags: - SpaceInvadersNoFrameskip-v4 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: DQN results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: SpaceInvadersNoFrameskip-v4 type: SpaceInvadersNoFrameskip-v4 metrics: - type: mean_reward value: 545.50 +/- 280.90 name: mean_reward verified: false --- # **DQN** Agent playing **SpaceInvadersNoFrameskip-v4** This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3) and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo). The RL Zoo is a training framework for Stable Baselines3 reinforcement learning agents, with hyperparameter optimization and pre-trained agents included. ## Usage (with SB3 RL Zoo) RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/> SB3: https://github.com/DLR-RM/stable-baselines3<br/> SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib Install the RL Zoo (with SB3 and SB3-Contrib): ```bash pip install rl_zoo3 ``` ``` # Download model and save it into the logs/ folder python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga RandenBanuelos -f logs/ python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ ``` If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do: ``` python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga RandenBanuelos -f logs/ python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ ``` ## Training (with the RL Zoo) ``` python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ # Upload the model and generate video (when possible) python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga RandenBanuelos ``` ## Hyperparameters ```python OrderedDict([('batch_size', 32), ('buffer_size', 100000), ('env_wrapper', ['stable_baselines3.common.atari_wrappers.AtariWrapper']), ('exploration_final_eps', 0.01), ('exploration_fraction', 0.1), ('frame_stack', 4), ('gradient_steps', 1), ('learning_rate', 0.0001), ('learning_starts', 100000), ('n_timesteps', 1000000.0), ('optimize_memory_usage', False), ('policy', 'CnnPolicy'), ('target_update_interval', 1000), ('train_freq', 4), ('normalize', False)]) ```
CAMeL-Lab/bert-base-arabic-camelbert-da-ner
[ "pytorch", "tf", "bert", "token-classification", "ar", "arxiv:2103.06678", "transformers", "license:apache-2.0", "autotrain_compatible" ]
token-classification
{ "architectures": [ "BertForTokenClassification" ], "model_type": "bert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
42
null
--- language: - mn license: mit tags: - generated_from_trainer metrics: - precision - recall - f1 - accuracy model-index: - name: bloom-mongolian-ner-demo results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bloom-mongolian-ner-demo This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.1048 - Precision: 0.9267 - Recall: 0.9354 - F1: 0.9310 - Accuracy: 0.9796 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 10 ### Training results | Training Loss | Epoch | Step | Validation Loss | Precision | Recall | F1 | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:---------:|:------:|:------:|:--------:| | 0.195 | 1.0 | 477 | 0.0947 | 0.8845 | 0.8994 | 0.8919 | 0.9707 | | 0.0848 | 2.0 | 954 | 0.0761 | 0.9095 | 0.9235 | 0.9164 | 0.9774 | | 0.0614 | 3.0 | 1431 | 0.0724 | 0.9218 | 0.9317 | 0.9267 | 0.9797 | | 0.0452 | 4.0 | 1908 | 0.0756 | 0.9283 | 0.9350 | 0.9316 | 0.9806 | | 0.035 | 5.0 | 2385 | 0.0824 | 0.9221 | 0.9337 | 0.9279 | 0.9796 | | 0.0263 | 6.0 | 2862 | 0.0895 | 0.9191 | 0.9319 | 0.9254 | 0.9787 | | 0.02 | 7.0 | 3339 | 0.0991 | 0.9238 | 0.9335 | 0.9286 | 0.9789 | | 0.0148 | 8.0 | 3816 | 0.1005 | 0.9277 | 0.9358 | 0.9317 | 0.9798 | | 0.0124 | 9.0 | 4293 | 0.1014 | 0.9254 | 0.9356 | 0.9305 | 0.9801 | | 0.01 | 10.0 | 4770 | 0.1048 | 0.9267 | 0.9354 | 0.9310 | 0.9796 | ### Framework versions - Transformers 4.27.4 - Pytorch 2.0.0+cu118 - Datasets 2.11.0 - Tokenizers 0.13.3
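The card above describes the fine-tuned token-classification checkpoint but omits an inference example. A minimal sketch is given below; the repository id `your-namespace/bloom-mongolian-ner-demo` is a placeholder, and the Mongolian example sentence is only illustrative.

```python
from transformers import pipeline

# Placeholder repository id -- replace with the real Hub path of the fine-tuned checkpoint.
ner = pipeline(
    "token-classification",
    model="your-namespace/bloom-mongolian-ner-demo",
    aggregation_strategy="simple",  # merge sub-word pieces into whole entities
)

# "The President of Mongolia works in Ulaanbaatar."
print(ner("Монгол Улсын Ерөнхийлөгч Улаанбаатар хотод ажилладаг."))
```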
CAMeL-Lab/bert-base-arabic-camelbert-da-poetry
[ "pytorch", "tf", "bert", "text-classification", "ar", "arxiv:1905.05700", "arxiv:2103.06678", "transformers", "license:apache-2.0" ]
text-classification
{ "architectures": [ "BertForSequenceClassification" ], "model_type": "bert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
37
null
--- library_name: stable-baselines3 tags: - LunarLander-v2 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: PPO results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: LunarLander-v2 type: LunarLander-v2 metrics: - type: mean_reward value: 271.08 +/- 16.67 name: mean_reward verified: false --- # **PPO** Agent playing **LunarLander-v2** This is a trained model of a **PPO** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3). ## Usage (with Stable-baselines3) TODO: Add your code ```python from stable_baselines3 import ... from huggingface_sb3 import load_from_hub ... ```
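The usage block above is left as a TODO by the card template. A minimal sketch of the usual `huggingface_sb3` loading pattern follows; the repository id and file name are placeholders rather than values taken from the card.

```python
import gym
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO
from stable_baselines3.common.evaluation import evaluate_policy

# Placeholder repo id and file name -- replace with the actual checkpoint location on the Hub.
checkpoint = load_from_hub(repo_id="your-namespace/ppo-LunarLander-v2", filename="ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)

env = gym.make("LunarLander-v2")
mean_reward, std_reward = evaluate_policy(model, env, n_eval_episodes=10, deterministic=True)
print(f"mean_reward={mean_reward:.2f} +/- {std_reward:.2f}")
```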
CAMeL-Lab/bert-base-arabic-camelbert-da-pos-egy
[ "pytorch", "tf", "bert", "token-classification", "ar", "arxiv:2103.06678", "transformers", "license:apache-2.0", "autotrain_compatible" ]
token-classification
{ "architectures": [ "BertForTokenClassification" ], "model_type": "bert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
32
null
--- license: cc-by-sa-4.0 tags: - generated_from_trainer metrics: - accuracy model-index: - name: SloBertAA_Top5_WithOOC results: [] datasets: - gregorgabrovsek/RTVCommentsTop5UsersWithOOC language: - sl --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # SloBertAA_Top5_WithOOC This model is a fine-tuned version of [EMBEDDIA/sloberta](https://huggingface.co/EMBEDDIA/sloberta) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.5616 - Accuracy: 0.8946 ## Other related models Models fine-tuned on the RTV datasets: | Base model | Includes the OOC class? | 5 classes | 10 classes | 20 classes | 50 classes | 100 classes | | ------------------- | ----------------------- |:---------:|:----------:|:----------:|:----------:|:-----------:| | SloBERTa | Yes | [link](https://huggingface.co/gregorgabrovsek/SloBertAA_Top5_WithOOC) | [link](https://huggingface.co/gregorgabrovsek/SloBertAA_Top10_WithOOC) | [link](https://huggingface.co/gregorgabrovsek/SloBertAA_Top20_WithOOC) | [link](https://huggingface.co/gregorgabrovsek/SloBertAA_Top50_WithOOC) | [link](https://huggingface.co/gregorgabrovsek/SloBertAA_Top100_WithOOC) | | SloBERTa | No | [link](https://huggingface.co/gregorgabrovsek/SloBertAA_Top5_WithoutOOC) | [link](https://huggingface.co/gregorgabrovsek/SloBertAA_Top10_WithoutOOC) | [link](https://huggingface.co/gregorgabrovsek/SloBertAA_Top20_WithoutOOC) | [link](https://huggingface.co/gregorgabrovsek/SloBertAA_Top50_WithoutOOC) | [link](https://huggingface.co/gregorgabrovsek/SloBertAA_Top100_WithoutOOC) | | BERT Multilingual | Yes | [link](https://huggingface.co/gregorgabrovsek/SloBertAA_Top5_WithOOC_MultilingualBertBase) | [link](https://huggingface.co/gregorgabrovsek/SloBertAA_Top10_WithOOC_MultilingualBertBase) | [link](https://huggingface.co/gregorgabrovsek/SloBertAA_Top20_WithOOC_MultilingualBertBase) | [link](https://huggingface.co/gregorgabrovsek/SloBertAA_Top50_WithOOC_MultilingualBertBase) | [link](https://huggingface.co/gregorgabrovsek/SloBertAA_Top100_WithOOC_MultilingualBertBase) | | BERT Multilingual | No | [link](https://huggingface.co/gregorgabrovsek/SloBertAA_Top5_WithoutOOC_MultilingualBertBase) | [link](https://huggingface.co/gregorgabrovsek/SloBertAA_Top10_WithoutOOC_MultilingualBertBase) | [link](https://huggingface.co/gregorgabrovsek/SloBertAA_Top20_WithoutOOC_MultilingualBertBase) | [link](https://huggingface.co/gregorgabrovsek/SloBertAA_Top50_WithoutOOC_MultilingualBertBase) | [link](https://huggingface.co/gregorgabrovsek/SloBertAA_Top100_WithoutOOC_MultilingualBertBase) | Models fine-tuned on the IMDb datasets: | Base model | Includes the OOC class? 
| 5 classes | 10 classes | 25 classes | 50 classes | 100 classes | | ------------------- | ----------------------- |:---------:|:----------:|:----------:|:----------:|:-----------:| | BERT Multilingual | No | [link](https://huggingface.co/gregorgabrovsek/BERT_AA_IMDB_Top5_WithoutOOC_MultilingualBertBase) |[link](https://huggingface.co/gregorgabrovsek/BERT_AA_IMDB_Top10_WithoutOOC_MultilingualBertBase) |[link](https://huggingface.co/gregorgabrovsek/BERT_AA_IMDB_Top25_WithoutOOC_MultilingualBertBase) |[link](https://huggingface.co/gregorgabrovsek/BERT_AA_IMDB_Top50_WithoutOOC_MultilingualBertBase) |[link](https://huggingface.co/gregorgabrovsek/BERT_AA_IMDB_Top100_WithoutOOC_MultilingualBertBase) | ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 12 - eval_batch_size: 12 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:-----:|:---------------:|:--------:| | 0.4211 | 1.0 | 10508 | 0.3823 | 0.8700 | | 0.3163 | 2.0 | 21016 | 0.3917 | 0.8772 | | 0.257 | 3.0 | 31524 | 0.3771 | 0.8925 | | 0.1874 | 4.0 | 42032 | 0.5059 | 0.8931 | | 0.129 | 5.0 | 52540 | 0.5616 | 0.8946 | ### Framework versions - Transformers 4.26.1 - Pytorch 1.8.0 - Datasets 2.10.1 - Tokenizers 0.13.2
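The card above links every related checkpoint but shows no inference code. A minimal sketch with the `transformers` text-classification pipeline is given below, using the repository id from the table (`gregorgabrovsek/SloBertAA_Top5_WithOOC`); the predicted labels presumably correspond to the five authors plus the out-of-class bucket.

```python
from transformers import pipeline

# Repository id as linked in the related-models table above.
classifier = pipeline("text-classification", model="gregorgabrovsek/SloBertAA_Top5_WithOOC")

# Slovene example comment: "This is an example of a comment in Slovene."
print(classifier("To je primer komentarja v slovenščini."))
```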
CAMeL-Lab/bert-base-arabic-camelbert-da-pos-msa
[ "pytorch", "tf", "bert", "token-classification", "ar", "arxiv:2103.06678", "transformers", "license:apache-2.0", "autotrain_compatible" ]
token-classification
{ "architectures": [ "BertForTokenClassification" ], "model_type": "bert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
27
null
--- language: - hi license: apache-2.0 tags: - hf-asr-leaderboard - generated_from_trainer datasets: - mozilla-foundation/common_voice_11_0 model-index: - name: Whisper12366 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # Whisper12366 This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on the Common Voice 11.0 dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 16 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - training_steps: 4000 - mixed_precision_training: Native AMP ### Training results ### Framework versions - Transformers 4.28.0.dev0 - Pytorch 2.0.0+cu118 - Datasets 2.11.0 - Tokenizers 0.13.3
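The card above documents the fine-tuning setup but not inference. A minimal sketch with the `transformers` ASR pipeline follows; the repository id `your-namespace/Whisper12366` and the audio file name are placeholders.

```python
from transformers import pipeline

# Placeholder repository id -- replace with the real Hub path of the fine-tuned checkpoint.
asr = pipeline("automatic-speech-recognition", model="your-namespace/Whisper12366", chunk_length_s=30)

# Any local audio file (or a NumPy array of 16 kHz samples) can be passed here.
print(asr("sample_hindi_clip.wav"))
```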
CAMeL-Lab/bert-base-arabic-camelbert-da
[ "pytorch", "tf", "jax", "bert", "fill-mask", "ar", "arxiv:2103.06678", "transformers", "license:apache-2.0", "autotrain_compatible" ]
fill-mask
{ "architectures": [ "BertForMaskedLM" ], "model_type": "bert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
449
null
--- license: mit library_name: sklearn tags: - sklearn - skops - tabular-classification model_format: pickle model_file: model.pkl widget: structuredData: petal length (cm): - 5.7 - 5.6 - 5.2 petal width (cm): - 2.1 - 2.4 - 2.0 sepal length (cm): - 6.7 - 6.3 - 6.5 sepal width (cm): - 3.3 - 3.4 - 3.0 --- # Model description [More Information Needed] ## Intended uses & limitations [More Information Needed] ## Training Procedure [More Information Needed] ### Hyperparameters <details> <summary> Click to expand </summary> | Hyperparameter | Value | |-------------------|---------| | C | 1.0 | | class_weight | | | dual | False | | fit_intercept | True | | intercept_scaling | 1 | | l1_ratio | | | max_iter | 100 | | multi_class | auto | | n_jobs | | | penalty | l2 | | random_state | 0 | | solver | lbfgs | | tol | 0.0001 | | verbose | 0 | | warm_start | False | </details> ### Model Plot <style>#sk-container-id-4 {color: black;background-color: white;}#sk-container-id-4 pre{padding: 0;}#sk-container-id-4 div.sk-toggleable {background-color: white;}#sk-container-id-4 label.sk-toggleable__label {cursor: pointer;display: block;width: 100%;margin-bottom: 0;padding: 0.3em;box-sizing: border-box;text-align: center;}#sk-container-id-4 label.sk-toggleable__label-arrow:before {content: "▸";float: left;margin-right: 0.25em;color: #696969;}#sk-container-id-4 label.sk-toggleable__label-arrow:hover:before {color: black;}#sk-container-id-4 div.sk-estimator:hover label.sk-toggleable__label-arrow:before {color: black;}#sk-container-id-4 div.sk-toggleable__content {max-height: 0;max-width: 0;overflow: hidden;text-align: left;background-color: #f0f8ff;}#sk-container-id-4 div.sk-toggleable__content pre {margin: 0.2em;color: black;border-radius: 0.25em;background-color: #f0f8ff;}#sk-container-id-4 input.sk-toggleable__control:checked~div.sk-toggleable__content {max-height: 200px;max-width: 100%;overflow: auto;}#sk-container-id-4 input.sk-toggleable__control:checked~label.sk-toggleable__label-arrow:before {content: "▾";}#sk-container-id-4 div.sk-estimator input.sk-toggleable__control:checked~label.sk-toggleable__label {background-color: #d4ebff;}#sk-container-id-4 div.sk-label input.sk-toggleable__control:checked~label.sk-toggleable__label {background-color: #d4ebff;}#sk-container-id-4 input.sk-hidden--visually {border: 0;clip: rect(1px 1px 1px 1px);clip: rect(1px, 1px, 1px, 1px);height: 1px;margin: -1px;overflow: hidden;padding: 0;position: absolute;width: 1px;}#sk-container-id-4 div.sk-estimator {font-family: monospace;background-color: #f0f8ff;border: 1px dotted black;border-radius: 0.25em;box-sizing: border-box;margin-bottom: 0.5em;}#sk-container-id-4 div.sk-estimator:hover {background-color: #d4ebff;}#sk-container-id-4 div.sk-parallel-item::after {content: "";width: 100%;border-bottom: 1px solid gray;flex-grow: 1;}#sk-container-id-4 div.sk-label:hover label.sk-toggleable__label {background-color: #d4ebff;}#sk-container-id-4 div.sk-serial::before {content: "";position: absolute;border-left: 1px solid gray;box-sizing: border-box;top: 0;bottom: 0;left: 50%;z-index: 0;}#sk-container-id-4 div.sk-serial {display: flex;flex-direction: column;align-items: center;background-color: white;padding-right: 0.2em;padding-left: 0.2em;position: relative;}#sk-container-id-4 div.sk-item {position: relative;z-index: 1;}#sk-container-id-4 div.sk-parallel {display: flex;align-items: stretch;justify-content: center;background-color: white;position: relative;}#sk-container-id-4 div.sk-item::before, #sk-container-id-4 
div.sk-parallel-item::before {content: "";position: absolute;border-left: 1px solid gray;box-sizing: border-box;top: 0;bottom: 0;left: 50%;z-index: -1;}#sk-container-id-4 div.sk-parallel-item {display: flex;flex-direction: column;z-index: 1;position: relative;background-color: white;}#sk-container-id-4 div.sk-parallel-item:first-child::after {align-self: flex-end;width: 50%;}#sk-container-id-4 div.sk-parallel-item:last-child::after {align-self: flex-start;width: 50%;}#sk-container-id-4 div.sk-parallel-item:only-child::after {width: 0;}#sk-container-id-4 div.sk-dashed-wrapped {border: 1px dashed gray;margin: 0 0.4em 0.5em 0.4em;box-sizing: border-box;padding-bottom: 0.4em;background-color: white;}#sk-container-id-4 div.sk-label label {font-family: monospace;font-weight: bold;display: inline-block;line-height: 1.2em;}#sk-container-id-4 div.sk-label-container {text-align: center;}#sk-container-id-4 div.sk-container {/* jupyter's `normalize.less` sets `[hidden] { display: none; }` but bootstrap.min.css set `[hidden] { display: none !important; }` so we also need the `!important` here to be able to override the default hidden behavior on the sphinx rendered scikit-learn.org. See: https://github.com/scikit-learn/scikit-learn/issues/21755 */display: inline-block !important;position: relative;}#sk-container-id-4 div.sk-text-repr-fallback {display: none;}</style><div id="sk-container-id-4" class="sk-top-container" style="overflow: auto;"><div class="sk-text-repr-fallback"><pre>LogisticRegression(random_state=0)</pre><b>In a Jupyter environment, please rerun this cell to show the HTML representation or trust the notebook. <br />On GitHub, the HTML representation is unable to render, please try loading this page with nbviewer.org.</b></div><div class="sk-container" hidden><div class="sk-item"><div class="sk-estimator sk-toggleable"><input class="sk-toggleable__control sk-hidden--visually" id="sk-estimator-id-4" type="checkbox" checked><label for="sk-estimator-id-4" class="sk-toggleable__label sk-toggleable__label-arrow">LogisticRegression</label><div class="sk-toggleable__content"><pre>LogisticRegression(random_state=0)</pre></div></div></div></div></div> ## Evaluation Results | Metric | Value | |----------|----------| | accuracy | 0.933333 | | f1 score | 0.933333 | # How to Get Started with the Model [More Information Needed] # Model Card Authors This model card is written by following authors: [More Information Needed] # Model Card Contact You can contact the model card authors through following channels: [More Information Needed] # Citation Below you can find information related to citation. **BibTeX:** ``` [More Information Needed] ``` # citation_bibtex bibtex @inproceedings{...,year={2020}} # get_started_code import pickle with open(dtc_pkl_filename, 'rb') as file: clf = pickle.load(file) # model_card_authors skops_user # limitations This model is not ready to be used in production. # model_description This is a DecisionTreeClassifier model trained on breast cancer dataset. # eval_method The model is evaluated using test split, on accuracy and F1 score with macro average.
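The card leaves "How to Get Started with the Model" as [More Information Needed]; a minimal sketch, assuming the pickled estimator is fetched from the Hub (the repo id below is a placeholder, while `model.pkl` and the feature names come from the card metadata):

```python
import pickle

import pandas as pd
from huggingface_hub import hf_hub_download

# Download the pickled scikit-learn estimator from the Hub.
# NOTE: "<user>/<repo>" is a placeholder -- substitute the actual repo id.
model_path = hf_hub_download(repo_id="<user>/<repo>", filename="model.pkl")
with open(model_path, "rb") as file:
    clf = pickle.load(file)

# Feature names and example values taken from the widget section of the card.
X = pd.DataFrame(
    {
        "sepal length (cm)": [6.7, 6.3, 6.5],
        "sepal width (cm)": [3.3, 3.4, 3.0],
        "petal length (cm)": [5.7, 5.6, 5.2],
        "petal width (cm)": [2.1, 2.4, 2.0],
    }
)
print(clf.predict(X))
```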
CAMeL-Lab/bert-base-arabic-camelbert-mix-did-madar-corpus26
[ "pytorch", "tf", "bert", "text-classification", "ar", "arxiv:2103.06678", "transformers", "license:apache-2.0" ]
text-classification
{ "architectures": [ "BertForSequenceClassification" ], "model_type": "bert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
45
null
--- tags: - generated_from_trainer model-index: - name: OCR30000 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # OCR30000 This model is a fine-tuned version of [microsoft/trocr-base-stage1](https://huggingface.co/microsoft/trocr-base-stage1) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.5132 - Cer: 0.0297 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 1 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Cer | |:-------------:|:-----:|:----:|:---------------:|:------:| | 1.3097 | 0.3 | 1000 | 1.2014 | 0.1848 | | 0.9406 | 0.6 | 2000 | 0.7268 | 0.0825 | | 0.4937 | 0.9 | 3000 | 0.5132 | 0.0297 | ### Framework versions - Transformers 4.27.4 - Pytorch 2.0.0+cu118 - Datasets 2.11.0 - Tokenizers 0.13.3
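The card gives no inference example; a minimal sketch following the standard `transformers` TrOCR usage (the repo id is a placeholder, and if the processor was not saved with the fine-tuned weights it can be loaded from the base `microsoft/trocr-base-stage1` instead):

```python
from PIL import Image
from transformers import TrOCRProcessor, VisionEncoderDecoderModel

# Placeholder repo id -- substitute the actual location of the fine-tuned OCR30000 checkpoint.
checkpoint = "<user>/OCR30000"
processor = TrOCRProcessor.from_pretrained(checkpoint)
model = VisionEncoderDecoderModel.from_pretrained(checkpoint)

image = Image.open("line_image.png").convert("RGB")  # a cropped text-line image
pixel_values = processor(images=image, return_tensors="pt").pixel_values
generated_ids = model.generate(pixel_values)
text = processor.batch_decode(generated_ids, skip_special_tokens=True)[0]
print(text)
```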
CAMeL-Lab/bert-base-arabic-camelbert-mix-pos-egy
[ "pytorch", "tf", "bert", "token-classification", "ar", "arxiv:2103.06678", "transformers", "license:apache-2.0", "autotrain_compatible" ]
token-classification
{ "architectures": [ "BertForTokenClassification" ], "model_type": "bert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
62
null
--- license: apache-2.0 tags: - generated_from_trainer datasets: - squad model-index: - name: bert-finetuned-squad results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bert-finetuned-squad This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the squad dataset. ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 - mixed_precision_training: Native AMP ### Training results ### Framework versions - Transformers 4.27.4 - Pytorch 1.13.1+cu116 - Datasets 2.11.0 - Tokenizers 0.13.3
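A minimal extractive question-answering sketch with the `transformers` pipeline (the repo id is a placeholder for wherever this fine-tuned checkpoint is published):

```python
from transformers import pipeline

# Placeholder repo id -- substitute the actual fine-tuned checkpoint.
qa = pipeline("question-answering", model="<user>/bert-finetuned-squad")

result = qa(
    question="Which dataset was the model fine-tuned on?",
    context="This model is a fine-tuned version of bert-base-cased on the SQuAD dataset.",
)
print(result["answer"], result["score"])
```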
CAMeL-Lab/bert-base-arabic-camelbert-mix-pos-msa
[ "pytorch", "tf", "bert", "token-classification", "ar", "arxiv:2103.06678", "transformers", "license:apache-2.0", "autotrain_compatible" ]
token-classification
{ "architectures": [ "BertForTokenClassification" ], "model_type": "bert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
1,862
null
--- license: creativeml-openrail-m base_model: CompVis/stable-diffusion-v1-4 tags: - stable-diffusion - stable-diffusion-diffusers - text-to-image - diffusers inference: true ---
CAMeL-Lab/bert-base-arabic-camelbert-mix-sentiment
[ "pytorch", "tf", "bert", "text-classification", "ar", "arxiv:2103.06678", "transformers", "license:apache-2.0" ]
text-classification
{ "architectures": [ "BertForSequenceClassification" ], "model_type": "bert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
855
null
--- license: openrail metrics: - accuracy pipeline_tag: text-to-image --- # Model Card for Model ID <!-- Provide a quick summary of what the model is/does. --> This modelcard aims to be a base template for new models. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/modelcard_template.md?plain=1). ## Model Details ### Model Description <!-- Provide a longer summary of what this model is. --> - **Developed by:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Model type:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] - **Finetuned from model [optional]:** [More Information Needed] ### Model Sources [optional] <!-- Provide the basic links for the model. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> ### Direct Use <!-- This section is for the model use without fine-tuning or plugging into a larger ecosystem/app. --> [More Information Needed] ### Downstream Use [optional] <!-- This section is for the model use when fine-tuned for a task, or when plugged into a larger ecosystem/app --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the model will not work well for. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users (both direct and downstream) should be made aware of the risks, biases and limitations of the model. More information needed for further recommendations. ## How to Get Started with the Model Use the code below to get started with the model. [More Information Needed] ## Training Details ### Training Data <!-- This should link to a Data Card, perhaps with a short stub of information on what the training data is all about as well as documentation related to data pre-processing or additional filtering. --> [More Information Needed] ### Training Procedure <!-- This relates heavily to the Technical Specifications. Content here should link to that section when it is relevant to the training procedure. --> #### Preprocessing [optional] [More Information Needed] #### Training Hyperparameters - **Training regime:** [More Information Needed] <!--fp32, fp16 mixed precision, bf16 mixed precision, bf16 non-mixed precision, fp16 non-mixed precision, fp8 mixed precision --> #### Speeds, Sizes, Times [optional] <!-- This section provides information about throughput, start/end time, checkpoint size if relevant, etc. --> [More Information Needed] ## Evaluation <!-- This section describes the evaluation protocols and provides the results. --> ### Testing Data, Factors & Metrics #### Testing Data <!-- This should link to a Data Card if possible. --> [More Information Needed] #### Factors <!-- These are the things the evaluation is disaggregating by, e.g., subpopulations or domains. --> [More Information Needed] #### Metrics <!-- These are the evaluation metrics being used, ideally with a description of why. 
--> [More Information Needed] ### Results [More Information Needed] #### Summary ## Model Examination [optional] <!-- Relevant interpretability work for the model goes here --> [More Information Needed] ## Environmental Impact <!-- Total emissions (in grams of CO2eq) and additional considerations, such as electricity usage, go here. Edit the suggested text below accordingly --> Carbon emissions can be estimated using the [Machine Learning Impact calculator](https://mlco2.github.io/impact#compute) presented in [Lacoste et al. (2019)](https://arxiv.org/abs/1910.09700). - **Hardware Type:** [More Information Needed] - **Hours used:** [More Information Needed] - **Cloud Provider:** [More Information Needed] - **Compute Region:** [More Information Needed] - **Carbon Emitted:** [More Information Needed] ## Technical Specifications [optional] ### Model Architecture and Objective [More Information Needed] ### Compute Infrastructure [More Information Needed] #### Hardware [More Information Needed] #### Software [More Information Needed] ## Citation [optional] <!-- If there is a paper or blog post introducing the model, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the model or model card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Model Card Authors [optional] [More Information Needed] ## Model Card Contact [More Information Needed]
CAMeL-Lab/bert-base-arabic-camelbert-msa-did-nadi
[ "pytorch", "tf", "bert", "text-classification", "ar", "arxiv:2103.06678", "transformers", "license:apache-2.0" ]
text-classification
{ "architectures": [ "BertForSequenceClassification" ], "model_type": "bert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
71
null
--- language: - zh - en tags: - chatglm - blip2 --- # Model Card for blip2zh-chatglm-6b ## Model Details ### Model Description blip2zh-chatglm-6b is a Chinese multimodal chat model trained with the blip2 recipe and has basic image-understanding ability. Because blip2 training does not fine-tune the language model, the model's behaviour in text-only conversations stays consistent with the original chatglm. Note: the model has only gone through the two-stage blip2 image-text alignment pre-training, without training on concrete downstream tasks such as VQA or instruction fine-tuning, so it can still easily generate unexpected content. - **blip2 base model**: [bert-base-chinese](https://huggingface.co/bert-base-chinese) - **Vision encoder**: eva-clip-vit-g - **Language model**: [chatglm-6b](https://github.com/THUDM/ChatGLM-6B) at [commit](https://huggingface.co/THUDM/chatglm-6b/commit/9324de70a93207c9a310cf99d5d6261791489691) ### Model Sources - [**Training Code**](https://github.com/XiPotatonium/LAVIS): blip2 training code, based on [LAVIS](https://github.com/salesforce/LAVIS) - [**webui**](https://github.com/XiPotatonium/chatbot-webui): a web UI implemented with gradio - [**api**](https://github.com/XiPotatonium/chatbot-api): an API service implemented with fastapi that can be deployed locally and also supports several other kinds of locally deployable language models. ## Uses The model parameters include the image encoder, blip2 and chatglm-6b. For loading the model and running inference, see the implementation in [api](https://github.com/XiPotatonium/chatbot-api/blob/main/src/model/blip2chatglm/__init__.py) and the [example](https://github.com/XiPotatonium/chatbot-api/blob/main/examples.ipynb) notebook. ## Limitations Constrained by the available Chinese datasets, image understanding is still limited and the model may produce irrelevant or incorrect content. Multi-turn dialogue training and instruction fine-tuning have not yet been introduced, so multi-turn conversations may be disturbed by the accumulated context. The model is also limited by the conversational quality of chatglm-6b itself. ## Training Details ### Training Data * [laion-2b-chinese](https://huggingface.co/datasets/IDEA-CCNL/laion2B-multi-chinese-subset): only the 670k image-text pairs with the highest clip scores were kept, and a subset of them was sampled for training. * [coco-zh](https://github.com/li-xirong/coco-cn) * [flickr8k-zh](http://lixirong.net/datasets/flickr8kcn) ### Training Procedure The two-stage blip2 training procedure. ## Demos ![](imgs/demo1.png) ![](imgs/demo2.png) ![](imgs/demo3.png)
CAMeL-Lab/bert-base-arabic-camelbert-msa-half
[ "pytorch", "tf", "jax", "bert", "fill-mask", "ar", "arxiv:2103.06678", "transformers", "license:apache-2.0", "autotrain_compatible" ]
fill-mask
{ "architectures": [ "BertForMaskedLM" ], "model_type": "bert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
16
null
--- license: mit tags: - generated_from_trainer metrics: - accuracy - f1 - recall - precision model-index: - name: gpt2_human results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # gpt2_human This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.4323 - Accuracy: {'accuracy': 0.8127125850340136} - F1: 0.8108 - Recall: 0.7227 - Precision: 0.8380 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.001 - train_batch_size: 4 - eval_batch_size: 4 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Recall | Precision | |:-------------:|:-----:|:-----:|:---------------:|:--------------------------------:|:------:|:------:|:---------:| | 0.8137 | 1.0 | 10976 | 0.6581 | {'accuracy': 0.6217049319727891} | 0.6205 | 0.5507 | 0.5835 | | 0.4538 | 2.0 | 21952 | 0.4323 | {'accuracy': 0.8127125850340136} | 0.8108 | 0.7227 | 0.8380 | ### Framework versions - Transformers 4.27.4 - Pytorch 2.0.0+cu118 - Datasets 2.11.0 - Tokenizers 0.13.3
CAMeL-Lab/bert-base-arabic-camelbert-msa-quarter
[ "pytorch", "tf", "jax", "bert", "fill-mask", "ar", "arxiv:2103.06678", "transformers", "license:apache-2.0", "autotrain_compatible" ]
fill-mask
{ "architectures": [ "BertForMaskedLM" ], "model_type": "bert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
12
null
--- language: - en license: creativeml-openrail-m thumbnail: "https://huggingface.co/Guizmus/SDArt_cosmichorrors768/resolve/main/showcase.jpg" tags: - stable-diffusion - text-to-image - image-to-image --- # SDArt : Cosmic Horrors (version based on 2.1 768px) ![Showcase](https://huggingface.co/Guizmus/SDArt_cosmichorrors768/resolve/main/showcase.jpg) ## Theme > It was a stupid idea, I know that now. I thought I was brave, daring, that I could handle anything. But nothing could have prepared me for what I found on the other side of the veil. I had always been fascinated by the unknown and the supernatural, and finally, found a ritual that would grant me passage. > > As I recited the incantation, my mind was wrenched apart, tearing at the senses as I felt an indescribable sense of disorientation. The world was drained of color, shrouded in a thick, eerie mist that clung to my skin. I couldn't see anything moving but there was a strange silence that hung in the air. It was eerie.. unsettling.. like walking through a graveyard. > > And then, I saw it. > > Beyond the veil was a terror beyond anything I had ever imagined. A mass of writhing tendrils, some thick and muscular, others thin and sinuous that moved with a strange, fluid grace. Its eyes were pure black within a world of gray - drawing me in like a magnet. > > I tried to flee, but my feet seemed to be rooted to the spot. It was like the entity had some kind of hold over me. I could feel its presence in my mind, trying to rend it apart. It whispered to me the secrets of the universe, knowledge meant for no human to possess. > > And then, I saw black. > > When I opened my eyes again, I was back in my own world. But the memory of the gray world and the creature haunted me. I had looked into the abyss of the unknown, and it had looked back. **COW #2 - Cosmic Horrors** The veil between worlds is wearing thin.. and the monsters await. Prepare yourself for an encounter with a terrifying monster that is beyond human comprehension. Beware, for these eldritch entities exist outside the realms of your reality, and to face them is to stare into the abyss of the unknown. **Challenges:** * Your image must be mostly grayscale. * Your artwork must feature an eldritch entity or monster. * Create mist/fog within the composition to add an element of suspense and mystery. ## Model description This is a model related to the "Challenge of the WeekEnd" contest on Stable Diffusion discord.. I try to make a model out of all the submission for people to continue enjoy the theme after the even, and see a little of their designs in other people's creations. The token stays "SDArt" and I balance the learning on the low side, so that it doesn't just replicate creations. The total dataset is made of 39 pictures. It was trained on [Stable diffusion 2.1 768px](https://huggingface.co/stabilityai/stable-diffusion-2-1). I used [EveryDream](https://github.com/victorchall/EveryDream2trainer) to do the training, 100 total repeat per picture. The pictures were tagged using the token "SDArt", and an arbitrary token I choose. The dataset is provided below, as well as a list of usernames and their corresponding token. The recommended sampling is k_Euler_a or DPM++ 2M Karras on 20 steps, CFGS 7.5 . [The model is also available here](https://huggingface.co/Guizmus/SDArt_cosmichorrors) in a version trained on 1.5 as a base. 
## Trained tokens * SDArt * dyce * bnp * keel * fcu * cous * aved * pfa * kprc * kuro * elis * ndi * asot * loeb * bsp * psst * irgc * mds * kts * byes * dany * mss * guin * mgt * mwf * crit * mlas * isch * phol * vedi * dds * httr * pte * oxi * nery * nips * nlwx * nrg * ofi * olis ## Download links [SafeTensors](https://huggingface.co/Guizmus/SDArt_cosmichorrors768/resolve/main/SDArt_CosmicHorrors768.safetensors) [CKPT](https://huggingface.co/Guizmus/SDArt_cosmichorrors768/resolve/main/SDArt_CosmicHorrors768.ckpt) [Config (yaml)](https://huggingface.co/Guizmus/SDArt_cosmichorrors768/resolve/main/SDArt_CosmicHorrors768.yaml) [Dataset](https://huggingface.co/Guizmus/SDArt_cosmichorrors768/resolve/main/dataset.zip) ## 🧨 Diffusers This model can be used just like any other Stable Diffusion model. For more information, please have a look at the [Stable Diffusion](https://huggingface.co/docs/diffusers/api/pipelines/stable_diffusion). You can also export the model to [ONNX](https://huggingface.co/docs/diffusers/optimization/onnx), [MPS](https://huggingface.co/docs/diffusers/optimization/mps) and/or [FLAX/JAX](). ```python from diffusers import StableDiffusionPipeline import torch model_id = "Guizmus/SDArt_cosmichorrors768" pipe = StableDiffusionPipeline.from_pretrained(model_id, torch_dtype=torch.float16) pipe = pipe.to("cuda") prompt = "SDArt httr" image = pipe(prompt).images[0] image.save("./SDArt.png") ```
CLAck/indo-pure
[ "pytorch", "marian", "text2text-generation", "en", "id", "dataset:ALT", "transformers", "translation", "license:apache-2.0", "autotrain_compatible" ]
translation
{ "architectures": [ "MarianMTModel" ], "model_type": "marian", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
4
null
--- license: apache-2.0 tags: - generated_from_trainer datasets: - emotion metrics: - accuracy - f1 model-index: - name: distilbert-base-uncased-ft-emotion-oversampled results: - task: name: Text Classification type: text-classification dataset: name: emotion type: emotion config: split split: validation args: split metrics: - name: Accuracy type: accuracy value: 0.932 - name: F1 type: f1 value: 0.932662033465717 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-ft-emotion-oversampled This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the emotion dataset. It achieves the following results on the evaluation set: - Loss: 0.2837 - Accuracy: 0.932 - F1: 0.9327 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 64 - eval_batch_size: 64 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 2 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | |:-------------:|:-----:|:----:|:---------------:|:--------:|:------:| | No log | 1.0 | 503 | 0.2921 | 0.931 | 0.9319 | | No log | 2.0 | 1006 | 0.2837 | 0.932 | 0.9327 | ### Framework versions - Transformers 4.27.4 - Pytorch 2.0.0+cu118 - Datasets 2.11.0 - Tokenizers 0.13.3
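A minimal usage sketch for this emotion classifier (the repo id is a placeholder; `top_k=None` returns scores for all labels):

```python
from transformers import pipeline

# Placeholder repo id -- substitute the actual fine-tuned checkpoint.
classifier = pipeline(
    "text-classification",
    model="<user>/distilbert-base-uncased-ft-emotion-oversampled",
    top_k=None,  # return the score for every emotion label, not just the best one
)

print(classifier("I can't believe how well this worked, I'm thrilled!"))
```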
CLS/WubiBERT_models
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
null
--- license: afl-3.0 --- The RoBERTa-base-ch model is the Chinese base version of RoBERTa-wwm-ext, which is open-sourced by the Harbin Institute of Technology and iFLYTEK joint laboratory (HFL). The RoBERTa-wwm-ext Chinese model is pre-trained on top of RoBERTa with the whole-word masking (wwm) method proposed by Yiming Cui, Wanxiang Che, Ting Liu, Bing Qin and Ziqing Yang.
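A minimal fill-mask sketch (the checkpoint name below refers to the original HFL release, `hfl/chinese-roberta-wwm-ext`, as an assumption; substitute the repo id this card actually belongs to):

```python
from transformers import pipeline

# Assumed checkpoint: the original HFL release of RoBERTa-wwm-ext (Chinese base).
fill_mask = pipeline("fill-mask", model="hfl/chinese-roberta-wwm-ext")

# Predict the masked character in a Chinese sentence.
for pred in fill_mask("哈工大讯飞联合实验室发布了中文预训练[MASK]型。"):
    print(pred["token_str"], round(pred["score"], 3))
```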
CLTL/gm-ner-xlmrbase
[ "pytorch", "tf", "xlm-roberta", "token-classification", "nl", "transformers", "dighum", "license:apache-2.0", "autotrain_compatible" ]
token-classification
{ "architectures": [ "XLMRobertaForTokenClassification" ], "model_type": "xlm-roberta", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
2
null
--- language: vi datasets: - vlsp - common_voice metrics: - wer tags: - audio - automatic-speech-recognition - speech license: apache-2.0 model-index: - name: Wav2vec2 Base Vietnamese results: - task: name: Speech Recognition type: automatic-speech-recognition dataset: name: Common Voice vi type: common_voice args: vi metrics: - name: Test WER type: wer value: 31.353591 --- # Wav2Vec2-Large-XLSR-53-Vietnamese Fine-tuned [dragonSwing/wav2vec2-base-pretrain-vietnamese](https://huggingface.co/dragonSwing/wav2vec2-base-pretrain-vietnamese) on Vietnamese Speech Recognition task using 100h labelled data from [VSLP dataset](https://drive.google.com/file/d/1vUSxdORDxk-ePUt-bUVDahpoXiqKchMx/view?usp=sharing). When using this model, make sure that your speech input is sampled at 16kHz. ## Usage The model can be used directly (without a language model) as follows: ```python import torch import torchaudio from datasets import load_dataset from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor test_dataset = load_dataset("common_voice", "vi", split="test") processor = Wav2Vec2Processor.from_pretrained("dragonSwing/wav2vec2-base-vietnamese") model = Wav2Vec2ForCTC.from_pretrained("dragonSwing/wav2vec2-base-vietnamese") resampler = torchaudio.transforms.Resample(48_000, 16_000) # Preprocessing the datasets. # We need to read the aduio files as arrays def speech_file_to_array_fn(batch): speech_array, sampling_rate = torchaudio.load(batch["path"]) batch["speech"] = resampler(speech_array).squeeze().numpy() return batch test_dataset = test_dataset.map(speech_file_to_array_fn) inputs = processor(test_dataset["speech"][:2], sampling_rate=16_000, return_tensors="pt", padding=True) with torch.no_grad(): logits = model(inputs.input_values, attention_mask=inputs.attention_mask).logits predicted_ids = torch.argmax(logits, dim=-1) print("Prediction:", processor.batch_decode(predicted_ids)) print("Reference:", test_dataset["sentence"][:2]) ``` ## Evaluation The model can be evaluated as follows on the Vietnamese test data of Common Voice. ```python import torch import torchaudio from datasets import load_dataset, load_metric from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor import re test_dataset = load_dataset("common_voice", "vi", split="test") wer = load_metric("wer") processor = Wav2Vec2Processor.from_pretrained("dragonSwing/wav2vec2-base-vietnamese") model = Wav2Vec2ForCTC.from_pretrained("dragonSwing/wav2vec2-base-vietnamese") model.to("cuda") chars_to_ignore_regex = r'[,?.!\-;:"“%\'�]' resampler = torchaudio.transforms.Resample(48_000, 16_000) # Preprocessing the datasets. # We need to read the aduio files as arrays def speech_file_to_array_fn(batch): batch["sentence"] = re.sub(chars_to_ignore_regex, '', batch["sentence"]).lower() speech_array, sampling_rate = torchaudio.load(batch["path"]) batch["speech"] = resampler(speech_array).squeeze().numpy() return batch test_dataset = test_dataset.map(speech_file_to_array_fn) # Preprocessing the datasets. 
# We need to read the audio files as arrays def evaluate(batch): inputs = processor(batch["speech"], sampling_rate=16_000, return_tensors="pt", padding=True) with torch.no_grad(): logits = model(inputs.input_values.to("cuda"), attention_mask=inputs.attention_mask.to("cuda")).logits pred_ids = torch.argmax(logits, dim=-1) batch["pred_strings"] = processor.batch_decode(pred_ids) return batch result = test_dataset.map(evaluate, batched=True, batch_size=1) print("WER: {:.2f}".format(100 * wer.compute(predictions=result["pred_strings"], references=result["sentence"]))) ``` **Test Result**: 31.353591%
CLTL/icf-domains
[ "pytorch", "roberta", "nl", "transformers", "license:mit", "text-classification" ]
text-classification
{ "architectures": [ "RobertaForMultiLabelSequenceClassification" ], "model_type": "roberta", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
35
null
--- license: apache-2.0 tags: - generated_from_trainer model-index: - name: punctfix-se results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # punctfix-se This model is a fine-tuned version of [bert-base-multilingual-uncased](https://huggingface.co/bert-base-multilingual-uncased) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.1049 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0001 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 1000 - num_epochs: 3 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:------:|:---------------:| | 0.2163 | 0.06 | 10000 | 0.1730 | | 0.1737 | 0.12 | 20000 | 0.1588 | | 0.1649 | 0.18 | 30000 | 0.1556 | | 0.1579 | 0.24 | 40000 | 0.1512 | | 0.1543 | 0.3 | 50000 | 0.1493 | | 0.1502 | 0.36 | 60000 | 0.1430 | | 0.1479 | 0.42 | 70000 | 0.1407 | | 0.1456 | 0.48 | 80000 | 0.1380 | | 0.1433 | 0.54 | 90000 | 0.1378 | | 0.1423 | 0.6 | 100000 | 0.1360 | | 0.14 | 0.66 | 110000 | 0.1334 | | 0.1379 | 0.72 | 120000 | 0.1334 | | 0.1365 | 0.78 | 130000 | 0.1307 | | 0.1346 | 0.84 | 140000 | 0.1287 | | 0.1335 | 0.9 | 150000 | 0.1282 | | 0.1315 | 0.96 | 160000 | 0.1260 | | 0.1289 | 1.02 | 170000 | 0.1267 | | 0.1257 | 1.07 | 180000 | 0.1254 | | 0.1251 | 1.13 | 190000 | 0.1251 | | 0.1237 | 1.19 | 200000 | 0.1231 | | 0.1234 | 1.25 | 210000 | 0.1242 | | 0.1226 | 1.31 | 220000 | 0.1232 | | 0.1214 | 1.37 | 230000 | 0.1215 | | 0.121 | 1.43 | 240000 | 0.1206 | | 0.1201 | 1.49 | 250000 | 0.1191 | | 0.1195 | 1.55 | 260000 | 0.1179 | | 0.1183 | 1.61 | 270000 | 0.1168 | | 0.1174 | 1.67 | 280000 | 0.1173 | | 0.1176 | 1.73 | 290000 | 0.1163 | | 0.1154 | 1.79 | 300000 | 0.1148 | | 0.1145 | 1.85 | 310000 | 0.1141 | | 0.1143 | 1.91 | 320000 | 0.1140 | | 0.1137 | 1.97 | 330000 | 0.1122 | | 0.1098 | 2.03 | 340000 | 0.1138 | | 0.1056 | 2.09 | 350000 | 0.1115 | | 0.1057 | 2.15 | 360000 | 0.1133 | | 0.1048 | 2.21 | 370000 | 0.1108 | | 0.1041 | 2.27 | 380000 | 0.1111 | | 0.1041 | 2.33 | 390000 | 0.1099 | | 0.1035 | 2.39 | 400000 | 0.1106 | | 0.1026 | 2.45 | 410000 | 0.1085 | | 0.1026 | 2.51 | 420000 | 0.1096 | | 0.1018 | 2.57 | 430000 | 0.1093 | | 0.101 | 2.63 | 440000 | 0.1074 | | 0.1002 | 2.69 | 450000 | 0.1073 | | 0.0999 | 2.75 | 460000 | 0.1060 | | 0.0995 | 2.81 | 470000 | 0.1058 | | 0.0991 | 2.87 | 480000 | 0.1056 | | 0.0986 | 2.93 | 490000 | 0.1051 | | 0.0987 | 2.99 | 500000 | 0.1049 | ### Framework versions - Transformers 4.28.0 - Pytorch 2.0.0+cu117 - Datasets 2.4.0 - Tokenizers 0.11.6
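The card does not show how to query the model; below is a generic token-classification sketch (the repo id is a placeholder, and the mapping from predicted labels to punctuation marks is model-specific and not documented here):

```python
from transformers import pipeline

# Placeholder repo id -- substitute the actual punctfix-se checkpoint.
tagger = pipeline("token-classification", model="<user>/punctfix-se", aggregation_strategy="simple")

# The model predicts, per word, which punctuation (if any) should follow it;
# the exact label names depend on how the training data was encoded.
for entity in tagger("hej hur mår du jag mår bra tack"):
    print(entity["word"], entity["entity_group"], round(entity["score"], 3))
```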
Caddy/UD
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
null
--- license: creativeml-openrail-m base_model: runwayml/stable-diffusion-v1-5 instance_prompt: An orange T-shirt tags: - stable-diffusion - stable-diffusion-diffusers - text-to-image - diffusers - lora inference: true --- # LoRA DreamBooth - chentxxx/orange_T These are LoRA adaptation weights for runwayml/stable-diffusion-v1-5. The weights were trained on "An orange T-shirt" using [DreamBooth](https://dreambooth.github.io/). You can find some example images below. ![img_0](./image_0.png) ![img_1](./image_1.png) ![img_2](./image_2.png) ![img_3](./image_3.png)
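A minimal inference sketch with `diffusers`, loading the LoRA attention-processor weights on top of the base model (an illustration, not necessarily the author's exact setup):

```python
import torch
from diffusers import StableDiffusionPipeline

# Load the base model named in the card metadata.
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# Apply the LoRA attention-processor weights published in this repo.
pipe.unet.load_attn_procs("chentxxx/orange_T")

# Use the instance prompt the adapter was trained on.
image = pipe("An orange T-shirt", num_inference_steps=30).images[0]
image.save("orange_t_shirt.png")
```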
CallumRai/HansardGPT2
[ "pytorch", "jax", "gpt2", "text-generation", "transformers" ]
text-generation
{ "architectures": [ "GPT2LMHeadModel" ], "model_type": "gpt2", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": true, "max_length": 50 }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
14
null
--- license: apache-2.0 tags: - generated_from_trainer datasets: - imagefolder metrics: - accuracy model-index: - name: deit-tiny-patch16-224-finetuned-main-gpu-20e-final results: - task: name: Image Classification type: image-classification dataset: name: imagefolder type: imagefolder config: default split: validation args: default metrics: - name: Accuracy type: accuracy value: 0.9856292517006803 --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # deit-tiny-patch16-224-finetuned-main-gpu-20e-final This model is a fine-tuned version of [facebook/deit-tiny-patch16-224](https://huggingface.co/facebook/deit-tiny-patch16-224) on the imagefolder dataset. It achieves the following results on the evaluation set: - Loss: 0.0420 - Accuracy: 0.9856 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - gradient_accumulation_steps: 4 - total_train_batch_size: 128 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - num_epochs: 20 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:-----:|:---------------:|:--------:| | 0.6047 | 1.0 | 551 | 0.6283 | 0.7111 | | 0.431 | 2.0 | 1102 | 0.3962 | 0.8366 | | 0.352 | 3.0 | 1653 | 0.2620 | 0.8953 | | 0.2682 | 4.0 | 2204 | 0.1814 | 0.9318 | | 0.2533 | 5.0 | 2755 | 0.1564 | 0.9396 | | 0.2069 | 6.0 | 3306 | 0.1243 | 0.9531 | | 0.2065 | 7.0 | 3857 | 0.1048 | 0.9603 | | 0.194 | 8.0 | 4408 | 0.1019 | 0.9636 | | 0.1879 | 9.0 | 4959 | 0.0877 | 0.9671 | | 0.1584 | 10.0 | 5510 | 0.0870 | 0.9687 | | 0.1426 | 11.0 | 6061 | 0.0814 | 0.9718 | | 0.1596 | 12.0 | 6612 | 0.0740 | 0.9749 | | 0.1125 | 13.0 | 7163 | 0.0613 | 0.9781 | | 0.1374 | 14.0 | 7714 | 0.0570 | 0.9787 | | 0.1003 | 15.0 | 8265 | 0.0596 | 0.9793 | | 0.109 | 16.0 | 8816 | 0.0511 | 0.9815 | | 0.1206 | 17.0 | 9367 | 0.0497 | 0.9829 | | 0.1024 | 18.0 | 9918 | 0.0437 | 0.9844 | | 0.1051 | 19.0 | 10469 | 0.0420 | 0.9851 | | 0.0955 | 20.0 | 11020 | 0.0420 | 0.9856 | ### Framework versions - Transformers 4.26.1 - Pytorch 1.13.1+cu116 - Datasets 2.10.1 - Tokenizers 0.13.2
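A minimal usage sketch with the image-classification pipeline (the repo id is a placeholder):

```python
from transformers import pipeline

# Placeholder repo id -- substitute the actual fine-tuned checkpoint.
classifier = pipeline(
    "image-classification",
    model="<user>/deit-tiny-patch16-224-finetuned-main-gpu-20e-final",
)

# Accepts a local path, a URL, or a PIL image.
print(classifier("example.jpg"))
```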
Camzure/MaamiBot
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
null
--- license: apache-2.0 tags: - generated_from_keras_callback model-index: - name: TreyJ/distilbert-base-uncased-finetuned-squad-batchsize-8 results: [] --- <!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # TreyJ/distilbert-base-uncased-finetuned-squad-batchsize-8 This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset. It achieves the following results on the evaluation set: - Train Loss: 1.0566 - Train End Logits Accuracy: 0.7096 - Train Start Logits Accuracy: 0.6701 - Validation Loss: 1.1422 - Validation End Logits Accuracy: 0.6940 - Validation Start Logits Accuracy: 0.6575 - Epoch: 1 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': None, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': True, 'is_legacy_optimizer': False, 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 1e-05, 'decay_steps': 22130, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False} - training_precision: float32 ### Training results | Train Loss | Train End Logits Accuracy | Train Start Logits Accuracy | Validation Loss | Validation End Logits Accuracy | Validation Start Logits Accuracy | Epoch | |:----------:|:-------------------------:|:---------------------------:|:---------------:|:------------------------------:|:--------------------------------:|:-----:| | 1.5985 | 0.5877 | 0.5521 | 1.1950 | 0.6779 | 0.6395 | 0 | | 1.0566 | 0.7096 | 0.6701 | 1.1422 | 0.6940 | 0.6575 | 1 | ### Framework versions - Transformers 4.28.0.dev0 - TensorFlow 2.12.0 - Datasets 2.11.0 - Tokenizers 0.13.3
Canadiancaleb/jessebot
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
null
--- tags: - Taxi-v3 - q-learning - reinforcement-learning - custom-implementation model-index: - name: q-Taxi-v3 results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: Taxi-v3 type: Taxi-v3 metrics: - type: mean_reward value: 7.56 +/- 2.71 name: mean_reward verified: false --- # **Q-Learning** Agent playing **Taxi-v3** This is a trained model of a **Q-Learning** agent playing **Taxi-v3**. ## Usage ```python model = load_from_hub(repo_id="ajitgupta/q-Taxi-v3", filename="q-learning.pkl") # Don't forget to check if you need to add additional attributes (is_slippery=False etc) env = gym.make(model["env_id"]) ```
Capreolus/birch-bert-large-car_mb
[ "pytorch", "tf", "jax", "bert", "next-sentence-prediction", "transformers" ]
null
{ "architectures": [ "BertForNextSentencePrediction" ], "model_type": "bert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
4
null
--- library_name: stable-baselines3 tags: - LunarLander-v2 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: PPO results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: LunarLander-v2 type: LunarLander-v2 metrics: - type: mean_reward value: 248.54 +/- 25.26 name: mean_reward verified: false --- # **PPO** Agent playing **LunarLander-v2** This is a trained model of a **PPO** agent playing **LunarLander-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3). ## Usage (with Stable-baselines3) TODO: Add your code ```python from stable_baselines3 import ... from huggingface_sb3 import load_from_hub ... ```
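A minimal load-and-evaluate sketch for the TODO block above (the repo id and zip filename are assumptions; check the repository's file listing):

```python
import gym
from huggingface_sb3 import load_from_hub
from stable_baselines3 import PPO
from stable_baselines3.common.evaluation import evaluate_policy

# Repo id and filename are assumptions -- substitute the actual values from the repository.
checkpoint = load_from_hub(repo_id="<user>/ppo-LunarLander-v2", filename="ppo-LunarLander-v2.zip")
model = PPO.load(checkpoint)

# Evaluate the loaded policy for a few episodes.
env = gym.make("LunarLander-v2")
mean_reward, std_reward = evaluate_policy(model, env, n_eval_episodes=10, deterministic=True)
print(f"mean_reward={mean_reward:.2f} +/- {std_reward:.2f}")
```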
Capreolus/birch-bert-large-mb
[ "pytorch", "tf", "jax", "bert", "next-sentence-prediction", "transformers" ]
null
{ "architectures": [ "BertForNextSentencePrediction" ], "model_type": "bert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
1
null
--- tags: - generated_from_keras_callback model-index: - name: ru_propaganda_opposition_model_without_foreign_agent_mask_large_2 results: [] --- <!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # ru_propaganda_opposition_model_without_foreign_agent_mask_large_2 This model is a fine-tuned version of [DeepPavlov/bert-base-bg-cs-pl-ru-cased](https://huggingface.co/DeepPavlov/bert-base-bg-cs-pl-ru-cased) on an unknown dataset. It achieves the following results on the evaluation set: - Train Loss: 0.0024 - Validation Loss: 0.0746 - Train Accuracy: 0.9821 - Epoch: 4 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': None, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': True, 'is_legacy_optimizer': False, 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 3985, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False} - training_precision: float32 ### Training results | Train Loss | Validation Loss | Train Accuracy | Epoch | |:----------:|:---------------:|:--------------:|:-----:| | 0.2137 | 0.0650 | 0.9702 | 0 | | 0.0580 | 0.0753 | 0.9702 | 1 | | 0.0207 | 0.0590 | 0.9821 | 2 | | 0.0038 | 0.0711 | 0.9792 | 3 | | 0.0024 | 0.0746 | 0.9821 | 4 | ### Framework versions - Transformers 4.27.4 - TensorFlow 2.12.0 - Datasets 2.11.0 - Tokenizers 0.13.3
Capreolus/birch-bert-large-msmarco_mb
[ "pytorch", "tf", "jax", "bert", "next-sentence-prediction", "transformers" ]
null
{ "architectures": [ "BertForNextSentencePrediction" ], "model_type": "bert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
1
null
--- tags: - fastai --- # Amazing! 🥳 Congratulations on hosting your fastai model on the Hugging Face Hub! # Some next steps 1. Fill out this model card with more information (see the template below and the [documentation here](https://huggingface.co/docs/hub/model-repos))! 2. Create a demo in Gradio or Streamlit using 🤗 Spaces ([documentation here](https://huggingface.co/docs/hub/spaces)). 3. Join the fastai community on the [Fastai Discord](https://discord.com/invite/YKrxeNn)! Greetings fellow fastlearner 🤝! Don't forget to delete this content from your model card. --- # Model card ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed
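A minimal loading sketch using the fastai integration in `huggingface_hub` (the repo id is a placeholder, and the expected input type depends on how the Learner was built):

```python
from huggingface_hub import from_pretrained_fastai

# Placeholder repo id -- substitute the repository this card belongs to.
learner = from_pretrained_fastai("<user>/<repo>")

# Inference follows the usual fastai API; pass whatever item type the Learner expects.
prediction = learner.predict("example input")
print(prediction)
```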
Capreolus/electra-base-msmarco
[ "pytorch", "tf", "electra", "text-classification", "arxiv:2008.09093", "transformers" ]
text-classification
{ "architectures": [ "ElectraForSequenceClassification" ], "model_type": "electra", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
110
null
--- library_name: stable-baselines3 tags: - PandaReachDense-v2 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: A2C results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: PandaReachDense-v2 type: PandaReachDense-v2 metrics: - type: mean_reward value: -2.59 +/- 0.51 name: mean_reward verified: false --- # **A2C** Agent playing **PandaReachDense-v2** This is a trained model of a **A2C** agent playing **PandaReachDense-v2** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3). ## Usage (with Stable-baselines3) TODO: Add your code ```python from stable_baselines3 import ... from huggingface_sb3 import load_from_hub ... ```
Captain-1337/CrudeBERT
[ "pytorch", "bert", "text-classification", "arxiv:1908.10063", "transformers" ]
text-classification
{ "architectures": [ "BertForSequenceClassification" ], "model_type": "bert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
28
null
--- license: mit tags: - generated_from_trainer model-index: - name: ec-biogpt-noised-pubmed-v3 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # ec-biogpt-noised-pubmed-v3 This model is a fine-tuned version of [microsoft/biogpt](https://huggingface.co/microsoft/biogpt) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 1.7552 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 10 - num_epochs: 5 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:-----:|:---------------:| | 1.9981 | 0.07 | 500 | 1.8163 | | 1.7501 | 0.14 | 1000 | 1.7809 | | 2.0623 | 0.22 | 1500 | 1.7638 | | 1.8094 | 0.29 | 2000 | 1.7458 | | 1.8711 | 0.36 | 2500 | 1.7326 | | 1.6588 | 0.43 | 3000 | 1.7244 | | 1.5469 | 0.5 | 3500 | 1.7153 | | 1.6981 | 0.57 | 4000 | 1.7084 | | 1.6728 | 0.65 | 4500 | 1.7025 | | 1.8203 | 0.72 | 5000 | 1.6973 | | 1.8318 | 0.79 | 5500 | 1.6924 | | 1.6916 | 0.86 | 6000 | 1.6906 | | 1.6369 | 0.93 | 6500 | 1.6816 | | 1.4371 | 1.01 | 7000 | 1.6838 | | 1.381 | 1.08 | 7500 | 1.6829 | | 1.6214 | 1.15 | 8000 | 1.6846 | | 1.6218 | 1.22 | 8500 | 1.6790 | | 1.6278 | 1.29 | 9000 | 1.6788 | | 1.4046 | 1.36 | 9500 | 1.6774 | | 1.4866 | 1.44 | 10000 | 1.6728 | | 1.4712 | 1.51 | 10500 | 1.6716 | | 1.5896 | 1.58 | 11000 | 1.6702 | | 1.4818 | 1.65 | 11500 | 1.6681 | | 1.4261 | 1.72 | 12000 | 1.6638 | | 1.5318 | 1.79 | 12500 | 1.6624 | | 1.4814 | 1.87 | 13000 | 1.6620 | | 1.5131 | 1.94 | 13500 | 1.6583 | | 1.3971 | 2.01 | 14000 | 1.6806 | | 1.4146 | 2.08 | 14500 | 1.6842 | | 1.5739 | 2.15 | 15000 | 1.6888 | | 1.312 | 2.23 | 15500 | 1.6857 | | 1.4992 | 2.3 | 16000 | 1.6876 | | 1.2725 | 2.37 | 16500 | 1.6845 | | 1.3904 | 2.44 | 17000 | 1.6840 | | 1.4569 | 2.51 | 17500 | 1.6855 | | 1.4358 | 2.58 | 18000 | 1.6811 | | 1.4747 | 2.66 | 18500 | 1.6814 | | 1.3272 | 2.73 | 19000 | 1.6818 | | 1.3743 | 2.8 | 19500 | 1.6756 | | 1.3953 | 2.87 | 20000 | 1.6759 | | 1.4173 | 2.94 | 20500 | 1.6748 | | 1.3998 | 3.02 | 21000 | 1.7133 | | 1.3396 | 3.09 | 21500 | 1.7205 | | 1.1953 | 3.16 | 22000 | 1.7218 | | 1.2047 | 3.23 | 22500 | 1.7223 | | 1.0788 | 3.3 | 23000 | 1.7214 | | 1.3048 | 3.37 | 23500 | 1.7230 | | 1.3271 | 3.45 | 24000 | 1.7195 | | 1.4236 | 3.52 | 24500 | 1.7208 | | 1.1851 | 3.59 | 25000 | 1.7209 | | 1.285 | 3.66 | 25500 | 1.7207 | | 1.3013 | 3.73 | 26000 | 1.7174 | | 1.2734 | 3.81 | 26500 | 1.7182 | | 1.3496 | 3.88 | 27000 | 1.7168 | | 1.3628 | 3.95 | 27500 | 1.7134 | | 1.0063 | 4.02 | 28000 | 1.7507 | | 1.1155 | 4.09 | 28500 | 1.7557 | | 1.1886 | 4.16 | 29000 | 1.7571 | | 1.1304 | 4.24 | 29500 | 1.7575 | | 1.0328 | 4.31 | 30000 | 1.7563 | | 1.2631 | 4.38 | 30500 | 1.7584 | | 1.2212 | 4.45 | 31000 | 1.7564 | | 1.1825 | 4.52 | 31500 | 1.7583 | | 1.4374 | 4.6 | 32000 | 1.7562 | | 1.1568 | 4.67 | 32500 | 1.7554 | | 1.3035 | 4.74 | 33000 | 1.7565 | | 1.27 | 4.81 | 33500 | 1.7557 | | 1.2518 | 4.88 | 34000 | 1.7560 | | 1.0965 | 4.95 | 34500 | 1.7552 | ### 
Framework versions - Transformers 4.27.4 - Pytorch 2.0.0+cu117 - Datasets 2.11.0 - Tokenizers 0.13.3
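A minimal inference sketch, not part of the original card: the repo id below is a placeholder (the card does not state where the fine-tuned weights are published), and the snippet relies only on the standard `transformers` text-generation pipeline, which supports the `microsoft/biogpt` base architecture.

```python
from transformers import pipeline

# Placeholder repo id -- replace with the actual location of the fine-tuned weights.
generator = pipeline("text-generation", model="ec-biogpt-noised-pubmed-v3")

prompt = "Recent studies on pulmonary fibrosis suggest"
outputs = generator(prompt, max_new_tokens=60, do_sample=True, top_p=0.9)
print(outputs[0]["generated_text"])
```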
Carlork314/Carlos
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
null
--- license: apache-2.0 tags: - setfit - sentence-transformers - text-classification pipeline_tag: text-classification --- # reklam-2-29-xlm This is a [SetFit model](https://github.com/huggingface/setfit) that can be used for text classification. The model has been trained using an efficient few-shot learning technique that involves: 1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning. 2. Training a classification head with features from the fine-tuned Sentence Transformer. ## Usage To use this model for inference, first install the SetFit library: ```bash python -m pip install setfit ``` You can then run inference as follows: ```python from setfit import SetFitModel # Download from Hub and run inference model = SetFitModel.from_pretrained("../models/reklam-2-29-xlm") # Run inference preds = model(["i loved the spiderman movie!", "pineapple on pizza is the worst 🤮"]) ``` ## BibTeX entry and citation info ```bibtex @article{https://doi.org/10.48550/arxiv.2209.11055, doi = {10.48550/ARXIV.2209.11055}, url = {https://arxiv.org/abs/2209.11055}, author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren}, keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences}, title = {Efficient Few-Shot Learning Without Prompts}, publisher = {arXiv}, year = {2022}, copyright = {Creative Commons Attribution 4.0 International} } ```
CarlosPR/mt5-spanish-memmories-analysis
[ "pytorch", "mt5", "text2text-generation", "transformers", "autotrain_compatible" ]
text2text-generation
{ "architectures": [ "MT5ForConditionalGeneration" ], "model_type": "mt5", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
7
null
--- license: mit tags: - generated_from_trainer metrics: - accuracy - f1 - precision - recall model-index: - name: ukraine-war-pov results: [] widget: - text: 'Росія знову скоює воєнні злочини' example_title: 'proukrainian' - text: 'ВСУ все берет с собой — украинские «захистники» взяли стульчак из Артемовска' example_title: 'prorussian' --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # ukraine-war-pov This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.2166 - Accuracy: 0.9315 - F1: 0.9315 - Precision: 0.9315 - Recall: 0.9315 - AUC: 0.9774 (self-report) ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 16 - eval_batch_size: 64 - seed: 123 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - num_epochs: 10 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | F1 | Precision | Recall | |:-------------:|:-----:|:-----:|:---------------:|:--------:|:------:|:---------:|:------:| | 0.284 | 1.0 | 1875 | 0.1850 | 0.9295 | 0.9295 | 0.9303 | 0.9295 | | 0.2271 | 2.0 | 3750 | 0.1551 | 0.9405 | 0.9405 | 0.9414 | 0.9405 | | 0.2064 | 3.0 | 5625 | 0.1734 | 0.9305 | 0.9305 | 0.9311 | 0.9305 | | 0.1842 | 4.0 | 7500 | 0.1694 | 0.9315 | 0.9315 | 0.9317 | 0.9315 | | 0.1628 | 5.0 | 9375 | 0.1838 | 0.9435 | 0.9435 | 0.9438 | 0.9435 | | 0.1309 | 6.0 | 11250 | 0.2074 | 0.9395 | 0.9395 | 0.9395 | 0.9395 | | 0.1017 | 7.0 | 13125 | 0.2659 | 0.9365 | 0.9365 | 0.9365 | 0.9365 | | 0.0778 | 8.0 | 15000 | 0.2851 | 0.94 | 0.9400 | 0.9400 | 0.94 | | 0.0664 | 9.0 | 16875 | 0.3238 | 0.9385 | 0.9385 | 0.9387 | 0.9385 | | 0.066 | 10.0 | 18750 | 0.3092 | 0.939 | 0.9390 | 0.9390 | 0.9390 | ### Framework versions - Transformers 4.27.4 - Pytorch 2.0.0+cu118 - Tokenizers 0.13.3
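A hedged usage sketch, not part of the original card: the repo id is a placeholder and the exact label names depend on how the classification head was configured; the widget examples above are reused as inputs.

```python
from transformers import pipeline

# Placeholder repo id -- substitute the published fine-tuned checkpoint.
classifier = pipeline("text-classification", model="ukraine-war-pov")

texts = [
    "Росія знову скоює воєнні злочини",  # pro-Ukrainian widget example
    "ВСУ все берет с собой — украинские «захистники» взяли стульчак из Артемовска",  # pro-Russian widget example
]
for text, pred in zip(texts, classifier(texts)):
    print(pred["label"], round(pred["score"], 3), "<-", text[:40])
```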
CarlosTron/Yo
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
null
--- license: apache-2.0 tags: - setfit - sentence-transformers - text-classification pipeline_tag: text-classification --- # fathyshalab/massive-ar-SA This is a [SetFit model](https://github.com/huggingface/setfit) that can be used for text classification. The model has been trained using an efficient few-shot learning technique that involves: 1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning. 2. Training a classification head with features from the fine-tuned Sentence Transformer. ## Usage To use this model for inference, first install the SetFit library: ```bash python -m pip install setfit ``` You can then run inference as follows: ```python from setfit import SetFitModel # Download from Hub and run inference model = SetFitModel.from_pretrained("fathyshalab/massive-ar-SA") # Run inference preds = model(["i loved the spiderman movie!", "pineapple on pizza is the worst 🤮"]) ``` ## BibTeX entry and citation info ```bibtex @article{https://doi.org/10.48550/arxiv.2209.11055, doi = {10.48550/ARXIV.2209.11055}, url = {https://arxiv.org/abs/2209.11055}, author = {Tunstall, Lewis and Reimers, Nils and Jo, Unso Eun Seo and Bates, Luke and Korat, Daniel and Wasserblat, Moshe and Pereg, Oren}, keywords = {Computation and Language (cs.CL), FOS: Computer and information sciences, FOS: Computer and information sciences}, title = {Efficient Few-Shot Learning Without Prompts}, publisher = {arXiv}, year = {2022}, copyright = {Creative Commons Attribution 4.0 International} } ```
Carolhuehuehuehue/Sla
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
null
--- tags: - generated_from_keras_callback model-index: - name: TF-CodeT5-base results: [] --- <!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # TF-CodeT5-base This model was trained from scratch on an unknown dataset. It achieves the following results on the evaluation set: ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: None - training_precision: float32 ### Training results ### Framework versions - Transformers 4.28.1 - TensorFlow 2.12.0 - Datasets 2.11.0 - Tokenizers 0.13.3
Cat/Kitty
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
2023-04-12T11:12:34Z
--- language: "en" thumbnail: "Keywords to Sentences" tags: - keytotext - k2t - Keywords to Sentences model-index: - name: k2t_AI_Ads_Foods Idea is to build a model which will take keywords as inputs and generate sentences as outputs. Potential use case can include: - Marketing - Search Engine Optimization - Topic generation etc. - Fine tuning of topic modeling models
Cedille/fr-boris
[ "pytorch", "gptj", "text-generation", "fr", "dataset:c4", "arxiv:2202.03371", "transformers", "causal-lm", "license:mit", "has_space" ]
text-generation
{ "architectures": [ "GPTJForCausalLM" ], "model_type": "gptj", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": true, "max_length": 50 }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
401
null
--- tags: - CartPole-v1 - reinforce - reinforcement-learning - custom-implementation - deep-rl-class model-index: - name: PGRAD_Pytorch_CartPole-v1 results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: CartPole-v1 type: CartPole-v1 metrics: - type: mean_reward value: 500.00 +/- 0.00 name: mean_reward verified: false --- # **Reinforce** Agent playing **CartPole-v1** This is a trained model of a **Reinforce** agent playing **CartPole-v1** . To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
dccuchile/albert-base-spanish-finetuned-mldoc
[ "pytorch", "albert", "text-classification", "transformers" ]
text-classification
{ "architectures": [ "AlbertForSequenceClassification" ], "model_type": "albert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
34
null
--- tags: - generated_from_keras_callback model-index: - name: pegasus_trained_SIR results: [] --- <!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # pegasus_trained_SIR This model is a fine-tuned version of [google/pegasus-large](https://huggingface.co/google/pegasus-large) on an unknown dataset. It achieves the following results on the evaluation set: - Train Loss: 1.0036 - Train Sparse Categorical Accuracy: 0.7945 - Validation Loss: 1.1063 - Validation Sparse Categorical Accuracy: 0.7936 - Epoch: 2 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': None, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': True, 'is_legacy_optimizer': False, 'learning_rate': 0.001, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False} - training_precision: float32 ### Training results | Train Loss | Train Sparse Categorical Accuracy | Validation Loss | Validation Sparse Categorical Accuracy | Epoch | |:----------:|:---------------------------------:|:---------------:|:--------------------------------------:|:-----:| | 1.3170 | 0.7603 | 1.1221 | 0.7874 | 0 | | 1.1244 | 0.7805 | 1.1193 | 0.7913 | 1 | | 1.0036 | 0.7945 | 1.1063 | 0.7936 | 2 | ### Framework versions - Transformers 4.28.0 - TensorFlow 2.12.0 - Datasets 2.11.0 - Tokenizers 0.13.3
dccuchile/albert-base-spanish-finetuned-pawsx
[ "pytorch", "albert", "text-classification", "transformers" ]
text-classification
{ "architectures": [ "AlbertForSequenceClassification" ], "model_type": "albert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
25
2023-04-12T11:18:34Z
--- license: apache-2.0 tags: - image-classification - generated_from_trainer datasets: - beans metrics: - accuracy model-index: - name: Clasificacion-vit-model-manuel-chaves results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # Clasificacion-vit-model-manuel-chaves This model is a fine-tuned version of [google/vit-base-patch16-224-in21k](https://huggingface.co/google/vit-base-patch16-224-in21k) on the beans dataset. It achieves the following results on the evaluation set: - Loss: 0.0701 - Accuracy: 0.9774 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0002 - train_batch_size: 8 - eval_batch_size: 8 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 4 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 0.1442 | 3.85 | 500 | 0.0701 | 0.9774 | ### Framework versions - Transformers 4.27.4 - Pytorch 2.0.0+cu118 - Datasets 2.11.0 - Tokenizers 0.13.3
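A minimal inference sketch, assumed rather than taken from the original card: the repo id is a placeholder and the input can be any local leaf photo; the `image-classification` pipeline handles preprocessing for ViT checkpoints.

```python
from transformers import pipeline
from PIL import Image

# Placeholder repo id -- point this at the published fine-tuned checkpoint.
classifier = pipeline("image-classification", model="Clasificacion-vit-model-manuel-chaves")

image = Image.open("bean_leaf.jpg")  # any local bean-leaf photo
for pred in classifier(image, top_k=3):
    print(pred["label"], round(pred["score"], 3))
```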
dccuchile/albert-base-spanish-finetuned-qa-mlqa
[ "pytorch", "albert", "question-answering", "transformers", "autotrain_compatible" ]
question-answering
{ "architectures": [ "AlbertForQuestionAnswering" ], "model_type": "albert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
3
null
--- license: cc-by-sa-4.0 tags: - generated_from_trainer metrics: - accuracy model-index: - name: SloBertAA_Top5_WithoutOOC_NEW results: [] datasets: - gregorgabrovsek/RTVCommentsTop5UsersWithoutOOC language: - sl --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # SloBertAA_Top5_WithoutOOC_NEW This model is a fine-tuned version of [EMBEDDIA/sloberta](https://huggingface.co/EMBEDDIA/sloberta) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.3464 - Accuracy: 0.9419 ## Other related models | Base model | Includes the OOC class? | 5 classes | 10 classes | 20 classes | 50 classes | | ------------------- | ----------------------- |:---------:|:----------:|:----------:|:----------:| | SloBERTa | Yes | x | x | x | x | | SloBERTa | No | x | x | x | x | | BERT Multilingual | Yes | x | x | x | x | | BERT Multilingual | No | x | x | x | x | ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 12 - eval_batch_size: 12 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:-----:|:---------------:|:--------:| | 0.2787 | 1.0 | 8757 | 0.2899 | 0.9120 | | 0.2197 | 2.0 | 17514 | 0.2314 | 0.9294 | | 0.1483 | 3.0 | 26271 | 0.3004 | 0.9349 | | 0.0997 | 4.0 | 35028 | 0.3182 | 0.9405 | | 0.0699 | 5.0 | 43785 | 0.3464 | 0.9419 | ### Framework versions - Transformers 4.26.1 - Pytorch 1.8.0 - Datasets 2.10.1 - Tokenizers 0.13.2
dccuchile/albert-large-spanish-finetuned-pos
[ "pytorch", "albert", "token-classification", "transformers", "autotrain_compatible" ]
token-classification
{ "architectures": [ "AlbertForTokenClassification" ], "model_type": "albert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
1
2023-04-12T11:26:34Z
--- license: cc-by-sa-4.0 tags: - generated_from_trainer metrics: - accuracy model-index: - name: SloBertAA_Top10_WithOOC_NEW results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # SloBertAA_Top10_WithOOC_NEW This model is a fine-tuned version of [EMBEDDIA/sloberta](https://huggingface.co/EMBEDDIA/sloberta) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.5075 - Accuracy: 0.9081 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 12 - eval_batch_size: 12 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:-----:|:---------------:|:--------:| | 0.3943 | 1.0 | 16293 | 0.4045 | 0.8756 | | 0.318 | 2.0 | 32586 | 0.3345 | 0.8978 | | 0.219 | 3.0 | 48879 | 0.3845 | 0.9017 | | 0.1544 | 4.0 | 65172 | 0.4492 | 0.9052 | | 0.1056 | 5.0 | 81465 | 0.5075 | 0.9081 | ### Framework versions - Transformers 4.26.1 - Pytorch 1.8.0 - Datasets 2.10.1 - Tokenizers 0.13.2
dccuchile/albert-large-spanish-finetuned-qa-mlqa
[ "pytorch", "albert", "question-answering", "transformers", "autotrain_compatible" ]
question-answering
{ "architectures": [ "AlbertForQuestionAnswering" ], "model_type": "albert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
5
2023-04-12T11:26:34Z
--- license: cc-by-sa-4.0 tags: - generated_from_trainer metrics: - accuracy model-index: - name: SloBertAA_Top10_WithoutOOC_NEW results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # SloBertAA_Top10_WithoutOOC_NEW This model is a fine-tuned version of [EMBEDDIA/sloberta](https://huggingface.co/EMBEDDIA/sloberta) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.3475 - Accuracy: 0.9426 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 12 - eval_batch_size: 12 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:-----:|:---------------:|:--------:| | 0.2904 | 1.0 | 14812 | 0.2680 | 0.9209 | | 0.2039 | 2.0 | 29624 | 0.2471 | 0.9332 | | 0.1422 | 3.0 | 44436 | 0.2779 | 0.9371 | | 0.0888 | 4.0 | 59248 | 0.3324 | 0.9385 | | 0.0623 | 5.0 | 74060 | 0.3475 | 0.9426 | ### Framework versions - Transformers 4.26.1 - Pytorch 1.8.0 - Datasets 2.10.1 - Tokenizers 0.13.2
dccuchile/albert-tiny-spanish-finetuned-mldoc
[ "pytorch", "albert", "text-classification", "transformers" ]
text-classification
{ "architectures": [ "AlbertForSequenceClassification" ], "model_type": "albert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
32
null
--- license: apache-2.0 tags: - generated_from_trainer metrics: - rouge model-index: - name: my_politics_model results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # my_politics_model This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on the None dataset. It achieves the following results on the evaluation set: - Loss: 4.4867 - Rouge1: 0.0889 - Rouge2: 0.0071 - Rougel: 0.0709 - Rougelsum: 0.0707 - Gen Len: 19.0 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 4 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len | |:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:|:---------:|:-------:| | No log | 1.0 | 22 | 5.2260 | 0.0774 | 0.0032 | 0.0624 | 0.0624 | 19.0 | | No log | 2.0 | 44 | 4.7072 | 0.0851 | 0.0061 | 0.0683 | 0.0685 | 19.0 | | No log | 3.0 | 66 | 4.5356 | 0.0897 | 0.0066 | 0.0707 | 0.0707 | 19.0 | | No log | 4.0 | 88 | 4.4867 | 0.0889 | 0.0071 | 0.0709 | 0.0707 | 19.0 | ### Framework versions - Transformers 4.27.4 - Pytorch 2.0.0+cu118 - Datasets 2.11.0 - Tokenizers 0.13.3
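A hedged usage sketch, not in the original card: the repo id is a placeholder; since the base model is `t5-small`, the standard `summarization` pipeline applies, and the 19-token generation length reported above suggests setting `max_length` explicitly.

```python
from transformers import pipeline

# Placeholder repo id -- replace with the published fine-tuned checkpoint.
summarizer = pipeline("summarization", model="my_politics_model")

article = (
    "Lawmakers debated the new budget proposal for several hours on Tuesday, "
    "with party leaders clashing over spending on infrastructure and defence."
)
print(summarizer(article, max_length=40, min_length=10, do_sample=False)[0]["summary_text"])
```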
dccuchile/albert-tiny-spanish-finetuned-qa-mlqa
[ "pytorch", "albert", "question-answering", "transformers", "autotrain_compatible" ]
question-answering
{ "architectures": [ "AlbertForQuestionAnswering" ], "model_type": "albert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
7
2023-04-12T11:35:26Z
--- license: mit tags: - generated_from_trainer model-index: - name: codeparrot-ds results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # codeparrot-ds This model is a fine-tuned version of [gpt2](https://huggingface.co/gpt2) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 1.6795 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0005 - train_batch_size: 32 - eval_batch_size: 32 - seed: 42 - gradient_accumulation_steps: 8 - total_train_batch_size: 256 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: cosine - lr_scheduler_warmup_steps: 1000 - num_epochs: 1 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | 2.4884 | 0.94 | 5000 | 1.6795 | ### Framework versions - Transformers 4.27.4 - Pytorch 1.13.0 - Datasets 2.1.0 - Tokenizers 0.13.2
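A hedged generation sketch, not part of the card: the repo id is a placeholder; because the base model is `gpt2`, the ordinary causal text-generation pipeline applies, here prompted with a Python snippet in the CodeParrot style.

```python
from transformers import pipeline

# Placeholder repo id -- substitute the actual fine-tuned checkpoint.
generator = pipeline("text-generation", model="codeparrot-ds")

prompt = "def load_csv(path):\n    \"\"\"Read a CSV file into a list of dicts.\"\"\"\n"
completion = generator(prompt, max_new_tokens=64, do_sample=True, temperature=0.2)
print(completion[0]["generated_text"])
```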
dccuchile/albert-tiny-spanish-finetuned-xnli
[ "pytorch", "albert", "text-classification", "transformers" ]
text-classification
{ "architectures": [ "AlbertForSequenceClassification" ], "model_type": "albert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
31
null
--- license: other --- # A model of Kokura Asahi (Tsuki ni Yorisou Otome no Sahou) trained with sovits4.0 This is a document to introduce a model of Kokura Asahi, a character from the visual novel Tsuki ni Yorisou Otome no Sahou (The Manners of a Girl who Draws Near to the Moon), trained with sovits4.0, a singing voice conversion system based on DDSP-SVC. ## What is sovits4.0? Sovits4.0 is a soft voice conversion system that can convert any input voice into a target singer's voice. It is based on DDSP-SVC, a deep learning framework that uses differentiable digital signal processing (DDSP) modules to synthesize high-fidelity audio signals. Sovits4.0 can handle various languages and singing styles, and can also generate expressive effects such as vibrato and breathiness. Sovits4.0 is developed by svc-develop-team, a group of enthusiasts who love singing voice synthesis and artificial intelligence. The source code and pre-trained models are available on GitHub. Sovits4.0 also has a web interface that allows users to upload their own audio files and convert them online. ## Who is Kokura Asahi? ![Asahi](https://i.328888.xyz/2023/04/12/iXO0Ox.png) Kokura Asahi is a character from the visual novel Tsuki ni Yorisou Otome no Sahou, developed by Navel. The game is a romance adventure game that features cross-dressing and maid themes. He is a talented young man who belongs to the wealthy Kokura family, but he lives a restricted life under his family's surveillance. He disguises himself as a commoner girl and enrolls in Filia Girls' Academy, a prestigious school for fashion design. He also becomes the maid of Sakurakouji Luna, the top student of the academy and his love interest. Kokura Asahi has a beautiful and feminine appearance, with long black hair. He is skilled in various fields. He has a cheerful and positive personality, and he is loyal and devoted to Luna.
dccuchile/albert-xlarge-spanish-finetuned-mldoc
[ "pytorch", "albert", "text-classification", "transformers" ]
text-classification
{ "architectures": [ "AlbertForSequenceClassification" ], "model_type": "albert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
26
2023-04-12T11:37:26Z
--- license: creativeml-openrail-m tags: - text-to-image widget: - text: ultmtpop --- ### ultimate-pop-v2 Dreambooth model trained by wimvanhenden with [Hugging Face Dreambooth Training Space](https://huggingface.co/spaces/multimodalart/dreambooth-training) with the v1-5 base model You run your new concept via `diffusers` [Colab Notebook for Inference](https://colab.research.google.com/github/huggingface/notebooks/blob/main/diffusers/sd_dreambooth_inference.ipynb). Don't forget to use the concept prompts! Sample pictures of: ultmtpop (use that on your prompt) ![ultmtpop 0](https://huggingface.co/wimvanhenden/ultimate-pop-v2/resolve/main/concept_images/ultmtpop_%281%29.jpg)![ultmtpop 1](https://huggingface.co/wimvanhenden/ultimate-pop-v2/resolve/main/concept_images/ultmtpop_%282%29.jpg)![ultmtpop 2](https://huggingface.co/wimvanhenden/ultimate-pop-v2/resolve/main/concept_images/ultmtpop_%283%29.jpg)![ultmtpop 3](https://huggingface.co/wimvanhenden/ultimate-pop-v2/resolve/main/concept_images/ultmtpop_%284%29.jpg)![ultmtpop 4](https://huggingface.co/wimvanhenden/ultimate-pop-v2/resolve/main/concept_images/ultmtpop_%285%29.jpg)![ultmtpop 5](https://huggingface.co/wimvanhenden/ultimate-pop-v2/resolve/main/concept_images/ultmtpop_%286%29.jpg)![ultmtpop 6](https://huggingface.co/wimvanhenden/ultimate-pop-v2/resolve/main/concept_images/ultmtpop_%287%29.jpg)![ultmtpop 7](https://huggingface.co/wimvanhenden/ultimate-pop-v2/resolve/main/concept_images/ultmtpop_%288%29.jpg)![ultmtpop 8](https://huggingface.co/wimvanhenden/ultimate-pop-v2/resolve/main/concept_images/ultmtpop_%289%29.jpg)![ultmtpop 9](https://huggingface.co/wimvanhenden/ultimate-pop-v2/resolve/main/concept_images/ultmtpop_%2810%29.jpg)![ultmtpop 10](https://huggingface.co/wimvanhenden/ultimate-pop-v2/resolve/main/concept_images/ultmtpop_%2811%29.jpg)![ultmtpop 11](https://huggingface.co/wimvanhenden/ultimate-pop-v2/resolve/main/concept_images/ultmtpop_%2812%29.jpg)![ultmtpop 12](https://huggingface.co/wimvanhenden/ultimate-pop-v2/resolve/main/concept_images/ultmtpop_%2813%29.jpg)![ultmtpop 13](https://huggingface.co/wimvanhenden/ultimate-pop-v2/resolve/main/concept_images/ultmtpop_%2814%29.jpg)![ultmtpop 14](https://huggingface.co/wimvanhenden/ultimate-pop-v2/resolve/main/concept_images/ultmtpop_%2815%29.jpg)![ultmtpop 15](https://huggingface.co/wimvanhenden/ultimate-pop-v2/resolve/main/concept_images/ultmtpop_%2816%29.jpg)![ultmtpop 16](https://huggingface.co/wimvanhenden/ultimate-pop-v2/resolve/main/concept_images/ultmtpop_%2817%29.jpg)![ultmtpop 17](https://huggingface.co/wimvanhenden/ultimate-pop-v2/resolve/main/concept_images/ultmtpop_%2818%29.jpg)![ultmtpop 18](https://huggingface.co/wimvanhenden/ultimate-pop-v2/resolve/main/concept_images/ultmtpop_%2819%29.jpg)![ultmtpop 19](https://huggingface.co/wimvanhenden/ultimate-pop-v2/resolve/main/concept_images/ultmtpop_%2820%29.jpg)![ultmtpop 20](https://huggingface.co/wimvanhenden/ultimate-pop-v2/resolve/main/concept_images/ultmtpop_%2821%29.jpg)![ultmtpop 21](https://huggingface.co/wimvanhenden/ultimate-pop-v2/resolve/main/concept_images/ultmtpop_%2822%29.jpg)![ultmtpop 22](https://huggingface.co/wimvanhenden/ultimate-pop-v2/resolve/main/concept_images/ultmtpop_%2823%29.jpg)![ultmtpop 23](https://huggingface.co/wimvanhenden/ultimate-pop-v2/resolve/main/concept_images/ultmtpop_%2824%29.jpg)![ultmtpop 24](https://huggingface.co/wimvanhenden/ultimate-pop-v2/resolve/main/concept_images/ultmtpop_%2825%29.jpg)![ultmtpop 
25](https://huggingface.co/wimvanhenden/ultimate-pop-v2/resolve/main/concept_images/ultmtpop_%2826%29.jpg)![ultmtpop 26](https://huggingface.co/wimvanhenden/ultimate-pop-v2/resolve/main/concept_images/ultmtpop_%2827%29.jpg)![ultmtpop 27](https://huggingface.co/wimvanhenden/ultimate-pop-v2/resolve/main/concept_images/ultmtpop_%2828%29.jpg)![ultmtpop 28](https://huggingface.co/wimvanhenden/ultimate-pop-v2/resolve/main/concept_images/ultmtpop_%2829%29.jpg)![ultmtpop 29](https://huggingface.co/wimvanhenden/ultimate-pop-v2/resolve/main/concept_images/ultmtpop_%2830%29.jpg)![ultmtpop 30](https://huggingface.co/wimvanhenden/ultimate-pop-v2/resolve/main/concept_images/ultmtpop_%2831%29.jpg)![ultmtpop 31](https://huggingface.co/wimvanhenden/ultimate-pop-v2/resolve/main/concept_images/ultmtpop_%2832%29.jpg)![ultmtpop 32](https://huggingface.co/wimvanhenden/ultimate-pop-v2/resolve/main/concept_images/ultmtpop_%2833%29.jpg)![ultmtpop 33](https://huggingface.co/wimvanhenden/ultimate-pop-v2/resolve/main/concept_images/ultmtpop_%2834%29.jpg)
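A short `diffusers` sketch of the inference the card links to, under two assumptions: the repo hosts a full Stable Diffusion pipeline (as the Dreambooth Training Space normally pushes) and a CUDA GPU is available.

```python
import torch
from diffusers import StableDiffusionPipeline

# Assumes the DreamBooth training space pushed a complete pipeline to this repo.
pipe = StableDiffusionPipeline.from_pretrained(
    "wimvanhenden/ultimate-pop-v2", torch_dtype=torch.float16
).to("cuda")

# Remember to include the concept token "ultmtpop" in the prompt.
image = pipe("a colourful album cover, ultmtpop style", num_inference_steps=30).images[0]
image.save("ultmtpop_sample.png")
```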
dccuchile/albert-xlarge-spanish-finetuned-ner
[ "pytorch", "albert", "token-classification", "transformers", "autotrain_compatible" ]
token-classification
{ "architectures": [ "AlbertForTokenClassification" ], "model_type": "albert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
5
2023-04-12T11:37:36Z
--- license: apache-2.0 tags: - generated_from_trainer metrics: - rouge model-index: - name: my_business_model results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # my_business_model This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on the None dataset. It achieves the following results on the evaluation set: - Loss: 4.2830 - Rouge1: 0.0763 - Rouge2: 0.0059 - Rougel: 0.0651 - Rougelsum: 0.0654 - Gen Len: 19.0 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 4 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len | |:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:|:---------:|:-------:| | No log | 1.0 | 26 | 4.8465 | 0.0716 | 0.0056 | 0.0587 | 0.0584 | 19.0 | | No log | 2.0 | 52 | 4.4580 | 0.0765 | 0.0059 | 0.0637 | 0.0638 | 19.0 | | No log | 3.0 | 78 | 4.3173 | 0.0779 | 0.0059 | 0.0655 | 0.0658 | 19.0 | | No log | 4.0 | 104 | 4.2830 | 0.0763 | 0.0059 | 0.0651 | 0.0654 | 19.0 | ### Framework versions - Transformers 4.27.4 - Pytorch 2.0.0+cu118 - Datasets 2.11.0 - Tokenizers 0.13.3
dccuchile/albert-xlarge-spanish-finetuned-qa-mlqa
[ "pytorch", "albert", "question-answering", "transformers", "autotrain_compatible" ]
question-answering
{ "architectures": [ "AlbertForQuestionAnswering" ], "model_type": "albert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
7
2023-04-12T11:43:03Z
--- license: mit tags: - vision - image-classification metrics: - accuracy widget: - src: >- https://huggingface.co/datasets/fufufukakaka/autotrain-data-pokemon-image-classification-2/resolve/main/raw/image_folders/auto/imgs/%E3%83%8F%E3%83%83%E3%82%B5%E3%83%A0/0.png example_title: ハッサム - src: >- https://huggingface.co/datasets/fufufukakaka/autotrain-data-pokemon-image-classification-2/resolve/main/raw/image_folders/auto/imgs/%E3%83%86%E3%83%84%E3%83%8E%E3%83%84%E3%83%84%E3%83%9F/0.png example_title: テツノツツミ - src: >- https://huggingface.co/datasets/fufufukakaka/autotrain-data-pokemon-image-classification-2/resolve/main/raw/image_folders/auto/imgs/%E3%83%8C%E3%83%A1%E3%83%AB%E3%82%B4%E3%83%B3/3.png example_title: ヌメルゴン --- This model takes an image from the Pokémon team-selection screen as input and identifies which Pokémon it shows. It is fine-tuned from `microsoft/swin-base-patch4-window7-224-in22k`. Repo: https://github.com/fufufukakaka/poke_battle_logger
dccuchile/albert-xxlarge-spanish-finetuned-mldoc
[ "pytorch", "albert", "text-classification", "transformers" ]
text-classification
{ "architectures": [ "AlbertForSequenceClassification" ], "model_type": "albert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
26
2023-04-12T11:45:22Z
--- license: apache-2.0 tags: - generated_from_keras_callback model-index: - name: BhaskarWary/my_awesome_qa_model results: [] --- <!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # BhaskarWary/my_awesome_qa_model This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset. It achieves the following results on the evaluation set: - Train Loss: 1.6230 - Validation Loss: 1.7181 - Epoch: 2 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': None, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': True, 'is_legacy_optimizer': False, 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 500, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False} - training_precision: float32 ### Training results | Train Loss | Validation Loss | Epoch | |:----------:|:---------------:|:-----:| | 3.5628 | 2.2555 | 0 | | 1.8928 | 1.7181 | 1 | | 1.6230 | 1.7181 | 2 | ### Framework versions - Transformers 4.27.4 - TensorFlow 2.12.0 - Datasets 2.11.0 - Tokenizers 0.13.3
dccuchile/albert-xxlarge-spanish-finetuned-pawsx
[ "pytorch", "albert", "text-classification", "transformers" ]
text-classification
{ "architectures": [ "AlbertForSequenceClassification" ], "model_type": "albert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
26
2023-04-12T11:55:36Z
--- license: apache-2.0 tags: - generated_from_keras_callback model-index: - name: Jyoti125/my_awesome_qa_model results: [] --- <!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # Jyoti125/my_awesome_qa_model This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset. It achieves the following results on the evaluation set: - Train Loss: 1.6933 - Validation Loss: 1.8314 - Epoch: 2 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': None, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': True, 'is_legacy_optimizer': False, 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 500, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False} - training_precision: float32 ### Training results | Train Loss | Validation Loss | Epoch | |:----------:|:---------------:|:-----:| | 1.7031 | 1.8314 | 0 | | 1.6948 | 1.8314 | 1 | | 1.6933 | 1.8314 | 2 | ### Framework versions - Transformers 4.27.4 - TensorFlow 2.12.0 - Datasets 2.11.0 - Tokenizers 0.13.3
dccuchile/albert-xxlarge-spanish-finetuned-pos
[ "pytorch", "albert", "token-classification", "transformers", "autotrain_compatible" ]
token-classification
{ "architectures": [ "AlbertForTokenClassification" ], "model_type": "albert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
3
2023-04-12T11:55:42Z
--- license: mit tags: - generated_from_trainer model-index: - name: Covid_Misinformation_Model results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # Covid_Misinformation_Model This model is a fine-tuned version of [spencer-gable-cook/COVID-19_Misinformation_Detector](https://huggingface.co/spencer-gable-cook/COVID-19_Misinformation_Detector) on the None dataset. It achieves the following results on the evaluation set: - Loss: 0.1213 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 5 ### Training results ### Framework versions - Transformers 4.27.4 - Pytorch 1.9.0+cu111 - Datasets 2.11.0 - Tokenizers 0.12.1
dccuchile/albert-xxlarge-spanish-finetuned-qa-mlqa
[ "pytorch", "albert", "question-answering", "transformers", "autotrain_compatible" ]
question-answering
{ "architectures": [ "AlbertForQuestionAnswering" ], "model_type": "albert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
7
2023-04-12T11:56:19Z
--- license: apache-2.0 tags: - generated_from_keras_callback model-index: - name: NeelRP/my_awesome_qa_model results: [] --- <!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # NeelRP/my_awesome_qa_model This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset. It achieves the following results on the evaluation set: - Train Loss: 1.7735 - Validation Loss: 1.6265 - Epoch: 1 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: {'name': 'Adam', 'weight_decay': None, 'clipnorm': None, 'global_clipnorm': None, 'clipvalue': None, 'use_ema': False, 'ema_momentum': 0.99, 'ema_overwrite_frequency': None, 'jit_compile': True, 'is_legacy_optimizer': False, 'learning_rate': {'class_name': 'PolynomialDecay', 'config': {'initial_learning_rate': 2e-05, 'decay_steps': 500, 'end_learning_rate': 0.0, 'power': 1.0, 'cycle': False, 'name': None}}, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-08, 'amsgrad': False} - training_precision: float32 ### Training results | Train Loss | Validation Loss | Epoch | |:----------:|:---------------:|:-----:| | 3.4756 | 1.9440 | 0 | | 1.7735 | 1.6265 | 1 | ### Framework versions - Transformers 4.27.4 - TensorFlow 2.12.0 - Datasets 2.11.0 - Tokenizers 0.13.3
dccuchile/albert-base-spanish
[ "pytorch", "tf", "albert", "pretraining", "es", "dataset:large_spanish_corpus", "transformers", "spanish", "OpenCENIA" ]
null
{ "architectures": [ "AlbertForPreTraining" ], "model_type": "albert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
586
2023-04-12T11:56:49Z
--- license: apache-2.0 tags: - generated_from_trainer datasets: - squad model-index: - name: my_awesome_qa_model results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # my_awesome_qa_model This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the squad dataset. It achieves the following results on the evaluation set: - Loss: 1.7504 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 3 ### Training results | Training Loss | Epoch | Step | Validation Loss | |:-------------:|:-----:|:----:|:---------------:| | No log | 1.0 | 250 | 2.3494 | | 2.7849 | 2.0 | 500 | 1.8369 | | 2.7849 | 3.0 | 750 | 1.7504 | ### Framework versions - Transformers 4.27.4 - Pytorch 2.0.0+cu118 - Datasets 2.11.0 - Tokenizers 0.13.3
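A minimal question-answering sketch, not from the original card: the repo id is a placeholder; DistilBERT QA checkpoints work with the standard `question-answering` pipeline.

```python
from transformers import pipeline

# Placeholder repo id -- replace with the published fine-tuned checkpoint.
qa = pipeline("question-answering", model="my_awesome_qa_model")

context = (
    "The Amazon rainforest covers much of the Amazon basin of South America. "
    "The basin encompasses roughly seven million square kilometres."
)
result = qa(question="How large is the Amazon basin?", context=context)
print(result["answer"], round(result["score"], 3))
```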
dccuchile/albert-large-spanish
[ "pytorch", "tf", "albert", "pretraining", "es", "dataset:large_spanish_corpus", "transformers", "spanish", "OpenCENIA" ]
null
{ "architectures": [ "AlbertForPreTraining" ], "model_type": "albert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
75
null
--- license: apache-2.0 tags: - generated_from_trainer metrics: - rouge model-index: - name: my_tech_model results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # my_tech_model This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on the None dataset. It achieves the following results on the evaluation set: - Loss: 4.6813 - Rouge1: 0.0798 - Rouge2: 0.0056 - Rougel: 0.0617 - Rougelsum: 0.0619 - Gen Len: 19.0 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 4 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len | |:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:|:---------:|:-------:| | No log | 1.0 | 21 | 5.3232 | 0.0773 | 0.0055 | 0.0634 | 0.0637 | 19.0 | | No log | 2.0 | 42 | 4.9006 | 0.0765 | 0.005 | 0.0611 | 0.0613 | 19.0 | | No log | 3.0 | 63 | 4.7285 | 0.0775 | 0.0055 | 0.0615 | 0.0617 | 19.0 | | No log | 4.0 | 84 | 4.6813 | 0.0798 | 0.0056 | 0.0617 | 0.0619 | 19.0 | ### Framework versions - Transformers 4.27.4 - Pytorch 2.0.0+cu118 - Datasets 2.11.0 - Tokenizers 0.13.3
dccuchile/bert-base-spanish-wwm-cased-finetuned-ner
[ "pytorch", "bert", "token-classification", "transformers", "autotrain_compatible" ]
token-classification
{ "architectures": [ "BertForTokenClassification" ], "model_type": "bert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
81
null
--- license: apache-2.0 tags: - generated_from_trainer metrics: - rouge model-index: - name: my_entertainment_model results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # my_entertainment_model This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on the None dataset. It achieves the following results on the evaluation set: - Loss: 4.6595 - Rouge1: 0.0747 - Rouge2: 0.0041 - Rougel: 0.0608 - Rougelsum: 0.0605 - Gen Len: 19.0 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 4 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len | |:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:|:---------:|:-------:| | No log | 1.0 | 20 | 5.3975 | 0.0715 | 0.0037 | 0.0578 | 0.0577 | 19.0 | | No log | 2.0 | 40 | 4.8739 | 0.0742 | 0.0037 | 0.0611 | 0.0609 | 19.0 | | No log | 3.0 | 60 | 4.7067 | 0.0736 | 0.0041 | 0.0607 | 0.0605 | 19.0 | | No log | 4.0 | 80 | 4.6595 | 0.0747 | 0.0041 | 0.0608 | 0.0605 | 19.0 | ### Framework versions - Transformers 4.27.4 - Pytorch 2.0.0+cu118 - Datasets 2.11.0 - Tokenizers 0.13.3
dccuchile/bert-base-spanish-wwm-cased-finetuned-pawsx
[ "pytorch", "bert", "text-classification", "transformers" ]
text-classification
{ "architectures": [ "BertForSequenceClassification" ], "model_type": "bert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
25
2023-04-12T12:05:14Z
--- license: creativeml-openrail-m base_model: runwayml/stable-diffusion-v1-5 instance_prompt: a photo of sks dog in a bucket tags: - stable-diffusion - stable-diffusion-ppdiffusers - text-to-image - ppdiffusers - lora inference: false --- # LoRA DreamBooth - kwange/lora_sks_dogs The LoRA weights in this repository were trained on top of runwayml/stable-diffusion-v1-5 using the [DreamBooth](https://dreambooth.github.io/) technique and the prompt "a photo of sks dog in a bucket". Below are some images generated during training. ![img_0](validation_images/500.png) ![img_0](validation_images/400.png) ![img_0](validation_images/300.png) ![img_0](validation_images/200.png)
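The original card shows only sample images. As a rough illustration of how LoRA DreamBooth weights like these are usually applied at inference time, here is a sketch that uses the PyTorch `diffusers` library rather than the `ppdiffusers` stack the repo was trained with; the base model, repo id, and prompt come from the card, but compatibility of these particular weights with `diffusers`' generic LoRA loader is an assumption.

```python
import torch
from diffusers import StableDiffusionPipeline

# Base model and instance prompt are taken from the card above.
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

# Assumption: the LoRA weights in kwange/lora_sks_dogs are readable by diffusers' LoRA loader.
pipe.load_lora_weights("kwange/lora_sks_dogs")

image = pipe("a photo of sks dog in a bucket", num_inference_steps=30).images[0]
image.save("sks_dog_in_a_bucket.png")
```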
dccuchile/bert-base-spanish-wwm-cased-finetuned-pos
[ "pytorch", "bert", "token-classification", "transformers", "autotrain_compatible" ]
token-classification
{ "architectures": [ "BertForTokenClassification" ], "model_type": "bert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
1
2023-04-12T12:08:13Z
--- license: apache-2.0 tags: - generated_from_trainer metrics: - rouge model-index: - name: my_sport_model results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # my_sport_model This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on the None dataset. It achieves the following results on the evaluation set: - Loss: 4.4430 - Rouge1: 0.0574 - Rouge2: 0.0047 - Rougel: 0.0476 - Rougelsum: 0.0475 - Gen Len: 19.0 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - num_epochs: 4 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Rouge1 | Rouge2 | Rougel | Rougelsum | Gen Len | |:-------------:|:-----:|:----:|:---------------:|:------:|:------:|:------:|:---------:|:-------:| | No log | 1.0 | 26 | 5.1010 | 0.0562 | 0.0032 | 0.0452 | 0.0452 | 19.0 | | No log | 2.0 | 52 | 4.6487 | 0.0581 | 0.0048 | 0.0455 | 0.0456 | 19.0 | | No log | 3.0 | 78 | 4.4856 | 0.057 | 0.0042 | 0.0472 | 0.0471 | 19.0 | | No log | 4.0 | 104 | 4.4430 | 0.0574 | 0.0047 | 0.0476 | 0.0475 | 19.0 | ### Framework versions - Transformers 4.27.4 - Pytorch 2.0.0+cu118 - Datasets 2.11.0 - Tokenizers 0.13.3
dccuchile/distilbert-base-spanish-uncased-finetuned-mldoc
[ "pytorch", "distilbert", "text-classification", "transformers" ]
text-classification
{ "architectures": [ "DistilBertForSequenceClassification" ], "model_type": "distilbert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
27
2023-04-12T12:20:27Z
--- license: apache-2.0 language: - en pipeline_tag: text2text-generation tags: - alpaca - llama - chat - gpt4 --- This repository comes with LoRA checkpoint to make LLaMA into a chatbot like language model. The checkpoint is the output of instruction following fine-tuning process with the following settings on 8xA100(40G) DGX system. - Training script: borrowed from the official [Alpaca-LoRA](https://github.com/tloen/alpaca-lora) implementation - Training script: ```shell python finetune.py \ --base_model='decapoda-research/llama-13b-hf' \ --data_path='alpaca_data_gpt4.json' \ --num_epochs=10 \ --cutoff_len=512 \ --group_by_length \ --output_dir='./gpt4-alpaca-lora-13b' \ --lora_target_modules='[q_proj,k_proj,v_proj,o_proj]' \ --lora_r=16 \ --batch_size=... \ --micro_batch_size=... ``` You can find how the training went from W&B report [here](https://wandb.ai/chansung18/gpt4_alpaca_lora/runs/izjeebh1?workspace=user-chansung18). ## Example output 1. Example #1 ``` Instruction: List all Canadian provinces in alphabetical order. Response: List of Canadian provinces in alphabetical order: Alberta, British Columbia, Manitoba, New Brunswick, Newfoundland and Labrador, Northwest Territories, Nova Scotia, Nunavut, Ontario, Prince Edward Island, Quebec, Saskatchewan. Instruction: Which ones are on the east side? Response: Eastern provinces of Canada are: Newfoundland and Labrador, Nova Scotia, Prince Edward Island, New Brunswick. Instruction: What foods are famous in each province on the east side? Response: Newfoundland and Labrador: Jiggs Dinner, Screech, Partridgeberry Wine, Flipper Pie, Nanny’s Cookies. Nova Scotia: Donair, Pizza Burger, Apple Pie, Halifax Donair. New Brunswick: Fried Dough (Malpeque Doughnuts), Poutine, Acadian Foods (Cote-de-beouf, potato chips, cabbage rolls, pickled beets). Prince Edward Island: Coleslaw, Lobster, Potato Bread, Blueberry Pies, Butter Tarts. Instruction: What about sightseeing? or landmarks? list one per province Response: Newfoundland and Labrador: Cabot Tower (St. John’s), Terra Nova National Park, Gros Morne National Park, Signal Hill. Nova Scotia: Cabot Trail, Peggy’s Cove, Lighthouses, Halifax Waterfront. New Brunswick: Fundy National Park, Bay of Fundy Tides, Loyalist House, Hopewell Rocks. Prince Edward Island: Green Gables Shore, Cavendish National Park, Anne of Green Gables House. ``` 2. Example #2 ``` Instruction: Tell me about alpacas. Response: Alpacas are a type of llama that live in the Andes mountains in South America. They are native to Peru, Chile, Bolivia, and Ecuador, where they are valued for their fiber. Alpacas are related to camels and they are cousins of the llama. Alpacas have woolly fur that comes in many colors and they are very gentle creatures. They are considered the smaller cousins of the llama and they are about the size of a medium dog. Alpacas have long necks and long, lush fur around their necks, which is known as their “ruff.” They also have long lashes that help keep dust and debris from their eyes. Alpacas have hooves that help them move easily through the rocky terrain of the Andes mountains. They make a soft “humbing” sound when they are content. Alpacas live in herds, and females are called “huacas,” and males are called “chancos.” Alpacas can live for up to 20 years. Alpacas can come in a variety of colors, including white, black, brown, Instruction: What other animals are living in the same area? Response: Alpacas live in the Andes mountains, and in that region there are many other animals. 
There are condors, vicunas, pumas, rheas, foxes, chinchillas, and more. Instruction: Are they the same species? Response: Alpacas and llamas are both members of the camelid family, but there are differences. Alpacas have longer necks, smaller ears, and shorter, thicker wool. Llamas are larger and their wool is coarser. ``` 3. Example #3 ``` Instruction: Tell me about the king of France in 2019. Response: In 2019, King of France is not applicable, as France does not have a monarch. It has been a republic since the 1790s and its constitution prohibits the monarchy from being reinstated. The President of France is currently Emmanuel Macron. Instruction: What about before him? Response: Before Emmanuel Macron, there were 5 other presidents: Nicolas Sarkozy, Jacques Chirac, François Mitterrand, Valéry Giscard d’Estaing, and Georges Pompidou. ``` 4. Example #4 ``` Instruction: Write a Python program that prints the first 10 Fibonacci numbers. Response: def fibonacci_numbers(): a, b = 0, 1 for i in range(10): a, b = b, a + b print(a, b) fibonacci_numbers() Output: 0 1 1 2 3 5 8 13 21 34 Instruction: Could you explain how the code works? Response: First, the function fibonacci_numbers() is defined. The range function creates a range of numbers, in this case between 0 and 10. Inside the loop, we initialize two variables a and b to 0 and 1 respectively. a, b represent the previous and current Fibonacci numbers respectively. For each number in the range, we add the previous Fibonacci number b to the current one a. At the end of the loop, the a and b variables will hold the Fibonacci numbers from 0 to 10. This is then printed on the screen. Instruction: What is recursion? Response: Recursion is a way of defining a method in which the method calls itself until a certain condition is met. In Python, recursion is used when there is a task that can be broken into smaller tasks that can be done recursively. A recursive function can be defined as any function that calls itself in the body of the function. ```
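The card describes the training command and sample outputs but not how to load the adapter. Below is a minimal sketch assuming the standard `peft` workflow for LoRA checkpoints: the base model id comes from the training command above, while the adapter repo id and the Alpaca-style prompt template are assumptions rather than something the card states.

```python
import torch
from peft import PeftModel
from transformers import LlamaForCausalLM, LlamaTokenizer

base_model = "decapoda-research/llama-13b-hf"   # base model named in the training command
adapter_id = "chansung/gpt4-alpaca-lora-13b"    # assumed (hypothetical) repo id for this LoRA checkpoint

tokenizer = LlamaTokenizer.from_pretrained(base_model)
model = LlamaForCausalLM.from_pretrained(base_model, torch_dtype=torch.float16, device_map="auto")
model = PeftModel.from_pretrained(model, adapter_id)  # attach the LoRA adapter on top of the base weights

# Assumed Alpaca-style prompt template.
prompt = "### Instruction:\nList all Canadian provinces in alphabetical order.\n\n### Response:\n"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
with torch.no_grad():
    output = model.generate(**inputs, max_new_tokens=128, do_sample=True, top_p=0.9)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```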
dccuchile/distilbert-base-spanish-uncased-finetuned-pos
[ "pytorch", "distilbert", "token-classification", "transformers", "autotrain_compatible" ]
token-classification
{ "architectures": [ "DistilBertForTokenClassification" ], "model_type": "distilbert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
3
null
--- pipeline_tag: sentence-similarity tags: - sentence-transformers - feature-extraction - sentence-similarity - transformers --- # gszabo/sent_bert2 This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 384 dimensional dense vector space and can be used for tasks like clustering or semantic search. <!--- Describe your model here --> ## Usage (Sentence-Transformers) Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed: ``` pip install -U sentence-transformers ``` Then you can use the model like this: ```python from sentence_transformers import SentenceTransformer sentences = ["This is an example sentence", "Each sentence is converted"] model = SentenceTransformer('gszabo/sent_bert2') embeddings = model.encode(sentences) print(embeddings) ``` ## Usage (HuggingFace Transformers) Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings. ```python from transformers import AutoTokenizer, AutoModel import torch #Mean Pooling - Take attention mask into account for correct averaging def mean_pooling(model_output, attention_mask): token_embeddings = model_output[0] #First element of model_output contains all token embeddings input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float() return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9) # Sentences we want sentence embeddings for sentences = ['This is an example sentence', 'Each sentence is converted'] # Load model from HuggingFace Hub tokenizer = AutoTokenizer.from_pretrained('gszabo/sent_bert2') model = AutoModel.from_pretrained('gszabo/sent_bert2') # Tokenize sentences encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt') # Compute token embeddings with torch.no_grad(): model_output = model(**encoded_input) # Perform pooling. In this case, mean pooling. 
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask']) print("Sentence embeddings:") print(sentence_embeddings) ``` ## Evaluation Results <!--- Describe how your model was evaluated --> For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=gszabo/sent_bert2) ## Training The model was trained with the parameters: **DataLoader**: `torch.utils.data.dataloader.DataLoader` of length 939 with parameters: ``` {'batch_size': 16, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'} ``` **Loss**: `sentence_transformers.losses.MegaBatchMarginLoss.MegaBatchMarginLoss` Parameters of the fit()-Method: ``` { "epochs": 1, "evaluation_steps": 0, "evaluator": "NoneType", "max_grad_norm": 1, "optimizer_class": "<class 'torch.optim.adamw.AdamW'>", "optimizer_params": { "lr": 2e-05 }, "scheduler": "WarmupLinear", "steps_per_epoch": null, "warmup_steps": 10000, "weight_decay": 0.01 } ``` ## Full Model Architecture ``` SentenceTransformer( (0): Transformer({'max_seq_length': 128, 'do_lower_case': False}) with Transformer model: BertModel (1): Pooling({'word_embedding_dimension': 384, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False}) ) ``` ## Citing & Authors <!--- Describe where people can find more information -->
dccuchile/distilbert-base-spanish-uncased-finetuned-xnli
[ "pytorch", "distilbert", "text-classification", "transformers" ]
text-classification
{ "architectures": [ "DistilBertForSequenceClassification" ], "model_type": "distilbert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
31
2023-04-12T12:36:28Z
--- license: mit ---

A demo BERT classification model trained on (part of) the Yelp dataset.

Photo2Text model: ydshieh/vit-gpt2-coco-en

Expected / standard input:
```
[CLS] Business Name [SEP] Address [SEP] City [SEP] Photo2Text Outputs ...
```
Example:
```
[CLS] Paws The Cat Cafe [SEP] 10588 109 Street [SEP] Edmonton [SEP] A cup of coffee
```
Expected output: 5
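None of the following appears in the original card: it is a hedged sketch of how the expected input could be assembled and scored, assuming the checkpoint is a standard BERT sequence classifier. The repo id is a placeholder, and building the [SEP]-joined string this way is an illustration, not the author's documented preprocessing.

```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

repo_id = "your-username/yelp-demo-bert"  # placeholder: the card does not name the repo

tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForSequenceClassification.from_pretrained(repo_id)

# Join the fields with the tokenizer's separator, mirroring the
# "[CLS] name [SEP] address [SEP] city [SEP] photo caption" layout from the card.
fields = ["Paws The Cat Cafe", "10588 109 Street", "Edmonton", "A cup of coffee"]
text = f" {tokenizer.sep_token} ".join(fields)

inputs = tokenizer(text, return_tensors="pt", truncation=True)
with torch.no_grad():
    logits = model(**inputs).logits
print("Predicted class index (e.g. star rating):", int(logits.argmax(dim=-1)))
```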
dccuchile/distilbert-base-spanish-uncased
[ "pytorch", "distilbert", "fill-mask", "es", "dataset:large_spanish_corpus", "transformers", "spanish", "OpenCENIA", "autotrain_compatible" ]
fill-mask
{ "architectures": [ "DistilBertForMaskedLM" ], "model_type": "distilbert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
670
2023-04-12T12:36:37Z
--- license: other ---

# Disclaimer

The models in this folder are not made by me, and the copyright belongs to the original authors (see http://www.civitai.com for details on the copyright of each model). I uploaded them to this folder only for the convenience of fetching the resources online, not for profit.

# List of Models

All the models in this folder are detailed in the table below.

| Model Name | Civitai Page Link | Civitai Download Link |
|----------------------|--------------------|--------------------|
| samdoesartsSamYang_offset.safetensors | https://civitai.com/models/6638 | https://civitai.com/api/download/models/7804 |
| samdoesartsSamYang_original.safetensors | expired | https://civitai.com/api/download/models/10864 |
| hipoly3DModelLora_v20.safetensors | https://civitai.com/models/8730?modelVersionId=44566 | https://civitai.com/api/download/models/44566 |
| hipoly3DModelLora_v10.safetensors | https://civitai.com/models/8730?modelVersionId=10301 | https://civitai.com/api/download/models/10301 |
| Zheng.safetensors | https://civitai.com/models/11034?modelVersionId=39348 | https://civitai.com/api/download/models/39348 |

Note 1: the trigger word for the samdoesartsSamYang model is: sam yang
Note 2: the trigger word for the hipoly3DModelLora_v10 model is: hiqcgbody

<img src="https://raw.githubusercontent.com/hanafuusen/images/main/samdoesartsSamYang_civitai.jpg" width="" height="">
<img src="https://raw.githubusercontent.com/hanafuusen/images/main/hipoly3DModelLora_v10_civitai.jpg" width="" height="">
Chaewon/mnmt_decoder_en
[ "pytorch", "gpt2", "text-generation", "transformers" ]
text-generation
{ "architectures": [ "GPT2LMHeadModel" ], "model_type": "gpt2", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": true, "max_length": 50 }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
8
null
--- tags: - autotrain - text-classification language: - it widget: - text: "I love AutoTrain 🤗" datasets: - davanstrien/autotrain-data-cultural_heritage_metadata_accuracy co2_eq_emissions: emissions: 5.8685242878202715 --- # Model Trained Using AutoTrain - Problem type: Multi-class Classification - Model ID: 48840118263 - CO2 Emissions (in grams): 5.8685 ## Validation Metrics - Loss: 0.097 - Accuracy: 0.967 - Macro F1: 0.967 - Micro F1: 0.967 - Weighted F1: 0.967 - Macro Precision: 0.966 - Micro Precision: 0.967 - Weighted Precision: 0.968 - Macro Recall: 0.968 - Micro Recall: 0.967 - Weighted Recall: 0.967 ## Usage You can use cURL to access this model: ``` $ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoTrain"}' https://api-inference.huggingface.co/models/davanstrien/autotrain-cultural_heritage_metadata_accuracy-48840118263 ``` Or Python API: ``` from transformers import AutoModelForSequenceClassification, AutoTokenizer model = AutoModelForSequenceClassification.from_pretrained("davanstrien/autotrain-cultural_heritage_metadata_accuracy-48840118263", use_auth_token=True) tokenizer = AutoTokenizer.from_pretrained("davanstrien/autotrain-cultural_heritage_metadata_accuracy-48840118263", use_auth_token=True) inputs = tokenizer("I love AutoTrain", return_tensors="pt") outputs = model(**inputs) ```
Chaewon/mnmt_decoder_en_gpt2
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
null
--- tags: - autotrain - text-classification language: - it widget: - text: "I love AutoTrain 🤗" datasets: - davanstrien/autotrain-data-cultural_heritage_metadata_accuracy co2_eq_emissions: emissions: 5.946872411841352 --- # Model Trained Using AutoTrain - Problem type: Multi-class Classification - Model ID: 48840118264 - CO2 Emissions (in grams): 5.9469 ## Validation Metrics - Loss: 0.088 - Accuracy: 0.970 - Macro F1: 0.969 - Micro F1: 0.970 - Weighted F1: 0.970 - Macro Precision: 0.969 - Micro Precision: 0.970 - Weighted Precision: 0.970 - Macro Recall: 0.969 - Micro Recall: 0.970 - Weighted Recall: 0.970 ## Usage You can use cURL to access this model: ``` $ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoTrain"}' https://api-inference.huggingface.co/models/davanstrien/autotrain-cultural_heritage_metadata_accuracy-48840118264 ``` Or Python API: ``` from transformers import AutoModelForSequenceClassification, AutoTokenizer model = AutoModelForSequenceClassification.from_pretrained("davanstrien/autotrain-cultural_heritage_metadata_accuracy-48840118264", use_auth_token=True) tokenizer = AutoTokenizer.from_pretrained("davanstrien/autotrain-cultural_heritage_metadata_accuracy-48840118264", use_auth_token=True) inputs = tokenizer("I love AutoTrain", return_tensors="pt") outputs = model(**inputs) ```
Chaima/TunBerto
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
2023-04-12T12:45:05Z
--- tags: - autotrain - text-classification language: - it widget: - text: "I love AutoTrain 🤗" datasets: - davanstrien/autotrain-data-cultural_heritage_metadata_accuracy co2_eq_emissions: emissions: 5.866543555771449 --- # Model Trained Using AutoTrain - Problem type: Multi-class Classification - Model ID: 48840118265 - CO2 Emissions (in grams): 5.8665 ## Validation Metrics - Loss: 0.092 - Accuracy: 0.967 - Macro F1: 0.967 - Micro F1: 0.967 - Weighted F1: 0.967 - Macro Precision: 0.968 - Micro Precision: 0.967 - Weighted Precision: 0.967 - Macro Recall: 0.965 - Micro Recall: 0.967 - Weighted Recall: 0.967 ## Usage You can use cURL to access this model: ``` $ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoTrain"}' https://api-inference.huggingface.co/models/davanstrien/autotrain-cultural_heritage_metadata_accuracy-48840118265 ``` Or Python API: ``` from transformers import AutoModelForSequenceClassification, AutoTokenizer model = AutoModelForSequenceClassification.from_pretrained("davanstrien/autotrain-cultural_heritage_metadata_accuracy-48840118265", use_auth_token=True) tokenizer = AutoTokenizer.from_pretrained("davanstrien/autotrain-cultural_heritage_metadata_accuracy-48840118265", use_auth_token=True) inputs = tokenizer("I love AutoTrain", return_tensors="pt") outputs = model(**inputs) ```
chainyo/speaker-recognition-meetup
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
1
null
--- tags: - autotrain - text-classification language: - it widget: - text: "I love AutoTrain 🤗" datasets: - davanstrien/autotrain-data-cultural_heritage_metadata_accuracy co2_eq_emissions: emissions: 5.898764047890292 --- # Model Trained Using AutoTrain - Problem type: Multi-class Classification - Model ID: 48840118266 - CO2 Emissions (in grams): 5.8988 ## Validation Metrics - Loss: 0.097 - Accuracy: 0.966 - Macro F1: 0.965 - Micro F1: 0.966 - Weighted F1: 0.966 - Macro Precision: 0.966 - Micro Precision: 0.966 - Weighted Precision: 0.966 - Macro Recall: 0.965 - Micro Recall: 0.966 - Weighted Recall: 0.966 ## Usage You can use cURL to access this model: ``` $ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoTrain"}' https://api-inference.huggingface.co/models/davanstrien/autotrain-cultural_heritage_metadata_accuracy-48840118266 ``` Or Python API: ``` from transformers import AutoModelForSequenceClassification, AutoTokenizer model = AutoModelForSequenceClassification.from_pretrained("davanstrien/autotrain-cultural_heritage_metadata_accuracy-48840118266", use_auth_token=True) tokenizer = AutoTokenizer.from_pretrained("davanstrien/autotrain-cultural_heritage_metadata_accuracy-48840118266", use_auth_token=True) inputs = tokenizer("I love AutoTrain", return_tensors="pt") outputs = model(**inputs) ```
Chakita/Friends
[ "pytorch", "gpt2", "text-generation", "transformers", "conversational" ]
conversational
{ "architectures": [ "GPT2LMHeadModel" ], "model_type": "gpt2", "task_specific_params": { "conversational": { "max_length": 1000 }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
8
null
--- tags: - autotrain - text-classification language: - it widget: - text: "I love AutoTrain 🤗" datasets: - davanstrien/autotrain-data-cultural_heritage_metadata_accuracy co2_eq_emissions: emissions: 2.8305450358664017 --- # Model Trained Using AutoTrain - Problem type: Multi-class Classification - Model ID: 48840118268 - CO2 Emissions (in grams): 2.8305 ## Validation Metrics - Loss: 0.103 - Accuracy: 0.965 - Macro F1: 0.964 - Micro F1: 0.965 - Weighted F1: 0.965 - Macro Precision: 0.964 - Micro Precision: 0.965 - Weighted Precision: 0.965 - Macro Recall: 0.965 - Micro Recall: 0.965 - Weighted Recall: 0.965 ## Usage You can use cURL to access this model: ``` $ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoTrain"}' https://api-inference.huggingface.co/models/davanstrien/autotrain-cultural_heritage_metadata_accuracy-48840118268 ``` Or Python API: ``` from transformers import AutoModelForSequenceClassification, AutoTokenizer model = AutoModelForSequenceClassification.from_pretrained("davanstrien/autotrain-cultural_heritage_metadata_accuracy-48840118268", use_auth_token=True) tokenizer = AutoTokenizer.from_pretrained("davanstrien/autotrain-cultural_heritage_metadata_accuracy-48840118268", use_auth_token=True) inputs = tokenizer("I love AutoTrain", return_tensors="pt") outputs = model(**inputs) ```
Chakita/KNUBert
[ "pytorch", "tensorboard", "roberta", "fill-mask", "transformers", "autotrain_compatible" ]
fill-mask
{ "architectures": [ "RobertaForMaskedLM" ], "model_type": "roberta", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
20
null
--- tags: - autotrain - text-classification language: - it widget: - text: "I love AutoTrain 🤗" datasets: - davanstrien/autotrain-data-cultural_heritage_metadata_accuracy co2_eq_emissions: emissions: 4.745997821805124 --- # Model Trained Using AutoTrain - Problem type: Multi-class Classification - Model ID: 48840118269 - CO2 Emissions (in grams): 4.7460 ## Validation Metrics - Loss: 0.103 - Accuracy: 0.966 - Macro F1: 0.965 - Micro F1: 0.966 - Weighted F1: 0.966 - Macro Precision: 0.965 - Micro Precision: 0.966 - Weighted Precision: 0.966 - Macro Recall: 0.965 - Micro Recall: 0.966 - Weighted Recall: 0.966 ## Usage You can use cURL to access this model: ``` $ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoTrain"}' https://api-inference.huggingface.co/models/davanstrien/autotrain-cultural_heritage_metadata_accuracy-48840118269 ``` Or Python API: ``` from transformers import AutoModelForSequenceClassification, AutoTokenizer model = AutoModelForSequenceClassification.from_pretrained("davanstrien/autotrain-cultural_heritage_metadata_accuracy-48840118269", use_auth_token=True) tokenizer = AutoTokenizer.from_pretrained("davanstrien/autotrain-cultural_heritage_metadata_accuracy-48840118269", use_auth_token=True) inputs = tokenizer("I love AutoTrain", return_tensors="pt") outputs = model(**inputs) ```
Chakita/KROBERT
[ "pytorch", "roberta", "fill-mask", "transformers", "masked-lm", "fill-in-the-blanks", "autotrain_compatible" ]
fill-mask
{ "architectures": [ "RobertaForMaskedLM" ], "model_type": "roberta", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
7
null
--- tags: - autotrain - text-classification language: - it widget: - text: "I love AutoTrain 🤗" datasets: - davanstrien/autotrain-data-cultural_heritage_metadata_accuracy co2_eq_emissions: emissions: 7.013570648305172 --- # Model Trained Using AutoTrain - Problem type: Multi-class Classification - Model ID: 48840118271 - CO2 Emissions (in grams): 7.0136 ## Validation Metrics - Loss: 0.089 - Accuracy: 0.971 - Macro F1: 0.971 - Micro F1: 0.971 - Weighted F1: 0.971 - Macro Precision: 0.971 - Micro Precision: 0.971 - Weighted Precision: 0.971 - Macro Recall: 0.971 - Micro Recall: 0.971 - Weighted Recall: 0.971 ## Usage You can use cURL to access this model: ``` $ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoTrain"}' https://api-inference.huggingface.co/models/davanstrien/autotrain-cultural_heritage_metadata_accuracy-48840118271 ``` Or Python API: ``` from transformers import AutoModelForSequenceClassification, AutoTokenizer model = AutoModelForSequenceClassification.from_pretrained("davanstrien/autotrain-cultural_heritage_metadata_accuracy-48840118271", use_auth_token=True) tokenizer = AutoTokenizer.from_pretrained("davanstrien/autotrain-cultural_heritage_metadata_accuracy-48840118271", use_auth_token=True) inputs = tokenizer("I love AutoTrain", return_tensors="pt") outputs = model(**inputs) ```
Chakita/Kalbert
[ "pytorch", "tensorboard", "albert", "fill-mask", "transformers", "generated_from_trainer", "license:mit", "autotrain_compatible" ]
fill-mask
{ "architectures": [ "AlbertForMaskedLM" ], "model_type": "albert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
5
null
--- tags: - autotrain - text-classification language: - it widget: - text: "I love AutoTrain 🤗" datasets: - davanstrien/autotrain-data-cultural_heritage_metadata_accuracy co2_eq_emissions: emissions: 6.04002410891006 --- # Model Trained Using AutoTrain - Problem type: Multi-class Classification - Model ID: 48840118270 - CO2 Emissions (in grams): 6.0400 ## Validation Metrics - Loss: 0.142 - Accuracy: 0.941 - Macro F1: 0.940 - Micro F1: 0.941 - Weighted F1: 0.941 - Macro Precision: 0.941 - Micro Precision: 0.941 - Weighted Precision: 0.941 - Macro Recall: 0.939 - Micro Recall: 0.941 - Weighted Recall: 0.941 ## Usage You can use cURL to access this model: ``` $ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "I love AutoTrain"}' https://api-inference.huggingface.co/models/davanstrien/autotrain-cultural_heritage_metadata_accuracy-48840118270 ``` Or Python API: ``` from transformers import AutoModelForSequenceClassification, AutoTokenizer model = AutoModelForSequenceClassification.from_pretrained("davanstrien/autotrain-cultural_heritage_metadata_accuracy-48840118270", use_auth_token=True) tokenizer = AutoTokenizer.from_pretrained("davanstrien/autotrain-cultural_heritage_metadata_accuracy-48840118270", use_auth_token=True) inputs = tokenizer("I love AutoTrain", return_tensors="pt") outputs = model(**inputs) ```
Chakita/KannadaBERT
[ "pytorch", "roberta", "fill-mask", "transformers", "masked-lm", "fill-in-the-blanks", "autotrain_compatible" ]
fill-mask
{ "architectures": [ "RobertaForMaskedLM" ], "model_type": "roberta", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
5
null
--- tags: - autotrain - text-classification - lam - metadata language: - it widget: - text: porta a due battenti.Figure:putti.Animali:aquila.Decorazioni - text: Elemento di decorazione architettonica a rilievo datasets: - biglam/cultural_heritage_metadata_accuracy co2_eq_emissions: emissions: 7.171395981202868 metrics: - f1 - accuracy - recall pipeline_tag: text-classification license: mit library_name: transformers --- # Model Card for Cultural Heritage Metadata Accuracy Detection model This model is trained to detect the quality of Italian cultural heritage metadata, assigning a score of `high quality` or `low quality` to input text. The model was trained on the [Annotated dataset to assess the accuracy of the textual description of cultural heritage records](https://huggingface.co/datasets/biglam/cultural_heritage_metadata_accuracy) dataset. >The dataset contains more than 100K textual descriptions of cultural items from Cultura Italia, the Italian National Cultural aggregator. Each of the description is labeled either HIGH or LOW quality, according its adherence to the standard cataloguing guidelines provided by Istituto Centrale per il Catalogo e la Documentazione (ICCD). More precisely, each description is labeled as HIGH quality if the object and subject of the item (for which the description is provided) are both described according to the ICCD guidelines, and as LOW quality in all other cases. Most of the dataset was manually annotated, with ~30K descriptions automatically labeled as LOW quality due to their length (less than 3 tokens) or their provenance from old (pre-2012), not curated, collections. The dataset was developed to support the training and testing of ML text classification approaches for automatically assessing the quality of textual descriptions in digital Cultural Heritage repositories. ## Uses <!-- Address questions around how the model is intended to be used, including the foreseeable users of the model and those affected by the model. --> This model could potentially be useful for performing validation on metadata quality. However, before using this model, it would be sensible to validate: - how it performs on your data - if you agree with the quality ratings assigned in the original dataset. It will likely make more sense to use this model in the context of a 'human in the loop' pipeline whereby the model is used to surface metadata records which may benefit from additional human attention rather than using it to make automatic decisions. 
# Model Trained Using AutoTrain - Problem type: Multi-class Classification - Model ID: 48840118272 - CO2 Emissions (in grams): 7.1714 ## Validation Metrics - Loss: 0.085 - Accuracy: 0.972 - Macro F1: 0.972 - Micro F1: 0.972 - Weighted F1: 0.972 - Macro Precision: 0.972 - Micro Precision: 0.972 - Weighted Precision: 0.972 - Macro Recall: 0.972 - Micro Recall: 0.972 - Weighted Recall: 0.972 ## Usage You can use cURL to access this model: ``` $ curl -X POST -H "Authorization: Bearer YOUR_API_KEY" -H "Content-Type: application/json" -d '{"inputs": "Elemento di decorazione architettonica a rilievo"}' https://api-inference.huggingface.co/models/davanstrien/autotrain-cultural_heritage_metadata_accuracy-48840118272 ``` You can also use the model locally be leveraging a Transformers [pipeline](https://huggingface.co/docs/transformers/pipeline_tutorial) ``` from transformers import pipeline pipe = pipeline('text-classification', model='biglam/cultural_heritage_metadata_accuracy') pipe("Elemento di decorazione architettonica a rilievo") ```
Chakita/gpt2_mwp
[ "pytorch", "gpt2", "text-generation", "transformers" ]
text-generation
{ "architectures": [ "GPT2LMHeadModel" ], "model_type": "gpt2", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": true, "max_length": 50 }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
6
null
--- tags: - automatic-speech-recognition - ahazeemi/librispeech10h - generated_from_trainer metrics: - wer model-index: - name: wavlm-libri-clean-100h-large results: [] datasets: - ahazeemi/librispeech10h language: - en pipeline_tag: automatic-speech-recognition --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # wavlm-libri-clean-100h-large This model is a fine-tuned version of [microsoft/wavlm-large](https://huggingface.co/microsoft/wavlm-large) on the AHAZEEMI/LIBRISPEECH10H - CLEAN dataset. It achieves the following results on the evaluation set: - Loss: 0.0893 - Wer: 0.0655 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 0.0003 - train_batch_size: 2 - eval_batch_size: 2 - seed: 42 - gradient_accumulation_steps: 2 - total_train_batch_size: 4 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - num_epochs: 3.0 ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:----:|:---------------:|:------:| | 0.0144 | 0.42 | 300 | 0.0947 | 0.0749 | | 0.1408 | 0.84 | 600 | 0.1347 | 0.1363 | | 0.0396 | 1.26 | 900 | 0.1090 | 0.0935 | | 0.0353 | 1.68 | 1200 | 0.1032 | 0.0832 | | 0.051 | 2.1 | 1500 | 0.0969 | 0.0774 | | 0.0254 | 2.52 | 1800 | 0.0930 | 0.0715 | | 0.0579 | 2.94 | 2100 | 0.0894 | 0.0660 | ### Framework versions - Transformers 4.26.1 - Pytorch 2.0.0+cpu - Datasets 2.9.0 - Tokenizers 0.13.2
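The card reports training details only. As a small usage sketch — assuming the checkpoint was saved with a CTC head loadable by the generic ASR pipeline, and with a placeholder path derived from the model name in the card — transcription could look like this:

```python
from transformers import pipeline

# Placeholder path derived from the model name reported in the card.
asr = pipeline("automatic-speech-recognition", model="./wavlm-libri-clean-100h-large")

# Transcribe a local 16 kHz audio clip; the pipeline handles feature extraction.
result = asr("librispeech_sample.flac")
print(result["text"])
```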
CharlieChen/feedback-bigbird
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
null
--- license: apache-2.0 tags: - medical datasets: - allenai/s2orc --- This repo contains PMC_LLaMA_7B, which is LLaMA-7b finetuned on the PMC papers in S2ORC dataset. The model was trained with the following hyperparameters: * Epochs: 5 * Batch size: 128 * Cutoff length: 512 * Learning rate: 2e-5 Each epoch we sample 512 tokens per paper for training. The model can be loaded as following: ``` import transformers import torch tokenizer = transformers.LlamaTokenizer.from_pretrained('chaoyi-wu/PMC_LLAMA_7B') model = transformers.LlamaForCausalLM.from_pretrained('chaoyi-wu/PMC_LLAMA_7B') sentence = 'Hello, doctor' batch = tokenizer( sentence, return_tensors="pt", add_special_tokens=False ) with torch.no_grad(): generated = model.generate(inputs = batch["input_ids"], max_length=200, do_sample=True, top_k=50) print('model predict: ',tokenizer.decode(generated[0])) ```
Charlotte77/model_test
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
null
--- library_name: ml-agents tags: - Huggy - deep-reinforcement-learning - reinforcement-learning - ML-Agents-Huggy --- # **ppo** Agent playing **Huggy** This is a trained model of a **ppo** agent playing **Huggy** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents). ## Usage (with ML-Agents) The Documentation: https://github.com/huggingface/ml-agents#get-started We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub: ### Resume the training ``` mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume ``` ### Watch your Agent play You can watch your agent **playing directly in your browser:**. 1. Go to https://huggingface.co/spaces/unity/ML-Agents-Huggy 2. Step 1: Find your model_id: egarciamartin/e 3. Step 2: Select your *.nn /*.onnx file 4. Click on Watch the agent play 👀
ChaseBread/DialoGPT-small-harrypotter
[ "pytorch", "gpt2", "text-generation", "transformers", "conversational" ]
conversational
{ "architectures": [ "GPT2LMHeadModel" ], "model_type": "gpt2", "task_specific_params": { "conversational": { "max_length": 1000 }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
9
2023-04-12T13:03:42Z
--- library_name: ml-agents tags: - Huggy - deep-reinforcement-learning - reinforcement-learning - ML-Agents-Huggy --- # **ppo** Agent playing **Huggy** This is a trained model of a **ppo** agent playing **Huggy** using the [Unity ML-Agents Library](https://github.com/Unity-Technologies/ml-agents). ## Usage (with ML-Agents) The Documentation: https://github.com/huggingface/ml-agents#get-started We wrote a complete tutorial to learn to train your first agent using ML-Agents and publish it to the Hub: ### Resume the training ``` mlagents-learn <your_configuration_file_path.yaml> --run-id=<run_id> --resume ``` ### Watch your Agent play You can watch your agent **playing directly in your browser:**. 1. Go to https://huggingface.co/spaces/unity/ML-Agents-Huggy 2. Step 1: Find your model_id: egarciamartin/PPO-Huggy 3. Step 2: Select your *.nn /*.onnx file 4. Click on Watch the agent play 👀
Cheatham/xlm-roberta-large-finetuned-d1
[ "pytorch", "xlm-roberta", "text-classification", "transformers" ]
text-classification
{ "architectures": [ "XLMRobertaForSequenceClassification" ], "model_type": "xlm-roberta", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
20
null
--- license: cc-by-nc-4.0 tags: - generated_from_trainer metrics: - accuracy model-index: - name: videomae-base-finetuned-basketball-subset-v3-25epoch results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # videomae-base-finetuned-basketball-subset-v3-25epoch This model is a fine-tuned version of [MCG-NJU/videomae-base](https://huggingface.co/MCG-NJU/videomae-base) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.8922 - Accuracy: 0.9 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 5e-05 - train_batch_size: 1 - eval_batch_size: 1 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_ratio: 0.1 - training_steps: 5100 ### Training results | Training Loss | Epoch | Step | Validation Loss | Accuracy | |:-------------:|:-----:|:----:|:---------------:|:--------:| | 1.2489 | 0.04 | 202 | 0.7252 | 0.7 | | 0.5366 | 1.04 | 404 | 1.2745 | 0.6 | | 0.9659 | 2.04 | 606 | 0.7013 | 0.85 | | 0.0226 | 3.04 | 808 | 1.3065 | 0.7 | | 0.5437 | 4.04 | 1010 | 2.0397 | 0.7 | | 0.0002 | 5.04 | 1212 | 1.8936 | 0.75 | | 0.0003 | 6.04 | 1414 | 1.4473 | 0.8 | | 0.0193 | 7.04 | 1616 | 1.1602 | 0.75 | | 0.0001 | 8.04 | 1818 | 0.8922 | 0.9 | | 0.0001 | 9.04 | 2020 | 1.0781 | 0.85 | | 0.0 | 10.04 | 2222 | 1.1948 | 0.85 | | 0.0 | 11.04 | 2424 | 1.2431 | 0.85 | | 0.0 | 12.04 | 2626 | 1.2794 | 0.85 | | 0.0 | 13.04 | 2828 | 1.3082 | 0.85 | | 0.0 | 14.04 | 3030 | 1.3332 | 0.85 | | 0.0 | 15.04 | 3232 | 1.3539 | 0.85 | | 0.0 | 16.04 | 3434 | 1.3793 | 0.85 | | 0.0 | 17.04 | 3636 | 1.4510 | 0.8 | | 0.0 | 18.04 | 3838 | 1.5646 | 0.8 | | 0.0 | 19.04 | 4040 | 1.6535 | 0.8 | | 0.0 | 20.04 | 4242 | 1.7017 | 0.8 | | 0.0 | 21.04 | 4444 | 1.7366 | 0.8 | | 0.0 | 22.04 | 4646 | 1.7639 | 0.8 | | 0.0 | 23.04 | 4848 | 1.7792 | 0.8 | | 0.0 | 24.04 | 5050 | 1.7855 | 0.8 | | 0.0 | 25.01 | 5100 | 1.7857 | 0.8 | ### Framework versions - Transformers 4.26.1 - Pytorch 1.12.1+cu116 - Datasets 2.4.0 - Tokenizers 0.12.1
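For illustration only (not part of the original card): a hedged inference sketch that assumes the fine-tuned checkpoint is available under a placeholder local path and that clips follow VideoMAE's default 16-frame input; random frames stand in for a real decoded basketball clip.

```python
import numpy as np
import torch
from transformers import VideoMAEForVideoClassification, VideoMAEImageProcessor

repo_id = "./videomae-base-finetuned-basketball-subset-v3-25epoch"  # placeholder local path

processor = VideoMAEImageProcessor.from_pretrained(repo_id)
model = VideoMAEForVideoClassification.from_pretrained(repo_id)

# 16 RGB frames of 224x224 pixels, the default clip shape expected by VideoMAE.
video = [np.random.randint(0, 255, (224, 224, 3), dtype=np.uint8) for _ in range(16)]

inputs = processor(video, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
print("Predicted label:", model.config.id2label[int(logits.argmax(-1))])
```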
Cheatham/xlm-roberta-large-finetuned-d12_2
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
null
Project name: ChineseBert_text_analysis_system (ZJF-Thunder/ChineseBert_text_analysis_system)

Rough usage notes:

1. Files required to run the system: download a pretrained Chinese BERT model such as chinese-bert-wwm-ext, bert-base-chinese, or another ChineseBert checkpoint from the Hugging Face hub. Download commands:
git clone https://huggingface.co/hfl/chinese-roberta-wwm-ext-large
git clone https://huggingface.co/hfl/chinese-roberta-wwm-ext
git clone https://huggingface.co/hfl/chinese-bert-wwm-ext
git clone https://huggingface.co/hfl/chinese-bert-wwm
git clone https://huggingface.co/bert-base-chinese
Save them under the path ./models/ (a minimal loading sketch is shown after these notes).
2. The datasets needed for training are stored in the data directory.
3. Fine-tuned models are saved under ./模型保存 ("model save"). Run Text_Classification.py to train the model; it saves and tests the model, and the save path is created automatically.
4. Sentence_transformation.py is the dataset-augmentation script.
5. The other .py files are for testing the models.
6. More details will be added later.
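As referenced in step 1, here is a minimal loading sketch (not part of the original notes): it assumes one of the listed checkpoints has been cloned into ./models/ and simply runs a masked-character prediction with `transformers` to confirm the download works.

```python
from transformers import pipeline

# Path assumes chinese-bert-wwm-ext was cloned into ./models/ as described in step 1.
fill_mask = pipeline("fill-mask", model="./models/chinese-bert-wwm-ext")

# Predict the masked character in a short Chinese sentence ("The weather today is really [MASK].").
for candidate in fill_mask("今天天气真[MASK]。"):
    print(candidate["token_str"], round(candidate["score"], 3))
```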
Cheatham/xlm-roberta-large-finetuned-d1r01
[ "pytorch", "xlm-roberta", "text-classification", "transformers" ]
text-classification
{ "architectures": [ "XLMRobertaForSequenceClassification" ], "model_type": "xlm-roberta", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
21
2023-04-12T13:13:00Z
---
license: gpl-3.0
pipeline_tag: graph-ml
tags:
- code
---

```python
import contextlib
import os
import subprocess
import time

import matplotlib.pyplot as plt
import numexpr as ne
import numpy as np
import onnxruntime as ort
import psutil
import requests
import torch
import torch.nn as nn
import torch.optim as optim
from torchvision import datasets, transforms
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

# Load the pretrained seq2seq code model (kept for reference; the trainer below uses its own small network)
tokenizer = AutoTokenizer.from_pretrained("janpase97/codeformer-pretrained")
seq2seq_model = AutoModelForSeq2SeqLM.from_pretrained("janpase97/codeformer-pretrained")


# Detect which graphics API the target application has loaded (Windows-only, via tasklist)
def check_graphics_api(target_app_name):
    graphics_api = None
    with contextlib.suppress(subprocess.CalledProcessError):
        output = subprocess.check_output(
            ['tasklist', '/FI', f'imagename eq {target_app_name}', '/M']
        ).decode('utf-8')
        if "opengl32.dll" in output:
            graphics_api = "OpenGL"
        elif "d3d11.dll" in output:
            graphics_api = "DirectX11"
        elif "d3d12.dll" in output:
            graphics_api = "DirectX12"
        elif "vulkan" in output:
            graphics_api = "VULKAN"
    return graphics_api


# Get the target application's process object
def get_target_app_process(target_app_name):
    return next(
        (
            process
            for process in psutil.process_iter(['name'])
            if process.info['name'] == target_app_name
        ),
        None,
    )


# Attach the AI to the application's process by PID
def attach_ai_to_app_pid(target_app_process):
    if target_app_process is not None:
        print(f"AI is attached to the application's process with PID: {target_app_process.pid}")
        return True
    else:
        print("Could not find the target application's process to attach the AI.")
        return False


# Check if the targeted application is running
def is_target_app_running(target_app_name):
    return any(
        process.info['name'] == target_app_name
        for process in psutil.process_iter(['name'])
    )


# Create the output directory if it doesn't exist
directory = r"G:\Epic Games\GTAV\GTA5_AI\trained_models"
if not os.path.exists(directory):
    os.makedirs(directory)


# Define the neural network model
class NanoCircuit(nn.Module):
    def __init__(self):
        super(NanoCircuit, self).__init__()
        self.fc1 = nn.Linear(784, 128)
        self.fc2 = nn.Linear(128, 10)

    def forward(self, x):
        x = x.view(-1, 784)  # Reshape the input from (batch_size, 28, 28) to (batch_size, 784)
        x = torch.relu(self.fc1(x))
        x = self.fc2(x)
        return x


# Set the device to GPU if available
device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")

# Load the MNIST dataset
transform = transforms.Compose([transforms.ToTensor(), transforms.Normalize((0.1307,), (0.3081,))])
train_dataset = datasets.MNIST(root='./data', train=True, download=True, transform=transform)
train_loader = torch.utils.data.DataLoader(train_dataset, batch_size=64, shuffle=True)

# Initialize the model and move it to the GPU
model = NanoCircuit().to(device)
criterion = nn.CrossEntropyLoss()
optimizer = optim.SGD(model.parameters(), lr=0.01, momentum=0.9)


# Train the model on the GPU with a data cap
def train_with_data_cap(model, data_loader, criterion, optimizer, device, data_cap_gb):
    data_processed = 0
    data_cap_bytes = data_cap_gb * (1024 ** 3)
    epoch = 0

    while data_processed < data_cap_bytes:
        running_loss = 0.0
        for i, data in enumerate(data_loader, 0):
            inputs, labels = data
            inputs, labels = inputs.to(device), labels.to(device)

            # Update the amount of data processed
            data_processed += inputs.nelement() * inputs.element_size()
            if data_processed >= data_cap_bytes:
                break

            optimizer.zero_grad()
            outputs = model(inputs.view(-1, 28 * 28))
            loss = criterion(outputs, labels)
            loss.backward()
            optimizer.step()

            running_loss += loss.item()
        epoch += 1
        print(f"Epoch {epoch}, Loss: {running_loss / (i + 1)}")

    print(f"Data processed: {data_processed / (1024 ** 3):.2f} GB")
    return model


# Save the updated model as a .onnx file
def save_model(model, filepath):
    dummy_input = torch.randn(1, 1, 28, 28).to(device)
    torch.onnx.export(model, dummy_input, filepath,
                      input_names=['input'], output_names=['output'], opset_version=11)


# Train the model with a 50 GB data cap
trained_model = train_with_data_cap(model, train_loader, criterion, optimizer, device, data_cap_gb=50)
save_model(trained_model, os.path.join(directory, 'GTA5_TRAINED.onnx'))

target_app_name = "GTA5_TRAINED.exe"
save_interval_seconds = 5 * 60
application_was_running = False

while True:
    if is_target_app_running(target_app_name):
        print("Target application is running. Training and updating the model...")
        trained_model = train_with_data_cap(model, train_loader, criterion, optimizer, device, data_cap_gb=0.1)
        save_model(trained_model, os.path.join(directory, 'GTA5_TRAINED.onnx'))
        application_was_running = True
    elif application_was_running:
        print("Target application has exited. Saving the model...")
        save_model(trained_model, os.path.join(directory, 'GTA5_TRAINED.onnx'))
        print("Finished training and saved the model.")
        break
    else:
        print("Target application is not running. Waiting to start training and updating the model...")
        time.sleep(save_interval_seconds)


# Redefinition: this variant computes the cross-entropy and its gradient by hand on NumPy
# copies (elementwise math offloaded to numexpr) and backpropagates the manual gradient
# through the logits instead of calling the criterion.
def train_with_data_cap(model, data_loader, criterion, optimizer, device, data_cap_gb):
    data_processed = 0
    data_cap_bytes = data_cap_gb * (1024 ** 3)
    epoch = 0

    while data_processed < data_cap_bytes:
        running_loss = 0.0
        for i, data in enumerate(data_loader, 0):
            inputs, labels = data
            inputs, labels = inputs.to(device), labels.to(device)

            # Update the amount of data processed
            data_processed += inputs.nelement() * inputs.element_size()
            if data_processed >= data_cap_bytes:
                break

            optimizer.zero_grad()

            # Forward pass; keep the graph tensor and work on detached NumPy copies
            outputs = model(inputs.view(-1, 28 * 28))
            logits = outputs.cpu().detach().numpy()
            labels_np = labels.cpu().numpy()

            # Softmax + negative log-likelihood, with the elementwise exp done by numexpr
            shifted = logits - logits.max(axis=1, keepdims=True)
            exp_shifted = ne.evaluate("exp(shifted)")
            probs = exp_shifted / exp_shifted.sum(axis=1, keepdims=True)
            nll = -np.log(probs[np.arange(probs.shape[0]), labels_np])
            loss_value = float(nll.mean())

            # Gradient of the mean cross-entropy w.r.t. the logits: (softmax - one_hot) / N
            grad_outputs = probs
            grad_outputs[np.arange(grad_outputs.shape[0]), labels_np] -= 1.0
            grad_outputs /= len(labels_np)

            # Backpropagate the manual gradient and update the model parameters
            outputs.backward(torch.from_numpy(grad_outputs).to(device))
            optimizer.step()

            running_loss += loss_value
        epoch += 1
        print(f"Epoch {epoch}, Loss: {running_loss / (i + 1)}")

    print(f"Data processed: {data_processed / (1024 ** 3):.2f} GB")
    return model


# Train the model with a 10 GB data cap
trained_model = train_with_data_cap(model, train_loader, criterion, optimizer, device, data_cap_gb=10)
save_model(trained_model, os.path.join(directory, 'GTA5_TRAINED.onnx'))

target_app_name = "GTA5.exe"
save_interval_seconds = 5 * 60
application_was_running = False

while True:
    if is_target_app_running(target_app_name):
        print("Target application is running. Training and updating the model...")
        trained_model = train_with_data_cap(model, train_loader, criterion, optimizer, device, data_cap_gb=10)
        save_model(trained_model, os.path.join(directory, 'GTA5_TRAINED.onnx'))
        application_was_running = True
    elif application_was_running:
        print("Target application has exited. Saving the model...")
        save_model(trained_model, os.path.join(directory, 'GTA5_TRAINED.onnx'))
        print("Finished training and saved the model.")
        break
    else:
        print("Target application is not running. Waiting to start training and updating the model...")
        time.sleep(save_interval_seconds)


target_app_name = "GTA5.exe"
save_interval_seconds = 1 * 60
application_was_running = False

while True:
    if is_target_app_running(target_app_name):
        print("Target application is running. Training and updating the model...")
        trained_model = train_with_data_cap(model, train_loader, criterion, optimizer, device, data_cap_gb=10)
        save_model(trained_model, os.path.join(directory, 'GTA5_TRAINED.onnx'))
        application_was_running = True
    elif application_was_running:
        print("Target application has exited. Saving the model...")
        save_model(trained_model, os.path.join(directory, 'GTA5_TRAINED.onnx'))
        print("Finished training and saved the model.")
        break
    else:
        start_time = time.time()
        print("Target application is not running. Waiting to detect the graphics API...")
        while (time.time() - start_time) < 5:
            if is_target_app_running(target_app_name):
                if graphics_api := check_graphics_api(target_app_name):
                    print(f"Detected {graphics_api} in the target application.")
                    break
                else:
                    print("Could not detect the graphics API used in the target application.")
            time.sleep(1)
        if not is_target_app_running(target_app_name):
            print("Target application not detected in 5 seconds. Shutting down the AI.")
            break

while True:
    if is_target_app_running(target_app_name):
        if graphics_api := check_graphics_api(target_app_name):
            print(f"Detected {graphics_api} in the target application.")
        else:
            print("Could not detect the graphics API used in the target application.")
    else:
        start_time = time.time()
        print("Target application is not running. Waiting to start training and updating the model...")
        while (time.time() - start_time) < 5:
            if is_target_app_running(target_app_name):
                print(f"Detected {check_graphics_api(target_app_name)} in the target application.")
                break
            time.sleep(1)
        if not is_target_app_running(target_app_name):
            print("Target application not detected in 5 seconds. Shutting down the AI.")
            break

# Generate some random data for the boxplots
np.random.seed(0)
original_data = np.random.normal(0, 1, 100)
trained_data = np.random.normal(0.5, 1, 100)

while True:
    if is_target_app_running(target_app_name):
        print("Target application is running. Training and updating the model...")
        trained_model = train_with_data_cap(model, train_loader, criterion, optimizer, device, data_cap_gb=10)
        save_model(trained_model, os.path.join(directory, 'GTA5_TRAINED.onnx'))

        # Create a box plot of the original and trained data
        plt.figure()
        plt.boxplot([original_data, trained_data], labels=["Original Data", "Trained Data"])
        plt.title("Boxplot of Original and Trained Data")
        plt.ylabel("Values")
        plt.show()

        # Save the box plot as an image
        plt.savefig(r"G:\Epic Games\GTAV\GTA5_AI\Plot Box Comparison\boxplot_comparison.png")

        application_was_running = True
    elif application_was_running:
        print("Target application has exited. Saving the model...")
        save_model(trained_model, os.path.join(directory, 'GTA5_TRAINED.onnx'))
        print("Finished training and saved the model.")
        break
    else:
        start_time = time.time()
        print("Target application is not running. Waiting to detect the graphics API...")
        while (time.time() - start_time) < 5:
            if is_target_app_running(target_app_name):
                if graphics_api := check_graphics_api(target_app_name):
                    print(f"Detected {graphics_api} in the target application.")
                    break
                else:
                    print("Could not detect the graphics API used in the target application.")
            time.sleep(1)
        if not is_target_app_running(target_app_name):
            print("Target application not detected in 5 seconds. Shutting down the AI.")
            break
```
Cheatham/xlm-roberta-large-finetuned
[ "pytorch", "xlm-roberta", "text-classification", "transformers" ]
text-classification
{ "architectures": [ "XLMRobertaForSequenceClassification" ], "model_type": "xlm-roberta", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
20
null
--- license: apache-2.0 tags: - generated_from_trainer metrics: - accuracy - f1 model-index: - name: finetuning-sentiment-model-3000-samples-vNew2 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # finetuning-sentiment-model-3000-samples-vNew2 This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on an unknown dataset. It achieves the following results on the evaluation set: - Loss: 0.4047 - Accuracy: 0.8713 - F1: 0.7171 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 2e-05 - train_batch_size: 16 - eval_batch_size: 16 - seed: 42 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - training_steps: 3000 ### Training results ### Framework versions - Transformers 4.27.4 - Pytorch 2.0.0+cu118 - Datasets 2.11.0 - Tokenizers 0.13.3
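Since the card stops at the framework versions, here is a minimal inference sketch; the repo id is a placeholder assumption, and the label names depend on how the dataset was encoded:

```python
from transformers import pipeline

# Placeholder repo id -- replace with the actual fine-tuned checkpoint
classifier = pipeline("text-classification", model="<user>/finetuning-sentiment-model-3000-samples-vNew2")

print(classifier("The movie was surprisingly good."))
# e.g. [{'label': 'LABEL_1', 'score': 0.97}] -- the LABEL_0/LABEL_1 mapping depends on the training data
```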
Cheatham/xlm-roberta-large-finetuned3
[ "pytorch", "xlm-roberta", "text-classification", "transformers" ]
text-classification
{ "architectures": [ "XLMRobertaForSequenceClassification" ], "model_type": "xlm-roberta", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
22
null
--- library_name: stable-baselines3 tags: - AntBulletEnv-v0 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: A2C results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: AntBulletEnv-v0 type: AntBulletEnv-v0 metrics: - type: mean_reward value: 1675.16 +/- 136.29 name: mean_reward verified: false --- # **A2C** Agent playing **AntBulletEnv-v0** This is a trained model of a **A2C** agent playing **AntBulletEnv-v0** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3). ## Usage (with Stable-baselines3) TODO: Add your code ```python from stable_baselines3 import ... from huggingface_sb3 import load_from_hub ... ```
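Until the section above is filled in, a minimal sketch of loading and evaluating this checkpoint with SB3; the repo id and filename below are placeholder assumptions, so check the repository's file list, and if a `vec_normalize.pkl` is shipped, wrap the environment with `VecNormalize.load(...)` before evaluating.

```python
import gym
import pybullet_envs  # noqa: F401 -- registers AntBulletEnv-v0

from huggingface_sb3 import load_from_hub
from stable_baselines3 import A2C
from stable_baselines3.common.evaluation import evaluate_policy

# Placeholder repo id / filename -- replace with the actual repository contents
checkpoint = load_from_hub(repo_id="<user>/a2c-AntBulletEnv-v0", filename="a2c-AntBulletEnv-v0.zip")
model = A2C.load(checkpoint)

env = gym.make("AntBulletEnv-v0")
mean_reward, std_reward = evaluate_policy(model, env, n_eval_episodes=10, deterministic=True)
print(f"mean_reward={mean_reward:.2f} +/- {std_reward:.2f}")
```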
Cheatham/xlm-roberta-large-finetuned4
[ "pytorch", "xlm-roberta", "text-classification", "transformers" ]
text-classification
{ "architectures": [ "XLMRobertaForSequenceClassification" ], "model_type": "xlm-roberta", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
20
null
--- license: other tags: - text-to-image - stable-diffusion ---
CheonggyeMountain-Sherpa/kogpt-trinity-poem
[ "pytorch", "gpt2", "text-generation", "transformers" ]
text-generation
{ "architectures": [ "GPT2LMHeadModel" ], "model_type": "gpt2", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
15
null
--- license: apache-2.0 tags: - generated_from_keras_callback model-index: - name: CSerdar014191/distilgpt2_test01_finetune results: [] --- <!-- This model card has been generated automatically according to the information Keras had access to. You should probably proofread and complete it, then remove this comment. --> # CSerdar014191/distilgpt2_test01_finetune This model is a fine-tuned version of [distilgpt2](https://huggingface.co/distilgpt2) on an unknown dataset. It achieves the following results on the evaluation set: - Train Loss: 3.8469 - Validation Loss: 3.9551 - Epoch: 0 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - optimizer: {'name': 'AdamWeightDecay', 'learning_rate': 2e-05, 'decay': 0.0, 'beta_1': 0.9, 'beta_2': 0.999, 'epsilon': 1e-07, 'amsgrad': False, 'weight_decay_rate': 0.01} - training_precision: float32 ### Training results | Train Loss | Validation Loss | Epoch | |:----------:|:---------------:|:-----:| | 3.8469 | 3.9551 | 0 | ### Framework versions - Transformers 4.27.4 - TensorFlow 2.12.0 - Datasets 2.11.0 - Tokenizers 0.13.3
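A short generation sketch for this checkpoint; if the repository only contains the TensorFlow weights produced by this Keras run, pass `framework="tf"` to the pipeline:

```python
from transformers import pipeline

generator = pipeline("text-generation", model="CSerdar014191/distilgpt2_test01_finetune")

out = generator("Once upon a time", max_new_tokens=40, do_sample=True, top_p=0.95)
print(out[0]["generated_text"])
```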
Chertilasus/main
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
null
--- license: creativeml-openrail-m --- https://civitai.com/models/19244/hanying-punishing-grey-raven
Chester/traffic-rec
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
null
---
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
---

# metarank/ce-esci-MiniLM-L6-v2

This is a [sentence-transformers](https://www.SBERT.net) model: it maps sentences & paragraphs to a 384-dimensional dense vector space and can be used for tasks like clustering or semantic search.

A [cross-encoder/ms-marco-MiniLM-L-6-v2](https://huggingface.co/cross-encoder/ms-marco-MiniLM-L-6-v2) model fine-tuned on the [Amazon ESCI dataset](https://github.com/amazon-science/esci-data).

## Usage (Sentence-Transformers)

Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed:

```
pip install -U sentence-transformers
```

Then you can use the model like this:

```python
from sentence_transformers import SentenceTransformer

sentences = ["This is an example sentence", "Each sentence is converted"]

model = SentenceTransformer('metarank/ce-esci-MiniLM-L6-v2')
embeddings = model.encode(sentences)
print(embeddings)
```

## Training

The model was trained with the parameters:

**DataLoader**:

`torch.utils.data.dataloader.DataLoader` of length 769 with parameters:
```
{'batch_size': 128, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'}
```

**Loss**:

`sentence_transformers.losses.MultipleNegativesRankingLoss.MultipleNegativesRankingLoss` with parameters:
```
{'scale': 20.0, 'similarity_fct': 'cos_sim'}
```

Parameters of the fit()-Method:
```
{
    "epochs": 1,
    "evaluation_steps": 0,
    "evaluator": "NoneType",
    "max_grad_norm": 1,
    "optimizer_class": "<class 'torch.optim.adamw.AdamW'>",
    "optimizer_params": {
        "lr": 2e-05
    },
    "scheduler": "WarmupLinear",
    "steps_per_epoch": null,
    "warmup_steps": 1000,
    "weight_decay": 0.01
}
```

## Full Model Architecture

```
SentenceTransformer(
  (0): Transformer({'max_seq_length': 128, 'do_lower_case': False}) with Transformer model: BertModel
  (1): Pooling({'word_embedding_dimension': 384, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False})
  (2): Normalize()
)
```

## Citing & Authors

* Roman Grebennikov
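Because the checkpoint is described as a fine-tuned cross-encoder, scoring query-product pairs directly may be the more natural interface; this assumes the repository stores a standard cross-encoder layout (if it does not load this way, use the bi-encoder snippet above):

```python
from sentence_transformers import CrossEncoder

# Assumes a standard cross-encoder checkpoint layout in the repository
model = CrossEncoder("metarank/ce-esci-MiniLM-L6-v2", max_length=128)

scores = model.predict([
    ("usb type c charging cable", "Anker USB-C to USB-C cable, 60W fast charging"),
    ("usb type c charging cable", "Expandable garden hose, 50 ft"),
])
print(scores)  # higher score = more relevant product for the query
```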
Chikita1/www_stash_stock
[ "license:bsd-3-clause-clear" ]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
null
--- license: creativeml-openrail-m --- https://civitai.com/models/37333/himekohonkai-star-rail
Chinat/test-classifier
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
null
--- tags: - CartPole-v1 - reinforce - reinforcement-learning - custom-implementation - deep-rl-class model-index: - name: Reinforce-1 results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: CartPole-v1 type: CartPole-v1 metrics: - type: mean_reward value: 500.00 +/- 0.00 name: mean_reward verified: false --- # **Reinforce** Agent playing **CartPole-v1** This is a trained model of a **Reinforce** agent playing **CartPole-v1** . To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
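No code is shown here; the course unit this comes from typically saves a pickled dict holding a small policy network, so here is a sketch of what that policy usually looks like (layer sizes are assumptions, match them to the notebook actually used):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F
from torch.distributions import Categorical

class Policy(nn.Module):
    """Minimal REINFORCE policy for CartPole-v1 (4 observations -> 2 actions)."""
    def __init__(self, s_size=4, a_size=2, h_size=16):
        super().__init__()
        self.fc1 = nn.Linear(s_size, h_size)
        self.fc2 = nn.Linear(h_size, a_size)

    def forward(self, x):
        x = F.relu(self.fc1(x))
        return F.softmax(self.fc2(x), dim=1)

    def act(self, state):
        state = torch.from_numpy(state).float().unsqueeze(0)
        probs = self.forward(state)
        m = Categorical(probs)
        action = m.sample()
        return action.item(), m.log_prob(action)
```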
Ching/negation_detector
[ "pytorch", "roberta", "question-answering", "transformers", "autotrain_compatible" ]
question-answering
{ "architectures": [ "RobertaForQuestionAnswering" ], "model_type": "roberta", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
9
null
--- license: creativeml-openrail-m base_model: runwayml/stable-diffusion-v1-5 instance_prompt: photo of a pink chair with black legs tags: - stable-diffusion - stable-diffusion-diffusers - text-to-image - diffusers - svdiff inference: true --- # SVDiff-pytorch - svdiff-library/svdiff_chair_example These are SVDiff weights for runwayml/stable-diffusion-v1-5. The weights were trained on photo of a pink chair with black legs as Single Image Editing.
Chiuchiyin/Donald
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
2023-04-12T13:37:55Z
---
tags:
- FrozenLake-v1-4x4-no_slippery
- q-learning
- reinforcement-learning
- custom-implementation
model-index:
- name: DeepRLCourse_Unit2-q-FrozenLake-v1-4x4-noSlippery
  results:
  - task:
      type: reinforcement-learning
      name: reinforcement-learning
    dataset:
      name: FrozenLake-v1-4x4-no_slippery
      type: FrozenLake-v1-4x4-no_slippery
    metrics:
    - type: mean_reward
      value: 1.00 +/- 0.00
      name: mean_reward
      verified: false
---

# **Q-Learning** Agent playing **FrozenLake-v1**

This is a trained model of a **Q-Learning** agent playing **FrozenLake-v1**.

## Usage

```python
model = load_from_hub(repo_id="sarahpuspdew/DeepRLCourse_Unit2-q-FrozenLake-v1-4x4-noSlippery", filename="q-learning.pkl")

# Don't forget to check if you need to add additional attributes (is_slippery=False etc)
env = gym.make(model["env_id"])
```
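After loading, the pickled dict usually also carries the Q-table itself; a greedy-rollout sketch under that assumption (the `qtable` key name and the gym step signature may differ in your setup):

```python
import gym
import numpy as np

# 'model' is the dict loaded above; inspect its keys before relying on them
env = gym.make(model["env_id"], is_slippery=False)
qtable = np.array(model["qtable"])

state = env.reset()
done = False
while not done:
    action = int(np.argmax(qtable[state]))  # always take the best known action
    state, reward, done, info = env.step(action)  # gymnasium returns 5 values instead
env.close()
```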
ChoboAvenger/DialoGPT-small-joshua
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
null
--- license: apache-2.0 tags: - generated_from_trainer metrics: - wer model-index: - name: whisper_medium results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # whisper_medium This model is a fine-tuned version of [openai/whisper-medium](https://huggingface.co/openai/whisper-medium) on the None dataset. It achieves the following results on the evaluation set: - Loss: 1.5582 - Wer: 78.8991 ## Model description More information needed ## Intended uses & limitations More information needed ## Training and evaluation data More information needed ## Training procedure ### Training hyperparameters The following hyperparameters were used during training: - learning_rate: 1e-05 - train_batch_size: 2 - eval_batch_size: 2 - seed: 42 - gradient_accumulation_steps: 16 - total_train_batch_size: 32 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08 - lr_scheduler_type: linear - lr_scheduler_warmup_steps: 500 - training_steps: 100 - mixed_precision_training: Native AMP ### Training results | Training Loss | Epoch | Step | Validation Loss | Wer | |:-------------:|:-----:|:----:|:---------------:|:-------:| | 1.4737 | 0.08 | 100 | 1.5582 | 78.8991 | ### Framework versions - Transformers 4.27.4 - Pytorch 1.13.0 - Datasets 2.11.0 - Tokenizers 0.13.2
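Since no usage snippet is included, here is a minimal transcription sketch; the repo id is a placeholder assumption, substitute the actual fine-tuned checkpoint:

```python
from transformers import pipeline

# Placeholder repo id -- point this at the fine-tuned checkpoint
asr = pipeline("automatic-speech-recognition", model="<user>/whisper_medium", chunk_length_s=30)

print(asr("sample.wav")["text"])
```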
CleveGreen/JobClassifier_v2
[ "pytorch", "bert", "text-classification", "transformers" ]
text-classification
{ "architectures": [ "BertForSequenceClassification" ], "model_type": "bert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
37
null
--- pipeline_tag: sentence-similarity tags: - sentence-transformers - feature-extraction - sentence-similarity - transformers --- # gszabo/sent_bert_epoch10 This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 384 dimensional dense vector space and can be used for tasks like clustering or semantic search. <!--- Describe your model here --> ## Usage (Sentence-Transformers) Using this model becomes easy when you have [sentence-transformers](https://www.SBERT.net) installed: ``` pip install -U sentence-transformers ``` Then you can use the model like this: ```python from sentence_transformers import SentenceTransformer sentences = ["This is an example sentence", "Each sentence is converted"] model = SentenceTransformer('gszabo/sent_bert_epoch10') embeddings = model.encode(sentences) print(embeddings) ``` ## Usage (HuggingFace Transformers) Without [sentence-transformers](https://www.SBERT.net), you can use the model like this: First, you pass your input through the transformer model, then you have to apply the right pooling-operation on-top of the contextualized word embeddings. ```python from transformers import AutoTokenizer, AutoModel import torch #Mean Pooling - Take attention mask into account for correct averaging def mean_pooling(model_output, attention_mask): token_embeddings = model_output[0] #First element of model_output contains all token embeddings input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float() return torch.sum(token_embeddings * input_mask_expanded, 1) / torch.clamp(input_mask_expanded.sum(1), min=1e-9) # Sentences we want sentence embeddings for sentences = ['This is an example sentence', 'Each sentence is converted'] # Load model from HuggingFace Hub tokenizer = AutoTokenizer.from_pretrained('gszabo/sent_bert_epoch10') model = AutoModel.from_pretrained('gszabo/sent_bert_epoch10') # Tokenize sentences encoded_input = tokenizer(sentences, padding=True, truncation=True, return_tensors='pt') # Compute token embeddings with torch.no_grad(): model_output = model(**encoded_input) # Perform pooling. In this case, mean pooling. 
sentence_embeddings = mean_pooling(model_output, encoded_input['attention_mask']) print("Sentence embeddings:") print(sentence_embeddings) ``` ## Evaluation Results <!--- Describe how your model was evaluated --> For an automated evaluation of this model, see the *Sentence Embeddings Benchmark*: [https://seb.sbert.net](https://seb.sbert.net?model_name=gszabo/sent_bert_epoch10) ## Training The model was trained with the parameters: **DataLoader**: `torch.utils.data.dataloader.DataLoader` of length 939 with parameters: ``` {'batch_size': 16, 'sampler': 'torch.utils.data.sampler.RandomSampler', 'batch_sampler': 'torch.utils.data.sampler.BatchSampler'} ``` **Loss**: `sentence_transformers.losses.MegaBatchMarginLoss.MegaBatchMarginLoss` Parameters of the fit()-Method: ``` { "epochs": 10, "evaluation_steps": 0, "evaluator": "NoneType", "max_grad_norm": 1, "optimizer_class": "<class 'torch.optim.adamw.AdamW'>", "optimizer_params": { "lr": 2e-05 }, "scheduler": "WarmupLinear", "steps_per_epoch": null, "warmup_steps": 10000, "weight_decay": 0.01 } ``` ## Full Model Architecture ``` SentenceTransformer( (0): Transformer({'max_seq_length': 128, 'do_lower_case': False}) with Transformer model: BertModel (1): Pooling({'word_embedding_dimension': 384, 'pooling_mode_cls_token': False, 'pooling_mode_mean_tokens': True, 'pooling_mode_max_tokens': False, 'pooling_mode_mean_sqrt_len_tokens': False, 'pooling_mode_weightedmean_tokens': False, 'pooling_mode_lasttoken': False}) ) ``` ## Citing & Authors <!--- Describe where people can find more information -->
Contrastive-Tension/BERT-Base-Swe-CT-STSb
[ "pytorch", "tf", "jax", "bert", "feature-extraction", "transformers" ]
feature-extraction
{ "architectures": [ "BertModel" ], "model_type": "bert", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
126
null
--- tags: - CartPole-v1 - reinforce - reinforcement-learning - custom-implementation - deep-rl-class model-index: - name: Reinforce-CartPole-v1 results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: CartPole-v1 type: CartPole-v1 metrics: - type: mean_reward value: 496.75 +/- 27.09 name: mean_reward verified: false --- # **Reinforce** Agent playing **CartPole-v1** This is a trained model of a **Reinforce** agent playing **CartPole-v1** . To learn to use this model and train yours check Unit 4 of the Deep Reinforcement Learning Course: https://huggingface.co/deep-rl-course/unit4/introduction
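For reference, the update rule behind such an agent is the Monte-Carlo REINFORCE step; a compact sketch (the normalization baseline and the gamma value are illustrative assumptions):

```python
import torch

def reinforce_update(optimizer, rewards, log_probs, gamma=0.99):
    """One REINFORCE update from a finished episode."""
    returns, g = [], 0.0
    for r in reversed(rewards):          # discounted returns, computed backwards
        g = r + gamma * g
        returns.insert(0, g)
    returns = torch.tensor(returns)
    returns = (returns - returns.mean()) / (returns.std() + 1e-8)  # simple variance-reduction baseline

    loss = torch.stack([-lp * g for lp, g in zip(log_probs, returns)]).sum()
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```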
Culmenus/opus-mt-de-is-finetuned-de-to-is_nr2-finetuned-de-to-is_nr2
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
null
--- license: creativeml-openrail-m base_model: SG161222/Realistic_Vision_V1.4 instance_prompt: stepania tags: - stable-diffusion - stable-diffusion-diffusers - text-to-image - diffusers - lora inference: true --- # LoRA DreamBooth - stepania2 These are LoRA adaption weights for [SG161222/Realistic_Vision_V1.4](https://huggingface.co/SG161222/Realistic_Vision_V1.4). The weights were trained on the instance prompt "stepania" using [DreamBooth](https://dreambooth.github.io/). You can find some example images in the following.
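A minimal inference sketch, assuming the LoRA weights live in a Hub repository (or local folder) in the standard diffusers attention-processor format and that a CUDA GPU is available; the repo id below is a placeholder:

```python
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "SG161222/Realistic_Vision_V1.4", torch_dtype=torch.float16
).to("cuda")

# Placeholder repo id for the LoRA weights -- replace with the actual repository or local path
pipe.unet.load_attn_procs("<user>/stepania2")

image = pipe("stepania", num_inference_steps=30, guidance_scale=7.5).images[0]
image.save("stepania.png")
```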
DCU-NLP/electra-base-irish-cased-discriminator-v1
[ "pytorch", "electra", "pretraining", "ga", "transformers", "irish", "license:apache-2.0" ]
null
{ "architectures": [ "ElectraForPreTraining" ], "model_type": "electra", "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
4
null
--- library_name: stable-baselines3 tags: - SpaceInvadersNoFrameskip-v4 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: DQN results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: SpaceInvadersNoFrameskip-v4 type: SpaceInvadersNoFrameskip-v4 metrics: - type: mean_reward value: 246.00 +/- 45.65 name: mean_reward verified: false --- # **DQN** Agent playing **SpaceInvadersNoFrameskip-v4** This is a trained model of a **DQN** agent playing **SpaceInvadersNoFrameskip-v4** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3) and the [RL Zoo](https://github.com/DLR-RM/rl-baselines3-zoo). The RL Zoo is a training framework for Stable Baselines3 reinforcement learning agents, with hyperparameter optimization and pre-trained agents included. ## Usage (with SB3 RL Zoo) RL Zoo: https://github.com/DLR-RM/rl-baselines3-zoo<br/> SB3: https://github.com/DLR-RM/stable-baselines3<br/> SB3 Contrib: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib Install the RL Zoo (with SB3 and SB3-Contrib): ```bash pip install rl_zoo3 ``` ``` # Download model and save it into the logs/ folder python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga email81227 -f logs/ python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ ``` If you installed the RL Zoo3 via pip (`pip install rl_zoo3`), from anywhere you can do: ``` python -m rl_zoo3.load_from_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -orga email81227 -f logs/ python -m rl_zoo3.enjoy --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ ``` ## Training (with the RL Zoo) ``` python -m rl_zoo3.train --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ # Upload the model and generate video (when possible) python -m rl_zoo3.push_to_hub --algo dqn --env SpaceInvadersNoFrameskip-v4 -f logs/ -orga email81227 ``` ## Hyperparameters ```python OrderedDict([('batch_size', 128), ('buffer_size', 64000), ('env_wrapper', ['stable_baselines3.common.atari_wrappers.AtariWrapper']), ('exploration_final_eps', 0.05), ('exploration_fraction', 0.15), ('frame_stack', 4), ('gradient_steps', 1), ('learning_rate', 0.005), ('learning_starts', 1000), ('n_timesteps', 1000000.0), ('optimize_memory_usage', False), ('policy', 'CnnPolicy'), ('target_update_interval', 1000), ('train_freq', 4), ('normalize', False)]) ```
DKpro000/DialoGPT-small-harrypotter
[]
null
{ "architectures": null, "model_type": null, "task_specific_params": { "conversational": { "max_length": null }, "summarization": { "early_stopping": null, "length_penalty": null, "max_length": null, "min_length": null, "no_repeat_ngram_size": null, "num_beams": null, "prefix": null }, "text-generation": { "do_sample": null, "max_length": null }, "translation_en_to_de": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_fr": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null }, "translation_en_to_ro": { "early_stopping": null, "max_length": null, "num_beams": null, "prefix": null } } }
0
null
--- library_name: stable-baselines3 tags: - AntBulletEnv-v0 - deep-reinforcement-learning - reinforcement-learning - stable-baselines3 model-index: - name: A2C results: - task: type: reinforcement-learning name: reinforcement-learning dataset: name: AntBulletEnv-v0 type: AntBulletEnv-v0 metrics: - type: mean_reward value: 1317.09 +/- 109.12 name: mean_reward verified: false --- # **A2C** Agent playing **AntBulletEnv-v0** This is a trained model of a **A2C** agent playing **AntBulletEnv-v0** using the [stable-baselines3 library](https://github.com/DLR-RM/stable-baselines3). ## Usage (with Stable-baselines3) TODO: Add your code ```python from stable_baselines3 import ... from huggingface_sb3 import load_from_hub ... ```
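The usage section above is still the template placeholder; a rollout sketch with an assumed repo id and filename (verify both against the repository):

```python
import gym
import pybullet_envs  # noqa: F401 -- registers AntBulletEnv-v0

from huggingface_sb3 import load_from_hub
from stable_baselines3 import A2C

# Placeholder repo id / filename -- adjust to the actual repository
path = load_from_hub(repo_id="<user>/a2c-AntBulletEnv-v0", filename="a2c-AntBulletEnv-v0.zip")
model = A2C.load(path)

env = gym.make("AntBulletEnv-v0")
obs = env.reset()
for _ in range(1000):
    action, _states = model.predict(obs, deterministic=True)
    obs, reward, done, info = env.step(action)
    if done:
        obs = env.reset()
env.close()
```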