Dataset schema (one row per column, with dtype and observed range):

| Column | Dtype | Observed range |
|---|---|---|
| id | string | lengths 6–113 |
| author | string | lengths 2–36 |
| task_category | string | 39 classes |
| tags | list | lengths 1–4.05k |
| created_time | timestamp[s] | 2022-03-02 23:29:04 to 2025-04-07 20:40:27 |
| last_modified | timestamp[s] | 2020-05-14 13:13:12 to 2025-04-19 04:15:39 |
| downloads | int64 | 0 to 118M |
| likes | int64 | 0 to 4.86k |
| README | string | lengths 30–1.01M |
| matched_task | list | lengths 1–10 |
| is_bionlp | string | 3 classes |
| model_cards | string | lengths 0–1M |
| metadata | string | lengths 2–698k |

Sample rows follow. Each record lists its fields in schema order (id, author, task_category, tags, created_time, last_modified, downloads, likes, README, matched_task, is_bionlp, model_cards, metadata); long string fields are truncated with "...".
tamarab/bert-emotion
tamarab
text-classification
[ "transformers", "pytorch", "tensorboard", "distilbert", "text-classification", "generated_from_trainer", "dataset:tweet_eval", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
2022-05-20T16:45:12
2022-05-20T19:12:14
116
0
--- datasets: - tweet_eval license: apache-2.0 metrics: - precision - recall tags: - generated_from_trainer model-index: - name: bert-emotion results: - task: type: text-classification name: Text Classification dataset: name: tweet_eval type: tweet_eval args: emotion metrics: ...
[ "TEXT_CLASSIFICATION" ]
Non_BioNLP
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bert-emotion This model is a fine-tuned version of [distilbert-base-cased](https://huggingface.co/distilbert-base-cased) on the ...
{"datasets": ["tweet_eval"], "license": "apache-2.0", "metrics": ["precision", "recall"], "tags": ["generated_from_trainer"], "model-index": [{"name": "bert-emotion", "results": [{"task": {"type": "text-classification", "name": "Text Classification"}, "dataset": {"name": "tweet_eval", "type": "tweet_eval", "args": "emo...
Netta1994/setfit_baai_gpt-4o_cot-few_shot_remove_final_evaluation_e1_one_big_model_1727080822.0
Netta1994
text-classification
[ "setfit", "safetensors", "bert", "sentence-transformers", "text-classification", "generated_from_setfit_trainer", "arxiv:2209.11055", "base_model:BAAI/bge-base-en-v1.5", "base_model:finetune:BAAI/bge-base-en-v1.5", "region:us" ]
2024-09-23T08:40:22
2024-09-23T08:40:53
7
0
--- base_model: BAAI/bge-base-en-v1.5 library_name: setfit metrics: - accuracy pipeline_tag: text-classification tags: - setfit - sentence-transformers - text-classification - generated_from_setfit_trainer widget: - text: 'The provided answer is overall accurate, complete, and relevant to the query about performing...
[ "TEXT_CLASSIFICATION" ]
Non_BioNLP
# SetFit with BAAI/bge-base-en-v1.5 This is a [SetFit](https://github.com/huggingface/setfit) model that can be used for Text Classification. This SetFit model uses [BAAI/bge-base-en-v1.5](https://huggingface.co/BAAI/bge-base-en-v1.5) as the Sentence Transformer embedding model. A [LogisticRegression](https://scikit-...
{"base_model": "BAAI/bge-base-en-v1.5", "library_name": "setfit", "metrics": ["accuracy"], "pipeline_tag": "text-classification", "tags": ["setfit", "sentence-transformers", "text-classification", "generated_from_setfit_trainer"], "widget": [{"text": "The provided answer is overall accurate, complete, and relevant to t...
pkaustubh4/QnA_BERT
pkaustubh4
question-answering
[ "transformers", "pytorch", "distilbert", "question-answering", "en", "dataset:squad", "license:mit", "endpoints_compatible", "region:us" ]
2023-08-16T13:38:04
2023-08-16T20:43:31
10
0
--- datasets: - squad language: - en license: mit --- # Question Answering with DistilBERT README This repository contains code to train a Question Answering model using the DistilBERT architecture on the SQuAD (Stanford Question Answering Dataset) dataset. The model is trained to answer questions based on a given cont...
[ "QUESTION_ANSWERING" ]
Non_BioNLP
# Question Answering with DistilBERT README This repository contains code to train a Question Answering model using the DistilBERT architecture on the SQuAD (Stanford Question Answering Dataset) dataset. The model is trained to answer questions based on a given context paragraph. The training process utilizes PyTorch, ...
{"datasets": ["squad"], "language": ["en"], "license": "mit"}
fcogidi/pegasus-arxiv
fcogidi
summarization
[ "transformers.js", "onnx", "pegasus", "text2text-generation", "summarization", "en", "region:us" ]
2024-11-30T22:51:19
2024-12-01T00:20:43
18
0
--- language: - en library_name: transformers.js pipeline_tag: summarization --- https://huggingface.co/google/pegasus-arxiv with ONNX weights compatible with Transformers.js. **NOTE**: As of 2024-11-30 Transformers.js does not support `PegasusTokenizer`.
[ "SUMMARIZATION" ]
Non_BioNLP
https://huggingface.co/google/pegasus-arxiv with ONNX weights compatible with Transformers.js. **NOTE**: As of 2024-11-30 Transformers.js does not support `PegasusTokenizer`.
{"language": ["en"], "library_name": "transformers.js", "pipeline_tag": "summarization"}
wildgrape14/distilbert-base-uncased-finetuned-emotion
wildgrape14
text-classification
[ "transformers", "pytorch", "distilbert", "text-classification", "generated_from_trainer", "dataset:emotion", "base_model:distilbert/distilbert-base-uncased", "base_model:finetune:distilbert/distilbert-base-uncased", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compat...
2023-08-10T11:57:40
2023-08-10T11:57:57
8
0
--- base_model: distilbert-base-uncased datasets: - emotion license: apache-2.0 metrics: - accuracy - f1 tags: - generated_from_trainer model-index: - name: distilbert-base-uncased-finetuned-emotion results: - task: type: text-classification name: Text Classification dataset: name: emotion ...
[ "TEXT_CLASSIFICATION" ]
Non_BioNLP
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-finetuned-emotion This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co...
{"base_model": "distilbert-base-uncased", "datasets": ["emotion"], "license": "apache-2.0", "metrics": ["accuracy", "f1"], "tags": ["generated_from_trainer"], "model-index": [{"name": "distilbert-base-uncased-finetuned-emotion", "results": [{"task": {"type": "text-classification", "name": "Text Classification"}, "datas...
cvapict/yhi-message-type-all-MiniLM-L6-v2
cvapict
text-classification
[ "sentence-transformers", "pytorch", "bert", "setfit", "text-classification", "arxiv:2209.11055", "license:apache-2.0", "region:us" ]
2023-08-30T11:48:06
2023-08-30T11:48:43
8
0
--- license: apache-2.0 pipeline_tag: text-classification tags: - setfit - sentence-transformers - text-classification --- # cvapict/yhi-message-type-all-MiniLM-L6-v2 {'accuracy': 0.8048780487804879} This is a [SetFit model](https://github.com/huggingface/setfit) that can be used for text classification. The model h...
[ "TEXT_CLASSIFICATION" ]
Non_BioNLP
# cvapict/yhi-message-type-all-MiniLM-L6-v2 {'accuracy': 0.8048780487804879} This is a [SetFit model](https://github.com/huggingface/setfit) that can be used for text classification. The model has been trained using an efficient few-shot learning technique that involves: 1. Fine-tuning a [Sentence Transformer](http...
{"license": "apache-2.0", "pipeline_tag": "text-classification", "tags": ["setfit", "sentence-transformers", "text-classification"]}
IMISLab/GreekT5-umt5-base-greeksum
IMISLab
summarization
[ "transformers", "pytorch", "umt5", "text2text-generation", "summarization", "el", "arxiv:2311.07767", "arxiv:2304.00869", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
2023-11-12T12:08:04
2024-08-02T09:14:45
41
1
--- language: - el license: apache-2.0 metrics: - bertscore - rouge pipeline_tag: summarization widget: - text: 'ฮฮฑ ฯ€ฮฑฬฯฮตฮน ""ฮพฮตฮบฮฑฬฮธฮฑฯฮท"" ฮธฮตฬฯƒฮท ฯƒฮต ฯƒฯ‡ฮตฬฯƒฮท ฮผฮต ฯ„ฮฟฮฝ ฮบฮนฬฮฝฮดฯ…ฮฝฮฟ ฮผฮตฯ„ฮฑฬฮดฮฟฯƒฮทฯ‚ ฯ„ฮฟฯ… ฮบฮฟฯฮฟฮฝฮฟฮนฬˆฮฟฯ…ฬ ฮฑฯ€ฮฟฬ ฯ„ฮท ฮ˜ฮตฮนฬฮฑ ฮšฮฟฮนฮฝฯ‰ฮฝฮนฬฮฑ ฮบฮฑฮปฮตฮนฬ ฯ„ฮทฮฝ ฮบฯ…ฮฒฮตฬฯฮฝฮทฯƒฮท ฮบฮฑฮน ฯ„ฮฟฮฝ ฮ ฯฯ‰ฮธฯ…ฯ€ฮฟฯ…ฯฮณฮฟฬ ฮผฮต ฮฑฮฝฮฑฮบฮฟฮนฬฮฝฯ‰ฯƒฮทฬ ฯ„ฮฟฯ… ฯ„ฮท ฮ”ฮตฯ…ฯ„ฮตฬฯฮฑ ฮฟ ฮฃฮฅฮกฮ™ฮ–ฮ‘. ""ฮคฮทฮฝ ฯ‰...
[ "SUMMARIZATION" ]
Non_BioNLP
# GreekT5 (umt5-base-greeksum) A Greek news summarization model trained on [GreekSum](https://github.com/iakovosevdaimon/GreekSUM). This model is part of a series of models trained as part of our research paper: [Giarelis, N., Mastrokostas, C., & Karacapilidis, N. (2024) GreekT5: Sequence-to-Sequence Models for Gr...
{"language": ["el"], "license": "apache-2.0", "metrics": ["bertscore", "rouge"], "pipeline_tag": "summarization", "widget": [{"text": "ฮฮฑ ฯ€ฮฑฬฯฮตฮน \"\"ฮพฮตฮบฮฑฬฮธฮฑฯฮท\"\" ฮธฮตฬฯƒฮท ฯƒฮต ฯƒฯ‡ฮตฬฯƒฮท ฮผฮต ฯ„ฮฟฮฝ ฮบฮนฬฮฝฮดฯ…ฮฝฮฟ ฮผฮตฯ„ฮฑฬฮดฮฟฯƒฮทฯ‚ ฯ„ฮฟฯ… ฮบฮฟฯฮฟฮฝฮฟฮนฬˆฮฟฯ…ฬ ฮฑฯ€ฮฟฬ ฯ„ฮท ฮ˜ฮตฮนฬฮฑ ฮšฮฟฮนฮฝฯ‰ฮฝฮนฬฮฑ ฮบฮฑฮปฮตฮนฬ ฯ„ฮทฮฝ ฮบฯ…ฮฒฮตฬฯฮฝฮทฯƒฮท ฮบฮฑฮน ฯ„ฮฟฮฝ ฮ ฯฯ‰ฮธฯ…ฯ€ฮฟฯ…ฯฮณฮฟฬ ฮผฮต ฮฑฮฝฮฑฮบฮฟฮนฬฮฝฯ‰ฯƒฮทฬ ฯ„ฮฟฯ… ฯ„ฮท ฮ”ฮตฯ…ฯ„ฮตฬฯฮฑ...
skywood/NHNDQ-nllb-finetuned-en2ko-ct2-float16
skywood
translation
[ "transformers", "translation", "en", "ko", "license:cc-by-4.0", "endpoints_compatible", "region:us" ]
2024-04-07T07:12:12
2024-04-08T11:50:57
79
1
--- language: - en - ko license: cc-by-4.0 tags: - translation --- I only converted the original model with ctranslate2. cmd> ct2-transformers-converter --model NHNDQ/nllb-finetuned-en2ko --quantization float16 --output_dir NHNDQ-nllb-finetuned-en2ko-ct2 All copyrights belong to the original authors and the CT model may b...
[ "TRANSLATION" ]
Non_BioNLP
I only converted the original model with ctranslate2. cmd> ct2-transformers-converter --model NHNDQ/nllb-finetuned-en2ko --quantization float16 --output_dir NHNDQ-nllb-finetuned-en2ko-ct2 All copyrights belong to the original authors and the CT model may be deleted upon request. Below is the original model information. O...
{"language": ["en", "ko"], "license": "cc-by-4.0", "tags": ["translation"]}
XSY/t5-small-finetuned-xsum
XSY
text2text-generation
[ "transformers", "pytorch", "tensorboard", "t5", "text2text-generation", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
2022-03-02T23:29:05
2021-11-09T13:40:46
123
0
--- {} --- This model was built step by step from this notebook; if you want to fine-tune it yourself, please refer to https://colab.research.google.com/github/huggingface/notebooks/blob/master/examples/summarization.ipynb...
[ "SUMMARIZATION" ]
Non_BioNLP
่ฟ™ไธชๆจกๅž‹ๆ˜ฏๆ นๆฎ่ฟ™ไธชไธ€ๆญฅไธ€ๆญฅๅฎŒๆˆ็š„๏ผŒๅฆ‚ๆžœๆƒณ่‡ชๅทฑๅพฎ่ฐƒ๏ผŒ่ฏทๅ‚่€ƒhttps://colab.research.google.com/github/huggingface/notebooks/blob/master/examples/summarization.ipynb This model is completed step by step according to this, if you want to fine-tune yourself, please refer to https://colab.research.google.com/github/huggingface/notebooks/blob/master/exampl...
{}
tamilnlpSLIIT/whisper-ta
tamilnlpSLIIT
automatic-speech-recognition
[ "transformers", "pytorch", "jax", "whisper", "automatic-speech-recognition", "whisper-event", "ta", "license:apache-2.0", "model-index", "endpoints_compatible", "region:us" ]
2024-05-19T16:46:12
2024-05-19T16:46:12
7
0
--- language: - ta license: apache-2.0 metrics: - wer tags: - whisper-event model-index: - name: Whisper Tamil Medium - Vasista Sai Lodagala results: - task: type: automatic-speech-recognition name: Automatic Speech Recognition dataset: name: google/fleurs type: google/fleurs confi...
[ "TRANSLATION" ]
Non_BioNLP
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # Whisper Tamil Medium This model is a fine-tuned version of [openai/whisper-medium](https://huggingface.co/openai/whisper-medium)...
{"language": ["ta"], "license": "apache-2.0", "metrics": ["wer"], "tags": ["whisper-event"], "model-index": [{"name": "Whisper Tamil Medium - Vasista Sai Lodagala", "results": [{"task": {"type": "automatic-speech-recognition", "name": "Automatic Speech Recognition"}, "dataset": {"name": "google/fleurs", "type": "google...
fine-tuned/jinaai_jina-embeddings-v2-base-en-6162024-xxse-webapp
fine-tuned
feature-extraction
[ "sentence-transformers", "safetensors", "bert", "feature-extraction", "sentence-similarity", "mteb", "Query", "Document", "Retrieval", "Description", "JSON", "custom_code", "en", "dataset:fine-tuned/jinaai_jina-embeddings-v2-base-en-6162024-xxse-webapp", "dataset:allenai/c4", "license:...
2024-06-16T13:40:02
2024-06-16T13:40:17
5
0
--- datasets: - fine-tuned/jinaai_jina-embeddings-v2-base-en-6162024-xxse-webapp - allenai/c4 language: - en license: apache-2.0 pipeline_tag: feature-extraction tags: - sentence-transformers - feature-extraction - sentence-similarity - mteb - Query - Document - Retrieval - Description - JSON --- This model is a fine-t...
[ "TEXT_CLASSIFICATION" ]
Non_BioNLP
This model is a fine-tuned version of [**jinaai/jina-embeddings-v2-base-en**](https://huggingface.co/jinaai/jina-embeddings-v2-base-en) designed for the following use case: general domain ## How to Use This model can be easily integrated into your NLP pipeline for tasks such as text classification, sentiment analysis...
{"datasets": ["fine-tuned/jinaai_jina-embeddings-v2-base-en-6162024-xxse-webapp", "allenai/c4"], "language": ["en"], "license": "apache-2.0", "pipeline_tag": "feature-extraction", "tags": ["sentence-transformers", "feature-extraction", "sentence-similarity", "mteb", "Query", "Document", "Retrieval", "Description", "JSO...
HooshvareLab/bert-fa-base-uncased-clf-persiannews
HooshvareLab
text-classification
[ "transformers", "pytorch", "tf", "jax", "bert", "text-classification", "fa", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
2022-03-02T23:29:04
2021-05-18T20:51:07
2,153
8
--- language: fa license: apache-2.0 --- # ParsBERT (v2.0) A Transformer-based Model for Persian Language Understanding We reconstructed the vocabulary and fine-tuned the ParsBERT v1.1 on the new Persian corpora in order to provide some functionalities for using ParsBERT in other scopes! Please follow the [ParsBERT]...
[ "TEXT_CLASSIFICATION" ]
Non_BioNLP
# ParsBERT (v2.0) A Transformer-based Model for Persian Language Understanding We reconstructed the vocabulary and fine-tuned the ParsBERT v1.1 on the new Persian corpora in order to provide some functionalities for using ParsBERT in other scopes! Please follow the [ParsBERT](https://github.com/hooshvare/parsbert) r...
{"language": "fa", "license": "apache-2.0"}
Unbabel/wmt20-comet-qe-da-v2-marian
Unbabel
translation
[ "translation", "multilingual", "af", "am", "ar", "as", "az", "be", "bg", "bn", "br", "bs", "ca", "cs", "cy", "da", "de", "el", "en", "eo", "es", "et", "eu", "fa", "fi", "fr", "fy", "ga", "gd", "gl", "gu", "ha", "he", "hi", "hr", "hu", "hy", "...
2024-05-28T10:18:50
2024-05-28T10:45:42
0
0
--- language: - multilingual - af - am - ar - as - az - be - bg - bn - br - bs - ca - cs - cy - da - de - el - en - eo - es - et - eu - fa - fi - fr - fy - ga - gd - gl - gu - ha - he - hi - hr - hu - hy - id - is - it - ja - jv - ka - kk - km - kn - ko - ku - ky - la ...
[ "TRANSLATION" ]
Non_BioNLP
Marian version of [wmt20-comet-qe-da-v2](https://huggingface.co/Unbabel/wmt20-comet-qe-da-v2). Credits to Microsoft Translate Team! # Paper TBA # License Apache-2.0 # Usage TBA # Intended uses Our model is intended to be used for **MT evaluation**. Given a triplet with (source senten...
{"language": ["multilingual", "af", "am", "ar", "as", "az", "be", "bg", "bn", "br", "bs", "ca", "cs", "cy", "da", "de", "el", "en", "eo", "es", "et", "eu", "fa", "fi", "fr", "fy", "ga", "gd", "gl", "gu", "ha", "he", "hi", "hr", "hu", "hy", "id", "is", "it", "ja", "jv", "ka", "kk", "km", "kn", "ko", "ku", "ky", "la", "l...
antonkurylo/t5-small-billsum
antonkurylo
summarization
[ "transformers", "tensorboard", "safetensors", "t5", "text2text-generation", "summarization", "generated_from_trainer", "base_model:google-t5/t5-small", "base_model:finetune:google-t5/t5-small", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible",...
2024-10-22T19:02:06
2024-10-23T20:28:36
75
0
--- base_model: t5-small library_name: transformers license: apache-2.0 metrics: - rouge tags: - summarization - generated_from_trainer model-index: - name: t5-small-billsum results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probab...
[ "SUMMARIZATION" ]
Non_BioNLP
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # t5-small-billsum This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on an unknown dataset. It ach...
{"base_model": "t5-small", "library_name": "transformers", "license": "apache-2.0", "metrics": ["rouge"], "tags": ["summarization", "generated_from_trainer"], "model-index": [{"name": "t5-small-billsum", "results": []}]}
4yo1/llama3-pre1-ds-lora1
4yo1
translation
[ "transformers", "pytorch", "llama", "text-generation", "llama-3-ko", "translation", "en", "ko", "dataset:recipes", "license:mit", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
2024-07-18T00:57:07
2024-07-18T01:07:19
2,088
0
--- datasets: - recipes language: - en - ko library_name: transformers license: mit pipeline_tag: translation tags: - llama-3-ko --- ### Model Card for Model ID ### Model Details Model Card: llama3-pre1-ds-lora1 with Fine-Tuning Model Overview Model Name: llama3-pre1-ds-lora1 Model Type: Transformer-based Language M...
[ "TRANSLATION" ]
Non_BioNLP
### Model Card for Model ID ### Model Details Model Card: llama3-pre1-ds-lora1 with Fine-Tuning Model Overview Model Name: llama3-pre1-ds-lora1 Model Type: Transformer-based Language Model Model Size: 8 billion parameters by: 4yo1 Languages: English and Korean ### Model Description llama3-pre1-ds-lora1 is a lang...
{"datasets": ["recipes"], "language": ["en", "ko"], "library_name": "transformers", "license": "mit", "pipeline_tag": "translation", "tags": ["llama-3-ko"]}
Helsinki-NLP/opus-mt-vi-fr
Helsinki-NLP
translation
[ "transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "vi", "fr", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
2022-03-02T23:29:04
2023-08-16T12:08:36
111
0
--- language: - vi - fr license: apache-2.0 tags: - translation --- ### vie-fra * source group: Vietnamese * target group: French * OPUS readme: [vie-fra](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/vie-fra/README.md) * model: transformer-align * source language(s): vie * target language...
[ "TRANSLATION" ]
Non_BioNLP
### vie-fra * source group: Vietnamese * target group: French * OPUS readme: [vie-fra](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/vie-fra/README.md) * model: transformer-align * source language(s): vie * target language(s): fra * model: transformer-align * pre-processing: normalization ...
{"language": ["vi", "fr"], "license": "apache-2.0", "tags": ["translation"]}
ahearnlr/bert-emotion
ahearnlr
text-classification
[ "transformers", "pytorch", "tensorboard", "distilbert", "text-classification", "generated_from_trainer", "dataset:tweet_eval", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
2023-05-30T15:22:59
2023-05-30T15:30:44
13
0
--- datasets: - tweet_eval license: apache-2.0 metrics: - precision - recall tags: - generated_from_trainer model-index: - name: bert-emotion results: - task: type: text-classification name: Text Classification dataset: name: tweet_eval type: tweet_eval config: emotion split:...
[ "TEXT_CLASSIFICATION" ]
Non_BioNLP
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bert-emotion This model is a fine-tuned version of [distilbert-base-cased](https://huggingface.co/distilbert-base-cased) on the ...
{"datasets": ["tweet_eval"], "license": "apache-2.0", "metrics": ["precision", "recall"], "tags": ["generated_from_trainer"], "model-index": [{"name": "bert-emotion", "results": [{"task": {"type": "text-classification", "name": "Text Classification"}, "dataset": {"name": "tweet_eval", "type": "tweet_eval", "config": "e...
yam3333/paraphrase-xlm-r-multilingual-v1-finetuned
yam3333
sentence-similarity
[ "sentence-transformers", "safetensors", "xlm-roberta", "sentence-similarity", "feature-extraction", "generated_from_trainer", "dataset_size:383", "loss:CosineSimilarityLoss", "arxiv:1908.10084", "base_model:sentence-transformers/paraphrase-xlm-r-multilingual-v1", "base_model:finetune:sentence-tr...
2024-11-17T15:55:40
2024-11-17T15:56:43
7
0
--- base_model: sentence-transformers/paraphrase-xlm-r-multilingual-v1 library_name: sentence-transformers pipeline_tag: sentence-similarity tags: - sentence-transformers - sentence-similarity - feature-extraction - generated_from_trainer - dataset_size:383 - loss:CosineSimilarityLoss widget: - source_sentence: เคฌเฅเคฏเคตเคธเคพเคฏ...
[ "TEXT_CLASSIFICATION" ]
Non_BioNLP
# SentenceTransformer based on sentence-transformers/paraphrase-xlm-r-multilingual-v1 This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [sentence-transformers/paraphrase-xlm-r-multilingual-v1](https://huggingface.co/sentence-transformers/paraphrase-xlm-r-multilingual-v1). It maps sentences...
{"base_model": "sentence-transformers/paraphrase-xlm-r-multilingual-v1", "library_name": "sentence-transformers", "pipeline_tag": "sentence-similarity", "tags": ["sentence-transformers", "sentence-similarity", "feature-extraction", "generated_from_trainer", "dataset_size:383", "loss:CosineSimilarityLoss"], "widget": [{...
mahsaBa76/bge-base-custom-matryoshka
mahsaBa76
sentence-similarity
[ "sentence-transformers", "safetensors", "bert", "sentence-similarity", "feature-extraction", "generated_from_trainer", "dataset_size:278", "loss:MatryoshkaLoss", "loss:MultipleNegativesRankingLoss", "arxiv:1908.10084", "arxiv:2205.13147", "arxiv:1705.00652", "base_model:BAAI/bge-base-en-v1.5...
2025-01-07T19:28:48
2025-01-07T19:28:58
7
0
--- base_model: BAAI/bge-base-en-v1.5 library_name: sentence-transformers metrics: - cosine_accuracy@1 - cosine_accuracy@3 - cosine_accuracy@5 - cosine_accuracy@10 - cosine_precision@1 - cosine_precision@3 - cosine_precision@5 - cosine_precision@10 - cosine_recall@1 - cosine_recall@3 - cosine_recall@5 - cosine_recall@1...
[ "TEXT_CLASSIFICATION" ]
Non_BioNLP
# SentenceTransformer based on BAAI/bge-base-en-v1.5 This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [BAAI/bge-base-en-v1.5](https://huggingface.co/BAAI/bge-base-en-v1.5) on the json dataset. It maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for seman...
{"base_model": "BAAI/bge-base-en-v1.5", "library_name": "sentence-transformers", "metrics": ["cosine_accuracy@1", "cosine_accuracy@3", "cosine_accuracy@5", "cosine_accuracy@10", "cosine_precision@1", "cosine_precision@3", "cosine_precision@5", "cosine_precision@10", "cosine_recall@1", "cosine_recall@3", "cosine_recall@...
RichardErkhov/gplsi_-_Aitana-6.3B-4bits
RichardErkhov
null
[ "safetensors", "bloom", "4-bit", "bitsandbytes", "region:us" ]
2025-03-09T07:59:59
2025-03-09T08:01:59
2
0
--- {} --- Quantization made by Richard Erkhov. [Github](https://github.com/RichardErkhov) [Discord](https://discord.gg/pvy7H8DZMG) [Request more models](https://github.com/RichardErkhov/quant_request) Aitana-6.3B - bnb 4bits - Model creator: https://huggingface.co/gplsi/ - Original model: https://huggingface.co/g...
[ "QUESTION_ANSWERING", "TRANSLATION", "SUMMARIZATION", "PARAPHRASING" ]
Non_BioNLP
Quantization made by Richard Erkhov. [Github](https://github.com/RichardErkhov) [Discord](https://discord.gg/pvy7H8DZMG) [Request more models](https://github.com/RichardErkhov/quant_request) Aitana-6.3B - bnb 4bits - Model creator: https://huggingface.co/gplsi/ - Original model: https://huggingface.co/gplsi/Aitana...
{}
prithivMLmods/Delta-Pavonis-Qwen-14B
prithivMLmods
text-generation
[ "transformers", "safetensors", "qwen2", "text-generation", "text-generation-inference", "conversational", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
2025-03-14T10:04:04
2025-03-27T10:03:15
238
3
--- base_model: - prithivMLmods/Calcium-Opus-14B-Elite2-R1 language: - en library_name: transformers license: apache-2.0 pipeline_tag: text-generation tags: - text-generation-inference - trl - sft - Qwen - Distill --- ![sefsefsefsef.png](https://cdn-uploads.huggingface.co/production/uploads/65bb837dbfb878f46c77de4c/LNH...
[ "TRANSLATION" ]
Non_BioNLP
![sefsefsefsef.png](https://cdn-uploads.huggingface.co/production/uploads/65bb837dbfb878f46c77de4c/LNHgLHbGCgS6XB4635R0a.png) # **Delta-Pavonis-Qwen-14B** > Delta-Pavonis-Qwen-14B is based on the Qwen 2.5 14B modality architecture, designed to enhance the reasoning capabilities of 14B-parameter models. This model i...
{"base_model": ["prithivMLmods/Calcium-Opus-14B-Elite2-R1"], "language": ["en"], "library_name": "transformers", "license": "apache-2.0", "pipeline_tag": "text-generation", "tags": ["text-generation-inference", "trl", "sft", "Qwen", "Distill"]}
aroot/mbart-finetuned-eng-kor-22045430821
aroot
translation
[ "transformers", "pytorch", "tensorboard", "mbart", "text2text-generation", "translation", "generated_from_trainer", "autotrain_compatible", "endpoints_compatible", "region:us" ]
2023-06-30T17:44:19
2023-06-30T18:00:59
12
0
--- metrics: - bleu tags: - translation - generated_from_trainer model-index: - name: mbart-finetuned-eng-kor-22045430821 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comme...
[ "TRANSLATION" ]
Non_BioNLP
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # mbart-finetuned-eng-kor-22045430821 This model is a fine-tuned version of [facebook/mbart-large-50-many-to-many-mmt](https://hug...
{"metrics": ["bleu"], "tags": ["translation", "generated_from_trainer"], "model-index": [{"name": "mbart-finetuned-eng-kor-22045430821", "results": []}]}
TransferGraph/chiragasarpota_scotus-bert-finetuned-lora-ag_news
TransferGraph
text-classification
[ "peft", "safetensors", "parquet", "text-classification", "dataset:ag_news", "base_model:chiragasarpota/scotus-bert", "base_model:adapter:chiragasarpota/scotus-bert", "license:apache-2.0", "model-index", "region:us" ]
2024-02-27T22:53:36
2024-02-28T00:42:37
0
0
--- base_model: chiragasarpota/scotus-bert datasets: - ag_news library_name: peft license: apache-2.0 metrics: - accuracy tags: - parquet - text-classification model-index: - name: chiragasarpota_scotus-bert-finetuned-lora-ag_news results: - task: type: text-classification name: Text Classification ...
[ "TEXT_CLASSIFICATION" ]
Non_BioNLP
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # chiragasarpota_scotus-bert-finetuned-lora-ag_news This model is a fine-tuned version of [chiragasarpota/scotus-bert](https://hug...
{"base_model": "chiragasarpota/scotus-bert", "datasets": ["ag_news"], "library_name": "peft", "license": "apache-2.0", "metrics": ["accuracy"], "tags": ["parquet", "text-classification"], "model-index": [{"name": "chiragasarpota_scotus-bert-finetuned-lora-ag_news", "results": [{"task": {"type": "text-classification", "...
Cran-May/tempemotacilla-eridanus-0302
Cran-May
text-generation
[ "transformers", "safetensors", "qwen2", "text-generation", "text-generation-inference", "trl", "r999", "conversational", "en", "zh", "base_model:prithivMLmods/Pegasus-Opus-14B-Exp", "base_model:finetune:prithivMLmods/Pegasus-Opus-14B-Exp", "license:apache-2.0", "model-index", "autotrain_...
2025-03-02T04:21:04
2025-03-02T04:21:05
25
0
--- base_model: - prithivMLmods/Pegasus-Opus-14B-Exp language: - en - zh library_name: transformers license: apache-2.0 pipeline_tag: text-generation tags: - text-generation-inference - trl - r999 model-index: - name: Eridanus-Opus-14B-r999 results: - task: type: text-generation name: Text Generation ...
[ "TRANSLATION" ]
Non_BioNLP
![8.png](https://cdn-uploads.huggingface.co/production/uploads/65bb837dbfb878f46c77de4c/-TJI83fUCFhvKv6wHwxyj.png) # **Eridanus-Opus-14B-r999** Eridanus-Opus-14B-r999 is based on the Qwen 2.5 14B modality architecture, designed to enhance the reasoning capabilities of 14B-parameter models. This model is optimized f...
{"base_model": ["prithivMLmods/Pegasus-Opus-14B-Exp"], "language": ["en", "zh"], "library_name": "transformers", "license": "apache-2.0", "pipeline_tag": "text-generation", "tags": ["text-generation-inference", "trl", "r999"], "model-index": [{"name": "Eridanus-Opus-14B-r999", "results": [{"task": {"type": "text-genera...
LaTarn/re-clean-setfit-model
LaTarn
text-classification
[ "sentence-transformers", "safetensors", "bert", "setfit", "text-classification", "arxiv:2209.11055", "license:apache-2.0", "region:us" ]
2023-11-03T00:03:46
2023-11-03T00:04:11
46
0
--- license: apache-2.0 pipeline_tag: text-classification tags: - setfit - sentence-transformers - text-classification --- # LaTarn/re-clean-setfit-model This is a [SetFit model](https://github.com/huggingface/setfit) that can be used for text classification. The model has been trained using an efficient few-shot lea...
[ "TEXT_CLASSIFICATION" ]
Non_BioNLP
# LaTarn/re-clean-setfit-model This is a [SetFit model](https://github.com/huggingface/setfit) that can be used for text classification. The model has been trained using an efficient few-shot learning technique that involves: 1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning. 2...
{"license": "apache-2.0", "pipeline_tag": "text-classification", "tags": ["setfit", "sentence-transformers", "text-classification"]}
nikitakapitan/bert-base-uncased-finetuned-clinc_oos-distilled-clinc_oos
nikitakapitan
text-classification
[ "transformers", "pytorch", "distilbert", "text-classification", "generated_from_trainer", "dataset:clinc_oos", "base_model:distilbert/distilbert-base-uncased", "base_model:finetune:distilbert/distilbert-base-uncased", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_comp...
2023-10-02T09:25:26
2023-10-02T10:19:39
15
0
--- base_model: distilbert-base-uncased datasets: - clinc_oos license: apache-2.0 metrics: - accuracy tags: - generated_from_trainer model-index: - name: bert-base-uncased-finetuned-clinc_oos-distilled-clinc_oos results: - task: type: text-classification name: Text Classification dataset: name...
[ "TEXT_CLASSIFICATION" ]
Non_BioNLP
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bert-base-uncased-finetuned-clinc_oos-distilled-clinc_oos This model is a fine-tuned version of [distilbert-base-uncased](https:...
{"base_model": "distilbert-base-uncased", "datasets": ["clinc_oos"], "license": "apache-2.0", "metrics": ["accuracy"], "tags": ["generated_from_trainer"], "model-index": [{"name": "bert-base-uncased-finetuned-clinc_oos-distilled-clinc_oos", "results": [{"task": {"type": "text-classification", "name": "Text Classificati...
mav23/pythia-1b-GGUF
mav23
null
[ "gguf", "pytorch", "causal-lm", "pythia", "en", "dataset:the_pile", "arxiv:2304.01373", "arxiv:2101.00027", "arxiv:2201.07311", "license:apache-2.0", "endpoints_compatible", "region:us" ]
2024-11-20T18:24:08
2024-11-20T18:33:58
77
0
--- datasets: - the_pile language: - en license: apache-2.0 tags: - pytorch - causal-lm - pythia --- The *Pythia Scaling Suite* is a collection of models developed to facilitate interpretability research [(see paper)](https://arxiv.org/pdf/2304.01373.pdf). It contains two sets of eight models of sizes 70M, 160M, 41...
[ "QUESTION_ANSWERING", "TRANSLATION" ]
Non_BioNLP
The *Pythia Scaling Suite* is a collection of models developed to facilitate interpretability research [(see paper)](https://arxiv.org/pdf/2304.01373.pdf). It contains two sets of eight models of sizes 70M, 160M, 410M, 1B, 1.4B, 2.8B, 6.9B, and 12B. For each size, there are two models: one trained on the Pile, and...
{"datasets": ["the_pile"], "language": ["en"], "license": "apache-2.0", "tags": ["pytorch", "causal-lm", "pythia"]}
MultiBertGunjanPatrick/multiberts-seed-1-160k
MultiBertGunjanPatrick
null
[ "transformers", "pytorch", "bert", "pretraining", "exbert", "multiberts", "multiberts-seed-1", "en", "dataset:bookcorpus", "dataset:wikipedia", "arxiv:2106.16163", "license:apache-2.0", "endpoints_compatible", "region:us" ]
2022-03-02T23:29:04
2021-10-04T04:59:30
102
0
--- datasets: - bookcorpus - wikipedia language: en license: apache-2.0 tags: - exbert - multiberts - multiberts-seed-1 --- # MultiBERTs Seed 1 Checkpoint 160k (uncased) Seed 1 intermediate checkpoint 160k MultiBERTs (pretrained BERT) model on English language using a masked language modeling (MLM) objective. It was in...
[ "QUESTION_ANSWERING" ]
Non_BioNLP
# MultiBERTs Seed 1 Checkpoint 160k (uncased) Seed 1 intermediate checkpoint 160k MultiBERTs (pretrained BERT) model on English language using a masked language modeling (MLM) objective. It was introduced in [this paper](https://arxiv.org/pdf/2106.16163.pdf) and first released in [this repository](https://github.com/go...
{"datasets": ["bookcorpus", "wikipedia"], "language": "en", "license": "apache-2.0", "tags": ["exbert", "multiberts", "multiberts-seed-1"]}
gaudi/opus-mt-fr-swc-ctranslate2
gaudi
translation
[ "transformers", "marian", "ctranslate2", "translation", "license:apache-2.0", "endpoints_compatible", "region:us" ]
2024-07-25T15:14:55
2024-10-19T04:48:59
6
0
--- license: apache-2.0 tags: - ctranslate2 - translation --- # Repository General Information ## Inspired by and derived from the work of [Helsinki-NLP](https://huggingface.co/Helsinki-NLP), [CTranslate2](https://github.com/OpenNMT/CTranslate2), and [michaelfeil](https://huggingface.co/michaelfeil)! - Link to Original...
[ "TRANSLATION" ]
Non_BioNLP
# Repository General Information ## Inspired by and derived from the work of [Helsinki-NLP](https://huggingface.co/Helsinki-NLP), [CTranslate2](https://github.com/OpenNMT/CTranslate2), and [michaelfeil](https://huggingface.co/michaelfeil)! - Link to Original Model ([Helsinki-NLP](https://huggingface.co/Helsinki-NLP)): ...
{"license": "apache-2.0", "tags": ["ctranslate2", "translation"]}
rezashkv/diffusion_pruning
rezashkv
text-to-image
[ "diffusers", "safetensors", "text-to-image", "stable-diffusion", "en", "arxiv:2406.12042", "license:mit", "region:us" ]
2024-06-13T22:29:44
2024-06-19T03:10:07
0
0
--- language: - en license: mit tags: - text-to-image - stable-diffusion - diffusers --- # APTP: Adaptive Prompt-Tailored Pruning of T2I Diffusion Models [![arXiv](https://img.shields.io/badge/Paper-arXiv-red?style=for-the-badge)](https://arxiv.org/abs/2406.12042) [![Github](https://img.shields.io/badge/Gihub-Code-s...
[ "SEMANTIC_SIMILARITY" ]
Non_BioNLP
# APTP: Adaptive Prompt-Tailored Pruning of T2I Diffusion Models [![arXiv](https://img.shields.io/badge/Paper-arXiv-red?style=for-the-badge)](https://arxiv.org/abs/2406.12042) [![Github](https://img.shields.io/badge/Gihub-Code-succees?style=for-the-badge&logo=GitHub)](https://github.com/rezashkv/diffusion_pruning) ...
{"language": ["en"], "license": "mit", "tags": ["text-to-image", "stable-diffusion", "diffusers"]}
TransferGraph/Jeevesh8_512seq_len_6ep_bert_ft_cola-91-finetuned-lora-tweet_eval_hate
TransferGraph
text-classification
[ "peft", "safetensors", "parquet", "text-classification", "dataset:tweet_eval", "base_model:Jeevesh8/512seq_len_6ep_bert_ft_cola-91", "base_model:adapter:Jeevesh8/512seq_len_6ep_bert_ft_cola-91", "model-index", "region:us" ]
2024-02-29T13:41:48
2024-02-29T13:41:51
0
0
--- base_model: Jeevesh8/512seq_len_6ep_bert_ft_cola-91 datasets: - tweet_eval library_name: peft metrics: - accuracy tags: - parquet - text-classification model-index: - name: Jeevesh8_512seq_len_6ep_bert_ft_cola-91-finetuned-lora-tweet_eval_hate results: - task: type: text-classification name: Text Cl...
[ "TEXT_CLASSIFICATION" ]
Non_BioNLP
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # Jeevesh8_512seq_len_6ep_bert_ft_cola-91-finetuned-lora-tweet_eval_hate This model is a fine-tuned version of [Jeevesh8/512seq_le...
{"base_model": "Jeevesh8/512seq_len_6ep_bert_ft_cola-91", "datasets": ["tweet_eval"], "library_name": "peft", "metrics": ["accuracy"], "tags": ["parquet", "text-classification"], "model-index": [{"name": "Jeevesh8_512seq_len_6ep_bert_ft_cola-91-finetuned-lora-tweet_eval_hate", "results": [{"task": {"type": "text-classi...
Language-Media-Lab/mt5-small-ain-jpn-mt
Language-Media-Lab
translation
[ "transformers", "pytorch", "mt5", "text2text-generation", "translation", "jpn", "ain", "autotrain_compatible", "endpoints_compatible", "region:us" ]
2022-03-02T23:29:05
2022-02-04T13:20:55
119
0
--- language: - jpn - ain tags: - translation --- mt5-small-ain-jpn-mt is a machine translation model pretrained with [Google's mT5-small](https://huggingface.co/google/mt5-small) and fine-tuned on bilingual datasets crawled from the Web. It translates the Ainu language into Japanese.
[ "TRANSLATION" ]
Non_BioNLP
mt5-small-ain-jpn-mt is a machine translation model pretrained with [Google's mT5-small](https://huggingface.co/google/mt5-small) and fine-tuned on bilingual datasets crawled from the Web. It translates the Ainu language into Japanese.
{"language": ["jpn", "ain"], "tags": ["translation"]}
jingyeom/korean_embedding_model
jingyeom
sentence-similarity
[ "sentence-transformers", "safetensors", "roberta", "feature-extraction", "sentence-similarity", "mteb", "model-index", "autotrain_compatible", "text-embeddings-inference", "endpoints_compatible", "region:us" ]
2024-01-15T00:45:15
2024-01-15T00:48:35
0
1
--- pipeline_tag: sentence-similarity tags: - sentence-transformers - feature-extraction - sentence-similarity - mteb model-index: - name: korean_embedding_model results: - task: type: STS dataset: name: MTEB BIOSSES type: mteb/biosses-sts config: default split: test revision...
[ "SUMMARIZATION" ]
Non_BioNLP
# {MODEL_NAME} This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 1024 dimensional dense vector space and can be used for tasks like clustering or semantic search. <!--- Describe your model here --> ## Usage (Sentence-Transformers) Using this model becomes easy when...
{"pipeline_tag": "sentence-similarity", "tags": ["sentence-transformers", "feature-extraction", "sentence-similarity", "mteb"], "model-index": [{"name": "korean_embedding_model", "results": [{"task": {"type": "STS"}, "dataset": {"name": "MTEB BIOSSES", "type": "mteb/biosses-sts", "config": "default", "split": "test", "...
TransferGraph/dhimskyy_wiki-bert-finetuned-lora-tweet_eval_emotion
TransferGraph
text-classification
[ "peft", "safetensors", "parquet", "text-classification", "dataset:tweet_eval", "base_model:dhimskyy/wiki-bert", "base_model:adapter:dhimskyy/wiki-bert", "model-index", "region:us" ]
2024-02-29T12:50:31
2024-02-29T12:50:33
0
0
--- base_model: dhimskyy/wiki-bert datasets: - tweet_eval library_name: peft metrics: - accuracy tags: - parquet - text-classification model-index: - name: dhimskyy_wiki-bert-finetuned-lora-tweet_eval_emotion results: - task: type: text-classification name: Text Classification dataset: name: t...
[ "TEXT_CLASSIFICATION" ]
Non_BioNLP
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # dhimskyy_wiki-bert-finetuned-lora-tweet_eval_emotion This model is a fine-tuned version of [dhimskyy/wiki-bert](https://huggingf...
{"base_model": "dhimskyy/wiki-bert", "datasets": ["tweet_eval"], "library_name": "peft", "metrics": ["accuracy"], "tags": ["parquet", "text-classification"], "model-index": [{"name": "dhimskyy_wiki-bert-finetuned-lora-tweet_eval_emotion", "results": [{"task": {"type": "text-classification", "name": "Text Classification...
RichardErkhov/Qwen_-_Qwen2-0.5B-4bits
RichardErkhov
null
[ "safetensors", "qwen2", "4-bit", "bitsandbytes", "region:us" ]
2024-10-30T13:36:21
2024-10-30T13:36:50
4
0
--- {} --- Quantization made by Richard Erkhov. [Github](https://github.com/RichardErkhov) [Discord](https://discord.gg/pvy7H8DZMG) [Request more models](https://github.com/RichardErkhov/quant_request) Qwen2-0.5B - bnb 4bits - Model creator: https://huggingface.co/Qwen/ - Original model: https://huggingface.co/Qwe...
[ "QUESTION_ANSWERING", "TRANSLATION" ]
Non_BioNLP
Quantization made by Richard Erkhov. [Github](https://github.com/RichardErkhov) [Discord](https://discord.gg/pvy7H8DZMG) [Request more models](https://github.com/RichardErkhov/quant_request) Qwen2-0.5B - bnb 4bits - Model creator: https://huggingface.co/Qwen/ - Original model: https://huggingface.co/Qwen/Qwen2-0.5...
{}
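The two RichardErkhov rows in this sample are pre-quantized bitsandbytes 4-bit checkpoints. A hedged loading sketch; serialized bnb-4bit weights normally load straight through `from_pretrained`, but a CUDA device and the `bitsandbytes` package are assumed:

```python
# Loading a pre-quantized bnb-4bit checkpoint (requires a GPU + bitsandbytes).
from transformers import AutoModelForCausalLM, AutoTokenizer

repo = "RichardErkhov/Qwen_-_Qwen2-0.5B-4bits"
tokenizer = AutoTokenizer.from_pretrained(repo)
# The 4-bit quantization config is stored in the checkpoint itself.
model = AutoModelForCausalLM.from_pretrained(repo, device_map="auto")

inputs = tokenizer("The capital of France is", return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=8)[0]))
```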
TransferGraph/nurkayevaa_autonlp-bert-covid-407910458-finetuned-lora-tweet_eval_sentiment
TransferGraph
text-classification
[ "peft", "safetensors", "parquet", "text-classification", "dataset:tweet_eval", "base_model:nurkayevaa/autonlp-bert-covid-407910458", "base_model:adapter:nurkayevaa/autonlp-bert-covid-407910458", "model-index", "region:us" ]
2024-02-29T13:08:52
2024-02-29T13:08:54
0
0
--- base_model: nurkayevaa/autonlp-bert-covid-407910458 datasets: - tweet_eval library_name: peft metrics: - accuracy tags: - parquet - text-classification model-index: - name: nurkayevaa_autonlp-bert-covid-407910458-finetuned-lora-tweet_eval_sentiment results: - task: type: text-classification name: Te...
[ "TEXT_CLASSIFICATION" ]
Non_BioNLP
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # nurkayevaa_autonlp-bert-covid-407910458-finetuned-lora-tweet_eval_sentiment This model is a fine-tuned version of [nurkayevaa/au...
{"base_model": "nurkayevaa/autonlp-bert-covid-407910458", "datasets": ["tweet_eval"], "library_name": "peft", "metrics": ["accuracy"], "tags": ["parquet", "text-classification"], "model-index": [{"name": "nurkayevaa_autonlp-bert-covid-407910458-finetuned-lora-tweet_eval_sentiment", "results": [{"task": {"type": "text-c...
north/t5_large_NCC
north
text2text-generation
[ "transformers", "pytorch", "tf", "jax", "tensorboard", "t5", "text2text-generation", "no", "nn", "sv", "dk", "is", "en", "dataset:nbailab/NCC", "dataset:mc4", "dataset:wikipedia", "arxiv:2104.09617", "arxiv:1910.10683", "license:apache-2.0", "autotrain_compatible", "text-gene...
2022-05-21T11:46:30
2022-10-13T13:54:32
26
1
--- datasets: - nbailab/NCC - mc4 - wikipedia language: - false - nn - sv - dk - is - en license: apache-2.0 widget: - text: <extra_id_0> hver uke samles Regjeringens medlemmer til Statsrรฅd pรฅ <extra_id_1>. Dette organet er รธverste <extra_id_2> i Norge. For at mรธtet skal vรฆre <extra_id_3>, mรฅ over halvparten av...
[ "TRANSLATION" ]
Non_BioNLP
The North-T5-models are a set of Norwegian and Scandinavian sequence-to-sequence-models. It builds upon the flexible [T5](https://github.com/google-research/text-to-text-transfer-transformer) and [T5X](https://github.com/google-research/t5x) and can be used for a variety of NLP tasks ranging from classification to tra...
{"datasets": ["nbailab/NCC", "mc4", "wikipedia"], "language": [false, "nn", "sv", "dk", "is", "en"], "license": "apache-2.0", "widget": [{"text": "<extra_id_0> hver uke samles Regjeringens medlemmer til Statsrรฅd pรฅ <extra_id_1>. Dette organet er รธverste <extra_id_2> i Norge. For at mรธtet skal vรฆre <extra_id_3>, mรฅ over...
mmcquade11-test/reuters-summarization
mmcquade11-test
text2text-generation
[ "transformers", "pytorch", "pegasus", "text2text-generation", "autonlp", "en", "dataset:mmcquade11/autonlp-data-reuters-summarization", "co2_eq_emissions", "autotrain_compatible", "endpoints_compatible", "region:us" ]
2022-03-02T23:29:05
2021-11-30T21:43:51
16
0
--- datasets: - mmcquade11/autonlp-data-reuters-summarization language: en tags: - a - u - t - o - n - l - p widget: - text: I love AutoNLP ๐Ÿค— co2_eq_emissions: 286.4350821612984 --- This is an AutoNLP model I trained on the Reuters dataset # Model Trained Using AutoNLP - Problem type: Summarization - Model ID: 34018133...
[ "SUMMARIZATION" ]
Non_BioNLP
This is an AutoNLP model I trained on the Reuters dataset # Model Trained Using AutoNLP - Problem type: Summarization - Model ID: 34018133 - CO2 Emissions (in grams): 286.4350821612984 ## Validation Metrics - Loss: 1.1805976629257202 - Rouge1: 55.4013 - Rouge2: 30.8004 - RougeL: 52.57 - RougeLsum: 52.6103 - Gen Len: 1...
{"datasets": ["mmcquade11/autonlp-data-reuters-summarization"], "language": "en", "tags": ["a", "u", "t", "o", "n", "l", "p"], "widget": [{"text": "I love AutoNLP ๐Ÿค—"}], "co2_eq_emissions": 286.4350821612984}
microsoft/prophetnet-large-uncased-cnndm
microsoft
text2text-generation
[ "transformers", "pytorch", "rust", "prophetnet", "text2text-generation", "en", "dataset:cnn_dailymail", "arxiv:2001.04063", "autotrain_compatible", "endpoints_compatible", "region:us" ]
2022-03-02T23:29:05
2023-01-24T16:56:43
965
2
--- datasets: - cnn_dailymail language: en --- ## prophetnet-large-uncased-cnndm Fine-tuned weights (converted from the [original fairseq version repo](https://github.com/microsoft/ProphetNet)) for [ProphetNet](https://arxiv.org/abs/2001.04063) on the CNN/DailyMail summarization task. ProphetNet is a new pre-trained language...
[ "SUMMARIZATION" ]
Non_BioNLP
## prophetnet-large-uncased-cnndm Fine-tuned weights (converted from the [original fairseq version repo](https://github.com/microsoft/ProphetNet)) for [ProphetNet](https://arxiv.org/abs/2001.04063) on the CNN/DailyMail summarization task. ProphetNet is a new pre-trained language model for sequence-to-sequence learning with a...
{"datasets": ["cnn_dailymail"], "language": "en"}
mtsdurica/madlad400-3b-mt-Q4_0-GGUF
mtsdurica
translation
[ "transformers", "gguf", "text2text-generation", "text-generation-inference", "llama-cpp", "gguf-my-repo", "translation", "multilingual", "en", "ru", "es", "fr", "de", "it", "pt", "pl", "nl", "vi", "tr", "sv", "id", "ro", "cs", "zh", "hu", "ja", "th", "fi", "fa...
2024-07-13T15:01:37
2024-07-13T15:01:51
45
0
--- base_model: google/madlad400-3b-mt datasets: - allenai/MADLAD-400 language: - multilingual - en - ru - es - fr - de - it - pt - pl - nl - vi - tr - sv - id - ro - cs - zh - hu - ja - th - fi - fa - uk - da - el - 'no' - bg - sk - ko - ar - lt - ca - sl - he - et - lv - hi - sq - ms - az - sr - ta - hr - kk - is - m...
[ "TRANSLATION" ]
Non_BioNLP
# mtsdurica/madlad400-3b-mt-Q4_0-GGUF This model was converted to GGUF format from [`google/madlad400-3b-mt`](https://huggingface.co/google/madlad400-3b-mt) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space. Refer to the [original model card](https://huggingfac...
{"base_model": "google/madlad400-3b-mt", "datasets": ["allenai/MADLAD-400"], "language": ["multilingual", "en", "ru", "es", "fr", "de", "it", "pt", "pl", "nl", "vi", "tr", "sv", "id", "ro", "cs", "zh", "hu", "ja", "th", "fi", "fa", "uk", "da", "el", "no", "bg", "sk", "ko", "ar", "lt", "ca", "sl", "he", "et", "lv", "hi"...
gokulsrinivasagan/bert_base_lda_100_stsb
gokulsrinivasagan
text-classification
[ "transformers", "tensorboard", "safetensors", "distilbert", "text-classification", "generated_from_trainer", "en", "dataset:glue", "base_model:gokulsrinivasagan/bert_base_lda_100", "base_model:finetune:gokulsrinivasagan/bert_base_lda_100", "model-index", "autotrain_compatible", "endpoints_co...
2024-11-22T14:34:33
2024-11-22T14:36:23
5
0
--- base_model: gokulsrinivasagan/bert_base_lda_100 datasets: - glue language: - en library_name: transformers metrics: - spearmanr tags: - generated_from_trainer model-index: - name: bert_base_lda_100_stsb results: - task: type: text-classification name: Text Classification dataset: name: GLU...
[ "TEXT_CLASSIFICATION" ]
Non_BioNLP
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bert_base_lda_100_stsb This model is a fine-tuned version of [gokulsrinivasagan/bert_base_lda_100](https://huggingface.co/gokuls...
{"base_model": "gokulsrinivasagan/bert_base_lda_100", "datasets": ["glue"], "language": ["en"], "library_name": "transformers", "metrics": ["spearmanr"], "tags": ["generated_from_trainer"], "model-index": [{"name": "bert_base_lda_100_stsb", "results": [{"task": {"type": "text-classification", "name": "Text Classificati...
SEBIS/code_trans_t5_base_code_documentation_generation_go
SEBIS
summarization
[ "transformers", "pytorch", "jax", "t5", "feature-extraction", "summarization", "text-generation-inference", "endpoints_compatible", "region:us" ]
2022-03-02T23:29:04
2021-06-23T04:12:04
128
0
--- tags: - summarization widget: - text: func ( pr * Progress ) needSnapshotAbort ( ) bool { return pr . State == ProgressStateSnapshot && pr . Match >= pr . PendingSnapshot } --- # CodeTrans model for code documentation generation go Pretrained model on programming language go using the t5 base model architec...
[ "SUMMARIZATION" ]
Non_BioNLP
# CodeTrans model for code documentation generation go Pretrained model on programming language go using the t5 base model architecture. It was first released in [this repository](https://github.com/agemagician/CodeTrans). This model is trained on tokenized go code functions: it works best with tokenized go functions...
{"tags": ["summarization"], "widget": [{"text": "func ( pr * Progress ) needSnapshotAbort ( ) bool { return pr . State == ProgressStateSnapshot && pr . Match >= pr . PendingSnapshot }"}]}
TransferGraph/YeRyeongLee_electra-base-discriminator-finetuned-filtered-0602-finetuned-lora-tweet_eval_irony
TransferGraph
text-classification
[ "peft", "safetensors", "parquet", "text-classification", "dataset:tweet_eval", "base_model:YeRyeongLee/electra-base-discriminator-finetuned-filtered-0602", "base_model:adapter:YeRyeongLee/electra-base-discriminator-finetuned-filtered-0602", "license:apache-2.0", "model-index", "region:us" ]
2024-02-27T17:33:05
2024-02-29T13:38:35
0
0
--- base_model: YeRyeongLee/electra-base-discriminator-finetuned-filtered-0602 datasets: - tweet_eval library_name: peft license: apache-2.0 metrics: - accuracy tags: - parquet - text-classification model-index: - name: YeRyeongLee_electra-base-discriminator-finetuned-filtered-0602-finetuned-lora-tweet_eval_irony res...
[ "TEXT_CLASSIFICATION" ]
Non_BioNLP
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # YeRyeongLee_electra-base-discriminator-finetuned-filtered-0602-finetuned-lora-tweet_eval_irony This model is a fine-tuned versio...
{"base_model": "YeRyeongLee/electra-base-discriminator-finetuned-filtered-0602", "datasets": ["tweet_eval"], "library_name": "peft", "license": "apache-2.0", "metrics": ["accuracy"], "tags": ["parquet", "text-classification"], "model-index": [{"name": "YeRyeongLee_electra-base-discriminator-finetuned-filtered-0602-fine...
Alibaba-NLP/gte-Qwen2-7B-instruct
Alibaba-NLP
sentence-similarity
[ "sentence-transformers", "safetensors", "qwen2", "text-generation", "mteb", "transformers", "Qwen2", "sentence-similarity", "custom_code", "arxiv:2308.03281", "license:apache-2.0", "model-index", "autotrain_compatible", "text-generation-inference", "text-embeddings-inference", "endpoin...
2024-06-15T11:24:21
2025-03-24T09:43:55
110,385
348
--- license: apache-2.0 tags: - mteb - sentence-transformers - transformers - Qwen2 - sentence-similarity model-index: - name: gte-qwen2-7B-instruct results: - task: type: Classification dataset: name: MTEB AmazonCounterfactualClassification (en) type: mteb/amazon_counterfactual config: ...
[ "SUMMARIZATION" ]
Non_BioNLP
## gte-Qwen2-7B-instruct **gte-Qwen2-7B-instruct** is the latest model in the gte (General Text Embedding) model family that ranks **No.1** in both English and Chinese evaluations on the Massive Text Embedding Benchmark [MTEB benchmark](https://huggingface.co/spaces/mteb/leaderboard) (as of June 16, 2024). Recently,...
{"license": "apache-2.0", "tags": ["mteb", "sentence-transformers", "transformers", "Qwen2", "sentence-similarity"], "model-index": [{"name": "gte-qwen2-7B-instruct", "results": [{"task": {"type": "Classification"}, "dataset": {"name": "MTEB AmazonCounterfactualClassification (en)", "type": "mteb/amazon_counterfactual"...
twadada/nmc-cls-100_correct
twadada
null
[ "mteb", "model-index", "region:us" ]
2024-09-13T07:45:45
2024-09-13T07:45:57
0
0
--- tags: - mteb model-index: - name: nomic_classification_100 results: - task: type: Classification dataset: name: MTEB AmazonCounterfactualClassification (en) type: None config: en split: test revision: e8379541af4e31359cca9fbcf4b00f2671dba205 metrics: - type: accur...
[ "SUMMARIZATION" ]
Non_BioNLP
{"tags": ["mteb"], "model-index": [{"name": "nomic_classification_100", "results": [{"task": {"type": "Classification"}, "dataset": {"name": "MTEB AmazonCounterfactualClassification (en)", "type": "None", "config": "en", "split": "test", "revision": "e8379541af4e31359cca9fbcf4b00f2671dba205"}, "metrics": [{"type": "acc...
mradermacher/airoboros-34b-3.3-i1-GGUF
mradermacher
null
[ "transformers", "gguf", "en", "dataset:jondurbin/airoboros-3.2", "dataset:bluemoon-fandom-1-1-rp-cleaned", "dataset:boolq", "dataset:jondurbin/gutenberg-dpo-v0.1", "dataset:LDJnr/Capybara", "dataset:jondurbin/cinematika-v0.1", "dataset:glaiveai/glaive-function-calling-v2", "dataset:grimulkan/Lim...
2024-04-03T02:52:22
2024-05-06T05:21:32
490
1
--- base_model: jondurbin/airoboros-34b-3.3 datasets: - jondurbin/airoboros-3.2 - bluemoon-fandom-1-1-rp-cleaned - boolq - jondurbin/gutenberg-dpo-v0.1 - LDJnr/Capybara - jondurbin/cinematika-v0.1 - glaiveai/glaive-function-calling-v2 - grimulkan/LimaRP-augmented - piqa - Vezora/Tested-22k-Python-Alpaca - mattpscott/ai...
[ "SUMMARIZATION" ]
Non_BioNLP
## About <!-- ### convert_type: --> <!-- ### vocab_type: --> weighted/imatrix quants of https://huggingface.co/jondurbin/airoboros-34b-3.3 <!-- provided-files --> static quants are available at https://huggingface.co/mradermacher/airoboros-34b-3.3-GGUF ## Usage If you are unsure how to use GGUF files, refer to one...
{"base_model": "jondurbin/airoboros-34b-3.3", "datasets": ["jondurbin/airoboros-3.2", "bluemoon-fandom-1-1-rp-cleaned", "boolq", "jondurbin/gutenberg-dpo-v0.1", "LDJnr/Capybara", "jondurbin/cinematika-v0.1", "glaiveai/glaive-function-calling-v2", "grimulkan/LimaRP-augmented", "piqa", "Vezora/Tested-22k-Python-Alpaca", ...
YxBxRyXJx/bge-base-financial-matryoshka
YxBxRyXJx
sentence-similarity
[ "sentence-transformers", "safetensors", "bert", "sentence-similarity", "feature-extraction", "generated_from_trainer", "dataset_size:5600", "loss:MatryoshkaLoss", "loss:MultipleNegativesRankingLoss", "en", "arxiv:1908.10084", "arxiv:2205.13147", "arxiv:1705.00652", "base_model:BAAI/bge-bas...
2024-11-15T10:18:23
2024-11-15T10:19:00
6
0
--- base_model: BAAI/bge-base-en-v1.5 language: - en library_name: sentence-transformers license: apache-2.0 metrics: - cosine_accuracy@1 - cosine_accuracy@3 - cosine_accuracy@5 - cosine_accuracy@10 - cosine_precision@1 - cosine_precision@3 - cosine_precision@5 - cosine_precision@10 - cosine_recall@1 - cosine_recall@3 ...
[ "TEXT_CLASSIFICATION" ]
Non_BioNLP
# BGE base Financial Matryoshka This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [BAAI/bge-base-en-v1.5](https://huggingface.co/BAAI/bge-base-en-v1.5) on the json dataset. It maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for semantic textual similarit...
{"base_model": "BAAI/bge-base-en-v1.5", "language": ["en"], "library_name": "sentence-transformers", "license": "apache-2.0", "metrics": ["cosine_accuracy@1", "cosine_accuracy@3", "cosine_accuracy@5", "cosine_accuracy@10", "cosine_precision@1", "cosine_precision@3", "cosine_precision@5", "cosine_precision@10", "cosine_...
nahyeonkang/ai.keepit
nahyeonkang
text-classification
[ "transformers", "pytorch", "tensorboard", "bert", "text-classification", "generated_from_trainer", "dataset:nsmc", "base_model:beomi/kcbert-base", "base_model:finetune:beomi/kcbert-base", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
2023-08-03T16:21:01
2023-08-03T17:56:35
13
0
--- base_model: beomi/kcbert-base datasets: - nsmc license: apache-2.0 metrics: - accuracy tags: - generated_from_trainer model-index: - name: ai.keepit results: - task: type: text-classification name: Text Classification dataset: name: nsmc type: nsmc config: default split: ...
[ "TEXT_CLASSIFICATION" ]
Non_BioNLP
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # ai.keepit This model is a fine-tuned version of [beomi/kcbert-base](https://huggingface.co/beomi/kcbert-base) on the nsmc datase...
{"base_model": "beomi/kcbert-base", "datasets": ["nsmc"], "license": "apache-2.0", "metrics": ["accuracy"], "tags": ["generated_from_trainer"], "model-index": [{"name": "ai.keepit", "results": [{"task": {"type": "text-classification", "name": "Text Classification"}, "dataset": {"name": "nsmc", "type": "nsmc", "config":...
google/t5-large-lm-adapt
google
text2text-generation
[ "transformers", "pytorch", "tf", "t5", "text2text-generation", "t5-lm-adapt", "en", "dataset:c4", "arxiv:2002.05202", "arxiv:1910.10683", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
2022-03-02T23:29:05
2023-01-24T16:52:08
2,748
8
--- datasets: - c4 language: en license: apache-2.0 tags: - t5-lm-adapt --- [Google's T5](https://ai.googleblog.com/2020/02/exploring-transfer-learning-with-t5.html) Version 1.1 - LM-Adapted ## Version 1.1 - LM-Adapted [T5 Version 1.1 - LM Adapted](https://github.com/google-research/text-to-text-transfer-transforme...
[ "TEXT_CLASSIFICATION", "QUESTION_ANSWERING", "SUMMARIZATION" ]
Non_BioNLP
[Google's T5](https://ai.googleblog.com/2020/02/exploring-transfer-learning-with-t5.html) Version 1.1 - LM-Adapted ## Version 1.1 - LM-Adapted [T5 Version 1.1 - LM Adapted](https://github.com/google-research/text-to-text-transfer-transformer/blob/main/released_checkpoints.md#lm-adapted-t511lm100k) includes the foll...
{"datasets": ["c4"], "language": "en", "license": "apache-2.0", "tags": ["t5-lm-adapt"]}
ahmeddbahaa/mT5_multilingual_XLSum-finetuned-fa-finetuned-ar
ahmeddbahaa
summarization
[ "transformers", "pytorch", "tensorboard", "mt5", "text2text-generation", "summarization", "Abstractive Summarization", "ar", "generated_from_trainer", "dataset:xlsum", "autotrain_compatible", "endpoints_compatible", "region:us" ]
2022-06-08T16:23:58
2022-06-08T22:22:19
26
1
--- datasets: - xlsum tags: - mt5 - summarization - Abstractive Summarization - ar - generated_from_trainer model-index: - name: mT5_multilingual_XLSum-finetuned-fa-finetuned-ar results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should pr...
[ "SUMMARIZATION" ]
Non_BioNLP
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # mT5_multilingual_XLSum-finetuned-fa-finetuned-ar This model is a fine-tuned version of [ahmeddbahaa/mT5_multilingual_XLSum-finet...
{"datasets": ["xlsum"], "tags": ["mt5", "summarization", "Abstractive Summarization", "ar", "generated_from_trainer"], "model-index": [{"name": "mT5_multilingual_XLSum-finetuned-fa-finetuned-ar", "results": []}]}
gokuls/bert_uncased_L-10_H-768_A-12_emotion
gokuls
text-classification
[ "transformers", "pytorch", "bert", "text-classification", "generated_from_trainer", "dataset:emotion", "base_model:google/bert_uncased_L-10_H-768_A-12", "base_model:finetune:google/bert_uncased_L-10_H-768_A-12", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible...
2023-10-06T16:51:50
2023-10-06T16:59:05
7
0
--- base_model: google/bert_uncased_L-10_H-768_A-12 datasets: - emotion license: apache-2.0 metrics: - accuracy tags: - generated_from_trainer model-index: - name: bert_uncased_L-10_H-768_A-12_emotion results: - task: type: text-classification name: Text Classification dataset: name: emotion ...
[ "TEXT_CLASSIFICATION" ]
Non_BioNLP
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bert_uncased_L-10_H-768_A-12_emotion This model is a fine-tuned version of [google/bert_uncased_L-10_H-768_A-12](https://hugging...
{"base_model": "google/bert_uncased_L-10_H-768_A-12", "datasets": ["emotion"], "license": "apache-2.0", "metrics": ["accuracy"], "tags": ["generated_from_trainer"], "model-index": [{"name": "bert_uncased_L-10_H-768_A-12_emotion", "results": [{"task": {"type": "text-classification", "name": "Text Classification"}, "data...
benayad7/concat-e5-small-bge-small-01
benayad7
null
[ "mteb", "model-index", "region:us" ]
2024-10-10T09:07:21
2024-10-14T09:37:01
0
0
--- tags: - mteb model-index: - name: no_model_name_available results: - task: type: Classification dataset: name: MTEB AmazonCounterfactualClassification (en-ext) type: mteb/amazon_counterfactual config: en-ext split: test revision: e8379541af4e31359cca9fbcf4b00f2671dba205 ...
[ "SUMMARIZATION" ]
Non_BioNLP
Add stuff later!
{"tags": ["mteb"], "model-index": [{"name": "no_model_name_available", "results": [{"task": {"type": "Classification"}, "dataset": {"name": "MTEB AmazonCounterfactualClassification (en-ext)", "type": "mteb/amazon_counterfactual", "config": "en-ext", "split": "test", "revision": "e8379541af4e31359cca9fbcf4b00f2671dba205...
mspy/twitter-paraphrase-embeddings
mspy
sentence-similarity
[ "sentence-transformers", "safetensors", "mpnet", "sentence-similarity", "feature-extraction", "generated_from_trainer", "dataset_size:13063", "loss:CosineSimilarityLoss", "arxiv:1908.10084", "base_model:sentence-transformers/all-mpnet-base-v2", "base_model:finetune:sentence-transformers/all-mpne...
2024-07-28T12:26:58
2024-07-28T12:29:07
5
0
--- base_model: sentence-transformers/all-mpnet-base-v2 datasets: [] language: [] library_name: sentence-transformers metrics: - pearson_cosine - spearman_cosine - pearson_manhattan - spearman_manhattan - pearson_euclidean - spearman_euclidean - pearson_dot - spearman_dot - pearson_max - spearman_max pi...
[ "TEXT_CLASSIFICATION", "SEMANTIC_SIMILARITY" ]
Non_BioNLP
# SentenceTransformer based on sentence-transformers/all-mpnet-base-v2 This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [sentence-transformers/all-mpnet-base-v2](https://huggingface.co/sentence-transformers/all-mpnet-base-v2). It maps sentences & paragraphs to a 768-dimensional dense v...
{"base_model": "sentence-transformers/all-mpnet-base-v2", "datasets": [], "language": [], "library_name": "sentence-transformers", "metrics": ["pearson_cosine", "spearman_cosine", "pearson_manhattan", "spearman_manhattan", "pearson_euclidean", "spearman_euclidean", "pearson_dot", "spearman_dot", "pearson_max", "spearma...
aroot/eng-mya-wsample.43a
aroot
translation
[ "transformers", "pytorch", "tensorboard", "mbart", "text2text-generation", "translation", "generated_from_trainer", "autotrain_compatible", "endpoints_compatible", "region:us" ]
2023-07-06T04:06:12
2023-07-06T04:28:08
12
0
--- metrics: - bleu tags: - translation - generated_from_trainer model-index: - name: eng-mya-wsample.43a results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # eng-m...
[ "TRANSLATION" ]
Non_BioNLP
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # eng-mya-wsample.43a This model is a fine-tuned version of [facebook/mbart-large-50-many-to-many-mmt](https://huggingface.co/face...
{"metrics": ["bleu"], "tags": ["translation", "generated_from_trainer"], "model-index": [{"name": "eng-mya-wsample.43a", "results": []}]}
cerebras/Cerebras-GPT-13B
cerebras
text-generation
[ "transformers", "pytorch", "gpt2", "feature-extraction", "causal-lm", "text-generation", "en", "dataset:the_pile", "arxiv:2304.03208", "arxiv:2203.15556", "arxiv:2101.00027", "license:apache-2.0", "text-generation-inference", "region:us" ]
2023-03-20T20:45:54
2023-11-22T21:49:12
2,440
647
--- datasets: - the_pile language: - en license: apache-2.0 pipeline_tag: text-generation tags: - pytorch - causal-lm inference: false --- # Cerebras-GPT 13B Check out our [Blog Post](https://www.cerebras.net/cerebras-gpt) and [arXiv paper](https://arxiv.org/abs/2304.03208)! ## Model Description The Cerebras-GPT fam...
[ "TRANSLATION" ]
Non_BioNLP
# Cerebras-GPT 13B Check out our [Blog Post](https://www.cerebras.net/cerebras-gpt) and [arXiv paper](https://arxiv.org/abs/2304.03208)! ## Model Description The Cerebras-GPT family is released to facilitate research into LLM scaling laws using open architectures and data sets and demonstrate the simplicity of and s...
{"datasets": ["the_pile"], "language": ["en"], "license": "apache-2.0", "pipeline_tag": "text-generation", "tags": ["pytorch", "causal-lm"], "inference": false}
gaudi/opus-mt-tr-en-ctranslate2
gaudi
translation
[ "transformers", "marian", "ctranslate2", "translation", "license:apache-2.0", "endpoints_compatible", "region:us" ]
2024-07-17T00:17:05
2024-10-18T22:51:04
6
0
--- license: apache-2.0 tags: - ctranslate2 - translation --- # Repository General Information ## Inspired by and derived from the work of [Helsinki-NLP](https://huggingface.co/Helsinki-NLP), [CTranslate2](https://github.com/OpenNMT/CTranslate2), and [michaelfeil](https://huggingface.co/michaelfeil)! - Link to Original...
[ "TRANSLATION" ]
Non_BioNLP
# Repository General Information ## Inspired by and derived from the work of [Helsinki-NLP](https://huggingface.co/Helsinki-NLP), [CTranslate2](https://github.com/OpenNMT/CTranslate2), and [michaelfeil](https://huggingface.co/michaelfeil)! - Link to Original Model ([Helsinki-NLP](https://huggingface.co/Helsinki-NLP)): ...
{"license": "apache-2.0", "tags": ["ctranslate2", "translation"]}
Helsinki-NLP/opus-mt-af-es
Helsinki-NLP
translation
[ "transformers", "pytorch", "tf", "marian", "text2text-generation", "translation", "af", "es", "license:apache-2.0", "autotrain_compatible", "endpoints_compatible", "region:us" ]
2022-03-02T23:29:04
2023-08-16T11:25:22
96
0
--- language: - af - es license: apache-2.0 tags: - translation --- ### afr-spa * source group: Afrikaans * target group: Spanish * OPUS readme: [afr-spa](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/afr-spa/README.md) * model: transformer-align * source language(s): afr * target language...
[ "TRANSLATION" ]
Non_BioNLP
### afr-spa * source group: Afrikaans * target group: Spanish * OPUS readme: [afr-spa](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/afr-spa/README.md) * model: transformer-align * source language(s): afr * target language(s): spa * model: transformer-align * pre-processing: normalization ...
{"language": ["af", "es"], "license": "apache-2.0", "tags": ["translation"]}
TheBloke/finance-LLM-GGUF
TheBloke
text-generation
[ "transformers", "gguf", "llama", "finance", "text-generation", "en", "dataset:Open-Orca/OpenOrca", "dataset:GAIR/lima", "dataset:WizardLM/WizardLM_evol_instruct_V2_196k", "arxiv:2309.09530", "base_model:AdaptLLM/finance-LLM", "base_model:quantized:AdaptLLM/finance-LLM", "license:other", "r...
2023-12-24T21:28:55
2023-12-24T21:33:31
757
19
--- base_model: AdaptLLM/finance-LLM datasets: - Open-Orca/OpenOrca - GAIR/lima - WizardLM/WizardLM_evol_instruct_V2_196k language: - en license: other metrics: - accuracy model_name: Finance LLM pipeline_tag: text-generation tags: - finance inference: false model_creator: AdaptLLM model_type: llama prompt_template: '[...
[ "QUESTION_ANSWERING" ]
Non_BioNLP
<!-- markdownlint-disable MD041 --> <!-- header start --> <!-- 200823 --> <div style="width: auto; margin-left: auto; margin-right: auto"> <img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;"> </div> <div style="display: flex; justify-content:...
{"base_model": "AdaptLLM/finance-LLM", "datasets": ["Open-Orca/OpenOrca", "GAIR/lima", "WizardLM/WizardLM_evol_instruct_V2_196k"], "language": ["en"], "license": "other", "metrics": ["accuracy"], "model_name": "Finance LLM", "pipeline_tag": "text-generation", "tags": ["finance"], "inference": false, "model_creator": "A...
anismahmahi/G2_replace_Whata_repetition_with_noPropaganda_SetFit
anismahmahi
text-classification
[ "setfit", "safetensors", "mpnet", "sentence-transformers", "text-classification", "generated_from_setfit_trainer", "arxiv:2209.11055", "base_model:sentence-transformers/paraphrase-mpnet-base-v2", "base_model:finetune:sentence-transformers/paraphrase-mpnet-base-v2", "model-index", "region:us" ]
2024-01-07T13:33:28
2024-01-07T13:33:55
3
0
--- base_model: sentence-transformers/paraphrase-mpnet-base-v2 library_name: setfit metrics: - accuracy pipeline_tag: text-classification tags: - setfit - sentence-transformers - text-classification - generated_from_setfit_trainer widget: - text: Fox News, The Washington Post, NBC News, The Associated Press and the Los...
[ "TEXT_CLASSIFICATION" ]
Non_BioNLP
# SetFit with sentence-transformers/paraphrase-mpnet-base-v2 This is a [SetFit](https://github.com/huggingface/setfit) model that can be used for Text Classification. This SetFit model uses [sentence-transformers/paraphrase-mpnet-base-v2](https://huggingface.co/sentence-transformers/paraphrase-mpnet-base-v2) as the S...
{"base_model": "sentence-transformers/paraphrase-mpnet-base-v2", "library_name": "setfit", "metrics": ["accuracy"], "pipeline_tag": "text-classification", "tags": ["setfit", "sentence-transformers", "text-classification", "generated_from_setfit_trainer"], "widget": [{"text": "Fox News, The Washington Post, NBC News, Th...
MultiBertGunjanPatrick/multiberts-seed-4-100k
MultiBertGunjanPatrick
null
[ "transformers", "pytorch", "bert", "pretraining", "exbert", "multiberts", "multiberts-seed-4", "en", "dataset:bookcorpus", "dataset:wikipedia", "arxiv:2106.16163", "license:apache-2.0", "endpoints_compatible", "region:us" ]
2022-03-02T23:29:04
2021-10-04T05:10:05
111
0
--- datasets: - bookcorpus - wikipedia language: en license: apache-2.0 tags: - exbert - multiberts - multiberts-seed-4 --- # MultiBERTs Seed 4 Checkpoint 100k (uncased) Seed 4 intermediate checkpoint 100k MultiBERTs (pretrained BERT) model on English language using a masked language modeling (MLM) objective. It was in...
[ "QUESTION_ANSWERING" ]
Non_BioNLP
# MultiBERTs Seed 4 Checkpoint 100k (uncased) Seed 4 intermediate checkpoint 100k MultiBERTs (pretrained BERT) model on English language using a masked language modeling (MLM) objective. It was introduced in [this paper](https://arxiv.org/pdf/2106.16163.pdf) and first released in [this repository](https://github.com/go...
{"datasets": ["bookcorpus", "wikipedia"], "language": "en", "license": "apache-2.0", "tags": ["exbert", "multiberts", "multiberts-seed-4"]}
DOSaAI/albanian-gpt2-large-120m-instruct-v0.1
DOSaAI
text-generation
[ "transformers", "text-generation", "sq", "en", "license:apache-2.0", "endpoints_compatible", "region:us" ]
2024-03-31T19:27:33
2024-03-31T19:29:56
0
1
--- language: - sq - en library_name: transformers license: apache-2.0 pipeline_tag: text-generation --- # Albanian GPT-2 ## Model Description This model is a fine-tuned version of the GPT-2 model by [OpenAI](https://openai.com/) for Albanian text generation tasks. GPT-2 is a state-of-the-art natural language proces...
[ "SUMMARIZATION" ]
Non_BioNLP
# Albanian GPT-2 ## Model Description This model is a fine-tuned version of the GPT-2 model by [OpenAI](https://openai.com/) for Albanian text generation tasks. GPT-2 is a state-of-the-art natural language processing model developed by OpenAI. It is a variant of the GPT (Generative Pre-trained Transformer) model, pr...
{"language": ["sq", "en"], "library_name": "transformers", "license": "apache-2.0", "pipeline_tag": "text-generation"}
Lvxue/distilled-mt5-small-1-0.5
Lvxue
text2text-generation
[ "transformers", "pytorch", "mt5", "text2text-generation", "generated_from_trainer", "en", "ro", "dataset:wmt16", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
2022-08-12T02:06:37
2022-08-12T03:22:00
11
0
--- datasets: - wmt16 language: - en - ro license: apache-2.0 metrics: - bleu tags: - generated_from_trainer model-index: - name: distilled-mt5-small-1-0.5 results: - task: type: translation name: Translation dataset: name: wmt16 ro-en type: wmt16 args: ro-en metrics: - typ...
[ "TRANSLATION" ]
Non_BioNLP
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilled-mt5-small-1-0.5 This model is a fine-tuned version of [google/mt5-small](https://huggingface.co/google/mt5-small) on t...
{"datasets": ["wmt16"], "language": ["en", "ro"], "license": "apache-2.0", "metrics": ["bleu"], "tags": ["generated_from_trainer"], "model-index": [{"name": "distilled-mt5-small-1-0.5", "results": [{"task": {"type": "translation", "name": "Translation"}, "dataset": {"name": "wmt16 ro-en", "type": "wmt16", "args": "ro-e...
aroot/wsample.49
aroot
translation
[ "transformers", "pytorch", "tensorboard", "mbart", "text2text-generation", "translation", "generated_from_trainer", "autotrain_compatible", "endpoints_compatible", "region:us" ]
2023-07-04T23:03:25
2023-07-05T00:41:23
8
0
--- metrics: - bleu tags: - translation - generated_from_trainer model-index: - name: wsample.49 results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # wsample.49 Th...
[ "TRANSLATION" ]
Non_BioNLP
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # wsample.49 This model is a fine-tuned version of [facebook/mbart-large-50-many-to-many-mmt](https://huggingface.co/facebook/mbar...
{"metrics": ["bleu"], "tags": ["translation", "generated_from_trainer"], "model-index": [{"name": "wsample.49", "results": []}]}
ronaldseoh/long-t5-local-base
ronaldseoh
null
[ "pytorch", "jax", "longt5", "en", "arxiv:2112.07916", "arxiv:1912.08777", "arxiv:1910.10683", "license:apache-2.0", "region:us" ]
2024-09-20T02:08:58
2023-01-24T17:08:34
9
0
--- language: en license: apache-2.0 --- # LongT5 (local attention, base-sized model) LongT5 model pre-trained on English language. The model was introduced in the paper [LongT5: Efficient Text-To-Text Transformer for Long Sequences](https://arxiv.org/pdf/2112.07916.pdf) by Guo et al. and first released in [the LongT...
[ "QUESTION_ANSWERING", "SUMMARIZATION" ]
Non_BioNLP
# LongT5 (local attention, base-sized model) LongT5 model pre-trained on English language. The model was introduced in the paper [LongT5: Efficient Text-To-Text Transformer for Long Sequences](https://arxiv.org/pdf/2112.07916.pdf) by Guo et al. and first released in [the LongT5 repository](https://github.com/google-r...
{"language": "en", "license": "apache-2.0"}
marbogusz/bert-multi-cased-squad_sv
marbogusz
question-answering
[ "transformers", "pytorch", "jax", "bert", "question-answering", "endpoints_compatible", "region:us" ]
2022-03-02T23:29:05
2021-05-19T23:00:13
103
0
--- {} --- Swedish BERT multilingual model trained on a machine-translated (MS neural translation) SQuAD 1.1 dataset
[ "TRANSLATION" ]
Non_BioNLP
Swedish BERT multilingual model trained on a machine-translated (MS neural translation) SQuAD 1.1 dataset
{}
maastrichtlawtech/wizardlm-7b-v1.0-lleqa
maastrichtlawtech
text-generation
[ "peft", "legal", "text-generation", "fr", "dataset:maastrichtlawtech/lleqa", "arxiv:2309.17050", "license:apache-2.0", "region:us" ]
2023-09-28T16:04:51
2023-10-03T09:44:44
4
3
--- datasets: - maastrichtlawtech/lleqa language: - fr library_name: peft license: apache-2.0 metrics: - rouge - meteor pipeline_tag: text-generation tags: - legal inference: false --- # wizardLM-7b-v1.0-lleqa This is a [wizardlm-7b-v1.0](https://huggingface.co/WizardLM/WizardLM-7B-V1.0) model fine-tuned with [QLoRA]...
[ "QUESTION_ANSWERING" ]
Non_BioNLP
# wizardLM-7b-v1.0-lleqa This is a [wizardlm-7b-v1.0](https://huggingface.co/WizardLM/WizardLM-7B-V1.0) model fine-tuned with [QLoRA](https://github.com/artidoro/qlora) for long-form legal question answering in **French**. ## Usage ```python [...] ``` ## Training #### Data We use the [Long-form Legal Question A...
{"datasets": ["maastrichtlawtech/lleqa"], "language": ["fr"], "library_name": "peft", "license": "apache-2.0", "metrics": ["rouge", "meteor"], "pipeline_tag": "text-generation", "tags": ["legal"], "inference": false}
tmnam20/mdeberta-v3-base-vsfc-1
tmnam20
text-classification
[ "transformers", "safetensors", "deberta-v2", "text-classification", "generated_from_trainer", "en", "dataset:tmnam20/VieGLUE", "base_model:microsoft/mdeberta-v3-base", "base_model:finetune:microsoft/mdeberta-v3-base", "license:mit", "model-index", "autotrain_compatible", "endpoints_compatibl...
2024-01-16T08:44:54
2024-01-16T08:47:32
4
0
--- base_model: microsoft/mdeberta-v3-base datasets: - tmnam20/VieGLUE language: - en license: mit metrics: - accuracy tags: - generated_from_trainer model-index: - name: mdeberta-v3-base-vsfc-1 results: - task: type: text-classification name: Text Classification dataset: name: tmnam20/VieGLUE...
[ "TEXT_CLASSIFICATION" ]
Non_BioNLP
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # mdeberta-v3-base-vsfc-1 This model is a fine-tuned version of [microsoft/mdeberta-v3-base](https://huggingface.co/microsoft/mdeb...
{"base_model": "microsoft/mdeberta-v3-base", "datasets": ["tmnam20/VieGLUE"], "language": ["en"], "license": "mit", "metrics": ["accuracy"], "tags": ["generated_from_trainer"], "model-index": [{"name": "mdeberta-v3-base-vsfc-1", "results": [{"task": {"type": "text-classification", "name": "Text Classification"}, "datas...
Triangle104/granite-3.2-2b-instruct-Q5_K_S-GGUF
Triangle104
text-generation
[ "transformers", "gguf", "language", "granite-3.2", "llama-cpp", "gguf-my-repo", "text-generation", "base_model:ibm-granite/granite-3.2-2b-instruct", "base_model:quantized:ibm-granite/granite-3.2-2b-instruct", "license:apache-2.0", "region:us", "conversational" ]
2025-02-28T13:19:41
2025-02-28T13:21:09
18
0
--- base_model: ibm-granite/granite-3.2-2b-instruct library_name: transformers license: apache-2.0 pipeline_tag: text-generation tags: - language - granite-3.2 - llama-cpp - gguf-my-repo inference: false --- # Triangle104/granite-3.2-2b-instruct-Q5_K_S-GGUF This model was converted to GGUF format from [`ibm-granite/gr...
[ "TEXT_CLASSIFICATION", "SUMMARIZATION" ]
Non_BioNLP
# Triangle104/granite-3.2-2b-instruct-Q5_K_S-GGUF This model was converted to GGUF format from [`ibm-granite/granite-3.2-2b-instruct`](https://huggingface.co/ibm-granite/granite-3.2-2b-instruct) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space. Refer to the [o...
{"base_model": "ibm-granite/granite-3.2-2b-instruct", "library_name": "transformers", "license": "apache-2.0", "pipeline_tag": "text-generation", "tags": ["language", "granite-3.2", "llama-cpp", "gguf-my-repo"], "inference": false}
tcepi/sts_bertimbau
tcepi
sentence-similarity
[ "sentence-transformers", "safetensors", "bert", "sentence-similarity", "feature-extraction", "base_model:neuralmind/bert-base-portuguese-cased", "base_model:finetune:neuralmind/bert-base-portuguese-cased", "autotrain_compatible", "text-embeddings-inference", "endpoints_compatible", "region:us" ]
2024-10-23T13:36:44
2024-10-23T13:37:17
7
0
--- base_model: neuralmind/bert-base-portuguese-cased library_name: sentence-transformers pipeline_tag: sentence-similarity tags: - sentence-transformers - sentence-similarity - feature-extraction --- # SentenceTransformer based on neuralmind/bert-base-portuguese-cased This is a [sentence-transformers](https://www.SB...
[ "TEXT_CLASSIFICATION" ]
Non_BioNLP
# SentenceTransformer based on neuralmind/bert-base-portuguese-cased This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [neuralmind/bert-base-portuguese-cased](https://huggingface.co/neuralmind/bert-base-portuguese-cased). It maps sentences & paragraphs to a 768-dimensional dense vector spa...
{"base_model": "neuralmind/bert-base-portuguese-cased", "library_name": "sentence-transformers", "pipeline_tag": "sentence-similarity", "tags": ["sentence-transformers", "sentence-similarity", "feature-extraction"]}
proxectonos/Nos_MT-OpenNMT-es-gl
proxectonos
null
[ "gl", "license:mit", "region:us" ]
2023-02-16T09:27:38
2025-04-11T11:14:47
0
1
--- language: - gl license: mit metrics: - bleu (Gold1): 79.6 - bleu (Gold2): 43.3 - bleu (Flores): 21.8 - bleu (Test-suite): 74.3 --- **English text [here](https://huggingface.co/proxectonos/NOS-MT-OpenNMT-es-gl/blob/main/README_English.md)** **Descrición do Modelo** Modelo feito con OpenNMT-py 3.2 para o par espa...
[ "TRANSLATION" ]
Non_BioNLP
**English text [here](https://huggingface.co/proxectonos/NOS-MT-OpenNMT-es-gl/blob/main/README_English.md)** **Descrición do Modelo** Modelo feito con OpenNMT-py 3.2 para o par español-galego utilizando unha arquitectura transformer. O modelo foi transformado para o formato da ctranslate2. **Como traducir con este...
{"language": ["gl"], "license": "mit", "metrics": [{"bleu (Gold1)": 79.6}, {"bleu (Gold2)": 43.3}, {"bleu (Flores)": 21.8}, {"bleu (Test-suite)": 74.3}]}
chunwoolee0/seqcls_mrpc_bert_base_uncased_model
chunwoolee0
text-classification
[ "transformers", "pytorch", "tensorboard", "bert", "text-classification", "generated_from_trainer", "dataset:glue", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
2023-07-14T23:27:51
2023-07-14T23:32:36
8
0
--- datasets: - glue license: apache-2.0 metrics: - accuracy - f1 tags: - generated_from_trainer model-index: - name: seqcls_mrpc_bert_base_uncased_model results: - task: type: text-classification name: Text Classification dataset: name: glue type: glue config: mrpc split: va...
[ "TEXT_CLASSIFICATION" ]
Non_BioNLP
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # seqcls_mrpc_bert_base_uncased_model This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-u...
{"datasets": ["glue"], "license": "apache-2.0", "metrics": ["accuracy", "f1"], "tags": ["generated_from_trainer"], "model-index": [{"name": "seqcls_mrpc_bert_base_uncased_model", "results": [{"task": {"type": "text-classification", "name": "Text Classification"}, "dataset": {"name": "glue", "type": "glue", "config": "m...
pierreguillou/bert-large-cased-squad-v1.1-portuguese
pierreguillou
question-answering
[ "transformers", "pytorch", "tf", "bert", "question-answering", "bert-large", "pt", "dataset:brWaC", "dataset:squad", "dataset:squad_v1_pt", "license:mit", "endpoints_compatible", "region:us" ]
2022-03-02T23:29:05
2022-01-04T09:57:00
777
45
--- datasets: - brWaC - squad - squad_v1_pt language: pt license: mit metrics: - squad tags: - question-answering - bert - bert-large - pytorch widget: - text: Quando começou a pandemia de Covid-19 no mundo? context: A pandemia de COVID-19, também conhecida como pandemia de coronavírus, é uma pandemia em curso de...
[ "NAMED_ENTITY_RECOGNITION", "QUESTION_ANSWERING", "TEXTUAL_ENTAILMENT" ]
TBD
# Portuguese BERT large cased QA (Question Answering), finetuned on SQUAD v1.1 ![Exemple of what can do the Portuguese BERT large cased QA (Question Answering), finetuned on SQUAD v1.1](https://miro.medium.com/max/5256/1*QxyeAjT2V1OfE2B6nEcs3w.png) ## Introduction The model was trained on the dataset SQUAD v1.1 in ...
{"datasets": ["brWaC", "squad", "squad_v1_pt"], "language": "pt", "license": "mit", "metrics": ["squad"], "tags": ["question-answering", "bert", "bert-large", "pytorch"], "widget": [{"text": "Quando comeรงou a pandemia de Covid-19 no mundo?", "context": "A pandemia de COVID-19, tambรฉm conhecida como pandemia de coronavรญ...
kunalr63/my_awesome_model
kunalr63
text-classification
[ "transformers", "pytorch", "tensorboard", "distilbert", "text-classification", "generated_from_trainer", "dataset:imdb", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
2023-04-16T13:00:33
2023-04-16T13:33:32
14
0
--- datasets: - imdb license: apache-2.0 metrics: - accuracy tags: - generated_from_trainer model-index: - name: my_awesome_model results: - task: type: text-classification name: Text Classification dataset: name: imdb type: imdb config: plain_text split: test args: pla...
[ "TEXT_CLASSIFICATION" ]
Non_BioNLP
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # my_awesome_model This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased)...
{"datasets": ["imdb"], "license": "apache-2.0", "metrics": ["accuracy"], "tags": ["generated_from_trainer"], "model-index": [{"name": "my_awesome_model", "results": [{"task": {"type": "text-classification", "name": "Text Classification"}, "dataset": {"name": "imdb", "type": "imdb", "config": "plain_text", "split": "tes...
gaudi/opus-mt-fr-ht-ctranslate2
gaudi
translation
[ "transformers", "marian", "ctranslate2", "translation", "license:apache-2.0", "endpoints_compatible", "region:us" ]
2024-07-22T15:57:36
2024-10-19T04:26:33
9
0
--- license: apache-2.0 tags: - ctranslate2 - translation --- # Repository General Information ## Inspired by and derived from the work of [Helsinki-NLP](https://huggingface.co/Helsinki-NLP), [CTranslate2](https://github.com/OpenNMT/CTranslate2), and [michaelfeil](https://huggingface.co/michaelfeil)! - Link to Original...
[ "TRANSLATION" ]
Non_BioNLP
# Repository General Information ## Inspired by and derived from the work of [Helsinki-NLP](https://huggingface.co/Helsinki-NLP), [CTranslate2](https://github.com/OpenNMT/CTranslate2), and [michaelfeil](https://huggingface.co/michaelfeil)! - Link to Original Model ([Helsinki-NLP](https://huggingface.co/Helsinki-NLP)): ...
{"license": "apache-2.0", "tags": ["ctranslate2", "translation"]}
RichardErkhov/EmergentMethods_-_Phi-3-mini-128k-instruct-graph-4bits
RichardErkhov
null
[ "safetensors", "phi3", "custom_code", "4-bit", "bitsandbytes", "region:us" ]
2025-01-18T08:48:37
2025-01-18T08:50:48
29
0
--- {} --- Quantization made by Richard Erkhov. [Github](https://github.com/RichardErkhov) [Discord](https://discord.gg/pvy7H8DZMG) [Request more models](https://github.com/RichardErkhov/quant_request) Phi-3-mini-128k-instruct-graph - bnb 4bits - Model creator: https://huggingface.co/EmergentMethods/ - Original mo...
[ "TRANSLATION" ]
Non_BioNLP
Quantization made by Richard Erkhov. [Github](https://github.com/RichardErkhov) [Discord](https://discord.gg/pvy7H8DZMG) [Request more models](https://github.com/RichardErkhov/quant_request) Phi-3-mini-128k-instruct-graph - bnb 4bits - Model creator: https://huggingface.co/EmergentMethods/ - Original model: https:...
{}
HusseinEid/bert-finetuned-ner
HusseinEid
token-classification
[ "transformers", "tensorboard", "safetensors", "bert", "token-classification", "generated_from_trainer", "en", "dataset:conll2003", "base_model:google-bert/bert-base-cased", "base_model:finetune:google-bert/bert-base-cased", "license:apache-2.0", "model-index", "autotrain_compatible", "endp...
2024-05-18T15:16:47
2024-05-18T15:35:40
9
0
--- base_model: bert-base-cased datasets: - conll2003 language: - en library_name: transformers license: apache-2.0 metrics: - precision - recall - f1 - accuracy tags: - generated_from_trainer model-index: - name: bert-finetuned-ner results: - task: type: token-classification name: Token Classification ...
[ "NAMED_ENTITY_RECOGNITION" ]
Non_BioNLP
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # bert-finetuned-ner This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the conll2...
{"base_model": "bert-base-cased", "datasets": ["conll2003"], "language": ["en"], "library_name": "transformers", "license": "apache-2.0", "metrics": ["precision", "recall", "f1", "accuracy"], "tags": ["generated_from_trainer"], "model-index": [{"name": "bert-finetuned-ner", "results": [{"task": {"type": "token-classifi...
Tasm/autotrain-esdxq-2v2zh
Tasm
text-classification
[ "tensorboard", "safetensors", "bert", "autotrain", "text-classification", "base_model:google-bert/bert-base-multilingual-cased", "base_model:finetune:google-bert/bert-base-multilingual-cased", "region:us" ]
2024-11-19T17:14:37
2024-11-19T17:26:01
5
0
--- base_model: google-bert/bert-base-multilingual-cased tags: - autotrain - text-classification widget: - text: I love AutoTrain --- # Model Trained Using AutoTrain - Problem type: Text Classification ## Validation Metrics loss: 0.0839352235198021 f1: 0.8888888888888888 precision: 1.0 recall: 0.8 auc: 0.8300000...
[ "TEXT_CLASSIFICATION" ]
Non_BioNLP
# Model Trained Using AutoTrain - Problem type: Text Classification ## Validation Metrics loss: 0.0839352235198021 f1: 0.8888888888888888 precision: 1.0 recall: 0.8 auc: 0.8300000000000001 accuracy: 0.9846153846153847
{"base_model": "google-bert/bert-base-multilingual-cased", "tags": ["autotrain", "text-classification"], "widget": [{"text": "I love AutoTrain"}]}
ns0911/klue-roberta-base-klue-sts
ns0911
sentence-similarity
[ "sentence-transformers", "safetensors", "roberta", "sentence-similarity", "feature-extraction", "generated_from_trainer", "dataset_size:10501", "loss:CosineSimilarityLoss", "arxiv:1908.10084", "base_model:klue/roberta-base", "base_model:finetune:klue/roberta-base", "model-index", "autotrain_...
2025-01-13T00:27:58
2025-01-13T00:28:18
6
0
--- base_model: klue/roberta-base library_name: sentence-transformers metrics: - pearson_cosine - spearman_cosine pipeline_tag: sentence-similarity tags: - sentence-transformers - sentence-similarity - feature-extraction - generated_from_trainer - dataset_size:10501 - loss:CosineSimilarityLoss widget: - source_sentence...
[ "TEXT_CLASSIFICATION", "SEMANTIC_SIMILARITY" ]
Non_BioNLP
# SentenceTransformer based on klue/roberta-base This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [klue/roberta-base](https://huggingface.co/klue/roberta-base). It maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for semantic textual similarity, semantic...
{"base_model": "klue/roberta-base", "library_name": "sentence-transformers", "metrics": ["pearson_cosine", "spearman_cosine"], "pipeline_tag": "sentence-similarity", "tags": ["sentence-transformers", "sentence-similarity", "feature-extraction", "generated_from_trainer", "dataset_size:10501", "loss:CosineSimilarityLoss"...
fine-tuned/jina-embeddings-v2-base-en-522024-6pj3-webapp_6103321184
fine-tuned
feature-extraction
[ "transformers", "safetensors", "bert", "feature-extraction", "custom_code", "text-embeddings-inference", "endpoints_compatible", "region:us" ]
2024-05-02T15:16:45
2024-05-02T15:17:00
6
0
--- {} --- # fine-tuned/jina-embeddings-v2-base-en-522024-6pj3-webapp_6103321184 ## Model Description fine-tuned/jina-embeddings-v2-base-en-522024-6pj3-webapp_6103321184 is a fine-tuned version of jinaai/jina-embeddings-v2-base-en designed for a specific domain. ## Use Case This model is designed to support various ...
[ "TEXT_CLASSIFICATION" ]
Non_BioNLP
# fine-tuned/jina-embeddings-v2-base-en-522024-6pj3-webapp_6103321184 ## Model Description fine-tuned/jina-embeddings-v2-base-en-522024-6pj3-webapp_6103321184 is a fine-tuned version of jinaai/jina-embeddings-v2-base-en designed for a specific domain. ## Use Case This model is designed to support various application...
{}
jeff-RQ/new-test-model
jeff-RQ
image-to-text
[ "transformers", "pytorch", "blip-2", "visual-question-answering", "vision", "image-to-text", "image-captioning", "en", "arxiv:2301.12597", "license:mit", "endpoints_compatible", "region:us" ]
2023-07-04T14:52:07
2023-07-05T15:01:24
144
0
--- language: en license: mit pipeline_tag: image-to-text tags: - vision - image-to-text - image-captioning - visual-question-answering duplicated_from: Salesforce/blip2-opt-2.7b --- # BLIP-2, OPT-2.7b, pre-trained only BLIP-2 model, leveraging [OPT-2.7b](https://huggingface.co/facebook/opt-2.7b) (a large language mo...
[ "QUESTION_ANSWERING" ]
Non_BioNLP
# BLIP-2, OPT-2.7b, pre-trained only BLIP-2 model, leveraging [OPT-2.7b](https://huggingface.co/facebook/opt-2.7b) (a large language model with 2.7 billion parameters). It was introduced in the paper [BLIP-2: Bootstrapping Language-Image Pre-training with Frozen Image Encoders and Large Language Models](https://arxiv...
{"language": "en", "license": "mit", "pipeline_tag": "image-to-text", "tags": ["vision", "image-to-text", "image-captioning", "visual-question-answering"], "duplicated_from": "Salesforce/blip2-opt-2.7b"}
irusl/05newa1
irusl
text-generation
[ "transformers", "safetensors", "llama", "text-generation", "Llama-3", "instruct", "finetune", "chatml", "DPO", "RLHF", "gpt4", "synthetic data", "distillation", "function calling", "json mode", "axolotl", "merges", "conversational", "en", "dataset:teknium/OpenHermes-2.5", "ba...
2024-07-15T09:01:46
2024-07-15T09:04:58
6
0
--- base_model: NousResearch/Hermes-2-Pro-Llama-3-8B datasets: - teknium/OpenHermes-2.5 language: - en license: apache-2.0 tags: - Llama-3 - instruct - finetune - chatml - DPO - RLHF - gpt4 - synthetic data - distillation - function calling - json mode - axolotl - merges widget: - example_title: Hermes 2 Pro Llama-3 In...
[ "TRANSLATION" ]
Non_BioNLP
# - Hermes-2 Θ Llama-3 8B ![image/png](https://cdn-uploads.huggingface.co/production/uploads/6317aade83d8d2fd903192d9/HQnQmNM1L3KXGhp0wUzHH.png) ## Model Description Hermes-2 Θ (Theta) is the first experimental merged model released by [Nous Research](https://nousresearch.com/), in collaboration with Charles Goddard...
{"base_model": "NousResearch/Hermes-2-Pro-Llama-3-8B", "datasets": ["teknium/OpenHermes-2.5"], "language": ["en"], "license": "apache-2.0", "tags": ["Llama-3", "instruct", "finetune", "chatml", "DPO", "RLHF", "gpt4", "synthetic data", "distillation", "function calling", "json mode", "axolotl", "merges"], "widget": [{"e...
muhtasham/finetuned-mlm_mini
muhtasham
text-classification
[ "transformers", "pytorch", "bert", "text-classification", "generated_from_trainer", "dataset:imdb", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
2022-12-03T01:33:36
2022-12-03T01:52:06
11
0
--- datasets: - imdb license: apache-2.0 metrics: - accuracy - f1 tags: - generated_from_trainer model-index: - name: finetuned-mlm_mini results: - task: type: text-classification name: Text Classification dataset: name: imdb type: imdb config: plain_text split: train a...
[ "TEXT_CLASSIFICATION" ]
Non_BioNLP
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # finetuned-mlm_mini This model is a fine-tuned version of [muhtasham/bert-mini-mlm-finetuned-emotion](https://huggingface.co/muht...
{"datasets": ["imdb"], "license": "apache-2.0", "metrics": ["accuracy", "f1"], "tags": ["generated_from_trainer"], "model-index": [{"name": "finetuned-mlm_mini", "results": [{"task": {"type": "text-classification", "name": "Text Classification"}, "dataset": {"name": "imdb", "type": "imdb", "config": "plain_text", "spli...
dascim/greekbart
dascim
fill-mask
[ "transformers", "safetensors", "mbart", "text2text-generation", "summarization", "bart", "fill-mask", "gr", "arxiv:2304.00869", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
2024-10-14T12:03:48
2024-10-15T07:49:37
20
0
--- language: - gr library_name: transformers license: mit pipeline_tag: fill-mask tags: - summarization - bart --- # GreekBART: The First Pretrained Greek Sequence-to-Sequence Model ## Introduction GreekBART is a Greek sequence-to-sequence pretrained model based on [BART](https://huggingface.co/facebook/bart-large)....
[ "SUMMARIZATION" ]
Non_BioNLP
# GreekBART: The First Pretrained Greek Sequence-to-Sequence Model ## Introduction GreekBART is a Greek sequence-to-sequence pretrained model based on [BART](https://huggingface.co/facebook/bart-large). GreekBART is pretrained by learning to reconstruct a corrupted input sentence. A corpus of 76.9GB of Greek raw text...
{"language": ["gr"], "library_name": "transformers", "license": "mit", "pipeline_tag": "fill-mask", "tags": ["summarization", "bart"]}
Volavion/bert-base-multilingual-uncased-temperature-cls
Volavion
null
[ "safetensors", "bert", "en", "base_model:google-bert/bert-base-multilingual-uncased", "base_model:finetune:google-bert/bert-base-multilingual-uncased", "license:mit", "region:us" ]
2025-01-15T10:27:50
2025-01-15T11:01:31
18
1
--- base_model: - google-bert/bert-base-multilingual-uncased language: - en license: mit --- # BERT-Based Classification Model for Optimal Temperature Selection This model uses a BERT-based classifier to analyze input prompts and identify the most suitable generation temperature, enhancing text generat...
[ "TRANSLATION", "SUMMARIZATION" ]
Non_BioNLP
# BERT-Based Classification Model for Optimal Temperature Selection This model uses a BERT-based classifier to analyze input prompts and identify the most suitable generation temperature, enhancing text generation quality and relevance, as described in our paper on temperature. ## Overview The model clas...
{"base_model": ["google-bert/bert-base-multilingual-uncased"], "language": ["en"], "license": "mit"}
r4ghu/distilbert-base-uncased-finetuned-clinc
r4ghu
text-classification
[ "transformers", "pytorch", "distilbert", "text-classification", "generated_from_trainer", "dataset:clinc_oos", "base_model:distilbert/distilbert-base-uncased", "base_model:finetune:distilbert/distilbert-base-uncased", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_comp...
2023-09-12T05:42:37
2023-09-13T01:19:35
12
0
--- base_model: distilbert-base-uncased datasets: - clinc_oos license: apache-2.0 metrics: - accuracy tags: - generated_from_trainer model-index: - name: distilbert-base-uncased-finetuned-clinc results: - task: type: text-classification name: Text Classification dataset: name: clinc_oos ...
[ "TEXT_CLASSIFICATION" ]
Non_BioNLP
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-finetuned-clinc This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/d...
{"base_model": "distilbert-base-uncased", "datasets": ["clinc_oos"], "license": "apache-2.0", "metrics": ["accuracy"], "tags": ["generated_from_trainer"], "model-index": [{"name": "distilbert-base-uncased-finetuned-clinc", "results": [{"task": {"type": "text-classification", "name": "Text Classification"}, "dataset": {...
RichardErkhov/abacusai_-_Giraffe-13b-32k-v3-gguf
RichardErkhov
null
[ "gguf", "endpoints_compatible", "region:us" ]
2024-08-02T17:14:03
2024-08-03T00:32:52
25
0
--- {} --- Quantization made by Richard Erkhov. [Github](https://github.com/RichardErkhov) [Discord](https://discord.gg/pvy7H8DZMG) [Request more models](https://github.com/RichardErkhov/quant_request) Giraffe-13b-32k-v3 - GGUF - Model creator: https://huggingface.co/abacusai/ - Original model: https://huggingface...
[ "QUESTION_ANSWERING" ]
Non_BioNLP
Quantization made by Richard Erkhov. [Github](https://github.com/RichardErkhov) [Discord](https://discord.gg/pvy7H8DZMG) [Request more models](https://github.com/RichardErkhov/quant_request) Giraffe-13b-32k-v3 - GGUF - Model creator: https://huggingface.co/abacusai/ - Original model: https://huggingface.co/abacusa...
{}
gokuls/mobilebert_sa_GLUE_Experiment_data_aug_wnli_128
gokuls
text-classification
[ "transformers", "pytorch", "tensorboard", "mobilebert", "text-classification", "generated_from_trainer", "en", "dataset:glue", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
2023-02-03T16:11:58
2023-02-03T16:40:16
129
0
--- datasets: - glue language: - en license: apache-2.0 metrics: - accuracy tags: - generated_from_trainer model-index: - name: mobilebert_sa_GLUE_Experiment_data_aug_wnli_128 results: - task: type: text-classification name: Text Classification dataset: name: GLUE WNLI type: glue a...
[ "TEXT_CLASSIFICATION" ]
Non_BioNLP
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # mobilebert_sa_GLUE_Experiment_data_aug_wnli_128 This model is a fine-tuned version of [google/mobilebert-uncased](https://huggin...
{"datasets": ["glue"], "language": ["en"], "license": "apache-2.0", "metrics": ["accuracy"], "tags": ["generated_from_trainer"], "model-index": [{"name": "mobilebert_sa_GLUE_Experiment_data_aug_wnli_128", "results": [{"task": {"type": "text-classification", "name": "Text Classification"}, "dataset": {"name": "GLUE WNLI...
RichardErkhov/Unbabel_-_TowerBase-7B-v0.1-gguf
RichardErkhov
null
[ "gguf", "arxiv:2402.17733", "endpoints_compatible", "region:us" ]
2024-05-11T10:07:33
2024-05-11T23:15:22
102
0
--- {} --- Quantization made by Richard Erkhov. [Github](https://github.com/RichardErkhov) [Discord](https://discord.gg/pvy7H8DZMG) [Request more models](https://github.com/RichardErkhov/quant_request) TowerBase-7B-v0.1 - GGUF - Model creator: https://huggingface.co/Unbabel/ - Original model: https://huggingface.c...
[ "TRANSLATION" ]
Non_BioNLP
Quantization made by Richard Erkhov. [Github](https://github.com/RichardErkhov) [Discord](https://discord.gg/pvy7H8DZMG) [Request more models](https://github.com/RichardErkhov/quant_request) TowerBase-7B-v0.1 - GGUF - Model creator: https://huggingface.co/Unbabel/ - Original model: https://huggingface.co/Unbabel/T...
{}
naksu/distilbert-base-uncased-finetuned-sst2
naksu
text-classification
[ "transformers", "pytorch", "tensorboard", "distilbert", "text-classification", "generated_from_trainer", "dataset:glue", "license:apache-2.0", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
2023-01-23T06:33:51
2023-01-23T18:15:34
114
0
--- datasets: - glue license: apache-2.0 metrics: - accuracy tags: - generated_from_trainer model-index: - name: distilbert-base-uncased-finetuned-sst2 results: - task: type: text-classification name: Text Classification dataset: name: glue type: glue config: sst2 split: trai...
[ "TEXT_CLASSIFICATION" ]
Non_BioNLP
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # distilbert-base-uncased-finetuned-sst2 This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/di...
{"datasets": ["glue"], "license": "apache-2.0", "metrics": ["accuracy"], "tags": ["generated_from_trainer"], "model-index": [{"name": "distilbert-base-uncased-finetuned-sst2", "results": [{"task": {"type": "text-classification", "name": "Text Classification"}, "dataset": {"name": "glue", "type": "glue", "config": "sst2...
fine-tuned/NFCorpus-256-24-gpt-4o-2024-05-13-166315
fine-tuned
feature-extraction
[ "sentence-transformers", "safetensors", "bert", "feature-extraction", "sentence-similarity", "mteb", "en", "dataset:fine-tuned/NFCorpus-256-24-gpt-4o-2024-05-13-166315", "dataset:allenai/c4", "license:apache-2.0", "autotrain_compatible", "text-embeddings-inference", "endpoints_compatible", ...
2024-05-24T15:37:03
2024-05-24T15:37:35
9
0
--- datasets: - fine-tuned/NFCorpus-256-24-gpt-4o-2024-05-13-166315 - allenai/c4 language: - en license: apache-2.0 pipeline_tag: feature-extraction tags: - sentence-transformers - feature-extraction - sentence-similarity - mteb --- This model is a fine-tuned version of [**BAAI/bge-large-en-v1.5**](https://huggingface....
[ "TEXT_CLASSIFICATION" ]
Non_BioNLP
This model is a fine-tuned version of [**BAAI/bge-large-en-v1.5**](https://huggingface.co/BAAI/bge-large-en-v1.5) designed for the following use case: custom ## How to Use This model can be easily integrated into your NLP pipeline for tasks such as text classification, sentiment analysis, entity recognition, and more...
{"datasets": ["fine-tuned/NFCorpus-256-24-gpt-4o-2024-05-13-166315", "allenai/c4"], "language": ["en"], "license": "apache-2.0", "pipeline_tag": "feature-extraction", "tags": ["sentence-transformers", "feature-extraction", "sentence-similarity", "mteb"]}
KarelDO/lstm.CEBaB_confounding.observational.absa.5-class.seed_43
KarelDO
text-classification
[ "transformers", "pytorch", "bert", "text-classification", "generated_from_trainer", "en", "dataset:OpenTable", "model-index", "autotrain_compatible", "endpoints_compatible", "region:us" ]
2022-10-14T04:31:04
2022-10-14T04:32:12
20
0
--- datasets: - OpenTable language: - en metrics: - accuracy tags: - generated_from_trainer model-index: - name: lstm.CEBaB_confounding.observational.absa.5-class.seed_43 results: - task: type: text-classification name: Text Classification dataset: name: OpenTable OPENTABLE-ABSA type: Op...
[ "TEXT_CLASSIFICATION" ]
Non_BioNLP
<!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # lstm.CEBaB_confounding.observational.absa.5-class.seed_43 This model is a fine-tuned version of [lstm](https://huggingface.co/ls...
{"datasets": ["OpenTable"], "language": ["en"], "metrics": ["accuracy"], "tags": ["generated_from_trainer"], "model-index": [{"name": "lstm.CEBaB_confounding.observational.absa.5-class.seed_43", "results": [{"task": {"type": "text-classification", "name": "Text Classification"}, "dataset": {"name": "OpenTable OPENTABLE...
mini1013/master_cate_top_bt5_4
mini1013
text-classification
[ "setfit", "safetensors", "roberta", "sentence-transformers", "text-classification", "generated_from_setfit_trainer", "arxiv:2209.11055", "base_model:klue/roberta-base", "base_model:finetune:klue/roberta-base", "model-index", "region:us" ]
2024-12-29T14:28:52
2024-12-29T14:29:14
8
0
--- base_model: klue/roberta-base library_name: setfit metrics: - accuracy pipeline_tag: text-classification tags: - setfit - sentence-transformers - text-classification - generated_from_setfit_trainer widget: - text: '[시세이도] NEW 싱크로 스킨 래디언트 리프팅 파운데이션 SPF30/PA++++ 30ml 130 오팔 (#M)홈>메이크업>베이스메이크업 HMALL > 뷰티 > 메이크업 > ...
[ "TEXT_CLASSIFICATION" ]
Non_BioNLP
# SetFit with klue/roberta-base This is a [SetFit](https://github.com/huggingface/setfit) model that can be used for Text Classification. This SetFit model uses [klue/roberta-base](https://huggingface.co/klue/roberta-base) as the Sentence Transformer embedding model. A [LogisticRegression](https://scikit-learn.org/st...
{"base_model": "klue/roberta-base", "library_name": "setfit", "metrics": ["accuracy"], "pipeline_tag": "text-classification", "tags": ["setfit", "sentence-transformers", "text-classification", "generated_from_setfit_trainer"], "widget": [{"text": "[์‹œ์„ธ์ด๋„] NEW ์‹ฑํฌ๋กœ ์Šคํ‚จ ๋ž˜๋””์–ธํŠธ ๋ฆฌํ”„ํŒ… ํŒŒ์šด๋ฐ์ด์…˜ SPF30/PA++++ 30ml 130 ์˜คํŒ” (#M)ํ™ˆ>๋ฉ”์ดํฌ์—…>๋ฒ ์ด์Šค...
csocsci/mt5-base-multi-label-cs-iiib-02c
csocsci
text2text-generation
[ "transformers", "pytorch", "mt5", "text2text-generation", "cs", "license:mit", "autotrain_compatible", "endpoints_compatible", "region:us" ]
2023-09-22T13:29:45
2023-09-23T13:40:51
10
0
--- language: - cs license: mit --- # Model Card for mt5-base-multi-label-cs-iiib-02c <!-- Provide a quick summary of what the model is/does. --> This model is fine-tuned for multi-label text classification of Supportive Interactions in Instant Messenger dialogs of Adolescents in Czech. ## Model Description The mo...
[ "TEXT_CLASSIFICATION" ]
Non_BioNLP
# Model Card for mt5-base-multi-label-cs-iiib-02c <!-- Provide a quick summary of what the model is/does. --> This model is fine-tuned for multi-label text classification of Supportive Interactions in Instant Messenger dialogs of Adolescents in Czech. ## Model Description The model was fine-tuned on a dataset of C...
{"language": ["cs"], "license": "mit"}
heegyu/TinyLlama-augesc-context-strategy
heegyu
text-generation
[ "transformers", "safetensors", "llama", "text-generation", "dataset:thu-coai/augesc", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
2024-03-01T16:19:26
2024-03-07T13:19:42
8
0
--- datasets: - thu-coai/augesc library_name: transformers --- Test set performance - Top 1 Accuracy: 0.4346 - Top 3 Accuracy: 0.7677 - Top 1 Macro F1: 0.2668 - Top 3 Macro F1: 0.5669 ### Usage ```python from transformers import AutoTokenizer, AutoModelForSequenceClassification device="cuda:0" model = "heegyu/TinyLl...
[ "PARAPHRASING" ]
Non_BioNLP
Test set performance - Top 1 Accuracy: 0.4346 - Top 3 Accuracy: 0.7677 - Top 1 Macro F1: 0.2668 - Top 3 Macro F1: 0.5669 ### Usage ```python from transformers import AutoTokenizer, AutoModelForSequenceClassification device="cuda:0" model = "heegyu/TinyLlama-augesc-context-strategy" tokenizer = AutoTokenizer.from_pre...
{"datasets": ["thu-coai/augesc"], "library_name": "transformers"}
Bahasalab/BahasaGpt-chat
Bahasalab
null
[ "transformers", "pytorch", "tensorboard", "license:cc-by-nc-3.0", "endpoints_compatible", "region:us" ]
2023-04-09T13:44:42
2023-04-11T07:23:12
18
2
--- license: cc-by-nc-3.0 --- # BahasaGPT-Chat ## Introduction This document provides an overview of the BahasaGPT-Chat model, which is a fine-tuned model for a specific task in the Indonesian language. The model is based on the Bloomz-7B-mt architecture and is fine-tuned using a dataset of over 120,000 chat instruct...
[ "TRANSLATION" ]
Non_BioNLP
# BahasaGPT-Chat ## Introduction This document provides an overview of the BahasaGPT-Chat model, which is a fine-tuned model for a specific task in the Indonesian language. The model is based on the Bloomz-7B-mt architecture and is fine-tuned using a dataset of over 120,000 chat instructions. ## Model Details ...
{"license": "cc-by-nc-3.0"}