| id | author | task_category | tags | created_time | last_modified | downloads | likes | README | matched_task | is_bionlp | model_cards | metadata |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
puettmann/LlaMaestra-3.2-1B-Translation-Q8_0-GGUF | puettmann | translation | [
"transformers",
"gguf",
"translation",
"text-generation",
"llama-cpp",
"gguf-my-repo",
"en",
"it",
"base_model:puettmann/LlaMaestra-3.2-1B-Translation",
"base_model:quantized:puettmann/LlaMaestra-3.2-1B-Translation",
"license:llama3.2",
"endpoints_compatible",
"region:us",
"conversational"... | 2024-12-08T21:22:09 | 2024-12-08T21:22:17 | 168 | 1 | ---
base_model: LeonardPuettmann/LlaMaestra-3.2-1B-Instruct-v0.1
language:
- en
- it
library_name: transformers
license: llama3.2
tags:
- translation
- text-generation
- llama-cpp
- gguf-my-repo
---
# LeonardPuettmann/LlaMaestra-3.2-1B-Instruct-v0.1-Q8_0-GGUF
This model was converted to GGUF format from [`LeonardPuett... | [
"TRANSLATION"
] | TBD |
# LeonardPuettmann/LlaMaestra-3.2-1B-Instruct-v0.1-Q8_0-GGUF
This model was converted to GGUF format from [`LeonardPuettmann/LlaMaestra-3.2-1B-Instruct-v0.1`](https://huggingface.co/LeonardPuettmann/LlaMaestra-3.2-1B-Instruct-v0.1) using llama.cpp via the ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org... | {"base_model": "LeonardPuettmann/LlaMaestra-3.2-1B-Instruct-v0.1", "language": ["en", "it"], "library_name": "transformers", "license": "llama3.2", "tags": ["translation", "text-generation", "llama-cpp", "gguf-my-repo"]} |
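The LlaMaestra row above describes a Q8_0 GGUF conversion made with llama.cpp via GGUF-my-repo. As a minimal sketch of how such a checkpoint is typically loaded, the snippet below uses `huggingface_hub` and `llama-cpp-python`; the `.gguf` filename is an assumption based on GGUF-my-repo naming conventions, not confirmed by the truncated card.

```python
# Minimal sketch: run a GGUF-my-repo checkpoint with llama-cpp-python.
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

path = hf_hub_download(
    repo_id="puettmann/LlaMaestra-3.2-1B-Translation-Q8_0-GGUF",
    filename="llamaestra-3.2-1b-translation-q8_0.gguf",  # hypothetical filename
)

llm = Llama(model_path=path, n_ctx=2048)
out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Translate to Italian: Good morning!"}]
)
print(out["choices"][0]["message"]["content"])
```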
sbulut/finetuned-kde4-en-to-tr | sbulut | translation | [
"transformers",
"tensorboard",
"safetensors",
"marian",
"text2text-generation",
"translation",
"generated_from_trainer",
"dataset:kde4",
"base_model:Helsinki-NLP/opus-mt-tc-big-tr-en",
"base_model:finetune:Helsinki-NLP/opus-mt-tc-big-tr-en",
"license:cc-by-4.0",
"model-index",
"autotrain_com... | 2024-02-02T19:53:18 | 2024-02-02T21:57:41 | 16 | 0 | ---
base_model: Helsinki-NLP/opus-mt-tc-big-tr-en
datasets:
- kde4
license: cc-by-4.0
metrics:
- bleu
tags:
- translation
- generated_from_trainer
model-index:
- name: marian-finetuned-kde4-en-to-tr
results:
- task:
type: text2text-generation
name: Sequence-to-sequence Language Modeling
dataset:
... | [
"TRANSLATION"
] | Non_BioNLP |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# marian-finetuned-kde4-en-to-tr
This model is a fine-tuned version of [Helsinki-NLP/opus-mt-tc-big-tr-en](https://huggingface.co/... | {"base_model": "Helsinki-NLP/opus-mt-tc-big-tr-en", "datasets": ["kde4"], "license": "cc-by-4.0", "metrics": ["bleu"], "tags": ["translation", "generated_from_trainer"], "model-index": [{"name": "marian-finetuned-kde4-en-to-tr", "results": [{"task": {"type": "text2text-generation", "name": "Sequence-to-sequence Languag... |
interneuronai/az-gptneo | interneuronai | null | [
"peft",
"safetensors",
"base_model:EleutherAI/gpt-neo-2.7B",
"base_model:adapter:EleutherAI/gpt-neo-2.7B",
"region:us"
] | 2024-03-09T21:22:33 | 2024-03-09T21:34:37 | 2 | 0 | ---
base_model: EleutherAI/gpt-neo-2.7B
library_name: peft
---
Model Details
Original Model: EleutherAI/gpt-neo-2.7B
Fine-Tuned For: Azerbaijani language understanding and generation
Dataset Used: Azerbaijani translation of the Stanford Alpaca dataset
Fine-Tuning Method: Self-instruct method
This m... | [
"TRANSLATION"
] | Non_BioNLP |
Model Details
Original Model: EleutherAI/gpt-neo-2.7B
Fine-Tuned For: Azerbaijani language understanding and generation
Dataset Used: Azerbaijani translation of the Stanford Alpaca dataset
Fine-Tuning Method: Self-instruct method
This model, is part of the ["project/Barbarossa"](https://github.com/... | {"base_model": "EleutherAI/gpt-neo-2.7B", "library_name": "peft"} |
kuotient/Seagull-13b-translation-AWQ | kuotient | translation | [
"transformers",
"safetensors",
"llama",
"text-generation",
"translate",
"awq",
"translation",
"ko",
"dataset:squarelike/sharegpt_deepl_ko_translation",
"license:cc-by-nc-sa-4.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"4-bit",
"region:us"
] | 2024-02-24T08:02:37 | 2024-02-24T09:09:52 | 7 | 2 | ---
datasets:
- squarelike/sharegpt_deepl_ko_translation
language:
- ko
license: cc-by-nc-sa-4.0
pipeline_tag: translation
tags:
- translate
- awq
---
# **Seagull-13b-translation-AWQ 📇**

## This is quantized version of original model: Seagull-13b-translation.
*... | [
"TRANSLATION"
] | Non_BioNLP | # **Seagull-13b-translation-AWQ 📇**

## This is quantized version of original model: Seagull-13b-translation.
**Seagull-13b-translation** is yet another translator model, but carefully considered the following issues from existing translation models.
- `newline`... | {"datasets": ["squarelike/sharegpt_deepl_ko_translation"], "language": ["ko"], "license": "cc-by-nc-sa-4.0", "pipeline_tag": "translation", "tags": ["translate", "awq"]} |
sheetalp91/setfit-model-1 | sheetalp91 | text-classification | [
"sentence-transformers",
"pytorch",
"roberta",
"setfit",
"text-classification",
"arxiv:2209.11055",
"license:apache-2.0",
"region:us"
] | 2023-05-02T13:06:28 | 2023-05-02T13:06:43 | 9 | 0 | ---
license: apache-2.0
pipeline_tag: text-classification
tags:
- setfit
- sentence-transformers
- text-classification
---
# sheetalp91/setfit-model-1
This is a [SetFit model](https://github.com/huggingface/setfit) that can be used for text classification. The model has been trained using an efficient few-shot learni... | [
"TEXT_CLASSIFICATION"
] | Non_BioNLP |
# sheetalp91/setfit-model-1
This is a [SetFit model](https://github.com/huggingface/setfit) that can be used for text classification. The model has been trained using an efficient few-shot learning technique that involves:
1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning.
2. T... | {"license": "apache-2.0", "pipeline_tag": "text-classification", "tags": ["setfit", "sentence-transformers", "text-classification"]} |
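The SetFit rows in this dump (e.g. `sheetalp91/setfit-model-1`, `tyzp-INC/bench2-all-MiniLM-L6-v2-tuned`, `fathyshalab/reklambox2-6-17`) all follow the same two-step recipe: contrastive fine-tuning of a Sentence Transformer, then a classification head. A minimal inference sketch with the `setfit` library, assuming the checkpoint loads as a standard SetFit model; the example inputs are illustrative:

```python
from setfit import SetFitModel

model = SetFitModel.from_pretrained("sheetalp91/setfit-model-1")
# A SetFitModel is callable on raw strings; labels come from the trained head.
preds = model(["i loved the spiderman movie!", "pesky deer ruined my garden :("])
print(preds)
```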
research-backup/mbart-large-cc25-squad-qa | research-backup | text2text-generation | [
"transformers",
"pytorch",
"mbart",
"text2text-generation",
"question answering",
"en",
"dataset:lmqg/qg_squad",
"arxiv:2210.03992",
"license:cc-by-4.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 2023-03-31T19:43:55 | 2023-05-06T12:48:31 | 13 | 0 | ---
datasets:
- lmqg/qg_squad
language: en
license: cc-by-4.0
metrics:
- bleu4
- meteor
- rouge-l
- bertscore
- moverscore
pipeline_tag: text2text-generation
tags:
- question answering
widget:
- text: 'question: What is a person called is practicing heresy?, context: Heresy
is any provocative belief or theory that ... | [
"QUESTION_ANSWERING"
] | Non_BioNLP |
# Model Card of `lmqg/mbart-large-cc25-squad-qa`
This model is fine-tuned version of [facebook/mbart-large-cc25](https://huggingface.co/facebook/mbart-large-cc25) for question answering task on the [lmqg/qg_squad](https://huggingface.co/datasets/lmqg/qg_squad) (dataset_name: default) via [`lmqg`](https://github.com/as... | {"datasets": ["lmqg/qg_squad"], "language": "en", "license": "cc-by-4.0", "metrics": ["bleu4", "meteor", "rouge-l", "bertscore", "moverscore"], "pipeline_tag": "text2text-generation", "tags": ["question answering"], "widget": [{"text": "question: What is a person called is practicing heresy?, context: Heresy is any pro... |
Pdmk/t5-small-finetuned-summary_pd | Pdmk | summarization | [
"transformers",
"pytorch",
"tensorboard",
"t5",
"text2text-generation",
"summarization",
"generated_from_trainer",
"base_model:google-t5/t5-small",
"base_model:finetune:google-t5/t5-small",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"... | 2023-08-21T21:21:06 | 2023-08-23T20:12:08 | 18 | 0 | ---
base_model: t5-small
license: apache-2.0
metrics:
- rouge
tags:
- summarization
- generated_from_trainer
model-index:
- name: t5-small-finetuned-summary_pd
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread a... | [
"SUMMARIZATION"
] | Non_BioNLP |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# t5-small-finetuned-summary_pd
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on an unknown da... | {"base_model": "t5-small", "license": "apache-2.0", "metrics": ["rouge"], "tags": ["summarization", "generated_from_trainer"], "model-index": [{"name": "t5-small-finetuned-summary_pd", "results": []}]} |
knowledgator/gliner-bi-small-v1.0 | knowledgator | token-classification | [
"gliner",
"pytorch",
"NER",
"GLiNER",
"information extraction",
"encoder",
"entity recognition",
"token-classification",
"multilingual",
"dataset:urchade/pile-mistral-v0.1",
"dataset:numind/NuNER",
"dataset:knowledgator/GLINER-multi-task-synthetic-data",
"license:apache-2.0",
"region:us"
] | 2024-08-18T06:56:31 | 2024-08-25T11:38:26 | 122 | 10 | ---
datasets:
- urchade/pile-mistral-v0.1
- numind/NuNER
- knowledgator/GLINER-multi-task-synthetic-data
language:
- multilingual
library_name: gliner
license: apache-2.0
pipeline_tag: token-classification
tags:
- NER
- GLiNER
- information extraction
- encoder
- entity recognition
---
# About
GLiNER is a Named Entit... | [
"NAMED_ENTITY_RECOGNITION"
] | Non_BioNLP |
# About
GLiNER is a Named Entity Recognition (NER) model capable of identifying any entity type using a bidirectional transformer encoders (BERT-like). It provides a practical alternative to traditional NER models, which are limited to predefined entities, and Large Language Models (LLMs) that, despite their flexibil... | {"datasets": ["urchade/pile-mistral-v0.1", "numind/NuNER", "knowledgator/GLINER-multi-task-synthetic-data"], "language": ["multilingual"], "library_name": "gliner", "license": "apache-2.0", "pipeline_tag": "token-classification", "tags": ["NER", "GLiNER", "information extraction", "encoder", "entity recognition"]} |
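The GLiNER row describes zero-shot NER over arbitrary, user-defined label sets. A minimal sketch using the `gliner` library named in the row's `library_name`; the example text and labels are illustrative, not from the card:

```python
from gliner import GLiNER

model = GLiNER.from_pretrained("knowledgator/gliner-bi-small-v1.0")

text = "Cristiano Ronaldo signed for Al Nassr in Riyadh in January 2023."
labels = ["person", "organization", "location", "date"]  # arbitrary, user-defined

for entity in model.predict_entities(text, labels, threshold=0.5):
    print(entity["text"], "=>", entity["label"])
```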
mrm8488/spanish-TinyBERT-betito-finetuned-xnli-es | mrm8488 | text-classification | [
"transformers",
"pytorch",
"tensorboard",
"bert",
"text-classification",
"generated_from_trainer",
"dataset:xnli",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 2022-03-08T20:55:51 | 2022-03-09T07:29:03 | 117 | 0 | ---
datasets:
- xnli
metrics:
- accuracy
tags:
- generated_from_trainer
model-index:
- name: spanish-TinyBERT-betito-finetuned-xnli-es
results:
- task:
type: text-classification
name: Text Classification
dataset:
name: xnli
type: xnli
args: es
metrics:
- type: accuracy
... | [
"TEXT_CLASSIFICATION"
] | Non_BioNLP |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# spanish-TinyBERT-betito-finetuned-xnli-es
This model is a fine-tuned version of [mrm8488/spanish-TinyBERT-betito](https://huggin... | {"datasets": ["xnli"], "metrics": ["accuracy"], "tags": ["generated_from_trainer"], "model-index": [{"name": "spanish-TinyBERT-betito-finetuned-xnli-es", "results": [{"task": {"type": "text-classification", "name": "Text Classification"}, "dataset": {"name": "xnli", "type": "xnli", "args": "es"}, "metrics": [{"type": "... |
etri-lirs/gbst-kebyt5-large-preview | etri-lirs | fill-mask | [
"transformers",
"pytorch",
"gbswt5",
"text2text-generation",
"fill-mask",
"custom_code",
"ko",
"en",
"ja",
"zh",
"arxiv:2106.12672",
"license:other",
"autotrain_compatible",
"region:us"
] | 2024-02-13T07:21:51 | 2024-11-25T04:10:05 | 0 | 2 | ---
language:
- ko
- en
- ja
- zh
license: other
pipeline_tag: fill-mask
---
# Model Card for GBST-KEByT5-large (1.23B #params)
<!-- Provide a quick summary of what the model is/does. -->
KEByT5: a GBST version of the Korean-Enhanced/Enriched Byte-level Text-to-Text Transfer Transformer (T5),
based on CharFormer (Tay et al., 2021)... | [
"RELATION_EXTRACTION",
"TRANSLATION"
] | Non_BioNLP | # Model Card for GBST-KEByT5-large (1.23B #params)
<!-- Provide a quick summary of what the model is/does. -->
KEByT5: a GBST version of the Korean-Enhanced/Enriched Byte-level Text-to-Text Transfer Transformer (T5),
based on CharFormer (Tay et al., 2021).
For Korean, candidate token spans are chunked in (1, 2, 3, 6, 9)-byte units to build the candidate set, and the soft embe... | {"language": ["ko", "en", "ja", "zh"], "license": "other", "pipeline_tag": "fill-mask"} |
tmnam20/bert-base-multilingual-cased-rte-100 | tmnam20 | text-classification | [
"transformers",
"safetensors",
"bert",
"text-classification",
"generated_from_trainer",
"en",
"dataset:tmnam20/VieGLUE",
"base_model:google-bert/bert-base-multilingual-cased",
"base_model:finetune:google-bert/bert-base-multilingual-cased",
"license:apache-2.0",
"model-index",
"autotrain_compat... | 2024-01-16T06:54:35 | 2024-01-16T06:55:47 | 15 | 0 | ---
base_model: bert-base-multilingual-cased
datasets:
- tmnam20/VieGLUE
language:
- en
license: apache-2.0
metrics:
- accuracy
tags:
- generated_from_trainer
model-index:
- name: bert-base-multilingual-cased-rte-100
results:
- task:
type: text-classification
name: Text Classification
dataset:
... | [
"TEXT_CLASSIFICATION"
] | Non_BioNLP |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-base-multilingual-cased-rte-100
This model is a fine-tuned version of [bert-base-multilingual-cased](https://huggingface.co... | {"base_model": "bert-base-multilingual-cased", "datasets": ["tmnam20/VieGLUE"], "language": ["en"], "license": "apache-2.0", "metrics": ["accuracy"], "tags": ["generated_from_trainer"], "model-index": [{"name": "bert-base-multilingual-cased-rte-100", "results": [{"task": {"type": "text-classification", "name": "Text Cl... |
mustozsarac/finetuned-one-epoch-multi-qa-mpnet-base-dot-v1 | mustozsarac | sentence-similarity | [
"sentence-transformers",
"safetensors",
"mpnet",
"sentence-similarity",
"feature-extraction",
"generated_from_trainer",
"dataset_size:62964",
"loss:MultipleNegativesRankingLoss",
"arxiv:1908.10084",
"arxiv:1705.00652",
"base_model:sentence-transformers/multi-qa-mpnet-base-dot-v1",
"base_model:... | 2024-06-27T11:08:59 | 2024-06-27T11:09:15 | 5 | 0 | ---
base_model: sentence-transformers/multi-qa-mpnet-base-dot-v1
datasets: []
language: []
library_name: sentence-transformers
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- sentence-similarity
- feature-extraction
- generated_from_trainer
- dataset_size:62964
- loss:MultipleNegativesRankingLoss
widg... | [
"TEXT_CLASSIFICATION"
] | Non_BioNLP |
# SentenceTransformer based on sentence-transformers/multi-qa-mpnet-base-dot-v1
This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [sentence-transformers/multi-qa-mpnet-base-dot-v1](https://huggingface.co/sentence-transformers/multi-qa-mpnet-base-dot-v1). It maps sentences & paragraphs to a... | {"base_model": "sentence-transformers/multi-qa-mpnet-base-dot-v1", "datasets": [], "language": [], "library_name": "sentence-transformers", "pipeline_tag": "sentence-similarity", "tags": ["sentence-transformers", "sentence-similarity", "feature-extraction", "generated_from_trainer", "dataset_size:62964", "loss:Multiple... |
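Several rows in this dump are SentenceTransformer fine-tunes (`mustozsarac/finetuned-one-epoch-multi-qa-mpnet-base-dot-v1`, `AlexWortega/qwen11k`, `tycjan/distilbert-pl-store-products-retrieval`), and inference is the same for all of them. A minimal sketch; the sentence pair is illustrative:

```python
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("mustozsarac/finetuned-one-epoch-multi-qa-mpnet-base-dot-v1")
sentences = ["How do I reset my password?", "Steps to recover a forgotten password"]
embeddings = model.encode(sentences)

# The multi-qa-mpnet-base-dot-v1 base model was trained for dot-product similarity.
print(util.dot_score(embeddings[0], embeddings[1]))
```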
RichardErkhov/01-ai_-_Yi-6B-Chat-8bits | RichardErkhov | null | [
"safetensors",
"llama",
"arxiv:2403.04652",
"arxiv:2311.16502",
"arxiv:2401.11944",
"8-bit",
"bitsandbytes",
"region:us"
] | 2024-10-06T11:46:36 | 2024-10-06T11:50:00 | 6 | 0 | ---
{}
---
Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
Yi-6B-Chat - bnb 8bits
- Model creator: https://huggingface.co/01-ai/
- Original model: https://huggingface.co/01... | [
"QUESTION_ANSWERING"
] | Non_BioNLP | Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
Yi-6B-Chat - bnb 8bits
- Model creator: https://huggingface.co/01-ai/
- Original model: https://huggingface.co/01-ai/Yi-6B-C... | {} |
monsterbeasts/LishizhenGPT | monsterbeasts | text-generation | [
"transformers",
"pytorch",
"safetensors",
"bloom",
"text-generation",
"ak",
"ar",
"as",
"bm",
"bn",
"ca",
"code",
"en",
"es",
"eu",
"fon",
"fr",
"gu",
"hi",
"id",
"ig",
"ki",
"kn",
"lg",
"ln",
"ml",
"mr",
"ne",
"nso",
"ny",
"or",
"pa",
"pt",
"rn",
... | 2024-04-23T09:05:31 | 2024-05-09T04:44:44 | 12 | 0 | ---
datasets:
- bigscience/xP3mt
language:
- ak
- ar
- as
- bm
- bn
- ca
- code
- en
- es
- eu
- fon
- fr
- gu
- hi
- id
- ig
- ki
- kn
- lg
- ln
- ml
- mr
- ne
- nso
- ny
- or
- pa
- pt
- rn
- rw
- sn
- st
- sw
- ta
- te
- tn
- ts
- tum
- tw
- ur
- vi
- wo
- xh
- yo
- zh
- zu
license: bigscience-bloom-rail-1.0
pipelin... | [
"COREFERENCE_RESOLUTION",
"TRANSLATION"
] | Non_BioNLP |

# Table of Contents
1. [Model Summary](#model-summary)
2. [Use](#use)
3. [Limitations](#limitations)
4. [Training](#training)
5. [Evaluation](#evaluation)
7. [Citation](#citation)
# Model Summary
> We present BLOOMZ & mT0, a... | {"datasets": ["bigscience/xP3mt"], "language": ["ak", "ar", "as", "bm", "bn", "ca", "code", "en", "es", "eu", "fon", "fr", "gu", "hi", "id", "ig", "ki", "kn", "lg", "ln", "ml", "mr", "ne", "nso", "ny", "or", "pa", "pt", "rn", "rw", "sn", "st", "sw", "ta", "te", "tn", "ts", "tum", "tw", "ur", "vi", "wo", "xh", "yo", "zh... |
tyzp-INC/bench2-all-MiniLM-L6-v2-tuned | tyzp-INC | text-classification | [
"sentence-transformers",
"pytorch",
"bert",
"setfit",
"text-classification",
"arxiv:2209.11055",
"license:apache-2.0",
"region:us"
] | 2023-07-23T15:18:43 | 2023-07-23T15:18:48 | 9 | 0 | ---
license: apache-2.0
pipeline_tag: text-classification
tags:
- setfit
- sentence-transformers
- text-classification
---
# tyzp-INC/bench2-all-MiniLM-L6-v2-tuned
This is a [SetFit model](https://github.com/huggingface/setfit) that can be used for text classification. The model has been trained using an efficient fe... | [
"TEXT_CLASSIFICATION"
] | Non_BioNLP |
# tyzp-INC/bench2-all-MiniLM-L6-v2-tuned
This is a [SetFit model](https://github.com/huggingface/setfit) that can be used for text classification. The model has been trained using an efficient few-shot learning technique that involves:
1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive l... | {"license": "apache-2.0", "pipeline_tag": "text-classification", "tags": ["setfit", "sentence-transformers", "text-classification"]} |
tner/twitter-roberta-base-dec2021-tweetner7-2020 | tner | token-classification | [
"transformers",
"pytorch",
"roberta",
"token-classification",
"dataset:tner/tweetner7",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 2022-07-03T09:07:32 | 2022-09-27T15:35:03 | 18 | 0 | ---
datasets:
- tner/tweetner7
metrics:
- f1
- precision
- recall
pipeline_tag: token-classification
widget:
- text: 'Get the all-analog Classic Vinyl Edition of `Takin'' Off` Album from {@herbiehancock@}
via {@bluenoterecords@} link below: {{URL}}'
example_title: NER Example 1
model-index:
- name: tner/twitter-r... | [
"NAMED_ENTITY_RECOGNITION"
] | Non_BioNLP | # tner/twitter-roberta-base-dec2021-tweetner7-2020
This model is a fine-tuned version of [cardiffnlp/twitter-roberta-base-dec2021](https://huggingface.co/cardiffnlp/twitter-roberta-base-dec2021) on the
[tner/tweetner7](https://huggingface.co/datasets/tner/tweetner7) dataset (`train_2020` split).
Model fine-tuning is ... | {"datasets": ["tner/tweetner7"], "metrics": ["f1", "precision", "recall"], "pipeline_tag": "token-classification", "widget": [{"text": "Get the all-analog Classic Vinyl Edition of `Takin' Off` Album from {@herbiehancock@} via {@bluenoterecords@} link below: {{URL}}", "example_title": "NER Example 1"}], "model-index": [... |
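The `tner/twitter-roberta-base-dec2021-tweetner7-2020` row belongs to the T-NER project, whose cards typically show inference through the `tner` package rather than raw `transformers`. A minimal sketch under that assumption; the input sentence is illustrative:

```python
# pip install tner
from tner import TransformersNER

model = TransformersNER("tner/twitter-roberta-base-dec2021-tweetner7-2020")
preds = model.predict(["Jacob Collier is a Grammy awarded artist from London."])
print(preds)
```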
AlexWortega/qwen11k | AlexWortega | sentence-similarity | [
"sentence-transformers",
"safetensors",
"qwen2",
"sentence-similarity",
"feature-extraction",
"generated_from_trainer",
"dataset_size:1077240",
"loss:MultipleNegativesRankingLoss",
"arxiv:1908.10084",
"arxiv:1705.00652",
"base_model:Qwen/Qwen2.5-0.5B-Instruct",
"base_model:finetune:Qwen/Qwen2.... | 2024-11-15T19:32:14 | 2024-11-15T19:33:05 | 13 | 0 | ---
base_model: Qwen/Qwen2.5-0.5B-Instruct
library_name: sentence-transformers
metrics:
- pearson_cosine
- spearman_cosine
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- sentence-similarity
- feature-extraction
- generated_from_trainer
- dataset_size:1077240
- loss:MultipleNegativesRankingLoss
widget... | [
"TEXT_CLASSIFICATION",
"SEMANTIC_SIMILARITY"
] | Non_BioNLP |
# SentenceTransformer based on Qwen/Qwen2.5-0.5B-Instruct
This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [Qwen/Qwen2.5-0.5B-Instruct](https://huggingface.co/Qwen/Qwen2.5-0.5B-Instruct). It maps sentences & paragraphs to a 896-dimensional dense vector space and can be used for semantic t... | {"base_model": "Qwen/Qwen2.5-0.5B-Instruct", "library_name": "sentence-transformers", "metrics": ["pearson_cosine", "spearman_cosine"], "pipeline_tag": "sentence-similarity", "tags": ["sentence-transformers", "sentence-similarity", "feature-extraction", "generated_from_trainer", "dataset_size:1077240", "loss:MultipleNe... |
tycjan/distilbert-pl-store-products-retrieval | tycjan | sentence-similarity | [
"sentence-transformers",
"safetensors",
"distilbert",
"sentence-similarity",
"feature-extraction",
"generated_from_trainer",
"dataset_size:2400",
"loss:MultipleNegativesRankingLoss",
"dataset:tycjan/product-query-retrieval-dataset",
"arxiv:1908.10084",
"arxiv:1705.00652",
"base_model:sentence-... | 2025-02-16T20:27:43 | 2025-02-16T20:28:19 | 9 | 0 | ---
base_model: sentence-transformers/quora-distilbert-multilingual
datasets:
- tycjan/product-query-retrieval-dataset
library_name: sentence-transformers
metrics:
- cosine_accuracy
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- sentence-similarity
- feature-extraction
- generated_from_trainer
- data... | [
"TEXT_CLASSIFICATION"
] | Non_BioNLP |
# SentenceTransformer based on sentence-transformers/quora-distilbert-multilingual
This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [sentence-transformers/quora-distilbert-multilingual](https://huggingface.co/sentence-transformers/quora-distilbert-multilingual) on the [product-query-retri... | {"base_model": "sentence-transformers/quora-distilbert-multilingual", "datasets": ["tycjan/product-query-retrieval-dataset"], "library_name": "sentence-transformers", "metrics": ["cosine_accuracy"], "pipeline_tag": "sentence-similarity", "tags": ["sentence-transformers", "sentence-similarity", "feature-extraction", "ge... |
Helsinki-NLP/opus-mt-is-de | Helsinki-NLP | translation | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"is",
"de",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 2022-03-02T23:29:04 | 2023-08-16T11:58:29 | 66 | 0 | ---
language:
- is
- de
license: apache-2.0
tags:
- translation
---
### isl-deu
* source group: Icelandic
* target group: German
* OPUS readme: [isl-deu](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/isl-deu/README.md)
* model: transformer-align
* source language(s): isl
* target language(... | [
"TRANSLATION"
] | Non_BioNLP |
### isl-deu
* source group: Icelandic
* target group: German
* OPUS readme: [isl-deu](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/isl-deu/README.md)
* model: transformer-align
* source language(s): isl
* target language(s): deu
* model: transformer-align
* pre-processing: normalization +... | {"language": ["is", "de"], "license": "apache-2.0", "tags": ["translation"]} |
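The Helsinki-NLP OPUS-MT rows (`opus-mt-is-de`, `opus-mt-es-tll`) are Marian models usable through the standard `transformers` translation pipeline. A minimal sketch with an illustrative Icelandic sentence:

```python
from transformers import pipeline

translator = pipeline("translation", model="Helsinki-NLP/opus-mt-is-de")
result = translator("Ég tala smá íslensku.")  # "I speak a little Icelandic."
print(result[0]["translation_text"])
```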
TransferGraph/CAMeL-Lab_bert-base-arabic-camelbert-mix-did-nadi-finetuned-lora-tweet_eval_irony | TransferGraph | text-classification | [
"peft",
"safetensors",
"parquet",
"text-classification",
"dataset:tweet_eval",
"base_model:CAMeL-Lab/bert-base-arabic-camelbert-mix-did-nadi",
"base_model:adapter:CAMeL-Lab/bert-base-arabic-camelbert-mix-did-nadi",
"license:apache-2.0",
"model-index",
"region:us"
] | 2024-02-27T17:33:30 | 2024-02-27T17:33:32 | 0 | 0 | ---
base_model: CAMeL-Lab/bert-base-arabic-camelbert-mix-did-nadi
datasets:
- tweet_eval
library_name: peft
license: apache-2.0
metrics:
- accuracy
tags:
- parquet
- text-classification
model-index:
- name: CAMeL-Lab_bert-base-arabic-camelbert-mix-did-nadi-finetuned-lora-tweet_eval_irony
results:
- task:
type... | [
"TEXT_CLASSIFICATION"
] | Non_BioNLP |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# CAMeL-Lab_bert-base-arabic-camelbert-mix-did-nadi-finetuned-lora-tweet_eval_irony
This model is a fine-tuned version of [CAMeL-L... | {"base_model": "CAMeL-Lab/bert-base-arabic-camelbert-mix-did-nadi", "datasets": ["tweet_eval"], "library_name": "peft", "license": "apache-2.0", "metrics": ["accuracy"], "tags": ["parquet", "text-classification"], "model-index": [{"name": "CAMeL-Lab_bert-base-arabic-camelbert-mix-did-nadi-finetuned-lora-tweet_eval_iron... |
poltextlab/xlm-roberta-large-polish-parlspeech-cap-v3 | poltextlab | text-classification | [
"pytorch",
"xlm-roberta",
"text-classification",
"pl",
"region:us"
] | 2025-01-31T10:11:26 | 2025-02-26T16:08:46 | 0 | 0 | ---
language:
- pl
metrics:
- accuracy
- f1-score
tags:
- text-classification
- pytorch
extra_gated_prompt: 'Our models are intended for academic use only. If you are not
affiliated with an academic institution, please provide a rationale for using our
models. Please allow us a few business days to manua... | [
"TRANSLATION"
] | Non_BioNLP | # xlm-roberta-large-polish-parlspeech-cap-v3
## Model description
An `xlm-roberta-large` model fine-tuned on english training data containing parliamentary speeches (oral questions, interpellations, bill debates, other plenary speeches, urgent questions) labeled with [major topic codes](https://www.comparativeagendas... | {"language": ["pl"], "metrics": ["accuracy", "f1-score"], "tags": ["text-classification", "pytorch"], "extra_gated_prompt": "Our models are intended for academic use only. If you are not affiliated with an academic institution, please provide a rationale for using our models. Please allow us a few business days to manu... |
shinjiyamas/reddit-construct-classify | shinjiyamas | null | [
"transformers",
"RobertaWithFeatures",
"license:mit",
"endpoints_compatible",
"region:us"
] | 2024-05-31T06:37:28 | 2024-05-31T08:54:47 | 6 | 1 | ---
license: mit
---
# Project Name
Provide a brief introduction to what the project does and its target audience. Describe the problems it solves or the functionality it offers.
## Features
- Custom integration of numerical features with text data using RoBERTa.
- Ability to handle complex text classi... | [
"TEXT_CLASSIFICATION"
] | Non_BioNLP |
# Project Name
Provide a brief introduction to what the project does and its target audience. Describe the problems it solves or the functionality it offers.
## Features
- Custom integration of numerical features with text data using RoBERTa.
- Ability to handle complex text classification tasks with addi... | {"license": "mit"} |
CATIE-AQ/QAmembert | CATIE-AQ | question-answering | [
"transformers",
"pytorch",
"safetensors",
"camembert",
"question-answering",
"fr",
"dataset:etalab-ia/piaf",
"dataset:fquad",
"dataset:lincoln/newsquadfr",
"dataset:pragnakalp/squad_v2_french_translated",
"dataset:CATIE-AQ/frenchQA",
"arxiv:1910.09700",
"doi:10.57967/hf/0821",
"license:mit... | 2023-01-10T16:33:26 | 2024-11-26T10:46:29 | 114 | 14 | ---
datasets:
- etalab-ia/piaf
- fquad
- lincoln/newsquadfr
- pragnakalp/squad_v2_french_translated
- CATIE-AQ/frenchQA
language: fr
library_name: transformers
license: mit
metrics:
- f1
- exact_match
pipeline_tag: question-answering
widget:
- text: Combien de personnes utilisent le français tous les jours ?
context:... | [
"QUESTION_ANSWERING"
] | Non_BioNLP |
# QAmembert
## Model Description
We present **QAmemBERT**, which is a [CamemBERT base](https://huggingface.co/camembert-base) fine-tuned for the Question-Answering task for the French language on four French Q&A datasets composed of contexts and questions with their answers inside the context (= SQuAD 1.0 format) bu... | {"datasets": ["etalab-ia/piaf", "fquad", "lincoln/newsquadfr", "pragnakalp/squad_v2_french_translated", "CATIE-AQ/frenchQA"], "language": "fr", "library_name": "transformers", "license": "mit", "metrics": ["f1", "exact_match"], "pipeline_tag": "question-answering", "widget": [{"text": "Combien de personnes utilisent le... |
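The QAmembert row is an extractive (SQuAD 1.0 format) French QA model, so the widget question from its metadata can be reproduced with the `question-answering` pipeline. The context string below is an illustrative stand-in for the card's truncated widget context:

```python
from transformers import pipeline

qa = pipeline("question-answering", model="CATIE-AQ/QAmembert")
result = qa(
    question="Combien de personnes utilisent le français tous les jours ?",
    # Hypothetical context; the card's own widget context is truncated above.
    context="On estime que 235 millions de personnes utilisent le français tous les jours.",
)
print(result["answer"], result["score"])
```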
Priyanka-Balivada/electra-5-epoch-sentiment | Priyanka-Balivada | text-classification | [
"transformers",
"pytorch",
"electra",
"text-classification",
"generated_from_trainer",
"dataset:tweet_eval",
"base_model:google/electra-small-discriminator",
"base_model:finetune:google/electra-small-discriminator",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compat... | 2023-10-29T10:22:52 | 2024-02-20T14:32:28 | 20 | 0 | ---
base_model: google/electra-small-discriminator
datasets:
- tweet_eval
license: apache-2.0
metrics:
- accuracy
- precision
- recall
tags:
- generated_from_trainer
model-index:
- name: electra-5-epoch-sentiment
results:
- task:
type: text-classification
name: Text Classification
dataset:
nam... | [
"TEXT_CLASSIFICATION"
] | Non_BioNLP |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
TOKENIZER & TRAINER CORRUPTED
# electra-5-epoch-sentiment
This model is a fine-tuned version of [google/electra-small-discrimina... | {"base_model": "google/electra-small-discriminator", "datasets": ["tweet_eval"], "license": "apache-2.0", "metrics": ["accuracy", "precision", "recall"], "tags": ["generated_from_trainer"], "model-index": [{"name": "electra-5-epoch-sentiment", "results": [{"task": {"type": "text-classification", "name": "Text Classific... |
MemorialStar/distilbert-base-uncased-finetuned-emotion | MemorialStar | text-classification | [
"transformers",
"safetensors",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:emotion",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_co... | 2024-03-02T08:03:24 | 2024-03-02T10:47:06 | 4 | 0 | ---
base_model: distilbert/distilbert-base-uncased
datasets:
- emotion
license: apache-2.0
metrics:
- accuracy
- f1
tags:
- generated_from_trainer
model-index:
- name: distilbert-base-uncased-finetuned-emotion
results:
- task:
type: text-classification
name: Text Classification
dataset:
name: ... | [
"TEXT_CLASSIFICATION"
] | Non_BioNLP |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert/distilbert-base-uncased](https://hug... | {"base_model": "distilbert/distilbert-base-uncased", "datasets": ["emotion"], "license": "apache-2.0", "metrics": ["accuracy", "f1"], "tags": ["generated_from_trainer"], "model-index": [{"name": "distilbert-base-uncased-finetuned-emotion", "results": [{"task": {"type": "text-classification", "name": "Text Classificatio... |
Helsinki-NLP/opus-mt-es-tll | Helsinki-NLP | translation | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"es",
"tll",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 2022-03-02T23:29:04 | 2023-08-16T11:33:37 | 357 | 0 | ---
license: apache-2.0
tags:
- translation
---
### opus-mt-es-tll
* source languages: es
* target languages: tll
* OPUS readme: [es-tll](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/es-tll/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* d... | [
"TRANSLATION"
] | Non_BioNLP |
### opus-mt-es-tll
* source languages: es
* target languages: tll
* OPUS readme: [es-tll](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/es-tll/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](... | {"license": "apache-2.0", "tags": ["translation"]} |
BatirayErbayVodafone/testg | BatirayErbayVodafone | text-generation | [
"transformers",
"safetensors",
"gemma2",
"text-generation",
"conversational",
"arxiv:2009.03300",
"arxiv:1905.07830",
"arxiv:1911.11641",
"arxiv:1904.09728",
"arxiv:1905.10044",
"arxiv:1907.10641",
"arxiv:1811.00937",
"arxiv:1809.02789",
"arxiv:1911.01547",
"arxiv:1705.03551",
"arxiv:2... | 2024-09-09T21:19:54 | 2024-09-10T04:52:10 | 7 | 0 | ---
base_model: google/gemma-2-9b
library_name: transformers
license: gemma
pipeline_tag: text-generation
tags:
- conversational
extra_gated_heading: Access Gemma on Hugging Face
extra_gated_prompt: To access Gemma on Hugging Face, you’re required to review and
agree to Google’s usage license. To do this, please ensu... | [
"QUESTION_ANSWERING",
"SUMMARIZATION"
] | Non_BioNLP |
# Gemma 2 model card
**Model Page**: [Gemma](https://ai.google.dev/gemma/docs)
**Resources and Technical Documentation**:
* [Responsible Generative AI Toolkit][rai-toolkit]
* [Gemma on Kaggle][kaggle-gemma]
* [Gemma on Vertex Model Garden][vertex-mg-gemma]
**Terms of Use**: [Terms](https://www.kaggle.com/models/g... | {"base_model": "google/gemma-2-9b", "library_name": "transformers", "license": "gemma", "pipeline_tag": "text-generation", "tags": ["conversational"], "extra_gated_heading": "Access Gemma on Hugging Face", "extra_gated_prompt": "To access Gemma on Hugging Face, you’re required to review and agree to Google’s usage lice... |
dmedhi/eng2french-t5-small | dmedhi | translation | [
"peft",
"safetensors",
"translation",
"transformers",
"en",
"fr",
"dataset:opus100",
"base_model:google-t5/t5-small",
"base_model:adapter:google-t5/t5-small",
"license:apache-2.0",
"region:us"
] | 2023-12-19T11:12:27 | 2023-12-19T18:12:31 | 12 | 0 | ---
base_model: t5-small
datasets:
- opus100
language:
- en
- fr
library_name: peft
license: apache-2.0
tags:
- translation
- safetensors
- transformers
---
# Model Card for Model ID
A language translation model fine-tuned on **opus100** dataset for *English to French* translation.
## Model Description
- **Model t... | [
"TRANSLATION"
] | Non_BioNLP |
# Model Card for Model ID
A language translation model fine-tuned on **opus100** dataset for *English to French* translation.
## Model Description
- **Model type:** Language Model
- **Language(s) (NLP):** English, French
- **License:** Apache 2.0
- **Finetuned from model:** [T5-small](https://huggingface.co/t5-sma... | {"base_model": "t5-small", "datasets": ["opus100"], "language": ["en", "fr"], "library_name": "peft", "license": "apache-2.0", "tags": ["translation", "safetensors", "transformers"]} |
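The `dmedhi/eng2french-t5-small` row is a PEFT adapter (`library_name: peft`) on top of `t5-small`, so inference loads the base model first and then attaches the adapter. A minimal sketch; whether this adapter expects the usual T5 `translate English to French:` task prefix is an assumption, not confirmed by the truncated card:

```python
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer
from peft import PeftModel

tokenizer = AutoTokenizer.from_pretrained("t5-small")
base = AutoModelForSeq2SeqLM.from_pretrained("t5-small")
model = PeftModel.from_pretrained(base, "dmedhi/eng2french-t5-small")

# The task prefix follows T5 convention; the card does not confirm it.
inputs = tokenizer("translate English to French: How are you?", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```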
elybes/IFRS_en_ar_translation | elybes | translation | [
"transformers",
"safetensors",
"marian",
"text2text-generation",
"finance",
"IFRS",
"translation",
"ar",
"en",
"dataset:elybes/IFRS",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 2024-07-30T09:09:53 | 2024-08-13T20:39:10 | 28 | 1 | ---
datasets:
- elybes/IFRS
language:
- ar
- en
metrics:
- bleu
pipeline_tag: translation
tags:
- finance
- IFRS
- translation
---
| [
"TRANSLATION"
] | Non_BioNLP | {"datasets": ["elybes/IFRS"], "language": ["ar", "en"], "metrics": ["bleu"], "pipeline_tag": "translation", "tags": ["finance", "IFRS", "translation"]} | |
LoneStriker/bagel-7b-v0.1-5.0bpw-h6-exl2-2 | LoneStriker | text-generation | [
"transformers",
"safetensors",
"mistral",
"text-generation",
"conversational",
"dataset:ai2_arc",
"dataset:unalignment/spicy-3.1",
"dataset:codeparrot/apps",
"dataset:facebook/belebele",
"dataset:boolq",
"dataset:jondurbin/cinematika-v0.1",
"dataset:drop",
"dataset:lmsys/lmsys-chat-1m",
"d... | 2023-12-13T18:02:32 | 2023-12-13T18:06:31 | 6 | 0 | ---
datasets:
- ai2_arc
- unalignment/spicy-3.1
- codeparrot/apps
- facebook/belebele
- boolq
- jondurbin/cinematika-v0.1
- drop
- lmsys/lmsys-chat-1m
- TIGER-Lab/MathInstruct
- cais/mmlu
- Muennighoff/natural-instructions
- openbookqa
- piqa
- Vezora/Tested-22k-Python-Alpaca
- cakiki/rosetta-code
- Open-Orca/SlimOrca
... | [
"QUESTION_ANSWERING"
] | Non_BioNLP |
# A bagel, with everything (except DPO)

## Overview
This is the pre-DPO version of the mistral-7b model fine-tuned with https://github.com/jondurbin/bagel
You probably want the higher performing model that underwent DPO: https://huggingface.co/jondurbin/bagel-dpo-7b-v0.1
The only benefit to th... | {"datasets": ["ai2_arc", "unalignment/spicy-3.1", "codeparrot/apps", "facebook/belebele", "boolq", "jondurbin/cinematika-v0.1", "drop", "lmsys/lmsys-chat-1m", "TIGER-Lab/MathInstruct", "cais/mmlu", "Muennighoff/natural-instructions", "openbookqa", "piqa", "Vezora/Tested-22k-Python-Alpaca", "cakiki/rosetta-code", "Open-... |
Lots-of-LoRAs/Mistral-7B-Instruct-v0.2-4b-r16-task660 | Lots-of-LoRAs | null | [
"pytorch",
"safetensors",
"en",
"arxiv:1910.09700",
"arxiv:2407.00066",
"base_model:mistralai/Mistral-7B-Instruct-v0.2",
"base_model:finetune:mistralai/Mistral-7B-Instruct-v0.2",
"license:mit",
"region:us"
] | 2025-01-05T14:09:07 | 2025-01-05T14:09:13 | 0 | 0 | ---
base_model: mistralai/Mistral-7B-Instruct-v0.2
language: en
library_name: pytorch
license: mit
---
# Model Card for Mistral-7B-Instruct-v0.2-4b-r16-task660
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -... | [
"TRANSLATION"
] | Non_BioNLP |
# Model Card for Mistral-7B-Instruct-v0.2-4b-r16-task660
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
LoRA trained on task660_mizan_fa_en_translation
- **Developed by:** bruel
- **Funded by [optional]... | {"base_model": "mistralai/Mistral-7B-Instruct-v0.2", "language": "en", "library_name": "pytorch", "license": "mit"} |
pardeep/distilbert-base-uncased-finetuned-emotion-ch02 | pardeep | text-classification | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:emotion",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 2022-07-17T10:31:08 | 2022-07-17T10:54:29 | 104 | 0 | ---
datasets:
- emotion
license: apache-2.0
metrics:
- accuracy
- f1
tags:
- generated_from_trainer
model-index:
- name: distilbert-base-uncased-finetuned-emotion-ch02
results:
- task:
type: text-classification
name: Text Classification
dataset:
name: emotion
type: emotion
args: de... | [
"TEXT_CLASSIFICATION"
] | Non_BioNLP |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion-ch02
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingfa... | {"datasets": ["emotion"], "license": "apache-2.0", "metrics": ["accuracy", "f1"], "tags": ["generated_from_trainer"], "model-index": [{"name": "distilbert-base-uncased-finetuned-emotion-ch02", "results": [{"task": {"type": "text-classification", "name": "Text Classification"}, "dataset": {"name": "emotion", "type": "em... |
potsawee/t5-large-generation-race-QuestionAnswer | potsawee | text2text-generation | [
"transformers",
"pytorch",
"t5",
"text2text-generation",
"en",
"dataset:race",
"arxiv:2301.12307",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | 2023-02-22T23:41:18 | 2023-03-12T16:10:27 | 83 | 16 | ---
datasets:
- race
language:
- en
library_name: transformers
license: apache-2.0
pipeline_tag: text2text-generation
---
# t5-large fine-tuned to RACE for Generating Question+Answer
- Input: `context` (e.g. news article)
- Output: `question <sep> answer`
This model generates **abstractive** answers following the RACE... | [
"QUESTION_ANSWERING",
"SUMMARIZATION"
] | Non_BioNLP | # t5-large fine-tuned to RACE for Generating Question+Answer
- Input: `context` (e.g. news article)
- Output: `question <sep> answer`
This model generates **abstractive** answers following the RACE dataset. If you would like to have **extractive** questions/answers, you can use our model trained on SQuAD: https://hugg... | {"datasets": ["race"], "language": ["en"], "library_name": "transformers", "license": "apache-2.0", "pipeline_tag": "text2text-generation"} |
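The `potsawee/t5-large-generation-race-QuestionAnswer` row states the output format is `question <sep> answer`, so generation is a single seq2seq pass followed by a split on the separator. A minimal sketch with an illustrative context; it assumes the decoded string contains a literal `<sep>` marker, as the card's stated format implies:

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

model_id = "potsawee/t5-large-generation-race-QuestionAnswer"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

context = "The Amazon rainforest produces about 20 percent of the world's oxygen."  # illustrative
inputs = tokenizer(context, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)

# Keep special tokens so the <sep> marker survives, then strip pad/eos.
decoded = tokenizer.decode(outputs[0], skip_special_tokens=False)
decoded = decoded.replace(tokenizer.pad_token, "").replace(tokenizer.eos_token, "")
question, answer = decoded.split("<sep>")
print(question.strip(), "|", answer.strip())
```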
Atharvgarg/bert-small2bert-small-finetuned-cnn_daily_mail-summarization-finetuned-bbc-news-old | Atharvgarg | text2text-generation | [
"transformers",
"pytorch",
"tensorboard",
"encoder-decoder",
"text2text-generation",
"summarisation",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 2022-07-28T15:24:58 | 2022-07-28T16:04:21 | 20 | 0 | ---
license: apache-2.0
metrics:
- rouge
tags:
- summarisation
- generated_from_trainer
model-index:
- name: bert-small2bert-small-finetuned-cnn_daily_mail-summarization-finetuned-bbc-news-old
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to... | [
"SUMMARIZATION"
] | Non_BioNLP |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-small2bert-small-finetuned-cnn_daily_mail-summarization-finetuned-bbc-news-old
This model is a fine-tuned version of [mrm84... | {"license": "apache-2.0", "metrics": ["rouge"], "tags": ["summarisation", "generated_from_trainer"], "model-index": [{"name": "bert-small2bert-small-finetuned-cnn_daily_mail-summarization-finetuned-bbc-news-old", "results": []}]} |
aiola/roberta-large-corener | aiola | fill-mask | [
"transformers",
"pytorch",
"roberta",
"fill-mask",
"NER",
"named entity recognition",
"RE",
"relation extraction",
"entity mention detection",
"EMD",
"coreference resolution",
"en",
"dataset:Ontonotes",
"dataset:CoNLL04",
"license:afl-3.0",
"autotrain_compatible",
"endpoints_compatib... | 2022-05-25T08:13:41 | 2022-07-03T14:16:17 | 102 | 2 | ---
datasets:
- Ontonotes
- CoNLL04
language:
- en
license: afl-3.0
tags:
- NER
- named entity recognition
- RE
- relation extraction
- entity mention detection
- EMD
- coreference resolution
---
# CoReNer
## Demo
We released an online demo so you can easily play with the model. Check it out: [http://corener-demo.ai... | [
"NAMED_ENTITY_RECOGNITION",
"RELATION_EXTRACTION",
"COREFERENCE_RESOLUTION"
] | Non_BioNLP |
# CoReNer
## Demo
We released an online demo so you can easily play with the model. Check it out: [http://corener-demo.aiola-lab.com](http://corener-demo.aiola-lab.com).
The demo uses the [aiola/roberta-base-corener](https://huggingface.co/aiola/roberta-base-corener) model.
## Model description
A multi-task model... | {"datasets": ["Ontonotes", "CoNLL04"], "language": ["en"], "license": "afl-3.0", "tags": ["NER", "named entity recognition", "RE", "relation extraction", "entity mention detection", "EMD", "coreference resolution"]} |
gigauser/kcbert_nsmc_tuning | gigauser | text-classification | [
"transformers",
"tensorboard",
"safetensors",
"bert",
"text-classification",
"generated_from_trainer",
"dataset:nsmc",
"base_model:beomi/kcbert-base",
"base_model:finetune:beomi/kcbert-base",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 2024-07-07T14:01:09 | 2024-07-08T06:00:35 | 12 | 0 | ---
base_model: beomi/kcbert-base
datasets:
- nsmc
license: apache-2.0
metrics:
- accuracy
tags:
- generated_from_trainer
model-index:
- name: kcbert_nsmc_tuning
results:
- task:
type: text-classification
name: Text Classification
dataset:
name: nsmc
type: nsmc
config: default
... | [
"TEXT_CLASSIFICATION"
] | Non_BioNLP |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# kcbert_nsmc_tuning
This model is a fine-tuned version of [beomi/kcbert-base](https://huggingface.co/beomi/kcbert-base) on the ns... | {"base_model": "beomi/kcbert-base", "datasets": ["nsmc"], "license": "apache-2.0", "metrics": ["accuracy"], "tags": ["generated_from_trainer"], "model-index": [{"name": "kcbert_nsmc_tuning", "results": [{"task": {"type": "text-classification", "name": "Text Classification"}, "dataset": {"name": "nsmc", "type": "nsmc", ... |
seongwkim/distilbert-base-uncased-finetuned-emotion | seongwkim | text-classification | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:emotion",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 2022-04-21T07:26:46 | 2022-04-21T08:34:19 | 120 | 0 | ---
datasets:
- emotion
license: apache-2.0
metrics:
- accuracy
- f1
tags:
- generated_from_trainer
model-index:
- name: distilbert-base-uncased-finetuned-emotion
results:
- task:
type: text-classification
name: Text Classification
dataset:
name: emotion
type: emotion
args: default... | [
"TEXT_CLASSIFICATION"
] | Non_BioNLP |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co... | {"datasets": ["emotion"], "license": "apache-2.0", "metrics": ["accuracy", "f1"], "tags": ["generated_from_trainer"], "model-index": [{"name": "distilbert-base-uncased-finetuned-emotion", "results": [{"task": {"type": "text-classification", "name": "Text Classification"}, "dataset": {"name": "emotion", "type": "emotion... |
cgus/granite-3.2-8b-instruct-preview-exl2 | cgus | text-generation | [
"exllamav2",
"granite",
"language",
"granite-3.2",
"text-generation",
"conversational",
"arxiv:0000.00000",
"base_model:ibm-granite/granite-3.2-8b-instruct-preview",
"base_model:quantized:ibm-granite/granite-3.2-8b-instruct-preview",
"license:apache-2.0",
"4-bit",
"exl2",
"region:us"
] | 2025-02-08T21:56:20 | 2025-02-09T09:37:46 | 60 | 0 | ---
base_model:
- ibm-granite/granite-3.2-8b-instruct-preview
library_name: exllamav2
license: apache-2.0
pipeline_tag: text-generation
tags:
- language
- granite-3.2
inference: false
---
# Granite-3.2-8B-Instruct-Preview-exl2
Original model: [Granite-3.2-8B-Instruct-Preview](https://huggingface.co/ibm-granite/granite-... | [
"TEXT_CLASSIFICATION",
"SUMMARIZATION"
] | Non_BioNLP | # Granite-3.2-8B-Instruct-Preview-exl2
Original model: [Granite-3.2-8B-Instruct-Preview](https://huggingface.co/ibm-granite/granite-3.2-8b-instruct-preview)
Made by: [Granite Team, IBM](https://huggingface.co/ibm-granite)
## Quants
[4bpw h6 (main)](https://huggingface.co/cgus/granite-3.2-8b-instruct-preview-exl2/tre... | {"base_model": ["ibm-granite/granite-3.2-8b-instruct-preview"], "library_name": "exllamav2", "license": "apache-2.0", "pipeline_tag": "text-generation", "tags": ["language", "granite-3.2"], "inference": false} |
cbpuschmann/BERT-klimacoder_v0.3 | cbpuschmann | text-classification | [
"tensorboard",
"safetensors",
"bert",
"autotrain",
"text-classification",
"base_model:google-bert/bert-base-uncased",
"base_model:finetune:google-bert/bert-base-uncased",
"region:us"
] | 2024-12-02T15:17:31 | 2024-12-02T15:18:12 | 4 | 0 | ---
base_model: google-bert/bert-base-uncased
tags:
- autotrain
- text-classification
widget:
- text: I love AutoTrain
---
# Model Trained Using AutoTrain
- Problem type: Text Classification
## Validation Metrics
loss: 0.05558604374527931
f1: 0.9881956155143339
precision: 0.9881956155143339
recall: 0.988195615514... | [
"TEXT_CLASSIFICATION"
] | Non_BioNLP |
# Model Trained Using AutoTrain
- Problem type: Text Classification
## Validation Metrics
loss: 0.05558604374527931
f1: 0.9881956155143339
precision: 0.9881956155143339
recall: 0.9881956155143339
auc: 0.9994592560589801
accuracy: 0.988313856427379
| {"base_model": "google-bert/bert-base-uncased", "tags": ["autotrain", "text-classification"], "widget": [{"text": "I love AutoTrain"}]} |
pathfinderNdoma/online-doctor-model | pathfinderNdoma | question-answering | [
"transformers",
"safetensors",
"bert",
"question-answering",
"base_model:dmis-lab/biobert-v1.1",
"base_model:finetune:dmis-lab/biobert-v1.1",
"license:creativeml-openrail-m",
"endpoints_compatible",
"region:us"
] | 2024-10-23T17:31:19 | 2024-10-23T17:58:48 | 8 | 0 | ---
base_model:
- dmis-lab/biobert-v1.1
library_name: transformers
license: creativeml-openrail-m
pipeline_tag: question-answering
---
library_name: transformers
tags: [biomedical, question-answering, healthcare]
---
# Model Card for Online Doctor Model
This model is a fine-tuned version of the `dmis-lab/biober... | [
"QUESTION_ANSWERING"
] | BioNLP |
library_name: transformers
tags: [biomedical, question-answering, healthcare]
---
# Model Card for Online Doctor Model
This model is a fine-tuned version of the `dmis-lab/biobert-large-cased-v1.1-squad` model. It is designed to answer questions related to diseases based on symptom descriptions, providing a ques... | {"base_model": ["dmis-lab/biobert-v1.1"], "library_name": "transformers", "license": "creativeml-openrail-m", "pipeline_tag": "question-answering"} |
RichardErkhov/EleutherAI_-_pythia-70m-deduped-8bits | RichardErkhov | text-generation | [
"transformers",
"safetensors",
"gpt_neox",
"text-generation",
"arxiv:2304.01373",
"arxiv:2101.00027",
"arxiv:2201.07311",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"8-bit",
"bitsandbytes",
"region:us"
] | 2024-04-23T07:49:50 | 2024-04-23T07:50:27 | 5 | 0 | ---
{}
---
Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
pythia-70m-deduped - bnb 8bits
- Model creator: https://huggingface.co/EleutherAI/
- Original model: https://hugg... | [
"QUESTION_ANSWERING",
"TRANSLATION"
] | Non_BioNLP | Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
pythia-70m-deduped - bnb 8bits
- Model creator: https://huggingface.co/EleutherAI/
- Original model: https://huggingface.co/... | {} |
tmnam20/xlm-roberta-base-sst2-10 | tmnam20 | text-classification | [
"transformers",
"safetensors",
"xlm-roberta",
"text-classification",
"generated_from_trainer",
"en",
"dataset:tmnam20/VieGLUE",
"base_model:FacebookAI/xlm-roberta-base",
"base_model:finetune:FacebookAI/xlm-roberta-base",
"license:mit",
"model-index",
"autotrain_compatible",
"endpoints_compat... | 2024-01-16T11:10:24 | 2024-01-16T11:12:06 | 7 | 0 | ---
base_model: xlm-roberta-base
datasets:
- tmnam20/VieGLUE
language:
- en
license: mit
metrics:
- accuracy
tags:
- generated_from_trainer
model-index:
- name: xlm-roberta-base-sst2-10
results:
- task:
type: text-classification
name: Text Classification
dataset:
name: tmnam20/VieGLUE/SST2
... | [
"TEXT_CLASSIFICATION"
] | Non_BioNLP |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# xlm-roberta-base-sst2-10
This model is a fine-tuned version of [xlm-roberta-base](https://huggingface.co/xlm-roberta-base) on th... | {"base_model": "xlm-roberta-base", "datasets": ["tmnam20/VieGLUE"], "language": ["en"], "license": "mit", "metrics": ["accuracy"], "tags": ["generated_from_trainer"], "model-index": [{"name": "xlm-roberta-base-sst2-10", "results": [{"task": {"type": "text-classification", "name": "Text Classification"}, "dataset": {"na... |
RichardErkhov/bigscience_-_bloomz-1b7-8bits | RichardErkhov | text-generation | [
"transformers",
"safetensors",
"bloom",
"text-generation",
"arxiv:2211.01786",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"8-bit",
"bitsandbytes",
"region:us"
] | 2024-07-20T11:12:24 | 2024-07-20T11:14:08 | 76 | 0 | ---
{}
---
Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
bloomz-1b7 - bnb 8bits
- Model creator: https://huggingface.co/bigscience/
- Original model: https://huggingface.... | [
"COREFERENCE_RESOLUTION",
"TRANSLATION"
] | Non_BioNLP | Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
bloomz-1b7 - bnb 8bits
- Model creator: https://huggingface.co/bigscience/
- Original model: https://huggingface.co/bigscien... | {} |
fabiancpl/nlbse25_java | fabiancpl | text-classification | [
"setfit",
"safetensors",
"bert",
"sentence-transformers",
"text-classification",
"generated_from_setfit_trainer",
"arxiv:2209.11055",
"region:us"
] | 2024-12-13T02:21:09 | 2024-12-13T02:21:16 | 8 | 0 | ---
library_name: setfit
metrics:
- accuracy
pipeline_tag: text-classification
tags:
- setfit
- sentence-transformers
- text-classification
- generated_from_setfit_trainer
widget: []
inference: true
---
# SetFit
This is a [SetFit](https://github.com/huggingface/setfit) model that can be used for Text Classification. ... | [
"TEXT_CLASSIFICATION"
] | Non_BioNLP |
# SetFit
This is a [SetFit](https://github.com/huggingface/setfit) model that can be used for Text Classification. A RandomForestClassifier instance is used for classification.
The model has been trained using an efficient few-shot learning technique that involves:
1. Fine-tuning a [Sentence Transformer](https://ww... | {"library_name": "setfit", "metrics": ["accuracy"], "pipeline_tag": "text-classification", "tags": ["setfit", "sentence-transformers", "text-classification", "generated_from_setfit_trainer"], "widget": [], "inference": true} |
anismahmahi/G2-with-noPropaganda-multilabel-setfit-model | anismahmahi | text-classification | [
"setfit",
"safetensors",
"mpnet",
"sentence-transformers",
"text-classification",
"generated_from_setfit_trainer",
"arxiv:2209.11055",
"base_model:sentence-transformers/paraphrase-mpnet-base-v2",
"base_model:finetune:sentence-transformers/paraphrase-mpnet-base-v2",
"model-index",
"region:us"
] | 2024-01-06T01:07:57 | 2024-01-06T01:08:14 | 3 | 0 | ---
base_model: sentence-transformers/paraphrase-mpnet-base-v2
library_name: setfit
metrics:
- accuracy
pipeline_tag: text-classification
tags:
- setfit
- sentence-transformers
- text-classification
- generated_from_setfit_trainer
widget:
- text: But the author is Bharath Ganesh.
- text: The documents, which suggest al... | [
"TEXT_CLASSIFICATION"
] | Non_BioNLP |
# SetFit with sentence-transformers/paraphrase-mpnet-base-v2
This is a [SetFit](https://github.com/huggingface/setfit) model that can be used for Text Classification. This SetFit model uses [sentence-transformers/paraphrase-mpnet-base-v2](https://huggingface.co/sentence-transformers/paraphrase-mpnet-base-v2) as the S... | {"base_model": "sentence-transformers/paraphrase-mpnet-base-v2", "library_name": "setfit", "metrics": ["accuracy"], "pipeline_tag": "text-classification", "tags": ["setfit", "sentence-transformers", "text-classification", "generated_from_setfit_trainer"], "widget": [{"text": "But the author is Bharath Ganesh."}, {"text... |
eunbi-jeong/gpt2 | eunbi-jeong | translation | [
"translation",
"en",
"dataset:hellaswag",
"region:us"
] | 2023-08-25T06:17:58 | 2023-08-25T06:19:07 | 0 | 0 | ---
datasets:
- hellaswag
language:
- en
pipeline_tag: translation
---
| [
"TRANSLATION"
] | Non_BioNLP | {"datasets": ["hellaswag"], "language": ["en"], "pipeline_tag": "translation"} | |
jaesani/large_eng_summarizer | jaesani | summarization | [
"transformers",
"safetensors",
"bart",
"text2text-generation",
"code",
"summarization",
"en",
"dataset:npc-engine/light-batch-summarize-dialogue",
"base_model:facebook/bart-large-cnn",
"base_model:finetune:facebook/bart-large-cnn",
"license:mit",
"autotrain_compatible",
"endpoints_compatible... | 2024-09-19T11:13:07 | 2024-09-19T12:30:22 | 29 | 0 | ---
base_model:
- facebook/bart-large-cnn
datasets:
- npc-engine/light-batch-summarize-dialogue
language:
- en
library_name: transformers
license: mit
metrics:
- accuracy
pipeline_tag: summarization
tags:
- code
---
Model Card: Large English Summarizer
Model Overview
This model is a large-scale transformer-based summa... | [
"SUMMARIZATION"
] | Non_BioNLP |
Model Card: Large English Summarizer
Model Overview
This model is a large-scale transformer-based summarization model, designed for producing concise and coherent summaries of English text. It leverages the power of pre-trained language models to generate summaries while maintaining key information.
Intended Use
The ... | {"base_model": ["facebook/bart-large-cnn"], "datasets": ["npc-engine/light-batch-summarize-dialogue"], "language": ["en"], "library_name": "transformers", "license": "mit", "metrics": ["accuracy"], "pipeline_tag": "summarization", "tags": ["code"]} |
fathyshalab/reklambox2-6-17 | fathyshalab | text-classification | [
"sentence-transformers",
"pytorch",
"xlm-roberta",
"setfit",
"text-classification",
"arxiv:2209.11055",
"license:apache-2.0",
"region:us"
] | 2023-03-02T22:29:07 | 2023-03-03T00:08:34 | 8 | 0 | ---
license: apache-2.0
pipeline_tag: text-classification
tags:
- setfit
- sentence-transformers
- text-classification
---
# fathyshalab/reklambox2-6-17
This is a [SetFit model](https://github.com/huggingface/setfit) that can be used for text classification. The model has been trained using an efficient few-shot lear... | [
"TEXT_CLASSIFICATION"
] | Non_BioNLP |
# fathyshalab/reklambox2-6-17
This is a [SetFit model](https://github.com/huggingface/setfit) that can be used for text classification. The model has been trained using an efficient few-shot learning technique that involves:
1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive learning.
2.... | {"license": "apache-2.0", "pipeline_tag": "text-classification", "tags": ["setfit", "sentence-transformers", "text-classification"]} |
leejaymin/etri-ones-llama3.1-8b-ko | leejaymin | text-generation | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | 2024-08-16T17:18:09 | 2024-09-06T07:53:30 | 8 | 1 | ---
library_name: transformers
tags: []
---
# Model Card for `leejaymin/etri-ones-llama3.1-8b-ko`
## Model Summary
This model is a fine-tuned version of LLaMA 3.1 (8B) using QLoRA (Quantized Low-Rank Adaptation) techniques, specifically trained on Korean language datasets. It is optimized for understanding and gener... | [
"TRANSLATION",
"SUMMARIZATION"
] | Non_BioNLP |
# Model Card for `leejaymin/etri-ones-llama3.1-8b-ko`
## Model Summary
This model is a fine-tuned version of LLaMA 3.1 (8B) using QLoRA (Quantized Low-Rank Adaptation) techniques, specifically trained on Korean language datasets. It is optimized for understanding and generating text in Korean, making it suitable for... | {"library_name": "transformers", "tags": []} |
SakshamJain/Temp | SakshamJain | summarization | [
"transformers",
"t5",
"text2text-generation",
"summarization",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 2023-11-02T06:39:07 | 2023-11-02T06:41:59 | 14 | 0 | ---
pipeline_tag: summarization
---
| [
"SUMMARIZATION"
] | Non_BioNLP | {"pipeline_tag": "summarization"} | |
yjgwak/klue-bert-base-finetuned-squad-kor-v1 | yjgwak | question-answering | [
"transformers",
"pytorch",
"safetensors",
"bert",
"question-answering",
"korean",
"klue",
"squad-kor-v1",
"ko",
"arxiv:2105.09680",
"license:cc-by-sa-4.0",
"endpoints_compatible",
"region:us"
] | 2023-09-08T03:11:04 | 2023-09-11T02:52:58 | 206 | 1 | ---
language: ko
license: cc-by-sa-4.0
tags:
- korean
- klue
- squad-kor-v1
mask_token: '[MASK]'
widget:
- text: 바그너는 괴테의 파우스트를 읽고 무엇을 쓰고자 했는가?
context: 1839년 바그너는 괴테의 파우스트을 처음 읽고 그 내용에 마음이 끌려 이를 소재로 해서 하나의 교향곡을 쓰려는 뜻을 갖는다.
이 시기 바그너는 1838년에 빛 독촉으로 산전수전을 다 걲은 상황이라 좌절과 실망에 가득했으며 메피스토펠레스를 만나는 파우스트의 심경에 공감했다고
한다.... | [
"QUESTION_ANSWERING"
] | Non_BioNLP |
# KLUE BERT base Finetuned on squad-kor-v1
## Table of Contents
- [Model Details](#model-details)
- [How to Get Started With the Model](#how-to-get-started-with-the-model)
- [Uses](#uses)
- [Training](#training)
- [Evaluation](#evaluation)
- [Technical Specifications](#technical-specifications)
- [Citation Informatio... | {"language": "ko", "license": "cc-by-sa-4.0", "tags": ["korean", "klue", "squad-kor-v1"], "mask_token": "[MASK]", "widget": [{"text": "바그너는 괴테의 파우스트를 읽고 무엇을 쓰고자 했는가?", "context": "1839년 바그너는 괴테의 파우스트을 처음 읽고 그 내용에 마음이 끌려 이를 소재로 해서 하나의 교향곡을 쓰려는 뜻을 갖는다. 이 시기 바그너는 1838년에 빛 독촉으로 산전수전을 다 걲은 상황이라 좌절과 실망에 가득했으며 메피스토펠레스를 만나는 파우... |
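A minimal extractive-QA sketch for the `yjgwak/klue-bert-base-finetuned-squad-kor-v1` row above, reusing the question and a shortened context from its own widget; assumes the standard `transformers` pipeline API:
```python
from transformers import pipeline

qa = pipeline(
    "question-answering",
    model="yjgwak/klue-bert-base-finetuned-squad-kor-v1",
)
result = qa(
    question="바그너는 괴테의 파우스트를 읽고 무엇을 쓰고자 했는가?",
    context=(
        "1839년 바그너는 괴테의 파우스트을 처음 읽고 그 내용에 마음이 끌려 "
        "이를 소재로 해서 하나의 교향곡을 쓰려는 뜻을 갖는다."
    ),
)
print(result["answer"])  # expected: something like "교향곡" (a symphony)
```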
pinzhenchen/sft-lora-de-pythia-2b8 | pinzhenchen | null | [
"generation",
"question answering",
"instruction tuning",
"de",
"arxiv:2309.08958",
"license:cc-by-nc-4.0",
"region:us"
] | 2024-03-05T23:52:43 | 2024-03-05T23:52:46 | 0 | 0 | ---
language:
- de
license: cc-by-nc-4.0
tags:
- generation
- question answering
- instruction tuning
---
### Model Description
This HF repository contains base LLMs that were instruction-tuned (SFT) with LoRA and then used to study whether monolingual or multilingual instruction tuning is more favourable.
* [GitHub](https://... | [
"QUESTION_ANSWERING"
] | Non_BioNLP |
### Model Description
This HF repository contains base LLMs that were instruction-tuned (SFT) with LoRA and then used to study whether monolingual or multilingual instruction tuning is more favourable.
* [GitHub](https://github.com/hplt-project/monolingual-multilingual-instruction-tuning/tree/main)
* [Paper](https://arxiv.org... | {"language": ["de"], "license": "cc-by-nc-4.0", "tags": ["generation", "question answering", "instruction tuning"]} |
TheBloke/Airoboros-M-7B-3.1.2-GGUF | TheBloke | null | [
"transformers",
"gguf",
"mistral",
"dataset:jondurbin/airoboros-3.1",
"base_model:jondurbin/airoboros-m-7b-3.1.2",
"base_model:quantized:jondurbin/airoboros-m-7b-3.1.2",
"license:apache-2.0",
"region:us"
] | 2023-10-19T16:41:52 | 2023-10-19T16:45:56 | 437 | 13 | ---
base_model: jondurbin/airoboros-m-7b-3.1.2
datasets:
- jondurbin/airoboros-3.1
license: apache-2.0
model_name: Airoboros M 7B 3.1.2
inference: false
model_creator: Jon Durbin
model_type: mistral
prompt_template: '[INST] <<SYS>>
You are a helpful, unbiased, uncensored assistant.
<</SYS>>
{prompt} [/INST]
... | [
"QUESTION_ANSWERING",
"SUMMARIZATION"
] | Non_BioNLP |
<!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<... | {"base_model": "jondurbin/airoboros-m-7b-3.1.2", "datasets": ["jondurbin/airoboros-3.1"], "license": "apache-2.0", "model_name": "Airoboros M 7B 3.1.2", "inference": false, "model_creator": "Jon Durbin", "model_type": "mistral", "prompt_template": "[INST] <<SYS>>\nYou are a helpful, unbiased, uncensored assistant.\n<</... |
Netta1994/setfit_baai_wix_qa_gpt-4o_improved-cot-instructions_two_reasoning_only_reasoning_1726 | Netta1994 | text-classification | [
"setfit",
"safetensors",
"bert",
"sentence-transformers",
"text-classification",
"generated_from_setfit_trainer",
"arxiv:2209.11055",
"base_model:BAAI/bge-base-en-v1.5",
"base_model:finetune:BAAI/bge-base-en-v1.5",
"model-index",
"region:us"
] | 2024-09-19T14:07:31 | 2024-09-19T14:08:07 | 7 | 0 | ---
base_model: BAAI/bge-base-en-v1.5
library_name: setfit
metrics:
- accuracy
pipeline_tag: text-classification
tags:
- setfit
- sentence-transformers
- text-classification
- generated_from_setfit_trainer
widget:
- text: "Reasoning for Good:\n1. **Context Grounding**: The answer is well-supported\
\ by the provide... | [
"TEXT_CLASSIFICATION"
] | Non_BioNLP |
# SetFit with BAAI/bge-base-en-v1.5
This is a [SetFit](https://github.com/huggingface/setfit) model that can be used for Text Classification. This SetFit model uses [BAAI/bge-base-en-v1.5](https://huggingface.co/BAAI/bge-base-en-v1.5) as the Sentence Transformer embedding model. A [LogisticRegression](https://scikit-... | {"base_model": "BAAI/bge-base-en-v1.5", "library_name": "setfit", "metrics": ["accuracy"], "pipeline_tag": "text-classification", "tags": ["setfit", "sentence-transformers", "text-classification", "generated_from_setfit_trainer"], "widget": [{"text": "Reasoning for Good:\n1. **Context Grounding**: The answer is well-su... |
rdpratti/distilbert-base-uncased-finetuned-emotion | rdpratti | text-classification | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:emotion",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 2023-03-06T20:33:32 | 2023-03-17T12:57:20 | 11 | 0 | ---
datasets:
- emotion
license: apache-2.0
metrics:
- accuracy
- f1
tags:
- generated_from_trainer
model-index:
- name: distilbert-base-uncased-finetuned-emotion
results:
- task:
type: text-classification
name: Text Classification
dataset:
name: emotion
type: emotion
args: split
... | [
"TEXT_CLASSIFICATION"
] | Non_BioNLP |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co... | {"datasets": ["emotion"], "license": "apache-2.0", "metrics": ["accuracy", "f1"], "tags": ["generated_from_trainer"], "model-index": [{"name": "distilbert-base-uncased-finetuned-emotion", "results": [{"task": {"type": "text-classification", "name": "Text Classification"}, "dataset": {"name": "emotion", "type": "emotion... |
Helsinki-NLP/opus-mt-en-cel | Helsinki-NLP | translation | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"en",
"gd",
"ga",
"br",
"kw",
"gv",
"cy",
"cel",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 2022-03-02T23:29:04 | 2023-08-16T11:29:12 | 47 | 0 | ---
language:
- en
- gd
- ga
- br
- kw
- gv
- cy
- cel
license: apache-2.0
tags:
- translation
---
### eng-cel
* source group: English
* target group: Celtic languages
* OPUS readme: [eng-cel](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/eng-cel/README.md)
* model: transformer
* source la... | [
"TRANSLATION"
] | Non_BioNLP |
### eng-cel
* source group: English
* target group: Celtic languages
* OPUS readme: [eng-cel](https://github.com/Helsinki-NLP/Tatoeba-Challenge/tree/master/models/eng-cel/README.md)
* model: transformer
* source language(s): eng
* target language(s): bre cor cym gla gle glv
* model: transformer
* pre-processing:... | {"language": ["en", "gd", "ga", "br", "kw", "gv", "cy", "cel"], "license": "apache-2.0", "tags": ["translation"]} |
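A minimal translation sketch for the `Helsinki-NLP/opus-mt-en-cel` row above. Because the model serves several target languages (bre cor cym gla gle glv, per the card), the target is selected with a `>>code<<` prefix token, as is standard for multi-target OPUS-MT models; the example sentence is invented:
```python
from transformers import pipeline

translator = pipeline("translation", model="Helsinki-NLP/opus-mt-en-cel")

# ">>cym<<" selects Welsh as the target language
print(translator(">>cym<< How are you today?")[0]["translation_text"])
```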
gokulsrinivasagan/distilbert_lda_5_v1_book_mrpc | gokulsrinivasagan | text-classification | [
"transformers",
"tensorboard",
"safetensors",
"distilbert",
"text-classification",
"generated_from_trainer",
"en",
"dataset:glue",
"base_model:gokulsrinivasagan/distilbert_lda_5_v1_book",
"base_model:finetune:gokulsrinivasagan/distilbert_lda_5_v1_book",
"model-index",
"autotrain_compatible",
... | 2024-12-09T15:45:51 | 2024-12-09T15:46:52 | 4 | 0 | ---
base_model: gokulsrinivasagan/distilbert_lda_5_v1_book
datasets:
- glue
language:
- en
library_name: transformers
metrics:
- accuracy
- f1
tags:
- generated_from_trainer
model-index:
- name: distilbert_lda_5_v1_book_mrpc
results:
- task:
type: text-classification
name: Text Classification
datase... | [
"TEXT_CLASSIFICATION"
] | Non_BioNLP |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert_lda_5_v1_book_mrpc
This model is a fine-tuned version of [gokulsrinivasagan/distilbert_lda_5_v1_book](https://hugging... | {"base_model": "gokulsrinivasagan/distilbert_lda_5_v1_book", "datasets": ["glue"], "language": ["en"], "library_name": "transformers", "metrics": ["accuracy", "f1"], "tags": ["generated_from_trainer"], "model-index": [{"name": "distilbert_lda_5_v1_book_mrpc", "results": [{"task": {"type": "text-classification", "name":... |
openaccess-ai-collective/manticore-13b-chat-pyg | openaccess-ai-collective | text-generation | [
"transformers",
"pytorch",
"safetensors",
"llama",
"text-generation",
"en",
"dataset:anon8231489123/ShareGPT_Vicuna_unfiltered",
"dataset:ehartford/wizard_vicuna_70k_unfiltered",
"dataset:ehartford/WizardLM_alpaca_evol_instruct_70k_unfiltered",
"dataset:QingyiSi/Alpaca-CoT",
"dataset:teknium/GPT... | 2023-05-22T16:21:57 | 2023-06-07T12:32:40 | 3,537 | 30 | ---
datasets:
- anon8231489123/ShareGPT_Vicuna_unfiltered
- ehartford/wizard_vicuna_70k_unfiltered
- ehartford/WizardLM_alpaca_evol_instruct_70k_unfiltered
- QingyiSi/Alpaca-CoT
- teknium/GPT4-LLM-Cleaned
- teknium/GPTeacher-General-Instruct
- metaeval/ScienceQA_text_only
- hellaswag
- openai/summarize_from_feedback
- ... | [
"SUMMARIZATION"
] | Non_BioNLP |
# Manticore 13B Chat
[<img src="https://raw.githubusercontent.com/OpenAccess-AI-Collective/axolotl/main/image/axolotl-badge-web.png" alt="Built with Axolotl" width="200" height="32"/>](https://github.com/OpenAccess-AI-Collective/axolotl)
Manticore 13B Chat builds on Manticore with new datasets, including a de-duped ... | {"datasets": ["anon8231489123/ShareGPT_Vicuna_unfiltered", "ehartford/wizard_vicuna_70k_unfiltered", "ehartford/WizardLM_alpaca_evol_instruct_70k_unfiltered", "QingyiSi/Alpaca-CoT", "teknium/GPT4-LLM-Cleaned", "teknium/GPTeacher-General-Instruct", "metaeval/ScienceQA_text_only", "hellaswag", "openai/summarize_from_feed... |
UNIST-Eunchan/Pegasus-x-base-govreport-12288-1024-numepoch-10 | UNIST-Eunchan | text2text-generation | [
"transformers",
"pytorch",
"pegasus_x",
"text2text-generation",
"generated_from_trainer",
"dataset:govreport-summarization",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 2023-07-20T02:20:44 | 2023-07-22T03:05:31 | 30 | 0 | ---
datasets:
- govreport-summarization
tags:
- generated_from_trainer
model-index:
- name: Pegasus-x-base-govreport-12288-1024-numepoch-10
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then... | [
"SUMMARIZATION"
] | Non_BioNLP |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Pegasus-x-base-govreport-12288-1024-numepoch-10
This model is a fine-tuned version of [google/pegasus-x-base](https://huggingfac... | {"datasets": ["govreport-summarization"], "tags": ["generated_from_trainer"], "model-index": [{"name": "Pegasus-x-base-govreport-12288-1024-numepoch-10", "results": []}]} |
LongSafari/hyenadna-tiny-1k-seqlen-d256-hf | LongSafari | text-generation | [
"transformers",
"safetensors",
"hyenadna",
"text-generation",
"dna",
"biology",
"genomics",
"hyena",
"custom_code",
"arxiv:2306.15794",
"arxiv:2302.10866",
"license:bsd-3-clause",
"autotrain_compatible",
"region:us"
] | 2023-11-03T14:11:43 | 2024-01-24T17:22:45 | 166 | 0 | ---
license: bsd-3-clause
tags:
- dna
- biology
- genomics
- hyena
---
# HyenaDNA
Welcome! HyenaDNA is a long-range genomic foundation model pretrained on context lengths of up to **1 million tokens** at **single nucleotide resolution**.
See below for an [overview](#model) of the model and training. Better yet, che... | [
"TEXT_CLASSIFICATION"
] | Non_BioNLP |
# HyenaDNA
Welcome! HyenaDNA is a long-range genomic foundation model pretrained on context lengths of up to **1 million tokens** at **single nucleotide resolution**.
See below for an [overview](#model) of the model and training. Better yet, check out these resources.
**Resources:**
- [arxiv](https://arxiv.org/... | {"license": "bsd-3-clause", "tags": ["dna", "biology", "genomics", "hyena"]} |
neurips-user/neurips-deberta-combined-1 | neurips-user | text-classification | [
"transformers",
"tensorboard",
"safetensors",
"deberta-v2",
"text-classification",
"autotrain",
"dataset:neurips-bert-combined5/autotrain-data",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 2024-05-16T02:08:21 | 2024-05-16T02:28:17 | 16 | 0 | ---
datasets:
- neurips-bert-combined5/autotrain-data
tags:
- autotrain
- text-classification
widget:
- text: I love AutoTrain
---
# Model Trained Using AutoTrain
- Problem type: Text Classification
## Validation Metrics
loss: 0.4513716995716095
f1: 0.8037383177570093
precision: 0.7543859649122807
recall: 0.86
a... | [
"TEXT_CLASSIFICATION"
] | Non_BioNLP |
# Model Trained Using AutoTrain
- Problem type: Text Classification
## Validation Metrics
loss: 0.4513716995716095
f1: 0.8037383177570093
precision: 0.7543859649122807
recall: 0.86
auc: 0.8812
accuracy: 0.79
| {"datasets": ["neurips-bert-combined5/autotrain-data"], "tags": ["autotrain", "text-classification"], "widget": [{"text": "I love AutoTrain"}]} |
Thang203/general_nlp_research_paper | Thang203 | text-classification | [
"bertopic",
"text-classification",
"region:us"
] | 2024-04-10T23:36:51 | 2024-04-10T23:36:54 | 4 | 0 | ---
library_name: bertopic
pipeline_tag: text-classification
tags:
- bertopic
---
# general_nlp_research_paper
This is a [BERTopic](https://github.com/MaartenGr/BERTopic) model.
BERTopic is a flexible and modular topic modeling framework that allows for the generation of easily interpretable topics from large datase... | [
"NAMED_ENTITY_RECOGNITION",
"RELATION_EXTRACTION",
"TEXT_CLASSIFICATION",
"COREFERENCE_RESOLUTION",
"EVENT_EXTRACTION",
"QUESTION_ANSWERING",
"SEMANTIC_SIMILARITY",
"TRANSLATION",
"SUMMARIZATION",
"PARAPHRASING"
] | Non_BioNLP |
# general_nlp_research_paper
This is a [BERTopic](https://github.com/MaartenGr/BERTopic) model.
BERTopic is a flexible and modular topic modeling framework that allows for the generation of easily interpretable topics from large datasets.
## Usage
To use this model, please install BERTopic:
```
pip install -U b... | {"library_name": "bertopic", "pipeline_tag": "text-classification", "tags": ["bertopic"]} |
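The BERTopic row above truncates mid-install-command; a minimal loading sketch, assuming a BERTopic release recent enough to load a model directly from the Hub:
```python
from bertopic import BERTopic

# Load the pretrained topic model straight from the Hub and inspect its topics
topic_model = BERTopic.load("Thang203/general_nlp_research_paper")
print(topic_model.get_topic_info().head())
```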
SyedShaheer/bart-large-cnn-samsum_tuned_V2_1 | SyedShaheer | summarization | [
"transformers",
"pytorch",
"bart",
"text2text-generation",
"summarization",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 2024-05-02T09:07:20 | 2024-05-02T09:17:38 | 10 | 0 | ---
pipeline_tag: summarization
---
| [
"SUMMARIZATION"
] | Non_BioNLP | {"pipeline_tag": "summarization"} | |
agentlans/mdeberta-v3-base-readability | agentlans | text-classification | [
"safetensors",
"deberta-v2",
"multilingual",
"readability",
"text-classification",
"dataset:agentlans/tatoeba-english-translations",
"base_model:microsoft/mdeberta-v3-base",
"base_model:finetune:microsoft/mdeberta-v3-base",
"license:mit",
"region:us"
] | 2024-10-12T03:50:39 | 2024-10-12T09:55:59 | 50 | 0 | ---
base_model:
- microsoft/mdeberta-v3-base
datasets:
- agentlans/tatoeba-english-translations
license: mit
pipeline_tag: text-classification
tags:
- multilingual
- readability
---
# DeBERTa V3 Base for Multilingual Readability Assessment
This is a fine-tuned version of the multilingual DeBERTa model (mdeberta) for a... | [
"TRANSLATION"
] | Non_BioNLP | # DeBERTa V3 Base for Multilingual Readability Assessment
This is a fine-tuned version of the multilingual DeBERTa model (mdeberta) for assessing text readability across languages.
## Model Details
- **Architecture:** mdeberta-base
- **Task:** Regression (Readability Assessment)
- **Training Data:** [agentlans/tatoe... | {"base_model": ["microsoft/mdeberta-v3-base"], "datasets": ["agentlans/tatoeba-english-translations"], "license": "mit", "pipeline_tag": "text-classification", "tags": ["multilingual", "readability"]} |
mrm8488/mbart-large-finetuned-opus-es-en-translation | mrm8488 | translation | [
"transformers",
"pytorch",
"safetensors",
"mbart",
"text2text-generation",
"translation",
"es",
"en",
"dataset:opus100",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 2022-03-02T23:29:05 | 2023-04-05T10:32:38 | 298 | 2 | ---
datasets:
- opus100
language:
- es
- en
tags:
- translation
---
### mbart-large-es-en
This is mbart-large-cc25, finetuned on opus100 for Spanish to English translation.
It scores BLEU **28.25** on the validation dataset
It scores BLEU **28.28** on the test dataset | [
"TRANSLATION"
] | Non_BioNLP | ### mbart-large-es-en
This is mbart-large-cc25, finetuned on opus100 for Spanish to English translation.
It scores BLEU **28.25** on the validation dataset
It scores BLEU **28.28** on the test dataset | {"datasets": ["opus100"], "language": ["es", "en"], "tags": ["translation"]} |
TransferGraph/zenkri_autotrain-Arabic_Poetry_by_Subject-920730230-finetuned-lora-tweet_eval_emotion | TransferGraph | text-classification | [
"peft",
"safetensors",
"parquet",
"text-classification",
"dataset:tweet_eval",
"base_model:zenkri/autotrain-Arabic_Poetry_by_Subject-920730230",
"base_model:adapter:zenkri/autotrain-Arabic_Poetry_by_Subject-920730230",
"model-index",
"region:us"
] | 2024-02-29T12:52:19 | 2024-02-29T12:52:22 | 0 | 0 | ---
base_model: zenkri/autotrain-Arabic_Poetry_by_Subject-920730230
datasets:
- tweet_eval
library_name: peft
metrics:
- accuracy
tags:
- parquet
- text-classification
model-index:
- name: zenkri_autotrain-Arabic_Poetry_by_Subject-920730230-finetuned-lora-tweet_eval_emotion
results:
- task:
type: text-classif... | [
"TEXT_CLASSIFICATION"
] | Non_BioNLP |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# zenkri_autotrain-Arabic_Poetry_by_Subject-920730230-finetuned-lora-tweet_eval_emotion
This model is a fine-tuned version of [zen... | {"base_model": "zenkri/autotrain-Arabic_Poetry_by_Subject-920730230", "datasets": ["tweet_eval"], "library_name": "peft", "metrics": ["accuracy"], "tags": ["parquet", "text-classification"], "model-index": [{"name": "zenkri_autotrain-Arabic_Poetry_by_Subject-920730230-finetuned-lora-tweet_eval_emotion", "results": [{"t... |
ElizaClaPa/SentimentAnalysis-YelpReviews-OptimizedModel | ElizaClaPa | text-classification | [
"transformers",
"safetensors",
"distilbert",
"text-classification",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 2024-07-14T13:32:31 | 2024-07-16T07:09:10 | 98 | 0 | ---
library_name: transformers
tags: []
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
Sentiment analysis model that predicts the rating label of a given review; labels range from 1 star to 5 stars.
## Model Details
### Model Description
<!-- Provide a longer summary of what th... | [
"TEXT_CLASSIFICATION"
] | Non_BioNLP |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
Sentiment analysis model that predicts the rating label of a given review; labels range from 1 star to 5 stars.
## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is the model card of ... | {"library_name": "transformers", "tags": []} |
fine-tuned/BAAI_bge-large-en-15062024-atex-webapp | fine-tuned | feature-extraction | [
"sentence-transformers",
"safetensors",
"bert",
"feature-extraction",
"sentence-similarity",
"mteb",
"Science",
"Technology",
"Medicine",
"Philosophy",
"Research",
"en",
"dataset:fine-tuned/BAAI_bge-large-en-15062024-atex-webapp",
"dataset:allenai/c4",
"license:apache-2.0",
"autotrain_... | 2024-06-15T01:47:31 | 2024-06-15T01:48:01 | 7 | 0 | ---
datasets:
- fine-tuned/BAAI_bge-large-en-15062024-atex-webapp
- allenai/c4
language:
- en
license: apache-2.0
pipeline_tag: feature-extraction
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- mteb
- Science
- Technology
- Medicine
- Philosophy
- Research
---
This model is a fine-tuned vers... | [
"TEXT_CLASSIFICATION"
] | Non_BioNLP | This model is a fine-tuned version of [**BAAI/bge-large-en**](https://huggingface.co/BAAI/bge-large-en) designed for the following use case:
general domain
## How to Use
This model can be easily integrated into your NLP pipeline for tasks such as text classification, sentiment analysis, entity recognition, and more. ... | {"datasets": ["fine-tuned/BAAI_bge-large-en-15062024-atex-webapp", "allenai/c4"], "language": ["en"], "license": "apache-2.0", "pipeline_tag": "feature-extraction", "tags": ["sentence-transformers", "feature-extraction", "sentence-similarity", "mteb", "Science", "Technology", "Medicine", "Philosophy", "Research"]} |
Nishthaa321/autotrain-qr7os-gstst | Nishthaa321 | text-classification | [
"transformers",
"safetensors",
"roberta",
"text-classification",
"autotrain",
"dataset:autotrain-qr7os-gstst/autotrain-data",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 2024-02-27T10:25:38 | 2024-02-27T10:26:05 | 6 | 0 | ---
datasets:
- autotrain-qr7os-gstst/autotrain-data
tags:
- autotrain
- text-classification
widget:
- text: I love AutoTrain
---
# Model Trained Using AutoTrain
- Problem type: Text Classification
## Validation Metrics
loss: 0.2146722972393036
f1: 1.0
precision: 1.0
recall: 1.0
auc: 1.0
accuracy: 1.0
| [
"TEXT_CLASSIFICATION"
] | Non_BioNLP |
# Model Trained Using AutoTrain
- Problem type: Text Classification
## Validation Metrics
loss: 0.2146722972393036
f1: 1.0
precision: 1.0
recall: 1.0
auc: 1.0
accuracy: 1.0
| {"datasets": ["autotrain-qr7os-gstst/autotrain-data"], "tags": ["autotrain", "text-classification"], "widget": [{"text": "I love AutoTrain"}]} |
GAIR/rst-gaokao-writing-11b | GAIR | text2text-generation | [
"transformers",
"pytorch",
"t5",
"text2text-generation",
"arxiv:2206.11147",
"license:afl-3.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | 2022-09-01T20:33:19 | 2022-09-04T01:42:02 | 10 | 2 | ---
license: afl-3.0
---
<p align="center">
<br>
<img src="https://expressai-xlab.s3.amazonaws.com/rst/intro_rst.png" width="1000"/>
<br>
</p>
# reStructured Pre-training (RST)
official [repository](https://github.com/ExpressAI/reStructured-Pretraining), [paper](https://arxiv.org/pdf/2206.11147.pdf), [east... | [
"NAMED_ENTITY_RECOGNITION",
"RELATION_EXTRACTION",
"TEXT_CLASSIFICATION",
"QUESTION_ANSWERING",
"SUMMARIZATION",
"PARAPHRASING"
] | Non_BioNLP | <p align="center">
<br>
<img src="https://expressai-xlab.s3.amazonaws.com/rst/intro_rst.png" width="1000"/>
<br>
</p>
# reStructured Pre-training (RST)
official [repository](https://github.com/ExpressAI/reStructured-Pretraining), [paper](https://arxiv.org/pdf/2206.11147.pdf), [easter eggs](http://expressai... | {"license": "afl-3.0"} |
justinthelaw/Phi-3-mini-128k-instruct-4bit-128g-GPTQ | justinthelaw | text-generation | [
"transformers",
"safetensors",
"phi3",
"text-generation",
"nlp",
"code",
"custom_code",
"conversational",
"en",
"dataset:Salesforce/wikitext",
"base_model:microsoft/Phi-3-mini-128k-instruct",
"base_model:quantized:microsoft/Phi-3-mini-128k-instruct",
"license:apache-2.0",
"autotrain_compat... | 2024-07-30T18:18:53 | 2024-08-03T12:37:46 | 242 | 1 | ---
base_model: microsoft/Phi-3-mini-128k-instruct
datasets:
- Salesforce/wikitext
language:
- en
license: apache-2.0
pipeline_tag: text-generation
tags:
- nlp
- code
- phi3
- custom_code
- conversational
---
# Phi-3-mini-128k-instruct GPTQ 4-bit 128g Group Size
- Model creator: [Microsoft](https://huggingface.co/mic... | [
"SUMMARIZATION"
] | Non_BioNLP |
# Phi-3-mini-128k-instruct GPTQ 4-bit 128g Group Size
- Model creator: [Microsoft](https://huggingface.co/microsoft)
- Original model: [Phi-3-mini-128k-instruct](https://huggingface.co/microsoft/Phi-3-mini-128k-instruct)
- Quantization code: [justinthelaw's GitHub](https://github.com/justinthelaw/quantization-pipeli... | {"base_model": "microsoft/Phi-3-mini-128k-instruct", "datasets": ["Salesforce/wikitext"], "language": ["en"], "license": "apache-2.0", "pipeline_tag": "text-generation", "tags": ["nlp", "code", "phi3", "custom_code", "conversational"]} |
ein3108/bert-finetuned-sem_eval-english | ein3108 | text-classification | [
"transformers",
"tensorboard",
"safetensors",
"bert",
"text-classification",
"generated_from_trainer",
"dataset:sem_eval_2018_task_1",
"base_model:google-bert/bert-base-uncased",
"base_model:finetune:google-bert/bert-base-uncased",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
... | 2024-11-05T02:29:22 | 2024-11-05T02:30:04 | 8 | 0 | ---
base_model: bert-base-uncased
datasets:
- sem_eval_2018_task_1
library_name: transformers
license: apache-2.0
metrics:
- f1
- accuracy
tags:
- generated_from_trainer
model-index:
- name: bert-finetuned-sem_eval-english
results:
- task:
type: text-classification
name: Text Classification
dataset:... | [
"TEXT_CLASSIFICATION"
] | Non_BioNLP |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert-finetuned-sem_eval-english
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncas... | {"base_model": "bert-base-uncased", "datasets": ["sem_eval_2018_task_1"], "library_name": "transformers", "license": "apache-2.0", "metrics": ["f1", "accuracy"], "tags": ["generated_from_trainer"], "model-index": [{"name": "bert-finetuned-sem_eval-english", "results": [{"task": {"type": "text-classification", "name": "... |
RichardErkhov/macadeliccc_-_OmniCorso-7B-gguf | RichardErkhov | null | [
"gguf",
"endpoints_compatible",
"region:us"
] | 2024-05-21T20:45:29 | 2024-05-21T23:20:54 | 7 | 0 | ---
{}
---
Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
OmniCorso-7B - GGUF
- Model creator: https://huggingface.co/macadeliccc/
- Original model: https://huggingface.co... | [
"TRANSLATION"
] | Non_BioNLP | Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
OmniCorso-7B - GGUF
- Model creator: https://huggingface.co/macadeliccc/
- Original model: https://huggingface.co/macadelicc... | {} |
leeolivia77/custom_summarization_dataset | leeolivia77 | null | [
"region:us"
] | 2024-09-20T05:29:26 | 2024-09-20T05:29:29 | 0 | 0 | ---
{}
---
# Dataset Card for Custom Text Dataset
## Dataset Name
Custom Text Dataset for Summarization
## Overview
A dataset created for summarizing articles.
## Composition
Contains pairs of articles and their summaries.
## Collection Process
Data was collected from CNN/Daily Mail.
## Preprocessing
Text cleaned... | [
"SUMMARIZATION"
] | Non_BioNLP |
# Dataset Card for Custom Text Dataset
## Dataset Name
Custom Text Dataset for Summarization
## Overview
A dataset created for summarizing articles.
## Composition
Contains pairs of articles and their summaries.
## Collection Process
Data was collected from CNN/Daily Mail.
## Preprocessing
Text cleaned and tokeni... | {} |
lilyray/results | lilyray | text-classification | [
"transformers",
"tensorboard",
"safetensors",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:emotion",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"model-index",
"autotrain_compatible"... | 2024-03-05T00:55:57 | 2024-03-10T14:59:22 | 31 | 0 | ---
base_model: distilbert-base-uncased
datasets:
- emotion
license: apache-2.0
metrics:
- accuracy
tags:
- generated_from_trainer
model-index:
- name: results
results:
- task:
type: text-classification
name: Text Classification
dataset:
name: emotion
type: emotion
config: split
... | [
"TEXT_CLASSIFICATION"
] | Non_BioNLP |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# results
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncased) on the e... | {"base_model": "distilbert-base-uncased", "datasets": ["emotion"], "license": "apache-2.0", "metrics": ["accuracy"], "tags": ["generated_from_trainer"], "model-index": [{"name": "results", "results": [{"task": {"type": "text-classification", "name": "Text Classification"}, "dataset": {"name": "emotion", "type": "emotio... |
juanjucm/whisper-large-v3-turbo-OpenHQ-GL-EN | juanjucm | automatic-speech-recognition | [
"transformers",
"tensorboard",
"safetensors",
"whisper",
"automatic-speech-recognition",
"generated_from_trainer",
"gl",
"en",
"dataset:juanjucm/OpenHQ-SpeechT-GL-EN",
"base_model:openai/whisper-large-v3-turbo",
"base_model:finetune:openai/whisper-large-v3-turbo",
"license:mit",
"endpoints_c... | 2024-12-23T17:02:01 | 2025-02-06T17:07:06 | 65 | 0 | ---
base_model: openai/whisper-large-v3-turbo
datasets:
- juanjucm/OpenHQ-SpeechT-GL-EN
language:
- gl
- en
library_name: transformers
license: mit
metrics:
- bleu
tags:
- generated_from_trainer
model-index:
- name: whisper-large-v3-turbo-gl-en
results: []
---
# whisper-large-v3-turbo-OpenHQ-GL-EN
This model is a f... | [
"TRANSLATION"
] | Non_BioNLP |
# whisper-large-v3-turbo-OpenHQ-GL-EN
This model is a fine-tuned version of [openai/whisper-large-v3-turbo](https://huggingface.co/openai/whisper-large-v3-turbo) trained on [juanjucm/OpenHQ-SpeechT-GL-EN](https://huggingface.co/datasets/juanjucm/OpenHQ-SpeechT-GL-EN) for **Galician-to-English Text to Speech Translati... | {"base_model": "openai/whisper-large-v3-turbo", "datasets": ["juanjucm/OpenHQ-SpeechT-GL-EN"], "language": ["gl", "en"], "library_name": "transformers", "license": "mit", "metrics": ["bleu"], "tags": ["generated_from_trainer"], "model-index": [{"name": "whisper-large-v3-turbo-gl-en", "results": []}]} |
Helsinki-NLP/opus-mt-id-sv | Helsinki-NLP | translation | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"id",
"sv",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 2022-03-02T23:29:04 | 2023-08-16T11:58:09 | 49 | 0 | ---
license: apache-2.0
tags:
- translation
---
### opus-mt-id-sv
* source languages: id
* target languages: sv
* OPUS readme: [id-sv](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/id-sv/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* downl... | [
"TRANSLATION"
] | Non_BioNLP |
### opus-mt-id-sv
* source languages: id
* target languages: sv
* OPUS readme: [id-sv](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/id-sv/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-09.zip](http... | {"license": "apache-2.0", "tags": ["translation"]} |
zhuwch/all-MiniLM-L6-v2 | zhuwch | sentence-similarity | [
"sentence-transformers",
"pytorch",
"bert",
"feature-extraction",
"sentence-similarity",
"en",
"dataset:s2orc",
"dataset:flax-sentence-embeddings/stackexchange_xml",
"dataset:ms_marco",
"dataset:gooaq",
"dataset:yahoo_answers_topics",
"dataset:code_search_net",
"dataset:search_qa",
"datase... | 2023-09-20T07:37:02 | 2023-09-20T10:07:25 | 13 | 0 | ---
datasets:
- s2orc
- flax-sentence-embeddings/stackexchange_xml
- ms_marco
- gooaq
- yahoo_answers_topics
- code_search_net
- search_qa
- eli5
- snli
- multi_nli
- wikihow
- natural_questions
- trivia_qa
- embedding-data/sentence-compression
- embedding-data/flickr30k-captions
- embedding-data/altlex
- embedding-dat... | [
"QUESTION_ANSWERING"
] | Non_BioNLP |
# all-MiniLM-L6-v2
This is a [sentence-transformers](https://www.SBERT.net) model: It maps sentences & paragraphs to a 384-dimensional dense vector space and can be used for tasks like clustering or semantic search.
## Usage (Sentence-Transformers)
Using this model becomes easy when you have [sentence-transformers](... | {"datasets": ["s2orc", "flax-sentence-embeddings/stackexchange_xml", "ms_marco", "gooaq", "yahoo_answers_topics", "code_search_net", "search_qa", "eli5", "snli", "multi_nli", "wikihow", "natural_questions", "trivia_qa", "embedding-data/sentence-compression", "embedding-data/flickr30k-captions", "embedding-data/altlex",... |
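The usage section of the `zhuwch/all-MiniLM-L6-v2` row above is truncated; a minimal embedding sketch with the standard `sentence-transformers` API (the sentences are invented):
```python
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("zhuwch/all-MiniLM-L6-v2")
embeddings = model.encode([
    "This is an example sentence.",
    "Each sentence is converted to a vector.",
])
print(embeddings.shape)  # (2, 384) — the card advertises a 384-dimensional space
```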
timtarusov/distilbert-base-uncased-finetuned-emotion | timtarusov | text-classification | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:emotion",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 2022-03-02T23:29:05 | 2022-02-13T08:48:03 | 114 | 0 | ---
datasets:
- emotion
license: apache-2.0
metrics:
- accuracy
- f1
tags:
- generated_from_trainer
model-index:
- name: distilbert-base-uncased-finetuned-emotion
results:
- task:
type: text-classification
name: Text Classification
dataset:
name: emotion
type: emotion
args: default... | [
"TEXT_CLASSIFICATION"
] | Non_BioNLP |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co... | {"datasets": ["emotion"], "license": "apache-2.0", "metrics": ["accuracy", "f1"], "tags": ["generated_from_trainer"], "model-index": [{"name": "distilbert-base-uncased-finetuned-emotion", "results": [{"task": {"type": "text-classification", "name": "Text Classification"}, "dataset": {"name": "emotion", "type": "emotion... |
VexPoli/distilbart-summarization-top-list | VexPoli | text2text-generation | [
"transformers",
"safetensors",
"bart",
"text2text-generation",
"generated_from_trainer",
"base_model:sshleifer/distilbart-xsum-6-6",
"base_model:finetune:sshleifer/distilbart-xsum-6-6",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 2025-02-12T16:52:03 | 2025-02-12T18:07:58 | 17 | 0 | ---
base_model: sshleifer/distilbart-xsum-6-6
library_name: transformers
license: apache-2.0
tags:
- generated_from_trainer
model-index:
- name: distilbart-summarization-top-list
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should p... | [
"SUMMARIZATION"
] | Non_BioNLP |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbart-summarization-top-list
This model is a fine-tuned version of [sshleifer/distilbart-xsum-6-6](https://huggingface.co/s... | {"base_model": "sshleifer/distilbart-xsum-6-6", "library_name": "transformers", "license": "apache-2.0", "tags": ["generated_from_trainer"], "model-index": [{"name": "distilbart-summarization-top-list", "results": []}]} |
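A minimal summarization sketch for the `VexPoli/distilbart-summarization-top-list` row above, assuming the standard `transformers` pipeline API; the article text and length limits are invented:
```python
from transformers import pipeline

summarizer = pipeline(
    "summarization", model="VexPoli/distilbart-summarization-top-list"
)
article = (
    "The committee met on Tuesday to discuss the new budget proposal, "
    "which includes increased funding for public transport and schools."
)
print(summarizer(article, max_length=40, min_length=10)[0]["summary_text"])
```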
TransferGraph/boychaboy_MNLI_roberta-base-finetuned-lora-tweet_eval_irony | TransferGraph | text-classification | [
"peft",
"safetensors",
"parquet",
"text-classification",
"dataset:tweet_eval",
"model-index",
"region:us"
] | 2024-02-27T17:30:56 | 2024-02-29T13:37:12 | 0 | 0 | ---
base_model: boychaboy/MNLI_roberta-base
datasets:
- tweet_eval
library_name: peft
metrics:
- accuracy
tags:
- parquet
- text-classification
model-index:
- name: boychaboy_MNLI_roberta-base-finetuned-lora-tweet_eval_irony
results:
- task:
type: text-classification
name: Text Classification
datase... | [
"TEXT_CLASSIFICATION"
] | Non_BioNLP |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# boychaboy_MNLI_roberta-base-finetuned-lora-tweet_eval_irony
This model is a fine-tuned version of [boychaboy/MNLI_roberta-base](... | {"base_model": "boychaboy/MNLI_roberta-base", "datasets": ["tweet_eval"], "library_name": "peft", "metrics": ["accuracy"], "tags": ["parquet", "text-classification"], "model-index": [{"name": "boychaboy_MNLI_roberta-base-finetuned-lora-tweet_eval_irony", "results": [{"task": {"type": "text-classification", "name": "Tex... |
mertyrgn/distilbert-base-uncased-finetuned-emotion | mertyrgn | text-classification | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:emotion",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 2022-05-15T13:40:01 | 2022-08-13T14:42:02 | 26 | 0 | ---
datasets:
- emotion
license: apache-2.0
metrics:
- accuracy
- f1
tags:
- generated_from_trainer
model-index:
- name: distilbert-base-uncased-finetuned-emotion
results:
- task:
type: text-classification
name: Text Classification
dataset:
name: emotion
type: emotion
args: default... | [
"TEXT_CLASSIFICATION"
] | Non_BioNLP |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co... | {"datasets": ["emotion"], "license": "apache-2.0", "metrics": ["accuracy", "f1"], "tags": ["generated_from_trainer"], "model-index": [{"name": "distilbert-base-uncased-finetuned-emotion", "results": [{"task": {"type": "text-classification", "name": "Text Classification"}, "dataset": {"name": "emotion", "type": "emotion... |
Xenova/distilbart-xsum-12-1 | Xenova | summarization | [
"transformers.js",
"onnx",
"bart",
"text2text-generation",
"summarization",
"base_model:sshleifer/distilbart-xsum-12-1",
"base_model:quantized:sshleifer/distilbart-xsum-12-1",
"region:us"
] | 2023-09-05T16:46:18 | 2024-10-08T13:41:48 | 60 | 0 | ---
base_model: sshleifer/distilbart-xsum-12-1
library_name: transformers.js
pipeline_tag: summarization
---
https://huggingface.co/sshleifer/distilbart-xsum-12-1 with ONNX weights to be compatible with Transformers.js.
Note: Having a separate repo for ONNX weights is intended to be a temporary solution until WebML g... | [
"SUMMARIZATION"
] | Non_BioNLP | https://huggingface.co/sshleifer/distilbart-xsum-12-1 with ONNX weights to be compatible with Transformers.js.
Note: Having a separate repo for ONNX weights is intended to be a temporary solution until WebML gains more traction. If you would like to make your models web-ready, we recommend converting to ONNX using [🤗 Optimum](https://huggingface.co/docs/optimum/index) and structuring your repo like this one (with ONNX weights located in a subfolder named `onnx`). | {"base_model": "sshleifer/distilbart-xsum-12-1", "library_name": "transformers.js", "pipeline_tag": "summarization"} |
vocabtrimmer/mbart-large-cc25-trimmed-ja-jaquad-qa | vocabtrimmer | text2text-generation | [
"transformers",
"pytorch",
"mbart",
"text2text-generation",
"question answering",
"ja",
"dataset:lmqg/qg_jaquad",
"arxiv:2210.03992",
"license:cc-by-4.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 2023-04-06T08:00:12 | 2023-04-06T08:04:59 | 10 | 0 | ---
datasets:
- lmqg/qg_jaquad
language: ja
license: cc-by-4.0
metrics:
- bleu4
- meteor
- rouge-l
- bertscore
- moverscore
pipeline_tag: text2text-generation
tags:
- question answering
widget:
- text: 'question: 新型車両として6000系が構想されたのは、製造費用のほか、どんな費用を抑えるためだったの?, context: 三多摩地区開発による沿線人口の増加、相模原線延伸による多摩ニュータウン乗り入れ、都営地下鉄10号線(現... | [
"QUESTION_ANSWERING"
] | Non_BioNLP |
# Model Card of `vocabtrimmer/mbart-large-cc25-trimmed-ja-jaquad-qa`
This model is fine-tuned version of [vocabtrimmer/mbart-large-cc25-trimmed-ja](https://huggingface.co/vocabtrimmer/mbart-large-cc25-trimmed-ja) for question answering task on the [lmqg/qg_jaquad](https://huggingface.co/datasets/lmqg/qg_jaquad) (datas... | {"datasets": ["lmqg/qg_jaquad"], "language": "ja", "license": "cc-by-4.0", "metrics": ["bleu4", "meteor", "rouge-l", "bertscore", "moverscore"], "pipeline_tag": "text2text-generation", "tags": ["question answering"], "widget": [{"text": "question: 新型車両として6000系が構想されたのは、製造費用のほか、どんな費用を抑えるためだったの?, context: 三多摩地区開発による沿線人口の増... |
AI-Sweden-Models/gpt-sw3-356m | AI-Sweden-Models | text-generation | [
"transformers",
"pytorch",
"safetensors",
"gpt2",
"text-generation",
"da",
"sv",
"no",
"en",
"is",
"license:other",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | 2022-12-14T12:31:57 | 2024-01-29T13:20:22 | 4,352 | 1 | ---
language:
- da
- sv
- 'no'
- en
- is
license: other
---
# Model description
[AI Sweden](https://huggingface.co/AI-Sweden-Models/)
**Base models**
[GPT-Sw3 126M](https://huggingface.co/AI-Sweden-Models/gpt-sw3-126m/) | [GPT-Sw3 356M](https://huggingface.co/AI-Sweden-Models/gpt-sw3-356m/) | [GPT-Sw3 1.3B](https:... | [
"SUMMARIZATION"
] | Non_BioNLP | # Model description
[AI Sweden](https://huggingface.co/AI-Sweden-Models/)
**Base models**
[GPT-Sw3 126M](https://huggingface.co/AI-Sweden-Models/gpt-sw3-126m/) | [GPT-Sw3 356M](https://huggingface.co/AI-Sweden-Models/gpt-sw3-356m/) | [GPT-Sw3 1.3B](https://huggingface.co/AI-Sweden-Models/gpt-sw3-1.3b/)
[GPT-Sw3 ... | {"language": ["da", "sv", "no", "en", "is"], "license": "other"} |
ucuncubayram/distilbert-emotion | ucuncubayram | text-classification | [
"transformers",
"safetensors",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:emotion",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_co... | 2024-05-12T11:34:40 | 2024-05-12T11:53:33 | 4 | 0 | ---
base_model: distilbert-base-uncased
datasets:
- emotion
license: apache-2.0
metrics:
- accuracy
tags:
- generated_from_trainer
model-index:
- name: distilbert-emotion
results:
- task:
type: text-classification
name: Text Classification
dataset:
name: emotion
type: emotion
confi... | [
"TEXT_CLASSIFICATION"
] | Non_BioNLP |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-uncase... | {"base_model": "distilbert-base-uncased", "datasets": ["emotion"], "license": "apache-2.0", "metrics": ["accuracy"], "tags": ["generated_from_trainer"], "model-index": [{"name": "distilbert-emotion", "results": [{"task": {"type": "text-classification", "name": "Text Classification"}, "dataset": {"name": "emotion", "typ... |
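A minimal classification sketch for the `ucuncubayram/distilbert-emotion` row above (standard `transformers` pipeline API; the example sentence and the shown label are invented):
```python
from transformers import pipeline

clf = pipeline("text-classification", model="ucuncubayram/distilbert-emotion")
print(clf("I can't wait to see you again!"))  # e.g. [{'label': 'joy', 'score': ...}]
```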
mrapacz/interlinear-en-philta-emb-auto-diacritics-ob | mrapacz | text2text-generation | [
"transformers",
"pytorch",
"morph-t5-auto",
"text2text-generation",
"en",
"dataset:mrapacz/greek-interlinear-translations",
"license:cc-by-sa-4.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 2025-02-07T19:52:47 | 2025-02-21T21:31:02 | 62 | 0 | ---
base_model:
- PhilTa
datasets:
- mrapacz/greek-interlinear-translations
language:
- en
library_name: transformers
license: cc-by-sa-4.0
metrics:
- bleu
---
# Model Card for Ancient Greek to English Interlinear Translation Model
This model performs interlinear translation from Ancient Greek to English, maintaining ... | [
"TRANSLATION"
] | Non_BioNLP | # Model Card for Ancient Greek to English Interlinear Translation Model
This model performs interlinear translation from Ancient Greek to English, maintaining word-level alignment between source and target texts.
You can find the source code used for training this and other models trained as part of this project in t... | {"base_model": ["PhilTa"], "datasets": ["mrapacz/greek-interlinear-translations"], "language": ["en"], "library_name": "transformers", "license": "cc-by-sa-4.0", "metrics": ["bleu"]} |
vgarg/usecase_classifier_large_17_04_24 | vgarg | text-classification | [
"setfit",
"safetensors",
"xlm-roberta",
"sentence-transformers",
"text-classification",
"generated_from_setfit_trainer",
"arxiv:2209.11055",
"base_model:intfloat/multilingual-e5-large",
"base_model:finetune:intfloat/multilingual-e5-large",
"model-index",
"region:us"
] | 2024-04-17T07:06:16 | 2024-04-29T08:21:01 | 5 | 0 | ---
base_model: intfloat/multilingual-e5-large
library_name: setfit
metrics:
- accuracy
pipeline_tag: text-classification
tags:
- setfit
- sentence-transformers
- text-classification
- generated_from_setfit_trainer
widget:
- text: What should be Ideal Promo Duration?
- text: Compare the performance of top skus
- text: ... | [
"TEXT_CLASSIFICATION"
] | Non_BioNLP |
# SetFit with intfloat/multilingual-e5-large
This is a [SetFit](https://github.com/huggingface/setfit) model that can be used for Text Classification. This SetFit model uses [intfloat/multilingual-e5-large](https://huggingface.co/intfloat/multilingual-e5-large) as the Sentence Transformer embedding model. A [Logistic... | {"base_model": "intfloat/multilingual-e5-large", "library_name": "setfit", "metrics": ["accuracy"], "pipeline_tag": "text-classification", "tags": ["setfit", "sentence-transformers", "text-classification", "generated_from_setfit_trainer"], "widget": [{"text": "What should be Ideal Promo Duration?"}, {"text": "Compare t... |
apwic/summarization-unipelt-3 | apwic | null | [
"tensorboard",
"generated_from_trainer",
"id",
"base_model:LazarusNLP/IndoNanoT5-base",
"base_model:finetune:LazarusNLP/IndoNanoT5-base",
"license:apache-2.0",
"region:us"
] | 2024-07-07T12:00:45 | 2024-07-07T17:19:15 | 0 | 0 | ---
base_model: LazarusNLP/IndoNanoT5-base
language:
- id
license: apache-2.0
metrics:
- rouge
tags:
- generated_from_trainer
model-index:
- name: summarization-unipelt-3
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably ... | [
"SUMMARIZATION"
] | Non_BioNLP |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# summarization-unipelt-3
This model is a fine-tuned version of [LazarusNLP/IndoNanoT5-base](https://huggingface.co/LazarusNLP/Ind... | {"base_model": "LazarusNLP/IndoNanoT5-base", "language": ["id"], "license": "apache-2.0", "metrics": ["rouge"], "tags": ["generated_from_trainer"], "model-index": [{"name": "summarization-unipelt-3", "results": []}]} |
AI4Chem/CHEMLLM-2b-1_5 | AI4Chem | text-generation | [
"transformers",
"safetensors",
"internlm2",
"feature-extraction",
"chemistry",
"text-generation",
"conversational",
"custom_code",
"en",
"zh",
"arxiv:2402.06852",
"license:apache-2.0",
"region:us"
] | 2024-06-25T08:31:34 | 2024-09-17T16:02:49 | 172 | 1 | ---
language:
- en
- zh
license: apache-2.0
pipeline_tag: text-generation
tags:
- chemistry
---
# ChemLLM-2B: Mini LLM for Chemistry and Molecule Science
ChemLLM, The First Open-source Large Language Model for Chemistry and Molecule Science, built on InternLM-2 with ❤
[](https://huggingface.co/papers/2402.06852... | {"language": ["en", "zh"], "license": "apache-2.0", "pipeline_tag": "text-generation", "tags": ["chemistry"]} |
ChaniM/text-summarization-bart-large-cnn-three-percent | ChaniM | text2text-generation | [
"transformers",
"pytorch",
"tensorboard",
"bart",
"text2text-generation",
"generated_from_trainer",
"dataset:cnn_dailymail",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 2023-06-08T15:26:08 | 2023-06-09T06:08:20 | 34 | 0 | ---
datasets:
- cnn_dailymail
license: mit
tags:
- generated_from_trainer
model-index:
- name: text-summarization-bart-large-cnn-three-percent
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, t... | [
"SUMMARIZATION"
] | Non_BioNLP |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# text-summarization-bart-large-cnn-three-percent
This model is a fine-tuned version of [facebook/bart-large-cnn](https://huggingf... | {"datasets": ["cnn_dailymail"], "license": "mit", "tags": ["generated_from_trainer"], "model-index": [{"name": "text-summarization-bart-large-cnn-three-percent", "results": []}]} |
TheBloke/airoboros-m-7B-3.0-GGUF | TheBloke | null | [
"transformers",
"gguf",
"mistral",
"dataset:jondurbin/airoboros-3.0",
"base_model:jondurbin/airoboros-m-7b-3.0",
"base_model:quantized:jondurbin/airoboros-m-7b-3.0",
"license:apache-2.0",
"region:us"
] | 2023-10-05T22:47:54 | 2023-10-05T23:27:14 | 711 | 4 | ---
base_model: jondurbin/airoboros-m-7b-3.0
datasets:
- jondurbin/airoboros-3.0
license: apache-2.0
model_name: Airoboros M 7B 3.0
inference: false
model_creator: Jon Durbin
model_type: mistral
prompt_template: '[INST] <<SYS>>
You are a helpful, unbiased, uncensored assistant.
<</SYS>>
{prompt} [/INST]
'
quan... | [
"QUESTION_ANSWERING",
"SUMMARIZATION"
] | TBD |
<!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<... | {"base_model": "jondurbin/airoboros-m-7b-3.0", "datasets": ["jondurbin/airoboros-3.0"], "license": "apache-2.0", "model_name": "Airoboros M 7B 3.0", "inference": false, "model_creator": "Jon Durbin", "model_type": "mistral", "prompt_template": "[INST] <<SYS>>\nYou are a help, unbiased, uncensored assistant.\n<</SYS>\n\... |
gokulsrinivasagan/distilbert_lda_100_v1_book_wnli | gokulsrinivasagan | text-classification | [
"transformers",
"tensorboard",
"safetensors",
"distilbert",
"text-classification",
"generated_from_trainer",
"en",
"dataset:glue",
"base_model:gokulsrinivasagan/distilbert_lda_100_v1_book",
"base_model:finetune:gokulsrinivasagan/distilbert_lda_100_v1_book",
"model-index",
"autotrain_compatible... | 2024-12-09T18:12:12 | 2024-12-09T18:12:45 | 15 | 0 | ---
base_model: gokulsrinivasagan/distilbert_lda_100_v1_book
datasets:
- glue
language:
- en
library_name: transformers
metrics:
- accuracy
tags:
- generated_from_trainer
model-index:
- name: distilbert_lda_100_v1_book_wnli
results:
- task:
type: text-classification
name: Text Classification
dataset... | [
"TEXT_CLASSIFICATION"
] | Non_BioNLP |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert_lda_100_v1_book_wnli
This model is a fine-tuned version of [gokulsrinivasagan/distilbert_lda_100_v1_book](https://hug... | {"base_model": "gokulsrinivasagan/distilbert_lda_100_v1_book", "datasets": ["glue"], "language": ["en"], "library_name": "transformers", "metrics": ["accuracy"], "tags": ["generated_from_trainer"], "model-index": [{"name": "distilbert_lda_100_v1_book_wnli", "results": [{"task": {"type": "text-classification", "name": "... |
sbintuitions/modernbert-ja-130m | sbintuitions | fill-mask | [
"transformers",
"safetensors",
"modernbert",
"fill-mask",
"ja",
"en",
"arxiv:2412.13663",
"arxiv:2104.09864",
"arxiv:2404.10830",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 2025-02-06T06:51:37 | 2025-02-27T02:35:36 | 7,603 | 39 | ---
language:
- ja
- en
library_name: transformers
license: mit
pipeline_tag: fill-mask
---
# ModernBERT-Ja-130M
This repository provides Japanese ModernBERT trained by [SB Intuitions](https://www.sbintuitions.co.jp/).
[ModernBERT](https://arxiv.org/abs/2412.13663) is a new variant of the BERT model that combines lo... | [
"NAMED_ENTITY_RECOGNITION"
] | Non_BioNLP |
# ModernBERT-Ja-130M
This repository provides a Japanese ModernBERT model trained by [SB Intuitions](https://www.sbintuitions.co.jp/).
[ModernBERT](https://arxiv.org/abs/2412.13663) is a new variant of the BERT model that combines local and global attention, allowing it to handle long sequences while maintaining high comput... | {"language": ["ja", "en"], "library_name": "transformers", "license": "mit", "pipeline_tag": "fill-mask"} |
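Given the card's fill-mask pipeline tag, here is a minimal masked-token-prediction sketch with the transformers pipeline, assuming a transformers release that includes the ModernBERT architecture; the Japanese sample sentence is an illustrative placeholder:

```python
from transformers import pipeline

fill_mask = pipeline("fill-mask", model="sbintuitions/modernbert-ja-130m")

# Use the tokenizer's own mask token so the sketch works regardless of
# whether the checkpoint defines [MASK] or <mask>.
masked = f"日本の首都は{fill_mask.tokenizer.mask_token}です。"
for candidate in fill_mask(masked):
    print(candidate["token_str"], candidate["score"])
```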
sarwarbeing/child-labour-remidiation-few-shot | sarwarbeing | text-classification | [
"sentence-transformers",
"pytorch",
"deberta-v2",
"setfit",
"text-classification",
"arxiv:2209.11055",
"license:apache-2.0",
"region:us"
] | 2023-08-27T12:55:29 | 2023-08-27T19:19:50 | 10 | 0 | ---
license: apache-2.0
pipeline_tag: text-classification
tags:
- setfit
- sentence-transformers
- text-classification
---
# sarwarbeing/child-labour-remidiation-few-shot
This is a [SetFit model](https://github.com/huggingface/setfit) that can be used for text classification. The model has been trained using an effic... | [
"TEXT_CLASSIFICATION"
] | Non_BioNLP |
# sarwarbeing/child-labour-remidiation-few-shot
This is a [SetFit model](https://github.com/huggingface/setfit) that can be used for text classification. The model has been trained using an efficient few-shot learning technique that involves:
1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contra... | {"license": "apache-2.0", "pipeline_tag": "text-classification", "tags": ["setfit", "sentence-transformers", "text-classification"]} |
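A minimal inference sketch for such a SetFit classifier, assuming the standard setfit API; the input strings are illustrative placeholders, and the predicted labels are whatever the checkpoint was trained with:

```python
from setfit import SetFitModel

model = SetFitModel.from_pretrained(
    "sarwarbeing/child-labour-remidiation-few-shot"
)

# SetFitModel.__call__ embeds the inputs with the sentence transformer
# and runs the trained classification head on top.
preds = model(["first sentence to classify", "second sentence to classify"])
print(preds)
```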
Alassea/glue_sst_classifier | Alassea | text-classification | [
"transformers",
"pytorch",
"tensorboard",
"bert",
"text-classification",
"generated_from_trainer",
"dataset:glue",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 2022-04-26T11:33:54 | 2022-04-26T12:20:06 | 113 | 0 | ---
datasets:
- glue
license: apache-2.0
metrics:
- f1
- accuracy
tags:
- generated_from_trainer
model-index:
- name: glue_sst_classifier
results:
- task:
type: text-classification
name: Text Classification
dataset:
name: glue
type: glue
args: sst2
metrics:
- type: f1
... | [
"TEXT_CLASSIFICATION"
] | Non_BioNLP |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# glue_sst_classifier
This model is a fine-tuned version of [bert-base-cased](https://huggingface.co/bert-base-cased) on the glue ... | {"datasets": ["glue"], "license": "apache-2.0", "metrics": ["f1", "accuracy"], "tags": ["generated_from_trainer"], "model-index": [{"name": "glue_sst_classifier", "results": [{"task": {"type": "text-classification", "name": "Text Classification"}, "dataset": {"name": "glue", "type": "glue", "args": "sst2"}, "metrics": ... |
davidadamczyk/ModernBERT-base-DPR-8e-05 | davidadamczyk | sentence-similarity | [
"sentence-transformers",
"safetensors",
"modernbert",
"sentence-similarity",
"feature-extraction",
"generated_from_trainer",
"dataset_size:11662655",
"loss:CachedMultipleNegativesRankingLoss",
"en",
"dataset:sentence-transformers/msmarco-co-condenser-margin-mse-sym-mnrl-mean-v1",
"arxiv:1908.100... | 2025-02-25T14:52:48 | 2025-02-25T14:53:13 | 11 | 0 | ---
base_model: answerdotai/ModernBERT-base
datasets:
- sentence-transformers/msmarco-co-condenser-margin-mse-sym-mnrl-mean-v1
language:
- en
library_name: sentence-transformers
metrics:
- cosine_accuracy
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- sentence-similarity
- feature-extraction
- genera... | [
"TEXT_CLASSIFICATION"
] | Non_BioNLP |
# SentenceTransformer based on answerdotai/ModernBERT-base
This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [answerdotai/ModernBERT-base](https://huggingface.co/answerdotai/ModernBERT-base) on the [msmarco-co-condenser-margin-mse-sym-mnrl-mean-v1](https://huggingface.co/datasets/sentence-... | {"base_model": "answerdotai/ModernBERT-base", "datasets": ["sentence-transformers/msmarco-co-condenser-margin-mse-sym-mnrl-mean-v1"], "language": ["en"], "library_name": "sentence-transformers", "metrics": ["cosine_accuracy"], "pipeline_tag": "sentence-similarity", "tags": ["sentence-transformers", "sentence-similarity... |
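A minimal retrieval-style sketch using the standard sentence-transformers API (similarity() assumes sentence-transformers >= 3.0); the query and passage strings are illustrative placeholders:

```python
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("davidadamczyk/ModernBERT-base-DPR-8e-05")

# Encode a query and a passage into the shared embedding space,
# then score them against each other.
embeddings = model.encode([
    "what is dense passage retrieval?",
    "Dense Passage Retrieval encodes queries and passages into one vector space.",
])
print(model.similarity(embeddings, embeddings))
```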
zjunlp/zhixi-13b-lora | zjunlp | text-generation | [
"safetensors",
"code",
"text-generation",
"en",
"zh",
"arxiv:2302.13971",
"arxiv:2305.11527",
"license:apache-2.0",
"region:us"
] | 2023-05-23T04:36:51 | 2023-06-26T07:41:10 | 0 | 22 | ---
language:
- en
- zh
license: apache-2.0
pipeline_tag: text-generation
tags:
- code
---
<p align="center" width="100%">
<a href="" target="_blank"><img src="https://github.com/zjunlp/KnowLM/blob/main/assets/logo_zhixi.png?raw=true" alt="ZJU-KnowLM" style="width: 40%; min-width: 40px; display: block; margin: auto;">... | [
"NAMED_ENTITY_RECOGNITION",
"RELATION_EXTRACTION",
"EVENT_EXTRACTION",
"TRANSLATION"
] | BioNLP |
<p align="center" width="100%">
<a href="" target="_blank"><img src="https://github.com/zjunlp/KnowLM/blob/main/assets/logo_zhixi.png?raw=true" alt="ZJU-KnowLM" style="width: 40%; min-width: 40px; display: block; margin: auto;"></a>
</p>
> This is the result of the `ZhiXi-13B` LoRA weights. You can click [here](http... | {"language": ["en", "zh"], "license": "apache-2.0", "pipeline_tag": "text-generation", "tags": ["code"]} |
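Because this repo ships LoRA deltas rather than full weights, they have to be applied on top of a base checkpoint. A minimal PEFT sketch with the base-model path left as a placeholder (the KnowLM repo linked above documents which base weights these deltas pair with):

```python
from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

# Placeholder path: substitute the base checkpoint the KnowLM docs specify.
base = AutoModelForCausalLM.from_pretrained("path/to/zhixi-13b-base")
tokenizer = AutoTokenizer.from_pretrained("path/to/zhixi-13b-base")

# Apply the LoRA adapter onto the base model for inference.
model = PeftModel.from_pretrained(base, "zjunlp/zhixi-13b-lora")
```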
neerajprad/phrasebank-sentiment-analysis | neerajprad | text-classification | [
"transformers",
"pytorch",
"bert",
"text-classification",
"generated_from_trainer",
"dataset:financial_phrasebank",
"base_model:google-bert/bert-base-uncased",
"base_model:finetune:google-bert/bert-base-uncased",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatibl... | 2023-10-30T04:35:40 | 2023-10-30T04:36:08 | 9 | 0 | ---
base_model: bert-base-uncased
datasets:
- financial_phrasebank
license: apache-2.0
metrics:
- f1
- accuracy
tags:
- generated_from_trainer
model-index:
- name: phrasebank-sentiment-analysis
results:
- task:
type: text-classification
name: Text Classification
dataset:
name: financial_phrase... | [
"TEXT_CLASSIFICATION"
] | Non_BioNLP |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# phrasebank-sentiment-analysis
This model is a fine-tuned version of [bert-base-uncased](https://huggingface.co/bert-base-uncased... | {"base_model": "bert-base-uncased", "datasets": ["financial_phrasebank"], "license": "apache-2.0", "metrics": ["f1", "accuracy"], "tags": ["generated_from_trainer"], "model-index": [{"name": "phrasebank-sentiment-analysis", "results": [{"task": {"type": "text-classification", "name": "Text Classification"}, "dataset": ... |
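A minimal inference sketch with the standard text-classification pipeline; the sample sentence is an illustrative placeholder in the financial-news register of the training data:

```python
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="neerajprad/phrasebank-sentiment-analysis",
)
print(classifier("Operating profit rose clearly from the previous year."))
```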
KoenBronstring/finetuning-sentiment-model-3000-samples | KoenBronstring | text-classification | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:imdb",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 2022-05-02T12:08:16 | 2022-05-04T17:53:58 | 115 | 0 | ---
datasets:
- imdb
license: apache-2.0
metrics:
- accuracy
- f1
tags:
- generated_from_trainer
model-index:
- name: finetuning-sentiment-model-3000-samples
results:
- task:
type: text-classification
name: Text Classification
dataset:
name: imdb
type: imdb
args: plain_text
met... | [
"TEXT_CLASSIFICATION"
] | Non_BioNLP |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuning-sentiment-model-3000-samples
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/d... | {"datasets": ["imdb"], "license": "apache-2.0", "metrics": ["accuracy", "f1"], "tags": ["generated_from_trainer"], "model-index": [{"name": "finetuning-sentiment-model-3000-samples", "results": [{"task": {"type": "text-classification", "name": "Text Classification"}, "dataset": {"name": "imdb", "type": "imdb", "args": ... |