| id (string) | author (string) | task_category (string, 39 classes) | tags (list) | created_time (int64) | last_modified (timestamp) | downloads (int64) | likes (int64) | README (string) | matched_task (list) | is_bionlp (string, 3 classes) |
|---|---|---|---|---|---|---|---|---|---|---|
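Each row below follows the schema above, so records can be filtered by `task_category` or by membership in the `matched_task` list. A minimal sketch of that filtering in plain Python — the in-memory `records` list is an illustrative assumption standing in for the full dataset, with values copied from two of the rows below:

```python
# Two illustrative rows mirroring the table schema (a subset of columns);
# values are taken from rows that appear later in this dump.
records = [
    {"id": "neerajprad/phrasebank-sentiment-analysis",
     "author": "neerajprad",
     "task_category": "text-classification",
     "matched_task": ["TEXT_CLASSIFICATION"],
     "is_bionlp": "Non_BioNLP"},
    {"id": "Helsinki-NLP/opus-mt-id-en",
     "author": "Helsinki-NLP",
     "task_category": "translation",
     "matched_task": ["TRANSLATION"],
     "is_bionlp": "Non_BioNLP"},
]

# Keep only records whose matched_task list contains TRANSLATION.
translation_ids = [r["id"] for r in records
                   if "TRANSLATION" in r["matched_task"]]
print(translation_ids)  # ['Helsinki-NLP/opus-mt-id-en']
```

The same membership test generalizes to any of the `matched_task` labels used in this dump (e.g. `SUMMARIZATION`, `QUESTION_ANSWERING`).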
neerajprad/phrasebank-sentiment-analysis | neerajprad | text-classification | [
"transformers",
"pytorch",
"bert",
"text-classification",
"generated_from_trainer",
"dataset:financial_phrasebank",
"base_model:google-bert/bert-base-uncased",
"base_model:finetune:google-bert/bert-base-uncased",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatibl... | 1,698,640,540,000 | 2023-10-30T04:36:08 | 9 | 0 | ---
base_model: bert-base-uncased
datasets:
- financial_phrasebank
license: apache-2.0
metrics:
- f1
- accuracy
tags:
- generated_from_trainer
model-index:
- name: phrasebank-sentiment-analysis
results:
- task:
type: text-classification
name: Text Classification
dataset:
name: financial_phrase... | [
"TEXT_CLASSIFICATION"
] | Non_BioNLP |
KoenBronstring/finetuning-sentiment-model-3000-samples | KoenBronstring | text-classification | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:imdb",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 1,651,493,296,000 | 2022-05-04T17:53:58 | 115 | 0 | ---
datasets:
- imdb
license: apache-2.0
metrics:
- accuracy
- f1
tags:
- generated_from_trainer
model-index:
- name: finetuning-sentiment-model-3000-samples
results:
- task:
type: text-classification
name: Text Classification
dataset:
name: imdb
type: imdb
args: plain_text
met... | [
"TEXT_CLASSIFICATION"
] | Non_BioNLP |
golaxy/gogpt2-7b-pretrain | golaxy | text-generation | [
"transformers",
"pytorch",
"tensorboard",
"llama",
"text-generation",
"llama2",
"chinese-llama2",
"gogpt2-7b",
"zh",
"dataset:BelleGroup/train_0.5M_CN",
"dataset:BelleGroup/train_1M_CN",
"dataset:c-s-ale/alpaca-gpt4-data-zh",
"dataset:BAAI/COIG",
"license:apache-2.0",
"autotrain_compatib... | 1,690,630,744,000 | 2023-07-31T09:36:59 | 22 | 1 | ---
datasets:
- BelleGroup/train_0.5M_CN
- BelleGroup/train_1M_CN
- c-s-ale/alpaca-gpt4-data-zh
- BAAI/COIG
language:
- zh
license: apache-2.0
tags:
- llama2
- chinese-llama2
- gogpt2-7b
---
# GoGPT2-7B: a Chinese-English enhanced large model trained on Llama2-7b
<p align="center">
<img alt="Git... | [
"TRANSLATION"
] | Non_BioNLP |
manikaran2007/finetuning-sentiment-model-3000-samples | manikaran2007 | text-classification | [
"transformers",
"pytorch",
"tensorboard",
"safetensors",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:imdb",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 1,680,408,350,000 | 2023-04-02T04:18:52 | 16 | 0 | ---
datasets:
- imdb
license: apache-2.0
metrics:
- accuracy
- f1
tags:
- generated_from_trainer
model-index:
- name: finetuning-sentiment-model-3000-samples
results:
- task:
type: text-classification
name: Text Classification
dataset:
name: imdb
type: imdb
config: plain_text
... | [
"TEXT_CLASSIFICATION"
] | Non_BioNLP |
mhenrichsen/gemma-2b | mhenrichsen | text-generation | [
"transformers",
"safetensors",
"gemma",
"text-generation",
"arxiv:2312.11805",
"arxiv:2009.03300",
"arxiv:1905.07830",
"arxiv:1911.11641",
"arxiv:1904.09728",
"arxiv:1905.10044",
"arxiv:1907.10641",
"arxiv:1811.00937",
"arxiv:1809.02789",
"arxiv:1911.01547",
"arxiv:1705.03551",
"arxiv:... | 1,708,531,510,000 | 2024-02-21T16:09:11 | 1,460 | 1 | ---
library_name: transformers
tags: []
---
# Reupload of Gemma 2b base. Original readme below.
# Gemma Model Card
**Model Page**: [Gemma](https://ai.google.dev/gemma/docs)
This model card corresponds to the 2B base version of the Gemma model. You can also visit the model card of the [7B base model](https://hugging... | [
"QUESTION_ANSWERING",
"SUMMARIZATION"
] | Non_BioNLP |
ujkim98/mt5-small-finetuned-amazon-en-es | ujkim98 | summarization | [
"transformers",
"pytorch",
"tensorboard",
"mt5",
"text2text-generation",
"summarization",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 1,682,252,978,000 | 2023-04-23T15:14:32 | 19 | 0 | ---
license: apache-2.0
metrics:
- rouge
tags:
- summarization
- generated_from_trainer
model-index:
- name: mt5-small-finetuned-amazon-en-es
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, th... | [
"SUMMARIZATION"
] | Non_BioNLP |
knowledgator/gliclass-modern-large-v2.0-init | knowledgator | zero-shot-classification | [
"safetensors",
"GLiClass",
"text classification",
"zero-shot",
"small language models",
"RAG",
"sentiment analysis",
"zero-shot-classification",
"en",
"fr",
"ge",
"dataset:MoritzLaurer/synthetic_zeroshot_mixtral_v0.1",
"dataset:knowledgator/gliclass-v1.0",
"dataset:fancyzhx/amazon_polarity... | 1,739,488,650,000 | 2025-03-13T20:12:08 | 946 | 9 | ---
base_model:
- answerdotai/ModernBERT-large
datasets:
- MoritzLaurer/synthetic_zeroshot_mixtral_v0.1
- knowledgator/gliclass-v1.0
- fancyzhx/amazon_polarity
- cnmoro/QuestionClassification
- Arsive/toxicity_classification_jigsaw
- shishir-dwi/News-Article-Categorization_IAB
- SetFit/qnli
- nyu-mll/multi_nli
- SetFit... | [
"TEXT_CLASSIFICATION"
] | TBD |
mesolitica/t5-small-bahasa-cased | mesolitica | text2text-generation | [
"transformers",
"pytorch",
"t5",
"text2text-generation",
"ms",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | 1,665,070,126,000 | 2022-10-06T15:30:45 | 13 | 0 | ---
language: ms
---
# t5-small-bahasa-cased
A T5 small model pretrained on both standard and local (colloquial) Malay.
## Pretraining Corpus
The `t5-small-bahasa-cased` model was pretrained on multiple tasks. Below is the list of tasks we trained on:
1. Language masking task on bahasa news, bahasa Wikipedia, bahasa Academ... | [
"SEMANTIC_SIMILARITY",
"TRANSLATION",
"SUMMARIZATION"
] | Non_BioNLP |
yhavinga/t5-eff-large-8l-dutch-english-cased | yhavinga | text2text-generation | [
"transformers",
"pytorch",
"jax",
"t5",
"text2text-generation",
"seq2seq",
"nl",
"en",
"dataset:yhavinga/mc4_nl_cleaned",
"arxiv:1910.10683",
"arxiv:2109.10686",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"region:us"
] | 1,651,823,302,000 | 2022-08-07T12:07:07 | 19 | 0 | ---
datasets:
- yhavinga/mc4_nl_cleaned
language:
- nl
- en
license: apache-2.0
tags:
- t5
- seq2seq
inference: false
---
# t5-eff-large-8l-dutch-english-cased
A [T5](https://ai.googleblog.com/2020/02/exploring-transfer-learning-with-t5.html) sequence to sequence model
pre-trained from scratch on [cleaned Dutch 🇳🇱�... | [
"TRANSLATION",
"SUMMARIZATION"
] | Non_BioNLP |
bergum/xtremedistil-l6-h384-emotion | bergum | text-classification | [
"transformers",
"pytorch",
"safetensors",
"bert",
"text-classification",
"generated_from_trainer",
"dataset:emotion",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 1,646,263,745,000 | 2023-03-21T11:55:03 | 20 | 0 | ---
datasets:
- emotion
license: apache-2.0
metrics:
- accuracy
tags:
- generated_from_trainer
model-index:
- name: xtremedistil-l6-h384-emotion
results:
- task:
type: text-classification
name: Text Classification
dataset:
name: emotion
type: emotion
args: default
metrics:
... | [
"TEXT_CLASSIFICATION"
] | Non_BioNLP |
RichardErkhov/EleutherAI_-_pythia-410m-v0-4bits | RichardErkhov | text-generation | [
"transformers",
"safetensors",
"gpt_neox",
"text-generation",
"arxiv:2101.00027",
"arxiv:2201.07311",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"4-bit",
"bitsandbytes",
"region:us"
] | 1,713,857,183,000 | 2024-04-23T07:27:00 | 4 | 0 | ---
{}
---
Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
pythia-410m-v0 - bnb 4bits
- Model creator: https://huggingface.co/EleutherAI/
- Original model: https://huggingf... | [
"QUESTION_ANSWERING",
"TRANSLATION"
] | Non_BioNLP |
Shinrajim/distilbert-base-uncased-finetuned-clinc | Shinrajim | null | [
"pytorch",
"tensorboard",
"distilbert",
"generated_from_trainer",
"dataset:clinc_oos",
"license:apache-2.0",
"model-index",
"region:us"
] | 1,731,114,574,000 | 2024-11-17T03:44:10 | 7 | 0 | ---
datasets:
- clinc_oos
license: apache-2.0
metrics:
- accuracy
tags:
- generated_from_trainer
model-index:
- name: distilbert-base-uncased-finetuned-clinc
results:
- task:
type: text-classification
name: Text Classification
dataset:
name: clinc_oos
type: clinc_oos
args: plus
... | [
"TEXT_CLASSIFICATION"
] | Non_BioNLP |
hatemestinbejaia/mMiniLML-bi-encoder-KD-v1-Student_TripletLoss-Teacher_marginloss-adptativeMargin95N | hatemestinbejaia | sentence-similarity | [
"sentence-transformers",
"safetensors",
"bert",
"sentence-similarity",
"feature-extraction",
"generated_from_trainer",
"dataset_size:15000000",
"loss:MarginMSELoss",
"dataset:hatemestinbejaia/ExperimentDATA_knowledge_distillation_vs_fine_tuning",
"arxiv:1908.10084",
"arxiv:2010.02666",
"arxiv:... | 1,741,324,631,000 | 2025-03-07T05:17:55 | 2 | 0 | ---
base_model: sentence-transformers/paraphrase-multilingual-MiniLM-L12-v2
datasets:
- hatemestinbejaia/ExperimentDATA_knowledge_distillation_vs_fine_tuning
library_name: sentence-transformers
metrics:
- map
- mrr@10
- ndcg@10
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- sentence-similarity
- feat... | [
"TEXT_CLASSIFICATION"
] | Non_BioNLP |
gaudi/opus-mt-en-mh-ctranslate2 | gaudi | translation | [
"transformers",
"marian",
"ctranslate2",
"translation",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | 1,721,314,921,000 | 2024-10-19T00:21:35 | 6 | 0 | ---
license: apache-2.0
tags:
- ctranslate2
- translation
---
# Repository General Information
## Inspired by and derived from the work of [Helsinki-NLP](https://huggingface.co/Helsinki-NLP), [CTranslate2](https://github.com/OpenNMT/CTranslate2), and [michaelfeil](https://huggingface.co/michaelfeil)!
- Link to Original... | [
"TRANSLATION"
] | Non_BioNLP |
ljcamargo/tachiwin_translate | ljcamargo | text2text-generation | [
"transformers",
"safetensors",
"gguf",
"unsloth",
"translation",
"text2text-generation",
"dataset:ljcamargo/tachiwin_translate",
"arxiv:1910.09700",
"endpoints_compatible",
"region:us"
] | 1,729,385,452,000 | 2024-11-24T09:45:51 | 38 | 0 | ---
datasets:
- ljcamargo/tachiwin_translate
library_name: transformers
pipeline_tag: text2text-generation
tags:
- unsloth
- translation
---
# Model Card for Model ID
Tachiwin Totonaku
Totonac–Spanish and Spanish–Totonac translation via Llama 3.1 8B-Instruct fine-tuning (with the Vicuña model)
## Model Details
### M... | [
"TRANSLATION"
] | Non_BioNLP |
gokulsrinivasagan/bert-base-uncased_rte | gokulsrinivasagan | text-classification | [
"transformers",
"tensorboard",
"safetensors",
"bert",
"text-classification",
"generated_from_trainer",
"en",
"dataset:glue",
"base_model:google-bert/bert-base-uncased",
"base_model:finetune:google-bert/bert-base-uncased",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoi... | 1,732,260,141,000 | 2024-12-04T18:26:35 | 5 | 0 | ---
base_model: google-bert/bert-base-uncased
datasets:
- glue
language:
- en
library_name: transformers
license: apache-2.0
metrics:
- accuracy
tags:
- generated_from_trainer
model-index:
- name: bert-base-uncased_rte
results:
- task:
type: text-classification
name: Text Classification
dataset:
... | [
"TEXT_CLASSIFICATION"
] | Non_BioNLP |
Helsinki-NLP/opus-mt-id-en | Helsinki-NLP | translation | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"id",
"en",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 1,646,263,744,000 | 2023-08-16T11:58:05 | 38,547 | 15 | ---
license: apache-2.0
tags:
- translation
---
### opus-mt-id-en
* source languages: id
* target languages: en
* OPUS readme: [id-en](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/id-en/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* downl... | [
"TRANSLATION"
] | Non_BioNLP |
dltsj/mt5-small-finetuned-amazon-zh-full | dltsj | summarization | [
"transformers",
"pytorch",
"tensorboard",
"mt5",
"text2text-generation",
"summarization",
"generated_from_trainer",
"dataset:amazon_reviews_multi",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 1,681,575,688,000 | 2023-04-15T16:39:30 | 30 | 0 | ---
datasets:
- amazon_reviews_multi
license: apache-2.0
metrics:
- rouge
tags:
- summarization
- generated_from_trainer
model-index:
- name: mt5-small-finetuned-amazon-zh-full
results:
- task:
type: text2text-generation
name: Sequence-to-sequence Language Modeling
dataset:
name: amazon_review... | [
"SUMMARIZATION"
] | Non_BioNLP |
google/metricx-23-qe-large-v2p0 | google | null | [
"transformers",
"pytorch",
"mt5",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | 1,707,323,744,000 | 2025-01-07T21:11:07 | 90,875 | 6 | ---
license: apache-2.0
---
# MetricX-23
*This is not an officially supported Google product.*
**GitHub repository: [https://github.com/google-research/metricx](https://github.com/google-research/metricx)**
This repository contains the MetricX-23 models,
a family of models for automatic evaluation of translations th... | [
"TRANSLATION"
] | Non_BioNLP |
nfliu/deberta-v3-large_boolq | nfliu | text-classification | [
"transformers",
"pytorch",
"safetensors",
"deberta-v2",
"text-classification",
"generated_from_trainer",
"dataset:boolq",
"base_model:microsoft/deberta-v3-large",
"base_model:finetune:microsoft/deberta-v3-large",
"license:mit",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
... | 1,694,066,124,000 | 2023-09-08T05:40:57 | 32,728 | 3 | ---
base_model: microsoft/deberta-v3-large
datasets:
- boolq
license: mit
metrics:
- accuracy
tags:
- generated_from_trainer
model-index:
- name: deberta-v3-large_boolq
results:
- task:
type: text-classification
name: Text Classification
dataset:
name: boolq
type: boolq
config: def... | [
"TEXT_CLASSIFICATION"
] | Non_BioNLP |
gaudi/opus-mt-es-sl-ctranslate2 | gaudi | translation | [
"transformers",
"marian",
"ctranslate2",
"translation",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | 1,721,663,244,000 | 2024-10-19T03:02:59 | 6 | 0 | ---
license: apache-2.0
tags:
- ctranslate2
- translation
---
# Repository General Information
## Inspired by and derived from the work of [Helsinki-NLP](https://huggingface.co/Helsinki-NLP), [CTranslate2](https://github.com/OpenNMT/CTranslate2), and [michaelfeil](https://huggingface.co/michaelfeil)!
- Link to Original... | [
"TRANSLATION"
] | Non_BioNLP |
rambodazimi/bert-base-uncased-finetuned-LoRA-MRPC | rambodazimi | null | [
"safetensors",
"generated_from_trainer",
"dataset:glue",
"license:apache-2.0",
"model-index",
"region:us"
] | 1,724,380,837,000 | 2024-08-28T14:10:38 | 0 | 0 | ---
datasets:
- glue
license: apache-2.0
metrics:
- accuracy
- f1
tags:
- generated_from_trainer
model-index:
- name: bert-base-uncased-finetuned-LoRA-MRPC
results:
- task:
type: text-classification
name: Text Classification
dataset:
name: glue
type: glue
args: mrpc
metrics:
... | [
"TEXT_CLASSIFICATION"
] | Non_BioNLP |
yasuaki0406/distilbert-base-uncased-finetuned-emotion | yasuaki0406 | text-classification | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:emotion",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 1,661,874,714,000 | 2022-08-30T16:01:46 | 14 | 0 | ---
datasets:
- emotion
license: apache-2.0
metrics:
- accuracy
- f1
tags:
- generated_from_trainer
model-index:
- name: distilbert-base-uncased-finetuned-emotion
results:
- task:
type: text-classification
name: Text Classification
dataset:
name: emotion
type: emotion
args: default... | [
"TEXT_CLASSIFICATION"
] | Non_BioNLP |
YONGWOOHUH/distilbert-base-uncased-finetuned-emotion | YONGWOOHUH | text-classification | [
"transformers",
"pytorch",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:emotion",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 1,692,166,436,000 | 2023-08-16T06:30:31 | 8 | 0 | ---
datasets:
- emotion
license: apache-2.0
metrics:
- accuracy
- f1
tags:
- generated_from_trainer
model-index:
- name: distilbert-base-uncased-finetuned-emotion
results:
- task:
type: text-classification
name: Text Classification
dataset:
name: emotion
type: emotion
config: split... | [
"TEXT_CLASSIFICATION"
] | Non_BioNLP |
patrickquick/BERTicelli | patrickquick | text-classification | [
"transformers",
"pytorch",
"bert",
"text-classification",
"BERTicelli",
"text classification",
"abusive language",
"hate speech",
"offensive language",
"en",
"dataset:OLID",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 1,651,516,592,000 | 2022-05-10T09:03:48 | 8,416 | 0 | ---
datasets:
- OLID
language:
- en
license: apache-2.0
tags:
- BERTicelli
- text classification
- abusive language
- hate speech
- offensive language
widget:
- text: If Jamie Oliver fucks with my £3 meal deals at Tesco I'll kill the cunt.
example_title: Example 1
- text: Keep up the good hard work.
example_title: ... | [
"TEXT_CLASSIFICATION"
] | Non_BioNLP |
pinzhenchen/sft-lora-en-bloom-3b | pinzhenchen | null | [
"generation",
"question answering",
"instruction tuning",
"en",
"arxiv:2309.08958",
"license:cc-by-nc-4.0",
"region:us"
] | 1,709,682,445,000 | 2024-03-05T23:47:27 | 0 | 0 | ---
language:
- en
license: cc-by-nc-4.0
tags:
- generation
- question answering
- instruction tuning
---
### Model Description
This HF repository contains base LLMs instruction tuned (SFT) with LoRA and then used to study whether monolingual or multilingual instruction tuning is more favourable.
* [GitHub](https://... | [
"QUESTION_ANSWERING"
] | Non_BioNLP |
ura-hcmut/ura-llama-70b | ura-hcmut | text-generation | [
"transformers",
"safetensors",
"llama",
"text-generation",
"conversational",
"vi",
"en",
"dataset:vietgpt/wikipedia_vi",
"arxiv:2403.02715",
"arxiv:1910.09700",
"license:other",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | 1,696,500,789,000 | 2024-03-27T17:13:35 | 0 | 3 | ---
datasets:
- vietgpt/wikipedia_vi
language:
- vi
- en
license: other
pipeline_tag: text-generation
extra_gated_prompt: Please read the [URA-LLaMA License Agreement](https://github.com/martinakaduc/ura-llama-public/blob/main/URA-LLaMa%20Model%20User%20Agreement.pdf)
before accepting it.
extra_gated_fields:
Name: ... | [
"TEXT_CLASSIFICATION",
"QUESTION_ANSWERING",
"TRANSLATION",
"SUMMARIZATION"
] | Non_BioNLP |
Davlan/bert-base-multilingual-cased-finetuned-amharic | Davlan | fill-mask | [
"transformers",
"pytorch",
"bert",
"fill-mask",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 1,646,263,744,000 | 2021-06-02T12:37:53 | 270 | 2 | ---
{}
---
---
language: am
datasets:
---
# bert-base-multilingual-cased-finetuned-amharic
## Model description
**bert-base-multilingual-cased-finetuned-amharic** is an **Amharic BERT** model obtained by replacing the mBERT vocabulary with an Amharic vocabulary, because the language was not supported, and fin... 
"NAMED_ENTITY_RECOGNITION"
] | Non_BioNLP |
pinzhenchen/sft-lora-fr-bloom-7b1 | pinzhenchen | null | [
"generation",
"question answering",
"instruction tuning",
"fr",
"arxiv:2309.08958",
"license:cc-by-nc-4.0",
"region:us"
] | 1,709,682,478,000 | 2024-03-05T23:48:02 | 0 | 0 | ---
language:
- fr
license: cc-by-nc-4.0
tags:
- generation
- question answering
- instruction tuning
---
### Model Description
This HF repository contains base LLMs instruction tuned (SFT) with LoRA and then used to study whether monolingual or multilingual instruction tuning is more favourable.
* [GitHub](https://... | [
"QUESTION_ANSWERING"
] | Non_BioNLP |
zkava01/firstparagraph | zkava01 | text-classification | [
"tensorboard",
"safetensors",
"roberta",
"autotrain",
"text-classification",
"base_model:cardiffnlp/twitter-roberta-base-sentiment-latest",
"base_model:finetune:cardiffnlp/twitter-roberta-base-sentiment-latest",
"region:us"
] | 1,732,219,741,000 | 2024-11-21T20:13:17 | 5 | 0 | ---
base_model: cardiffnlp/twitter-roberta-base-sentiment-latest
tags:
- autotrain
- text-classification
widget:
- text: I love AutoTrain
---
# Model Trained Using AutoTrain
- Problem type: Text Classification
## Validation Metrics
loss: 0.17190960049629211
f1_macro: 0.9521367521367522
f1_micro: 0.9375
f1_weighte... | [
"TEXT_CLASSIFICATION"
] | Non_BioNLP |
tner/deberta-large-wnut2017 | tner | token-classification | [
"transformers",
"pytorch",
"deberta",
"token-classification",
"dataset:tner/wnut2017",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 1,660,087,551,000 | 2022-09-26T14:29:19 | 44 | 0 | ---
datasets:
- tner/wnut2017
metrics:
- f1
- precision
- recall
pipeline_tag: token-classification
widget:
- text: Jacob Collier is a Grammy awarded artist from England.
example_title: NER Example 1
model-index:
- name: tner/deberta-large-wnut2017
results:
- task:
type: token-classification
name: Tok... | [
"NAMED_ENTITY_RECOGNITION"
] | Non_BioNLP |
zyj2003lj/nomic-embed-text-v1.5-Q4_K_M-GGUF | zyj2003lj | sentence-similarity | [
"sentence-transformers",
"gguf",
"feature-extraction",
"sentence-similarity",
"mteb",
"transformers",
"transformers.js",
"llama-cpp",
"gguf-my-repo",
"en",
"base_model:nomic-ai/nomic-embed-text-v1.5",
"base_model:quantized:nomic-ai/nomic-embed-text-v1.5",
"license:apache-2.0",
"model-index... | 1,725,092,777,000 | 2024-08-31T08:26:20 | 21 | 0 | ---
base_model: nomic-ai/nomic-embed-text-v1.5
language:
- en
library_name: sentence-transformers
license: apache-2.0
pipeline_tag: sentence-similarity
tags:
- feature-extraction
- sentence-similarity
- mteb
- transformers
- transformers.js
- llama-cpp
- gguf-my-repo
model-index:
- name: epoch_0_model
results:
- ta... | [
"SUMMARIZATION"
] | Non_BioNLP |
ymoslem/whisper-small-ga2en-v5.4-r | ymoslem | automatic-speech-recognition | [
"transformers",
"tensorboard",
"safetensors",
"whisper",
"automatic-speech-recognition",
"generated_from_trainer",
"ga",
"en",
"dataset:ymoslem/IWSLT2023-GA-EN",
"dataset:ymoslem/FLEURS-GA-EN",
"dataset:ymoslem/BitesizeIrish-GA-EN",
"dataset:ymoslem/SpokenWords-GA-EN-MTed",
"dataset:ymoslem/... | 1,715,388,929,000 | 2024-05-11T18:28:20 | 6 | 0 | ---
base_model: openai/whisper-small
datasets:
- ymoslem/IWSLT2023-GA-EN
- ymoslem/FLEURS-GA-EN
- ymoslem/BitesizeIrish-GA-EN
- ymoslem/SpokenWords-GA-EN-MTed
- ymoslem/Tatoeba-Speech-Irish
- ymoslem/Wikimedia-Speech-Irish
language:
- ga
- en
license: apache-2.0
metrics:
- bleu
- wer
- chrf
tags:
- generated_from_train... | [
"TRANSLATION"
] | Non_BioNLP |
RichardErkhov/dwojcik_-_gpt2-large-fine-tuned-context-256-8bits | RichardErkhov | null | [
"safetensors",
"gpt2",
"8-bit",
"bitsandbytes",
"region:us"
] | 1,741,504,676,000 | 2025-03-09T07:18:29 | 2 | 0 | ---
{}
---
Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
gpt2-large-fine-tuned-context-256 - bnb 8bits
- Model creator: https://huggingface.co/dwojcik/
- Original model: ... | [
"SUMMARIZATION"
] | Non_BioNLP |
TransferGraph/yukta10_finetuning-sentiment-model-3000-samples-finetuned-lora-tweet_eval_irony | TransferGraph | text-classification | [
"peft",
"safetensors",
"parquet",
"text-classification",
"dataset:tweet_eval",
"base_model:yukta10/finetuning-sentiment-model-3000-samples",
"base_model:adapter:yukta10/finetuning-sentiment-model-3000-samples",
"license:apache-2.0",
"model-index",
"region:us"
] | 1,709,053,524,000 | 2024-02-29T13:26:52 | 0 | 0 | ---
base_model: yukta10/finetuning-sentiment-model-3000-samples
datasets:
- tweet_eval
library_name: peft
license: apache-2.0
metrics:
- accuracy
tags:
- parquet
- text-classification
model-index:
- name: yukta10_finetuning-sentiment-model-3000-samples-finetuned-lora-tweet_eval_irony
results:
- task:
type: te... | [
"TEXT_CLASSIFICATION"
] | Non_BioNLP |
Zhouzk/distilbert-base-uncased_emotion_ft_0520 | Zhouzk | text-classification | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:emotion",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 1,685,349,010,000 | 2023-05-30T03:39:42 | 12 | 0 | ---
datasets:
- emotion
license: apache-2.0
metrics:
- accuracy
- f1
- precision
tags:
- generated_from_trainer
model-index:
- name: distilbert-base-uncased_emotion_ft_0520
results:
- task:
type: text-classification
name: Text Classification
dataset:
name: emotion
type: emotion
con... | [
"TEXT_CLASSIFICATION"
] | Non_BioNLP |
gokuls/bert_uncased_L-12_H-768_A-12_massive | gokuls | text-classification | [
"transformers",
"pytorch",
"bert",
"text-classification",
"generated_from_trainer",
"dataset:massive",
"base_model:google/bert_uncased_L-12_H-768_A-12",
"base_model:finetune:google/bert_uncased_L-12_H-768_A-12",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible... | 1,696,618,746,000 | 2023-10-06T19:06:54 | 5 | 0 | ---
base_model: google/bert_uncased_L-12_H-768_A-12
datasets:
- massive
license: apache-2.0
metrics:
- accuracy
tags:
- generated_from_trainer
model-index:
- name: bert_uncased_L-12_H-768_A-12_massive
results:
- task:
type: text-classification
name: Text Classification
dataset:
name: massive
... | [
"TEXT_CLASSIFICATION"
] | Non_BioNLP |
claudios/unixcoder-base-unimodal | claudios | feature-extraction | [
"transformers",
"safetensors",
"roberta",
"feature-extraction",
"code",
"arxiv:2203.03850",
"arxiv:1910.09700",
"license:apache-2.0",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] | 1,717,445,391,000 | 2024-06-03T20:36:03 | 8 | 0 | ---
language:
- code
library_name: transformers
license: apache-2.0
---
# UniXcoder Base Unimodal
This is an *unofficial* reupload of [microsoft/unixcoder-base-unimodal](https://huggingface.co/microsoft/unixcoder-base-unimodal) in the `SafeTensors` format using `transformers` `4.41.2`. The goal of this reupload is to... | [
"SUMMARIZATION"
] | Non_BioNLP |
mbeukman/xlm-roberta-base-finetuned-swahili-finetuned-ner-luo | mbeukman | token-classification | [
"transformers",
"pytorch",
"xlm-roberta",
"token-classification",
"NER",
"luo",
"dataset:masakhaner",
"arxiv:2103.11811",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 1,646,263,745,000 | 2021-11-25T09:04:58 | 22 | 0 | ---
datasets:
- masakhaner
language:
- luo
metrics:
- f1
- precision
- recall
tags:
- NER
widget:
- text: "\uFEFFJii 2 moko jowito ngimagi ka machielo 1 to ohinyore marach mokalo e\
\ masira makoch mar apaya mane otimore e apaya mawuok Oyugis kochimo Chabera e\
\ sub county ma Rachuonyo East e County ma Homa Ba... | [
"NAMED_ENTITY_RECOGNITION"
] | Non_BioNLP |
google/t5-efficient-large-nl32 | google | text2text-generation | [
"transformers",
"pytorch",
"tf",
"jax",
"t5",
"text2text-generation",
"deep-narrow",
"en",
"dataset:c4",
"arxiv:2109.10686",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"region:us"
] | 1,646,263,745,000 | 2023-01-24T16:47:34 | 16 | 1 | ---
datasets:
- c4
language:
- en
license: apache-2.0
tags:
- deep-narrow
inference: false
---
# T5-Efficient-LARGE-NL32 (Deep-Narrow version)
T5-Efficient-LARGE-NL32 is a variation of [Google's original T5](https://ai.googleblog.com/2020/02/exploring-transfer-learning-with-t5.html) following the [T5 model architectu... | [
"TEXT_CLASSIFICATION",
"QUESTION_ANSWERING",
"SUMMARIZATION"
] | Non_BioNLP |
fine-tuned/SciFact-32000-384-gpt-4o-2024-05-13-76083984 | fine-tuned | feature-extraction | [
"sentence-transformers",
"safetensors",
"bert",
"feature-extraction",
"sentence-similarity",
"mteb",
"en",
"dataset:fine-tuned/SciFact-32000-384-gpt-4o-2024-05-13-76083984",
"dataset:allenai/c4",
"license:apache-2.0",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible... | 1,717,009,867,000 | 2024-05-29T19:11:43 | 6 | 0 | ---
datasets:
- fine-tuned/SciFact-32000-384-gpt-4o-2024-05-13-76083984
- allenai/c4
language:
- en
- en
license: apache-2.0
pipeline_tag: feature-extraction
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- mteb
---
This model is a fine-tuned version of [**BAAI/bge-large-en-v1.5**](https://hug... | [
"TEXT_CLASSIFICATION"
] | Non_BioNLP |
AjayMukundS/Llama-2-7b-LTS-finetuned-v3 | AjayMukundS | summarization | [
"transformers",
"pytorch",
"llama",
"text-generation",
"legal",
"text-generation-inference",
"summarization",
"en",
"dataset:AjayMukundS/LTS_Dataset_Reformatted",
"base_model:meta-llama/Llama-2-7b-hf",
"base_model:finetune:meta-llama/Llama-2-7b-hf",
"license:mit",
"autotrain_compatible",
"... | 1,724,644,870,000 | 2024-08-27T00:55:24 | 22 | 1 | ---
base_model: meta-llama/Llama-2-7b-hf
datasets:
- AjayMukundS/LTS_Dataset_Reformatted
language:
- en
library_name: transformers
license: mit
metrics:
- rouge
pipeline_tag: summarization
tags:
- legal
- text-generation-inference
---
| [
"SUMMARIZATION"
] | Non_BioNLP |
luccidomingues/autotrain-8fohv-7gjpn | luccidomingues | text-classification | [
"transformers",
"safetensors",
"bert",
"text-classification",
"autotrain",
"dataset:autotrain-8fohv-7gjpn/autotrain-data",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 1,708,616,097,000 | 2024-02-22T15:35:12 | 6 | 0 | ---
datasets:
- autotrain-8fohv-7gjpn/autotrain-data
tags:
- autotrain
- text-classification
widget:
- text: I love AutoTrain
---
# Model Trained Using AutoTrain
- Problem type: Text Classification
## Validation Metrics
loss: 0.64765864610672
f1: 0.6666666666666666
precision: 0.5
recall: 1.0
auc: 1.0
accuracy: ... | [
"TEXT_CLASSIFICATION"
] | Non_BioNLP |
davelotito/donut_experiment_bayesian_trial_15 | davelotito | image-text-to-text | [
"transformers",
"tensorboard",
"safetensors",
"vision-encoder-decoder",
"image-text-to-text",
"generated_from_trainer",
"base_model:naver-clova-ix/donut-base",
"base_model:finetune:naver-clova-ix/donut-base",
"license:mit",
"endpoints_compatible",
"region:us"
] | 1,719,421,298,000 | 2024-06-26T17:50:10 | 5 | 0 | ---
base_model: naver-clova-ix/donut-base
license: mit
metrics:
- bleu
- wer
tags:
- generated_from_trainer
model-index:
- name: donut_experiment_bayesian_trial_15
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofre... | [
"TRANSLATION"
] | Non_BioNLP |
platzi/platzi-distilroberta-base-mrpc-glue-rafa-rivera | platzi | text-classification | [
"transformers",
"pytorch",
"tensorboard",
"roberta",
"text-classification",
"generated_from_trainer",
"dataset:glue",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 1,685,749,547,000 | 2023-06-03T03:55:45 | 15 | 0 | ---
datasets:
- glue
license: apache-2.0
metrics:
- accuracy
- f1
tags:
- text-classification
- generated_from_trainer
widget:
- text:
- Yucaipa owned Dominick 's before selling the chain to Safeway in 1998 for $ 2.5
billion.
- Yucaipa bought Dominick's in 1995 for $ 693 million and sold it to Safeway for
$... | [
"TEXT_CLASSIFICATION"
] | Non_BioNLP |
bhaskars113/toyota-corrosion | bhaskars113 | text-classification | [
"setfit",
"safetensors",
"qwen2",
"sentence-transformers",
"text-classification",
"generated_from_setfit_trainer",
"custom_code",
"arxiv:2209.11055",
"base_model:dunzhang/stella_en_1.5B_v5",
"base_model:finetune:dunzhang/stella_en_1.5B_v5",
"region:us"
] | 1,729,542,623,000 | 2024-10-21T20:33:04 | 5 | 1 | ---
base_model: dunzhang/stella_en_1.5B_v5
library_name: setfit
metrics:
- accuracy
pipeline_tag: text-classification
tags:
- setfit
- sentence-transformers
- text-classification
- generated_from_setfit_trainer
widget:
- text: I have never owned a F-150. I fell in love with them in 2015 and really like
the idea of ... | [
"TEXT_CLASSIFICATION"
] | Non_BioNLP |
kuntalcse006/finetuning-sentiment-model-3000-samples | kuntalcse006 | text-classification | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:imdb",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 1,671,296,242,000 | 2022-12-17T17:17:00 | 113 | 0 | ---
datasets:
- imdb
license: apache-2.0
metrics:
- accuracy
- f1
tags:
- generated_from_trainer
model-index:
- name: finetuning-sentiment-model-3000-samples
results:
- task:
type: text-classification
name: Text Classification
dataset:
name: imdb
type: imdb
config: plain_text
... | [
"TEXT_CLASSIFICATION"
] | Non_BioNLP |
Triangle104/Phi-4-QwQ-Q5_K_M-GGUF | Triangle104 | text-generation | [
"transformers",
"gguf",
"text-generation-inference",
"llama",
"phi3",
"phi",
"llama-cpp",
"gguf-my-repo",
"text-generation",
"en",
"base_model:prithivMLmods/Phi-4-QwQ",
"base_model:quantized:prithivMLmods/Phi-4-QwQ",
"license:mit",
"endpoints_compatible",
"region:us",
"conversational"
... | 1,738,479,922,000 | 2025-02-02T07:07:12 | 4 | 0 | ---
base_model: prithivMLmods/Phi-4-QwQ
language:
- en
library_name: transformers
license: mit
pipeline_tag: text-generation
tags:
- text-generation-inference
- llama
- phi3
- phi
- llama-cpp
- gguf-my-repo
---
# Triangle104/Phi-4-QwQ-Q5_K_M-GGUF
This model was converted to GGUF format from [`prithivMLmods/Phi-4-QwQ`]... | [
"TRANSLATION"
] | Non_BioNLP |
azale-ai/GotongRoyong-LlaMixtralMoE-7Bx4-v1.0 | azale-ai | text-generation | [
"transformers",
"safetensors",
"mixtral",
"text-generation",
"merge",
"mergekit",
"lazymergekit",
"moe",
"indonesian",
"multilingual",
"en",
"id",
"arxiv:2312.00738",
"arxiv:2307.09288",
"license:cc-by-nc-nd-4.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_comp... | 1,705,132,668,000 | 2024-01-14T05:00:25 | 9 | 0 | ---
language:
- en
- id
license: cc-by-nc-nd-4.0
tags:
- merge
- mergekit
- lazymergekit
- moe
- indonesian
- multilingual
---

# GotongRoyong-LlaMixtralMoE-7Bx4-v1.0
GotongRoyong is a series of language mod... | [
"TRANSLATION"
] | Non_BioNLP |
BigHuggyD/cohereforai_c4ai-command-r-plus_exl2_4.5bpw_h8 | BigHuggyD | text-generation | [
"transformers",
"safetensors",
"cohere",
"text-generation",
"conversational",
"en",
"fr",
"de",
"es",
"it",
"pt",
"ja",
"ko",
"zh",
"ar",
"license:cc-by-nc-4.0",
"autotrain_compatible",
"text-generation-inference",
"exl2",
"region:us"
] | 1,719,650,212,000 | 2024-06-29T09:09:59 | 8 | 0 | ---
language:
- en
- fr
- de
- es
- it
- pt
- ja
- ko
- zh
- ar
library_name: transformers
license: cc-by-nc-4.0
inference: false
---
# Model Card for C4AI Command R+
🚨 **This model is non-quantized version of C4AI Command R+. You can find the quantized version of C4AI Command R+ using bitsandbytes [here](https://hu... | [
"QUESTION_ANSWERING",
"SUMMARIZATION"
] | Non_BioNLP |
keras/bart_large_en_cnn | keras | text-classification | [
"keras-hub",
"text-classification",
"keras",
"en",
"arxiv:1910.13461",
"license:apache-2.0",
"region:us"
] | 1,730,152,846,000 | 2024-12-23T22:54:49 | 11 | 0 | ---
language:
- en
library_name: keras-hub
license: apache-2.0
pipeline_tag: text-classification
tags:
- text-classification
- keras
---
### Model Overview
BART encoder-decoder network.
This class implements a Transformer-based encoder-decoder model as
described in
["BART: Denoising Sequence-to-Sequence Pre-training f... | [
"TRANSLATION",
"SUMMARIZATION"
] | Non_BioNLP |
LysandreJik/testing | LysandreJik | text-classification | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"en",
"dataset:glue",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 1,646,263,744,000 | 2021-09-22T19:19:12 | 108 | 0 | ---
datasets:
- glue
language:
- en
license: apache-2.0
metrics:
- accuracy
- f1
tags:
- generated_from_trainer
model-index:
- name: testing
results:
- task:
type: text-classification
name: Text Classification
dataset:
name: GLUE MRPC
type: glue
args: mrpc
metrics:
- type: ... | [
"TEXT_CLASSIFICATION"
] | Non_BioNLP |
pjox/dalembert | pjox | fill-mask | [
"transformers",
"pytorch",
"roberta",
"fill-mask",
"Early Modern French",
"Historical",
"fr",
"dataset:freemmax",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 1,665,534,229,000 | 2023-05-19T16:16:44 | 40 | 2 | ---
datasets:
- freemmax
language: fr
license: apache-2.0
tags:
- Early Modern French
- Historical
---
<a href="https://portizs.eu/publication/2022/lrec/dalembert/">
<img width="300px" src="https://portizs.eu/publication/2022/lrec/dalembert/featured_hu18bf34d40cdc71c744bdd15e48ff0b23_61788_720x2500_fit_q100_h2_lanczo... | [
"QUESTION_ANSWERING"
] | Non_BioNLP |
zaib32/autotrain-long-t5-tglobal-base-16384-book-summary-39278102680 | zaib32 | summarization | [
"transformers",
"pytorch",
"longt5",
"text2text-generation",
"autotrain",
"summarization",
"unk",
"dataset:zaib32/autotrain-data-long-t5-tglobal-base-16384-book-summary",
"co2_eq_emissions",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 1,678,125,541,000 | 2023-03-06T18:38:47 | 21 | 0 | ---
datasets:
- zaib32/autotrain-data-long-t5-tglobal-base-16384-book-summary
language:
- unk
tags:
- autotrain
- summarization
widget:
- text: I love AutoTrain 🤗
co2_eq_emissions:
emissions: 15.082265058465753
---
# Model Trained Using AutoTrain
- Problem type: Summarization
- Model ID: 39278102680
- CO2 Emission... | [
"SUMMARIZATION"
] | Non_BioNLP |
hanwenzhu/all-distilroberta-v1-lr2e-4-bs1024-nneg3-mlbs-mar03 | hanwenzhu | sentence-similarity | [
"sentence-transformers",
"safetensors",
"roberta",
"sentence-similarity",
"feature-extraction",
"generated_from_trainer",
"dataset_size:5854451",
"loss:MaskedCachedMultipleNegativesRankingLoss",
"arxiv:1908.10084",
"arxiv:2101.06983",
"base_model:sentence-transformers/all-distilroberta-v1",
"b... | 1,741,021,204,000 | 2025-03-03T17:00:15 | 13 | 0 | ---
base_model: sentence-transformers/all-distilroberta-v1
library_name: sentence-transformers
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- sentence-similarity
- feature-extraction
- generated_from_trainer
- dataset_size:5854451
- loss:MaskedCachedMultipleNegativesRankingLoss
widget:
- source_sente... | [
"TEXT_CLASSIFICATION"
] | Non_BioNLP |
Helsinki-NLP/opus-mt-ssp-es | Helsinki-NLP | translation | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"ssp",
"es",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 1,646,263,744,000 | 2023-08-16T12:04:35 | 39 | 0 | ---
license: apache-2.0
tags:
- translation
---
### opus-mt-ssp-es
* source languages: ssp
* target languages: es
* OPUS readme: [ssp-es](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/ssp-es/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* d... | [
"TRANSLATION"
] | Non_BioNLP |
onnx-community/opus-mt-ru-en | onnx-community | translation | [
"transformers.js",
"onnx",
"marian",
"text2text-generation",
"translation",
"base_model:Helsinki-NLP/opus-mt-ru-en",
"base_model:quantized:Helsinki-NLP/opus-mt-ru-en",
"license:cc-by-4.0",
"region:us"
] | 1,724,794,599,000 | 2024-10-08T13:54:14 | 7 | 0 | ---
base_model: Helsinki-NLP/opus-mt-ru-en
library_name: transformers.js
license: cc-by-4.0
pipeline_tag: translation
---
https://huggingface.co/Helsinki-NLP/opus-mt-ru-en with ONNX weights to be compatible with Transformers.js.
Note: Having a separate repo for ONNX weights is intended to be a temporary solution unti... | [
"TRANSLATION"
] | Non_BioNLP |
jamiehudson/625-model-brand-rem-jh3 | jamiehudson | text-classification | [
"sentence-transformers",
"pytorch",
"mpnet",
"setfit",
"text-classification",
"arxiv:2209.11055",
"license:apache-2.0",
"region:us"
] | 1,680,725,064,000 | 2023-04-05T20:04:37 | 10 | 0 | ---
license: apache-2.0
pipeline_tag: text-classification
tags:
- setfit
- sentence-transformers
- text-classification
---
# 625-model-brand-rem-jh3
This is a [SetFit model](https://github.com/huggingface/setfit) that can be used for text classification. The model has been trained using an efficient few-shot learning... | [
"TEXT_CLASSIFICATION"
] | Non_BioNLP |
TransferGraph/classla_bcms-bertic-parlasent-bcs-ter-finetuned-lora-tweet_eval_irony | TransferGraph | text-classification | [
"peft",
"safetensors",
"parquet",
"text-classification",
"dataset:tweet_eval",
"base_model:classla/bcms-bertic-parlasent-bcs-ter",
"base_model:adapter:classla/bcms-bertic-parlasent-bcs-ter",
"model-index",
"region:us"
] | 1,709,213,868,000 | 2024-02-29T13:37:51 | 3 | 0 | ---
base_model: classla/bcms-bertic-parlasent-bcs-ter
datasets:
- tweet_eval
library_name: peft
metrics:
- accuracy
tags:
- parquet
- text-classification
model-index:
- name: classla_bcms-bertic-parlasent-bcs-ter-finetuned-lora-tweet_eval_irony
results:
- task:
type: text-classification
name: Text Class... | [
"TEXT_CLASSIFICATION"
] | Non_BioNLP |
fine-tuned/ArguAna-512-192-gpt-4o-2024-05-13-698531 | fine-tuned | feature-extraction | [
"sentence-transformers",
"safetensors",
"bert",
"feature-extraction",
"sentence-similarity",
"mteb",
"Academic",
"Debates",
"Counterarguments",
"Research",
"Education",
"custom_code",
"en",
"dataset:fine-tuned/ArguAna-512-192-gpt-4o-2024-05-13-698531",
"dataset:allenai/c4",
"license:ap... | 1,716,845,257,000 | 2024-05-27T21:27:52 | 28 | 0 | ---
datasets:
- fine-tuned/ArguAna-512-192-gpt-4o-2024-05-13-698531
- allenai/c4
language:
- en
- en
license: apache-2.0
pipeline_tag: feature-extraction
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- mteb
- Academic
- Debates
- Counterarguments
- Research
- Education
---
This model is a fin... | [
"TEXT_CLASSIFICATION"
] | Non_BioNLP |
RichardErkhov/SillyTilly_-_google-gemma-2-9b-gguf | RichardErkhov | null | [
"gguf",
"arxiv:2009.03300",
"arxiv:1905.07830",
"arxiv:1911.11641",
"arxiv:1904.09728",
"arxiv:1905.10044",
"arxiv:1907.10641",
"arxiv:1811.00937",
"arxiv:1809.02789",
"arxiv:1911.01547",
"arxiv:1705.03551",
"arxiv:2107.03374",
"arxiv:2108.07732",
"arxiv:2110.14168",
"arxiv:2009.11462",
... | 1,722,076,591,000 | 2024-07-27T16:31:14 | 56 | 0 | ---
{}
---
Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
google-gemma-2-9b - GGUF
- Model creator: https://huggingface.co/SillyTilly/
- Original model: https://huggingfac... | [
"QUESTION_ANSWERING",
"SUMMARIZATION"
] | Non_BioNLP |
Triangle104/LwQ-10B-Instruct-Q5_K_M-GGUF | Triangle104 | text-generation | [
"transformers",
"gguf",
"text-generation-inference",
"LwQ",
"safetensors",
"Llama3.1",
"llama-cpp",
"gguf-my-repo",
"text-generation",
"en",
"base_model:prithivMLmods/LwQ-10B-Instruct",
"base_model:quantized:prithivMLmods/LwQ-10B-Instruct",
"license:llama3.1",
"endpoints_compatible",
"re... | 1,737,328,208,000 | 2025-01-19T23:12:05 | 8 | 0 | ---
base_model: prithivMLmods/LwQ-10B-Instruct
language:
- en
library_name: transformers
license: llama3.1
pipeline_tag: text-generation
tags:
- text-generation-inference
- LwQ
- safetensors
- Llama3.1
- llama-cpp
- gguf-my-repo
---
# Triangle104/LwQ-10B-Instruct-Q5_K_M-GGUF
This model was converted to GGUF format fro... | [
"SUMMARIZATION"
] | Non_BioNLP |
Joemgu/mlong-t5-base-sumstew | Joemgu | summarization | [
"transformers",
"pytorch",
"safetensors",
"longt5",
"text2text-generation",
"summarization",
"long",
"title generation",
"en",
"de",
"fr",
"it",
"es",
"dataset:Joemgu/sumstew",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 1,686,250,321,000 | 2023-07-01T11:46:18 | 80 | 5 | ---
datasets:
- Joemgu/sumstew
language:
- en
- de
- fr
- it
- es
library_name: transformers
license: apache-2.0
metrics:
- rouge
pipeline_tag: summarization
tags:
- summarization
- long
- title generation
---
# STILL UNDER DEVELOPMENT (TRAINING RUNNING)
## How to use:
Prefix your document of choice with either:
-... | [
"SUMMARIZATION"
] | Non_BioNLP |
Infomaniak-AI/onnx-opus-mt-en-de | Infomaniak-AI | translation | [
"onnx",
"marian",
"translation",
"en",
"de",
"base_model:Helsinki-NLP/opus-mt-en-de",
"base_model:quantized:Helsinki-NLP/opus-mt-en-de",
"license:apache-2.0",
"region:us"
] | 1,723,562,975,000 | 2024-08-13T15:58:14 | 12 | 0 | ---
base_model: Helsinki-NLP/opus-mt-en-de
language:
- en
- de
license: apache-2.0
pipeline_tag: translation
tags:
- translation
- onnx
---
### opus-mt-en-de
## Table of Contents
- [Model Details](#model-details)
- [Uses](#uses)
- [Risks, Limitations and Biases](#risks-limitations-and-biases)
- [Training](#training)... | [
"TRANSLATION"
] | Non_BioNLP |
yarak001/distilbert-base-uncased-finetuned-emotion | yarak001 | text-classification | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:emotion",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 1,684,369,721,000 | 2023-05-18T01:03:56 | 26 | 0 | ---
datasets:
- emotion
license: apache-2.0
metrics:
- accuracy
- f1
tags:
- generated_from_trainer
model-index:
- name: distilbert-base-uncased-finetuned-emotion
results:
- task:
type: text-classification
name: Text Classification
dataset:
name: emotion
type: emotion
config: split... | [
"TEXT_CLASSIFICATION"
] | Non_BioNLP |
ParhamAbdarzade/finetuning-sentiment-model-20000-samples-imdb-v2 | ParhamAbdarzade | text-classification | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:imdb",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 1,670,232,801,000 | 2022-12-05T10:32:04 | 112 | 0 | ---
datasets:
- imdb
license: apache-2.0
metrics:
- accuracy
- f1
tags:
- generated_from_trainer
model-index:
- name: finetuning-sentiment-model-20000-samples-imdb-v2
results:
- task:
type: text-classification
name: Text Classification
dataset:
name: imdb
type: imdb
config: plain_t... | [
"TEXT_CLASSIFICATION"
] | Non_BioNLP |
gokuls/hBERTv1_new_pretrain_48_emb_com_sst2 | gokuls | text-classification | [
"transformers",
"pytorch",
"tensorboard",
"hybridbert",
"text-classification",
"generated_from_trainer",
"en",
"dataset:glue",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 1,686,748,051,000 | 2023-06-14T15:46:23 | 10 | 0 | ---
datasets:
- glue
language:
- en
metrics:
- accuracy
tags:
- generated_from_trainer
model-index:
- name: hBERTv1_new_pretrain_48_emb_com_sst2
results:
- task:
type: text-classification
name: Text Classification
dataset:
name: GLUE SST2
type: glue
config: sst2
split: valida... | [
"TEXT_CLASSIFICATION"
] | Non_BioNLP |
YakovElm/Hyperledger10SetFitModel_balance_ratio_2 | YakovElm | text-classification | [
"sentence-transformers",
"pytorch",
"mpnet",
"setfit",
"text-classification",
"arxiv:2209.11055",
"license:apache-2.0",
"region:us"
] | 1,685,623,875,000 | 2023-06-01T12:52:07 | 8 | 0 | ---
license: apache-2.0
pipeline_tag: text-classification
tags:
- setfit
- sentence-transformers
- text-classification
---
# YakovElm/Hyperledger10SetFitModel_balance_ratio_2
This is a [SetFit model](https://github.com/huggingface/setfit) that can be used for text classification. The model has been trained using an e... | [
"TEXT_CLASSIFICATION"
] | Non_BioNLP |
Helsinki-NLP/opus-mt-fr-kqn | Helsinki-NLP | translation | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"fr",
"kqn",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 1,646,263,744,000 | 2023-08-16T11:36:40 | 48 | 0 | ---
license: apache-2.0
tags:
- translation
---
### opus-mt-fr-kqn
* source languages: fr
* target languages: kqn
* OPUS readme: [fr-kqn](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/fr-kqn/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* d... | [
"TRANSLATION"
] | Non_BioNLP |
buddhist-nlp/gemma-2-mitra-it | buddhist-nlp | text-generation | [
"transformers",
"safetensors",
"gemma2",
"text-generation",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | 1,731,014,908,000 | 2024-11-08T00:35:12 | 85 | 4 | ---
library_name: transformers
tags: []
---
# gemma2-mitra-it
This is based on gemma2-mitra-base and finetuned on Translation instructions.
The template for prompting the model is this:
```
Please translate into <target_language>: <input_sentence> 🔽 Translation::
```
Line breaks in this model should be replaced wi... | [
"TRANSLATION"
] | Non_BioNLP |
gokuls/hBERTv2_new_pretrain_w_init_48_ver2_sst2 | gokuls | text-classification | [
"transformers",
"pytorch",
"hybridbert",
"text-classification",
"generated_from_trainer",
"en",
"dataset:glue",
"base_model:gokuls/bert_12_layer_model_v2_complete_training_new_wt_init_48",
"base_model:finetune:gokuls/bert_12_layer_model_v2_complete_training_new_wt_init_48",
"model-index",
"autot... | 1,697,582,402,000 | 2023-10-17T23:27:28 | 36 | 0 | ---
base_model: gokuls/bert_12_layer_model_v2_complete_training_new_wt_init_48
datasets:
- glue
language:
- en
metrics:
- accuracy
tags:
- generated_from_trainer
model-index:
- name: hBERTv2_new_pretrain_w_init_48_ver2_sst2
results:
- task:
type: text-classification
name: Text Classification
dataset... | [
"TEXT_CLASSIFICATION"
] | Non_BioNLP |
Tevatron/dse-phi3-docmatix-v2 | Tevatron | null | [
"Tevatron",
"pytorch",
"phi3_v",
"vidore",
"custom_code",
"en",
"dataset:Tevatron/docmatix-ir",
"dataset:HuggingFaceM4/Docmatix",
"dataset:Tevatron/msmarco-passage-aug",
"arxiv:2406.11251",
"license:mit",
"region:us"
] | 1,722,409,227,000 | 2024-08-12T07:57:37 | 187 | 1 | ---
datasets:
- Tevatron/docmatix-ir
- HuggingFaceM4/Docmatix
- Tevatron/msmarco-passage-aug
language:
- en
library_name: Tevatron
license: mit
tags:
- vidore
---
# DSE-Phi3-Docmatix-V2
DSE-Phi3-Docmatix-V2 is a bi-encoder model designed to encode document screenshots into dense vectors for document retrieval. The Do... | [
"QUESTION_ANSWERING"
] | Non_BioNLP |
kafikani/autotrain-iinjh-0wh75 | kafikani | text-classification | [
"tensorboard",
"safetensors",
"longformer",
"autotrain",
"text-classification",
"base_model:allenai/longformer-base-4096",
"base_model:finetune:allenai/longformer-base-4096",
"region:us"
] | 1,730,450,943,000 | 2024-11-06T05:21:02 | 4 | 0 | ---
base_model: allenai/longformer-base-4096
tags:
- autotrain
- text-classification
widget:
- text: I love AutoTrain
---
# Model Trained Using AutoTrain
- Problem type: Text Classification
## Validation Metrics
loss: 1.307055115699768
f1_macro: 0.5244016249451032
f1_micro: 0.7504835589941973
f1_weighted: 0.71476... | [
"TEXT_CLASSIFICATION"
] | Non_BioNLP |
zeynepcetin/distilbert-base-uncased-zeynepc-5dim | zeynepcetin | text-classification | [
"transformers",
"tf",
"distilbert",
"text-classification",
"personality-analysis",
"five-factor-model",
"generated_from_keras_callback",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"model-index",
"autotrain_co... | 1,729,508,041,000 | 2025-01-22T17:05:49 | 4 | 0 | ---
base_model: distilbert-base-uncased
library_name: transformers
license: apache-2.0
tags:
- text-classification
- personality-analysis
- five-factor-model
- transformers
- generated_from_keras_callback
model-index:
- name: distilbert-base-uncased-zeynepc-5dim
results:
- task:
type: text-classification
... | [
"TEXT_CLASSIFICATION"
] | Non_BioNLP |
unsloth/SmolLM2-360M | unsloth | text-generation | [
"transformers",
"safetensors",
"llama",
"text-generation",
"unsloth",
"en",
"base_model:HuggingFaceTB/SmolLM2-360M",
"base_model:finetune:HuggingFaceTB/SmolLM2-360M",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | 1,730,409,947,000 | 2024-10-31T22:53:13 | 5,520 | 0 | ---
base_model: HuggingFaceTB/SmolLM2-360M
language:
- en
library_name: transformers
license: apache-2.0
tags:
- llama
- unsloth
- transformers
---
# Finetune SmolLM2, Llama 3.2, Gemma 2, Mistral 2-5x faster with 70% less memory via Unsloth!
We have a free Google Colab Tesla T4 notebook for Llama 3.2 (3B) here: https... | [
"SUMMARIZATION"
] | Non_BioNLP |
YxBxRyXJx/bge-base-movie-matryoshka | YxBxRyXJx | sentence-similarity | [
"sentence-transformers",
"safetensors",
"bert",
"sentence-similarity",
"feature-extraction",
"generated_from_trainer",
"dataset_size:183",
"loss:MatryoshkaLoss",
"loss:MultipleNegativesRankingLoss",
"en",
"dataset:YxBxRyXJx/QAsimple_for_BGE_241019",
"arxiv:1908.10084",
"arxiv:2205.13147",
... | 1,732,162,745,000 | 2024-11-21T04:19:21 | 20 | 0 | ---
base_model: BAAI/bge-base-en-v1.5
datasets:
- YxBxRyXJx/QAsimple_for_BGE_241019
language:
- en
library_name: sentence-transformers
license: apache-2.0
metrics:
- cosine_accuracy@1
- cosine_accuracy@3
- cosine_accuracy@5
- cosine_accuracy@10
- cosine_precision@1
- cosine_precision@3
- cosine_precision@5
- cosine_pre... | [
"TEXT_CLASSIFICATION"
] | Non_BioNLP |
seelingwong/testmodel | seelingwong | summarization | [
"summarization",
"en",
"dataset:OpenAssistant/oasst1",
"region:us"
] | 1,682,531,791,000 | 2023-05-14T16:58:02 | 0 | 0 | ---
datasets:
- OpenAssistant/oasst1
language:
- en
metrics:
- bleu
pipeline_tag: summarization
---
| [
"SUMMARIZATION"
] | Non_BioNLP |
fine-tuned/FiQA2018-512-192-gpt-4o-2024-05-13-873132 | fine-tuned | feature-extraction | [
"sentence-transformers",
"safetensors",
"bert",
"feature-extraction",
"sentence-similarity",
"mteb",
"custom_code",
"en",
"dataset:fine-tuned/FiQA2018-512-192-gpt-4o-2024-05-13-873132",
"dataset:allenai/c4",
"license:apache-2.0",
"autotrain_compatible",
"text-embeddings-inference",
"endpoi... | 1,716,911,004,000 | 2024-05-28T15:43:37 | 7 | 0 | ---
datasets:
- fine-tuned/FiQA2018-512-192-gpt-4o-2024-05-13-873132
- allenai/c4
language:
- en
- en
license: apache-2.0
pipeline_tag: feature-extraction
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- mteb
---
This model is a fine-tuned version of [**jinaai/jina-embeddings-v2-base-en**](htt... | [
"TEXT_CLASSIFICATION"
] | Non_BioNLP |
Helsinki-NLP/opus-mt-en-he | Helsinki-NLP | translation | [
"transformers",
"pytorch",
"tf",
"rust",
"marian",
"text2text-generation",
"translation",
"en",
"he",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 1,646,263,744,000 | 2023-08-16T11:29:48 | 5,661 | 5 | ---
license: apache-2.0
tags:
- translation
---
### opus-mt-en-he
* source languages: en
* target languages: he
* OPUS readme: [en-he](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/en-he/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* downl... | [
"TRANSLATION"
] | Non_BioNLP |
Thermostatic/NeuralTranslate_v0.2 | Thermostatic | text-generation | [
"transformers",
"pytorch",
"mistral",
"text-generation",
"Translation",
"Mistral",
"English",
"Spanish",
"conversational",
"en",
"es",
"dataset:Thermostatic/ShareGPT_NeuralTranslate_v0.1",
"arxiv:1910.09700",
"license:mit",
"autotrain_compatible",
"text-generation-inference",
"endpoi... | 1,714,112,416,000 | 2024-04-27T22:21:56 | 33 | 1 | ---
datasets:
- Thermostatic/ShareGPT_NeuralTranslate_v0.1
language:
- en
- es
license: mit
tags:
- Translation
- Mistral
- English
- Spanish
---

# Model Card for NeuralTranslate
<!-- Provide a quic... | [
"TRANSLATION"
] | Non_BioNLP |
gokceuludogan/WarmMolGenTwo | gokceuludogan | text2text-generation | [
"transformers",
"pytorch",
"encoder-decoder",
"text2text-generation",
"molecule-generation",
"cheminformatics",
"targeted-drug-design",
"biochemical-language-models",
"license:mit",
"autotrain_compatible",
"region:us"
] | 1,660,393,326,000 | 2022-08-14T13:39:28 | 16 | 0 | ---
license: mit
tags:
- molecule-generation
- cheminformatics
- targeted-drug-design
- biochemical-language-models
inference: false
---
## WarmMolGenTwo
A target-specific molecule generator model which is warm started (i.e. initialized) from pretrained biochemical language models and trained on interacting protein-co... | [
"TRANSLATION"
] | Non_BioNLP |
opennyaiorg/InRhetoricalRoles | opennyaiorg | null | [
"en",
"dataset:opennyaiorg/InRhetoricalRoles",
"arxiv:2201.13125",
"license:apache-2.0",
"region:us"
] | 1,715,148,207,000 | 2024-05-08T06:25:26 | 0 | 0 | ---
datasets:
- opennyaiorg/InRhetoricalRoles
language:
- en
license: apache-2.0
---
# Github
The model can be accessed via our library: [https://github.com/OpenNyAI/Opennyai](https://github.com/OpenNyAI/Opennyai)
# Paper details
[Corpus for Automatic Structuring of Legal Documents](https://aclanthology.org/2022.lrec... | [
"SUMMARIZATION"
] | Non_BioNLP |
RomainDarous/large_directThreeEpoch_meanPooling_mistranslationModel | RomainDarous | sentence-similarity | [
"sentence-transformers",
"safetensors",
"xlm-roberta",
"sentence-similarity",
"feature-extraction",
"generated_from_trainer",
"dataset_size:4460010",
"loss:CoSENTLoss",
"dataset:RomainDarous/corrupted_os_by_language",
"arxiv:1908.10084",
"base_model:RomainDarous/large_directTwoEpoch_meanPooling_... | 1,740,876,592,000 | 2025-03-02T00:50:30 | 13 | 0 | ---
base_model: RomainDarous/large_directTwoEpoch_meanPooling_mistranslationModel
datasets:
- RomainDarous/corrupted_os_by_language
library_name: sentence-transformers
metrics:
- pearson_cosine
- spearman_cosine
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- sentence-similarity
- feature-extraction
-... | [
"TEXT_CLASSIFICATION",
"SEMANTIC_SIMILARITY",
"TRANSLATION"
] | Non_BioNLP |
Philipp-Sc/mistral-7b-reverse-instruct | Philipp-Sc | text-generation | [
"safetensors",
"gguf",
"text-generation",
"en",
"dataset:pankajmathur/WizardLM_Orca",
"dataset:teknium/trismegistus-project",
"dataset:unalignment/toxic-dpo-v0.1",
"dataset:Intel/orca_dpo_pairs",
"license:apache-2.0",
"endpoints_compatible",
"region:us",
"conversational"
] | 1,702,446,120,000 | 2023-12-20T08:01:42 | 105 | 5 | ---
datasets:
- pankajmathur/WizardLM_Orca
- teknium/trismegistus-project
- unalignment/toxic-dpo-v0.1
- Intel/orca_dpo_pairs
language:
- en
license: apache-2.0
pipeline_tag: text-generation
---
## Mistral 7b Reverse Instruct
This model is sft (LoRA) fine tuned to reverse engineer the original prompt of a given LLM o... | [
"SUMMARIZATION"
] | Non_BioNLP |
TeohYx/Translator | TeohYx | translation | [
"translation",
"arxiv:1910.09700",
"region:us"
] | 1,685,973,682,000 | 2023-06-06T03:26:07 | 0 | 0 | ---
pipeline_tag: translation
---
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
This modelcard aims to be a base template for new models. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/mode... | [
"TRANSLATION"
] | Non_BioNLP |
llmware/slim-topics-npu-ov | llmware | null | [
"openvino",
"llama",
"license:apache-2.0",
"region:us"
] | 1,741,947,356,000 | 2025-03-14T10:18:01 | 5 | 0 | ---
base_model: llmware/slim-topics
license: apache-2.0
tags:
- green
- p1
- llmware-fx
- ov
- emerald
inference: false
base_model_relation: quantized
---
# slim-topics-npu-ov
**slim-topics-npu-ov** is a specialized function calling model that generates a topic description for a text passage, typically no more than... | [
"SUMMARIZATION"
] | Non_BioNLP |
marumarukun/BAAI-bge-large-en-v1.5_fine_tuned_fold1_20241115_191836 | marumarukun | sentence-similarity | [
"sentence-transformers",
"safetensors",
"bert",
"sentence-similarity",
"feature-extraction",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
"region:us"
] | 1,731,680,127,000 | 2024-11-15T14:19:32 | 4 | 0 | ---
library_name: sentence-transformers
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- sentence-similarity
- feature-extraction
---
# SentenceTransformer
This is a [sentence-transformers](https://www.SBERT.net) model trained. It maps sentences & paragraphs to a 1024-dimensional dense vector space a... | [
"TEXT_CLASSIFICATION"
] | Non_BioNLP |
gokuls/distilbert_add_GLUE_Experiment_logit_kd_stsb | gokuls | text-classification | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"en",
"dataset:glue",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 1,674,959,662,000 | 2023-01-29T02:38:46 | 136 | 0 | ---
datasets:
- glue
language:
- en
license: apache-2.0
metrics:
- spearmanr
tags:
- generated_from_trainer
model-index:
- name: distilbert_add_GLUE_Experiment_logit_kd_stsb
results:
- task:
type: text-classification
name: Text Classification
dataset:
name: GLUE STSB
type: glue
con... | [
"TEXT_CLASSIFICATION"
] | Non_BioNLP |
Snowflake/snowflake-arctic-embed-l | Snowflake | sentence-similarity | [
"sentence-transformers",
"onnx",
"safetensors",
"bert",
"feature-extraction",
"sentence-similarity",
"mteb",
"arctic",
"snowflake-arctic-embed",
"transformers.js",
"arxiv:2407.18887",
"arxiv:2405.05374",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"text-embeddings-infe... | 1,712,930,074,000 | 2024-12-19T13:32:48 | 26,313 | 91 | ---
license: apache-2.0
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- mteb
- arctic
- snowflake-arctic-embed
- transformers.js
new_version: Snowflake/snowflake-arctic-embed-l-v2.0
model-index:
- name: snowflake-arctic-embed-l
results:
- task:
type... | [
"SUMMARIZATION"
] | Non_BioNLP |
webbigdata/C3TR-Adapter_hqq | webbigdata | translation | [
"gptq",
"gemma",
"translation",
"hqq",
"text-generation-inference",
"nlp",
"ja",
"en",
"base_model:google/gemma-7b",
"base_model:finetune:google/gemma-7b",
"region:us"
] | 1,716,372,318,000 | 2024-05-24T07:34:44 | 11 | 0 | ---
base_model: google/gemma-7b
language:
- ja
- en
library_name: gptq
tags:
- translation
- hqq
- gemma
- text-generation-inference
- nlp
---
### Model card
英日、日英翻訳用モデル[C3TR-Adapter](https://huggingface.co/webbigdata/C3TR-Adapter)のHQQ(Half-Quadratic Quantization)4bit量子化版です。
This is the HQQ(Half-Quadratic Quantizati... | [
"TRANSLATION"
] | Non_BioNLP |
derekiya/bart_fine_tuned_model-v2 | derekiya | text2text-generation | [
"transformers",
"pytorch",
"bart",
"text2text-generation",
"en",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 1,701,034,047,000 | 2023-11-27T06:17:03 | 7 | 0 | ---
language:
- en
library_name: transformers
license: apache-2.0
---
# Model Card: bart_fine_tuned_model-v2
<!-- Provide a quick summary of what the model is/does. -->
## Model Name
## bart_fine_tuned_model-v2
### Model Description
<!-- This model represents a fine-tuned version of the facebook/bart-large model... | [
"SUMMARIZATION"
] | Non_BioNLP |
ruanchaves/bert-large-portuguese-cased-assin2-entailment | ruanchaves | text-classification | [
"transformers",
"pytorch",
"bert",
"text-classification",
"pt",
"dataset:assin2",
"autotrain_compatible",
"region:us"
] | 1,679,940,573,000 | 2023-03-29T18:05:48 | 21 | 0 | ---
datasets:
- assin2
language: pt
inference: false
---
# BERTimbau large for Recognizing Textual Entailment
This is the [neuralmind/bert-large-portuguese-cased](https://huggingface.co/neuralmind/bert-large-portuguese-cased) model finetuned for
Recognizing Textual Entailment with the [ASSIN 2](https://huggingface.c... | [
"TEXTUAL_ENTAILMENT"
] | Non_BioNLP |
elinas/alpaca-30b-lora-int4 | elinas | text-generation | [
"transformers",
"pytorch",
"llama",
"text-generation",
"alpaca",
"gptq",
"license:other",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | 1,679,355,703,000 | 2023-04-05T16:42:03 | 44 | 68 | ---
license: other
tags:
- alpaca
- gptq
---
# llama-30b-int4
This LoRA was trained for 3 epochs and has been converted to int4 (4-bit) via the GPTQ method.
Use one of the two **safetensors** versions; the **pt** version is an old quantization that is no longer supported and will be removed in the future. Make sure you o... | [
"QUESTION_ANSWERING"
] | Non_BioNLP |
tmnam20/bert-base-multilingual-cased-qqp-10 | tmnam20 | text-classification | [
"transformers",
"safetensors",
"bert",
"text-classification",
"generated_from_trainer",
"en",
"dataset:tmnam20/VieGLUE",
"base_model:google-bert/bert-base-multilingual-cased",
"base_model:finetune:google-bert/bert-base-multilingual-cased",
"license:apache-2.0",
"model-index",
"autotrain_compat... | 1,705,388,287,000 | 2024-01-16T06:59:29 | 5 | 0 | ---
base_model: bert-base-multilingual-cased
datasets:
- tmnam20/VieGLUE
language:
- en
license: apache-2.0
metrics:
- accuracy
- f1
tags:
- generated_from_trainer
model-index:
- name: bert-base-multilingual-cased-qqp-10
results:
- task:
type: text-classification
name: Text Classification
dataset:
... | [
"TEXT_CLASSIFICATION"
] | Non_BioNLP |
gaudi/opus-mt-de-ase-ctranslate2 | gaudi | translation | [
"transformers",
"marian",
"ctranslate2",
"translation",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | 1,721,229,764,000 | 2024-10-18T23:41:38 | 6 | 0 | ---
license: apache-2.0
tags:
- ctranslate2
- translation
---
# Repository General Information
## Inspired by and derived from the work of [Helsinki-NLP](https://huggingface.co/Helsinki-NLP), [CTranslate2](https://github.com/OpenNMT/CTranslate2), and [michaelfeil](https://huggingface.co/michaelfeil)!
- Link to Original... | [
"TRANSLATION"
] | Non_BioNLP |
vocabtrimmer/mt5-small-trimmed-es-90000-esquad-qa | vocabtrimmer | text2text-generation | [
"transformers",
"pytorch",
"mt5",
"text2text-generation",
"question answering",
"es",
"dataset:lmqg/qg_esquad",
"arxiv:2210.03992",
"license:cc-by-4.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 1,679,388,159,000 | 2023-03-21T08:43:36 | 17 | 0 | ---
datasets:
- lmqg/qg_esquad
language: es
license: cc-by-4.0
metrics:
- bleu4
- meteor
- rouge-l
- bertscore
- moverscore
pipeline_tag: text2text-generation
tags:
- question answering
widget:
- text: 'question: ¿Cuál es la población de Nueva York a partir de 2014?, context:
Situada en uno de los mayores puertos n... | [
"QUESTION_ANSWERING"
] | Non_BioNLP |
partypress/partypress-monolingual-uk | partypress | text-classification | [
"transformers",
"pytorch",
"tf",
"roberta",
"text-classification",
"partypress",
"political science",
"parties",
"press releases",
"en",
"license:cc-by-sa-4.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 1,685,377,307,000 | 2023-11-09T11:08:17 | 120 | 0 | ---
language:
- en
license: cc-by-sa-4.0
metrics:
- accuracy
pipeline_tag: text-classification
tags:
- partypress
- political science
- parties
- press releases
widget:
- text: Farmers who applied for a Force Majeure when their businesses were impacted
by severe flooding and landslides on 22 and 23 August 2017 canno... | [
"TEXT_CLASSIFICATION"
] | Non_BioNLP |
projecte-aina/aina-translator-ca-en | projecte-aina | null | [
"fairseq",
"ca",
"en",
"dataset:projecte-aina/CA-EN_Parallel_Corpus",
"doi:10.57967/hf/1926",
"license:apache-2.0",
"region:us"
] | 1,669,280,944,000 | 2025-01-31T11:11:10 | 69 | 0 | ---
datasets:
- projecte-aina/CA-EN_Parallel_Corpus
language:
- ca
- en
library_name: fairseq
license: apache-2.0
metrics:
- bleu
---
## Projecte Aina's Catalan-English machine translation model
## Model description
This model was trained from scratch using the [Fairseq toolkit](https://fairseq.readthedocs.io/en/la... | [
"TRANSLATION"
] | Non_BioNLP |
tali1/autotrain-suricata-facebookai-roberta-base | tali1 | text-classification | [
"transformers",
"safetensors",
"roberta",
"text-classification",
"autotrain",
"dataset:autotrain-suricata-facebookai-roberta-base/autotrain-data",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 1,709,676,158,000 | 2024-03-05T22:02:53 | 6 | 0 | ---
datasets:
- autotrain-suricata-facebookai-roberta-base/autotrain-data
tags:
- autotrain
- text-classification
widget:
- text: I love AutoTrain
---
# Model Trained Using AutoTrain
- Problem type: Text Classification
## Validation Metrics
loss: 0.17026731371879578
f1_macro: 0.9173443088077234
f1_micro: 0.9617224... | [
"TEXT_CLASSIFICATION"
] | Non_BioNLP |
Intel/whisper-base-int8-static-inc | Intel | automatic-speech-recognition | [
"transformers",
"onnx",
"whisper",
"automatic-speech-recognition",
"int8",
"ONNX",
"PostTrainingStatic",
"Intel® Neural Compressor",
"neural-compressor",
"dataset:librispeech_asr",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | 1,692,949,039,000 | 2023-08-25T07:47:48 | 9 | 0 | ---
datasets:
- librispeech_asr
library_name: transformers
license: apache-2.0
metrics:
- wer
pipeline_tag: automatic-speech-recognition
tags:
- automatic-speech-recognition
- int8
- ONNX
- PostTrainingStatic
- Intel® Neural Compressor
- neural-compressor
---
## Model Details: INT8 Whisper base
Whisper is a pre-traine... | [
"TRANSLATION"
] | Non_BioNLP |
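The Whisper card above describes a post-training *static* int8 quantization. As a loose sketch of the underlying idea (a single scale fixed from calibration data, then symmetric round-to-nearest into the int8 range), with hypothetical helper names and no relation to Intel Neural Compressor's actual implementation:

```python
def calibration_scale(samples):
    """Static quantization: fix one scale from calibration data up front,
    instead of recomputing it per input at run time (dynamic quantization)."""
    return max(abs(s) for s in samples) / 127.0

def int8_quantize(values, scale):
    """Symmetric round-to-nearest into the int8 range [-128, 127]."""
    return [max(-128, min(127, round(v / scale))) for v in values]

def int8_dequantize(codes, scale):
    return [c * scale for c in codes]

calib = [0.5, -2.0, 1.25, 0.03, -0.8]   # stand-in for calibration activations
scale = calibration_scale(calib)         # 2.0 / 127
codes = int8_quantize(calib, scale)
recon = int8_dequantize(codes, scale)
max_err = max(abs(a - b) for a, b in zip(calib, recon))
```

Because the scale is frozen after calibration, inference needs no per-input statistics, which is what makes static int8 models attractive for deployment.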