| id (string, 6-113 chars) | author (string, 2-36 chars) | task_category (string, 39 classes) | tags (list, 1-4.05k items) | created_time (timestamp[s], 2022-03-02 23:29:04 to 2025-04-07 20:40:27) | last_modified (timestamp[s], 2020-05-14 13:13:12 to 2025-04-19 04:15:39) | downloads (int64, 0-118M) | likes (int64, 0-4.86k) | README (string, 30-1.01M chars) | matched_task (list, 1-10 items) | is_bionlp (string, 3 classes) | model_cards (string, 0-1M chars) | metadata (string, 2-698k chars) |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
fathyshalab/massive_play-roberta-large-v1-2-0.64 | fathyshalab | text-classification | [
"sentence-transformers",
"pytorch",
"roberta",
"setfit",
"text-classification",
"arxiv:2209.11055",
"license:apache-2.0",
"region:us"
] | 2023-02-08T16:17:52 | 2023-02-08T16:18:14 | 8 | 0 | ---
license: apache-2.0
pipeline_tag: text-classification
tags:
- setfit
- sentence-transformers
- text-classification
---
# fathyshalab/massive_play-roberta-large-v1-2-0.64
This is a [SetFit model](https://github.com/huggingface/setfit) that can be used for text classification. The model has been trained using an ef... | [
"TEXT_CLASSIFICATION"
] | Non_BioNLP |
# fathyshalab/massive_play-roberta-large-v1-2-0.64
This is a [SetFit model](https://github.com/huggingface/setfit) that can be used for text classification. The model has been trained using an efficient few-shot learning technique that involves:
1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with con... | {"license": "apache-2.0", "pipeline_tag": "text-classification", "tags": ["setfit", "sentence-transformers", "text-classification"]} |
LoneStriker/gemma-7b-4.0bpw-h6-exl2 | LoneStriker | text-generation | [
"transformers",
"safetensors",
"gemma",
"text-generation",
"arxiv:2305.14314",
"arxiv:2312.11805",
"arxiv:2009.03300",
"arxiv:1905.07830",
"arxiv:1911.11641",
"arxiv:1904.09728",
"arxiv:1905.10044",
"arxiv:1907.10641",
"arxiv:1811.00937",
"arxiv:1809.02789",
"arxiv:1911.01547",
"arxiv:... | 2024-02-22T15:55:08 | 2024-02-22T15:57:48 | 6 | 0 | ---
library_name: transformers
license: other
license_name: gemma-terms-of-use
license_link: https://ai.google.dev/gemma/terms
tags: []
extra_gated_heading: Access Gemma on Hugging Face
extra_gated_prompt: To access Gemma on Hugging Face, you’re required to review and
agree to Google’s usage license. To do this, plea... | [
"QUESTION_ANSWERING",
"SUMMARIZATION"
] | Non_BioNLP |
# Gemma Model Card
**Model Page**: [Gemma](https://ai.google.dev/gemma/docs)
This model card corresponds to the 7B base version of the Gemma model. You can also visit the model card of the [2B base model](https://huggingface.co/google/gemma-2b), [7B instruct model](https://huggingface.co/google/gemma-7b-it), and [2B... | {"library_name": "transformers", "license": "other", "license_name": "gemma-terms-of-use", "license_link": "https://ai.google.dev/gemma/terms", "tags": [], "extra_gated_heading": "Access Gemma on Hugging Face", "extra_gated_prompt": "To access Gemma on Hugging Face, you’re required to review and agree to Google’s usage... |
ravimehta/Test | ravimehta | summarization | [
"asteroid",
"summarization",
"en",
"dataset:togethercomputer/RedPajama-Data-1T",
"region:us"
] | 2023-06-22T17:34:38 | 2023-06-22T17:35:55 | 0 | 0 | ---
datasets:
- togethercomputer/RedPajama-Data-1T
language:
- en
library_name: asteroid
metrics:
- bleurt
pipeline_tag: summarization
---
| [
"SUMMARIZATION"
] | Non_BioNLP | {"datasets": ["togethercomputer/RedPajama-Data-1T"], "language": ["en"], "library_name": "asteroid", "metrics": ["bleurt"], "pipeline_tag": "summarization"} | |
Ahmed107/nllb200-ar-en_v11.1 | Ahmed107 | translation | [
"transformers",
"tensorboard",
"safetensors",
"m2m_100",
"text2text-generation",
"translation",
"generated_from_trainer",
"base_model:Ahmed107/nllb200-ar-en_v8",
"base_model:finetune:Ahmed107/nllb200-ar-en_v8",
"license:cc-by-nc-4.0",
"autotrain_compatible",
"endpoints_compatible",
"region:u... | 2023-12-07T06:57:33 | 2023-12-07T08:02:05 | 7 | 1 | ---
base_model: Ahmed107/nllb200-ar-en_v8
license: cc-by-nc-4.0
metrics:
- bleu
tags:
- translation
- generated_from_trainer
model-index:
- name: nllb200-ar-en_v11.1
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proof... | [
"TRANSLATION"
] | Non_BioNLP |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# nllb200-ar-en_v11.1
This model is a fine-tuned version of [Ahmed107/nllb200-ar-en_v8](https://huggingface.co/Ahmed107/nllb200-ar... | {"base_model": "Ahmed107/nllb200-ar-en_v8", "license": "cc-by-nc-4.0", "metrics": ["bleu"], "tags": ["translation", "generated_from_trainer"], "model-index": [{"name": "nllb200-ar-en_v11.1", "results": []}]} |
satish860/distilbert-base-uncased-finetuned-emotion | satish860 | text-classification | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:emotion",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 2022-04-12T09:35:34 | 2022-08-11T12:44:06 | 47 | 0 | ---
datasets:
- emotion
license: apache-2.0
metrics:
- accuracy
- f1
tags:
- generated_from_trainer
model-index:
- name: distilbert-base-uncased-finetuned-emotion
results:
- task:
type: text-classification
name: Text Classification
dataset:
name: emotion
type: emotion
args: default... | [
"TEXT_CLASSIFICATION"
] | Non_BioNLP |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co... | {"datasets": ["emotion"], "license": "apache-2.0", "metrics": ["accuracy", "f1"], "tags": ["generated_from_trainer"], "model-index": [{"name": "distilbert-base-uncased-finetuned-emotion", "results": [{"task": {"type": "text-classification", "name": "Text Classification"}, "dataset": {"name": "emotion", "type": "emotion... |
muhtasham/medium-mlm-imdb-target-tweet | muhtasham | text-classification | [
"transformers",
"pytorch",
"bert",
"text-classification",
"generated_from_trainer",
"dataset:tweet_eval",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 2022-12-11T07:07:40 | 2022-12-11T07:10:48 | 114 | 0 | ---
datasets:
- tweet_eval
license: apache-2.0
metrics:
- accuracy
- f1
tags:
- generated_from_trainer
model-index:
- name: medium-mlm-imdb-target-tweet
results:
- task:
type: text-classification
name: Text Classification
dataset:
name: tweet_eval
type: tweet_eval
config: emotion
... | [
"TEXT_CLASSIFICATION"
] | Non_BioNLP |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# medium-mlm-imdb-target-tweet
This model is a fine-tuned version of [muhtasham/medium-mlm-imdb](https://huggingface.co/muhtasham/... | {"datasets": ["tweet_eval"], "license": "apache-2.0", "metrics": ["accuracy", "f1"], "tags": ["generated_from_trainer"], "model-index": [{"name": "medium-mlm-imdb-target-tweet", "results": [{"task": {"type": "text-classification", "name": "Text Classification"}, "dataset": {"name": "tweet_eval", "type": "tweet_eval", "... |
ericzzz/falcon-rw-1b-instruct-openorca | ericzzz | text-generation | [
"transformers",
"safetensors",
"falcon",
"text-generation",
"text-generation-inference",
"en",
"dataset:Open-Orca/SlimOrca",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"region:us"
] | 2023-11-24T20:50:32 | 2024-03-05T00:49:13 | 2,405 | 11 | ---
datasets:
- Open-Orca/SlimOrca
language:
- en
license: apache-2.0
pipeline_tag: text-generation
tags:
- text-generation-inference
inference: false
model-index:
- name: falcon-rw-1b-instruct-openorca
results:
- task:
type: text-generation
name: Text Generation
dataset:
name: AI2 Reasoning C... | [
"TRANSLATION"
] | Non_BioNLP |
# 🌟 Falcon-RW-1B-Instruct-OpenOrca
Falcon-RW-1B-Instruct-OpenOrca is a 1B parameter, causal decoder-only model based on [Falcon-RW-1B](https://huggingface.co/tiiuae/falcon-rw-1b) and finetuned on the [Open-Orca/SlimOrca](https://huggingface.co/datasets/Open-Orca/SlimOrca) dataset.
**✨Check out our new conversationa... | {"datasets": ["Open-Orca/SlimOrca"], "language": ["en"], "license": "apache-2.0", "pipeline_tag": "text-generation", "tags": ["text-generation-inference"], "inference": false, "model-index": [{"name": "falcon-rw-1b-instruct-openorca", "results": [{"task": {"type": "text-generation", "name": "Text Generation"}, "dataset... |
fine-tuned/FiQA2018-256-24-gpt-4o-2024-05-13-256742 | fine-tuned | feature-extraction | [
"sentence-transformers",
"safetensors",
"bert",
"feature-extraction",
"sentence-similarity",
"mteb",
"en",
"dataset:fine-tuned/FiQA2018-256-24-gpt-4o-2024-05-13-256742",
"dataset:allenai/c4",
"license:apache-2.0",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",
... | 2024-05-23T10:26:10 | 2024-05-23T10:26:22 | 9 | 0 | ---
datasets:
- fine-tuned/FiQA2018-256-24-gpt-4o-2024-05-13-256742
- allenai/c4
language:
- en
license: apache-2.0
pipeline_tag: feature-extraction
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- mteb
---
This model is a fine-tuned version of [**BAAI/bge-base-en-v1.5**](https://huggingface.c... | [
"TEXT_CLASSIFICATION"
] | Non_BioNLP | This model is a fine-tuned version of [**BAAI/bge-base-en-v1.5**](https://huggingface.co/BAAI/bge-base-en-v1.5) designed for the following use case:
custom
## How to Use
This model can be easily integrated into your NLP pipeline for tasks such as text classification, sentiment analysis, entity recognition, and more. ... | {"datasets": ["fine-tuned/FiQA2018-256-24-gpt-4o-2024-05-13-256742", "allenai/c4"], "language": ["en"], "license": "apache-2.0", "pipeline_tag": "feature-extraction", "tags": ["sentence-transformers", "feature-extraction", "sentence-similarity", "mteb"]} |
PragmaticPete/tinyqwen | PragmaticPete | text-generation | [
"transformers",
"safetensors",
"qwen2",
"text-generation",
"pretrained",
"conversational",
"en",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | 2024-06-17T19:15:42 | 2024-06-17T19:19:41 | 14 | 0 | ---
language:
- en
license: apache-2.0
pipeline_tag: text-generation
tags:
- pretrained
---
# Qwen2-0.5B
## Introduction
Qwen2 is the new series of Qwen large language models. For Qwen2, we release a number of base language models and instruction-tuned language models ranging from 0.5 to 72 billion parameters, inclu... | [
"QUESTION_ANSWERING",
"TRANSLATION"
] | Non_BioNLP |
# Qwen2-0.5B
## Introduction
Qwen2 is the new series of Qwen large language models. For Qwen2, we release a number of base language models and instruction-tuned language models ranging from 0.5 to 72 billion parameters, including a Mixture-of-Experts model. This repo contains the 0.5B Qwen2 base language model.
Com... | {"language": ["en"], "license": "apache-2.0", "pipeline_tag": "text-generation", "tags": ["pretrained"]} |
Pclanglais/Larth-Mistral | Pclanglais | text-generation | [
"transformers",
"pytorch",
"mistral",
"text-generation",
"fr",
"license:cc-by-4.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"8-bit",
"bitsandbytes",
"region:us"
] | 2023-10-10T12:36:53 | 2023-10-21T21:16:07 | 20 | 5 | ---
language:
- fr
library_name: transformers
license: cc-by-4.0
pipeline_tag: text-generation
widget:
- text: 'Answer in Etruscan: Who is the father of Lars?'
example_title: Lars
inference:
parameters:
temperature: 0.7
repetition_penalty: 1.2
---
Larth-Mistral is the first LLM based on the Etruscan langua... | [
"TRANSLATION"
] | Non_BioNLP |
Larth-Mistral is the first LLM based on the Etruscan language, fine-tuned on 1087 original inscriptions.
Larth-Mistral supports cross-linguistic instructions (question in English, answer in Etruscan) and automated translations. The formulas to use are:
* *Answer in Etruscan: [Instruction in English]*
* *Translate in E... | {"language": ["fr"], "library_name": "transformers", "license": "cc-by-4.0", "pipeline_tag": "text-generation", "widget": [{"text": "Answer in Etruscan: Who is the father of Lars?", "example_title": "Lars"}], "inference": {"parameters": {"temperature": 0.7, "repetition_penalty": 1.2}}} |
fine-tuned/SciFact-512-192-gpt-4o-2024-05-13-28032241 | fine-tuned | feature-extraction | [
"sentence-transformers",
"safetensors",
"bert",
"feature-extraction",
"sentence-similarity",
"mteb",
"en",
"dataset:fine-tuned/SciFact-512-192-gpt-4o-2024-05-13-28032241",
"dataset:allenai/c4",
"license:apache-2.0",
"autotrain_compatible",
"text-embeddings-inference",
"endpoints_compatible",... | 2024-05-28T18:54:18 | 2024-05-28T18:54:49 | 6 | 0 | ---
datasets:
- fine-tuned/SciFact-512-192-gpt-4o-2024-05-13-28032241
- allenai/c4
language:
- en
- en
license: apache-2.0
pipeline_tag: feature-extraction
tags:
- sentence-transformers
- feature-extraction
- sentence-similarity
- mteb
---
This model is a fine-tuned version of [**BAAI/bge-large-en-v1.5**](https://huggi... | [
"TEXT_CLASSIFICATION"
] | Non_BioNLP | This model is a fine-tuned version of [**BAAI/bge-large-en-v1.5**](https://huggingface.co/BAAI/bge-large-en-v1.5) designed for the following use case:
None
## How to Use
This model can be easily integrated into your NLP pipeline for tasks such as text classification, sentiment analysis, entity recognition, and more. ... | {"datasets": ["fine-tuned/SciFact-512-192-gpt-4o-2024-05-13-28032241", "allenai/c4"], "language": ["en", "en"], "license": "apache-2.0", "pipeline_tag": "feature-extraction", "tags": ["sentence-transformers", "feature-extraction", "sentence-similarity", "mteb"]} |
pEpOo/catastrophy8 | pEpOo | text-classification | [
"setfit",
"safetensors",
"mpnet",
"sentence-transformers",
"text-classification",
"generated_from_setfit_trainer",
"arxiv:2209.11055",
"base_model:sentence-transformers/all-mpnet-base-v2",
"base_model:finetune:sentence-transformers/all-mpnet-base-v2",
"model-index",
"region:us"
] | 2023-12-18T14:14:04 | 2023-12-18T14:14:25 | 50 | 0 | ---
base_model: sentence-transformers/all-mpnet-base-v2
library_name: setfit
metrics:
- accuracy
pipeline_tag: text-classification
tags:
- setfit
- sentence-transformers
- text-classification
- generated_from_setfit_trainer
widget:
- text: "Rly tragedy in MP: Some live to recount horror: \x89ÛÏWhen I saw coaches\
\... | [
"TEXT_CLASSIFICATION"
] | Non_BioNLP |
# SetFit with sentence-transformers/all-mpnet-base-v2
This is a [SetFit](https://github.com/huggingface/setfit) model that can be used for Text Classification. This SetFit model uses [sentence-transformers/all-mpnet-base-v2](https://huggingface.co/sentence-transformers/all-mpnet-base-v2) as the Sentence Transformer e... | {"base_model": "sentence-transformers/all-mpnet-base-v2", "library_name": "setfit", "metrics": ["accuracy"], "pipeline_tag": "text-classification", "tags": ["setfit", "sentence-transformers", "text-classification", "generated_from_setfit_trainer"], "widget": [{"text": "Rly tragedy in MP: Some live to recount horror: Û... |
Anjaan-Khadka/Nepali-Summarization | Anjaan-Khadka | summarization | [
"transformers",
"pytorch",
"mt5",
"text2text-generation",
"summarization",
"mT5",
"ne",
"dataset:csebuetnlp/xlsum",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 2023-02-23T11:44:58 | 2023-03-17T08:45:04 | 21 | 0 | ---
datasets:
- csebuetnlp/xlsum
language:
- ne
tags:
- summarization
- mT5
widget:
- text: तीन नगरपालिकालाई समेटेर भेरी किनारमा बन्न थालेको आधुनिक नमुना सहरको काम तीव्र
गतिमा अघि बढेको छ । भेरीगंगा, गुर्भाकोट र लेकबेंसी नगरपालिकामा बन्न थालेको भेरीगंगा
उपत्यका नमुना आधुनिक सहर निर्माण हुन लागेको हो । यसले नदी ... | [
"SUMMARIZATION"
] | Non_BioNLP |
# Adaptation of mT5-multilingual-XLSum for the Nepali Language
This repository contains an adapted version of mT5-multilingual-XLSum for a single language (Nepali). View the original [mT5-multilingual-XLSum model](https://huggingface.co/csebuetnlp/mT5_multilingual_XLSum)
## Using this model in `transformers` (tested on 4.11.0.de... | {"datasets": ["csebuetnlp/xlsum"], "language": ["ne"], "tags": ["summarization", "mT5"], "widget": [{"text": "तीन नगरपालिकालाई समेटेर भेरी किनारमा बन्न थालेको आधुनिक नमुना सहरको काम तीव्र गतिमा अघि बढेको छ । भेरीगंगा, गुर्भाकोट र लेकबेंसी नगरपालिकामा बन्न थालेको भेरीगंगा उपत्यका नमुना आधुनिक सहर निर्माण हुन लागेको हो ।... |
sndsabin/fake-news-classifier | sndsabin | null | [
"license:gpl-3.0",
"region:us"
] | 2022-03-31T08:53:49 | 2022-04-07T08:58:17 | 0 | 0 | ---
license: gpl-3.0
---
**Fake News Classifier**: Text classification model to detect fake news articles!
**Dataset**: [Kaggle Fake and real news dataset](https://www.kaggle.com/datasets/clmentbisaillon/fake-and-real-news-dataset)
| [
"TEXT_CLASSIFICATION"
] | Non_BioNLP |
**Fake News Classifier**: Text classification model to detect fake news articles!
**Dataset**: [Kaggle Fake and real news dataset](https://www.kaggle.com/datasets/clmentbisaillon/fake-and-real-news-dataset)
| {"license": "gpl-3.0"} |
TheBloke/LUNA-SOLARkrautLM-Instruct-GGUF | TheBloke | text-generation | [
"transformers",
"gguf",
"solar",
"finetune",
"dpo",
"Instruct",
"augmentation",
"german",
"text-generation",
"en",
"de",
"dataset:argilla/distilabel-math-preference-dpo",
"base_model:fblgit/LUNA-SOLARkrautLM-Instruct",
"base_model:quantized:fblgit/LUNA-SOLARkrautLM-Instruct",
"license:cc... | 2023-12-23T13:02:23 | 2023-12-23T13:08:59 | 368 | 4 | ---
base_model: fblgit/LUNA-SOLARkrautLM-Instruct
datasets:
- argilla/distilabel-math-preference-dpo
language:
- en
- de
library_name: transformers
license: cc-by-nc-4.0
model_name: Luna SOLARkrautLM Instruct
pipeline_tag: text-generation
tags:
- finetune
- dpo
- Instruct
- augmentation
- german
inference: false
model_... | [
"TRANSLATION"
] | Non_BioNLP | <!-- markdownlint-disable MD041 -->
<!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content:... | {"base_model": "fblgit/LUNA-SOLARkrautLM-Instruct", "datasets": ["argilla/distilabel-math-preference-dpo"], "language": ["en", "de"], "library_name": "transformers", "license": "cc-by-nc-4.0", "model_name": "Luna SOLARkrautLM Instruct", "pipeline_tag": "text-generation", "tags": ["finetune", "dpo", "Instruct", "augment... |
halee9/translation_en_ko | halee9 | text2text-generation | [
"transformers",
"tensorboard",
"safetensors",
"marian",
"text2text-generation",
"generated_from_trainer",
"base_model:Helsinki-NLP/opus-mt-ko-en",
"base_model:finetune:Helsinki-NLP/opus-mt-ko-en",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 2024-03-16T17:30:56 | 2024-03-16T22:43:22 | 128 | 0 | ---
base_model: Helsinki-NLP/opus-mt-ko-en
license: apache-2.0
metrics:
- bleu
tags:
- generated_from_trainer
model-index:
- name: translation_en_ko
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete... | [
"TRANSLATION"
] | Non_BioNLP |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# translation_en_ko
This model is a fine-tuned version of [Helsinki-NLP/opus-mt-ko-en](https://huggingface.co/Helsinki-NLP/opus-mt... | {"base_model": "Helsinki-NLP/opus-mt-ko-en", "license": "apache-2.0", "metrics": ["bleu"], "tags": ["generated_from_trainer"], "model-index": [{"name": "translation_en_ko", "results": []}]} |
lamm-mit/Cephalo-Idefics-2-vision-10b-beta | lamm-mit | image-text-to-text | [
"transformers",
"safetensors",
"idefics2",
"image-text-to-text",
"nlp",
"code",
"vision",
"chemistry",
"engineering",
"biology",
"bio-inspired",
"text-generation-inference",
"materials science",
"conversational",
"multilingual",
"arxiv:2405.19076",
"license:apache-2.0",
"endpoints_... | 2024-05-28T15:25:25 | 2024-05-30T10:34:41 | 12 | 0 | ---
language:
- multilingual
library_name: transformers
license: apache-2.0
pipeline_tag: image-text-to-text
tags:
- nlp
- code
- vision
- chemistry
- engineering
- biology
- bio-inspired
- text-generation-inference
- materials science
inference:
parameters:
temperature: 0.3
widget:
- messages:
- role: user
... | [
"QUESTION_ANSWERING"
] | Non_BioNLP | ## Model Summary
Cephalo is a series of multimodal, materials-science-focused vision large language models (V-LLMs) designed to integrate visual and linguistic data for advanced understanding and interaction in human-AI or multi-agent AI frameworks.
A novel aspect of Cephalo's development is the innovative dataset ge... | {"language": ["multilingual"], "library_name": "transformers", "license": "apache-2.0", "pipeline_tag": "image-text-to-text", "tags": ["nlp", "code", "vision", "chemistry", "engineering", "biology", "bio-inspired", "text-generation-inference", "materials science"], "inference": {"parameters": {"temperature": 0.3}}, "wi... |
gauravkoradiya/T5-Finetuned-Summarization-DialogueDataset | gauravkoradiya | summarization | [
"transformers",
"pytorch",
"t5",
"text2text-generation",
"summarization",
"en",
"dataset:knkarthick/dialogsum",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | 2023-04-16T01:12:26 | 2023-04-16T01:24:14 | 151 | 1 | ---
datasets:
- knkarthick/dialogsum
language:
- en
library_name: transformers
license: apache-2.0
metrics:
- bleu
- rouge
pipeline_tag: summarization
---
| [
"SUMMARIZATION"
] | Non_BioNLP | {"datasets": ["knkarthick/dialogsum"], "language": ["en"], "library_name": "transformers", "license": "apache-2.0", "metrics": ["bleu", "rouge"], "pipeline_tag": "summarization"} | |
MaLA-LM/lucky52-bloom-7b1-no-5 | MaLA-LM | text-generation | [
"transformers",
"pytorch",
"bloom",
"text-generation",
"generation",
"question answering",
"instruction tuning",
"multilingual",
"dataset:MBZUAI/Bactrian-X",
"arxiv:2404.04850",
"license:cc-by-nc-4.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:u... | 2024-04-04T08:03:23 | 2025-04-08T17:06:33 | 14 | 0 | ---
datasets:
- MBZUAI/Bactrian-X
language:
- multilingual
library_name: transformers
license: cc-by-nc-4.0
pipeline_tag: text-generation
tags:
- generation
- question answering
- instruction tuning
---
### Model Description
This HF repository hosts instruction fine-tuned multilingual BLOOM model using the parallel ... | [
"QUESTION_ANSWERING"
] | Non_BioNLP |
### Model Description
This HF repository hosts an instruction fine-tuned multilingual BLOOM model using the parallel instruction dataset called Bactrian-X in 52 languages.
We progressively add a language during instruction fine-tuning at each time, and train 52 models in total. Then, we evaluate those models in three ... | {"datasets": ["MBZUAI/Bactrian-X"], "language": ["multilingual"], "library_name": "transformers", "license": "cc-by-nc-4.0", "pipeline_tag": "text-generation", "tags": ["generation", "question answering", "instruction tuning"]} |
RichardErkhov/jondurbin_-_airoboros-l2-13b-3.0-gguf | RichardErkhov | null | [
"gguf",
"endpoints_compatible",
"region:us"
] | 2024-07-25T05:24:45 | 2024-07-25T11:07:58 | 26 | 0 | ---
{}
---
Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
airoboros-l2-13b-3.0 - GGUF
- Model creator: https://huggingface.co/jondurbin/
- Original model: https://huggingf... | [
"QUESTION_ANSWERING",
"SUMMARIZATION"
] | Non_BioNLP | Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
airoboros-l2-13b-3.0 - GGUF
- Model creator: https://huggingface.co/jondurbin/
- Original model: https://huggingface.co/jond... | {} |
chienweichang/formatted_address | chienweichang | text2text-generation | [
"transformers",
"tensorboard",
"safetensors",
"mt5",
"text2text-generation",
"generated_from_trainer",
"dataset:cwchang/tw_address_large",
"base_model:google/mt5-small",
"base_model:finetune:google/mt5-small",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible... | 2023-12-19T03:36:12 | 2023-12-19T04:49:04 | 92 | 0 | ---
base_model: google/mt5-small
datasets:
- cwchang/tw_address_large
license: apache-2.0
metrics:
- rouge
tags:
- generated_from_trainer
model-index:
- name: formatted_address
results:
- task:
type: summarization
name: Summarization
dataset:
name: cwchang/tw_address_large
type: cwchang/... | [
"SUMMARIZATION"
] | Non_BioNLP |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# formatted_address
This model is a fine-tuned version of [google/mt5-small](https://huggingface.co/google/mt5-small) on the cwcha... | {"base_model": "google/mt5-small", "datasets": ["cwchang/tw_address_large"], "license": "apache-2.0", "metrics": ["rouge"], "tags": ["generated_from_trainer"], "model-index": [{"name": "formatted_address", "results": [{"task": {"type": "summarization", "name": "Summarization"}, "dataset": {"name": "cwchang/tw_address_l... |
am-azadi/gte-multilingual-base_Fine_Tuned_1e | am-azadi | sentence-similarity | [
"sentence-transformers",
"safetensors",
"new",
"sentence-similarity",
"feature-extraction",
"generated_from_trainer",
"dataset_size:25743",
"loss:MultipleNegativesRankingLoss",
"custom_code",
"arxiv:1908.10084",
"arxiv:1705.00652",
"base_model:Alibaba-NLP/gte-multilingual-base",
"base_model:... | 2025-02-20T17:58:05 | 2025-02-20T17:58:47 | 11 | 0 | ---
base_model: Alibaba-NLP/gte-multilingual-base
library_name: sentence-transformers
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- sentence-similarity
- feature-extraction
- generated_from_trainer
- dataset_size:25743
- loss:MultipleNegativesRankingLoss
widget:
- source_sentence: م الحين SHIA WAVES... | [
"TEXT_CLASSIFICATION"
] | Non_BioNLP |
# SentenceTransformer based on Alibaba-NLP/gte-multilingual-base
This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [Alibaba-NLP/gte-multilingual-base](https://huggingface.co/Alibaba-NLP/gte-multilingual-base). It maps sentences & paragraphs to a 768-dimensional dense vector space and can b... | {"base_model": "Alibaba-NLP/gte-multilingual-base", "library_name": "sentence-transformers", "pipeline_tag": "sentence-similarity", "tags": ["sentence-transformers", "sentence-similarity", "feature-extraction", "generated_from_trainer", "dataset_size:25743", "loss:MultipleNegativesRankingLoss"], "widget": [{"source_sen... |
rifatul123/Primary_doctor_v1 | rifatul123 | text-generation | [
"adapter-transformers",
"pytorch",
"gpt2",
"biology",
"medical",
"chemistry",
"text-generation-inference",
"text-generation",
"en",
"region:us"
] | 2023-05-05T08:35:44 | 2023-05-05T16:57:39 | 0 | 0 | ---
language:
- en
library_name: adapter-transformers
metrics:
- accuracy
pipeline_tag: text-generation
tags:
- biology
- medical
- chemistry
- text-generation-inference
---

![Scr... | [
"TEXT_CLASSIFICATION",
"QUESTION_ANSWERING",
"SUMMARIZATION"
] | BioNLP |



* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* downl... | [
"TRANSLATION"
] | Non_BioNLP |
### opus-mt-yo-fr
* source languages: yo
* target languages: fr
* OPUS readme: [yo-fr](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/yo-fr/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](http... | {"license": "apache-2.0", "tags": ["translation"]} |
gokuls/mobilebert_sa_GLUE_Experiment_logit_kd_stsb_256 | gokuls | text-classification | [
"transformers",
"pytorch",
"tensorboard",
"mobilebert",
"text-classification",
"generated_from_trainer",
"en",
"dataset:glue",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 2023-01-30T06:49:43 | 2023-01-30T06:53:58 | 138 | 0 | ---
datasets:
- glue
language:
- en
license: apache-2.0
metrics:
- spearmanr
tags:
- generated_from_trainer
model-index:
- name: mobilebert_sa_GLUE_Experiment_logit_kd_stsb_256
results:
- task:
type: text-classification
name: Text Classification
dataset:
name: GLUE STSB
type: glue
... | [
"TEXT_CLASSIFICATION"
] | Non_BioNLP |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# mobilebert_sa_GLUE_Experiment_logit_kd_stsb_256
This model is a fine-tuned version of [google/mobilebert-uncased](https://huggin... | {"datasets": ["glue"], "language": ["en"], "license": "apache-2.0", "metrics": ["spearmanr"], "tags": ["generated_from_trainer"], "model-index": [{"name": "mobilebert_sa_GLUE_Experiment_logit_kd_stsb_256", "results": [{"task": {"type": "text-classification", "name": "Text Classification"}, "dataset": {"name": "GLUE STS... |
Helsinki-NLP/opus-mt-tc-bible-big-itc-deu_eng_fra_por_spa | Helsinki-NLP | translation | [
"transformers",
"pytorch",
"safetensors",
"marian",
"text2text-generation",
"translation",
"opus-mt-tc-bible",
"acf",
"an",
"ast",
"ca",
"cbk",
"co",
"crs",
"de",
"egl",
"en",
"es",
"ext",
"fr",
"frm",
"fro",
"frp",
"fur",
"gcf",
"gl",
"ht",
"it",
"kea",
"la... | 2024-10-08T08:57:36 | 2024-10-08T08:57:47 | 116 | 0 | ---
language:
- acf
- an
- ast
- ca
- cbk
- co
- crs
- de
- egl
- en
- es
- ext
- fr
- frm
- fro
- frp
- fur
- gcf
- gl
- ht
- it
- kea
- la
- lad
- lij
- lld
- lmo
- lou
- mfe
- mo
- mwl
- nap
- oc
- osp
- pap
- pcd
- pms
- pt
- rm
- ro
- rup
- sc
- scn
- vec
- wa
library_name: transformers
license: apache-2.0
tags:
-... | [
"TRANSLATION"
] | Non_BioNLP | # opus-mt-tc-bible-big-itc-deu_eng_fra_por_spa
## Table of Contents
- [Model Details](#model-details)
- [Uses](#uses)
- [Risks, Limitations and Biases](#risks-limitations-and-biases)
- [How to Get Started With the Model](#how-to-get-started-with-the-model)
- [Training](#training)
- [Evaluation](#evaluation)
- [Citatio... | {"language": ["acf", "an", "ast", "ca", "cbk", "co", "crs", "de", "egl", "en", "es", "ext", "fr", "frm", "fro", "frp", "fur", "gcf", "gl", "ht", "it", "kea", "la", "lad", "lij", "lld", "lmo", "lou", "mfe", "mo", "mwl", "nap", "oc", "osp", "pap", "pcd", "pms", "pt", "rm", "ro", "rup", "sc", "scn", "vec", "wa"], "library... |
gokuls/BERT-tiny-Massive-intent | gokuls | text-classification | [
"transformers",
"pytorch",
"tensorboard",
"bert",
"text-classification",
"generated_from_trainer",
"dataset:massive",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 2022-09-24T14:15:30 | 2022-09-24T14:26:13 | 10 | 0 | ---
datasets:
- massive
license: apache-2.0
metrics:
- accuracy
tags:
- generated_from_trainer
model-index:
- name: BERT-tiny-Massive-intent
results:
- task:
type: text-classification
name: Text Classification
dataset:
name: massive
type: massive
config: en-US
split: train
... | [
"TEXT_CLASSIFICATION"
] | Non_BioNLP |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# BERT-tiny-Massive-intent
This model is a fine-tuned version of [google/bert_uncased_L-2_H-128_A-2](https://huggingface.co/google... | {"datasets": ["massive"], "license": "apache-2.0", "metrics": ["accuracy"], "tags": ["generated_from_trainer"], "model-index": [{"name": "BERT-tiny-Massive-intent", "results": [{"task": {"type": "text-classification", "name": "Text Classification"}, "dataset": {"name": "massive", "type": "massive", "config": "en-US", "... |
osanseviero/bert-base-uncased-copy | osanseviero | fill-mask | [
"transformers",
"pytorch",
"jax",
"rust",
"coreml",
"safetensors",
"bert",
"fill-mask",
"exbert",
"en",
"dataset:bookcorpus",
"dataset:wikipedia",
"arxiv:1810.04805",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 2023-03-30T19:29:49 | 2023-04-04T06:18:11 | 14 | 0 | ---
datasets:
- bookcorpus
- wikipedia
language: en
license: apache-2.0
tags:
- exbert
duplicated_from: bert-base-uncased
---
# BERT base model (uncased)
Pretrained model on English language using a masked language modeling (MLM) objective. It was introduced in
[this paper](https://arxiv.org/abs/1810.04805) and first... | [
"QUESTION_ANSWERING"
] | Non_BioNLP |
# BERT base model (uncased)
Pretrained model on English language using a masked language modeling (MLM) objective. It was introduced in
[this paper](https://arxiv.org/abs/1810.04805) and first released in
[this repository](https://github.com/google-research/bert). This model is uncased: it does not make a difference
... | {"datasets": ["bookcorpus", "wikipedia"], "language": "en", "license": "apache-2.0", "tags": ["exbert"], "duplicated_from": "bert-base-uncased"} |
sud977/my-awesome-setfit-model | sud977 | text-classification | [
"sentence-transformers",
"pytorch",
"mpnet",
"setfit",
"text-classification",
"arxiv:2209.11055",
"license:apache-2.0",
"region:us"
] | 2023-04-27T01:50:41 | 2023-04-27T01:53:28 | 9 | 0 | ---
license: apache-2.0
pipeline_tag: text-classification
tags:
- setfit
- sentence-transformers
- text-classification
---
# /var/folders/lm/k69sycyx5538ldsk5n0ln5000000gn/T/tmp_un7plj_/killshot977/my-awesome-setfit-model
This is a [SetFit model](https://github.com/huggingface/setfit) that can be used for text classi... | [
"TEXT_CLASSIFICATION"
] | Non_BioNLP |
# /var/folders/lm/k69sycyx5538ldsk5n0ln5000000gn/T/tmp_un7plj_/killshot977/my-awesome-setfit-model
This is a [SetFit model](https://github.com/huggingface/setfit) that can be used for text classification. The model has been trained using an efficient few-shot learning technique that involves:
1. Fine-tuning a [Sente... | {"license": "apache-2.0", "pipeline_tag": "text-classification", "tags": ["setfit", "sentence-transformers", "text-classification"]} |
YtBig/improve-a-v1 | YtBig | summarization | [
"transformers",
"pytorch",
"bart",
"text2text-generation",
"autotrain",
"summarization",
"en",
"co2_eq_emissions",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 2022-10-25T14:10:54 | 2022-12-08T09:13:15 | 114 | 0 | ---
language:
- en
tags:
- autotrain
- summarization
widget:
- text: I love AutoTrain 🤗
co2_eq_emissions:
emissions: 0.9899872350262614
---
# Model Trained Using AutoTrain
- Problem type: Summarization
- Model ID: 1822063032
- CO2 Emissions (in grams): 0.9900
## Validation Metrics
- Loss: 0.347
- Rouge1: 66.429
... | [
"SUMMARIZATION"
] | Non_BioNLP |
# Model Trained Using AutoTrain
- Problem type: Summarization
- Model ID: 1822063032
- CO2 Emissions (in grams): 0.9900
## Validation Metrics
- Loss: 0.347
- Rouge1: 66.429
- Rouge2: 29.419
- RougeL: 66.188
- RougeLsum: 66.183
- Gen Len: 11.256
## Usage
You can use cURL to access this model:
```
$ curl -X POST -... | {"language": ["en"], "tags": ["autotrain", "summarization"], "widget": [{"text": "I love AutoTrain 🤗"}], "co2_eq_emissions": {"emissions": 0.9899872350262614}} |
gokuls/bert_uncased_L-2_H-768_A-12_massive | gokuls | text-classification | [
"transformers",
"pytorch",
"bert",
"text-classification",
"generated_from_trainer",
"dataset:massive",
"base_model:google/bert_uncased_L-2_H-768_A-12",
"base_model:finetune:google/bert_uncased_L-2_H-768_A-12",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",... | 2023-10-06T17:32:45 | 2023-10-06T17:35:40 | 10 | 0 | ---
base_model: google/bert_uncased_L-2_H-768_A-12
datasets:
- massive
license: apache-2.0
metrics:
- accuracy
tags:
- generated_from_trainer
model-index:
- name: bert_uncased_L-2_H-768_A-12_massive
results:
- task:
type: text-classification
name: Text Classification
dataset:
name: massive
... | [
"TEXT_CLASSIFICATION"
] | Non_BioNLP |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bert_uncased_L-2_H-768_A-12_massive
This model is a fine-tuned version of [google/bert_uncased_L-2_H-768_A-12](https://huggingfa... | {"base_model": "google/bert_uncased_L-2_H-768_A-12", "datasets": ["massive"], "license": "apache-2.0", "metrics": ["accuracy"], "tags": ["generated_from_trainer"], "model-index": [{"name": "bert_uncased_L-2_H-768_A-12_massive", "results": [{"task": {"type": "text-classification", "name": "Text Classification"}, "datase... |
nielsr/coref-bert-large | nielsr | null | [
"transformers",
"pytorch",
"safetensors",
"exbert",
"en",
"dataset:wikipedia",
"dataset:quoref",
"dataset:docred",
"dataset:fever",
"dataset:gap",
"dataset:winograd_wsc",
"dataset:winogender",
"dataset:nyu-mll/glue",
"arxiv:2004.06870",
"license:apache-2.0",
"endpoints_compatible",
"... | 2022-03-02T23:29:05 | 2024-12-22T10:40:56 | 38 | 1 | ---
datasets:
- wikipedia
- quoref
- docred
- fever
- gap
- winograd_wsc
- winogender
- nyu-mll/glue
language: en
license: apache-2.0
tags:
- exbert
---
# CorefBERT large model
Pretrained model on English language using Masked Language Modeling (MLM) and Mention Reference Prediction (MRP) objectives. It was introduc... | [
"COREFERENCE_RESOLUTION"
] | Non_BioNLP |
# CorefBERT large model
Pretrained model on English language using Masked Language Modeling (MLM) and Mention Reference Prediction (MRP) objectives. It was introduced in
[this paper](https://arxiv.org/abs/2004.06870) and first released in
[this repository](https://github.com/thunlp/CorefBERT).
Disclaimer: The team... | {"datasets": ["wikipedia", "quoref", "docred", "fever", "gap", "winograd_wsc", "winogender", "nyu-mll/glue"], "language": "en", "license": "apache-2.0", "tags": ["exbert"]} |
jjae/kobart-summarization-diary | jjae | text2text-generation | [
"transformers",
"safetensors",
"bart",
"text2text-generation",
"kobart-summarization-diary",
"generated_from_trainer",
"base_model:gogamza/kobart-summarization",
"base_model:finetune:gogamza/kobart-summarization",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 2024-04-03T15:54:04 | 2024-04-03T16:46:18 | 14 | 0 | ---
base_model: gogamza/kobart-summarization
license: mit
tags:
- kobart-summarization-diary
- generated_from_trainer
model-index:
- name: summary2
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete ... | [
"SUMMARIZATION"
] | Non_BioNLP |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# summary2
This model is a fine-tuned version of [gogamza/kobart-summarization](https://huggingface.co/gogamza/kobart-summarizatio... | {"base_model": "gogamza/kobart-summarization", "license": "mit", "tags": ["kobart-summarization-diary", "generated_from_trainer"], "model-index": [{"name": "summary2", "results": []}]} |
tomaarsen/distilroberta-base-nli-v2 | tomaarsen | sentence-similarity | [
"sentence-transformers",
"safetensors",
"roberta",
"sentence-similarity",
"feature-extraction",
"loss:MultipleNegativesRankingLoss",
"en",
"arxiv:1908.10084",
"arxiv:1705.00652",
"base_model:distilbert/distilroberta-base",
"base_model:finetune:distilbert/distilroberta-base",
"model-index",
"... | 2024-05-02T07:49:41 | 2024-05-02T07:50:07 | 9 | 0 | ---
base_model: distilbert/distilroberta-base
language:
- en
library_name: sentence-transformers
metrics:
- pearson_cosine
- spearman_cosine
- pearson_manhattan
- spearman_manhattan
- pearson_euclidean
- spearman_euclidean
- pearson_dot
- spearman_dot
- pearson_max
- spearman_max
pipeline_tag: sentence-... | [
"TEXT_CLASSIFICATION",
"SEMANTIC_SIMILARITY"
] | Non_BioNLP |
# SentenceTransformer based on distilbert/distilroberta-base
This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [distilbert/distilroberta-base](https://huggingface.co/distilbert/distilroberta-base) on the [sentence-transformers/all-nli](https://huggingface.co/datasets/sentence-transforme... | {"base_model": "distilbert/distilroberta-base", "language": ["en"], "library_name": "sentence-transformers", "metrics": ["pearson_cosine", "spearman_cosine", "pearson_manhattan", "spearman_manhattan", "pearson_euclidean", "spearman_euclidean", "pearson_dot", "spearman_dot", "pearson_max", "spearman_max"], "pipeline_tag... |
akshitha-k/all-MiniLM-L6-v2-stsb | akshitha-k | sentence-similarity | [
"sentence-transformers",
"safetensors",
"bert",
"sentence-similarity",
"feature-extraction",
"generated_from_trainer",
"dataset_size:5749",
"loss:CosineSimilarityLoss",
"arxiv:1908.10084",
"base_model:sentence-transformers/all-MiniLM-L6-v2",
"base_model:finetune:sentence-transformers/all-MiniLM-... | 2024-11-10T21:14:22 | 2024-11-10T21:14:29 | 7 | 0 | ---
base_model: sentence-transformers/all-MiniLM-L6-v2
library_name: sentence-transformers
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- sentence-similarity
- feature-extraction
- generated_from_trainer
- dataset_size:5749
- loss:CosineSimilarityLoss
widget:
- source_sentence: A girl is styling her ... | [
"TEXT_CLASSIFICATION"
] | Non_BioNLP |
# SentenceTransformer based on sentence-transformers/all-MiniLM-L6-v2
This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [sentence-transformers/all-MiniLM-L6-v2](https://huggingface.co/sentence-transformers/all-MiniLM-L6-v2). It maps sentences & paragraphs to a 384-dimensional dense vector ... | {"base_model": "sentence-transformers/all-MiniLM-L6-v2", "library_name": "sentence-transformers", "pipeline_tag": "sentence-similarity", "tags": ["sentence-transformers", "sentence-similarity", "feature-extraction", "generated_from_trainer", "dataset_size:5749", "loss:CosineSimilarityLoss"], "widget": [{"source_sentenc... |
Aryanne/Bling-Sheared-Llama-1.3B-0.1-gguf | Aryanne | null | [
"gguf",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | 2023-10-24T18:03:59 | 2023-10-24T18:52:51 | 159 | 4 | ---
license: apache-2.0
---
Some GGUF v2 quantizations of the model [llmware/bling-sheared-llama-1.3b-0.1](https://huggingface.co/llmware/bling-sheared-llama-1.3b-0.1)
bling-sheared-llama-1.3b-0.1 is part of the BLING ("Best Little Instruction-following No-GPU-required") model series, instruct trained on top of a... | [
"SUMMARIZATION"
] | Non_BioNLP |
Some GGUF v2 quantizations of the model [llmware/bling-sheared-llama-1.3b-0.1](https://huggingface.co/llmware/bling-sheared-llama-1.3b-0.1)
bling-sheared-llama-1.3b-0.1 is part of the BLING ("Best Little Instruction-following No-GPU-required") model series, instruct trained on top of a Sheared-LLaMA-1.3B base mod... | {"license": "apache-2.0"} |
aehrm/redewiedergabe-freeindirect | aehrm | token-classification | [
"flair",
"pytorch",
"token-classification",
"sequence-tagger-model",
"de",
"region:us"
] | 2023-05-16T21:57:29 | 2023-08-23T14:11:55 | 9 | 0 | ---
language: de
tags:
- flair
- token-classification
- sequence-tagger-model
---
# REDEWIEDERGABE Tagger: free indirect STWR
This model is part of an ensemble of binary taggers that recognize German speech, thought and writing representation, that is being used in [LLpro](https://github.com/cophi-wue/LLpro). They can... | [
"TRANSLATION"
] | Non_BioNLP | # REDEWIEDERGABE Tagger: free indirect STWR
This model is part of an ensemble of binary taggers that recognize German speech, thought and writing representation, that is being used in [LLpro](https://github.com/cophi-wue/LLpro). They can be used to automatically detect and annotate the following 4 types of speech, tho... | {"language": "de", "tags": ["flair", "token-classification", "sequence-tagger-model"]} |
Helsinki-NLP/opus-mt-sv-umb | Helsinki-NLP | translation | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"sv",
"umb",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 2022-03-02T23:29:04 | 2023-08-16T12:06:25 | 56 | 0 | ---
license: apache-2.0
tags:
- translation
---
### opus-mt-sv-umb
* source languages: sv
* target languages: umb
* OPUS readme: [sv-umb](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/sv-umb/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* d... | [
"TRANSLATION"
] | Non_BioNLP |
### opus-mt-sv-umb
* source languages: sv
* target languages: umb
* OPUS readme: [sv-umb](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/sv-umb/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](... | {"license": "apache-2.0", "tags": ["translation"]} |
midas/gupshup_e2e_bart | midas | text2text-generation | [
"transformers",
"pytorch",
"bart",
"text2text-generation",
"arxiv:1910.04073",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 2022-03-02T23:29:05 | 2021-11-14T02:09:24 | 130 | 0 | ---
{}
---
# Gupshup
GupShup: Summarizing Open-Domain Code-Switched Conversations EMNLP 2021
Paper: [https://aclanthology.org/2021.emnlp-main.499.pdf](https://aclanthology.org/2021.emnlp-main.499.pdf)
Github: [https://github.com/midas-research/gupshup](https://github.com/midas-research/gupshup)
### Dataset
Please requ... | [
"SUMMARIZATION"
] | Non_BioNLP | # Gupshup
GupShup: Summarizing Open-Domain Code-Switched Conversations EMNLP 2021
Paper: [https://aclanthology.org/2021.emnlp-main.499.pdf](https://aclanthology.org/2021.emnlp-main.499.pdf)
Github: [https://github.com/midas-research/gupshup](https://github.com/midas-research/gupshup)
### Dataset
Please request for the... | {} |
TransferGraph/SetFit_distilbert-base-uncased__sst2__train-16-0-finetuned-lora-tweet_eval_irony | TransferGraph | text-classification | [
"peft",
"safetensors",
"parquet",
"text-classification",
"dataset:tweet_eval",
"base_model:SetFit/distilbert-base-uncased__sst2__train-16-0",
"base_model:adapter:SetFit/distilbert-base-uncased__sst2__train-16-0",
"license:apache-2.0",
"model-index",
"region:us"
] | 2024-02-27T17:01:21 | 2024-02-27T17:01:26 | 0 | 0 | ---
base_model: SetFit/distilbert-base-uncased__sst2__train-16-0
datasets:
- tweet_eval
library_name: peft
license: apache-2.0
metrics:
- accuracy
tags:
- parquet
- text-classification
model-index:
- name: SetFit_distilbert-base-uncased__sst2__train-16-0-finetuned-lora-tweet_eval_irony
results:
- task:
type: ... | [
"TEXT_CLASSIFICATION"
] | Non_BioNLP |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# SetFit_distilbert-base-uncased__sst2__train-16-0-finetuned-lora-tweet_eval_irony
This model is a fine-tuned version of [SetFit/d... | {"base_model": "SetFit/distilbert-base-uncased__sst2__train-16-0", "datasets": ["tweet_eval"], "library_name": "peft", "license": "apache-2.0", "metrics": ["accuracy"], "tags": ["parquet", "text-classification"], "model-index": [{"name": "SetFit_distilbert-base-uncased__sst2__train-16-0-finetuned-lora-tweet_eval_irony"... |
tartuNLP/llammas-prelim | tartuNLP | text-generation | [
"transformers",
"pytorch",
"safetensors",
"llama",
"text-generation",
"et",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | 2023-11-03T13:14:54 | 2023-11-14T12:17:40 | 9 | 1 | ---
language:
- et
widget:
- text: 'Mida sa tead Juhan Liivi kohta? Vastus:'
---
Llama-2-7B finetuned in three stages:
1. 1B tokens of CulturaX (75% Estonian, 25% English)
2. 1M English->Estonian sentence-pairs from CCMatrix (500000), WikiMatrix (400000), Europarl (50000), and OpenSubtitles (50000) as Alpaca-style tra... | [
"TRANSLATION"
] | Non_BioNLP |
Llama-2-7B finetuned in three stages:
1. 1B tokens of CulturaX (75% Estonian, 25% English)
2. 1M English->Estonian sentence-pairs from CCMatrix (500000), WikiMatrix (400000), Europarl (50000), and OpenSubtitles (50000) as Alpaca-style translation instructions
3. Alpaca-cleaned and Alpaca-est (both ~50000 instructions)... | {"language": ["et"], "widget": [{"text": "Mida sa tead Juhan Liivi kohta? Vastus:"}]} |
rambodazimi/distilroberta-base-finetuned-LoRA-MRPC | rambodazimi | null | [
"safetensors",
"generated_from_trainer",
"dataset:glue",
"license:apache-2.0",
"model-index",
"region:us"
] | 2024-09-06T17:14:48 | 2024-09-06T17:16:42 | 0 | 0 | ---
datasets:
- glue
license: apache-2.0
metrics:
- accuracy
- f1
tags:
- generated_from_trainer
model-index:
- name: distilroberta-base-finetuned-LoRA-MRPC
results:
- task:
type: text-classification
name: Text Classification
dataset:
name: glue
type: glue
args: mrpc
metrics:
... | [
"TEXT_CLASSIFICATION"
] | Non_BioNLP |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilroberta-base-finetuned-lora-mrpc
This model is a fine-tuned version of [distilroberta-base](https://huggingface.co/distilb... | {"datasets": ["glue"], "license": "apache-2.0", "metrics": ["accuracy", "f1"], "tags": ["generated_from_trainer"], "model-index": [{"name": "distilroberta-base-finetuned-LoRA-MRPC", "results": [{"task": {"type": "text-classification", "name": "Text Classification"}, "dataset": {"name": "glue", "type": "glue", "args": "... |
marefa-nlp/marefa-ner | marefa-nlp | token-classification | [
"transformers",
"pytorch",
"xlm-roberta",
"token-classification",
"ar",
"dataset:Marefa-NER",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 2022-03-02T23:29:05 | 2021-12-04T05:21:57 | 3,785 | 23 | ---
datasets:
- Marefa-NER
language: ar
widget:
- text: في استاد القاهرة، بدأ حفل افتتاح بطولة كأس الأمم الأفريقية بحضور رئيس الجمهورية
و رئيس الاتحاد الدولي لكرة القدم
---
# Tebyan تبيـان
## Marefa Arabic Named Entity Recognition Model
## نموذج المعرفة لتصنيف أجزاء النص
<p align="center">
<img src=... | [
"NAMED_ENTITY_RECOGNITION"
] | Non_BioNLP |
# Tebyan تبيـان
## Marefa Arabic Named Entity Recognition Model
## نموذج المعرفة لتصنيف أجزاء النص
<p align="center">
<img src="https://huggingface.co/marefa-nlp/marefa-ner/resolve/main/assets/marefa-tebyan-banner.png" alt="Marefa Arabic NER Model" width="600"/>
</p>
---------
**Version**: 1.3
**Last Up... | {"datasets": ["Marefa-NER"], "language": "ar", "widget": [{"text": "في استاد القاهرة، بدأ حفل افتتاح بطولة كأس الأمم الأفريقية بحضور رئيس الجمهورية و رئيس الاتحاد الدولي لكرة القدم"}]} |
learn3r/longt5_xl_sfd_bp_20 | learn3r | text2text-generation | [
"transformers",
"pytorch",
"longt5",
"text2text-generation",
"generated_from_trainer",
"dataset:learn3r/summ_screen_fd_bp",
"base_model:google/long-t5-tglobal-xl",
"base_model:finetune:google/long-t5-tglobal-xl",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatibl... | 2023-11-03T01:15:57 | 2023-11-04T06:54:50 | 18 | 0 | ---
base_model: google/long-t5-tglobal-xl
datasets:
- learn3r/summ_screen_fd_bp
license: apache-2.0
metrics:
- rouge
tags:
- generated_from_trainer
model-index:
- name: longt5_xl_sfd_bp_20
results:
- task:
type: summarization
name: Summarization
dataset:
name: learn3r/summ_screen_fd_bp
t... | [
"SUMMARIZATION"
] | Non_BioNLP |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# longt5_xl_sfd_bp_20
This model is a fine-tuned version of [google/long-t5-tglobal-xl](https://huggingface.co/google/long-t5-tglo... | {"base_model": "google/long-t5-tglobal-xl", "datasets": ["learn3r/summ_screen_fd_bp"], "license": "apache-2.0", "metrics": ["rouge"], "tags": ["generated_from_trainer"], "model-index": [{"name": "longt5_xl_sfd_bp_20", "results": [{"task": {"type": "summarization", "name": "Summarization"}, "dataset": {"name": "learn3r/... |
Yanis23/sparql-translation | Yanis23 | translation | [
"transformers",
"pytorch",
"tensorboard",
"bart",
"text2text-generation",
"translation",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 2023-04-24T13:45:18 | 2023-04-24T19:09:42 | 28 | 0 | ---
license: apache-2.0
tags:
- translation
- generated_from_trainer
model-index:
- name: sparql-translation
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# sp... | [
"TRANSLATION"
] | Non_BioNLP |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# sparql-translation
This model is a fine-tuned version of [facebook/bart-base](https://huggingface.co/facebook/bart-base) on the ... | {"license": "apache-2.0", "tags": ["translation", "generated_from_trainer"], "model-index": [{"name": "sparql-translation", "results": []}]} |
Hoax0930/marian-finetuned-kftt_kde4-en-to-ja | Hoax0930 | translation | [
"transformers",
"pytorch",
"tensorboard",
"marian",
"text2text-generation",
"translation",
"generated_from_trainer",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 2022-10-04T07:14:23 | 2022-10-04T08:25:10 | 98 | 0 | ---
license: apache-2.0
metrics:
- bleu
tags:
- translation
- generated_from_trainer
model-index:
- name: marian-finetuned-kftt_kde4-en-to-ja
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, th... | [
"TRANSLATION"
] | Non_BioNLP |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# marian-finetuned-kftt_kde4-en-to-ja
This model is a fine-tuned version of [Hoax0930/kyoto_marian_mod_2_2_1](https://huggingface.... | {"license": "apache-2.0", "metrics": ["bleu"], "tags": ["translation", "generated_from_trainer"], "model-index": [{"name": "marian-finetuned-kftt_kde4-en-to-ja", "results": []}]} |
gokuls/hBERTv2_new_pretrain_w_init_48_ver2_stsb | gokuls | text-classification | [
"transformers",
"pytorch",
"hybridbert",
"text-classification",
"generated_from_trainer",
"en",
"dataset:glue",
"base_model:gokuls/bert_12_layer_model_v2_complete_training_new_wt_init_48",
"base_model:finetune:gokuls/bert_12_layer_model_v2_complete_training_new_wt_init_48",
"model-index",
"autot... | 2023-10-18T06:42:50 | 2023-10-18T06:52:03 | 36 | 0 | ---
base_model: gokuls/bert_12_layer_model_v2_complete_training_new_wt_init_48
datasets:
- glue
language:
- en
metrics:
- spearmanr
tags:
- generated_from_trainer
model-index:
- name: hBERTv2_new_pretrain_w_init_48_ver2_stsb
results:
- task:
type: text-classification
name: Text Classification
datase... | [
"TEXT_CLASSIFICATION"
] | Non_BioNLP |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# hBERTv2_new_pretrain_w_init_48_ver2_stsb
This model is a fine-tuned version of [gokuls/bert_12_layer_model_v2_complete_training_... | {"base_model": "gokuls/bert_12_layer_model_v2_complete_training_new_wt_init_48", "datasets": ["glue"], "language": ["en"], "metrics": ["spearmanr"], "tags": ["generated_from_trainer"], "model-index": [{"name": "hBERTv2_new_pretrain_w_init_48_ver2_stsb", "results": [{"task": {"type": "text-classification", "name": "Text... |
rootacess/distilbert-base-uncased-distilled-clinc | rootacess | text-classification | [
"transformers",
"pytorch",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:clinc_oos",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 2023-03-17T06:40:02 | 2023-03-17T06:51:16 | 29 | 0 | ---
datasets:
- clinc_oos
license: apache-2.0
metrics:
- accuracy
tags:
- generated_from_trainer
model-index:
- name: distilbert-base-uncased-distilled-clinc
results:
- task:
type: text-classification
name: Text Classification
dataset:
name: clinc_oos
type: clinc_oos
config: plus
... | [
"TEXT_CLASSIFICATION"
] | Non_BioNLP |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-distilled-clinc
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/d... | {"datasets": ["clinc_oos"], "license": "apache-2.0", "metrics": ["accuracy"], "tags": ["generated_from_trainer"], "model-index": [{"name": "distilbert-base-uncased-distilled-clinc", "results": [{"task": {"type": "text-classification", "name": "Text Classification"}, "dataset": {"name": "clinc_oos", "type": "clinc_oos",... |
research-dump/bge-base-en-v1.5_wikipedia_r_masked_wikipedia_r_masked | research-dump | text-classification | [
"setfit",
"safetensors",
"bert",
"sentence-transformers",
"text-classification",
"generated_from_setfit_trainer",
"arxiv:2209.11055",
"base_model:BAAI/bge-base-en-v1.5",
"base_model:finetune:BAAI/bge-base-en-v1.5",
"region:us"
] | 2025-02-07T05:05:00 | 2025-02-07T05:05:17 | 9 | 0 | ---
base_model: BAAI/bge-base-en-v1.5
library_name: setfit
metrics:
- accuracy
pipeline_tag: text-classification
tags:
- setfit
- sentence-transformers
- text-classification
- generated_from_setfit_trainer
widget:
- text: 'St. Timothy High School (Cochrane): Luna <3 (She/Her) ( talk ) 04:19, 15
July 2023 (UTC) This... | [
"TEXT_CLASSIFICATION"
] | Non_BioNLP |
# SetFit with BAAI/bge-base-en-v1.5
This is a [SetFit](https://github.com/huggingface/setfit) model that can be used for Text Classification. This SetFit model uses [BAAI/bge-base-en-v1.5](https://huggingface.co/BAAI/bge-base-en-v1.5) as the Sentence Transformer embedding model. A [LogisticRegression](https://scikit-... | {"base_model": "BAAI/bge-base-en-v1.5", "library_name": "setfit", "metrics": ["accuracy"], "pipeline_tag": "text-classification", "tags": ["setfit", "sentence-transformers", "text-classification", "generated_from_setfit_trainer"], "widget": [{"text": "St. Timothy High School (Cochrane): Luna <3 (She/Her) ( talk ) 04:19... |
Areeb123/En-Fr_Translation_Model | Areeb123 | translation | [
"transformers",
"tensorboard",
"safetensors",
"marian",
"text2text-generation",
"translation",
"generated_from_trainer",
"en",
"fr",
"dataset:kde4",
"base_model:Helsinki-NLP/opus-mt-en-fr",
"base_model:finetune:Helsinki-NLP/opus-mt-en-fr",
"license:apache-2.0",
"model-index",
"autotrain_... | 2023-11-26T12:47:28 | 2023-11-27T12:58:18 | 38 | 0 | ---
base_model: Helsinki-NLP/opus-mt-en-fr
datasets:
- kde4
language:
- en
- fr
license: apache-2.0
metrics:
- bleu
tags:
- translation
- generated_from_trainer
model-index:
- name: En-Fr_Translation_Model
results:
- task:
type: text2text-generation
name: Sequence-to-sequence Language Modeling
datas... | [
"TRANSLATION"
] | Non_BioNLP |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# En-Fr_Translation_Model
This model is a fine-tuned version of [Helsinki-NLP/opus-mt-en-fr](https://huggingface.co/Helsinki-NLP/o... | {"base_model": "Helsinki-NLP/opus-mt-en-fr", "datasets": ["kde4"], "language": ["en", "fr"], "license": "apache-2.0", "metrics": ["bleu"], "tags": ["translation", "generated_from_trainer"], "model-index": [{"name": "En-Fr_Translation_Model", "results": [{"task": {"type": "text2text-generation", "name": "Sequence-to-seq... |
Bijayab/a100_80_nepberta | Bijayab | text2text-generation | [
"transformers",
"safetensors",
"bart",
"text2text-generation",
"summary",
"nepali",
"BART",
"NLP",
"ne",
"dataset:csebuetnlp/xlsum",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 2024-03-22T18:04:32 | 2024-06-15T15:09:00 | 28 | 0 | ---
datasets:
- csebuetnlp/xlsum
language:
- ne
library_name: transformers
license: mit
metrics:
- rouge
pipeline_tag: text2text-generation
tags:
- summary
- nepali
- BART
- NLP
---
from transformers import pipeline
input_text = """सांसदको लोगो छोडेर सिलाम साक्मामा मात्र भेटिएपछि उनलाई जिज्ञाशा राखियो ।उनले किराती परम... | [
"SUMMARIZATION"
] | Non_BioNLP | from transformers import pipeline
input_text = """सांसदको लोगो छोडेर सिलाम साक्मामा मात्र भेटिएपछि उनलाई जिज्ञाशा राखियो ।उनले किराती परम्परामा सिलाम साक्माको महत्त्वबारे उल्लेख गर्दै 'कान्तिपुर' को स्मरणका लागि 'सिलाम साक्मा' लगाएको तस्बिर खिच्न आग्रह गरे । स्मरणका लागि भन्दै खिचाएको त्यही तस्बिर जस्तै उनले राजनीतिमा... | {"datasets": ["csebuetnlp/xlsum"], "language": ["ne"], "library_name": "transformers", "license": "mit", "metrics": ["rouge"], "pipeline_tag": "text2text-generation", "tags": ["summary", "nepali", "BART", "NLP"]} |
Helsinki-NLP/opus-mt-fr-sk | Helsinki-NLP | translation | [
"transformers",
"pytorch",
"tf",
"marian",
"text2text-generation",
"translation",
"fr",
"sk",
"license:apache-2.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 2022-03-02T23:29:04 | 2023-08-16T11:37:13 | 41 | 0 | ---
license: apache-2.0
tags:
- translation
---
### opus-mt-fr-sk
* source languages: fr
* target languages: sk
* OPUS readme: [fr-sk](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/fr-sk/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* downl... | [
"TRANSLATION"
] | Non_BioNLP |
### opus-mt-fr-sk
* source languages: fr
* target languages: sk
* OPUS readme: [fr-sk](https://github.com/Helsinki-NLP/OPUS-MT-train/blob/master/models/fr-sk/README.md)
* dataset: opus
* model: transformer-align
* pre-processing: normalization + SentencePiece
* download original weights: [opus-2020-01-16.zip](http... | {"license": "apache-2.0", "tags": ["translation"]} |
zyxzyx/autotrain-sum-1042335811 | zyxzyx | text2text-generation | [
"transformers",
"pytorch",
"mt5",
"text2text-generation",
"autotrain",
"zh",
"dataset:zyxzyx/autotrain-data-sum",
"co2_eq_emissions",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 2022-06-27T01:25:28 | 2022-06-27T05:15:17 | 96 | 0 | ---
datasets:
- zyxzyx/autotrain-data-sum
language: zh
tags:
- a
- u
- t
- o
- r
- i
- n
widget:
- text: I love AutoTrain 🤗
co2_eq_emissions: 426.15271368095927
---
# Model Trained Using AutoTrain
- Problem type: Summarization
- Model ID: 1042335811
- CO2 Emissions (in grams): 426.15271368095927
## Validation Metri... | [
"SUMMARIZATION"
] | Non_BioNLP |
# Model Trained Using AutoTrain
- Problem type: Summarization
- Model ID: 1042335811
- CO2 Emissions (in grams): 426.15271368095927
## Validation Metrics
- Loss: 1.7748287916183472
- Rouge1: 0.536
- Rouge2: 0.0
- RougeL: 0.536
- RougeLsum: 0.536
- Gen Len: 10.9089
## Usage
You can use cURL to access this model:
... | {"datasets": ["zyxzyx/autotrain-data-sum"], "language": "zh", "tags": ["a", "u", "t", "o", "r", "i", "n"], "widget": [{"text": "I love AutoTrain 🤗"}], "co2_eq_emissions": 426.15271368095927} |
seongil-dn/bge-m3-kor-retrieval-451949-bs64-finance-50 | seongil-dn | sentence-similarity | [
"sentence-transformers",
"safetensors",
"xlm-roberta",
"sentence-similarity",
"feature-extraction",
"generated_from_trainer",
"dataset_size:451949",
"loss:CachedMultipleNegativesRankingLoss",
"arxiv:1908.10084",
"arxiv:2101.06983",
"base_model:BAAI/bge-m3",
"base_model:finetune:BAAI/bge-m3",
... | 2024-12-14T09:16:02 | 2024-12-14T09:17:18 | 82 | 0 | ---
base_model: BAAI/bge-m3
library_name: sentence-transformers
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- sentence-similarity
- feature-extraction
- generated_from_trainer
- dataset_size:451949
- loss:CachedMultipleNegativesRankingLoss
widget:
- source_sentence: 사설묘지의 관리방법에 대한 27.9퍼센트의 견해에 근거하면 ... | [
"TEXT_CLASSIFICATION"
] | Non_BioNLP |
# SentenceTransformer based on BAAI/bge-m3
This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [BAAI/bge-m3](https://huggingface.co/BAAI/bge-m3). It maps sentences & paragraphs to a 1024-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphra... | {"base_model": "BAAI/bge-m3", "library_name": "sentence-transformers", "pipeline_tag": "sentence-similarity", "tags": ["sentence-transformers", "sentence-similarity", "feature-extraction", "generated_from_trainer", "dataset_size:451949", "loss:CachedMultipleNegativesRankingLoss"], "widget": [{"source_sentence": "사설묘지의 ... |
Xenova/opus-mt-es-it | Xenova | translation | [
"transformers.js",
"onnx",
"marian",
"text2text-generation",
"translation",
"base_model:Helsinki-NLP/opus-mt-es-it",
"base_model:quantized:Helsinki-NLP/opus-mt-es-it",
"region:us"
] | 2023-09-05T23:12:22 | 2024-10-08T13:42:05 | 57 | 1 | ---
base_model: Helsinki-NLP/opus-mt-es-it
library_name: transformers.js
pipeline_tag: translation
---
https://huggingface.co/Helsinki-NLP/opus-mt-es-it with ONNX weights to be compatible with Transformers.js.
Note: Having a separate repo for ONNX weights is intended to be a temporary solution until WebML gains more ... | [
"TRANSLATION"
] | Non_BioNLP | ERROR: type should be string, got "\nhttps://huggingface.co/Helsinki-NLP/opus-mt-es-it with ONNX weights to be compatible with Transformers.js.\n\nNote: Having a separate repo for ONNX weights is intended to be a temporary solution until WebML gains more traction. If you would like to make your models web-ready, we recommend converting to ONNX using [🤗 Optimum](https://huggingface.co/docs/optimum/index) and structuring your repo like this one (with ONNX weights located in a subfolder named `onnx`)." | {"base_model": "Helsinki-NLP/opus-mt-es-it", "library_name": "transformers.js", "pipeline_tag": "translation"} |
TehranNLP-org/bert-large-mnli | TehranNLP-org | text-classification | [
"transformers",
"pytorch",
"bert",
"text-classification",
"generated_from_trainer",
"en",
"dataset:mnli",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 2023-01-19T22:06:38 | 2023-01-19T22:22:28 | 116 | 0 | ---
datasets:
- mnli
language:
- en
license: apache-2.0
metrics:
- accuracy
tags:
- generated_from_trainer
model-index:
- name: '42'
results:
- task:
type: text-classification
name: Text Classification
dataset:
name: MNLI
type: glue
args: mnli
metrics:
- type: accuracy
... | [
"TEXT_CLASSIFICATION"
] | Non_BioNLP |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# 42
This model is a fine-tuned version of [bert-large-uncased](https://huggingface.co/bert-large-uncased) on the MNLI dataset.
It... | {"datasets": ["mnli"], "language": ["en"], "license": "apache-2.0", "metrics": ["accuracy"], "tags": ["generated_from_trainer"], "model-index": [{"name": "42", "results": [{"task": {"type": "text-classification", "name": "Text Classification"}, "dataset": {"name": "MNLI", "type": "glue", "args": "mnli"}, "metrics": [{"... |
HARDYCHEN/text_summarization_finetuned | HARDYCHEN | text2text-generation | [
"transformers",
"tensorboard",
"safetensors",
"t5",
"text2text-generation",
"generated_from_trainer",
"base_model:Falconsai/text_summarization",
"base_model:finetune:Falconsai/text_summarization",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible"... | 2024-04-25T03:40:12 | 2024-04-25T03:40:37 | 5 | 0 | ---
base_model: Falconsai/text_summarization
license: apache-2.0
metrics:
- rouge
tags:
- generated_from_trainer
model-index:
- name: text_summarization_finetuned
results: []
---
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofrea... | [
"SUMMARIZATION"
] | Non_BioNLP |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# text_summarization_finetuned
This model is a fine-tuned version of [Falconsai/text_summarization](https://huggingface.co/Falcons... | {"base_model": "Falconsai/text_summarization", "license": "apache-2.0", "metrics": ["rouge"], "tags": ["generated_from_trainer"], "model-index": [{"name": "text_summarization_finetuned", "results": []}]} |
SEBIS/legal_t5_small_multitask_cs_de | SEBIS | text2text-generation | [
"transformers",
"pytorch",
"jax",
"t5",
"text2text-generation",
"translation Cszech Deustch model",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | 2022-03-02T23:29:04 | 2021-06-23T10:50:44 | 175 | 0 | ---
datasets:
- dcep europarl jrc-acquis
language: Cszech Deustch
tags:
- translation Cszech Deustch model
widget:
- text: Postavení žen v ozbrojených konfliktech a jejich úloha při obnově zemí po
ukončení konfliktu a v demokratickém procesu v těchto zemích
---
# legal_t5_small_multitask_cs_de model
Model on tra... | [
"TRANSLATION"
] | Non_BioNLP |
# legal_t5_small_multitask_cs_de model
Model for translating legal text from Czech to German. It was first released in
[this repository](https://github.com/agemagician/LegalTrans). The model is trained in parallel on three parallel corpora covering 42 language pairs
from jrc-acquis, europarl and dcep along with the unsu... | {"datasets": ["dcep europarl jrc-acquis"], "language": "Cszech Deustch", "tags": ["translation Cszech Deustch model"], "widget": [{"text": "Postavení žen v ozbrojených konfliktech a jejich úloha při obnově zemí po ukončení konfliktu a v demokratickém procesu v těchto zemích"}]} |
RichardErkhov/bertin-project_-_bertin-gpt-j-6B-alpaca-4bits | RichardErkhov | null | [
"safetensors",
"gptj",
"4-bit",
"bitsandbytes",
"region:us"
] | 2024-11-12T15:43:09 | 2024-11-12T15:45:23 | 5 | 0 | ---
{}
---
Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
bertin-gpt-j-6B-alpaca - bnb 4bits
- Model creator: https://huggingface.co/bertin-project/
- Original model: http... | [
"TRANSLATION"
] | Non_BioNLP | Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
bertin-gpt-j-6B-alpaca - bnb 4bits
- Model creator: https://huggingface.co/bertin-project/
- Original model: https://hugging... | {} |
RichardErkhov/rawsh_-_simpo-math-model-gguf | RichardErkhov | null | [
"gguf",
"arxiv:2401.08417",
"endpoints_compatible",
"region:us"
] | 2025-03-13T15:32:24 | 2025-03-13T15:39:11 | 352 | 0 | ---
{}
---
Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
simpo-math-model - GGUF
- Model creator: https://huggingface.co/rawsh/
- Original model: https://huggingface.co/r... | [
"TRANSLATION"
] | Non_BioNLP | Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
simpo-math-model - GGUF
- Model creator: https://huggingface.co/rawsh/
- Original model: https://huggingface.co/rawsh/simpo-... | {} |
mrapacz/interlinear-en-philta-emb-auto-diacritics-bh | mrapacz | text2text-generation | [
"transformers",
"pytorch",
"morph-t5-auto",
"text2text-generation",
"en",
"dataset:mrapacz/greek-interlinear-translations",
"license:cc-by-sa-4.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 2025-02-08T12:26:40 | 2025-02-21T21:32:33 | 45 | 0 | ---
base_model:
- PhilTa
datasets:
- mrapacz/greek-interlinear-translations
language:
- en
library_name: transformers
license: cc-by-sa-4.0
metrics:
- bleu
---
# Model Card for Ancient Greek to English Interlinear Translation Model
This model performs interlinear translation from Ancient Greek to English, maintaining ... | [
"TRANSLATION"
] | Non_BioNLP | # Model Card for Ancient Greek to English Interlinear Translation Model
This model performs interlinear translation from Ancient Greek to English, maintaining word-level alignment between source and target texts.
You can find the source code used for training this and other models trained as part of this project in t... | {"base_model": ["PhilTa"], "datasets": ["mrapacz/greek-interlinear-translations"], "language": ["en"], "library_name": "transformers", "license": "cc-by-sa-4.0", "metrics": ["bleu"]} |
ymoslem/whisper-medium-ga2en-v6.3.1-8k-r | ymoslem | automatic-speech-recognition | [
"transformers",
"tensorboard",
"safetensors",
"whisper",
"automatic-speech-recognition",
"generated_from_trainer",
"ga",
"en",
"dataset:ymoslem/IWSLT2023-GA-EN",
"dataset:ymoslem/FLEURS-GA-EN",
"dataset:ymoslem/BitesizeIrish-GA-EN",
"dataset:ymoslem/SpokenWords-GA-EN-MTed",
"dataset:ymoslem/... | 2024-06-22T00:29:00 | 2025-03-15T11:12:14 | 36 | 1 | ---
base_model: openai/whisper-medium
datasets:
- ymoslem/IWSLT2023-GA-EN
- ymoslem/FLEURS-GA-EN
- ymoslem/BitesizeIrish-GA-EN
- ymoslem/SpokenWords-GA-EN-MTed
- ymoslem/Tatoeba-Speech-Irish
- ymoslem/Wikimedia-Speech-Irish
- ymoslem/EUbookshop-Speech-Irish
language:
- ga
- en
license: apache-2.0
metrics:
- bleu
- wer
... | [
"TRANSLATION"
] | Non_BioNLP |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# Whisper Medium GA-EN Speech Translation
This model is a fine-tuned version of [openai/whisper-medium](https://huggingface.co/ope... | {"base_model": "openai/whisper-medium", "datasets": ["ymoslem/IWSLT2023-GA-EN", "ymoslem/FLEURS-GA-EN", "ymoslem/BitesizeIrish-GA-EN", "ymoslem/SpokenWords-GA-EN-MTed", "ymoslem/Tatoeba-Speech-Irish", "ymoslem/Wikimedia-Speech-Irish", "ymoslem/EUbookshop-Speech-Irish"], "language": ["ga", "en"], "license": "apache-2.0"... |
gaudi/opus-mt-fr-ny-ctranslate2 | gaudi | translation | [
"transformers",
"marian",
"ctranslate2",
"translation",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | 2024-07-22T15:58:59 | 2024-10-19T04:38:59 | 7 | 0 | ---
license: apache-2.0
tags:
- ctranslate2
- translation
---
# Repository General Information
## Inspired by and derived from the work of [Helsinki-NLP](https://huggingface.co/Helsinki-NLP), [CTranslate2](https://github.com/OpenNMT/CTranslate2), and [michaelfeil](https://huggingface.co/michaelfeil)!
- Link to Original... | [
"TRANSLATION"
] | Non_BioNLP | # Repository General Information
## Inspired by and derived from the work of [Helsinki-NLP](https://huggingface.co/Helsinki-NLP), [CTranslate2](https://github.com/OpenNMT/CTranslate2), and [michaelfeil](https://huggingface.co/michaelfeil)!
- Link to Original Model ([Helsinki-NLP](https://huggingface.co/Helsinki-NLP)): ... | {"license": "apache-2.0", "tags": ["ctranslate2", "translation"]} |
RayNguyent/finetuning-sentiment-model | RayNguyent | text-classification | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:imdb",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"en... | 2023-07-29T03:56:51 | 2023-07-29T10:46:52 | 13 | 0 | ---
base_model: distilbert-base-uncased
datasets:
- imdb
license: apache-2.0
metrics:
- accuracy
- f1
tags:
- generated_from_trainer
model-index:
- name: finetuning-sentiment-model
results:
- task:
type: text-classification
name: Text Classification
dataset:
name: imdb
type: imdb
c... | [
"TEXT_CLASSIFICATION"
] | Non_BioNLP |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuning-sentiment-model
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-bas... | {"base_model": "distilbert-base-uncased", "datasets": ["imdb"], "license": "apache-2.0", "metrics": ["accuracy", "f1"], "tags": ["generated_from_trainer"], "model-index": [{"name": "finetuning-sentiment-model", "results": [{"task": {"type": "text-classification", "name": "Text Classification"}, "dataset": {"name": "imd... |
cmgx/Snowflake-ATM-Avg-v2 | cmgx | sentence-similarity | [
"sentence-transformers",
"safetensors",
"bert",
"sentence-similarity",
"feature-extraction",
"generated_from_trainer",
"dataset_size:800",
"loss:MatryoshkaLoss",
"loss:CustomContrastiveLoss",
"en",
"arxiv:1908.10084",
"arxiv:2205.13147",
"base_model:Snowflake/snowflake-arctic-embed-m-v1.5",
... | 2024-10-03T22:53:14 | 2024-10-03T23:01:03 | 0 | 0 | ---
base_model: Snowflake/snowflake-arctic-embed-m-v1.5
datasets: []
language:
- en
library_name: sentence-transformers
metrics:
- cosine_accuracy
- dot_accuracy
- manhattan_accuracy
- euclidean_accuracy
- max_accuracy
pipeline_tag: sentence-similarity
tags:
- sentence-transformers
- sentence-similarity
- feature-extra... | [
"TEXT_CLASSIFICATION"
] | Non_BioNLP |
# Snowflake-ATM-Avg-v2
This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [Snowflake/snowflake-arctic-embed-m-v1.5](https://huggingface.co/Snowflake/snowflake-arctic-embed-m-v1.5). It maps sentences & paragraphs to a 768-dimensional dense vector space and can be used for semantic textual si... | {"base_model": "Snowflake/snowflake-arctic-embed-m-v1.5", "datasets": [], "language": ["en"], "library_name": "sentence-transformers", "metrics": ["cosine_accuracy", "dot_accuracy", "manhattan_accuracy", "euclidean_accuracy", "max_accuracy"], "pipeline_tag": "sentence-similarity", "tags": ["sentence-transformers", "sen... |
AhmedSSoliman/DistilBERT-Marian-Model-on-DJANGO | AhmedSSoliman | translation | [
"transformers",
"pytorch",
"encoder-decoder",
"text2text-generation",
"Code Generation",
"Machine translation",
"Text generation",
"translation",
"en",
"dataset:AhmedSSoliman/DJANGO",
"license:mit",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 2023-01-11T21:54:43 | 2023-07-30T12:01:43 | 13 | 0 | ---
datasets:
- AhmedSSoliman/DJANGO
language:
- en
license: mit
metrics:
- bleu
- accuracy
pipeline_tag: translation
tags:
- Code Generation
- Machine translation
- Text generation
---
| [
"TRANSLATION"
] | Non_BioNLP | {"datasets": ["AhmedSSoliman/DJANGO"], "language": ["en"], "license": "mit", "metrics": ["bleu", "accuracy"], "pipeline_tag": "translation", "tags": ["Code Generation", "Machine translation", "Text generation"]} | |
SoyGema/english-guyarati | SoyGema | translation | [
"transformers",
"pytorch",
"safetensors",
"t5",
"text2text-generation",
"generated_from_trainer",
"translation",
"en",
"gu",
"dataset:opus100",
"base_model:google-t5/t5-small",
"base_model:finetune:google-t5/t5-small",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"text-... | 2023-09-09T10:49:53 | 2023-09-11T06:51:34 | 16 | 0 | ---
base_model: t5-small
datasets:
- opus100
language:
- en
- gu
license: apache-2.0
metrics:
- bleu
pipeline_tag: translation
tags:
- generated_from_trainer
model-index:
- name: english-guyarati
results:
- task:
type: translation
name: Translation
dataset:
name: opus100 en-gu
type: opus... | [
"TRANSLATION"
] | Non_BioNLP |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# english-guyarati
This model is a fine-tuned version of [t5-small](https://huggingface.co/t5-small) on the opus100 en-gu dataset.... | {"base_model": "t5-small", "datasets": ["opus100"], "language": ["en", "gu"], "license": "apache-2.0", "metrics": ["bleu"], "pipeline_tag": "translation", "tags": ["generated_from_trainer"], "model-index": [{"name": "english-guyarati", "results": [{"task": {"type": "translation", "name": "Translation"}, "dataset": {"na... |
google/t5-efficient-small-el4 | google | text2text-generation | [
"transformers",
"pytorch",
"tf",
"jax",
"t5",
"text2text-generation",
"deep-narrow",
"en",
"dataset:c4",
"arxiv:2109.10686",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"region:us"
] | 2022-03-02T23:29:05 | 2023-01-24T16:49:01 | 118 | 0 | ---
datasets:
- c4
language:
- en
license: apache-2.0
tags:
- deep-narrow
inference: false
---
# T5-Efficient-SMALL-EL4 (Deep-Narrow version)
T5-Efficient-SMALL-EL4 is a variation of [Google's original T5](https://ai.googleblog.com/2020/02/exploring-transfer-learning-with-t5.html) following the [T5 model architecture... | [
"TEXT_CLASSIFICATION",
"QUESTION_ANSWERING",
"SUMMARIZATION"
] | Non_BioNLP |
# T5-Efficient-SMALL-EL4 (Deep-Narrow version)
T5-Efficient-SMALL-EL4 is a variation of [Google's original T5](https://ai.googleblog.com/2020/02/exploring-transfer-learning-with-t5.html) following the [T5 model architecture](https://huggingface.co/docs/transformers/model_doc/t5).
It is a *pretrained-only* checkpoint ... | {"datasets": ["c4"], "language": ["en"], "license": "apache-2.0", "tags": ["deep-narrow"], "inference": false} |
varun-v-rao/bart-base-lora-885K-snli-model1 | varun-v-rao | text-classification | [
"transformers",
"tensorboard",
"safetensors",
"bart",
"text-classification",
"generated_from_trainer",
"dataset:stanfordnlp/snli",
"base_model:facebook/bart-base",
"base_model:finetune:facebook/bart-base",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
... | 2024-06-19T18:33:57 | 2024-06-19T22:49:03 | 4 | 0 | ---
base_model: facebook/bart-base
datasets:
- stanfordnlp/snli
license: apache-2.0
metrics:
- accuracy
tags:
- generated_from_trainer
model-index:
- name: bart-base-lora-885K-snli-model1
results:
- task:
type: text-classification
name: Text Classification
dataset:
name: snli
type: stanf... | [
"TEXT_CLASSIFICATION"
] | Non_BioNLP |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# bart-base-lora-885K-snli-model1
This model is a fine-tuned version of [facebook/bart-base](https://huggingface.co/facebook/bart-... | {"base_model": "facebook/bart-base", "datasets": ["stanfordnlp/snli"], "license": "apache-2.0", "metrics": ["accuracy"], "tags": ["generated_from_trainer"], "model-index": [{"name": "bart-base-lora-885K-snli-model1", "results": [{"task": {"type": "text-classification", "name": "Text Classification"}, "dataset": {"name"... |
Yongxin-Guo/VTG-LLM | Yongxin-Guo | null | [
"dense-video-caption",
"video-highlight-detection",
"video-summarization",
"moment-retrieval",
"dataset:Yongxin-Guo/VTG-IT",
"arxiv:2405.13382",
"license:apache-2.0",
"region:us"
] | 2024-05-21T03:30:31 | 2024-06-19T08:29:04 | 0 | 3 | ---
datasets:
- Yongxin-Guo/VTG-IT
license: apache-2.0
tags:
- dense-video-caption
- video-highlight-detection
- video-summarization
- moment-retrieval
---
[VTG-LLM: Integrating Timestamp Knowledge into Video LLMs for Enhanced Video Temporal Grounding](https://arxiv.org/abs/2405.13382)
## Overview
We introduce
- VT... | [
"SUMMARIZATION"
] | Non_BioNLP |
[VTG-LLM: Integrating Timestamp Knowledge into Video LLMs for Enhanced Video Temporal Grounding](https://arxiv.org/abs/2405.13382)
## Overview
We introduce
- VTG-IT-120K, a high-quality and comprehensive instruction tuning dataset that covers VTG tasks such as moment retrieval (63.2K), dense video captioning (37.2K... | {"datasets": ["Yongxin-Guo/VTG-IT"], "license": "apache-2.0", "tags": ["dense-video-caption", "video-highlight-detection", "video-summarization", "moment-retrieval"]} |
mrapacz/interlinear-en-greta-emb-sum-normalized-bh | mrapacz | text2text-generation | [
"transformers",
"pytorch",
"morph-t5-sum",
"text2text-generation",
"en",
"dataset:mrapacz/greek-interlinear-translations",
"license:cc-by-sa-4.0",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 2025-02-08T12:28:56 | 2025-02-21T21:31:34 | 16 | 0 | ---
base_model:
- GreTa
datasets:
- mrapacz/greek-interlinear-translations
language:
- en
library_name: transformers
license: cc-by-sa-4.0
metrics:
- bleu
---
# Model Card for Ancient Greek to English Interlinear Translation Model
This model performs interlinear translation from Ancient Greek to English, maintaining w... | [
"TRANSLATION"
] | Non_BioNLP | # Model Card for Ancient Greek to English Interlinear Translation Model
This model performs interlinear translation from Ancient Greek to English, maintaining word-level alignment between source and target texts.
You can find the source code used for training this and other models trained as part of this project in t... | {"base_model": ["GreTa"], "datasets": ["mrapacz/greek-interlinear-translations"], "language": ["en"], "library_name": "transformers", "license": "cc-by-sa-4.0", "metrics": ["bleu"]} |
nianlong/memsum-arxiv-summarization | nianlong | null | [
"license:apache-2.0",
"region:us"
] | 2023-07-27T15:43:46 | 2024-03-29T16:00:23 | 0 | 2 | ---
license: apache-2.0
---
[](http://dx.doi.org/10.18653/v1/2022.acl-long.450)
# MemSum: Extractive Summarization of Long Documents Using Multi-Step Episodic Markov Decision Processes
Code for ACL 2022 paper on the topic of long doc... | [
"SUMMARIZATION"
] | Non_BioNLP | [](http://dx.doi.org/10.18653/v1/2022.acl-long.450)
# MemSum: Extractive Summarization of Long Documents Using Multi-Step Episodic Markov Decision Processes
Code for ACL 2022 paper on the topic of long document extractive summarizati... | {"license": "apache-2.0"} |
SQAI/bge-embedding-model2 | SQAI | sentence-similarity | [
"sentence-transformers",
"safetensors",
"bert",
"sentence-similarity",
"feature-extraction",
"generated_from_trainer",
"dataset_size:1865",
"loss:MatryoshkaLoss",
"loss:MultipleNegativesRankingLoss",
"en",
"arxiv:1908.10084",
"arxiv:2205.13147",
"arxiv:1705.00652",
"license:apache-2.0",
... | 2024-07-02T00:22:32 | 2024-07-02T00:22:56 | 48 | 0 | ---
base_model: SQAI/bge-embedding-model
datasets: []
language:
- en
library_name: sentence-transformers
license: apache-2.0
metrics:
- cosine_accuracy@1
- cosine_accuracy@3
- cosine_accuracy@5
- cosine_accuracy@10
- cosine_precision@1
- cosine_precision@3
- cosine_precision@5
- cosine_precision@10
- cosine_recall@1
- ... | [
"TEXT_CLASSIFICATION"
] | Non_BioNLP |
# BGE base Financial Matryoshka
This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [SQAI/bge-embedding-model](https://huggingface.co/SQAI/bge-embedding-model). It maps sentences & paragraphs to a 384-dimensional dense vector space and can be used for semantic textual similarity, semantic se... | {"base_model": "SQAI/bge-embedding-model", "datasets": [], "language": ["en"], "library_name": "sentence-transformers", "license": "apache-2.0", "metrics": ["cosine_accuracy@1", "cosine_accuracy@3", "cosine_accuracy@5", "cosine_accuracy@10", "cosine_precision@1", "cosine_precision@3", "cosine_precision@5", "cosine_prec... |
MBZUAI/bactrian-x-bloom-7b1-lora | MBZUAI | null | [
"arxiv:2305.15011",
"license:mit",
"region:us"
] | 2023-05-10T18:41:14 | 2023-06-11T10:11:59 | 0 | 0 | ---
license: mit
---
#### Current Training Steps: 100,000
This repo contains a low-rank adapter (LoRA) for Bloom-7b1
fit on the [Stanford-Alpaca-52k](https://github.com/tatsu-lab/stanford_alpaca)
and [databricks-dolly-15k](https://github.com/databrickslabs/dolly/tree/master/data) data in 52 languages.
### Dataset ... | [
"TRANSLATION"
] | Non_BioNLP |
#### Current Training Steps: 100,000
This repo contains a low-rank adapter (LoRA) for Bloom-7b1
fit on the [Stanford-Alpaca-52k](https://github.com/tatsu-lab/stanford_alpaca)
and [databricks-dolly-15k](https://github.com/databrickslabs/dolly/tree/master/data) data in 52 languages.
### Dataset Creation
1. English ... | {"license": "mit"} |
michaelfeil/ct2fast-opus-mt-ROMANCE-en | michaelfeil | translation | [
"transformers",
"ctranslate2",
"translation",
"license:apache-2.0",
"endpoints_compatible",
"region:us"
] | 2023-05-19T00:50:58 | 2023-05-19T00:51:47 | 352 | 1 | ---
license: apache-2.0
tags:
- ctranslate2
- translation
---
# Fast-Inference with Ctranslate2
Speedup inference by 2x-8x using int8 inference in C++
quantized version of [Helsinki-NLP/opus-mt-ROMANCE-en](https://huggingface.co/Helsinki-NLP/opus-mt-ROMANCE-en)
```bash
pip install hf-hub-ctranslate2>=1.0.0 ctranslat... | [
"TRANSLATION"
] | Non_BioNLP | # Fast-Inference with Ctranslate2
Speedup inference by 2x-8x using int8 inference in C++
quantized version of [Helsinki-NLP/opus-mt-ROMANCE-en](https://huggingface.co/Helsinki-NLP/opus-mt-ROMANCE-en)
```bash
pip install hf-hub-ctranslate2>=1.0.0 ctranslate2>=3.13.0
```
Converted using
```
ct2-transformers-converter ... | {"license": "apache-2.0", "tags": ["ctranslate2", "translation"]} |
tsinik/distilbert-base-uncased-finetuned-emotion | tsinik | text-classification | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:emotion",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 2023-04-13T14:11:04 | 2023-04-14T06:26:43 | 13 | 0 | ---
datasets:
- emotion
license: apache-2.0
metrics:
- accuracy
- f1
tags:
- generated_from_trainer
model-index:
- name: distilbert-base-uncased-finetuned-emotion
results:
- task:
type: text-classification
name: Text Classification
dataset:
name: emotion
type: emotion
args: split
... | [
"TEXT_CLASSIFICATION"
] | Non_BioNLP |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co... | {"datasets": ["emotion"], "license": "apache-2.0", "metrics": ["accuracy", "f1"], "tags": ["generated_from_trainer"], "model-index": [{"name": "distilbert-base-uncased-finetuned-emotion", "results": [{"task": {"type": "text-classification", "name": "Text Classification"}, "dataset": {"name": "emotion", "type": "emotion... |
seoultechLLM/Llama-3-70B-PIM-4bit | seoultechLLM | null | [
"license:mit",
"region:us"
] | 2024-11-25T11:59:04 | 2024-11-25T12:05:09 | 0 | 0 | ---
license: mit
---
# Model Architecture
## Base Model: Llama 3 (70 billion parameters)
## Quantization: 4-bit integer quantization for memory and computational efficiency
## Framework: Fine-tuned with PyTorch, leveraging Hugging Face Transformers
## PIM Optimization: Enhanced for PIM hardware t... | [
"TEXT_CLASSIFICATION",
"QUESTION_ANSWERING",
"SUMMARIZATION"
] | Non_BioNLP |
# Model Architecture
## Base Model: Llama 3 (70 billion parameters)
## Quantization: 4-bit integer quantization for memory and computational efficiency
## Framework: Fine-tuned with PyTorch, leveraging Hugging Face Transformers
## PIM Optimization: Enhanced for PIM hardware to process data direct... | {"license": "mit"} |
AIFS/Prometh-MOEM-24B | AIFS | text-generation | [
"transformers",
"safetensors",
"mixtral",
"text-generation",
"conversational",
"en",
"license:apache-2.0",
"autotrain_compatible",
"text-generation-inference",
"endpoints_compatible",
"region:us"
] | 2024-02-13T14:00:48 | 2024-03-20T13:43:22 | 0 | 3 | ---
language:
- en
license: apache-2.0
---
# Prometh-MOEM-24B Model Card
**Prometh-MOEM-24B** is a Mixture of Experts (MoE) model that integrates multiple foundational models to deliver enhanced performance across a spectrum of tasks. It harnesses the combined strengths of its constituent models, optimizing for accura... | [
"QUESTION_ANSWERING",
"TRANSLATION"
] | Non_BioNLP | # Prometh-MOEM-24B Model Card
**Prometh-MOEM-24B** is a Mixture of Experts (MoE) model that integrates multiple foundational models to deliver enhanced performance across a spectrum of tasks. It harnesses the combined strengths of its constituent models, optimizing for accuracy, speed, and versatility.
## Model Sourc... | {"language": ["en"], "license": "apache-2.0"} |
RichardErkhov/lemon-mint_-_gemma-ko-7b-instruct-v0.52-gguf | RichardErkhov | null | [
"gguf",
"endpoints_compatible",
"region:us",
"conversational"
] | 2024-11-16T21:45:13 | 2024-11-16T23:04:52 | 106 | 0 | ---
{}
---
Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
gemma-ko-7b-instruct-v0.52 - GGUF
- Model creator: https://huggingface.co/lemon-mint/
- Original model: https://h... | [
"QUESTION_ANSWERING",
"TRANSLATION",
"SUMMARIZATION"
] | Non_BioNLP | Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
gemma-ko-7b-instruct-v0.52 - GGUF
- Model creator: https://huggingface.co/lemon-mint/
- Original model: https://huggingface.... | {} |
DavieLion/Lllma-3.2-1B | DavieLion | text-generation | [
"transformers",
"safetensors",
"llama",
"text-generation",
"facebook",
"meta",
"pytorch",
"llama-3",
"en",
"de",
"fr",
"it",
"pt",
"hi",
"es",
"th",
"arxiv:2204.05149",
"arxiv:2405.16406",
"license:llama3.2",
"autotrain_compatible",
"text-generation-inference",
"endpoints_c... | 2024-12-27T02:14:21 | 2024-12-27T07:19:30 | 15 | 0 | ---
language:
- en
- de
- fr
- it
- pt
- hi
- es
- th
library_name: transformers
license: llama3.2
pipeline_tag: text-generation
tags:
- facebook
- meta
- pytorch
- llama
- llama-3
extra_gated_prompt: "### LLAMA 3.2 COMMUNITY LICENSE AGREEMENT\n\nLlama 3.2 Version\
\ Release Date: September 25, 2024\n\n“Agreement” me... | [
"SUMMARIZATION"
] | Non_BioNLP |
## Model Information
The Llama 3.2 collection of multilingual large language models (LLMs) is a collection of pretrained and instruction-tuned generative models in 1B and 3B sizes (text in/text out). The Llama 3.2 instruction-tuned text only models are optimized for multilingual dialogue use cases, including agentic ... | {"language": ["en", "de", "fr", "it", "pt", "hi", "es", "th"], "library_name": "transformers", "license": "llama3.2", "pipeline_tag": "text-generation", "tags": ["facebook", "meta", "pytorch", "llama", "llama-3"], "extra_gated_prompt": "### LLAMA 3.2 COMMUNITY LICENSE AGREEMENT\n\nLlama 3.2 Version Release Date: Septem... |
RichardErkhov/BueormLLC_-_RAGPT-2_unfunctional-gguf | RichardErkhov | null | [
"gguf",
"endpoints_compatible",
"region:us"
] | 2025-03-14T15:15:43 | 2025-03-14T15:18:16 | 307 | 0 | ---
{}
---
Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
RAGPT-2_unfunctional - GGUF
- Model creator: https://huggingface.co/BueormLLC/
- Original model: https://huggingf... | [
"QUESTION_ANSWERING"
] | Non_BioNLP | Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
RAGPT-2_unfunctional - GGUF
- Model creator: https://huggingface.co/BueormLLC/
- Original model: https://huggingface.co/Bueo... | {} |
aasarmehdi/distilbert-base-uncased.finetuned-emotion | aasarmehdi | text-classification | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:emotion",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 2023-06-18T12:20:48 | 2023-06-18T15:12:34 | 8 | 0 | ---
datasets:
- emotion
license: apache-2.0
metrics:
- accuracy
- f1
tags:
- generated_from_trainer
model-index:
- name: distilbert-base-uncased.finetuned-emotion
results:
- task:
type: text-classification
name: Text Classification
dataset:
name: emotion
type: emotion
config: split... | [
"TEXT_CLASSIFICATION"
] | Non_BioNLP |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased.finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co... | {"datasets": ["emotion"], "license": "apache-2.0", "metrics": ["accuracy", "f1"], "tags": ["generated_from_trainer"], "model-index": [{"name": "distilbert-base-uncased.finetuned-emotion", "results": [{"task": {"type": "text-classification", "name": "Text Classification"}, "dataset": {"name": "emotion", "type": "emotion... |
Jiiiiiiiiiinw/finetuning-sentiment-model-3000-samples | Jiiiiiiiiiinw | text-classification | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:imdb",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 2023-04-13T08:56:37 | 2023-04-13T09:05:30 | 11 | 0 | ---
datasets:
- imdb
license: apache-2.0
metrics:
- accuracy
- f1
tags:
- generated_from_trainer
model-index:
- name: finetuning-sentiment-model-3000-samples
results:
- task:
type: text-classification
name: Text Classification
dataset:
name: imdb
type: imdb
config: plain_text
... | [
"TEXT_CLASSIFICATION"
] | Non_BioNLP |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# finetuning-sentiment-model-3000-samples
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/d... | {"datasets": ["imdb"], "license": "apache-2.0", "metrics": ["accuracy", "f1"], "tags": ["generated_from_trainer"], "model-index": [{"name": "finetuning-sentiment-model-3000-samples", "results": [{"task": {"type": "text-classification", "name": "Text Classification"}, "dataset": {"name": "imdb", "type": "imdb", "config"... |
Kevin123/distilbert-base-uncased-finetuned-cola | Kevin123 | text-classification | [
"transformers",
"pytorch",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:glue",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 2022-09-22T21:03:59 | 2022-09-22T22:39:03 | 10 | 0 | ---
datasets:
- glue
license: apache-2.0
metrics:
- matthews_correlation
tags:
- generated_from_trainer
model-index:
- name: distilbert-base-uncased-finetuned-cola
results:
- task:
type: text-classification
name: Text Classification
dataset:
name: glue
type: glue
args: cola
met... | [
"TEXT_CLASSIFICATION"
] | Non_BioNLP |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-cola
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/di... | {"datasets": ["glue"], "license": "apache-2.0", "metrics": ["matthews_correlation"], "tags": ["generated_from_trainer"], "model-index": [{"name": "distilbert-base-uncased-finetuned-cola", "results": [{"task": {"type": "text-classification", "name": "Text Classification"}, "dataset": {"name": "glue", "type": "glue", "ar... |
nbogdan/flant5-small-1ex-paraphrasing-1epochs | nbogdan | null | [
"adapter-transformers",
"adapterhub:self-explanations",
"t5",
"dataset:self-explanations",
"region:us"
] | 2023-09-04T15:45:08 | 2023-09-04T15:45:14 | 0 | 0 | ---
datasets:
- self-explanations
tags:
- adapterhub:self-explanations
- t5
- adapter-transformers
---
# Adapter `nbogdan/flant5-small-1ex-paraphrasing-1epochs` for google/flan-t5-small
An [adapter](https://adapterhub.ml) for the `google/flan-t5-small` model that was trained on the [self-explanations](https://adapter... | [
"PARAPHRASING"
] | Non_BioNLP |
# Adapter `nbogdan/flant5-small-1ex-paraphrasing-1epochs` for google/flan-t5-small
An [adapter](https://adapterhub.ml) for the `google/flan-t5-small` model that was trained on the [self-explanations](https://adapterhub.ml/explore/self-explanations/) dataset.
This adapter was created for usage with the **[adapter-tra... | {"datasets": ["self-explanations"], "tags": ["adapterhub:self-explanations", "t5", "adapter-transformers"]} |
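The adapter card above describes a small trainable module attached to a frozen base model (flan-t5-small). The general bottleneck-adapter pattern behind that can be sketched in dependency-free Python — this is an illustrative sketch of the mechanism, not the actual adapter-transformers API; the matrix sizes and values below are hypothetical.

```python
# Bottleneck-adapter sketch: hidden -> down-project -> ReLU -> up-project,
# added back to the input through a residual connection. In real adapter
# training the base model stays frozen; only the two small projections learn.
def matvec(m, v):
    return [sum(row[i] * v[i] for i in range(len(v))) for row in m]

def bottleneck_adapter(hidden, w_down, w_up):
    z = matvec(w_down, hidden)                    # down-project to bottleneck
    z = [max(0.0, x) for x in z]                  # nonlinearity
    out = matvec(w_up, z)                         # up-project to hidden size
    return [h + o for h, o in zip(hidden, out)]   # residual connection

# Toy example: hidden size 4, bottleneck size 2 (hypothetical numbers).
hidden = [1.0, -2.0, 0.5, 3.0]
w_down = [[1, 0, 0, 0], [0, 0, 0, 1]]    # 2x4
w_up = [[1, 0], [0, 0], [0, 0], [0, 1]]  # 4x2
adapted = bottleneck_adapter(hidden, w_down, w_up)
```

The residual term is what lets an untrained adapter start close to the identity, so attaching it does not disturb the frozen model's behavior.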
uboza10300/distilbert-hatexplain | uboza10300 | text-classification | [
"transformers",
"safetensors",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:hatexplain",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints... | 2024-12-10T04:43:05 | 2024-12-10T04:58:07 | 9 | 0 | ---
base_model: distilbert-base-uncased
datasets:
- hatexplain
library_name: transformers
license: apache-2.0
metrics:
- accuracy
- precision
- recall
- f1
tags:
- generated_from_trainer
model-index:
- name: distilbert-hatexplain
results:
- task:
type: text-classification
name: Text Classification
d... | [
"TEXT_CLASSIFICATION"
] | Non_BioNLP |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-hatexplain
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co/distilbert-base-unc... | {"base_model": "distilbert-base-uncased", "datasets": ["hatexplain"], "library_name": "transformers", "license": "apache-2.0", "metrics": ["accuracy", "precision", "recall", "f1"], "tags": ["generated_from_trainer"], "model-index": [{"name": "distilbert-hatexplain", "results": [{"task": {"type": "text-classification", ... |
YakovElm/Qt15SetFitModel_balance_ratio_3 | YakovElm | text-classification | [
"sentence-transformers",
"pytorch",
"mpnet",
"setfit",
"text-classification",
"arxiv:2209.11055",
"license:apache-2.0",
"region:us"
] | 2023-06-03T17:18:09 | 2023-06-03T17:18:44 | 10 | 0 | ---
license: apache-2.0
pipeline_tag: text-classification
tags:
- setfit
- sentence-transformers
- text-classification
---
# YakovElm/Qt15SetFitModel_balance_ratio_3
This is a [SetFit model](https://github.com/huggingface/setfit) that can be used for text classification. The model has been trained using an efficient ... | [
"TEXT_CLASSIFICATION"
] | Non_BioNLP |
# YakovElm/Qt15SetFitModel_balance_ratio_3
This is a [SetFit model](https://github.com/huggingface/setfit) that can be used for text classification. The model has been trained using an efficient few-shot learning technique that involves:
1. Fine-tuning a [Sentence Transformer](https://www.sbert.net) with contrastive... | {"license": "apache-2.0", "pipeline_tag": "text-classification", "tags": ["setfit", "sentence-transformers", "text-classification"]} |
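The SetFit recipe named in the card — fine-tuning a sentence transformer with contrastive pairs built from a handful of labeled examples — can be illustrated without any ML libraries. The pair-generation step it relies on is sketched below; this is a conceptual sketch only, since the actual training loop lives in the `setfit` library.

```python
from itertools import combinations

# SetFit-style pair generation: examples sharing a label form positive pairs,
# examples with different labels form negative pairs. These pairs drive the
# contrastive fine-tuning of the sentence encoder.
def contrastive_pairs(examples):
    pairs = []
    for (text1, label1), (text2, label2) in combinations(examples, 2):
        pairs.append((text1, text2, 1 if label1 == label2 else 0))
    return pairs

# Hypothetical few-shot set (four examples, two classes).
few_shot = [
    ("great issue tracker", "pos"),
    ("works as expected", "pos"),
    ("build keeps failing", "neg"),
    ("crashes on startup", "neg"),
]
pairs = contrastive_pairs(few_shot)
positives = [p for p in pairs if p[2] == 1]
negatives = [p for p in pairs if p[2] == 0]
```

Note how even four labeled examples yield six training pairs — this amplification is what makes the few-shot approach work.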
farleyknight/patent-summarization-google-bigbird-pegasus-large-arxiv-2022-09-20 | farleyknight | text2text-generation | [
"transformers",
"pytorch",
"bigbird_pegasus",
"text2text-generation",
"generated_from_trainer",
"dataset:farleyknight/big_patent_5_percent",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 2022-09-20T21:32:32 | 2022-09-23T02:53:23 | 40 | 0 | ---
datasets:
- farleyknight/big_patent_5_percent
license: apache-2.0
metrics:
- rouge
tags:
- generated_from_trainer
model-index:
- name: patent-summarization-google-bigbird-pegasus-large-arxiv-2022-09-20
results:
- task:
type: summarization
name: Summarization
dataset:
name: farleyknight/big... | [
"SUMMARIZATION"
] | Non_BioNLP |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# patent-summarization-google-bigbird-pegasus-large-arxiv-2022-09-20
This model is a fine-tuned version of [google/bigbird-pegasus... | {"datasets": ["farleyknight/big_patent_5_percent"], "license": "apache-2.0", "metrics": ["rouge"], "tags": ["generated_from_trainer"], "model-index": [{"name": "patent-summarization-google-bigbird-pegasus-large-arxiv-2022-09-20", "results": [{"task": {"type": "summarization", "name": "Summarization"}, "dataset": {"name... |
poltextlab/xlm-roberta-large-english-social-cap-v3 | poltextlab | null | [
"pytorch",
"xlm-roberta",
"arxiv:1910.09700",
"region:us"
] | 2024-10-18T09:44:16 | 2025-02-26T16:08:16 | 99 | 0 | ---
extra_gated_prompt: 'Our models are intended for academic use only. If you are not
affiliated with an academic institution, please provide a rationale for using our
models. Please allow us a few business days to manually review subscriptions.
If you use our models for your work or research, please cite ... | [
"TRANSLATION"
] | Non_BioNLP |
# Model Card for Model ID
<!-- Provide a quick summary of what the model is/does. -->
## Model Details
The translation table from the model results to CAP codes is the following:
```python
CAP_NUM_DICT = {
0: 1,
1: 2,
2: 3,
3: 4,
4: 5,
5: 6,
6: 7,
7: 8,
8: 9... | {"extra_gated_prompt": "Our models are intended for academic use only. If you are not affiliated with an academic institution, please provide a rationale for using our models. Please allow us a few business days to manually review subscriptions.\nIf you use our models for your work or research, please cite this paper: ... |
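The card above maps raw classifier label indices back to CAP major-topic codes via a lookup dictionary (the full table is truncated in the record). A minimal sketch of that post-processing step, using only the entries visible above — anything beyond index 8 would be an assumption, so it is omitted:

```python
# Subset of the index -> CAP code table shown in the card (entries 0..8).
CAP_NUM_DICT = {0: 1, 1: 2, 2: 3, 3: 4, 4: 5, 5: 6, 6: 7, 7: 8, 8: 9}

def to_cap_codes(predicted_indices):
    """Map raw model label indices to CAP major-topic codes."""
    return [CAP_NUM_DICT[i] for i in predicted_indices]

codes = to_cap_codes([0, 3, 8])
```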
TripleH/distilbert-base-uncased-finetuned-emotion | TripleH | text-classification | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:emotion",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 2023-02-03T15:45:19 | 2023-02-03T16:26:23 | 114 | 0 | ---
datasets:
- emotion
license: apache-2.0
metrics:
- accuracy
- f1
tags:
- generated_from_trainer
model-index:
- name: distilbert-base-uncased-finetuned-emotion
results:
- task:
type: text-classification
name: Text Classification
dataset:
name: emotion
type: emotion
config: split... | [
"TEXT_CLASSIFICATION"
] | Non_BioNLP |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co... | {"datasets": ["emotion"], "license": "apache-2.0", "metrics": ["accuracy", "f1"], "tags": ["generated_from_trainer"], "model-index": [{"name": "distilbert-base-uncased-finetuned-emotion", "results": [{"task": {"type": "text-classification", "name": "Text Classification"}, "dataset": {"name": "emotion", "type": "emotion... |
haophancs/bge-m3-financial-matryoshka | haophancs | sentence-similarity | [
"sentence-transformers",
"onnx",
"safetensors",
"xlm-roberta",
"sentence-similarity",
"feature-extraction",
"generated_from_trainer",
"dataset_size:6300",
"loss:MatryoshkaLoss",
"loss:MultipleNegativesRankingLoss",
"en",
"arxiv:1908.10084",
"arxiv:2205.13147",
"arxiv:1705.00652",
"base_m... | 2024-06-22T12:04:46 | 2024-07-09T04:46:45 | 37 | 1 | ---
base_model: BAAI/bge-m3
datasets: []
language:
- en
library_name: sentence-transformers
license: apache-2.0
metrics:
- cosine_accuracy@1
- cosine_accuracy@3
- cosine_accuracy@5
- cosine_accuracy@10
- cosine_precision@1
- cosine_precision@3
- cosine_precision@5
- cosine_precision@10
- cosine_recall@1
- cosine_recall... | [
"TEXT_CLASSIFICATION"
] | Non_BioNLP |
# BGE-M3 Financial Matryoshka
This is a [sentence-transformers](https://www.SBERT.net) model finetuned from [BAAI/bge-m3](https://huggingface.co/BAAI/bge-m3). It maps sentences & paragraphs to a 1024-dimensional dense vector space and can be used for semantic textual similarity, semantic search, paraphrase mining, te... | {"base_model": "BAAI/bge-m3", "datasets": [], "language": ["en"], "library_name": "sentence-transformers", "license": "apache-2.0", "metrics": ["cosine_accuracy@1", "cosine_accuracy@3", "cosine_accuracy@5", "cosine_accuracy@10", "cosine_precision@1", "cosine_precision@3", "cosine_precision@5", "cosine_precision@10", "c... |
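The MatryoshkaLoss tag on this card means the 1024-dimensional embeddings are trained so their leading dimensions remain useful on their own: at inference you can truncate a vector and renormalize it for cheaper retrieval. A library-free sketch of that truncation step (the dimension sizes here are illustrative stand-ins):

```python
import math

def truncate_and_normalize(embedding, dim):
    """Keep the first `dim` components and rescale to unit length,
    as done when using a Matryoshka embedding at a reduced dimension."""
    head = embedding[:dim]
    norm = math.sqrt(sum(x * x for x in head))
    return [x / norm for x in head]

def cosine(a, b):
    # For unit vectors, cosine similarity is just the dot product.
    return sum(x * y for x, y in zip(a, b))

full = [0.6, 0.8, 0.0, 0.0]   # stand-in for a 1024-dim embedding
small = truncate_and_normalize(full, 2)
```

Renormalizing after truncation keeps cosine scores comparable across the different dimension budgets.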
TheBloke/airoboros-33B-gpt4-1.4-GPTQ | TheBloke | text-generation | [
"transformers",
"safetensors",
"llama",
"text-generation",
"license:other",
"autotrain_compatible",
"text-generation-inference",
"4-bit",
"gptq",
"region:us"
] | 2023-06-26T16:39:01 | 2023-08-21T03:04:25 | 75 | 27 | ---
license: other
inference: false
---
<!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-cont... | [
"QUESTION_ANSWERING"
] | Non_BioNLP |
<!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://i.imgur.com/EBdldam.jpg" alt="TheBlokeAI" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<div style="display: flex; justify-content: space-between; width: 100%;">
<... | {"license": "other", "inference": false} |
RichardErkhov/hishab_-_titulm-llama-3.2-1b-v1.1-awq | RichardErkhov | null | [
"safetensors",
"llama",
"4-bit",
"awq",
"region:us"
] | 2024-12-21T17:29:06 | 2024-12-21T17:30:01 | 9 | 0 | ---
{}
---
Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
titulm-llama-3.2-1b-v1.1 - AWQ
- Model creator: https://huggingface.co/hishab/
- Original model: https://huggingf... | [
"TRANSLATION"
] | Non_BioNLP | Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
titulm-llama-3.2-1b-v1.1 - AWQ
- Model creator: https://huggingface.co/hishab/
- Original model: https://huggingface.co/hish... | {} |
pinzhenchen/sft-lora-es-ollama-7b | pinzhenchen | null | [
"generation",
"question answering",
"instruction tuning",
"es",
"arxiv:2309.08958",
"license:cc-by-nc-4.0",
"region:us"
] | 2024-03-05T23:49:01 | 2024-03-05T23:49:04 | 0 | 0 | ---
language:
- es
license: cc-by-nc-4.0
tags:
- generation
- question answering
- instruction tuning
---
### Model Description
This HF repository contains base LLMs instruction tuned (SFT) with LoRA and then used to study whether monolingual or multilingual instruction tuning is more favourable.
* [GitHub](https://... | [
"QUESTION_ANSWERING"
] | Non_BioNLP |
### Model Description
This HF repository contains base LLMs instruction tuned (SFT) with LoRA and then used to study whether monolingual or multilingual instruction tuning is more favourable.
* [GitHub](https://github.com/hplt-project/monolingual-multilingual-instruction-tuning/tree/main)
* [Paper](https://arxiv.org... | {"language": ["es"], "license": "cc-by-nc-4.0", "tags": ["generation", "question answering", "instruction tuning"]} |
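LoRA, as used for the SFT runs described above, freezes the base weight matrix W and learns a low-rank update B·A, so the tuned layer computes Wx + (alpha/r)·B(Ax). A dependency-free numeric sketch of that forward pass (all sizes and values are made up for illustration):

```python
def matvec(m, v):
    return [sum(row[i] * v[i] for i in range(len(v))) for row in m]

def lora_forward(x, W, A, B, alpha, r):
    """Frozen base projection plus scaled low-rank update: Wx + (alpha/r) * B(Ax)."""
    base = matvec(W, x)                 # frozen pretrained weights
    update = matvec(B, matvec(A, x))    # low-rank trainable path
    scale = alpha / r
    return [b + scale * u for b, u in zip(base, update)]

# Toy sizes: hidden dim 2, rank 1 (hypothetical numbers).
x = [1.0, 2.0]
W = [[1.0, 0.0], [0.0, 1.0]]   # frozen 2x2 identity
A = [[1.0, 1.0]]               # 1x2 down-projection
B = [[0.5], [0.5]]             # 2x1 up-projection
y = lora_forward(x, W, A, B, alpha=2.0, r=1)
```

Because only A and B are trained, the number of tunable parameters scales with the rank r rather than with the full weight matrix.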
unsloth/gemma-3-4b-it-GGUF | unsloth | image-text-to-text | [
"transformers",
"gguf",
"gemma3",
"image-text-to-text",
"unsloth",
"gemma",
"google",
"en",
"arxiv:1905.07830",
"arxiv:1905.10044",
"arxiv:1911.11641",
"arxiv:1904.09728",
"arxiv:1705.03551",
"arxiv:1911.01547",
"arxiv:1907.10641",
"arxiv:1903.00161",
"arxiv:2009.03300",
"arxiv:230... | 2025-03-12T09:04:23 | 2025-04-12T03:35:15 | 35,605 | 39 | ---
base_model: google/gemma-3-4b-it
language:
- en
library_name: transformers
license: gemma
tags:
- unsloth
- transformers
- gemma3
- gemma
- google
---
<div>
<p style="margin-bottom: 0; margin-top: 0;">
<strong>See <a href="https://huggingface.co/collections/unsloth/gemma-3-67d12b7e8816ec6efa7e4e5b">our collec... | [
"QUESTION_ANSWERING",
"SUMMARIZATION"
] | Non_BioNLP | <div>
<p style="margin-bottom: 0; margin-top: 0;">
<strong>See <a href="https://huggingface.co/collections/unsloth/gemma-3-67d12b7e8816ec6efa7e4e5b">our collection</a> for all versions of Gemma 3 including GGUF, 4-bit & 16-bit formats.</strong>
</p>
<p style="margin-bottom: 0;">
<em><a href="https://docs.... | {"base_model": "google/gemma-3-4b-it", "language": ["en"], "library_name": "transformers", "license": "gemma", "tags": ["unsloth", "transformers", "gemma3", "gemma", "google"]} |
caldana/distilbert-base-uncased-finetuned-emotion | caldana | text-classification | [
"transformers",
"pytorch",
"tensorboard",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:emotion",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 2022-05-31T22:16:58 | 2022-05-31T23:07:12 | 10 | 0 | ---
datasets:
- emotion
license: apache-2.0
metrics:
- accuracy
- f1
tags:
- generated_from_trainer
model-index:
- name: distilbert-base-uncased-finetuned-emotion
results:
- task:
type: text-classification
name: Text Classification
dataset:
name: emotion
type: emotion
args: default... | [
"TEXT_CLASSIFICATION"
] | Non_BioNLP |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co... | {"datasets": ["emotion"], "license": "apache-2.0", "metrics": ["accuracy", "f1"], "tags": ["generated_from_trainer"], "model-index": [{"name": "distilbert-base-uncased-finetuned-emotion", "results": [{"task": {"type": "text-classification", "name": "Text Classification"}, "dataset": {"name": "emotion", "type": "emotion... |
RichardErkhov/NbAiLab_-_nb-llama-3.2-1B-awq | RichardErkhov | null | [
"safetensors",
"llama",
"4-bit",
"awq",
"region:us"
] | 2025-01-13T05:27:54 | 2025-01-13T05:28:31 | 5 | 0 | ---
{}
---
Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
nb-llama-3.2-1B - AWQ
- Model creator: https://huggingface.co/NbAiLab/
- Original model: https://huggingface.co/N... | [
"TRANSLATION",
"SUMMARIZATION"
] | Non_BioNLP | Quantization made by Richard Erkhov.
[Github](https://github.com/RichardErkhov)
[Discord](https://discord.gg/pvy7H8DZMG)
[Request more models](https://github.com/RichardErkhov/quant_request)
nb-llama-3.2-1B - AWQ
- Model creator: https://huggingface.co/NbAiLab/
- Original model: https://huggingface.co/NbAiLab/nb-l... | {} |
chriswilson2020/distilbert-base-uncased-finetuned-emotion | chriswilson2020 | text-classification | [
"transformers",
"safetensors",
"distilbert",
"text-classification",
"generated_from_trainer",
"dataset:emotion",
"base_model:distilbert/distilbert-base-uncased",
"base_model:finetune:distilbert/distilbert-base-uncased",
"license:apache-2.0",
"model-index",
"autotrain_compatible",
"endpoints_co... | 2024-04-20T14:31:19 | 2024-04-20T14:52:36 | 4 | 0 | ---
base_model: distilbert-base-uncased
datasets:
- emotion
license: apache-2.0
metrics:
- accuracy
- f1
tags:
- generated_from_trainer
model-index:
- name: distilbert-base-uncased-finetuned-emotion
results:
- task:
type: text-classification
name: Text Classification
dataset:
name: emotion
... | [
"TEXT_CLASSIFICATION"
] | Non_BioNLP |
<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->
# distilbert-base-uncased-finetuned-emotion
This model is a fine-tuned version of [distilbert-base-uncased](https://huggingface.co... | {"base_model": "distilbert-base-uncased", "datasets": ["emotion"], "license": "apache-2.0", "metrics": ["accuracy", "f1"], "tags": ["generated_from_trainer"], "model-index": [{"name": "distilbert-base-uncased-finetuned-emotion", "results": [{"task": {"type": "text-classification", "name": "Text Classification"}, "datas... |
eligapris/kin-eng | eligapris | translation | [
"transformers",
"pytorch",
"m2m_100",
"text2text-generation",
"translation",
"en",
"rw",
"dataset:mbazaNLP/NMT_Tourism_parallel_data_en_kin",
"dataset:mbazaNLP/NMT_Education_parallel_data_en_kin",
"dataset:mbazaNLP/Kinyarwanda_English_parallel_dataset",
"license:cc-by-2.0",
"autotrain_compatib... | 2024-09-04T20:43:15 | 2024-09-05T00:00:07 | 0 | 0 | ---
datasets:
- mbazaNLP/NMT_Tourism_parallel_data_en_kin
- mbazaNLP/NMT_Education_parallel_data_en_kin
- mbazaNLP/Kinyarwanda_English_parallel_dataset
language:
- en
- rw
library_name: transformers
license: cc-by-2.0
pipeline_tag: translation
---
## Model Details
### Model Description
<!-- Provide a longer summary o... | [
"TRANSLATION"
] | Non_BioNLP | ## Model Details
### Model Description
<!-- Provide a longer summary of what this model is. -->
This is a Machine Translation model, finetuned from [NLLB](https://huggingface.co/facebook/nllb-200-distilled-1.3B)-200's distilled 1.3B model, it is meant to be used in machine translation for education-related data.
... | {"datasets": ["mbazaNLP/NMT_Tourism_parallel_data_en_kin", "mbazaNLP/NMT_Education_parallel_data_en_kin", "mbazaNLP/Kinyarwanda_English_parallel_dataset"], "language": ["en", "rw"], "library_name": "transformers", "license": "cc-by-2.0", "pipeline_tag": "translation"} |
Helsinki-NLP/opus-mt-tc-base-ro-uk | Helsinki-NLP | translation | [
"transformers",
"pytorch",
"tf",
"safetensors",
"marian",
"text2text-generation",
"translation",
"opus-mt-tc",
"ro",
"uk",
"license:cc-by-4.0",
"model-index",
"autotrain_compatible",
"endpoints_compatible",
"region:us"
] | 2022-03-24T12:41:59 | 2023-10-10T21:36:02 | 20 | 0 | ---
language:
- ro
- uk
license: cc-by-4.0
tags:
- translation
- opus-mt-tc
model-index:
- name: opus-mt-tc-base-ro-uk
results:
- task:
type: translation
name: Translation ron-ukr
dataset:
name: flores101-devtest
type: flores_101
args: ron ukr devtest
metrics:
- type: bleu
... | [
"TRANSLATION"
] | Non_BioNLP | # opus-mt-tc-base-ro-uk
Neural machine translation model for translating from Romanian (ro) to Ukrainian (uk).
This model is part of the [OPUS-MT project](https://github.com/Helsinki-NLP/Opus-MT), an effort to make neural machine translation models widely available and accessible for many languages in the world. All ... | {"language": ["ro", "uk"], "license": "cc-by-4.0", "tags": ["translation", "opus-mt-tc"], "model-index": [{"name": "opus-mt-tc-base-ro-uk", "results": [{"task": {"type": "translation", "name": "Translation ron-ukr"}, "dataset": {"name": "flores101-devtest", "type": "flores_101", "args": "ron ukr devtest"}, "metrics": [... |