Dataset Viewer
Auto-converted to Parquet
Columns (type and observed range):

id: string (lengths 9–104)
author: string (lengths 3–36)
task_category: string (32 classes)
tags: sequence (lengths 1–4.05k)
created_time: timestamp[s] (2022-03-02 23:29:04 – 2025-04-10 08:38:38)
last_modified: timestamp[s] (2021-02-13 00:06:56 – 2025-04-18 02:54:43)
downloads: int64 (0–15.6M)
likes: int64 (0–4.86k)
README: string (lengths 44–1.01M)
matched_bigbio_names: sequence (lengths 1–8)
is_bionlp: string (3 classes)
model_cards: string (lengths 0–1M)
metadata: string (lengths 2–698k)
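The schema above maps directly onto plain Python records. As a minimal, self-contained sketch, the three rows below are copied from the preview further down this page; in practice the full Parquet split would be read with pandas.read_parquet or datasets.load_dataset, which is left out here to keep the example runnable offline:

```python
# Minimal sketch: filtering preview rows by the `is_bionlp` flag.
# The three records are copied from the dataset preview on this page.
rows = [
    {"id": "Baiming123/Calcu_Disease_Similarity", "is_bionlp": "BioNLP", "downloads": 0},
    {"id": "johnsnowlabs/JSL-MedMNX-7B-SFT", "is_bionlp": "BioNLP", "downloads": 2926},
    {"id": "kunkunhu/craft_mol", "is_bionlp": "Non_BioNLP", "downloads": 0},
]

# Keep BioNLP-tagged cards, most-downloaded first.
bionlp = sorted(
    (r for r in rows if r["is_bionlp"] == "BioNLP"),
    key=lambda r: r["downloads"],
    reverse=True,
)
print([r["id"] for r in bionlp])
```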
id: Baiming123/Calcu_Disease_Similarity
author: Baiming123
task_category: sentence-similarity
tags: [ "sentence-transformers", "pytorch", "bert", "sentence-similarity", "dataset:Baiming123/MeSHDS", "base_model:sentence-transformers/multi-qa-MiniLM-L6-cos-v1", "base_model:finetune:sentence-transformers/multi-qa-MiniLM-L6-cos-v1", "doi:10.57967/hf/3108", "autotrain_compatible", "text-embeddings-infe...
created_time: 2024-09-20T15:58:13
last_modified: 2024-12-14T10:10:29
downloads: 0
likes: 3
README: --- base_model: - sentence-transformers/multi-qa-MiniLM-L6-cos-v1 datasets: - Baiming123/MeSHDS pipeline_tag: sentence-similarity tags: - sentence-transformers - sentence-similarity --- # Model Description This is a [sentence-transformers](https://www.SBERT.net) model: it maps sentences & paragraphs to a 384-dimensio...
matched_bigbio_names: [ "MIRNA" ]
is_bionlp: BioNLP
model_cards: # Model Description This is a [sentence-transformers](https://www.SBERT.net) model: it maps sentences & paragraphs to a 384-dimensional dense vector space and can be used for tasks like clustering or semantic search. The 'Calcu_Disease_Similarity' model is designed to encode two disease terms and compute their **seman...
metadata: {"base_model": ["sentence-transformers/multi-qa-MiniLM-L6-cos-v1"], "datasets": ["Baiming123/MeSHDS"], "pipeline_tag": "sentence-similarity", "tags": ["sentence-transformers", "sentence-similarity"]}
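This card describes mapping disease terms to 384-dimensional dense vectors and comparing them by semantic similarity. A hedged sketch of that comparison step with toy vectors follows; using the actual model would require downloading Baiming123/Calcu_Disease_Similarity through the sentence-transformers library, so short stand-in vectors are used here instead:

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Toy 4-dimensional stand-ins for the model's 384-dimensional embeddings.
emb_a = [0.1, 0.3, 0.5, 0.2]
emb_b = [0.1, 0.3, 0.5, 0.2]
emb_c = [0.9, -0.2, 0.0, 0.1]

# Identical vectors give similarity 1.0 (up to float rounding).
print(cosine_similarity(emb_a, emb_b))
print(cosine_similarity(emb_a, emb_c))
```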
id: johnsnowlabs/JSL-MedMNX-7B-SFT
author: johnsnowlabs
task_category: text-generation
tags: [ "transformers", "safetensors", "mistral", "text-generation", "reward model", "RLHF", "medical", "conversational", "en", "license:cc-by-nc-nd-4.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
created_time: 2024-04-16T05:27:20
last_modified: 2024-04-18T19:25:47
downloads: 2,926
likes: 3
README: --- language: - en library_name: transformers license: cc-by-nc-nd-4.0 tags: - reward model - RLHF - medical --- # JSL-MedMNX-7B-SFT [<img src="https://repository-images.githubusercontent.com/104670986/2e728700-ace4-11ea-9cfc-f3e060b25ddf">](http://www.johnsnowlabs.com) JSL-MedMNX-7B-SFT is a 7 Billion parameter mod...
matched_bigbio_names: [ "MEDQA", "PUBMEDQA" ]
is_bionlp: BioNLP
model_cards: # JSL-MedMNX-7B-SFT [<img src="https://repository-images.githubusercontent.com/104670986/2e728700-ace4-11ea-9cfc-f3e060b25ddf">](http://www.johnsnowlabs.com) JSL-MedMNX-7B-SFT is a 7 Billion parameter model developed by [John Snow Labs](https://www.johnsnowlabs.com/). This model is SFT-finetuned on alpaca format 11...
metadata: {"language": ["en"], "library_name": "transformers", "license": "cc-by-nc-nd-4.0", "tags": ["reward model", "RLHF", "medical"]}
id: RichardErkhov/HPAI-BSC_-_Llama3-Aloe-8B-Alpha-gguf
author: RichardErkhov
task_category: null
tags: [ "gguf", "arxiv:2405.01886", "endpoints_compatible", "region:us", "conversational" ]
created_time: 2024-10-30T11:14:53
last_modified: 2024-10-30T15:06:18
downloads: 75
likes: 0
README: --- {} --- Quantization made by Richard Erkhov. [Github](https://github.com/RichardErkhov) [Discord](https://discord.gg/pvy7H8DZMG) [Request more models](https://github.com/RichardErkhov/quant_request) Llama3-Aloe-8B-Alpha - GGUF - Model creator: https://huggingface.co/HPAI-BSC/ - Original model: https://huggingfa...
matched_bigbio_names: [ "MEDQA", "PUBMEDQA" ]
is_bionlp: BioNLP
model_cards: Quantization made by Richard Erkhov. [Github](https://github.com/RichardErkhov) [Discord](https://discord.gg/pvy7H8DZMG) [Request more models](https://github.com/RichardErkhov/quant_request) Llama3-Aloe-8B-Alpha - GGUF - Model creator: https://huggingface.co/HPAI-BSC/ - Original model: https://huggingface.co/HPAI-...
metadata: {}
id: Rodrigo1771/bsc-bio-ehr-es-symptemist-word2vec-85-ner
author: Rodrigo1771
task_category: token-classification
tags: [ "transformers", "tensorboard", "safetensors", "roberta", "token-classification", "generated_from_trainer", "dataset:Rodrigo1771/symptemist-85-ner", "base_model:PlanTL-GOB-ES/bsc-bio-ehr-es", "base_model:finetune:PlanTL-GOB-ES/bsc-bio-ehr-es", "license:apache-2.0", "model-index", "autotrain_com...
created_time: 2024-09-04T19:00:28
last_modified: 2024-09-04T19:11:15
downloads: 13
likes: 0
README: --- base_model: PlanTL-GOB-ES/bsc-bio-ehr-es datasets: - Rodrigo1771/symptemist-85-ner library_name: transformers license: apache-2.0 metrics: - precision - recall - f1 - accuracy tags: - token-classification - generated_from_trainer model-index: - name: output results: - task: type: token-classification ...
matched_bigbio_names: [ "SYMPTEMIST" ]
is_bionlp: BioNLP
model_cards: <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # output This model is a fine-tuned version of [PlanTL-GOB-ES/bsc-bio-ehr-es](https://huggingface.co/PlanTL-GOB-ES/bsc-bio-ehr-es)...
metadata: {"base_model": "PlanTL-GOB-ES/bsc-bio-ehr-es", "datasets": ["Rodrigo1771/symptemist-85-ner"], "library_name": "transformers", "license": "apache-2.0", "metrics": ["precision", "recall", "f1", "accuracy"], "tags": ["token-classification", "generated_from_trainer"], "model-index": [{"name": "output", "results": [{"task":...
id: kunkunhu/craft_mol
author: kunkunhu
task_category: null
tags: [ "region:us" ]
created_time: 2025-01-25T15:38:37
last_modified: 2025-01-26T09:08:28
downloads: 0
likes: 0
README: --- {} --- # CRAFT CRAFT: Consistent Representational Fusion of Three Molecular Modalities
matched_bigbio_names: [ "CRAFT" ]
is_bionlp: Non_BioNLP
model_cards: # CRAFT CRAFT: Consistent Representational Fusion of Three Molecular Modalities
metadata: {}
id: jiey2/DISC-MedLLM
author: jiey2
task_category: text-generation
tags: [ "transformers", "pytorch", "baichuan", "text-generation", "medical", "custom_code", "zh", "dataset:Flmc/DISC-Med-SFT", "arxiv:2308.14346", "license:apache-2.0", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
created_time: 2023-11-04T10:43:52
last_modified: 2023-11-04T10:48:48
downloads: 16
likes: 1
README: --- datasets: - Flmc/DISC-Med-SFT language: - zh license: apache-2.0 tags: - medical --- This repository contains DISC-MedLLM, built with Baichuan-13b-base as the base model. **Please note that due to the ongoing development of the project, the model weights in this repository may differ from those in our currentl...
matched_bigbio_names: [ "MEDDIALOG" ]
is_bionlp: BioNLP
model_cards: This repository contains DISC-MedLLM, built with Baichuan-13b-base as the base model. **Please note that due to the ongoing development of the project, the model weights in this repository may differ from those in our currently deployed demo.** Check [DISC-MedLLM](https://github.com/FudanDISC/DISC-MedLLM) for mor...
metadata: {"datasets": ["Flmc/DISC-Med-SFT"], "language": ["zh"], "license": "apache-2.0", "tags": ["medical"]}
id: ManoloPueblo/LLM_MERGE_CC4
author: ManoloPueblo
task_category: null
tags: [ "safetensors", "mistral", "merge", "mergekit", "lazymergekit", "llm-merge-cc4", "OpenPipe/mistral-ft-optimized-1218", "mlabonne/NeuralHermes-2.5-Mistral-7B", "license:apache-2.0", "region:us" ]
created_time: 2024-11-10T13:55:30
last_modified: 2024-11-10T14:01:19
downloads: 6
likes: 1
README: --- license: apache-2.0 tags: - merge - mergekit - lazymergekit - llm-merge-cc4 - OpenPipe/mistral-ft-optimized-1218 - mlabonne/NeuralHermes-2.5-Mistral-7B --- # LLM_MERGE_CC4 LLM_MERGE_CC4 is a merge of the following models, created by ManoloPueblo using [mergekit](https://github.com/cg123/mergekit): * [OpenPipe/...
matched_bigbio_names: [ "CAS" ]
is_bionlp: Non_BioNLP
model_cards: # LLM_MERGE_CC4 LLM_MERGE_CC4 is a merge of the following models, created by ManoloPueblo using [mergekit](https://github.com/cg123/mergekit): * [OpenPipe/mistral-ft-optimized-1218](https://huggingface.co/OpenPipe/mistral-ft-optimized-1218) * [mlabonne/NeuralHermes-2.5-Mistral-7B](https://huggingface.co/mlabonne/N...
metadata: {"license": "apache-2.0", "tags": ["merge", "mergekit", "lazymergekit", "llm-merge-cc4", "OpenPipe/mistral-ft-optimized-1218", "mlabonne/NeuralHermes-2.5-Mistral-7B"]}
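The metadata column stores each card's YAML front matter re-serialized as a JSON string, so it can be parsed with the standard library alone. A minimal sketch using this row's value verbatim:

```python
import json

# The `metadata` value for this row, copied from the preview.
raw = (
    '{"license": "apache-2.0", "tags": ["merge", "mergekit", "lazymergekit", '
    '"llm-merge-cc4", "OpenPipe/mistral-ft-optimized-1218", '
    '"mlabonne/NeuralHermes-2.5-Mistral-7B"]}'
)

meta = json.loads(raw)
print(meta["license"])   # apache-2.0
print(len(meta["tags"]))  # 6
```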
id: razent/SciFive-large-Pubmed_PMC-MedNLI
author: razent
task_category: text2text-generation
tags: [ "transformers", "pytorch", "tf", "t5", "text2text-generation", "mednli", "en", "dataset:pubmed", "dataset:pmc/open_access", "arxiv:2106.03598", "autotrain_compatible", "text-generation-inference", "endpoints_compatible", "region:us" ]
created_time: 2022-03-20T17:24:33
last_modified: 2022-03-22T04:05:21
downloads: 1,302
likes: 2
README: --- datasets: - pubmed - pmc/open_access language: - en tags: - text2text-generation - mednli widget: - text: 'mednli: sentence1: In the ED, initial VS revealed T 98.9, HR 73, BP 121/90, RR 15, O2 sat 98% on RA. sentence2: The patient is hemodynamically stable' --- # SciFive Pubmed+PMC Large on MedNLI ## Introduc...
matched_bigbio_names: [ "MEDNLI" ]
is_bionlp: BioNLP
model_cards: # SciFive Pubmed+PMC Large on MedNLI ## Introduction Finetuned SciFive Pubmed+PMC Large model achieved state-of-the-art results on [MedNLI (Medical Natural Language Inference)](https://paperswithcode.com/sota/natural-language-inference-on-mednli) Paper: [SciFive: a text-to-text transformer model for biomedical lite...
metadata: {"datasets": ["pubmed", "pmc/open_access"], "language": ["en"], "tags": ["text2text-generation", "mednli"], "widget": [{"text": "mednli: sentence1: In the ED, initial VS revealed T 98.9, HR 73, BP 121/90, RR 15, O2 sat 98% on RA. sentence2: The patient is hemodynamically stable"}]}
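The widget entry in this card shows the input format the model expects: a "mednli:" prefix followed by labeled sentences. A small sketch of building such an input string; the helper name is ours, not part of the SciFive release:

```python
def format_mednli_input(sentence1: str, sentence2: str) -> str:
    """Build a MedNLI prompt in the format shown in the model card's widget.

    (This helper is illustrative; SciFive itself ships no such function.)
    """
    return f"mednli: sentence1: {sentence1} sentence2: {sentence2}"

prompt = format_mednli_input(
    "In the ED, initial VS revealed T 98.9, HR 73, BP 121/90, RR 15, O2 sat 98% on RA.",
    "The patient is hemodynamically stable",
)
print(prompt)
```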
id: adipanda/makima-simpletuner-lora-2
author: adipanda
task_category: text-to-image
tags: [ "diffusers", "flux", "flux-diffusers", "text-to-image", "simpletuner", "safe-for-work", "lora", "template:sd-lora", "lycoris", "base_model:black-forest-labs/FLUX.1-dev", "base_model:adapter:black-forest-labs/FLUX.1-dev", "license:other", "region:us" ]
created_time: 2024-10-12T01:00:13
last_modified: 2024-10-13T19:26:05
downloads: 16
likes: 0
README: --- base_model: black-forest-labs/FLUX.1-dev license: other tags: - flux - flux-diffusers - text-to-image - diffusers - simpletuner - safe-for-work - lora - template:sd-lora - lycoris inference: true widget: - text: unconditional (blank prompt) parameters: negative_prompt: blurry, cropped, ugly output: url:...
matched_bigbio_names: [ "BEAR" ]
is_bionlp: Non_BioNLP
model_cards: # makima-simpletuner-lora-2 This is a LyCORIS adapter derived from [black-forest-labs/FLUX.1-dev](https://huggingface.co/black-forest-labs/FLUX.1-dev). No validation prompt was used during training. None ## Validation settings - CFG: `3.5` - CFG Rescale: `0.0` - Steps: `20` - Sampler: `None` - Seed: `42` - Re...
metadata: {"base_model": "black-forest-labs/FLUX.1-dev", "license": "other", "tags": ["flux", "flux-diffusers", "text-to-image", "diffusers", "simpletuner", "safe-for-work", "lora", "template:sd-lora", "lycoris"], "inference": true, "widget": [{"text": "unconditional (blank prompt)", "parameters": {"negative_prompt": "blurry, cr...
id: sarahmiller137/BiomedNLP-PubMedBERT-base-uncased-abstract-fulltext-ft-ncbi-disease
author: sarahmiller137
task_category: token-classification
tags: [ "transformers", "pytorch", "safetensors", "bert", "token-classification", "named-entity-recognition", "en", "dataset:ncbi_disease", "license:cc", "autotrain_compatible", "endpoints_compatible", "region:us" ]
created_time: 2022-08-22T16:06:00
last_modified: 2023-03-23T15:57:02
downloads: 24
likes: 0
README: --- datasets: ncbi_disease language: en license: cc metrics: - precision - recall - f1 - accuracy tags: - named-entity-recognition - token-classification task: - named-entity-recognition - token-classification widget: - text: ' The risk of cancer, especially lymphoid neoplasias, is substantially elevated in A-T pat...
matched_bigbio_names: [ "NCBI DISEASE" ]
is_bionlp: BioNLP
model_cards: ## Model information: microsoft/BiomedNLP-PubMedBERT-base-uncased-abstract-fulltext model finetuned using the ncbi_disease dataset from the datasets library. ## Intended uses: This model is intended to be used for named entity recognition tasks. The model will identify disease entities in text. The model will pred...
metadata: {"datasets": "ncbi_disease", "language": "en", "license": "cc", "metrics": ["precision", "recall", "f1", "accuracy"], "tags": ["named-entity-recognition", "token-classification"], "task": ["named-entity-recognition", "token-classification"], "widget": [{"text": " The risk of cancer, especially lymphoid neoplasias, is s...
id: tsavage68/MedQA_L3_1000steps_1e7rate_03beta_CSFTDPO
author: tsavage68
task_category: text-generation
tags: [ "transformers", "safetensors", "llama", "text-generation", "trl", "dpo", "generated_from_trainer", "conversational", "base_model:tsavage68/MedQA_L3_1000steps_1e6rate_SFT", "base_model:finetune:tsavage68/MedQA_L3_1000steps_1e6rate_SFT", "license:llama3", "autotrain_compatible", "text-generati...
created_time: 2024-05-20T07:31:23
last_modified: 2024-05-23T22:54:22
downloads: 5
likes: 0
README: --- base_model: tsavage68/MedQA_L3_1000steps_1e6rate_SFT license: llama3 tags: - trl - dpo - generated_from_trainer model-index: - name: MedQA_L3_1000steps_1e7rate_03beta_CSFTDPO results: [] --- <!-- This model card has been generated automatically according to the information the Trainer had access to. You should p...
matched_bigbio_names: [ "MEDQA" ]
is_bionlp: BioNLP
model_cards: <!-- This model card has been generated automatically according to the information the Trainer had access to. You should probably proofread and complete it, then remove this comment. --> # MedQA_L3_1000steps_1e7rate_03beta_CSFTDPO This model is a fine-tuned version of [tsavage68/MedQA_L3_1000steps_1e6rate_SFT](https...
metadata: {"base_model": "tsavage68/MedQA_L3_1000steps_1e6rate_SFT", "license": "llama3", "tags": ["trl", "dpo", "generated_from_trainer"], "model-index": [{"name": "MedQA_L3_1000steps_1e7rate_03beta_CSFTDPO", "results": []}]}
id: mradermacher/Llama-3-VNTL-Vectors-i1-GGUF
author: mradermacher
task_category: null
tags: [ "transformers", "gguf", "mergekit", "merge", "en", "base_model:Cas-Warehouse/Llama-3-VNTL-Vectors", "base_model:quantized:Cas-Warehouse/Llama-3-VNTL-Vectors", "endpoints_compatible", "region:us", "imatrix", "conversational" ]
created_time: 2025-03-08T23:07:11
last_modified: 2025-03-09T01:00:08
downloads: 589
likes: 0
README: --- base_model: Cas-Warehouse/Llama-3-VNTL-Vectors language: - en library_name: transformers tags: - mergekit - merge quantized_by: mradermacher --- ## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: hf --> <!-- ### vocab_type: --> <!-- ### tags: nicoboss --> weig...
matched_bigbio_names: [ "CAS" ]
is_bionlp: Non_BioNLP
model_cards: ## About <!-- ### quantize_version: 2 --> <!-- ### output_tensor_quantised: 1 --> <!-- ### convert_type: hf --> <!-- ### vocab_type: --> <!-- ### tags: nicoboss --> weighted/imatrix quants of https://huggingface.co/Cas-Warehouse/Llama-3-VNTL-Vectors <!-- provided-files --> static quants are available at https://hugg...
metadata: {"base_model": "Cas-Warehouse/Llama-3-VNTL-Vectors", "language": ["en"], "library_name": "transformers", "tags": ["mergekit", "merge"], "quantized_by": "mradermacher"}
End of preview.
README.md exists but content is empty.
Downloads last month: 2