--- |
|
|
tags: |
|
|
- lora |
|
|
- peft |
|
|
- adapters |
|
|
- mistral |
|
|
- biomedical |
|
|
library_name: peft |
|
|
pipeline_tag: text-classification |
|
|
license: apache-2.0 |
|
|
base_model: mistralai/Mistral-7B-Instruct-v0.3 |
|
|
model-index: |
|
|
- name: MetappuccinoLLModel v1.0.0 |
|
|
  results: []
|
|
--- |
|
|
|
|
|
# MetappuccinoLLModel — Per-category LoRA adapters for SRA metadata extraction |
|
|
|
|
|
**LoRA adapters (one folder per category)** trained for **SRA metadata extraction and inference** in the [Metappuccino project](https://github.com/chumphati/Metappuccino). These adapters are not general-purpose dialogue models.
|
|
|
|
|
**Important**: Base model weights are **not included**. To use Metappuccino, also download the official base model: `mistralai/Mistral-7B-Instruct-v0.3`.
|
|
|
|
|
|
|
|
### Version |
|
|
|
|
|
v1.0.0 |
|
|
|
|
|
### Quickstart |
|
|
|
|
|
Download the adapters for use with Metappuccino:
|
|
```python |
|
|
from huggingface_hub import snapshot_download |
|
|
|
|
|
snapshot_download( |
|
|
repo_id="chumphati/MetappuccinoLLModel", |
|
|
    local_dir="<OUT_DIR_URL>/MetappuccinoLLModel",  # path to the output directory
|
|
resume_download=True, |
|
|
max_workers=4 |
|
|
) |
|
|
``` |
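If you want to load a single adapter outside the Metappuccino pipeline, a minimal PEFT sketch looks like the following. The adapter directory argument is a placeholder: pass the path to one of the per-category adapter folders downloaded above. This is an illustrative sketch, not part of the Metappuccino tool itself.

```python
# Sketch: attaching one per-category LoRA adapter to the base model.
# The adapter directory is a placeholder for one of the category
# subfolders downloaded from this repository.
def load_category_adapter(adapter_dir: str):
    # Heavy imports are kept inside the function so the sketch can be
    # read and imported without transformers/peft installed.
    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer
    from peft import PeftModel

    base_id = "mistralai/Mistral-7B-Instruct-v0.3"  # base weights, downloaded separately
    tokenizer = AutoTokenizer.from_pretrained(base_id)
    base = AutoModelForCausalLM.from_pretrained(base_id, torch_dtype=torch.bfloat16)
    # Attach the frozen base model's LoRA adapter weights from adapter_dir
    model = PeftModel.from_pretrained(base, adapter_dir)
    return tokenizer, model
```

Note that loading the 7B base model requires substantial memory; Metappuccino handles adapter selection per category internally.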
|
|
|
|
|
### Hyperparameters |
|
|
|
|
|
The hyperparameters for each adapter are listed in its folder, in the `adapter_config.json` file.
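For example, the core LoRA settings can be read programmatically from an adapter's `adapter_config.json`. The values below are illustrative only, not the actual trained settings; consult each adapter's own file:

```python
import json

# Illustrative adapter_config.json content; the real files ship in each
# adapter's folder and may use different values.
sample_config = json.loads("""
{
  "peft_type": "LORA",
  "r": 16,
  "lora_alpha": 32,
  "lora_dropout": 0.05,
  "target_modules": ["q_proj", "v_proj"]
}
""")

# The effective LoRA scaling applied to the adapter output is alpha / r
scaling = sample_config["lora_alpha"] / sample_config["r"]
print(scaling)  # 2.0 for these illustrative values
```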
|
|
|
|
|
### How to cite |
|
|
|
|
|
If you use this repository in your work, please cite: |
|
|
|
|
|
|
|
|
|
|
|
Related tool: [Metappuccino](https://github.com/chumphati/Metappuccino)
|
|
|