---
datasets:
- namelessai/helply
base_model: trillionlabs/Trillion-7B-preview
library_name: transformers
tags:
- psychology
- medical
- chat
- instruction
license: mit
language:
- en
- ko
---
# Model Card for TrillionHelp
**TrillionHelp** uses `trillionlabs/Trillion-7B-preview` as the backbone.
## Model Details
This model is fine-tuned on the `namelessai/helply` dataset, which is designed to enhance mental health reasoning capabilities.
### Model Description
This model was fine-tuned to assist psychologists in supporting their patients.
- **Developed by:** Alex Scott
- **Model type:** Language Model, Adapter Model (available in a folder in the model repo)
- **Finetuned from model:** trillionlabs/Trillion-7B-preview
## Usage (Adapter Only, full model snippet coming soon)
Use the code snippet below to load the base model and apply the adapter for inference:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel
# Load the base model
base_model_name = "trillionlabs/Trillion-7B-preview"
adapter_path = "/path/to/adapter" # Replace with actual adapter path
tokenizer = AutoTokenizer.from_pretrained(base_model_name)
base_model = AutoModelForCausalLM.from_pretrained(base_model_name)
# Apply the adapter
model = PeftModel.from_pretrained(base_model, adapter_path)
model = model.merge_and_unload()
# Run inference
input_text = "Your input text here"
inputs = tokenizer(input_text, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
``` |
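For intuition on what `merge_and_unload()` does: assuming the adapter is a LoRA adapter (PEFT's default), merging folds the low-rank update back into the frozen base weight, `W' = W + (alpha / r) * B @ A`, so the merged model needs no adapter at inference time. A minimal NumPy sketch with hypothetical shapes (the real rank `r` and `alpha` come from the adapter's config):

```python
import numpy as np

# Hypothetical dimensions for illustration only.
d_out, d_in, r, alpha = 6, 4, 2, 4
rng = np.random.default_rng(0)

W = rng.standard_normal((d_out, d_in))  # frozen base weight
A = rng.standard_normal((r, d_in))      # LoRA "down" projection
B = rng.standard_normal((d_out, r))     # LoRA "up" projection
scaling = alpha / r

# Merging folds the adapter into the base weight matrix.
W_merged = W + scaling * (B @ A)

# The merged weight applied alone matches base-plus-adapter applied separately.
x = rng.standard_normal(d_in)
assert np.allclose(W_merged @ x, W @ x + scaling * (B @ (A @ x)))
```

Merging trades adapter flexibility for simpler deployment: the result is a plain model you can serve without `peft` installed.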