Commit f2424ac by basiphobe (verified) · Parent: 425ebcd

Update README with both LoRA and merged model usage options

Files changed (1): README.md (+30 −2)
README.md CHANGED
@@ -22,11 +22,33 @@ The model was trained on curated SCI-related content including:
 
 ## Usage
 
+This repository contains both the LoRA adapter and the full merged model. Choose the option that works best for you:
+
+### Option 1: Use the Full Merged Model (Recommended)
 ```python
 from transformers import AutoModelForCausalLM, AutoTokenizer
 
-model = AutoModelForCausalLM.from_pretrained("your-username/sci-assistant-7b")
-tokenizer = AutoTokenizer.from_pretrained("your-username/sci-assistant-7b")
+model = AutoModelForCausalLM.from_pretrained("basiphobe/sci-assistant")
+tokenizer = AutoTokenizer.from_pretrained("basiphobe/sci-assistant")
+
+# Example usage
+prompt = "What are the signs of autonomic dysreflexia?"
+inputs = tokenizer(prompt, return_tensors="pt")
+outputs = model.generate(**inputs, max_length=200)
+response = tokenizer.decode(outputs[0], skip_special_tokens=True)
+```
+
+### Option 2: Use the LoRA Adapter (Smaller Download)
+```python
+from transformers import AutoModelForCausalLM, AutoTokenizer
+from peft import PeftModel
+
+# Load base model
+base_model = AutoModelForCausalLM.from_pretrained("teknium/OpenHermes-2.5-Mistral-7B")
+tokenizer = AutoTokenizer.from_pretrained("teknium/OpenHermes-2.5-Mistral-7B")
+
+# Load LoRA adapter
+model = PeftModel.from_pretrained(base_model, "basiphobe/sci-assistant")
 
 # Example usage
 prompt = "What are the signs of autonomic dysreflexia?"
@@ -48,6 +70,12 @@ response = tokenizer.decode(outputs[0], skip_special_tokens=True)
 - Not a replacement for professional medical care
 - May not reflect the most recent medical developments
 
+## Files in this Repository
+
+- **Full Merged Model**: Ready-to-use model files (`model-*.safetensors`, `config.json`, etc.)
+- **LoRA Adapter**: Smaller adapter files (`adapter_model.safetensors`, `adapter_config.json`)
+- **Tokenizer**: Shared tokenizer files for both options
+
 ## Technical Details
 
 - **Base Model**: teknium/OpenHermes-2.5-Mistral-7B