Update README.md

Helion-V1 is a conversational AI model designed to be helpful, harmless, and honest.
- **Language(s):** English
- **License:** Apache 2.0
- **Finetuned from:** Troviku-1.1
- **Model Size:** 7B parameters
- **Context Length:** 4096 tokens
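Because the context window is 4096 tokens, long prompts can silently crowd out room for the reply. A minimal sketch of a pre-flight length check (the `fits_context` helper and the 512-token output reserve are illustrative, not part of the model card):

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("DeepXR/Helion-V1")

def fits_context(prompt: str, max_context: int = 4096, reserve: int = 512) -> bool:
    # Count prompt tokens and keep `reserve` tokens free for the reply.
    n_prompt_tokens = len(tokenizer.encode(prompt))
    return n_prompt_tokens <= max_context - reserve

print(fits_context("Explain machine learning in simple terms."))  # True
```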
## Model Capabilities

- Natural conversation and dialogue
- Knowledge synthesis and explanation
- Code generation and debugging
- Creative writing and content creation
- Problem solving and reasoning
- Safe and ethical responses

## Installation

```bash
pip install transformers torch accelerate
```
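After installing, it can be worth confirming that the libraries import cleanly and that a GPU is visible; a quick sanity check (not part of the official instructions):

```python
# Sanity check: report library versions and GPU availability.
import torch
import transformers

print("transformers:", transformers.__version__)
print("torch:", torch.__version__, "| CUDA available:", torch.cuda.is_available())
```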
### Out-of-Scope Use

This model should NOT be used for:
- Generating harmful, illegal, or unethical content
- Medical, legal, or financial advice without proper disclaimers
- Impersonating individuals or organizations
- Creating misleading or false information
## Safeguards

Helion-V1 includes safety mechanisms to:
- Refuse harmful requests
- Avoid generating dangerous content
- Maintain respectful and helpful interactions
- Protect user privacy and safety
## Usage

```python
from transformers import AutoTokenizer, AutoModelForCausalLM
import torch

model_name = "DeepXR/Helion-V1"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype=torch.float16,
    device_map="auto"
)

def chat_with_helion(prompt, max_new_tokens=512, temperature=0.7):
    messages = [
        {"role": "user", "content": prompt}
    ]

    # Build the prompt with the model's chat template and append the
    # assistant header so generation starts a fresh reply.
    input_ids = tokenizer.apply_chat_template(
        messages,
        add_generation_prompt=True,
        return_tensors="pt"
    ).to(model.device)

    with torch.no_grad():
        outputs = model.generate(
            input_ids,
            max_new_tokens=max_new_tokens,
            temperature=temperature,
            do_sample=True,
            pad_token_id=tokenizer.eos_token_id,
            top_p=0.9,
            repetition_penalty=1.1
        )

    # Decode only the newly generated tokens, not the echoed prompt.
    response = tokenizer.decode(outputs[0][input_ids.shape[-1]:], skip_special_tokens=True)
    return response

# Example usage
prompt = "Explain the concept of machine learning in simple terms."
response = chat_with_helion(prompt)
print(response)
```
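For interactive use, tokens can be streamed to stdout as they are generated rather than returned in one block. A short sketch using the `TextStreamer` utility from `transformers`, reusing the `model` and `tokenizer` loaded above (this streaming example is illustrative, not part of the model card):

```python
from transformers import TextStreamer

# Print tokens to stdout as they are generated instead of waiting
# for the full completion.
streamer = TextStreamer(tokenizer, skip_prompt=True, skip_special_tokens=True)

input_ids = tokenizer.apply_chat_template(
    [{"role": "user", "content": "Write a haiku about the sea."}],
    add_generation_prompt=True,
    return_tensors="pt"
).to(model.device)

with torch.no_grad():
    model.generate(
        input_ids,
        max_new_tokens=128,
        do_sample=True,
        temperature=0.7,
        streamer=streamer
    )
```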