Trouter-Library committed
Commit 851f952 · verified · 1 Parent(s): 460b28f

Update README.md

Files changed (1)
  1. README.md (+46, -38)
README.md CHANGED
@@ -24,50 +24,58 @@ Helion-V1 is a conversational AI model designed to be helpful, harmless, and honest.
  - **Language(s):** English
  - **License:** Apache 2.0
  - **Finetuned from:** Troviku-1.1
+ - **Model Size:** 7B parameters
+ - **Context Length:** 4096 tokens

- ## Intended Use
-
- Helion-V1 is designed for:
- - General conversational assistance
- - Question answering
- - Creative writing support
- - Educational purposes
- - Coding assistance
-
- ### Direct Use
-
- The model can be used directly for chat-based applications where safety and helpfulness are priorities.
-
- ### Out-of-Scope Use
-
- This model should NOT be used for:
- - Generating harmful, illegal, or unethical content
- - Medical, legal, or financial advice without proper disclaimers
- - Impersonating individuals or organizations
- - Creating misleading or false information
-
- ## Safeguards
-
- Helion-V1 includes safety mechanisms to:
- - Refuse harmful requests
- - Avoid generating dangerous content
- - Maintain respectful and helpful interactions
- - Protect user privacy and safety
-
- ## Usage
-
- ```python
+ ## Model Capabilities
+
+ - Natural conversation and dialogue
+ - Knowledge synthesis and explanation
+ - Code generation and debugging
+ - Creative writing and content creation
+ - Problem solving and reasoning
+ - Safe and ethical responses
+
+ ## Installation
+
+ ```bash
+ pip install transformers torch accelerate
+ ```
+
+ ## Usage
+
+ ```python
  from transformers import AutoTokenizer, AutoModelForCausalLM
+ import torch

  model_name = "DeepXR/Helion-V1"
  tokenizer = AutoTokenizer.from_pretrained(model_name)
- model = AutoModelForCausalLM.from_pretrained(model_name)
-
- messages = [
-     {"role": "user", "content": "Hello! Can you help me with a question?"}
- ]
-
- input_ids = tokenizer.apply_chat_template(messages, return_tensors="pt")
- output = model.generate(input_ids, max_length=512)
- response = tokenizer.decode(output[0], skip_special_tokens=True)
+ model = AutoModelForCausalLM.from_pretrained(
+     model_name,
+     torch_dtype=torch.float16,
+     device_map="auto"
+ )
+
+ def chat_with_helion(prompt, max_length=512, temperature=0.7):
+     messages = [
+         {"role": "user", "content": prompt}
+     ]
+
+     input_ids = tokenizer.apply_chat_template(
+         messages,
+         return_tensors="pt"
+     ).to(model.device)
+
+     with torch.no_grad():
+         outputs = model.generate(
+             input_ids,
+             max_length=max_length,
+             temperature=temperature,
+             do_sample=True,
+             pad_token_id=tokenizer.eos_token_id,
+             top_p=0.9,
+             repetition_penalty=1.1
+         )
+
+     response = tokenizer.decode(outputs[0], skip_special_tokens=True)
+     return response
+
+ # Example usage
+ prompt = "Explain the concept of machine learning in simple terms."
+ response = chat_with_helion(prompt)
  print(response)
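
A natural follow-up to the single-turn `chat_with_helion` helper this commit adds is multi-turn use. The sketch below is an editor's illustration, not part of the commit: it assumes Helion-V1's chat template accepts alternating `user`/`assistant` roles, and it uses `add_generation_prompt=True` plus prompt-slicing (both standard `transformers` idioms) so only the model's new reply is decoded.

```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

model_name = "DeepXR/Helion-V1"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
    model_name, torch_dtype=torch.float16, device_map="auto"
)

# Running conversation history; assumes the chat template accepts
# alternating user/assistant turns.
history = []

def chat_turn(user_message, max_new_tokens=256):
    history.append({"role": "user", "content": user_message})
    input_ids = tokenizer.apply_chat_template(
        history,
        add_generation_prompt=True,  # append the assistant prefix before generating
        return_tensors="pt",
    ).to(model.device)
    with torch.no_grad():
        outputs = model.generate(
            input_ids,
            max_new_tokens=max_new_tokens,
            do_sample=True,
            temperature=0.7,
            top_p=0.9,
            pad_token_id=tokenizer.eos_token_id,
        )
    # Decode only the newly generated tokens, not the echoed prompt.
    reply = tokenizer.decode(
        outputs[0][input_ids.shape[-1]:], skip_special_tokens=True
    )
    history.append({"role": "assistant", "content": reply})
    return reply

print(chat_turn("What is overfitting?"))
print(chat_turn("How do I detect it in practice?"))
```

Decoding only the generated suffix avoids re-printing the whole templated prompt, which the README's `chat_with_helion` currently includes in its return value.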
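The card now also pins **Context Length** at 4096 tokens, so a long `history` will eventually overflow the window. A minimal guard, reusing the `tokenizer` and `history` from the sketch above (the 4096 figure comes from the card; the headroom value is an arbitrary choice):

```python
CONTEXT_LENGTH = 4096   # from the model card
REPLY_HEADROOM = 512    # arbitrary room reserved for the generated reply

def trim_history(history):
    # Drop the oldest turns until the templated prompt fits the context window.
    # apply_chat_template returns token ids here (tokenize=True is the default),
    # so len() gives the prompt length in tokens.
    while len(history) > 1:
        prompt_tokens = tokenizer.apply_chat_template(
            history, add_generation_prompt=True
        )
        if len(prompt_tokens) <= CONTEXT_LENGTH - REPLY_HEADROOM:
            break
        history = history[1:]  # discard the oldest message first
    return history
```

Dropping whole turns, rather than truncating mid-message, keeps the user/assistant alternation intact, which matters if the chat template enforces strict role ordering.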