Helion-V1
```python
# Load model directly
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("DeepXR/Helion-V1")
model = AutoModelForCausalLM.from_pretrained("DeepXR/Helion-V1")

messages = [
    {"role": "user", "content": "Who are you?"},
]
inputs = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    tokenize=True,
    return_dict=True,
    return_tensors="pt",
).to(model.device)

outputs = model.generate(**inputs, max_new_tokens=40)
# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[-1]:]))
```
Helion-V1 is a conversational AI model designed to be helpful, harmless, and honest. The model focuses on providing assistance to users in a friendly and safe manner, with built-in safeguards to prevent harmful outputs.
Helion-V1 is designed for chat-based applications where safety and helpfulness are priorities, and it can be used directly in such settings.
This model should NOT be used for:
Helion-V1 includes safety mechanisms to:
```python
from transformers import AutoTokenizer, AutoModelForCausalLM

model_name = "DeepXR/Helion-V1"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

messages = [
    {"role": "user", "content": "Hello! Can you help me with a question?"}
]
# add_generation_prompt=True ends the prompt with the assistant-turn
# prefix so the model replies instead of continuing the user's turn.
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

# max_new_tokens bounds the length of the reply itself,
# unlike max_length, which also counts the prompt tokens.
output = model.generate(input_ids, max_new_tokens=512)
response = tokenizer.decode(output[0], skip_special_tokens=True)
print(response)
```
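Multi-turn use follows the same pattern: keep appending `user` and `assistant` turns to the `messages` list, then re-apply the chat template on the full history before each generation. A minimal sketch of the history bookkeeping is below; `chat_turn` and `generate_reply` are hypothetical helpers, where `generate_reply` stands in for the tokenize → `model.generate` → decode steps shown above.

```python
def chat_turn(history, user_text, generate_reply):
    """Append a user turn, produce a reply from the full history,
    record it as an assistant turn, and return it.

    `generate_reply` is any callable that maps the message history to a
    response string (e.g. the tokenizer/model calls from the snippet above).
    """
    history.append({"role": "user", "content": user_text})
    reply = generate_reply(history)
    history.append({"role": "assistant", "content": reply})
    return reply

# Demonstration with a stub generator (no model download needed):
history = []
echo = lambda msgs: f"You said: {msgs[-1]['content']}"
print(chat_turn(history, "Hello!", echo))      # You said: Hello!
print(chat_turn(history, "And again.", echo))  # You said: And again.
print(len(history))                            # 4 (two user, two assistant turns)
```

With a real backend, `generate_reply` would apply the chat template with `add_generation_prompt=True` and decode only the newly generated tokens, as in the snippets above.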
[Information about training data]
[Information about training procedure, hyperparameters, etc.]
[Information about evaluation metrics and results]
Helion-V1 has been developed with safety as a priority. However, users should:
```bibtex
@misc{helion-v1,
  author = {DeepXR},
  title = {Helion-V1: A Safe and Helpful Conversational AI},
  year = {2025},
  publisher = {HuggingFace},
  url = {https://huggingface.co/DeepXR/Helion-V1}
}
```
For questions or issues, please open an issue on the model repository or contact the development team.
```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("text-generation", model="DeepXR/Helion-V1")
messages = [
    {"role": "user", "content": "Who are you?"},
]
pipe(messages)
```