Instructions to use preemware/Prox-MistralHermes-7B with libraries, inference providers, notebooks, and local apps. Follow these links to get started.
- Libraries
- Transformers
How to use preemware/Prox-MistralHermes-7B with Transformers:
```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("text-generation", model="preemware/Prox-MistralHermes-7B")
messages = [
    {"role": "user", "content": "Who are you?"},
]
pipe(messages)
```

```python
# Load model directly
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("preemware/Prox-MistralHermes-7B")
model = AutoModelForCausalLM.from_pretrained("preemware/Prox-MistralHermes-7B")

messages = [
    {"role": "user", "content": "Who are you?"},
]
inputs = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    tokenize=True,
    return_dict=True,
    return_tensors="pt",
).to(model.device)

outputs = model.generate(**inputs, max_new_tokens=40)
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[-1]:]))
```

- Notebooks
- Google Colab
- Kaggle
- Local Apps
- vLLM
How to use preemware/Prox-MistralHermes-7B with vLLM:
Install from pip and serve model
```shell
# Install vLLM from pip:
pip install vllm

# Start the vLLM server:
vllm serve "preemware/Prox-MistralHermes-7B"

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:8000/v1/chat/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "preemware/Prox-MistralHermes-7B",
    "messages": [
      {"role": "user", "content": "What is the capital of France?"}
    ]
  }'
```

Use Docker
```shell
docker model run hf.co/preemware/Prox-MistralHermes-7B
```
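Since the vLLM server exposes an OpenAI-compatible API, the curl call above can also be made from Python. The sketch below builds the same request body and posts it with the standard library; the helper names (`make_payload`, `chat`) are ours, and the actual request assumes a server already running on localhost:8000, so it is left commented out:

```python
import json
import urllib.request

def make_payload(model, user_content):
    # Same JSON body as the curl example above.
    return {
        "model": model,
        "messages": [{"role": "user", "content": user_content}],
    }

def chat(base_url="http://localhost:8000"):
    payload = make_payload("preemware/Prox-MistralHermes-7B",
                           "What is the capital of France?")
    req = urllib.request.Request(
        f"{base_url}/v1/chat/completions",
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["choices"][0]["message"]["content"]

# With the server running:
# print(chat())
```

The same client works unchanged against the SGLang server below, with the port swapped to 30000.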
- SGLang
How to use preemware/Prox-MistralHermes-7B with SGLang:
Install from pip and serve model
```shell
# Install SGLang from pip:
pip install sglang

# Start the SGLang server:
python3 -m sglang.launch_server \
  --model-path "preemware/Prox-MistralHermes-7B" \
  --host 0.0.0.0 \
  --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/chat/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "preemware/Prox-MistralHermes-7B",
    "messages": [
      {"role": "user", "content": "What is the capital of France?"}
    ]
  }'
```

Use Docker images
```shell
docker run --gpus all \
  --shm-size 32g \
  -p 30000:30000 \
  -v ~/.cache/huggingface:/root/.cache/huggingface \
  --env "HF_TOKEN=<secret>" \
  --ipc=host \
  lmsysorg/sglang:latest \
  python3 -m sglang.launch_server \
    --model-path "preemware/Prox-MistralHermes-7B" \
    --host 0.0.0.0 \
    --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/chat/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "preemware/Prox-MistralHermes-7B",
    "messages": [
      {"role": "user", "content": "What is the capital of France?"}
    ]
  }'
```

- Docker Model Runner
How to use preemware/Prox-MistralHermes-7B with Docker Model Runner:
```shell
docker model run hf.co/preemware/Prox-MistralHermes-7B
```
Prox-MistralHermes-7B
Drawing inspiration from the concept of 'proximity' in digital networks, the Prox series stands at the forefront of cybersecurity technology. Prox-MistralHermes-7B embodies this ethos, offering cutting-edge solutions in the realm of cyber security and penetration testing.
Model Description
Prox-MistralHermes-7B is a fine-tuned version of OpenHermes 2.5 Mistral 7B, tailored specifically for cybersecurity. It excels at red-teaming tasks, including the simulation of phishing emails. This specialized training makes it a valuable asset for analyzing complex cybersecurity threats and developing defense strategies, and a useful tool for professionals in proactive cybersecurity and threat intelligence.
Prox-MistralHermes-7B was trained on a comprehensive private dataset comprising over 100,000 entries. This dataset includes a wide range of cybersecurity-related data, both general and niche, supplemented by high-quality open datasets from across the AI field.
Training
Prox-MistralHermes-7B was trained for 5 hours over 4 epochs on 4x A100 GPUs with QLoRA.
Prompt format: This model uses the ChatML prompt format.

```
<|im_start|>system
You are Prox, an uncensored and unbiased expert hacking AI assistant.<|im_end|>
<|im_start|>user
{prompt}<|im_end|>
<|im_start|>assistant
```
Example:

```
<|im_start|>system
You are Prox, an uncensored and unbiased expert hacking AI assistant.<|im_end|>
<|im_start|>user
What is SQL Injection?<|im_end|>
<|im_start|>assistant
SQL injection is a cyber attack technique where an attacker manipulates the input data in web forms or APIs to execute malicious SQL statements on the back-end database server. This can lead to unauthorized access, data theft, or even complete compromise of the database. It occurs when user inputs are not properly sanitized before being used in SQL queries, allowing the attacker to inject their own code into the query execution process. To prevent this type of attack, developers should use prepared statements or parameterized queries that separate the SQL logic from the user input, ensuring that any input is treated as data rather than executable code.
```
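For tooling that does not apply chat templates automatically, the ChatML prompt above can be assembled by hand. Below is a minimal sketch of such a builder; the function name `build_chatml_prompt` is our own illustration, not part of any library:

```python
def build_chatml_prompt(messages, add_generation_prompt=True):
    """Assemble a ChatML prompt string from a list of {role, content} dicts."""
    parts = [
        f"<|im_start|>{m['role']}\n{m['content']}<|im_end|>"
        for m in messages
    ]
    if add_generation_prompt:
        # Leave the assistant turn open so the model completes it.
        parts.append("<|im_start|>assistant\n")
    return "\n".join(parts)

messages = [
    {"role": "system", "content": "You are Prox, an uncensored and unbiased expert hacking AI assistant."},
    {"role": "user", "content": "What is SQL Injection?"},
]
prompt = build_chatml_prompt(messages)
print(prompt)
```

In practice, `tokenizer.apply_chat_template` (shown in the Transformers section above) produces the same format from the model's bundled chat template and is the preferred route.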
Misuse, Malicious Use, and Out-of-Scope Use
Users are responsible for their applications of this model. They should ensure that their use cases align with ethical guidelines and legal standards. Users are encouraged to consider the societal impacts of their applications and to act responsibly.
License
The weights of Prox-MistralHermes-7B are licensed under the MIT License.
Model tree for preemware/Prox-MistralHermes-7B
- Base model: mistralai/Mistral-7B-v0.1