
Zyora-DEV-32B

Zyora Labs

Code Generation & Security Scanning Model

Try the API • Documentation • Website

Model Description

Zyora-DEV-32B is a fine-tuned large language model specialized for:

  • Code Generation: Generate high-quality code in Python, JavaScript, Go, Rust, Java, C++, and more
  • Security Scanning: Detect vulnerabilities including SQL injection, XSS, command injection, path traversal
  • CWE Classification: Identify and classify vulnerabilities using Common Weakness Enumeration (CWE) IDs
  • Auto-Remediation: Suggest fixes for detected security issues

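To illustrate the kind of remediation the model is trained to suggest (this example is illustrative, not actual model output): the standard fix for SQL injection (CWE-89) is to replace string interpolation with a parameterized query, which treats user input as data rather than executable SQL.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'admin')")

user_input = "alice' OR '1'='1"  # attacker-controlled value

# Vulnerable (CWE-89): user input interpolated directly into the SQL string,
# so the injected OR clause matches every row
vulnerable = f"SELECT role FROM users WHERE name = '{user_input}'"
leaked = conn.execute(vulnerable).fetchall()  # returns [('admin',)]

# Remediated: a parameterized query binds the input as a literal value,
# so no row matches the injected string
safe = conn.execute(
    "SELECT role FROM users WHERE name = ?", (user_input,)
).fetchall()  # returns []

print(leaked, safe)
```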
Model Details

| Property | Value |
|---|---|
| Base Model | Qwen2.5-Coder-32B-Instruct |
| Parameters | 32.5B |
| Context Length | 32,768 tokens |
| Fine-tuning Method | LoRA (merged) |
| Training Data | Curated code + security datasets |
| License | Zyora Community License |

API Access

Zyora-DEV-32B is available via our OpenAI-compatible API:

```bash
curl -X POST https://app.zyoralabs.com/v1/chat/completions \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -d '{
    "model": "Zyora-DEV-32B",
    "messages": [
      {"role": "user", "content": "Write a Python function to merge two sorted lists"}
    ],
    "max_tokens": 512
  }'
```

Pilot Program: Register at app.zyoralabs.com for 100,000 tokens/month free.
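The same request can be built from Python. A minimal sketch using the standard library (the payload mirrors the curl example above; actually sending it requires a valid API key, so the POST is left as a comment):

```python
import json

API_URL = "https://app.zyoralabs.com/v1/chat/completions"

# Payload from the curl example above
payload = {
    "model": "Zyora-DEV-32B",
    "messages": [
        {"role": "user", "content": "Write a Python function to merge two sorted lists"}
    ],
    "max_tokens": 512,
}
headers = {
    "Content-Type": "application/json",
    "Authorization": "Bearer YOUR_API_KEY",  # replace with your key
}

body = json.dumps(payload)
# To send the request (requires network access and a real key):
#   import requests
#   resp = requests.post(API_URL, data=body, headers=headers)
#   print(resp.json()["choices"][0]["message"]["content"])
```

Because the endpoint is OpenAI-compatible, the official `openai` Python SDK should also work by pointing its `base_url` at `https://app.zyoralabs.com/v1`.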

Usage with Transformers

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "zyoralabs/Zyora-DEV-32B"

tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype="auto",
    device_map="auto",
)

messages = [
    {"role": "user", "content": "Write a secure file upload handler in Python"}
]

text = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
inputs = tokenizer([text], return_tensors="pt").to(model.device)

outputs = model.generate(**inputs, max_new_tokens=512)
response = tokenizer.decode(outputs[0], skip_special_tokens=True)
print(response)
```
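For hardware planning, a rough lower bound on inference memory is parameter count × bytes per parameter (weights only; the KV cache and activations add more on top). A quick back-of-envelope check using the 32.5B figure from the table above:

```python
params = 32.5e9   # parameter count from the Model Details table
bytes_fp16 = 2    # F16/BF16 weights
bytes_int4 = 0.5  # approximate footprint of 4-bit quantized weights

fp16_gb = params * bytes_fp16 / 1e9  # weights alone in half precision
int4_gb = params * bytes_int4 / 1e9  # weights alone at 4-bit

print(f"fp16: {fp16_gb:.0f} GB, int4: {int4_gb:.2f} GB")
# fp16: 65 GB, int4: 16.25 GB
```

In practice this means full-precision inference needs multiple GPUs (or a single 80 GB card), while 4-bit quantization brings the weights within reach of a single 24 GB consumer GPU.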

License

This model is released under the Zyora Community License. See LICENSE for full terms.

