Instructions for using SVECTOR-CORPORATION/Theta-35-Preview with libraries, inference providers, notebooks, and local apps. Follow these links to get started.
- Libraries
- Transformers
How to use SVECTOR-CORPORATION/Theta-35-Preview with Transformers:
```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("text-generation", model="SVECTOR-CORPORATION/Theta-35-Preview")
messages = [
    {"role": "user", "content": "Who are you?"},
]
pipe(messages)
```

```python
# Load model directly
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained("SVECTOR-CORPORATION/Theta-35-Preview", dtype="auto")
```

- Notebooks
- Google Colab
- Kaggle
- Local Apps
- vLLM
How to use SVECTOR-CORPORATION/Theta-35-Preview with vLLM:
Install from pip and serve the model
```shell
# Install vLLM from pip:
pip install vllm

# Start the vLLM server:
vllm serve "SVECTOR-CORPORATION/Theta-35-Preview"

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:8000/v1/chat/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "SVECTOR-CORPORATION/Theta-35-Preview",
    "messages": [
      {"role": "user", "content": "What is the capital of France?"}
    ]
  }'
```

Use Docker

```shell
docker model run hf.co/SVECTOR-CORPORATION/Theta-35-Preview
```
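The curl request above can also be issued from Python. The following is a minimal sketch using only the standard library; the endpoint URL assumes the default vLLM port from the serve command above, and `build_chat_request`/`chat` are illustrative helper names, not part of vLLM:

```python
import json
import urllib.request

def build_chat_request(model, messages):
    """Build an OpenAI-compatible /v1/chat/completions payload."""
    return {"model": model, "messages": messages}

payload = build_chat_request(
    "SVECTOR-CORPORATION/Theta-35-Preview",
    [{"role": "user", "content": "What is the capital of France?"}],
)

def chat(payload, url="http://localhost:8000/v1/chat/completions"):
    """POST the payload to a running vLLM server and return the reply text."""
    req = urllib.request.Request(
        url,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        body = json.load(resp)
    return body["choices"][0]["message"]["content"]

# chat(payload)  # requires the vLLM server from the step above to be running
```

Because the server speaks the OpenAI API, any OpenAI-compatible client SDK pointed at `http://localhost:8000/v1` should work the same way.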
- SGLang
How to use SVECTOR-CORPORATION/Theta-35-Preview with SGLang:
Install from pip and serve the model
```shell
# Install SGLang from pip:
pip install sglang

# Start the SGLang server:
python3 -m sglang.launch_server \
  --model-path "SVECTOR-CORPORATION/Theta-35-Preview" \
  --host 0.0.0.0 \
  --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/chat/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "SVECTOR-CORPORATION/Theta-35-Preview",
    "messages": [
      {"role": "user", "content": "What is the capital of France?"}
    ]
  }'
```

Use Docker images

```shell
docker run --gpus all \
  --shm-size 32g \
  -p 30000:30000 \
  -v ~/.cache/huggingface:/root/.cache/huggingface \
  --env "HF_TOKEN=<secret>" \
  --ipc=host \
  lmsysorg/sglang:latest \
  python3 -m sglang.launch_server \
    --model-path "SVECTOR-CORPORATION/Theta-35-Preview" \
    --host 0.0.0.0 \
    --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/chat/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "SVECTOR-CORPORATION/Theta-35-Preview",
    "messages": [
      {"role": "user", "content": "What is the capital of France?"}
    ]
  }'
```

- Docker Model Runner
How to use SVECTOR-CORPORATION/Theta-35-Preview with Docker Model Runner:
```shell
docker model run hf.co/SVECTOR-CORPORATION/Theta-35-Preview
```
Theta-35-Preview: Advanced Logical Reasoning AI Model
Introduction
Theta-35-Preview is an experimental research model developed by SVECTOR, specifically engineered to push the boundaries of logical reasoning and analytical capabilities. This model represents a significant leap in AI technology, designed to tackle complex reasoning tasks with unprecedented precision and depth. As a preview release, it demonstrates promising analytical abilities while having several important limitations:
- Language Mixing and Code-Switching: The model may mix languages or switch between them unexpectedly, affecting response clarity.
- Recursive Reasoning Loops: The model may enter circular reasoning patterns, leading to lengthy responses without a conclusive answer.
- Safety and Ethical Considerations: The model requires enhanced safety measures to ensure reliable and secure performance, and users should exercise caution when deploying it.
- Performance and Benchmark Limitations: The model excels in math and coding but has room for improvement in other areas, such as common sense reasoning and nuanced language understanding.
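One lightweight mitigation for the recursive-reasoning limitation is to cut generation off once the model starts repeating itself. The following is a hypothetical sketch, not part of any official Theta-35 tooling, that flags a repeated trailing n-gram in a token-ID sequence:

```python
def repeats_trailing_ngram(token_ids, n=8, window=64):
    """Return True if the last n tokens already appeared in the recent window,
    a crude signal that generation has entered a loop."""
    if len(token_ids) < 2 * n:
        return False
    tail = tuple(token_ids[-n:])
    recent = token_ids[-window:-n]
    return any(tuple(recent[i:i + n]) == tail for i in range(len(recent) - n + 1))

# A looping sequence triggers the check; a non-repeating one does not.
looping = [1, 2, 3, 4] * 6                            # "1 2 3 4" repeated
print(repeats_trailing_ngram(looping, n=4))           # True
print(repeats_trailing_ngram(list(range(30)), n=4))   # False
```

A check like this could be wired into a `transformers` `StoppingCriteria` so that generation halts as soon as it fires, rather than running to `max_new_tokens`.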
Key Features
Advanced Reasoning Capabilities
- State-of-the-art logical inference
- Deep analytical problem-solving
- Nuanced contextual understanding
Architectural Highlights
- 33 Billion Parameter Model
- Transformer-based architecture
- Advanced attention mechanisms
- Optimized for complex reasoning tasks
Technical Specifications
- Model Type: Causal Language Model
- Parameters: 33 Billion
- Context Length: 32,768 tokens
- Architecture: Advanced Transformer with:
  - RoPE (Rotary Position Embedding)
  - SwiGLU activation
  - RMSNorm normalization
  - Enhanced attention mechanisms
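To illustrate two of the components above, here is a minimal NumPy sketch of RMSNorm and a rotary position embedding applied to consecutive dimension pairs. The base frequency 10000 and `eps` are common defaults, assumed here rather than confirmed values for this model:

```python
import numpy as np

def rms_norm(x, weight, eps=1e-6):
    """RMSNorm: rescale by the root-mean-square of the activations (no mean subtraction)."""
    rms = np.sqrt(np.mean(x ** 2, axis=-1, keepdims=True) + eps)
    return x / rms * weight

def rope(x, positions, base=10000.0):
    """Rotary Position Embedding: rotate each consecutive dimension pair
    by a position-dependent angle."""
    d = x.shape[-1]
    inv_freq = base ** (-np.arange(0, d, 2) / d)     # one frequency per dim pair
    angles = positions[:, None] * inv_freq[None, :]  # (seq, d/2)
    cos, sin = np.cos(angles), np.sin(angles)
    x1, x2 = x[..., 0::2], x[..., 1::2]
    out = np.empty_like(x)
    out[..., 0::2] = x1 * cos - x2 * sin             # 2-D rotation of each pair
    out[..., 1::2] = x1 * sin + x2 * cos
    return out

seq, d = 4, 8
x = np.random.default_rng(0).normal(size=(seq, d))
y = rope(x, np.arange(seq))
# Rotation preserves each position's vector norm
print(np.allclose(np.linalg.norm(x, axis=-1), np.linalg.norm(y, axis=-1)))  # True
```

Because RoPE encodes position as a rotation, the dot product between two rotated vectors depends only on their relative offset, which is what makes it attractive for long-context attention.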
Performance Capabilities
- Exceptional performance in:
  - Mathematical reasoning
  - Complex problem-solving
  - Analytical task decomposition
  - Multi-step logical inference
Quickstart Guide
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "SVECTOR-CORPORATION/Theta-35-Preview"

model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype="auto",
    device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained(model_name)

# Example reasoning prompt
messages = [
    {"role": "system", "content": "You are an advanced logical reasoning assistant developed by SVector."},
    {"role": "user", "content": "Break down the logical steps to solve a complex problem."}
]

text = tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True
)
model_inputs = tokenizer([text], return_tensors="pt").to(model.device)

generated_ids = model.generate(
    **model_inputs,
    max_new_tokens=512,
    do_sample=True,
    temperature=0.7
)
response = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]
print(response)
```
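`apply_chat_template` renders the message list into the prompt string the model was trained on; the exact template ships with the tokenizer. As a rough illustration only, a ChatML-style rendering (an assumption here, not the confirmed Theta-35 format) looks like this:

```python
def chatml_format(messages, add_generation_prompt=True):
    """Hypothetical ChatML-style rendering of a chat message list."""
    parts = [f"<|im_start|>{m['role']}\n{m['content']}<|im_end|>\n" for m in messages]
    if add_generation_prompt:
        parts.append("<|im_start|>assistant\n")  # cue the model to answer
    return "".join(parts)

prompt = chatml_format([{"role": "user", "content": "Hi"}])
print(prompt)
```

In practice, always prefer `tokenizer.apply_chat_template` over hand-rolled formatting, since a mismatch with the trained template degrades output quality.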
Ethical AI Commitment
SVECTOR is committed to developing responsible AI that:
- Prioritizes ethical considerations
- Ensures robust safety mechanisms
- Promotes transparent and accountable AI development
Citation
If you use Theta-35 in your research, please cite:
```bibtex
@misc{theta-35,
  title     = {Theta-35: Advanced Logical Reasoning AI Model},
  author    = {SVECTOR CORPORATION},
  year      = {2025},
  publisher = {SVECTOR}
}
```
Contact and Support
- Website: www.svector.co.in
- Email: support@svector.co.in
- Research Inquiries: research@svector.co.in
Limitations and Considerations
While Theta-35 represents a significant advancement in AI reasoning, users should be aware of:
- Potential context-specific reasoning variations
- Need for careful prompt engineering
- Ongoing model refinement and updates
```shell
# Gated model: log in with a HF token that has gated access permission
hf auth login
```