Instructions to use Daemontatox/Tiny-OR1-Rust with libraries, inference providers, notebooks, and local apps. Follow these links to get started.
- Libraries
- Transformers
How to use Daemontatox/Tiny-OR1-Rust with Transformers:
```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("text-generation", model="Daemontatox/Tiny-OR1-Rust")
messages = [
    {"role": "user", "content": "Who are you?"},
]
pipe(messages)
```

```python
# Load model directly
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("Daemontatox/Tiny-OR1-Rust")
model = AutoModelForCausalLM.from_pretrained("Daemontatox/Tiny-OR1-Rust")
messages = [
    {"role": "user", "content": "Who are you?"},
]
inputs = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    tokenize=True,
    return_dict=True,
    return_tensors="pt",
).to(model.device)
outputs = model.generate(**inputs, max_new_tokens=40)
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[-1]:]))
```

- Inference
- Notebooks
- Google Colab
- Kaggle
- Local Apps
- vLLM
How to use Daemontatox/Tiny-OR1-Rust with vLLM:
Install from pip and serve the model
```shell
# Install vLLM from pip:
pip install vllm

# Start the vLLM server:
vllm serve "Daemontatox/Tiny-OR1-Rust"

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:8000/v1/chat/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "Daemontatox/Tiny-OR1-Rust",
    "messages": [
      {
        "role": "user",
        "content": "What is the capital of France?"
      }
    ]
  }'
```

Use Docker
```shell
docker model run hf.co/Daemontatox/Tiny-OR1-Rust
```
- SGLang
How to use Daemontatox/Tiny-OR1-Rust with SGLang:
Install from pip and serve the model
```shell
# Install SGLang from pip:
pip install sglang

# Start the SGLang server:
python3 -m sglang.launch_server \
  --model-path "Daemontatox/Tiny-OR1-Rust" \
  --host 0.0.0.0 \
  --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/chat/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "Daemontatox/Tiny-OR1-Rust",
    "messages": [
      {
        "role": "user",
        "content": "What is the capital of France?"
      }
    ]
  }'
```

Use Docker images

```shell
docker run --gpus all \
  --shm-size 32g \
  -p 30000:30000 \
  -v ~/.cache/huggingface:/root/.cache/huggingface \
  --env "HF_TOKEN=<secret>" \
  --ipc=host \
  lmsysorg/sglang:latest \
  python3 -m sglang.launch_server \
    --model-path "Daemontatox/Tiny-OR1-Rust" \
    --host 0.0.0.0 \
    --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/chat/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "Daemontatox/Tiny-OR1-Rust",
    "messages": [
      {
        "role": "user",
        "content": "What is the capital of France?"
      }
    ]
  }'
```

- Unsloth Studio
How to use Daemontatox/Tiny-OR1-Rust with Unsloth Studio:
Install Unsloth Studio (macOS, Linux, WSL)
```shell
curl -fsSL https://unsloth.ai/install.sh | sh

# Run unsloth studio
unsloth studio -H 0.0.0.0 -p 8888

# Then open http://localhost:8888 in your browser
# Search for Daemontatox/Tiny-OR1-Rust to start chatting
```
Install Unsloth Studio (Windows)
```powershell
irm https://unsloth.ai/install.ps1 | iex

# Run unsloth studio
unsloth studio -H 0.0.0.0 -p 8888

# Then open http://localhost:8888 in your browser
# Search for Daemontatox/Tiny-OR1-Rust to start chatting
```
Using HuggingFace Spaces for Unsloth
```shell
# No setup required
# Open https://huggingface.co/spaces/unsloth/studio in your browser
# Search for Daemontatox/Tiny-OR1-Rust to start chatting
```
Load model with FastModel
```shell
pip install unsloth
```

```python
from unsloth import FastModel

model, tokenizer = FastModel.from_pretrained(
    model_name="Daemontatox/Tiny-OR1-Rust",
    max_seq_length=2048,
)
```

- Docker Model Runner
How to use Daemontatox/Tiny-OR1-Rust with Docker Model Runner:
```shell
docker model run hf.co/Daemontatox/Tiny-OR1-Rust
```
Tiny-OR1-Rust
A lightweight Rust code assistant model for code generation, completion, and explanation.
Model Description
Tiny-OR1-Rust is a specialized language model fine-tuned from Qwen3-1.7B for Rust programming tasks. Built on the efficient Qwen3 architecture, this 1.7B parameter model provides effective code generation, completion, and explanation capabilities specifically tailored for the Rust programming language while maintaining a compact footprint.
Model Details
- Model Name: Tiny-OR1-Rust
- Developer: Daemontatox
- Model Type: Code Generation / Text-to-Code
- Language: Rust
- Architecture: Qwen3-based Transformer
- Parameters: 1.7B
- Base Model: Qwen3-1.7B
- Training Dataset: Tesslate/Rust_Dataset
Intended Use
Primary Use Cases
- Code Generation: Generate Rust code from natural language descriptions
- Code Completion: Complete partial Rust code snippets
- Code Explanation: Explain Rust code functionality and concepts
- Learning Assistant: Help developers learn Rust programming patterns and best practices
Intended Users
- Rust developers and learners
- Students studying systems programming
- Developers transitioning to Rust from other languages
- Code editors and IDEs integrating Rust assistance
How to Use
Basic Usage
```python
from transformers import AutoTokenizer, AutoModelForCausalLM

# Load model and tokenizer
tokenizer = AutoTokenizer.from_pretrained("Daemontatox/Tiny-OR1-Rust")
model = AutoModelForCausalLM.from_pretrained("Daemontatox/Tiny-OR1-Rust")

# Example prompt
prompt = "Write a Rust function to calculate factorial:"

# Generate code (do_sample=True is required for temperature to take effect)
inputs = tokenizer.encode(prompt, return_tensors="pt")
outputs = model.generate(
    inputs,
    max_length=150,
    do_sample=True,
    temperature=0.7,
    pad_token_id=tokenizer.eos_token_id,
)
generated_code = tokenizer.decode(outputs[0], skip_special_tokens=True)
print(generated_code)
```
Prompt Examples
Code Generation:
"Write a Rust function that reads a file and counts the number of lines:"
"Create a Rust struct for a binary tree with insert and search methods:"
"Implement a thread-safe counter using Arc and Mutex in Rust:"
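For reference, the third prompt describes a well-known standard-library pattern. A hand-written sketch of the kind of answer such a prompt targets (this is illustrative, not model output; the function name `parallel_count` is chosen here for the example) looks like:

```rust
use std::sync::{Arc, Mutex};
use std::thread;

// Spawn `threads` workers that each bump a shared counter `increments` times.
// Arc provides shared ownership across threads; Mutex provides exclusive access.
fn parallel_count(threads: usize, increments: u64) -> u64 {
    let counter = Arc::new(Mutex::new(0u64));
    let mut handles = Vec::new();

    for _ in 0..threads {
        let counter = Arc::clone(&counter);
        handles.push(thread::spawn(move || {
            for _ in 0..increments {
                // Lock, increment, release (the guard drops at end of statement).
                *counter.lock().unwrap() += 1;
            }
        }));
    }
    for handle in handles {
        handle.join().unwrap();
    }
    let total = *counter.lock().unwrap();
    total
}

fn main() {
    // 10 threads x 100 increments = 1000
    println!("final count: {}", parallel_count(10, 100));
}
```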
Code Explanation:
"Explain this Rust code: fn main() { let x = vec![1, 2, 3]; }"
"What does the ? operator do in Rust error handling?"
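The second explanation prompt has a short answer that is easiest to show in code: `?` unwraps an `Ok` value, or returns the `Err` early from the enclosing function. A minimal hand-written illustration (the function `double_of` is invented for this sketch):

```rust
use std::num::ParseIntError;

// `?` propagates the ParseIntError to the caller if parsing fails;
// otherwise it unwraps the parsed value so the function can continue.
fn double_of(text: &str) -> Result<i32, ParseIntError> {
    let n: i32 = text.trim().parse()?;
    Ok(n * 2)
}

fn main() {
    assert_eq!(double_of("21"), Ok(42));
    assert!(double_of("not a number").is_err());
    println!("double_of(\"21\") = {:?}", double_of("21"));
}
```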
Training Data
The model was trained on the Tesslate/Rust_Dataset, which contains:
- Rust source code from various projects
- Code documentation and comments
- Rust programming examples and tutorials
- Community-contributed Rust code snippets
Performance
The model demonstrates strong performance in:
- Generating syntactically correct Rust code
- Understanding Rust-specific concepts (ownership, borrowing, lifetimes)
- Providing contextually appropriate code completions
- Explaining Rust programming patterns
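To make the ownership and borrowing point concrete, the completions above must respect rules like the following (a hand-written illustration, not model output; `longest` is an invented example function):

```rust
// A lifetime-annotated function: the returned &str borrows from
// whichever argument is passed in, so both must live at least 'a.
fn longest<'a>(a: &'a str, b: &'a str) -> &'a str {
    if a.len() >= b.len() { a } else { b }
}

fn main() {
    let mut words = vec![String::from("borrow"), String::from("checker")];

    // Shared borrows: reading without taking ownership.
    let winner = longest(&words[0], &words[1]);
    assert_eq!(winner, "checker");

    // Mutable borrow: allowed here because the shared borrows above
    // ended at their last use (non-lexical lifetimes).
    words.push(String::from("lifetimes"));
    assert_eq!(words.len(), 3);

    println!("longest of first two: {}", longest(&words[0], &words[1]));
}
```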
Limitations
- Domain Specificity: Optimized for Rust code; may not perform well on other programming languages
- Model Size: Being a "tiny" model, it may have limitations with very complex code generation tasks
- Context Length: Limited context window may affect performance on very long code sequences
- Specialized Knowledge: May not have extensive knowledge of very recent Rust features or niche crates
Ethical Considerations
- The model generates code based on training data patterns and may reproduce coding practices from the dataset
- Users should review and test generated code before using in production environments
- The model should not be used as a substitute for understanding fundamental programming concepts
License
[Specify license - e.g., MIT, Apache 2.0, etc.]
Citation
```bibtex
@misc{tiny-or1-rust,
  title={Tiny-OR1-Rust: A Lightweight Rust Code Assistant Based on Qwen3},
  author={Daemontatox},
  year={2024},
  howpublished={\url{https://huggingface.co/Daemontatox/Tiny-OR1-Rust}},
  note={Fine-tuned from Qwen3-1.7B on Tesslate/Rust_Dataset}
}
```
Contact
For questions, issues, or contributions, please contact [your contact information or GitHub profile].
Acknowledgments
- Thanks to the Tesslate team for providing the Rust dataset
- Built upon the excellent Qwen3-1.7B foundation model by Alibaba Cloud
- Special recognition to the Rust community for their contributions to open-source Rust code
This model is part of ongoing efforts to make Rust programming more accessible through AI assistance.