# RTILA Assistant
Full-powered fine-tuned AI model for generating automation configurations for the RTILA Automation Engine
## Model Description
RTILA Assistant is the standard model in the RTILA family, fine-tuned from Qwen3-14B for maximum quality. It generates JSON automation configurations for the RTILA Automation Engine and offers the family's highest accuracy and strongest handling of complex automations.
## Choose Your Version
| Model | Base | GGUF Size | Min RAM | Best For |
|---|---|---|---|---|
| **RTILA Assistant** (this) | Qwen3-14B | ~9 GB | 16 GB | Maximum quality, complex automations |
| RTILA Assistant Lite | Qwen3-8B | ~5 GB | 8 GB | Balanced performance, mid-range devices |
| RTILA Assistant Mini | Qwen3-4B | ~2.5 GB | 6 GB | Mac M1 8GB, low VRAM, CPU inference |
## Capabilities
| Category | Description |
|---|---|
| Navigation & Interaction | Click, scroll, type, wait, handle popups, multi-tab workflows |
| Data Extraction | CSS/XPath selectors, tables, lists, nested data, pagination |
| Logic & Flow | Loops, conditionals, error handling, retry patterns |
| Triggers & Integrations | Webhooks, PostgreSQL, MySQL, Slack, email notifications |
| Variables & Substitution | Dynamic values, data transformations, regex patterns |
| Advanced Scripting | Custom JavaScript execution, page analysis, DOM manipulation |
## Model Specifications
| Property | Value |
|---|---|
| Base Model | Qwen3-14B |
| Format | GGUF Q4_K_M |
| Size | ~9 GB |
| Context Length | 1536 tokens |
## Hardware Requirements
| Hardware | Supported | Notes |
|---|---|---|
| GPU (16GB+ VRAM) | ✅ Recommended | RTX 4090, RTX 3090, A100 |
| GPU (12GB VRAM) | ✅ Works | RTX 4070 Ti, RTX 3080 12GB |
| GPU (8GB VRAM) | ⚠️ Tight | RTX 3060, RTX 4060 (may need offloading) |
| Apple Silicon 16GB+ | ✅ Works | M1/M2/M3 Pro/Max with 16GB+ unified memory |
| Apple Silicon 8GB | ❌ Too small | Use Mini instead |
| CPU-only | ⚠️ Slow | 16GB+ RAM required; expect slow inference |
> **Tip:** Don't have 16 GB of RAM? Try RTILA Assistant Lite (8 GB) or RTILA Assistant Mini (6 GB).
## Quick Start
### Option 1: Ollama (Easiest)

```shell
ollama run hf.co/rtila-corporation/rtila-assistant:Q4_K_M
```
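Ollama also serves the model over a local REST API (default `http://localhost:11434`). As an illustrative sketch, the request body for its `/api/chat` endpoint can be assembled like this and then POSTed with any HTTP client; the option names follow Ollama's documented schema, and the sampling values come from the recommended settings below:

```python
import json

def build_chat_request(model, prompt, system=None):
    """Assemble the JSON body for Ollama's /api/chat endpoint."""
    messages = []
    if system:
        messages.append({"role": "system", "content": system})
    messages.append({"role": "user", "content": prompt})
    return {
        "model": model,
        "messages": messages,
        "stream": False,  # return one complete response instead of a token stream
        # Sampling settings recommended for RTILA Assistant
        "options": {"temperature": 0.7, "top_p": 0.8, "top_k": 20},
    }

body = build_chat_request(
    "hf.co/rtila-corporation/rtila-assistant:Q4_K_M",
    "Extract all product names and prices from the current page",
)
print(json.dumps(body, indent=2))
```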
### Option 2: LM Studio

1. Download LM Studio
2. Search for `rtila-corporation/rtila-assistant`
3. Download the `Q4_K_M` quantization
4. Set parameters: Temperature=0.7, Top-P=0.8, Top-K=20
5. Start chatting!
### Option 3: llama.cpp

```shell
# Download the model
huggingface-cli download rtila-corporation/rtila-assistant \
  rtila-assistant.Q4_K_M.gguf --local-dir ./models

# Run an interactive chat
./llama-cli -m ./models/rtila-assistant.Q4_K_M.gguf \
  -p "Scrape product prices from an e-commerce site" \
  --temp 0.7 --top-p 0.8 --top-k 20
```
### Option 4: Python (llama-cpp-python)

```python
from llama_cpp import Llama

llm = Llama(
    model_path="./models/rtila-assistant.Q4_K_M.gguf",
    n_ctx=1536,
    n_gpu_layers=-1,  # offload all layers to GPU if available
)

response = llm.create_chat_completion(
    messages=[
        {"role": "system", "content": "You are RTILA Assistant..."},
        {"role": "user", "content": "Create a config for scraping product data"},
    ],
    temperature=0.7,
    top_p=0.8,
    top_k=20,
)
print(response["choices"][0]["message"]["content"])
```
## Recommended Generation Settings
| Parameter | Value |
|---|---|
| Temperature | 0.7 |
| Top-P | 0.8 |
| Top-K | 20 |
| Min-P | 0 |
## Example Prompts & Outputs
### Example 1: Basic Extraction

**Prompt:**

```
Extract all product names and prices from the current page
```

**Output:**

```json
{
  "settings": {
    "name": "Product Extraction"
  },
  "datasets": {
    "products": {
      "selector": ".product-item",
      "type": "list",
      "fields": {
        "name": {
          "selector": ".product-title",
          "type": "text"
        },
        "price": {
          "selector": ".product-price",
          "type": "text",
          "transform": "extractNumber"
        }
      }
    }
  },
  "commands": [
    {"action": "extract", "dataset": "products"}
  ]
}
```
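Because the model emits plain JSON, a generated config can be sanity-checked before it is handed to the engine. A minimal validation sketch — the required-key list below is inferred from the examples on this card, not from an official RTILA schema:

```python
import json

# Top-level sections used by the examples on this card (assumed, not an official schema)
REQUIRED_TOP_LEVEL = ("settings", "datasets", "commands")

def validate_config(raw):
    """Parse a generated config string and check the expected sections exist."""
    config = json.loads(raw)  # raises ValueError / JSONDecodeError on malformed JSON
    missing = [key for key in REQUIRED_TOP_LEVEL if key not in config]
    if missing:
        raise ValueError(f"config missing sections: {missing}")
    return config

sample = """
{
  "settings": {"name": "Product Extraction"},
  "datasets": {"products": {"selector": ".product-item", "type": "list"}},
  "commands": [{"action": "extract", "dataset": "products"}]
}
"""
config = validate_config(sample)
print(config["settings"]["name"])  # Product Extraction
```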
### Example 2: Pagination Loop

**Prompt:**

```
Scrape reviews from multiple pages, clicking "Next" until no more pages
```

**Output:**

```json
{
  "settings": {
    "name": "Review Scraper with Pagination"
  },
  "datasets": {
    "reviews": {
      "selector": ".review-card",
      "type": "list",
      "fields": {
        "author": {"selector": ".reviewer-name", "type": "text"},
        "rating": {"selector": ".star-rating", "type": "attribute", "attribute": "data-rating"},
        "text": {"selector": ".review-body", "type": "text"},
        "date": {"selector": ".review-date", "type": "text"}
      }
    }
  },
  "commands": [
    {
      "action": "loop",
      "while": {"exists": ".next-page:not(.disabled)"},
      "maxIterations": 50,
      "commands": [
        {"action": "extract", "dataset": "reviews", "append": true},
        {"action": "click", "selector": ".next-page"},
        {"action": "wait", "duration": 2000}
      ]
    }
  ]
}
```
### Example 3: Database Integration

**Prompt:**

```
After extraction, save data to PostgreSQL and send Slack notification
```

**Output:**

```json
{
  "triggers": {
    "onComplete": [
      {
        "type": "database",
        "config": {
          "driver": "postgresql",
          "connection": "{{env.DATABASE_URL}}",
          "table": "scraped_data",
          "mode": "upsert",
          "uniqueKey": "id"
        }
      },
      {
        "type": "webhook",
        "config": {
          "url": "{{env.SLACK_WEBHOOK}}",
          "method": "POST",
          "body": {
            "text": "Extraction complete: {{datasets.products.length}} items saved"
          }
        }
      }
    ]
  }
}
```
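The `{{env.…}}` placeholders above are resolved by RTILA at run time. How the engine implements this internally is not documented here, but the substitution syntax itself can be sketched with a small regex (illustrative only, not RTILA's actual resolver):

```python
import re

def substitute_env(text, env):
    """Replace {{env.NAME}} placeholders with values from a mapping.
    Illustrative sketch of the template syntax, not RTILA's implementation."""
    def repl(match):
        name = match.group(1)
        if name not in env:
            raise KeyError(f"undefined variable: env.{name}")
        return env[name]
    return re.sub(r"\{\{env\.([A-Za-z_][A-Za-z0-9_]*)\}\}", repl, text)

resolved = substitute_env(
    '{"connection": "{{env.DATABASE_URL}}"}',
    {"DATABASE_URL": "postgres://localhost/rtila"},
)
print(resolved)  # {"connection": "postgres://localhost/rtila"}
```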
### Example 4: Knowledge Question

**Prompt:**

```
What is the difference between execute_script and run_script?
```

**Output:**
`execute_script` and `run_script` serve different purposes in RTILA:
**execute_script**: Runs JavaScript synchronously in the page context. Returns immediately
with the result. Best for quick DOM queries, getting values, or simple manipulations.
**run_script**: Executes JavaScript asynchronously with full control flow. Supports
await, can make fetch requests, and handles complex async operations. Returns a Promise.
Use execute_script for: Reading values, checking conditions, simple DOM changes
Use run_script for: API calls, complex async workflows, operations that need to wait
## Training Details
| Parameter | Value |
|---|---|
| Base Model | unsloth/Qwen3-14B |
| Method | QLoRA (4-bit) |
| LoRA Rank | 64 |
| LoRA Alpha | 128 |
| Context Length | 1536 tokens |
| Training Examples | ~400 |
| Epochs | 4 (with early stopping) |
| Learning Rate | 2e-5 |
| Thinking Mode | Disabled |
### Training Data
- Navigation & Interaction patterns
- Data extraction configurations
- Logic & flow control
- Triggers & integrations
- Variables & substitution
- Advanced scripting
- Error handling
- Knowledge base Q&A
## System Prompt

For best results, use this system prompt:

```
You are RTILA Assistant, an expert AI for generating automation configurations for the RTILA Automation Engine.

Your capabilities:
1. Generate complete JSON configurations for web automation tasks
2. Define datasets with selectors, properties, and transformations
3. Configure navigation, extraction, loops, and conditionals
4. Set up triggers for webhooks, databases, and integrations
5. Explain RTILA concepts and best practices

When generating configurations:
- Always output valid JSON with proper structure
- Include 'settings', 'datasets', and 'commands' sections as needed
- Use appropriate selectors (CSS, XPath) for the target elements
- Apply transformations when data cleaning is required

When answering questions:
- Be concise and accurate
- Provide examples when helpful
- Reference specific RTILA features and commands
```
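Even with this system prompt, chat models sometimes wrap their JSON in a markdown code fence. A small helper (an assumption-laden convenience, not part of RTILA) can strip an optional fence before parsing:

```python
import json
import re

def extract_json(reply):
    """Strip an optional ```json fence from a model reply and parse the payload."""
    fenced = re.search(r"```(?:json)?\s*(.*?)\s*```", reply, re.DOTALL)
    payload = fenced.group(1) if fenced else reply.strip()
    return json.loads(payload)

reply = '```json\n{"settings": {"name": "Demo"}}\n```'
print(extract_json(reply))  # {'settings': {'name': 'Demo'}}
```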
## Model Family
| Model | Link | Best For |
|---|---|---|
| **RTILA Assistant** (this) | [huggingface.co/rtila-corporation/rtila-assistant](https://huggingface.co/rtila-corporation/rtila-assistant) | Maximum quality |
| RTILA Assistant Lite | [huggingface.co/rtila-corporation/rtila-assistant-lite](https://huggingface.co/rtila-corporation/rtila-assistant-lite) | Mid-range devices |
| RTILA Assistant Mini | [huggingface.co/rtila-corporation/rtila-assistant-mini](https://huggingface.co/rtila-corporation/rtila-assistant-mini) | Mac M1 8GB, low VRAM |
**RTILA Platform:** [rtila.com](https://rtila.com)
## License
Apache 2.0