
# 🤖 RTILA Assistant

Full-powered fine-tuned AI model for generating automation configurations for the RTILA Automation Engine


## 📋 Model Description

RTILA Assistant is the standard model in the RTILA family, fine-tuned from Qwen3-14B for maximum quality. It generates JSON automation configurations for the RTILA Automation Engine and offers the highest accuracy and the best handling of complex, multi-step workflows in the family.

## 🔄 Choose Your Version

| Model | Base | GGUF Size | Min RAM | Best For |
|---|---|---|---|---|
| **RTILA Assistant** (this) | Qwen3-14B | ~9 GB | 16 GB | 🏆 Maximum quality, complex automations |
| RTILA Assistant Lite | Qwen3-8B | ~5 GB | 8 GB | Balanced performance, mid-range devices |
| RTILA Assistant Mini | Qwen3-4B | ~2.5 GB | 6 GB | ✅ Mac M1 8GB, low VRAM, CPU inference |

## Capabilities

| Category | Description |
|---|---|
| 🌐 Navigation & Interaction | Click, scroll, type, wait, handle popups, multi-tab workflows |
| 📊 Data Extraction | CSS/XPath selectors, tables, lists, nested data, pagination |
| 🔄 Logic & Flow | Loops, conditionals, error handling, retry patterns |
| 🔗 Triggers & Integrations | Webhooks, PostgreSQL, MySQL, Slack, email notifications |
| 📝 Variables & Substitution | Dynamic values, data transformations, regex patterns |
| 🛠️ Advanced Scripting | Custom JavaScript execution, page analysis, DOM manipulation |

## 📦 Model Specifications

| Property | Value |
|---|---|
| Base Model | Qwen3-14B |
| Format | GGUF Q4_K_M |
| Size | ~9 GB |
| Context Length | 1536 tokens |

## 💻 Hardware Requirements

| Hardware | Supported | Notes |
|---|---|---|
| GPU (16GB+ VRAM) | ✅ Recommended | RTX 4090, RTX 3090, A100 |
| GPU (12GB VRAM) | ✅ Works | RTX 4070 Ti, RTX 3080 12GB |
| GPU (8GB VRAM) | ⚠️ Tight | RTX 3060, RTX 4060; may need CPU offloading |
| Apple Silicon 16GB+ | ✅ Works | M1/M2/M3 Pro/Max with 16GB+ unified memory |
| Apple Silicon 8GB | ❌ Too small | Use Mini instead |
| CPU-only | ⚠️ Slow | 16GB+ RAM required; expect slow inference |

> 💡 **Don't have 16 GB?** Try [RTILA Assistant Lite](https://huggingface.co/rtila-corporation/rtila-assistant-lite) (8 GB min RAM) or [RTILA Assistant Mini](https://huggingface.co/rtila-corporation/rtila-assistant-mini) (6 GB min RAM).


## 🚀 Quick Start

### Option 1: Ollama (Easiest)

```bash
ollama run hf.co/rtila-corporation/rtila-assistant:Q4_K_M
```

### Option 2: LM Studio

  1. Download LM Studio
  2. Search for rtila-corporation/rtila-assistant
  3. Download Q4_K_M
  4. Set parameters: Temperature=0.7, Top-P=0.8, Top-K=20
  5. Start chatting!

### Option 3: llama.cpp

```bash
# Download model
huggingface-cli download rtila-corporation/rtila-assistant \
  rtila-assistant.Q4_K_M.gguf --local-dir ./models

# Run interactive chat
./llama-cli -m ./models/rtila-assistant.Q4_K_M.gguf \
  -p "Scrape product prices from an e-commerce site" \
  --temp 0.7 --top-p 0.8 --top-k 20
```

### Option 4: Python (llama-cpp-python)

```python
from llama_cpp import Llama

llm = Llama(
    model_path="./models/rtila-assistant.Q4_K_M.gguf",
    n_ctx=1536,
    n_gpu_layers=-1,  # Use GPU if available
)

response = llm.create_chat_completion(
    messages=[
        {"role": "system", "content": "You are RTILA Assistant..."},
        {"role": "user", "content": "Create a config for scraping product data"}
    ],
    temperature=0.7,
    top_p=0.8,
    top_k=20,
)
print(response['choices'][0]['message']['content'])
```
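The completion should contain a JSON configuration, but like any LLM the model may occasionally wrap it in prose or Markdown fences. A minimal defensive parser can help (a sketch only; `parse_config` is an illustrative helper, not part of RTILA or llama-cpp-python):

```python
import json
import re


def parse_config(raw: str) -> dict:
    """Pull the first JSON object out of a model completion.

    Strips optional ```json fences, then parses the outermost {...} span.
    """
    cleaned = re.sub(r"```(?:json)?", "", raw).strip()
    start, end = cleaned.find("{"), cleaned.rfind("}")
    if start == -1 or end == -1:
        raise ValueError("no JSON object found in model output")
    return json.loads(cleaned[start:end + 1])


print(parse_config('```json\n{"settings": {"name": "Demo"}}\n```'))
# {'settings': {'name': 'Demo'}}
```

Feeding `response['choices'][0]['message']['content']` through such a helper gives you a plain `dict` regardless of how the model formatted its answer.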

## ⚙️ Recommended Generation Settings

| Parameter | Value |
|---|---|
| Temperature | 0.7 |
| Top-P | 0.8 |
| Top-K | 20 |
| Min-P | 0 |
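If you use Ollama, these defaults can be baked into a custom Modelfile so every session inherits them (a sketch; Ollama's `PARAMETER` directives map one-to-one onto the table above, and the local model name you register is arbitrary):

```
FROM hf.co/rtila-corporation/rtila-assistant:Q4_K_M

PARAMETER temperature 0.7
PARAMETER top_p 0.8
PARAMETER top_k 20
PARAMETER min_p 0
```

Register it with `ollama create rtila-assistant-tuned -f Modelfile`, after which `ollama run rtila-assistant-tuned` picks up these settings automatically.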

## 💬 Example Prompts & Outputs

### Example 1: Basic Extraction

**Prompt:**

```
Extract all product names and prices from the current page
```

**Output:**

```json
{
  "settings": {
    "name": "Product Extraction"
  },
  "datasets": {
    "products": {
      "selector": ".product-item",
      "type": "list",
      "fields": {
        "name": {
          "selector": ".product-title",
          "type": "text"
        },
        "price": {
          "selector": ".product-price",
          "type": "text",
          "transform": "extractNumber"
        }
      }
    }
  },
  "commands": [
    {"action": "extract", "dataset": "products"}
  ]
}
```
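Before handing a generated config to the engine, a quick structural sanity check is worthwhile. A minimal sketch (the `validate_config` helper and the required-key list are illustrative, based on extraction-style configs like this one; some valid configs, such as trigger-only fragments, omit sections, so treat this as a convention check rather than a schema):

```python
import json

REQUIRED_SECTIONS = ("settings", "datasets", "commands")


def validate_config(raw: str) -> dict:
    """Parse a config string and check the sections an extraction config uses."""
    config = json.loads(raw)  # raises ValueError on malformed JSON
    missing = [key for key in REQUIRED_SECTIONS if key not in config]
    if missing:
        raise ValueError(f"config is missing sections: {missing}")
    if not isinstance(config["commands"], list):
        raise ValueError("'commands' must be a list of actions")
    return config


config = validate_config(
    '{"settings": {"name": "Demo"}, "datasets": {}, '
    '"commands": [{"action": "extract", "dataset": "products"}]}'
)
print(config["settings"]["name"])  # Demo
```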

### Example 2: Pagination Loop

**Prompt:**

```
Scrape reviews from multiple pages, clicking "Next" until no more pages
```

**Output:**

```json
{
  "settings": {
    "name": "Review Scraper with Pagination"
  },
  "datasets": {
    "reviews": {
      "selector": ".review-card",
      "type": "list",
      "fields": {
        "author": {"selector": ".reviewer-name", "type": "text"},
        "rating": {"selector": ".star-rating", "type": "attribute", "attribute": "data-rating"},
        "text": {"selector": ".review-body", "type": "text"},
        "date": {"selector": ".review-date", "type": "text"}
      }
    }
  },
  "commands": [
    {
      "action": "loop",
      "while": {"exists": ".next-page:not(.disabled)"},
      "maxIterations": 50,
      "commands": [
        {"action": "extract", "dataset": "reviews", "append": true},
        {"action": "click", "selector": ".next-page"},
        {"action": "wait", "duration": 2000}
      ]
    }
  ]
}
```
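The loop's control flow can be mirrored in a few lines of plain Python to reason about termination (a simulation only, not RTILA code). One subtlety: because extraction happens before the click, the final page, where no enabled `.next-page` exists, needs one more extract after the loop; the simulation makes that step explicit:

```python
def scrape_with_pagination(pages, max_iterations=50):
    """Simulate the loop above; pages is a list of per-page review lists."""
    reviews, index, iterations = [], 0, 0
    # while: an enabled ".next-page" exists, i.e. there is a later page
    while index < len(pages) - 1 and iterations < max_iterations:
        reviews.extend(pages[index])  # extract with "append": true
        index += 1                    # click ".next-page", then wait
        iterations += 1
    # the last page has no enabled next button, so extract it once more
    if pages:
        reviews.extend(pages[index])
    return reviews


print(scrape_with_pagination([["r1", "r2"], ["r3"], ["r4"]]))
# ['r1', 'r2', 'r3', 'r4']
```

`maxIterations` caps the number of next-clicks, which is what prevents runaway loops on sites whose "Next" button never disables.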

### Example 3: Database Integration

**Prompt:**

```
After extraction, save data to PostgreSQL and send Slack notification
```

**Output:**

```json
{
  "triggers": {
    "onComplete": [
      {
        "type": "database",
        "config": {
          "driver": "postgresql",
          "connection": "{{env.DATABASE_URL}}",
          "table": "scraped_data",
          "mode": "upsert",
          "uniqueKey": "id"
        }
      },
      {
        "type": "webhook",
        "config": {
          "url": "{{env.SLACK_WEBHOOK}}",
          "method": "POST",
          "body": {
            "text": "✅ Extraction complete: {{datasets.products.length}} items saved"
          }
        }
      }
    ]
  }
}
```
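Placeholders like `{{env.DATABASE_URL}}` are resolved at runtime from the environment, which keeps credentials out of the config file. The mechanics can be sketched in a few lines (illustrative only; `substitute_env` is not RTILA's actual implementation):

```python
import os
import re


def substitute_env(text, env=None):
    """Replace {{env.NAME}} placeholders; unknown names are left untouched."""
    env = dict(os.environ) if env is None else env
    return re.sub(
        r"\{\{env\.([A-Za-z_][A-Za-z0-9_]*)\}\}",
        lambda m: env.get(m.group(1), m.group(0)),  # keep unresolved markers
        text,
    )


print(substitute_env(
    '{"connection": "{{env.DATABASE_URL}}"}',
    {"DATABASE_URL": "postgres://localhost/rtila"},
))
# {"connection": "postgres://localhost/rtila"}
```

Leaving unresolved markers in place (rather than substituting an empty string) makes missing environment variables easy to spot in logs.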

### Example 4: Knowledge Question

**Prompt:**

```
What is the difference between execute_script and run_script?
```

**Output:**

execute_script and run_script serve different purposes in RTILA:

**execute_script**: Runs JavaScript synchronously in the page context. Returns immediately 
with the result. Best for quick DOM queries, getting values, or simple manipulations.

**run_script**: Executes JavaScript asynchronously with full control flow. Supports 
await, can make fetch requests, and handles complex async operations. Returns a Promise.

Use execute_script for: Reading values, checking conditions, simple DOM changes
Use run_script for: API calls, complex async workflows, operations that need to wait

πŸ‹οΈ Training Details

Parameter Value
Base Model unsloth/Qwen3-14B
Method QLoRA (4-bit)
LoRA Rank 64
LoRA Alpha 128
Context Length 1536 tokens
Training Examples ~400
Epochs 4 (with early stopping)
Learning Rate 2e-5
Thinking Mode Disabled
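For context on the rank/alpha pair: in standard LoRA the low-rank update is scaled by alpha / rank before being added to the frozen weights, so the hyperparameters above give an effective scaling of 2.0 (a back-of-the-envelope note, not the training code):

```python
lora_rank = 64     # "LoRA Rank" from the table above
lora_alpha = 128   # "LoRA Alpha" from the table above

# Standard LoRA computes W + (alpha / rank) * (B @ A)
scaling = lora_alpha / lora_rank
print(scaling)  # 2.0
```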

### Training Data

- Navigation & Interaction patterns
- Data extraction configurations
- Logic & flow control
- Triggers & integrations
- Variables & substitution
- Advanced scripting
- Error handling
- Knowledge base Q&A

πŸ“ System Prompt

For best results, use this system prompt:

```
You are RTILA Assistant, an expert AI for generating automation configurations for the RTILA Automation Engine.

Your capabilities:
1. Generate complete JSON configurations for web automation tasks
2. Define datasets with selectors, properties, and transformations
3. Configure navigation, extraction, loops, and conditionals
4. Set up triggers for webhooks, databases, and integrations
5. Explain RTILA concepts and best practices

When generating configurations:
- Always output valid JSON with proper structure
- Include 'settings', 'datasets', and 'commands' sections as needed
- Use appropriate selectors (CSS, XPath) for the target elements
- Apply transformations when data cleaning is required

When answering questions:
- Be concise and accurate
- Provide examples when helpful
- Reference specific RTILA features and commands
```

## 🔗 Model Family

| Model | Link | Best For |
|---|---|---|
| **RTILA Assistant** (this) | [huggingface.co/rtila-corporation/rtila-assistant](https://huggingface.co/rtila-corporation/rtila-assistant) | Maximum quality |
| RTILA Assistant Lite | [huggingface.co/rtila-corporation/rtila-assistant-lite](https://huggingface.co/rtila-corporation/rtila-assistant-lite) | Mid-range devices |
| RTILA Assistant Mini | [huggingface.co/rtila-corporation/rtila-assistant-mini](https://huggingface.co/rtila-corporation/rtila-assistant-mini) | Mac M1 8GB, low VRAM |

**RTILA Platform:** [rtila.com](https://rtila.com)


## 📄 License

Apache 2.0


## 🙏 Acknowledgments

Built on [Qwen3-14B](https://huggingface.co/unsloth/Qwen3-14B), fine-tuned with QLoRA using Unsloth, and distributed as GGUF for llama.cpp-compatible runtimes.