Model Card: Intellix

Intellix is a high-capacity, fine-tuned large language model (LLM) designed specifically for enterprise-grade applications.


1. Model Details

  • Model Developer: Mediusware
  • Model Date: March 2026
  • Model Version: 1.0.0
  • Model Type: Causal Language Model (Fine-tuned via PEFT/LoRA and GGUF quantized)
  • Base Model: Proprietary Business-Oriented Foundation (Optimized Qwen architecture)
  • License: Proprietary (Mediusware)

2. Intended Use

Primary Intended Uses

  • Enterprise Communication: Drafting professional emails, client updates, and internal memos.
  • Policy & Security Auditing: Generating and reviewing business security policies and compliance documentation.
  • Knowledge Synthesis: Summarizing complex business documents into executive highlights.
  • Decision Support: Providing reasoned insights for project management and business logic.

Primary Intended Users

  • Business professionals and executives.
  • IT security and compliance officers.
  • Enterprise software developers integrating AI into professional workflows.

Out-of-Scope Use Cases

  • Non-professional or casual conversational use.
  • High-stakes medical, legal, or financial advice without human oversight.
  • Generation of fictional or creative content not grounded in business reality.

3. Factors

Relevant Factors

  • Professional Tone: The model is evaluated based on its ability to maintain a consistent, corporate-ready voice.
  • Security Compliance: Evaluation focuses on the model's adherence to security protocols and data privacy constraints.
  • Accuracy: Minimization of hallucinations in professional contexts (e.g., policy drafting).

Evaluation

Evaluations were conducted using a proprietary enterprise benchmark suite and real-world business scenarios to ensure the model's readiness for B2B deployment.

4. Metrics

Model Performance Measures

  • Throughput: Measured in tokens per second (TPS) for real-time responsiveness.
  • Latency: Time-to-first-token (TTFT) and total response time.
  • Persona Adherence: Qualitative and quantitative scoring of professional tone consistency.
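The throughput and latency metrics above can be computed from per-token timestamps collected during generation. The sketch below is illustrative only: the token stream is simulated, and no particular inference runtime is assumed.

```python
import time

def measure_generation(token_stream):
    """Compute time-to-first-token (TTFT) and throughput (tokens/sec)
    from an iterable that yields tokens as they are generated."""
    start = time.perf_counter()
    first_token_time = None
    count = 0
    for _ in token_stream:
        now = time.perf_counter()
        if first_token_time is None:
            first_token_time = now
        count += 1
    total = time.perf_counter() - start
    ttft = (first_token_time - start) if first_token_time is not None else None
    tps = count / total if total > 0 else 0.0
    return {"ttft_s": ttft, "tokens": count, "tokens_per_sec": tps}

# Simulated stream: 50 tokens arriving at roughly 5 ms intervals.
def fake_stream():
    for _ in range(50):
        time.sleep(0.005)
        yield "tok"

stats = measure_generation(fake_stream())
```

In a real harness, `token_stream` would be the streaming output of the model server rather than a simulated generator.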

5. Evaluation Results

Quantitative Performance (March 2026)

Tested on Q8_0 GGUF via optimized local inference.

Metric               Performance Value
Average Throughput   196.08 tokens/sec
Average Latency      0.68 seconds
Peak Throughput      199.48 tokens/sec
Model Footprint      2.0 GB

6. Training Data

Data Sources

The model was fine-tuned on a large, curated dataset including:

  • Professional business correspondence and templates.
  • Industry-standard security policies and compliance manuals.
  • Technical documentation for enterprise software.
  • High-quality project management logs and reports.

Data Preprocessing

Data was rigorously cleaned to remove personally identifiable information (PII) and informal or low-quality text, ensuring the model's output remains strictly professional.
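The PII-removal step can be pictured with a minimal scrubbing pass like the one below. The regex patterns and placeholder labels are illustrative assumptions, not the proprietary preprocessing pipeline.

```python
import re

# Illustrative patterns only; the production pipeline is proprietary.
PII_PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "PHONE": re.compile(r"\+?\d[\d\s().-]{7,}\d"),
}

def scrub_pii(text: str) -> str:
    """Replace matched PII spans with typed placeholders."""
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

cleaned = scrub_pii("Contact jane.doe@example.com or +1 (555) 123-4567.")
```

A production pipeline would typically combine pattern rules like these with a trained NER model to catch names and addresses as well.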

7. Quantitative Analysis

Benchmark Scenarios

The following scenarios were used to validate the model's business intelligence:

  1. Scenario A: Draft a secure data handling policy for a fintech startup.
  2. Scenario B: Summarize a 50-page internal audit report into 5 key action items.
  3. Scenario C: Write a professional apology to a high-value client for a project delay.

8. Fine-Tuning Process

Methodology

mw-intellix was fine-tuned using the Unsloth library for memory-efficient and fast training. The process utilized LoRA (Low-Rank Adaptation) to adapt the base architecture to specialized business domains without compromising the model's general intelligence.

Hyperparameters

The following hyperparameters were used during the fine-tuning phase:

Parameter        Value
PEFT Type        LoRA
LoRA Rank (r)    16
LoRA Alpha       16
LoRA Dropout     0.0
Target Modules   q_proj, k_proj, v_proj, o_proj, gate_proj, up_proj, down_proj
Precision        bfloat16
Optimizer        AdamW
Learning Rate    2e-4
Epochs           3
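As a rough illustration of why LoRA at rank 16 is so lightweight, the adapter size can be estimated per transformer block: each adapted linear layer of shape (d_in, d_out) adds two low-rank matrices totaling r × (d_in + d_out) parameters. The layer shapes and block count below are placeholder assumptions, not the published Intellix dimensions.

```python
# Illustrative only: the shapes below are placeholders, not the
# actual Intellix/Qwen dimensions, which are not published here.
R = 16  # LoRA rank from the hyperparameter table

# Assumed (d_in, d_out) per target module for one transformer block.
module_shapes = {
    "q_proj": (2048, 2048),
    "k_proj": (2048, 512),
    "v_proj": (2048, 512),
    "o_proj": (2048, 2048),
    "gate_proj": (2048, 8192),
    "up_proj": (2048, 8192),
    "down_proj": (8192, 2048),
}

def lora_params_per_block(shapes, r):
    """Each adapted layer adds A (r x d_in) and B (d_out x r),
    i.e. r * (d_in + d_out) trainable parameters."""
    return sum(r * (d_in + d_out) for d_in, d_out in shapes.values())

per_block = lora_params_per_block(module_shapes, R)
total = per_block * 28  # assuming 28 transformer blocks
```

Under these assumed shapes the adapter stays in the tens of millions of parameters, a small fraction of a 2B-parameter base model.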

Hardware Requirements

  • Training: Single A100 (40GB) or H100 (80GB) recommended; consumer GPUs such as the RTX 3090/4090 also work with Unsloth 4-bit loading.
  • Inference: Minimum 8GB VRAM (Full) / 2GB VRAM (Q8_0 GGUF).

9. How to Fine-Tune This Model

If you wish to further adapt mw-intellix to your specific organizational data, follow these steps:

  1. Install Dependencies:

    pip install unsloth "unsloth[colab-new] @ git+https://github.com/unslothai/unsloth.git"
    pip install --no-deps "xformers<0.0.27" "trl<0.9.0" peft accelerate bitsandbytes
    
  2. Load Model with Unsloth:

    from unsloth import FastLanguageModel
    import torch
    
    model, tokenizer = FastLanguageModel.from_pretrained(
        model_name = "mediusware-ai/intellix",
        max_seq_length = 4096,
        load_in_4bit = True,
    )
    
  3. Apply LoRA Adapters:

    model = FastLanguageModel.get_peft_model(
        model,
        r = 16,
        target_modules = ["q_proj", "k_proj", "v_proj", "o_proj", 
                         "gate_proj", "up_proj", "down_proj"],
        lora_alpha = 16,
        lora_dropout = 0,
        bias = "none",
    )
    
  4. Train on Your Data: Use the SFTTrainer from the trl library to train on your curated business datasets.
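Step 4 above can be sketched as follows, reusing the hyperparameters from section 8. The dataset field name, batch settings, and output directory are assumptions to adapt to your own corpus; this targets the trl version pinned in step 1 (trl < 0.9).

```python
# Hyperparameters mirroring the fine-tuning table in section 8.
TRAIN_CONFIG = {
    "learning_rate": 2e-4,
    "num_train_epochs": 3,
    "bf16": True,
    "optim": "adamw_torch",
}

def train(model, tokenizer, dataset):
    """Run supervised fine-tuning on a dataset whose examples carry
    a 'text' field (an assumption -- adapt dataset_text_field)."""
    # Local imports keep this sketch loadable without trl installed.
    from transformers import TrainingArguments
    from trl import SFTTrainer

    trainer = SFTTrainer(
        model=model,
        tokenizer=tokenizer,
        train_dataset=dataset,
        dataset_text_field="text",
        max_seq_length=4096,
        args=TrainingArguments(
            output_dir="outputs",          # assumed path
            per_device_train_batch_size=2, # assumed; tune to VRAM
            gradient_accumulation_steps=4,
            logging_steps=10,
            **TRAIN_CONFIG,
        ),
    )
    trainer.train()
    return trainer
```

Pass in the `model` and `tokenizer` from steps 2-3 and a `datasets.Dataset` built from your curated business data.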


10. Ethical Considerations

Data Privacy

Designed for local-first deployment: when run via Ollama or a local GGUF runtime, business data never leaves your infrastructure, preserving data residency and privacy.

Safety Guardrails

  • Professionalism Filter: Fine-tuned to avoid informal, casual, or inappropriate language.
  • Hallucination Mitigation: Specialized training to prioritize "I don't know" or factual grounding over creative extrapolation in sensitive business contexts.

11. Caveats and Recommendations

  • Human-in-the-loop: Although the model is highly accurate, users should always review critical business outputs (e.g., security policies) before implementation.
  • Language Bias: Optimized primarily for Business English; performance in other languages may vary.

How to Get Started

Using with Transformers (PEFT)

from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "mediusware-ai/intellix"
# Load the base model first, then attach the Intellix LoRA adapter on top.
model = AutoModelForCausalLM.from_pretrained("base-model-path")  # replace with your base model path
model = PeftModel.from_pretrained(model, model_id)
tokenizer = AutoTokenizer.from_pretrained(model_id)

inputs = tokenizer("Draft a professional email regarding project updates.", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=100)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))

Using with Ollama (Local API)

The most reliable way to integrate mw-intellix into a Next.js or Node.js environment is via Ollama. This bypasses the limitations of the free serverless Inference API.

  1. Start Ollama with the model:

    ollama run hf.co/mediusware-ai/intellix:Q8_0
    
  2. Call the Local API from Next.js:

    const response = await fetch("http://localhost:11434/api/chat", {
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify({
        model: "hf.co/mediusware-ai/intellix:Q8_0",
        messages: [{ role: "user", content: "Hi" }],
        stream: false
      })
    });
    const data = await response.json();
    console.log(data.message.content);
    

Contact & Support

For custom enterprise deployments or inquiries, visit mediusware.com.

Framework Versions

  • PEFT 0.18.1
  • Transformers 4.49.0
  • PyTorch 2.4.0