XbyK-0.1

XbyK-0.1 is a fine-tuned version of mistralai/Mistral-Nemo-Instruct-2407 specialized for Xperience by Kentico — a digital experience platform (DXP).

⚠️ This is not an official Kentico product. XbyK-0.1 is a community-driven research project with no affiliation to Kentico a.s. It is not endorsed, sponsored, or maintained by Kentico. No commercial intent of any kind.


Who We Are

We are the Portalgrup AI team; we develop, build, and maintain AI solutions. Founded in 2007, Portalgrup entered the thriving internet ecosystem with a singular focus: creating and managing web portals. Over time, the company grew into a versatile digital solutions provider, extending its reach across a diverse spectrum of services.

More details are available on the Portalgrup website.

Version: 0.1 — Why So Early?

This model is versioned 0.1 because its current evaluation results show meaningful room for improvement.

Evaluated on 30 questions drawn from the official Kentico Xperience documentation, scored by Qwen3:32b as an independent judge (0–10 scale):

| Metric | Result |
| --- | --- |
| Average score | 5.7 / 10 |
| Score ≥ 7 rate | 40% (12 / 30) |
| Average response time | 1.4 s |
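The aggregate metrics above can be reproduced from a list of per-question judge scores. A minimal sketch (the score list below is illustrative only; the real per-question scores are not published):

```python
def summarize_scores(scores, threshold=7):
    """Aggregate 0-10 judge scores into an average and a pass rate."""
    average = sum(scores) / len(scores)
    pass_count = sum(1 for s in scores if s >= threshold)
    return {
        "average": round(average, 1),
        "pass_count": pass_count,
        "pass_rate": pass_count / len(scores),
    }

# Illustrative scores, not the actual evaluation data.
example = [8, 7, 3, 9, 5, 6, 7, 4, 8, 2]
print(summarize_scores(example))
```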

The 0.1 versioning is intentional and honest — the model is functional and useful for many queries, but there are known dataset quality issues that will be addressed in future iterations.


Known Issues & Planned Improvements

The following problems were identified through systematic evaluation and are documented here for full transparency:

Format Issues

  • Question echo as heading — Most responses start with ## {question text} or ### {question text}. This is caused by training examples where assistant answers included the question as a heading.
    Fix: strip heading prefixes from all assistant turns in the training data.

  • One-sentence truncated answers — Some responses end abruptly after a single sentence (e.g., "Xperience gives you complete control over your content.").
    Fix: enforce minimum response depth in training examples.
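Both fixes could be applied in a single preprocessing pass over the training data. A hedged sketch, where the heading pattern and the minimum-length threshold are assumptions about the dataset, not a committed implementation:

```python
import re

# Matches a leading markdown heading line, e.g. "## How do I ...?\n\n"
HEADING_PREFIX = re.compile(r"^#{1,6}\s+.*\n+")

def clean_assistant_turn(text, min_chars=80):
    """Strip a leading markdown heading and drop answers that are too short.

    Returns the cleaned text, or None if the example should be filtered out.
    """
    cleaned = HEADING_PREFIX.sub("", text, count=1).strip()
    return cleaned if len(cleaned) >= min_chars else None

example = "## How do I create a content type?\n\nOpen the Content types application..."
print(clean_assistant_turn(example, min_chars=10))
```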

Factual Errors

| Topic | Error |
| --- | --- |
| Headless draft vs. published | Incorrectly states that draft items are accessible via the headless API; only published items are |
| Content sync — image variants | Gave an irrelevant e-commerce paragraph instead of answering the actual question |
| Automation license tier | Incomplete or incorrect license tier information |
| Email channel license tier | Wrong license threshold stated |

Terminology Inconsistencies

  • Model uses "Asset tiles" instead of the correct term "content item assets"
  • Inconsistent usage of "Content Hub" vs. older naming conventions from previous Kentico versions
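A simple normalization pass over the dataset could enforce canonical terminology before retraining. A sketch, where the replacement map is only a starting point to be extended as more inconsistencies are found:

```python
import re

# Canonical Xperience terminology; keys are regex patterns for known
# non-canonical variants found during evaluation.
TERM_MAP = {
    r"\bAsset tiles\b": "content item assets",
}

def normalize_terms(text):
    """Replace known non-canonical terms with their official equivalents."""
    for pattern, canonical in TERM_MAP.items():
        text = re.sub(pattern, canonical, text, flags=re.IGNORECASE)
    return text

print(normalize_terms("Upload images as Asset tiles in the Content hub."))
```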

Weak Topic Coverage

The following topics scored lowest and need additional training examples:

| Topic | Issue |
| --- | --- |
| Pages vs. Content items | Core conceptual difference covered too superficially in training data |
| Content sync — image variants | Too few specific examples in the dataset |
| Headless draft / publish lifecycle | Frequently misunderstood; needs correct, emphatic examples |
| License tier comparisons (Automation, Email) | License feature tables not well represented in training data |
| Smart Folder creation (step-by-step) | Procedural steps missing from examples |

Capabilities

  • Chat: Answer questions about Kentico Xperience development, content management, digital marketing, e-commerce, and best practices
  • Multilingual: English (primary) + inherited multilingual capabilities from Mistral-Nemo base

Training Data

Fine-tuned on the official Kentico Xperience documentation.

The full training dataset is available at omerkaragulmez/XbyK-0.1-dataset.

Usage

from transformers import pipeline

# Load the fine-tuned model; bfloat16 keeps the 12B weights at roughly 24 GB.
pipe = pipeline(
    "text-generation",
    model="omerkaragulmez/XbyK-0.1",
    torch_dtype="bfloat16",
    device_map="auto",
)

messages = [
    {"role": "user", "content": "How do I create a content type in Kentico Xperience?"}
]

# A low temperature keeps answers close to the documentation.
response = pipe(messages, max_new_tokens=512, temperature=0.3)
print(response[0]["generated_text"][-1]["content"])

With Ollama (recommended)

The quantized GGUF (gguf/XbyK-0.1-Q4_K_M.gguf) is available in this repo and can be used directly with Ollama:

ollama create xbyk-0.1 -f gguf/Modelfile
ollama run xbyk-0.1 "How do I use the Delivery API?"
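If you need to recreate the Modelfile yourself, a minimal sketch might look like the following. This is an assumption for illustration; the Modelfile shipped in the repo is the authoritative version and may set a chat template and other parameters:

```
FROM ./XbyK-0.1-Q4_K_M.gguf
PARAMETER temperature 0.3
```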

Training Details

  • Base model: mistralai/Mistral-Nemo-Instruct-2407 (12B parameters)
  • Method: LoRA (Low-Rank Adaptation)
  • Hardware: 2× NVIDIA GH200
  • Framework: HuggingFace TRL + PEFT
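The core idea behind LoRA is that the frozen base weight W is left untouched while a low-rank product B @ A is trained and added on top, scaled by alpha / r. A NumPy sketch of the arithmetic (dimensions and hyperparameters here are illustrative, not the values used for this model):

```python
import numpy as np

# LoRA in a nutshell: instead of updating a frozen weight W (d_out x d_in),
# train a low-rank delta B @ A with rank r << min(d_out, d_in).
rng = np.random.default_rng(0)
d_out, d_in, r, alpha = 64, 64, 8, 16

W = rng.standard_normal((d_out, d_in))   # frozen base weight
A = rng.standard_normal((r, d_in))       # trainable down-projection
B = np.zeros((d_out, r))                 # trainable up-projection (init to 0)

# Effective weight at inference time; with B = 0 the model starts unchanged.
W_eff = W + (alpha / r) * B @ A
print(np.allclose(W_eff, W))
```

Initializing B to zero means the adapted model is identical to the base model at step 0, which is why LoRA training is stable from the start.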

Disclaimer

Xperience by Kentico™ is a registered trademark of Kentico a.s. This project is an independent community research effort and has no commercial intent. All documentation used for training is publicly available at docs.kentico.com.
