---
base_model: unsloth/qwen3-4b-instruct-2507-unsloth-bnb-4bit
tags:
- text-generation-inference
- transformers
- unsloth
- qwen3
license: apache-2.0
language:
- en
---

## Model Description

This model extracts structured Z-number decision matrices from conversational text describing multi-criteria decision problems. Given a natural-language narrative about alternatives, criteria, and preferences (often messy, subjective, or contradictory), the model outputs a markdown table with:

- **Alternatives** (e.g., train, flight, driving)
- **Criteria** (e.g., cost, comfort, reliability)
- **Z-number ratings** in `value:confidence` format (e.g., `4:3` = good rating with moderate confidence)

Z-numbers extend traditional fuzzy numbers by incorporating reliability/confidence, making them ideal for real-world decision-making under uncertainty.
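As an illustration, here is a minimal sketch of turning the model's markdown output into a structured matrix. The table layout (first column = alternative, remaining columns = criteria) and the helper name `parse_z_matrix` are assumptions for this example, not part of the released pipeline:

```python
import re

def parse_z_matrix(markdown: str) -> dict:
    """Parse a markdown table of `value:confidence` Z-number ratings.

    Assumed layout: first column names the alternative, the remaining
    columns hold one `value:confidence` rating per criterion.
    """
    rows = [line.strip() for line in markdown.strip().splitlines()
            if line.strip().startswith("|")]
    header = [c.strip() for c in rows[0].strip("|").split("|")]
    criteria = header[1:]
    matrix = {}
    for row in rows[2:]:  # skip the header and `|---|` separator rows
        cells = [c.strip() for c in row.strip("|").split("|")]
        alternative, ratings = cells[0], cells[1:]
        matrix[alternative] = {
            crit: tuple(int(x) for x in re.fullmatch(r"(\d+):(\d+)", r).groups())
            for crit, r in zip(criteria, ratings)
        }
    return matrix

table = """
| Alternative | Cost | Comfort | Reliability |
|---|---|---|---|
| Train | 4:3 | 3:4 | 4:2 |
| Flight | 2:4 | 4:3 | 3:3 |
"""
print(parse_z_matrix(table)["Train"]["Cost"])  # (4, 3)
```

Each rating becomes a `(value, confidence)` tuple, keyed by alternative and criterion, ready to feed into a downstream MCDM step.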

## Intended Use

The extracted matrix can be analyzed using Z-number-based MCDM methods (TOPSIS, PROMETHEE) to produce ranked alternatives. See [text2mcdm](https://github.com/MahammadNuriyev62/text2mcdm) for the full pipeline.

## Training

- **Base model**: Qwen/Qwen3-4B-Instruct-2507
- **Method**: LoRA fine-tuning with Unsloth
- **Data**: [nuriyev/text2mcdm](https://huggingface.co/datasets/nuriyev/text2mcdm) (~600 synthetic decision narratives generated via the Gemini API)

[<img src="https://raw.githubusercontent.com/unslothai/unsloth/main/images/unsloth%20made%20with%20love.png" width="200"/>](https://github.com/unslothai/unsloth)
|