---
license: apache-2.0
base_model: Qwen/Qwen2.5-7B-Instruct
tags:
- game-theory
- formulation
- qwen2
- lora
- qlora
- sft
- economics
- strategic-reasoning
- math
- decision-theory
library_name: peft
pipeline_tag: text-generation
datasets:
- Alogotron/GameTheory-Formulator
language:
- en
model-index:
- name: GameTheory-Formulator-Model
results:
- task:
type: text-generation
name: Game Theory Formulation
dataset:
name: GameTheory-Formulator
type: Alogotron/GameTheory-Formulator
metrics:
- name: Valid Formulation Rate
type: accuracy
value: 100.0
- name: Eval Loss
type: loss
value: 0.8492
- name: Train Loss
type: loss
value: 1.0992
---
# 🎯 GameTheory-Formulator-Model
**Phase 3 of the Alogotron Game Theory AI Pipeline**: a QLoRA adapter that teaches language models to translate real-world scenarios into formal game theory formulations.
## Overview
| Property | Value |
|---|---|
| **Base Model** | [Qwen/Qwen2.5-7B-Instruct](https://huggingface.co/Qwen/Qwen2.5-7B-Instruct) |
| **Method** | QLoRA (4-bit NF4 quantization + LoRA) |
| **Task** | Real-world scenario → Formal game theory formulation |
| **Dataset** | [Alogotron/GameTheory-Formulator](https://huggingface.co/datasets/Alogotron/GameTheory-Formulator) (1,215 examples) |
| **Training** | SFT, 1 epoch, ~24 minutes on 2x RTX 3090 |
| **Eval Accuracy** | **100.0% valid formulations** on a 20-example held-out set |
## The Alogotron Game Theory Pipeline
This model is part of a 3-phase training pipeline:
| Phase | Model | Task | Method |
|---|---|---|---|
| Phase 1 | [GameTheory-Solver](https://huggingface.co/Alogotron/GameTheory-Solver) | Solve formal GT problems | SFT on 2,913 problems → 94% accuracy |
| Phase 2 | [GameTheory-Reasoner](https://huggingface.co/Alogotron/GameTheory-Reasoner) | Enhanced reasoning | GRPO on same dataset |
| **Phase 3** | **GameTheory-Formulator** (this model) | **Real-world → formal GT** | **SFT on 1,215 formulation problems** |
## What This Model Does
Given a real-world scenario (business competition, political negotiation, security analysis, etc.), this model:
1. **📝 Formulation Steps**: walks through the reasoning needed to identify the game structure
2. **🎮 Formal Game Model**: identifies the players, strategies, payoffs, information structure, and solution concept
3. **🧮 Solution**: solves the formulated game (Nash equilibrium, dominant strategies, etc.)
4. **🌍 Real-World Interpretation**: translates the mathematical solution back into actionable insights
## Training Details
### QLoRA Configuration
| Parameter | Value |
|---|---|
| LoRA rank (r) | 32 |
| LoRA alpha | 64 |
| Target modules | q_proj, k_proj, v_proj, o_proj, gate_proj, up_proj, down_proj |
| Quantization | 4-bit NF4 with double quantization |
| Trainable params | 80.7M / 7.7B (1.05%) |
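The exact training script is not published; the following is a minimal sketch of the configuration above using `peft` and `bitsandbytes` (dropout and bias settings are assumptions, as the table does not list them):

```python
import torch
from peft import LoraConfig
from transformers import BitsAndBytesConfig

# 4-bit NF4 quantization with double quantization, as in the table above
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
)

# LoRA on all attention and MLP projections
lora_config = LoraConfig(
    r=32,
    lora_alpha=64,
    target_modules=[
        "q_proj", "k_proj", "v_proj", "o_proj",
        "gate_proj", "up_proj", "down_proj",
    ],
    lora_dropout=0.05,   # assumption: not listed in the table
    bias="none",         # assumption: not listed in the table
    task_type="CAUSAL_LM",
)
```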
### Training Hyperparameters
| Parameter | Value |
|---|---|
| Epochs | 1 |
| Batch size (per device) | 2 |
| Gradient accumulation | 4 |
| Effective batch size | 16 |
| Learning rate | 5e-5 (cosine schedule) |
| Optimizer | paged_adamw_8bit |
| Max sequence length | 2048 |
| Packing | Enabled |
| Gradient checkpointing | Enabled |
| Hardware | 2x NVIDIA RTX 3090 (24GB each) |
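A minimal sketch of the SFT run under these hyperparameters, reusing `bnb_config` and `lora_config` from the QLoRA sketch above (argument names follow `trl`'s `SFTConfig`; split names are assumptions):

```python
from datasets import load_dataset
from transformers import AutoModelForCausalLM
from trl import SFTConfig, SFTTrainer

base_model = AutoModelForCausalLM.from_pretrained(
    "Qwen/Qwen2.5-7B-Instruct",
    quantization_config=bnb_config,   # defined in the QLoRA sketch above
    device_map="auto",
)
dataset = load_dataset("Alogotron/GameTheory-Formulator")  # split names assumed

training_args = SFTConfig(
    output_dir="gametheory-formulator",
    num_train_epochs=1,
    per_device_train_batch_size=2,
    gradient_accumulation_steps=4,    # 2 GPUs x 2 per device x 4 steps = 16 effective
    learning_rate=5e-5,
    lr_scheduler_type="cosine",
    optim="paged_adamw_8bit",
    max_seq_length=2048,
    packing=True,
    gradient_checkpointing=True,
)

trainer = SFTTrainer(
    model=base_model,
    args=training_args,
    train_dataset=dataset["train"],
    peft_config=lora_config,          # defined in the QLoRA sketch above
)
trainer.train()
```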
### Training Metrics
| Metric | Value |
|---|---|
| Train loss | 1.0992 |
| Eval loss | 0.8492 |
| Training time | 24.3 minutes |
| Dataset size | 1,215 examples |
| Train split | 1,093 examples |
| Eval split | 122 examples |
## Evaluation Results
Tested on **20 held-out examples** across 6 domains and 3 difficulty levels:
| Metric | Score |
|---|---|
| **Valid Formulations** | **100.0%** |
| All sections present | 100.0% |
| All GT elements identified | 100.0% |
| Avg response length | 1,821 chars |
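A response counts as a valid formulation when all four output sections are present. The exact evaluation script is not published; a minimal sketch of such a check, assuming the section headers shown in the example output below:

```python
# Hedged sketch of the validity check: a response counts as valid when all
# four output sections appear (the exact evaluation criteria may differ)
REQUIRED_SECTIONS = (
    "## Formulation Steps",
    "## Formal Game Model",
    "## Solution",
    "## Real-World Interpretation",
)

def is_valid_formulation(response: str) -> bool:
    return all(section in response for section in REQUIRED_SECTIONS)
```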
### By Domain
| Domain | Valid |
|---|---|
| Business | 8/8 (100%) |
| Security | 5/5 (100%) |
| Politics | 2/2 (100%) |
| Auctions | 2/2 (100%) |
| Technology | 2/2 (100%) |
| Social | 1/1 (100%) |
### By Difficulty
| Difficulty | Valid |
|---|---|
| Easy | 5/5 (100%) |
| Medium | 9/9 (100%) |
| Hard | 6/6 (100%) |
## Usage
### With PEFT + Transformers
```python
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from peft import PeftModel
import torch

# Load the base model in 4-bit (NF4), matching the training configuration
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
    bnb_4bit_use_double_quant=True,
)
base_model = AutoModelForCausalLM.from_pretrained(
    "Qwen/Qwen2.5-7B-Instruct",
    quantization_config=bnb_config,
    device_map="auto",
)
tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen2.5-7B-Instruct")

# Attach the Formulator adapter
model = PeftModel.from_pretrained(base_model, "Alogotron/GameTheory-Formulator-Model")
model.eval()

# Build a chat prompt
messages = [
    {"role": "system", "content": "You are a game theory expert. Given a real-world scenario, formulate it as a formal game theory model. Identify the players, strategies, payoffs, and information structure. Then solve the game and interpret the results."},
    {"role": "user", "content": "Two coffee shops on the same street must decide whether to offer a loyalty program. If both offer it, they split customers evenly but incur costs. If neither offers it, they split evenly with no extra cost. If only one offers it, that shop attracts 70% of customers."},
]
text = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
inputs = tokenizer(text, return_tensors="pt").to(model.device)

# Generate; do_sample=True is required for temperature/top_p to take effect
with torch.no_grad():
    outputs = model.generate(**inputs, max_new_tokens=1024, do_sample=True, temperature=0.3, top_p=0.9)

response = tokenizer.decode(outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True)
print(response)
```
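For deployment without the PEFT runtime, the adapter can be merged into a full-precision copy of the base model (merging into a 4-bit-quantized base is not supported), a sketch of which follows:

```python
# Optional: merge the LoRA adapter into a full-precision base for standalone use
import torch
from transformers import AutoModelForCausalLM
from peft import PeftModel

base = AutoModelForCausalLM.from_pretrained(
    "Qwen/Qwen2.5-7B-Instruct",
    torch_dtype=torch.bfloat16,
    device_map="auto",
)
merged = PeftModel.from_pretrained(base, "Alogotron/GameTheory-Formulator-Model")
merged = merged.merge_and_unload()  # folds the LoRA deltas into the base weights
merged.save_pretrained("gametheory-formulator-merged")
```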
### Example Output
**Input Scenario:**
> Two airline companies, Stellar and Haven, each control roughly half the market. They are independently deciding their pricing for the upcoming quarter...
**Model Output:**
> ## Formulation Steps
>
> Step 1 - Stellar and Haven are each making pricing decisions that directly affect each other's profits...
>
> Step 2 - Players: Stellar and Haven...
>
> Step 3 - Strategies: Each firm can either 'Maintain Prices' or 'Cut Prices'...
>
> ## Formal Game Model
>
> **Game Type:** Simultaneous
> **Players:** Stellar, Haven
> **Strategies:** Maintain Prices, Cut Prices
> **Payoffs:** Both Maintain: (54, 54), Both Cut: (18, 18)...
> **Solution Concept:** Nash Equilibrium
>
> ## Solution
>
> Both firms will cut prices. Cutting is a dominant strategy for each...
>
> ## Real-World Interpretation
>
> This is a classic Prisoner's Dilemma. Both companies rationally choose to cut prices, resulting in lower profits than cooperation would yield...
## Dataset
Trained on [Alogotron/GameTheory-Formulator](https://huggingface.co/datasets/Alogotron/GameTheory-Formulator): 1,215 expert-crafted formulation problems across 6 domains:
- **Business** (290): Pricing, market entry, production, R&D, supply chain
- **Security** (230): Cybersecurity, threat modeling, defense allocation
- **Politics** (195): Elections, negotiations, voting, international relations
- **Social** (190): Social dilemmas, public goods, coordination, trust
- **Technology** (165): Platform competition, standards, adoption, innovation
- **Auctions** (145): First-price, second-price, common value, combinatorial
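The dataset can be inspected with the `datasets` library (split and field names are assumptions; check the dataset card):

```python
from datasets import load_dataset

ds = load_dataset("Alogotron/GameTheory-Formulator")
print(ds)              # splits and sizes
print(ds["train"][0])  # assumes a "train" split; see the dataset card for fields
```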
## Related Models & Datasets
| Resource | Link |
|---|---|
| Phase 1: Solver Model | [Alogotron/GameTheory-Solver](https://huggingface.co/Alogotron/GameTheory-Solver) |
| Phase 2: Reasoner Model | [Alogotron/GameTheory-Reasoner](https://huggingface.co/Alogotron/GameTheory-Reasoner) |
| Solver Dataset | [Alogotron/GameTheory-Bench](https://huggingface.co/datasets/Alogotron/GameTheory-Bench) |
| Formulator Dataset | [Alogotron/GameTheory-Formulator](https://huggingface.co/datasets/Alogotron/GameTheory-Formulator) |
## Limitations
- Trained on synthetic formulation data; may not handle all real-world edge cases
- Formulation quality depends on scenario clarity and completeness
- Best suited for classical game theory formulations (simultaneous, sequential, auctions)
- Does not cover cooperative game theory or mechanism design (yet)
## Citation
```bibtex
@misc{alogotron-formulator-2025,
title={GameTheory-Formulator-Model: Real-World Scenario to Game Theory Formulation},
author={Alogotron},
year={2025},
publisher={HuggingFace},
url={https://huggingface.co/Alogotron/GameTheory-Formulator-Model}
}
```