qwen2.5-coder-minimax-lora

Model Overview

qwen2.5-coder-minimax-lora is a LoRA fine-tuned version of Qwen2.5-Coder-7B, optimized for algorithmic reasoning and structured code generation tasks.

The model was fine-tuned on the MiniMax-M2.1-Code-SFT dataset to improve recursive reasoning, game-tree evaluation, and the implementation of Minimax-based algorithms.

This project demonstrates parameter-efficient fine-tuning (PEFT) using LoRA with 4-bit quantization for memory-efficient training.
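
As a sketch of what that setup looks like with the standard transformers/peft/bitsandbytes stack (the hyperparameters below are illustrative assumptions, not the exact values used to train this model):

```python
# Illustrative QLoRA configuration (hyperparameters are assumptions,
# not the exact values used to train this model).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training

base_id = "Qwen/Qwen2.5-Coder-7B"

# 4-bit NF4 quantization keeps the frozen base weights small during training.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
    bnb_4bit_use_double_quant=True,
)

tokenizer = AutoTokenizer.from_pretrained(base_id)
model = AutoModelForCausalLM.from_pretrained(
    base_id, quantization_config=bnb_config, device_map="auto"
)
model = prepare_model_for_kbit_training(model)

# Low-rank adapters on the attention projections; r and alpha are illustrative.
lora_config = LoraConfig(
    r=16,
    lora_alpha=32,
    lora_dropout=0.05,
    bias="none",
    task_type="CAUSAL_LM",
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # only the LoRA adapter weights are trainable
```

With a configuration like this, only the low-rank adapter weights are updated while the quantized base stays frozen, which is what makes fine-tuning a 7B model feasible in limited GPU memory.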

🔹 Base Model

Base: Qwen/Qwen2.5-Coder-7B

Architecture: Decoder-only Transformer

Specialization: Code generation

Quantization: 4-bit (QLoRA during training; a loading sketch follows this list)
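
For inference, the adapter can be attached to the 4-bit base with peft. A minimal sketch, where the repo ids come from this card but the quantization settings are assumptions:

```python
# Loading the published adapter on the 4-bit quantized base for inference.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from peft import PeftModel

base_id = "Qwen/Qwen2.5-Coder-7B"
adapter_id = "Piyu12/qwen2.5-coder-minimax-lora"

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
)

tokenizer = AutoTokenizer.from_pretrained(base_id)
base = AutoModelForCausalLM.from_pretrained(
    base_id, quantization_config=bnb_config, device_map="auto"
)
model = PeftModel.from_pretrained(base, adapter_id)  # attaches the LoRA weights
```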

🔹 Capabilities

The model demonstrates improved performance in the following areas (a short generation example follows the list):

Recursive algorithm generation

Minimax implementation

Game-tree reasoning

Backtracking logic

Structured Python code output

Algorithmic problem solving
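
Continuing from the loading sketch above, a hedged example of the kind of prompt these capabilities target; the prompt and sampling settings are illustrative, not recommended defaults:

```python
# Continues from the loading sketch above; prompt and sampling settings
# are illustrative, not recommended defaults.
prompt = (
    "Write a Python function minimax(board, depth, is_maximizing) that "
    "evaluates tic-tac-toe positions using minimax with alpha-beta pruning."
)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=512, do_sample=True, temperature=0.2)
# Strip the prompt tokens and print only the generated completion.
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```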

🔹 Limitations

Fine-tuned on a relatively small subset of the dataset (200 samples).

Optimized primarily for algorithmic reasoning tasks.

May still exhibit base-model behavior in domains unrelated to algorithmic reasoning.

Does not include a reinforcement learning alignment stage (e.g., RLHF).
