# phi-3-mini-instruct-128K-APPS-F16

Fine-tuned Phi-3-mini-128K-instruct model specialized for reasoning and coding tasks.

## 🚀 Model Details

- **Base Model**: [microsoft/Phi-3-mini-128k-instruct](https://huggingface.co/microsoft/Phi-3-mini-128k-instruct)
- **Adapter Used**: [AdnanRiaz107/CodePhi-3-mini-128k-instruct-APPS](https://huggingface.co/AdnanRiaz107/CodePhi-3-mini-128k-instruct-APPS)
- **Architecture**: Transformer-based language model
- **Context Length**: 128K tokens
- **Parameters**: ~4B (F16 safetensors)
- **Specialization**: Enhanced for complex reasoning and programming tasks

## 📊 Base Model Specifications

For complete technical specifications, hardware requirements, and performance characteristics, please refer to the official base model repository:
**[microsoft/Phi-3-mini-128k-instruct](https://huggingface.co/microsoft/Phi-3-mini-128k-instruct)**

## 🛠️ Training Approach

This model was created by applying the **CodePhi-3-mini-128k-instruct-APPS** adapter to the base Phi-3 model, producing weights optimized for coding and reasoning tasks while preserving the original 128K context window.
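In sketch form, the published weights can be reproduced by loading the base model, attaching the adapter, and merging it back into the full weights with PEFT's `merge_and_unload`. The repo IDs below come from this card; the function name `merge_adapter_into_base` and the output path are illustrative, and imports are deferred into the function so the sketch reads standalone:

```python
def merge_adapter_into_base(base_id, adapter_id, output_dir):
    """Sketch: merge a PEFT adapter into its base model and save the
    full merged weights (roughly how this repository was produced)."""
    # Imports are inside the function so the sketch can be read/imported
    # without transformers/peft installed.
    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer
    from peft import PeftModel

    base = AutoModelForCausalLM.from_pretrained(
        base_id, torch_dtype=torch.float16, trust_remote_code=True
    )
    # Attach the adapter, then fold its deltas into the base weights
    merged = PeftModel.from_pretrained(base, adapter_id).merge_and_unload()
    merged.save_pretrained(output_dir)  # writes sharded .safetensors
    AutoTokenizer.from_pretrained(
        base_id, trust_remote_code=True
    ).save_pretrained(output_dir)

# Example (downloads several GB of weights):
# merge_adapter_into_base(
#     "microsoft/Phi-3-mini-128k-instruct",
#     "AdnanRiaz107/CodePhi-3-mini-128k-instruct-APPS",
#     "./phi-3-mini-instruct-128K-APPS-F16",
# )
```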

## 🔧 Usage

### Direct Inference
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model = AutoModelForCausalLM.from_pretrained(
    "4iqq/phi-3-mini-instruct-128K-APPS-F16",
    torch_dtype=torch.float16,
    device_map="auto",
    trust_remote_code=True
)
tokenizer = AutoTokenizer.from_pretrained(
    "4iqq/phi-3-mini-instruct-128K-APPS-F16",
    trust_remote_code=True
)
```

### Convert to GGUF

```bash
python convert-hf-to-gguf.py 4iqq/phi-3-mini-instruct-128K-APPS-F16 --outtype f16
```

### Further Fine-tuning

```python
from transformers import AutoModelForCausalLM
from peft import PeftModel

# PeftModel.from_pretrained expects a loaded model, not a repo ID:
# load the base model first, then attach the adapter weights.
base_model = AutoModelForCausalLM.from_pretrained(
    "microsoft/Phi-3-mini-128k-instruct",
    trust_remote_code=True
)
model = PeftModel.from_pretrained(
    base_model,
    "4iqq/phi-3-mini-instruct-128K-APPS-F16"
)
```
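For inference, prompts should follow the Phi-3 chat template. In practice `tokenizer.apply_chat_template` handles this automatically; the sketch below builds the format by hand, assuming the standard Phi-3 special tokens `<|user|>`, `<|assistant|>`, and `<|end|>` (`build_phi3_prompt` is an illustrative helper, not part of this repo):

```python
def build_phi3_prompt(messages):
    """Render a list of {"role", "content"} dicts into a Phi-3-style prompt."""
    parts = []
    for msg in messages:
        # Each turn is wrapped as <|role|>\n...content...<|end|>\n
        parts.append(f"<|{msg['role']}|>\n{msg['content']}<|end|>\n")
    parts.append("<|assistant|>\n")  # cue the model to start its reply
    return "".join(parts)

prompt = build_phi3_prompt([
    {"role": "user", "content": "Write a function that reverses a string."}
])
```

Pass the rendered string through `tokenizer(...)` and `model.generate(...)` to obtain completions.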

## 📁 Repository Structure

This repository contains:

- Sharded model weights (`model-0000x-of-0000x.safetensors`)
- Complete tokenizer files
- Model configuration
- Training adapters for further fine-tuning

## 🙏 Acknowledgments

- Microsoft for the base Phi-3-mini-128k-instruct model
- AdnanRiaz107 for the original CodePhi-3 adapter

## ⚠️ Note

Model weights are provided in sharded format to support:

- Direct GGUF conversion
- Additional fine-tuning
- Flexible deployment options

## 📄 License

Inherited from the base model - refer to [microsoft/Phi-3-mini-128k-instruct](https://huggingface.co/microsoft/Phi-3-mini-128k-instruct) for license details.

