# Fine-tuned DeepSeek-R1-Distill-Qwen-14B

This Space hosts a fine-tuned version of the [unsloth/DeepSeek-R1-Distill-Qwen-14B-bnb-4bit](https://huggingface.co/unsloth/DeepSeek-R1-Distill-Qwen-14B-bnb-4bit) model.

## Model Details

- **Base Model**: `unsloth/DeepSeek-R1-Distill-Qwen-14B-bnb-4bit`
- **Fine-tuned on**: `phi4-cognitive-dataset`
- **Quantization**: Already 4-bit quantized (no additional quantization applied)

## Current Status

This Space is currently being prepared; the fine-tuned model will be available soon.

## Usage

Once deployed, you can interact with the model through the Gradio interface or programmatically via the Gradio API.
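
A minimal sketch of a programmatic call using the `gradio_client` package. The Space id, the `api_name`, and the `message` parameter name are placeholders/assumptions to adjust once the Space is live, not confirmed details of this deployment:

```python
def build_request(prompt: str) -> dict:
    """Package a prompt as keyword arguments for Client.predict.

    The "message" parameter name is an assumption about the Gradio app's
    endpoint signature; check the live Space's API page for the real one.
    """
    return {"message": prompt.strip()}


def query_space(prompt: str, space_id: str = "your-username/your-space"):
    """Send a prompt to the Space's hosted Gradio app and return the reply.

    Requires `pip install gradio_client`; the space id and api_name are
    placeholders to replace once the Space is deployed.
    """
    from gradio_client import Client  # optional dependency, imported lazily

    client = Client(space_id)
    return client.predict(**build_request(prompt), api_name="/predict")
```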

## Training Process

The model is being fine-tuned with the following specifications:
- Training dataset processed in ascending order by `prompt_number`
- Custom training parameters optimized for the L40S GPU
- Mixed precision training for optimal performance
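
The ascending-by-`prompt_number` ordering can be sketched in plain Python (with the Hugging Face `datasets` library, the equivalent one-liner would be `dataset.sort("prompt_number")`); the row/field layout below is an assumption for illustration:

```python
def order_by_prompt_number(rows):
    """Return training rows sorted ascending by their prompt_number field."""
    return sorted(rows, key=lambda row: row["prompt_number"])


# Hypothetical rows standing in for phi4-cognitive-dataset records:
examples = [
    {"prompt_number": 3, "text": "third prompt"},
    {"prompt_number": 1, "text": "first prompt"},
    {"prompt_number": 2, "text": "second prompt"},
]
ordered = order_by_prompt_number(examples)
```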

## Contact

For questions or issues, please reach out through the [Hugging Face community](https://huggingface.co/discussions).