SURESHBEEKHANI committed on
Commit b8fabb7 · verified · 1 parent: 02e5782

Update README.md

Files changed (1)
  1. README.md +5 -1
README.md CHANGED
@@ -79,4 +79,8 @@ Typical use cases include:
 Stored the fine-tuned model, including **LoRA adapters**, for easy future use and deployment.
 
 10. **Model Deployment:**
-Pushed the fine-tuned model to **Hugging Face Hub** in **GGUF format** with 4-bit quantization enabled for efficient use.
+Pushed the fine-tuned model to **Hugging Face Hub** in **GGUF format** with 4-bit quantization enabled for efficient use.
+
+## Notebook
+
+Access the implementation notebook for this model [here](https://github.com/SURESHBEEKHANI/Advanced-LLM-Fine-Tuning/blob/main/Deep-seek-R1-Medical-reasoning-SFT.ipynb). This notebook provides detailed steps for fine-tuning and deploying the model.