---
license: apache-2.0
base_model:
- microsoft/Phi-3-mini-4k-instruct
tags:
- gguf
- phi3
- finetuned
- llama.cpp
- ollama
- legal-assistant
language:
- en
---
# Phi-3 Mini Fine-Tuned (GGUF): Legal Assistant
This is a **LoRA fine-tuned** version of `microsoft/phi-3-mini-4k-instruct` converted to **GGUF format** for use with `llama.cpp`, `Ollama`, or compatible runtimes.
It was trained on legal documents to act as a **context-aware legal assistant** that can answer questions from uploaded contracts and policies.
## Model Details
- **Base model**: `microsoft/phi-3-mini-4k-instruct`
- **Fine-tuned with**: [LoRA](https://huggingface.co/docs/peft/index) (PEFT) + [TRL's `SFTTrainer`](https://huggingface.co/docs/trl)
- **Converted to GGUF using**: `convert_hf_to_gguf.py` from `llama.cpp`
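For reference, a typical conversion workflow looks like the sketch below (the paths, output filename, and `f16` output type are assumptions — adapt them to your `llama.cpp` checkout and merged model directory):

```bash
# Merge the LoRA adapter into the base model first (e.g. with PEFT's
# merge_and_unload), then convert the merged HF checkpoint to GGUF.
python llama.cpp/convert_hf_to_gguf.py ./phi3-merged \
    --outfile phi3-finetuned.gguf \
    --outtype f16
```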
## How to Use
### With `llama.cpp`
```bash
# Note: recent llama.cpp builds name this binary `llama-cli` instead of `main`
./main -m phi3-finetuned.gguf -p "What rights does this contract give me?"
```
### With Python + `llama-cpp-python`
```python
from llama_cpp import Llama

# Load the GGUF model; completions are returned as an OpenAI-style dict
llm = Llama(model_path="phi3-finetuned.gguf")
output = llm("Summarize the terms of this agreement.")
print(output["choices"][0]["text"])
```
### With Ollama (if merged)
```bash
ollama create phi3-legal -f Modelfile
ollama run phi3-legal
```
---
## Use Cases
This fine-tuned model is intended for legal document analysis and Q&A applications.
**Example questions it can answer:**
- _"Can this agreement be terminated without prior notice?"_
- _"Do I have refund rights under this policy?"_
- _"What are the obligations mentioned in clause 3?"_
- _"Is there an arbitration clause in this contract?"_
It summarizes and interprets clauses based on the uploaded text context; its answers are informational explanations only and do not constitute legal advice.
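As a sketch of how document context can be supplied at inference time, the helper below prepends contract text to the user's question in the Phi-3 chat format (`<|user|>` … `<|end|>` … `<|assistant|>`) used by the base model. The function name and the truncation limit are illustrative, not part of this repository:

```python
def build_legal_prompt(document: str, question: str, max_doc_chars: int = 6000) -> str:
    """Return a single Phi-3-style prompt with the document as in-context evidence."""
    excerpt = document[:max_doc_chars]  # crude truncation to stay within the 4k context
    return (
        "<|user|>\n"
        "You are a legal assistant. Using only the document below, "
        "answer the question. This is not legal advice.\n\n"
        f"Document:\n{excerpt}\n\n"
        f"Question: {question}<|end|>\n"
        "<|assistant|>\n"
    )

prompt = build_legal_prompt(
    "Clause 3: The tenant shall maintain the premises ...",
    "What are the obligations mentioned in clause 3?",
)
```

The resulting string can be passed directly as the prompt in the `llama.cpp` and `llama-cpp-python` examples above.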
---
## Files
| File | Description |
|--------------------------|------------------------------------------|
| `phi3-finetuned.gguf` | The GGUF format model file for inference |
| `README.md` | Description and usage guide (this file) |
| `Modelfile` *(optional)* | Ollama model recipe (if you use Ollama) |
---
## Credits
- **Project**: [DocuAnalyzer AI](https://huggingface.co/spaces/VGreatVig07/Docu_Analyzer)
- **Author**: Vighnesh M S ([@VGreatVig07](https://huggingface.co/VGreatVig07))
- **Fine-tuning**: Performed using Hugging Face `transformers`, `trl`, and `PEFT` (LoRA)
- **Conversion**: Model converted to `.gguf` format using `llama.cpp`'s `convert_hf_to_gguf.py`
Thanks to open-source contributions from:
- Microsoft (Phi-3 base model)
- Hugging Face ecosystem
- llama.cpp team