How to use from llama.cpp

Install from WinGet (Windows):

winget install llama.cpp

Install from Homebrew (macOS/Linux):

brew install llama.cpp

# Start a local OpenAI-compatible server with a web UI:
llama-server -hf sairamn/Phi3-Legal-Finetuned

# Run inference directly in the terminal:
llama-cli -hf sairamn/Phi3-Legal-Finetuned

Use a pre-built binary:

# Download a pre-built binary from:
# https://github.com/ggerganov/llama.cpp/releases

# Start a local OpenAI-compatible server with a web UI:
./llama-server -hf sairamn/Phi3-Legal-Finetuned

# Run inference directly in the terminal:
./llama-cli -hf sairamn/Phi3-Legal-Finetuned

Build from source:

git clone https://github.com/ggerganov/llama.cpp.git
cd llama.cpp
cmake -B build
cmake --build build -j --target llama-server llama-cli

# Start a local OpenAI-compatible server with a web UI:
./build/bin/llama-server -hf sairamn/Phi3-Legal-Finetuned

# Run inference directly in the terminal:
./build/bin/llama-cli -hf sairamn/Phi3-Legal-Finetuned

Use Docker:

docker model run hf.co/sairamn/Phi3-Legal-Finetuned
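Once llama-server is running, it exposes an OpenAI-compatible HTTP API. Below is a minimal sketch of querying it from Python with only the standard library, assuming the server's default address of http://localhost:8080; the prompt is just an illustration, and you should adjust host and port to your setup.

import json
from urllib import request

# Chat completion request against llama-server's OpenAI-compatible endpoint.
payload = {
    "messages": [
        {"role": "user", "content": "Summarize the purpose of a non-disclosure agreement in two sentences."}
    ],
    "max_tokens": 256,
}
req = request.Request(
    "http://localhost:8080/v1/chat/completions",  # default llama-server address (assumption)
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)
with request.urlopen(req) as resp:
    body = json.load(resp)
print(body["choices"][0]["message"]["content"])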
Phi3-Legal-Finetuned
This is a fine-tuned version of the Phi-3 Mini model for legal text generation tasks.
Model Details
- Base Model: Microsoft Phi-3 Mini 128K (microsoft/Phi-3-mini-128k-instruct)
- Fine-tuned On: Legal documents and summaries
- Context Length: 128K tokens
- License: MIT
Usage
You can load the model using Hugging Face Transformers:
from transformers import AutoModelForCausalLM, AutoTokenizer
model_name = "sairamn/Phi3-Legal-Finetuned"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)
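A minimal generation sketch building on the snippet above, assuming the fine-tune kept Phi-3's chat template; the prompt here is only an illustration.

# Hypothetical legal prompt; any instruction works.
messages = [
    {"role": "user", "content": "Explain the difference between an indemnity clause and a limitation-of-liability clause."}
]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
)
outputs = model.generate(inputs, max_new_tokens=256)
# Decode only the newly generated tokens, not the prompt.
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))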
Limitations
- The model is not a substitute for professional legal advice.
- May generate incorrect or biased information.
Acknowledgments
- Based on Microsoft Phi-3 Mini.
Citation
If you use this model in your work, please cite this repository (sairamn/Phi3-Legal-Finetuned) along with the base Phi-3 Mini model.