Instructions to use AKASH2393/mistral-finetuned with libraries, inference providers, notebooks, and local apps. Follow these links to get started.
- Libraries
- llama-cpp-python
How to use AKASH2393/mistral-finetuned with llama-cpp-python:
# !pip install llama-cpp-python
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="AKASH2393/mistral-finetuned",
    filename="mistral-finetuned.gguf",
)

output = llm(
    "Once upon a time,",
    max_tokens=512,
    echo=True
)
print(output)
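The same Llama object also exposes an OpenAI-style chat API. A minimal sketch reusing the llm instance from above, assuming the GGUF file ships with a chat template (the prompt here is illustrative):

# Chat-style inference with the same llm instance as above
response = llm.create_chat_completion(
    messages=[
        {"role": "user", "content": "Tell me a short story."}
    ],
    max_tokens=256,
)
print(response["choices"][0]["message"]["content"])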
- Notebooks
- Google Colab
- Kaggle
- Local Apps
- llama.cpp
How to use AKASH2393/mistral-finetuned with llama.cpp:
Install from brew
brew install llama.cpp

# Start a local OpenAI-compatible server with a web UI:
llama-server -hf AKASH2393/mistral-finetuned

# Run inference directly in the terminal:
llama-cli -hf AKASH2393/mistral-finetuned
Install from WinGet (Windows)
winget install llama.cpp

# Start a local OpenAI-compatible server with a web UI:
llama-server -hf AKASH2393/mistral-finetuned

# Run inference directly in the terminal:
llama-cli -hf AKASH2393/mistral-finetuned
Use pre-built binary
# Download a pre-built binary from:
# https://github.com/ggerganov/llama.cpp/releases

# Start a local OpenAI-compatible server with a web UI:
./llama-server -hf AKASH2393/mistral-finetuned

# Run inference directly in the terminal:
./llama-cli -hf AKASH2393/mistral-finetuned
Build from source code
git clone https://github.com/ggerganov/llama.cpp.git
cd llama.cpp
cmake -B build
cmake --build build -j --target llama-server llama-cli

# Start a local OpenAI-compatible server with a web UI:
./build/bin/llama-server -hf AKASH2393/mistral-finetuned

# Run inference directly in the terminal:
./build/bin/llama-cli -hf AKASH2393/mistral-finetuned
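Once llama-server is running (via any of the install paths above), it serves an OpenAI-compatible API, by default on port 8080. A minimal Python sketch using the openai client; the api_key value is a placeholder, since a local llama-server does not require one by default:

from openai import OpenAI

# llama-server listens on port 8080 by default
client = OpenAI(base_url="http://localhost:8080/v1", api_key="not-needed")

response = client.chat.completions.create(
    model="AKASH2393/mistral-finetuned",  # llama-server serves the loaded model regardless of this name
    messages=[{"role": "user", "content": "Once upon a time,"}],
    max_tokens=512,
)
print(response.choices[0].message.content)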
Use Docker
docker model run hf.co/AKASH2393/mistral-finetuned
- LM Studio
- Jan
- vLLM
How to use AKASH2393/mistral-finetuned with vLLM:
Install from pip and serve model
# Install vLLM from pip:
pip install vllm

# Start the vLLM server:
vllm serve "AKASH2393/mistral-finetuned"

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:8000/v1/completions" \
    -H "Content-Type: application/json" \
    --data '{
        "model": "AKASH2393/mistral-finetuned",
        "prompt": "Once upon a time,",
        "max_tokens": 512,
        "temperature": 0.5
    }'
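The same server can also be called from Python with the openai client, mirroring the curl request above; "EMPTY" is the conventional placeholder api_key for a local vLLM server:

from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")

response = client.completions.create(
    model="AKASH2393/mistral-finetuned",
    prompt="Once upon a time,",
    max_tokens=512,
    temperature=0.5,
)
print(response.choices[0].text)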
Use Docker

docker model run hf.co/AKASH2393/mistral-finetuned
- Ollama
How to use AKASH2393/mistral-finetuned with Ollama:
ollama run hf.co/AKASH2393/mistral-finetuned
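Ollama also serves a local REST API (port 11434 by default), so the pulled model can be queried programmatically. A minimal sketch using the requests package:

import requests

# Ollama's REST API listens on port 11434 by default
response = requests.post(
    "http://localhost:11434/api/generate",
    json={
        "model": "hf.co/AKASH2393/mistral-finetuned",
        "prompt": "Once upon a time,",
        "stream": False,  # return a single JSON object instead of a stream
    },
)
print(response.json()["response"])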
- Unsloth Studio
How to use AKASH2393/mistral-finetuned with Unsloth Studio:
Install Unsloth Studio (macOS, Linux, WSL)
curl -fsSL https://unsloth.ai/install.sh | sh

# Run Unsloth Studio:
unsloth studio -H 0.0.0.0 -p 8888

# Then open http://localhost:8888 in your browser
# Search for AKASH2393/mistral-finetuned to start chatting
Install Unsloth Studio (Windows)
irm https://unsloth.ai/install.ps1 | iex

# Run Unsloth Studio:
unsloth studio -H 0.0.0.0 -p 8888

# Then open http://localhost:8888 in your browser
# Search for AKASH2393/mistral-finetuned to start chatting
Using HuggingFace Spaces for Unsloth
# No setup required
# Open https://huggingface.co/spaces/unsloth/studio in your browser
# Search for AKASH2393/mistral-finetuned to start chatting
- Docker Model Runner
How to use AKASH2393/mistral-finetuned with Docker Model Runner:
docker model run hf.co/AKASH2393/mistral-finetuned
- Lemonade
How to use AKASH2393/mistral-finetuned with Lemonade:
Pull the model
# Download Lemonade from https://lemonade-server.ai/
lemonade pull AKASH2393/mistral-finetuned
Run and chat with the model
lemonade run user.mistral-finetuned-{{QUANT_TAG}}

List all available models
lemonade list
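Lemonade Server exposes an OpenAI-compatible endpoint once a model is running. The base URL, port, and api_key below are assumptions, not confirmed by this page; check the Lemonade documentation if the connection fails:

from openai import OpenAI

# Base URL and port are assumptions; verify against the Lemonade docs
client = OpenAI(base_url="http://localhost:8000/api/v1", api_key="lemonade")

response = client.chat.completions.create(
    model="user.mistral-finetuned-{{QUANT_TAG}}",  # use the exact tag shown by `lemonade list`
    messages=[{"role": "user", "content": "Once upon a time,"}],
)
print(response.choices[0].message.content)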
Model Card for Mistral-7B-v0.1
The Mistral-7B-v0.1 Large Language Model (LLM) is a pretrained generative text model with 7 billion parameters. Mistral-7B-v0.1 outperforms Llama 2 13B on all benchmarks we tested.
For full details of this model please read our paper and release blog post.
Model Architecture
Mistral-7B-v0.1 is a transformer model with the following architecture choices (a minimal grouped-query attention sketch follows the list):
- Grouped-Query Attention
- Sliding-Window Attention
- Byte-fallback BPE tokenizer
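To illustrate the first item: grouped-query attention lets several query heads share a single key/value head, which shrinks the KV cache during inference. A minimal PyTorch sketch; the head counts and dimensions below are made up for the example and are not Mistral's actual configuration:

import torch

def grouped_query_attention(q, k, v, group_size):
    # q: (batch, n_q_heads, seq, head_dim); k, v: (batch, n_kv_heads, seq, head_dim)
    # Each KV head is shared by `group_size` consecutive query heads.
    k = k.repeat_interleave(group_size, dim=1)  # expand KV heads to match query heads
    v = v.repeat_interleave(group_size, dim=1)
    scores = q @ k.transpose(-2, -1) / q.shape[-1] ** 0.5
    return torch.softmax(scores, dim=-1) @ v

batch, seq, head_dim = 1, 8, 64
n_q_heads, n_kv_heads = 8, 2  # 4 query heads per KV head
q = torch.randn(batch, n_q_heads, seq, head_dim)
k = torch.randn(batch, n_kv_heads, seq, head_dim)
v = torch.randn(batch, n_kv_heads, seq, head_dim)
out = grouped_query_attention(q, k, v, n_q_heads // n_kv_heads)
print(out.shape)  # torch.Size([1, 8, 8, 64])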
Troubleshooting
- If you see the following error:
KeyError: 'mistral'
- Or:
NotImplementedError: Cannot copy out of meta tensor; no data!
Ensure you are using a stable version of Transformers, 4.34.0 or newer.
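For example, upgrade with pip:

pip install --upgrade "transformers>=4.34.0"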
Notice
Mistral 7B is a pretrained base model and therefore does not have any moderation mechanisms.
The Mistral AI Team
Albert Jiang, Alexandre Sablayrolles, Arthur Mensch, Chris Bamford, Devendra Singh Chaplot, Diego de las Casas, Florian Bressand, Gianna Lengyel, Guillaume Lample, Lélio Renard Lavaud, Lucile Saulnier, Marie-Anne Lachaux, Pierre Stock, Teven Le Scao, Thibaut Lavril, Thomas Wang, Timothée Lacroix, William El Sayed.