Instructions to use QuantFactory/Lite-Mistral-150M-v2-Instruct-GGUF with libraries, inference providers, notebooks, and local apps. Follow these links to get started.
- Libraries
- llama-cpp-python
How to use QuantFactory/Lite-Mistral-150M-v2-Instruct-GGUF with llama-cpp-python:
# !pip install llama-cpp-python
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="QuantFactory/Lite-Mistral-150M-v2-Instruct-GGUF",
    filename="Lite-Mistral-150M-v2-Instruct.Q2_K.gguf",
)
response = llm.create_chat_completion(
    # Example chat input (the model card's own sample question)
    messages=[{"role": "user", "content": "What is the capital of France?"}]
)
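The call returns an OpenAI-style completion dict, so the assistant's reply can be read from the first choice:

# Print just the generated reply text
print(response["choices"][0]["message"]["content"])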
- Notebooks
- Google Colab
- Kaggle
- Local Apps
- llama.cpp
How to use QuantFactory/Lite-Mistral-150M-v2-Instruct-GGUF with llama.cpp:
Install from Homebrew (macOS, Linux)
brew install llama.cpp

# Start a local OpenAI-compatible server with a web UI:
llama-server -hf QuantFactory/Lite-Mistral-150M-v2-Instruct-GGUF:Q4_K_M

# Run inference directly in the terminal:
llama-cli -hf QuantFactory/Lite-Mistral-150M-v2-Instruct-GGUF:Q4_K_M
Install from WinGet (Windows)
winget install llama.cpp

# Start a local OpenAI-compatible server with a web UI:
llama-server -hf QuantFactory/Lite-Mistral-150M-v2-Instruct-GGUF:Q4_K_M

# Run inference directly in the terminal:
llama-cli -hf QuantFactory/Lite-Mistral-150M-v2-Instruct-GGUF:Q4_K_M
Use a pre-built binary
# Download a pre-built binary from:
# https://github.com/ggerganov/llama.cpp/releases

# Start a local OpenAI-compatible server with a web UI:
./llama-server -hf QuantFactory/Lite-Mistral-150M-v2-Instruct-GGUF:Q4_K_M

# Run inference directly in the terminal:
./llama-cli -hf QuantFactory/Lite-Mistral-150M-v2-Instruct-GGUF:Q4_K_M
Build from source code
git clone https://github.com/ggerganov/llama.cpp.git
cd llama.cpp
cmake -B build
cmake --build build -j --target llama-server llama-cli

# Start a local OpenAI-compatible server with a web UI:
./build/bin/llama-server -hf QuantFactory/Lite-Mistral-150M-v2-Instruct-GGUF:Q4_K_M

# Run inference directly in the terminal:
./build/bin/llama-cli -hf QuantFactory/Lite-Mistral-150M-v2-Instruct-GGUF:Q4_K_M
Use Docker
docker model run hf.co/QuantFactory/Lite-Mistral-150M-v2-Instruct-GGUF:Q4_K_M
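Once llama-server is running, any OpenAI-compatible client can talk to it. A minimal Python sketch, assuming the server's default port 8080 (adjust if you started it with --port):

import requests

# Send a chat request to the local llama-server's
# OpenAI-compatible endpoint (default address is an assumption)
resp = requests.post(
    "http://localhost:8080/v1/chat/completions",
    json={
        "messages": [
            {"role": "user", "content": "What is the capital of France?"}
        ]
    },
)
print(resp.json()["choices"][0]["message"]["content"])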
- LM Studio
- Jan
- Ollama
How to use QuantFactory/Lite-Mistral-150M-v2-Instruct-GGUF with Ollama:
ollama run hf.co/QuantFactory/Lite-Mistral-150M-v2-Instruct-GGUF:Q4_K_M
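Once pulled, the model can also be queried programmatically. A minimal sketch against Ollama's local REST API, assuming its default port 11434:

import requests

# Non-streaming chat request to the local Ollama server;
# the hf.co model name mirrors the pull command above
resp = requests.post(
    "http://localhost:11434/api/chat",
    json={
        "model": "hf.co/QuantFactory/Lite-Mistral-150M-v2-Instruct-GGUF:Q4_K_M",
        "messages": [
            {"role": "user", "content": "What is the capital of France?"}
        ],
        "stream": False,
    },
)
print(resp.json()["message"]["content"])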
- Unsloth Studio
How to use QuantFactory/Lite-Mistral-150M-v2-Instruct-GGUF with Unsloth Studio:
Install Unsloth Studio (macOS, Linux, WSL)
curl -fsSL https://unsloth.ai/install.sh | sh

# Run Unsloth Studio
unsloth studio -H 0.0.0.0 -p 8888

# Then open http://localhost:8888 in your browser
# Search for QuantFactory/Lite-Mistral-150M-v2-Instruct-GGUF to start chatting
Install Unsloth Studio (Windows)
irm https://unsloth.ai/install.ps1 | iex

# Run Unsloth Studio
unsloth studio -H 0.0.0.0 -p 8888

# Then open http://localhost:8888 in your browser
# Search for QuantFactory/Lite-Mistral-150M-v2-Instruct-GGUF to start chatting
Using HuggingFace Spaces for Unsloth
# No setup required
# Open https://huggingface.co/spaces/unsloth/studio in your browser
# Search for QuantFactory/Lite-Mistral-150M-v2-Instruct-GGUF to start chatting
- Docker Model Runner
How to use QuantFactory/Lite-Mistral-150M-v2-Instruct-GGUF with Docker Model Runner:
docker model run hf.co/QuantFactory/Lite-Mistral-150M-v2-Instruct-GGUF:Q4_K_M
- Lemonade
How to use QuantFactory/Lite-Mistral-150M-v2-Instruct-GGUF with Lemonade:
Pull the model
# Download Lemonade from https://lemonade-server.ai/
lemonade pull QuantFactory/Lite-Mistral-150M-v2-Instruct-GGUF:Q4_K_M
Run and chat with the model
lemonade run user.Lite-Mistral-150M-v2-Instruct-GGUF-Q4_K_M
List all available models
lemonade list
QuantFactory/Lite-Mistral-150M-v2-Instruct-GGUF
This is a quantized version of OuteAI/Lite-Mistral-150M-v2-Instruct, created using llama.cpp.
Original Model Card
Lite-Mistral-150M-v2-Instruct
This is a Lite series model based on the Mistral architecture, comprising approximately 157 million parameters.
The primary goal of this 150 million parameter model was to develop a compact and efficient model capable of operating on a wide range of devices, while maintaining a reasonable level of functionality and coherence for its small size. A smaller model scale may lead to challenges in preserving context over multi-turn conversations. Consequently, there is a risk of inconsistent or inaccurate responses.
The model was trained on ~8 billion tokens.
Available versions:
- Lite-Mistral-150M-v2-Instruct
- Lite-Mistral-150M-v2-Instruct-GGUF
Improvements from v1:
- Extended Training: Further refinement of the model, resulting in improved benchmark performance and overall text generation quality.
- Tokenizer changes.
Chat format
This model is sensitive to the chat template used. Ensure you use the correct template:
<s>system
[System message]</s>
<s>user
[Your question or message]</s>
<s>assistant
[The model's response]</s>
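For illustration, a minimal sketch of rendering messages into this template by hand; the format_chat helper is hypothetical, and in practice tokenizer.apply_chat_template (shown in the transformers example below) handles this for you:

def format_chat(messages):
    # Render a list of {"role", "content"} dicts into the
    # <s>role\n[content]</s> template shown above.
    # Exact newline placement is assumed from the examples.
    prompt = ""
    for m in messages:
        prompt += f"<s>{m['role']}\n{m['content']}</s>\n"
    # Trailing open tag cues the model to answer as the assistant
    prompt += "<s>assistant\n"
    return prompt

print(format_chat([
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "What is the capital of France?"},
]))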
How coherent is the 150M model?
Let's look at real-world examples:
Example 1
<s>user
What is the capital of France?</s>
<s>assistant
The capital of France is Paris.</s>
<s>user
What is the capital of Spain?</s>
<s>assistant
The capital of Spain is Madrid, which is known for its beautiful architecture and rich history.</s>
Example 2
<s>user
Who wrote the play "Romeo and Juliet"?</s>
<s>assistant
The play "Romeo and Juliet" was written by William Shakespeare.</s>
Example 3
<s>system
You are a helpful assistant.</s>
<s>user
I'd like to learn about language models. Can you break down the concept for me?</s>
<s>assistant
Of course! Language models are computer programs designed to mimic human language, which is the way humans understand and interact with each other.
They use algorithms to analyze data and generate responses that are similar to how humans do on their own.</s>
Example 4 (RP)
[Input for the model]
<s>user
Let's roleplay. I'll be a customer at a restaurant, and you be the waiter. Ready?</s>
<s>assistant
Certainly! I'll play the role of a waiter. Please go ahead and start the roleplay when you're ready.</s>
<s>user
Hello, I'd like to see the menu please.</s>
<s>assistant
Good evening, sir! Welcome to our restaurant. Here's our menu for today. May I recommend our chef's special, the grilled salmon with lemon butter sauce? Let me know if you have any questions about our dishes.</s>
<s>user
I'll have the salmon. What wines do you recommend to go with it?</s>
[Model's response]
<s>assistant
To go with your salmon, we suggest a pair of red-tailed or white-tailed jackfights. The grilled salmon is creamy and has a nice acidity. The lemon butter sauce is a great addition to this dish.</s>
The model shows some promise in understanding context related to simple requests, but it's clear that it still struggles with more complex or nuanced situations.
Benchmarks:
| Benchmark | 5-shot (acc) | 5-shot (acc_norm) | 0-shot (acc) | 0-shot (acc_norm) |
|---|---|---|---|---|
| ARC Easy | 47.26 | 46.30 | 45.58 | 39.52 |
| BoolQ | 43.33 | - | 44.16 | - |
| HellaSWAG | 28.70 | 30.73 | 28.72 | 31.01 |
| MMLU | - | - | 25.28 | - |
| OpenBookQA | 16.00 | 28.00 | 18.20 | 28.40 |
| PIQA | 62.79 | 62.68 | 62.02 | 60.23 |
| Winogrande | 51.30 | - | 51.78 | - |
Usage with HuggingFace transformers
The model can be used with HuggingFace's transformers library:
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model = AutoModelForCausalLM.from_pretrained("OuteAI/Lite-Mistral-150M-v2-Instruct").to(device)
tokenizer = AutoTokenizer.from_pretrained("OuteAI/Lite-Mistral-150M-v2-Instruct")

def generate_response(message: str, temperature: float = 0.4, repetition_penalty: float = 1.1) -> str:
    # Apply the chat template and convert to PyTorch tensors
    messages = [
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": message}
    ]
    input_ids = tokenizer.apply_chat_template(
        messages, add_generation_prompt=True, return_tensors="pt"
    ).to(device)

    # Generate the response
    output = model.generate(
        input_ids,
        max_length=512,
        temperature=temperature,
        repetition_penalty=repetition_penalty,
        do_sample=True
    )

    # Decode the generated output
    generated_text = tokenizer.decode(output[0], skip_special_tokens=True)
    return generated_text

message = "I'd like to learn about language models. Can you break down the concept for me?"
response = generate_response(message)
print(response)
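Note that decoding output[0] returns the whole sequence, prompt included. To keep only the newly generated reply, a small variation on the decode step inside generate_response slices off the prompt tokens first:

# Variation on the decode step above: drop the prompt tokens
# before decoding so only the model's reply remains
new_tokens = output[0][input_ids.shape[-1]:]
reply = tokenizer.decode(new_tokens, skip_special_tokens=True)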
Risk Disclaimer
By using this model, you acknowledge that you understand and assume the risks associated with its use. You are solely responsible for ensuring compliance with all applicable laws and regulations. We disclaim any liability for problems arising from the use of this open-source model, including but not limited to direct, indirect, incidental, consequential, or punitive damages. We make no warranties, express or implied, regarding the model's performance, accuracy, or fitness for a particular purpose. Your use of this model is at your own risk, and you agree to hold harmless and indemnify us, our affiliates, and our contributors from any claims, damages, or expenses arising from your use of the model.
Available quantizations: 2-bit, 3-bit, 4-bit, 5-bit, 6-bit, 8-bit