Instructions for using QuantFactory/Lite-Oute-1-300M-Instruct-GGUF with libraries, inference providers, notebooks, and local apps. Follow the sections below to get started.
- Libraries
- llama-cpp-python
How to use QuantFactory/Lite-Oute-1-300M-Instruct-GGUF with llama-cpp-python:
# !pip install llama-cpp-python
from llama_cpp import Llama

llm = Llama.from_pretrained(
    repo_id="QuantFactory/Lite-Oute-1-300M-Instruct-GGUF",
    filename="Lite-Oute-1-300M-Instruct.Q2_K.gguf",
)
# Example chat request (the question is taken from the model card's usage example)
llm.create_chat_completion(
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "I'd like to learn about language models. Can you break down the concept for me?"},
    ]
)
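The call returns an OpenAI-style response dictionary; a minimal sketch of capturing it and printing the assistant's reply (variable names here are illustrative, not part of the original snippet):

# Capture the completion and print the assistant's reply.
result = llm.create_chat_completion(
    messages=[
        {"role": "user", "content": "What is a language model?"}
    ]
)
print(result["choices"][0]["message"]["content"])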
- Notebooks
- Google Colab
- Kaggle
- Local Apps
- llama.cpp
How to use QuantFactory/Lite-Oute-1-300M-Instruct-GGUF with llama.cpp:
Install with Homebrew
brew install llama.cpp

# Start a local OpenAI-compatible server with a web UI:
llama-server -hf QuantFactory/Lite-Oute-1-300M-Instruct-GGUF:Q4_K_M

# Run inference directly in the terminal:
llama-cli -hf QuantFactory/Lite-Oute-1-300M-Instruct-GGUF:Q4_K_M
Install from WinGet (Windows)
winget install llama.cpp

# Start a local OpenAI-compatible server with a web UI:
llama-server -hf QuantFactory/Lite-Oute-1-300M-Instruct-GGUF:Q4_K_M

# Run inference directly in the terminal:
llama-cli -hf QuantFactory/Lite-Oute-1-300M-Instruct-GGUF:Q4_K_M
Use a pre-built binary
# Download a pre-built binary from:
# https://github.com/ggerganov/llama.cpp/releases

# Start a local OpenAI-compatible server with a web UI:
./llama-server -hf QuantFactory/Lite-Oute-1-300M-Instruct-GGUF:Q4_K_M

# Run inference directly in the terminal:
./llama-cli -hf QuantFactory/Lite-Oute-1-300M-Instruct-GGUF:Q4_K_M
Build from source code
git clone https://github.com/ggerganov/llama.cpp.git
cd llama.cpp
cmake -B build
cmake --build build -j --target llama-server llama-cli

# Start a local OpenAI-compatible server with a web UI:
./build/bin/llama-server -hf QuantFactory/Lite-Oute-1-300M-Instruct-GGUF:Q4_K_M

# Run inference directly in the terminal:
./build/bin/llama-cli -hf QuantFactory/Lite-Oute-1-300M-Instruct-GGUF:Q4_K_M
Use Docker
docker model run hf.co/QuantFactory/Lite-Oute-1-300M-Instruct-GGUF:Q4_K_M
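Whichever install option you use, llama-server exposes an OpenAI-compatible API (on http://localhost:8080 by default), so any OpenAI-style client can talk to it. A minimal sketch using the openai Python package; the base_url assumes the default port and the model name is just a label, as llama-server serves the model it was started with:

# pip install openai
from openai import OpenAI

# llama-server is OpenAI-compatible; no real API key is needed for a local server.
client = OpenAI(base_url="http://localhost:8080/v1", api_key="not-needed")

response = client.chat.completions.create(
    model="Lite-Oute-1-300M-Instruct",  # label only; the server uses the loaded model
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "I'd like to learn about language models. Can you break down the concept for me?"},
    ],
)
print(response.choices[0].message.content)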
- LM Studio
- Jan
- Ollama
How to use QuantFactory/Lite-Oute-1-300M-Instruct-GGUF with Ollama:
ollama run hf.co/QuantFactory/Lite-Oute-1-300M-Instruct-GGUF:Q4_K_M
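Once pulled, Ollama also serves the model over its local HTTP API (http://localhost:11434 by default). A minimal sketch using the requests library against the /api/chat endpoint; the host, port, and prompt are assumptions for a default local setup:

# pip install requests
import requests

resp = requests.post(
    "http://localhost:11434/api/chat",
    json={
        "model": "hf.co/QuantFactory/Lite-Oute-1-300M-Instruct-GGUF:Q4_K_M",
        "messages": [
            {"role": "user", "content": "I'd like to learn about language models. Can you break down the concept for me?"}
        ],
        "stream": False,  # return a single JSON response instead of a stream
    },
)
print(resp.json()["message"]["content"])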
- Unsloth Studio
How to use QuantFactory/Lite-Oute-1-300M-Instruct-GGUF with Unsloth Studio:
Install Unsloth Studio (macOS, Linux, WSL)
curl -fsSL https://unsloth.ai/install.sh | sh

# Run Unsloth Studio
unsloth studio -H 0.0.0.0 -p 8888

# Then open http://localhost:8888 in your browser
# Search for QuantFactory/Lite-Oute-1-300M-Instruct-GGUF to start chatting
Install Unsloth Studio (Windows)
irm https://unsloth.ai/install.ps1 | iex

# Run Unsloth Studio
unsloth studio -H 0.0.0.0 -p 8888

# Then open http://localhost:8888 in your browser
# Search for QuantFactory/Lite-Oute-1-300M-Instruct-GGUF to start chatting
Using HuggingFace Spaces for Unsloth
# No setup required
# Open https://huggingface.co/spaces/unsloth/studio in your browser
# Search for QuantFactory/Lite-Oute-1-300M-Instruct-GGUF to start chatting
- Docker Model Runner
How to use QuantFactory/Lite-Oute-1-300M-Instruct-GGUF with Docker Model Runner:
docker model run hf.co/QuantFactory/Lite-Oute-1-300M-Instruct-GGUF:Q4_K_M
- Lemonade
How to use QuantFactory/Lite-Oute-1-300M-Instruct-GGUF with Lemonade:
Pull the model
# Download Lemonade from https://lemonade-server.ai/
lemonade pull QuantFactory/Lite-Oute-1-300M-Instruct-GGUF:Q4_K_M
Run and chat with the model
lemonade run user.Lite-Oute-1-300M-Instruct-GGUF-Q4_K_M
List all available models
lemonade list
QuantFactory/Lite-Oute-1-300M-Instruct-GGUF
This is a quantized version of OuteAI/Lite-Oute-1-300M-Instruct created using llama.cpp.
Original Model Card
Lite-Oute-1-300M-Instruct
Lite-Oute-1-300M-Instruct is a Lite series model based on the Mistral architecture, comprising approximately 300 million parameters.
This model aims to improve upon our previous 150M version by increasing size and training on a more refined dataset. The primary goal of this 300 million parameter model is to offer enhanced performance while still maintaining efficiency for deployment on a variety of devices.
With its larger size, it should provide improved context retention and coherence; however, users should note that, as a compact model, it still has limitations compared to larger language models.
The model was trained on 30 billion tokens with a context length of 4096.
Available versions:
Lite-Oute-1-300M-Instruct
Lite-Oute-1-300M-Instruct-GGUF
Lite-Oute-1-300M
Lite-Oute-1-300M-GGUF
Chat format
This model uses the ChatML template. Ensure you use the correct template:
<|im_start|>system
[System message]<|im_end|>
<|im_start|>user
[Your question or message]<|im_end|>
<|im_start|>assistant
[The model's response]<|im_end|>
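If you build the prompt by hand (for example, when calling a raw completion endpoint instead of a chat endpoint), the ChatML string can be assembled as shown below; a minimal sketch, with the system message chosen only for illustration:

def build_chatml_prompt(system: str, user: str) -> str:
    # Assemble a ChatML prompt and leave the assistant turn open for generation.
    return (
        f"<|im_start|>system\n{system}<|im_end|>\n"
        f"<|im_start|>user\n{user}<|im_end|>\n"
        f"<|im_start|>assistant\n"
    )

prompt = build_chatml_prompt(
    "You are a helpful assistant.",
    "I'd like to learn about language models. Can you break down the concept for me?",
)
print(prompt)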
Benchmarks:
| Benchmark | 5-shot | 0-shot |
|---|---|---|
| ARC Challenge | 26.37 | 26.02 |
| ARC Easy | 51.43 | 49.79 |
| CommonsenseQA | 20.72 | 20.31 |
| HellaSWAG | 34.93 | 34.50 |
| MMLU | 25.87 | 24.00 |
| OpenBookQA | 31.40 | 32.20 |
| PIQA | 65.07 | 65.40 |
| Winogrande | 52.01 | 53.75 |
Usage with HuggingFace transformers
The model can be used with HuggingFace's transformers library:
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model = AutoModelForCausalLM.from_pretrained("OuteAI/Lite-Oute-1-300M-Instruct").to(device)
tokenizer = AutoTokenizer.from_pretrained("OuteAI/Lite-Oute-1-300M-Instruct")
def generate_response(message: str, temperature: float = 0.4, repetition_penalty: float = 1.12) -> str:
    # Apply the chat template and convert to PyTorch tensors
    messages = [
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": message}
    ]
    input_ids = tokenizer.apply_chat_template(
        messages, add_generation_prompt=True, return_tensors="pt"
    ).to(device)
    # Generate the response
    output = model.generate(
        input_ids,
        max_length=512,
        temperature=temperature,
        repetition_penalty=repetition_penalty,
        do_sample=True
    )
    # Decode the generated output
    generated_text = tokenizer.decode(output[0], skip_special_tokens=True)
    return generated_text
message = "I'd like to learn about language models. Can you break down the concept for me?"
response = generate_response(message)
print(response)
Risk Disclaimer
By using this model, you acknowledge that you understand and assume the risks associated with its use. You are solely responsible for ensuring compliance with all applicable laws and regulations. We disclaim any liability for problems arising from the use of this open-source model, including but not limited to direct, indirect, incidental, consequential, or punitive damages. We make no warranties, express or implied, regarding the model's performance, accuracy, or fitness for a particular purpose. Your use of this model is at your own risk, and you agree to hold harmless and indemnify us, our affiliates, and our contributors from any claims, damages, or expenses arising from your use of the model.
Available quantizations: 2-bit, 3-bit, 4-bit, 5-bit, 6-bit, 8-bit