Instructions for using OuteAI/Lite-Oute-1-300M with libraries, inference providers, notebooks, and local apps.
- Libraries
- Transformers
How to use OuteAI/Lite-Oute-1-300M with Transformers:
```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("text-generation", model="OuteAI/Lite-Oute-1-300M")
```

```python
# Load model directly
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("OuteAI/Lite-Oute-1-300M")
model = AutoModelForCausalLM.from_pretrained("OuteAI/Lite-Oute-1-300M")
```
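For a quick smoke test, the pipeline can be called directly. A minimal sketch; the prompt and `max_new_tokens` value are illustrative, not from the model card:

```python
# Generate a short continuation with the pipeline defined above.
output = pipe("Once upon a time,", max_new_tokens=64, do_sample=True)
print(output[0]["generated_text"])
```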
- Notebooks
- Google Colab
- Kaggle
- Local Apps
- vLLM
How to use OuteAI/Lite-Oute-1-300M with vLLM:
Install from pip and serve the model:
```sh
# Install vLLM from pip:
pip install vllm

# Start the vLLM server:
vllm serve "OuteAI/Lite-Oute-1-300M"

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:8000/v1/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "OuteAI/Lite-Oute-1-300M",
    "prompt": "Once upon a time,",
    "max_tokens": 512,
    "temperature": 0.5
  }'
```
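Because the server speaks the OpenAI-compatible completions API, it can also be called from Python with the official `openai` client. A minimal sketch, assuming the server above is running on localhost:8000 (the API key is a placeholder; vLLM does not require one by default):

```python
from openai import OpenAI

# Point the client at the local vLLM server instead of api.openai.com.
client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")

completion = client.completions.create(
    model="OuteAI/Lite-Oute-1-300M",
    prompt="Once upon a time,",
    max_tokens=512,
    temperature=0.5,
)
print(completion.choices[0].text)
```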
- SGLang
How to use OuteAI/Lite-Oute-1-300M with SGLang:
Install from pip and serve the model:
```sh
# Install SGLang from pip:
pip install sglang

# Start the SGLang server:
python3 -m sglang.launch_server \
  --model-path "OuteAI/Lite-Oute-1-300M" \
  --host 0.0.0.0 \
  --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "OuteAI/Lite-Oute-1-300M",
    "prompt": "Once upon a time,",
    "max_tokens": 512,
    "temperature": 0.5
  }'
```

Use Docker images
```sh
docker run --gpus all \
  --shm-size 32g \
  -p 30000:30000 \
  -v ~/.cache/huggingface:/root/.cache/huggingface \
  --env "HF_TOKEN=<secret>" \
  --ipc=host \
  lmsysorg/sglang:latest \
  python3 -m sglang.launch_server \
    --model-path "OuteAI/Lite-Oute-1-300M" \
    --host 0.0.0.0 \
    --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "OuteAI/Lite-Oute-1-300M",
    "prompt": "Once upon a time,",
    "max_tokens": 512,
    "temperature": 0.5
  }'
```
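Since SGLang exposes the same OpenAI-compatible completions endpoint, the Python client sketch shown in the vLLM section also works here; only the `base_url` changes, to `http://localhost:30000/v1`.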
- Docker Model Runner
How to use OuteAI/Lite-Oute-1-300M with Docker Model Runner:
```sh
docker model run hf.co/OuteAI/Lite-Oute-1-300M
```
Lite-Oute-1-300M
Lite-Oute-1-300M (Base) is a Lite-series model based on the Mistral architecture, with approximately 300 million parameters.
This model is designed as a starting point for fine-tuning on various tasks: its 300 million parameters strike a balance between compact size and capability, making it suitable for a wide range of fine-tuning applications.
The model was trained on 30 billion tokens with a context length of 4096, providing a solid foundation for task-specific adaptation.
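As a rough illustration of that fine-tuning workflow, here is a minimal causal-LM training sketch using the Hugging Face Trainer. The dataset, sequence length, and hyperparameters are illustrative assumptions, not recommendations from this card:

```python
# Minimal fine-tuning sketch (assumed setup, not an official recipe).
from datasets import load_dataset
from transformers import (
    AutoModelForCausalLM,
    AutoTokenizer,
    DataCollatorForLanguageModeling,
    Trainer,
    TrainingArguments,
)

model_name = "OuteAI/Lite-Oute-1-300M"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

# Mistral-style tokenizers may not define a pad token; reuse EOS for padding.
if tokenizer.pad_token is None:
    tokenizer.pad_token = tokenizer.eos_token

# Placeholder dataset: swap in your own task-specific text data.
dataset = load_dataset("wikitext", "wikitext-2-raw-v1", split="train[:1%]")

def tokenize(batch):
    # Truncate well below the model's 4096 context to keep memory modest.
    return tokenizer(batch["text"], truncation=True, max_length=512)

tokenized = dataset.map(tokenize, batched=True, remove_columns=dataset.column_names)

trainer = Trainer(
    model=model,
    args=TrainingArguments(
        output_dir="lite-oute-1-300m-finetuned",
        per_device_train_batch_size=8,
        num_train_epochs=1,
        learning_rate=5e-5,
    ),
    train_dataset=tokenized,
    # mlm=False yields standard next-token (causal) language-modeling labels.
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
```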
Available versions:
- Lite-Oute-1-300M-Instruct
- Lite-Oute-1-300M-Instruct-GGUF
- Lite-Oute-1-300M
- Lite-Oute-1-300M-GGUF
Benchmarks:
| Benchmark | 5-shot | 0-shot |
|---|---|---|
| ARC Challenge | 26.62 | 26.28 |
| ARC Easy | 51.39 | 48.11 |
| CommonsenseQA | 19.49 | 20.64 |
| HellaSwag | 34.86 | 34.85 |
| MMLU | 27.23 | 24.87 |
| OpenBookQA | 30.20 | 30.80 |
| PIQA | 65.07 | 65.02 |
| Winogrande | 51.14 | 53.35 |
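The card does not state which evaluation harness produced these numbers. Assuming something like EleutherAI's lm-evaluation-harness, a 5-shot ARC Challenge run could be reproduced along these lines (the batch size is an arbitrary choice):

```sh
pip install lm-eval

# 5-shot ARC Challenge for the base model (illustrative invocation).
lm_eval --model hf \
  --model_args pretrained=OuteAI/Lite-Oute-1-300M \
  --tasks arc_challenge \
  --num_fewshot 5 \
  --batch_size 8
```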
Usage with Hugging Face transformers
The model can be used with Hugging Face's transformers library:
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

model = AutoModelForCausalLM.from_pretrained("OuteAI/Lite-Oute-1-300M").to(device)
tokenizer = AutoTokenizer.from_pretrained("OuteAI/Lite-Oute-1-300M")

def generate_response(message: str, temperature: float = 0.4, repetition_penalty: float = 1.12) -> str:
    # Convert the message to PyTorch tensors
    input_ids = tokenizer.encode(message, return_tensors="pt").to(device)
    # Generate the response
    output = model.generate(
        input_ids,
        max_length=256,
        temperature=temperature,
        repetition_penalty=repetition_penalty,
        do_sample=True,
    )
    # Decode the generated output
    generated_text = tokenizer.decode(output[0], skip_special_tokens=True)
    return generated_text

message = "Scientists have made a breakthrough in renewable energy by developing a new type of"
response = generate_response(message)
print(response)
```
Risk Disclaimer
By using this model, you acknowledge that you understand and assume the risks associated with its use. You are solely responsible for ensuring compliance with all applicable laws and regulations. We disclaim any liability for problems arising from the use of this open-source model, including but not limited to direct, indirect, incidental, consequential, or punitive damages. We make no warranties, express or implied, regarding the model's performance, accuracy, or fitness for a particular purpose. Your use of this model is at your own risk, and you agree to hold harmless and indemnify us, our affiliates, and our contributors from any claims, damages, or expenses arising from your use of the model.