Instructions for using Kalamazooter/RatelSlang-Micro-130M with libraries, inference providers, notebooks, and local apps. Follow the sections below to get started.
- Libraries
  - Transformers
How to use Kalamazooter/RatelSlang-Micro-130M with Transformers:
```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("text-generation", model="Kalamazooter/RatelSlang-Micro-130M")
messages = [
    {"role": "user", "content": "Who are you?"},
]
pipe(messages)
```

```python
# Load model directly
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("Kalamazooter/RatelSlang-Micro-130M")
model = AutoModelForCausalLM.from_pretrained("Kalamazooter/RatelSlang-Micro-130M")

messages = [
    {"role": "user", "content": "Who are you?"},
]
inputs = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    tokenize=True,
    return_dict=True,
    return_tensors="pt",
).to(model.device)

outputs = model.generate(**inputs, max_new_tokens=40)
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[-1]:]))
```

- Notebooks
  - Google Colab
  - Kaggle
- Local Apps
  - vLLM
How to use Kalamazooter/RatelSlang-Micro-130M with vLLM:
Install from pip and serve the model:
```shell
# Install vLLM from pip:
pip install vllm

# Start the vLLM server:
vllm serve "Kalamazooter/RatelSlang-Micro-130M"

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:8000/v1/chat/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "Kalamazooter/RatelSlang-Micro-130M",
    "messages": [
      {
        "role": "user",
        "content": "What is the capital of France?"
      }
    ]
  }'
```

Use Docker
```shell
docker model run hf.co/Kalamazooter/RatelSlang-Micro-130M
```
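Because the vLLM server exposes an OpenAI-compatible API, you can also call it from Python instead of curl. A minimal sketch using the openai client (assumes the server started above is listening on localhost:8000; the api_key value is a placeholder, since vLLM does not require one by default):

```python
# Query the local vLLM server through its OpenAI-compatible API.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="not-needed")

response = client.chat.completions.create(
    model="Kalamazooter/RatelSlang-Micro-130M",
    messages=[{"role": "user", "content": "What is the capital of France?"}],
)
print(response.choices[0].message.content)
```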
  - SGLang
How to use Kalamazooter/RatelSlang-Micro-130M with SGLang:
Install from pip and serve the model:
```shell
# Install SGLang from pip:
pip install sglang

# Start the SGLang server:
python3 -m sglang.launch_server \
  --model-path "Kalamazooter/RatelSlang-Micro-130M" \
  --host 0.0.0.0 \
  --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/chat/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "Kalamazooter/RatelSlang-Micro-130M",
    "messages": [
      {
        "role": "user",
        "content": "What is the capital of France?"
      }
    ]
  }'
```

Use Docker images
```shell
docker run --gpus all \
  --shm-size 32g \
  -p 30000:30000 \
  -v ~/.cache/huggingface:/root/.cache/huggingface \
  --env "HF_TOKEN=<secret>" \
  --ipc=host \
  lmsysorg/sglang:latest \
  python3 -m sglang.launch_server \
  --model-path "Kalamazooter/RatelSlang-Micro-130M" \
  --host 0.0.0.0 \
  --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/chat/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "Kalamazooter/RatelSlang-Micro-130M",
    "messages": [
      {
        "role": "user",
        "content": "What is the capital of France?"
      }
    ]
  }'
```
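The SGLang endpoint is OpenAI-compatible as well, so the same openai client works against port 30000. A minimal sketch with streaming enabled (assumes the server from either launch method above is running; the api_key is again a placeholder):

```python
# Stream tokens from the local SGLang server via its OpenAI-compatible API.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:30000/v1", api_key="not-needed")

stream = client.chat.completions.create(
    model="Kalamazooter/RatelSlang-Micro-130M",
    messages=[{"role": "user", "content": "What is the capital of France?"}],
    stream=True,
)
for chunk in stream:
    # Each chunk carries a delta with the newly generated text, if any.
    if chunk.choices and chunk.choices[0].delta.content:
        print(chunk.choices[0].delta.content, end="", flush=True)
```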
  - Docker Model Runner
How to use Kalamazooter/RatelSlang-Micro-130M with Docker Model Runner:
```shell
docker model run hf.co/Kalamazooter/RatelSlang-Micro-130M
```
A Tiny Dutch model, just-about semi-coherent
Overview
An experimental fine-tune of mamba-130m using the GeminiPhi dataset and the dutch-llama-tokenizer by yhavinga.
Usage
You need to install transformers from the main branch until version 4.39.0 is released:
```shell
pip install git+https://github.com/huggingface/transformers@main
```
We also recommend installing both causal-conv1d and mamba-ssm:
```shell
pip install "causal-conv1d>=1.2.0"
pip install mamba-ssm
```
If either of these two is not installed, the "eager" implementation will be used. Otherwise, the more optimised CUDA kernels will be used.
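To verify which path you will get, you can check whether the two packages are importable (a quick sketch; note that the import names causal_conv1d and mamba_ssm differ from the pip package names):

```python
# Check whether the optimised-kernel dependencies are importable.
import importlib.util

for pkg in ("causal_conv1d", "mamba_ssm"):
    found = importlib.util.find_spec(pkg) is not None
    print(f"{pkg}: {'available (CUDA kernels)' if found else 'missing (eager fallback)'}")
```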
Generation
You can use the classic generate API:
Setup (for CUDA)
```python
from transformers import MambaForCausalLM, AutoTokenizer
import torch

device = torch.device('cuda:0')
tokenizer = AutoTokenizer.from_pretrained("Kalamazooter/RatelSlang-Micro-130M")
model = MambaForCausalLM.from_pretrained("Kalamazooter/RatelSlang-Micro-130M")
model = model.to(device)
```
Inference
```python
input_ids = tokenizer("**Vraag: Ik heb 4 schapen, per schaap heb ik 3 lammetjes, hoeveel lammetjes heb ik?\n\n Antwoord:", return_tensors="pt").input_ids.to(device)

out = model.generate(input_ids, max_new_tokens=50)
print(tokenizer.batch_decode(out))
```

```
['<s> **Vraag: Ik heb 4 schapen, per schaap heb ik 3 lammetjes, hoeveel lammetjes heb ik?\n\n Antwoord:\n\n1. Bereken het aantal lammetjes dat je hebt: 4 schapen x 3 lammetjes per schaap = 12 lammetjes\n2. Bereken het aantal lammetjes dat je hebt: 12 lam']
```
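The decoded text echoes the prompt and the <s> token. To print only the completion, you can slice off the prompt tokens, the same pattern used in the Transformers snippet at the top of this page:

```python
# Decode only the newly generated tokens, skipping the echoed prompt.
print(tokenizer.decode(out[0][input_ids.shape[-1]:], skip_special_tokens=True))
```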
PEFT fine-tuning example
In order to fine-tune using the peft library, it is recommended to keep the model in float32!
```python
from datasets import load_dataset
from trl import SFTTrainer
from peft import LoraConfig
from transformers import AutoTokenizer, AutoModelForCausalLM, TrainingArguments

tokenizer = AutoTokenizer.from_pretrained("Kalamazooter/RatelSlang-Micro-130M")
# Loaded in float32 by default, which is what PEFT fine-tuning expects here.
model = AutoModelForCausalLM.from_pretrained("Kalamazooter/RatelSlang-Micro-130M")
dataset = load_dataset("Abirate/english_quotes", split="train")

training_args = TrainingArguments(
    output_dir="./results",
    num_train_epochs=3,
    per_device_train_batch_size=4,
    logging_dir='./logs',
    logging_steps=10,
    learning_rate=2e-3,
)
lora_config = LoraConfig(
    r=8,
    target_modules=["x_proj", "embeddings", "in_proj", "out_proj"],
    task_type="CAUSAL_LM",
    bias="none",
)
trainer = SFTTrainer(
    model=model,
    tokenizer=tokenizer,
    args=training_args,
    peft_config=lora_config,
    train_dataset=dataset,
    dataset_text_field="quote",
)
trainer.train()
```
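After training, you can persist just the LoRA adapter and reattach it to the base model later. A minimal sketch (the ./ratelslang-lora path is a placeholder of our choosing):

```python
# Save only the LoRA adapter weights (small, since the base model is untouched).
trainer.save_model("./ratelslang-lora")

# Later: reload the base model and attach the trained adapter.
from peft import PeftModel
from transformers import AutoModelForCausalLM

base = AutoModelForCausalLM.from_pretrained("Kalamazooter/RatelSlang-Micro-130M")
model = PeftModel.from_pretrained(base, "./ratelslang-lora")
```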
Base model: state-spaces/mamba-130m-hf