Instructions for using OddTheGreat/Apparatus_24B with libraries and local apps. Follow the sections below to get started.
- Libraries
- Transformers
How to use OddTheGreat/Apparatus_24B with Transformers:
```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("text-generation", model="OddTheGreat/Apparatus_24B")
messages = [
    {"role": "user", "content": "Who are you?"},
]
pipe(messages)
```

```python
# Load model directly
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("OddTheGreat/Apparatus_24B")
model = AutoModelForCausalLM.from_pretrained("OddTheGreat/Apparatus_24B")

messages = [
    {"role": "user", "content": "Who are you?"},
]
inputs = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    tokenize=True,
    return_dict=True,
    return_tensors="pt",
).to(model.device)

outputs = model.generate(**inputs, max_new_tokens=40)
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[-1]:]))
```
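For roleplay-oriented generation you will likely want explicit sampling parameters. A minimal sketch; the values below are illustrative assumptions, not settings recommended by the model author:

```python
# Illustrative sampling settings for transformers generation.
# These values are assumptions for demonstration, not tuned for Apparatus_24B.
gen_kwargs = {
    "max_new_tokens": 256,      # length of the reply
    "do_sample": True,          # sample instead of greedy decoding
    "temperature": 0.8,         # higher = more varied, less deterministic
    "top_p": 0.95,              # nucleus sampling cutoff
    "repetition_penalty": 1.1,  # discourage verbatim loops
}

# Pass them to the pipeline call from the snippet above, e.g.:
# pipe(messages, **gen_kwargs)
```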
- Local Apps
- vLLM
How to use OddTheGreat/Apparatus_24B with vLLM:
Install from pip and serve the model:
```shell
# Install vLLM from pip:
pip install vllm

# Start the vLLM server:
vllm serve "OddTheGreat/Apparatus_24B"

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:8000/v1/chat/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "OddTheGreat/Apparatus_24B",
    "messages": [
      {
        "role": "user",
        "content": "What is the capital of France?"
      }
    ]
  }'
```

Or use Docker Model Runner:

```shell
docker model run hf.co/OddTheGreat/Apparatus_24B
```
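Since the server exposes an OpenAI-compatible API, you can also call it from Python instead of curl. A minimal stdlib-only sketch, assuming the default vLLM endpoint on port 8000 (the same request shape works against the SGLang server on port 30000); `build_payload` and `chat` are hypothetical helper names:

```python
# Hypothetical helpers for calling a locally running, OpenAI-compatible
# vLLM server (assumed at http://localhost:8000).
import json
import urllib.request

VLLM_URL = "http://localhost:8000/v1/chat/completions"  # assumed default port
MODEL_ID = "OddTheGreat/Apparatus_24B"

def build_payload(user_message: str) -> dict:
    """Mirror the request body from the curl example."""
    return {
        "model": MODEL_ID,
        "messages": [{"role": "user", "content": user_message}],
    }

def chat(user_message: str) -> str:
    """POST the payload and return the assistant's reply text."""
    req = urllib.request.Request(
        VLLM_URL,
        data=json.dumps(build_payload(user_message)).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        body = json.load(resp)
    return body["choices"][0]["message"]["content"]
```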
- SGLang
How to use OddTheGreat/Apparatus_24B with SGLang:
Install from pip and serve the model:
```shell
# Install SGLang from pip:
pip install sglang

# Start the SGLang server:
python3 -m sglang.launch_server \
  --model-path "OddTheGreat/Apparatus_24B" \
  --host 0.0.0.0 \
  --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/chat/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "OddTheGreat/Apparatus_24B",
    "messages": [
      {
        "role": "user",
        "content": "What is the capital of France?"
      }
    ]
  }'
```

Use Docker images:

```shell
docker run --gpus all \
  --shm-size 32g \
  -p 30000:30000 \
  -v ~/.cache/huggingface:/root/.cache/huggingface \
  --env "HF_TOKEN=<secret>" \
  --ipc=host \
  lmsysorg/sglang:latest \
  python3 -m sglang.launch_server \
    --model-path "OddTheGreat/Apparatus_24B" \
    --host 0.0.0.0 \
    --port 30000

# Call the server with the same curl request as above (OpenAI-compatible API).
```
- Docker Model Runner
How to use OddTheGreat/Apparatus_24B with Docker Model Runner:
```shell
docker model run hf.co/OddTheGreat/Apparatus_24B
```
This is a merge of pre-trained language models.
The goal of this merge is to enrich the language, RP, and ERP capabilities of the Machina model. If you liked one, try the other.
It still has a neutral or negative bias, sometimes overreacts (in a good way), shows interesting patterns when used as a narrator, and is creative with rich language.
In ERP the model behaves well: it can understand hints and teasing without getting too horny too fast.
Russian language support is still here; no problems spotted.
The model was tested in Russian and in English with ~600 responses; it was stable and good at following instructions, but maybe I'm just very lucky.
With character cards that have the first message in Russian but the whole description in English, the model is able to answer in the desired language; moreover, it seems to have fewer problems with this than Machina did.
With fully Russian cards the model performs without issues.
Tested on T1.01 ChatML.
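For reference, ChatML wraps each turn in `<|im_start|>`/`<|im_end|>` markers. A minimal sketch of the format, assuming the standard ChatML special tokens; in real use, prefer `tokenizer.apply_chat_template`, which applies the model's own template:

```python
# Sketch of the ChatML prompt format (standard markers assumed; the
# tokenizer's chat template is the authoritative source in practice).
def to_chatml(messages):
    """Render a list of {'role', 'content'} dicts as a ChatML prompt string."""
    parts = []
    for m in messages:
        parts.append(f"<|im_start|>{m['role']}\n{m['content']}<|im_end|>")
    # Open the assistant turn so the model generates the reply:
    parts.append("<|im_start|>assistant\n")
    return "\n".join(parts)

prompt = to_chatml([{"role": "user", "content": "Who are you?"}])
```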
I recommend sphiratrioth666/SillyTavern-Presets-Sphiratrioth; for me it works very well with minor adjustments.