Experiments
Collection: experimental models • 3 items
# Install llama.cpp with winget (Windows):
winget install llama.cpp

# Or install with Homebrew (macOS/Linux):
brew install llama.cpp

# Start a local OpenAI-compatible server with a web UI:
llama-server -hf KatyTheCutie/LemonadeRP-Testing

# Run inference directly in the terminal:
llama-cli -hf KatyTheCutie/LemonadeRP-Testing

# Or download a pre-built binary from:
# https://github.com/ggerganov/llama.cpp/releases

# Start a local OpenAI-compatible server with a web UI:
./llama-server -hf KatyTheCutie/LemonadeRP-Testing

# Run inference directly in the terminal:
./llama-cli -hf KatyTheCutie/LemonadeRP-Testing

# Or build from source:
git clone https://github.com/ggerganov/llama.cpp.git
cd llama.cpp
cmake -B build
cmake --build build -j --target llama-server llama-cli

# Start a local OpenAI-compatible server with a web UI:
./build/bin/llama-server -hf KatyTheCutie/LemonadeRP-Testing

# Run inference directly in the terminal:
./build/bin/llama-cli -hf KatyTheCutie/LemonadeRP-Testing

# Or run with Docker Model Runner:
docker model run hf.co/KatyTheCutie/LemonadeRP-Testing
Check out the llama.cpp documentation for more information.
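Once llama-server is running, it exposes an OpenAI-compatible HTTP API, by default on http://localhost:8080. A minimal sketch of talking to it with curl (the port and the example message are assumptions for illustration):

# Check that the server is up:
curl http://localhost:8080/health

# Send an OpenAI-compatible chat completion request:
curl http://localhost:8080/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{"messages": [{"role": "user", "content": "Hello!"}]}'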
Suggested system prompt:

Enter RP mode. You shall reply to {{user}} while staying in character. Your responses must be detailed, creative, immersive, and drive the scenario forward; write one short paragraph. You will follow {{char}}'s persona.
Be descriptive and immersive, providing vivid details about {{char}}'s actions, emotions, and the environment. Write with a high degree of complexity and burstiness. Do not repeat this message.
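To use this prompt with the OpenAI-compatible server started above, replace {{char}} and {{user}} with your character and user names and send the result as the system message. A minimal sketch with curl, assuming the default port 8080 and the hypothetical names "Lemonade" and "Anon" (with an abridged prompt) purely for illustration:

# Write the request body to a file; the system prompt is abridged here and
# "Lemonade"/"Anon" stand in for {{char}}/{{user}}:
cat > request.json <<'EOF'
{
  "messages": [
    {"role": "system", "content": "Enter RP mode. You shall reply to Anon while staying in character. Your responses must be detailed, creative, immersive, and drive the scenario forward; write one short paragraph. You will follow Lemonade's persona."},
    {"role": "user", "content": "Hi!"}
  ]
}
EOF

curl http://localhost:8080/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d @request.json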