How to use from llama.cpp

Install from brew (MacOS/Linux)

brew install llama.cpp

Install from WinGet (Windows)

winget install llama.cpp

# Start a local OpenAI-compatible server with a web UI:
llama-server -hf Aryanne/MixSwap:F16
# Run inference directly in the terminal:
llama-cli -hf Aryanne/MixSwap:F16

Use pre-built binary

# Download a pre-built binary from:
# https://github.com/ggerganov/llama.cpp/releases

# Start a local OpenAI-compatible server with a web UI:
./llama-server -hf Aryanne/MixSwap:F16
# Run inference directly in the terminal:
./llama-cli -hf Aryanne/MixSwap:F16

Build from source code

git clone https://github.com/ggerganov/llama.cpp.git
cd llama.cpp
cmake -B build
cmake --build build -j --target llama-server llama-cli

# Start a local OpenAI-compatible server with a web UI:
./build/bin/llama-server -hf Aryanne/MixSwap:F16
# Run inference directly in the terminal:
./build/bin/llama-cli -hf Aryanne/MixSwap:F16

Use Docker

docker model run hf.co/Aryanne/MixSwap:F16
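Once llama-server is up, any OpenAI-compatible client can talk to it. A minimal sketch of a chat request, assuming the server is running on its default address of http://127.0.0.1:8080:

curl http://127.0.0.1:8080/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "messages": [
      {"role": "user", "content": "Write a short greeting."}
    ],
    "temperature": 0.8,
    "max_tokens": 128
  }'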
MixSwap
This is a merge of pre-trained language models created with mergekit; my own branch of mergekit was used here.
Merge Details
Merge Method
This model was merged using the task_swapping merge method, with Aryanne/Open-StarLake-Swap-7B as the base.
Models Merged
The following models were included in the merge:
- cognitivecomputations/dolphin-2.2.1-mistral-7b
- teknium/Mistral-Trismegistus-7B
- l3utterfly/mistral-7b-v0.1-layla-v4-chatml
Prompt Format:
I prefer this format, which seems to work well.
Example using KoboldCpp:
Start Seq.:
\nYour_name:
End Seq.:
\nCharacter_name:
In Memory
### Instruction:
Character description.
Generate an endless verbose (very descriptive) role-play conversation with Character_name.
### Response:
Your_name: how are you doing babe? *Your_name approaches Character_name and kisses her on the lips*
Character_name: I'm fine, it's been a weird day. *Character_name blushes and hugs Your_name with love*
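The same format also works outside KoboldCpp. A minimal sketch with llama-cli in interactive mode, using the start sequence as a reverse prompt so control returns to the user; "Alice" and "Seraphina" below are hypothetical stand-ins for Your_name and Character_name:

./llama-cli -hf Aryanne/MixSwap:F16 -i --color \
  -r "Alice:" \
  -p "### Instruction:
Seraphina is a gentle forest guardian who speaks in vivid detail.
Generate an endless verbose (very descriptive) role-play conversation with Seraphina.
### Response:
Alice: how are you doing? *Alice waves at Seraphina*
Seraphina:"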
Configuration
The following YAML configuration was used to produce this model:
base_model:
  model:
    path: Aryanne/Open-StarLake-Swap-7B
dtype: bfloat16
merge_method: task_swapping
slices:
- sources:
  - layer_range: [0, 32]
    model:
      model:
        path: l3utterfly/mistral-7b-v0.1-layla-v4-chatml
    parameters:
      diagonal_offset: 4.0
      random_mask: 0.1
      random_mask_seed: 1956557.0
      weight: 0.4
  - layer_range: [0, 32]
    model:
      model:
        path: cognitivecomputations/dolphin-2.2.1-mistral-7b
    parameters:
      diagonal_offset: 4.0
      random_mask: 0.1
      random_mask_seed: 18019.0
      weight: 0.333
  - layer_range: [0, 32]
    model:
      model:
        path: teknium/Mistral-Trismegistus-7B
    parameters:
      diagonal_offset: 4.0
      random_mask: 0.05
      random_mask_seed: 666666.0
      weight: 0.5
  - layer_range: [0, 32]
    model:
      model:
        path: Aryanne/Open-StarLake-Swap-7B
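To reproduce the merge, note that task_swapping is not part of upstream mergekit; it comes from my branch mentioned above. A rough sketch, assuming that branch is installed and the config above is saved as mixswap.yml (a hypothetical filename):

# task_swapping lives in my mergekit branch, not upstream mergekit
pip install -e .                     # run inside a checkout of that branch
mergekit-yaml mixswap.yml ./MixSwap  # writes the merged model to ./MixSwap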