Command used:

```
python ties_merge.py TheBloke/Llama-2-13B-fp16 ./UndiMix-v1-13b --merge The-Face-Of-Goonery/Huginn-13b-v1.2 --merge Doctor-Shotgun/llama-2-13b-chat-limarp-v2-merged
```
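The `ties_merge.py` script itself isn't included in this repo. As a rough illustration only (not the script's actual implementation), TIES-style merging works per tensor by trimming small deltas from the base model, electing a majority sign per parameter, and averaging the deltas that agree with it. A minimal NumPy sketch, where `density` (the fraction of deltas kept) is an assumed parameter name:

```python
import numpy as np

def ties_merge(base, task_weights, density=0.5):
    """Sketch of TIES merging: trim, elect sign, average agreeing deltas."""
    deltas = [w - base for w in task_weights]
    trimmed = []
    for d in deltas:
        # Keep only the top-k deltas by magnitude; zero out the rest.
        k = max(1, int(density * d.size))
        thresh = np.sort(np.abs(d).ravel())[-k]
        trimmed.append(np.where(np.abs(d) >= thresh, d, 0.0))
    stacked = np.stack(trimmed)
    # Elect a sign per parameter from the summed trimmed deltas.
    elected = np.sign(stacked.sum(axis=0))
    agree = (np.sign(stacked) == elected) & (elected != 0)
    counts = agree.sum(axis=0)
    # Average only the deltas whose sign agrees with the elected sign.
    merged = np.where(counts > 0,
                      (stacked * agree).sum(axis=0) / np.maximum(counts, 1),
                      0.0)
    return base + merged
```

The base model's weights stay fixed; only the agreeing task deltas are folded back in, which is why conflicting fine-tunes interfere less than with plain averaging.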

Testing around...

Description

This repo contains GGUF files (Q4_K_S and Q5_K_M) of my personal mix: "UndiMix".

It can be hot, serious, playful, and can use emoji thanks to llama-2-13b-chat-limarp-v2-merged.

Models used

  • TheBloke/Llama-2-13B-fp16 (base)
  • Undi95/MythoMax-L2-Kimiko-v2-13b
  • The-Face-Of-Goonery/Huginn-13b-v1.2
  • Doctor-Shotgun/llama-2-13b-chat-limarp-v2-merged

Prompt template: Alpaca

```
Below is an instruction that describes a task. Write a response that appropriately completes the request.

### Instruction:
{prompt}

### Response:
```
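When prompting the GGUF files (e.g. via llama.cpp or llama-cpp-python), the Alpaca template above can be filled in with a small helper; the function name is illustrative:

```python
def alpaca_prompt(instruction: str) -> str:
    # Fill the Alpaca template with a user instruction.
    return (
        "Below is an instruction that describes a task. "
        "Write a response that appropriately completes the request.\n\n"
        f"### Instruction:\n{instruction}\n\n"
        "### Response:\n"
    )
```

The model's reply is then generated as a continuation after the `### Response:` line.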

Special thanks to Sushi kek

Model size: 13B params
Architecture: llama
Quantizations: 4-bit (Q4_K_S), 5-bit (Q5_K_M)