How to use from llama.cpp
Install with Homebrew (macOS/Linux)
brew install llama.cpp
# Start a local OpenAI-compatible server with a web UI:
llama-server -hf uukuguy/speechless-zephyr-code-functionary-7b
# Run inference directly in the terminal:
llama-cli -hf uukuguy/speechless-zephyr-code-functionary-7b
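Once the server is running (it listens on http://localhost:8080 by default), any OpenAI-compatible client can talk to it. A minimal sketch using curl against the chat completions endpoint (the prompt and sampling settings below are only illustrative):
# Query the local server's OpenAI-compatible API:
curl http://localhost:8080/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{"messages": [{"role": "user", "content": "Write a Python function that reverses a string."}], "temperature": 0.2}'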
Install from WinGet (Windows)
winget install llama.cpp
# Start a local OpenAI-compatible server with a web UI:
llama-server -hf uukuguy/speechless-zephyr-code-functionary-7b
# Run inference directly in the terminal:
llama-cli -hf uukuguy/speechless-zephyr-code-functionary-7b
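The server accepts the usual llama.cpp options if the defaults need adjusting; a hedged sketch (the port, context size, and GPU layer count below are illustrative values, not recommendations):
# Serve on a custom port with a larger context window and GPU offload:
llama-server -hf uukuguy/speechless-zephyr-code-functionary-7b --port 8081 -c 4096 -ngl 99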
Use pre-built binary
# Download pre-built binary from:
# https://github.com/ggerganov/llama.cpp/releases
# Start a local OpenAI-compatible server with a web UI:
./llama-server -hf uukuguy/speechless-zephyr-code-functionary-7b
# Run inference directly in the terminal:
./llama-cli -hf uukuguy/speechless-zephyr-code-functionary-7b
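For non-interactive use, llama-cli can also take a prompt directly on the command line; a small sketch (the prompt text and token limit are illustrative):
# One-shot generation from a prompt:
./llama-cli -hf uukuguy/speechless-zephyr-code-functionary-7b -p "Write a bash script that counts lines in all *.py files." -n 256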
Build from source code
git clone https://github.com/ggerganov/llama.cpp.git
cd llama.cpp
cmake -B build
cmake --build build -j --target llama-server llama-cli
# Start a local OpenAI-compatible server with a web UI:
./build/bin/llama-server -hf uukuguy/speechless-zephyr-code-functionary-7b
# Run inference directly in the terminal:
./build/bin/llama-cli -hf uukuguy/speechless-zephyr-code-functionary-7b
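The default build is CPU-only on Linux and Windows (Metal is enabled automatically on Apple Silicon). If an NVIDIA GPU and the CUDA toolkit are available, the build can be configured for CUDA instead; a sketch assuming CUDA is installed (the -ngl value is illustrative):
# Configure and build with CUDA support:
cmake -B build -DGGML_CUDA=ON
cmake --build build -j --target llama-server llama-cli
# Offload model layers to the GPU at run time:
./build/bin/llama-server -hf uukuguy/speechless-zephyr-code-functionary-7b -ngl 99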
Use Docker
docker model run hf.co/uukuguy/speechless-zephyr-code-functionary-7b
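With Docker Model Runner, the model can also be pulled ahead of time and given a one-shot prompt as an argument; a sketch assuming Docker Desktop with Model Runner enabled (the prompt is illustrative):
# Pull the model, then run a single prompt against it:
docker model pull hf.co/uukuguy/speechless-zephyr-code-functionary-7b
docker model run hf.co/uukuguy/speechless-zephyr-code-functionary-7b "Write a SQL query that lists the ten most recent orders."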
speechless-zephyr-code-functionary-7b

4-, 5-, and 8-bit GGUF models for CPU+GPU inference

This model is one of the moloras (Mixture-of-Multi-LoRAs) experiments.

LoRA modules are extracted from the models below (all based on Mistral-7B-v0.1); each LoRA module has its own unique skills. Using multi-loras, they can be combined statically or dynamically to form a versatile new model.

  • HuggingFaceH4/zephyr-7b-beta (Uncensored Model)
  • meetkai/functionary-small-v2.2 (Execute functions/plugins)
  • uukuguy/speechless-code-mistral-7b-v1.0 (Enhance Coding)

The entire process is carried out with the extract-lora, merge-lora, and lora-hub tools provided by multi-loras.

The router of mixture-of-multi-loras enables automatic assembly of LoRA modules, using a gradient-free approach to obtain the coefficients of the LoRA modules and requiring only a handful of inference steps for unseen tasks.

Code: https://github.com/uukuguy/multi_loras
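The repository is the entry point for reproducing the recipe; the commands below only fetch the code, and the actual extract/merge invocations are documented in its README (script names and arguments are not reproduced here):
# Get the multi_loras tooling:
git clone https://github.com/uukuguy/multi_loras.git
cd multi_loras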

LM-Evaluation-Harness

Open LLM Leaderboard

Metric       Value
ARC          61.52
HellaSwag    83.88
MMLU         64.71
TruthfulQA   44.99
Winogrande   78.69
GSM8K        43.82
Average      62.93