Install from winget
winget install llama.cpp

# Start a local OpenAI-compatible server with a web UI:
llama-server -hf LeroyDyer/Mixtral_Instruct_7b
# Run inference directly in the terminal:
llama-cli -hf LeroyDyer/Mixtral_Instruct_7b

Use a pre-built binary
# Download a pre-built binary from:
# https://github.com/ggerganov/llama.cpp/releases
# Start a local OpenAI-compatible server with a web UI:
./llama-server -hf LeroyDyer/Mixtral_Instruct_7b
# Run inference directly in the terminal:
./llama-cli -hf LeroyDyer/Mixtral_Instruct_7b

Build from source
git clone https://github.com/ggerganov/llama.cpp.git
cd llama.cpp
cmake -B build
cmake --build build -j --target llama-server llama-cli
# Start a local OpenAI-compatible server with a web UI:
./build/bin/llama-server -hf LeroyDyer/Mixtral_Instruct_7b
# Run inference directly in the terminal:
./build/bin/llama-cli -hf LeroyDyer/Mixtral_Instruct_7b

Run with Docker
docker model run hf.co/LeroyDyer/Mixtral_Instruct_7b
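Once llama-server is running, it exposes an OpenAI-compatible API. A minimal Python sketch of calling it with the openai client is below; the port (8080, llama-server's default) and the placeholder API key are assumptions, so adjust them to your setup.

from openai import OpenAI

# Point the OpenAI client at the local llama-server endpoint.
# Port 8080 is llama-server's default; the key is ignored by a local server. (assumptions)
client = OpenAI(base_url="http://localhost:8080/v1", api_key="sk-no-key-required")

response = client.chat.completions.create(
    model="LeroyDyer/Mixtral_Instruct_7b",  # informational; the server uses the loaded model
    messages=[{"role": "user", "content": "Hello! Introduce yourself."}],
    max_tokens=256,
)
print(response.choices[0].message.content)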
This is a merge of pre-trained language models created using mergekit.
This model was merged using the linear merge method.
The following models were included in the merge:
- LeroyDyer/Mixtral_BaseModel
- Locutusque/Hercules-3.1-Mistral-7B
The following YAML configuration was used to produce this model:
models:
  - model: LeroyDyer/Mixtral_BaseModel
    parameters:
      weight: 1.0
  - model: Locutusque/Hercules-3.1-Mistral-7B
    parameters:
      weight: 0.6
merge_method: linear
dtype: float16
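Conceptually, a linear merge is a weighted average of the two checkpoints' parameter tensors. The sketch below illustrates the idea with plain PyTorch state dicts; it is not mergekit's implementation, and normalizing the 1.0/0.6 weights so they sum to 1 is an assumption about the default behavior.

import torch

def linear_merge(state_dicts, weights, normalize=True):
    """Weighted average of matching tensors from several model state dicts (illustrative only)."""
    if normalize:
        total = sum(weights)
        weights = [w / total for w in weights]
    merged = {}
    for name in state_dicts[0]:
        # Accumulate in float32 for stability, then cast to float16 to match dtype above.
        merged_tensor = sum(w * sd[name].to(torch.float32)
                            for w, sd in zip(weights, state_dicts))
        merged[name] = merged_tensor.to(torch.float16)
    return merged

# Hypothetical usage with the weights from the YAML config (1.0 and 0.6):
# merged = linear_merge([base_state_dict, hercules_state_dict], [1.0, 0.6])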
%pip install llama-index-embeddings-huggingface
%pip install llama-index-llms-llama-cpp
!pip install llama-index
from llama_index.core import SimpleDirectoryReader, VectorStoreIndex
from llama_index.llms.llama_cpp import LlamaCPP
from llama_index.llms.llama_cpp.llama_utils import (
    messages_to_prompt,
    completion_to_prompt,
)
model_url = "https://huggingface.co/LeroyDyer/Mixtral_BaseModel-gguf/resolve/main/mixtral_instruct_7b.q8_0.gguf"
llm = LlamaCPP(
    # You can pass in the URL to a GGUF model to download it automatically
    model_url=model_url,
    # optionally, you can set the path to a pre-downloaded model instead of model_url
    model_path=None,
    temperature=0.1,
    max_new_tokens=256,
    # set the context window lower than the model's maximum to allow for some wiggle room
    context_window=3900,
    # kwargs to pass to __call__()
    generate_kwargs={},
    # kwargs to pass to __init__()
    # set to at least 1 to use the GPU
    model_kwargs={"n_gpu_layers": 1},
    # transform inputs into the model's instruct prompt format
    messages_to_prompt=messages_to_prompt,
    completion_to_prompt=completion_to_prompt,
    verbose=True,
)
prompt = input("Enter your prompt: ")
response = llm.complete(prompt)
print(response.text)
Works GOOD!
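The imports above also bring in SimpleDirectoryReader and VectorStoreIndex, and the first cell installs llama-index-embeddings-huggingface, so the same LlamaCPP instance can drive a small retrieval pipeline. A minimal sketch follows; the ./data directory and the BAAI/bge-small-en-v1.5 embedding model are placeholder assumptions.

from llama_index.core import Settings
from llama_index.embeddings.huggingface import HuggingFaceEmbedding

# Use a local HF embedding model for retrieval and the LlamaCPP model for generation.
Settings.llm = llm
Settings.embed_model = HuggingFaceEmbedding(model_name="BAAI/bge-small-en-v1.5")  # assumed choice

# Index whatever documents live in ./data (placeholder path) and ask a question.
documents = SimpleDirectoryReader("./data").load_data()
index = VectorStoreIndex.from_documents(documents)
query_engine = index.as_query_engine()
print(query_engine.query("Summarize the documents."))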
Install from brew
brew install llama.cpp

# Start a local OpenAI-compatible server with a web UI:
llama-server -hf LeroyDyer/Mixtral_Instruct_7b
# Run inference directly in the terminal:
llama-cli -hf LeroyDyer/Mixtral_Instruct_7b