---
license: mit
language:
  - en
---

# On-chain llama.cpp - Internet Computer

You can run any `*.gguf` file in a llama_cpp_canister, but the smaller models in this repo are convenient for testing with onicai/llama_cpp_canister.
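For example, to fetch a single model file without cloning the full repo, you can use the huggingface-hub CLI; a minimal sketch, using one of the models in this repo as the filename:

```bash
# Requires: pip install huggingface-hub
# Download one gguf file from this repo into the current directory
huggingface-cli download onicai/llama_cpp_canister_models stories15Mtok4096.gguf --local-dir .
```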

## Notes

### Set up local git with LFS

See: Getting Started: set-up

```bash
# Install git-lfs
# Ubuntu
sudo apt-get install git-lfs
# Mac
brew install git-lfs
# Activate git-lfs for your git user
git lfs install

# Install the huggingface CLI tools in a python environment
pip install huggingface-hub

# Clone this repo
# https
git clone https://huggingface.co/onicai/llama_cpp_canister_models
# ssh
git clone git@hf.co:onicai/llama_cpp_canister_models

cd llama_cpp_canister_models

# Configure lfs for the local repo
huggingface-cli lfs-enable-largefiles .

# Tell lfs which files to track (.gitattributes)
git lfs track "*.gguf"

# Add, commit & push as usual with git
git add <file-name>
git commit -m "Adding <file-name>"
git push -u origin main
```
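If you only need one of the model files, you can also skip downloading every LFS object at clone time and pull a single `*.gguf` afterwards; a minimal sketch (the filename is just an example):

```bash
# Clone with LFS pointer files only (no large downloads)
GIT_LFS_SKIP_SMUDGE=1 git clone https://huggingface.co/onicai/llama_cpp_canister_models
cd llama_cpp_canister_models

# Fetch just the model you want
git lfs pull --include "stories15Mtok4096.gguf"
```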

## Model creation

We used the convert-llama2c-to-ggml tool from llama.cpp to convert each llama2.c model+tokenizer pair to the llama.cpp gguf format.

For example:

```bash
# From the llama.cpp root folder

# Build everything
make -j

# Convert a llama2.c model+tokenizer to gguf
./convert-llama2c-to-ggml --llama2c-model stories260Ktok512.bin --copy-vocab-from-model tok512.bin --llama2c-output-model stories260Ktok512.gguf
./convert-llama2c-to-ggml --llama2c-model stories15Mtok4096.bin --copy-vocab-from-model tok4096.bin --llama2c-output-model stories15Mtok4096.gguf
./convert-llama2c-to-ggml --llama2c-model stories42Mtok4096.bin --copy-vocab-from-model tok4096.bin --llama2c-output-model stories42Mtok4096.gguf
./convert-llama2c-to-ggml --llama2c-model stories110Mtok32000.bin --copy-vocab-from-model models/ggml-vocab-llama.gguf --llama2c-output-model stories110Mtok32000.gguf
./convert-llama2c-to-ggml --llama2c-model stories42Mtok32000.bin --copy-vocab-from-model models/ggml-vocab-llama.gguf --llama2c-output-model stories42Mtok32000.gguf
```

```bash
# Run it locally, like this
./llama-cli -m stories15Mtok4096.gguf -p "Joe loves writing stories" -n 600 -c 128
```
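The converter writes a full-precision gguf. If you need an even smaller file for canister testing, llama.cpp's quantize tool can shrink it; a sketch, assuming your llama.cpp build produced llama-quantize in the root folder and using Q4_K_M as one common quantization type:

```bash
# Quantize the full-precision gguf to 4-bit (Q4_K_M)
./llama-quantize stories15Mtok4096.gguf stories15Mtok4096-q4_k_m.gguf Q4_K_M

# Sanity-check the quantized model
./llama-cli -m stories15Mtok4096-q4_k_m.gguf -p "Joe loves writing stories" -n 600 -c 128
```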