# llama-gguf
---
license: mit
language:
  - en
base_model:
  - nirusanan/Mistral_7B_DPO_Finetune_Pandas_Code_Gen_Final
  - meta-llama/Llama-3.1-8B
---

## Run the GGUF model on CPU

These GGUF models are quantized to 8 bits (q8_0).

`llama3.1-q8_0.gguf` is a direct GGUF conversion of the Llama-3.1-8B model.

`Mistral_7B_DPO_Pandas-q8_0.gguf` is a GGUF conversion of a Mistral-7B model fine-tuned with Direct Preference Optimization (DPO) on a Pandas code-generation dataset.

```shell
pip install llama-cpp-python
```

Manually download the GGUF model (e.g. `llama3.1-q8_0.gguf`) from this repo.
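If you prefer to script the download, files on the Hugging Face Hub resolve from a predictable URL. A minimal sketch of building that URL (the repo id `nirusanan/llama-gguf` is assumed from this model card):

```python
# Sketch: construct the direct download URL for a GGUF file on the
# Hugging Face Hub. The repo id below is an assumption based on this card.
repo_id = "nirusanan/llama-gguf"      # assumed repo id
filename = "llama3.1-q8_0.gguf"

url = f"https://huggingface.co/{repo_id}/resolve/main/{filename}"
print(url)
```

Alternatively, `huggingface_hub.hf_hub_download(repo_id=..., filename=...)` fetches the file and returns its local cache path.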

```python
from llama_cpp import Llama

# Load the downloaded GGUF file. chat_format only affects
# create_chat_completion(); the plain completion call below ignores it.
llm = Llama(model_path="/your-path/model.gguf", chat_format="llama-2")

prompt = "Explain large language models."

# Plain text completion, capped at 300 generated tokens.
output = llm(prompt, max_tokens=300)
print(output["choices"][0]["text"])
```
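The completion call returns an OpenAI-style dict. A sketch of extracting the generated text defensively, using a hypothetical sample response so it runs without loading a model:

```python
# Hypothetical sample of the OpenAI-style dict that llama-cpp-python
# returns from a completion call (no model needed for this sketch).
sample_output = {
    "choices": [
        {"text": "Large language models are neural networks...", "finish_reason": "length"}
    ],
    "usage": {"prompt_tokens": 5, "completion_tokens": 300},
}

def extract_text(output: dict) -> str:
    # Guard against a missing or empty choices list before indexing.
    choices = output.get("choices") or []
    return choices[0].get("text", "") if choices else ""

print(extract_text(sample_output))
```

Checking `finish_reason` is also useful: `"length"` means generation hit `max_tokens` and the answer may be truncated.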