How to use with llama.cpp

Install from brew

brew install llama.cpp

# Start a local OpenAI-compatible server with a web UI:
llama-server -hf ByteBrew23/C_ASM_mistral

# Run inference directly in the terminal:
llama-cli -hf ByteBrew23/C_ASM_mistral

Install from WinGet (Windows)
winget install llama.cpp
# Start a local OpenAI-compatible server with a web UI:
llama-server -hf ByteBrew23/C_ASM_mistral

# Run inference directly in the terminal:
llama-cli -hf ByteBrew23/C_ASM_mistral
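Whichever install route you pick, llama-server exposes an OpenAI-compatible HTTP API (and a web UI at http://localhost:8080 by default), so any OpenAI client can query the model. A minimal curl sketch; the port and the generation parameters below are assumptions to adjust for your setup:

# Assumes llama-server is already running on the default port 8080:
curl http://localhost:8080/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "messages": [
      {"role": "user", "content": "Write a C function that reverses a string in place."}
    ],
    "temperature": 0.2,
    "max_tokens": 256
  }'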
Use pre-built binary

# Download a pre-built binary from:
# https://github.com/ggerganov/llama.cpp/releases

# Start a local OpenAI-compatible server with a web UI:
./llama-server -hf ByteBrew23/C_ASM_mistral

# Run inference directly in the terminal:
./llama-cli -hf ByteBrew23/C_ASM_mistral

Build from source code
git clone https://github.com/ggerganov/llama.cpp.git
cd llama.cpp
cmake -B build
cmake --build build -j --target llama-server llama-cli

# Start a local OpenAI-compatible server with a web UI:
./build/bin/llama-server -hf ByteBrew23/C_ASM_mistral

# Run inference directly in the terminal:
./build/bin/llama-cli -hf ByteBrew23/C_ASM_mistral
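The commands above produce a CPU-only build. llama.cpp's CMake setup also offers optional GPU backends; for example, on an NVIDIA machine the CUDA backend can be enabled at configure time (flag name per the llama.cpp build documentation):

# Sketch: CUDA-enabled build; requires the CUDA toolkit to be installed.
cmake -B build -DGGML_CUDA=ON
cmake --build build -j --target llama-server llama-cli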
Use Docker

docker model run hf.co/ByteBrew23/C_ASM_mistral
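Note that docker model run is Docker Model Runner rather than plain docker run; it pulls the GGUF weights straight from Hugging Face. Assuming Model Runner is enabled in your Docker installation, it should also accept a one-shot prompt as an argument, along these lines:

# Hypothetical one-shot prompt; check the Docker Model Runner docs for exact usage.
docker model run hf.co/ByteBrew23/C_ASM_mistral "Explain what the x86 LEA instruction does."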
"""Below is an instruction that describes a task, paired with an input that provides further context. Write a response that appropriately completes the request.
Instruction:
{}
Input:
{}
Response:
{}"""
About this model

A fine-tune of Mistral v0.3 on the C and assembly data from the codebagel dataset. Training used this Colab notebook: https://colab.research.google.com/drive/1Dyauq4kTZoLewQ1cApceUQVNcnnNTzg_, with the notebook's default number of training epochs left unchanged.