---
language:
  - en
license: apache-2.0
library_name: llama-cpp
tags:
  - gguf
  - forkjoin-ai
pipeline_tag: text-generation
---

# Mistral 7B Instruct

A Forkjoin.ai conversion of Mistral 7B Instruct to GGUF format for edge deployment.

## Model Details

## Usage

### With llama.cpp

```sh
./llama-cli -m mistral-7b-instruct-gguf.gguf -p "Your prompt here" -n 256
```

### With Ollama

Create a `Modelfile` containing:

```
FROM ./mistral-7b-instruct-gguf.gguf
```

Then build and run the model:

```sh
ollama create mistral-7b-instruct-gguf -f Modelfile
ollama run mistral-7b-instruct-gguf
```
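A bare `FROM` line is enough to get started, but the Modelfile format also supports `PARAMETER` and `TEMPLATE` directives for tuning sampling and applying the instruct chat format. The sketch below shows one possible configuration; the parameter values and the template string are illustrative assumptions, not values shipped with this model.

```
# Illustrative Modelfile sketch -- values are examples, not defaults.
FROM ./mistral-7b-instruct-gguf.gguf

# Sampling and context settings (example values).
PARAMETER temperature 0.7
PARAMETER num_ctx 4096

# Mistral instruct-style prompt template (verify against the model's
# own chat template before relying on it).
TEMPLATE """[INST] {{ .Prompt }} [/INST]"""
```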

## About Forkjoin.ai

Forkjoin.ai runs AI models at the edge: in-browser, on-device, with zero cloud cost. These converted models power real-time inference, speech recognition, and natural language capabilities.

All conversions are optimized for edge deployment within browser and mobile memory constraints.

## License

Apache 2.0 (follows upstream model license)