---
language:
  - en
license: apache-2.0
library_name: llama-cpp
tags:
  - gguf
  - forkjoin-ai
pipeline_tag: text-generation
---

# TinyLlama 1.1B

A Forkjoin.ai conversion of TinyLlama 1.1B to GGUF format for edge deployment.

## Model Details

## Usage

### With llama.cpp

```shell
./llama-cli -m tinyllama-1.1b-gguf.gguf -p "Your prompt here" -n 256
```
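Before loading the file, you can sanity-check that a download completed and really is a GGUF file by reading its fixed-size header. This is a minimal sketch based on the GGUF format (ASCII magic `GGUF`, a `uint32` version, a `uint64` tensor count, and a `uint64` metadata key/value count, little-endian); the file path in the commented example is only illustrative.

```python
import struct

def read_gguf_header(data: bytes) -> dict:
    """Parse the fixed GGUF header fields from the first 24 bytes.

    GGUF files begin with the ASCII magic "GGUF", followed by a
    little-endian uint32 version, uint64 tensor count, and uint64
    metadata key/value count.
    """
    magic, version, n_tensors, n_kv = struct.unpack_from("<4sIQQ", data, 0)
    if magic != b"GGUF":
        raise ValueError("not a GGUF file")
    return {
        "version": version,
        "tensor_count": n_tensors,
        "metadata_kv_count": n_kv,
    }

# Example (hypothetical path): inspect a freshly downloaded file
# with open("tinyllama-1.1b-gguf.gguf", "rb") as f:
#     print(read_gguf_header(f.read(24)))
```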

### With Ollama

Create a Modelfile:

```
FROM ./tinyllama-1.1b-gguf.gguf
```

Then build and run the model:

```shell
ollama create tinyllama-1.1b-gguf -f Modelfile
ollama run tinyllama-1.1b-gguf
```
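Once the model is created, you can also call it programmatically through Ollama's local HTTP API (`POST /api/generate` on the default port 11434). A minimal sketch, assuming an Ollama server is running locally; the helper below only builds the JSON request body, and the commented lines show how it would be sent:

```python
import json

def build_generate_request(model: str, prompt: str, stream: bool = False) -> bytes:
    """Build the JSON body for Ollama's local /api/generate endpoint."""
    return json.dumps({"model": model, "prompt": prompt, "stream": stream}).encode()

# To send it (assumes Ollama is running on localhost:11434):
# import urllib.request
# req = urllib.request.Request(
#     "http://localhost:11434/api/generate",
#     data=build_generate_request("tinyllama-1.1b-gguf", "Your prompt here"),
#     headers={"Content-Type": "application/json"},
# )
# print(json.load(urllib.request.urlopen(req))["response"])
```

Setting `stream` to `False` returns the full completion in a single JSON response instead of newline-delimited chunks.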

## About Forkjoin.ai

Forkjoin.ai runs AI models at the edge: in-browser, on-device, with zero cloud cost. These converted models power real-time inference, speech recognition, and natural language capabilities.

All conversions are optimized for edge deployment within browser and mobile memory constraints.

## License

Apache 2.0 (follows upstream model license)