The Falcon Series of Open Language Models
Paper: arXiv:2311.16867
Install from winget

```bash
winget install llama.cpp

# Start a local OpenAI-compatible server with a web UI:
llama-server -hf cortexso/falcon3

# Run inference directly in the terminal:
llama-cli -hf cortexso/falcon3
```
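Once the server is running, any OpenAI-compatible client can talk to it. A minimal sketch with curl, assuming llama-server's default bind address of 127.0.0.1:8080; the `model` field is a placeholder, since the server answers for whichever model it was started with:

```bash
# Ask the local server for a chat completion over the OpenAI-style API:
curl http://127.0.0.1:8080/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
    "model": "falcon3",
    "messages": [
      {"role": "user", "content": "Give a one-sentence summary of the Falcon3 family."}
    ]
  }'
```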
Download pre-built binary

```bash
# Download a pre-built binary from:
# https://github.com/ggerganov/llama.cpp/releases

# Start a local OpenAI-compatible server with a web UI:
./llama-server -hf cortexso/falcon3

# Run inference directly in the terminal:
./llama-cli -hf cortexso/falcon3
```
Build from source

```bash
git clone https://github.com/ggerganov/llama.cpp.git
cd llama.cpp
cmake -B build
cmake --build build -j --target llama-server llama-cli

# Start a local OpenAI-compatible server with a web UI:
./build/bin/llama-server -hf cortexso/falcon3

# Run inference directly in the terminal:
./build/bin/llama-cli -hf cortexso/falcon3
```

Run with Docker

```bash
docker model run hf.co/cortexso/falcon3
```

Falcon3-10B-Instruct is part of the Falcon3 family of Open Foundation Models, offering state-of-the-art performance in reasoning, language understanding, instruction following, code, and mathematics. With 10 billion parameters, it is optimized for high-quality instruction-following tasks and supports multilingual use in English, French, Spanish, and Portuguese. It provides a long context length of up to 32K tokens, making it suitable for extended document understanding and processing.
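Depending on your llama.cpp version, the default context window may be smaller than the model's 32K maximum, so it can help to request it explicitly. A minimal sketch using the standard `-c` / `--ctx-size` flag (larger values increase memory use):

```bash
# Serve Falcon3 with a 32K-token context window:
llama-server -hf cortexso/falcon3 -c 32768
```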
| No | Variant | Cortex CLI command |
|---|---|---|
| 1 | Falcon3-10b | cortex run falcon3:10b |
```bash
# Model repository: cortexhub/falcon3
# Run with the Cortex CLI:
cortex run falcon3
```
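For the 10B variant from the table above, the tag can be named directly. A sketch, assuming your Cortex build also provides `cortex pull` for fetching weights ahead of time:

```bash
# Fetch the weights first, then start an interactive session:
cortex pull falcon3:10b
cortex run falcon3:10b
```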
Available GGUF quantizations:

- 2-bit
- 3-bit
- 4-bit
- 5-bit
- 6-bit
- 8-bit
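With llama.cpp, a specific quantization can be selected by appending a `:tag` suffix to the `-hf` argument. A sketch; the tag `Q4_K_M` here is a hypothetical label for the 4-bit build, so check the repository's GGUF filenames for the exact spelling:

```bash
# Run a specific quantization of the model (tag name is an assumption):
llama-cli -hf cortexso/falcon3:Q4_K_M
```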
Install from brew

```bash
brew install llama.cpp

# Start a local OpenAI-compatible server with a web UI:
llama-server -hf cortexso/falcon3

# Run inference directly in the terminal:
llama-cli -hf cortexso/falcon3
```