How to use from Pi
Start the llama.cpp server
# Install llama.cpp:
brew install llama.cpp
# Start a local OpenAI-compatible server:
llama-server -hf cortexso/falcon3
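
Once the server is running, a quick request against its OpenAI-compatible endpoint confirms the setup (a minimal sketch; llama-server listens on http://localhost:8080 by default, and the model field is not strictly validated):

# Send a test chat completion to the local server:
curl http://localhost:8080/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{"model": "falcon3", "messages": [{"role": "user", "content": "Hello!"}]}'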
Configure the model in Pi
# Install Pi:
npm install -g @mariozechner/pi-coding-agent
# Add to ~/.pi/agent/models.json:
{
  "providers": {
    "llama-cpp": {
      "baseUrl": "http://localhost:8080/v1",
      "api": "openai-completions",
      "apiKey": "none",
      "models": [
        {
          "id": "falcon3"
        }
      ]
    }
  }
}
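
Before launching Pi, it is worth confirming that the baseUrl above is reachable; llama-server exposes the standard OpenAI-compatible model listing:

# List the models the local server advertises:
curl http://localhost:8080/v1/models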
Run Pi
# Start Pi in your project directory:
pi

Overview

Falcon3-10B-Instruct is part of the Falcon3 family of Open Foundation Models, offering state-of-the-art performance in reasoning, language understanding, instruction following, code, and mathematics. With 10 billion parameters, Falcon3-10B-Instruct is optimized for high-quality instruction-following tasks and supports multilingual capabilities in English, French, Spanish, and Portuguese. It provides a long context length of up to 32K tokens, making it suitable for extended document understanding and processing.
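
When serving the model locally, the full context window must be requested explicitly; a sketch using llama-server, where -c sets the context size in tokens (larger values increase memory use):

# Serve with the full 32K context window:
llama-server -hf cortexso/falcon3 -c 32768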

Variants

No   Variant       Cortex CLI command
1    Falcon3-10b   cortex run falcon3:10b

Use it with Jan (UI)

  1. Install Jan using the Quickstart guide
  2. Use it in the Jan model Hub:
    cortexso/falcon3

Use it with Cortex (CLI)

  1. Install Cortex using the Quickstart guide
  2. Run the model with the command:
    cortex run falcon3
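
Once the model is running, Cortex also exposes a local OpenAI-compatible API that can be queried directly (a sketch, assuming Cortex's default API port of 39281):

# Send a test chat completion through Cortex's local API:
curl http://localhost:39281/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{"model": "falcon3", "messages": [{"role": "user", "content": "Hello!"}]}'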

Credits
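
Falcon3-10B-Instruct was developed by the Technology Innovation Institute (TII); this repository hosts GGUF conversions of the model for use with cortex.cpp and llama.cpp.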

GGUF
Model size: 10B params
Architecture: llama

Available quantizations: 2-bit, 3-bit, 4-bit, 5-bit, 6-bit, 8-bit
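
To fetch the GGUF files ahead of time instead of streaming them through -hf, the Hugging Face CLI can download them directly (a sketch; exact filenames and revision layout in the repository may differ):

# Download the GGUF files to a local directory:
pip install -U "huggingface_hub[cli]"
huggingface-cli download cortexso/falcon3 --include "*.gguf" --local-dir ./falcon3
# Some repositories keep each quantization under its own revision; add --revision <tag> if needed.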
