How to use with vLLM

Install vLLM from pip and serve the model:
# Install vLLM from pip:
pip install vllm
# Start the vLLM server:
vllm serve "cortexso/intellect-1"
# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:8000/v1/chat/completions" \
	-H "Content-Type: application/json" \
	--data '{
		"model": "cortexso/intellect-1",
		"messages": [
			{
				"role": "user",
				"content": "What is the capital of France?"
			}
		]
	}'
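
Because vLLM exposes an OpenAI-compatible API, the server can also be called programmatically. A minimal Python sketch, assuming the server started above is reachable at localhost:8000 and the openai package is installed:

# pip install openai
from openai import OpenAI

# vLLM does not require a real API key by default, so any placeholder works.
client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")

response = client.chat.completions.create(
    model="cortexso/intellect-1",
    messages=[{"role": "user", "content": "What is the capital of France?"}],
)
print(response.choices[0].message.content)
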
Use Docker
# Run the model with Docker Model Runner:
docker model run hf.co/cortexso/intellect-1

Overview

Intellect-1 is a 10B-parameter instruction-tuned model developed by Prime Intellect, designed to handle a broad range of natural language processing tasks with efficiency and precision. Optimized for dialogue, reasoning, and knowledge-intensive applications, it performs strongly on structured generation, summarization, and retrieval-augmented tasks. The model is part of an open ecosystem that provides transparency in training data, model architecture, and fine-tuning methodology.

Variants

No  Variant          Cortex CLI command
1   Intellect-1-10b  cortex run intellect-1:10b

Use it with Jan (UI)

  1. Install Jan using the Quickstart
  2. Search for the model in the Jan Model Hub:
    cortexhub/intellect-1
    

Use it with Cortex (CLI)

  1. Install Cortex using the Quickstart
  2. Run the model with the command below (a programmatic sketch follows this list):
    cortex run intellect-1
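
Cortex also serves an OpenAI-compatible HTTP API alongside the CLI. A minimal sketch using only the Python standard library; note that the port 39281 is an assumption based on Cortex's recent defaults, so check your installation:

import json
import urllib.request

payload = {
    "model": "intellect-1",
    "messages": [{"role": "user", "content": "What is the capital of France?"}],
}
# Port 39281 is assumed to be Cortex's default API port; adjust if yours differs.
req = urllib.request.Request(
    "http://localhost:39281/v1/chat/completions",
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(req) as resp:
    body = json.load(resp)
print(body["choices"][0]["message"]["content"])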
    


GGUF

Model size: 10B params
Architecture: llama

Available quantizations: 2-bit, 3-bit, 4-bit, 5-bit, 6-bit, 8-bit
