---
license: apache-2.0
---
# Instinct, the State-of-the-Art Open Next-Edit Model
This repo contains the model weights for Instinct, Continue's state-of-the-art open next-edit model. Robustly fine-tuned from Qwen2.5-Coder-7B, Instinct intelligently predicts your next move to keep you in flow.
## Serving the model
There are many ways to plug a local model into Continue; internally, we used an endpoint served by SGLang, which is one of the options below. We observed no significant change in output quality with fp8 quantization, so it may be used if desired.
- SGLang:

  ```shell
  python3 -m sglang.launch_server --model-path continuedev/instinct --load-format safetensors
  ```

- vLLM:

  ```shell
  vllm serve continuedev/instinct --served-model-name instinct --load-format safetensors --enable-prefix-caching --enable-chunked-prefill
  ```
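As a sketch of putting the above together: both servers expose an OpenAI-compatible HTTP API once running, and fp8 can be requested at launch time. The `--quantization fp8` flag, the default port (8000 for vLLM; SGLang defaults to 30000), and the placeholder prompt below are assumptions to verify against your installed versions.

```shell
# Optional: serve with fp8 quantization (sketch; confirm the flag name and
# value against your installed vLLM/SGLang version)
vllm serve continuedev/instinct --served-model-name instinct \
  --load-format safetensors --quantization fp8

# In another terminal: query the OpenAI-compatible completions endpoint
# (localhost:8000 is vLLM's default; adjust host/port for your deployment)
curl -s http://localhost:8000/v1/completions \
  -H "Content-Type: application/json" \
  -d '{"model": "instinct", "prompt": "def add(a, b):", "max_tokens": 64}'
```

The `model` field in the request must match the name passed via `--served-model-name` (here, `instinct`).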
## Learn more
For more information on the work behind Instinct, please refer to our blog.