# Sweep Next-Edit 1.5B (MLX)

A 1.5B-parameter model for next-edit autocomplete, quantized to 8-bit MLX format.
## Model Description

Sweep Next-Edit predicts your next code edit before you make it. It runs locally on your laptop in under 500 ms (with speculative decoding) and outperforms models over 4x its size on next-edit benchmarks. More details here.
## Usage

```shell
pip install mlx-lm
```

```python
from mlx_lm import load, generate

model, tokenizer = load("Farok/sweep-next-edit-1.5b-mlx")

prompt = "hello"

if tokenizer.chat_template is not None:
    messages = [{"role": "user", "content": prompt}]
    prompt = tokenizer.apply_chat_template(
        messages, add_generation_prompt=True, return_dict=False,
    )

response = generate(model, tokenizer, prompt=prompt, verbose=True)
```
## Model Details

- Format: MLX (8-bit)
- Parameters: 1.5B
- Context Length: 8192 tokens
- Base Model: Qwen2.5-Coder
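
Because the context window is 8192 tokens, long files or diff histories need trimming before they reach the model. A minimal sketch, using a rough characters-per-token heuristic as a stand-in for the real tokenizer (for exact counts, encode with the tokenizer returned by `load`); the function name and the 4-chars-per-token ratio are illustrative assumptions, not part of this model's API:

```python
# Rough sketch: keep a prompt within the 8192-token context window.
# CHARS_PER_TOKEN is a heuristic approximation, not an exact measure.
MAX_TOKENS = 8192
CHARS_PER_TOKEN = 4

def truncate_to_context(text: str, reserve_for_output: int = 512) -> str:
    """Drop the oldest text so the prompt plus generation fit in the window."""
    budget_chars = (MAX_TOKENS - reserve_for_output) * CHARS_PER_TOKEN
    return text[-budget_chars:] if len(text) > budget_chars else text
```

Keeping the *end* of the text preserves the most recent edits, which matter most for next-edit prediction.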
## Example

The model uses a specific prompt format, combining file context, recent diffs, and the current file state, to predict the next edit. See run_model.py for a complete example.
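
As a rough illustration of how those three inputs could be assembled, here is a sketch; the section markers and `build_prompt` helper are hypothetical placeholders, not the model's actual format, which is defined in run_model.py:

```python
# Illustrative only: the real prompt template lives in run_model.py.
# The <|...|> markers below are hypothetical placeholders, not the
# model's actual special tokens.
def build_prompt(file_context: str, recent_diffs: str, current_state: str) -> str:
    """Assemble the three inputs the model conditions on into one prompt."""
    return (
        "<|file_context|>\n" + file_context + "\n"
        "<|recent_diffs|>\n" + recent_diffs + "\n"
        "<|current_state|>\n" + current_state + "\n"
    )

prompt = build_prompt(
    file_context="def add(a, b):\n    return a + b\n",
    recent_diffs="- return a + b\n+ return a - b\n",
    current_state="def add(a, b):\n    return a - b\n",
)
```

The resulting string would then be passed as `prompt` to `generate` as in the Usage snippet above.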
## Links

- Blog Post - Technical details and benchmarks
- JetBrains Plugin - Sweep AI JetBrains plugin
- HN Thread - Discussion of implementations for VSCode, Neovim & Emacs
- Twitter Post - Ask us any other questions
## License

Apache 2.0