LACONIC-DeepScaleR-1.5B-2000

This repository hosts LACONIC-DeepScaleR-1.5B-2000, a LACONIC-trained variant of agentica-org/DeepScaleR-1.5B-Preview.

LACONIC is a length-aware reinforcement learning method for making LLM responses substantially shorter while preserving task performance. During training, it combines task reward with an adaptive length-based cost so that the model learns to stay near a target response budget. This checkpoint targets a budget of 2000 tokens.
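As a rough illustration only (the exact objective is defined in the LACONIC paper, and the function and `penalty_weight` below are hypothetical), the shaped reward can be thought of as the task reward minus an adaptive cost that kicks in once a response exceeds the target budget:

```python
# Hypothetical sketch of a length-shaped reward, NOT the paper's exact objective.
# `task_reward` is the usual correctness reward; the cost grows with how far the
# response runs past the budget (2000 tokens for this checkpoint).
def length_shaped_reward(task_reward: float, response_len: int,
                         budget: int = 2000, penalty_weight: float = 0.1) -> float:
    overage = max(0, response_len - budget) / budget  # fraction past the budget
    return task_reward - penalty_weight * overage
```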

In practice, LACONIC is designed to reduce response length with minimal deployment overhead: the released model uses the usual decoding stack and does not require special inference-time control logic.
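Because no special control logic is needed, the checkpoint can be loaded like any other causal LM. A minimal sketch with the Hugging Face `transformers` library, using the repo id listed on this page (the prompt and `max_new_tokens` value are illustrative):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "laconic-llm/LACONIC-Deepscaler-1.5B-2000"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.bfloat16)

# Standard chat-style generation; no inference-time length controller required.
messages = [{"role": "user", "content": "What is 17 * 24?"}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
)
output = model.generate(input_ids, max_new_tokens=2048)
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))
```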
