# Lovelace-1-7B

*A research-oriented code language model focused on realistic software reasoning.*
## Model Summary

Lovelace-1-7B is a 7-billion-parameter, code-focused large language model based on [bigcode/starcoder2-7b](https://huggingface.co/bigcode/starcoder2-7b).
It is part of the Lovelace model family, which aims to build scalable, engineering-aligned coding models for long-term use in tooling, agentic systems, and research environments.
Rather than optimising for short-term benchmarks, Lovelace prioritises correctness, constraint awareness, and system-level reasoning.
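Since the model derives from StarCoder2, it should load through the standard `transformers` API. A minimal sketch, assuming a Hub repository id of `lovelace/Lovelace-1-7B` (a placeholder, not a published repository):

```python
# Minimal loading sketch; the repo id below is hypothetical until the
# actual Lovelace-1-7B repository is published.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "lovelace/Lovelace-1-7B"  # placeholder Hub id

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # half precision keeps the 7B weights near ~15 GB
    device_map="auto",           # requires the `accelerate` package
)
```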
## Model Family
| Model | Base Model | Parameters | Status |
|---|---|---|---|
| Lovelace-1-3B | StarCoder2-3B | 3B | Released |
| Lovelace-1-7B | StarCoder2-7B | 7B | Released |
| Lovelace-1-15B | TBD | 15B | Planned |
All Lovelace models are designed to remain interface-compatible with the Lovelace Code runtime.
## Architecture
- Base architecture: Transformer (decoder-only)
- Foundation model: StarCoder2-7B
- Training paradigm: Continued pretraining and alignment for code-centric tasks
- Modalities: Text (code and natural language)
- Tokenisation: Inherited from StarCoder2
The architecture follows StarCoder2-7B closely to preserve its strong performance across natural languages and programming languages, while leaving room for future extensions.
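Because tokenisation is inherited unchanged, prompts relying on StarCoder2's special tokens should carry over. As an illustration, StarCoder2 defines fill-in-the-middle (FIM) tokens, so a FIM-style prompt can be assembled as below; whether Lovelace's alignment preserves FIM behaviour is an assumption to verify against the released tokenizer config.

```python
# Fill-in-the-middle prompt built from StarCoder2's FIM special tokens.
# Assumption: Lovelace-1-7B keeps these tokens and their FIM behaviour.
prefix = "def mean(xs):\n    "
suffix = "\n    return total / len(xs)\n"
fim_prompt = f"<fim_prefix>{prefix}<fim_suffix>{suffix}<fim_middle>"
# The model is expected to generate the missing middle,
# e.g. "total = sum(xs)".
```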
## Intended Capabilities
Although formal benchmarks are not yet published, Lovelace-1-7B is designed for:
- Code generation and completion across multiple programming languages
- Code refactoring and explanation
- Debugging and error localisation
- API usage reasoning and software design discussion
- Identifying infeasible or unrealistic engineering requests and responding with viable alternatives
The model is explicitly tuned to avoid hallucinated implementations, preferring to state its limitations openly and offer constructive alternatives.
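For plain left-to-right completion, standard `transformers` generation applies. A hedged sketch, reusing the `model` and `tokenizer` loaded earlier; the decoding parameters are illustrative defaults, not tuned recommendations:

```python
# Greedy completion of a code prompt.
prompt = "# Read a CSV file and return its rows as dictionaries\nimport csv\n"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(
    **inputs,
    max_new_tokens=128,
    do_sample=False,                      # deterministic, reproducible output
    pad_token_id=tokenizer.eos_token_id,  # silences the missing-pad warning
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```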
## Lovelace Code Library
Lovelace-1-7B is intended to be used alongside Lovelace Code, a companion library providing:
- Structured coding prompts and system templates
- Long-request handling and staged generation
- Guardrails for non-computable or impractical tasks
- Integration points for execution, tooling, and agent frameworks
Current development focuses on stability for long requests, including multi-file generation and iterative refinement workflows.
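The Lovelace Code API is not yet documented, so the following is purely illustrative: a sketch of what a pre-generation guardrail for non-computable or impractical tasks might look like. Every name in it is hypothetical and none comes from the actual library.

```python
# Hypothetical guardrail: screen a request before it reaches the model.
INFEASIBLE_MARKERS = (
    "solve the halting problem",
    "guarantee the code is bug-free",
    "recover the original source from this binary",
)

def screen_request(request: str) -> str | None:
    """Return an explanation-with-alternative, or None if the request passes."""
    lowered = request.lower()
    for marker in INFEASIBLE_MARKERS:
        if marker in lowered:
            return (
                f"The request ('{marker}') is not computable or practical in "
                "general; consider a bounded approximation or a narrower "
                "specification."
            )
    return None  # nothing flagged; forward the request to the model
```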
## Evaluation
At present:
- No public benchmark results are released
- Internal evaluation focuses on qualitative correctness, coherence under long prompts, and tool-aligned behaviour
Formal evaluation and transparent reporting are planned future work.
## Limitations
- Long-context stability is still under active development
- No vision or multimodal support at this stage
- Performance characteristics may differ from StarCoder2-7B depending on downstream usage
Users should evaluate the model carefully before deploying in production or safety-critical environments.
## Roadmap
Planned improvements include:
- Improved long-context stability in Lovelace Code
- Release of Lovelace-1-15B
- Vision-language support (code + visual inputs)
- Public benchmarks and technical reporting
- Deeper integration with agentic and execution-based systems
## Intended Use
Lovelace-1-7B is suitable for:
- Research into code-focused LLM behaviour
- Developer tooling and agent-based coding systems
- Educational and exploratory programming assistance
It is not intended for autonomous execution or high-risk domains without additional safeguards.
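As one concrete example of such a safeguard: never run generated code in the host interpreter; at minimum, isolate it in a subprocess with a hard timeout. A minimal sketch (an illustration, not a complete security sandbox):

```python
# Run generated code in a separate process with a hard timeout.
# This limits runaway loops, but it is NOT a security sandbox.
import subprocess
import sys
import tempfile

def run_generated_code(code: str, timeout_s: float = 5.0) -> subprocess.CompletedProcess:
    with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as f:
        f.write(code)
        path = f.name  # caller may unlink this file afterwards
    return subprocess.run(
        [sys.executable, path],
        capture_output=True,
        text=True,
        timeout=timeout_s,  # raises subprocess.TimeoutExpired on overrun
    )
```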
## Acknowledgements

Lovelace-1-7B builds directly on the work of the BigCode project, specifically [starcoder2-7b](https://huggingface.co/bigcode/starcoder2-7b).
The Lovelace project draws inspiration from modern open-weight research releases and large-scale industrial coding systems.
## Licence
Please refer to the licence of the underlying StarCoder2-7B model. Additional terms may apply to the Lovelace Code library and downstream tooling.