---
license: other
license_name: deepseek
base_model: deepseek-ai/deepseek-coder-6.7b-instruct
tags:
- delphi
- objectpascal
- code-assistant
- system-prompt
language:
- en
---
# deepseek-coder-6.7b-instruct-de2
**A Delphi/ObjectPascal coding assistant built on deepseek-coder-6.7b-instruct.**
A comprehensive 100-item system prompt with an 8192-token context window. Experimental.
## What is this?
This is **not a fine-tuned model**. It uses the unmodified deepseek-coder-6.7b-instruct
weights with a carefully crafted system prompt that activates the model's existing
knowledge of Delphi/ObjectPascal conventions.
Six QLoRA fine-tuning attempts on this base model either had zero effect or caused
regression. The system prompt alone produces better results than any fine-tuned variant.
## How to use
### With Ollama
```bash
# Download Modelfile.de2 from this repo, then create the model:
ollama create deepseek-coder:6.7b-instruct-de2 -f Modelfile.de2
# Run it:
ollama run deepseek-coder:6.7b-instruct-de2 "Write a Delphi function that reverses a string"
```
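Ollama also serves a local HTTP API (default `http://localhost:11434`), so the model can be called from Delphi itself. Below is an untested sketch using the RTL's `System.Net.HttpClient` and `System.JSON` units (Delphi 10.x or later); the endpoint and JSON fields follow Ollama's `/api/generate` API, while `AskModel` is just a hypothetical helper name. The system prompt is baked into the Modelfile, so only the user prompt is sent:

```pascal
uses
  System.SysUtils, System.Classes, System.JSON,
  System.Net.URLClient, System.Net.HttpClient;

function AskModel(const APrompt: string): string;
var
  Client: THTTPClient;
  Body: TJSONObject;
  Request: TStringStream;
  Response: IHTTPResponse;
  Parsed: TJSONObject;
begin
  Client := THTTPClient.Create;
  try
    // Build the /api/generate request body; stream=false returns one JSON object.
    Body := TJSONObject.Create;
    try
      Body.AddPair('model', 'deepseek-coder:6.7b-instruct-de2');
      Body.AddPair('prompt', APrompt);
      Body.AddPair('stream', TJSONBool.Create(False));
      Request := TStringStream.Create(Body.ToJSON, TEncoding.UTF8);
      try
        Response := Client.Post('http://localhost:11434/api/generate', Request, nil,
          [TNameValuePair.Create('Content-Type', 'application/json')]);
        // The completion text is in the "response" field.
        Parsed := TJSONObject.ParseJSONValue(Response.ContentAsString) as TJSONObject;
        try
          Result := Parsed.GetValue<string>('response');
        finally
          Parsed.Free;
        end;
      finally
        Request.Free;
      end;
    finally
      Body.Free;
    end;
  finally
    Client.Free;
  end;
end;
```

Note the nested `try/finally` blocks pairing every `Create` with a `Free` — the same convention the system prompt enforces in generated code.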
### What the system prompt teaches
- **Memory management**: no ARC or GC on Windows; free every object you create.
- **FATPIE naming**: `T` for types, `F` for fields, `A` for parameters, `P` for pointers, `I` for interfaces, `E` for exceptions.
- **Code style**: assign to `Result`, not the function name; lowercase `begin`, not `BEGIN`; `try/finally` for cleanup.
- **Platform**: Delphi on Windows, MSBuild for builds, `TDateTime` for dates.
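As an illustration of these conventions combined (hand-written here, not actual model output), an answer to the string-reversal prompt above might look like the following. Assumes `System.Classes` and `System.SysUtils` in the `uses` clause; `ReverseText` and `Demo` are names invented for this example:

```pascal
function ReverseText(const AText: string): string;
var
  I: Integer;
begin
  // Delphi strings are 1-indexed; assign to Result, not the function name.
  SetLength(Result, Length(AText));
  for I := 1 to Length(AText) do
    Result[Length(AText) - I + 1] := AText[I];
end;

procedure Demo;
var
  Lines: TStringList;
begin
  // No GC on Windows: pair every Create with a Free in try/finally.
  Lines := TStringList.Create;
  try
    Lines.Add(ReverseText('Delphi'));
  finally
    Lines.Free;
  end;
end;
```

The `A` prefix on the parameter, lowercase keywords, `Result` assignment, and `try/finally` cleanup are exactly the habits the system prompt is written to activate.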
## Model details
- **Base model**: [deepseek-ai/deepseek-coder-6.7b-instruct](https://huggingface.co/deepseek-ai/deepseek-coder-6.7b-instruct)
- **Weights**: Unmodified — no LoRA, no fine-tuning
- **Method**: System prompt engineering only
- **Author**: Warren Postma (warren.postma@gmail.com)
- **Project**: [WARP — local AI agent for Delphi/ObjectPascal](https://github.com/wpostma)
## Key finding
> A well-crafted system prompt on an unmodified 6.7B model beats QLoRA fine-tuning
> with 500-1000 curated instruction pairs. The model already knows Delphi — it just
> needs the right context to access that knowledge.