---
license: apache-2.0
language:
- en
base_model:
- janhq/Jan-code-4b
pipeline_tag: text-generation
library_name: transformers
tags:
- agent
---
# Jan-Code-4B: a small code-tuned model

[![GitHub](https://img.shields.io/badge/GitHub-Repository-blue?logo=github)](https://github.com/janhq/jan)
[![License](https://img.shields.io/badge/License-Apache%202.0-yellow)](https://opensource.org/licenses/Apache-2.0)
[![Jan App](https://img.shields.io/badge/Powered%20by-Jan%20App-purple?style=flat&logo=android)](https://jan.ai/)

![image](https://cdn-uploads.huggingface.co/production/uploads/657a81129ea9d52e5cbd67f7/U1kMFKtd-XU1ukh0Ff9Ru.png)

## Overview

**Jan-Code-4B** is a **code-tuned** model built on top of [Jan-v3-4B-base-instruct](https://huggingface.co/janhq/Jan-v3-4B-base-instruct). It’s designed to be a practical coding model you can run locally and iterate on quickly—useful for everyday code tasks and as a lightweight “worker” model in agentic workflows.

Compared to larger coding models, Jan-Code focuses on handling **well-scoped subtasks** reliably while keeping latency and compute requirements small.

## Intended Use

* **Lightweight coding assistant** for generation, editing, refactoring, and debugging
* **A small, fast worker model** for agent setups (e.g., as a sub-agent that produces patches/tests while a larger model plans)
* **A drop-in replacement for the Haiku-class model** in a Claude Code setup

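As a sketch of the Claude Code swap, Claude Code can be pointed at an alternative backend through environment variables. The values below are illustrative: Claude Code expects an Anthropic-style Messages API, so routing it to a local vLLM/llama.cpp server typically requires a translation proxy (e.g. LiteLLM) in between, and the port and model name here are assumptions, not part of this model card.

```shell
# Hypothetical wiring: a proxy on port 4000 translates Anthropic Messages API
# calls to the local OpenAI-compatible server that hosts Jan-Code-4B.
export ANTHROPIC_BASE_URL="http://localhost:4000"
# Use the local model for the small/fast (Haiku-class) slot.
export ANTHROPIC_SMALL_FAST_MODEL="janhq/Jan-code-4b"
claude
```

With this setup, Claude Code's background and subtask calls go to the local model while the main planning model is unchanged.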

## Quick Start

### Integration with Jan Apps

Jan-Code is optimized for direct integration with [Jan Desktop](https://jan.ai/); select the model in the app to start using it.


### Local Deployment

**Using vLLM:**
```bash
vllm serve janhq/Jan-code-4b \
    --host 0.0.0.0 \
    --port 1234 \
    --enable-auto-tool-choice \
    --tool-call-parser hermes
```

**Using llama.cpp:**
```bash
llama-server --model Jan-code-4b-Q8_0.gguf \
    --host 0.0.0.0 \
    --port 1234 \
    --jinja \
    --no-context-shift
```

### Recommended Parameters
For optimal performance in agentic and general tasks, we recommend the following inference parameters:
```yaml
temperature: 0.7
top_p: 0.8
top_k: 20
```
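Once either server above is running, it can be queried over the OpenAI-compatible chat completions endpoint. A minimal sketch, assuming the server is on port 1234 as in the commands above (note that `top_k` is accepted by both vLLM and llama.cpp but is not part of the strict OpenAI API):

```python
import json
import urllib.request

# Request body using the recommended sampling parameters.
payload = {
    "model": "janhq/Jan-code-4b",
    "messages": [
        {"role": "user", "content": "Write a Python function that reverses a string."}
    ],
    "temperature": 0.7,
    "top_p": 0.8,
    "top_k": 20,
}

req = urllib.request.Request(
    "http://localhost:1234/v1/chat/completions",
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)

# Uncomment once the server is running:
# with urllib.request.urlopen(req) as resp:
#     print(json.load(resp)["choices"][0]["message"]["content"])
```

Any OpenAI-compatible client (e.g. the `openai` Python package with a custom `base_url`) works the same way against this endpoint.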

## 🤝 Community & Support

- **Discussions**: [Hugging Face Community](https://huggingface.co/janhq/Jan-code/discussions) 
- **Jan App**: Learn more about the Jan App at [jan.ai](https://jan.ai/)

## 📄 Citation
```bibtex
Updated Soon
```