---
base_model: Qwen/Qwen2.5-Coder-14B
library_name: peft
pipeline_tag: text-generation
license: apache-2.0
tags:
- code
- coding-assistant
- lora
- refactor
- bug-fix
- optimization
- async
- concurrency
- security
- logging
- networking
- transformers
---

# EdgePulse Coder 14B (LoRA)

**EdgePulse Coder 14B** is a production-grade coding assistant fine-tuned using LoRA on top of **Qwen2.5-Coder-14B**.  
It is designed to handle real-world software engineering workflows with high reliability and correctness.

---

## Model Details

### Model Description

EdgePulse Coder 14B focuses on **practical developer tasks**, trained on a large, strictly validated dataset covering:

- Bug fixing
- Code explanation
- Refactoring
- Optimization
- Async & concurrency correction
- Logging & observability
- Security & defensive coding
- Networking & I/O handling
- Multi-file context reasoning
- Test generation and impact analysis

The model is optimized for **IDE usage**, **CLI workflows**, and **Cursor-like streaming environments**.

---

- **Developed by:** EdgePulseAI  
- **Shared by:** EdgePulseAI  
- **Model type:** Large Language Model (Code-focused)  
- **Language(s):** Python, JavaScript, TypeScript, and Bash (primary); general programming concepts  
- **License:** Apache-2.0  
- **Finetuned from:** Qwen/Qwen2.5-Coder-14B  

---

## Model Sources

- **Base Model:** https://huggingface.co/Qwen/Qwen2.5-Coder-14B  
- **Website:** https://EdgePulseAi.com  

---

## Uses

### Direct Use

EdgePulse Coder 14B can be used directly for:

- Code explanation
- Bug fixing
- Refactoring existing code
- Generating tests
- Improving logging and error handling
- Fixing async / concurrency bugs
- Secure coding suggestions
- Network & I/O robustness

### Downstream Use

- IDE assistants (VS Code / Cursor-style tools)
- CI/CD automation
- Code review bots
- Developer copilots
- Internal engineering tools

### Out-of-Scope Use

- Medical or legal advice
- Autonomous system control
- High-risk decision making without human review

---

## Bias, Risks, and Limitations

- The model may occasionally produce syntactically correct but logically incorrect code.
- Security-sensitive code should always be reviewed by humans.
- Performance depends on correct prompt framing and context size.

### Recommendations

- Use human review for production deployments.
- Combine with static analysis and testing tools.
- Prefer structured prompts for multi-file tasks.
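
Following the last recommendation, here is a minimal sketch of what a structured multi-file prompt could look like. The file labels, fencing convention, and the `build_multifile_prompt` helper are illustrative assumptions, not a format the model requires:

```python
# Hypothetical helper that frames a multi-file task as one structured prompt.
def build_multifile_prompt(instruction: str, files: dict[str, str]) -> str:
    """Concatenate each file under a labeled fence so the model can
    tell the files apart, then append the task instruction at the end."""
    parts = []
    for path, source in files.items():
        parts.append(f"### File: {path}\n```\n{source}\n```")
    parts.append(f"### Task\n{instruction}")
    return "\n\n".join(parts)

prompt = build_multifile_prompt(
    "Rename the function `fetch` to `fetch_user` across both files.",
    {
        "api/client.py": "def fetch(uid):\n    return get('/users/' + uid)",
        "api/views.py": "from client import fetch",
    },
)
print(prompt)
```

Putting the instruction after the code keeps the request adjacent to the point of generation, which tends to work well in streaming IDE environments.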

---

## How to Get Started with the Model

```python
from transformers import AutoTokenizer, AutoModelForCausalLM
from peft import PeftModel

base_model = "Qwen/Qwen2.5-Coder-14B"
adapter_model = "edgepulse-ai/EdgePulse-Coder-14B-LoRA"

tokenizer = AutoTokenizer.from_pretrained(base_model)

# Load the base model; device_map="auto" places it across available devices.
model = AutoModelForCausalLM.from_pretrained(
    base_model,
    device_map="auto",
    torch_dtype="auto",
)

# Attach the LoRA adapter weights on top of the base model.
model = PeftModel.from_pretrained(model, adapter_model)
model.eval()

prompt = "Fix this bug:\n\ndef add(a,b): return a-b"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)

output = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```