metadev7 committed · Commit 8806683 · verified · Parent(s): e677b51

Upload README.md with huggingface_hub
Files changed (1): README.md (+288 −288)
---
language:
  - en
license: llama2
library_name: transformers
tags:
  - code
  - code-generation
  - text-generation
  - web-development
  - react
  - nextjs
  - nodejs
  - python
  - typescript
  - metadev
  - fullstack
  - conversational
pipeline_tag: text-generation
model-index:
  - name: MetaDev-7B
    results:
      - task:
          type: text-generation
          name: Code Generation
        dataset:
          name: HumanEval
          type: openai_humaneval
        metrics:
          - type: pass@1
            value: 62.5
            name: pass@1
      - task:
          type: text-generation
          name: Code Generation
        dataset:
          name: MBPP
          type: mbpp
        metrics:
          - type: pass@1
            value: 58.3
            name: pass@1
---

<div align="center">
  <img src="logo.png" alt="MetaDev AI" width="180" height="180"/>

# MetaDev-7B

**Your Intelligent Coding Companion for Modern Web Development**

[Website](https://metadev.c) | [GitHub](https://github.com/metadev-xi/metadev7) | [Twitter](https://twitter.com/metadevxi)

🤗 [Hugging Face](https://huggingface.co/metadev7/metadev-7b) | 📄 License: Llama 2 Community
</div>

---

## Meet MetaDev-7B

Today, we release **MetaDev-7B** to the open-source community. This is more than another code model: it is a specialized coding companion built from the ground up for modern web development.

MetaDev was built to challenge the assumption that high-performance code assistants must remain behind closed doors. The model is optimized specifically for **React, Next.js, Node.js, TypeScript**, and full-stack web development, from building responsive UI components to architecting secure REST APIs.

We believe powerful AI tools should be accessible to everyone. MetaDev-7B is our commitment to that future.

---

## How to Use

### Installation

```bash
pip install metadev-ai
```

### Quick Start

```python
from metadev import MetaDevModel

# Load the model
model = MetaDevModel.from_pretrained("metadev7/metadev-7b")

# Generate code
response = model.generate("Create a React login form with validation")
print(response)
```

### Command Line Interface

```bash
# Interactive chat mode
metadev chat

# Generate code from a prompt
metadev generate "Build a REST API with authentication"

# Review existing code
metadev review app.py

# Security audit
metadev audit auth.py --mode security
```

### API Server

```bash
# Start a local API server
metadev serve --port 8000
```

---

## Benchmarks

MetaDev-7B delivers strong performance on core coding benchmarks, with particular strength in web development scenarios. All scores are percentages; HumanEval and MBPP report pass@1.

| Benchmark | MetaDev-7B | CodeLlama-7B | DeepSeek-Coder-6.7B | StarCoder2-7B |
|-----------|------------|--------------|---------------------|---------------|
| HumanEval | **62.5** | 53.7 | 60.6 | 57.2 |
| MBPP | **58.3** | 52.1 | 55.2 | 54.8 |
| Web Dev Benchmark | **78.9** | 45.2 | 52.3 | 48.7 |
| Security Awareness | **85.2** | 42.1 | 51.8 | 45.3 |

### Specialized Performance

We evaluated MetaDev-7B on domain-specific tasks critical to web development:

| Task | MetaDev-7B | CodeLlama-7B | DeepSeek-Coder |
|------|------------|--------------|----------------|
| React Component Generation | **82.0%** | 58.3% | 65.2% |
| API Endpoint Creation | **76.0%** | 52.1% | 61.8% |
| TypeScript Type Inference | **79.5%** | 48.7% | 68.3% |
| Security Best Practices | **85.0%** | 41.2% | 52.6% |
| Test Generation | **71.0%** | 45.8% | 58.2% |
| Documentation Quality | **74.3%** | 52.4% | 59.1% |
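For reference, pass@1 as used on HumanEval and MBPP is the standard unbiased estimator from the HumanEval evaluation protocol: with `c` passing completions out of `n` sampled per problem, it reduces to the passing fraction when a single completion is drawn.

```python
from math import comb

def pass_at_k(n: int, c: int, k: int) -> float:
    """Unbiased pass@k: probability that at least one of k samples
    passes, given that c of the n generated samples passed."""
    if n - c < k:
        return 1.0  # every size-k draw must contain a passing sample
    return 1.0 - comb(n - c, k) / comb(n, k)

# With k=1 this is simply c/n:
print(pass_at_k(200, 125, 1))  # 0.625, i.e. a 62.5 pass@1 score
```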

---

## Features

### Personality Modes

Switch between specialized modes for different tasks:

| Mode | Description | Use Case |
|------|-------------|----------|
| `default` | Balanced coding companion | General development |
| `teaching` | Patient instructor with explanations | Learning & onboarding |
| `security` | Security-first OWASP advisor | Security audits |
| `review` | Constructive code reviewer | Code reviews |
| `debugging` | Systematic problem solver | Bug fixing |
| `architect` | System design expert | Architecture decisions |

```python
# Switch modes at load time
model = MetaDevModel.from_pretrained("metadev7/metadev-7b", mode="teaching")
```

### Framework Expertise

- **Frontend**: React, Next.js, Vue, Svelte, TypeScript
- **Backend**: Node.js, Express, FastAPI, Django
- **Database**: PostgreSQL, MongoDB, Prisma, Drizzle
- **DevOps**: Docker, GitHub Actions, Vercel, AWS
- **Testing**: Jest, Vitest, Pytest, Playwright

---

## Model Details

| Specification | Value |
|---------------|-------|
| Parameters | 7B |
| Architecture | LlamaForCausalLM |
| Context Length | 16,384 tokens |
| Precision | bfloat16 |
| Base Model | CodeLlama-7B |
| Fine-tuning | QLoRA (4-bit) |
| Training Data | 50K+ curated examples |
| Training Duration | 72 hours on 4x A100 |

### Hardware Requirements

| Precision | VRAM | RAM |
|-----------|------|-----|
| FP16 | 14GB | 16GB |
| 8-bit | 8GB | 12GB |
| 4-bit | 4GB | 8GB |
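The VRAM column is consistent with a weights-only back-of-envelope estimate (parameter count × bits per weight); the table's figures add a margin for activations and the KV cache:

```python
def weight_gb(n_params: float, bits: int) -> float:
    """Memory needed for the model weights alone, in gigabytes."""
    return n_params * bits / 8 / 1e9

# 7B parameters at each precision in the table above:
for bits in (16, 8, 4):
    print(f"{bits}-bit: ~{weight_gb(7e9, bits):.1f} GB")
# 16-bit: ~14.0 GB, 8-bit: ~7.0 GB, 4-bit: ~3.5 GB
```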

---

## Local Deployment

### Using Transformers

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("metadev7/metadev-7b")
model = AutoModelForCausalLM.from_pretrained(
    "metadev7/metadev-7b",
    torch_dtype="auto",
    device_map="auto",
)

# Move inputs to the model's device before generating
inputs = tokenizer("Create a React button component", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=512)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
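To hit the 4-bit footprint listed under Hardware Requirements, the model can be loaded through Transformers with a `bitsandbytes` quantization config. This is a sketch: the card does not state the quantization scheme, so NF4 with bfloat16 compute is assumed here.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

# 4-bit NF4 quantization (assumed settings, not stated in the card)
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)

tokenizer = AutoTokenizer.from_pretrained("metadev7/metadev-7b")
model = AutoModelForCausalLM.from_pretrained(
    "metadev7/metadev-7b",
    quantization_config=bnb_config,
    device_map="auto",
)
```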

### Using vLLM

```bash
python -m vllm.entrypoints.openai.api_server \
    --model metadev7/metadev-7b \
    --dtype bfloat16
```
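The vLLM server above exposes an OpenAI-compatible HTTP API on port 8000 by default. A minimal stdlib client sketch (the prompt is illustrative):

```python
import json
from urllib import request

payload = {
    "model": "metadev7/metadev-7b",
    "prompt": "Create a TypeScript interface for a User record.",
    "max_tokens": 256,
    "temperature": 0.2,
}
req = request.Request(
    "http://localhost:8000/v1/completions",
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)

# Requires the server from the command above to be running:
# with request.urlopen(req) as resp:
#     print(json.load(resp)["choices"][0]["text"])
```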

### Using Docker

```bash
docker pull metadev7/metadev-7b
docker run -p 8000:8000 --gpus all metadev7/metadev-7b
```

---

## Training

### Data Sources

- Curated GitHub repositories (⭐100+)
- Official framework documentation
- Stack Overflow (verified answers)
- Security-focused code reviews
- Production codebases (anonymized)

### Training Configuration

- **Method**: QLoRA with 4-bit quantization
- **LoRA Rank**: 64
- **Learning Rate**: 2e-4
- **Batch Size**: 4 (gradient accumulation: 4)
- **Epochs**: 3
- **Optimizer**: AdamW with cosine scheduler
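The stated hyperparameters, gathered into one config dict for reference. Keys follow common transformers/peft argument naming; `lora_alpha` and `target_modules` are not given in this card, so the values below are assumptions flagged in comments.

```python
qlora_config = {
    "load_in_4bit": True,               # QLoRA: 4-bit quantized base model
    "lora_r": 64,                       # LoRA rank, as stated above
    "lora_alpha": 16,                   # assumed; not stated in the card
    "target_modules": ["q_proj", "k_proj", "v_proj", "o_proj"],  # assumed
    "learning_rate": 2e-4,
    "per_device_train_batch_size": 4,
    "gradient_accumulation_steps": 4,
    "num_train_epochs": 3,
    "optim": "adamw_torch",
    "lr_scheduler_type": "cosine",
}

# Effective batch size seen by the optimizer:
effective_batch = (qlora_config["per_device_train_batch_size"]
                   * qlora_config["gradient_accumulation_steps"])
print(effective_batch)  # 16
```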

---

## Limitations

- Specialized for web-development stacks (React, Node.js, Python, TypeScript); output quality drops outside them
- May need extra guidance for niche frameworks
- Not optimized for mobile (Swift/Kotlin) or game development
- Knowledge cutoff: October 2024

---

## License

MetaDev-7B is released under the **Llama 2 Community License**.

- ✅ Commercial use allowed
- ✅ Modification allowed
- ✅ Distribution allowed
- ⚠️ Must include the original license
- ⚠️ Services exceeding 700M monthly active users require a separate license from Meta

---

## Citation

```bibtex
@software{metadev2024,
  title  = {MetaDev-7B: A Specialized Code Generation Model for Web Development},
  author = {MetaDev AI Team},
  year   = {2024},
  url    = {https://huggingface.co/metadev7/metadev-7b}
}
```

---

## Contact

- **Website**: [metadev.c](https://metadev.c)
- **GitHub**: [github.com/metadev-xi/metadev7](https://github.com/metadev-xi/metadev7)
- **Twitter**: [@metadevxi](https://twitter.com/metadevxi)
- **Email**: contact@metadev.c