raihan-js committed on
Commit
33dee7e
·
verified ·
1 Parent(s): 6872ee0

Update README with ORCH License and training details

Files changed (1)
  1. README.md +108 -48
README.md CHANGED
@@ -1,92 +1,152 @@
  ---
- license: apache-2.0
  language:
- - en
  library_name: transformers
  tags:
- - code
- - next.js
- - full-stack
- - code-generation
- - fine-tuned
  base_model: deepseek-ai/deepseek-coder-6.7b-instruct
  pipeline_tag: text-generation
  ---

- # ORCH-7B: Autonomous Full-Stack Code Generation

- ORCH-7B is a fine-tuned code generation model specialized in generating complete, production-ready Next.js applications from natural language descriptions.

- ## Model Details

- - **Base Model**: DeepSeek Coder 6.7B Instruct
- - **Fine-tuning Method**: QLoRA (4-bit quantization + LoRA adapters)
- - **Training Data**: 44,000+ Next.js project examples
- - **Context Length**: 4K tokens (Phase 1)
- - **Output Format**: Complete project files with structured markers

- ## Capabilities

- - Generate complete Next.js 14+ applications
- - Full-stack projects with API routes
- - Database integrations (Prisma, Drizzle)
- - Authentication systems
- - UI components with Tailwind CSS
- - TypeScript support
 
  ## Usage

  ```python
  from transformers import AutoModelForCausalLM, AutoTokenizer

- model = AutoModelForCausalLM.from_pretrained("raihan-js/orch-7b")
- tokenizer = AutoTokenizer.from_pretrained("raihan-js/orch-7b")

- prompt = """Generate a complete Next.js full-stack application based on the following requirements.

- Create an e-commerce store with product catalog and shopping cart.

- Generate all necessary files for a production-ready application."""

- inputs = tokenizer(prompt, return_tensors="pt")
- outputs = model.generate(**inputs, max_new_tokens=4096, temperature=0.7)
- print(tokenizer.decode(outputs[0]))
  ```

  ## Output Format

- The model generates files in a structured format:

  ```
- <|file|>path/to/file.tsx<|end_path|>
- // File content here
- <|end_file|>
  ```

- ## Training

- - **Hardware**: RunPod A100
- - **Training Time**: ~21.5 hours
- - **Final Loss**: 0.199
- - **Epochs**: 1

- ## Limitations

- - Optimized for Next.js projects specifically
- - Best results with clear, detailed prompts
- - May require post-processing for very large projects

- ## License

- Apache 2.0

  ## Citation

  ```bibtex
  @misc{orch7b2025,
-   title={ORCH-7B: Autonomous Full-Stack Code Generation},
-   author={Raihan},
    year={2025},
-   url={https://huggingface.co/raihan-js/orch-7b}
  }
  ```
  ---
+ license: other
+ license_name: orch-license
+ license_link: LICENSE
  language:
+ - en
  library_name: transformers
  tags:
+ - code-generation
+ - nextjs
+ - typescript
+ - full-stack
+ - qlora
+ - deepseek
  base_model: deepseek-ai/deepseek-coder-6.7b-instruct
  pipeline_tag: text-generation
  ---

+ <div align="center">
+ <img src="https://huggingface.co/spaces/raihan-js/orch-studio/resolve/main/logo.png" alt="ORCH" width="120" height="120" style="border-radius: 24px;">

+ # ORCH-7B

+ **Orchestrated Recursive Code Hierarchy**

+ *Autonomous Next.js Code Generation Model*

+ [![Space](https://img.shields.io/badge/Demo-ORCH%20Studio-D4A574?style=for-the-badge)](https://huggingface.co/spaces/raihan-js/orch-studio)
+ [![License](https://img.shields.io/badge/License-ORCH%20v1.0-A67C52?style=for-the-badge)](LICENSE)
+ </div>

+ ---
+
+ ## Model Description
+
+ ORCH-7B is a QLoRA fine-tuned model that generates complete, production-ready Next.js applications from natural-language prompts.
+
+ ## Training Details
+
+ | Specification | Value |
+ |--------------|-------|
+ | **Base Model** | DeepSeek Coder 6.7B Instruct |
+ | **Fine-tuning Method** | QLoRA (4-bit quantization + LoRA adapters) |
+ | **Training Hardware** | NVIDIA A100 GPU |
+ | **Training Duration** | 43 hours |
+ | **Training Steps** | 5,238 steps |
+ | **Context Length** | 4K tokens |
+ | **Model Size** | 6.7B parameters |
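
The QLoRA recipe named in the table can be sketched with the `transformers` and `peft` libraries. This is a hypothetical reconstruction for illustration only: the LoRA rank, alpha, and target modules below are assumed values that the model card does not publish.

```python
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model

# 4-bit NF4 quantization for the frozen base weights (the "Q" in QLoRA).
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)

base = AutoModelForCausalLM.from_pretrained(
    "deepseek-ai/deepseek-coder-6.7b-instruct",
    quantization_config=bnb_config,
    device_map="auto",
)

# LoRA adapters are the only trainable parameters.
# r, lora_alpha, and target_modules are illustrative, not the published values.
lora_config = LoraConfig(
    r=16,
    lora_alpha=32,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    task_type="CAUSAL_LM",
)
model = get_peft_model(base, lora_config)
model.print_trainable_parameters()  # prints trainable vs. total parameter counts
```

With this setup only the adapter weights receive gradients, which is what makes a single A100 sufficient for a 6.7B-parameter fine-tune.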
+
+ ## Specialization
+
+ - **Framework**: Next.js 14+ (App Router)
+ - **Language**: TypeScript
+ - **Styling**: Tailwind CSS
+ - **Database**: Prisma ORM patterns
+ - **Auth**: NextAuth.js patterns
+ - **Components**: shadcn/ui compatible

  ## Usage

+ ### With Transformers
+
  ```python
  from transformers import AutoModelForCausalLM, AutoTokenizer
+ import torch

+ model_id = "orch-ai/ORCH-7B"

+ tokenizer = AutoTokenizer.from_pretrained(model_id)
+ model = AutoModelForCausalLM.from_pretrained(
+     model_id,
+     torch_dtype=torch.float16,
+     device_map="auto",
+ )

+ prompt = """### Instruction:
+ Create a Next.js login page with email and password fields, validation, and error handling.

+ ### Response:
+ """

+ inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
+ outputs = model.generate(**inputs, max_new_tokens=1024, do_sample=True, temperature=0.7)
+ print(tokenizer.decode(outputs[0], skip_special_tokens=True))
  ```

+ ### Try it Online
+
+ Use [ORCH Studio](https://huggingface.co/spaces/raihan-js/orch-studio) to generate complete projects without writing any code.
+

  ## Output Format

+ The model generates code in markdown format with file paths:

+ ````markdown
+ ```typescript app/page.tsx
+ export default function Home() {
+   return <div>Hello World</div>
+ }
+ ```
+ ````
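
Because each generated file arrives as a fenced block tagged with its path, the raw output can be split into files with a few lines of Python. A minimal sketch; the regex and helper name are ours, not part of any official ORCH tooling:

```python
import re

# Matches blocks of the form: <fence><language> <path>\n<code><fence>
FILE_BLOCK = re.compile(r"```[\w-]+[ \t]+(\S+)\n(.*?)```", re.DOTALL)

def extract_files(output: str) -> dict[str, str]:
    """Map each generated file path to its source code."""
    return {path: code for path, code in FILE_BLOCK.findall(output)}

sample = (
    "```typescript app/page.tsx\n"
    "export default function Home() {\n"
    "  return <div>Hello World</div>\n"
    "}\n"
    "```"
)
files = extract_files(sample)
print(list(files))  # ['app/page.tsx']
```

Each path/code pair can then be written to disk to materialize the generated project.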

+ ## Hardware Requirements

+ | Precision | VRAM Required |
+ |-----------|---------------|
+ | FP16 | ~14 GB |
+ | INT8 | ~8 GB |
+ | INT4 | ~5 GB |
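
These figures follow the usual weights-only rule of thumb: parameter count times bytes per parameter, plus headroom for activations and the KV cache. A quick sanity check, where the flat 1.5 GB overhead allowance is our assumption rather than a measured value:

```python
def estimate_vram_gb(n_params: float, bits_per_param: int, overhead_gb: float = 1.5) -> float:
    """Weights-only VRAM estimate plus a flat overhead allowance."""
    weights_gb = n_params * bits_per_param / 8 / 1e9  # bits -> bytes -> GB
    return round(weights_gb + overhead_gb, 1)

# 6.7B parameters at each precision in the table above.
for label, bits in [("FP16", 16), ("INT8", 8), ("INT4", 4)]:
    print(label, estimate_vram_gb(6.7e9, bits), "GB")
```

The estimates land close to the table's ~14/~8/~5 GB; actual usage also grows with context length.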

+ ## License

+ This model is released under the [ORCH License v1.0](LICENSE).

+ **Permitted Uses:**
+ - Commercial applications and services
+ - Research and academic purposes
+ - Personal projects
+ - Building products and services
+ - Creating derivative models
+
+ **Prohibited Uses:**
+ - Generating content that violates applicable laws
+ - Creating malware or malicious code
+ - Harassment, abuse, or harm to individuals
+ - Deceptive practices or fraud

+ ## Links
+
+ - [ORCH Studio (Demo)](https://huggingface.co/spaces/raihan-js/orch-studio)
+ - [ORCH AI Organization](https://huggingface.co/orch-ai)
+ - [raihan-js](https://huggingface.co/raihan-js)

  ## Citation

  ```bibtex
  @misc{orch7b2025,
+   title={ORCH-7B: Autonomous Next.js Code Generation},
+   author={ORCH Team},
    year={2025},
+   publisher={Hugging Face},
+   url={https://huggingface.co/orch-ai/ORCH-7B}
  }
  ```
+
+ ---
+
+ <div align="center">
+ <strong>ORCH</strong> - Orchestrated Recursive Code Hierarchy
+ <br>
+ <em>Building the future of autonomous code generation</em>
+ </div>