Add model card for InCoder-32B

#1
by nielsr - opened
Files changed (1)
  1. README.md +80 -0
README.md ADDED
@@ -0,0 +1,80 @@
+ ---
+ pipeline_tag: text-generation
+ library_name: transformers
+ tags:
+ - code
+ - industrial-ai
+ - code-generation
+ ---
+
+ # InCoder-32B: Industrial Code Foundation Model
+
+ **InCoder-32B** (Industrial-Coder-32B) is the first 32B-parameter code foundation model purpose-built for industrial code intelligence. While general code LLMs excel at standard programming tasks, they often struggle with hardware semantics, specialized language constructs, and strict resource constraints.
+
+ InCoder-32B is designed to unify code intelligence across:
+ - **Chip Design** (Verilog / RTL)
+ - **GPU Kernel Optimization** (CUDA / Triton)
+ - **Embedded Systems** (ARM Cortex-M, STM32)
+ - **Compiler Optimization** (x86-64 assembly, LLVM)
+ - **3D Modeling** (CAD/CAM via CadQuery / OpenCascade)
+
+ The model supports a native long-context window of up to **128K tokens**.
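+
+ Industrial inputs such as multi-file RTL hierarchies or kernel suites can be large, so it can help to check a prompt against the context window before generation. A minimal sketch, assuming the advertised 128K window means 128 × 1024 tokens; the RTL file name is hypothetical:
+
+ ```python
+ from transformers import AutoTokenizer
+
+ MAX_CONTEXT = 128 * 1024  # assumed size of the 128K-token native context window
+
+ tokenizer = AutoTokenizer.from_pretrained(
+     "Multilingual-Multimodal-NLP/IndustrialCoder", trust_remote_code=True
+ )
+
+ def fits_in_context(prompt: str, reserve_for_output: int = 2048) -> bool:
+     """Return True if the tokenized prompt leaves room for generation."""
+     n_tokens = len(tokenizer(prompt)["input_ids"])
+     return n_tokens + reserve_for_output <= MAX_CONTEXT
+
+ with open("top_module.v") as f:  # hypothetical RTL source file
+     print(fits_in_context(f.read()))
+ ```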
+
+ ## Resources
+ - **Paper:** [InCoder-32B: Code Foundation Model for Industrial Scenarios](https://huggingface.co/papers/2603.16790)
+ - **Repository:** [GitHub - Industrial-Coder](https://github.com/CSJianYang/Industrial-Coder)
+ - **Project Page:** [IndustrialCoder Project Page](https://huggingface.co/Multilingual-Multimodal-NLP/IndustrialCoder)
+
+ ## Quickstart
+
+ To use InCoder-32B with the `transformers` library, you can follow the snippet below. Note that `trust_remote_code=True` is required to load the custom model architecture.
+
+ ```python
+ import torch
+ from transformers import AutoModelForCausalLM, AutoTokenizer
+
+ model_name = "Multilingual-Multimodal-NLP/IndustrialCoder"
+
+ # Load the tokenizer and model; device_map="auto" shards the 32B weights
+ # across available GPUs, and torch_dtype="auto" keeps the checkpoint dtype.
+ tokenizer = AutoTokenizer.from_pretrained(model_name, trust_remote_code=True)
+ model = AutoModelForCausalLM.from_pretrained(
+     model_name,
+     torch_dtype="auto",
+     device_map="auto",
+     trust_remote_code=True,
+ )
+
+ # Build a chat-formatted prompt and move the input tensors to the model's device.
+ messages = [{"role": "user", "content": "Optimize this CUDA kernel for better memory coalescing."}]
+ text = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
+ inputs = tokenizer([text], return_tensors="pt").to(model.device)
+
+ with torch.no_grad():
+     # do_sample=True is needed for temperature/top_p/top_k to take effect.
+     out = model.generate(**inputs, do_sample=True, max_new_tokens=2048, temperature=0.6, top_p=0.85, top_k=20)
+
+ # Decode only the newly generated tokens, skipping the echoed prompt.
+ print(tokenizer.decode(out[0][inputs["input_ids"].shape[-1]:], skip_special_tokens=True))
+ ```
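+
+ For interactive use, tokens can be printed as they are generated instead of waiting for the full completion. A minimal sketch reusing `model`, `tokenizer`, and `inputs` from above with the `transformers` built-in `TextStreamer`:
+
+ ```python
+ from transformers import TextStreamer
+
+ # skip_prompt=True suppresses the echoed input; extra kwargs go to decode().
+ streamer = TextStreamer(tokenizer, skip_prompt=True, skip_special_tokens=True)
+
+ with torch.no_grad():
+     model.generate(
+         **inputs,
+         streamer=streamer,  # prints decoded text to stdout as it arrives
+         do_sample=True,
+         max_new_tokens=2048,
+         temperature=0.6,
+         top_p=0.85,
+         top_k=20,
+     )
+ ```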
+
+ ## Performance
+
+ ### Industrial Code Benchmarks
+ InCoder-32B establishes strong open-source baselines across specialized industrial domains, surpassing proprietary models such as Claude-Sonnet-4.6 on several of the tasks below.
+
+ | Domain | Benchmark | InCoder-32B | Claude-Sonnet-4.6 |
+ |---|---|:---:|:---:|
+ | **Chip Design** | VeriScope Score | 80.7 | **87.7** |
+ | **GPU Optim.** | KernelBench L1/L2/L3 | **22.2 / 36.0 / 14.0** | 11.1 / 28.0 / 2.0 |
+ | **3D Modeling** | CAD-Coder Compile (%) | **82.0** | 77.0 |
+ | **3D Modeling** | CAD-Coder IoU | **53.5** | 32.4 |
+
+ ## Citation
+ If you find InCoder-32B useful in your research, please cite:
+ ```bibtex
+ @article{yang2025incoder,
+   title={InCoder-32B: Code Foundation Model for Industrial Scenarios},
+   author={Yang, Jian and Zhang, Wei and Wu, Jiajun and Cheng, Junhang and Guo, Shawn and Wang, Haowen and others},
+   journal={arXiv preprint arXiv:2603.16790},
+   year={2025}
+ }
+ ```
+
+ ## Disclaimer
+ The model may generate incorrect or unsafe code. Always review and test outputs in a sandboxed environment before production use. Industrial code (RTL, embedded firmware, GPU kernels) requires expert human review before deployment in physical systems or hardware synthesis.
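+
+ As one lightweight precaution when smoke-testing generated Python snippets (a sketch only; it does not isolate filesystem or network access and is no substitute for a real sandbox such as a container or VM):
+
+ ```python
+ import subprocess
+ import sys
+
+ def run_untrusted(code: str, timeout_s: int = 10) -> subprocess.CompletedProcess:
+     """Run a generated snippet in a separate interpreter with a hard timeout."""
+     return subprocess.run(
+         [sys.executable, "-I", "-c", code],  # -I: isolated mode, ignores env and user site
+         capture_output=True,
+         text=True,
+         timeout=timeout_s,
+     )
+
+ result = run_untrusted("print('hello from generated code')")
+ print(result.stdout, result.stderr)
+ ```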