---
pipeline_tag: text-generation
library_name: transformers
tags:
- code
- industrial-ai
- code-generation
---

# InCoder-32B: Industrial Code Foundation Model

**InCoder-32B** (Industrial-Coder-32B) is the first 32B-parameter code foundation model purpose-built for industrial code intelligence. While general code LLMs excel at standard programming tasks, they often struggle with hardware semantics, specialized language constructs, and strict resource constraints. InCoder-32B is designed to unify code intelligence across:

- **Chip Design** (Verilog / RTL)
- **GPU Kernel Optimization** (CUDA / Triton)
- **Embedded Systems** (ARM Cortex-M, STM32)
- **Compiler Optimization** (x86-64 assembly, LLVM)
- **3D Modeling** (CAD/CAM via CadQuery / OpenCascade)

The model supports a native long-context window of up to **128K tokens**.

## Resources

- **Paper:** [InCoder-32B: Code Foundation Model for Industrial Scenarios](https://huggingface.co/papers/2603.16790)
- **Repository:** [GitHub - Industrial-Coder](https://github.com/CSJianYang/Industrial-Coder)
- **Project Page:** [IndustrialCoder Project Page](https://huggingface.co/Multilingual-Multimodal-NLP/IndustrialCoder)

## Quickstart

The snippet below shows basic chat-style generation with the `transformers` library. Note that `trust_remote_code=True` is required to load the custom model architecture.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "Multilingual-Multimodal-NLP/IndustrialCoder"

tokenizer = AutoTokenizer.from_pretrained(model_name, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype="auto",   # load in the checkpoint's native precision
    device_map="auto",    # shard across available GPUs
    trust_remote_code=True,
)

messages = [{"role": "user", "content": "Optimize this CUDA kernel for better memory coalescing."}]
text = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
inputs = tokenizer([text], return_tensors="pt").to(model.device)

with torch.no_grad():
    out = model.generate(
        **inputs,
        max_new_tokens=2048,
        do_sample=True,   # required for temperature/top_p/top_k to take effect
        temperature=0.6,
        top_p=0.85,
        top_k=20,
    )

# Decode only the newly generated tokens, skipping the prompt.
print(tokenizer.decode(out[0][inputs["input_ids"].shape[-1]:], skip_special_tokens=True))
```
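For interactive use, you can stream tokens as they are generated. The following is a minimal sketch using the `TextStreamer` utility from `transformers`; it reuses the `model`, `tokenizer`, and `inputs` objects from the Quickstart above, and the sampling settings simply mirror that snippet rather than being values prescribed by this card.

```python
from transformers import TextStreamer

# Reuses `model`, `tokenizer`, and `inputs` from the Quickstart snippet above.
streamer = TextStreamer(tokenizer, skip_prompt=True, skip_special_tokens=True)

with torch.no_grad():
    model.generate(
        **inputs,
        max_new_tokens=2048,
        do_sample=True,
        temperature=0.6,
        top_p=0.85,
        top_k=20,
        streamer=streamer,  # prints decoded tokens to stdout as they arrive
    )
```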
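The long-context window also makes whole-file review practical. Below is a minimal sketch along the same lines, again reusing the Quickstart setup; the file name `cpu_core.v` is a hypothetical placeholder, and you should verify that the tokenized prompt fits within both the 128K window and your available GPU memory.

```python
# Hypothetical input file; any large Verilog/CUDA/firmware source works the same way.
with open("cpu_core.v") as f:
    rtl = f.read()

messages = [{
    "role": "user",
    "content": f"Review this Verilog module for synthesis issues:\n\n{rtl}",
}]
text = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
inputs = tokenizer([text], return_tensors="pt").to(model.device)

with torch.no_grad():
    out = model.generate(**inputs, max_new_tokens=2048, do_sample=True,
                         temperature=0.6, top_p=0.85, top_k=20)
print(tokenizer.decode(out[0][inputs["input_ids"].shape[-1]:], skip_special_tokens=True))
```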
## Performance

### Industrial Code Benchmarks

InCoder-32B establishes strong open-source baselines across specialized industrial domains, surpassing proprietary models such as Claude-Sonnet-4.6 on several tasks.

| Domain | Benchmark | InCoder-32B | Claude-Sonnet-4.6 |
|---|---|:---:|:---:|
| **Chip Design** | VeriScope Score | 80.7 | **87.7** |
| **GPU Optim.** | KernelBench L1/L2/L3 | **22.2/36.0/14.0** | 11.1/28.0/2.0 |
| **3D Modeling** | CAD-Coder Compile (%) | **82.0** | 77.0 |
| **3D Modeling** | CAD-Coder IoU | **53.5** | 32.4 |

## Citation

If you find InCoder-32B useful in your research, please cite:

```bibtex
@article{yang2025incoder,
  title={InCoder-32B: Code Foundation Model for Industrial Scenarios},
  author={Yang, Jian and Zhang, Wei and Wu, Jiajun and Cheng, Junhang and Guo, Shawn and Wang, Haowen and others},
  journal={arXiv preprint arXiv:2603.16790},
  year={2025}
}
```

## Disclaimer

The model may generate incorrect or unsafe code. Always review and test outputs in a sandboxed environment before production use. Industrial code (RTL, embedded firmware, GPU kernels) requires expert human review before deployment in physical systems or hardware synthesis.