Model Card: Arc1-Coder-14b

1. Model Overview

Arc1-Coder-14b is a state-of-the-art large language model purpose-built for advanced programming tasks and algorithmic reasoning. Developed by Meissosis AI INC., it builds on the Qwen2.5-Coder architecture and is refined with a proprietary reinforcement learning pipeline designed to reduce logical hallucinations and improve code correctness.

  • Developer: Meissosis AI INC.
  • Model Type: Causal Language Model
  • Parameters: 14.7 Billion
  • Language(s): Multilingual (92+ programming languages)
  • License: Apache 2.0

2. Technical Specifications

| Attribute | Specification |
| --- | --- |
| Architecture | Transformer-based, decoder-only (Qwen2.5) |
| Layers | 48 |
| Attention Mechanism | Grouped-Query Attention (GQA) |
| Context Length | 128,000 tokens |
| Training Precision | bfloat16 |
| Vocabulary Size | 151,936 |
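Grouped-Query Attention reduces KV-cache memory by sharing each key/value head across a group of query heads. The sketch below illustrates that scaling; the head counts and head dimension are hypothetical placeholders, since this card does not publish them.

```python
# Illustrative KV-cache arithmetic for GQA vs. full multi-head attention.
# LAYERS matches the card; the head numbers below are assumed, not official.
LAYERS = 48
HEAD_DIM = 128        # hypothetical head dimension
KV_HEADS_MHA = 40     # hypothetical full multi-head baseline
KV_HEADS_GQA = 8      # hypothetical grouped KV-head count
BYTES_PER_VALUE = 2   # bfloat16

def kv_bytes_per_token(kv_heads: int) -> int:
    # Two cached tensors (K and V) per layer, each kv_heads * head_dim wide.
    return 2 * LAYERS * kv_heads * HEAD_DIM * BYTES_PER_VALUE

# Cache shrinks in proportion to the KV-head reduction (40 / 8 = 5x here).
ratio = kv_bytes_per_token(KV_HEADS_MHA) / kv_bytes_per_token(KV_HEADS_GQA)
```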

3. Training Methodology

Arc1-Coder-14b was refined using a two-stage post-training process:

  1. Curated SFT: Fine-tuned on a high-density dataset of verified competitive programming solutions and complex system design documents.
  2. Outcome-Based RL (OBRL): Trained using a reward model that validates code execution results rather than just text similarity, significantly improving the "Pass@1" success rate on zero-shot tasks.
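The OBRL reward model itself is proprietary, but the core idea of rewarding execution outcomes rather than text similarity can be sketched as follows. This is an illustrative approximation: `execution_reward` and its sandboxing (a plain subprocess with a timeout) are assumptions, not the pipeline described above.

```python
# Sketch of an outcome-based reward: execute a candidate solution together
# with its unit tests in a subprocess, and reward only a clean exit.
import subprocess
import sys
import tempfile

def execution_reward(generated_code: str, test_code: str, timeout: float = 5.0) -> float:
    """Return 1.0 if the candidate passes its tests, else 0.0."""
    with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as f:
        f.write(generated_code + "\n\n" + test_code + "\n")
        path = f.name
    try:
        result = subprocess.run(
            [sys.executable, path], capture_output=True, timeout=timeout
        )
        return 1.0 if result.returncode == 0 else 0.0
    except subprocess.TimeoutExpired:
        # Hung or non-terminating code earns no reward.
        return 0.0

# A correct solution earns full reward; a wrong one earns none.
solution = "def add(a, b):\n    return a + b"
tests = "assert add(2, 3) == 5"
```

A production reward would also need resource limits and import sandboxing; the subprocess timeout here only guards against non-termination.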

4. Benchmark Performance (2026 Standards)

Results are based on greedy decoding (temperature = 0).

| Benchmark | Score (Pass@1) | Industry Avg. (14B) |
| --- | --- | --- |
| HumanEval | 88.4% | 81.2% |
| MBPP | 87.2% | 82.5% |
| LiveCodeBench | 64.2% | 55.8% |
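Under greedy decoding, Pass@1 is simply the fraction of tasks whose single completion passes all tests. For sampled evaluation, the standard unbiased estimator (Chen et al., 2021) applies; the function below is a reference sketch, not part of this card's evaluation harness.

```python
# Unbiased pass@k estimator: given n samples per task of which c are
# correct, estimate the probability that at least one of k draws passes.
from math import comb

def pass_at_k(n: int, c: int, k: int) -> float:
    if n - c < k:
        # Fewer failures than draws: at least one success is guaranteed.
        return 1.0
    return 1.0 - comb(n - c, k) / comb(n, k)

# With n = k = 1 (greedy decoding), this reduces to "did the single
# completion pass": pass_at_k(1, 1, 1) == 1.0, pass_at_k(1, 0, 1) == 0.0.
```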

5. Usage & Implementation

Inference Requirements

  • VRAM: ~30GB for bfloat16 inference; ~12GB for 4-bit quantized inference.
  • Recommended Precision: torch.bfloat16
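The ~30GB bfloat16 figure follows directly from the parameter count at 2 bytes per parameter; the back-of-envelope check below covers weights only, so the ~12GB 4-bit figure above additionally reflects runtime overhead (KV cache, activations, dequantization buffers).

```python
# Weight-only memory estimate: parameters * bytes-per-parameter.
PARAMS = 14.7e9

bf16_gb = PARAMS * 2 / 1e9    # bfloat16: 2 bytes per parameter -> ~29.4 GB
int4_gb = PARAMS * 0.5 / 1e9  # 4-bit: ~0.5 bytes per parameter -> ~7.35 GB
```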

Example Loading (Transformers v4.40+)

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch

model_id = "zhlajiex/Arc1-Coder-14b"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # keyword is torch_dtype in Transformers v4.40+
    device_map="auto",
    trust_remote_code=True,
)
```
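For the ~12GB 4-bit path noted under Inference Requirements, a quantized load can be sketched with Transformers' `BitsAndBytesConfig` (requires the `bitsandbytes` package). The settings below are common defaults, not a configuration validated for this model.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
import torch

model_id = "zhlajiex/Arc1-Coder-14b"

# NF4 4-bit weights with bfloat16 compute; illustrative defaults only.
quant_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    quantization_config=quant_config,
    device_map="auto",
)
```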