LLaDA-XDLM-8B-Base

This repository contains the checkpoint after 600 training steps of continually pretraining LLaDA with XDLM.

LLaDA-XDLM with a sampling budget of 32. Evaluation of adapting LLaDA-8B to our XDLM formulation (LLaDA-XDLM): (a) LLaDA-XDLM consistently outperforms baselines across diverse benchmarks with 32 sampling steps; (b) improvements are particularly pronounced in code generation (MBPP), where the model substantially reduces generation failures.
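To make the "sampling budget" concrete, here is a minimal, hypothetical sketch of the kind of step-budgeted iterative unmasking used by masked diffusion language models such as LLaDA: the sequence starts fully masked, and each of the budgeted steps commits the most confident predictions for a share of the remaining masked positions. The `diffusion_sample` helper and the toy denoiser are illustrative stand-ins, not this repository's actual decoding code.

```python
import math

MASK = "<mask>"

def diffusion_sample(denoise, length, steps):
    """Fill a fully masked sequence in `steps` rounds, unmasking the
    highest-confidence positions each round (a generic sketch of
    confidence-based masked-diffusion decoding, not LLaDA-XDLM's code)."""
    seq = [MASK] * length
    for step in range(steps):
        preds = denoise(seq)  # one (token, confidence) pair per position
        masked = [i for i, t in enumerate(seq) if t == MASK]
        # spread the remaining masked positions over the remaining steps
        k = math.ceil(len(masked) / (steps - step))
        # commit the k most confident predictions among masked slots
        for i in sorted(masked, key=lambda i: -preds[i][1])[:k]:
            seq[i] = preds[i][0]
    return seq

# Toy denoiser: always "knows" the target, more confident near the start.
target = list("hello world")

def toy_denoise(seq):
    return [(target[i], 1.0 / (i + 1)) for i in range(len(seq))]

out = diffusion_sample(toy_denoise, len(target), steps=4)
assert out == target
```

With a larger budget the model revisits its predictions more often before committing them, which is the trade-off the 32-step evaluation above measures.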

For details and usage, see the Code repository.

TODO:

  • Update the model card to support standard Hugging Face `transformers` usage.
Format: Safetensors · Model size: 8B params · Tensor type: BF16