Tags: Text Generation · Transformers · Safetensors · English · llama · code · text-generation-inference
How to use with the Transformers library
```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("text-generation", model="InfiniAILab/CodeDrafter-500M")
```

```python
# Load the tokenizer and model directly
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("InfiniAILab/CodeDrafter-500M")
model = AutoModelForCausalLM.from_pretrained("InfiniAILab/CodeDrafter-500M")
```
Model Card for CodeDrafter-500M

A draft model for the Llama 3.1/3.2/3.3 series of models, specialized in Python coding. It is finetuned from the first 4 layers of facebook/layerskip-llama3.2-1B.
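Since CodeDrafter-500M is a draft model, its intended role is to serve as the assistant in speculative (assisted) decoding with a larger Llama 3.x target. A minimal sketch using the `assistant_model` argument of Transformers' `generate()`; the target checkpoint and prompt below are illustrative, and nothing is downloaded until the function is called:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer


def speculative_generate(prompt,
                         target_id="meta-llama/Llama-3.1-8B-Instruct",  # illustrative target
                         draft_id="InfiniAILab/CodeDrafter-500M",
                         max_new_tokens=128):
    """Generate with speculative decoding: the small draft model proposes
    tokens that the large target model verifies in parallel."""
    tokenizer = AutoTokenizer.from_pretrained(target_id)
    target = AutoModelForCausalLM.from_pretrained(target_id)
    draft = AutoModelForCausalLM.from_pretrained(draft_id)

    inputs = tokenizer(prompt, return_tensors="pt")
    # Passing assistant_model switches generate() into assisted decoding
    outputs = target.generate(**inputs,
                              assistant_model=draft,
                              max_new_tokens=max_new_tokens)
    return tokenizer.decode(outputs[0], skip_special_tokens=True)


# Example call (downloads both checkpoints):
# print(speculative_generate("def quicksort(arr):"))
```

Because the draft shares the Llama 3 tokenizer with its targets, the proposed tokens can be verified directly by the larger model.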

Citation

```bibtex
@article{chen2024sequoia,
  title={Sequoia: Scalable, Robust, and Hardware-aware Speculative Decoding},
  author={Chen, Zhuoming and May, Avner and Svirschevski, Ruslan and Huang, Yuhsun and Ryabinin, Max and Jia, Zhihao and Chen, Beidi},
  journal={arXiv preprint arXiv:2402.12374},
  year={2024}
}
```
Downloads last month: 47
Model size: 0.5B params (Safetensors)
Tensor type: F32

