# CodeParrot 🦜 small for text-to-code generation

This model is CodeParrot-small (from the `megatron` branch) fine-tuned on github-jupyter-text-to-code, a dataset whose samples are successions of docstrings and their Python code, originally extracted from the Jupyter notebooks parsed in this dataset.

```python
# Load model directly
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("codeparrot/codeparrot-small-text-to-code")
model = AutoModelForCausalLM.from_pretrained("codeparrot/codeparrot-small-text-to-code")
```
```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("text-generation", model="codeparrot/codeparrot-small-text-to-code")
```
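Since the training data consists of docstrings followed by their Python code, a natural way to prompt the model is with a docstring. A minimal sketch (the docstring text and generation parameters below are illustrative choices, not taken from the model card):

```python
from transformers import pipeline

pipe = pipeline("text-generation", model="codeparrot/codeparrot-small-text-to-code")

# Prompt with a docstring, mirroring the docstring-then-code structure
# of the training samples; the model is expected to continue with code.
prompt = '"""Return the sum of the squares of a list of numbers."""\n'
output = pipe(prompt, max_new_tokens=64)
print(output[0]["generated_text"])
```

`max_new_tokens` caps the length of the generated continuation; the returned `generated_text` includes the prompt itself.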