# ML4SE23_G8_WizardCoder-1B-CS

## Use with the Transformers library
```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("text-generation", model="AISE-TUDelft/ML4SE23_G8_WizardCoder-1B-CS")
```
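A minimal sketch of how the pipeline might be called to summarize a code snippet. The prompt wording below is an assumption made for illustration; the exact instruction template used during fine-tuning may differ (see the GitHub repository linked below).

```python
# Hypothetical usage sketch: the prompt format is an assumption, not the official template.
code = "def add(a, b):\n    return a + b"
prompt = f"Summarize the following Python function:\n{code}\nSummary:"

# Greedy decoding with a small new-token budget, chosen purely for illustration.
outputs = pipe(prompt, max_new_tokens=64, do_sample=False)
print(outputs[0]["generated_text"])
```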
```python
# Load the tokenizer and model directly
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("AISE-TUDelft/ML4SE23_G8_WizardCoder-1B-CS")
model = AutoModelForCausalLM.from_pretrained("AISE-TUDelft/ML4SE23_G8_WizardCoder-1B-CS")
```
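The same sketch with an explicit generate() call, again under the assumed prompt format:

```python
# Hypothetical usage sketch; the prompt format is an assumption.
prompt = "Summarize the following Python function:\ndef add(a, b):\n    return a + b\nSummary:"
inputs = tokenizer(prompt, return_tensors="pt")

# Greedy decoding with a small new-token budget, for illustration only.
output_ids = model.generate(**inputs, max_new_tokens=64, do_sample=False)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```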
This model was fine-tuned from WizardCoder-1B-V1.0 for the Code Summarization task. See https://github.com/ML4SE2023/G8-Codex for more details.
