How to use kevinlu1248/ct-base-commits-fastt5-quantized with Transformers:

```python
# The tokenizer loads with Transformers; the quantized ONNX model loads with
# fastT5. Note: the auto-generated snippet imported OnnxT5 from transformers,
# which does not provide it — the ONNX model classes live in the fastT5 package.
from transformers import AutoTokenizer
from fastT5 import get_onnx_model

model_name = "kevinlu1248/ct-base-commits-fastt5-quantized"
tokenizer = AutoTokenizer.from_pretrained(model_name)
# Assumes the exported ONNX files are available locally; fastT5 loads them
# from its models directory rather than from the Hugging Face Hub.
model = get_onnx_model(model_name)
```
CodeTrans Commits with FastT5 Optimizations
Based on https://huggingface.co/SEBIS/code_trans_t5_small_commit_generation_transfer_learning_finetune, exported to ONNX with FastT5 (https://github.com/Ki6an/fastT5) and quantized to 8-bit.