Instructions for using SEBIS/code_trans_t5_base_code_comment_generation_java_transfer_learning_finetune with libraries, inference providers, notebooks, and local apps. Follow the links below to get started.
- Libraries
- Transformers
How to use SEBIS/code_trans_t5_base_code_comment_generation_java_transfer_learning_finetune with Transformers (a fuller generation example follows the notebook links below):
```python
# Use a pipeline as a high-level helper
# Warning: Pipeline type "summarization" is no longer supported in transformers v5.
# You must load the model directly (see below) or downgrade to v4.x with:
#   pip install "transformers<5.0.0"
from transformers import pipeline

pipe = pipeline(
    "summarization",
    model="SEBIS/code_trans_t5_base_code_comment_generation_java_transfer_learning_finetune",
)

# Load model directly
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained(
    "SEBIS/code_trans_t5_base_code_comment_generation_java_transfer_learning_finetune"
)
# AutoModelForSeq2SeqLM (rather than the bare AutoModel) loads the full
# T5 encoder-decoder needed to generate comments.
model = AutoModelForSeq2SeqLM.from_pretrained(
    "SEBIS/code_trans_t5_base_code_comment_generation_java_transfer_learning_finetune"
)
```
- Notebooks
- Google Colab
- Kaggle
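Since the snippet above only loads the model, a short end-to-end sketch of comment generation may help. It is a minimal sketch assuming a transformers v4.x environment; the Java snippet and the decoding settings (`max_length=64`, `num_beams=4`) are illustrative assumptions, not values from the model card.

```python
# Minimal sketch: generate a comment for a Java method with the fine-tuned
# CodeTrans model. The Java snippet and decoding settings are assumptions.
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

checkpoint = "SEBIS/code_trans_t5_base_code_comment_generation_java_transfer_learning_finetune"
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForSeq2SeqLM.from_pretrained(checkpoint)

java_code = "protected String renderUri(Uri uri) { return uri.toString(); }"

inputs = tokenizer(java_code, return_tensors="pt", truncation=True, max_length=512)
output_ids = model.generate(**inputs, max_length=64, num_beams=4)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```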
Update README.md
README.md CHANGED

```diff
@@ -52,7 +52,7 @@ The optimizer used is AdaFactor with inverse square root learning rate schedule
 
 ### Fine-tuning
 
-This model was then fine-tuned on a single TPU Pod
+This model was then fine-tuned on a single TPU Pod V3-8 for 80,000 steps in total, using sequence length 512 (batch size 256), using only the dataset containing Java code.
 
 ## Evaluation results
```
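The diff above pins down the fine-tuning setup: AdaFactor with an inverse square root learning rate schedule, 80,000 steps, sequence length 512, batch size 256. For readers who want to reproduce something comparable, here is a minimal sketch of that optimizer configuration using the Adafactor implementation bundled with transformers v4.x; the dataset, batching, and training loop are omitted, and everything beyond the card's stated hyperparameters is an assumption.

```python
# Minimal sketch of the optimizer described above: AdaFactor with an inverse
# square root learning rate schedule, as shipped with transformers v4.x.
# Dataset loading and the training loop itself are omitted.
from transformers import AutoModelForSeq2SeqLM
from transformers.optimization import Adafactor, AdafactorSchedule

model = AutoModelForSeq2SeqLM.from_pretrained(
    "SEBIS/code_trans_t5_base_code_comment_generation_java_transfer_learning_finetune"
)

# With lr=None and relative_step=True, Adafactor uses its built-in inverse
# square root schedule, matching the description in the diff above.
optimizer = Adafactor(
    model.parameters(),
    lr=None,
    scale_parameter=True,
    relative_step=True,
    warmup_init=True,
)
lr_scheduler = AdafactorSchedule(optimizer)  # proxy schedule for logging the lr

# Training would then run for 80,000 steps with sequence length 512 and
# batch size 256, per the fine-tuning note in the diff.
```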