Instructions to use mrm8488/bert2bert_shared-german-finetuned-summarization with libraries, inference providers, notebooks, and local apps. Follow these links to get started.
- Libraries
- Transformers
How to use mrm8488/bert2bert_shared-german-finetuned-summarization with Transformers:
```python
# Use a pipeline as a high-level helper
# Warning: the "summarization" pipeline task is no longer supported in transformers v5.
# Load the model directly (see below) or downgrade to v4.x with:
#   pip install "transformers<5.0.0"
from transformers import pipeline

pipe = pipeline("summarization", model="mrm8488/bert2bert_shared-german-finetuned-summarization")
```

```python
# Load model directly
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("mrm8488/bert2bert_shared-german-finetuned-summarization")
model = AutoModelForSeq2SeqLM.from_pretrained("mrm8488/bert2bert_shared-german-finetuned-summarization")
```

- Notebooks
- Google Colab
- Kaggle
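The snippets above load the tokenizer and model but stop short of actually producing a summary. A minimal sketch of the missing generation step might look like the following; the beam-search settings (`num_beams`, `max_length`) are illustrative assumptions, not the values the model card prescribes:

```python
from functools import lru_cache

from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

MODEL_ID = "mrm8488/bert2bert_shared-german-finetuned-summarization"


@lru_cache(maxsize=1)
def _load():
    # Downloaded once from the Hugging Face Hub on first call, then cached.
    tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
    model = AutoModelForSeq2SeqLM.from_pretrained(MODEL_ID)
    return tokenizer, model


def summarize(text: str, num_beams: int = 4, max_length: int = 128) -> str:
    """Tokenize a German input text, generate with beam search, decode the summary."""
    tokenizer, model = _load()
    inputs = tokenizer(text, truncation=True, max_length=512, return_tensors="pt")
    summary_ids = model.generate(
        inputs["input_ids"],
        attention_mask=inputs["attention_mask"],
        num_beams=num_beams,
        max_length=max_length,
        early_stopping=True,
    )
    return tokenizer.decode(summary_ids[0], skip_special_tokens=True)
```

Calling `summarize("…ein deutscher Artikel…")` then returns the decoded summary string; the model weights are only fetched on the first call.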
This fine-tuned model is not working
Whatever German text you give it, instead of summarizing it, it just outputs the first few sentences of the input.
It "summarizes" in the style of MLSUM (DE). Check out the dataset.
Can I fine-tune this model on my own corpus for a text summarization task? Could you also suggest some pre-trained German models that I could fine-tune on my corpus?
You could use this model or a multilingual one such as mT5.
I am a total beginner in the transfer learning field. Can you suggest some resources or point me in the right direction, i.e. how can I fine-tune your model or the mT5 model for a text summarization task on my (German) corpus? I would be really, really grateful. Please help me.🙏🙏
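For the mT5 route suggested above, a minimal fine-tuning sketch with `Seq2SeqTrainer` could look like the following. Hedged assumptions: the checkpoint `google/mt5-small` is chosen only because it is the smallest mT5 variant, the dataset column names `text`/`summary` are placeholders for your corpus, and the hyperparameters are generic starting points rather than tuned values:

```python
from transformers import (
    AutoTokenizer,
    AutoModelForSeq2SeqLM,
    DataCollatorForSeq2Seq,
    Seq2SeqTrainingArguments,
    Seq2SeqTrainer,
)

# Assumption: smallest mT5 checkpoint; any seq2seq German-capable model works similarly.
MODEL_ID = "google/mt5-small"


def preprocess(examples, tokenizer, max_input=512, max_target=128):
    """Tokenize article/summary pairs; 'text'/'summary' column names are assumptions."""
    model_inputs = tokenizer(examples["text"], truncation=True, max_length=max_input)
    labels = tokenizer(
        text_target=examples["summary"], truncation=True, max_length=max_target
    )
    model_inputs["labels"] = labels["input_ids"]
    return model_inputs


def build_trainer(train_dataset, eval_dataset):
    """Wire up tokenizer, model, and trainer for a summarization fine-tune."""
    tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
    model = AutoModelForSeq2SeqLM.from_pretrained(MODEL_ID)
    args = Seq2SeqTrainingArguments(
        output_dir="mt5-german-summarization",  # placeholder output directory
        learning_rate=5e-5,
        per_device_train_batch_size=4,
        num_train_epochs=3,
        predict_with_generate=True,  # use generate() for eval, so ROUGE can be computed
    )
    return Seq2SeqTrainer(
        model=model,
        args=args,
        train_dataset=train_dataset.map(lambda b: preprocess(b, tokenizer), batched=True),
        eval_dataset=eval_dataset.map(lambda b: preprocess(b, tokenizer), batched=True),
        data_collator=DataCollatorForSeq2Seq(tokenizer, model=model),
    )
```

After building the trainer with your `datasets.Dataset` splits, `trainer.train()` runs the fine-tune and `trainer.save_model()` writes the checkpoint. The Hugging Face summarization task guide walks through the same pattern step by step and is a good starting resource for beginners.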