Instructions for using valhalla/t5-base-e2e-qg with libraries, inference providers, notebooks, and local apps. Follow these links to get started.
- Libraries
- Transformers
How to use valhalla/t5-base-e2e-qg with Transformers:
```python
# Load model directly
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("valhalla/t5-base-e2e-qg")
model = AutoModelForSeq2SeqLM.from_pretrained("valhalla/t5-base-e2e-qg")
```
- Notebooks
- Google Colab
- Kaggle
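The snippet above only loads the checkpoint; to query it, the input text needs the task prefix the e2e-qg checkpoints were trained with. A minimal sketch, assuming the `generate questions:` prefix and trailing `</s>` used in the question_generation project's examples (the helper name is illustrative):

```python
def make_e2e_qg_input(text: str) -> str:
    """Build the input string for an e2e-qg checkpoint.

    Assumes the "generate questions:" task prefix and a trailing </s>,
    as shown in the question_generation project's examples; this is an
    assumption, not part of the loading snippet above.
    """
    return f"generate questions: {text} </s>"

text = "Python is an interpreted, high-level, general-purpose programming language."
prompt = make_e2e_qg_input(text)
# prompt can now be tokenized and passed to model.generate(...)
```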
Update README.md #3 by yosuaw - opened
README.md CHANGED

```diff
@@ -28,7 +28,7 @@ text = "Python is an interpreted, high-level, general-purpose programming langua
 and first released in 1991, Python's design philosophy emphasizes code \
 readability with its notable use of significant whitespace."
 
-nlp = pipeline("
+nlp = pipeline("text2text-generation", model="valhalla/t5-base-e2e-qg")
 nlp(text)
 => [
 'Who created Python?',
```
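The e2e-qg checkpoints return all generated questions in a single string, separated by `<sep>` tokens. A minimal sketch for splitting that output into a list of questions (the function name and the sample string are illustrative; the `<sep>` separator is an assumption based on the model card):

```python
def split_questions(generated: str, sep: str = "<sep>") -> list[str]:
    # Split the single generated string on the separator token and
    # drop empty fragments (the output often ends with a trailing sep).
    return [q.strip() for q in generated.split(sep) if q.strip()]

sample = "Who created Python? <sep> When was Python first released? <sep>"
print(split_questions(sample))
# → ['Who created Python?', 'When was Python first released?']
```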