
google-t5/t5-11b

Translation
Transformers
PyTorch
TensorFlow
t5
text-generation
summarization
text-generation-inference

Instructions to use google-t5/t5-11b with libraries, inference providers, notebooks, and local apps. Follow these links to get started.

  • Libraries
  • Transformers

    How to use google-t5/t5-11b with Transformers:

    # Use a pipeline as a high-level helper
    # Warning: Pipeline type "translation" is no longer supported in transformers v5.
    # You must load the model directly (see below) or downgrade to v4.x with:
    #   pip install "transformers<5.0.0"
    from transformers import pipeline
    
    pipe = pipeline("translation", model="google-t5/t5-11b")
    # Load model directly
    # AutoModelWithLMHead is deprecated; AutoModelForSeq2SeqLM is the
    # recommended class for encoder-decoder models such as T5.
    from transformers import AutoTokenizer, AutoModelForSeq2SeqLM
    
    tokenizer = AutoTokenizer.from_pretrained("google-t5/t5-11b")
    model = AutoModelForSeq2SeqLM.from_pretrained("google-t5/t5-11b")
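    Once the model and tokenizer are loaded, T5 is prompted with a task prefix. A minimal usage sketch (the example sentence and the max_new_tokens value are illustrative, not from the model card):

    # Translate with the model and tokenizer loaded above.
    # T5 uses task prefixes such as "translate English to German: ".
    inputs = tokenizer(
        "translate English to German: The house is wonderful.",
        return_tensors="pt",
    )
    outputs = model.generate(**inputs, max_new_tokens=40)
    print(tokenizer.decode(outputs[0], skip_special_tokens=True))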
  • Notebooks
  • Google Colab
  • Kaggle
t5-11b
90.5 GB
  • 8 contributors
History: 19 commits
osanseviero
lbourdois
Add "multilingual" to the language tag (#2)
90f3770 over 3 years ago
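The listing below enumerates the repository files. Individual files can be fetched without cloning the full 90.5 GB repository, for example with the huggingface_hub download helper (a minimal sketch; the choice of config.json as the target file is illustrative):

    # Download a single file from the repo instead of cloning everything.
    from huggingface_hub import hf_hub_download

    config_path = hf_hub_download(repo_id="google-t5/t5-11b", filename="config.json")
    print(config_path)  # local cache path of the downloaded file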
  • .gitattributes
    345 Bytes
    initial commit over 6 years ago
  • README.md
    8.6 kB
    Add "multilingual" to the language tag (#2) over 3 years ago
  • config.json
    1.2 kB
    Update config.json about 6 years ago
  • pytorch_model.bin
    45.2 GB
    Detected pickle imports (3): "torch._utils._rebuild_tensor_v2", "torch.FloatStorage", "collections.OrderedDict" (see the note on pickle imports after this listing)
    Update pytorch_model.bin over 6 years ago
  • spiece.model
    792 kB
    Add tokenizer files over 5 years ago
  • tf_model.h5
    45.2 GB
    Update tf_model.h5 over 6 years ago
  • tokenizer.json
    1.39 MB
    Update tokenizer.json over 5 years ago
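A pickle import is a module and attribute (for example "torch.FloatStorage") that a pickle stream instructs Python to import while deserializing. Because unpickling can execute arbitrary code, the Hub scans pickled weight files such as pytorch_model.bin and lists the imports it detects. A minimal sketch of such a scan, assuming the checkpoint is either a raw pickle stream or a torch-style zip archive containing a data.pkl member (the helper name and structure are illustrative, not the Hub's actual scanner):

    import io
    import pickletools
    import zipfile

    def list_pickle_imports(path):
        # Collect the "module.name" pairs referenced by GLOBAL opcodes,
        # roughly what the "Detected Pickle imports" widget reports.
        # STACK_GLOBAL opcodes would need stack tracking and are skipped here.
        found = set()

        def scan(stream):
            for opcode, arg, _pos in pickletools.genops(stream):
                if opcode.name == "GLOBAL":
                    module, name = arg.split(" ", 1)
                    found.add(f"{module}.{name}")

        if zipfile.is_zipfile(path):
            # Newer torch checkpoints are zip archives with a data.pkl inside.
            with zipfile.ZipFile(path) as zf:
                for member in zf.namelist():
                    if member.endswith("data.pkl"):
                        scan(io.BytesIO(zf.read(member)))
        else:
            with open(path, "rb") as f:
                scan(f)
        return sorted(found)

    print(list_pickle_imports("pytorch_model.bin"))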