Runtime error
Exit code: 1. Reason: line 304, in infer_framework_load_model:
    raise ValueError(
        f"Could not load model {model} with any of the following classes: {class_tuple}. See the original errors:\n\n{error}\n"
    )
ValueError: Could not load model google/flan-t5-small with any of the following classes: (<class 'transformers.models.auto.modeling_auto.AutoModelForSeq2SeqLM'>,). See the original errors:

while loading with AutoModelForSeq2SeqLM, an error is thrown:
Traceback (most recent call last):
  File "/usr/local/lib/python3.13/site-packages/transformers/pipelines/base.py", line 291, in infer_framework_load_model
    model = model_class.from_pretrained(model, **kwargs)
  File "/usr/local/lib/python3.13/site-packages/transformers/models/auto/auto_factory.py", line 573, in from_pretrained
    return model_class.from_pretrained(
        pretrained_model_name_or_path, *model_args, config=config, **hub_kwargs, **kwargs
    )
  File "/usr/local/lib/python3.13/site-packages/transformers/modeling_utils.py", line 272, in _wrapper
    return func(*args, **kwargs)
  File "/usr/local/lib/python3.13/site-packages/transformers/modeling_utils.py", line 4317, in from_pretrained
    checkpoint_files, sharded_metadata = _get_resolved_checkpoint_files(
        pretrained_model_name_or_path=pretrained_model_name_or_path,
        ...<13 lines>...
        commit_hash=commit_hash,
    )
  File "/usr/local/lib/python3.13/site-packages/transformers/modeling_utils.py", line 1110, in _get_resolved_checkpoint_files
    raise EnvironmentError(
        ...<3 lines>...
    )
OSError: google/flan-t5-small does not appear to have a file named pytorch_model.bin but there is a file for TensorFlow weights. Use `from_tf=True` to load this model from those weights.
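The root cause is the final OSError: the resolver found only TensorFlow weights for the checkpoint, so the PyTorch `from_pretrained` call fails. Following the error message's own suggestion, one possible fix is to forward `from_tf=True` to the model loader via the pipeline's `model_kwargs`. The sketch below only builds the keyword arguments (the helper name `build_pipeline_kwargs` is mine, not part of any API); the actual `pipeline(...)` call is shown commented out because it downloads weights and additionally requires TensorFlow to be installed for the conversion.

```python
# Sketch of a possible fix, assuming the `transformers` pipeline API.
# Model name and the `from_tf=True` hint are taken from the error above.

def build_pipeline_kwargs():
    """Return kwargs that forward from_tf=True to from_pretrained.

    `model_kwargs` is passed through by `pipeline()` to the underlying
    `AutoModelForSeq2SeqLM.from_pretrained` call, so `from_tf=True`
    tells it to load and convert the TensorFlow checkpoint.
    """
    return {
        "task": "text2text-generation",   # flan-t5 is a seq2seq model
        "model": "google/flan-t5-small",
        "model_kwargs": {"from_tf": True},
    }

# Usage (needs `transformers` plus TensorFlow installed, and network access):
# from transformers import pipeline
# pipe = pipeline(**build_pipeline_kwargs())
```

If installing TensorFlow is not an option, an alternative worth checking is whether a newer `transformers` release resolves the checkpoint via `model.safetensors` instead of `pytorch_model.bin`; whether that applies here depends on the files actually published for this checkpoint and the library version in the container.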