runtime error

Downloads of tokenizer_config.json, vocab.json, tokenizer.json, merges.txt, normalizer.json, added_tokens.json, special_tokens_map.json, and preprocessor_config.json all completed.

Special tokens have been added in the vocabulary, make sure the associated word embeddings are fine-tuned or trained.

Traceback (most recent call last):
  File "/home/user/app/app.py", line 15, in <module>
    pipe = pipeline(
  File "/home/user/.local/lib/python3.10/site-packages/transformers/pipelines/__init__.py", line 1070, in pipeline
    return pipeline_class(model=model, framework=framework, task=task, **kwargs)
  File "/home/user/.local/lib/python3.10/site-packages/transformers/pipelines/automatic_speech_recognition.py", line 288, in __init__
    self._preprocess_params, self._forward_params, self._postprocess_params = self._sanitize_parameters(**kwargs)
TypeError: AutomaticSpeechRecognitionPipeline._sanitize_parameters() got an unexpected keyword argument 'chunk_length'
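The traceback shows the pipeline being constructed with a keyword argument `chunk_length`, which the ASR pipeline does not accept; the documented transformers keyword is `chunk_length_s` (chunk length in seconds). A minimal sketch of a corrected app.py follows — the model id `openai/whisper-small` and the value 30 are illustrative assumptions, since the original call at app.py line 15 is not shown in the log:

```python
from transformers import pipeline

# The failing call passed `chunk_length=...`; the accepted pipeline keyword is
# `chunk_length_s`. Renaming the argument is the fix.
ASR_KWARGS = {
    "chunk_length_s": 30,  # was: chunk_length=30 -> TypeError
}


def build_asr_pipeline(model_id: str = "openai/whisper-small"):
    # `openai/whisper-small` is a placeholder model id for illustration.
    return pipeline(
        task="automatic-speech-recognition",
        model=model_id,
        **ASR_KWARGS,
    )


if __name__ == "__main__":
    pipe = build_asr_pipeline()
    print(pipe("sample.flac")["text"])
```

Constructing the pipeline with `chunk_length_s` enables chunked long-form transcription without raising the `_sanitize_parameters()` TypeError above.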
