Missing `cross_attentions` in the model output
#1 opened by codewithkyrian
I’ve been using the `Xenova/whisper-tiny.en` model for a while and recently decided to switch to `onnx-community/whisper-tiny.en`. However, I noticed that the output from the ONNX model doesn’t include the `cross_attentions` required for generating word-level timestamps.
This leads me to believe that the model might not have been exported with `output_attentions=True`. Was this an intentional decision, or could it have been an oversight during the export process?
Update: I realized there is a variant named `onnx-community/whisper-tiny.en_timestamped` and figured it might be the one with output attentions enabled, but it shows the same behaviour.