Issues: shashikg/WhisperS2T

How to convert a custom Whisper model (OpenAI format or HF) to the TensorRT-based backend?
#58 · enhancement · opened Apr 8, 2024 by StephennFernandes

"RuntimeError: stft input and window must be on the same device but got self on cuda:1 and window on cuda:0" when specifying device_index=1 in whisper_s2t.load_model
#56 · bug · opened Apr 1, 2024 by JH90iOS (a repro sketch follows this list)

[large-v3] Error during transcription: Invalid input features shape: expected an input with shape (3, 80, 3000), but got an input with shape (3, 128, 3000) instead
#51 · bug · opened Mar 20, 2024 by twardoch (a note on mel-bin counts follows this list)

Handle batch processing when a few files fail in the whole batch
#50 · bug · opened Mar 11, 2024 by BBC-Esq

Problems with using the Hugging Face Flash Attention 2 backend on Windows
#48 · bug · opened Mar 3, 2024 by BBC-Esq

Please support Whisper medium and medium.en in the TensorRT-LLM backend
#30 · opened Feb 21, 2024 by colinator

Is it possible to support real-time transcription with WebSockets?
#25 · opened Feb 21, 2024 by joaogabrieljunq

TensorRT-LLM Backend Exported Model
#8 · help wanted · opened Jan 28, 2024 by shashikg

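Repro sketch for #56: a minimal, hypothetical reproduction assuming the load_model / transcribe_with_vad usage shown in the project README; only the device_index keyword is taken from the issue title itself, and exact parameter names may differ between versions. The error message indicates the STFT window tensor stays on cuda:0 while the input features follow the selected device.

```python
# Hypothetical repro for issue #56: selecting a non-default GPU via device_index.
# API names follow the WhisperS2T README and may differ across versions.
import whisper_s2t

model = whisper_s2t.load_model(
    model_identifier="large-v2",
    backend="CTranslate2",
    device="cuda",
    device_index=1,   # second GPU; the error mentions cuda:1 vs cuda:0
)

out = model.transcribe_with_vad(
    ["sample.wav"],          # placeholder audio file
    lang_codes=["en"],
    tasks=["transcribe"],
    initial_prompts=[None],
    batch_size=16,
)
# Reported failure: "RuntimeError: stft input and window must be on the same device
# but got self on cuda:1 and window on cuda:0"
```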
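Note on #51: the shapes in the error message line up with the fact that large-v3 expects 128 mel bins, while earlier Whisper checkpoints expect 80. The sketch below is illustrative only and uses the reference openai-whisper package (not WhisperS2T) to show where the 80 vs 128 dimension comes from.

```python
# Illustrative only, using the reference openai-whisper package rather than WhisperS2T.
# large-v3 was trained on 128 mel bins, earlier checkpoints on 80, which matches the
# (3, 80, 3000) vs (3, 128, 3000) shapes in the error (batch of 3, 3000 mel frames).
import whisper

audio = whisper.pad_or_trim(whisper.load_audio("sample.wav"))  # placeholder file

mel_80 = whisper.log_mel_spectrogram(audio, n_mels=80)    # large-v2 and earlier
mel_128 = whisper.log_mel_spectrogram(audio, n_mels=128)  # large-v3

print(mel_80.shape, mel_128.shape)  # torch.Size([80, 3000]) torch.Size([128, 3000])
```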