Hacker News | owehrens's comments

Thanks so much again. Got it working. 8 seconds. Nvidia is the king. Updated the blog post.


I think insanely-fast-whisper uses batching, so faster-whisper (which doesn't) might be a fairer comparison for the purposes of your post.
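For context, the batching in question amounts to slicing long audio into fixed-length chunks and running them through the model together rather than one at a time. A minimal sketch of that chunking step (pure Python, illustrative only; the 30-second chunk length and 16 kHz sample rate are common Whisper conventions, not insanely-fast-whisper's actual code):

```python
def chunk_audio(samples, sample_rate=16_000, chunk_seconds=30):
    """Split a 1-D sequence of audio samples into fixed-length chunks
    so a model can process them as one batch instead of sequentially."""
    chunk_len = sample_rate * chunk_seconds
    return [samples[i:i + chunk_len] for i in range(0, len(samples), chunk_len)]

# 65 seconds of (silent) audio -> three chunks: 30 s, 30 s, and a 5 s remainder
audio = [0.0] * (16_000 * 65)
batches = chunk_audio(audio)
print([len(c) / 16_000 for c in batches])  # → [30.0, 30.0, 5.0]
```

A batched runner can then pad the final short chunk and feed all chunks through the GPU in one forward pass, which is why batched timings aren't directly comparable to sequential ones.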


I just can't get it to work, it errors out with 'NotImplementedError: The model type whisper is not yet supported to be used with BetterTransformer.' Did you happen to run into this problem?


Sorry, I didn't encounter that error. It worked on the first try for me. I have wished many times that the ML community didn't settle on Python for this reason...
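One generic way to cope with that kind of backend error is to treat the optimization as optional and fall back to the unmodified model when the architecture isn't supported. A sketch of that pattern (the `transform` callable here stands in for an optimizer like BetterTransformer; the names are illustrative, not any library's actual API):

```python
def maybe_optimize(model, transform):
    """Try an optional optimization; fall back to the original model
    if the backend does not support this architecture."""
    try:
        return transform(model), True
    except NotImplementedError:
        return model, False

def unsupported(model):
    # Mimics the "model type whisper is not yet supported" case.
    raise NotImplementedError("The model type whisper is not yet supported")

model, optimized = maybe_optimize("whisper-model", unsupported)
print(optimized)  # → False (original model returned unchanged)
```

The same wrapper returns the transformed model and `True` when the optimization succeeds, so callers don't need to special-case either path.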


You mean 'https://github.com/Vaibhavs10/insanely-fast-whisper'? I didn't know about that until now. I've been running all of this for about 10 months and just had it working. Happy to try that out. The GPU is fully utilized using whisper and PyTorch with CUDA. Thanks for the link.


