Hacker News
electroglyph | 3 months ago | on: Windows ML is generally available
I exclusively use ONNX models across platforms for CPU inference; it's usually the fastest option on CPU. Hacking on ONNX graphs is super easy, too. I make my own uint8-output ONNX embedding models.