Hacker News: tosh's comments


There are actually two Engineering Emmy Awards. The 'Primetime Engineering Emmy Awards' are given by ATAS (the Academy of Television Arts and Sciences), while the 'Technology and Engineering Emmy Awards' are given by NATAS (the National Academy of Television Arts and Sciences). Not confusing at all.

https://en.wikipedia.org/wiki/Primetime_Engineering_Emmy_Awa...

https://en.wikipedia.org/wiki/National_Academy_of_Television...


One of my projects won one of these.

It was for standardising widescreen switching signals. In the early 2000s that was a big issue because each company had a different interpretation of what the flags meant, so when you were watching TV you would often get the wrong behaviour and distorted pictures. A small group of us sat down and agreed what the proper behaviour should be, and then every other TV standards body in the world adopted it.

I never did get a statue.
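The interoperability problem described above can be sketched in a few lines. The flag names and behaviours below are hypothetical, chosen only to illustrate why a single agreed mapping from signalling flags to display behaviour matters; they are not the actual standardised values.

```python
# Hypothetical sketch: why everyone needs to agree on what
# widescreen switching flags mean. Flag values are made up.

# One shared, agreed-upon mapping from a broadcast flag to a display mode.
FLAG_TO_DISPLAY_MODE = {
    "4:3_full":       "pillarbox_on_16:9",  # show 4:3 content with side bars
    "16:9_full":      "fullscreen_16:9",    # show widescreen content as-is
    "14:9_letterbox": "zoom_14:9",          # mild zoom compromise
}

def choose_display_mode(flag: str) -> str:
    """Return the display behaviour for a signalling flag.

    Unknown flags fall back to one safe default, instead of each
    vendor guessing differently (which is what caused the wrong
    behaviour and distorted pictures).
    """
    return FLAG_TO_DISPLAY_MODE.get(flag, "pillarbox_on_16:9")

print(choose_display_mode("16:9_full"))
print(choose_display_mode("some_unknown_flag"))
```

Before the agreement, each TV effectively had its own version of this table; the fix was making everyone use the same one.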


And the official site also covers 2025 and 2024, which, for some reason, the Wikipedia page does not.

https://theemmys.tv/tech/


It would be great to have the Saturn version + translation, as well as the improved movie sequences.

Maybe there is a way to port them using the Saturn's MPEG add-on (?)

otoh it is probably fine to watch them on YouTube in parallel.


In German it's

“Schere, Stein, Papier” (“scissors, stone, paper”)



or an iPad instead of a yearly subscription


the iPhone is just the current name for the iPod

Steve Jobs:

  - an iPod
  - a phone
  - an internet communicator


Codex CLI 0.59 was released (but has no changelog text).

https://github.com/openai/codex/releases/tag/rust-v0.59.0


This might also hint at SWE struggling to capture what “being good at coding” means.

Evals are hard.


> This might also hint at SWE struggling to capture what “being good at coding” means.

My take would be that coding itself is hard, but I'm a software engineer myself so I'm biased.


It is just Python and Django. It might indicate qualities in other technologies, but it is not a very good benchmark.


Kudos on the launch. Love the local-ai approach.

Regarding open models: what is the go-to way for me to make Surf run with qwen3-vl? Ollama?

As far as I understand, any endpoint that supports the completions API will work?

https://github.com/deta/surf/blob/main/docs/AI_MODELS.md

If I attach image context will it be provided to qwen3-vl? Or does this only work with the "main" models like OpenAI, Anthropic, Gemini and so on?


Thank you.

Yes, we support any endpoint that supports the completions API. And yes, Ollama might be the easiest to set up. The images should also work with qwen3-vl.

But if you run into any issues, please feel free to submit a bug report https://github.com/deta/surf/issues

Edit: fixed github issues link
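For reference, attaching image context to a local model over an OpenAI-compatible completions endpoint can be sketched as below. This is a minimal sketch, not Surf's actual code: the base URL (Ollama's default OpenAI-compatible endpoint) and the model tag "qwen3-vl" are assumptions you would adjust to your own setup.

```python
# Sketch: sending text plus an inline base64 image to a local
# OpenAI-compatible chat-completions endpoint (e.g. Ollama).
# The base URL and the model tag "qwen3-vl" are assumptions.
import base64
import json
import urllib.request

def build_vision_request(prompt: str, image_bytes: bytes,
                         model: str = "qwen3-vl") -> dict:
    """Build a chat-completions payload with an inline base64 image."""
    data_url = ("data:image/png;base64,"
                + base64.b64encode(image_bytes).decode("ascii"))
    return {
        "model": model,
        "messages": [{
            "role": "user",
            "content": [
                {"type": "text", "text": prompt},
                {"type": "image_url", "image_url": {"url": data_url}},
            ],
        }],
    }

def send(payload: dict,
         base_url: str = "http://localhost:11434/v1") -> dict:
    """POST the payload to the endpoint and return the parsed reply."""
    req = urllib.request.Request(
        base_url + "/chat/completions",
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

# Usage (requires a running server with a vision model pulled):
# payload = build_vision_request("What is in this image?",
#                                open("shot.png", "rb").read())
# reply = send(payload)
```

Because the image travels as an ordinary `image_url` content part, any endpoint that implements the OpenAI chat-completions format should accept it, which is consistent with the answer above.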


SES and Signal seem to work again.

