Hacker News | daquisu's comments

Now it is even easier: Cloudflare has a beta product called AI Search that implements most of what these 160 lines of code do.

$12.5 million a year for a hundred people seems reasonable? That's $125k per person per year. And GP said "a few hundred" - two hundred would drop that to $62.5k per person.


Firefox on mobile works with uBlock Origin. It can also play videos with the screen locked, although you do have to unpause after locking the screen.


> The "You are an expert software engineer" really helps?

Anecdata, but it weirdly helped me. It seemed like BS to me until I tried it.

Maybe because good code is contextual? Sample code written to explain a concept may be simpler than production-ready code. The model may be capable of both but can't properly distinguish which one is called for.

I don't know.


Maybe it's not the "expert" part that works, but "software engineer"? Essentially it's given a role. This constrains it a bit; e.g. it's not going to question the overall plan. Maybe this helps it take a subordinate position rather than that of an advisor or teacher, which may help when there is a clear objective with clear boundaries laid out. Anyway, I will try it myself and simply observe whether it makes a difference.
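For what it's worth, that role framing is usually expressed as a system message in a chat-style API. A minimal sketch (the function and model name are placeholders of mine, not anything from the comments above):

```javascript
// Hypothetical sketch: the "role" goes in a system message, the actual
// objective in a user message. "some-model" is a placeholder.
function buildChatRequest(role, task) {
  return {
    model: "some-model",
    messages: [
      // The system message constrains the assistant to a role.
      { role: "system", content: `You are an expert ${role}.` },
      // The user message carries the actual objective.
      { role: "user", content: task },
    ],
  };
}
```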


That is a common narrative, but Google had LaMDA, an LLM with over 100B parameters, before the ChatGPT release. There was even a Xoogler who claimed it was sentient.

From my POV Google could have released a good B2C LLM before OpenAI, but it would compete with their own Ads business.


True. People forget that quite good LLMs existed 2-3 years before ChatGPT, from Google, Microsoft, Facebook… OpenAI itself open-sourced GPT-2 back in 2019 and ran a GPT-3 API service for years before ChatGPT.

The breakthrough that ChatGPT brought was not technical but strategic: the foresight to bet on laborious human-feedback fine-tuning to make LLMs somewhat controllable and practical. Those previous LLMs were mostly as “intelligent” as the GPT-3.5 that ChatGPT was built on, but they hallucinated heavily, and it was easy to manipulate them into being horribly racist and such. They remained niche tech demos until OpenAI tamed them, not really with new tech, just the right vision and lots of expensive experimentation.


Which better measurement do you propose?


It is done by the extension without any fancy stuff. Extensions can load static JS/CSS that bypasses the page's CSP, as long as it is declared in their manifest.json. Grammarly's manifest.json is here: https://gist.github.com/Daquisu/11eb1a7000b4141c4404edcc6e16...
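A minimal sketch of what such a declaration looks like (a generic Manifest V2-style example with illustrative values, not Grammarly's actual manifest):

```json
{
  "manifest_version": 2,
  "name": "example-extension",
  "version": "1.0",
  "content_scripts": [
    {
      "matches": ["<all_urls>"],
      "js": ["content-script.js"],
      "css": ["styles.css"]
    }
  ],
  "background": { "scripts": ["background.js"] },
  "permissions": ["<all_urls>"]
}
```

The declared content-script.js and styles.css are injected into matching pages regardless of the page's CSP.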

For a more advanced CSP bypass with an extension, you can:

1. Inject JS code into any webpage with a CSP.

2. Create an event listener in your content script and react to page messages accordingly.

3. Use your content script to communicate with the background script.

4. Use the background script to communicate with any website, including websites blocked by the CSP.

Basically, any website <-> extension content script <-> background script <-> any website.
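A sketch of that relay chain, with hypothetical message-type names I made up for illustration. The routing logic is plain JS so the protocol shape is clear; in a real extension the pieces sit behind window.postMessage, chrome.runtime.sendMessage, and chrome.runtime.onMessage:

```javascript
// Step 2: the page posts a message that the content script listens for
// (via window.postMessage / window.addEventListener in a real extension).
function makePageRequest(url) {
  return { type: "EXT_FETCH_REQUEST", url };
}

// Step 3: the content script forwards page requests to the background
// script. In a real extension, sendToBackground wraps
// chrome.runtime.sendMessage.
function forwardToBackground(message, sendToBackground) {
  if (message.type !== "EXT_FETCH_REQUEST") return null;
  return sendToBackground({ url: message.url });
}

// Step 4: the background script fetches on behalf of the page. The
// background context is not bound by the page's CSP, so it can reach
// origins the page itself cannot contact.
async function handleBackgroundMessage(msg, fetchImpl) {
  const response = await fetchImpl(msg.url);
  return { type: "EXT_FETCH_RESPONSE", body: await response.text() };
}
```

In a real extension, the background script registers a handler like this via chrome.runtime.onMessage.addListener and returns true from the listener to keep the channel open for the async response.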


Weird: they released Gemini 2.5, but I still can't use 2.0 Pro with a reasonable rate limit (currently 5 RPM).


See: the "temperature" parameter for LLMs
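For context, a minimal sketch of what temperature does during sampling: it divides the logits before the softmax, so low values sharpen the distribution (more deterministic) and high values flatten it (more random).

```javascript
// Temperature-scaled softmax over raw logits.
// T < 1 sharpens the distribution; T > 1 flattens it.
function softmaxWithTemperature(logits, temperature) {
  const scaled = logits.map((x) => x / temperature);
  const maxVal = Math.max(...scaled); // subtract max for numerical stability
  const exps = scaled.map((x) => Math.exp(x - maxVal));
  const sum = exps.reduce((a, b) => a + b, 0);
  return exps.map((e) => e / sum);
}
```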


It is interesting how France became so focused on analysis and on properly proving theorems, while applications don't get the same emphasis in prépa.

One professor of mine commented that most French engineers are better mathematicians than most mathematicians in Brazil.

It is the opposite of what the linked article describes happening in Weierstrass' time.

