They're all great, but the 2012 "The Uncensored Picture of Dorian Gray" is the closest to the original script before the editor cut out things that he deemed... checks notes... "too gay".

It restores parts that were cut, and essentially banishes chapter 3 and some other digressions on art history, which Wilde added as a literary beard, to the footnotes (still there to read, but set in context).

It's not a huge difference honestly, but I believe Oscar Wilde would want you to read that version.


That’s enough of an excuse for me to reread it. Along with Room With A View, that makes two books that made me laugh on every page.


I literally just hired Ben Horowitz last month, but I must assume that mine is the better systems and integrations engineer, so I consider myself to have gotten the better deal.


I inherited my great-grandma's recipe book that calls for "50 pence [pfennig] worth of almonds".

I spent a whole afternoon researching how many almonds you could buy for 50 pfennig in post-war West Germany in 1952.


It might be worth noting that humans also struggle with keeping up a coherent world model over time.

Luckily, we don’t have to; we externalize a lot of our representations. When shopping together with a friend we might put our stuff on one side of the shopping cart and our friend’s on the other. There’s a reason we don’t just play chess in our heads but use a chess board. We use notebooks to write things down, etc.

Some reasoning models can do similar things (keep a persistent notebook that gets fed back into the context window on every pass), but I expect that we need a few more dirty representationalist tricks to get there.

In other words, I don’t think it’s an LLM’s job to have a world model; an LLM is just one part of an AI system.
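
To make the notebook idea concrete, here's a minimal sketch of what such a loop could look like (the prompt format and function names are made up for illustration, not any particular framework's API):

    # rough sketch of an externalized "notebook" loop; `llm` is any
    # text-in/text-out callable, and the prompt format is invented
    def run_with_notebook(task, llm, max_steps=10):
        notebook = []  # persists across passes, fed back into every prompt
        for _ in range(max_steps):
            prompt = (
                f"Task: {task}\n"
                "Notes so far:\n" + "\n".join(notebook) + "\n"
                "Reply with 'NOTE: <something to remember>' or 'DONE: <answer>'."
            )
            reply = llm(prompt)
            if reply.startswith("DONE:"):
                return reply[len("DONE:"):].strip()
            notebook.append(reply.removeprefix("NOTE:").strip())
        return None  # gave up, but the notebook still holds the partial state

The point isn't the specific loop; it's that the persistent state lives outside the model, the same way the shopping cart does.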


Thanks for the shoutout!

I think it's funny that it's very similar to ensō in many ways, but also the complete opposite: ensō is calm, mindful, soothing. MDWa is hectic, terrifying, sadistic. Funny how a tiny difference produces products that look almost the same, and feel completely different.

huge props to rafal for creating ensō, personally really love it


"Simple. The car is actually a metaphor for generational trauma."

Honestly... chatGPT kind of wins this one.


author says we made no progress towards agi, yet gives no definition for what the "i" in agi is, or how we would measure meaningful progress in this direction.

in a somewhat ironic twist, it seems like the author's internal definition of "intelligence" fits much more closely with 1950s good old-fashioned AI, doing proper logic and algebra. literally all the progress we've made in ai in the last 20 years is precisely because we abandoned this narrow-minded definition of intelligence.

Maybe I'm a grumpy old fart, but none of these are new arguments. Philosophy of mind has an amazingly deep and colorful wealth of insights on this matter, and I don't know why it isn't required reading for anyone writing a blog on ai.


> or how we would measure meaningful progress in this direction.

"First, we should measure is the ratio of capability against the quantity of data and training effort. Capability rising while data and training effort are falling would be the interesting signal that we are making progress without simply brute-forcing the result.

The second signal for intelligence would be no modal collapse in a closed system. It is known that LLMs will suffer from model collapse in a closed system where they train on their own data."


I agree that both of those are very helpful metrics, but they are not a definition of intelligence.

yes, humans can learn to comprehend and speak language with orders of magnitude fewer examples than llms, but we also have very specific hardware for that, evolved over millions of years; it's plausible that language acquisition in humans is more akin to fine-tuning an llm than training one from the ground up. either way, this metric compares apples to oranges when it comes to real versus artificial intelligence.

model collapse is a problem in ai that needs to be solved, and maybe solving it is even a necessary condition for true intelligence, though certainly not a sufficient one, and hence not a definition of intelligence either.
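
to illustrate the closed-loop failure mode, here's a toy sketch (just a Gaussian repeatedly refit on its own samples; obviously nothing LLM-specific, all numbers arbitrary):

    # toy "model collapse": a model trained only on its own output,
    # generation after generation
    import numpy as np

    rng = np.random.default_rng(0)
    mu, sigma = 0.0, 1.0
    for gen in range(101):
        samples = rng.normal(mu, sigma, size=20)   # sample from the current model
        mu, sigma = samples.mean(), samples.std()  # "retrain" on that synthetic data
        if gen % 20 == 0:
            print(f"generation {gen:3d}: sigma = {sigma:.3f}")

the variance drifts toward zero because each generation only ever sees its predecessor's output, which is the basic mechanism behind the collapse the quoted comment describes.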


The bar you asked for was "meaningful progress". And since, as you state, both are "very helpful metrics", it seems the bar is met to the degree it can be.

I don't think we will see a definitive test, since we can't even precisely define intelligence. Other than heuristic signals like those stated above, the only thing left is observing performance in the real world. But I think measuring current progress via "benchmarks" is terribly flawed.


Good observation! Lower CO2 concentration isn't caused by simply inhaling more oxygen, but rather by blowing off too much CO2.

CO2 is produced by the body, and the rate at which it is produced doesn't change much if you breathe pure oxygen. How we get rid of that CO2 is what's being modulated during breathwork.


The tingles and muscle cramps (tetany) are a normal byproduct (basically, the neurons serving your smallest muscles and the ones under your skin get more excitable due to a molecular Rube Goldberg machine set off by the lower CO2 level in your blood). It's uncomfortable, but unless you suffer from epilepsy it's not dangerous, and there are no lasting effects.

I did a longer writeup on the physiological effects here if you're interested: https://docs.google.com/document/d/1RuDv_E9osM1CCFWZMywMru9J...


Very much on point.

That said, when I facilitate breathwork sessions I trade the peaceful hippie music for EDM (and it actually works better, because it encourages people to stay with the rhythm and get into the same mildly trance-like state you might get into while exercising to repetitive music).

