henry_pulver's comments

This is mesmerising!

Really well made, and I enjoyed the audio explanation.

It's a shame that it includes the now-mandatory discussion of how this shipping is actually bad because of carbon emissions. Seems to me the widespread availability of cheaper goods has been a great thing for humanity on balance!


> But all coming on here and saying "ooohh, this is bad, innit!" is not very interesting, and unlikely to prevent it.

I disagree - this is how the internet can strengthen democracy.

Upvoting and commenting makes this post hit the top of HN and stay there. This makes it visible to many EU citizens, who can reach out to their MEPs to ask them to vote against it. Seems a pretty effective strategy to me as someone living in a non-EU country.

Although I agree that we should also be discussing the questions you raised.


Complaining is the first step and a necessary one. But complaints need to be turned into critiques and more steps need to be taken.

I'll state that, as an American, I'm quite unhappy with this, as I know the regulations will also affect me, and the truth of the matter is that I have a much smaller voice here because I'm not a European citizen. I also worry because it was not that long ago that we saw the results of authoritarianism in Europe (though it did result in the strengthening of my country). My concern is that authoritarianism creeps, often with good intentions but poor foresight.

My biggest fear is that we did not learn the great lesson of WW2: Germany did not, in fact, go from good people to an entirely evil country and back to good people. If we can't understand this process and see how it actually happens, in detail, it will only repeat, led by the people who do understand it. But I don't know how to get people to understand subtleties, and that seems like a major issue in a world growing increasingly complex.


My understanding of the key point of this blog is:

> Instead of explaining the technical background first so listeners understand the solution to a problem, start with the problem. Then explain the context/technical background second


Wow, this has many more recordings & a larger variety! Thanks for sharing


What's so fascinating about projects like Aporee is the peculiar feeling one gets while listening to several of those field recordings in a row: here is a captured moment of reality that we will never get back. It's a moment of reality, a soundscape, that will never ever repeat in its 100% trueness or authenticity. This gives an odd angle to think about the passing of time. [end of esoteric rant, hm]


As far as I (an ex-ML researcher) know, the main technological case that LLM performance will hit a limit is that the amount of text data available to train on is limited. The way these scaling laws work, models require 10x or 100x the quantity of data to see major improvements.

This isn't necessarily going to limit progress, though. It's possible there are clever approaches to leveraging much more data, whether through AI-generated data, other modalities (e.g. video), or another approach altogether.
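To make the diminishing returns concrete, here's a minimal sketch of a Chinchilla-style power law in dataset size. The constants are invented for illustration, not fitted values from any real model:

```python
# Illustrative sketch: loss as a power law in dataset size D (tokens).
# E, B and beta are made-up constants, not fitted to any real model.
def loss(D, E=1.7, B=400.0, beta=0.3):
    return E + B / D**beta

for D in (1e9, 1e10, 1e11):  # 1B, 10B, 100B tokens
    print(f"{D:.0e} tokens -> loss {loss(D):.2f}")
```

Each 10x in data buys a smaller absolute improvement than the last, which is why running out of text is the binding constraint.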

This is quite a good accessible post on both sides of this discussion: https://www.dwarkeshpatel.com/p/will-scaling-work


Discussed here:

Will scaling work? - https://news.ycombinator.com/item?id=38781484 - Dec 2023 (283 comments)


This is fantastic!

The dice roll animation is :chefkiss:


100% agreed. Must have been a very tough decision. But good for him.

Taking selfless actions like these, which have major personal costs, requires serious courage.


It doesn’t seem selfless. It sounds quite narcissistic and as though he has a savior complex.

Even Altman admits ChatGPT didn’t have the economic impact he thought it would. The promoter has finally come to terms with reality.


Great idea - I spend far too long reading & writing OpenAPI!

Particularly anyOf, allOf and oneOf (especially when nested) lead to really confusing nested specifications in OpenAPI. Really like how TypeSpec handles unions & intersections.
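For a sense of the verbosity, here's a hypothetical nested union expressed as a raw OpenAPI-style schema, built as a plain Python dict (the schema names are invented for illustration):

```python
# Hypothetical nested union in raw OpenAPI/JSON Schema form:
# a response that is either an Error or a list of strings/integers.
schema = {
    "oneOf": [
        {"$ref": "#/components/schemas/Error"},
        {
            "type": "array",
            "items": {"anyOf": [{"type": "string"}, {"type": "integer"}]},
        },
    ]
}
# In TypeSpec this reads roughly as a one-liner: Error | (string | int32)[]
print(schema)
```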

Playground is great for getting a feel for it fast too


I use the OpenAI tokenizer UI a lot when prompt engineering.

Token count for inputs allows comparison of different data formats (YAML, JSON, TS) and is a crude measure of prompt importance weighting. For outputs it is a relative measure of output speed between prompts (tok/s varies by time of day) and a crude measure of compute used in outputs (why “Think step-by-step” works). Token count also determines the cost of a prompt.
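When a provider's tokenizer isn't handy, I sometimes fall back on the crude ~4-characters-per-token rule of thumb to compare formats. A sketch (the heuristic and sample data are mine, not any provider's tokenizer):

```python
import json

def approx_tokens(s: str) -> int:
    # Crude heuristic: roughly 4 characters per token for English-like text.
    # Use the provider's real tokenizer for anything that matters.
    return max(1, len(s) // 4)

data = {"user": {"name": "Ada", "roles": ["admin", "dev"]}}
as_json = json.dumps(data)
as_yaml = "user:\n  name: Ada\n  roles:\n    - admin\n    - dev\n"
print("JSON:", approx_tokens(as_json), "YAML:", approx_tokens(as_yaml))
```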

Since there’s no equivalent for other providers, I built one for Mistral & Anthropic. If it’s useful, I can add other providers too - let me know which you’d like.


Thanks for building this. Are the tokens different for the different models? For example, will the Mistral tokenization apply to both the 7B open model and their proprietary API-only models?


Which tokenizers Mistral uses for their proprietary models isn't common knowledge.

This tokenizer is correct for the 7B open model and the 8x7B MoE model. It'll probably be the closest to the ones their proprietary API-only models use.


I didn't fully get the value from reading the post, so I thought I'd give it a try. Our company is open source, so I put in our actual URL :)

Sadly it errored with the classic NextJS:

Application error: a client-side exception has occurred (see the browser console for more information).

The error in the console was: 601-ce9691b65ce5066e.js:4 Error: An error occurred in the Server Components render. The specific message is omitted in production builds to avoid leaking sensitive details. A digest property is included on this error instance which may provide additional details about the nature of the error.


Heads up, we found the root cause of yesterday's errors, so it should be working now.


working on it!

