
Tried this last night and it works really well so I figured I'd share it here, even though it's a few years old. It can even take advantage of your GPU to run faster. The sample song I tested with is 5:56 and it only took 30 seconds for this tool to split it into separate .wav files for vocals, guitar, piano, bass, etc. I recommend following this guide here: https://www.reddit.com/r/musicproduction/comments/1704kob/co...


Time to reset the clock. 0 days since a new web bundler was released (in Rust!!).

So tired of this ecosystem and its ceaseless tide of churn and rewrites and hypechasing.


> The browser might default to 16px font size, but users can pick something else. So if somebody had poor vision and increased their default font size; or if they had a small laptop screen and decreased their default font size, 1em * 62.5% != 10px, everything the designer set in ems was a different size than they intended, and a lot of their page layouts disintegrated into an unholy mess.

Howdy, author of the article you're responding to (but not the person who originally discovered/pioneered this trick). This is not true, and my article explains why.

The 62.5% trick is rarely used on its own, but people often cite it this way, leading to confusion. In practice, you set the root font size to 62.5% (of the user agent font size) and then also scale all body text back up to 1.6rem so it's not stuck at 10px. From here on out, all font sizes remain proportional no matter what, even if a user changes their preferred root font size in browser settings. Play around with the math if you doubt it (the article offers examples).
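
For concreteness, the full setup looks something like this (a minimal sketch; the h1 rule is just an invented example of the resulting mental math):

```css
html {
  /* 62.5% of the user's preferred font size: 10px at the 16px
     default, so 1rem maps to 10px and px-to-rem math is trivial. */
  font-size: 62.5%;
}

body {
  /* Scale body copy back up: 16px at the default, but still
     proportional to whatever root size the user picked. */
  font-size: 1.6rem;
}

h1 {
  /* A "24px" heading from the design becomes 2.4rem. */
  font-size: 2.4rem;
}
```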

> everything the designer set in ems was a different size than they intended

That's working as intended—your design will scale up or down, according to the user's preferred font size. If you don't like this, your only option is to set font sizes in pixels, which [you shouldn't be doing anyway](https://www.aleksandrhovhannisyan.com/blog/use-rems-for-font...) (disclaimer: also written by me).


> From here on out, all font sizes remain proportional no matter what, even if a user changes their preferred root font size in browser settings.

I think you might have misunderstood my point. My point is not that font size is no longer proportional. It’s actually critical to my point that it is proportional.

The 62.5% trick became popular because designers were used to designing with pixels and didn’t want to design with more fluid units like em. They were forced to by external requirements or often they were just following trends without really understanding them. So they used ems with the 62.5% trick as a substitute for pixels but didn’t change how they designed. So they were still designing with pixels in theory, but using ems as a really bad placeholder.

So if they wanted something to be 100px wide, using the 62.5% trick, they would set it to 10em. This will not get them something that is 100px wide though. This will get them something that is 10em wide, which will happen to be rendered 100px wide only when the browser is using a 16px default font size.

What happens when that fake-100px-but-actually-10em wide element is meant to coexist with something that is actually set in pixels? For instance, a 120px skyscraper ad? The things sized in fake-pixels-but-actually-ems will change proportionally with the user’s font size, but the things sized in real pixels will not. All of a sudden different elements on the page have different ideas about what scale they should be rendered at, and the layout falls apart.
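
A minimal sketch of that mixing (class names invented):

```css
html { font-size: 62.5%; } /* 10px, but only at the 16px default */

.article-body {
  /* "100px" from the mockup. At a 20px user default, the root
     computes to 12.5px, so this renders at 125px instead. */
  width: 10em;
}

.skyscraper-ad {
  /* Real pixels: never scales with the user's font size. */
  width: 120px;
}
```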

Were you surfing the web with a non-default font size when this particular practice took off? I was, and I could always tell when a site started to use it because their layouts all got screwed up.

If you want to design in pixels, design in pixels. If you want to design in ems, design in ems. But don’t use ems as fake pixels, because it cannot work reliably. The two units are fundamentally different and you cannot convert between them. One is rooted in a subjective user preference that can be different on every device; the other is an objective measurement.


> What happens when that fake-100px-but-actually-10em wide element is meant to coexist with something that is actually set in pixels? For instance, a 120px skyscraper ad? The things sized in fake-pixels-but-actually-ems will change proportionally with the user’s font size, but the things sized in real pixels will not. All of a sudden different elements on the page have different ideas about what scale they should be rendered at, and the layout falls apart.

This is technically also working as intended. When a user scales their preferred font size in their browser settings, their expectation is that the font sizes on pages will scale accordingly, not that every element will scale. The latter is what zoom accomplishes, but there's a reason why both zoom and preferred font sizes exist in browser settings.

In your example, the ad (or image, or whatever) should only be sized in rems/ems if it has text. For all other elements that aren't related to text, it makes more sense to size with pixels. If everything is sized in ems/rems, then scaling the preferred font size behaves identically to zoom. This is less than ideal because if I want to increase font sizes, and what you do in response is zoom the whole page, then there is less space for that same text to fit on the page because it competes with other scaled elements. So while I can read the text more easily because the glyphs are larger, I can read _less_ of the text within the same amount of space than if text were the only scaled element.

Also, at least in my experience, designers aren't the ones thinking in ems; they typically hand us Figma compositions that use pixels, and we translate to rems in our code base. Designers design for the base experience, and we are responsible for coding it in a way that scales proportionally/in a way that respects user preferences.


> So if they wanted something to be 100px wide, using the 62.5% trick, they would set it to 10em.

I never ever heard of anyone using the 62.5% trick for sizing elements. I only ever heard it being used for text.


> Indeed, as the November 2023 drama was unfolding, Microsoft’s CEO boasted that it would not matter “[i]f OpenAI disappeared tomorrow.” He explained that “[w]e have all the IP rights and all the capability.” “We have the people, we have the compute, we have the data, we have everything.” “We are below them, above them, around them.”

Yikes.

This technology definitely needs to be open source, especially if we get to the point of AGI. Otherwise Microsoft and OpenAI are going to exploit it for profit for as long as they can get away with it, while open source lags behind.

Reminds me of the moral principles that guided Zimmermann when he made PGP free for everyone: A powerful technology is a danger to society if only a few people possess it. By giving it to everyone, you even the playing field.


Just going to note that it is widely suspected that Hal Finney did much of the programming on PGP with Zimmermann taking the heat for him.


I can confirm that is true second-hand from his former boss at that time. Other biographies, profiles, and interviews also support it.


The work's already been done for the most part. Mixtral is to GPT what Linux was to Windows. Mistral AI has been doing such a good job democratizing Microsoft's advantage that Microsoft is beginning to invest in them.


Microsoft just bought off Mistral into no longer releasing open weights and scrubbing all references to them from their site…?


There's a "Download" button for their open models literally two clicks away from the homepage.

Click "Learn more" under the big "Committing to open models" heading on the homepage. Then, because their deeplinking is bad, click "Open" in the toggle at the top. There's your download link.


See “no longer” in my original comment. They just announced their new slate of models, none of which are open weights. The models linked to download are the “before Microsoft $$$, Azure deal, and free supercomputers” ones.


This is Linux all over again: Microsoft is going to use every trick and dollar they have to fight open source.

/I'm too old to fight that battle again...


Sure, but they clearly haven't "scrubbed all references" of their open weights from their site.


Sorry, they’ve just scrubbed most of the references and otherwise edited their site to downplay any commitment to open source, post-Microsoft investment.


Which is Mistral 7B and Mixtral 8x7B. Mistral Large belongs to the closed source optimized models.


> A powerful technology is a danger to society if only a few people possess it. By giving it to everyone, you even the playing field.

Except nukes. Only allies can have nukes.


I guess if you want a nuclear apocalypse, then giving the tech to people who would rather see the world end than be "ruled by the apostates" sounds like a great plan.


Russia has nukes, China has nukes, Pakistan has nukes....

And yet, no countries with nukes have ever gone to war with each other.


India and Pakistan fight skirmishes along their contested border all the time. (And re your first line, Pakistan is a US ally, at least in theory.)


1. In 2024, India is a closer ally than Pakistan. (Major Defense Partner)

2. Yep. Skirmishes, not wars.


Is that really the case? Nukes are supposed to be deterrents. If only groups aligned with each other have nukes, that sounds more dangerous than enemies having nukes and knowing they can't use them.


Until people who believe in martyrdom get them


I don't trust OpenAI or Microsoft, but I don't have much faith in democratization either. We wouldn't do that with nukes, after all.


> I don't trust OpenAI or Microsoft, but I don't have much faith in democratization either. We wouldn't do that with nukes, after all.

Dangerous things are controlled by the government (in a democracy, a form of democratization). It's bizarre and shows the US government's self-inflicted helplessness that they haven't taken over a project that its founders and developers see as a potential danger to civilization.


Nukes blow up cities.


Not just for profit. It's also about power.


The technologies that power LLMs are open source.


If we get to the point of AGI then it doesn’t matter much; the singularity will inevitably occur and the moment that AGI exists, corporations (and the concept of IP) are obsolete and irrelevant. It doesn’t matter if the gap between AGI existing and the singularity is ten hours, ten weeks, ten months, or ten years.


> A powerful technology is a danger to society if only a few people possess it. By giving it to everyone, you even the playing field.

That's why we all have personal nukes, of course. Very safe.


I shudder at a world where only corporations had nukes.


And yet, still safer than everyone having nukes...

It's unfortunate that the AGI debate still hasn't made its way very far into these parts. Still have people going, "well, this would be bad too." Yes! That is the existential problem a lot of people are grappling with. There is currently, and likely will be, no good way out of this. Too much "Don't Look Up" going on.


Nuclear weapons are a ridiculous comparison and only further the gaslighting of society. At the barest of bare minimums, AI might, possibly, theoretically, perhaps pose a threat to established power structures (like any disruptive technology does). However, a nuclear weapon definitely destroys physical objects within its effective range. Relating the two is ridiculous.


A disembodied intelligent agent could still trigger or manipulate a person into triggering a weapon.


So can a human, yet we don't ban those. I don't think AI is going to get better at manipulating people than a sufficiently skilled human.

What might be scary is using AI for a mass influence operation, propaganda to convince people that, for example, using a weapon is necessary.


We do prosecute humans who misuse weapons. The problem with AI is that the potential for damage is hard to even gauge; potentially an extinction event, so we have to take more precautions than just prosecuting after the fact. And if the AI has agency, one might argue that it is responsible... what then?


It's not a ridiculous comparison. This thread involves Sam Altman and Elon Musk, right?

Sam Altman: "Development of superhuman machine intelligence is probably the greatest threat to the continued existence of humanity."

In the essay "Why You Should Fear Machine Intelligence" https://blog.samaltman.com/machine-intelligence-part-1

So, more than nukes then...

Elon Musk: "There’s a strong probability that it [AGI] will make life much better and that we’ll have an age of abundance. And there’s some chance that it goes wrong and destroys humanity."


And then claimed his GitHub was "hacked" to save his ass. And was somehow not banned by GitHub despite clearly violating their TOS.


Web dev here, working with React for ~4 years now. I really don't enjoy it as much as I used to. It has made me dislike JavaScript in general, and I often find myself wishing things were simpler and more performant. I can't emphasize enough how relatable this part is:

> We shouldn’t need to do that—especially for a framework that so often claims it’s “just JavaScript.” If it’s just JavaScript, then it should just work with anything that’s actually just JavaScript.

And this:

> I have a confession to make: I’m still not exactly sure what the difference between useMemo and useCallback is—or when you should and shouldn’t use them—even though I literally read multiple articles on that exact topic earlier today. (No joke.)

> I have a second confession: it’s still not intuitive to me what should and shouldn’t go into the useEffect dependency array, or why. I feel like every time I write a useEffect call, I spend like 15 minutes refactoring my code to be in a shape the linter likes, even when I’m 99% certain it’s actually fine and it’s not going to suck my app into an infinite abyss.
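
For what it's worth, the mental model that eventually stuck for me: useCallback caches the function itself, while useMemo caches the function's return value, so useCallback(fn, deps) behaves like useMemo(() => fn, deps). A rough sketch (the component and child are made up):

```jsx
import { useCallback, useMemo } from 'react';
import ResultsList from './ResultsList'; // hypothetical memoized child

function SearchResults({ query, items }) {
  // useMemo caches the RESULT of calling the function: the filtered
  // array is only recomputed when items or query change.
  const matches = useMemo(
    () => items.filter((item) => item.includes(query)),
    [items, query]
  );

  // useCallback caches the FUNCTION itself, so a memoized child
  // receiving it as a prop sees a stable reference across renders.
  const handleSelect = useCallback(
    (item) => console.log('selected', item, 'for', query),
    [query]
  );

  // The useCallback above is equivalent to:
  // const handleSelect = useMemo(() => (item) => { ... }, [query]);

  return <ResultsList items={matches} onSelect={handleSelect} />;
}
```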


Great article all around.

TL;DR: A cross-origin request can still be same site. Also, SameSite cookies do not prevent cookies from being included in malicious requests originating from subdomains because "site" is by definition scheme (e.g., https) plus eTLD+1 (e.g., example.com).

Example: https://subdomain.example.com can submit a malicious POST to https://example.com/delete-account and the user's session cookie would still get included in the request headers. This is why CSRF tokens are commonly employed on top of SameSite cookies as an added layer of protection.
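
A minimal sketch of that attack, assuming the attacker can serve HTML from the subdomain (say, via a subdomain takeover or an XSS hole; the form field is made up):

```html
<!-- Served from https://subdomain.example.com -->
<!-- Same scheme + eTLD+1 as https://example.com, so even a
     SameSite=Strict session cookie rides along with this POST. -->
<form id="f" action="https://example.com/delete-account" method="POST">
  <input type="hidden" name="confirm" value="yes" />
</form>
<script>
  document.getElementById('f').submit();
</script>
```

A per-request CSRF token embedded in the legitimate form defeats this, because the attacker's page has no way to read the token's value.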


This is something I've been mulling over for a while now. Folks who've been writing before the inception of ChatGPT have a historical record of their work, so one can probably trust that they will continue to author their own content. But how will the next generation of writers convince others that their writing is truly their own work and not the product of AI? It's going to be one big struggle for validation. If you can't beat 'em...


Police: We suspect your neighbor committed a crime...

You: Okay

Police: ...so we're going to need all the footage from inside your home

You: Wait wha—

Judge: Sure, I'll sign a warrant for this. That sounds perfectly reasonable.


More like:

Police: Hey can we get a warrant for all the Ring cameras on this street. Oh and we know for a fact that there's a guy with more exterior cameras who has refused to share footage with us.

Judge: Sure, I'll sign that warrant. Nobody would be dumb enough to put Ring cameras in their house.

Guy who is that dumb: I'm not going to fight the warrant even though I got a letter saying I can totally prevent my interior cameras from being included in the data Ring turns over.


And so you move to quash some or all of that request!


Simple as...?


> I don't think anyone expects AI to displace "name brand" art

The problem isn't for existing artists (except in terms of ethical issues)—it's for new/budding artists, who will have to contend with challenges to the authenticity of their work. How can they prove that they put their blood and sweat into making a piece of artwork by hand when an AI could've generated something equally passable?

