FootballMuse's comments | Hacker News

> A one-time purchase will still be available, but access to some of the premium content is available only to Apple Creator Studio subscribers. If you already own Final Cut Pro, it will continue to be updated.

Looks like some new "premium content" features will only be available to those with a subscription.


Imagine if Visual Studio said "you can't use VS to build any product or service which may compete with a Microsoft product or service"


They were hiring about two years ago: https://tailwindcss.com/blog/hiring-a-design-engineer-and-st...

A Design Engineer and Staff Software Engineer both for $275k


Well that explains it then, don't offer stupid salaries before you make stupid money...


It's not even adding functionality to the library code, it's a PR to their docs. If you just want optimized docs for your LLM to consume, isn't that what [Context7](https://context7.com/websites/tailwindcss) already provides? Why force this new responsibility onto the maintainer?



Can they double the memory lanes without switching socket? If not, I feel like the PC is going to fall even further behind Apple chips. Having RAM on the chip sucks for repairability, but 500 GB/s main RAM bandwidth is insane.

They stumbled in the right direction with Strix Halo, but I have a feeling they won't recognize the win and follow up.


The "insane" RAM bandwidth makes sense with Apple M chips and Strix Halo because it's actually "crap" VRAM bandwidth for the GPU. What makes those nice is the quantity of memory the GPU has (even though its slow), not that the CPU has tons of RAM bandwidth.

When you go to desktop, it becomes harder to justify beefed-up memory controllers just for the CPU versus putting that budget towards some other part of the CPU that has more of an impact on cost or performance.


Yeah, the only use of the large bandwidth in Apple Silicon is for the GPU. I'm always amazed by the fanboys who keep hyping this trope.

Even when feeding all cores, the max bandwidth used by the CPU is less than 200 GB/s; in fact it is quite comparable to Intel/AMD CPUs and even less than their high-end ones (x86 still rules on the multi-core front in any case).

I actually see this as a weakness of Apple Silicon, because it doesn't scale that well. It's basically the problem of their Ultra chip: it doesn't allow doubling of the compute or faster RAM bandwidth; you only get higher RAM capacity in exchange for slower GPU compute.

They just scaled up their mobile architecture and it has its limit.


No, but they can skip the socket, much like many of the mini-pcs/SFFs that include laptop chips in small desktops. Strix halo already doubled the memory channels and the next gen is supposedly going to move the memory bus from 256 bits wide to 384 bits.


Not easily, and you will need a new motherboard anyhow, because the two slots you can have per lane are wired in tandem.


The socket I/O locks in the number of memory channels. Some pins could be repurposed, but that's pretty much a new socket anyway.

They could in theory do on-package DRAM as a faster first-level memory, but I doubt we'll see that anytime soon on desktop, and it probably wouldn't fit under the heat spreader.


They already do the latter with X3D.

You won't be able to add RAM to the die itself; there's really no room on the interposer.


> Can they double the memory lanes without switching socket?

Sure. Keep the DIMM sockets and add HBM to the CPU package.

Actually probably the best possible architecture. You can choose to have both or only one, backward compatible and future proof.

Yes, it adds another level to the memory hierarchy but that can be fine tuned.


It's really not that simple; the unpopulated memory slots will cause havoc with signal integrity. Four-slot boards already suffer from this.

You are also overestimating how much room there is on the interposer.

As someone with a 9950x3d with direct die cooling setup I can tell you there is no room.


So Zen 6/7 will have a core design and a CCD design. But like past gens, these will be packaged into different products with different sockets and packages (everything from monolithic APUs to sprawling multi-chiplet server CPUs).

So saying that Zen 6/7 supports AM5 on desktop doesn't necessarily exclude the Zen 6/7 product family in general also supporting other new/interesting sockets on desktop (or mobile). Maybe products for AM6 and AM5 from the same Zen family.

Medusa Halo and the Zen 7 based 'Grimlock Halo' version might be the interesting ones to watch (if you like efficient Apple-style big APUs with all the memory bandwidth).


Being able to federate REST alongside GQL has been a value add in my experience. Apollo even has the ability to do this client side.
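
For reference (not from the original comment), a rough sketch of what that client-side option looks like, assuming the community apollo-link-rest package; the endpoint and field names are made up:

    import { ApolloClient, InMemoryCache, gql } from "@apollo/client";
    import { RestLink } from "apollo-link-rest";

    // Point a RestLink at a plain REST API and query it through Apollo.
    const client = new ApolloClient({
      link: new RestLink({ uri: "https://api.example.com" }),
      cache: new InMemoryCache(),
    });

    // The @rest directive maps a field to a REST call, so REST data can sit
    // alongside normal GraphQL fields with the same selection-set semantics.
    const GET_USER = gql`
      query GetUser {
        user @rest(type: "User", path: "/users/1") {
          id
          name
        }
      }
    `;

    const { data } = await client.query({ query: GET_USER });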


Pruning a response does nothing since everything still goes across the network


Pruning the response would help validate that your response schema is correct and that it is delivering what was promised.

But you're right, if you have version skew and the client is expecting something else then it's not much help.

You could do it client-side so that if the server adds an optional field the client would immediately prune it off. If it removes a field, it could fill it with a default. At a certain point too much skew will still break something, but that's probably what you want anyway.
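
A minimal sketch of that idea (assuming zod, which the original comment mentioned; the field names here are made up):

    import { z } from "zod";

    const UserSchema = z.object({
      id: z.string(),
      name: z.string(),
      // If the server stops sending this field, fall back to a default
      // instead of breaking the client.
      avatarUrl: z.string().default("/img/default-avatar.png"),
    });

    // zod object schemas strip unknown keys by default, so a field a newer
    // server adds is silently pruned off on the client.
    const raw = { id: "u_1", name: "Ada", newServerField: true };
    const user = UserSchema.parse(raw);
    // -> { id: "u_1", name: "Ada", avatarUrl: "/img/default-avatar.png" }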


You're misunderstanding. In GraphQL, the server prunes the response object. That is, the resolver method can return a "fat" object, but only the object pruned down to just the requested fields is returned over the wire.

It is an important security benefit, because one common attack vector is to see if you can trick a server method into returning additional privileged data (like detailed error responses).
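
A minimal sketch of what that looks like with graphql-js (not the parent's code; the type and extra fields are made up):

    import { graphql, buildSchema } from "graphql";

    const schema = buildSchema(`
      type User {
        id: ID!
        name: String
      }
      type Query {
        me: User
      }
    `);

    const rootValue = {
      // The resolver returns a "fat" object...
      me: () => ({
        id: "u_1",
        name: "Ada",
        passwordHash: "should-never-leave-the-server", // not in the schema
        debugTrace: "stack trace here",                // not in the schema
      }),
    };

    const result = await graphql({ schema, source: "{ me { id name } }", rootValue });
    // result.data is { me: { id: "u_1", name: "Ada" } }: only fields that are
    // both in the schema and in the query go over the wire.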


I would like to remind you that in most cases the GQL is not colocated on the same hardware as the services it queries.

Therefore requests between GQL and downstream services are travelling "over the wire" (though I don't see it as an issue)

Having REST APIs that return only "fat" objects is really not the most secure way of designing APIs.


"Just the requested fields" as requested by the client?

Because if so that is no security benefit at all, because I can just... request the fat fields.


I wanted to refute you but you're right. It's not a security benefit. With GQL the server is supposed to null out the fields that the user doesn't have access to, but that's not automagic or an inherent benefit to GQL. You have the same problem with normal REST. Or maybe less so because you just wouldn't design the response with those extra fields; you'd probably build a separate 'admin' or 'privileged' endpoint which is easier to lock down as a whole rather than individual fields.


I'll explain again, because this is not what I'm saying.

In many REST frameworks, while you define the return object type that is sent back over the wire, by default, if the actual object you return has additional fields on it (even if they are found nowhere in the return type spec), those fields will still get serialized back to the client. A common attack vector is to try to get an API endpoint to return an object with, for example, extra error data, which can be very helpful to the attacker (e.g. things like stack traces). I'd have to search for them, but some major breaches occurred this way. Yes, many REST frameworks allow you to specify things like validators (the original comment mentioned zod), but these validators are usually optional and not always directly tied to the tools used to define the return type schema in the first place.

So with GraphQL, I'm not talking about access controls on GraphQL-defined fields - that's another topic. But I'm saying that if your resolver method (accidentally or not) returns an object that either doesn't conform to the return type schema, or it has extra fields not defined in the schema (which is not uncommon), GraphQL guarantees those values won't be returned to the client.
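
As an illustration of that failure mode (a hypothetical Express handler and made-up field names, not any specific breach):

    import express from "express";

    const app = express();

    // Hypothetical data-access helper: imagine an ORM entity that carries more
    // fields than the documented response shape ({ id, name }).
    async function loadUser(id: string) {
      return {
        id,
        name: "Ada",
        passwordHash: "not-in-the-documented-response",
        internalFlags: { beta: true },
      };
    }

    app.get("/api/users/:id", async (req, res) => {
      const user = await loadUser(req.params.id);
      // res.json() serializes whatever it is given; the extra fields reach the
      // client unless something explicitly whitelists the response shape.
      res.json(user);
    });

    app.listen(3000);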


I like this chart from September, which shows multiple generations, and it's only worsened since.

https://x.com/jukan05/status/1969551230881185866


Uncompressed, absolutely; we need another generation bump with over 128 Gbps for 8K@120Hz with HDR. But with DSC, HDMI 2.1 and the more recent DisplayPort 2.0 standards make it possible, though support isn't quite there yet.
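
Rough back-of-the-envelope for that figure (my arithmetic, not from the thread):

    // 8K @ 120 Hz, 10-bit RGB (HDR)
    const width = 7680, height = 4320, fps = 120, bitsPerPixel = 3 * 10;
    const activeGbps = (width * height * fps * bitsPerPixel) / 1e9;
    console.log(activeGbps.toFixed(1)); // ≈ 119.4 Gbit/s of pixel data alone
    // Blanking and link-layer encoding overhead push the required link rate
    // past ~128 Gbit/s, well beyond HDMI 2.1 FRL (~42.7 Gbit/s effective) or
    // DP 2.x UHBR20 (~77.4 Gbit/s effective) without DSC.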

Nvidia quotes 8K@165Hz over DP for their latest generation. AMD has demoed 8K@120Hz over HDMI, but not on a consumer display yet.

https://en.wikipedia.org/wiki/DisplayPort#Refresh_frequency_...

https://en.wikipedia.org/wiki/HDMI#Refresh_frequency_limits_...

https://www.nvidia.com/en-gb/geforce/graphics-cards/compare/


Passmark is an outdated benchmark not well optimized for ARM. Even so, the single-thread marks are 3864 (AI365) vs 4550 (M4).

OTOH, Geekbench correlates (0.99) with SPEC, the industry standard in CPU benchmarking and what enterprise companies such as AWS use to judge CPU performance.

https://medium.com/silicon-reimagined/performance-delivered-...


I see you are citing a six-month-old post which itself isn't really well sourced, doesn't really reach consensus, and doesn't have a definitive answer.

https://news.ycombinator.com/item?id=43287208

The article in question doesn't mention subpar ARM optimizations.


Hmm, why would you need to optimize a benchmark for something? Generally it's the other way round.


> Hmm, why would you need to optimize a benchmark for something? Generally it's the other way round.

It has always gone both ways. This is why there exist(ed) quite a lot of people who have/had serious doubts about whether [some benchmark] actually measures the performance of, e.g., the CPU or the quality of the compiler.

The "truce" that was adopted concerning these very heated discussions was that a great CPU is of much less value if programmers are incapable of making use of its power.

Examples that evidence the truth of this "truce" [pun intended]:

- Sega Saturn (very hard to make use of its power)

- PlayStation 3 (Cell processor)

- Intel Itanium, which (besides some other problems) needed a super-smart compiler (which never existed) so that programs could make use of its potential

- in the last years: claims that specific AMD GPUs are as fast as or even faster than NVidia GPUs (also: for the same cost) for GPGPU tasks. Possibly true, but CUDA makes it easier to make use of the power of the NVidia GPU.


> - Intel Itanium, which (besides some other problems) needed a super-smart compiler (which never existed) so that programs could make use of its potential

Well, no such thing is possible. Memory access and branch prediction patterns are too dynamic for a compiler to be able to schedule basically anything ahead of time.

A JIT with a lot of instrumentation could do somewhat better, but it'd be very expensive.

