> I don't think modern Apple Photos is built to handle slow spinning rust very well.
Apple's software is garbage, so I'm sure you're correct, but modern hard drives can do several hundred MB/s of throughput. How is that not fast enough for a freaking photo application? For Apple not to test/support this use case is inexcusable.
Modern external hard drives are more like 120 MB/s, not "several hundred" MB/s, last I checked, and even that's only for sequential access.
Scanning and accessing a photo library is extremely random I/O and has nothing to do with the peak sequential throughput. Hard drives are awful at random I/O.
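To put numbers on it, here's a minimal Go sketch (the file path is hypothetical, and you'd want a test file larger than RAM so the OS page cache doesn't hide the difference) that times sequential vs. random 4KB reads from the same file:

```go
package main

import (
	"fmt"
	"math/rand"
	"os"
	"time"
)

func main() {
	// Assumes a large pre-existing test file on the drive under test;
	// the path is hypothetical. Use a file bigger than RAM so the
	// page cache doesn't skew the random pass.
	f, err := os.Open("/Volumes/External/testfile.bin")
	if err != nil {
		panic(err)
	}
	defer f.Close()

	info, _ := f.Stat()
	size := info.Size()
	buf := make([]byte, 4096)
	const reads = 1000

	// Sequential: 1000 consecutive 4KB blocks.
	start := time.Now()
	for i := 0; i < reads; i++ {
		f.ReadAt(buf, int64(i)*4096)
	}
	fmt.Println("sequential:", time.Since(start))

	// Random: 1000 4KB blocks at random offsets, forcing a head
	// seek on a spinning disk for nearly every read.
	start = time.Now()
	for i := 0; i < reads; i++ {
		f.ReadAt(buf, rand.Int63n(size-4096))
	}
	fmt.Println("random:", time.Since(start))
}
```

On a typical 5400rpm drive each random read costs a seek plus rotational latency of roughly 10ms, so you're looking at on the order of 100 reads/second, no matter how good the sequential number is.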
If Apple kept proper indexes and thumbnails somewhere (especially somewhere fast, like on an SSD), maybe it would be fine, but I have heard some bad things about Apple Photos on hard drives, so they might not be doing things optimally.
The pain many people experience with external hard drives and Apple Photos is Photos building that index and those thumbnails: huge amounts of I/O that apparently need to be redone constantly [1]. It probably also happens on SSDs, but you can't tell because you can't hear it.
This is whataboutism, but yes, the Surface Go 2 base model has 4GB of RAM. It's shameful, but that laptop is also half the price of the cheapest Mac laptop, and none of this absolves either company of blame for ridiculously high memory and storage prices.
Because the alternative is to add literally a few dollars to the bill of materials for the machine, double the memory, and give everyone a better experience. This also means the machine will last longer and likely not end up in a landfill as soon.
$200 to add 8GB of memory is _insane_, and framing it as "consumer choice" is disingenuous when consumers are being gouged so badly. It's literally at least a 20x markup on the wholesale cost. You can buy 8GB of DDR4 at retail for $20 or less.
Yes, and? Apple's wholesale RAM cost is likely even lower than a computer with socketed RAM, due to economies of scale and fewer parts overall (RAM soldered directly vs. RAM on a separate PCB, plus a socket soldered to the main board).
You seem to believe that doubling (or quadrupling as suggested elsewhere) the installed DRAM has no energy cost. Adding several watts of mandatory 24x7 idle power consumption to a machine mostly celebrated for its energy efficiency seems odd.
My understanding is that, at least for the lower-end machines, there are no additional DRAM chips in the 16GB machines vs. the 8GB ones; they just use chips with double the density, so the power consumption remains the same.
Even if I'm completely wrong on the above, I seriously doubt that adding 8GB of memory to a machine that consumes 7W at idle in total[1] would add "several watts of mandatory 24x7 idle power consumption."
8GB of general-purpose RAM, regardless of form factor, is dirt cheap. In fact, not having the sockets might make it cheaper than socketed RAM in volume, since there are fewer parts in total.
Except, with Apple's new kit here, the RAM isn't simply some external chip soldered to the main board; it's actually on-die with the CPU silicon (and everything else on that silicon: GPU, memory controllers, etc.).
So yes, arguably there are fewer parts (just one), but in the event of e.g. some bad RAM during manufacture, it's far more costly to throw out the chip containing that bad RAM.
No. It is not possible to make DRAM on the same silicon process as high-performance CPU logic. It is a myth that Apple Silicon includes the RAM on its die. Apple uses external LPDDR packages, just like everyone else, which you can clearly see in this photograph of the Mac mini's CPU module: https://valkyrie.cdn.ifixit.com/media/2021/01/28102657/m1_ch...
Those chips on the right side are LPDDR4x chips (which you can verify by googling the part numbers visible on them). They are "off-the-shelf" so to speak, not custom on-die memory.
Usually I see that term used when the thing being considered is a pricey upgrade, and you need to strike a compromise between price and performance.
In this case, we're talking about an extra 8GB of memory, which would add perhaps $10 to the bill of materials for the machine (or maybe less in sufficient volume). Given that Apple is also overcharging by at least 3x the current standard retail price for SSD upgrades, my guess is that there's some room to bump up the wholesale cost a bit.
Not doing so is, IMHO, insulting to users, and given the non-upgradable nature of these machines, bad for the environment, running counter to all of Apple's talk about being environmentally friendly.
Worth noting the M1's memory is MUCH faster and higher-bandwidth than x86 options at launch (about 2x). That accounted for a lot of the difference in perception, at least for general usage... for Docker + containers, it definitely uses a bit more memory.
Did you intend to reply to a different comment? The speed of the memory has no bearing on the capacity, obviously, so I'm not sure how this is relevant to what I said.
The original M1 (I assume that's what GP meant by "at launch") uses LPDDR4x DRAM modules, the same as many x86 laptops that have soldered RAM. You can literally look up the part numbers based on the photos of the M1 CPU package. Maybe I'm misunderstanding but I'm not sure why it would be any more expensive than x86 laptops' memory, and it might even be less expensive just due to the volume that Apple is likely buying.
My point was that the M1's configuration has much higher memory throughput (more channels) than typical laptop/desktop configurations. This allows allocation/deallocation to go more quickly, so the lower capacity is less noticeable for many workflows. It really depends on what you're doing, though.
The pricing structure Apple charges for more memory and storage is F'd up... I was just making the point that you may not need as much as you might think, depending on the bandwidth and your workflow.
My personal machine is an 8GB M1 Air. I don't usually do heavy dev work on it, but I always have dozens of Safari tabs, often a bunch of Chrome tabs, a bajillion Slacks, and other apps. I'll do light dev work on it, mostly for personal things, and I even play the occasional game.
In other words, I think I'm well beyond what you're even describing. And while I do wish I had bought the one with more RAM, I usually don't notice the difference. The swapping is that good.
Almost certainly. The Thunderbolt spec requires any port labeled "Thunderbolt 4" to support dual 4K/60Hz displays.
You'll note that, for example, on the MacBook Air (both M1 and M2 versions), Apple labels the ports "Thunderbolt / USB 4," which is confusing and IMHO downright misleading. The reason they do that is that those ports don't support dual displays, so they only meet the Thunderbolt 3 spec, which doesn't mandate dual displays.
A better label for the MacBook Air might be just "USB 4," or "USB 4 with Thunderbolt 3 support" (though TB3 is part of the USB 4 spec, so that's technically redundant).
I think it's worth taking a step back here to say that IMHO regardless of whether the OP's previous comments justify his expulsion from the issue tracker, having the only other available "DDoS opt-out" mechanism be to email Russ Cox directly is _completely insane_ and unacceptable for an organization of Google's size and funding level. If they're going to ban members from the community (perhaps justifiably so), Google needs to either provide another public place to make one of these requests, or preferably make the DDoS feature opt-in rather than opt-out.
Why does the Go team and/or Google think that it's acceptable to not respect robots.txt and instead DDoS git repositories by default, unless they get put on a list of "special case[s] to disable background refreshes"?
Why was the author of the post banned without notice from the Go issue tracker, removing what is apparently the only way to get on this list aside from emailing you directly?
Do you, personally, find any of this remotely acceptable?
FWIW I don't think this really fits into robots.txt. That file is mostly aimed at crawlers, not at services loading specific URLs due to (sometimes indirect) user requests.
...but as a place to hold a rate-limit recommendation it would be nice, since the Git protocol doesn't appear to have an equivalent of a Cache-Control header.
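Just to illustrate, something like this is what I have in mind (note: Crawl-delay is a de facto extension that some crawlers honor, not part of the robots.txt standard, and the user-agent token for the Go module mirror here is made up):

```
# Hypothetical rate-limit hint for Go's module mirror.
User-agent: GoModuleMirror
Crawl-delay: 60

User-agent: *
Disallow: /private/
```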
> Not for services loading specific URLs due to (sometimes indirect) user requests.
A crawler has a list of resources it periodically checks to see if they changed, and if they did, indexes them for user requests.
Contrast that with this totally-not-a-crawler, which has its own database of known resources, periodically checks whether anything changed, and if it did, caches the content and builds checksums.
I'm taking the OP at his word here, but he specifically claims that the proxy service making these requests will also make requests independent of a `go get` or other user-initiated action, sometimes to the tune of a dozen repos at once and 2500 requests per hour. That sounds like a crawler to me, and even if you want to argue the semantic meaning of the word "crawler," I strongly feel that robots.txt is the best available solution to inform the system what its rate limit should be.
After reading this and your response to a sibling comment I wholeheartedly disagree with you on both the specific definition of the word crawler and what the "main purpose" of robots.txt is, but glad we can agree that Google should be doing more to respect rate limits :)
As annoying as it is, there is precedent for this opinion with RSS aggregator websites like Feedly. They discover new feed URLs when their users add them, and then keep auto-refreshing them without further explicit user interaction. They don't respect robots.txt either.
I wouldn't expect or want an RSS aggregator to respect robots.txt for explicitly added feeds. That is effectively a human action asking for that feed to be monitored so robots.txt doesn't apply.
What would be good is respecting `Cache-Control`, which unfortunately many RSS clients don't do; they just pick a schedule and poll on it.
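As a sketch of what that could look like in Go (the feed URL and the 15-minute fallback are made up; a real client would also send If-None-Match/If-Modified-Since):

```go
package main

import (
	"fmt"
	"net/http"
	"strconv"
	"strings"
	"time"
)

// nextPoll derives a poll interval from the response's Cache-Control
// max-age directive, falling back to a default when it's absent.
func nextPoll(resp *http.Response, fallback time.Duration) time.Duration {
	for _, d := range strings.Split(resp.Header.Get("Cache-Control"), ",") {
		d = strings.TrimSpace(d)
		if s, ok := strings.CutPrefix(d, "max-age="); ok {
			if secs, err := strconv.Atoi(s); err == nil && secs > 0 {
				return time.Duration(secs) * time.Second
			}
		}
	}
	return fallback
}

func main() {
	const feed = "https://example.com/feed.xml" // hypothetical feed URL
	for {
		resp, err := http.Get(feed)
		if err != nil {
			time.Sleep(15 * time.Minute)
			continue
		}
		resp.Body.Close() // a real client would parse the feed here
		wait := nextPoll(resp, 15*time.Minute)
		fmt.Println("next poll in", wait)
		time.Sleep(wait)
	}
}
```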
I want my software to obey me, not someone else. If the software is discovering resources on its own, then obeying robots.txt is fair. But if the software is polling a resource I explicitly told it to poll, I would not expect it to make additional requests to fetch unrelated files such as robots.txt.
I can almost see both sides here... But ultimately when you are using someone else's resources, then not respecting their wishes (within reason) just makes you an asshole.
Google began pushing for robots.txt to become an Internet standard in 2019, explicitly to be applicable to any URI-driven Internet system and not just the Web, and it was adopted as an Internet standard (RFC 9309) in 2022.
This is true but irrelevant to the parent's question -- in the article, it's made clear that Google's requests are happening over HTTP, which is the most obvious reason why robots.txt should be respected.
Read the OP; it's obvious based on the references to robots.txt, the User-Agent header, returning a 429 response, etc., that most (all?) of Google's requests are doing git clones over HTTP(S).
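For anyone hosting their own repos, a minimal Go sketch of the OP's style of countermeasure might look like this (the "GoModuleMirror" user-agent substring is my assumption; check your access logs for what the mirror actually sends):

```go
package main

import (
	"net/http"
	"strings"
)

// throttleGoProxy wraps a handler and returns 429 (with Retry-After)
// to requests whose User-Agent looks like Go's module mirror.
// "GoModuleMirror" is an assumed substring, not a verified value.
func throttleGoProxy(next http.Handler) http.Handler {
	return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		if strings.Contains(r.Header.Get("User-Agent"), "GoModuleMirror") {
			w.Header().Set("Retry-After", "3600") // seconds until retry is welcome
			http.Error(w, "too many requests", http.StatusTooManyRequests)
			return
		}
		next.ServeHTTP(w, r)
	})
}

func main() {
	// Hypothetical: front an existing Git-over-HTTP backend.
	http.Handle("/", throttleGoProxy(http.FileServer(http.Dir("/srv/git"))))
	http.ListenAndServe(":8080", nil)
}
```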
My prediction is that Elon will realize how badly he is fucking things up and change. I was listening to the All-In Podcast, and there was a really good comment: "He [Elon] needs to just get back to landing rockets on barges." I agree; moderating and micromanaging a massive social media platform doesn't feel like a good use of his time.