Just bought three Dell 7xxx series with Epyc 7402s. Our IT consultant recommended against AMD in favor of Intel. I didn’t like their reasoning.
Side take... I sized our servers to be adequate on the assumption that virtual cores (SMT) won’t be turned on. I’m just assuming there will be more Spectre- and Meltdown-style discoveries, and while AMD has fared better, it’s not impossible they have their own demons.
Not OP. I purchased 8 Cascade Lake servers for HPC after testing the latest AMD EPYC. One of the reasons is the Intel compiler: we use it in scientific software to get extra performance, and it shits all over AMD processors. I saw the same software take approx. 30 times longer to run because it was not an Intel CPU (worst case; with other compilers like GCC the gap is not nearly as bad). This is not AMD’s fault. Just saying there are some of us who are stuck with the devil that is Intel.
Doesn't the Intel compiler let you disable the runtime CPU detection and generate an executable that unconditionally uses the instruction sets you explicitly enable? I know they also provide environment variable overrides for at least some of their numerical libraries.
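For context, the runtime detection being discussed keys off the 12-byte CPUID vendor ID string ("GenuineIntel" vs. "AuthenticAMD"). A minimal C sketch of that check, purely illustrative and not Intel's actual dispatcher:

    /* cpuid_vendor.c - print the CPUID vendor ID string, the field the
     * compiler's runtime dispatcher is reported to key off.
     * Build (x86/x86-64, GCC or Clang): gcc -O2 cpuid_vendor.c -o cpuid_vendor */
    #include <cpuid.h>
    #include <stdio.h>
    #include <string.h>

    int main(void) {
        unsigned int eax, ebx, ecx, edx;
        char vendor[13] = {0};

        /* CPUID leaf 0: EBX, EDX, ECX hold the 12-byte vendor ID string. */
        if (!__get_cpuid(0, &eax, &ebx, &ecx, &edx)) {
            fprintf(stderr, "CPUID not supported\n");
            return 1;
        }
        memcpy(vendor + 0, &ebx, 4);
        memcpy(vendor + 4, &edx, 4);
        memcpy(vendor + 8, &ecx, 4);

        printf("vendor: %s\n", vendor); /* "GenuineIntel" or "AuthenticAMD" */
        printf("a vendor-gated dispatcher would take the %s path here\n",
               strcmp(vendor, "GenuineIntel") == 0 ? "optimized" : "generic");
        return 0;
    }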
It is a bit hit-and-miss to get it to do the right thing. We compile with arch-specific settings and explicitly add the features we’d like, but in spite of that it does not look like it is using all the facilities available (unverified claim, based on perf outcomes). My guess is it ignored our flags once it did not see “GenuineIntel” in the CPUID vendor string. To be honest, I had to weigh the cost-benefit of trading my time to figure this out against the savings from going AMD. Two things made us stop and buy Intel:
1. Our major cost in the BOM is memory, not the CPU. So a 30% savings in CPU cost is not 30% off the bill, but much less.
2. Even if we found a way to tip the scales in AMD’s favour, our binaries still need to run on the rest of our Intel servers without a significant perf hit. So our liberty to change is limited.
It’s sad, but the reality is that we had to buy more Intel. Luckily, their prices are far lower than at our last purchase, before AMD lit a fire under their asses. So there is that.
Maybe you have a problem where the compiler can use AVX-512 trivially, in which case yes, Intel is hugely better. We are all very lucky to have the crazy high-end hardware we have. I can't wait: in a few years I should be able to get a fairly cool Mac Mini+ equivalent with 32 cores so that a JavaScript test suite can take less than 10 minutes to run...
There was an HN discussion a couple of weeks ago about whether the Intel CPU detection “feature” is an evil money grab or a legitimate way to prevent unexpected runtime behavior on AMD CPUs.
I'm not sure which discussion you're referring to; I've seen the topic come up many times. But I haven't seen a reasonably non-evil explanation for why the compiler should preemptively assume that AMD's CPU feature flags cannot be trusted, while Intel's can. Detecting known-buggy CPU models is fine, but assuming that future AMD CPUs are more likely to introduce bugs to AVX-whatever than future Intel CPUs is not something that I have seen justified.
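To make the "trust the feature flags" alternative concrete, here's a minimal sketch of feature-based dispatch using the GCC/Clang builtins, which read the CPUID feature bits (plus OS state) and behave the same on Intel and AMD. This is just an illustration of the approach, not what any particular vendor library actually does:

    /* feature_dispatch.c - pick a code path from CPU feature flags, not vendor.
     * Build (x86/x86-64, GCC or Clang): gcc -O2 feature_dispatch.c -o feature_dispatch */
    #include <stdio.h>

    static const char *pick_kernel(void) {
        __builtin_cpu_init(); /* populate the feature data before querying it */
        if (__builtin_cpu_supports("avx2"))
            return "avx2";
        if (__builtin_cpu_supports("sse4.2"))
            return "sse4.2";
        return "baseline";
    }

    int main(void) {
        /* If the flag is set, use the code path - regardless of whether CPUID
         * reports GenuineIntel or AuthenticAMD. */
        printf("selected kernel: %s\n", pick_kernel());
        return 0;
    }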
Okay, so there was another article recently about a new fuzzing tool built by Google that revealed thousands of bugs in Safari and open-source projects.
I assume the exploitable edge cases are so numerous, and 100% test coverage so hard to achieve (is it even possible?), that it is hard enough for Intel to guarantee correct execution on their own platform.
There are two main valid reasons larger companies won't touch AMD for servers:
1) You don't know if a given Linux kernel or other piece of software will work unless you test it ... for each future version
2) The firmware updates for Intel and AMD are different.
Additionally, the excellent Intel C compiler focuses on their own processors.
The above doesn't mean you can't choose AMD, but don't assume they're interchangeable CPUs.
Disclosure: I worked for Transmeta, whose entire DC was based on AMD servers. The reason was that Intel was a larger competitor for their code-morphing CPUs than AMD was.
Coincidentally, Linus Torvalds entered the USA on a work visa from Transmeta after DEC bailed on his job offer.
I bought CS22 at Transmeta's wind-down auction, which I will donate to the Computer Museum. Several large CPU designs during that era were verified on it because it was a 4 CPU Opteron with 64 GB RAM, and 32 GB RAM wasn't enough.
Aside from Apple's A-series, that was the end of Silicon Valley being about silicon. (Many of the chip engineers on my last project ended up at Apple on the A-series.)
>Additionally, the excellent Intel C compiler focuses on their own processors
This is a new and creative use of the word "excellent". Intel are so dishonest they have been caught using their compiler as a malware delivery mechanism: it makes /your/ compiled binary test for an Intel CPU when it is run by /your/ customer, and if it finds your executable running on a competitor's CPU, e.g. AMD, it takes the slow path everywhere, despite the optimised code running fast on that CPU.
"Wildly dishonest" and "malware delivery mechanism" are somewhat more traditional uses of the English language to describe the Intel compiler.
You cannot trust Intel. They've earned that reputation all by themselves.
It's sneaky, it behaves badly and counter to the user's interests, and because it's a compiler, it propagates that bad behavior (though not in a self-reproducing viral fashion). It's fairly mild on the scale of malware—I'd rank it slightly less bad than adware, but roughly on par with AV software that deletes any tools it deems to be for piracy purposes.
Actually, if you want to run Wayland and a more powerful GPU than Intel's integrated stuff, AMD has much better support, to the point that running Wayland isn't even an option on anything Nvidia (and Intel CPU + Radeon dGPU is relatively rare). Though I'm a bit confused about whether Wayland (as an experimental option) in the upcoming Ubuntu 20.04 LTS is even supported. It should be, because X.org's universal trackpad driver sucks compared to what was available in Ubuntu 16.04, and overall gnome-shell feels clunky and like a regression compared to Unity. Having just set up a ThinkPad E495 (Ryzen) over the weekend, I'm impressed with the easy out-of-the-box installation, but also so seriously disappointed with gnome-shell and the state of Wayland that I'm considering alternatives.
> Though I'm a bit confused about whether Wayland (as an experimental option) in upcoming Ubuntu 20.04 LTS is even supported.
I've been using Wayland out of the box on 19.04 and 19.10 to get fractional scaling and independent DPIs on multiple monitors (ThinkPads of various ages with Intel GPUs). If it's experimental, they've certainly hidden that well. It was just a login option on the display manager, with no warnings about it during install or later.
Appreciate the experience report. But non-LTS releases are by some definitions all "experimental" - I had the impression Canonical pushed for Wayland in 18.04, but then walked it back a bit.
Hm, Wayland by default in 17.10, then back to optional in 18.04 - and so it might stay:
Not being experimental and being the default option are still two different things though. Even in 19.10, while it is installed as part of the default install without experimental warnings, it still isn't the default session option.
It is still a slightly rougher experience than Xorg - mainly due to some third-party apps not fully handling it yet. But the scaling options more than make up for it for me. One of those features (either fractional scaling or independent DPIs) was still regarded as experimental enough to require a CLI command to enable it, though.
Most kernel devs have Intel processors, and anecdotally, it does seem like you see more AMD-specific patches coming in the changelogs as people with the chips get new kernel versions and find new breakages.
Another side effect of Intel's market penetration is that the Intel implementation of any given featureset is targeted first. Things like nested virtualization may work mostly-OK on Intel by now but are still in their infancy on AMD; for example, it appears that MS still blacklists AMD from nested virtualization. [0]
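As a quick way to check where a given host stands, the KVM nested-virtualization switch is exposed as a module parameter in sysfs (the file only exists once the kvm_intel or kvm_amd module is loaded). A small sketch that reads it:

    /* nested_check.c - report whether KVM nested virtualization is enabled.
     * Build: gcc -O2 nested_check.c -o nested_check */
    #include <stdio.h>

    static void report(const char *path) {
        FILE *f = fopen(path, "r");
        if (!f) {
            printf("%s: not present (module not loaded?)\n", path);
            return;
        }
        int c = fgetc(f); /* kernels report "1"/"0" or "Y"/"N" depending on version */
        fclose(f);
        printf("%s: nested=%c\n", path, (char)c);
    }

    int main(void) {
        report("/sys/module/kvm_intel/parameters/nested");
        report("/sys/module/kvm_amd/parameters/nested");
        return 0;
    }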
> and anecdotally, it does seem like you see more AMD-specific patches coming in the changelogs as people with the chips get new kernel versions and find new breakages.
You have to factor in how stagnant Intel's chips have been for many years. There's simply not much new stuff showing up on Intel platforms, and half of the new features are fundamentally incompatible with Linux anyway and thus will never lead to upstreamable patches. AMD catching up to Intel on feature support also necessarily means AMD is adding features at a faster rate, which requires more feature-enablement patches over the same time span.
Depends on what kind of pricing and lead time Dell gives AMD, because TSMC 7nm production capacity is definitely limited and their required lead time for wafer orders hit 6 months as of last fall. AMD has already experienced some shortages due to their supply being relatively unresponsive.
Sure, but people expect that of Intel because they own their fabs and have to build a new fab if they want more capacity. AMD's only buying a slice of TSMC output, but that doesn't mean they're able to suddenly buy a much larger share of that output.