
Just bought three Dell 7xxx series with EPYC 7402s. Our IT consultant recommended against AMD in favor of Intel. I didn’t like their reasoning.

Side take... I sized our servers to be adequate with the idea that virtual cores won’t be turned on, just assuming more Spectre and Meltdown discoveries. And while AMD has fared better, it’s not impossible they have their own demons.



Curious, what was their reasoning?


Not OP. I purchased 8 Cascade Lake servers for HPC after testing the latest AMD EPYC. One of the reasons is the Intel compiler! We use it in scientific software to get extra performance, and it shits all over AMD processors. I saw the same software take approx. 30 times longer to run because it was not an Intel CPU (worst case; with GCC the gap was much smaller). This is not AMD’s fault. Just saying there are some of us who are stuck with the devil that is Intel.


Doesn't the Intel compiler let you disable the runtime CPU detection and generate an executable that unconditionally uses the instruction sets you explicitly enable? I know they also provide environment variable overrides for at least some of their numerical libraries.
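
For context, GCC and Clang expose the same idea directly: you can compile a hot path for an explicitly chosen instruction set and do the runtime check yourself, keyed only on the feature bit. A rough sketch using GCC/Clang builtins (the function names and the AVX2 choice are just for illustration; this is not ICC's actual dispatcher):

    /* Manual, feature-keyed dispatch: the AVX2 variant is compiled
     * unconditionally via the target attribute, and runtime selection
     * looks only at the feature bit, never at the CPU vendor. */
    #include <stdio.h>

    __attribute__((target("avx2")))
    static double sum_avx2(const double *x, int n) {
        double s = 0.0;
        for (int i = 0; i < n; i++)  /* auto-vectorised with AVX2 enabled */
            s += x[i];
        return s;
    }

    static double sum_scalar(const double *x, int n) {
        double s = 0.0;
        for (int i = 0; i < n; i++)
            s += x[i];
        return s;
    }

    int main(void) {
        double x[] = {1.0, 2.0, 3.0, 4.0};
        __builtin_cpu_init();
        double s = __builtin_cpu_supports("avx2") ? sum_avx2(x, 4)
                                                  : sum_scalar(x, 4);
        printf("%f\n", s);
        return 0;
    }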


It is a bit hit-and-miss to get it to do the right thing. We compile with arch-specific settings and add the features we’d like as well, but in spite of that it does not look like it is using all the facilities available (unverified claim, based on perf outcomes). My guess is it ignored our flags once it did not see “GenuineIntel” in the vendor field. To be honest, I had to weigh the cost-benefit of trading my time to figure this out against the savings I’d get from going AMD. Two things made us stop and buy Intel:

1. Our major cost in the BOM is memory, not the CPU. So a 30% saving on CPU cost is not 30% off the bill, but much less.

2. Even if we found a way to tip the scales in AMD’s favour, our binaries still need to run on the rest of our Intel servers without a significant perf hit. So our liberty to change is limited.

It’s sad, but the reality is that we had to buy more Intel. But luckily, their prices are far lower than at our last purchase, before AMD lit a fire under their asses. So there is that.


Maybe you have a problem that can use AVX-512 trivially in the compiler, in which case yes, Intel is hugely better. We are all very lucky to have the crazy high-end hardware we have. I can't wait: in a few years I should be able to get a fairly cool Mac Mini equivalent with 32 cores, so that a JavaScript test suite can take less than 10 minutes to run...


There was an HN discussion a couple of weeks ago about whether the Intel CPU detection “feature” is an evil money grab or a legitimate way to prevent unexpected runtime behavior on AMD CPUs.


I'm not sure which discussion you're referring to; I've seen the topic come up many times. But I haven't seen a reasonably non-evil explanation for why the compiler should preemptively assume that AMD's CPU feature flags cannot be trusted, while Intel's can. Detecting known-buggy CPU models is fine, but assuming that future AMD CPUs are more likely to introduce bugs to AVX-whatever than future Intel CPUs is not something that I have seen justified.
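
For what it's worth, the vendor string and the feature bits come from different CPUID leaves, so a dispatcher that keys on the vendor is checking something other than the capabilities themselves. A small sketch of reading both, using GCC/Clang facilities on x86-64 (purely illustrative):

    /* Read the CPUID vendor string ("GenuineIntel" / "AuthenticAMD")
     * and, separately, an actual feature bit.  The feature bit is what
     * determines whether e.g. AVX2 code will execute correctly. */
    #include <cpuid.h>
    #include <stdio.h>
    #include <string.h>

    int main(void) {
        unsigned int eax, ebx, ecx, edx;
        char vendor[13] = {0};

        if (__get_cpuid(0, &eax, &ebx, &ecx, &edx)) {
            /* The 12-byte vendor string is packed into EBX, EDX, ECX. */
            memcpy(vendor + 0, &ebx, 4);
            memcpy(vendor + 4, &edx, 4);
            memcpy(vendor + 8, &ecx, 4);
        }

        __builtin_cpu_init();
        printf("vendor: %s\n", vendor);
        printf("avx2:   %s\n", __builtin_cpu_supports("avx2") ? "yes" : "no");
        return 0;
    }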


Okay, so there was another article recently about a new fuzzing tool from Google that revealed thousands of bugs in Safari and open-source projects.

I assume the exploitable edge cases are so numerous and so hard to get 100% test coverage on (is that even possible?) that it's hard enough for Intel to ensure correct execution on their own platform.


Were you able to bench AOCC, or was that compiler not a feasible option?


I didn’t press, but it was along the lines of “everyone uses Intel”, and surely would have ended with “no one was ever fired for choosing IBM”.

In reality, with AMD the cost was better, the RAM/bus was faster, and I liked the specs and options of the 7xxx series better than the 7xx series.

They couldn’t give me a reason not to choose AMD, only that “Dell always uses Intel for a reason”.


> “Dell always uses Intel for a reason”

That reason sometimes being that Intel paid them off to block AMD from competing in the market:

https://www.nytimes.com/2010/07/23/business/23dell.html

https://www.extremetech.com/computing/184323-intel-stuck-wit...


There are two main valid reasons larger companies won't touch AMD for servers:

1) You don't know if a given Linux kernel/other software will work unless you test it ... for each future version

2) The firmware updates for Intel and AMD are different.

Additionally, the excellent Intel C compiler focuses on their own processors.

The above doesn't mean you can't choose AMD, but don't assume they're interchangeable CPUs.

Disclosure: I worked for Transmeta, whose entire DC was built on AMD servers. The reason was that Intel was a larger competitor to their code-morphing CPUs than AMD was.

Coincidentally, Linus Torvalds entered the USA on a work visa from Transmeta after DEC bailed on his job offer.

I bought CS22 at Transmeta's wind-down auction; I will donate it to the Computer Museum. Several large CPU designs of that era were verified on it, because it was a 4-CPU Opteron with 64 GB of RAM and 32 GB wasn't enough.

Aside from Apple's A-series, that was the end of Silicon Valley being about silicon. (Many of the chip engineers on my last project ended up at Apple on the A-series.)


>Additionally, the excellent Intel C compiler focuses on their own processors

This is a new and creative use of the word "excellent". Intel are so dishonest they have been caught out using their compiler as malware delivery: it makes /your/ compiled binary test for an Intel CPU when being run by /your/ customer, and if it finds your executable running on a competitor's CPU, e.g. AMD, it takes every slow path despite the optimised code running fast on that CPU.

Wildly dishonest. "Malware delivery mechanism" is a somewhat more traditional use of the English language to describe the Intel compiler.

You cannot trust Intel. They've earned that reputation all by themselves.


Malware? Are we just redefining words when we don’t like something?

> malware (n)

> software that is specifically designed to disrupt, damage, or gain unauthorized access to a computer system.

How is a dispatch system (which GCC also supports) malware? Yes, Intel “cripples” AMD by requiring an Intel processor for the optimised paths, but it’s not malware.
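
For comparison, GCC's dispatch (function multiversioning) keys on feature bits: with target_clones the compiler emits several variants plus a resolver that picks one at load time, with no vendor-string test involved. A minimal sketch (GCC on x86-64 Linux; illustrative only):

    /* GCC function multiversioning: an AVX2 clone and a default clone are
     * emitted, and an ifunc resolver selects between them by CPU feature
     * bits at load time -- no vendor-string check. */
    #include <stdio.h>

    __attribute__((target_clones("avx2", "default")))
    double dot(const double *a, const double *b, int n) {
        double s = 0.0;
        for (int i = 0; i < n; i++)
            s += a[i] * b[i];
        return s;
    }

    int main(void) {
        double a[] = {1, 2, 3, 4}, b[] = {4, 3, 2, 1};
        printf("%f\n", dot(a, b, 4));
        return 0;
    }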


It's sneaky, it behaves badly and counter to the user's interests, and because it's a compiler, it propagates that bad behavior (though not in a self-reproducing viral fashion). It's fairly mild on the scale of malware—I'd rank it slightly less bad than adware, but roughly on par with AV software that deletes any tools it deems to be for piracy purposes.


I call stealing your customers' CPU cycles without permission for marketing purposes malware. If you don't, that's OK. We can disagree.


I literally posted the definition of malware. Where is it gaining unauthorized access?


It says 'or'.

If it disrupts, that fits the definition you gave.

Or do you think a trojan that deletes your boot sector isn't malware?


Your definition isn't the only reasonable way to define the term, and you seem to be parsing it incorrectly anyways.


Seems pretty disruptive to my layman eyes to force code to run slower on a competitor's hardware.


Oh, for sure. I’m quibbling over the use of the word “malware”.


> 1) You don't know if a given Linux kernel/other software will work unless you test it ... for each future version

Huh? Sure, some software may break, but there's more than enough AMD out there to make sure that Linux and other common software won't break.

> Additionally, the excellent Intel C compiler focuses on their own processors.

IME it's actually not that commonly used outside of benchmarking (among other reasons, it's fairly buggy - perhaps somewhat of a chicken/egg issue).


Actually, if you want to run Wayland with a more powerful GPU than Intel's integrated stuff, AMD has much better support, to the point that running Wayland isn't even an option on anything Nvidia (and Intel CPU + Radeon dGPU is relatively rare). Though I'm a bit confused about whether Wayland (as an experimental option) is even supported in the upcoming Ubuntu 20.04 LTS. It should be, because X.org's universal trackpad driver sucks compared to what was available in Ubuntu 16.04, and overall gnome-shell feels clunky and a regression compared to Unity. Having just set up a ThinkPad E495 (Ryzen) over the weekend, I'm impressed with the easy out-of-the-box installation, but also so disappointed with gnome-shell and the state of Wayland that I'm considering alternatives to it.


> Though I'm a bit confused about whether Wayland (as an experimental option) in upcoming Ubuntu 20.04 LTS is even supported.

I've been using Wayland out of the box on 19.04 and 19.10 to get fractional scaling and independent DPIs on multiple monitors (ThinkPads of various ages with Intel GPUs). If it's experimental, they've certainly hidden that well. It was just a login option on the display manager, with no warnings about it during install or later.


Appreciate the experience report. But non-LTS releases are by some definitions all "experimental" - I had the impression Canonical pushed for Wayland in 18.04, but then walked it back a bit.

Hm, Wayland by default in 17.10, then back to optional in 18.04 - and so it might stay:

https://www.omgubuntu.co.uk/2018/01/xorg-will-default-displa...

https://www.phoronix.com/scan.php?page=news_item&px=No-Wayla...

I'm a little surprised; not making it the default for 18.04 made a lot of sense, but I'm not sure why 20.04 won't see a switch.


Not being experimental and being the default option are still two different things though. Even in 19.10, while it is installed as part of the default install without experimental warnings, it still isn't the default session option.

It is still a very slightly rougher experience than Xorg - mainly due to some 3rd-party apps not fully handling it yet. But the scaling options more than make up for it for me. One of those features (either fractional scaling or independent DPIs) was still regarded as experimental enough to require a CLI command to enable it, though.

So, not perfect, but good enough for me.


That's encouraging to hear - I'll give it a try.


Does Intel work without testing?


Most kernel devs have Intel processors, and anecdotally, it does seem like you see more AMD-specific patches coming in the changelogs as people with the chips get new kernel versions and find new breakages.

Another side effect of Intel's market penetration is that the Intel implementation of any given feature set is targeted first. Things like nested virtualization may work mostly OK on Intel by now but are still in their infancy on AMD; for example, it appears that MS still blacklists AMD from nested virtualization. [0]

[0] https://github.com/MicrosoftDocs/Virtualization-Documentatio...


> and anecdotally, it does seem like you see more AMD-specific patches coming in the changelogs as people with the chips get new kernel versions and find new breakages.

You have to factor in how stagnant Intel's chips have been for many years. There's simply not much new stuff showing up on Intel platforms, and half of the new features are fundamentally incompatible with Linux anyway and thus will never lead to upstreamable patches. AMD catching up to Intel on feature support also necessarily means AMD is adding features at a faster rate, which requires more feature-enablement patches over the same time span.


That will change though.


>“Dell always uses Intel for a reason”

Which is just as likely to have more to do with commercial arm-twisting and incentives from Intel than with anything technical.


For one, AMD killed OpenCL support for Ryzen. Like wtf... I've got 24 hyperthreads and OpenCL doesn't work!!


If Dell went all in on AMD, could they produce enough chips to satisfy demand?


The chips are actually produced by TSMC. So my guess would be yes.


Depends on what kind of pricing and lead time Dell gives AMD, because TSMC 7nm production capacity is definitely limited and their required lead time for wafer orders hit 6 months as of last fall. AMD has already experienced some shortages due to their supply being relatively unresponsive.


Shouldn't be a huge problem right now since Apple is moving to 5nm.


In fairness, so has Intel themselves.


Sure, but people expect that of Intel because they own their fabs and have to build a new fab if they want more capacity. AMD's only buying a slice of TSMC output, but that doesn't mean they're able to suddenly buy a much larger share of that output.


Another question... what exactly is an IT consultant's job description?


You get more for less money, and their commission is smaller.



