Hacker News | _hoa8's comments

I am still using the 2013 MBP primarily because:

- It has ALL THE PORTS

- No gimmicky touch bar

- No butter-a$$ keyboard

If Apple doesn't revert these dumb changes, I will simply get a Dell or something else.


Moxie Marlinspike (Open Whisper Systems, Signal) makes a good argument for why that doesn't work: https://m.youtube.com/watch?v=DoeNbZlxfUM. I recommend watching the whole talk.


Making an argument doesn't change the fact that I am informed, made a decision, stopped using it, and have suffered NO HARM. I made a tradeoff that I thought was valuable. Anyone demonstrably can do the same, but most are less interested in making a philosophical judgement than in enjoying the dopamine reward of communal outrage.


At least I still blame living (human) beings for the warming.


You can't be the CEO, chairman of the board, and majority voting shareholder, and not be accountable.


While I agree, I really can't see him being held responsible under US law.


Or by any other national or international court, for that matter.


old.reddit.com all the time


Yeah let's just spread conspiracies because you don't want to put in the time to prove them.


Just because your wound will get better tomorrow doesn't mean the pain today is invalid


No, but just because there's pain doesn't mean it's important, or that something should be done about it.



I think it's more likely he simply chose to put HQ2 near his house rather than his house near where he planned to put HQ2.


Why? Likely he made both decisions together, since that's the obvious behavior.


I have always wondered which one is better:

1. Contribute early on, little by little.

2. Contribute nothing to very little for a while, amassing a huge fortune (and compound-interest magic), then donate big.

Pretty much the same question as lump sum vs dollar cost averaging in investing.

Have there been studies on this?
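A toy back-of-the-envelope model of the two strategies (all numbers, and the idea of a compounding "social return" on money given early, are assumptions on my part, not data):

  def give_early(amount_per_year, years, social_rate):
      # Donate a fixed amount each year; assume the benefits compound in society.
      value = 0.0
      for _ in range(years):
          value = value * (1 + social_rate) + amount_per_year
      return value

  def give_late(amount_per_year, years, market_rate):
      # Invest the same amount each year, then donate the lump sum at the end.
      value = 0.0
      for _ in range(years):
          value = value * (1 + market_rate) + amount_per_year
      return value

  print(give_early(10_000, 30, social_rate=0.07))  # ~945k of accumulated social value
  print(give_late(10_000, 30, market_rate=0.05))   # ~664k available to donate at the end

Under this framing the answer hinges entirely on which compounds faster, the market or the knock-on benefits of early giving, and that's exactly the part I don't know how to measure.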


Seeing how few people make it to the huge-fortune stage, it seems like a very bad strategy if broadly adopted. How we do it now makes sense: contribute as much as you are comfortable with, regardless of which step of the ladder you are on.


This isn't a study, but as far as the science of philanthropy goes I know of no one better to look to than GiveWell and they did write a blog post about it: https://blog.givewell.org/2011/12/20/give-now-or-give-later/


1. Usually will end when you pass away.

2. (Especially if establishing a foundation.) Will outlive you and continue on for potentially hundreds of years into the future. For example:

https://en.wikipedia.org/wiki/Ford_Foundation

https://en.wikipedia.org/wiki/Carnegie_Endowment_for_Interna...

https://en.wikipedia.org/wiki/Rockefeller_Foundation


Money given today pays dividends in the lives of those who benefit and the lives of people around them; it just doesn't show up on any balance sheet. The after-school program that provides the missing support network for a child, keeping them out of a street gang and later prison, and helping them develop the skills to find gainful employment, pays hidden dividends of tens or hundreds of thousands of dollars for society. There are also second- and third-order effects that amplify those gains further. If that money were instead earning 5% in an endowment fund, it might keep the name of a robber baron alive in perpetuity, but it does far less for society today. And if you could trace the knock-on effects of the money given today, you would likely find that the benefits distributed throughout society accumulate quicker than the 5% rate of growth achieved by that endowment fund.


The growth of these foundations requires global capitalism to continue to spread.


If you amass a huge fortune through paying your workers low wages, it would probably be best to avoid amassing that huge fortune, give your workers a larger share of your profits, and allow them to donate themselves. It helps you to avoid the temptation to cynically set up a tax-offsetting fund to avoid potential legislation that forces you to contribute to the communities where you are located.

Either way, it's good to figure out how to contribute back to a country that created your wealth by allowing you to operate a business for two decades without being subject to the sales taxes that the long-shuttered businesses that previously employed your desperate applicants were required to pay.

So donate to politicians early so you don't have to pay taxes, donate to charity to avoid taxes later, buy a newspaper somewhere in between.


I think history gives a bit of the answer. There's always been work done to help homeless people, and it never really got better.

The worst part, IMO, is that it's likely to be a money sink. I deeply believe that homelessness is mostly emotional; these guys need deep moral support and a bit of material support. But you can't buy moral support. Having someone drop 2B might help motivate society to finally figure out how to fix the problem, even if not all the money is spent.


I think that may be overstating things[1], though I agree it's a tough problem. Lots of folks need some extra help: they're disabled, they have mental illnesses, or they have substance addictions (or all of the above). Just giving them a tiny house or apartment is not enough to keep them housed. But typically we fund programs to address all of those problems separately, with varying degrees of cooperation between them.

1: https://www.npr.org/2015/12/10/459100751/utah-reduced-chroni...


The Effective Altruism community has produced a lot of discussion about donating early vs saving and giving later.

This post does a good job of listing some of the trade-offs and includes several links to more discussion: http://effective-altruism.com/ea/4e/giving_now_vs_later_a_su...


I would recommend you do both. Give a little (less than 1%) of what you make today, but aim to make a ton of money. Once you've made the money, give all you can. Your children don't need that much to live a life without work (10-20M).

It's more important to know how you give and whom you give it to. I'm of the opinion that giving $5 to a homeless person is much more effective than giving $5 to a charity.


That ignores the psychology of your Bezoses and Gateses, though.

They are maniacally obsessed with growing their companies as much as possible, and everything they do works towards that goal. They don't have time for philanthropy, or worrying about social issues, while they're doing that.

Gates was able to give so much away because he shifted focus after leaving Microsoft. Hopefully others like him will follow.


He's also doing this at a time when the stock market is very high, which means that this is the cheapest $2 billion he's ever owned. I say that not to undercut the significance, but mostly to point out that the stock market is also important to this consideration.


It's not really the same thing, because (assuming a positive ROI) dollar cost averaging will always end up better off than lump sum investing (assuming the same amount is invested).


Honest q: What home-use workstation would use 32 cores? (Excluding home labs or servers).


For compiling, having many cores is fantastic. Granted, on a workstation, compilation normally involves just a few files (the ones that have changed since the previous build and their dependencies), but when you have to do a full rebuild, it is fantastic to be able to do `make -j16` and watch it chug through 16 files simultaneously. Interestingly, the benchmark in this review shows that the 16-core 2950X compiles Chromium faster than the 32-core 2990WX; presumably something other than the thread count becomes a bottleneck after 16 threads or so.


"this review shows that the 16-core 2950X compiles Chromium faster than the 32-core 2990WX, presumably this means something other than the thread count becomes a bottleneck after 16 threads"

The article mentions that, due to the die packaging, only 16 of the cores have direct access to RAM. So for the 32-core version, half the cores are memory-starved and have to go through the 'connected' cores (also impacting these), while the 16-core version doesn't have that problem and can be at 100% for all process loads.


Might the memory access model (UMA vs NUMA) play a role here? AFAIK the 2990WX has a configurable model (can be configured to work in either UMA or NUMA mode) whereas the 2950X only has one mode (can't recall which one at the moment).


It's the opposite, the 2950X can be configured in (fake-)UMA ("distributed" mode in AMD's terms) or NUMA mode but the WX chips are NUMA only.


I would think that compilation is faster on the 32-core, but linking is much slower.


See my other answer in this thread: the main reason for the strange result is the LTCG build, not really the CPU, which scales quite nicely in the Linux tests from Phoronix.


this.

One of the projects I compile at work can take an hour running on 4 threads; jack that up to 16 and you take it down to not much over 17-18 minutes. That's a whole heap of developer time you just got back that would otherwise have been wasted on compiler swords.

The other one is running VMs / a Docker swarm locally for development.


I make games and can't use the incredibuild server at home, so a workstation with even 16 cores would be amazing.


I think that most of us who are interested in high core counts are at least hobbyists. For example, I use Monte Carlo simulation to compute the pagerank vector for all of the biomedical literature on PubMed. 32 cores is either 32x faster than 1 core, or lets me improve the precision of my results. Sure, this isn't browsing the web, but it's also not a real research project or a business.
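For a sense of why this splits so cleanly across cores, here is a minimal Monte Carlo PageRank sketch (not my actual code; the toy graph and walk/step counts are just placeholders). Each walk is independent, so the walk count can be divided across worker processes with essentially no coordination:

  import random
  from collections import Counter

  def monte_carlo_pagerank(graph, walks=100_000, steps=50, d=0.85):
      # Estimate PageRank by counting visits along random walks with teleportation.
      nodes = list(graph)
      visits = Counter()
      for _ in range(walks):
          node = random.choice(nodes)
          for _ in range(steps):
              out = graph[node]
              if out and random.random() < d:
                  node = random.choice(out)    # follow an outgoing link
              else:
                  node = random.choice(nodes)  # teleport to a random node
              visits[node] += 1
      total = sum(visits.values())
      return {n: visits[n] / total for n in nodes}

  toy_graph = {"a": ["b"], "b": ["a", "c"], "c": ["a"]}
  print(monte_carlo_pagerank(toy_graph))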


It takes roughly 20 minutes to make a SINGLE good frame using Cycles in Blender. Cycles is a raytracer for 3D modeling.

If you are making a 30-second animation at 24-frames-per-second, that would be 720 frames, or roughly 240 hours (10 days) of rendering. 30-seconds would be roughly the length of a standard commercial.

If you have a computer that is 2x or 4x faster, that cuts the time down to 5 days or 2.5 days, which is dramatically different. It's mostly a CPU-intensive problem with relatively low RAM bandwidth. (It's RAM-heavy, especially with HDR skymaps, so you need lots of RAM but not necessarily fast RAM.)
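The arithmetic, spelled out (using the rough 20-minutes-per-frame figure above):

  frames = 30 * 24                 # 30 seconds at 24 fps -> 720 frames
  hours = frames * 20 / 60         # 20 min/frame -> 240 hours
  for speedup in (1, 2, 4):
      print(speedup, hours / speedup / 24, "days")  # 10.0, 5.0, 2.5 days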


Or you buy a good graphics card and blow the CPU out of the water. But it is true that for render engines like Arnold or Corona, the CPU is the main thing. For Cycles, get a GPU.


You might be surprised.

http://download.blender.org/institute/benchmark/latest_snaps...

The Threadripper 1950x (16-core) is faster than the 1080 Ti in several tests. Fishy Cat for instance is faster on Threadripper, as well as the difficult "Barbershop Interior".

With all the Zen+ updates, higher clocks, and now 32 cores, I bet that the 2990WX will be incredible and give GPUs a run for their money.

Besides, you'll need a good CPU to handle physics (cloth, fluid, etc. etc.). Not everything can be done on the GPU yet.

CPUs also have the benefit that RAM is super cheap. You can get 64GB of DDR4, but it's basically impossible to get that amount of RAM on a GPU. This allows you to run multiple Blender instances to handle multiple frames quite easily. A portion of rendering is still single-thread bound, so an animation can be rendered slightly faster if you allocate a Blender instance per NUMA node.

If you do have a GPU, you can still have CPU+GPU rendering by simply running Blender twice, once with GPU rendering and a second time with CPU rendering. With the proper settings, each instance generates its .png files independently, which allows for nearly perfect scaling.

Every x399 board I've seen supports quad-GPUs. So you can totally build a beast rig with 4x GPUs + 32 CPU Cores for the best rendering speed possible.
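A rough sketch of the one-instance-per-NUMA-node idea (assumes Linux with numactl installed, a hypothetical scene.blend, and Blender's standard -b/-o/-s/-e/-a command-line flags; paths and frame ranges are placeholders):

  import subprocess

  def render_range(node, start, end):
      # Pin one Blender instance to a NUMA node and render a slice of the animation.
      cmd = ["numactl", f"--cpunodebind={node}", f"--membind={node}",
             "blender", "-b", "scene.blend", "-o", "//frames/frame_####",
             "-s", str(start), "-e", str(end), "-a"]
      return subprocess.Popen(cmd)

  # e.g. split 720 frames across two NUMA nodes and wait for both:
  procs = [render_range(0, 1, 360), render_range(1, 361, 720)]
  for p in procs:
      p.wait()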


You make a very good point. I do 3D rendering from time to time and I don't actually play games anymore. Most of my work would greatly benefit from a powerful CPU, not so much a GPU. Interesting, I should reconsider.


Digital audio workstation workloads are massively multithreaded, with hundreds or thousands of DSP processes. Performance scales almost linearly with core count.


These recent-ish AMD core improvements alone have me considering rebuilding my VST collection for Windows and moving off Mac for production. I'm not looking forward to tracking down Windows VST versions of tiny apps. I can't imagine spending 3k for another MacBook when I can get way more interesting performance in Windows these days. I'd love to have a desktop for very heavy synth and processing work, freeze those tracks, and then be able to take it on the go with a similarly set up laptop (set-up DAW, that is, not 32 cores).


I was in a similar position, really tied to all my tools on OS X but not wanting to spend 3k on their desktop line.

Instead I built an awesome Hackintosh with 16gb ram, 8 cores, nvme drive and 1080 for like $1600 that runs High Sierra. It's definitely more work to set up initially but pretty low hassle afterwards. No regrets.


Yes, I'm at the crossroads, but I want to be able to have a laptop to take as well, and the new Mac lineup is just not for me. That's great info that Hackintosh computers are still kicking, because I had mostly ruled that out after not hearing much about them over the past few years. Mind sharing your build? :D


My build:

  - Intel 7700k
  - Geforce 1080
  - Asus ROG Strix Z270E
  - Samsung Evo 960 NVMe
The process is much easier than it was years ago - especially if you can find a few people that got it working with the same motherboard.

1) Make a standard install USB

2) Run Clover Configurator on the USB with standard settings + tweaks based on your GPU and motherboard (you can find suggestions on /r/hackintosh and the TonyMac forums)

3) Install and boot

4) Tweak the Clover configuration on the EFI partition to fix any random remaining issues you find like USB or audio.


Not my experience. I've had stutters with Ableton Live on a 4 core machine with 2 cores still idling around. Ableton cannot multi-thread a single track (at least in Version 9), and if you're using a single-threaded VST that does not matter anyway.


If a single instance of a VST can use 100% of a thread, you're using a woefully underpowered processor or a ludicrously inefficient plugin. Many composers regularly work on projects with hundreds of tracks and thousands of plugin instances. Projects of that scale used to require multiple computers and a bunch of DSP accelerator cards, but they're now entirely feasible on one high-end workstation.


It's not hard to find VSTs which will easily max out an i7, especially if they run on Max/MSP. Also, stuff from u-he, like Diva. If you run several instances on high-quality you'll usually need to start freezing tracks.


> Ableton cannot multi-thread a single track (at least in Version 9)

This is still the case in 10, and one of the reasons I have been seriously looking at Bitwig (Linux support being the other).


Is the situation in Bitwig better?


When I tested them out with identical sessions, I was able to get higher track/VST counts without dropouts in Bitwig.


I'm literally sitting here waiting for glibc to compile, because I need a version with debug symbols, which the version from Arch Linux' repos lacks. Right before that, I compiled valgrind from the git head, because the current release (3.13) doesn't support glibc 2.28. I have compiled Chromium a couple of times for work.

It's very apparent that my 2-core 4-thread i5 5200U is a bit too weak; I'd love to be using a 16- or 32-core machine.


8-core Ryzen is a huge upgrade from an Intel U and doesn't cost nearly as much as 16- and 32-core machines


Does video editing or After Effects work count as home use yet? Video editing, in terms of cutting and splicing clips, is not going to benefit much from this chip, but a lot of effects rendering will benefit, and basic video editing uses a lot of effects these days.

That’s all I’ve got.


Yes for roughly 8-threads or so.

16-thread Threadripper is mostly idle in my video-editing tests. I mean, it's a great processor. But video editing isn't "heavy enough" for me to recommend a 16-core or bigger processor.


Ah, not surprised. I have the Intel 8-core.

The best upgrade I’ve made for my video editing workstation has been going to 4x GPUs for DaVinci Resolve.


Most plugins/filters/effects are single-threaded :-(


Yeah don’t get me wrong. They pretty much all are. But, a few key ones aren’t, depending on the host app, and aftermarket plugins have a lot of multi threading support.


I wondered that as well. I guess my imagination for uses isn't particularly good, but all the uses I bought my 16-core workstation for (mostly research computing development) would really suffer with the memory performance of the 32-core chip.

I don't know enough about video games, but I would naively think that the memory latency would be a big deal there as well.

At any rate, I learned a lot from this article. Anandtech's reviews always seem well-written and well-researched.


> I don't know enough about video games, but I would naively think that the memory latency would be a big deal there as well.

It really depends on what else you're doing. If you're just playing a game then Disk and GPU tend to be the biggest bottlenecks in video gaming. Even a reasonably fast modern CPU is sufficient for most games.


One niche application is music composition. When you're writing a score for full orchestra, you need lots of RAM and lots of cores for accurate playback.


I'd use it for home labs or servers.


Home servers? In what situation would you ever need 32 cores for home usage? I'm genuinely curious.


VMs, lots of them.

At the moment I'm running about 15 VMs on about 8 cores in a dedicated box somewhere and it's definitely noticeable. I would love to move some core services home and have 32 cores to play with to give some more headroom.


Why not use containers instead of VMs? You can run about 10x more Docker instances than VMs on the same hardware.


Because the containers may not all run the same operating system? Networking on containers is also a bit different.

There are also reasons for having some more isolation between guest OSes.

On my ESXi box at home I have:

* A VM that hosts my NAS shares. This does nothing other than host the NAS shares, as I want to be sure no silly experiment of mine interferes with that.

* A general-purpose VM, which I run some containers out of (UniFi controller, Plex, etc)

* A VM running Windows Server for my Domain Controller

* A secondary vSwitch, isolated with no uplink to the rest of the network. This is my mini malware-testing lab.

* A VM running pfSense that I'll sometimes use to allow selective access from the isolated vSwitch out to the internet, but not to the rest of the network.

Can't do all that with containers.


I have many use-cases where containers are simply unsuitable.

I'm using FreeBSD, but these apply just as well to Linux. I wanted to run ZoneMinder, which is not available for FreeBSD, so I simply spun up a CentOS VM and installed it.

On the flip side, I wanted to run Home Assistant, Node-RED, and some related utility programs. All of these are happy to run on FreeBSD, so they can live happily in a Jail (FreeBSD's equivalent to a container).

Some people virtualize their router by dedicating a NIC to the appropriate VM. I don't know if this would even be possible in a container.


I run Proxmox on my 16-thread Ryzen and would love more cores.

I currently run 4 Linux VMs for my Kubernetes cluster and a 4-core macOS VM with passthrough for my GTX 1080 Ti. I have 64 GB of memory, so the only thing stopping me from running my Windows 10 and Arch desktop VMs at the same time is more cores.


Because contrary to the hype, containers aren't the right solution to everything.


While you are correct that they are not a one-size-fits-all solution, would you care to elaborate on the specifics of this instance?


Because not everything I want to run is best suited (or even available) to Linux.


Not everything runs great in containers. My internal firewall is pfSense, which is BSD-based and doesn't run on a Linux kernel.

At least 3 VMs need patched kernels or more recent kernels/regular kernel updates than the host provides.

Additionally, VMs provide a bit more isolation than a simple container (at least unless you use unprivileged containers).

I do have containers too, about 20 of them, half of them unprivileged, all of them LXC. Docker is not suitable for my use case at all, and frankly I don't think you should suggest someone switch to Docker without knowing their use cases.


If you want to run multiple different OS's (or even different distributions of the same OS) containers don't work.


There is nothing preventing you from mixing in a couple of VMs and having containers on top of some of them.


Build servers, video conversion/streaming, and hosting game servers are use cases that would certainly benefit from this in a hobbyist/home environment.


VMs


And just to expand on that, I'd like to (for instance) run multiple remote desktops (including a photo editing station), probably a decent plex/emby VM, probably something to do transcoding etc etc. Not to mention dev VMs etc.

You can do a lot with one thing and administer that one thing without having lots of individual boxes doing stuff, and for me it'd be way faster and a single cost, so it'd work out as a big improvement.

In a "money is no object, have all the time in the world" scenario, it would probably be better to have something dedicated to each task, but that's not that flexible, on top of the other drawbacks (cost in money/time).


I would use it to mine bitcoin and heat my house in winter. I already have a slogan:

"Don't be a patsy who pay for heating your place, be paid for it. Order our heating device for just $2999!"



I think more CPU performance is attractive if you edit photos or movies, something that is typical for a home computer. The reason AMD provides CPU performance through a high core count is that it is power efficient; otherwise, a single core is easier to program.


Rendering is something I could see people doing at home if that's their hobby.


Then the tests in this article are hardly convincing.


How is the $1800 2990WX outperforming the $1980 i9-7980XE in every rendering test they performed not convincing?

There are clearly workloads where the 32 core TR chip does not perform well (probably due to the memory configuration) but it seems pretty good at rendering.


The 2990wx blew everything out of the water in rendering. It's like 37% faster than the 7980xe in the Blender benchmark.


Faster, with 2x more cores and twice the power consumption. Hardly a win in my book: it means each core is way weaker and more power hungry. Intel will easily match that sometime soon without sweating.


Check your numbers. In that Blender benchmark, AMD is actually 58% faster (152/96), has 78% more cores (32/18), and its TDP is 52% higher (250/165).¹ https://www.anandtech.com/show/13124/the-amd-threadripper-29...

This means AMD manages to execute more work per watt (more energy efficient), and each AMD core uses less power than Intel.

¹ Anandtech wrongly lists the TDP as 140W. It's in fact 165W: https://ark.intel.com/products/126699/Intel-Core-i9-7980XE-E...
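Working through those numbers (benchmark scores and TDPs as quoted above, with the scores treated as higher-is-better per the 152/96 comparison; TDP is only a rough proxy for actual power draw):

  amd_score, intel_score = 152, 96   # Blender results quoted above
  amd_tdp, intel_tdp = 250, 165      # watts
  print(amd_score / intel_score)     # ~1.58 -> ~58% faster
  print(32 / 18)                     # ~1.78 -> ~78% more cores
  print(amd_tdp / intel_tdp)         # ~1.52 -> ~52% higher TDP
  print((amd_score / amd_tdp) / (intel_score / intel_tdp))  # ~1.04 -> slightly more work per watt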


A Corona 1.3 benchmark between the i9-7980XE and the 2990WX saw, in that workload, the 2990WX 28% faster than the i9.

https://www.youtube.com/watch?v=QI9sMfWmCsk&feature=youtu.be...

Its power consumption was 19% higher than the i9-7980XE.

https://www.youtube.com/watch?v=QI9sMfWmCsk&feature=youtu.be...

Tom's Hardware saw a stock 2990WX at a lower power consumption than a stock i9-7980XE during a Prime95 "torture loop". Overclocked, the AMD part was higher than the Intel one, but only slightly.

https://www.tomshardware.com/reviews/amd-ryzen-threadripper-...

Where have you seen that it has double the power consumption? Under what workloads?

Personally, I don't care about "weaker cores". If a system has 2048 cores clocked at 7 MHz and it is 20% faster at my workload than a single-core CPU at 700 MHz, it is faster.

The fact that the "weaker cored" system is cheaper than the "burly muscly" single core system is a bonus.

Power consumption doesn't even matter that much either. It is the equivalent to a single 60W light bulb (or several of those new-fangled LED bulbs). Big whoop.


If I were Pixar and was running one of these 24/7 then, well, it's theoretically possible that the extra you pay in electricity would make up for the lower capital price. But for a hobbyist running this at most 10 hours a week I really doubt that that's a consideration.

But more importantly, The Tech Report looked at task energy for the Threadripper in rendering tasks and found that it took less energy to finish a render than its competitors. Its power was higher, but the time was shorter to an even greater extent.

https://techreport.com/review/33977/amd-ryzen-threadripper-2...

So if you're so serious about rendering that you're willing to spend thousands on a good rig for it there really isn't any reason not to use this boy.


If you check Phoronix, they found it doesn't actually take much more power than the 7980XE.


2x the core count hardly ever offers anywhere near a 100% performance gain, even across Intel's lineup [1]. Very few workloads are that parallelizable, and almost everything (from the program to the OS to the CPU itself) introduces some form of overhead when running in parallel.

[1] https://www.cpubenchmark.net/compare/Intel-Core-i9-7960X-vs-...
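One standard way to see this is Amdahl's law; the 95% parallel fraction below is just an illustrative assumption, not a measurement of any particular workload:

  def amdahl_speedup(parallel_fraction, cores):
      # Ideal speedup when only part of the work parallelizes (Amdahl's law).
      return 1.0 / ((1 - parallel_fraction) + parallel_fraction / cores)

  print(amdahl_speedup(0.95, 16))  # ~9.1x
  print(amdahl_speedup(0.95, 32))  # ~12.5x -> only ~37% more from doubling the cores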


How much power would an Intel based system need to get that 40% performance boost?


What metric are you considering? The rendering tests show the 2990WX to be both faster and cheaper than the i9.


Slightly faster yes. Cheaper? For now. Price is elastic. The i9 could be priced at any price point because there was no competition until now. Do you think Intel will keep it overpriced for long?


AMD has a manufacturing cost advantage: CPUs are built from 4-core CCXes that can be binned separately.

Intel on the other hand needs to manufacture a monolithic CPU that not only is fault free in enough cores, but performs well. That's harder and yields are way lower.

An 80% yield on a 4-core block is a 16.7% yield on a 32-core block, and that's before binning.
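The arithmetic behind that figure (the 80% per-block yield is the illustrative number from above):

  yield_per_4core_block = 0.80
  blocks_in_32core = 32 // 4                        # 8 blocks
  print(yield_per_4core_block ** blocks_in_32core)  # ~0.168, i.e. the ~16.7% above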


If AMD's method scales so much easier, why didn't Intel just... do that? Honest question.


Intel could do that. But AMD came out with this method first.

AMD has only been doing this "infinity fabric" thing for a year. Intel was caught with their pants down. It seems like Intel is researching chiplet technology and trying to recreate AMD's success here.

It takes several years to create chips. So Intel realistically won't be able to copy the strategy until 2020 or later. But you better bet that Intel is going to be investing heavily into chiplet technology, now that AMD demonstrated how successful it can be.


AMD introduced HyperTransport in 2001.


HyperTransport and Intel QuickPath isn't chiplet technology.

AMD "upgraded" HyperTransport to Infinity Fabric. Which IIRC uses a bit less power (taking advantage of the shorter, more efficient die-to-die interposer).

Intel has UPI (upgrade over Intel QuickPath), but it hasn't been "shrunk" to chiplet level yet. Intel has EMIB as a physical technology to connect chiplets together... but Intel still needs to create dies and a lower-power protocol for interposer (or maybe EMIB-based) communications.

So Intel has a lot of the technology ready to create a chiplet (like AMD's Zeppelin dies). But Intel wasn't gunning for chiplets as hard as AMD was. Still, Intel demonstrated their chiplet prowess with the Xeon+FPGA over EMIB. So Intel definitely "can" do the chiplet thing, they just are a little bit behind AMD for now.


Intel has done that in the past: their first "dual core" chip (2005) was actually two chips in a package.

https://en.wikipedia.org/wiki/Pentium_D


There was a major difference though. Intel's chips communicated over the front-side-bus (not a great solution considering how FSB was already far inferior to HyperTransport).


Sure, that's why the startup I was one of the founders of in that era built a HyperTransport-attached InfiniBand adapter. Intel wasn't very competitive in the supercomputing space back then.


Because it's not free. Communication between cores in different CCXes (and memory access - there is a single memory controller per CCX) has slightly higher latency than within a single CCX (or a monolithic CPU, but here Intel's advantage decreases with core count due to a different interconnect).

Also because they didn't have to innovate - no competition since early Opterons.


I'm thinking of buying one to run multiple instances of Selenium with headless Firefox for crawling reasons. Pair it with 128GB RAM and I could easily run 40-50 instances simultaneously.
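Presumably something along these lines (a minimal sketch, assuming Selenium 4 with geckodriver available on the PATH; the URLs and worker count are placeholders):

  from concurrent.futures import ProcessPoolExecutor
  from selenium import webdriver
  from selenium.webdriver.firefox.options import Options

  def crawl(url):
      opts = Options()
      opts.add_argument("-headless")            # headless Firefox
      driver = webdriver.Firefox(options=opts)  # one browser per worker process
      try:
          driver.get(url)
          return url, driver.title
      finally:
          driver.quit()

  if __name__ == "__main__":
      urls = ["https://example.com"] * 8
      with ProcessPoolExecutor(max_workers=8) as pool:
          for url, title in pool.map(crawl, urls):
              print(url, title)

With 32 cores and enough RAM, the worker count just gets scaled up until the network or the sites being crawled become the bottleneck.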


At the very least I guess it would be useful for running multiple VMs


Any (well written) data analysis code could benefit from the increased core count and high number of memory channels.

Any kind of process that's batchable too.


With 32 cores, I can run 32 small simulations (e.g. CFD, FEA) in parallel for optimization problems, or run one medium-sized simulation, and anything in between.

My ideal workstation likely actually uses ~128 cores but that isn't practical for home use yet. A board with 4 2990wx would be heaven.


Video editing or 3D rendering.


You are right, uncovering the potential of a system like that is almost impossible for home use. With most software unable to use even a couple of cores efficiently, 32 is certainly something for the future.

Personally I'm looking forward to something based on 2200GE for home use.

