Moxie Marlinspike (Open Whisper Systems, Signal) makes a good argument for why that doesn't work: https://m.youtube.com/watch?v=DoeNbZlxfUM. I recommend watching the whole talk.
Making an argument doesn't change the fact that I was informed, made a decision, stopped using it, and have suffered NO HARM. I made a tradeoff that I thought was valuable. Anyone demonstrably can, but most are less interested in making a philosophical judgement than in enjoying the dopamine reward from communal outrage.
1. Contribute early on, little by little.
2. Contribute nothing or very little for a while, amassing a huge fortune (and letting compound interest work its magic), then donate big.
Pretty much the same question as lump sum vs dollar cost averaging in investing.
Seeing how few people make it to the huge-fortune stage, it seems like a very bad strategy if broadly adopted. How we do it now makes sense: contribute as much as you are comfortable with, regardless of which step of the ladder you are on.
Money given today pays dividends in the lives of those who benefit and of the people around them; it just doesn't show up on any balance sheet. The after-school program that provides the missing support network for a child, keeping them out of a street gang and later prison and helping them develop the skills to find gainful employment, pays hidden dividends of tens or hundreds of thousands of dollars for society. There are also second- and third-order effects that amplify those gains further. If that money were instead earning 5% in an endowment fund, it might keep the name of a robber baron alive in perpetuity, but it does far less for society today. And if you could trace the knock-on effects of the money given today, you would likely find that the benefits distributed throughout society accumulate faster than the 5% growth rate achieved by that endowment fund.
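To make that comparison concrete, here is a minimal sketch: the 5% endowment rate comes from the paragraph above, while the 8% "social return" is a purely hypothetical stand-in for those knock-on effects, not data from anywhere.

```python
# Toy comparison only: the 5% rate is from the comment above; the 8% "social
# return" on money given away today is a hypothetical assumption.
endowment = 2_000_000_000      # $2B kept in an endowment
social_value = 2_000_000_000   # $2B given away today
for year in range(30):
    endowment *= 1.05          # financial growth retained by the fund
    social_value *= 1.08       # assumed compounding of knock-on social benefits
print(f"After 30 years: endowment ${endowment/1e9:.1f}B, "
      f"social value of the early gift ${social_value/1e9:.1f}B")
```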
If you amass a huge fortune through paying your workers low wages, it would probably be best to avoid amassing that huge fortune, give your workers a larger share of your profits, and allow them to donate themselves. It helps you to avoid the temptation to cynically set up a tax-offsetting fund to avoid potential legislation that forces you to contribute to the communities where you are located.
Either way, it's good to figure out how to contribute back to the country that created your wealth by allowing you to operate a business for two decades without being subject to the sales taxes that the long-shuttered businesses who previously employed your desperate applicants were required to pay.
So donate to politicians early so you don't have to pay taxes, donate to charity to avoid taxes later, buy a newspaper somewhere in between.
I think history gives a bit of the answer. There's always been work done to help homeless people, and it never really got better.
The worst part, IMO, is that it tends to be a money sink. I deeply believe that homelessness is mostly emotional; these guys need deep moral support and a bit of material support. But you can't buy moral support. Having someone drop $2B might help motivate society to finally figure out how to fix the problem, even if not all the money is spent.
I think that may be overstating things[1], though I agree it's a tough problem. Lots of folks need some extra help: they're disabled, have mental illnesses, or have substance addictions (or all of the above). Just giving them a tiny house or apartment is not enough to keep them housed. But typically we fund programs to address all of those problems separately, with varying degrees of cooperation between them.
I would recommend you do both. Give a little (even if it's less than 1%) of what you make today, but aim to make a ton of money. Once you've made the money, give all you can. Your children don't need that much to live a life without work ($10-20M).
It's more important to know how you give and whom you give it to. I'm of the opinion that giving $5 to a homeless person is much more effective than giving $5 to a charity.
That ignores the psychology of your Bezoses and Gateses, though.
They are maniacally obsessed with growing their companies as much as possible, and everything they do works towards that goal. They don't have time for philanthropy, or worrying about social issues, while they're doing that.
Gates was able to give so much away because he shifted focus after leaving Microsoft. Hopefully others like him will follow.
He's also doing this at a time when the stock market is very high, which means that this is the cheapest $2 billion he's ever owned. I say that not to undercut the significance, but mostly to point out that the stock market is also part of this consideration.
It's not really the same thing, because (assuming a positive ROI) dollar cost averaging will always end up better off than lump sum investing (assuming the same amount is invested).
For compiling, having many cores is fantastic. Granted, on a workstation, compilation normally involves just a few files (the ones that have changed since the previous build and their dependencies), but when you have to do a full rebuild, it is fantastic to be able to run `make -j16` and watch it chug through 16 files simultaneously. Interestingly, the benchmark in this review shows that the 16-core 2950X compiles Chromium faster than the 32-core 2990WX; presumably something other than thread count becomes the bottleneck beyond 16 threads or so.
"this review shows that the 16-core 2950X compiles Chromium faster than the 32-core 2990WX, presumably this means something other than the thread count becomes a bottleneck after 16 threads"
The article mentions that, due to the die packaging, only 16 of the cores have direct access to RAM. So on the 32-core part, half the cores are memory-starved and have to go through the 'connected' cores (also impacting those), while the 16-core part doesn't have that problem and can keep all of its cores fully fed.
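A crude way to see how a shared memory path can cap scaling is a toy roofline-style model; the bandwidth and per-core demand numbers below are made up for illustration, not measurements of the 2990WX.

```python
# Toy model: each core wants some memory bandwidth, but the package only
# supplies a fixed budget. All numbers are illustrative, not measured.
def effective_cores(cores, bw_per_core=4.0, total_bw=60.0):
    # Once total demand exceeds the budget, extra cores just wait on memory.
    return min(cores, total_bw / bw_per_core)

for n in (4, 8, 16, 32):
    print(f"{n:2d} cores -> ~{effective_cores(n):.0f} cores' worth of throughput")
# With these made-up numbers the curve flattens at ~15 cores, which is the same
# flavor of bottleneck as the 2990WX losing to the 2950X on a memory-hungry build.
```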
Might the memory access model (UMA vs NUMA) play a role here? AFAIK the 2990WX has a configurable mode (it can be set to work in either UMA or NUMA mode), whereas the 2950X only has one mode (can't recall which one at the moment).
See my other answer in this thread: the main reason for the strange result is the LTCG build, not really the CPU, which scales quite nicely in the Linux tests from Phoronix.
One of the projects I compile at work can take an hour running on 4 threads; jack that up to 16 and you take it down to not much over 17-18 minutes. That's a whole heap of developer time you just got back that would otherwise have been wasted on compile-time sword fights.
The other one is running VMs / a Docker swarm locally for development.
I think that most of us who are interested in high core counts are at least hobbyists. For example, I use Monte Carlo simulation to compute the PageRank vector for all of the biomedical literature on PubMed. 32 cores either gets me results roughly 32x faster than 1 core or lets me improve the precision of my results. Sure, this isn't browsing the web, but it's also not a real research project or a business.
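For what it's worth, here's a minimal sketch of how that kind of embarrassingly parallel Monte Carlo PageRank estimate spreads across cores; the four-node graph is a toy stand-in for the PubMed citation graph, and the walk counts are arbitrary.

```python
import random
from multiprocessing import Pool

# Toy citation graph (node -> outgoing links), standing in for PubMed.
GRAPH = {0: [1, 2], 1: [2], 2: [0, 3], 3: [0]}
DAMPING = 0.85

def random_walks(args):
    seed, n_walks, walk_len = args
    rng = random.Random(seed)          # independent RNG stream per worker
    visits = [0] * len(GRAPH)
    for _ in range(n_walks):
        node = rng.randrange(len(GRAPH))
        for _ in range(walk_len):
            if rng.random() < DAMPING:
                node = rng.choice(GRAPH[node])    # follow a random link
            else:
                node = rng.randrange(len(GRAPH))  # teleport
            visits[node] += 1
    return visits

if __name__ == "__main__":
    n_workers = 32                                    # one task per core
    tasks = [(seed, 10_000, 20) for seed in range(n_workers)]
    with Pool(n_workers) as pool:
        per_worker = pool.map(random_walks, tasks)    # runs in parallel
    totals = [sum(col) for col in zip(*per_worker)]
    grand_total = sum(totals)
    print([round(v / grand_total, 3) for v in totals])  # estimated PageRank vector
```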
It takes roughly 20 minutes to render a SINGLE good frame using Cycles in Blender. Cycles is Blender's ray-tracing render engine.
If you are making a 30-second animation at 24 frames per second, that's 720 frames, or roughly 240 hours (10 days) of rendering. Thirty seconds is roughly the length of a standard commercial.
If you have a computer that is 2x or 4x faster, that cuts the time down to 5 days or 2.5 days, which is dramatically different. It's mostly a CPU-intensive problem with relatively low RAM-bandwidth requirements. (It's RAM-heavy, especially with HDR skymaps, so you need lots of RAM, but not necessarily fast RAM.)
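Spelled out, with the same numbers as above:

```python
minutes_per_frame = 20
frames = 30 * 24                       # 30 seconds at 24 fps = 720 frames
hours = frames * minutes_per_frame / 60
print(hours, "hours =", hours / 24, "days")        # 240.0 hours = 10.0 days
for speedup in (2, 4):
    print(f"{speedup}x faster: {hours / speedup / 24} days")   # 5.0 and 2.5 days
```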
Or you buy a good graphics card and blow the CPU out of the water. It's true that for render engines like Arnold or Corona the CPU is the main thing, but for Cycles, get a GPU.
The Threadripper 1950x (16-core) is faster than the 1080 Ti in several tests. Fishy Cat for instance is faster on Threadripper, as well as the difficult "Barbershop Interior".
With all the Zen+ updates, higher clocks, and now 32 cores, I bet the 2990WX will be incredible and give GPUs a run for their money.
Besides, you'll need a good CPU to handle physics (cloth, fluid, etc. etc.). Not everything can be done on the GPU yet.
CPUs also have the benefit that RAM is super cheap. You can get 64GB of DDR4, but it's basically impossible to get that amount of RAM on a GPU. This allows you to run multiple Blender instances to handle multiple frames quite easily. A portion of rendering is still single-thread bound, so an animation can be rendered slightly faster if you allocate one Blender instance per NUMA node.
If you do have a GPU, you can still do CPU+GPU rendering by simply running Blender twice, once with GPU rendering and a second time with CPU rendering. With the proper settings, each instance writes out .png files for its frames independently, which allows for nearly perfect scaling.
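A rough sketch of the one-instance-per-NUMA-node idea, using `numactl` to pin each Blender process; the .blend path, frame count, and node count are assumptions for illustration, not a recipe from the comment above.

```python
import subprocess

SCENE = "scene.blend"    # hypothetical project file
TOTAL_FRAMES = 720
NUMA_NODES = 4           # e.g. what a 2990WX exposes in NUMA mode

chunk = TOTAL_FRAMES // NUMA_NODES
procs = []
for node in range(NUMA_NODES):
    start = node * chunk + 1
    end = TOTAL_FRAMES if node == NUMA_NODES - 1 else (node + 1) * chunk
    cmd = [
        "numactl", f"--cpunodebind={node}", f"--membind={node}",  # pin to one node
        "blender", "-b", SCENE,
        "-o", "//render/frame_#####",   # one numbered .png per frame
        "-F", "PNG",
        "-s", str(start), "-e", str(end), "-a",
    ]
    procs.append(subprocess.Popen(cmd))

for p in procs:
    p.wait()
```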
Every X399 board I've seen supports four GPUs. So you can totally build a beast rig with 4 GPUs + 32 CPU cores for the best rendering speed possible.
You make a very good point. I do 3D rendering from time to time and I don't actually play games anymore. Most of my work would greatly benefit from a powerful CPU, not so much a GPU. Interesting; I should reconsider.
Digital audio workstation workloads are massively multithreaded, with hundreds or thousands of DSP processes. Performance scales almost linearly with core count.
These recent-ish AMD core improvements alone have me considering rebuilding my VST collection for Windows and moving off Mac for production. I'm not looking forward to tracking down Windows VST versions of tiny apps, but I can't imagine spending $3k for another MacBook when I can get way more interesting performance on Windows these days. I'd love to have a desktop for very heavy synth and processing work, freeze those tracks, and then be able to take it on the go with a similarly set up laptop (a similarly set up DAW, that is, not 32 cores).
I was in a similar position, really tied to all my tools on OS X but not wanting to spend $3k on their desktop line.
Instead I built an awesome Hackintosh with 16GB of RAM, 8 cores, an NVMe drive, and a 1080 for about $1600 that runs High Sierra. It's definitely more work to set up initially but pretty low hassle afterwards. No regrets.
Yes, I'm at the crossroads, but I want to be able to have a laptop to take as well, and the new Mac lineup is just not for me. That's great info that Hackintosh builds are still kicking, because I had mostly ruled that out after not hearing much about them over the past few years. Mind sharing your build? :D
- Intel 7700k
- Geforce 1080
- Asus ROG Strix Z270E
- Samsung 960 EVO NVMe
The process is much easier than it was years ago - especially if you can find a few people that got it working with the same motherboard.
1) Make a standard install USB
2) Run Clover Configurator on the USB with standard settings + tweaks based on your GPU and motherboard (you can find suggestions on /r/hackintosh and the TonyMac forums)
3) Install and boot
4) Tweak the Clover configuration on the EFI partition to fix any random remaining issues you find like USB or audio.
Not my experience. I've had stutters with Ableton Live on a 4-core machine with 2 cores still idling. Ableton cannot multi-thread a single track (at least in version 9), and if you're using a single-threaded VST, extra cores don't help anyway.
If a single instance of a VST can use 100% of a thread, you're using a woefully underpowered processor or a ludicrously inefficient plugin. Many composers regularly work on projects with hundreds of tracks and thousands of plugin instances. Projects of that scale used to require multiple computers and a bunch of DSP accelerator cards, but they're now entirely feasible on one high-end workstation.
It's not hard to find VSTs which will easily max out an i7, especially if they run on Max/MSP. Also, stuff from u-he, like Diva. If you run several instances on high-quality you'll usually need to start freezing tracks.
I'm literally sitting here waiting for glibc to compile, because I need a version with debug symbols, which the version from Arch Linux' repos lacks. Right before that, I compiled valgrind from the git head, because the current release (3.13) doesn't support glibc 2.28. I have compiled Chromium a couple of times for work.
It's very apparent that my 2-core 4-thread i5 5200U is a bit too weak; I'd love to be using a 16- or 32-core machine.
Does video editing or After Effects work count as home use yet? Video editing, in terms of cutting and splicing clips, is not going to benefit much from this chip, but a lot of effects rendering will, and basic video editing uses a lot of effects these days.
My 16-thread Threadripper is mostly idle in my video-editing tests. I mean, it's a great processor, but video editing isn't "heavy enough" for me to recommend a 16-core or bigger processor.
Yeah don’t get me wrong. They pretty much all are. But, a few key ones aren’t, depending on the host app, and aftermarket plugins have a lot of multi threading support.
I wondered that as well. I guess my imagination for uses isn't particularly good, but all the uses I bought my 16-core workstation for (mostly research computing development) would really suffer from the memory performance of the 32-core chip.
I don't know enough about video games, but I would naively think that the memory latency would be a big deal there as well.
At any rate, I learned a lot from this article. Anandtech's reviews always seem well-written and well-researched.
> I don't know enough about video games, but I would naively think that the memory latency would be a big deal there as well.
It really depends on what else you're doing. If you're just playing a game then Disk and GPU tend to be the biggest bottlenecks in video gaming. Even a reasonably fast modern CPU is sufficient for most games.
One niche application is music composition. When you're writing a score for full orchestra, you need lots of RAM and lots of cores for accurate playback.
At the moment I'm running about 15 VMs on about 8 cores in a dedicated box somewhere, and it's definitely noticeable. I would love to move some core services home and have 32 cores to play with for some more headroom.
Because the guests may not all be the same operating system? Networking on containers is also a bit different.
There are also reasons for having some more isolation between guest OSes.
On my ESXi box at home I have:
* A VM that hosts my NAS shares. This does nothing other than host the NAS shares, as I want to be sure no silly experiment of mine interferes with that.
* A general-purpose VM, where I do run some containers out of (UniFi controller, Plex, etc)
* A VM running Windows Server for my Domain Controller
* A secondary vSwitch, isolated with no uplink to the rest of the network. This is my mini malware-testing lab.
* A VM running pfSense that I'll sometimes use to allow selective access from the isolated vSwitch out to the internet, but not to the rest of the network.
I have many use-cases where containers are simply unsuitable.
I'm using FreeBSD, but these apply just as well to Linux. I wanted to run ZoneMinder, which is not available for FreeBSD, so I simply spun up a CentOS VM and installed it.
On the flip side, I wanted to run Home Assistant, Node-RED, and some related utility programs. All of these are happy to run on FreeBSD, so they can live happily in a Jail (FreeBSD's equivalent to a container).
Some people virtualize their router by dedicating a NIC to the appropriate VM. I don't know if this would even be possible in a container.
I run Proxmox on my 16-thread Ryzen and would love more cores.
I currently run 4 Linux VMs for my Kubernetes cluster and a 4-core macOS VM with passthrough for my GTX 1080 Ti. I have 64GB of memory, so the only thing stopping me from running my Windows 10 and Arch desktop VMs at the same time is more cores.
Not everything runs great in containers. My internal firewall is pfSense, which is BSD-based and doesn't run on a Linux kernel.
At least 3 of my VMs need patched kernels, or more recent kernels / more regular kernel updates than the host provides.
Additionally, VMs provide a bit more isolation than a simple container (at least unless you use unprivileged containers).
I do have containers too, about 20 of them, half of them unprivileged, all of them LXC. Docker is not suitable for my use case at all, and frankly I don't think you should suggest someone switch to Docker without knowing their use cases.
Build servers, video conversion/streaming, and hosting game servers are use cases that would certainly benefit from this in a hobbyist/home environment.
And just to expand on that, I'd like to (for instance) run multiple remote desktops (including a photo-editing station), probably a decent Plex/Emby VM, probably something to do transcoding, etc. Not to mention dev VMs.
You can do a lot with one machine and administer just that one machine, without having lots of individual boxes doing stuff, and for me it'd be way faster and a single cost, so it'd work out as a big improvement.
In a "money is no object, all the time in the world" scenario it would probably be better to have something dedicated to each task, but that's not as flexible, on top of the other drawbacks (cost in money and time).
I think more CPU performance is attractive if you edit photos or movies - something that is typical for a home computer.
The reason AMD provides CPU performance through a high core count is that it's power efficient; otherwise, a single fast core would be easier to program for.
How is the $1800 2990WX outperforming the $1980 i9-7980XE in every rendering test they performed not convincing?
There are clearly workloads where the 32 core TR chip does not perform well (probably due to the memory configuration) but it seems pretty good at rendering.
Faster, but with 2x the cores and twice the power consumption. Hardly a win in my book - it means each core is way weaker and more power hungry. Intel will easily match that sometime soon without breaking a sweat.
Tom's Hardware saw a stock 2990WX at a lower power consumption than a stock i9-7980XE during a Prime95 "torture loop". Overclocked, the AMD part was higher than the Intel one, but only slightly.
Where have you seen that it has double the power consumption? Under what workloads?
Personally, I don't care about "weaker cores". If a system has 2048 cores clocked at 700 MHz and it is 20% faster at my workload than a single-core CPU at 7 GHz, it is faster.
The fact that the "weaker cored" system is cheaper than the "burly muscly" single core system is a bonus.
Power consumption doesn't even matter that much either. It is the equivalent to a single 60W light bulb (or several of those new-fangled LED bulbs). Big whoop.
If I were Pixar and was running one of these 24/7 then, well, it's theoretically possible that the extra you pay in electricity would make up for the lower capital price. But for a hobbyist running this at most 10 hours a week I really doubt that that's a consideration.
But more importantly, The Tech Report looked at task energy for the Threadripper in rendering tasks and found that it took less energy to finish a render than its competitors. Its power draw was higher, but the render time was shorter to an even greater extent.
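In other words, task energy is power times time; with purely hypothetical numbers (not The Tech Report's figures):

```python
# Hypothetical wattages and render times, just to show the task-energy argument.
amd_power_w, amd_hours = 250, 1.0      # draws more power...
intel_power_w, intel_hours = 190, 1.5  # ...but takes longer to finish
print("AMD task energy:  ", amd_power_w * amd_hours, "Wh")     # 250.0 Wh
print("Intel task energy:", intel_power_w * intel_hours, "Wh")  # 285.0 Wh
```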
2x the core count hardly ever offers anywhere near a 100% performance gain, even across Intel's lineup [1]. Very few workloads are that parallelizable, and almost everything (from the program to the OS to the CPU itself) introduces some form of overhead when running in parallel.
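Amdahl's law is the usual way to put a number on that; the 5% serial fraction below is an assumed figure, not a measurement of any particular benchmark.

```python
# Amdahl's law: speedup(n) = 1 / (s + (1 - s) / n), where s is the serial fraction.
def amdahl(n_cores, serial_fraction=0.05):   # 5% serial work is an assumption
    return 1 / (serial_fraction + (1 - serial_fraction) / n_cores)

for n in (2, 4, 8, 16, 32):
    print(f"{n:2d} cores -> {amdahl(n):.1f}x speedup")
# Even with only 5% serial work, 32 cores give about 12.5x, nowhere near 32x.
```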
Slightly faster yes. Cheaper? For now. Price is elastic. The i9 could be priced at any price point because there was no competition until now. Do you think Intel will keep it overpriced for long?
AMD has a manufacturing cost advantage: its CPUs are built from 4-core CCXes that can be binned separately.
Intel, on the other hand, needs to manufacture a monolithic CPU that not only is fault-free in enough cores but also performs well. That's harder, and yields are way lower.
An 80% yield on a 4-core block corresponds to roughly a 17% yield (0.8^8 ≈ 0.168) on a monolithic 32-core die, and that's before binning.
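The arithmetic behind that, using the 80% per-block figure from above (an illustrative number, not real fab data):

```python
block_yield = 0.80                 # assumed yield of one 4-core block
blocks = 32 // 4                   # a monolithic 32-core die needs 8 good blocks at once
monolithic_yield = block_yield ** blocks
print(f"{monolithic_yield:.1%}")   # ~16.8%; a chiplet approach instead bins blocks separately
```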
Intel could do that. But AMD came out with this method first.
AMD has only been doing this "infinity fabric" thing for a year. Intel was caught with their pants down. It seems like Intel is researching chiplet technology and trying to recreate AMD's success here.
It takes several years to create chips. So Intel realistically won't be able to copy the strategy until 2020 or later. But you better bet that Intel is going to be investing heavily into chiplet technology, now that AMD demonstrated how successful it can be.
HyperTransport and Intel QuickPath aren't chiplet technology.
AMD "upgraded" HyperTransport into Infinity Fabric, which IIRC uses a bit less power (taking advantage of the shorter, more efficient die-to-die interposer).
Intel has UPI (upgrade over Intel QuickPath), but it hasn't been "shrunk" to chiplet level yet. Intel has EMIB as a physical technology to connect chiplets together... but Intel still needs to create dies and a lower-power protocol for interposer (or maybe EMIB-based) communications.
So Intel has a lot of the technology ready to create a chiplet (like AMD's Zeppelin dies). But Intel wasn't gunning for chiplets as hard as AMD was. Still, Intel demonstrated their chiplet prowess with the Xeon+FPGA over EMIB. So Intel definitely "can" do the chiplet thing, they just are a little bit behind AMD for now.
There was a major difference though: Intel's chips communicated over the front-side bus (not a great solution, considering the FSB was already far inferior to HyperTransport).
Sure, that's why the startup I was one of the founders of in that era built a HyperTransport-attached InfiniBand adapter. Intel wasn't very competitive in the supercomputing space back then.
Because it's not free. Communication between cores in different CCXes (and memory access: there is a memory controller per die) has slightly higher latency than within a single CCX (or a monolithic CPU, though here Intel's advantage decreases with core count due to a different interconnect).
Also because they didn't have to innovate - no competition since early Opterons.
I'm thinking of buying one to run multiple instances of Selenium with headless Firefox for crawling. Paired with 128GB of RAM, I could easily run 40-50 instances simultaneously.
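Roughly what that looks like with Selenium's Firefox driver and a process pool; the URL list and worker count are placeholders, and a real crawler would reuse one browser per worker instead of starting a fresh one per page.

```python
from concurrent.futures import ProcessPoolExecutor
from selenium import webdriver
from selenium.webdriver.firefox.options import Options

def fetch_title(url):
    opts = Options()
    opts.add_argument("-headless")              # headless Firefox
    driver = webdriver.Firefox(options=opts)
    try:
        driver.get(url)
        return url, driver.title
    finally:
        driver.quit()

if __name__ == "__main__":
    urls = [f"https://example.com/page/{i}" for i in range(200)]  # placeholder URLs
    # 40-50 workers is plausible with 32 cores and 128GB of RAM.
    with ProcessPoolExecutor(max_workers=40) as pool:
        for url, title in pool.map(fetch_title, urls):
            print(url, "->", title)
```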
With 32 cores, I can run 32 small simulations (e.g. CFD, FEA, etc.) in parallel for optimization problems, or run 1 medium-sized simulation, and anything in between.
My ideal workstation actually needs ~128 cores, but that isn't practical for home use yet. A board with four 2990WXs would be heaven.
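A sketch of that kind of embarrassingly parallel parameter sweep; the solver binary (`mysolver`) and its flags are placeholders, not a real tool.

```python
import itertools
import subprocess
from concurrent.futures import ProcessPoolExecutor

def run_case(params):
    mesh, velocity = params
    logfile = f"case_m{mesh}_v{velocity}.log"
    with open(logfile, "w") as log:
        # "mysolver" is a hypothetical stand-in for your CFD/FEA solver.
        subprocess.run(["mysolver", f"--mesh={mesh}", f"--velocity={velocity}"],
                       stdout=log, stderr=subprocess.STDOUT, check=False)
    return logfile

if __name__ == "__main__":
    sweep = list(itertools.product([0.5, 1.0, 2.0], [10, 20, 30, 40]))  # 12 cases
    with ProcessPoolExecutor(max_workers=32) as pool:   # one case per core
        for finished in pool.map(run_case, sweep):
            print("finished", finished)
```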
You are right, uncovering the potential of a system like that is almost impossible for home use. With most software unable to use even a couple of cores efficiently, 32 is certainly something for the future.
Personally I'm looking forward to something based on 2200GE for home use.
- It has ALL THE PORTS
- No gimmicky touch bar
- No butter-a$$ keyboard
If Apple doesn't revert these dumb changes, I will simply get a Dell or something else.