
As described above and in Cosma Shalizi’s post, the number of steps required to solve a linear programming problem with n products and m constraints is proportional to (m+n)^(3/2) * n^2. The USSR had about 12 million types of goods. If you cross them with about 1000 possible locations, that gives you 12 billion variables, which according to Cosma would correspond to an optimization problem that would take a thousand years to solve on a modern desktop computer. However, if Moore’s Law holds up, it would become possible to solve this problem reasonably quickly within about 100 years.
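
A back-of-the-envelope check of those numbers (all assumptions mine, for illustration: m on the order of n, Moore's law as one compute doubling every two years, and the quoted thousand-year runtime taken as given):

  # Rough sanity check of the scaling and Moore's-law claims above.
  # Assumptions (mine): m ~ n constraints, one doubling every 2 years,
  # and the quoted thousand-year runtime taken as given.

  def lp_steps(n, m):
      # Interior-point step count ~ (m + n)^(3/2) * n^2, constants dropped.
      return (m + n) ** 1.5 * n ** 2

  goods = 12e6      # ~12 million product types
  n = goods * 1e3   # crossed with ~1000 locations: ~12 billion variables

  # Relative cost of the location-resolved problem vs. goods alone:
  print(f"{lp_steps(n, n) / lp_steps(goods, goods):.1e}")  # ~3.2e10

  # Moore's-law extrapolation from the quoted thousand-year figure:
  speedup = 2 ** (100 / 2)                   # 50 doublings in 100 years
  seconds = 1000 * 365.25 * 86400 / speedup
  print(f"{seconds:.1e} s")                  # ~2.8e-5 s

Fifty doublings is a factor of about 10^15, which is what makes the hundred-year extrapolation work out despite the enormous starting cost.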

I found Red Plenty fascinating, and I enjoyed the Crooked Timber seminar that Chris Said links to, particularly Shalizi’s post. I'm still wondering: why is the unit of compute "a modern desktop computer"? Is parallel scaling poor for this kind of problem?

If parallel scaling is poor, might better speedups be available from ASICs? Or are there heuristics that run faster and produce usually-close-to-optimal solutions on this type of problem? I'm just running through the common ways to accelerate compute-intensive tasks that run slowly on a single socket of general-purpose CPUs. For all I know, none of these avenues are promising.
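
To make the heuristic question concrete, here's a toy sketch (hypothetical numbers; scipy's off-the-shelf HiGHS solver): in the simplest single-constraint case, a greedy value-density heuristic already matches the LP optimum exactly, so the real question is how gracefully such heuristics degrade once millions of constraints couple the variables.

  # Toy example (hypothetical numbers): the simplest planning-style LP,
  # a fractional knapsack. With one resource constraint, the greedy
  # "best value per unit of resource first" rule is exactly optimal;
  # with many coupled constraints it degrades.
  import numpy as np
  from scipy.optimize import linprog

  rng = np.random.default_rng(0)
  n = 1000
  value = rng.uniform(1, 10, n)    # value per unit of each good
  cost = rng.uniform(0.1, 1.0, n)  # resource used per unit of each good
  budget = 100.0                   # total resource available

  # Exact LP: maximize value @ x  s.t.  cost @ x <= budget, 0 <= x <= 1
  res = linprog(-value, A_ub=cost[None, :], b_ub=[budget],
                bounds=(0, 1), method="highs")

  # Greedy: fill goods in decreasing order of value per unit of resource
  x = np.zeros(n)
  remaining = budget
  for j in np.argsort(-value / cost):
      x[j] = min(1.0, remaining / cost[j])
      remaining -= cost[j] * x[j]

  print(f"LP: {-res.fun:.2f}  greedy: {value @ x:.2f}")  # agree to rounding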


