Hacker News | soganess's comments

Are launch costs really 10x!? Could I get a source for that?

In the back of my head this all seemed astronomically far-fetched, but $5.5 million to get 8 GPUs in space... wild. That isn't even a single TB of VRAM.

Are you maybe factoring the cost of powering them in space into that $5.5 million?


The Falcon Heavy is $97 million per launch for 64,000 kg to LEO, about $1,500 per kg. Starship is gonna be a factor 10, or if you believe Elon a factor 100, cheaper. A single NVidia system is ~140 kg, so a single flight can carry 350 of them plus 15,000 kg for the systems to power them. Right now, $97 million to get it into space seems like a weird premium.

Maybe with Starship the premium is less extreme? $10 million per 350 NVidia systems already seems within margins, and $1M would definitely put it in the range of a rounding error.
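A back-of-the-envelope sketch of the figures above (all inputs are the numbers assumed in this thread, not verified specs):

```python
# Launch-economics sanity check using the thread's assumed figures.
falcon_heavy_cost = 97e6        # $ per launch (assumed)
falcon_heavy_payload = 64000    # kg to LEO (assumed)
gpu_system_mass = 140           # kg per 8-GPU system (assumed)

cost_per_kg = falcon_heavy_cost / falcon_heavy_payload
systems = 350
payload_used = systems * gpu_system_mass
margin = falcon_heavy_payload - payload_used

print(f"${cost_per_kg:,.0f}/kg to LEO")
print(f"{payload_used} kg of GPU systems, {margin} kg left for power/cooling")

# Starship scenarios: factor 10 and factor 100 cheaper per launch
for factor in (10, 100):
    print(f"factor {factor}: ${falcon_heavy_cost / factor / 1e6:.1f}M per launch")
```

At the assumed masses, 350 systems use 49,000 kg, leaving about 15,000 kg of margin per Falcon Heavy flight.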

But that's only the Elon style "first principles" calculation. When reality hits it's going to be an engineering nightmare on the scale of nuclear power plants. I wouldn't be surprised if they'd spend a billion just figuring out how to get a datacenter operational in space. And you can build a lot of datacenters on earth for a billion.

If you ask me, this is Elon scamming investors for his own personal goals, which is just the principle of having AI be in space. When AI is in space, there's a chance human derived intelligence will survive an extinction event on earth. That's one of the core motivations of Elon.


I guess he adds the weight of all the hardware to make the whole thing work.

You also need square kilometers of radiators to cool 100 MW

1. It's a moral good (free as in freedom). Wider Linux adoption makes software more free for everyone and creates a feedback loop: more users means more engineering effort, which improves the many many projects we colloquially call Linux, which (i++) attracts more users. As a corollary to #1: do you really want Billy G spying on your mom?

2. It's often better for the environment to keep old hardware running (manufacturing emissions usually dwarf operational ones for consumer devices).

And a more personal corollary to #2: I love old hardware and don't want to see it die (and I'm not talking about vintage tech). A 16+ core Haswell Xeon (that riiiing) and Polaris RX 480 (HWS, why yes) remain perfectly useful in the modern world. I like knowing both are out there, somewhere, just chugging away long after they were retired from some server or mining operation.


> Wider Linux adoption makes software more free for everyone and creates a feedback loop: more users means more engineering effort, which improves the many many projects we colloquially call Linux

I don't think that's necessarily true. More users may mean that some platform (say Ubuntu) gets so much traction that the rest becomes irrelevant. I already see "free as in freedom" projects that only support the last two versions of Ubuntu, and couldn't care less about other distros. To the point where they will have hard dependencies on things that only work on Ubuntu and are very difficult to adapt to other distros.

> I love old hardware and don't want to see it die

I have a counter-example with Android. Android/AOSP is pretty good with backward compatibility. It is pretty easy for a developer to compile an app for older devices, the OS totally supports it.

But developers/companies will just happily target newer devices and drop older ones ("98% of our users are on Android "X", let's drop the support for older ones") and tend to test their apps on recent hardware (meaning that a perfectly fine device will still be able to run the app, but it will lag to the point where it is unusable). Happened to me: I had to change my phone because random apps (like banking or weather forecast, I'm not talking high-performance like games here) became unusable. A banking app just shows a few numbers, still they manage to make it lag on a phone from 2020.


Can someone tell me what I am missing here?

This seems to suffer from a finite-size effect. Wolfram's machines have a tiny state space (s ≤ 4, k ≤ 3). For some class of NP problems, this will be insufficient to encode complex algorithms and is low dimensional enough that it is unlikely to be able to encode hard instances ("worst case") of the problem class. The solution space simply cannot support them.

In this regime, hard problem classes only present easy instances; think random k-SAT below the satisfiability threshold, where algorithms like FIX (Coja-Oghlan) solve the problem in polynomial time whp. In random k-SAT, the "hardness" cannot emerge away from the phase transition, and by analogy (watch my hand wave in the wind so free) I can imagine that it would not exist at small scales. Almost like the opposite of the overlap gap property.
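To make the threshold picture concrete, here's a tiny brute-force experiment (an illustrative sketch: `random_3sat`, the seed, and the densities are my own choices, and n is kept small so exhaustive checking is feasible):

```python
import itertools
import random

def random_3sat(n, m, rng):
    """m random 3-clauses over n variables; each literal v or -v with equal odds."""
    return [[v if rng.random() < 0.5 else -v
             for v in rng.sample(range(1, n + 1), 3)]
            for _ in range(m)]

def satisfiable(clauses, n):
    """Brute force over all 2^n assignments (only viable for small n)."""
    return any(
        all(any(bits[abs(l) - 1] == (l > 0) for l in c) for c in clauses)
        for bits in itertools.product([False, True], repeat=n))

rng = random.Random(0)
n, trials = 10, 30
results = {}
for alpha in (3.0, 4.27, 6.0):  # clause density m/n; ~4.27 is the 3-SAT threshold
    m = int(alpha * n)
    results[alpha] = sum(
        satisfiable(random_3sat(n, m, rng), n) for _ in range(trials)) / trials
    print(f"alpha={alpha}: {results[alpha]:.2f} of instances satisfiable")
```

Well below the threshold nearly every instance is satisfiable; well above, almost none are; the interesting behavior is pinned near the transition.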

Wolfram's implicit counter-claim seems to be that the density of irreducibility among small machines approximates the density in the infinite limit (...or something? Via his "Principle of Computational Equivalence"), but I'm not following that argument. I am sure someone has brought this up to him! I just don't understand his response. Is there some way of characterizing / capturing the complexity floor of a given problem (for an NP-hard problem P, the reduced space needs to be at least as big as some S to, whp, describe a few hard instances)?


The cynic in me says those interesting but ultimately barren long-form articles are just content marketing for Mathematica.


No lol, Stephen Wolfram is more invested in his writings than he is in Mathematica. He genuinely believes he’s going to revolutionize math and physics.

He’s smarter than your average nutjob, but he’s still a bit of a crank.


I think you have it wrong. Wolfram's claim is that for a wide array of small (s,k) (including s <= 4, k <= 3), there's complex behavior and a profusion of (provably?) Turing machine equivalent (TME) machines. At the end of the article, Wolfram talks about awarding a prize in 2007 for a proof that (s=2,k=3) was TME.

The `s` stands for states and `k` for colors, without talking at all about tape length. One way to say "principle of computational equivalence" is that "if it looks complex, it probably is". That is, TME is the norm, rather than the exception.

If true, this probably means that you can make up for the clunky computation power of small (s,k) by conditioning large swathes of input tape to overcome the limitation. That is, you have unfettered access to the input tape and, with just a sprinkle of TME, you can eke out computation by fiddling with the input tape to get the (s,k) machine to run how you want.

So, if finite-size scaling effects were actually at play, they would only work in Wolfram's favor. If there's a profusion of small TME (s,k), one would expect computation to only get easier as (s,k) increases.
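For concreteness, a minimal (s,k) machine simulator sketch. The example rule table is the classic 2-state, 2-color busy beaver, not one of Wolfram's machines; it halts after 6 steps leaving four 1s on the tape:

```python
from collections import defaultdict

def run_tm(rules, state, steps):
    """Simulate an (s states, k colors) Turing machine on a blank tape.
    rules: (state, color) -> (new_color, move, new_state); move is +1/-1.
    Returns (tape, number of transitions executed before halting)."""
    tape, head = defaultdict(int), 0
    for t in range(steps):
        key = (state, tape[head])
        if key not in rules:  # no applicable rule: halt
            return tape, t
        color, move, state = rules[key]
        tape[head] = color
        head += move
    return tape, steps

# The 2-state, 2-color busy beaver.
bb2 = {
    ("A", 0): (1, +1, "B"),
    ("A", 1): (1, -1, "B"),
    ("B", 0): (1, -1, "A"),
    ("B", 1): (1, +1, "H"),  # "H" has no rules, so the machine halts
}
tape, n_steps = run_tm(bb2, "A", 100)
print(n_steps, sum(tape.values()))  # → 6 4
```

Note there's no bound on tape here; the tape grows as needed, which is the "unfettered access to the input tape" point above.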

I think you also have the random k-SAT business wrong. There's this idea that "complexity happens at the edge of chaos" and I think this is pretty much clearly wrong.

Random k-SAT is, from what I understand, effectively almost surely polynomial time solvable. Below the critical threshold, when it's almost surely satisfiable, I think something as simple as WalkSAT will work. Above the threshold, it's easy to determine in the negative that the instance is unsatisfiable (I'm not sure if DPLL works, but I think something does?). Near, or even "at", the threshold, my understanding is that something like survey propagation effectively solves this [0].
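A minimal sketch of the WalkSAT idea (the function and the demo formula are illustrative choices of mine, not a production solver):

```python
import random

def walksat(clauses, n, rng, max_flips=10000, p=0.5):
    """WalkSAT sketch: start from a random assignment, then repeatedly pick an
    unsatisfied clause and flip either a random variable in it (with prob p)
    or the variable whose flip breaks the fewest clauses."""
    assign = [rng.random() < 0.5 for _ in range(n)]

    def sat(c):
        return any(assign[abs(l) - 1] == (l > 0) for l in c)

    for _ in range(max_flips):
        unsat = [c for c in clauses if not sat(c)]
        if not unsat:
            return assign  # satisfying assignment found
        clause = rng.choice(unsat)
        if rng.random() < p:
            lit = rng.choice(clause)  # random-walk move
        else:
            def breaks(l):  # clauses broken by flipping l's variable
                v = abs(l) - 1
                assign[v] = not assign[v]
                broken = sum(not sat(c) for c in clauses)
                assign[v] = not assign[v]
                return broken
            lit = min(clause, key=breaks)  # greedy move
        assign[abs(lit) - 1] = not assign[abs(lit) - 1]
    return None  # gave up; instance may be unsatisfiable

# Tiny demo formula (literal k means variable k, -k its negation); it is
# satisfiable, e.g. by setting every variable True.
clauses = [[1, 2, -3], [-1, 3, 4], [2, -4, 1], [-2, 3, -1]]
model = walksat(clauses, 4, random.Random(1))
print(model is not None)
```

The incompleteness is the point: WalkSAT can certify satisfiability by exhibiting a model but can never certify unsatisfiability, which is why the above-threshold regime needs a different kind of algorithm.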

k-SAT is a little clunky to work in, so you might take issue with my take on it being solvable, but consider something like Hamiltonian cycle on (Erdos-Renyi) random graphs: Hamiltonicity has a phase transition, just like k-SAT (and a host of other NP-complete problems), yet there is provably an almost-sure polynomial time algorithm to decide it, even at the critical threshold [1].

There's some recent work on choosing "random" k-SAT instances from different distributions, and I think that's more hopeful for finding difficult random instances, but I'm not sure there's actually been a lot of work in that area [2].

[0] https://arxiv.org/abs/cs/0212002

[1] https://www.math.cmu.edu/~af1p/Texfiles/AFFHCIRG.pdf

[2] https://arxiv.org/abs/1706.08431


  > “so desperate to contend”
The only thing desperate is people plugging their ears and lalala-ing “HN is not for politics.”

The tech talk here is embarrassingly shallow. Depth is now the rare exception. If I want deep dives, I’ll go see about some crabs. This place now exists to launder the tech worldview, and that’s an inherently political act. Pretending it isn’t doing that is political too.

Like most folks here nowadays, I primarily come for the politics. The difference is I’m not lying to myself about my posture.


I still primarily come for the tech/hacker vibe, but also understand and accept that continually hitting snooze on important political items will threaten my future ability to enjoy the former.

I also respect the fact that this is run by an SV incubator and everything that entails. Out of all the discussion platforms, this one is still the most sane, for now. Moderation is a thankless job, and they do a pretty good job with that around here compared to all of the alternatives.


I consider myself extremely confrontational on here (especially compared to myself in meatworld), but in my 13 years on HN I have had only one direct disagreement with dang, and it was about the definition of the hazelnut spread Nutella.

I am sure he and I disagree on most things, but I don't fault the primary moderator. In general, dang seems pretty laissez-faire. I am venting at the flag brigade: what news gets flagged, and more importantly what doesn't.

I come here to stick my thumb in the wind and see what the prevailing tech view is. As for the tech itself, I am more "if I learn something cool along the way, neat," so I guess I am here for the vibe as well. I just wish folks were more honest about what HN is (like you are being here). Things change; it's okay for them to change.


> The only thing desperate is people plugging their ears and lalala-ing “HN is not for politics.”

> Off-Topic: Most stories about politics, or crime, or sports, or celebrities, unless they're evidence of some interesting new phenomenon. Videos of pratfalls or disasters, or cute animal pictures. If they'd cover it on TV news, it's probably off-topic.


Do you honestly think I have not read that before?

I don't even know where that is posted, I just see folks quote it all the time. I obviously don't abide, those were written at a different time on a different internet with a different HN.

NOTE: I slightly restructured this without noticing the reply. The poster below is not misquoting me in any way.


> I don't even know where that is posted,

See: https://news.ycombinator.com/newsguidelines.html

It's intentionally stochastic, outlining a preferred shape of topics and allowing exceptions that engage and promote novel and curious discussion.

These are novel and interesting times, no doubt, and yet there's still an exuberance of drum banging and closed minded repetition on various topics leading to many but not all of the current event threads being organically weighted down.


The person you're replying to doesn't abide by their own stated principles. It's a mistake to think that they're merely reminding you of the rules. No, it's an order.

Just look at how many political comments they made. Especially the downvoted and/or flagged ones. They are horrendous. So many words spent justifying ICE murders and lying about how the victims were violent terrorists. They spend all day thinking about how best to downplay the egregious actions of the current regime, at one point writing how ICE agent's masks are just merely "face/neck warmers."

This isn't a person who should be taken seriously.


They also recently complained about their own experience being downvoted and flagged for ostensibly political reasons, which also cuts against the guidelines. It’s selective adherence at best.

https://news.ycombinator.com/item?id=46773004 (now dead)

Dang is right when he says that every politically-charged commenter thinks their specific ideology is the one that’s oppressed here.


> Do you honestly think I have not read that before? Like seriously?

It immediately and completely refutes your position, so if you have read it then you have no excuse.

> I don't even know where that is posted, I just see folks quote it all the time.

It is in one of the links in the page footer.

> I obviously don't abide, those were written at a different time with a different HN.

Two wrongs don't make a right. The policy is there for a reason, and I'm confident that any of the moderators will happily tell you that it's meant exactly as seriously now as it was at the beginning. But you don't have to take my word for it; you can also email hn@ycombinator.com.


I am so confused wrt what you are attempting to do.

I literally do not care what the policy says. Must I say it that way? The policy

  (1) is logically incoherent

  (2) is not policed in an equitable way

  (3) is used to launder a worldview to young tech workers just coming to hn

  (4) ... do I need to keep going? because I can keep saying stuff

  (5) is just random bits on a server
I don't like it, many people don't like it, it has a negative chilling effect on the hn community. I am regularly voicing my concern in an effort to create my desired outcome. FWIW, the folks that want the rules changed are generally the most aware of them.


Respectfully, I completely disagree with you on every point (except that I trust that you could indeed "keep going").


The boss is doing a little nod to Desolation Row? I see you.


https://en.wikipedia.org/wiki/Bisan_Owda

Sounds like a real monster... wait... nvm.

In all seriousness, I don't know a ton about her, but, if her Wikipedia is to be believed, the only thing she is guilty of is being a little "Leave Brittney Alone" extra while living through a genocide:

https://www.youtube.com/watch?v=nuTV6hrjQW8

But maybe there is another side to her posting that isn't available from a terse search?


"addict"

Great idea! Let's pathologize another thing! I love quickly othering whole concepts and putting them in my brain's "bad" box so I can feel superior.


Major problems with Firefox include:

  - full uBlock support

  - the ability to still be themed

  - first-party isolation
...Okay, okay, I’m being too cheeky.

The common wisdom is that overall Firefox can feel bottlenecked at render and draw times (“less snappy”). That could be a result of a slower JavaScript engine (takes longer to get to drawing), or a result of poorer hardware acceleration (slower drawing), or a less optimized multiprocessing/multithreading model (more resource contention when drawing).

I honestly can't see it in the real world, but synthetic benchmarks are pretty clear on that front.


Agreed!

It's funny because the person who wrote that article clearly loves computers (and has loved them for a long time)! I guess we are all a bunch of contradictions zipped up in a meat sack.


  "I usually don't comment on stories like this..."
I humbly suggest you revert the commit that changed your policy.

  "women needs to get a grip... men who are 40 or 50 years older"
Because all men 40 or 50 years older than her are rapists? We should really just put them in jail! Or is it because women in their 20s shouldn't have the right to date older men without assuming they are going to get raped?

Wild idea: every adult should be able to date every other adult without having to do rape calculus.

