Hacker News | holowoodman's comments

> I'm not sure if they still can do it, but the English made nuclear submarines.

Not really. The Polaris and Trident SLBM systems, as well as the nukes they carry, are US designs that the UK is allowed to use. And while the current PWR2 reactor is a British design, it is lacking, which is why the next-generation PWR3 will be based on the US S9G reactor.


The Trafalgar class were nuclear attack submarines made at Barrow-in-Furness shipyard in Cumbria. The current Astute class were also made there.

A nuclear submarine is one with nuclear propulsion, not nuclear weapons (just like a diesel-electric submarine is one with a diesel engine and batteries, not diesel weapons).


> All this "Americans must realize you are now PARIAHS and will NEVER BE TRUSTED AGAIN" business will seem novel to people today, but this was true when I was younger and America had just invaded Iraq right after Afghanistan.

Nobody really cared about Iraq or Afghanistan. Sure, it was fashionable to pretend to care, to get on a high horse and tell the USian rabble how immoral they were. But at the same time, the people on their high horses were also glad that there was no Saddam Hussein anymore and that the Taliban were beaten (or so it seemed back then).

It's different now because the US threatened to invade the Kingdom of Denmark, a supposedly very close ally. Even the threat of doing that is a red line that will be very very hard to uncross after Trump.


Yes, and I'm sure that the next time the US does something against European interests, it will again be the case that the last time was just pretense but this time it's real. The thing with terminal declarations is that there is no pathway back. If the US was never to be trusted again after the Iraq War, then that "never" already covers today, so telling us that we will never be trusted now carries no significance. We're already past that declaration. That's what the word 'never' means.

The US-Europe military-economic bloc is a strong structure, but of the two, Europe is the weaker, and the participants in Europe stand or fall by weak ties. Without NATO, it isn't even clear whether Poland would have allies. Each of the constituent countries has leaders aware of this, and I'm sure they'll attempt to keep the structure intact. If they fail, they fail, but all these dramatic declarations won't have been significant either way. The declarations themselves are just emotional outbursts without even the semblance of self-interest.

I mean, think about it. If the US has no pathway back to normalcy in relations ("never be trusted"), then the cost for all future Presidents of intervening militarily is low. After all, trust is at its minimum value and guaranteed not to rise. If Greenland is core to US interests and Denmark has decided there is no pathway back to normalcy, invasion is on the table for all Presidents, Democrat or Republican.

Essentially, once you decide that you will never normalize relations, then you're just an adversary: not even a potential future ally. And those who pitch themselves as guaranteed adversaries had better find allies quick.


I didn't say "never", just "very very hard".

Just think of the relations the US has with the British. Back in the day, after the War of Independence, I'm quite sure there were quite a few people in the US who said something like "never will we have cordial relations with the Kingdom of Britain"...


No, you did not say that, but that was the context of the conversation.

> I think every American needs to understand this quote:

> > "We will never fucking trust you again."


I guess that's just the usual hyperbole in these kinds of heated discussions. I mean, it is basically the same as all those instances of TACO ("Trump Always Chickens Out"): propose something outrageous, outlandish and absolute, then later compromise on something lesser.


But if I have to relink everything, I need all the makefiles, linker scripts and source code structure. I might as well compile it outright. On the other hand, I might as well just link it whenever I run it, like, dynamically ;)


Statically linked binaries are a huge security problem, as are containers, for the same reason. Vendors are too slow to patch.

With dynamic linking against shared OS libraries, updates are far quicker and easier.

And as for the size advantage, just look at a typical Golang or Haskell program. Statically linked, two-digit megabytes, larger than my libc...


This is the theory, but not the practice.

In decades of using and managing many kinds of computers, I have seen only a handful of dynamic libraries for which security updates have been useful, e.g. OpenSSL.

On the other hand, I have seen countless problems caused by updates of dynamic libraries that have broken various applications, not only on Linux but even on Windows, and even for Microsoft products such as Visual Studio.

I have also seen a lot of space and time wasted by the need to install, through various hacks, a great number of versions of the same dynamic library on one system in order to satisfy the conflicting requirements of various applications. I have also seen systems bricked by a faulty update of glibc when they did not have any statically linked rescue programs.

On Windows such problems are much less frequent, but only because a great number of applications bundle, in their own directory, the desired versions of various dynamic libraries, and Windows is happy to load those. On UNIX derivatives this usually does not work, as the dynamic linker searches only the standard locations for libraries.

Therefore, in my opinion, static linking should always be the default, especially for something like the standard C library. Dynamic linking should be reserved for a few special libraries where there are strong arguments that it is beneficial, i.e. where there really is a need to upgrade the library without upgrading the main executable.

Golang is probably an anomaly. C-based programs are rarely much bigger when statically linked than when dynamically linked. The main exception is "printf", which is typically implemented in such a way that it pulls a lot of code into any statically linked program; this is why C standard libraries intended for embedded computers usually ship special lightweight "printf" variants to avoid the overhead.
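
To make the printf point concrete, here is a minimal sketch of the comparison (assuming gcc and glibc on Linux; the exact numbers vary a lot between toolchains and libcs, and the file names are just for illustration):

    /* hello_printf.c -- pulls the full stdio machinery into a static link */
    #include <stdio.h>

    int main(void) {
        printf("hello, world\n");
        return 0;
    }

    /* hello_write.c -- bypasses stdio, so a static link drags in far less */
    #include <unistd.h>

    int main(void) {
        static const char msg[] = "hello, world\n";
        (void)write(STDOUT_FILENO, msg, sizeof msg - 1); /* return value ignored in this sketch */
        return 0;
    }

    /* Compare the resulting binaries, e.g.:
     *   gcc -O2 -static -o printf_static hello_printf.c
     *   gcc -O2 -static -o write_static  hello_write.c
     *   size printf_static write_static
     */

With a smaller libc (musl, or the embedded libcs mentioned above), both come out far smaller, which is exactly the lightweight-printf effect.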


> In decades of using and managing many kinds of computers I have seen only a handful of dynamic libraries for whom security updates have been useful, e.g. OpenSSL.

> On the other hands, I have seen countless problems caused by updates of dynamic libraries that have broken various applications,

OpenSSL is a good example of both useful and problematic updates. The number of updates that fixed a critical security problem but needed application changes to work was pretty high.


I've heard this many times, and while there might be data out there in support of it, I've never seen that, and my anecdotal experience is more complicated.

In the most security-forward roles I've worked in, the vast, vast majority of vulnerabilities identified in static binaries, Docker images, Flatpaks, Snaps, and VM appliance images fell into these categories:

1. The vendor of a given piece of software based their container image on an outdated version of e.g. Debian, and the vulnerabilities were coming from that, not the software I cared about. This seems like it supports your point, but consider: the overwhelming majority of these required a distro upgrade, rather than a point dependency upgrade of e.g. libcurl or whatnot, to patch the vulnerabilities. Countless times, I took a normal long-lived Debian test VM and tried to upgrade it to the patched version and then install whatever piece of software I was running in a docker image, and had the upgrade fail in some way (everything from the less-common "doesn't boot" to the very-common "software I wanted didn't have a distribution on its website for the very latest Debian yet, so I was back to hand-building it with all of the dependencies and accumulated cruft that entails").

2. Vulnerabilities that were unpatched or barely patched upstream (as in: a patch had merged but hadn't been baked into released artifacts yet--this applied equally to vulns in things I used directly, and vulns in their underlying OSes).

3. Massive quantities of vulnerabilities reported in "static" languages' standard libraries. Golang is particularly bad here, both because they habitually over-weight the severity of their CVEs and because most of the stdlib is packaged with each Golang binary (at least as far as SBOM scanners are concerned).

That puts me somewhat between a rock and a hard place. A dynamic-link-everything world with e.g. a "libgolang" versioned separately from apps would address the 3rd item in that list, but would make the 1st item worse. "Updates are far quicker and easier" is something of a fantasy in the realm of mainstream Linux distros (or copies of the userlands of those distros packaged into container images); it's certainly easier to mechanically perform an update of dependency components of a distro, but whether or not it actually works is another question.

And I'm not coming at this from a pro-container-all-the-things background. I was a Linux sysadmin long before all this stuff got popular, and it used to be a little easier to do patch cycles and point updates before container/immutable-image-of-userland systems established the convention of depending on extremely specific characteristics of a specific revision of a distro. But it was never truly easy, and isn't easy today.


It would be nice if there were a binary format where you could easily swap out static objects for updated ones.


The OS does. Nvidia doesn't.


Does Nvidia not support OpenGL?


Not really. Nvidia's OpenGL is incompatible with all existing OS OpenGL interfaces, so you need to ship a separate libGL.so if you want to run on Nvidia. In some cases you even need separate binaries, because if you dynamically link against Nvidia's libGL.so, the result won't run with any other libGL.so. Sometimes also vice versa.


Does AMD use a statically linked OpenGL?


AMD uses the dynamically linked system libGL.so, usually Mesa.


So you still need dynamic linking to load the right driver for your graphics card.


Most stuff like that uses some kind of ICD (installable client driver) mechanism that does dlopen on the vendor-specific parts of the library. AFAIK neither OpenGL nor Vulkan nor OpenCL is usable without at least dlopen, if not full dynamic linking.
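
A minimal sketch of that mechanism in C (the library and symbol names here are made up for illustration; real Vulkan/OpenCL loaders find the vendor .so via ICD manifest files, but the core is still dlopen/dlsym):

    /* icd_sketch.c -- load a vendor driver at runtime instead of linking it.
     * Build with: gcc icd_sketch.c -ldl
     */
    #include <dlfcn.h>
    #include <stdio.h>

    int main(void) {
        /* A real loader would read this path from an ICD manifest, not hard-code it. */
        void *drv = dlopen("libvendor_gl.so", RTLD_NOW | RTLD_LOCAL);
        if (!drv) {
            fprintf(stderr, "dlopen failed: %s\n", dlerror());
            return 1;
        }

        /* Resolve a vendor entry point by name; all further calls go through
         * function pointers obtained this way. */
        void *(*get_proc)(const char *name) =
            (void *(*)(const char *))dlsym(drv, "vendor_get_proc");
        if (!get_proc) {
            fprintf(stderr, "dlsym failed: %s\n", dlerror());
            dlclose(drv);
            return 1;
        }

        /* ... dispatch GL/Vulkan/CL calls through the resolved pointers ... */

        dlclose(drv);
        return 0;
    }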


You may either rent or buy a device from your ISP, or you may bring your own, at your discretion. ISPs are required to accept all devices; of course, if your device kills the network segment, they will kill your connectivity. But they can't refuse to let you connect.


What happens if your device connects 1000 volts to the cable and fries everyone else's device and the head-end?


You get taken to court and sentenced to pay the damages? Same thing that happens with the TV cable that runs through the whole street. Or the cars parked openly along the road. If you damage it, you pay for it.


They are a tier-1 wannabe: tier 1 in prices, tier 3 in connectivity. No international peering to speak of, and negligible international cables and presence compared to a real tier 1.


> The behaviour of Telekom is the problem. That must change. The state has to ensure fairness rather than allow monopolies to milk The People.

The state is the monopoly here.

Telekom is still partially state-owned (~27%), since it was privatized in the 90s out of the former total monopoly "Deutsche Bundespost" and the related ministry, the "Bundespostministerium". Back then, that ministry regulated the EM spectrum and which phones were allowed (basically phone police: you had to rent from the Bundespost or go to jail) and was generally corrupt (the ministry's relations with copper manufacturers are why they botched the first fibre rollouts in '95 and then ignored the topic for 20 years). Nowadays the "Regulierungsbehoerde", staffed with the same people, is supposed to regulate their former colleagues at Telekom. Telekom got all the networks and was never split up, so it still has a near-monopoly (~85%?) on basically everything copper, as well as on customers, and it uses this monopoly to bully other ISPs as well as its own customers and to extend itself into future tech. And the state has a financial interest in this regulation being as lax as possible. So you can imagine how this goes...


> Most uses of fossil fuels are very inefficient. For instance, when you step on the accelerator in your car, only around 30% of the energy in the fuel you use actually is being used to propel you forward. The majority of the energy is wasted as heat. In a power plant that's more like 70% being captured and going towards the goal (electricity generation).

Yes, but there are also future inefficient uses of renewables. E.g. when making iron, you heat the ore (iron oxides) with coke (refined, sulfur-free coal). The coke provides extra heat and acts as a reduction agent, separating the oxygen atoms from the iron oxides. You can do the same thing with hydrogen as the reduction agent, to avoid producing CO2 and to avoid using fossil fuels. However, creating renewable hydrogen is at the moment only around 30% efficient, and storing and transporting it adds further losses. Even with possible improvements, that hydrogen will be a very inefficient and costly use of electricity, and at least half of it will always be wasted.

So in terms of total energy usage, making those kinds of industrial processes run on hydrogen means we will have to at least double our electricity output. And a lot of that additional output will be wasted on the inefficiency of electrolysis, compared to directly using coal or natural gas.
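
For reference, the two reduction routes described here are (textbook chemistry, written in LaTeX):

    \mathrm{Fe_2O_3} + 3\,\mathrm{CO} \;\rightarrow\; 2\,\mathrm{Fe} + 3\,\mathrm{CO_2} \quad \text{(blast furnace route, CO from coke)}
    \mathrm{Fe_2O_3} + 3\,\mathrm{H_2} \;\rightarrow\; 2\,\mathrm{Fe} + 3\,\mathrm{H_2O} \quad \text{(hydrogen direct reduction)}

In both cases the reducing gas carries the oxygen away; only the byproduct changes from CO2 to water.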


The interesting bit about using H2 in industrial processes is that, while inefficient, it's also the textbook example of a variable load. Solar and wind produce power extremely cheaply but intermittently, so in a grid they push down prices when they produce the most. Variable loads can, at least in theory, be run when prices are the cheapest.


Uh, can you provide any scientific papers showing that H2 can be used for iron smelting? CO2 is very stable, even at high temperatures. It's hard to strip the O2 from it (except via photosynthesis). Now, H2 itself is a very volatile gas. When burned, it creates water. Water is not stable at high temperatures. It becomes vapor, and when the temperature rises it can even break the bond between H2 and O.

So, papers or are you hallucinating?


They are already building such plants, so I would assume they have a plan.

But here is a paper (only the title is in German; the main part is in English): https://pure.unileoben.ac.at/files/1851525/AC06514880n01vt.p...



Are you suggesting that burning H2 will create water and enough energy to split the water back into H2 and oxygen again afterwards? That would be amazing news!

https://en.wikipedia.org/wiki/Steelmaking#Hydrogen_direct_re...


No, not at all. Coke or hydrogen only ever provide additional heat; they are never the main source of heat. The main heat source can be either coal or an electric arc furnace. The coke or hydrogen is necessary for the chemical reaction, and providing some heat is a side effect.


Sorry, in the face of OP's tone I allowed myself some sarcasm. Obviously there needs to be additional energy. You'd have some equilibrium with those reactions, and OP didn't make any argument for why it can't be pushed in favor of reducing Fe2O3.

It's also borderline unbelievable that OP has never heard of hydrogen in future steelmaking, if they are at all invested in the topic. You'd need a special kind of ignorance to think people are throwing huge amounts of money at this when the basic chemistry is infeasible.


Yeah, I had not, that's why I asked. Water and steel don't like each other. But thanks for the info. It seems it can be done in a controlled way.

Now I wonder how cost effective it is :)


Well, actually, thermolysis of water occurs at around 2200°C. Thermolysis of CO₂ starts at around 1400°C, and of CO at 3700°C. The melting point of iron is around 1500°C, and similarly for its oxides.

So water as a product is actually more stable than CO₂ and doesn't undergo thermolysis at the temperatures relevant for smelting iron. Whereas going the CO₂ route, there is the risk of producing relevant amounts of CO, which is less desirable and less efficient because it absorbs only half the oxygen.

Cost is a big question, but it will for sure be more expensive to use hydrogen. A back-of-the-envelope calculation (coal price of 250 $/t, needing about 1/3 t of H₂ for the same effect, so H₂ may cost up to 750 $/t, and needing 40 kWh/kg for H₂ electrolysis at 100% efficiency) gives a breakeven electricity price of 1.875 ct/kWh. While prices this low do occur from time to time due to overproduction, they will even out as soon as there is a market for that excess electricity through batteries, storage and electrolysis. Which means that, cost-wise, the H₂ route will never be more effective than coal. To make it viable, coal use needs to be made more expensive through taxes and tariffs.
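
Spelling out that back-of-the-envelope arithmetic (same assumptions as above: coal at 250 $/t, roughly 1/3 t of H₂ replacing 1 t of coal, about 40 kWh per kg of H₂ at ideal electrolysis efficiency):

    p_{\mathrm{H_2,\,breakeven}} = 3 \times 250\ \$/\mathrm{t} = 750\ \$/\mathrm{t} = 0.75\ \$/\mathrm{kg}
    p_{\mathrm{el,\,breakeven}} = \frac{0.75\ \$/\mathrm{kg}}{40\ \mathrm{kWh/kg}} = 0.01875\ \$/\mathrm{kWh} = 1.875\ \mathrm{ct/kWh}

Real electrolysis needs more than 40 kWh/kg, which pushes the breakeven electricity price even lower.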


Can you provide some citation for CO2 thermolysis? I found just one paper, from China...


That stuff is ages old; I doubt you will find current papers on it. Pick up a chemistry textbook or a table book, and you should find it somewhere in there.


> Now I wonder how cost effective it is :)

I believe right now it's expected to cost about 30% more. But we don't have a hydrogen economy yet, nor 1000 years of experimentation as we have with carbon as the reducing agent. There is probably still some room for innovation in materials science for every part of the process.


Make it possible to turn off PRs from new accounts, accounts with a low PR acceptance rate, or accounts that create lots of PRs all at once in unrelated projects. Or mark those kinds of PRs in a clearly visible way. Or make those kinds of issues and PRs non-public so that maintainers can silently drop them without creating publicity for the slop-spammers.

