Hacker News | jakewins's comments

I am surprised by the number of comments that say the assembler is trivial - it is admittedly perhaps simpler than some other parts of the compiler chain, but it’s not trivial.

What you are doing is kinda serialising a self-referential graph structure of machine code entries that reference each other's addresses - but you don't know the addresses, because the (x86) instructions are variable-length, so you can't know them until you generate the machine code. Chicken-and-egg problem.

Personally I find writing parsers much much simpler than writing assemblers.


The assembler is far from trivial, at least for x86, where there are many possible encodings for a given instruction. Emitting the optimal encoding that does the correct thing depends on surrounding context, and you'd have to do multiple passes over the input.

Can you give a single example where the optimal encoding depends on context? (I am assuming you're just doing an assembler where registers have already been chosen, vs. a compiler that can choose SSE vs. scalar, do register allocation, etc.)

“mov rcx, 0”. At least one assembler (the Go assembler) would at one point blindly (and arguably, incorrectly) rewrite this to “xor rcx, rcx”, which is smaller but modifies flags, which “mov” does not. I believe Go fixed this later, possibly by looking at surrounding instructions to see if the flags were being used, for instance by an “adc” later, to know if the assembler needs to pick the larger “mov” encoding.

Whether that logic should belong in a compiler or an assembler is a separate issue, but it definitely was in the assembler there.
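To make the flag-liveness question concrete, here is a rough Python sketch of the check described above (the instruction model and the reader/writer sets are illustrative, not a complete x86 table): rewrite "mov reg, 0" to the shorter "xor reg, reg" only when no later instruction reads the flags before something else overwrites them.

```python
# Illustrative subsets - a real assembler would need the full x86 flag tables.
FLAG_READERS = {"adc", "sbb", "jz", "jnz", "cmovz", "setc"}   # consume EFLAGS
FLAG_WRITERS = {"add", "sub", "xor", "cmp", "test"}           # overwrite EFLAGS

def flags_live_after(insns, i):
    """Would clobbering the flags at position i be observable later?"""
    for insn in insns[i + 1:]:
        if insn[0] in FLAG_READERS:
            return True        # someone reads flags first -> live, don't clobber
        if insn[0] in FLAG_WRITERS:
            return False       # flags overwritten before any read -> dead
    return True                # conservatively assume live at end of block

def shrink_mov_zero(insns):
    out = []
    for i, insn in enumerate(insns):
        if insn[0] == "mov" and insn[2] == 0 and not flags_live_after(insns, i):
            out.append(("xor", insn[1], insn[1]))   # 3 bytes instead of 7
        else:
            out.append(insn)
    return out
```

With a following `adc` (which consumes the carry flag) the rewrite is suppressed; with a following `add` (which clobbers the flags anyway) it goes through.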


Ok fair, I saw that as out of scope for an assembler - since that is a different instruction, not just a choice of how to encode the same one.

Jumps are another one. jmp can have many encodings depending on where the target offset you're jumping to is, but often the offset is not yet known when you first encounter the jump instruction and have to assemble it.

All you have to do is record a table of fixup locations you can fill in during a second pass, once the labels are resolved.
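A minimal sketch of that two-pass fixup approach (toy instruction set, hypothetical opcodes, only rel32 jumps - not a real x86 encoder): emit placeholder offsets for forward jumps, record where they live, then patch them once all labels have addresses.

```python
def assemble(insns):
    code = bytearray()
    labels = {}        # label name -> byte address
    fixups = []        # (patch location in code, target label)

    for insn in insns:
        if insn[0] == "label":
            labels[insn[1]] = len(code)
        elif insn[0] == "jmp":
            code.append(0xE9)                 # jump opcode
            fixups.append((len(code), insn[1]))
            code += b"\x00\x00\x00\x00"       # placeholder rel32
        elif insn[0] == "nop":
            code.append(0x90)

    for loc, target in fixups:                # second pass: patch offsets
        rel = labels[target] - (loc + 4)      # relative to end of jump insn
        code[loc:loc + 4] = rel.to_bytes(4, "little", signed=True)
    return bytes(code)
```

This works because every instruction here has a fixed size; the moment encodings can shrink or grow based on the offsets, you're into the relaxation territory discussed below.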

In practice, one of the difficulties in getting _clang_ to assemble the Linux kernel (as opposed to GNU `as` aka GAS), was having clang implement support for "fragments" in more places.

https://eli.thegreenplace.net/2013/01/03/assembler-relaxatio...

There were a few cases IIRC around usage of the `.` operator which means something to the effect of "the current point in the program." It can be used in complex expressions, and sometimes resolving those requires multiple passes. So supporting GAS compatible syntax in more than just the basic cases forces the architecture of your assembler to be multi-pass.
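A toy model of that relaxation problem, as I understand it (hypothetical sizes, loosely like x86 jmp rel8 vs jmp rel32): instruction sizes depend on label offsets, which depend on instruction sizes, so you iterate until a fixed point.

```python
def layout(insns):
    # Optimistic start: every jump short (2 bytes), labels take no space.
    sizes = [2 if i[0] == "jmp" else 0 if i[0] == "label" else 1
             for i in insns]
    while True:
        # Compute addresses and label positions under the current sizes.
        addr, labels, pos = 0, {}, []
        for insn, size in zip(insns, sizes):
            if insn[0] == "label":
                labels[insn[1]] = addr
            pos.append(addr)
            addr += size
        changed = False
        for i, insn in enumerate(insns):
            if insn[0] == "jmp" and sizes[i] == 2:
                disp = labels[insn[1]] - (pos[i] + sizes[i])
                if not -128 <= disp <= 127:
                    sizes[i] = 5      # widen; may push other jumps out of range
                    changed = True
        if not changed:
            return sizes
```

Widening one jump can push another jump's target out of rel8 range, which is exactly why a single pass isn't enough.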


I mean, no, it's more than that.

You also need to choose the optimal instruction encoding, and you need to understand how relocs work - which things can you resolve now, vs. which require you to encode info for the linker or loader to fill in later, etc. etc.
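The resolve-now vs. reloc-later split can be sketched like this (simplified; symbol names and offsets are made up, and real relocations also carry a type and addend):

```python
def resolve(references, local_symbols):
    """Patch references to symbols defined here; defer the rest to the linker."""
    patches, relocations = [], []
    for offset, symbol in references:
        if symbol in local_symbols:
            patches.append((offset, local_symbols[symbol]))   # fill in now
        else:
            relocations.append((offset, symbol))              # linker's job
    return patches, relocations

# A call to a helper in this object file resolves now; printf becomes a reloc.
patches, relocs = resolve([(0x10, "my_helper"), (0x24, "printf")],
                          {"my_helper": 0x80})
```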

Not sure why I'm on this little micro-rant about this; I'm sure Claude could write a workable assembler. I'm more like.. I've written one assembler and many, many parsers, and the parsers were way simpler, yet this thread is littered with people who seem to think assemblers are just lookup tables from ASCII to machine code with a loop slapped on top of them.


I don’t understand this summary - isn’t this a summary of the author's recitation of Masley's position? It’s missing the part that actually matters: the author's position and how it differs from Masley's.

Yep - it honestly reads like an LLM's summary, which often misses critical nuances.

I know, especially with the bullet points.

The meat there is when not to use an LLM. The author seems to mostly agree with Masley on what's important.


Aye, and to always look for feet under and by the front wheel of vehicles like that.

Stopped buses similarly: people get off the bus, whip around the front of it and straight into the street. So many times I’ve spotted someone’s feet under the front before they come around into the street.

Not to take away from Waymo here - I agree with the thread sentiment that they seem to have acted in an exemplary way.


You can spot someone's feet under the width of a bus when they're on the far side of it, and you're sitting in a vehicle at a much higher position on the opposite side of the bus? That's physically impossible.

In normal (traditional?) European city cars, yes, I look for feet or shadows or other signs that there is a person in the other side. In SUVs this is largely impossible but then sometimes you can see heads or backpacks.

Or you look for reflections in the cars parked around it. This is what I was taught as "defensive" driving.


I think you're missing something though, which I've observed from reading these comments - HN commenters aren't ordinary humans, they're super-humans with cosmic powers of awareness, visibility, reactions and judgement.

Or they hate cars/waymo/etc and will come up with any chain of reasoning that puts those things in a bad light.

lol no, but if you are in a regular vehicle you can see under the front of the bus as you pass it - it’s a standard safety practice. The first picture of the bus in this random article shows what I mean; you should be checking under the bus ahead of the front wheel as you pass: https://www.nvp.se/2026-01-08/bil-i-sparviddshinder-stoppade...

Googling this turned up a presentation from Waymo saying they do exactly this: https://www.reddit.com/r/waymo/comments/1kyapix/waymo_detect...


.. you are commenting on an article about how non-carbon-emitting energy options are beating out polluting alternatives, aided by exactly these taxes, so obviously yes, they are working exactly as intended: price signals for the market to get carbon out of the energy system

The purpose of the tax is not to raise money to plant trees, it’s to raise the cost of emissions so that markets move away from them


TFA's claim is that offshore wind prices are 40% cheaper than gas.

The parent comment stated "actual cost has to price in the impact of using it". Most people would agree on this. However, for both claims to be true, the collected tax revenue must be spent offsetting the impact of that gas usage - not simply reducing gas usage (i.e. the gas that is still consumed isn't being compensated for).

If the UK government is spending that tax revenue on anything it wants, then it's not the actual cost, is it?


Sorry I don’t follow. Why would the taxes need to be spent offsetting anything? The carbon reduction already happened, because the taxes made this auction choose lower emission alternatives.

If you then also spend the taxes on some form of offsets (if we pretend for the sake of argument that those work) you would have reduced emissions twice. One time seems plenty to say they are doing their job.


The most popular UK electricity retailer is Octopus Energy which is specifically focused on variable prices and flexible consumer demand. By what metric do you mean variable rate retailers are not popular?


Intermittency is already handled by the price mechanisms: prices are set quarter-hourly, and if you’re not available when there is high demand, you don’t get paid.

The marginal price windfalls happen specifically when you’re able to deliver at a low cost when demand is high in the same ISP (imbalance settlement period).

This just seems like data-free fear mongering.


I’m baffled that any other language would be considered - the only language that comes close to English in number of speakers is Mandarin, and Mandarin has nearly half a billion fewer speakers than English.

We should be happy there is a language that has emerged for people to communicate globally without borders, and support its role as the world's second language, rather than work to re-fracture how people communicate.


    > "I’m baffled that any other language would be considered"
There are direct trains between French and German cities, where additional announcements in French may be appropriate (and perhaps also English).

For local/regional trains, I wouldn't expect any language other than German.


I would say that for long distance trains only English and the local language should be enough.

For international trains, we should have all languages of all traversed countries and English. So for example a train from Paris to Frankfurt should have announcements in French, German and English (and it is actually the case for that train, I already rode it).

But for example, the Berlin - Warsaw train has only English announcements besides the local language depending on the country the train is in (so no Polish when it is in Germany, and no German when it is in Poland), I consider this to be wrong. It should have announcements in Polish, German and English for the whole route.


Agree with your last point. That's a weird choice. At least the stops either side of the border are guaranteed to have people who natively speak the other language.

I seem to recall lines in Belgium that do announcements in 4 languages: French, Flemish, German, and English.


I take trains like those for work, not to France but to Amsterdam, and I don’t speak German, French or Dutch. If we want a train system that allows Europeans to use it, there need to be announcements and signs in the language 50% of EU citizens speak.


I tried Bazel, Buck2 and Pants for a greenfield monorepo recently, Rust and Python heavy.

Of the three, I went with Buck2. Maybe it was just circumstance, with its Rust support being good and it not being built to replace Cargo?

Bazel was a huge pain - it broke all standard tooling by taking over Cargo's job, then was unable to actually build most packages without massive multi-day patching efforts.

Pants seemed premature - front page examples from the docs didn’t work, apparently due to breaking changes in minor versions, and the Rust support is very very early days.

Buck2 worked out of the box exactly as claimed, leaves Cargo intact so all the tooling works.. I’m hopeful.

Previously I’ve used Make for polyglot monorepos, but it requires an enormous amount of discipline from the team, so I’m very keen for a replacement with fewer footguns.


You’re covering a lot of ground here! The article is about producing container images for deployment, and has no relation to Bazel building stuff for you - if you’re not deploying as containers, you don’t need this?

On Linux vs Win32 flame warring: can you be more specific? What specifically is very very wrong with Linux packaging and dependency resolution?


> The article is about producing container images for deployment

Fair. Docker does trigger my predator drive.

I’m pretty shocked that the Bazel workflow involves downloading Docker base images from external URLs. That seems very unbazel like! That belongs in the monorepo for sure.

> What specifically is very very wrong with Linux packaging and dependency resolution?

Linux userspace for the most part is built on a pool of global shared libraries and package managers. The theory is that this is good because you can upgrade libfoo.so just once for all programs on the system.

In practice this turns into pure dependency hell. The common workaround is to use Docker, which completely nullifies the entire theoretical benefit.

Linux toolchains and build systems are particularly egregious at just assuming a bunch of crap is magically available in the global search path.

Docker is roughly correct in that computer programs should include their gosh darn dependencies. But it introduces so many layers of complexity, which are then solved by adding yet another layer. Why do I need estargz??

If you’re going to deploy with Docker then you might as well just statically link everything. You can’t always get down to a single exe. But you can typically get pretty close!


> I’m pretty shocked that the Bazel workflow involves downloading Docker base images from external URLs. That seems very unbazel like! That belongs in the monorepo for sure.

Not every dependency in Bazel requires you to "first invent the universe" locally. There are lots of examples of this: toolchains, git_repository, http_archive rules, and so on. As long as they are checksum'ed (as they are in this case) so that you can still output a reproducible artifact, I don't see the problem.


Everything belongs in version control imho. You should be able to clone the repo, yank the network cable, and build.

I suppose a URL with checksum is kinda sorta equivalent. But the article adds a bunch of new layers and complexity to avoid “downloading Cuda for the 4th time this week”. A whole lot of problems don’t exist if the binary blobs exist directly in the monorepo and the local blob store.

It’s hard to describe the magic of a version control system that actually controls the version of all your dependencies.

Webdev is notorious for old projects being hard to compile. It should be trivial to build and run a 10+ year old project.


If you did that, Bazel would work a lot better. Most of the complexity of Bazel is because it was originally basically an export of the Google internal project "Blaze," and the roughest pain points in its ergonomics were pulling in external dependencies, because that just wasn't something Google ever did. All their dependencies were vendored into their Google3 source tree.

WORKSPACE files came into being to prevent needing to do that, and now we're on MODULE files instead because they do the same things much more nicely.

That being said, Bazel will absolutely build stuff fully offline if you add the one step of running `bazel sync //...` in between cloning the repo and yanking the cable, with some caveats depending on how your toolchains are set up and of course the possibility that every mirror of your remote dependency has been deleted.


Making heavy use of remote caches and remote execution was one of the original design goals of Blaze (Google's internal version), IIRC, in an effort to reduce build time first and foremost. So kind of the opposite of what you're suggesting. That said, fully air-gapped builds can still be achieved if you just host all those cache blobs locally.


> So kind of the opposite of what you're suggesting.

I don’t think they’re opposites. It seems orthogonal to me.

If you have a bunch of remote execution workers then ideally they sit idle on a full (shallow) clone of the repo. There should be no reason to reset between jobs. And definitely no reason to constantly refetch content.


Also, it is possible to air-gap Bazel and provide the files offline, as long as they have the same checksum.
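The idea reduces to hash pinning, which is easy to sketch (a simplified stand-in for what Bazel's sha256-pinned fetches do, not Bazel's actual code): any locally provided archive is acceptable if its digest matches the pinned value.

```python
import hashlib

def verify_artifact(path, pinned_sha256):
    """Accept a locally supplied file iff its SHA-256 matches the pin."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 16), b""):
            h.update(chunk)
    return h.hexdigest() == pinned_sha256
```

Because the pin fully determines the accepted bytes, it doesn't matter whether the file came from the original URL, a mirror, or a USB stick carried across the air gap.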


> Energy Dome expects its LDES solution to be 30 percent cheaper than lithium-ion.

Grid-scale lithium is dropping in cost about 10-20% per year, so with a construction time of 2 years per the article, lithium will be cheaper by the time the next plant is completed.
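Back-of-envelope version of that argument (the 10-20%/yr decline and 30% gap are the figures from this thread, not independent data): assuming a constant compounding decline, how long does it take lithium to close a 30% price gap?

```python
import math

def years_to_close(gap_cheaper, annual_decline):
    # Solve (1 - annual_decline)**t == (1 - gap_cheaper) for t.
    return math.log(1 - gap_cheaper) / math.log(1 - annual_decline)

fast = years_to_close(0.30, 0.20)   # roughly 1.6 years at a 20%/yr decline
slow = years_to_close(0.30, 0.10)   # roughly 3.4 years at a 10%/yr decline
```

So at the fast end of the decline range the gap closes within the 2-year construction window; at the slow end it takes a bit longer.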


LDES: Long-Duration Energy Storage

Grid energy storage: https://en.wikipedia.org/wiki/Grid_energy_storage


Metrics for LDES: Levelized Cost of Storage (LCOS), Gravimetric Energy Density, Volumetric Energy Density, Round-Trip Efficiency (RTE), Self-Discharge Rate, Cycle Life, Technical Readiness Level (TRL), Power-to-Energy Decoupling, Capital Expenditure (CAPEX), R&D CapEx, Operational Expenditure (OPEX), Charging Cost, Response Time, Depth of Discharge, Environmental & Social Governance (ESG) Impact


Li-ion and even LFP batteries degrade; given a daily discharge cycle, they'll be at 80% capacity in 3 years. Gas pumps and tanks won't lose any capacity.


Lithium burns toxic. Carbon based solid-state batteries that don't burn would be safe for buses.

There are a number of new methods for reconditioning lithium instead of recycling.

Biodegradable batteries would be great for many applications.

You can recycle batteries at big box stores; find the battery recycling box at Lowes and Home Depot in the US.


These are LCOE numbers we are comparing, so that is factored in.

The fact that pumps, turbines, rotating generators don’t fail linearly doesn’t mean they are not subject to wear and eventual failure.

