norskeld's comments | Hacker News

Damn, I wish CloudFlare being down also affected local development, so I could take a break from doing frontend… :'(

I don't think this is the case with CloudFlare, but for every recent GitHub outage or performance issue... oh boy, I blame the clankers!


This wild `unwrap()` kinda took me aback as well. Someone really believed in themselves writing this. :)


They only recently rewrote their core in Rust (https://blog.cloudflare.com/20-percent-internet-upgrade/) -- given the newness of the system and things like "Over 100 engineers have worked on FL2, and we have over 130 modules", I wouldn't be surprised by further similar incidents.


The irony of a rust rewrite taking down the internet is not lost on me.


20% seems to grow every time someone writes about this.


While I agree that monopolies suck, I _absolutely hate_ having to waste my time adjusting styles and writing workaround code just to make everything look and work consistently in a multitude of browsers. This is one of the reasons — among a hundred others — that I grew to somewhat hate front-end, doubly so with the rise of mobile devices. And the more rendering engines we have, the more developers will have to fight frustrating battles with inconsistencies and quirks.


Indeed front-end development in software can be painful. Much of the cruft can be attributed to computing's byzantine history of incremental experimentation. You might take some comfort in knowing that the biological analogue is vastly more complicated: the transformation of genotype to phenotype. Trying to figure out the evolutionary pressures and various mutational accidents that drove particular biological changes feels way harder than trying to figure out WTF Project X was thinking when they decided to pivot from being a social network for dog walkers to a low-latency query planner for a database no one has heard of.


Janitor Engineers [0] are already a thing? Damn. Also, all links in this article starting from the "Why AI code fails at scale" section are dead for some reason, even though it was written only 5 days ago. That raises some questions...

EDIT: Not trying to offend anyone with this [0], I've actually had the same half-joking retirement plan since the dawn of vibe coding, to become an "all-organic-code" consultant who untangles and cleans up AI-generated mess.


I think specialising in brownfield has always been a thing. If anything, it's greenfield that's the rarity.


I’ve always found the pioneer, settler, town planner model to be a great way of thinking about this. Successful, long-term projects or organizations eventually can use all 3 types.

Maybe vibe coding replaces some pioneering work, but that still leaves a lot for settlers to do.

(I admit I’m generally in the settler category)

https://blog.gardeviance.org/2015/03/on-pioneers-settlers-to...


Thinking of retired COBOL programmers who still have a market...


There's still a market, but when I looked into COBOL work out of curiosity (I've never been anywhere near it in real life), the salaries I found were surprisingly low, compared with common modern languages.

Perhaps the old adage "it's getting hard to find X employees [at the price we are willing to pay]" applies.

That surprised me because I've seen articles and heard podcasts for years where they've said COBOL programmers are well paid due to scarcity, though never quoting amounts.


I wonder if COBOL projects these days, being brownfield by nature, are less political and stressful than brownfield web development projects.


It also has a catalogs feature for defining versions or version ranges as reusable constants that you can reference in workspace packages. It was almost the only reason (besides speed) I switched from npm a year ago and never looked back.
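
If I remember the syntax right, the gist is this (versions and package names are just placeholders):

  # pnpm-workspace.yaml
  packages:
    - "packages/*"
  catalog:
    react: ^18.3.1
    typescript: ^5.6.0

  // package.json of any workspace package
  {
    "dependencies": {
      "react": "catalog:"
    }
  }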


The workspace protocol in monorepos is also great; we're using it a lot.
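
For anyone who hasn't used it: you point a dependency at the local package with the workspace: protocol, pnpm links it during development, and the range gets rewritten to a real version on publish. Roughly (package name made up):

  // package.json of an app in the monorepo
  {
    "dependencies": {
      "@acme/ui": "workspace:*"
    }
  }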


OK, so it seems too good now; what are the downsides?


If you relied on hoisting of transitive dependencies, you'll now have to declare that fact in a project's .npmrc

Small price to pay for all the advantages already listed.
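
If memory serves, it's one of these .npmrc settings (the broad one throws away most of pnpm's strictness, so the pattern-based one is usually the better call):

  # .npmrc
  # hoist everything into the root node_modules:
  shamefully-hoist=true
  # or hoist only the packages that actually need it:
  public-hoist-pattern[]=*eslint*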


They’re moving all that to the pnpm-workspace.yaml file now


pnpm is great. I swapped to it a year ago after yarn 1->4 started looking like a new project with every version, and npm had an insane dependency resolution issue with platform-specific packages.

pnpm had good docs and was easy to put in place. Recommend


A few years ago it didn't work in all the cases where npm did. That made me stop using it, because I didn't want to constantly double-check things with two tools. The speed boost is nice, but I don't need to run npm install that often.


The downside is that you have to add a "p" in front, i.e. instead of "npm" you have to type "pnpm". That's all I'm aware of.


Personally, I didn't find an efficient way to create one Docker image for each of my projects in a pnpm monorepo.


That’s not really a pnpm problem on the face of it.
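
pnpm does have a deploy command aimed at exactly this, for what it's worth. A rough sketch of a multi-stage Dockerfile using it (package name, build script and output path are all made up, I'm assuming the root package.json pins pnpm via the packageManager field, and the exact flags have shifted a bit between pnpm versions):

  FROM node:22-slim AS build
  RUN corepack enable
  WORKDIR /repo
  COPY . .
  RUN pnpm install --frozen-lockfile
  # build a single package, then copy it plus only its production deps into /out
  RUN pnpm --filter @acme/api build
  RUN pnpm --filter @acme/api deploy --prod /out

  FROM node:22-slim AS runtime
  WORKDIR /app
  COPY --from=build /out .
  CMD ["node", "dist/server.js"]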


To a TypeScript developer experienced in type-level acrobatics, this looks just fine...


Personally, I'd be annoyed by both the resource-consuming animations and the blurry GIFs/canvas. Infisical does use the latter (canvas) for icons in their UI, and I somewhat hate it. I'd rather look at crisp, but static icons.


Canvas should never be blurry. If it is, something is doing a bilinear upscale. I'd guess someone forgot to take the scale factor of your display into account.

Or there are images being used in the canvas, which would defeat the purpose for the use case you described.
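
For reference, the usual fix is to give the canvas backing store devicePixelRatio times more pixels than its CSS size and scale the 2D context accordingly; a minimal sketch in TypeScript:

  function setupHiDpiCanvas(canvas: HTMLCanvasElement, cssWidth: number, cssHeight: number) {
    const dpr = window.devicePixelRatio || 1;

    // The CSS size stays the same...
    canvas.style.width = `${cssWidth}px`;
    canvas.style.height = `${cssHeight}px`;

    // ...but the backing store gets dpr times more pixels.
    canvas.width = Math.round(cssWidth * dpr);
    canvas.height = Math.round(cssHeight * dpr);

    // Scale so drawing code can keep working in CSS pixels.
    const ctx = canvas.getContext("2d")!;
    ctx.scale(dpr, dpr);
    return ctx;
  }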


Oh no, Keith was the only thing I liked about C++!


Very tangential, but I couldn't help but remember Crust [1]. This tsoding madlad even wrote a B compiler [2] using these... rules. Or lack thereof?

[1]: https://github.com/tsoding/Crust

[2]: https://github.com/tsoding/b


Crust is exactly what I had in mind, but enforced at the language level, basically.

