
I prefer ~/bin/ for my scripts, links to specific commands, etc.

~/.local/bin is tedious to type when I want to see the directory's contents, and - most importantly - I treat the whole of ~/.local/ as volatile and automatically managed by other services.


"And it's just plain better at writing code than 60% of my graduating class was back in the day".

Only because it has access to a vast amount of sample code to draw on and re-combine. Have you ever considered emerging technologies, like new languages or frameworks that may be much better suited to your area, but are new, so there is no codebase for an LLM to draw from?

I'm starting to worry about the risk of technological stagnation in many areas.


> Have you ever considered emerging technologies, like new languages or frameworks that may be much better suited to your area, but are new, so there is no codebase for an LLM to draw from?

Try it. The pattern matching these things do is unlike anything seen before.

I'm writing a compiler for a language I designed, and LLMs have no trouble writing examples and tests. This is a language whose syntax and semantics do not exist in any training set, because I made them up. And here it is, a machine reading and writing code in this language with little difficulty.

Caveat emptor: it is far from perfect. But so are humans, which is where the training set originated.

> I'm starting to worry about the risk of technological stagnation in many areas.

That just does not follow for me. We're in an era where advancement in technology continues to be roughly quadratic [1]. The implication you're giving is that advancement is a step function that will soon hit (or has already hit) its final step.

This suggests that you are unfamiliar with, or unappreciative of, how anything progresses, in any domain. Creativity is a function of taking what existed before and making it your own. "Standing on the shoulders of giants", "pulling oneself up by the bootstraps", and all that. None of that is changing just because some parts of it can now be automated.

Stagnation is the very last thing I would bet on. In part because it means a "full reset" and loss of everything, like most apocalyptic story lines. And in part because I choose to remain cautiously optimistic.

[1]: https://ourworldindata.org/technology-long-run


First, we've fallen into a nomenclature trap: so-called "AI" has nothing to do with "intelligence." Even its creators admit this, hence the name "AGI" - the appropriate acronym was already taken.

But when we use the "AI" acronym, our brains still latch onto the "intelligence" attribute and tend to perceive LLMs as more powerful than they actually are.

Current models are like trained parrots that can pick up colored blocks and insert them into the appropriate slots. Sure, much faster and with incomparably more data. But they're still parrots.

This story and the discussions remind me of reports and articles about the first computers. People were so impressed by the speed of their mathematical calculations that they called them "electronic brains" and considered, even feared, "robot intelligence."

Now we're so impressed by the speed of pattern matching that we call it "artificial intelligence," and we're back where we started.


As a side note: maybe someone knows why the Rust devs chose an already-used name for their language-change proposals? "RFC" was already taken and well established, and I simply refuse to accept that someone wasn't aware of Request For Comments - and if they were aware and created the clash deliberately, then it was rude and arrogant.

Every ...king time I read something like "RFC 2789 introduced a sparse HTTP protocol", my brain suffers a short circuit. BTW: RFC 2789 is a "Mail Monitoring MIB".


There are many, many RFC collections. Including many that predate the IETF. Some even predate computers.


But those were in different domains. Here we have a strong clash, because Rust positions itself as a secure systems and internet language, and computer and internet standards are already defined by RFCs. So it may not be uncommon for someone to describe a Rust mechanism, defined by a particular RFC, in the context of handling a particular protocol, defined by... well... an RFC too. Just not a Rust one.

Not so smart, when we realize that one aspect of a secure and reliable system is the elimination of ambiguity.


Ask them, don't ask us. They have a public interface; you can ask them to change the name to something unique.


A note from someone who specializes in long-term system maintenance:

There is also one very important aspect that is - (un)surprisingly - rarely mentioned in the comments: the lack of any link between sloppy work and the personal comfort of the particular person responsible for the problematic changes.

What do I mean? A badly installed or configured system becomes a problem in the next three, maybe five years: at the time of a major OS upgrade, a HW replacement or refresh, a framework deprecation, and so on, and so on... In current corporate culture it is almost impossible to be bitten by your own laziness - almost no one works at a particular company or on a particular project that long. Especially when the installation is conducted by an external party in "grab the money and run!" mode.

So the very basic motivation for good work - the awareness that today's technological debt will lead to a personally painful experience in the future - doesn't exist at all in the modern corporate environment. Things are even worse: there are multiple reports of negative career consequences resulting from concern for the quality of work, "because we want that product fast and we don't like troublemakers and defensive thinkers".

As a consequence, one cannot throw a rock without hitting a dozen such cases, like this one: https://discourse.ubuntu.com/t/release-26-04-lts-without-the...


Nowadays NetBSD offers something similar to a "context-dependent filesystem", i.e. a special form of symbolic link that can point to different locations according to a wide set of attributes: from domainname through machine_arch to gid.

For details see https://man.netbsd.org/symlink.7 - the "Magic symlinks" section at the very end of the manual.
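
For illustration, a tiny sketch - assuming the @machine_arch magic string and the vfs.generic.magiclinks sysctl described in that manual; the /usr/pkg/@machine_arch/bin path is made up, and on systems without this feature the target is simply taken literally:

    # One symlink, resolved per-host by the NetBSD kernel when
    # vfs.generic.magiclinks=1: "@machine_arch" expands at lookup
    # time (e.g. to "x86_64"), so an NFS-shared home directory can
    # point to per-architecture binaries.  Path is hypothetical.
    import os

    os.symlink("/usr/pkg/@machine_arch/bin", os.path.expanduser("~/pkgbin"))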


I seem to remember something like that in DG/UX too.


It won because it was in place (Outlook) and almost nobody cares.

The official version is: "because there is a whole history of correspondence and it is convenient to forward it to new participants".

In reality? It doesn't matter. Almost no one reads either top- or bottom-posted mail. But there is a drawback to top-posting, and I mean the "my comments inside the original post, in color/bold/with indent/randomly inserted between two phrases" style. There is no standard for quoting in top-posting - thus sometimes the original mail is gone: edited, re-edited and commented on in various, inconsistent and often unreadable ways.


In my experience in corporate environments, that ability to forward to new participants with most of the context is really useful. Even if few people are going to read the history, in my opinion this edge case is valuable enough to tip the scales.

I agree the difficulty of quoting sucks, but that's mostly because of the switch from top posting to bottom posting. When people copy-and-paste the bit they are replying to and stay in the top-posting paradigm, things aren't so bad.


Yes and no, I think. The limited usability of assembly language comes from the limited resources (registers, operations) available to the programmer, which leads to (so) many simple steps for more complicated tasks.

But a high-level language can offer very interesting possibilities, even if it was not created for this kind of programming. For example, some time ago I made another attempt to emulate the 65xxx family. Previous versions were written in the typical manner, like the work of every other programmer on Earth.

A new approach, with the code written in a more regular way (see the link below), like the tabular one mentioned, gave me excellent results. I was able to fully understand the program and the processor logic, and finally I was able to re-create the most magical instructions of the 65xx - SBC/ADC with BCD support - in a very straightforward and clear way (especially for CPU-like logic).

For example: https://github.com/aniou/morfeo/blob/a83f548e98bd310995b3c37...

There is one thing that does not fit into pure, tabular-like code logic: more complicated conditionals and loops. But maybe in the future...
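
For the curious, what decimal-mode ADC boils down to - a minimal sketch in Python (my own illustration, not the emulator's actual code; the per-nibble carry is the "magic" part):

    # 6502-style ADC in decimal (BCD) mode: add each 4-bit digit
    # separately and apply a +6 correction when a digit exceeds 9.
    def adc_bcd(a: int, b: int, carry: int) -> tuple[int, int]:
        lo = (a & 0x0F) + (b & 0x0F) + carry
        if lo > 9:
            lo += 6                                # low-digit correction
        hi = (a >> 4) + (b >> 4) + (1 if lo > 0x0F else 0)
        if hi > 9:
            hi += 6                                # high-digit correction
        result = ((hi & 0x0F) << 4) | (lo & 0x0F)
        return result, 1 if hi > 0x0F else 0       # value, carry out

    assert adc_bcd(0x19, 0x03, 0) == (0x22, 0)     # 19 + 3 = 22
    assert adc_bcd(0x99, 0x01, 0) == (0x00, 1)     # 99 + 1 = 100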


I'm sorry to say it, but this article is not entirely accurate - the illustration "how does traditional raid 4/5/6 do it?" shows ONLY RAID 4. There is a big difference between RAID 4 and RAID 5/6, and the former was abandoned years (decades?) ago in favor of RAID 5 and - later - 6.

Of course, it gives "better publicity" to RAID-Z, but that is a marketing trick rather than engineering.

See https://en.wikipedia.org/wiki/Standard_RAID_levels
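
For illustration, a toy sketch of the difference (mine, not the article's; RAID 5 shown with one common parity-rotation scheme):

    # RAID 4: parity lives on one dedicated disk - a write bottleneck.
    # RAID 5: parity rotates across all disks, spreading the load.
    def parity_disk_raid4(stripe: int, n_disks: int) -> int:
        return n_disks - 1                         # always the last disk

    def parity_disk_raid5(stripe: int, n_disks: int) -> int:
        return (n_disks - 1 - stripe) % n_disks    # rotates per stripe

    for s in range(4):
        print(s, parity_disk_raid4(s, 4), parity_disk_raid5(s, 4))
    # stripe 0: RAID4 -> disk 3, RAID5 -> disk 3
    # stripe 1: RAID4 -> disk 3, RAID5 -> disk 2, and so on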


Note that the article talks about the way the array is expanded, not how the specific level works.

In other words, what they are saying is that the traditional way to expand an array is essentially to rewrite the whole array from scratch, so if the old array has three stripes, with blocks [1,2,3,p1] [4,5,6,p2] and [7,8,9,p3] (with p1..p3 being the parity blocks), the new array will have stripes [1,2,3,4,p1'], [5,6,7,8,p2'] and [9,x,x,x,p3'], i.e. it not only has to move the blocks around, but also to recompute essentially all the parity blocks.

IF I understand the ZFS approach correctly, the existing blocks are not restructured but only reshuffled, so the new layout will logically still be [1,2,3,p1] [4,5,6,p2] and [7,8,9,p3], but distributed across five disks as [1,2,3,p1,4], [5,6,p2,7,8] and [9,p3,x,x,x].

It seems this means less work while expanding, but some space is lost unless one manually copies the old data to a new place.
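
The same thing as a throwaway sketch (just the layouts described above, not actual ZFS logic):

    # Toy model: "pN" = parity label, data blocks are numbers.
    old = [[1, 2, 3, "p1"], [4, 5, 6, "p2"], [7, 8, 9, "p3"]]

    # Traditional reshape: restripe all data, recompute every parity.
    data = [b for stripe in old for b in stripe if not str(b).startswith("p")]
    rewritten = [data[i:i + 4] + [f"p{i // 4 + 1}'"] for i in range(0, len(data), 4)]
    # -> [[1, 2, 3, 4, "p1'"], [5, 6, 7, 8, "p2'"], [9, "p3'"]]

    # RAID-Z expansion: keep old stripe contents, just re-flow the
    # same sequence of blocks (parity included) onto five disks.
    cells = [b for stripe in old for b in stripe]
    reflowed = [cells[i:i + 5] for i in range(0, len(cells), 5)]
    # -> [[1, 2, 3, "p1", 4], [5, 6, "p2", 7, 8], [9, "p3"]]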

IF I got it right, I am not sure who the intended audience for this feature is: enterprise users will probably not use it, and power users would probably benefit from getting all the space they can from the extra disk.


Power users would like to get all the space, but when the choice is either buying just one HDD and getting some space, or buying 4+ HDDs to replace the old array with a completely new one and being left with the unused old drives - most would pick the first option.


Not at all. Things like CFEngine (1993) and even Puppet predate the spread of the "devops" term. Not to mention the tools for automated system installation embedded in distributions like Red Hat or Debian.

Creating tools that let us perform simple, repeatable and - usually - automated tasks was always an important part of the sysadmin role. Of course, there were those who did everything manually and weren't able to write their own code: we called them "operators".

From my point of view: "a devop is the hasty one, who doesn't care about long-term support of the underlying infrastructure".


> Not at all. Things like CFEngine (1993) and even Puppet predate the spread of the "devops" term. Not to mention the tools for automated system installation embedded in distributions like Red Hat or Debian.

Yep.

> Creating tools that let us perform simple, repeatable and - usually - automated tasks was always an important part of the sysadmin role. Of course, there were those who did everything manually and weren't able to write their own code: we called them "operators".

Yep.

> From my point of view: "a devop is the hasty one, who doesn't care about long-term support of the underlying infrastructure".

Nope. https://en.wikipedia.org/wiki/DevOps

