Idempotence of an operation means that if you perform it a second (or third, etc.) time it won't do anything more. The "action" all happens the first time, and further attempts at it do nothing. E.g. switching a light switch on could be seen as "idempotent" in a sense: you can press the bottom edge of the switch again, but it's not going to click again and the light isn't going to become any more on.
The concept originates in maths, where it's functions that can be idempotent. The canonical example is projection operators: if you project a vector onto a subspace and then apply that same projection operator again, you get the same vector back. In computing the term is sometimes used fairly loosely or by analogy, as in the light switch example above. Sometimes, though, there is a mathematical function involved that is idempotent in the mathematical sense.
A form of idempotence is implied in "retries ... can't produce duplicate work" in the quote, but it isn't the whole story. Atomicity, for example, is also implied by the whole quote: the idea that an operation always either completes in its entirety or doesn't happen at all. That's independent of idempotence.
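To make the distinction concrete, here's a minimal Python sketch of the projection example; the vector and the "retried append" counter-example are made up purely for illustration:

    # Projection onto the x-axis is idempotent in the mathematical sense:
    # applying it twice gives the same result as applying it once.
    def project_onto_x_axis(v):
        x, y = v
        return (x, 0.0)

    v = (3.0, 4.0)
    once = project_onto_x_axis(v)
    twice = project_onto_x_axis(once)
    assert once == twice == (3.0, 0.0)   # P(P(v)) == P(v)

    # Contrast with a non-idempotent operation: retrying it really does
    # produce duplicate work.
    queue = []
    queue.append("job")
    queue.append("job")
    assert queue == ["job", "job"]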
I agree the title should be changed, but as I commented on the dupe of this submission, learning is not something that happens as a beginner, student or "junior" programmer and then stops. The job is learning, and after 25 years of doing it I learn more per day than ever.
An important aspect of this for professional programmers is that learning is not something that happens as a beginner, student or "junior" and then stops. The job is learning, and after 25 years of doing it I learn more per day than ever.
How old are you? At 39 (20 years of professional experience) I've forgotten more things in this field than I'm comfortable with today. I find it a bit sad that I've completely lost my Win32 reverse engineering skills I had in my teens, which have been replaced by nonsense like Kubernetes and aligning content with CSS Grid.
And I must admit my appetite for learning new technologies has lessened dramatically in the past decade; to be fair, it gets to a point where most new ideas are just rehashes of older ones. When you know half a dozen programming languages or web frameworks, the next one takes you a couple hours to get comfortable with.
> I've forgotten more things in this field than I'm comfortable with today. I find it a bit sad that I've completely lost my Win32 reverse engineering skills I had in my teens
I'm a bit younger (33) but you'd be surprised how fast it comes back. I hadn't touched x86 assembly for probably 10 years at one point. Then someone asked a question in a modding community for an ancient game and after spending a few hours it mostly came back to me.
I'm sure if you had to reverse engineer some win32 applications, it'd come back quickly.
That's a skill unto itself, and sure, the general stuff doesn't fade, or at least comes back quickly. But there's a lot in the long tail that's just difficult to recall because it's obscure.
How exactly did I hook Delphi apps' TForm handling system instead of breakpointing GetWindowTextA and friends? I mean... I just cannot remember. It wasn't super easy either.
I want to second this. I'm 38 and I used to do some debugging and reverse engineering during my university days (2006-2011). Since then I've mostly avoided looking at assembly, as I mainly work in C++ systems or HLSL.
These last few months, however, I've had to spend a lot of time debugging via disassembly for my work. It felt really slow at first, but then it came back to me and now it's really natural again.
You can’t keep infinite knowledge in your brain. You forget skills you don’t use. Barring some pathology, if you’re doing something every day you won’t forget it.
If you’ve forgotten your Win32 reverse engineering skills I’m guessing you haven’t done much of that in a long time.
That said, it’s hard to truly forget something once you’ve learned it. If you had to start doing it again today, you’d learn it much faster this time than the first.
You can't store an infinite amount of entropy in a finite amount of space outside of a singularity; or at least, attempting to do so will cause a singularity.
Compression/algorithms don't save you here either. The algorithm for pi is very short, but pulling up any particular random digit of pi still requires the expenditure of some particular amount of entropy.
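To illustrate: the Bailey-Borwein-Plouffe formula gives you any hexadecimal digit of pi from a tiny program, yet the loops still do work proportional to the digit position, so no particular digit comes for free. A rough Python sketch (only accurate for small positions, given float precision):

    def pi_hex_digit(n):
        # Fractional part of 16^n * pi via the BBP series; the modular
        # exponentiation keeps the head of the series manageable, but the
        # amount of work still grows with n.
        def partial(j):
            s = 0.0
            for k in range(n + 1):
                s = (s + pow(16, n - k, 8 * k + j) / (8 * k + j)) % 1.0
            k, term = n + 1, 1.0
            while term > 1e-17:
                term = 16.0 ** (n - k) / (8 * k + j)
                s += term
                k += 1
            return s % 1.0

        frac = (4 * partial(1) - 2 * partial(4) - partial(5) - partial(6)) % 1.0
        return "%x" % int(frac * 16)

    # pi = 3.243f6a88... in hex, so the first fractional digits are 2, 4, 3, f, 6, a.
    print("".join(pi_hex_digit(i) for i in range(6)))  # 243f6a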
It's not that you forget, it's more that it gets archived.
If you moved back to a country you hadn't lived in, or spoken the language of, for 10 years, you'd find that you don't have to relearn the language; it would come back quickly.
Also, our capacity for information is supposedly almost infinite: you learn more efficiently as you go, which makes raw volume limits mostly irrelevant.
I do take your point. But the point I’m trying to emphasize is that the brain isn’t like a hard drive that fills up. It’s a muscle that can potentially hold more.
I’m not sure if this is in the Wikipedia article, but when I last read about this, years ago, there seemed to be a link between Hyperthymesia and OCD. Brain scans suggested the key was in how these individuals organize the information in their brains, so that it’s easy for them to retrieve.
Before the printing press, it was common for scholars to memorize entire books. I absolutely cannot do this. When technology made memorization less necessary, our memories shrank. Actually shrank, not merely a change in which facts we focus on.
And to be clear, I would never advocate going back to the middle ages! But we did lose something.
There must be some physical limit to our cognitive capacity.
We can “store” infinite numbers by using our numeral system as a generator of sorts for whatever the next number must be without actually having to remember infinite numbers, but I do not believe it would be physically possible to literally remember every item in some infinite set.
Sure, maybe we’ve gotten lazy about memorizing things and our true capacity is higher (maybe very much so), but there is still some limit.
Additionally, the practical limit will be very different for different people. Our brains are not all the same.
I agree, it must not be literally infinite, I shouldn’t have said that. But it may be effectively infinite. My strong suspicion is that most of us are nowhere close to whatever the limit is.
Think about how we talk about exercise. Yes, there probably is a theoretical limit to how fast any human could run, and maybe Olympic athletes are close to that, but most of us aren’t. Also, if you want your arms to get stronger, it isn’t bad to also exercise your legs; your leg muscles don’t somehow pull strength away from your arm muscles.
> your leg muscles don’t somehow pull strength away from your arm muscles.
No, but the limiting factor is the amount of stored energy available in your body. You could exhaust your energy stores using only your legs and be left barely able to use your arms (or anything else).
If we’ve offloaded our memory capacity to external means of rapid recall (e.g. the internet), then what have we gained in return? Breadth of knowledge? Increased reasoning abilities? More energy for other kinds of mental work? Because there’s no cheating thermodynamics; even thinking uses energy. Or are we simply radiating away that unused energy as heat and wasting that potential?
It is also a matter of choice. I don’t remember any news trivia, I don’t engage with "people news" and, to be honest, I forget a lot of what people tell me about random subjects.
It has two huge benefits: nearly infinite memory for truly interesting stuff, and still looking friendly to people who tell me the same stuff all the time.
Side-effect: my wife is not always happy that I forgot about "non-interesting" stuff which are still important ;-)
The inventor of APL said that he was developing not a programming language but a notation to express as many problems as one can. He found that expressing more and more problems with the notation first made the notation grow, then its size started to shrink.
To develop conceptual knowledge (the point where one's "notation" starts to shrink), one has to have a good memory for re-expressing more and more problems.
The point is that this particular type of exceptional memory has nothing to do with conceptual knowledge; it's all about experiences. This particular condition also makes you focus on your own past to an excessive degree, which would distract you from learning new technologies.
You can't model systems in your mind using past experiences, at least not reliably and repeatedly.
> I must admit my appetite in learning new technologies has lessened dramatically in the past decade;
I felt like that for a while, but I seem to be finding new challenges again. Lately I've been deep-diving on data pipelines and embedded systems. Sometimes I find problems that are easy enough to solve by brute force, but elegant solutions are not obvious at all. It's a lot of fun.
It could be that you're way ahead of me and I'll wind up feeling like that again.
> When you know half a dozen programming languages or web frameworks, the next one takes you a couple hours to get comfortable with.
Learn yourself some relational algebra. It will invariably lead you to optimization problems, and these will in turn invariably lead you to equality saturation, which is most effectively implemented with... generalized join from relational algebra!
Also, relational algebra implements content-addressable storage (CAS), which is essential for the data-flow computing paradigm. Thus, you will get a window into CPU design.
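In case it helps anyone, here's a toy sketch in Python of the natural join being referred to, over plain lists of dicts; the example tables are invented, and a real engine obviously adds indexes, planning, etc.:

    # Toy natural join: rows match when they agree on all shared column names.
    def natural_join(left, right):
        if not left or not right:
            return []
        shared = sorted(set(left[0]) & set(right[0]))
        index = {}
        for row in right:
            index.setdefault(tuple(row[c] for c in shared), []).append(row)
        return [
            {**r_row, **l_row}
            for l_row in left
            for r_row in index.get(tuple(l_row[c] for c in shared), [])
        ]

    employees = [{"dept": "eng", "name": "ada"}, {"dept": "ops", "name": "bob"}]
    floors = [{"dept": "eng", "floor": 3}]
    print(natural_join(employees, floors))
    # [{'dept': 'eng', 'floor': 3, 'name': 'ada'}]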
At 54 (36 years of professional experience) I find these rondos fascinating.
That's one of several possibilities. I've reached a different steady state - one where the velocity of work exceeds the rate at which I can learn enough to fully understand the task at hand.
But just think, there's a whole new framework that isn't better but is trendy. You can recycle a lot of your knowledge and "learn new things" that won't matter in five years. Isn't that great?
I write cards and quizzes for all kinds of stuff, and I tend to retain it for years after practicing it with the low friction of spaced repetition.
I worked as an "advisor" for programmers in a large company. Our mantra there was that programming and development of software is mainly acquiring knowledge (i.e. learning?).
One take-away for us from that viewpoint was that knowledge in fact is more important than the lines of code in the repo. We'd rather lose the source code than the knowledge of our workers, so to speak.
Another point is that when you use consultants, you get lines of code, whereas the consultancy company ends up with the knowledge!
... And so on.
So, I wholeheartedly agree that programming is learning!
It was a "high tech domain", so institutional knowledge was required, problem or not.
We had domain specialists with decades of experience and knowledge, and we looked at our developers as the "glue" between domain knowledge and computation (modelling, planning and optimization software).
You can try to make this glue have little knowledge, or lots of knowledge. We chose the latter and it worked well for us.
But I was only in that one company, so I can't really tell.
>One take-away for us from that viewpoint was that knowledge in fact is more important than the lines of code in the repo. We'd rather lose the source code than the knowledge of our workers, so to speak.
Isn't this the opposite of how large tech companies operate? They can churn developers in/out very quickly, hire-to-fire, etc... but the code base lives on. There is little incentive to keep institutional knowledge. The incentives are PRs pushed and value landed.
It can be I guess, but I think it's more about solving problems. You can fix a lot of peoples' problems by shipping different flavors of the same stuff that's been done before. It feels more like a trade.
People naturally try to use what they've learned but sometimes end up making things more complicated than they really needed to be. It's a regular problem even excluding the people intentionally over-complicating things for their resume to get higher paying jobs.
Have you been nothing more than a junior contributor all this time? Because as you mature professionally, your knowledge of the system should also be growing.
It seems to me that nowadays software engineers move around a lot more, either within a company or to other companies. Furthermore, companies don't seem to care, so they're always stuck in a learning loop where engineers are competent enough to make modifications and add new code, but lack the deep insight needed to improve the fundamental abstractions of the system. Meanwhile, even seniors with 25+ years of experience are noobs when they approach a new system.
He used it to generate a little visualiser script in python, a language he doesn't know and doesn't care to learn, for a hobby project. It didn't suddenly take over as lead kernel dev.
I don't like a few pieces, and Mr. Lennart's attitude to some bugs/obvious flaws, but it's by far better than old sysv or really any alternative we have.
Doing complex flows like "run an app to load keys from a remote server to unlock an encrypted partition" is far easier under systemd, and it has a dependency system robust enough to trigger that mount automatically if an app that needs it starts.
Like a couple of others here I tried checking out this project [1] and running these 2.3 million random battles. The README says everything needs to be run in docker, and indeed the test script uses docker and fails without it, but there are no docker/compose files in the repo.
It's great that the repo is provided, but people are clamouring for proof of the extraordinary powers of AI. If the claim is that it allowed 100 kloc to be ported in one month by one dev and the result passes a gazillion tests that prove it actually replicates the desired functionality, that's really interesting! How hard would it be, then, to actually have the repo in a state where people can run those tests?
Unless the repo is updated so the tests can be run, my default assumption has to be that the whole thing is broken to the point of uselessness.
I probably got wooshed here. Anyway, the tests definitely aren't run. I checked it out and tried it myself. The test script [1] outputs "ALL SEEDS PASSED!" when the number of failures is zero, which of course is the case if the entire thing just fails to run.
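For what it's worth, this is a classic harness pitfall. A minimal sketch of the shape of the bug (names invented, not the project's actual script):

    # "Zero failures" is vacuously true when zero tests actually ran.
    def report(results):
        failures = sum(1 for ok in results if not ok)
        if failures == 0:
            print("ALL SEEDS PASSED!")   # also fires when results is empty
        # safer: require at least one result, e.g. `if results and failures == 0`

    report([])   # prints ALL SEEDS PASSED! even though nothing ran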
Sounds like you want a time-travel debugger, e.g. rr.
Sophisticated live debuggers are great when you can use them but you have to be able to reproduce the bug under the debugger. Particularly in distributed systems, the hardest bugs aren't reproducible at all and there are multiple levels of difficulty below that before you get to ones that can be reliably reproduced under a live debugger, which are usually relatively easy. Not being able to use your most powerful tools on your hardest problems rather reduces their value. (Time travel debuggers do record/replay, which expands the set of problems you can use them on, but you still need to get the behaviour to happen while it's being recorded.)
That’s a very fair point.
The hardest bugs I’ve dealt with were almost always the least reproducible ones, which makes even very powerful debuggers hard to apply in practice.
It makes me wonder whether the real challenge is not just having time-travel, but deciding when and how much history to capture before you even know something went wrong.