No it's not. Sometimes (or maybe most of the time) doing it badly means maybe it's not your thing.
I used to have a neighbour who liked to play the piano and sing. He was doing it consistently badly and he didn't have anyone to tell him that he should probably stop trying.
I have two problems with that. One is, you can do what you like quietly and without disturbing anyone around you. The second is the Dunning-Kruger effect: witnessing it firsthand is never fun.
Oh... So you start doing something new and you're top 10% without practicing or being bad at it first? I'd love to test that to see if it's the case... Your logic of "you're not the best ever to do something, so you shouldn't be doing it" means you have probably never done a single thing in your entire life. Maybe you should just stop.
Who are you, to define what "the thing" is, for someone else?
Doing the thing isn't about judging other people. That doesn't contribute to your thing.
If someone is bothering you, making it hard to do your thing, then your thing involves talking to them about your problem. Without judging what they are doing.
Yes, but that doesn't explain why we aren't given a choice. Program code is boringly deterministic, but in many cases it's exactly what you need, while non-determinism becomes your dangerous enemy (as in the case of some Airbus jets being susceptible to bit flips under cosmic rays).
The current way to address this is through RAG applications, or Retrieval-Augmented Generation. This means using the LLM for the natural-language, non-deterministic portion and using traditional code, databases, and files for the deterministic part.
A good example is bank software where you can ask what your balance is and get back the real number. A RAG app won't "make up" your balance or even consult its training data to find it. Instead, the traditional (deterministic) code operations are done separately from the LLM calls.
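Very roughly, and with made-up helper names (fetchBalance and composeReply are hypothetical stubs, not any real API), the separation looks something like this:

import Foundation

struct BalanceQuery { let accountID: String }

func answerBalanceQuery(_ query: BalanceQuery) async throws -> String {
    // Deterministic part: the real number comes from the database, never from the model.
    let balance = try await fetchBalance(accountID: query.accountID)

    // Non-deterministic part: the LLM only rewords facts it has been handed.
    return try await composeReply(
        prompt: "Tell the user their current balance.",
        facts: ["balance": "\(balance)"]
    )
}

// Stubs standing in for the real database call and LLM client.
func fetchBalance(accountID: String) async throws -> Decimal { 1234.56 }
func composeReply(prompt: String, facts: [String: String]) async throws -> String {
    "Your balance is \(facts["balance"] ?? "unknown")."
}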
It didn't make pointers safer to use though. In Swift and some other modern languages you can't dereference an optional (nullable) pointer without force-unwrapping it.
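For anyone who hasn't seen it, a minimal sketch of what the Swift compiler enforces:

func greet(_ name: String?) {
    if let name = name {               // safe: optional binding
        print("Hello, \(name)")
    } else {
        print("Hello, whoever you are")
    }
    // print(name!.count)              // force-unwrap: crashes at runtime if name is nil
}

greet("Ada")   // Hello, Ada
greet(nil)     // Hello, whoever you are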
It's a good article but I think you need to start explaining structured concurrency from the very core of it: why it exists in the first place.
The design goal of structured concurrency is to have a safe way of using all the available CPU cores on the device/computer. Modern mobile phones can have 4, 6, even 8 cores. If you don't get a decent grasp of how concurrency works and how to use it properly, your app code will be limited to 1 or 1.5 cores at most, which is not a crime, but a shame really.
That's where it all starts. You want to execute things in parallel but also want to ensure data integrity. If the compiler doesn't like something, it means a design flaw and/or a misconception about structured concurrency, not "oh, I forgot @MainActor".
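Roughly, in Swift terms (heavyWork is just a stand-in for some CPU-bound computation):

// Stand-in for a CPU-bound computation.
func heavyWork(_ n: Int) -> Int {
    (0..<1_000_000).reduce(n, &+)
}

// Fan the work out as child tasks; the runtime is free to schedule them
// across however many cores are available.
func processAll(_ inputs: [Int]) async -> [Int] {
    await withTaskGroup(of: Int.self, returning: [Int].self) { group in
        for input in inputs {
            group.addTask { heavyWork(input) }
        }
        var results: [Int] = []
        for await result in group {    // results arrive in completion order
            results.append(result)
        }
        return results
    }
}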
Swift 6.2 is quite decent at its job already, though I should say the transition from 5 to 6 was maybe a bit rushed and wasn't very smooth. But I'm happy with where Swift is today: it's an amazing, very concise and expressive language that allows you to be as minimalist as you like, with a pretty elegant concurrency paradigm as a big bonus.
I wish it was better known outside of the Apple ecosystem because it fully deserves to be a loved, general purpose mainstream language alongside Python and others.
> It's a good article but I think you need to start explaining structured concurrency from the very core of it: why it exists in the first place.
I disagree. Not every single article or essay needs to start from kindergarten and walk us up through quantum theory. It's okay to set a minimum required background and write to that.
As a seasoned dev, every time I have to dive into a new language or framework, I'll often want to read about styles and best practices that the community is coalescing around. I promise there is no shortage at all of articles about Swift concurrency aimed at junior devs for whom their iOS app is the very first real programming project they've ever done.
I'm not saying that level of article/essay shouldn't exist. I'm just saying there's more than enough. I almost NEVER find articles that are targeting the "I'm a newbie to this language/framework, but not to programming" audience.
> I promise there is no shortage at all of articles about Swift concurrency aimed at junior devs for whom their iOS app is the very first real programming project they've ever done.
You’d be surprised. Modern Swift concurrency is relatively new and the market for Swift devs is small. Finding good explainers on basic Swift concepts isn’t always easy.
I’m extremely grateful to the handful of Swift bloggers who regularly share quality content.
Paul Hudson is the main guy right now, although his stuff is still a little advanced for me. Sean Allen on YouTube does great video updates and tutorials.
Sure, but as soon as they released their first iteration, they immediately went back to the drawing board and just slapped @MainActor on everything they could because most people really do not care.
Well yes, but that’s because the iOS UI is single threaded, just like every other UI framework under the sun.
It doesn’t mean there isn’t good support for true parallelism in Swift concurrency. It’s super useful to model interactions with isolated actors (e.g. the UI thread and the data it owns) as “asynchronous” from the perspective of other tasks, allowing you to spawn off CPU-heavy operations that can still “talk back” to the UI; they simply have to “await” the calls to the UI actor in case it’s currently executing.
The model works well for both asynchronous tasks (you await the long IO operation, your executor can go back to doing other things) and concurrent processing (you await any synchronization primitives that require mutual exclusivity, etc.)
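A rough sketch of that shape (all names made up):

// Main-actor-isolated state: think "the data the UI owns".
@MainActor
final class ProgressModel {
    private(set) var progress: Double = 0
    func update(_ value: Double) { progress = value }
}

// The heavy lifting runs off the main actor and "talks back" by awaiting
// calls onto it.
func crunchNumbers(reportingTo model: ProgressModel) {
    Task.detached {
        for step in 1...100 {
            // ... expensive work for this step ...
            await model.update(Double(step) / 100)   // hops to the main actor
        }
    }
}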
There are a lot of gripes I have with Swift concurrency, but my memory is about 2 years old at this point and I know Swift 6 has changed a lot. Mainly around the complete breakage you get if you ever call ObjC code that is using GCD, and how ridiculously easy it is to shoot yourself in the foot with unsafe concurrency primitives (semaphores, etc.) that you don’t even know the code you’re calling is using. But I digress…
Not really true; @MainActor was already part of the initial version of Swift Concurrency. That Apple has yet to complete the needed updates to their frameworks to properly mark up everything is a separate issue.
async let and TaskGroups are not parallelism, they're concurrency. They're usually parallel because the Swift concurrency runtime allows them to be, but there's no guarantee. If the runtime thread pool is heavily loaded and only one core is available, they will only be concurrent, not parallel.
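A minimal sketch of what I mean (fetchProfile and fetchPosts are just stubs):

struct Profile {}
struct Post {}

func fetchProfile() async throws -> Profile { Profile() }
func fetchPosts() async throws -> [Post] { [] }

// `async let` says the two loads *may* overlap; whether they actually run
// on separate cores is up to the cooperative thread pool.
func loadDashboard() async throws -> (Profile, [Post]) {
    async let profile = fetchProfile()
    async let posts = fetchPosts()
    return try await (profile, posts)
}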
> If the runtime thread pool is heavily loaded and only one core is available, they will only be concurrent, not parallel
Isn't that always true for thread pool-backed parallelism? If only one core is available for whatever reason, then you may have concurrency, but not parallelism.
I like how Swift solved this: there's a more universal `defer { ... }` block that's executed at the end of a given scope no matter what, and after the `return` statement is evaluated if it's a function scope. As such it has multiple uses, not just for `try ... finally`.
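A minimal sketch (error handling trimmed down to the essentials):

import Foundation

// `defer` runs at scope exit: after early returns, after throws, and after
// the `return` expression itself has been evaluated.
func readAll(from path: String) throws -> Data? {
    guard let handle = FileHandle(forReadingAtPath: path) else { return nil }
    defer { try? handle.close() }     // runs last; the handle is still open below
    return try handle.readToEnd()     // evaluated before the defer runs
}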
Defer has two advantages over try…finally: firstly, it doesn’t introduce a nesting level.
Secondly, if you write
foo
defer revert_foo
, when scanning the code, it’s easier to verify that you didn’t forget the revert_foo part than when there are many lines between foo and the finally block that calls revert_foo.
A disadvantage is that defer breaks the “statements are logically executed in source code order” convention. I think that’s more than worth it, though.
The oldest defer-like feature I can find reference to is the ON_BLOCK_EXIT macro from this article in the December 2000 issue of the C/C++ Users Journal:
I'll disagree here. I'd much rather have a Python-style context manager, even if it introduces a level of indentation, rather than have the sort of munged-up control flow that `defer` introduces.
I was contemplating what it would look like to provide this with a macro in Rust, and of course someone has already done it. It's syntactic sugar for the destructor/RAII approach.
It's easy to demonstrate that destructors run after evaluating `return` in Rust:
struct PrintOnDrop;

impl Drop for PrintOnDrop {
    fn drop(&mut self) {
        println!("dropped");
    }
}

fn main() {
    let p = PrintOnDrop;
    return println!("returning");
}
But the idea of altering the return value of a function from within a `defer` block after a `return` is evaluated is zany. Please never do that, in any language.
EDIT: I don’t think you can actually put a return in a defer, I may have misremembered, it’s been several years. Disregard this comment chain.
It gets even better in swift, because you can put the return statement in the defer, creating a sort of named return value:
func getInt() -> Int {
    let i: Int  // declared but not defined yet!
    defer { return i }
    // all code paths must define i exactly once,
    // or it’s a compiler error
    if foo() {
        i = 0
    } else {
        i = 1
    }
    doOtherStuff()
}
No, I actually misremembered… you can’t return in a defer.
The magical thing I was misremembering is that you can reference a not-yet-defined value in a defer, so long as all code paths define it once:
func callFoo() -> FooResult {
    let fooParam: Int // declared, not defined yet
    defer {
        // fooParam must get defined by the end of the function
        foo(fooParam)
        otherStuffAfterFoo() // …
    }
    // all code paths must assign fooParam
    if cond {
        fooParam = 0
    } else {
        fooParam = 1
        return // early return!
    }
    doOtherStuff()
}
Blame it on it being years since I’ve coded in swift, my memory is fuzzy.
Meh, I was going to use the preprocessor for __LINE__ anyways (to avoid requiring a variable name) so I just made it an "old school lambda." Besides, scope_exit is in C++23 which is still opt-in in most cases.
Skimmed through the article, some interesting numbers but not a single statistic is per capita (or per million, whatever). How do I understand the scale of the phenomenon without the per capita figures? Sorry but seems a bit useless.
Well, not to put too fine a point on it, but it would be more correct to say that the author/artist is likely from a country that uses the Cyrillic script.
Tangential to the main topic, but this is the only sensible way of running an email inbox, always has been to me, and it boggles my mind why anyone would let clutter and a piling number of unreads accumulate in their one and only inbox, one of the most important things in our digital lives.
Each email is an action item. If it's not or if it's been addressed, it's gone, period.
Archive vs. Delete is another question but not as important. Over time I've found that I'm probably deleting too much (e.g. where did I buy that <nice thing> 5 years ago? want it again, can't find the order). Then business emails are all archived with the exception of business spam of course.
So why would you have more emails in your inbox than items you’re supposed to act on?
Because my attention should be directed at what I want to do, when I want to, not a nagging number that sits there being more than zero.
And when I do pay attention to it, I don't want to spend 20 minutes going through the 180 emails that I've been cc'd on. It's literally not worth my time or dilution of my attention. When I have attempted to get on top of this by doing all the curation and rule-authoring that productivity mavens shout about, it works for a little while but entropy sets in.
I'm just not into scripting my own life and maximizing my productivity, and my job does not pivot on prompt email responses. So my email is a garbage dump with tire fires in it, and I know that, and I get on with the things I know are actually important.
I'm not recommending this! It's just the compromise that I have settled in to. But if you wonder "why would anyone," this is it.
Its very presence in the list is already a drain on my attention that I didn't ask for and do not want. The fact that it requires any action on my part to remove it from the queue is an issue in and of itself.
For mojuba and myself, email is a way to organize TODO items. Things to take care of exist either way, and email is an awesome way to keep track of, and process, events / tasks asynchronously.
shermantanktop and you, forbiddenvoid, seem to refuse to organize TODOs, or perhaps even reject the concept that external events be allowed to generate TODOs for you ("my attention should be directed at what I want to do, when I want to"). I know this closely -- i.e., "garbage dump with tire fires in it" -- because that's precisely what my SO's mailbox looks like. Whereas I've maintained a perfect inbox 0 for several decades, both at work and privately.
This is an unbridgeable psychological divide between two attitudes toward, or even two definitions of, tasks and obligations. People who can naturally implement inbox 0 never lose track of a task (not just in email, but in any other medium either), and get indignant when they receive reminders. They're excellent schedulers, and orderly, but also frequently obsessive-compulsive, neurotic. People who can't instinctively do inbox 0 cannot be taught or forced to do it; they tend to need repeated reminders, and may still forget tasks. At the same time, they have different virtues; they tend to shine with ill-defined problems and unexpected events.
Neither group is at fault; the difference has biological roots, in the nervous system. Our brains physically differ.
I kind of agree, but I explain it differently. Everyone’s job is a mix of reactive and proactive work. For my particular job, reactive work is necessary but will expand to fill all my time and then some. Proactive work is ambiguous and uncertain, but usually ends up being the highest value work that I do.
If I spend all my time on other people’s demands, it will all be urgent, but not enough of it will be important.
That's a super interesting situation (and description).
I always order reviewing the work of others ahead of working on my own code. This works wonders for the team. But admittedly, if the review workload is not distributed well, then my approach produces an annoying imbalance for me, and over the longer term, it leads to burnout.
Put differently, if I enable / assist / mentor others, that produces value comparable to my own personal output, for the company (or that's at least how I understand things). However, the emotional value of each, to me, is comparable only up to a certain extent -- namely, as long as I get to write enough code myself. The proportion must be right.
I rely on management / the team to (self-)organize the review workload, and then I prefer to help others first, and work on my own stuff second. I draw much more satisfaction from working on my own code, but I feel the importance of supporting others, so I prioritize the latter. This particular prioritization too rewards me emotionally, but only up to a certain point. I can say "no", but, in my view, if I have to say "no" frequently, to requests for assistance, then the workload is ill-distributed, and that responsibility is not mine. (I explicitly don't want to be promoted to a level where I become responsible for assigning tasks to people.)
I’m in a senior position and just coming off a year where I intentionally focused on enabling others and making the collective group more effective. That meant more reactive (and less visible) work.
I got feedback that my contributions weren’t tangible and visible enough. I switched gears back to my previous mode (more proactive work) and all is well again.
Different work cultures treat this differently. At another company my enabling activities would have been valued more. But I do think being the glue in a group is usually undervalued.
Not really. I check a couple of times a day, look for stuff from people who are likely important, delete noisy stuff once a week, and the rest lingers.
The threaded nature of email both helps and hurts. If it’s from a chatty sender with a chatty reply all conversation, I can delete it all, except if my boss replies, I should probably look at that.
I should also say that I work at a large company where people are auto-included with varying levels of intention. If I never sent an email, I would still get hundreds per day. Coworkers do zero inbox, so it can be done. I just don’t try anymore. Slack is where the actually urgent stuff is these days.
> Archive vs. Delete is another question but not as important. Over time I've found that I'm probably deleting too much (e.g. where did I buy that <nice thing> 5 years ago? want it again, can't find the order). Then business emails are all archived with the exception of business spam of course.
An executive co-worker of mine used his Deleted Items folder as his Archive. Problem solved.
I'm with you. I've had a mostly empty email for at least a decade (< 10 items, with each of them representing an action I'll need to take) and can't imagine doing it differently. I'm one of those empty desk/empty mind people, I guess.
In a just world you would do 16 hours of manual rock breaking and tilling in a gulag for a decade, then you can come back and tell us how essential email is to your life, sorry, "digital life", whatever the FUCK that is.
> Groups of zeros can be omitted with two colons, but only once in an address (i.e. 2000:1::1, but not 2000::1::1 as that is ambiguous)
Can someone explain why it's ambiguous?
On the subject, IPv6 is one of the strangest inventions on the internet. Its utility and practicality are obvious no matter how you look at it except... just one thing.
Network-related things are generally easy to remember and then type from memory: IPv4, domain names, standard port numbers. Back in the day it was the phone numbers, again, easy to remember and dial when you need it. IPv6 is just too long and requires copy/paste all the time. This is the only real reason in my opinion, why IPv6 is doomed to be second-grade citizen for (probably) a few more decades.
> This is the only real reason in my opinion, why IPv6 is doomed to be second-grade citizen for (probably) a few more decades.
Except if you're using a mobile phone, in which case many telcos hand out only IPv6 addresses to handsets. 2018 NANOG presentation "T-Mobile's journey to IPv6":
Your IPv4 packets are getting tunneled to a CGNAT server which has an IP address pool.
Your website will load faster on cellphones if it supports IPv6. This is because the packets take more direct routes (because they don't go to the central CGNAT server) and because less processing is applied to them. Almost all mobile networks are now IPv6-only, with IPv4 traffic tunneled and CGNATted. Apparently T-Mobile is the rare exception.
I've said this since time immemorial, and networking people often dismiss it. "Just use DNS," say people who have never actually worked netops or devops.
The length of the addresses and the clunky nature of their ASCII representation are absolutely the #1 reason IPv6 has taken this long. User experience is the most powerful force affecting large-scale adoption, and IPv6 has poor UX.
I think the UX is partly fixable by creating a less horrible ASCII representation, but this would take a lot of coordination that was hard even back then and is virtually impossible now. If someone told me in 500 years we're still running dual-stack IPv4/IPv6 absolutely unchanged, I'd believe it.
Half the reason (literally) the address looks so bad is not because of IPv6 but because everyone keeps choosing to implement randomized in-subnet addresses and cycle through them for privacy reasons.
E.g. 2600:15a3:7020:4c51::52/64 is not too horrible but 2600:15a3:7020:4c51:3268:b4c4:dd7b:789/64 is a monster by unrelated intent of the client.
This is pretty much on the money. IPv6 addressing can be pretty simple if you design your subnets and use low numbers for hosts. But hosts themselves will forgo that and generate random 64-bit host addresses for themselves - sometimes for every new connection. Now you have thousands of IPv6 addresses for a single computer speaking out to the Internet.
"Modern" tooling in the consumer space is pretty dire for IPv6 support too. The best you can reasonably get is an IPv6 on the WAN side and then just IPv4 for everything local. At least from the popular routers I've experienced lately.
I’ve been amazed for years at the fact that many of the best routers turn V6 off by default.
Of course I know why. If you turn it on it slightly increases edge case issues as complexity always does. Most people don’t actively need it so nobody notices.
Yes, I forgot about SLAAC and worthless privacy extensions.
Privacy extensions are worthless because there are just sooooo many ways to fingerprint and track you. If you are not at least using a VPN and a jailed privacy mode browser at a bare minimum, you are toast. If you’re serious about privacy you have to use stuff like Tor.
V6 privacy extensions are like the GDPR cookie nonsense: ineffective countermeasures with annoying side effects.
SLAAC sucks too. They should have left assignment up to admins or higher level protocols like with V4. It’s better that way.
Most people are just using the ISP provided router as their gateway today anyways. E.g. ATT fiber is proud to advertise to you that it knows about each of your devices on the ONT+Router combo - that's even the only way to set up a port forward (you can't just type in an IP, you have to pick a discovered device).
"But people can NAT the v4 with another router to hide it!" -> sure, and the same crappy solution works with v6.
"But at least prosumers can replace the ONT via cloning the identifiers and certain hardware" -> also no change with v6.
Randomized addresses do have valid use cases though, particularly when connecting to Wi-Fi networks other than your own when set to randomize the MAC per connection (not just the scanning MAC) as well, but I'm just not really convinced this is a realistic example as framed.
IPv4 isn't perfect, but it was designed to solve a specific set of problems.
IPv6 was designed by political process. Go around the room to each engineer and solve for their pet peeve to in turn rally enough support to move the proposal forward. As a bunch of computer people realized how hard politics were they swore never to do it again and made the address size so laughably large that it was "solved" once and for all.
I firmly believe that if they had adopted any other strategy where addresses could be meaningfully understood and worked with by the least skilled network operators, we would have had "IPv6" adoption 10 years ago.
My personal preference would have been to open up class E space (240-255.*) and claw back the 6 /8s Amazon is hoarding, be smarter about allocations going forward, and make fees logarithmic based on the number of addresses you hold.
Only if by "political process" you mean a bunch of people got together (physically and virtually) and debated the options and chose what they thought was best. The criteria for choosing IPng were documented:
> I firmly believe that if they had adopted any other strategy where addresses could be meaningfully understood and worked with by the least skilled network operators, we would have had "IPv6" adoption 10 years ago.
The primary reason for IPng was >32 bits of address space. The only way to make them shorter is to have fewer bits, which completely defeats the purpose of the endeavour.
There was no way to move from 32-bits to >32-bits without every network stack of every device element (host, gateway, firewall, application, etc) getting new code. Anything that changed the type and size of sockaddr->sa_family (plus things like new DNS resource record types: A is 32-bit only; see addrinfo->ai_family) would require new code.
This is basically a lot of sharpshooting, but I will address your last point:
> There was no way to move from 32-bits to >32-bits without every network stack of every device element (host, gateway, firewall, application, etc) getting new code. Anything that changed the type and size of sockaddr->sa_family (plus things like new DNS resource record types: A is 32-bit only; see addrinfo->ai_family) would require new code.
That is simply not true. We had one bit left (the reserved/"evil" bit) in IPv4 headers that could have been used to flag that the first N bytes of the payload were an additional IPv4.1 header indicating additional routing information. Packets would continue to transit existing networks and "4.1" capable boxes at edges could read the additional information to make further routing decisions inside of a network. It would have effectively used IPv4 as the core transport network and each connected network (think ASN) having a handful of routed /32s.
Overlay networks are widely deployed and have very minor technical issues.
But that would have only addressed the numbering exhaustion issues. Engineers often get caught in the "well if I am changing this code anyway" trap.
An explicit goal of IPv6, considered as important as the address expansion, was the simplification of the packet header, by having fewer fields, correctly aligned, unlike in the IPv4 header, in order to enable faster hardware routing.
The scheme described by you fails to achieve this goal.
I am glad you brought this up, that is another big issue with IPv6. A lot of the problems it was trying to solve literally don't exist anymore.
Header processing and alignment were an issue in the 90s when routers repurposed generic components. Now we have modern custom ASICs that can handle IPv4 inside of a GRE tunnel on a VLAN over MPLS at line rate. I have switches in my house that do 780 Gbps.
At the time when it was designed, IPv6 was well designed, much better than IPv4, which was normal after all the experience accumulated while using IPv4 for many years.
The designers of IPv6 have made only one mistake, but it was a huge mistake. The IPv4 address space should have been included in the IPv6 space, allowing transparent intercommunication between any IP addresses, regardless whether they were old IPv4 addresses or new IPv6 addresses.
This is the mistake that has made the transition to IPv6 so slow.
> The IPv4 address space should have been included in the IPv6 space, allowing transparent intercommunication between any IP addresses, regardless whether they were old IPv4 addresses or new IPv6 addresses.
How would you have implemented it that is different from the NAT64 that actually exists, including shoving all IPv4 addresses into 64:ff9b::/96?
> That is simply not true. We had one bit left (the reserved/"evil" bit) in IPv4 headers […]
Great, there's an extra bit in the IPv4 packet header.
I was talking about the data structures in operating systems: are there any extra bits in the sockaddr structure to signal things to applications? If not, an entirely new struct needs to be deployed.
And that doesn't even get into having to deploy new DNS code everywhere.
They didn't use the reserved bit, because there's a field that's already meant for this purpose: the next protocol field. Set that to 0x29 and it indicates that the first bytes of the payload contain a v6 address. Every v4 address has a /48 of v6 space tunnelled to it using this mechanism, and any two v4 addresses can talk v6 between them (including to the entire networks behind those addresses) via it.
If doing basically exactly what you suggested isn't enough to stop you from complaining about v6's designers, how could they possibly have done any better?
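For reference, that mechanism is 6to4 (protocol 41, prefix 2002::/16): each IPv4 address a.b.c.d gets the prefix 2002:aabb:ccdd::/48. A quick sketch, just to make the mapping concrete:

import Foundation

func sixToFourPrefix(for v4: String) -> String? {
    let octets = v4.split(separator: ".").compactMap { UInt8($0) }
    guard octets.count == 4 else { return nil }
    let hex = octets.map { String(format: "%02x", $0) }
    return "2002:\(hex[0])\(hex[1]):\(hex[2])\(hex[3])::/48"
}

sixToFourPrefix(for: "192.0.2.1")   // "2002:c000:0201::/48"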
This would require new software and new ASICs on all hosts and routers and wouldn't be compatible with the old system. If you're going to cause all those things, might as well add 96 new bits instead of just 2 new bits, so you won't have the same problem again soon.
IPv6 is literally just IPv4 + longer addresses + really minor tweaks (like no checksum) + things you don't have to use (like SLAAC). Is that not what you wanted? What did you want?
And what's wrong with a newer version of a thing solving all the problems people had with it...?
There are more people than IPv4 addresses, so the pigeonhole principle says you can't give every person an IPv4 address, never mind when you add servers as well. Expanding the address space by 6% does absolutely nothing to solve anything and I'm confused about why you think it would.
It finally clicked for me when I worked out it was 2^64 subnets. You have a common prefix of your /48, which isn’t much longer than an IPv4 address - especially as it seems everything is 2001::/16, which means you basically have to remember a 32-bit network prefix, just like 12.45.67.8/32.
That becomes 2001:0c2d:4308::/48 instead
After that you just need to remember the subnet number and the host number. If you remember 12.45.67.8 maps to 192.168.13.7 you might have
2001:0c2d:4308:13::7
So subnet “13” and host “7”
It’s not much different to remembering 12.45.67.8>192.168.13.7
Exactly enough to fill out the address, which is always the same length. BTW, IPv4 does basically the same thing. The address 127.1 is equivalent to 127.0.0.1.
Not really the same, the mechanics are different and this particular behaviour is pretty much an accident, not abbreviation.
In IPv4 you also have 127.257 equal to 127.0.1.1, 123456789 equal to 7.91.205.21, and 010.010.010.010 is a well-known DNS server. This notation is also rejected by most implementations.
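Those equivalences are just the same 32-bit integer being re-read as a dotted quad; a quick sketch to check the arithmetic:

func dottedQuad(_ value: UInt32) -> String {
    (0..<4)
        .map { String((value >> (8 * (3 - $0))) & 0xFF) }
        .joined(separator: ".")
}

dottedQuad(123_456_789)                                   // "7.91.205.21"
dottedQuad(127 << 24 | 257)                               // "127.0.1.1"
dottedQuad(0o10 << 24 | 0o10 << 16 | 0o10 << 8 | 0o10)    // "8.8.8.8"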
It is? Those alternate IPv4 notations are all accepted by Linux, FreeBSD, and MacOS. I remember playing around with "alternate notations" 30+ years ago on old SunOS boxes.
I am not clear what your point is. The parent's point stands. A double colon only represents zeros (that were compressed and are not displayed).
Your link does not show different addresses from a valid compression, it shows different addresses from an invalid compression. The link gives examples of what we don't do.
Conversely, if we compress the expanded addresses in your link, we will get 2 different compressed addresses.
> IPv6 is just too long and requires copy/paste all the time.
That is only true for autogenerated/SLAAC IPs. In contrast, manually assigned IPs are often much simpler and easier to remember in IPv6 than in IPv4. I have one common subnet prefix that can be uniformly split across end networks, and the last number in the prefix for each such network always ends with 0 (and therefore the first device is xxx::1). While in IPv4 I had multiple prefixes, each split non-uniformly based on how many devices were expected to be on that end network, and because most end-network prefixes were smaller than /24 (say /26-28), the last number of the IP address varies between these networks.
I mean yes, but there’s no escape from the fact that IP addresses need to be longer, as the number of devices on the internet has already exceeded the pool of IPv4 addresses by multiple orders of magnitude.
I guess it could be possible to implement some sort of mnemonic phrases for addresses, à la BIP-39, but it would be just trading one kind of pain for another.
It’s a really complicated rule called “subtraction”. Addresses are always 128 bits long, or 8 groups of four hex digits. 2000::1 is two groups, so you need six groups in between to make 2000:0000:0000:0000:0000:0000:0000:1. But I don’t know why people always ask this, because it’s always the computer you are typing addresses in to that does the subtraction. You never ever have to type out the whole address. Just type the shortened version, because 2000::1 _is_ the whole address.
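If you want the “subtraction” spelled out, here’s a minimal sketch (assuming an otherwise well-formed address):

import Foundation

// Expand a ::-compressed IPv6 address back to all 8 groups.
func expand(_ address: String) -> String {
    let halves = address.components(separatedBy: "::")
    let left  = halves[0].split(separator: ":").map(String.init)
    let right = halves.count > 1 ? halves[1].split(separator: ":").map(String.init) : []
    let missing = 8 - left.count - right.count          // groups hidden by "::"
    let groups = left + Array(repeating: "0", count: missing) + right
    return groups
        .map { String(repeating: "0", count: 4 - $0.count) + $0 }
        .joined(separator: ":")
}

expand("2000::1")   // "2000:0000:0000:0000:0000:0000:0000:0001"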
the :1 is short for :0001 basically and then just put that bit of the address at the very end and put the first bit of the address at the front, and then just fill each missing group inbetween with 0000
Well, okay, show us how to follow those instructions then.
"the :1 is short for :0001 basically" is easy enough: you get 2001::0001::0001.
Then "just put that bit at the very end" -- but which bit? If it means the ":0001", then there's two of them and they can't both go at the very end. If not, then it fails to specify which bit. Either way I don't see how these instructions are followable at all, let alone easily.
My answer was too terse. IF there were two :: in the address, then the length of EACH ::-denoted section would not be known. The missing groups could go with either the left :: or the right ::, and that isn't defined, because the rule is THERE IS ONLY ONE :: section.
I've been using Linux for almost 20 years, including sed a lot of that time, I'm sure I've heard it before, I must have, but when parent wrote it I was like "aah, that makes sense".
I never forget what it does, obviously; I use it at least weekly and most of the time daily. But if you asked me what "sed" stands for I'd probably not recall. I might have attempted to work in "extended" somewhere in a guess, because of ex the editor, but besides that :/
Don't forget that you need to know English for that to work. I'm pretty sure most Unix users don't speak English (most computer users definitely don't). I interact with people who know few words besides "hello" and "goodbye", and for them "sed" is a nonsense term, just a set of letters randomly thrown together. Same as e.g. Excel, a random token that means nothing.
sed is just an example, of course, the author's point doesn't hold much weight for many (most?) users globally.
lol no. There are literally a hundred plus Unix tools and commands. I couldn’t tell you what 90% of them mean. I sure as hell couldn’t have told you what sed stood for. And if you asked me tomorrow I also wouldn’t be able to tell you.
C programmers are great. I love C. I wish everything had a beautiful pure C API. But C programmers are strictly banned from naming things. Their naming privileges have been revoked, permanently.
It's `xaf`, because the modern world is way too complex for simple Germanic rules to solve it.
But GNU tar was never the issue. It's almost completely straightforward; the only problem it has is people confusing the tar file with the target directory. If you use some UNIX tar, you will understand why everybody hates it.
Someone once tried this on me during Friday drinks and I successfully conquered the challenge with "tar --help". The challenger tried in vain to claim that this was not valid, but everyone present agreed that an exit code of zero meant that it was a valid solution.
Some drunks in a gnu-shaped echo chamber concluded that the world is gnu-shaped. That's not much of a joke, if there is one here. Such presently popular axioms as "unix means linux" or "the userland must be gnu" or "bash is installed" can be shown to be poor foundations to reason from by using a unix system that violates all those assumptions. That the xkcd comic did not define what a unix system is is another concern; there are various definitions, some of which would exclude both linux and OpenBSD.
I seem to remember "tar xvf filename.tar" from the 1990s, I'll try that out. If I'm wrong, I'll be dead before I even notice anything. That's better than dying of cancer or Alzheimer's.
z requires that it's compressed with gzip and is likely a GNU extension too (it was j for bzip2 IIRC). It's also important to keep f last because it takes a parameter: a filename should follow.
So I'd always go with c (create) instead of x (extract), as the latter assumes an existing tar file (with zx or xz, even a gzipped tar file; not sure if it's smart enough to autodetect compress-ed .Z files vs .gz either): with create, higher chances of survival in that xkcd.
is always a valid command, whether file.name exists or not. When the file doesn't exist, tar will exit with status '2', apparently, but that has no bearing on the validity of the command.
Compare these two logs:
$ tar xvzf read.me
tar (child): read.me: Cannot open: No such file or directory
tar (child): Error is not recoverable: exiting now
tar: Child returned status 2
tar: Error is not recoverable: exiting now
$ tar extract read.me
tar: invalid option -- 'e'
Try 'tar --help' or 'tar --usage' for more information.
Do you really not understand the difference between "you told me to do something, but I can't" and "you just spouted some meaningless gibberish"?
The GGP set the benchmark at "returns exit code 0" (for "--help"), and even with XKCD, the term in use is "valid command" which can be interpreted either way.
The rest of your slight is unnecessary, but that's your choice to be nasty.
Like I said, I was operating on a lot of zipped tars. Not sure what you are replying about.
The other commenter already mentioned that the xkcd just said "valid", not return 0 (which to be fair is what the original non xkcd required so I guess fair on the mixup)
Oh, just funny mental gymnastics if we are aiming for survival in 10 seconds with a valid, exit code 0 tar command. :)
As tar is a POSIX (ISO standard for "portable operating system interfaces") utility, I am also highlighting what might get us killed as all of us are mostly used to GNU systems with all the GNU extensions (think also bash commands in scripts vs pure sh too).
Hehe fair enough in that case. Tho nothing said it had to work on a tar from like 1979 ;)
To me at least POSIX is dead. It's what Windows (before WSL) supported with its POSIX subsystem so it could say it was compatible but of course it was entirely unusable.
Initial release July 27, 1993; 32 years ago
Like, POSIX: Take the cross section of all the most obscure UNICES out there and declare that you're a UNIX as long as you support that ;)
And yeah I use a Mac at work so a bunch of things I was used to "all my life" so to speak don't work. And they didn't work on AIX either. But that's why you install a sane toolchain (GNU ;) ).
Like sure I was actually building a memory compactification algorithm for MINIX with the vi that comes with MINIX. Which is like some super old version of it that can't do like anything you'd be used to from a VIM. It works. But it's not nice. That's like literally the one time I was using hjkl instead of arrow keys.
That's part of the point, I believe. It's not about being always able to guess the function from first sight. It's also about the function and name serving as mnemonic to each other once you understand how it got named.
I think perhaps the article's argument gets weaker then?
The claim is that grep is "well named" because, even though it's not obvious when you first read it, it's a contraction of "global reg ex print" and hence memorable. I'm not sure the same argument can't be made for libsodium which, assuming the reader is familiar with NaCl (the same as the assumption that the previous reader is familiar with regex), is an equally memorable name for your crypto library.
There's always a consideration about the context the name is intended and likely to be used in. The article mentions engineering naming and "ibeam", but engineering has its own technical names and jargon as well. Most people won't know what "4130 tube" means, but people who build bicycle frames or roll cages will - and they're likely to use the less specific term "chromoly" if they don't need to distinguish between 4130 and 4145.
In my head "libsodium" is similar - if you don't know what it (and NaCl) mean, you 100% should keep out of that part of the codebase.
Names fall on a spectrum on this argument. Sodium is not really random, because of the use of "salt" in crypto. It's like saying that libsodium is part of your crypto. awk is more random.
The argument gets stronger with projects where the creator seemed to just roll the dice with the name.
One additional complication with grep (and other CLI tools) is that the name itself is part of the day to day UX. It needs to be short, easy to say, and easy to type. With a library the API that is contained within serves the analogous role.
"libsodium" -> "salt" -> "salting is something tangentially related to cryptography" is significantly better as a mnemonic than "awk stands for the author's initials".
Same for grep - with, I guess, the proviso/assumption that you know what regular expression means, which might have been a fair assumption for the sort of people who had command line access to Unix systems in the 70s/80s, but may no longer be valid for developers under 30 who grew up with Windows and were perhaps trained in 6 or 26 week "bootcamps" that didn't have time to cover historical basics like that?
Regular expressions are more of a CS topic (regular languages), though the common abbreviations "re" and "regex" are ones I've only seen in the wild, before and after my formal education in CS.
Yeah, I'd totally expect CS grads, old school Unix sysadmins, and Perl hackers to be fully familiar with regex. Not so sure I'd expect that from bootcamp front-end webdev "grads", self-taught game devs, or maybe (I'm not sure?) engineers who have spent their careers in Microsoft dev environments.