It's the only way to be sure it's not being trained on.
Most people never come up with any truly novel ideas to code. That's fine. For them, there's no point in withholding their projects from LLM providers.
This lack of creativity is so prevalent that many people believe it is not possible to come up with new ideas (variants: it's all been tried before; or: it would inevitably be tried by someone else anyway; or: people will copy anyway).
Some people do come up with new stuff, though. And (sometimes) they don't want it to be trained on. That, IMO, is the main edge of running local models.
In a word: competition.
Note, this is distinct from fearing copying by humans (or agents) with LLMs at their disposal. This is about not seeding patterns more directly into the code being trained on.
Most people would say, forget that, just move fast and gain dominance. And they might not be wrong. Time may tell. But the reason can still stand as a compelling motivation, at least theoretically.
Tangential: IANAL, but I imagine there's some kind of parallel concept around code/concept "property ownership". If you literally send your code to a 3P LLM, I'm guessing they have rights to it and some otherwise handwavy (quasi important) IP ownership might become suspect. We are possibly in a post-IP world (for some decades now depending on who's talking), but not everybody agrees on that currently, AFAICT.
There are guarantees from several providers that they don’t train on, or even retain, a copy of your data. You are right that they could be lying, but some are big enough that it would be catastrophic to them from a liability point of view.
Re:creative competition - that’s interesting. I open source much of my creative work so I guess that’s never been a concern of mine.
So here uv installs the Python version wanted. But it's just a venv. And we pip install using requirements.txt, like normal, within that venv.
Someone, please tell me what's wrong with this. To me, this seems much less complicated than some uv-centric .toml config file, plus uv-centric commands for various kinds of actions.
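For concreteness, the workflow I mean is roughly this (the Python version is just an example; note that a bare "uv venv" doesn't seed pip, so --seed is added here to keep plain pip working):

    uv python install 3.12            # uv fetches/manages the interpreter
    uv venv --python 3.12 --seed      # ordinary .venv; --seed installs pip into it
    source .venv/bin/activate
    pip install -r requirements.txt   # plain pip + requirements.txt, like normal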
The greatest thing about Graphviz is indeed the dot language. A nice thing about using dot is that the graph definition is *portable among all applications that support dot*.
Dot is such a simple and readable format (particularly if using the basic features). Thus, it can make a ton of sense to define graphs in strict dot, even if you will be rendering with another tool than Graphviz.
These days, there are other popular options, too -- Mermaid, etc, as TFA indicates. Nonetheless, Graphviz/dot will remain for the long haul, IMO, because dot is so, so good.
So, you need Graphviz for its syntax definitions primarily, and because it is a standard that could be recognized/run anywhere.
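As a tiny illustration of that portability, a dot file is just plain text that any dot-aware renderer can consume. A hypothetical two-command sketch using the Graphviz CLI:

    echo 'digraph g { rankdir=LR; build -> test -> deploy; build -> lint; }' > pipeline.dot
    dot -Tsvg pipeline.dot -o pipeline.svg   # render with Graphviz; the same .dot file loads in other dot-aware tools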
Based on a cursory look, keywords can include "smartphone-only internet users" and "large-screen computer ownership".
The American Community Survey asks questions related to that (income, computing devices). Comparing states, the poorer the residents of a state, the smaller the percent of households with regular computers ("large-screen computer ownership"), per "Computer Ownership and the Digital Divide" (Mihaylova and Whitacre, 2025) [0, 1, 2].
Also, Pew runs surveys on income and device usage ("smartphone-only"). Again, the lower the income, the higher the proportion that is smartphone-only [3, 4].
I tried very hard to get it to work, but I simply couldn't get it to connect with my Stalwart instance over JMAP. I do have the permissive CORS and end-points and proxy-protocol seemingly working with my test HTTPS requests, and I also successfully got JMAP to work with the Mailtemi app, but no luck yet with Cypht[0].
Yeah, that kind of issue is what made me flinch when thinking about using Stalwart, as much as I like how easy it is to install as a server and the ideology behind it. Looks like I'm going to stick with WildDuck for now. I just don't like hedging our email bets on MongoDB Community Edition.
I do have to hand it to the developer, though. This is some serious long-term commitment to an open standard that has simply never taken off beyond one company (Fastmail). Client-side JMAP support is pretty much nonexistent, and I am back to using IMAP/WebDAV with Roundcube and plugins with Stalwart. To me, this is an exercise in patience and waiting for an eventual payoff that may or may not come in the next two years. Having followed the project closely for over a year, gone through a few upgrades, and followed the community, I'm still optimistic and happy to be along for the ride.
> best case scenario is walkable neighborhoods with lots of little tasty restaurants at affordable prices around the corner from everybody.
According to the written history, pre-1906 San Francisco had basically that.
It seems that the normal middle-class could afford high-quality, delicious food at restaurants, multiple times per week, due to the abundance of local ingredients and overall economic conditions.
So, how to get that quality and relative pricing today?
Excerpts from "The City That Was: A Requiem of Old San Francisco" by Will Irwin (free eBook, [0], free audiobook [1], HTML version at [2]):
> San Francisco was famous for its restaurants and cafes.
> they gave the best fare on earth, for the price, at a dollar, seventy-five cents, a half a dollar, or even fifteen cents.
> a public restaurant where there was served the best dollar dinner on earth
> The eating was usually better than the surroundings. Meals that were marvels were served in tumbledown little hotels.
> A number of causes contributed to this. The country all about produced everything that a cook needs and that in abundance—the bay was an almost untapped fishing pound, the fruit farms came up to the very edge of the town, and the surrounding country produced in abundance fine meats, game, all cereals and all vegetables.
[0] https://www.gutenberg.org/ebooks/3314
[1] https://librivox.org/san-francisco-before-and-after-the-earthquake-by-william-henry-irwin/
[2] https://www.gutenberg.org/cache/epub/3314/pg3314-images.html
A starting point would be actually having enough housing, so workers don't automatically need wages that can absorb the overhead of commuting an hour both ways just to make burgers.
Adjusted for income, those prices would be $15-$100 today. That seems in the right ballpark to me. I can get a pretty great dinner for $100/plate, especially if I don't need it to be in a fancy restaurant atmosphere.
That's what I thought at first, after trying one inflation calculator: $30 for a decent meal, sure, and maybe double that for a pretty tasty meal; that's pretty attainable. (Even then, I think ingredient purity and true preparation aptitude could be pretty suspect, especially at the lower end.)
BUT, TRYING AGAIN: Some inflation calculators do not go back to 1900. But looking further, $0.15 to $1.00 in 1900 would be $5.67 to $38.57 in 2025 dollars, according to https://www.in2013dollars.com/us/inflation/
I do wonder if there are discontinuities in inflation calculators for the times before the great fires in each city. Setting that aside, and assuming https://www.in2013dollars.com/us/inflation/1900?amount=0.15 is accurate, 15 cents in 1900 would be $5.64 in 2025 AFAICT at the moment.
It would be very hard to find a decent sandwich for $5.67 just about anywhere in the USA, much less a multi-course, local, fresh, gourmet meal.
I think it's the general availability of these kinds of pure foods, and their accessibility all about town, prepared to near perfection, even accessible to the poor, that stands out in the Old San Francisco description. To wit:
> ...Hotel de France. This restaurant stood on California street...a big ramshackle house, which had been a mansion of the gold days. Louis, the proprietor, was a Frenchman...his accent was as thick as his peasant soups. The patrons were Frenchmen of the poorer class, or young and poor clerks and journalists who had discovered the delights...
> First ...was the soup mentioned before—thick and clean and good. Next, ...a course of fish—sole, rock cod, flounders or smelt—with a good French sauce. The third course was meat. This came on en bloc; the waiter dropped in the centre of each table a big roast or boiled joint together with a mustard pot and two big dishes of vegetables. Each guest manned the carving knife in turn and helped himself to his satisfaction. After that, ...a big bowl of excellent salad.... For beverage, there stood by each plate a perfectly cylindrical pint glass filled with new, watered claret. The meal closed with "fruit in season"—all that the guest cared to eat....the price was fifteen cents!
> If one wanted black coffee he paid five cents extra...a beer glass full of it. ...he threw in wine and charged extra for after-dinner coffee...
> Adulterated food at that price? Not a bit of it! The olive oil in the salad was pure, California product—why adulterate when he could get it so cheaply? The wine, too, was above reproach.... Every autumn, he brought tons and tons of cheap Mission grapes, ...The fruit was small, and inferior, but fresh...wished his guests would eat nothing but fruit, it came so cheap...
Anecdotally, this is consistent with what I have personally observed in dozens of countries, where the low-end cost of eating out is about the same as an hour of work.
I used census data to come up with my guesstimate [0]. In 1905, the largest share of men were making $10-15 per week. Women and children less, of course.
The 2025 equivalent seems to be about $1330 per week. So in [very] round numbers it looks like about 100x.
Per charts in TFA, it looks like some disks are failing less overall, and failing after a longer period of time.
I'm still not sure how to confidently store decent amounts of (personal) data for over 5 years without
1- giving to cloud,
2- burning to M-disk, or
3- replacing multiple HDD every 5 years on average
All whilst regularly checking for bitrot and not overwriting good files with bad corrupted files.
Who has the easy, self-service, cost-effective solution for basic, durable file storage? Synology? TrueNAS? Debian? UGreen?
(1) and (2) both have their annoyances, so (3) seems "best" still, but seems "too complex" for most? I'd consider myself pretty technical, and I'd say (3) presents real challenges if I don't want it to become a somewhat significant hobby.
Get yourself a Xeon powered workstation that supports at least 4 drives. One will be your boot system drive and three or more will be a ZFS mirror. You will use ECC RAM (hence Xeon). I bought a Lenovo workstation like this for $35 on eBay.
ZFS with a three way mirror will be incredibly unlikely to fail. You only need one drive for your data to survive.
Then get a second setup exactly like this for your backup server. I use rsnapshot for that.
For your third copy you can use S3 like a block device, which means you can use an encrypted file system. Use FreeBSD for your base OS.
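The ZFS part of that is only a couple of commands. A minimal sketch, assuming FreeBSD-style device names and a made-up pool name:

    # three-way mirror: data survives as long as any one of the three disks survives
    zpool create tank mirror /dev/ada1 /dev/ada2 /dev/ada3
    zfs create -o compression=lz4 tank/data
    zpool scrub tank    # schedule this periodically so silent corruption gets caught early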
3. Park a small reasonably low-power computer at a friend's house across town or somewhere a little further out -- it can be single-disk or raidz1. Send ZFS snapshots to it using Tailscale or whatever. (And scrub that regularly, too.)
4. Bring over pizza or something from time to time.
As to brands: This method is independent of brand or distro.
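The offsite part is likewise just snapshot-and-send over SSH. A sketch with made-up pool, snapshot, and hostnames (assumes the friend's box is reachable as "backupbox" on the tailnet):

    # on the primary machine: first full copy
    zfs snapshot tank/data@week1
    zfs send tank/data@week1 | ssh backupbox zfs receive backup/data
    # following weeks: take a new snapshot, then send only the delta
    zfs snapshot tank/data@week2
    zfs send -i tank/data@week1 tank/data@week2 | ssh backupbox zfs receive backup/data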
> 3. Park a small reasonably low-power computer at a friend's house across town or somewhere a little further out -- it can be single-disk or raidz1. Send ZFS snapshots to it using Tailscale or whatever. (And scrub that regularly, too.)
Maybe I’m hanging out in the wrong circles, but I would never think it appropriate to make such a proposal to a friend; “hey let me set up a computer in your network, it will run 24/7 on your power and internet and I’ll expect you to make sure it’s always online, also it provides zero value to you. In exchange I’ll give you some unspecified amount of pizza, like a pointy haired boss motivating some new interns”.
About the worst I can imagine happening (other than the new-found ability to rickroll someone's TV as a prank) is that said friend might take an interest in how I manage my data and want a hand with setting up a similar thing for themselves.
And that's all fine too. I like my friends quite a lot, and we often help eachother do stuff that is useful: Lending tools or an ear to vent at, helping to fix cars and houses, teaching new things or learning them together, helping with backups -- whatever. We've all got our own needs and abilities. It's all good.
Except... oh man: The electric bill! I forgot about that.
A small computer like what I'm thinking would consume an average of less than 10 Watts without optimization. That's up to nearly $16 per year at the average price of power in the US! I should be more cognizant of the favors I request, lest they cause my friends to go bankrupt.
/s, of course, but power can be a concern if "small" is misinterpreted.
Or find someone else with a similar backup need and then both just agree to have enough space to host remote backups for the other. I would have to increase my ZFS from N to 2N TB, but that would be less work and cheaper than setting up a backup computer for N TB somewhere else.
I have a simpler approach that I've used at home for about 2 decades now pretty much unchanged.
I have two raid1 pairs - "the old one" and "the new one" - plus a third drive the same size as the old pair. The new pair is always larger than the old pair; in the early days it was usually well over twice as big, but drive growth rates have slowed since then. About every three years I buy a new "new pair" + third drive, and downgrade the current "new pair" to be the "old pair". The old pair is my primary storage, and it gets rsynced to a partition of the same size on the new pair. The remainder of the new pair is used for data I'm OK with not being backed up (umm, all my BitTorrented Linux ISOs...). The third drive is on a switched powerpoint; it spins up late Sunday night, rsyncs the data copy on the new pair, then powers back down for the week.
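The weekly job on the third drive can be as small as one rsync invocation. A sketch with made-up mount points:

    # mirror the backed-up partition on the new pair onto the once-a-week third drive
    rsync -a --delete /mnt/newpair/backup/ /mnt/thirddrive/backup/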
>3. Park a small reasonably low-power computer at a friend's house across town or somewhere a little further out -- it can be single-disk or raidz1. Send ZFS snapshots to it using Tailscale or whatever. (And scrub that regularly, too.)
Unless you're storing terabyte levels of data, surely it's more straightforward and more reliable to store on backblaze or aws glacier? The only advantage of the DIY solution is if you value your time at zero and/or want to "homelab".
A chief advantage of storing backup data across town is that a person can just head over and get it (or ideally, a copy of it) in the unlikely event that it becomes necessary to recover from a local disaster that wasn't handled by raidz and local snapshots.
The time required to set this stuff up is...not very big.
Things like ZFS and Tailscale may sound daunting, but they're very light processes on even the most garbage-tier levels of vaguely-modern PC hardware and are simple to get working.
I'd much rather just have a backblaze solution and maybe redundant local backups with Time Machine or your local backup of choice (which work fine for terabytes at this point). Maybe create a clone data drive and drop it off with a friend every now and then which should capture most important archive stuff.
If you mostly care about data integrity, then a plain RAID-1 mirror over three disks is better than RAIDZ. Correlated drive failures are not uncommon, especially if they are from the same batch.
I also would recommend an offline backup, like a USB-connected drive you mostly leave disconnected. If your system is compromised, an attacker could encrypt everything, and an always-connected backup can probably be reached and encrypted too.
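For the three-disk mirror, one way to do it (assuming Linux mdraid, which the parent doesn't specify; device names are placeholders):

    # classic md RAID-1 across three disks: any single surviving disk still holds everything
    mdadm --create /dev/md0 --level=1 --raid-devices=3 /dev/sda1 /dev/sdb1 /dev/sdc1
    mkfs.ext4 /dev/md0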
With RAID 1 (across 3 disks), any two drives can fail without loss of data or availability. That's pretty cool.
With RAIDZ2 (whether across 3 disks or more than 3; it's flexible that way), any two drives can fail without loss of data or availability. At least superficially, that's ~equally cool.
That said: If something more like plain-Jane RAID 1 mirroring is desired, then ZFS can do that instead of RAIDZ (that's what the mirror command is for).
And it can do this while still providing efficient snapshots (accidentally deleted or otherwise ruined a file last week? no problem!), fast transparent data compression, efficient and correct incremental backups, and the whole rest of the gamut of stuff that ZFS just boringly (read: reliably, hands-off) does as built-in functions.
It's pretty good stuff.
All that good stuff works fine with single disks, too. Including redundancy: ZFS can use copies=2 to store multiple (in this case, 2) copies of everything, which can allow for reading good data from single disks that are currently exhibiting bitrot.
This property carries with the dataset -- not the pool. In this way, a person can have their extra-important data [their personal writings, or system configs from /etc, or whatever probably relatively-small data] stored with extra copies, and their less-important (probably larger) stuff stored with just one copy...all on one single disk, and without thinking about any lasting decisions like allocating partitions in advance (because ZFS simply doesn't operate using concepts like hard-defined partitions).
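A sketch of what that looks like in practice, with made-up pool and dataset names:

    zpool create single /dev/sdb                 # one plain disk, no partitioning decisions up front
    zfs create -o copies=2 single/important      # two on-disk copies for the small, critical stuff
    zfs create single/bulk                       # one copy for the big, replaceable stuff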
I agree that keeping an offline backup is also good because it provides options for some other kinds of disasters -- in particular, deliberate and malicious disasters. I'd like to add that this single normally-offline disk may as well be using ZFS, if for no other reason than bitrot detection.
It's great to have an offline backup even if it is just a manually-connected USB drive that sits on a shelf.
But we enter a new echelon of bad if that backup is trusted and presumed to be good even when it has suffered unreported bitrot:
Suppose a bad actor trashes a filesystem. A user stews about this for a bit (and maybe reconsiders some life choices, like not becoming an Amish leatherworker), and decides to restore from the single-disk backup that's sitting right there (it might be a few days old or whatever, but they decide it's OK).
And that's sounding pretty good, except: With most filesystems, we have no way to tell if that backup drive is suffering from bitrot. It contains only presumably good data. But that presumed-good data is soon to become the golden sample from which all future backups are made: When that backup has rotten data, then it silently poisons the present system and all future backups of that system.
If that offline disk instead uses ZFS, then the system detects and reports the rot condition automatically upon restoration -- just in the normal course of reading the disk, because that's how ZFS do. This allows the user to make informed decisions that are based on facts instead of blind trust.
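Concretely, before restoring you can make the shelf disk prove itself; the pool name here is a placeholder:

    zpool import coldbackup       # attach the normally-offline pool
    zpool scrub coldbackup        # re-read every block and verify it against its checksum
    zpool status -v coldbackup    # any rot shows up here, with affected files listed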
I had to check for data integrity due to a recent system switch, and was surprised not to find any bitrot after 4y+.
It took ages to compute and verify those hashes between different disks. Certainly an inconvenience.
I am not sure a NAS is really the right solution for smaller data sets. An SSD for quick hashing and a set of N hashed cold storage HDDs - N depends on your appetite for risk - will do.
Don’t get me wrong: IMHO a ZFS mirror setup sounds very tempting, but its strengths lie in active data storage. Due to the rarity of bitrot, I would argue it can be replaced with manual file hashing (and replacing files if needed), with the disks used in cold-storage mode for months at a time.
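For the manual-hashing route, something like this is about all it takes (paths are examples):

    # when the disk goes on the shelf: write a manifest
    find /mnt/colddisk -type f -print0 | xargs -0 sha256sum > colddisk.sha256
    # months later, with the disk mounted at the same path: verify it
    sha256sum -c --quiet colddisk.sha256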
What worries me more than bitrot is that consumer disks (with enclosure, SWR) do not give access to SMART values over USB via smartctl. Disk failures are real and have strong impact on available data redundancy.
Data storage activities are an exercise in paranoia management: What is truly critical data, what can be replaced, what are the failure points in my strategy?
There's no worse backup system than that which is sufficiently-tedious and complex that it never gets used, except maybe the one that is so poorly documented that it cannot be used.
With ZFS, the hashing happens at every write and the checking happens at every read. It's a built-in. (Sure, it's possible to re-implement the features of ZFS, but why bother? It exists, it works, and it's documented.)
Paranoia? Absolutely. If the disk can't be trusted (as it clearly cannot be -- the only certainty with a hard drive is that it must fail), then how can it be trusted to self-report that it has issues? ZFS catches problems that the disks (themselves inscrutable black boxes) may or may not ever make mention of.
But even then: Anecdotally, I've got a couple of permanently-USB-connected drives attached to the system I'm writing this on. One is a WD Elements drive that I bought a few years ago, and the other is a rather old, small Intel SSD that I use as scratch space with a boring literally-off-the-shelf-at-best-buy USB-SATA adapter.
And they each report a bevy of stats with smartctl, if a person's paranoia steers them to look that way. SMART seems to work just fine with them.
(Perhaps-amusingly, according to SMART-reported stats, I've stuffed many, many terabytes through those devices. The Intel SSD in particular is at ~95TBW. There's a popular notion that using USB like this is sure to bring forth Ghostbusters-level mass hysteria, especially in conjunction with such filesystems as ZFS. But because of ZFS, I can say with reasonable certainty that neither drive has ever produced a single data error. The whole contrivance is therefore verified to work just fine [for now, of course]. I would have a lot less certainty of that status if I were using a more-common filesystem.)
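FWIW, the check itself is just this (device path is a placeholder):

    smartctl -a /dev/sdX           # full SMART report; works through many USB bridges as-is
    smartctl -d sat -a /dev/sdX    # if the bridge needs SCSI-to-ATA translation forced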
I don't understand what you're worried about with 3.
Make a box, hide it in a closet with power, and every 3 months look at your drive stats to see if any have a bunch of uncorrectable errors. If we estimate half an hour per checkup and one hour per replacement, that's under three hours per year to maintain your data.
Hard drive failure seems like more of a cost and annoyance problem than a data preservation issue. Even with incredible reliability you still need backups if your house burns down. And if you have a backup system then drive failure matters little.
IIRC, the things currently marketed as MDisc are just regular BD-R discs (perhaps made to a higher standard, and maybe with a slower write speed programmed into them, but still regular BD-Rs).
If you don't have too much stuff, you could probably do ok with mirroring across N+1 (distributed) disks, where N is enough that you're comfortable. Monitor for failure/pre-failure indicators and replace promptly.
When building up initially, make a point of trying to stagger purchases and service entry dates. After that, chances are failures will be staggered as well, so you naturally get staggered service entry dates. You can likely hit better than 5 year time in service if you run until failure, and don't accumulate much additional storage.
But I just did a 5 year replacement, so I dunno. Not a whole lot of work to replace disks that work.
Offline data storage is a good option for files you don't need to access constantly. A hard drive sitting on a shelf in a good environment (not much humidity, reasonable temperature, not a lot of vibration) will last a very very long time. The same can't be said for SSDs, which can lose their stored data in a matter of a year or two.
Unless you're basically a serious data hoarder or otherwise have unusual storage requirements, an 18TB drive (or maybe 2) get you a lot of the way to handling most normal home requirements.
Personally, I buy the drives with the best $/storage ratio. Right now that seems to be ~3-6TB drives. Many PC enclosures and motherboards can fit 8-12 drives, fill it up with the cheapest stuff you're willing to spend money on. It will probably break even or be cheaper than the larger drives.
It depends on the use case. As with CPUs, I tend not to buy the top-end but it may make sense to just buy for expansion over time. I think my RAID-1 Synology drives are 8TB. But mostly just use external enclosures these days anyway. Pretty much don't build PCs any longer.
Tapes would be great for backups - but the tape drive market's all "enterprise-y", and the pricing reflects that. There really isn't any affordable retail consumer option (which is surprising as there definitely is a market for it).
I looked at tape a little while ago and decided it wasn't gonna work out for me reliability-wise at home without a more controlled environment (especially humidity).
I don't know why you were downvoted, I think for the right purpose tape drives are great.
Used drives from a few generations back work just fine, and are affordable. I have an LTO-5 drive, and new tapes are around $30 where I am. One tape holds 1.5TB uncompressed.
I think they are great for critical data. I have documents and photos on them.
I'm not 100% up to speed with the current standing of things, but tapes (specifically the LTO technology) are still relied on very heavily by the enterprise, both in data centers for things like cold storage or critical backups, and for other corporate uses. Archival use is also very strong, with libraries and other such institutions having large tape libraries with autoloaders and all that automation jazz. The LTO-5 generation I mentioned was released in 2010, and the first LTO generation was released in 2000, I believe. The current generation is LTO-10, with an uncompressed capacity of 30TB. New LTO tapes are still being produced; the last batch I purchased was made in 2023.
The LTO consortium consists of HP, IBM and one other company I believe. Now, in my opinion, none of this guarantees the longevity of the medium any more than any other medium, but when I initially looked into it, this was enough to convince me to buy a drive and a couple of tapes.
My reasoning was that with the advertised longevity of 30 years under "ideal archival conditions", if I can get 10 years of mileage from tapes that are just sitting on my non-environmentally-controlled shelf, that means I'll only have to hunt down new tapes 3 times in my remaining lifetime, and after that it will be someone else's problem.
This article looks at how humans on HN use the word "absolutely", contrasted with machine (over)usage of the term, which has been referenced a lot in articles and memes: both the overuse itself and the attempts to correct it.
Ya, I didn't even know marp existed before I started working on it. Deckless has its own opinionated way of rendering markdown, but the basic principles are the same... just turn markdown into something presentable. Maybe I can incorporate marp's custom markdown design hints, such as background properties etc., in the future? But it's really just meant to do everything out of the box, without any customization beyond your markdown.
Also, the main difference is that it's just a web tool, so it works with a UI in the browser and is supposed to be simple for anyone to use.
Yeah, it's a myth, and a pervasive one. Decent description at [0].
Not only that, but people actually think they know that.
Since people believe it, it's "real to them".
IMO, this helps make it easier to go from "we're going to make the best widgets and be good, responsible, ethical corps" to "we will extract as much value as possible from customers, anything within the law is fair game, external consequences not being our concern".
Why is the idea of quality oriented “getting by” businesses popular here, but worker cooperatives generally scorned? A worker owned coop's decision to focus on quality and affordability is more resilient than that of a business with investors and hierarchical ownership that can change (after a death or a sale, etc.).
I've never actually seen them criticized here but I'll bite since I've worked for one.
Worker owned companies are just a different shade of typical corporate politics. I worked for North America's largest sewer inspection and cleaning company. The company did about equal volumes of each type of work, but since inspection is more technology based, there were far more cleaners than there were inspectors and analysts. I'd been there about a year and had noticed that we were so far outpacing cleaning that we'd started to lapse on some of our contractual inspection storage commitments, which required about ten years of storage of raw inspection files. The inspection files were raw video with annotations. I drew up a proposal to build out centralized storage arrays and upgrade the internet connections at video processing sites. Pretty baseline stuff to meet our contractual obligations. It went up for a vote because it'd affect the yearly budget, which impacted dividend checks. It was unanimously voted down by the cleaners. I realized then and there that any business that's worker owned will primarily be influenced by its largest labor group by headcount, and I haven't worked for one since.
Long way of saying that I wouldn't say it's any better or worse than other management structures.
There are different ways to structure worker cooperatives and decision making. Have you seen the consent-based framework sociocracy uses (rather than majority voting or consensus)?
It's hard to say, without turning back time, that this would've changed things. I could see a seasoned cleaner arguing that the diff of x and y dividends would be impactful to their lives, and that I could be pressured to build a less efficient, decentralized system that complements the existing decentralized system, because when I signed up inefficiency was already built in.
Having voting power didn't actually change my position as someone making a proposal. It actually made it worse because now instead of convincing one slightly less informed king I'm trying to convince a room full of even lesser informed peasantry. It'd be like if the cleaners tried to convince me to buy the new line of vac truck with technology advancements that can clean a complex sewer in ten minutes instead of 30. Reflexively, having never dropped down in waders into a sewer I'd say, "Well, what's 20 more minutes of contractual time?"
That's ultimately the social mechanics that were at play: "Okay nerd, why do you need better efficiency and auditability? This industry has gotten by just fine filing physical hard drives into physical filing systems for a long time." Without being required to empathize with the problem, and without needing any experience relevant to the decision, it's democracy as pure bureaucracy, with no subject matter experts.
Your reply was premised on a misinformed understanding about consent based decision making so I shared how it works. Ok if you’re not interested. Have a good one
My reply was based on my experience at an actual company that makes money and holds a dominant market position, not a website or an idea. You have a good one as well.
As you mention, plenty of bakeries and grocery stores are worker owned. For example, Rainbow Grocery is not _cheap_, but the quality is high and the bulk prices are not bad.
For some reason two of the biggest and best flour brands are worker owned: King Arthur and Bob’s Red Mill.
But if we’re talking bottom of the barrel prices, I don’t know many worker owned orgs that focus on that.
Turns out when operations are more democratic and left-leaning (and all worker owned coops I know of in 2025 are left-leaning), workers are unlikely to support things that are cheaper but have negative externalities. So produce is more likely to be organic (and expensive), farming practices are more likely to be ethical (and expensive), etc.
I’ve been on the lookout for worker owned clothing brands but they’re few and far between.
The post to which I replied was specifically about:
- worker cooperatives, and
- quality and affordability
I don't know whether worker cooperatives are more or less likely than a median business to generate negative externalities, so I won't comment on that part.
I wouldn't call Rainbow Grocery 'affordable'. It's been a long time since I bought anything there, but I recall it being much more expensive than every single chain supermarket (not just the lower end ones).
King Arthur and Bob's Red Mill are not 'worker cooperatives' as far as I can tell. They both have ESOPs (Employee Stock Ownership Plans), but I don't see anything suggesting they're run in a democratic (one employee = one vote) fashion.
https://www.bobsredmill.com/employee-owned Certainly many non-coop businesses have ESOPs but this says the goal is to transition to 100% employee-owned via the ESOP (rather than the typical single or low double digit employee grant pool). I recall reading that when Bob was dying he decided or had it in his will to transition his ownership fully to the employees.
edit: "100% employee owned / That happy day came in April 30th of 2020: as of our 10th anniversary, Bob’s Red Mill is now 100% employee owned, one of only about 6,000 businesses in the country to achieve this incredible feat."
Equal Exchange is a worker-owned co-op: https://equalexchange.coop their management leadership positions are rotating (across workers) and have compensation multiplier caps. The coffee at least is quite affordable compared with other specialty brands.
Thanks to zoning laws in Japan, whereby practically anyone is able to start a retail business with minimal capital and permitting requirements, there are many shops and food-related businesses that are worker-owned. Many are also highly affordable and can be cheaper than chains or convenience options (apart from the very cheapest of chains).
Worker co-ops are scorned here? I think they just don’t come up much due to the nature of the industry.
Like all [citation needed] nerds I consumed a ridiculous amount of fantasy fiction growing up, and think programming is as close to magic as we’ll ever get in the real world. If somebody made a “Programmers Guild” in the style of a wizard’s guild, who among us wouldn’t join such a thing?
Yes. HN is a western, liberal, and capitalist hive mind essentially. Or to put it another way, it is for the status quo, more or less. Things like coops are naturally going to not be too well received in such a place.
I’ve seen co-ops come up in a couple threads. Usually I see interest, not a ton of direct experience, sometimes a little bit of skepticism of the idea, but not a ton. Here’s an example:
Now, you can find some negative reception I’m sure if you look hard enough, but generally the reception ranges from “a little experience” to “totally naive but curious.”