Hacker News | jryio's comments

This is interesting for offloading "tiered" workloads / priority queue with coding agents.

If 60% of the work is "edit this file with this content" or "refactor according to this abstraction", then low-latency, high-throughput token inference seems like a needed improvement.

Recently someone made a Claude plugin to offload low-priority work to the Anthropic Batch API [1].

Also, I expect both Nvidia and Google to deploy custom silicon for inference [2].

1: https://github.com/s2-streamstore/claude-batch-toolkit/blob/...

2: https://www.tomshardware.com/tech-industry/semiconductors/nv...


Note that batch APIs are significantly higher latency than normal AI agent use; they're mostly intended for bulk work where timing is not critical. Also, GPT "Codex" models (and most of the "Pro" models) are currently not available under OpenAI's own Batch API, so you would have to use non-agentic models for these tasks, and it's not clear how well they would cope.

(Overall, batches do have quite a bit of potential for agentic work as-is, but you have to cope with them potentially taking up to 24h for a single roundtrip with your local agent harness.)


OpenAI has a "flex" processing tier, which works like the normal API, but where you accept higher latency and higher error rates in exchange for 50% off (the same as batch pricing). It also supports prompt caching for further savings.

For me, it works quite well for low-priority things, without the hassle of using the batch API. Usually the added latency is just a few seconds extra, so it would still work in an agent loop (and you can retry requests that fail at the "normal" priority tier.)

https://developers.openai.com/api/docs/guides/flex-processin...
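For reference, opting in is a one-parameter change in the OpenAI Python SDK: the documented `service_tier="flex"` parameter. The model name below is illustrative, and this sketch only builds the request payload rather than calling the API (in real use you'd pass it to `client.chat.completions.create(**payload)` and retry failures at the default tier):

```python
# Sketch only: builds a chat-completion payload that opts into flex
# processing. `service_tier="flex"` is the documented knob; the model
# name and prompt are placeholders.

def flex_request(model: str, prompt: str) -> dict:
    """Build request kwargs asking for the cheaper, slower flex tier."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "service_tier": "flex",  # 50% off; higher latency and error rates
        "timeout": 900,          # flex calls can queue longer than default
    }

payload = flex_request("o3", "Refactor this function to be pure.")
print(payload["service_tier"])  # -> flex
```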


That's interesting but it's a beta feature so it could go away at any time. Also not available for Codex agentic models (or Pro models for that matter).

I built something similar using an MCP server that allows Claude to "outsource" development to GLM 4.7 on Cerebras (or a different model, but GLM is what I use). The tool allows Claude to set the system prompt and instructions, specify the output file to write to, and, crucially, list which additional files (or subsections of files) should be included as context for the prompt.

I've had great success with it, and it rapidly speeds up development time at fairly minimal cost.
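For the curious, here is a hypothetical sketch of what such a tool definition might look like. The name `outsource` and the schema fields are made up for illustration; the real server's schema may differ. The key idea is that Claude, not the server, picks the context:

```python
# Hypothetical MCP tool definition for delegating a task to a cheaper
# model. All names here are illustrative, not from the actual project.

OUTSOURCE_TOOL = {
    "name": "outsource",
    "description": "Delegate a well-specified coding task to a cheaper/faster model.",
    "inputSchema": {
        "type": "object",
        "properties": {
            "system_prompt": {"type": "string"},
            "instructions": {"type": "string"},
            "output_file": {"type": "string"},
            "context_files": {
                "type": "array",
                "items": {"type": "string"},
                "description": "Paths (optionally path:start-end ranges) to inline as context",
            },
        },
        "required": ["instructions", "output_file"],
    },
}

print(sorted(OUTSOURCE_TOOL["inputSchema"]["properties"]))
# -> ['context_files', 'instructions', 'output_file', 'system_prompt']
```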


Why use MCP instead of an agent skill for something like this when MCP is typically context inefficient?

MCP is fine if your tool definition is small. If it's something like a sub-agent harness that is used very often, then it's probably actually more context efficient: the tools are already loaded in context, and the model doesn't have to spend a few turns deciding to load the skill, thinking about it, and then invoking another tool/script to invoke the subagent.

Models haven't been trained enough on using skills yet, so they typically ignore them

Is that true? I had tool use working with GPT-4 in 2023, before function calling or structured outputs were even a thing. My tool instructions were only half a page though. Maybe the long prompts are causing problems?

They're talking about "skills", which are not the same thing as tools. Most models haven't been trained on the open skills spec, and therefore aren't tuned to invoke them reliably when the need arises.

There have been a few of these kanban [1] user interfaces over Claude Code or some other agent (open claw).

These are all proto orchestrators. No one has discovered/converged on what agent orchestration actually looks like.

Other projects include conductor.build, gas town (infamously), and others.

Another layer of abstraction in the infinite castle of computer science.

1: https://www.vibekanban.com/


Much like people have different takes on the right way to do the SDLC, we're gonna see a lot of takes on orchestration. None of them are really that meaningful until we unblock on validation, but to the extent that they also come with observability tools, they're at least useful there. I'm dubious of agent-specific siloed task management, though; that should be surfaced to entire teams.

These comments are comical. How hard is it to understand that human beings are experiential creatures? Our experiences matter: to survival, to culture, and to identity.

I mourn the horse masters and stable boys of a century past because of their craft. Years of intuition and experience.

Why do you watch a chess master play, or a live concert, or any form of human creation?

Should we automate parts of our profession? Yes.

Should we mourn the loss of our craft? Also yes.


Very well put.

Two things are true at the same time; this makes people uneasy.


In fact, contrary things are so very often both true at the same time, in different ways.

Figuring out how to live with the discomfort of non-absolutes, how to live in a world filled with dualisms, is IMO one of the primary and necessary maturities for surviving and thriving in this reality.


Yes. Unwillingness to accept contradicting data points is holding many people back. They have an unconscious need to always pick one or the other, and that puts them at a disadvantage. "I know what I think." But no, you do not.

This is so naive. You watch chess masters not because of the quality of the end result of the chess game but because it illustrates human ability; it's entertainment.

In SWE, quality is measured by the end result; there's no beauty in handcrafted, barely working code. So if machines reach the stage where they program better than 90% of human developers, then yes, the craft is done for. The profession is dead.

You can talk to an LLM to give it specs for what you want built, but that's basically a totally different profession, one a high-schooler could do, and salaries will soon follow suit.


A much more measured and pragmatic take.

Accountants will be studying the deals and cyclical valuations of AI companies in the same way we study bank runs and FDIC insurance today.


Think about how it feels when you toil on a hard problem, do your best work, release it to the world in the spirit of openness and sharing

Only to have a machine ingest, compress, and reiterate your work indefinitely without attribution.


> Only to have a machine ingest, compress, and reiterate your work indefinitely without attribution.

Everything I write, every thought I have, and the output of my every creative endeavor is profoundly shaped by the work of others that I have ingested, compressed, and iterated on over the course of my lifetime, yet I have the audacity to call it my own. Any meager success I may have, I attribute to naught but my own ingenuity.


I think this is a paradox that AIs have introduced.

We write open source software so everyone can learn from and benefit from it. But why do we not like it when models are trained on it, allowing normies to use it as well?

We want news, knowledge and information to be spread everywhere. So why don't we share all of our books, articles and blogs openly to any AI companies that want to use them? We should all want to have our work to be used by everyone more easily.

Personally, I don't have any fundamental refutation of this. There's a sense that it is wrong. I can somewhat articulate why it's wrong in terms of control and incentives. But those arguments are not well formed just yet.


I sit on this fence, and the only real "wrong" I can garner from it is that the economic model simply hasn't updated to reflect this shift.

I think the "wrongness" would go away, for me anyways, if we found a way that everyone was still remunerated in some way when this sharing occurred.

À la the vision of what crypto ought to have been: every single thing created and shared has a .000xx cent value, and as it makes its way around, being used/shared/iterated upon, it just sends those royalty checks to the "creator", always, forever.


Here you liken a human to a machine; it is telling of our times that we fail to see the difference.


But what is the difference, in this case?


Humans participate in the human struggle of existence, limited in our time, attention, energy, and a host of other constraining natures. We are limited and finite. To learn from those of greater talent than yourself is to dedicate all of those resources towards its acquisition. AI has no such limitations, and so does not participate in the same category as humans. A human struggles to learn the patterns of an artist, a machine does not. A human tires of learning, a machine does not. A human puts in effort, a machine does not.

It is the humanness that is the difference, that which exists outside the abstraction of the imposed categories. The human cannot compete with the machine which ingests ALL works and renders the patterns easily available. The artist toiled to perfect those patterns, and now is no longer granted the decency of reaping the fruits of their labors. Humans can give, the machine can only take.


This is a lot of purple prose that ultimately doesn’t say anything of substance.


And yet STILL the human being that did the work has value beyond any conceivable and ludicrous attempt by an other to measure it against.


Man is a tool-using animal. Language, writing, the printing press, photography, audio recording, computers, the Internet, and now AI -- each of these innovations has fundamentally changed how we create and preserve knowledge, art, and culture.

None of these changes came without losses. Writing eroded our capacity to remember, printing fueled decades of bloody religious conflict, photography harmed portrait artists, audio recordings wrecked social collaborative parlor music, computers fucked up our spelling & arithmetic, and the Internet... don't get me started.

Ultimately there is no going back, we will change and adapt to our new capabilities. Some will be harmed and some will be left behind. So turns the wheel of progress, I don't think anyone can stop it.


... What?

This isn't an answer, this is just incoherent, circular rambling.


Ironically I might have preferred slop to this sort of thing.


> Only to have a machine ingest, compress, and reiterate your work indefinitely without attribution.

Further facilitating millions, or even billions, of other people to discover new ideas and create new things. It's not hard to see the benefit of that.

I get that the purpose of IP laws is psychological, rather than moral. A culture where people feel as though they can personally benefit from their work is going to have higher technological and scientific output, which is certainly good, even if the means of producing that good are sort of artificial and selfish.

It's not hard to imagine, or maybe dream of, a world where the motivation for research and development is not just personal gain. But we have to work with the world we have, not the world we want, don't we...

Nobody will starve themselves, even if doing so will feed hundreds of others.


> the purpose of IP laws is psychological, rather than moral.

Neither. They are purely economic. You even acknowledge this when you call out personal benefit.

The stated intent is to facilitate creators realizing economic benefits from time spent creating. The reality is that large corporations end up rent seeking using our shared cultural artifacts. Both impacts are economic in nature.


Right, right.

The economic benefit is derived from a psychological effect: the expectation of personal gain.

The economy as a whole benefits from technological progress. The technological progress is fueled by each individual's expectation of personal gain. The personal gain is created via IP law.


If someone shows up to work based on the expectation that they will receive a paycheck at the end of the month would you also describe that as a psychological effect? I certainly wouldn't. That's an economic activity.

There's a psychological component regarding trust. Either that your employer would never try to cheat you or alternatively that your employer is the sort that might try to cheat you but won't thanks to our society's various systems. But the showing up to work itself is a simple exchange of time and skill for money.


In the case that the IP, and thus the financial benefit, is not owned by an individual, but owned by a large corporation, as with your example, what does the individual care whether or not the IP is infringed?

They don't. In this case democratizing the IP is more likely a social / economic benefit, not a harm.

We're talking about intellectual property rights, the benefits of which only go to the intellectual property holder.

Although how big a corporation has to be before we cross the line from social / economic harm to social / economic good is an interesting question.


You are conflating work and work product. There's a difference between being acknowledged and compensated for doing hard work, and receiving property rights over the work product.

If you are an employee, you get paid for building something (work), and the employer owns the thing that was built (work product). If you are self-employed, it's the other way around. You don't get paid for the work, but you own the work product. Employees generally don't work for free, and the self-employed generally don't give away their capital for free.

If you opt to "release it to the [world] in the spirit of openness and sharing," then you built capital for free and gave it away for free. If you didn't want others to capitalize on the capital, then why did you give it away?

If you want attribution, then either get paid for the work and add it to your resume, or exchange your work product for attribution (e.g., let people visit the Jryio Museum, build a Jryio brand, become known in your community as a creative leader, etc.). If you give it away for free, then your expectations should include the possibility that people will take it for free.


The entire concept of IP is the true farce. The real tragedy is how brainwashed our society has become into not just accepting it, but outright supporting it. I’ve been blown away to see how the advent of AI has transformed so many into IP and copyright law cheerleaders.


I would be fine with it personally. But I'm a mathematician not an artist.


Are those feelings serving you?

What consideration do you choose to afford to those feelings?


Remember the internet before algorithmic ads and cross site tracking?

We will remember this moment of LLM usage in the years to come, as we are irreparably spun by advertisers in our most intimate and private 1:1 conversations with these AIs.



It's so important to remember that, unlike code, which can be reverted, most file system and application operations cannot be.

There's no sandboxing, snapshotting, revision history, rollbacks, or anything.

I expect to see many stories from parents, non-technical colleagues, and students who irreparably ruined their computer.

Edit: most comments are focused on pointing out that version control & file system snapshot exists: that's wonderful, but Claude Cowork does not use it.

For those of us who have built real systems at low levels, I think the alarm bells go off on seeing a tool like this, particularly one targeted at non-technical users.


Frequency vs. convenience will determine how big of a deal this is in practice.

Cars have plenty of horror stories associated with them, but convenience keeps most people happily driving everyday without a second thought.

Google can quarantine your life with an account ban, but plenty of people still use gmail for everything despite the stories.

So even if Claude cowork can go off the rails and turn your digital life upside down, as long as the stories are just online or "friend of a friend of a friend", people won't care much.


Considering that the ubiquity and necessity of driving cars is overwhelmingly a result of intentional policy choices, irrespective of what people wanted or what was good for the public interest... actually, that's quite a decent analogy for integrated LLM assistants.

People will use AI because other options keep getting worse and because it keeps getting harder to avoid using it. I don't think it's fair to characterize that as convenience though, personally. Like with cars, many people will be well aware of the negative externalities, the risk of harm to themselves, and the lack of personal agency caused by this tool and still use it because avoiding it will become costly to their everyday life.

I think of convenience as something that is a "bonus" on top of normal life typically. Something that becomes mandatory to avoid being left out of society no longer counts.


What has gotten worse without AI? I don't think writing or coding is inherently harder. Google search may be worse but I've heard Kagi is still pretty great. Apple Intelligence feels like it's easy to get rid of on their platforms, for better and worse. If you're using Windows that might get annoying, personally I just use LTSC.


The skills of writing and coding atrophy when replaced by generative AI. The more we use AI to do thinking in some domain, the less we will be able to do that thinking ourselves. It's not a perfect analogy for car infrastructure.

Yeah Kagi is good, but the web is increasingly dogshit, so if you're searching in a space where you don't already have trusted domains for high quality results, you may just end up being unable to find anything reliable even with a good engine.


People love their cars, what are you talking about


I am a car enthusiast so don't think I'm off the deep end here, but I would definitely argue that people love their cars as a tool to work in the society we built with cars in mind. Most people aren't car enthusiasts, they're just driving to get to work, and if they could get to work for a $1 fare in 20 minutes on a clean, safe train they would probably do that instead.


I am this person. I love the convenience of a car. I hate car ownership.


Right and I assume we will have BO police at the gates to these trains?

People love their cars not because they’re enthusiasts


I guess that's one reason to not use public transport, but it seems many cities overcome that pretty readily.

Perhaps it depends on how smelly your society is.

Anyway I think we are in agreement, given a good system and a good society trains become quite attractive, otherwise cars are more preferred.


That seems like a somewhat ridiculous objection. Should everybody start owning their own private planes to avoid people with BO at airplanes?


No, but if they could, they would. That’s what’s being debated here. Whether people would, not should.


Of course they wouldn't; owning and operating a plane is -incredibly- inconvenient. That's what we are discussing: tradeoffs of convenience and discomfort. You can't just completely ignore one reality to criticise the other (admitting some hypocrisy here, since that ideal train system mentioned earlier only exists in a few cities).


Is this some culture or region or climate related thing? I’ve never heard of BO brought up as a reason to avoid public transport or flying commercial in northern parts of Europe. Nor have I experienced any olfactory disturbance, apart from the occasional young man or woman going a tad overboard with perfume on the weekends.


Should we restructure society so that having a private airplane is easier and cheaper, but if you don't have one you'll have serious trouble in daily life?


No


I love my car. And yet I really want to see all the cars eradicated from existence. At least from the public space.


No, people hate being trapped without a car in an environment built exclusively to serve cars. Our love of cars is largely just downstream of negative emotions like FOMO or indignation caused by the inability to imagine traveling by any other mode (because in most cases that's not even remotely feasible anymore).


I mean, we were there before this Cowork feature started exposing more users to the slot machine:

"Claude CLI deleted my home directory and wiped my Mac" https://news.ycombinator.com/item?id=46268222

"Vibe coding service Replit deleted production database, faked data, told fibs" https://news.ycombinator.com/item?id=44632575

"Google Antigravity just deleted the contents of whole drive" https://news.ycombinator.com/item?id=46103532


That's what I am saying though. Anecdotes are the wrong thing to focus on, because if we just focused on anecdotes, we would all never leave our beds. People's choices are generally based on their personal experience, not really anecdotes online (although those can be totally crippling if you give in).

Car crashes are incredibly common and likewise automotive deaths. But our personal experience keeps us driving everyday, regardless of the stories.


We as a society put a whole lot of effort into making cars safer. Seatbelts, ABS, airbags.. Claude Code should have airbags too!


Airbags, yes. But you can't just make it provably impossible for a car to crash into something and hurt/kill its occupants, other than not building it in the first place. Same with LLMs - you can't secure them like regular programs without destroying any utility they provide, because their power comes from the very thing that also makes them vulnerable.


I see you've given up. I haven't. LLM inside deterministic guardrails is a pretty good combo.


And yet in the US 40,000 people still die on average every year. Per-capita it's definitely improving, but it's still way worse than it could/should be.


Yes, and a photo you put on your physical desktop will fade over time. Computers aren't like that, or at least we benefit greatly from them not being like that. If you tell your firewall to block traffic to port 80, you expect all such traffic to be blocked, not just the traffic that arrives in the moments when it wasn't distracted.


> So even if Claude cowork can go off the rails and turn your digital life upside down, as long as the stories are just online or "friend of a friend of a friend", people won't care much.

This is anecdotal, but "people" care quite a lot in the energy sector. I've helped build our own AI agent pool and roll it out to our employees. It's basically a LibreChat with our in-house models, where people can easily set up base instruction sets and name their AIs funny things, but it is otherwise similar to using Claude or ChatGPT in a browser.

I'm not sure we're ever going to allow AIs access to filesystems; we barely allow people access to their own files as it is. Nothing that has happened in the past year has moved our C-level's view of the security issues with AI in any direction other than more restrictive. I imagine any business that cares about security (or is forced to care by legislation) isn't looking at this the way they do cars. You'd have to be very unlucky (or lucky?) to shut down the entire power grid of Europe with a car. You could basically do it with a well-placed AI attack.

Ironically, you could just hack the physical components, which probably haven't had their firmware updated for 20 years. If you even need to hack them, because a lot of them frankly have built-in backdoors. That's a different story that nobody at the C level cares about, though.


The first version is for macOS, which has snapshots [1] and file versioning [2] built-in.

[1]: https://eclecticlight.co/2024/04/08/apfs-snapshots/

[2]: https://eclecticlight.co/2021/09/04/explainer-the-macos-vers...


Are average users likely to be using these features? Most devs at my company don’t even have Time Machine backups


Snapshots are local Time Machine backups, kept for a few hours, which don't need external hard drives and are configured by default, I think.


RSX-11M for the PDP-11 had filesystem versioning back in the early 1980s, if not earlier.


And if they were releasing Cowork for RSX-11M, that might be relevant.


Once upon a time, in the magical days of Windows 7, we had the Volume Shadow Copy Service (aka "Previous Versions") available by default, and it was so nice. I'm not using Windows anymore, and at least part of the reason is that it's just objectively less feature complete than it used to be 15 years ago.


Yeah. I also like Windows, but MS has done a wonderful job of destroying the OS with newer releases.

I have never had to tweak an OS as much as Win 11.


Somewhat related is a concern I have in general as things get more "agentic", tied to the prompt injection concerns: without something like legally bullet-proof contracts, aren't we moving into territory where we are basically "employing" what could be "spies" at every level? From the personal (i.e., AI company staff having access to your personal data/prompts/chats), to business/corporate espionage, to domestic and international state-level actors who would love to know what you are working on, what you are thinking and chatting about, and maybe what mental health challenges you are working through with an AI chat therapist.

I am not even certain this issue can be solved, since you are sending your prompts and activity to "someone else's computer", but I suspect that if it is overlooked or hand-waved away as insignificant, there will come a time when open, local models become useful enough to let most people jettison cloud AI providers.

I don't know about everyone else, but I am not at all confident in allowing access and sending my data to some AI company that may just do a rug pull once they have an actual virtual version of your mind in a kind of AI replication.

I'll just leave it at that point and not even go into the ramifications, e.g., "cybercrimes" being committed by "you", which is really the AI impersonator built from everything you have told it and provided access to.


Q: What would prevent them from using git-style version control under the hood? The user doesn't have to understand git; Claude can use it for its own purposes.


Didn't actually check out the app, but some aspects of application state are hard to serialize, and some operations are not reversible by the application, e.g. sending an email. It doesn't seem trivial to accomplish this for all apps.

So maybe for some apps, but "all" is a difficult thing.


For irreversible stuff I like feeding messages into queues. That keeps the semantics clear, and makes the bounds of the reversibility explicit.
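A toy sketch of that pattern (all names here are hypothetical): side effects like sending email are recorded as inert messages, and nothing actually happens until an explicit flush. Before the flush, "undo" is just dropping messages from the queue.

```python
from collections import deque

# Irreversible side effects go into an outbox instead of executing
# immediately; flush() is the explicit (human-approvable) boundary.

outbox = deque()

def queue_email(to: str, subject: str) -> None:
    outbox.append({"kind": "email", "to": to, "subject": subject})

def flush(send) -> int:
    """Perform every queued side effect via `send`; returns the count."""
    sent = 0
    while outbox:
        send(outbox.popleft())
        sent += 1
    return sent

queue_email("boss@example.com", "Q3 report")
queue_email("team@example.com", "Meeting moved")
outbox.pop()                     # reversible: the second email was never sent
print(flush(lambda msg: None))   # -> 1
```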


Tool calls are the boundary (or at least one of them).


You can’t easily snapshot the current state of an OS and restore to that state like with git.


Maybe not for very broad definitions of OS state, but for specific files/folders/filesystems, this is trivial with FS-level snapshots and copy-on-write.
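To illustrate that contract, here is a minimal sketch of per-folder snapshot and rollback, emulated with plain directory copies. A real implementation would use APFS/ZFS/btrfs snapshots or git for efficiency; the class and names here are made up for illustration.

```python
import pathlib
import shutil
import tempfile

# Snapshot the workspace before each agent turn; restore on demand.
# Full copies are wasteful but make the snapshot/rollback contract clear.

class Workspace:
    def __init__(self, root: pathlib.Path):
        self.root = root
        self.snapshots: list[pathlib.Path] = []

    def snapshot(self) -> int:
        dest = pathlib.Path(tempfile.mkdtemp(prefix="snap-"))
        shutil.copytree(self.root, dest, dirs_exist_ok=True)
        self.snapshots.append(dest)
        return len(self.snapshots) - 1

    def rollback(self, snap_id: int) -> None:
        shutil.rmtree(self.root)
        shutil.copytree(self.snapshots[snap_id], self.root)

root = pathlib.Path(tempfile.mkdtemp(prefix="ws-"))
(root / "notes.txt").write_text("original")
ws = Workspace(root)
snap = ws.snapshot()

(root / "notes.txt").write_text("clobbered by the agent")
ws.rollback(snap)
print((root / "notes.txt").read_text())  # -> original
```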


Let's assume that you can. For disaster recovery, this is probably acceptable, but it's unacceptable for basically any other purpose. Reverting the whole state of the machine because the AI agent (a single tenant in what is effectively a multi-tenant system) did something incorrect is unacceptable. Managing undo/redo in a multiplayer environment is horrific.


I wonder if in the long run this will lead to the ascent of NixOS. They seem perfect for each other: if you have git and/or a snapshotting filesystem, together with the entire system state being downstream of your .nix file, then go ahead and let the LLM make changes willy-nilly, you can always roll back to a known good version.

NixOS still isn't ready for this world, but if it becomes the natural counterpart to LLM OS tooling, maybe that will speed up development.


Well, there is CRIU on Linux, for what it's worth, which can at least snapshot the state of an application, and I suppose something similar must be available for filesystems as well.

Also, one can simply run a virtual machine that can do that, but then the issue becomes how apps from outside connect to the VM inside.


Filesystems like zfs, btrfs and bcachefs have snapshot creation and rollbacks as features.


At least on macOS, an OS snapshot is a thing [1]; I suspect Cowork will mostly run in a sandbox, which Claude Code does now.

[1]: https://www.cleverfiles.com/help/apfs-snapshots.html


All major OSes support snapshotting, and it's not a panacea on any of them.


Ok, you can "easily", but how quickly can you revert to a snapshot? I would guess creating a snapshot for each turn with an LLM becomes too burdensome to let you iterate quickly.


For the vast majority, this won't be an issue.

This is essentially a UI on top of Claude Code, which supports running in a sandbox on macOS.


Sure you can. Filesystem snapshotting is available on all OSes now.


Git only works well for text files. Everything else is a binary blob, which, among other things, leads to merge conflicts, storage explosion, and slow git operations.


Indeed there are, and this is not rocket science. Word documents offer a change history, deleted files go to the trash first, there are undo functions, Time Machine on macOS and similar features on Windows, even sandbox features.


Trash is a shell feature. Unless a program explicitly "moves to trash", deleting is final. Same for Word documents.

So, no, there is no undo in general. There could be under certain circumstances for certain things.


I mean, I'm pretty sure it would be trivial to tell it to move files to the trash instead of deleting them. Honestly, I thought that on Windows and Mac, the default is to move files to the trash unless you explicitly say to permanently delete them.


Yes, it is (relatively, [1]) trivial. However, even though it is the shell default (Finder, Windows Explorer, whatever Linux file manager), it is not the operating system default. If you call unlink or DeleteFile or use a utility that does (like rm), the file isn’t going to trash.

[1]: https://github.com/arsenetar/send2trash (random find, not mine)
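The soft-delete idea is a few lines in any language. Here's a hedged Python sketch of the concept only; the send2trash library linked above does this properly per-OS (correct trash location, metadata for restore), which this deliberately does not:

```python
import pathlib
import shutil
import tempfile
import time

# "Soft delete": instead of unlink(), move the file into a trash
# directory so the operation stays reversible.

def soft_delete(path: pathlib.Path, trash: pathlib.Path) -> pathlib.Path:
    trash.mkdir(parents=True, exist_ok=True)
    dest = trash / f"{int(time.time())}-{path.name}"  # avoid name collisions
    shutil.move(str(path), dest)  # rename, or copy+delete across filesystems
    return dest

tmp = pathlib.Path(tempfile.mkdtemp())
doc = tmp / "thesis.txt"
doc.write_text("years of work")

in_trash = soft_delete(doc, tmp / ".trash")
print(doc.exists(), in_trash.read_text())  # -> False years of work
```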


Because it is the default. Heck, it is the default for most DEs and many programs on Linux, too.


Everything on a ZFS/BTRFS partition with snapshots every minute/hour/day? I suppose, depending on what level of access the AI has, it could wipe that too, but it seems like there's probably a way to make this work.


I guess it depends on what its goals at the time are. And access controls.

May just trash some extra files due to a fuzzy prompt, may go full psychotic and decide to self destruct while looping "I've been a bad Claude" and intentionally delete everything or the partitions to "limit the damage".

Wacky fun


The topic of the discussion is something that parents, grandmas, and non technical colleagues would realistically be able to use.


A "revert filesystem state to x time" button doesn't seem that hard to use. I'm imagining this as a potential near-term future product implementation, not a home-brewed DIY solution.


Reverting to a filesystem state at a point in time is VERY complicated to use if you are reverting the whole filesystem. A granular per-file revert should not be that complicated, but it needs to be surfaced easily in the UI, and people need to know about it (in the case of Cowork I would expect the agent to use it as part of its job, so it would be transparent to the user).


Shell? You meant Finder I think?


GUI shell (as opposed to a text-based shell).


State isn't always local, either.


>>I expect to see many stories from parents, non-technical colleagues, and students who irreparably ruined their computer.

I do believe the approach Apple is taking is the right way when it comes to user-facing AI.

You need to reduce AI to an appliance that does one, or at most a few, things perfectly right, without many controls that have unexpected consequences.

The real fun is robots. I'm not sure anyone is hurrying on that end.

>>Edit: most comments are focused on pointing out that version control & file system snapshot exists: that's wonderful, but Claude Cowork does not use it.

Also, in my experience this creates all kinds of other issues. Going back up a tree creates all kinds of confusion and leaves the system inconsistent with regard to whatever else you are doing.

You are right in your analysis that many people are going to end up with totally broken systems.


In theory the risk is immense and incalculable, but in practice I've never found any real danger. I've run wide open powershell with an OAI agent and just walked away for a few hours. It's a bit of a rush at first but then you realize it's never going to do anything crazy.

The base model itself is biased away from actions that would lead to large scale destruction. Compound over time and you probably never get anywhere too scary.


There's no reason why Claude can't use git to manage the folders that it controls.


Most of these files are binary and are not a good fit for git’s graph based diff tracker…you’re basically ending up with a new full sized binary for every file version. It works from a version perspective, but is very inefficient and not what git was built for.


Git isn't good with big files.

I wanted to comment more, but this new tool is Mac only for now, so there isn't much of a point.


Too hard for AI to make cross-platform tools.


git with lfs

There is also xet by huggingface which tries to make git work better with big files
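A minimal LFS setup, assuming git-lfs is installed; the tracked patterns here are just examples:

```shell
# Track large binary formats with Git LFS instead of storing full
# copies in the regular object store.
git lfs install                    # set up the LFS filters once per machine
git lfs track "*.psd" "*.sketch"   # example patterns; pick your binary formats
git add .gitattributes
git commit -m "Track design binaries via LFS"
```

From then on, matching files are stored as small pointer files in git history while the content lives in LFS storage.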


TimeMachine has never been so important.


Arq does it better.


TimeMachine is worthless trash compared to restic


Please elaborate


It works on Linux, Windows, macOS, and BSD; it's not locked to Apple's ecosystem. You can back up directly to local storage, SFTP, S3, Backblaze B2, Azure, Google Cloud, and more, while Time Machine is largely limited to local drives or network shares.

Restic deduplicates at the chunk level across all snapshots, often achieving better space efficiency than Time Machine's hardlink-based approach. All data is encrypted client-side before leaving your machine; Time Machine encryption is optional. Restic supports append-only mode for protection against ransomware or accidental deletion, and it has a built-in `check` command to verify repository integrity.
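For anyone who wants to try it, a minimal restic session looks roughly like this (the repository path is illustrative, and each command prompts for the repository password):

```shell
# Create an encrypted repository, back up a directory, and verify it.
restic init --repo /mnt/backup/restic
restic --repo /mnt/backup/restic backup ~/Documents
restic --repo /mnt/backup/restic snapshots    # list snapshots
restic --repo /mnt/backup/restic check        # verify repository integrity
# Restore the most recent snapshot to a scratch directory.
restic --repo /mnt/backup/restic restore latest --target /tmp/restore
```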

Time Machine has a reputation for silent failures and corruption issues that have frustrated users for years. Network backups (to NAS devices) use sparse bundle disk images that are notoriously fragile. A dropped connection mid-backup can corrupt the entire backup history, not just the current snapshot. https://www.google.com/search?q=time+machine+corruption+spar...

Time Machine sometimes decides a backup is corrupted and demands you start fresh, losing all history. Backups can stop working without obvious notification, leaving users thinking they're protected when they're not. https://www.reddit.com/r/synology/comments/11cod08/apple_tim...

The shift from HFS+ to APFS introduced new bugs, and local snapshots sometimes behave unpredictably. https://www.google.com/search?q=time+machine+restore+problem...

The backup metadata database can grow unwieldy and slow, eventually causing failures.

https://www.reddit.com/r/MacOS/comments/1cjebor/why_is_time_...

https://www.reddit.com/r/MacOS/comments/w7mkk9/time_machine_...

https://www.reddit.com/r/MacOS/comments/1du5nc6/time_machine...

https://www.reddit.com/r/osx/comments/omk7z7/is_a_time_machi...

https://www.reddit.com/r/mac/comments/ydfman/time_machine_ba...

https://www.reddit.com/r/MacOS/comments/1pfmiww/time_machine...

https://www.reddit.com/r/osx/comments/lci6z0/time_machine_ex...

Time Machine is just garbage for ignorant people.


Almost all of my backup setup is built around restic, including monitoring of the backups (when they fail and when they do not run often enough).

It is a very solid setup, with 3 independent backups: local, nearby and far away.

Now, it took an awful lot of time to set up (including writing the wrapper to account for everything). This is advanced-IT-level stuff.

So Time Machine is not for ignorant people, but something everyone can use. (I never used it, so I have no idea if it's good, but it at least has to work.)


One works, one loses your data. Oh well.

Guess there's a lot of money to be made wrapping it with a paid GUI


I am not sure what you are after, to be honest.

Restic is fantastic. And restic is complicated for someone who is not technical.

So there is a need for something that works, even if not in an optimal way, that saves people's data.

Are you saying that Time Machine does not back up data correctly? But then there are other services that do.

Restic is not for the everyday Joe.

And to your point about "ignorant people": it is as if I said you are an ignorant person because you do not make your own medicine, or produce your own electricity, or paint your own paintings, or build your own car. For a biochemist specializing in pharma (or Walt in Breaking Bad :)) you are an ignorant person unable to do the basic stuff: synthesizing paracetamol. It is a piece of cake.


But I just want to backup my important files to the cloud


IIUC, this is a preview for Claude Max subscribers - I'm not sure we'll find many teachers or students there (unless institutions are offering Max-level enterprise/team subscriptions to such groups). I speculate that most of those who will bother to try this out will be software engineering people. And perhaps they will strengthen this after enough feedback and use cases?


If this is like Claude Code for everyone else, shouldn’t it be snapshotting anything it changes so that you can go back to the previous state?


Yeah, seems like this could be achieved by using https://github.com/streamich/memfs/blob/master/docs/snapshot...

Weird they don't use it - might backfire hard


Pretty much every company I work with uses the desktop sync tools for OneDrive/GoogleDrive/Dropbox etc.

It would be madness to work completely offline these days, and all of these systems have version history and document recovery built in.


I hope we see further exploration into immutable/versioned filesystems and databases where we can really let these things go nuts, commit the parts we want to keep, and revert the rest for the next iteration.


I would never use what is proposed by OP. But, in any case, Linux on ZFS that is automatically snapshotted every minute might be (part of) a solution to this dilemma.


You make a good point. I imagine that they will eventually add Perforce-style versioning to the product and this issue will be solved.


So the future is NixOS for non-technical people?


Yes, and I think we're already seeing that in the general trend of recent linux work toward atomic updates. [bootc](https://developers.redhat.com/articles/2024/09/24/bootc-gett...) based images are getting a ton of traction. [universal blue](https://universal-blue.org/) is probably a better brochure example of how bootc can make systems more resilient without needing to move to declarative nix for the entire system like you do in NixOS. Every "upgrade" is a container deployment, and you can roll back or forward to new images at any time. Parts of the filesystem aren't writeable (which pisses people off who don't understand the benefit) but the advantages for security (isolating more stuff to user space by necessity) and stability (wedged upgrades are almost always recoverable) are totally worth it.

On the user side, I could easily see [systemd-homed](https://fedoramagazine.org/unlocking-the-future-of-user-mana...) evolving into a system that allows snapshotting/roll forward/roll back on encrypted backups of your home dir that can be mounted using systemd-homed to interface with the system for UID/GID etc.

These are just two projects that I happen to be interested in at the moment - there's a pretty big groundswell in Linux atm toward a model that resembles (and honestly even exceeds) what NixOS does in terms of recoverability on upgrade.
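As a sketch, day-to-day interaction with an image-based system like the ones described above is a handful of commands (assuming a bootc-based install):

```shell
# Inspect and manage an image-based (bootc) system.
bootc status       # show the booted, staged, and rollback images
bootc upgrade      # stage the next image; it is applied on reboot
bootc rollback     # queue the previous image for the next boot
```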


Or rather ZFS/BTRFS/bcachefs. Before doing anything big I make a snapshot; it saved me recently when a huge Immich import created a mess: `zfs rollback /home/me@2026-01-12`... and it's like nothing ever happened.
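For reference, the dataset-level version of that flow looks like this. Note that `zfs rollback` addresses datasets (e.g. `rpool/home/me`) rather than mount paths; all names here are illustrative.

```shell
# Take a named snapshot before a risky operation...
zfs snapshot rpool/home/me@before-import
# ...let the import (or the agent) run, then if it made a mess:
zfs rollback -r rpool/home/me@before-import   # -r also destroys any newer snapshots
```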


A human can also accidentally delete or mess up some files. The question is whether Claude Cowork is more prone to it.


There were a couple of posts here on Hacker News praising agents because, it seems, they are really good at being a sysadmin. You don't need to be a non-technical user to be utterly fucked by AI.


Theoretically, the power drill you're using can spontaneously explode, too. It's very unlikely, but possible - and then it's much more likely you'll hurt yourself or destroy your work if you aren't being careful and didn't set your work environment right.

The key for using AI for sysadmin is the same as with operating a power drill: pay at least minimum attention, and arrange things so in the event of a problem, you can easily recover from the damage.


If a power tool blows up regularly, they get sued or there is a recall.

We have far more serious rules at play for harm when it comes to physical goods which we have experience with, than generative tools.

There is no reason generative tools should not be governed by similar rules.

I suspect people at anthropic would agree with this, because it would also ensure incentives are similar for all major GenAi purveyors.


It’s easy for people to understand that if they point the powerdrill into a wall the failure modes might include drilling through a pipe or a wire, or that the powerdrill should not be used for food preparation or dentistry.

People, in general, have no such physical instincts for how using computer programs can go wrong.


Which is in part why rejection of anthropomorphic metaphors is a mistake this time. Treating LLM agents as gullible but extremely efficient idiot savants on a chip, gives pretty good intuition for the failure modes.


It's not a big problem to make snapshots with LVM, ZFS and the like. I do it automatically on every update.
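A rough sketch of that flow with LVM (volume-group and LV names are illustrative, and the snapshot needs enough reserved space to absorb the changes):

```shell
# Take a copy-on-write snapshot of the root LV before an update.
lvcreate --snapshot --size 5G --name pre-update /dev/vg0/root
# ...run the update; if it goes wrong, merge the snapshot back:
lvconvert --merge vg0/pre-update   # the merge takes effect when the LV is next activated (e.g. on reboot)
```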


What percentage of non-IT professionals know what zfs/lvm are let alone how to use them to make snapshots?


I assumed we are talking about IT professionals using tools like Claude here? But even for normal people it's not really hard, if they manage to leave behind the cage in their head that is MS Windows.

My father is 77 now and only started using a computer above age 60. He never touched Windows thanks to me, and has absolutely no problems using (and at this point administrating) it all by himself.


This tool is aimed towards consumers, not devs


This doesn't answer the question, like, at all.


Well, then don't.


I'm not even sure if this is a sarcastic dropbox-style comment at this point.


Even at 1-hour ticks, am I right in assuming that these are moving far too quickly?


I thought that too but they're surprisingly fast. I tracked a dot across the Atlantic (US East Coast to UK) and it took around 4-5 days, which is about right.

There's a very nice effect where if you zoom in, time slows.


Electricity, water, roads, bridges are all public infrastructure. Why should payments be any different?

Without summoning the decentralized block-based "currency" crowd, I would like to point out that in their entire lifespan such technologies have never received widespread institutional or legislative buy-in like this EU initiative to build a digital Euro.

While USDC and BTC have been used as de facto currencies in some countries, there is truly no equivalent adoption in any meaningfully mature economic zone such as the EU/NA/CN.

I welcome sovereign digital payments initiatives.


I would argue that physical currency transfer is already public infrastructure, in the terms of coins and notes that are physically given.

So it only makes sense that digital currency is too.


Absolutely agree. The idea that private corporations manage our digital payments is crazy if you ever imagine that happening to physical payments. Imagine if Bank of America got to decide whether the dollar bill you're trying to use is too damaged. That should be between me, the recipient, and a public body.


> Electricity, water, roads, bridges are all public infrastructure. Why should payments be any different.

I think you fail to understand why these are public infrastructure.

Why should water be public infra but food is not?


> Why should water be public infra but food is not?

The main reason why infrastructure of any kind (water, sewage, etc.) is public infrastructure - even in largely privatized economies - is that infrastructure is essentially a natural monopoly. Food, on the other hand, isn't, and it can largely be traded as a commodity (which is, at least in my opinion, a major reason why our food system is so broken).


The food system in Europe works pretty well.


> The food system in Europe works pretty well.

"Suicide Among Farmers in France: Occupational Factors and Recent Trends":

* https://pubmed.ncbi.nlm.nih.gov/27409004/

"Under Pressure: Suicides Among Farmers in Austria and Germany":

* https://www.journalismfund.eu/suicides-farmers-austria-and-g...

"Mental Health Risks to Farmers in the UK":

* https://committees.parliament.uk/writtenevidence/43055/pdf/

And Europe's policies affecting other places, "Stop the Dumping! How EU agricultural subsidies are damaging livelihoods in the developing world":

* https://policy-practice.oxfam.org/resources/stop-the-dumping...


Define "works pretty well".

If your only expectation is that it provides enough calories for your population, you are absolutely right. If you have a look at the bigger picture, the issues are plentiful. On the producer side, farmers are operating at relatively thin margins which encourages consolidation and unsustainable farming practices. This in turn leads to extensive soil degradation and fertilizer use, which is unsustainable - both financially and ecologically.

On the consumer side, people are becoming more overweight (which cannot be exclusively attributed to the food system, but diet of course plays a significant role). Food is becoming more expensive and lower quality. Food waste also remains a major problem.

Many issues are shared between the US and the European food system, although they may not be as extreme as in the US. However, it does not feel like there is actual political will to steer the ship in a different direction.


>Why should water be public infra but food is not?

The water pipes are public infra. They pump it into your house.

The more people that use the same system, the cheaper it can get. Laying competing systems of water pipes to houses to let companies compete would simply drive up the cost for everyone.

Same with electricity, gas lines, sewage...

Water itself is not publicly owned. You can buy water in the store like food.


We have the same with fiber to the home in France. When a company lays the fiber, they have to allow others to use it too (they are paid for that service, but the amount is regulated).

When fiber arrives, there is a 3 months waiting period before any company can provide the service. This is intended to give enough time for everyone to prepare an offer.

When I got mine, I called the regulation authority (Arcep) who gave me the date and hour my line opens. On that day I called my preferred provider who told me "it is coming! we will call you!" and then the one who laid the cable who told me "we can come in 2 days". I chose the latter.

A few years later the preferred one finally made its way to my area...


You should share your answer instead of posing a rhetorical question because the answer isn’t obvious at all and ranges across a large variety of options including food should be infrastructure.


Indeed.

Just to add to the conversation, personally I back and forth a bit on which things should be public or private owned, farms especially.

My general reasoning is that when a "best" solution is known, monopolies tend to form; monopolies tend to extract as much economic rent as possible; I'd rather economic rents be extracted by a government for the purpose of national benefit rather than personal enrichment.

Conversely, when there is no known "best" solution, a free market allows a range of entrepreneurs the opportunity to try their ideas in the hope of cornering the market.

I think water is probably in the first, but with a caveat that this is a high-level thing and it's fine to have hundreds of different companies trying to figure out the best ways to make and install municipal pipes, that work is contracted to by local governments.

Food? I dunno, weather is still a massive dice roll for farming output. Perhaps nationalisation would work, perhaps subsidies, price regulations for inputs and floors for outputs, are the least wrong way to do this. But I'm extremely uncertain.


I think most governments do recognise food as both "strategic infrastructure" and absolutely vital to their re-election chances.

European governments govern food supply with cash subsidies to farmers, land use rules helping farmers, special immigration rules for agricultural labourers, special extra-low inheritance taxes for farmers, special subsidies for things like having hedges between fields, special low-tax fuel for farm equipment, different tax rates for different foodstuffs (bread vs cake vs wine vs beer), provision of super cheap water for irrigation, minimum price guarantees with governments buying up over-produced products, special border controls for fresh goods that can't be held up, special border controls for live animals, entire government departments for things like monitoring and controlling the spread of animal disease, rules on precisely what chemicals can be used, rules about things like chemical run-off into waterways, rules about animal welfare, rules about slaughterhouse conditions, rules about package materials, package sizes, package labels, rules about how much pork must be in a sausage for it to be called a pork sausage, rules about who can call their product 'champagne' or 'parmesan'.

If the payments industry was regulated like the food industry, it would be more regulated, not less.


Food from multiple suppliers is easy to put on trucks going to multiple shops using the same roads. Roads are a workable shared-access medium. Water really is not, unless you deliver it in tanker trucks using roads.

Electricity itself is fungible in the moment, so electricity can use the shared-access medium of the grid. But similarly, it makes little sense to have multiple roads in densely built areas. So both roads and water pipes end up as natural monopolies in built-up areas.


Food should be public infrastructure, and the subsidies every country gives to farmers are a good indication of that.

Not all food, but produce, bread, milk, infant foods, flour, rice and other cereals being sort of price-controlled the way water/electricity is in most places would benefit mostly everyone.


Medicine should count too. And a lot of other things that we often realize to be essential only in a crisis situation. But I'm sure GP didn't intend their list to be exclusive.


...because food is not infrastructure.


Neither is water, they obviously meant food production and distribution infrastructure


Electricity, water, roads and bridges are natural monopolies.

Payments aren't and there is no reason for the State to monopolize it. Especially given the EU poor track record on fostering innovation. The EU bureaucrats will "regulate" it to the bone, increasing compliance costs for processors and mass surveillance. We'll be back to the start.


In an age where card payments are ubiquitous, being suddenly cut off from VISA/MasterCard networks can severely disrupt a country's economy. Especially if it heavily relies on tourism.


The EU prevents sellers from surcharging depending on the type of card used (PSD2 Directive (EU) 2015/2366, Art. 62(4)). It in effect opened the door to a Visa/Mastercard duopoly, as no local competitor could emerge and compete on price.


When it comes to tourism, this problem will always happen if the tourists are coming from the side that's cutting off the other. Without an interface between European and American payment systems, Americans won't be able to pay in Europe.


It's way more severe than that, since Americans are not the only ones relying on VISA/MasterCard for payments abroad. The presence of other payment systems would make it easier for any non-USA country to still make payments.


Some corrections:

- The EU is not a state, it's a governance body composed of representatives from individual member states. Every state is responsible for implementing their take on the directive.

- "EU poor track record on fostering innovation" - many things you use online have been researched and conceptualised in the EU. Even if they go elsewhere for funding, don't mistake where "innovation" happens and where it gets packaged by VC money for sale and enschitification.

- compliance costs: I think that's only expensive for companies who intend to sell or otherwise do something shady with user data. Remember, not collecting data makes you instantly compliant at zero cost.


> "The EU is not a state"

It's a supranational institution that dictates laws to States, with a budget and coercive powers against States. It just lacks an army of its own. Whether it's a proper state or not doesn't matter.

> conceptualised in the EU

No, in Europe. No EU bureaucrat conceptualizes things. EU ≠ Europe.

> I think that's only expensive for companies who intend to to sell or otherwise do something shady with user data. Remember, not collecting data makes you instantly compliant with zero cost.

A lot of businesses need consumer data to improve their offerings and be competitive against the big boys. And GDPR lawyers ain't cheap, so even if you keep the minimal amount of data, you have to fork out €10k+ to review everything, etc. The requirements for AI are even worse. All of those compliance jobs are unproductive and a burden on EU companies.

Same for the tax compliance obligations, which are ever increasing and now require you to record and document everything, especially if you do cross-border operations, as you are considered guilty by default if you don't.

We could also talk about the requirements to audit your sourcing chain for "human rights abuses", which ends in compliance hell for industrial companies with 2k+ suppliers, while of course Chinese companies don't have this problem.

The EU doesn't do any cost/benefit analysis on this, and just supposes that companies will magically find resources to deal with their new regulatory "innovations".


Ok, I think you're reading a lot of tabloids, as none of the above is actually true.

Regarding > Institution that dictates laws to States

again no, because the laws are created and voted on by elected representatives of said states, so the EU is not some third party that exists on the side; the EU is the countries within it. Member states create their own laws.


Please point out what is false. Does the EU, as an institution, produce technology? No. Europeans do, did it before the EU, and don't need the EU to do it. The WWW was created in Switzerland by an English scientist, not in an ugly "bureaucrat-grey" building in Brussels.

What you are saying here is false: EU regulations are directly applicable, and don't need to be transposed into local laws.

That is only the case for directives, which are required to be incorporated within 2 years. If a State doesn't comply, it faces an infringement procedure (Article 258 TFEU), sanctions and fines (Article 260 TFEU).

The EU itself is not a real democracy, given that at every step obscurity and backroom dealing are preferred to transparency. Chat Control is an excellent example: it was recommended by officials whose names were redacted. Even when the European Parliament said no in the past, they try to push it again - and those fools can't even pass a law preventing the EC from doing it!

More formally, the only directly elected institution (the Parliament) doesn't have the power of initiative, doesn't hold the pen during trilogue negotiations, is highly corruptible[0] given the proportional election, and can merely "accept" the head of the European Commission. The EC is the only permanent body and can arbitrate rotating presidencies, pressure parties, and screen MEPs to get what it wants.

The lack of judicial consequences regarding von der Leyen after the SMS affair is quite appalling for a commission that yells about "compliance" every day of the year.

[0] https://www.ftm.eu/articles/european-parliamentarians-involv...


> there is no reason for the State to monopolize it

except relying on a foreign actor for the economy is a security risk?


The alternative could be to foster competition to allow private local actors to emerge. Why do we need the State for this? The EU prevents sellers from surcharging depending on the type of credit card used, which led to the Visa/MC duopoly and prevented local alternatives from competing on cost.


How many non-government physical currencies are there in your country? I'm guessing if there is a legitimate government electronic currency most alternatives will fade away which will be the best rebuttal to your argument.

The public can hold government rules for surveillance in check, whereas they don't have that option for private payment systems.


Payments aren't, but issuing currency is. The State doesn't monopolize the payments here; the State creates a regulation and a common standard across the EU. It's the banks who would do the payments, not the state, and for that the banks would receive a standard fee, so you as a consumer always know how much you pay for the service.


There already is. Almost every country on earth has a single state-controlled currency.

Fiat currency is already a natural monopoly on payments.

Imagine if every time you wanted to pay for a train ride you had to put your money into an envelope, mail it to the United States, and wait for it to come back. That's VISA.


X, Y, Z are not public infrastructure, why should payments be any different?


Internet should be too, and yet here we are

