
I’m with the author. DNS4EU is being sold as “sovereignty,” but it’s just another centralized resolver sitting on the same foreign-owned infrastructure we already depend on. Shuffling everyone’s queries through one EU-branded endpoint doesn’t fix privacy or resilience—it just adds another middleman. If the EU really wants independence, it should invest in making local ISP resolvers secure and trustworthy instead of outsourcing the job yet again.


> Shuffling everyone’s queries through one EU-branded endpoint doesn’t fix privacy or resilience—it just adds another middleman.

The ENTIRE point is to be a publicly funded middleman that doesn’t collect or expose user data.

It’s not about “sovereignty” over DNS - it is primarily about preventing DNS providers from invading user privacy. Go and read the official documentation on ENISA’s site.

Maybe you should have at least a rudimentary understanding of its purpose before making uninformed judgements?


And it should only use the I- and K- root-servers, and fund those.


That'd be a really good idea

https://en.wikipedia.org/wiki/Root_name_server

To be honest, setting up a DNS4EU replica would just be a simple unbound setup.


Should be easy enough. But the problem is scale. I work at a privacy-conscious, EU-based startup and we used to use Quad9 for our infra. Shortly after we started using it, we began hitting scalability issues: when overall EU traffic was hot, our DNS query latency would also go up. To be able to keep up, we had to switch back to CF and Google. Hope there is a really good alternative one day.


Run your own resolver. It's not that hard.


Sure thing, but essentially it would be another thing we have to keep protected and performant. While building a startup, that’s still an item we’d rather leave for someone else to manage.


It's simple to set up a resolver, really. Basically just "apt install unbound" and you have a resolver ready.

The only thing you might have to adjust is the access control.

https://www.linuxbabe.com/ubuntu/set-up-unbound-dns-resolver...

      access-control: 10.0.0.0/8 allow
      access-control: 127.0.0.0/8 allow
      access-control: 2001:DB8::/64 allow
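
For context, those access-control lines go under the server: clause. A minimal LAN-only sketch might look something like this (the interfaces and netblocks are placeholders for whatever your network actually uses):

      server:
          interface: 0.0.0.0
          interface: ::0
          # allow queries from local networks and loopback, refuse everyone else
          access-control: 10.0.0.0/8 allow
          access-control: 127.0.0.0/8 allow
          access-control: 2001:DB8::/64 allow
          access-control: 0.0.0.0/0 refuse
          access-control: ::0/0 refuse

On Ubuntu this would typically go in a file under /etc/unbound/unbound.conf.d/.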


The workflow here feels pretty natural, just using the AI to help with the boring parts and speed things up. I like the idea of treating it as a tool, not a replacement.


I’m sure it’s related to the Heroku outage.



It’s amazing. We cannot even submit a support ticket, because they require you to log in and the incident is on the authentication path. You’d think such a company wouldn’t introduce a circular dependency on such a critical path…


Counterpoint: what's the point of submitting a ticket in this case? With an incident of such magnitude, do you think they don't know?


It's also a good paper trail if you have an SLA. Pointing to a ticket saying "this is when we knew" makes the compensation time calculation trivial and transparent. Of course, if your Salesforce account manager is like most, they don't need such "evidence", but in the past it has helped me.


Wouldn't any email suffice as proof in such a case?


Well, we realized the issue about an hour before they shared anything on their pages. That hour feels like forever. Essentially, I want my problem to be acknowledged by the service provider, which they failed to do on their status page.


Their EU region was down for 8 hours last November and it took 2 hours before they were aware of it, so submitting a ticket is definitely worthwhile. I suspect their monitoring is not good enough.


They are definitely aware of it; the status page is having trouble loading, but you should get it eventually.

If their triage system is good, then overwhelming them with duplicate, non-specific "things are wonky" reports might not hurt, but it definitely doesn't help.

https://status.heroku.com/incidents/2822

Update (posted 3 hours ago, Jun 10, 2025 14:20 UTC): Heroku continues to investigate and remediate an issue with intermittent outages.

Issue (posted 4 hours ago, Jun 10, 2025 13:07 UTC): Beginning at 06:03 UTC, Heroku is having intermittent outages which are currently being investigated.

Investigating (posted 4 hours ago, Jun 10, 2025 12:58 UTC): Engineers are continuing to investigate an issue accessing Heroku services.

Investigating (posted 8 hours ago, Jun 10, 2025 09:19 UTC): Engineers are continuing to investigate an issue accessing Heroku services.

Investigating (posted 9 hours ago, Jun 10, 2025 08:04 UTC): Engineers are investigating an issue with the Heroku platform.


With no updates to their status page or social media accounts, it’s possible they don’t know.


I wonder if anyone has switched algorithms after hitting real-world scaling issues with one of those? Curious if there are any "gotchas" that only show up at scale. I only have experience with fixed-window rate limiting.


I have experience with token bucket and leaky bucket (or at least a variation where a request leaves the bucket when the server is done processing it) to prevent overload of backend servers. I switched from token bucket to leaky bucket. Token bucket is “the server can serve X requests per second,” while leaky bucket is “the server can process N requests concurrently.” I found the direct limit on concurrency much more responsive to overload, and it better controlled delay from contention on shared resources. This kind of makes sense: imagine your server goes from processing 10 QPS to 5 QPS. If the server has a 10 QPS token bucket limit, it keeps accepting requests, and the request queue and response time grow without bound.
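
To make the difference concrete, here is a rough sketch of the concurrency-style limit in Python (the names and the reject-instead-of-queue policy are my own illustration, not any particular system’s code):

    import threading

    class ConcurrencyLimiter:
        # "Leaky bucket" in the sense above: a request occupies a slot while the
        # server is processing it and frees the slot when processing finishes.
        def __init__(self, max_concurrent):
            self._slots = threading.BoundedSemaphore(max_concurrent)

        def try_acquire(self):
            # Non-blocking: shed load immediately instead of letting a queue
            # (and response times) grow without bound when the backend slows down.
            return self._slots.acquire(blocking=False)

        def release(self):
            self._slots.release()

    limiter = ConcurrencyLimiter(max_concurrent=5)

    def handle(do_work):
        if not limiter.try_acquire():
            return "429 Too Many Requests"
        try:
            return do_work()
        finally:
            limiter.release()

If the backend slows from 10 QPS to 5 QPS, the N in-flight slots simply stay occupied longer and new requests get rejected, which is exactly the back-pressure the token bucket lacks.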


We used a token bucket to allow, say, 100 requests immediately, but the limit would actually replenish at 10 per minute or something. It makes sense to allow bursts. This was to let free-tier users test things; unless they went crazy, they would not even notice a rate limiter.
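
Purely as an illustration (not the code we actually ran), that scheme is roughly:

    import time

    class TokenBucket:
        def __init__(self, capacity=100.0, refill_per_sec=10 / 60):
            self.capacity = capacity
            self.tokens = capacity        # start full so a new user gets the whole burst
            self.refill_per_sec = refill_per_sec
            self.last = time.monotonic()

        def allow(self):
            now = time.monotonic()
            # top up proportionally to elapsed time, capped at the burst capacity
            self.tokens = min(self.capacity,
                              self.tokens + (now - self.last) * self.refill_per_sec)
            self.last = now
            if self.tokens >= 1:
                self.tokens -= 1
                return True
            return False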

Sliding window might work well with large intervals. If you have something like a 24h window, a fixed window will abruptly cut things off for hours.

I mostly work with 1-minute windows, so it's fixed windows all the way.


We used a leaky bucket IIRC, and the issue I saw was that the distributed aspect of it was coded incorrectly, so depending on the node you hit you were rate-limited or not :facepalm:
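
The usual fix is to keep the bucket's state in a store every node shares rather than in per-node memory; very roughly (Redis and the key name here are just assumptions for illustration):

    import redis  # assumes a shared Redis instance reachable from every node

    r = redis.Redis(host="localhost", port=6379)
    MAX_IN_FLIGHT = 100  # hypothetical global concurrency cap

    def try_acquire(key="inflight"):
        # INCR is atomic in Redis, so every node sees the same counter
        # instead of each node keeping its own local bucket.
        if r.incr(key) <= MAX_IN_FLIGHT:
            return True
        r.decr(key)  # over the cap: undo our increment and reject
        return False

    def release(key="inflight"):
        r.decr(key)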


So it wasn’t really implemented correctly then.


Amazing step towards saving taxpayer money and avoiding foreign proprietary software. I hope to see more governments moving in this direction. The only problem is that certain systems may end up being a maintenance horror story.


Finally, a proof that’s less ‘universal truth’ and more ‘regional dialect’. Next up: Schrödinger’s theorem, proven and unproven depending on your timezone.


If our brains are energy misers, maybe the real supercomputers are the ones that can do more with less—not the ones with the most flops. This could reframe how we design efficient algorithms and even AI: sometimes, the best strategy isn’t processing power, but predictive efficiency.

Maybe the next breakthrough in cloud computing isn’t more cores or larger GPUs, but better energy allocation and anticipation, just like the brain.


Isn't that what is happening on the whole, going from soccer-field-sized, energy-guzzling hardware, to laptops, to mobile and server-farm processors like ARM that are reasonably energy-efficient? There's only so much energy you can squeeze into a small space, so efficiency becomes a bottleneck.

What good is having all data and knowledge somewhere other than in your pocket when and where you need it? Having computing devices in form factors convenient for human beings must be a major driving factor.

It remains to be seen whether everything will still revolve around data centres or whether devices will start talking to each other in the future, which might be a more democratic way to go.

