Are you also facing the 100 MB upload limit when using Cloudflare Tunnel?
Sometimes I want to upload a video from my phone while away from home, but I can't and need to VPN in.
You have to disable Cloudflare proxy which is not an option with tunnels. It's technically against TOS to proxy non-HTML media anyway. I just ended up exposing my public IP.
I considered doing that too. My main problem with it is privacy. Let's say I set up some sort of dynamic DNS to point foo.bar.example.org to my home IP. Then, after some family event, I share an album link (https://foo.bar.example.org/share/long-base64-string) with friends and family. The album link gets shared on, and ends up on the public internet. Once somebody figures out foo.bar.example.org points to my home IP, they can look up my home IP at all times.
Wait, then why does 1.0.0.1 exist? I'll grant I've never seen it advertised/documented as a backup, but I just assumed it must be because why else would you have two? (Given that 1.1.1.1 already isn't actually a single point, so I wouldn't think you need a second IP for load balancing reasons.)
I don't know if it's the reason, but inet_aton[0] and other parsing libraries that match its behaviour will parse 1.1 as 1.0.0.1. I use `ping 1.1` as a quick connectivity test.
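You can see the same shorthand from Python's socket module, which wraps the platform's inet_aton on most systems; a quick sketch:

```python
import socket

# inet_aton accepts the classic shorthand forms: in "a.b", the first
# number is the first octet and the last fills the remaining bytes.
print(socket.inet_aton("1.1"))    # b'\x01\x00\x00\x01' -> 1.0.0.1
print(socket.inet_aton("127.1"))  # b'\x7f\x00\x00\x01' -> 127.0.0.1

# inet_pton is stricter and rejects the shorthand entirely.
try:
    socket.inet_pton(socket.AF_INET, "1.1")
except OSError:
    print("inet_pton requires the full dotted-quad form")
```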
1.0.0.0/24 is a different network than 1.1.1.0/24 too, so it can be hosted elsewhere. Indeed, right now 1.1.1.1 from my laptop goes via 141.101.71.63 and 1.0.0.1 via 141.101.71.121, which are both hosts on the same LINX/LON1 peer but presumably behind different routers, so there is some resilience there.
Given that DNS is about the easiest thing to avoid a single point of failure on, I'm not sure why you would put all your eggs in a single company's basket, but that seems to be the modern internet: centralisation over resilience, because resilience is somehow deemed to be hard.
I'm not a network expert, but anycast will give you different routes depending on where you are, whereas having two IPs will give you different routes to them from the same location.
In this case, since the error was BGP-related and they clearly use the same system to announce both IPs, both were affected.
In this case they are both advertised from the same peer above, and I suspect they usually are - they certainly come from the same AS - but they don't need to be. You could have two peerings with Cloudflare, with different weights for each /24.
In general, the idea of DNS's design is to use the DNS resolver closest to you, rather than the one run by the largest company.
That said, it's a good idea to specifically pick multiple resolvers in different regions, on different backbones, using different providers, and not to use an anycast address, because anycast can get a little weird. However, this can lead to hard-to-troubleshoot issues, because DNS doesn't always behave the way you expect.
In the case of Denmark, ISP DNS also means censored. Of course it started with CP, as it always does, then expanded to copyright, pharmaceuticals, gambling and "terrorism". Except for the occasional Linux ISO, I don't partake in any of these topics, but I'm opposed to any kind of censorship on principle. And naturally, this doesn't stop anyone, but politicians get to stand in front of television cameras and say they're protecting children and stopping terrorists.
Not just that. ISPs are often subject to certain data retention laws. For Denmark (and other EU countries) that may be 6 months to 2 years. And considering the close ties with the "9 Eyes" alliance, America potentially has access to my information anyway.
Judging by Cloudflare's privacy policy, they hold less personally identifiable information than my ISP while offering EDNS and low latencies? Win, win, win.
Actually, it's about 20cm from my left elbow, which is physically several orders of magnitude closer than anything run by my ISP, and logically at least 2 network hops closer.
And the closest resolving proxy DNS server for most of my machines is listening on their loopback interface. The closest such machine happens to be about 1m away, so is beaten out of first place by centimetres. (-:
It's a shame that Microsoft arbitrarily ties such functionality to the Server flavour of Windows, and does not supply it on the Workstation flavour, but other operating systems are not so artificially limited or helpless; and even novice users on such systems can get a working proxy DNS server out of the box that their sysops don't actually have to touch.
The idea that one has to rely upon an ISP, or even upon CloudFlare and Google and Quad9, for this stuff is a bit of a marketing tale that is put about by these self-same ISPs and CloudFlare and Google and Quad9. Not relying upon them is not actually limited to people who are skilled in system operation, i.e. who they are; rather, it is merely limited by what people run: black box "smart" tellies and whatnot, and the Workstation flavour of Microsoft Windows. Even for such machines, there's the option of a decent quality router/gateway or simply a small box providing proxy DNS on the LAN.
In my case, said small box is roughly the size of my hand and is smaller than my mass-market SOHO router/gateway. (-:
I used to run unbound at home as a full resolver, and ultimately this was my reason to go back to forwarding to other large public resolvers. So many domains seemed to be pretty slow to answer a first query, and I had all kinds of odd behaviours from devices around the house getting a slow initial connection.
Changed back to just using big resolvers and all those issues disappeared.
Keep in mind that low latency is a different goal than reliability. If you want the lowest-latency, the anycast address of a big company will often win out, because they've spent a couple million to get those numbers. If you want most reliable, then the closest hop to you should be the most reliable (there's no accounting for poor sysadmin'ing), which is often the ISP, but sometimes not.
If you run your own recursive DNS server (I keep forgetting to use the right term) on a local network, you can hit the root servers directly, which makes that the most reliable possible DNS resolver. Yes you might get more cache misses initially but I highly doubt you'd notice. (note: querying the root nameservers is bad netiquette; you should always cache queries to them for at least 5 minutes, and always use DNS resolvers to cache locally)
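If anyone wants to see what that looks like mechanically, here's a toy sketch of iterative resolution using the third-party dnspython package (the hard-coded address is a.root-servers.net; a real resolver adds caching, retries, CNAME chasing, and DNSSEC validation):

```python
# pip install dnspython
import dns.message
import dns.query
import dns.rdatatype

def iterate(qname, server="198.41.0.4"):  # 198.41.0.4 = a.root-servers.net
    """Follow referrals from a root server down to an answer -
    roughly what a recursive resolver does on a full cache miss."""
    while True:
        query = dns.message.make_query(qname, dns.rdatatype.A)
        response = dns.query.udp(query, server, timeout=3)
        if response.answer:
            return response.answer[0].to_text()
        # A referral: take a nameserver address from the glue records
        # in the additional section and descend one level.
        for rrset in response.additional:
            if rrset.rdtype == dns.rdatatype.A:
                server = rrset[0].address
                break
        else:
            raise RuntimeError("referral without A-record glue")

print(iterate("example.com"))
```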
> If you want most reliable, then the closest hop to you should be the most reliable (there's no accounting for poor sysadmin'ing), which is often the ISP, but sometimes not.
I'd argue that accounting for poorly managed ISP resolvers is a critical part of reasoning about reliability.
It is. If latency were important, one could always aggregate across a LAN with forwarding caching proxies pointing to a single resolving caching proxy, and gain economies of scale by exactly the same mechanisms. But latency is largely a wood-for-the-trees thing.
In terms of my everyday usage, for the past couple of decades, cache miss delays are largely lost in the noise of stupidly huge WWW pages, artificial service greylisting delays, CAPTCHA delays, and so forth.
Especially as the first step in any full cache miss, a back-end query to the root content DNS server, is also just a round-trip over the loopback interface. Indeed, as is also the second step sometimes now, since some TLDs also let one mirror their data. Thank you, Estonia. https://news.ycombinator.com/item?id=44318136
And the gains in other areas are significant. Remember that privacy and security are also things that people want.
Then there's the fact that things like Quad9's/Google's/CloudFlare's anycasting surprisingly often results in hitting multiple independent servers for successive lookups, not yielding the cache gains that a superficial understanding would lead one to expect.
Just for fun, I did Bender's test at https://news.ycombinator.com/item?id=44534938 a couple of days ago, in a loop. I received reset-to-maximum TTLs from multiple successive cache misses, on queries spaced merely 10 seconds apart, from all three of Quad9, Google Public DNS, and CloudFlare 1.1.1.1. With some maths, I could probably make a good estimate as to how many separate anycast caches on those services are answering me from scratch, and not actually providing the cache hits that one would naïvely think would happen.
I added 127.0.0.1 to Bender's list, of course. That had 1 cache miss at the beginning and then hit the cache every single time, just counting down the TTL by 10 seconds each iteration of the loop; although it did decide that 42 days was unreasonably long, and reduced it to a week. (-:
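For the curious, the loop is easy to reproduce. This is my own rough reconstruction in Python with dnspython, not Bender's actual script, and the query name is just a stand-in:

```python
# pip install dnspython
import time
import dns.resolver

resolvers = {
    "Quad9": "9.9.9.9",
    "Google": "8.8.8.8",
    "Cloudflare": "1.1.1.1",
    "localhost": "127.0.0.1",
}

for _ in range(30):
    for label, address in resolvers.items():
        r = dns.resolver.Resolver(configure=False)
        r.nameservers = [address]
        answer = r.resolve("news.ycombinator.com", "A")
        # A TTL that resets to its maximum instead of counting down
        # suggests a different anycast cache answered this query.
        print(f"{label:10} TTL={answer.rrset.ttl}")
    time.sleep(10)
```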
In general there's no such thing as a "DNS backup". Most clients just arbitrarily pick one from the list; they don't fall back to the other one in case of failure or anything. So if one went down you'd still find many requests timing out.
The reality is that it's rather complicated to say what "most clients" do, as there is some behavioural variation amongst the DNS client libraries when they are configured with multiple IP addresses to contact. So whilst it's true to say that fallback and redundancy does not always operate as one might suppose at the DNS client level, it is untrue to go to the opposite extreme and say that there's no such thing at all.
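To make the variation concrete, here is one of the possible strategies, sketched by hand with dnspython (purely illustrative - no particular stub resolver is claimed to work this way; 192.0.2.1 is a documentation address that will simply time out):

```python
# pip install dnspython
import dns.exception
import dns.resolver

def resolve_with_fallback(name, servers):
    """Try each configured server in order, moving on after a
    timeout. Other client libraries instead rotate, race the
    servers in parallel, or stick to the first one regardless."""
    for server in servers:
        r = dns.resolver.Resolver(configure=False)
        r.nameservers = [server]
        r.lifetime = 2  # seconds to spend on this server in total
        try:
            return r.resolve(name, "A")[0].address
        except (dns.exception.Timeout, dns.resolver.NoNameservers):
            continue  # this server failed; try the next one
    raise RuntimeError("all configured servers failed")

# 192.0.2.1 (TEST-NET-1) never answers, so this falls back to 1.1.1.1.
print(resolve_with_fallback("example.com", ["192.0.2.1", "1.1.1.1"]))
```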
I tried it for a few minutes, and I really like it.
But the lack of widget support is a deal-breaker for me.
I usually just unlock my phone, see top stories directly, and then click on one of them to open Materialistic (an HN app that hasn't been updated in a few years, so it got delisted from the Play Store).
Yeah, a widget is something that would be nice - I don't use them heavily myself, so I guess that's a big reason why they are not there right now. I do remember Materialistic having an OK widget.
Meta question: do you keep the archive.org link of the article in your favorites, or did you manually look up the link before posting?
Or maybe an extension that does that automatically?
Instilling doubt about something is often more practical in effecting change than immediately announcing every facet of your argument. Sometimes it's more practical to prompt change by presenting an opportunity to question an unexamined assumption.
Maybe someone will see my comment and start down their own rabbit hole to find a conclusion. That is better than my immediately spelling out the details of my personal assumptions and conclusions.
Meh. I think too much reliance on a single entity like CloudFlare isn't good, but your reply isn't helping at all. I'd reconsider the approach if you really care about a decentralized internet.
@userbinator sets a good example elsewhere in this thread, imo.