
There's a newer video more on the topic of police chases: https://www.youtube.com/watch?v=wVFXUkFx5Y8


Used this page to show the actual error. As usual, everything was green on the status page while nothing was working.


Fell for it. 90% Czech, 10% Polish. Nothing could be further from reality. I guess it just geolocates by IP?


Doubt it. I'm traveling in Hawaii right now and am a Chinese-American who speaks Spanish, but it labeled me as Hindi/Urdu.


So I picked up the Czech accent but not the language, dang.


I'm in Japan and it correctly detected French. Can't be much further away, geolocation-wise.


Don't think so; it couldn't pick my Australian accent from here in Australia.

I'll go back and lay it on real thick and see if it does better.


I think we both misunderstood.

> You sound like a native English speaker. I couldn’t identify any distinct non-native accent.

I am a native English speaker, with an Australian accent. I think it's supposed to identify your non-native accent in English, which you wouldn't have, being Australian.


Worked 100% at identifying my accent.


Guessed 100% Czech for me as well, wrongly.



The purpose of CAPTCHA is supposedly to test whether you're a human or a bot, not to break or violate user privacy protections. It appears Cloudflare and others would rather dangle websites as "carrots" and see if they can get users to disable their ad blockers or any other privacy protections to get access.

The Cloudflare verification has become a sick or sadistic joke now. It's often just used to annoy people and, no matter whether they pass the tests, denies access anyway. If the test is not going to determine access, then don't present it; just be up front about wholesale, mindlessly or frivolously blocking people and entire IP ranges.


I thought the purpose of CAPTCHA was to train AI.


Cloudflare's CAPTCHA alternative Turnstile doesn't have anything to train AI on: no images, descriptions, or anything else really. It's just a single click.


There's a natural contradiction between security and privacy.

For security, an actor needs to be tested and marked as secure, or else tested again before every interaction.

For privacy, an actor must not be marked, lest observers correlate several interactions and draw conclusions undesirable for the actor.

It does not make the infinite loop produced by Cloudflare any more reasonable, though.


Ever heard of zero-knowledge proofs?

Cloudflare claims to support Privacy Pass, which is supposed to use a zero-knowledge scheme to solve this for Tor users.

Unfortunately, the integration has been broken for a very long time and bug reports aren't tended to.

https://blog.cloudflare.com/cloudflare-supports-privacy-pass...

https://privacypass.github.io/

https://github.com/privacypass/challenge-bypass-extension/is...
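Roughly, the trick is a blind-signature-style token: the server signs something it never sees, so redeeming the token later can't be linked back to the issuance or to a persistent identifier. A toy sketch with textbook RSA numbers, purely to illustrate the unlinkability idea (the deployed Privacy Pass designs use different primitives, originally a VOPRF, and nothing here is production crypto):

    # Toy illustration only: an RSA blind signature lets a client obtain a token
    # the server has signed without the server ever seeing the token itself, so
    # later redemption can't be linked to the issuance.
    import secrets
    from math import gcd

    # Textbook RSA toy key (p=61, q=53): far too small for real use, but it keeps
    # the arithmetic easy to follow.
    n, e, d = 3233, 17, 2753

    # Client: pick a secret token and blind it before sending it to be signed.
    token = secrets.randbelow(n - 2) + 2
    while True:
        r = secrets.randbelow(n - 2) + 2      # blinding factor, must be invertible mod n
        if gcd(r, n) == 1:
            break
    blinded = (token * pow(r, e, n)) % n      # t * r^e mod n

    # Server: signs the blinded value after the client clears a challenge.
    # It learns nothing about `token`.
    blind_sig = pow(blinded, d, n)            # (t * r^e)^d = t^d * r mod n

    # Client: unblind to recover a signature on the original token.
    sig = (blind_sig * pow(r, -1, n)) % n     # t^d mod n

    # Redemption: the server verifies the signature, but has never seen `token`
    # or `sig` before, so it can't tell which issuance this came from.
    assert pow(sig, e, n) == token
    print("token accepted, unlinkable to issuance")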


I don't understand why an actor needs to be tested and marked as secure on the first interaction. There must be signals that would let the server initially trust an actor in some cases. For example, why can't the server trust a never-before-seen IP attempting to sign into an account that hasn't been experiencing incorrect password attempts? Is Cloudflare just a one-size-fits-all solution?


The problem is that it's too easy for a botnet to attack a site by having each computer try a password for a unique account once per day. This would let you get a few million chances per day, per website, at guessing user passwords without detection.


In theory, this could be countered by allowing only one wrong password attempt per IP across any website protected by Cloudflare. I now have a better understanding of the threat, though there may be other drawbacks.
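A rough sketch of what that could look like, assuming a single shared failure counter keyed by client IP (a real deployment would need a distributed, expiring store; the thresholds and names here are made up for illustration):

    # Illustrative only: a shared failed-login budget keyed by client IP across
    # every protected site. A real deployment would use a distributed, expiring
    # store (e.g. Redis) rather than one process's memory.
    import time
    from collections import defaultdict

    WINDOW_SECONDS = 24 * 3600
    MAX_FAILURES_PER_IP = 1          # the "one wrong attempt per IP" idea

    _failures = defaultdict(list)    # ip -> timestamps of failed attempts, any site

    def record_failed_login(ip: str) -> None:
        _failures[ip].append(time.time())

    def requires_challenge(ip: str) -> bool:
        """True once this IP has used up its failure budget across all sites."""
        cutoff = time.time() - WINDOW_SECONDS
        _failures[ip] = [t for t in _failures[ip] if t > cutoff]
        return len(_failures[ip]) >= MAX_FAILURES_PER_IP

    # Usage: gate the login attempt on the shared budget.
    ip = "203.0.113.7"
    if requires_challenge(ip):
        print("serve a challenge instead of checking the password")
    else:
        print("allow the attempt")
        record_failed_login(ip)      # only if the password turned out to be wrong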


I disbelieve there is no way for a client to prove that it has been challenged and cleared in the past without disclosing a persistent unique identifier.


Without a unique identifier, it would be easy for an attacker to clear one challenge and use the result for all nodes in a botnet.


Why can't the identifier be merely yet another bit of data whose existence and properties can be proven by cryptography without transmitting the data itself? It's done all the time with other data.


He's saying that won't work, because the goal is not actually to fingerprint or mark users. It's to ensure that the thing connecting to their servers at that moment is a web browser and not something pretending to be a browser. Give away tokens that say "I'm a browser, honest" and they'll just get cloned by all the bots.


Rate-limit the number of different source IPs that the token can be used from within a given period of time, or the number of requests per second that can use that token without having to re-verify?
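Something like the following, as a sketch of that suggestion (the thresholds and storage are invented for the example; Cloudflare's actual logic isn't public):

    # Illustrative thresholds; the point is that a clearance token stays valid
    # only while it is seen from few distinct source IPs at a modest rate, and
    # otherwise forces re-verification.
    import time
    from collections import defaultdict

    MAX_DISTINCT_IPS = 3        # shared more widely than this looks like a botnet
    MAX_REQ_PER_WINDOW = 120
    WINDOW_SECONDS = 60.0

    _ips_seen = defaultdict(set)    # token -> source IPs observed
    _requests = defaultdict(list)   # token -> recent request timestamps

    def token_still_trusted(token: str, source_ip: str) -> bool:
        now = time.time()
        _ips_seen[token].add(source_ip)
        _requests[token] = [t for t in _requests[token] if now - t < WINDOW_SECONDS]
        _requests[token].append(now)
        if len(_ips_seen[token]) > MAX_DISTINCT_IPS:
            return False            # too many machines sharing one token
        if len(_requests[token]) > MAX_REQ_PER_WINDOW:
            return False            # too fast for a human behind a browser
        return True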


If they can track the token that way, that defeats the whole point; the token becomes a persistent unique ID.

The idea was to prove that a token exists without disclosing the token itself, nor any sort of 1:1 substitution.

That sort of thing is definitely possible; that's not the conundrum. What they said is one of the conundrums, I have to admit. If the server doesn't know who the user is, then the server doesn't know whether it's a valid user or a bot.

But I only agree it's a problem. I don't agree it's a problem without a solution.


I'm at a loss for how this could be implemented reliably (where it never fails to stop bots). Ideas?


I don't think the burden of proof/R&D is on us. But there are many smart people around; I'm sure Cloudflare can pay some of them (even more surprising things are possible with cryptography).

One far-fetched idea is to use zero-knowledge proofs to prove that you were verified, without disclosing anything about your identity. But that's likely overkill.

Anyway, I think Cloudflare is already working on something better with Turnstile, the "privacy preserving captcha", and Private Access Tokens [0].

[0] https://blog.cloudflare.com/turnstile-private-captcha-altern...


What do you see as the problem with this attempt?

https://privacypass.github.io/


It allows unlimited tries. Let's say a current ML system could solve 1% of the captchas; then an attacker could try a million captchas and generate Privacy Pass tokens equivalent to 10k solved captchas.

Theoretically, to penalize the user you need to identify the user. And for that you need to maintain a long-term identity.


You still wouldn't need that.

Trivial counter-examples include proof-of-work (see HashCash) or cryptocurrency micropayments (not necessarily settled on-chain, so transaction fees are not an issue for the user).
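For example, a hashcash-style check needs no identity at all, only a per-request cost. A rough sketch (the difficulty and encoding here are arbitrary):

    # Hashcash-style sketch: the server hands out a random challenge and the
    # client must find a nonce whose SHA-256 hash has N leading zero bits.
    # No identity or long-term state is involved; each request simply costs CPU.
    import hashlib
    import os

    DIFFICULTY_BITS = 20   # arbitrary; tune so solving takes on the order of a second

    def solve(challenge: bytes) -> int:
        nonce = 0
        while True:
            digest = hashlib.sha256(challenge + nonce.to_bytes(8, "big")).digest()
            if int.from_bytes(digest, "big") >> (256 - DIFFICULTY_BITS) == 0:
                return nonce
            nonce += 1

    def verify(challenge: bytes, nonce: int) -> bool:
        digest = hashlib.sha256(challenge + nonce.to_bytes(8, "big")).digest()
        return int.from_bytes(digest, "big") >> (256 - DIFFICULTY_BITS) == 0

    challenge = os.urandom(16)        # server-issued, single-use
    nonce = solve(challenge)          # client burns CPU
    assert verify(challenge, nonce)   # server checks in microseconds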


Isn't the client's IP address a sufficient unique identifier?


Absolutely not. Dozens if not hundreds of legitimate clients can appear on the same public IPv4 address, being home internet customers behind a NAT. The same client can trivially change their IPv4 and likely IPv6 address on a mobile network by toggling flight mode to reconnect.


There's more to it than just anti-fingerprinting. There's also some other fingerprinting going on, and I think there may be some kind of IP reputation system that influences these prompts as well. I've put privacy protections up to max but never see Cloudflare prompts.

I see them using some VPNs and using Tor, but that makes sense, because that's super close to the type of traffic that these filters were designed to block.

I suspect people behind CGNAT and other such technologies may be flagged as bots because one of their peers is tainting their IP address's reputation, or maybe something else is going on at a network level (e.g. the ISP doesn't filter traffic properly and botnets are spoofing source IPs from within the ISP's network?).


Every IPv6 thread we get someone saying "Oh v6 is worthless, we can stay on v4 forever, there are no downsides to CGNAT". I still have no idea how they can think that.


Those responses baffle me. I don't think most of those people have ever been on the receiving end of anti-abuse features targeting shared IP addresses. I wonder if they're the same people who consider IPv4 a scarce resource that needs to be shared carefully.

Try ten Google dorks for finding open Apache directory listings; your IP address gets reCAPTCHA prompts for every single search query for minutes. Share that IP address with thousands of people, and suddenly thousands of people get random Google/Cloudflare prompts.


Yeah, ever try to use Google through Tor? If you're lucky, it will let you do a captcha and get your result, but mostly it just says the IP is temporarily blocked for abuse.


IPv6 addresses are effectively the same as shared IPv4 addresses in anti-abuse systems. All anti-abuse systems treat the /48 or /56 level the same as a single IPv4 address. It's the only way to actually detect one system doing abuse.
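Concretely, such a system would key its counters on the announced prefix rather than the full address, so rotating the low 64 bits changes nothing. A sketch of that idea (the /56 choice and the addresses are illustrative):

    # Illustrative: key abuse counters on the announced prefix (/56 for IPv6,
    # the single address for IPv4), so rotating the low 64 bits of an IPv6
    # address doesn't reset a client's reputation.
    import ipaddress
    from collections import Counter

    def reputation_key(addr: str) -> str:
        ip = ipaddress.ip_address(addr)
        prefix = 56 if ip.version == 6 else 32
        return str(ipaddress.ip_network(f"{ip}/{prefix}", strict=False))

    abuse_counts = Counter()
    for source in ["2001:db8:1234:ab00::1",
                   "2001:db8:1234:ab00:dead:beef:0:2",   # same /56, "new" address
                   "198.51.100.7"]:
        abuse_counts[reputation_key(source)] += 1

    print(abuse_counts)
    # Counter({'2001:db8:1234:ab00::/56': 2, '198.51.100.7/32': 1})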


> All anti-abuse systems treat the /48 or /56 level the same as a single IPv4 address.

With the difference being that you get your own /48 or /56 and suffer from only your own behaviour.

If you're behind CG-NAT because your ISP can't get enough IPv4 addresses, then you suffer from the behaviour of other people.


I don't know of a single ISP that gives /48s out to customers. Maybe a /56, but I think even that is rare.

IPv6 is way better than CGNAT, but ISPs are still doing their own internal routing for much smaller blocks, meaning the block itself is functionally the equivalent of a shared IPv4 address for abuse-prevention purposes.

But also, I could just not know about the ISPs giving out /48s. My window to this is from the abuse prevention side.


Residential /56s are ubiquitous in my community, and /48s are offered by one major ISP, though not the one I personally use.


I'm with Andrews & Arnold (a UK ISP) and they provide a /48 by default.


You get that with v6 it's all disposable? You can use an address for a minute and throw it away.

You'll be able to get them from any geolocation, easy as pie.

So it's worse. You'll be even less trustworthy unless you register as trustworthy and keep the address, which means tracking. The same as having a fingerprint or login now.

As a pro argument that sucks; it's the opposite.


The second half of the address is disposable, plus a few more bits. The first 56 bits or so are allocated just like non-CGNAT IPv4 addresses are currently allocated.


So then you can build up a good reputation by sticking with one IPv6 address, and you shouldn't have to deal with any silly bot restrictions at all.


> I suspect people behind CGNAT and other such technologies may be flagged as bots because one of their peers is tainting their IP address's reputation, or maybe something else is going on at a network level

This is absolutely happening. I got temporarily shadowbanned for spam on Reddit the day I switched to T-Mobile Home Internet, which is CGNAT'd, even though I hadn't posted a single thing.


I'm curious why you seem to think that Tor is more legitimate to block than those behind CGNAT. There's been plenty of research showing that, on a per-connection basis, Tor is no more prone to malicious activity than connections from random IPs, and that it's only on a per-IP basis that malicious activity is more likely. I.e., it's the same phenomenon that causes CGNAT's collateral damage. You could argue that Tor is opt-in and therefore less worthy of protection, but saying "users who want extra privacy deserve to be blocked, even when we know (as much as one can know) that they're not using it for malicious reasons" seems like a fairly dystopian premise.

I'm actually kind of glad more people are becoming aware of this problem, and hope it finally spurs more interest in mechanisms that divorce network identity from IP addresses -- including the work Cloudflare is doing on Privacy Pass!


In my opinion Tor is as good a privacy-preserving technology as VPNs and should be treated very similarly. I use Tor sometimes and I'm annoyed as you are with all the CAPTCHAs and outright blocks when I just want to read an article on a website.

However, the sad fact is that Tor is abused for a LOT of malicious traffic, much more so than any VPN provider, let alone normal ISPs using CGNAT. The anonymity, combined with its free nature, makes it very attractive for bad people to use Tor for bad things without any reasonable fear of getting caught.

An outright block for Tor traffic is definitely out of the question, but adding CAPTCHAs to sensitive things (like account signups, expensive queries, etc.) is sadly a requirement these days.

Blocking exit nodes does nothing to protect your website's security, but it sure as hell cleans up the false positives in your security logs. It's not just Tor, though; there are also some home ISP networks that don't seem to care about the botnets operating inside them.


"I'm curious why you seem to think that Tor is more legitimate to block than those behind CGNAT."

Who said that? I don't see anyone saying that.


How else would you interpret "I see them using some VPNs and using Tor, but that makes sense, because that's super close to the type of traffic that these filters were designed to block"? They seem to be implying that Tor is a form of acceptable collateral damage, but that the likely problem here, i.e. the CGNAT instantiation of collateral blocking, is not.


That only says why it might be blocked, not that it's right.


I never said they claimed it was right, just that it was more acceptable. Again, I don't see how one could interpret it otherwise?


They didn't suggest it was acceptable or more acceptable either, or any other equivalent words for ok, or agreeable, or understandable, or justified, or proper, or reasonable, or...


... I am saying they are clearly making a comparative statement, not an absolute one. Again, how else am I supposed to read that sentence? I feel like I'm going crazy here; this isn't some nuanced point, it's literally what they seemed to be trying to say. Can you please tell me what point you think they were making with the "but" in that sentence?


"I see them using some VPNs and using Tor, but that makes sense, because that's super close to the type of traffic that these filters were designed to block."

All this says is "this explains why I get blocked while using Tor or VPNs". It does not say they agree with it or accept it, etc.

It only says they are not surprised that it happens and that they understand the mechanism by which it happens, not that they accept or agree with it.

They might or might not also think it's fine and reasonable. I can't say they don't approve any more than you can say they do.


Some sites I have already visited keep popping them up, and I'm on a public IP that should have been associated with my computer for a while...

Maybe it is just per use case. Or they think I'm a bot because I keep looking at sites every couple of hours... which might actually be common with these sites.


It may be anecdotal, but I see Cloudflare prompts more often on Firefox than on Chrome.


The most entertaining part of when I first ran into endless verification loop/Cloudflare error codes is that I couldn't access their official forums/support articles for information due to the same problems.


Had the same issue a long time ago; it was surprising how much of the internet was just "turned off": https://blog.dijit.sh/cloudflare-is-turning-off-the-internet...


Got SSL_ERROR_UNSUPPORTED_SIGNATURE_ALGORITHM when I went to the site, and a redirect to https when I manually changed the protocol to http. I turned off HTTPS-only mode in Firefox, so it appears to be a redirect that your server is sending back.

When I change the protocol and get the redirect back to https, another "/" is added after the domain, so "domain/path" becomes "domain//path". This repeats if I keep changing the protocol and hitting the redirect, so "domain//path" becomes "domain///path" (I noticed this because there were about 6 of them).

Apologies if this is indeed caused by my browser settings; I've been unable to find the cause if that's the case.


The slow march of progress, I suppose: that machine is running OpenBSD 6.0, which apparently is too old for modern ciphers. I had an A+ on Qualys a year ago.

I suppose I better update it now, sorry for the inconvenience.


It is concerning how the recommended security practice is essentially planned obsolescence.


Interesting find but that's not the issue for me. about:config shows privacy.resistFingerprinting=false by default (maybe Fedora sets that default?). There were various sub-settings (privacy.resistFingerprinting.*), some of which default to true, so I explicitly set them to false, and refreshed, but that didn't help. I also changed layout.css.font-visibility.resistFingerprinting from 1 to 0. I also tried adding the domain I'm testing to privacy.resistFingerprinting.exemptedDomains and that didn't help.


I wonder at what stage we can consider the damage Cloudflare is doing to the internet as naughty under anti-trust or similar?


Lucky me, I haven't yet found any site I regret giving up on when presented with the "verify you're human" garbage - which, by the way, you can also get from Google on Firefox on Windows.


The breadth of sites that have this is increasing. I've had problems with everything from a website that sells eggs to science journals to ChatGPT.


> This is because Cloudflare is not happy with Firefox 'resist fingerprint' feature.

"Cloudflare is not happy with anything that is not Cloudflare"

ftfy :)


Yes, I was going to mention something like this. I use a custom Firefox cookie setting and many sites end up broken. The sign that it's a security setting within Firefox is the fact that Chrome works fine.


I'm experiencing the same issues with both security.u.c and archive.u.c; the status dashboard shows everything operational: https://status.canonical.com/


On the other hand, `apt-smart --list-mirrors` says that http://archive.ubuntu.com is unavailable. Other than https://github.com/martin68/apt-smart, are there any other tools or best practices for falling back to alternate apt mirrors?


The default Software & Updates app allows checking for the fastest mirror.


That's Loki

> Loki is a horizontally scalable, highly available, multi-tenant log aggregation system inspired by Prometheus.

https://grafana.com/oss/loki/


> Tired of getting spam emails from newsletters you never even signed up for? Mailscarp has you covered!

Ironic


It's literally racketeering.


Charging protection money


Maybe their masterplan is to sell beds :)


Reading between the lines, it also says it's going to enforce a 10 GB limit on paid tiers.

> Namespaces on a GitLab SaaS paid tier (Premium and Ultimate) have a storage limit on their project repositories. A project’s repository has a storage quota of 10 GB.

Even though it's not mentioned as a change or in the timeline, that limit does not currently exist.


This change is one of the only MRs about these new limits discussed in the open.

https://gitlab.com/gitlab-org/gitlab/-/merge_requests/91418


Just wanted to point out that it looks like people will be able to go over that limit but will have to pay extra for it according to https://docs.gitlab.com/ee/subscriptions/gitlab_com/index.ht... . Seems somewhat reasonable.


The CF status page is now showing a widespread incident.

https://www.cloudflarestatus.com/incidents/xvs51y9qs9dj


This is probably the best link for this status instead of the generic cloudflare.com one.


Long-time Thunderbird-on-Linux user here; I had to switch to Mutt a few months ago. Having 3 inboxes configured was, for some reason, taking 10+ GB of disk and several GB of RAM, plus it was painfully slow and froze pretty often.


With several tens of thousands of messages (~70 GB) in my accounts, I also had issues with TB using tons of disk space even when set to not copy messages locally. The issue was TB's global search index. If you disable global search indexing in your config and then manually delete the global-messages-db.sqlite file, you can free up those 10+ GB.

My fix for most annoyances was to copy mail locally and run Dovecot locally on the same box as TB (TB doesn't support standard maildir). I also added a wrapper script that does a VACUUM on all the SQLite databases in the profile when starting TB.

With the above, TB has worked well for me.
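A minimal sketch of such a wrapper, assuming a typical ~/.thunderbird profile location (the path and the idea of vacuuming every .sqlite file are illustrative; don't run it while Thunderbird is open):

    # Sketch of the wrapper idea: compact every SQLite database in the profile,
    # then launch Thunderbird. The profile path is a placeholder; don't run this
    # while Thunderbird has the databases open.
    import sqlite3
    import subprocess
    from pathlib import Path

    PROFILE = Path.home() / ".thunderbird" / "xxxxxxxx.default"   # adjust to your profile

    for db in PROFILE.rglob("*.sqlite"):
        conn = sqlite3.connect(str(db))
        try:
            conn.execute("VACUUM")
            print(f"vacuumed {db}")
        except sqlite3.DatabaseError as exc:
            print(f"skipped {db}: {exc}")
        finally:
            conn.close()

    subprocess.run(["thunderbird"])   # assumes thunderbird is on PATH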


I didn't realize how many SQLite files there are. I ran "find . -name '*.sqlite'" and I see Chrome-related files, cookie files, a file related to storage? Time to look for a new email client, which is sad to say after all these years.


I eventually had to switch away from Thunderbird as well, for similar reasons, and just live with mutt. TB just really didn't perform well at all on large mailboxes (dozens of folders, thousands of messages per folder) without freezing the UI, gobbling gigabytes of RAM, etc. It is obviously not targeted at my use case.


Disk space usage is roughly as much as the mail you have. If not, there is something wrong with your profile.

I have 15 GB of mail across 5 email accounts and Thunderbird is currently sitting at 350 MB RAM. Rarely crosses 500MB, I think.

I am encountering some freezes and occasional crashes, which are annoying, but on Linux there is nothing better.

