There’s a fundamental trade-off between performance and privacy for onion routing. Much of the slowness you’re experiencing is likely network latency, and no software optimization will improve that.
Completely agree, but it could be added that a new language can sometimes help explore new ideas faster, in which case maybe the routing layer and protocol can see new optimizations.
This is not correct. Tor is generally not bottlenecked by the quantity of available nodes; the usual bottleneck is the quality of the nodes your client picks, not how many exist.
Of course, technically, this problem is related to the quantity of high quality nodes :)
It’s important to remember that safety is the whole purpose of the thing. If Tor is slow, it’s annoying. If Tor is compromised, people get imprisoned or killed.
With 3 proxies, traffic circles the planet roughly 2 times, which takes light about 1/4 second. The response does it again, so about 1/2 second in total. Light is slow.
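To make that arithmetic concrete, here's a quick back-of-the-envelope check (a sketch; the path length of two circumnavigations is an assumption, and it uses the vacuum speed of light, so real fiber at roughly 2/3 c would be even slower):

```rust
fn main() {
    // Rough assumptions: a 3-hop circuit stretches the path to about
    // two trips around the Earth (~40,000 km circumference).
    let path_km = 2.0 * 40_000.0;
    let c_km_per_s = 300_000.0; // speed of light in vacuum

    let one_way_s = path_km / c_km_per_s; // ~0.27 s, the "1/4 second"
    let round_trip_s = 2.0 * one_way_s;   // ~0.53 s, the "1/2 second"

    println!("one way: {one_way_s:.2} s, round trip: {round_trip_s:.2} s");
}
```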
The modern TLS 1.3 handshake is exactly the same as your connection setup. If we ignore the fact that (because middleboxes) you have to pretend you're talking TLS 1.2, it goes like this:
Client: "Hi, some.web.site.example please, I want to talk HTTP and I assume you know how AES works and I've randomly picked these numbers to agree the AES key"
Server: "Hi, I do know AES and I've picked these other numbers so now we're good."
Included in the very same packet as that response from the server are the (now AES-encrypted) first things the TLS server wants to say, e.g. to prove who it is, and to agree that it knows HTTP as well.
0-RTT is a (very dangerous, do not use unless you understand exactly what you're doing) extension, also included in TLS 1.3, for some niche applications where we can safely skip even this round trip.
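For anyone who wants the flights spelled out, here's a toy model of that 1-RTT exchange (message names follow RFC 8446; the ALPN and key-share values are illustrative, and this is obviously not a real TLS implementation):

```rust
fn main() {
    // Flight 1, client -> server: everything needed to agree the key.
    let client_flight =
        ["ClientHello (SNI: some.web.site.example, ALPN: http/1.1, key_share)"];

    // Flight 2, server -> client: the server's key share, plus, in the
    // very same flight, data already encrypted under the agreed key:
    // proof of identity and its ALPN choice.
    let server_flight = [
        "ServerHello (key_share)",
        "EncryptedExtensions (ALPN: http/1.1)  [encrypted]",
        "Certificate + CertificateVerify       [encrypted]",
        "Finished                              [encrypted]",
    ];

    for msg in client_flight { println!("C -> S: {msg}"); }
    for msg in server_flight { println!("S -> C: {msg}"); }
    println!("One round trip, key agreed, application data can flow.");
}
```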
What do you mean by "exactly the same as your connection setup."? Are you talking about TCP?
This TLS handshake can only happen after the TCP handshake, right? So 1 RTT for TCP + 1 RTT for TLS, 2 RTT total. (2.5 RTT for the server to start receiving actual data; 3 RTT for the client to receive the actual response.)
Today, Tor doesn't move QUIC, so you'd have to do TCP, but that's not actually a design requirement of Tor; a future Tor could deliver QUIC instead. QUIC is encrypted with TLS 1.3, so your first packet as the client is that Hello packet; there's no TCP layer.
QUIC really wants to do discovery to figure out a better way to move the data and of course Tor doesn't want discovery that's the whole point, so these features are in tension, but that's not hard to resolve in Tor's favour from what I can see.
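Counting the round trips from the two posts above side by side (a sketch; it ignores 0-RTT resumption, TCP Fast Open, and Tor's own circuit-building latency):

```rust
fn main() {
    // TCP handshake (1 RTT) + TLS 1.3 handshake (1 RTT) + the request
    // itself: the client sees the first response byte after ~3 RTT.
    let tcp_tls13 = 1 + 1 + 1;

    // QUIC folds transport setup into the TLS 1.3 exchange, so the
    // request rides the round trip right after the handshake: ~2 RTT.
    let quic = 1 + 1;

    println!("TCP + TLS 1.3: ~{tcp_tls13} RTTs to first response byte");
    println!("QUIC:          ~{quic} RTTs to first response byte");
}
```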
You meant the Tor network, right? Sadly, making very fast anonymous overlay networks is extremely difficult: you either make it fast or you don't sacrifice anonymity. I personally have noticed that the Tor network has significantly improved and is way faster than it was a few years ago. Exiting is also not recommended; if you religiously stay on onion services, you increase your anonymity.
And it's significantly faster to access onion websites than to go through exit nodes, which are probably saturated most of the time.
Reddit over their onion website is very snappy, and compared to accessing Reddit over a VPN it shows fewer issues with loading images/videos and is less likely to be blocked.
It would be nice if more websites were available as onion addresses (and I2P as well).
edit: also, if the Tor Browser (desktop and mobile) shipped with uBlock Origin bundled, that would further improve the experience (Brave's Tor window compared to the Tor Browser is a night and day difference)
> My biggest gripe with the Tor project is that it is so slow.
It’s not supposed to be a primary browsing outlet nor a replacement for a VPN. It’s for specific use cases that need high protection. The tradeoff between speed and privacy for someone whistleblowing to a journalist, as an example, is completely reasonable.
Having too much bandwidth available to each participant would incentivize too much abuse. In my past experience, a Tor-associated IP was already highly correlated with abuse (users trying to evade bans, creating alternate accounts to break rules, and of course actual attacks on security).
>It’s not supposed to be a primary browsing outlet nor a replacement for a VPN.
Tor wants people to use the network for primary browsing because it helps mask the people that need the protection. The more people using the network, the better for everyone's anonymity.
Knowing not so much about Tor but some about math: the odds of de-anonymizing a Tor user by compromising nodes shrink exponentially with the number of hops. Google says there are roughly 7000 Tor nodes, including 2000 guards (entry) and 1000 exit nodes. If you have a single hop, there's roughly a 1 in 1000 chance that you will connect to a single malicious node that can de-anonymize you, going up linearly with the number of nodes an attacker controls. If you have 3 hops, you have a 1 in 2000 * 7000 * 1000 = roughly 1 in 14 billion chance. 2 hops would give you 1 in 2 million; 4 hops would give you 1 in 2000 * 7000 * 7000 * 1000 = 1 in 98 trillion. In practical terms 1:14B is about the same as 1:98T (i.e. both are effectively zero), but 1:2M is a lot higher.
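A quick sanity check of those odds (a sketch; the node counts are the rough figures quoted above, and it assumes the attacker controls exactly one node of each type):

```rust
fn main() {
    let guards = 2000.0_f64;
    let middles = 7000.0_f64;
    let exits = 1000.0_f64;

    // "1 in N" odds that every hop of the circuit lands on the
    // attacker's node, for circuits of 1 to 4 hops.
    let odds = [
        exits,                              // 1 hop:  1 in 1e3
        guards * exits,                     // 2 hops: 1 in 2e6
        guards * middles * exits,           // 3 hops: 1 in 1.4e10
        guards * middles * middles * exits, // 4 hops: 1 in 9.8e13
    ];

    for (hops, n) in odds.iter().enumerate() {
        println!("{} hop(s): 1 in {:e}", hops + 1, n);
    }
}
```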
There are currently ~9000 relays if you look at https://metrics.torproject.org/networksize.html. The current problem is that the majority of relays are in Germany, and if you rotate your circuits enough, you'll keep noticing the same paths. The German government has been very hostile towards Tor for a long time; they were also behind KAX17. We obviously need more relays, but also in different regions.
4 = ... actually, you have more attack surface and you are more susceptible to fingerprinting because everybody else is using 3, so your timings etc. help identify you
So the default is 3 and nobody ought to change it! Use 3 like everybody else.
The exception is .onion sites. Tor actually deliberately defaults to 6 hops when accessing .onion sites: 3 to protect you and 3 to protect the site.
They should be fine, since I made up the setting name. And even though I am not familiar with the Tor client's configuration, I don't believe this is possible without altering its source code.
Also, using this kind of software without understanding how it works, even just a little, doesn't do much to protect your privacy.
You should preface this with some important information about what that does.
There are some trade-offs!
Changing that setting to 1 gives you weaker anonymity guarantees. Using multiple guards spreads your traffic across different IP addresses, making it harder for an adversary who controls a subset of the network to correlate your activity.
Reducing to a single guard concentrates all traffic through one point, increasing the chance that a hostile relay could observe a larger fraction of your streams...
What's the point of having one relay? You're better off using a reputable VPN like Mullvad or IVPN. Tor is the best you're gonna get for a low-latency anonymous overlay network; it's been studied and refined over the years.
It's very difficult for me to contemplate how anybody could run a VPN, however reputable, that isn't compromised by at least one intelligence agency. Their incentive structures and their costs to participate in this space just make it a no-brainer.
If you're starting a brand new VPN company with ironclad ideals about privacy - are you going to be able to compete with state-run enterprises that can subsidize their own competing "businesses", on top of whatever coercive authority they possess to intervene in local small businesses?
> Well it's certainly not worse than c, and it's hard to argue it's as bad, so...
Except with regard to having a proper standard (the standard from Ferrocene has significant issues), and to the size of the language and how easy it is to implement a compiler for it.
This would be a fantastic argument against Rust for the m68k or some other embedded architecture. But we live in a world with an actual Rust compiler for basically all architectures Tor serves. And obviously the C standard can't save C from itself.
Ahh, here you are speaking nonsense again. We ain't talking formal logic, we're speaking human to human.
> For instance, building a large project in a language with only one major compiler, can introduce risk.
OK, let's introduce an alternative to gcc then.
> But Steve Klabnik will lie about that
You seem fine both to tarnish the reputation of, erm, C defenders with your own actions and to slander the reputation of Klabnik (or "lie", as I'm sure you'd term it), who both speaks more coherently and under his own name. Why do this in the name of open source if you have nothing to contribute, knowing that you're setting your own project back?
Hey, if you want a fast anonymity network, there are commercial providers. Companies doing research on their competition use these to hide their true identities from targets. They are not cheap (not free, but cheaper than AWS imho) but have much greater functionality than Tor.
>Hey, if you want a fast anonymity network, there are commercial providers.
For most people seeking anonymity via Tor network (whistleblowers, journalists, activists, etc.), paying a company who can then subsequently be compelled to hand over your information is a bad choice.
And in most other scenarios, Authentic8 is probably still a bad choice. If you require a FedRAMP-authorized service, then sure, look at Authentic8.
I agree it probably won't make it faster. But there is absolutely no comparison when it comes to safety/stability. I've written a ton of C code, and it's just not even close. Rust really outshines C and C++ in this regard, and by a very large margin too.
Still seems insanely more expensive in the UK. I understand they have a higher cost to carry because their project is indeed more complex, but that's almost a 13x more expensive variant, while not even being twice the length.
But you have not addressed the problem that governments control the flow of information in this case.
The antisocial media may be irrelevant, but I still fail to see why a government should be able to proxy-control the flow of information, so I am totally against this. I am also against antisocial media, but that doesn't mean a government actor should filter and censor information here.
So the big corporations all rally behind AI. I don't like this. I don't dispute that AI has some useful use cases, but there are tons of time-wasters, such as fake videos generated on YouTube. So when they now autogenerate everything, the quality will go further downwards, but they will claim it goes upwards. Well, what may go up are the net profits; I don't think the quality will really improve. They also kind of create a monopoly here: only other big corporations can break in, and they won't, because it is easier to share the profits in the same market in a guaranteed manner. Quite amazing that this can happen. Who needs courts anymore when the base system can be gamified?
Then there is also the censorship situation. If you keep on censoring stuff, you lose information. I see this on YouTube, where Google censors cuss words. This leads to rubbish bleeps every few seconds. Who wants to hear that? It's so pointless.
I see a lot of Google adverts for AI that seem to be “look, you can translate your photo into a sci-fi world”.
Which is cool, I guess. But it doesn’t feel like a very valuable thing to an end user. That kind of thing is mostly valuable because it’s hard. If anyone can do it, nobody cares any more.
I am really excited about AI in some use cases. Using the latest models for agentic software development is truly magic. But “make a funny video of yourself as Mickey Mouse” just seems kind of naff.
I don't understand why. Working with hardware, you're going to have to do various things with `unsafe`. Interfacing with C (the rest of the kernel), you'll have to be using `unsafe`.
In my mind, the reasoning for rust in this situation seems flawed.
The amount of unsafe is pretty small and limited to where the interfacing with I/O or relevant stuff actually happens. For example (random selection), https://lkml.org/lkml/2023/3/7/764 has a few unsafes, but each is either: a) trivial structure access, or b) documented as to why it's valid. The rest of the code still benefits.
I’ve done a decent amount of low-level code (never written a driver, but I’m still young). The vast majority of it can be safe and call into wrapped unsafe code when needed. My experience is that a very, very small amount of stuff actually needs unsafe, and the only reason the unsafe C code is used is because it’s possible, not because it’s necessary.
It is interesting how many very experienced programmers have not yet learned your lesson, so you may be young but you are doing very well indeed. Kudos.
Either way, the point you are making is an excellent one. Discipline makes for better programming, and declining to use every feature available to you is very often the right choice.
Unsafe in Rust doesn't mean anything goes. Specifically it means that you are going to 1) dereference a raw pointer; or 2) call an unsafe function/method; or 3) access/modify a mutable static variable; or 4) implement an unsafe trait; or 5) access fields of a union.
You still get the safety guarantees of Rust in unsafe code, like bounds checking and lifetimes.
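A concrete demonstration of that point (a minimal sketch; run it and it panics rather than reading out of bounds):

```rust
fn main() {
    let v = vec![1, 2, 3];

    // `unsafe` does not switch off bounds checking: this indexing
    // still panics with "index out of bounds" instead of reading
    // arbitrary memory. Only the five operations listed above gain
    // new powers here; the compiler will even warn that this
    // `unsafe` block is unnecessary, because indexing needs none.
    let out = unsafe { v[10] };

    println!("{out}"); // never reached
}
```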
Why does that matter? Rust with some "unsafe" is still much nicer to use than C.
In fact one of the main points of Rust is the ability to build safe abstractions on top of unsafe code, rather than every line in the entire program always being unsafe and possibly invoking UB.
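To illustrate that last point, a minimal sketch of the pattern (the register type and the demo address are made up, not from any real driver): the `unsafe` is confined to two audited spots, and everything built on top stays safe.

```rust
/// A memory-mapped device register (illustrative, not a real driver).
pub struct MmioRegister {
    addr: *mut u32,
}

impl MmioRegister {
    /// Safety: the caller must guarantee `addr` points at a valid,
    /// properly aligned, mapped register for the lifetime of the
    /// returned value. This is the one place the invariant is made.
    pub unsafe fn new(addr: *mut u32) -> Self {
        Self { addr }
    }

    /// Safe API: callers can't violate memory safety through it,
    /// because the invariant was promised once, in `new`.
    pub fn read(&self) -> u32 {
        // SAFETY: `addr` validity is a type invariant; see `new`.
        unsafe { core::ptr::read_volatile(self.addr) }
    }
}

fn main() {
    // In a real driver the address would come from the device tree or
    // a PCI BAR; here we point at a local just to exercise the API.
    let mut fake_register: u32 = 0xDEAD_BEEF;
    let reg = unsafe { MmioRegister::new(&mut fake_register) };
    println!("register = {:#x}", reg.read());
}
```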
We need to create an office suite that really allows us to get rid of those milking corporations. I am not just thinking of LibreOffice; I am actually thinking of an office suite that can be used globally AND can also, at least in part, be co-funded by governments. The exact amount and procedure I omit here (it can be many things), but it is no longer acceptable that a single greedy corporation keeps on milking schools for money.
(To those wondering why not LibreOffice: I am not saying not LibreOffice, but I am not sure how well LibreOffice's model fits e.g. having a suite of office-related software that can be employed by every government, school, university, company, etc.; perhaps the code base is not well written. Do we already have co-editing functionality online? So that I could modify an elderly person's document and then create a .pdf file. I can do so locally, of course, but I want to be able to modify it on another, approved-beforehand computer. Right now I have to carry a USB stick and then modify locally, which is also possible, but I'd much prefer built-in solutions here. This is just one example of many more. We need an improved LibreOffice here.)
I don't think it needs one specific alternative; if the protocols they shared were all open and usable, small pieces could be replaced slowly over time.