Whether this is acceptable depends on your threat model. If you believe your adversary might compromise or coerce the service operator, then you cannot trust in-browser encryption even if it is served over https - the code sent to you could be modified to be malicious, and you have no way to prevent or even detect that this is happening. See the Tor Freedom Hosting incident [0] for an example of law enforcement already doing exactly this.
So, the inability to guarantee integrity of a web application remains a problem. TLS helps, but falls short if your adversary can MITM TLS or compromise/coerce the service operator. Web applications unfortunately make this a very convenient attack vector, since their code gets reloaded from the server so frequently and remote code execution (RCE) is trivial to achieve on the web platform (XSS, browsers are full of exploitable bugs).
GP's questions are (respectfully) being skirted around. The same reasoning applies to compiled, client-side code. Recent events show that open source code is as vulnerable as closed source. App stores may mitigate things somewhat, but not completely. It's probably easier to verify client-side encryption in a browser than it is to audit a thick client app, no?
This is very similar to the functionality provided by tlsdate (https://github.com/ioerror/tlsdate). They appear to have eschewed tlsdate's default approach of using the timestamp from the handshake in favor of using the `Date:` field, which tlsdate also supports. It would be interesting to see whether the randomization of TLS timestamps in modern implementations of TLS might mean that tlsdate's default mode is no longer useful. Either way, it's really cool to see this sort of functionality being included in ntpd by default!
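The `Date:` approach is simple enough to sketch. Below is a rough illustration (my own, not tlsdate's actual code) of pulling a coarse time estimate from the `Date:` header of an HTTPS response; the endpoint URL is just a placeholder:

```python
from email.utils import parsedate_to_datetime
from urllib.request import urlopen

def parse_http_date(value):
    """Parse an RFC 7231 `Date:` header into a timezone-aware datetime."""
    return parsedate_to_datetime(value)

def https_date(url="https://www.example.org"):
    """Fetch a trusted HTTPS endpoint and return its clock from `Date:`.

    The header has roughly one-second resolution, so this is only good
    for a coarse initial time step, not for fine-grained synchronization.
    """
    with urlopen(url) as resp:
        return parse_http_date(resp.headers["Date"])
```

The TLS layer authenticates the server, so an on-path attacker can't trivially feed you a bogus timestamp - which is the whole point of tlsdate over plain NTP.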
openntpd has been nothing but trouble for me; switching to djb's clockspeed instead made things better. Here's a script that runs on GFiber devices, which uses tlsdate for a secure initial timewarp and djb clockspeed thereafter. Since switching to this we have had extremely accurate timekeeping.
Which version were you using? If it was the portable version: openntpd-portable wasn't updated for quite a while and fell behind, missing out on some really big improvements from more recent versions.
The portable tree has apparently recently been picked up again by a new maintainer.
If you don't mind elaborating a bit more on the troubles you had, I might be able to help get things fixed for a later release.
I have had some trouble on Solaris getting the adjtime olddelta value to settle quickly, but haven't heard of any other issues. Even if you're happy with clockspeed, it might help other users to identify the problem.
Are you generating the User ID with the additional characters and expecting the user to remember/keep track of it? I do not think that is very user-friendly, even with the cookie trick you describe.
It seems like you are trying to force your user to remember a salt. Why not just use a proper salt and a strong password hashing function?
Also note that this protection is only useful in the case where an attacker can get a database dump but cannot perform an active attack on the server.
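For reference, the "proper salt and strong password hashing function" approach is a few lines with Python's stdlib; this is a minimal sketch with illustrative parameters, not a drop-in implementation:

```python
import hashlib
import hmac
import os

def hash_password(password):
    """Hash with a random per-user salt and a memory-hard KDF (scrypt).

    Returns (salt, digest); both are stored in the user's DB row.
    """
    salt = os.urandom(16)
    digest = hashlib.scrypt(password.encode(), salt=salt, n=2**14, r=8, p=1)
    return salt, digest

def verify_password(password, salt, digest):
    """Recompute the KDF and compare in constant time."""
    candidate = hashlib.scrypt(password.encode(), salt=salt, n=2**14, r=8, p=1)
    return hmac.compare_digest(candidate, digest)
```

As noted above, this only raises the cost of cracking a leaked database dump; it does nothing against an attacker who controls the live server.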
On the other hand, I have seen some sites (gandi.net comes to mind) do something similar to this. I wonder if they have similar security reasoning.
> It seems like you are trying to force your user to remember a salt.
Yes, essentially I'm trying to force the user to remember a client-side 'salt'.
> Why not just use a proper salt and a strong password hashing function?
Because it wouldn't protect against the attack described by userbinator (i.e. 'just trying these 20 passwords gives you a ~18% success rate for any username'). A client-side 'salt' gives you that protection.
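Concretely, a client-side 'salt' scheme might look something like the sketch below. This is hypothetical (my reading of the idea, not any site's actual code): the browser derives the credential it submits from the password plus the user-remembered string, so spraying the top-20 passwords at the login endpoint fails even for users who picked one of them.

```python
import hashlib

def client_credential(password, user_salt, site):
    """Derive the value actually sent to the server.

    `user_salt` is the string the user memorizes; `site` is mixed in so
    the same password+salt pair yields different credentials per site.
    Iteration count is illustrative.
    """
    material = "{}:{}:{}".format(site, user_salt, password).encode()
    return hashlib.pbkdf2_hmac("sha256", material, site.encode(), 100_000).hex()
```

The server should still store this derived value behind its own random salt and a slow hash; the client-side step only defeats credential spraying and password reuse, not a dump of the server's database.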
> I do [not] think that is very user-friendly, even with the cookie trick you describe.
Yes, this system imposes a cost in terms of user-friendliness. But for sensitive sites (e.g. medical or financial) I think it's worth it.
Sensitive sites should use two-factor authentication by default, since your method won't help against keyloggers and other malware. I don't like two-factor authentication (it's more time-consuming and costly to get a throwaway phone number than a new single-purpose email address to register at a random site), but this method is even less user-friendly: you can't expect an average user to remember a random symbol string after a few months. What would really improve the security situation is a good, easy-to-use, cross-platform, cross-device password manager included in major browsers by default.
Part of the problem of running an exit node is that it's unclear how "safe" it actually is, and as a result there is a lot of rumor and paranoia. Every country has different laws that affect the legal status of an exit node operator.
For example, an Austrian man was arrested in 2011 for running an exit node and charged with being an accomplice to crimes that were carried out over Tor using his exit node. He was ultimately found not guilty, but a law was passed as a result that effectively makes it illegal to run a Tor exit in Austria. [0]
Meanwhile, in the US no one has ever been arrested simply for running a Tor exit node (at least to my knowledge). Anecdotal information suggests that the most difficult thing is finding someone to host the node (many cloud VPS providers, for example, will not) if you don't host it yourself. A Reddit commenter who operates Tor exits suggests that running them is protected under U.S. law, although I'm not sure whether this has been tested in court [1].
I think Mozilla should take the (relatively small, due to their presence in the U.S.) risk of running Tor exit nodes. They could even turn it into a project of its own, to explore the common problems and develop some best practices for running Tor exits. I could imagine this being a fruitful collaboration with the EFF, for example!
The case in Austria was complicated because the court found chat logs of his:
"You can host 20 TB child porn with us on some encrypted hdds"
The judge argued that this went beyond merely providing infrastructure and amounted to advertising illegal content/behavior. So this case is not representative when evaluating the risk of running a Tor exit node.
I work at Mozilla, and the folks at Torservers.net were extremely helpful in getting us up to speed quickly. We're hoping to contribute to the public body of knowledge on how to operate exit nodes efficiently, both in terms of effort and cost.
IANAL, but would this just require someone incorporating or starting an LLC and then paying for the exit nodes in the name of that entity? Would that be sufficient protection?
Also not a lawyer, but you can still be charged criminally in the USA:
"Charging a corporation, however, does not mean that individual directors, officers, employees, or shareholders should not also be charged. Prosecution of a corporation is not a substitute for the prosecution of criminally culpable individuals within or without the corporation"
A Tor exit is effectively a proxy, and common sense says nobody should run an open proxy.
On the other hand, it could be a good feature if implemented correctly. For example, sites explicitly saying they allow connections from Tor exits would be a good start.
I think the title of this post is misleading. For context, see the summary of the amendment on p. 324, under "ACTION ITEM—Rule 41 (venue for approval of warrant for certain remote electronic searches)".
The goal of this amendment appears (to me, a non-lawyer) to be to allow judges to issue warrants for crimes that occur in their jurisdiction, covering materials that may not be in their jurisdiction, when the location of the materials has been obfuscated with an anonymizing technology. I don't think this is an "automatic warrant" - they still have to establish probable cause, etc.
A more interesting sentence from p. 325 discusses the mechanism by which the search may be carried out: "The proposal speaks to two increasingly common situations affected by the territorial restriction, each involving remote access searches, in which the government seeks to obtain access to electronic information or an electronic storage device by sending surveillance software over the Internet."
Been using this for a minute, it's quite nice! Kudos to Nadim & team for a friendly and mostly intuitive UI, with some creative new ideas in the context of email/messaging.
A few initial questions:
1. Is any part of the communication forward secure? I can't imagine how the multiparty chat would be.
2. Using the avatar to verify cryptographic identity seems weak, mostly because I don't expect users will check it (it's only in the Contacts view, and it's unclear that it has that use). It resembles the placeholder avatars used on Github among other sites, which seems to suggest that it is not meaningful. So - can the Peerio server silently MITM my communications?
3. I'm not quite sure how the search works (still reading the code), but it seems like it must be searching the plaintext stored in the client's memory. How well will that scale?
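If the search really is over decrypted plaintext held in client memory, the two obvious designs look something like the sketch below. This is purely my speculation about the approach, not Peerio's actual code; message shape and names are made up:

```python
from collections import defaultdict

def search_linear(messages, query):
    """Naive linear scan over in-memory plaintext: O(total text) per query.

    Fine for hundreds of messages, but every keystroke re-scans everything.
    """
    q = query.lower()
    return [m for m in messages if q in m["body"].lower()]

def build_index(messages):
    """In-memory inverted index: token -> set of message ids.

    Built once after decryption; queries then cost roughly O(1) per token.
    """
    index = defaultdict(set)
    for i, m in enumerate(messages):
        for token in m["body"].lower().split():
            index[token].add(i)
    return index

def search_indexed(index, messages, query):
    """Single whole-token lookup against the inverted index."""
    ids = index.get(query.lower(), set())
    return [messages[i] for i in sorted(ids)]
```

The scaling question is really about whether the client rebuilds such an index on every login and how much memory it costs once mailboxes grow large.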
[0] http://www.wired.com/2013/09/freedom-hosting-fbi/