Yes, it broke the speed record: the multithreaded application outperformed the single-threaded version. But I wasn't happy with the result. It consumed an order of magnitude more memory, and GC pauses were potentially harming users (not a widely researched subject, but GC pauses in low-latency mixnets can plausibly harm user anonymity). Oh, and it would occasionally crash with OOM errors.
* yes, more modern versions of Go would likely mitigate some of the memory pain
* yes, crypto/tls is fast now
* no, crypto/tls still has insufficient functionality for implementing this. crypto/tls implicitly assumes you want to authenticate the channel through certificates, which Tor doesn't do
* I was using go 1.4
* yes, I tried Rust
Did you consider a different concurrency strategy to avoid the deadlocks? With separate reader-writer threads you don't have the deadlock you mentioned.
crypto/tls doesn't support renegotiation, which Tor needs but is in the process of getting rid of.
There are separate reader/writer goroutines; I don't think splitting them up further would've helped much. The problem is that all connections may end up needing something from all other connections, and as soon as one of them slows down (slow network, etc.), its channels start filling up, taking other connections down with it :-)
This could've been mitigated by applying backpressure in a bunch of places; it's ultimately a problem of Tor's design and not of Go, but the nature of Go makes it hard to build code that does that.
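To make "backpressure" concrete: the fix is to bound every queue, so a slow consumer pushes back on (or sheds load from) its producers instead of letting buffers grow without limit. Here's a minimal sketch of the idea in Python with bounded queues; the names (relay, consumer) are made up for illustration, not from the actual code.

```python
# Backpressure sketch: a bounded queue forces the producer to slow
# down (or drop) when the consumer falls behind, instead of letting
# an unbounded buffer eat all memory.
import queue
import threading

cells = queue.Queue(maxsize=64)   # bounded: a full queue pushes back


def relay(conn_id: int, n: int) -> None:
    """Producer: emits n cells, slowing down when the queue is full."""
    for i in range(n):
        try:
            # Block for a bounded time; on timeout, shed load rather
            # than buffer forever. (In Tor you'd rather stall the
            # circuit than drop cells.)
            cells.put((conn_id, i), timeout=1.0)
        except queue.Full:
            pass


def consumer(out: list) -> None:
    """Consumer: drains the queue until the None sentinel arrives."""
    while True:
        item = cells.get()
        if item is None:
            break
        out.append(item)


received: list = []
t = threading.Thread(target=consumer, args=(received,))
t.start()
relay(1, 100)
cells.put(None)   # sentinel: tell the consumer to stop
t.join()
print(len(received))  # 100: the consumer kept up here
```

With one producer and one fast consumer nothing is dropped; the interesting behavior kicks in when the consumer stalls and `put` starts blocking, which is exactly the pressure signal the Go version lacked.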
Rust is awesome. It's likely a better fit than Go for applications like this, as it has more predictable performance[1], and more control over the scheduler (as you have to roll one yourself).
I attempted an implementation of Tor in Rust, but since I had already implemented it in Go a few weeks earlier, I got bored quickly. That said, some ideas I had for the Rust version have made it into Tor itself (or soon will), such as my ideas on transparently load-balancing Tor hidden services: https://gitweb.torproject.org/torspec.git/tree/proposals/255...
[1] note that in the land of Tor, unpredictable performance (for example because of GC pauses) could lead to user deanonymization.
This is awkward... When newzbin shut down, I was without a NZB provider for a week. Then 2 days ago I paid for my membership at NZBMatrix. And now it's gone?
Similar position.. but I was waiting a week or two before committing to actual money :)
Glad I did.
Going a bit old school at the mo, downloading headers from a.b.mm for example. Get some retention up myself, might take a while!
Non-native English speaker here, I volunteer at several companies to translate their interfaces.
Almost all people can read basic English (especially those who access Github), so translating the Github interface seems rather pointless to me. Of course, translating the support articles would really help accessibility for those who don't know English as well as native speakers.
Translating interfaces makes people think they can use their language to communicate on a site. Translating only the support articles helps people understand the site, but they will quickly realize that the site itself prefers English-only communication.
The barrier to entry for newcomers is probably more of a UX/usability problem than one of internationalization; I agree with you on that.
Translating the support docs is one suggestion I think is unequivocally good, and something that should be done.
I think internationalization is great in a read-only capacity, but when non-English speakers start posting Issues, comments and Pull Requests, that's where it starts to turn into a problem.
GitHub has very little to translate in the first place, as it's very sparse on prose, so aside from the question of support docs, we are probably making mountains out of molehills: the remaining English is jargon and not really subject to internationalization concerns.
Transaction fees are a 'gift' from the person making the transaction to the person processing it. Transactions are processed in blocks, and the process is called 'mining' because for every processed block, the person who processes it gets 25 BTC (before today that was 50 BTC) plus the transaction fees. Obviously 25 BTC is worth a lot, which is why there's so much competition to mine blocks. Mining is often done in groups, and the person who mines a block has to share the profits with the rest of that group.
IIRC the entire Bitcoin community mines 6 blocks per hour, and it's the mining process that keeps Bitcoin going, because without miners there couldn't be any transactions.
This is why transaction fees will become increasingly important as BTC matures: after all 21 million are gone, the only reward for "mining" will be the transaction fees paid in that block, so much like the old west, mining is gradually replaced with banking/processing :-)
The reward for mining a block just halved; it'll continue to halve as the supply increases, so that 21 million is not the endpoint so much as it is an asymptote.
It will continue to halve, exactly one more time to 12.5, and then the next time after that, it will stop. Once the 12.5 reward is gone, the network will be supported entirely by transaction fees, and no new bitcoins will be created.
So, yes, asymptote, but the rest of your comment seems misleading. It will halve once more, that's not exactly "continue to halve."
I was also wondering how to reconcile this (seemingly very) early first halving with my concept of a 21 year bitcoin generation span. Thanks for clearing that up!
*edit: Turns out I was misinformed about the 21-year thing too. These projections estimate the halvings ending around 2140. Don't suppose either of us will be around to see it.
And, actually, it may continue well beyond 2140. The only reason it would stop there is that the smallest value bitcoin can currently represent is 0.00000001 BTC (1e-8). Many people believe that the rising value of bitcoin will bring about a need to increase the number of decimal places that bitcoin supports. If this change is made to the protocol before 2140, then the mining reward will probably keep on halving to values even smaller than 1e-8 BTC.
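The schedule is easy to check yourself: the subsidy starts at 50 BTC, halves every 210,000 blocks, and is stored in whole satoshis, so integer division drives it to zero after 33 eras. A quick back-of-the-envelope sketch (protocol constants from the Bitcoin design, timing rounded to ~4 years per era):

```python
# Replay Bitcoin's subsidy schedule: 50 BTC initial reward, halving
# every 210,000 blocks, rewards denominated in satoshis (1e-8 BTC),
# so repeated integer halving eventually hits zero.
SATOSHI = 10**8
reward = 50 * SATOSHI          # era-0 block reward, in satoshis
total = 0
era = 0
while reward > 0:
    total += reward * 210_000  # each era lasts 210,000 blocks
    reward //= 2               # integer halving; reaches 0 after era 32
    era += 1

print(era)                     # 33 eras pay a non-zero subsidy
print(total / SATOSHI)         # just under 21,000,000 BTC
print(2009 + era * 4)          # ~2141: roughly when the subsidy ends
```

Note the total comes out slightly under 21 million precisely because of those integer-division roundings; 21M is the asymptote, not an exact sum.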
I like it. Produce the proof that your rig solved the latest block, and you get your name in the book. Sounds much better than "no further rewards will be issued from this date."
For the record, distributing Python scripts doesn't have to mean distributing the source: it's possible to just execute the compiled .pyc files, which are harder to crack than (for example) Java's .class files.
Also, since xor is just a CPU instruction, you won't immediately notice it in the decompiled script (if you get that far). With all the overhead that decompilers tend to produce, it's really easy to miss.
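For what it's worth, here's the kind of XOR obfuscation being described, with a made-up key and secret. The flip side of "just a CPU instruction" is that XOR is its own inverse, so anyone who spots the key in the decompiled output undoes it in one line:

```python
# XOR "hiding" of a secret with a fixed key. Key and secret here are
# made up for illustration. Applying the same function twice with the
# same key recovers the plaintext -- XOR is self-inverse.
def xor_bytes(data: bytes, key: bytes) -> bytes:
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(data))

KEY = b"RANDOM_STRING"
hidden = xor_bytes(b"my api secret", KEY)
print(hidden != b"my api secret")   # True: looks like noise in the .pyc
print(xor_bytes(hidden, KEY))       # b'my api secret'
```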
You should really take a look at one of these .pyc files. They are so verbose that they even contain local variable names, and the Python code can be trivially decompiled from the bytecode.
Literally the only thing that goes missing from .py to .pyc is comments.
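You can check this from the REPL: a compiled code object (which is what a .pyc stores, plus a header) still carries the function, argument, and local names.

```python
# Compile a toy function and inspect what survives into the code
# object: the function name, argument names, and local names all do.
src = """
def area(width, height):
    result = width * height
    return result
"""
code = compile(src, "<demo>", "exec")
# Pick out the nested code object for area() from the module's consts.
fn_code = next(c for c in code.co_consts if hasattr(c, "co_name"))
print(fn_code.co_name)       # area
print(fn_code.co_varnames)   # ('width', 'height', 'result')
```

That's exactly why decompilers can give you back readable source, not just anonymous temporaries like a stripped C binary would.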
That's what an assembly dump looks like to an experienced reverse engineer. Writing something in a "compiled language" because it's more secure is like XOR-ing your video with RANDOM_STRING and calling it DRM.
(Not that any DRM scheme can ever work, ever, but hey. At least some try to try.)
I think you guys have two different definitions of "work". jrockway seems to be arguing from a technical perspective, but you're likely arguing from a practical perspective. Sure, DRM can "never work" in that there's always the analog loop and all that. But it's absolutely possible to make it so insanely complex and difficult that no one will ever break it; DirecTV has shown that angle works, without a doubt.
I think the distinction is between software and hardware DRM. DirecTV controls the entire hardware chain. This means they can do various proper encryption schemes (public/pre-shared key etc) that are actually near impossible to crack and make it really, really hard to obtain the key by making the key write-only in the crypto-chip.
In a pure software solution, you control the hardware, and any hiding of the key is subject to reverse engineering the software.
There's also a distinction between access to the data stream vs the ability to make a duplicate of it.
For all of the success they've had in protecting DirecTV, if you've got a legitimate access card feeding HDMI data out, you can make a perfect digital copy of the video stream that has no copy protection whatsoever. So ultimately the DRM offers no protection for the media content companies (at least those that don't benefit from live performances like say sports games), though it does for the pipe provider who will surely get his monthly satellite fees.
I agree with your comment, except that "difficult" doesn't necessarily imply "insanely complex".
Counter-example: we once timed a release of a very minor protection update to when the main attacker typically took a holiday. We got 6 weeks out of something trivial, buying more time to work on the major release to greet him when he returned.
It all comes down to economics. Buy a beater bike, and sure, you can secure it well enough. (Mean time between stolen is low enough you don't care too much.) Buy a really nice one that everyone wants, and good luck with that.
Popular, recently produced media has too much value to too many attackers to protect. A celebrity's self shots -- same thing. A game console by Microsoft or Sony -- same thing.
> But it's absolutely possible to make it so insanely complex and difficult that no one will ever break it
A more accurate way to put it: If you make the return on effort ratio low enough, the probability of someone breaking it goes down, and it might even go down enough for you to get away with it for a useful amount of time.
It's not that simple. Sure, unpopular systems are more obscure and less likely to attract attention, but you're wrong in extending that to "if it's popular, it will be broken" (denying the antecedent).
As a counter-example, I propose DirecTV or even their competitor, Dish Network (Nagravision). Hacks of these systems are worth 6 figures, pay TV is widely desired, and there hasn't been a DTV hack since 2004. None.
.pyc files are actually really easy to decompile, it's just that most people have never encountered the tools required to do it. I believe they literally contain the entire abstract syntax tree for the Python source code.
No, they contain marshaled bytecode. I documented the format at http://daeken.com/python-marshal-format a while back (should still be more or less correct).
Python's bytecode is for a stack machine, if I'm not mistaken, and such bytecode is a serialization format for ASTs: a post-order traversal, for expressions. Interpret stack-machine bytecode symbolically and you reconstruct the AST:
Compilation:
1 + 2 => (+ 1 2) => push 1, push 2, add
Interpretation:
push 1 => 1
push 2 => 2, 1
add => (+ 1 2)
Control flow makes things slightly more complicated, but not for predictable code generation.
Obfuscated bytecode which e.g. doesn't maintain consistent interpreter stack depths for every code path (illegal for JVM or .net CLR) would make things a little harder to analyze, but I doubt that's often the case in practice with Python.
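The symbolic interpretation described above is only a few lines of code. A toy sketch (not a real Python decompiler, and the op names are made up): replay the bytecode, but push expression trees instead of values.

```python
# Symbolic evaluation of stack-machine bytecode: instead of computing
# values, push string representations of subtrees; binary ops pop two
# subtrees and push the combined expression, undoing the post-order
# traversal that compilation performed.
def symbolic_eval(ops):
    stack = []
    for op, *args in ops:
        if op == "push":
            stack.append(str(args[0]))
        elif op == "add":
            b = stack.pop()   # right operand was pushed last
            a = stack.pop()
            stack.append(f"(+ {a} {b})")
    return stack.pop()        # the root of the reconstructed AST

print(symbolic_eval([("push", 1), ("push", 2), ("add",)]))  # (+ 1 2)
```

Nesting falls out for free: `push 3, push 4, add, push 5, add` comes back as `(+ (+ 3 4) 5)`.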
Yeah, the reconstruction isn't hard at all, but it's not a direct 1:1 mapping to the AST, since multiple control flow structures in the AST can become the same thing in bytecode. That said, it's quite simple to make it Good Enough (TM); the reason I wrote that and the RMarshal module was that I was writing a Python decompiler as part of a larger commercial project. I should release the decompiler at some point.
I've been using Windows 8 since roughly two weeks before its launch, and I agree with most of the article.
In fact, I just realized that I never use a single "Modern UI" app for the simple reason that they force my entire screen (2560x1440, 27") to be filled by one app. Such a waste of space. In desktop mode I often have four 1280x720 windows on my screen.
Windows 8 might just be the push I needed to switch to Linux.
What I don't understand is why you feel like you are supposed to use "Modern UI" apps on your big screen. You are not. Everything you enjoyed doing with Windows 7 is still possible with Windows 8. Just because there is an alternate way to do things does not mean you have to adopt it...
Man has always assumed that he was more intelligent than dolphins because he had achieved so much... the wheel, New York, wars and so on... while all the dolphins had ever done was muck about in the water having a good time. But conversely, the dolphins had always believed that they were far more intelligent than man... for precisely the same reason.
Yep, it's not very smart yet. There's a small JSON dict of Firefox releases that it references (https://github.com/witoff/BrowserAge/blob/master/data/age-fi...). I haven't updated it past the stable releases, so it defaults to the last known release date. Feel free to clone on github :)
Hmm, might be tough - I'm not sure quite how Nightly releases work, but I updated an hour ago and the version changed from 18.0a1 (old date, a few days ago I think) to 18.0a1 (2012-09-30). Meanwhile I check my useragent and I'm just seeing Mozilla/5.0 (Windows NT 6.1; WOW64; rv:18.0) Gecko/18.0 Firefox/18.0
Will submit a pull request if I get round to looking further and find a decent answer!
edit: Not sure there's going to be any good solution. 18.0 Nightly first came out on August 28th, which is 33 days ago, so even though I last updated today (and the whole concept of "Nightly" is an update every day), the version number is technically older than the 25 days since the most recent stable release. And when, in 10 days, 18.0 moves to Aurora and Nightly moves to 19.0, the user agent most likely won't differ between Aurora users then and Nightly users now.