Haven't read the paper yet, but have read the blog posts (which are awesome, BTW!).
I'm wondering if you have any thoughts on Frank McSherry's old blog post expressing his distrust for approximate-DP [1]. He seems to have different intuitions than your "almost DP" post expresses and makes criticisms that aren't quite addressed in your post.

[1]: https://github.com/frankmcsherry/blog/blob/master/posts/2017...
First of all, there's a lot of recent (and not so recent) work in Local Differential Privacy [1], which uses the "untrusted curator" model. Although this software doesn't use it, the article mentions RAPPOR, which is a good example.
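To make the "untrusted curator" idea concrete, here's a minimal Haskell sketch of randomized response, the classic trick RAPPOR builds on (RAPPOR itself adds Bloom filters and a second round of randomization). All names here are mine, not from any library:

```haskell
import System.Random (randomIO)

-- With probability 1/2 report the truth; otherwise report a fair coin flip.
-- The curator never learns any individual's true bit...
respond :: Bool -> IO Bool
respond truth = do
  honest <- randomIO                        -- coin 1: answer honestly?
  if honest then pure truth else randomIO   -- coin 2: a random answer

-- ...but the population rate is still recoverable, since
-- E[observed rate] = 1/4 + trueRate / 2.
estimateRate :: [Bool] -> Double
estimateRate reports =
  let observed = fromIntegral (length (filter id reports))
               / fromIntegral (length reports)
  in 2 * observed - 0.5

main :: IO ()
main = do
  reports <- mapM respond (replicate 600 True ++ replicate 400 False)
  print (estimateRate reports)  -- roughly 0.6, without trusting the curator
```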
Second of all, encryption protects your _data_, but not your _privacy_; that is, assuming your data gets used in any way, you have no guarantees about whether the result reveals anything you'd rather keep secret. Of course, if you're talking about normal encryption, your data _can't_ be used, but then you're not really sharing it at all so much as storing it there (like Dropbox). But once you start talking about things like homomorphic encryption or secure multiparty computation, it's important to keep in mind that they are complements to differential privacy, not replacements.
> After using the language for two years I find that the types are actually enough to understand a new library, however, am taking for granted that it's an acquired skill.
I think this is overstating it, unfortunately. I'm an intermediate Haskeller, and when I tried to use `hasql`, I found that the lack of documentation slowed me down.
It's a testament to the power of types as documentation that I was able to use it at all, but examples and simple cookbook-style notes ("Here's how you do this thing", "Here's how you use this component", "You can't do this because the interface doesn't allow it; here's why") would have sped up my acquisition of the library immensely.
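For instance, here's the sort of cookbook entry I mean, a sketch of "run a one-parameter query" written from memory, so module and function names may not match your hasql version exactly:

```haskell
{-# LANGUAGE OverloadedStrings #-}
import Data.Int (Int64)
import Data.Text (Text)
import qualified Hasql.Connection as Connection
import qualified Hasql.Decoders as D
import qualified Hasql.Encoders as E
import qualified Hasql.Session as Session
import qualified Hasql.Statement as Statement

-- A statement is SQL text + a parameter encoder + a result decoder
-- + a flag for whether to prepare it.
userName :: Statement.Statement Int64 (Maybe Text)
userName =
  Statement.Statement
    "SELECT name FROM users WHERE id = $1"
    (E.param (E.nonNullable E.int8))
    (D.rowMaybe (D.column (D.nonNullable D.text)))
    True

main :: IO ()
main = do
  Right conn <- Connection.acquire
    (Connection.settings "localhost" 5432 "postgres" "" "mydb")
  result <- Session.run (Session.statement 42 userName) conn
  print result
```

Nothing here is hard once you've seen it, which is exactly why a few such examples in the README would pay for themselves.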
Even if the function had documentation that said "this function does not have side effects" it could still have side effects.
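To make that concrete, here's a minimal Haskell sketch using `unsafePerformIO` (which exists precisely to punch this hole): the signature promises purity, and the body breaks the promise.

```haskell
import System.IO.Unsafe (unsafePerformIO)

-- The type says "pure function from Int to Int": no effects admitted.
addOne :: Int -> Int
addOne n = unsafePerformIO $ do
  putStrLn "sneaky side effect"  -- I/O the type signature never mentions
  pure (n + 1)
```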
So within the context of this thread (type signatures vs. hand-written documentation), I would say your point is a +0 for hand-written documentation. Both type signatures and hand-written documentation can lie.
Thank you for doing an AMA! I know I'm late to the party, but I have an experience-report/feature-request/question:
I tried to use GitLab in a classroom setting, and it went okay. One of the reasons we decided against using it the next year was the apparent lack of an archival backup feature (cf. my Stack Exchange [question](http://serverfault.com/q/627618/172148) on the matter).
We'd like to start completely fresh every year, so that former course assistants and students don't have access, but we'd also like to keep around the old data (for various reasons). Given that GitLab can only restore a backup to the same version that generated it, the only option this left us with was to archive the whole VM, which just feels sloppy.
I understand that this feature is not a priority and is a relatively large technical undertaking, so I'm not holding my breath on it getting implemented; even so, I thought that sharing my experience would be valuable.
Once again, thank you for engaging with the community and for such a great product.
I don't know if this is a definition other people use, but here's one possibility.
Intelligence (in a domain) is measured by how well you solve problems in that domain. If problems in the domain have binary solutions and no external input, a good measure of quality is average time to solution. Sometimes you can get a benefit by batching the problems, so let's permit that. In other cases, quality is best measured by probability of success given a certain amount of time (think winning a timed chess or Go game). Sometimes, instead of a binary outcome, we want to minimize error in a given time (like computing pi).
Pick a measure appropriate to the problem. These measures require thinking of the system as a whole, so an AI is not just a program but a physical device running a program.
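As a toy instance of the last measure (error within a time budget), here's a Haskell sketch that truncates the Leibniz series for pi, with the number of terms standing in for time; all names are illustrative:

```haskell
-- pi approximated by the Leibniz series, truncated after n terms.
piAfter :: Int -> Double
piAfter n = 4 * sum [ (-1) ^^ k / fromIntegral (2 * k + 1) | k <- [0 .. n - 1] ]

-- The quality measure: error achieved within the budget n.
errorAt :: Int -> Double
errorAt n = abs (pi - piAfter n)

main :: IO ()
main = mapM_ (\n -> putStrLn (show n ++ " terms: error " ++ show (errorAt n)))
             [10, 100, 1000]
```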
The domain for the unrestricted claim of intelligence is "reasonable problems". Having an AI tell you the mass of Jupiter or find Earth-like planets is reasonable. Having it move its arms (when it doesn't have any) is not. Having it move _your_ arms is reasonable, though.
The comparison is to the human who is or was most qualified to solve the problem, with the exception of people uniquely qualified to solve the problem (I'm not claiming that the AI is better than you are at moving your own arms).
Most problems are not binary. They might not even have a single best solution. Many have multiple streams of changing inputs and factors. So again, how are you going to measure intelligence in such domains?
Besides, an AI might be really good at solving problems in one specific domain. That doesn't mean the AI is anything more than a large calculator designed to solve that kind of problem. Such a calculator does not need to, and will not, become "self-aware". It does not need, and will not have, a "personality". It might solve that narrow class of problems faster than humans, but it will be useless when faced with most other kinds of problems. Is it more intelligent than humans?
It's not at all clear how to develop an AI which will be able to solve any "reasonable" problem, and I don't even think that's what most companies/researchers are trying to achieve. Arguably the best way to approach this problem is reverse engineering our own intelligence, but this, even if successful, will not necessarily lead to anything smarter than what is being reverse engineered.
Are you happier with the state of symmetric crypto, which, despite relying on conjectures (like the existence of pseudo-random functions) tends not to rely on _algebraic_ ones?
Personally, I don't have particular worries about the hardness assumptions of asymmetric crypto, and I think of them a bit like I think of bitcoin (hear me out). Yes, it is certain that eventually someone will settle the discrete log problem for any given algebraic structure (either by breaking it, and with it all the crypto that relies on it, or (less likely) by proving it fundamentally hard), but for now, we know that this is hard (since it has been open for a while), and we're also incentivizing people to make mathematical discoveries.
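To pin down the asymmetry I'm leaning on, here's a toy Haskell sketch (parameters far too small for real use): the forward direction is a handful of multiplications, while the only generic way back is search over the whole group.

```haskell
-- Fast direction: g^x mod p by square-and-multiply, O(log x) multiplications.
modExp :: Integer -> Integer -> Integer -> Integer
modExp _ 0 _ = 1
modExp g x p
  | even x    = (half * half) `mod` p
  | otherwise = (g * modExp g (x - 1) p) `mod` p
  where half = modExp g (x `div` 2) p

-- Believed-hard direction: recover x from g^x mod p. Brute force is linear
-- in the group order, i.e. hopeless at cryptographic sizes.
dlog :: Integer -> Integer -> Integer -> Maybe Integer
dlog g h p = lookup h [ (modExp g x p, x) | x <- [0 .. p - 2] ]

main :: IO ()
main = do
  print (modExp 5 77 1019)                 -- instant
  print (dlog 5 (modExp 5 77 1019) 1019)   -- Just 77, but only by search
```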
I'd also claim that the "crypto community" (at least the academic side of it) and the "technology community" are not the same, and (at least to me) often feel opposed. Cryptologists write papers filled to the brim with dense and precise mathematical assumptions and reductions; technologists skim the papers, ignore the assumptions, and implement half-assed, unaudited versions of the systems in question and claim them secure (pardon my cynicism).
As to what the community thinks about mathematical public-key crypto, they hail it as the greatest innovation since sliced bread and the herald of modern cryptography. Before modernity, cryptography was very ad hoc and depended on the author's intuitions; modernity introduced precise definitions of what it means for a system to be secure and raised the bar. It also relies heavily on the concept of a hardness reduction, i.e. a proof that breaking a cryptographic primitive is at least as hard as solving a yet-unsolved math problem.
Specifically about algebraic problems, I have a (low-confidence) intuition that they are unavoidable in public-key crypto, precisely because of the need for an algebraic structure relating the public and private keys. With this in mind, I'd rather have algorithms that rely on problems known to be hard (demonstrated by years of mathematical effort poured into them with minimal result) than algorithms that rely on problems no one has ever bothered to look at.
A final question: you are unhappy with public key crypto that relies on algebra; would you be happier if it relied on some other branch of mathematics? Analysis? Topology (okay, so that's still algebra)? Complexity theory (a secure cryptosystem that relied only on P!=NP would be a holy grail for several reasons, but I don't know of any attempts to find one)? Would you feel safe using a cryptosystem that was secure if and only if the Riemann Hypothesis were true? If the RH were false? The Collatz Conjecture?