I don't like the trend of naming software projects after real people. It makes web search harder both for people who try to find the person and for people who try to find the project.
> I’m going to close this post with a warning. When Frisch and Peierls wrote their now-famous memo in March 1940, estimating the mass of Uranium-235 that would be needed for a fission bomb, they didn’t publish it in a journal, but communicated the result through military channels only. As recently as February 1939, Frisch and Meitner had published in Nature their theoretical explanation of recent experiments, showing that the uranium nucleus could fission when bombarded by neutrons. But by 1940, Frisch and Peierls realized that the time for open publication of these matters had passed.
> Similarly, at some point, the people doing detailed estimates of how many physical qubits and gates it’ll take to break actually deployed cryptosystems using Shor’s algorithm are going to stop publishing those estimates, if for no other reason than the risk of giving too much information to adversaries. Indeed, for all we know, that point may have been passed already. This is the clearest warning that I can offer in public right now about the urgency of migrating to post-quantum cryptosystems, a process that I’m grateful is already underway.
Does anyone know how far along it is? Do we need to worry that the switch away from RSA won't be broadly deployed before quantum decryption becomes available?
From analytical arguments considering a rather generic error type, we already know that for Shor's algorithm to produce a useful result, the error rate per logical qubit needs to decrease with the size of the number as ~n^(-1/3), where `n` is the number of bits in the number being factored [1].
This estimate, however, assumes that an interaction can be turned on between any two qubits. In practice we can only do nearest-neighbour interactions on a square lattice, so an interaction between two arbitrary qubits has to be simulated by repeated application of SWAP gates, shuffling the qubits around much as in the 15-puzzle. This simulation adds roughly `n` SWAP gates, which multiplies the noise by the same factor, hence the required error rate for logical qubits on a square lattice becomes ~n^(-4/3).
Now comes the error correction. Estimates are somewhat hard to make here, as they depend on the sensitivity of the readout mechanism, but let's say, for example, that a 10-bit number can be factored with a logical-qubit error rate of 10^{-5}. Then we apply a surface code whose suppression scales exponentially, reducing the error rate tenfold for every 10 additional physical qubits, i.e. ~1/10^{m/10}, where m is the number of physical qubits per logical qubit (which is rather optimistic). Putting in the numbers, it would follow that we need about 40 physical qubits per logical qubit, hence in total around 400k physical qubits.
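For what it's worth, here is a rough sketch (my own, not anyone's published resource estimate) of how these scaling assumptions chain together. The 10-bit/10^{-5} anchor and the 10^{-m/10} suppression are the figures above; the 2048-bit modulus and the 2n logical-qubit count are illustrative assumptions I've added.

```python
# Back-of-envelope chain of the assumptions above (a sketch, not a real
# resource estimate): n^(-1/3) algorithmic requirement, an extra 1/n from
# SWAP routing on a square lattice, and a surface code that suppresses the
# logical error rate tenfold per 10 extra physical qubits, i.e. ~10^(-m/10).
import math

def required_logical_error_rate(n_bits, anchor_bits=10, anchor_rate=1e-5):
    """Required per-logical-qubit error rate, scaled as ~n^(-4/3) from the
    (assumed) anchor point of a 10-bit number at 10^-5."""
    return anchor_rate * (n_bits / anchor_bits) ** (-4.0 / 3.0)

def physical_per_logical(target_rate):
    """Physical qubits per logical qubit if m qubits give error ~10^(-m/10)."""
    return 10 * math.ceil(-math.log10(target_rate))

n = 2048                                  # assumed RSA modulus size, in bits
rate = required_logical_error_rate(n)     # ~8e-9 under these assumptions
m = physical_per_logical(rate)            # ~90 physical qubits per logical qubit
logical_qubits = 2 * n                    # rough logical-qubit count for Shor
total = logical_qubits * m
print(f"error rate {rate:.1e}, {m} physical per logical, {total} physical qubits")
```

Chained this way it lands in the same few-hundred-thousand range as the 400k above, though the intermediate numbers depend on exactly how one anchors the estimate.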
That may sound reasonable, but we have also assumed that, while individual physical qubits are being manipulated, the others do not decohere while waiting their turn. This in fact scales poorly with the number of qubits on the chip: physical constraints limit the number of coaxial cables that can be attached, so multiplexing of control signals, and hence qubits sitting idle, is unavoidable. The waiting is even more pronounced in the quantum-computer-cluster proposals that surface from time to time.
Instead of ordinary brackets, one can also use the dot notation. I think it was used in Principia Mathematica or slightly later:
(A (B (C D)))
would be
A . B : C .: D
Essentially, the more dots you add, the more strongly the grouping binds; the precedence increases with the number of dots.
However, this is only a replacement for ordinary parentheses, not for these "reverse" ones discussed here. Maybe for reverse, one could use groups of little circles instead of dots: °, °°, °°°, etc.
I believe Peano dot notation works the other way ’round:
A . B : C :. D
would be, as I understand it, equivalent to:
((A B) C) D
The “general principle” is that a larger number of dots indicates a larger subformula.¹
What if you need to nest parentheses? Then you use more dots. A double dot (:) is like a single dot, but stronger. For example, we write ((1 + 2) × 3) + 4 as 1 + 2 . × 3 : + 4, and the double dot isolates the entire 1 + 2 . × 3 expression into a single sub-formula to which the + 4 applies.²
A dot can be thought of as a pair of parentheses, “) (”, with implicit parentheses at the beginning and end as needed.
In general the “direction” rule for interpreting a formula ‘A.B’ will be to first indicate that the center dot “works both backwards and forwards” to give first ‘A).(B’, and then the opening and closing parentheses are added to yield ‘(A).(B)’. The extra set of pairs of parentheses is then reduced to the formula (A.B).³
So perhaps one way of thinking about it is that more dots indicates more separation.
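To make the "more dots, more separation" reading concrete, here is a small sketch (my own illustration, not from Peano, PM, or the cited sources) that converts a space-separated dot-notation string into ordinary parentheses by splitting at the strongest dot group first and grouping to the left:

```python
import re

DOT_CHARS = {'.', ':'}

def dot_strength(token):
    """Number of 'dots' a separator stands for: '.'=1, ':'=2, ':.'=3, '::'=4, ..."""
    if token and set(token) <= DOT_CHARS:
        return token.count('.') + 2 * token.count(':')
    return 0  # not a separator

def dots_to_parens(formula):
    """Convert space-separated dot notation to parenthesized form,
    splitting at the largest dot group first and grouping to the left."""
    tokens = re.split(r'\s+', formula.strip())
    strengths = [dot_strength(t) for t in tokens]
    top = max(strengths)
    if top == 0:                     # no separators left: a plain sub-formula
        return ' '.join(tokens)
    # Split at the strongest separators and recurse into each piece.
    pieces, current = [], []
    for tok, s in zip(tokens, strengths):
        if s == top:
            pieces.append(current)
            current = []
        else:
            current.append(tok)
    pieces.append(current)
    parts = [dots_to_parens(' '.join(p)) for p in pieces]
    # Fold the pieces together left-to-right, parenthesizing the left side.
    result = parts[0]
    for part in parts[1:]:
        left = f'({result})' if ' ' in result else result
        result = f'{left} {part}'
    return result

print(dots_to_parens('1 + 2 . × 3 : + 4'))   # ((1 + 2) × 3) + 4
print(dots_to_parens('A . B : C :. D'))      # ((A B) C) D
```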
Oh, you are right: more dots indicate lower operator precedence (weaker binding), not the other way round. Though the explanations you cited seem confusing to me, apparently written by non-programmers.
Read the article above. There is a link at the top of this submission to an essay by Peter Norvig, arguing (correctly, in retrospect) that Chomsky's approach to language modelling is mistaken.
Obviously I did read the article, and I know how the HN site works.
I have a passing familiarity with the debate over Chomsky's theories of universal grammar etc. I didn't notice anything in the article that would cause disgust, and so I wondered what I was failing to understand.
If you have read many books by Chomsky, it might make you angry that you have wasted so much time on what turned out to be a fundamentally mistaken theory.
No, it is from 2011. The text mentions an event in 2011, so it couldn't have been written earlier, and the first HN submission [1] was in 2011, so it also wasn't written later.
The title should say (2011); otherwise the whole piece is confusing.
For example, if we took random samples of the population and tested them for marijuana usage, what percentage would test positive?
Next, this study only talks about marijuana testing; how many of the same group also tested positive for alcohol (or other impairing drugs)? Let's make up fake numbers and say 60% of total fatalities involved alcohol or other impairing drugs, and that the overlap between that group and marijuana use was 80%; then marijuana is rather insignificant.
We have to have all the details so we don't fall into a base rate fallacy.
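As a toy illustration of the point (the 60% and 80% figures are the made-up numbers above; the 30% and 5% figures are my own equally made-up assumptions, not data from the study):

```python
# Toy base-rate check with invented numbers, to show why the overlap with
# alcohol and the background rate matter before blaming marijuana.
thc_positive_in_fatalities = 0.30   # assumed share of fatalities testing THC-positive
alcohol_or_other_drugs     = 0.60   # made-up: fatalities involving alcohol/other drugs
overlap_with_alcohol       = 0.80   # made-up: THC-positives who also had alcohol/other drugs
thc_positive_in_population = 0.05   # assumed background THC-positive rate (the base rate)

thc_only = thc_positive_in_fatalities * (1 - overlap_with_alcohol)
print(f"Fatalities with alcohol/other drugs: {alcohol_or_other_drugs:.0%}")
print(f"Fatalities that were THC-only:       {thc_only:.0%}")
print(f"Background THC-positive rate:        {thc_positive_in_population:.0%}")
# If the THC-only share (6% here) is barely above the background rate (5%),
# the raw "tested positive for marijuana" figure says little about causation.
```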
Well, it's the wrong universe of analysis for making that claim, and there is no comparative measure of alcohol exposure in the same universe of analysis, so it also fails to provide a basis for any alcohol/THC comparison. So, no?
https://www.lesswrong.com/posts/tjH8XPxAnr6JRbh7k/hard-takeo...