On his original blog the author once said that he was considering making the title of his new blog an anagram, and that the closest options were Slate Star Codex and Astral Codex Ten. We should be grateful he didn't go with Trans Latex Coeds
I assume you're talking about the user requests regarding their data? Well, if the data is so anonymized that even the person can't prove who they are, then I'd say it falls under the provision that exempts anonymized data.
But in this case, I'm assuming the user must have a private key (for signing BAT transactions), so they could build a feature in the browser to sign messages using it.
The problem with signing transactions is that it could make the user's browsing history identifiable. The BAT-ledger documentation explains the principles of the transaction system.
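The challenge-response idea in the parent comment can be sketched in a few lines. This is a toy illustration with textbook RSA and tiny primes (the key pair, challenge string, and function names are all made up for the example; a real wallet would use a proper signature scheme like Ed25519) — the point is just that the key holder can sign a site-supplied challenge and anyone can verify it against the public key:

```python
import hashlib

# Hypothetical toy key pair (p=61, q=53): n = p*q, e public, d private.
# Textbook RSA, insecure, for illustration only.
N, E, D = 3233, 17, 2753

def sign(message: bytes, d: int = D) -> int:
    # Hash the challenge and sign the digest; only the key holder can do this.
    digest = int.from_bytes(hashlib.sha256(message).digest(), "big") % N
    return pow(digest, d, N)

def verify(message: bytes, signature: int, e: int = E) -> bool:
    # Anyone with the public exponent can check the signature.
    digest = int.from_bytes(hashlib.sha256(message).digest(), "big") % N
    return pow(signature, e, N) == digest

challenge = b"prove you own this wallet: nonce-42"
sig = sign(challenge)
print(verify(challenge, sig))             # True
print(verify(b"tampered message", sig))   # almost certainly False
```

The privacy concern in the parent comment shows up here too: every signature is linkable to the same public key, so signing many challenges ties them all to one identity.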
This article leans too much on the current state of the art to extrapolate towards possible futures, and it shouldn't. AI is dangerous, sure, but the examples we're getting here are a DOTA2 team that lost to humans in a modified version of the game, a blog post that doesn't adequately explain what the heck was accomplished, and Atari bots that buttonmashed their way into exploits that let them directly change registers in the game (all very cool but not super relevant).
It kind of buries the lede that we're one or two technical achievements away from the apocalypse (learning from unlabelled training sets, say) and instead starts with "AI sentencing guidelines are biased against black defendants", as if that exists in the same universe as Skynet IRL. IME the people who are even worried about AI risks at all are overwhelmingly in the "what if it disenfranchises minorities even more?" camp, and only a tiny fraction are concerned about having the atmosphere packed with graphite superconductors in the next fifteen years. Any article like this should probably start off with the assumption that the audience thinks AI is that cute thing in their iPhone that dials the local pizza joint instead of your mother when you try to speak to it. Whether or not AI will eventually let Donald Trump sustain himself eternally on a golden throne as the God Emperor of America is a red herring and needs to be deflected, like, immediately, or else people get on their own hobby horse about how AI represents a turning point in the bendy straws community. All the right notes are in here but they're at the end for some reason.
How many companies would this work at? Seems like more than a few might take exception to a potential employee trying to circumvent their normal hiring pipeline.
Gwern is an independent researcher who studies... a lot of things. He's documented some of the history of the dark web, blogs about his nootropics experience and, perhaps most notably, predicted that bitcoin might reach $10,000... in 2011.
> The law of gravity is one example, since nobody knows where gravity actually comes from.
Please stop this. While I agree with your overall point, we know what causes gravity (the uneven curvature of space due to the distribution of mass). You can find this out by googling "what causes gravity". It's not a mystery anymore.
That is just the mathematical description of gravity according to general relativity. We don't have a theory for how the uneven curvature of space happens and we don't have a quantum mechanical theory for gravity either (gravitons are hypothetical, not proven).
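For reference, the mathematical description in question is the Einstein field equations, which relate spacetime curvature (left side) to the distribution of mass-energy (right side) — a relation the equations posit rather than explain:

$$G_{\mu\nu} + \Lambda g_{\mu\nu} = \frac{8\pi G}{c^4}\, T_{\mu\nu}$$

Here $G_{\mu\nu}$ is the Einstein curvature tensor, $\Lambda$ the cosmological constant, and $T_{\mu\nu}$ the stress-energy tensor.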
You are right, but my point is that general relativity gives us enough of a causal model for gravity that throwing up our hands and saying "It's a mystery" is unacceptable.
We could split hairs all day, but a more detailed mathematical model like relativity is still just a model. We don't know with certainty why it works, what the underlying mechanism is. I would rather say it's unacceptable to tell people what questions they can't ask than to criticize a particular model.
It is not unacceptable to say we don't know with certainty -- it's faithful to science. You are correct that we have a mathematical "causal" model, but that model is entirely observational. We know what happens with certainty in virtually any gravitational situation, but we remain very uncertain about the underlying mechanism of action. It is faithful and mature to recognize this.
Teixobactin, I assume? Their new method of parallel testing promising proto-antibiotics is really neat, but given the ratio of successes to failures, I wouldn't hold out much hope for a second golden age.
Possibly irrelevant correlation: This pattern is also observed in graphics techniques and programming styles.
Yet, the incremental quantitative advances continue to accumulate to become qualitative differences.
While modifications of existing classes of antibiotics are still critically important and useful, the big problem is that "new antibiotics that conform to established classes are often subject to at least some of the same resistances observed in previous members of the class."[0] They help in that they buy time, but development of novel classes of antibiotics is what's necessary to buy more time. And we're going to need a lot of them.[1]
In engineering or any scientific field, incremental progress is clearly still progress. For most other kinds of drugs, time doesn't work against their effectiveness. Texts describe the use of aspirin precursors, such as willow teas, dating back over four thousand years to ancient Sumer. Salicylates haven't stopped being effective since then.
The trouble with antibiotics is that resistance inevitably develops over time even if we manage to curb their misuse. It isn't enough to develop new antibiotics, novel or otherwise; to keep the "miracle of antibiotics" alive, we need to continually develop novel ones.
That hits pretty close to what I think I am seeing here.
There are a few assumptions built into Eroom's "law" which I think we have learned to avoid when looking at Moore's "law".
One is that we will not come up with an alternate or more effective way to address the problems drugs are currently addressing. Another is that we will not come up with a drastically cheaper or more effective way to invent novel drugs. Still another is that we will not invent a drastically cheaper or more effective way to verify a drug's usefulness and safety.
With Moore's observation, I think people have learned to assume that any observable slowdown will be corrected by the invention of some previously unimaginable technique or machine that will keep things on track. (Nothing really makes this have to be true, but we seem to think of it that way.)
Similarly, any number of future inventions could completely reverse Eroom's observation.
As we get better at editing genomes, making nanomachines, and increasing the resolution of 3d printers, previously impossible techniques may suddenly make it easy to invent novel drugs, address the same issues without drugs, or change the game in any number of other hard-to-predict ways.
Given these increasingly plausible possibilities, I am inclined to see Eroom's "law" as the mere observation of a relatively short lived trend in human history.