
15-440 with Andersen and Bryant was one of my favorite classes at CMU. I took it the first time it was taught in Go, and I fell in love with the language and still miss it. I didn't find it that difficult to learn. As he noted, not having to write my own data marshalling and RPC code let me focus more on the core algorithms.
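For anyone curious what that looks like, here is a minimal net/rpc sketch. The LockServer type, its Acquire method, and the port are all invented for illustration; they are not from the course:

    package main

    import (
        "fmt"
        "log"
        "net"
        "net/rpc"
    )

    // Args and LockServer are made-up names; the point is that gob
    // encoding and decoding of Args happens for free.
    type Args struct {
        Key string
    }

    type LockServer struct{}

    // Acquire follows the net/rpc method shape: (args, *reply) error.
    func (s *LockServer) Acquire(args Args, granted *bool) error {
        *granted = true // real logic would track lock state
        return nil
    }

    func main() {
        if err := rpc.Register(new(LockServer)); err != nil {
            log.Fatal(err)
        }
        ln, err := net.Listen("tcp", "localhost:9999")
        if err != nil {
            log.Fatal(err)
        }
        go rpc.Accept(ln) // serve connections in the background

        client, err := rpc.Dial("tcp", "localhost:9999")
        if err != nil {
            log.Fatal(err)
        }
        var granted bool
        // No hand-written marshalling: Args crosses the wire gob-encoded.
        if err := client.Call("LockServer.Acquire", Args{Key: "a"}, &granted); err != nil {
            log.Fatal(err)
        }
        fmt.Println("lock granted:", granted)
    }

The nice part is the client.Call line: your argument struct goes over the wire without you writing a single byte of serialization code.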


If GitHub's cert is revoked or expires, you'll have to manually go grab the new one. What you actually want to trust is the issuer of the cert.
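Concretely, that means pinning the issuing CA rather than the leaf cert. A rough sketch in Go, where ca.pem is a placeholder for whichever issuer cert you decide to trust:

    package main

    import (
        "crypto/tls"
        "crypto/x509"
        "log"
        "os"
    )

    func main() {
        // Trust the issuing CA, not the leaf: when the server renews
        // or replaces its certificate, verification still succeeds as
        // long as the new cert chains to the same CA.
        caPEM, err := os.ReadFile("ca.pem") // placeholder path
        if err != nil {
            log.Fatal(err)
        }
        pool := x509.NewCertPool()
        if !pool.AppendCertsFromPEM(caPEM) {
            log.Fatal("failed to parse CA certificate")
        }

        conn, err := tls.Dial("tcp", "github.com:443", &tls.Config{
            RootCAs: pool, // validate the chain against our pinned CA only
        })
        if err != nil {
            log.Fatal(err)
        }
        defer conn.Close()
        log.Println("verified chain to pinned CA")
    }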


I've seen so many lists like this, and they always include the same books... Code Complete, Prag Prog, SICP, Design Patterns, Refactoring, etc...


Seems more like a novelty item.


This presentation has a few examples of real-world uses: http://golang.org/doc/talks/io2011/Real_World_Go.pdf

Heroku and Atlassian being the notable names.

Also interesting to note, the 15-440 Distributed Systems class at CMU is being taught in Go this semester...


Apple's goal has never been to make jewelry. Apple's goal is to make products that they themselves want to use.


Their website has a better, less ambiguous description of how it works: http://gallantlab.org/

The description of the first video: "The left clip is a segment of the movie that the subject viewed while in the magnet. The right clip shows the reconstruction of this movie from brain activity measured using fMRI. The reconstruction was obtained using only each subject's brain activity and a library of 18 million seconds of random YouTube video that did not include the movies used as stimuli. Brain activity was sampled every one second, and each one-second section of the viewed movie was reconstructed separately."

So they gathered a lot of fMRI data from people watching several hours of YouTube videos (the training set). They then used this data to train some sort of machine learning model. The pictures you see in the article come from running the model on a test set that does not contain any of the videos from the training set.
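A toy version of that train/test discipline, sketched in Go. The 1-nearest-neighbor lookup is a stand-in for whatever model they actually fit, and every number below is invented:

    package main

    import (
        "fmt"
        "math"
    )

    // A library clip and the brain response it evoked, reduced to a
    // short feature vector. Real responses would be long voxel vectors.
    type Clip struct {
        Name     string
        Response []float64
    }

    func cosine(a, b []float64) float64 {
        var dot, na, nb float64
        for i := range a {
            dot += a[i] * b[i]
            na += a[i] * a[i]
            nb += b[i] * b[i]
        }
        return dot / (math.Sqrt(na)*math.Sqrt(nb) + 1e-12)
    }

    // reconstruct returns the training clip whose response best matches
    // the held-out response: a 1-nearest-neighbor stand-in for the model.
    func reconstruct(train []Clip, test []float64) Clip {
        best, bestScore := train[0], math.Inf(-1)
        for _, c := range train {
            if s := cosine(c.Response, test); s > bestScore {
                best, bestScore = c, s
            }
        }
        return best
    }

    func main() {
        // Crucially, the test response comes from a clip that is NOT
        // in the training set.
        train := []Clip{
            {"sunset", []float64{0.9, 0.1, 0.3}},
            {"crowd", []float64{0.2, 0.8, 0.5}},
        }
        unseen := []float64{0.85, 0.2, 0.25}
        fmt.Println("best match:", reconstruct(train, unseen).Name)
    }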


So in essence, the researchers are intercepting network traffic from the visual cortex while a subject is given a certain stimulus, then matching that traffic signature with signatures of similar stimuli. Which is to say, they're doing some very interesting traffic analysis, but aren't actually decoding any of the information itself.


Yes, but it is still a brilliant, brilliant hack. Reminds me a little of Norvig's observation that having enormous amounts of data changes everything.

He was referring to AI algorithms, but seriously, who would have thought that having YouTube would lead to this?


On the other hand, if the traffic analysis is done well enough, decoding the information itself is unimportant. Given the brain's pretty strong region-specific activity, which parts of the brain light up is a pretty good correlate of the data in the stimulus.

To borrow and extend an example from Ender's Game, if you know what the train schedules are, you can figure out troop movements; even if you don't necessarily know which particular unit is going to which place, you still have a pretty good idea of what the military is gearing up for.


They are decoding it using their pre-computed lookup table. This is a perfectly valid approach, since the fMRI signal is both slow and low resolution. It would be awesome to be able to record individual neuron firing en masse and in vivo, but we are not there yet.
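I'm guessing at the mechanics here, since the video description only mentions the 18-million-second library, but a lookup like that could plausibly rank the library by how well each clip's predicted response matches the measured one, then average the top few. A sketch with invented numbers:

    package main

    import (
        "fmt"
        "sort"
    )

    // One row of the precomputed table: a clip's predicted brain
    // response and a stand-in for its pixels. All numbers are invented.
    type Entry struct {
        Name     string
        Response []float64
        Frame    []float64
    }

    func dist2(a, b []float64) float64 {
        var d float64
        for i := range a {
            d += (a[i] - b[i]) * (a[i] - b[i])
        }
        return d
    }

    // decode ranks the library by distance to the measured response and
    // averages the k closest clips' frames into one reconstruction.
    func decode(library []Entry, measured []float64, k int) []float64 {
        sort.Slice(library, func(i, j int) bool {
            return dist2(library[i].Response, measured) < dist2(library[j].Response, measured)
        })
        out := make([]float64, len(library[0].Frame))
        for _, e := range library[:k] {
            for i, v := range e.Frame {
                out[i] += v / float64(k)
            }
        }
        return out
    }

    func main() {
        library := []Entry{
            {"beach", []float64{0.9, 0.1}, []float64{1, 0, 0}},
            {"street", []float64{0.2, 0.8}, []float64{0, 1, 0}},
            {"forest", []float64{0.8, 0.2}, []float64{0, 0, 1}},
        }
        fmt.Println(decode(library, []float64{0.85, 0.15}, 2))
    }

Averaging the top matches would also explain why the reconstructions in the article look like blurry composites rather than any single clip.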


What if the traffic is the information?


Current CMU student here. All students are required to take 15-123 (Introduction to C & Unix), 15-213 (Introduction to Systems), and another low-level systems course of their choice (almost all of which, to my knowledge, require C and/or x86 assembly).

