I sold my first edition almost 10 years ago to fund (partially) my unemployment during a career transition to data science. A couple of years ago, my brother bought me a nice reissue for Christmas without ever knowing I once owned a copy. Odd how some things will make their way back to you in the world.
Seeing half of an AR LLM's output tokens go to generating a predefined JSON schema bothers me so much. I would love to have an option to use diffusion for infilling.
By that logic, G Suite should be funneled through GCP.
Also, are you sure you meant to mention Microsoft? Microsoft has this Copilot thing that they will gladly sell you, with generally inoffensive commercial terms, through more channels than you can shake a stick at. Got a $4 GitHub for Teams subscription? Add $20 or so and you will be swimming in Copilot outputs, and all you have to do is check the checkbox.
Got a free Gmail account? Add $20 or so and you'll be swimming in Gemini outputs. Yet both companies also have a cumbersome onboarding process if all you want to do is get an API token. So yeah, quite similar!
Yeah, if the goal of the article were to convince Windows users to switch to Linux, then Ubuntu would provide as frictionless an install as Windows. Since the author chose CachyOS, of course there are going to be some important steps during installation that need forethought and extra software to handle hardware issues. After all, CachyOS is based on Arch Linux and inherits its minimal mindset. But the article about switching from Windows to Ubuntu has already been written a thousand times.
You are wrong. There have been many reproductions. People don't study it because there is no known mechanism of action, and so it's fringe.
Jessica Utts, a well respected statistician
> Despite Professor Hyman's continued protests about parapsychology lacking repeatability, I have never seen a skeptic attempt to perform an experiment with enough trials to even come close to insuring success. The parapsychologists who have recently been willing to take on this challenge have indeed found success in their experiments, as described in my original report.
Before you can define statistical significance, you have to clearly define the success criteria. From what I see, remote viewing produces vague results, so some amount of human interpretation is necessary. What counts as a "hit"? If you look at "verified" examples from the social-rv site GP mentioned, some of them match only in an abstract sense, but are still counted as a success. The more reliable thing would be to remote view a coin flip and have the person say heads or tails, but that's not how the Stargate experiments were defined, and I haven't been able to find any trials like this.
Edit: Actually I did find at least one experiment-ish thing, though it's more precognition than remote viewing: using RV to predict crypto coin price trends [1]. It seems to show 53 correct predictions and 50 incorrect ones, which is well within statistical chance.
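For what it's worth, "well within statistical chance" can be checked with an exact binomial tail test. A quick sketch (the helper name and the one-sided framing are mine, not from the linked experiment):

```python
from math import comb

def binom_sf(k, n, p=0.5):
    """Exact upper-tail probability P(X >= k) for X ~ Binomial(n, p)."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

hits, misses = 53, 50
n = hits + misses
# Chance of getting 53 or more correct out of 103 by pure guessing
p_value = binom_sf(hits, n)
print(f"P(>= {hits}/{n} by chance) = {p_value:.2f}")
```

The tail probability comes out far above any conventional significance threshold, so 53/103 is indistinguishable from guessing.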
Also, it seems the social-rv site GP linked will eventually have a prediction-market type thing for remote viewing real-world events. Now that's interesting, and they cleverly avoid it devolving into a traditional prediction market by introducing indirection: two images are arbitrarily assigned to the outcomes (true/false), and the person RVs the image without knowing which outcome that image represents.
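The indirection scheme could look something like this (purely my own illustrative sketch; the site's actual implementation is unknown, and the image names are made up):

```python
import random

def assign_targets(image_a: str, image_b: str, rng=None) -> dict:
    """Randomly map two target images to the two possible outcomes.

    The viewer only ever sees a single describe-the-target task; the
    outcome -> image mapping stays with the judge until resolution,
    so the viewer can't just bet on the event directly.
    """
    rng = rng or random.Random()
    outcomes = [True, False]
    rng.shuffle(outcomes)
    return {outcomes[0]: image_a, outcomes[1]: image_b}

mapping = assign_targets("lighthouse.jpg", "volcano.jpg")
# After the real-world event resolves to `result`, the judge scores
# the viewer's description against mapping[result].
```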
No, she isn't. She's a statistician, but mostly known for being on the Star Gate review panel, and for close associations with parapsychology organizations.
She was already involved in parapsychology, having coauthored papers with the director of Star Gate (a parapsychologist himself) before becoming part of the review panel! You cannot have vested interests in the phenomenon being real if you're going to judge it impartially. You cannot have a relationship with one of the key personnel in the project you're reviewing, and especially not a relationship specifically about the same kind of things you're supposed to review. This is a serious flaw; she shouldn't have been part of the panel.
> There have been many reproductions
Like which ones? A reproduction must be done independently, by scientists without the same sponsors and vested interests. Can you point to these reproductions?
By the way, Star Gate was canceled with the conclusion that the experiments were inconclusive. Had there been reproductions, surely the conclusions would have been different?
If you consider the extent to which our economy has become financialized, then you see these decisions have little to do with providing a product for customers but rather a stock for investors.
Bari wisely points out that if the deportees are being tortured, then there must be a secretly good reason for it if you dig a little deeper. She suggests asking Stephen Miller.
General-purpose LLMs aren't very good at generating bounding boxes, so in that context this is actually considered decent performance for certain use cases.
Yeah, that's bothered me as well. Andrej Karpathy does this all the time when talking about the human brain and drawing analogies to LLMs. He makes speculative statements about how the human brain works as though they were established fact.
Andrej does use biological examples, but he's a lot more cautious about biomimicry, and often uses biological examples to show why AI and bio are different. Like he doesn't believe that animals use classical RL because a baby horse can walk after 5 minutes which definitely wasn't achieved through classical RL. He doesn't pretend to know how a horse developed that ability, just that it's not classical RL.
A lot of Ilya's takes in this interview felt like more of a stretch. The emotions-and-LLMs argument felt kind of like "let's add feathers to planes because birds fly and have feathers". I bet continual learning is going to have some kind of internal goal beyond RL eval functions, but these speculations about emotions just feel like college dorm discussions.
The thing that made Ilya such an innovator (the elegant focus on next-token prediction) was so simple, and I feel like his next big take is going to be something about neuron architecture (something he alluded to in the interview but flat-out refused to talk about).
I also tried that in the past with poor results. I just tried it this morning with nano banana pro and it nailed it with a very short prompt: "Repaint the house white with black trim. Do not paint over brick."