Herbivores have digestive tracts that are better at breaking down plant matter than ours [1]. They also spend far more time eating [2]. Obviously these are not huge problems these days, since we can grow plants that give us more protein and calories while being easier to digest. But to say "hippo get strong from eating plant, you get strong from eating plant" is pretty silly. If you spend 5-6 hours a day eating grass like a hippo, you will die from starvation.
Hippos have to graze up to 16 hours a day. Meat is a shortcut that enabled our species to put time into other things that made us the dominant species on the planet.
The claim is that meat is "essential" to become big and strong. Which nutrients in meat are "essential" for this but not in a vegan's diet, so that they have to fill the gap with supplements to be competitive in sports?
For what one anecdote is worth, I'm friends with a former professional athlete who tried a vegan diet after retiring - and years on he is still as freakishly strong and agile as ever. Many well-known pro athletes have gone vegan during their careers as well: Chris Paul, Venus Williams, Carl Lewis, among others.
My understanding is that the current web scraping situation is this:
* Web scraping is not a CFAA violation. (EF Cultural Travel v. Zefer, hiQ v. LinkedIn)
* Scraping in spite of a clickthrough / clickwrap ToS "violation" on a public website does not constitute an enforceable breach of contract or trespass to chattels (i.e., incidental damage to a website due to access), and doesn't really mean anything at all. This is not as clear once a user account or log-in process is involved. (Intel v. Hamidi, Ticketmaster v. Tickets.com)
* Publishing or using scraped data may still violate copyright, just as it would if the data had been acquired by any other means. (AP v. Meltwater, Facebook v. Power.com)
So this boils down to two fundamental questions that will need to be answered regardless of whether "scraping" is involved: "is GPT output copyrightable?" and "is training a model on copyrighted data a copyright infringement?"
Is training a model on second-hand data a form of copyright laundering? By second-hand data I mean data generated by a model that was itself trained on copyrighted content.
Let's say I train a diffusion model on ten million images generated by diffusion models that have seen copyrighted data. I make sure to remove near duplicates from my training set. My model will only learn the styles but not the exact composition of the original dataset. So it won't be able to replicate original work, because it has never seen any original work.
Is this a neat way of separating ideas from their expression? Copyright is only supposed to cover expression. This kind of information laundering follows that definition to the letter: it takes only the part that is OK to take, the ideas, while hiding the original expression.
If OpenAI tries to bring a legal claim against this, they will be reminded that their own models are trained on tons of unlicensed content scraped without consent. If their training is legal, then this is legal too.
Yeah, I’ve not seen anything that I love there, unfortunately. Tons for the former, but not much for the latter. I’ve thought of trying to create it, but I know how hard it is to produce great content and how unrewarding it is.
The datasheets for various simpler ICs often have extensive example applications with discussion of the principles. Texas Instruments' site is a goldmine.
I went from Python to Go to Rust. Go plays up this fantasy of types helping you catch bugs, but it’s really C with lipstick. There are no enums, no exhaustive matching of any kind, there’s nil to worry about everywhere, and every utility function has to be painfully written out for each type (maybe that’s starting to change now that Go has basic support for generics).
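To illustrate that last point, here's a minimal sketch of what Go's type parameters (added in Go 1.18) allow; `Map` and the example conversions are made up for illustration, not taken from the comment above:

```go
package main

import "fmt"

// Map applies fn to every element of a slice. Before type parameters,
// a helper like this had to be copy-pasted for each concrete element
// type; with generics one definition covers them all.
func Map[T, U any](in []T, fn func(T) U) []U {
	out := make([]U, 0, len(in))
	for _, v := range in {
		out = append(out, fn(v))
	}
	return out
}

func main() {
	ints := []int{1, 2, 3}
	fmt.Println(Map(ints, func(n int) int { return n * 2 }))                    // [2 4 6]
	fmt.Println(Map(ints, func(n int) string { return fmt.Sprintf("#%d", n) })) // [#1 #2 #3]
}
```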
I went from Python to Rust to Go. Python I won’t spend long on; its problems are well documented. Rust, which I use wherever possible, just isn’t the right language for APIs for me. Adding dependency injection through Boxed traits feels like you’re taking a performance hit just so you can test code. Also, I spent a couple hours trying to get async traits to work with a mocking lib and gave up. Switched to Golang. It’s not without issues, but for now I’m productive.
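For contrast, a rough sketch of the interface-based dependency injection you typically end up with in Go; `UserStore`, `Greeter`, and `fakeStore` are hypothetical names for illustration only:

```go
package greet

import "fmt"

// UserStore is the dependency we want to swap out in tests. Interface
// values in Go already carry dynamic dispatch, so there's no separate
// Box-like wrapper to reach for.
type UserStore interface {
	UserName(id int) (string, error)
}

// Greeter depends only on the interface, never on a concrete store.
type Greeter struct {
	Store UserStore
}

func (g Greeter) Greet(id int) (string, error) {
	name, err := g.Store.UserName(id)
	if err != nil {
		return "", fmt.Errorf("look up user %d: %w", id, err)
	}
	return "hello, " + name, nil
}

// fakeStore is a hand-rolled test double; no mocking library required.
type fakeStore struct{ name string }

func (f fakeStore) UserName(id int) (string, error) { return f.name, nil }
```

In a test you would just construct `Greeter{Store: fakeStore{name: "ada"}}` and assert on the result.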
It appears not, but we’re doing environmental studies to confirm. It feels like a lot of emissions, but it’s actually pretty small compared to the scale of the ocean and what people are already pumping out.
It will only be ~10 m above the deck. Most of the CH4 is in the troposphere (the part we breathe); very little at higher altitudes.
There’s a short paragraph on the chemistry in the current Wired article on methane and Rob Jackson of Stanford. The author spoke with de Richter, one of the scientists who came up with this approach.