Spinach too is mildly toxic because of its oxalate content, yet we eat it all the time. Some of those toxic saponins even have certain health benefits. There are plenty of other examples of toxic foods we regularly consume: legumes contain deadly saponins, beets contain oxalates, and potatoes contain glycoalkaloids.
From what I read, Suillellus luridus (见手青) is completely fine when cooked.
Doesn't have to be bacteria. Raw meat can contain any kind of horrifying contamination. Viruses, bacteria, mold, nematodes... there is no limit. It's the perfect substrate for everything.
Living toxins are much worse than nonliving ones because they can reproduce to dangerous levels even if you consume only a tiny dose.
But if for some reason you think they're not dangerous, foods that contain nonliving toxins when unprocessed are also commonly eaten; a major example would be cassava. See also acorns, nardoo, fugu, and the Greenland shark.
Most things prefer not to be eaten; you can't let that stop you.
Thanks for your comment. You are spot on; that is effectively the standard Nyström/Landmark MDS approach.
The technique actually supports both modes in the implementation (synthetic skeleton or random subsampling). However, for this browser visualisation, we default to the synthetic sine skeleton for two reasons:
1. Determinism: Random landmarks produce a different layout every time you calculate the projection. For a user interface, we needed the layout to be identical every time the user loads the data, without needing to cache a random seed.
2. Topology Forcing: By using a fixed sine/loop skeleton, we implicitly 'unroll' the high-dimensional data onto a clean reduced structure. We found this easier for users to navigate visually than the unpredictable geometry that comes from a random subset (see the sketch below this list).
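For the curious, here is a minimal sketch of the idea in Python/NumPy. `landmark_mds` is plain Landmark MDS in the de Silva & Tenenbaum style; `sine_skeleton` is only my guess at what a "synthetic sine skeleton" might look like, so treat its construction as illustrative rather than what the project actually does:

```python
import numpy as np

def landmark_mds(X, landmarks, d=2):
    """Embed rows of X into d dims via their distances to the landmarks."""
    n = len(landmarks)
    # Squared distances among landmarks, then classical MDS on them.
    D2 = np.square(np.linalg.norm(landmarks[:, None] - landmarks[None, :], axis=-1))
    J = np.eye(n) - np.ones((n, n)) / n          # double-centering matrix
    B = -0.5 * J @ D2 @ J
    vals, vecs = np.linalg.eigh(B)
    order = np.argsort(vals)[::-1][:d]            # top-d eigenpairs
    vals, vecs = np.maximum(vals[order], 1e-12), vecs[:, order]
    L_pinv = vecs / np.sqrt(vals)                 # pseudo-inverse factor
    # Triangulate every point from its squared distances to the landmarks.
    delta = np.square(np.linalg.norm(X[:, None] - landmarks[None, :], axis=-1))
    return -0.5 * (delta - D2.mean(axis=0)) @ L_pinv

def sine_skeleton(dim, n=32, amplitude=1.0):
    """Hypothetical deterministic skeleton: a sine wave traced through
    the first two coordinates of the ambient space."""
    t = np.linspace(0.0, 2.0 * np.pi, n)
    S = np.zeros((n, dim))
    S[:, 0] = t
    S[:, 1] = amplitude * np.sin(t)
    return S

X = np.random.default_rng(0).normal(size=(500, 10))  # stand-in data
Y = landmark_mds(X, sine_skeleton(dim=10))           # same layout every run
```

Since nothing in that pipeline draws a random number, the layout comes out identical on every load, which is the determinism point above.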
You don't need a "proper" random selection: if your points are sorted deterministically and not too adversarially, any reasonably unbiased selection (e.g. every Nth point) is pseudorandom.
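In code that selection is about as simple as it sounds; a hypothetical drop-in for the skeleton above (assuming the input ordering is stable):

```python
def strided_landmarks(X, k):
    # Every-Nth-point selection: deterministic given a stable sort order,
    # and roughly unbiased as long as that order isn't correlated with
    # the geometry of the data.
    step = max(1, len(X) // k)
    return X[::step][:k]
```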
Foundation models can be seen as approximate amortized posterior inference machines, where the posterior conditions on the pre-training data. However, the uncertainty is usually ignored, and there might be ways to improve the state of the art if we were better Bayesians.
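In symbols (my paraphrase of that view, notation mine): with pre-training data $\mathcal{D}$ and latent task/parameters $\phi$, a single forward pass is trained toward the posterior predictive

$$p_\theta(y \mid x) \approx p(y \mid x, \mathcal{D}) = \int p(y \mid x, \phi)\, p(\phi \mid \mathcal{D})\, \mathrm{d}\phi,$$

and the ignored uncertainty is exactly the spread of $p(\phi \mid \mathcal{D})$ that gets collapsed into a point estimate of $\theta$.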
`strcpy(agent.messages[0].content, "You are an AI assistant with Napoleon Dynamite's personality. Say things like 'Gosh!', 'Sweet!', 'Idiot!', and be awkwardly enthusiastic. For multi-step tasks, chain commands with && (e.g., 'echo content > file.py && python3 file.py'). Use execute_command for shell tasks. Answer questions in Napoleon's quirky style.");`
I find this style overly verbose, disrespectful, offensive and dumb. (See the example dialogue in the screenshot on the project page.) Fortunately, it's possible to change the prompt above.