It's an incredibly hard workout. Depending on the style, you're using a lot of muscles that you don't typically exercise, so your body naturally wants to give up after a while. It took me probably 6-8 sessions until I could reliably do one myself.
i am a (former) neuroscientist and breathwork facilitator (mostly conscious connected breath) — AMA.
The effects of decreased CO2 concentration on vasoconstriction (and also alkalosis-induced tetany, i.e. your muscles cramping, which happens a lot during breathwork) are well known [1], but I've never seen them quantified in such a clear way. It's cool to see mainstream science give it a closer look!
This is a bit off topic, but what do you think about people doing nitrous recreationally? It's always concerned me that people are inhaling close to pure nitrous oxide and holding it in. I've always wondered if this creates damaging low-oxygen conditions without the normal reflexes kicking in, and if this can cause brain/neuronal damage.
I believe in medical settings it's delivered in a mixture with O2, but in recreational settings it's usually inhaled directly.
I see a lot more talk about the risks of vitamin B12 depletion, and not much talk about O2 deprivation, so not sure if everyone else is crazy or if it's me who is the crazy one.
I'm not one to tell people not to have fun, but I also lost a friend to respiratory failure after prolonged nitrous abuse, and had more than one start having auditory hallucinations. I think it's waning in popularity compared to 10 years ago, but maybe I'm just out of touch with what the kids get high on these days.
In zero tolerance Sweden, nitrous is oddly perfectly legal. In fact, I recently got a cheerful flyer from our municipal waste disposal company announcing that empty 1L nitrous bottles can now be left with common household hazardous waste.
I was being treated with nitrous medically. I asked the anaesthesiologist about how it works recreationally and his answer was that yes, it was mostly just hypoxia.
This is easily falsified by a cursory internet search about the physiological mechanism behind nitrous oxide's effects. It is appalling that a medical professional would so confidently give you an uneducated, crackpot answer. The exact same mechanism which knocks you out gives you euphoria at lower doses.
If someone holds their breath long enough to cause hypoxia when inhaling nitrous oxide, they have other problems. You can easily hold your breath 1-2 minutes while sitting on a couch without experiencing hypoxia. If you're experiencing euphoria as strong as what nitrous oxide causes from hypoxia, you're basically about to die.
This is why you shouldn't trust experts on stuff outside their speciality; this answer is just wrong.
You don't even need to research it: the lived experience of being in a dentist's office with mixed oxygen and nitrous produces the recreational effects. If it were mostly hypoxia, having oxygen mixed in would greatly diminish the "recreational" effect.
I mean, it is true that most people doing it recreationally are giving themselves mild to severe hypoxia, but that doesn't mean the effect is caused by hypoxia.
You don't get half the oxygen. You get as much oxygen as you would during normal calm breathing, which is pretty shallow. Basically, you take a normal breath, which is about 50% of lung volume, and then fill your lungs up to max capacity with N2O.
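The back-of-envelope arithmetic above can be sketched with round numbers. All of these figures are assumptions for illustration (a ~6 L total lung capacity, lungs half full after a normal breath, air at 21% O2, topped up with pure N2O), not values from the comment:

```python
# Toy model of the parent comment's claim: the *absolute* amount of O2 on
# board matches a normal breath, even though the inspired O2 *fraction* halves.
TOTAL_CAPACITY_L = 6.0                       # assumed total lung capacity
air_volume_l = 0.5 * TOTAL_CAPACITY_L        # normal breath: lungs ~50% full of air
o2_in_air = 0.21                             # O2 fraction of ordinary air
n2o_volume_l = TOTAL_CAPACITY_L - air_volume_l  # top-up with pure N2O

o2_volume_l = air_volume_l * o2_in_air       # absolute O2 in the lungs
o2_fraction = o2_volume_l / TOTAL_CAPACITY_L # O2 fraction of the full mixture

print(f"absolute O2: {o2_volume_l:.2f} L")       # same as a normal half-full breath
print(f"inspired O2 fraction: {o2_fraction:.1%}") # ~10.5%, half of air's 21%
```

So under these assumed numbers, the total oxygen inhaled is unchanged; only its concentration in the mixture drops.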
But you know what? Pulse oximeters are pretty cheap nowadays. Try it for yourself.
As a neuroscientist and breathwork facilitator, do you think there is any harm in intentional apnea (e.g. free diving, static holds, etc.)?
At what point does cell damage (not necessarily death) kick in? As someone involved in these sports, I operate under the assumption that any damage would start after loss of consciousness. For example, if I hold my breath, even for 4 or 5 minutes, but don't pass out, that is an indication I am still in the range of safe practice. Anecdotally, I know many people who have spent their lives doing breath holds, and they don't seem any worse for wear.
Are there any high quality studies that look at potential brain damage prior to loss of consciousness?
Does this help? I am a physicist with an interest in these subjects and have always been wary of breathwork because of tetany and the following studies. What do experts closer to this field make of these?
Ref. [2] is especially concerning to me in pushing in any sort of static apnea training or breathwork: "The time to complete the interference card test was positively correlated with maximal static apnea duration (r = 0.73, p < 0.05) and the number of years of breath-hold diving training (r = 0.79, p < 0.001)."
So the tetany in breathwork is generally caused by the decreased CO2 concentration inducing respiratory alkalosis (i.e. the blood gets more alkaline, with a pH > 7.5). This in turn causes the protein albumin to bind calcium more strongly and not release it as it's supposed to, and calcium is an important regulator of voltage-gated ion channels in neurons.
Long story short, your neurons get just a tad more excitable, because the calcium that usually acts like the bouncer at the hot club is busy snogging albumin. That has very little effect in most places in the body, but in the motor neurons that control your smallest muscles (face and hands), and in the sensory neurons under your skin, it does move the needle: the muscles contract and your skin feels tingly, both from exactly the same cause.
This is the reason people with epilepsy should _NOT_ do breathwork, but for otherwise healthy adults there are no negative long-term effects of respiratory alkalosis: a few normal breaths to rebalance your CO2 and the symptoms will go away.
Could you please explain more about Ref. [2]? What does it mean beyond what is in the article, and how serious is it?
"These findings suggest that breath-hold diving training over several years may cause mild, but persistent, short-term memory impairments"
Can you tell us more about recreational nitrous oxide, and when does the "damage" occur?
Is it the same with Wim Hof?
(Like, for example, at an oximeter reading of 80 SpO2 or below?)
With Wim Hof / nitrous oxide I got to around 80 SpO2. The interesting thing is I got this feeling of fighting to hold my breath, but below 90 I kinda needed to convince myself that I should breathe, in both cases.
Being a non-expert, I can't attempt to speculate on your questions in good faith! All I was suggesting to the parent is that perhaps these articles offer evidence of damage being done without pushing to the point of unconsciousness. Feedback from an expert is definitely welcome.
In a medical setting, where I am more familiar with it, tetany is never good. Personally it is also wildly uncomfortable. Perhaps it's fine and somehow pushing through it is part of the "experience", but if I want an altered consciousness I'll stick to a psilocybin-based retreat every 5-10 years and my meditation practice in between :D.
Hyperspell | Backend (Python), NLP, and Data Engineers | https://hyperspell.com | SF + remote in US timezones
Hyperspell is building RAG-as-a-service, allowing developers to build AI apps in minutes, not months — think Plaid for unstructured data. There are many great products for people who want to build their own RAG pipeline. Hyperspell is there for those who don’t.
We have built machine learning & NLP products since way before it was cool. You will join as one of our first engineers, working on a technically complex product with many moving parts and lots of things to figure out. That's okay, because you're an excellent figurer-outer. In fact, you're world class at (at least) two things, and one of them is figuring things out.
Say hello at jobs@hyperspell.com and tell us about yourself. We'll write back.
Yep. And the value they have is _your data_. It'd be a similar situation if Zuckerberg was playing "fealty games" with who could and couldn't get an API key.
It's worse than a sloppier version, because we've already proven it wrong in the literal sense.
The simulation-of-code part is trivially wrong: early CPUs were clearly fixed-function pipelines, and they could execute programs (including FizzBuzz) just fine!
The author seems to confuse the algorithmic complexity of the available operations with the power of a particular approximation model. They are mostly unrelated.
The only thing you need to simulate Turing machines, for example, is a simple O(1) NAND gate (plus an arbitrary amount of constant-time memory; or, equivalently, SKI calculus if you hate arbitrary memory). This is because the algorithmic complexity of the operations that make up the simulator is (mostly) unrelated to the power of the simulator.
They only change the time required to execute the simulation.
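The NAND-universality point above can be illustrated with a minimal toy sketch (my own example, not from any comment here): every other boolean gate, and with enough of them any fixed-function circuit, falls out of the single constant-time NAND primitive.

```python
# NAND is functionally complete: NOT, AND, OR, XOR can all be built from it.
def nand(a: int, b: int) -> int:
    return 1 - (a & b)

def not_(a):
    return nand(a, a)

def and_(a, b):
    return not_(nand(a, b))

def or_(a, b):
    return nand(not_(a), not_(b))

def xor(a, b):
    t = nand(a, b)
    return nand(nand(a, t), nand(b, t))

# A half adder built purely from NAND: the first step toward an ALU,
# and ultimately toward a machine that can run FizzBuzz.
def half_adder(a, b):
    return xor(a, b), and_(a, b)  # (sum, carry)

for a in (0, 1):
    for b in (0, 1):
        print(a, b, "->", half_adder(a, b))
```

Stacking more such gates only grows circuit depth, i.e. the time to run the simulation, not what can be computed.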
As another example, you can simulate any non-deterministic Turing machine with a deterministic one. The models have equivalent power. The open question is "how fast can you do it", not "can you do it".
Similarly, we can already prove it's possible to build LLMs that can simulate arbitrary CPUs to arbitrary precision. The proofs, of course, are not constructive, so they don't help you build one.
On the explaining code front, it doesn't take a huge leap to see that any process you are running on a computer today to explain code could similarly be approximated by a neural network.
This is just as useless as the author's sloppy statements in practice, but the author purports to make claims about what is possible in theory, not what is practical.
The description of feeling like he was "in a coma or some afterlife limbo state" sounds similar to Cotard's syndrome [1], a very rare condition that can develop from untreated schizophrenia — basically an unshakable belief that you are, in fact, dead, and every other fact of existence will have to be reframed to support that belief.
10 years ago, I started an ML & NLP consulting firm. Back then nobody was doing NLP in production (SpaCy hadn't come out yet, efficient vector embeddings were not around, the only comprehensive library to do NLP was NLTK, which was academic and hard to run in prod).
I recently revisited some of our projects from back then (like this super fun one where we put 1 Million words into the dictionary [1]) and realized how much faster we could have done many of those tasks with LLMs.
Except we couldn't: the whole "in production" part would have made LLMs for the most minute tasks prohibitively expensive, and that is not going to change for a while, sadly. So, if you want to build something in prod that is not specifically an LLM application, this book is still super valuable.
I don’t disagree with you, but always think the “they’re just predicting the next token” argument is kind of missing the magic for the sideshow.
Yes they do, but in order to do that, LLMs soak up the statistical regularities of just about every sentence ever written across a wide swath of languages, and from that infer underlying concepts common to all languages, which in turn, if you subscribe at least partially to the Sapir-Whorf hypothesis, means LLMs do encode concepts of human cognition.
Predicting the next token is simply a task that requires an LLM to find and learn these structural elements of our language and hence thought, and thus serves as a good error function to train the underlying network. But it’s a red herring when discussing what LLMs actually do.
I am disappointed your comment did not have more responses because I'm very interested in deconstructing this argument I've heard over and over again. ("it just predicts the next words in the sentence").
Explanations of how GPT-style LLMs work describe a layering of structures: the first levels encode some understanding of syntax, grammar, etc., and as more transformer layers are added, contextual and logical meanings are eventually encoded as well.
I really want to see a developed conversation about this.
What are we humans even doing, when you zoom out? We're processing the current inputs to determine what best to do in the present, the near future, or even the far future. Sometimes, in a more relaxed space (say a "brainstorming" meeting), we relax our prediction capabilities to the point that our ideas come from a hallucination realm, if no boundaries are imposed.
LLMs mimic these things in the spoken language space quite well.
> ... means LLMs do encode concepts of human cognition
AND
> ... do encode structural elements of our language and hence thought
Quite true. I think the trivial "proof" that what you are saying is correct is that a significantly smaller model can generate sentence after sentence of fully grammatical nonsense. Therefore the additional information encoded in the larger network must be knowledge, and not just syntax (word order).
Similarly, when there is too much quantization applied, the result does start to resemble a grammatical sentence generator and is less mistakable for intelligence.
I make the argument about LLMs being a time series predictor because they happen to be a predictor that does something that is a bit magical from the perspective of humans.
In the same way that pesticides convincingly mimic the chemical signals used by the creatures to make decisions, LLMs convincingly produce output that feels to humans like intelligence and reasoning.
Future LLMs will be able to convincingly create the impression of love, loyalty, and many other emotions.
Humans too know how to feign reasoning and emotion and to detect bad reasoning, false loyalty, etc.
Last night I baked a batch of gingerbread cookies with a recipe suggested by GPT-4. The other day I asked GPT-4 to write a dozen more unit tests for a code library I am working on.
> just about every sentence ever written across a wide swath of languages
I view LLMs as a new way that humans can access/harness the information of our civilization. It is a tremendously exciting time to be alive, to witness and interact with human knowledge in this way.
I listened to a radio segment last week where the hosts were lamenting that Europe was able to pass AI regulation but the US Congress was far from doing so. The fear and hype is fueling reaction to a problem that IMO does not exist. There is no AI. What we have is a wonder of what can be achieved through LLMs but it's still a tool rather than a being. Unfortunately there's a lot of money to be made pitching it as such.