Automerge is an excellent library with a great API, not just in Rust but also JavaScript and C.
> All you need for the backend is key-value storage with range/prefix queries;
This is true; I was able to quickly put together a Redis Automerge library that supports the full API, including pub/sub of changes to subscribers, for a full persistent sync server [0]. I was surprised how quickly it came together. Using some LLM assistance (I'm not a frontend specialist), I quickly put together a usable web demo of documents synchronized across multiple browsers, using the Webdis [1] websocket support over pub/sub channels.
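To make the "key-value storage with range/prefix queries" idea concrete, here's a minimal in-memory sketch. The class and key layout are illustrative only, not the library's actual design; the real backend would be Redis keys plus SCAN and pub/sub channels, and the change payloads would be the opaque byte blobs Automerge produces.

```python
from typing import Callable

class ChangeStore:
    """Toy stand-in for a key-value backend with prefix scans and pub/sub."""

    def __init__(self):
        self._kv = {}    # key -> change bytes; a real store keeps keys sorted
        self._subs = []  # subscriber callbacks, stand-in for pub/sub channels

    def append_change(self, doc_id: str, seq: int, change: bytes) -> None:
        # Zero-padded sequence number so lexical key order == append order.
        self._kv[f"doc:{doc_id}:change:{seq:012d}"] = change
        for cb in self._subs:           # fan out to live subscribers
            cb(doc_id, change)

    def changes(self, doc_id: str) -> list:
        # Prefix query: every change for one document, in order.
        prefix = f"doc:{doc_id}:change:"
        return [v for k, v in sorted(self._kv.items()) if k.startswith(prefix)]

    def subscribe(self, cb: Callable) -> None:
        self._subs.append(cb)
```

A late-joining peer replays `changes(doc_id)` to catch up, then follows the subscription for live updates, which is all a persistent sync server really needs.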
I caught covid for the first time in January 2024. The illness itself wasn't that bad for me, like a common cold, but the aftereffects lingered for months. My ability to smell and taste came back within a few weeks, but the mental brain fog would not let go. I would sleep over 12 hours a night and still be tired. According to my fitness tracker, my daily step and energy-burn counts were cut in half. It got so bad I forgot my own phone number at one point, and my Gmail password. Looking at a screen of code was impossible; I couldn't focus for more than a few seconds. My friends commented on the noticeable changes in my acuity and behavior.
Based on some online anecdotal evidence, I decided to try nicotine "therapy". I bought 4mg smoking-cessation mints, cut them in half with a pill cutter, and took 10-12 2mg doses per day at roughly one-hour intervals. The effect was immediate, and the brain fog lifted in less than a week. It was like coming out of a long dream, or like I had been stoned for six months and then suddenly I was sober again. My fitness stats now exceed where I was before I got sick.
This is just my own anecdotal experience, and there have definitely been some downsides. The mints are about $50/month. My dosage has ticked up a bit and I'm certainly addicted; at least once a day I take a full mint instead of a half for an extra kick. I'd like to taper off, but I'm not sure how I'd know whether any effects were withdrawal or the covid brain fog coming back. I have a light caffeine habit (2 cups every morning) and I don't see the mints being any more harmful than the coffee, so I think I'm just going to stick with it.
I suggest you never try a real cigarette if you haven't already. Given that you're already addicted to nicotine, the kick of the real deal could be too much to handle. And you don't want that habit, speaking from experience. It's good that you found a drug that helps you; just don't inhale it.
I'd never suggest anyone try a cigarette, but in my experience being addicted to nicotine doesn't translate easily into being addicted to cigarettes.
It takes a really different state of mind to start a cigarette habit, especially given the awful taste and the effect on your body of ingesting a strong concentration of chemicals, which has nothing to do with a measured dose of 'mostly just' nicotine.
It may be anecdotal and subjective reasoning, but I battled vape addiction differently than cigarette addiction. I'd classify tobacco addiction as more of an emotional addiction, while vaping was more of a nicotine addiction, which was more mechanical and predictable than the former.
I have a similar average intake to you, and I've been on it much longer, about 3.5 years. I have had periods where I go off the stuff entirely, including a whole month and a half earlier this year. I've also had periods where my intake has spiked much higher.
The withdrawal symptoms are actually strangely pleasant to me as long as I'm in the right mood and have something interesting to focus on. You will get irritable at things you shouldn't, but as long as you keep that in mind, you should be able to stop yourself from totally flying off the handle. In terms of "fogginess," I find that it's actually mostly in your head. As in, I feel like my mind is dulled, yet when I set myself a task I somehow get it done to about the same standard I'm used to. If you end up feeling foggy coming off the stuff, give yourself at least a week or two to adjust.
>The withdrawal symptoms are actually strangely pleasant to me as long as I'm in the right mood and have something interesting to focus on.
i actually find this true, too. i’m dependent on nicotine lozenges and if i’m out and about and for some reason i don’t have my nicotine, i feel pretty locked in and focused on the task at hand (probably so i can hurry up and finish and access some nicotine). i often feel a bit chatty, too. maybe i haven’t tried being off it long enough to feel irritable. but this interesting headspace makes me optimistic that maybe quitting won’t be so bad, and that maybe the nicotine is somehow depressing my arousal and i’d be better without it (all the time).
not him, but i use the amazon brand lozenges. they're mints you tuck like a zyn, rather than gum. much less spicy in my mouth, and it doesn't irritate my tummy like gum did. haven't noticed any mouth issues but i'm sure a dentist wouldn't approve
Out of curiosity, what brand of gum did you choose? A few years back I tried nicotine gum to see if it might encourage my sense of smell to improve; it's been muted since Covid. It helped, but the larger effect I noticed was one day in particular when my word recall was phenomenal, where usually I struggle and am always grasping.
I caught Covid in March 2020 and it broke my brain. Watching Netflix was too difficult. I'm better now through a wide variety of things, but nicotine, via patches for me (half of a 7mg patch; I don't want to get used to too much), gave me my life back. That, and a vagus nerve stimulator.
Lol, exact same story here, except I caught covid in the very early days and started nicotine in 2023, after I stumbled upon the trials and anecdotes for the first time. Although unlike you, I have tried quitting a few times, and my brain fog absolutely comes back. The reason I would prefer not to use nicotine is that for me (it seems this is not the case for you) there is a very clear inflammatory aspect I cannot deny, which impacts my exercise.
What is actually quite interesting to me is over the two years since I started nicotine I have grown increasingly disgusted with caffeine to the point where I just prefer not to take it any longer.
There are open-source projects moving toward this scale. The GraphBLAS, for example, uses an algebraic formulation over compressed sparse matrix representations of graphs that is designed to be portable across many architectures, including CUDA. It would be nice if companies like Nvidia could get more behind our efforts, as our main bottleneck is access to development hardware.
To plug my project: I've wrapped the SuiteSparse GraphBLAS library in a Postgres extension [1] that fluidly blends algebraic graph theory with the relational model. The main flow is to use SQL to structure complex queries for starting points, use the GraphBLAS to flow through the graph to the endpoints, then join back to tables to get the relevant metadata. On cheap Hetzner hardware (AMD EPYC, 64 cores) we've achieved 7-billion-edges-per-second BFS over the largest graphs in the SuiteSparse collection (~10B edges). With our CUDA support we hope to push that kind of performance to graphs with trillions of edges.
An interesting approach to characterizing graph topology, both locally and globally, is the graphlet transform. There's some interesting research happening around these kinds of topology signals; here's one paper that takes a very algebraic approach:
Because then you leak the timestamp. The idea is to present what look like random v4 UUIDs externally, while storing them internally as v7, which greatly improves locality and index usability. The conversion back and forth happens with a secret key.
The advantage is that lexical order and chronological order are the same, and you still retain enough random bits that guessing the next generated ID is not easy.
UUIDv8 does not contain a timestamp or counter unless you put them there; it only mandates the version and variant fields. It's a very broad format that lets you use whatever bits you want.
This library converts a UUIDv7 into a cryptographically random-looking but deterministic UUIDv4, recoverable with a shared key. For all intents and purposes the external view is a UUIDv4; the internal representation is a v7, which has better index block locality and orderability.
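A toy sketch of the two halves of that idea, assuming nothing about the library's actual implementation: an RFC 9562 UUIDv7 generator (so lexical order tracks time), plus a keyed, invertible masking of the bits so the external form carries v4 version bits. The XOR mask here is purely illustrative; a fixed XOR leaks differences between IDs, so a real scheme would use format-preserving encryption.

```python
import hashlib
import hmac
import os
import time
import uuid

def uuid7() -> uuid.UUID:
    # RFC 9562 v7 layout: 48-bit unix-ms timestamp, then version/variant
    # bits, the rest random. Lexical byte order follows creation time.
    ms = time.time_ns() // 1_000_000
    b = bytearray(ms.to_bytes(6, "big") + os.urandom(10))
    b[6] = (b[6] & 0x0F) | 0x70   # version 7
    b[8] = (b[8] & 0x3F) | 0x80   # variant 10
    return uuid.UUID(bytes=bytes(b))

def _mask(key: bytes) -> bytes:
    # Deterministic 16-byte mask derived from the shared secret.
    return hmac.new(key, b"uuid-mask", hashlib.sha256).digest()[:16]

def to_external(u: uuid.UUID, key: bytes) -> uuid.UUID:
    # XOR with the keyed mask, then stamp v4 version/variant bits.
    b = bytearray(x ^ m for x, m in zip(u.bytes, _mask(key)))
    b[6] = (b[6] & 0x0F) | 0x40   # present as version 4
    b[8] = (b[8] & 0x3F) | 0x80
    return uuid.UUID(bytes=bytes(b))

def to_internal(e: uuid.UUID, key: bytes) -> uuid.UUID:
    # Invert the mask; the version/variant bits are fixed constants on
    # both sides (7/10 internally, 4/10 externally), so restoring them
    # is lossless.
    b = bytearray(x ^ m for x, m in zip(e.bytes, _mask(key)))
    b[6] = (b[6] & 0x0F) | 0x70   # restore version 7
    b[8] = (b[8] & 0x3F) | 0x80
    return uuid.UUID(bytes=bytes(b))
```

The round trip works because the version and variant bits are known constants in both representations, so only the remaining 122 bits need to be carried through the keyed transform.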
It's an interesting exploration of ideas, but there are some issues with this article. Worth noting that it does describe its approach as "simple and naive", so take my comments below as corrections and/or pointers into the practical and complex issues on this topic.
- The article says adjacency matrices are "usually dense", but that's not true at all; most graphs are sparse to very sparse. In a social network with billions of people, the average out-degree might be 100. The internet is another example of a very sparse graph: billions of nodes, but most nodes have at most one or two direct connections.
- Storing a dense matrix means it can only work with very small graphs; a graph with one million nodes would require one million squared (a trillion) memory elements, not possible.
- Most of the elements in the matrix would be "zero", but you're still storing them, and when you do a matrix multiplication (one step of a BFS across the graph) you're still wasting energy moving, caching, and multiplying/adding mostly zeros. It's very inefficient.
- Minor nit: it says the diagonal is empty because nodes are already connected to themselves. That isn't correct in theory; self-edges are definitely a thing. There's a reason the main diagonal is called "the identity".
- Not every graph algebra uses the numeric "zero" to mean zero, for tropical algebras (min/max) the additive identity is positive/negative infinity. Zero is a valid value in those algebras.
I don't mean to diss the idea; it's a good way to dip a toe into the math and computer science behind algebraic graph theory. But in production, or for anything but the smallest (and densest) graphs, a sparse graph algebra library like SuiteSparse is the most appropriate choice.
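To show the BFS-as-matrix-multiplication idea without the wasted zeros: a sparse BFS is repeated frontier expansion, the vector-times-matrix product over the boolean semiring (OR for plus, AND for times) that GraphBLAS-style libraries implement over compressed storage. A toy Python sketch using a dict-of-sets adjacency, not the SuiteSparse API:

```python
def bfs_levels(adj: dict, source) -> dict:
    """BFS from `source`; adj maps node -> set of out-neighbors (sparse rows).
    Returns node -> level. Each loop iteration is one boolean-semiring
    frontier-times-matrix step: only stored (nonzero) entries are touched."""
    frontier = {source}
    visited = {source: 0}
    level = 0
    while frontier:
        level += 1
        # Union of out-neighborhoods of the frontier: the sparse "product".
        reached = set()
        for u in frontier:
            reached |= adj.get(u, set())
        # Mask out already-visited nodes (the complement mask in GraphBLAS terms).
        frontier = {v for v in reached if v not in visited}
        for v in frontier:
            visited[v] = level
    return visited
```

Work per step is proportional to the edges actually touched, not to n squared, which is the whole point of the sparse formulation.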
SuiteSparse is used by MATLAB (A .* B calls SuiteSparse), FalkorDB, python-graphblas, OneSparse (a Postgres extension), and many other projects. Its author, Tim Davis of TAMU, is a leading expert in this field of research.
(I'm a GraphBLAS contributor and author of OneSparse)
You seem to do a lot of work on sparse graphs, as most people do, but re-read the opening line carefully:
> In graph theory, an adjacency matrix is a square matrix used to represent a finite (and usually dense) graph.
Many of these issues evaporate once you realize the article sets out to speak to the use of adjacency matrices for dense graphs, which (it's pointing out) are the ones you'd commonly use an adjacency matrix for, rather than claiming that all graphs are dense so you should always use one.
E.g. a dense graph of 1,000,000 nodes would usually be considered "a pretty damn large dense graph", and so on. These are probably good things to have mentioned here, though, since pulling an article about adjacency matrices into a conversation, without the context of why you'd use one, can lead folks to bad conclusions.
This is very much a nitpick, but a million million bits is about 116 GiB, and you can squeeze 192 GB of RAM into a desktop these days, let alone a workstation or a server. (Even if mdspan forces bytes, you can fit a million^2 elements into a server.)
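The arithmetic, for anyone checking:

```python
n = 10**6                     # nodes in the dense graph
bits = n * n                  # one bit per (i, j) adjacency entry
bit_gib = bits / 8 / 2**30    # pack 8 entries per byte -> ~116.4 GiB
byte_gib = n * n / 2**30      # one byte per entry -> ~931.3 GiB

# ~116 GiB fits in a 192 GB desktop; ~931 GiB needs a large server,
# but both are within reach of commodity hardware.
```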
Yep, for any decent-sized graph, sparse is an absolute necessity, since a dense matrix grows with the square of the node count. Sparse matrices and sparse matrix multiplication are complex, and there are multiple kernel approaches depending on density and other factors. SuiteSparse [1] handles these cases, has a kernel JIT compiler for different scenarios and graph operations, and supports CUDA as well. Worth checking out if you're into algebraic graph theory.
Using SuiteSparse and the standard GAP benchmarks, I've loaded graphs with 6 billion edges into 256GB of RAM, and can BFS that graph in under a second. [2]