Hacker News | tmandry's comments

My hope is that you can use the Send variants generated with the macro to reduce the typing in cases where you need that. In the future, with return type bounds, middleware would just use the local variants and only the end user would need to specify the Send or Local variant.


I don't have numbers handy, but I can say with confidence that the answer to both questions is "yes". It depends on the use case. Boxing is a useful tool and in many cases the overhead is minimal, but you don't want to use it all the time. Likewise, static inlining can bite you in deeply nested cases and I've heard of this happening too.

The future we're working toward is for this to be a decision you can make locally without introducing compatibility hazards or having to change a bunch of code. Ideally, one day you can even ask the compiler to alert you to potential performance hazards or make reasonable default choices for you...


Study design: tracks 200 people over one year and measures a suite of biomarkers at baseline, 6 months, and 12 months. Participants are randomly assigned to one of four dosages or a placebo. Double-blind.

The study hopes to find an optimal dose for humans using the various biomarkers as a guide. I’m not sure how well we can connect those biomarkers to actual longevity/healthspan.

Source: https://www.lifespan.io/news/a-campaign-to-launch-rapamycin-...


Realistically you have to start with something like this before doing a huge 30 year study (with potentially the wrong dose).


> Otherwise we are always dependent on the good will of the companies without democratic control.

Yes, but that’s already true without this action, which doesn’t make them any more or less democratically controlled. A more democratic way of deciding clear limits on free speech would be great, but the absence of one doesn’t mean the platforms should sit on their hands and do nothing while the world burns.


Most libraries don’t target nightly anymore. It certainly used to be the case that nightly had all the cool features everyone wanted, but almost all features that popular crates depended on have now been stabilized. Even Rocket (the most high-profile holdout I know of) now works on stable as of earlier this year.

As for maintenance, as with all library ecosystems it’s a mix. The most popular crates tend to be the most well-maintained in my experience. This is definitely something to consider when taking on new dependencies.


You seem to be hitting the nail on the head with regard to seamless UX. I’d gladly pay to support something like this, but the lock-in / “what happens if you go away” problem is real and holds me back from investing all my knowledge and time. Short of an actual federated system, a self-hosted option OR just markdown export / backup of an entire database would completely alleviate that for me.


So we do currently have the ability to export all your cards as one giant markdown file. Obviously the issue with any sort of export with our system is that even if we export as markdown, the relationships don't really carry over into the export.

But this is definitely something we are working to make as easy / thorough as possible. Data ownership by our users is very important, which we've spelled out in our terms[1] as well.

[1] https://supernotes.app/terms/


“Mild” symptoms has been used to mean flu-like, i.e., not something I would call mild and certainly not the same as asymptomatic, but not requiring hospitalization.


I think it is stranger than that. “Mild” has meant anything from a fever for a night to flu-like symptoms for a week or so.

Really want to get a test on myself and family. We had something that nearly hospitalized me, but the kids barely registered being sick.


I’m not sure I would call this a memory model per se, but here’s how async tasks are laid out in memory.

http://tmandry.gitlab.io/blog/posts/optimizing-await-1/


Only models sold after a certain date support this, and you have to go through a menu sequence to enable HDMI 2. It’s documented on their website.


Just to clarify for those following along, Rust async code does not use green threads and doesn't require a stack per task.


I'm missing some of the technical details here, but from a quick glance at the article it seems like Rust's futures are lazy, i.e. a stack would only be allocated when the future is actually awaited. But isn't a call stack still needed for each unfinished future in order to execute the relevant code, or am I missing something?


AFAIK Rust's futures compile to a state machine, which is basically just a struct that contains the current state flag and the variables that need to be kept across yield points. An executor owns a list of such structs/futures and executes them however it sees fit (single-threaded, multi-threaded, ...). So there is no stack per future. The number of stacks depends on how many threads the executor runs in parallel.
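A rough sketch of what that looks like from the outside (my own toy example, using only std; `example` is a made-up name):

```rust
// The compiler turns this async fn into an anonymous state-machine struct
// holding a state discriminant plus the variables live across the .await.
async fn example(input: u64) -> u64 {
    let doubled = input * 2;       // lives across the await, so it's stored in the struct
    std::future::ready(()).await;  // yield point: state is saved here
    doubled + 1
}

fn main() {
    // Creating the future runs nothing and allocates no stack: it's just a
    // small value whose size the compiler knows statically.
    let fut = example(20);
    println!("future size: {} bytes", std::mem::size_of_val(&fut));
}
```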


> the current state flag and the variables that need to be kept across yield points.

you mean like... a stack frame?


Like a stack frame, but allocated once, with a fixed size, instead of being LIFO-allocated in a reserved region (which itself must be allocated up front, when you don't know how big it's going to end up).

The difference being: if your tasks need 192B of memory each and you spawn 10 of them, you just consumed a little less than 2kB. With green threads, you'd have 10 times the starting size of your stack (generally a few kB each). That makes a big difference if you don't have much memory.
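Spelling out that arithmetic (the 192B figure is from above; the 4kB green-thread stack is an assumed typical minimum, not a measured one):

```rust
fn main() {
    let future_size_bytes = 192;  // per-task future size (example figure from above)
    let green_stack_bytes = 4096; // assumed minimum green-thread stack size
    let n = 10;
    // 10 * 192 = 1920 bytes, "a little less than 2kB"
    println!("futures: {} bytes", n * future_size_bytes);
    // 10 * 4096 = 40960 bytes, over 20x more
    println!("green threads: {} bytes", n * green_stack_bytes);
}
```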


So that's actually green threads in my book (in a good implementation I expect to be able to configure the stack size), with the nice addition that the language exposes the needed stack size.


It's a stackless coroutine. AFAIK, the term “green thread” is usually reserved for stackful coroutines, but I guess it could also be used to talk about any kind of coroutine.


It's more efficient (potentially substantially more so). In a typical threaded system you have some leaf functions which take up a substantial amount of stack space but don't block, so you don't need to save their state between context switches. In most green-threaded applications you still need to allocate this space (times the number of threads). The main advantage of this kind of green thread is that you can separate out the allocation you need to keep between context switches (which is stored per task) from the memory you only need while actually executing (which is shared between all tasks). For certain workloads this can be a substantial saving. In principle you can do this in C or C++ by stack switching at the appropriate points, but it's a pain to implement, hard to use correctly (the language doesn't help you at all), and I've not seen any actual implementations of it.


Kinda like a stack frame, but more compact and allocated in one go beforehand.


Because of syntactic restrictions on how await works, at most you need to allocate a single function frame, never a full stack, and often it doesn't even need to be allocated separately and can live on the stack of the underlying OS thread.


So that async function cannot call anything else?


They can, but the function itself cannot be suspended by a function it calls (i.e. await being a keyword enforces this), so any function that is called can use the original OS thread's stack. Any called function can in turn be an async function, and will return a future[1] that in turn captures that function's frame. So yes, a chain of suspended async functions sort of looks like a stack, but its size is known at compile time[2].

[1] I'm not familiar with rust semantics here, just making educated guesses.

[2] Not sure how Rust deals with recursion in this case. I assume you get a compilation error because it will fail to deduce the size of the returned future, and you'll have to explicitly box the future: the "stack" would then look like a linked list of activation records.
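For the recursion question in [2], a sketch of the usual workaround (my own example; `countdown` is hypothetical, and the minimal no-op waker is only there so we can poll without an executor crate):

```rust
use std::future::Future;
use std::pin::Pin;
use std::task::{Context, Poll, RawWaker, RawWakerVTable, Waker};

// A directly recursive `async fn` is rejected because its future type would
// have to contain itself. Boxing the returned future breaks the cycle, so the
// suspended "stack" becomes a chain of heap allocations (a linked list of
// activation records, as the footnote guesses).
fn countdown(n: u64) -> Pin<Box<dyn Future<Output = u64>>> {
    Box::pin(async move {
        if n == 0 { 0 } else { countdown(n - 1).await + 1 }
    })
}

// Minimal waker that does nothing when woken, just so we can call poll().
fn noop_waker() -> Waker {
    fn clone(_: *const ()) -> RawWaker {
        RawWaker::new(std::ptr::null(), &VTABLE)
    }
    fn noop(_: *const ()) {}
    static VTABLE: RawWakerVTable = RawWakerVTable::new(clone, noop, noop, noop);
    unsafe { Waker::from_raw(RawWaker::new(std::ptr::null(), &VTABLE)) }
}

fn main() {
    let waker = noop_waker();
    let mut cx = Context::from_waker(&waker);
    let mut fut = countdown(3);
    // Nothing here ever pends, so one poll drives the whole chain to completion.
    match fut.as_mut().poll(&mut cx) {
        Poll::Ready(v) => println!("result: {}", v), // prints "result: 3"
        Poll::Pending => unreachable!(),
    }
}
```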


Async functions don't really call anything by themselves: the executor does, and every function called in the context of an async function runs on the executor's stack. You just end up with a fixed number of executor threads, each an OS thread with a normal stack.

