When you use a microscope to magnify something, the objective (magnifying lens) is literally taking the Fourier transform of the image. The optical system recovers spatial frequencies only up to a limit, which determines the spatial resolution of the image.
Gotta say, I assumed this was some sort of virtual/imaginary thing, but it seems like there's a point in the optical system where, if we placed a screen, we'd see the FT of the image coming in! And before we had digital image processing, people used to place masks there to filter out low-frequency or high-frequency details in the image. Which is absolutely insane, and I have no comprehension of how the physics works out!
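If anyone wants to play with the idea digitally, here's a minimal numpy sketch of what a mask at the Fourier plane effectively does, using a synthetic test image and a hand-drawn circular mask. Purely illustrative: it's the digital analogue of the masking trick, not a simulation of the actual optics.

```python
# Digital analogue of placing a mask at the Fourier plane:
# a low-pass mask blurs the image, a high-pass mask keeps the edges.
import numpy as np

# Synthetic "image": a bright square on a dark background
img = np.zeros((256, 256))
img[96:160, 96:160] = 1.0

# 2D Fourier transform, zero frequency shifted to the center
F = np.fft.fftshift(np.fft.fft2(img))

# Circular masks in the frequency domain
yy, xx = np.mgrid[:256, :256]
r = np.hypot(yy - 128, xx - 128)
low_pass = r < 20          # keep only low spatial frequencies
high_pass = r >= 20        # keep only high spatial frequencies

# Inverse transform after masking ~ what a physical mask at the Fourier plane does
blurred = np.real(np.fft.ifft2(np.fft.ifftshift(F * low_pass)))
edges   = np.real(np.fft.ifft2(np.fft.ifftshift(F * high_pass)))

print(blurred.max(), edges.max())  # blurred ~ smoothed square; edges ~ mostly its outline
```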
I disagree, in my opinion passing the bar exam is necessary but not nearly sufficient for competently practicing law.
The bar is an imperfect filter. One could study for the exam and pass and still be hugely deficient in ability as an attorney.
I would argue there's no exam that could replace the evaluative and experiential component of 3 years in law school, and accreditation helps enforce at least some standard of quality in the profession. More incompetent lawyers -> more wasteful behavior -> a more bloated and slower legal system -> worse outcomes for everyone.
I think reducing barriers to completing the legal education (part-time programs, lower cost, etc.) is a better avenue for increasing access.
I don’t have an opinion either way on this, but the legal profession seems to suffer from some of the same issues that emergency healthcare does, which make licensing important.
It’s not a service regular people use consistently, so they haven’t researched providers in advance; they usually have to scramble when they need a lawyer. And it’s very hard for a lay person to tell whether a lawyer did a good job or not.
But even when it comes to bigger firms, which do have the resources to find good lawyers, there’s a different advantage to heavy-handed licensing: the law depends extremely heavily on lawyers being largely honest, especially when it comes to things like discovery and maintaining confidentiality. Licensing is one of the strongest tools the system has to ensure that.
While cancer is caused by mutations in the genome, these mutations in turn produce the unifying property of cancer: unchecked cell replication.
Most cell types have systems to safely manage replication. Broadly, there are gas pedals (oncogenes) and brakes (tumor suppressors). A classic oncogene is something like RAS, which activates a signaling cascade and stimulates progression through the cell cycle. A canonical tumor suppressor is something like TP53, the most frequently mutated gene in cancer, which senses various cellular stresses and induces apoptosis or senescence.
Most cancer genomes are more complicated than individual point mutations (SNVs), insertions, or deletions. There are copy number alterations, where you have more or fewer than 2 copies of a genomic region or chromosome, large-scale genomic rearrangements, metabolic changes, and extrachromosomal DNA. There is a series of papers on the hallmarks of cancer that is a useful overview [1].
All of the mechanisms that intrinsically regulate cell growth would fall under your "L1 defense". Unfortunately, the idea of reversing somatic point mutations is likely to be a challenging approach to treating cancer given the current state of technology.
First, for the reasons above, cancer is often multifactorial, and it would be difficult to identify a single driver that would effectively cure the disease if corrected. Second, we don't currently have delivery or in vivo base-editing technology that is sensitive or specific enough to cure cancer by this means. There are gene therapies like zolgensma [2] which act by introducing a working episomal copy (not replacing the damaged version in the genome) of the gene responsible for SMA. There are also cell therapies like CAR T, which introduce a transgene encoding an anti-cancer effector on T cells. These sorts of approaches may give some insight into the current state of the art in this field.
Edit: I should also note that the genes involved in DNA repair (PARP, BRCA1/2, MSH2, MLH1, etc.) are frequently mutated in cancers and therapeutically relevant. There are drugs that target them, sometimes rather successfully (e.g. PARP inhibitors). But the mechanisms of action for these therapies are more complicated than outright correcting the somatic mutations.
Some thoughts on this as someone working on circulating-tumor DNA for the last decade or so:
- Sure, cancer can develop years before diagnosis. Pre-cancerous clones harboring somatic mutations can exist for decades before transformation into malignant disease.
- The eternal challenge in ctDNA is achieving a "useful" sensitivity and specificity. For example, imagine you take some of your blood, extract the DNA floating in the plasma, hybrid-capture enrich for DNA in cancer driver genes, sequence super deep, call variants, do some filtering to remove noise and whatnot, and then you find some low-allelic-fraction mutations in TP53. What can you do about this? I don't know. Many of us have background somatic mutations speckled throughout our bodies as we age. Over age ~50, most of us are liable to have some kind of pre-cancerous clone in the esophagus, prostate, or blood (due to CHIP). Many of the popular MCED tests (e.g. Grail's Galleri) use signals other than mutations (e.g. methylation status) to improve this sensitivity/specificity profile, but I'm not convinced it's actually good enough to be useful at the population level (see the toy base-rate calculation after this list).
- The cost-effectiveness of most follow-on screening is not viable given the sensitivity-specificity profile of MCED assays (Grail would disagree). To make this work, we would need downstream screening to be drastically cheaper, or possibly a tiered non-invasive screening strategy with increasing specificity (e.g. Harbinger Health).
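To make the population-level point concrete, here is a toy positive-predictive-value calculation. The sensitivity, specificity, and prevalence numbers are made up for illustration and are not the published performance of Galleri or any other assay:

```python
# Illustrative only: why "good" sensitivity/specificity can still yield a poor
# positive predictive value at population-screening prevalence.
def ppv(sensitivity, specificity, prevalence):
    tp = sensitivity * prevalence                # true positives per person screened
    fp = (1 - specificity) * (1 - prevalence)    # false positives per person screened
    return tp / (tp + fp)

# Assume ~0.7% of screened adults have a detectable cancer in a given year
for spec in (0.99, 0.995, 0.999):
    print(f"spec={spec}: PPV={ppv(0.5, spec, 0.007):.2f}")
# Even at 99.5% specificity, most positives are still false alarms.
```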
This sort of thing is exactly like preventative whole body MRI scans. It's very noisy, very overwhelming data that is only statistically useful in cases we're not even sure about yet. To use it in a treatment program is witchcraft at this moment, probably doing more harm than good.
It COULD be used to craft a pipeline that dramatically improved everyone's health. It would take probably a decade or two of testing (an annual MRI, an annual sequencing effort, an annual very wide blood panel) in a longitudinal study with >10^6 people to start to show significant reductions in overall cancer mortality and improvements in diagnostics of serious illnesses. The diagnostic merit is almost certainly hiding in the data at high N.
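For a rough sense of why the cohort has to be that large, here is a back-of-the-envelope two-proportion sample-size calculation. The baseline mortality rate and effect size are assumptions picked for illustration, not estimates from any real trial:

```python
# Standard two-proportion sample-size formula for a two-arm trial detecting a
# reduction in cancer mortality. All inputs below are illustrative assumptions.
from scipy.stats import norm

def n_per_arm(p_control, p_treat, alpha=0.05, power=0.8):
    z = norm.ppf(1 - alpha / 2) + norm.ppf(power)
    var = p_control * (1 - p_control) + p_treat * (1 - p_treat)
    return z**2 * var / (p_control - p_treat) ** 2

# Say ~1% cancer mortality over the study window, and screening cuts it by 10%
print(f"{n_per_arm(0.010, 0.009):,.0f} participants per arm")
# Roughly 150k per arm -> a few hundred thousand total; smaller effects or
# shorter follow-up push the requirement toward the millions.
```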
The odds are that most of the useful things we would find from this are serendipitous - we wouldn't even know what we were looking at right now, first we need tons of training data thrown into a machine learning algorithm. We need to watch somebody who's going to be diagnosed with cancer 14 years from now, and see what their markers and imaging are like right now, and form a predictive model that differentiates between them and other people who don't end up with cancer 14 years from now. We [now] have the technology for picking through complex multidimensional data looking for signals exactly like this.
In the meantime, though, you have to deal with the fact that the system is set up exclusively for profitable care of well-progressed illnesses. It would be very expensive to run such a trial, over a long period of time, and the administrators would feel ethically bound to unblind and then report on every tiny incidentaloma, which completely fucks the training process.
The US is institutionally unable to run this study. The UK or China might, though.
> This sort of thing is exactly like preventative whole body MRI scans. It's very noisy, very overwhelming data that is only statistically useful in cases we're not even sure about yet. To use it in a treatment program is witchcraft at this moment, probably doing more harm than good.
The child of a friend of mine has PTEN hamartoma tumor syndrome, a tendency to develop tumors throughout life due to a mutation in the PTEN gene. The poor child gets whole-body MRIs and other check-ups every half year. As someone in biological data science, I always tell the parents how difficult it will be to prevent false positives, because we don't have a lot of data on routine full-body check-ups of healthy people. We hardly know the huge spectrum of what healthy/OK tissue can look like.
>CRISPR/Cas9 can be directed to cut DNA in targeted areas, enabling the ability to accurately edit (remove, add, or replace) DNA where it was cut. The modified blood stem cells are transplanted back into the patient where they engraft (attach and multiply) within the bone marrow...
> It would be very expensive to run such a trial, over a long period of time, and the administrators would feel ethically bound to unblind and then report on every tiny incidentaloma, which completely fucks the training process.
I wonder if our current research process is only considered the gold standard because doing things in a probabilistic way is the only way we can manage the complexity of the human body to date.
It’s like me running an application many, many times with many different configurations and datasets, while scanning some memory addresses at runtime before and after the test runs, to figure out whether a specific bug exists in a specific feature.
Wouldn’t it be a lot easier if I could look at the relevant function in the source code and understand its implementation to determine whether it was logically possible based on the implementation?
We currently don’t have the ability to decompile the human body, or understand the way it’s “implemented”, but tech is rapidly developing tools that could be used for such a thing: either a way to hold and reason about more aggregated information about the human body “in mind” than any person could in one lifetime, or a way to simulate it with enough granularity to be meaningful.
Alternatively, the double-blindedness of a study might not be as necessary if you can continually objectively quantify the agreement of the results with the hypothesis.
I.e., if your AI model is reporting low agreement while the researchers are reporting high agreement, that could be a signal that external investigation is warranted, or prompt the researchers to question their own biases where they would previously have succumbed to confirmation bias.
All of this is fuzzy anyway - we likely will not ever understand everything at 100% or have perfect outcomes, but if you can cut the overhead of each study down by an order of magnitude, you can run more studies to fine-tune the results.
Alternatively, you could have an AI passively running studies to verify reproducibility and flag cases where it fails, whereas now the way the system values contributions makes it far less worthwhile for a human author to invest the time, effort, and money. I.e., recover from a bad study a lot more quickly, rather than improving accuracy.
EDIT: These are probably all ideas other people have had before, so sorry to anyone who reaches the end of my brainstorming and didn’t come out with anything new. :)
I didn't even think about the replication part of the value proposition.
Do a detailed enough study of an entire population and you get very strong hypothesis testing for all sorts of diseases & treatments simultaneously. You don't have to spend tens of millions of dollars and multiple PhD generations running a blinded study to replicate a specific untested first-principles part of modern medicine's treatment for a rare disease; you get that shit for free and call it up in a SQL query.
I guess the problem is a mismatch between detection capability and treatment capability? We seem to be getting increasingly good at detecting precancerous states but we don't have corresponding precancer treatments, just the regular cancer treatments like chemo or surgery which are a big hit to quality of life, expensive, harmful etc.
Like if we had some kind of prophylactic cancer treatment that was easy/cheap/safe enough to recommend to people even on mild suspicion of cancer, false positives and all, we could offer it to anyone who tests positive. Maybe even just lifestyle interventions, if those are proven to work. That's probably very difficult though, just dreaming out loud.
>I guess the problem is a mismatch between detection capability and treatment capability?
The problem is you do the test for 7 billion people, say, 30 times over their lives... 210,000,000,000 tests. Imagine how many false negatives and false positives: the cost of follow-up testing only to find... a false positive, the cost of telling someone they have cancer when they don't, the anger of telling someone they are free of cancer only to find out they had it all along.
This tech isn't that good, nowhere near it; more like a 1 in 100 or 10 in 100 rate of "being wrong". Those numbers can get cheesed towards more false positives or more false negatives.
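Back-of-the-envelope, with an assumed prevalence and the 1-10% error rates above (all numbers illustrative, just for scale):

```python
# Rough arithmetic on what a 1-10% error rate means at population scale.
tests = 7_000_000_000 * 30           # ~2.1e11 lifetime tests worldwide
prevalence = 0.005                   # assume 0.5% of tests happen while a cancer is present
for error_rate in (0.01, 0.10):
    false_pos = tests * (1 - prevalence) * error_rate
    false_neg = tests * prevalence * error_rate
    print(f"error={error_rate:.0%}: ~{false_pos:.1e} false positives, ~{false_neg:.1e} false negatives")
# Even at a 1% error rate, that's on the order of 2 billion false positives to chase down.
```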
As for Grail, they tried to achieve this and printed OK numbers... but their test set was their training set, so the performance metrics went to shit when they rolled it out to production.
I think chemo in general kills rapidly dividing cells, which is a characteristic of cancer cells and, unfortunately, many types of regular cells as well, hence many of the side effects, like hair loss. If it is precancerous, then probably it’s not yet dividing in that way, so it probably wouldn’t make much of a difference, unless you’d actually catch the moment when the switch to full-fledged malignancy happens.
Chemo poisons the whole body in an attempt to destroy the cancer before the treatment kills the person. Not something you would want to do for a precancerous lesion, I suspect. Many of the targeted treatments we now have would also not be suitable, as they can't target (or find) a cancer so small as to be deemed precancerous. But I imagine some treatments would be, such as drugs targeting the cancer DNA.
Would you say ctDNA tools are sensitive and specific enough now to be able to make a decision about post op adjuvant therapies? “Now that I’ve had surgery, did the R0 resection get it all, or do I need to do chemo and challenging medication like mitotane?”
I’ve seen it most commonly thought of as using ctDNA to detect relapse earlier.
So, more like: did the tumor come back? And if that does happen, can you use ctDNA to detect the relapse before you would otherwise find it with standard imaging? Most studies I’ve seen have shown that this happens and that ctDNA is a good biomarker for early detection of relapse.
The case for proactively looking for circulating tumor DNA without an initial diagnosis or underlying genetic condition is a bit dicier IMHO. For example, what I’d really like to know (I haven’t read this article, but I’m pretty familiar with the field) is how many people had a detectable cancer in their plasma (ctDNA) but didn’t receive a cancer diagnosis. It’s been known for a while that you can detect precancerous lesions well before a formal cancer diagnosis. But what’s still an open question, AFAIK, is how many people have precancerous lesions or positive ctDNA hits that never form a tumor?
This seems like yet another place where the base rate is going to fuck us: intuitively (and you've actually thought about this problem and I haven't) I'd expect that even with remarkably good tests, most people who come up positive will not go on to develop related disease.
Ideally, you'd want a test (or two sequential ones) that's both very sensitive (rule candidates in) and specific (rule healthy peeps out). But that's only the first step, because there's no point knowing you're sick (from the populational and economic pov) if you can't do something useful about it. So you also have to include downstream tests and treatments in your assessment and all this suddenly becomes a very intricate probability network needing lots of data and thinking before decisions are made. And then, there's politics...
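A toy sketch of the two-sequential-tests idea: treating each test as an independent Bayesian update, the posterior from the first (sensitive) test becomes the prior for the second (more specific) one. All performance numbers here are invented, and real tests are rarely independent, so this is optimistic:

```python
# Toy two-stage screening: a cheap sensitive first test, then a more specific
# confirmatory test run only on stage-1 positives. Numbers are illustrative.
def posterior(prior, sensitivity, specificity):
    # P(disease | positive result), standard Bayes update
    p_pos_given_disease = sensitivity * prior
    p_pos_given_healthy = (1 - specificity) * (1 - prior)
    return p_pos_given_disease / (p_pos_given_disease + p_pos_given_healthy)

prev = 0.005                          # assumed prevalence in the screened population
p1 = posterior(prev, 0.90, 0.95)      # stage 1: sensitive, not very specific
p2 = posterior(p1, 0.90, 0.99)        # stage 2: assumes independent errors
print(f"P(disease | stage-1 positive)    = {p1:.2f}")   # ~0.08
print(f"P(disease | both tests positive) = {p2:.2f}")   # ~0.89
```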
You might be able to target and preemptively treat some aggressive cancers!
I lost my wife to melanoma that metastasized to her brain after a cancerous mole and margin were removed 4 years earlier. They did due diligence, and by all signs there was no evidence of recurrence, until there was. They think the tumor appeared 2-3 months before symptoms (headaches) appeared, so it was unlikely that you’d discover it otherwise.
With something like this, maybe you could get lower dose immunotherapy that would help your body eradicate the cancer?
Literally anything that reduces cancer deaths is a win. I'm certainly not campaigning against early detection tests like this! Just talking about a challenge that comes up operationalizing them.
No worries. I did not take it that way! There’s definitely a line with this stuff where there’s no benefit. But if we can eliminate deadly cancers where detection is difficult, we’ve saved the lives of thousands of people.
How about tracking deltas between blood draws, starting at a youngish age when things are on average presumed to be in a good state? When a new feature turns up in a subsequent blood draw, could it then be something more concerning?
The sensitivity challenge is compounded by the signal-to-noise ratio problem at ultra-low allelic fractions (<0.1%), where technical artifacts from library preparation and sequencing can mask true variants.
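A rough way to see the problem: compare the expected number of true variant-supporting reads to the expected number of error reads at a single position. Depth and post-correction error rate below are assumptions for illustration, not measured figures:

```python
# Why ~0.1% allelic fraction is hard: at realistic depths, noise reads at a
# position can rival the reads from a real variant. Illustrative numbers only.
from scipy.stats import poisson

depth = 30_000          # deduplicated read depth at the locus (assumed)
af = 0.001              # true allelic fraction of the variant
err = 0.0005            # per-base error rate after UMI/duplex correction (assumed)

expected_true = depth * af       # ~30 variant-supporting reads
expected_noise = depth * err     # ~15 error reads at the same position
# Probability that noise alone produces at least as many reads as a real variant
# (per site; summed across a large panel, these false calls add up)
p_noise_mimics_variant = poisson.sf(expected_true - 1, expected_noise)
print(expected_true, expected_noise, p_noise_mimics_variant)
```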
Long term, the goal should be to find a treatment that is safe enough and has such small side effects that it can be used for any suspicious mutation, even one that may be decades away from killing you.
Yes... as I read the OP's post I was thinking about how many weak natural poisons (e.g. bloodroot) have been shown to disperse effectively through the body, and how they might be a good treatment, e.g. a 1-2 month course of pills.
It’s when bone marrow cells acquire mutations and expand to take up a noticeable proportion of all your bone marrow cells, but without being fully malignant and expanding out of control.
Here's what may seem like an unrelated question in response: how can we get 10^7+ bits of information out of the human body every day?
There are a lot of companies right now trying to apply AI to health, but what they are ignoring is that there are orders of magnitude less health data per person than there are cat pictures. (My phone probably contains 10^10 bits of cat pictures and my health record probably 10^3 bits, if that). But it's not wrong to try to apply AI, because we know that all processes leak information, including biological ones; and ML is a generic tool for extracting signal from noise, given sufficient data.
But our health information gathering systems are engineered to deal with individual, very specific hypotheses generated by experts, which require high-quality measurements of specific individual metrics that some expert, such as yourself, has figured may be relevant. So we get high-quality data, in very small quantities: a few bits per measurement.
Suppose you invent a new cheap sensor for extracting large (10^7+ bits/day) quantities of information about human biochemistry, perhaps from excretions, or blood. You run a longitudinal study collecting this information from a cohort and start training a model to predict every health outcome.
What are the properties of the bits collected by such a sensor that would make such a process likely to work out? The bits need to be "sufficiently heterogeneous" (but not necessarily independent), and their indexes need to be sufficiently stable (in some sense). What is not required is for specific individual data items to be measured with high quality, because some information about the signal we're interested in (even though we don't know exactly what it is) will leak into the other measurements.
I predict that designs for such sensors, which cheaply perform large numbers of low-quality measurements, would result in breakthroughs in detection and treatment, by allowing ML to be applied to the problem effectively.
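A toy simulation of that claim, under the (strong) assumptions that the cheap measurements are unbiased and carry independent noise: each feature is nearly useless on its own, but even a naive aggregate becomes predictive as the number of features grows.

```python
# Synthetic demo: d noisy features each carry a tiny amount of information
# about a binary health outcome; a crude aggregate score improves with d.
# Numbers chosen only to make the effect visible.
import numpy as np

rng = np.random.default_rng(0)
n = 2000                              # people in the cohort

def auc(scores, labels):
    # Probability that a random positive scores higher than a random negative
    pos, neg = scores[labels == 1], scores[labels == 0]
    return (pos[:, None] > neg[None, :]).mean()

labels = rng.integers(0, 2, n)
for d in (1, 10, 100, 1000):
    # Each feature = weak signal (+/-0.05 depending on outcome) buried in unit noise
    signal = 0.05 * (2 * labels - 1)
    X = signal[:, None] + rng.normal(size=(n, d))
    score = X.mean(axis=1)            # crude aggregate; a real model would do better
    print(f"d={d:5d}  AUC={auc(score, labels):.2f}")
# AUC climbs from ~0.5 (useless) toward ~0.9+ as the number of weak features grows.
```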
I think it's a very interesting approach and I highly support such an initiative. The easiest way to get a lot of data out of the body is probably to tap the body's own monitoring system - the sensory nerves.
A chemosensor also sounds like a useful thing; it should give concentration over time. The minimally invasive option would be to monitor breath; blood gives a better signal.
Or perhaps even routine bloodwork could incorporate some form of sequencing and longitudinal data banking. Deep sequencing, which may still be too expensive, generates tons of data that can be useful for things we don't even know to look for today; capturing this data could let us retroactively identify meaningful biomarkers or early signals once we have better techniques. That way, each time models/methods improve, prior data becomes newly valuable. Perhaps the same could be said of raw data/readings from instruments running standard tests (as opposed to just the final results).
I'd be really curious to see how longitudinal results of sequencing + data banking, plus other routine bloodwork, could lead to early detection and better health outcomes.
Last time someone tried to inject chips into the bloodstream, public opinion didn't handle it too well. It's the same way we would learn a lot by being crueler to research animals, but most people have other priorities. Good or bad? Who knows? Research meets social constructs.
Apart from the likely technical infeasibility of your idea in today's society, this would require a humongous and diversified population sample to be meaningful (your 'heterogeneous bits'). This follows directly from the complexity of metabolic pathways you wish to analyze. Socially, you'll only be able to achieve that by not asking your sample for consent. Otherwise you'll have a highly biased sample, which could still be useful but for severely restricted research questions.
There are some pretty big longitudinal studies with consent ("45 and Up" has a quarter of a million people, for example; that's big enough that working predictions within the cohort would be a worthwhile health outcome).
There are nevertheless privacy issues, which I did not address as my first comment was already very long, especially for a tangent. Most obviously, people would be consenting to the collection of data whose significance they cannot reasonably foresee.
I do agree that most current AI companies are unlikely to be a good steward of such data, and the current rush to give away health records needs to stop. In a way it's a good thing that health records are currently so limited, since the costs will so obviously outweigh the benefits.
Someone should add a sensor to all those diabetes sensors people have in their arms all day and collect general info. It would obviously bias towards diabetics but that's like half the US population anyways so maybe it wouldn't matter that much.
This is a pretty cool study with some interesting findings! Cancer immunotherapy has a long history but has become very prominent in recent years. (Fun fact: the senior author on this paper, Ed Engleman, co-founded one of the first cancer cell therapy companies, Dendreon, in the early 90s.) However, the success of immunotherapies has been limited by the immune-exclusionary nature of the tumor microenvironment (TME). Why some tumors are immune-hot and others are immune-cold is still a very open research question.
In this study, the authors demonstrate pretty convincingly that erythropoietin (EPO, a hormone that stimulates red blood cell production in the bone marrow) reduces the recruitment of tumor-cell-killing T cells to the TME. It does this by acting on tumor macrophages, another type of immune cell, and changes the state of these cells to facilitate accumulation of immunosuppressive cells.
They work out the mechanism largely through mouse models and associative analysis in human tissue samples, but I thought it was interesting that this finding aligns with the clinical observation that cancer patients who receive recombinant EPO for treatment of anemia frequently experience tumor progression.
After reading this, I am going back to check out EPO expression in old datasets that I worked with haha.
I actually don't mind if it's true, but your comment was the first time I got a tingle that a post might be AI generated, though I could still tell it likely wasn't. Maybe you used it to rewrite parts of your comment? Anyways, I appreciate it and agree with you. I'm going to go check the Cancer Cell Line Encyclopedia myself, which IME is the single cleanest large dataset curated in biology in decades.
For others who might be curious, what makes this study genuinely good (evidenced somewhat by it being published in Science as a study that is not in humans but still shows actual cancer-curing efficacy, not just some pathway finding) is that they use difficult but necessary model systems that emulate real tumor environments: spontaneous tumors that arise in mice with full immune systems. Literally 99.9% of cancer study papers don't use such systems and, in my opinion, are fully invalidated for most interpretations.
Anyways I'm curious if the microenvironment will just evolve quickly to be EPO independent, as it typically seems to do something like that in real long term tumor environments you encounter in people compared to mouse models.
I used VSCode as my default IDE so the switch was very natural.
I am working on machine learning in bio, and many of the tools, methods, and data structures are very domain specific. Even so, the agent feature is good enough that for most tasks, I can describe the functionality I want and it gets me 80% of the way there. I pay $20 a month for Cursor and it has quickly become the last subscription I would cancel.
Not 100% — live share doesn’t work for realtime paired programming. There is an open source alternative extension that works though, albeit even more buggy and slightly more clunky than the official Microsoft extension.
The devcontainers extension was a year out of date up until the last month or so? Sorry, this is from memory, but definitely not 100% compatibility.
Strongly disagree. My son is going to be born into a county with an active measles outbreak caused entirely by misinformation and stupidity. We can’t vaccinate until 6 months. Absolutely preventable. Zero arguments against vaccination for measles. Public health is everyone’s problem.
> Absolutely preventable. Zero arguments against vaccination for measles. Public health is everyone’s problem
I sympathise with you. I'm not seeing a solution outside suspending this moronic minority's right to make decisions for themselves and their children or leaving the parts of the country that have chosen this fate to their own devices. In that framing, the question is which is leakier: suspending civil rights or literal viruses?
While it’s easy to blame anti-vaxxers, large contributors are also people whose immunity has waned and immigrants from countries with poor immunization rates.
Even with 100% immunization you’ll still have outbreaks occasionally, as people come in and out and the vaccine just doesn’t “take” in some small percentage.
I am curious about the same thing. I worked as a ML engineer for several years and have a couple of degrees in the field. Skimming over the document, I recognized almost everything but I would not be able to recall many of these topics if asked without context, although at one time I might have been able to.
What are others' general level of recall for this stuff? Am I a charlatan who never was very good at math or is it just expected that you will forget these things in time if you're not using them regularly?
There's a decent amount of cynicism in the comments, which I understand. I think this is a really cool and novel study, though.
Historically, cancer was treated with therapies that are toxic to all cells, relying on the fact that cancer cells divide quickly and are unable to handle stress as well as normal cells (chemotherapy, radiation).
The last couple of decades we've seen many targeted cancer therapies. These drugs generally inhibit the activity of a specific protein that lets the cancer cells grow (e.g. EGFR inhibitors) or prevents the immune system from killing the cancer cells (e.g. PDL1 inhibitors).
This mechanism is way more interesting. The gene BCL6 is usually turned on in immune cells when they are mutating to recognize foreign invaders. This process involves lots of DNA damage and stress, but BCL6 stops the cells from dying and is therefore important for normal immune function. Unfortunately, this makes BCL6 a gene that is often co-opted in cancer cells to help them survive.
The method cleverly exploits the oncogenic function of BCL6 not by inhibiting it, but by turning it into a guide, enabling the delivery of activating machinery to the targets of BCL6 and reversing the inhibitory effects on cell death.
The whole field of targeted degraders, molecular glues, and heterobifunctional molecules is a growing area of interest in cancer research.
This comment hits the nail on the head. Another big consideration with the technology in this paper that hasn't been mentioned in this thread is that it opens up a huge range of possibilities for targeting "undruggable" protein targets. Most drugs are small molecules that bind to sites on (relatively much larger) proteins, thereby getting in the way of their function. Unfortunately, the vast majority of proteins do not have a site that can be bound by a molecule in a way that 1) has high affinity, 2) has high specificity (doesn't bind to other proteins), and 3) actually abolishes the protein's activity.
With "induced proximity" approaches like the one in this study, all you need is a molecule that binds the target protein somewhere. This idea has been validated extensively in the field of "targeted protein degradation", where a target protein and an E3 ubiquitin ligase, a protein that recruits the cell's native proteolysis machinery, are recruited to each other. The target protein doesn't have to be inactivated by the therapeutic molecule because the proteolysis machinery destroys it, so requirement #3 from above is effectively removed.
The molecule in this study does something similar to targeted protein degradation, but this time using a protein that affects gene expression instead of one that recruits proteolysis machinery. The article focuses on the fact that cancers are addicted to BCL6. This is an important innovation in the study and an active area of research (another example at [1]), but it leaves out the fact that these induced-proximity platforms are much more generalizable than traditional small molecules, because it's the proteins they recruit that do all the work rather than the molecules themselves. This study goes a long way toward validating this principle, pioneered by targeted protein degradation and PROTACs, and shows that it can be applied broadly.
I haven’t read the paper yet, but the news article seemed a bit meh.
BCL-2 inhibitors, mainly venetoclax, are used in cancer therapies quite often; they also trigger apoptosis and are very effective. Venetoclax was designed to target B-cell-related cancers, but it was found to be so effective that the FDA approved it for primary cases of acute myeloid leukemia. So killing cancer by triggering apoptosis is very well known. I think the novel part might be the two-protein approach, so it is probably more targeted for metabolic activities... but yeah, I didn’t read the paper yet.
Anyways, for the side effects a major one could be Tumor Lysis Syndrome (TLS). Basically, if you apoptose the cancer cells super fast, the molecules from those cells spread everywhere and it becomes toxic for the patient. This is at least the case for Venetoclax.
Cancerous cells are fairly diverse across individuals, or even within a single individual, and many biological treatments require precise sequencing of the tumor DNA of that individual patient to adjust and work. In some cancers, there is a nasty "Russian roulette" effect in play, where a certain treatment may be extremely efficient (in practice a cure, even though oncologists avoid that word) in people with a certain mutation and totally useless in others, even though from the macroscopic point of view, their tumors look the same.
So then, basically, for each cancer, the cancer cells should be sequenced, and based on the cell type and the DNA sequencing, we have a list of "tools" to deliver a payload to those very cells (without delivering such a payload to healthy cells, of course)?
In practice, we can only make use of some known mutations. Not just for delivering chemicals, but also for "teaching" the immune system to attack such cells, which it will do vigorously once it is able to recognize them.
Let's hope that this catalogue will grow until it covers at least all the typical cases.
My understanding is that even though immunotherapy's mechanism may seem more natural than chemotherapy and radiation, and in some instances may be a magic bullet, up-regulating the immune system can have serious consequences. I remember reading about a clinical trial showing similar progression free survival but increased grade 4-5 toxicities (requiring hospitalization or being fatal). My assumption was that these are autoimmune conditions that are aggravated in some of the patient population.
The OP talked about the mechanism, so we can't know the side effects here. Someone will publish a new drug that relies on this mechanism, and then they will check the side effects of that specific drug in cells, rats, or other experimental species.
It’s fairly common for private companies to have non-transferability clauses for both options and stock.
IANAL, but this sounds pretty standard and I doubt OP will be able to fight the legitimacy of the claim. Re: a forward sale, sounds like it blatantly violates the agreement and would expose OP to some obvious risks. Agreed OP should talk to a lawyer.
Is anyone else surprised by OP’s indignation? Startups are risky, options aren’t a guaranteed payday, exercising is a gamble, and liquidity events are regulated for a reason. Sometimes you lose money. Have we forgotten this?
> Startups are risky, options aren’t a guaranteed payday, exercising is a gamble, and liquidity events are regulated for a reason. Sometimes you lose money. Have we forgotten this?
I spent around $100k to purchase this stock and paid tax on the gain. They are now worth millions of dollars on the open market, but the company will not allow me to sell them... I understand it's risky, but at this point they just aren't letting me get a payday...
> They are now worth millions of dollars on the open market
No, they aren't, because there is no open market for private company shares.
You have a private company valuation you can base the share price on, but that's it. And even that is typically based on black magic and accounting tricks since, at the risk of repeating myself, there's no open market in which those shares are trading and thus no method for price discovery.
And any theoretical transfer of ownership would occur in a private transaction or on a private marketplace that specializes in matching buyers and sellers in private company shares.
<chopped this bit out since the dead horse is beaten>
Edit: And to provide something a bit more constructive, here: unless some lawyer comes up with something clever--and certainly it's worth exploring your options--my bet is your only real move is to just hold onto those shares and wait.
Eventually there may be a liquidity event--probably an acquisition--and hopefully you'll net out positive. You basically bought a 100k lottery ticket. I suspect all you can do now is move on and hope it pays off.
>> They are now worth millions of dollars on the open market
> No, they aren't, because there is no open market for private company shares.
I do have offers to purchase my stock. In a bygone era, I could simply instruct the company to transfer my shares and broker the transaction myself.
I get what you're saying though.
The purpose of my post here isn't to complain so much as to inquire about people who have executed forward sales and are willing to speak about the experience. From what I understand, this is being done quite a bit, and I guess the idea is that the company never has to find out...
> I spent around $100k to purchase this stock and paid tax on the gain.
Yes, that’s how options exercises work.
> They are now worth millions of dollars on the open market, but the company will not allow me to sell them...
But you were presumably aware of that limitation when you entered into the agreement under which you purchased them (if you dispute that that is the agreement you agreed to, then, definitely, you need a lawyer). So, even insofar as you describe the “open market” accurately, that market isn’t open to you.
> I understand it's risky, but at this point they just aren't letting me get a payday...
Perhaps not. Are they obligated to let you get a payday? Is it in their interests to do so? If neither of those is the case, why do you expect they would?
Trying to take this in a more constructive direction though: what happens if I go bankrupt? I have no idea how all of that works, but I can imagine a judge saying this limited transferability clause isn't legal or something. How can I go bankrupt when I kinda-sorta own millions of dollars worth of company stock?
> Trying to take this in a more constructive direction though: what happens if I go bankrupt?
If that is an important real consideration, you should consult an attorney; my general understanding, which you should not rely on, is that non-transferability clauses mostly are not enforceable in bankruptcy, with some particular exceptions.
It's a theoretical question, and I'm not trying to be annoying here, just genuinely curious.
If this stock is non-transferable, does that mean it has an inherent value of zero? Does that mean I can file for bankruptcy and still keep the stock? I just feel like the nature of property rights in this country doesn't square with transferability restrictions. :shrug:
> For example, in a U.S. case, Associated Grocers of Maine, Inc., the Bankruptcy Court for the District of Maine ruled that federal bankruptcy law preempted a restriction on the transfer of the debtor's stock, thereby permitting a sale of the stock free and clear of the restriction.
> If this stock is non-transferable, does that mean it has an inherent value of zero?
The value of stock comes from the claim against the assets in the case of dissolution; the ability to sell in the market just enables one to realize that value without the company dissolving in whole or (as by issuing dividends) in part.