It’s a meme that will never die, but there’s no proof it ever happened:
> This story doesn’t even show that Target tried to figure out whether the girl was pregnant. It just shows that she received a flyer that contained some maternity items and her weird dad freaked out and wanted to talk to the manager. There’s no way to know whether the flyer arrived as a result of some complex targeting algorithm that correctly deduced that the girl was pregnant because she bought a bunch of lotion, or whether they just happened to be having a sale on diapers that week and sent a flyer about it to all their customers.
I think the parent comment was pointing to the lack of an established causal link. The finding in their abstract is extrapolated by statistical inference; for example, smokers tend to drink more, etc. The paper does take such factors into account. Personally, I wouldn't jump to such a strong conclusion from statistical inference, because it closes the door on other factors that might be even stronger when combined. The paper reflects motivated reasoning more than a discovery outcome. That said, smoking is of course a major health risk; I'm just pointing at the research approach.
In the paper they claim it matters for midlife mortality too:
> People who start smoking at age 18 begin to exhibit higher mortality several decades later, with particularly large effects beginning at ages 45–64 (Lawton et al. 2025). A health-capital model allows the mortality rates of older persons to be determined not only by their current smoking behavior but also by smoking in earlier years. In the United States, smoking rates started falling for college graduates earlier than they did for the non-college population.
...
> [...] with rapidly improving treatments and screening for lung cancer (Howlader et al. 2020), the major impact of smoking over the longer-term—particularly for people aged 55–64—arises from other more-common tobacco-related diseases such as chronic obstructive pulmonary disease (COPD); cardiovascular diseases such as strokes, aneurysms, and heart attacks; diabetes; and other types of cancers (Carter et al. 2015). Perhaps more surprising is that past county-level smoking rates are highly predictive of deaths of despair. This finding, however, is consistent with an emerging literature in biology that points to a causal influence of smoking on drug addiction [...]
I like "ghosts" as a simple metaphor for what you chat with when you chat with AI. Usually we chat with Casper the Friendly Ghost, but there are a lot of other ghosts that can be conjured up.
Some people are obsessed with chatting with ghosts. It seems like a rational adult couldn't be seriously harmed by chatting with a ghost, but there are news reports showing that some people get possessed.
Yes, that's right. It might also make sense to generate multiple passkeys for an account. For example, a separate one for logging in from Apple devices.
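Roughly, registering an extra passkey against the same account looks something like this with the WebAuthn API (the relying-party and user values below are placeholders, and the challenge plus the list of existing credential IDs would come from the server):

```typescript
// Rough sketch of registering an additional passkey for an account that
// already has one. The rp/user values are placeholders; in practice the
// server generates the challenge and supplies the existing credential IDs.
async function registerAdditionalPasskey(
  challenge: BufferSource,
  existingCredentialIds: BufferSource[],
): Promise<Credential | null> {
  return navigator.credentials.create({
    publicKey: {
      challenge,
      rp: { name: "Example App", id: "example.com" },       // placeholder relying party
      user: {
        id: new TextEncoder().encode("user-1234"),          // placeholder user handle
        name: "user@example.com",
        displayName: "Example User",
      },
      pubKeyCredParams: [{ type: "public-key", alg: -7 }],  // ES256
      // Listing already-registered credentials stops the authenticator from
      // re-registering them, so each device ends up with its own passkey.
      excludeCredentials: existingCredentialIds.map(
        (id): PublicKeyCredentialDescriptor => ({ type: "public-key", id }),
      ),
      authenticatorSelection: { residentKey: "required" },  // passkeys are discoverable credentials
    },
  });
}
```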
It seems like latency will be poor if you have to wait for a server-side round trip to an LLM to update the UI whenever you press a button?
In a context where you're chatting with an LLM, I suppose the user would expect some lag, but it would be unwelcome in regular apps.
This also means that a lot of other UI performance issues don't matter - form submission is going to be slow anyway, so just be transparent about the delay.
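As a rough sketch of the "be transparent" idea (the endpoint and element IDs here are made up), you basically disable the control and show a visible pending state until the slow LLM-backed round trip comes back:

```typescript
// Illustrative only: the endpoint and element IDs are hypothetical.
// The button is disabled and a pending state is shown while the slow
// LLM-backed request is in flight.
const button = document.querySelector<HTMLButtonElement>("#submit")!;
const status = document.querySelector<HTMLSpanElement>("#status")!;

button.addEventListener("click", async () => {
  button.disabled = true;
  status.textContent = "Thinking…";              // make the slow round trip visible

  try {
    const res = await fetch("/api/llm-action", { // hypothetical endpoint
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify({ action: "submit-form" }),
    });
    const data = await res.json();
    status.textContent = data.message ?? "Done";
  } catch {
    status.textContent = "Something went wrong, try again.";
  } finally {
    button.disabled = false;
  }
});
```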
Saying it's "algorithms" trivializes the problem. Even on reasonable platforms, trolls often get more upvotes, reshares, and replies. The users are actively trying to promote the bad stuff as well as the good stuff.
There are a variety of possible memory mechanisms, including simple things like recording a transcript (as a chatbot does) or having the LLM update markdown docs in a repo. So having memory isn't, by itself, interesting. Instead, my question is: what does Letta's memory look like? Memory is a data structure. How is it structured, and why is that good?
I'd be interested in hearing about how this approach compares with Beads [1].
Beads looks cool! I haven't tried it, but as far as I can tell, it's more of a "Linear for agents" (memory as a tool), as opposed to baking long-term memory into the harness itself. In many ways, CLAUDE.md is a weak form of "baking memory into the harness", since AFAIK on bootup of `claude`, the CLAUDE.md gets "absorbed" and pinned in the system prompt.
Letta's memory system is designed around the MemGPT reference architecture, which is intentionally very simple - break the system prompt up into "memory blocks" (all pinned to the context window, since they are injected into the system prompt), which are modifiable via memory tools (the original MemGPT paper is still a good reference for what this looks like at a high level: https://research.memgpt.ai/). So it's more like a "living CLAUDE.md" that follows your agent around wherever it's deployed - ofc, it's also interoperable with CLAUDE.md. For example, when you boot up Letta Code and run `/init`, it will scan for AGENTS.md/CLAUDE.md and ingest those files into its memory blocks.
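If it helps, here's a rough sketch of the shape of it (not our actual SDK code, just an illustration of the structure, with names I made up for this comment): labeled, size-limited blocks that get rendered into the system prompt every turn and edited through memory tools:

```typescript
// Rough illustration of the MemGPT-style design described above; the names
// are illustrative, not the Letta SDK. Each block is a labeled, size-limited
// chunk of text that is rendered into the system prompt and can be edited by
// the agent through memory tools.
interface MemoryBlock {
  label: string;   // e.g. "persona", "human", "project"
  value: string;   // current contents of the block
  limit: number;   // max characters, so the pinned context stays bounded
}

function renderSystemPrompt(base: string, blocks: MemoryBlock[]): string {
  const rendered = blocks
    .map((b) => `<${b.label}>\n${b.value}\n</${b.label}>`)
    .join("\n");
  return `${base}\n\n# Memory blocks (always in context)\n${rendered}`;
}

// A "memory tool" the agent can call to rewrite a block in place,
// which is what makes it behave like a living CLAUDE.md.
function memoryReplace(blocks: MemoryBlock[], label: string, newValue: string): void {
  const block = blocks.find((b) => b.label === label);
  if (!block) throw new Error(`no memory block labeled "${label}"`);
  if (newValue.length > block.limit) throw new Error(`block "${label}" over its limit`);
  block.value = newValue;
}
```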
LMK if you have any other questions about how it works; happy to explain more!
I think it's mostly complementary, in the same way a Linear MCP would be complementary to a MemGPT/Letta-style memory system.
I guess the main potential point of confusion would arise if it's not clear to the LLM / agent which tool should be used for what. E.g. if you tell your agent to use Letta memory blocks as a scratchpad / TODO list, that functionality overlaps with Beads (I think?), so it's easy to imagine the agent getting confused due to stale data in either location. But as long as the instructions are clear about what context/memory to use for what task, it should be fine / complementary.
Just like TikTok. The author doesn't think TikTok is inevitable, and I fully agree with them! But in our real timeline TikTok exists. So TikTok is, unquestionably, the present. Wide adoption of gen-AI is the present, too.