I've noticed this too when dealing with people with power. If you want to be seen as a peer, you have to stop caring. It's weird but it's definitely in the culture. As someone who learned English as a second language, it's especially weird since I worked hard to speak and write good English.
It's also ironic for a self-described "classic liberal" to build a company which grows the power of the government instead of limiting it. Alex Karp must have deep cognitive dissonance and likely suffers for it.
Palantir itself is trading at an unjustifiable premium given their fundamentals. Their P/E is north of 200x. Its forward guidance also doesn't justify the price, imo.
So their beef with analysts is obvious, since they face huge downside risk in the price. The recent pullback of around 21% is not sufficient, in my opinion. Note this is not financial advice.
A huge portion of the "I'm a classical liberal" was always just a smokescreen. It was never an ideology for many of these folks. It was just a "more serious" mechanism of complaining about woke college students.
> It's also ironic for a self described "classic liberal" building a company which grows the power of the government instead of limiting it.
I think that he really does see himself as a classic liberal, in that he really does see government as "limiting" to people like him with things like regulation. Say what you will about the current administration, they're absolutely not going to regulate people who create wealth.
There's a divine right of kings element mixed in here. Thiel, Karp, Trump and the rest really do think that the order of the universe, or the will of a higher power, is putting them in a place to operate without limits. They see any sort of regulation of their behavior as an affront to the order of nature. That's why they consider themselves classically liberal. Ultimately, the little people - that's us - are being illiberal by electing governments that can do things like say "hey, maybe we don't put everyone under constant surveillance" that would both challenge their power and their profitability.
Don't forget he also had Sam Altman's phone number. Do any of you have his number? Also, before he did all this he was semi-retired for 5 years because of a successful exit. So for anyone thinking they can replicate this, ask...
1. Are you already rich? Do you have cash in the bank to vibecode a project fulltime for many months just for fun?
We use Apache Arrow at my company and it's fantastic. The performance is so good. We have terabytes of time-series financial data and use Arrow to store and process it.
We use Apache Arrow at my company too. It is part of a migration from an old in-house format. When it works it’s good. But there are just way too many bugs in Arrow. For example: a basic arrow computation on strings segfaults because the result does not fit in Arrow’s string type, only the large string type. Instead of casting it or asking the user to cast it, it just segfaults. Another example: a different basic operation causes an exception complaining about negative buffer sizes when using variable-length binary type.
This will obviously depend on which implementation you use. Using the Rust arrow-rs crate, you at least get panics when you overflow max buffer sizes. But one of my enduring annoyances with Arrow is that they use signed integer types for buffer offsets and the like. I understand why it has to be that way, since it's intended to be cross-language and not all languages have unsigned integer types. But it does lead to lots of very weird bugs when you are working in a native language and casting back and forth between signed and unsigned types. I spent a very frustrating day tracking down this one in particular https://github.com/apache/datafusion/issues/15967
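To make the signed-offset pitfall concrete, here's a minimal pure-Python sketch (my own illustration, not Arrow code): a regular Arrow string array stores its character data behind an int32 offsets buffer, so a result just past 2 GiB needs `large_string`, and a careless unsigned-to-signed cast of that byte count wraps to exactly the kind of negative "buffer size" the exception complained about.

```python
import struct

# Arrow 'string' arrays use an int32 offsets buffer; the last offset
# is the total byte length, so character data is capped near 2 GiB.
INT32_MAX = 2**31 - 1

def fits_in_string_type(total_bytes: int) -> bool:
    """Would this data fit a regular (32-bit offset) string array?"""
    return total_bytes <= INT32_MAX

# A result one byte past the cap requires large_string (64-bit offsets).
overflowing = INT32_MAX + 1
assert not fits_in_string_type(overflowing)

# Reinterpreting that count as a signed int32 (what a sloppy
# unsigned-to-signed cast does in native code) yields a negative
# "buffer size" instead of a clean error.
wrapped = struct.unpack("<i", struct.pack("<I", overflowing & 0xFFFFFFFF))[0]
print(wrapped)  # -2147483648
```

The safe behaviors are the ones the parent comments describe: cast to `large_string` up front, or at least fail loudly (as arrow-rs does with a panic) instead of segfaulting.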
I'm curious about something. If this is based on historical datasets, and people build strategies using LLMs, then in theory this is deeply flawed since LLMs would contain the knowledge about some of the datasets, and certainly the prices of the biotech stocks. This approach cannot be used to figure out which strategies are good because they know the future outcome.
How do you prevent this problem? It's a classic problem in backtesting strategies where you leak future information into the model.
Yes, this is a major problem I thought about. The makeshift solution here was to redact the "identifying information" in the press release. Even then, I benchmarked that GPT-5 could still match it back to the right ticker around 53% of the time. It does not seem to be able to recall the price of the stock in my benchmark, but to be honest I'm not entirely sure how trustworthy this benchmark is, and I may need to come up with a few more clever solutions to validate.
One solution could be to get experts to write similar press releases so that the text itself is out of distribution or if an actual quant firm has internal models, they can just make sure that there is a cutoff date to the pre-training data.
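For anyone curious what that redaction pass might look like, here's a minimal sketch (my own illustration; the patterns and placeholder tokens are hypothetical, not the author's actual pipeline): mask tickers, company-style names, and dollar figures before the press release reaches the model.

```python
import re

# Illustrative redaction patterns -- real press releases would need a
# much more careful pass (e.g. a NER model), this just shows the idea.
PATTERNS = [
    (re.compile(r"\b[A-Z]{2,5}\b(?=\s*[:)])"), "[TICKER]"),    # e.g. "(ABCD)"
    (re.compile(r"\b[A-Z][A-Za-z]+ (?:Inc|Corp|Ltd|Pharmaceuticals)\.?"), "[COMPANY]"),
    (re.compile(r"\$\d[\d,.]*\s*(?:million|billion)?"), "[AMOUNT]"),
]

def redact(text: str) -> str:
    for pattern, token in PATTERNS:
        text = pattern.sub(token, text)
    return text

print(redact("Acme Pharmaceuticals (ACME) reported $12.5 million in revenue."))
# → [COMPANY] ([TICKER]) reported [AMOUNT] in revenue.
```

The 53% match-back rate in the parent comment shows why this is only a makeshift fix: the model can re-identify the company from the unredacted prose alone.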
I'm curious, when you ran a quant fund, what was your approach?
We didn't use other LLMs. We built our own models and had a system designed to never leak future information at any given timepoint. Models could only access the data the system allowed at that timepoint, gating off future information. This means even training had to go through the same system.
You have to design it from the ground up with that approach. Just to give you an idea of how hard it is, when a company releases an earnings report, they can update it in the future with corrected information, so if you pull it later you will leak future information into the past. So even basics like earnings need to be versioned by time.
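A minimal sketch of that point-in-time gating (the names here are illustrative, not the actual system): every record carries the timestamp at which it became *known*, and every query filters on that timestamp rather than on the period the data describes.

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class EarningsRecord:
    ticker: str
    period: date    # quarter the report covers
    known_at: date  # when this version was published
    eps: float

# Two versions of the same quarter: original release, later correction.
history = [
    EarningsRecord("XYZ", date(2024, 3, 31), date(2024, 4, 25), eps=1.10),
    EarningsRecord("XYZ", date(2024, 3, 31), date(2024, 7, 10), eps=0.95),
]

def eps_as_of(ticker: str, period: date, as_of: date):
    """Latest version known on or before `as_of` -- never a future correction."""
    versions = [r for r in history
                if r.ticker == ticker and r.period == period and r.known_at <= as_of]
    return max(versions, key=lambda r: r.known_at).eps if versions else None

# A backtest running in May 2024 sees the original figure...
print(eps_as_of("XYZ", date(2024, 3, 31), date(2024, 5, 1)))  # 1.1
# ...while one running in August sees the corrected one.
print(eps_as_of("XYZ", date(2024, 3, 31), date(2024, 8, 1)))  # 0.95
```

A naive backtest that pulls the corrected 0.95 for a May decision has leaked the future into the past, which is exactly the bug the versioning prevents.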
But you know, most people don't really care and think they have an edge, and who knows maybe they do. Only live trading will prove it.
One interesting thing from this paper is how big of a LiDAR shadow there is around the Waymo car, which suggests they rely on cameras for anything close (maybe they have radar too?). It seems LiDAR is only useful for distant objects.
This is a good analysis of the yen carry trade, but I'd argue the causality is backwards. Record-high margin debt in the U.S. is the root cause; it's a powder keg, and the yen is just the fuse being lit. When system-wide leverage is this extreme, any funding shock (whether it's BOJ rate hikes, a hawkish Fed, or a geopolitical event) can initiate the liquidation cascade. The yen carry trade is one source of that leverage, but the fragility was baked in. If Japan hadn't done anything, something else would have caused the liquidation cascade; it was only a matter of time.
The real story isn't Tokyo, it's that Wall Street built a house of cards and ran out of steady hands.
I have a public ThetaEdge card that monitors margin debt and calculates the correlation with the S&P here:
$566B in margin debt. Is that actually a financial black swan amount of money? If 50% of that got "corrected" into Money Heaven on Friday, would it be more than a bad day at the stock market?
You're right that $566B alone isn't a black swan. That FINRA figure only captures retail and small institutional margin at broker-dealers. It excludes prime brokerage (hedge funds), securities-based lending, and repo markets. Conservative estimates put total leveraged exposure at $10-15 trillion. The $566B is maybe 5% of the iceberg.
I see visible margin debt as both a canary and a proxy. It's a canary because retail cracks first (less sophisticated risk management, stricter regulatory margin). It's a proxy because when visible leverage contracts, it usually means hidden leverage is contracting too. They're exposed to the same assets. When FINRA margin debt starts falling, it's not just a warning, it's confirmation that system-wide deleveraging is already underway.
This article crystallizes something I witnessed firsthand last week.
Overheard a guy at a restaurant explaining how he builds phone apps with AI and no coding experience. When asked how he verifies the code works, he said he pastes it into a different AI to explain it.
That's the "slopware" problem in action. The code compiles. It might even work. But there's no understanding of what it's actually doing, no ability to debug when it breaks in production, no awareness of the technical cruft accumulating with every prompt. That's a problem for people creating software for others and is a huge opportunity for software developers to take prototypes and build real stuff.
Does anyone remember the RAD days of the 90s?
On the flip side, people making software to solve THEIR OWN problems don't need to make anything production quality. It's for a single user: themselves! Maybe the LLMs are good enough now that people don't need to buy or subscribe to software that solves trivial problems, since they can build their own solutions. Maybe the dream of Smalltalk, HyperCard, and even the early web, where anyone could use the computer for what it was meant for, is finally here?