I wonder what's causing the decline in views. One plausible explanation I had was that views might be down because people are using AI search (ChatGPT, etc.), which, unlike Google, doesn't show videos prominently. But since likes haven't gone down, that doesn't seem likely.
Apparently very few people use the subscriptions list; most rely on videos from channels they subscribe to and watch appearing on the YouTube homepage. And YouTube changed what videos they put there. Instead of new videos by people you watch, plus related ones, they show:
videos you just watched
videos you watched 10 years ago
auto-dubbed videos on topics you are not interested in
If I had to choose from your list, I'd prefer anything with 10 views. Small channels are where the best possible content is concentrated. And by the way, small channels never use arrows and similar lowball clickbait strategies.
Could it be related to mandatory Widevine encryption?
On my phone, the mobile site (m.youtube.com) introduced Widevine a couple of weeks ago (last week of August, IIRC). No idea if I'm just unlucky and part of a shitty A/B experiment, but I definitely had to recompile libc (I'm on Linux) with patches from Chromium and install Widevine before I could watch videos again.
Whenever I replace my patched libc with the unpatched original, the Widevine plugin crashes every time I try to play a video on m.youtube.com. And it used to work before.
Anecdotal, but for a while it felt like YouTube had decent content on whatever I was looking for. I trusted product reviews there ever so slightly more than text content because of the relatively higher cost of producing video. Nowadays there's a glut of low-quality stuff: anything from low-effort videos to outright text-to-speech non-videos that snare you with a promising thumbnail. Search results only surface about 5-10 relevant videos, followed by things of specious relevance. On top of that, they jammed Shorts into prominent screen real estate. It screams "hey, while I've got you here, how about a few of these distractions!"
So I stopped going there as much. They stopped respecting visitor intent. Like every other platform, they just want to keep you on the site as long as possible, sifting through a feed of dopamine slop.
For me, 9 out of 10 requests with GPT-5 Pro failed for some weird reason. This never happened with previous models. I ended up downgrading my subscription; I realized I wasn't using it enough. And for me, Thinking mode has been good enough.
I was trying to plan some data analysis. I tried it a few times with GPT-5 Pro, and it failed almost every time; it only succeeded once. Meanwhile, I didn't have any issues with the Thinking model.
Absolutely. After the “Great Resignation,” where labor was tight and wages were pushed up, there was a large push in both tech and financial services to stand up offshoring operations in Chennai and Hyderabad (India), Manila (Philippines), and LATAM (primarily Mexico City, but also parts of South America) in order to avoid hiring US workers.
As others have mentioned in this thread, simply look at the unemployment rate and the time it takes these workers to find a job. The labor force exists; companies just don't want to pay for it or offer flexible work (hence RTO, which is also used to cram down labor costs). They want the control back.
Maybe I am oversimplifying it, but isn't the reason that they are a lossy map of the world's knowledge, and this map will never be fully accurate unless it is the same size as the knowledge base?
The ability to learn patterns and generalize from them compounds the problem, because people then start using it for use cases it will never be able to solve 100% accurately (because of its lossy-map nature).
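The "smaller map can't be fully accurate" point is essentially the pigeonhole principle. Here's a toy illustration (not a claim about how LLMs actually work): squeeze more facts than there are slots into a fixed-size store, and some recalls are guaranteed to be wrong.

```python
# Toy "lossy map": 1000 facts hashed into 256 slots. Collisions overwrite
# earlier facts, so at least 1000 - 256 = 744 recalls must come back wrong.

def lossy_store(facts, num_slots):
    """Map (key, value) facts into a fixed-size table by hashing keys.
    Colliding keys overwrite each other -- that's the compression loss."""
    table = {}
    for key, value in facts:
        table[hash(key) % num_slots] = value
    return table

def recall(table, key, num_slots):
    """Look up whatever currently occupies the key's slot."""
    return table.get(hash(key) % num_slots)

facts = [(f"fact-{i}", i) for i in range(1000)]
NUM_SLOTS = 256  # deliberately smaller than the "knowledge base"
table = lossy_store(facts, NUM_SLOTS)

wrong = sum(1 for k, v in facts if recall(table, k, NUM_SLOTS) != v)
print(f"{wrong} of {len(facts)} facts recalled incorrectly")
```

The store always answers confidently (every lookup returns *something*), it's just sometimes an answer that belongs to a different fact, which is a decent analogy for hallucination.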
Many SPVs were available for recent funding rounds, but my biggest gripe was the excessive fees layered on top of them.
More importantly, we should ask who will be left holding the bag when this bubble bursts. For now, investors are getting their money back through acquisitions. Founders with desirable, traditional credentials are doing well, as are early employees at large AI startups who are cashing out on the secondary market. It appears the late-stage employees will be the ones who lose the most.
I don't mean to be rude but this idea that there's some decisive, authoritative data vs. sketchy anecdotal claims kind of drives me up the wall.
What data would or could exist in this case beyond the hundreds of calls the author is apparently basing their observations on? That seems like a reasonable qualitative data set to me.
On the other hand, what you're asking for doesn't make much sense. Any push/pull strategy difference is going to change who takes a call in the first place. You're not running an RCT on a random sample of the population.
The point is simply that you're going to have a better time doing sales if your supply matches some pre-existing demand. You don't need a quantitative study to understand why that may well be the case.
It's the same reason that, despite being bombarded with advertisements, we don't all go out and buy 16 meals a day or 10 cars a year simply because someone tried to sell those things to us. We act when we have a need, and founders need to understand that as a physical reality when trying to sell their products.
Your comment hits on a broader tension I see a lot, not just here but in business strategy in general. It's the divide between compelling, experience-based narratives and empirical evidence. I think both are essential.
The author has presented a fantastic and intuitive narrative with the "BUYER-PULL" model. Your analogy is spot-on: you can't sell 16 meals a day to someone who only needs three. The qualitative insight is powerful.
My request for data comes from the next step. How do we know this narrative is not just a "just-so story"? How much does this effect matter on the margin? In the complex world of B2B sales, where needs aren't always as clear as hunger, can "push" tactics sometimes be effective at helping a buyer crystallize a latent need?
Asking for metrics like close rates isn't meant to demand an impossible standard of scientific proof. Instead, it's an attempt to test the boundaries of this framework and understand its real-world impact. Great insights often come from quantifying the effects of a powerful story.
This is super cool. It's great except for two things:
1. I don't want the NotebookLM branding.
2. Data processing. Some of the data I'll be sharing is private, so I want to make sure it won't be used for training models.
Is PE your target user? And is deal sourcing the primary use case?
If yes, I'm guessing they would want to understand the fundamentals of a company, so start by looking at the SEC filings (assuming a public company). Essentially, help them narrow the entire universe down to the set of companies that are interesting.
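The narrowing-down step is, at its core, a screen over fundamentals. A minimal sketch, with made-up tickers, fields, and thresholds (real inputs would be parsed from SEC filings, not hard-coded):

```python
# Hypothetical universe -- the tickers and numbers are illustrative only.
companies = [
    {"ticker": "AAA", "revenue_growth": 0.25, "debt_to_equity": 0.4},
    {"ticker": "BBB", "revenue_growth": 0.02, "debt_to_equity": 2.1},
    {"ticker": "CCC", "revenue_growth": 0.40, "debt_to_equity": 0.9},
]

def screen(universe, min_growth=0.10, max_leverage=1.0):
    """Keep companies growing faster than min_growth with leverage
    at or below max_leverage (thresholds are arbitrary examples)."""
    return [c for c in universe
            if c["revenue_growth"] >= min_growth
            and c["debt_to_equity"] <= max_leverage]

shortlist = screen(companies)
print([c["ticker"] for c in shortlist])  # ['AAA', 'CCC']
```

In practice the filter criteria would come from whatever the PE user cares about, and the data from a filings feed; the point is just that the product's job is turning "everything" into a shortlist.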
Currently, yes. However, it's quite early and I'm exploring whether this is the best option.
The reason I posted here on HN is to see if anyone more technical might be interested in this as well, e.g. to leverage an API like this with AI agents or otherwise.