Yeah, I've found AI 'miracle' use-cases like these are most obvious for wealthy people who stopped doing things for themselves at some point.
Typing 'Find me reservations at X restaurant' and getting unformatted text back is way worse than just going to OpenTable and seeing a UI that has been honed for decades.
If your old process was texting a human to do the same thing, I can see how Clawdbot seems like a revolution though.
Same goes for executives who vibecode in-house CRM/ERP/etc. tools.
We all learned the lesson that mass-market IT tools almost always outperform in-house ones, even with strong in-house development teams, but now that the executive is 'the creator,' there's significantly less scrutiny on things like compatibility and security.
There's plenty real about AI, particularly as it relates to coding and information retrieval, but I'm yet to see an agent actually do something that even remotely feels like the result of deep and savvy reasoning (the precursor to AGI) - including all the examples in this post.
One of the most important details of Sacks's life, one that dogged him nearly to the end (and which is important to this NY piece), was his minimization of his own sexuality. He was not "openly gay" at all.
One of the biggest problems frontier models will face going forward is how many tasks require expertise that cannot be achieved through Internet-scale pre-training.
Any reasonably informed person realizes that most AI start-ups looking to solve this are not trying to create their own pre-trained models from scratch (they will almost always lose to the hyperscale models).
A pragmatic person realizes that they're not fine-tuning/RL'ing existing models (that path has many technical dead ends).
So, a reasonably informed and pragmatic VC looks at the landscape, realizes they can't just put all their money into the hyperscale models (LPs don't want that), and looks for start-ups that take existing hyperscale models and expose them to data that wasn't in their pre-training set, hopefully in a way that's useful to some users somewhere.
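In practice that usually looks something like the minimal sketch below (purely illustrative; search_internal_docs and call_hosted_model are hypothetical stand-ins for whatever retrieval store and hosted model API a given start-up actually uses):

    # Minimal sketch of the "wrapper" pattern: retrieve proprietary data the base
    # model never saw during pre-training, stuff it into the prompt, and call a
    # hosted frontier model. Both helper functions are hypothetical stubs.

    def search_internal_docs(query: str, k: int = 5) -> list[str]:
        # Hypothetical: query an embedding index over the customer's private documents.
        raise NotImplementedError

    def call_hosted_model(prompt: str) -> str:
        # Hypothetical: forward the prompt to a hyperscaler's chat endpoint.
        raise NotImplementedError

    def answer_with_private_context(question: str) -> str:
        snippets = search_internal_docs(question)
        prompt = (
            "Answer using only the context below.\n\n"
            + "\n---\n".join(snippets)
            + "\n\nQuestion: " + question
        )
        return call_hosted_model(prompt)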
To a certain extent, this study is like saying that Internet start-ups in the 90's relied on HTML and weren't building their own custom browsers.
I'm not saying that this current generation of start-ups will be as successful as Amazon and Google, but I just don't know what the counterfactual scenario is.
The question the article doesn't answer completely is how useful these pipelines actually are for the startups. It certainly implies that, for at least some of them, there's very little value added in the wrapper.
Got any links to explanations of why fine tuning open models isn’t a productive solution?
Besides renting the GPU time, what other downsides exist on today’s SOTA open models for doing this?
I think it's interesting that everyone's immediate reaction nowadays is to assume incompetence or maliciousness, rather than curiosity about the root cause (it's very telling that this attitude has even permeated a forum for supposed 'hackers').
The high-level picture is that 80% of the economy is very easy to track b/c it's not very volatile (teachers, for example).
What we have seen is a huge surge in unpredictability in the most volatile 20% of jobs (mining, manufacturing, retail, etc.). The BLS can't really change their methods to catch up with this change for classic backwards compatibility and tech debt reasons.
Part of the reason 'being a quant' is so hot right now is that we truly are in weird times where volatility is much higher than most people realize across sectors of the economy (e.g. AI is changing formerly rock-solid SWE employment trends, tariffs/electricity are quickly and randomly changing domestic manufacturing profitability, etc.). This means that if you can build systems that track data better than the old official systems, you can make some decent money trading on that knowledge.
I think this is a bad state of affairs, but I don't have a good solution. Any private company won't release their data b/c it's too valuable and I am reluctant to encourage the BLS to rip up their methods when backwards compatibility is a feature worth saving.
Is there really more volatility? My gut feeling is that government interventions have flattened it over recent decades. I’d like to see some real figures on this.
Manufacturing and mining are becoming much less correlated to the overall jobs market (likely, as you point out, b/c the government smooths the other sectors).
Can you actually prove volatility is higher now than in the past? There have been plenty of volatile changes in the workforce over the past several decades; this is nothing new for the job market.
> interesting that everyone's immediate reaction nowadays is to assume incompetence or maliciousness, rather than curiosity about the root cause
I came across this claim last week regarding recent US jobs figures:
> "All jobs gains were part time. Full-time jobs: -357K. Part-time jobs: +597K"
If this claim is true, and I have no means to tell if it is, then - regardless of one's view on whoever is in power right now - do we really expect any elected representatives to be brave enough to say that out loud at a press conference?
Can someone explain to me why job numbers aren't simply a matter of querying the federal Social Security database? A longstanding process of polling businesses for what they want to report, followed by corrections up to a year later, has got to be a pantomime to fudge the numbers.
Does that pass the basic common-sense smell test? Everyone can see the amount on their paycheck, and it is paid at most 30 days after any given work day. These payments are sent to a single federal bank account, and each record carries a Social Security number, the sending bank's ID, and a date. It's a bank; there's a database. We're talking about 200 million records at most, and a Raspberry Pi could process that query in minutes. If we can't query this easily, it's by design. Or we could do backflips and somersaults trying to come up with a reason why the bureaucracy has to be more complicated.
The payments are deposited monthly, or semiweekly for employers with large payrolls, but each deposit is a lump sum. Looking at that from the government side, all you can tell is whether total payroll has gone up or down. It won't tell you whether any change is due to a change in the number of employees, a change in pay rates, or some combination of the two.
It isn't until the employer files their quarterly Form 941 that you'd see employment numbers. Form 941 includes the number of employees and total wages and withholding.
It isn't until the annual W-2 filings that you would see a breakdown that includes number of employees and the individual pay.
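A toy illustration of that ambiguity, with made-up numbers:

    # Two different employment situations that produce essentially the same
    # lump-sum deposit, so headcount can't be recovered from the deposit alone.
    deposit_a = 10 * 5_000       # ten workers at $5,000/month -> $50,000
    deposit_b = 9 * 5_556        # nine workers at ~$5,556/month -> $50,004
    print(deposit_a, deposit_b)  # nearly identical totals, one fewer job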
Not all 'normal income' is from a 'job' as we think of it, and assuming it is doesn't come close to passing any informed person's smell test.
Parsing tax or SS payments for what a "job" is would be a logistical nightmare, because that's not what the system is designed for (unlike the BLS's system, which is designed to count jobs).
When people want job numbers, they want a reliable proxy for the state of the economy. Basing it on changes to payroll-based Social Security payments would be far better than what we have now, if it could be done in a timely way.
All I see there is a stat that would report the same number for full employment as for one person who fired everyone else and took their combined income. Is there a way to disaggregate it to get some proxy for employment like the one we're talking about?
So the answer is that payments per Social Security ID are not reported to the Electronic Federal Tax Payment System (EFTPS); employers only report aggregate payments. Workers and employers only report payments by individual on W-2s in January.
Probably the only reason is that the BLS and SSA are completely separate, and the SSA is probably antiquated and doesn't attempt to tag or organize its data along the same parameters the BLS defines. It likely has neither the staffing nor the resources to provide those hooks and real-time, anonymized, aggregated data for other departments to consume.
A lot of people don't understand that collecting data is actually expensive and difficult when it doesn't involve surreptitiously stealing it via some piece of tech.
Meta is also a great example of AI leading to higher user engagement today.
Reels isn't powered by Transformers per se (likely more of a complex mix of ML techniques), but it is powered by honest-to-goodness SOTA AI/ML running on leading-edge Nvidia GPUs.
I think, because they're so impressive, people assume Transformers = AI/ML, when there are plenty of other hyperscale AI/ML products on the market today.
The article is mostly about how certain schizophrenia-like conditions are now recognized to be clearly autoimmune diseases. Mentioned in the article are anti-NMDA-receptor encephalitis, which responds to immunotherapy, and a previously published case of a woman misdiagnosed with catatonic schizophrenia who fully recovered after being treated for lupus with immunosuppressive therapy.
Based on this, the article suggests that the rituximab Mary was given along with chemo was the key. However, they were unable to test conclusively for antibody evidence of this theory after the fact.
I have a family member with an incidence of autoimmune encephalitis secondary to other conditions (my entire family is an autoimmune cluster) who is actually hospitalized for it now. This almost matches my experience to a tee, though anti-NMDAR was tested for and not found. The neurologists wanted to discharge prior to attempting immunotherapy and thankfully we were able to ensure they tried (pulse steroids).
It's certainly an area that can fairly be characterized as rare-disease territory, whether paraneoplastic or otherwise.
Probably why we keep looking at electroconvulsive ‘therapy’ again and again. Triggering the body’s systems to do something often cleans up other situations at the same time.
There was a phenomenon where sometimes a high fever would cure STDs like syphilis. We generally use antibiotics now that we have them, because they are less dangerous.
Do you have this same concern about literally every structure man has ever constructed?
They do the same exact thing in terms of 'slowing wind down' and 'preventing the sun's energy from reaching the ground'.
This idea is understandable, but it falls apart for the same reason the wind-turbine bird-death concern does (the number of birds that have died because humans like windows is orders of magnitude larger than the number that have died in turbines).
> Do you have this same concern about literally every structure man has ever constructed?
To a lesser extent, yes. However, power-generating facilities are different in that they are intended to remove as much energy as possible, whereas skyscrapers etc. are not.
This is exactly why science education is so important.
But if it makes you feel better, all man-made structures combined cover a small fraction of the Earth's surface. People tend to live in areas with other people, so it looks like we're doing more than we are. NYC, for example, has 291x the average population density of the rest of the US, and that's including over a square mile devoted to Central Park.
Agriculture has a bigger impact because it covers so much land, but that's offset by it being relatively close to nature.
Cities probably have a significantly larger effect on the way energy flows around the earth than renewable power generation does. It's relatively easy to change how a very large amount of heat moves, compared with how hard it is to harness energy into usable work. (See also why the greenhouse effect from CO2 emissions is such a big deal compared to basically anything else humans have done, as far as the energy balance of the earth is concerned: CO2 is responsible for roughly 20 times more energy being absorbed by the earth than humanity uses in total, from any source.)
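For a rough sanity check of that last figure (all numbers approximate, and the imbalance is driven by greenhouse gases collectively, of which CO2 is the largest part):

    # Back-of-envelope comparison of Earth's extra absorbed power vs. human energy use.
    earth_surface_m2 = 5.1e14         # ~5.1e8 km^2 of surface area
    energy_imbalance_w_per_m2 = 0.9   # net radiative imbalance, roughly 0.5-1 W/m^2
    human_power_w = 19e12             # ~19 TW of global primary energy consumption

    extra_absorbed_w = earth_surface_m2 * energy_imbalance_w_per_m2
    print(extra_absorbed_w / human_power_w)  # ~24x, the same order as the ~20x above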
https://www.booking.com/Share-Wt9ksz
Maybe he really is tied to $600 as his absolute upper limit, but it also seems like something a few years from AGI would think to check elsewhere.