the flip side of that: right now ai coding agents try to generate code, not software.
it seems semi-intuitive to me that a typesafe, functional programming language with referential transparency would be ideal, if you could decompose a program into small components and code those.
once you have a referentially transparent function with input/output tests you can spin on that forever until it's solved, and then be sure that it works.
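a tiny sketch of that loop in python (the function and test cases here are hypothetical, just to illustrate the shape): a pure function plus a table of input/output cases that an agent can iterate against until everything passes.

```python
# a pure function: the output depends only on the input, no side effects,
# so passing the test table actually tells you the component works
def normalize_whitespace(s: str) -> str:
    """collapse runs of whitespace into single spaces and trim the ends."""
    return " ".join(s.split())

# the input/output spec an agent can "spin on" until every case passes
CASES = [
    ("  hello   world ", "hello world"),
    ("one\ttwo\nthree", "one two three"),
    ("", ""),
]

for given, expected in CASES:
    assert normalize_whitespace(given) == expected, (given, expected)
print("all cases pass")
```

because the function is referentially transparent, the test table is the whole contract: there's no hidden state the agent's candidate implementation could get wrong without a case failing.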
i think there's a huge class of tasks that you used to have to pay high salaries for which are now gone.
just thinking from the finance world: in 2010 no one on the desk knew how to program, no one knew sql, and even if they did, the institutional knowledge needed to use the dev systems wasn't worth their time. So you had multiple layers of meetings and managers to communicate what a program should do. As a result, anything small just didn't get done, and everything took time.
by 2020 most junior guys knew enough python to pick up some of the small stuff
in 2025 ai tools are good enough that they're picking up things that legit would have taken weeks in 2010, because of the processes around them, not the difficulty, and doing them in hours. A task that takes an hour of actual work used to take multiple meetings to properly outline to someone without finance knowledge; now they can do it themselves in less time than it took to describe to a fresh cs grad.
Those tasks that junior traders/strats can now do themselves, which would have taken weeks or months to get into prod going through an it department, are where i'm seeing costs drop 90% every day right now. Which is good, it lets tech focus on tech and not on learning the minutiae of options trading in some random country.
i don't think this should matter, plenty of conglomerates have brands across quality levels.
think old navy, gap, banana republic.
the quality difference is important for the conglomerate, same with netflix vs hbo; the corporate benefit is being able to save on costs by amortizing the corporate side of things (accounting, marketing, real estate, research, etc).
Well, they didn’t say OpenAI was right. I think that a lot of the people working there believe that. It was kind of built into the original corporate/non-profit structure (which they have since blown up).
they're deep into a redesign of the gemini app. idk when it will be released or if it's going to be good, but at least they agree with you and are putting significant resources into fixing it.
ads always start on only the free version; then either the free version gets a minor fee that slowly gets ratcheted up over time, or the paid version gets ads and a higher no-ad tier is added.
for whatever reason gemini 3 is the first ai i have used for intelligence rather than skills. i suspect a lot more will follow, but it's a major threshold to be broken.
i used gpt/claude a ton for writing code, extracting knowledge from docs, formatting graphs and tables, etc.
but gemini 3 crossed a threshold where conversations about topics i was exploring, or about product design, were actually useful. Instead of me asking 'what design pattern would be useful here', or something like that, it introduces concepts to the conversation. that's a new capability and a step function improvement.
sounds smart, but this is a false premise because it's not zero sum, and there's this magical thing called taxes that lets you reap the benefits of a more productive system.
If you have free public transit and that enables more economic activity, or more disposable income funneled into services that boost the tax take of the city, the gains can offset the cost. This is an equation none of us have the info to solve as randos online, and it's pointless to claim otherwise.
and even if your point were true, free buses are a partial subsidy to low income people like you suggest: in nyc, buses are predominantly taken by low income individuals (source: https://blog.tstc.org/2014/04/11/nyc-bus-riders-tend-to-be-o...), the subway by nearly everyone, and ride share has its own tax as well.
you see content about openai everywhere; they spent 2b on marketing. you're in the right places, you're just used to seeing things labeled as ads.
you remember everyone freaking out about gpt5 when it came out, only for it to be a bust once people got their hands on it? that's what paid media looks like in the new world.