Hacker News | rm_-rf_slash's comments

AI research has always been a series of occasional great leaps between slogs of iterative improvement, from Turing and Rosenblatt to AlexNet and GPT-3. The LLM era will end with a few things becoming invisible architecture* that we stop appreciating, and then the next big leap will start the hype cycle anew.

*Think toll booths (“exact change only!”) replaced by automated license plate readers in just the span of a decade. Hardly noticeable now.


Phreaking in 2025


Well, phreaking circa 2003-05 (I can't recall exactly when anymore), back when you could still get free phone calls on the pay phones in a library or hotel lobby.


Out of curiosity, what would the minimum specs need to be in order to run this locally?

My PC is just good enough to run a DeepSeek distill. Is that on par with the requirements for your model?


Unfortunately, there isn't a viable computer use model that can be run locally yet. I'm extremely excited for the day that happens though. Essentially, the key capability that makes a model a computer use model is precise coordinate generation.

So if you come across a local model that can do that well, let us know! We're also keeping a close watch.


Haven’t looked into them much, but I thought the Chinese labs had released some models for this kind of thing.


You are correct in that ByteDance did release UI-TARS, which sounds like a really good open source computer use model according to some articles I read. You could run that locally. We haven't tested it ourselves so I wouldn't know how it performs, but it sounds like it's definitely worth exploring!


What would it take to train your own?


I don't know too much about training your own computer use model, other than that it would probably be a very hefty, very expensive task.

However, I believe ByteDance released UI-TARS, which is an excellent open source computer use model according to some articles I read. You could run that locally. We haven't tested it so I wouldn't know how it performs, but it sounds like it's definitely worth exploring!


I’m not sure it’s one or the other. Firing off a prompt to Claude Code and letting it rip can be great for productivity but I won’t pretend I’m reading every line it writes unless I have to.

And yet if I’m inquiring into a subject matter I have scant knowledge about, and want to learn more about, I voraciously read the output and plan my next prompt thoughtfully throughout.

The dividing line is intellectual curiosity. AI can stimulate the mind in ways people may not have thought possible, like explaining subjects they never grasped previously, but the user has to want to go down that path to achieve it.

Social media doomscrolling, by contrast, is designed to anesthetize, so the result should not surprise.


To me AI feels like the early web. I can get information without sifting through heaps of SEO trash, and it’s like having this weird magic thinking mirror to explore ideas. Unlike social media it’s not a sea of culture war rage trolling and slop.

I am not trying to use it as a companion though. Not only do I have human ones but it feels super weird and creepy to try. I couldn’t suspend disbelief since I know how these things work.


> To me AI feels like the early web

To me AI feels like the final nail in the web's coffin

There is nothing remotely charming about it like the early web had


Cursor and Claude Code were the asskicking I needed to finally get on the typescript bandwagon.

Strong typing drastically reduces hallucinations and wtf bugs that slip through code review.

So it’ll probably be the strongly typed languages that receive the proportionally greatest boost in popularity from LLM-assisted coding.


This is why I like Go for vibe programming.

goimports makes everything look the same, and the compiler is a nitpicky asshole that won’t even let the program compile if there is an unused variable, an unused import, etc.
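That strictness is easy to demonstrate; here's a tiny sketch (the `divide` helper is made up purely for illustration):

```go
package main

import "fmt"

func main() {
	// This would not compile -- Go hard-errors on unused variables:
	//   x := 42 // error: declared and not used: x
	// The only escape hatch is the blank identifier, which forces the
	// author (human or LLM) to discard a value explicitly:
	result, _ := divide(10, 3) // error value consciously ignored
	fmt.Println(result)        // prints 3 (integer division)
}

// divide is a hypothetical helper, just to give us two return values.
func divide(a, b int) (int, error) {
	return a / b, nil
}
```

An unused import is the same kind of hard error, which is why goimports plus the compiler keeps LLM output from quietly accumulating dead context.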


> won’t let the program even compile if there is an unused variable

That is a really big advantage in the AI era. LLMs are pretty bad at identifying what is and what isn't relevant in the context.

For developers this decision is pretty annoying, but it makes sense if you are using LLMs.


Yep, that's why I like strict tooling with LLMs (and actually real people as well, but that's a different conversation :D)

When you have a standard build process that runs go vet, go test, golangci-lint, and goimports, and then compiles the code, you can order the LLM to do that as the last step every time.

This way, at the very least, the shit it produces is well-formed and passes the tests :)

Otherwise they tend to just leave stuff hanging, like "this failing test is unrelated to the current task, let's just not run it" - ffs, you just broke it, it passed perfectly before you started messing with the codebase =)
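As a concrete sketch, that gate can live in a single script the LLM is told to run after every change (the tool names are from the comment above; the script itself and the exact flags are my assumption, adjust to taste):

```shell
#!/bin/sh
# check.sh - run after every LLM edit; set -e makes the first failure fatal.
set -e

goimports -w .       # canonical formatting, fixed-up imports
go vet ./...         # suspicious constructs
golangci-lint run    # the broader lint pass
go build ./...       # must compile (unused variables included)
go test ./...        # and the tests must still pass
```

Ordering formatting and vet before the build means the cheap checks fail first, so the model gets the fastest possible feedback loop.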


Could put it in a ChatGPT project description or Cursor rules to avoid copy-pasting every time.


I understand a significant portion of Tesla’s sales are in China and there is fierce competition from homegrown firms like BYD.

TFA mentions this but doesn’t get into much detail.


China is actually one of the markets where Tesla is performing best.


As someone who has done business with China (specifically with Tsinghua University): the key phrase is "for now". China's economic strength is to take what works elsewhere, bring it to China to understand what about that product or service works domestically, reverse engineer what's great about it, and then get Party backing to scale it in order to beat the foreign competitor. This is the default strategy, and no matter how many factories Tesla might have in China now, it's not a certainty that those factories or technologies will be in Tesla's hands 3-5 years from now.

Specific to the automobile industry, remember what VW's mistake in China was. Long story short: they taught the Chinese how to really build and scale auto production. The Chinese learned, and then shut VW out of the domestic market once they had strong Chinese competitors who could scale and produce cars significantly cheaper.


“When you loan money to a friend, be prepared to lose the money or the friend” is a maxim I’ve lived by and has guided me through some tough decisions over time.


I’ve also seen this happen in NYS. Tenants do the craziest shit (selling showers to crackheads and discovering a near-thousand-dollar water bill was one of the tamer tales), fight tooth and nail to stay, and leave the place uninhabitable when they finally are forced to go.

Unfortunately the local mom and pop landlords get wrecked by this while only the big corporate landlords have the resources and scale to weather these situations.


It's a law of large numbers thing. Americans romanticize mom-and-pop landlords vs. big greedy landlords, but... it's a bad business to be a small-time landlord. It's like putting all your money in one stock.

If, say, 5% of the population is crazy and makes for bad tenants, then owning 10-20+ units puts you in a position of always having 90%+ of your revenue coming in.

If you have 1 unit, then most of the time you are OK, but every once in a while you may lose 100% of your revenue for 3-12 months, while you still have to keep spending on mortgage/tax/utilities, plus lawyers, repairs, etc.


Sounds like it's time for a co-op.


Anyone who has lived in a co-op might disagree lol


The key thing would be for the government to finally improve the situation around mental health care accessibility and build a proper social safety net.

People don't fall into drugs on their own - the vast, vast majority use them to self-medicate whatever crisis they're facing, be it a lack of prospects, losing a family member, or losing a job. Across the Western world, governments have completely given up on supporting people who hit a rough patch in life, and now it's a situation that is very, very hard to resolve.


>The key thing would be for the government to finally improve the situation around mental health care accessibility and a proper social safety net.

You're right but we are so far from this now I can't imagine it being possible until one or two full generations of people die out and we start teaching empathy


Boston was a mess before the big dig. Enough ink has been spilled over how long it took, but just about everybody agrees the city is better for it.


There’s a great podcast that delves into the Big Dig. It was an exceedingly difficult, giant construction project. The original cost estimate was a made-up number chosen for political expediency. They lied, knowing that once they dug a big hole, the sunk-cost fallacy would pull them over the finish line.


>lied knowing once they dug a big hole the sunk-cost fallacy would pull them over the finish line.

That's essentially what the CA high speed rail folks did. It remains to be seen if it ends up working out for them.


