Hacker News | user34283's comments

I'd go further and say vibe coding it up, testing the green case, and deploying it straight into the testing environment is good enough.

The rest we can figure out during testing, or maybe you even have users willing to beta-test for you.

This way, while you're still on the understanding part and reasoning over the code, your competitor already shipped ten features, most of them working.

Ok, that was a provocative scenario. Still, nowadays I am not sure you even have to understand the code anymore. Maybe having a reasonable belief that it does work will be sufficient in some circumstances.


How often do you buy stuff that doesn't work, and you are OK with the provider telling you "we had a reasonable belief that it worked"?

How are we supposed to use software in healthcare, defense, transportation if that's the bar?


There's a lot of functionality in the frontend that I am building that I did not review. If it worked in testing, that's good enough.

You're free to review every line the model produces. Not every project is in healthcare or defense, and sometimes different standards apply.


I’m assuming you work in a setting where there is a QA team?

I haven't been in such a setting since 2008, so you can ignore everything I said.

But I wouldn't want to be somewhere where people don't test their own code, and I have to write code that doesn't break code that was never tested until the QA cycle.


This approach sounds like a great way to get a lot of security holes into your code. Maybe your competitors will be faster at first, but it's probably better to be a bit slower and not leak all your users' data.

I'm mostly thinking about the frontend.

If I had a backend API that was serving user data, I'd of course check more carefully.

This kind of mistake always seemed amateurish to me.


Considering that they will make a lot of money with enterprise, yes, that's exactly what I think.

What I don't think is that I can take seriously someone's opinion on an enterprise service's privacy after they write "LMAO" in caps lock in their post.


I just know many people here complained about the very unclear way Google, for example, communicates what they use as training data, which plan you need in order to opt out of everything, or whether you (as a normal business) even can opt out. Given the volatile nature of this whole thing, I can imagine an easy "oops, we messed up" from Google if it turns out they were in fact using almost everything for training.

The second thing to consider is the whole geopolitical situation. I know companies in Europe are really reluctant to give US companies access to their internal data.


I agree completely. Altman was at some point talking about a screenless device and getting people away from the screen.

Abandoning our most useful sense, vision, is a recipe for a flop.


I'm not entirely sure it will ever see the light of day, tbh.

The amount of money sloshing around in these acquisitions makes you wonder what they're really for.


Enterprise is slow. As for developers, we will be switching to Google unless the competition can catch up and deliver a similarly fast model.

Enterprise will follow.

I don't see any distinction in target markets - it's the same market.


Yeah, this is what I was trying to say in my original comment too.

Also, I don't really use agentic workflows, but I'm not sure whether Gemini 3 / 3 Flash have MCP or skills support for agentic tasks.

If not, those feel like very low-hanging fruit, and something Google could try in order to win the agentic-task market from Claude as well.


I don't use MCP, but I am using agents in Antigravity.

So far they seem faster with Flash, and with less corruption of files using the Edit tool - or at least it recovered faster.


The CI pipeline using my own runner is absolutely something that I expect to come for free with hosting the repository.

After working for years with GitLab professionally, you know exactly where everything is.

Making a contribution in particular should be trivial anyhow: you push the branch, and the repo shows a banner asking if you want to open an MR for the recently pushed branch.

I don't know why anyone would use GitHub Actions. They seem like a weird, less powerful version of GitLab CI. Now they want to charge for runtime on your own runner.
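For reference, pointing a GitLab pipeline at your own runner is just a tag on the job. A minimal `.gitlab-ci.yml` sketch (the `my-runner` tag is an assumption: use whatever tag your runner registered with):

```yaml
# Jobs are picked up by any registered runner whose tags match.
stages:
  - test

test:
  stage: test
  tags:
    - my-runner   # hypothetical tag of your self-hosted runner
  script:
    - make test
```

Pushing this file to the repo is all the setup the pipeline itself needs; the runner is registered once, out of band.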


Grok is the only frontier model that is at all usable for adult content.

There is no use case for local models.

See Gemini Nano. It is available in custom apps, but the results are so bad that factual errors and hallucinations make it useless. I can see why Google did not roll it out to users.

Even if it was significantly better, inference is still slow. Adding a few milliseconds of network latency for contacting a server and getting a vastly superior result is going to be preferable in nearly all scenarios.

Arguments can be made for privacy or lack of connectivity, but it probably does not matter to most people.


I just want it to be able to control my Apple Home devices, trigger Shortcuts, and maybe search a few apps and find things. I know a local model can understand my intent for Siri-like operations, because I literally have my own version of that on my laptop.

I think the real use case is a future technology, similar to speculative decoding but done across servers.

The local model answers, and reaches into the cloud for the hard tokens.
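The idea above can be shown as a toy sketch: a cheap "local" draft model proposes a run of tokens, and a stronger "remote" model only verifies them, supplying its own token at the first disagreement. Both models are faked here as trivial next-token functions; all names and the lookup table are illustrative, not any real API.

```python
def draft_model(context):
    # Cheap local guess: just repeat the last token (deliberately naive).
    return context[-1]

def target_model(context):
    # Stands in for the "cloud" model: the ground truth we want to match.
    answers = {(1,): 1, (1, 1): 2, (1, 1, 2): 3}
    return answers.get(tuple(context), 0)

def speculative_step(context, k=3):
    """Draft k tokens locally, then keep the prefix the target agrees with."""
    draft, ctx = [], list(context)
    for _ in range(k):
        t = draft_model(ctx)
        draft.append(t)
        ctx.append(t)

    accepted, ctx = [], list(context)
    for t in draft:
        if target_model(ctx) == t:
            accepted.append(t)              # local guess verified
            ctx.append(t)
        else:
            accepted.append(target_model(ctx))  # "hard token" from the cloud
            break
    return accepted

print(speculative_step([1]))  # → [1, 2]: first token verified locally, second corrected
```

The win in a real system would be latency: verified tokens never wait on the network, and only disagreements cost a round trip.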


Copilot is useful for searching emails and SharePoint. It gives access to GPT-5 with Thinking, making it broadly useful for programming tasks.

It's certainly been useful in my organization.


Gmail search has been excellent for 20 years. Outlook search is still terrible even with Copilot. The LLM isn't the killer feature; a search that works is.

For one I don't have Gmail at work.

Copilot can search even in PowerPoints. Being able to search your organisation's documents is kind of a killer feature, provided they make it work reliably.


I can't think of a single reason why you would need an LLM to search through PowerPoint files. We have traditional search technology which would be excellent for that!

> can't think of a single reason why you would need an LLM to search through PowerPoint files

Kati’s Research AI is genuinely great at search. It tries to answer your question, but also directly cites resources. This can help you when you’re not sure where the answer to a question lies, and it winds up being in multiple places.

Unless your query is super simple and of low consequence, you still need to open the files. But LLM-powered search is like the one domain (apart from coding) where these fuckers work.


Google has been doing this well in their office suite for years. Discoverability has been way higher in G Suite than in Office.

Users wary of shoehorned AI features are probably all on Reddit or Hacker News.

I certainly never heard anyone complain in real life.


The people I know in real life, besides those that work in tech and use it for code assistance or for generating never-reviewed archival transcripts of meetings, mostly just laugh at AI foibles and faults and casually echo doomer-media worries about job replacement as a topic for small talk.

But admittedly, most of those people are established adults who've figured out an effective rhythm to their home and work life and aren't longing for some magic remedy or disruption. They're not necessarily weary, and they were curious at first, but it seems like they're mostly just waiting for either the buzz to burn off or for some "it just works" product to finally emerge.

I imagine there are younger people wowed by the apparent magic of what we have now and excited that they might use it to punch up the homework assignments, emails, or texts that make them anxious, or who might enjoy toying with it as a novel tool for entertainment and creative idling. Maybe these are some of the people in your "real life".

There are a lot of people out there in "real life", bringing different perspectives and needs.


Nah, LLMs and stable diffusion are being used everywhere by everyone hardcore.

I work at a coworking space. Most of the folks I've worked alongside had active chats in ChatGPT for all sorts of stuff. I've also seen devs use AI copilots, like Copilot and Codex. I feel really old when I drop into fullscreen vim on my Mac.

AI art is also used everywhere. Especially by bars and restaurants. So many AI happy hour/event promo posters now, complete with text (AI art font is kind-of samey for some reason). I've even seen (what look like) AI generated logos on work trucks.

People are getting use out of LLMs, 100%. Yet the anti-AI sentiment is through the roof. Maybe it's like social media where the most vocal opponents are secretly some of its most active users. Idk.


Yes, that sounds about right.

What I meant specifically was that I don't remember anyone complaining about AI features getting in the way or being shoehorned. That particular complaint seems popular only on Reddit or HN.


I've also never heard anyone praise the fact that the first Google result is now halfway down the page, either. Most people don't care enough to complain.

Most of the people I've talked IRL to aren't against AI as a rule, but have grown tired of poorly implemented AI features, especially if they're used as marketing fodder. In my experience, shoehorned AI features have landed themselves in a category similar to that of bundled crapware and useless single-app hotkeys on cheap laptops.

Those of this group who use AI mostly ignore poor rebadges and integrations like MS Copilot and just use ChatGPT and Claude directly. They prefer it to remain intentional and contained within a box that they control the bounds of.


I talk to tons of people in real life who are deeply troubled by the AI-pocalypse. I was at a dinner party just the other day where out of the blue (wasn't me, I swear!), the conversation turned to the horrors of genAI and its negative effect on our society.
