18 years here (more if I count my non-professional years). Agreed, coding isn't as much fun as it used to be, but AI has only increased my enthusiasm for tech. It's a lot easier to learn and understand new stuff, build things (especially personal tools), and tinker in any programming language.
For me, it was the craft that was fun, less the building. It was nice to have a really good result that customers were happy about, that other engineers were happy about, but it was also nice to have such intimate knowledge of a codebase because I and my team built it.
I miss that level of mastery. I feel that in the LLM-assisted coding age, that's now gone. You can read every section of code that an LLM generates, but to me there's no comparison to writing it by hand in terms of really internalizing and mastering a codebase.
What's stopping you from writing code by hand even today? I mainly use LLMs for researching and trying possible paths forward, but usually implement the suggested solution myself specifically so that I fully understand the code (unless it's a one-liner or so, then I let the LLM paste it in).
Because I can't justify it. While I do love the craft, and I can do this, I work with other people and I can't convince other people to not use LLMs to do their daily work. So, while I'll be writing things by hand and using the LLM to suggest which way to go, they'll be submitting PR after PR of AI-generated code, which takes much more of my time to review.
I have nearly two decades of programming experience, mostly server-side. The other day I wanted a quick desktop (Linux) program to chat with an LLM. I found out about the Vicinae launcher, then chalked out an extension in React (which I have never used) to chat with an LLM over an OpenAI-compatible API. Antigravity wrote a bare-minimum working extension in a single prompt. I didn't even need to research how to write an extension for an app released only three to five months ago. I then used AI assistance to add more features and polish the UI.
This was a fun weekend, but I would have procrastinated forever without a coding agent.
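For anyone curious what "chat over an OpenAI-compatible API" actually involves: it's essentially one HTTP POST plus some JSON plumbing. A rough sketch in Python (the endpoint, model name, and sample response below are made up for illustration, not taken from the extension above):

```python
import json

def build_chat_request(messages, model="local-model"):
    """Build the JSON body for an OpenAI-compatible /v1/chat/completions call."""
    return json.dumps({"model": model, "messages": messages})

def extract_reply(response_body):
    """Pull the assistant's text out of an OpenAI-compatible response."""
    return json.loads(response_body)["choices"][0]["message"]["content"]

# Sending the request is a single HTTP POST, e.g.:
#   POST http://localhost:8080/v1/chat/completions   (hypothetical local server)
#   headers: Content-Type: application/json
#            Authorization: Bearer <key>   (if the server requires one)

body = build_chat_request([{"role": "user", "content": "Hello!"}])
print(body)

# A minimal response in the shape OpenAI-compatible servers return:
sample = '{"choices": [{"message": {"role": "assistant", "content": "Hi there!"}}]}'
print(extract_reply(sample))  # → Hi there!
```

Because so many local servers (llama.cpp, Ollama, LM Studio, etc.) expose this same shape, the same few lines work against any of them by changing only the base URL.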
And assuming people want deeper integration, is the browser even the right level of abstraction? Arguably it would be better to have something operating at the OS level, Siri/Gemini-assistant style.
When Microsoft completely integrates its LLM into Windows, would you rather give that access to your browser, or would you rather plug in your own local model / turn it off entirely while browsing?
If a global LLM becomes standard, I'd want to plug in my own local model or disable it entirely, but I don't think Microsoft or Apple are going to open up their operating systems and make that easy any time soon. The option to granularly use your own models is a plus for me in that situation.
Every app has to open itself up for integration, especially if it's not a native app, like Firefox. Where the AI comes from in the end doesn't really matter; they'll support them all anyway.
Filling out forms, booking tickets, summarizing content ...
Even at work, I've seen a few junior developers use AI browsers to sit through mandatory compliance courses and complete the quizzes. Not necessarily a good thing, but AI browsers may win in the end, and it might be too late for Firefox.
"why do I have to copy-paste to fill out that form, or navigate through that page to do $something, if that AI browser can do it for me?"
And in that scenario, there is a GIGANTIC need for a user-first, privacy-respecting browser, ideally using local models (in a few years, when the hardware is ready).
You people need to be forced to use your product in the exact form your product is presented to end users. With the exact frequency it's presented to end users. In all the wrong places as it is presented to end users.
Maybe then you'll understand why shoving AI in every conceivable crevice is incredibly obnoxious and distracting and, most importantly, not useful.
Shoving an AI agent in every website is distracting and not that useful.
Shoving an AI agent in every app is distracting as well.
Having one global AI agent per operating system, or per browser (where most digital life happens, in the case of desktops), for the people who want an AI agent, is probably going to be useful, if well implemented.
I know, but at the end of the day most people nowadays do the vast majority of their job in a browser, and there is already a well-defined API to manage its content. Also, browsers are getting there faster, and at some point that will become what people expect, rather than what's most optimal.
Vibe coding doesn't mean the author doesn't understand their code. It's likely that they don't want carpal tunnel from typing out trivial code and hence offload that labor to a machine.
You do realize that it's possible to ask AI to write code and then read the code yourself to ensure it's valid, right? I usually try to strip the pointless comments, but it's not the end of the world if people leave them in.
Vibe-coding as originally defined (by Karpathy?) implied not reading the code at all, just trying it and pasting back any error codes; repeat ad infinitum until it works or you give up.
Now the term has evolved into "using AI in coding" (usually with a hint of non-rigor/casualness), but that's not what it originally meant.
AI-assisted coding/engineering becomes "vibe coding" when you decide to abdicate any understanding of what you are building, instead focusing only on the outcome.
JetBrains is great at indexing your local codebase and understands it deeply. We don’t try to replace that. Nia focuses on external context: docs, packages, APIs and other remote sources that agents need but your IDE can’t index.
So far, I have had a very good experience using Gemini Live with the camera turned on. Just today, I wanted to find out the name of a spare part inside a bathroom faucet. First, Gemini said it was a thermostatic cartridge, but I responded that it couldn't be, as it doesn't control temperature. Then it asked me what it did, and I said it has a button that controls the flow of water between the tap and shower. It correctly guessed that it was a diverter cartridge.
Exactly! I too bought the M1 MacBook Air in 2021 because of its great battery life. I wanted a powerful device for hacking on personal projects at home (I use a Dell running Ubuntu at work), but every time I opened it there was always something frustrating about macOS that made it unsuitable for dev stuff (at least for me):
* Finder - this is my most hated piece of software. It doesn't display the full file path and no easy way to copy it
* I still haven't figured out how to do cut/paste - CMD + X didn't work for me
* No VirtualBox support for Apple Silicon (last checked 1 year ago)
* Weird bugs when running Rancher Desktop + Docker on Apple Silicon
But still, Apple hardware is unbeatable. My 2015 MacBook Pro lasted 10 years, and the M1 is also working well even after 4 years.
> * Finder - this is my most hated piece of software. It doesn't display the full file path and no easy way to copy it
View -> Show Path Bar to display the full path of a file.
When a file is selected, press Option-Cmd-C to copy the full file path. Or just drag the file anywhere that expects a string (like the Terminal, or here). That strikes me as quite easy.
Cmd-X, -C, -V work as expected; what exactly is the problem? (Note that macOS, unlike Windows, doesn't let you cut & paste files, to avoid losing the file in case the operation isn't completed. However, you can copy (Cmd-C), then use Option-Cmd-V to paste & move.)
Now, that might not be completely easy to discover (though when you press Option, the items in the Edit menu change to reveal both "tricks" described above, and show the keyboard shortcuts).
At any rate: when switching OS, is it too much to ask to spend a few minutes online to find out how common operations are achieved on the new OS?
FWIW, VirtualBox did get ported to Apple Silicon, but longtime Mac software developer Parallels has consumer-grade VM management software. Theirs supports DirectX 11 on ARM Windows, which is critical for getting usable performance out of it. Conversely, VMware's Mac offering does not, making 3D graphics on it painfully slow.
There are also a couple of open-source VM utilities: UTM, Tart, QEMU, Colima, and probably others.
https://github.com/mistweaverco/kulala.nvim is another REST-ish (it can do gRPC too) plugin for Neovim. It's intended to be as compatible with the JetBrains one as possible.
(After seeing the IntelliJ one from a colleague, I went searching for something similar in Neovim. That's the best one I found. It's not perfect, but it works.)
Edit: The tool from OP looks very neat, though. I'll try it out. Might be a handy thing for a few prepared tests that I run frequently.
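For anyone who hasn't used these plugins: both the JetBrains HTTP client and kulala.nvim execute requests written in plain `.http` files. A minimal example of the format (the host and payload here are hypothetical placeholders):

```http
### Get a user
GET https://api.example.com/users/42
Accept: application/json

### Create a user
POST https://api.example.com/users
Content-Type: application/json

{
  "name": "Ada"
}
```

Since it's just a text file, the same prepared requests can live in the repo and be run from either editor, which is a big part of the appeal for team use.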