Hacker News | terrywang's comments

The good old Internet was "del.icio.us".


Isn't that an old porn site lol


Thanks for sharing. Gemini CLI doing live troubleshooting for a K8s cluster is surreal. I am keen to try that out, since I have just created RKE2 clusters.


Gemini CLI at this stage isn't good at complex coding tasks (vs. Claude Code, Codex, Cursor CLI, Qoder CLI, etc.). Mostly because of its simple ReAct loop, compounded by the relatively weak tool-calling capability of the Gemini 2.5 Pro model.

> I haven't tried complex coding tasks using Gemini 3.0 Pro Preview yet. I reckon it won't be materially different.

Gemini CLI is open source and being actively developed, which is cool (/extensions, /model switching, etc.). I think it has the potential to become a lot better and even close to top players.

The correct way of using Gemini CLI is: ABUSE IT! The 1M context window (soon to be 2M) and generous free daily quota are huge advantages. It's a pity that people don't use it enough. I use it as a TUI/CLI tool to orchestrate tasks and workflows.

> Fun fact: I found Gemini CLI pretty good at judging/critiquing code generated by other tools LoL

Recently I even hooked it up with Homebrew via MCP (other Linux package managers could work as well), and with a local-LLM-powered knowledge/context manager (Nowledge Mem). You can get really creative abusing Gemini CLI and unleash the Gemini power.
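For reference, Gemini CLI reads MCP servers from an `mcpServers` block in `~/.gemini/settings.json`; the server name and command below are placeholders, not a real Homebrew MCP server:

```json
{
  "mcpServers": {
    "brew": {
      "command": "npx",
      "args": ["-y", "example-homebrew-mcp-server"]
    }
  }
}
```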

I've also seen people use Gemini CLI in subagents for MCP processing (it did work, and avoided polluting the main context); I couldn't help laughing when I first read this -> https://x.com/goon_nguyen/status/1987720058504982561


Gemini CLI is a wild beast. The stories of it just going bonkers and refactoring everything it reads on its own are not rare. My own experience was something like "Edit no code. Only give me suggestions. blah blah blah", and the first thing it does is edit a file without any other output. It's completely unreliable.

Pro 3 is -very- smart, but its tool use / ability to follow directions isn't great.


I've been using Gemini 3 in the CLI for the past few days. Multiple times I've asked it to fix one specific lint error, and it goes off and fixes all of them. A lot of times it fixes them by just disabling lint rules. It makes reviewing much harder. It really has a mind of its own and sometimes starts grinding for 20 minutes doing all kinds of things - most of them pretty good, but again, challenging to review. I wish it would stick to the task.


Gemini seriously needs a built-in "Plan" mode that properly prevents it from doing any write operations.

It's pretty good at code review and figuring out massive codebases, but its tendency to just rush in and "fix" things is annoying.


> I haven't tried complex coding tasks using Gemini 3.0 Pro Preview yet. I reckon it won't be materially different.

In my limited testing, I found that Gemini 3 Pro struggles with even simple coding tasks. Granted, I haven't tested complex scenarios yet, and I've only used it via Antigravity; it's very difficult to do more with the limited quota it provides. Impressions here - https://dev.amitgawande.com/2025/antigravity-problem


Are we using different models? Here is a simulation of Chernobyl reactor 4 using research grade numerical modeling I made with it in a few days: https://rbmk-1000-simulator-162899759362.us-west1.run.app/


That's impressive. Well, I was using the default Gemini 3 Pro. Based on my experience, I am surprised you did not hit the quota limit.


Thanks for sharing, insightful.

Personally, I consider Antigravity a positive and ambitious launch. My initial impression is that there are many rough edges to be smoothed out. I hit many errors, like 1. errors communicating with Gemini (Model-as-a-Service) and 2. agent execution terminated due to errors, but somehow it completed the task (the verification/review UX is bad).

Pricing for paid plans with AI Pro or Workspace will be key for adoption, once Gemini 3.x and the Antigravity IDE are ready for serious work.


Interesting. I did not face many issues communicating with Gemini. But I believe these issues will iron themselves out -- Google does seem to have rushed the launch.


The trick with Gemini is to upload the whole codebase, or the relevant part of it (depending on the size), as XML (using repomix et al.), then tell it to output whole files.
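As a rough illustration of what that packing step produces - a hand-rolled sketch, not repomix's actual output format (tag names and extensions here are made up):

```python
# Bundle source files under a directory into one XML document that can be
# pasted into Gemini's large context window in a single shot.
from pathlib import Path
from xml.sax.saxutils import escape

def pack(root: str, exts=(".py", ".md")) -> str:
    parts = ["<codebase>"]
    for path in sorted(Path(root).rglob("*")):
        if path.is_file() and path.suffix in exts:
            parts.append(f'<file path="{escape(str(path))}">')
            parts.append(escape(path.read_text(errors="replace")))
            parts.append("</file>")
    parts.append("</codebase>")
    return "\n".join(parts)

if __name__ == "__main__":
    print(pack(".")[:200])  # preview the packed output
```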

With a good prompt and some trial and error in the system instructions, as long as you agree to play the agent yourself, it's unmatched.

The CLI? Never had any success. Claude Code leaves it in the dust.


Uh-oh, looks like the only way out is to unenroll the security key completely (if it was enrolled back before the DNS rebrand was done).


I was prompted a couple of times, only on the mobile device. I have a YubiKey 4, so it's inconvenient to use it with a USB-C to USB-A adapter. I ignored it for a while, and eventually I wasn't able to use X without "re-enrolling".

So I did it on a laptop. The process seemed legit, but the entire flow was weird and not intuitive; I had to stop and read twice before proceeding (e.g. "Where to store the passkey", disabling all other MFA and only using the security key, a backup recovery code being given...). After going through all that, I found myself locked out of X because of the infinite re-enroll loop. OMG.

Contacted support; let's see how long it takes. After this, I don't think I'll continue to use a security key with X...


Great idea to let the bullets fly. After I took a nap, the issue had been fixed by X.

Text message and authenticator were disabled, and two YubiKeys are present in Security Keys. I don't get the idea behind this process.


BimmerCode / BimmerLink make coding/customising supported BMW models even easier. As long as you've got a mobile device and an adapter (e.g. an inexpensive OBD-II-to-Bluetooth one), you can DIY the coding and gain visibility into a lot of the car's runtime parameters (e.g. engine oil temperature, boost, etc.) - you can connect a screen and display whatever you like, or turn off the pesky `ASD` (fake exhaust sound pumped into the cabin via speakers...).

What's more amazing these days is that technology like `bootmod3` (bm3) makes flashing (remapping) stage one as easy as 1-2-3. One needs to understand what they are doing, though.


Thanks for the input. Had a similar experience with a solution built on top of an on-prem flavour of k8s (Gravity - the name came from pulling stuff from the cloud back into the traditional data centre). Countless effort and resources were wasted on troubleshooting customers' infrastructure rather than focusing on the real goal: getting apps/APIs up and running quickly to generate value and realise goals for the business. The solution ultimately became a burden to both engineering and customers... Long, sad/bad story.

Fortunately, decision makers heard the voice from the field and from customers, and eventually offloaded the container orchestration layer (and the underlying infrastructure) to managed k8s service providers; the solution is now delivered as Helm charts to be installed on customers' own managed k8s (EKS, AKS, GKE and OpenShift - oh, Red Hat OpenShi(f)t is just another rabbit hole...). But again, the lack of knowledge and hands-on skill in operating/running k8s (not yet a commodity, although it is hyped to be...) makes the journey quite turbulent from a business PoV (technically it's easy: build the skills in house, hire the right talent).


Make sure you read it ;-) I tried to finish reading it in 5 minutes but fell asleep in bed; haven't finished it yet, will do that after getting up.


I’m happy to hear that :-)


Started using Cockpit when installing Fedora 21 on the old-dog NAS (HP ProLiant MicroServer N54L). It has been continuously improved over time and has become an easy-to-use and robust web UI for beginners (very intuitive; e.g. enabling metric collection triggers toggling `pmlogger.service` and so on). Even CLI warriors love it as a supplement.

I've found it a useful single point of view/administration covering networking, storage, virtualization, containerization, etc., when managing a small group of servers over Nebula (an overlay network) across home and multiple cloud service providers. Haven't used it at scale though; will dip into that when the time comes (it supports FreeIPA).


I've been running Pi-hole on the original Pi (1) Model B for over a year and am really happy with it. The original Pi running Raspbian has been very reliable and working tirelessly; the only problem is the I/O bottleneck - performance querying the SQLite database is unbearable.

Last November I spent half a day installing Pi-hole on spare Pi 2 and Pi 3 boards (Ubuntu LTS), serving as two internal DNS servers for the home network, with the router (AsusWrt-Merlin) as their upstream doing DoT (DNS over TLS). Really happy with the performance and cost (quiet, low power consumption, no heating issue, no dust collection issue, etc.)


Does Pi-hole also work with other SQL databases? If yes, you could host PostgreSQL on another Pi (or something beefier). Or maybe there is an adapter library that makes it possible to access a SQLite database over the network (not talking about NFS, as the SQLite developers discourage that).


I'm surprised application devs haven't done this, but you can "back up" a SQLite db to an in-memory SQLite db, then just "back up" the in-memory db every so often in the background.
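A minimal sketch of that pattern with Python's `sqlite3` backup API (the table and file names are made up):

```python
import os
import sqlite3
import tempfile

# Durable copy on the (slow) SD card.
path = os.path.join(tempfile.mkdtemp(), "queries.db")
disk = sqlite3.connect(path)
disk.execute("CREATE TABLE IF NOT EXISTS queries (domain TEXT)")
disk.commit()

# Fast in-memory working copy: load the on-disk db into it.
mem = sqlite3.connect(":memory:")
disk.backup(mem)

# Hot-path reads and writes hit RAM instead of the SD card.
mem.execute("INSERT INTO queries VALUES ('example.com')")

# Periodic background flush: persist the in-memory db back to disk.
mem.backup(disk)
print(disk.execute("SELECT COUNT(*) FROM queries").fetchone()[0])  # → 1
```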


The problem with running Pi-hole on the Pi 1 (original) is that it does not have enough physical memory left after installing the Pi-hole stack (I used Nginx + php-fpm for the web UI); otherwise, there is a way to use utilities like `vmtouch` to read the SQLite database file and keep it in the page cache, even "lock" it there.

On the Pi 2 and 3 it's no longer an issue (even if you don't do anything about it), thanks to the memory buff and better I/O (using a faster micro SD card).

Pi-hole provides a mechanism to back up and rotate the database from time to time; one can do that in whatever way suits their use case.
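If memory serves, v5-era Pi-hole exposes the relevant knobs in `pihole-FTL.conf` (the values below are illustrative, not recommendations):

```
# /etc/pihole/pihole-FTL.conf
MAXDBDAYS=90      # prune query history older than 90 days
DBINTERVAL=1.0    # flush in-memory queries to the database every minute
```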

