FWIW as someone who sadly has about 10% of his brain constantly engaged in AI detection when I read HN, absolutely nothing about the announcement struck me as feeling AI generated.
(I feel guilty for pointing it out but I guess it's actually a compliment in this context: I even saw a beautifully reassuring mis-conjugated verb elsewhere on the website. I wonder if LLMs will have to start injecting these errors to give us an authentic feel. Maybe they already have).
I think we are at a point now where just coz AI is so prevalent, every post on any programming forum will have at least one comment saying "AI slop".
Is there a good code review tool out there? The best one I've used is Gerrit, at least it has a sensible design in principle. Aside from that I've only used GitHub and Gitlab which both seem like toys to me. (And mailing lists, lol).
But the implementation of Gerrit seems rather unloved; it gets just the minimal maintenance to keep Go/Android chooching along, and nothing more.
My old job used Gerrit, and my new job uses GitLab. I really miss the information density and workflow of Gerrit. We enforce fast-forward merges and squashing for MRs anyway, so we just have an awkward version of what Gerrit does by default.
GitLab CI is good, but we use local (k8s-hosted) runners, so I have to imagine there are a bunch of options that provide a similar experience.
Just like with plain git - in GitLab you can merge a branch that has multiple separate commits in it. And you can also merge (e.g. topical/feature) branches into one branch - and then merge that "combined" branch into main/master.
Though most teams/projects prefer you don't stretch that route to the extreme - simply because it's a PITA to maintain/sync several branches for a long period of time, resolving merge conflicts between branches that have been separate for a long time isn't fun, and people don't like to review huge diffs.
Very likely people who actually work on RE at the NSA also have access to IDA Pro licenses. I don't work in this space, so take it with a pinch of salt, but my understanding is this is a fairly long term strategic initiative to _eventually_ be the best tool.
It’s better in some dimensions and not others, and it’s built on a fundamentally different architecture, so of course they use both.
Ghidra excels because it is extremely abstract, so new processors can be added at will and automatically have a decompiler, control flow tracing, mostly working assembler, and emulation.
IDA excels because it has been developed for a gazillion years against patterns found in common binaries and has an extremely fast, ergonomic UI and an awesome debugger.
For UI driven reversing against anything that runs on an OS I generally prefer IDA, for anything below that I’m 50/50 on Ghidra, and for anything where IDA doesn’t have a decompiler, Ghidra wins by default.
For plugin development or automated reversing (even pre LLMs, stuff like pattern matching scripts or little evaluators) Ghidra offers a ton of power since you can basically execute the underlying program using PCode, but the APIs are clunky and until recently you really needed to be using Java.
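For a taste of what that looks like, here's a rough sketch of a Ghidra script in the Jython/PyGhidra flavour (the iteration and printing are purely illustrative) that walks every function and dumps the raw P-Code behind each instruction:

    # Sketch of a Ghidra script (run from the Script Manager or via
    # analyzeHeadless -postScript). Assumes the standard GhidraScript
    # environment, i.e. currentProgram is handed to you.
    fm = currentProgram.getFunctionManager()
    listing = currentProgram.getListing()

    for func in fm.getFunctions(True):  # True = walk functions in address order
        print("function %s @ %s" % (func.getName(), func.getEntryPoint()))
        for instr in listing.getInstructions(func.getBody(), True):
            for op in instr.getPcode():  # raw P-Code ops for this instruction
                print("    %s" % op)

From there it's a short hop to pattern matching on the P-Code or feeding it into a little evaluator, which is where Ghidra really pays off.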
Well, Ghidra's strength is batch processing at scale (which is why its P-Code analysis is less accurate than IDA's but still good enough) while allowing a massive number of modules to execute. That allows huge distributed fleets of Ghidra. IDA has idalib now, and hcli will soon allow batch fleets, but IDA's focus is very much highly accurate analysis (for now), which makes it a lot less scalable performance-wise (for now).
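The batch side is basically just fanning the headless analyzer out over a pile of binaries. A rough sketch (the paths, project names and post-script are all made up):

    # Run Ghidra's headless analyzer over a directory of samples.
    # analyzeHeadless ships in Ghidra's support/ directory; everything
    # else here (paths, script name) is illustrative.
    import pathlib
    import subprocess

    GHIDRA = "/opt/ghidra/support/analyzeHeadless"  # assumed install location
    PROJECT_DIR = "/tmp/ghidra-projects"
    SAMPLES = sorted(pathlib.Path("/samples").glob("*.bin"))  # made-up sample dir

    pathlib.Path(PROJECT_DIR).mkdir(parents=True, exist_ok=True)

    for i, binary in enumerate(SAMPLES):
        subprocess.run([
            GHIDRA, PROJECT_DIR, "batch-%d" % i,
            "-import", str(binary),
            "-postScript", "DumpPcode.py",  # whatever analysis script you want to run
            "-deleteProject",               # don't keep the throwaway project around
        ], check=True)

Shard that loop across a fleet and you get the scale Ghidra is good at.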
I wish this would be the default. I expose my homelab port 22 directly to the internet. I'm _pretty_ sure I always always always disable password auth but I do worry about it because most distros have an unsafe default.
(A lot of this risk is mitigated by not having login passwords but I definitely have one node where I have a login password, it's an old laptop so I thought I might want to physically log in for local debugging).
I guess the ideal solution here is to run a prober service that attempts logins and alerts if it gets any responses that smell like password auth is possible. But no way I have time to set that up.
If the "Permission denied (...)" line lists password among the methods, the server is accepting password auth attempts. If it immediately drops you with just "Permission denied (publickey)", you're good.
The tricky part is that sshd_config can be overridden per-user with Match blocks, so ideally you'd probe with a few different usernames. But even a basic probe catches the 90% case of someone forgetting PasswordAuthentication no.
For the laptop with a real login password: you could set PasswordAuthentication no in sshd_config but keep the login password for local console access. Those are independent settings - sshd_config only affects remote SSH, not local login.
If you try to connect to a server which requires a public key for authentication, you get "Permission denied (publickey)." back, which causes your 5 lines to wrongly raise an alarm ... you need to adapt your grep.
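Something like this rough sketch handles that case - it shells out to the stock OpenSSH client with PreferredAuthentications=none so the server has to reveal which methods it would accept, and only alarms when "password" is actually among them (the hostnames and probe username are made up):

    #!/usr/bin/env python3
    # Rough sketch, not a hardened monitor: offer the server no auth methods
    # at all and check whether "password" appears in the methods it lists
    # back, e.g. "Permission denied (publickey,password)."
    import subprocess

    HOSTS = ["node1.example.lan", "node2.example.lan"]  # made-up hostnames

    for host in HOSTS:
        result = subprocess.run(
            ["ssh",
             "-o", "BatchMode=yes",                  # never prompt interactively
             "-o", "PreferredAuthentications=none",  # offer nothing, just listen
             "-o", "ConnectTimeout=5",
             "probe@" + host],                       # arbitrary probe username
            capture_output=True, text=True,
        )
        # OpenSSH prints the "Permission denied (...)" line on stderr.
        if "password" in result.stderr:
            print("ALERT: %s appears to accept password auth" % host)
        else:
            print("ok: %s" % host)

Per the Match-block caveat above, you'd ideally loop this over a few usernames too.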
One way to solve this is to use a configuration management tool (Puppet / Chef / Salt / Ansible etc.). Alternatively, run NixOS. You apply the setting once and then it's applied to all your machines from that point onwards.
The discourse about AI is definitely the worst I've ever experienced in my life.
One group of people saying every amazing breakthrough "doesn't count" because the AI didn't put a cherry on top. Another group of people saying humans are obsolete, I just wrote a web browser with AI bro.
There are some voices out there that are actually examining the boundaries, possibilities and limitations. A lot of good stuff like that makes it onto HN but then if you open the comments it's just intellectual dregs. Very strange.
ISTR there was a similar phenomenon with cryptocurrency. But with that it was always clear the fog of bullshit would blow away sooner or later. But maybe if it hadn't been there, a load of really useful stuff could have come out of the crypto hype wave? Anyway, AI isn't gonna blow over like crypto did. I guess we have more of a runway to grow out of this infantile phase.
The thing is, I doubt anyone at TikTok ever says "this design choice is good because it's addictive". Almost certainly, their leadership gives them metrics to target, like watch time, and they just hypothesise and experiment on changes with those metrics in mind. Almost certainly the design of TikTok is almost entirely emergent. Just like the scientific method is "revealing" truth, I think TikTok is just "revealing" the design that maximises its target metrics.
So what we have is a machine designed to optimise for something adjacent to addictiveness, and then some rules saying "you can't design for addictiveness"...
What happens when an underspecified vibe rule clashes with a billion dollar optimisation machine? Surely the machine wins every time? The machine is already defeating every ruleset that it's ever come up against.
Feels like the only way regulation could achieve anything is if it said "you can't build a billion dollar optimisation machine at all".
I think weird acting styles can be part of the joy of watching older media. Seems like films mostly switched over to "modern" acting in the 70s (?) and TV had a lot more variety in style (and quality lol) way up into the "modern era".
I'm not gonna say "it's not worse it's just different", coz TBH... It's worse lol. If modern acting was a rare minority style of practice I would seek it out voraciously. But, for the variety I do think it's fun to watch old stuff too!
I was hesitant to make any concrete claims in my comment since I don't know much about it but I _think_ this is basically about The Method i.e. Method Acting.
From what I understand, although people still talk about it as if it's a specialised or niche thing, it's actually basically just how acting is done nowadays (at least for films and in the west).
My mother and younger sister both prefer it over the default Windows 10/11 design. Mum says, "feels similar to my phone [pure Android 12] yet I can do so much more".
Given that my sister only really needs Steam Big Picture and everything my mother uses is already in Flathub or defined in a Nix flake, they didn't experience any ecosystem issues.
I find that respect for safe following distance varies quite a lot amongst the places I've driven.
E.g.
- Switzerland and (somewhat surprisingly) UK: pretty good, people doing idiotic shit is rare enough that I'll usually comment on it if there's another person in the car. If someone is riding my ass I'll make the effort to try and shake them off.
- Italy and Spain: horrifying, impossible to relax at all on the highway, having someone 2 car lengths off your rear bumper is the default condition.
- France and USA: somewhere in the middle where there are a lot of idiots but they are still the minority.
Subjectively, the USA feels much more sketchy because the rules are so much looser around overtaking.