
Reminds me of this:

"""On the first day of class, Jerry Uelsmann, a professor at the University of Florida, divided his film photography students into two groups.

Everyone on the left side of the classroom, he explained, would be in the “quantity” group. They would be graded solely on the amount of work they produced. On the final day of class, he would tally the number of photos submitted by each student. One hundred photos would rate an A, ninety photos a B, eighty photos a C, and so on.

Meanwhile, everyone on the right side of the room would be in the “quality” group. They would be graded only on the excellence of their work. They would only need to produce one photo during the semester, but to get an A, it had to be a nearly perfect image.

At the end of the term, he was surprised to find that all the best photos were produced by the quantity group. During the semester, these students were busy taking photos, experimenting with composition and lighting, testing out various methods in the darkroom, and learning from their mistakes. In the process of creating hundreds of photos, they honed their skills. Meanwhile, the quality group sat around speculating about perfection. In the end, they had little to show for their efforts other than unverified theories and one mediocre photo."""

from https://www.thehuntingphotographer.com/blog/qualityvsquantit...


I do write mostly async code, too.

There are several ~~problems~~ subtleties that hinder the use of Rust's async, IMHO.

- BoxFuture. It's used almost everywhere, which means there's no chance for the heap allocation to be elided.

- Verbosity. Look at this BoxFuture definition: `BoxFuture<'a, T> = Pin<Box<dyn Future<Output = T> + Send + 'a>>;`. It's awful. I do understand the Pin type, the Future trait, Send, lifetimes, and dynamic dispatch. But I *have to* know all of these non-obvious things (see the sketch after this list) just to work with coroutines in my (possibly single threaded!) program =(

- No async Drop, and no async trait in the stdlib (the latter was fixed not so long ago)
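To make the boxing concrete, here's a minimal sketch (the `Handler` trait and its names are purely illustrative) of the pattern that pulls BoxFuture in everywhere: returning a future from a trait object.

```rust
use std::future::Future;
use std::pin::Pin;

// The alias spelled out: a heap-allocated, pinned, type-erased future.
type BoxFuture<'a, T> = Pin<Box<dyn Future<Output = T> + Send + 'a>>;

// A hypothetical handler trait: with dynamic dispatch, every async method
// has to go through a box, even in a single-threaded program.
trait Handler {
    fn handle(&self, request: String) -> BoxFuture<'_, String>;
}

struct Echo;

impl Handler for Echo {
    fn handle(&self, request: String) -> BoxFuture<'_, String> {
        // Box::pin allocates on every call; this is the allocation that
        // has no chance of being elided.
        Box::pin(async move { request })
    }
}

fn main() {
    let h: Box<dyn Handler> = Box::new(Echo);
    let _fut = h.handle("hi".to_string()); // one Box allocation per call
}
```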

I am *not* a hater of Rust's async system. It's a little simpler and less tunable than C++'s, but more complex than Go's. I just can't call Rust's async approach a good enough trade-off, when so many of the other decisions made in the design of the language come close to being silver bullets.


I used to be very against closed source products but changed my mind recently. One of the founders of Obsidian makes some great points here: https://forum.obsidian.md/t/open-sourcing-of-obsidian/1515/1...

You can watch Lattner's interview with ThePrimeagen. It's a haphazardly designed language where pressure to ship from Apple as a whole overrides any design or development considerations.

That's why you end up with a compiler that barfs at even the simplest SwiftUI code, because Swift's type system is overly complicated and undecidable. And it makes the compiler dog slow.

That's why you end up with 200+ keywords [1] with more added each release.

That's how you end up with idiocy like `guard let self = self else { return }` (I think they've since "fixed" this with some syntax sugar: `guard let self else { return }`) because making if statements understand nils is beyond the capabilities of heroes, apparently.

And this is just surface level that immediately came to mind.

[1] It's not a typo: https://x.com/jacobtechtavern/status/1841251621004538183


I love Emacs. My first intro to it was on the Braille Plus Mobile Manager back in like 2008 or so. That was a beautiful device that ran Linux and was developed for the blind. There's been nothing exactly like it since. The BT Speak is a poor imitation that runs on a Raspberry Pi 4 and is sluggish, because Linux accessibility is hard and not optimized for such low-power devices.

Anyway, I began learning Emacs commands in the Emacs tutorial on that Braille Plus, and they made sense to me. Unfortunately, Emacspeak only really works well on Linux and Mac, not Windows, where all the blind people are. Speechd-el only works on Linux, since it uses Speech-dispatcher. I got Speechd-el talking on Termux for Android last night, although it was rather laggy between key press and speech. Emacspeak development has paused, though, and Speechd-el seemingly hasn't been updated in half a year. Emacs itself has a lot going on for a normal screen reader to interpret, which is why Emacs-specific speech interfaces are so useful.

A few examples:

* On Windows, with Windows Terminal and the NVDA screen reader, arrow keys read where the cursor is, but with C-n and C-p, C-f and C-b, all that, NVDA doesn't say anything. This is with the -nw command line option, because the GUI is inaccessible.

* Now, if I do M-x, it does say "minibuf help, M-x, Windows Powershell Terminal". From there, I can do list-packages and RET and use arrow keys to go through packages, but N and P don't speak even though I know they move between packages. So it seems like the echo area works.

* Programs like the calendar, though, really don't speak well with a screen reader. It just reads the line, not the focused date. Using left and right just says "1 2 3 4 5" etc. So custom interfaces don't work well. I shudder to think how it'd read Helm.

Lol maybe I can get AI to make a good speech server for Emacspeak for Windows.


In case anyone else was confused: the link/quote in this comment are from the previous "async cancellation issue" write-up, which describes a situation where you "drop" a future: the code in the async function stops running, and all the destructors on its local variables are executed.

The new write-up from OP is that you can "forget" a future (or just hold onto it longer than you meant to), in which case the code in the async function stops running but the destructors are NOT executed.
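A minimal sketch of the two behaviors (hand-polling with the no-op waker, stable since Rust 1.85; all names illustrative):

```rust
use std::future::Future;
use std::task::{Context, Waker};

struct Guard;
impl Drop for Guard {
    fn drop(&mut self) {
        println!("destructor ran");
    }
}

async fn task() {
    let _g = Guard;                     // a local with a destructor
    std::future::pending::<()>().await; // suspend forever, _g still alive
}

fn main() {
    let mut cx = Context::from_waker(Waker::noop());

    // Dropping a suspended future: the locals' destructors DO run.
    let mut fut = Box::pin(task());
    let _ = fut.as_mut().poll(&mut cx); // Pending; _g is now live
    drop(fut);                          // prints "destructor ran"

    // Forgetting a suspended future: the destructors do NOT run.
    let mut fut = Box::pin(task());
    let _ = fut.as_mut().poll(&mut cx);
    std::mem::forget(fut);              // prints nothing; the leak is "safe"
}
```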

Both of these behaviors are allowed by Rust's fairly narrow definition of "safety" (which allows memory leaks, deadlocks, infinite loops, and, obviously, logic bugs), but I can see why you'd be disappointed if you bought into the broader philosophy of Rust making it easier to write correct software. Even the Rust team themselves aren't immune -- see the "leakpocalypse" from before 1.0.


Skimming through, this document feels thorough and transparent. Clearly, a hard lesson learned. The footnotes, in particular, caught my eye https://rfd.shared.oxide.computer/rfd/397#_external_referenc...

> Why does this situation suck? It’s clear that many of us haven’t been aware of cancellation safety and it seems likely there are many cancellation issues all over Omicron. It’s awfully stressful to find out while we’re working so hard to ship a product ASAP that we have some unknown number of arbitrarily bad bugs that we cannot easily even find. It’s also frustrating that this feels just like the memory safety issues in C that we adopted Rust to get away from: there’s some dynamic property that the programmer is responsible for guaranteeing, the compiler is unable to provide any help with it, the failure mode for getting it wrong is often undebuggable (by construction, the program has not done something it should have, so it’s not like there’s a log message or residual state you could see in a debugger or console), and the failure mode for getting it wrong can be arbitrarily damaging (crashes, hangs, data corruption, you name it). Add on that this behavior is apparently mostly undocumented outside of one macro in one (popular) crate in the async/await ecosystem and yeah, this is frustrating. This feels antithetical to what many of us understood to be a core principle of Rust, that we avoid such insidious runtime behavior by forcing the programmer to demonstrate at compile-time that the code is well-formed
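For anyone who hasn't hit this: the canonical trap is a `select!` arm racing a non-cancel-safe read. A minimal sketch using tokio (whose docs explicitly mark `read_exact` as not cancellation safe); the details are illustrative:

```rust
use tokio::io::AsyncReadExt;
use tokio::time::{sleep, Duration};

#[tokio::main]
async fn main() -> std::io::Result<()> {
    let mut stdin = tokio::io::stdin();
    let mut buf = [0u8; 16];

    tokio::select! {
        // If the sleep arm wins, this read future is dropped mid-operation:
        // bytes already pulled into `buf` are abandoned and the stream is
        // left at an unknown position. No crash, no error; the data is just
        // silently gone -- exactly the undebuggable failure mode the RFD
        // describes.
        res = stdin.read_exact(&mut buf) => {
            res?;
            println!("read 16 bytes");
        }
        _ = sleep(Duration::from_secs(1)) => {
            println!("timed out; any partial read was lost");
        }
    }
    Ok(())
}
```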


And here I am, selling my MacBook M4 Pro to buy a MacBook Air and a dedicated gaming machine. I've tried gaming on the MacBook with Heroic, GPTK, Whisky, the RPCS3 emulator, and some native titles. When a game runs, the performance is stunning for a laptop, but there are always glitches, bugs, and annoyances that take the joy out of it. Not to mention the lack of support for any sort of online multiplayer, due to the lack of anticheat support.

I wish Apple would take gaming more seriously and make GPTK a first-class citizen, the way Proton is on Linux.


Instead of Windows Backup (which relies on M$ OneDrive), you can enable and use Windows File History (in the Control Panel settings).

File History is a backup feature in Windows that automatically saves copies of your files from specific folders, like Documents and Pictures, to an external drive or network location. It allows you to restore previous versions of your files if they are lost or damaged.

To enable File History in Windows, connect an external drive or network location, then go to Settings > Update & Security > Backup, and select "Add a drive" to choose your backup location. Finally, turn on File History to start backing up your files automatically.


This is a great opportunity to get HN's take on these tools: systems to streamline the management of containerized services deployed on self-managed hardware.

We've been running both CapRover and Coolify for a couple years. We quite like renting real dedicated servers (Hetzner, OVH), it is so much cheaper than the cloud and a minor management burden. These tools make it easy to bridge the gap and treat these physical servers like PaaS.

We have dozens of apps and services deployed on a couple of large-ish servers with backups. Most modern back-ends do so little computationally that lots of containers can comfortably live together. 128 GB of RAM and 64 cores go a long way and are surprisingly cheap at Hetzner, and having that fixed monthly cost removes a mental burden. It is cheap and simple, and availability issues are much rarer than people expect: maybe a couple of mishaps a year, which are easy to recover from and don't really have a meaningful impact for a startup.

Coolify feels more complete and mature, but frankly, after using both a lot, we now steer more towards the simplicity of CapRover. I see that Dokploy is also a major alternative to Coolify, don't know much about it.

How does /dev/push compare? Do you have any other recommendations in this vein? Or differing opinions on the tools I mentioned?


i am honestly glad i don't write rust anymore

One of the things I've noticed with seniors is that fine motor control tends to start to go.

Things like double-clicking a mouse become difficult: performing two very fast clicks without also moving the mouse is hard.

Same with the iPhone: swiping without deviating, pressing TINY buttons, and even what constitutes a tap are difficult for the elderly. Yes, there's zoom, but from what I've watched, that only makes it 10% better.


We're processing tenders for the construction industry - this comes with a 'free' bucket sort from the start, namely that people practically always operate only on a single tender.

Still, that single tender can be on the order of a billion tokens. Even if an LLM supported that insane context window, that's roughly 4 GB of text (at ~4 bytes per token) that would need to be moved, and with current LLM prices, inference would cost thousands of dollars. I detailed this a bit more at https://www.tenderstrike.com/en/blog/billion-token-tender-ra...

And that's just one (though granted, a very large) tender.

For the corpus of a larger company, you'd probably be looking at trillions of tokens.

While I agree that delivering tiny, chopped up parts of context to the LLM might not be a good strategy anymore, sending thousands of ultimately irrelevant pages isn't either, and embeddings definitely give you a much superior search experience compared to (only) classic BM25 text search.


Locked into iPhone.

I want to switch to Android, but I have all the following problems:

1. iMessage, unlike whatsapp etc, does not have an android app, and some of my family uses iMessage, so I would be kicked from various group chats

2. My grandma only knows how to use facetime, so I can't talk to her unless I have an iPhone

3. My apple books I purchased can't be read on android

4. Lose access to all my apps (android shares this one)

5. I have a friend who uses airdrop to share maps and files when we go hiking without signal, and apple refuses to open up the airdrop protocol so that I can receive those from android, or an airdrop app on android

6. ... I don't have a MacBook, but if I did, the screen sharing, copy+paste sharing, and iMessage-on-macOS would all not work with Android.

It's obvious that apple has locked in a ton of stuff. Like, all other messages and file-sharing protocols except iMessage and airdrop work on android+iOS. Books I buy from google or amazon work on iOS or android.

Apple is unique here.


This is really sad, because the replacement, Swift Package Manager, is really crap. It lacks some useful features (an "outdated" command, meaningful command-line output, ...). It's buggy as hell in Xcode: most of the time Xcode just crashes when you add/remove a dependency, error messages while fetching a repository are not understandable and often not even fully visible, and many repositories have some old Package.swift that current developer tools won't read. And worst of all, it stores the full repositories of all the dependencies, with their full history, on your machine, and, when you do CI properly, downloads them every time, which often means GBs of data.

I think Go is going away. It occupies such a weird niche. People have said it's good for app backends, but you should really have exceptions (JS, Py, Java) for that sort of thing. For systems, just use Rust, or worst case C++. For CLIs, it doesn't really matter. For things where portability matters, like WASM, you can't use Go. Bad syntax and a bad type system on top of it.

What if Google spent all that time and money on something from the outside instead of inventing their own language? Like, Microsoft owns npm now.


I happen to prototype in Python, then move on to Rust when things solidify... For those who don't know Rust, its compiler is pretty slow. I mean, when you want to add a feature to an existing code base, each "run" you do to test the new feature implies relinking the whole project, and that's very slow (or you have to split the project into subprojects, but I'm lazy). In that situation, prototyping in Python makes sense (at least to me :-)

I have multiple system prompts that I use before getting to the actual specification.

1. I use the Socratic Coder[1] system prompt to have a back and forth conversation about the idea, which helps me hone the idea and improve it. This conversation forces me to think about several aspects of the idea and how to implement it.

2. I use the Brainstorm Specification[2] user prompt to turn that conversation into a specification.

3. I use the Brainstorm Critique[3] user prompt to critique that specification and find flaws in it which I might have missed.

4. I use a modified version of the Brainstorm Specification user prompt to refine the specification based on the critique and have a final version of the document, which I can either use on my own or feed to something like Claude Code for context.

Doing those things improved the quality of the code and work spit out by the LLMs I use by a significant amount, but more importantly, it helped me write much better code on my own, because I now have something to guide me, whereas before I used to go in blind.

As a bonus, it also helped me decide if an idea was worth it or not; there are times I'm talking with the LLM and it asks me questions I don't feel like answering, which tells me I'm probably not into that idea as much as I initially thought, it was just my ADHD hyper focusing on something.

[1]: https://github.com/jamesponddotco/llm-prompts/blob/trunk/dat...

[2]: https://github.com/jamesponddotco/llm-prompts/blob/trunk/dat...

[3]: https://github.com/jamesponddotco/llm-prompts/blob/trunk/dat...


The website design is so pleasing, props!

Excel has the benefit of being understandable and fixable by a lot of regular office workers.

It's a bit surprising that we don't have that feature as a requirement for most IT infrastructure. It would make it so much more usable.


Offset-based pagination will be a problem on big tables.
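The reason: `OFFSET n` makes the database walk and discard all n skipped rows, so latency grows with page depth. A sketch of the usual fix, keyset (a.k.a. seek) pagination; the table and column names are hypothetical:

```rust
// OFFSET pagination: page 2,000 scans ~100,000 rows just to throw them away.
const OFFSET_PAGE: &str =
    "SELECT id, title FROM items ORDER BY id LIMIT 50 OFFSET 100000";

// Keyset pagination: remember the last id the client saw and seek straight
// to it through the index; cost stays flat no matter how deep the page is.
const KEYSET_PAGE: &str =
    "SELECT id, title FROM items WHERE id > $1 ORDER BY id LIMIT 50";

fn main() {
    println!("{OFFSET_PAGE}\n{KEYSET_PAGE}");
}
```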

I write (mostly) bug free production code fast by doing the following:

1. I use a custom code generator to generate 90%+ of the code I need from a declarative spec.

2. The remaining hand written code is mostly biz logic and it is exhaustively auto tested at the server API level.

3. I maintain and "bring along" a large library of software tools "tested in combat" I can reuse on new projects.

4. I have settled on an architecture that is simple but works and scales well for the small to large biz applications I work on.

5. I constantly work on removing complexity from the code, making it as easy to understand and modify as possible.


In other words, if you're an open source startup and want to avoid being AWS'd, choose dual AGPL + commercial (with IP transfer CLAs).

Hey folks, I ran into similar scalability issues and ended up building a benchmark tool to analyze exactly how LISTEN/NOTIFY behaves as you scale up the number of listeners.

Turns out that all Postgres versions from 9.6 through current master scale linearly with the number of idle listeners — about 13 μs of extra latency per connection. That adds up fast: with 1,000 idle listeners, a NOTIFY round-trip goes from ~0.4 ms to ~14 ms (0.4 ms + 1,000 × 13 μs ≈ 13.4 ms).

To better understand the bottlenecks, I wrote both a benchmark tool and a proof-of-concept patch that replaces the O(N) backend scan with a shared hash table for the single-listener case — and it brings latency down to near-O(1), even with thousands of listeners.

Full benchmark, source, and analysis here: https://github.com/joelonsql/pg-bench-listen-notify

No proposals yet on what to do upstream, just trying to gather interest and surface the performance cliff. Feedback welcome.


I built a Chrome extension with one feature that transcribes audio to text in the browser, using huggingface/transformers.js running the OpenAI Whisper model with WebGPU. It works perfectly! Here is a list of examples of all the things you can do in the browser with WebGPU for free [0].

The last thing in the world I want to do is listen to or watch presidential social media posts, but, on the other hand, sometimes enormously stupid things are said which move the S&P 500 up or down $60 in a session. So this feature queries for new posts every minute, does OCR image-to-text and transcribes video audio to text locally, and sends the posts with text for analysis, all in the background inside a Chrome extension [1], before notifying me of anything economically significant.

[0] https://github.com/huggingface/transformers.js/tree/main/exa...

[1] https://github.com/adam-s/doomberg-terminal


My elderly mother-in-law is slowly going blind. She relies on Meta glasses to read print on everything — from the back of a can to the mail. She also uses them to help locate items around the house, whether it’s something on the counter or in the living room.

I’ve tried the glasses myself, and I’m convinced that wearable eyewear like this will eventually replace the mobile phone. With ongoing advances in miniaturization, it’s only a matter of time before AR and VR are fully integrated into everyday wearables.


The issue with this is that it would be quite difficult to replace the functionality of the existing VC-25As.

They have comprehensive anti missile and anti air counter measures.

They are completely electromagnetically hardened, which serves both to protect against electronic warfare and worst-case scenarios like nuclear attacks, and to shield internal wireless comms against spying.

They can fly for up to 15 hours without interruption at full fueling and full weight capacity.

On top of their long flight time they can refuel mid-air off USAF tankers if need be (fighter jet style).

They stock two separate galleys for food preparation with supplies for 2000+ meals as well as weeks worth of fresh water.

They have a full onboard pharmacy, diagnostics lab, and surgical center including a full size operating room and all necessary imaging technology. With all accompanying staff of course.

They don't require any support upon landing, supplying their own retractable vehicle-mounted stairway and baggage loaders, which are stored in the aircraft. Strictly speaking, all they require is a runway. They can deal with everything else on their own.

And that's before you even start to consider spaces for staff members, a dedicated seating area for the press pool that travels with the president, conference rooms, etc as well as bathrooms, showers, sleeping arrangements, etc for those on board.

-----

Switching away from the VC-25As to a much less capable/independent plane will massively widen the attack surface for targeting the president and will introduce so many avenues for espionage or assassination that simply haven't been possible in the past thanks to the VC-25As.


Do people save time by learning to write code at 420WPM? By optimising their vi(m) layouts and using languages with lots of fancy operators that make things quicker to write?

Using an LLM to write code you already know how to write is just like using intellisense or any other smart autocomplete, but at a larger scale.


> What do you like about Zig more than Rust?

It's been quite a while now, but:

- Great allocator support

- Comptime is better than macros

- Better interop with C

- In the context of the editor, raw byte slices work way better than validated strings (i.e. `str` in Rust) even for things I know are valid UTF8

- Constructing structs with .{} is neat

- Try/catch is kind of neat (try blocks in Rust will make this roughly equivalent I think, but that's unstable so it doesn't count)

- Despite being less complete, somehow the utility functions in Zig just "clicked" better with me - it somehow just felt nice reading the code

There's probably more. But overall, Zig feels like a good fit for writing low-level code, which is something I personally simply enjoy. Rust sometimes feels like the opposite, particularly due to the lack of allocators in most of its types. And because of the many barriers in place to write performant code safely. Example: The `Read` trait doesn't work on `MaybeUninit<u8>` yet and some people online suggest to just zero-init the read buffer because the cost is lower than the syscall. Well, they aren't entirely wrong, yet this isn't an attitude I often encounter in the Zig area.

> How did you ensure your Zig/C memory was freed properly?

Most allocations happened either in the text buffer (= one huge linear allocator) or in arenas (also linear allocators), so freeing was a matter of resetting the allocator in a few strategic places (i.e. once per render frame). This is actually very similar to the current Rust code, which performs no heap allocations in a steady state either. Even though my Zig/C code had bugs, I don't remember having memory issues in particular.
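The pattern, sketched with the bumpalo crate purely for illustration (my actual code hand-rolls its arenas): allocate all per-frame scratch data into an arena, then free it with one reset.

```rust
use bumpalo::Bump;

// Per-frame scratch data borrows from the arena; nothing is freed
// individually.
fn render_frame(arena: &Bump, n: u32) -> &str {
    arena.alloc_str(&format!("frame {n}"))
}

fn main() {
    let mut arena = Bump::new();
    for n in 0..3 {
        let line = render_frame(&arena, n);
        println!("{line}");
        arena.reset(); // one bulk free per render frame
    }
}
```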

> What do you not like about Rust?

I don't yet understand the value of forbidding multiple mutable aliases, particularly at a compiler level. My understanding was that the difference is only a few percent in benchmarks. Is that correct? There are huge risks you run into when writing unsafe Rust: If you accidentally create aliasing mutable pointers, you can break your code quite badly. I thought the language's goal is to be safe. Is the assumption that no one should need to write unsafe code outside of the stdlib and a few others? I understand if that's the case, but then the language isn't a perfect fit for me, because I like writing performant code and that often requires writing unsafe code, yet I don't want to write actual literal unsafe code. If what I said is correct, I think I'd personally rather have an unsafe attribute to mark certain references as `noalias` explicitly.
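A tiny illustration of the breakage (this is exactly the kind of thing `cargo miri run` flags under its aliasing rules):

```rust
fn main() {
    let mut x = 0u32;
    let p = &mut x as *mut u32;
    unsafe {
        let a = &mut *p;
        let b = &mut *p; // a second live &mut to the same place
        *b += 1;
        *a += 1; // touching `a` after creating `b` is UB under Stacked Borrows
    }
    println!("{x}");
}
```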

Another thing is the difficulty of using uninitialized data in Rust. I do understand that this involves an attribute in clang which can then perform quite drastic optimizations based on it, but this makes my life as a programmer kind of difficult at times. When it comes to `MaybeUninit`, or the old `mem::uninitialized()`, I feel like the complexity of compiler engineering is leaking into the programming language itself, and I'd like to be shielded from that if possible. At the end of the day, what I'd love to do is declare an array in Rust, assign it no value, `read()` into it, and magically reading from said array is safe. That's roughly how it works in C, and I know that it's also UB there if you do it wrong, but one thing is different: it doesn't really ever occupy my mind as a problem. In Rust it does.
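The workaround I keep running into, as a sketch: zero-init the buffer so safe Rust accepts it, and eat the redundant write (the unstable `read_buf` / `BorrowedBuf` machinery is what's supposed to fix this eventually):

```rust
use std::io::Read;

fn read_chunk(r: &mut impl Read) -> std::io::Result<Vec<u8>> {
    // Zeroing 4 KiB is wasted work if read() overwrites it anyway, but
    // stable safe Rust has no way to hand read() an uninitialized buffer.
    let mut buf = [0u8; 4096];
    let n = r.read(&mut buf)?;
    Ok(buf[..n].to_vec())
}

fn main() -> std::io::Result<()> {
    let mut input: &[u8] = b"hello";
    println!("{:?}", read_chunk(&mut input)?);
    Ok(())
}
```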

Also, as I mentioned, `split_off` and `remove` from `LinkedList` use numeric indices and are O(n), right? `linked_list_cursors` is still marked as unstable. That's kind of irritating if I'm honest, even if it's kind of silly to complain about this in particular.

In all fairness, what bothers me the most when it comes to Zig is that the language itself often feels like it's being obtuse for no reason. Loops, for instance, read vastly differently from most other modern languages, and it's unclear to me why that's useful. Files-as-structs is also quite confusing. I'm not a big fan of this "quirkiness" and I'd rather use a language that's more similar to the average.

At the end of the day, both Zig and Rust do a fine job in their own right.


Hey all! I made this! I really hope you like it and if you don't, please open an issue: https://github.com/microsoft/edit

To respond to some of the questions or those parts I personally find interesting:

The custom TUI library is so that I can write a plugin model around a C ABI. Existing TUI frameworks that I found and were popular usually didn't map well to plain C. Others were just too large. The arena allocator exists primarily because building trees in Rust is quite annoying otherwise. It doesn't use bumpalo, because I took quite the liking to "scratch arenas" (https://nullprogram.com/blog/2023/09/27/) and it's really not that difficult to write such an allocator.

Regarding the choice of Rust, I actually wrote the prototype in C, C++, Zig, and Rust! Out of these 4 I personally liked Zig the most, followed by C, Rust, and C++ in that order. Since Zig is not internally supported at Microsoft just yet (chain of trust, etc.), I continued writing it in C, but after a while I became quite annoyed by the lack of features that I came to like about Zig. So, I ported it to Rust over a few days, as it is internally supported and really not all that bad either. The reason I didn't like Rust so much is because of the rather weak allocator support and how difficult building trees was. I also found the lack of cursors for linked lists in stable Rust rather irritating if I'm honest. But I would say that I enjoyed it overall.

We decided against nano, kilo, micro, yori, and others for various reasons. What we wanted was a small binary so we can ship it with all variants of Windows without extra justifications for the added binary size. It also needed to have decent Unicode support. It should've also been one built around VT output as opposed to Console APIs to allow for seamless integration with SSH. Lastly, first class support for Windows was obviously also quite important. I think out of the listed editors, micro was probably the one we wanted to use the most, but... it's just too large. I proposed building our own editor and while it took me roughly twice as long as I had planned, it was still only about 4 months (and a bit for prototyping last year).

As GuinansEyebrows put it, it's definitely quite a bit of "NIH" in the project, but I also spent all of my weekends on it and I think all of Christmas, simply because I had fun working on it. So, why not have fun learning something new, writing most things myself? I definitely learned tons working on this, which I can now use in other projects as well.

If you have any questions, let me know!

