Hacker News

Apart from what you said about the GUI situation in Rust (which I disagree with), I think TUIs have their niche.

I think writing a useful GUI has considerable overhead no matter which technology you use. In addition, GUIs bring other difficulties, like testability, i18n, l10n, and accessibility.

This is why people often resort to command line tools, and rightfully so. There are cases, however, where a CLI won't cut it, and I believe TUIs are a nice and lean solution that sits right between a CLI and a full-blown GUI, and isn't going anywhere.



To be fair, TUIs are strictly worse accessibility-wise than GUIs.

There's no standard for communicating TUI semantics to assistive technology, and the few conventions that do exist (like using the cursor to navigate menus instead of some custom highlight) aren't followed.
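(To illustrate the one convention that does exist: a rough curses sketch where the hardware cursor is parked on the selected item, so a screen reader that tracks the cursor can announce it. The function and names here are mine for illustration, not any standard API.)

```python
import curses

def draw_menu(stdscr, items, selected):
    """Draw a menu, placing the real cursor on the selected item
    instead of relying on a color highlight a screen reader can't see."""
    stdscr.erase()
    for i, item in enumerate(items):
        stdscr.addstr(i, 2, item)
    # The hardware cursor, not a reverse-video bar, marks the selection.
    stdscr.move(selected, 2)
    stdscr.refresh()
```

Run it with `curses.wrapper(lambda s: draw_menu(s, ["Open", "Save", "Quit"], 1))`.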

With GUIs, those standards exist and are at least somewhat implemented by all major (non-Rust) UI frameworks.


What you say is true, but TUIs are not strictly worse than GUIs at accessibility. The fact that text is inherently more legible than graphics means that, for example, blind players can play console-based roguelikes (and do: https://www.rockpapershotgun.com/playing-roguelikes-when-you... ), and Dungeon Crawl Stone Soup even has configuration options to improve the experience for blind people: https://github.com/crawl/crawl/blob/599108c877da33bc03cb73da...


Text isn't more legible without structure, and without communicating that structure (which is what the various accessibility toolkits do), you don't get this supposed benefit.


Text has inherent structure that GUIs don't. The ceiling for GUIs is higher (thanks to standards and supporting frameworks), but the floor for TUIs is higher.


Ok, what is the inherent structure of these two columns, and how is a screen reader supposed to divine that structure without the framework telling it that there are two headers with the following text? And imagine the layout is space-separated, as in CLI utils:

  C| Column Wide |
  o|             |
  l|             |
  1| 2           |


Even at the worst case, text can be read aloud and give some indication of what the screen contains. This is absolutely not true for a GUI which could easily just be an opaque rendered canvas. The fact remains: TUIs are inherently legible in ways that GUIs are not guaranteed to be.


Both claims are false: you get no useful indication if letters from different words are read out of order. Even in the primitive example above, you can't tell whether 'o' is a value or a continuation of the column name, and anything even remotely complicated is worse.

> This is absolutely not true for a GUI which could easily just be an opaque rendered canvas.

Are you not aware of OCR? Besides, GUIs have dedicated accessibility tools, which almost none of the TUIs have, so your opaque canvas isn't universal.

> The fact remains:

That's a myth, not a fact, and you fail to establish "the fact" even in the most basic example.


Plus, there's no reason why most GUIs can't adopt a keyboard-only workflow. It's easy to implement. The inverse is not true.


There has to be a reason when 99% of GUI apps don't support it.


There are ways to do it in most GUI applications. On Windows, pressing Alt will sometimes show you the combination to activate certain parts of the UI (keyboard accelerators). It's not obvious anymore because people don't focus on accessibility. Sadly, ensuring a good keyboard workflow isn't common practice, because it's assumed users will reach for the mouse. Or people keep re-inventing the TUI every time they want a terminal-friendly utility.


What are you talking about? A screen reader ought to be way more capable in a TUI or CLI than the massive pain of ANDI or 508 compliance.


TUIs still need to comply with 508 so that “massive pain” is there either way.

What’s actually hard with screen readers isn’t getting text (that’s been easy on most GUI systems for decades) but communicating things in the right order, removing the need to see spatial relationships or color to understand what’s going on.

TUIs make that harder for anything beyond mid-20th-century-style prompt/response interfaces: you don't want to have to reread the entire screen every time a character changes (and some changes, like a clock updating, might need to be ignored), so you want to present updates in a logical order, and you also need text alternatives to ASCII art. For example, if I made a tool which shows server response times graphically, a screen reader user might not want to hear an update every second. If the most interesting thing were a histogram, I'd need to think about how to communicate the distribution, which beats rereading a chart every second only to report that it has shifted left by one unit and gained a single new data point.
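(A rough Python sketch of that histogram idea: summarize the distribution in text and only announce meaningful shifts. The names and the 20% threshold are made up for illustration.)

```python
import statistics

def summarize(samples_ms):
    """One spoken-friendly line instead of a redrawn chart."""
    ordered = sorted(samples_ms)
    p95 = ordered[int(len(ordered) * 0.95)]
    return (f"median {statistics.median(samples_ms):.0f} ms, "
            f"p95 {p95:.0f} ms, max {max(samples_ms):.0f} ms "
            f"over {len(samples_ms)} samples")

def maybe_announce(prev, current, threshold=0.2):
    """Return a summary only when the median moved enough to matter,
    so a clock-like stream of tiny changes stays silent."""
    if prev is None:
        return summarize(current)
    old, new = statistics.median(prev), statistics.median(current)
    if abs(new - old) / old > threshold:
        return summarize(current)
    return None
```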

Those are non-trivial problems in any case but they’re all harder with a TUI because you’re starting with less convention and without the libraries which GUI interface developers have to communicate lots of context to a screen reader.


You physically can't do any of this in a TUI.

There's no protocol that tells a screen reader to say something different from what is actually displayed on the screen. The best you can do is keep a whitelist of screen reader process names and change how your TUI works when one of them is detected, but that's brittle and doesn't work over SSH. You'd also have to think about container escaping and interfacing with the host system when you're running in WSL, as the screen reader is almost certainly on the host side.
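(The brittle whitelist approach might look roughly like this; the process names and the reliance on `pgrep` are my assumptions, and as noted, none of it survives SSH or WSL boundaries.)

```python
import subprocess

# Hypothetical whitelist of screen reader process names on Linux.
SCREEN_READERS = {"orca", "fenrir", "speech-dispatcher"}

def screen_reader_running():
    """Probe the process table for known screen readers.
    Brittle by design: it only sees the local machine."""
    for name in SCREEN_READERS:
        try:
            if subprocess.run(["pgrep", "-x", name],
                              capture_output=True).returncode == 0:
                return True
        except FileNotFoundError:
            # pgrep itself may be missing; assume no reader.
            return False
    return False
```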


> In addition they cause other difficulties, like testability, i18n, l10n and accessability.

Most TUIs don’t have these either. So I don’t see this as a difference between TUI/GUI. If you want to make a GUI and want to ignore these things, you are free to do so.


Where I think TUIs had a niche that GUIs don't quite reproduce is the very particular way DOS TUIs processed input.

An old school DOS TUI reads keyboard input one character at a time from a buffer, doesn't clear the buffer between screens, and is ideally laid out so that a good part of the input is guaranteed to be fixed for a given operation. They were also built without relying on the mouse.

So an operator can hammer out a sequence like "ArrowDown, ArrowDown, ENTER, Y, ENTER, John Smith, ENTER" and even if the system is too slow to keep up with the input, it still works perfectly.
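(A minimal Python sketch of that type-ahead behavior: a keystroke queue that is deliberately never cleared between screens, so input typed faster than the UI can keep up is still consumed in order. The class and names are mine for illustration.)

```python
from collections import deque

class TypeAhead:
    """DOS-style type-ahead buffer: keystrokes queue up and survive
    screen transitions, so a fast operator never loses input."""
    def __init__(self):
        self.buffer = deque()

    def key_pressed(self, key):
        # Interrupt-handler side: just enqueue, never drop.
        self.buffer.append(key)

    def next_key(self):
        # UI side: consume one key at a time, in order.
        return self.buffer.popleft() if self.buffer else None

ta = TypeAhead()
for key in ["Down", "Down", "Enter", "Y", "Enter"]:
    ta.key_pressed(key)      # operator types ahead of the slow UI
# ...screens change here, but the buffer is deliberately NOT cleared...
first = ta.next_key()        # the UI still sees "Down" first
```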

Modern GUIs almost never make this work nearly as well. You need to reach for the mouse, input during delays gets lost, the UI may not be perfectly predictable, and sometimes the UI even shifts around while things are loading. Linux doesn't get this right either: I find the user experience on DOS was far better than with ncurses apps, which have all kinds of weirdness.


> I think writing a useful GUI has considerable overhead no matter which technology you use.

I find egui far easier than Ratatui.



