Hacker News | noodles_nomore's comments

Stay out of it. Your circus is none of my business. If you want to do me a favor, just be honest with your boss. I have to study now.

As grateful as I am that you informed me about Kevin (if that's even true), I'm sorry, but you brought this on yourselves.


I'd expect that any kind of logistics would need tight bounds on min-max 'performance', and not be up to the whims of artificial stupidity. Am I wrong?


"Training a model" is just a different name for "optimization".

The typical logistics optimization system is "smart" because it's optimizing exactly what it is supposed to. LLMs here would not be "smart" as they are optimizing toward a different target (human-like production of language-like text), and using them for things they're not specifically trained to do is indeed stupid.


Please explain how you made this connection.


The word "infatuation" says enough, especially if we're talking (young) adults, and not the kind of puppy love we expect from teenagers. If you're an adult and you feel infatuated, you might mistake that for love, then check if you're dealing with a narcissist or other cluster B disordered person, and also see if you yourself are an empath or highly empathetic.


> and also see if you yourself are an empath

How do I see this? Is there a test or something like that?


I found the YouTube videos from Dr. Abdul Saad of Vital Mind Coaching very insightful.


Desktop Dungeons blows my mind because it is so incredibly well balanced and I have no idea how it can be. It's a puzzly kind of "coffee break roguelike." It marries an emphasis on the puzzly aspect of gameplay (e.g. enemies don't move) to the typical rogue-like fashion of explosive combinatoric possibility in a perfect way that just doesn't make sense. The result is that you'll often find yourself looking at impossible-seeming situations and still finding a way through. I've not had as much relief-ecstasy with any other game, including adrenaline-trembling hands, despite -- or maybe because of -- the fact that it's entirely turn-based.


Thanks for posting this. It's better than I remember.


Refusing a good faith debate with an audience of millions of people is silly. Insisting that a good faith debate has to be done in real time is silly. The only salvation for this rotten society is a return to literacy.


> Insisting that a good faith debate has to be done in real time is silly.

A live show has less risk of sections being edited. Why is it silly?


Because it still doesn't provide enough time to verify or refute any claims being made, especially when one side's strategy is to make a barrage of claims to trip up the other for as long as possible. If most people saw the use of that tactic as a clear sign of weak arguments and question dodging, it wouldn't be such a silly thing, because it would serve to expose that. But it turns out that kind of thing is a form of entertainment for many people, and they like to watch their guy "win" against the other. That's a problem on both sides, but objectively a far more dangerous problem with the topics the right decides to utilize the tactic with.


I had completely misunderstood your point and agree with you.


I'm not OP so I can only say I think that was the point they were trying to make.


The legal system is a game that is played for profit. Being able to sue anyone for anything is advantageous for the big dogs. So it's imperative that as many laws as possible cover as much seemingly innocuous human conduct as possible with the highest stakes possible. Unrestricted expansion of intellectual property, the ability to lay claim to arbitrary regions of the ideosphere, makes perfect sense.


> Being able to sue anyone for anything is advantageous for the big dogs

Unless they can justify very high damages, they are predisposed to settling out of court. When you hire top lawyers, or have a massive legal department, you're paying a lot of money. Going to court is at best a gamble unless you have an obvious and solid case.

What this means is they use threats of going to court wrapped up in legalese in the hopes of getting their way out of court.

Unfortunately, the degree to which various districts earn reputations around being pro or anti patent means they're also advantaged in "shopping around", so to speak, to get any case they bring moved to a favorable court. This is the biggest thing that they can do that your average "small dogs" have a harder time with.

The worst of it has changed in the last year:

https://news.bloomberglaw.com/ip-law/patent-plaintiffs-scram...

but it'd be nice to see ways to make it more difficult to game the system by "judge shopping".


That's exactly right. You've been successfully filtered.


I can't believe Microsoft would do this, *huff* *puff*. What an outrage!


The article tries really hard to avoid the thing everyone outside this field seems to know: Concurrent with this 15-year golden age, UX has become universally terrible.


I really wonder how they made something like the Windows 2000 interface in the '90s. Compared to that, today's interfaces are garbage. And this is especially true for the web or anything that is even remotely in contact with JavaScript.

Were UX/UI people and researchers in the '90s just better? More competent? More careful in their research? Did the objectives change so much, from usability to selling clicks?


My take is that they had more limited tooling and resources, so they designed a set of controls, made them visually distinct, and used them almost everywhere. It was easy to make radio buttons that looked like radio buttons and worked like radio buttons. It was substantially more difficult to do anything else.

Also, the default interface wasn’t composited, so nothing was translucent, so no one layered their controls and content. Sure, it looks kind of pretty in Material Design when a round button with nice antialiased edges sits on top of the content, but it’s terrible UX. In Win2K, if you wanted to do this, you either used “layered” windows (which were so amazingly slow that no one wanted to use them) or you had to go outside the Win32 library entirely and render the whole mess yourself. So designers mostly didn’t do this.


No. People were not just "better" and I also think that the core motivation was and is making money.

However, how companies earn money changed. In the 80s and 90s computers were marketed at professionals working in offices. Software was sold shrink-wrapped on disks. It was very costly to change software later (send new disks?), so you'd better have tested it really, really carefully. The promise was often to make experts more efficient at their tasks: software was customizable, had macro recorders and had a familiar interface of buttons, menus and docking sub-windows. Computers were used with keyboards and (finally!) mice: two very efficient input devices.

People (including me) are nostalgic for that time, and that makes memory very selective: we probably remember the best software created by large companies with great teams. We also remember software created specifically for professionals working with computers; many HN readers are those people. And, last but not least, there were a lot of terrible UIs too that we gladly forgot, and for every fondly remembered non-standard, fun UI (Bryce?) there are sooooo many terrible ones.

A lot of what irks people can be explained by shifts in how software got made, for which audience and for which devices: having many small teams working independently makes a coherent vision for a product more difficult (but has other advantages); user customization became less important, since products were more targeted at non-experts, and it can also mess with your automated testing; moving to the web as the main way to deliver software meant that some types of interactions (drag and drop, using large amounts of data directly) were harder to create than others; on the web, advertisement and subscription models became popular, leading to very different ways to sell software in contrast to the former "buy the bi-yearly upgrade in a big box"; and the web was more OS-independent, so the old one-system, one-UI-standard world did not apply anymore.

So, a lot of what "got worse" can be explained by changes in the ecosystem that software creation and software use happen in, and by the position of the people who think it "got worse".

There are many things that I find worse now than in the 90s, but this perspective can help to see whether software got worse just for people similar to me, and it can also help to find ways to make sustainable changes that fit the ecosystem that exists today.


"Products targeted to non-experts" bears emphasizing: prior to the early 2000s, "using computers" (for any purpose) was a very niche activity, not a mainstream thing that everyone did all the time every day.

Either you were a highly trained professional using a computer for serious work, or else you were a dedicated hobbyist. Either way, you expected to put in significant time to learn and understand how this magically complex machine worked before you could actually accomplish anything. "Normal people" without strong motivation to study the computer simply did not use computers much during that era.

Fast forward to the smartphone era, and "using computers" has become a casual everyday activity for normal people. Companies are now incentivized to produce software for a mass audience with UIs that require as close to zero thought, study, or technical skill as possible.

All the computer nerds were shocked when Google came out with just a single bare text input as its primary UI. Surely we needed the Baroque masterpiece that was AltaVista's UI to ever gain useful work from a search engine; there are so many parameters the user might wish to vary! But no; as it turns out, the new Eternal September mass computer user audience strongly prefers slightly less control in favor of less time spent thinking.


> All the computer nerds were shocked when Google came out with just a single bare text input as its primary UI.

Yes, if by "shocked" you meant "delighted". Not just that it was a single text input, but that it could also do the desired task better.


I think you're off by a decade there, in the early 2000s computers at home and in any kind of white collar (and many blue collar) professions were normal and common.


I agree, good things still exist when it is B2B or designed only for highly trained or professional users. But when we need to design things for the 80-IQ crowd, everything suffers. The UI/UX equivalent of RGB lighting and diamond studs, so to speak.


My point was that "good" is good only in connection to an ecosystem of tech, values, practices and the people judging what is "good". I personally do not think any group of users "drags the field down".


Yes it does. A certain group of people definitely drags the entire field down. It’s not a nice thing to think about but it’s a fact and the reality IMO. You can continue to lie to yourself but eventually you realize that you need to dumb things down for the below 2 std deviation folks or you won’t get them as users.


> But when we need to design things for the 80-IQ crowd, everything suffers.

I really think that the only way to adequately serve both demographics is to have two different UIs.


It was standard practice back then to sit non technical and (separately) expert users in front of a product, film their reactions, and then adjust the product design until most of the swearing stopped.

Gnome 2 famously did this. Gnome 3 famously threw out the findings, which is why Linux Mint exists, and how KDE (eventually) caught up on usability.


Nah. KDE 3 was better than Gnome 2 in the first place. KDE 5 has almost recovered to that level.


GNOME is an interesting case, because you can't blame it (directly) on managers pushing dark patterns to maximize conversions. Is it all just trickle-down of user-hostile design philosophy from the commercial web?


Same as the reason Visual C++ was so good: They made all the developers use it.


Except for WinDev; that is why we keep getting such bad tooling to deal with COM/WinRT, although the concepts behind it are great.


Can you go into some more detail as to how "UX has become universally terrible"? I'm a UX person, so I'm always intrigued when I hear people say that UX has become terrible.


Not the author of the above post, but happy to answer:

- Essential functionality shoved into some sub-hamburger-menu because it "didn't fit the design".

- Low information density

- Pointless cruft like animated menus and hero images + resource intensive pseudo-minimalism, where the app/page/whatever, despite the low information density, somehow still manages to load ungodly amounts of data, or eats tons of resources, or both.

- The latter point contributing to the situation where we have apps that run worse on 2022 hardware than apps in the early 90s did on a Pentium I. I know that design isn't solely to blame for this, but it certainly played a part.

- Circumventing OS or browser default functionality. Example: Webpages that hijack the onscroll event. I use my mousewheel to scroll, not to advance through whatever the designer thought was a must-see presentation about their company's "values".

- Smooth UX breaking down the instant the user leaves the happy path. Trying to set up an account? Easy. Trying to change the auth method to MFA? A hellride.

- Modals. Modals everywhere

- Everything trying to look like a smartphone app, no matter the viewing device. I have a high-resolution screen and a high-precision pointing device in front of me. Why are half the webpages and many apps presenting buttons the size of Texas?

- Super smart designs causing the page layout to change after it's loaded. Nothing more fun than to accidentally click the wrong thing and then having to reload the previous page because the layout changed under my thumb.

- Next to zero configurability.

Apps are tools. Webpages are tools. I am not starting a program to look at its amazing design, the same as I don't pick up a hammer to marvel at the color choice of the handle. I pick up a hammer to hammer nails. If I get the impression that the process of picking the handle color got more attention than the process of making a good, sturdy, serviceable and reliable hammer, then I will not use that hammer.


Also, everything has to have an account now. I just got fined $8 by a movie theater for buying a ticket without an account.

Of course, with the rise of microservices, everything that requires an account is also unreliable.

Also, there's the dark pattern of returning incorrect results during partial outages, so even when stuff is "working", it's mostly gaslighting the end user. This was pioneered by Netflix's frontend team, but it's seeped into all sorts of inappropriate things. A surefire sign of this is opening up an online-only app, and having it report stale data until it updates. My car does this. I don't care what its charge level was sixteen hours ago (typically displayed by my phone for 10-60 seconds), or four days ago (from my watch).


Netflix has done far more damage to the software industry than any other large modern company. The whole microservices nonsense and the myriad of complicated tools built to solve their own custom, complicated situations, gosh, thankfully the company is dying out now.


It's even older than that: Amazon began doing this in the 90s.

Their initial idea was to synchronize warehouse inventory with the online store in realtime, so that users would never buy anything out of stock. That proved logistically difficult and expensive, so they decided that the frontend would merely checkpoint inventory levels at intervals. When someone inevitably ordered something that was no longer in stock, they simply sent a robo-apology note and refunded the order.

Mitigating the hit to customer satisfaction was deemed cheaper than the very expensive proposition of synchronizing distributed warehouse inventory levels in realtime over unreliable networks.
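Roughly, in code, the pattern looks something like this minimal sketch (names and numbers are purely illustrative, assuming a periodic snapshot plus a later reconcile pass, not Amazon's actual system):

    import time

    class SnapshotStore:
        """Sell against a periodically refreshed inventory snapshot instead of
        querying the warehouse in real time; reconcile oversells afterwards."""

        def __init__(self, warehouse: dict[str, int], refresh_secs: float = 300.0):
            self.warehouse = warehouse           # authoritative stock levels
            self.refresh_secs = refresh_secs
            self.snapshot = dict(warehouse)      # what the storefront believes
            self.last_refresh = time.monotonic()
            self.pending: list[str] = []         # SKUs sold since last reconcile

        def _maybe_refresh(self) -> None:
            # Checkpoint inventory at intervals rather than on every request.
            if time.monotonic() - self.last_refresh >= self.refresh_secs:
                self.snapshot = dict(self.warehouse)
                self.last_refresh = time.monotonic()

        def place_order(self, sku: str) -> bool:
            # Accept the order if the (possibly stale) snapshot says it's in stock.
            self._maybe_refresh()
            if self.snapshot.get(sku, 0) > 0:
                self.snapshot[sku] -= 1
                self.pending.append(sku)
                return True
            return False

        def reconcile(self) -> list[str]:
            # Apply pending orders against real stock; anything that can't be
            # fulfilled gets a robo-apology and a refund instead of a shipment.
            refunds = []
            for sku in self.pending:
                if self.warehouse.get(sku, 0) > 0:
                    self.warehouse[sku] -= 1
                else:
                    refunds.append(sku)
            self.pending.clear()
            return refunds

The business call is visible in reconcile(): eating the occasional refund is cheaper than keeping every storefront in lockstep with every warehouse over an unreliable network.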


> Apps are tools. Webpages are tools...

I strongly believe that's the main problem. Today UX thinks of computers as Assistants. It is Clippy all over again.

That is why tech is so excited about Amazon Echo, ChatGPT, etc. It's the ultimate assistant.

Nobody wants to teach their users anymore, it's supposed to work out of the box. Easy, simple. There should be one button or one way of doing something (the dreaded User Story). Otherwise the user will go elsewhere. So we get these one trick apps.

In the 90s the user was seen as an intermediate user. Today it's all about onboarding.


I'll take it a step further and say most "UX" designers think of everything as an Experience. That's fine if I want to watch a movie but not if I want to do a thing.


Add to list:

- Desperately trying to make everything from PCs to cars to refrigerators to space capsules look and behave like a mobile phone for no reason on (or off) God's green Earth except "Woah, trendy."


> - Everything trying to look like a smartphone app, no matter the viewing device. I have a high-resolution screen and a high-precision pointing device in front of me. Why are half the webpages and many apps presenting buttons the size of Texas?

You're making a very broad assumption here that everyone is like you. The fact is, they aren't. Big click targets are important for accessibility (if you want to read more, there's a pretty good article on the topic here: https://ishadeed.com/article/clickable-area/)


> Big click targets are important for accessibility

Apparently not, because we went for decades without them, and no one complained about bad UX from buttons that were too small to click. Which could have something to do with the aforementioned high-precision pointing device, which, by the way, can be configured to the motor requirements of the individual user.

However, a lot of people complain about UX having gone to hell as of right now. So I'd say it's a pretty safe bet that things, as a whole, didn't go in the right direction.

Besides, a requirement on a PHONE is not an excuse to do the same thing on a DESKTOP. My desktop PC isn't a phone, and I value information density more than click-area. If an interface ignores these facts, then it is a bad UX for me.


> Big click targets are important for accessibility

So that's why Microsoft made the calc.exe fill the whole screen. /s

This does not explain, however, why the scrollbars are so small. Are they not click targets anymore?


> why the scrollbars are so small.

Not to mention essentially nonexistent window borders and putting active controls in the title bar. Have people forgotten that those are click targets, too? Sometimes I want to move the window or resize it, and Windows makes that hard.


This article, like seemingly every design-focused article in the last decade, parrots Fitts' Law but doesn't run any numbers:

> An important law to be followed in UX design. In simple words, the larger and closer the touch or click target is, the less time it will require the user to interact with it.

...and mistakenly concludes that, if you just make buttons bigger and add more whitespace everywhere, you get widgets that are easier to click.

If you try to run the numbers on "facelifted" modern interfaces vs. their older counterparts, you'll find that many of them actually fare worse even in terms of Fitts' model.

E.g. if you have three equally-sized widgets side by side -- three buttons, for instance -- simply making them wider by some proportion of the initial width increases the difficulty of getting from the center of the leftmost widget to the center of the rightmost widget, because the distance (in the numerator) increases by a higher factor than the width (in the denominator). In practice, increasing widget sizes increases the index of difficulty (ID) in basically every UI that has more than two widgets laid out along a given direction, because it causes the distance to targets to grow by more than the widget size.

(Edit: this is commonly forgotten because "literature" drones about Fitts' conclusion without explaining the context in which it was determined: repetitive motion along a single direction between two already widely-spaced physical items. Making the items bigger resulted in lower difficulty because, unlike in a modern fluid UI, where widgets are placed at paddings that are a fraction of their physical size, it did not cause their centers to drift further apart. Whereas in practical UI cases, increased paddings and widget sizes often result in the average distance to widget centers increasing by way more than (half) the widget width, so the ID actually grows.)
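To put rough numbers on it, here's a minimal sketch using the Shannon formulation ID = log2(D/W + 1), with made-up pixel values for a three-button row before and after a "facelift" in which the gaps grow faster than the buttons (the specific values are hypothetical, purely for illustration):

    import math

    def fitts_id(distance: float, width: float) -> float:
        # Shannon formulation of the index of difficulty: ID = log2(D/W + 1)
        return math.log2(distance / width + 1)

    def row_id(button_width: float, gap: float, n_buttons: int = 3) -> float:
        # ID for moving from the center of the leftmost button to the center of
        # the rightmost one, in a row of equally sized, equally spaced buttons.
        center_distance = (n_buttons - 1) * (button_width + gap)
        return fitts_id(center_distance, button_width)

    before = row_id(button_width=80, gap=8)    # "classic" layout: ~1.68 bits
    after = row_id(button_width=96, gap=32)    # buttons 20% wider, gaps 4x: ~1.87 bits
    print(f"{before:.2f} -> {after:.2f} bits")

The widgets got bigger, yet the ID went up, because the center-to-center distance grew by more than the width did.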

The example shown in the article just so happens to fare better because it makes a widget taller by about 10%, while reducing the distance between the only two widgets shown on screen about three times. IRL padding between widgets has steadily grown, so while the numbers line up, this isn't a very representative example -- in fact, many designers would probably object that the design on the right is also bad because there's not enough space between the text field and the button.

But even if we take the design example at face value, generalizing from it is a really bad idea. Because the ID is logarithmically-derived, if you were to take the "narrow" button on the left and just bring it 16px away from the text field, the difference would be minute -- no more than 5% (assuming strictly vertical motion between the midline of the text field and the midline of the button; in practice it's probably way lower than that, since LTR users tend to click towards the left of the field; although FWIW I bet in practice moving the two widgets close together has barely any impact at all, as the text field is likely auto-focused, so motion will commonly happen between wherever the cursor happens to be on the screen and the center of the button).

That's a good trade-off if you only have two widgets on the screen, as in a form -- you don't lose anything other than some whitespace by making a widget bigger. If you have more elements to show, though, that's a very bad trade-off to make. An ID difference of 5% is barely noticeable (IIRC it's just above the "noise floor" in a standard Fitts experiment), which is more than offset by the additional difficulty introduced by scrolling (because you can fit fewer elements on the screen).


> - Super smart designs causing the page layout to change after it's loaded.

This seems to happen far too much. Nowadays it usually causes me to give up on a site and go somewhere else. Feels like eliminating stuff like this should be low-hanging fruit UX-wise (albeit not especially exciting, I guess)


> - Circumventing OS or browser default functionality. Example:

Google's search bar breaks macOS keybindings, e.g. option-arrow doesn't work as expected.


What web browser are you using? Option arrow works right for DDG's location bar on safari + firefox, and duckduckgo.com and google.com's search bar's option arrow works properly under safari.


If you are in the pull-down of search suggestions and the search field shows (copies) the suggestion you are currently on, then when you hit the arrow key, the search field jumps back to your original search string. It should instead let you edit the selected suggestion.


This was somewhat recent and it’s incredibly irritating.


It seems like in your experience, either UX did its job and came up with all the wrong solutions, or UX wasn't involved and all the gripes you listed would have been alleviated by true UX professionals.


In the 90s people worked really hard to draft up universal standards of what makes UI usable and intuitive. Most of it tends to be ignored nowadays. Try teaching an old person how to use a smart phone some time. Just some bullet points:

* Apps feel generally disorganized. Buttons are unlabeled. Many things are hidden somewhere between layers of unlabeled buttons. In the past you could count on the menu bar giving you quick access to anything.

* Lack of functionality / composability. Avoidance of the file system. Tunnel menus that you have to take one step at a time.

* Every program has UI that works and looks differently, made worse because even the same programs redesign their own UI periodically.

* Flat design. Lack of 'affordances'. Buttons don't look like buttons, draggable things don't look like they're draggable. E.g. scroll bars in the past had this serration to suggest interaction. This leads to hidden features and surprises, where things that seem like static images suddenly hide important functionality.

* Lack of configurability. Configurable toolbars, rearrangeable view panes, tabs, etc. And unnecessary limits even when you can configure things. Like, Firefox only has a list of preset zoom levels; to get finer zoom levels you have to go into about:config. Or the fact that it limits the size of tabs to a rather large minimum, for no reason at all.

* Lack of consistent (or even discoverable) keyboard navigation. Rebinding shortcuts is not a thing anyone seems to care about anymore.

* Readability, use of space. This applies more to the web, but grey text, ultra-narrow columns, inconsistent scaling.


> Buttons are unlabeled. Many things are hidden somewhere between layers of unlabeled buttons.

My car hides map functionality in buttons that are simply not present until you interact with the map in some magic way that triggers a heuristic that you want the buttons to appear. Slowly.

When I want to see chargers drawn on the map, I don’t want to move the map. So why TF do I have to move the map to convince the software to draw the button so I can tap it? While driving or perhaps waiting at a red light.


Every time I watch people use modern in-car navigation systems, I think "wouldn't it be a lot easier and safer to just use the phone instead?"


> Try teaching an old person how to use a smart phone some time

Also, try to teach a young person to use an old program, or an older version of an office program


> Also, try to teach a young person to use an old program

Funny enough, the few times I had to do that, it turned out to be easy.

Why? Because the software may not look shiny, but it's obvious and discoverable. There is the menu-bar. It says "File". The assumption that "File" is the right place to look for the button that saves the work to a file is one that comes pretty naturally. And of course the keybind is written right there next to the menu item.

Ugly? Maybe. But it's obvious and gets the job done.

Now let's look at some "modern" software, and the button to save is... yeah, anyone's guess, really, where it is.

Might be in some hamburger or sub-hamburger.

Might be in some animated menu where I have to scroll past the various cloud-store options to get to storing to disk, which is a common dark pattern, because cloud solutions are something I can sell, while the user's disk isn't.

There might be some gesture-based menu, even in desktop apps.

It may be a button in the interface, but which one is anyone's guess ... because apparently the floppy disk icon is not "modern" enough, so there may be any combination of boxes, arrows, arrows in boxes, or whatever happened to be the ultimate wisdom in save-button design at the time.

I have seen apps where it was in the "Share" menu, right next to whatsapp and facebook integration, because these are vitally important options for all apps apparently.


Having an actual Save button or shortcut is a luxury nowadays. I will admit autosaving is convenient. But there are still places where network connectivity is spotty and one wants to actually confirm that the thing is saved. You're lucky to have a little icon color change or text that says the stuff on the screen is saved. Even when such text exists, it's often off the visible screen. Worse, sometimes a form button will not work and the error message isn't visible until you go hunting for it at the bottom or top of the page, outside the visible area.

My other usability pet peeve: traversing a big number of things by paging. The UI will not give you the total number of pages anymore. You can increment 2 pages at a time. Sometimes you can change the URL to try later pages, but not always. Sometimes there will be a button for the last (final) page, but when you click it, the page doesn't exist. To make this even worse, most times this is loaded via JavaScript, so if you're on the equivalent of page 50, the next time you come back you have to keep loading more until you're on 50 again. There is no way to bookmark state or go there directly. I am assuming this is a JavaScript/JSON thing that somehow became a pattern, like getting a total count is now impossible or something.
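For contrast, here's a tiny sketch of what bookmarkable paging with a real total count could look like (the function and field names are hypothetical, not any particular site's API):

    def paginate(items: list, page: int, per_page: int = 20) -> dict:
        # Return one page of results plus enough metadata to number every page,
        # jump straight to the last one, and bookmark the current one as ?page=N.
        total = len(items)
        total_pages = max(1, -(-total // per_page))   # ceiling division
        page = min(max(1, page), total_pages)         # clamp out-of-range pages
        start = (page - 1) * per_page
        return {
            "items": items[start:start + per_page],
            "page": page,
            "total_pages": total_pages,   # the number many UIs no longer show
            "total_items": total,
        }

    # A bookmarkable URL like /things?page=50 maps straight to one call:
    result = paginate(list(range(1234)), page=50)
    print(result["page"], "of", result["total_pages"])   # 50 of 62

None of this is hard; the UIs just stopped exposing it.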


My kindergartener definitely picked up conventional linux desktops faster than iPadOS.

(It's whatever Manjaro defaults to. It looks like xfce, but I'm pretty sure it's KDE. The point being that I don't know because it's not configured terribly out of the box like modern gnome.)

The minecraft launcher does regularly bring him to literal tears though. :-(


- poor text contrast

- content areas that truncate half a sentence

- expanded content that reveals in a different part of the screen from the place where the teaser lives

- cookie notices - when the problem is not cookies but data exfiltration

- opt-in by default instead of opt out

- accept data exfiltration or don’t use our site

- images with rollovers that hide the image when rolled over

- text content that changes when you roll over

- headers that change size as you scroll so content jumps unpredictably

- sticky headers

- sticky footers

- footers with useful content that you can’t read because there’s also an infinite scroller above it

- hamburgers on desktop sites

- landing pages that are 70% negative space

- poster/hero images so large there’s no actual content above the fold

- newsletter subscription modal popovers

- any kind of modal popover that appears mid-way through reading the content

- images with CSS that causes them to shrink when you use screen zoom (amazon product images)

- mistimed animations

- logins with the username and password on separate pages

- password fields you can’t paste into

- “repeat your email” fields

- carousels with differently sized panels so the content below shuffles up and down whilst you’re reading it

- auto-play videos

- ads every fucking where

- social share buttons


- Daily essentials / safety-critical stuff that requires an account and/or doesn't work reliably anymore (pay for parking, pay to charge car, call 911, etc...)


> - content areas that truncate half a sentence

Haha, this seems to be a "design guideline" in both iOS and Android.


I wouldn't say UX has become universally terrible, but the average experience has definitely gotten worse over the last couple years:

- The introduction of several banners/popups/modals, either to shoddily comply with tracking consent laws or to drive up engagement (newsletter subscription, notifications, check out new feature X)

- The display of several, sometimes even nested loading indicators all over the place

- The adoption of "trendy" UI patterns that don't necessarily fit (e.g. "stories", a few years ago)

- An obsession with reducing information density and making everything very sparse

- Rejection of color as a redundant signifier for UI elements, particularly icons

- An overall extremely hostile attitude towards the user, with dark patterns and disregard for preferences and privacy


It has become terrible. My best guess is that there is little else to do, and UX professionals keep changing things to justify their jobs. Nothing personal, maybe your job requires it, but we don't need a brand-new UX every 3-4 years on every product. Incremental improvements, rounded edges, some smoother animation and better accessibility, sure. But not changing everything, with a gazillion new buttons and features.

We need simplicity and stability - the latter being the most important for user interfaces.


Let me give a concrete example: I have had an iPad Pro for years now. A primary use case for me is to watch a lecture and take notes, or read a book and take notes. The way this should obviously work is that I have my iPad in portrait orientation, split the screen vertically so that I have the video playing or the book open on the top half and notes on the bottom half. This was solved perfectly by the original Macintosh. It is still not solved by iOS (even though I've had tickets open in Radar for years).

For some reason I cannot comprehend, I can split the screen horizontally, which results in two thin strips side-by-side that are useless for anything that I can think of. It is not possible to split it vertically so that I would have two reasonable aspect ratio apps on top of each other. Now they added a convoluted "multitasking" mechanism that kind of lets me solve this for reading, where I can have two 3/4 sized apps overlapping each other, but I still cannot just split the screen or have freely resizable apps (which, again, is a problem solved already in the original Macintosh).

This type of terrible UX has become endemic, and is even worse in non-Apple products. The root cause I believe is "authoritarian simplicity", where some UX designer or team thinks they know the best and force a single, over-simplified, over-specified solution on everyone.


The rise of targeted advertising is the main culprit of bad UX.

Everywhere there's dark patterns, performance overhead, mandatory updates degrading user experience, unnecessary JS and walled gardens breaking accessibility/compatibility, online-only etc.


This mostly matches up with the "business > users" point that the author was making. Businesses have decided that degrading the end experience for users through dark patterns, mandatory updates, walled gardens, and online-only services is acceptable because users either have to use their product (electric companies, as an example), or because they offer a genuinely compelling product that people want to use despite the terrible experience (another point the author made).

For performance overhead and unnecessary JS, that sounds like poor development practices that happen to also degrade the end experience for users, although some of that unnecessary JS could be some sort of tracking implementation that was included through business requirements.


For example, this site went viral because it resonated with so many people:

https://how-i-experience-web-today.com/


real web sites are somehow still worse than this parody


omg, the cherry on top was hijacking the back button


"Lets optimize for mobile" with the implications that come with it, e.g. extra snooping and dark patterns, while simultaneously leading to a shit desktop experience.

the need for "new", when in most cases something vaguely similar to WinXP or OSX is fine for 90% of things.


My favourite website that was designed "mobile first" is the English Cities Fund site[1]. Why? Because the site is supposed to be the primary site for a Government-supported fund distributing millions of £££ to English towns and cities to help them regenerate (or "level up" in today's political Jargonese). The primary audience for such a site should - one assumes - be people who want to gain access to that funding: town planners, local authority CFO staff, etc[2]. People who are most likely to access the site while at work, viewing the site on their out-of-date workstations or non-touchscreen laptops.

Maybe some out-of-work UXRs would like to offer views on how to fix the site?

[1] - https://englishcitiesfund.co.uk/ - though when I just revisited, it appears to be broken for both mobile and desktop.

[2] - my (very personal) view is that the site was in fact designed to showcase work done, so it could be referenced in the Annual Reports of the Fund's partner companies, allowing them to check whatever corporate checkboxes they needed to check that year during their AGMs. Though people tell me that I am too cynical so I could be wrong.


>the need for "new", when in most cases something vaguely similar to WinXP or OSX is fine for 90% of things

When you say that being vaguely similar to WinXP or OSX is fine for 90% of programs, is that in terms of looks, journey path, or something else?


I’d guess it is in terms of user experience, which is hard to achieve if you are optimizing for looks or journey path.

The article mentions this in the Business > User section, saying (removing euphemism and double negatives) the prevailing wisdom is that you will probably be fired unless you actively worsen the product, and provide metrics to explain how you did it.

I’d think that companies with customer bases that don’t actively hate them are less likely to have big layoff rounds, but I am not a UX expert.


AFAIC all of those have degraded.


In my experience, the terrible experience is pretty often because of bloated features (and the bad UI that comes with them) that are shipped quickly in order to be first to market and capitalize on the novelty of some new technology. Move fast and break things. Then hardware improved, so bloated UI etc. got a pass, because no one needed to optimize anything unless they were developing for the third world, with outdated hardware and poor internet speeds.


lots have been pointed out

- infantilization of the interface, treating everyone 'like a 5-year-old'

- removal of functionality, homogeneous Bootstrap-like interfaces with rounded corners everywhere, to the point where you no longer recognize which system you're using

- "mobilification" of the desktop

- no more rational organization: no hierarchical menus, only the top 5 buttons survive


1 px borders on 4K screens. Bad contrast which you cannot change because colors are not configurable. Needing to scale my monitor (Windows) because I cannot use custom fonts. Title bar cluttered with other GUI elements. Ribbon using a good amount of vertical space. Unusable scrollbars - too small, no scroll buttons at the ends - good luck scrolling through a 500-page PDF. Settings reset with updates. Automatic updates which change the way the program works. And so on and so forth.


Not showing the interactable parts of the UI. Sometimes buttons will have a large colored area, but only the text at the center is clickable. More often it's the opposite: a chunk of text with no indication of where the clickable/tappable area begins and ends. Even worse is when there are multiple options in a row with no way to see the boundaries between them, or whether there is any dead space. Fitts' Law has been a thing since before these designers were born!

The worst example I encounter on a daily basis is Twitter. Each tweet has at least 8 sections that do different things when tapped. I never know quite where to long-press with my thumb over the tiny timestamp to open it in a new tab, or where the boundary is between the single line of text and the username.


I half agree: UX seems to be optimized for the new user and common workflows. This is great, because every user starts as a new user, so it makes onboarding easier. It does not seem to be optimized for experts, or for novel ways of completing tasks that were not explicitly thought of.

Consider the power of piping a few unix commands together, this is an advanced task that enables power users, but is nearly impossible for a new user.

tldr; For the common cases and beginners UX has improved, but dropped off for users that go beyond that.


Generally speaking, in B2B apps I think UX has improved quite a bit. End-user workflows have improved in both usability and beauty for the masses (tens to hundreds of millions).


In some B2B apps, a good UX can be the primary marketed feature. B2B app space is wonderful when it comes to UX engineering... You still have to think about some edges, but depending on your customer you will probably find you can skip a lot of painful items due to their unique organizational constraints.

We sell software to financial institutions and our mission is to provide low-skill hourly hires the ability to reliably open complex accounts. Clearly, focusing on the ability of your target audience is really important if you want to go to this kind of an extreme. For me, this is what "UXR" is - Our team sitting down and asking "how does it feel to use that workflow?" and "If I were walking out of HS graduation, could I understand what I am looking at?".

I don't think this is really complicated stuff at the end of the day. If you let the customers harass the developers just a tiny bit, you might find high quality UXR occurs automagically.


> End-user workflows have improved in both usability and beauty for the masses (tens to hundreds of millions)

Can you cite a single example? Web browsers, mail, event ticket purchases, window managers, music playback, file management, maps, televisions, kitchen appliances, paying for parking, paying for gas / EV charge, credit card checkouts, and banks have all enshittened in the last 15 years.

That's just stuff that actively wasted my time this week.

I can't think of any counter examples.


Yes, although the important question here is how much of this can be attributed to UXR rather than the roles it works alongside. It's one the article struggles to answer, and I expect it's why most businesses are trimming UXR teams as much as they can.

