I'll echo one of the points in the article: "Google is trying to be smart".
This is the source of many people's frustration, and the source of forced synonyms. A tool that adapts to humans as they use it and tries to be "smart" prevents us from getting more skilled at using it. It becomes unpredictable, and it introduces significant friction each time it does something dumb.
Even if the tool is correct 90% of the time, it is wrong 100% of the time on an emotional/UX level. The successes are invisible in aggregate, but each mistake sticks out like a sore thumb. My guess as to why: the modern understanding of our brains (as I, a layman, understand it) is that they continuously try to predict what's going to happen next in their environment. When all predictions are correct it feels good, and there's no tension. A tool that adapts and changes makes our brains' predictions turn out wrong, and our brains punish us with tension and attention each time the tool does not do what we want, because they failed to predict the desired behavior.
Previous versions of Google felt so nice precisely because our brains, or at least those of hackers, could adapt to its various tricks and shortcuts.
I wish I could upvote this comment a second time. It resonates strongly with me, and in general it is why I think the entire “ML-driven” approach to contemporary technologies is misguided.
Historically, one of the crucial aspects of our tools is that they are deterministic. A hammer does only one (or arguably two) things, and those functions are completely static and will never change. This not only allows users to become proficient with the hammer much faster, but it also allows them to use the hammer in flexible and creative capacities: when you know with absolute certainty what something does, it’s significantly easier to “deduce” what might happen when you apply it in different circumstances.
By contrast, all “smart” technologies move away from determinism; they all admit of some amount of indeterminacy, even if their basic purpose is fixed. There’s a happy medium where the indeterminacy is restrained enough that a tool’s cleverness really does save us time, but too often the pendulum swings too far and you wind up actively fighting your tools to do what you set out to do, wiping out all the microscopic time savings those “smart” features net you.
I’d much prefer my tools and products to be dumb, fast, consistent, and AI-free.