I notice a huge difference between working on large systems with lots of microservices and building small apps or tools for myself. The large-system work is what you describe, but for small apps and tools I resonate with the "automate the coding" crowd.
I've built a few things end to end where I can verify the tool or app does what I want and I haven't seen a single line of the code the LLM wrote. It was a creepy feeling the first time it happened but it's not a workflow I can really use in a lot of my day to day work.
Reactive is on the decline in the Java world post-Loom (virtual threads) and should be nobody's first choice. Writing plain old imperative code is vastly simpler to write / debug / reason about.
I think there is an issue where reactive frameworks are massively overused in languages that have (or had) weak concurrency patterns. This was true for a while in JavaScript (before async/await became universally supported), but it's especially endemic in the Java world, especially the corners of it which refused to use Kotlin.
So yes, in this particular case, most usage of RxJava, Reactive Streams, and particularly Spring Reactor is there because programmers wanted reasonably simple composition of asynchronous operations (particularly API calls) and all the alternatives were worse: threads were too expensive (as Brian Goetz mentions), even though Java has had asynchronous I/O since Java 1.4 and proactor-style scheduling (CompletableFuture) since Java 8. You could do everything with CompletableFutures over asynchronous I/O, but it was just too messy. So a lot of programmers adopted reactive frameworks, since that was the most ergonomic solution available (unless they wanted to introduce Kotlin).
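To make the "composition of asynchronous operations" point concrete, here is a minimal sketch of what chaining CompletableFutures looks like in plain Java. The `fetchUser`/`fetchScore` calls are hypothetical stand-ins for real async API calls, not any actual library:

```java
import java.util.concurrent.CompletableFuture;

public class AsyncCompose {
    // Hypothetical async "API calls" standing in for real non-blocking I/O.
    static CompletableFuture<String> fetchUser(int id) {
        return CompletableFuture.supplyAsync(() -> "user-" + id);
    }

    static CompletableFuture<Integer> fetchScore(String user) {
        return CompletableFuture.supplyAsync(() -> user.length());
    }

    // Sequential composition: fetch a user, then a value derived from it,
    // then combine both results. Each dependency adds another level of
    // thenCompose/thenApply nesting.
    static CompletableFuture<String> report(int id) {
        return fetchUser(id)
                .thenCompose(user -> fetchScore(user)
                        .thenApply(score -> user + " scored " + score));
    }

    public static void main(String[] args) {
        System.out.println(report(7).join()); // user-7 scored 6
    }
}
```

Even this toy case shows the shape of the problem: every step that depends on a previous result pushes the logic one lambda deeper, which is exactly the ergonomics gap that reactive operator chains (and later coroutines) papered over.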
That's why you'd mostly see Mono&lt;T&gt; types if you look at a typical reactive Spring WebFlux project. These Mono/Single monads are essentially CompletableFutures with better ergonomics. That's not to say hot and cold multi-value observables weren't used at all, but in many cases the operator chaining was pretty simple: gathering multiple inputs, mapping, reducing, potentially splitting. Most of this logic is easier to express with normal control flow once you've got proper coroutines.
But that's not all that reactive frameworks can give you. The cases where I'd choose a reactive solution over plain coroutines are few and pretty niche: to be honest, I've only reached for one 3 or 4 times in my career. But they do exist:
1. Some reactive operators map trivially to loops or classic collection operators (map/reduce/filter/flatMap/groupBy/distinct...). But anything time-bound is more complicated to implement as a simple loop, and the result is far less readable. Think of sample or debounce, for instance.
2. Complex operation chains do exist, and implementing them as a reactive pipeline makes the logic far easier to test and reason about. I once had to implement multi-step resource-fetching logic: an index is fetched first, then resources are fetched based on the index and periodically refreshed with added jitter to avoid a thundering-herd effect, plus retries with exponential backoff and predictable update-interval ranges which are NOT affected by the retries (in other words: no, you can't just put a delay in a loop). My first implementation tried to model that with pure coroutines and it was a disaster. My second was RxJava, which was quite decent, and then Kotlin Flow came out, which was a breeze.
3. I'm not sure we should call this "Reactive" (since it's not the classic observable), but hot, cached single values (like StateFlow in Kotlin) are extremely useful for many UI paradigms. I found myself reaching for StateFlow extensively when I was doing Android programming (which I didn't do a lot of!).
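To illustrate point 1 above: debounce is a one-liner operator in any reactive framework, but doing it by hand means juggling a scheduler and cancellation state. Here is a rough plain-Java sketch of a hand-rolled debouncer (a hypothetical `Debouncer` class, not any framework's API):

```java
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.ScheduledFuture;
import java.util.concurrent.TimeUnit;
import java.util.function.Consumer;

// Debounce: emit a value only once no newer value has arrived for
// `quietMillis`. Each incoming value cancels the previously scheduled
// emission and schedules its own.
public class Debouncer<T> {
    private final ScheduledExecutorService scheduler =
            Executors.newSingleThreadScheduledExecutor();
    private final Consumer<T> downstream;
    private final long quietMillis;
    private ScheduledFuture<?> pending;

    public Debouncer(Consumer<T> downstream, long quietMillis) {
        this.downstream = downstream;
        this.quietMillis = quietMillis;
    }

    public synchronized void accept(T value) {
        if (pending != null) {
            pending.cancel(false); // a newer value supersedes the old one
        }
        pending = scheduler.schedule(() -> downstream.accept(value),
                quietMillis, TimeUnit.MILLISECONDS);
    }

    public void shutdown() {
        scheduler.shutdown();
    }
}
```

Feeding it a rapid burst of values should deliver only the last one downstream, after the quiet period elapses. Even this simplified version has to reason about thread safety and cancellation, which is the readability gap the comment above is pointing at.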
In short, I strongly disagree with Brian Goetz that Functional Reactive Programming is transitional. I think he's seeing this issue from a very Java-centric perspective, where probably over 90% of the FRP usage we've seen was indeed transitional, but that's not all FRP was about. FRP will probably lose its status as a serious contender for a general tool for expressing asynchronous I/O logic, and that's fine; it was never designed to be that. But let's keep in mind that there are languages other than Java in the world. Most programming languages nowadays support some concept of coroutines not bound to OS kernel threads, and FRP is still very much alive.
But even this statement is incorrect. FRP frameworks with Observables will remain useful in Java (as they have in other languages that already had coroutines). It's only the use of Observables as _an alternative for coroutines_ that is a transitional technology.
Maybe this is what Brian Goetz meant to say, but this is not what he said.
Innate talent can also be tied to one's sense of identity, which makes failure more overwhelming. (If one does not _really_ try, failure does not feel as crushing.)
Just an expectation that something will be easy (without a strong tie to identity/self-worth) can make failure more painful.
Easy successes can also lead to not developing progress-enabling skills when in a "friendly" environment (e.g., an academically gifted person not learning study skills and discipline before college). When innate skill and casual training are no longer enough to meet expectations, there is not the emotional reserve and external support to develop the meta-skills.
Failure aversion and lack of self-discipline are somewhat independent of "work ethic"; a person terrified of failure can work very hard at easy tasks, or at tasks whose results lack internal or perceived external judgment, in part because doing so feels so much better than not really trying.
Sadly, a "safe" activity can be "ruined" by a well-meaning compliment that introduces expectations to the activity. (Weirdly, indirect compliments seem significantly less problematic; "these decorations look really nice" can feel acceptable even when the person knows one did them, while "you did a really good job on the decorations" can feel crushing by setting a new, higher baseline of expectations and/or introducing self-doubt that the complimenter is just being nice.)
I kind of agree with you. On the one hand, OP is logically correct; on the other, it's very sad and a form of tragedy of the commons. If everyone gave candid feedback, we'd all be better off.
I think this blind spot exists because the pure engineering/logic mindset is such a massive superpower in so many elements of life, people fail to consider that it might not always be the right way to think about the world.
One obvious example where it falls laughably short is in interpersonal relationships. Trying to logic your way out of an emotional conflict just does not work.
> Trying to logic your way out of an emotional conflict just does not work.
It does, there just needs to be a proper model of how humans work to back it up. The usual mistake is using logic to prove why a person is right instead of to work out why the relationship is going wrong.
People who don't use logic to guide their interpersonal interactions cap out in some fairly shallow waters. They are more easily suckered by emotions primed to respond to looks and the present instead of properly aligning the relationship for the long haul. The easiest path to push back against those inbuilt biases is logic - there needs to be some set of principles beyond emotions to use as a guide.
There's also the added issue that binary logic (what most people use when they say "logic") is only slightly less constrained than unary logic and is insufficient for modeling reality. Without uncertainty logic, the wisdom available is highly limited. This allows for every emotional story to be engaged and worked through without declaring it immediately and absolutely false, allowing emotions to inform while not letting them drive the decision-making process.
And that is where nerds make massive missteps, including horribly ridiculous leaps of logic. Because nerds and technical people are as emotion-driven as anyone else. They react to their own feelings of anger, fear, frustration, etc.
But since they think emotions don't matter and can't be talked about, they rationalize all of the above into arguments that sound logical to them and no one else.
All people in the relationship have to be willing to use logic (and understand logic) for it to ever work when dealing with the relationship. That’s rarely the case.
Logic is a technique for detecting inconsistent beliefs. Only one person using it is still helpful, one side being logically alert in a disagreement is going to open up more paths toward controlled deescalation and resolution than two people both fighting while being logically inconsistent.
Even when it is the case, you can logically come to a resolution, but if you don't emotionally feel it, the problem/conflict is not solved and will come up again. In my experience this manifests in non-obvious ways that are far removed from the original problem.
I think the reasonable explanation is that logic works great for simple systems, but once there are more than, say, seven variables involved, nobody can properly reason in real-time anymore. Personal relations, politics, raising a child, finding out what to do with life, selecting a web framework, all involve millions of variables.
Even if some abstract concepts (love, power, friendship) allow a scientifically minded person to consider complex systems as simpler ones, the underlying complexity is still there, and is relevant. Human ecosystems do not adhere to Maxwell's laws.
> One obvious example where it falls laughably short is in interpersonal relationships. Trying to logic your way out of an emotional conflict just does not work
This is awfully glib for something that rings so wrong for me - logic not useful in emotional conflict?? Emotional conflict itself stems from emotion! How could taking a step back and trying to look at things logically not be productive?
In my experience, one of the only things that can safely navigate conflict, whether emotional or otherwise, is logic - your challenge is to actually be disciplined enough to apply it in stressful situations - and/or to be willing to leave the matter unsettled until you’ve had time to cool down and can afford the luxury of looking at things more practically.
I suspect we’re using the same words to mean different things because I can’t imagine not being able to logic your way out of emotional conflict, I can’t imagine any other route being viable apart from logic - I think the root cause of emotional conflict is getting overwhelmed by feelings and neglecting to think.
Using logic with people who think logic falls laughably short anytime they get emotional does indeed not work.
These people view everything through the lens of power - they are amusingly simple creatures who only use logic to acquire power or for an occasional hoot.
When people like that get into positions of power over others - disaster follows.
The controversy over the amyloid hypothesis comes from a Stanford professor faking data[1] and setting the field back decades. The amount of harm this individual caused is hard to overstate. He is also still employed by Stanford.
It's actually pretty easy to overstate the amount of harm caused by that one individual... you're doing it.
There are lots of good reasons to believe in the amyloid hypothesis, and no paper or even line of research is the one bedrock of the hypothesis. It was the foundational bedrock of Alzheimer's research back in the early 1990s (essentially, before Alzheimer's became one of the holy grail quests of modern medicine), after all; well before any of the fraudulent research into Alzheimer's was done.
The main good reason not to believe in amyloid is that every drug targeting amyloid plaques has failed to even slow Alzheimer's, even when it does an impressive job of clearing out plaques--and that is a hell of a good reason to doubt the hypothesis. But no one was going to discover that failure until amyloid blockers read out their phase III clinical trial results, and that didn't really happen until about a decade ago.
I know that it is very important for HN folks to be angry. But as someone who has a parent with this disease, I would like to be certain that the amyloid hypothesis is definitely not correct before we throw it entirely out with the bathwater. These simplified “one researcher caused an entire field to go astray for decades” explanations are much too pat for me to have any confidence in them.
A lot of people should be mad at Marc Tessier-Lavigne, not just HN folks. He lied for personal gain at the expense of scientific progress and millions of patients who suffer.
> These simplified “one researcher caused an entire field to go astray for decades” explanations are much too pat for me to have any confidence in them.
Right, monocausal explanations in general set off my skept-o-sense too; but then my mind went to another example: Andrew Wakefield (except that AW succeeded more at convincing Facebook moms than the scientific establishment, but still harmed society just as much, IMO).
The amyloid hypothesis is absolutely not correct. We know this unequivocally.
Amyloid deposits correlate with Alzheimer's, but they do not cause the symptoms. We know this because we have drugs which (in some patients, not approved for general use) completely clear out amyloids, but have no effect on symptoms or outcomes. We have other very promising medications that do nothing to amyloids. We also have tons of people who had brain autopsies for other reasons and were found to have very high levels of amyloid deposits, but no symptoms of dementia prior to death.
My uncle died of the disease, and I work in neurotech/sleeptech, specifically in slow-wave enhancement which is showing promise in Alzheimer's.
I 100% agree with you that we shouldn't throw the baby out with the bathwater on this one. Data being falsified and the hypothesis being wrong are two different things.
> These simplified “one researcher caused an entire field to go astray for decades” explanations are much too pat for me to have any confidence in them.
Anyone who believes that an entire field and decades of research pivoted entirely around one researcher falsifying data is oversimplifying. The situation was not good, but it's silly to act like it all came down to this one person and that there wasn't anything else the industry was using as its basis for allocating research bets.
Regardless, it is still important not to fall into the fallacy fallacy (just because someone made a bad argument for something does not imply that the conclusion is necessarily false).
In poker, AI solvers tell you what the optimal play is, and it's your job to reverse-engineer the principles behind it. That cuts a lot of the guesswork out, but there's still plenty of hard work left in understanding the why, and ultimately that's where the skill comes in. I wonder if we'll see the same in math.
For comparison, if you run your own hardware and do a memcached KV lookup against a different server on the same rack, p99 times are slightly under 1 ms. Given the guarantees of Cosmos DB, ~10 ms isn't that bad for a p100.
I had a lightbulb moment recently where I had a recruiter ask me if I had any experience building recommendation systems. While I don't use that word on my resume, my resume is full of technologies and projects that point toward recommendation system experience.
The recruiter was tasked to find candidates with a recommendation system background but the only way they know to do that is look for that exact word.