Not to be antagonistic, but are we paid to learn stuff or to build stuff? I think it's the latter. If we have to learn something, it's only so that we can build something in the end.
I am absolutely paid by the hour to learn stuff. The things I'm learning are mostly messy business domain bits: how does our internal API work, who wrote it, what were the constraints, which customer requested this feature, how much SLA will we lose if we break it to hotfix this CVE...
Yes the end result is at some small moment in time a thing that was built. But the value prop of the company isn't the software, it's the ability to solve business problems. The software is a means to that end. Understanding the problems is almost the entire job.
> But the value prop of the company isn't the software, it's the ability to solve business problems.
Clearly it's critical to the job, but to take your point to its limits: imagine the business has a problem to solve and you say "I have learned how to solve it but I won't solve it nor help anyone with it." Your employer would not celebrate this, because they don't pay you for the private inner workings of your own mind, they pay you for the effects your mind has on their product. Learning is a means to an end here, not the end itself.
Helpfully, "I won't solve it nor help anyone with it" isn't actually normal either. That's what documentation, mentorship, peer review, and coaching are for. Someone has to actually write all that stuff. If I solved it initially, that someone is me. Now it's got my name on it (right there in the docs, as the author) and anyone else can tap on my shoulder. I'm happy to clarify (and improve that documentation) if something is unclear.
Here, of course, is finally where AI can plausibly enter the picture. It's pretty good at search! So if someone has learned, and understood, and written it down, that documentation can be consumed, surfaced, and learned by a new hire. But if the new hire doesn't actually learn from that, then they can't improve it with their own understanding. That's the danger.
I agree; I have said it before: ChatGPT is like Photoshop or Google at this point. Even if you are using Bing, you are googling it. Even if you are using MS Paint to edit an image, it was photoshopped.
It's the classic problem that code firefighters get all the attention, even though they were also soaking the structure down with gasoline beforehand.
Meanwhile, the people who took the extra effort to tackle particularly painful parallel systems and remove race conditions early in the design process are seen to have achieved nothing.
No, letting misinformation persist is counterproductive because of the illusory truth effect: the more people hear it, the more they think (consciously or not), "there must be something to this if it keeps popping up."
Elon Musk's takeover of X is already a good example of what happens with unlimited free speech and unlimited reach.
Neo-Nazis and white nationalists went from forums with 3-4 replies per thread, 4chan posts, and Telegram channels to now regularly reaching millions of people and getting tens of thousands of likes.
As a Danish person I remember how American media in the 2010s and early 2020s used to shame Denmark for being very right-wing on immigration. The average US immigration politics thread on X is worse than anything I have ever seen in Danish political discussions.