Someone posted a link on HN years ago to a set of Google Docs titled the "Mochary Method", which covers all sorts of management skills just like this. I have it bookmarked, as it's the only set of notes I've seen that talks about this stuff in a very human way that makes sense to me (as a non-manager).
> This is the culture that replaced hacker culture.
Somewhere along the way to "everybody can code," we threw out the values and aesthetics that attracted people in the first place. What began as a rejection of externally imposed values devolved into a mouthpiece for the current powers and principalities.
This is evidenced by the new set of hacker values being almost purely performative when compared against the old set. The tension between money and what you make has been boiled away completely. We lean much more heavily on where someone has worked ("ex-Google") than on their tech chops, which (like management) we have given up on trying to actually evaluate. We routinely devalue craftsmanship because it doesn't bow down to almighty Business Impact.
We sold out the culture, which paved the way for it to be hollowed out by LLMs.
There is a way out: we need to create a culture that values craftsmanship and dignifies work done by developers. We need to talk seriously and plainly about the spiritual and existential damage done by LLMs. We need to stop being complicit in propagating the noxious cloud of inevitability and nihilism that is choking our culture. We need to call out the bullshit and extended psyops ("all software jobs are going away!") that have gone on for the past 2-3 years, and mock them ruthlessly: despite hundreds of billions of dollars, the technology hasn't fully delivered on its promises, and investors are starting to get skeptical.
My productivity is not significantly limited by my ability to generate code, so I see little value in tools which offer to accelerate the process. (I don't use autocomplete, either; I type quickly, and prefer my editor to stay out of the way as much as possible.) I spend far more time reading, discussing, testing, and thinking than I do writing.
The people who rave about AI tools generally laud their facility with the tedious boilerplate involved in typical web-based business applications, but I have spent years steering my career away from such work, and most of what I do is not the sort of thing you can look up on StackOverflow. Perhaps there are things an AI tool could do to help me, and perhaps someday I will be curious enough to try; but for now they seem to solve problems I don't really have, while introducing difficulties I would find annoying.
> the assignments could simply be worth 0% [..] that the proctored, for-credit exams would demand that they write similar essays.
We run university programs at my company, and arrived at this bit of insight as well. That said, some of your points are incorrect or incomplete:
- You can't build systems assuming responsible individuals; such systems are guaranteed to fail. Instead, assume individuals are mouldable, and build a system that nurtures discipline toward goals. This works.
- There are still issues with cheating, but they mostly stem from an older way of thinking, which we have developed methods to reset.
- Advanced students need to be given more challenging assignments, while the quantity of assignments stays the same regardless of student capability. This solution was unworkable until GenAI came about.
Looked at from a pure individual skill-building perspective, your ideas are alluring, but if one looks at the completion rates of online courses (Udemy/Coursera: under 4%), one understands why a physical, cohort-led education system can work.
Happy to chat with anyone who'd like to delve deeper into this.
Maybe think about it this way. If you read and reflect on the HN guidelines, it might occur to you that HN is trying to be everything that X is not. Thus, we don't need to be a site that is forever getting wound up about every installment in the X drama. That just detracts from what we're trying to be. "Yet another bizarre thing happened at X today" is not significant new information. We have better things to think about and talk about here.
Brian Scalabrine was a bottom-of-the-barrel NBA player. He was good enough to stay in the league for years, but he'd only play about 13 minutes a game and averaged 3 points. Basically, he was the guy who would play a few minutes and not completely tank the game while your actually good players were getting a rest.
After he retired he went around playing amateurs and completely dominating them. He became semi-famous for his saying, "I'm closer to LeBron James than you are to me."
I think you described it much more succinctly than most people do. It's been my exact experience as well. The LLM can develop much faster than I can build a mental model. It's very easy to get to a point where you don't know what's going on, a bunch of bugs have been introduced and you can't easily fix them or refactor because you're essentially the new guy on your own project. I find myself adjusting by committing code very frequently and periodically asking the LLM to explain it to me. I often ask the LLM to confirm things are working the way it says they are and it tends to find its own bugs that way.
I use an LLM primarily for smaller, focused data analysis tasks, so it's possible to move fast and still stay reasonably on top of things if I'm even a little bit careful. I think it would be really easy to trash a large code base in a hurry without some discipline and skill in using an LLM. I'm finding that developing prompts, managing context, controlling pace, staying organized, and being able to effectively review the LLM's work are required skills for LLM-assisted coding. Nobody teaches this stuff yet, so you have to learn it the hard way.
Now that I have a taste, I wouldn't give it up. There's so much tedious stuff I just don't want to have to do myself that I can offload to the LLM. After more than 20 years doing this, I don't have the same level of patience anymore. There are also situations where I know conceptually what I want to accomplish but may not know exactly how to implement it and I love the LLM for that. I can definitely accomplish more in less time than I ever did before.
In this case, it's not hard to make the argument that you can't have one without the other.
If you're always going to lean into risk aversion and safety nets, you will lose to the player who is willing to take more risks. That means the losers lose bigger, but the winners win bigger.
That's what the US is compared to Europe. The winners are better off, but the losers are worse off.
Everything is a trade-off. But it's highly unlikely you can have the best of both worlds in the long run.
We (Pimoroni) actually shipped this technique in PicoVision, where it's used to load the "GPU" firmware (an RP2040 used to offload the HDMI signal generation) at runtime.
> Basically nobody on the left talks about “woke” except for perhaps a period of six months back in 2017.
Can't really agree. Especially in the wake of the 2024 election, there's been quite a bit of discussion about wokeness on the left.
The trouble is that many people have decided that if you discuss "wokeness" and especially if you have a problem with some element of it, that means you're no longer on "the left".
Personally, I think the issue is mostly about behavior, and not specific ideas. "Let's all make an effort to move culture in a better direction" became "If you don't wholly endorse these specific changes we've decided are necessary, that makes you a bigot, you're not a true progressive, etc.".
When a lot of this was heating up during the pandemic, I encountered two very different kinds of people.
1. Those who generally agreed with efforts to improve the status quo and did what they could to help (started displaying their pronouns, tried to eliminate language that had deeply racist connotations, etc.)
2. Those who would actively judge/shame/label you if you weren't 100% up to speed on every hot-button issue and hadn't fully implemented the desired changes
It's that 2nd group that tends to be the target of "anti-woke" sentiment, and that 2nd group tended to be extremely noisy.
> not because there actually exists a problem with wokeness but to try to gain political and social status with their political group
The other issue that I see repeatedly is a group of people insisting that "wokeness" doesn't exist, or that there isn't a toxic form of it currently in the culture. I think acknowledging the existence of bad-faith actors and "morality police" would do more for advancing the underlying ideas often labeled "woke" than trying to focus on the fakeness of the problem.
Maybe that group is made up of squeaky wheels, but their existence is used to justify the "anti-woke" sentiment that many people push.
For me, this boils down to a tactics issue where people are behaving badly and distracting from real issues - often issues those same people claim to care about.
Aaron Reed's 50 Years of Text Games[1][2] is a fantastic journey into the history and the possibilities of text-based games. I got the physical book and was surprised to find it as engaging as a novel. Each chapter takes one year between 1971 and 2020 and picks a game from that year to discuss in depth. While it might not help with the writing per se, you might find good ideas there (several of the games discussed are in the "Adventure" lineage).
I'm a former USN submariner and nuclear electronics technician; I served from 2006 to 2016. Boy, do I have some thoughts. Not all quotes here are from TFA; some are from various speeches he gave.
> Free discussion requires an atmosphere unembarrassed by any suggestion of authority or even respect.
This has always been an interesting idea to try to implement at tech companies. I've not yet been reprimanded for speaking up, but I have definitely gotten raised eyebrows for doing so when the CEO or some other higher-up makes a statement and then pauses for replies. It seems like management isn't expecting anyone to do anything other than applaud.
> Responsibility is a unique concept. You may share it with others, but your portion is not diminished. You may delegate it, but it is still with you. If responsibility is rightfully yours, no evasion, or ignorance or passing the blame can shift the burden to someone else. Unless you can point your finger at the man who is responsible when something goes wrong, then you have never had anyone really responsible.
This is my number one source of frustration in tech, specifically in incident management / retrospectives. Companies have latched onto blamelessness in a way that obliterates any responsibility. There is a difference between blaming a person for causing a problem and holding a person accountable for their actions. The former usually implies a disinterest in finding and fixing the root cause, instead looking for a scapegoat. That is flawed thinking that will not yield positive results.

However, it's also patently absurd to pretend that when Bob has caused 3 of the last several SEV0s, Bob isn't at least related to the problem. "We need more guardrails," they'll say, and implement automated checks to prevent the specific issue. I have no problem with guardrails, but what bothers me is when the guardrails become so onerous that it's difficult to do my job, and the bottom line is that no one is holding Bob accountable. If you are careless with your work, no amount of guardrails will fix that; the problem is you.
> I am not satisfied with bringing an individual to a qualified level once, and then forgetting about him. Therefore, we continually reinforce theoretical and practical training with a continuing training program. This includes frequent practice in plant evolutions and casualty drills.
As much as any sailor hated drills, you can't deny that they work. I was jolted awake once by the sound of the collision alarm, followed by the announcement of flooding. While it turned out not to be flooding but merely a "controlled seawater leak" (someone messed up a rig for dive and left a valve open; that's an entirely different discussion), the fact remains that everyone knew where they needed to go and what they needed to do.
Companies, especially ops-related departments, should practice scenarios: DR, loss of a K8s cluster, whatever. If you've automated everything, terrific; find something new that you haven't automated, and see if people know how to deal with it. This leads me to my next point: understanding fundamentals.
> I recall once several years ago an Admiral, whose conventionally powered ships were suffering serious engineering problems, asked me for a copy of one specific procedure I used to identify equipment which was not operating properly. He believed that would solve his problem, but it did not. That Admiral did not have the vaguest understanding of the problem or how to solve it, he was merely searching for a simple answer, a check off list, that he hoped would magically solve his problem.
and
> One of the elements needed in solving a complex technical problem is to have the individuals who make the decisions trained in the technology involved. A concept widely accepted in some circles is that all you need is to get a college degree in management and then, regardless of the technical subject, you can apply your management techniques to run any program...
Rickover understood that in order to operate things, you have to understand how they work. To this end, the training pipeline for my job began with basic algebra, to ensure a baseline level of knowledge, and then proceeded through the structure of an atom, electrons, PN junctions, diodes, transistors, and logic circuits, before finally covering a great deal about the CPU (a Motorola 68000 when I was a student; I was part of the curriculum overhaul years later to "modernize" it to the Intel 386) at the logic-signal level. All this, to operate with massive layers of abstraction. But critically, that fundamental knowledge is there. We could, if absolutely necessary, troubleshoot a logic board (which is simply a specialized computer) down to the component level, desolder the bad part, and solder in a new one.
Tech largely operated in this manner for decades by necessity. If you asked how to do something, you were told to RTFM. If you instead said, "I read section x.y.z but don't understand what it means," there was a much better chance of someone offering guidance. The onus was on you to understand enough of your current layer to apply it to the abstraction above. Instead, we now have vibe coding, and people pushing PRs having neither written nor tested any of the code. We have people copy/pasting error messages into Slack and asking what they mean, instead of taking the 10 seconds required to read it. We have people who have successfully memorized Leetcode, but who can't apply any of that knowledge to real-world problems.
Rickover was an asshole, but he had an extremely transparent and level requirement of all his employees: know your job inside and out. If you didn't, he would destroy you.
There _has_ to be a middle ground somewhere that modern companies could strive towards.
> People can ‘git clone’ my code, and there’s a web-based browsing interface (the basic gitweb) for looking around without having to clone it at all.
I host my own public Git repositories, but statically: read-only, no HTML views. I don't want to run any specialized server-side code, whether dynamic or preprocessed, as that's a security vector and a system-administration headache I'd rather avoid. You can host a Git repository as a set of static files using any web server, without any special configuration. Just clone a bare repo into an existing visible directory; `git update-server-info` generates the index files that `git clone https://...` needs to work transparently. I add a post-receive hook to my read-write Git repositories that does `cd /path/to/mirror.git && git fetch && git --bare update-server-info` to keep the public repo updated.
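For concreteness, here's a minimal sketch of that setup (paths are illustrative, not my real layout; I use a mirror clone here so that a plain `git fetch` updates every ref):

    # one-time setup: a mirror clone under the web root
    git clone --mirror /srv/git/project.git /var/www/git/project.git
    git -C /var/www/git/project.git update-server-info

    # hooks/post-receive in the read-write repo (chmod +x):
    #!/bin/sh
    git -C /var/www/git/project.git fetch --quiet
    git -C /var/www/git/project.git update-server-info

After that, `git clone https://example.com/git/project.git` works over Git's "dumb" HTTP protocol with no server-side Git code running at all.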
In theory, something like gitweb could be implemented purely client-side, using JavaScript or WASM to fetch the Git indices and packs on demand and generate the HTML views. Someday I'd like to give that a try if someone doesn't beat me to it. You could even serve it as index.html from the Git repository directory, so the browser app and the Git clone URL would be identical.
Embedded systems often have crappy compilers. And you sometimes have to pay crazy money to be abused, as well.
Years ago, we were building an embedded vehicle tracker for commercial vehicles. The hardware used an ARM7 CPU, GPS, and GPRS modem, running uClinux.
We ran into a tricky bug in the initial application startup process. The program that read from the GPS and sent location updates to the network was failing. When it did, the console stopped working, so we could not see what was happening. Writing to a log file gave the same results.
For regular programmers, if your machine won't boot up, you are having a bad day. For embedded developers, that's just a typical Tuesday, and your only debugging option may be staring at the code and thinking hard.
This board had no Ethernet and only two serial ports, one for the console and one hard-wired for the GPS. The ROM was almost full (it had a whopping 2 MB of flash, 1 MB for the Linux kernel, 750 KB for apps, and 250 KB for storage). The lack of MMU meant no shared libraries, so every binary was statically linked and huge. We couldn't install much else to help us.
A colleague came up with the idea of running gdb (the text-mode debugger) over the cellular network. It took multiple tries due to packet loss and high latency, but suddenly we got a stack backtrace. It turned out `printf()` was failing when it tried to print the latitude and longitude from the GPS as floating-point numbers.
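For anyone who hasn't done remote debugging like this, the mechanics look roughly like the following (a sketch; the binary name, port, and address are made up): gdbserver runs the program on the target and speaks GDB's remote protocol over any TCP link, cellular included, while a cross-built gdb on the host supplies the symbols.

    # on the target, reachable over the GPRS link:
    gdbserver :2345 /bin/tracker

    # on the development host, using the cross-toolchain's gdb:
    arm-elf-gdb tracker-with-symbols
    (gdb) target remote 10.1.2.3:2345
    (gdb) continue
    # ...crash...
    (gdb) bt    # the backtrace that fingered printf()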
A few hours of debugging and scouring five-year-old mailing list posts turned up a never-applied GCC patch that fixed an ARM7 floating-point bug affecting uClibc.
This made me think of how the folks who make the space probes debug their problems. If you can't be an astronaut, at least you can be a programmer, right? :-)
> I _strongly_ disagree with a fully cynical response of working only to contract, leveraging job offers for raises, etc.
Early in my career I watched a coworker get denied a promotion to management and make a hard turn toward cynicism. To be honest, he was not ready for a management promotion and the company made the right call. However, he was so insulted that he immediately started looking for new jobs and stopped doing more than a couple hours of work per week.
I thought his cynicism was going to backfire, but over the next several years he job-hopped almost every year, getting bigger titles with every move. For a long time I was jealous that his cynicism and mercenary approach to employment were paying off so well.
Years later I went to a fun networking lunch. His name came up, and many of us, from different local companies, said we had worked with him. The conversation quickly turned to how he had kind of screwed everyone over by doing Resume-Driven Development: starting ambitious projects and then leaving before he had to deal with the consequences of, well, anything.
He hit a wall mid-career, where he was having a very hard time getting hired because his resume was full of job hopping. He was requesting reference letters from past bosses multiple times a month, because he was always trying to job hop. One of them admitted that he eventually just stopped responding: he'd write a batch of reference letters every job-hop cycle, only to watch the guy bail on the next company and leave a lot of technical debt behind.
He eventually moved away, I suspect partially because the local market had become saturated with people who knew his game. He interviewed extremely well (because he did it so much) but he'd fail out as soon as someone recognized his name or talked to an old coworker.
The last I talked to him, he felt like a really cynical person all around. Like his personality was based on being a mercenary who extracted "TC" from companies by playing all the games. He was out of work, but asked me if I had any leads (no thanks!).
I'm no longer jealous of his mercenary, job-hopping adventure.
I came in here to see if this had been posted already. deBoer does a much better job at talking about it in a comparatively neutral way. He only adds his two cents at the very end.
Another good one that gets it even closer is from Sam Kriss. His prose is a bit less to the point than deBoer's, but he outlines his idea that "wokeness" is not a political ideology but rather an etiquette. I think it's paywalled now, but the archived version can be read: https://web.archive.org/web/20230324050437/https://samkriss....
It's a good writeup that doesn't require the reader to have taken a stance or agree with the author's (arguably reactionary in the case of PG's post, depending on one's perspective) politics.
It's not that the skin is thin, but that the muscle is tired. Our muscle (or sense) of guilt has been overused and abused. Now it's prone to inflammation when we hear people who intentionally or unintentionally trigger it.
I think the irritation towards guilt may look like rage, but I think it's a weary hopelessness. No matter what is done, history cannot be undone. It cannot be forgotten, and many people feel it can't even be made right anymore. All the guilt of recent history did not lead to a new Civil Rights Act; it did not change the Constitution. And whatever good was done to right history in the 20th century, many claim, belongs only to yesterday's victims.
Those with the wrong ancestors are stuck in their sin waiting for history to be twisted & jabbed into them by their neighbors, who wish to ease or to glorify their own individual conscience.
IMO, the cycle breaks only when there's hope of true, genuine forgiveness that MLK preached and LBJ effected. But that forgiveness is beyond human power.
This is a pattern we see again and again. The true believers are fools because they don't anticipate how the implementation of their doctrine will be gamed.
While unemployment certainly deserves a conversation of its own, I think the more overlooked effects on education and democracy will, by themselves, sink our society deeper into a hole.
I'm rather fearful for the future of education in this current climate. The tools are already powerful enough to wreak havoc, and they haven't stopped growing yet! I don't think we'll properly know the effects for some years, not until the kids who are currently in 5th, 6th, or 7th grade start going into the workforce. While the optimist in me would like to see AI as a great equalizer (a personal tutor for everyone, equal-opportunity deliverance), I think we've fumbled it for all but a select few. Certainly there will be cases of great success: students who leverage AI to its fullest extent. But I urge you to think of the other side of the pie. How will those students respond to this? And how many students really fall in the successful slice?
AI in its current state presents a pact with the devil for all but the most disciplined and passionate of us. It makes it far too easy to resign all use of your critical mental faculties, and to stagnate in your various abilities to navigate our complex modern world. Skills such as critical reading, synthesizing, and writing are just a few of the most notable examples. Unrestrained use of tools that help us so immensely in these categories can bring nothing but slow demise for us in the end.
This thought pattern pairs nicely with the discussion of AI's effects on democracy. Hopefully the step from assuming the aforementioned society, with its rampant inability to reason critically about its surroundings, to saying that this is categorically bad for democracy isn't too large. Democracy, an imperfect form of government that is nonetheless the best we have at the moment, only works well with an educated populace. An uneducated democracy governs on borrowed time. One can already see the paint start to peel. (There is a larger effect that the Internet has on democracy that I'll leave out for now, but it is worth thinking about, as it's the one responsible for the current decline in our political reality.)
The unfortunate conclusion I reach when I think of all this is that it comes down to the ability of government and corporations to properly restrain this technology and foster its growth in a manner that benefits society. And that restraint is hard to see coming with our current setup; I say "hard to see" only to avoid being overly dramatic and calling it impossible.
If you look at the history of the United States and truly observe the death grip that its baby, capitalism, has on its governance, you will find it hard to believe that this time will be any different from times past. There is far too much money and national-security concern at stake here to do anything but put the pedal to the floor and rapidly build an empire in this wild west of AI. Perhaps this could have been a wonderful tool for humanity, one that allowed us to realize our collective dreams, but for the reasons stated above, I believe that is unachievable with our current setup of governance and our global understanding of ethics.
Dr. Horstmann was my advisor in college, at San Jose State.
I just loved his lectures: a very dry sense of humor, and extremely funny.
He was just getting started writing books in the early 90s. He has this awesome way of thinking about programming, which I imparted to my own students when it came my turn to teach programming. I wish there were some videos of his classes that I could go back to and share with people.
The picture on the website of him in the rowboat has a funny story behind it. When asked why he was in a rowboat, he would reply, "Students are in the rowboat with me, learning to program. At some point I push them out of the boat into the eel-infested lake. The ones who are clever enough to make it back to the shore will be good programmers." All of this said with a faint hint of a German accent and a sly smile.
If you happen to read this, Dr. Horstmann: I made it to shore. Thanks! It has been an awesome journey!
Trying to boil the frog and return to the before times. Not going to go well, but you have to let people who strongly believe in shooting themselves in the foot do it. They won't be talked out of it, because the folks who filter to the top don't get there by being rational, logical, data-driven, etc.
I spent about the past month learning Rust, and decided it just wasn't a usable language for me.
I think what it comes down to is that I'm just not that into bondage and discipline from a compiler. Yes, I know, it's trying to make my code 'safe', and I'm horribly cavalier and I should feel bad, but:
1. the borrow checker rejects valid programs, has a lot of corner cases it can't catch, and is in active development (see the sketch after this list)
2. people routinely write more inefficient code to satisfy it
3. people routinely freak out about 'unsafe', which is a bit much given the first point
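To make points 1 and 2 concrete, here's the classic "get or insert" example (the well-known NLL "problem case #3"; my sketch, not from the article):

    use std::collections::HashMap;

    // The obvious single-lookup version is memory-safe but rejected:
    //
    //     if let Some(v) = map.get(&key) {
    //         return v;                     // returned borrow is held too long...
    //     }
    //     map.insert(key, String::new());   // ...so this mutable borrow errors
    //
    // So people write the version that looks the key up twice:
    fn get_or_insert(map: &mut HashMap<u32, String>, key: u32) -> &String {
        if !map.contains_key(&key) {
            map.insert(key, String::new());
        }
        map.get(&key).unwrap()
    }

    fn main() {
        let mut map = HashMap::new();
        println!("{}", get_or_insert(&mut map, 1));
    }

(Yes, the `entry()` API handles this particular case; the pattern shows up in plenty of shapes that it doesn't.)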
Anyway, YMMV. Maybe you're smarter than me, or Rust fits your mindset better, or you enjoy asking for help on Discord all day, but I am so glad not to be using it anymore.
I agree that writing type hints can be painful, especially if you are starting with a large code base that is mostly untyped. You might consider using RightTyper (https://github.com/RightTyper/RightTyper): basically, run your Python 3.12+ program with it, and it will add type hints to your code. It's fast and basically automatic, and by design RightTyper avoids overfitting to your types, letting a type checker like mypy surface edge cases. In effect, the type checker becomes an anomaly detector. (Full disclosure: I am one of the authors of RightTyper.)
From the GitHub page:
RightTyper is a Python tool that generates types for your function arguments and return values. RightTyper lets your code run at nearly full speed with almost no memory overhead. As a result, you won't experience slowdowns in your code or large memory consumption while using it, allowing you to integrate it with your standard tests and development process. By virtue of its design, and in a significant departure from previous approaches, RightTyper only captures the most commonly used types, letting a type checker like mypy detect possibly incorrect type mismatches in your code.
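To make that concrete, here's the flavor of the transformation (a made-up example; the hints come from the types RightTyper actually observes while your code runs):

    # before: an untyped function that your tests exercise
    def scale(values, factor):
        return [v * factor for v in values]

    # after a run under RightTyper, based on the observed values:
    def scale(values: list[int], factor: float) -> list[float]:
        return [v * factor for v in values]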
I am as worried about LLMs taking my job as I am about parrots or magic taking it, and for the same reason. Do not worry about make-believe stuff.
Interviews are irrelevant if you publish enough good things that companies reach out to you and offer to skip them. Write. Learn. Publish. Thrive.
Building fluff is indeed a serious issue, but there are plenty of places that are doing useful things, e.g., medical and aerospace.
Demand is weak for JavaScript monkeys. It is plenty strong in the same places it always has been: solving complex problems in complex systems. Go work there.
Here's the doc for responding to mistakes: https://docs.google.com/document/d/1AqBGwJ2gMQCrx5hK8q-u7wP0...
And here's a video with Matt talking about it in a little more detail: https://www.loom.com/share/651f369c763f4377a146657e1362c780
It's a very similar approach to the linked article although it goes slightly further in advocating "rewind and redo" where possible.
EDIT - The full "curriculum" is here: https://docs.google.com/document/d/18FiJbYn53fTtPmphfdCKT2TM...