So, I'm not a dev nor a project manager but I found this article very enlightening. At the risk of asking a stupid question and getting an RTFM or an LMGTFY, can anyone provide simple, practical examples of software successes at a big scale? I work at a hospital so healthcare specific would be ideal but I'll take anything.
FWIW I have read The Phoenix Project and it did help me get a better understanding of "Agile" and the DevOps mindset but since it's not something I apply in my work routinely it's hard to keep it fresh.
My goal is to try and plant seeds of success in the small projects I work on and eventually ask questions that get people to think from a similar perspective.
Even though there were some benefits to the modularity of Multics (apparently you could unload and replace hardware in Multics servers without reboot, which was unheard of at the time), it was also its downfall. Multics was eventually deemed over-engineered and too difficult to work with. It couldn't evolve fast enough with the changing technological landscape. Bell Labs' conclusion after the project was shelved was that OSs were too costly and too difficult to design. They told engineers that no one should work on OSs.
Ken Thompson wanted a modern OS so he disregarded these instructions. He used some of the expertise he gained while working on Multics and wrote Unix for himself (in three weeks, in assembly). People started looking over Thompson's shoulder being like "Hey what OS are you using there, can I get a copy?" and the rest is history.
Brian Kernighan described Unix as "one of" whatever Multics was "multiple of". Linux eventually adopted a similar architecture.
If you click on the link, I mention other competing attempts and architectures, like Multics, Hurd, MacOS and even early Windows that either failed or started adopting Unix patterns.
This is a noble and ambitious goal. I feel qualified to provide some pointers, not because I have been instrumental in delivering hugely successful projects, but because I have been involved, in various ways, in many, many failed projects. Take what you will from that :-)
- Define "success" early on. It usually doesn't mean hitting a deadline on time and on budget; that is just the start. Real success should be judged months or years later, once the software and processes have been used in a production business environment.
- Pay attention to Conway's Law. Fight it at your peril.
- Beware of key-person risk. If there is a single person who knows everything, you have a problem when they leave or get sick. Redundancy needs to be built into the team, not just the hardware/architecture.
- No one cares about preventing fires from starting. They do care about fighting fires late in the project and looking like a hero. Sometimes you just need to let things burn.
- Be prepared to say "no", a lot. (This is probably the most important one, and the hardest.)
- Define ownership early. If no one is clearly responsible for the key deliverables, you are doomed.
- Consider the human aspect as equal to the technical. People don't like change, and you will be introducing a lot of change. Balancing this needs to be built into the project at every stage.
- Plan for the worst, hope for the best. Don't assume things will work the way you want them to. Test _everything_, always.
>No one cares about preventing fires from starting. They do care about fighting fires late in the project and looking like a hero. Sometimes you just need to let things burn.
As a Californian, I hate this mentality so much. Why can't we just have a smooth release with minimal drama because we planned well? Maybe we could properly fix some tech debt or even polish up some features if we're not spending the last 2 months crunching on some showstopper that was pointed out a year ago.
I find it kind of hard to define success or failure. Google search and Facebook are a success right? And they were able to scale up as needed, which can be hard. But the way they started is very different from a government agency or massive corporation trying to orchestrate it from scratch. I don't know if you'd be familiar with this, but maybe healthcare.gov is a good example... it was notoriously buggy, but after some time and a lot of intense pressure it was dealt with.
The untold story is of landing software projects at Google. Google has landed countless software projects internally to keep Google.com working, and the story of those will never see the light of day, except in back-room conversations never to be shared publicly. How did they go from internal platform product version one to version two? It's an amazing feat of engineering that can't be shown to the public, which is a loss for humanity, honestly, but capitalism isn't going to have it any other way.
Are you saying this from firsthand experience? Because it sounds like the sort of myth that Google would like you to believe. Much more believable is that their process is as broken and chaotic as most software projects are, they are just so big that they manage to have some successes regardless. Survivorship bias. A broken clock is still right twice a day.
I was an SRE on their Internet traffic team for three years, from 2020 until 2023. The move from Sisyphus to Legislator is something I wish the world could see documented in a museum, like the moving of the Cape Hatteras Lighthouse.
That's my entire industry, so I can believe it. I'd love to learn large-scale game architecture but it simply isn't public. At best you can dig into the source-available, 30-year-old legacy code of Unreal Engine as a base. But extracting architecture from the source is like looking at a building without a schematic.
Your best bet is a $500 GDC Vault subscription that offers relative scraps of a schematic, and making your own from those experiences.
Have you seen the presentation from GDC 2017 on the architecture of Overwatch [0]? If you watch the video in detail -- stepping through frame-by-frame at some points -- it provides a nearly complete schematic of the game's architecture. That's probably why the video has since been made unlisted.
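For anyone who hasn't watched it: the talk centers on an entity-component-system (ECS) style of architecture. Here's a toy sketch of that pattern in Python; every name here is mine for illustration, nothing resembles Blizzard's actual code:

```python
# Toy entity-component-system (ECS) sketch. Entities are bare ids,
# components are plain data, and systems are functions over queries.
# All names are illustrative, not from the Overwatch codebase.

class Position:
    def __init__(self, x, y):
        self.x, self.y = x, y

class Velocity:
    def __init__(self, dx, dy):
        self.dx, self.dy = dx, dy

class World:
    """Entities are just ids; components live in per-type stores."""
    def __init__(self):
        self.next_id = 0
        self.components = {}  # component type -> {entity_id: instance}

    def spawn(self, *components):
        eid = self.next_id
        self.next_id += 1
        for c in components:
            self.components.setdefault(type(c), {})[eid] = c
        return eid

    def query(self, *types):
        # Yield (entity_id, components...) for entities having all types.
        stores = [self.components.get(t, {}) for t in types]
        for eid in set.intersection(*(set(s) for s in stores)):
            yield (eid, *(s[eid] for s in stores))

def movement_system(world, dt):
    # A system touches only the components it declares; behaviour
    # lives here, not on the entities themselves.
    for _, pos, vel in world.query(Position, Velocity):
        pos.x += vel.dx * dt
        pos.y += vel.dy * dt

world = World()
player = world.spawn(Position(0.0, 0.0), Velocity(1.0, 2.0))
movement_system(world, 0.5)
```

The appeal for big games is that data layout and behaviour are decoupled, so new systems can be bolted on without touching existing entity code.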
I don't think you should focus on successful large projects. Generally you should consider that all big successes are outliers from a myriad of attempts. They have been lucky and you can't reproduce luck.
I'd like to try to correct your course a bit.
DevOps is a trash concept that had good intentions. Today it's just an industry cheat code to fill three dev positions with a single one that is on pager duty. The good takeaway from it: make people care that things work end to end. If Joe isn't caring about Bob's problems, something is off, either with the process or with the people.
Agile is a very loose term nowadays. Broadly speaking, it's the opposite of making big up-front plans and implementing them in one big sweep. Agile wants to start small and improve iteratively as needed. This tends to work in the industry, but the fixed iteration lengths have issues: some teams can move fast in 2-week cycles, others can't.

The original agile movement also wanted to give control and autonomy back to those who actually do stuff (devs and lower management). This is very well intended and highly functional, but is often buried or ignored in corporate environments. Autonomy is extremely valuable; it motivates people and fosters personal growth, and being backed by skilled peers also creates psychological safety.

One of the major complaints I hear about agile practices is that there are too many routines, meetings and other in-person tasks with low value that keep you from working. This is really bad and in my perception was never intended, but companies love that shit. This part is about communication: make it easy for people to share and engage, while also keeping their focus hours high. Problems have to bubble up quickly, and everyone should be motivated and able to help solve them.

If you listen to agile hardliners, they will also tell you that software can't be reliably planned: you won't make deadlines, none of them, ever. That is very true, but companies are unable to deal with it.
I legit wish I had known about you a few years ago when writing my thesis. It was about community run broadband internet. I was trying to identify repeatable models for communities who wanted to run their own ISPs to use. This would have been so helpful!
Side note, truly inspirational and something I would love to do in my little village in Ohio.
I was wondering if you wouldn't mind talking more about your thesis? I am interested in rural internet as well for Native American reservations, and both OP's and your comments are inspiring and something I definitely want to read.
When I drink black tea it's with milk, using freshly boiled water (allow bubbling to stop) in a preheated cup. Milk gets added after steeping so as not to scald. For the love of all that's holy, don't microwave the water. Use a kettle or a pot.
Green and most herbal tea gets neither milk nor sugar, but sometimes honey. Add a little cold water to the herbs first, so you don't extract bitter compounds: using near 100C water is why so many people think they dislike green and herbal tea. (Ginger, hemp, chamomile and hibiscus are the main exceptions to this rule - they need boiling water for the floral flavours.)
I love coffee, but I have enough herbal tea to open an apothecary. Just be careful that you're not getting teabags with plastic in them! Pukka are consistently great, as are Suki.
Favourite fancy biscuits are from the Island Bakery [0] in Scotland. Their Lemon Melts and their Chocolate Gingers are just about perfect. I know you can find their shortbread in the US, but not sure about other varieties. The biscuits come in a little cardboard boat and it's adorable.
Former MSP I worked at is also looking to migrate customers off to something like Hyper-V or Nutanix. I told my old boss to hire a bunch of Linux gurus, go through the Proxmox partnership program and get certified. As much as people knock it for its support not really being enterprise-ready, I do think Proxmox is well positioned to gain some of the market share.
I've been using it in my lab for the past few years without any major issues, running an HA 3-node cluster. The only issue I ran into was recurring kernel panics from a driver issue with my RAID card, and that really only affected me during reboots. Pinning to an older kernel worked. Oh, and it was also because my RAID card is like 10 years old.
So HN - if you are moving off VMware, what are you using?
It can in advanced cases - from my personal experience: my gran had it for as long as I can remember. As she got older and her condition worsened, she would have difficulty swallowing just about anything, and we had to add thickening agents (even to her water) so she could.
This also might have been a side effect of her medication, but I don't recall it happening earlier on the same meds.
Take this for what it's worth. Happy to be wrong, I know someone on HN will tell me lol
Checks out for my old man, but then again his disease's progression has been so wildly aggressive - at 64 his stage of the disease is more akin to what I'd always imagined for an 84-year-old - I suspect it's not even PD
IIRC eBPF is an enhanced version of the Berkeley Packet Filter. In this scenario I believe it is being used to sandbox a low-level process that performs TLS "decryption" on network connections related to Docker.
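For intuition on why that's a sandbox: classic BPF is a tiny restricted bytecode that the kernel verifies before running it against each packet, so a filter can't loop forever or touch anything outside the packet. Here's a toy user-space sketch of that verify-then-run idea; the instruction names are invented for illustration, and the real (e)BPF ISA and verifier are far richer than this:

```python
# Toy BPF-flavoured filter: a tiny verified bytecode run over packet bytes.
# Invented instruction set, purely illustrative of the verify/run split.

def verify(program, max_len=64):
    """Reject programs that could misbehave: too long, unknown
    opcodes, or missing a terminating 'ret'."""
    opcodes = {"ld", "jeq", "ret"}
    if len(program) > max_len:
        raise ValueError("program too long")
    for op, *_ in program:
        if op not in opcodes:
            raise ValueError(f"bad opcode: {op}")
    if program[-1][0] != "ret":
        raise ValueError("program must end with ret")

def run(program, packet):
    """Execute: 'ld off' loads packet[off] into the accumulator,
    'jeq val, skip' skips `skip` instructions if acc == val,
    'ret verdict' accepts (True) or drops (False) the packet."""
    acc, pc = 0, 0
    while pc < len(program):
        op, *args = program[pc]
        pc += 1
        if op == "ld":
            acc = packet[args[0]] if args[0] < len(packet) else 0
        elif op == "jeq":
            val, skip = args
            if acc == val:
                pc += skip
        elif op == "ret":
            return bool(args[0])
    return False

# "Accept packets whose first byte is 0x45 (IPv4, IHL=5), drop the rest."
prog = [
    ("ld", 0),
    ("jeq", 0x45, 1),  # on match, skip the drop below
    ("ret", 0),        # drop
    ("ret", 1),        # accept
]
verify(prog)
```

Because jumps can only skip forward and every path ends in `ret`, termination is easy to check up front, which is the same property the kernel's verifier enforces before it lets a filter anywhere near live traffic.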