You're free to donate money to Ukraine yourself. They didn't ask to be born there or to be in that situation.
I live in a country with hundreds of thousands of Ukrainian refugees just living their new lives, working - and spending money, sometimes a lot of it, on expensive apartments, nice cars, luxury meals, etc. Are all of them bad just because they don't want to be responsible for the state when they didn't sign up for that and nobody paid them to do so? I really don't think so. They are people like you and me, they want to live a happy life. Some contribute to the defense effort, some don't, and both are fine.
I do donate. I do not expect them to be patriots and go die in a trench, but I also expect them not to be creeps and dickheads abroad. Their behaviour and their narrative present them as a couple of cash-rich, entitled people detached from reality. They are not helping their own cause.
Are they trying to help any cause other than their own? Also - don't generalize based on nationality, please. These behave like this, others behave differently. This changes nothing about the war in Ukraine or Ukrainians.
They are refugees. They are not entitled to the same quality of life as they enjoyed in their own country. Nobody says they can't have it, but trying to maintain it by using the digital footprints of 299 deceased people and stalking living persons while spending a lot of cash on toys is not going to help them last long in the UK. They need to accept what is obvious: they are refugees and may have to adjust their expectations for a while. It helps to have friends when you are a refugee, but so far they have only upset people and left a permanent record on the internet linking their names to this idiotic stunt.
It's not like your bank account immediately stops working when you cross the border. There are people who had successful careers and found themselves in this situation - is their story not valid just because they also weren't completely broke? It's not actionable advice (is this even advice?) for someone with zero in their bank account, but why should it be?
Saying you have nothing but a backpack, when in fact you have a backpack and tens of thousands of dollars is lying. This whole thing is a ton of lying for their own personal benefit.
While I appreciate the effect this kind of downtime can have, I just don't understand these stories.
Presumably it was planned in advance, so the patients know the time of their appointment and the doctor knows what was planned, and everything necessary to physically perform the treatment is already prepared at the hospital. What's stopping them from doing it without filling it into a digital system? Why is it impossible to make a paper record and fill it into the computer system later?
If somebody was literally dying, would they stand around the computer like confused characters in The Sims who can't find the door, instead of saving the life? And if not, why is this less urgent case different?
A nurse was unable to give my wife medication while in labor because the barcode on the bag of drugs wouldn’t scan. Fortunately we just had to wait another 20 minutes to get a new bag from the pharmacy but I can easily imagine a world where doctors are unable to perform procedures they are physically capable of doing because of liability surrounding not using the computer systems as intended. Epic particularly has really done a number on the healthcare system.
I really, sincerely don't understand that. How does an unscannable barcode prevent a doctor/nurse from administering medicine they are holding in their hands?
The other commenter already said it: liability. What if the scan is part of a procedure that ensures that the right drug is given to the right patient? Giving someone the wrong drug or even the wrong dose can cause serious harm. Imagine they kill someone that way and then during the investigation it turns out that they didn't scan the meds. It doesn't matter why they didn't scan it (lazy, forgetful, computer problem), it is an enormous legal risk for every party involved. Thousands of people die each year because of medical errors, so trying to prevent doctors from killing people by using strict procedures is very important - even if it means that in extreme situations like this the procedure can cause harm as well. Overall it will save many, many more people than it will kill.
What if it turns out they harmed the patient by insisting on following the standard procedure during a worldwide outage? Isn't that the same kind of liability risk, and is the regulation really going to protect them in this case? If so, isn't that a hugely problematic regulation?
In that case the nurse or doctor has a strong defense of "I was following policy" for their insurance and boss.
The people writing hospital policies or regulations aren't thinking about individual patient outcomes unless some notable news story came out recently, and even then it's maybe the third or fourth priority on a list a hundred items long.
We don't know that this is what actually happened in OP's case. I was referring to the comment you replied to, and there it is pretty obvious that the regulation exists to prevent harm from being done. But even if there is a clear justification, you would expose yourself to a lawsuit and need to argue all this in court. I can totally understand why people don't want that, especially in the US. So if anything, you should blame the legal system.
It probably also automates the chart entry and billing to insurance/patient, at least to an extent. I wouldn’t rest sole responsibility for this system on legal compliance or risk mitigation. Under normal circumstances, there’s also an efficiency improvement. The problem arises when there either is no workaround when the system doesn’t work, or workers aren’t trained well enough to know how to do things manually (or don’t have enough time under the less efficient mode of operating).
It doesn't. We do this all the time in rapid responses and cardiac arrest scenarios, when we can't wait for an order in the EHR; someone keeps track of the medications, doses and rough times of administration, and it's entered into the EHR later.
An awful lot of apparently useless bureaucracy exists because many people, left to themselves, are often very, very stupid.
Bureaucracy certainly stops smart people from doing the right thing, but more often, it stops stupid people from doing the wrong thing. Hack away at bureaucracy at your peril.
If the barcode wouldn't scan there, it might also not have scanned correctly when that bag was being filled, which could have led to it being filled incorrectly.
I think a good remedy would be to completely remove "normal procedure" as a defense against liability. Our legal standard should defend people who break protocols when they know following them would result in harm, and prosecute people who don't - or prosecute the people who make the protocols, in those cases. Law should supersede corporate policy, not treat it as a form of law.
> What's stopping them from doing it without filling it into a digital system? Why is it impossible to make a paper record and fill it into the computer system later?
It’s not filling in new data that’s the problem - every person involved in treatment needs to be able to access the patient’s medical records to check for contraindications. Allergies and drug interactions are a quick way to kill someone when injecting drugs directly into their veins even if they’re already in a hospital.
At a major hospital there’s too many patients coming through and the data changes too frequently to keep paper backups.
It’s a large children’s hospital with thousands of employees treating tens of thousands of kids a year, not some rural family doctor with a list of patients that can fit on a single sheet of A4. They’re not going to get the same staff every time and the staff isn’t going to memorize the charts of every patient.
It’s not clear to me that this case is actually life threatening. They have a regular procedure, and even the article says there is wiggle room in the timing.
If all of your computers go down your throughput is going to go down because other kinds of organization are going to be slower to do ad hoc… so you triage.
I can get stories like call centers, but I absolutely don't understand how life critical systems aren't air gapped and rigidly controlled.
Fail safe is the only acceptable failure mode for any critical system. Crowdstrike failed here, but they're not the only thing that can go wrong with computers. Where is the redundancy?
Life-critical systems are air-gapped. Just no one considered systems running Epic to be life-critical. It turns out they are, probably more so than most.
Also, air-gapping helps only so much when the network dies and hospitals can't exchange patient information or send images from MRIs and X-rays to radiologists.
>and hospitals can't exchange patient information or send images from MRIs and X-rays to radiologists
My dentist literally took a photo of my x-ray with his phone and sent it to my orthodontist via WhatsApp, and everything went quickly and smoothly, much faster than the official channels. Solutions to get a job done quickly and efficiently in case of emergency always exist, they're just not "by the book".
Imagine a news story about a dentist who violated HIPAA (or equivalent) laws because they used WhatsApp / Facebook to share medical records. Will that story be about a hero, or about someone who got into trouble?
HIPAA doesn't apply in Europe, but GDPR does, and I don't see how that would be in violation since my information was exchanged only between the two parties with my consent, on an encrypted channel.
They would only get into trouble if that info leaked in an identifiable way to unauthorized third parties and caused damages (there are no punitive damages here like in the US). And people here tend to guard their WhatsApp chats pretty well, since it's what everyone uses and it also contains their private chats, so in a sense it can even be more secure than the official medical channels, which are just more bureaucratic but offer no actual guarantee of better data security.
> my information was exchanged only between the two parties with my consent, on an encrypted channel
Say WhatsApp is found to have a security hole that has been leaking data to third parties. What may be the fate of dentists / doctors who decided to use it as an "encrypted channel" for medical records? Are doctors / dentists not fat targets for lawsuits? What might the guidance be from their lawsuit insurance policy?
Lots of encryption software that’s been used in the past was found to be deficient. I can’t recall any entity that used it while being unaware of its deficiency being held responsible.
I get that, my point is, why is it absolutely necessary to use the computer system? Why don't they just knock on the door, go grab the medicine and tools, apply it, then fill it into the system later?
I understand they would just postpone whatever can be postponed to save the headache, I don't get the stories about life/health threatening situations.
Have you ever worked a job that requires a high degree of physical-world logistics? When the primary coordination mechanism is down, any action becomes much slower to implement, and often at a direct cost to implementing other actions.
With regard to this case, I don't know any specifics, but I can imagine tools that require digital calibration, inventories not tracked outside digital systems, certain meds behind digital access control, and emergency response strained to the point where complicated non-emergency procedures would be more risk than benefit.
I have managed IT departments that managed hundreds of locations and thousands of computers running Windows XP and Windows Server 2003, no cloud at all. And I went through several similar outages (similar in impact on our operations, not in cause or impact on others). Our first priority was to get the critical computers that operated machinery running - we did that one or two hours after the problem started. Then we played around with the servers and network for a few weeks - but the critical stuff was operable, albeit with lesser capacity and efficiency.
And we were managing forests and waterways, not hospitals and human lives.
That's all fine, but this time, no one could get those computers back up in the first few hours, since they were stuck in a boot loop. Plus, systems like hospitals had to be running all that time. Plus, at the scale this outage is reported to be - banks, stores, factories, phones, emergency services, CNC machines, networking, aircon - I imagine everyone was confused and trying to figure out if anything works.
I'm happy nothing significant was hit over here in Poland; reading the main HN thread on the outage feels like reading war reports.
If it's stuck in a boot loop, the first thing I do is call the local admins and tell them to take a fresh SSD and a Windows installation USB drive with them. Plug in the new SSD, reinstall the OS and copy the files from the old one. Computer running in less than an hour.
That's literally what we did to restart our forest logging machinery. Are human lives less critical than that?
Things haven't changed that much in IT. I am not in ICT management anymore, but I write software for modern enterprise systems and networks - I'm reasonably up to date.
As for medicine - hence my question, I'd really like to know what the blocker is. So far it seems the blocker is bad IT management, regulation and liability, not any actual impossibility of performing the treatment.
Your answers indicate that you have not worked in an environment heavily dependent on ever-shifting physical-world logistics. You might try talking to some coordinators on the ground of a hospital, rescue center, construction site, theme park, or military operation for insight.
I talked to people in charge of the operations on a daily basis for years. I really don't think these considerations have changed that much since my times of leadership of an entire department managing just that.
Reminds me of a ticket I once worked on - no emergency or anything, I just needed to crank out a few pages of COBOL, and it took a little while to type it in. The boss of the department wanting the ticket done came by and asked what the holdup was. "I'm working on it, will be ready in a bit." The boss asked, "What, don't you just need to press a button or something?" She too "led a department." hahaha
I can imagine that for something like this procedure (which sounds like an infusion of medication into the brain?), the "tools" needed to perform it are themselves computer-based or computer-dependent. It might not be as simple as injecting a drug into an IV line.
Note that I am not a doctor and have absolutely no specific knowledge beyond what is in the original article, but I am guessing at potential explanations.
Additionally, the article states that there is some "wiffle [sic] room" around the timing of the infusions. So it may be that the delay is not quite as serious as the title makes it sound.
Presumably they would fix these computers first thing during the night from a backup? If not, is this really about CrowdStrike, and not about a hospital unable to keep their absolutely critical computers backed up and restored in a timely manner?
Again, I understand that restoring a complex net of servers is hard and takes time. But they surely have local hospital IT admins for these absolutely critical computers who are always available on site and can do it individually - it's not like there will be more than a hundred of these at a particular hospital? Hack it a little if you have to, disable the SSO etc - all that can be fixed later.
The unfortunate fact of the matter is that centralizing IT systems around large corporate products, including on-prem software and cloud services, necessarily means less local control over what can go wrong and how it can be mitigated, and thus often problems that simply can't be fixed, even by competent on-prem staff. Even when a fix is possible, it's often highly illegal, and most organizations do a lot to beat risk-aversion into everyone on their staff - and of course I mean aversion to the risk of breaking rules or protocols, not risk like "someone dying".
I think it's always a mistake to outsource control of a mission-critical system, but that is exactly what large tech companies have been encouraging every organization that will listen to them to do for decades now
I have trouble accepting that. Even if they had to unplug the computer from the network and disable SSO and antivirus in safe mode, it's possible to get the computer operational. Even if they had to reinstall the OS and the critical software from scratch. There are solutions, the question is - did they even try? If not, why? And is CrowdStrike really to blame if they didn't? I just don't think so.
Who in the org do you expect to have that competency? And do you think hospitals aren't keeping crucial things, like credentials or the software that gates access to things, in the cloud, when literally everyone in the world is encouraged to at every turn?
The culture of organizational IT is broken because a lot of powerful companies found it profitable to break it and leave something inadequate in its place
I agree with this sentiment. If you ask me, the entities that come out looking the worst from this CrowdStrike debacle are the companies that bought their service. CrowdStrike made a poorly designed and maintained product. I heard multiple people on reddit say it's the best product of that type, but what the hell? Why does it need kernel-level control?
Why did we get here? If you're installing kernel-level software, you might as well run a kiosk that only runs presigned code off a read-only system image. And a lot of the machines in question DO APPEAR to be kiosk-style deployments (like hospital data-entry terminals).
It's easy to sit back and armchair, I'm sure there will be many cybersecurity experts who would figuratively jump at my throat for suggesting that trusting a vendor to run a rootkit on your computers is a bit incompetent. LOL. :D
Everyone installing Crowdstrike seems like they want to build locked-down kiosks but haven't heard of Windows Embedded yet. Or at least I'm assuming there's an Embedded configuration that lets you do AMFI[0]-tier code signing enforcement.
[0] AppleMobileFileIntegrity, the daemon and kext on iOS that enforces very strict code signing.
I expect the local admins to be able to install a fresh OS not connected to the enterprise network. And I expect them to have physical copies of stuff like disk encryption keys, also backups of OS installations and images, and all critical software. If they don't have that or can't use it during an outage, the problem is incompetent IT management that has no business running a hospital, not CrowdStrike. Something else would take them out sooner or later.
Again, we had all of this for a forest logging operation - is it too much to expect at a hospital?
I agree with you, and I kind of even agree that CrowdStrike may not directly be at fault. But my point is that this competency is bled out of hospitals by external forces, primarily two. First, distant administration from companies that buy and manage multiple hospitals, often applying the same "efficiency" mindset that strip-mines other industries in the name of profit. Second, the cloud-tech sector (Google, Amazon, and Microsoft in particular), which is very aggressive about selling its services along with demands that everything be handed over to its platforms, which often involves purging technicians who want on-site redundancy. This makes the systems more brittle, but it also often gets the people with the competency you're advocating fired.
Hospital IT sucks. Look at any news report about a ransomware attack, or this one, and it can easily take a few weeks for them to get back in shape. This one is hopefully easier because reportedly CrowdStrike can sometimes pull an update before the machine BSODs, and most Windows machines auto-restart on BSOD, so just leaving things unattended may be enough.
Restore from backup or reimaging fresh often means you need a working backup or image server, which at a lot of these places is also a Windows server and is likely also running the same endpoint protection, and is likely also boot looping.
Restore from zero isn't something any IT wants to do, and many of them aren't prepared to do it either.
Like it or not, hospital care revolves around the electronic medical records systems, and while Kaiser Southern California in the 90s was using amber screens and some sort of mainframe, afaik almost everyone is on Epic now, which is a Windows application with all the baggage that entails. Even before Epic took over Kaiser, they were running terminal emulators on Windows.
IMHO, it would be better for them to put together a ground up desktop distribution with exactly what they need, but that has user training costs and development costs.
From having seen the infusion process myself, I take it that it requires precision measurements over an extended period of time. That seems like an unreasonable requirement to put on staff working by hand.
Again, from what I've seen, infusions are not just "throw it in an IV bag and wait".
If it requires a computer, why was that operationally critical computer not restored from a backup within hours after the problem started? This has nothing to do with CrowdStrike or other bugs - it could've simply failed hardware wise and the hospital should have been able to replace it immediately.
You have a naive view of how modern operations work, I must say. This shows when you suggest endpoints have backups. We're back to the mainframe/terminal times where all software is running on a web server or other centralized application server, which is also in a boot loop, somewhere else.
Failed hardware is different, but hospitals likely have very few computers just 'lying around'. Especially the highly regulated machines, such as those which are attached to MRIs and the like.
21 CFR Part 11 was the bane of my existence. Software that can be installed and configured in a matter of minutes? That's a six month project, at least. Sure, backups are great, but then you've got a significant process to get it back up and running.
These aren't early-2000 logging operations.
I see you'll never be convinced, but this is how modern operations work. Being a hospital (or another industry with heavy government regulation) makes operations that much worse.
Very few companies, for-profit or otherwise, keep gobs of machinery on hand "just in case". It's expensive, not only the machinery, but the space to store it, maintain it while not in use, replace it when it ages out, and so on. It's also exceedingly rare to need it.
Hospitals also have limited resources in terms of IT staff. There's no Azure-scale army of operations staff that can rush out to every endpoint and click buttons.
When I was in helpdesk eons ago, I was "responsible" for roughly 300 - 400 endpoints, plus a handful of servers. As were all of the other helldesk techs. If something like this happened, there's simply not enough hands to go around as fast as everyone would like.
What I meant when I said reinstall the PCs was to reinstall the critical computers necessary for operation of medical machinery to make basic and still mostly manual/paper based operation possible, not every computer they have there. I really don't think they have hundreds of computers necessary for operations of MRIs and other machines.
The truth is that many medical personnel are not agentic. They are human robots unable to act unless instructed to by a computer. The computer tells them when they can do something and they do it.
The bad person is the one who decided to lock and downsize the public toilets. All the arguments for taxes are that they pay for public goods the market can't provide - well, where is that?
It has a simple reason, any other device is much worse. All my Lenovo laptop batteries died in 2 years, meanwhile my MacBook from 2015 still gets 3 hours of battery life. That one was expensive, but now with Apple Silicon the MacBook has the best performance-to-power ratio by far.
And it's not like the other vendors aren't full of crap either. I had a Dell laptop with a clearly broken display that they never acknowledged or repaired - and many other problems of all kinds. Apple always had the fewest (but obviously not zero) problems and the best build quality.
> It has a simple reason, any other device is much worse. All my Lenovo laptop batteries died in 2 years
Pff. I had a macbook battery (2018 model, brand new, issued by my employer at the time) that died in 1.5 years. Died in the sense that I couldn't use that crap unplugged for more than 10 minutes.
Since every place I work issues me a MacBook, I am very experienced with these luxury toys, and I wouldn't ever buy one for myself. I actually think Thinkpads are much better.
As I said, it's obviously not zero issues with Apple either. But this particular issue is an exception imho - my own experience, and that of everyone around me with a Mac, is that the battery lifetime is much better than any other brand they tried. Also, 1.5 years is within the 2 years of warranty (in the EU) - if you're around here, try to have it replaced. I've had only good experiences with Apple customer care - much better than HP, Dell and Lenovo. Again, while it wasn't always perfect and sometimes required visiting again, at least they really wanted to help, unlike the other vendors.
BTW you're saying it was a 2018 model, and employer issued, so if I'm correct in assuming it had a top-spec Intel CPU, those really were chewing through batteries because of the heat. It's very different with the i5, the less powerful i7s, and Apple Silicon.
I really don't think anyone is claiming that Apple is perfect - it's just that the experience with other vendors is so, so utterly bad. For example ThinkPads - nice performance and cheap, I give you that. But the non-existent customer care (for consumers, not enterprise), the build quality, the bad sound and displays and the absolutely terrible touchpad make me avoid it. Also Windows - and I never got Linux properly working on a ThinkPad as well as MacOS does on a MacBook, even though they claim it's Linux certified.
People use Teams because it's well integrated with Office 365, Entra and other MS products; they would (and recently do) pay for it. It has functionality that no other alternative has, e.g. it can act as a full call centre solution through a SIP gateway.
France, Italy, Greece, Spain, Portugal are giving out the visas in Moscow at a higher rate than they used to in 2020. Some countries are extremely dependent on tourism.
Bank accounts are not an issue in the slightest
Yeah, true... There is currently a big anti-EU/Schengen backlash in my state because we can't control the entry of Russians who got their visas elsewhere. Which is a problem, as we stopped giving them out because they acted as if we were next in line for invasion.
I mean, the two are not necessarily mutually exclusive.
Among those who can apply to the remaining countries, who can bear the increased costs and shortened validity terms of the non-“simplified” process (and of course the usual 2x markup from the “visa center” rackets that consulates are so keen to force on applicants), and who can figure out a way to arrange a trip without access to their domestic bank accounts (no credit cards thanks to Visa/MC, no currency withdrawals thanks to Putin) or other cheap ways to get foreign currency (good luck figuring out travel medical insurance), the acceptance rates might well be the same as among everybody who went abroad previously.
I don’t know that that’s true, but it would not exactly strain the imagination. It just doesn’t mean what GP appears to imply it means.
Recent Russian regulation automatically closes their domestic bank accounts on the grounds of their internal passports expiring. You can only renew them from inside Russia.
Unfortunately, one of these times is at 20, which can be a serious problem for those evading conscription. There have also been recent developments where opposition figures’ passports are cancelled as a persecution measure[1], but it’s too early to tell how widespread it’s going to be.
That's not about experience, that's about following the regulated standards. This is well known ever since technology (not computers) got into hospitals.
OR-Tools has bindings for a lot of different languages, though JavaScript/Node doesn't seem to be a first-class supported environment. It looks like https://www.npmjs.com/package/node_or_tools ports a few of the solvers to Node, so if those solvers fit your needs you can use that package.
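For what it's worth, the first-class Python bindings make this kind of solve fairly short. Here's a minimal sketch of a toy TSP with the OR-Tools routing library (the distance matrix is made up, and I haven't checked how node_or_tools shapes its JS API, so treat this only as an illustration of what the underlying solver expects):

    from ortools.constraint_solver import pywrapcp, routing_enums_pb2

    # Toy symmetric distance matrix for 4 locations (made-up numbers).
    distance_matrix = [
        [0, 2, 9, 10],
        [2, 0, 6, 4],
        [9, 6, 0, 3],
        [10, 4, 3, 0],
    ]

    # One vehicle, starting and ending at node 0 (the depot).
    manager = pywrapcp.RoutingIndexManager(len(distance_matrix), 1, 0)
    routing = pywrapcp.RoutingModel(manager)

    def distance_callback(from_index, to_index):
        # Translate internal routing indices back to matrix node indices.
        return distance_matrix[manager.IndexToNode(from_index)][manager.IndexToNode(to_index)]

    transit_idx = routing.RegisterTransitCallback(distance_callback)
    routing.SetArcCostEvaluatorOfAllVehicles(transit_idx)

    params = pywrapcp.DefaultRoutingSearchParameters()
    params.first_solution_strategy = routing_enums_pb2.FirstSolutionStrategy.PATH_CHEAPEST_ARC

    solution = routing.SolveWithParameters(params)
    if solution:
        index, route = routing.Start(0), []
        while not routing.IsEnd(index):
            route.append(manager.IndexToNode(index))
            index = solution.Value(routing.NextVar(index))
        route.append(manager.IndexToNode(index))
        print(route)  # e.g. [0, 1, 3, 2, 0]

If node_or_tools wraps these same routing solvers, the concepts (cost matrix/callback, depot, first-solution strategy) should carry over even if the JS surface looks different.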