We've been having 'fun' with ongoing issues for a site since 6pm UTC yesterday, which got dramatically worse this morning... and has been recurring during the day.
Having multiple-hour outages makes me really want to go back to hiring a couple of physical servers in a rack somewhere.
I have no idea where my power or water comes from, or where the cell towers are located for my phone, or the satellites they communicate with. I don't want/need to know. Most cloud providers tell you which region of which country you're in; most people don't need to know much more (and if you do, then don't use cloud).
The difference is that you're not concerned about where those come from. It doesn't matter whose water it is or where the power is coming from. And they're both simple commodities with few metrics.
We're trying to fit something that's generally very centralized into the same model. Where the servers are does matter. What OS they run and how reliable they are matters on an individual level. The environment your server runs in is quite important, and it's definitely your server, not "any server will do", not by a long shot.
If the cloud were just some source of CPU instructions, we would have a government-regulated source of CPU power for everyone. But depending on what you're running, RAM size, cache size, network latency, CPU architecture, drive type, and endless other variables come into play, all of them important.
Depending on hardware that you can't control to meet metrics you definitely need to control is going to make the system less reliable, and that's what we're seeing now with cloud computing.
I lived in a country where everyone has a power generator in their building. Let's just say the quality of life was significantly lower. This cloud shift is like an unstoppable tidal wave. I'm always surprised when I hear people with your argument. Are you willing to imagine that in a couple of years things may change your perspective?
The key is that most services don't need to be reliable. I think the cloud has huge promise here. Engineers tend to think their app needs five-nines reliability when we live in a world where the banks close twice a week.
I don't think on-prem will ever die out. It's like owning vs. renting your office. There are pros and cons to each, and we'll eventually hit some kind of equilibrium.
My cloud provider tells me which city my server is in, and I get to pick the OS. I don't care about the topology of their data center or what rack I'm in, or even the precise location of the data center beyond which region it is in. Your needs may differ; if so, don't use cloud.
I think it all boils down to how you deploy your stuff. If you think the cloud is so massive that it is never a SPOF, you'll likely fail to meet your availability aspirations at some point in the future.
Cloud to me is also shared risk. I read "Google Cloud outage" as "multiple companies that rely on shared infrastructure are unavailable at the moment".
The mindset should be to run your services on a distributed infrastructure with no SPOF. Leverage cloud, fog, racks, PCs, whatever resources you can, but diversify your content/service so you're insulated from any one kind of failure.
What about experimenting and failing fast and cheap? Will you buy servers / rack space / a service contract for a year to develop your app when you can experiment with servers on the cloud for cents? Public clouds are about much more than scaling; we can also talk about managed services, agility, pay-as-you-go, etc.
Up to a certain scale I think being completely off the cloud is never out of the question.
But if you're a small-ish team with ambitions of building something that may one day need to scale quickly to support a large influx of traffic / customers (generally unannounced / unplanned), I think it's insane not to have a cloud strategy / presence.
I have never seen research on it, but my hunch is that given that a large number of websites / services end up being impacted simultaneously, it's probably better to be down when everyone else is, than being the only one down.
In my experience, when there's a large-scale outage, that information is far more likely to get back to the end user a lot faster, and in a fashion where it may not even impact their perception of your business (i.e. maybe they originally experienced the issue on someone else's website / app).
But you can be certain that if you're down while everyone else is up, your potential and existing customers / users are far more likely to blame you and begin searching for alternatives.
And when your load balancers fail, you can be the one responsible for fixing them instead of Google. The internet is brittle. You should use servers at different clouds / data centers and use DNS failover for exactly this reason. Maybe even use two different cloud-based DNS providers.
The issue is not that the compute instances are unreliable - the issue is that the super-awesome-magic-dust is not reliable and "cloud" is not the way to use compute, rather it is the way to use the magic dust to get unicorns.
It was better for a period after the fork, but the only guy working on it hasn't maintained it that well... there are a significant number of bugs in GraphicsMagick that have been there for years.
Nope, it's not polite, it's the most convenient thing for you.
> It’s certainly the most professional thing.
That phrase sucks. Whenever I hear it, I always interpret it as "Someone did something I didn't agree with, but I can't actually express a legitimate reason why they shouldn't do that."
e.g.
Programmer who never has any customer contact doesn't wear a suit to work - oh how unprofessional.
Someone has their job outsourced, can't be arsed to fly out to Bumfuck Nowhere to train the new people how to do his job - oh how unprofessional.
Customers of a conference object to giving their money to pay to listen to someone who thinks that a Monarchy would be better than Democracy, and that "Traditional sex roles are basically a good idea" - oh, how unprofessional of the customers!
If you have a legitimate reason to disapprove of someone's actions, you should be able to express it a lot more clearly than just calling them 'unprofessional'.
For the record: Moldbug/Yarvin wrote my favourite article on the internet: http://unqualified-reservations.blogspot.co.uk/2009/07/wolfr... But his lack of ability to see/predict the real-world results of his actions is totally consistent with his lack of ability to see the results of the policies he thinks we should be following.
And yes, people may be using their emotions to make a decision, rather than segmenting things perfectly and thinking about each of them separately with a coldly logical basis. Humans, eh?
You can't give read-only OAuth access to private repos... it has to be read/write. Which means if you want to use online CI tools with those private repos, you've got to hope they don't either turn malicious or get hacked and have their keys copied.
1. Customer places an order.
2. SYN: Can I charge $30?
3. SYN/ACK: Yes.
4. ACK + SYN: Do it.
5. SYN/ACK: I am gonna do it.
6. ACK: I see that you're gonna do it.
"If that was their model, then at no point does a communication failure cause a charge to be in an ambiguous state. If I never get the message in #5, the customer is not charged. If I get the message in #5 and my response in #6 is not received, the customer is not charged."
Er... that doesn't appear to solve anything; it just pushes the error state down a level. There's still an ambiguous state where #6 is sent and not received.
The client thinks the charge is going to take place, and so believes the customer will be charged, but the bank never gets #6 and so never makes the charge. In other words, distributed atomic operations are hard.
He doesn't say that the change makes things perfect - he says "There's only one possible failure mode and not two, and that failure mode is the safer one"
"Because acknowledgement of message receipt can be lost as easily as the original message, a potentially infinite series of messages are required to come to consensus."
What you want isn't really a handshake, it's a commit, and no finite amount of messages will ensure agreement over a lossy network.
The slowness appears to be caused by continual directory scanning. For me at least, turning off the "Refresh when files change" option, and thus having to do View → Refresh or Command-R, made SourceTree zippy again.
And as I'm usually on a laptop, less continuous CPU usage is a good tradeoff against having to press refresh when I'm about to do something in SourceTree.
Does not work. I've also heard from other sources that turning this option on makes it faster, because it then uses native file-changed hooks instead of continuously polling the directory. For me both options are equally slow, and many forum posts say the same. SourceTree performance simply is crap and no fiddling with the options fixes it, nor should that fiddling be required. You can find bug tickets about performance issues dating back years, but nothing happens. Don't waste your time on SourceTree just because it looks shiny; use a tool that actually works.
Also, people are probably used to holding things that don't matter too much if dropped. If someone dropped a 10/20/30 kg bag of rice and it landed on my toes, the bag would just fold around the foot and it wouldn't be a problem.
If someone dropped a 30kg piece of steel and it landed corner first, there's a decent chance of it just going straight through the foot.
>> There is a second form of Dependency Injection that uses setters instead of constructor injection. Do not use this form.
>Why?
Not using setter injection eliminates a huge class of bugs that occur when people try to use objects that haven't had all of their dependencies injected yet:
$user = new RegisteredUser();
// Any code that touches $user here sees it without an email set
$user->setEmail($email);
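For contrast, here is a constructor-injection sketch of the same hypothetical class; with it, the half-built-object window above simply can't exist:

```php
<?php
// Sketch: with constructor injection, the object is fully
// initialized the moment `new` returns. (RegisteredUser and its
// email dependency are hypothetical, echoing the snippet above.)
class RegisteredUser
{
    private $email;

    public function __construct($email)
    {
        $this->email = $email;
    }

    public function getEmail()
    {
        return $this->email;
    }
}

// There is no window where $user exists but lacks its email.
$user = new RegisteredUser('alice@example.com');
echo $user->getEmail(), "\n";
```

Making the dependency settable only via the constructor also lets you validate it in one place, instead of checking for a missing value in every method.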