Another article that, by the third sentence, namedrops seven different AWS services they want to build their app on and then spends the rest of the argument pretending that ecosystem has zero in-built complexity. My friend, each one of those services has its own security model, limitations, footguns, and interoperability issues that you have to learn about independently. And you don't even mention any of the operational services like CloudWatch, CloudTrail, VPCs (even with serverless, you'll need them if you want your lambdas to hit certain other services efficiently), and so on. Those are not remotely free. Your "real developers" can't figure out how to write a YAML document, but you trust them to manage infrastructure-as-code for motherloving API Gateway? Absolutely wild.
Kubernetes and AWS are both complex, but one of them frontloads all the complexity because it's free software written by infrastructure dorks, and one of them backloads all of it because it's a business whose model involves minimizing barriers to entry so that they can spring all the real costs on you once you're locked in. That doesn't mean either one is a better or worse technical solution to whatever specific problem you have, but it does make it really easy to make the wrong choice if you don't know what you're getting into.
As for the last point, I don't discourage serverless solutions because they make less work for me, I do it because they make more. The moment the developers decide they want any kind of consistency across deployments, I'm stuck writing or rewriting a bunch of Terraform and CI/CD pipelines for people who didn't think very hard about what they were doing the first time. They got a PoC working in half an hour clicking around the AWS console, fell in love, and then handed it to someone else to figure out esoterica like "TLS termination" and "logs" and "not making all your S3 buckets public by accident."
Is it? If you're comparing with serverless, you'd almost have to compare against AWS EKS on Fargate, and with that there's a lot less operational overhead. You still have to learn ingress, logging, networking, etc., but you'd have to do that with serverless as well.
I'd argue that between AWS serverless and AWS EKS on Fargate, the initial complexity is about the same. But serverless is a lot harder to scale cost-efficiently without accidentally going wild with function or SNS loops.
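To make that loop failure mode concrete, here's a minimal Python sketch (the topic ARN and process() are placeholders of mine, not from any real system): a Lambda subscribed to an SNS topic that publishes its results back to the same topic re-triggers itself on every message it emits.

    import json
    import boto3

    sns = boto3.client("sns")
    # Placeholder ARN; assume this Lambda is subscribed to this same topic.
    TOPIC_ARN = "arn:aws:sns:us-east-1:123456789012:events"

    def process(message):
        # Stand-in for real business logic.
        return {"handled": message}

    def handler(event, context):
        for record in event["Records"]:
            message = json.loads(record["Sns"]["Message"])
            result = process(message)
            # Publishing back to the triggering topic re-invokes this
            # handler for every message it emits: an infinite loop that
            # "scales" beautifully until the bill arrives.
            sns.publish(TopicArn=TOPIC_ARN, Message=json.dumps(result))

The usual fixes are publishing results to a different topic or queue, or tagging messages so you can drop ones you've already handled.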
This is my experience too. We served fairly complex data requests, around 200,000 per day, for mobile and commercial users, using ECS Fargate and Aurora Postgres as our main technologies, and it coped fine.
We used Golang, optimised our queries and data structures, and rarely needed more than two of whatever the smallest ECS Fargate task size is; when we did, it scaled out and back in without any issues.
I realise that isn't at scale for some, but it's probably a relatively common point for a lot of use cases.
We put some effort into maintenance, mostly ensuring we kept on an upgrade path but barely touched the infrastructure code other than that.
One thing we did do was limit the number of other AWS services we adopted and keep it fairly basic. I've seen plenty of other teams go down the rabbit hole.
This is one thing that REALLY frustrates me about enterprise. So often the C-suite wants to push for cloud platforms (AWS, Azure, Snowflake), along with all the costs, "because they need it". It's this narrative of scale that drives these discussions - so few companies are genuinely dealing with 200,000 requests per day!
Genuine question - have you come across good/useful case studies/summaries of going cloud?
All I can really get is "pretty toys", "everyone is going cloud", "we don't need to have network engineers (we now need Azure network engineers)", etc.
I have been involved in several cloud migrations of existing systems. These were all successful, and a lot of the motivation was not having to own and manage the underlying servers, and/or needing to replace aged systems at the last possible moment.
Like most things, understanding the rationale and the desired outcome is key. One of the things you get from going to the cloud, as you point out, is a wider group of people who already know and understand key parts of the architecture/infrastructure choices.
Without wanting to be unkind, this author misunderstands the purpose of the exercise, which is to demonstrate to another human being that you can write code that meets the company's standards (as in, would pass PR review), not to ship a fully-functional product solo. The initial email makes it explicit that they don't care if the product even works ("Can use a fake backend") and that the specifications are deliberately open-ended. A little intuition and empathy can tell you that they probably are not going to spend many hours reviewing a complex submission, so the project should optimize for demonstrability of code quality, not completeness.
"I would like to know what kind of response I could expect..." - This is also established from the beginning: you can expect either a "Looks good, let's do some interviews" or a "Sorry, not interested," based on the code you submit. They can't narrow down the choices prior to your submission, because they're grading your submission, not your proposal document with an extensive list of details that they already told you they're mostly ambivalent to.
"So it is funny that my project is so weak, yet it made them update the guidelines to something stricter." - Main character syndrome. As someone who has been on the other side of these kinds of reviews, the far more likely explanation is that they kept getting submissions where the build instructions didn't work (which is not disqualifying by itself; the authors may not be on the same OS as the reviewers) and they got tired of spending time dealing with it.
Ultimately, the failure here was not a technical one but a social one. The author tried very hard to do the thing that it seemed like they were asking for, not the thing they actually wanted. The hiring manager's unwillingness to engage with the proposal doc was itself a form of communication that they were not interested in this level of detail; it was just an implicit one.
It's common for engineer types to want all that kind of communication to be explicit, and I have a lot of sympathy for those folks, but the reality is that teamwork is a skill, and the ability to suss out what a stakeholder actually wants, but isn't saying due to incomplete information and/or office politics, is a reasonable thing to select for. The ambiguity of the prompt is a feature, not a bug: it's kept the author away from a company whose communication style they're not compatible with.
(that said, this project does sound excessively complex for a take-home)
> A little intuition and empathy can tell you that they probably are not going to spend many hours reviewing a complex submission, so the project should optimize for demonstrability of code quality, not completeness.
> "I would like to know what kind of response I could expect..." - This is also established from the beginning: you can expect either a "Looks good, let's do some interviews" or a "Sorry, not interested," based on the code you submit. They can't narrow down the choices prior to your submission, because they're grading your submission, not your proposal document with an extensive list of details that they already told you they're mostly ambivalent to.
Absolutely spot on, and you ID it later with "Main character syndrome", but it is so very clear from this post's tone & content that OP expected a symmetric outlay of effort & focus from the company's side. They thought they were the main character.
That's a fundamental misunderstanding that seems to underlie a lot of their ultimate response: they feel as if they were entitled to much more effort from the company than they received. Such is often the case with strong entitlement: it's nearly impossible for the person suffering from it to see it.
> suss out what a stakeholder actually wants, but isn't saying due to incomplete information and/or office politics
What you are saying sounds a lot like this:
"it is common for engineers to communicate properly, but people above them prefer to be vague, because plausible deniability is a political advantage to them at the detriment of the engineer"
EDIT: I originally had an escalatory reply here but have thought better of it. Trying again but without being an asshole: yeah, what you describe is one way miscommunications can happen. They can also happen in other ways that may be the engineer's fault, or nobody's at all. They may not happen every day, just like how you won't be deploying a new service every day, but anyone who works in an office is going to need to be able to deal with them regardless. Highly functional organizations are so because they can recover from mistakes, not because they don't make them.
You can't programmatically detect novel BS any more than you can programmatically detect viruses or spam. You can only add the fingerprints of known badness into an ever-growing database. Viruses and spam are antagonistic to well-resourced institutions, and their databases get maintained reasonably well. LLM slop is being generated by those same well-resourced institutions. I don't think it fits into the same category as Nepenthes.
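To be concrete about why that only catches known badness, here's a minimal sketch of a fingerprint database in Python (function names are mine, and real scanners use fuzzier signatures than an exact hash):

    import hashlib

    known_bad: set[str] = set()  # the ever-growing database of fingerprints

    def fingerprint(content: bytes) -> str:
        return hashlib.sha256(content).hexdigest()

    def flag_as_bad(content: bytes) -> None:
        known_bad.add(fingerprint(content))

    def is_known_bad(content: bytes) -> bool:
        # Exact-match lookup: novel content, by definition, returns False.
        return fingerprint(content) in known_bad

However fuzzy you make the matching, the structural limitation is the same: the database only ever contains what someone has already caught.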
The whole point of systemic incentives is that there is no conspiracy. Nobody wants a DDOS and every large provider will have people genuinely working to avoid them. But every time there is an opportunity to allocate resources, the team that gets to frame their return on investment in terms of real dollars will always have an edge over one whose value is realized only in murky customer satisfaction projections. Over the lifetime of a company, the impact of these decisions will add up with no need for any of the individuals involved to even be aware of the dynamic, much less conspire to perpetuate it.
That's sound logic. In this specific case of capitalistic incentives, though, I haven't noticed it working out in a way that makes one more vulnerable to DDoS when one pays for bandwidth.
Impossible means it does not happen, not that it does not happen only when we look. Just because we can't see it doesn't mean that it doesn't happen. After all, as the comment I replied to pointed out, other galaxies can have different constants. We have to be humble and admit we just don't know.
This seems like a distinction without a difference, since we can never positively categorize any unobserved phenomenon as impossible (vs merely unobservable). To me, it seems ontologically cleaner to treat existence and observability as the same thing. shrug
Okay, fine, I'll come clean, I was just making an unfalsifiability joke. The original god-of-the-gapsy comment was the one that got me. Always just out of reach of our verifiability is the magic. Why not all the way out?
I'd call that reading poorly-supported enough to be incorrect. The report establishes that waypoint names are changed somewhat frequently for reasons that are largely left to mystery, but can include "a basketball player I like died." Additionally, "This is on fire right now" suggests some relevant context beyond "a single reporter sent us a nosy email." So while it may technically be true that the email preceded the name change in the chain of causality, its framing as the principal cause appears to be a narrative built of incomplete information. The linked NYT article also mentions pushback from pilots.
Think of all the algebra problems you got in school where the solution started with "get all the x's on the same side of the equation." You then applied a bunch of rules like "you can do anything to one side of the equals sign if you also do it to the other side" to reiterate the same abstract concept over and over, gradually altering the symbology until you wound up at something that looked like the quadratic formula or whatever. Then you were done, because you had transformed the representation (not the value) of x into something you knew how to work with.
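For instance (a made-up example, completing the square on x^2 + 6x - 7 = 0):

    x^2 + 6x - 7 = 0          start
    x^2 + 6x     = 7          add 7 to both sides
    x^2 + 6x + 9 = 16         add 9 to both sides
    (x + 3)^2    = 16         rewrite the left side
    x + 3        = +/-4       take the square root of both sides
    x            = 1 or -7    subtract 3 from both sides

Every step applies the same "do it to both sides" rule; only the representation changes, never the underlying value.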
People don't uncover new mathematics by pushing formal rules and symbols around, at least not for the most part. They do so first with intuition and vague belief. Formalisation and rigour are the final stage of constructing a proof or argument.
Yeah, but can the AI in question turn intuition into statements, then turn those into symbols, then work with them until something breaks, then revise the system, etc., quite like a human?
There's no reason the tax needs to directly reflect the environmental impact. We figure out an amount that is enough to change corporate behavior without bankrupting them, maybe with some kind of sliding scale to put more responsibility on larger businesses who would otherwise benefit from regulatory capture, throw in some exceptions for the aforementioned medical devices, etc. "Pricing the externalities in" is a nice political justification but in reality this kind of thing happens because we've already decided that plastics are significantly worse than the alternative and we want to incentivize change.
Regarding consumers shouldering the cost - well, yeah, regulation drives prices up; even my liberal self agrees that that's broadly true. But those same consumers will be shouldering the cost of an environment permeated by toxic microplastics, which we have growing reason to believe will be a greater burden than more expensive consumer goods.
And when they jack up prices, we use the tax revenue to subsidize the poor, so only the well-off pay for that assholery. We can do this for carbon in all its forms.
For someone so skeptical, why buy the 'cost increases will be passed on to the consumer' BS? Clearly that's not true.
First, price increases depend on elasticity. I'm guessing that ketchup demand is pretty elastic; it's not diabetes medication or higher education.
Also, we can assume Heinz, being sophisticated, has already priced it for the highest possible marginal return; there's not necessarily room for increasing the price without reducing return (by driving down sales).
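Made-up numbers to illustrate: say Heinz sells 100 units at $4.00 and ketchup's price elasticity is around -2, so a 10% price bump costs roughly 20% of unit sales.

    price:   $4.00 -> $4.40   (+10%)
    units:   100   -> 80      (-20%)
    revenue: $400  -> $352

If $4.00 was already the profit-maximizing price, any nearby price does worse by construction, which is why a cost increase can't simply be passed through one-for-one.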