Hacker News

It’s not just portability that’s an issue with lambda. It’s also churn.

Running on Lambda, one day you’ll get an email saying “we’re deprecating node version x.x, so be sure to upgrade your app by June 27th, when we pull the plug.” Now you have to pull the team back together and make a bunch of changes to an old, working app just to meet some 3rd party’s arbitrary timeframe.

If you’re running node x.x on your own backend, you can choose to simply keep doing so for as long as you want, regardless of what version the cool kids are using these days.

That’s the issue I find myself up against more often when relying on Other People’s Infrastructure.



It's not about using what the cool kids use these days. I can't stress enough that unmaintained software should not run in production.

This way you have a good argument towards management and if you do it regularly or even plan it in ahead of time it's usually not much work.

During a product planning meeting: "Dear manager, for the next weeks/sprint the team needs X days to upgrade the software to version x.x.x otherwise it will stop working"


I guess we have different philosophies then. My take is that software in production should not require maintenance to remain in production.

Imagine a world where you didn't need to spend a whole week every year, per project, just keeping your existing software alive. Imagine not having to put off development of the stuff you want to build to accommodate technical debt introduced by 3rd parties.

That's the reality in Windows-land, at least. And I seem to remember it being like that in the past on the Unix side too.


Your vision is only workable for software for which there are no security concerns. This might improve to the extent industry slowly moves away from utterly irresponsible technologies like memory-unsafe languages and brain damaged parsing and templating approaches and more or less the whole web stack. I wouldn't hold my breath though. And even software that's not cavalierly insecure will have security flaws, albeit at a lower rate.


Keep in mind that you're arguing against an existence disproof. The Microsoft stack, for example, is a pretty big target for attack, and has seen its share of security issues over the years.

But developers don't need to make any code changes or redeploy anything to mitigate those security issues. It all happens through patches on the server, 99% of which happen automatically via windows update.


Yes, Microsoft is good at backward compatibility.

So many open source hackers do not know the basic techniques for backwards compatibility (e.g. don't rename a function, just introduce a new one, leaving the old available).

I'm spending very significant effort maintaining an OpenSSL wrapper because OpenSSL constantly removes / renames functions. I hoped to branch based on version number, but they even changed the name of the function that returns the version number.

And that's only one example; lots of people make such mistakes, costing their users huge effort.

And then there's the popular semantic versioning myth: that you just need to bump the major version number when you change the API incompatibly, and that saves your clients from trouble.


> So many open source hackers do not know the basic techniques for backwards compatibility (e.g. don't rename a function, just introduce a new one, leaving the old available).

I'd dispute this, or at least I think this doesn't capture the whole picture. Microsoft makes money with backwards compatibility and can afford to spend significant effort on the ever-growing burden of remaining backwards-compatible indefinitely. Open source volunteers are working with much more limited resources, and I think it comes down much more to intentional tradeoffs between ease of maintenance and maintaining backwards compatibility.

If you have a low single-digit number of long-term contributors, maybe the biggest priority to keep your project moving at all is to avoid scaring off new contributors or burning out old contributors, and that might require making frequent breaking changes to get rid of unnecessary complexity asap. Characterizing that as "they don't know that you can just introduce a new function" doesn't seem like it yields instructive insights.


Yes, this is exactly the wrong reply I often hear when complaining about backwards compatibility.

The mistake here is that in 99% of cases backwards compatibility costs nothing - no effort, no complexity.

Given two equally costly choices, the people breaking backwards compatibility simply make the wrong one.

> maybe the biggest priority to keep your project moving at all

When you rename the function SSLeay to OpenSSL_version_num, where are you moving? What does it give your project?

Ok, if you like the new name so much, what prevents you from keeping the old symbol available?

        unsigned long (*SSLeay)(void) = OpenSSL_version_num;
(Sorry for naming OpenSSL here, it's just one of many examples)

When developers do such things, they break other open source libraries, which in turn break others. It's a hugely destructive effect on the ecosystem. It will take many man-days of work for the dependent systems to recover. And it may take years for the maintainers to find those free days to spend on recovery, and some projects will never recover (e.g. no active maintainer).

With the lift of a finger you can save humanity significant pain and effort. If you've decided to spend your effort on open source, keeping backwards compatibility by making the right choice in a trivial situation will make your contribution an order of magnitude bigger and more efficient.

So, I believe people don't know what they are doing when they introduce breaking changes.


I've seen developers introduce breaking changes, then find projects depending on them and submit patches. So they really have good intentions and spend more of their volunteer open source energy than necessary. And when the other project cannot review and merge their patch (no maintainers), they get disappointed.

So please, just keep the old function name. It will be cheaper for you and for everyone.


An unmaintained duplicate way of doing things is a mistake waiting to happen.


I was just thinking this, but I guess we're really just talking about API changes. Everything under the API can still get rewritten, no?


> Microsoft makes money with backwards compatibility

That's a good way of putting it, and it gets to a key difference between open source and proprietary software.

In the open source world where a million eyes make all bugs shallow, developer hours are thought of as free. So if you change something it's no big deal because all the developers using your thing can simply change their code to accommodate it. It doesn't matter how many devs or how many hours, since the total cost all works out to zero.

In the proprietary world, devs value their time in dollars. The reason they're using your thing is because it's saving them time. They paid good money because that's what your thing does. Save time. Get them shipped. As a vendor, you're smart enough to realize that if you introduce a change that stops saving your customers time or, worse, costs them time or, god forbid, un-ships their product, they'll do their own mental math and drop you for somebody who understands what they're selling.

In the end, all we're talking about here is the end product of this disconnect in mindset.


Microsoft also isn't your average developer that imports libraries from strangers.

Every time I run an audit (which is monthly) I see at least a dozen advisories in NPM packages we use. Sure, some of them don't apply to our usage, and others can't really impact us, but occasionally there is one we should be concerned about.

We server admins can push buttons to upgrade, but that doesn't mean developer code will keep working.

Many developers live in this world where they think server admins will protect their app... but we're more likely to break things by force-upgrading your neglected packages.


> Keep in mind that you're arguing against an existence disproof. The Microsoft stack, for example, is a pretty big target for attack, and has seen its share of security issues over the years.

> But developers don't need to make any code changes or redeploy anything to mitigate those security issues.

I don't believe it. Most security issues are not just an implementation issue in the framework but an API that is fundamentally insecure and cannot be used safely. Most likely those developers' programs are rife with security issues that will never be fixed.


Which is fine, as long as they are understood and mitigated against. If your security policy consists entirely of "keep software up to date", you don't have a security policy.


In practice trying to "understand and mitigate against" vulnerabilities inherent in older APIs is likely to be more costly and less effective than keeping software up to date.


If there is a problem in an older API, it's probably time to update. That's understanding and mitigation.

The discussion is about the difference between updates when there's a valid reason and updates that are imposed by cloud providers, nobody advocates sticking with old software versions.


> nobody advocates sticking with old software versions.

In my experience that's what any policy that doesn't include staying up to date actually boils down to in practice. Auditing old versions is never going to be a priority for anyone, and any reason not to upgrade today is an even better reason not to upgrade tomorrow, so "understanding and mitigation" tends to actually become "leave it alone and hope it doesn't break".


In practice you don't mitigate against specific vulnerabilities at all, you mitigate against the very concept of a vulnerability. It would be foolish to assume that any given piece of software is free from vulnerabilities just because it is up to date, so you ask yourself "what if this is compromised?" and work from the premise that it can and will be.


Sounds clever, but what does it actually translate to in practice? And does it work?


Let's say I have a firewall. If we assume someone can compromise the firewall, what does that mean for us? Can we detect that kind of activity? What additional barriers can we put between someone with that access and other things we care about? What kind of information can they gather from that foothold? Can we make that information less useful? etc.

You think about these things in layers. If X, then Y, and if Y, then Z, and if X, Y, and Z all fail, do we just accept that some problems are more expensive to prevent than they're worth, or get some kind of insurance?


I've found that kind of approach to be low security in practice, because it means you don't have a clear "security boundary". So the firewall is porous but that's considered ok because our applications are probably secure, and the applications have security holes but that's considered ok because the firewall is probably secure, and actually it turns out nothing is secure and everyone thought it was someone else's responsibility.


I think you're projecting. The whole point is reminding yourself that your firewall probably isn't as secure as you think it is, just like everything else in your network. This practice doesn't mean ignoring the simple things, it just means thinking about security holistically, and more importantly: in the context of actually getting crap done. Regardless, anyone who thinks keeping their stuff up to date is some kind of panacea is a fool.


Personal attacks are for those who know they've lost the argument.

Keeping stuff up to date is exactly the kind of "simple thing" that no amount of sophistry will replace; in practice it has a better cost/benefit ratio than any amount of "thinking holistically". Those who only keep their things up to date and do nothing else may be foolish, but those who don't keep their things up to date are even more foolish.


> But developers don't need to make any code changes or redeploy anything to mitigate those security issues

Right, so all deployed ActiveX-based software magically became both secure and continued working as before after everyone installed the latest Windows patches?

The trivial patching only works for security issues due to implementation defects, not design defects. If you have a design defect, your choice is typically either breaking working apps or usage patterns, or breaking your users' security. Microsoft has done both (e.g. ActiveX blocking vs. the continued availability of CSV injection) and both have negatively affected millions.


... because it is maintained?


There are no changes needed on application code side.


What definition of maintained are you using?

If they're doing security patches and bug fixes it's a maintained codebase.


We're using the definition a few notches upthread: "Dear manager, for the next weeks/sprint the team needs X days to upgrade the software to version x.x.x otherwise it will stop working"

As opposed to:

2011: deploy website, turn on windows update

2011-2019: lead life as normal

2019: website is up and running, serving webpages, and not part of a botnet.

That's reality today, and if it helps to refer to it as "maintained", that's fine. The point is that it's preferable to the alternative.


I think that the parent commenter is referencing node 4.3 being past EOL and therefore unmaintained software unfit for prod, unlike the MS stack, which is receiving patches.



node, not .net


I was referring to comments that MS is good at backwards compatibility and "if you write an application, it will run forever", and I pointed out that MS also breaks backwards compatibility when it comes to languages.


Installing security patches for a Ruby stack takes a full code coverage test suite, days of planning, and even more time to update code for breaking changes.

Installing security patches for a Microsoft stack requires turning on windows update.

There's a BIG difference. Once you write your MSFT stack app, it's done. Microsoft apps written decades ago still work today with no code changes.


That's not true. Try running anything with Visual FoxPro. There are tons of programs that ran on XP and 7 and don't on 10.


What if the new node version fixes a bug / issue / CVE that doesn't concern the software?

Is it reasonable to postpone the upgrade for later?

Example: the software uses Python requests. A new version fixes CVE-2018-18074, about the Authorization header, but you don't use this header, for sure. Is it reasonable to upgrade a little bit later?


Depends on how mature your security team/process is. Can you spend time tracking separately announced bugs and making a case-by-case decision for each CVE? How much would you trust that review? Do you review dependencies which may trigger the same issue?

Or is it going to take less time/effort to upgrade each time?

Or is the code so trivial you can immediately make the decision to skip that patch?

There's no perfect answer - you have to decide what's reasonable for your teams.


The cool thing about serverless infrastructure is that it does not really concern you. As long as you are on a maintained version of the underlying platform your provider will take care of the updates.

If your software runs on an unmaintained platform there won't be any security fixes, and that's why Amazon forces you to upgrade at some point.


AWS, at least, didn't make any promises for updates for serverless Lambda that I can see in their docs.


Right, because it's not relevant to you. You don't care about the underlying infrastructure in terms of security; Amazon does that for you.


Security wise you should of course be taking patches, however those patches should not be breaking functionality.


You are looking to save yourself a week of time a year, and then 3 years later, for some reason or another, you will HAVE to upgrade, and good luck making that change when the world has moved past you.


You're describing traditional sysadmin vs. devops. Devops means repeating the stress points so that they are no longer stressful, and automating as much as possible. I like it way better than the classic "don't touch this, it's working and the last guy that knew how to fix it is gone."


You don't need maintenance to remain in production; you need maintenance to reduce the tech debt in the infrastructure you decided to use (code, frameworks, third party libraries, security issues). Even vanilla languages get upgraded every X months/years. Not maintaining the code is just a bad gift you are giving to your (or someone's) future. I have been on upgrades of perfectly working software written in an old version of Java (almost version 4) that were needed to add new features; it took a hell of a long time and I never saw it working at the end. I don't think it's a safe choice to "let it be" when it comes to software.


BSD still loves you long time


Imagine a world where new exploits and hacks didn't come along every day and compromise the systems your app sits on because you didn't keep up with patches and upgrades...


That also describes my (very small in scope) PHP and Javascript things. They all still work, and I love that to bits. Admittedly, the price of that probably is keeping it simple, but if I needed to update it all the time just to keep it from not sinking under its own weight or the ground shifting beneath it, that would be no fun for me.


I completely agree with this. Starting a new position and coming into infra running 5-year-old software is not fun; it's generally neglected and full of deprecated features/code that is improved in later versions. Not to mention the security risk running old software can often create.


Isn't that always the case though? I mean, outside of e.g. lambdas or other platforms / runtimes as a service?

I mean a few years ago there was a huge security flaw in Apache Struts, whose impact was big because it had been used in a lot of older applications - meaning a LOT of people had to be summoned to work on old codebases to fix this issue.

The problem isn't a changing runtime - even if you self-host it you should make sure to keep that updated regularly.


> if you self-host it you should make sure to keep that updated regularly.

Mild disagreement: My philosophy is that you should choose technologies that aren't likely to introduce breaking changes in the future.

As an example, I have sites that were built using ASP.NET version 1.1 that have survived to this day with nothing more than Windows Update on the server and the occasional version bump in the project config when adding a feature that needed the latest and greatest.

Compare that to the poor soul who decided to build on top of React when it first came out, and has been rewarded by getting to rewrite his entire application four times in as many years.

To return to the point, rather than rewriting around breaking changes from Node x.x to Node y.a, I'd be shopping around for the LTS version of Node x that I could keep the thing running on without intervention from my team.


> As an example, I have sites that were built using ASP.NET version 1.1 that have survived to this day with nothing more than Windows Update

You are right; but in my experience those ASP applications also had security holes (CSRF etc) that were never patched. They ultimately either became botnets or faded away when the corp simply faded away.

A business that can't afford to pay for cleaning up its business applications is likely to be unable to pay for general upkeep as well. It is simply past the point of being a viable business and is either in limbo or in the grave!

See the "maintenance free" approach to software as a canary in the coal mine, and run away as fast as possible.


That's not the reality I know. I have apps written and compiled on windows xp that still work to this day.

If you work on any non-msft stack I know of, you're constantly updating code for any, sometimes even minor version upgrade.


The reality I know is that the people who wrote their apps on Windows XP left the company some years ago, but the apps live on: perfectly functioning, but unable to make requests via anything more secure than TLS1.0 and forcing other people to run servers that continue to accept TLS1.0 years after it was deprecated by everyone else.

https://blog.pcisecuritystandards.org/migrating-from-ssl-and...

(That's the PCI announcement in 2015 that despite everyone knowing about problems with TLS1.0, they would continue to allow it through 2018 because of all the companies who deferred their technical debt in the manner you seem to be advocating.)


That happened to me. All I had to do was patch Windows Server. I didn't have to change or even recompile my code.

I know it's hard to believe it's that simple, but it is.


So all you had to do was:

1. Know what to do.

2. Have approval to do it.

3. Do it.

... which is to say, maintenance. The fact that maintenance is simple and/or easy doesn't mean it happens by itself.


> The fact that maintenance is simple and/or easy doesn't mean it happens by itself.

Yes, there will always be _some_ maintenance. The point is it should be as simple and easy as possible.


I'm telling you that the entire infrastructure of the world is held back by companies who don't do simple maintenance, and your response is to tell me it should be easy to do maintenance.

Someone isn't getting the point, and I don't think I can make it any clearer.


Or is your point that we're held back by people who don't do simple maintenance, and that trying to make maintenance simpler, while it might help, won't solve 100% of the problem?


Are you saying it's better if maintenance requires a lot of work, to encourage people to do more maintenance?

If we made things harder to upgrade would that encourage more people to upgrade?


Since late last year, you can use old versions if you want to. [0] The provider doesn't enforce the runtime anymore. But I don't think it's the provider issue in the first place. At some point "node x.x" will be EOL and you won't get an email. You'll just stop getting maintenance patches.

[0] https://aws.amazon.com/blogs/aws/new-for-aws-lambda-use-any-...


Good point, but no. If you have architected your apps properly, decommissioning or sunsetting services or individual components should already have been designed and planned.

I know, I know... It's almost never the case.


You really just don't have to think like this on the msft stack. I'm so glad I chose msft ASP 20 years ago, instead of php, or RoR or python, or node or any of the myriad other stacks that have come and gone since.


Do you ever need to update your servers to a newer version, like say 2016 or now 2019? There are definitely issues using, say, old VB6 libraries when you need to upgrade your servers from 2008 to 2019. Not to mention that with those old technologies, if you do have a new feature or change, you end up with an unmaintainable mess. I am an MSFT stack programmer, but to claim there are no issues and you just need to patch a server is flat-out wrong.


They didn’t pull the plug. You can’t create or update lambdas with older versions of Node, but if you have existing code, it won’t just stop working.


I have written more than a few applications on lambda for several years, and I have gotten this email exactly once, for one function.


The issue you have isn't lambda, it's using an *aaS that someone else is hosting.


> If you’re running node x.x on your own backend, you can choose to simply keep doing so for as long as you want, regardless of what version the cool kids are using these days.

What do you mean by your own backend? Your own physical hardware, your own rented hardware, your own EC2 box, your own Fargate container?

What if you get an office fire, a hardware failure, a network outage, a required security update, your third party company goes out of business, etc.?

There's no such thing as code that doesn't need to be maintained. Lambdas (and competitors) probably require the least maintenance of the lot.


It’s not the quantity of maintenance that’s at issue. It’s the lack of ability to schedule that maintenance.

GP is talking about unplanned maintenance, which is a huge problem in many industries, like air travel (any transportation, really), or software.


> If you’re running node x.x on your own backend, you can choose to simply keep doing so for as long as you want,

Until you get audited, and that raises a flag, and now you have to deal with it.


Not a problem. I'll tell TypeScript to output code matching the ES version for that version of node.
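A minimal sketch of what that might look like in tsconfig.json, assuming the pinned Lambda runtime is an older Node that supports ES2017 syntax (the exact target value depends on your Node version):

```json
{
  "compilerOptions": {
    "target": "ES2017",   // emit only syntax the older runtime understands
    "module": "commonjs",
    "lib": ["ES2017"]     // restrict type checking to that era's APIs too
  }
}
```

One caveat: `target` only downlevels syntax; it doesn't polyfill missing runtime APIs, so library code that calls newer built-ins can still break on the old runtime.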


Sure it's a problem. It's still extra effort, whether or not your stack makes it easy. It still has to be done.


That's just laziness on your part though.

Using an old version of Node is just going to leave you with worse performance and potentially security holes.


Given the nature of lambdas specifically:

- What would the security issue be in outdated lambda code?

- Wouldn’t the performance of the code be equal to the performance when you first deployed?



