Think of some common sense physical analogies: a hidden underground bunker is much less likely to be robbed than a safe full of valuables in your front yard. A bicycle buried deeply in bushes is less likely to be stolen than one locked to a bike rack.
Without obscurity, it is straightforward to know exactly what resources will be required to break something- you can look for a flaw that makes it easy, and/or calculate exactly how much brute force is required.
When you add well-executed obscurity on top of an already strong system, it becomes nearly impossible to even identify that there is something to attack, let alone begin to form a plan to do so.
Combining both approaches is best, but in most cases I think simple obscurity is more powerful and requires fewer resources than non-obscure, strength-based security.
I’ve managed public servers that stayed uncompromised without security updates for a decade or longer using obscurity. A box running an archaic old Unix OS of some type- one that does not respond to pings or other queries, runs services on non-standard ports, and blocks routes to hosts that even attempt scanning the standard ports- will not be compromised. Obviously, also using a secure OS with updates on top of these techniques is better overall.
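To make the route-blocking trick concrete, here's a minimal hypothetical sketch- it assumes OpenBSD pf with a persistent table named "scanners" that a pf.conf rule already drops, and the decoy ports are chosen purely for illustration:

```python
# Hypothetical scan-trap daemon (needs root: low ports + pfctl).
# Assumes pf.conf already contains something like:
#   table <scanners> persist
#   block quick from <scanners>
import socket
import subprocess
import threading

DECOY_PORTS = [23, 80, 443]   # "standard" ports this box never really uses

def trap(port: int) -> None:
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    srv.bind(("0.0.0.0", port))
    srv.listen()
    while True:
        conn, (addr, _) = srv.accept()
        conn.close()
        # One touch of a decoy port is enough: drop all future traffic
        # from that source by adding it to the blocked pf table.
        subprocess.run(["pfctl", "-t", "scanners", "-T", "add", addr])

for p in DECOY_PORTS:
    threading.Thread(target=trap, args=(p,), daemon=True).start()
threading.Event().wait()   # keep the trap threads running forever
```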
I think the scenario where security through obscurity fails is when the end user relies on guarantees that don't exist.
Take Intel's Management Engine, for example: it was obscured very well and wasn't found for years. Eventually people did find it, and you can't help but wonder how long it took bad actors with deep pockets to find it first. It's this obscured cubbyhole in your CPU, but if someone could exploit it, that would be really difficult to discover because of Intel's secrecy on top of the feature.
It seems like people are really talking about different things when they say "obscurity." Some are referring to badly designed, weak systems, where secrecy and marketing hype are used to attempt to conceal the flaws. Others, like my comment above, are talking about systems carefully engineered to have no predictable or identifiable attack surfaces- things like OpenBSD's memory allocation randomization, or the ancient method of simply hiding physical valuables well and never mentioning them to anyone. I’ve found that when it is impossible for an external bad actor to even tell what OS and services my server is running- or in some cases to even positively confirm that it really exists- they can’t even begin to form a plan to compromise it.
> where secrecy and marketing hype are used to attempt to conceal the flaws.
That's literally the practical basis of security through obscurity.
> Others, like my comment above, are talking about systems carefully engineered to have no predictable or identifiable attack surfaces- things like OpenBSD's memory allocation randomization,
That's exactly the opposite of 'security through obscurity' - you're literally talking about a completely open security mitigation.
> I’ve found that when it is impossible for an external bad actor to even tell what OS and services my server is running- or in some cases to even positively confirm that it really exists- they can’t even begin to form a plan to compromise it.
If one of your mitigations is 'make the server inaccessible via the public internet', for example - that is not security through obscurity - it's a mitigation which can be publicly disclosed and remain effective for the attack vectors it protects against. I don't think you quite understand what 'security through obscurity'[0] means. 'Security through obscurity' in this case would be you running a closed third-party firewall on this server (or some other closed software, like macOS for example) which has 100 different backdoors in it - the exact opposite of actual security.
You're misrepresenting my examples by shifting the context, and quoting a Wikipedia page that literally gives, at the very top of the article, two of the main examples I mentioned as key examples of security through obscurity: "Examples of this practice include disguising sensitive information within commonplace items, like a piece of paper in a book, or altering digital footprints, such as spoofing a web browser's version number"
If you're not understanding how memory allocation randomization is security through obscurity, you are not understanding what the concept entails at its core. It does share a common mechanism with, e.g., using a closed third-party firewall: in both cases direct flaws exist that could be overcome by methods other than brute force, yet identifying and characterizing them well enough to actually exploit them is non-trivial.
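To make that concrete, a tiny sketch (assumptions: CPython, where `id()` happens to return an object's memory address, running on an OS with ASLR or a randomizing allocator):

```python
# Run this script twice: with address randomization the printed addresses
# differ between runs, so an attacker cannot precompute where a given
# allocation will land - that unpredictability is the obscurity at work.
buf = bytearray(4096)   # a fresh heap allocation
print(hex(id(buf)))     # CPython detail: id() is the object's address
```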
The flaw in your firewall example is not the use of obscurity itself, but: (1) not also using traditional methods of hardening on top of it - obscurity should be an extra layer, not the only layer; and (2) it's probably not really very obscure, e.g. if an external person could infer what software you are using by interacting remotely, and then obtain their own commercial copy to investigate for flaws.
> You're misrepresenting my examples by shifting the context,
Specific example of where I did this?
> literally gives, at the very top of the article, two of the main examples I mentioned as key examples of security through obscurity: "Examples of this practice include disguising sensitive information within commonplace items, like a piece of paper in a book, or altering digital footprints, such as spoofing a web browser's version number"
I mean, I don't disagree that what you said about changing port numbers, for example, is security through obscurity. My point is that this is not any kind of defense against a capable and motivated attacker. Other examples like the OpenBSD mitigation you mentioned are very obviously not security through obscurity though.
> If you're not understanding how memory allocation randomization is security through obscurity, you are not understanding what the concept entails at its core.
No, you still don't understand what 'security through obscurity' means. If I use an open asymmetric-key algorithm, the fact that an attacker can't guess my private key does not make it 'security through obscurity'; it's the obscuring of the actual crypto algorithm that would make it 'security through obscurity'. Completely open security mitigations like the one you mentioned have nothing to do with security through obscurity.
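To spell that out with a sketch (using the Python `cryptography` package purely as an illustration): every detail of the algorithm below is public, and the security survives that, because the private key is the only secret.

```python
# Sketch: an open, publicly specified algorithm (ECDSA over the P-256
# curve). Nothing about the design is hidden - the private key is the
# only secret, and that is not "obscurity", it's the security model.
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import ec

private_key = ec.generate_private_key(ec.SECP256R1())   # the ONLY secret
public_key = private_key.public_key()                   # safe to publish

message = b"open design, secret key"
signature = private_key.sign(message, ec.ECDSA(hashes.SHA256()))

# Anyone can verify with full knowledge of the algorithm and the public
# key; raises InvalidSignature if message or signature were tampered with.
public_key.verify(signature, message, ec.ECDSA(hashes.SHA256()))
```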
> The flaw in your firewall example is not the use of obscurity itself, but: (1) not also using traditional methods of hardening on top of it
Sooo... you think adding more obscurity on top of a closed, insecure piece of software is going to make it secure?
> if an external person could infer what software you are using by interacting remotely,
There are soooo many ways for a capable and motivated attacker to figure out what software you're running. Trying to obscure that fact is not any kind of security mitigation whatsoever. Especially when you're dealing with completely closed software/hardware - all of your attempts at concealment are mostly moot - you have no idea what kind of signatures/signals that closed system exposes, you have no idea what backdoors exist, you have no idea what kind of vulnerable dependencies it has that expose their own signatures and have their own backdoors. Your suggestion is really laughable.
> not also using traditional methods of hardening on top of it
What 'traditional methods' do you use to 'harden' closed software/hardware? You literally have no idea what security holes and backdoors exist.
> if an external person could infer what software you are using by interacting remotely, and then obtain their own commercial copy to investigate for flaws.
Uhh yeah, now you're literally bringing up one of the most common arguments for why security through obscurity is bullshit. During WW1/WW2, security through obscurity was common in crypto - they relied on hiding their crypto algos instead of designing ones that would be secure even when publicly known. What happened was that enough messages, crypto machines, etc. were recovered by the other side to reverse these obscured algos and break them - since then crypto has pretty much entirely moved away from security through obscurity.
You are operating on a false dichotomy- that the current best practices of cryptographic security, code auditing, etc. are somehow mutually exclusive with obscurity- and then arguing against obscurity by arguing for other good practices. They are absolutely complementary, and implementing a real-world secure system will layer both: one starts with a mathematically secure, heavily publicly audited system, and adds obscurity in the real-world deployment of it.
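A hypothetical illustration of that layering is classic port knocking: the knock sequence is pure obscurity, while the service behind it still relies on real key-based authentication. A minimal client-side sketch- the sequence and host name are made up and would be agreed out of band:

```python
# Hypothetical port-knocking client. The server watches for connection
# attempts on KNOCK_SEQUENCE (all closed ports) and only then exposes
# its SSH port; authentication still rests on keys, not on the knock.
import socket
import time

KNOCK_SEQUENCE = [7000, 8000, 9000]   # made-up sequence, shared out of band

def knock(host: str) -> None:
    for port in KNOCK_SEQUENCE:
        s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        s.settimeout(0.5)
        try:
            s.connect((host, port))   # ports are closed; the SYN is the signal
        except OSError:
            pass                      # refusal/timeout is expected
        finally:
            s.close()
        time.sleep(0.2)               # small gap so knocks arrive in order

knock("server.example")
# Now connect over SSH as usual; the knock only gated reachability -
# the actual security still comes from SSH key authentication.
```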
If there are advantages to a closed-source system, they are not in situations where the source is closed to you and contains bugs, but where it is closed to the attacker. If you have the resources and ability to, for example, develop your own internally used but externally unknown- yet still heavily audited and cryptographically secure- system, it is going to be better than an open source tool.
> They are absolutely complementary, and implementing a real-world secure system will layer both: one starts with a mathematically secure, heavily publicly audited system, and adds obscurity in the real-world deployment of it.
Ok, let's start with a 'mathematically secure, heavily publicly audited system' - let's take ECDSA, for example - how will you use obscurity to improve security?
> If you have the resources and ability to, for example, develop your own internally used but externally unknown- yet still heavily audited and cryptographically secure- system, it is going to be better than an open source tool.
Literally all of the evidence we have throughout the history of the planet says you're 100% wrong.
> Literally all of the evidence we have throughout the history of the planet says you're 100% wrong
You are so sure you’re right that you are not really thinking about what I am saying, and how it applies to real-world situations- especially high-stakes, life-or-death situations.
I am satisfied that your perspective makes the most sense for low-stakes, broad deployments like software releases, but not for one-off, high-stakes systems.
For things like ECDSA, as with anything else, you implement obscurity on a one-off basis tailored to the specific use case- know your opponent and make them think you are using an entirely different method and protocol, one they’ve already figured out and compromised. Hide the actual channel of communication so they are unable to notice it exists, and over that channel you simply use ECDSA properly.
Oh, and store your real private key in the geometric design of a giant mural in your living room, while your house and computers are littered with thousands of wrong private keys on ancient media that is expensive to extract. Subscribe to and own every key wallet product or device, but actually use none of them.
> You are so sure you’re right that you are not really thinking about what I am saying, and how it applies to real-world situations- especially high-stakes, life-or-death situations.
Nah, you're just saying a lot of stuff that's factually incorrect and just terrible advice overall. You lack understanding of what you're talking about. And the stakes are pretty irrelevant to whether a system is secure or not.
> For things like ECDSA, as with anything else, you implement obscurity on a one-off basis tailored to the specific use case- know your opponent and make them think you are using an entirely different method and protocol, one they’ve already figured out and compromised.
You're going to make ECDSA more secure by making people think you're not using ECDSA? That makes so little sense in so many ways. Ahahahahaha.
I very well may be wrong, but if so, you are not aware of how, and I will need to find someone else to explain it to me. I’ve been interested for a while in having a serious debate with someone who understands and advocates for the position you claim to have- but if you understood it, you would be able to meaningfully defend it rather than resorting to dismissive statements.
> Think of some common sense physical analogies: a hidden underground bunker is much less likely to be robbed than a safe full of valuables in your front yard. A bicycle buried deeply in bushes is less likely to be stolen than one locked to a bike rack.
That's not what security through obscurity is. If you want to make an honest comparison - which is more likely to be secure: an open system built on the latest, most secure public standards, or a closed system built on (unknown)? The open system is going to be more secure 99.999% of the time.
> Without obscurity, it is straightforward to know exactly what resources will be required to break something- you can look for a flaw that makes it easy, and/or calculate exactly how much brute force is required.
The whole point of not relying on obscurity is that you design an actually secure system even assuming the attacker has a full understanding of your system. That is how virtually all modern crypto that's actually secure works. Knowing your system is insecure and trying to hide that via obscurity is not security.
> it becomes nearly impossible to even identify that there is something to attack
That's called wishful thinking. You're conflating 'system that nobody knows about or wants to attack' with 'system that someone actually wants to attack and is defending via obscurity of its design'. If you want to make an honest comparison you have to assume the attacker knows about the system and has some motive for attacking it.
> but in most cases I think simple obscurity is more powerful and requires fewer resources than non-obscure, strength-based security.
Except obscurity doesn't actually give you any security.
> I’ve managed public servers that stayed uncompromised without security updates for a decade or longer using obscurity. A box running an archaic old Unix OS of some type- one that does not respond to pings or other queries, runs services on non-standard ports, and blocks routes to hosts that even attempt scanning the standard ports- will not be compromised.
That's a laughably weak level of security and does approximately zero against a capable and motivated attacker. Also, your claim of 'stayed uncompromised' is seemingly based on nothing.
You are begging the question- insisting that obscurity isn't security by definition, instead of actually discussing its strengths and weaknesses. I didn't just "say so"- I gave specific real-world examples, and explained the underlying theory: that being unable to plan or quantify what is required to compromise a system makes it much harder to attack.
Instead of simply labeling something you don't like as "laughably weak", as in your last example- do you have any specific reasoning? Again, I'd like to emphasize that I don't advocate obscurity in place of other methods, but on top of them.
Let's try some silly, extreme examples of obscurity. Say I put up a server running OpenBSD (because it is less popular)- obviously a recent version with all security updates- and it has only one open port: SSH, reconfigured to run on port 64234; attempting to scan any other port immediately and permanently drops the route to your IP. The machine does not respond to pings, and does other weird things like only being physically connected for 10 minutes a day at seemingly random times only known by the users, with a new IP address each time that is never reused. On top of that, the code and all commands of the entire OS have been secretly translated into a dead ancient language, so that even with root it would take a long time to figure out how to work anything. It runs a custom, secret fork of SSH used only in this one spot, which cannot be externally identified as SSH at all and exhibits no timing or other behaviors that would identify the OS or implementation. How exactly are you going to remotely figure out that this is OpenBSD and SSH, so you can then start to look for a flaw to exploit?
If you take the alternate model- just install a mainstream open source OS and stay on top of all security updates the best you can- all a potential hacker needs to do is quickly exploit a newly disclosed vulnerability before you actually get the patch installed, or review the code to find a new one.
Is it easier to rob a high security vault in a commercial bank on a major public street, or a high security vault buried in the sand on a remote island, where only one person alive knows its location?
> Instead of simply labeling something you don't like as "laughably weak", as in your last example- do you have any specific reasoning?
'without security updates for a decade or longer' - do I really need to go into detail on why this is hilariously terrible security?
'runs services on non-standard ports,' - ok, _maybe_ you mitigated some low-effort automated scans, but this does not address service signatures at all- the most basic nmap service detection scan bypasses it already.
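For example, a plain banner grab (host and port made up)- an unmodified sshd announces itself the moment you connect, no matter which port it listens on:

```python
# Stock SSH servers send a version banner immediately on connect,
# regardless of the port - moving the port hides nothing from this.
import socket

s = socket.create_connection(("server.example", 64234), timeout=5)
print(s.recv(256))  # e.g. b'SSH-2.0-OpenSSH_9.7\r\n' for an unmodified sshd
s.close()
```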
'blocks routes to hosts that even attempt scanning the standard ports ' - what is 'attempt scanning the standard ports' and how are you detecting that- is it impossible for me to scan your server from multiple boxes? (No, it's not, it's trivially easy.)
> Say I put up a server running OpenBSD (because it is less popular)- obviously a recent version with all security updates- and it has only one open port: SSH,
Ok, so already far more secure than what you said in your previous comment.
> only being physically connected for 10 minutes a day at seemingly random times only known by the users
Ok, so we're dealing with a server/service which is vastly different in its operation from almost any real-world server.
> only known by the users, with a new IP address each time that is never reused
Now you have to explain how you force a unique IP every time, and how users know about it.
> On top of that, the code and all commands of the entire OS have been secretly translated into a dead ancient language, so that even with root it would take a long time to figure out how to work anything
Ok, so completely unrealistic BS.
> It runs a custom, secret fork of SSH used only in this one spot, which cannot be externally identified as SSH at all
It can't be identified, because you waved a magic wand and made it so?
> and exhibits no timing or other behaviors that would identify the OS or implementation
Let's wave that wand again.
> How exactly are you going to remotely figure out that this is OpenBSD and SSH, so you can then start to look for a flaw to exploit?
Many ways. But let me use your magic wand and give you a much better/more secure scenario - 'A server which runs fully secure software with no vulnerabilities or security holes whatsoever.' - Makes about as much sense as your example.
> Is it easier to rob a high security vault in a commercial bank on a major public street, or a high security vault buried in the sand on a remote island, where only one person alive knows its location?
The answer comes down to what 'high security' actually means in each situation. You don't seem to get it.
Obfuscation is not security. So there can't be "security through obscurity".
Widely deployed doesn't mean it's a positive thing, and effective? It just can't be, as it's not security. People really need to pay more attention to these things, or else we DO get nonsense rolled out as "effective".
Where did you come up with "security through obscurity" in that previous comment? It said nothing about using an obscurity measure. He was talking about hardware-based privacy features.
What do you mean by 'considered bad practice'? By whom? I would think this is one of the reasons that my Macs since 2008 have just worked without any HW problems.
First, this is 100% false. Second, security through obscurity is almost universally discouraged and considered bad practice.