> Think of some common sense physical analogies: a hidden underground bunker is much less likely to be robbed than a safe full of valuables in your front yard. A bicycle buried deeply in bushes is less likely to be stolen than one locked to a bike rack.
That's not what security through obscurity is. If you want to make an honest comparison - which is more likely to be secure: an open system built on the latest, most secure public standards, or a closed system built on (unknown)? The open system is going to be more secure 99.999% of the time.
> Without obscurity it is straightforward to know exactly what resources will be required to break something- you can look for a flaw that makes it easy and/or calculate exactly what is required for enough brute force.
The whole point of not relying on obscurity is that you design an actually secure system even assuming the attacker has a full understanding of your system. That is how virtually all modern crypto that's actually secure works. Knowing your system is insecure and trying to hide that via obscurity is not security.
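To make that concrete, here's a minimal sketch of that idea in Python (the key value and messages are made up for illustration): the algorithm is completely public and heavily analyzed, and the only thing that has to stay secret is the key.

    import hmac
    import hashlib
    import secrets

    # The algorithm (HMAC-SHA-256) is public and heavily analyzed;
    # the key is the only secret. Assume it is generated and stored safely.
    key = secrets.token_bytes(32)

    def sign(message: bytes) -> bytes:
        # Anyone can read this code; without the key they still
        # cannot forge a valid tag.
        return hmac.new(key, message, hashlib.sha256).digest()

    def verify(message: bytes, tag: bytes) -> bool:
        # Constant-time comparison avoids leaking information via timing.
        return hmac.compare_digest(sign(message), tag)

    tag = sign(b"transfer $100 to alice")
    assert verify(b"transfer $100 to alice", tag)
    assert not verify(b"transfer $1000000 to mallory", tag)

Hiding the design adds nothing here; publishing it costs nothing, because the security rests entirely on the key.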
> it becomes nearly impossible to even identify that there is something to attack
That's called wishful thinking. You're conflating 'system that nobody knows about or wants to attack' with 'system that someone actually wants to attack and is defending via obscurity of its design'. If you want to make an honest comparison you have to assume the attacker knows about the system and has some motive for attacking it.
> but in most cases I think simple obscurity is more powerful and requires less resources than non obscure strength based security.
Except obscurity doesn't actually give you any security.
> I’ve managed public servers that stayed uncompromised without security updates for a decade or longer using obscurity: an archaic old Unix OS of some type that does not respond to pings or other queries, runs services on non-standard ports, and blocks routes to hosts that even attempt scanning the standard ports will not be compromised.
That's a laughably weak level of security and does approximately zero against a capable and motivated attacker. Also, your claim of 'stayed uncompromised' is seemingly based on nothing.
You are begging the question- insisting that obscurity isn't security by definition, instead of actually discussing its strengths and weaknesses. I didn't "say so"- I gave specific real-world examples, and explained the underlying theory- that being unable to plan or quantify what is required to compromise a system makes it much harder.
Instead of simply labeling something you seem not to like as "laughably weak", as in your last example - do you have any specific reasoning? Again, I'd like to emphasize that I don't advocate obscurity in place of other methods, but on top of them.
Let's try some silly extreme examples of obscurity. Say I put up a server running OpenBSD (because it is less popular)- obviously a recent version with all security updates-, and it has only one open port- SSH, reconfigured to run on port 64234, and attempting to scan any other port immediately and permanently drops the route to your IP. The machine does not respond to pings, and does other weird things like only being physically connected for 10 minutes a day at seemingly random times only known by the users, with a new IP address each time that is never reused. On top of that, the code and all commands of the entire OS have been secretly translated into a dead ancient language so that even with root it would take a long time to figure out how to work anything. It is a custom secret hacked fork of SSH only used in this one spot that cannot be externally identified as SSH at all, and exhibits no timing or other similar behaviors to identify the OS or implementation. How exactly are you going to remotely figure out that this is OpenBSD and SSH, so you can then start to look for a flaw to exploit?
If you take the alternate model and just install a mainstream open source OS, staying on top of all security updates as best you can, all a potential hacker needs to do is exploit a newly disclosed vulnerability before you actually get the patch installed, or review the code to find a new one.
Is it easier to rob a high security vault in a commercial bank on a major public street, or a high security vault buried in the sand on a remote island, where only one person alive knows its location?
> Instead of simply labeling something you seem not to like as "laughably weak", as in your last example - do you have any specific reasoning?
'without security updates for a decade or longer' - do I really need to go into detail on why this is hilariously terrible security?
'runs services on non-standard ports,' - ok, _maybe_ you mitigated some low-effort automated scans, but that does nothing about service signatures; the most basic nmap service detection scan bypasses this already.
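To be concrete, here's a rough sketch of why a non-standard port buys so little (the address and port below are placeholders, not a real host): most services announce themselves the moment you connect, so even a trivial banner grab, never mind nmap -sV, identifies SSH on port 64234.

    import socket

    def grab_banner(host: str, port: int, timeout: float = 3.0) -> str:
        # An SSH server sends its identification string (e.g.
        # "SSH-2.0-OpenSSH_9.6") as soon as the TCP connection opens,
        # no matter which port it is listening on.
        with socket.create_connection((host, port), timeout=timeout) as s:
            return s.recv(256).decode("ascii", errors="replace").strip()

    # Placeholder target; substitute the 'hidden' host and port.
    print(grab_banner("203.0.113.10", 64234))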
'blocks routes to hosts that even attempt scanning the standard ports' - what does 'attempt scanning the standard ports' mean, and how are you detecting it- is it impossible for me to scan your server from multiple boxes? (No, it's not; it's trivially easy.)
> Say I put up a server running OpenBSD (because it is less popular)- obviously a recent version with all security updates-, and it has only one open port- SSH,
Ok, so already far more secure than what you said in your previous comment.
> only being physically connected for 10 minutes a day at seemingly random times only known by the users
Ok, so we're dealing with a server/service which is vastly different in its operation from almost any real-world server.
> only known by the users, with a new IP address each time that is never reused
Now you have to explain how you force a unique IP every time, and how users know about it.
> On top of that, the code and all commands of the entire OS have been secretly translated into a dead ancient language so that even with root it would take a long time to figure out how to work anything
Ok, so completely unrealistic BS.
> It is a custom secret hacked fork of SSH only used in this one spot that cannot be externally identified as SSH at all
It can't be identified, because you waved a magic wand and made it so?
> and exhibits no timing or other similar behaviors to identify the OS or implementation
Let's wave that wand again.
> How exactly are you going to remotely figure out that this is OpenBSD and SSH, so you can then start to look for a flaw to exploit?
Many ways. But let me use your magic wand and give you a much better, more secure scenario - 'A server which runs fully secure software with no vulnerabilities or security holes whatsoever.' - Makes about as much sense as your example.
> Is it easier to rob a high security vault in a commercial bank on a major public street, or a high security vault buried in the sand on a remote island, where only one person alive knows its location?
The answer comes down to what 'high security' actually means in each situation. You don't seem to get it.
If you say so.