> People only read your code when something is wrong, which means they’re already annoyed before they get to your bit. And if your bit is also annoying, you’re going to either hear about it or get frozen out because of it.
If you’re talking about angry issues in FOSS, then there’s another positive way to look at this.
Not only did at least 1 person run your code somehow, they also cared enough to find the source and report it to you. Which means your code has value!!
But generally people are pretty nice when reporting issues to small projects
OSS has its own set of problems but I was talking more of commercial projects. Ones where people are being paid to care and when they don’t we have a problem.
The FOSS world is primarily about freedom. You don’t have to align with someone else’s vision, you don’t need to be profitable, you don’t need to care about other projects
At Amazon scale, a "we don't delete the data for 30 days if a bill isn't paid" clause is a plausible thing to include in the "free" tier. Paid tiers owe Amazon the contracted rate for the storage, as with any similar contract, and when Amazon deletes the data after a missed payment is up to the terms of the contract.
There is no such thing as the “free tier”, at least not until July of this year. Some services are free for the first year up to a certain limit, some give you a bucket of free usage every month, etc.
Then you owe the contracted rate for the storage. These massive bills are almost never for storage, they're almost always for some sort of compute or transport left unrestricted. If you store 500TB you'll get an $11k/month bill, but the vast majority of the services can simply cut off usage at a limit. Even storage could prevent adding new data if you hit a pre-specified limit, so you'd only pay for the data you already had.
If I know my service should never use more than 1TB total I'd like to be able to set a limit at (say) 2TB total with warnings at 0.6TB & 1TB, thus limiting spend to $46/month on storage. Sure, my service will fail if I hit the limit, but if it's using double the storage I expect it to use something went wrong & I want to require manual action to resolve it instead of allowing it to leak storage unbounded.
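The logic being asked for here is simple enough to sketch. This is a hypothetical hard-cap check in Python, not a real AWS API; the $0.023/GB-month price is illustrative S3 Standard pricing, and the function name and thresholds are invented for the example:

```python
# Hypothetical spend-limit check: AWS offers no hard cap, so this sketches
# the logic a customer would want. The price is illustrative (roughly S3
# Standard, ~$0.023/GB-month at the time of writing).
PRICE_PER_GB_MONTH = 0.023

def check_storage(used_tb: float,
                  warn_thresholds_tb=(0.6, 1.0),
                  hard_limit_tb: float = 2.0):
    """Return (allowed, warnings_crossed, projected_monthly_cost_usd)."""
    warnings = [t for t in warn_thresholds_tb if used_tb >= t]
    allowed = used_tb < hard_limit_tb
    cost = used_tb * 1000 * PRICE_PER_GB_MONTH  # TB -> GB (decimal)
    return allowed, warnings, round(cost, 2)

# At the 2 TB cap, spend tops out at ~$46/month, matching the figure above.
print(check_storage(2.0))  # (False, [0.6, 1.0], 46.0)
```

Nothing here requires solving edge cases at Amazon's scale; it's a comparison against a counter they already maintain for billing.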
This is not a particularly difficult problem to make significant improvements on. There are some edge cases (there always are) but even if spending limits were only implemented for non-storage services it'd still be better for customers than the status quo.
I wish this trend of treating “security through obscurity” as if it means all info should just be exposed would die; it’s silly and lacks basis in reality.
Even within infosec, certain types of information disclosure are considered security problems. Leaking signed up user information or even inodes on the drives can lead to PCI-DSS failures.
Why is broadcasting your records treated differently? Because people would find the information eventually if they scanned the whole internet? Even then they might not due to SNI; so this is actually giving critical information necessary for an attack to attackers.
The issue is not that obscurity per se is bad, but that relying _only_ on obscurity is effectively the same as not having any security measures at all.
Public ledger or not, you still need to implement proper security measures. So it shouldn't matter whether your address is public; in fact, making it public raises awareness of the problem. That's the argument.
Until it gets obscure enough that we start calling it “public-key cryptography”. Guess the prime number I'm thinking of between 0 and 2^4096 and win a fabulous prize!
If you replace "security by obscurity" with "Kerckhoffs's principle", yes, absolutely!
The problem with using regular everyday obscurity is that it usually has a small state space and makes for terrible security, but people will treat it like it is cleverly hidden and safe from attackers
If I guess the IPv4 address you're thinking of between 0 and 2^32, ready or not, you win a free port scan
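The difference between everyday obscurity and a cryptographic key is just the size of the search space. A back-of-envelope calculation makes the point; the 10^9 guesses/second rate is an assumed (generous) attacker:

```python
import math

# An "obscure" 32-bit secret (an IPv4 address) versus a 4096-bit key,
# assuming an attacker making a generous 10^9 guesses per second.
GUESSES_PER_SEC = 1e9
SECONDS_PER_YEAR = 31_557_600  # Julian year

def log10_years_to_enumerate(bits: int) -> float:
    """log10 of the years needed to try every value of a `bits`-bit secret."""
    return (bits * math.log10(2)
            - math.log10(GUESSES_PER_SEC)
            - math.log10(SECONDS_PER_YEAR))

print(log10_years_to_enumerate(32))    # ~ -6.9: the whole space in seconds
print(log10_years_to_enumerate(4096))  # ~ 1216: effectively never
```

Working in log10 avoids overflowing a float with 2**4096, and makes the gap (some 1,200 orders of magnitude) easy to read off directly.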
> So it shouldn't matter if your address is public or not, in fact making it public raises the awareness for the problem. That's the argument.
Forget about the internet, we've had almost 100 years to prove we can prevent identity theft. And the best thing we can do is to keep our SSNs secret -- security through obscurity. Keeping your SSN private reduces your personal attack surface.
We've had 50 years to secure the internet, and yet, we still have zero day attacks. Nuclear submarines try their best to keep their locations a secret? Why? You cannot attack something you cannot see or hear.
Battleship sounds like a good analogy, but is very different because you don't have other options to "secure your ship" besides obscurity. If you had other options, let's say a sonar or moving your ship, they would definitely be used along with obscurity.
Besides, scanning the whole board in a Battleship game is time-consuming, while scanning the whole internet only takes a few minutes[1]
You're talking IPv4 here, not IPv6. A /24 network in IPv4 has 254 usable addresses. A /64 subnet in IPv6 has 2^64.
If you can scan 1M IPv6 addresses per second, scanning a single /64 subnet would still take about 584,942 years.
So if you're a firewall, and you notice scanning from a particular ip or network, it's easy enough to block them.
Also if you are scanning IPv4, you're not scanning addresses behind the NAT'd routers -- which is also effectively a form of obfuscation. So I would argue it's not the entire internet.
Okay, but we're not talking about that here. This is very much the case of a service being exposed that shouldn't be and relying on obscurity to try and avoid actually getting compromised
If something was temporary then it’s likely that it wouldn’t have been found in a meaningful amount of time to be exploited.
As an only line of defence it's not good, but it's also not good to hand-deliver your entire personal information to fraudsters and then claim that the systems should be more robust.
If you have a target on your back thanks to cert transparency logs, complaining about it is a bit like closing the barn door after the horse has bolted. If your only defense was obscurity, your ass is hanging out, and it's no one's fault but your own; finding fault with others for simply saying so doesn't change that.
IME, moving ssh off the standard port reduces bot scanning traffic by >99%. Not only does it mean less noise in the logs (and thus higher SNR), it also lowers the chance you're hit by spray-and-pray in case there's a zero day in sshd (or any other daemon really).
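For reference, moving sshd off port 22 is a one-line change; the port number below is arbitrary (avoid well-known alternates like 2222, which bots also probe):

```
# /etc/ssh/sshd_config
Port 49222
```

Allow the new port through your firewall and verify a fresh connection works before closing your existing session.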
True, but I hardly ever open ssh to the wide world; I only allow it inside a closed network anyway. HTTP, on the other hand, _needs_ to be exposed on 80 or 443 (not technically, but in practice)
The context of the conversation is that the address becomes publicly visible so you get hit with port scanners and script kiddies looking for vulns. Moving off standard ports does help but many of those are also going to look at ports like 2222 or 8022 and treat them as ssh.
It's not hard to just send something like `nmap -sV -p- <ADDRESS>` (or better, use something like rustscan) and you'll discover those ports and the services.
On the other hand, just install something like knockd and you don't have to do much. Port knocking is not a difficult thing to set up.
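A minimal knockd setup really is just a few lines; this is a sketch (the knock sequence and the iptables rule are illustrative, not a recommendation):

```
[options]
    UseSyslog

[openSSH]
    sequence    = 7000,8000,9000
    seq_timeout = 5
    tcpflags    = syn
    command     = /sbin/iptables -I INPUT -s %IP% -p tcp --dport 22 -j ACCEPT
```

Until a client hits ports 7000, 8000, 9000 in order within 5 seconds, port 22 stays closed to it.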
Which is something that makes a notable difference. It's telling that the bots the OP listed are trying Vite endpoints: they're targeting folks doing short-term local web development. Removing obscurity and indicating the relative likelihood of still being online is a big shift.
Yes. Yes, of course they do. Check for example https://crt.sh with your domain name to see the glorious public history of everything the certificates tell about your domain.
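crt.sh also exposes the same history as JSON, so you can pull it from the command line; a sketch assuming `jq` is installed (the `name_value` field name matches crt.sh's current schema):

```
curl -s 'https://crt.sh/?q=example.com&output=json' | jq -r '.[].name_value' | sort -u
```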
Would that income be more than the lost ad revenue (as applicants stop visiting their site) plus lost subscriptions on the employer side (as AI-authored applications make the site useless to them)? Who knows, but MS is probably betting on no.
Hiring companies certainly don’t want bots to write job applications. They are already busy weeding out the AI-written applications and bots would only accelerate their problem. Hiring companies happen to be paying customers of LinkedIn.
Job applications aren't the only use case for using LinkedIn in this connected way, but even on that topic -- I think we are moving pretty quickly to no longer need to "weed out" AI-written applications.
As adoption increases, there's going to be a whole spectrum of AI-enabled work that you see out there. So something that doesn't appear to be AI written is not necessarily pure & free of AI. Not to mention the models themselves getting better at not sounding AI-style canned. If you want to have a filter for lazy applications that are written with a 10-word prompt using 4o, sure, that is actually pretty trivial to do with OpenAI's own models, but is there another reason you think companies "don't want bots to write job applications"?
Devastating… I discovered Mikeal like most people did, from curiosity about npm packages in a project.
He wrote a lot of opensource projects and was a refreshingly nice and patient person to interact with on GitHub. Condolences to his friends and family, he’ll be missed in the FOSS world