
> People only read your code when something is wrong, which means they're already annoyed before they get to your bit, and if your bit is also annoying you're going to either hear about it or get frozen out because of it.

If you’re talking about angry issues in FOSS, then there’s another positive way to look at this.

Not only did at least 1 person run your code somehow, they also cared enough to find the source and report it to you. Which means your code has value!!

But generally people are pretty nice when reporting issues to small projects


OSS has its own set of problems, but I was talking more about commercial projects. Ones where people are being paid to care, and when they don't, we have a problem.


> Accept the request and send a response, one character at a time

Sounds like the opposite of the [1] Slowloris DDoS attack. Instead of attacking with slow connections, you're defending with slow connections.

[1] https://www.cloudflare.com/en-au/learning/ddos/ddos-attack-t...
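For the curious, a defensive tarpit like this takes only a few lines. Here's a minimal sketch in Python — the one-byte-at-a-time delay, the ports, and the response body are arbitrary choices for illustration, not any particular tool's behavior:

```python
import socket
import threading
import time

# Arbitrary response body; a real tarpit might dribble junk headers forever.
RESPONSE = b"HTTP/1.1 200 OK\r\nContent-Type: text/plain\r\n\r\nhello"

def dribble(conn, payload=RESPONSE, delay=0.01):
    """Send the response one byte at a time, sleeping between bytes."""
    for i in range(len(payload)):
        conn.sendall(payload[i:i + 1])
        time.sleep(delay)
    conn.close()

def serve_once(host="127.0.0.1", port=8907):
    """Accept a single connection and tarpit it."""
    srv = socket.socket()
    srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    srv.bind((host, port))
    srv.listen(1)
    conn, _ = srv.accept()
    conn.recv(4096)  # read (and ignore) the request
    dribble(conn)
    srv.close()

if __name__ == "__main__":
    threading.Thread(target=serve_once, daemon=True).start()
    time.sleep(0.1)
    c = socket.create_connection(("127.0.0.1", 8907))
    c.sendall(b"GET / HTTP/1.1\r\nHost: x\r\n\r\n")
    data = b""
    while True:
        chunk = c.recv(1)
        if not chunk:
            break
        data += chunk
    print(data == RESPONSE)  # same response, delivered painfully slowly
```

With a 50-byte response at 10ms per byte the bot ties up a socket for half a second per request; crank the delay up and a scanner's connection pool fills fast while your cost stays near zero.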


That's why it is actually sometimes called inverse slow loris.


it's called the slow sirol in my circles


Interesting perspective, I think highlighting what I’m looking at, to show me if it’s a function/class/variable is pretty useful…

Also, most modern IDEs already contextually highlight usages of what you've selected too


The FOSS world is primarily about freedom. You don’t have to align with someone else’s vision, you don’t need to be profitable, you don’t need to care about other projects


A.k.a. not getting paid, so you might as well do what you want.


How's the computing freedom for general audience? Better than ever, right?


Clanker Coding ™


> If you go over your budget with AWS, what should AWS do automatically? Delete your objects from S3? Terminate your databases and EC2 instances?

Why not simply take the service offline once it reaches the free tier limit??

The real reason is that AWS is greedy, and would rather force you to become a paid customer…


How do you take your S3 service offline when they charge for storage or your EBS volumes? Your databases?


Block access to the service until the next billing period starts, or the user upgrades to a paid tier.


And it is still incurring charges for storage costs.


At Amazon scale, including a "we don't delete the data for 30 days if a bill isn't paid" clause is a plausible thing to include in the "free" tier. Paid tiers owe Amazon the contracted rate for the storage, as with any similar contract, and when Amazon deletes the data if payment isn't rendered when due is up to the terms of the contract.


There was no such thing as a single "free tier", at least until July of this year. Some services are free for the first year up to a certain limit, some give you a bucket of free usage every month, etc.


Then you owe the contracted rate for the storage. These massive bills are almost never for storage, they're almost always for some sort of compute or transport left unrestricted. If you store 500TB you'll get an $11k/month bill, but the vast majority of the services can simply cut off usage at a limit. Even storage could prevent adding new data if you hit a pre-specified limit, so you'd only pay for the data you already had.

If I know my service should never use more than 1TB total I'd like to be able to set a limit at (say) 2TB total with warnings at 0.6TB & 1TB, thus limiting spend to $46/month on storage. Sure, my service will fail if I hit the limit, but if it's using double the storage I expect it to use something went wrong & I want to require manual action to resolve it instead of allowing it to leak storage unbounded.

This is not a particularly difficult problem to make significant improvements on. There are some edge cases (there always are) but even if spending limits were only implemented for non-storage services it'd still be better for customers than the status quo.
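To make the idea concrete, here's a rough sketch of the client-side guard described above. The function names and thresholds are illustrative, and the $0.023/GB-month price is an assumed S3 standard-tier rate — AWS exposes no hard spend-cap API like this for storage:

```python
# Hypothetical spending guard; AWS offers no hard cap for storage, so the
# names and thresholds here are illustrative only.
PRICE_PER_GB_MONTH = 0.023  # assumed S3 standard-tier price

def check_storage(used_gb, warn_at=(600, 1000), hard_limit_gb=2000):
    """Classify a storage level: ok, warn (alert operator), or block writes."""
    if used_gb >= hard_limit_gb:
        return "block"  # refuse new writes; manual action required
    if any(used_gb >= w for w in warn_at):
        return "warn"
    return "ok"

def worst_case_monthly_cost(hard_limit_gb=2000):
    """Upper bound on the storage bill once the hard limit is enforced."""
    return hard_limit_gb * PRICE_PER_GB_MONTH

print(check_storage(500), check_storage(700), check_storage(2500))
print(round(worst_case_monthly_cost()))  # the ~$46/month ceiling above
```

The point is that the worst case becomes a known dollar figure instead of an unbounded leak: existing data keeps costing the contracted rate, but growth past the limit requires a human to intervene.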


Wait, so bots watch for new records added to this HTTPS cert public ledger, then immediately start attacking?

To me that sounds like enabling HTTPS is actually a risk here…


The server was already exposed. All this does is remove obscurity


I wish this trend of treating "security through obscurity" as a reason that all info should just be exposed would die; it's silly and lacks basis in reality.

Even within infosec, certain types of information disclosure are considered security problems. Leaking signed up user information or even inodes on the drives can lead to PCI-DSS failures.

Why is broadcasting your records treated differently? Because people would find the information eventually if they scanned the whole internet? Even then they might not, due to SNI; so this is actually handing attackers information that is critical for an attack.


The issue is not that obscurity per se is bad, but that relying _only_ on obscurity is effectively the same as not having any security measures at all.

With the public ledger or not, you will still need to implement proper security measures. So it shouldn't matter if your address is public or not, in fact making it public raises the awareness for the problem. That's the argument.


> relying _only_ on obscurity

Until it gets obscure enough that we start calling it “public-key cryptography”. Guess the prime number I'm thinking of between 0 and 2↑4096 and win a fabulous prize!


If you replace "security by obscurity" with "Kerckhoffs's principle", yes, absolutely!

The problem with using regular everyday obscurity is that it usually has a small state space and makes for terrible security, but people will treat it like it is cleverly hidden and safe from attackers

If I guess the IPv4 you're thinking of between 0 and 2↑32, ready or not, you win a free port scan


As per another comment, we can scan a single port on every public IPv4 address in less than an hour.

Trying every 256bit number gets into a "slightly" larger problem.
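A quick back-of-the-envelope comparison makes the gap vivid. The probe rate is an assumption, roughly what masscan-class tools claim:

```python
# Probe rates are assumptions: ~10M/sec is what masscan-class tools claim.
RATE = 10_000_000  # probes per second

ipv4_minutes = 2**32 / RATE / 60               # one port, all of public IPv4
key_years = 2**256 / RATE / (3600 * 24 * 365)  # brute-forcing a 256-bit space

print(round(ipv4_minutes, 1))  # about 7.2 minutes
print(key_years > 10**60)      # True -- "slightly" larger indeed
```

Seven minutes versus more than 10^60 years: obscurity in a 32-bit space is no obscurity at all, while a 256-bit space is genuinely out of reach.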


> So it shouldn't matter if your address is public or not, in fact making it public raises the awareness for the problem. That's the argument.

Forget about the internet; we've had almost 100 years to prove we can prevent identity theft. And the best thing we can do is keep our SSNs secret -- security through obscurity. Keeping your SSN private reduces your personal attack surface.

We've had 50 years to secure the internet, and yet we still have zero-day attacks. Nuclear submarines try their best to keep their locations secret. Why? You cannot attack something you cannot see or hear.


Well, this is a bad example, considering public/private key pairs exist, and work for identity validation, as long as you don't farm it out to a cheap, know-nothing vendor.


Except we are more like on a chessboard, where we can trivially probe each square, unlike the vast volume of the ocean.


A game of battleship is indeed a good analogy!

Just because it's a finite space that may eventually be searched is a poor reason to announce where things are!


Battleship sounds like a good analogy, but it's very different, because you don't have any option to "secure your ship" besides obscurity. If you had other options, say sonar or moving your ship, they would definitely be used along with obscurity.

Besides, scanning the whole board is prohibitively time consuming in a game of battleship, but scanning the whole internet only takes a few minutes[1]

[1]: https://github.com/robertdavidgraham/masscan


You're talking IPv4 here, not IPv6. A /24 network has 254 usable addresses in IPv4. A /64 subnet in IPv6 has 2^64.

If you can scan 1M IPv6 addresses in a second, it would still take you 584,942 years to scan one subnet.

So if you're a firewall, and you notice scanning from a particular IP or network, it's easy enough to block them.

Also, if you are scanning IPv4, you're not scanning addresses behind NAT'd routers -- which is also effectively a form of obfuscation. So I would argue it's not the entire internet.
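The 584,942-years figure checks out; a quick verification, assuming a flat 1M probes per second:

```python
# One /64 IPv6 subnet at an assumed 1M probes per second.
seconds = 2**64 / 1_000_000
years = seconds / (3600 * 24 * 365)
print(round(years))  # 584942 -- over half a million years for one subnet
```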


Okay, but we're not talking about that here. This is very much the case of a service being exposed that shouldn't be and relying on obscurity to try and avoid actually getting compromised


Ironically, I would double down even harder then:

If something was temporary, then it's likely that it wouldn't have been found in a meaningful amount of time to be exploited.

As an only line of defence it's not good, but it's also not good to hand-deliver your entire personal information to fraudsters and then claim that the systems should be more robust.


If you have a target on your back thanks to cert transparency logs, finding fault with the logs is a bit like closing the barn door too late and then blaming Texas for having sharpshooters about. If your only defense was obscurity, your ass is hanging out, and it's no one's fault but your own; don't find fault with others for simply saying so.

https://en.wikipedia.org/wiki/Texas_sharpshooter_fallacy


In my original comment I said (I thought) quite clearly that obscurity as your only defence is a terrible idea.

But painting a target on your back is not exactly justified just because hiding yourself isn't a good defence in and of itself.


Obscurity couldn't be anyone's last/best defense, unless it was their only defense, was my point.

In any case, I think we agree.


IME, moving ssh off the standard port reduces bot scanning traffic by >99%. Not only does it mean less noise in the logs (and thus a higher SNR), it also lowers the chance you're hit by spray-and-pray in case there's a zero day in sshd (or any other daemon really).


True, but I hardly ever open ssh to the wider world. I would only allow it inside a closed network anyway. HTTP on the other hand _needs_ to be exposed on 80 or 443 (not technically, but in practice)


> IME, moving ssh off the standard port reduces bot scanning traffic by >99%.

Depends on the site I expect. My low value domains get NO ssh attempts on my random ports. The high value ones get a few each week.


You could also always add port knocking or something like that.


If you're going to that level, just put it behind a VPN.


Tailscale is a VPN...

The context of the conversation is that the address becomes publicly visible so you get hit with port scanners and script kiddies looking for vulns. Moving off standard ports does help but many of those are also going to look at ports like 2222 or 8022 and treat them as ssh.

It's not hard to just run something like `nmap -sV -p- <ADDRESS>` (or better, use something like rustscan) and you'll discover those ports and the services.

On the other hand, just install something like knockd and you don't have to do much. Knocking is not a difficult thing to set up.
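For reference, a knockd setup really is tiny; this is roughly the stock example from its documentation (the knock sequence and the iptables rule are placeholders you'd change):

```ini
[options]
        logfile = /var/log/knockd.log

[openSSH]
        sequence    = 7000,8000,9000
        seq_timeout = 5
        command     = /sbin/iptables -I INPUT -s %IP% -p tcp --dport 22 -j ACCEPT
        tcpflags    = syn
```

Hit those three ports in order within five seconds and the firewall opens port 22 for your source IP only; to a port scanner, sshd simply isn't there.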


> Tailscale is a VPN...

And if you use it as a VPN and don't turn on the funnel feature, your service won't be exposed.

> On the other hand, just install something like knockd and you don't have to do much. Knocking is not a difficult thing to set up.

Neither is wireguard.


lol your solution to a problem caused by a feature is "don't use that feature?" LGTM

Presumably wireguard was already being used?


Which is something that makes a notable difference. It's telling that the bots the OP listed are trying Vite endpoints; they're targeting folks doing short-term local web development. Removing obscurity and indicating the relative likelihood of still being online is a big shift.


Yes. Yes, of course they do. Check for example https://crt.sh with your domain name to see the glorious public history of everything the certificates tell about your domain.
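If you'd rather script it than browse, crt.sh also exposes a JSON endpoint; the `%.` prefix matches subdomains, and jq here is just optional pretty-printing (swap in your own domain):

```shell
# List every hostname ever logged in a certificate for a domain.
curl -s "https://crt.sh/?q=%.example.com&output=json" | jq -r '.[].name_value' | sort -u
```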


Why would platforms like LinkedIn want this? Bots have never been good for social media…


If they are getting a cut of that premium subscription income, they'd want it if it nets them enough.


Would that income be more than the lost ad revenue (as applicants stop visiting their site) plus lost subscriptions on the employer side (as AI-authored applications make the site useless to them)? Who knows, but Microsoft is probably betting on no.


LinkedIn is probably the only social platform that would be improved by bots.


Hiring companies certainly don’t want bots to write job applications. They are already busy weeding out the AI-written applications and bots would only accelerate their problem. Hiring companies happen to be paying customers of LinkedIn.


Job applications aren't the only use case for using LinkedIn in this connected way, but even on that topic -- I think we are moving pretty quickly to no longer need to "weed out" AI-written applications.

As adoption increases, there's going to be a whole spectrum of AI-enabled work that you see out there. So something that doesn't appear to be AI written is not necessarily pure & free of AI. Not to mention the models themselves getting better at not sounding AI-style canned. If you want to have a filter for lazy applications that are written with a 10-word prompt using 4o, sure, that is actually pretty trivial to do with OpenAI's own models, but is there another reason you think companies "don't want bots to write job applications"?


1. Already true, no company will make the AI agent liable for its output, it’s always the programmer

2. Unlikely, as most software won’t result in death/injury… whereas a structural engineering project is much more life threatening.

3. I actually think entry level engineers will be expected to ramp up to productive levels much much quicker due to the help of AI

4. Already true


Devastating… I discovered Mikeal like most people did, from curiosity about npm packages in a project.

He wrote a lot of opensource projects and was a refreshingly nice and patient person to interact with on GitHub. Condolences to his friends and family, he’ll be missed in the FOSS world

https://github.com/mikeal

