

That link goes to a page full of random garbage. No commits there to be seen.

Apparently the owners of that website don't like my choice of user agent, and have decided to punish me accordingly.


Same here. It says "please wait while verifying."

I just checked, and it's confirmed: I am definitely using a web browser. It seems my browser and this site have a different definition of web standards, however.

So exhausting to be surrounded by people with a paranoid, irrational fear of robots, who don't give a shit who they harm in their zeal to lash out and strike the evil bots.


That's crazy. This is core business-critical software, but they just YOLO critical changes without any automated tests? This PR would be insta-rejected at the small SaaS shop I work at.

If you think you can do better, you're welcome to do better. I say this without a hint of sarcasm. This is how open source works. It's a do-ocracy, not a democracy. Whoever makes a telnet server gets to decide how the telnet server works and how much testing it gets before release.

Maybe the lesson here is to stop letting the GNU folks do things, if this is what they do. This is only one example of craziness coming out of the GNU camp.

Or, flip the responsibility back to what it has always been understood to be when using open source software from random volunteers (some of them bad actors) on the internet for anything remotely critical: audit the source.

GNU doesn’t provide labor, only organizational tools like mailing lists and whatnot. The projects that GNU supports are still run by individual volunteers. If you want it done better then please volunteer so that you can be the one doing it better.

I am the one doing it better. GNU software is slowly being deprecated on my system, starting with glibc.

So you’re just changing which volunteers you depend on? That’s really productive of you. Thank you for your service.

You can enslave yourself to Microslop if you prefer.

Culture has changed a lot since the 20th century and older projects can have antiquated norms around things like testing. I was just listening to a recent podcast talking about how worrisome it is that OpenSSL has a casual culture about testing[1] and was reminded about how normal that used to be. I think in the case of telnetd you also have the problem that it’s been deprecated for multiple decades so I’d bet that they struggle even more than average to find maintainer time.

1. https://securitycryptographywhatever.com/2026/02/01/python-c...


Even with automated tests, you'd still need to think of this exploit, right? Perhaps fuzzing would have caught it. The mailing list says they proved it successful on

- OpenIndiana

- FreeBSD

- Debian GNU/Linux

So not complete YOLO.

See https://lists.gnu.org/archive/html/bug-inetutils/2015-03/msg...
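Purely as a sketch of what fuzzing the option negotiation might look like (host, port, and iteration count are all placeholders for illustration, assuming a disposable telnetd on localhost):

    import random
    import socket

    IAC = 255  # telnet "interpret as command" byte

    def fuzz_once(host="127.0.0.1", port=23):
        # Send a burst of random IAC command sequences and see if the
        # server stays up; a crash here is a signal worth investigating.
        payload = bytes(
            b for _ in range(64)
            for b in (IAC, random.randrange(236, 256), random.randrange(256))
        )
        with socket.create_connection((host, port), timeout=2) as s:
            s.sendall(payload)
            try:
                s.recv(4096)
            except socket.timeout:
                pass

    for i in range(1000):
        try:
            fuzz_once()
        except ConnectionError:
            print(f"connection failed on iteration {i}: possible crash?")
            break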

FWIW, a well-known LLM agent, when I asked it for a review of the patch, did suggest the patch was dodgy, but didn't pick up on the severity of how dodgy it was.


> a well known LLM agent

Which one?


Not GP, but my local Ministral 3 14B and GPT-OSS 20B didn't catch anything unless I gave some hints.

He says 'well known', so I assume Claude or GPT; I just don't get why he's being coy.

I thought that by not naming it I wouldn't shift the focus to the particular model, but it did the opposite. It was gpt-5.3-codex in medium mode.

Any business that has a telnet daemon reachable by an unauthenticated user is negligent. Just the fact that everything is in the clear is reason enough to never use it outside of protected networks.

Unless it doesn't matter if it's eavesdropped.

Traffic could be tampered with as well.

Sometimes that doesn't matter either. That is the valid use case of a plain-text protocol like telnet: doesn't matter.

Sure. But, contrary to what some people seem to think, "it's nothing secret" is not a sufficient justification to use an unencrypted plain-text protocol.

It literally is. I do not give a fuck if someone reads or fakes the wind speed from the sensor on my roof.

My point is that it's ok to use unencrypted plain text if you don't care if it's read ("it's nothing secret"), AND furthermore you don't care if it's modified.

If you don't care that it's read ("it's nothing secret"), but you do care that it's not modified, you should not use unencrypted plain text. That's why I explained that if you don't care if it's read, that is not a sufficient justification to use unencrypted plain text, because then it might be modified, and you might care about that.

You then said that it "literally" is: if you don't care if it's read ("it's nothing secret") that is a sufficient justification to use unencrypted plain text.

But then you proceed to give an example where it is, indeed, ok to use unencrypted plain text, but only because you don't care if it's read ("it's nothing secret"), AND you don't care if it's modified. That is what I have been saying all along. If you were to care that the wind speed from the sensor on your roof is not faked, then you should not use unencrypted plain text.

So again: If you don't care that it's read, AND you don't care if it's modified, then, sure, use unencrypted plain text.

If you don't care that it's read ("it's nothing secret"), but you do care that it's not modified, that is not sufficient justification to use unencrypted plain text. Rather, in addition, you also have to not care if it's modified.

Let me give you an example. Air pressure varies, and airplanes use air pressure to measure altitude, so they need to set their altimeter to the correct air pressure. Now, the air pressure is not secret at all. Anyone could trivially measure it. So, one doesn't care if it's read ("it's nothing secret").

According to your faulty thinking, one could thus use unencrypted plain text to transmit it. However, someone could modify it, giving wrong numbers to the airplane, putting the airplane and its crew in danger. That is not good. No one cares that the data is read ("it's nothing secret"), but we do care that it is not modified. Thus, do not use unencrypted plain text. Because if you don't care if it is read ("it's nothing secret"), that is not sufficient justification to use unencrypted plain text. You have to, in addition, not care if it is modified.

In your case, you don't care if it is read ("it's nothing secret"), AND you don't care if it is modified. But someone else might not care if it is read, but DO care if it is modified.

Do you understand this now, or should I make a full 2x2 matrix with all possibilities and carefully explain each case with examples?
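For the middle case (readable is fine, modified is not), note that you don't necessarily need encryption at all; authenticating the plaintext is one way to get integrity without confidentiality. A minimal sketch using an HMAC over a plaintext sensor reading (the key and message format here are made up for illustration):

    import hmac
    import hashlib

    KEY = b"shared-secret-between-sensor-and-reader"  # hypothetical

    def tag(message: bytes) -> bytes:
        # Anyone can read the message; only key holders can forge the tag.
        return hmac.new(KEY, message, hashlib.sha256).hexdigest().encode()

    def send(wind_speed: float) -> bytes:
        message = f"wind_speed={wind_speed}".encode()
        return message + b"|" + tag(message)

    def receive(packet: bytes) -> bytes:
        message, _, received_tag = packet.rpartition(b"|")
        if not hmac.compare_digest(tag(message), received_tag):
            raise ValueError("reading was tampered with")
        return message

    print(receive(send(12.4)))  # b'wind_speed=12.4'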


Most '90s-era software had zero tests. Nobody gave it a second thought.

This is quite untrue as a blanket statement. The problem is that there was massive cultural variation: if you installed a Perl module from CPAN you probably ran hundreds of tests. If you ran a C program, it ranged from nothing to “run this one input and don’t crash” to exhaustive suites. PHP tended towards nothing with a handful of surprises.

As a data point, my first tech job was QA for a COBOL compiler vendor. They supported roughly 600 permutations of architecture, operating system, and OS version with a byte-coded runtime and compiler written in C. I maintained a test runner and suite with many thousands of tests, ranging from unit tests to things like Expect UI tests. This was considered routine in the compiler vendor field, and in the scientific computing space I moved into. I worked with someone who independently reproduced the famous Pentium FDIV bug while figuring out why their tests failed, which surprised no one because that was just expected engineering.

Then you had the other end of the industry: say, a 50k-line Visual Basic desktop app where they didn't even use version control software. At a later job, I briefly encountered a legacy system with 30 years of that, where the same routine was copied in half a dozen places, each slightly modified, because when the author fixed a bug they weren't sure if it would break something else, so they just created a copy and updated only the module they were working on.


True, it is colored by my own personal experience. I remember CPAN, perl, and installing modules with tests. I also remember my day job: a 500,000-line C and C++ code base with literally 5 automated tests that nobody ever ran!

Yeah, I think it’s really hard to understand how much more cultural variation there was without first the internet and open source, and then services like GitHub, GitLab, BitBucket, etc. converging people onto similar practices and expectations.

Early '90s maybe. By the late '90s people knew tests were a good idea, and many even applied that in practice.

There's a famous XKCD about this: https://xkcd.com/2347/

In this case the hero's name is apparently Simon Josefsson (the maintainer).


I feel like we should just start saying 2347. Everyone knows what you mean.

https://xkcd.com/2347/

Ah, someone beat me to it!


It can't be critical business software if the business to which it is critical isn't paying anything for it.

/s


When I bought my initial /24 on such a site, it was not a competitive auction. I was the only bidder, and I paid the opening bid price, which was set by the seller. It's true that it was a real price, in that I paid it, but the 'auction' aspect felt like a farce.

> at the end of the day, you still hold something by owning real estate

Unless you fail to keep up on rent^Wproperty taxes, in which case you will find that someone comes to take it away from you.


> the only individuals that see the CAPCHA page mentioned, are users of Cloudflare's DNS services

I don't think this is true. I run my own recursive DNS resolver, and get a CAPTCHA when visiting archive.today.


I use my ISP's default DNS servers and have consistently gotten the CAPTCHA page for weeks now. The CAPTCHA seems to be broken too, rendering archive.today entirely inaccessible.

Someone has suggested that CAPTCHA is broken for everyone in Finland.

Not surprising considering the service is operated by Russia.

Seems to be the case in Estonia as well.

How feasible would it be for the host under measurement to introduce additional artificial latency to ping responses, varying based on source IP, in order to spoof its measured location?


Totally feasible.

You could do even cooler tricks, like https://github.com/blechschmidt/fakeroute

Pointless? Almost certainly.


Not impossible, but it would be a whole lot simpler to just not respond to pings in the first place.


But also, as mentioned in https://news.ycombinator.com/item?id=46836803, someone can still probe the second-last hop and get pretty close.
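One physical detail worth noting: added latency can only make a host look farther away, never closer, since the RTT to each vantage point puts a hard cap on distance. A rough sketch of that bound, assuming signals travel at roughly two-thirds of c in fiber (the vantage points and RTTs are made up for illustration):

    # Speed of light in fiber is roughly 2/3 of c in vacuum.
    C_FIBER_KM_PER_MS = 299_792.458 * (2 / 3) / 1000  # ~200 km per ms

    def max_distance_km(rtt_ms: float) -> float:
        # One-way time is at most half the RTT, so the host can be no
        # farther than this from the vantage point. Delaying responses
        # can fake being farther away, but never closer.
        return (rtt_ms / 2) * C_FIBER_KM_PER_MS

    for vantage, rtt in [("Frankfurt", 4.1), ("Tokyo", 212.0)]:
        print(f"{vantage}: within {max_distance_km(rtt):.0f} km")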


Traceroutes are already notoriously hard to interpret correctly[1] and yes, they can be trivially spoofed. Remember the stunt[2] pulled by tpb to move to North Korea? If you are an AS, you can also prepend fake AS numbers to your BGP announcements and make the spoofed traceroute look even more legitimate.

I wonder if this thing will start a cat and mouse game with VPNs.

[1]: https://old.reddit.com/r/networking/comments/1hkm4g/lets_tal...

[2]: https://news.ycombinator.com/item?id=5319419


Courtesy of Xfinity and Charter overprovisioning most neighborhoods' circuits, we already have that today for a significant subset of U.S. Internet users, due to the resulting bufferbloat (up to 2500 ms on a 1000/30 connection!)
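A rough back-of-the-envelope for that 2500 ms figure, assuming it is all queueing delay at the 30 Mbit/s upstream (the figures come from the comment above):

    upstream_bps = 30_000_000   # 30 Mbit/s upstream
    delay_s = 2.5               # observed worst-case queueing delay

    # A queue that takes 2.5 s to drain at 30 Mbit/s holds this much data:
    buffer_bytes = upstream_bps * delay_s / 8
    print(f"{buffer_bytes / 1e6:.1f} MB of buffered packets")  # ~9.4 MB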


You probably meant to say oversubscribing, not overprovisioning.

Oversubscription is expected to a certain degree (this is fundamentally the same concept as "statistical multiplexing"). But even oversubscription in itself is not guaranteed to result in bufferbloat -- appropriate traffic shaping (especially to "encourage" congestion control algorithms to back off sooner) can mitigate a lot of those issues. And, it can be hard to differentiate between bufferbloat at the last mile vs within the ISP's backbone.


Ok.


Have you seen excessive bufferbloat on a DOCSIS 3.1 modem?


Yes.


>varying based on source IP,

Aha, that's what you would think, but what if I fake the source IP of the geolocation ping instead!


Totally feasible, but as with all these situations, it's not happening in practice.


Hacks


This would push inflation above the Fed's target rate.


CGNATs should be using 100.64/10 instead of 10/8 to avoid this problem, but I don't doubt that there are significant deployments on 10/8 anyway.
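A minimal illustration of why this matters, using Python's ipaddress module: 100.64.0.0/10 (RFC 6598 shared address space) is reserved specifically for CGNAT, so by construction it can never collide with a customer's own 10/8 LAN (the pool and LAN prefixes below are made up for illustration):

    import ipaddress

    RFC1918_10 = ipaddress.ip_network("10.0.0.0/8")    # customer LANs
    CGNAT = ipaddress.ip_network("100.64.0.0/10")      # RFC 6598 shared space

    # A CGNAT pool carved out of 10/8 can collide with the subscriber's LAN:
    customer_lan = ipaddress.ip_network("10.0.0.0/24")
    bad_pool = ipaddress.ip_network("10.0.0.0/16")
    print(bad_pool.overlaps(customer_lan))   # True: routing ambiguity

    # The dedicated shared space cannot:
    good_pool = ipaddress.ip_network("100.64.0.0/16")
    print(good_pool.overlaps(RFC1918_10))    # False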


The IETF really dragged their heels on CGNAT because they thought IPv6 would be easy™ (it isn't: it's intentionally designed not to be "almost the same but wider", and instead includes unworkable stuff like Mobile IPv6[1], which is just a fancy VPN). They only acted when they were forced to allocate 100.64.0.0/10, because some ISPs were using not just 10.0.0.0/8 but also US DoD addresses (especially 11.0.0.0/8, since together they're basically 10.0.0.0/7) as "private" addresses.

[1] Not IPv6 on mobile devices, but a fully-owned IPv6 range that is supposed to be the address for a device regardless of where it is; see RFC 3775.


I wanted to use 11.0.0.0 and call the company "Eleven," but by that time the DOD had given up the block for general use... CGNAT is perfect.


> being forced to

Choosing to.


The idea that the SPD consent decree constituted "severe consequences", or was successful at all, is a joke.

https://www.capitolhillseattle.com/2026/01/video-cops-rallie...


Currency debasement is an option. Taxes would be less inflationary, of course.

