I just checked, and it's confirmed: I am definitely using a web browser. It seems my browser and this site have a different definition of web standards, however.
So exhausting to be surrounded by people with a paranoid, irrational fear of robots, who don't give a shit who they harm in their zeal to lash out and strike the evil bots.
That's crazy. This is core business-critical software, but they just YOLO critical changes without any automated tests? This PR would be insta-rejected at the small SaaS shop I work at.
If you think you can do better you're welcome to do better. I say this without a hint of sarcasm. This is how open source works. It's a do-ocracy, not a democracy. Whoever makes a telnet server gets to decide how the telnet server works and how much testing it gets before release.
Maybe the lesson here is to stop letting the GNU folks do things, if this is what they do. This is only one example of craziness coming out of the GNU camp.
Or flip the responsibility back to where it has always been understood to lie when using open source software from random volunteers (some of them bad actors) on the internet for anything remotely critical: audit the source.
GNU doesn’t provide labor, only organizational tools like mailing lists and whatnot. The projects that GNU supports are still run by individual volunteers. If you want it done better then please volunteer so that you can be the one doing it better.
Culture has changed a lot since the 20th century and older projects can have antiquated norms around things like testing. I was just listening to a recent podcast talking about how worrisome it is that OpenSSL has a casual culture about testing[1] and was reminded about how normal that used to be. I think in the case of telnetd you also have the problem that it’s been deprecated for multiple decades so I’d bet that they struggle even more than average to find maintainer time.
Even with automated tests you'd need to think of this exploit, right? Perhaps fuzzing would have caught it. The mailing list says they proved it successful on
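For what it's worth, even a crude black-box fuzz loop can shake out this class of bug without anyone having to think of the specific exploit. A minimal sketch of the idea, assuming a throwaway telnetd instance listening on 127.0.0.1:2323 in a test VM (the host, port, and crash heuristic are all made up for illustration):

    import random
    import socket

    # Telnet protocol constants (RFC 854/855)
    IAC, SB, SE = 255, 250, 240
    WILL, WONT, DO, DONT = 251, 252, 253, 254
    NEW_ENVIRON = 39  # option involved in several historical telnetd bugs

    def random_negotiation(rng: random.Random) -> bytes:
        """Build a random burst of option negotiations and suboption payloads."""
        out = bytearray()
        for _ in range(rng.randint(1, 30)):
            if rng.random() < 0.5:
                out += bytes([IAC, rng.choice([WILL, WONT, DO, DONT]), rng.randint(0, 255)])
            else:
                payload = bytes(rng.randint(0, 255) for _ in range(rng.randint(0, 200)))
                out += bytes([IAC, SB, NEW_ENVIRON]) + payload + bytes([IAC, SE])
        return bytes(out)

    def fuzz_once(host: str, port: int, rng: random.Random) -> None:
        """Send one random blob; log anything that makes the daemon stop talking."""
        data = random_negotiation(rng)
        try:
            with socket.create_connection((host, port), timeout=2) as s:
                s.sendall(data)
                s.settimeout(2)
                s.recv(4096)  # we only care whether the daemon still answers
        except OSError:
            print("suspicious input (%d bytes): %s" % (len(data), data.hex()))

    if __name__ == "__main__":
        rng = random.Random(1234)  # fixed seed so failures are reproducible
        for _ in range(1000):
            fuzz_once("127.0.0.1", 2323, rng)  # assumed throwaway test instance

It only hammers option negotiation (IAC sequences and suboption payloads), which is where a lot of historical telnetd bugs have lived, and the fixed seed makes any suspicious input reproducible.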
Any business that has a telnet daemon reachable by unauthenticated users is negligent. Just the fact that everything is in the clear is reason enough to never use it outside of protected networks.
Sure. But, contrary to what some people seem to think, "it's nothing secret" is not a sufficient justification to use an unencrypted plain-text protocol.
My point is that it's ok to use unencrypted plain text if you don't care if it's read ("it's nothing secret"), AND furthermore you don't care if it's modified.
If you don't care that it's read ("it's nothing secret"), but you do care that it's not modified, you should not use unencrypted plain text. That's why I explained that not caring whether it's read is not, on its own, sufficient justification to use unencrypted plain text: the data might still be modified, and you might care about that.
You then said that it "literally" is: if you don't care if it's read ("it's nothing secret") that is a sufficient justification to use unencrypted plain text.
But then you proceeded to give an example where it is, indeed, ok to use unencrypted plain text, but only because you don't care if it's read ("it's nothing secret") AND you don't care if it's modified. That is what I have been saying all along. If you were to care that the wind speed from the sensor on your roof is not faked, then you should not use unencrypted plain text.
So again: If you don't care that it's read, AND you don't care if it's modified, then, sure, use unencrypted plain text.
If you don't care that it's read ("it's nothing secret") but you do care that it's not modified, then "it's nothing secret" is not sufficient justification to use unencrypted plain text; you also have to not care whether it's modified.
Let me give you an example. Air pressure varies, and airplanes use air pressure to measure altitude, so they need to set their altimeter to the correct air pressure. Now, the air pressure is not secret at all. Anyone could trivially measure it. So, one doesn't care if it's read ("it's nothing secret").
According to your faulty thinking, one could thus use unencrypted plain text to transmit it. However, someone could modify it, giving wrong numbers to the airplane, putting the airplane and the crew in danger. That is not good. No-one cares that the data is read ("it's nothing secret"), but we do care that it is not modified. Thus, do not use unencrypted plain text. Because if you don't care if it is read ("it's nothing secret"), that is not sufficient justification to use unencrypted plain text. You have to, in addition, not care if it is modified.
In your case, you don't care if it is read ("it's nothing secret"), AND you don't care if it is modified. But someone else might not care if it is read, yet DO care if it is modified.
Do you understand this now, or should I make a full 2x2 matrix with all possibilities and carefully explain each case with examples?
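To make that 2x2 concrete in code: "readable" and "modifiable" are separate properties, and you can protect one without the other. A minimal sketch of integrity-without-confidentiality, assuming a shared key between sender and receiver (the key handling is purely illustrative):

    import hashlib
    import hmac

    # Assumed shared secret between the sensor and the receiver -- illustrative only.
    KEY = b"not-a-real-key"

    def publish(reading: str) -> tuple[str, str]:
        """Send the reading in the clear (nobody cares who reads it),
        with a MAC attached so tampering is detectable (we do care about that)."""
        tag = hmac.new(KEY, reading.encode(), hashlib.sha256).hexdigest()
        return reading, tag

    def verify(reading: str, tag: str) -> bool:
        """Receiver recomputes the MAC; a modified reading fails the check."""
        expected = hmac.new(KEY, reading.encode(), hashlib.sha256).hexdigest()
        return hmac.compare_digest(expected, tag)

    reading, tag = publish("QNH 1013 hPa")   # altimeter setting: public, not secret
    assert verify(reading, tag)              # unmodified: accepted
    assert not verify("QNH 0950 hPa", tag)   # tampered in transit: rejected

"It's nothing secret" only answers the "read" column; the "modified" column needs its own answer, whether that's a MAC like this, a signature, or just wrapping the whole thing in TLS.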
This is quite untrue as a blanket statement. The problem is that there was massive cultural variation: if you installed a Perl module from CPAN you probably ran hundreds of tests. If you ran a C program, it ranged from nothing to “run this one input and don’t crash” to exhaustive suites. PHP tended towards nothing with a handful of surprises.
As a data point, my first tech job was QA for a COBOL compiler vendor. They supported roughly 600 permutations of architecture, operating system, and OS version with a byte-coded runtime and compiler written in C. I maintained a test runner and suite with many thousands of tests, ranging from unit tests to things like Expect UI tests. This was considered routine in the compiler vendor field, and in the scientific computing space I moved into. I worked with someone who independently reproduced the famous Pentium FDIV bug while figuring out why their tests failed, and it surprised no one, because that was just expected engineering.
Then you had the other end of the industry, where there were, say, 50k lines of Visual Basic desktop app and no version control software at all. At a later job, I briefly encountered a legacy system with 30 years of that: the same routine copied in half a dozen places with slight modifications, because whenever the author fixed a bug they weren't sure whether it would break something else, so they just made a copy and updated only the module they were working on.
True, it is colored by my own personal experience. I remember CPAN, Perl, and installing modules with tests. I also remember my day job: a 500,000-line C and C++ code base with literally 5 automated tests that nobody ever ran!
Yeah, I think it’s really hard to understand how much more cultural variation there was without first the internet and open source, and then services like GitHub, GitLab, BitBucket, etc. converging people onto similar practices and expectations.
When I bought my initial /24 on such a site, it was not a competitive auction. I was the only bidder, and I paid the opening bid price, which was set by the seller. It's true that it was a real price, in that I paid it, but the 'auction' aspect felt like a farce.
I use my ISP's default DNS servers and have consistently gotten the CAPTCHA page for weeks now. The CAPTCHA seems to be broken too, rendering archive.today entirely inaccessible.
How feasible would it be for the host under measurement to introduce additional artificial latency to ping responses, varying based on source IP, in order to spoof its measured location?
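Quite feasible in principle: the host just has to answer echo requests itself instead of letting the kernel do it. A rough sketch of the idea with scapy (the delay table is hypothetical, and it assumes the kernel's own ICMP replies are disabled, e.g. via net.ipv4.icmp_echo_ignore_all=1):

    import time
    import ipaddress
    from scapy.all import sniff, send, IP, ICMP, Raw  # pip install scapy

    # Hypothetical policy: appear "farther away" from known measurement networks.
    EXTRA_DELAY = {
        ipaddress.ip_network("198.51.100.0/24"): 0.040,  # +40 ms for this prober
        ipaddress.ip_network("203.0.113.0/24"): 0.120,   # +120 ms for that one
    }

    def delay_for(src: str) -> float:
        addr = ipaddress.ip_address(src)
        return next((d for net, d in EXTRA_DELAY.items() if addr in net), 0.0)

    def handle(pkt):
        if ICMP in pkt and pkt[ICMP].type == 8:  # echo request
            time.sleep(delay_for(pkt[IP].src))   # blocks the sniffer; fine for a sketch
            reply = (IP(src=pkt[IP].dst, dst=pkt[IP].src)
                     / ICMP(type=0, id=pkt[ICMP].id, seq=pkt[ICMP].seq)
                     / (pkt[Raw].load if Raw in pkt else b""))
            send(reply, verbose=False)

    sniff(filter="icmp", prn=handle)

The catch is that you can only add delay, never remove it: a host can fake being farther from a given prober, but never closer than the speed-of-light floor allows, and it only works against probers whose source ranges you can enumerate.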
Traceroutes are already notoriously hard to interpret correctly[1] and yes, they can be trivially spoofed. Remember the stunt[2] The Pirate Bay pulled to "move" to North Korea? If you are an AS you can also prepend fake ASes to your BGP announcements and make the spoofed traceroute look even more legitimate.
I wonder if this thing will start a cat and mouse game with VPNs.
Courtesy of Xfinity and Charter overprovisioning most neighborhoods' circuits, we already have that today for a significant subset of U.S. Internet users due to the resulting bufferbloat (up to 2500 ms on a 1000/30 connection!).
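Back-of-envelope, that 2500 ms figure implies a huge standing queue on the upstream side; the sketch below just re-derives it from the 30 Mbit/s uplink (the ~50 ms RTT used for the comparison is an assumption):

    # How much data must be queued to produce 2.5 s of delay on a 30 Mbit/s uplink?
    uplink_bps = 30_000_000      # the "30" in a 1000/30 plan
    delay_s = 2.5                # observed worst-case bufferbloat

    queue_bytes = delay_s * uplink_bps / 8
    print(f"{queue_bytes / 1e6:.1f} MB queued")  # ~9.4 MB

    # For comparison, a sanely sized buffer is on the order of the
    # bandwidth-delay product for a typical ~50 ms path:
    bdp_bytes = 0.050 * uplink_bps / 8
    print(f"{bdp_bytes / 1e3:.0f} kB")           # ~188 kB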
You probably meant to say oversubscribing, not overprovisioning.
Oversubscription is expected to a certain degree (this is fundamentally the same concept as "statistical multiplexing"). But even oversubscription in itself is not guaranteed to result in bufferbloat -- appropriate traffic shaping (especially to "encourage" congestion control algorithms to back off sooner) can mitigate a lot of those issues. And, it can be hard to differentiate between bufferbloat at the last mile vs within the ISP's backbone.
The IETF really dragged their heels on CGNAT because they thought IPv6 would be easy™ (of course it isn't; it was intentionally designed not to be "almost the same but wider" and instead includes unworkable stuff like Mobile IPv6[1], which is just a fancy VPN), until they were forced to allocate 100.64.0.0/10 because some ISPs were not just using 10.0.0.0/8 but also US-DoD addresses (especially 11.0.0.0/8, because together with 10.0.0.0/8 it's basically 10.0.0.0/7) as "private" addresses.
[1] Not IPv6 on mobile devices, but a fully-owned IPv6 range that is supposed to remain the device's address regardless of where it is; see RFC 3775.
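For reference, the shared CGNAT space and the ranges mentioned above are easy to tell apart programmatically; a small sketch with the standard ipaddress module (the sample addresses are arbitrary):

    import ipaddress

    RANGES = {
        "RFC 1918 private (10/8 only, for brevity)": ipaddress.ip_network("10.0.0.0/8"),
        "US DoD, not actually private":              ipaddress.ip_network("11.0.0.0/8"),
        "RFC 6598 CGNAT shared space":               ipaddress.ip_network("100.64.0.0/10"),
    }

    def classify(addr: str) -> str:
        ip = ipaddress.ip_address(addr)
        return next((name for name, net in RANGES.items() if ip in net), "other")

    for sample in ("10.1.2.3", "11.1.2.3", "100.64.7.7", "100.128.0.1"):
        print(sample, "->", classify(sample))
    # 100.128.0.1 is "other": the /10 only covers 100.64.0.0 through 100.127.255.255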