oaiey's comments | Hacker News

I do not know which is more critical: the risk of censorship, or standing by while hospitals, banks, nuclear power plants and other systems become compromised and go down, with people dying because of it. These decision makers not only have power but also a responsibility.

Have you ever seen a hospital, a bank, or a power plant expose telnetd to the public internet in the last 20 years? It should be extremely rare, and it should be addressed by the company's IT, not by ISPs.

These are the institutions I would most expect to do that.

Well, maybe not a bank.


Tier 1 providers probably have some insight into this.

This feels more akin to discovering an alarming weakness in the concrete used to build those hospitals, banks and nuclear power plants – and society responding by grounding all flights to make sure people can't get to, and thus overstress, the floors of those hospitals, banks and nuclear power plants.

In the UK we have in fact discovered an alarming weakness in the concrete used to build schools, hospitals and other public buildings (in one case, the roof of a primary school collapsed without warning). The response was basically "Everybody out now".

https://en.wikipedia.org/wiki/2023_United_Kingdom_reinforced...

https://www.theconstructionindex.co.uk/news/view/raac-crisis...

https://www.theguardian.com/education/2023/aug/31/what-is-ra...


You feel it's similar because having access to port 23 is as life-critical as having access to a hospital? Or is it because, like with ports, when people can't fly to a hospital they have 65,000 other options?

All I'm saying is that the only right place to fix this is at the hospital. Not at the roads leading to it.

That's my question: why is there infrastructure with open access to port 23 on the Internet? That shouldn't be a problem the service provider has to solve, but it should absolutely be illegal for whoever is in charge of managing the service or providing equipment to the people managing the service. That is like selling a car without seatbelts.

We are beyond the point where not putting infrastructure equipment behind a firewall should merely result in a fine. We are beyond the point where this is mere negligence.
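For what it's worth, checking whether your own equipment is exposed this way is trivial. Here is a minimal sketch (the address below is a placeholder from a documentation range, not a real system) that simply tests whether TCP port 23 accepts a connection:

```python
# Minimal sketch: does this host accept TCP connections on port 23 (telnet)?
# The address below is a placeholder (TEST-NET-2 documentation range), not a real system.
import socket


def telnet_exposed(host: str, port: int = 23, timeout: float = 3.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within the timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        # Covers refused connections, timeouts, and unreachable hosts.
        return False


if __name__ == "__main__":
    host = "198.51.100.7"
    print(f"port 23 open on {host}: {telnet_exposed(host)}")
```

(Only point this at hosts you operate yourself; scanning other people's networks is a different discussion.)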


There again, I think the comparison fails.

Fixing the hospital: single place to work on, easier

Blocking all the roads/flights: everywhere, harder

Vs

Fixing all the telnet: everywhere, harder/impossible

Blocking port 23 on an infra provider: single place, easier

It makes sense to me to favor the realistic solution that actually works over the unrealistic one that is guaranteed not to fix the issue, especially when it is also much easier to implement.


I run telnetd on 2323 because I don't want hackers to find it.

Hospitals (note the plural): many places.

Roads: a lot more places than that.

The core of the analogy holds.


Nah, that's like seeing an open gate to a nuclear tank - something easily fixed within a few minutes - and responding to it by removing every road in existence that can carry cars.

Censorship is one of those words that gets slapped on anything.

Filtering one port is not censorship. Not even close.


> censorship, the suppression or removal of writing, artistic work, etc. that are considered obscene, politically unacceptable, or a threat to security

It is not the responsibility of the Tier 1 or the ISP to configure your server securely; it is their responsibility to deliver the message. Therefore it is an overreach to block it because you might be insecure. What is next? Blocking the traffic to your website because you run PHP?

Similar to how the mailman is obligated to deliver your letter to address 13 even though he personally might be very superstitious and believe that by delivering the mail to that address, bad things will happen.


I don't agree with your argument, but I don't want to debate that.

But let's say I agree: That still is not censorship.


If that really affects them it's better to take them offline.

At a certain point these simulators would be so intense that your joy for the game could just as well be joy for real employment in a city :)

The statement reversed: you might lose your joy because working in the game is no longer fun.

I get you but just want to say: careful what you wish for :)


Well probably more people want to be city planners than the number of city planners society actually requires. Also, I think I would draw the line somewhere way before the real world. I want most of the technical details of the real world without having to deal with the politics. I don’t want to attend town hall meetings and stakeholder consultations in my game, but then again maybe someone else wants that.

In my area, streets often run from church tower to church tower, dating back to the Middle Ages. You can drive these streets today and the middle line markings align perfectly with the church tower as it comes into view. I think the church / church-based government shared the Romans' understanding of property rights :)

Sounds like the person doing the performance review just relies on metrics. Sounds like a shitty leader.

Not only that, but that person was relying on a totally incorrect metric in the first place. Tale as old as time.

This is why data-driven decision making is a trap. Even if the data is correct, which it usually is not, it is still incomplete by definition. It is inherently a dumbed-down, distilled, one-dimensional view of the real world, of meatspace, and you gotta treat it like that.

Here's what is scary. I have been looking at many job descriptions for a Developer Experience Engineer or similar positions. About half of them ask for experience with automated tools to measure developer productivity!

Many such cases.

Hmmm. I have a different take there: when you are young and wild, you achieve stuff because you produce code instantly and think later. When you get older, you do it the other way around, leading to your example.

In the early 2000s I was at a startup, and we delivered as rapidly in C# as we did in PHP. We just coded the shit.


I think what you said is a healthy progression: write dumb code -> figure out it doesn't scale -> add a bunch of clever abstraction layers -> realize you fucked yourself when you're on call for 12 hours trying to fix a mess for a critical issue over the weekend, times however many repetitions it takes you to get it -> write dumb code and only abstract when necessary.

The problem is that devs these days start from step two because we're teaching that in all sources - they never learned why it's done by doing step one - it's all theoretical examples and dogma. Or they are solving problems at Google/Microsoft/etc. scale when they are a 5-person startup. But everyone wants to apply "lessons learned" from big tech.

And all this advice usually comes from very questionable sources - tech influencers and book authors, people who spend more time talking about code than reading or writing it, and who earn money by selling you on an idea. I remember opening Uncle Bob's repo once when I was learning Clojure - the most unreadable, scattered codebase I've seen in the language. I would never want to work with that guy - yet Clean Code was a staple for years. DDD preachers, event-driven gurus.

C# is the community where I've noticed this the most.


Spot on.


As a long-term observer: definitely not a goal. But you have to be clear here: JavaScript and C# are both OO languages, both have origin stories in Java/C++, both face the same niche (system development), the same challenges (processor counts, ...), and so on. And then you put teams on them who look left and right when they face a problem, and you wonder that they reuse what they like?

The C# language team is also really good. They have not made a lot of mistakes in 25+ years. They are a very valid source of OO and OO-hybrid concepts. It is not only TS/JS but also Java and C++ that often look to C#.

The story was not to transform C# code to JS but to use C# to write the code in the first place and transpile it. Not for the sake of having .NET usage but for the sake of having a good IDE.


> They have not made a lot of mistakes in 25+ years

If my memory serves, .NET and WinFS were the two major forces that sank Longhorn, and both were given their walking papers after the reset [1].

.NET and C# have grown to be mature and well-engineered projects, but the road there was certainly not without bumps. It's just that a lot of the bad parts haven't spilled outside of Microsoft, thankfully.

[1] https://www.theregister.com/2005/05/26/dotnet_longhorn/


Are we mixing up the language and the runtime here? C# the language seems remarkably free of weirdness and footguns.


Not only that, they went as far as mixing project issues in with language design. A massive rewrite mixed with massive feature changes is always a tricky thing, no matter the language.


.NET was already a going concern before Longhorn even started. What sank Longhorn was the fact that writing an OS from scratch is hard, and maintaining compatibility with existing OSes in the process is even harder, especially when you're adopting a completely new architecture. Longhorn would have been a microkernel running 100% on the .NET runtime, while mainline Windows is a monolithic kernel written in C++. I don't know how it would have ever worked, whether .NET was "perfect" or not.


No, Longhorn was neither a microkernel nor was the kernel rewritten in .NET.

Source: I was there.


I think he's confusing Longhorn with the Singularity research project.


See Android, or Meadow, for an alternate reality.


Android still runs on a monolithic kernel written in a memory-unsafe language. I'm finding it surprisingly difficult to find information on Meadow, other than that it runs .NET DLLs as user-space applications, but nothing about the structure of the kernel.

Longhorn was going to be more than that. Microsoft did have the Singularity/Midori projects, started around the middle of Longhorn/Vista and continued long after Vista was released, to build out the managed microkernel concept. It's been about a decade since they put any work into it, though.


Microsoft wasn't even able to deliver that, which was my whole point.

Joe Duffy mentions in a talk that even with Midori running production workloads, the Windows team could not be made to change their mind.

Meadow uses a C++-based microkernel; the whole userspace is based on .NET, by the way.


The article presents some things mixed up. It had nothing to do with the language or the framework. WinFS was a database product - over-engineered and abstract.

.NET and C# were researched a lot for operating system usage (Midori, Singularity) but that was after Longhorn.

The operating system group's UI toolkits were a further problem, and they pivoted there a dozen times over the years - particularly for a C++-based OS group.

But the death of Longhorn was ultimately about Bill Gates's security restart.


Nope, what sank Longhorn was politics.

The Windows team is a C++ kingdom, and those devs will not adopt .NET even at gunpoint.

They redid Longhorn with COM and called it WinRT. Irony of ironies: WinRT applications run slower than .NET, with COM reference counting all over the place.

Google showed those folks what happens when everyone plays on the same team, and now it owns the mobile phone market - a managed userspace with 70% of the world market.


Sweet. Alternatives are always a good thing.


It will not survive. No point in maintaining both. Just costs money. Device management for mobile phones is also a huge point.

My educated guess: tablet/laptop hybrids with Android OS. Not that Apple has had huge success with the same move.


As someone who has benefited once from this, I have to say: good.

In my humble opinion, the current state is better than no encryption at all. For example: laptop theft, scavengers trying to find pictures, etc. And if you think you are a target of either Microsoft or law enforcement, manage your keys yourself or go straight to Linux.

