apitman's comments | Hacker News

I've maintained this list the last several years: https://github.com/anderspitman/awesome-tunneling

Pangolin has quickly risen almost to the top since being released. It's very well loved by /r/selfhosted.


Hey, great to see you here! Thanks for maintaining this great list of tools. :)

On desktop it can be pretty simple. On mobile it can be a hot mess. Essentially with LGPL you have to provide a way for users to replace your version of the LGPL lib with one they choose.
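
The usual way to satisfy that on desktop is to link the LGPL library dynamically, so a user can swap in their own build of the .so. A rough sketch of the idea in C ("libfoo.so" and foo_process() are made-up names, and in real code you'd pick whatever override mechanism fits your app):

    /* Sketch: make an LGPL library user-replaceable by loading it
     * dynamically. libfoo.so and foo_process() are hypothetical.
     * Build with: cc main.c -ldl */
    #include <stdio.h>
    #include <stdlib.h>
    #include <dlfcn.h>

    int main(void) {
        /* Let the user point at their own build of the library. */
        const char *path = getenv("FOO_LIB_PATH");
        if (!path) path = "libfoo.so";

        void *lib = dlopen(path, RTLD_NOW);
        if (!lib) {
            fprintf(stderr, "dlopen failed: %s\n", dlerror());
            return 1;
        }

        int (*foo_process)(int) =
            (int (*)(int))dlsym(lib, "foo_process");
        if (!foo_process) {
            fprintf(stderr, "dlsym failed: %s\n", dlerror());
            dlclose(lib);
            return 1;
        }

        printf("result: %d\n", foo_process(42));
        dlclose(lib);
        return 0;
    }

Static linking is also permitted if you ship the object files users need to relink, but that's more hassle. Mobile is where it falls apart, since iOS in particular doesn't really allow user-swapped shared libraries.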

Also the company seems to be trying to wiggle out from under the open source spirit of the license the last few years, which doesn't bode well.


Bevy releases have a lot of breaking changes, and so do many of the 3rd party plugins.

Matrix is the only one that offers the killer feature of Discord, which is being able to join many communities from a single login.

Sadly Matrix has never had a good UX for me. IMO they spent too many complexity tokens on e2ee and there are simply not enough left.


Actually, the true killer feature for discord back in the day was something much dumber, but still heavily related to on-boarding and community transference.

You could join a discord server with a single link.

Account creation could come later.

Considering the competition in its heyday was TeamSpeak or Skype, the mere fact that you could just actually see the hell you were getting into without some stupid-ass "Hol' Up!" instantly made it popular with basically everyone who didn't even know what it was.

My account is dated June 2015, which is apparently a month after it launched, and every single one of the early adopters in that channel (which is still up to this day) has this same story to tell. We used it because we didn't have to log in at all when we first got it.


Account creation is the first thing you gotta do when trying to join a server, complete with a silly captcha. At least on the server I just tried.

IIRC, admins of a server can configure it to require different levels of authentication, from anonymous all the way to requiring a verified phone number (the latter of which I have so far refused to join).

The killer feature of discord was always "open this link for our guild voice and let's go into the dungeon". And this is a brand advantage: you can try that with self-hosted tools, but discord "feels safe" to click.

The only thing TeamSpeak has on it is multi-level voice for complex command chains. But you pay for that with enormous sign-up friction.

There's no viable frictionless chat alternative. Maybe jitsi. And if you try to make one? You'll get regulated and have to do the same thing.


> being able to join many communities from a single login.

That's one of the features I hate most about Discord, the difficulty of having separate identities in separate places! You can set a "display name" for convenience, but everyone can see your root identity.


What is your greatest UX concern that you would like to see fixed?

Randomly telling me that my connection is not secure and, as a result, randomly hiding messages. And then, if I try to fix it somehow, it tells me it's nearly impossible or that they'll delete my whole history. The security model is hostile towards the user.

When coming back to a community after a long period of time, I have to follow a trail of links through rooms that say they have been upgraded and abandoned, until I reach the current one. That's stupid, confusing, and time-wasting.

It is up to the room admin whether to upgrade or not; I still have some v1 rooms. Some features, like knocking, require a newer room version, though.
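
For what it's worth, the trail is machine-readable: each upgraded room has an m.room.tombstone state event whose content points at the replacement room, so a client can hop the chain automatically. A rough sketch with libcurl (homeserver, token, and room ID are placeholders, and naive string matching stands in for a real JSON parser):

    /* Sketch: follow a Matrix room-upgrade chain by reading each
     * room's m.room.tombstone state event. */
    #include <stdio.h>
    #include <string.h>
    #include <curl/curl.h>

    static char resp[65536];
    static size_t resp_len;

    static size_t on_data(char *p, size_t sz, size_t n, void *ud) {
        size_t len = sz * n;
        if (resp_len + len < sizeof(resp)) {
            memcpy(resp + resp_len, p, len);
            resp_len += len;
            resp[resp_len] = '\0';
        }
        return len;
    }

    int main(void) {
        const char *hs = "https://matrix.example.org";
        const char *token = "ACCESS_TOKEN";
        char room[256] = "!oldroom:example.org";

        curl_global_init(CURL_GLOBAL_DEFAULT);
        for (;;) {
            /* NB: room IDs should be URL-escaped in real code. */
            char url[1024], auth[512];
            snprintf(url, sizeof(url),
                "%s/_matrix/client/v3/rooms/%s/state/m.room.tombstone",
                hs, room);
            snprintf(auth, sizeof(auth),
                "Authorization: Bearer %s", token);

            CURL *c = curl_easy_init();
            struct curl_slist *h = curl_slist_append(NULL, auth);
            resp_len = 0;
            curl_easy_setopt(c, CURLOPT_URL, url);
            curl_easy_setopt(c, CURLOPT_HTTPHEADER, h);
            curl_easy_setopt(c, CURLOPT_WRITEFUNCTION, on_data);
            curl_easy_perform(c);
            curl_slist_free_all(h);
            curl_easy_cleanup(c);

            /* No tombstone means we reached the current room. */
            char *p = strstr(resp, "\"replacement_room\":\"");
            if (!p) break;
            p += strlen("\"replacement_room\":\"");
            char *q = strchr(p, '"');
            if (!q) break;
            snprintf(room, sizeof(room), "%.*s", (int)(q - p), p);
            printf("upgraded -> %s\n", room);
        }
        printf("current room: %s\n", room);
        curl_global_cleanup();
        return 0;
    }

A client could do this resolution up front so you land in the current room directly instead of clicking through the tombstones by hand.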

for me, I'd say: occasional cryptic error messages that prevent conversation, even using encrypted DM across multiple devices

Yes, like reddit, it is a community of sub-communities.

The whole fediverse wants to offer that, but I have no idea why single sign-on mostly fails to make waves for them. Perhaps it is just user adoption, or the technical complexity of privacy protection vs. ease of use.


But you have to admit it loses a certain shine in the cases where you know the problem you're solving could be solved more simply and cheaply another way.

But understanding _how_ it solves the problem, and knowing you found the solution yourself, might still be something to strive for.

> If I have any doubt that someone might not complete a task, or at least accurately explain why it's proving difficult, with at least 95% certainty, I won't assign them the task

It gets hard to compare AI to humans. You can ask the AI to do things you would never ask a human to do, like retry 1000 times until it works, or assign 20 agents to the same problem with slightly different prompts. Or re-do the entire thing with different aesthetics.


> retry 1000 times until it works, or assign 20 agents to the same problem with slightly different prompts

By that point, you're also spending more money on the agents than you would've on a junior developer with learning potential.


No doubt, I'm just saying that working with a coding agent is not even remotely similar to being a team lead. If a member of your team can't complete a task and can't accurately explain what the difficulty is, you're in trouble.

> software engineering is the only industry that is built on the notion of rapid change, constant learning, and bootstrapping ourselves to new levels of abstraction

Not sure I agree. I think most programming today looks almost exactly the same as it did 40 years ago. You could even have gotten away with never learning a new language. AI feels like the first time a large percentage of us may be forced to fundamentally change the way we work or change careers.


One may still write C code as they did 40 years ago, but they still use the power of numerous libraries, better compilers, Git, IDEs with syntax highlighting and so on. The only true difference — to me — is the speed of change that makes it so pronounced and unsettling.

It's true, unless you have always been working on FOTM frontend frameworks, you could easily be doing the same thing as 20/30/40 years ago. I'm still using vim and coding in C++ like someone could have 30+ years ago (I was too young then). Or at least, I was until Claude Code got good enough to replace 90% of my code output :)

Obviously that matters, but how much does it matter? Does it matter if you don't learn anything about computer architecture because you only code in JS all day? Very situational.

There's a subset of people whose identity is grounded in the fact that they put in the hard work to learn things that most people are unable or unwilling to do. It's a badge of honor, and they resent anyone taking "shortcuts" to achieve their level of output. Kind of reminds me of lawyers who get bent out of shape when they lose a case to a pro se party. All those years of law school and studying for the bar exam, only to be bested by someone who got by with copying sample briefs and skimming Westlaw headnotes at a public library. :)

It's not that our identity is grounded in being competent, it's that we're tired of cleaning up messes left by people taking shortcuts.

It's that, but it's also that the incentives are misaligned.

How many supposed "10x" coders actually produced unreadable code that no one else could maintain? But then the effort to produce that code is lauded while the nightmare maintenance of said code is somehow regarded as unimpressive, despite being massively more difficult?

I worry that we're creating a world where it is becoming easy, even trivial, to be that dysfunctional "10x" coder, and dramatically harder to be the competent maintainer. And the existence of AI tools will reinforce the culture gap rather than reducing it.


It's a societal problem; we are just seeing the effects in computing now. People have given up, everything is too much, the sociopaths won, they can do what they want with my body, mind, and soul. Give me convenience or give me death.

OT but I see your account was created in 2015, so I'm assuming very late in your career. Curious what brought you to HN at that time and not before?

I did not know it existed before 2015 :)

In some ways, I'd say we're in a software dark age. In 40 years, we'll still have C, bash, grep, and Mario ROMs, but practically none of the software written today will still be around. That's by design. SaaS is a rent seeking business model. But I think it also applies to most code written in JS, Python, C#, Go, Rust, etc. There are too many dependencies. There's no way you'll be able to take a repo from 2026 and spin it up in 2050 without major work.

One question is how AI will factor into this. Will it completely remove the problem? Will local models be capable of finding or fixing every dependency in your 20yo project? Or will they exacerbate things by writing terrible code with black hole dependency trees? We're gonna find out.


> That's by design. SaaS is a rent seeking business model.

Not all software is SaaS, but unfortunately it is too common now.

> But I think it also applies to most code written in JS, Python, C#, Go, Rust, etc. There are too many dependencies.

Some people (including myself) prefer to write programs without too many dependencies, in order to avoid that problem. Other things also help: some people write programs for older systems which can be emulated, or stick to simpler, more portable C code, etc. There are things that can be done to avoid too many dependencies.

There is uxn, which is a simple enough instruction set that people can probably implement it without too much difficulty. Although some programs might need extensions, or depend on file names, etc., many programs will keep working, because it is designed in a simple enough way that they will.
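
To give a sense of the scale involved: the core of a machine like that is just a fetch-decode loop over a flat address space. This is not real uxn bytecode, just a toy in the same shape:

    /* Toy stack machine in the spirit of uxn (made-up opcodes, not
     * its real instruction set): 64KB memory, one data stack, and a
     * fetch-decode loop. Prints 5. */
    #include <stdio.h>
    #include <stdint.h>

    enum { OP_BRK, OP_LIT, OP_ADD, OP_PRN };

    int main(void) {
        uint8_t mem[65536] = {
            OP_LIT, 2, OP_LIT, 3, OP_ADD, OP_PRN, OP_BRK
        };
        uint8_t stack[256];
        int sp = 0;
        uint16_t pc = 0;

        for (;;) {
            switch (mem[pc++]) {
            case OP_BRK: return 0;                        /* halt */
            case OP_LIT: stack[sp++] = mem[pc++]; break;  /* push */
            case OP_ADD: sp--; stack[sp-1] += stack[sp]; break;
            case OP_PRN: printf("%d\n", stack[--sp]); break;
            }
        }
    }

The real thing adds a return stack and per-opcode mode flags, but the interpreter keeps roughly this shape, which is why implementing it on small or unusual hardware stays feasible.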


Big uxn fan

I’m not sure Go belongs on that list. Otherwise I hear what you’re saying.

A large percentage of the code I've written the last 10 years is Go. I think it does somewhat better than the others in some areas, such as relative simplicity and having a robust stdlib, but a lot of this is false security. The simplicity is surface level. The runtime and GC are very complex. And the stdlib being robust means that if you ever have to implement a compiler from scratch, you have to implement all of std.

All in all I think the end result will be the same. I don't think any of my Go code will survive long term.


I’ve got 8 year old Go code that still compiles fine on the latest Go compiler.

Go has its warts but backwards compatibility isn’t one of them. The language is almost as durable as Perl.


8 years is not that long. If it can still compile in, say, 20 years, then sure, but 8 years in this industry isn't that long at all (unless you're into self-flagellation by working on the web).

Except 8 years is impressive by modern standards. These days, most popular ecosystems have breaking changes that would cause even just 2-year-old code bases to fail to compile. It's shit and I hate it. But that's one of the reasons I favour Go and Perl -- I know my code will continue to compile with very little maintenance years later.

Plus 8 years was just an example, not the furthest back Go will support. I've just pulled a project I'd written against Go 1.0 (the literal first release of Golang). It's well over a decade old now, uses C interop too (so not a trivial Go program), and I've not touched the code in the years since. It compiled without any issues.

Go is one of the very few programming languages that has an official backwards compatibility guarantee. This does lead to some issues of its own (eg some implementations of new features have been somewhat less elegant because the Go team favoured an approach that didn't introduce changes to the existing syntax).


8 years is only "not that long" because we have gotten better at compatibility.

How many similar programs written in 1999 compiled without issue in 2007? The dependency and tooling environment is as robust as it's ever been.


> because we have gotten better at compatibility.

Have we though? I feel the opposite is true. These days developers expect users of their modules and frameworks to be regularly updating those dependencies, and doing so dynamically from the web.

While this works for active code bases, you'll quickly find that stable but unmaintained code eventually rots as its dependencies are deprecated.

There aren't many languages out there whose wider ecosystem thinks about API stability in terms of years.


If they change the syntax, sure, but you can always use today's compiler if necessary. I generally find Go binaries have even fewer external dependencies than a C/C++ project.

On the scale of decades that's an incorrect assumption, unless you mean running the compiler within an emulated system.

It depends on your threat model. Mine includes the compiler vendors abandoning the project and me needing to make my own implementation. Obviously unlikely, and someone else would likely step in for all the major languages, but I'm not convinced Go adds enough over C to give away that control.

As long as I have a stack of esp32s and a working C compiler, no one can take away my ability to make useful programs, including maintaining the compiler itself.


For embedded that probably works. For large C programs you're going to be just as stuck as you are with Go.

I think relatively few programs need to be large. Most complexity in software today comes from scale, which usually results in an inferior UX. Take Google Drive, for example. It's very complicated to build a system like that, but most people would be better served by a WebDAV server hosted by a local company. You'd get way better latency and file transfer speeds, and the company could use off-the-shelf OSS or write their own.
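
WebDAV is also a good illustration of how small that off-the-shelf piece is: it's basically just HTTP verbs (PUT to upload, GET to download, PROPFIND to list), so a minimal client is a handful of libcurl calls. A sketch, with the server URL and credentials as placeholders:

    /* Sketch: uploading a file to a WebDAV server is a plain HTTP
     * PUT. URL and credentials are placeholders. */
    #include <stdio.h>
    #include <curl/curl.h>

    int main(void) {
        curl_global_init(CURL_GLOBAL_DEFAULT);

        FILE *f = fopen("notes.txt", "rb");
        if (!f) { perror("notes.txt"); return 1; }
        fseek(f, 0, SEEK_END);
        long size = ftell(f);
        rewind(f);

        CURL *c = curl_easy_init();
        curl_easy_setopt(c, CURLOPT_URL,
            "https://dav.example.com/files/notes.txt");
        curl_easy_setopt(c, CURLOPT_USERPWD, "user:password");
        curl_easy_setopt(c, CURLOPT_UPLOAD, 1L);   /* PUT */
        curl_easy_setopt(c, CURLOPT_READDATA, f);  /* read from file */
        curl_easy_setopt(c, CURLOPT_INFILESIZE, size);

        CURLcode res = curl_easy_perform(c);
        if (res != CURLE_OK)
            fprintf(stderr, "upload failed: %s\n",
                    curl_easy_strerror(res));

        curl_easy_cleanup(c);
        fclose(f);
        curl_global_cleanup();
        return 0;
    }

Everything else (auth, TLS, quotas) comes with whatever web server is hosting it.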
