> For many business apps, they will never reach 2 billion unique values per table, so this will be adequate for their entire life. I’ve also recommended always using bigint/int8 in other contexts.
I'm sure every DBA has a war story that starts with a similar decision made in the past
The other reason is the volume of the code being produced, combined with constant product changes. An innocent change, like mixing two close but still different concepts, can easily poison the whole codebase and take years to undo. It may even be nearly impossible to fix if it propagates to external systems outside of direct control
It's fair to complain about Rust complexity IMO. What's _not_ fair is pointing at Zig as an example of how it could be simpler, when it's not memory safe. The requirements are different. At that point we might as well say "why use Rust when you can just use Go or TypeScript"
I’ve made my own attempt at building a social network [0] and it’s even running for me and my friends.
Here are a few things I’ve encountered during the development:
- Based on my experience, a private-only approach does not work. It’s all about network effects, and if users can’t send their posts to all their friends, they’ll just move on to a different platform
- A chronological friends-only feed is really boring. Not that it’s bad per se, but it’s hard to convince people to stay if the service is not entertaining
It could also just be that I wasn’t able to market the project right. Good luck with your attempt!
There are two sides to the argument which I think should be treated separately: a) Is it a good idea overall? and b) Is the htmx implementation good enough?
a) I think so, yes. I've seen many more SPAs that have completely broken page navigation. This approach does not fit all use cases, but if you remember that the whole idea of htmx is that you rely on the web server giving you page updates, as opposed to having a thick JS app rendering it all, the approach makes sense. And yes, JS libraries should be wrapped to function properly in many cases, but you would do the same with non-React components in any React app, for example
b) I don't think so. htmx's boost functionality is an afterthought, and it will always be like this. Compare it with Turbo [1], where this is a core feature and the approach is to use Turbo together with Stimulus.js, which gives you automagical component lifecycle management. Turbo still has its pains (my favorite GH issue [2]), but otherwise it works fine
htmx boost functionality is an afterthought in the main use case it is marketed for (turning a traditional MPA into something that feels like a SPA), but it's actually super useful for the normal htmx use case of fetching partial updates and swapping them into a page.
If you do something like <a href=/foo hx-get=/foo hx-target="#foo">XYZ</a>, the intention is that it should work with or without JavaScript or htmx available. But the problem is that if you Ctrl-click or Cmd-click, htmx swallows the Ctrl/Cmd key and opens the link in the same tab instead of in a new tab!
But if you do <a href=/foo hx-boost=true hx-target="#foo">XYZ</a>, everything works as expected: left-click does the swap in the current tab, Ctrl/Cmd-click opens in a new tab, etc.
Also, another point: you are comparing htmx's boost, one feature out of many, to the entirety of Turbo? That seems like apples and oranges.
hx-boost is an afterthought and we haven't pushed the idea further because we expect view transitions via normal navigation to continue to fill in that area
Would like to second the Turbo rec. I've had good results with it for nontrivial use cases; would like to hear from people if they have different experiences. Also, praying that everything gets cached on first load and hand-waving that view transitions will eventually work is not a position I want to hear from an engineer in a commercial context. Really happy to see the author bring more attention to how good vanilla web technologies have gotten, though.
This and similar posts are a bulletproof way to start a flame war.
Last time it was generics that were missing, now everyone is raging about sum types, and of course explicit error handling is a topic of constant concern, and why panics and not exceptions?
Go is well designed to build good software quickly. Easy dependency handling, good tooling, vast ecosystem.
Go is well designed to help developers with automation and to help them catch mistakes; that's why it's easy to parse, and all language design decisions take that into account.
It's also designed for producing a lot of code, which requires the language to be easy to understand and programs to be easy to tweak, and that's what it provides, since you'll have a lot of developers tweaking the code.
We're in the industry of shipping different kinds of products, which imposes different constraints and results in different languages being used. Also, different people care about different things, and languages form clusters of similarly minded people around them; that's a choice too.
One of the things that surprised me in the article was their usage of J2K. They’ve been using it as part of IntelliJ, alright, but why did they have to run it headless? They even mentioned that it was open sourced. And later they said that they were not able to make many improvements because it was in maintenance mode at JetBrains.
I mean, with the resources Meta has, I’m sure they could have rewritten the tool, made a fork, or done anything else to incorporate their changes (they talk about overrides) or transformed the tool into something better fitting their approach. Maybe it has been done; it’s just not clear from the article
Local state is indeed a problem that's exacerbated by swapping logic. A simple example: you have a form with a collapsible block inside, or maybe a dynamic set of inputs (imagine you're adding items to a catalog and want to allow submitting more than one). If you save the form and simply swap the HTML, the block state will be reset. Of course you could store the toggle state somewhere and pass it along to the backend, but that's already a pain compared to the SPA approach, where you don't even bother with that since no state is reset.
You could use the query string as mentioned in the article, but that's not really convenient when done in a custom way for every single case.
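To make the query-string idea concrete, here's a minimal sketch in Go of what re-rendering the collapsible block's state server-side could look like. The `details` parameter name and the markup are made up for illustration; they aren't from the article or any framework.

```go
package main

import (
	"fmt"
	"net/url"
)

// renderDetails re-renders a collapsible block from the request's query
// string, so a full HTML swap doesn't reset its open/closed state.
// The "details" parameter name is hypothetical.
func renderDetails(rawQuery string) string {
	q, err := url.ParseQuery(rawQuery)
	if err != nil {
		q = url.Values{}
	}
	if q.Get("details") == "open" {
		return `<details open><summary>More options</summary>...</details>`
	}
	return `<details><summary>More options</summary>...</details>`
}

func main() {
	fmt.Println(renderDetails("details=open")) // block stays expanded after the swap
	fmt.Println(renderDetails(""))             // default: collapsed
}
```

The pain the comment describes is that every stateful widget needs its own ad-hoc parameter like this, plus frontend code to write it back into the URL.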
Having said that, I think the way to go could be a community effort to settle on ways of handling different UI patterns and interactions, with some bigger project to serve as a testing ground. And that includes the backend part too; we can look at what Rails does with Turbo
My opinion is that the htmx (and similar) approach is different enough to potentially require a different set of UI interactions, and it usually hurts when one tries to apply React-friendly interactions to it.
Nice! Personally I think that the more niche social networks we have, the better. The big problem with the mainstream networks is that they've evolved from a medium to communicate and keep in touch with real people into a platform for influencers and businesses.
The common complaint I hear about Instagram, for example, is that every second connection of yours will try to sell/teach you something, and that's just garbage if all you need is to keep in touch with your friends.
The main problems to tackle imo are:
- Information propagation speed. This is good when you want to get a quick update, but it's also a double-edged sword, since it allows information attacks, trolls, etc.
- Scale. Anything of big scale becomes a problem in itself, since it becomes economically viable to target the platform with bots, scams, etc.
- Incentives. I think we should get to the point where social networks are run by non-profits
I've posted the link a couple of times; I'm working on my personal take on this problem[0]. My approach is the following:
- Slow down information propagation. Every post is visible to your direct connections, and to their connections if you allow it, but no further
- No way to get a connection request from a stranger. Either you specifically allow it, or it's introduced by your direct connections
- No federation, since my idea was to have small communities
- Fully open in the sense of data formats, import/export, etc. Migrating between instances is as easy as exporting posts in bulk, creating an account on another instance, and doing the import. You could do bulk updates the same way
Also, it's all Go + htmx, just in case anyone else is also tired of the modern frontend mess. I have a couple of videos on the features[1], if you like. The design is not great, since I wanted to focus on the idea itself
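The bulk export/import migration described above could be sketched roughly like this in Go. The `Post` struct and field names are hypothetical; the actual instance's schema will differ, but the round-trip idea is the same.

```go
package main

import (
	"encoding/json"
	"fmt"
)

// Post is a made-up minimal post format for this sketch.
type Post struct {
	Author string `json:"author"`
	Body   string `json:"body"`
}

// exportPosts dumps all posts as JSON; importPosts reads them back.
// Migrating between instances is just chaining the two.
func exportPosts(posts []Post) ([]byte, error) {
	return json.MarshalIndent(posts, "", "  ")
}

func importPosts(data []byte) ([]Post, error) {
	var posts []Post
	err := json.Unmarshal(data, &posts)
	return posts, err
}

func main() {
	dump, _ := exportPosts([]Post{{Author: "alice", Body: "hello"}})
	restored, _ := importPosts(dump)
	fmt.Println(len(restored), restored[0].Body)
}
```

The point is that as long as the format is open and documented, no instance can hold your data hostage.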
I've got to chime in here, because of how much this overlaps with the project I've been working on called Haven[1].
A lot of these problems go away with a decentralized/open-source private model. If your posts aren't public, then there is no spam. If everyone runs their own node of open-source (or better yet: open-protocol, i.e. RSS) software, then there is no centralized entity able to have incentives to profit off the platform.
Information propagation speed is a good call-out as dangerous. Even with all the spam/shilling/trolls removed, it still leads to the girl who's having a great time on her snowboarding trip until she posts pictures on Instagram and drops into a foul mood because not enough people immediately liked her posts.
I'd love to connect and share thoughts; feel free to reach out[2].
Just checked it; thanks for pointing to it. I think it's more of a decentralized encrypted messaging platform, whereas my idea was to have a way to constrain the visibility of conversations to naturally connected groups of people, while giving a way to slowly expand the connections, rather than fighting censorship
More or less like in real life, where you chat a lot with your friends, but not necessarily with some of their friends whom you don't know that well. In that case you would ask your friends for an introduction, and that's what I've tried to model.
One other feature I've been thinking about is making moderation automatic, in the sense of making signups possible only via invitation and putting some weight on it. Basically, if you invite somebody who misbehaves on the platform and they get flagged, you get penalized as well, unless you flag them first. My theory is that this should make users care about their digital surroundings.
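The invitation-weighted moderation idea above could look something like this as a toy Go model. The struct, penalty amounts, and the one-hop propagation are all assumptions for illustration, not the project's actual logic.

```go
package main

import "fmt"

// User is a toy model: reputation numbers and field names are hypothetical.
type User struct {
	Name       string
	Reputation int
	InvitedBy  *User
}

// flagUser penalizes the offender and, unless the inviter reported them
// first, propagates a smaller penalty one hop up the invitation chain.
func flagUser(offender *User, inviterReportedFirst bool) {
	offender.Reputation -= 10
	if offender.InvitedBy != nil && !inviterReportedFirst {
		offender.InvitedBy.Reputation -= 5
	}
}

func main() {
	alice := &User{Name: "alice", Reputation: 100}
	bob := &User{Name: "bob", Reputation: 100, InvitedBy: alice}
	flagUser(bob, false) // alice shares the blame for inviting bob
	fmt.Println(alice.Reputation, bob.Reputation) // 95 90
}
```

The escape hatch (no penalty if the inviter reported first) is what gives inviters an incentive to police their own invitees.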
By default all texts are open. There is encrypted messaging, albeit only used for private messages inside a group or to another person.
What you mention could be achieved with a nostr relay. Just permit whoever you want inside, but anyone can keep participating on the internet at large with exactly the same account.
But if you want to moderate everything inside, then Mastodon or a traditional web forum might be better suited.
I’ve been scratching my own itch lately trying to build a communication medium that I like.
IMO the problem with current social networks is their scale and public-only approach. Any network that goes this way ends up with lots of bad actors, and a public-only approach means that it’s easy to harass people and that bots are economically viable.
I’ve addressed both points [0]. Visibility of posts is limited to direct connections, you need a proxy connection to make a new one, and at the same time it’s super easy to import/export; markdown support, APIs, etc. are all there. That was my way to get meaningful discussions back.
In general, you need to look to small-scale places