There are so many examples of large companies and open source projects that have moved off of npm that I just don’t think this is true anymore. pnpm and Bun are fast enough that the increase in development velocity is worth the occasional rough edge IMO.
>pnpm and Bun are fast enough that the increase in development velocity is worth the occasional rough edge IMO.
If the speed of your package manager is causing issues for developer velocity, you have much bigger issues to contend with. And I categorically reject the statement that either of those is meaningfully faster in any way. Maybe you can point to some specific obscure benchmarks with slightly smaller numbers, but all of that goes out the window the second a dev is stuck with one of those "rough edges" even once. Not to mention the lock-in you've achieved on the tooling front now that your entire stack is nonstandard and reliant on a single, highly specific list of dependencies to work, which may or may not even be kept in line with their node/npm counterparts.
Have you ever worked in a monorepo? With at least 5 apps and at least 5 packages, each with direct dependencies, devDependencies and testing libraries?
The number of packages you’ll need to download for a full dev environment can get really big really quickly, even if your end-user bundle doesn’t have many dependencies at all. I’ve worked on projects where an npm install took five minutes and a bun install took ten seconds. In the real world this makes a big difference.
Have you tried using them? Installing packages is way, way faster. Here’s an example of how this is meaningful to an organization, and I’ve personally experienced the same exact thing at my last 2 jobs.
To list some projects and companies that aren’t on npm (the client, not the registry): Prettier, Next.js/Vercel, Cloudflare, Hono, Zod, Expo, Tamagui, Tailwind; the list goes on. I actually had trouble finding major JS projects that still use npm itself. These are serious, widely used packages, and they chose non-standard tooling for a reason.
The post describes moving from an old Yarn version that still suffered from a long-since-fixed problem npm had with tree shaking. In fact, their inability to port to the newest Yarn version just highlights my point. Modern npm has solved all of these issues without the compatibility problem.
>"Yarn v2 introduced several new features, including a different approach to managing the node_modules folder by eliminating it altogether through its Plug’n’Play mode."
And this is just complete insanity.
That aside, I can see that there's no real argument against pnpm at this point. It wouldn't be the end of the world. I just don't buy saving 20 seconds in CI as a legitimate reason for it.
I doubt the reservation is with the language/runtime itself, especially with Elixir and the BEAM. More likely, it’s with the maturity of the community. Especially at a small startup, building on Elixir even today might still mean having to build things in-house that you wouldn’t have to in Django/Rails/JS.
The moat comment depends on the company; not all, or even most, businesses depend on actual innovation as their moat.
Also, I think we’re agreeing here, but there are a huge set of things that you may need to build an application that aren’t the core value prop of your company. Buying into a more mature ecosystem makes it more likely that you don’t have to build those things and can spend more time on the moat stuff.
None of those problems and solutions are unique to either tech choice. You can use OpenAPI for documentation and a variety of libraries like React Query for caching, and N+1 issues exist (and are arguably more complicated to solve) in GraphQL.
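For anyone who hasn’t hit it, here’s the GraphQL N+1 shape in miniature; the `db` data layer here is hypothetical, just to make it concrete:

```ts
// Hypothetical data layer, purely illustrative.
declare const db: {
  posts: { findMany(): Promise<Array<{ id: string; authorId: string }>> };
  users: { findById(id: string): Promise<{ id: string; name: string }> };
};

// A list resolver returns N posts; the `author` field resolver then runs
// once per post: 1 query for the list + N queries for the authors,
// unless you batch them (e.g. with a DataLoader-style pattern).
const resolvers = {
  Query: {
    posts: () => db.posts.findMany(),
  },
  Post: {
    author: (post: { authorId: string }) => db.users.findById(post.authorId),
  },
};
```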
The problem with that is that when consuming the dictionary, “doesn’t know” is actually more appropriate. If you then access Object.values(foo) in your method, you are given an iterable of anys, which is unsafe.
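A quick sketch of that failure mode (variable names are made up):

```ts
// With `any` values, Object.values hands back any[]: the compiler
// happily lets you call methods that may not exist.
declare const loose: Record<string, any>;
const looseValues = Object.values(loose); // any[]
looseValues[0].toUpperCase(); // compiles even if the value is a number

// With `unknown` values, you must narrow before touching anything.
declare const strict: Record<string, unknown>;
const strictValues = Object.values(strict); // unknown[]
const first = strictValues[0];
// first.toUpperCase(); // error: 'first' is of type 'unknown'
if (typeof first === "string") {
  first.toUpperCase(); // fine after the typeof narrowing
}
```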
If the function is doing something unsafe with the values, sure. My point was that the more relaxed constraint on the type signature can be used to imply it’s only concerned with the dictionary’s keys.
Coming back to this after the other thread cooled down a bit lol - to me, unknown actually implies that the function doesn’t care about the values more strongly than any does, since the compiler will enforce that they’re never used without being narrowed first.
And in any case, I’d almost always lean towards the option with stronger type safety guarantees. Especially in a team environment when someone else may be modifying your code later. As a convention, I almost never use any.
To me `unknown` signals an intent to know, as in “I don’t know yet” (or put another way, “I don’t have any prerequisites for accepting this value, I can and will narrow it as appropriate”). In a codebase that otherwise takes type safety seriously, `any` in a type parameter (again to me) means “I don’t have any type narrowing agenda for this thing, it’s along for the ride and coming out the other side the same way it showed up”.
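If it helps, here’s the contrast as I read it in code (the function names are mine, purely illustrative):

```ts
// `unknown`: "I don't know yet" - the compiler makes me narrow
// before I can do anything with the value.
function describe(value: unknown): string {
  if (typeof value === "string") return value;
  return String(value);
}

// `any` in a type parameter: "along for the ride" - the values pass
// through untouched, and I make no claims about them.
function logKeys(dict: Record<string, any>): Record<string, any> {
  console.log(Object.keys(dict));
  return dict; // comes out the other side the same way it showed up
}
```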
I’m not being lazy? I am taking extra time and expending extra energy to make sure the metadata I put in code is as informative as possible. In this use case `any` carries more information. I use `unknown` in place of `any` exactly as it’s intended wherever I’m able.
Okay I'll stop being a jerk and engage in good faith. I'll assume you think this is a valid use case for any and it communicates something important.
My counterpoint is this: communication involves two things, someone stating a message and someone receiving a message. You are doing part one. Is part two occurring? It may be because of certain conventions in your codebase or team, but I've personally never seen any used in that manner ever, so I would not receive your intended message.
If I were in the same situation I would use unknown and add a comment stating that the type is of no importance since I'm only worried about the keys. That way my message is clear and I prevent future developers from having to debug code where they assume the value is of a certain type and start accessing parameters and methods that do not exist.
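Roughly, that would look like this (the function name is made up):

```ts
// Values are deliberately `unknown`: this function only cares about the
// keys, and the compiler will block anyone from dereferencing the values
// without narrowing them first.
function sortedKeys(dict: Record<string, unknown>): string[] {
  return Object.keys(dict).sort();
}

sortedKeys({ b: 2, a: "anything" }); // ["a", "b"]
```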
Valid feedback. I even thought of adding it myself, because implied stuff isn’t obvious. I felt it worth communicating because there’s value in what’s implied that isn’t available in the type system. To the extent I have team members consuming the same code, I would definitely communicate the intent. To the extent I have reviewers who read the code, I do discuss it.
To the extent this is in a type parameter position, the onus is on the person writing the function signature and… well if they don’t want a footgun, they have every opportunity to not gun their foot. But that’s entirely opt in by the time they’ve reached that point.
Completely aside from these details, thank you for stepping back and discussing this in good faith. It made a discussion that was going badly feel at least like communication. I appreciate that, and I’m glad to land at a place where we’re not necessarily on the same page but we’re at least recognizing we have similar priorities. Cheers!
That’s pretty much the intent of the constraint. I don’t have time to sit with a type checker right now, but I don’t think we disagree as much as you might think. I arrived at this from years of trying to find the best way to express types which are as strict as possible with as much clarity as possible.
Unfortunately the `object` type is basically any non-primitive value, and `{}` is any non-nullish value. They both intuitively mean what I want. They also inherently allow PropertyKey keys, which is effectively `Record<string | number | symbol, any>`, which is looser than the “dictionary” type I often want to accept in these scenarios.
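A quick illustration of how loose these are (this all compiles, as far as I can tell):

```ts
const a: object = [1, 2, 3]; // arrays count as objects
const b: {} = "hello";       // {} admits any non-nullish value
const c: {} = 42;            // numbers too

// And a Record keyed on strings with `any` values accepts nearly any
// object shape, class instances included:
const d: Record<string, any> = new Date(); // no complaint from the compiler
```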
A better question (for me, and maybe you and maybe all of us who want type certainties) is why we even accept dictionaries in object shapes when Map is the obvious expression of that type. I’ve repeatedly wanted that and shied away from it because it requires too much change for very little gain.
I think with Map the answer is somewhere between momentum ("we've always used objects as dictionaries") and some misapprehensions easily dispelled with basic caniuse statistics. Map still feels "too new" to some developers, despite being ES2015 (8 years old now!): every browser that supports arrow functions has Map (and Set) out of the box, no polyfills needed in 2023, ever.
Probably the only other reason I've seen is that "JSON interop" is "hard" because Map doesn't natively serialize. I think `new Map(Object.entries(oldDictionaryObject))` and `Object.fromEntries(someMap.entries())` are sufficient for most serializer boundary cases (even without getting fancy and writing a true JSON reviver/replacer pair).
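The whole round trip, with throwaway sample data:

```ts
const asObject = { alice: 1, bob: 2 };

// Object -> Map on the way in:
const asMap = new Map(Object.entries(asObject));
asMap.get("alice"); // 1

// Map -> Object on the way out, then stringify as usual:
const json = JSON.stringify(Object.fromEntries(asMap.entries()));
// json === '{"alice":1,"bob":2}'
```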
As an addendum, if the function only needs the keys, I would possibly just make the parameter a string[] and expect the caller to pass Object.keys(obj) (sketched below).
That way the function isn't asking for parameters it doesn't really care about.
Though I do get the appeal of having the function call Object.keys itself if it's called frequently, so as not to have to sprinkle that call everywhere.
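The string[] version, concretely (names made up):

```ts
// The function asks only for what it actually uses: the keys.
function logEachKey(keys: string[]): void {
  for (const key of keys) console.log(key);
}

// Callers hand over Object.keys at the call site:
logEachKey(Object.keys({ a: 1, b: 2 }));
```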
Yeah unfortunately it’s ergonomically A Thing to just accept object as input even if you only care about keys. Otherwise I’d have the exact signature you describe.
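For what it’s worth, part of that ergonomic pressure is a TypeScript quirk I believe still holds: interface types don’t get implicit index signatures, so they’re rejected by `Record<string, unknown>` but accepted by `Record<string, any>`. A sketch (all the names are mine):

```ts
interface User {
  name: string;
  age: number;
}
declare const user: User;

declare function keysStrict(dict: Record<string, unknown>): string[];
declare function keysLoose(dict: Record<string, any>): string[];

// keysStrict(user); // error: Index signature for type 'string' is missing in type 'User'
keysLoose(user);     // compiles fine
```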