As far as I know, these 100+ dev dependencies are installed by default.
Yes, you can probably avoid it, but it will likely break something during the build process, and most people just stick to the default anyway.
> Reproducible builds, or don’t use those packages.
No. They’re only installed if you git clone react and npm install inside your clone.
They are only installed for the topmost package (the one you are working on); npm does not recurse through all your dependencies and install their devDependencies.
The best tool for your median software-producing organization, which can't just hire a team of engineers to do this, is update embargoes. By default you block updating packages until they've been on the registry for a month or whatever, allowing explicit exceptions where needed. It would protect you from all the major supply-chain attacks that have been caught in the wild.
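To make that concrete, here's a rough sketch of what an embargo gate could look like, using the npm registry's per-version publish timestamps. The package name and the 30-day window are just placeholders; a real setup would wire this into CI, and I believe npm's `--before` date option plus lockfiles already get you part of the way there.

```ts
// Hypothetical embargo check: refuse versions published too recently.
// Assumes the public npm registry's packument "time" field (version -> ISO date).
const EMBARGO_DAYS = 30;

async function isOldEnough(pkg: string, version: string): Promise<boolean> {
  const res = await fetch(`https://registry.npmjs.org/${pkg}`);
  if (!res.ok) throw new Error(`registry lookup failed for ${pkg}`);
  const meta = await res.json();
  const published = meta.time?.[version];
  if (!published) throw new Error(`no publish date for ${pkg}@${version}`);
  const ageInDays = (Date.now() - Date.parse(published)) / 86_400_000;
  return ageInDays >= EMBARGO_DAYS;
}

// Example: gate a proposed update before it lands in the lockfile.
isOldEnough("left-pad", "1.3.0").then((ok) =>
  console.log(ok ? "old enough, allow the update" : "embargoed, wait"),
);
```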
In security-sensitive code, you take dependencies sparingly, audit them, and lock to the version you audited and then only take updates on a rigid schedule (with time for new audits baked in) or under emergency conditions only.
Not all dependencies are created equal. A dependency with millions of users under active development with a corporate sponsor that has a posted policy with an SLA to respond to security issues is an example of a low-risk dependency. Someone's side project with only a few active users and no way to contact the author is an example of a high-risk dependency. A dependency that forces you to take lots of indirect dependencies would be a high-risk dependency.
Practically, unless your code is super super security sensitive (something like a root of trust), you won't be able to review everything. You end up going for "good" dependencies that are lower risk. You throw automated fuzzing and linting tools at it, and these days ask AI to audit it as well.
You always have to ask: what are the odds I do something dumb and introduce a security bug vs. what are the odds I pull a dependency with a security bug? If there's already "battle hardened" code out there, it's usually lower risk to take the dep than do it yourself.
This whole thing is not a science, you have to look at it case-by-case.
If that is really the case (I don't know the numbers for React), projects with sane security criteria would either only jump between versions that have passed a complete verification process (think industry certifications), or such an enormous number of dependencies would simply render the framework an undesirable tool to use, so they would just avoid it. What's not serious is living the life and blindly incorporating 15-17K dependencies because YOLO.
(so yes, I'm stating that the 99% of JS devs who _do_ precisely that are not being serious, but at the same time I understand they just follow the "best practices" that the ecosystem pushes downstream, so it's understandable that most don't want to swim against the current when the whole ecosystem itself is not being serious either)
> How do you do that practically? Do you read the source of every single package before doing a `brew update` or `npm update`?
There are several ways to do this. What you mentioned is the brute-force method of security audits, which may be impractical, as you allude to. There are also tools (static analysis, dependency scanners) designed to catch security bugs in source code. While they will never be perfect, these tools significantly reduce the manual effort required.
Another obvious approach is to crowdsource the verification. This can be achieved through security advisory databases like Rust's RustSec [1]. Rust has tooling that uses the RustSec data to do the audit (cargo-audit), and there's even a way to embed the dependency tree information in the target binary (cargo-auditable). Similar tools exist for other languages too.
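For a sense of how little glue that takes outside of Rust, here's a sketch that asks osv.dev (which, as far as I know, aggregates RustSec, the GitHub advisory database, and others) whether a given package version has known advisories. The package name, version, and ecosystem below are just placeholders:

```ts
// Sketch: check one dependency version against a crowd-sourced advisory DB.
// Assumes osv.dev's v1 query API; adjust "ecosystem" for crates.io, PyPI, etc.
async function advisoriesFor(name: string, version: string, ecosystem = "npm") {
  const res = await fetch("https://api.osv.dev/v1/query", {
    method: "POST",
    headers: { "content-type": "application/json" },
    body: JSON.stringify({ version, package: { name, ecosystem } }),
  });
  if (!res.ok) throw new Error(`OSV query failed: ${res.status}`);
  const data = await res.json();
  return data.vulns ?? []; // empty array means no known advisories
}

// Example: run this for every entry in your lockfile during CI.
advisoriesFor("lodash", "4.17.20").then((vulns) =>
  console.log(`${vulns.length} known advisories`),
);
```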
> What if these sources include binary packages?
Binaries can be audited if reproducible builds are enforced. Otherwise, it's an obvious supply chain risk. That's why distros and corporations prefer to build their software from source.
More useful than reading the code, in most cases, is looking at who's behind the code. Can you identify the author? Do they have an identity and reputation in the space? Are you looking at the version of the package they manage? People often freak out about the number of packages in such ecosystems but what matters a lot more is how many different people are in your dependency tree, who they are, and how they operate.
(The next most useful step, in the case where someone in your dependency tree is pwned, is to not have automated systems that frequently update to the latest version. Hang back a few days or so at least so that any damage can be contained. Cargo does not update to the latest version of a dependency on a build, because of its lockfiles: you need to run an update manually)
> More useful than reading the code, in most cases, is looking at who's behind the code. Can you identify the author? Do they have an identity and reputation in the space?
That doesn't necessarily help you in the case of supply chain attacks. A large proportion of them are spread through compromised credentials. So even if the author of a package is reputable, you may still get malware through that package.
Normally it would only be the diff from a previous version. But yes, it's not really practical for small companies or individuals at the moment. Larger companies do exactly this.
We need better tooling to enable crowdsourcing and make it accessible for everyone.
It’s mentioned in the books, kopeng. I think it comes up in some of the repair scenes, but there’s such a jargon dump in many of them that it might slip by. Naomi is caressing some of it at one point, like she’s petting a cat. Which is not far off from how she sees the Roci.
I can think of two in the show, but one is right before Holden needs to tell Nagata something important, and the other is in the middle of a brain dump at Tycho station when the Roci is being diagnosed for repairs.
Look, I love giving people the benefit of the doubt, but that's not why this pricing model exists. It's because they want to capture a percentage of the value delivered, and the easiest way to do that is to charge by executions.
When Bill C-26 was introduced, OpenMedia and our partners in civil society.. unpacked what the bill meant for Canadians, raised the alarm about its risks, and put forward practical recommendations to improve cybersecurity without compromising privacy. Civil liberties groups, academics, and experts joined us in calling for change. While a few of our fixes were adopted, most were ignored. Those unfinished issues now carry over into Bill C-8.. This campaign is about restarting the national conversation on cybersecurity and privacy. If we push harder this time, we can shape Bill C-8 into the law Canadians want and deserve.
> H-1B visa holders are REQUIRED to leave the country to renew their visas every few years
This part is not exactly true. You can renew H1B indefinitely within the USA (every 3 years; you need a pending green card application from the 2nd extension onwards, i.e. after 6 years). However, if you leave the US for any reason, you won't be able to re-enter the USA without a renewed visa stamp from a US embassy. The two exceptions are Canada and Mexico: you can visit either for less than 30 days without triggering the visa stamp requirement.
No, it's correct. The original comment said "to renew their visas". The visa expires on its expiration date; the H1B status itself, however, is extended. Hope this helps.
Even being generous and saying it's a year, most capital expenditures depreciate over a period of 5-7 years. To state the obvious, training one model a year is not a saving grace.
I don't understand why the absolute time period matters — all that matters is that you get enough time making money on inference to make up for the cost of training.
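To put made-up numbers on it: if training a model costs $5B and it clears $500M/month in inference margin, it pays for itself in 10 months, regardless of whether the accountants depreciate the spend over 5-7 years. The depreciation schedule only spreads the cost on paper; the break-even is just training cost divided by monthly inference margin, compared against how long the model stays useful.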
This was a surprisingly big thing back in the early 2000s with The War Against Terror. I think that it was mostly for reasons of 'chilling effect', but the media made everyone aware that the Department of Homeland Security were paying attention to what books people took out of the library.
What was curious about this was that, at the time, there were few dangerous books in libraries. Catcher in the Rye and 1984 was about it. You wouldn't find a large print copy of Che Guevara's Guerrilla Warfare, for instance.
I disagree that libraries minimise the risk of anyone knowing who is reading what. On the web, where so much is tracked by low-intelligence marketing people, there is more data than anyone can deal with. In effect, nobody can follow you that easily, only machines, with data that humans can't make sense of.
Meanwhile, libraries have had really good IT systems for decades, with everything tracked in a meaningful form with easy lookups. These systems are state owned, therefore it is no problem for a three letter agency to get the information they want from a library.
Libraries don't tend to have consolidated, centralized IT. As a result, TLAs have to actually make subpoenas to the databanks maintained by individual, regional library groups, and the ALA offers guidelines on how to respond to those (https://www.ala.org/advocacy/privacy/lawenforcement/guidelin...).
This, of course, doesn't mean your information is irretrievable by TLAs. But the premise of "tap every library to bypass the legal protections against data harvesting" is much trickier when applied to libraries than when applied to, say, Google. They also aren't meaningfully "state-owned" any more than the local Elk's Club is state-owned; the vast majority of libraries are, at most, a county organ, and it is the particular and peculiar style of governance in the United States that when the Feds come knocking on a county's door, they can also tell them to come back with a warrant. That's if the library is actually government-affiliated at all; many are in fact private organizations that were created by wealthy donors at some point in the past (New York Public Library and the Carnegie Library System are two such examples).
Many libraries also purposefully discard circulation data so as to minimize the surface area of what can be subpoena'd. New York Public Library for example, as a matter of policy, purges the circulation data tied to a person's account soon after each loaned item is returned (https://www.nypl.org/help/about-nypl/legal-notices/privacy-p...).
Have you seen the list of books fascists want to ban? I think GP's point was exactly to emphasize that when we're talking about "dangerous books", we're talking about books that indicate you might not be a line-toeing member of The Party. We're talking about any book that any powerful person decides is some sort of threat, even if it's merely a threat to their ego.
Not dangerous at all! An analogy would be comparing a pea-shooter to an automatic rifle, or a thimble full of shandy when compared to a gallon of vodka. There is not a dangerous word in my local library!
Not having a dependency management system isn't a solution to supply chain attacks; auditing your dependencies is.