> The general idea is that whenever algorithms are deciding what you see Section 230 is not in play
This isn't correct. The ruling was very narrow, with a key component being that a death was directly attributed to a trend recommended by the algorithm, one that TikTok was aware of and knew was dangerous. That part is key: from a Section 230 enforcement perspective it's basically the equivalent of not acting to remove illegal content. Basically everything we've understood about how algorithms are liable since Section 230 was enacted remains intact.
I don’t agree. The ruling used reasoning based on the 2024 NetChoice decision, in which the Supreme Court held that the actions of the moderating algorithms enjoyed First Amendment protection. The First Amendment protects you from liability for your own speech, while Section 230 protects you from liability for somebody else’s speech. Ergo, if the platform was protected by the First Amendment, then the algorithm’s output was the speech of the platform.
NetChoice had a bunch of concurring opinions, including one from ACB that essentially says they really aren’t sure how they’d rule in a case directly challenging algorithmic recommendations. That’s why I say it’s not clear what the liability situation is, and it really is baffling why TikTok chose not to appeal.
I don't think the comparison is valid. Releasing code and weights for a widely known architecture is a lot different from releasing research about an architecture that could mitigate fundamental problems common to all LLM products.
As far as I'm aware, transitive dependencies are counted in this number. So when you npm install next, the download count for everything in its dependency tree gets incremented.
Beyond that, I think there is good reason to believe that the number is inflated due to automated downloads from things like CI pipelines, where hundreds or thousands of downloads might only represent a single instance in the wild.
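To make the transitive-dependency point concrete, here's a hypothetical sketch of how one install fans out into many counter increments. The package names and tree shape are made up for illustration, and in reality the registry counts each tarball download server-side; this just models the fan-out:

```typescript
// Toy model: one `npm install` of the root package fetches a tarball for
// every package in its resolved dependency tree, so each one's download
// counter goes up by one.
type DepTree = { name: string; deps: DepTree[] };

function countDownloads(
  root: DepTree,
  counts: Map<string, number> = new Map()
): Map<string, number> {
  counts.set(root.name, (counts.get(root.name) ?? 0) + 1);
  for (const dep of root.deps) countDownloads(dep, counts);
  return counts;
}

// Hypothetical (heavily pruned) tree for illustration only.
const next: DepTree = {
  name: "next",
  deps: [
    { name: "postcss", deps: [{ name: "picocolors", deps: [] }] },
    { name: "styled-jsx", deps: [] },
  ],
};

// One install of "next" bumps four packages' counters.
console.log(countDownloads(next));
```

So a deep dependency sitting under a popular framework can rack up enormous download numbers without anyone ever installing it directly.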
It's certainly not uncommon to cache deps in CI. But at least at some point, CircleCI was so slow at saving and restoring the cache that it was actually faster to just download all the deps. Generally speaking, for small/medium projects installing all deps is very fast and bandwidth is basically free, so it's natural that many projects don't cache any of it.
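For reference, the caching dance being discussed looks roughly like this in CircleCI's 2.x config format (a sketch; the cache key and paths are illustrative):

```yaml
steps:
  - checkout
  # Restore the npm cache keyed on the lockfile. On a slow cache backend,
  # this restore/save round trip can cost more than a fresh install.
  - restore_cache:
      keys:
        - deps-{{ checksum "package-lock.json" }}
  - run: npm ci
  - save_cache:
      key: deps-{{ checksum "package-lock.json" }}
      paths:
        - ~/.npm
```

Projects that skip the restore_cache/save_cache steps and just run npm ci every build are exactly the ones inflating the registry's download counters.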
Optimizing for disk space is very low on the priority list for pretty much every game, and this makes sense since it's very low on the list of customer concerns relative to things like in-game performance, netcode, tweaking game mechanics, balancing, etc.
Apparently, in-game performance is not more important than pretty visuals. But that's based on hearsay / what I remember reading ages ago, I have no recent sources. The tl;dr was that apparently enough people are OK with a 30 fps game if the visuals are good.
I believe this led to a huge wave of 'laziness' in game development, where framerate wasn't too high up in the list of requirements. And it ended up in some games where neither graphics fidelity nor frame rate was a priority (one of the recent Pokemon games... which is really disappointing for one of the biggest multimedia franchises of all time).
That used to be the case, but this current generation the vast majority of games have a 60 fps performance mode. On PS5 at least, I can't speak about other consoles.
Seems to me the root problem here is poor security posture from the package maintainers. We need to start including information about the publisher chain of custody in package metadata, so that we can recursively audit packages that don't have a secure deployment process.
7.0 added scalar type declarations and a mechanism for strict typing. PHP 8.0 added union types and the mixed type. PHP enforces types at runtime; JavaScript/TypeScript do not. PHP's type system is built into the language, while with JS you need either JSDoc or TypeScript, neither of which enforces runtime type checks, and TypeScript even adds a build step. PHP-FPM also lets you mostly not worry about concurrency because of its isolated per-request process model; with JS-based apps you have to be very careful about concurrency because of how easy it is to create and access global state.
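A quick sketch of the runtime-enforcement difference (hypothetical values): PHP with declare(strict_types=1) would throw a TypeError at the call boundary for a mistyped argument, while compiled TypeScript just runs, because its annotations are erased:

```typescript
// TypeScript annotations are erased at compile time, so a value that enters
// through JSON.parse can violate the declared type without any error.
function addToSelf(n: number): number {
  return n + n; // if n is secretly a string, this concatenates instead
}

// The cast tells the compiler to trust us -- nothing verifies it at runtime.
const fromJson = JSON.parse('{"n": "41"}') as { n: number };

console.log(addToSelf(fromJson.n)); // "4141", not 82 -- no runtime check fired
```

The equivalent mistake in strict-typed PHP dies loudly at the function boundary, which is the enforcement gap being described above.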
PHP also added a lot of syntactic sugar over time, most recently my beloved pipe operator in 8.5.
And the ecosystem is not as fragile as JavaScript's.
I've tested this: the LLM will tend to strongly pattern-match to the closest language syntactically, so if your language is too divergent you have to continually remind it of your syntax or semantics. But if your language is just a skin for C or JavaScript, it'll do fine.
Is it? or is the new part that it's being reported? This "news" just looks like an investigation AP conducted on its own. Could they have conducted it years ago, and what would they have found then?
That's the nature of statistical output, even minus all the context manipulation going on in the background.
You say the outputs "seem" to drop off at a certain time of day, but how would you even know? It might just be a statistical coincidence, or someone else might look at your "bad" responses and judge them to be pretty good actually, or there might be zero statistical significance to anything and you're just seeing shapes in the clouds.