Could you expand on how these wraparound bugs happen in Rust? As far as I know, integer overflow panics (i.e. crashes the program) when compiled in debug mode, which I think is often used for testing.
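A minimal sketch of the behavior I mean, assuming the default Cargo profiles (`overflow-checks` on in debug builds, off in release builds):

```rust
fn main() {
    let x: u8 = u8::MAX;

    // `x + 1` panics in debug builds ("attempt to add with overflow")
    // but silently wraps to 0 in release builds, unless the release
    // profile opts into `overflow-checks = true`.
    // let y = x + 1;

    // The wrapping_* methods wrap in *both* profiles, by design:
    let y = x.wrapping_add(1);
    assert_eq!(y, 0);
}
```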
Because adding the top limbs of two encoded numbers would overflow too soon. If you set both to 2^63, for example, they overflow immediately. That might be fine for wraparound arithmetic, but not in general.
Setting both to 2^63 means your original 256-bit numbers were at least 2^255, thus the addition would overflow no matter what intermediate encoding you're using.
Sure, then set one to 2^62 and the other to -2^62 (namely 0b1100..00). That's an overflow as far as unsigned arithmetic is concerned, but not for signed arithmetic.
That said, when you're dealing with 256-bit integers, you're almost assuredly not working with signed arithmetic.
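To make the two readings concrete, a small sketch (my own illustration; `checked_add` returns `None` exactly where plain `+` would panic in a debug build):

```rust
fn main() {
    // Unsigned reading: two top limbs of 2^63 overflow u64 immediately.
    let hi: u64 = 1 << 63;
    assert!(hi.checked_add(hi).is_none());

    // Signed reading: the bit pattern 0b1100..00 is -2^62 as an i64,
    // so 2^62 + (-2^62) is a perfectly fine signed addition.
    let a: i64 = 1 << 62;
    let b: i64 = (0b11u64 << 62) as i64;
    assert_eq!(b, -(1i64 << 62));
    assert_eq!(a.checked_add(b), Some(0));
}
```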
For aggregation-like things, the interesting properties are often those of the accumulation function, not of the entire aggregation, which should then be correct by extension. So for your `sum` example, you'd use PBT to check that your `+` works first, and only then come up with things that should hold on top of that when repeatedly applying your operation. For example, once you have a list of numbers and know its sum, appending one additional number to the list and taking the sum again should lead to the same result as if you had added the number to the sum directly (barring non-associative shenanigans like with floating point - but that should have already been found in the addition step ;) ).
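As a sketch of that incremental-sum property in Rust (assuming the proptest crate; the names are mine, and the wrapping adds keep the test itself from tripping debug overflow panics):

```rust
use proptest::prelude::*;

// Wrapping sum of a slice, folding with the operation under test.
fn wrapping_sum(xs: &[i64]) -> i64 {
    xs.iter().copied().fold(0, i64::wrapping_add)
}

proptest! {
    // Appending one element and re-summing must agree with adding
    // that element directly to the previous sum.
    #[test]
    fn incremental_sum_agrees(
        xs in prop::collection::vec(any::<i64>(), 0..64),
        x in any::<i64>(),
    ) {
        let old = wrapping_sum(&xs);
        let mut ys = xs;
        ys.push(x);
        prop_assert_eq!(wrapping_sum(&ys), old.wrapping_add(x));
    }
}
```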
There's a bunch of these kinds of patterns (the above was inspired by [0]) that are useful in practice, but unfortunately rarely talked about. I suppose that's because most people end up replicating their TDD workflows and just throwing more randomness at it, instead of throwing more properties at their code.
> Throughout the rest of the paper, we assume an adversary who at some point managed to obtain the private identity key IK, the phone number pn, the API username unA, and the password pwA.
This is basically fully owned access already, right? If you have everything that is required to authenticate to the Signal servers, of course you can register new devices. In that scenario, there's not much you can do to protect against this, and even the proposed countermeasures could conceivably be worked around by an attacker. Signal also seems to view the paper that way:
> We disclosed our findings to the Signal organization on October 20, 2020, and received an answer on October 28, 2020. In summary, they state that they do not treat a compromise of long-term secrets as part of their adversarial model. Therefore, they do not currently plan to mitigate the described attack or implement one of the proposed countermeasures.
Location: EU, Austria
Remote: Possible
Willing to relocate: Yes, for some offers
Technologies: C, C++, Julia, Python, SQL, Linux, Git/Github, Docker, TrueNAS, Wireguard, CoreDNS, AVR, ARM, RISC-V
Résumé/CV: ~3 years DevOps (mostly Python/Proxmox/TrueNAS); Email for more extensive CV
Email: valentin (at) bogad (dot) at please mention "HNMay2024" somewhere in the email
LinkedIn: https://www.linkedin.com/in/vbogad
Site: https://seelengrab.github.io
GitHub: https://github.com/Seelengrab
In a previous position I maintained a multisite Linux environment for ~20 people. I'm very open-source friendly, experimenting with compiling Julia bare-metal for embedded devices. I've also created Supposition.jl (https://github.com/Seelengrab/Supposition.jl), a Hypothesis-inspired property-based testing/fuzzing framework for Julia.
> (10c) the mere fact that an open-source software product receives financial support by manufacturers or that manufacturers contribute to the development of such a product should not in itself determine that the activity is of commercial nature.
> (10) Accepting donations without the intention of making a profit should not be considered to be a commercial activity.
> (10c).. for the purpose of this Regulation, the development of products qualifying as free and open-source software by not-for-profit organisations should not be considered a commercial activity as long as the organisation is set up in a way that ensures that all earnings after cost are used to achieve not-for-profit objectives.
> the mere fact that an open-source software product receives financial support by manufacturers or that manufacturers contribute to the development of such a product should not in itself determine that the activity is of commercial nature.
That just means that a business can donate to a non-profit project. Such a business would still need to not profit from the project in any way. Why would a business help develop something it does not profit from?
> for the purpose of this Regulation, the development of products qualifying as free and open-source software by not-for-profit organisations should not be considered a commercial activity as long as the organisation is set up in a way that ensures that all earnings after cost are used to achieve not-for-profit objectives
So again, an organisation can, provided it makes no profit.
The idea is that you can accept donations to cover the costs, but not beyond that.
So an organisation can pay developers to work on it, cover hosting costs, etc., but it has to be careful not to accept donations for more than that. A non-profit can accept more, provided it is used for the right objectives.
I have no idea (neither does the author of the article) where that leaves an individual developer who accepts donations to cover the value of their time.
> But often the packages also talk through simple accesses of values like `X.someproperty`. I've seen situations where maintainers would add and remove these properties and break their own other libraries. Better "enforcement" of these types of things - however that would look like - would be a huge improvement for the time sink that is maintaining a Julia package.
I think this is a cultural issue to a large degree. Even if there were better enforcement, it's still a question of coding discipline to then actually not break something. If it were easier to anticipate what sort of breakage a change could entail (say, by making it clear and documented _by the core language_ what is considered a breaking change, and then actually not doing even "technically breaking" changes, using deprecations instead), this COULD change.
That being said, that requires a very different development workflow than what is currently practiced in the main repos of the language, so there's quite a bit of an uphill battle to be had (though such issues happen there all the time, so solving that would actually help here the most, ironically).
There is no rebuttal because nothing much has really changed culture-wise. Sure, the various @inbounds issues and concrete bugs mentioned in Yuri's post have mostly been addressed, but the larger point (that is, "what can I actually expect/get guaranteed when calling a given function?") definitely hasn't been, at least not culturally (there are parts of the ecosystem that are better at this, of course). Documentation of pre- and postconditions is still lackluster, PRs trying to establish that for functions in Base stall for unclear reasons/don't get follow-ups, and when you try to talk about that on Slack, retorts boil down to "we're tired of hearing you complain about this" instead of trying to find a systemic solution to the problem. Until that changes, I have large doubts about Yuri's post losing relevance.
My own efforts (shameless plug: https://github.com/Seelengrab/PropCheck.jl for property-based testing inspired by Hedgehog, and https://github.com/Seelengrab/RequiredInterfaces.jl for somewhat formalizing "what methods are needed to subtype an abstract type") are unused in the wider community as far as I can tell, in spite of people speaking highly of them when coming across them. I don't think Keno's InterfaceSpecs.jl is the way forward either - I think there's quite a lot of design space left in the type system that the language could explore without reaching for Z3 and other SAT/SMT solvers. I personally attribute the lack of progress on that front to the lack of coherent direction in the project at large (and specifically not to the failings of individuals - folks are always very busy with their lives outside of Julia development/other priorities). This is in spite of the fact that making this single area better could be a big boon for attracting more traditional software engineers, who are very underrepresented in the community.
There is a culture of well-defined interfaces which are checked at compile time. This is something that was emphasized in recent posts and changes such as:
* SciMLStyle (https://github.com/SciML/SciMLStyle) came into existence and defines a style which avoids any of the behaviors seen in the blog post.
* Julia came out with package extensions in v1.9 (https://www.youtube.com/watch?v=TiIZlQhFzyk), and with the interface checking, implicit interface support was mostly turned into explicit interface support, where packages and types opt in by defining traits (this is still evolving, but is mostly there).
Given all of these changes, most of the things in the blog post would now error in many standard packages, unless someone explicitly defines interface trait functions to allow in an object that doesn't satisfy the interface it claims to. Of course, not every person or every package has changed, but what I described here are major interface, style, and cultural changes to some of the most widely used packages since 2020.
> with the interface checking, implicit interface support was turned mostly into explicit interface support where packages and types opt-in via defining traits (this is still continuing to evolve but is mostly there)
What interface checking? Base doesn't provide any such facilities. Package extensions are still ad-hoc constructions on a package-by-package basis; there is little to no consistency.
> Of course, not every person or every package has changed, but what I described here are major interface, style, and cultural changes to some of the most widely used packages since 2020.
And none of that has landed in Base, none of that is easy to find out about/discoverable for non-SciML users. SciML is not all of Julia, and certainly not Base or packages outside of SciML. Please don't frame this as if SciML was the only thing that mattered here.
Yes, it has not landed in Base, but you're acting like there has not been a general cultural change. I showed, with receipts, that many of the most widely used packages in the Julia ecosystem have adopted new infrastructure, systems, and tooling to address these problems in the last 3 years. With SciML and JuMP both having adopted such systems, this accounts for roughly 50% of the top 100 most starred Julia packages according to current metrics (Nov 20 2023), with many of the packages not doing this largely being interface packages (plotting, notebooks, and language interop); if you account for those not having this issue, it's closer to 2/3 of the top 100 most starred packages. I also want that number to be 100%, and the compiler team is having weekly discussions with us about our needs, given that there are now successful parts of the ecosystem to model this tooling on, and so yes, it can get better and we need to keep improving. But to claim that no shift in culture has occurred implies that all of this doesn't exist, even though we can point to the receipts.
50% of the top 100 most starred, i.e. most used, packages is not representative of the entire community. Not to mention that the vast majority of those SciML packages are developed by a relatively small group of people, compared to the rest of the ecosystem. If all a potential user cares about is SciML, good for them! I've repeatedly said that users not looking for SciML are left behind.
Yes, SciML is doing good. I'm not denying that, and never have. Still, the rest of the community/package ecosystem is not good at catching up - which is what I'm criticizing.