> It's better to use a constant time algorithm, but that's harder to do in a curve generic way and has a pretty significant performance impact (particularly before the safegcd paper).
Crypto noob here, but isn't modular inverse the same as modular exponentiation through Fermat's little theorem? I.e., x^-1 mod n is the same as computing x^{n - 2} mod n which we know how to do in a constant-time way with a Montgomery ladder.
Or is that too slow?
The powering ladder is unfortunately quite slow compared to the obvious vartime algorithm, which is what tempts these things to have a vartime algorithm in the first place, though "too slow" depends on the application. It doesn't help that these chips are underpowered to begin with.
Aside: for the FLT powering ladder, n needs to be prime, but it isn't when there is a cofactor, though there is a generalization that needs phi(n)... I probably shouldn't have made a comment on the issue of being curve-specific, since the problem is worse for sqrt().
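A minimal sketch of the FLT inverse with a Montgomery-style ladder, for a prime modulus. Python is used for clarity only; Python's big-integer arithmetic is not constant-time, so this illustrates the fixed operation sequence of the ladder rather than a genuinely side-channel-safe implementation:

```python
def flt_inverse(x: int, p: int) -> int:
    """Compute x^-1 mod p for prime p as x^(p-2) mod p (Fermat's little
    theorem), using a Montgomery ladder: every exponent bit performs one
    multiply and one square, regardless of the bit's value."""
    e = p - 2
    r0, r1 = 1, x % p          # invariant: r1 == r0 * x (mod p)
    for i in reversed(range(e.bit_length())):
        if (e >> i) & 1:
            r0 = (r0 * r1) % p
            r1 = (r1 * r1) % p
        else:
            r1 = (r0 * r1) % p
            r0 = (r0 * r0) % p
    return r0

# Example with the Curve25519 field prime:
p = 2**255 - 19
x = 12345
assert (x * flt_inverse(x, p)) % p == 1
```

As the comment above notes, this only works when the modulus is prime; for a composite modulus the exponent would have to be phi(n) - 1 instead of n - 2.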
From the German BSI TR-02102-1 guidelines ([0], [1]):
"Combination of Classical and PQC Security: The secure implementation of PQC mechanisms, especially with regard to side-channel security, avoidance of implementation errors and secure implementation in hardware, and also their classical cryptanalysis are significantly less well studied than for RSA- and ECC-based cryptographic mechanisms. In addition, there are currently no standardised versions of these mechanisms. Their use in productive systems is currently only recommended together with a classic ECC- or RSA-based key exchange or key transport. In this case, one speaks of a so-called hybrid mechanism. Parallel to a PQC key transport, an ECC-based key exchange using Brainpool or NIST curves with at least 256 bits key length should be performed. The two shared secrets generated in this way should be combined with the mechanism given in Section B.1.1 of this Technical Guideline. Here, the standard [96] in its current version explicitly provides the possibility to combine several partial secrets. A hybrid approach, as proposed here, is further described for example in [5] as the most feasible alternative for a use of PQC mechanisms in the near future.

Provided that the restrictions of the stateful mechanisms XMSS and LMS recommended in this Technical Guideline are carefully considered, these hash-based signatures can in principle also be used alone (i.e., not hybrid), see Chapter 6"
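A sketch of the hybrid idea: both partial secrets (the ECC one and the PQC one) are fed into a single KDF, so the derived key stays safe as long as at least one of the two exchanges remains unbroken. This uses HKDF with SHA-256 as the combiner, which is a common construction; the exact mechanism mandated by Section B.1.1 of the TR may differ in detail:

```python
import hashlib
import hmac

def hkdf_extract(salt: bytes, ikm: bytes) -> bytes:
    """HKDF-Extract (RFC 5869): condense input keying material into a PRK."""
    return hmac.new(salt, ikm, hashlib.sha256).digest()

def hkdf_expand(prk: bytes, info: bytes, length: int) -> bytes:
    """HKDF-Expand (RFC 5869): stretch the PRK into `length` output bytes."""
    okm, t, counter = b"", b"", 1
    while len(okm) < length:
        t = hmac.new(prk, t + info + bytes([counter]), hashlib.sha256).digest()
        okm += t
        counter += 1
    return okm[:length]

def combine_secrets(ecc_secret: bytes, pqc_secret: bytes, context: bytes) -> bytes:
    """Derive one session key from both partial secrets. An attacker must
    recover BOTH inputs to predict the output."""
    prk = hkdf_extract(b"\x00" * 32, ecc_secret + pqc_secret)
    return hkdf_expand(prk, context, 32)

# Hypothetical 32-byte shared secrets standing in for ECDH and PQC KEM outputs:
key = combine_secrets(b"\x11" * 32, b"\x22" * 32, b"hybrid-kem-demo")
assert len(key) == 32
```

The concatenate-then-KDF pattern is also what NIST SP 800-56C-style "hybrid" combiners do; the order and framing of the inputs just has to be fixed by the protocol so both sides derive the same key.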
I personally did not know about classes prépa before the last year of high school. I will forever be thankful to my maths teacher who told me about it that year, since I would have probably slacked off at university.
- Make them good at science, i.e., Maths and Physics.
- Get them into a decent high school, e.g., Henri 4 or Louis Le Grand in Paris.
- Hope they have good grades and manage to get into a good preparatory class [1], e.g., Henri 4, Louis Le Grand in Paris, or Hoche and Sainte-Geneviève in Versailles.
- Make sure they don't slack off, and hope they get into a good engineering school, e.g., Ecole Polytechnique, Ecole des Mines, Ecole Nationale des Ponts et Chaussées, CentraleSupelec.
(Lists are not exhaustive)
If they manage to get into one of these schools, they will most likely have no difficulty finding a reasonably well-paid job in France.
> Google put in significant engineering effort into "Ryu", a parsing library for double-precision floating point numbers: https://github.com/ulfjack/ryu
> So a theoretician who had little experience actually using such solutions in real life?
Low-effort, borderline ad-hominem comments like yours make these comment threads a terrible reading experience. They add nothing to the discussion.
I disagree. When some tech seems to come out of nowhere, disconnected from the general community's practices, understanding the background of who made it can help in understanding its blind spots.
Also, the whole thing did not start with some kind of open white paper explaining the need, the rationale for a new framework or language, a comparison with alternatives, etc., which is often how OSS projects start.
The comment added a very specific and valid criticism to the discussion: academics don't design for the real world. (In case I have to lead you down the path... the point being, the design will lack insight into how such a policy language might function in the real world, and thus not provide any significant advantage over existing policy languages, while simultaneously fragmenting said ecosystem)
This is yet another ad hominem which I frankly find very distasteful. Plenty of real-world software that you see around has been designed by academics and is hugely successful in the real world.

Academics come in all varieties, just like engineers. There are good engineers and there are engineers with years of experience who design terrible software. There are academics who do not design for the real world and there are academics who do.
Just throwing around the term "academic" in a loose manner offers no insight on the quality of the work. If there is any valid criticism, it should be made about the work, not about the person behind it. That you come to defend ad hominem in more elaborate words only reaffirms my original point about why these comments make these threads a terrible reading experience! They often show a shallow understanding of both academia and the professional IT industry.
You realize the phrase ad hominem means a personal attack, right? None of us have made a personal attack. If anything, we are attacking the distinction between one institution and another.
We're pointing out that experience provides essential knowledge that a person who hasn't had experience lacks, a simple truth of life. Reading a book about performing surgery is much different than having actually performed the surgery. And someone who has worked in academia (aka "an academic") lacks experience in industry. An academic has not been actually applying the policies in a large organization for years, and thus cannot know or predict all the ways in which the practice of applying the policy can cause problems that do not present themselves in a non-industry setting.
Again, we are not attacking a person. We are specifically saying that any person who lacks experience is not going to make as good of a solution. This is not a controversial statement.
Ad hominem doesn’t mean personal attack actually. This is quite a frustrating misconception online.
Ad hominem applies to an argument wherein you try to discredit an argument by discrediting the person making it, i.e., "Person A is B, therefore not C". In this situation, the argument that "this guy is an academic, therefore his tool is unneeded" is absolutely an ad hominem.
If I said "Hitler was a racist, therefore he is not the right person to hire to run an Inclusion & Diversity initiative", that would be an ad hominem, and therefore a fallacy? No, because "what he is" directly impacts the subject in question. Claiming it was a fallacy due to ad hominem would be argument from fallacy, which is a fallacy. (A fallacy fallacy)
But this academic did design for the real world. He works for Amazon. This project is happening inside Amazon. How the heck can you be saying he wouldn't have insight into how such a policy language would function "in the real world", when he's literally designing this language at a "real world" company?
Amazon is a gigantic series of 2-pizza-teams and fiefdoms. They don't talk to each other much. Just because you "work at Amazon" does not mean you have ever serviced a single customer, or gleaned the experience of the tens of thousands of engineers working there (most of whom don't know the others exist).
But even so, let's assume this person has worked directly with customers and knows how to design for the real world. Why might someone whose job is not being a practitioner still not come up with the best design?
Take for example the product "Terraform", by HashiCorp. It has severe design flaws which anyone who has been forced to use it at scale will find immediately. But the person who invented it, and its subsequent developers, follow a "philosophy" that has nothing to do with working with it in the real world. It was not created by practitioners, for practitioners, but by idealists, for a corporation to ensnare hapless clients that have sadly adopted their freeware and now need to pay for add-on software to manage it. The end result? It sucks balls.
For an Amazon-centric example, take Terraform's competitor, CloudFormation. It's even more horrible. You write your configuration in a data serialization format, in verbose and clunky huge chunks, and (afaik) you can't validate or test templates without applying them in the cloud. There are no modules or extensions, and it doesn't really change or improve. It's like someone decided to only build the guts of a configuration management engine and told all the users to figure out for themselves how to use it. You still need to build a whole 'nother software project just to interface with it in a sane way.
Back to the topic: Many other policy languages and tools already exist, many of them open-source. They could have simply adapted those existing solutions and extended them to include whatever novel, genius ideas they had that those other solutions didn't have. But they didn't. Why? I have no idea. But I'm betting it's not because they couldn't write patches for the other tool.