File and folder names containing sensitive information get encrypted with non-deterministic encryption (i.e., with a random IV) and get decrypted only for display purposes. The software can work with GUIDs as file and folder names, while the real names are kept encrypted.
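To make that concrete, here is a minimal Python sketch of the pattern, assuming the `cryptography` package; the function names, key handling, and 12-byte nonce size are illustrative, not any particular product's implementation.

```python
# Sketch: store entries under random GUIDs while the real names stay
# encrypted with a random nonce (non-deterministic encryption).
import os
import uuid
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

key = AESGCM.generate_key(bit_length=256)       # per-vault key (illustrative)

def encrypt_name(real_name: str) -> tuple[str, bytes]:
    """Return (storage_guid, nonce + ciphertext) for one file or folder name."""
    nonce = os.urandom(12)                      # random IV: the same name encrypts differently each time
    blob = nonce + AESGCM(key).encrypt(nonce, real_name.encode(), None)
    return str(uuid.uuid4()), blob              # the GUID is what the filesystem sees

def decrypt_name(blob: bytes) -> str:
    """Recover the real name only when it has to be displayed."""
    nonce, ct = blob[:12], blob[12:]
    return AESGCM(key).decrypt(nonce, ct, None).decode()

guid, blob = encrypt_name("tax-return-2024.pdf")
print(guid)                                     # opaque GUID on disk
print(decrypt_name(blob))                       # real name, for display only
```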
For indexing, unpredictable data can be hashed (with a salt unique to the field). Both predictable and unpredictable data need to be deterministically encrypted - usually with the IV being a SHA-2 or SHA-3 hash of the data. This works for exact searches only, of course.
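A matching sketch of the indexing side, again assuming the `cryptography` package plus the standard library's `hmac`/`hashlib`; deriving the IV from a keyed hash of the data makes equal plaintexts produce equal ciphertexts, which is precisely what enables exact-match lookups - and nothing more.

```python
# Sketch: salted hash as an index token, plus deterministic encryption
# with the IV derived from a SHA-2 hash of the data.
import hmac
import hashlib
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

enc_key = AESGCM.generate_key(bit_length=256)
field_salt = b"salt-unique-to-this-field"       # one salt per indexed field

def index_token(value: str) -> str:
    """Salted (keyed) hash of unpredictable data, usable as an exact-match index key."""
    return hmac.new(field_salt, value.encode(), hashlib.sha256).hexdigest()

def encrypt_deterministic(value: str) -> bytes:
    """Deterministic encryption: the IV is a SHA-256-based hash of the data,
    so the same plaintext always yields the same ciphertext."""
    iv = hmac.new(field_salt, value.encode(), hashlib.sha256).digest()[:12]
    return iv + AESGCM(enc_key).encrypt(iv, value.encode(), None)

# Exact search: compute index_token() / encrypt_deterministic() over the query
# and compare byte-for-byte against the stored values. Range or substring
# search is not possible this way.
```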
Oghuz bards sang a kind of jazz in verse: common themes with a lot of improvisation. They could hardly repeat their songs from the previous day. Theirs were the same songs which were never the same.
Great to see the zero-knowledge security model being adopted more widely across different types of applications.
I wonder what interoperability between E2EE (end-to-end encrypted) apps is going to look like. Zero-knowledge sharing is a solved problem, but it is not easy to implement - and many choose not to implement it. They just use key sharing in URLs, making the actual secure sharing the user's problem.
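For contrast, here is roughly what that "key in the URL" shortcut looks like, as a hedged Python sketch; the URL shape and the names are made up for illustration. The server only ever sees the ciphertext, but anyone who gets hold of the link holds the key, so transmitting the link securely becomes the user's job.

```python
# Sketch: share by putting the decryption key into the URL fragment.
import base64
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

def make_share_link(plaintext: bytes) -> tuple[bytes, str]:
    """Encrypt a document and embed its key in the URL fragment (#...)."""
    key = AESGCM.generate_key(bit_length=128)
    nonce = os.urandom(12)
    ciphertext = nonce + AESGCM(key).encrypt(nonce, plaintext, None)
    fragment = base64.urlsafe_b64encode(key).decode().rstrip("=")
    # The fragment is not sent to the server by the browser,
    # but the whole link must now travel over a secure channel.
    return ciphertext, f"https://example.app/share/some-doc-id#{fragment}"
```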
Productivity is calculated as the ratio value_produced / value_consumed. The latter, in the software world, is the sum of all salaries, benefits, technology used to produce the value, and so on.
What is missing in the current software world is a way of calculating the value produced.
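Spelled out as a formula (just restating the ratio and the denominator described above):

\[
\text{productivity} = \frac{\text{value produced}}{\text{value consumed}},
\qquad
\text{value consumed} = \text{salaries} + \text{benefits} + \text{technology costs} + \dots
\]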
Intel® Stratix® 10 devices deliver up to 23 TMACs of fixed-point performance and up to 10 TFLOPS of IEEE-754 single-precision floating-point performance.
You can't really compare them on a FLOPS basis, firstly because our algorithm and Jäkel's are completely different. In fact, within the FPGA accelerator itself we don't even use a single multiply; all our operations are Boolean logic and counting. Jäkel, by contrast, was able to exploit the GPU's strong preference for matrix multiplication: all his operations were integer multiplications.
In fact, in terms of raw operation count, Jäkel actually did more. There appears to be this tradeoff within the Jumping Formulas, where any jump you pick keeps the same fundamental complexity, with even a slight preference towards smaller jumps. It is just that GPU development is several decades ahead of FPGA development, thanks to ML and Rendering hype, which more than compensates for the slightly worse fundamental complexity.
As a side note, raw FLOP counts from FPGA vendors are wildly inflated. The issue with FPGA designs is that reaching this theoretical FLOP count is nigh-impossible, because getting all components running at the theoretical clock-frequency limit is incredibly difficult. Compare that with GPUs, where at least your processing frequency is a given.
Hello, thanks for taking the time to reply here! My own Master's thesis was also about optimising and implementing algorithms to count certain (graphical) mathematical objects, but you picked a much more famous problem than I did. I'm very surprised I didn't know the definition of Dedekind numbers, even though it's related to things I touched on.
I'm not too familiar with FPGAs but hope to have a use for them some day. Measuring their performance in FLOPS seems strange. How close to those theoretical limits does one typically get? Are there a lot of design constraints that conspire against you, or is it just that whatever circuit you want can't be mapped densely onto the gate topology?
As for trust, we generally trust the CPU manufacturers, including their implementations of encryption such as the AES instructions.