
If only it weren't so expensive to get a license for x86.


x86 isn't that good. RISC-V has similar code density to amd64 while being considerably simpler. There's not much appeal to x86 as an ISA, other than running legacy software.


Running 'legacy software' is, for most use cases, probably the most important feature.


Most "legacy software" can be rebuilt from source code. For software where the sources aren't available, there's only a small subset where performance is the main priority. The rest can be handled through emulation.


The last time I checked, the patents had expired on anything pre-Pentium, and by now the Pentium patents have probably expired too.


I am aiming for the expiration of Intel patents for stuff up to SSE2, which will likely happen in the next few years.


Organically? Please, Google promoted Chrome through ads for months! (Disclaimer: I love Chrome!)


I wonder if using only non-overlapping triangles would reduce accuracy. Otherwise, it should limit the number of triangles.


It would be great if I could remove overlapping triangles; then I would have near-linear growth in the number of triangles for larger images. But it's very hard to come up with a technique which removes the same overlapping triangles (and leaves the same ones) and is invariant to 2D affine transformations.

For example, if you have 50 overlapping triangles you have to decide which 49 to remove, and you have to remove the same 49 on the query image and the matching image. But because we want to be able to do our searches in sublinear time, we can't compare the two images and decide which triangles should stay or be removed; you have to do all that in preprocessing, before inserting into the database or querying. Delaunay triangulation looks almost perfect, but it isn't invariant to 2D affine transformations.
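
To make the Delaunay point concrete, here is a rough sketch (my own illustration using numpy/scipy, not the actual matching code) that triangulates a point set, applies a shearing affine map, and checks how much of the triangulation survives:

    import numpy as np
    from scipy.spatial import Delaunay

    # Illustration only: is a Delaunay triangulation preserved under an
    # arbitrary 2D affine map? (It is under similarity transforms, but
    # shear and anisotropic scaling generally change it.)
    rng = np.random.default_rng(0)
    pts = rng.random((50, 2))

    A = np.array([[2.0, 0.7],
                  [0.1, 0.4]])          # a shearing, anisotropic affine map
    b = np.array([3.0, -1.0])
    pts_t = pts @ A.T + b

    tris   = {frozenset(s) for s in Delaunay(pts).simplices.tolist()}
    tris_t = {frozenset(s) for s in Delaunay(pts_t).simplices.tolist()}
    print("triangles preserved:", len(tris & tris_t), "of", len(tris))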


Maybe removing triangles which have one very narrow angle could help. After normalization those triangles do not hold a lot of information anyway ;-)
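
Something like this (just a sketch of the idea in Python, not anything from the actual pipeline, and the threshold is arbitrary) could score each triangle by its smallest interior angle and drop the skinny ones:

    import numpy as np

    def min_angle(tri):
        """Smallest interior angle of a triangle, in radians."""
        tri = np.asarray(tri, dtype=float)        # 3x2 array of vertices
        angles = []
        for i in range(3):
            a = tri[(i + 1) % 3] - tri[i]
            b = tri[(i + 2) % 3] - tri[i]
            c = np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))
            angles.append(np.arccos(np.clip(c, -1.0, 1.0)))
        return min(angles)

    def drop_skinny(triangles, min_deg=10.0):
        """Keep only triangles whose smallest angle exceeds min_deg degrees."""
        return [t for t in triangles if min_angle(t) >= np.deg2rad(min_deg)]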


To protect voting, audit your software/system extensively. OpenSSH is open source, and we all know the story...


But how can I (a voter) audit it in the voting booth? How can I verify that the extensively audited software is actually running on the machine in front of me?


You can't. And even if the software is open source, it doesn't guarantee that state or election officials will set aside budgets to deploy such patches swiftly, or even care to deploy them.


You can't. Especially at scale (every person validating the software before voting). Paper ballots with an anonymised ledger of the votes placed are, in my opinion, the best method.


You can audit your ballot in some systems. For example https://nvotes.com (open source software here https://github.com/agoravoting/).

You could even create your ballot offline, even by hand.


Paper doesn't scale well, but attacks on paper are also extremely difficult to scale, which is why paper is a good system for voting.


It scales "well enough", in that we currently do it, and pay for people to verify the results.

In Australia a lot of this work is done by volunteers from the major parties.

Edit: I agree, it's difficult to scale an attack on paper :)


Do you think these questions are addressed by open-source software? I mean, if you only have a few buttons in front of you, how can you verify/audit the software it is running?


Plus, I might add, you can create secure software that can't be penetrated from outside, but what about the hardware? Unless you write that software too, how can you trust the underlying hardware? E.g. Broadpwn. Yes, open source makes it easier to audit/collaborate/patch, but it's not enough.


Similarly, on a larger scale, one could ask whether deep learning is unethical for automating millions of jobs (if not yet, then certainly in the future).


I read: I suggest purchasing a scanf :(


They are definitely moving really fast on this one.


Posts involving vectorization are always nice, especially when they involve compression. I'd love to see a vectorized version of arithmetic coding; that would definitely be interesting.



Finite State Entropy is the leaner, faster entropy coder that you probably want to use instead of arithmetic coding. It's also much more likely to be something that could be made parallel, at least for encoding. Arithmetic coding traditionally uses division, and there's no SSE integer division.

But that's okay, because ANS / FSE are here.
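
For a flavour of why ANS is friendlier here, below is a toy rANS coder in Python (my own simplification for illustration, not FSE's table-based variant): all the division stays on the encoder side, while the decode loop is only shifts, masks, and a table lookup.

    # Toy rANS coder (illustration only, not FSE). Frequencies must sum to
    # PROB_SCALE; the decoder needs no division at all.
    PROB_BITS = 12
    PROB_SCALE = 1 << PROB_BITS
    RANS_L = 1 << 23                      # lower bound of the state interval

    def build_tables(freqs):
        cum = [0]
        for f in freqs:
            cum.append(cum[-1] + f)
        assert cum[-1] == PROB_SCALE
        slot_to_sym = [s for s, f in enumerate(freqs) for _ in range(f)]
        return cum, slot_to_sym

    def encode(symbols, freqs):
        cum, _ = build_tables(freqs)
        x, emitted = RANS_L, []
        for s in reversed(symbols):       # rANS encodes back to front
            f = freqs[s]
            x_max = ((RANS_L >> PROB_BITS) << 8) * f
            while x >= x_max:             # stream out low bytes
                emitted.append(x & 0xFF)
                x >>= 8
            x = (x // f) * PROB_SCALE + (x % f) + cum[s]   # the only division
        return x, emitted                 # decoder pops bytes off the end

    def decode(x, emitted, freqs, n):
        cum, slot_to_sym = build_tables(freqs)
        out = []
        for _ in range(n):
            slot = x & (PROB_SCALE - 1)                    # mask, no division
            s = slot_to_sym[slot]                          # table lookup
            x = freqs[s] * (x >> PROB_BITS) + slot - cum[s]
            while x < RANS_L:                              # pull bytes back in
                x = (x << 8) | emitted.pop()
            out.append(s)
        return out

    msg = [0, 1, 2, 1, 0, 0, 2]
    freqs = [2048, 1024, 1024]            # sums to PROB_SCALE
    state, tail = encode(msg, freqs)
    assert decode(state, list(tail), freqs, len(msg)) == msg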


There is LZNA (http://cbloomrants.blogspot.se/2015/05/05-09-15-oodle-lzna.h...), which uses vectorization to update and query its statistical models.
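
Roughly speaking (my guess at the general shape, not the actual LZNA code), the trick is that an adaptive nibble model is just a small vector of frequencies, so the whole "saw symbol s, nudge everything toward it" update is a single vector operation, which is the kind of thing a few SIMD instructions handle per coded nibble:

    import numpy as np

    # Sketch of a SIMD-friendly adaptive model update (my own illustration):
    # all 16 nibble frequencies move toward the observed symbol at once.
    def update_nibble_model(freqs, seen, rate=5):
        target = np.zeros_like(freqs)
        target[seen] = freqs.sum()                # one-hot target distribution
        return freqs + ((target - freqs) >> rate)

    model = np.full(16, 256, dtype=np.int32)      # flat initial model
    model = update_nibble_model(model, seen=7)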


I wonder how many of those fit in a Zedboard. (Not going to ask how many hours of tool fighting that would require, though.)


Running on a Zedboard is quite well documented; only took me a couple of hours to do it from scratch following their instructions: https://github.com/ucb-bar/fpga-zynq


I meant in a SIMT fashion, more like a GPU


So, Kim Jong-un has a memory leak problem, but that's probably good for the rest of us..

