
Also the amount of data we query in most cases is many orders of magnitude larger.

Most of the 1980-1995 cases could fit their entire dataset in CPU cache and be insanely fast.

Most things I query these days are in the gigabytes to terabytes range.

Lastly, we have to make them secure, especially against malformed data crafted to attack the app, which eats a lot of CPU cycles.



> Most of the 1980-1995 cases could fit their entire dataset in CPU cache and be insanely fast.

They couldn't then. They had to fit it in RAM.

> Most things I query these days are in the gigabytes to terabytes range.

That still falls in the "fits in RAM on a typical PC" to "fits on an SSD in a PC, fits in RAM on a server" range.

There's little excuse for the slowness of current search interfaces, even if your data is in the gigabytes-to-terabytes range. That's where all the "a bunch of Unix tools on a single server will be an order of magnitude more efficient than your Hadoop cluster" articles came from.
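The single-machine, single-pass style those articles describe is easy to sketch. Here's a minimal Python example that streams a multi-gigabyte, newline-delimited log file once and counts matching lines without ever holding the dataset in memory; the file name and search pattern are hypothetical, purely for illustration.

    # A minimal sketch of the single-machine, single-pass approach:
    # stream a large newline-delimited file once and count matching
    # records, without loading the whole dataset into memory.
    # The file path and search pattern are made up for illustration.

    def count_matches(path: str, needle: bytes) -> int:
        matches = 0
        # Binary mode with a 1 MiB read buffer; iterating yields one line
        # at a time, so memory use stays flat regardless of file size.
        with open(path, "rb", buffering=1 << 20) as f:
            for line in f:
                if needle in line:
                    matches += 1
        return matches

    if __name__ == "__main__":
        # Hypothetical input: a ~10 GB access log on a local SSD.
        print(count_matches("access.log", b'" 500 '))

On a local SSD this kind of scan is bounded by sequential read speed, which is exactly why a single box can beat a small cluster that spends its time on coordination and shuffling.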


> Most of the 1980-1995 cases could fit their entire dataset in CPU cache and be insanely fast.

How big do you think CPU caches were at the time? CPUs at the start of that era didn't have on-chip caches at all, and even the 486 in 1989 shipped with only 8 KB of L1.



