Hacker News | some1else's comments

Here's one I use for cryptographic purposes:

  let invocations = 0;

  export function rand3000() {
    invocations += 1;

    // Offset the clock by one second per call, so repeated
    // invocations within the same millisecond still differ.
    const timestamp = new Date().getTime() + invocations * 1000;

    // Fold the higher bits into the low byte, then keep that byte.
    const masked = (timestamp ^ (timestamp >> 8)) & 0xFF;

    // Scale to [0, 1], like Math.random().
    const result = masked / 255;

    return result;
  }


Talk to people who want to hear what you have to say.


Where would a Redis vector store play a part though? Maybe you'd load up relevant embeddings for a particular user while they're interacting with their dataset, to make their responses quicker? You've already spent the effort on hydrating their data out of persistence though. I guess step one is likely being a more trusted alternative to in-memory vector solutions like HNSW and Faiss, and a potentially faster engine than pgvector. I've always seen Redis as an augmentation, but maybe in this role it can take the helm?


It's exactly that. Redis is an in-memory data structure server that you can outsource index-style operations to. Vector similarity is a type of index search. I think it's an exact fit for Redis.
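For intuition, here's a toy brute-force version of that index search in plain JavaScript. This is not the Redis API, just a sketch of the operation a vector store performs server-side (Redis would run it over an HNSW or flat index instead of a linear scan); the `store` and `knn` names are illustrative:

```javascript
// Cosine similarity between two equal-length vectors.
function cosine(a, b) {
  let dot = 0, na = 0, nb = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    na += a[i] * a[i];
    nb += b[i] * b[i];
  }
  return dot / (Math.sqrt(na) * Math.sqrt(nb));
}

// Brute-force k-nearest-neighbours over a map of id -> embedding.
function knn(store, query, k) {
  return Object.entries(store)
    .map(([id, vec]) => ({ id, score: cosine(vec, query) }))
    .sort((x, y) => y.score - x.score)
    .slice(0, k);
}

const store = {
  "doc:1": [1, 0, 0],
  "doc:2": [0.9, 0.1, 0],
  "doc:3": [0, 1, 0],
};

console.log(knn(store, [1, 0, 0], 2).map((r) => r.id)); // ["doc:1", "doc:2"]
```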


Cool. Redis in front of Postgres has always brought peace of mind, and that will likely be welcome for the vector data use case.

P.S.: Appreciate the llm command-line tool.


Regardless of the naive algorithm, wouldn't this also match empty cells?


https://weirdnet.xyz - hexagonal lattice mesh

https://undesign.systems - shadow study

https://spamoji.com/ - emoji collaging

https://srdjan.pro - ascii perlin noise

https://qwerkey.xyz - keyboard tonnetz

https://ljubljana-shelters.netlify.app/ - shelter map

https://floating-weights.netlify.app/ - generative vector piece

https://randomle.netlify.app/ - random fake wordle result

https://ancient-scroll.netlify.app/ - it is what it is

https://sandstorm.netlify.app/ - app that always shazams to darude - sandstorm

https://uploadimg.netlify.app/ - fake image upload app

https://pietgrid.netlify.app/ - interactive mondrian

https://pixelbuddy.netlify.app/ - click to spawn randomly walking dead pixel


Supply chain attack


You might have a problem using CUDA as part of the name, since Nvidia has it trademarked. Maybe you can switch to Scuba if they give you trouble; it sounds like a good name for the tool.


Buda may Be a Better name


We need to do for CUDA what was done for Jell-o and Kleenex.


"Allegory of the cave" comes to mind when trying to describe the understanding that's missing from diffusion models. I think a super-model with such qualifications would require a number of ControlNets in non-visual domains to encode an understanding of the underlying physics. Diffusion models can render permutations of whatever they've seen fairly well without that, though.


I'm very familiar with the allegory of the cave, but I'm not sure I understand where you're going with the analogy here.

Are you saying that it is not possible to learn about dynamics in a higher dimensional space from a lower dimensional projection? This is clearly not true in general.

E.g., video models learn that even though they're only ever seeing and outputting 2D data, objects have different sides in a fashion that is consistent with our 3D reality.

The distinction you (and others in this thread) are making is purely one of degree (how much generalization has been achieved, and how well) versus one of category.


It appears that the author was indeed not too closely familiar with the premise of the Y2K bug, as they mention the change "from 19 to 20":

... the Y2K Bug, and it prophesied that on January 1, 2000, computers the world over would be unable to process the thousandth-digit change from 19 to 20 as 1999 rolled into 2000 and would crash ...

That wouldn't be problematic by itself, since those digits don't wrap around (unlike a two-digit year going from 99 to 00).


A lot of these systems stored the year as two digits, so 19 to 20 wasn't the problem. The problem was that mainframe-based systems are/were almost entirely based on fixed-length data representations: COBOL copybooks, tape and DASD datasets (i.e., files). Expanding all those fields from two bytes to four was a lot of work and risk in some organizations.
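A minimal sketch of that failure mode, assuming the common pattern of a record storing only the last two digits of the year (the function and field names are illustrative, not from any real system):

```javascript
// Elapsed years computed naively from two-digit stored years,
// as in many fixed-length COBOL-era record layouts.
function elapsedYears(storedStart, storedNow) {
  return storedNow - storedStart;
}

// Account opened in 1985 ("85"), checked in 1999 ("99"): fine.
console.log(elapsedYears(85, 99)); // 14

// Same account checked in 2000 ("00"): the two-digit wraparound bites.
console.log(elapsedYears(85, 0)); // -85, not 15
```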


For those that are looking, I have one of these to sell.

Drop me a line or schedule a meet: https://srdjan.pro

