Where would a Redis vector store play a part, though? Maybe you'd load up relevant embeddings for a particular user while they're interacting with their dataset, to make their responses quicker? You've already spent the effort on hydrating their data out of persistence, though. I guess step one is likely being a more trusted alternative to in-memory vector solutions like HNSW and Faiss, and a potentially faster engine than pg_vector. I've always seen Redis as an augmentation, but maybe in this role it can take the helm?
It's exactly that. Redis is an in-memory data structure server that you can outsource index-style operations to. Vector similarity is a type of index search. I think it's an exact fit for Redis.
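For concreteness, here's a minimal sketch of what that looks like with the RediSearch vector commands (FT.CREATE / FT.SEARCH). It assumes a Redis Stack server on localhost; the index name, key prefix and tiny 4-dimensional embeddings are made up purely for illustration:

    import numpy as np
    import redis

    r = redis.Redis(host="localhost", port=6379)

    # Create an HNSW-backed vector index over hashes whose keys start with "doc:".
    r.execute_command(
        "FT.CREATE", "embeddings_idx", "ON", "HASH", "PREFIX", "1", "doc:",
        "SCHEMA", "vec", "VECTOR", "HNSW", "6",
        "TYPE", "FLOAT32", "DIM", "4", "DISTANCE_METRIC", "COSINE",
    )

    # Store embeddings as raw float32 bytes in ordinary hashes.
    r.hset("doc:1", mapping={"vec": np.array([0.1, 0.2, 0.3, 0.4], dtype=np.float32).tobytes()})
    r.hset("doc:2", mapping={"vec": np.array([0.9, 0.1, 0.0, 0.2], dtype=np.float32).tobytes()})

    # KNN query: the 2 nearest stored vectors to the query embedding.
    query = np.array([0.1, 0.2, 0.3, 0.5], dtype=np.float32).tobytes()
    print(r.execute_command(
        "FT.SEARCH", "embeddings_idx",
        "*=>[KNN 2 @vec $q AS score]",
        "PARAMS", "2", "q", query,
        "SORTBY", "score", "DIALECT", "2",
    ))

The appeal is that the index lives in memory alongside the rest of your keys, so the vectors and the metadata you'd otherwise hydrate separately sit in the same server.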
You might have a problem using CUDA as part of the name, since Nvidia has it trademarked. Maybe you can switch to Scuba if they give you trouble; it sounds like a good name for the tool.
"Allegory of the cave" comes to mind, when trying to describe the understanding that's missing from diffusion models. I think a super-model with such qualifications would require a number of ControlNets in a non-visual domains to be able to encode understanding of the underlying physics. Diffusion models can render permutations of whatever they've seen fairly well without that, though.
I'm very familiar with the allegory of the cave, but I'm not sure I understand where you're going with the analogy here.
Are you saying that it is not possible to learn about dynamics in a higher-dimensional space from a lower-dimensional projection? This is clearly not true in general.
E.g., video models learn that objects have different sides, in a fashion consistent with our 3d reality, even though they're only ever seeing and outputting 2d data.
The distinction you (and others in this thread) are making is purely one of degree - how much generalization has been achieved, and how well - rather than one of category.
It appears that the author was indeed not too closely familiar with the premise of the Y2K bug, as they describe the change "from 19 to 20":
... the Y2K Bug, and it prophesied that on January 1, 2000, computers the world over would be unable to process the thousandth-digit change from 19 to 20 as 1999 rolled into 2000 and would crash ...
That change from 19 to 20 wouldn't be problematic in itself, since those numbers don't wrap around (unlike the two-digit year going from 99 to 00).
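To make the wraparound point concrete, a contrived snippet (the field values are made up) of how two-digit year arithmetic breaks when 99 rolls over to 00:

    # Two-digit years wrap from 99 to 00, so naive date math goes negative.
    birth_yy = 85                   # stored on disk as "85", meaning 1985
    current_yy = 0                  # the year 2000 stored as "00"
    print(current_yy - birth_yy)    # -85 instead of the expected 15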
A lot of these systems stored the year as two digits, so 19 to 20 wasn't the problem. The problem was that mainframe-based systems are/were almost entirely built on fixed-length data representations: COBOL copybooks, tape and DASD datasets (i.e. files). Expanding all of those two-byte year fields to four bytes was a lot of work and risk in some organizations.
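As a rough illustration (the record layout and field names are invented, not taken from any real copybook), this is the kind of fixed-length layout that made widening the year expensive:

    # A copybook-style fixed-length record with a two-digit year field.
    record = b"DOE       JOHN      991231"   # last six bytes are YYMMDD

    surname = record[0:10].decode().strip()
    first   = record[10:20].decode().strip()
    yy      = record[20:22].decode()          # "99" -- only two bytes on disk
    mm      = record[22:24].decode()
    dd      = record[24:26].decode()

    # Widening yy to four digits changes the record length and shifts every
    # offset after it, so every program, copybook, tape layout and DASD
    # dataset reading this file has to change in lockstep.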