Hi! I'm also here (the other co-founder of the company/project). We'd love to understand whether this is a problem only we encountered, or whether others also need to keep track of their simulated model performance over time!
We'll be working on the pytorch integration soon!
`Fold`'s scope is time series, but there's nothing stopping you from sending in vector embeddings of any kind, timestamped.
Figuring out how to create those embeddings (ones that make sense over time) can mean quite a bit of research work and requires flexibility, so it's probably better done outside the time series library, with the tools of your choice.
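Just to make the "timestamped embeddings" idea concrete, here's a rough sketch. The `embed` function is hypothetical (a stand-in for whatever encoder you'd actually use), and this isn't `Fold`'s API, just an ordinary pandas frame indexed by time, which is what any time series library ultimately consumes:

```python
import numpy as np
import pandas as pd

def embed(observation: str) -> np.ndarray:
    # Hypothetical stand-in for a real encoder (e.g. a sentence
    # or image model) that maps raw observations to fixed-size vectors.
    rng = np.random.default_rng(len(observation))
    return rng.normal(size=8)  # 8-dim embedding, purely for illustration

observations = {
    "2023-01-01": "sensor reading A",
    "2023-01-02": "sensor reading B",
    "2023-01-03": "sensor reading C",
}

# One row per timestamp, one column per embedding dimension --
# in other words, an ordinary multivariate time series.
embeddings = pd.DataFrame({ts: embed(obs) for ts, obs in observations.items()}).T
embeddings.index = pd.to_datetime(embeddings.index)
print(embeddings)
```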
> We'll be working on the pytorch integration soon! `Fold`'s scope is time series, but there's nothing stopping you from sending in vector embeddings of any kind, timestamped.
Awesome. I'll take a look. Thanks!
> Figuring out how to create those embeddings (that make sense over time) can mean quite a bit of research work, and requires flexibility, so it's probably better done outside of the time series library ...
Mark here (the other co-founder). We're really curious whether you're using Time Series Cross-Validation, with what tools, how frequently, and what kinds of issues you've bumped into!
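For concreteness, this is roughly the kind of setup we mean, sketched here with scikit-learn's `TimeSeriesSplit` on a made-up toy series (the lag features and Ridge model are just placeholders): each fold trains strictly on the past and evaluates on the future.

```python
import numpy as np
from sklearn.model_selection import TimeSeriesSplit
from sklearn.linear_model import Ridge
from sklearn.metrics import mean_squared_error

# Toy univariate series turned into a lagged supervised problem.
y = np.sin(np.linspace(0, 20, 200)) + np.random.default_rng(0).normal(0, 0.1, 200)
X = np.column_stack([np.roll(y, lag) for lag in (1, 2, 3)])[3:]
target = y[3:]

# Expanding-window splits: train only on the past, test on the future.
for fold, (train_idx, test_idx) in enumerate(TimeSeriesSplit(n_splits=5).split(X)):
    model = Ridge().fit(X[train_idx], target[train_idx])
    mse = mean_squared_error(target[test_idx], model.predict(X[test_idx]))
    print(f"fold {fold}: train up to t={train_idx[-1]}, MSE={mse:.4f}")
```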
That is true, but unfortunately, out of the box, they're not well suited to just being "fed into" an NN. Even if you think of the adjacency matrix as very similar to how the weights are laid out in a feed-forward neural network, you can't ignore that:
- in real life, graphs are not fixed in size or structure
- you need to deal with the many different potential representations of the same graph (permutation invariance; see the sketch below)
- the nodes usually carry more features than a single scalar value
But this is definitely not the best explanation; I think this guy does a much better job: https://youtu.be/JtDgmmQ60x8
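To make the permutation point concrete, here's a small sketch in plain NumPy (the graph and features are made up): flattening the adjacency matrix into an MLP's input changes when you relabel the nodes, while a GNN-style sum aggregation does not.

```python
import numpy as np

# A 4-node path graph (edges 0-1, 1-2, 2-3): adjacency A, node features X.
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
X = np.random.default_rng(0).normal(size=(4, 3))

# Relabel the nodes (swap 0 and 1): same graph, different representation.
P = np.eye(4)[[1, 0, 2, 3]]
A_perm, X_perm = P @ A @ P.T, P @ X

# Naive: flatten A into a fixed-size MLP input. The two representations
# of the *same* graph now produce different inputs.
print(np.allclose(A.flatten(), A_perm.flatten()))  # False

# GNN-style: one round of neighbour aggregation, then sum-pool over nodes.
# The sum is invariant to the relabeling.
print(np.allclose((A @ X).sum(axis=0), (A_perm @ X_perm).sum(axis=0)))  # True
```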
Sure, but GNNs modeling neurons is nonsensical, since the graph is the analyte of the NN; you are not a priori doing anything with the graph. So, in a sense, my point is that for "using NNs to model neurons", a GNN doesn't buy you anything, because the G in GNN isn't being subjected to dynamic activation.
In my mind, GNNs are designed to solve graph problems, usually via message passing, which is what enables you (I'd emphasise the aggregation step) to "do ML on graphs".
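As a rough sketch of what I mean by message passing and aggregation, here's a single, heavily simplified layer in PyTorch (my own toy construction, not any particular library's implementation): each node sums its neighbours' transformed features, then combines the aggregate with its own state.

```python
import torch
import torch.nn as nn

class MessagePassingLayer(nn.Module):
    """One simplified message-passing step: gather (sum) neighbours'
    transformed features, then update each node's own state."""
    def __init__(self, in_dim: int, out_dim: int):
        super().__init__()
        self.message = nn.Linear(in_dim, out_dim)           # transform neighbour features
        self.update = nn.Linear(in_dim + out_dim, out_dim)  # combine with own state

    def forward(self, x: torch.Tensor, adj: torch.Tensor) -> torch.Tensor:
        # adj: (N, N) adjacency matrix; x: (N, in_dim) node features.
        # Aggregation step: sum of neighbours' messages (permutation invariant).
        aggregated = adj @ self.message(x)
        # Update step: each node combines the aggregate with its own features.
        return torch.relu(self.update(torch.cat([x, aggregated], dim=-1)))

# Tiny example: a 4-node path graph with 3-dim node features.
adj = torch.tensor([[0, 1, 0, 0],
                    [1, 0, 1, 0],
                    [0, 1, 0, 1],
                    [0, 0, 1, 0]], dtype=torch.float32)
x = torch.randn(4, 3)
layer = MessagePassingLayer(in_dim=3, out_dim=8)
print(layer(x, adj).shape)  # torch.Size([4, 8])
```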