Most volume renderers lack a good transfer function editor.
When analyzing volumes, especially in exploratory analysis, the most effective tool is to dial in colors and opacities for certain value ranges in order to find structures.
The volume rendering engine I have been working on uses a histogram to show the value distribution, and on top of it one can draw lines that indicate the opacity. Additionally, one can assign colors to the control points, which are then linearly interpolated across the given ranges.
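To make that concrete, here is a minimal sketch of such a transfer function in Python (the names and control-point values are my own, not from the engine): control points carry a scalar value, an opacity, and an RGB color, and lookups linearly interpolate between neighboring points.

    import numpy as np

    # Hypothetical control points: (scalar value, opacity, RGB color).
    # In the editor these would be the points drawn on top of the histogram.
    control_points = [
        (0.0,  0.0, (0.0, 0.0, 0.0)),   # fully transparent
        (0.35, 0.1, (0.8, 0.5, 0.3)),   # faint, soft-tissue-like range
        (0.6,  0.9, (1.0, 1.0, 0.9)),   # nearly opaque, bone-like range
        (1.0,  1.0, (1.0, 1.0, 1.0)),
    ]

    def transfer_function(values):
        """Map scalar volume samples to RGBA via piecewise-linear interpolation."""
        xs     = np.array([p[0] for p in control_points])
        alphas = np.array([p[1] for p in control_points])
        colors = np.array([p[2] for p in control_points])   # shape (n, 3)
        a   = np.interp(values, xs, alphas)
        rgb = np.stack([np.interp(values, xs, colors[:, c]) for c in range(3)], axis=-1)
        return np.concatenate([rgb, a[..., None]], axis=-1)

    samples = np.array([0.1, 0.4, 0.7])   # e.g. samples along one ray
    print(transfer_function(samples))     # (3, 4) RGBA array

Dialing in the opacity curve then just means moving these control points and re-running the lookup.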
Ah, really? My feeling (in Berlin) has always been that because of its grip on CDJs, Rekordbox dominates.
But this is quite unfortunate: at least on my 2014 MBP, Rekordbox (and Mixxx, mentioned elsewhere in the thread) sends the fans into overdrive immediately while quickly becoming sluggish (just about the last thing you want from a DJing tool), whereas Traktor keeps things quiet and responsive.
Not sure what you mean by a large number of words. You can fit a PCA on millions of vectors relatively performantly, and inference from it is then just a matmul.
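A sketch of what I mean with scikit-learn (the sizes are made up): the fit is a one-time cost, and projecting new vectors afterwards is a single centered matrix multiplication.

    import numpy as np
    from sklearn.decomposition import PCA

    # Made-up sizes: one million 300-d word vectors, reduced to 50 dimensions.
    X = np.random.randn(1_000_000, 300).astype(np.float32)   # ~1.2 GB

    pca = PCA(n_components=50)
    pca.fit(X)   # one-time cost; scales with n*d, no pairwise n*n structure needed

    def project(vectors):
        # Inference is just centering plus a matmul with the component matrix.
        return (vectors - pca.mean_) @ pca.components_.T

    print(project(X[:8]).shape)   # (8, 50)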
Not true. You need a distance matrix (for classical PCA it's a covariance matrix), which scales quadratically with the number of points you want to compare.
If you have 1 million vectors, each pair creating a float entry in the matrix, you will end up with approximately (10^6)^2 / 2 unique values, which is roughly 2000 GB (about 2 TB) of memory.
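The back-of-the-envelope calculation, assuming 4-byte floats and storing only the unique (upper-triangle) entries:

    n = 1_000_000                 # number of vectors
    entries = n * (n - 1) // 2    # unique pairwise entries, ~ n^2 / 2
    bytes_needed = entries * 4    # 4 bytes per float32
    print(bytes_needed / 1e9)     # ~2000 GB, i.e. about 2 TB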
Some architectures are relatively well understood. E.g., in CNNs, the first layers detect low-level features like edges and gradients. The next layer then combines these features into more complex structures like corners or circles, the layer after that combines those into even higher-level features, and so on. [1]
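You can see the first-layer claim directly by plotting the filters of a pretrained network; with torchvision's ResNet-18 (the model choice here is just an example), most of the 64 first-layer filters look like oriented edges and color blobs.

    import torchvision
    import matplotlib.pyplot as plt

    # ImageNet-pretrained ResNet-18; its first conv layer has 64 filters of shape (3, 7, 7).
    model = torchvision.models.resnet18(weights="IMAGENET1K_V1")
    filters = model.conv1.weight.detach()

    # Normalize each filter to [0, 1] for display.
    f = filters - filters.amin(dim=(1, 2, 3), keepdim=True)
    f = f / f.amax(dim=(1, 2, 3), keepdim=True)

    fig, axes = plt.subplots(8, 8, figsize=(6, 6))
    for ax, filt in zip(axes.flat, f):
        ax.imshow(filt.permute(1, 2, 0))   # (7, 7, 3) RGB patch
        ax.axis("off")
    plt.show()   # mostly edge and color-blob detectors, as described above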
Typically, you can take a pre-trained model and retrain it on your new dataset by only changing the weights of the last layer(s).
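For example, in PyTorch (a sketch, with a made-up 10-class target task):

    import torch
    import torch.nn as nn
    import torchvision

    model = torchvision.models.resnet18(weights="IMAGENET1K_V1")

    # Freeze the pretrained feature extractor.
    for param in model.parameters():
        param.requires_grad = False

    # Replace the final fully connected layer; the new layer is trainable by default.
    model.fc = nn.Linear(model.fc.in_features, 10)

    # Only the new head's parameters go to the optimizer.
    optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)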
Some loss functions even measure the difference between the high-level features of two images, typically extracted from a pre-trained CNN (Perceptual Loss).
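A common way to write that down (a sketch; the VGG-16 cut-off at relu3_3 and the plain MSE between feature maps are my assumptions, and inputs would normally be ImageNet-normalized first):

    import torch
    import torch.nn as nn
    import torchvision

    class PerceptualLoss(nn.Module):
        def __init__(self):
            super().__init__()
            # VGG-16 features up to relu3_3, kept frozen.
            vgg = torchvision.models.vgg16(weights="IMAGENET1K_V1").features[:16]
            vgg.eval()
            for p in vgg.parameters():
                p.requires_grad = False
            self.vgg = vgg
            self.mse = nn.MSELoss()

        def forward(self, pred, target):
            # Distance between high-level feature maps, not raw pixels.
            return self.mse(self.vgg(pred), self.vgg(target))

    loss_fn = PerceptualLoss()
    pred, target = torch.rand(1, 3, 224, 224), torch.rand(1, 3, 224, 224)
    print(loss_fn(pred, target))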
I have not seen much work on explainable AI regarding large language models. I remember many very nice visualizations and visual analysis tools trying to comprehend what the network "is seeing" or doing (e.g., in the realm of image classification).