karxxm's comments | Hacker News

Replacing juniors with AI is stupid, because who will become the next seniors? AI won't learn anything while only performing inference.


A wonderful satire!


Most volume renderers lack a good transfer function editor. When analyzing volumes, especially in exploratory analysis, the most effective tool is to dial in colors and opacities for certain value ranges in order to find structures.

The volume rendering engine I have been working on shows a histogram of the value distribution, and on top of it one can draw lines that indicate the opacity. Additionally, one can assign colors to the control points, which are then linearly interpolated over the given ranges.
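
This is roughly how such a piecewise-linear transfer function can be evaluated. A minimal numpy sketch (the control-point values and function names are made up for illustration, not taken from the actual editor):

    import numpy as np

    # Hypothetical control points drawn by the user on top of the value
    # histogram: (normalized scalar value, opacity, RGB color).
    control_points = [
        (0.0, 0.0, (0.0, 0.0, 0.0)),
        (0.3, 0.0, (0.2, 0.2, 0.8)),   # low values stay transparent
        (0.5, 0.6, (0.9, 0.6, 0.3)),   # mid range becomes visible
        (1.0, 1.0, (1.0, 1.0, 1.0)),   # highest values fully opaque
    ]

    def evaluate_transfer_function(values):
        """Map normalized scalar values to (opacity, RGB) by linear interpolation."""
        xs = np.array([p[0] for p in control_points])
        alphas = np.array([p[1] for p in control_points])
        colors = np.array([p[2] for p in control_points])
        opacity = np.interp(values, xs, alphas)
        rgb = np.stack([np.interp(values, xs, colors[:, c]) for c in range(3)], axis=-1)
        return opacity, rgb

    # Example: look up a few samples from a normalized volume.
    opacity, rgb = evaluate_transfer_function(np.array([0.1, 0.45, 0.8]))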


What’s that color-map called?


In the notebook you can see it set to Spectral.

https://github.com/Sohl-Dickstein/fractal/blob/main/the_boun...
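
For reference, a minimal matplotlib sketch of using that colormap (the data here is a random placeholder, not the fractal boundary computed in the notebook):

    import numpy as np
    import matplotlib.pyplot as plt

    data = np.random.rand(256, 256)   # placeholder 2D field

    plt.imshow(data, cmap="Spectral")
    plt.colorbar()
    plt.show()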


Most “professional” techno DJs in Germany use Traktor, if they are not using Ableton or the CDJs.


Ah, really? My feeling (in Berlin) has always been that because of its grip on CDJs, Rekordbox dominates.

But this is quite unfortunate: at least on my 2014 MBP, Rekordbox (and Mixxx, mentioned elsewhere in the thread) sends the fans into overdrive immediately while quickly becoming sluggish (just about the last thing you want from a DJing tool), whereas Traktor keeps things quiet and responsive.


It depends, I guess.

If you solve a problem that has been around for a while and LLMs offer a new way of approaching it, then it can definitely become a paper.

Of course, one has to verify in sophisticated experiments that this approach is stable.


Unfortunately no mention of ColorBrewer (https://colorbrewer2.org).


You wrote "out of the box". Did you find a way to improve on this?


You can do PCA or some other dimensionality reduction technique. That’ll reduce computation and improve signal/noise ratio when comparing vectors.


Unfortunately this is not feasible with a large number of words due to the quadratic scaling. But thanks for the response!


Not sure what you mean by a large number of words. You can fit PCA on millions of vectors relatively performantly, and then inference from it is just a matmul.
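
A minimal sketch of that workflow, assuming scikit-learn (the vector counts, dimensionality, and variable names are illustrative):

    import numpy as np
    from sklearn.decomposition import PCA

    # Illustrative data: one million 300-dimensional word vectors.
    rng = np.random.default_rng(0)
    vectors = rng.random((1_000_000, 300), dtype=np.float32)

    # Fitting only needs the d x d (here 300 x 300) covariance structure /
    # a truncated SVD of the data matrix, not an N x N matrix over the points.
    pca = PCA(n_components=50)
    pca.fit(vectors)

    # Projecting new vectors is a centered matrix multiplication,
    # equivalent to pca.transform(new_vectors).
    new_vectors = rng.random((10, 300), dtype=np.float32)
    projected = (new_vectors - pca.mean_) @ pca.components_.T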


Not true. To compare all vectors you need a pairwise distance matrix (for kernel PCA or MDS it is a Gram matrix), which scales quadratically with the number of points you want to compare. If you have 1 million vectors, each pair creating a float entry in the matrix, you end up with approximately (10^6)^2 / 2 unique values, which is roughly 2,000 GB (2 TB) of memory.
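
The back-of-the-envelope calculation, as a quick sketch:

    n = 1_000_000                    # number of vectors
    bytes_per_entry = 4              # float32
    unique_pairs = n * (n - 1) // 2  # upper triangle of the pairwise matrix

    print(unique_pairs * bytes_per_entry / 1e12)  # ~2.0 TB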


Some architectures are relatively well understood. E.g. in CNNs, the first layers detect low-level features like edges, gradients, etc. The next layer then combines these features into more complex structures like corners or circles. The next layer combines those into even higher-level features, and so on. [1]

Typically, you can take a pre-trained model and retrain it on your new dataset by changing only the weights of the last layer(s).
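
A minimal sketch of that kind of fine-tuning, assuming PyTorch/torchvision (the model choice and class count are made up for illustration):

    import torch
    import torch.nn as nn
    from torchvision import models

    # Load an ImageNet-pretrained CNN and freeze all of its weights.
    model = models.resnet18(weights="IMAGENET1K_V1")
    for param in model.parameters():
        param.requires_grad = False

    # Replace the last layer with a fresh head for the new dataset
    # (here a hypothetical 10-class problem); only this layer is trained.
    model.fc = nn.Linear(model.fc.in_features, 10)

    optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
    criterion = nn.CrossEntropyLoss()

    # Dummy batch just to show one training step.
    images = torch.randn(8, 3, 224, 224)
    labels = torch.randint(0, 10, (8,))

    loss = criterion(model(images), labels)
    loss.backward()
    optimizer.step()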

Some loss functions even measure the difference between the high-level features of two images, typically extracted from a pre-trained CNN (perceptual loss).
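
Roughly, such a perceptual loss can be sketched like this, again assuming PyTorch/torchvision (the layer cut-off is an arbitrary choice, and ImageNet normalization of the inputs is omitted for brevity):

    import torch
    import torch.nn.functional as F
    from torchvision import models

    # Early convolutional blocks of a pretrained VGG as a fixed feature extractor.
    vgg_features = models.vgg16(weights="IMAGENET1K_V1").features[:16].eval()
    for param in vgg_features.parameters():
        param.requires_grad = False

    def perceptual_loss(prediction, target):
        """MSE between high-level VGG features instead of raw pixels."""
        return F.mse_loss(vgg_features(prediction), vgg_features(target))

    # Dummy images just to show the call.
    prediction = torch.rand(1, 3, 224, 224)
    target = torch.rand(1, 3, 224, 224)
    loss = perceptual_loss(prediction, target)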

[1] Matt Zeiler did amazing work on these findings 10 years ago (https://arxiv.org/abs/1311.2901).


I have not seen much work on explainable AI regarding large language models. I remember many very nice visualizations and visual analysis tools trying to comprehend what the network "is seeing" (e.g. in the realm of image classification) or doing.

