Hacker News | kwertzzz's comments

Can you give an example where theories and techniques from other fields are reinvented? I would be genuinely interested for concrete examples. Such "reinventions" happen quite often in science, so to some degree this would be expected.


The Bethe ansatz is one. It took a tour de force by Yedidia to recognize that loopy belief propagation computes a stationary point of the Bethe approximation to the free energy.

Many statistical thermodynamics ideas were reinvented in ML.

The same is true for mirror descent. It was independently discovered by Warmuth and his students as Bregman-divergence proximal minimization, or, in a special case, as the exponentiated gradient algorithms.

One can keep going.
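The exponentiated-gradient special case mentioned above is easy to sketch: a multiplicative update followed by renormalization, which is exactly mirror descent with the negative-entropy mirror map (KL Bregman divergence). The function name and toy loss below are mine, purely for illustration:

```python
import math

def eg_update(w, grad, eta):
    """One exponentiated-gradient step on the probability simplex:
    multiplicative update, then renormalize. Equivalent to mirror
    descent with the negative-entropy mirror map."""
    w_new = [wi * math.exp(-eta * gi) for wi, gi in zip(w, grad)]
    z = sum(w_new)
    return [wi / z for wi in w_new]

# Toy use: minimize a linear loss <c, w> over the simplex.
c = [3.0, 1.0, 2.0]
w = [1 / 3, 1 / 3, 1 / 3]
for _ in range(100):
    w = eg_update(w, c, eta=0.5)
# Mass concentrates on the coordinate with the smallest cost (index 1).
```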


The connections of deep learning to stat-mech and thermodynamics are really cool.

It's led me to wonder about the origin of the probability distributions in stat-mech. Physical randomness is mostly a fiction (outside maybe quantum mechanics) so probability theory must be a convenient fiction. But objectively speaking, where then do the probabilities in stat-mech come from? So far, I've noticed that the (generalised) Boltzmann distribution serves as the bridge between probability theory and thermodynamics: It lets us take non-probabilistic physics and invent probabilities in a useful way.
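That bridge can be made concrete: maximizing entropy subject to a fixed mean energy yields p_i ∝ exp(-βE_i), i.e. the Boltzmann distribution falls out as the least-committal assignment consistent with the constraint. A toy sketch (the energy values are made up):

```python
import math

def boltzmann(energies, beta):
    """Max-ent distribution for a fixed mean energy: p_i ∝ exp(-beta * E_i)."""
    weights = [math.exp(-beta * e) for e in energies]
    z = sum(weights)  # partition function
    return [w / z for w in weights]

E = [0.0, 1.0, 2.0]
p = boltzmann(E, beta=1.0)      # lower-energy states are more probable

# As beta -> 0 (high temperature) the distribution tends to uniform,
# recovering the equal-occupancy postulate.
p_hot = boltzmann(E, beta=1e-9)
```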


In Boltzmann's formulation of stat-mech it comes from the assumption that when a system is in "equilibrium", then all the micro-states that are consistent with the macro-state are equally occupied. That's the basis of the theory. A prime mover is thermal agitation.

It can be circular if one defines equilibrium to be that situation when all the micro-states are equally occupied. One way out is to define equilibrium in temporal terms - when the macro-states are not changing with time.


The Bayesian reframing of that would be that when all you have measured is the macrostate, and you have no further information by which to assign a higher probability to any compatible microstate than any other, you follow the principle of indifference and assign a uniform distribution.


Yes indeed, thanks for pointing this out. There are strong relationships between max-ent and Bayesian formulations.

For example, one can use a non-uniform prior over the micro-states. If that prior happens to be in the Darmois-Koopman (exponential) family, that implicitly means there are some not-explicitly-stated constraints binding the micro-state statistics.
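A small sketch of that idea: minimizing relative entropy to a non-uniform prior under a linear (mean-energy) constraint produces an exponentially tilted prior, p_i ∝ q_i exp(-λE_i); with a uniform prior this reduces to the ordinary Boltzmann form. The numbers are illustrative only:

```python
import math

def tilted(prior, energies, lam):
    """Minimum-KL-to-prior distribution under a linear constraint:
    p_i ∝ q_i * exp(-lam * E_i)."""
    w = [q * math.exp(-lam * e) for q, e in zip(prior, energies)]
    z = sum(w)
    return [x / z for x in w]

q = [0.5, 0.3, 0.2]   # non-uniform prior over micro-states
E = [0.0, 1.0, 2.0]
p = tilted(q, E, lam=1.0)
# With lam = 0 the constraint is inactive and the prior is returned unchanged.
```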


One might add 8- and 16-bit training and quantization. Also, computing semi-unreliable values with error correction. Such tricks have been used in embedded software development on MCUs for some time.
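For a feel of the quantization side, here is a minimal affine (asymmetric) 8-bit quantize/dequantize round trip; this is a generic sketch of the technique, not any particular framework's implementation:

```python
def quantize(xs, bits=8):
    """Affine quantization of floats to unsigned `bits`-bit integers."""
    lo, hi = min(xs), max(xs)
    qmax = (1 << bits) - 1
    scale = (hi - lo) / qmax if hi > lo else 1.0
    q = [round((x - lo) / scale) for x in xs]
    return q, scale, lo

def dequantize(q, scale, lo):
    """Map the integer codes back to approximate float values."""
    return [qi * scale + lo for qi in q]

xs = [-1.0, -0.1, 0.0, 0.5, 1.0]
q, scale, zero = quantize(xs)
approx = dequantize(q, scale, zero)
# Round-trip error is bounded by half a quantization step (scale / 2).
```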


I mean the entire domain of systems control is being reinvented by deep RL: system identification, stability, robustness, etc.


Good one. Slightly different focus but they really are the same topic. Historically, Control Theory has focused on stability and smooth dynamics while RL has traditionally focused on convergence of learning algorithms in discrete spaces.


I am not sure if type K thermocouples are used for meteorological air temperature measurements.

These sensors (based on thermal resistance) for example have an accuracy of 0.2 °C under typical conditions [1].

[1] https://www.ti.com/lit/ds/symlink/tmp1826.pdf?ts=17427997638...


It would also be interesting to know how the orbits became quite circular afterwards (relative to the common center of mass).


> Not to mention that on Linux switching the locale may change the interpretation of filenames as characters, which isn’t the case with NTFS.

If you change the locale to an uninstalled one, then yes. But if the locale is installed, then I don't see a problem.

    $ echo $LANG
    en_US.UTF-8
    $ touch fusée.txt
    $ LANG=fr_FR.UTF-8 ls
    'fus'$'\303\251''e.txt'
    $ sudo locale-gen fr_FR.UTF-8
    $ sudo update-locale
    $ LANG=fr_FR.UTF-8 ls
    fusée.txt

Are you maybe using a non-UTF-8 locale?


Yes, I mean locales like fr_FR.ISO-8859-15, ja_JP.SJIS or zh_CN.GBK.

While these probably aren’t used much anymore, it still means that your filenames can break just by setting an environment variable. Or issues like here: https://news.ycombinator.com/item?id=16992546


The article also mentions that the computers are short-circuiting. I guess that you can lose a lot of energy when there is a short circuit. Maybe with an infrared camera one could get an idea of where all the energy is wasted. Apparently the author did check that Sentry was not enabled.


The short circuiting comment sounded to me like something someone non-technical would say when they don't really know the answer.


Sadly, several Python projects do not use semantic versioning, for example xarray [0] and dask. NumPy can make backward-incompatible changes after a warning period of two releases [1]. In general, the Python packaging docs do not really read as an endorsement of semantic versioning [2]:

> A majority of Python projects use a scheme that resembles semantic versioning. However, most projects, especially larger ones, do not strictly adhere to semantic versioning, since many changes are technically breaking changes but affect only a small fraction of users...

[0] https://github.com/pydata/xarray/issues/6176

[1] https://numpy.org/doc/stable/dev/depending_on_numpy.html

[2] https://packaging.python.org/en/latest/discussions/versionin...
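The semver contract the quote is relativizing can be reduced to a tiny predicate: same major version and at least the required version. This toy checker (my own names, and deliberately not a full semver parser, no pre-release tags etc.) shows what consumers implicitly rely on:

```python
def parse(version):
    """Parse 'MAJOR.MINOR.PATCH' into a tuple of ints (toy, not full semver)."""
    return tuple(int(part) for part in version.split("."))

def compatible(installed, required):
    """Under strict semver an installed version satisfies a requirement iff
    the major versions match and installed >= required."""
    inst, req = parse(installed), parse(required)
    return inst[0] == req[0] and inst >= req

ok = compatible("1.4.2", "1.2.0")      # minor/patch bumps are safe under semver
broken = compatible("2.0.0", "1.2.0")  # a major bump signals breaking changes
```

For projects that only "resemble" semver, as the quoted docs put it, the first case can still break a small fraction of users.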


I was asking myself the same question. I would love to see a derivation from, e.g., the Navier-Stokes equations for this. Intuitively, when you draw the streamlines under a rectangular wing, the applied force should be related to the curvature of the streamlines (which is larger near the leading edge of the wing).

I made a simple 2D Navier-Stokes solver here, where you can use the mouse to draw a section of a wing:

https://alexander-barth.github.io/FluidSimDemo-WebAssembly/

The color represents the pressure (I should add a proper color bar!).
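The curvature intuition has a standard one-line form: the normal-direction momentum balance for steady inviscid flow gives dp/dn = ρv²/R across curved streamlines, so tighter curvature means a larger pressure difference. The numbers below are illustrative only, not taken from the linked solver:

```python
# Pressure gradient across curved streamlines: dp/dn = rho * v**2 / R
# (normal-direction momentum balance for steady inviscid flow).
rho = 1.2    # air density, kg/m^3
v = 30.0     # flow speed along the streamline, m/s
R = 0.5      # local radius of curvature, m

dp_dn = rho * v**2 / R   # pressure increases away from the center of curvature
```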


So we want:

1. free/gratis Linux distribution

2. long support (~ 10 years)

3. not needing to contribute (as a community)

It seems that we can only pick two from this list (even a distribution with a short support cycle needs a community).


This is true, but you can inspect the package (and its dependencies) once installed and before importing it. Right now it is an entirely manual process (with default tools, as far as I know), which contributes to the current state where only very few people inspect their packages.

It can also give people a second chance to notice, e.g., a typo in the package name.
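With the standard library you can at least skim what an installed distribution contains without importing (and thus executing) any of its code; `importlib.metadata` exposes the file list. A small sketch (the helper name is mine):

```python
from importlib import metadata

def list_package_files(dist_name):
    """Return installed file paths for a distribution, or None if it
    is not installed. Lets you skim a package's contents before ever
    importing it (import would run the package's top-level code)."""
    try:
        files = metadata.files(dist_name)
    except metadata.PackageNotFoundError:
        return None
    return [str(f) for f in (files or [])]

# E.g. look for unexpected modules or scripts without running anything:
suspicious = list_package_files("some-typo-squatted-name")  # None if absent
```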


Here is a "Proof of the Law of Cosines" by the same author:

https://www.tandfonline.com/doi/pdf/10.4169/amer.math.monthl...

But this time not with complex numbers.


So, I have to pay to read the proof? Not fun.


It's also on JSTOR: https://www.jstor.org/stable/10.4169/amer.math.monthly.121.0... . No payment needed there (unless you've burned through your 100 free articles for this month).


I appreciate that, but it requires creating an account, and for people who have a gazillion other things to be doing, taking the time to create Yet Another Account on Yet Another System just for the one article is something most people will pass on.

It may not have a financial cost, but it certainly has an opportunity cost, and contributes to "Account Fatigue".

But loved the proof of Heron's formula, and I'm pleased to be able to say thank you in person, and make sure I give credit when I use it for the Math Club talks I do.

Great work ... feel free to contact me if you are so inclined, or put an email address in your profile.

