tristanc's comments

This is some of the greatest news I've ever heard for the digital preservation community. So many projects over the years could have used resources like this. Thank you for contributing to humankind!


I'll bite. The article mentions:

> To calculate the wind speed on WASP-127b, the team tracked how fast molecules in the planet's atmosphere moved using the 'Very Large Telescope' (VLT) located in the south American country of Chile.

Could you provide any more insight into how this was achieved? Any interesting challenges associated with the measurement? What are the practical limitations of this methodology? How did you pick the planet as a target to measure?

None of these questions is necessary; I'm just genuinely curious if you could provide any insight.


Great questions, thanks!

> Could you provide any more insight into how this was achieved? The observations consist of spectra: light from the star is split into wavelengths at fine resolution (we can distinguish very small differences in wavelength/frequency). When the planet passes between the star and us, its atmosphere absorbs a tiny fraction of the stellar light, and we can see that as fingerprints in the spectrum associated with molecules we have characterized in the lab (here we see carbon monoxide and water). The way we can isolate the planet's signature - as opposed to the signature from the star, or, say, the water in Earth's atmosphere - is that the planet is moving and its velocity with respect to us is varying! The Doppler effect (think of an ambulance coming towards you making the sound higher-pitched) means that the signature from the planet also gets shifted in frequency.
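As a back-of-the-envelope illustration of the effect described above (my own sketch, not the team's code; the wavelength and velocity are made-up but plausible numbers), the non-relativistic Doppler shift of a spectral line is just Δλ = λ·v/c:

```python
# Sketch: how far a spectral line shifts for a given radial velocity.
# All numbers are illustrative, not taken from the study.
C = 299_792.458  # speed of light, km/s


def doppler_shift_nm(rest_wavelength_nm: float, velocity_km_s: float) -> float:
    """Non-relativistic Doppler shift: delta_lambda = lambda * v / c."""
    return rest_wavelength_nm * velocity_km_s / C


# A CO absorption line near 1600 nm, shifted by a ~9 km/s wind:
shift = doppler_shift_nm(1600.0, 9.0)
print(f"{shift:.4f} nm")  # → 0.0480 nm
```

A shift of a few hundredths of a nanometre is tiny, which is why the high spectral resolution mentioned above matters.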

> Any interesting challenges associated with the measurement? Yeah, one of the big challenges is that Earth's atmosphere is not very transparent in the near-infrared, so it absorbs a lot of light and heavily pollutes our data. Getting rid of the atmospheric spectrum with high enough accuracy is challenging, but we manage - either by doing some physics and modelling it, and/or by using data analysis tools (PCA, for instance).
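To give a feel for the PCA idea mentioned above (my own toy illustration, not the team's pipeline): stack the spectra as a time-by-wavelength matrix and subtract the leading singular-vector components, which capture the quasi-static stellar and telluric (Earth-atmosphere) signal, while the Doppler-shifting planetary signal largely survives:

```python
import numpy as np

# Toy data: 50 exposures of a 200-pixel spectrum. The "static" part stands
# in for the stellar + telluric spectrum, identical in every exposure.
rng = np.random.default_rng(0)
n_time, n_wave = 50, 200
static = np.outer(np.ones(n_time), rng.normal(0.0, 1.0, n_wave))
noise = rng.normal(0.0, 0.01, (n_time, n_wave))
data = static + noise

# Remove the k leading principal components via SVD.
U, s, Vt = np.linalg.svd(data, full_matrices=False)
k = 1
cleaned = data - (U[:, :k] * s[:k]) @ Vt[:k, :]

# The dominant static pattern is gone; only small residuals remain.
print(np.std(cleaned) < 0.1 * np.std(data))  # → True
```

In real data one has to choose k carefully: too few components leaves telluric residuals, too many starts eating the planetary signal itself.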

> What are the practical limitations of this methodology? The main one is the amount of light we need; that's why this instrument sits at the Very Large Telescope (the primary mirror is 8 m in diameter), one of the largest European telescopes. We need a very good signal, as we are splitting the light into many wavelengths. And we cannot just take very long exposures, because the transit of the planet in front of the star happens in a limited time (a few hours).

> How did you pick the planet as a target to measure? Normally we start with a big spreadsheet of all planets and work out which ones are observable from a given telescope, which stars are bright enough, and which planets could have an observable signal. WASP-127 was studied in the past and other teams had already detected some molecules, but we did not expect to measure winds like these.


Location: Washington D.C. and metro area

Remote: Preferred

Willing to relocate: no

Technologies: C++, Linux, Python, Rust, ML (PyTorch), 3D graphics programming (shaders, data processing, rendering, ...), Multimedia (FFMPEG, GStreamer, ...), Git, etc...

Résumé/CV: http://tristancharpentier.com/resume.pdf

Email: tristancharpentier[at]protonmail[dot]com

Hi! I'm Tristan, a recent graduate in C.S. from Sorbonne University in Paris but now based in the Washington D.C. area. Lately, I have been working on interactive 3D interfaces for music discovery with personalized recommendations using neural nets, pixel shaders for rendering weather forecasts, and real-time lidar processing and shading for augmented reality using a Kinect.

My projects are driven by intellectual curiosity, sometimes originating from conversations with my friends. I enjoy doing the research to achieve unique results. Here are some of my projects: http://tristancharpentier.com/projects.html

GitHub: https://github.com/hashFactory/

YouTube: https://youtube.com/@tridostudio


Location: Washington D.C. and metro area

Remote: Preferred

Willing to relocate: yes

Technologies: C++, Linux, Python, 3D graphics programming (shaders, data processing, rendering, …), Multimedia (FFMPEG, GStreamer, ...)

Résumé/CV: tristancharpentier.com/resume.pdf

Email: tristancharpentier[at]protonmail[dot]com

Hi! I'm Tristan, a recent graduate in C.S. from Sorbonne University in Paris but based in the Washington D.C. area. Lately, I have been working on interactive 3D interfaces for music discovery with personalized recommendations using neural nets, pixel shaders for rendering weather forecasts, and real-time lidar processing and shading for augmented reality with a Kinect.

My projects are driven by intellectual curiosity, sometimes originating from conversations with my friends. I enjoy doing the research to achieve unique results. Here are some of my projects: https://tristancharpentier.com/projects.html and my github: https://github.com/hashFactory/


Hi atdrummond!

I'm sorry I missed your earlier thread; I had no idea you were doing this and was a little stumped, but I think you are an amazing person for doing it.

I'm writing because I think I could use your help!

I'm a recent graduate from university in France (Sorbonne University) with a degree in CS. I recently moved back to the States, to my parents' house, to look for a job, but it's been difficult to get an offer.

I'm passionate about CS and have been working on personal projects while waiting to land an interview. Most recently I've been working on projects like creating novel interfaces for exploring music by analyzing audio with a neural net and plotting in 3D according to genre classification.

(demos from my project here: https://www.youtube.com/watch?v=1UyjFeLjGJs&list=PLvzGE7O7Di... )

I've been having a great time developing this and my friends are all encouraging, but it's been difficult to convince a company to hire me or to find a way to monetize this.

Furthermore, I only have access to an old family computer from 2013, so I have been limited to doing audio processing and hosting on CPU.

I would be eternally grateful and appreciative if you would do me the favor of helping finance a PC with reasonable specs such as a GPU and proper storage capacity (stuck on a 512GB SSD right now...) to help me develop my projects further!

I would only ask for ~$800 and I could finance the rest but this would be a huge huge huge help!

I still can't believe someone could be as generous as you, but I know HN is full of thoughtful people, so I'd love to discuss!

I'm not sure which transfer method is best yet but I have a bank account in the US and paypal so I'm sure we could make something work.

Thank you so much!!!! My name is Tristan, btw (: I am located in Virginia and my email is tristancharpentier@protonmail.com


Use any model trained on the AudioSet dataset. There is one called EfficientAT, I think, that I use regularly and find pretty reliable.


AFAIK, there is only support for AV1 decoding in the latest Snapdragon 8 Gen 2 and Apple's A17 Pro. No mobile SoC supports AV1 encoding (so far).


The new Tensor G3 in the Pixel 8 is supposed to have encoding. Not sure if it's been officially confirmed, though.


Getting there; maybe for AV2.


  Location: Washington D.C. and metro area
  Remote: yes or on-site
  Technologies: C++, Linux, Python, 3D graphics programming (shaders, data processing, rendering, …), Multimedia (FFMPEG, GStreamer, ...)
  Résumé/CV: https://tristancharpentier.com/resume.pdf
  Email: tristan_charpentier[at]hotmail[dot]com
Hi, I'm Tristan. I am passionate about math and CS and graduated with a B.S. in CS from Sorbonne University in Paris (June 2023).

Recently I have been working on interactive 3D interfaces for music discovery with personalized recommendations using neural nets, pixel shaders for rendering weather forecasts, and real-time lidar processing and shading for augmented reality with a Kinect.

My projects are driven by intellectual curiosity, sometimes originating from conversations with my friends. I enjoy doing the research to achieve unique results. Here are some of my projects: https://tristancharpentier.com/projects.html


Woah, is this suggesting that the cumulative human-years experienced by the current living US population will add up to the age of the entire universe?

Mind-boggled again!


  >>> 27600000000 / 332000000
  83.13253012048193

According to Google, the current life expectancy (as of 2020) is 77.28 years.

So a touch shy, but almost.


and this <waves hands around> is all we have to show for it?


Interesting; I attempted to do the same as you but stopped just shy of BPM matching.

However, I did get sound similarity working using an audio-tagging neural net [1]. I chopped off the first and last 15 seconds of every song in my collection and ran them all through this analysis, which produces a ~520-dimensional vector. I then targeted specific endings I wanted to match and used Euclidean distance to find the closest matching song beginning.

YMMV, but I thought it actually worked pretty well; I just never got around to automating the BPM matching. I can try to look for my old script if you're interested :)

[1] https://github.com/fschmid56/EfficientAT
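The matching step described above can be sketched like this (hypothetical synthetic embeddings standing in for the real model output; an actual run would use ~520-dim vectors from a tagger like EfficientAT):

```python
import numpy as np

# Fake embeddings: one "ending" clip and 100 candidate "beginning" clips.
rng = np.random.default_rng(1)
ending = rng.normal(size=520)               # embedding of a song's last seconds
beginnings = rng.normal(size=(100, 520))    # embeddings of candidate openings
beginnings[42] = ending + rng.normal(0.0, 0.05, 520)  # plant a near-match

# Euclidean distance from the ending to every candidate beginning.
dists = np.linalg.norm(beginnings - ending, axis=1)
best = int(np.argmin(dists))
print(best)  # → 42
```

With real embeddings you'd likely want to normalize the vectors first, since cosine and Euclidean distance can rank candidates differently when magnitudes vary.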

