
Some neat results from the last six months or so:

- Significantly improved diffusion models (DALL-E 2, Midjourney, Stable Diffusion, etc.)

- Diffusion models for video (see https://video-diffusion.github.io/, this paper is from April but I expect to see a lot more published research in this area soon)

- OpenAI Minecraft w/ VPT (first model with a non-zero success rate at mining diamonds in <20 min)

- AlphaCode (from February, reasonably high success rate on solving competitive programming problems)

- Improved realism and scale for NeRFs (see https://dellaert.github.io/NeRF22/ for some cool examples from this year’s CVPR)

- Better sample efficiency for RL models (see https://arxiv.org/abs/2208.07860 for a recent real-world example)


> - AlphaCode (from February, reasonably high success rate on solving competitive programming problems)

"Reasonably high" was 50% with 10 attempts, meaning the success rate on the first attempt could be as low as 5% -- and who knows how many of those problems had leaked into the training data.


This would basically be my list.

I'd add GPT-3 & GitHub Copilot, which my team and I use professionally. It's far from perfect, but it's a great GSD (get-stuff-done) tool, especially for things like regexes, bash scripts, and weird APIs.


https://www.deeplearningbook.org/ and http://incompleteideas.net/book/the-book-2nd.html are excellent resources for supervised and reinforcement learning, respectively, and some knowledge of statistics and probability goes a long way. But I think by far the most important thing is to just start training models, even very small ones, and develop an intuition for what works and what the failure modes are.

- Get really comfortable with matplotlib or your graphing library of choice. Plot your data in every way you can think of. Plot your models' outputs, and find which samples they do best and worst on (there's a quick sketch of this after the list).

- Play around with different hyperparameters and data augmentation strategies and see how they affect training.

- Try implementing backprop by hand -- understanding the backward pass of the different layers is extremely helpful when debugging (see the linear-layer sketch after this list). I found Karpathy's CS231n lectures to be a great starting point for this.

- Eventually, you'll want to start reading papers. The seminal papers (AlexNet, ResNet, Attention Is All You Need, etc.) are a good place to start. I found https://www.youtube.com/c/YannicKilcher (especially the early videos) to be a very useful companion resource for this.

- Once you've read some papers and feel comfortable with the format, you'll want to try implementing something. Important tricks are often hidden away in the appendices -- read them carefully!

- And above all, remember that machine learning is a dark art -- when your dataloader has a bug in its shuffling logic, or when your tensor shapes get broadcast incorrectly, your code often won't throw an error; your model will just be slightly worse and you'll never notice (the toy example below shows one such silent bug). Because of this, 90% of being a good ML researcher/engineer is writing tests and knowing how to track down bugs. http://karpathy.github.io/2019/04/25/recipe/ perfectly summarizes my feelings on this.
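
Here's a quick sketch of the "find the worst samples" habit from the matplotlib bullet (the losses and images arrays are hypothetical stand-ins, so treat this as a template rather than anything definitive):

  import numpy as np
  import matplotlib.pyplot as plt

  losses = np.random.rand(1000)          # per-sample losses (stand-in values)
  images = np.random.rand(1000, 28, 28)  # the corresponding inputs (stand-in)

  worst = np.argsort(losses)[-9:]        # indices of the 9 highest-loss samples
  fig, axes = plt.subplots(3, 3, figsize=(6, 6))
  for ax, i in zip(axes.flat, worst):
      ax.imshow(images[i], cmap='gray')
      ax.set_title(f'loss={losses[i]:.2f}')
      ax.axis('off')
  plt.show()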
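
And for the backprop bullet, a minimal worked example: the forward and backward pass of a single linear layer with an MSE loss, derived by hand and sanity-checked against a finite-difference estimate (shapes are arbitrary):

  import numpy as np

  x = np.random.randn(4, 3)   # batch of 4 inputs
  W = np.random.randn(3, 2)
  b = np.random.randn(2)
  t = np.random.randn(4, 2)   # targets

  # Forward: y = x @ W + b, loss = mean((y - t)^2)
  y = x @ W + b
  loss = ((y - t) ** 2).mean()

  # Backward, via the chain rule
  dy = 2 * (y - t) / y.size   # dL/dy
  dW = x.T @ dy               # dL/dW
  db = dy.sum(axis=0)         # dL/db
  dx = dy @ W.T               # dL/dx

  # Check dW[0, 0] against a numerical gradient
  eps = 1e-6
  W2 = W.copy(); W2[0, 0] += eps
  loss2 = ((x @ W2 + b - t) ** 2).mean()
  assert abs((loss2 - loss) / eps - dW[0, 0]) < 1e-4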
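
And here's the kind of silent bug I mean -- mismatched shapes that broadcast instead of erroring (toy shapes, obviously):

  import numpy as np

  preds = np.random.randn(32)        # shape (32,)
  targets = np.random.randn(32, 1)   # shape (32, 1) -- oops

  # (32,) - (32, 1) silently broadcasts to (32, 32): no error is thrown,
  # the "loss" is just meaningless.
  loss = ((preds - targets) ** 2).mean()

  # The one-line test that would have caught it (it fires here, which is the point):
  assert preds.shape == targets.shape, (preds.shape, targets.shape)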


I second Karpathy's version of CS231n (2016). He's an amazing lecturer.

A good alternative to Goodfellow is "Dive into Deep Learning" (https://d2l.ai), which is free and more up-to-date, interactive, and practical, IMO. Videos of a 2019 Berkeley course based on it are available too (https://courses.d2l.ai/berkeley-stat-157/).


Goodfellow's book is a bad recommendation for people who don't already know the material in it.


It’s possible, though not quite as clean as you’d like:

  import math
  from IPython.display import display, Markdown as md
  display(md(f'*pi = {math.pi}*'))


Tried that; it works well for simple equations, but Python's string escape character is the same as LaTeX's control character for math commands (\frac{}{}, etc.). Doing anything beyond the simplest of equations becomes ridiculously tedious.
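
For example, even a simple fraction ends up like this (num and den are just hypothetical variables):

  from IPython.display import display, Markdown as md
  num, den = 3, 4
  # \f is a Python string escape and {} is f-string syntax, so both
  # have to be doubled/escaped to render $\frac{3}{4}$:
  display(md(f'$\\frac{{{num}}}{{{den}}}$'))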


I was shooketh


Huge thanks to the node team for adding this, I’ve been wanting fetch in node for years now! Installing node-fetch for every project was getting kind of old haha


The lisp folks are probably wondering why it took us so long to figure out the whole “all configuration is code” thing haha


The Tcl folks are also wondering the same thing...

A "configuration file" is simply a valid script that is executed in a safe interpreter.


I know there was a vogue to pretend otherwise, but there are definitely benefits to not letting your config file execute arbitrary code.


What do you mean by "arbitrary"? If you don't want it to access the network or whatever, that's definitely possible. If you don't want it to be able to loop forever, Turing-incomplete languages like Dhall exist, but that doesn't stop them from expressing "loop for a trillion years" -- though you can always say "we'll execute this config file for a second and then halt with an error if it isn't already done". (You can also pipe an infinite stream into a FIFO and trick your program even if it uses a dumb config language.)
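
The "run it for a second, then bail" part is easy enough to sketch (eval_config.py is a hypothetical script that evaluates the config and prints the result):

  import subprocess

  try:
      out = subprocess.run(['python', 'eval_config.py'],
                           capture_output=True, timeout=1.0, check=True)
  except subprocess.TimeoutExpired:
      raise SystemExit('config did not finish evaluating within 1s')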


The problem with Turing-complete config languages is not that they might take a long time to evaluate. The problem is that they are hard to modify by programs. A purely data-driven configuration, e.g. an ini-style config file, JSON, or XML, is relatively easy to parse, transform, and unparse again. Doing that with a config file written in a Turing-complete language is substantially harder, theoretically even impossible. But in practice, most configurations only use a small subset of what the language allows; a language like Dhall tries to formalize that subset and thereby enables better processing of such programs.
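
That's the whole appeal of the data-driven formats -- a program can round-trip them mechanically (assuming a config.json with, say, a "workers" key):

  import json

  with open('config.json') as f:
      cfg = json.load(f)               # parse
  cfg['workers'] *= 2                  # transform
  with open('config.json', 'w') as f:
      json.dump(cfg, f, indent=2)      # unparse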


Yeah, the stock driver assists in most cars still need a LOT of work. I drove from San Diego to LA last weekend with a comma, and openpilot handled the entire trip without a single disengagement until I got off the highway. I’d be really impressed to see a stock LKA manage the same (although I’ve heard great things about super cruise!)


Man, I saw that thing moving sideways right after liftoff and I thought it was a goner! Huge congrats to SpaceX for landing with an offset engine like that on their first try!!!


In addition to the other replies, it is also standard for rockets to 'clear the pad' as soon as possible, to minimize damage to the ground support equipment.

The amount of kick to the side is almost certainly due to the offset engine, but they would definitely design the flight path (with that in mind) to clear the pad as fast as reasonable as well.


Yep, pretty much every rocket is much, much cheaper than the often one-off launch infrastructure.

From what I remember, after a string of pad-destroying failures, some Soviet rockets had all abort commands disabled for the first 30 seconds of flight -- regardless of what happens, it must not hit the pad, or Barmin (the chief designer of most Soviet launch complexes) will be angry, and you don't want that.


The single Raptor engine is offset from the center on this test, so it had to make some pretty quick and somewhat dramatic adjustments to keep things upright.


Why is it offset from center? To stress test the steering?


The design of the full rocket doesn't have a single center engine. Instead, it has three engines at the center.

This test article uses the same layout for engines, but with one engine instead of three. So that one engine is offset from the center.

Their flight software is already capable of handling the multiple-engine-out scenario and compensating for it, so there's no real reason for them to center the engine for the test article.


The 'thrust puck' the engine is mounted to is designed for multiple engines, none of which are in the centre.

They are only using one engine this test, but are testing the flight-design thrust puck (as opposed to some interim structure with a centre mount for the single engine).


It's designed for 3 sea level engines and 3 vacuum engines to be installed in a radially symmetric fashion... But they only installed 1 sea level engine on this prototype.


There's some center-of-gravity shifting as you burn up fuel. Not an expert, so here's where the streamer talks a bit about it, including why the Space Shuttle did it too: https://youtu.be/NJR4gZBLMNw?t=1195


Same here. I was waiting to see if it would cross the point of no return and obliterate their ground facilities... again. Instead, looks to have been a nearly perfect test. Onward!


It did obliterate their launch stand, with the flame of the Raptor cutting through a thick steel beam, and you can see walkways being tossed about at ~0:12 -- but they already have parts for 3 of those on site.


Yep. The "AOA sensor disagreement" light is built into every cockpit, but it won't turn on unless the airline purchased a specific premium add-on. Boeing claims they initially didn't realize the light was bundled as an add-on, and then once they did realize, they decided it wasn't critical to the airplane's safety and just kept selling it as an add-on, which is... kind of alarming, to put it mildly. https://www.nytimes.com/2019/05/05/business/boeing-737-max-w...


> The "AOA sensor disagreement" light is built into every cockpit,

While technically true, that sounds a bit misleading considering that the AOA DISAGREE warning "light" is just a text indicator on the primary flight display: https://www.boeing.com/resources/boeingdotcom/commercial/737...


Shoot you’re right, I read “warning light” and just assumed. Thanks for the correction! On the other hand, it’s even weirder that Boeing didn’t bother fixing it if it was just a software patch...


Wow, that is remarkable.


The success of Tesla as a business will have far more impact on our progress against global warming than CA’s regulatory goals will. Maybe some don’t see it that way, but I’m quite certain Musk does, and frankly I agree with him.

