I watched the whole talk, so I heard the bit about the IDE, but I still think there's a fundamental capability that wasn't touched upon: being able to walk through the "decision-making logic" of your "code" (in this case, the model). For example, suppose your model misclassifies a barrier and a car crashes into it as a result [1]. How do you debug this? You can say, "Well, it's a data-labelling problem" and go collect more data on barriers, but in the meantime people have died. Model testing and debugging should be an incredibly high priority for use cases like Tesla's. That means some degree of interpretability, edge-case testing, simulation, anything that surfaces flaws like this before they show up in real life.
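To make "edge-case testing" concrete, here's a rough sketch of the kind of release-gating test I have in mind (Python; the file path, label scheme, and model.predict interface are placeholders I made up, not anything Tesla actually runs):

    import numpy as np

    def load_edge_cases(path="edge_cases/barriers.npz"):
        # Curated hard examples: gathered from disengagements, incident
        # reports, and simulation, then labelled by hand.
        data = np.load(path)
        return data["images"], data["labels"]

    def test_barrier_recall(model, min_recall=0.999):
        images, labels = load_edge_cases()
        preds = model.predict(images)  # generic classifier interface
        is_barrier = labels == "barrier"
        recall = (preds[is_barrier] == "barrier").mean()
        # Gate the release: any regression on the barrier set blocks deployment.
        assert recall >= min_recall, f"barrier recall {recall:.3f} < {min_recall}"

The point isn't this specific test; it's that every known failure mode becomes a permanent, automated check that runs before anything ships.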
See here [2] for an example of production ML testing practices. I wonder how much of this is in place at Tesla? I would argue they should be at the forefront of work like this. Something tells me they aren't.
[1] https://news.ycombinator.com/item?id=17257239
[2] https://ai.google/research/pubs/pub46555