As someone who works on a platform users have used to label 1B images, I'm bullish that SAM 3 can automate at least 90% of the work. Data prep flips from humans being model-assisted to models being human-assisted (see "autolabel" https://blog.roboflow.com/sam3/). I'm optimistic the majority of users can now deploy a model first and then curate data, instead of the inverse.
A brief history:

SAM 1 - visual prompting to create pixel-perfect masks in an image. No video, no class names, no open vocabulary.

SAM 2 - visual prompting for tracking in images and video. Still no open vocabulary.

SAM 3 - open-vocabulary concept segmentation in images and video.
Roboflow has been long on zero-/few-shot concept segmentation. We've opened a research preview exploring a SAM 3-native direction for creating your own model: https://rapid.roboflow.com/
Yes. But also note that redistribution of SAM 3 requires using the same SAM 3 license downstream. So libraries that attempt to, e.g., relicense the model as AGPL are non-compliant.
If this is what's in the consumer space, I'd imagine the government has something much more advanced. It's probably a foregone conclusion that they are recording the entire country (maybe the world) and storing everyone's movements, or are getting close to it.
The bike-lane-compliant vehicle category is exciting. Infinite Machine (infinitemachine.com) made me aware of this category with their Olto model, which comes in at a surprisingly competitive price point.
One of the most common uses for edge AI not listed in this course is computer vision. You similarly want real-time inference when processing video. Another open source project that makes it easy to run SOTA vision models on the edge is inference: https://github.com/roboflow/inference