Hacker News

I reverse engineered their stuff a bit. I downloaded their Android APK and found a TensorFlow Lite model inside. It accepts 299x299px RGB input and spits out probabilities/scores for about 25,000 species. The phylogenetic ranking is performed separately, outside of the model, based on thresholds: if it isn't confident enough about any single species, it seems to fall back to just a genus, family, etc. They just have a CSV file that defines the taxonomic ranks of each species.
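The fallback behavior could be sketched roughly like this. To be clear, this is a toy illustration, not their code: the scores, the three taxonomy rows, and the 0.8 threshold are all invented, and the real CSV covers ~25,000 species.

```python
# Toy taxonomy table standing in for their CSV: species -> (genus, family).
TAXONOMY = {
    "Butorides virescens": ("Butorides", "Ardeidae"),
    "Butorides striata":   ("Butorides", "Ardeidae"),
    "Ardea herodias":      ("Ardea",     "Ardeidae"),
}

def best_identification(species_scores, threshold=0.8):
    """Return (rank, name, score): the most specific taxon whose
    aggregated score clears the confidence threshold."""
    top = max(species_scores, key=species_scores.get)
    if species_scores[top] >= threshold:
        return ("species", top, species_scores[top])
    # Not confident in any single species: roll scores up to genus,
    # then family, until some taxon is confident enough.
    for rank, idx in (("genus", 0), ("family", 1)):
        agg = {}
        for sp, score in species_scores.items():
            key = TAXONOMY[sp][idx]
            agg[key] = agg.get(key, 0.0) + score
        best = max(agg, key=agg.get)
        if agg[best] >= threshold:
            return (rank, best, agg[best])
    return (None, None, 0.0)

scores = {"Butorides virescens": 0.55, "Butorides striata": 0.35,
          "Ardea herodias": 0.05}
print(best_identification(scores))  # no single species is confident; genus rolls up
```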

I use it to automatically tag pictures that I take. I took up bird photography a few years ago and it's become a very serious hobby. My Python script (which wraps their TF model) extracts JPG thumbnails from my RAW photos, automatically crops them based on EXIF data (the focus point and the focus distance), and feeds them into the model. This cropping was critical: I can't just throw the model a downsampled 45-megapixel image straight from the camera, because the subject is usually too small in the frame.

I store the results in a SQLite database, so now I can quickly pull up all photos of a given species, and even sort them by other EXIF values like focus distance. I pipe the results of arbitrary SQLite queries into my own custom RAW photo viewer and can quickly browse the photos. (e.g. "Show me all Green Heron photos sorted by focus distance.") The species identification results aren't perfect, but they are very good. And I store the score in SQLite too, so I can know how confident the model was.
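A toy sketch of the SQLite side of such a pipeline. The schema, column names, and rows are my guesses for illustration, not the commenter's actual script.

```python
import sqlite3

# Hypothetical tagging database: one row per photo with the model's
# top species, its confidence score, and an EXIF value of interest.
con = sqlite3.connect(":memory:")
con.execute("""CREATE TABLE photos (
    path             TEXT PRIMARY KEY,
    species          TEXT,
    score            REAL,
    focus_distance_m REAL)""")
rows = [
    ("2020/img_0412.cr3", "Green Heron",       0.97,  6.5),
    ("2021/img_8831.cr3", "Green Heron",       0.88, 14.0),
    ("2020/img_0099.cr3", "Blackpoll Warbler", 0.91,  4.2),
]
con.executemany("INSERT INTO photos VALUES (?, ?, ?, ?)", rows)

# "Show me all Green Heron photos sorted by focus distance."
for path, dist in con.execute(
        "SELECT path, focus_distance_m FROM photos "
        "WHERE species = 'Green Heron' ORDER BY focus_distance_m"):
    print(path, dist)
```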

One cool thing was that it revealed that I had photographed a Blackpoll Warbler in 2020 when I was a new and budding birder. I didn't think I had ever seen one. But I saw it listed in the program results, and was able to confirm by revisiting the photo.

I don't know if they've changed anything recently. Judging by some of their code on GitHub, it looked like they were also working on considering location when determining species, but the model I found doesn't seem to do that.

I can't tell you anything about how the model was actually trained, but this information may still be useful in understanding how the app operates.

Of course, I haven't published any of this code because the model isn't my own work.



I don't use Seek, but the iNaturalist website filters computer vision matches using a "Seen Nearby" feature:

> The “Seen Nearby” label on the computer vision suggestions indicates that there is a Research Grade observation, or an observation that would be research grade if it wasn't captive, of that taxon that is:

> - within nine 1-degree grid cells around the observation's coordinates, and

> - observed around that time of year (in a three calendar month range, in any year).
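A rough sketch of that quoted rule, purely for illustration: the function and argument names are mine, and it ignores edge cases like longitude wraparound at the antimeridian.

```python
from math import floor

def seen_nearby(obs_lat, obs_lng, obs_month, sightings):
    """A taxon counts as 'Seen Nearby' if some qualifying sighting
    (lat, lng, month) falls in the 3x3 block of 1-degree grid cells
    around the observation and within a three-calendar-month window
    (previous, same, or next month, in any year)."""
    cell = (floor(obs_lat), floor(obs_lng))
    months = {(obs_month - 2) % 12 + 1, obs_month, obs_month % 12 + 1}
    for lat, lng, month in sightings:
        near = (abs(floor(lat) - cell[0]) <= 1 and
                abs(floor(lng) - cell[1]) <= 1)
        if near and month in months:
            return True
    return False

# A May sighting at (40.5, -105.2) with a prior June record one cell away:
print(seen_nearby(40.5, -105.2, 5, [(41.9, -104.1, 6)]))  # True
```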

https://www.inaturalist.org/pages/help#computer-vision

As for how the model was trained, it's fairly well documented on the blog, including the different platforms used as well as changes in training techniques. Previously the model was updated twice per year, since it required several months to train. For the past year they've been using a transfer learning approach, so the model is retrained on the images and then updated roughly once a month to reflect changes in taxa. The v2.0 model was trained on 60,000 taxa and 30 million photos. There are far more taxa on iNaturalist, but a species is only included in the model once it reaches a threshold of ~100 observations.

https://www.inaturalist.org/blog/83370-a-new-computer-vision...

https://www.inaturalist.org/blog/75633-a-new-computer-vision...


> It looked like they were also working on considering location when determining species, but the model I found doesn't seem to do that.

I do this with fish for very different work, and there's a good chance the model for your species doesn't exist yet. For fish we have 6,000 distribution models based on sightings (aquamaps.org), but there are at least 20,000 species. These models have levels of certainty ranging from 'an expert had a look and fixed it slightly by hand' to 'automatically generated from just three sightings' to 'no model, as we don't have good sightings data'. So it may be that the model uses location, just not for the species you have?


Well, there's no way to feed lat/lng or similar into this particular tensorflow model.


That is actually surprising. Surely they use location at some point in the ID process. It's possible they have a secondary location-based model to do sorting/ranking after the initial detection?

Merlin's bird detection system is almost non-functional without location.


Yeah, that's true! You can't really do that; these models are just polygons. All we do is double-check that the other methods' predictions overlap with these polygons, as a second step.
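For illustration, that second step could look something like this: a generic ray-casting point-in-polygon test used to filter a vision model's candidate species. The rectangular "ranges" here are invented; real distribution polygons (e.g. from aquamaps) are far more detailed, and this is not anyone's actual pipeline.

```python
def point_in_polygon(x, y, poly):
    """Standard ray-casting test: is (x, y) inside the polygon given
    as a list of (x, y) vertices?"""
    inside = False
    j = len(poly) - 1
    for i in range(len(poly)):
        xi, yi = poly[i]
        xj, yj = poly[j]
        if (yi > y) != (yj > y) and \
           x < (xj - xi) * (y - yi) / (yj - yi) + xi:
            inside = not inside
        j = i
    return inside

# Invented rectangular range polygons in (lng, lat) vertex order.
RANGES = {
    "Green Heron":    [(-100, 25), (-70, 25), (-70, 45), (-100, 45)],
    "Striated Heron": [(-80, -35), (-35, -35), (-35, 10), (-80, 10)],
}

def range_check(predictions, lng, lat):
    """Keep only predicted species whose range polygon (if we have
    one) contains the sighting location."""
    return [sp for sp in predictions
            if sp not in RANGES or point_in_polygon(lng, lat, RANGES[sp])]

# A sighting in the southeastern US keeps only the plausible candidate:
print(range_check(["Green Heron", "Striated Heron"], -84.4, 33.7))
```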


Sounds like a real-life Pokémon Snap. You should add a digital professor who gives you points based on how good your recent photos are. (Size of subject in photo, focus, in-frame, and if the animal is doing something interesting.)


That doesn't work well even when it's the only game mechanic and everything else is designed around making it work.

https://www.awkwardzombie.com/comic/angle-tangle

It's not likely to work well on actual photos of actual wildlife.


That just adds to the fun.


That sounds like an awesome setup! Would you be willing to share your script with another bird photography enthusiast?


Comments like these are why I lurk on HN. Genius solution.

As a birder I have thousands of bird photos and would pay for this service.


This post fits the username perfectly.


You wrote your own custom RAW photo viewer? Like, including parsing? That's incredibly cool, do you share it anywhere?

Also why not just darktable / digikam?


I would pay for this


Thanks for sharing - I was curious too but didn’t delve in myself.


If you're willing, it's totally fine to share your work with the model itself removed.



