Because a weeding robot needs to know where the weeds are to disrupt them.
The training images were sorted into three classes:

grass  - contains only grass, no weeds
weeds  - contains weeds which could be sprayed in the centre
reject - contains weeds, but the picture doesn't contain their centres, and hence may confuse the NNC

The grass and weeds photos were placed in so-named directories, with a small number taken aside to be test images. The NNC was trained on the { weeds, grass }
set, and the results from the test images are shown below.

Classifying 6552 images manually wasn't nearly as gruesome as expected. Anyone who has done any serious spot-spraying or weeding has already programmed their wet-ware neurons to be triggered by the sight of weeds. (Spend an entire afternoon digging out weeds and you will see them in your dreams, I guarantee. This is your brain hard-wiring your lizard brain as a weed trigger.) A very quick glance at an image is generally enough to allow a '1' (kill!) or '0' (ignore!) to be typed in by your pre-programmed lizard brain. (I even managed to binge-watch Netflix's 'The Last Kingdom' at the same time, though I did kind of lose track of why Æthelred's evil adviser suddenly changes sides and alerts Æthelflæd that her hubby is trying to kill her, but that was a small price to pay. For me, that is, not for Æthelred.)
.. anyway, the point is that human classification is not actually that onerous. They say the average human uses only 10% of their brain, and in this case that's certainly not an overstatement. Human classification is quite feasible even with medium-sized data sets. Just find something else to do at the same time.
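The workflow above — a human typing '1' or '0' per image, photos landing in class-named directories, and a small random sample held out for testing — can be sketched in a few lines of Python. This is only an illustration of the process described, not the author's actual tooling; the function names (`label_images`, `hold_out_test_set`) and the `fraction` parameter are my own invention.

```python
import random
import shutil
from pathlib import Path

def label_images(image_paths, get_label, out_dir):
    """Sort images into class-named directories.

    get_label(path) returns '1' (weeds) or '0' (grass) -- in practice,
    a human tapping a key while glancing at each photo.
    """
    out_dir = Path(out_dir)
    for cls in ("weeds", "grass"):
        (out_dir / cls).mkdir(parents=True, exist_ok=True)
    for path in image_paths:
        cls = "weeds" if get_label(path) == "1" else "grass"
        shutil.copy(path, out_dir / cls / Path(path).name)

def hold_out_test_set(class_dir, test_dir, fraction=0.05, seed=42):
    """Move a small random sample of one class directory aside as test images."""
    class_dir, test_dir = Path(class_dir), Path(test_dir)
    test_dir.mkdir(parents=True, exist_ok=True)
    images = sorted(class_dir.iterdir())
    random.Random(seed).shuffle(images)
    n_test = max(1, int(len(images) * fraction))
    for img in images[:n_test]:
        shutil.move(str(img), str(test_dir / img.name))
    return n_test
```

From here, most image-classifier frameworks can train directly off the directory-per-class layout, which is why the sorting step is worth doing carefully.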
These are images that were pulled out of the original training set before training was done, then classified by the NNC.
It works. Kind of. Not as well as the Kangaroo recogniser, but pretty well.
A few points: