Abstract: We present a vision-based navigation system for under-canopy agricultural robots using semantic keypoints. Autonomous under-canopy navigation is challenging due to the tight spacing between crop rows ($\sim 0.75$ m), degradation in RTK-GPS accuracy caused by multipath error, and noise in LiDAR measurements from the excessive clutter. Our system, CropFollow++, introduces a modular and interpretable perception architecture with a learned semantic keypoint representation. We deployed CropFollow++ on multiple under-canopy cover-crop planting robots at large scale (25 km in total) under varied field conditions, and we discuss the key lessons learned from these deployments.
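The abstract does not spell out how the learned semantic keypoints are turned into a navigation command, so the following is only an illustrative sketch under assumed conventions: a hypothetical keypoint set consisting of a row vanishing point and the left/right row bases at the bottom of the image, converted to heading error and lateral offset with a pinhole model and the $\sim 0.75$ m row spacing quoted above. The function name, keypoint definitions, and camera parameters are placeholders, not CropFollow++'s actual interface.

```python
# Illustrative sketch only: CropFollow++'s real keypoint definitions and
# controller are not described in the abstract. Assumes the perception
# network outputs (u, v) pixel locations for a row vanishing point and the
# left/right crop-row bases, and that the camera intrinsics are known.
import numpy as np

def heading_and_offset(vanishing_pt, left_base, right_base,
                       fx=600.0, cx=320.0):
    """Estimate heading error (rad) and lateral offset (m) from keypoints.

    fx, cx are placeholder focal length / principal point values in pixels,
    not calibrated ones.
    """
    # Heading error: horizontal displacement of the vanishing point from
    # the principal point, mapped through the focal length.
    heading = np.arctan2(vanishing_pt[0] - cx, fx)

    # Lateral offset: displacement of the lane center (midpoint of the two
    # row bases) from the image center, scaled by an approximate
    # meters-per-pixel factor derived from the ~0.75 m row spacing.
    lane_center_u = 0.5 * (left_base[0] + right_base[0])
    row_spacing_px = right_base[0] - left_base[0]
    meters_per_px = 0.75 / row_spacing_px
    offset = (lane_center_u - cx) * meters_per_px
    return heading, offset

if __name__ == "__main__":
    h, d = heading_and_offset(vanishing_pt=(355, 180),
                              left_base=(80, 480), right_base=(560, 480))
    print(f"heading error: {h:.3f} rad, lateral offset: {d:.3f} m")
```

A modular pipeline of this shape (keypoints in, heading and offset out, fed to a downstream controller) is what makes the perception stage interpretable: each intermediate quantity can be inspected and debugged independently of the controller.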
Abstract: We report promising results for high-throughput, on-field soybean pod counting with small mobile robots and machine-vision algorithms. Pod count is strongly correlated with soybean yield, but pod counting is extremely labor intensive and has been difficult to automate. Our results establish that an autonomous robot equipped with vision sensors can collect soybean data at maturity, and that machine-vision algorithms can estimate pod counts across a large diversity panel planted in experimental units (EUs, or plots) in a high-throughput, automated manner. The machine-vision-based pod counts are strongly correlated with soybean yield: we report a correlation of 0.67 between automated pod counts and yield in an experiment consisting of 1463 single-row plots maintained by the University of Illinois soybean breeding program during the 2020 growing season. We also report a correlation of 0.88 between automated pod counts and manual pod counts over a smaller set of 16 plots.
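The reported numbers (0.67 against yield over 1463 plots, 0.88 against manual counts over 16 plots) are plot-level Pearson correlations. As a minimal sketch of that evaluation, the snippet below computes the correlation coefficient between automated counts and a reference series; the arrays are synthetic placeholders, not the experiment's data, and the function name is our own.

```python
# Minimal sketch of the evaluation metric: the Pearson correlation between
# plot-level automated pod counts and a reference (grain yield, or manual
# pod counts on the 16-plot subset). Data below are synthetic placeholders.
import numpy as np

def plot_level_correlation(auto_counts, reference):
    """Pearson correlation r between automated counts and a reference."""
    auto_counts = np.asarray(auto_counts, dtype=float)
    reference = np.asarray(reference, dtype=float)
    return np.corrcoef(auto_counts, reference)[0, 1]

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    pods = rng.normal(600, 120, size=100)              # synthetic pod counts
    yield_kg = 0.004 * pods + rng.normal(0, 0.4, 100)  # synthetic plot yield
    print(f"r = {plot_level_correlation(pods, yield_kg):.2f}")
```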