Abstract: Deep learning models are transforming agricultural applications by enabling automated phenotyping, monitoring, and yield estimation. However, their effectiveness heavily depends on large amounts of annotated training data, which can be labor- and time-intensive to produce. Recent advances in open-set object detection, particularly with models like Grounding-DINO, offer a potential solution for detecting regions of interest from text-prompt input. Initial zero-shot experiments revealed challenges in crafting effective text prompts, especially for complex objects like individual leaves and visually similar classes. To address these limitations, we propose an efficient few-shot adaptation method that simplifies the Grounding-DINO architecture by removing the text encoder module (BERT) and introducing a randomly initialized trainable text embedding. This method achieves superior performance across multiple agricultural datasets, including plant-weed detection, plant counting, insect identification, fruit counting, and remote sensing tasks. Specifically, it demonstrates up to $\sim24\%$ higher mAP than fully fine-tuned YOLO models on agricultural datasets and outperforms previous state-of-the-art methods by $\sim10\%$ in remote sensing, under few-shot learning conditions. Our method offers a promising solution for automating annotation and accelerating the development of specialized agricultural AI solutions.
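The core architectural change described above can be illustrated with a minimal sketch: the frozen BERT text branch is replaced by a small bank of randomly initialized, trainable class embeddings that play the role of text tokens during few-shot adaptation. This is not the authors' code; the class name `LearnableClassEmbeddings`, the embedding dimension, and the training setup in the comments are assumptions for illustration only.

```python
# Illustrative sketch (assumed names, not the paper's released implementation):
# replace the text encoder with one trainable embedding vector per target class.
import torch
import torch.nn as nn


class LearnableClassEmbeddings(nn.Module):
    """Stands in for the BERT text branch: one trainable vector per class."""

    def __init__(self, num_classes: int, embed_dim: int = 256):
        super().__init__()
        # Randomly initialized embeddings, updated during few-shot fine-tuning.
        self.class_tokens = nn.Parameter(torch.randn(num_classes, embed_dim))

    def forward(self, batch_size: int) -> torch.Tensor:
        # Shape (batch, num_classes, embed_dim), mimicking a text-token layout
        # so the cross-modality decoder can consume it unchanged.
        return self.class_tokens.unsqueeze(0).expand(batch_size, -1, -1)


# During adaptation, only these embeddings (and possibly light detection heads)
# would be trained, while the image backbone and decoder stay frozen.
embeddings = LearnableClassEmbeddings(num_classes=2)  # e.g., plant vs. weed
text_like_tokens = embeddings(batch_size=4)
print(text_like_tokens.shape)  # torch.Size([4, 2, 256])
```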
Abstract: In this paper, we present techniques to measure crop heights using a 3D LiDAR mounted on an Unmanned Aerial Vehicle (UAV). Knowing the height of plants is crucial for monitoring their overall health and growth cycles, especially for high-throughput plant phenotyping. We present a methodology for extracting plant heights from 3D LiDAR point clouds, specifically focusing on row-crop environments. The key steps in our algorithm are clustering of LiDAR points to semi-automatically detect plots, local ground plane estimation, and height estimation. The plot detection uses a k-means clustering algorithm followed by a voting scheme to find the bounding boxes of individual plots. We conducted a series of experiments in controlled and natural settings. Our algorithm was able to estimate the plant heights in a field with 112 plots within $\pm5.36\%$. This is the first such dataset for 3D LiDAR from an airborne robot over a wheat field. The developed code can be found on the GitHub repository located at https://github.com/hsd1121/PointCloudProcessing.
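A minimal sketch of the pipeline summarized above is given below: cluster points into plots with k-means, fit a local ground plane per plot, and report the height of the canopy above that plane. This is not the released code at the GitHub link; the 10% lowest-point ground proxy, the least-squares plane fit, and the function name `estimate_plot_heights` are assumptions standing in for the paper's exact ground estimation and voting scheme.

```python
# Hedged sketch of the plot-detection and height-estimation steps
# (assumed heuristics, not the authors' PointCloudProcessing code).
import numpy as np
from sklearn.cluster import KMeans


def estimate_plot_heights(points: np.ndarray, num_plots: int) -> list:
    """points: (N, 3) array of x, y, z LiDAR returns; returns one height per plot."""
    # Cluster on the horizontal coordinates to separate individual plots.
    labels = KMeans(n_clusters=num_plots, n_init=10).fit_predict(points[:, :2])
    heights = []
    for k in range(num_plots):
        plot = points[labels == k]
        # Use the lowest 10% of points in the plot as a proxy for local ground.
        ground = plot[plot[:, 2] <= np.percentile(plot[:, 2], 10)]
        # Least-squares plane z = a*x + b*y + c through the ground points.
        A = np.c_[ground[:, 0], ground[:, 1], np.ones(len(ground))]
        a, b, c = np.linalg.lstsq(A, ground[:, 2], rcond=None)[0]
        # Plant height = largest residual of plot points above the local plane.
        residuals = plot[:, 2] - (a * plot[:, 0] + b * plot[:, 1] + c)
        heights.append(float(residuals.max()))
    return heights
```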