Abstract: We present an autonomous aerial system for safe and efficient through-the-canopy fruit counting. Aerial robot applications in large-scale orchards face significant challenges due to the complexity of fine-tuning flight paths based on orchard layouts, canopy density, and plant variability. Through-the-canopy navigation is crucial for minimizing occlusion by leaves and branches, but it is more challenging than traditional over-the-canopy flight due to the complex, dense environment. Our system addresses these challenges by integrating: i) a high-fidelity simulation framework for optimizing flight trajectories, ii) a low-cost autonomy stack for canopy-level navigation and data collection, and iii) a robust workflow for fruit detection and counting using RGB images. We validate our approach through fruit counting with canopy-level aerial images and by demonstrating the autonomous navigation capabilities of our experimental vehicle.
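The abstract does not specify the detection model, so the following is a minimal illustrative sketch of RGB-based fruit counting using simple HSV color thresholding and connected-component analysis; the hue band, image path, and minimum blob area are assumptions for illustration only.

```python
# Hypothetical sketch: count fruit candidates in one RGB canopy image via
# HSV color thresholding. The paper's actual detection pipeline is not
# specified in the abstract; thresholds and blob area are assumptions.
import cv2
import numpy as np

def count_fruit(image_path: str, min_area: int = 50) -> int:
    bgr = cv2.imread(image_path)
    if bgr is None:
        raise FileNotFoundError(image_path)
    hsv = cv2.cvtColor(bgr, cv2.COLOR_BGR2HSV)
    # Assumed hue band for ripe (e.g., orange/red) fruit; tune per crop.
    mask = cv2.inRange(hsv, (0, 120, 80), (25, 255, 255))
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, np.ones((5, 5), np.uint8))
    n_labels, _, stats, _ = cv2.connectedComponentsWithStats(mask)
    # Label 0 is the background; keep blobs above the area threshold.
    return sum(1 for i in range(1, n_labels)
               if stats[i, cv2.CC_STAT_AREA] >= min_area)

if __name__ == "__main__":
    print(count_fruit("canopy_image.jpg"))  # hypothetical input image
```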
Abstract: We propose DroneLight, a novel human-drone interaction paradigm in which a user directly interacts with a drone to light-paint predefined patterns or letters through hand gestures. The user wears a glove equipped with an IMU sensor to draw letters or patterns in midair. The developed ML algorithm detects the drawn pattern, and the drone light-paints each pattern in midair in real time. The proposed classification model correctly predicts all of the input gestures. The DroneLight system can be applied in drone shows, advertisements, distant communication through text or patterns, and rescue operations. To our knowledge, it would be the world's first human-centric robotic system that people can use to send messages based on light-painting over distant locations (drone-based instant messaging). Another unique application of the system would be a vision-driven rescue system that reads light-painting by a person in distress and triggers a rescue alarm.
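The abstract does not name the ML model, so the sketch below uses simple statistical features over IMU windows and a nearest-centroid classifier as a stand-in to show the shape of the gesture-recognition step; the window size, feature set, and synthetic data are assumptions.

```python
# Illustrative sketch of gesture classification from glove IMU data.
# Stand-in model only: the paper's actual algorithm is not specified.
import numpy as np

def features(window: np.ndarray) -> np.ndarray:
    """window: (T, 6) array of accel (3) + gyro (3) samples."""
    return np.concatenate([window.mean(axis=0), window.std(axis=0)])

class NearestCentroid:
    def fit(self, X, y):
        self.labels_ = sorted(set(y))
        self.centroids_ = np.array(
            [np.mean([x for x, t in zip(X, y) if t == lab], axis=0)
             for lab in self.labels_])
        return self

    def predict(self, x):
        d = np.linalg.norm(self.centroids_ - x, axis=1)
        return self.labels_[int(np.argmin(d))]

# Toy usage with synthetic IMU windows for two gestures ("S", "O").
rng = np.random.default_rng(0)
X = [features(rng.normal(m, 1.0, size=(100, 6)))
     for m in (0.0, 2.0) for _ in range(10)]
y = ["S"] * 10 + ["O"] * 10
clf = NearestCentroid().fit(X, y)
print(clf.predict(features(rng.normal(2.0, 1.0, size=(100, 6)))))  # -> "O"
```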
Abstract: For a human operator, it is often easier and faster to catch a small quadrotor right in midair instead of landing it on a surface. However, interaction strategies for such cases have not yet been properly considered, especially when more than one drone has to be landed at the same time. In this paper, we propose a novel interaction strategy for landing multiple robots on the human hands using vibrotactile feedback. We developed a wearable tactile display that is activated by the intensity of light emitted from an LED ring on the bottom of the quadcopter. We conducted experiments in which participants were asked to adjust the position of the palm to land one or two vertically descending drones at different landing speeds, using only visual feedback, only tactile feedback, or combined visual-tactile feedback. We performed statistical analysis of the drone landing positions and of the landing pad and human head trajectories. A two-way ANOVA showed a statistically significant difference between the feedback conditions. The experimental analysis showed that as the number of drones increases, tactile feedback plays a more important role in accurate hand positioning and operator convenience. The most precise landing of one and two drones was achieved with the combination of tactile and visual feedback.
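A minimal sketch of the light-to-vibration coupling described above: a photosensor reading from the landing pad is mapped to a vibromotor intensity (duty cycle), so the motor grows stronger as the drone's LED ring approaches. The 10-bit sensor range and thresholds are assumptions, not the paper's calibrated values.

```python
# Hedged sketch, assuming a 10-bit photosensor on the landing pad.
def light_to_duty(light_raw: int, lo: int = 200, hi: int = 900) -> float:
    """Map a raw photosensor reading to a vibromotor duty cycle in [0, 1].

    Below `lo` (ambient light) the motor stays off; at `hi` (the drone's
    LED ring directly overhead) it runs at full intensity.
    """
    if light_raw <= lo:
        return 0.0
    return min(1.0, (light_raw - lo) / (hi - lo))

assert light_to_duty(150) == 0.0        # ambient light: motor off
assert 0.4 < light_to_duty(650) < 0.7   # drone approaching: partial intensity
assert light_to_duty(1000) == 1.0       # LED ring overhead: full intensity
```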
Abstract: We propose SwarmCloak, a novel system for landing a fleet of four flying robots on the human arms using light-sensitive landing pads with vibrotactile feedback. We developed two types of wearable tactile displays with vibromotors that are activated by the light emitted from the LED array at the bottom of the quadcopters. In a user study, participants were asked to adjust the position of their arms to land up to two drones, having only visual feedback, only tactile feedback, or combined visual-tactile feedback. The experiment revealed that as the number of drones increases, tactile feedback plays a more important role in accurate landing and operator convenience. An important finding is that the best landing performance is achieved with the combination of tactile and visual feedback. The proposed technology could have a strong impact on human-swarm interaction, providing a new level of intuitiveness and engagement in swarm deployment right from the skin surface.
Abstract: We propose SlingDrone, a novel Mixed Reality interaction paradigm that uses a micro-quadrotor as both a pointing controller and an interactive robot with slingshot-style motion. The drone attempts to hover at a given position while the human pulls it in a desired direction using a hand grip and a leash. Based on this displacement, a virtual trajectory is defined. To allow for intuitive and simple control, we use virtual reality (VR) technology to trace the path of the drone based on the displacement input. The user receives force feedback propagated through the leash. Force feedback from SlingDrone, coupled with the visualized trajectory in VR, creates an intuitive and user-friendly pointing device. When the drone is released, it follows the trajectory that was shown in VR. An onboard payload (e.g., a magnetic gripper) can support various scenarios for real interaction with the surroundings, e.g., manipulation or sensing. Unlike the HTC Vive controller, SlingDrone does not require handheld devices, and thus it can be used as a standalone pointing technology in VR.
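A minimal sketch of the slingshot mapping the abstract suggests: the drone's launch trajectory is derived from how far the user pulls it away from its hover point, flying in the opposite direction with magnitude scaled by the displacement. The gain, waypoint sampling, and values are illustrative assumptions.

```python
# Hedged sketch of displacement-to-trajectory mapping for a slingshot launch.
import numpy as np

def slingshot_trajectory(hover: np.ndarray, pulled: np.ndarray,
                         gain: float = 3.0, steps: int = 20) -> np.ndarray:
    """Straight-line waypoints opposite the pull, scaled by displacement."""
    displacement = pulled - hover          # where the user dragged the drone
    target = hover - gain * displacement   # slingshot: fly the other way
    # Waypoints from the release point (hover) to the target.
    return np.linspace(hover, target, steps)

hover = np.array([0.0, 0.0, 1.5])
pulled = np.array([0.3, -0.2, 1.5])        # user pulled ~0.36 m sideways
waypoints = slingshot_trajectory(hover, pulled)
print(waypoints[-1])                       # -> [-0.9  0.6  1.5]
```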
Abstract: Aerial manipulation with drones is being tested in areas such as industrial maintenance, supporting rescuers in emergencies, and e-commerce. Most of these applications require teleoperation. The operator receives visual feedback from a camera installed on the robot arm or the drone. As aerial manipulation requires delicate and precise motion of the robot arm, camera data delay, a narrow field of view, and images blurred by drone dynamics can cause the UAV to crash. This paper focuses on the development of a novel teleoperation system for aerial manipulation using Virtual Reality (VR). The controlled system consists of a UAV with a 4-DoF robotic arm and embedded sensors. A VR application presents a digital twin of the drone and the remote environment to the user through a head-mounted display (HMD). The operator controls the position of the robotic arm and gripper with VR trackers worn on the arm and a tracking glove with vibrotactile feedback. Control data is translated directly from VR to the real robot in real time. The experimental results showed stable and robust teleoperation mediated by the VR scene. The proposed system can considerably improve the quality of aerial manipulation.
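A sketch of the VR-to-robot control path described above: the operator's tracker pose is repackaged into a setpoint message and streamed to the vehicle at a fixed rate. The message format, rate, UDP transport, and endpoint address are assumptions for illustration; the abstract does not specify the actual protocol.

```python
# Hedged sketch: stream arm/gripper setpoints from VR tracking to the UAV.
import json
import socket
import time

ROBOT_ADDR = ("192.168.1.50", 5005)   # hypothetical UAV companion computer

def stream_setpoints(get_tracker_pose, rate_hz: float = 50.0) -> None:
    """Send end-effector setpoints derived from the VR tracker pose."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    period = 1.0 / rate_hz
    while True:
        x, y, z, grip = get_tracker_pose()   # arm pose + glove grip state
        msg = {"ee_pos": [x, y, z], "gripper_closed": grip, "t": time.time()}
        sock.sendto(json.dumps(msg).encode(), ROBOT_ADDR)
        time.sleep(period)

# Example with a stub pose source (replace with real VR tracker readout):
# stream_setpoints(lambda: (0.2, 0.0, 0.4, False))
```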
Abstract: To achieve smooth and safe guiding of a drone formation by a human operator, we propose a novel interaction strategy for human-swarm communication that combines impedance control and vibrotactile feedback. The presented approach takes the human hand velocity into account and changes the formation shape and dynamics accordingly using impedance interlinks simulated between quadrotors, which helps to achieve natural swarm behavior. Several tactile patterns representing static and dynamic parameters of the swarm are proposed. The user feels the state of the swarm at the fingertips and receives valuable information that improves controllability of the complex formation. A user study identified the tactile patterns with the highest recognition rates. A flight experiment demonstrated that the formation can be accurately navigated through a cluttered environment using only tactile feedback. Subjects stated that the tactile sensation helps to guide the drone formation through obstacles and makes human-swarm communication more interactive. The proposed technology can potentially have a strong impact on human-swarm interaction, providing a higher level of awareness during swarm navigation.
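A sketch of one impedance interlink between neighboring quadrotors, in the spirit of the abstract: each link is simulated as a virtual mass-spring-damper, so a follower deforms smoothly toward its neighbor rather than snapping to a rigid offset. The gains and integration step below are illustrative assumptions, not the paper's tuned values.

```python
# Hedged sketch: one virtual mass-spring-damper link per axis,
# m*a + d*v + k*(x - x_neighbor) = 0, integrated with explicit Euler.
import numpy as np

def impedance_step(x, v, x_neighbor, dt=0.02, m=1.0, d=4.0, k=10.0):
    """One Euler step of the impedance link dynamics."""
    a = (-d * v - k * (x - x_neighbor)) / m
    v = v + a * dt
    x = x + v * dt
    return x, v

# A follower drone relaxing toward a leader held 1 m away on the x-axis.
x, v = np.zeros(3), np.zeros(3)
leader = np.array([1.0, 0.0, 0.0])
for _ in range(500):          # 10 s of simulated time at 50 Hz
    x, v = impedance_step(x, v, leader)
print(np.round(x, 3))         # -> approximately [1. 0. 0.]
```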
Abstract: We report on DronePick, a teleoperation system that provides remote object picking and delivery by a human-controlled quadcopter. The main novelty of the proposed system is that the human user continuously receives visual and haptic feedback for accurate teleoperation. DronePick consists of a quadcopter equipped with a magnetic grabber, a tactile glove with a finger motion tracking sensor, a hand tracking system, and a Virtual Reality (VR) application. The human operator teleoperates the quadcopter by changing the position of the hand. Vibrotactile patterns representing the location of the remote object relative to the quadcopter are delivered to the glove, helping the operator determine when the quadcopter is right above the object. When the "pick" command is sent by clasping the hand in the glove, the quadcopter decreases its altitude and the magnetic grabber attaches to the target object. The whole scenario is simulated in parallel in VR. The airflow from the quadcopter and the relative positions of VR objects help the operator determine the exact position of the object to be picked and delivered. The experiments showed that the vibrotactile patterns were recognized by users at a high rate: 99% average recognition rate with an average recognition time of 2.36 s. A real-life implementation of DronePick, featuring picking an object and delivering it to a human, was developed and tested.
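An illustrative sketch of mapping the remote object's position relative to the quadcopter onto glove vibromotors, in the spirit of the patterns described above. The four-motor layout, bearing quadrants, and on-target radius are assumptions; the paper's actual pattern set is not given in the abstract.

```python
# Hedged sketch: pick a vibrotactile pattern from the object's
# horizontal offset (dx, dy) relative to the quadcopter, in meters.
import math

def select_pattern(dx: float, dy: float, on_target_radius: float = 0.1) -> str:
    """Return which glove vibromotor (or pattern) to drive."""
    if math.hypot(dx, dy) < on_target_radius:
        return "all_pulse"        # drone is right above the object: pick now
    angle = math.degrees(math.atan2(dy, dx)) % 360
    # Four directional motors on the glove, one per quadrant of bearing.
    for name, lo, hi in [("front", 315, 360), ("front", 0, 45),
                         ("left", 45, 135), ("back", 135, 225),
                         ("right", 225, 315)]:
        if lo <= angle < hi:
            return name
    return "front"  # unreachable after % 360; keeps the function total

assert select_pattern(0.02, 0.03) == "all_pulse"
assert select_pattern(1.0, 0.0) == "front"
assert select_pattern(0.0, 1.0) == "left"
assert select_pattern(-1.0, 0.0) == "back"
```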