Abstract: Enabling video-haptic radio resource slicing in the Tactile Internet requires a sophisticated strategy that meets the distinct requirements of video and haptic data, ensures their synchronized transmission, and addresses the stringent latency demands of haptic feedback. This paper introduces a Deep Reinforcement Learning-based radio resource slicing framework that addresses the challenges of video-haptic teleoperation by dynamically balancing radio resources between the video and haptic modalities. The proposed framework employs a refined reward function that accounts for latency, packet loss, data rate, and the synchronization requirements of both modalities to optimize resource allocation. By catering to the specific service requirements of video-haptic teleoperation, the proposed framework achieves up to a 25% increase in user satisfaction over existing methods while maintaining effective resource slicing at execution intervals of up to 50 ms.
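The abstract names the reward terms (latency, packet loss, data rate, and inter-modality synchronization) but not their exact form. A minimal Python sketch of how such a multi-term reward might be composed is given below; all weights, latency budgets, and rate targets are illustrative assumptions, not values from the paper.

```python
# Hypothetical sketch of a multi-term slicing reward. The paper's actual
# reward function, weights, and thresholds are not given in the abstract,
# so every constant below is an illustrative placeholder.

from dataclasses import dataclass


@dataclass
class SliceKPIs:
    latency_ms: float   # observed one-way latency on the slice
    loss_rate: float    # packet loss ratio in [0, 1]
    rate_mbps: float    # achieved data rate


def modality_score(kpi: SliceKPIs, lat_budget_ms: float,
                   rate_target_mbps: float) -> float:
    """Score one modality in [0, 1]; 1.0 when all its targets are met."""
    r_lat = min(1.0, lat_budget_ms / max(kpi.latency_ms, 1e-6))
    r_loss = 1.0 - kpi.loss_rate
    r_rate = min(1.0, kpi.rate_mbps / rate_target_mbps)
    return (r_lat + r_loss + r_rate) / 3.0


def slicing_reward(video: SliceKPIs, haptic: SliceKPIs,
                   sync_budget_ms: float = 30.0) -> float:
    """Combine per-modality scores with an inter-modality sync penalty."""
    # Assumed budgets: haptic feedback is far more latency-critical than
    # video, while video dominates the data-rate requirement.
    r_video = modality_score(video, lat_budget_ms=50.0, rate_target_mbps=20.0)
    r_haptic = modality_score(haptic, lat_budget_ms=1.0, rate_target_mbps=0.5)
    # Synchronization term: penalize video-haptic skew beyond the budget.
    skew_ms = abs(video.latency_ms - haptic.latency_ms)
    r_sync = max(0.0, 1.0 - skew_ms / sync_budget_ms)
    return 0.4 * r_video + 0.4 * r_haptic + 0.2 * r_sync


if __name__ == "__main__":
    video = SliceKPIs(latency_ms=40.0, loss_rate=0.01, rate_mbps=18.0)
    haptic = SliceKPIs(latency_ms=0.8, loss_rate=0.001, rate_mbps=0.6)
    print(f"reward = {slicing_reward(video, haptic):.3f}")
```

In a DRL setting, a scalar reward of this shape would be returned at each slicing decision step, so that the agent is driven jointly toward the per-modality service requirements and the synchronization constraint rather than toward any single metric.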