This study examines the problem of navigation planning for a hopping robot that must simultaneously perform goal-directed and environment-exploration tasks. We consider a scenario in which the robot has mandatory goal-directed tasks defined using Linear Temporal Logic (LTL) specifications as well as optional exploration tasks represented by a reward function. Additionally, uncertainty in the robot dynamics results in motion perturbations. We first propose an abstraction of the 3D hopping robot dynamics that enables high-level planning, together with a neural-network-based optimization for low-level control. We then introduce a Multi-task Product IMDP (MT-PIMDP) model of the system and tasks. We propose a unified control policy synthesis algorithm that enables both task-directed goal-reaching behaviors and task-agnostic exploration to learn the perturbations and the reward function. We provide a formal proof of the trade-off induced by prioritizing either the LTL (task-directed) or reinforcement learning (exploration) actions. We demonstrate our methods with simulation case studies in a 2D navigation environment.
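To make the product construction concrete, the following is a minimal, illustrative sketch of how a grid-world model with interval transition probabilities (capturing the motion perturbations) can be composed with an automaton for a simple LTL reachability task. All names here (the grid, labels, the two-state automaton) are hypothetical examples, and the sketch omits the exploration reward and the multi-task structure of the paper's actual MT-PIMDP; it is not the authors' implementation.

```python
# Hypothetical sketch: product of a grid-world interval-MDP abstraction with
# a two-state automaton for the LTL task "eventually goal". The exploration
# reward and learning components of the MT-PIMDP are omitted for brevity.
from itertools import product

# Grid states and actions; state (2, 2) is labeled with the proposition "goal".
STATES = [(x, y) for x in range(3) for y in range(3)]
ACTIONS = ["up", "down", "left", "right"]

def label(state):
    return "goal" if state == (2, 2) else ""

# Deterministic automaton for "eventually goal": q0 --goal--> q1 (accepting).
DFA_STATES = ["q0", "q1"]
DFA_ACCEPT = {"q1"}

def dfa_step(q, ap):
    return "q1" if (q == "q1" or ap == "goal") else "q0"

def interval_transition(state, action):
    """Interval transition kernel: the intended successor receives probability
    in [0.7, 0.9]; the residual mass (staying in place) models the unknown
    motion perturbations. Bounds are feasible: lowers sum <= 1 <= uppers sum."""
    dx, dy = {"up": (0, 1), "down": (0, -1),
              "left": (-1, 0), "right": (1, 0)}[action]
    nx = min(max(state[0] + dx, 0), 2)
    ny = min(max(state[1] + dy, 0), 2)
    intended = (nx, ny)
    if intended == state:            # pushing against a wall: stay put
        return {state: (1.0, 1.0)}
    return {intended: (0.7, 0.9), state: (0.1, 0.3)}

def product_transitions():
    """Product states are (grid state, automaton state); the automaton advances
    on the label of the successor grid state, so reaching an accepting product
    state certifies satisfaction of the LTL task."""
    trans = {}
    for (s, q), a in product(product(STATES, DFA_STATES), ACTIONS):
        succ = {}
        for s2, (lo, hi) in interval_transition(s, a).items():
            succ[(s2, dfa_step(q, label(s2)))] = (lo, hi)
        trans[((s, q), a)] = succ
    return trans

if __name__ == "__main__":
    T = product_transitions()
    accepting = [ps for ps in product(STATES, DFA_STATES) if ps[1] in DFA_ACCEPT]
    print(f"{len(T)} product state-action pairs, {len(accepting)} accepting states")
```

A policy synthesized on such a product can then trade off actions that drive the automaton toward acceptance (the LTL task) against actions that tighten the transition intervals or gather reward (exploration), which is the trade-off the paper analyzes formally.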