Hierarchical Reinforcement Learning (HRL) tackles complex tasks by decomposing them into subtasks handled by a hierarchy of structured policies. However, HRL agents often struggle with efficient exploration and rapid adaptation to new tasks. To address this, we integrate meta-learning into HRL to enhance the agent's ability to learn and adapt hierarchical policies swiftly. Our approach employs meta-learning for rapid task adaptation based on prior experience, while intrinsic motivation mechanisms encourage efficient exploration by rewarding visits to novel states. Specifically, the agent uses a high-level policy to select among multiple low-level policies operating within custom grid environments. We utilize gradient-based meta-learning with differentiable inner-loop updates, enabling optimization across a curriculum of increasingly difficult tasks. Experimental results demonstrate that our meta-learned hierarchical agent significantly outperforms traditional HRL agents that lack meta-learning and intrinsic motivation, exhibiting accelerated learning, higher cumulative rewards, and improved success rates in complex grid environments. These findings suggest that integrating meta-learning with HRL, alongside curriculum learning and intrinsic motivation, substantially enhances an agent's capability to handle complex tasks.
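To make the described mechanics concrete, the following is a minimal PyTorch sketch of the three ingredients named above: a high-level policy selecting among low-level policies, a count-based novelty bonus, and a differentiable (MAML-style) inner-loop update whose gradients flow back to the meta-parameters. The grid size, number of options, bonus scale, and the REINFORCE-style surrogate loss are illustrative assumptions, not the paper's exact architecture or hyperparameters.

```python
# Sketch only: assumed 5x5 grid (one-hot states), 3 low-level skills, 4 actions.
import torch
import torch.nn as nn
import torch.nn.functional as F
from collections import Counter

N_STATES, N_OPTIONS, N_ACTIONS = 25, 3, 4

class HierarchicalPolicy(nn.Module):
    def __init__(self):
        super().__init__()
        self.high = nn.Linear(N_STATES, N_OPTIONS)                 # picks a low-level policy
        self.low = nn.ModuleList(nn.Linear(N_STATES, N_ACTIONS)
                                 for _ in range(N_OPTIONS))        # one head per skill

    def forward(self, state, params=None):
        # `params` lets us evaluate the network with adapted (inner-loop) weights
        # while staying differentiable w.r.t. the meta-parameters.
        p = dict(self.named_parameters()) if params is None else params
        option_logits = F.linear(state, p["high.weight"], p["high.bias"])
        opt = option_logits.argmax(-1).item()                      # chosen low-level policy
        action_logits = F.linear(state, p[f"low.{opt}.weight"], p[f"low.{opt}.bias"])
        return option_logits, action_logits

def novelty_bonus(counts, state_id, beta=0.1):
    # Intrinsic reward that decays as a state is revisited (assumed count-based form).
    counts[state_id] += 1
    return beta / counts[state_id] ** 0.5

def inner_adapt(policy, loss, lr=0.1):
    # Differentiable inner-loop step: create_graph=True keeps the adaptation
    # inside the computation graph so the outer loss can backprop through it.
    names, params = zip(*policy.named_parameters())
    grads = torch.autograd.grad(loss, params, create_graph=True)
    return {n: p - lr * g for n, p, g in zip(names, params, grads)}

# One abbreviated meta-update (outer loop) on a single illustrative transition.
policy = HierarchicalPolicy()
meta_opt = torch.optim.Adam(policy.parameters(), lr=1e-3)
counts = Counter()

state = torch.zeros(N_STATES); state[0] = 1.0                      # one-hot grid cell
_, logits = policy(state)
action = torch.distributions.Categorical(logits=logits).sample()
reward = 1.0 + novelty_bonus(counts, state_id=0)                   # env reward + intrinsic bonus

inner_loss = -reward * F.log_softmax(logits, -1)[action]           # REINFORCE-style surrogate
adapted = inner_adapt(policy, inner_loss)

_, post_logits = policy(state, params=adapted)                      # evaluate adapted weights
outer_loss = -reward * F.log_softmax(post_logits, -1)[action]
meta_opt.zero_grad(); outer_loss.backward(); meta_opt.step()         # update meta-parameters
```

In a full training loop, the inner adaptation would run over a batch of trajectories from one task drawn from the curriculum, and the outer update would average post-adaptation losses across tasks; the single-transition version above only illustrates how gradients flow through the differentiable inner step.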