We present a framework for a ground-aerial robotic team to explore large, unstructured, and unknown environments. In such exploration problems, the effectiveness of existing exploration-boosting heuristics often scales poorly with environment size and complexity. The proposed framework combines incremental frontier distribution, goal selection based on Monte-Carlo view quality rendering, and an automatically differentiable information gain measure to improve exploration efficiency. In simulations across multiple complex environments, we demonstrate that the proposed method effectively coordinates collaborating aerial and ground robots, consistently guides agents to informative viewpoints, improves the information gain of exploration paths, and reduces planning time.