We study interactive imitation learning, where a learner interactively queries a demonstrating expert for action annotations, aiming to learn a policy whose performance is competitive with the expert's using as few annotations as possible. We give an algorithmic framework named Ensemble-based Interactive Imitation Learning (EIIL) that achieves this goal. Theoretically, we prove that an oracle-efficient version of EIIL achieves a sharp regret guarantee, given access to samples from some ``explorative'' distribution over states. Empirically, EIIL notably surpasses online and offline imitation learning baselines in continuous control tasks. Our work opens up systematic investigation of the benefits of model ensembles for interactive imitation learning.