The opaqueness of deep NLP models has motivated the development of methods for interpreting how these models make predictions. Recent work has introduced hierarchical attribution, which produces a hierarchical clustering of words along with an attribution score for each cluster. However, existing methods for hierarchical attribution all follow the connecting rule, which restricts each cluster to a contiguous span of the input text. We argue that the connecting rule, as an additional prior, may undermine the ability of explanations to faithfully reflect the model's decision process. To this end, we propose generating hierarchical explanations without the connecting rule and introduce a framework for constructing such hierarchical clusters. Experimental results and further analysis demonstrate that the proposed method provides high-quality explanations that faithfully reflect the model's prediction process.
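To make the object of study concrete, the following is a minimal, hypothetical sketch (not the paper's actual algorithm) of a hierarchical attribution structure in Python. The `Cluster` class and `is_contiguous` check are illustrative assumptions: each node holds an arbitrary set of token indices and an attribution score, and `is_contiguous` shows the extra constraint that the connecting rule would impose.

```python
from dataclasses import dataclass, field
from typing import List, Set


@dataclass
class Cluster:
    """A node in a hierarchical attribution tree.

    `tokens` is a set of token indices; under the connecting rule it
    would have to form a contiguous span, but here any subset is allowed.
    """
    tokens: Set[int]          # indices of the words grouped by this cluster
    score: float              # attribution score assigned to the cluster
    children: List["Cluster"] = field(default_factory=list)

    def is_contiguous(self) -> bool:
        """True iff the cluster satisfies the connecting rule
        (its token indices form one continuous span)."""
        lo, hi = min(self.tokens), max(self.tokens)
        return self.tokens == set(range(lo, hi + 1))


# Toy sentence: "the movie was not very good"
# A hierarchy that merges "not" (index 3) and "good" (index 5) first,
# even though they are not adjacent -- something the connecting rule forbids.
leaf_not = Cluster({3}, score=-0.4)
leaf_good = Cluster({5}, score=0.6)
not_good = Cluster({3, 5}, score=-0.7, children=[leaf_not, leaf_good])
root = Cluster(set(range(6)), score=-0.5, children=[not_good])

print(not_good.is_contiguous())  # False: violates the connecting rule
print(root.is_contiguous())      # True: spans the whole sentence
```

The scores above are made up purely for illustration; the point is only that dropping the contiguity constraint lets an explanation group non-adjacent words such as "not" and "good" into one cluster.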