The architectures of deep convolutional neural networks (CNNs) have evolved for years, becoming more accurate and faster. However, it remains challenging to design network structures that achieve the best accuracy under a limited computational budget. In this paper, we propose a Tree block, named after its appearance, which extends the One-Shot Aggregation (OSA) module while being more lightweight and flexible. Specifically, the Tree block replaces each of the $3\times3$ Conv layers in OSA with a stack of a shallow residual block (SRB) and a $1\times1$ Conv layer. The $1\times1$ Conv layer is responsible for increasing the channel dimension, while the output of the SRB is fed into the next step. In this way, when aggregating the same number of subsequent feature maps, the Tree block has a deeper network structure with lower model complexity. In addition, a residual connection and efficient channel attention (ECA) are added to the Tree block to further improve the performance of the network. Based on the Tree block, we build efficient backbone models called TreeNets. TreeNet has a similar network architecture to ResNet, making it flexible enough to replace ResNet in various computer vision frameworks. We comprehensively evaluate TreeNet on commonly used benchmarks, including ImageNet-1k for classification and MS COCO for object detection and instance segmentation. Experimental results demonstrate that TreeNet is more efficient and performs favorably against current state-of-the-art backbones.
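To make the block structure concrete, below is a minimal PyTorch sketch of one Tree block as described above. The exact composition of the SRB, the channel sizes, the number of aggregation steps, and all class and argument names here are illustrative assumptions, not the authors' reference implementation; the ECA module follows the public ECA-Net formulation.

```python
# A minimal sketch of the Tree block, under assumed layer choices.
import torch
import torch.nn as nn


class SRB(nn.Module):
    """Shallow residual block: assumed here to be a single 3x3 Conv-BN-ReLU
    with an identity skip (channel count preserved)."""
    def __init__(self, channels):
        super().__init__()
        self.conv = nn.Conv2d(channels, channels, 3, padding=1, bias=False)
        self.bn = nn.BatchNorm2d(channels)
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):
        return self.relu(x + self.bn(self.conv(x)))


class ECA(nn.Module):
    """Efficient channel attention: global average pooling followed by a
    1-D convolution across channels (as in ECA-Net)."""
    def __init__(self, k=3):
        super().__init__()
        self.conv = nn.Conv1d(1, 1, k, padding=k // 2, bias=False)

    def forward(self, x):
        w = x.mean((2, 3))                        # (N, C) channel descriptor
        w = self.conv(w.unsqueeze(1)).squeeze(1)  # 1-D conv over channels
        return x * torch.sigmoid(w)[:, :, None, None]


class TreeBlock(nn.Module):
    """One-shot aggregation where each step is SRB -> 1x1 Conv: the 1x1 Conv
    raises the channel dimension for aggregation, while the SRB output
    (not the expanded tensor) is fed into the next step."""
    def __init__(self, channels, expand_channels, num_steps=3):
        super().__init__()
        self.steps = nn.ModuleList(SRB(channels) for _ in range(num_steps))
        self.expands = nn.ModuleList(
            nn.Conv2d(channels, expand_channels, 1, bias=False)
            for _ in range(num_steps))
        concat_ch = channels + num_steps * expand_channels
        self.fuse = nn.Conv2d(concat_ch, channels, 1, bias=False)
        self.eca = ECA()

    def forward(self, x):
        feats, h = [x], x
        for srb, expand in zip(self.steps, self.expands):
            h = srb(h)               # SRB output goes to the next step
            feats.append(expand(h))  # expanded map is kept for aggregation
        out = self.fuse(torch.cat(feats, dim=1))  # one-shot aggregation
        return self.eca(out) + x     # ECA + block-level residual connection
```

Under these assumptions, the costly $3\times3$ computation stays at the (narrower) SRB width while only the cheap $1\times1$ Conv operates at the aggregation width, which is one way to read the deeper-structure, lower-complexity trade-off claimed above.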