Deep learning (DL) approaches achieve extraordinary results in a wide range of domains but often require massive collections of private data. Hence, methods for training neural networks on the joint data of different data owners while keeping each party's input confidential are called for. We address the setting of horizontally distributed data in deep learning, where the participants' vulnerable intermediate results have to be processed in a privacy-preserving manner. The predominant scheme for this setting is based on homomorphic encryption (HE) and is widely considered to be without alternative. In contrast, we demonstrate that a carefully chosen, less complex and computationally less expensive secure sum protocol, used in conjunction with standard secure channels, exhibits superior properties in terms of both collusion resistance and runtime. Finally, we discuss several open research questions in the context of collaborative DL that may lead back to HE-based solutions.
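
To make the notion of a secure sum concrete, the following is a minimal sketch of one possible construction based on additive masking over a finite field, assuming shares are exchanged over pairwise secure channels; the specific protocol evaluated in the paper is not reproduced here, and all names and the modulus below are illustrative assumptions.

import secrets

MODULUS = 2**61 - 1  # assumed public modulus, chosen larger than any possible sum

def share_value(value, num_parties, modulus=MODULUS):
    """Split a private integer into num_parties additive shares mod `modulus`."""
    shares = [secrets.randbelow(modulus) for _ in range(num_parties - 1)]
    shares.append((value - sum(shares)) % modulus)
    return shares

def secure_sum(private_values, modulus=MODULUS):
    """Simulate the protocol: each party shares its value, then only partial sums are revealed."""
    n = len(private_values)
    # shares[i][j] is the share party i would send to party j over a secure channel
    shares = [share_value(v, n, modulus) for v in private_values]
    # each party j sums the shares it received and reveals only that masked partial sum
    partial_sums = [sum(shares[i][j] for i in range(n)) % modulus for j in range(n)]
    return sum(partial_sums) % modulus

if __name__ == "__main__":
    values = [12, 7, 30]  # toy stand-ins for the parties' private intermediate results
    assert secure_sum(values) == sum(values)

In this sketch, no single party learns another party's input; only the aggregate (and the masked partial sums) becomes visible, which is the property the abstract contrasts with HE-based aggregation.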