Current Computer-Aided Diagnosis (CAD) methods rely mainly on medical images. Clinical information, although routinely considered in practical clinical diagnosis, has not been fully exploited in CAD. In this paper, we propose a novel deep learning-based method for fusing Magnetic Resonance Imaging (MRI)/Computed Tomography (CT) images with clinical information for diagnostic tasks. Two branches of neural layers extract image features and clinical features, respectively, while the clinical features simultaneously serve as an attention signal that guides the extraction of image features. Finally, the two modalities of features are concatenated for decision making. We evaluate the proposed method on Alzheimer's disease diagnosis, mild cognitive impairment (MCI) converter prediction, and hepatic microvascular invasion diagnosis. The encouraging experimental results demonstrate the value of clinically guided image feature extraction and of concatenating the two feature modalities for classification, which together improve diagnostic performance effectively and consistently.
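To make the described architecture concrete, the following is a minimal sketch in PyTorch of a two-branch network in which clinical features gate the image features before concatenation. All layer choices, dimensions, and names (e.g., `ClinicalGuidedFusion`, `img_feat_dim`, `clin_dim`) are illustrative assumptions, not the paper's actual implementation.

```python
import torch
import torch.nn as nn

class ClinicalGuidedFusion(nn.Module):
    """Sketch: image branch + clinical branch, with clinical-feature attention."""

    def __init__(self, img_feat_dim=256, clin_dim=16, clin_feat_dim=64, num_classes=2):
        super().__init__()
        # Image branch: placeholder 3D CNN encoder for an MRI/CT volume.
        self.image_encoder = nn.Sequential(
            nn.Conv3d(1, 8, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool3d(1), nn.Flatten(),
            nn.Linear(8, img_feat_dim), nn.ReLU(),
        )
        # Clinical branch: small MLP over tabular clinical variables.
        self.clinical_encoder = nn.Sequential(
            nn.Linear(clin_dim, clin_feat_dim), nn.ReLU(),
        )
        # Attention: clinical features produce channel-wise gates for image features.
        self.attention = nn.Sequential(
            nn.Linear(clin_feat_dim, img_feat_dim), nn.Sigmoid(),
        )
        # Classifier on the concatenated (fused) features.
        self.classifier = nn.Linear(img_feat_dim + clin_feat_dim, num_classes)

    def forward(self, image, clinical):
        img_feat = self.image_encoder(image)             # (B, img_feat_dim)
        clin_feat = self.clinical_encoder(clinical)      # (B, clin_feat_dim)
        img_feat = img_feat * self.attention(clin_feat)  # clinically guided attention
        fused = torch.cat([img_feat, clin_feat], dim=1)  # concatenate the two modalities
        return self.classifier(fused)

# Usage sketch: a batch of 3D volumes and clinical vectors with assumed shapes.
model = ClinicalGuidedFusion()
logits = model(torch.randn(4, 1, 32, 32, 32), torch.randn(4, 16))
```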