Convolutional neural networks (CNNs) have proven effective for image processing tasks such as object recognition and classification. Recently, CNNs have been enhanced with concepts of attention similar to those found in biology. Much of this work on attention has focused on effective serial spatial processing. In this paper, I introduce a simple procedure for applying feature-based attention (FBA) to CNNs and compare multiple implementation options. FBA is a top-down signal applied globally to an input image that aids in detecting chosen objects in cluttered or noisy settings. The concept of FBA and the implementation details tested here were derived from what is known (and debated) about biological object- and feature-based attention. The implementations of FBA described here increase performance on challenging object detection tasks using a procedure that is simple, fast, and does not require additional iterative training. Furthermore, the comparisons performed here suggest that a proposed model of biological FBA (the "feature similarity gain model") is effective in increasing performance.
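To give a rough sense of the kind of modulation the feature similarity gain model implies, the sketch below multiplicatively scales a convolutional layer's feature maps according to per-channel tuning values for an attended category. The function `apply_feature_gain`, the strength parameter `beta`, and the random tuning values are illustrative assumptions for this sketch, not the specific implementations compared in the paper.

```python
# Minimal sketch (PyTorch): feature-similarity-gain-style modulation of a
# CNN layer. Per-channel gains of the form (1 + beta * tuning) amplify
# channels tuned toward the attended category and suppress channels tuned
# away from it. All names and values here are illustrative assumptions.
import torch
import torch.nn as nn

def apply_feature_gain(feature_maps: torch.Tensor,
                       tuning: torch.Tensor,
                       beta: float = 0.5) -> torch.Tensor:
    """Scale each feature map by (1 + beta * tuning).

    feature_maps: activations of shape (batch, channels, H, W)
    tuning: per-channel values in [-1, 1] indicating how strongly each
            channel prefers the attended category (hypothetical here)
    beta: global attention strength
    """
    gain = 1.0 + beta * tuning.view(1, -1, 1, 1)  # broadcast gain over space
    return torch.relu(feature_maps * gain)        # keep activations non-negative

# Toy usage: one conv layer, random input, stand-in tuning values.
conv = nn.Conv2d(3, 8, kernel_size=3, padding=1)
x = torch.randn(1, 3, 32, 32)
acts = torch.relu(conv(x))
tuning = torch.rand(8) * 2 - 1  # stand-in for measured category tuning
attended = apply_feature_gain(acts, tuning, beta=0.5)
```

Because the modulation is a single multiplicative pass over existing activations, it can be applied to a trained network at inference time without any additional iterative training, which is the property the abstract highlights.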