Factor models are routinely used for dimensionality reduction when modeling correlated, high-dimensional data. We are particularly motivated by neuroscience applications that collect high-dimensional `predictors' corresponding to brain activity in different regions, along with behavioral outcomes. Joint factor models for the predictors and outcomes are natural, but maximum likelihood estimation of these models can perform poorly in practice under model misspecification. We propose an alternative inference strategy based on supervised autoencoders; rather than placing a probability distribution on the latent factors, we define them as an unknown function of the high-dimensional predictors. This mapping function, along with the loadings, can be optimized to explain variance in brain activity while simultaneously being predictive of behavior. In practice, the mapping function can range in complexity from linear to more flexible forms, such as splines or neural networks, with the usual tradeoff between bias and variance. This approach yields solutions distinct from those of maximum likelihood inference, as we demonstrate by deriving analytic solutions for a linear Gaussian factor model. Using synthetic data, we show that this function-based approach is robust to multiple types of misspecification. We then apply the technique to a neuroscience application, where it yields substantial gains in predicting behavioral tasks from electrophysiological measurements across multiple factor models.
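As a minimal sketch of the optimization described above (with notation introduced here purely for illustration and not drawn from the abstract: $x_i$ denotes the high-dimensional predictors, $y_i$ the behavioral outcomes, $f$ the mapping from predictors to latent factors, $W$ the loadings, $g_\theta$ a predictive map on the factors, $\ell$ a supervision loss, and $\lambda$ a tradeoff weight), one natural form of the combined objective is
\[
\min_{f,\,W,\,\theta}\ \sum_{i=1}^{n} \bigl\lVert x_i - W f(x_i) \bigr\rVert_2^2 \;+\; \lambda\, \ell\bigl(y_i,\, g_\theta(f(x_i))\bigr),
\]
where the first term rewards explaining variance in brain activity and the second rewards predicting behavior; restricting $f$ to be linear, a spline basis expansion, or a neural network corresponds to the range of mapping complexities mentioned above.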