Quantum computers require high-fidelity measurement of many qubits to achieve a quantum advantage. For neutral-atom quantum processors with tightly spaced arrays, traditional readout approaches suffer from crosstalk between neighboring sites. Classical machine learning algorithms based on convolutional neural networks can improve fidelity, but they are computationally expensive, making them difficult to scale to large qubit counts. We present two simpler, scalable machine learning algorithms that realize matched filters for the readout problem. The first is a local "site" model that acts on a single qubit; the second is an "array" model that incorporates information from neighboring qubits in the array to mitigate readout crosstalk. We demonstrate error reductions of up to 32% and 43% for the site and array models, respectively, compared to a conventional Gaussian threshold approach. Additionally, our array model uses two orders of magnitude fewer trainable parameters and four orders of magnitude fewer multiplications and nonlinear function evaluations than a recent convolutional neural network approach, with only a minor (3.5%) increase in error across different readout times. Another strength of our approach is its physical interpretability: the learned filter can be visualized to provide insight into experimental imperfections. We also show that convolutional neural network models for improved site and array readout can be pruned to have 70x and 4000x fewer parameters, respectively, while maintaining similar errors. Our work shows that simple machine learning approaches can achieve high-fidelity qubit measurements while remaining scalable to systems with larger qubit counts.
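To illustrate the matched-filter idea described above, the following is a minimal sketch, not the authors' implementation: a single learned linear filter (here trained as a logistic regression over time-binned photon counts) classifies one site's readout trace, and an "array"-style variant simply concatenates neighbor traces so the filter can learn to compensate for crosstalk. The data shapes, the Poisson stand-in data, and the training procedure are all assumptions for illustration only.

```python
import numpy as np

# Minimal, hypothetical sketch of a learned matched filter for qubit readout.
# Each shot is a time-binned photon-count trace x; a linear model sigmoid(w.x + b)
# discriminates bright/dark states. The learned weights w act as the matched
# filter and can be plotted versus time bin for physical interpretability.

rng = np.random.default_rng(0)

def train_matched_filter(X, y, lr=0.1, epochs=500):
    """Logistic regression on traces X (n_shots, n_bins), labels y in {0, 1}."""
    n_shots, n_bins = X.shape
    w, b = np.zeros(n_bins), 0.0
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-(X @ w + b)))  # predicted P(bright)
        w -= lr * (X.T @ (p - y)) / n_shots     # cross-entropy gradient steps
        b -= lr * np.mean(p - y)
    return w, b

def classify(X, w, b):
    return (X @ w + b) > 0.0                    # threshold at P(bright) = 0.5

# Synthetic stand-in data (assumption): 20 time bins per shot; "bright" shots
# have an elevated count rate that decays over the readout window.
n_shots, n_bins = 4000, 20
y = rng.integers(0, 2, n_shots)
rate = 0.5 + 2.0 * y[:, None] * np.exp(-np.arange(n_bins) / 10.0)
X_site = rng.poisson(rate).astype(float)

# Site model: filter sees only the target site's trace.
w_site, b_site = train_matched_filter(X_site, y)
print("site-model training error:", np.mean(classify(X_site, w_site, b_site) != y))

# Array-style variant (hypothetical shapes): append the neighbors' traces so the
# learned filter can assign them negative weight and subtract crosstalk.
X_neighbors = rng.poisson(0.5, (n_shots, 2 * n_bins)).astype(float)
X_array = np.hstack([X_site, X_neighbors])
w_array, b_array = train_matched_filter(X_array, y)
```

Plotting `w_site` against time bin gives the kind of interpretable filter shape mentioned in the abstract; the same linear structure is what keeps the parameter count and multiplication count low compared with a convolutional network.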