We present the CheckforAI text classifier, a transformer-based neural network trained to distinguish text written by large language models from text written by humans. CheckforAI outperforms zero-shot methods such as DetectGPT as well as leading commercial AI detection tools, achieving over nine times lower error rates on a comprehensive benchmark spanning ten text domains (including student writing, creative writing, scientific writing, books, encyclopedias, news, email, scientific papers, and short-form Q&A) and eight open- and closed-source large language models. We propose a training algorithm, hard negative mining with synthetic mirrors, that enables our classifier to achieve orders of magnitude lower false positive rates on high-data domains such as reviews. Finally, we show that CheckforAI is not biased against non-native English speakers and that it generalizes to domains and models unseen during training.