People with disabilities (PwD) regularly encounter ableist hate and microaggressions online. While online platforms use machine learning models to moderate online harm, there is little research investigating how these models interact with ableism. In this paper, we curated a dataset of 100 social media comments targeted at PwD and recruited 160 participants to rate and explain how toxic and ableist these comments were. We then prompted state-of-the-art toxicity classifiers (TCs) and large language models (LLMs) to rate and explain the harm. Our analysis revealed that TCs and LLMs rated toxicity significantly lower than PwD did, but LLMs rated ableism generally on par with PwD. However, ableism explanations by LLMs overlooked emotional harm and lacked the specificity and acknowledgement of context that characterized PwD explanations. Going forward, we discuss challenges in designing disability-aware toxicity classifiers and advocate for a shift from ableism detection to ableism interpretation and explanation.