Empathy is a critical element of effective and satisfying conversational communication, yet prior studies on measuring conversational empathy mostly focus on expressed communicative intents, i.e., the ways in which empathy is expressed, ignoring the fact that conversation is a collaborative practice involving both speakers and listeners. In contrast, we propose a multi-dimensional empathy evaluation framework that extends existing work to measure both expressed intents from the speaker's perspective and perceived empathy from the listener's perspective. Applying the proposed framework to analyze our internal customer-service dialogues shows that the two dimensions (expressed intent types and perceived empathy) are interconnected, and that perceived empathy correlates strongly with the satisfaction level of dialogue sessions. However, the framework still requires subjective assessments from trained annotators, which can be non-trivial to collect. To scale up evaluation without excessive reliance on carefully annotated data, we explore two modeling options for automatically measuring conversational empathy: (1) prompting frozen large language models (LLMs) and (2) training language-model-based classifiers. Extensive experiments on both internal and external dialogue datasets show that measuring conversational empathy remains a challenging task for prompted frozen LLMs, as reflected in the unsatisfactory performance of GPT-4 and Flan-family models. In contrast, our proposed instruction-finetuned classifiers based on sequence-to-sequence (Seq2Seq) language models achieve the best performance, outperforming prior work and competitive baselines. Finally, we conduct comprehensive ablation studies on the proposed instruction-finetuned classifiers and offer recommendations for adopting them as automatic evaluation metrics for conversational empathy.