Emotional Voice Conversion (EVC) modifies the emotion of speech to enhance communication, for example by amplifying positive cues and reducing negative ones. The task is complex because emotion is entangled with factors such as voice quality, speaker traits, and linguistic content. Traditional deep learning models such as GANs and autoencoders have achieved some success in EVC by learning direct mappings or disentangling features, but they suffer from training instability and voice quality degradation. Diffusion models, in contrast, offer stable training and high-quality generation. We propose a diffusion-based EVC framework that disentangles emotion and speaker identity using a mutual information loss and auxiliary models, and we introduce an expressive guidance mechanism that improves emotion conversion while preserving speaker traits. Experimental results demonstrate the effectiveness of our approach on unseen speakers and emotions, achieving state-of-the-art performance on EVC tasks.
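To make the disentanglement objective concrete, the following is a minimal sketch of one common way a mutual information loss between emotion and speaker embeddings can be implemented: a CLUB-style variational upper bound on MI, minimized alongside the main generation loss. All module names, dimensions, and the choice of estimator here are illustrative assumptions for exposition, not the paper's actual implementation.

```python
# Illustrative sketch (PyTorch): CLUB-style upper bound on mutual information
# between emotion and speaker embeddings, used as a disentanglement penalty.
# Names and dimensions are assumptions, not the authors' code.
import torch
import torch.nn as nn


class MIUpperBound(nn.Module):
    """Variational estimator q(speaker_emb | emotion_emb), modeled as a Gaussian."""

    def __init__(self, emo_dim: int = 128, spk_dim: int = 128, hidden: int = 256):
        super().__init__()
        self.mu_net = nn.Sequential(
            nn.Linear(emo_dim, hidden), nn.ReLU(), nn.Linear(hidden, spk_dim)
        )
        self.logvar_net = nn.Sequential(
            nn.Linear(emo_dim, hidden), nn.ReLU(), nn.Linear(hidden, spk_dim), nn.Tanh()
        )

    def forward(self, emo: torch.Tensor, spk: torch.Tensor) -> torch.Tensor:
        # Positive term: log-likelihood of matched (emotion, speaker) pairs.
        mu, logvar = self.mu_net(emo), self.logvar_net(emo)
        positive = (-((spk - mu) ** 2) / logvar.exp() - logvar).sum(dim=-1)
        # Negative term: same likelihood on shuffled (unpaired) speaker embeddings.
        spk_shuffled = spk[torch.randperm(spk.size(0))]
        negative = (-((spk_shuffled - mu) ** 2) / logvar.exp() - logvar).sum(dim=-1)
        # The gap upper-bounds I(emotion; speaker); minimizing it encourages
        # the two embeddings to carry independent information.
        return (positive - negative).mean()


if __name__ == "__main__":
    mi_loss = MIUpperBound()
    emo_emb = torch.randn(16, 128)   # from an auxiliary emotion encoder (assumed)
    spk_emb = torch.randn(16, 128)   # from an auxiliary speaker encoder (assumed)
    loss = mi_loss(emo_emb, spk_emb)  # would be added to the diffusion training loss
    loss.backward()
```

In this kind of setup, the auxiliary encoders provide the embeddings and the MI penalty is weighted against the diffusion objective; the expressive guidance mechanism described above would then steer sampling toward the target emotion at inference time.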