This work addresses resource allocation in multi-cell wireless systems serving both enhanced Mobile Broadband (eMBB) and Ultra-Reliable Low-Latency Communications (URLLC) users. We present a distributed learning framework tailored to the O-RAN architecture. Leveraging a Thompson sampling-based Deep Reinforcement Learning (DRL) algorithm, our approach makes real-time resource allocation decisions that adapt to evolving network conditions. Online decision-making is enabled by deploying trained execution agents at Near-Real-Time RAN Intelligent Controllers (Near-RT RICs) at the network edge. Simulation results demonstrate the algorithm's effectiveness in meeting the Quality of Service (QoS) requirements of both eMBB and URLLC users, offering insights into optimising resource utilisation in dynamic wireless environments.
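To illustrate the exploration principle behind the proposed agent, the following is a minimal sketch of Thompson sampling over a discrete set of candidate resource-allocation actions. It is an assumption-laden simplification: the paper's method is a Thompson sampling-based DRL agent deployed at Near-RT RICs, whereas this sketch keeps per-action Beta posteriors over a Bernoulli "QoS satisfied" reward; the class name, action set, and reward definition are illustrative, not taken from the paper.

```python
import numpy as np


class ThompsonSamplingAllocator:
    """Beta-Bernoulli Thompson sampler over candidate allocation actions
    (e.g., alternative splits of spare resource blocks between eMBB and
    URLLC slices).

    Illustrative sketch only: the paper's agent is a deep RL policy whose
    Thompson sampling operates over learned value/parameter posteriors,
    not per-arm Beta distributions.
    """

    def __init__(self, n_actions: int, seed: int = 0):
        self.rng = np.random.default_rng(seed)
        # Beta(1, 1) prior on the probability that each action meets QoS.
        self.alpha = np.ones(n_actions)
        self.beta = np.ones(n_actions)

    def select_action(self) -> int:
        # Draw one plausible success probability per action from its
        # posterior, then act greedily on the samples; exploration comes
        # from posterior uncertainty rather than epsilon-greedy noise.
        samples = self.rng.beta(self.alpha, self.beta)
        return int(np.argmax(samples))

    def update(self, action: int, qos_satisfied: bool) -> None:
        # Bernoulli reward: 1 if both the eMBB throughput target and the
        # URLLC latency/reliability target were met in this interval.
        if qos_satisfied:
            self.alpha[action] += 1
        else:
            self.beta[action] += 1


if __name__ == "__main__":
    # Hypothetical usage: 4 candidate allocation actions with unknown
    # (to the agent) probabilities of satisfying QoS.
    allocator = ThompsonSamplingAllocator(n_actions=4, seed=42)
    true_success_prob = np.array([0.3, 0.5, 0.8, 0.6])
    rng = np.random.default_rng(7)
    for _ in range(2000):
        a = allocator.select_action()
        allocator.update(a, rng.random() < true_success_prob[a])
    print("posterior means:", allocator.alpha / (allocator.alpha + allocator.beta))
```

In this toy setting the sampler concentrates on the action most likely to satisfy QoS while still occasionally probing the alternatives, which is the behaviour the full DRL agent needs when channel conditions and traffic mixes drift over time.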