Computed tomography (CT) is widely used in healthcare for detailed imaging. Low-dose CT reduces radiation exposure, but the resulting images often suffer from compromised quality due to increased noise. Traditional methods, including preprocessing, post-processing, and model-based approaches that leverage physical principles, are employed to improve the quality of images reconstructed from noisy projections or sinograms. Recently, deep learning has significantly advanced the field, with diffusion models outperforming both traditional methods and other deep learning approaches. These models effectively merge deep learning with physics, serving as robust priors for the inverse problem in CT; however, they typically require long computation times during sampling. This paper introduces the first approach to merge deep unfolding with Direct Diffusion Bridges (DDBs) for CT, integrating the imaging physics into the network architecture and enabling a direct transition from degraded to clean images that bypasses the excessively noisy intermediate stages commonly encountered in diffusion models. Moreover, the approach includes a tailored training procedure that eliminates errors typically accumulated during sampling. The proposed approach requires fewer sampling steps and demonstrates improved fidelity metrics, outperforming many existing state-of-the-art techniques.
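To illustrate the bridge idea informally, the following is a minimal sketch of noiseless Direct-Diffusion-Bridge-style sampling, assuming the common linear bridge x_t = (1 - t)·x_clean + t·x_degraded between the clean and degraded endpoints. The `ddb_sample` function, the time schedule, and the oracle "denoiser" are illustrative stand-ins, not the paper's actual unfolded network or training procedure.

```python
import numpy as np

def ddb_sample(degraded, denoiser, timesteps):
    """Simplified, noiseless bridge sampling (illustrative sketch).

    With the linear bridge x_t = (1 - t) * x_clean + t * x_degraded,
    the deterministic update from time t to s < t is
        x_s = (s / t) * x_t + (1 - s / t) * x0_hat,
    where x0_hat = denoiser(x_t, t) is the network's clean-image estimate.
    """
    x = degraded.copy()  # start at t = 1, the fully degraded endpoint
    for t, s in zip(timesteps[:-1], timesteps[1:]):
        x0_hat = denoiser(x, t)                    # estimate the clean image
        x = (s / t) * x + (1.0 - s / t) * x0_hat   # slide along the bridge toward s
    return x

# Toy demonstration: an oracle "denoiser" that already knows the clean
# signal (a trained, physics-informed network would take its place).
rng = np.random.default_rng(0)
clean = np.sin(np.linspace(0, np.pi, 64))
degraded = clean + 0.3 * rng.standard_normal(64)   # simulated low-dose noise

oracle = lambda x, t: clean                        # hypothetical stand-in
restored = ddb_sample(degraded, oracle, timesteps=[1.0, 0.5, 0.0])
```

Because the start point is the degraded image itself rather than pure noise, only a few bridge steps are needed, which is the source of the sampling-speed advantage claimed above.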