High dynamic range (HDR) imaging remains a challenging problem due to the limited dynamic range of conventional image sensors. Most existing learning-based HDR reconstruction methods take a set of bracketed-exposure sRGB images to extend the dynamic range, and are thus computation- and memory-inefficient, since the Image Signal Processor (ISP) must be run multiple times to produce the sRGB images from the raw ones. In this paper, we propose to extend the dynamic range directly from the raw inputs and to apply the ISP only once, to the reconstructed HDR raw image. Our key contributions are threefold: (1) we design a new computational raw HDR data formation pipeline and construct the first real-world raw HDR dataset, RealRaw-HDR; (2) we develop a lightweight and efficient HDR reconstruction model, RepUNet, using the structural re-parameterization technique; (3) we propose a plug-and-play motion alignment loss to mitigate motion misalignment between short- and long-exposure images. Extensive experiments demonstrate that our approach achieves state-of-the-art performance in both visual quality and quantitative metrics.