Unlike traditional cameras, which synchronously register pixel intensities, neuromorphic sensors asynchronously report `changes' only at pixels where the intensity is changing. This enables neuromorphic sensors to sample at microsecond resolution and to capture scene dynamics efficiently. Since only sequences of asynchronous events are recorded, rather than brightness intensities over time, many traditional image processing techniques cannot be applied directly. Furthermore, existing approaches, including those recently introduced by the authors, combine traditional images with neuromorphic event data to carry out reconstructions. The aim of this work is to introduce an optimization-based approach that reconstructs images and dynamics from the neuromorphic event data alone, without any additional information. The intensity at each pixel is modeled temporally. Experimental results on real data highlight the efficacy of the presented approach, paving the way for efficient and accurate processing of neuromorphic sensor data in real-world applications.
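To make the per-pixel temporal modeling concrete, the sketch below illustrates the standard event-camera generation model often assumed in the literature: each event at a pixel signals that the log-intensity there has changed by roughly one contrast threshold since the previous event, so accumulating signed polarities yields a piecewise-constant estimate of the pixel's log-intensity over time. This is a minimal illustration only, not the optimization formulation proposed in the paper; the function name, `contrast_threshold`, and `log_I0` are hypothetical parameters introduced for this example.

```python
import numpy as np

def reconstruct_pixel_log_intensity(event_times, polarities, t_grid,
                                    contrast_threshold=0.2, log_I0=0.0):
    """Illustrative per-pixel temporal reconstruction from events.

    Under the standard event-camera model, an event at time t_k with
    polarity p_k in {-1, +1} indicates that the log-intensity at this
    pixel changed by approximately p_k * contrast_threshold since the
    previous event. Accumulating these signed changes gives a
    piecewise-constant estimate of the log-intensity over time.

    NOTE: simplified sketch, not the paper's optimization approach;
    contrast_threshold and log_I0 are assumed illustrative parameters.
    """
    event_times = np.asarray(event_times, dtype=float)
    polarities = np.asarray(polarities, dtype=float)

    # Cumulative log-intensity change contributed by all events so far.
    cum_change = np.cumsum(polarities * contrast_threshold)

    # For each query time, find how many events have occurred, then look
    # up the accumulated change; before the first event, use log_I0.
    idx = np.searchsorted(event_times, t_grid, side="right")
    log_I = np.where(idx > 0,
                     log_I0 + cum_change[np.maximum(idx - 1, 0)],
                     log_I0)
    return log_I


# Example: a single pixel brightening twice and then dimming once.
times = [0.001, 0.004, 0.009]        # event timestamps (seconds)
pols = [+1, +1, -1]                  # event polarities
t_grid = np.linspace(0.0, 0.01, 6)   # query times for reconstruction
print(reconstruct_pixel_log_intensity(times, pols, t_grid))
```

In practice, such a naive accumulation drifts with noise and threshold mismatch, which is one motivation for casting the reconstruction as an optimization problem as the paper proposes.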