Event, or neuromorphic, cameras offer a novel encoding of natural scenes by asynchronously reporting significant changes in brightness, known as events, with improved dynamic range, higher temporal resolution, and lower data bandwidth compared to conventional cameras. However, their adoption in domain-specific research tasks is hindered in part by limited commercial availability, a lack of existing datasets, and challenges in predicting the impact of their nonlinear optical encoding, unique noise model, and tensor-based data-processing requirements. To address these challenges, we introduce Synthetic Events for Neural Processing and Integration (SENPI) in Python, a PyTorch-based library for simulating and processing event camera data. SENPI includes a differentiable digital twin that converts intensity-based data into event representations, allowing for evaluation of event camera performance while handling the non-smooth and nonlinear nature of the forward model. The library also supports modules for event-based I/O, manipulation, filtering, and visualization, enabling efficient and scalable workflows for both synthetic and real event-based data. We demonstrate SENPI's ability to produce realistic event-based data by comparing synthetic outputs to real event camera data, and we use these results to draw conclusions on the properties and utility of event-based perception. Additionally, we showcase SENPI's use in exploring event camera behavior under varying noise conditions and in optimizing the event contrast threshold for improved encoding under target conditions. Ultimately, SENPI aims to lower the barrier to entry for researchers by providing an accessible tool for event data generation and algorithmic development, making it a valuable resource for advancing research in neuromorphic vision systems.
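To make the forward model concrete, the sketch below illustrates the standard idealized event-generation rule that simulators of this kind build on: a pixel emits an event when its log-intensity change since the last reference frame exceeds a contrast threshold, with polarity given by the sign of the change. This is a minimal stdlib-only illustration, not SENPI's actual API; the function name, signature, and output format are assumptions for exposition.

```python
import math

def simulate_events(prev_frame, curr_frame, contrast_threshold=0.2, eps=1e-6):
    """Idealized event-camera forward model (illustrative, not SENPI's API).

    An event fires at pixel (x, y) when the log-intensity change between
    frames exceeds the contrast threshold; polarity is the sign of the change.
    Frames are 2D lists of nonnegative intensities; `eps` avoids log(0).
    """
    events = []  # list of (x, y, polarity) tuples
    for y, (row_prev, row_curr) in enumerate(zip(prev_frame, curr_frame)):
        for x, (i_prev, i_curr) in enumerate(zip(row_prev, row_curr)):
            delta = math.log(i_curr + eps) - math.log(i_prev + eps)
            if abs(delta) >= contrast_threshold:
                events.append((x, y, 1 if delta > 0 else -1))
    return events

# A pixel that doubles in brightness (log change ~0.69) crosses a 0.2
# threshold and emits a positive event; an unchanged pixel emits nothing.
events = simulate_events([[1.0, 1.0]], [[1.0, 2.0]], contrast_threshold=0.2)
# → [(1, 0, 1)]
```

The hard threshold above is non-smooth, which is exactly why a differentiable digital twin (e.g. replacing the threshold with a smooth surrogate during backpropagation) is needed for gradient-based tasks such as contrast-threshold optimization.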