To ensure the safe operation of autonomous vehicles in complex urban environments, complete perception of the environment is necessary. However, due to environmental conditions, sensor limitations, and occlusions, this is not always achievable from a single point of view. Collective perception, in which connected vehicles and infrastructure share sensor information, is an effective approach to this problem. Realistic and large-scale datasets are essential for training and evaluating collective perception methods. This paper provides the first comprehensive technical review of collective perception datasets in the context of autonomous driving. It analyzes existing vehicle-to-vehicle (V2V) and vehicle-to-everything (V2X) datasets and categorizes them according to criteria such as sensor modalities, environmental conditions, and scenario variety, with a focus on their applicability to the development of connected automated vehicles. The study aims to identify the key characteristics of each dataset and to present its strengths, weaknesses, and anomalies. Finally, the survey concludes with recommendations on which datasets are most suitable for collective 3D object detection, tracking, and semantic segmentation.