In many joint-action scenarios, humans and robots have to coordinate their movements to accomplish a given shared task. Lifting an object together, sawing a wood log, or transferring objects from one point to another are all examples in which motor coordination between humans and machines is a crucial requirement. While dyadic coordination between a human and a robot has been studied in previous investigations, the multi-agent scenario in which a robot has to be integrated into a human group remains a far less explored field of research. In this paper we discuss how to synthesise an artificial agent capable of coordinating its motion with that of a human ensemble. Driven by a control architecture based on deep reinforcement learning, such an artificial agent autonomously moves so as to synchronise its motion with that of the group while exhibiting human-like kinematic features. As a paradigmatic coordination task, we consider a group version of the so-called mirror game, which has been highlighted as a good benchmark in the human movement literature.