An essential prerequisite for the successful task execution of a team of agents is to accurately estimate and track their poses. We consider a cooperative multi-agent positioning problem in which each agent performs single-agent positioning until it encounters another agent. Upon an encounter, the two agents measure their relative pose and exchange the particle clouds representing their pose estimates. We propose a cooperative positioning algorithm that fuses the received information with the locally available measurements and infers an agent's pose within a Bayesian framework. The algorithm scales to multiple agents, has relatively low computational complexity, admits a decentralized implementation across agents, and imposes relatively mild requirements on communication coverage and bandwidth. Experiments indicate that the proposed algorithm considerably improves single-agent positioning accuracy, reduces the convergence time of a particle cloud, and, unlike its single-agent counterpart, remains robust in feature-scarce and symmetric environment layouts that would otherwise impede positioning.
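The abstract does not spell out the fusion rule, but the encounter step it describes can be illustrated with a minimal sketch: each agent reweights its own particle cloud by the likelihood of the measured relative pose, marginalizing over the peer's transmitted cloud. Everything below is an illustrative assumption, not the paper's algorithm: the function name `fuse_on_encounter`, the planar `(x, y, theta)` pose parameterization, and the Gaussian relative-pose measurement model.

```python
import numpy as np

def wrap_angle(a):
    """Wrap angles to (-pi, pi]."""
    return (a + np.pi) % (2.0 * np.pi) - np.pi

def fuse_on_encounter(own_pts, own_w, peer_pts, peer_w, z_rel, meas_cov):
    """Reweight this agent's particles (rows of (x, y, theta)) using the
    relative-pose measurement z_rel = (dx, dy, dtheta) of the peer expressed
    in this agent's frame, and the peer's received cloud (peer_pts, peer_w).
    Weights are assumed normalized to sum to one."""
    inv_cov = np.linalg.inv(meas_cov)
    norm = 1.0 / np.sqrt((2.0 * np.pi) ** 3 * np.linalg.det(meas_cov))

    # Predicted relative pose h(x_i, x_j) for every own/peer particle pair:
    # rotate the position difference into the own particle's frame.
    diff = peer_pts[None, :, :2] - own_pts[:, None, :2]        # (N, M, 2)
    c, s = np.cos(own_pts[:, 2]), np.sin(own_pts[:, 2])
    dx = c[:, None] * diff[..., 0] + s[:, None] * diff[..., 1]
    dy = -s[:, None] * diff[..., 0] + c[:, None] * diff[..., 1]
    dth = wrap_angle(peer_pts[None, :, 2] - own_pts[:, None, 2])

    # Innovation and Gaussian likelihood for each particle pair.
    e = np.stack([dx - z_rel[0], dy - z_rel[1],
                  wrap_angle(dth - z_rel[2])], axis=-1)        # (N, M, 3)
    maha = np.einsum('nmi,ij,nmj->nm', e, inv_cov, e)
    lik = norm * np.exp(-0.5 * maha)

    # Monte Carlo marginalization over the peer cloud, then Bayesian
    # reweighting of the own cloud and renormalization.
    new_w = own_w * (lik @ peer_w)
    return new_w / new_w.sum()
```

In a full filter, this reweighting would typically be followed by a resampling step, and the peer agent would apply the symmetric update with the inverted relative-pose measurement; both details, like the sketch itself, are stated here only as plausible assumptions.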