We consider two agents simultaneously playing the same stochastic three-armed bandit problem. The agents cooperate, but they cannot communicate. We propose a strategy that, with very high probability, incurs no collisions at all between the players, and achieves near-optimal regret $O(\sqrt{T \log(T)})$. We also provide evidence that the extra logarithmic factor $\sqrt{\log(T)}$ is necessary, by proving a lower bound for a variant of the problem.
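To make the setting concrete, here is a minimal sketch of the two-player three-armed bandit environment. It assumes the standard multi-player collision model (a common convention, not stated explicitly above) in which both players receive zero reward whenever they pull the same arm; the arm means, the fixed-arm baseline strategy, and all function names are illustrative assumptions, not the paper's algorithm.

```python
import random

def play_round(means, a1, a2, rng):
    """One round: players pull arms a1 and a2 (Bernoulli rewards).

    Collision model (assumption): if both players pull the same arm,
    neither receives a reward.
    """
    if a1 == a2:
        return 0.0, 0.0, True  # collision
    r1 = 1.0 if rng.random() < means[a1] else 0.0
    r2 = 1.0 if rng.random() < means[a2] else 0.0
    return r1, r2, False

def simulate(means, T, seed=0):
    """Toy baseline (NOT the paper's strategy): each player commits to a
    fixed, distinct arm. This trivially avoids all collisions, but incurs
    linear regret unless the two chosen arms happen to be the two best;
    the challenge addressed in the paper is achieving both no collisions
    and O(sqrt(T log T)) regret without any communication.
    """
    rng = random.Random(seed)
    total_reward, collisions = 0.0, 0
    for _ in range(T):
        r1, r2, collided = play_round(means, 0, 1, rng)
        total_reward += r1 + r2
        collisions += collided
    return total_reward, collisions
```

Running `simulate([0.9, 0.5, 0.1], 1000)` yields zero collisions by construction, illustrating the collision count and cumulative reward that a regret analysis would compare against the two best arms.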