We study an asynchronous online learning setting with a network of agents. At each time step, some of the agents are activated; each activated agent makes a prediction and pays the corresponding loss. The loss function is then revealed to these agents and also to their neighbors in the network. When activations are stochastic, we show that the regret achieved by $N$ agents running the standard Online Mirror Descent algorithm is $O(\sqrt{\alpha T})$, where $T$ is the horizon and $\alpha \le N$ is the independence number of the network. This is in contrast to the $\Omega(\sqrt{N T})$ regret that $N$ agents incur in the same setting when feedback is not shared. We also show a matching lower bound of order $\sqrt{\alpha T}$ that holds for any given network. When the pattern of agent activations is arbitrary, the problem changes significantly: we prove an $\Omega(T)$ lower bound on the regret that holds for any online algorithm oblivious to the feedback source.
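As a rough illustration of the cooperative protocol described above (a sketch, not the paper's exact algorithm or notation), the following Python snippet runs the Euclidean special case of Online Mirror Descent, i.e. online gradient descent, on a network of agents: at each step the activated agents predict with their current iterates and pay the loss, and the loss gradient is then revealed to them and to their network neighbors, each of whom takes an update step. All names (`cooperative_ogd`, `neighbors`, `grad_oracles`, `eta`) are hypothetical, and the losses, activation pattern, and learning rate are placeholders rather than the tuned choices from the analysis.

```python
import numpy as np

def cooperative_ogd(T, neighbors, activations, grad_oracles, dim, eta):
    """Cooperative online gradient descent (the Euclidean instance of
    Online Mirror Descent) on a network of N agents.

    neighbors[v]    -- iterable of v's neighbors in the network (v itself excluded)
    activations[t]  -- set of agents activated at step t
    grad_oracles[t] -- function x -> gradient of the step-t loss at x
    """
    N = len(neighbors)
    x = np.zeros((N, dim))                        # each agent's current iterate
    for t in range(T):
        active = set(activations[t])
        # Activated agents predict with their current iterate and pay the loss;
        # the loss is then revealed to them and to their network neighbors.
        informed = set(active)
        for v in active:
            informed.update(neighbors[v])
        for v in informed:
            g = grad_oracles[t](x[v])             # gradient at that agent's prediction
            x[v] = x[v] - eta * g                 # mirror-descent step (Euclidean regularizer)
            # (project back onto the decision set here if it is constrained)
    return x

# Tiny example: 3 agents on a path graph, quadratic losses, one random activation per step.
if __name__ == "__main__":
    rng = np.random.default_rng(0)
    T, dim, N = 100, 2, 3
    neighbors = {0: [1], 1: [0, 2], 2: [1]}       # path 0 - 1 - 2
    activations = [{rng.integers(N)} for _ in range(T)]
    targets = rng.normal(size=(T, dim))
    grad_oracles = [(lambda x, y=targets[t]: 2.0 * (x - y)) for t in range(T)]
    final_iterates = cooperative_ogd(T, neighbors, activations, grad_oracles,
                                     dim, eta=1.0 / np.sqrt(T))
    print(final_iterates)
```

In this sketch the only coordination between agents is the shared feedback: every informed agent updates its own iterate independently, which is what allows the combined regret to scale with the independence number $\alpha$ of the network rather than with $N$.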