This paper studies stochastic optimization for a sum of compositional functions, where the inner-level function of each summand is coupled with the corresponding summation index. We refer to this family of problems as finite-sum coupled compositional optimization (FCCO). It has broad applications in machine learning for optimizing non-convex or convex compositional measures/objectives such as average precision (AP), $p$-norm push, listwise ranking losses, neighborhood component analysis (NCA), deep survival analysis, deep latent variable models, softmax functions, and model-agnostic meta-learning, all of which deserve a finer analysis. Yet, existing algorithms and analyses are restricted in one aspect or another. The contribution of this paper is to provide a comprehensive analysis of a simple stochastic algorithm for both non-convex and convex objectives. The key results are {\bf improved oracle complexities with parallel speed-up}, achieved by a moving-average-based stochastic estimator with mini-batching. Our theoretical analysis also yields new insights for improving the practical implementation, namely sampling batches of equal size for the outer and inner levels. Numerical experiments on AP maximization and $p$-norm push optimization corroborate some aspects of the theory.
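
To fix ideas, the following is a minimal sketch of the setup described above; the symbols $f_i$, $g_i$, $u_i^t$, $\gamma$, and the batches $\mathcal{B}_1^t$, $\mathcal{B}_2^t$ are illustrative notation and are defined precisely in the body of the paper. FCCO seeks to solve
\begin{equation*}
\min_{\mathbf{w}} \; F(\mathbf{w}) = \frac{1}{n}\sum_{i=1}^{n} f_i\big(g_i(\mathbf{w})\big), \qquad g_i(\mathbf{w}) = \mathbb{E}_{\xi_i}\big[g_i(\mathbf{w};\xi_i)\big],
\end{equation*}
where the inner-level function $g_i$ is coupled with the summation index $i$. One natural instantiation of the moving-average-based estimator with mini-batching maintains, for each index $i$ drawn in the outer batch $\mathcal{B}_1^t$, a running estimate of $g_i(\mathbf{w}_t)$,
\begin{equation*}
u_i^{t+1} = (1-\gamma)\, u_i^{t} + \gamma \cdot \frac{1}{|\mathcal{B}_2^t|}\sum_{\xi \in \mathcal{B}_2^t} g_i(\mathbf{w}_t;\xi),
\end{equation*}
which is then composed with $\nabla f_i$ to form the stochastic gradient estimator.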