Given a mixture of two populations of coins, "positive" coins each with (unknown and potentially distinct) heads probability $\geq\frac{1}{2}+\Delta$ and "negative" coins each with heads probability $\leq\frac{1}{2}-\Delta$, we consider the task of estimating the fraction $\rho$ of positive coins to within additive error $\epsilon$. We introduce new techniques to show a fully adaptive lower bound of $\Omega(\frac{\rho}{\epsilon^2\Delta^2})$ samples (for constant probability of success). We achieve almost-matching algorithmic performance of $O(\frac{\rho}{\epsilon^2\Delta^2}(1+\rho\log\frac{1}{\epsilon}))$ samples, which matches the lower bound except in the regime where $\rho=\omega(\frac{1}{\log(1/\epsilon)})$. The fine-grained adaptive flavor of both our algorithm and lower bound contrasts with much of the previous work on distributional testing and learning.
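For concreteness, the following sketch simulates the sampling model and a naive non-adaptive baseline: sample roughly $1/\epsilon^2$ coins uniformly, flip each about $\frac{\log(1/\epsilon)}{\Delta^2}$ times, classify each coin by majority vote, and report the empirical fraction classified positive, for a total of order $\frac{\log(1/\epsilon)}{\epsilon^2\Delta^2}$ flips. This is an illustrative, assumption-laden sketch of the standard baseline, not the algorithm of this paper; the `naive_estimate` helper, the constants, and the choice to place every coin at the extreme bias $\frac{1}{2}\pm\Delta$ are hypothetical simplifications.

```python
import numpy as np

def naive_estimate(rho, delta, eps, rng=None):
    """Hypothetical non-adaptive baseline (not the paper's algorithm):
    sample ~1/eps^2 coins, flip each ~log(1/eps)/delta^2 times, classify
    each coin by majority vote, and return the fraction classified positive."""
    rng = rng if rng is not None else np.random.default_rng(0)
    num_coins = int(np.ceil(1.0 / eps**2))                        # enough coins for +/- eps accuracy
    flips_per_coin = int(np.ceil(np.log(1.0 / eps) / delta**2))   # majority vote rarely misclassifies
    # Mixture model from the abstract, with each coin at the extreme bias for simplicity:
    # a rho fraction are positive (heads prob. 1/2 + delta), the rest negative (1/2 - delta).
    biases = np.where(rng.random(num_coins) < rho, 0.5 + delta, 0.5 - delta)
    heads = rng.binomial(flips_per_coin, biases)                  # total heads per sampled coin
    return float(np.mean(heads > flips_per_coin / 2))             # empirical fraction of positives

# Example: true rho = 0.3, delta = 0.1, target accuracy eps = 0.05 -> estimate near 0.3.
print(naive_estimate(rho=0.3, delta=0.1, eps=0.05))
```

The adaptive bounds quoted above improve on this baseline's roughly $\frac{\log(1/\epsilon)}{\epsilon^2\Delta^2}$ flips by, loosely, at least a factor of $1/\rho$ when $\rho$ is small, though the exact comparison depends on the regime.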