The universal approximation theorem, in one of its most general versions, states that if we consider only continuous activation functions $\sigma$, then a standard feedforward neural network with one hidden layer can approximate any continuous multivariate function $f$ to within any given approximation threshold $\varepsilon$ if and only if $\sigma$ is non-polynomial. In this paper, we give a direct algebraic proof of the theorem. Furthermore, we explicitly quantify the number of hidden units required for approximation. Specifically, if $X\subseteq \mathbb{R}^n$ is compact, then a neural network with $n$ input units, $m$ output units, and a single hidden layer with $\binom{n+d}{d}$ hidden units (independent of $m$ and $\varepsilon$) can uniformly approximate any polynomial function $f:X \to \mathbb{R}^m$ whose total degree is at most $d$ for each of its $m$ coordinate functions. In the general case where $f$ is any continuous function, we show there exists some $N\in \mathcal{O}(\varepsilon^{-n})$ (independent of $m$) such that $N$ hidden units suffice to approximate $f$. We also show that this uniform approximation property (UAP) still holds even under seemingly strong conditions imposed on the weights. We highlight several consequences: (i) For any $\delta > 0$, the UAP still holds if we restrict all non-bias weights $w$ in the last layer to satisfy $|w| < \delta$. (ii) There exists some $\lambda>0$ (depending only on $f$ and $\sigma$) such that the UAP still holds if we restrict all non-bias weights $w$ in the first layer to satisfy $|w|>\lambda$. (iii) If the non-bias weights in the first layer are \emph{fixed} and randomly chosen from a suitable range, then the UAP holds with probability $1$.
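
For concreteness, a standard feedforward network with $n$ input units, a single hidden layer of $N$ units with activation $\sigma$, and $m$ output units computes a function of the form (the symbols $w_i$, $b_i$, $c_i$, $c_0$ below are illustrative notation, not fixed by the abstract)
\[
\widehat{f}(x) \;=\; c_0 + \sum_{i=1}^{N} c_i\,\sigma\!\big(\langle w_i, x\rangle + b_i\big),
\qquad w_i\in\mathbb{R}^n,\ b_i\in\mathbb{R},\ c_i, c_0\in\mathbb{R}^m,
\]
and the uniform approximation property on a compact $X\subseteq\mathbb{R}^n$ asks that $\sup_{x\in X}\|f(x)-\widehat{f}(x)\|<\varepsilon$. In this notation, the $w_i$ are the non-bias weights of the first layer and the $c_i$ are the non-bias weights of the last layer referred to in the statements above.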