Ill-posed linear inverse problems appear frequently in various signal processing applications. It can be very useful to have theoretical characterizations that quantify the level of ill-posedness for a given inverse problem and the degree of ambiguity that may exist about its solution. Traditional measures of ill-posedness, such as the condition number of a matrix, provide characterizations that are global in nature. While such characterizations can be powerful, they can also fail to provide full insight into situations where certain entries of the solution vector are more or less ambiguous than others. In this work, we derive novel theoretical lower and upper bounds that apply to individual entries of the solution vector and are valid for all potential solution vectors that are nearly data-consistent. These bounds are agnostic to the noise statistics and the specific method used to solve the inverse problem, and are also shown to be tight. In addition, our results lead us to introduce an entrywise version of the traditional condition number, which provides a substantially more nuanced characterization of scenarios where certain elements of the solution vector are less sensitive to perturbations than others. Our results are illustrated in an application to magnetic resonance imaging reconstruction, and we include discussions of practical computation methods for large-scale inverse problems, connections between our new theory and the traditional Cram\'{e}r-Rao bound under statistical modeling assumptions, and potential extensions to cases involving constraints beyond data-consistency.
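For background, and using generic notation $A\mathbf{x}=\mathbf{b}$ for a linear inverse problem (notation introduced here only for illustration, not taken from the main development), the traditional condition number referenced above is the global quantity
\begin{equation*}
  \kappa(A) \;=\; \|A\|_2\,\|A^{\dagger}\|_2 \;=\; \frac{\sigma_{\max}(A)}{\sigma_{\min}(A)},
\end{equation*}
where $\sigma_{\max}(A)$ and $\sigma_{\min}(A)$ denote the largest and smallest singular values of $A$. This quantity bounds the relative perturbation of the solution vector as a whole, e.g., $\|\hat{\mathbf{x}}-\mathbf{x}\|_2/\|\mathbf{x}\|_2 \leq \kappa(A)\,\|\delta\mathbf{b}\|_2/\|\mathbf{b}\|_2$ whenever $A\mathbf{x}=\mathbf{b}$ and $A\hat{\mathbf{x}}=\mathbf{b}+\delta\mathbf{b}$ are both consistent, and therefore provides no information about which individual entries of $\mathbf{x}$ drive this sensitivity; the entrywise quantities introduced in this work are intended to address that gap.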