In this paper, we aim to enhance the performance of our proposed I-FENN approach by focusing on two crucial aspects of its PINN component: the error convergence analysis and the hyperparameter-performance relationship. Building on the I-FENN setup, our methodology relies on a systematic, engineering-oriented numerical analysis guided by the available mathematical theory on the topic. The objectivity of the characterization is achieved through a novel combination of performance metrics that assess how successfully various error measures are minimized, the training efficiency of the optimization process, and the computational effort of training. For the first objective, we investigate in detail the convergence of the PINN training error and the global error with respect to the network size and the training sample size. We demonstrate consistent convergence of both error types, which confirms that the PINN setup and implementation conform to the available convergence theories. For the second objective, we aim to establish a priori knowledge of the hyperparameter choices that favor higher predictive accuracy, lower computational effort, and the lowest risk of converging to trivial solutions. We show that shallow-and-wide networks tend to overestimate the high-frequency content of the strain field and are computationally more demanding in the L-BFGS optimization stage. Deep-and-narrow PINNs, on the other hand, yield higher errors, are computationally slower during the Adam optimization epochs, and are more prone to training failure by converging to trivial solutions. Our analysis yields several outcomes that improve the performance of I-FENN and fill a long-standing gap in the PINN literature regarding the numerical convergence of the network errors. The proposed analysis method and conclusions can be directly extended to other ML applications in science and engineering.
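To make the two-stage training protocol referenced above concrete, the following is a minimal PyTorch sketch of the Adam-then-L-BFGS sequence commonly used to train PINNs. It assumes a generic 1D Poisson-type residual as a stand-in for the actual I-FENN strain-field PDE; the architecture, collocation sample size, and iteration counts are illustrative assumptions, not the paper's configuration.

```python
# Minimal sketch (not the paper's implementation) of two-stage PINN training:
# Adam epochs for coarse descent, then L-BFGS for fine convergence.
import torch

torch.manual_seed(0)

# Hypothetical training sample: 200 collocation points in a 1D domain.
x = torch.linspace(0.0, 1.0, 200).unsqueeze(1).requires_grad_(True)

# A small fully connected network; depth and width are the hyperparameters
# whose effect on error and computational cost the study characterizes.
net = torch.nn.Sequential(
    torch.nn.Linear(1, 32), torch.nn.Tanh(),
    torch.nn.Linear(32, 32), torch.nn.Tanh(),
    torch.nn.Linear(32, 1),
)

def pinn_loss():
    # Placeholder residual: u'' + pi^2 sin(pi x) = 0 with u(0) = u(1) = 0
    # (exact solution u = sin(pi x)), standing in for the actual PDE.
    u = net(x)
    du = torch.autograd.grad(u, x, torch.ones_like(u), create_graph=True)[0]
    d2u = torch.autograd.grad(du, x, torch.ones_like(du), create_graph=True)[0]
    pde = d2u + torch.pi**2 * torch.sin(torch.pi * x)
    bc = net(torch.tensor([[0.0], [1.0]]))  # boundary-condition penalty
    return pde.pow(2).mean() + bc.pow(2).mean()

# Stage 1: Adam epochs.
adam = torch.optim.Adam(net.parameters(), lr=1e-3)
for epoch in range(2000):
    adam.zero_grad()
    loss = pinn_loss()
    loss.backward()
    adam.step()

# Stage 2: L-BFGS refinement; PyTorch's LBFGS requires a closure.
lbfgs = torch.optim.LBFGS(net.parameters(), max_iter=500,
                          line_search_fn="strong_wolfe")

def closure():
    lbfgs.zero_grad()
    loss = pinn_loss()
    loss.backward()
    return loss

lbfgs.step(closure)
print(f"final training loss: {pinn_loss().item():.3e}")
```

Sweeping the width and depth of `net` and the number of collocation points in such a loop, while recording the final loss and wall-clock time of each stage, is one way to reproduce the kind of error-convergence and hyperparameter-cost study described above.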