Algorithm configurators are automated methods for optimising the parameters of an algorithm for a class of problems. We evaluate the performance of a simple random local search configurator (ParamRLS) for tuning the neighbourhood size $k$ of the RLS$_k$ algorithm, measured as the expected number of configuration evaluations required to identify the optimal parameter value. In particular, we analyse the impact of the cutoff time $\kappa$ (the time spent evaluating a configuration on a problem instance), comparing configurations either by the best fitness values found (ParamRLS-F) or by their optimisation times (ParamRLS-T). We consider tuning RLS$_k$ for a variant of the Ridge function class (Ridge*), where the performance of each parameter value does not change during the run, and for the OneMax function class, where longer runs favour smaller values of $k$. We rigorously prove that ParamRLS-F efficiently tunes RLS$_k$ for Ridge* for any cutoff time $\kappa$, whereas ParamRLS-T requires cutoff times that are at least quadratic in the problem size. For OneMax, ParamRLS-F identifies $k=1$ as optimal with linear cutoff times, whereas ParamRLS-T requires cutoff times of at least $\Omega(n \log n)$. For smaller cutoff times, ParamRLS-F identifies that $k>1$ performs better, while ParamRLS-T returns a value of $k$ chosen uniformly at random.
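To make the configuration scenario concrete, the following is a minimal Python sketch (not part of the paper) of RLS$_k$ on OneMax together with a ParamRLS-F-style tuning loop. It assumes a single run per configuration comparison, a $\pm 1$ local mutation of $k$, and ties won by the incumbent; the function names and parameter choices are illustrative simplifications, not the configurator as analysed.

```python
import random


def onemax(x):
    """OneMax fitness: the number of 1-bits in the bit string."""
    return sum(x)


def best_fitness_rls_k(k, n, kappa, fitness=onemax, rng=random):
    """Run RLS_k for kappa iterations on a uniformly random bit string of
    length n and return the best fitness value found within the cutoff time.

    RLS_k flips exactly k distinct bits per iteration and accepts the
    offspring if its fitness is at least as good as the parent's."""
    x = [rng.randint(0, 1) for _ in range(n)]
    best = fitness(x)
    for _ in range(kappa):
        y = x[:]
        for i in rng.sample(range(n), k):  # flip k distinct positions
            y[i] ^= 1
        if fitness(y) >= fitness(x):
            x = y
            best = max(best, fitness(x))
    return best


def paramrls_f(n, kappa, k_max, comparisons, rng=random):
    """Sketch of a ParamRLS-F-style loop: random local search over the
    parameter k in {1, ..., k_max}.

    Each comparison runs the incumbent and a neighbouring challenger for
    kappa iterations each; the challenger replaces the incumbent only if
    it reaches a strictly better best fitness (ties kept by the incumbent,
    a simplification of the actual tie-breaking rule)."""
    k = rng.randint(1, k_max)  # uniformly random initial parameter value
    for _ in range(comparisons):
        step = rng.choice([-1, 1])  # local mutation: move to a neighbour
        k_new = min(k_max, max(1, k + step))
        if best_fitness_rls_k(k_new, n, kappa) > best_fitness_rls_k(k, n, kappa):
            k = k_new
    return k


if __name__ == "__main__":
    # With a linear cutoff time (here kappa = 2n, chosen for illustration),
    # the returned value should tend towards k = 1 on OneMax.
    print(paramrls_f(n=100, kappa=200, k_max=5, comparisons=50))
```

Lowering `kappa` in this sketch illustrates the abstract's final claim: within short runs, larger neighbourhood sizes reach higher best fitness values, so a fitness-based comparison can favour $k>1$, whereas a time-based comparison is uninformative whenever no configuration finds the optimum within the cutoff time.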