In physical Human--Robot Interaction (pHRI) with grippers, humans and robots may contribute simultaneously to an action, so their commands must be combined. Control may be handed over from one agent to the other within certain limits, or both input commands may be merged according to some criterion. The Assist-As-Needed (AAN) paradigm focuses on this second approach: the controller is expected to provide the minimum assistance that users require. Some AAN systems rely on predicting human intention to adjust their actions; however, when prediction is unreliable, reactive AAN systems may instead weight the input commands into an emergent one. This paper proposes a novel reactive AAN control system for a robot gripper in which input commands are weighted by their respective local performances. Thus, rather than minimizing tracking errors or deviations from expected velocities, humans receive more assistance according to their needs. The system has been tested using a gripper attached to a sensitive robot arm, which provides the evaluation parameters. Tests consisted of completing a mid-air planar path with both arms. After the robot gripped the person's forearm, the path and the current position of the robot were displayed on a screen to provide feedback to the human. The proposed controller has been compared against performance without assistance and against an impedance controller for benchmarking. A statistical analysis of the results shows that global performance improved and tracking errors decreased for the ten volunteers with the proposed controller. Moreover, unlike impedance control, the proposed controller does not significantly affect exerted forces, command variation, or disagreement, measured as the angular difference between the human and the output command. These results support that the proposed control scheme fits the AAN paradigm, although future work will require further tests in more complex environments and tasks.
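The abstract does not state the exact weighting law; as an illustration only, a minimal sketch of one possible performance-weighted blend could take the form below, where the symbols $\mathbf{u}_h$, $\mathbf{u}_r$, $p_h$, and $p_r$ are assumed names for the human command, the assistive command, and their local performance measures, not the paper's notation:
\[
\mathbf{u} = w_h\,\mathbf{u}_h + (1 - w_h)\,\mathbf{u}_r,
\qquad
w_h = \frac{p_h}{p_h + p_r},
\]
so that a human whose local performance $p_h$ drops relative to $p_r$ receives a proportionally larger share of the assistive command in the emergent output $\mathbf{u}$.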