To address the limited gestural capabilities of physical robots, researchers in Virtual, Augmented, and Mixed Reality Human-Robot Interaction (VAM-HRI) have been using augmented-reality visualizations that increase robot expressivity and improve user perception (e.g., social presence). While a multitude of virtual robot deictic gestures (e.g., pointing to an object) have been implemented to improve interactions within VAM-HRI, such systems are often reported to trade off functional against social user perceptions of robots, creating a need for a unified approach that considers both attributes. We performed a literature analysis that identified factors noted to significantly influence either user perception or task efficiency, and we propose a set of design considerations and recommendations that address those factors by combining anthropomorphic and non-anthropomorphic virtual gestures based on the motivation of the interaction, the visibility of the target and robot, the salience of the target, and the distance between the target and robot. These recommendations provide the VAM-HRI community with starting points for selecting appropriate gesture types across a wide range of interaction contexts.