This paper addresses the challenge of enabling a single robot to effectively assist multiple humans in decision-making for task-planning domains. We introduce a framework designed to enhance overall team performance by considering both human expertise in making optimal decisions and the robot's influence on human decision-making. Our model integrates these factors within the task-planning domain, formulating the problem as a partially observable Markov decision process (POMDP) and treating expertise and influence as unobservable components of the system state. To solve for the robot's actions, we propose an efficient Attention-Switching policy. This policy exploits the inherent structure of such systems by solving multiple smaller POMDPs to generate heuristics for prioritizing interactions with different human teammates, thereby reducing the state space and improving scalability. Our empirical results on a simulated kit fulfillment task demonstrate improved team performance when the robot's policy accounts for both expertise and influence. This research represents a significant step forward in adaptive robot assistance, paving the way for cost-effective integration into small- and mid-scale industries, where substantial investments in robotic infrastructure may not be economically viable.
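To make the Attention-Switching idea concrete, the sketch below shows one plausible reading of the decomposition described above: each human teammate is modeled by a small sub-POMDP whose hidden state captures expertise and influence, and the robot switches attention to the teammate with the highest heuristic value. This is a minimal illustration, not the paper's implementation; the class and function names (HumanSubPOMDP, heuristic_value, attention_switching_action), the use of a short-horizon lookahead as the per-human heuristic, and the tensor shapes are all assumptions.

```python
import numpy as np

class HumanSubPOMDP:
    """Hypothetical per-human sub-POMDP over hidden (expertise, influence) states.

    T[a][s, s'] = P(s' | s, a), O[a][s', o] = P(o | s', a), R[a][s] = reward.
    """

    def __init__(self, T, O, R, discount=0.95):
        self.T, self.O, self.R, self.gamma = T, O, R, discount
        self.n_actions = len(T)

    def belief_update(self, belief, action, observation):
        # Standard Bayes filter: b'(s') ∝ O(o | s', a) * Σ_s T(s' | s, a) b(s)
        predicted = self.T[action].T @ belief
        new_belief = self.O[action][:, observation] * predicted
        return new_belief / new_belief.sum()

    def heuristic_value(self, belief, horizon=3):
        # Cheap short-horizon value estimate standing in for solving the small
        # POMDP exactly (e.g., a point-based or QMDP-style solver could be used).
        if horizon == 0:
            return 0.0
        best = -np.inf
        for a in range(self.n_actions):
            expected_reward = belief @ self.R[a]
            next_belief = self.T[a].T @ belief  # ignore observation branching
            best = max(best,
                       expected_reward
                       + self.gamma * self.heuristic_value(next_belief, horizon - 1))
        return best


def attention_switching_action(sub_pomdps, beliefs):
    """Choose which teammate to interact with by comparing per-human heuristics."""
    scores = [m.heuristic_value(b) for m, b in zip(sub_pomdps, beliefs)]
    return int(np.argmax(scores))  # index of the human to attend to next
```

Under this reading, the robot maintains one belief per teammate, updates it after each interaction via `belief_update`, and repeatedly calls `attention_switching_action` to decide whom to assist, rather than planning over the full joint state of all humans at once.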