The software of robotic assistants needs to be verified to ensure its safety and functional correctness. Testing in simulation allows a high degree of realism in the verification. However, generating tests that cover both interesting foreseen and unforeseen scenarios in human-robot interaction (HRI) tasks, while executing most of the code, remains a challenge. We propose the use of belief-desire-intention (BDI) agents in the test environment to increase the level of realism and human-like stimulation of the simulated robot. In this way, artificial intelligence, in the form of agent theory, can be exploited for more intelligent test generation. An automated testbench was implemented in Robot Operating System (ROS) and Gazebo for a simulation of a cooperative table-assembly task between a humanoid robot and a person. Requirements were verified for this task, and some unexpected design issues were discovered, leading to possible code improvements. Our results highlight the practicality of BDI agents for automatically generating valid, human-like tests that achieve high code coverage, compared with hand-written directed tests, pseudorandom generation, and other variants of model-based test generation. Furthermore, BDI agents allow combined behaviours of the HRI system to be covered more easily than by writing temporal logic properties for model checking.