Abstract: Robotic manipulation involves actions in which contacts occur between the robot and the objects. In this scope, the availability of physics-based engines allows motion planners to take into account the dynamics between rigid bodies, which is necessary for planning this type of action. However, physics-based motion planning is computationally intensive, due to the high dimensionality of the state space and the need for a small integration step to obtain accurate solutions. Moreover, manipulation actions change the environment and condition subsequent actions and motions. To cope with this issue, representing manipulation actions with ontologies enables a semantic-based inference process that alleviates the computational cost of motion planning. This paper proposes a manipulation planning framework in which physics-based motion planning is enhanced with ontological knowledge representation and reasoning. The proposal has been implemented; it is illustrated and validated with a simple example, and its use in grasping tasks in cluttered environments is currently under development.