A Shared-control Teleoperation Architecture for Nonprehensile Object Transportation

M. Selvaggio, J. Cacace, C. Pacchierotti, F. Ruggiero and P. Robuffo Giordano, “A shared-control teleoperation architecture for nonprehensile object transportation”, IEEE Transactions on Robotics (TRO), 2021.

This article proposes a shared-control teleoperation architecture for robot manipulators transporting an object on a tray.

Unlike many existing studies on remotely operated robots with firm grasping capabilities, we consider the case in which, in principle, the object can break contact with the robot end-effector.

The proposed shared-control approach automatically regulates the remote robot motion commanded by the user and the end-effector orientation to prevent the object from sliding over the tray.

Furthermore, the human operator is provided with haptic cues informing about the discrepancy between the commanded and executed robot motion, which assist the operator throughout the task execution.

We carried out trajectory-tracking experiments with an autonomous 7-degree-of-freedom (DoF) manipulator and compared the results obtained with the proposed approach against two alternative control schemes (i.e., constant tray orientation and no motion adjustment).

We also carried out a human-subjects study involving eighteen participants, in which a 3-DoF haptic device was used to teleoperate the robot linear motion and display haptic cues to the operator.

In all experiments, the results clearly show that our control approach outperforms the other solutions in terms of sliding prevention, robustness, command tracking, and user preference.
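The sliding-prevention idea can be illustrated with a simple Coulomb-friction sketch. Assuming a horizontal tray and a quasi-static model (both simplifications: the paper's controller also reorients the end-effector rather than only scaling the command), an object of any mass stays stuck while the horizontal acceleration satisfies |a| ≤ μg. All function names and parameters below are illustrative, not the paper's implementation:

```python
import numpy as np

G = 9.81  # gravitational acceleration [m/s^2]

def max_safe_acceleration(mu):
    """Coulomb friction on a horizontal tray: no sliding while |a| <= mu * g."""
    return mu * G

def scale_command(a_cmd, mu, margin=0.9):
    """Sketch of the motion-regulation idea: uniformly scale the commanded
    horizontal acceleration so it stays inside the friction constraint,
    with a safety margin. Returns the (possibly scaled) acceleration."""
    a_max = margin * max_safe_acceleration(mu)
    a_cmd = np.asarray(a_cmd, dtype=float)
    norm = np.linalg.norm(a_cmd)
    if norm <= a_max:
        return a_cmd  # command already feasible, pass through unchanged
    return a_cmd * (a_max / norm)  # shrink magnitude, keep direction
```

The discrepancy between `a_cmd` and the scaled command is exactly the kind of quantity that could be rendered to the operator as a haptic cue.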

TSDF++: A Multi-Object Formulation for Dynamic Object Tracking and Reconstruction

M. Grinvald, F. Tombari, R. Siegwart and J. Nieto, “TSDF++: A multi-object formulation for dynamic object tracking and reconstruction”, IEEE International Conference on Robotics and Automation (ICRA), 2021.

The ability to simultaneously track and reconstruct multiple objects moving in the scene is of the utmost importance for robotic tasks such as autonomous navigation and interaction.

Virtually all previous attempts to map multiple dynamic objects have converged on storing individual objects in separate reconstruction volumes and tracking the relative poses between them. While simple and intuitive, such a formulation does not scale well with the number of objects in the scene and requires an explicit occlusion-handling strategy.

In contrast, we propose a map representation that allows maintaining a single volume for the entire scene and all the objects therein. To this end, we introduce a novel multi-object TSDF formulation that can encode multiple object surfaces at any given location in the map.

In scenarios with multiple dynamic objects, our representation maintains accurate surface reconstructions even while surfaces are temporarily occluded by other objects moving nearby.

We evaluate the proposed TSDF++ formulation on a public synthetic dataset and demonstrate its ability to preserve reconstructions of occluded surfaces when compared to the standard TSDF map representation.
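A minimal sketch of the multi-object idea: instead of one signed-distance value per voxel, each voxel holds a per-object entry, fused with the standard weighted running average. The class and method names (`MultiObjectTSDF`, `integrate`, `query`) and the dict-of-dicts layout are illustrative assumptions, not the paper's actual data structure:

```python
from collections import defaultdict

class MultiObjectTSDF:
    """Sketch: each voxel stores per-object (sdf, weight) entries, so several
    object surfaces can coexist at the same map location."""

    def __init__(self, voxel_size=0.05):
        self.voxel_size = voxel_size
        # voxel index (i, j, k) -> {object_id: (sdf, weight)}
        self.voxels = defaultdict(dict)

    def _key(self, point):
        return tuple(int(c // self.voxel_size) for c in point)

    def integrate(self, point, object_id, sdf, weight=1.0):
        """Fuse a new signed-distance observation with the weighted average."""
        old_sdf, old_w = self.voxels[self._key(point)].get(object_id, (0.0, 0.0))
        new_w = old_w + weight
        new_sdf = (old_sdf * old_w + sdf * weight) / new_w
        self.voxels[self._key(point)][object_id] = (new_sdf, new_w)

    def query(self, point, object_id):
        """Return the fused signed distance for one object, or None if unseen."""
        entry = self.voxels[self._key(point)].get(object_id)
        return entry[0] if entry else None
```

Because entries for different objects never overwrite each other, an occluded object's surface survives at a voxel even while another object's surface is being integrated at the same location.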

Harmony at ERF2021

An introduction to Project Harmony presented at the European Robotics Forum 2021.

Where to go next: Learning a Subgoal Recommendation Policy for Navigation in Dynamic Environments

B. Brito, M. Everett, J. P. How and J. Alonso-Mora, “Where to go next: Learning a subgoal recommendation policy for navigation in dynamic environments”, IEEE Robotics and Automation Letters (RA-L), 2021.

Robotic navigation in environments shared with other robots or humans remains challenging because the intentions of the surrounding agents are not directly observable and the environment conditions are continuously changing.

Local trajectory optimization methods, such as model predictive control (MPC), can deal with those changes but require global guidance, which is not trivial to obtain in crowded scenarios.

This paper proposes to learn, via deep reinforcement learning (RL), an interaction-aware policy that provides long-term guidance to the local planner. In particular, in simulations with cooperative and non-cooperative agents, we train a deep network to recommend a subgoal to the MPC planner.

The recommended subgoal is expected to help the robot make progress towards its goal while accounting for the expected interaction with other agents. Based on the recommended subgoal, the MPC planner then optimizes the robot's inputs subject to its kinodynamic and collision-avoidance constraints.

Our approach is shown to substantially improve navigation performance in terms of the number of collisions compared to prior MPC frameworks, and in terms of both travel time and number of collisions compared to deep-RL methods, across cooperative, competitive, and mixed multi-agent scenarios.
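As a rough illustration of the two-stage pipeline, the sketch below replaces the learned policy with a hand-crafted heuristic (pick a point a fixed lookahead toward the goal, nudged away from the nearest obstacle) and replaces the MPC planner with a single velocity-limited step. Every function name, parameter, and the heuristic itself are assumptions for illustration only; the paper's recommender is a trained deep network and its planner is a full constrained MPC:

```python
import numpy as np

def recommend_subgoal(robot_pos, goal, obstacles, lookahead=2.0, clearance=1.0):
    """Placeholder for the learned policy: subgoal a fixed lookahead toward
    the goal, pushed out to a minimum clearance from the nearest obstacle."""
    direction = goal - robot_pos
    dist = np.linalg.norm(direction)
    if dist < lookahead:
        return goal
    subgoal = robot_pos + lookahead * direction / dist
    if obstacles:
        nearest = min(obstacles, key=lambda o: np.linalg.norm(o - subgoal))
        away = subgoal - nearest
        d = np.linalg.norm(away)
        if 1e-9 < d < clearance:
            subgoal = subgoal + (clearance - d) * away / d  # restore clearance
    return subgoal

def mpc_step(robot_pos, subgoal, v_max=0.5, dt=0.1):
    """Placeholder for the MPC planner: one velocity-limited step toward
    the subgoal (a real planner would optimize over a horizon with
    kinodynamic and collision-avoidance constraints)."""
    direction = subgoal - robot_pos
    dist = np.linalg.norm(direction)
    if dist < 1e-9:
        return robot_pos
    step = min(v_max * dt, dist)
    return robot_pos + step * direction / dist
```

The division of labor matches the paper's structure: the recommender supplies the long-term, interaction-aware guidance, while the local planner handles short-term feasibility.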
