Representations of scenes and objects play a crucial role in robot manipulation. Traditional methods represent scenes and objects using explicit geometric primitives, such as 3D meshes, 3D point clouds, and 3D voxels. These geometric representations have been widely explored in perception, grasp planning, and motion planning for manipulation. However, they still struggle with challenging manipulation scenarios involving unseen objects, articulated objects, or deformable objects. Recently, neural representations of scenes and objects, such as neural radiance fields and deep signed distance fields, have been introduced and have demonstrated superior performance on visual rendering and shape reconstruction tasks. Ideally, neural representations can handle arbitrary scenes and objects by encoding them into neural networks. However, how to learn these representations from data and how to utilize them for robot manipulation remain open questions. This workshop presents a great opportunity to bring together experts who are exploring neural representations for robot manipulation, and to provide an arena for stimulating discussion and debate on neural representation learning for robot manipulation. Specific questions we aim to address include:

  • What neural representations are useful for robot manipulation?
  • What should a neural representation capture (geometry, visual appearance, dynamics, etc.) for robot manipulation?
  • How can neural representations be learned and used for robot manipulation?
  • How should progress in neural representation learning for robot manipulation be benchmarked?

Confirmed Speakers

Accepted Presentations

Please feel free to send us your queries via email at neurlrm.workshop@gmail.com.