Planetary exploration aims to advance science by revealing new information about the geology and resource potential of other worlds. Robotic systems are crucial for acquiring extraterrestrial samples, whether for in situ analysis or for return to Earth. Such discoveries are also essential for future in situ resource utilization, for example extracting hydrogen and oxygen to generate rocket fuel locally and to operate life support systems.
This would significantly reduce the payload needed for the initial launch from Earth while lessening the need for additional resupply flights. An increasing amount of work is being devoted to sample return missions that could provide this information. Lunar materials have recently been brought back to Earth, and NASA has selected companies to collect lunar rocks in support of the Artemis program. Another proposal, the Mars Sample Return mission, would use an ESA rover to retrieve samples collected by NASA’s Perseverance rover.
Unfortunately, teleoperation is inefficient due to transmission lag, which limits the amount of scientific data rovers can collect over the course of a mission. As missions become more sophisticated, planetary rovers must therefore become increasingly autonomous. Rovers equipped with robotic arms have many potential uses in extraterrestrial environments.
Such rovers could position scientific instruments to examine regions of interest up close, and could also perform assembly and maintenance work by engaging with various technical tools and equipment. Many subroutines involved in these tasks require an item or tool to be held securely before it is used. Robotic grasping is therefore a fundamental capability for flexible mobile manipulation: rovers must be able to grip a variety of items that differ in geometry, appearance, and mechanical characteristics.
A recent publication from researchers at the University of Luxembourg shows that vision-based robotic grasping in lunar environments can be achieved with end-to-end deep reinforcement learning. The main objective of the work is to learn end-to-end policies for robotic grasping in unstructured lunar environments with varied rock types, rugged terrain, and harsh lighting. Because of the high costs and safety requirements of robotic space systems, training agents directly in extraterrestrial environments is impractical. The team’s solution is to train in simulation and transfer the learned policies to a real robot.
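The paper’s exact randomization settings are not reproduced here; as a rough illustration, domain randomization for such a simulator typically samples a fresh scene configuration per training episode. All parameter names and ranges below are hypothetical, chosen only to convey the idea:

```python
import random

def randomize_lunar_scene(rng: random.Random) -> dict:
    """Sample one randomized scene configuration per training episode.

    Parameter names and ranges are illustrative guesses, not the
    paper's actual settings.
    """
    return {
        # Harsh, low-angle lighting typical of the lunar surface.
        "sun_elevation_deg": rng.uniform(5.0, 60.0),
        "sun_intensity": rng.uniform(0.5, 2.0),
        # Procedurally generated terrain and rocks.
        "terrain_roughness": rng.uniform(0.0, 1.0),
        "rock_count": rng.randint(1, 8),
        "rock_scale_m": rng.uniform(0.02, 0.10),
        # Perturb camera placement to avoid overfitting to one viewpoint.
        "camera_jitter_m": rng.uniform(0.0, 0.05),
    }

rng = random.Random(42)
configs = [randomize_lunar_scene(rng) for _ in range(1000)]
```

Training across many such randomized scenes encourages the policy to rely on cues that survive the variation, which is what makes zero-shot transfer to the real world plausible.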
The main contributions of this work are:
• A simulation of the Moon that, through realistic physics, physics-based rendering, and extensive domain randomization with procedurally generated datasets covering the wide range of lunar conditions, enables learning mobile manipulation skills that transfer to the real world.
• A new method for using multi-channel 3D octree visual observations in end-to-end deep reinforcement learning. Octrees represent the 3D world efficiently, and an octree-based convolutional neural network extracts abstract features that allow agents to generalize across spatial positions and orientations.
• A demonstration of learning robotic grasping in a realistic simulated environment of the Moon, followed by zero-shot sim-to-real transfer to a real robot in a lunar-analogue facility.
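To make the octree idea concrete, the sketch below builds a simple octree over a point cloud by recursively splitting space into eight octants and skipping empty ones, which is what makes the representation sparse and efficient. This is a minimal pure-NumPy illustration of the data structure, not the paper’s actual encoder:

```python
import numpy as np

def build_octree(points, origin, size, depth):
    """Recursively voxelize an (N, 3) point cloud into an octree.

    Returns a nested dict; each node stores its point count. Minimal
    illustration only -- real octree CNNs use optimized libraries.
    """
    node = {"origin": origin, "size": size, "count": len(points)}
    if depth == 0 or len(points) <= 1:
        return node
    half = size / 2.0
    centre = origin + half
    # Assign each point to one of 8 octants by comparing to the centre.
    octant = (points >= centre).astype(int)              # (N, 3) of 0/1
    index = octant[:, 0] * 4 + octant[:, 1] * 2 + octant[:, 2]
    children = {}
    for i in range(8):
        mask = index == i
        if not mask.any():
            continue  # empty octants are skipped -> sparse representation
        offset = np.array([(i >> 2) & 1, (i >> 1) & 1, i & 1]) * half
        children[i] = build_octree(points[mask], origin + offset, half, depth - 1)
    node["children"] = children
    return node

pts = np.random.default_rng(0).uniform(0.0, 1.0, size=(256, 3))
tree = build_octree(pts, origin=np.zeros(3), size=1.0, depth=3)
```

An octree-based CNN then convolves features only over the occupied nodes at each depth, rather than over a dense voxel grid.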
Experimental analysis shows that, for end-to-end learning of robotic grasping, 3D visual observations in the form of octrees perform better than image-based observations. The authors attribute this to 3D convolutions generalizing more efficiently over spatial positions and orientations, whereas 2D convolutions generalize only over the planar coordinates of the image. Another advantage of 3D observations is that they are invariant to the pose of the camera, which facilitates transferring learned policies to new systems or application domains.
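The camera-pose invariance mentioned above comes from expressing the 3D observation in a fixed world (or robot-base) frame rather than the camera frame. A minimal NumPy sketch, with made-up extrinsics:

```python
import numpy as np

def to_world_frame(points_cam: np.ndarray, T_world_cam: np.ndarray) -> np.ndarray:
    """Apply a 4x4 homogeneous transform to (N, 3) points.

    Expressing observations in a fixed world frame makes them independent
    of where the camera happens to be mounted.
    """
    homo = np.hstack([points_cam, np.ones((len(points_cam), 1))])
    return (T_world_cam @ homo.T).T[:, :3]

# Two hypothetical camera poses (identity rotation, different positions).
T1, T2 = np.eye(4), np.eye(4)
T1[:3, 3] = [0.0, 0.0, 0.5]
T2[:3, 3] = [0.2, -0.1, 0.6]

# The same world point, as each camera would observe it
# (world -> camera uses the inverse extrinsics).
p_world = np.array([[0.3, 0.1, 0.0]])
p_cam1 = to_world_frame(p_world, np.linalg.inv(T1))
p_cam2 = to_world_frame(p_world, np.linalg.inv(T2))

# Mapping each observation back into the world frame recovers the same
# point, regardless of camera pose.
w1 = to_world_frame(p_cam1, T1)
w2 = to_world_frame(p_cam2, T2)
```

A 2D image, by contrast, changes appearance whenever the camera moves, so an image-based policy implicitly bakes in one viewpoint.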
In this paper, researchers from the University of Luxembourg presented an end-to-end deep reinforcement learning method for robotic grasping on the Moon. They examined the use of 3D octree observations and assessed their effectiveness compared to that of 2D images. Demonstrating zero-shot sim-to-real transfer to a real robot in a lunar analog setup, they also examined the effects of applying domain randomization under lunar conditions. Despite the many remaining challenges, the team believes deep reinforcement learning is a promising technique for teaching space robots how to manipulate objects. Improving learning stability under varied conditions is one of the key steps before these techniques can be used reliably for a wide variety of applications in space robotics.
This article is written as a research summary by Marktechpost Staff based on the research paper 'Learning to Grasp on the Moon from 3D Octree Observations with Deep Reinforcement Learning'. All credit for this research goes to the researchers on this project. Check out the paper and GitHub link.
Nitish is a computer science undergraduate with a keen interest in the field of deep learning. He has carried out various projects related to deep learning and closely follows the new advances taking place in the field.