
Optimal Motion Planning in 3D Workspaces: Integrating a Panel-Method-Based Motion Planner with Continuous Deep Reinforcement Learning


dc.contributor.author Μαλλιαρόπουλος Κατσίμης, Μάριος el
dc.contributor.author Malliaropoulos Katsimis, Marios en
dc.date.accessioned 2023-08-23T07:22:13Z
dc.date.available 2023-08-23T07:22:13Z
dc.identifier.uri https://dspace.lib.ntua.gr/xmlui/handle/123456789/57923
dc.identifier.uri http://dx.doi.org/10.26240/heal.ntua.25620
dc.rights Attribution - NonCommercial - ShareAlike 3.0 Greece (CC BY-NC-SA 3.0 GR)
dc.subject Robotics en
dc.subject Optimal Control Systems en
dc.subject 3D Motion Planning en
dc.subject Deep Reinforcement Learning en
dc.subject Fluid Mechanics en
dc.subject Ρομποτική el
dc.subject 3D Σχεδιασμός Πορείας el
dc.subject Βέλτιστος Έλεγχος Συστημάτων el
dc.subject Ενισχυτική Μάθηση el
dc.subject Μηχανική των Ρευστών el
dc.title Optimal Motion Planning in 3D Workspaces: Integrating a Panel-Method-Based Motion Planner with Continuous Deep Reinforcement Learning en
dc.contributor.department Control Systems Lab el
heal.type bachelorThesis
heal.classification Mechanical Engineering, Robotics en
heal.language en
heal.access free
heal.recordProvider ntua el
heal.publicationDate 2023-07-01
heal.abstract This diploma thesis proposes a novel, provably correct reactive method for optimal three-dimensional motion planning in complex environments. By combining fluid flow equations, optimal control theory, and deep reinforcement learning techniques, the study offers a unique interdisciplinary approach that merges strengths from several scientific fields. The method models the 3D motion planning problem by solving for the streamlines of a potential fluid flow, enabling the proper handling of various terrain types. This is achieved by discretizing the geometry into surface panels, while the safety criteria are enforced through a set of Neumann boundary conditions. The proposed fluid-based planner guarantees a continuous-time, natural-looking, stable, and safe solution to the motion planning problem with Artificial Harmonic Potential Fields (AHPFs). Furthermore, the thesis presents a model-based reinforcement learning algorithm for learning the optimal nonlinear controller in continuous time and a continuous action space with respect to an infinite-horizon cost function. The algorithm employs an actor-critic scheme based on policy iteration to successively approximate the optimal solution of the Hamilton-Jacobi-Bellman equation. In this way, the optimal robot motion is obtained by iteratively updating the fluid-flow parameters (i.e., the controller parameters) in a deterministic manner. The proposed method demonstrates fast convergence and outperforms widely used planners such as RRT*, highlighting its contribution to the field of 3D optimal motion planning. en
heal.advisorName Κυριακόπουλος, Κώστας el
heal.advisorName Kyriakopoulos, Kostas en
heal.committeeMemberName Παπαδόπουλος, Ευάγγελος el
heal.committeeMemberName Αντωνιάδης, Ιωάννης el
heal.academicPublisher Εθνικό Μετσόβιο Πολυτεχνείο. Σχολή Μηχανολόγων Μηχανικών el
heal.academicPublisherID ntua
heal.fullTextAvailability false
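
The abstract describes a panel-method construction: obstacle surfaces are discretized into panels, source strengths are chosen so that a Neumann (no-penetration) boundary condition holds on every panel, and the robot follows a streamline of the resulting harmonic flow toward the goal. The Python sketch below illustrates this idea under strong simplifying assumptions and is not the thesis implementation: each panel is collapsed to a point source at its centroid, the diagonal self-influence uses the standard constant-source-panel value of 1/2, and the onset flow is a uniform field aimed at the goal. All function and parameter names (solve_panel_strengths, flow_velocity, follow_streamline, v_inf, etc.) are illustrative.

```python
import numpy as np

def solve_panel_strengths(centroids, normals, areas, v_inf):
    """Solve for per-panel source strengths so that the Neumann
    (no-penetration) condition n_i . v(c_i) = 0 holds at every panel
    centroid c_i.  Simplification: off-diagonal influences use a point
    source at each centroid; the diagonal uses the constant-source-panel
    self-influence of 1/2."""
    n = len(centroids)
    A = np.empty((n, n))
    for i in range(n):
        for j in range(n):
            if i == j:
                A[i, j] = 0.5  # self-induced normal velocity of a source panel
            else:
                r = centroids[i] - centroids[j]
                # normal velocity at panel i induced by a unit-strength source at panel j
                A[i, j] = areas[j] * (normals[i] @ r) / (4.0 * np.pi * np.linalg.norm(r) ** 3)
    b = -normals @ v_inf              # cancel the onset flow's normal component
    return np.linalg.solve(A, b)      # source strengths (per unit area)

def flow_velocity(x, centroids, areas, sigma, v_inf):
    """Potential-flow velocity at a free-space point x: onset flow plus
    the field induced by all panel sources."""
    v = np.asarray(v_inf, dtype=float).copy()
    for c, a, s in zip(centroids, areas, sigma):
        r = x - c
        v += s * a * r / (4.0 * np.pi * np.linalg.norm(r) ** 3)
    return v

def follow_streamline(x0, goal, centroids, areas, sigma, v_inf,
                      step=0.01, max_steps=5000, tol=0.05):
    """Integrate a streamline from x0; in this scheme the streamline
    itself serves as the collision-free reference path for the robot."""
    path = [np.asarray(x0, dtype=float)]
    for _ in range(max_steps):
        v = flow_velocity(path[-1], centroids, areas, sigma, v_inf)
        speed = np.linalg.norm(v)
        if speed < 1e-9:
            break                     # stagnation point; stop integrating
        path.append(path[-1] + step * v / speed)
        if np.linalg.norm(path[-1] - goal) < tol:
            break                     # goal region reached
    return np.array(path)
```

Under these assumptions, calling sigma = solve_panel_strengths(centroids, normals, areas, v_inf) and then follow_streamline(start, goal, centroids, areas, sigma, v_inf) traces a path that respects the no-penetration condition. In the thesis, the flow parameters are additionally updated by the actor-critic policy-iteration scheme rather than held fixed, which is what makes the resulting motion optimal with respect to the infinite-horizon cost.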

