Stable and unstable equilibrium points in no-regret learning and noisy models

dc.contributor.author Γιάννου, Αγγελική
dc.date.accessioned 2022-02-01T15:33:37Z
dc.date.available 2022-02-01T15:33:37Z
dc.identifier.uri https://dspace.lib.ntua.gr/xmlui/handle/123456789/54509
dc.identifier.uri http://dx.doi.org/10.26240/heal.ntua.22207
dc.rights Default License
dc.subject Online Learning, Follow the Regularized Leader, Game Theory, Multi-agent Learning, Bandits en
dc.subject Game Theory, Follow the Regularized Leader, Multi-agent Learning, Bandits el
dc.title Stable and unstable equilibrium points in no-regret learning and noisy models el
dc.contributor.department Corelab el
heal.type bachelorThesis
heal.classification Game theory el
heal.language el
heal.language en
heal.access free
heal.recordProvider ntua el
heal.publicationDate 2021-06-06
heal.abstract In this diploma thesis, we examine the Nash equilibrium convergence properties of no-regret learning in general N-player games. Despite the importance and widespread applications of no-regret algorithms, their long-run behavior in multi-agent environments is still far from understood, and most of the literature has, by necessity, focused on specific classes of games (typically zero-sum or congestion games). Instead of focusing on a fixed class of games, we take a structural approach and examine different classes of equilibria in generic games. For concreteness, we focus on the archetypal "follow the regularized leader" (FTRL) class of algorithms, and we consider the full spectrum of information uncertainty that the players may encounter – from noisy, oracle-based feedback to bandit, payoff-based information. In this general context, we establish a comprehensive equivalence between the stability of a Nash equilibrium and its support: a Nash equilibrium is stable and attracting with arbitrarily high probability if and only if it is strict (i.e., each equilibrium strategy has a unique best response). This result extends existing continuous-time versions of the "folk theorem" of evolutionary game theory to a bona fide discrete-time learning setting, and provides an important link between the literature on multi-armed bandits and the equilibrium refinement literature. en
heal.advisorName Fotakis, Dimitris
heal.committeeMemberName Pagourtzis, Aris
heal.committeeMemberName Mertikopoulos, Panayotis
heal.committeeMemberName Fotakis, Dimitris
heal.academicPublisher School of Electrical and Computer Engineering el
heal.academicPublisherID ntua
heal.numberOfPages 63
heal.fullTextAvailability false
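
The abstract above refers to the "follow the regularized leader" (FTRL) family run with noisy, payoff-based feedback. As an illustrative aside (not part of the record or the thesis itself), the sketch below shows one standard FTRL instance, exponential weights (FTRL with an entropic regularizer), on a hypothetical 2x2 coordination game with a strict Nash equilibrium; the payoff matrices, step size, and noise level are assumptions chosen only for illustration.

```python
import numpy as np

# Illustrative sketch (not from the thesis): exponential weights, i.e. FTRL with an
# entropic regularizer, for two players in a hypothetical 2x2 game whose action
# profile (0, 0) is a strict Nash equilibrium. Payoff observations are corrupted by
# additive Gaussian noise, mimicking noisy, oracle-based feedback.

rng = np.random.default_rng(0)

# Assumed payoff matrices: A[i, j] is player 1's payoff and B[i, j] is player 2's
# when player 1 plays i and player 2 plays j.
A = np.array([[3.0, 1.0],
              [0.0, 2.0]])
B = np.array([[3.0, 0.0],
              [1.0, 2.0]])

eta = 0.1          # step size (assumption)
noise = 0.5        # payoff-noise level (assumption)
y1 = np.zeros(2)   # cumulative noisy payoff estimates, player 1
y2 = np.zeros(2)   # cumulative noisy payoff estimates, player 2

def logit(y, eta):
    """Entropic FTRL / exponential-weights choice map (softmax of eta * scores)."""
    z = np.exp(eta * (y - y.max()))
    return z / z.sum()

for t in range(5000):
    x1, x2 = logit(y1, eta), logit(y2, eta)
    # Noisy payoff vectors: expected payoff of each pure action against the
    # opponent's current mixed strategy, plus Gaussian noise.
    v1 = A @ x2 + noise * rng.standard_normal(2)
    v2 = B.T @ x1 + noise * rng.standard_normal(2)
    y1 += v1
    y2 += v2

print("Player 1 mixed strategy:", np.round(logit(y1, eta), 3))
print("Player 2 mixed strategy:", np.round(logit(y2, eta), 3))
# In runs like this one, both strategies concentrate on action 0, a strict
# equilibrium, consistent with the stability result described in the abstract.
```

The choice of exponential weights here is only for concreteness: it is the best-known member of the FTRL family, and any other regularizer would fit the same template by swapping out the choice map.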

