Deep reinforcement learning for tail latency regulation in co-located applications through cooperative core and cache allocation

dc.contributor.author Κιμωνίδης, Αλέξανδρος
dc.contributor.author Kimonidis, Alexandros en
dc.date.accessioned 2022-03-02T10:18:56Z
dc.date.available 2022-03-02T10:18:56Z
dc.identifier.uri https://dspace.lib.ntua.gr/xmlui/handle/123456789/54900
dc.identifier.uri http://dx.doi.org/10.26240/heal.ntua.22598
dc.rights Default License
dc.subject Cloud en
dc.subject Management en
dc.subject Resource en
dc.subject AI en
dc.subject Deep reinforcement learning en
dc.subject Νέφος el
dc.subject Διαχείριση πόρων el
dc.subject Τεχνητή νοημοσύνη el
dc.subject Βαθιά ενισχυμένη εκμάθηση el
dc.subject Τεχνητή μάθηση el
dc.title Deep reinforcement learning for tail latency regulation in co-located applications through cooperative core and cache allocation en
heal.type bachelorThesis
heal.classification Resource Management en
heal.classification Cloud el
heal.classification Deep Reinforcement Learning el
heal.language el
heal.language en
heal.access free
heal.recordProvider ntua el
heal.publicationDate 2021-10-25
heal.abstract The number of workloads run on the Cloud is constantly growing. Data center operators and cloud providers have embraced workload co-location and multi-tenancy as first-class system design concerns to efficiently service and manage these massive computing needs. Current state-of-the-art resource managers place applications on the available pool of resources using standard metrics such as CPU or memory usage, and as a result they fail to achieve adequate resource utilization. In this thesis, we design a resource manager that leverages deep reinforcement learning for its policy and uses performance monitoring counters, richer metrics that capture a machine's current state. We showcase the impact of applying stress to different server resources and the need for a better scheduler that considers the correct metrics. We integrate our solution with OpenAI Gym, one of the most widely used toolkits for developing and comparing reinforcement learning algorithms, and we show that we can achieve higher resource usage compared to the default scheduler as well as other state-of-the-art schedulers. en
heal.advisorName Σούντρης, Δημήτριος el
heal.committeeMemberName Σούντρης, Δημήτριος el
heal.committeeMemberName Τσανάκας, Παναγιώτης el
heal.committeeMemberName Γκούμας, Γεώργιος el
heal.academicPublisher Εθνικό Μετσόβιο Πολυτεχνείο. Σχολή Ηλεκτρολόγων Μηχανικών και Μηχανικών Υπολογιστών el
heal.academicPublisherID ntua
heal.numberOfPages 97 σ. el
heal.fullTextAvailability false
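
The abstract above describes driving a deep-reinforcement-learning scheduling policy from performance monitoring counters and integrating it with OpenAI Gym. As a purely illustrative aid, the following is a minimal sketch of how such a custom Gym environment could be wired up; the class name CoLocationEnv, the placeholder _read_pmcs method, the observation/action spaces, and the reward shown are assumptions for illustration, not the thesis's actual implementation.

# Minimal sketch of a custom OpenAI Gym environment (classic pre-0.26 Gym API),
# assuming observations are normalized performance-monitoring-counter readings
# and actions are discrete core/cache-way allocation choices.
import gym
import numpy as np
from gym import spaces

class CoLocationEnv(gym.Env):
    """Toy environment: allocate cores/cache ways to a latency-critical app."""

    def __init__(self, n_pmcs=8, n_allocations=10):
        super().__init__()
        # Observation: a vector of normalized PMC readings (e.g. IPC, LLC misses).
        self.observation_space = spaces.Box(low=0.0, high=1.0,
                                            shape=(n_pmcs,), dtype=np.float32)
        # Action: choose one of n_allocations core/cache partitioning configs.
        self.action_space = spaces.Discrete(n_allocations)

    def _read_pmcs(self):
        # Placeholder for reading real hardware counters (e.g. via perf).
        return self.observation_space.sample()

    def reset(self):
        return self._read_pmcs()

    def step(self, action):
        # Apply the chosen allocation (placeholder), then observe the new state.
        obs = self._read_pmcs()
        # Hypothetical reward: penalize tail-latency violations, reward otherwise.
        tail_latency_ok = obs[0] < 0.9
        reward = 1.0 if tail_latency_ok else -1.0
        done = False  # a scheduling episode could instead end after a fixed horizon
        return obs, reward, done, {}

Any standard Gym-compatible agent could then interact with this environment through the usual reset/step loop, which is the interface the thesis exploits by integrating its scheduler with OpenAI Gym.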

