HEAL DSpace

GMBlock: Optimizing data movement in a block-level storage sharing system over Myrinet

DSpace/Manakin Repository

Show simple item record

dc.contributor.author Koukis, E en
dc.contributor.author Nanos, A en
dc.contributor.author Koziris, N en
dc.date.accessioned 2014-03-01T01:33:34Z
dc.date.available 2014-03-01T01:33:34Z
dc.date.issued 2010 en
dc.identifier.issn 1386-7857 en
dc.identifier.uri https://dspace.lib.ntua.gr/xmlui/handle/123456789/20468
dc.subject Block-level storage en
dc.subject Memory contention en
dc.subject Myrinet en
dc.subject Network block device en
dc.subject OCFS2 en
dc.subject Shared storage en
dc.subject SMP clusters en
dc.subject User level networking en
dc.subject.classification Computer Science, Information Systems en
dc.subject.classification Computer Science, Theory & Methods en
dc.subject.other Block-level storage en
dc.subject.other Memory contentions en
dc.subject.other Myrinet en
dc.subject.other Network block device en
dc.subject.other OCFS2 en
dc.subject.other Shared storage en
dc.subject.other SMP clusters en
dc.subject.other User-level networking en
dc.subject.other Benchmarking en
dc.subject.other Optimization en
dc.subject.other Disks (structural components) en
dc.title GMBlock: Optimizing data movement in a block-level storage sharing system over Myrinet en
heal.type journalArticle en
heal.identifier.primary 10.1007/s10586-009-0106-y en
heal.identifier.secondary http://dx.doi.org/10.1007/s10586-009-0106-y en
heal.language English en
heal.publicationDate 2010 en
heal.abstract We present gmblock, a block-level storage sharing system over Myrinet that uses an optimized I/O path to transfer data directly between the storage medium and the network, bypassing the host CPU and main memory bus of the storage server. It is device-driver independent and retains the protection and isolation features of the OS. We evaluate the performance of a prototype gmblock server and find that: (a) the proposed techniques eliminate memory and peripheral bus contention, increasing remote I/O bandwidth significantly, on the order of 20-200% compared to an RDMA-based approach, (b) the impact of remote I/O on local computation becomes negligible, (c) the performance characteristics of RAID storage combined with limited NIC resources constrain performance. We introduce synchronized send operations to improve the degree of disk-to-network I/O overlap. We deploy the OCFS2 shared-disk filesystem over gmblock and show gains for various application benchmarks, provided I/O scheduling can eliminate the disk bottleneck due to concurrent access. © 2010 Springer Science+Business Media, LLC. en
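
To illustrate the data path that gmblock is designed to avoid, below is a minimal sketch assuming a conventional POSIX network block device server; it is not the authors' code, and the names (serve_read, BLOCK_SIZE) are hypothetical. In such a conventional design, every remotely served block crosses the server's main memory bus twice: once from disk into a RAM staging buffer, and once from that buffer to the NIC. This is the memory contention the abstract says gmblock eliminates by moving data directly between the storage medium and the Myrinet network.

/* Illustrative sketch only: the conventional read path that gmblock avoids.
 * Not gmblock code; serve_read and BLOCK_SIZE are hypothetical names. */
#include <stdint.h>
#include <sys/socket.h>
#include <sys/types.h>
#include <unistd.h>

enum { BLOCK_SIZE = 4096 };

/* Serve one remote read request: copy a block from the disk to the client. */
static int serve_read(int disk_fd, int client_fd, off_t offset)
{
    uint8_t buf[BLOCK_SIZE];                /* host RAM staging buffer */

    /* Crossing #1: storage medium -> main memory. */
    if (pread(disk_fd, buf, sizeof(buf), offset) != (ssize_t)sizeof(buf))
        return -1;

    /* Crossing #2: main memory -> NIC. Under heavy remote I/O, these two
     * transfers contend with local computation for the memory bus. */
    for (size_t done = 0; done < sizeof(buf); ) {
        ssize_t sent = send(client_fd, buf + done, sizeof(buf) - done, 0);
        if (sent <= 0)
            return -1;
        done += (size_t)sent;
    }
    return 0;
}

gmblock's short-circuit path instead lets the storage device and the Myrinet NIC exchange data directly over the peripheral bus, so neither crossing above consumes memory-bus bandwidth on the storage server.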
heal.publisher SPRINGER en
heal.journalName Cluster Computing en
dc.identifier.doi 10.1007/s10586-009-0106-y en
dc.identifier.isi ISI:000284300700001 en
dc.identifier.volume 13 en
dc.identifier.issue 4 en
dc.identifier.spage 349 en
dc.identifier.epage 372 en


Files in this item

There are no files associated with this item.
