Show simple item record

hal.structure.identifier: Laboratoire Bordelais de Recherche en Informatique [LaBRI]
hal.structure.identifier: Efficient runtime systems for parallel architectures [RUNTIME]
hal.structure.identifier: Algorithms and high performance computing for grand challenge applications [SCALAPPLIX]
dc.contributor.author: FAVERGE, Mathieu
hal.structure.identifier: Algorithms and high performance computing for grand challenge applications [SCALAPPLIX]
dc.contributor.author: LACOSTE, Xavier
hal.structure.identifier: Laboratoire Bordelais de Recherche en Informatique [LaBRI]
hal.structure.identifier: Algorithms and high performance computing for grand challenge applications [SCALAPPLIX]
dc.contributor.author: RAMET, Pierre
dc.date.accessioned: 2024-04-15T09:53:12Z
dc.date.available: 2024-04-15T09:53:12Z
dc.date.created: 2008
dc.date.issued: 2008
dc.date.conference: 2008
dc.identifier.uri: https://oskar-bordeaux.fr/handle/20.500.12278/198590
dc.description.abstractEn: Over the past few years, parallel sparse direct solvers have made significant progress and are now able to efficiently solve industrial three-dimensional problems with several million unknowns. A hybrid MPI-thread implementation of our direct solver PaStiX is already well suited for SMP nodes and new multi-core architectures; it drastically reduces the memory overhead and improves scalability. In the context of distributed NUMA architectures, a dynamic scheduler based on a work-stealing algorithm has been developed to fill communication idle times. On these architectures, it is important to take care of NUMA effects and to preserve memory affinity during work stealing. The scheduling of communications also needs to be adapted, especially to ensure that they are overlapped by computations. Experiments on numerical test cases will be presented to demonstrate the efficiency of the approach on NUMA architectures. If memory is not large enough to treat a given problem, disks must be used to store the data that cannot fit in memory (out-of-core storage). The idle times due to disk accesses have to be managed by our dynamic scheduler to prefetch and save datasets. Thus, we design and study specific scheduling algorithms in this particular context.
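
The abstract above describes a dynamic scheduler based on work stealing that preserves memory affinity on NUMA architectures. The following is a minimal, hypothetical C sketch of that general idea only, not the PaStiX scheduler itself: each NUMA node owns a task queue, and a worker steals from other nodes only once its local queue is empty. The node count, task payload, queue sizes, and round-robin victim order are illustrative assumptions.

/* Illustrative sketch (not the PaStiX implementation): per-NUMA-node
 * task queues with work stealing, so tasks normally run on the node
 * that owns their data and are stolen only when a node runs dry. */
#include <pthread.h>
#include <stdio.h>

#define NB_NODES 2           /* hypothetical number of NUMA nodes      */
#define TASKS_PER_NODE 4     /* hypothetical tasks placed on each node */

typedef struct { int id; } task_t;   /* one task: just an id to "run" */

typedef struct {                     /* simple locked queue per node  */
    task_t tasks[64];
    int head, tail;
    pthread_mutex_t lock;
} queue_t;

static queue_t queues[NB_NODES];

static int pop_local(queue_t *q, task_t *t) {
    int ok = 0;
    pthread_mutex_lock(&q->lock);
    if (q->head < q->tail) { *t = q->tasks[q->head++]; ok = 1; }
    pthread_mutex_unlock(&q->lock);
    return ok;
}

/* Steal only when the local queue is empty; a NUMA-aware policy would
 * prefer "close" victims first to preserve memory affinity, as the
 * abstract describes.  Here the victim order is plain round robin. */
static int steal(int my_node, task_t *t) {
    for (int d = 1; d < NB_NODES; d++)
        if (pop_local(&queues[(my_node + d) % NB_NODES], t))
            return 1;
    return 0;
}

static void *worker(void *arg) {
    int node = (int)(long)arg;
    task_t t;
    while (pop_local(&queues[node], &t) || steal(node, &t))
        printf("node %d runs task %d\n", node, t.id);
    return NULL;
}

int main(void) {
    pthread_t th[NB_NODES];
    for (int n = 0; n < NB_NODES; n++) {
        pthread_mutex_init(&queues[n].lock, NULL);
        for (int i = 0; i < TASKS_PER_NODE; i++)      /* seed local work */
            queues[n].tasks[queues[n].tail++] = (task_t){ n * 100 + i };
    }
    for (int n = 0; n < NB_NODES; n++)
        pthread_create(&th[n], NULL, worker, (void *)(long)n);
    for (int n = 0; n < NB_NODES; n++)
        pthread_join(&th[n], NULL);
    return 0;
}
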
dc.description.sponsorship: Adaptation et Optimisation des Performances Applicatives sur architectures NUMA. Etude et Mise en Œuvre sur des Applications en SISmologie. - ANR-05-CIGC-0002
dc.language.iso: en
dc.subject.en: sparse direct solver
dc.subject.en: NUMA architecture
dc.subject.en: multi-cores
dc.subject.en: dynamic scheduling
dc.title.en: A NUMA Aware Scheduler for a Parallel Sparse Direct Solver
dc.type: Conference paper (Communication dans un congrès)
dc.subject.hal: Informatique [cs]/Calcul parallèle, distribué et partagé [cs.DC]
bordeaux.hal.laboratories: Laboratoire Bordelais de Recherche en Informatique (LaBRI) - UMR 5800*
bordeaux.institution: Université de Bordeaux
bordeaux.institution: Bordeaux INP
bordeaux.institution: CNRS
bordeaux.conference.title: PMAA'08
bordeaux.country: CH
bordeaux.conference.city: Neuchâtel
bordeaux.peerReviewed: yes
hal.identifier: inria-00344709
hal.version: 1
hal.invited: no
hal.proceedings: no
hal.popular: no
hal.audience: International
hal.origin.link: https://hal.archives-ouvertes.fr//inria-00344709v1


Files in this item


There are no files associated with this item.

This item appears in the following collection(s)
