OpenMP, OpenMP/MPI, and CUDA/MPI C programs for solving the time-dependent dipolar Gross–Pitaevskii equation
Main author: | |
---|---|
Publication date: | 2016 |
Other authors: | , , , , |
Document type: | Article |
Language: | eng |
Source title: | Repositório Institucional da UNESP |
Full text: | http://dx.doi.org/10.1016/j.cpc.2016.07.029 http://hdl.handle.net/11449/173642 |
Abstract: | We present new versions of the previously published C and CUDA programs for solving the dipolar Gross–Pitaevskii equation in one, two, and three spatial dimensions, which calculate stationary and non-stationary solutions by propagation in imaginary or real time. The presented programs are improved and parallelized versions of the previous programs, divided into three packages according to the type of parallelization. The first package contains improved and threaded versions of the sequential C programs using OpenMP. The second package additionally parallelizes the three-dimensional variants of the OpenMP programs using MPI, allowing them to run on distributed-memory systems. Finally, the previous three-dimensional CUDA-parallelized programs are further parallelized using MPI, similarly to the OpenMP programs. We also present speedup test results obtained with the new versions of the programs in comparison with the previous sequential C and parallel CUDA programs. The improvements to the sequential version yield a speedup of 1.1–1.9, depending on the program. OpenMP parallelization yields a further speedup of 2–12 on a 16-core workstation, while the OpenMP/MPI version demonstrates a speedup of 11.5–16.5 on a computer cluster with 32 nodes. The CUDA/MPI version shows a speedup of 9–10 on a computer cluster with 32 nodes. |
id |
UNSP_0bff3a31397f92316e62534f477ee5b0 |
---|---|
oai_identifier_str |
oai:repositorio.unesp.br:11449/173642 |
network_acronym_str |
UNSP |
network_name_str |
Repositório Institucional da UNESP |
repository_id_str |
2946 |
spelling |
Title: OpenMP, OpenMP/MPI, and CUDA/MPI C programs for solving the time-dependent dipolar Gross–Pitaevskii equation
Keywords: Bose–Einstein condensate; C program; CUDA program; Dipolar atoms; GPU; Gross–Pitaevskii equation; MPI; OpenMP; Split-step Crank–Nicolson scheme
Abstract: We present new versions of the previously published C and CUDA programs for solving the dipolar Gross–Pitaevskii equation in one, two, and three spatial dimensions, which calculate stationary and non-stationary solutions by propagation in imaginary or real time. The presented programs are improved and parallelized versions of the previous programs, divided into three packages according to the type of parallelization. The first package contains improved and threaded versions of the sequential C programs using OpenMP. The second package additionally parallelizes the three-dimensional variants of the OpenMP programs using MPI, allowing them to run on distributed-memory systems. Finally, the previous three-dimensional CUDA-parallelized programs are further parallelized using MPI, similarly to the OpenMP programs. We also present speedup test results obtained with the new versions of the programs in comparison with the previous sequential C and parallel CUDA programs. The improvements to the sequential version yield a speedup of 1.1–1.9, depending on the program. OpenMP parallelization yields a further speedup of 2–12 on a 16-core workstation, while the OpenMP/MPI version demonstrates a speedup of 11.5–16.5 on a computer cluster with 32 nodes. The CUDA/MPI version shows a speedup of 9–10 on a computer cluster with 32 nodes.
Affiliations: Scientific Computing Laboratory, Center for the Study of Complex Systems, Institute of Physics Belgrade, University of Belgrade, Pregrevica 118; Departamento de Ciencias Básicas, Universidad Santo Tomás, 150001 Tunja; Instituto de Física Teórica, UNESP—Universidade Estadual Paulista, 01.140-70 São Paulo; Department of Mathematics and Informatics, Faculty of Sciences, University of Novi Sad, Trg Dositeja Obradovića 4; School of Physics, Bharathidasan University, Palkalaiperur Campus
Institutions: University of Belgrade; Universidad Santo Tomás; Universidade Estadual Paulista (Unesp); University of Novi Sad; Bharathidasan University
Authors: Lončar, Vladimir; Young, Luis E.S. [UNESP]; Škrbić, Srdjan; Muruganandam, Paulsamy; Adhikari, Sadhan K. [UNESP]; Balaž, Antun
Dates: deposited 2018-12-11T17:07:02Z; published 2016-12-01
Type: info:eu-repo/semantics/publishedVersion; info:eu-repo/semantics/article
Pages: 190-196; Format: application/pdf
Identifiers: http://dx.doi.org/10.1016/j.cpc.2016.07.029; Computer Physics Communications, v. 209, p. 190-196; ISSN 0010-4655; http://hdl.handle.net/11449/173642; DOI 10.1016/j.cpc.2016.07.029; Scopus 2-s2.0-84991746823
Source: Scopus; reponame: Repositório Institucional da UNESP; instname: Universidade Estadual Paulista (UNESP); instacron: UNESP
Language: eng
Rights: info:eu-repo/semantics/openAccess
OAI record: oai:repositorio.unesp.br:11449/173642; last updated 2023-10-11T06:02:33Z |
dc.title.none.fl_str_mv |
OpenMP, OpenMP/MPI, and CUDA/MPI C programs for solving the time-dependent dipolar Gross–Pitaevskii equation |
title |
OpenMP, OpenMP/MPI, and CUDA/MPI C programs for solving the time-dependent dipolar Gross–Pitaevskii equation |
spellingShingle |
OpenMP, OpenMP/MPI, and CUDA/MPI C programs for solving the time-dependent dipolar Gross–Pitaevskii equation
Lončar, Vladimir
Bose–Einstein condensate
C program
CUDA program
Dipolar atoms
GPU
Gross–Pitaevskii equation
MPI
OpenMP
Split-step Crank–Nicolson scheme |
title_short |
OpenMP, OpenMP/MPI, and CUDA/MPI C programs for solving the time-dependent dipolar Gross–Pitaevskii equation |
title_full |
OpenMP, OpenMP/MPI, and CUDA/MPI C programs for solving the time-dependent dipolar Gross–Pitaevskii equation |
title_fullStr |
OpenMP, OpenMP/MPI, and CUDA/MPI C programs for solving the time-dependent dipolar Gross–Pitaevskii equation |
title_full_unstemmed |
OpenMP, OpenMP/MPI, and CUDA/MPI C programs for solving the time-dependent dipolar Gross–Pitaevskii equation |
title_sort |
OpenMP, OpenMP/MPI, and CUDA/MPI C programs for solving the time-dependent dipolar Gross–Pitaevskii equation |
author |
Lončar, Vladimir |
author_facet |
Lončar, Vladimir
Young, Luis E.S. [UNESP]
Škrbić, Srdjan
Muruganandam, Paulsamy
Adhikari, Sadhan K. [UNESP]
Balaž, Antun |
author_role |
author |
author2 |
Young, Luis E.S. [UNESP]
Škrbić, Srdjan
Muruganandam, Paulsamy
Adhikari, Sadhan K. [UNESP]
Balaž, Antun |
author2_role |
author author author author author |
dc.contributor.none.fl_str_mv |
University of Belgrade
Universidad Santo Tomás
Universidade Estadual Paulista (Unesp)
University of Novi Sad
Bharathidasan University |
dc.contributor.author.fl_str_mv |
Lončar, Vladimir
Young, Luis E.S. [UNESP]
Škrbić, Srdjan
Muruganandam, Paulsamy
Adhikari, Sadhan K. [UNESP]
Balaž, Antun |
dc.subject.por.fl_str_mv |
Bose–Einstein condensate
C program
CUDA program
Dipolar atoms
GPU
Gross–Pitaevskii equation
MPI
OpenMP
Split-step Crank–Nicolson scheme |
topic |
Bose–Einstein condensate
C program
CUDA program
Dipolar atoms
GPU
Gross–Pitaevskii equation
MPI
OpenMP
Split-step Crank–Nicolson scheme |
description |
We present new versions of the previously published C and CUDA programs for solving the dipolar Gross–Pitaevskii equation in one, two, and three spatial dimensions, which calculate stationary and non-stationary solutions by propagation in imaginary or real time. The presented programs are improved and parallelized versions of the previous programs, divided into three packages according to the type of parallelization. The first package contains improved and threaded versions of the sequential C programs using OpenMP. The second package additionally parallelizes the three-dimensional variants of the OpenMP programs using MPI, allowing them to run on distributed-memory systems. Finally, the previous three-dimensional CUDA-parallelized programs are further parallelized using MPI, similarly to the OpenMP programs. We also present speedup test results obtained with the new versions of the programs in comparison with the previous sequential C and parallel CUDA programs. The improvements to the sequential version yield a speedup of 1.1–1.9, depending on the program. OpenMP parallelization yields a further speedup of 2–12 on a 16-core workstation, while the OpenMP/MPI version demonstrates a speedup of 11.5–16.5 on a computer cluster with 32 nodes. The CUDA/MPI version shows a speedup of 9–10 on a computer cluster with 32 nodes. |
publishDate |
2016 |
dc.date.none.fl_str_mv |
2016-12-01
2018-12-11T17:07:02Z
2018-12-11T17:07:02Z |
dc.type.status.fl_str_mv |
info:eu-repo/semantics/publishedVersion |
dc.type.driver.fl_str_mv |
info:eu-repo/semantics/article |
format |
article |
status_str |
publishedVersion |
dc.identifier.uri.fl_str_mv |
http://dx.doi.org/10.1016/j.cpc.2016.07.029
Computer Physics Communications, v. 209, p. 190-196.
0010-4655
http://hdl.handle.net/11449/173642
10.1016/j.cpc.2016.07.029
2-s2.0-84991746823
2-s2.0-84991746823.pdf |
url |
http://dx.doi.org/10.1016/j.cpc.2016.07.029 http://hdl.handle.net/11449/173642 |
identifier_str_mv |
Computer Physics Communications, v. 209, p. 190-196.
0010-4655
10.1016/j.cpc.2016.07.029
2-s2.0-84991746823
2-s2.0-84991746823.pdf |
dc.language.iso.fl_str_mv |
eng |
language |
eng |
dc.relation.none.fl_str_mv |
Computer Physics Communications 1,729 |
dc.rights.driver.fl_str_mv |
info:eu-repo/semantics/openAccess |
eu_rights_str_mv |
openAccess |
dc.format.none.fl_str_mv |
190-196
application/pdf |
dc.source.none.fl_str_mv |
Scopus
reponame: Repositório Institucional da UNESP
instname: Universidade Estadual Paulista (UNESP)
instacron: UNESP |
instname_str |
Universidade Estadual Paulista (UNESP) |
instacron_str |
UNESP |
institution |
UNESP |
reponame_str |
Repositório Institucional da UNESP |
collection |
Repositório Institucional da UNESP |
repository.name.fl_str_mv |
Repositório Institucional da UNESP - Universidade Estadual Paulista (UNESP) |
repository.mail.fl_str_mv |
|
_version_ |
1808128381740384256 |