A Survey on Designing of OpenMP, UPC and MPI
The current trend toward multicore architectures underscores the need for parallelism. While new languages and libraries are being proposed to program these systems more efficiently, MPI faces this new challenge. Performance evaluations of the current options for programming multicore systems are therefore required. This paper evaluates MPI performance against Unified Parallel C (UPC) and OpenMP on multicore architectures. From the analysis of the results, it can be concluded that MPI is the best choice on multicore systems with both shared and hybrid shared/distributed memory, as it takes the fullest advantage of data locality, the key factor for performance on these systems. Regarding UPC, although it exploits the data layout in memory efficiently, it suffers from remote shared memory accesses, whereas OpenMP usually lacks efficient data locality support and is restricted to shared-memory systems, which limits its scalability.
Keywords: MPI, UPC, OpenMP, multicore architectures, performance evaluation, NAS parallel benchmarks (NPB)
Cite this Article
Ashwani Wantoo. A Survey on Designing of OpenMP, UPC and MPI. Recent Trends in Parallel Computing. 2017; 4(2): 21–26p.