Pymp – OpenMP-like Python Programming
Laplace Solver Example
The next example, a solver for the common Laplace equation, is a little more detailed. The code is definitely not the most efficient – it uses explicit loops rather than array operations – but I hope it illustrates how to use Pymp. For the curious, timings are included in the code. Listing 4 is the plain Python version, and the Pymp version of the code is shown in Listing 5. Changed lines are marked with arrow comments (-->).
Listing 4: Python Laplace Solver
```python
import numpy
from time import perf_counter

nx = 1201
ny = 1201

# Solution and previous solution arrays
sol = numpy.zeros((nx, ny))
soln = sol.copy()

for j in range(0, ny-1):
    sol[0, j] = 10.0
    sol[nx-1, j] = 1.0
# end for
for i in range(0, nx-1):
    sol[i, 0] = 0.0
    sol[i, ny-1] = 0.0
# end for

# Iterate
start_time = perf_counter()
for kloop in range(1, 100):
    soln = sol.copy()
    for i in range(1, nx-1):
        for j in range(1, ny-1):
            sol[i, j] = 0.25 * (soln[i, j-1] + soln[i, j+1] +
                                soln[i-1, j] + soln[i+1, j])
        # end j for loop
    # end i for loop
# end kloop for loop
end_time = perf_counter()

print(' ')
print('Elapsed wall clock time = %g seconds.' % (end_time - start_time))
print(' ')
```
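As an aside, the "it uses loops" caveat above is worth seeing in practice: the entire pair of nested interior loops can be replaced by a single NumPy slice expression. The following sketch (not one of the original listings) performs the same 99 Jacobi sweeps with the same boundary conditions:

```python
import numpy
from time import perf_counter

nx = 1201
ny = 1201

# Same setup as Listing 4: zero interior, fixed boundary values
sol = numpy.zeros((nx, ny))
sol[0, :] = 10.0
sol[nx-1, :] = 1.0
sol[:, 0] = 0.0
sol[:, ny-1] = 0.0

start_time = perf_counter()
for kloop in range(1, 100):
    soln = sol.copy()
    # One slice expression replaces the two nested interior loops;
    # the boundary rows/columns are never touched.
    sol[1:-1, 1:-1] = 0.25 * (soln[1:-1, :-2] + soln[1:-1, 2:] +
                              soln[:-2, 1:-1] + soln[2:, 1:-1])
end_time = perf_counter()

print('Elapsed wall clock time = %g seconds.' % (end_time - start_time))
```

On a typical machine this runs orders of magnitude faster than the loop version, because the work moves from the Python interpreter into NumPy's compiled array routines.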
Listing 5: Pymp Laplace Solver
```python
import pymp                              # -->
from time import perf_counter

nx = 1201
ny = 1201

# Solution and previous solution arrays
sol = pymp.shared.array((nx, ny))        # -->
soln = pymp.shared.array((nx, ny))       # -->

for j in range(0, ny-1):
    sol[0, j] = 10.0
    sol[nx-1, j] = 1.0
# end for
for i in range(0, nx-1):
    sol[i, 0] = 0.0
    sol[i, ny-1] = 0.0
# end for

# Iterate
start_time = perf_counter()
with pymp.Parallel(6) as p:              # -->
    for kloop in range(1, 100):
        soln = sol.copy()
        # Only the outer loop is split among the processes; the inner
        # loop must be a plain range, or each process would skip most
        # of its interior points.
        for i in p.range(1, nx-1):       # -->
            for j in range(1, ny-1):
                sol[i, j] = 0.25 * (soln[i, j-1] + soln[i, j+1] +
                                    soln[i-1, j] + soln[i+1, j])
            # end j for loop
        # end i for loop
    # end kloop for loop
# end with
end_time = perf_counter()

print(' ')
print('Elapsed wall clock time = %g seconds.' % (end_time - start_time))
print(' ')
```
To show that Pymp is actually doing what it is supposed to do, Table 1 shows the timings for various numbers of cores. Notice that the total time decreases as the number of cores increases, as expected.
Table 1: Pymp Timings
| No. of Cores | Total Time (sec) |
|---|---|
| Base (serial) | 165.0 |
| 1 | 94.0 |
| 2 | 42.0 |
| 4 | 10.9 |
| 6 | 5.0 |
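One way to read Table 1 is in terms of speedup and parallel efficiency relative to the single-core Pymp run. A quick calculation over the timings above:

```python
# Timings from Table 1 (seconds)
timings = {'serial': 165.0, 1: 94.0, 2: 42.0, 4: 10.9, 6: 5.0}

t1 = timings[1]  # single-core Pymp run as the baseline
for cores in (1, 2, 4, 6):
    speedup = t1 / timings[cores]
    efficiency = speedup / cores
    print('%d cores: speedup %.1fx, efficiency %.0f%%'
          % (cores, speedup, 100.0 * efficiency))
```

Note that the four- and six-core rows imply superlinear speedup, which usually points to cache effects or run-to-run variation in the original measurements rather than a property of the algorithm itself.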
Summary
Ever since Python was created, people have been trying to achieve multithreaded computation with it. Over the years, several tools have been created that perform computations outside of Python and integrate the results back into it.
Over time, parallel programming approaches have been codified into standards. OpenMP is one of the original standards and is very popular in C/C++ and Fortran programming, so a large number of developers know it and use it in application development. However, OpenMP is designed for compiled, not interpreted, languages.
Fortunately, the innovative Python Pymp tool was created. It is an OpenMP-like Python module that uses the fork mechanism of the operating system instead of threads to achieve parallelism. As illustrated in the examples in this article, it’s not too difficult to port some applications to Pymp.
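The fork-plus-shared-memory idea underlying Pymp can be sketched with nothing but the standard library. The following is a simplified illustration of the mechanism – not Pymp's actual implementation – in which two forked processes each fill a disjoint slice of a shared array, much as `p.range` divides loop iterations:

```python
import multiprocessing as mp
import numpy

# Explicitly use the fork start method, as Pymp does (Linux/Unix only)
ctx = mp.get_context('fork')

# A shared buffer that forked workers can write into (similar in
# spirit to pymp.shared.array, which also wraps shared memory)
n = 8
buf = ctx.Array('d', n, lock=False)

def worker(start, stop):
    # Each worker fills its own disjoint slice of the shared buffer
    arr = numpy.frombuffer(buf)
    for i in range(start, stop):
        arr[i] = i * i

procs = [ctx.Process(target=worker, args=(0, n // 2)),
         ctx.Process(target=worker, args=(n // 2, n))]
for proc in procs:
    proc.start()
for proc in procs:
    proc.join()

print(list(buf))  # squares of 0..7, computed by two processes
```

Because the workers are forked processes rather than threads, each runs in its own interpreter and is not serialized by the global interpreter lock; only the explicitly shared buffer is visible to all of them.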