Parallel computing tutorial

Using multiprocessing

Task

Perform RMSD calculation in parallel
In [1]:
import warnings
warnings.filterwarnings('ignore', category=DeprecationWarning)
import pytraj as pt

traj = pt.iterload('tz2.nc', 'tz2.parm7')
print(traj)
pytraj.TrajectoryIterator, 101 frames: 
Size: 0.000503 (GB)
<Topology: 223 atoms, 13 residues, 1 mols, non-PBC>
In [2]:
pt.pmap(pt.rmsd, traj, mask='@CA', ref=traj[0], n_cores=4)
Out[2]:
OrderedDict([('RMSD_00001',
              array([  1.94667955e-07,   2.54596866e+00,   4.22333034e+00, ...,
                       4.97189564e+00,   5.53947712e+00,   4.83201237e+00]))])
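Behind this one-liner, `pt.pmap` farms the per-frame work out to worker processes and merges the per-frame values back into an `OrderedDict` in frame order. The same pattern can be sketched with only the standard library; `frame_rmsd` and `pmap_sketch` below are illustrative stand-ins, not pytraj internals (the real RMSD also does best-fit superposition, which is omitted here):

```python
import math
from collections import OrderedDict
from multiprocessing import Pool

def frame_rmsd(args):
    # Stand-in per-frame computation: plain (unfitted) RMSD between two
    # equal-length lists of (x, y, z) coordinates.
    coords, ref = args
    sq = sum((a - b) ** 2
             for xyz, rxyz in zip(coords, ref)
             for a, b in zip(xyz, rxyz))
    return math.sqrt(sq / len(coords))

def pmap_sketch(frames, ref, n_cores=4):
    # Distribute frames across worker processes, then return the merged
    # results keyed like pmap's OrderedDict output.
    with Pool(n_cores) as pool:
        values = pool.map(frame_rmsd, [(f, ref) for f in frames])
    return OrderedDict([('RMSD_00001', values)])

if __name__ == '__main__':
    # Two fake 3-atom frames; frame 0 is also the reference, so its RMSD is 0.
    ref = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (0.0, 1.0, 0.0)]
    frames = [ref, [(1.0, 0.0, 0.0), (2.0, 0.0, 0.0), (1.0, 1.0, 0.0)]]
    data = pmap_sketch(frames, ref, n_cores=2)
    print(data['RMSD_00001'])  # → [0.0, 1.0]
```

Because each frame's RMSD is independent of the others, the split is embarrassingly parallel and the merged result matches a serial loop over the same frames.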

Using MPI

Task

Perform RMSD calculation in parallel
In [3]:
%%file my_script.py

## create a file named my_script.py (you can use any name you like)

import pytraj as pt
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank = comm.rank

# load files
traj = pt.iterload('tz2.nc', 'tz2.parm7')

# call pmap_mpi for MPI

# no need to specify n_cores here; the worker count comes from `mpirun -n 4`
data = pt.pmap_mpi(pt.rmsd, traj, mask='@CA', ref=traj[0])

# data is sent to rank==0
if rank == 0:
    print(data)

Writing my_script.py
In [4]:
# run in shell

! mpirun -n 4 python my_script.py
OrderedDict([('RMSD_00001', array([  1.94667955e-07,   2.54596866e+00,   4.22333034e+00, ...,
         4.97189564e+00,   5.53947712e+00,   4.83201237e+00]))])
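With MPI, each rank works on its own contiguous slice of the 101 frames and the per-rank results are gathered to rank 0, which is why only rank 0 prints the data. A sketch of that even frame-partitioning logic is below; the slicing scheme is an assumption for illustration and the exact split pytraj uses internally may differ:

```python
def frame_slices(n_frames, n_ranks):
    # Split n_frames as evenly as possible into n_ranks contiguous
    # (start, stop) ranges; earlier ranks absorb the remainder frames.
    base, extra = divmod(n_frames, n_ranks)
    slices, start = [], 0
    for rank in range(n_ranks):
        stop = start + base + (1 if rank < extra else 0)
        slices.append((start, stop))
        start = stop
    return slices

# 101 frames split across 4 MPI ranks, as in `mpirun -n 4` above
print(frame_slices(101, 4))  # → [(0, 26), (26, 51), (51, 76), (76, 101)]
```

Each rank then computes RMSD only for its own range, so the work scales with the number of ranks while the gathered array still covers every frame exactly once.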

(tutorial_parallel.ipynb; tutorial_parallel_evaluated.ipynb; tutorial_parallel.py)