Hardware
All machines are 2-way SMPs with dual-core AMD Opteron 265 processors.
ssh <your-username>@optXX
Your initial password is your student ID.
On Windows, use your favorite SSH client to log in to opt.csc.ncsu.edu, port 22XX (XX=00..17). This logs you in to one of the nodes (same as above).
yppasswd
The new password takes effect no later than the top of the hour.
gcc -fopenmp -o fn fn.c
GCC 4.1 for FC5 is back-patched with the GCC 4.2 OpenMP support. It may not be the fastest, but it works.
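For a quick sanity check of the OpenMP toolchain, a minimal fn.c could look like the sketch below (the file name and loop body are placeholders, not assignment code):

#include <omp.h>
#include <stdio.h>

int main(void)
{
    int i, n = 16;
    double a[16];

    /* distribute the loop iterations across the available threads */
    #pragma omp parallel for
    for (i = 0; i < n; i++)
        a[i] = 2.0 * i;

    printf("max threads: %d, a[n-1] = %f\n",
           omp_get_max_threads(), a[n - 1]);
    return 0;
}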
gcc -o fn fn.c -lpapi
Make sure your LD_LIBRARY_PATH is set; see below.
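A minimal sketch of a program that reads PAPI counters is shown below; the events PAPI_TOT_INS and PAPI_TOT_CYC are only illustrative choices (run papi_avail on a node to see what the Opterons actually support):

#include <stdio.h>
#include <stdlib.h>
#include <papi.h>

int main(void)
{
    int eventset = PAPI_NULL;
    long long counts[2];
    volatile double x = 0.0;
    int i;

    /* initialize the library and an event set with two common events */
    if (PAPI_library_init(PAPI_VER_CURRENT) != PAPI_VER_CURRENT)
        exit(1);
    PAPI_create_eventset(&eventset);
    PAPI_add_event(eventset, PAPI_TOT_INS);   /* total instructions */
    PAPI_add_event(eventset, PAPI_TOT_CYC);   /* total cycles */

    /* count the events around the work loop */
    PAPI_start(eventset);
    for (i = 0; i < 1000000; i++)
        x += i * 0.5;
    PAPI_stop(eventset, counts);

    printf("instructions: %lld  cycles: %lld\n", counts[0], counts[1]);
    return 0;
}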
chmod 600 .rhosts
chmod 600 .mpd.conf
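The .rhosts file lists, one per line, the hosts (and the account name) allowed to rsh in without a password. A hypothetical example, assuming your username is <your-username> and the nodes are opt00 through opt17 (one line per node):

opt00 <your-username>
opt01 <your-username>
...
opt17 <your-username>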
export PATH=".:~/bin:/usr/bin:/usr/local/bin:/usr/lib64/mpich2/bin:$PATH" export LD_LIBRARY_PATH="/usr/local/lib64:/usr/local/lib:/usr/lib64/mpich2/lib:$LD_LIBRARY_PATH"Log out and back in to optXX to activate the new settings.
mpdtrace
This should return a list of available nodes to run MPI jobs on. If not, first start the user-level MPD (see below).
mpicc -O3 -o pi pi.c
If you're using BLAS/ATLAS:
mpicc -O3 -o pi pi.c -L/usr/lib64/atlas -lcblas -latlas
mpirun -np 2 pi
Try again with a different number of processors.
mpiexec -n 2 pi
Try again with a different number of processors.
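If you want a self-contained test before running your own code, a pi.c in the classic style (midpoint-rule integration of 4/(1+x^2)) could look like the sketch below; this is an assumption about the program, not the actual course file:

#include <stdio.h>
#include <mpi.h>

int main(int argc, char *argv[])
{
    int rank, size;
    long i, n = 10000000;              /* number of intervals */
    double h, sum = 0.0, local_pi, pi;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    /* each rank integrates every size-th interval of 4/(1+x^2) on [0,1] */
    h = 1.0 / (double)n;
    for (i = rank; i < n; i += size) {
        double x = h * ((double)i + 0.5);
        sum += 4.0 / (1.0 + x * x);
    }
    local_pi = h * sum;

    /* combine the partial sums on rank 0 and print the result */
    MPI_Reduce(&local_pi, &pi, 1, MPI_DOUBLE, MPI_SUM, 0, MPI_COMM_WORLD);
    if (rank == 0)
        printf("pi is approximately %.16f\n", pi);

    MPI_Finalize();
    return 0;
}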
mpdboot --ifhn=optmpiYY -n 2 -r rsh --ncpus=4
You can now run MPI jobs (see mpirun/mpiexec above).
mpdlistjobs
mpdallexit
mpdcleanup
1 processor per node, 18 nodes, all in 1 mpd ring:
mpdboot --ifhn=optmpiYY -n 18 -r rsh --ncpus=1 --maxbranch=20 --verbose -d
More cleanup: sometimes mpdboot does not work because of left-over processes; issue:
rsh optYY killall -9 python2.6
rsh optYY mpdcleanup
export PATH="/usr/lib64/openmpi/bin:$PATH" export LD_LIBRARY_PATH="/usr/lib64/openmpi/lib" module load openmpi-x86_64
mpicc -O3 -o pi pi.c
mpirun -mca plm_rsh_agent rsh -mca btl_tcp_if_exclude lo,eth0,virbr0 -machinefile machinefile -np 2 pi2
The machinefile lists the nodes and their slots, e.g.:
opt00 slots=4 max_slots=4
opt01 slots=4 max_slots=4
Batch submission is handled via Torque on top of OpenPBS (coming later: the Maui Cluster Scheduler).
On opt, issue:
This applies to: