OOPSE

OOPSE is an open-source Object-Oriented Parallel Simulation Engine.
It is primarily used to perform molecular dynamics simulations on
"strange" atom types that are not normally handled by other simulation
packages. This includes atoms with orientational degrees of freedom
(point dipoles, sticky atoms), as well as transition metals under the
Embedded Atom Method (EAM).

Input files are handled using the (included) Bizarre Atom Simulation
Syntax (BASS) library.
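
To give a flavor of the input format, here is a hypothetical sketch of
a BASS molecule declaration for a single argon atom. The keywords are
illustrative guesses rather than the definitive grammar; consult the
BASS library sources for the real syntax:

    molecule{
      name = "Ar";
      nAtoms = 1;
      atom[0]{
        type = "Ar";
        position( 0.0, 0.0, 0.0 );
      }
    }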

What you need to compile and use OOPSE:

1) Good C, C++ and Fortran95 compilers. We've built and tested OOPSE
using version 8 of the Intel compilers (ifort, icpc and icc) on Linux
machines. We also routinely build and test under Mac OS X using the
IBM compilers (xlf95, vac++). OOPSE should build with g++ and gcc,
but you'll still need a good Fortran *95* compiler. Fortran77 and
Fortran90 are *not* sufficient to compile OOPSE. (A quick toolchain
sanity check is sketched just after this list.)

2) MPI. We like MPICH. Other implementations might work, but we
haven't tried them. You can get MPICH here:
http://www-unix.mcs.anl.gov/mpi/mpich/

3) The Scalable Parallel Random Number Generators Library (SPRNG). You
can obtain SPRNG here:
http://archive.ncsa.uiuc.edu/Apps/CMP/RNG/RNG-home.html

4) Assorted unix utilities (lex, yacc, make) or their GNU equivalents.
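
Before building anything, it may save time to confirm that the
required tools are on your PATH. A minimal sanity check, assuming the
Intel compilers mentioned above (substitute your own compilers; -V
prints the Intel version banner):

    # C, C++ and Fortran95 compilers (Intel shown as an example)
    icc -V
    icpc -V
    ifort -V

    # lex, yacc and make (or their GNU equivalents flex, bison, gmake)
    which lex yacc make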

INSTRUCTIONS

1) Get, build, and test the required pieces above.
2) ./configure
3) make
4) make install
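
Assuming the stock autoconf-style build that the steps above suggest,
a typical session might look like the following (the install prefix is
only an example):

    # configure, build, and install OOPSE
    ./configure --prefix=$HOME/oopse
    make
    make install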

That's it. Documentation will be forthcoming after the paper is
published.