1 \chapter{\label{chapt:oopse}OOPSE: AN OPEN SOURCE OBJECT-ORIENTED PARALLEL SIMULATION ENGINE FOR MOLECULAR DYNAMICS}
2
3
4
5 %% \begin{abstract}
6 %% We detail the capabilities of a new open-source parallel simulation
7 %% package ({\sc oopse}) that can perform molecular dynamics simulations
8 %% on atom types that are missing from other popular packages. In
9 %% particular, {\sc oopse} is capable of performing orientational
10 %% dynamics on dipolar systems, and it can handle simulations of metallic
11 %% systems using the embedded atom method ({\sc eam}).
12 %% \end{abstract}
13
14 \lstset{language=C,frame=TB,basicstyle=\small,basicstyle=\ttfamily, %
15 xleftmargin=0.5in, xrightmargin=0.5in,captionpos=b, %
16 abovecaptionskip=0.5cm, belowcaptionskip=0.5cm}
17
18 \section{\label{oopseSec:foreword}Foreword}
19
20 In this chapter, I present and detail the capabilities of the open
source simulation package {\sc oopse}. It is important to note that a
22 simulation package of this size and scope would not have been possible
23 without the collaborative efforts of my colleagues: Charles
24 F.~Vardeman II, Teng Lin, Christopher J.~Fennell and J.~Daniel
25 Gezelter. Although my contributions to {\sc oopse} are major,
26 consideration of my work apart from the others would not give a
complete description of the package's capabilities. As such, all
28 contributions to {\sc oopse} to date are presented in this chapter.
29
30 Charles Vardeman is responsible for the parallelization of the long
31 range forces in {\sc oopse} (Sec.~\ref{oopseSec:parallelization}) as
32 well as the inclusion of the embedded-atom potential for transition
33 metals (Sec.~\ref{oopseSec:eam}). Teng Lin's contributions include
34 refinement of the periodic boundary conditions
35 (Sec.~\ref{oopseSec:pbc}), the z-constraint method
36 (Sec.~\ref{oopseSec:zcons}), refinement of the property analysis
37 programs (Sec.~\ref{oopseSec:props}), and development in the extended
38 system integrators (Sec.~\ref{oopseSec:noseHooverThermo}). Christopher
39 Fennell worked on the symplectic integrator
40 (Sec.~\ref{oopseSec:integrate}) and the refinement of the {\sc ssd}
41 water model (Sec.~\ref{oopseSec:SSD}). Daniel Gezelter lent his
42 talents in the development of the extended system integrators
43 (Sec.~\ref{oopseSec:noseHooverThermo}) as well as giving general
44 direction and oversight to the entire project. My responsibilities
45 covered the creation and specification of {\sc bass}
46 (Sec.~\ref{oopseSec:IOfiles}), the original development of the single
47 processor version of {\sc oopse}, contributions to the extended state
48 integrators (Sec.~\ref{oopseSec:noseHooverThermo}), the implementation
49 of the Lennard-Jones (Sec.~\ref{sec:LJPot}) and {\sc duff}
50 (Sec.~\ref{oopseSec:DUFF}) force fields, and initial implementation of
51 the property analysis (Sec.~\ref{oopseSec:props}) and system
52 initialization (Sec.~\ref{oopseSec:initCoords}) utility programs. {\sc
53 oopse}, like many other Molecular Dynamics programs, is a work in
54 progress, and will continue to be so for many graduate student
55 lifetimes.
56
57 \section{\label{sec:intro}Introduction}
58
59 When choosing to simulate a chemical system with molecular dynamics,
60 there are a variety of options available. For simple systems, one
might consider writing one's own simulation code. However, as systems
62 grow larger and more complex, building and maintaining code for the
63 simulations becomes a time consuming task. In such cases it is usually
64 more convenient for a researcher to turn to pre-existing simulation
65 packages. These packages, such as {\sc amber}\cite{pearlman:1995} and
66 {\sc charmm}\cite{Brooks83}, provide powerful tools for researchers to
67 conduct simulations of their systems without spending their time
68 developing a code base to conduct their research. This then frees them
69 to perhaps explore experimental analogues to their models.
70
71 Despite their utility, problems with these packages arise when
72 researchers try to develop techniques or energetic models that the
73 code was not originally designed to simulate. Examples of uncommonly
implemented techniques and energetics include: dipole-dipole
75 interactions, rigid body dynamics, and metallic embedded
76 potentials. When faced with these obstacles, a researcher must either
77 develop their own code or license and extend one of the commercial
packages. What we have elected to do is develop a package of
79 simulation code capable of implementing the types of models upon which
80 our research is based.
81
82 In developing {\sc oopse}, we have adhered to the precepts of Open
83 Source development, and are releasing our source code with a
84 permissive license. It is our intent that by doing so, other
85 researchers might benefit from our work, and add their own
86 contributions to the package. The license under which {\sc oopse} is
87 distributed allows any researcher to download and modify the source
88 code for their own use. In this way further development of {\sc oopse}
89 is not limited to only the models of interest to ourselves, but also
90 those of the community of scientists who contribute back to the
91 project.
92
93 We have structured this chapter to first discuss the empirical energy
functions that {\sc oopse} implements in
95 Sec.~\ref{oopseSec:empiricalEnergy}. Following that is a discussion of
96 the various input and output files associated with the package
97 (Sec.~\ref{oopseSec:IOfiles}). Sec.~\ref{oopseSec:mechanics}
98 elucidates the various Molecular Dynamics algorithms {\sc oopse}
99 implements in the integration of the Newtonian equations of
100 motion. Basic analysis of the trajectories obtained from the
101 simulation is discussed in Sec.~\ref{oopseSec:props}. Program design
considerations are presented in Sec.~\ref{oopseSec:design}. Lastly,
Sec.~\ref{oopseSec:conclusion} concludes the chapter.
104
105 \section{\label{oopseSec:empiricalEnergy}The Empirical Energy Functions}
106
107 \subsection{\label{oopseSec:atomsMolecules}Atoms, Molecules and Rigid Bodies}
108
109 The basic unit of an {\sc oopse} simulation is the atom. The
110 parameters describing the atom are generalized to make the atom as
111 flexible a representation as possible. They may represent specific
112 atoms of an element, or be used for collections of atoms such as
113 methyl and carbonyl groups. The atoms are also capable of having
114 directional components associated with them (\emph{e.g.}~permanent
115 dipoles). Charges, permanent dipoles, and Lennard-Jones parameters for
116 a given atom type are set in the force field parameter files.
117
118 \begin{lstlisting}[float,caption={[Specifier for molecules and atoms] A sample specification of an Ar molecule},label=sch:AtmMole]
molecule{
  name = "Ar";
  nAtoms = 1;
  atom[0]{
    type="Ar";
    position( 0.0, 0.0, 0.0 );
  }
}
127 \end{lstlisting}
128
129
130 Atoms can be collected into secondary structures such as rigid bodies
131 or molecules. The molecule is a way for {\sc oopse} to keep track of
the atoms in a simulation in a logical manner. Molecular units store the
133 identities of all the atoms and rigid bodies associated with
134 themselves, and are responsible for the evaluation of their own
135 internal interactions (\emph{i.e.}~bonds, bends, and torsions). Scheme
136 \ref{sch:AtmMole} shows how one creates a molecule in a ``model'' or
\texttt{.mdl} file. The positions of the atoms given in the
declaration are relative to the origin of the molecule, and are used
when creating a system containing the molecule.
140
141 As stated previously, one of the features that sets {\sc oopse} apart
142 from most of the current molecular simulation packages is the ability
143 to handle rigid body dynamics. Rigid bodies are non-spherical
144 particles or collections of particles that have a constant internal
145 potential and move collectively.\cite{Goldstein01} They are not
146 included in most simulation packages because of the algorithmic
147 complexity involved in propagating orientational degrees of
148 freedom. Until recently, integrators which propagate orientational
149 motion have been much worse than those available for translational
150 motion.
151
152 Moving a rigid body involves determination of both the force and
153 torque applied by the surroundings, which directly affect the
154 translational and rotational motion in turn. In order to accumulate
155 the total force on a rigid body, the external forces and torques must
156 first be calculated for all the internal particles. The total force on
157 the rigid body is simply the sum of these external forces.
Accumulation of the total torque on the rigid body is more complex
than that of the force because the torque must be taken about the
center of mass of the rigid body. The torque on rigid body $i$ is
161 \begin{equation}
162 \boldsymbol{\tau}_i=
163 \sum_{a}\biggl[(\mathbf{r}_{ia}-\mathbf{r}_i)\times \mathbf{f}_{ia}
164 + \boldsymbol{\tau}_{ia}\biggr]
165 \label{eq:torqueAccumulate}
166 \end{equation}
167 where $\boldsymbol{\tau}_i$ and $\mathbf{r}_i$ are the torque on and
168 position of the center of mass respectively, while $\mathbf{f}_{ia}$,
169 $\mathbf{r}_{ia}$, and $\boldsymbol{\tau}_{ia}$ are the force on,
170 position of, and torque on the component particles of the rigid body.
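
Scheme~\ref{sch:torqueAccum} sketches this accumulation in C. It is
only an illustration of Eq.~\ref{eq:torqueAccumulate}; the structure
and function names are hypothetical and are not taken from the {\sc
oopse} source.

\begin{lstlisting}[float,caption={[Sketch of rigid body force and torque accumulation]A schematic C routine that accumulates the total force and torque on a rigid body from its component particles. The structure and function names are illustrative only.},label={sch:torqueAccum}]
typedef struct {
  double pos[3]; /* space-fixed position of the particle */
  double frc[3]; /* force accumulated on the particle    */
  double trq[3]; /* torque accumulated on the particle   */
} Particle;

void accumulateRigidBody(const Particle *p, int nMembers,
                         const double comPos[3],
                         double totalFrc[3], double totalTrq[3]) {
  for (int k = 0; k < 3; k++) { totalFrc[k] = 0.0; totalTrq[k] = 0.0; }

  for (int a = 0; a < nMembers; a++) {
    double rel[3]; /* r_ia - r_i */
    for (int k = 0; k < 3; k++) {
      rel[k] = p[a].pos[k] - comPos[k];
      totalFrc[k] += p[a].frc[k];
    }
    /* tau_i += (r_ia - r_i) x f_ia + tau_ia */
    totalTrq[0] += rel[1]*p[a].frc[2] - rel[2]*p[a].frc[1] + p[a].trq[0];
    totalTrq[1] += rel[2]*p[a].frc[0] - rel[0]*p[a].frc[2] + p[a].trq[1];
    totalTrq[2] += rel[0]*p[a].frc[1] - rel[1]*p[a].frc[0] + p[a].trq[2];
  }
}
\end{lstlisting}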
171
172 The summation of the total torque is done in the body fixed axis of
173 each rigid body. In order to move between the space fixed and body
174 fixed coordinate axes, parameters describing the orientation must be
175 maintained for each rigid body. At a minimum, the rotation matrix
176 (\textbf{A}) can be described by the three Euler angles ($\phi,
177 \theta,$ and $\psi$), where the elements of \textbf{A} are composed of
178 trigonometric operations involving $\phi, \theta,$ and
179 $\psi$.\cite{Goldstein01} In order to avoid numerical instabilities
180 inherent in using the Euler angles, the four parameter ``quaternion''
181 scheme is often used. The elements of \textbf{A} can be expressed as
182 arithmetic operations involving the four quaternions ($q_0, q_1, q_2,$
183 and $q_3$).\cite{allen87:csl} Use of quaternions also leads to
184 performance enhancements, particularly for very small
185 systems.\cite{Evans77}
186
187 {\sc oopse} utilizes a relatively new scheme that propagates the
188 entire nine parameter rotation matrix. Further discussion
189 on this choice can be found in Sec.~\ref{oopseSec:integrate}. An example
190 definition of a rigid body can be seen in Scheme
191 \ref{sch:rigidBody}. The positions in the atom definitions are the
192 placements of the atoms relative to the origin of the rigid body,
193 which itself has a position relative to the origin of the molecule.
194
195 \begin{lstlisting}[float,caption={[Defining rigid bodies]A sample definition of a rigid body},label={sch:rigidBody}]
molecule{
  name = "TIP3P_water";
  nRigidBodies = 1;
  rigidBody[0]{
    nAtoms = 3;
    atom[0]{
      type = "O_TIP3P";
      position( 0.0, 0.0, -0.06556 );
    }
    atom[1]{
      type = "H_TIP3P";
      position( 0.0, 0.75695, 0.52032 );
    }
    atom[2]{
      type = "H_TIP3P";
      position( 0.0, -0.75695, 0.52032 );
    }
    position( 0.0, 0.0, 0.0 );
    orientation( 0.0, 0.0, 1.0 );
  }
}
217 \end{lstlisting}
218
219 \subsection{\label{sec:LJPot}The Lennard Jones Force Field}
220
221 The most basic force field implemented in {\sc oopse} is the
222 Lennard-Jones force field, which mimics the van der Waals interaction at
223 long distances, and uses an empirical repulsion at short
224 distances. The Lennard-Jones potential is given by:
225 \begin{equation}
226 V_{\text{LJ}}(r_{ij}) =
227 4\epsilon_{ij} \biggl[
228 \biggl(\frac{\sigma_{ij}}{r_{ij}}\biggr)^{12}
229 - \biggl(\frac{\sigma_{ij}}{r_{ij}}\biggr)^{6}
230 \biggr]
231 \label{eq:lennardJonesPot}
232 \end{equation}
233 Where $r_{ij}$ is the distance between particles $i$ and $j$,
234 $\sigma_{ij}$ scales the length of the interaction, and
235 $\epsilon_{ij}$ scales the well depth of the potential. Scheme
\ref{sch:LJFF} gives an example \texttt{.bass} file that
237 sets up a system of 108 Ar particles to be simulated using the
238 Lennard-Jones force field.
239
240 \begin{lstlisting}[float,caption={[Invocation of the Lennard-Jones force field] A sample system using the Lennard-Jones force field.},label={sch:LJFF}]

#include "argon.mdl"

nComponents = 1;
component{
  type = "Ar";
  nMol = 108;
}

initialConfig = "./argon.init";

forceField = "LJ";
253 \end{lstlisting}
254
255 Because this potential is calculated between all pairs, the force
256 evaluation can become computationally expensive for large systems. To
257 keep the pair evaluations to a manageable number, {\sc oopse} employs
258 a cut-off radius.\cite{allen87:csl} The cutoff radius can either be
259 specified in the \texttt{.bass} file, or left as its default value of
260 $2.5\sigma_{ii}$, where $\sigma_{ii}$ is the largest Lennard-Jones
261 length parameter present in the simulation. Truncating the calculation
262 at $r_{\text{cut}}$ introduces a discontinuity into the potential
263 energy and the force. To offset this discontinuity in the potential,
264 the energy value at $r_{\text{cut}}$ is subtracted from the
265 potential. This causes the potential to go to zero smoothly at the
266 cut-off radius, and preserves conservation of energy in integrating
267 the equations of motion.
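
As an illustration, Scheme~\ref{sch:ljShifted} sketches a truncated
and shifted Lennard-Jones pair energy in C. The routine is not the
{\sc oopse} implementation; the function and variable names are
hypothetical.

\begin{lstlisting}[float,caption={[Sketch of a truncated and shifted Lennard-Jones pair energy]A schematic C routine for the truncated, energy-shifted Lennard-Jones pair potential. Illustrative only.},label={sch:ljShifted}]
#include <math.h>

/* Truncated, energy-shifted Lennard-Jones pair potential:
 * returns 0 beyond rCut, otherwise V_LJ(r) - V_LJ(rCut), so the
 * potential vanishes at the cut-off radius. */
double ljShifted(double r, double sigma, double epsilon, double rCut) {
  if (r > rCut) return 0.0;

  double sr6    = pow(sigma / r,    6.0);
  double src6   = pow(sigma / rCut, 6.0);
  double v      = 4.0 * epsilon * (sr6  * sr6  - sr6);
  double vAtCut = 4.0 * epsilon * (src6 * src6 - src6);

  return v - vAtCut;
}
\end{lstlisting}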
268
Interactions between dissimilar particles require the generation of
270 cross term parameters for $\sigma$ and $\epsilon$. These are
271 calculated through the Lorentz-Berthelot mixing
272 rules:\cite{allen87:csl}
273 \begin{equation}
274 \sigma_{ij} = \frac{1}{2}[\sigma_{ii} + \sigma_{jj}]
275 \label{eq:sigmaMix}
276 \end{equation}
277 and
278 \begin{equation}
279 \epsilon_{ij} = \sqrt{\epsilon_{ii} \epsilon_{jj}}
280 \label{eq:epsilonMix}
281 \end{equation}
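
In code, these combinations amount to one line each, as in the
schematic routine of Scheme~\ref{sch:mixRules} (the names are
hypothetical, not those of the {\sc oopse} source).

\begin{lstlisting}[float,caption={[Sketch of the Lorentz-Berthelot mixing rules]A schematic C routine combining like-pair parameters into cross terms. Illustrative only.},label={sch:mixRules}]
#include <math.h>

/* Lorentz-Berthelot combination of like-pair parameters into the
 * cross terms for a dissimilar pair. */
void lorentzBerthelot(double sigma_ii, double epsilon_ii,
                      double sigma_jj, double epsilon_jj,
                      double *sigma_ij, double *epsilon_ij) {
  *sigma_ij   = 0.5 * (sigma_ii + sigma_jj);   /* arithmetic mean */
  *epsilon_ij = sqrt(epsilon_ii * epsilon_jj); /* geometric mean  */
}
\end{lstlisting}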
282
283 \subsection{\label{oopseSec:DUFF}Dipolar Unified-Atom Force Field}
284
285 The dipolar unified-atom force field ({\sc duff}) was developed to
286 simulate lipid bilayers. The simulations require a model capable of
287 forming bilayers, while still being sufficiently computationally
288 efficient to allow large systems ($\sim$100's of phospholipids,
289 $\sim$1000's of waters) to be simulated for long times
290 ($\sim$10's of nanoseconds).
291
292 With this goal in mind, {\sc duff} has no point
293 charges. Charge-neutral distributions were replaced with dipoles,
294 while most atoms and groups of atoms were reduced to Lennard-Jones
295 interaction sites. This simplification cuts the length scale of long
296 range interactions from $\frac{1}{r}$ to $\frac{1}{r^3}$, and allows
297 us to avoid the computationally expensive Ewald sum. Instead, we can
298 use neighbor-lists and cutoff radii for the dipolar interactions, or
299 include a reaction field to mimic larger range interactions.
300
301 As an example, lipid head-groups in {\sc duff} are represented as
302 point dipole interaction sites. By placing a dipole at the head group
303 center of mass, our model mimics the charge separation found in common
304 phospholipids such as phosphatidylcholine.\cite{Cevc87} Additionally,
305 a large Lennard-Jones site is located at the pseudoatom's center of
306 mass. The model is illustrated by the red atom in
307 Fig.~\ref{oopseFig:lipidModel}. The water model we use to complement
308 the dipoles of the lipids is our reparameterization of the soft sticky
309 dipole (SSD) model of Ichiye
310 \emph{et al.}\cite{liu96:new_model}
311
312 \begin{figure}
313 \centering
314 \includegraphics[width=\linewidth]{lipidModel.eps}
315 \caption[A representation of a lipid model in {\sc duff}]{A representation of the lipid model. $\phi$ is the torsion angle, $\theta$ %
316 is the bend angle, $\mu$ is the dipole moment of the head group, and n
317 is the chain length.}
318 \label{oopseFig:lipidModel}
319 \end{figure}
320
321 We have used a set of scalable parameters to model the alkyl groups
322 with Lennard-Jones sites. For this, we have borrowed parameters from
323 the TraPPE force field of Siepmann
324 \emph{et al}.\cite{Siepmann1998} TraPPE is a unified-atom
325 representation of n-alkanes, which is parametrized against phase
326 equilibria using Gibbs ensemble Monte Carlo simulation
327 techniques.\cite{Siepmann1998} One of the advantages of TraPPE is that
328 it generalizes the types of atoms in an alkyl chain to keep the number
329 of pseudoatoms to a minimum; the parameters for a unified atom such as
330 $\text{CH}_2$ do not change depending on what species are bonded to
331 it.
332
333 TraPPE also constrains all bonds to be of fixed length. Typically,
bond vibrations are the fastest motions in a molecular dynamics
335 simulation. Small time steps between force evaluations must be used to
336 ensure adequate energy conservation in the bond degrees of freedom. By
337 constraining the bond lengths, larger time steps may be used when
338 integrating the equations of motion. A simulation using {\sc duff} is
339 illustrated in Scheme \ref{sch:DUFF}.
340
341 \begin{lstlisting}[float,caption={[Invocation of {\sc duff}]Sample \texttt{.bass} file showing a simulation utilizing {\sc duff}},label={sch:DUFF}]

#include "water.mdl"
#include "lipid.mdl"

nComponents = 2;
component{
  type = "simpleLipid_16";
  nMol = 60;
}

component{
  type = "SSD_water";
  nMol = 1936;
}

initialConfig = "bilayer.init";

forceField = "DUFF";

361 \end{lstlisting}
362
363 \subsection{\label{oopseSec:energyFunctions}{\sc duff} Energy Functions}
364
365 The total potential energy function in {\sc duff} is
366 \begin{equation}
367 V = \sum^{N}_{I=1} V^{I}_{\text{Internal}}
368 + \sum^{N-1}_{I=1} \sum_{J>I} V^{IJ}_{\text{Cross}}
369 \label{eq:totalPotential}
370 \end{equation}
371 Where $V^{I}_{\text{Internal}}$ is the internal potential of molecule $I$:
372 \begin{equation}
373 V^{I}_{\text{Internal}} =
374 \sum_{\theta_{ijk} \in I} V_{\text{bend}}(\theta_{ijk})
375 + \sum_{\phi_{ijkl} \in I} V_{\text{torsion}}(\phi_{ijkl})
376 + \sum_{i \in I} \sum_{(j>i+4) \in I}
377 \biggl[ V_{\text{LJ}}(r_{ij}) + V_{\text{dipole}}
378 (\mathbf{r}_{ij},\boldsymbol{\Omega}_{i},\boldsymbol{\Omega}_{j})
379 \biggr]
380 \label{eq:internalPotential}
381 \end{equation}
382 Here $V_{\text{bend}}$ is the bend potential for all 1, 3 bonded pairs
383 within the molecule $I$, and $V_{\text{torsion}}$ is the torsion potential
384 for all 1, 4 bonded pairs. The pairwise portions of the internal
385 potential are excluded for pairs that are closer than three bonds,
386 i.e.~atom pairs farther away than a torsion are included in the
387 pair-wise loop.
388
389
390 The bend potential of a molecule is represented by the following function:
391 \begin{equation}
392 V_{\text{bend}}(\theta_{ijk}) = k_{\theta}( \theta_{ijk} - \theta_0 )^2 \label{eq:bendPot}
393 \end{equation}
394 Where $\theta_{ijk}$ is the angle defined by atoms $i$, $j$, and $k$
395 (see Fig.~\ref{oopseFig:lipidModel}), $\theta_0$ is the equilibrium
396 bond angle, and $k_{\theta}$ is the force constant which determines the
397 strength of the harmonic bend. The parameters for $k_{\theta}$ and
398 $\theta_0$ are borrowed from those in TraPPE.\cite{Siepmann1998}
399
400 The torsion potential and parameters are also borrowed from TraPPE. It is
401 of the form:
402 \begin{equation}
403 V_{\text{torsion}}(\phi) = c_1[1 + \cos \phi]
404 + c_2[1 + \cos(2\phi)]
405 + c_3[1 + \cos(3\phi)]
406 \label{eq:origTorsionPot}
407 \end{equation}
408 Where:
409 \begin{equation}
410 \cos\phi = (\hat{\mathbf{r}}_{ij} \times \hat{\mathbf{r}}_{jk}) \cdot
411 (\hat{\mathbf{r}}_{jk} \times \hat{\mathbf{r}}_{kl})
412 \label{eq:torsPhi}
413 \end{equation}
414 Here, $\hat{\mathbf{r}}_{\alpha\beta}$ are the set of unit bond
415 vectors between atoms $i$, $j$, $k$, and $l$. For computational
416 efficiency, the torsion potential has been recast after the method of
417 {\sc charmm},\cite{Brooks83} in which the angle series is converted to
418 a power series of the form:
419 \begin{equation}
420 V_{\text{torsion}}(\phi) =
421 k_3 \cos^3 \phi + k_2 \cos^2 \phi + k_1 \cos \phi + k_0
422 \label{eq:torsionPot}
423 \end{equation}
424 Where:
425 \begin{align*}
426 k_0 &= c_1 + c_3 \\
427 k_1 &= c_1 - 3c_3 \\
428 k_2 &= 2 c_2 \\
429 k_3 &= 4c_3
430 \end{align*}
431 By recasting the potential as a power series, repeated trigonometric
432 evaluations are avoided during the calculation of the potential energy.
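
In code, the power-series form amounts to a single polynomial
evaluation per torsion, as sketched in Scheme~\ref{sch:torsionPoly}
(an illustration only; the function name is hypothetical).

\begin{lstlisting}[float,caption={[Sketch of the torsion power series]A schematic C routine evaluating the torsion potential from its power-series form. Illustrative only.},label={sch:torsionPoly}]
/* Torsion energy from the power series in cos(phi).  The k
 * coefficients come from the TraPPE c1, c2, c3 parameters:
 *   k0 = c1 + c3,  k1 = c1 - 3*c3,  k2 = 2*c2,  k3 = 4*c3.
 * Horner evaluation avoids any trigonometric calls. */
double torsionEnergy(double cosPhi,
                     double k0, double k1, double k2, double k3) {
  return ((k3 * cosPhi + k2) * cosPhi + k1) * cosPhi + k0;
}
\end{lstlisting}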
433
434
435 The cross potential between molecules $I$ and $J$, $V^{IJ}_{\text{Cross}}$, is
436 as follows:
437 \begin{equation}
438 V^{IJ}_{\text{Cross}} =
439 \sum_{i \in I} \sum_{j \in J}
440 \biggl[ V_{\text{LJ}}(r_{ij}) + V_{\text{dipole}}
441 (\mathbf{r}_{ij},\boldsymbol{\Omega}_{i},\boldsymbol{\Omega}_{j})
442 + V_{\text{sticky}}
443 (\mathbf{r}_{ij},\boldsymbol{\Omega}_{i},\boldsymbol{\Omega}_{j})
444 \biggr]
445 \label{eq:crossPotentail}
446 \end{equation}
Where $V_{\text{LJ}}$ is the Lennard-Jones potential,
$V_{\text{dipole}}$ is the dipole-dipole potential, and
449 $V_{\text{sticky}}$ is the sticky potential defined by the SSD model
450 (Sec.~\ref{oopseSec:SSD}). Note that not all atom types include all
451 interactions.
452
453 The dipole-dipole potential has the following form:
454 \begin{equation}
455 V_{\text{dipole}}(\mathbf{r}_{ij},\boldsymbol{\Omega}_{i},
456 \boldsymbol{\Omega}_{j}) = \frac{|\mu_i||\mu_j|}{4\pi\epsilon_{0}r_{ij}^{3}} \biggl[
457 \boldsymbol{\hat{u}}_{i} \cdot \boldsymbol{\hat{u}}_{j}
458 -
459 3(\boldsymbol{\hat{u}}_i \cdot \hat{\mathbf{r}}_{ij}) %
460 (\boldsymbol{\hat{u}}_j \cdot \hat{\mathbf{r}}_{ij}) \biggr]
461 \label{eq:dipolePot}
462 \end{equation}
463 Here $\mathbf{r}_{ij}$ is the vector starting at atom $i$ pointing
464 towards $j$, and $\boldsymbol{\Omega}_i$ and $\boldsymbol{\Omega}_j$
465 are the orientational degrees of freedom for atoms $i$ and $j$
466 respectively. $|\mu_i|$ is the magnitude of the dipole moment of atom
467 $i$, $\boldsymbol{\hat{u}}_i$ is the standard unit orientation vector
468 of $\boldsymbol{\Omega}_i$, and $\boldsymbol{\hat{r}}_{ij}$ is the
469 unit vector pointing along $\mathbf{r}_{ij}$
470 ($\boldsymbol{\hat{r}}_{ij}=\mathbf{r}_{ij}/|\mathbf{r}_{ij}|$).
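
A literal transcription of Eq.~\ref{eq:dipolePot} is sketched in
Scheme~\ref{sch:dipolePair}; the names are hypothetical, and the
electrostatic prefactor is passed in explicitly so that the caller
chooses the unit system.

\begin{lstlisting}[float,caption={[Sketch of the dipole-dipole pair energy]A schematic C routine evaluating the dipole-dipole interaction. Illustrative only.},label={sch:dipolePair}]
#include <math.h>

static double dot3(const double a[3], const double b[3]) {
  return a[0]*b[0] + a[1]*b[1] + a[2]*b[2];
}

/* Dipole-dipole energy: ui and uj are unit orientation vectors, rij
 * is the vector from i to j, and pre is 1/(4 pi epsilon_0) in the
 * caller's unit system. */
double dipolePair(double mu_i, double mu_j, const double ui[3],
                  const double uj[3], const double rij[3], double pre) {
  double r = sqrt(dot3(rij, rij));
  double rhat[3] = { rij[0]/r, rij[1]/r, rij[2]/r };

  double uu   = dot3(ui, uj);
  double ur_i = dot3(ui, rhat);
  double ur_j = dot3(uj, rhat);

  return pre * mu_i * mu_j / (r*r*r) * (uu - 3.0 * ur_i * ur_j);
}
\end{lstlisting}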
471
To improve the computational efficiency of the dipole-dipole interactions,
{\sc oopse} employs an electrostatic cutoff radius. This parameter can
be set in the \texttt{.bass} file, and controls the length scale over
which dipole interactions are felt. To compensate for the
discontinuity in the potential and the forces at the cutoff radius, we
have implemented a switching function to smoothly scale the
dipole-dipole interaction at the cutoff:
479 \begin{equation}
480 S(r_{ij}) =
481 \begin{cases}
482 1 & \text{if $r_{ij} \le r_t$},\\
483 \frac{(r_{\text{cut}} + 2r_{ij} - 3r_t)(r_{\text{cut}} - r_{ij})^2}
{(r_{\text{cut}} - r_t)^3}
485 & \text{if $r_t < r_{ij} \le r_{\text{cut}}$}, \\
486 0 & \text{if $r_{ij} > r_{\text{cut}}$.}
487 \end{cases}
488 \label{eq:dipoleSwitching}
489 \end{equation}
Here $S(r_{ij})$ scales the potential at a given $r_{ij}$, and $r_t$
is the taper radius, which lies a specified thickness inside the
electrostatic cutoff. The switching thickness can be set in the
\texttt{.bass} file.
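
A direct transcription of Eq.~\ref{eq:dipoleSwitching} into C might
look like Scheme~\ref{sch:dipoleSwitch}; this is a sketch rather than
the {\sc oopse} source, and the argument names are hypothetical.

\begin{lstlisting}[float,caption={[Sketch of the electrostatic switching function]A schematic C routine for the cubic switching function used to taper the dipole-dipole interaction. Illustrative only.},label={sch:dipoleSwitch}]
/* Cubic switching function: S = 1 inside the taper radius rTaper,
 * S = 0 beyond the cutoff rCut, and a smooth ramp in between. */
double dipoleSwitch(double r, double rTaper, double rCut) {
  if (r <= rTaper) return 1.0;
  if (r >  rCut)   return 0.0;

  double num = (rCut + 2.0*r - 3.0*rTaper) * (rCut - r) * (rCut - r);
  double den = (rCut - rTaper) * (rCut - rTaper) * (rCut - rTaper);
  return num / den;
}
\end{lstlisting}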
493
494 \subsection{\label{oopseSec:SSD}The {\sc duff} Water Models: SSD/E and SSD/RF}
495
496 In the interest of computational efficiency, the default solvent used
497 by {\sc oopse} is the extended Soft Sticky Dipole (SSD/E) water
498 model.\cite{Gezelter04} The original SSD was developed by Ichiye
499 \emph{et al.}\cite{liu96:new_model} as a modified form of the hard-sphere
500 water model proposed by Bratko, Blum, and
501 Luzar.\cite{Bratko85,Bratko95} It consists of a single point dipole
502 with a Lennard-Jones core and a sticky potential that directs the
503 particles to assume the proper hydrogen bond orientation in the first
504 solvation shell. Thus, the interaction between two SSD water molecules
505 \emph{i} and \emph{j} is given by the potential
506 \begin{equation}
507 V_{ij} =
508 V_{ij}^{LJ} (r_{ij})\ + V_{ij}^{dp}
509 (\mathbf{r}_{ij},\boldsymbol{\Omega}_i,\boldsymbol{\Omega}_j)\ +
510 V_{ij}^{sp}
511 (\mathbf{r}_{ij},\boldsymbol{\Omega}_i,\boldsymbol{\Omega}_j),
512 \label{eq:ssdPot}
513 \end{equation}
514 where the $\mathbf{r}_{ij}$ is the position vector between molecules
515 \emph{i} and \emph{j} with magnitude equal to the distance $r_{ij}$, and
516 $\boldsymbol{\Omega}_i$ and $\boldsymbol{\Omega}_j$ represent the
517 orientations of the respective molecules. The Lennard-Jones and dipole
518 parts of the potential are given by equations \ref{eq:lennardJonesPot}
519 and \ref{eq:dipolePot} respectively. The sticky part is described by
520 the following,
521 \begin{equation}
V_{ij}^{sp}(\mathbf{r}_{ij},\boldsymbol{\Omega}_i,\boldsymbol{\Omega}_j)=
523 \frac{\nu_0}{2}[s(r_{ij})w(\mathbf{r}_{ij},
524 \boldsymbol{\Omega}_i,\boldsymbol{\Omega}_j) +
525 s^\prime(r_{ij})w^\prime(\mathbf{r}_{ij},
526 \boldsymbol{\Omega}_i,\boldsymbol{\Omega}_j)]\ ,
527 \label{eq:stickyPot}
528 \end{equation}
529 where $\nu_0$ is a strength parameter for the sticky potential, and
530 $s$ and $s^\prime$ are cubic switching functions which turn off the
531 sticky interaction beyond the first solvation shell. The $w$ function
532 can be thought of as an attractive potential with tetrahedral
533 geometry:
534 \begin{equation}
535 w({\bf r}_{ij},{\bf \Omega}_i,{\bf \Omega}_j)=
536 \sin\theta_{ij}\sin2\theta_{ij}\cos2\phi_{ij},
537 \label{eq:stickyW}
538 \end{equation}
539 while the $w^\prime$ function counters the normal aligned and
540 anti-aligned structures favored by point dipoles:
541 \begin{equation}
542 w^\prime({\bf r}_{ij},{\bf \Omega}_i,{\bf \Omega}_j)=
543 (\cos\theta_{ij}-0.6)^2(\cos\theta_{ij}+0.8)^2-w^0,
544 \label{eq:stickyWprime}
545 \end{equation}
546 It should be noted that $w$ is proportional to the sum of the $Y_3^2$
547 and $Y_3^{-2}$ spherical harmonics (a linear combination which
548 enhances the tetrahedral geometry for hydrogen bonded structures),
549 while $w^\prime$ is a purely empirical function. A more detailed
550 description of the functional parts and variables in this potential
551 can be found in the original SSD
552 articles.\cite{liu96:new_model,liu96:monte_carlo,chandra99:ssd_md,Ichiye03}
553
554 Since SSD/E is a single-point {\it dipolar} model, the force
555 calculations are simplified significantly relative to the standard
556 {\it charged} multi-point models. In the original Monte Carlo
557 simulations using this model, Ichiye {\it et al.} reported that using
558 SSD decreased computer time by a factor of 6-7 compared to other
559 models.\cite{liu96:new_model} What is most impressive is that these savings
560 did not come at the expense of accurate depiction of the liquid state
561 properties. Indeed, SSD/E maintains reasonable agreement with the Head-Gordon
562 diffraction data for the structural features of liquid
563 water.\cite{hura00,liu96:new_model} Additionally, the dynamical properties
564 exhibited by SSD/E agree with experiment better than those of more
565 computationally expensive models (like TIP3P and
566 SPC/E).\cite{chandra99:ssd_md} The combination of speed and accurate depiction
567 of solvent properties makes SSD/E a very attractive model for the
568 simulation of large scale biochemical simulations.
569
570 Recent constant pressure simulations revealed issues in the original
571 SSD model that led to lower than expected densities at all target
572 pressures.\cite{Ichiye03,Gezelter04} The default model in {\sc oopse}
573 is therefore SSD/E, a density corrected derivative of SSD that
574 exhibits improved liquid structure and transport behavior. If the use
575 of a reaction field long-range interaction correction is desired, it
576 is recommended that the parameters be modified to those of the SSD/RF
577 model. Solvent parameters can be easily modified in an accompanying
578 \texttt{.bass} file as illustrated in the scheme below. A table of the
579 parameter values and the drawbacks and benefits of the different
580 density corrected SSD models can be found in
581 reference~\cite{Gezelter04}.
582
583 \begin{lstlisting}[float,caption={[A simulation of {\sc ssd} water]An example file showing a simulation including {\sc ssd} water.},label={sch:ssd}]

#include "water.mdl"

nComponents = 1;
component{
  type = "SSD_water";
  nMol = 864;
}

initialConfig = "liquidWater.init";

forceField = "DUFF";

/*
 * The following two flags set the cutoff
 * radius for the electrostatic forces
 * as well as the skin thickness of the switching
 * function.
 */

electrostaticCutoffRadius = 9.2;
electrostaticSkinThickness = 1.38;

607 \end{lstlisting}
608
609
610 \subsection{\label{oopseSec:eam}Embedded Atom Method}
611
There are Molecular Dynamics packages which have the
capacity to simulate metallic systems, including some that have
parallel computational abilities.\cite{plimpton93} Potentials that
describe bonding in transition metal
systems\cite{Finnis84,Ercolessi88,Chen90,Qi99,Ercolessi02} have an
attractive interaction which models ``embedding''
a positively charged metal ion in the electron density due to the
free valence ``sea'' of electrons created by the surrounding atoms in
the system. A mostly-repulsive pairwise part of the potential
describes the interaction of the positively charged metal core ions
with one another. One such potential description, the Embedded Atom
Method ({\sc eam}),\cite{Daw84,FBD86,johnson89,Lu97} has seen
particularly wide adoption and has been selected for inclusion in {\sc oopse}. A
625 good review of {\sc eam} and other metallic potential formulations was written
626 by Voter.\cite{voter}
627
628 The {\sc eam} potential has the form:
629 \begin{eqnarray}
630 V & = & \sum_{i} F_{i}\left[\rho_{i}\right] + \sum_{i} \sum_{j \neq i}
631 \phi_{ij}({\bf r}_{ij}) \\
632 \rho_{i} & = & \sum_{j \neq i} f_{j}({\bf r}_{ij})
633 \end{eqnarray}
where $F_{i}$ is the embedding function that gives the energy
635 required to embed a positively-charged core ion $i$ into a linear
636 superposition of spherically averaged atomic electron densities given
637 by $\rho_{i}$. $\phi_{ij}$ is a primarily repulsive pairwise
638 interaction between atoms $i$ and $j$. In the original formulation of
{\sc eam}\cite{Daw84}, $\phi_{ij}$ was an entirely repulsive term;
however, later refinements to {\sc eam} have shown that the non-uniqueness
between $F$ and $\phi$ allows for more general forms for
642 $\phi$.\cite{Daw89} There is a cutoff distance, $r_{cut}$, which
643 limits the summations in the {\sc eam} equation to the few dozen atoms
644 surrounding atom $i$ for both the density $\rho$ and pairwise $\phi$
645 interactions. Foiles \emph{et al}.~fit {\sc eam} potentials for the fcc
646 metals Cu, Ag, Au, Ni, Pd, Pt and alloys of these metals.\cite{FBD86}
These fits are included in {\sc oopse}.
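
The two summations lend themselves to a two-pass evaluation: one pass
accumulates the densities $\rho_i$ and pair energies, and a second
pass adds the embedding energies. Scheme~\ref{sch:eamSketch} gives a
schematic, deliberately unoptimized C version; \texttt{pairPhi},
\texttt{densityF}, \texttt{embedF}, and \texttt{distance} are
hypothetical stand-ins for the tabulated {\sc eam} functions and a
distance helper, and are not part of the {\sc oopse} source.

\begin{lstlisting}[float,caption={[Sketch of a two-pass {\sc eam} energy evaluation]A schematic, unoptimized C routine for the {\sc eam} energy. The helper functions are hypothetical stand-ins for the tabulated {\sc eam} functions. Illustrative only.},label={sch:eamSketch}]
#include <stdlib.h>

/* Hypothetical stand-ins for the tabulated EAM functions and a
 * distance helper. */
double distance(const double a[3], const double b[3]);
double densityF(int j, double r);        /* f_j(r)    */
double pairPhi(int i, int j, double r);  /* phi_ij(r) */
double embedF(int i, double rho);        /* F_i[rho]  */

double eamEnergy(int nAtoms, const double (*pos)[3], double rCut) {
  double energy = 0.0;
  double *rho = calloc(nAtoms, sizeof(double));

  /* first pass: electron densities and pair energies; the factor of
   * 0.5 compensates for counting each pair twice in the double loop */
  for (int i = 0; i < nAtoms; i++) {
    for (int j = 0; j < nAtoms; j++) {
      if (j == i) continue;
      double r = distance(pos[i], pos[j]);
      if (r > rCut) continue;
      rho[i]  += densityF(j, r);
      energy  += 0.5 * pairPhi(i, j, r);
    }
  }

  /* second pass: embedding energies F_i[rho_i] */
  for (int i = 0; i < nAtoms; i++)
    energy += embedF(i, rho[i]);

  free(rho);
  return energy;
}
\end{lstlisting}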
648
649 \subsection{\label{oopseSec:pbc}Periodic Boundary Conditions}
650
651 \newcommand{\roundme}{\operatorname{round}}
652
653 \textit{Periodic boundary conditions} are widely used to simulate bulk properties with a relatively small number of particles. The
654 simulation box is replicated throughout space to form an infinite
655 lattice. During the simulation, when a particle moves in the primary
cell, its images in the other cells move in exactly the same direction with
657 exactly the same orientation. Thus, as a particle leaves the primary
658 cell, one of its images will enter through the opposite face. If the
659 simulation box is large enough to avoid ``feeling'' the symmetries of
the periodic lattice, surface effects can be ignored. The available
periodic cells in {\sc oopse} are cubic, orthorhombic, and parallelepiped. We
662 use a $3 \times 3$ matrix, $\mathsf{H}$, to describe the shape and
663 size of the simulation box. $\mathsf{H}$ is defined:
664 \begin{equation}
665 \mathsf{H} = ( \mathbf{h}_x, \mathbf{h}_y, \mathbf{h}_z )
666 \end{equation}
667 Where $\mathbf{h}_j$ is the column vector of the $j$th axis of the
668 box. During the course of the simulation both the size and shape of
669 the box can be changed to allow volume fluctuations when constraining
670 the pressure.
671
A real space vector, $\mathbf{r}$, can be transformed into a box space
673 vector, $\mathbf{s}$, and back through the following transformations:
674 \begin{align}
675 \mathbf{s} &= \mathsf{H}^{-1} \mathbf{r} \\
676 \mathbf{r} &= \mathsf{H} \mathbf{s}
677 \end{align}
678 The vector $\mathbf{s}$ is now a vector expressed as the number of box
679 lengths in the $\mathbf{h}_x$, $\mathbf{h}_y$, and $\mathbf{h}_z$
680 directions. To find the minimum image of a vector $\mathbf{r}$, we
first convert it to its corresponding vector in box space, and then
cast each element to lie in the range $[-0.5,0.5]$:
683 \begin{equation}
684 s_{i}^{\prime}=s_{i}-\roundme(s_{i})
685 \end{equation}
686 Where $s_i$ is the $i$th element of $\mathbf{s}$, and
$\roundme(s_i)$ is given by
688 \begin{equation}
689 \roundme(x) =
690 \begin{cases}
691 \lfloor x+0.5 \rfloor & \text{if $x \ge 0$} \\
692 \lceil x-0.5 \rceil & \text{if $x < 0$ }
693 \end{cases}
694 \end{equation}
695 Here $\lfloor x \rfloor$ is the floor operator, and gives the largest
696 integer value that is not greater than $x$, and $\lceil x \rceil$ is
697 the ceiling operator, and gives the smallest integer that is not less
698 than $x$. For example, $\roundme(3.6)=4$, $\roundme(3.1)=3$,
699 $\roundme(-3.6)=-4$, $\roundme(-3.1)=-3$.
700
701 Finally, we obtain the minimum image coordinates $\mathbf{r}^{\prime}$ by
702 transforming back to real space,
703 \begin{equation}
\mathbf{r}^{\prime}=\mathsf{H}\mathbf{s}^{\prime}%
705 \end{equation}
706 In this way, particles are allowed to diffuse freely in $\mathbf{r}$,
707 but their minimum images, $\mathbf{r}^{\prime}$ are used to compute
708 the inter-atomic forces.
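
The entire wrapping procedure can be carried out in a few lines of C,
as sketched in Scheme~\ref{sch:minImage} (an illustration only, not
the {\sc oopse} source); the inverse matrix $\mathsf{H}^{-1}$ is
assumed to be maintained alongside $\mathsf{H}$.

\begin{lstlisting}[float,caption={[Sketch of the minimum image computation]A schematic C routine computing the minimum image of a vector for a general simulation box. Illustrative only.},label={sch:minImage}]
#include <math.h>

/* Minimum image of a real-space vector r for a (possibly
 * non-orthogonal) box described by H.  Hinv is the inverse of H.
 * round() matches the rounding convention described above. */
void minimumImage(const double H[3][3], const double Hinv[3][3],
                  const double r[3], double rPrime[3]) {
  double s[3], sPrime[3];

  for (int i = 0; i < 3; i++)           /* s = Hinv . r        */
    s[i] = Hinv[i][0]*r[0] + Hinv[i][1]*r[1] + Hinv[i][2]*r[2];

  for (int i = 0; i < 3; i++)           /* wrap to [-0.5, 0.5] */
    sPrime[i] = s[i] - round(s[i]);

  for (int i = 0; i < 3; i++)           /* r' = H . s'         */
    rPrime[i] = H[i][0]*sPrime[0] + H[i][1]*sPrime[1]
              + H[i][2]*sPrime[2];
}
\end{lstlisting}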
709
710
711 \section{\label{oopseSec:IOfiles}Input and Output Files}
712
713 \subsection{{\sc bass} and Model Files}
714
715 Every {\sc oopse} simulation begins with a Bizarre Atom Simulation
716 Syntax ({\sc bass}) file. {\sc bass} is a script syntax that is parsed
by {\sc oopse} at runtime. The {\sc bass} file allows the user to
completely describe the system they wish to simulate, as well as tailor
{\sc oopse}'s behavior during the simulation. {\sc bass} files are
denoted with the extension \texttt{.bass}; an example file is shown in
Scheme~\ref{sch:bassExample}.
723
724 \begin{lstlisting}[float,caption={[An example of a complete {\sc bass} file] An example showing a complete {\sc bass} file.},label={sch:bassExample}]

molecule{
  name = "Ar";
  nAtoms = 1;
  atom[0]{
    type="Ar";
    position( 0.0, 0.0, 0.0 );
  }
}

nComponents = 1;
component{
  type = "Ar";
  nMol = 108;
}

initialConfig = "./argon.init";

forceField = "LJ";
ensemble = "NVE";   // specify the simulation ensemble
dt = 1.0;           // the time step for integration
runTime = 1e3;      // the total simulation run time
sampleTime = 100;   // trajectory file frequency
statusTime = 50;    // statistics file frequency

750 \end{lstlisting}
751
752 Within the \texttt{.bass} file it is necessary to provide a complete
753 description of the molecule before it is actually placed in the
754 simulation. The {\sc bass} syntax was originally developed with this
755 goal in mind, and allows for the specification of all the atoms in a
756 molecular prototype, as well as any bonds, bends, or torsions. These
descriptions can become lengthy for complex molecules, and it would be
inconvenient to duplicate the molecular description at the beginning of each {\sc
bass} script. To address this issue, {\sc bass} allows for the
760 inclusion of model files at the top of a \texttt{.bass} file. These
761 model files, denoted with the \texttt{.mdl} extension, allow the user
762 to describe a molecular prototype once, then simply include it into
763 each simulation containing that molecule. Returning to the example in
764 Scheme~\ref{sch:bassExample}, the \texttt{.mdl} file's contents would
765 be Scheme~\ref{sch:mdlExample}, and the new \texttt{.bass} file would
766 become Scheme~\ref{sch:bassExPrime}.
767
768 \begin{lstlisting}[float,caption={An example \texttt{.mdl} file.},label={sch:mdlExample}]

molecule{
  name = "Ar";
  nAtoms = 1;
  atom[0]{
    type="Ar";
    position( 0.0, 0.0, 0.0 );
  }
}

779 \end{lstlisting}
780
781 \begin{lstlisting}[float,caption={Revised {\sc bass} example.},label={sch:bassExPrime}]

#include "argon.mdl"

nComponents = 1;
component{
  type = "Ar";
  nMol = 108;
}

initialConfig = "./argon.init";

forceField = "LJ";
ensemble = "NVE";
dt = 1.0;
runTime = 1e3;
sampleTime = 100;
statusTime = 50;

800 \end{lstlisting}
801
802 \subsection{\label{oopseSec:coordFiles}Coordinate Files}
803
The standard format for storage of a system's coordinates is a modified
805 xyz-file syntax, the exact details of which can be seen in
806 Scheme~\ref{sch:dumpFormat}. As all bonding and molecular information
807 is stored in the \texttt{.bass} and \texttt{.mdl} files, the
808 coordinate files are simply the complete set of coordinates for each
atom at a given simulation time. One important note: although the
simulation propagates the complete rotation matrix, directional
entities are written out using quaternions to save space in the
output files.
813
\begin{lstlisting}[float,caption={[The format of the coordinate files]The format of the coordinate files. The first line is the number of atoms. The second line begins with the time stamp followed by the three $\mathsf{H}$ column vectors. It is important to note that for extended system ensembles, additional information pertinent to the integrators may be stored on this line as well. The next lines are the atomic coordinates for all atoms in the system. First is the atom name, followed by position, velocity, quaternions, and lastly angular velocities.},label=sch:dumpFormat]

nAtoms
time; Hxx Hyx Hzx; Hxy Hyy Hzy; Hxz Hyz Hzz;
Name1 x y z vx vy vz q0 q1 q2 q3 jx jy jz
Name2 x y z vx vy vz q0 q1 q2 q3 jx jy jz
etc...

822 \end{lstlisting}
823
824
There are three major files used by {\sc oopse} written in the
coordinate format: the initialization file
(\texttt{.init}), the simulation trajectory file (\texttt{.dump}), and
828 the final coordinates of the simulation. The initialization file is
829 necessary for {\sc oopse} to start the simulation with the proper
830 coordinates, and is generated before the simulation run. The
831 trajectory file is created at the beginning of the simulation, and is
832 used to store snapshots of the simulation at regular intervals. The
833 first frame is a duplication of the
834 \texttt{.init} file, and each subsequent frame is appended to the file
835 at an interval specified in the \texttt{.bass} file with the
836 \texttt{sampleTime} flag. The final coordinate file is the end of run file. The
837 \texttt{.eor} file stores the final configuration of the system for a
838 given simulation. The file is updated at the same time as the
839 \texttt{.dump} file, however, it only contains the most recent
840 frame. In this way, an \texttt{.eor} file may be used as the
841 initialization file to a second simulation in order to continue a
842 simulation or recover one from a processor that has crashed during the
843 course of the run.
844
845 \subsection{\label{oopseSec:initCoords}Generation of Initial Coordinates}
846
847 As was stated in Sec.~\ref{oopseSec:coordFiles}, an initialization
848 file is needed to provide the starting coordinates for a
849 simulation. The {\sc oopse} package provides several system building
850 programs to aid in the creation of the \texttt{.init}
851 file. The programs use {\sc bass}, and will recognize
852 arguments and parameters in the \texttt{.bass} file that would
853 otherwise be ignored by the simulation.
854
855 \subsection{The Statistics File}
856
857 The last output file generated by {\sc oopse} is the statistics
858 file. This file records such statistical quantities as the
859 instantaneous temperature, volume, pressure, etc. It is written out
860 with the frequency specified in the \texttt{.bass} file with the
861 \texttt{statusTime} keyword. The file allows the user to observe the
862 system variables as a function of simulation time while the simulation
is in progress. One useful function the statistics file serves is to
monitor the conserved quantity of a given simulation ensemble; this
allows the user to observe the stability of the integrator. The
866 statistics file is denoted with the \texttt{.stat} file extension.
867
\section{\label{oopseSec:mechanics}Mechanics}
872
873 \subsection{\label{oopseSec:integrate}Integrating the Equations of Motion: the
874 DLM method}
875
876 The default method for integrating the equations of motion in {\sc
877 oopse} is a velocity-Verlet version of the symplectic splitting method
878 proposed by Dullweber, Leimkuhler and McLachlan
879 (DLM).\cite{Dullweber1997} When there are no directional atoms or
880 rigid bodies present in the simulation, this integrator becomes the
standard velocity-Verlet integrator, which is known to sample the
microcanonical (NVE) ensemble.
883
884 Previous integration methods for orientational motion have problems
885 that are avoided in the DLM method. Direct propagation of the Euler
886 angles has a known $1/\sin\theta$ divergence in the equations of
887 motion for $\phi$ and $\psi$,\cite{allen87:csl} leading to
888 numerical instabilities any time one of the directional atoms or rigid
889 bodies has an orientation near $\theta=0$ or $\theta=\pi$. More
890 modern quaternion-based integration methods have relatively poor
891 energy conservation. While quaternions work well for orientational
892 motion in other ensembles, the microcanonical ensemble has a
893 constant energy requirement that is quite sensitive to errors in the
894 equations of motion. An earlier implementation of {\sc oopse}
895 utilized quaternions for propagation of rotational motion; however, a
896 detailed investigation showed that they resulted in a steady drift in
897 the total energy, something that has been observed by
898 Laird {\it et al.}\cite{Laird97}
899
900 The key difference in the integration method proposed by Dullweber
901 \emph{et al.} is that the entire $3 \times 3$ rotation matrix is
902 propagated from one time step to the next. In the past, this would not
903 have been feasible, since the rotation matrix for a single body has
904 nine elements compared with the more memory-efficient methods (using
three Euler angles or four quaternions). Computer memory has become much
906 less costly in recent years, and this can be translated into
907 substantial benefits in energy conservation.
908
909 The basic equations of motion being integrated are derived from the
910 Hamiltonian for conservative systems containing rigid bodies,
911 \begin{equation}
912 H = \sum_{i} \left( \frac{1}{2} m_i {\bf v}_i^T \cdot {\bf v}_i +
913 \frac{1}{2} {\bf j}_i^T \cdot \overleftrightarrow{\mathsf{I}}_i^{-1} \cdot
914 {\bf j}_i \right) +
915 V\left(\left\{{\bf r}\right\}, \left\{\mathsf{A}\right\}\right)
916 \end{equation}
917 where ${\bf r}_i$ and ${\bf v}_i$ are the cartesian position vector
918 and velocity of the center of mass of particle $i$, and ${\bf j}_i$
919 and $\overleftrightarrow{\mathsf{I}}_i$ are the body-fixed angular
920 momentum and moment of inertia tensor, respectively. $\mathsf{A}_i$
921 is the $3 \times 3$ rotation matrix describing the instantaneous
922 orientation of the particle. $V$ is the potential energy function
923 which may depend on both the positions $\left\{{\bf r}\right\}$ and
924 orientations $\left\{\mathsf{A}\right\}$ of all particles. The
925 equations of motion for the particle centers of mass are derived from
926 Hamilton's equations and are quite simple,
927 \begin{eqnarray}
928 \dot{{\bf r}} & = & {\bf v} \\
929 \dot{{\bf v}} & = & \frac{{\bf f}}{m}
930 \end{eqnarray}
931 where ${\bf f}$ is the instantaneous force on the center of mass
932 of the particle,
933 \begin{equation}
934 {\bf f} = - \frac{\partial}{\partial
935 {\bf r}} V(\left\{{\bf r}(t)\right\}, \left\{\mathsf{A}(t)\right\}).
936 \end{equation}
937
938 The equations of motion for the orientational degrees of freedom are
939 \begin{eqnarray}
940 \dot{\mathsf{A}} & = & \mathsf{A} \cdot
941 \mbox{ skew}\left(\overleftrightarrow{\mathsf{I}}^{-1} \cdot {\bf j}\right) \\
942 \dot{{\bf j}} & = & {\bf j} \times \left( \overleftrightarrow{\mathsf{I}}^{-1}
943 \cdot {\bf j} \right) - \mbox{ rot}\left(\mathsf{A}^{T} \cdot \frac{\partial
944 V}{\partial \mathsf{A}} \right)
945 \end{eqnarray}
946 In these equations of motion, the $\mbox{skew}$ matrix of a vector
947 ${\bf v} = \left( v_1, v_2, v_3 \right)$ is defined:
948 \begin{equation}
949 \mbox{skew}\left( {\bf v} \right) := \left(
950 \begin{array}{ccc}
951 0 & v_3 & - v_2 \\
952 -v_3 & 0 & v_1 \\
953 v_2 & -v_1 & 0
954 \end{array}
955 \right)
956 \end{equation}
957 The $\mbox{rot}$ notation refers to the mapping of the $3 \times 3$
958 rotation matrix to a vector of orientations by first computing the
959 skew-symmetric part $\left(\mathsf{A} - \mathsf{A}^{T}\right)$ and
960 then associating this with a length 3 vector by inverting the
961 $\mbox{skew}$ function above:
962 \begin{equation}
963 \mbox{rot}\left(\mathsf{A}\right) := \mbox{ skew}^{-1}\left(\mathsf{A}
964 - \mathsf{A}^{T} \right)
965 \end{equation}
966 Written this way, the $\mbox{rot}$ operation creates a set of
967 conjugate angle coordinates to the body-fixed angular momenta
968 represented by ${\bf j}$. This equation of motion for angular momenta
969 is equivalent to the more familiar body-fixed forms,
970 \begin{eqnarray}
971 \dot{j_{x}} & = & \tau^b_x(t) +
972 \left(\overleftrightarrow{\mathsf{I}}_{yy} - \overleftrightarrow{\mathsf{I}}_{zz} \right) j_y j_z \\
973 \dot{j_{y}} & = & \tau^b_y(t) +
974 \left(\overleftrightarrow{\mathsf{I}}_{zz} - \overleftrightarrow{\mathsf{I}}_{xx} \right) j_z j_x \\
975 \dot{j_{z}} & = & \tau^b_z(t) +
976 \left(\overleftrightarrow{\mathsf{I}}_{xx} - \overleftrightarrow{\mathsf{I}}_{yy} \right) j_x j_y
977 \end{eqnarray}
978 which utilize the body-fixed torques, ${\bf \tau}^b$. Torques are
979 most easily derived in the space-fixed frame,
980 \begin{equation}
981 {\bf \tau}^b(t) = \mathsf{A}(t) \cdot {\bf \tau}^s(t)
982 \end{equation}
983 where the torques are either derived from the forces on the
984 constituent atoms of the rigid body, or for directional atoms,
985 directly from derivatives of the potential energy,
986 \begin{equation}
987 {\bf \tau}^s(t) = - \hat{\bf u}(t) \times \left( \frac{\partial}
988 {\partial \hat{\bf u}} V\left(\left\{ {\bf r}(t) \right\}, \left\{
989 \mathsf{A}(t) \right\}\right) \right).
990 \end{equation}
991 Here $\hat{\bf u}$ is a unit vector pointing along the principal axis
992 of the particle in the space-fixed frame.
993
994 The DLM method uses a Trotter factorization of the orientational
995 propagator. This has three effects:
996 \begin{enumerate}
997 \item the integrator is area-preserving in phase space (i.e. it is
998 {\it symplectic}),
999 \item the integrator is time-{\it reversible}, making it suitable for Hybrid
1000 Monte Carlo applications, and
1001 \item the error for a single time step is of order $O\left(h^3\right)$
1002 for timesteps of length $h$.
1003 \end{enumerate}
1004
1005 The integration of the equations of motion is carried out in a
1006 velocity-Verlet style 2-part algorithm:
1007
1008 {\tt moveA:}
1009 \begin{eqnarray}
1010 {\bf v}\left(t + \delta t / 2\right) & \leftarrow & {\bf
1011 v}(t) + \frac{\delta t}{2} \left( {\bf f}(t) / m \right) \\
1012 {\bf r}(t + \delta t) & \leftarrow & {\bf r}(t) + \delta t {\bf
1013 v}\left(t + \delta t / 2 \right) \\
1014 {\bf j}\left(t + \delta t / 2 \right) & \leftarrow & {\bf
1015 j}(t) + \frac{\delta t}{2} {\bf \tau}^b(t) \\
1016 \mathsf{A}(t + \delta t) & \leftarrow & \mathrm{rot}\left( \delta t
1017 {\bf j}(t + \delta t / 2) \cdot \overleftrightarrow{\mathsf{I}}^{-1}
1018 \right)
1019 \end{eqnarray}
1020
1021 In this context, the $\mathrm{rot}$ function is the reversible product
1022 of the three body-fixed rotations,
1023 \begin{equation}
1024 \mathrm{rot}({\bf a}) = \mathsf{G}_x(a_x / 2) \cdot
1025 \mathsf{G}_y(a_y / 2) \cdot \mathsf{G}_z(a_z) \cdot \mathsf{G}_y(a_y /
1026 2) \cdot \mathsf{G}_x(a_x /2)
1027 \end{equation}
1028 where each rotational propagator, $\mathsf{G}_\alpha(\theta)$, rotates
1029 both the rotation matrix ($\mathsf{A}$) and the body-fixed angular
1030 momentum (${\bf j}$) by an angle $\theta$ around body-fixed axis
1031 $\alpha$,
1032 \begin{equation}
1033 \mathsf{G}_\alpha( \theta ) = \left\{
1034 \begin{array}{lcl}
1035 \mathsf{A}(t) & \leftarrow & \mathsf{A}(0) \cdot \mathsf{R}_\alpha(\theta)^T \\
1036 {\bf j}(t) & \leftarrow & \mathsf{R}_\alpha(\theta) \cdot {\bf j}(0)
1037 \end{array}
1038 \right.
1039 \end{equation}
1040 $\mathsf{R}_\alpha$ is a quadratic approximation to
1041 the single-axis rotation matrix. For example, in the small-angle
1042 limit, the rotation matrix around the body-fixed x-axis can be
1043 approximated as
1044 \begin{equation}
1045 \mathsf{R}_x(\theta) \approx \left(
1046 \begin{array}{ccc}
1047 1 & 0 & 0 \\
1048 0 & \frac{1-\theta^2 / 4}{1 + \theta^2 / 4} & -\frac{\theta}{1+
1049 \theta^2 / 4} \\
1050 0 & \frac{\theta}{1+
1051 \theta^2 / 4} & \frac{1-\theta^2 / 4}{1 + \theta^2 / 4}
1052 \end{array}
1053 \right).
1054 \end{equation}
1055 All other rotations follow in a straightforward manner.
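
As a concrete sketch of a single propagator, the body-fixed rotation
about $x$ might be implemented as in Scheme~\ref{sch:rotateX}. This is
illustrative only (the names are hypothetical); the full
$\mathrm{rot}$ operation applies such propagators about $x$, $y$, and
$z$ in the symmetric order given above.

\begin{lstlisting}[float,caption={[Sketch of a body-fixed rotation propagator]A schematic C routine for a single rotation propagator used by the DLM integrator. Illustrative only.},label={sch:rotateX}]
/* One rotation propagator G_x(theta): rotates the body-fixed
 * angular momentum j by R_x(theta) and post-multiplies the rotation
 * matrix A by R_x(theta)^T, using the quadratic approximation to
 * R_x shown above. */
void rotateX(double theta, double A[3][3], double j[3]) {
  double q = theta * theta / 4.0;
  double c = (1.0 - q) / (1.0 + q);
  double s = theta / (1.0 + q);
  double Rx[3][3] = { { 1.0, 0.0, 0.0 },
                      { 0.0,   c,  -s },
                      { 0.0,   s,   c } };
  double jNew[3], Anew[3][3];

  for (int i = 0; i < 3; i++)                      /* j <- Rx . j   */
    jNew[i] = Rx[i][0]*j[0] + Rx[i][1]*j[1] + Rx[i][2]*j[2];
  for (int i = 0; i < 3; i++) j[i] = jNew[i];

  for (int i = 0; i < 3; i++)                      /* A <- A . Rx^T */
    for (int k = 0; k < 3; k++)
      Anew[i][k] = A[i][0]*Rx[k][0] + A[i][1]*Rx[k][1]
                 + A[i][2]*Rx[k][2];
  for (int i = 0; i < 3; i++)
    for (int k = 0; k < 3; k++) A[i][k] = Anew[i][k];
}
\end{lstlisting}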
1056
1057 After the first part of the propagation, the forces and body-fixed
torques are calculated at the new positions and orientations.
1059
1060 {\tt doForces:}
1061 \begin{eqnarray}
1062 {\bf f}(t + \delta t) & \leftarrow & - \left(\frac{\partial V}{\partial {\bf
1063 r}}\right)_{{\bf r}(t + \delta t)} \\
1064 {\bf \tau}^{s}(t + \delta t) & \leftarrow & {\bf u}(t + \delta t)
1065 \times \frac{\partial V}{\partial {\bf u}} \\
1066 {\bf \tau}^{b}(t + \delta t) & \leftarrow & \mathsf{A}(t + \delta t)
1067 \cdot {\bf \tau}^s(t + \delta t)
1068 \end{eqnarray}
1069
1070 {\sc oopse} automatically updates ${\bf u}$ when the rotation matrix
1071 $\mathsf{A}$ is calculated in {\tt moveA}. Once the forces and
1072 torques have been obtained at the new time step, the velocities can be
1073 advanced to the same time value.
1074
1075 {\tt moveB:}
1076 \begin{eqnarray}
1077 {\bf v}\left(t + \delta t \right) & \leftarrow & {\bf
1078 v}\left(t + \delta t / 2 \right) + \frac{\delta t}{2} \left(
1079 {\bf f}(t + \delta t) / m \right) \\
1080 {\bf j}\left(t + \delta t \right) & \leftarrow & {\bf
1081 j}\left(t + \delta t / 2 \right) + \frac{\delta t}{2} {\bf
1082 \tau}^b(t + \delta t)
1083 \end{eqnarray}
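
For the translational degrees of freedom alone, the two half-kicks and
the drift reduce to the familiar velocity-Verlet loop; the sketch in
Scheme~\ref{sch:vvStep} (hypothetical names, not the {\sc oopse}
source) shows the ordering of {\tt moveA}, the force evaluation, and
{\tt moveB} for a single particle.

\begin{lstlisting}[float,caption={[Sketch of a translational velocity-Verlet step]A schematic C routine showing the ordering of the half-kicks, drift, and force evaluation. Illustrative only.},label={sch:vvStep}]
/* One velocity-Verlet step for the translational degrees of freedom:
 * moveA (half-kick + drift), force evaluation, moveB (half-kick). */
void velocityVerletStep(double dt, double m, double r[3], double v[3],
                        double f[3],
                        void (*computeForce)(const double r[3],
                                             double f[3])) {
  for (int k = 0; k < 3; k++) {     /* moveA */
    v[k] += 0.5 * dt * f[k] / m;
    r[k] += dt * v[k];
  }

  computeForce(r, f);               /* doForces at the new position */

  for (int k = 0; k < 3; k++)       /* moveB */
    v[k] += 0.5 * dt * f[k] / m;
}
\end{lstlisting}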
1084
1085 The matrix rotations used in the DLM method end up being more costly
1086 computationally than the simpler arithmetic quaternion
1087 propagation. With the same time step, a 1000-molecule water simulation
1088 shows an average 7\% increase in computation time using the DLM method
1089 in place of quaternions. This cost is more than justified when
1090 comparing the energy conservation of the two methods as illustrated in
1091 figure \ref{timestep}.
1092
1093 \begin{figure}
1094 \centering
1095 \includegraphics[width=\linewidth]{timeStep.eps}
1096 \caption[Energy conservation for quaternion versus DLM dynamics]{Energy conservation using quaternion based integration versus
1097 the method proposed by Dullweber \emph{et al.} with increasing time
1098 step. For each time step, the dotted line is total energy using the
1099 DLM integrator, and the solid line comes from the quaternion
1100 integrator. The larger time step plots are shifted up from the true
1101 energy baseline for clarity.}
1102 \label{timestep}
1103 \end{figure}
1104
1105 In figure \ref{timestep}, the resulting energy drift at various time
1106 steps for both the DLM and quaternion integration schemes is
1107 compared. All of the 1000 molecule water simulations started with the
1108 same configuration, and the only difference was the method for
1109 handling rotational motion. At time steps of 0.1 and 0.5 fs, both
1110 methods for propagating molecule rotation conserve energy fairly well,
1111 with the quaternion method showing a slight energy drift over time in
1112 the 0.5 fs time step simulation. At time steps of 1 and 2 fs, the
1113 energy conservation benefits of the DLM method are clearly
1114 demonstrated. Thus, while maintaining the same degree of energy
1115 conservation, one can take considerably longer time steps, leading to
1116 an overall reduction in computation time.
1117
1118 There is only one specific keyword relevant to the default integrator,
1119 and that is the time step for integrating the equations of motion.
1120
1121 \begin{center}
1122 \begin{tabular}{llll}
1123 {\bf variable} & {\bf {\tt .bass} keyword} & {\bf units} & {\bf
1124 default value} \\
1125 $\delta t$ & {\tt dt = 2.0;} & fs & none
1126 \end{tabular}
1127 \end{center}
1128
1129 \subsection{\label{sec:extended}Extended Systems for other Ensembles}
1130
1131 {\sc oopse} implements a number of extended system integrators for
1132 sampling from other ensembles relevant to chemical physics. The
integrator can be selected with the {\tt ensemble} keyword in the
1134 {\tt .bass} file:
1135
1136 \begin{center}
1137 \begin{tabular}{lll}
1138 {\bf Integrator} & {\bf Ensemble} & {\bf {\tt .bass} line} \\
1139 NVE & microcanonical & {\tt ensemble = ``NVE''; } \\
1140 NVT & canonical & {\tt ensemble = ``NVT''; } \\
1141 NPTi & isobaric-isothermal (with isotropic volume changes) & {\tt
1142 ensemble = ``NPTi'';} \\
1143 NPTf & isobaric-isothermal (with changes to box shape) & {\tt
1144 ensemble = ``NPTf'';} \\
1145 NPTxyz & approximate isobaric-isothermal & {\tt ensemble =
1146 ``NPTxyz'';} \\
1147 & (with separate barostats on each box dimension) &
1148 \end{tabular}
1149 \end{center}
1150
1151 The relatively well-known Nos\'e-Hoover thermostat is implemented in
1152 {\sc oopse}'s NVT integrator. This method couples an extra degree of
1153 freedom (the thermostat) to the kinetic energy of the system, and has
1154 been shown to sample the canonical distribution in the system degrees
1155 of freedom while conserving a quantity that is, to within a constant,
1156 the Helmholtz free energy.
1157
1158 NPT algorithms attempt to maintain constant pressure in the system by
1159 coupling the volume of the system to a barostat. {\sc oopse} contains
1160 three different constant pressure algorithms. The first two, NPTi and
1161 NPTf have been shown to conserve a quantity that is, to within a
1162 constant, the Gibbs free energy. The Melchionna modification to the
1163 Hoover barostat is implemented in both NPTi and NPTf. NPTi allows
1164 only isotropic changes in the simulation box, while box {\it shape}
1165 variations are allowed in NPTf. The NPTxyz integrator has {\it not}
1166 been shown to sample from the isobaric-isothermal ensemble. It is
1167 useful, however, in that it maintains orthogonality for the axes of
1168 the simulation box while attempting to equalize pressure along the
1169 three perpendicular directions in the box.
1170
1171 Each of the extended system integrators requires additional keywords
1172 to set target values for the thermodynamic state variables that are
1173 being held constant. Keywords are also required to set the
1174 characteristic decay times for the dynamics of the extended
1175 variables.
1176
\begin{center}
\begin{tabular}{llll}
{\bf variable} & {\bf {\tt .bass} keyword} & {\bf units} & {\bf
default value} \\
$T_{\mathrm{target}}$ & {\tt targetTemperature = 300;} & K & none \\
$P_{\mathrm{target}}$ & {\tt targetPressure = 1;} & atm & none \\
$\tau_T$ & {\tt tauThermostat = 1e3;} & fs & none \\
$\tau_B$ & {\tt tauBarostat = 5e3;} & fs & none \\
& {\tt resetTime = 200;} & fs & none \\
& {\tt useInitialExtendedSystemState = ``true'';} & logical &
false
\end{tabular}
\end{center}
1188
1189 Two additional keywords can be used to either clear the extended
1190 system variables periodically ({\tt resetTime}), or to maintain the
1191 state of the extended system variables between simulations ({\tt
1192 useInitialExtendedSystemState}). More details on these variables
and their use in the integrators follow below.
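
As an illustration (not a complete input file), an NPTi run using the
keywords above might be requested with:
\begin{quote}
{\tt ensemble = ``NPTi'';} \\
{\tt dt = 2.0;} \\
{\tt targetTemperature = 300;} \\
{\tt targetPressure = 1;} \\
{\tt tauThermostat = 1e3;} \\
{\tt tauBarostat = 5e3;}
\end{quote}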
1194
1195 \subsubsection{\label{oopseSec:noseHooverThermo}Nos\'{e}-Hoover Thermostatting}
1196
1197 The Nos\'e-Hoover equations of motion are given by\cite{Hoover85}
1198 \begin{eqnarray}
1199 \dot{{\bf r}} & = & {\bf v} \\
1200 \dot{{\bf v}} & = & \frac{{\bf f}}{m} - \chi {\bf v} \\
1201 \dot{\mathsf{A}} & = & \mathsf{A} \cdot
1202 \mbox{ skew}\left(\overleftrightarrow{\mathsf{I}}^{-1} \cdot {\bf j}\right) \\
1203 \dot{{\bf j}} & = & {\bf j} \times \left( \overleftrightarrow{\mathsf{I}}^{-1}
1204 \cdot {\bf j} \right) - \mbox{ rot}\left(\mathsf{A}^{T} \cdot \frac{\partial
1205 V}{\partial \mathsf{A}} \right) - \chi {\bf j}
1206 \label{eq:nosehoovereom}
1207 \end{eqnarray}
1208
1209 $\chi$ is an ``extra'' variable included in the extended system, and
1210 it is propagated using the first order equation of motion
1211 \begin{equation}
1212 \dot{\chi} = \frac{1}{\tau_{T}^2} \left( \frac{T}{T_{\mathrm{target}}} - 1 \right).
1213 \label{eq:nosehooverext}
1214 \end{equation}
1215
1216 The instantaneous temperature $T$ is proportional to the total kinetic
1217 energy (both translational and orientational) and is given by
1218 \begin{equation}
1219 T = \frac{2 K}{f k_B}
1220 \end{equation}
1221 Here, $f$ is the total number of degrees of freedom in the system,
1222 \begin{equation}
1223 f = 3 N + 3 N_{\mathrm{orient}} - N_{\mathrm{constraints}}
1224 \end{equation}
1225 and $K$ is the total kinetic energy,
1226 \begin{equation}
1227 K = \sum_{i=1}^{N} \frac{1}{2} m_i {\bf v}_i^T \cdot {\bf v}_i +
1228 \sum_{i=1}^{N_{\mathrm{orient}}} \frac{1}{2} {\bf j}_i^T \cdot
1229 \overleftrightarrow{\mathsf{I}}_i^{-1} \cdot {\bf j}_i
1230 \end{equation}
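
For concreteness, a minimal C sketch of this temperature evaluation
is shown below. The variable names and the assumption of a diagonal
body-frame inertia tensor are illustrative only; they do not reflect
the actual {\sc oopse} data structures.

\begin{lstlisting}
/* Illustrative sketch: T = 2K / (f kB) from translational and
   orientational kinetic energy.  Names and the diagonal inertia
   tensor are assumptions, not OOPSE internals. */
double instantaneousTemperature(int nAtoms, const double *mass,
                                const double (*v)[3],
                                int nOrient, const double (*j)[3],
                                const double (*inertia)[3],
                                int nConstraints, double kB) {
  double K = 0.0;
  int i, d;

  for (i = 0; i < nAtoms; i++)        /* translational kinetic energy */
    for (d = 0; d < 3; d++)
      K += 0.5 * mass[i] * v[i][d] * v[i][d];

  for (i = 0; i < nOrient; i++)       /* orientational: j I^-1 j / 2  */
    for (d = 0; d < 3; d++)
      K += 0.5 * j[i][d] * j[i][d] / inertia[i][d];

  int f = 3 * nAtoms + 3 * nOrient - nConstraints;
  return 2.0 * K / ((double) f * kB);
}
\end{lstlisting}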
1231
1232 In eq.(\ref{eq:nosehooverext}), $\tau_T$ is the time constant for
1233 relaxation of the temperature to the target value. To set values for
1234 $\tau_T$ or $T_{\mathrm{target}}$ in a simulation, one would use the
1235 {\tt tauThermostat} and {\tt targetTemperature} keywords in the {\tt
1236 .bass} file. The units for {\tt tauThermostat} are fs, and the units
1237 for the {\tt targetTemperature} are degrees K. The integration of
1238 the equations of motion is carried out in a velocity-Verlet style 2
1239 part algorithm:
1240
1241 {\tt moveA:}
1242 \begin{eqnarray}
1243 T(t) & \leftarrow & \left\{{\bf v}(t)\right\}, \left\{{\bf j}(t)\right\} \\
1244 {\bf v}\left(t + \delta t / 2\right) & \leftarrow & {\bf
1245 v}(t) + \frac{\delta t}{2} \left( \frac{{\bf f}(t)}{m} - {\bf v}(t)
1246 \chi(t)\right) \\
1247 {\bf r}(t + \delta t) & \leftarrow & {\bf r}(t) + \delta t {\bf
1248 v}\left(t + \delta t / 2 \right) \\
1249 {\bf j}\left(t + \delta t / 2 \right) & \leftarrow & {\bf
1250 j}(t) + \frac{\delta t}{2} \left( {\bf \tau}^b(t) - {\bf j}(t)
1251 \chi(t) \right) \\
1252 \mathsf{A}(t + \delta t) & \leftarrow & \mathrm{rot}\left(\delta t *
1253 {\bf j}(t + \delta t / 2) \overleftrightarrow{\mathsf{I}}^{-1} \right) \\
1254 \chi\left(t + \delta t / 2 \right) & \leftarrow & \chi(t) +
1255 \frac{\delta t}{2 \tau_T^2} \left( \frac{T(t)}{T_{\mathrm{target}}} - 1
1256 \right)
1257 \end{eqnarray}
1258
1259 Here $\mathrm{rot}(\delta t * {\bf j}
1260 \overleftrightarrow{\mathsf{I}}^{-1})$ is the same symplectic Trotter
1261 factorization of the three rotation operations that was discussed in
1262 the section on the DLM integrator. Note that this operation modifies
1263 both the rotation matrix $\mathsf{A}$ and the angular momentum ${\bf
1264 j}$. {\tt moveA} propagates velocities by a half time step, and
1265 positional degrees of freedom by a full time step. The new positions
1266 (and orientations) are then used to calculate a new set of forces and
1267 torques in exactly the same way they are calculated in the {\tt
1268 doForces} portion of the DLM integrator.
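
A schematic C version of the translational part of {\tt moveA} is
sketched below. The function and variable names are assumptions made
for illustration (they are not the actual {\sc oopse} routines), and
the rigid-body rotation ({\tt rot}) is indicated only by a comment.

\begin{lstlisting}
/* Sketch of the translational half of moveA for the NVT integrator.
   T is the instantaneous temperature T(t) evaluated before the call. */
void nvtMoveA(int n, const double *m, double (*r)[3], double (*v)[3],
              const double (*f)[3], double *chi,
              double T, double Ttarget, double tauT, double dt) {
  int i, d;
  for (i = 0; i < n; i++) {
    for (d = 0; d < 3; d++) {
      /* half-kick with thermostat friction: v += dt/2 (f/m - v chi) */
      v[i][d] += 0.5 * dt * (f[i][d] / m[i] - v[i][d] * (*chi));
      /* full-step drift using the half-step velocity */
      r[i][d] += dt * v[i][d];
    }
  }
  /* ... propagate j and the rotation matrix A here via rot() ... */
  /* half-step advance of the thermostat variable chi */
  *chi += 0.5 * dt / (tauT * tauT) * (T / Ttarget - 1.0);
}
\end{lstlisting}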
1269
1270 Once the forces and torques have been obtained at the new time step,
1271 the temperature, velocities, and the extended system variable can be
1272 advanced to the same time value.
1273
1274 {\tt moveB:}
1275 \begin{eqnarray}
1276 T(t + \delta t) & \leftarrow & \left\{{\bf v}(t + \delta t)\right\},
1277 \left\{{\bf j}(t + \delta t)\right\} \\
1278 \chi\left(t + \delta t \right) & \leftarrow & \chi\left(t + \delta t /
1279 2 \right) + \frac{\delta t}{2 \tau_T^2} \left( \frac{T(t+\delta
1280 t)}{T_{\mathrm{target}}} - 1 \right) \\
1281 {\bf v}\left(t + \delta t \right) & \leftarrow & {\bf
1282 v}\left(t + \delta t / 2 \right) + \frac{\delta t}{2} \left(
1283 \frac{{\bf f}(t + \delta t)}{m} - {\bf v}(t + \delta t)
\chi(t + \delta t)\right) \\
1285 {\bf j}\left(t + \delta t \right) & \leftarrow & {\bf
1286 j}\left(t + \delta t / 2 \right) + \frac{\delta t}{2} \left( {\bf
1287 \tau}^b(t + \delta t) - {\bf j}(t + \delta t)
1288 \chi(t + \delta t) \right)
1289 \end{eqnarray}
1290
1291 Since ${\bf v}(t + \delta t)$ and ${\bf j}(t + \delta t)$ are required
to calculate $T(t + \delta t)$ as well as $\chi(t + \delta t)$, they
1293 indirectly depend on their own values at time $t + \delta t$. {\tt
1294 moveB} is therefore done in an iterative fashion until $\chi(t +
1295 \delta t)$ becomes self-consistent. The relative tolerance for the
1296 self-consistency check defaults to a value of $\mbox{10}^{-6}$, but
1297 {\sc oopse} will terminate the iteration after 4 loops even if the
1298 consistency check has not been satisfied.
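
The structure of this iteration is sketched below in C for the
translational degrees of freedom only. The helper routine and the
variable names are illustrative assumptions rather than {\sc oopse}
internals.

\begin{lstlisting}
#include <math.h>

/* Hypothetical helper, assumed to be defined elsewhere: returns the
   instantaneous temperature from the current velocities. */
double computeTemperature(int n, const double *m, double (*v)[3]);

/* Sketch of the self-consistent moveB step for the NVT integrator. */
void nvtMoveB(int n, const double *m, double (*v)[3],
              const double (*vHalf)[3], const double (*f)[3],
              double *chi, double chiHalf,
              double Ttarget, double tauT, double dt) {
  const double tol = 1.0e-6;  /* relative self-consistency tolerance */
  const int maxIter = 4;      /* iteration is cut off after 4 passes */
  double chiNew = chiHalf, chiOld;
  int iter = 0, i, d;

  for (i = 0; i < n; i++)     /* initial guess: v(t+dt) ~ v(t+dt/2)  */
    for (d = 0; d < 3; d++)
      v[i][d] = vHalf[i][d];

  do {
    chiOld = chiNew;
    for (i = 0; i < n; i++)   /* velocity half-kick with current chi */
      for (d = 0; d < 3; d++)
        v[i][d] = vHalf[i][d]
                + 0.5 * dt * (f[i][d] / m[i] - v[i][d] * chiOld);
    double T = computeTemperature(n, m, v);      /* T(t + dt)        */
    chiNew = chiHalf
           + 0.5 * dt / (tauT * tauT) * (T / Ttarget - 1.0);
  } while (fabs(chiNew - chiOld) > tol * fabs(chiNew)
           && ++iter < maxIter);

  *chi = chiNew;
}
\end{lstlisting}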
1299
1300 The Nos\'e-Hoover algorithm is known to conserve a Hamiltonian for the
1301 extended system that is, to within a constant, identical to the
1302 Helmholtz free energy,
1303 \begin{equation}
1304 H_{\mathrm{NVT}} = V + K + f k_B T_{\mathrm{target}} \left(
1305 \frac{\tau_{T}^2 \chi^2(t)}{2} + \int_{0}^{t} \chi(t^\prime) dt^\prime
1306 \right)
1307 \end{equation}
1308 Poor choices of $\delta t$ or $\tau_T$ can result in non-conservation
1309 of $H_{\mathrm{NVT}}$, so the conserved quantity is maintained in the
1310 last column of the {\tt .stat} file to allow checks on the quality of
1311 the integration.
1312
1313 Bond constraints are applied at the end of both the {\tt moveA} and
1314 {\tt moveB} portions of the algorithm. Details on the constraint
1315 algorithms are given in section \ref{oopseSec:rattle}.
1316
1317 \subsubsection{\label{sec:NPTi}Constant-pressure integration with
1318 isotropic box deformations (NPTi)}
1319
To carry out isobaric-isothermal ensemble calculations, {\sc oopse}
1321 implements the Melchionna modifications to the Nos\'e-Hoover-Andersen
1322 equations of motion,\cite{melchionna93}
1323
1324 \begin{eqnarray}
1325 \dot{{\bf r}} & = & {\bf v} + \eta \left( {\bf r} - {\bf R}_0 \right) \\
1326 \dot{{\bf v}} & = & \frac{{\bf f}}{m} - (\eta + \chi) {\bf v} \\
1327 \dot{\mathsf{A}} & = & \mathsf{A} \cdot
1328 \mbox{ skew}\left(\overleftrightarrow{I}^{-1} \cdot {\bf j}\right) \\
1329 \dot{{\bf j}} & = & {\bf j} \times \left( \overleftrightarrow{I}^{-1}
1330 \cdot {\bf j} \right) - \mbox{ rot}\left(\mathsf{A}^{T} \cdot \frac{\partial
1331 V}{\partial \mathsf{A}} \right) - \chi {\bf j} \\
1332 \dot{\chi} & = & \frac{1}{\tau_{T}^2} \left(
1333 \frac{T}{T_{\mathrm{target}}} - 1 \right) \\
1334 \dot{\eta} & = & \frac{1}{\tau_{B}^2 f k_B T_{\mathrm{target}}} V \left( P -
1335 P_{\mathrm{target}} \right) \\
1336 \dot{\mathcal{V}} & = & 3 \mathcal{V} \eta
1337 \label{eq:melchionna1}
1338 \end{eqnarray}
1339
1340 $\chi$ and $\eta$ are the ``extra'' degrees of freedom in the extended
1341 system. $\chi$ is a thermostat, and it has the same function as it
1342 does in the Nos\'e-Hoover NVT integrator. $\eta$ is a barostat which
1343 controls changes to the volume of the simulation box. ${\bf R}_0$ is
1344 the location of the center of mass for the entire system, and
1345 $\mathcal{V}$ is the volume of the simulation box. At any time, the
1346 volume can be calculated from the determinant of the matrix which
1347 describes the box shape:
1348 \begin{equation}
1349 \mathcal{V} = \det(\mathsf{H})
1350 \end{equation}
1351
1352 The NPTi integrator requires an instantaneous pressure. This quantity
1353 is calculated via the pressure tensor,
\begin{equation}
\overleftrightarrow{\mathsf{P}}(t) = \frac{1}{\mathcal{V}(t)} \left(
\sum_{i=1}^{N} m_i {\bf v}_i(t) \otimes {\bf v}_i(t) +
\overleftrightarrow{\mathsf{W}}(t) \right)
\end{equation}
1359 The kinetic contribution to the pressure tensor utilizes the {\it
1360 outer} product of the velocities denoted by the $\otimes$ symbol. The
1361 stress tensor is calculated from another outer product of the
1362 inter-atomic separation vectors (${\bf r}_{ij} = {\bf r}_j - {\bf
1363 r}_i$) with the forces between the same two atoms,
1364 \begin{equation}
1365 \overleftrightarrow{\mathsf{W}}(t) = \sum_{i} \sum_{j>i} {\bf r}_{ij}(t)
1366 \otimes {\bf f}_{ij}(t)
1367 \end{equation}
The instantaneous pressure is then obtained from one third of the
trace of the pressure tensor,
1370 \begin{equation}
1371 P(t) = \frac{1}{3} \mathrm{Tr} \left( \overleftrightarrow{\mathsf{P}}(t)
1372 \right)
1373 \end{equation}
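
A C sketch of this pressure evaluation is given below. The array
layout and names are assumptions made for illustration, and the
pairwise virial $\overleftrightarrow{\mathsf{W}}$ is taken to have
been accumulated during the force loop.

\begin{lstlisting}
/* Sketch: kinetic outer-product term plus the pairwise virial W,
   divided by the volume, followed by one third of the trace. */
double instantaneousPressure(int n, const double *m,
                             const double (*v)[3],
                             const double W[3][3], double volume) {
  double P[3][3] = {{0.0}};
  int i, a, b;

  for (i = 0; i < n; i++)             /* kinetic contribution        */
    for (a = 0; a < 3; a++)
      for (b = 0; b < 3; b++)
        P[a][b] += m[i] * v[i][a] * v[i][b];

  for (a = 0; a < 3; a++)             /* add virial and scale by 1/V */
    for (b = 0; b < 3; b++)
      P[a][b] = (P[a][b] + W[a][b]) / volume;

  return (P[0][0] + P[1][1] + P[2][2]) / 3.0;    /* P = Tr(P) / 3    */
}
\end{lstlisting}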
1374
1375 In eq.(\ref{eq:melchionna1}), $\tau_B$ is the time constant for
1376 relaxation of the pressure to the target value. To set values for
1377 $\tau_B$ or $P_{\mathrm{target}}$ in a simulation, one would use the
1378 {\tt tauBarostat} and {\tt targetPressure} keywords in the {\tt .bass}
1379 file. The units for {\tt tauBarostat} are fs, and the units for the
{\tt targetPressure} are atmospheres. As in the NVT integrator, the
integration of the equations of motion is carried out in a two-part,
velocity-Verlet style algorithm:
1383
1384 {\tt moveA:}
1385 \begin{eqnarray}
1386 T(t) & \leftarrow & \left\{{\bf v}(t)\right\}, \left\{{\bf j}(t)\right\} \\
1387 P(t) & \leftarrow & \left\{{\bf r}(t)\right\}, \left\{{\bf v}(t)\right\}, \left\{{\bf f}(t)\right\} \\
1388 {\bf v}\left(t + \delta t / 2\right) & \leftarrow & {\bf
1389 v}(t) + \frac{\delta t}{2} \left( \frac{{\bf f}(t)}{m} - {\bf v}(t)
1390 \left(\chi(t) + \eta(t) \right) \right) \\
1391 {\bf j}\left(t + \delta t / 2 \right) & \leftarrow & {\bf
1392 j}(t) + \frac{\delta t}{2} \left( {\bf \tau}^b(t) - {\bf j}(t)
1393 \chi(t) \right) \\
1394 \mathsf{A}(t + \delta t) & \leftarrow & \mathrm{rot}\left(\delta t *
1395 {\bf j}(t + \delta t / 2) \overleftrightarrow{\mathsf{I}}^{-1} \right) \\
1396 \chi\left(t + \delta t / 2 \right) & \leftarrow & \chi(t) +
1397 \frac{\delta t}{2 \tau_T^2} \left( \frac{T(t)}{T_{\mathrm{target}}} - 1
1398 \right) \\
1399 \eta(t + \delta t / 2) & \leftarrow & \eta(t) + \frac{\delta t \mathcal{V}(t)}{2 N k_B
1400 T(t) \tau_B^2} \left( P(t) - P_{\mathrm{target}} \right) \\
1401 {\bf r}(t + \delta t) & \leftarrow & {\bf r}(t) + \delta t \left\{ {\bf
1402 v}\left(t + \delta t / 2 \right) + \eta(t + \delta t / 2)\left[ {\bf
1403 r}(t + \delta t) - {\bf R}_0 \right] \right\} \\
1404 \mathsf{H}(t + \delta t) & \leftarrow & e^{-\delta t \eta(t + \delta t
1405 / 2)} \mathsf{H}(t)
1406 \end{eqnarray}
1407
1408 Most of these equations are identical to their counterparts in the NVT
1409 integrator, but the propagation of positions to time $t + \delta t$
1410 depends on the positions at the same time. {\sc oopse} carries out
1411 this step iteratively (with a limit of 5 passes through the iterative
1412 loop). Also, the simulation box $\mathsf{H}$ is scaled uniformly for
1413 one full time step by an exponential factor that depends on the value
1414 of $\eta$ at time $t +
1415 \delta t / 2$. Reshaping the box uniformly also scales the volume of
1416 the box by
1417 \begin{equation}
1418 \mathcal{V}(t + \delta t) \leftarrow e^{ - 3 \delta t \eta(t + \delta t /2)}
1419 \mathcal{V}(t)
1420 \end{equation}
1421
1422 The {\tt doForces} step for the NPTi integrator is exactly the same as
1423 in both the DLM and NVT integrators. Once the forces and torques have
1424 been obtained at the new time step, the velocities can be advanced to
1425 the same time value.
1426
1427 {\tt moveB:}
1428 \begin{eqnarray}
1429 T(t + \delta t) & \leftarrow & \left\{{\bf v}(t + \delta t)\right\},
1430 \left\{{\bf j}(t + \delta t)\right\} \\
1431 P(t + \delta t) & \leftarrow & \left\{{\bf r}(t + \delta t)\right\},
1432 \left\{{\bf v}(t + \delta t)\right\}, \left\{{\bf f}(t + \delta t)\right\} \\
1433 \chi\left(t + \delta t \right) & \leftarrow & \chi\left(t + \delta t /
1434 2 \right) + \frac{\delta t}{2 \tau_T^2} \left( \frac{T(t+\delta
1435 t)}{T_{\mathrm{target}}} - 1 \right) \\
1436 \eta(t + \delta t) & \leftarrow & \eta(t + \delta t / 2) +
1437 \frac{\delta t \mathcal{V}(t + \delta t)}{2 N k_B T(t + \delta t) \tau_B^2}
1438 \left( P(t + \delta t) - P_{\mathrm{target}}
1439 \right) \\
1440 {\bf v}\left(t + \delta t \right) & \leftarrow & {\bf
1441 v}\left(t + \delta t / 2 \right) + \frac{\delta t}{2} \left(
1442 \frac{{\bf f}(t + \delta t)}{m} - {\bf v}(t + \delta t)
1443 (\chi(t + \delta t) + \eta(t + \delta t)) \right) \\
1444 {\bf j}\left(t + \delta t \right) & \leftarrow & {\bf
1445 j}\left(t + \delta t / 2 \right) + \frac{\delta t}{2} \left( {\bf
1446 \tau}^b(t + \delta t) - {\bf j}(t + \delta t)
1447 \chi(t + \delta t) \right)
1448 \end{eqnarray}
1449
1450 Once again, since ${\bf v}(t + \delta t)$ and ${\bf j}(t + \delta t)$
are required to calculate $T(t + \delta t)$, $P(t + \delta t)$, $\chi(t +
1452 \delta t)$, and $\eta(t + \delta t)$, they indirectly depend on their
1453 own values at time $t + \delta t$. {\tt moveB} is therefore done in
1454 an iterative fashion until $\chi(t + \delta t)$ and $\eta(t + \delta
1455 t)$ become self-consistent. The relative tolerance for the
1456 self-consistency check defaults to a value of $\mbox{10}^{-6}$, but
1457 {\sc oopse} will terminate the iteration after 4 loops even if the
1458 consistency check has not been satisfied.
1459
1460 The Melchionna modification of the Nos\'e-Hoover-Andersen algorithm is
1461 known to conserve a Hamiltonian for the extended system that is, to
1462 within a constant, identical to the Gibbs free energy,
1463 \begin{equation}
1464 H_{\mathrm{NPTi}} = V + K + f k_B T_{\mathrm{target}} \left(
1465 \frac{\tau_{T}^2 \chi^2(t)}{2} + \int_{0}^{t} \chi(t^\prime) dt^\prime
1466 \right) + P_{\mathrm{target}} \mathcal{V}(t).
1467 \end{equation}
1468 Poor choices of $\delta t$, $\tau_T$, or $\tau_B$ can result in
1469 non-conservation of $H_{\mathrm{NPTi}}$, so the conserved quantity is
1470 maintained in the last column of the {\tt .stat} file to allow checks
1471 on the quality of the integration. It is also known that this
1472 algorithm samples the equilibrium distribution for the enthalpy
1473 (including contributions for the thermostat and barostat),
1474 \begin{equation}
1475 H_{\mathrm{NPTi}} = V + K + \frac{f k_B T_{\mathrm{target}}}{2} \left(
1476 \chi^2 \tau_T^2 + \eta^2 \tau_B^2 \right) + P_{\mathrm{target}}
1477 \mathcal{V}(t).
1478 \end{equation}
1479
1480 Bond constraints are applied at the end of both the {\tt moveA} and
1481 {\tt moveB} portions of the algorithm. Details on the constraint
1482 algorithms are given in section \ref{oopseSec:rattle}.
1483
1484 \subsubsection{\label{sec:NPTf}Constant-pressure integration with a
1485 flexible box (NPTf)}
1486
1487 There is a relatively simple generalization of the
1488 Nos\'e-Hoover-Andersen method to include changes in the simulation box
1489 {\it shape} as well as in the volume of the box. This method utilizes
1490 the full $3 \times 3$ pressure tensor and introduces a tensor of
1491 extended variables ($\overleftrightarrow{\eta}$) to control changes to
1492 the box shape. The equations of motion for this method are
1493 \begin{eqnarray}
1494 \dot{{\bf r}} & = & {\bf v} + \overleftrightarrow{\eta} \cdot \left( {\bf r} - {\bf R}_0 \right) \\
1495 \dot{{\bf v}} & = & \frac{{\bf f}}{m} - (\overleftrightarrow{\eta} +
1496 \chi \mathsf{1}) {\bf v} \\
1497 \dot{\mathsf{A}} & = & \mathsf{A} \cdot
1498 \mbox{ skew}\left(\overleftrightarrow{I}^{-1} \cdot {\bf j}\right) \\
1499 \dot{{\bf j}} & = & {\bf j} \times \left( \overleftrightarrow{I}^{-1}
1500 \cdot {\bf j} \right) - \mbox{ rot}\left(\mathsf{A}^{T} \cdot \frac{\partial
1501 V}{\partial \mathsf{A}} \right) - \chi {\bf j} \\
1502 \dot{\chi} & = & \frac{1}{\tau_{T}^2} \left(
1503 \frac{T}{T_{\mathrm{target}}} - 1 \right) \\
\dot{\overleftrightarrow{\eta}} & = & \frac{1}{\tau_{B}^2 f k_B
1505 T_{\mathrm{target}}} V \left( \overleftrightarrow{\mathsf{P}} - P_{\mathrm{target}}\mathsf{1} \right) \\
1506 \dot{\mathsf{H}} & = & \overleftrightarrow{\eta} \cdot \mathsf{H}
1507 \label{eq:melchionna2}
1508 \end{eqnarray}
1509
1510 Here, $\mathsf{1}$ is the unit matrix and $\overleftrightarrow{\mathsf{P}}$
is the pressure tensor. Again, the volume is $\mathcal{V} = \det
1512 \mathsf{H}$.
1513
1514 The propagation of the equations of motion is nearly identical to the
1515 NPTi integration:
1516
1517 {\tt moveA:}
1518 \begin{eqnarray}
1519 T(t) & \leftarrow & \left\{{\bf v}(t)\right\}, \left\{{\bf j}(t)\right\} \\
1520 \overleftrightarrow{\mathsf{P}}(t) & \leftarrow & \left\{{\bf r}(t)\right\}, \left\{{\bf v}(t)\right\}, \left\{{\bf f}(t)\right\} \\
1521 {\bf v}\left(t + \delta t / 2\right) & \leftarrow & {\bf
1522 v}(t) + \frac{\delta t}{2} \left( \frac{{\bf f}(t)}{m} -
1523 \left(\chi(t)\mathsf{1} + \overleftrightarrow{\eta}(t) \right) \cdot
1524 {\bf v}(t) \right) \\
1525 {\bf j}\left(t + \delta t / 2 \right) & \leftarrow & {\bf
1526 j}(t) + \frac{\delta t}{2} \left( {\bf \tau}^b(t) - {\bf j}(t)
1527 \chi(t) \right) \\
1528 \mathsf{A}(t + \delta t) & \leftarrow & \mathrm{rot}\left(\delta t *
1529 {\bf j}(t + \delta t / 2) \overleftrightarrow{\mathsf{I}}^{-1} \right) \\
1530 \chi\left(t + \delta t / 2 \right) & \leftarrow & \chi(t) +
1531 \frac{\delta t}{2 \tau_T^2} \left( \frac{T(t)}{T_{\mathrm{target}}} - 1
1532 \right) \\
1533 \overleftrightarrow{\eta}(t + \delta t / 2) & \leftarrow & \overleftrightarrow{\eta}(t) + \frac{\delta t \mathcal{V}(t)}{2 N k_B
1534 T(t) \tau_B^2} \left( \overleftrightarrow{\mathsf{P}}(t) - P_{\mathrm{target}}\mathsf{1} \right) \\
1535 {\bf r}(t + \delta t) & \leftarrow & {\bf r}(t) + \delta t \left\{ {\bf
1536 v}\left(t + \delta t / 2 \right) + \overleftrightarrow{\eta}(t +
1537 \delta t / 2) \cdot \left[ {\bf
1538 r}(t + \delta t) - {\bf R}_0 \right] \right\} \\
1539 \mathsf{H}(t + \delta t) & \leftarrow & \mathsf{H}(t) \cdot e^{-\delta t
1540 \overleftrightarrow{\eta}(t + \delta t / 2)}
1541 \end{eqnarray}
1542 {\sc oopse} uses a power series expansion truncated at second order
1543 for the exponential operation which scales the simulation box.
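
A sketch of that truncated expansion, in C and with hypothetical
names, is shown below for the operation $\mathsf{H} \leftarrow
\mathsf{H} \cdot \exp\left(-\delta t \,
\overleftrightarrow{\eta}\right)$:

\begin{lstlisting}
/* Second-order power-series approximation to the matrix exponential
   used to scale the box matrix: exp(X) ~ 1 + X + X^2/2, X = -dt*eta. */
void scaleBoxMatrix(double H[3][3], const double eta[3][3], double dt) {
  double X[3][3], X2[3][3], expX[3][3], Hnew[3][3];
  int a, b, c;

  for (a = 0; a < 3; a++)
    for (b = 0; b < 3; b++)
      X[a][b] = -dt * eta[a][b];

  for (a = 0; a < 3; a++)                   /* X2 = X . X            */
    for (b = 0; b < 3; b++) {
      X2[a][b] = 0.0;
      for (c = 0; c < 3; c++) X2[a][b] += X[a][c] * X[c][b];
    }

  for (a = 0; a < 3; a++)                   /* expX ~ 1 + X + X^2/2  */
    for (b = 0; b < 3; b++)
      expX[a][b] = (a == b ? 1.0 : 0.0) + X[a][b] + 0.5 * X2[a][b];

  for (a = 0; a < 3; a++)                   /* H <- H . expX         */
    for (b = 0; b < 3; b++) {
      Hnew[a][b] = 0.0;
      for (c = 0; c < 3; c++) Hnew[a][b] += H[a][c] * expX[c][b];
    }
  for (a = 0; a < 3; a++)
    for (b = 0; b < 3; b++) H[a][b] = Hnew[a][b];
}
\end{lstlisting}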
1544
1545 The {\tt moveB} portion of the algorithm is largely unchanged from the
1546 NPTi integrator:
1547
1548 {\tt moveB:}
1549 \begin{eqnarray}
1550 T(t + \delta t) & \leftarrow & \left\{{\bf v}(t + \delta t)\right\},
1551 \left\{{\bf j}(t + \delta t)\right\} \\
1552 \overleftrightarrow{\mathsf{P}}(t + \delta t) & \leftarrow & \left\{{\bf r}(t + \delta t)\right\},
1553 \left\{{\bf v}(t + \delta t)\right\}, \left\{{\bf f}(t + \delta t)\right\} \\
1554 \chi\left(t + \delta t \right) & \leftarrow & \chi\left(t + \delta t /
1555 2 \right) + \frac{\delta t}{2 \tau_T^2} \left( \frac{T(t+\delta
1556 t)}{T_{\mathrm{target}}} - 1 \right) \\
1557 \overleftrightarrow{\eta}(t + \delta t) & \leftarrow & \overleftrightarrow{\eta}(t + \delta t / 2) +
1558 \frac{\delta t \mathcal{V}(t + \delta t)}{2 N k_B T(t + \delta t) \tau_B^2}
\left( \overleftrightarrow{\mathsf{P}}(t + \delta t) - P_{\mathrm{target}}\mathsf{1}
1560 \right) \\
{\bf v}\left(t + \delta t \right) & \leftarrow & {\bf
v}\left(t + \delta t / 2 \right) + \frac{\delta t}{2} \left(
\frac{{\bf f}(t + \delta t)}{m} -
\left(\chi(t + \delta t)\mathsf{1} + \overleftrightarrow{\eta}(t + \delta
t)\right) \cdot {\bf v}(t + \delta t) \right) \\
1566 {\bf j}\left(t + \delta t \right) & \leftarrow & {\bf
1567 j}\left(t + \delta t / 2 \right) + \frac{\delta t}{2} \left( {\bf
1568 \tau}^b(t + \delta t) - {\bf j}(t + \delta t)
1569 \chi(t + \delta t) \right)
1570 \end{eqnarray}
1571
1572 The iterative schemes for both {\tt moveA} and {\tt moveB} are
1573 identical to those described for the NPTi integrator.
1574
1575 The NPTf integrator is known to conserve the following Hamiltonian:
1576 \begin{equation}
1577 H_{\mathrm{NPTf}} = V + K + f k_B T_{\mathrm{target}} \left(
1578 \frac{\tau_{T}^2 \chi^2(t)}{2} + \int_{0}^{t} \chi(t^\prime) dt^\prime
1579 \right) + P_{\mathrm{target}} \mathcal{V}(t) + \frac{f k_B
1580 T_{\mathrm{target}}}{2}
1581 \mathrm{Tr}\left[\overleftrightarrow{\eta}(t)\right]^2 \tau_B^2.
1582 \end{equation}
1583
1584 This integrator must be used with care, particularly in liquid
1585 simulations. Liquids have very small restoring forces in the
1586 off-diagonal directions, and the simulation box can very quickly form
1587 elongated and sheared geometries which become smaller than the
1588 electrostatic or Lennard-Jones cutoff radii. It finds most use in
1589 simulating crystals or liquid crystals which assume non-orthorhombic
1590 geometries.
1591
1592 \subsubsection{\label{nptxyz}Constant pressure in 3 axes (NPTxyz)}
1593
1594 There is one additional extended system integrator which is somewhat
1595 simpler than the NPTf method described above. In this case, the three
1596 axes have independent barostats which each attempt to preserve the
1597 target pressure along the box walls perpendicular to that particular
1598 axis. The lengths of the box axes are allowed to fluctuate
1599 independently, but the angle between the box axes does not change.
1600 The equations of motion are identical to those described above, but
1601 only the {\it diagonal} elements of $\overleftrightarrow{\eta}$ are
1602 computed. The off-diagonal elements are set to zero (even when the
1603 pressure tensor has non-zero off-diagonal elements).
1604
1605 It should be noted that the NPTxyz integrator is {\it not} known to
1606 preserve any Hamiltonian of interest to the chemical physics
1607 community. The integrator is extremely useful, however, in generating
1608 initial conditions for other integration methods. It {\it is} suitable
1609 for use with liquid simulations, or in cases where there is
orientational anisotropy in the system (e.g., in lipid bilayer
1611 simulations).
1612
1613 \subsection{\label{oopseSec:rattle}The {\sc rattle} Method for Bond
1614 Constraints}
1615
1616 In order to satisfy the constraints of fixed bond lengths within {\sc
1617 oopse}, we have implemented the {\sc rattle} algorithm of
Andersen.\cite{andersen83} The algorithm is a velocity Verlet
formulation of the {\sc shake} method\cite{ryckaert77} for iteratively
solving the Lagrange multipliers of constraint. The system of Lagrange
multipliers allows one to reformulate the equations of motion with
1622 explicit constraint forces.\cite{fowles99:lagrange}
1623
1624 Consider a system described by coordinates $q_1$ and $q_2$ subject to an
1625 equation of constraint:
1626 \begin{equation}
1627 \sigma(q_1, q_2,t) = 0
1628 \label{oopseEq:lm1}
1629 \end{equation}
1630 The Lagrange formulation of the equations of motion can be written:
1631 \begin{equation}
1632 \delta\int_{t_1}^{t_2}L\, dt =
1633 \int_{t_1}^{t_2} \sum_i \biggl [ \frac{\partial L}{\partial q_i}
1634 - \frac{d}{dt}\biggl(\frac{\partial L}{\partial \dot{q}_i}
1635 \biggr ) \biggr] \delta q_i \, dt = 0
1636 \label{oopseEq:lm2}
1637 \end{equation}
Here, the $\delta q_i$ are not independent, as $q_1$ and $q_2$
1639 are linked by $\sigma$. However, $\sigma$ is fixed at any given
1640 instant of time, giving:
1641 \begin{align}
1642 \delta\sigma &= \biggl( \frac{\partial\sigma}{\partial q_1} \delta q_1 %
1643 + \frac{\partial\sigma}{\partial q_2} \delta q_2 \biggr) = 0 \\
1644 %
1645 \frac{\partial\sigma}{\partial q_1} \delta q_1 &= %
1646 - \frac{\partial\sigma}{\partial q_2} \delta q_2 \\
1647 %
1648 \delta q_2 &= - \biggl(\frac{\partial\sigma}{\partial q_1} \bigg / %
1649 \frac{\partial\sigma}{\partial q_2} \biggr) \delta q_1
1650 \end{align}
Substituting this back into Eq.~\ref{oopseEq:lm2},
1652 \begin{equation}
1653 \int_{t_1}^{t_2}\biggl [ \biggl(\frac{\partial L}{\partial q_1}
1654 - \frac{d}{dt}\,\frac{\partial L}{\partial \dot{q}_1}
1655 \biggr)
- \biggl( \frac{\partial L}{\partial q_2}
- \frac{d}{dt}\,\frac{\partial L}{\partial \dot{q}_2}
1658 \biggr) \biggl(\frac{\partial\sigma}{\partial q_1} \bigg / %
1659 \frac{\partial\sigma}{\partial q_2} \biggr)\biggr] \delta q_1 \, dt = 0
1660 \label{oopseEq:lm3}
1661 \end{equation}
1662 Leading to,
1663 \begin{equation}
1664 \frac{\biggl(\frac{\partial L}{\partial q_1}
1665 - \frac{d}{dt}\,\frac{\partial L}{\partial \dot{q}_1}
1666 \biggr)}{\frac{\partial\sigma}{\partial q_1}} =
1667 \frac{\biggl(\frac{\partial L}{\partial q_2}
1668 - \frac{d}{dt}\,\frac{\partial L}{\partial \dot{q}_2}
1669 \biggr)}{\frac{\partial\sigma}{\partial q_2}}
1670 \label{oopseEq:lm4}
1671 \end{equation}
This relation can only be satisfied if both sides are equal to a single
1673 function $-\lambda(t)$,
1674 \begin{align}
1675 \frac{\biggl(\frac{\partial L}{\partial q_1}
1676 - \frac{d}{dt}\,\frac{\partial L}{\partial \dot{q}_1}
1677 \biggr)}{\frac{\partial\sigma}{\partial q_1}} &= -\lambda(t) \\
1678 %
1679 \frac{\partial L}{\partial q_1}
1680 - \frac{d}{dt}\,\frac{\partial L}{\partial \dot{q}_1} &=
1681 -\lambda(t)\,\frac{\partial\sigma}{\partial q_1} \\
1682 %
1683 \frac{\partial L}{\partial q_1}
1684 - \frac{d}{dt}\,\frac{\partial L}{\partial \dot{q}_1}
1685 + \mathcal{G}_i &= 0
1686 \end{align}
Where $\mathcal{G}_i$, the force of constraint on coordinate $q_i$, is:
\begin{equation}
\mathcal{G}_i = \lambda(t)\,\frac{\partial\sigma}{\partial q_i}
1690 \label{oopseEq:lm5}
1691 \end{equation}
1692
In a simulation, this would involve the solution of a set of $(m +
n)$ equations, where $m$ is the number of constraints and $n$ is the
number of constrained coordinates. In practice, this is not done, as
the matrix inversion necessary to solve the system of equations would
be very time consuming. Additionally, the numerical error in the
solution of the set of $\lambda$'s would be compounded by the error
inherent in propagating with the Velocity Verlet algorithm ($\Delta
t^4$). The Verlet propagation error is negligible in an unconstrained
system, as one is interested in the statistics of the run, and not in
the run being numerically exact to the ``true'' integration. This
relates back to the ergodic hypothesis that a time average over a
valid trajectory will still give the correct ensemble
average. However, in the case of constraints, if the equations of
motion leave the ``true'' trajectory, they are departing from the
constrained surface. The method used instead is to solve iteratively
for $\lambda(t)$ at each time step.
1709
1710 In {\sc rattle} the equations of motion are modified subject to the
1711 following two constraints:
1712 \begin{align}
1713 \sigma_{ij}[\mathbf{r}(t)] \equiv
1714 [ \mathbf{r}_i(t) - \mathbf{r}_j(t)]^2 - d_{ij}^2 &= 0 %
1715 \label{oopseEq:c1} \\
1716 %
1717 [\mathbf{\dot{r}}_i(t) - \mathbf{\dot{r}}_j(t)] \cdot
1718 [\mathbf{r}_i(t) - \mathbf{r}_j(t)] &= 0 \label{oopseEq:c2}
1719 \end{align}
1720 Eq.~\ref{oopseEq:c1} is the set of bond constraints, where $d_{ij}$ is
1721 the constrained distance between atom $i$ and
1722 $j$. Eq.~\ref{oopseEq:c2} constrains the velocities of $i$ and $j$ to
1723 be perpendicular to the bond vector, so that the bond can neither grow
1724 nor shrink. The constrained dynamics equations become:
1725 \begin{equation}
1726 m_i \mathbf{\ddot{r}}_i = \mathbf{F}_i + \mathbf{\mathcal{G}}_i
1727 \label{oopseEq:r1}
1728 \end{equation}
Where $\mathbf{\mathcal{G}}_i$ is the force of constraint on atom
$i$, defined as:
1731 \begin{equation}
1732 \mathbf{\mathcal{G}}_i = - \sum_j \lambda_{ij}(t)\,\nabla \sigma_{ij}
1733 \label{oopseEq:r2}
1734 \end{equation}
1735
1736 In Velocity Verlet, if $\Delta t = h$, the propagation can be written:
1737 \begin{align}
1738 \mathbf{r}_i(t+h) &=
\mathbf{r}_i(t) + h\mathbf{\dot{r}}_i(t) +
1740 \frac{h^2}{2m_i}\,\Bigl[ \mathbf{F}_i(t) +
1741 \mathbf{\mathcal{G}}_{Ri}(t) \Bigr] \label{oopseEq:vv1} \\
1742 %
1743 \mathbf{\dot{r}}_i(t+h) &=
1744 \mathbf{\dot{r}}_i(t) + \frac{h}{2m_i}
1745 \Bigl[ \mathbf{F}_i(t) + \mathbf{\mathcal{G}}_{Ri}(t) +
1746 \mathbf{F}_i(t+h) + \mathbf{\mathcal{G}}_{Vi}(t+h) \Bigr] %
1747 \label{oopseEq:vv2}
1748 \end{align}
1749 Where:
1750 \begin{align}
1751 \mathbf{\mathcal{G}}_{Ri}(t) &=
1752 -2 \sum_j \lambda_{Rij}(t) \mathbf{r}_{ij}(t) \\
1753 %
1754 \mathbf{\mathcal{G}}_{Vi}(t+h) &=
-2 \sum_j \lambda_{Vij}(t+h) \mathbf{r}_{ij}(t+h)
1756 \end{align}
1757 Next, define:
1758 \begin{align}
1759 g_{ij} &= h \lambda_{Rij}(t) \\
1760 k_{ij} &= h \lambda_{Vij}(t+h) \\
1761 \mathbf{q}_i &= \mathbf{\dot{r}}_i(t) + \frac{h}{2m_i} \mathbf{F}_i(t)
1762 - \frac{1}{m_i}\sum_j g_{ij}\mathbf{r}_{ij}(t)
1763 \end{align}
1764 Using these definitions, Eq.~\ref{oopseEq:vv1} and \ref{oopseEq:vv2}
1765 can be rewritten as,
1766 \begin{align}
1767 \mathbf{r}_i(t+h) &= \mathbf{r}_i(t) + h \mathbf{q}_i \\
1768 %
\mathbf{\dot{r}}_i(t+h) &= \mathbf{q}_i + \frac{h}{2m_i}\mathbf{F}_i(t+h)
1770 -\frac{1}{m_i}\sum_j k_{ij} \mathbf{r}_{ij}(t+h)
1771 \end{align}
1772
1773 To integrate the equations of motion, the {\sc rattle} algorithm first
1774 solves for $\mathbf{r}(t+h)$. Let,
1775 \begin{equation}
\mathbf{q}_i = \mathbf{\dot{r}}_i(t) + \frac{h}{2m_i}\mathbf{F}_i(t)
1777 \end{equation}
Here $\mathbf{q}_i$ corresponds to an initial unconstrained move.
Next, pick a constraint between atoms $i$ and $j$, and let
\begin{equation}
\mathbf{s} = \bigl[\mathbf{r}_i(t) + h\mathbf{q}_i\bigr]
- \bigl[\mathbf{r}_j(t) + h\mathbf{q}_j\bigr]
\label{oopseEq:ra1}
\end{equation}
1785 If
1786 \begin{equation}
1787 \Big| |\mathbf{s}|^2 - d_{ij}^2 \Big| > \text{tolerance},
1788 \end{equation}
1789 then the constraint is unsatisfied, and corrections are made to the
1790 positions. First we define a test corrected configuration as,
1791 \begin{align}
1792 \mathbf{r}_i^T(t+h) = \mathbf{r}_i(t) + h\biggl[\mathbf{q}_i -
1793 g_{ij}\,\frac{\mathbf{r}_{ij}(t)}{m_i} \biggr] \\
1794 %
1795 \mathbf{r}_j^T(t+h) = \mathbf{r}_j(t) + h\biggl[\mathbf{q}_j +
1796 g_{ij}\,\frac{\mathbf{r}_{ij}(t)}{m_j} \biggr]
1797 \end{align}
We choose $g_{ij}$ such that $|\mathbf{r}_i^T - \mathbf{r}_j^T|^2
= d_{ij}^2$. Solving the quadratic for $g_{ij}$, we obtain the
approximation,
\begin{equation}
g_{ij} = \frac{|\mathbf{s}|^2 - d_{ij}^2}{2h\left[\mathbf{s}\cdot\mathbf{r}_{ij}(t)\right]
\left(\frac{1}{m_i} + \frac{1}{m_j}\right)}
\end{equation}
Although this is not an exact solution for $g_{ij}$, the overall
scheme is iterative, so the solution will eventually converge. With a
trial $g_{ij}$, the new $\mathbf{q}$'s become,
1808 \begin{align}
1809 \mathbf{q}_i &= \mathbf{q}^{\text{old}}_i - g_{ij}\,
1810 \frac{\mathbf{r}_{ij}(t)}{m_i} \\
1811 %
1812 \mathbf{q}_j &= \mathbf{q}^{\text{old}}_j + g_{ij}\,
1813 \frac{\mathbf{r}_{ij}(t)}{m_j}
1814 \end{align}
1815 The whole algorithm is then repeated from Eq.~\ref{oopseEq:ra1} until
1816 all constraints are satisfied.
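
A C sketch of a single pass of this position correction for one
constrained pair is given below. The names are illustrative
assumptions, and we take $\mathbf{r}_{ij}(t) = \mathbf{r}_i(t) -
\mathbf{r}_j(t)$, consistent with the signs in the update equations
above.

\begin{lstlisting}
#include <math.h>

/* One pass of the RATTLE position correction for a pair (i, j).
   Returns 1 if a correction was applied, 0 if already satisfied. */
int rattlePositionPass(double h, double tol,
                       const double ri[3], const double rj[3],
                       const double rij[3],   /* r_i - r_j at time t */
                       double qi[3], double qj[3],
                       double mi, double mj, double dij) {
  double s[3], s2 = 0.0, sDotR = 0.0;
  int d;

  for (d = 0; d < 3; d++) {    /* trial separation after the move    */
    s[d] = (ri[d] + h * qi[d]) - (rj[d] + h * qj[d]);
    s2 += s[d] * s[d];
    sDotR += s[d] * rij[d];
  }
  if (fabs(s2 - dij * dij) <= tol) return 0;

  /* approximate multiplier g_ij from the linearized quadratic */
  double g = (s2 - dij * dij) /
             (2.0 * h * sDotR * (1.0 / mi + 1.0 / mj));

  for (d = 0; d < 3; d++) {    /* corrected trial moves              */
    qi[d] -= g * rij[d] / mi;
    qj[d] += g * rij[d] / mj;
  }
  return 1;
}
\end{lstlisting}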
1817
The second step of {\sc rattle} is to update the velocities. The
step starts with,
1820 \begin{equation}
1821 \mathbf{\dot{r}}_i(t+h) = \mathbf{q}_i + \frac{h}{2m_i}\mathbf{F}_i(t+h)
1822 \end{equation}
Next, we pick a constraint between atoms $i$ and $j$ and calculate the dot product $\ell$:
1824 \begin{equation}
1825 \ell = \mathbf{r}_{ij}(t+h) \cdot \mathbf{\dot{r}}_{ij}(t+h)
1826 \label{oopseEq:rv1}
1827 \end{equation}
Here, if the constraint of Eq.~\ref{oopseEq:c2} holds, $\ell$ should
be zero. Therefore, if the magnitude of $\ell$ is greater than some
tolerance, corrections are made to the $i$ and $j$ velocities.
\begin{align}
\mathbf{\dot{r}}_i^T &= \mathbf{\dot{r}}_i(t+h) - k_{ij}
\frac{\mathbf{r}_{ij}(t+h)}{m_i} \\
%
\mathbf{\dot{r}}_j^T &= \mathbf{\dot{r}}_j(t+h) + k_{ij}
\frac{\mathbf{r}_{ij}(t+h)}{m_j}
\end{align}
As in the previous step, we select a value for $k_{ij}$ such that
$\ell$ is zero.
1840 \begin{equation}
1841 k_{ij} = \frac{\ell}{d^2_{ij}(\frac{1}{m_i} + \frac{1}{m_j})}
1842 \end{equation}
1843 The test velocities, $\mathbf{\dot{r}}^T_i$ and
1844 $\mathbf{\dot{r}}^T_j$, then replace their respective velocities, and
1845 the algorithm is iterated from Eq.~\ref{oopseEq:rv1} until all
1846 constraints are satisfied.
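
The corresponding velocity correction for a single constrained pair
can be sketched as follows, again with illustrative names:

\begin{lstlisting}
#include <math.h>

/* One pass of the RATTLE velocity correction for a pair (i, j). */
int rattleVelocityPass(double tol,
                       const double rij[3],   /* r_ij at time t + h  */
                       double vi[3], double vj[3],
                       double mi, double mj, double dij) {
  double ell = 0.0;
  int d;

  for (d = 0; d < 3; d++)      /* ell = r_ij(t+h) . v_ij(t+h)        */
    ell += rij[d] * (vi[d] - vj[d]);
  if (fabs(ell) <= tol) return 0;

  double k = ell / (dij * dij * (1.0 / mi + 1.0 / mj));
  for (d = 0; d < 3; d++) {    /* corrected velocities               */
    vi[d] -= k * rij[d] / mi;
    vj[d] += k * rij[d] / mj;
  }
  return 1;
}
\end{lstlisting}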
1847
1848
1849 \subsection{\label{oopseSec:zcons}Z-Constraint Method}
1850
1851 Based on the fluctuation-dissipation theorem, a force auto-correlation
1852 method was developed by Roux and Karplus to investigate the dynamics
1853 of ions inside ion channels.\cite{Roux91} The time-dependent friction
1854 coefficient can be calculated from the deviation of the instantaneous
1855 force from its mean force.
1856 \begin{equation}
1857 \xi(z,t)=\langle\delta F(z,t)\delta F(z,0)\rangle/k_{B}T
1858 \end{equation}
1859 where%
1860 \begin{equation}
1861 \delta F(z,t)=F(z,t)-\langle F(z,t)\rangle
1862 \end{equation}
1863
1864
1865 If the time-dependent friction decays rapidly, the static friction
1866 coefficient can be approximated by
1867 \begin{equation}
1868 \xi_{\text{static}}(z)=\int_{0}^{\infty}\langle\delta F(z,t)\delta F(z,0)\rangle dt
1869 \end{equation}
This allows the diffusion constant to be calculated through the
Einstein relation:\cite{Marrink94}
1872 \begin{equation}
1873 D(z)=\frac{k_{B}T}{\xi_{\text{static}}(z)}=\frac{(k_{B}T)^{2}}{\int_{0}^{\infty
1874 }\langle\delta F(z,t)\delta F(z,0)\rangle dt}%
1875 \end{equation}
1876
The Z-Constraint method, which fixes the z coordinates of the
molecules with respect to the center of mass of the system, has been
suggested as a way to obtain the forces required for the force
auto-correlation calculation.\cite{Marrink94} However, simply
resetting the coordinate will move the center of mass of the whole
system. To avoid this problem, a new method is used in {\sc oopse}.
Instead of resetting the coordinates, we reset the forces of the
z-constrained molecules and subtract the total constraint force from
the rest of the system after the force calculation at each time step.
1886
1887 After the force calculation, define $G_\alpha$ as
1888 \begin{equation}
1889 G_{\alpha} = \sum_i F_{\alpha i}
1890 \label{oopseEq:zc1}
1891 \end{equation}
1892 Where $F_{\alpha i}$ is the force in the z direction of atom $i$ in
1893 z-constrained molecule $\alpha$. The forces of the z constrained
1894 molecule are then set to:
1895 \begin{equation}
1896 F_{\alpha i} = F_{\alpha i} -
1897 \frac{m_{\alpha i} G_{\alpha}}{\sum_i m_{\alpha i}}
1898 \end{equation}
1899 Here, $m_{\alpha i}$ is the mass of atom $i$ in the z-constrained
1900 molecule. Having rescaled the forces, the velocities must also be
1901 rescaled to subtract out any center of mass velocity in the z
1902 direction.
1903 \begin{equation}
1904 v_{\alpha i} = v_{\alpha i} -
1905 \frac{\sum_i m_{\alpha i} v_{\alpha i}}{\sum_i m_{\alpha i}}
1906 \end{equation}
1907 Where $v_{\alpha i}$ is the velocity of atom $i$ in the z direction.
1908 Lastly, all of the accumulated z constrained forces must be subtracted
1909 from the system to keep the system center of mass from drifting.
1910 \begin{equation}
1911 F_{\beta i} = F_{\beta i} - \frac{m_{\beta i} \sum_{\alpha} G_{\alpha}}
1912 {\sum_{\beta}\sum_i m_{\beta i}}
1913 \end{equation}
Where the index $\beta$ runs over all of the unconstrained molecules in the system.
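
A C sketch of this force redistribution is shown below. The data
layout (contiguous atom index ranges for each constrained molecule)
and the names are assumptions made purely for illustration.

\begin{lstlisting}
/* Remove the net z-force from each z-constrained molecule
   (mass-weighted) and subtract the accumulated total from the
   unconstrained atoms so the system center of mass does not drift. */
void applyZConstraintForces(int nConsMol, const int *consStart,
                            const int *consEnd, /* [start,end) ranges */
                            int nFreeAtoms, const int *freeIdx,
                            const double *mass, double *fz,
                            double freeMassTotal) {
  double Gtotal = 0.0;
  int m, i;

  for (m = 0; m < nConsMol; m++) {
    double G = 0.0, M = 0.0;
    for (i = consStart[m]; i < consEnd[m]; i++) {
      G += fz[i];                    /* G_alpha                      */
      M += mass[i];                  /* mass of molecule alpha       */
    }
    for (i = consStart[m]; i < consEnd[m]; i++)
      fz[i] -= mass[i] * G / M;      /* remove the net z-force       */
    Gtotal += G;
  }

  for (i = 0; i < nFreeAtoms; i++)   /* redistribute over the rest   */
    fz[freeIdx[i]] -= mass[freeIdx[i]] * Gtotal / freeMassTotal;
}
\end{lstlisting}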
1915
1916 At the very beginning of the simulation, the molecules may not be at their
1917 constrained positions. To move a z-constrained molecule to its specified
1918 position, a simple harmonic potential is used
1919 \begin{equation}
1920 U(t)=\frac{1}{2}k_{\text{Harmonic}}(z(t)-z_{\text{cons}})^{2}%
1921 \end{equation}
1922 where $k_{\text{Harmonic}}$ is the harmonic force constant, $z(t)$ is the
1923 current $z$ coordinate of the center of mass of the constrained molecule, and
1924 $z_{\text{cons}}$ is the constrained position. The harmonic force operating
1925 on the z-constrained molecule at time $t$ can be calculated by
1926 \begin{equation}
1927 F_{z_{\text{Harmonic}}}(t)=-\frac{\partial U(t)}{\partial z(t)}=
1928 -k_{\text{Harmonic}}(z(t)-z_{\text{cons}})
1929 \end{equation}
1930
1931 \section{\label{oopseSec:props}Trajectory Analysis}
1932
1933 \subsection{\label{oopseSec:staticProps}Static Property Analysis}
1934
1935 The static properties of the trajectories are analyzed with the
program \texttt{staticProps}. The code is capable of calculating a
number of pair correlations between species A and B, some of which
apply only to directional entities. A summary of the pair
correlations can be found in Table~\ref{oopseTb:gofrs}.
1940
1941 \begin{table}
1942 \caption[The list of pair correlations in \texttt{staticProps}]{The different pair correlations in \texttt{staticProps} along with whether atom A or B must be directional.}
1943 \label{oopseTb:gofrs}
1944 \begin{center}
1945 \begin{tabular}{|l|c|c|}
1946 \hline
1947 Name & Equation & Directional Atom \\ \hline
1948 $g_{\text{AB}}(r)$ & Eq.~\ref{eq:gofr} & neither \\ \hline
1949 $g_{\text{AB}}(r, \cos \theta)$ & Eq.~\ref{eq:gofrCosTheta} & A \\ \hline
1950 $g_{\text{AB}}(r, \cos \omega)$ & Eq.~\ref{eq:gofrCosOmega} & both \\ \hline
1951 $g_{\text{AB}}(x, y, z)$ & Eq.~\ref{eq:gofrXYZ} & neither \\ \hline
1952 $\langle \cos \omega \rangle_{\text{AB}}(r)$ & Eq.~\ref{eq:cosOmegaOfR} &%
1953 both \\ \hline
1954 \end{tabular}
1955 \end{center}
1956 \end{table}
1957
1958 The first pair correlation, $g_{\text{AB}}(r)$, is defined as follows:
1959 \begin{equation}
1960 g_{\text{AB}}(r) = \frac{V}{N_{\text{A}}N_{\text{B}}}\langle %%
1961 \sum_{i \in \text{A}} \sum_{j \in \text{B}} %%
1962 \delta( r - |\mathbf{r}_{ij}|) \rangle \label{eq:gofr}
1963 \end{equation}
1964 Where $\mathbf{r}_{ij}$ is the vector
1965 \begin{equation*}
1966 \mathbf{r}_{ij} = \mathbf{r}_j - \mathbf{r}_i \notag
1967 \end{equation*}
1968 and $\frac{V}{N_{\text{A}}N_{\text{B}}}$ normalizes the average over
1969 the expected pair density at a given $r$.
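
As an illustration, a C sketch of the per-frame histogram
accumulation underlying Eq.~\ref{eq:gofr} might look like the
following. The names are assumptions, the minimum-image wrapping is
omitted, and the $V/(N_{\text{A}}N_{\text{B}})$ normalization is
applied after all frames have been processed.

\begin{lstlisting}
#include <math.h>

/* Accumulate A-B pair separations into a radial histogram. */
void accumulateGofR(int nA, const double (*rA)[3],
                    int nB, const double (*rB)[3],
                    double rMax, int nBins, double *hist) {
  double dr = rMax / nBins;
  int i, j, d;

  for (i = 0; i < nA; i++)
    for (j = 0; j < nB; j++) {
      double r2 = 0.0;
      for (d = 0; d < 3; d++) {
        double dx = rB[j][d] - rA[i][d];  /* minimum image omitted */
        r2 += dx * dx;
      }
      double r = sqrt(r2);
      if (r < rMax) hist[(int)(r / dr)] += 1.0;
    }
}
\end{lstlisting}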
1970
1971 The next two pair correlations, $g_{\text{AB}}(r, \cos \theta)$ and
1972 $g_{\text{AB}}(r, \cos \omega)$, are similar in that they are both two
dimensional histograms. Both use $r$ for the primary axis and a
cosine for the secondary axis ($\cos \theta$ for
Eq.~\ref{eq:gofrCosTheta} and $\cos \omega$ for
Eq.~\ref{eq:gofrCosOmega}). This allows the investigator to
correlate the alignment of directional entities with their
separation. $g_{\text{AB}}(r, \cos
1978 \theta)$ is defined as follows:
1979 \begin{equation}
1980 g_{\text{AB}}(r, \cos \theta) = \frac{V}{N_{\text{A}}N_{\text{B}}}\langle
1981 \sum_{i \in \text{A}} \sum_{j \in \text{B}}
1982 \delta( \cos \theta - \cos \theta_{ij})
1983 \delta( r - |\mathbf{r}_{ij}|) \rangle
1984 \label{eq:gofrCosTheta}
1985 \end{equation}
1986 Where
1987 \begin{equation*}
1988 \cos \theta_{ij} = \mathbf{\hat{i}} \cdot \mathbf{\hat{r}}_{ij}
1989 \end{equation*}
1990 Here $\mathbf{\hat{i}}$ is the unit directional vector of species $i$
1991 and $\mathbf{\hat{r}}_{ij}$ is the unit vector associated with vector
1992 $\mathbf{r}_{ij}$.
1993
1994 The second two dimensional histogram is of the form:
1995 \begin{equation}
1996 g_{\text{AB}}(r, \cos \omega) =
1997 \frac{V}{N_{\text{A}}N_{\text{B}}}\langle
1998 \sum_{i \in \text{A}} \sum_{j \in \text{B}}
1999 \delta( \cos \omega - \cos \omega_{ij})
2000 \delta( r - |\mathbf{r}_{ij}|) \rangle \label{eq:gofrCosOmega}
2001 \end{equation}
2002 Here
2003 \begin{equation*}
2004 \cos \omega_{ij} = \mathbf{\hat{i}} \cdot \mathbf{\hat{j}}
2005 \end{equation*}
2006 Again, $\mathbf{\hat{i}}$ and $\mathbf{\hat{j}}$ are the unit
2007 directional vectors of species $i$ and $j$.
2008
The static analysis code is also capable of calculating a three
dimensional pair correlation of the form:
2011 \begin{equation}\label{eq:gofrXYZ}
2012 g_{\text{AB}}(x, y, z) =
2013 \frac{V}{N_{\text{A}}N_{\text{B}}}\langle
2014 \sum_{i \in \text{A}} \sum_{j \in \text{B}}
2015 \delta( x - x_{ij})
2016 \delta( y - y_{ij})
2017 \delta( z - z_{ij}) \rangle
2018 \end{equation}
2019 Where $x_{ij}$, $y_{ij}$, and $z_{ij}$ are the $x$, $y$, and $z$
2020 components respectively of vector $\mathbf{r}_{ij}$.
2021
2022 The final pair correlation is similar to
2023 Eq.~\ref{eq:gofrCosOmega}. $\langle \cos \omega
2024 \rangle_{\text{AB}}(r)$ is calculated in the following way:
2025 \begin{equation}\label{eq:cosOmegaOfR}
2026 \langle \cos \omega \rangle_{\text{AB}}(r) =
2027 \langle \sum_{i \in \text{A}} \sum_{j \in \text{B}}
2028 (\cos \omega_{ij}) \delta( r - |\mathbf{r}_{ij}|) \rangle
2029 \end{equation}
2030 Here $\cos \omega_{ij}$ is defined in the same way as in
2031 Eq.~\ref{eq:gofrCosOmega}. This equation is a single dimensional pair
2032 correlation that gives the average correlation of two directional
2033 entities as a function of their distance from each other.
2034
2035 \subsection{\label{dynamicProps}Dynamic Property Analysis}
2036
2037 The dynamic properties of a trajectory are calculated with the program
2038 \texttt{dynamicProps}. The program calculates the following properties:
2039 \begin{gather}
2040 \langle | \mathbf{r}(t) - \mathbf{r}(0) |^2 \rangle \label{eq:rms}\\
2041 \langle \mathbf{v}(t) \cdot \mathbf{v}(0) \rangle \label{eq:velCorr} \\
2042 \langle \mathbf{j}(t) \cdot \mathbf{j}(0) \rangle \label{eq:angularVelCorr}
2043 \end{gather}
2044
Eq.~\ref{eq:rms} is the mean squared displacement function, which
allows one to observe the average displacement of an atom as a
function of time. The quantity is useful when calculating diffusion
coefficients because of the Einstein relation, which is valid at long
times:\cite{allen87:csl}
2050 \begin{equation}
6tD = \langle | \mathbf{r}(t) - \mathbf{r}(0) |^2 \rangle
2052 \label{oopseEq:einstein}
2053 \end{equation}
2054
Eqs.~\ref{eq:velCorr} and \ref{eq:angularVelCorr} are the translational
velocity and angular velocity correlation functions, respectively. The
2057 latter is only applicable to directional species in the
2058 simulation. The velocity autocorrelation functions are useful when
2059 determining vibrational information about the system of interest.
2060
2061 \section{\label{oopseSec:design}Program Design}
2062
2063 \subsection{\label{sec:architecture} {\sc oopse} Architecture}
2064
The core of {\sc oopse} is divided into two main object libraries:
2066 \texttt{libBASS} and \texttt{libmdtools}. \texttt{libBASS} is the
2067 library developed around the parsing engine and \texttt{libmdtools}
2068 is the software library developed around the simulation engine. These
2069 two libraries are designed to encompass all the basic functions and
2070 tools that {\sc oopse} provides. Utility programs, such as the
2071 property analyzers, need only link against the software libraries to
2072 gain access to parsing, force evaluation, and input / output
2073 routines.
2074
2075 Contained in \texttt{libBASS} are all the routines associated with
2076 reading and parsing the \texttt{.bass} input files. Given a
2077 \texttt{.bass} file, \texttt{libBASS} will open it and any associated
2078 \texttt{.mdl} files; then create structures in memory that are
2079 templates of all the molecules specified in the input files. In
2080 addition, any simulation parameters set in the \texttt{.bass} file
2081 will be placed in a structure for later query by the controlling
2082 program.
2083
2084 Located in \texttt{libmdtools} are all other routines necessary to a
2085 Molecular Dynamics simulation. The library uses the main data
2086 structures returned by \texttt{libBASS} to initialize the various
2087 parts of the simulation: the atom structures and positions, the force
2088 field, the integrator, \emph{et cetera}. After initialization, the
2089 library can be used to perform a variety of tasks: integrate a
2090 Molecular Dynamics trajectory, query phase space information from a
2091 specific frame of a completed trajectory, or even recalculate force or
2092 energetic information about specific frames from a completed
2093 trajectory.
2094
2095 With these core libraries in place, several programs have been
2096 developed to utilize the routines provided by \texttt{libBASS} and
2097 \texttt{libmdtools}. The main program of the package is \texttt{oopse}
2098 and the corresponding parallel version \texttt{oopse\_MPI}. These two
2099 programs will take the \texttt{.bass} file, and create (and integrate)
2100 the simulation specified in the script. The two analysis programs
2101 \texttt{staticProps} and \texttt{dynamicProps} utilize the core
2102 libraries to initialize and read in trajectories from previously
completed simulations, and can also use functionality
2104 from \texttt{libmdtools} to recalculate forces and energies at key
2105 frames in the trajectories. Lastly, the family of system building
2106 programs (Sec.~\ref{oopseSec:initCoords}) also use the libraries to
2107 store and output the system configurations they create.
2108
2109 \subsection{\label{oopseSec:parallelization} Parallelization of {\sc oopse}}
2110
Although processor power is continually growing, roughly following
Moore's Law, it is still unreasonable to simulate systems of more
than 1000 atoms on a single processor. To facilitate the study of
larger systems, or of smaller systems on long time scales, in a
reasonable period of time, parallel methods were developed to allow
multiple CPUs to share the simulation workload. Three general
categories of parallel decomposition methods have been developed:
atomic, spatial, and force decomposition.
2119
Algorithmically, the simplest of the three methods is atomic
decomposition, in which the $N$ particles in a simulation are split
among $P$ processors for the duration of the simulation.
Computational cost scales as an optimal $O(N/P)$ for atomic
decomposition. Unfortunately, all processors must
2124 communicate positions and forces with all other processors at every
2125 force evaluation, leading communication costs to scale as an
2126 unfavorable $O(N)$, \emph{independent of the number of processors}. This
2127 communication bottleneck led to the development of spatial and force
2128 decomposition methods in which communication among processors scales
2129 much more favorably. Spatial or domain decomposition divides the
2130 physical spatial domain into 3D boxes in which each processor is
2131 responsible for calculation of forces and positions of particles
2132 located in its box. Particles are reassigned to different processors
2133 as they move through simulation space. To calculate forces on a given
2134 particle, a processor must know the positions of particles within some
2135 cutoff radius located on nearby processors instead of the positions of
2136 particles on all processors. Both communication between processors and
2137 computation scale as $O(N/P)$ in the spatial method. However, spatial
2138 decomposition adds algorithmic complexity to the simulation code and
is not very efficient for small $N$ since the overall communication
2140 scales as the surface to volume ratio $(N/P)^{2/3}$ in three
2141 dimensions.
2142
2143 The parallelization method used in {\sc oopse} is the force
2144 decomposition method. Force decomposition assigns particles to
2145 processors based on a block decomposition of the force
2146 matrix. Processors are split into an optimally square grid forming row
2147 and column processor groups. Forces are calculated on particles in a
2148 given row by particles located in that processors column
2149 assignment. Force decomposition is less complex to implement than the
2150 spatial method but still scales computationally as $O(N/P)$ and scales
2151 as $O(N/\sqrt{P})$ in communication cost. Plimpton has also found that
2152 force decompositions scale more favorably than spatial decompositions
2153 for systems up to 10,000 atoms and favorably compete with spatial
2154 methods up to 100,000 atoms.\cite{plimpton95}
2155
2156 \subsection{\label{oopseSec:memAlloc}Memory Issues in Trajectory Analysis}
2157
2158 For large simulations, the trajectory files can sometimes reach sizes
2159 in excess of several gigabytes. In order to effectively analyze that
2160 amount of data, two memory management schemes have been devised for
2161 \texttt{staticProps} and for \texttt{dynamicProps}. The first scheme,
2162 developed for \texttt{staticProps}, is the simplest. As each frame's
2163 statistics are calculated independent of each other, memory is
2164 allocated for each frame, then freed once correlation calculations are
2165 complete for the snapshot. To prevent multiple passes through a
2166 potentially large file, \texttt{staticProps} is capable of calculating
2167 all requested correlations per frame with only a single pair loop in
2168 each frame and a single read of the file.
2169
The second, more advanced memory scheme is used by
\texttt{dynamicProps}. Here, the program must have multiple frames in
memory to calculate time dependent correlations. To prevent the
program from running out of memory on large trajectories, the user is
able to specify that the trajectory be read in blocks. The number of
frames in each block is specified by the user, and upon reading a
block of the trajectory, \texttt{dynamicProps} will calculate all of
the time correlation frame pairs within the block. After the in-block
correlations are complete, a second block of the trajectory is read,
and the cross correlations are calculated between the two blocks.
This second block is then freed, its index is incremented, and the
process is repeated until the end of the trajectory is reached. Once
the end is reached, the first block is freed and its index
incremented, the internal time correlations of the new origin block
are calculated, and the procedure with the second block is repeated,
until all frame pairs have been correlated in time. This process is
illustrated in Fig.~\ref{oopseFig:dynamicPropsMemory}.
2188
2189 \begin{figure}
2190 \centering
2191 \includegraphics[width=\linewidth]{dynamicPropsMem.eps}
2192 \caption[A representation of the block correlations in \texttt{dynamicProps}]{This diagram illustrates the memory management used by \texttt{dynamicProps}, which follows the scheme: $\sum^{N_{\text{memory blocks}}}_{i=1}[ \operatorname{self}(i) + \sum^{N_{\text{memory blocks}}}_{j>i} \operatorname{cross}(i,j)]$. The shaded region represents the self correlation of the memory block, and the open blocks are read one at a time and the cross correlations between blocks are calculated.}
2193 \label{oopseFig:dynamicPropsMemory}
2194 \end{figure}
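
The overall loop structure can be sketched as follows; the functions
named here are hypothetical stand-ins for the block I/O and
correlation routines described above.

\begin{lstlisting}
/* Hypothetical helpers, assumed to be defined elsewhere. */
void loadBlock(int b);
void freeBlock(int b);
void selfCorrelate(int b);
void crossCorrelate(int a, int b);

/* Each block is correlated with itself, then with every later block,
   so that all frame pairs are eventually visited. */
void correlateTrajectoryInBlocks(int nBlocks) {
  int i, j;
  for (i = 0; i < nBlocks; i++) {
    loadBlock(i);             /* read origin block i                 */
    selfCorrelate(i);         /* correlations within block i         */
    for (j = i + 1; j < nBlocks; j++) {
      loadBlock(j);           /* read a later block                  */
      crossCorrelate(i, j);   /* correlations between blocks i and j */
      freeBlock(j);
    }
    freeBlock(i);
  }
}
\end{lstlisting}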
2195
2196 \section{\label{oopseSec:conclusion}Conclusion}
2197
2198 We have presented the design and implementation of our open source
2199 simulation package {\sc oopse}. The package offers novel capabilities
2200 to the field of Molecular Dynamics simulation packages in the form of
dipolar force fields and symplectic integration of rigid body
dynamics. It is capable of scaling across multiple processors through
the use of force-based decomposition with MPI. It also implements
several advanced integrators, allowing the end user control over
2205 temperature and pressure. In addition, it is capable of integrating
2206 constrained dynamics through both the {\sc rattle} algorithm and the
2207 z-constraint method.
2208
These features are all brought together in a single open-source
program, allowing researchers not only to benefit from {\sc oopse},
but also to contribute to its development. Documentation and source
code for {\sc oopse} can be downloaded from
\texttt{http://www.openscience.org/oopse/}.
2214