1 \chapter{\label{chapt:oopse}OOPSE: AN OPEN SOURCE OBJECT-ORIENTED PARALLEL SIMULATION ENGINE FOR MOLECULAR DYNAMICS}
2
3
4
5 %% \begin{abstract}
6 %% We detail the capabilities of a new open-source parallel simulation
7 %% package ({\sc oopse}) that can perform molecular dynamics simulations
8 %% on atom types that are missing from other popular packages. In
9 %% particular, {\sc oopse} is capable of performing orientational
10 %% dynamics on dipolar systems, and it can handle simulations of metallic
11 %% systems using the embedded atom method ({\sc eam}).
12 %% \end{abstract}
13
14 \lstset{language=C,frame=TB,basicstyle=\small,basicstyle=\ttfamily, %
15 xleftmargin=0.5in, xrightmargin=0.5in,captionpos=b, %
16 abovecaptionskip=0.5cm, belowcaptionskip=0.5cm}
17
18 \section{\label{oopseSec:foreword}Foreword}
19
20 In this chapter, I present and detail the capabilities of the open
21 source simulation program {\sc oopse}. It is important to note that a
22 simulation program of this size and scope would not have been possible
23 without the collaborative efforts of my colleagues: Charles
24 F.~Vardeman II, Teng Lin, Christopher J.~Fennell and J.~Daniel
25 Gezelter. Although my contributions to {\sc oopse} are major,
26 considering my work apart from that of the others would not give a
27 complete description of the program's capabilities. As such, all
28 contributions to {\sc oopse} to date are presented in this chapter.
29
30 Charles Vardeman is responsible for the parallelization of the long
31 range forces in {\sc oopse} (Sec.~\ref{oopseSec:parallelization}) as
32 well as the inclusion of the embedded-atom potential for transition
33 metals (Sec.~\ref{oopseSec:eam}). Teng Lin's contributions include
34 refinement of the periodic boundary conditions
35 (Sec.~\ref{oopseSec:pbc}), the z-constraint method
36 (Sec.~\ref{oopseSec:zcons}), refinement of the property analysis
37 programs (Sec.~\ref{oopseSec:props}), and development in the extended
38 system integrators (Sec.~\ref{oopseSec:noseHooverThermo}). Christopher
39 Fennell worked on the symplectic integrator
40 (Sec.~\ref{oopseSec:integrate}) and the refinement of the {\sc ssd}
41 water model (Sec.~\ref{oopseSec:SSD}). Daniel Gezelter lent his
42 talents in the development of the extended system integrators
43 (Sec.~\ref{oopseSec:noseHooverThermo}) as well as giving general
44 direction and oversight to the entire project. My responsibilities
45 covered the creation and specification of {\sc bass}
46 (Sec.~\ref{oopseSec:IOfiles}), the original development of the single
47 processor version of {\sc oopse}, contributions to the extended state
48 integrators (Sec.~\ref{oopseSec:noseHooverThermo}), the implementation
49 of the Lennard-Jones (Sec.~\ref{sec:LJPot}) and {\sc duff}
50 (Sec.~\ref{oopseSec:DUFF}) force fields, and initial implementation of
51 the property analysis (Sec.~\ref{oopseSec:props}) and system
52 initialization (Sec.~\ref{oopseSec:initCoords}) utility programs. {\sc
53 oopse}, like many other Molecular Dynamics programs, is a work in
54 progress, and will continue to be so for many graduate student
55 lifetimes.
56
57 \section{\label{sec:intro}Introduction}
58
59 When choosing to simulate a chemical system with molecular dynamics,
60 there are a variety of options available. For simple systems, one
61 might consider writing one's own simulation code. However, as systems
62 grow larger and more complex, building and maintaining code for the
63 simulations becomes a time consuming task. In such cases it is usually
64 more convenient for a researcher to turn to pre-existing simulation
65 packages. These packages, such as {\sc amber}\cite{pearlman:1995} and
66 {\sc charmm}\cite{Brooks83}, provide powerful tools for researchers to
67 conduct simulations of their systems without spending their time
68 developing a code base to conduct their research. This frees them
69 to explore experimental analogues to their models.
70
71 Despite their utility, problems with these packages arise when
72 researchers try to develop techniques or energetic models that the
73 code was not originally designed to simulate. Examples of techniques
74 and energetics not commonly implemented include dipole-dipole
75 interactions, rigid body dynamics, and metallic potentials. When faced
76 with these obstacles, a researcher must either develop their own code
77 or license and extend one of the commercial packages. What we have
78 elected to do is develop a body of simulation code capable of
79 implementing the types of models upon which our research is based.
80
81 In developing {\sc oopse}, we have adhered to the precepts of Open
82 Source development, and are releasing our source code with a
83 permissive license. It is our intent that by doing so, other
84 researchers might benefit from our work, and add their own
85 contributions to the package. The license under which {\sc oopse} is
86 distributed allows any researcher to download and modify the source
87 code for their own use. In this way further development of {\sc oopse}
88 is not limited to only the models of interest to ourselves, but also
89 those of the community of scientists who contribute back to the
90 project.
91
92 We have structured this chapter to first discuss the empirical energy
93 functions that {\sc oopse } implements in
94 Sec.~\ref{oopseSec:empiricalEnergy}. Following that is a discussion of
95 the various input and output files associated with the package
96 (Sec.~\ref{oopseSec:IOfiles}). Sec.~\ref{oopseSec:mechanics}
97 elucidates the various Molecular Dynamics algorithms {\sc oopse}
98 implements in the integration of the Newtonian equations of
99 motion. Basic analysis of the trajectories obtained from the
100 simulation is discussed in Sec.~\ref{oopseSec:props}. Program design
101 considerations are presented in Sec.~\ref{oopseSec:design}. Lastly,
102 Sec.~\ref{oopseSec:conclusion} concludes the chapter.
103
104 \section{\label{oopseSec:empiricalEnergy}The Empirical Energy Functions}
105
106 \subsection{\label{oopseSec:atomsMolecules}Atoms, Molecules and Rigid Bodies}
107
108 The basic unit of an {\sc oopse} simulation is the atom. The
109 parameters describing the atom are generalized to make the atom as
110 flexible a representation as possible. They may represent specific
111 atoms of an element, or be used for collections of atoms such as
112 methyl and carbonyl groups. The atoms are also capable of having
113 directional components associated with them (\emph{e.g.}~permanent
114 dipoles). Charges, permanent dipoles, and Lennard-Jones parameters for
115 a given atom type are set in the force field parameter files.
116
117 \begin{lstlisting}[float,caption={[Specifier for molecules and atoms] A sample specification of an Ar molecule},label=sch:AtmMole]
118 molecule{
119 name = "Ar";
120 nAtoms = 1;
121 atom[0]{
122 type="Ar";
123 position( 0.0, 0.0, 0.0 );
124 }
125 }
126 \end{lstlisting}
127
128
129 Atoms can be collected into secondary structures such as rigid bodies
130 or molecules. The molecule is a way for {\sc oopse} to keep track of
131 the atoms in a simulation in a logical manner. Molecular units store the
132 identities of all the atoms and rigid bodies associated with
133 them, and are responsible for the evaluation of their own
134 internal interactions (\emph{i.e.}~bonds, bends, and torsions). Scheme
135 \ref{sch:AtmMole} shows how one creates a molecule in a ``model'' or
136 \texttt{.mdl} file. The positions of the atoms given in the
137 declaration are relative to the origin of the molecule, and are used
138 when creating a system containing the molecule.
139
140 As stated previously, one of the features that sets {\sc oopse} apart
141 from most of the current molecular simulation packages is the ability
142 to handle rigid body dynamics. Rigid bodies are non-spherical
143 particles or collections of particles that have a constant internal
144 potential and move collectively.\cite{Goldstein01} They are not
145 included in most simulation packages because of the algorithmic
146 complexity involved in propagating orientational degrees of
147 freedom. Until recently, integrators which propagate orientational
148 motion have conserved energy much more poorly than those available
149 for translational motion.
150
151 Moving a rigid body involves determination of both the force and
152 torque applied by the surroundings, which directly affect the
153 translational and rotational motion in turn. In order to accumulate
154 the total force on a rigid body, the external forces and torques must
155 first be calculated for all the internal particles. The total force on
156 the rigid body is simply the sum of these external forces.
157 Accumulation of the total torque on the rigid body is more complex
158 than that of the force because the torque must be taken about the center
159 of mass of the rigid body. The torque on rigid body $i$ is
160 \begin{equation}
161 \boldsymbol{\tau}_i=
162 \sum_{a}\biggl[(\mathbf{r}_{ia}-\mathbf{r}_i)\times \mathbf{f}_{ia}
163 + \boldsymbol{\tau}_{ia}\biggr],
164 \label{eq:torqueAccumulate}
165 \end{equation}
166 where $\boldsymbol{\tau}_i$ and $\mathbf{r}_i$ are the torque on and
167 position of the center of mass respectively, while $\mathbf{f}_{ia}$,
168 $\mathbf{r}_{ia}$, and $\boldsymbol{\tau}_{ia}$ are the force on,
169 position of, and torque on the component particles of the rigid body.
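
As a concrete illustration of this accumulation, a sketch in C follows (the data structures and function name are hypothetical stand-ins rather than the actual {\sc oopse} source; the accumulation is written in the space frame, and the resulting total torque is ultimately needed in the body-fixed frame, as discussed below).
\begin{lstlisting}
/* Hypothetical stand-ins for per-particle data; the actual OOPSE
 * structures differ.  Each member atom carries the force and torque
 * already accumulated on it by the pairwise force loop. */
typedef struct { double pos[3], frc[3], trq[3]; } Atom;
typedef struct { double pos[3], frc[3], trq[3];
                 Atom *members; int nMembers; } RigidBody;

/* The total force on the body is the sum of the member forces; the
 * torque is accumulated about the center of mass:
 *   tau_i = sum_a [ (r_ia - r_i) x f_ia + tau_ia ]                  */
void accumulateForceAndTorque(RigidBody *rb) {
    for (int k = 0; k < 3; k++) { rb->frc[k] = 0.0; rb->trq[k] = 0.0; }

    for (int a = 0; a < rb->nMembers; a++) {
        const Atom *at = &rb->members[a];
        double rel[3];                        /* r_ia - r_i          */
        for (int k = 0; k < 3; k++) {
            rel[k]      = at->pos[k] - rb->pos[k];
            rb->frc[k] += at->frc[k];         /* total force         */
            rb->trq[k] += at->trq[k];         /* member torques      */
        }
        /* cross product (r_ia - r_i) x f_ia added to the torque     */
        rb->trq[0] += rel[1]*at->frc[2] - rel[2]*at->frc[1];
        rb->trq[1] += rel[2]*at->frc[0] - rel[0]*at->frc[2];
        rb->trq[2] += rel[0]*at->frc[1] - rel[1]*at->frc[0];
    }
}
\end{lstlisting}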
170
171 The summation of the total torque is done in the body-fixed frame of
172 each rigid body. In order to move between the space fixed and body
173 fixed coordinate axes, parameters describing the orientation must be
174 maintained for each rigid body. At a minimum, the rotation matrix
175 ($\mathsf{A}$) can be described by the three Euler angles ($\phi,
176 \theta,$ and $\psi$), where the elements of $\mathsf{A}$ are composed of
177 trigonometric operations involving $\phi, \theta,$ and
178 $\psi$.\cite{Goldstein01} In order to avoid numerical instabilities
179 inherent in using the Euler angles, the four parameter ``quaternion''
180 scheme is often used. The elements of $\mathsf{A}$ can be expressed as
181 arithmetic operations involving the four quaternions ($q_0, q_1, q_2,$
182 and $q_3$).\cite{allen87:csl} Use of quaternions also leads to
183 performance enhancements, particularly for very small
184 systems.\cite{Evans77}
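
For reference, one common convention (e.g., that of Ref.~\cite{allen87:csl}; other references use the transpose) expresses the elements of $\mathsf{A}$ in terms of a normalized quaternion, $q_0^2 + q_1^2 + q_2^2 + q_3^2 = 1$, as
\begin{equation}
\mathsf{A} = \left(
\begin{array}{ccc}
q_0^2 + q_1^2 - q_2^2 - q_3^2 & 2(q_1 q_2 + q_0 q_3) & 2(q_1 q_3 - q_0 q_2) \\
2(q_1 q_2 - q_0 q_3) & q_0^2 - q_1^2 + q_2^2 - q_3^2 & 2(q_2 q_3 + q_0 q_1) \\
2(q_1 q_3 + q_0 q_2) & 2(q_2 q_3 - q_0 q_1) & q_0^2 - q_1^2 - q_2^2 + q_3^2
\end{array}
\right).
\end{equation}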
185
186 {\sc oopse} utilizes a relatively new scheme that propagates the
187 entire nine parameter rotation matrix. Further discussion
188 on this choice can be found in Sec.~\ref{oopseSec:integrate}. An example
189 definition of a rigid body can be seen in Scheme
190 \ref{sch:rigidBody}. The positions in the atom definitions are the
191 placements of the atoms relative to the origin of the rigid body,
192 which itself has a position relative to the origin of the molecule.
193
194 \begin{lstlisting}[float,caption={[Defining rigid bodies]A sample definition of a rigid body},label={sch:rigidBody}]
195 molecule{
196 name = "TIP3P";
197 nAtoms = 3;
198 atom[0]{
199 type = "O_TIP3P";
200 position( 0.0, 0.0, -0.06556 );
201 }
202 atom[1]{
203 type = "H_TIP3P";
204 position( 0.0, 0.75695, 0.52032 );
205 }
206 atom[2]{
207 type = "H_TIP3P";
208 position( 0.0, -0.75695, 0.52032 );
209 }
210
211 nRigidBodies = 1;
212 rigidBody[0]{
213 nMembers = 3;
214 members(0, 1, 2);
215 }
216 }
217 \end{lstlisting}
218
219 \subsection{\label{sec:LJPot}The Lennard Jones Force Field}
220
221 The most basic force field implemented in {\sc oopse} is the
222 Lennard-Jones force field, which mimics the van der Waals interaction at
223 long distances, and uses an empirical repulsion at short
224 distances. The Lennard-Jones potential is given by:
225 \begin{equation}
226 V_{\text{LJ}}(r_{ij}) =
227 4\epsilon_{ij} \biggl[
228 \biggl(\frac{\sigma_{ij}}{r_{ij}}\biggr)^{12}
229 - \biggl(\frac{\sigma_{ij}}{r_{ij}}\biggr)^{6}
230 \biggr],
231 \label{eq:lennardJonesPot}
232 \end{equation}
233 where $r_{ij}$ is the distance between particles $i$ and $j$,
234 $\sigma_{ij}$ scales the length of the interaction, and
235 $\epsilon_{ij}$ scales the well depth of the potential. Scheme
236 \ref{sch:LJFF} gives an example \texttt{.bass} file that
237 sets up a system of 108 Ar particles to be simulated using the
238 Lennard-Jones force field.
239
240 \begin{lstlisting}[float,caption={[Invocation of the Lennard-Jones force field] A sample system using the Lennard-Jones force field.},label={sch:LJFF}]
241
242 #include "argon.mdl"
243
244 nComponents = 1;
245 component{
246 type = "Ar";
247 nMol = 108;
248 }
249
250 initialConfig = "./argon.init";
251
252 forceField = "LJ";
253 \end{lstlisting}
254
255 Because this potential is calculated between all pairs, the force
256 evaluation can become computationally expensive for large systems. To
257 keep the pair evaluations to a manageable number, {\sc oopse} employs
258 a cut-off radius.\cite{allen87:csl} The cutoff radius can either be
259 specified in the \texttt{.bass} file, or left as its default value of
260 $2.5\sigma_{ii}$, where $\sigma_{ii}$ is the largest Lennard-Jones
261 length parameter present in the simulation. Truncating the calculation
262 at $r_{\text{cut}}$ introduces a discontinuity into the potential
263 energy and the force. To offset this discontinuity in the potential,
264 the energy value at $r_{\text{cut}}$ is subtracted from the
265 potential. This shifts the potential to zero at the
266 cut-off radius, and preserves conservation of energy when integrating
267 the equations of motion. A discontinuity still remains in the derivative (the forces); however, this does not significantly affect the dynamics.
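
A sketch of the resulting shifted-potential evaluation is given below (an illustration with hypothetical names, not the {\sc oopse} force loop itself).
\begin{lstlisting}
#include <math.h>

/* Lennard-Jones pair energy, shifted so that V(rcut) = 0.
 * sigma, epsilon: pair parameters; r: pair distance; rcut: cutoff. */
double lj_shifted(double r, double rcut, double sigma, double epsilon) {
    if (r > rcut) return 0.0;

    double sr6    = pow(sigma / r,    6.0);
    double sr6cut = pow(sigma / rcut, 6.0);

    double v    = 4.0 * epsilon * (sr6    * sr6    - sr6   );
    double vcut = 4.0 * epsilon * (sr6cut * sr6cut - sr6cut);

    return v - vcut;   /* subtract the energy value at the cutoff */
}
\end{lstlisting}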
268
269 Interactions between dissimilar particles require the generation of
270 cross term parameters for $\sigma$ and $\epsilon$. These are
271 calculated through the Lorentz-Berthelot mixing
272 rules:\cite{allen87:csl}
273 \begin{equation}
274 \sigma_{ij} = \frac{1}{2}[\sigma_{ii} + \sigma_{jj}],
275 \label{eq:sigmaMix}
276 \end{equation}
277 and
278 \begin{equation}
279 \epsilon_{ij} = \sqrt{\epsilon_{ii} \epsilon_{jj}}.
280 \label{eq:epsilonMix}
281 \end{equation}
282
283 \subsection{\label{oopseSec:DUFF}Dipolar Unified-Atom Force Field}
284
285 The dipolar unified-atom force field ({\sc duff}) was developed to
286 simulate lipid bilayers. The simulations require a model capable of
287 forming bilayers, while still being sufficiently computationally
288 efficient to allow large systems ($\sim$100's of phospholipids,
289 $\sim$1000's of waters) to be simulated for long times
290 ($\sim$10's of nanoseconds).
291
292 With this goal in mind, {\sc duff} has no point
293 charges. Charge-neutral distributions were replaced with dipoles,
294 while most atoms and groups of atoms were reduced to Lennard-Jones
295 interaction sites. This simplification changes the distance dependence
296 of the long range interactions from $\frac{1}{r}$ to $\frac{1}{r^3}$, and allows
297 us to avoid the computationally expensive Ewald sum. Instead, we can
298 use neighbor lists and cutoff radii for the dipolar interactions, or
299 include a reaction field to mimic larger range interactions.
300
301 As an example, lipid head-groups in {\sc duff} are represented as
302 point dipole interaction sites. By placing a dipole at the head
303 group's center of mass, our model mimics the charge separation found
304 in common phospholipid head groups such as
305 phosphatidylcholine.\cite{Cevc87} Additionally, a large Lennard-Jones
306 site is located at the pseudoatom's center of mass. The model is
307 illustrated by the red atom in Fig.~\ref{oopseFig:lipidModel}. The
308 water model we use to complement the dipoles of the lipids is our
309 reparameterization of the soft sticky dipole (SSD) model of Ichiye
310 \emph{et al.}\cite{liu96:new_model}
311
312 \begin{figure}
313 \centering
314 \includegraphics[width=\linewidth]{twoChainFig.eps}
315 \caption[A representation of a lipid model in {\sc duff}]{A representation of the lipid model. $\phi$ is the torsion angle, $\theta$ %
316 is the bend angle, and $\mu$ is the dipole moment of the head group.}
317 \label{oopseFig:lipidModel}
318 \end{figure}
319
320 We have used a set of scalable parameters to model the alkyl groups
321 with Lennard-Jones sites. For this, we have borrowed parameters from
322 the TraPPE force field of Siepmann
323 \emph{et al}.\cite{Siepmann1998} TraPPE is a unified-atom
324 representation of n-alkanes, which is parametrized against phase
325 equilibria using Gibbs ensemble Monte Carlo simulation
326 techniques.\cite{Siepmann1998} One of the advantages of TraPPE is that
327 it generalizes the types of atoms in an alkyl chain to keep the number
328 of pseudoatoms to a minimum; the parameters for a unified atom such as
329 $\text{CH}_2$ do not change depending on what species are bonded to
330 it.
331
332 TraPPE also constrains all bonds to be of fixed length. Typically,
333 bond vibrations are the fastest motions in a molecular dynamics
334 simulation. Small time steps between force evaluations must be used to
335 ensure adequate energy conservation in the bond degrees of freedom. By
336 constraining the bond lengths, larger time steps may be used when
337 integrating the equations of motion. A simulation using {\sc duff} is
338 illustrated in Scheme \ref{sch:DUFF}.
339
340 \begin{lstlisting}[float,caption={[Invocation of {\sc duff}]A portion of a \texttt{.bass} file showing a simulation utilizing {\sc duff}},label={sch:DUFF}]
341
342 #include "water.mdl"
343 #include "lipid.mdl"
344
345 nComponents = 2;
346 component{
347 type = "simpleLipid_16";
348 nMol = 60;
349 }
350
351 component{
352 type = "SSD_water";
353 nMol = 1936;
354 }
355
356 initialConfig = "bilayer.init";
357
358 forceField = "DUFF";
359
360 \end{lstlisting}
361
362 \subsection{\label{oopseSec:energyFunctions}{\sc duff} Energy Functions}
363
364 The total potential energy function in {\sc duff} is
365 \begin{equation}
366 V = \sum^{N}_{I=1} V^{I}_{\text{Internal}}
367 + \sum^{N-1}_{I=1} \sum_{J>I} V^{IJ}_{\text{Cross}},
368 \label{eq:totalPotential}
369 \end{equation}
370 where $V^{I}_{\text{Internal}}$ is the internal potential of molecule $I$:
371 \begin{equation}
372 V^{I}_{\text{Internal}} =
373 \sum_{\theta_{ijk} \in I} V_{\text{bend}}(\theta_{ijk})
374 + \sum_{\phi_{ijkl} \in I} V_{\text{torsion}}(\phi_{ijkl})
375 + \sum_{i \in I} \sum_{(j>i+4) \in I}
376 \biggl[ V_{\text{LJ}}(r_{ij}) + V_{\text{dipole}}
377 (\mathbf{r}_{ij},\boldsymbol{\Omega}_{i},\boldsymbol{\Omega}_{j})
378 \biggr].
379 \label{eq:internalPotential}
380 \end{equation}
381 Here $V_{\text{bend}}$ is the bend potential for all 1, 3 bonded pairs
382 within the molecule $I$, and $V_{\text{torsion}}$ is the torsion potential
383 for all 1, 4 bonded pairs. The pairwise portions of the internal
384 potential are excluded for atom pairs that are involved in the same bond, bend, or torsion. All other atom pairs within the molecule are subject to the LJ pair potential.
385
386
387 The bend potential of a molecule is represented by the following function:
388 \begin{equation}
389 V_{\text{bend}}(\theta_{ijk}) = k_{\theta}( \theta_{ijk} - \theta_0 )^2, \label{eq:bendPot}
390 \end{equation}
391 where $\theta_{ijk}$ is the angle defined by atoms $i$, $j$, and $k$
392 (see Fig.~\ref{oopseFig:lipidModel}), $\theta_0$ is the equilibrium
393 bond angle, and $k_{\theta}$ is the force constant which determines the
394 strength of the harmonic bend. The parameters for $k_{\theta}$ and
395 $\theta_0$ are borrowed from those in TraPPE.\cite{Siepmann1998}
396
397 The torsion potential and parameters are also borrowed from TraPPE. It is
398 of the form:
399 \begin{equation}
400 V_{\text{torsion}}(\phi) = c_1[1 + \cos \phi]
401 + c_2[1 + \cos(2\phi)]
402 + c_3[1 + \cos(3\phi)],
403 \label{eq:origTorsionPot}
404 \end{equation}
405 where:
406 \begin{equation}
407 \cos\phi = (\hat{\mathbf{r}}_{ij} \times \hat{\mathbf{r}}_{jk}) \cdot
408 (\hat{\mathbf{r}}_{jk} \times \hat{\mathbf{r}}_{kl}).
409 \label{eq:torsPhi}
410 \end{equation}
411 Here, $\hat{\mathbf{r}}_{\alpha\beta}$ are the set of unit bond
412 vectors between atoms $i$, $j$, $k$, and $l$. For computational
413 efficiency, the torsion potential has been recast after the method of
414 {\sc charmm},\cite{Brooks83} in which the angle series is converted to
415 a power series of the form:
416 \begin{equation}
417 V_{\text{torsion}}(\phi) =
418 k_3 \cos^3 \phi + k_2 \cos^2 \phi + k_1 \cos \phi + k_0,
419 \label{eq:torsionPot}
420 \end{equation}
421 where:
422 \begin{align*}
423 k_0 &= c_1 + c_3, \\
424 k_1 &= c_1 - 3c_3, \\
425 k_2 &= 2 c_2, \\
426 k_3 &= 4c_3.
427 \end{align*}
428 By recasting the potential as a power series, repeated trigonometric
429 evaluations are avoided during the calculation of the potential energy.
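
The coefficient conversion and the power-series evaluation translate directly into a short sketch (illustrative names only; Horner's rule avoids computing repeated powers of $\cos \phi$).
\begin{lstlisting}
/* Convert TraPPE-style angle-series coefficients (c1, c2, c3) into
 * the power-series coefficients k0..k3 described in the text. */
void torsion_c_to_k(const double c[3], double k[4]) {
    k[0] = c[0] + c[2];        /* k0 = c1 + c3  */
    k[1] = c[0] - 3.0 * c[2];  /* k1 = c1 - 3c3 */
    k[2] = 2.0 * c[1];         /* k2 = 2c2      */
    k[3] = 4.0 * c[2];         /* k3 = 4c3      */
}

/* Evaluate V(phi) = k3 cos^3(phi) + k2 cos^2(phi) + k1 cos(phi) + k0
 * from cos(phi), using Horner's rule. */
double torsion_energy(double cosPhi, const double k[4]) {
    return ((k[3] * cosPhi + k[2]) * cosPhi + k[1]) * cosPhi + k[0];
}
\end{lstlisting}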
430
431
432 The cross potential between molecules $I$ and $J$, $V^{IJ}_{\text{Cross}}$, is
433 as follows:
434 \begin{equation}
435 V^{IJ}_{\text{Cross}} =
436 \sum_{i \in I} \sum_{j \in J}
437 \biggl[ V_{\text{LJ}}(r_{ij}) + V_{\text{dipole}}
438 (\mathbf{r}_{ij},\boldsymbol{\Omega}_{i},\boldsymbol{\Omega}_{j})
439 + V_{\text{sticky}}
440 (\mathbf{r}_{ij},\boldsymbol{\Omega}_{i},\boldsymbol{\Omega}_{j})
441 \biggr],
442 \label{eq:crossPotentail}
443 \end{equation}
444 where $V_{\text{LJ}}$ is the Lennard-Jones potential,
445 $V_{\text{dipole}}$ is the dipole-dipole potential, and
446 $V_{\text{sticky}}$ is the sticky potential defined by the SSD model
447 (Sec.~\ref{oopseSec:SSD}). Note that not all atom types include all
448 interactions.
449
450 The dipole-dipole potential has the following form:
451 \begin{equation}
452 V_{\text{dipole}}(\mathbf{r}_{ij},\boldsymbol{\Omega}_{i},
453 \boldsymbol{\Omega}_{j}) = \frac{|\mu_i||\mu_j|}{4\pi\epsilon_{0}r_{ij}^{3}} \biggl[
454 \boldsymbol{\hat{u}}_{i} \cdot \boldsymbol{\hat{u}}_{j}
455 -
456 3(\boldsymbol{\hat{u}}_i \cdot \hat{\mathbf{r}}_{ij}) %
457 (\boldsymbol{\hat{u}}_j \cdot \hat{\mathbf{r}}_{ij}) \biggr].
458 \label{eq:dipolePot}
459 \end{equation}
460 Here $\mathbf{r}_{ij}$ is the vector starting at atom $i$ pointing
461 towards $j$, and $\boldsymbol{\Omega}_i$ and $\boldsymbol{\Omega}_j$
462 are the orientational degrees of freedom for atoms $i$ and $j$
463 respectively. $|\mu_i|$ is the magnitude of the dipole moment of atom
464 $i$, $\boldsymbol{\hat{u}}_i$ is the standard unit orientation vector
465 of $\boldsymbol{\Omega}_i$, and $\boldsymbol{\hat{r}}_{ij}$ is the
466 unit vector pointing along $\mathbf{r}_{ij}$
467 ($\boldsymbol{\hat{r}}_{ij}=\mathbf{r}_{ij}/|\mathbf{r}_{ij}|$).
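
A direct transcription of the dipole-dipole energy might look like the following sketch (illustrative names only; the electrostatic prefactor $1/4\pi\epsilon_{0}$ is passed in so that the choice of units is left to the caller).
\begin{lstlisting}
#include <math.h>

/* Dipole-dipole energy between atoms i and j.
 * rij:      vector from atom i to atom j
 * ui, uj:   unit orientation vectors of the two dipoles
 * mui, muj: dipole moment magnitudes
 * pre:      electrostatic prefactor (1/(4 pi eps0) in working units) */
double dipole_dipole(const double rij[3], const double ui[3],
                     const double uj[3], double mui, double muj,
                     double pre) {
    double r2 = rij[0]*rij[0] + rij[1]*rij[1] + rij[2]*rij[2];
    double r  = sqrt(r2);
    double rhat[3] = { rij[0]/r, rij[1]/r, rij[2]/r };

    double uidotuj = ui[0]*uj[0]   + ui[1]*uj[1]   + ui[2]*uj[2];
    double uidotr  = ui[0]*rhat[0] + ui[1]*rhat[1] + ui[2]*rhat[2];
    double ujdotr  = uj[0]*rhat[0] + uj[1]*rhat[1] + uj[2]*rhat[2];

    return pre * mui * muj / (r2 * r)
           * (uidotuj - 3.0 * uidotr * ujdotr);
}
\end{lstlisting}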
468
469 To improve computational efficiency of the dipole-dipole interactions,
470 {\sc oopse} employs an electrostatic cutoff radius. This parameter can
471 be set in the \texttt{.bass} file, and controls the length scale over
472 which dipole interactions are felt. To compensate for the
473 discontinuity in the potential and the forces at the cutoff radius, we
474 have implemented a switching function to smoothly scale the
475 dipole-dipole interaction at the cutoff.
476 \begin{equation}
477 S(r_{ij}) =
478 \begin{cases}
479 1 & \text{if $r_{ij} \le r_t$},\\
480 \frac{(r_{\text{cut}} + 2r_{ij} - 3r_t)(r_{\text{cut}} - r_{ij})^2}
481 {(r_{\text{cut}} - r_t)^3}
482 & \text{if $r_t < r_{ij} \le r_{\text{cut}}$}, \\
483 0 & \text{if $r_{ij} > r_{\text{cut}}$.}
484 \end{cases}
485 \label{eq:dipoleSwitching}
486 \end{equation}
487 Here $S(r_{ij})$ scales the potential at a given $r_{ij}$, and $r_t$
488 is the taper radius, which lies a specified thickness inside the electrostatic
489 cutoff. The switching thickness can be set in the \texttt{.bass} file.
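
The switching function translates into a small helper (an illustrative sketch with a hypothetical name).
\begin{lstlisting}
/* Cubic switching function: 1 inside the taper radius rt, 0 beyond
 * rcut, and a smooth cubic interpolation in between. */
double switching(double r, double rt, double rcut) {
    if (r <= rt)   return 1.0;
    if (r >  rcut) return 0.0;

    double num = (rcut + 2.0 * r - 3.0 * rt) * (rcut - r) * (rcut - r);
    double den = (rcut - rt) * (rcut - rt) * (rcut - rt);
    return num / den;
}
\end{lstlisting}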
490
491 \subsection{\label{oopseSec:SSD}The {\sc duff} Water Models: SSD/E and SSD/RF}
492
493 In the interest of computational efficiency, the default solvent used
494 by {\sc oopse} is the extended Soft Sticky Dipole (SSD/E) water
495 model.\cite{Gezelter04} The original SSD was developed by Ichiye
496 \emph{et al.}\cite{liu96:new_model} as a modified form of the hard-sphere
497 water model proposed by Bratko, Blum, and
498 Luzar.\cite{Bratko85,Bratko95} It consists of a single point dipole
499 with a Lennard-Jones core and a sticky potential that directs the
500 particles to assume the proper hydrogen bond orientation in the first
501 solvation shell. Thus, the interaction between two SSD water molecules
502 \emph{i} and \emph{j} is given by the potential
503 \begin{equation}
504 V_{ij} =
505 V_{ij}^{LJ} (r_{ij})\ + V_{ij}^{dp}
506 (\mathbf{r}_{ij},\boldsymbol{\Omega}_i,\boldsymbol{\Omega}_j)\ +
507 V_{ij}^{sp}
508 (\mathbf{r}_{ij},\boldsymbol{\Omega}_i,\boldsymbol{\Omega}_j),
509 \label{eq:ssdPot}
510 \end{equation}
511 where $\mathbf{r}_{ij}$ is the position vector between molecules
512 \emph{i} and \emph{j} with magnitude equal to the distance $r_{ij}$, and
513 $\boldsymbol{\Omega}_i$ and $\boldsymbol{\Omega}_j$ represent the
514 orientations of the respective molecules. The Lennard-Jones and dipole
515 parts of the potential are given by equations \ref{eq:lennardJonesPot}
516 and \ref{eq:dipolePot} respectively. The sticky part is described by
517 the following,
518 \begin{equation}
519 V_{ij}^{sp}(\mathbf{r}_{ij},\boldsymbol{\Omega}_i,\boldsymbol{\Omega}_j)=
520 \frac{\nu_0}{2}[s(r_{ij})w(\mathbf{r}_{ij},
521 \boldsymbol{\Omega}_i,\boldsymbol{\Omega}_j) +
522 s^\prime(r_{ij})w^\prime(\mathbf{r}_{ij},
523 \boldsymbol{\Omega}_i,\boldsymbol{\Omega}_j)]\ ,
524 \label{eq:stickyPot}
525 \end{equation}
526 where $\nu_0$ is a strength parameter for the sticky potential, and
527 $s$ and $s^\prime$ are cubic switching functions which turn off the
528 sticky interaction beyond the first solvation shell. The $w$ function
529 can be thought of as an attractive potential with tetrahedral
530 geometry:
531 \begin{equation}
532 w({\bf r}_{ij},{\bf \Omega}_i,{\bf \Omega}_j)=
533 \sin\theta_{ij}\sin2\theta_{ij}\cos2\phi_{ij},
534 \label{eq:stickyW}
535 \end{equation}
536 while the $w^\prime$ function counters the normal aligned and
537 anti-aligned structures favored by point dipoles:
538 \begin{equation}
539 w^\prime({\bf r}_{ij},{\bf \Omega}_i,{\bf \Omega}_j)=
540 (\cos\theta_{ij}-0.6)^2(\cos\theta_{ij}+0.8)^2-w^0.
541 \label{eq:stickyWprime}
542 \end{equation}
543 It should be noted that $w$ is proportional to the sum of the $Y_3^2$
544 and $Y_3^{-2}$ spherical harmonics (a linear combination which
545 enhances the tetrahedral geometry for hydrogen bonded structures),
546 while $w^\prime$ is a purely empirical function. A more detailed
547 description of the functional parts and variables in this potential
548 can be found in the original SSD
549 articles.\cite{liu96:new_model,liu96:monte_carlo,chandra99:ssd_md,Ichiye03}
550
551 Since SSD/E is a single-point {\it dipolar} model, the force
552 calculations are simplified significantly relative to the standard
553 {\it charged} multi-point models. In the original Monte Carlo
554 simulations using this model, Ichiye {\it et al.} reported that using
555 SSD decreased computer time by a factor of 6-7 compared to other
556 models.\cite{liu96:new_model} What is most impressive is that these savings
557 did not come at the expense of accurate depiction of the liquid state
558 properties. Indeed, SSD/E maintains reasonable agreement with the Head-Gordon
559 diffraction data for the structural features of liquid
560 water.\cite{hura00,liu96:new_model} Additionally, the dynamical properties
561 exhibited by SSD/E agree with experiment better than those of more
562 computationally expensive models (like TIP3P and
563 SPC/E).\cite{chandra99:ssd_md} The combination of speed and accurate depiction
564 of solvent properties makes SSD/E a very attractive model for
565 large scale biochemical simulations.
566
567 Recent constant pressure simulations revealed issues in the original
568 SSD model that led to lower than expected densities at all target
569 pressures.\cite{Ichiye03,Gezelter04} The default model in {\sc oopse}
570 is therefore SSD/E, a density corrected derivative of SSD that
571 exhibits improved liquid structure and transport behavior. If the use
572 of a reaction field long-range interaction correction is desired, it
573 is recommended that the parameters be modified to those of the SSD/RF
574 model (an SSD variant parameterized for reaction field). Solvent parameters can be easily modified in an accompanying
575 \texttt{.bass} file as illustrated in the scheme below. A table of the
576 parameter values and the drawbacks and benefits of the different
577 density corrected SSD models can be found in
578 reference~\cite{Gezelter04}.
579
580 \begin{lstlisting}[float,caption={[A simulation of {\sc ssd} water]A portion of a \texttt{.bass} file showing a simulation including {\sc ssd} water.},label={sch:ssd}]
581
582 #include "water.mdl"
583
584 nComponents = 1;
585 component{
586 type = "SSD_water";
587 nMol = 864;
588 }
589
590 initialConfig = "liquidWater.init";
591
592 forceField = "DUFF";
593
594 /*
595 * The following two flags set the cutoff
596 * radius for the electrostatic forces
597 * as well as the skin thickness of the switching
598 * function.
599 */
600
601 electrostaticCutoffRadius = 9.2;
602 electrostaticSkinThickness = 1.38;
603
604 \end{lstlisting}
605
606
607 \subsection{\label{oopseSec:eam}Embedded Atom Method}
608
609 There are Molecular Dynamics packages which have the
610 capacity to simulate metallic systems, including some that have
611 parallel computational abilities\cite{plimpton93}. Potentials that
612 describe bonding in transition metal
613 systems\cite{Finnis84,Ercolessi88,Chen90,Qi99,Ercolessi02} have an
614 attractive interaction which models ``embedding''
615 a positively charged metal ion in the electron density due to the
616 free valence ``sea'' of electrons created by the surrounding atoms in
617 the system. A mostly-repulsive pairwise part of the potential
618 describes the interaction of the positively charged metal core ions
619 with one another. One widely adopted potential description, the
620 Embedded Atom Method ({\sc eam}),\cite{Daw84,FBD86,johnson89,Lu97} has
621 been selected for inclusion in {\sc oopse}. A
622 good review of {\sc eam} and other metallic potential formulations was written
623 by Voter.\cite{voter}
624
625 The {\sc eam} potential has the form:
626 \begin{eqnarray}
627 V & = & \sum_{i} F_{i}\left[\rho_{i}\right] + \sum_{i} \sum_{j \neq i}
628 \phi_{ij}({\bf r}_{ij}), \\
629 \rho_{i} & = & \sum_{j \neq i} f_{j}({\bf r}_{ij}),
630 \end{eqnarray}
631 where $F_{i}$ is the embedding function that gives the energy
632 required to embed a positively-charged core ion $i$ into a linear
633 superposition of spherically averaged atomic electron densities given
634 by $\rho_{i}$. $\phi_{ij}$ is a primarily repulsive pairwise
635 interaction between atoms $i$ and $j$. In the original formulation of
636 {\sc eam},\cite{Daw84} $\phi_{ij}$ was an entirely repulsive term;
637 however, later refinements to {\sc eam} have shown that the non-uniqueness
638 between $F$ and $\phi$ allows for more general forms for
639 $\phi$.\cite{Daw89} There is a cutoff distance, $r_{cut}$, which
640 limits the summations in the {\sc eam} equation to the few dozen atoms
641 surrounding atom $i$ for both the density $\rho$ and pairwise $\phi$
642 interactions. Foiles \emph{et al}.~fit {\sc eam} potentials for the fcc
643 metals Cu, Ag, Au, Ni, Pd, Pt and alloys of these metals.\cite{FBD86}
644 These fits are included in {\sc oopse}.
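
The structure of the {\sc eam} energy evaluation can be sketched as a naive double loop (an $\mathcal{O}(N^2)$ illustration only; \texttt{rho\_fn}, \texttt{F\_fn}, and \texttt{phi\_fn} are hypothetical stand-ins for the tabulated functionals, and the sums are limited by the cutoff radius $r_{cut}$ described above).
\begin{lstlisting}
#include <math.h>

/* Schematic EAM energy: the inner loop accumulates the host electron
 * density rho_i at each atom, which is then fed to the embedding
 * functional, while the pairwise repulsion is added along the way. */
double eam_energy(int n, double r[][3], double rcut,
                  double (*rho_fn)(double),    /* f_j(r_ij)    */
                  double (*F_fn)(double),      /* F_i[rho_i]   */
                  double (*phi_fn)(double)) {  /* phi_ij(r_ij) */
    double V = 0.0;
    for (int i = 0; i < n; i++) {
        double rho_i = 0.0;
        for (int j = 0; j < n; j++) {
            if (j == i) continue;
            double dx = r[j][0] - r[i][0];
            double dy = r[j][1] - r[i][1];
            double dz = r[j][2] - r[i][2];
            double rij = sqrt(dx*dx + dy*dy + dz*dz);
            if (rij > rcut) continue;

            rho_i += rho_fn(rij);   /* density contribution from j  */
            /* The double sum over j != i visits each pair twice; a
             * factor of 0.5 is needed if phi is defined per pair.   */
            V += phi_fn(rij);
        }
        V += F_fn(rho_i);           /* embedding energy F_i[rho_i]  */
    }
    return V;
}
\end{lstlisting}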
645
646 \subsection{\label{oopseSec:pbc}Periodic Boundary Conditions}
647
648 \newcommand{\roundme}{\operatorname{round}}
649
650 \textit{Periodic boundary conditions} are widely used to simulate bulk properties with a relatively small number of particles. The
651 simulation box is replicated throughout space to form an infinite
652 lattice. During the simulation, when a particle moves in the primary
653 cell, its images in the other cells move in exactly the same direction with
654 exactly the same orientation. Thus, as a particle leaves the primary
655 cell, one of its images will enter through the opposite face. If the
656 simulation box is large enough to avoid ``feeling'' the symmetries of
657 the periodic lattice, surface effects can be ignored. The available
658 periodic cells in {\sc oopse} are cubic, orthorhombic and parallelepiped. We
659 use a $3 \times 3$ matrix, $\mathsf{H}$, to describe the shape and
660 size of the simulation box. $\mathsf{H}$ is defined:
661 \begin{equation}
662 \mathsf{H} = ( \mathbf{h}_x, \mathbf{h}_y, \mathbf{h}_z ),
663 \end{equation}
664 where $\mathbf{h}_{\alpha}$ is the column vector of the $\alpha$ axis of the
665 box. During the course of the simulation both the size and shape of
666 the box can be changed to allow volume fluctuations when constraining
667 the pressure.
668
669 A real space vector, $\mathbf{r}$, can be transformed into a box space
670 vector, $\mathbf{s}$, and back through the following transformations:
671 \begin{align}
672 \mathbf{s} &= \mathsf{H}^{-1} \mathbf{r}, \\
673 \mathbf{r} &= \mathsf{H} \mathbf{s}.
674 \end{align}
675 The vector $\mathbf{s}$ is now a vector expressed as the number of box
676 lengths in the $\mathbf{h}_x$, $\mathbf{h}_y$, and $\mathbf{h}_z$
677 directions. To find the minimum image of a vector $\mathbf{r}$, we
678 first convert it to its corresponding vector in box space, and then
679 cast each element to lie in the range $[-0.5,0.5]$:
680 \begin{equation}
681 s_{i}^{\prime}=s_{i}-\roundme(s_{i}),
682 \end{equation}
683 where $s_i$ is the $i$th element of $\mathbf{s}$, and
684 $\roundme(s_i)$ is given by
685 \begin{equation}
686 \roundme(x) =
687 \begin{cases}
688 \lfloor x+0.5 \rfloor & \text{if $x \ge 0$,} \\
689 \lceil x-0.5 \rceil & \text{if $x < 0$.}
690 \end{cases}
691 \end{equation}
692 Here $\lfloor x \rfloor$ is the floor operator, and gives the largest
693 integer value that is not greater than $x$, and $\lceil x \rceil$ is
694 the ceiling operator, and gives the smallest integer that is not less
695 than $x$. For example, $\roundme(3.6)=4$, $\roundme(3.1)=3$,
696 $\roundme(-3.6)=-4$, $\roundme(-3.1)=-3$.
697
698 Finally, we obtain the minimum image coordinates $\mathbf{r}^{\prime}$ by
699 transforming back to real space,
700 \begin{equation}
701 \mathbf{r}^{\prime}=\mathsf{H}\mathbf{s}^{\prime}.%
702 \end{equation}
703 In this way, particles are allowed to diffuse freely in $\mathbf{r}$,
704 but their minimum images, $\mathbf{r}^{\prime}$, are used to compute
705 the inter-atomic forces.
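
A minimal sketch of this wrapping procedure is shown below (illustrative names only; the inverse of the box matrix is assumed to be maintained by the caller).
\begin{lstlisting}
#include <math.h>

/* Minimum-image transform for a general box.  H is the box matrix
 * whose columns are h_x, h_y, h_z (H[i][j] is the i-th component of
 * h_j), and Hinv is its inverse. */
void minimum_image(double H[3][3], double Hinv[3][3], double r[3]) {
    double s[3];

    /* s = Hinv . r : convert to box (fractional) coordinates */
    for (int i = 0; i < 3; i++)
        s[i] = Hinv[i][0]*r[0] + Hinv[i][1]*r[1] + Hinv[i][2]*r[2];

    /* wrap each component into [-0.5, 0.5] using the round()
     * operator defined in the text (round half away from zero) */
    for (int i = 0; i < 3; i++) {
        double rd = (s[i] >= 0.0) ? floor(s[i] + 0.5) : ceil(s[i] - 0.5);
        s[i] -= rd;
    }

    /* r' = H . s' : transform back to real space */
    for (int i = 0; i < 3; i++)
        r[i] = H[i][0]*s[0] + H[i][1]*s[1] + H[i][2]*s[2];
}
\end{lstlisting}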
706
707
708 \section{\label{oopseSec:IOfiles}Input and Output Files}
709
710 \subsection{{\sc bass} and Model Files}
711
712 Every {\sc oopse} simulation begins with a Bizarre Atom Simulation
713 Syntax ({\sc bass}) file. {\sc bass} is a script syntax that is parsed
714 by {\sc oopse} at runtime. The {\sc bass} file allows the user to
715 completely describe the system they wish to simulate, as well as tailor
716 {\sc oopse}'s behavior during the simulation. {\sc bass} files are
717 denoted with the extension
718 \texttt{.bass}; an example file is shown in
719 Scheme~\ref{sch:bassExample}.
720
721 \begin{lstlisting}[float,caption={[An example of a complete {\sc bass} file] An example showing a complete {\sc bass} file.},label={sch:bassExample}]
722
723 molecule{
724 name = "Ar";
725 nAtoms = 1;
726 atom[0]{
727 type="Ar";
728 position( 0.0, 0.0, 0.0 );
729 }
730 }
731
732 nComponents = 1;
733 component{
734 type = "Ar";
735 nMol = 108;
736 }
737
738 initialConfig = "./argon.init";
739
740 forceField = "LJ";
741 ensemble = "NVE"; // specify the simulation ensemble
742 dt = 1.0; // the time step for integration
743 runTime = 1e3; // the total simulation run time
744 sampleTime = 100; // trajectory file frequency
745 statusTime = 50; // statistics file frequency
746
747 \end{lstlisting}
748
749 Within the \texttt{.bass} file it is necessary to provide a complete
750 description of the molecule before it is actually placed in the
751 simulation. The {\sc bass} syntax was originally developed with this
752 goal in mind, and allows for the specification of all the atoms in a
753 molecular prototype, as well as any bonds, bends, or torsions. These
754 descriptions can become lengthy for complex molecules, and it would be
755 inconvenient to duplicate them at the beginning of each {\sc
756 bass} script. To address this issue, {\sc bass} allows for the
757 inclusion of model files at the top of a \texttt{.bass} file. These
758 model files, denoted with the \texttt{.mdl} extension, allow the user
759 to describe a molecular prototype once, then simply include it in
760 each simulation containing that molecule. Returning to the example in
761 Scheme~\ref{sch:bassExample}, the \texttt{.mdl} file's contents would
762 be Scheme~\ref{sch:mdlExample}, and the new \texttt{.bass} file would
763 become Scheme~\ref{sch:bassExPrime}.
764
765 \begin{lstlisting}[float,caption={An example \texttt{.mdl} file.},label={sch:mdlExample}]
766
767 molecule{
768 name = "Ar";
769 nAtoms = 1;
770 atom[0]{
771 type="Ar";
772 position( 0.0, 0.0, 0.0 );
773 }
774 }
775
776 \end{lstlisting}
777
778 \begin{lstlisting}[float,caption={Revised {\sc bass} example.},label={sch:bassExPrime}]
779
780 #include "argon.mdl"
781
782 nComponents = 1;
783 component{
784 type = "Ar";
785 nMol = 108;
786 }
787
788 initialConfig = "./argon.init";
789
790 forceField = "LJ";
791 ensemble = "NVE";
792 dt = 1.0;
793 runTime = 1e3;
794 sampleTime = 100;
795 statusTime = 50;
796
797 \end{lstlisting}
798
799 \subsection{\label{oopseSec:coordFiles}Coordinate Files}
800
801 The standard format for storage of a system's coordinates is a modified
802 xyz-file syntax, the exact details of which can be seen in
803 Scheme~\ref{sch:dumpFormat}. As all bonding and molecular information
804 is stored in the \texttt{.bass} and \texttt{.mdl} files, the
805 coordinate files are simply the complete set of coordinates for each
806 atom at a given simulation time. One important note: although the
807 simulation propagates the complete rotation matrix, directional
808 entities are written out using quaternions to save space in the
809 output files.
810
811 \begin{lstlisting}[float,caption={[The format of the coordinate files]The format of the coordinate files. The first line is the number of atoms. The second line begins with the time stamp followed by the three $\mathsf{H}$ column vectors. It is important to note that, for extended system ensembles, additional information pertinent to the integrators may be stored on this line as well. The next lines are the atomic coordinates for all atoms in the system. First is the name, followed by position, velocity, quaternions, and lastly angular velocities.},label=sch:dumpFormat]
812
813 nAtoms
814 time; Hxx Hyx Hzx; Hxy Hyy Hzy; Hxz Hyz Hzz;
815 Name1 x y z vx vy vz q0 q1 q2 q3 jx jy jz
816 Name2 x y z vx vy vz q0 q1 q2 q3 jx jy jz
817 etc...
818
819 \end{lstlisting}
820
821
822 There are three major files used by {\sc oopse} written in the
823 coordinate format: the initialization file
824 (\texttt{.init}), the simulation trajectory file (\texttt{.dump}), and
825 the final coordinates of the simulation. The initialization file is
826 necessary for {\sc oopse} to start the simulation with the proper
827 coordinates, and is generated before the simulation run. The
828 trajectory file is created at the beginning of the simulation, and is
829 used to store snapshots of the simulation at regular intervals. The
830 first frame is a duplication of the
831 \texttt{.init} file, and each subsequent frame is appended to the file
832 at an interval specified in the \texttt{.bass} file with the
833 \texttt{sampleTime} flag. The final coordinate file is the end of run file. The
834 \texttt{.eor} file stores the final configuration of the system for a
835 given simulation. The file is updated at the same time as the
836 \texttt{.dump} file, however, it only contains the most recent
837 frame. In this way, an \texttt{.eor} file may be used as the
838 initialization file to a second simulation in order to continue a
839 simulation or recover one from a processor that has crashed during the
840 course of the run.
841
842 \subsection{\label{oopseSec:initCoords}Generation of Initial Coordinates}
843
844 As was stated in Sec.~\ref{oopseSec:coordFiles}, an initialization
845 file is needed to provide the starting coordinates for a
846 simulation. The {\sc oopse} package provides several system building
847 programs to aid in the creation of the \texttt{.init}
848 file. The programs use {\sc bass}, and will recognize
849 arguments and parameters in the \texttt{.bass} file that would
850 otherwise be ignored by the simulation.
851
852 \subsection{The Statistics File}
853
854 The last output file generated by {\sc oopse} is the statistics
855 file. This file records such statistical quantities as the
856 instantaneous temperature, volume, pressure, etc. It is written out
857 with the frequency specified in the \texttt{.bass} file with the
858 \texttt{statusTime} keyword. The file allows the user to observe the
859 system variables as a function of simulation time while the simulation
860 is in progress. One useful function the statistics file serves is to
861 monitor the conserved quantity of a given simulation ensemble; this
862 allows the user to observe the stability of the integrator. The
863 statistics file is denoted with the \texttt{.stat} file extension.
864
865 \section{\label{oopseSec:mechanics}Mechanics}
866
867 \subsection{\label{oopseSec:integrate}Integrating the Equations of Motion: the
868 DLM method}
869
870 The default method for integrating the equations of motion in {\sc
871 oopse} is a velocity-Verlet version of the symplectic splitting method
872 proposed by Dullweber, Leimkuhler and McLachlan
873 (DLM).\cite{Dullweber1997} When there are no directional atoms or
874 rigid bodies present in the simulation, this integrator becomes the
875 standard velocity-Verlet integrator which is known to sample the
876 microcanonical (NVE) ensemble.\cite{Frenkel1996}
877
878 Previous integration methods for orientational motion have problems
879 that are avoided in the DLM method. Direct propagation of the Euler
880 angles has a known $1/\sin\theta$ divergence in the equations of
881 motion for $\phi$ and $\psi$,\cite{allen87:csl} leading to
882 numerical instabilities any time one of the directional atoms or rigid
883 bodies has an orientation near $\theta=0$ or $\theta=\pi$. More
884 modern quaternion-based integration methods have relatively poor
885 energy conservation. While quaternions work well for orientational
886 motion in other ensembles, the microcanonical ensemble has a
887 constant energy requirement that is quite sensitive to errors in the
888 equations of motion. An earlier implementation of {\sc oopse}
889 utilized quaternions for propagation of rotational motion; however, a
890 detailed investigation showed that they resulted in a steady drift in
891 the total energy, something that has been observed by
892 Laird {\it et al.}\cite{Laird97}
893
894 The key difference in the integration method proposed by Dullweber
895 \emph{et al.} is that the entire $3 \times 3$ rotation matrix is
896 propagated from one time step to the next. In the past, this would not
897 have been feasible, since the rotation matrix for a single body has
898 nine elements compared with the more memory-efficient methods (using
899 three Euler angles or four quaternions). Computer memory has become much
900 less costly in recent years, and this can be translated into
901 substantial benefits in energy conservation.
902
903 The basic equations of motion being integrated are derived from the
904 Hamiltonian for conservative systems containing rigid bodies,
905 \begin{equation}
906 H = \sum_{i} \left( \frac{1}{2} m_i {\bf v}_i^T \cdot {\bf v}_i +
907 \frac{1}{2} {\bf j}_i^T \cdot \overleftrightarrow{\mathsf{I}}_i^{-1} \cdot
908 {\bf j}_i \right) +
909 V\left(\left\{{\bf r}\right\}, \left\{\mathsf{A}\right\}\right),
910 \end{equation}
911 where ${\bf r}_i$ and ${\bf v}_i$ are the Cartesian position vector
912 and velocity of the center of mass of particle $i$, and ${\bf j}_i$,
913 $\overleftrightarrow{\mathsf{I}}_i$ are the body-fixed angular
914 momentum and moment of inertia tensor respectively, and the
915 superscript $T$ denotes the transpose of the vector. $\mathsf{A}_i$
916 is the $3 \times 3$ rotation matrix describing the instantaneous
917 orientation of the particle. $V$ is the potential energy function
918 which may depend on both the positions $\left\{{\bf r}\right\}$ and
919 orientations $\left\{\mathsf{A}\right\}$ of all particles. The
920 equations of motion for the particle centers of mass are derived from
921 Hamilton's equations and are quite simple,
922 \begin{eqnarray}
923 \dot{{\bf r}} & = & {\bf v}, \\
924 \dot{{\bf v}} & = & \frac{{\bf f}}{m},
925 \end{eqnarray}
926 where ${\bf f}$ is the instantaneous force on the center of mass
927 of the particle,
928 \begin{equation}
929 {\bf f} = - \frac{\partial}{\partial
930 {\bf r}} V(\left\{{\bf r}(t)\right\}, \left\{\mathsf{A}(t)\right\}).
931 \end{equation}
932
933 The equations of motion for the orientational degrees of freedom are
934 \begin{eqnarray}
935 \dot{\mathsf{A}} & = & \mathsf{A} \cdot
936 \mbox{ skew}\left(\overleftrightarrow{\mathsf{I}}^{-1} \cdot {\bf j}\right),\\
937 \dot{{\bf j}} & = & {\bf j} \times \left( \overleftrightarrow{\mathsf{I}}^{-1}
938 \cdot {\bf j} \right) - \mbox{ rot}\left(\mathsf{A}^{T} \cdot \frac{\partial
939 V}{\partial \mathsf{A}} \right).
940 \end{eqnarray}
941 In these equations of motion, the $\mbox{skew}$ matrix of a vector
942 ${\bf v} = \left( v_1, v_2, v_3 \right)$ is defined:
943 \begin{equation}
944 \mbox{skew}\left( {\bf v} \right) := \left(
945 \begin{array}{ccc}
946 0 & v_3 & - v_2 \\
947 -v_3 & 0 & v_1 \\
948 v_2 & -v_1 & 0
949 \end{array}
950 \right).
951 \end{equation}
952 The $\mbox{rot}$ notation refers to the mapping of the $3 \times 3$
953 rotation matrix to a vector of orientations by first computing the
954 skew-symmetric part $\left(\mathsf{A} - \mathsf{A}^{T}\right)$ and
955 then associating this with a length 3 vector by inverting the
956 $\mbox{skew}$ function above:
957 \begin{equation}
958 \mbox{rot}\left(\mathsf{A}\right) := \mbox{ skew}^{-1}\left(\mathsf{A}
959 - \mathsf{A}^{T} \right).
960 \end{equation}
961 Written this way, the $\mbox{rot}$ operation creates a set of
962 conjugate angle coordinates to the body-fixed angular momenta
963 represented by ${\bf j}$. This equation of motion for angular momenta
964 is equivalent to the more familiar body-fixed forms,
965 \begin{eqnarray}
966 \dot{j_{x}} & = & \tau^b_x(t) +
967 \left(\frac{\overleftrightarrow{\mathsf{I}}_{yy} - \overleftrightarrow{\mathsf{I}}_{zz}}{\overleftrightarrow{\mathsf{I}}_{yy} \overleftrightarrow{\mathsf{I}}_{zz}} \right) j_y j_z, \\
968 \dot{j_{y}} & = & \tau^b_y(t) +
969 \left(\frac{\overleftrightarrow{\mathsf{I}}_{zz} - \overleftrightarrow{\mathsf{I}}_{xx}}{\overleftrightarrow{\mathsf{I}}_{zz} \overleftrightarrow{\mathsf{I}}_{xx}} \right) j_z j_x,\\
970 \dot{j_{z}} & = & \tau^b_z(t) +
971 \left(\frac{\overleftrightarrow{\mathsf{I}}_{xx} - \overleftrightarrow{\mathsf{I}}_{yy}}{\overleftrightarrow{\mathsf{I}}_{xx} \overleftrightarrow{\mathsf{I}}_{yy}} \right) j_x j_y,
972 \end{eqnarray}
973 which utilize the body-fixed torques, ${\bf \tau}^b$. Torques are
974 most easily derived in the space-fixed frame,
975 \begin{equation}
976 {\bf \tau}^b(t) = \mathsf{A}(t) \cdot {\bf \tau}^s(t),
977 \end{equation}
978 where the torques are either derived from the forces on the
979 constituent atoms of the rigid body, or for directional atoms,
980 directly from derivatives of the potential energy,
981 \begin{equation}
982 {\bf \tau}^s(t) = - \hat{\bf u}(t) \times \left( \frac{\partial}
983 {\partial \hat{\bf u}} V\left(\left\{ {\bf r}(t) \right\}, \left\{
984 \mathsf{A}(t) \right\}\right) \right).
985 \end{equation}
986 Here $\hat{\bf u}$ is a unit vector pointing along the principal axis
987 of the particle in the space-fixed frame.
988
989 The DLM method uses a Trotter factorization of the orientational
990 propagator. This has three effects:
991 \begin{enumerate}
992 \item the integrator is area-preserving in phase space (i.e. it is
993 {\it symplectic}),
994 \item the integrator is time-{\it reversible}, making it suitable for Hybrid
995 Monte Carlo applications, and
996 \item the error for a single time step is of order $\mathcal{O}\left(h^4\right)$
997 for timesteps of length $h$.
998 \end{enumerate}
999
1000 The integration of the equations of motion is carried out in a
1001 velocity-Verlet style 2-part algorithm, where $h= \delta t$:
1002
1003 {\tt moveA:}
1004 \begin{align*}
1005 {\bf v}\left(t + h / 2\right) &\leftarrow {\bf v}(t)
1006 + \frac{h}{2} \left( {\bf f}(t) / m \right), \\
1007 %
1008 {\bf r}(t + h) &\leftarrow {\bf r}(t)
1009 + h {\bf v}\left(t + h / 2 \right), \\
1010 %
1011 {\bf j}\left(t + h / 2 \right) &\leftarrow {\bf j}(t)
1012 + \frac{h}{2} {\bf \tau}^b(t), \\
1013 %
1014 \mathsf{A}(t + h) &\leftarrow \mathrm{rotate}\left( h {\bf j}
1015 (t + h / 2) \cdot \overleftrightarrow{\mathsf{I}}^{-1} \right).
1016 \end{align*}
1017
1018 In this context, the $\mathrm{rotate}$ function is the reversible product
1019 of the three body-fixed rotations,
1020 \begin{equation}
1021 \mathrm{rotate}({\bf a}) = \mathsf{G}_x(a_x / 2) \cdot
1022 \mathsf{G}_y(a_y / 2) \cdot \mathsf{G}_z(a_z) \cdot \mathsf{G}_y(a_y /
1023 2) \cdot \mathsf{G}_x(a_x /2),
1024 \end{equation}
1025 where each rotational propagator, $\mathsf{G}_\alpha(\theta)$, rotates
1026 both the rotation matrix ($\mathsf{A}$) and the body-fixed angular
1027 momentum (${\bf j}$) by an angle $\theta$ around body-fixed axis
1028 $\alpha$,
1029 \begin{equation}
1030 \mathsf{G}_\alpha( \theta ) = \left\{
1031 \begin{array}{lcl}
1032 \mathsf{A}(t) & \leftarrow & \mathsf{A}(0) \cdot \mathsf{R}_\alpha(\theta)^T, \\
1033 {\bf j}(t) & \leftarrow & \mathsf{R}_\alpha(\theta) \cdot {\bf j}(0).
1034 \end{array}
1035 \right.
1036 \end{equation}
1037 $\mathsf{R}_\alpha$ is a quadratic approximation to
1038 the single-axis rotation matrix. For example, in the small-angle
1039 limit, the rotation matrix around the body-fixed x-axis can be
1040 approximated as
1041 \begin{equation}
1042 \mathsf{R}_x(\theta) \approx \left(
1043 \begin{array}{ccc}
1044 1 & 0 & 0 \\
1045 0 & \frac{1-\theta^2 / 4}{1 + \theta^2 / 4} & -\frac{\theta}{1+
1046 \theta^2 / 4} \\
1047 0 & \frac{\theta}{1+
1048 \theta^2 / 4} & \frac{1-\theta^2 / 4}{1 + \theta^2 / 4}
1049 \end{array}
1050 \right).
1051 \end{equation}
1052 All other rotations follow in a straightforward manner.
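
A sketch of one such propagator, $\mathsf{G}_x(\theta)$, acting on the rotation matrix and the body-fixed angular momentum is shown below (illustrative C with hypothetical names; in the full $\mathrm{rotate}$ operation it would be called with the half-angle $a_x/2$ in the symmetric sequence given above).
\begin{lstlisting}
/* One application of G_x(theta): A <- A . R_x(theta)^T and
 * j <- R_x(theta) . j, using the quadratic approximation to the
 * single-axis rotation matrix. */
void G_x(double theta, double A[3][3], double j[3]) {
    double d = 1.0 + theta * theta / 4.0;
    double c = (1.0 - theta * theta / 4.0) / d;  /* ~ cos(theta) */
    double s = theta / d;                        /* ~ sin(theta) */
    double R[3][3] = { { 1.0, 0.0, 0.0 },
                       { 0.0,   c,  -s },
                       { 0.0,   s,   c } };
    double Anew[3][3], jnew[3];

    /* A <- A . R^T */
    for (int m = 0; m < 3; m++)
        for (int n = 0; n < 3; n++)
            Anew[m][n] = A[m][0]*R[n][0] + A[m][1]*R[n][1]
                       + A[m][2]*R[n][2];
    for (int m = 0; m < 3; m++)
        for (int n = 0; n < 3; n++)
            A[m][n] = Anew[m][n];

    /* j <- R . j */
    for (int m = 0; m < 3; m++)
        jnew[m] = R[m][0]*j[0] + R[m][1]*j[1] + R[m][2]*j[2];
    for (int m = 0; m < 3; m++)
        j[m] = jnew[m];
}
\end{lstlisting}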
1053
1054 After the first part of the propagation, the forces and body-fixed
1055 torques are calculated at the new positions and orientations
1056
1057 {\tt doForces:}
1058 \begin{align*}
1059 {\bf f}(t + h) &\leftarrow
1060 - \left(\frac{\partial V}{\partial {\bf r}}\right)_{{\bf r}(t + h)}, \\
1061 %
1062 {\bf \tau}^{s}(t + h) &\leftarrow - {\bf u}(t + h)
1063 \times \frac{\partial V}{\partial {\bf u}}, \\
1064 %
1065 {\bf \tau}^{b}(t + h) &\leftarrow \mathsf{A}(t + h)
1066 \cdot {\bf \tau}^s(t + h).
1067 \end{align*}
1068
1069 {\sc oopse} automatically updates ${\bf u}$ when the rotation matrix
1070 $\mathsf{A}$ is calculated in {\tt moveA}. Once the forces and
1071 torques have been obtained at the new time step, the velocities can be
1072 advanced to the same time value.
1073
1074 {\tt moveB:}
1075 \begin{align*}
1076 {\bf v}\left(t + h \right) &\leftarrow {\bf v}\left(t + h / 2 \right)
1077 + \frac{h}{2} \left( {\bf f}(t + h) / m \right), \\
1078 %
1079 {\bf j}\left(t + h \right) &\leftarrow {\bf j}\left(t + h / 2 \right)
1080 + \frac{h}{2} {\bf \tau}^b(t + h) .
1081 \end{align*}
1082
1083 The matrix rotations used in the DLM method end up being more costly
1084 computationally than the simpler arithmetic quaternion
1085 propagation. With the same time step, a 1000-molecule water simulation
1086 shows an average 7\% increase in computation time using the DLM method
1087 in place of quaternions. This cost is more than justified when
1088 comparing the energy conservation of the two methods as illustrated in
1089 Fig.~\ref{timestep}.
1090
1091 \begin{figure}
1092 \centering
1093 \includegraphics[width=\linewidth]{timeStep.eps}
1094 \caption[Energy conservation for quaternion versus DLM dynamics]{Energy conservation using quaternion based integration versus
1095 the method proposed by Dullweber \emph{et al.} with increasing time
1096 step. For each time step, the dotted line is total energy using the
1097 DLM integrator, and the solid line comes from the quaternion
1098 integrator. The larger time step plots are shifted up from the true
1099 energy baseline for clarity.}
1100 \label{timestep}
1101 \end{figure}
1102
1103 In Fig.~\ref{timestep}, the resulting energy drift at various time
1104 steps for both the DLM and quaternion integration schemes is
1105 compared. All of the 1000 molecule water simulations started with the
1106 same configuration, and the only difference was the method for
1107 handling rotational motion. At time steps of 0.1 and 0.5 fs, both
1108 methods for propagating molecular rotation conserve energy fairly well,
1109 with the quaternion method showing a slight energy drift over time in
1110 the 0.5 fs time step simulation. At time steps of 1 and 2 fs, the
1111 energy conservation benefits of the DLM method are clearly
1112 demonstrated. Thus, while maintaining the same degree of energy
1113 conservation, one can take considerably longer time steps, leading to
1114 an overall reduction in computation time.
1115
1116 There is only one specific keyword relevant to the default integrator,
1117 and that is the time step for integrating the equations of motion.
1118
1119 \begin{center}
1120 \begin{tabular}{llll}
1121 {\bf variable} & {\bf {\tt .bass} keyword} & {\bf units} & {\bf
1122 default value} \\
1123 $h$ & {\tt dt = 2.0;} & fs & none
1124 \end{tabular}
1125 \end{center}
1126
1127 \subsection{\label{sec:extended}Extended Systems for other Ensembles}
1128
1129 {\sc oopse} implements a number of extended system integrators for
1130 sampling from other ensembles relevant to chemical physics. The
1131 integrator can be selected with the {\tt ensemble} keyword in the
1132 {\tt .bass} file:
1133
1134 \begin{center}
1135 \begin{tabular}{lll}
1136 {\bf Integrator} & {\bf Ensemble} & {\bf {\tt .bass} line} \\
1137 NVE & microcanonical & {\tt ensemble = NVE; } \\
1138 NVT & canonical & {\tt ensemble = NVT; } \\
1139 NPTi & isobaric-isothermal & {\tt ensemble = NPTi;} \\
1140 & (with isotropic volume changes) & \\
1141 NPTf & isobaric-isothermal & {\tt ensemble = NPTf;} \\
1142 & (with changes to box shape) & \\
1143 NPTxyz & approximate isobaric-isothermal & {\tt ensemble = NPTxyz;} \\
1144 & (with separate barostats on each box dimension) & \\
1145 \end{tabular}
1146 \end{center}
1147
1148 The relatively well-known Nos\'e-Hoover thermostat\cite{Hoover85} is
1149 implemented in {\sc oopse}'s NVT integrator. This method couples an
1150 extra degree of freedom (the thermostat) to the kinetic energy of the
1151 system, and has been shown to sample the canonical distribution in the
1152 system degrees of freedom while conserving a quantity that is, to
1153 within a constant, the Helmholtz free energy.\cite{melchionna93}
1154
1155 NPT algorithms attempt to maintain constant pressure in the system by
1156 coupling the volume of the system to a barostat. {\sc oopse} contains
1157 three different constant pressure algorithms. The first two, NPTi and
1158 NPTf, have been shown to conserve a quantity that is, to within a
1159 constant, the Gibbs free energy.\cite{melchionna93} The Melchionna
1160 modification to the Hoover barostat is implemented in both NPTi and
1161 NPTf. NPTi allows only isotropic changes in the simulation box, while
1162 box {\it shape} variations are allowed in NPTf. The NPTxyz integrator
1163 has {\it not} been shown to sample from the isobaric-isothermal
1164 ensemble. It is useful, however, in that it maintains orthogonality
1165 for the axes of the simulation box while attempting to equalize
1166 pressure along the three perpendicular directions in the box.
1167
1168 Each of the extended system integrators requires additional keywords
1169 to set target values for the thermodynamic state variables that are
1170 being held constant. Keywords are also required to set the
1171 characteristic decay times for the dynamics of the extended
1172 variables.
1173
1174 \begin{center}
1175 \begin{tabular}{llll}
1176 {\bf variable} & {\bf {\tt .bass} keyword} & {\bf units} & {\bf
1177 default value} \\
1178 $T_{\mathrm{target}}$ & {\tt targetTemperature = 300;} & K & none \\
1179 $P_{\mathrm{target}}$ & {\tt targetPressure = 1;} & atm & none \\
1180 $\tau_T$ & {\tt tauThermostat = 1e3;} & fs & none \\
1181 $\tau_B$ & {\tt tauBarostat = 5e3;} & fs & none \\
1182 & {\tt resetTime = 200;} & fs & none \\
1183 & {\tt useInitialExtendedSystemState = true;} & logical &
1184 true
1185 \end{tabular}
1186 \end{center}
1187
1188 Two additional keywords can be used to either clear the extended
1189 system variables periodically ({\tt resetTime}), or to maintain the
1190 state of the extended system variables between simulations ({\tt
useInitialExtendedSystemState}). More details on these variables
and their use in the integrators follow below.
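
As an illustration of the syntax (and not a complete input file), the
keywords from the tables above can be collected into a {\tt .bass}
fragment for an NPTi simulation. The numerical values are simply those
quoted in the tables, not recommended settings, and the molecule
definitions and other required parameters are omitted:

\begin{lstlisting}
ensemble = NPTi;
dt = 2.0;
targetTemperature = 300;
targetPressure = 1;
tauThermostat = 1e3;
tauBarostat = 5e3;
useInitialExtendedSystemState = true;
\end{lstlisting}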
1193
1194 \subsection{\label{oopseSec:noseHooverThermo}Nos\'{e}-Hoover Thermostatting}
1195
1196 The Nos\'e-Hoover equations of motion are given by\cite{Hoover85}
1197 \begin{eqnarray}
1198 \dot{{\bf r}} & = & {\bf v}, \\
1199 \dot{{\bf v}} & = & \frac{{\bf f}}{m} - \chi {\bf v} ,\\
1200 \dot{\mathsf{A}} & = & \mathsf{A} \cdot
1201 \mbox{ skew}\left(\overleftrightarrow{\mathsf{I}}^{-1} \cdot {\bf j}\right), \\
1202 \dot{{\bf j}} & = & {\bf j} \times \left( \overleftrightarrow{\mathsf{I}}^{-1}
1203 \cdot {\bf j} \right) - \mbox{ rot}\left(\mathsf{A}^{T} \cdot \frac{\partial
1204 V}{\partial \mathsf{A}} \right) - \chi {\bf j}.
1205 \label{eq:nosehoovereom}
1206 \end{eqnarray}
1207
1208 $\chi$ is an ``extra'' variable included in the extended system, and
1209 it is propagated using the first order equation of motion
1210 \begin{equation}
1211 \dot{\chi} = \frac{1}{\tau_{T}^2} \left( \frac{T}{T_{\mathrm{target}}} - 1 \right).
1212 \label{eq:nosehooverext}
1213 \end{equation}
1214
1215 The instantaneous temperature $T$ is proportional to the total kinetic
1216 energy (both translational and orientational) and is given by
1217 \begin{equation}
1218 T = \frac{2 K}{f k_B}
1219 \end{equation}
1220 Here, $f$ is the total number of degrees of freedom in the system,
1221 \begin{equation}
1222 f = 3 N + 3 N_{\mathrm{orient}} - N_{\mathrm{constraints}},
1223 \end{equation}
1224 and $K$ is the total kinetic energy,
1225 \begin{equation}
1226 K = \sum_{i=1}^{N} \frac{1}{2} m_i {\bf v}_i^T \cdot {\bf v}_i +
1227 \sum_{i=1}^{N_{\mathrm{orient}}} \frac{1}{2} {\bf j}_i^T \cdot
1228 \overleftrightarrow{\mathsf{I}}_i^{-1} \cdot {\bf j}_i.
1229 \end{equation}
1230
In Eq.~\ref{eq:nosehooverext}, $\tau_T$ is the time constant for
1232 relaxation of the temperature to the target value. To set values for
1233 $\tau_T$ or $T_{\mathrm{target}}$ in a simulation, one would use the
1234 {\tt tauThermostat} and {\tt targetTemperature} keywords in the {\tt
1235 .bass} file. The units for {\tt tauThermostat} are fs, and the units
for the {\tt targetTemperature} are K. The integration of
the equations of motion is carried out in a two-part,
velocity-Verlet-style algorithm:
1239
1240 {\tt moveA:}
1241 \begin{align*}
1242 T(t) &\leftarrow \left\{{\bf v}(t)\right\}, \left\{{\bf j}(t)\right\} ,\\
1243 %
1244 {\bf v}\left(t + h / 2\right) &\leftarrow {\bf v}(t)
1245 + \frac{h}{2} \left( \frac{{\bf f}(t)}{m} - {\bf v}(t)
1246 \chi(t)\right), \\
1247 %
1248 {\bf r}(t + h) &\leftarrow {\bf r}(t)
1249 + h {\bf v}\left(t + h / 2 \right) ,\\
1250 %
1251 {\bf j}\left(t + h / 2 \right) &\leftarrow {\bf j}(t)
1252 + \frac{h}{2} \left( {\bf \tau}^b(t) - {\bf j}(t)
1253 \chi(t) \right) ,\\
1254 %
1255 \mathsf{A}(t + h) &\leftarrow \mathrm{rotate}
1256 \left(h * {\bf j}(t + h / 2)
1257 \overleftrightarrow{\mathsf{I}}^{-1} \right) ,\\
1258 %
1259 \chi\left(t + h / 2 \right) &\leftarrow \chi(t)
1260 + \frac{h}{2 \tau_T^2} \left( \frac{T(t)}
1261 {T_{\mathrm{target}}} - 1 \right) .
1262 \end{align*}
1263
1264 Here $\mathrm{rotate}(h * {\bf j}
1265 \overleftrightarrow{\mathsf{I}}^{-1})$ is the same symplectic Trotter
1266 factorization of the three rotation operations that was discussed in
1267 the section on the DLM integrator. Note that this operation modifies
1268 both the rotation matrix $\mathsf{A}$ and the angular momentum ${\bf
1269 j}$. {\tt moveA} propagates velocities by a half time step, and
1270 positional degrees of freedom by a full time step. The new positions
1271 (and orientations) are then used to calculate a new set of forces and
1272 torques in exactly the same way they are calculated in the {\tt
1273 doForces} portion of the DLM integrator.
1274
1275 Once the forces and torques have been obtained at the new time step,
1276 the temperature, velocities, and the extended system variable can be
1277 advanced to the same time value.
1278
1279 {\tt moveB:}
1280 \begin{align*}
1281 T(t + h) &\leftarrow \left\{{\bf v}(t + h)\right\},
1282 \left\{{\bf j}(t + h)\right\}, \\
1283 %
1284 \chi\left(t + h \right) &\leftarrow \chi\left(t + h /
1285 2 \right) + \frac{h}{2 \tau_T^2} \left( \frac{T(t+h)}
1286 {T_{\mathrm{target}}} - 1 \right), \\
1287 %
1288 {\bf v}\left(t + h \right) &\leftarrow {\bf v}\left(t
1289 + h / 2 \right) + \frac{h}{2} \left(
1290 \frac{{\bf f}(t + h)}{m} - {\bf v}(t + h)
\chi(t + h)\right) ,\\
1292 %
1293 {\bf j}\left(t + h \right) &\leftarrow {\bf j}\left(t
1294 + h / 2 \right) + \frac{h}{2}
1295 \left( {\bf \tau}^b(t + h) - {\bf j}(t + h)
1296 \chi(t + h) \right) .
1297 \end{align*}
1298
Since ${\bf v}(t + h)$ and ${\bf j}(t + h)$ are required to calculate
1300 $T(t + h)$ as well as $\chi(t + h)$, they indirectly depend on their
1301 own values at time $t + h$. {\tt moveB} is therefore done in an
1302 iterative fashion until $\chi(t + h)$ becomes self-consistent. The
1303 relative tolerance for the self-consistency check defaults to a value
1304 of $\mbox{10}^{-6}$, but {\sc oopse} will terminate the iteration
1305 after 4 loops even if the consistency check has not been satisfied.
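
The self-consistent portion of {\tt moveB} can be sketched in code. The
fragment below is an illustration rather than the {\sc oopse} source: it
includes only the translational contribution to the kinetic energy, and
the function and variable names are invented for the example.

\begin{lstlisting}
#include <math.h>

#define KB        1.0     /* Boltzmann constant in the working unit system */
#define TOLERANCE 1.0e-6  /* default relative tolerance quoted in the text */
#define MAX_ITER  4       /* iteration limit quoted in the text            */

/* Sketch of the iterative part of moveB for the NVT integrator.
   v_half holds v(t + h/2), f holds f(t + h), and chi_half holds
   chi(t + h/2).  Orientational degrees of freedom are omitted. */
double nvt_moveB(int n, int f_dof, const double *m,
                 const double v_half[][3], const double f[][3],
                 double v_new[][3], double chi_half,
                 double h, double tau_T, double T_target)
{
    double chi = chi_half;             /* initial guess for chi(t + h) */
    int    iter, i, k;

    for (i = 0; i < n; i++)            /* start from half-step values */
        for (k = 0; k < 3; k++)
            v_new[i][k] = v_half[i][k];

    for (iter = 0; iter < MAX_ITER; iter++) {
        double K = 0.0, T, chi_new;

        /* instantaneous temperature, T = 2K / (f kB) */
        for (i = 0; i < n; i++)
            for (k = 0; k < 3; k++)
                K += 0.5 * m[i] * v_new[i][k] * v_new[i][k];
        T = 2.0 * K / (f_dof * KB);

        /* chi(t + h) = chi(t + h/2) + (h / 2 tau_T^2)(T / T_target - 1) */
        chi_new = chi_half
            + 0.5 * h / (tau_T * tau_T) * (T / T_target - 1.0);

        /* v(t + h) = v(t + h/2) + (h/2)(f/m - v(t + h) chi(t + h)) */
        for (i = 0; i < n; i++)
            for (k = 0; k < 3; k++)
                v_new[i][k] = v_half[i][k]
                    + 0.5 * h * (f[i][k] / m[i] - v_new[i][k] * chi_new);

        if (fabs(chi_new - chi) <= TOLERANCE * fabs(chi_new)) {
            chi = chi_new;
            break;                     /* self-consistent */
        }
        chi = chi_new;
    }
    return chi;                        /* chi(t + h) */
}
\end{lstlisting}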
1306
1307 The Nos\'e-Hoover algorithm is known to conserve a Hamiltonian for the
1308 extended system that is, to within a constant, identical to the
1309 Helmholtz free energy,\cite{melchionna93}
1310 \begin{equation}
1311 H_{\mathrm{NVT}} = V + K + f k_B T_{\mathrm{target}} \left(
1312 \frac{\tau_{T}^2 \chi^2(t)}{2} + \int_{0}^{t} \chi(t^\prime) dt^\prime
1313 \right).
1314 \end{equation}
1315 Poor choices of $h$ or $\tau_T$ can result in non-conservation
1316 of $H_{\mathrm{NVT}}$, so the conserved quantity is maintained in the
1317 last column of the {\tt .stat} file to allow checks on the quality of
1318 the integration.
1319
1320 Bond constraints are applied at the end of both the {\tt moveA} and
1321 {\tt moveB} portions of the algorithm. Details on the constraint
1322 algorithms are given in section \ref{oopseSec:rattle}.
1323
1324 \subsection{\label{sec:NPTi}Constant-pressure integration with
1325 isotropic box deformations (NPTi)}
1326
To carry out isobaric-isothermal ensemble calculations, {\sc oopse}
1328 implements the Melchionna modifications to the Nos\'e-Hoover-Andersen
1329 equations of motion,\cite{melchionna93}
1330
1331 \begin{eqnarray}
1332 \dot{{\bf r}} & = & {\bf v} + \eta \left( {\bf r} - {\bf R}_0 \right), \\
1333 \dot{{\bf v}} & = & \frac{{\bf f}}{m} - (\eta + \chi) {\bf v}, \\
1334 \dot{\mathsf{A}} & = & \mathsf{A} \cdot
1335 \mbox{ skew}\left(\overleftrightarrow{I}^{-1} \cdot {\bf j}\right),\\
1336 \dot{{\bf j}} & = & {\bf j} \times \left( \overleftrightarrow{I}^{-1}
1337 \cdot {\bf j} \right) - \mbox{ rot}\left(\mathsf{A}^{T} \cdot \frac{\partial
1338 V}{\partial \mathsf{A}} \right) - \chi {\bf j}, \\
1339 \dot{\chi} & = & \frac{1}{\tau_{T}^2} \left(
1340 \frac{T}{T_{\mathrm{target}}} - 1 \right) ,\\
\dot{\eta} & = & \frac{1}{\tau_{B}^2 f k_B T_{\mathrm{target}}} \mathcal{V} \left( P -
1342 P_{\mathrm{target}} \right), \\
1343 \dot{\mathcal{V}} & = & 3 \mathcal{V} \eta .
1344 \label{eq:melchionna1}
1345 \end{eqnarray}
1346
1347 $\chi$ and $\eta$ are the ``extra'' degrees of freedom in the extended
1348 system. $\chi$ is a thermostat, and it has the same function as it
1349 does in the Nos\'e-Hoover NVT integrator. $\eta$ is a barostat which
1350 controls changes to the volume of the simulation box. ${\bf R}_0$ is
1351 the location of the center of mass for the entire system, and
1352 $\mathcal{V}$ is the volume of the simulation box. At any time, the
1353 volume can be calculated from the determinant of the matrix which
1354 describes the box shape:
1355 \begin{equation}
1356 \mathcal{V} = \det(\mathsf{H}).
1357 \end{equation}
1358
1359 The NPTi integrator requires an instantaneous pressure. This quantity
1360 is calculated via the pressure tensor,
1361 \begin{equation}
\overleftrightarrow{\mathsf{P}}(t) = \frac{1}{\mathcal{V}(t)} \left(
\sum_{i=1}^{N} m_i {\bf v}_i(t) \otimes {\bf v}_i(t) +
\overleftrightarrow{\mathsf{W}}(t) \right).
1365 \end{equation}
1366 The kinetic contribution to the pressure tensor utilizes the {\it
1367 outer} product of the velocities denoted by the $\otimes$ symbol. The
1368 stress tensor is calculated from another outer product of the
1369 inter-atomic separation vectors (${\bf r}_{ij} = {\bf r}_j - {\bf
1370 r}_i$) with the forces between the same two atoms,
1371 \begin{equation}
1372 \overleftrightarrow{\mathsf{W}}(t) = \sum_{i} \sum_{j>i} {\bf r}_{ij}(t)
1373 \otimes {\bf f}_{ij}(t).
1374 \end{equation}
1375 The instantaneous pressure is then simply obtained from the trace of
the pressure tensor,
\begin{equation}
P(t) = \frac{1}{3} \mathrm{Tr} \left( \overleftrightarrow{\mathsf{P}}(t) \right).
1380 \end{equation}
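
A schematic version of this pressure evaluation is given below. It is an
illustrative sketch rather than the {\sc oopse} routine: the pair
separations and pair forces are assumed to have been computed already,
only the trace of the tensor is accumulated, and all names are invented
for the example.

\begin{lstlisting}
/* Sketch: instantaneous pressure from the trace of the pressure
   tensor.  v and m are the atomic velocities and masses, rij and fij
   the precomputed pair separation vectors and pair forces, and V the
   current box volume. */
double instantaneous_pressure(int n_atoms, const double *m,
                              const double v[][3],
                              int n_pairs, const double rij[][3],
                              const double fij[][3], double V)
{
    double trace = 0.0;
    int i, k;

    /* kinetic contribution: diagonal of sum_i m_i v_i (x) v_i */
    for (i = 0; i < n_atoms; i++)
        for (k = 0; k < 3; k++)
            trace += m[i] * v[i][k] * v[i][k];

    /* virial contribution: diagonal of sum_{i<j} r_ij (x) f_ij */
    for (i = 0; i < n_pairs; i++)
        for (k = 0; k < 3; k++)
            trace += rij[i][k] * fij[i][k];

    /* P = Tr(P)/3, with the 1/V prefactor */
    return trace / (3.0 * V);
}
\end{lstlisting}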
1381
In Eq.~\ref{eq:melchionna1}, $\tau_B$ is the time constant for
1383 relaxation of the pressure to the target value. To set values for
1384 $\tau_B$ or $P_{\mathrm{target}}$ in a simulation, one would use the
1385 {\tt tauBarostat} and {\tt targetPressure} keywords in the {\tt .bass}
1386 file. The units for {\tt tauBarostat} are fs, and the units for the
{\tt targetPressure} are atmospheres. As in the NVT integrator, the
integration of the equations of motion is carried out in a two-part,
velocity-Verlet-style algorithm:
1390
1391 {\tt moveA:}
1392 \begin{align*}
1393 T(t) &\leftarrow \left\{{\bf v}(t)\right\}, \left\{{\bf j}(t)\right\} ,\\
1394 %
1395 P(t) &\leftarrow \left\{{\bf r}(t)\right\}, \left\{{\bf v}(t)\right\} ,\\
1396 %
1397 {\bf v}\left(t + h / 2\right) &\leftarrow {\bf v}(t)
1398 + \frac{h}{2} \left( \frac{{\bf f}(t)}{m} - {\bf v}(t)
1399 \left(\chi(t) + \eta(t) \right) \right), \\
1400 %
1401 {\bf j}\left(t + h / 2 \right) &\leftarrow {\bf j}(t)
1402 + \frac{h}{2} \left( {\bf \tau}^b(t) - {\bf j}(t)
1403 \chi(t) \right), \\
1404 %
1405 \mathsf{A}(t + h) &\leftarrow \mathrm{rotate}\left(h *
1406 {\bf j}(t + h / 2) \overleftrightarrow{\mathsf{I}}^{-1}
1407 \right) ,\\
1408 %
1409 \chi\left(t + h / 2 \right) &\leftarrow \chi(t) +
1410 \frac{h}{2 \tau_T^2} \left( \frac{T(t)}{T_{\mathrm{target}}} - 1
1411 \right) ,\\
1412 %
1413 \eta(t + h / 2) &\leftarrow \eta(t) + \frac{h
1414 \mathcal{V}(t)}{2 N k_B T(t) \tau_B^2} \left( P(t)
1415 - P_{\mathrm{target}} \right), \\
1416 %
1417 {\bf r}(t + h) &\leftarrow {\bf r}(t) + h
1418 \left\{ {\bf v}\left(t + h / 2 \right)
1419 + \eta(t + h / 2)\left[ {\bf r}(t + h)
1420 - {\bf R}_0 \right] \right\} ,\\
1421 %
1422 \mathsf{H}(t + h) &\leftarrow e^{-h \eta(t + h / 2)}
1423 \mathsf{H}(t).
1424 \end{align*}
1425
1426 Most of these equations are identical to their counterparts in the NVT
1427 integrator, but the propagation of positions to time $t + h$
1428 depends on the positions at the same time. {\sc oopse} carries out
1429 this step iteratively (with a limit of 5 passes through the iterative
1430 loop). Also, the simulation box $\mathsf{H}$ is scaled uniformly for
1431 one full time step by an exponential factor that depends on the value
1432 of $\eta$ at time $t +
1433 h / 2$. Reshaping the box uniformly also scales the volume of
1434 the box by
1435 \begin{equation}
\mathcal{V}(t + h) \leftarrow e^{ - 3 h \eta(t + h /2)} \mathcal{V}(t).
1438 \end{equation}
1439
1440 The {\tt doForces} step for the NPTi integrator is exactly the same as
1441 in both the DLM and NVT integrators. Once the forces and torques have
1442 been obtained at the new time step, the velocities can be advanced to
1443 the same time value.
1444
1445 {\tt moveB:}
1446 \begin{align*}
1447 T(t + h) &\leftarrow \left\{{\bf v}(t + h)\right\},
1448 \left\{{\bf j}(t + h)\right\} ,\\
1449 %
1450 P(t + h) &\leftarrow \left\{{\bf r}(t + h)\right\},
1451 \left\{{\bf v}(t + h)\right\}, \\
1452 %
1453 \chi\left(t + h \right) &\leftarrow \chi\left(t + h /
1454 2 \right) + \frac{h}{2 \tau_T^2} \left( \frac{T(t+h)}
1455 {T_{\mathrm{target}}} - 1 \right), \\
1456 %
1457 \eta(t + h) &\leftarrow \eta(t + h / 2) +
1458 \frac{h \mathcal{V}(t + h)}{2 N k_B T(t + h)
1459 \tau_B^2} \left( P(t + h) - P_{\mathrm{target}} \right), \\
1460 %
1461 {\bf v}\left(t + h \right) &\leftarrow {\bf v}\left(t
1462 + h / 2 \right) + \frac{h}{2} \left(
1463 \frac{{\bf f}(t + h)}{m} - {\bf v}(t + h)
1464 (\chi(t + h) + \eta(t + h)) \right) ,\\
1465 %
1466 {\bf j}\left(t + h \right) &\leftarrow {\bf j}\left(t
1467 + h / 2 \right) + \frac{h}{2} \left( {\bf
1468 \tau}^b(t + h) - {\bf j}(t + h)
1469 \chi(t + h) \right) .
1470 \end{align*}
1471
1472 Once again, since ${\bf v}(t + h)$ and ${\bf j}(t + h)$ are required
to calculate $T(t + h)$, $P(t + h)$, $\chi(t + h)$, and $\eta(t +
1474 h)$, they indirectly depend on their own values at time $t + h$. {\tt
1475 moveB} is therefore done in an iterative fashion until $\chi(t + h)$
1476 and $\eta(t + h)$ become self-consistent. The relative tolerance for
1477 the self-consistency check defaults to a value of $\mbox{10}^{-6}$,
1478 but {\sc oopse} will terminate the iteration after 4 loops even if the
1479 consistency check has not been satisfied.
1480
1481 The Melchionna modification of the Nos\'e-Hoover-Andersen algorithm is
1482 known to conserve a Hamiltonian for the extended system that is, to
1483 within a constant, identical to the Gibbs free energy,
1484 \begin{equation}
1485 H_{\mathrm{NPTi}} = V + K + f k_B T_{\mathrm{target}} \left(
1486 \frac{\tau_{T}^2 \chi^2(t)}{2} + \int_{0}^{t} \chi(t^\prime) dt^\prime
1487 \right) + P_{\mathrm{target}} \mathcal{V}(t).
1488 \end{equation}
Poor choices of $h$, $\tau_T$, or $\tau_B$ can result in
1490 non-conservation of $H_{\mathrm{NPTi}}$, so the conserved quantity is
1491 maintained in the last column of the {\tt .stat} file to allow checks
1492 on the quality of the integration. It is also known that this
1493 algorithm samples the equilibrium distribution for the enthalpy
1494 (including contributions for the thermostat and barostat),
1495 \begin{equation}
1496 H_{\mathrm{NPTi}} = V + K + \frac{f k_B T_{\mathrm{target}}}{2} \left(
1497 \chi^2 \tau_T^2 + \eta^2 \tau_B^2 \right) + P_{\mathrm{target}}
1498 \mathcal{V}(t).
1499 \end{equation}
1500
1501 Bond constraints are applied at the end of both the {\tt moveA} and
1502 {\tt moveB} portions of the algorithm. Details on the constraint
1503 algorithms are given in section \ref{oopseSec:rattle}.
1504
1505 \subsection{\label{sec:NPTf}Constant-pressure integration with a
1506 flexible box (NPTf)}
1507
1508 There is a relatively simple generalization of the
1509 Nos\'e-Hoover-Andersen method to include changes in the simulation box
1510 {\it shape} as well as in the volume of the box. This method utilizes
1511 the full $3 \times 3$ pressure tensor and introduces a tensor of
1512 extended variables ($\overleftrightarrow{\eta}$) to control changes to
1513 the box shape. The equations of motion for this method are
1514 \begin{eqnarray}
1515 \dot{{\bf r}} & = & {\bf v} + \overleftrightarrow{\eta} \cdot \left( {\bf r} - {\bf R}_0 \right), \\
1516 \dot{{\bf v}} & = & \frac{{\bf f}}{m} - (\overleftrightarrow{\eta} +
1517 \chi \cdot \mathsf{1}) {\bf v}, \\
1518 \dot{\mathsf{A}} & = & \mathsf{A} \cdot
1519 \mbox{ skew}\left(\overleftrightarrow{I}^{-1} \cdot {\bf j}\right) ,\\
1520 \dot{{\bf j}} & = & {\bf j} \times \left( \overleftrightarrow{I}^{-1}
1521 \cdot {\bf j} \right) - \mbox{ rot}\left(\mathsf{A}^{T} \cdot \frac{\partial
1522 V}{\partial \mathsf{A}} \right) - \chi {\bf j} ,\\
1523 \dot{\chi} & = & \frac{1}{\tau_{T}^2} \left(
1524 \frac{T}{T_{\mathrm{target}}} - 1 \right) ,\\
1525 \dot{\overleftrightarrow{\eta}} & = & \frac{1}{\tau_{B}^2 f k_B
T_{\mathrm{target}}} \mathcal{V} \left( \overleftrightarrow{\mathsf{P}} - P_{\mathrm{target}}\mathsf{1} \right) ,\\
1527 \dot{\mathsf{H}} & = & \overleftrightarrow{\eta} \cdot \mathsf{H} .
1528 \label{eq:melchionna2}
1529 \end{eqnarray}
1530
1531 Here, $\mathsf{1}$ is the unit matrix and $\overleftrightarrow{\mathsf{P}}$
is the pressure tensor. Again, the volume is $\mathcal{V} = \det
\mathsf{H}$.
1534
1535 The propagation of the equations of motion is nearly identical to the
1536 NPTi integration:
1537
1538 {\tt moveA:}
1539 \begin{align*}
1540 T(t) &\leftarrow \left\{{\bf v}(t)\right\}, \left\{{\bf j}(t)\right\} ,\\
1541 %
1542 \overleftrightarrow{\mathsf{P}}(t) &\leftarrow \left\{{\bf r}(t)\right\},
1543 \left\{{\bf v}(t)\right\} ,\\
1544 %
1545 {\bf v}\left(t + h / 2\right) &\leftarrow {\bf v}(t)
1546 + \frac{h}{2} \left( \frac{{\bf f}(t)}{m} -
1547 \left(\chi(t)\mathsf{1} + \overleftrightarrow{\eta}(t) \right) \cdot
1548 {\bf v}(t) \right), \\
1549 %
1550 {\bf j}\left(t + h / 2 \right) &\leftarrow {\bf j}(t)
1551 + \frac{h}{2} \left( {\bf \tau}^b(t) - {\bf j}(t)
1552 \chi(t) \right), \\
1553 %
1554 \mathsf{A}(t + h) &\leftarrow \mathrm{rotate}\left(h *
1555 {\bf j}(t + h / 2) \overleftrightarrow{\mathsf{I}}^{-1}
1556 \right), \\
1557 %
1558 \chi\left(t + h / 2 \right) &\leftarrow \chi(t) +
1559 \frac{h}{2 \tau_T^2} \left( \frac{T(t)}{T_{\mathrm{target}}}
1560 - 1 \right), \\
1561 %
1562 \overleftrightarrow{\eta}(t + h / 2) &\leftarrow
1563 \overleftrightarrow{\eta}(t) + \frac{h \mathcal{V}(t)}{2 N k_B
1564 T(t) \tau_B^2} \left( \overleftrightarrow{\mathsf{P}}(t)
1565 - P_{\mathrm{target}}\mathsf{1} \right), \\
1566 %
1567 {\bf r}(t + h) &\leftarrow {\bf r}(t) + h \left\{ {\bf v}
1568 \left(t + h / 2 \right) + \overleftrightarrow{\eta}(t +
1569 h / 2) \cdot \left[ {\bf r}(t + h)
1570 - {\bf R}_0 \right] \right\}, \\
1571 %
1572 \mathsf{H}(t + h) &\leftarrow \mathsf{H}(t) \cdot e^{-h
1573 \overleftrightarrow{\eta}(t + h / 2)} .
1574 \end{align*}
1575 {\sc oopse} uses a power series expansion truncated at second order
1576 for the exponential operation which scales the simulation box.
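
This scaling step can be sketched as follows. The fragment applies the
truncated series $e^{-h \overleftrightarrow{\eta}} \approx \mathsf{1} -
h \overleftrightarrow{\eta} + \frac{1}{2}\left(h
\overleftrightarrow{\eta}\right)^2$ on the right of the box matrix; it
is an illustration with invented names, not the {\sc oopse} source.

\begin{lstlisting}
/* multiply two 3x3 matrices: c = a b */
static void mat3_mul(const double a[3][3], const double b[3][3],
                     double c[3][3])
{
    int i, j, k;
    for (i = 0; i < 3; i++)
        for (j = 0; j < 3; j++) {
            c[i][j] = 0.0;
            for (k = 0; k < 3; k++)
                c[i][j] += a[i][k] * b[k][j];
        }
}

/* H(t + h) = H(t) . exp(-h eta(t + h/2)), with the exponential
   replaced by its power series truncated at second order */
void scale_box(double H[3][3], const double eta[3][3], double h)
{
    double x[3][3], x2[3][3], expm[3][3], Hnew[3][3];
    int i, j;

    for (i = 0; i < 3; i++)             /* x = -h eta */
        for (j = 0; j < 3; j++)
            x[i][j] = -h * eta[i][j];

    mat3_mul(x, x, x2);                 /* x^2 */
    for (i = 0; i < 3; i++)             /* 1 + x + x^2 / 2 */
        for (j = 0; j < 3; j++)
            expm[i][j] = (i == j ? 1.0 : 0.0) + x[i][j] + 0.5 * x2[i][j];

    mat3_mul(H, expm, Hnew);            /* right-multiply the box matrix */
    for (i = 0; i < 3; i++)
        for (j = 0; j < 3; j++)
            H[i][j] = Hnew[i][j];
}
\end{lstlisting}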
1577
1578 The {\tt moveB} portion of the algorithm is largely unchanged from the
1579 NPTi integrator:
1580
1581 {\tt moveB:}
1582 \begin{align*}
1583 T(t + h) &\leftarrow \left\{{\bf v}(t + h)\right\},
1584 \left\{{\bf j}(t + h)\right\}, \\
1585 %
1586 \overleftrightarrow{\mathsf{P}}(t + h) &\leftarrow \left\{{\bf r}
1587 (t + h)\right\}, \left\{{\bf v}(t
1588 + h)\right\}, \left\{{\bf f}(t + h)\right\} ,\\
1589 %
1590 \chi\left(t + h \right) &\leftarrow \chi\left(t + h /
1591 2 \right) + \frac{h}{2 \tau_T^2} \left( \frac{T(t+
1592 h)}{T_{\mathrm{target}}} - 1 \right), \\
1593 %
1594 \overleftrightarrow{\eta}(t + h) &\leftarrow
1595 \overleftrightarrow{\eta}(t + h / 2) +
1596 \frac{h \mathcal{V}(t + h)}{2 N k_B T(t + h)
\tau_B^2} \left( \overleftrightarrow{\mathsf{P}}(t + h)
1598 - P_{\mathrm{target}}\mathsf{1} \right) ,\\
1599 %
1600 {\bf v}\left(t + h \right) &\leftarrow {\bf v}\left(t
1601 + h / 2 \right) + \frac{h}{2} \left(
1602 \frac{{\bf f}(t + h)}{m} -
(\chi(t + h)\mathsf{1} + \overleftrightarrow{\eta}(t
+ h)) \cdot {\bf v}(t + h) \right), \\
1605 %
1606 {\bf j}\left(t + h \right) &\leftarrow {\bf j}\left(t
1607 + h / 2 \right) + \frac{h}{2} \left( {\bf \tau}^b(t
1608 + h) - {\bf j}(t + h) \chi(t + h) \right) .
1609 \end{align*}
1610
1611 The iterative schemes for both {\tt moveA} and {\tt moveB} are
1612 identical to those described for the NPTi integrator.
1613
1614 The NPTf integrator is known to conserve the following Hamiltonian:
1615 \begin{equation}
1616 H_{\mathrm{NPTf}} = V + K + f k_B T_{\mathrm{target}} \left(
1617 \frac{\tau_{T}^2 \chi^2(t)}{2} + \int_{0}^{t} \chi(t^\prime) dt^\prime
1618 \right) + P_{\mathrm{target}} \mathcal{V}(t) + \frac{f k_B
1619 T_{\mathrm{target}}}{2}
1620 \mathrm{Tr}\left[\overleftrightarrow{\eta}(t)\right]^2 \tau_B^2.
1621 \end{equation}
1622
1623 This integrator must be used with care, particularly in liquid
1624 simulations. Liquids have very small restoring forces in the
1625 off-diagonal directions, and the simulation box can very quickly form
1626 elongated and sheared geometries which become smaller than the
1627 electrostatic or Lennard-Jones cutoff radii. The NPTf integrator
1628 finds most use in simulating crystals or liquid crystals which assume
1629 non-orthorhombic geometries.
1630
1631 \subsection{\label{nptxyz}Constant pressure in 3 axes (NPTxyz)}
1632
1633 There is one additional extended system integrator which is somewhat
1634 simpler than the NPTf method described above. In this case, the three
1635 axes have independent barostats which each attempt to preserve the
1636 target pressure along the box walls perpendicular to that particular
1637 axis. The lengths of the box axes are allowed to fluctuate
1638 independently, but the angle between the box axes does not change.
1639 The equations of motion are identical to those described above, but
1640 only the {\it diagonal} elements of $\overleftrightarrow{\eta}$ are
1641 computed. The off-diagonal elements are set to zero (even when the
1642 pressure tensor has non-zero off-diagonal elements).
1643
1644 It should be noted that the NPTxyz integrator is {\it not} known to
1645 preserve any Hamiltonian of interest to the chemical physics
1646 community. The integrator is extremely useful, however, in generating
1647 initial conditions for other integration methods. It {\it is} suitable
1648 for use with liquid simulations, or in cases where there is
orientational anisotropy in the system (e.g., in lipid bilayer
simulations).
1651
1652 \subsection{\label{oopseSec:rattle}The {\sc rattle} Method for Bond
1653 Constraints}
1654
1655 In order to satisfy the constraints of fixed bond lengths within {\sc
1656 oopse}, we have implemented the {\sc rattle} algorithm of
Andersen.\cite{andersen83} The algorithm is a velocity-Verlet
1658 formulation of the {\sc shake} method\cite{ryckaert77} of iteratively
1659 solving the Lagrange multipliers of constraint. The system of Lagrange
1660 multipliers allows one to reformulate the equations of motion with
1661 explicit constraint forces.\cite{fowles99:lagrange}
1662
1663 Consider a system described by coordinates $q_1$ and $q_2$ subject to an
1664 equation of constraint:
1665 \begin{equation}
1666 \sigma(q_1, q_2,t) = 0
1667 \label{oopseEq:lm1}
1668 \end{equation}
1669 The Lagrange formulation of the equations of motion can be written:
1670 \begin{equation}
1671 \delta\int_{t_1}^{t_2}L\, dt =
1672 \int_{t_1}^{t_2} \sum_i \biggl [ \frac{\partial L}{\partial q_i}
1673 - \frac{d}{dt}\biggl(\frac{\partial L}{\partial \dot{q}_i}
1674 \biggr ) \biggr] \delta q_i \, dt = 0.
1675 \label{oopseEq:lm2}
1676 \end{equation}
Here, the variations $\delta q_i$ are not independent, as $q_1$ and $q_2$
1678 are linked by $\sigma$. However, $\sigma$ is fixed at any given
1679 instant of time, giving:
1680 \begin{align}
1681 \delta\sigma &= \biggl( \frac{\partial\sigma}{\partial q_1} \delta q_1 %
1682 + \frac{\partial\sigma}{\partial q_2} \delta q_2 \biggr) = 0 ,\\
1683 %
1684 \frac{\partial\sigma}{\partial q_1} \delta q_1 &= %
1685 - \frac{\partial\sigma}{\partial q_2} \delta q_2, \\
1686 %
1687 \delta q_2 &= - \biggl(\frac{\partial\sigma}{\partial q_1} \bigg / %
1688 \frac{\partial\sigma}{\partial q_2} \biggr) \delta q_1.
1689 \end{align}
Substituting this back into Eq.~\ref{oopseEq:lm2} gives
1691 \begin{equation}
1692 \int_{t_1}^{t_2}\biggl [ \biggl(\frac{\partial L}{\partial q_1}
1693 - \frac{d}{dt}\,\frac{\partial L}{\partial \dot{q}_1}
1694 \biggr)
1695 - \biggl( \frac{\partial L}{\partial q_1}
1696 - \frac{d}{dt}\,\frac{\partial L}{\partial \dot{q}_1}
1697 \biggr) \biggl(\frac{\partial\sigma}{\partial q_1} \bigg / %
1698 \frac{\partial\sigma}{\partial q_2} \biggr)\biggr] \delta q_1 \, dt = 0.
1699 \label{oopseEq:lm3}
1700 \end{equation}
This leads to
1702 \begin{equation}
1703 \frac{\biggl(\frac{\partial L}{\partial q_1}
1704 - \frac{d}{dt}\,\frac{\partial L}{\partial \dot{q}_1}
1705 \biggr)}{\frac{\partial\sigma}{\partial q_1}} =
1706 \frac{\biggl(\frac{\partial L}{\partial q_2}
1707 - \frac{d}{dt}\,\frac{\partial L}{\partial \dot{q}_2}
1708 \biggr)}{\frac{\partial\sigma}{\partial q_2}}.
1709 \label{oopseEq:lm4}
1710 \end{equation}
This relation can only be satisfied if both sides are equal to a single
function $-\lambda(t)$,
1713 \begin{align}
1714 \frac{\biggl(\frac{\partial L}{\partial q_1}
1715 - \frac{d}{dt}\,\frac{\partial L}{\partial \dot{q}_1}
1716 \biggr)}{\frac{\partial\sigma}{\partial q_1}} &= -\lambda(t), \\
1717 %
1718 \frac{\partial L}{\partial q_1}
1719 - \frac{d}{dt}\,\frac{\partial L}{\partial \dot{q}_1} &=
1720 -\lambda(t)\,\frac{\partial\sigma}{\partial q_1} ,\\
1721 %
\frac{\partial L}{\partial q_i}
- \frac{d}{dt}\,\frac{\partial L}{\partial \dot{q}_i}
+ \mathcal{G}_i &= 0,
1725 \end{align}
1726 where $\mathcal{G}_i$, the force of constraint on $i$, is:
1727 \begin{equation}
\mathcal{G}_i = \lambda(t)\,\frac{\partial\sigma}{\partial q_i}.
1729 \label{oopseEq:lm5}
1730 \end{equation}
1731
In a simulation, this would involve the solution of a set of $(m + n)$
equations, where $m$ is the number of constraints and $n$ is the number
of constrained coordinates. In practice, this is not done, as the matrix
inversion necessary to solve the system of equations would be very time
consuming. Additionally, the numerical error in the solution of the set
of $\lambda$'s would be compounded by the error inherent in propagating
via the velocity-Verlet algorithm ($\Delta t^4$). The Verlet propagation
error is negligible in an unconstrained system, as one is interested in
the statistics of the run, not in the run being numerically exact to the
``true'' integration. This relates back to the ergodic hypothesis that a
time integral of a valid trajectory will still give the correct ensemble
average. However, in the case of constraints, if the equations of motion
leave the ``true'' trajectory, they are departing from the constrained
surface. The method used instead is to iteratively solve for
$\lambda(t)$ at each time step.
1748
1749 In {\sc rattle} the equations of motion are modified subject to the
1750 following two constraints:
1751 \begin{align}
1752 \sigma_{ij}[\mathbf{r}(t)] \equiv
1753 [ \mathbf{r}_i(t) - \mathbf{r}_j(t)]^2 - d_{ij}^2 &= 0 %
1754 \label{oopseEq:c1}, \\
1755 %
1756 [\mathbf{\dot{r}}_i(t) - \mathbf{\dot{r}}_j(t)] \cdot
1757 [\mathbf{r}_i(t) - \mathbf{r}_j(t)] &= 0 .\label{oopseEq:c2}
1758 \end{align}
1759 Eq.~\ref{oopseEq:c1} is the set of bond constraints, where $d_{ij}$ is
the constrained distance between atoms $i$ and
$j$. Eq.~\ref{oopseEq:c2} constrains the velocities of $i$ and $j$ to
1762 be perpendicular to the bond vector, so that the bond can neither grow
1763 nor shrink. The constrained dynamics equations become:
1764 \begin{equation}
1765 m_i \mathbf{\ddot{r}}_i = \mathbf{F}_i + \mathbf{\mathcal{G}}_i,
1766 \label{oopseEq:r1}
1767 \end{equation}
where $\mathbf{\mathcal{G}}_i$ is the force of constraint on atom $i$,
defined as:
1770 \begin{equation}
1771 \mathbf{\mathcal{G}}_i = - \sum_j \lambda_{ij}(t)\,\nabla \sigma_{ij}.
1772 \label{oopseEq:r2}
1773 \end{equation}
1774
In velocity-Verlet, if $\Delta t = h$, the propagation can be written:
1776 \begin{align}
1777 \mathbf{r}_i(t+h) &=
\mathbf{r}_i(t) + h\mathbf{\dot{r}}_i(t) +
1779 \frac{h^2}{2m_i}\,\Bigl[ \mathbf{F}_i(t) +
1780 \mathbf{\mathcal{G}}_{Ri}(t) \Bigr] \label{oopseEq:vv1}, \\
1781 %
1782 \mathbf{\dot{r}}_i(t+h) &=
1783 \mathbf{\dot{r}}_i(t) + \frac{h}{2m_i}
1784 \Bigl[ \mathbf{F}_i(t) + \mathbf{\mathcal{G}}_{Ri}(t) +
1785 \mathbf{F}_i(t+h) + \mathbf{\mathcal{G}}_{Vi}(t+h) \Bigr] ,%
1786 \label{oopseEq:vv2}
1787 \end{align}
1788 where:
1789 \begin{align}
1790 \mathbf{\mathcal{G}}_{Ri}(t) &=
1791 -2 \sum_j \lambda_{Rij}(t) \mathbf{r}_{ij}(t) ,\\
1792 %
1793 \mathbf{\mathcal{G}}_{Vi}(t+h) &=
-2 \sum_j \lambda_{Vij}(t+h) \mathbf{r}_{ij}(t+h).
1795 \end{align}
1796 Next, define:
1797 \begin{align}
1798 g_{ij} &= h \lambda_{Rij}(t) ,\\
1799 k_{ij} &= h \lambda_{Vij}(t+h), \\
1800 \mathbf{q}_i &= \mathbf{\dot{r}}_i(t) + \frac{h}{2m_i} \mathbf{F}_i(t)
1801 - \frac{1}{m_i}\sum_j g_{ij}\mathbf{r}_{ij}(t).
1802 \end{align}
1803 Using these definitions, Eq.~\ref{oopseEq:vv1} and \ref{oopseEq:vv2}
1804 can be rewritten as,
1805 \begin{align}
1806 \mathbf{r}_i(t+h) &= \mathbf{r}_i(t) + h \mathbf{q}_i ,\\
1807 %
\mathbf{\dot{r}}_i(t+h) &= \mathbf{q}_i + \frac{h}{2m_i}\mathbf{F}_i(t+h)
1809 -\frac{1}{m_i}\sum_j k_{ij} \mathbf{r}_{ij}(t+h).
1810 \end{align}
1811
1812 To integrate the equations of motion, the {\sc rattle} algorithm first
1813 solves for $\mathbf{r}(t+h)$. Let,
1814 \begin{equation}
\mathbf{q}_i = \mathbf{\dot{r}}_i(t) + \frac{h}{2m_i}\mathbf{F}_i(t).
1816 \end{equation}
1817 Here $\mathbf{q}_i$ corresponds to an initial unconstrained move. Next
1818 pick a constraint $j$, and let,
1819 \begin{equation}
\mathbf{s} = \mathbf{r}_i(t) + h\mathbf{q}_i
- \mathbf{r}_j(t) - h\mathbf{q}_j.
1822 \label{oopseEq:ra1}
1823 \end{equation}
1824 If
1825 \begin{equation}
1826 \Big| |\mathbf{s}|^2 - d_{ij}^2 \Big| > \text{tolerance},
1827 \end{equation}
1828 then the constraint is unsatisfied, and corrections are made to the
1829 positions. First we define a test corrected configuration as,
1830 \begin{align}
\mathbf{r}_i^T(t+h) &= \mathbf{r}_i(t) + h\biggl[\mathbf{q}_i -
1832 g_{ij}\,\frac{\mathbf{r}_{ij}(t)}{m_i} \biggr] ,\\
1833 %
\mathbf{r}_j^T(t+h) &= \mathbf{r}_j(t) + h\biggl[\mathbf{q}_j +
1835 g_{ij}\,\frac{\mathbf{r}_{ij}(t)}{m_j} \biggr].
1836 \end{align}
We choose $g_{ij}$ such that $|\mathbf{r}_i^T - \mathbf{r}_j^T|^2
= d_{ij}^2$. Solving the quadratic for $g_{ij}$, we obtain the
approximation,
1840 \begin{equation}
1841 g_{ij} = \frac{(s^2 - d^2)}{2h[\mathbf{s}\cdot\mathbf{r}_{ij}(t)]
1842 (\frac{1}{m_i} + \frac{1}{m_j})}.
1843 \end{equation}
Although this is not an exact solution for $g_{ij}$, the overall scheme
is iterative, so the eventual solution will converge. With a trial
1846 $g_{ij}$, the new $\mathbf{q}$'s become,
1847 \begin{align}
1848 \mathbf{q}_i &= \mathbf{q}^{\text{old}}_i - g_{ij}\,
1849 \frac{\mathbf{r}_{ij}(t)}{m_i} ,\\
1850 %
1851 \mathbf{q}_j &= \mathbf{q}^{\text{old}}_j + g_{ij}\,
1852 \frac{\mathbf{r}_{ij}(t)}{m_j} .
1853 \end{align}
1854 The whole algorithm is then repeated from Eq.~\ref{oopseEq:ra1} until
1855 all constraints are satisfied.
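
The position-constraint pass can be summarized by the following sketch,
written for a single list of pairwise distance constraints. The data
layout and names are invented for the example; the actual {\sc oopse}
implementation differs in its details.

\begin{lstlisting}
#include <math.h>

/* Sketch of the rattle position pass.  r0 holds r(t), q the trial
   vectors q_i, and m the masses.  Constraint c joins atoms ia[c] and
   ib[c] at squared distance d2[c].  The new positions are
   r_i(t + h) = r_i(t) + h q_i after the loop converges. */
void rattle_positions(int n_constraints, const int *ia, const int *ib,
                      const double *d2, const double r0[][3],
                      double q[][3], const double *m,
                      double h, double tol, int max_iter)
{
    int done = 0, iter = 0, c, k;

    while (!done && iter++ < max_iter) {
        done = 1;
        for (c = 0; c < n_constraints; c++) {
            int    i = ia[c], j = ib[c];
            double s[3], rij[3], s2 = 0.0, s_dot_r = 0.0, g;

            for (k = 0; k < 3; k++) {
                rij[k] = r0[i][k] - r0[j][k];              /* bond at t  */
                s[k]   = rij[k] + h * (q[i][k] - q[j][k]); /* trial bond */
                s2      += s[k] * s[k];
                s_dot_r += s[k] * rij[k];
            }

            if (fabs(s2 - d2[c]) > tol) {
                /* first-order estimate of g_ij from the text */
                g = (s2 - d2[c]) /
                    (2.0 * h * s_dot_r * (1.0 / m[i] + 1.0 / m[j]));

                for (k = 0; k < 3; k++) {
                    q[i][k] -= g * rij[k] / m[i];
                    q[j][k] += g * rij[k] / m[j];
                }
                done = 0;      /* something moved; sweep again */
            }
        }
    }
}
\end{lstlisting}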
1856
The second step of {\sc rattle} is to update the velocities. The
step starts with,
1859 \begin{equation}
1860 \mathbf{\dot{r}}_i(t+h) = \mathbf{q}_i + \frac{h}{2m_i}\mathbf{F}_i(t+h).
1861 \end{equation}
1862 Next we pick a constraint $j$, and calculate the dot product $\ell$.
1863 \begin{equation}
1864 \ell = \mathbf{r}_{ij}(t+h) \cdot \mathbf{\dot{r}}_{ij}(t+h).
1865 \label{oopseEq:rv1}
1866 \end{equation}
If constraint Eq.~\ref{oopseEq:c2} holds, $\ell$ should be
zero. Therefore, if $|\ell|$ is greater than some tolerance,
corrections are made to the $i$ and $j$ velocities.
1870 \begin{align}
1871 \mathbf{\dot{r}}_i^T &= \mathbf{\dot{r}}_i(t+h) - k_{ij}
\frac{\mathbf{r}_{ij}(t+h)}{m_i}, \\
1873 %
1874 \mathbf{\dot{r}}_j^T &= \mathbf{\dot{r}}_j(t+h) + k_{ij}
\frac{\mathbf{r}_{ij}(t+h)}{m_j}.
1876 \end{align}
As in the previous step, we select a value for $k_{ij}$ such that
1878 $\ell$ is zero.
1879 \begin{equation}
1880 k_{ij} = \frac{\ell}{d^2_{ij}(\frac{1}{m_i} + \frac{1}{m_j})}.
1881 \end{equation}
1882 The test velocities, $\mathbf{\dot{r}}^T_i$ and
1883 $\mathbf{\dot{r}}^T_j$, then replace their respective velocities, and
1884 the algorithm is iterated from Eq.~\ref{oopseEq:rv1} until all
1885 constraints are satisfied.
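
A corresponding sketch of the velocity pass is given below, under the
same caveats (invented names and a simplified data layout).

\begin{lstlisting}
#include <math.h>

/* Sketch of the rattle velocity pass.  r holds the constrained
   positions r(t + h) and v the velocities being corrected; constraint
   c joins atoms ia[c] and ib[c] at squared distance d2[c]. */
void rattle_velocities(int n_constraints, const int *ia, const int *ib,
                       const double *d2, const double r[][3],
                       double v[][3], const double *m,
                       double tol, int max_iter)
{
    int done = 0, iter = 0, c, k;

    while (!done && iter++ < max_iter) {
        done = 1;
        for (c = 0; c < n_constraints; c++) {
            int    i = ia[c], j = ib[c];
            double rij[3], ell = 0.0, kij;

            for (k = 0; k < 3; k++) {
                rij[k] = r[i][k] - r[j][k];
                ell   += rij[k] * (v[i][k] - v[j][k]);  /* r_ij . v_ij */
            }

            if (fabs(ell) > tol) {
                kij = ell / (d2[c] * (1.0 / m[i] + 1.0 / m[j]));
                for (k = 0; k < 3; k++) {
                    v[i][k] -= kij * rij[k] / m[i];
                    v[j][k] += kij * rij[k] / m[j];
                }
                done = 0;
            }
        }
    }
}
\end{lstlisting}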
1886
1887
1888 \subsection{\label{oopseSec:zcons}Z-Constraint Method}
1889
1890 Based on the fluctuation-dissipation theorem, a force auto-correlation
1891 method was developed by Roux and Karplus to investigate the dynamics
1892 of ions inside ion channels.\cite{Roux91} The time-dependent friction
coefficient can be calculated from the deviation of the instantaneous
force from the mean force:
1895 \begin{equation}
1896 \xi(z,t)=\langle\delta F(z,t)\delta F(z,0)\rangle/k_{B}T,
1897 \end{equation}
1898 where%
1899 \begin{equation}
1900 \delta F(z,t)=F(z,t)-\langle F(z,t)\rangle.
1901 \end{equation}
1902
1903
1904 If the time-dependent friction decays rapidly, the static friction
1905 coefficient can be approximated by
1906 \begin{equation}
1907 \xi_{\text{static}}(z)=\int_{0}^{\infty}\langle\delta F(z,t)\delta F(z,0)\rangle dt.
1908 \end{equation}
This allows the diffusion constant to be calculated through the
Einstein relation:\cite{Marrink94}
1911 \begin{equation}
1912 D(z)=\frac{k_{B}T}{\xi_{\text{static}}(z)}=\frac{(k_{B}T)^{2}}{\int_{0}^{\infty
1913 }\langle\delta F(z,t)\delta F(z,0)\rangle dt}.%
1914 \end{equation}
1915
The Z-Constraint method, which fixes the z coordinates of the
molecules with respect to the center of mass of the system, has been
suggested as a way to obtain the forces required for the force
auto-correlation calculation.\cite{Marrink94} However, simply resetting the
coordinate will move the center of mass of the whole system. To
avoid this problem, a new method is used in {\sc oopse}. Instead of
resetting the coordinates, we reset the forces on the z-constrained
molecules and subtract the total constraint forces from the
rest of the system after the force calculation at each time step.
1925
1926 After the force calculation, define $G_\alpha$ as
1927 \begin{equation}
1928 G_{\alpha} = \sum_i F_{\alpha i},
1929 \label{oopseEq:zc1}
1930 \end{equation}
where $F_{\alpha i}$ is the force in the z direction on atom $i$ in
z-constrained molecule $\alpha$. The forces on the z-constrained
molecule are then set to:
1934 \begin{equation}
1935 F_{\alpha i} = F_{\alpha i} -
1936 \frac{m_{\alpha i} G_{\alpha}}{\sum_i m_{\alpha i}}.
1937 \end{equation}
1938 Here, $m_{\alpha i}$ is the mass of atom $i$ in the z-constrained
1939 molecule. Having rescaled the forces, the velocities must also be
1940 rescaled to subtract out any center of mass velocity in the z
1941 direction.
1942 \begin{equation}
1943 v_{\alpha i} = v_{\alpha i} -
1944 \frac{\sum_i m_{\alpha i} v_{\alpha i}}{\sum_i m_{\alpha i}},
1945 \end{equation}
where $v_{\alpha i}$ is the velocity of atom $i$ in the z direction.
Lastly, all of the accumulated z-constraint forces must be subtracted
from the rest of the system to keep the system center of mass from drifting.
1949 \begin{equation}
1950 F_{\beta i} = F_{\beta i} - \frac{m_{\beta i} \sum_{\alpha} G_{\alpha}}
1951 {\sum_{\beta}\sum_i m_{\beta i}},
1952 \end{equation}
where $\beta$ runs over all of the unconstrained molecules in the
system. Similarly, the velocities of the unconstrained molecules must
1955 also be scaled.
1956 \begin{equation}
1957 v_{\beta i} = v_{\beta i} + \sum_{\alpha}
1958 \frac{\sum_i m_{\alpha i} v_{\alpha i}}{\sum_i m_{\alpha i}}.
1959 \end{equation}
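
The force and velocity resets described above can be sketched as
follows. The routines below are illustrative only (the atom indexing
and names are invented); the actual bookkeeping in {\sc oopse} is more
involved.

\begin{lstlisting}
/* Remove the net z force and the z center-of-mass velocity from one
   z-constrained molecule.  zmol lists its n atoms; fz, vz, and m are
   per-atom z forces, z velocities, and masses.  Returns G_alpha. */
double zconstrain_molecule(int n, const int *zmol,
                           double *fz, double *vz, const double *m)
{
    double G = 0.0, P = 0.0, M = 0.0;
    int a;

    for (a = 0; a < n; a++) {
        G += fz[zmol[a]];
        P += m[zmol[a]] * vz[zmol[a]];
        M += m[zmol[a]];
    }
    for (a = 0; a < n; a++) {
        fz[zmol[a]] -= m[zmol[a]] * G / M;  /* mass-weighted reset   */
        vz[zmol[a]] -= P / M;               /* remove z c.o.m. drift */
    }
    return G;
}

/* Subtract the accumulated constraint forces from the atoms of the
   unconstrained molecules so the system center of mass is fixed. */
void zconstrain_rest(int n_free, const int *free_atoms,
                     double *fz, const double *m, double G_total)
{
    double M_free = 0.0;
    int a;

    for (a = 0; a < n_free; a++)
        M_free += m[free_atoms[a]];
    for (a = 0; a < n_free; a++)
        fz[free_atoms[a]] -= m[free_atoms[a]] * G_total / M_free;
}
\end{lstlisting}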
1960
1961 At the very beginning of the simulation, the molecules may not be at their
1962 constrained positions. To move a z-constrained molecule to its specified
1963 position, a simple harmonic potential is used
1964 \begin{equation}
1965 U(t)=\frac{1}{2}k_{\text{Harmonic}}(z(t)-z_{\text{cons}})^{2},%
1966 \end{equation}
1967 where $k_{\text{Harmonic}}$ is the harmonic force constant, $z(t)$ is the
1968 current $z$ coordinate of the center of mass of the constrained molecule, and
1969 $z_{\text{cons}}$ is the constrained position. The harmonic force operating
1970 on the z-constrained molecule at time $t$ can be calculated by
1971 \begin{equation}
1972 F_{z_{\text{Harmonic}}}(t)=-\frac{\partial U(t)}{\partial z(t)}=
1973 -k_{\text{Harmonic}}(z(t)-z_{\text{cons}}).
1974 \end{equation}
1975
1976 \section{\label{oopseSec:props}Trajectory Analysis}
1977
1978 \subsection{\label{oopseSec:staticProps}Static Property Analysis}
1979
The static properties of the trajectories are analyzed with the
program \texttt{staticProps}. The code is capable of calculating a
number of pair correlations between species A and B, some of which
apply only to directional entities. A summary of the pair correlations
can be found in Table~\ref{oopseTb:gofrs}.
1985
1986 \begin{table}
1987 \caption[The list of pair correlations in \texttt{staticProps}]{THE DIFFERENT PAIR CORRELATIONS IN \texttt{staticProps}}
1988 \label{oopseTb:gofrs}
1989 \begin{center}
1990 \begin{tabular}{|l|c|c|}
1991 \hline
1992 Name & Equation & Directional Atom \\ \hline
1993 $g_{\text{AB}}(r)$ & Eq.~\ref{eq:gofr} & neither \\ \hline
1994 $g_{\text{AB}}(r, \cos \theta)$ & Eq.~\ref{eq:gofrCosTheta} & A \\ \hline
1995 $g_{\text{AB}}(r, \cos \omega)$ & Eq.~\ref{eq:gofrCosOmega} & both \\ \hline
1996 $g_{\text{AB}}(x, y, z)$ & Eq.~\ref{eq:gofrXYZ} & neither \\ \hline
1997 $\langle \cos \omega \rangle_{\text{AB}}(r)$ & Eq.~\ref{eq:cosOmegaOfR} &%
1998 both \\ \hline
1999 \end{tabular}
2000 \begin{minipage}{\linewidth}
2001 \centering
2002 \vspace{2mm}
The third column specifies which atoms, if any, must be directional entities.
2004 \end{minipage}
2005 \end{center}
2006 \end{table}
2007
2008 The first pair correlation, $g_{\text{AB}}(r)$, is defined as follows:
2009 \begin{equation}
2010 g_{\text{AB}}(r) = \frac{V}{N_{\text{A}}N_{\text{B}}}\langle %%
2011 \sum_{i \in \text{A}} \sum_{j \in \text{B}} %%
2012 \delta( r - |\mathbf{r}_{ij}|) \rangle, \label{eq:gofr}
2013 \end{equation}
2014 where $\mathbf{r}_{ij}$ is the vector
2015 \begin{equation*}
2016 \mathbf{r}_{ij} = \mathbf{r}_j - \mathbf{r}_i, \notag
2017 \end{equation*}
2018 and $\frac{V}{N_{\text{A}}N_{\text{B}}}$ normalizes the average over
2019 the expected pair density at a given $r$.
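
In practice the delta function in Eq.~\ref{eq:gofr} is replaced by a
histogram bin of finite width. The following sketch of the pair loop is
an illustration (with invented names), not the \texttt{staticProps}
source; periodic boundary corrections and the final normalization are
omitted.

\begin{lstlisting}
#include <math.h>

/* Sketch of the pair histogram underlying g_AB(r).  rA and rB hold
   the positions of species A and B; nbins bins of width dr cover the
   range 0..rmax. */
void accumulate_gofr(int nA, const double rA[][3],
                     int nB, const double rB[][3],
                     int nbins, double rmax, double *hist)
{
    double dr = rmax / nbins;
    int    i, j, k;

    for (i = 0; i < nA; i++)
        for (j = 0; j < nB; j++) {
            double r2 = 0.0;
            for (k = 0; k < 3; k++) {
                double d = rB[j][k] - rA[i][k];   /* r_ij = r_j - r_i */
                r2 += d * d;
            }
            if (r2 < rmax * rmax)
                hist[(int)(sqrt(r2) / dr)] += 1.0;
        }
    /* Applying the V / (N_A N_B) prefactor, dividing by the volume of
       each spherical shell, and averaging over frames yields g_AB(r). */
}
\end{lstlisting}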
2020
The next two pair correlations, $g_{\text{AB}}(r, \cos \theta)$ and
$g_{\text{AB}}(r, \cos \omega)$, are similar in that both are two
dimensional histograms. Both use $r$ for the primary axis and a
cosine for the secondary axis ($\cos \theta$ for
Eq.~\ref{eq:gofrCosTheta} and $\cos \omega$ for
Eq.~\ref{eq:gofrCosOmega}). This allows the investigator to
correlate the alignment of directional entities with their
separation. $g_{\text{AB}}(r, \cos \theta)$ is defined as follows:
2029 \begin{equation}
2030 g_{\text{AB}}(r, \cos \theta) = \frac{V}{N_{\text{A}}N_{\text{B}}}\langle
2031 \sum_{i \in \text{A}} \sum_{j \in \text{B}}
2032 \delta( \cos \theta - \cos \theta_{ij})
2033 \delta( r - |\mathbf{r}_{ij}|) \rangle.
2034 \label{eq:gofrCosTheta}
2035 \end{equation}
2036 Here
2037 \begin{equation*}
2038 \cos \theta_{ij} = \mathbf{\hat{i}} \cdot \mathbf{\hat{r}}_{ij},
2039 \end{equation*}
2040 where $\mathbf{\hat{i}}$ is the unit directional vector of species $i$
2041 and $\mathbf{\hat{r}}_{ij}$ is the unit vector associated with vector
2042 $\mathbf{r}_{ij}$.
2043
2044 The second two dimensional histogram is of the form:
2045 \begin{equation}
2046 g_{\text{AB}}(r, \cos \omega) =
2047 \frac{V}{N_{\text{A}}N_{\text{B}}}\langle
2048 \sum_{i \in \text{A}} \sum_{j \in \text{B}}
2049 \delta( \cos \omega - \cos \omega_{ij})
2050 \delta( r - |\mathbf{r}_{ij}|) \rangle. \label{eq:gofrCosOmega}
2051 \end{equation}
2052 Here
2053 \begin{equation*}
2054 \cos \omega_{ij} = \mathbf{\hat{i}} \cdot \mathbf{\hat{j}}.
2055 \end{equation*}
2056 Again, $\mathbf{\hat{i}}$ and $\mathbf{\hat{j}}$ are the unit
2057 directional vectors of species $i$ and $j$.
2058
The static analysis code is also capable of calculating a three
dimensional pair correlation of the form:
2061 \begin{equation}\label{eq:gofrXYZ}
2062 g_{\text{AB}}(x, y, z) =
2063 \frac{V}{N_{\text{A}}N_{\text{B}}}\langle
2064 \sum_{i \in \text{A}} \sum_{j \in \text{B}}
2065 \delta( x - x_{ij})
2066 \delta( y - y_{ij})
2067 \delta( z - z_{ij}) \rangle,
2068 \end{equation}
2069 where $x_{ij}$, $y_{ij}$, and $z_{ij}$ are the $x$, $y$, and $z$
2070 components respectively of vector $\mathbf{r}_{ij}$.
2071
2072 The final pair correlation is similar to
2073 Eq.~\ref{eq:gofrCosOmega}. $\langle \cos \omega
2074 \rangle_{\text{AB}}(r)$ is calculated in the following way:
2075 \begin{equation}\label{eq:cosOmegaOfR}
2076 \langle \cos \omega \rangle_{\text{AB}}(r) =
2077 \langle \sum_{i \in \text{A}} \sum_{j \in \text{B}}
2078 (\cos \omega_{ij}) \delta( r - |\mathbf{r}_{ij}|) \rangle.
2079 \end{equation}
2080 Here $\cos \omega_{ij}$ is defined in the same way as in
Eq.~\ref{eq:gofrCosOmega}. This equation is a one-dimensional pair
correlation that gives the average correlation of two directional
2083 entities as a function of their distance from each other.
2084
2085 \subsection{\label{dynamicProps}Dynamic Property Analysis}
2086
2087 The dynamic properties of a trajectory are calculated with the program
2088 \texttt{dynamicProps}. The program calculates the following properties:
2089 \begin{gather}
2090 \langle | \mathbf{r}(t) - \mathbf{r}(0) |^2 \rangle, \label{eq:rms}\\
2091 \langle \mathbf{v}(t) \cdot \mathbf{v}(0) \rangle, \label{eq:velCorr} \\
2092 \langle \mathbf{j}(t) \cdot \mathbf{j}(0) \rangle. \label{eq:angularVelCorr}
2093 \end{gather}
2094
Eq.~\ref{eq:rms} is the mean square displacement function, which
allows one to observe the average displacement of an atom as a
function of time. The quantity is useful when calculating diffusion
coefficients because of the Einstein relation, which is valid at long
times:\cite{allen87:csl}
2100 \begin{equation}
2101 2tD = \langle | \mathbf{r}(t) - \mathbf{r}(0) |^2 \rangle.
2102 \label{oopseEq:einstein}
2103 \end{equation}
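
A sketch of the accumulation of Eq.~\ref{eq:rms} over multiple time
origins is shown below; it is an illustration with invented names (the
trajectory is assumed to be stored frame-major in memory), not the
\texttt{dynamicProps} source.

\begin{lstlisting}
/* Mean square displacement as a function of lag time.  traj stores
   unwrapped positions frame-major: traj[(t * natoms + i) * 3 + k].
   msd[lag] receives the average over atoms and time origins. */
void mean_square_displacement(int nframes, int natoms,
                              const double *traj, double *msd)
{
    int lag, t0, i, k;

    for (lag = 0; lag < nframes; lag++) {
        double sum = 0.0;
        long   count = 0;

        for (t0 = 0; t0 + lag < nframes; t0++)
            for (i = 0; i < natoms; i++) {
                double d2 = 0.0;
                for (k = 0; k < 3; k++) {
                    double d =
                        traj[((long)(t0 + lag) * natoms + i) * 3 + k]
                      - traj[((long)t0 * natoms + i) * 3 + k];
                    d2 += d * d;
                }
                sum += d2;
                count++;
            }
        msd[lag] = sum / (double)count;
    }
    /* The long-time behavior of msd[] gives the diffusion coefficient
       via the Einstein relation quoted above. */
}
\end{lstlisting}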
2104
Eqs.~\ref{eq:velCorr} and \ref{eq:angularVelCorr} are the translational
velocity and angular velocity correlation functions, respectively. The
2107 latter is only applicable to directional species in the
2108 simulation. The velocity autocorrelation functions are useful when
2109 determining vibrational information about the system of interest.
2110
2111 \section{\label{oopseSec:design}Program Design}
2112
2113 \subsection{\label{sec:architecture} {\sc oopse} Architecture}
2114
The core of {\sc oopse} is divided into two main object libraries:
2116 \texttt{libBASS} and \texttt{libmdtools}. \texttt{libBASS} is the
2117 library developed around the parsing engine and \texttt{libmdtools}
2118 is the software library developed around the simulation engine. These
2119 two libraries are designed to encompass all the basic functions and
2120 tools that {\sc oopse} provides. Utility programs, such as the
2121 property analyzers, need only link against the software libraries to
2122 gain access to parsing, force evaluation, and input / output
2123 routines.
2124
2125 Contained in \texttt{libBASS} are all the routines associated with
2126 reading and parsing the \texttt{.bass} input files. Given a
2127 \texttt{.bass} file, \texttt{libBASS} will open it and any associated
2128 \texttt{.mdl} files; then create structures in memory that are
2129 templates of all the molecules specified in the input files. In
2130 addition, any simulation parameters set in the \texttt{.bass} file
2131 will be placed in a structure for later query by the controlling
2132 program.
2133
Located in \texttt{libmdtools} are all other routines necessary for a
Molecular Dynamics simulation. The library uses the main data
2136 structures returned by \texttt{libBASS} to initialize the various
2137 parts of the simulation: the atom structures and positions, the force
2138 field, the integrator, \emph{et cetera}. After initialization, the
2139 library can be used to perform a variety of tasks: integrate a
2140 Molecular Dynamics trajectory, query phase space information from a
2141 specific frame of a completed trajectory, or even recalculate force or
2142 energetic information about specific frames from a completed
2143 trajectory.
2144
2145 With these core libraries in place, several programs have been
2146 developed to utilize the routines provided by \texttt{libBASS} and
2147 \texttt{libmdtools}. The main program of the package is \texttt{oopse}
2148 and the corresponding parallel version \texttt{oopse\_MPI}. These two
2149 programs will take the \texttt{.bass} file, and create (and integrate)
2150 the simulation specified in the script. The two analysis programs
2151 \texttt{staticProps} and \texttt{dynamicProps} utilize the core
libraries to initialize and read in trajectories from previously
completed simulations; they can also use functionality
from \texttt{libmdtools} to recalculate forces and energies at key
2155 frames in the trajectories. Lastly, the family of system building
2156 programs (Sec.~\ref{oopseSec:initCoords}) also use the libraries to
2157 store and output the system configurations they create.
2158
2159 \subsection{\label{oopseSec:parallelization} Parallelization of {\sc oopse}}
2160
Although processor power is continually growing, roughly following
Moore's Law, it is still unreasonable to simulate systems of more than
1,000 atoms on a single processor. To facilitate the study of larger
systems, or of smaller systems on long time scales, in a reasonable
period of time, parallel methods were developed that allow multiple
CPUs to share the simulation workload. Three general categories of
parallel decomposition methods have been developed: atomic,
spatial, and force decomposition.
2169
The algorithmically simplest of the three methods is atomic
decomposition, in which the $N$ particles in a simulation are split
among $P$ processors for the duration of the simulation. Computational
cost scales as an optimal $\mathcal{O}(N/P)$ for atomic
decomposition. Unfortunately, all
2174 processors must communicate positions and forces with all other
2175 processors at every force evaluation, leading communication costs to
2176 scale as an unfavorable $\mathcal{O}(N)$, \emph{independent of the
2177 number of processors}. This communication bottleneck led to the
2178 development of spatial and force decomposition methods in which
2179 communication among processors scales much more favorably. Spatial or
2180 domain decomposition divides the physical spatial domain into 3D boxes
2181 in which each processor is responsible for calculation of forces and
2182 positions of particles located in its box. Particles are reassigned to
2183 different processors as they move through simulation space. To
2184 calculate forces on a given particle, a processor must know the
2185 positions of particles within some cutoff radius located on nearby
2186 processors instead of the positions of particles on all
2187 processors. Both communication between processors and computation
2188 scale as $\mathcal{O}(N/P)$ in the spatial method. However, spatial
2189 decomposition adds algorithmic complexity to the simulation code and
is not very efficient for small $N$, since the overall communication
scales as the surface-to-volume ratio, $\mathcal{O}\left((N/P)^{2/3}\right)$, in
three dimensions.
2193
2194 The parallelization method used in {\sc oopse} is the force
2195 decomposition method. Force decomposition assigns particles to
2196 processors based on a block decomposition of the force
2197 matrix. Processors are split into an optimally square grid forming row
2198 and column processor groups. Forces are calculated on particles in a
given row by particles located in that processor's column
assignment. Force decomposition is less complex to implement than the
2201 spatial method but still scales computationally as $\mathcal{O}(N/P)$
2202 and scales as $\mathcal{O}(N/\sqrt{P})$ in communication
2203 cost. Plimpton has also found that force decompositions scale more
2204 favorably than spatial decompositions for systems up to 10,000 atoms
2205 and favorably compete with spatial methods up to 100,000
2206 atoms.\cite{plimpton95}
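
The row/column structure of the force decomposition can be illustrated
with a short MPI fragment. It assumes the number of processors is a
perfect square and uses invented names; it is not the communication
code actually used in {\sc oopse}.

\begin{lstlisting}
#include <math.h>
#include <mpi.h>
#include <stdio.h>
#include <stdlib.h>

int main(int argc, char **argv)
{
    int      rank, nprocs, nside, my_row, my_col;
    MPI_Comm row_comm, col_comm;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &nprocs);

    nside = (int)(sqrt((double)nprocs) + 0.5);
    if (nside * nside != nprocs) {
        if (rank == 0)
            fprintf(stderr, "number of processors must be a square\n");
        MPI_Finalize();
        return EXIT_FAILURE;
    }

    my_row = rank / nside;    /* row group of this processor    */
    my_col = rank % nside;    /* column group of this processor */

    /* Row and column communicators: forces on a processor's row atoms
       are evaluated against positions gathered along its column, so
       each collective involves only sqrt(P) processors. */
    MPI_Comm_split(MPI_COMM_WORLD, my_row, rank, &row_comm);
    MPI_Comm_split(MPI_COMM_WORLD, my_col, rank, &col_comm);

    /* ... assign row/column blocks of atoms and evaluate forces ... */

    MPI_Comm_free(&row_comm);
    MPI_Comm_free(&col_comm);
    MPI_Finalize();
    return EXIT_SUCCESS;
}
\end{lstlisting}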
2207
2208 \subsection{\label{oopseSec:memAlloc}Memory Issues in Trajectory Analysis}
2209
2210 For large simulations, the trajectory files can sometimes reach sizes
2211 in excess of several gigabytes. In order to effectively analyze that
2212 amount of data, two memory management schemes have been devised for
2213 \texttt{staticProps} and for \texttt{dynamicProps}. The first scheme,
developed for \texttt{staticProps}, is the simpler of the two. Since each frame's
statistics are calculated independently of the others, memory is
allocated for each frame, then freed once correlation calculations are
2217 complete for the snapshot. To prevent multiple passes through a
2218 potentially large file, \texttt{staticProps} is capable of calculating
2219 all requested correlations per frame with only a single pair loop in
2220 each frame and a single read of the file.
2221
The second, more advanced memory scheme is used by
\texttt{dynamicProps}. Here, the program must have multiple frames in
memory to calculate time-dependent correlations. To prevent the
program from running out of memory on large trajectories, the user is
able to specify that the trajectory be read in blocks. The number of
frames in each block is specified by the user, and upon reading a
block of the trajectory, \texttt{dynamicProps} calculates all of the
time correlation frame pairs within that block. After the in-block
correlations are complete, a second block of the trajectory is read,
and the cross correlations between the two blocks are calculated. The
second block is then freed, the next block is read in its place, and
the process is repeated until the end of the trajectory is
reached. At that point the first block is freed, the next block
becomes the new origin, its internal time correlations are
calculated, and the sweep over the later blocks is repeated. This
continues until all frame pairs have been correlated in time. The
process is illustrated in Fig.~\ref{oopseFig:dynamicPropsMemory}.
2240
2241 \begin{figure}
2242 \centering
2243 \includegraphics[width=\linewidth]{dynamicPropsMem.eps}
2244 \caption[A representation of the block correlations in \texttt{dynamicProps}]{This diagram illustrates the memory management used by \texttt{dynamicProps}, which follows the scheme: $\sum^{N_{\text{memory blocks}}}_{i=1}[ \operatorname{self}(i) + \sum^{N_{\text{memory blocks}}}_{j>i} \operatorname{cross}(i,j)]$. The shaded region represents the self correlation of the memory block, and the open blocks are read one at a time and the cross correlations between blocks are calculated.}
2245 \label{oopseFig:dynamicPropsMemory}
2246 \end{figure}
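
The order of operations in this scheme can be made concrete with a
small skeleton. The correlation routines below are replaced by print
statements so that the sequence of self and cross correlations can be
seen; all names are invented for the example.

\begin{lstlisting}
#include <stdio.h>

static void self_correlation(int i)         { printf("self(%d)\n", i); }
static void cross_correlation(int i, int j) { printf("cross(%d,%d)\n", i, j); }

/* Skeleton of the block scheme used by dynamicProps: each block is
   correlated with itself, then with every later block, before the
   origin moves forward. */
static void correlate_trajectory(int n_blocks)
{
    int i, j;

    for (i = 0; i < n_blocks; i++) {
        /* read block i, correlate frame pairs inside it */
        self_correlation(i);

        /* read each later block one at a time, correlate it against
           block i, and free it before the next block is read */
        for (j = i + 1; j < n_blocks; j++)
            cross_correlation(i, j);

        /* free block i; block i+1 becomes the new origin */
    }
}

int main(void)
{
    correlate_trajectory(4);   /* e.g., a trajectory split into 4 blocks */
    return 0;
}
\end{lstlisting}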
2247
2248 \section{\label{oopseSec:conclusion}Conclusion}
2249
We have presented the design and implementation of our open-source
simulation package {\sc oopse}. The package offers novel capabilities
to the field of Molecular Dynamics simulation in the form of
dipolar force fields and symplectic integration of rigid body
dynamics. It is capable of scaling across multiple processors through
the use of force-based decomposition with MPI. It also implements
several advanced integrators that allow the end user control over
temperature and pressure. In addition, it is capable of integrating
constrained dynamics through both the {\sc rattle} algorithm and the
z-constraint method.
2260
2261 These features are all brought together in a single open-source
program. This allows researchers not only to benefit from
{\sc oopse}, but also to contribute to its development.
2265