\chapter{\label{chapt:introduction}INTRODUCTION AND THEORETICAL BACKGROUND}

\section{\label{introSection:classicalMechanics}Classical
Mechanics}

Molecular Dynamics simulations are carried out by integrating
equations of motion, derived from Classical Mechanics, for a given
system of particles. There are three fundamental ideas behind
classical mechanics. Firstly, one can determine the state of a
mechanical system at any time of interest. Secondly, all the
mechanical properties of the system at that time can be determined
by combining the knowledge of the properties of the system with the
specification of this state. Finally, the specification of the
state, when further combined with the laws of mechanics, is
sufficient to predict the future behavior of the system.

\subsection{\label{introSection:newtonian}Newtonian Mechanics}
The discovery of Newton's three laws of motion, which govern the
motion of particles, is the foundation of classical mechanics.
Newton's first law defines a class of inertial frames. Inertial
frames are reference frames in which a particle not interacting with
other bodies moves with constant speed in the same direction. With
respect to inertial frames, Newton's second law has the form
\begin{equation}
F = \frac{dp}{dt} = m\frac{dv}{dt}.
\label{introEquation:newtonSecondLaw}
\end{equation}
A point mass interacting with other bodies moves with an
acceleration along the direction of the force acting on it. Let
$F_{ij}$ be the force that particle $i$ exerts on particle $j$, and
$F_{ji}$ be the force that particle $j$ exerts on particle $i$.
Newton's third law states that
\begin{equation}
F_{ij} = -F_{ji}.
\label{introEquation:newtonThirdLaw}
\end{equation}
The conservation laws of Newtonian Mechanics play very important
roles in solving mechanics problems. The linear momentum of a
particle is conserved if it is free, \textit{i.e.}, if it
experiences no force. The second conservation theorem concerns the
angular momentum of a particle. The angular momentum $L$ of a
particle with respect to an origin from which $r$ is measured is
defined to be
\begin{equation}
L \equiv r \times p. \label{introEquation:angularMomentumDefinition}
\end{equation}
The torque $\tau$ with respect to the same origin is defined to be
\begin{equation}
\tau \equiv r \times F. \label{introEquation:torqueDefinition}
\end{equation}
Differentiating Eq.~\ref{introEquation:angularMomentumDefinition},
\[
\dot L = \frac{d}{{dt}}(r \times p) = (\dot r \times p) + (r \times
\dot p)
\]
and since
\[
\dot r \times p = \dot r \times mv = m\dot r \times \dot r \equiv 0,
\]
we have
\begin{equation}
\dot L = r \times \dot p = \tau.
\end{equation}
If there are no external torques acting on a body, its angular
momentum is conserved. The last conservation theorem states that if
all forces are conservative, the total energy is conserved,
\begin{equation}E = T + V. \label{introEquation:energyConservation}
\end{equation}
All of these conserved quantities are important factors in
determining the quality of numerical integration schemes for rigid
bodies.\cite{Dullweber1997}

\subsection{\label{introSection:lagrangian}Lagrangian Mechanics}

Newtonian Mechanics suffers from an important limitation: motion can
only be described in Cartesian coordinate systems, which makes it
impossible to predict analytically the properties of the system even
if we know all of the details of the interaction. In order to
overcome some of the practical difficulties which arise in attempts
to apply Newton's equations to complex systems, approximate
numerical procedures may be developed.

\subsubsection{\label{introSection:halmiltonPrinciple}\textbf{Hamilton's
Principle}}

Hamilton introduced the dynamical principle upon which it is
possible to base all of mechanics and most of classical physics.
Hamilton's Principle may be stated as follows: the trajectory, along
which a dynamical system may move from one point to another within a
specified time, is derived by finding the path which makes the time
integral of the difference between the kinetic energy $K$ and the
potential energy $U$ stationary,
\begin{equation}
\delta \int_{t_1 }^{t_2 } {(K - U)dt = 0}.
\label{introEquation:halmitonianPrinciple1}
\end{equation}
For simple mechanical systems, where the forces acting on the
different parts are derivable from a potential, the Lagrangian
function $L$ can be defined as the difference between the kinetic
energy of the system and its potential energy,
\begin{equation}
L \equiv K - U = L(q_i ,\dot q_i ).
\label{introEquation:lagrangianDef}
\end{equation}
Thus, Eq.~\ref{introEquation:halmitonianPrinciple1} becomes
\begin{equation}
\delta \int_{t_1 }^{t_2 } {L dt = 0} .
\label{introEquation:halmitonianPrinciple2}
\end{equation}

\subsubsection{\label{introSection:equationOfMotionLagrangian}\textbf{The
Equations of Motion in Lagrangian Mechanics}}

For a system with $f$ degrees of freedom, the equations of motion in
Lagrangian form are
\begin{equation}
\frac{d}{{dt}}\frac{{\partial L}}{{\partial \dot q_i }} -
\frac{{\partial L}}{{\partial q_i }} = 0,{\rm{ }}i = 1, \ldots,f
\label{introEquation:eqMotionLagrangian}
\end{equation}
where $q_{i}$ is a generalized coordinate and $\dot{q_{i}}$ is the
corresponding generalized velocity.

\subsection{\label{introSection:hamiltonian}Hamiltonian Mechanics}

Arising from Lagrangian Mechanics, Hamiltonian Mechanics was
introduced by William Rowan Hamilton in 1833 as a re-formulation of
classical mechanics. If the potential energy of a system is
independent of velocities, the momenta can be defined as
\begin{equation}
p_i = \frac{\partial L}{\partial \dot q_i}.
\label{introEquation:generalizedMomenta}
\end{equation}
The Lagrange equations of motion are then expressed by
\begin{equation}
\dot p_i = \frac{{\partial L}}{{\partial q_i }}.
\label{introEquation:generalizedMomentaDot}
\end{equation}
With the help of the generalized momenta, we may now define a new
quantity $H$ by the equation
\begin{equation}
H = \sum\limits_k {p_k \dot q_k } - L ,
\label{introEquation:hamiltonianDefByLagrangian}
\end{equation}
where $ \dot q_1 \ldots \dot q_f $ are generalized velocities and
$L$ is the Lagrangian function for the system. Differentiating
Eq.~\ref{introEquation:hamiltonianDefByLagrangian}, one can obtain
\begin{equation}
dH = \sum\limits_k {\left( {p_k d\dot q_k + \dot q_k dp_k -
\frac{{\partial L}}{{\partial q_k }}dq_k - \frac{{\partial
L}}{{\partial \dot q_k }}d\dot q_k } \right)} - \frac{{\partial
L}}{{\partial t}}dt . \label{introEquation:diffHamiltonian1}
\end{equation}
Making use of Eq.~\ref{introEquation:generalizedMomenta}, the first
and fourth terms in the parentheses cancel, while
Eq.~\ref{introEquation:generalizedMomentaDot} allows the third term
to be written as $-\dot p_k dq_k$. Therefore,
Eq.~\ref{introEquation:diffHamiltonian1} can be rewritten as
\begin{equation}
dH = \sum\limits_k {\left( {\dot q_k dp_k - \dot p_k dq_k }
\right)} - \frac{{\partial L}}{{\partial t}}dt .
\label{introEquation:diffHamiltonian2}
\end{equation}
By identifying the coefficients of $dq_k$, $dp_k$ and $dt$, we find
\begin{equation}
\frac{{\partial H}}{{\partial p_k }} = \dot {q_k},
\label{introEquation:motionHamiltonianCoordinate}
\end{equation}
\begin{equation}
\frac{{\partial H}}{{\partial q_k }} = - \dot {p_k},
\label{introEquation:motionHamiltonianMomentum}
\end{equation}
and
\begin{equation}
\frac{{\partial H}}{{\partial t}} = - \frac{{\partial L}}{{\partial
t}},
\label{introEquation:motionHamiltonianTime}
\end{equation}
where Eq.~\ref{introEquation:motionHamiltonianCoordinate} and
Eq.~\ref{introEquation:motionHamiltonianMomentum} are Hamilton's
equations of motion. Due to their symmetrical form, they are also
known as the canonical equations of motion.\cite{Goldstein2001}

An important difference between the Lagrangian approach and the
Hamiltonian approach is that the Lagrangian is considered to be a
function of the generalized velocities $\dot q_i$ and coordinates
$q_i$, while the Hamiltonian is considered to be a function of the
generalized momenta $p_i$ and the conjugate coordinates $q_i$.
Hamiltonian Mechanics is more appropriate for application to
statistical mechanics and quantum mechanics, since it treats the
coordinates and their conjugate momenta as independent variables and
it only works with 1st-order differential equations.\cite{Marion1990}
In Newtonian Mechanics, a system described by conservative forces
conserves the total energy
(Eq.~\ref{introEquation:energyConservation}). It follows that
Hamilton's equations of motion conserve the total Hamiltonian
\begin{equation}
\frac{{dH}}{{dt}} = \sum\limits_i {\left( {\frac{{\partial
H}}{{\partial q_i }}\dot q_i + \frac{{\partial H}}{{\partial p_i
}}\dot p_i } \right)} = \sum\limits_i {\left( {\frac{{\partial
H}}{{\partial q_i }}\frac{{\partial H}}{{\partial p_i }} -
\frac{{\partial H}}{{\partial p_i }}\frac{{\partial H}}{{\partial
q_i }}} \right) = 0}. \label{introEquation:conserveHalmitonian}
\end{equation}
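
As a concrete illustration of the canonical equations, the following
minimal Python sketch (not part of this dissertation; the harmonic
force constant \texttt{k} and the symplectic Euler update are
illustrative assumptions) integrates $\dot q = \partial H/\partial p$
and $\dot p = -\partial H/\partial q$ for a one-dimensional harmonic
oscillator and monitors the drift of the total Hamiltonian:
\begin{verbatim}
# Hamilton's equations for H = p^2/(2m) + k q^2/2, integrated
# with the symplectic Euler method (an illustrative choice).
m, k, dt = 1.0, 1.0, 0.01
q, p = 1.0, 0.0
H0 = p**2/(2*m) + 0.5*k*q**2
for step in range(10000):
    p += -k*q*dt          # dp/dt = -dH/dq
    q += (p/m)*dt         # dq/dt =  dH/dp
H = p**2/(2*m) + 0.5*k*q**2
print(abs(H - H0))        # stays small: H is (nearly) conserved
\end{verbatim}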

\section{\label{introSection:statisticalMechanics}Statistical
Mechanics}

The thermodynamic behavior and properties of a Molecular Dynamics
simulation are governed by the principles of Statistical Mechanics.
The following section gives a brief introduction to some of the
Statistical Mechanics concepts and theorems presented in this
dissertation.

\subsection{\label{introSection:ensemble}Phase Space and Ensemble}

Mathematically, phase space is the space which represents all
possible states of a system. Each possible state of the system
corresponds to one unique point in the phase space. For mechanical
systems, the phase space usually consists of all possible values of
the position and momentum variables. Consider a dynamical system of
$f$ particles in Cartesian space, where each of the $6f$ coordinates
and momenta is assigned to one of $6f$ mutually orthogonal axes; the
phase space of this system is then a $6f$-dimensional space. A
point, $x = (\vec q_1 , \ldots ,\vec q_f ,\vec p_1 , \ldots ,\vec
p_f )$, with a unique set of values of the $6f$ coordinates and
momenta, is a phase space vector.

In statistical mechanics, the condition of an ensemble at any time
can be regarded as appropriately specified by the density $\rho$
with which representative points are distributed over the phase
space. The density distribution for an ensemble with $f$ degrees of
freedom is defined as,
\begin{equation}
\rho = \rho (q_1 , \ldots ,q_f ,p_1 , \ldots ,p_f ,t).
\label{introEquation:densityDistribution}
\end{equation}
Governed by the principles of mechanics, the phase points change
their locations, which changes the density at any point of phase
space over time. Hence, the density distribution is also to be taken
as a function of the time. The number of systems $\delta N$ in a
volume element of phase space at time $t$ can be determined by,
\begin{equation}
\delta N = \rho (q,p,t)dq_1 \ldots dq_f dp_1 \ldots dp_f.
\label{introEquation:deltaN}
\end{equation}
Assuming a large enough number of copies of the system, we can
sufficiently approximate $\delta N$ without introducing a
discontinuity when we go from one region of the phase space to
another. Integrating over the whole phase space,
\begin{equation}
N = \int { \ldots \int {\rho (q,p,t)dq_1 } \ldots dq_f dp_1 } \ldots dp_f
\label{introEquation:totalNumberSystem}
\end{equation}
gives us an expression for the total number of copies. Hence, the
probability per unit volume in the phase space can be obtained by,
\begin{equation}
\frac{{\rho (q,p,t)}}{N} = \frac{{\rho (q,p,t)}}{{\int { \ldots \int
{\rho (q,p,t)dq_1 } \ldots dq_f dp_1 } \ldots dp_f }}.
\label{introEquation:unitProbability}
\end{equation}
With the help of Eq.~\ref{introEquation:unitProbability} and
knowledge of the system, it is possible to calculate the average
value of any desired quantity which depends on the coordinates and
momenta of the system. Even when the dynamics of the real system are
complex, or stochastic, or even discontinuous, the average
properties of the ensemble of possibilities as a whole remain well
defined. For a classical system in thermal equilibrium with its
environment, the ensemble average of a mechanical quantity, $\langle
A(q , p) \rangle_t$, takes the form of an integral over the phase
space of the system,
\begin{equation}
\langle A(q , p) \rangle_t = \frac{{\int { \ldots \int {A(q,p)\rho
(q,p,t)dq_1 } \ldots dq_f dp_1 } \ldots dp_f }}{{\int { \ldots \int {\rho
(q,p,t)dq_1 } \ldots dq_f dp_1 } \ldots dp_f }}.
\label{introEquation:ensembelAverage}
\end{equation}
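
The ensemble average in Eq.~\ref{introEquation:ensembelAverage} can
be approximated numerically by replacing the phase-space integrals
with sums over sampled points. The short Python sketch below (an
illustration only, assuming a one-dimensional harmonic Hamiltonian
and a Boltzmann weight $e^{-\beta H}$; it is not code from this
work) estimates $\langle q^2 \rangle$ on a $(q,p)$ grid:
\begin{verbatim}
import math

# Boltzmann-weighted average <A> = sum A*rho / sum rho on a grid,
# with rho ~ exp(-beta*H) for H = p^2/2 + q^2/2 (assumed model).
beta, n, L = 1.0, 200, 6.0
num = den = 0.0
for i in range(n):
    q = -L + 2*L*i/(n - 1)
    for j in range(n):
        p = -L + 2*L*j/(n - 1)
        rho = math.exp(-beta*(0.5*p*p + 0.5*q*q))
        num += q*q*rho
        den += rho
print(num/den)   # ~ kT = 1/beta for this potential
\end{verbatim}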

\subsection{\label{introSection:liouville}Liouville's theorem}

Liouville's theorem is the foundation on which statistical mechanics
rests. It describes the time evolution of the phase space
distribution function. In order to calculate the rate of change of
$\rho$, we begin from Eq.~\ref{introEquation:deltaN}. If we consider
the two faces perpendicular to the $q_1$ axis, which are located at
$q_1$ and $q_1 + \delta q_1$, the number of phase points leaving
through the face at $q_1 + \delta q_1$ is given by the expression,
\begin{equation}
\left( {\rho + \frac{{\partial \rho }}{{\partial q_1 }}\delta q_1 }
\right)\left( {\dot q_1 + \frac{{\partial \dot q_1 }}{{\partial q_1
}}\delta q_1 } \right)\delta q_2 \ldots \delta q_f \delta p_1
\ldots \delta p_f .
\end{equation}
Summing the corresponding contributions over all coordinates and
momenta, we obtain
\begin{equation}
\frac{{d(\delta N)}}{{dt}} = - \sum\limits_{i = 1}^f {\left[ {\rho
\left( {\frac{{\partial \dot q_i }}{{\partial q_i }} +
\frac{{\partial \dot p_i }}{{\partial p_i }}} \right) + \left(
{\frac{{\partial \rho }}{{\partial q_i }}\dot q_i + \frac{{\partial
\rho }}{{\partial p_i }}\dot p_i } \right)} \right]} \delta q_1
\ldots \delta q_f \delta p_1 \ldots \delta p_f .
\end{equation}
Differentiating the equations of motion in the Hamiltonian formalism
(\ref{introEquation:motionHamiltonianCoordinate},
\ref{introEquation:motionHamiltonianMomentum}), we can show,
\begin{equation}
\sum\limits_i {\left( {\frac{{\partial \dot q_i }}{{\partial q_i }}
+ \frac{{\partial \dot p_i }}{{\partial p_i }}} \right)} = 0 ,
\end{equation}
which cancels the first term on the right hand side. Furthermore,
dividing both sides by $ \delta q_1 \ldots \delta q_f \delta p_1
\ldots \delta p_f $, we can write Liouville's theorem in a simple
form,
\begin{equation}
\frac{{\partial \rho }}{{\partial t}} + \sum\limits_{i = 1}^f
{\left( {\frac{{\partial \rho }}{{\partial q_i }}\dot q_i +
\frac{{\partial \rho }}{{\partial p_i }}\dot p_i } \right)} = 0 .
\label{introEquation:liouvilleTheorem}
\end{equation}
Liouville's theorem states that the distribution function is
constant along any trajectory in phase space. In classical
statistical mechanics, since the number of system copies in an
ensemble is huge and constant, we can assume the local density has
no reason (other than classical mechanics) to change,
\begin{equation}
\frac{{\partial \rho }}{{\partial t}} = 0.
\label{introEquation:stationary}
\end{equation}
In such a stationary system, the density of distribution $\rho$ can
be connected to the Hamiltonian $H$ through the Maxwell-Boltzmann
distribution,
\begin{equation}
\rho \propto e^{ - \beta H}.
\label{introEquation:densityAndHamiltonian}
\end{equation}

\subsubsection{\label{introSection:phaseSpaceConservation}\textbf{Conservation of Phase Space}}
Let us consider a region in the phase space,
\begin{equation}
\delta v = \int { \ldots \int {dq_1 } \ldots dq_f dp_1 } \ldots dp_f .
\end{equation}
If this region is small enough, the density $\rho$ can be regarded
as uniform over the whole integral. Thus, the number of phase points
inside this region is given by,
\begin{equation}
\delta N = \rho \delta v = \rho \int { \ldots \int {dq_1 } \ldots dq_f
dp_1 } \ldots dp_f.
\end{equation}
Since the number of phase points in a given region of phase space is
conserved, differentiating with respect to time gives
\begin{equation}
\frac{{d(\delta N)}}{{dt}} = \frac{{d\rho }}{{dt}}\delta v + \rho
\frac{d}{{dt}}(\delta v) = 0.
\end{equation}
With the help of the stationary assumption
(Eq.~\ref{introEquation:stationary}), we obtain the principle of
\emph{conservation of volume in phase space},
\begin{equation}
\frac{d}{{dt}}(\delta v) = \frac{d}{{dt}}\int { \ldots \int {dq_1 }
\ldots dq_f dp_1 } \ldots dp_f = 0.
\label{introEquation:volumePreserving}
\end{equation}

\subsubsection{\label{introSection:liouvilleInOtherForms}\textbf{Liouville's Theorem in Other Forms}}

Liouville's theorem can be expressed in a variety of different forms
which are convenient within different contexts. For any two
functions $F$ and $G$ of the coordinates and momenta of a system,
the Poisson bracket $\{F,G\}$ is defined as
\begin{equation}
\left\{ {F,G} \right\} = \sum\limits_i {\left( {\frac{{\partial
F}}{{\partial q_i }}\frac{{\partial G}}{{\partial p_i }} -
\frac{{\partial F}}{{\partial p_i }}\frac{{\partial G}}{{\partial
q_i }}} \right)}.
\label{introEquation:poissonBracket}
\end{equation}
Substituting the equations of motion in the Hamiltonian formalism
(Eq.~\ref{introEquation:motionHamiltonianCoordinate},
Eq.~\ref{introEquation:motionHamiltonianMomentum}) into
Eq.~\ref{introEquation:liouvilleTheorem}, we can rewrite
Liouville's theorem using Poisson bracket notation,
\begin{equation}
\left( {\frac{{\partial \rho }}{{\partial t}}} \right) = - \left\{
{\rho ,H} \right\}.
\label{introEquation:liouvilleTheromInPoissin}
\end{equation}
Moreover, the Liouville operator is defined as
\begin{equation}
iL = \sum\limits_{i = 1}^f {\left( {\frac{{\partial H}}{{\partial
p_i }}\frac{\partial }{{\partial q_i }} - \frac{{\partial
H}}{{\partial q_i }}\frac{\partial }{{\partial p_i }}} \right)}.
\label{introEquation:liouvilleOperator}
\end{equation}
In terms of the Liouville operator, Liouville's equation can also be
expressed as
\begin{equation}
\left( {\frac{{\partial \rho }}{{\partial t}}} \right) = - iL\rho ,
\label{introEquation:liouvilleTheoremInOperator}
\end{equation}
which can help define a propagator $\rho (t) = e^{-iLt} \rho (0)$.

\subsection{\label{introSection:ergodic}The Ergodic Hypothesis}

Various thermodynamic properties can be calculated from a Molecular
Dynamics simulation. By comparing experimental values with the
calculated properties, one can determine the accuracy of the
simulation and the quality of the underlying model. However, both
experiments and computer simulations are usually performed during a
certain time interval, and the measurements are averaged over a
period of time, whereas Statistical Mechanics describes the average
behavior of a many-body system over an ensemble. Fortunately, the
Ergodic Hypothesis makes a connection between the time average and
the ensemble average. It states that the time average and the
average over the statistical ensemble are
identical:\cite{Frenkel1996, Leach2001}
\begin{equation}
\langle A(q , p) \rangle_t = \mathop {\lim }\limits_{t \to \infty }
\frac{1}{t}\int\limits_0^t {A(q(t),p(t))dt} = \int\limits_\Gamma
{A(q,p)\rho (q,p)dqdp},
\end{equation}
where $\langle A(q , p) \rangle_t$ is the equilibrium value of a
physical quantity and $\rho (q,p)$ is the equilibrium distribution
function. If an observation is averaged over a sufficiently long
time (longer than the relaxation time), all accessible microstates
in phase space are assumed to be equally probed, giving a properly
weighted statistical average. This allows the researcher freedom of
choice when deciding how best to measure a given observable. If an
ensemble-averaged approach sounds most reasonable, Monte Carlo
methods\cite{Metropolis1949} can be utilized; if the system lends
itself to a time-averaging approach, the Molecular Dynamics
techniques in Sec.~\ref{introSection:molecularDynamics} will be the
best choice.\cite{Frenkel1996}
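
To make this connection concrete, the following Python sketch
(illustrative only; the harmonic oscillator and the crude
Langevin-style thermostat are assumptions made here, not methods of
this work) compares the running time average of $q^2$ along a
trajectory with the ensemble value $k_B T$:
\begin{verbatim}
import math, random

# Time average of q^2 along a thermostatted trajectory, compared
# with the ensemble value kT (assumed model: unit-mass harmonic
# oscillator with simple Langevin momentum resampling).
kT, gamma, dt = 1.0, 1.0, 0.01
c = math.exp(-gamma*dt)
q, p, acc = 1.0, 0.0, 0.0
nsteps = 200000
for step in range(nsteps):
    p = c*p + math.sqrt(kT*(1 - c*c))*random.gauss(0.0, 1.0)
    p -= q*dt            # force -q for V = q^2/2
    q += p*dt
    acc += q*q
print(acc/nsteps, "vs ensemble value", kT)
\end{verbatim}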

\section{\label{introSection:geometricIntegratos}Geometric Integrators}
A variety of numerical integrators have been proposed to simulate
the motions of atoms in MD simulations. They usually begin with
initial conditions and move the objects in the direction governed by
the differential equations. However, most of them ignore the hidden
physical laws contained within the equations. Since 1990, geometric
integrators, which preserve various phase-flow invariants such as
the symplectic structure, volume and time reversal symmetry, have
been developed to address this issue.\cite{Dullweber1997,
McLachlan1998, Leimkuhler1999} The velocity Verlet method, which
happens to be a simple example of a symplectic integrator, continues
to gain popularity in the molecular dynamics community. This fact
can be partly explained by its geometric nature.

\subsection{\label{introSection:symplecticManifold}Symplectic Manifolds}
A \emph{manifold} is an abstract mathematical space. It looks
locally like Euclidean space, but when viewed globally, it may have
a more complicated structure. A good example of a manifold is the
surface of the Earth. It seems to be flat locally, but it is round
if viewed as a whole. A \emph{differentiable manifold} (also known
as a \emph{smooth manifold}) is a manifold on which it is possible
to apply calculus.\cite{Hirsch1997} A \emph{symplectic manifold} is
defined as a pair $(M, \omega)$ which consists of a
\emph{differentiable manifold} $M$ and a closed, non-degenerate,
bilinear symplectic form, $\omega$. A symplectic form on a vector
space $V$ is a function $\omega(x, y)$ which satisfies
$\omega(\lambda_1x_1+\lambda_2x_2, y) = \lambda_1\omega(x_1, y)+
\lambda_2\omega(x_2, y)$, $\omega(x, y) = - \omega(y, x)$ and
$\omega(x, x) = 0$.\cite{McDuff1998} The two-dimensional cross
product, $\omega(x,y) = x_1 y_2 - x_2 y_1$, is an example of a
symplectic form. One of the motivations to study \emph{symplectic
manifolds} in Hamiltonian Mechanics is that a manifold can represent
all possible configurations of the system, and the phase space of
the system can then be described by its cotangent
bundle.\cite{Jost2002} Every symplectic manifold is even
dimensional. For instance, in Hamilton's equations, coordinates and
momenta always appear in pairs.

\subsection{\label{introSection:ODE}Ordinary Differential Equations}

For an ordinary differential system defined as
\begin{equation}
\dot x = f(x),
\end{equation}
where $x = x(q,p)$, this system is a canonical Hamiltonian system if
$f(x) = J\nabla _x H(x)$. Here, $H = H (q, p)$ is the Hamiltonian
function and $J$ is the skew-symmetric matrix
\begin{equation}
J = \left( {\begin{array}{*{20}c}
0 & I \\
{ - I} & 0 \\
\end{array}} \right),
\label{introEquation:canonicalMatrix}
\end{equation}
where $I$ is the identity matrix. Using this notation, the
Hamiltonian system can be rewritten as,
\begin{equation}
\frac{d}{{dt}}x = J\nabla _x H(x).
\label{introEquation:compactHamiltonian}
\end{equation}
In this case, $f$ is called a \emph{Hamiltonian vector field}.
Another generalization of Hamiltonian dynamics is Poisson
Dynamics,\cite{Olver1986}
\begin{equation}
\dot x = J(x)\nabla _x H, \label{introEquation:poissonHamiltonian}
\end{equation}
where the most obvious change is that the matrix $J$ now depends on
$x$.
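
The compact form of Eq.~\ref{introEquation:compactHamiltonian}
translates directly into code. The sketch below (a minimal
illustration, assuming a separable Hamiltonian $H = p^2/2m + kq^2/2$
with one degree of freedom) builds $J$ and evaluates the Hamiltonian
vector field $f(x) = J\nabla_x H(x)$:
\begin{verbatim}
import numpy as np

# x = (q, p); J is the canonical skew-symmetric matrix for f = 1.
J = np.array([[ 0.0, 1.0],
              [-1.0, 0.0]])

def grad_H(x, k=1.0, m=1.0):
    # gradient of H = p^2/(2m) + k q^2/2 (assumed model potential)
    q, p = x
    return np.array([k*q, p/m])

def f(x):
    # Hamiltonian vector field: dx/dt = J grad_H(x)
    return J @ grad_H(x)

print(f(np.array([1.0, 0.0])))  # [0., -1.]: qdot = p/m, pdot = -kq
\end{verbatim}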

\subsection{\label{introSection:exactFlow}Exact Propagator}

Let $x(t)$ be the exact solution of the ODE system,
\begin{equation}
\frac{{dx}}{{dt}} = f(x). \label{introEquation:ODE}
\end{equation}
We can define its exact propagator $\varphi_\tau$:
\[
x(t+\tau) =\varphi_\tau(x(t)),
\]
where $\tau$ is a fixed time step and $\varphi$ is a map from phase
space to itself. The propagator has the continuous group property,
\begin{equation}
\varphi _{\tau _1 } \circ \varphi _{\tau _2 } = \varphi _{\tau _1
+ \tau _2 } .
\end{equation}
In particular,
\begin{equation}
\varphi _\tau \circ \varphi _{ - \tau } = I.
\end{equation}
Therefore, the exact propagator is self-adjoint,
\begin{equation}
\varphi _\tau = \varphi _{ - \tau }^{ - 1}.
\end{equation}
The exact propagator can also be written as an operator,
\begin{equation}
\varphi _\tau (x) = e^{\tau \sum\limits_i {f_i (x)\frac{\partial
}{{\partial x_i }}} } (x) \equiv \exp (\tau f)(x).
\label{introEquation:exponentialOperator}
\end{equation}
In most cases, it is not easy to find the exact propagator
$\varphi_\tau$. Instead, we use an approximate map, $\psi_\tau$,
which is usually called an integrator. The order of an integrator
$\psi_\tau$ is $p$ if its Taylor series agrees with that of the
exact propagator to order $p$,
\begin{equation}
\psi_\tau(x) = \varphi_\tau(x) + O(\tau^{p+1}).
\end{equation}

\subsection{\label{introSection:geometricProperties}Geometric Properties}

The hidden geometric properties\cite{Budd1999, Marsden1998} of an
ODE and its propagator play important roles in numerical studies.
Many of them can be found in systems which occur naturally in
applications. Let $\varphi$ be the propagator of a Hamiltonian
vector field; $\varphi$ is a \emph{symplectic} propagator if it
satisfies,
\begin{equation}
{\varphi '}^T J \varphi ' = J.
\end{equation}
According to Liouville's theorem, the symplectic volume is invariant
under a Hamiltonian propagator, which is the basis for classical
statistical mechanics. Furthermore, the propagator of a Hamiltonian
vector field on a symplectic manifold can be shown to be a
symplectomorphism. For the Poisson system,
\begin{equation}
{\varphi '}^T J \varphi ' = J \circ \varphi
\end{equation}
is the property that must be preserved by the integrator. It is
possible to construct a \emph{volume-preserving} propagator for a
source-free ODE ($ \nabla \cdot f = 0 $) if the propagator satisfies
$ \det d\varphi = 1$. One can easily show that a symplectic
propagator will be volume-preserving. Changing the variables $y =
h(x)$ in an ODE (Eq.~\ref{introEquation:ODE}) will result in a new
system,
\[
\dot y = \tilde f(y) = ((dh \cdot f)h^{ - 1} )(y).
\]
The vector field $f$ has reversing symmetry $h$ if $f = - \tilde f$.
In other words, the propagator of this vector field is reversible if
and only if $ h \circ \varphi ^{ - 1} = \varphi \circ h $. A
conserved quantity of a general differential equation is a function
$ G:R^{2d} \to R $ which is constant for all solutions of the ODE
$\frac{{dx}}{{dt}} = f(x)$,
\[
\frac{{dG(x(t))}}{{dt}} = 0.
\]
Using the chain rule, one may obtain,
\[
\sum\limits_i {\frac{{\partial G}}{{\partial x_i }}} f_i (x) = f
\cdot \nabla G = 0,
\]
which is the condition for conserved quantities. For a canonical
Hamiltonian system, the time evolution of an arbitrary smooth
function $G$ is given by,
\begin{eqnarray}
\frac{{dG(x(t))}}{{dt}} & = & [\nabla _x G(x(t))]^T \dot x(t) \notag\\
& = & [\nabla _x G(x(t))]^T J\nabla _x H(x(t)).
\label{introEquation:firstIntegral1}
\end{eqnarray}
Using Poisson bracket notation, Eq.~\ref{introEquation:firstIntegral1}
can be rewritten as
\[
\frac{d}{{dt}}G(x(t)) = \left\{ {G,H} \right\}(x(t)).
\]
Therefore, the sufficient condition for $G$ to be a conserved
quantity of a Hamiltonian system is $\left\{ {G,H} \right\} = 0.$ As
is well known, the Hamiltonian (or energy) $H$ of a Hamiltonian
system is a conserved quantity, which is due to the fact that $\{
H,H\} = 0$. When designing any numerical method, one should always
try to preserve the structural properties of the original ODE and
its propagator.
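
The symplectic condition can be checked numerically. For the
harmonic Hamiltonian used above, one velocity Verlet step is a
linear map $x \mapsto Mx$, and the matrix $M$ below is its Jacobian
written out by hand (a quick illustrative check, not code from this
work):
\begin{verbatim}
import numpy as np

# One velocity-Verlet step for H = p^2/(2m) + k q^2/2 is a linear
# map x -> M x; a symplectic map must satisfy M^T J M = J.
m, k, h = 1.0, 1.0, 0.1
a = h*h*k/(2.0*m)
M = np.array([[1.0 - a,            h/m],
              [-h*k*(1.0 - a/2.0), 1.0 - a]])
J = np.array([[ 0.0, 1.0],
              [-1.0, 0.0]])
print(np.allclose(M.T @ J @ M, J))  # True: the step is symplectic
\end{verbatim}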

\subsection{\label{introSection:constructionSymplectic}Construction of Symplectic Methods}
A lot of well-established and very effective numerical methods have
been successful precisely because of their symplectic nature, even
though this fact was not recognized when they were first
constructed. The most famous example is the Verlet-leapfrog method
in molecular dynamics. In general, symplectic integrators can be
constructed using one of four different methods.
\begin{enumerate}
\item Generating functions
\item Variational methods
\item Runge-Kutta methods
\item Splitting methods
\end{enumerate}
Generating functions\cite{Channell1990} tend to lead to methods
which are cumbersome and difficult to use. In dissipative systems,
variational methods can capture the decay of energy
accurately.\cite{Kane2000} Since they are geometrically unstable
against non-Hamiltonian perturbations, ordinary implicit Runge-Kutta
methods are not suitable for Hamiltonian systems. Recently, various
high-order explicit Runge-Kutta methods\cite{Owren1992,Chen2003}
have been developed to overcome this instability. However, due to
the computational penalty involved in implementing the Runge-Kutta
methods, they have not attracted much attention from the Molecular
Dynamics community. Instead, splitting methods have been widely
accepted since they exploit natural decompositions of the
system.\cite{McLachlan1998, Tuckerman1992}

\subsubsection{\label{introSection:splittingMethod}\textbf{Splitting Methods}}

The main idea behind splitting methods is to decompose the discrete
propagator $\varphi_h$ as a composition of simpler propagators,
\begin{equation}
\varphi _h = \varphi _{h_1 } \circ \varphi _{h_2 } \ldots \circ
\varphi _{h_n },
\label{introEquation:FlowDecomposition}
\end{equation}
where each of the sub-propagators is chosen such that it represents
a simpler integration of the system. Suppose that a Hamiltonian
system takes the form,
\[
H = H_1 + H_2.
\]
Here, $H_1$ and $H_2$ may represent different physical processes of
the system. For instance, they may relate to kinetic and potential
energy respectively, which is a natural decomposition of the
problem. If $H_1$ and $H_2$ can be integrated using exact
propagators $\varphi_1(t)$ and $\varphi_2(t)$, respectively, a
simple first-order expression is then given by the Lie-Trotter
formula
\begin{equation}
\varphi _h = \varphi _{1,h} \circ \varphi _{2,h},
\label{introEquation:firstOrderSplitting}
\end{equation}
where $\varphi _h$ is the result of applying the corresponding
continuous $\varphi _i$ over a time $h$. By definition, as
$\varphi_i(t)$ is the exact solution of a Hamiltonian system, it
must follow that each operator $\varphi_i(t)$ is a symplectic map.
It is easy to show that any composition of symplectic propagators
yields a symplectic map,
\begin{equation}
(\varphi '\phi ')^T J\varphi '\phi ' = \phi '^T \varphi '^T J\varphi
'\phi ' = \phi '^T J\phi ' = J,
\label{introEquation:SymplecticFlowComposition}
\end{equation}
where $\varphi$ and $\phi$ are both symplectic maps. Thus operator
splitting in this context automatically generates a symplectic map.
The Lie-Trotter splitting
(Eq.~\ref{introEquation:firstOrderSplitting}) introduces local
errors proportional to $h^2$, while the Strang splitting gives a
second-order decomposition,\cite{Strang1968}
\begin{equation}
\varphi _h = \varphi _{1,h/2} \circ \varphi _{2,h} \circ \varphi
_{1,h/2} , \label{introEquation:secondOrderSplitting}
\end{equation}
which has a local error proportional to $h^3$. The Strang
splitting's popularity in the molecular simulation community is
attributable to its symmetry property,
\begin{equation}
\varphi _h^{ - 1} = \varphi _{ - h}.
\label{introEquation:timeReversible}
\end{equation}
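
These local error orders are easy to observe numerically. The
Python sketch below (a minimal check, assuming the unit-frequency
harmonic oscillator with $H_1 = p^2/2$ generating a drift and $H_2 =
q^2/2$ generating a kick; the exact one-step solution is a rotation)
compares the one-step error of the two splittings; halving $h$
should scale them by roughly $1/4$ and $1/8$ respectively:
\begin{verbatim}
import math

def drift(q, p, h): return q + h*p, p   # exact flow of H1 = p^2/2
def kick(q, p, h):  return q, p - h*q   # exact flow of H2 = q^2/2

def trotter(q, p, h):
    return kick(*drift(q, p, h), h)

def strang(q, p, h):
    q, p = drift(q, p, h/2)
    q, p = kick(q, p, h)
    return drift(q, p, h/2)

def err(method, h):
    q, p = method(1.0, 0.0, h)
    qe, pe = math.cos(h), -math.sin(h)  # exact one-step result
    return math.hypot(q - qe, p - pe)

for h in (0.1, 0.05):
    print(h, err(trotter, h), err(strang, h))
\end{verbatim}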

\subsubsection{\label{introSection:exampleSplittingMethod}\textbf{Examples of the Splitting Method}}
The classical equations of motion for a system of interacting
particles can be written in Hamiltonian form,
\[
H = T + V,
\]
where $T$ is the kinetic energy and $V$ is the potential energy.
Setting $H_1 = T, H_2 = V$ and applying the Strang splitting, one
obtains the following:
\begin{align}
q(\Delta t) &= q(0) + \dot{q}(0)\Delta t +
\frac{F[q(0)]}{m}\frac{\Delta t^2}{2}, %
\label{introEquation:Lp10a} \\%
%
\dot{q}(\Delta t) &= \dot{q}(0) + \frac{\Delta t}{2m}
\biggl [F[q(0)] + F[q(\Delta t)] \biggr], %
\label{introEquation:Lp10b}
\end{align}
where $F[q(t)]$ is the force at time $t$. This integration scheme is
known as \emph{velocity Verlet}, which is symplectic
(Eq.~\ref{introEquation:SymplecticFlowComposition}), time-reversible
(Eq.~\ref{introEquation:timeReversible}) and volume-preserving
(Eq.~\ref{introEquation:volumePreserving}). These geometric
properties account for its long-time stability and its popularity in
the community. However, the most commonly used velocity Verlet
integration scheme is written as follows,
\begin{align}
\dot{q}\biggl (\frac{\Delta t}{2}\biggr ) &=
\dot{q}(0) + \frac{\Delta t}{2m}\, F[q(0)], \label{introEquation:Lp9a}\\%
%
q(\Delta t) &= q(0) + \Delta t\, \dot{q}\biggl (\frac{\Delta t}{2}\biggr ),%
\label{introEquation:Lp9b}\\%
%
\dot{q}(\Delta t) &= \dot{q}\biggl (\frac{\Delta t}{2}\biggr ) +
\frac{\Delta t}{2m}\, F[q(\Delta t)]. \label{introEquation:Lp9c}
\end{align}
From the preceding splitting, one can see that the integration of
the equations of motion would follow these steps (a code sketch of
the resulting algorithm is given after this list):
\begin{enumerate}
\item Calculate the velocities at the half step, $\frac{\Delta t}{2}$, from the forces calculated at the initial position.

\item Use the half step velocities to move positions one whole step, $\Delta t$.

\item Evaluate the forces at the new positions, $q(\Delta t)$, and use the new forces to complete the velocity move.

\item Repeat from step 1 with the new position, velocities, and forces assuming the roles of the initial values.
\end{enumerate}
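
The scheme above translates into only a few lines of code. The
following Python sketch (a minimal illustration, assuming a generic
\texttt{force(q)} callback supplied by the caller; it is not the
implementation used in this work) advances one degree of freedom
through a single velocity Verlet step:
\begin{verbatim}
def velocity_verlet_step(q, v, m, dt, force):
    """One velocity Verlet step for a single degree of freedom.

    force(q) must return the force at position q (assumed callback).
    """
    v_half = v + 0.5*dt*force(q)/m            # 1: half-step velocity
    q_new = q + dt*v_half                     # 2: full position step
    v_new = v_half + 0.5*dt*force(q_new)/m    # 3: finish velocity
    return q_new, v_new

# usage: harmonic force F = -kq with k = 1 (illustrative)
q, v = 1.0, 0.0
for step in range(1000):
    q, v = velocity_verlet_step(q, v, 1.0, 0.01, lambda q: -q)
\end{verbatim}
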
By simply switching the order of the propagators in the splitting, a
new integrator, the \emph{position Verlet} integrator, can be
generated,
\begin{align}
\dot q(\Delta t) &= \dot q(0) + \frac{\Delta t}{m} F\left[ q(0) +
\frac{\Delta t}{2}\,\dot q(0) \right], %
\label{introEquation:positionVerlet1} \\%
%
q(\Delta t) &= q(0) + \frac{{\Delta t}}{2}\left[ {\dot q(0) + \dot
q(\Delta t)} \right]. %
\label{introEquation:positionVerlet2}
\end{align}

\subsubsection{\label{introSection:errorAnalysis}\textbf{Error Analysis and Higher Order Methods}}

The Baker-Campbell-Hausdorff formula\cite{Gilmore1974} can be used
to determine the local error of a splitting method in terms of the
commutator of the operators
(Eq.~\ref{introEquation:exponentialOperator}) associated with the
sub-propagators. For operators $hX$ and $hY$ which are associated
with $\varphi_1(t)$ and $\varphi_2(t)$ respectively, we have
\begin{equation}
\exp (hX)\exp (hY) = \exp (hZ)
\end{equation}
where
\begin{equation}
hZ = hX + hY + \frac{{h^2 }}{2}[X,Y] + \frac{{h^3 }}{{12}}\left(
{[X,[X,Y]] + [Y,[Y,X]]} \right) + \ldots .
\end{equation}
Here, $[X,Y]$ is the commutator of the operators $X$ and $Y$ given by
\[
[X,Y] = XY - YX .
\]
Applying the Baker-Campbell-Hausdorff formula\cite{Varadarajan1974}
to the Strang splitting, we can obtain
\begin{eqnarray*}
\exp (h X/2)\exp (h Y)\exp (h X/2) & = & \exp (h X + h Y + h^2 [X,Y]/4 + h^2 [Y,X]/4 \\
& & \mbox{} + h^2 [X,X]/8 + h^2 [Y,Y]/8 \\
& & \mbox{} + h^3 [Y,[Y,X]]/12 - h^3[X,[X,Y]]/24 + \ldots
).
\end{eqnarray*}
Since $ [X,Y] + [Y,X] = 0$ and $ [X,X] = 0$, the dominant local
error of the Strang splitting is proportional to $h^3$. The same
procedure can be applied to a general splitting of the form
\begin{equation}
\varphi _{b_m h}^2 \circ \varphi _{a_m h}^1 \circ \varphi _{b_{m -
1} h}^2 \circ \ldots \circ \varphi _{a_1 h}^1 .
\end{equation}
A careful choice of the coefficients $a_1 \ldots b_m$ will lead to
higher order methods. Yoshida proposed an elegant way to compose
higher order methods based on symmetric splitting.\cite{Yoshida1990}
Given a symmetric second-order base method $ \varphi _h^{(2)} $, a
fourth-order symmetric method can be constructed by composing,
\[
\varphi _h^{(4)} = \varphi _{\alpha h}^{(2)} \circ \varphi _{\beta
h}^{(2)} \circ \varphi _{\alpha h}^{(2)}
\]
where $ \alpha = \frac{1}{{2 - 2^{1/3} }}$ and $ \beta = -
\frac{{2^{1/3} }}{{2 - 2^{1/3} }}$. Moreover, a symmetric
integrator $ \varphi _h^{(2n + 2)}$ can be composed by
\begin{equation}
\varphi _h^{(2n + 2)} = \varphi _{\alpha h}^{(2n)} \circ \varphi
_{\beta h}^{(2n)} \circ \varphi _{\alpha h}^{(2n)},
\end{equation}
if the weights are chosen as
\[
\alpha = \frac{1}{{2 - 2^{1/(2n + 1)} }},\beta = -
\frac{{2^{1/(2n + 1)} }}{{2 - 2^{1/(2n + 1)} }} .
\]
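
As an illustration, the following Python sketch (assuming the
Strang drift-kick-drift step for the unit harmonic oscillator used
earlier; illustrative, not code from this work) builds the
fourth-order Yoshida method by composing three second-order steps
with the weights above:
\begin{verbatim}
def strang(q, p, h):
    # symmetric second-order base method (drift-kick-drift) for
    # the unit-mass, unit-frequency harmonic oscillator
    q += 0.5*h*p
    p -= h*q
    q += 0.5*h*p
    return q, p

def yoshida4(q, p, h):
    # fourth-order composition with Yoshida's weights
    alpha = 1.0/(2.0 - 2.0**(1.0/3.0))
    beta = -2.0**(1.0/3.0)/(2.0 - 2.0**(1.0/3.0))
    q, p = strang(q, p, alpha*h)
    q, p = strang(q, p, beta*h)
    return strang(q, p, alpha*h)
\end{verbatim}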

\section{\label{introSection:molecularDynamics}Molecular Dynamics}

As one of the principal tools of molecular modeling, Molecular
Dynamics has proven to be a powerful method for studying the
functions of biological systems, providing structural, thermodynamic
and dynamical information. The basic idea of molecular dynamics is
that macroscopic properties are related to microscopic behavior, and
microscopic behavior can be calculated from the trajectories in
simulations. For instance, the instantaneous temperature of a
Hamiltonian system of $N$ particles can be measured by
\[
T = \sum\limits_{i = 1}^N {\frac{{m_i v_i^2 }}{{fk_B }}},
\]
where $m_i$ and $v_i$ are the mass and velocity of the $i$th
particle respectively, $f$ is the number of degrees of freedom, and
$k_B$ is the Boltzmann constant.
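
In code, this estimator is a one-line reduction over the particles.
A minimal Python sketch (reduced units with $k_B = 1$ and $f = 3N$
are assumptions made here for illustration):
\begin{verbatim}
import numpy as np

def instantaneous_temperature(m, v, kB=1.0):
    """Kinetic temperature T = sum(m_i v_i^2) / (f kB).

    m: (N,) masses; v: (N, 3) velocities; f = 3N degrees of
    freedom (no constraints or momentum removal assumed).
    """
    f = 3*len(m)
    return np.sum(m * np.sum(v*v, axis=1)) / (f*kB)
\end{verbatim}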

A typical molecular dynamics run consists of three essential steps:
\begin{enumerate}
\item Initialization
\begin{enumerate}
\item Preliminary preparation
\item Minimization
\item Heating
\item Equilibration
\end{enumerate}
\item Production
\item Analysis
\end{enumerate}
These three individual steps will be covered in the following
sections. Sec.~\ref{introSec:initialSystemSettings} deals with the
initialization of a simulation. Sec.~\ref{introSection:production}
discusses issues of production runs.
Sec.~\ref{introSection:Analysis} provides the theoretical tools for
analysis of trajectories.

\subsection{\label{introSec:initialSystemSettings}Initialization}

\subsubsection{\textbf{Preliminary preparation}}

When selecting the starting structure of a molecule for molecular
simulation, one may retrieve its Cartesian coordinates from public
databases, such as the RCSB Protein Data Bank. Although thousands of
crystal structures of molecules are discovered every year, many more
remain unknown due to the difficulties of purification and
crystallization. Even for molecules with known structures, some
important information is missing. For example, a missing hydrogen
atom which acts as a donor in hydrogen bonding must be added.
Moreover, in order to include electrostatic interactions, one may
need to specify the partial charges for individual atoms. Under some
circumstances, we may even need to prepare the system in a special
configuration. For instance, when studying transport phenomena in
membrane systems, we may prepare the lipids in a bilayer structure
instead of placing lipids randomly in solvent, since we are not
interested in the slow self-aggregation process.

\subsubsection{\textbf{Minimization}}

It is quite possible that some of the molecules in the system
resulting from the preliminary preparation may be overlapping with
each other. This close proximity leads to a high initial potential
energy which consequently jeopardizes any molecular dynamics
simulation. To remove these steric overlaps, one typically performs
energy minimization to find a more reasonable conformation. Several
energy minimization methods have been developed to explore the
energy surface and to locate the local minimum. While converging
slowly near the minimum, the steepest descent method is extremely
robust when systems are strongly anharmonic. Thus, it is often used
to refine structures from crystallographic data. Relying on the
Hessian, advanced methods like Newton-Raphson converge rapidly to a
local minimum, but become unstable if the energy surface is far from
quadratic. Another factor that must be taken into account when
choosing an energy minimization method is the size of the system.
Steepest descent and conjugate gradient can deal with models of any
size. Because of the limits on computer memory to store the Hessian
matrix and the computing power needed to diagonalize these matrices,
most Newton-Raphson methods cannot be used with very large systems.

\subsubsection{\textbf{Heating}}

Typically, heating is performed by assigning random velocities
according to a Maxwell-Boltzmann distribution for a desired
temperature. Beginning at a lower temperature and gradually
increasing the temperature by assigning larger random velocities, we
end up setting the temperature of the system to a final temperature
at which the simulation will be conducted. In the heating phase, we
should also keep the system from drifting or rotating as a whole. To
do this, the net linear momentum and angular momentum of the system
are shifted to zero after each resampling from the Maxwell-Boltzmann
distribution.

\subsubsection{\textbf{Equilibration}}

The purpose of equilibration is to allow the system to evolve
spontaneously for a period of time and reach equilibrium. The
procedure is continued until various statistical properties, such as
the temperature, pressure, energy, volume and other structural
properties, become independent of time. Strictly speaking,
minimization and heating are not necessary, provided the
equilibration process is long enough. However, these steps can serve
as a means to arrive at an equilibrated structure in an effective
way.

\subsection{\label{introSection:production}Production}

The production run is the most important step of the simulation, in
which the equilibrated structure is used as a starting point and the
motions of the molecules are collected for later analysis. In order
to capture the macroscopic properties of the system, the molecular
dynamics simulation must be performed by sampling correctly and
efficiently from the relevant thermodynamic ensemble.

The most expensive part of a molecular dynamics simulation is the
calculation of non-bonded forces, such as van der Waals and
Coulombic forces. For a system of $N$ particles, the complexity of
the algorithm for pair-wise interactions is $O(N^2 )$, which makes
large simulations prohibitive in the absence of any algorithmic
tricks. A natural approach to avoid system size issues is to
represent the bulk behavior by a finite number of particles.
However, this approach will suffer from surface effects at the edges
of the simulation. To offset this, \textit{periodic boundary
conditions} (see Fig.~\ref{introFig:pbc}) were developed to simulate
bulk properties with a relatively small number of particles. In this
method, the simulation box is replicated throughout space to form an
infinite lattice. During the simulation, when a particle moves in
the primary cell, its images in the other cells move in exactly the
same direction with exactly the same orientation. Thus, as a
particle leaves the primary cell, one of its images will enter
through the opposite face.
\begin{figure}
\centering
\includegraphics[width=\linewidth]{pbc.eps}
\caption[An illustration of periodic boundary conditions]{A 2-D
illustration of periodic boundary conditions. As one particle leaves
the left of the simulation box, an image of it enters the right.}
\label{introFig:pbc}
\end{figure}
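
In practice, periodic boundary conditions amount to wrapping
coordinates back into the primary cell and applying the minimum
image convention when computing pair separations. A minimal Python
sketch for a cubic box (the box length \texttt{L} and the cubic
geometry are illustrative assumptions):
\begin{verbatim}
import numpy as np

def wrap(r, L):
    # map coordinates back into the primary cell [0, L)
    return r % L

def minimum_image(dr, L):
    # nearest periodic image of the separation vector dr
    return dr - L*np.round(dr/L)

# usage: distance between two particles in a box of length L = 10
L = 10.0
r1 = np.array([0.5, 9.8, 5.0])
r2 = np.array([9.7, 0.2, 5.1])
d = np.linalg.norm(minimum_image(r1 - r2, L))
\end{verbatim}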

%cutoff and minimum image convention
Another important technique to improve the efficiency of the force
evaluation is to apply spherical cutoffs, where particles farther
apart than a predetermined distance are not included in the
calculation.\cite{Frenkel1996} The use of a cutoff radius will cause
a discontinuity in the potential energy curve. Fortunately, one can
shift a simple radial potential to ensure that the potential curve
goes smoothly to zero at the cutoff radius. The cutoff strategy
works well for the Lennard-Jones interaction because of its
short-range nature. However, simply truncating the electrostatic
interaction with the use of cutoffs has been shown to lead to severe
artifacts in simulations. The Ewald summation, in which the slowly
decaying Coulomb potential is transformed into direct and reciprocal
sums with rapid and absolute convergence, has been shown to minimize
the periodicity artifacts in liquid simulations. Taking advantage of
fast Fourier transform (FFT) techniques for calculating discrete
Fourier transforms, the particle mesh-based
methods\cite{Hockney1981,Shimada1993, Luty1994} are accelerated from
$O(N^{3/2})$ to $O(N \log N)$. An alternative approach is the
\emph{fast multipole method},\cite{Greengard1987, Greengard1994}
which treats Coulombic interactions exactly at short range, and
approximates the potential at long range through multipolar
expansion. In spite of their wide acceptance in the molecular
simulation community, these two methods are difficult to implement
correctly and efficiently. Instead, we use a damped and
charge-neutralized Coulomb potential method developed by Wolf and
his coworkers.\cite{Wolf1999} The shifted Coulomb potential for
particle $i$ and particle $j$ at distance $r_{ij}$ is given by:
\begin{equation}
V(r_{ij})= \frac{q_i q_j \textrm{erfc}(\alpha
r_{ij})}{r_{ij}}-\lim_{r_{ij}\rightarrow
R_\textrm{c}}\left\{\frac{q_iq_j \textrm{erfc}(\alpha
r_{ij})}{r_{ij}}\right\}, \label{introEquation:shiftedCoulomb}
\end{equation}
where $\alpha$ is the convergence parameter. Due to its lack of
inherent periodicity and its rapid convergence, this method is
extremely efficient and easy to implement.
\begin{figure}
\centering
\includegraphics[width=\linewidth]{shifted_coulomb.eps}
\caption[An illustration of shifted Coulomb potential]{An
illustration of the shifted Coulomb potential.}
\label{introFigure:shiftedCoulomb}
\end{figure}
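
Eq.~\ref{introEquation:shiftedCoulomb} is straightforward to
evaluate: the limit is simply the damped potential evaluated at the
cutoff $R_c$. A Python sketch in reduced units (the parameter values
in the usage line are illustrative assumptions):
\begin{verbatim}
from math import erfc

def wolf_shifted_coulomb(qi, qj, rij, alpha, Rc):
    """Damped, shifted Coulomb pair energy (reduced units assumed).

    Returns 0 beyond the cutoff; the constant shift makes the
    potential vanish continuously at rij = Rc.
    """
    if rij >= Rc:
        return 0.0
    damped = lambda r: qi*qj*erfc(alpha*r)/r
    return damped(rij) - damped(Rc)

# usage: unit charges, alpha = 0.2, cutoff Rc = 9.0 (illustrative)
print(wolf_shifted_coulomb(1.0, -1.0, 3.0, 0.2, 9.0))
\end{verbatim}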

%multiple time step

\subsection{\label{introSection:Analysis} Analysis}

Recently, advanced visualization techniques have been applied to
monitor the motions of molecules. Although the dynamics of the
system can be described qualitatively from animation, quantitative
trajectory analysis is more useful. According to the principles of
Statistical Mechanics in
Sec.~\ref{introSection:statisticalMechanics}, one can compute
thermodynamic properties, analyze fluctuations of structural
parameters, and investigate time-dependent processes of the molecule
from the trajectories.

\subsubsection{\label{introSection:thermodynamicsProperties}\textbf{Thermodynamic Properties}}

Thermodynamic properties, which can be expressed in terms of some
function of the coordinates and momenta of all particles in the
system, can be directly computed from molecular dynamics. The usual
way to measure the pressure is based on the virial theorem of
Clausius, which states that the virial is equal to $-3Nk_BT$. For a
system with forces between particles, the total virial, $W$,
contains contributions from both the external pressure and the
interactions between the particles:
\[
W = - 3PV + \left\langle {\sum\limits_{i < j} {r_{ij} \cdot
f_{ij} } } \right\rangle,
\]
where $f_{ij}$ is the force between particle $i$ and $j$ at a
distance $r_{ij}$. Thus, the expression for the pressure is given
by:
\begin{equation}
P = \frac{{Nk_B T}}{V} - \frac{1}{{3V}}\left\langle {\sum\limits_{i
< j} {r_{ij} \cdot f_{ij} } } \right\rangle.
\end{equation}
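
A per-frame estimate of the pressure then needs only the pair
separations and forces. A minimal Python sketch (reduced units with
$k_B = 1$ assumed; \texttt{pairs} is an illustrative list of
precomputed $(r_{ij}, f_{ij})$ vectors, with the sign convention of
the equation above):
\begin{verbatim}
import numpy as np

def pressure(N, T, V, pairs, kB=1.0):
    """Instantaneous virial pressure for one configuration.

    pairs: iterable of (rij, fij) separation/force vectors for
    all i < j pairs.
    """
    virial = sum(np.dot(rij, fij) for rij, fij in pairs)
    return N*kB*T/V - virial/(3.0*V)
\end{verbatim}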

\subsubsection{\label{introSection:structuralProperties}\textbf{Structural Properties}}

Structural properties of a simple fluid can be described by a set of
distribution functions. Among these functions, the \emph{pair
distribution function}, also known as the \emph{radial distribution
function}, is of most fundamental importance to liquid theory.
Experimentally, pair distribution functions can be gathered by
Fourier transforming raw data from a series of neutron diffraction
experiments and integrating over the structure
factor.\cite{Powles1973} The experimental results can serve as a
criterion to justify the correctness of a liquid model. Moreover,
various equilibrium thermodynamic and structural properties can also
be expressed in terms of the radial distribution
function.\cite{Allen1987} The pair distribution function $g(r)$
gives the probability that a particle $i$ will be located at a
distance $r$ from another particle $j$ in the system,
\begin{equation}
g(r) = \frac{V}{{N^2 }}\left\langle {\sum\limits_i {\sum\limits_{j
\ne i} {\delta (r - r_{ij} )} } } \right\rangle = \frac{\rho
(r)}{\rho}.
\end{equation}
Note that the delta function can be replaced by a histogram in
computer simulations. Peaks in $g(r)$ represent solvent shells, and
the height of these peaks gradually decays to 1 at large distances
as the local density approaches the bulk density.
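
Replacing the delta function with a histogram gives the standard
estimator: count pair distances into radial bins and normalize each
bin by its ideal-gas expectation. A Python sketch for a cubic
periodic box (the box length, bin count, and minimum-image handling
are illustrative assumptions):
\begin{verbatim}
import numpy as np

def radial_distribution(r, L, nbins=100):
    """Histogram estimator of g(r) for N positions r (shape (N,3))
    in a cubic periodic box of length L (illustrative sketch)."""
    N = len(r)
    edges = np.linspace(0.0, L/2.0, nbins + 1)
    hist = np.zeros(nbins)
    for i in range(N - 1):
        dr = r[i+1:] - r[i]
        dr -= L*np.round(dr/L)          # minimum image convention
        d = np.linalg.norm(dr, axis=1)
        hist += np.histogram(d[d < L/2.0], bins=edges)[0]
    rho = N/L**3                        # bulk number density
    shell = 4.0*np.pi/3.0*(edges[1:]**3 - edges[:-1]**3)
    g = hist/(0.5*N*rho*shell)          # normalize by ideal gas
    return 0.5*(edges[:-1] + edges[1:]), g
\end{verbatim}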


\subsubsection{\label{introSection:timeDependentProperties}\textbf{Time-dependent
Properties}}

Time-dependent properties are usually calculated using \emph{time
correlation functions}, which correlate random variables $A$ and $B$
at two different times,
\begin{equation}
C_{AB} (t) = \left\langle {A(t)B(0)} \right\rangle.
\label{introEquation:timeCorrelationFunction}
\end{equation}
If $A$ and $B$ refer to the same variable, these correlation
functions are called \emph{autocorrelation functions}. One typical
example is the velocity autocorrelation function, which is directly
related to transport properties of molecular liquids:
\begin{equation}
D = \frac{1}{3}\int\limits_0^\infty {\left\langle {v(t) \cdot v(0)}
\right\rangle } dt,
\end{equation}
where $D$ is the diffusion constant. Unlike the velocity
autocorrelation function, which is averaged over time origins and
over all the atoms, the dipole autocorrelation function is
calculated for the entire system. The dipole autocorrelation
function is given by:
\begin{equation}
c_{dipole} (t) = \left\langle {u_{tot} (t) \cdot u_{tot} (0)}
\right\rangle.
\end{equation}
Here $u_{tot}$ is the net dipole of the entire system and is given
by
\begin{equation}
u_{tot} (t) = \sum\limits_i {u_i (t)}.
\end{equation}
In principle, many time correlation functions can be related to
Fourier transforms of the infrared, Raman, and inelastic neutron
scattering spectra of molecular liquids. In practice, one can
extract the IR spectrum from the intensity of the molecular dipole
fluctuation at each frequency using the following relationship:
\begin{equation}
\hat c_{dipole} (\nu) = \int_{ - \infty }^\infty {c_{dipole} (t)e^{ -
i2\pi \nu t} dt}.
\end{equation}
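
Numerically, an autocorrelation function is an average of products
over all available time origins, and the diffusion constant follows
by integrating it. A Python sketch (the velocity trajectory array
\texttt{v} of shape \texttt{(nframes, N, 3)} sampled every
\texttt{dt} is an assumed layout, not a format used in this work):
\begin{verbatim}
import numpy as np

def vacf(v):
    """<v(t).v(0)> averaged over time origins and atoms.

    v: array (nframes, N, 3) of velocities (assumed layout)."""
    nframes = len(v)
    c = np.zeros(nframes)
    for t in range(nframes):
        # average over all origins t0 and all atoms
        c[t] = np.mean(np.sum(v[t:]*v[:nframes - t], axis=2))
    return c

def diffusion_constant(v, dt):
    # D = (1/3) * integral of <v(t).v(0)> dt (trapezoidal rule)
    return np.trapz(vacf(v), dx=dt)/3.0
\end{verbatim}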
1094
1095 \section{\label{introSection:rigidBody}Dynamics of Rigid Bodies}
1096
1097 Rigid bodies are frequently involved in the modeling of different
1098 areas, including engineering, physics and chemistry. For example,
1099 missiles and vehicles are usually modeled by rigid bodies. The
1100 movement of the objects in 3D gaming engines or other physics
1101 simulators is governed by rigid body dynamics. In molecular
1102 simulations, rigid bodies are used to simplify protein-protein
1103 docking studies.\cite{Gray2003}
1104
1105 It is very important to develop stable and efficient methods to
1106 integrate the equations of motion for orientational degrees of
1107 freedom. Euler angles are the natural choice to describe the
1108 rotational degrees of freedom. However, due to $\frac {1}{sin
1109 \theta}$ singularities, the numerical integration of corresponding
1110 equations of these motion is very inefficient and inaccurate.
1111 Although an alternative integrator using multiple sets of Euler
1112 angles can overcome this difficulty\cite{Barojas1973}, the
1113 computational penalty and the loss of angular momentum conservation
1114 still remain. A singularity-free representation utilizing
1115 quaternions was developed by Evans in 1977.\cite{Evans1977}
1116 Unfortunately, this approach used a nonseparable Hamiltonian
1117 resulting from the quaternion representation, which prevented the
1118 symplectic algorithm from being utilized. Another different approach
1119 is to apply holonomic constraints to the atoms belonging to the
1120 rigid body. Each atom moves independently under the normal forces
1121 deriving from potential energy and constraint forces which are used
1122 to guarantee the rigidness. However, due to their iterative nature,
1123 the SHAKE and Rattle algorithms also converge very slowly when the
1124 number of constraints increases.\cite{Ryckaert1977, Andersen1983}
1125
1126 A break-through in geometric literature suggests that, in order to
1127 develop a long-term integration scheme, one should preserve the
1128 symplectic structure of the propagator. By introducing a conjugate
1129 momentum to the rotation matrix $Q$ and re-formulating Hamiltonian's
1130 equation, a symplectic integrator, RSHAKE\cite{Kol1997}, was
1131 proposed to evolve the Hamiltonian system in a constraint manifold
1132 by iteratively satisfying the orthogonality constraint $Q^T Q = 1$.
1133 An alternative method using the quaternion representation was
1134 developed by Omelyan.\cite{Omelyan1998} However, both of these
1135 methods are iterative and inefficient. In this section, we descibe a
1136 symplectic Lie-Poisson integrator for rigid bodies developed by
1137 Dullweber and his coworkers\cite{Dullweber1997} in depth.
1138
\subsection{\label{introSection:constrainedHamiltonianRB}Constrained Hamiltonian for Rigid Bodies}
The Hamiltonian of a rigid body is given by
\begin{equation}
H = \frac{1}{2}(p^T m^{ - 1} p) + \frac{1}{2}tr(PJ^{ - 1} P) +
V(q,Q) + \frac{1}{2}tr[(QQ^T - 1)\Lambda ].
\label{introEquation:RBHamiltonian}
\end{equation}
Here, $q$ and $Q$ are the position vector and rotation matrix for
the rigid body, $p$ and $P$ are conjugate momenta to $q$ and $Q$,
and $J$, a diagonal matrix, is defined by
\[
I_{ii}^{ - 1} = \frac{1}{2}\sum\limits_{j \ne i} {J_{jj}^{ - 1} }
\]
where $I_{ii}$ is the $i$th diagonal element of the inertia tensor.
This constrained Hamiltonian is subject to a holonomic constraint,
\begin{equation}
Q^T Q = 1, \label{introEquation:orthogonalConstraint}
\end{equation}
which ensures the orthogonality of the rotation matrix. Using
Eq.~\ref{introEquation:motionHamiltonianCoordinate} and
Eq.~\ref{introEquation:motionHamiltonianMomentum}, one can write
down the equations of motion,
\begin{eqnarray}
\frac{{dq}}{{dt}} & = & \frac{p}{m}, \label{introEquation:RBMotionPosition}\\
\frac{{dp}}{{dt}} & = & - \nabla _q V(q,Q), \label{introEquation:RBMotionMomentum}\\
\frac{{dQ}}{{dt}} & = & PJ^{ - 1}, \label{introEquation:RBMotionRotation}\\
\frac{{dP}}{{dt}} & = & - \nabla _Q V(q,Q) - 2Q\Lambda . \label{introEquation:RBMotionP}
\end{eqnarray}
Differentiating Eq.~\ref{introEquation:orthogonalConstraint} and
using Eq.~\ref{introEquation:RBMotionRotation}, one obtains
\begin{equation}
Q^T PJ^{ - 1} + J^{ - 1} P^T Q = 0 .
\label{introEquation:RBFirstOrderConstraint}
\end{equation}
In general, there are two ways to satisfy the holonomic constraints.
We can use a constraint force provided by a Lagrange multiplier on
the normal manifold to keep the motion on the constraint space, or
we can simply evolve the system on the constraint manifold. These
two methods have been proved to be equivalent. The holonomic
constraint and equations of motion define a constraint manifold for
rigid bodies
\[
M = \left\{ {(Q,P):Q^T Q = 1,Q^T PJ^{ - 1} + J^{ - 1} P^T Q = 0}
\right\}.
\]
Unfortunately, this constraint manifold is not $T^* SO(3)$, the
cotangent bundle of the Lie rotation group $SO(3)$. However, it
turns out that under a symplectic transformation, the cotangent
space and the phase space are diffeomorphic. By introducing
\[
\tilde Q = Q,\tilde P = \frac{1}{2}\left( {P - QP^T Q} \right),
\]
the mechanical system subject to the holonomic constraint manifold
$M$ can be re-formulated as a Hamiltonian system on the cotangent
space
\[
T^* SO(3) = \left\{ {(\tilde Q,\tilde P):\tilde Q^T \tilde Q =
1,\tilde Q^T \tilde PJ^{ - 1} + J^{ - 1} \tilde P^T \tilde Q = 0} \right\}.
\]
For a body-fixed vector $X_i$ with respect to the center of mass of
the rigid body, its corresponding lab-fixed vector $X_i^{lab}$ is
given by
\begin{equation}
X_i^{lab} = Q X_i + q.
\end{equation}
Therefore, the potential energy $V(q,Q)$ can be expressed in terms
of the lab-fixed atomic positions,
\[
V(q,Q) = V(QX_1 + q, \ldots ,QX_N + q).
\]
Hence, the force and torque are given by
\[
\nabla _q V(q,Q) = F(q,Q) = \sum\limits_i {F_i (q,Q)},
\]
and
\[
\nabla _Q V(q,Q) = \sum\limits_i {F_i (q,Q)X_i^T },
\]
respectively. As a common choice to describe the rotational
dynamics of the rigid body, the angular momentum in the body-fixed
frame $\Pi = Q^T P$ is introduced to rewrite the equations of
motion,
\begin{equation}
\begin{array}{l}
\dot \Pi = J^{ - 1} \Pi ^T \Pi + Q^T \sum\limits_i {F_i (q,Q)X_i^T } - \Lambda, \\
\dot Q = Q\Pi J^{ - 1}, \\
\end{array}
\label{introEquation:RBMotionPI}
\end{equation}
as well as the holonomic constraints $\Pi J^{ - 1} + J^{ - 1} \Pi ^T =
0$ and $Q^T Q = 1$. For a vector $v = (v_1 ,v_2 ,v_3 ) \in R^3$ and a
matrix $\hat v \in so(3)$, the hat-map isomorphism,
\begin{equation}
v = (v_1 ,v_2 ,v_3 ) \Leftrightarrow \hat v = \left(
{\begin{array}{*{20}c}
0 & { - v_3 } & {v_2 } \\
{v_3 } & 0 & { - v_1 } \\
{ - v_2 } & {v_1 } & 0 \\
\end{array}} \right),
\label{introEquation:hatmapIsomorphism}
\end{equation}
allows us to associate matrix products with traditional vector
operations
\[
\hat vu = v \times u.
\]
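The hat map is easy to exercise numerically. The following small
sketch (an illustration assumed for this text, not part of the
original development) checks that $\hat v u = v \times u$ for
arbitrary vectors:
\begin{verbatim}
import numpy as np

def hat(v):
    """Skew-symmetric matrix of v = (v1, v2, v3)."""
    return np.array([[ 0.0, -v[2],  v[1]],
                     [ v[2],  0.0, -v[0]],
                     [-v[1],  v[0],  0.0]])

v = np.array([1.0, 2.0, 3.0])
u = np.array([-1.0, 0.5, 2.0])
assert np.allclose(hat(v) @ u, np.cross(v, u))
\end{verbatim}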
Using Eq.~\ref{introEquation:RBMotionPI}, one can construct a skew
matrix,
\begin{eqnarray}
(\dot \Pi - \dot \Pi ^T )&= &(\Pi - \Pi ^T )(J^{ - 1} \Pi + \Pi J^{ - 1} ) \notag \\
& & + \sum\limits_i {[Q^T F_i (q,Q)X_i^T - X_i F_i (q,Q)^T Q]} -
(\Lambda - \Lambda ^T ). \label{introEquation:skewMatrixPI}
\end{eqnarray}
Since $\Lambda$ is symmetric, the last term of
Eq.~\ref{introEquation:skewMatrixPI} is zero, which implies that the
Lagrange multiplier $\Lambda$ is absent from the equations of
motion. This unique property eliminates the requirement of
iterations which cannot be avoided in other methods.\cite{Kol1997,
Omelyan1998} Applying the hat-map isomorphism, we obtain the
equation of motion for the angular momentum in the body frame
\begin{equation}
\dot \pi = \pi \times I^{ - 1} \pi + \sum\limits_i {\left( {Q^T
F_i (q,Q)} \right) \times X_i }.
\label{introEquation:bodyAngularMotion}
\end{equation}
In the same manner, the equation of motion for the rotation matrix
is given by
\[
\dot Q = Q\,{\rm{skew}}(I^{ - 1} \pi ).
\]

\subsection{\label{introSection:SymplecticFreeRB}Symplectic
Lie-Poisson Integrator for Free Rigid Bodies}

If there are no external forces exerted on the rigid body, the only
contribution to the rotational motion is from the kinetic energy
(the first term of Eq.~\ref{introEquation:bodyAngularMotion}). The
free rigid body is an example of a Lie-Poisson system with
Hamiltonian function
\begin{equation}
T^r (\pi ) = T_1 ^r (\pi _1 ) + T_2^r (\pi _2 ) + T_3^r (\pi _3 )
\label{introEquation:rotationalKineticRB}
\end{equation}
where $T_i^r (\pi _i ) = \frac{{\pi _i ^2 }}{{2I_i }}$, and
Lie-Poisson structure matrix,
\begin{equation}
J(\pi ) = \left( {\begin{array}{*{20}c}
0 & { - \pi _3 } & {\pi _2 } \\
{\pi _3 } & 0 & { - \pi _1 } \\
{ - \pi _2 } & {\pi _1 } & 0 \\
\end{array}} \right),
\end{equation}
consistent with the hat map of
Eq.~\ref{introEquation:hatmapIsomorphism}. Thus, the dynamics of the
free rigid body is governed by
\begin{equation}
\frac{d}{{dt}}\pi = J(\pi )\nabla _\pi T^r (\pi ) = \pi \times I^{ - 1} \pi.
\end{equation}
One may notice that each $T_i^r$ in
Eq.~\ref{introEquation:rotationalKineticRB} can be solved exactly.
For instance, the equations of motion due to $T_1^r$ are given by
\begin{equation}
\frac{d}{{dt}}\pi = R_1 \pi ,\frac{d}{{dt}}Q = QR_1
\label{introEquation:RBMotionSingleTerm}
\end{equation}
with
\[ R_1 = \frac{{\pi _1 }}{{I_1 }}\left( {\begin{array}{*{20}c}
0 & 0 & 0 \\
0 & 0 & 1 \\
0 & { - 1} & 0 \\
\end{array}} \right).
\]
The solutions of Eq.~\ref{introEquation:RBMotionSingleTerm} are
\[
\pi (\Delta t) = e^{\Delta tR_1 } \pi (0),\quad Q(\Delta t) =
Q(0)e^{\Delta tR_1 }
\]
with
\[
e^{\Delta tR_1 } = \left( {\begin{array}{*{20}c}
1 & 0 & 0 \\
0 & {\cos \theta _1 } & {\sin \theta _1 } \\
0 & { - \sin \theta _1 } & {\cos \theta _1 } \\
\end{array}} \right),\quad \theta _1 = \frac{{\pi _1 }}{{I_1 }}\Delta t.
\]
To reduce the cost of evaluating the trigonometric functions in
$e^{\Delta tR_1 }$, we can use the Cayley transformation to obtain a
single-axis propagator,
\begin{eqnarray*}
e^{\Delta tR_1 } & \approx & \left(1 - \frac{{\Delta t}}{2}R_1 \right)^{ - 1} \left(1 + \frac{{\Delta t}}{2}R_1 \right) \\
%
& = & \left( \begin{array}{ccc}
1 & 0 & 0 \\
0 & \frac{1-\theta_1^2 / 4}{1 + \theta_1^2 / 4} & \frac{\theta_1}{1+
\theta_1^2 / 4} \\
0 & -\frac{\theta_1}{1+ \theta_1^2 / 4} & \frac{1-\theta_1^2 / 4}{1 +
\theta_1^2 / 4}
\end{array}
\right).
\end{eqnarray*}
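A quick numerical comparison (a sketch written for this text, with
an arbitrarily chosen $\theta$) shows that the Cayley propagator
agrees with the exact exponential to $O(\theta^3)$ while remaining
exactly orthogonal:
\begin{verbatim}
import numpy as np

def exact_rot(theta):                   # e^{dt R_1} from above
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[1, 0, 0], [0, c, s], [0, -s, c]])

def cayley_rot(theta):                  # Cayley approximation
    q = theta**2 / 4.0
    return np.array([[1, 0, 0],
                     [0, (1 - q)/(1 + q),  theta/(1 + q)],
                     [0, -theta/(1 + q), (1 - q)/(1 + q)]])

theta = 0.01
E = cayley_rot(theta)
print(np.max(np.abs(E - exact_rot(theta))))  # ~1e-7: error is O(theta^3)
print(np.max(np.abs(E.T @ E - np.eye(3))))   # orthogonal to machine precision
\end{verbatim}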
The propagators for $T_2^r$ and $T_3^r$ can be found in the same
manner. In order to construct a second-order symplectic method, we
split the angular kinetic Hamiltonian function into five terms
\[
T^r (\pi ) = \frac{1}{2}T_1 ^r (\pi _1 ) + \frac{1}{2}T_2^r (\pi _2
) + T_3^r (\pi _3 ) + \frac{1}{2}T_2^r (\pi _2 ) + \frac{1}{2}T_1 ^r
(\pi _1 ).
\]
By concatenating the propagators corresponding to these five terms,
we obtain a symplectic integrator,
\[
\varphi _{\Delta t,T^r } = \varphi _{\Delta t/2,\pi _1 } \circ
\varphi _{\Delta t/2,\pi _2 } \circ \varphi _{\Delta t,\pi _3 }
\circ \varphi _{\Delta t/2,\pi _2 } \circ \varphi _{\Delta t/2,\pi
_1 }.
\]
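A minimal sketch of this five-term splitting (with assumed inputs:
principal moments of inertia \texttt{I}, body-frame angular momentum
\texttt{pi}, and rotation matrix \texttt{Q}; this is an illustration
for this text, not the authors' code) applies the exact single-axis
propagators in the symmetric order given above:
\begin{verbatim}
import numpy as np

def axis_propagator(k, theta):
    """Exact e^{h R_k}: a planar rotation by theta fixing axis k."""
    i, j = (k + 1) % 3, (k + 2) % 3
    E = np.eye(3)
    c, s = np.cos(theta), np.sin(theta)
    E[i, i] = c; E[i, j] = s
    E[j, i] = -s; E[j, j] = c
    return E

def free_rotor_step(pi, Q, I, dt):
    """phi_{dt/2,pi1} o phi_{dt/2,pi2} o phi_{dt,pi3}
       o phi_{dt/2,pi2} o phi_{dt/2,pi1}."""
    for k, h in [(0, dt/2), (1, dt/2), (2, dt), (1, dt/2), (0, dt/2)]:
        E = axis_propagator(k, h * pi[k] / I[k])
        pi = E @ pi                      # pi(h) = e^{h R_k} pi(0)
        Q = Q @ E                        # Q(h)  = Q(0) e^{h R_k}
    return pi, Q
\end{verbatim}
Because every sub-step is an orthogonal transformation of $\pi$, the
Casimir $\parallel\pi\parallel$ discussed below is conserved to
machine precision.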
The non-canonical Lie-Poisson bracket $\{F, G\}$ of two functions
$F(\pi )$ and $G(\pi )$ is defined by
\[
\{ F,G\} (\pi ) = [\nabla _\pi F(\pi )]^T J(\pi )\nabla _\pi G(\pi ).
\]
If the Poisson bracket of a function $F$ with an arbitrary smooth
function $G$ is zero, $F$ is a \emph{Casimir}, which is a conserved
quantity in a Poisson system. We can easily verify that the norm of
the angular momentum, $\parallel \pi \parallel$, is a
\emph{Casimir}.\cite{McLachlan1993} Let $F(\pi ) = S(\frac{{\parallel
\pi \parallel ^2 }}{2})$ for an arbitrary function $ S:R \to R$;
then, by the chain rule,
\[
\nabla _\pi F(\pi ) = S'(\frac{{\parallel \pi \parallel ^2
}}{2})\pi,
\]
so that $ [\nabla _\pi F(\pi )]^T J(\pi ) = - S'(\frac{{\parallel
\pi \parallel ^2 }}{2})(\pi \times \pi)^T = 0 $. This explicit
Lie-Poisson integrator is found to be both extremely efficient and
stable. These properties can be explained by the fact that only a
small-angle approximation is involved and that the norm of the
angular momentum is conserved.

\subsection{\label{introSection:RBHamiltonianSplitting} Hamiltonian
Splitting for Rigid Bodies}

The Hamiltonian of a rigid body can be separated into kinetic and
potential terms, $H = T(p,\pi ) + V(q,Q)$. The equations of motion
corresponding to the potential and kinetic energies are listed in
Table~\ref{introTable:rbEquations}.
\begin{table}
\caption{EQUATIONS OF MOTION DUE TO POTENTIAL AND KINETIC ENERGIES}
\label{introTable:rbEquations}
\begin{center}
\begin{tabular}{|l|l|}
\hline
Potential & Kinetic \\
\hline
$\frac{{dq}}{{dt}} = 0$ & $\frac{{dq}}{{dt}} = \frac{p}{m}$ \\
$\frac{d}{{dt}}p = - \frac{{\partial V}}{{\partial q}}$ & $ \frac{d}{{dt}}p = 0$ \\
$\frac{d}{{dt}}Q = 0$ & $ \frac{d}{{dt}}Q = Q\,{\rm{skew}}(I^{ - 1} \pi )$ \\
$ \frac{d}{{dt}}\pi = \sum\limits_i {\left( {Q^T F_i (q,Q)} \right) \times X_i }$ & $\frac{d}{{dt}}\pi = \pi \times I^{ - 1} \pi$\\
\hline
\end{tabular}
\end{center}
\end{table}
A second-order symplectic method is now obtained by composing the
potential and kinetic propagators,
\[
\varphi _{\Delta t} = \varphi _{\Delta t/2,V} \circ \varphi
_{\Delta t,T} \circ \varphi _{\Delta t/2,V}.
\]
Moreover, $\varphi _{\Delta t/2,V}$ can be divided into two
sub-propagators corresponding to the force and the torque,
respectively,
\[
\varphi _{\Delta t/2,V} = \varphi _{\Delta t/2,F} \circ \varphi
_{\Delta t/2,\tau }.
\]
Since the operators associated with $\varphi _{\Delta t/2,F} $ and
$\varphi _{\Delta t/2,\tau }$ commute, the composition order
inside $\varphi _{\Delta t/2,V}$ does not matter. Furthermore, the
kinetic energy can be separated into a translational kinetic term, $T^t
(p)$, and a rotational kinetic term, $T^r (\pi )$,
\begin{equation}
T(p,\pi ) =T^t (p) + T^r (\pi ),
\end{equation}
where $ T^t (p) = \frac{1}{2}p^T m^{ - 1} p $ and $T^r (\pi )$ is
defined by Eq.~\ref{introEquation:rotationalKineticRB}. Therefore,
the corresponding propagators are given by
\[
\varphi _{\Delta t,T} = \varphi _{\Delta t,T^t } \circ \varphi
_{\Delta t,T^r }.
\]
Finally, we obtain the overall symplectic propagator for freely
moving rigid bodies
\begin{eqnarray}
\varphi _{\Delta t} &=& \varphi _{\Delta t/2,F} \circ \varphi _{\Delta t/2,\tau } \notag\\
& & \circ \varphi _{\Delta t,T^t } \circ \varphi _{\Delta t/2,\pi _1 } \circ \varphi _{\Delta t/2,\pi _2 } \circ \varphi _{\Delta t,\pi _3 } \circ \varphi _{\Delta t/2,\pi _2 } \circ \varphi _{\Delta t/2,\pi _1 } \notag\\
& & \circ \varphi _{\Delta t/2,\tau } \circ \varphi _{\Delta t/2,F} .
\label{introEquation:overallRBFlowMaps}
\end{eqnarray}
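The overall propagator translates directly into code. The sketch
below (again an assumption-laden illustration, not the reference
implementation) reuses \texttt{free\_rotor\_step} from the previous
sketch; the callables \texttt{force(q, Q)} and \texttt{torque(q, Q)}
are hypothetical stand-ins returning $-\nabla_q V$ and the
body-frame torque $\sum_i (Q^T F_i) \times X_i$, respectively:
\begin{verbatim}
def rigid_body_step(q, p, Q, pi, m, I, dt, force, torque):
    p  = p  + 0.5 * dt * force(q, Q)        # phi_{dt/2, F}
    pi = pi + 0.5 * dt * torque(q, Q)       # phi_{dt/2, tau}
    q  = q + dt * p / m                     # phi_{dt, T^t}
    pi, Q = free_rotor_step(pi, Q, I, dt)   # phi_{dt, T^r}
    pi = pi + 0.5 * dt * torque(q, Q)       # phi_{dt/2, tau}
    p  = p  + 0.5 * dt * force(q, Q)        # phi_{dt/2, F}
    return q, p, Q, pi
\end{verbatim}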

\section{\label{introSection:langevinDynamics}Langevin Dynamics}
As an alternative to Newtonian dynamics, Langevin dynamics, which
mimics a simple heat bath with stochastic and dissipative forces,
has been applied in a variety of studies. This section will review
the theory of Langevin dynamics. A brief derivation of the
generalized Langevin equation will be given first. Following that,
we will discuss the physical meaning of the terms appearing in the
equation.
\subsection{\label{introSection:generalizedLangevinDynamics}Derivation of the Generalized Langevin Equation}

A harmonic bath model, in which an effective set of harmonic
oscillators is used to mimic the effect of a linearly responding
environment, has been widely used in quantum chemistry and
statistical mechanics. One of the successful applications of the
harmonic bath model is the derivation of the generalized Langevin
equation (GLE). Consider a system in which the degree of freedom
$x$ is assumed to couple to the bath linearly, giving a Hamiltonian
of the form
\begin{equation}
H = \frac{{p^2 }}{{2m}} + U(x) + H_B + \Delta U(x,x_1 , \ldots x_N)
\label{introEquation:bathGLE}.
\end{equation}
Here $p$ is a momentum conjugate to $x$, $m$ is the mass associated
with this degree of freedom, $H_B$ is a harmonic bath Hamiltonian,
\[
H_B = \sum\limits_{\alpha = 1}^N {\left\{ {\frac{{p_\alpha ^2
}}{{2m_\alpha }} + \frac{1}{2}m_\alpha \omega _\alpha ^2 x_\alpha ^2 }
\right\}}
\]
where the index $\alpha$ runs over all the bath degrees of freedom,
$\omega _\alpha$ are the harmonic bath frequencies, $m_\alpha$ are
the harmonic bath masses, and $\Delta U$ is a bilinear system-bath
coupling,
\[
\Delta U = - \sum\limits_{\alpha = 1}^N {g_\alpha x_\alpha x}
\]
where $g_\alpha$ are the coupling constants between the bath
coordinates ($x_\alpha$) and the system coordinate ($x$).
Introducing
\[
W(x) = U(x) - \sum\limits_{\alpha = 1}^N {\frac{{g_\alpha ^2
}}{{2m_\alpha \omega _\alpha ^2 }}} x^2
\]
and combining the last two terms in Eq.~\ref{introEquation:bathGLE},
we may rewrite the total Hamiltonian as
\[
H = \frac{{p^2 }}{{2m}} + W(x) + \sum\limits_{\alpha = 1}^N
{\left\{ {\frac{{p_\alpha ^2 }}{{2m_\alpha }} + \frac{1}{2}m_\alpha
\omega _\alpha ^2 \left( {x_\alpha - \frac{{g_\alpha }}{{m_\alpha
\omega _\alpha ^2 }}x} \right)^2 } \right\}}.
\]
Since the first two terms of the new Hamiltonian depend only on the
system coordinates, we can obtain the equations of motion for the
generalized Langevin dynamics from Hamilton's equations,
\begin{equation}
m\ddot x = - \frac{{\partial W(x)}}{{\partial x}} +
\sum\limits_{\alpha = 1}^N {g_\alpha \left( {x_\alpha -
\frac{{g_\alpha }}{{m_\alpha \omega _\alpha ^2 }}x} \right)},
\label{introEquation:coorMotionGLE}
\end{equation}
and
\begin{equation}
m_\alpha \ddot x_\alpha = - m_\alpha \omega _\alpha ^2 \left( {x_\alpha -
\frac{{g_\alpha }}{{m_\alpha \omega _\alpha ^2 }}x} \right).
\label{introEquation:bathMotionGLE}
\end{equation}
In order to derive an equation for $x$ alone, the dynamics of the
bath variables $x_\alpha$ must first be solved exactly. As an
integral transform which is particularly useful in solving linear
ordinary differential equations, the Laplace transform is the
appropriate tool for this problem. The basic idea is to transform a
difficult differential equation into a simple algebraic problem
which can be solved easily. Then, by applying the inverse Laplace
transform, we can retrieve the solution of the original problem.
Let $f(t)$ be a function defined on $ [0,\infty ) $; the Laplace
transform of $f(t)$ is a new function defined as
\[
L(f(t)) \equiv F(p) = \int_0^\infty {f(t)e^{ - pt} dt}
\]
where $p$ is real and $L$ is called the Laplace transform operator.
Below are some important properties of the Laplace transform:
\begin{eqnarray*}
L(x + y) & = & L(x) + L(y) \\
L(ax) & = & aL(x) \\
L(\dot x) & = & pL(x) - x(0) \\
L(\ddot x)& = & p^2 L(x) - px(0) - \dot x(0) \\
L\left( {\int_0^t {g(t - \tau )h(\tau )d\tau } } \right)& = & G(p)H(p) \\
\end{eqnarray*}
Applying the Laplace transform to the bath coordinates, we obtain
\begin{eqnarray*}
p^2 L(x_\alpha ) - px_\alpha (0) - \dot x_\alpha (0) & = & - \omega _\alpha ^2 L(x_\alpha ) + \frac{{g_\alpha }}{{m_\alpha }}L(x), \\
L(x_\alpha ) & = & \frac{{\frac{{g_\alpha }}{{m_\alpha }}L(x) + px_\alpha (0) + \dot x_\alpha (0)}}{{p^2 + \omega _\alpha ^2 }}. \\
\end{eqnarray*}
In the same way, the system coordinate becomes
\begin{eqnarray*}
mL(\ddot x) & = &
- \sum\limits_{\alpha = 1}^N {\left\{ {\frac{{g_\alpha ^2 }}{{m_\alpha \omega _\alpha ^2 }}\frac{p}{{p^2 + \omega _\alpha ^2 }}pL(x) - \frac{p}{{p^2 + \omega _\alpha ^2 }}g_\alpha x_\alpha (0) - \frac{1}{{p^2 + \omega _\alpha ^2 }}g_\alpha \dot x_\alpha (0)} \right\}} \\
& & - \frac{1}{p}\frac{{\partial W(x)}}{{\partial x}}.
\end{eqnarray*}
With the help of several important inverse Laplace transforms:
\[
\begin{array}{c}
L(\cos at) = \frac{p}{{p^2 + a^2 }} \\
L(\sin at) = \frac{a}{{p^2 + a^2 }} \\
L(1) = \frac{1}{p} \\
\end{array}
\]
we obtain
\begin{eqnarray*}
m\ddot x & = & - \frac{{\partial W(x)}}{{\partial x}} -
\sum\limits_{\alpha = 1}^N {\frac{{g_\alpha ^2 }}{{m_\alpha \omega
_\alpha ^2 }}\int_0^t {\cos (\omega _\alpha \tau )\dot x(t - \tau
)d\tau } } \\
& & + \sum\limits_{\alpha = 1}^N {g_\alpha \left\{ {\left[
{x_\alpha (0) - \frac{{g_\alpha }}{{m_\alpha \omega _\alpha ^2
}}x(0)} \right]\cos (\omega _\alpha t) + \frac{{\dot x_\alpha
(0)}}{{\omega _\alpha }}\sin (\omega _\alpha t)} \right\}}.
\end{eqnarray*}
Introducing a \emph{dynamic friction kernel}
\begin{equation}
\xi (t) = \sum\limits_{\alpha = 1}^N {\frac{{g_\alpha ^2
}}{{m_\alpha \omega _\alpha ^2 }}\cos (\omega _\alpha t)}
\label{introEquation:dynamicFrictionKernelDefinition}
\end{equation}
and a \emph{random force}
\begin{equation}
R(t) = \sum\limits_{\alpha = 1}^N {g_\alpha \left\{ {\left[ {x_\alpha (0)
- \frac{{g_\alpha }}{{m_\alpha \omega _\alpha ^2 }}x(0)}
\right]\cos (\omega _\alpha t) + \frac{{\dot x_\alpha
(0)}}{{\omega _\alpha }}\sin (\omega _\alpha t)} \right\}},
\label{introEquation:randomForceDefinition}
\end{equation}
the equation of motion can be rewritten as
\begin{equation}
m\ddot x = - \frac{{\partial W}}{{\partial x}} - \int_0^t {\xi
(\tau )\dot x(t - \tau )d\tau } + R(t),
\label{introEquation:GeneralizedLangevinDynamics}
\end{equation}
which is known as the \emph{generalized Langevin equation} (GLE).
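Numerically, the memory integral is what distinguishes the GLE from
ordinary Langevin dynamics. A deliberately naive sketch (an
illustration for this text; the kernel \texttt{xi}, random force
\texttt{R}, and potential gradient \texttt{W\_prime} are hypothetical
user-supplied callables not specified by the derivation above)
discretizes the convolution with a rectangle rule over the stored
velocity history:
\begin{verbatim}
import numpy as np

def gle_step(x_hist, v_hist, m, xi, R, W_prime, dt):
    """Advance one Euler step; x_hist, v_hist hold the trajectory so far."""
    n = len(v_hist)
    taus = np.arange(n) * dt
    # memory term: int_0^t xi(tau) v(t - tau) dtau
    mem = dt * np.dot(xi(taus), np.asarray(v_hist)[::-1])
    a = (-W_prime(x_hist[-1]) - mem + R((n - 1) * dt)) / m
    v_new = v_hist[-1] + a * dt
    return x_hist[-1] + v_new * dt, v_new
\end{verbatim}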

\subsubsection{\label{introSection:randomForceDynamicFrictionKernel}\textbf{Random Force and Dynamic Friction Kernel}}

One may notice that $R(t)$ depends only on the initial conditions,
which implies it is completely deterministic within the context of a
harmonic bath. However, it is easy to verify that $R(t)$ is totally
uncorrelated with $x$ and $\dot x$, $\left\langle {x(t)R(t)}
\right\rangle = 0, \left\langle {\dot x(t)R(t)} \right\rangle =
0$, which is what we expect from a truly random process. As long as
$R(t)$ is modeled as a Gaussian random process, the stochastic
nature of the GLE is retained.
%dynamic friction kernel
The convolution integral
\[
\int_0^t {\xi (\tau )\dot x(t - \tau )d\tau }
\]
depends on the entire history of the evolution of $x$, which implies
that the bath retains memory of previous motions. In other words,
the bath requires a finite time to respond to changes in the motion
of the system. For a sluggish bath which responds slowly to changes
in the system coordinate, we may regard $\xi(t)$ as a constant,
$\xi(t) = \xi_0$. Hence, the convolution integral becomes
\[
\int_0^t {\xi (\tau )\dot x(t - \tau )d\tau } = \xi _0 (x(t) - x(0))
\]
and Eq.~\ref{introEquation:GeneralizedLangevinDynamics} becomes
\[
m\ddot x = - \frac{\partial }{{\partial x}}\left( {W(x) +
\frac{1}{2}\xi _0 (x - x_0 )^2 } \right) + R(t),
\]
where $x_0 = x(0)$. This limit can be used to describe the effect of
dynamic caging in viscous solvents. The other extreme is a bath that
responds infinitely quickly to motions in the system, so that $\xi
(t)$ can be taken as a $\delta$-function in time:
\[
\xi (t) = 2\xi _0 \delta (t).
\]
Hence, the convolution integral becomes
\[
\int_0^t {\xi (\tau )\dot x(t - \tau )d\tau } = 2\xi _0 \int_0^t
{\delta (\tau )\dot x(t - \tau )d\tau } = \xi _0 \dot x(t),
\]
and Eq.~\ref{introEquation:GeneralizedLangevinDynamics} becomes
\begin{equation}
m\ddot x = - \frac{{\partial W(x)}}{{\partial x}} - \xi _0 \dot
x(t) + R(t), \label{introEquation:LangevinEquation}
\end{equation}
which is known as the Langevin equation. The static friction
coefficient $\xi _0$ can either be calculated from the spectral
density or be determined from Stokes' law for regularly shaped
particles. A brief review of methods for calculating friction
tensors for arbitrarily shaped particles is given in
Sec.~\ref{introSection:frictionTensor}.
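In the Markovian limit, a minimal integrator sketch follows (an
illustration for this text; \texttt{W\_prime} is a hypothetical
callable returning $\partial W/\partial x$). The discretized random
force is drawn with variance $2\xi_0 kT/\Delta t$ so that
$\left\langle R(t)R(t') \right\rangle = 2\xi_0 kT\delta(t - t')$ is
recovered:
\begin{verbatim}
import numpy as np

def langevin_trajectory(x, v, m, xi0, kT, W_prime, dt, n_steps, seed=0):
    rng = np.random.default_rng(seed)
    sigma = np.sqrt(2.0 * xi0 * kT / dt)   # discretized R(t) strength
    traj = np.empty(n_steps)
    for i in range(n_steps):
        R = sigma * rng.standard_normal()
        v += dt * (-W_prime(x) - xi0 * v + R) / m
        x += dt * v                        # semi-implicit Euler update
        traj[i] = x
    return traj
\end{verbatim}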

\subsubsection{\label{introSection:secondFluctuationDissipation}\textbf{The Second Fluctuation Dissipation Theorem}}

Defining a new set of coordinates
\[
q_\alpha (t) = x_\alpha (t) - \frac{{g_\alpha }}{{m_\alpha \omega _\alpha
^2 }}x(0),
\]
we can rewrite $R(t)$ as
\[
R(t) = \sum\limits_{\alpha = 1}^N {g_\alpha q_\alpha (t)}.
\]
Since the $q$ coordinates are harmonic oscillators,
\begin{eqnarray*}
\left\langle {q_\alpha ^2 } \right\rangle & = & \frac{{kT}}{{m_\alpha \omega _\alpha ^2 }} \\
\left\langle {q_\alpha (t)q_\alpha (0)} \right\rangle & = & \left\langle {q_\alpha ^2 (0)} \right\rangle \cos (\omega _\alpha t) \\
\left\langle {q_\alpha (t)q_\beta (0)} \right\rangle & = &\delta _{\alpha \beta } \left\langle {q_\alpha (t)q_\alpha (0)} \right\rangle \\
\left\langle {R(t)R(0)} \right\rangle & = & \sum\limits_\alpha {\sum\limits_\beta {g_\alpha g_\beta \left\langle {q_\alpha (t)q_\beta (0)} \right\rangle } } \\
& = &\sum\limits_\alpha {g_\alpha ^2 \left\langle {q_\alpha ^2 (0)} \right\rangle \cos (\omega _\alpha t)} \\
& = &kT\xi (t).
\end{eqnarray*}
Thus, we recover the \emph{second fluctuation dissipation theorem}
\begin{equation}
\xi (t) = \frac{1}{{kT}}\left\langle {R(t)R(0)} \right\rangle,
\label{introEquation:secondFluctuationDissipation}
\end{equation}
which acts as a constraint on the possible ways in which one can
model the random force and friction kernel.
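This constraint is easy to verify numerically. The sketch below (an
illustration with arbitrarily chosen bath parameters, not data from
the source) samples bath initial conditions from the Boltzmann
distribution and compares $\left\langle R(t)R(0)\right\rangle$
against $kT\xi(t)$:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
N, kT = 200, 1.0
g = rng.uniform(0.1, 1.0, N)     # coupling constants g_alpha
m = rng.uniform(0.5, 2.0, N)     # bath masses m_alpha
w = rng.uniform(0.5, 5.0, N)     # bath frequencies omega_alpha
t = np.linspace(0.0, 5.0, 100)
coswt, sinwt = np.cos(np.outer(w, t)), np.sin(np.outer(w, t))

acf = np.zeros_like(t)
n_samples = 20000
for _ in range(n_samples):
    q0 = rng.normal(0.0, np.sqrt(kT / (m * w**2)))  # <q^2> = kT/(m w^2)
    v0 = rng.normal(0.0, np.sqrt(kT / m))           # <qdot^2> = kT/m
    R = (g * q0) @ coswt + (g * v0 / w) @ sinwt     # R(t) = sum g q(t)
    acf += R * R[0]
acf /= n_samples

xi = (g**2 / (m * w**2)) @ coswt                    # kernel xi(t)
print(np.max(np.abs(acf - kT * xi)))                # small: sampling noise
\end{verbatim}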