1 \chapter{\label{chapt:introduction}INTRODUCTION AND THEORETICAL BACKGROUND}
2
3 \section{\label{introSection:classicalMechanics}Classical
4 Mechanics}
5
Molecular Dynamics simulations are carried out by integrating
equations of motion derived from Classical Mechanics for a given
system of particles. There are three fundamental ideas behind
classical mechanics. First, one can determine the state of a
mechanical system at any time of interest. Second, all of the
mechanical properties of the system at that time can be determined
by combining knowledge of the properties of the system with the
specification of this state. Finally, the specification of the
state, when further combined with the laws of mechanics, is
sufficient to predict the future behavior of the system.
17
18 \subsection{\label{introSection:newtonian}Newtonian Mechanics}
The discovery of Newton's three laws of mechanics, which govern the
motion of particles, is the foundation of classical mechanics.
Newton's first law defines a class of inertial frames. Inertial
frames are reference frames in which a particle not interacting with
other bodies moves with constant speed in the same direction.
24 With respect to inertial frames, Newton's second law has the form
25 \begin{equation}
F = \frac{dp}{dt} = m\frac{dv}{dt}
27 \label{introEquation:newtonSecondLaw}
28 \end{equation}
A point mass interacting with other bodies moves with an
acceleration along the direction of the force acting on it. Let
31 $F_{ij}$ be the force that particle $i$ exerts on particle $j$, and
32 $F_{ji}$ be the force that particle $j$ exerts on particle $i$.
33 Newton's third law states that
34 \begin{equation}
35 F_{ij} = -F_{ji}.
36 \label{introEquation:newtonThirdLaw}
37 \end{equation}
Conservation laws of Newtonian Mechanics play very important roles
in solving mechanics problems. The linear momentum of a particle is
conserved if it is free, i.e., if it experiences no force. The second
41 conservation theorem concerns the angular momentum of a particle.
42 The angular momentum $L$ of a particle with respect to an origin
43 from which $r$ is measured is defined to be
44 \begin{equation}
45 L \equiv r \times p \label{introEquation:angularMomentumDefinition}
46 \end{equation}
47 The torque $\tau$ with respect to the same origin is defined to be
48 \begin{equation}
49 \tau \equiv r \times F \label{introEquation:torqueDefinition}
50 \end{equation}
51 Differentiating Eq.~\ref{introEquation:angularMomentumDefinition},
52 \[
53 \dot L = \frac{d}{{dt}}(r \times p) = (\dot r \times p) + (r \times
54 \dot p)
55 \]
56 since
57 \[
58 \dot r \times p = \dot r \times mv = m\dot r \times \dot r \equiv 0
59 \]
60 thus,
61 \begin{equation}
62 \dot L = r \times \dot p = \tau
63 \end{equation}
If there are no external torques acting on a body, its angular
momentum is conserved. The last conservation theorem states
that if all forces are conservative, energy is conserved,
67 \begin{equation}E = T + V. \label{introEquation:energyConservation}
68 \end{equation}
69 All of these conserved quantities are important factors to determine
70 the quality of numerical integration schemes for rigid bodies
71 \cite{Dullweber1997}.
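
To make these conservation laws concrete, the following minimal
Python/NumPy sketch (an illustration, not part of the original
development; the potential and all parameter values are arbitrary)
integrates a particle in a central potential $V(r) = -1/|r|$ and
checks that the angular momentum $L = r \times p$ and the total
energy $E = T + V$ are preserved by the integrator:
\begin{verbatim}
import numpy as np

def force(r):
    # F = -dV/dr for the central potential V = -1/|r|
    return -r / np.linalg.norm(r)**3

m, dt, nsteps = 1.0, 1.0e-3, 10000
r = np.array([1.0, 0.0, 0.0])
v = np.array([0.0, 1.2, 0.0])
f = force(r)
for _ in range(nsteps):
    v += 0.5 * dt * f / m      # half-step velocity update
    r += dt * v                # position update
    f = force(r)
    v += 0.5 * dt * f / m      # complete the velocity update

L = np.cross(r, m * v)                         # angular momentum r x p
E = 0.5 * m * v @ v - 1.0 / np.linalg.norm(r)  # total energy T + V
print(L, E)  # L stays at (0, 0, 1.2); E stays near its initial -0.28
\end{verbatim}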
72
73 \subsection{\label{introSection:lagrangian}Lagrangian Mechanics}
74
Newtonian Mechanics suffers from an important limitation: motion can
only be described in Cartesian coordinate systems, which makes it
impossible to predict analytically the properties of the system even
if we know all of the details of the interaction. In order to
overcome some of the practical difficulties which arise in attempts
to apply Newton's equations to complex systems, approximate numerical
procedures may be developed.
82
83 \subsubsection{\label{introSection:halmiltonPrinciple}\textbf{Hamilton's
84 Principle}}
85
86 Hamilton introduced the dynamical principle upon which it is
87 possible to base all of mechanics and most of classical physics.
Hamilton's Principle may be stated as follows: the trajectory, along
which a dynamical system may move from one point to another within a
specified time, is derived by finding the path which minimizes the
time integral of the difference between the kinetic energy $K$ and the
potential energy $U$,
93 \begin{equation}
94 \delta \int_{t_1 }^{t_2 } {(K - U)dt = 0}.
95 \label{introEquation:halmitonianPrinciple1}
96 \end{equation}
97 For simple mechanical systems, where the forces acting on the
98 different parts are derivable from a potential, the Lagrangian
99 function $L$ can be defined as the difference between the kinetic
100 energy of the system and its potential energy,
101 \begin{equation}
102 L \equiv K - U = L(q_i ,\dot q_i ).
103 \label{introEquation:lagrangianDef}
104 \end{equation}
105 Thus, Eq.~\ref{introEquation:halmitonianPrinciple1} becomes
106 \begin{equation}
107 \delta \int_{t_1 }^{t_2 } {L dt = 0} .
108 \label{introEquation:halmitonianPrinciple2}
109 \end{equation}
110
111 \subsubsection{\label{introSection:equationOfMotionLagrangian}\textbf{The
112 Equations of Motion in Lagrangian Mechanics}}
113
For a system of $f$ degrees of freedom, the equations of motion in
the Lagrangian form are
116 \begin{equation}
117 \frac{d}{{dt}}\frac{{\partial L}}{{\partial \dot q_i }} -
118 \frac{{\partial L}}{{\partial q_i }} = 0,{\rm{ }}i = 1, \ldots,f
119 \label{introEquation:eqMotionLagrangian}
120 \end{equation}
where $q_{i}$ is a generalized coordinate and $\dot{q_{i}}$ is the
corresponding generalized velocity.
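
As a small worked example (illustrative; the symbols $m$ and $k$ are
assumptions for a one-dimensional harmonic oscillator), the
Euler-Lagrange equation above can be evaluated symbolically for
$L = \frac{1}{2}m\dot q^2 - \frac{1}{2}kq^2$:
\begin{verbatim}
import sympy as sp

t = sp.symbols('t')
m, k = sp.symbols('m k', positive=True)
q = sp.Function('q')

# Lagrangian L = K - U for a 1-D harmonic oscillator
L = sp.Rational(1, 2)*m*q(t).diff(t)**2 - sp.Rational(1, 2)*k*q(t)**2

# d/dt (dL/d qdot) - dL/dq = 0
eom = sp.diff(L, q(t).diff(t)).diff(t) - sp.diff(L, q(t))
print(sp.simplify(eom))   # m*q''(t) + k*q(t), i.e. Newton's second law
\end{verbatim}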
123
124 \subsection{\label{introSection:hamiltonian}Hamiltonian Mechanics}
125
126 Arising from Lagrangian Mechanics, Hamiltonian Mechanics was
127 introduced by William Rowan Hamilton in 1833 as a re-formulation of
128 classical mechanics. If the potential energy of a system is
129 independent of velocities, the momenta can be defined as
130 \begin{equation}
131 p_i = \frac{\partial L}{\partial \dot q_i}
132 \label{introEquation:generalizedMomenta}
133 \end{equation}
The Lagrange equations of motion are then expressed by
\begin{equation}
\dot p_i = \frac{{\partial L}}{{\partial q_i }}.
\label{introEquation:generalizedMomentaDot}
\end{equation}
139 With the help of the generalized momenta, we may now define a new
140 quantity $H$ by the equation
141 \begin{equation}
142 H = \sum\limits_k {p_k \dot q_k } - L ,
143 \label{introEquation:hamiltonianDefByLagrangian}
144 \end{equation}
145 where $ \dot q_1 \ldots \dot q_f $ are generalized velocities and
146 $L$ is the Lagrangian function for the system. Differentiating
147 Eq.~\ref{introEquation:hamiltonianDefByLagrangian}, one can obtain
148 \begin{equation}
149 dH = \sum\limits_k {\left( {p_k d\dot q_k + \dot q_k dp_k -
150 \frac{{\partial L}}{{\partial q_k }}dq_k - \frac{{\partial
151 L}}{{\partial \dot q_k }}d\dot q_k } \right)} - \frac{{\partial
152 L}}{{\partial t}}dt . \label{introEquation:diffHamiltonian1}
153 \end{equation}
Making use of Eq.~\ref{introEquation:generalizedMomenta}, the first
and fourth terms in the parentheses cancel. Therefore,
156 Eq.~\ref{introEquation:diffHamiltonian1} can be rewritten as
157 \begin{equation}
158 dH = \sum\limits_k {\left( {\dot q_k dp_k - \dot p_k dq_k }
159 \right)} - \frac{{\partial L}}{{\partial t}}dt .
160 \label{introEquation:diffHamiltonian2}
161 \end{equation}
By identifying the coefficients of $dq_k$, $dp_k$ and $dt$, we can
find
164 \begin{equation}
165 \frac{{\partial H}}{{\partial p_k }} = \dot {q_k}
166 \label{introEquation:motionHamiltonianCoordinate}
167 \end{equation}
168 \begin{equation}
169 \frac{{\partial H}}{{\partial q_k }} = - \dot {p_k}
170 \label{introEquation:motionHamiltonianMomentum}
171 \end{equation}
172 and
173 \begin{equation}
174 \frac{{\partial H}}{{\partial t}} = - \frac{{\partial L}}{{\partial
175 t}}
176 \label{introEquation:motionHamiltonianTime}
177 \end{equation}
where Eq.~\ref{introEquation:motionHamiltonianCoordinate} and
Eq.~\ref{introEquation:motionHamiltonianMomentum} are Hamilton's
equations of motion. Due to their symmetric form, they are also
known as the canonical equations of motion \cite{Goldstein2001}.
182
An important difference between the Lagrangian approach and the
Hamiltonian approach is that the Lagrangian is considered to be a
function of the generalized velocities $\dot q_i$ and coordinates
$q_i$, while the Hamiltonian is considered to be a function of the
generalized momenta $p_i$ and the conjugate coordinates $q_i$.
Hamiltonian Mechanics is more appropriate for application to
statistical mechanics and quantum mechanics, since it treats the
coordinates and their conjugate momenta as independent variables and
works only with first-order differential equations\cite{Marion1990}.
192 In Newtonian Mechanics, a system described by conservative forces
193 conserves the total energy
194 (Eq.~\ref{introEquation:energyConservation}). It follows that
195 Hamilton's equations of motion conserve the total Hamiltonian
196 \begin{equation}
197 \frac{{dH}}{{dt}} = \sum\limits_i {\left( {\frac{{\partial
198 H}}{{\partial q_i }}\dot q_i + \frac{{\partial H}}{{\partial p_i
199 }}\dot p_i } \right)} = \sum\limits_i {\left( {\frac{{\partial
200 H}}{{\partial q_i }}\frac{{\partial H}}{{\partial p_i }} -
201 \frac{{\partial H}}{{\partial p_i }}\frac{{\partial H}}{{\partial
202 q_i }}} \right) = 0}. \label{introEquation:conserveHalmitonian}
203 \end{equation}
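
This conservation law can be checked symbolically for a simple
one-dimensional Hamiltonian; the sketch below (illustrative only)
substitutes Hamilton's equations into the time derivative of $H$:
\begin{verbatim}
import sympy as sp

q, p = sp.symbols('q p')
m, k = sp.symbols('m k', positive=True)
H = p**2/(2*m) + k*q**2/2

qdot = sp.diff(H, p)       # dq/dt =  dH/dp
pdot = -sp.diff(H, q)      # dp/dt = -dH/dq

dHdt = sp.diff(H, q)*qdot + sp.diff(H, p)*pdot
print(sp.simplify(dHdt))   # 0: the Hamiltonian is conserved
\end{verbatim}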
204
205 \section{\label{introSection:statisticalMechanics}Statistical
206 Mechanics}
207
The thermodynamic behavior and properties of a Molecular Dynamics
simulation are governed by the principles of Statistical Mechanics.
The following section will give a brief introduction to some of the
Statistical Mechanics concepts and theorems presented in this
dissertation.
213
214 \subsection{\label{introSection:ensemble}Phase Space and Ensemble}
215
216 Mathematically, phase space is the space which represents all
217 possible states of a system. Each possible state of the system
218 corresponds to one unique point in the phase space. For mechanical
219 systems, the phase space usually consists of all possible values of
position and momentum variables. Consider a dynamic system of $f$
particles in Cartesian space, where each of the $6f$ coordinates
and momenta is assigned to one of $6f$ mutually orthogonal axes;
the phase space of this system is then a $6f$-dimensional space. A
point, $x = (\vec q_1 , \ldots , \vec q_f , \vec p_1 , \ldots ,
\vec p_f )$, with a unique set of values of the $6f$ coordinates and
momenta, is a phase space vector.
235
236 In statistical mechanics, the condition of an ensemble at any time
237 can be regarded as appropriately specified by the density $\rho$
238 with which representative points are distributed over the phase
239 space. The density distribution for an ensemble with $f$ degrees of
240 freedom is defined as,
241 \begin{equation}
242 \rho = \rho (q_1 , \ldots ,q_f ,p_1 , \ldots ,p_f ,t).
243 \label{introEquation:densityDistribution}
244 \end{equation}
Governed by the principles of mechanics, the phase points change
their locations, which changes the density in phase space with time.
Hence, the density distribution is also to be taken as a function of
time. The number of systems $\delta N$ in a volume element of phase
space at time $t$ can be determined by,
250 \begin{equation}
251 \delta N = \rho (q,p,t)dq_1 \ldots dq_f dp_1 \ldots dp_f.
252 \label{introEquation:deltaN}
253 \end{equation}
Assuming there are enough copies of the system, we can treat
$\delta N$ as a continuous function without introducing discontinuity
when we go from one region in the phase space to another. By
integrating over the whole phase space,
258 \begin{equation}
259 N = \int { \ldots \int {\rho (q,p,t)dq_1 } ...dq_f dp_1 } ...dp_f
260 \label{introEquation:totalNumberSystem}
261 \end{equation}
262 gives us an expression for the total number of copies. Hence, the
263 probability per unit volume in the phase space can be obtained by,
264 \begin{equation}
265 \frac{{\rho (q,p,t)}}{N} = \frac{{\rho (q,p,t)}}{{\int { \ldots \int
266 {\rho (q,p,t)dq_1 } ...dq_f dp_1 } ...dp_f }}.
267 \label{introEquation:unitProbability}
268 \end{equation}
269 With the help of Eq.~\ref{introEquation:unitProbability} and the
270 knowledge of the system, it is possible to calculate the average
271 value of any desired quantity which depends on the coordinates and
272 momenta of the system. Even when the dynamics of the real system are
273 complex, or stochastic, or even discontinuous, the average
274 properties of the ensemble of possibilities as a whole remain well
275 defined. For a classical system in thermal equilibrium with its
276 environment, the ensemble average of a mechanical quantity, $\langle
277 A(q , p) \rangle_t$, takes the form of an integral over the phase
278 space of the system,
279 \begin{equation}
280 \langle A(q , p) \rangle_t = \frac{{\int { \ldots \int {A(q,p)\rho
281 (q,p,t)dq_1 } ...dq_f dp_1 } ...dp_f }}{{\int { \ldots \int {\rho
282 (q,p,t)dq_1 } ...dq_f dp_1 } ...dp_f }}.
283 \label{introEquation:ensembelAverage}
284 \end{equation}
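
As a numerical illustration of
Eq.~\ref{introEquation:ensembelAverage} (not part of the original
text; the observable and parameters are arbitrary), the equilibrium
average $\langle p^2/2m \rangle$ of a one-dimensional harmonic
oscillator can be evaluated under a Boltzmann density
$\rho \propto e^{-\beta H}$ and compared with the equipartition
value $1/(2\beta)$:
\begin{verbatim}
import numpy as np

beta, m, k = 2.0, 1.0, 1.0
q = np.linspace(-10.0, 10.0, 1001)
p = np.linspace(-10.0, 10.0, 1001)
Q, P = np.meshgrid(q, p)

rho = np.exp(-beta*(P**2/(2*m) + k*Q**2/2))   # unnormalized density
A = P**2/(2*m)                                # observable A(q, p)

average = np.sum(A*rho) / np.sum(rho)         # phase-space average
print(average, 1.0/(2.0*beta))                # matches 1/(2 beta)
\end{verbatim}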
285
286 \subsection{\label{introSection:liouville}Liouville's theorem}
287
288 Liouville's theorem is the foundation on which statistical mechanics
289 rests. It describes the time evolution of the phase space
290 distribution function. In order to calculate the rate of change of
291 $\rho$, we begin from Eq.~\ref{introEquation:deltaN}. If we consider
292 the two faces perpendicular to the $q_1$ axis, which are located at
293 $q_1$ and $q_1 + \delta q_1$, the number of phase points leaving the
294 opposite face is given by the expression,
295 \begin{equation}
296 \left( {\rho + \frac{{\partial \rho }}{{\partial q_1 }}\delta q_1 }
297 \right)\left( {\dot q_1 + \frac{{\partial \dot q_1 }}{{\partial q_1
298 }}\delta q_1 } \right)\delta q_2 \ldots \delta q_f \delta p_1
299 \ldots \delta p_f .
300 \end{equation}
Summing the net flow over all of the faces of the volume element, we obtain
302 \begin{equation}
303 \frac{{d(\delta N)}}{{dt}} = - \sum\limits_{i = 1}^f {\left[ {\rho
304 \left( {\frac{{\partial \dot q_i }}{{\partial q_i }} +
305 \frac{{\partial \dot p_i }}{{\partial p_i }}} \right) + \left(
306 {\frac{{\partial \rho }}{{\partial q_i }}\dot q_i + \frac{{\partial
307 \rho }}{{\partial p_i }}\dot p_i } \right)} \right]} \delta q_1
308 \ldots \delta q_f \delta p_1 \ldots \delta p_f .
309 \end{equation}
310 Differentiating the equations of motion in Hamiltonian formalism
311 (\ref{introEquation:motionHamiltonianCoordinate},
312 \ref{introEquation:motionHamiltonianMomentum}), we can show,
313 \begin{equation}
314 \sum\limits_i {\left( {\frac{{\partial \dot q_i }}{{\partial q_i }}
315 + \frac{{\partial \dot p_i }}{{\partial p_i }}} \right)} = 0 ,
316 \end{equation}
which cancels the first term on the right hand side. Furthermore,
dividing both sides by $ \delta q_1 \ldots \delta q_f \delta p_1
\ldots \delta p_f $, we can write out Liouville's theorem in a
simple form,
321 \begin{equation}
322 \frac{{\partial \rho }}{{\partial t}} + \sum\limits_{i = 1}^f
323 {\left( {\frac{{\partial \rho }}{{\partial q_i }}\dot q_i +
324 \frac{{\partial \rho }}{{\partial p_i }}\dot p_i } \right)} = 0 .
325 \label{introEquation:liouvilleTheorem}
326 \end{equation}
327 Liouville's theorem states that the distribution function is
328 constant along any trajectory in phase space. In classical
329 statistical mechanics, since the number of system copies in an
330 ensemble is huge and constant, we can assume the local density has
331 no reason (other than classical mechanics) to change,
332 \begin{equation}
333 \frac{{\partial \rho }}{{\partial t}} = 0.
334 \label{introEquation:stationary}
335 \end{equation}
In such a stationary system, the density distribution $\rho$ can be
connected to the Hamiltonian $H$ through the Maxwell-Boltzmann
distribution,
\begin{equation}
\rho \propto e^{ - \beta H},
\label{introEquation:densityAndHamiltonian}
\end{equation}
where $\beta = 1/(k_B T)$ and $T$ is the temperature.
343
344 \subsubsection{\label{introSection:phaseSpaceConservation}\textbf{Conservation of Phase Space}}
Let us consider a region in the phase space,
\begin{equation}
\delta v = \int { \ldots \int {dq_1 } \ldots dq_f dp_1 } \ldots dp_f .
\end{equation}
349 If this region is small enough, the density $\rho$ can be regarded
350 as uniform over the whole integral. Thus, the number of phase points
351 inside this region is given by,
\begin{equation}
\delta N = \rho \delta v = \rho \int { \ldots \int {dq_1 } \ldots dq_f
dp_1 } \ldots dp_f.
\end{equation}
Since the phase points inside this region are carried along by the
flow, the number $\delta N$ is conserved, and differentiating with
respect to time gives
\begin{equation}
\frac{{d(\delta N)}}{{dt}} = \frac{{d\rho }}{{dt}}\delta v + \rho
\frac{d}{{dt}}(\delta v) = 0.
\end{equation}
361 With the help of the stationary assumption
362 (Eq.~\ref{introEquation:stationary}), we obtain the principle of
363 \emph{conservation of volume in phase space},
364 \begin{equation}
365 \frac{d}{{dt}}(\delta v) = \frac{d}{{dt}}\int { \ldots \int {dq_1 }
366 ...dq_f dp_1 } ..dp_f = 0.
367 \label{introEquation:volumePreserving}
368 \end{equation}
369
370 \subsubsection{\label{introSection:liouvilleInOtherForms}\textbf{Liouville's Theorem in Other Forms}}
371
Liouville's theorem can be expressed in a variety of different forms
which are convenient within different contexts. For any two functions
$F$ and $G$ of the coordinates and momenta of a system, the Poisson
bracket $\{F, G\}$ is defined as
376 \begin{equation}
377 \left\{ {F,G} \right\} = \sum\limits_i {\left( {\frac{{\partial
378 F}}{{\partial q_i }}\frac{{\partial G}}{{\partial p_i }} -
379 \frac{{\partial F}}{{\partial p_i }}\frac{{\partial G}}{{\partial
380 q_i }}} \right)}.
381 \label{introEquation:poissonBracket}
382 \end{equation}
Substituting the equations of motion in Hamiltonian formalism
(Eq.~\ref{introEquation:motionHamiltonianCoordinate},
Eq.~\ref{introEquation:motionHamiltonianMomentum}) into
Eq.~\ref{introEquation:liouvilleTheorem}, we can rewrite
Liouville's theorem using Poisson bracket notation,
388 \begin{equation}
389 \left( {\frac{{\partial \rho }}{{\partial t}}} \right) = - \left\{
390 {\rho ,H} \right\}.
391 \label{introEquation:liouvilleTheromInPoissin}
392 \end{equation}
393 Moreover, the Liouville operator is defined as
394 \begin{equation}
395 iL = \sum\limits_{i = 1}^f {\left( {\frac{{\partial H}}{{\partial
396 p_i }}\frac{\partial }{{\partial q_i }} - \frac{{\partial
397 H}}{{\partial q_i }}\frac{\partial }{{\partial p_i }}} \right)}
398 \label{introEquation:liouvilleOperator}
399 \end{equation}
In terms of the Liouville operator, Liouville's equation can also be
expressed as
402 \begin{equation}
403 \left( {\frac{{\partial \rho }}{{\partial t}}} \right) = - iL\rho
404 \label{introEquation:liouvilleTheoremInOperator}
405 \end{equation}
which defines a formal propagator, $\rho (t) = e^{-iLt} \rho (0)$.

407 \subsection{\label{introSection:ergodic}The Ergodic Hypothesis}
408
409 Various thermodynamic properties can be calculated from Molecular
410 Dynamics simulation. By comparing experimental values with the
411 calculated properties, one can determine the accuracy of the
412 simulation and the quality of the underlying model. However, both
experiments and computer simulations are usually performed over a
finite time interval, with measurements averaged over a period of
time, whereas Statistical Mechanics describes the average behavior
of a many-body system in terms of ensemble averages. Fortunately,
the Ergodic Hypothesis makes a connection between the time average
and the ensemble average. It states that the time average and the
average over the statistical ensemble are identical
\cite{Frenkel1996, Leach2001}:
\begin{equation}
\langle A(q , p) \rangle_t = \mathop {\lim }\limits_{t \to \infty }
\frac{1}{t}\int\limits_0^t {A(q(t'),p(t'))\,dt'} = \int_\Gamma
{A(q,p)\,\rho (q,p)\,dq\,dp}
\end{equation}
where $\langle A(q , p) \rangle_t$ is an equilibrium value of a
physical quantity and $\rho (q, p)$ is the equilibrium
distribution function. If an observation is averaged over a
sufficiently long time (longer than the relaxation time), all
accessible microstates in phase space are assumed to be equally
probed, giving a properly weighted statistical average. This allows
the researcher freedom of choice when deciding how best to measure a
given observable. If an ensemble-averaged approach sounds most
reasonable, Monte Carlo methods\cite{Metropolis1949} can be
utilized; if the system lends itself to a time-averaging
approach, the Molecular Dynamics techniques in
Sec.~\ref{introSection:molecularDynamics} will be the better
choice\cite{Frenkel1996}.
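
The hypothesis can be illustrated with a simple stochastic model (an
assumption made purely for illustration, not a method from the text):
for a Langevin velocity process, a long time average of $v^2$
approaches the Maxwell-Boltzmann ensemble average $k_B T/m$:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
gamma, kT, m = 1.0, 1.0, 1.0
dt, nsteps = 1.0e-3, 500000

v, acc = 0.0, 0.0
sigma = np.sqrt(2.0*gamma*kT/m*dt)
for _ in range(nsteps):
    v += -gamma*v*dt + sigma*rng.normal()   # Euler-Maruyama step
    acc += v*v
print(acc/nsteps, kT/m)  # time average of v^2 vs ensemble average
\end{verbatim}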
438
439 \section{\label{introSection:geometricIntegratos}Geometric Integrators}
A variety of numerical integrators have been proposed to simulate
the motions of atoms in MD simulations. They usually begin with
initial conditions and move the objects in the direction governed
by the differential equations. However, most of them ignore the
hidden physical laws contained within the equations. Since 1990,
geometric integrators, which preserve various phase-flow invariants
such as symplectic structure, volume and time reversal symmetry,
have been developed to address this issue\cite{Dullweber1997,
McLachlan1998, Leimkuhler1999}. The velocity Verlet method, which
happens to be a simple example of a symplectic integrator, continues
to gain popularity in the molecular dynamics community. This fact
can be partly explained by its geometric nature.
452
453 \subsection{\label{introSection:symplecticManifold}Symplectic Manifolds}
A \emph{manifold} is an abstract mathematical space. It looks
locally like Euclidean space, but when viewed globally, it may have
more complicated structure. A good example of a manifold is the
surface of the Earth. It seems to be flat locally, but it is round if
viewed as a whole. A \emph{differentiable manifold} (also known as
a \emph{smooth manifold}) is a manifold on which it is possible to
apply calculus\cite{Hirsch1997}. A \emph{symplectic manifold} is
defined as a pair $(M, \omega)$ which consists of a
\emph{differentiable manifold} $M$ and a closed, non-degenerate,
bilinear symplectic form, $\omega$. A symplectic form on a vector
space $V$ is a function $\omega(x, y)$ which satisfies
$\omega(\lambda_1x_1+\lambda_2x_2, y) = \lambda_1\omega(x_1, y)+
\lambda_2\omega(x_2, y)$, $\omega(x, y) = - \omega(y, x)$ and
$\omega(x, x) = 0$\cite{McDuff1998}. The cross product operation on
a vector space is an example of a symplectic form. One of the
motivations to study \emph{symplectic manifolds} in Hamiltonian
Mechanics is that a manifold can represent all possible
configurations of the system, and the phase space of the system can
then be described by its cotangent bundle\cite{Jost2002}. Every
symplectic manifold is even dimensional. For instance, in Hamilton's
equations, coordinates and momenta always appear in pairs.
475
476 \subsection{\label{introSection:ODE}Ordinary Differential Equations}
477
For an ordinary differential system defined as
\begin{equation}
\dot x = f(x)
\end{equation}
where $x = (q,p)^T$, the system is a canonical Hamiltonian system if
$f(x) = J\nabla _x H(x)$. Here, $H = H (q, p)$ is the Hamiltonian
function and $J$ is the skew-symmetric matrix
\begin{equation}
J = \left( {\begin{array}{*{20}c}
0 & I \\
{ - I} & 0 \\
\end{array}} \right)
\label{introEquation:canonicalMatrix}
\end{equation}
where $I$ is the identity matrix. Using this notation, the
Hamiltonian system can be rewritten as,
\begin{equation}
\frac{d}{{dt}}x = J\nabla _x H(x).
\label{introEquation:compactHamiltonian}
\end{equation}
In this case, $f$ is called a \emph{Hamiltonian vector field}.
Another generalization of Hamiltonian dynamics is Poisson
Dynamics\cite{Olver1986},
\begin{equation}
\dot x = J(x)\nabla _x H . \label{introEquation:poissonHamiltonian}
\end{equation}
The most obvious change is that the matrix $J$ now depends on $x$.
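
A minimal NumPy sketch (illustrative; the harmonic-oscillator
Hamiltonian is an assumption) builds the matrix $J$ of
Eq.~\ref{introEquation:canonicalMatrix} and evaluates the
Hamiltonian vector field $f(x) = J\nabla _x H(x)$:
\begin{verbatim}
import numpy as np

d = 3
I = np.eye(d)
J = np.block([[np.zeros((d, d)), I],
              [-I, np.zeros((d, d))]])   # skew-symmetric J

def grad_H(x, m=1.0, k=1.0):
    q, p = x[:d], x[d:]
    return np.concatenate([k*q, p/m])    # (dH/dq, dH/dp)

x = np.ones(2*d)
print(J @ grad_H(x))  # (p/m, -k q): Hamilton's qdot and pdot
\end{verbatim}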
504
505 \subsection{\label{introSection:exactFlow}Exact Propagator}
506
Let $x(t)$ be the exact solution of the ODE system
\begin{equation}
\frac{{dx}}{{dt}} = f(x). \label{introEquation:ODE}
\end{equation}
We can define its exact propagator (solution), $\varphi_\tau$, by
\[
x(t+\tau) = \varphi_\tau(x(t)),
\]
513 where $\tau$ is a fixed time step and $\varphi$ is a map from phase
514 space to itself. The propagator has the continuous group property,
515 \begin{equation}
516 \varphi _{\tau _1 } \circ \varphi _{\tau _2 } = \varphi _{\tau _1
517 + \tau _2 } .
518 \end{equation}
519 In particular,
520 \begin{equation}
521 \varphi _\tau \circ \varphi _{ - \tau } = I
522 \end{equation}
523 Therefore, the exact propagator is self-adjoint,
524 \begin{equation}
525 \varphi _\tau = \varphi _{ - \tau }^{ - 1}.
526 \end{equation}
527 The exact propagator can also be written in terms of operator,
528 \begin{equation}
529 \varphi _\tau (x) = e^{\tau \sum\limits_i {f_i (x)\frac{\partial
530 }{{\partial x_i }}} } (x) \equiv \exp (\tau f)(x).
531 \label{introEquation:exponentialOperator}
532 \end{equation}
In most cases, it is not easy to find the exact propagator
$\varphi_\tau$. Instead, we use an approximate map, $\psi_\tau$,
which is usually called an integrator. The order of an integrator
$\psi_\tau$ is $p$, if the Taylor series of $\psi_\tau$ and
$\varphi_\tau$ agree to order $p$,
\begin{equation}
\psi_\tau(x) = \varphi_\tau(x) + O(\tau^{p+1}).
\end{equation}
541
542 \subsection{\label{introSection:geometricProperties}Geometric Properties}
543
544 The hidden geometric properties\cite{Budd1999, Marsden1998} of an
545 ODE and its propagator play important roles in numerical studies.
546 Many of them can be found in systems which occur naturally in
applications. Let $\varphi$ be the propagator of a Hamiltonian vector
field; $\varphi$ is a \emph{symplectic} propagator if it satisfies,
549 \begin{equation}
550 {\varphi '}^T J \varphi ' = J.
551 \end{equation}
552 According to Liouville's theorem, the symplectic volume is invariant
553 under a Hamiltonian propagator, which is the basis for classical
554 statistical mechanics. Furthermore, the propagator of a Hamiltonian
555 vector field on a symplectic manifold can be shown to be a
556 symplectomorphism. As to the Poisson system,
557 \begin{equation}
558 {\varphi '}^T J \varphi ' = J \circ \varphi
559 \end{equation}
560 is the property that must be preserved by the integrator. It is
561 possible to construct a \emph{volume-preserving} propagator for a
562 source free ODE ($ \nabla \cdot f = 0 $), if the propagator
563 satisfies $ \det d\varphi = 1$. One can show easily that a
564 symplectic propagator will be volume-preserving. Changing the
565 variables $y = h(x)$ in an ODE (Eq.~\ref{introEquation:ODE}) will
566 result in a new system,
567 \[
568 \dot y = \tilde f(y) = ((dh \cdot f)h^{ - 1} )(y).
569 \]
The vector field $f$ has reversing symmetry $h$ if $f = - \tilde f$.
571 In other words, the propagator of this vector field is reversible if
572 and only if $ h \circ \varphi ^{ - 1} = \varphi \circ h $. A
conserved quantity of a general differential equation is a function
$ G:R^{2d} \to R $ which is constant for all solutions of the ODE
$\frac{{dx}}{{dt}} = f(x)$ ,
\[
\frac{{dG(x(t))}}{{dt}} = 0.
\]
Using the chain rule, one may obtain,
\[
\sum\limits_i {\frac{{\partial G}}{{\partial x_i }}} f_i (x) = f \cdot \nabla G = 0,
\]
583 which is the condition for conserved quantities. For a canonical
584 Hamiltonian system, the time evolution of an arbitrary smooth
585 function $G$ is given by,
586 \begin{eqnarray}
587 \frac{{dG(x(t))}}{{dt}} & = & [\nabla _x G(x(t))]^T \dot x(t) \notag\\
588 & = & [\nabla _x G(x(t))]^T J\nabla _x H(x(t)).
589 \label{introEquation:firstIntegral1}
590 \end{eqnarray}
Using Poisson bracket notation, Eq.~\ref{introEquation:firstIntegral1}
592 can be rewritten as
593 \[
594 \frac{d}{{dt}}G(x(t)) = \left\{ {G,H} \right\}(x(t)).
595 \]
596 Therefore, the sufficient condition for $G$ to be a conserved
597 quantity of a Hamiltonian system is $\left\{ {G,H} \right\} = 0.$ As
is well known, the Hamiltonian (or energy) $H$ of a Hamiltonian system
is a conserved quantity, which is due to the fact that $\{ H,H\} = 0$.
When designing any numerical method, one should always try to
601 preserve the structural properties of the original ODE and its
602 propagator.
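
As an illustrative check (not from the original text), the Jacobian
$M$ of one velocity Verlet step applied to a harmonic oscillator can
be computed and shown to satisfy the symplectic condition
$M^T J M = J$ and $\det M = 1$:
\begin{verbatim}
import numpy as np

h, m, k = 0.1, 1.0, 1.0
J = np.array([[0.0, 1.0], [-1.0, 0.0]])

def step(x):
    q, p = x
    p = p - 0.5*h*k*q    # half kick
    q = q + h*p/m        # drift
    p = p - 0.5*h*k*q    # half kick
    return np.array([q, p])

# The step is linear in (q, p), so its Jacobian is the matrix of the map.
M = np.column_stack([step(np.array([1.0, 0.0])),
                     step(np.array([0.0, 1.0]))])
print(M.T @ J @ M - J)     # ~ zero matrix: the map is symplectic
print(np.linalg.det(M))    # 1: the map is volume-preserving
\end{verbatim}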
603
604 \subsection{\label{introSection:constructionSymplectic}Construction of Symplectic Methods}
605 A lot of well established and very effective numerical methods have
606 been successful precisely because of their symplectic nature even
607 though this fact was not recognized when they were first
608 constructed. The most famous example is the Verlet-leapfrog method
609 in molecular dynamics. In general, symplectic integrators can be
610 constructed using one of four different methods.
611 \begin{enumerate}
612 \item Generating functions
613 \item Variational methods
614 \item Runge-Kutta methods
615 \item Splitting methods
616 \end{enumerate}
617 Generating functions\cite{Channell1990} tend to lead to methods
618 which are cumbersome and difficult to use. In dissipative systems,
619 variational methods can capture the decay of energy
620 accurately\cite{Kane2000}. Since they are geometrically unstable
against non-Hamiltonian perturbations, ordinary implicit Runge-Kutta
methods are not suitable for Hamiltonian systems. Recently, various
high-order explicit Runge-Kutta methods \cite{Owren1992,Chen2003}
have been developed to overcome this instability. However, due to the
computational penalty involved in implementing the Runge-Kutta
626 methods, they have not attracted much attention from the Molecular
627 Dynamics community. Instead, splitting methods have been widely
628 accepted since they exploit natural decompositions of the
629 system\cite{Tuckerman1992, McLachlan1998}.
630
631 \subsubsection{\label{introSection:splittingMethod}\textbf{Splitting Methods}}
632
The main idea behind splitting methods is to decompose the discrete
propagator $\varphi_h$ into a composition of simpler propagators,
635 \begin{equation}
636 \varphi _h = \varphi _{h_1 } \circ \varphi _{h_2 } \ldots \circ
637 \varphi _{h_n }
638 \label{introEquation:FlowDecomposition}
639 \end{equation}
where each of the sub-propagators is chosen such that it represents
a simpler integration of the system. Suppose that a Hamiltonian
642 system takes the form,
643 \[
644 H = H_1 + H_2.
645 \]
646 Here, $H_1$ and $H_2$ may represent different physical processes of
647 the system. For instance, they may relate to kinetic and potential
648 energy respectively, which is a natural decomposition of the
649 problem. If $H_1$ and $H_2$ can be integrated using exact
650 propagators $\varphi_1(t)$ and $\varphi_2(t)$, respectively, a
651 simple first order expression is then given by the Lie-Trotter
652 formula
653 \begin{equation}
654 \varphi _h = \varphi _{1,h} \circ \varphi _{2,h},
655 \label{introEquation:firstOrderSplitting}
656 \end{equation}
657 where $\varphi _h$ is the result of applying the corresponding
658 continuous $\varphi _i$ over a time $h$. By definition, as
659 $\varphi_i(t)$ is the exact solution of a Hamiltonian system, it
660 must follow that each operator $\varphi_i(t)$ is a symplectic map.
661 It is easy to show that any composition of symplectic propagators
662 yields a symplectic map,
663 \begin{equation}
664 (\varphi '\phi ')^T J\varphi '\phi ' = \phi '^T \varphi '^T J\varphi
665 '\phi ' = \phi '^T J\phi ' = J,
666 \label{introEquation:SymplecticFlowComposition}
667 \end{equation}
where $\varphi$ and $\phi$ both are symplectic maps. Thus operator
669 splitting in this context automatically generates a symplectic map.
The Lie-Trotter splitting
(Eq.~\ref{introEquation:firstOrderSplitting}) introduces
local errors proportional to $h^2$, while the Strang splitting gives
a second-order decomposition,
674 \begin{equation}
675 \varphi _h = \varphi _{1,h/2} \circ \varphi _{2,h} \circ \varphi
676 _{1,h/2} , \label{introEquation:secondOrderSplitting}
677 \end{equation}
which has a local error proportional to $h^3$. The Strang
splitting's popularity in the molecular simulation community is
attributable to its symmetric property,
681 \begin{equation}
682 \varphi _h^{ - 1} = \varphi _{ - h}.
683 \label{introEquation:timeReversible}
684 \end{equation}
685
686 \subsubsection{\label{introSection:exampleSplittingMethod}\textbf{Examples of the Splitting Method}}
The classical equations of motion for a system of interacting
particles can be written in Hamiltonian form,
689 \[
690 H = T + V
691 \]
692 where $T$ is the kinetic energy and $V$ is the potential energy.
693 Setting $H_1 = T, H_2 = V$ and applying the Strang splitting, one
694 obtains the following:
695 \begin{align}
696 q(\Delta t) &= q(0) + \dot{q}(0)\Delta t +
697 \frac{F[q(0)]}{m}\frac{\Delta t^2}{2}, %
698 \label{introEquation:Lp10a} \\%
699 %
700 \dot{q}(\Delta t) &= \dot{q}(0) + \frac{\Delta t}{2m}
701 \biggl [F[q(0)] + F[q(\Delta t)] \biggr]. %
702 \label{introEquation:Lp10b}
703 \end{align}
where $F(t)$ is the force at time $t$. This integration scheme is
known as \emph{velocity Verlet}, which is
symplectic (\ref{introEquation:SymplecticFlowComposition}),
time-reversible (\ref{introEquation:timeReversible}) and
volume-preserving (\ref{introEquation:volumePreserving}). These
geometric properties account for its long-time stability and its
popularity in the community. However, the most commonly used
velocity Verlet integration scheme is written as below,
712 \begin{align}
713 \dot{q}\biggl (\frac{\Delta t}{2}\biggr ) &=
714 \dot{q}(0) + \frac{\Delta t}{2m}\, F[q(0)], \label{introEquation:Lp9a}\\%
715 %
716 q(\Delta t) &= q(0) + \Delta t\, \dot{q}\biggl (\frac{\Delta t}{2}\biggr ),%
717 \label{introEquation:Lp9b}\\%
718 %
719 \dot{q}(\Delta t) &= \dot{q}\biggl (\frac{\Delta t}{2}\biggr ) +
720 \frac{\Delta t}{2m}\, F[q(t)]. \label{introEquation:Lp9c}
721 \end{align}
722 From the preceding splitting, one can see that the integration of
723 the equations of motion would follow:
724 \begin{enumerate}
\item Calculate the velocities at the half step, $\frac{\Delta t}{2}$, from the forces calculated at the initial position.
726
727 \item Use the half step velocities to move positions one whole step, $\Delta t$.
728
729 \item Evaluate the forces at the new positions, $\mathbf{q}(\Delta t)$, and use the new forces to complete the velocity move.
730
731 \item Repeat from step 1 with the new position, velocities, and forces assuming the roles of the initial values.
732 \end{enumerate}
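
A minimal Python sketch of this scheme (illustrative; the force
function and all parameters are placeholders) follows the four steps
above:
\begin{verbatim}
import numpy as np

def velocity_verlet(q, v, force, m, dt, nsteps):
    f = force(q)
    for _ in range(nsteps):
        v += 0.5*dt*f/m    # step 1: half-step velocities
        q += dt*v          # step 2: full-step positions
        f = force(q)       # step 3: forces at the new positions
        v += 0.5*dt*f/m    #         ... complete the velocity move
    return q, v            # step 4: repeat with the new values

# Example: 1-D harmonic oscillator with F = -q.
q, v = velocity_verlet(np.array([1.0]), np.array([0.0]),
                       lambda q: -q, m=1.0, dt=0.01, nsteps=1000)
print(q, v)
\end{verbatim}
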
By simply switching the order of the propagators in the splitting
and composing a new integrator, the \emph{position Verlet}
integrator can be generated,
\begin{align}
\dot q(\Delta t) &= \dot q(0) + \frac{{\Delta t}}{m}\,F\left[ {q(0) +
\frac{{\Delta t}}{2}\dot q(0)} \right], %
\label{introEquation:positionVerlet1} \\%
%
q(\Delta t) &= q(0) + \frac{{\Delta t}}{2}\left[ {\dot q(0) + \dot
q(\Delta t)} \right]. %
\label{introEquation:positionVerlet2}
\end{align}
745
746 \subsubsection{\label{introSection:errorAnalysis}\textbf{Error Analysis and Higher Order Methods}}
747
748 The Baker-Campbell-Hausdorff formula can be used to determine the
749 local error of a splitting method in terms of the commutator of the
operators (\ref{introEquation:exponentialOperator}) associated with
the sub-propagators. For operators $hX$ and $hY$ which are associated
with $\varphi_1(t)$ and $\varphi_2(t)$ respectively, we have
753 \begin{equation}
754 \exp (hX + hY) = \exp (hZ)
755 \end{equation}
756 where
\begin{equation}
hZ = hX + hY + \frac{{h^2 }}{2}[X,Y] + \frac{{h^3 }}{{12}}\left(
{[X,[X,Y]] + [Y,[Y,X]]} \right) + \ldots .
\end{equation}
Here, $[X,Y]$ is the commutator of the operators $X$ and $Y$ given by
762 \[
763 [X,Y] = XY - YX .
764 \]
765 Applying the Baker-Campbell-Hausdorff formula\cite{Varadarajan1974}
766 to the Strang splitting, we can obtain
767 \begin{eqnarray*}
768 \exp (h X/2)\exp (h Y)\exp (h X/2) & = & \exp (h X + h Y + h^2 [X,Y]/4 + h^2 [Y,X]/4 \\
769 & & \mbox{} + h^2 [X,X]/8 + h^2 [Y,Y]/8 \\
770 & & \mbox{} + h^3 [Y,[Y,X]]/12 - h^3[X,[X,Y]]/24 + \ldots
771 ).
772 \end{eqnarray*}
773 Since $ [X,Y] + [Y,X] = 0$ and $ [X,X] = 0$, the dominant local
774 error of Strang splitting is proportional to $h^3$. The same
775 procedure can be applied to a general splitting of the form
776 \begin{equation}
777 \varphi _{b_m h}^2 \circ \varphi _{a_m h}^1 \circ \varphi _{b_{m -
778 1} h}^2 \circ \ldots \circ \varphi _{a_1 h}^1 .
779 \end{equation}
A careful choice of the coefficients $a_1 \ldots b_m$ will lead to higher
order methods. Yoshida proposed an elegant way to compose higher
782 order methods based on symmetric splitting\cite{Yoshida1990}. Given
783 a symmetric second order base method $ \varphi _h^{(2)} $, a
784 fourth-order symmetric method can be constructed by composing,
785 \[
786 \varphi _h^{(4)} = \varphi _{\alpha h}^{(2)} \circ \varphi _{\beta
787 h}^{(2)} \circ \varphi _{\alpha h}^{(2)}
788 \]
where $ \alpha = \frac{1}{{2 - 2^{1/3} }}$ and $ \beta
= - \frac{{2^{1/3} }}{{2 - 2^{1/3} }}$. Moreover, a symmetric
791 integrator $ \varphi _h^{(2n + 2)}$ can be composed by
792 \begin{equation}
793 \varphi _h^{(2n + 2)} = \varphi _{\alpha h}^{(2n)} \circ \varphi
794 _{\beta h}^{(2n)} \circ \varphi _{\alpha h}^{(2n)},
795 \end{equation}
796 if the weights are chosen as
797 \[
\alpha = \frac{1}{{2 - 2^{1/(2n + 1)} }},\quad \beta =
 - \frac{{2^{1/(2n + 1)} }}{{2 - 2^{1/(2n + 1)} }} .
800 \]
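
The composition can be written compactly in code; the sketch below
(illustrative; the velocity Verlet base step for a harmonic
oscillator is an assumption) wraps any symmetric second-order step
into a fourth-order method using the weights quoted above:
\begin{verbatim}
def yoshida4(phi2, state, h):
    alpha = 1.0/(2.0 - 2.0**(1.0/3.0))               # outer weight
    beta = -2.0**(1.0/3.0)/(2.0 - 2.0**(1.0/3.0))    # middle weight
    for w in (alpha, beta, alpha):
        state = phi2(state, w*h)
    return state

def verlet_step(state, h, m=1.0, k=1.0):
    # symmetric 2nd-order base method for the harmonic oscillator
    q, p = state
    p -= 0.5*h*k*q
    q += h*p/m
    p -= 0.5*h*k*q
    return (q, p)

print(yoshida4(verlet_step, (1.0, 0.0), 0.1))
\end{verbatim}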
801
802 \section{\label{introSection:molecularDynamics}Molecular Dynamics}
803
As one of the principal tools of molecular modeling, Molecular
Dynamics has proven to be a powerful method for studying the functions
806 of biological systems, providing structural, thermodynamic and
807 dynamical information. The basic idea of molecular dynamics is that
808 macroscopic properties are related to microscopic behavior and
809 microscopic behavior can be calculated from the trajectories in
simulations. For instance, the instantaneous temperature of a
Hamiltonian system of $N$ particles can be measured by
812 \[
813 T = \sum\limits_{i = 1}^N {\frac{{m_i v_i^2 }}{{fk_B }}}
814 \]
where $m_i$ and $v_i$ are the mass and velocity of the $i$th particle
respectively, $f$ is the number of degrees of freedom, and $k_B$ is
the Boltzmann constant.
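
A direct transcription of this estimator (illustrative; units with
$k_B = 1$ and Gaussian random velocities are assumptions) is:
\begin{verbatim}
import numpy as np

def instantaneous_temperature(m, v, n_constraints=0):
    # m: (N,) masses; v: (N, 3) velocities; k_B = 1
    kinetic2 = np.sum(m[:, None] * v**2)   # sum_i m_i v_i^2
    f = 3*len(m) - n_constraints           # degrees of freedom
    return kinetic2 / f

rng = np.random.default_rng(1)
m = np.ones(1000)
v = rng.normal(scale=1.0, size=(1000, 3))  # kT/m = 1, so T ~ 1
print(instantaneous_temperature(m, v))
\end{verbatim}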
818
819 A typical molecular dynamics run consists of three essential steps:
820 \begin{enumerate}
821 \item Initialization
822 \begin{enumerate}
823 \item Preliminary preparation
824 \item Minimization
825 \item Heating
826 \item Equilibration
827 \end{enumerate}
828 \item Production
829 \item Analysis
830 \end{enumerate}
831 These three individual steps will be covered in the following
832 sections. Sec.~\ref{introSec:initialSystemSettings} deals with the
833 initialization of a simulation. Sec.~\ref{introSection:production}
834 will discuss issues of production runs.
835 Sec.~\ref{introSection:Analysis} provides the theoretical tools for
836 analysis of trajectories.
837
838 \subsection{\label{introSec:initialSystemSettings}Initialization}
839
840 \subsubsection{\textbf{Preliminary preparation}}
841
842 When selecting the starting structure of a molecule for molecular
simulation, one may retrieve its Cartesian coordinates from public
databases, such as the RCSB Protein Data Bank. Although
845 thousands of crystal structures of molecules are discovered every
846 year, many more remain unknown due to the difficulties of
847 purification and crystallization. Even for molecules with known
848 structures, some important information is missing. For example, a
missing hydrogen atom which acts as a donor in hydrogen bonding must
850 be added. Moreover, in order to include electrostatic interactions,
851 one may need to specify the partial charges for individual atoms.
852 Under some circumstances, we may even need to prepare the system in
a special configuration. For instance, when studying transport
phenomena in membrane systems, we may prepare the lipids in a
855 bilayer structure instead of placing lipids randomly in solvent,
856 since we are not interested in the slow self-aggregation process.
857
858 \subsubsection{\textbf{Minimization}}
859
It is quite possible that some of the molecules in the system from
preliminary preparation may be overlapping with each other. This
862 close proximity leads to high initial potential energy which
863 consequently jeopardizes any molecular dynamics simulations. To
864 remove these steric overlaps, one typically performs energy
865 minimization to find a more reasonable conformation. Several energy
minimization methods have been developed to explore the energy
surface and to locate local minima. While converging slowly
near the minimum, the steepest descent method is extremely robust when
869 systems are strongly anharmonic. Thus, it is often used to refine
870 structures from crystallographic data. Relying on the Hessian,
871 advanced methods like Newton-Raphson converge rapidly to a local
872 minimum, but become unstable if the energy surface is far from
873 quadratic. Another factor that must be taken into account, when
874 choosing energy minimization method, is the size of the system.
875 Steepest descent and conjugate gradient can deal with models of any
size. Because of the limits on computer memory to store the Hessian
matrix and the computing power needed to diagonalize these matrices,
most Newton-Raphson methods cannot be used with very large systems.
879
880 \subsubsection{\textbf{Heating}}
881
Typically, heating is performed by assigning random velocities
according to a Maxwell-Boltzmann distribution for a desired
temperature. Beginning at a lower temperature and gradually
increasing the temperature by assigning larger random velocities, we
end up setting the temperature of the system to a final temperature
at which the simulation will be conducted. In the heating phase, we
should also keep the system from drifting or rotating as a whole. To
do this, the net linear momentum and angular momentum of the system
are shifted to zero after each resampling from the Maxwell-Boltzmann
distribution.
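
A minimal sketch of this procedure (illustrative; removal of the net
angular momentum is omitted for brevity, and units with $k_B = 1$
are assumed) is:
\begin{verbatim}
import numpy as np

def initialize_velocities(m, T, rng):
    # per-component Maxwell-Boltzmann sampling with k_B = 1
    v = rng.normal(scale=np.sqrt(T/m)[:, None], size=(len(m), 3))
    v -= np.average(v, axis=0, weights=m)  # zero the net linear momentum
    return v

rng = np.random.default_rng(2)
m = np.ones(500)
for T in (50.0, 100.0, 200.0, 300.0):      # gradual heating schedule
    v = initialize_velocities(m, T, rng)
\end{verbatim}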
892
893 \subsubsection{\textbf{Equilibration}}
894
895 The purpose of equilibration is to allow the system to evolve
896 spontaneously for a period of time and reach equilibrium. The
procedure is continued until various statistical properties, such as
temperature, pressure, energy, volume and other structural
properties, become independent of time. Strictly
900 speaking, minimization and heating are not necessary, provided the
901 equilibration process is long enough. However, these steps can serve
902 as a means to arrive at an equilibrated structure in an effective
903 way.
904
905 \subsection{\label{introSection:production}Production}
906
907 The production run is the most important step of the simulation, in
908 which the equilibrated structure is used as a starting point and the
909 motions of the molecules are collected for later analysis. In order
910 to capture the macroscopic properties of the system, the molecular
911 dynamics simulation must be performed by sampling correctly and
912 efficiently from the relevant thermodynamic ensemble.
913
914 The most expensive part of a molecular dynamics simulation is the
calculation of non-bonded forces, such as van der Waals and
Coulombic forces. For a system of $N$ particles, the
917 complexity of the algorithm for pair-wise interactions is $O(N^2 )$,
918 which makes large simulations prohibitive in the absence of any
919 algorithmic tricks. A natural approach to avoid system size issues
920 is to represent the bulk behavior by a finite number of the
921 particles. However, this approach will suffer from surface effects
922 at the edges of the simulation. To offset this, \textit{Periodic
923 boundary conditions} (see Fig.~\ref{introFig:pbc}) were developed to
924 simulate bulk properties with a relatively small number of
925 particles. In this method, the simulation box is replicated
926 throughout space to form an infinite lattice. During the simulation,
when a particle moves in the primary cell, its images in the other cells
move in exactly the same direction with exactly the same
929 orientation. Thus, as a particle leaves the primary cell, one of its
930 images will enter through the opposite face.
931 \begin{figure}
932 \centering
933 \includegraphics[width=\linewidth]{pbc.eps}
934 \caption[An illustration of periodic boundary conditions]{A 2-D
935 illustration of periodic boundary conditions. As one particle leaves
936 the left of the simulation box, an image of it enters the right.}
937 \label{introFig:pbc}
938 \end{figure}
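
The following sketch (illustrative, for a cubic box of side $L$)
shows the two basic operations implied by periodic boundary
conditions: wrapping coordinates into the primary cell and applying
the minimum image convention to displacement vectors:
\begin{verbatim}
import numpy as np

def wrap(q, L):
    # image of a departing particle enters through the opposite face
    return q - L*np.floor(q/L)

def minimum_image(dq, L):
    # displacement to the nearest periodic image
    return dq - L*np.round(dq/L)

L = 10.0
q1 = np.array([9.5, 0.2, 5.0])
q2 = np.array([0.3, 9.9, 5.0])
print(minimum_image(q1 - q2, L))   # short vector across the boundary
\end{verbatim}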
939
940 %cutoff and minimum image convention
941 Another important technique to improve the efficiency of force
942 evaluation is to apply spherical cutoffs where particles farther
943 than a predetermined distance are not included in the calculation
944 \cite{Frenkel1996}. The use of a cutoff radius will cause a
945 discontinuity in the potential energy curve. Fortunately, one can
shift a simple radial potential to ensure that the potential curve goes
smoothly to zero at the cutoff radius. The cutoff strategy works
948 well for Lennard-Jones interaction because of its short range
949 nature. However, simply truncating the electrostatic interaction
950 with the use of cutoffs has been shown to lead to severe artifacts
951 in simulations. The Ewald summation, in which the slowly decaying
952 Coulomb potential is transformed into direct and reciprocal sums
953 with rapid and absolute convergence, has proved to minimize the
periodicity artifacts in liquid simulations. Taking advantage
of the fast Fourier transform (FFT) for calculating discrete Fourier
transforms, the particle mesh-based
methods\cite{Hockney1981,Shimada1993, Luty1994} are accelerated from
$O(N^{3/2})$ to $O(N \log N)$. An alternative approach is the
\emph{fast multipole method}\cite{Greengard1987, Greengard1994},
which treats Coulombic interactions exactly at short range, and
approximates the potential at long range through multipolar
expansion. In spite of their wide acceptance in the molecular
simulation community, these two methods are difficult to implement
964 correctly and efficiently. Instead, we use a damped and
965 charge-neutralized Coulomb potential method developed by Wolf and
his coworkers\cite{Wolf1999}. The shifted Coulomb potential for
particle $i$ and particle $j$ at distance $r_{ij}$ is given by:
968 \begin{equation}
969 V(r_{ij})= \frac{q_i q_j \textrm{erfc}(\alpha
970 r_{ij})}{r_{ij}}-\lim_{r_{ij}\rightarrow
971 R_\textrm{c}}\left\{\frac{q_iq_j \textrm{erfc}(\alpha
972 r_{ij})}{r_{ij}}\right\}. \label{introEquation:shiftedCoulomb}
973 \end{equation}
where $\alpha$ is the convergence parameter. Due to its lack of
inherent periodicity and its rapid convergence, this method is
extremely efficient and easy to implement.
977 \begin{figure}
978 \centering
979 \includegraphics[width=\linewidth]{shifted_coulomb.eps}
980 \caption[An illustration of shifted Coulomb potential]{An
981 illustration of shifted Coulomb potential.}
982 \label{introFigure:shiftedCoulomb}
983 \end{figure}
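
A direct transcription of Eq.~\ref{introEquation:shiftedCoulomb}
(illustrative; the parameter values are arbitrary and Gaussian units
are assumed) is:
\begin{verbatim}
import numpy as np
from scipy.special import erfc

def shifted_coulomb(r, qi, qj, alpha, r_cut):
    v = qi*qj*erfc(alpha*r)/r
    v_cut = qi*qj*erfc(alpha*r_cut)/r_cut   # limiting value at r -> R_c
    return np.where(r < r_cut, v - v_cut, 0.0)

r = np.linspace(0.5, 12.0, 100)
print(shifted_coulomb(r, 1.0, -1.0, alpha=0.2, r_cut=10.0)[:3])
\end{verbatim}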
984
985 %multiple time step
986
987 \subsection{\label{introSection:Analysis} Analysis}
988
Recently, advanced visualization techniques have been applied to
monitor the motions of molecules. Although the dynamics of the
991 system can be described qualitatively from animation, quantitative
992 trajectory analysis is more useful. According to the principles of
993 Statistical Mechanics in
994 Sec.~\ref{introSection:statisticalMechanics}, one can compute
995 thermodynamic properties, analyze fluctuations of structural
996 parameters, and investigate time-dependent processes of the molecule
997 from the trajectories.
998
999 \subsubsection{\label{introSection:thermodynamicsProperties}\textbf{Thermodynamic Properties}}
1000
1001 Thermodynamic properties, which can be expressed in terms of some
1002 function of the coordinates and momenta of all particles in the
system, can be directly computed from molecular dynamics. The usual
way to measure the pressure is based on the virial theorem of
Clausius, which states that the virial is equal to $-3Nk_BT$. For a
system with forces between particles, the total virial, $W$, contains
the contributions from the external pressure and the interactions
between the particles:
1009 \[
1010 W = - 3PV + \left\langle {\sum\limits_{i < j} {r{}_{ij} \cdot
1011 f_{ij} } } \right\rangle
1012 \]
1013 where $f_{ij}$ is the force between particle $i$ and $j$ at a
1014 distance $r_{ij}$. Thus, the expression for the pressure is given
1015 by:
1016 \begin{equation}
1017 P = \frac{{Nk_B T}}{V} - \frac{1}{{3V}}\left\langle {\sum\limits_{i
1018 < j} {r{}_{ij} \cdot f_{ij} } } \right\rangle
1019 \end{equation}
1020
1021 \subsubsection{\label{introSection:structuralProperties}\textbf{Structural Properties}}
1022
Structural properties of a simple fluid can be described by a set of
distribution functions. Among these functions, the \emph{pair
distribution function}, also known as the \emph{radial distribution
function}, is of most fundamental importance to liquid theory.
1027 Experimentally, pair distribution functions can be gathered by
1028 Fourier transforming raw data from a series of neutron diffraction
experiments and integrating over the structure factor
1030 \cite{Powles1973}. The experimental results can serve as a criterion
1031 to justify the correctness of a liquid model. Moreover, various
1032 equilibrium thermodynamic and structural properties can also be
1033 expressed in terms of the radial distribution function
\cite{Allen1987}. The pair distribution function $g(r)$ gives the
probability that a particle $i$ will be located at a distance $r$
from another particle $j$ in the system,
1037 \begin{equation}
1038 g(r) = \frac{V}{{N^2 }}\left\langle {\sum\limits_i {\sum\limits_{j
1039 \ne i} {\delta (r - r_{ij} )} } } \right\rangle = \frac{\rho
1040 (r)}{\rho}.
1041 \end{equation}
Note that the delta function can be replaced by a histogram in
computer simulations. Peaks in $g(r)$ represent solvent shells, and
the height of these peaks gradually decays to 1 at large distances
as the local density approaches the bulk density.
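
A histogram-based estimator of $g(r)$ (an illustrative sketch for
particles in a cubic periodic box; the ideal-gas normalization
assumes $N - 1 \approx N$) is:
\begin{verbatim}
import numpy as np

def pair_distribution(q, L, nbins=100):
    N = len(q)
    dq = q[:, None, :] - q[None, :, :]
    dq -= L*np.round(dq/L)                 # minimum image convention
    r = np.linalg.norm(dq, axis=-1)[np.triu_indices(N, k=1)]
    hist, edges = np.histogram(r, bins=nbins, range=(0.0, L/2))
    shell = 4.0*np.pi/3.0*(edges[1:]**3 - edges[:-1]**3)
    rho = N/L**3
    g = hist/(shell*rho*N/2.0)             # counts / ideal-gas counts
    return 0.5*(edges[1:] + edges[:-1]), g

rng = np.random.default_rng(3)
r_mid, g = pair_distribution(rng.uniform(0.0, 10.0, (200, 3)), L=10.0)
print(g[:5])   # ~1 everywhere for an ideal (uncorrelated) gas
\end{verbatim}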
1046
1047
1048 \subsubsection{\label{introSection:timeDependentProperties}\textbf{Time-dependent
1049 Properties}}
1050
1051 Time-dependent properties are usually calculated using \emph{time
1052 correlation functions}, which correlate random variables $A$ and $B$
1053 at two different times,
1054 \begin{equation}
1055 C_{AB} (t) = \left\langle {A(t)B(0)} \right\rangle.
1056 \label{introEquation:timeCorrelationFunction}
1057 \end{equation}
If $A$ and $B$ refer to the same variable, this kind of correlation
function is called an \emph{autocorrelation function}. One example
of an autocorrelation function is the velocity autocorrelation
function, which is directly related to transport properties of
molecular liquids:
1063 \[
1064 D = \frac{1}{3}\int\limits_0^\infty {\left\langle {v(t) \cdot v(0)}
1065 \right\rangle } dt
1066 \]
where $D$ is the diffusion constant. Unlike the velocity
autocorrelation function, which is averaged over time origins and
over all the atoms, the dipole autocorrelation function is
calculated for the entire system. The dipole autocorrelation
function is given by:
\[
c_{dipole} = \left\langle {u_{tot} (t) \cdot u_{tot} (0)}
\right\rangle
\]
1075 Here $u_{tot}$ is the net dipole of the entire system and is given
1076 by
1077 \[
1078 u_{tot} (t) = \sum\limits_i {u_i (t)}.
1079 \]
1080 In principle, many time correlation functions can be related to
1081 Fourier transforms of the infrared, Raman, and inelastic neutron
1082 scattering spectra of molecular liquids. In practice, one can
1083 extract the IR spectrum from the intensity of the molecular dipole
1084 fluctuation at each frequency using the following relationship:
1085 \[
1086 \hat c_{dipole} (v) = \int_{ - \infty }^\infty {c_{dipole} (t)e^{ -
1087 i2\pi vt} dt}.
1088 \]
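
The velocity autocorrelation function and the diffusion constant can
be estimated from a stored trajectory; in the illustrative sketch
below, a random array stands in for real trajectory data:
\begin{verbatim}
import numpy as np

def vacf(v, max_lag):
    # v: (nframes, N, 3); average over time origins and atoms
    nframes = v.shape[0]
    c = np.empty(max_lag)
    for lag in range(max_lag):
        c[lag] = np.mean(np.sum(v[:nframes - lag]*v[lag:], axis=-1))
    return c

rng = np.random.default_rng(4)
v = rng.normal(size=(1000, 10, 3))   # placeholder trajectory data
dt = 0.001
c = vacf(v, max_lag=100)
D = np.sum(c)*dt/3.0                 # D = (1/3) integral <v(t).v(0)> dt
print(c[0], D)
\end{verbatim}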
1089
1090 \section{\label{introSection:rigidBody}Dynamics of Rigid Bodies}
1091
Rigid bodies are frequently involved in modeling across different
areas, from engineering and physics to chemistry. For example,
missiles and vehicles are usually modeled as rigid bodies. The
movement of the objects in 3D gaming engines or other physics
1096 simulators is governed by rigid body dynamics. In molecular
1097 simulations, rigid bodies are used to simplify protein-protein
1098 docking studies\cite{Gray2003}.
1099
1100 It is very important to develop stable and efficient methods to
1101 integrate the equations of motion for orientational degrees of
1102 freedom. Euler angles are the natural choice to describe the
rotational degrees of freedom. However, due to $\frac {1}{\sin
\theta}$ singularities, the numerical integration of the
corresponding equations of motion is very inefficient and inaccurate.
1106 Although an alternative integrator using multiple sets of Euler
1107 angles can overcome this difficulty\cite{Barojas1973}, the
1108 computational penalty and the loss of angular momentum conservation
1109 still remain. A singularity-free representation utilizing
1110 quaternions was developed by Evans in 1977\cite{Evans1977}.
1111 Unfortunately, this approach uses a nonseparable Hamiltonian
1112 resulting from the quaternion representation, which prevents the
1113 symplectic algorithm from being utilized. Another different approach
1114 is to apply holonomic constraints to the atoms belonging to the
rigid body. Each atom moves independently under the normal forces
derived from the potential energy and the constraint forces which are
used to guarantee the rigidity. However, due to their iterative nature,
1118 the SHAKE and Rattle algorithms also converge very slowly when the
1119 number of constraints increases\cite{Ryckaert1977, Andersen1983}.
1120
A breakthrough in the geometric integration literature suggests that,
in order to develop a long-term integration scheme, one should
preserve the symplectic structure of the propagator. By introducing a
conjugate momentum to the rotation matrix $Q$ and reformulating
Hamilton's equations, a symplectic integrator, RSHAKE\cite{Kol1997},
was proposed to evolve the Hamiltonian system on a constraint manifold
by iteratively satisfying the orthogonality constraint $Q^T Q = 1$.
An alternative method using the quaternion representation was
developed by Omelyan\cite{Omelyan1998}. However, both of these
methods are iterative and inefficient. In this section, we describe a
symplectic Lie-Poisson integrator for rigid bodies developed by
Dullweber and his coworkers\cite{Dullweber1997} in depth.
1133
1134 \subsection{\label{introSection:constrainedHamiltonianRB}Constrained Hamiltonian for Rigid Bodies}
1135 The motion of a rigid body is Hamiltonian with the Hamiltonian
1136 function
1137 \begin{equation}
1138 H = \frac{1}{2}(p^T m^{ - 1} p) + \frac{1}{2}tr(PJ^{ - 1} P) +
1139 V(q,Q) + \frac{1}{2}tr[(QQ^T - 1)\Lambda ].
1140 \label{introEquation:RBHamiltonian}
1141 \end{equation}
Here, $q$ and $Q$ are the position vector and rotation matrix for
the rigid body, $p$ and $P$ are conjugate momenta to $q$ and $Q$,
and $J$, a diagonal matrix, is defined by
\[
I_{ii}^{ - 1} = \frac{1}{2}\sum\limits_{j \ne i} {J_{jj}^{ - 1} }
\]
where $I_{ii}$ is a diagonal element of the inertia tensor. This
constrained Hamiltonian is subject to a holonomic constraint,
1151 \begin{equation}
1152 Q^T Q = 1, \label{introEquation:orthogonalConstraint}
1153 \end{equation}
which ensures that the rotation matrix remains orthogonal. Using
Eqs.~\ref{introEquation:motionHamiltonianCoordinate} and
\ref{introEquation:motionHamiltonianMomentum}, one can write down
the equations of motion,
1158 \begin{eqnarray}
1159 \frac{{dq}}{{dt}} & = & \frac{p}{m}, \label{introEquation:RBMotionPosition}\\
1160 \frac{{dp}}{{dt}} & = & - \nabla _q V(q,Q), \label{introEquation:RBMotionMomentum}\\
1161 \frac{{dQ}}{{dt}} & = & PJ^{ - 1}, \label{introEquation:RBMotionRotation}\\
1162 \frac{{dP}}{{dt}} & = & - \nabla _Q V(q,Q) - 2Q\Lambda . \label{introEquation:RBMotionP}
1163 \end{eqnarray}
Differentiating Eq.~\ref{introEquation:orthogonalConstraint} and
using Eq.~\ref{introEquation:RBMotionRotation}, one obtains
\begin{equation}
Q^T PJ^{ - 1} + J^{ - 1} P^T Q = 0 .
\label{introEquation:RBFirstOrderConstraint}
\end{equation}
In general, there are two ways to satisfy the holonomic constraints.
We can use a constraint force provided by a Lagrange multiplier on
the normal manifold to keep the motion on the constraint space, or
we can simply evolve the system directly on the constraint manifold.
These two methods have been proved to be equivalent. The holonomic
constraint and the equations of motion define a constraint manifold
for rigid bodies
1177 \[
1178 M = \left\{ {(Q,P):Q^T Q = 1,Q^T PJ^{ - 1} + J^{ - 1} P^T Q = 0}
1179 \right\}.
1180 \]
Unfortunately, this constraint manifold is not $T^* SO(3)$, the
symplectic manifold on the Lie rotation group $SO(3)$. However, it
turns out that under a symplectic transformation, the cotangent
space and the phase space are diffeomorphic. By introducing
1185 \[
1186 \tilde Q = Q,\tilde P = \frac{1}{2}\left( {P - QP^T Q} \right),
1187 \]
the mechanical system subject to a holonomic constraint manifold $M$
can be re-formulated as a Hamiltonian system on the cotangent space
\[
T^* SO(3) = \left\{ {(\tilde Q,\tilde P):\tilde Q^T \tilde Q =
1,\tilde Q^T \tilde PJ^{ - 1} + J^{ - 1} \tilde P^T \tilde Q = 0} \right\}.
\]
For a body-fixed vector $X_i$ with respect to the center of mass of
the rigid body, the corresponding lab-fixed vector $X_i^{lab}$ is
given by
\begin{equation}
X_i^{lab} = Q X_i + q.
\end{equation}
Therefore, the potential energy $V(q,Q)$ is defined in terms of the
lab-fixed positions of all of the sites,
\[
V(q,Q) = V(\{ Q X_i + q\} ).
\]
Hence, the force and torque are given by
\[
 - \nabla _q V(q,Q) = F(q,Q) = \sum\limits_i {F_i (q,Q)},
\]
and
\[
 - \nabla _Q V(q,Q) = \sum\limits_i {F_i (q,Q)X_i^T },
\]
respectively. As a common choice to describe the rotational dynamics
of the rigid body, the angular momentum in the body-fixed frame,
$\Pi = Q^T P$, is introduced to rewrite the equations of motion,
\begin{equation}
\begin{array}{l}
 \dot \Pi = J^{ - 1} \Pi ^T \Pi + Q^T \sum\limits_i {F_i (q,Q)X_i^T } - \Lambda, \\
 \dot Q = Q\Pi J^{ - 1}, \\
 \end{array}
\label{introEquation:RBMotionPI}
\end{equation}
as well as the holonomic constraints $\Pi J^{ - 1} + J^{ - 1} \Pi ^T =
0$ and $Q^T Q = 1$. For a vector $v = (v_1 ,v_2 ,v_3 ) \in R^3$ and a
skew-symmetric matrix $\hat v \in so(3)$, the hat-map isomorphism,
1225 \begin{equation}
1226 v(v_1 ,v_2 ,v_3 ) \Leftrightarrow \hat v = \left(
1227 {\begin{array}{*{20}c}
1228 0 & { - v_3 } & {v_2 } \\
1229 {v_3 } & 0 & { - v_1 } \\
1230 { - v_2 } & {v_1 } & 0 \\
1231 \end{array}} \right),
1232 \label{introEquation:hatmapIsomorphism}
1233 \end{equation}
lets us associate matrix products with traditional vector
operations,
1236 \[
1237 \hat vu = v \times u.
1238 \]
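
As a concrete illustration, the hat-map and its defining property
are easy to check numerically (a small Python sketch; the helper
name \texttt{hat} is ours):
\begin{verbatim}
import numpy as np

def hat(v):
    """Hat-map of v in R^3: the skew-symmetric matrix with
    hat(v) @ u == np.cross(v, u) for every u."""
    return np.array([[ 0.0,  -v[2],  v[1]],
                     [ v[2],  0.0,  -v[0]],
                     [-v[1],  v[0],  0.0]])

v, u = np.array([1.0, 2.0, 3.0]), np.array([-0.5, 0.4, 0.1])
assert np.allclose(hat(v) @ u, np.cross(v, u))
\end{verbatim}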
Using Eq.~\ref{introEquation:RBMotionPI}, one can construct a
skew-symmetric matrix,
\begin{eqnarray}
(\dot \Pi - \dot \Pi ^T )&= &(\Pi - \Pi ^T )(J^{ - 1} \Pi + \Pi J^{ - 1} ) \notag \\
& & + \sum\limits_i {[Q^T F_i (q,Q)X_i^T - X_i F_i (q,Q)^T Q]} -
(\Lambda - \Lambda ^T ). \label{introEquation:skewMatrixPI}
\end{eqnarray}
Since $\Lambda$ is symmetric, the last term of
Eq.~\ref{introEquation:skewMatrixPI} is zero, which implies that the
Lagrange multiplier $\Lambda$ is absent from the equations of
motion. This unique property eliminates the requirement of
iterations which cannot be avoided in other methods\cite{Kol1997,
Omelyan1998}. Applying the hat-map isomorphism, we obtain the
equation of motion for the angular momentum in the body frame,
\begin{equation}
\dot \pi = \pi \times I^{ - 1} \pi + \sum\limits_i {\left( {Q^T
F_i (q,Q)} \right) \times X_i }.
\label{introEquation:bodyAngularMotion}
\end{equation}
In the same manner, the equation of motion for the rotation matrix
is given by
\[
\dot Q = Q\,{\rm skew}(I^{ - 1} \pi ).
\]
1263
1264 \subsection{\label{introSection:SymplecticFreeRB}Symplectic
1265 Lie-Poisson Integrator for Free Rigid Bodies}
1266
If there are no external forces exerted on the rigid body, the only
contribution to the rotational motion comes from the kinetic energy
(the first term of Eq.~\ref{introEquation:bodyAngularMotion}). The
free rigid body is an example of a Lie-Poisson system with
Hamiltonian function
1272 \begin{equation}
1273 T^r (\pi ) = T_1 ^r (\pi _1 ) + T_2^r (\pi _2 ) + T_3^r (\pi _3 )
1274 \label{introEquation:rotationalKineticRB}
1275 \end{equation}
where $T_i^r (\pi _i ) = \frac{{\pi _i ^2 }}{{2I_i }}$, and the
Lie-Poisson structure matrix is the hat-map of $\pi$,
\begin{equation}
J(\pi ) = \hat \pi = \left( {\begin{array}{*{20}c}
   0 & { - \pi _3 } & {\pi _2 }  \\
   {\pi _3 } & 0 & { - \pi _1 }  \\
   { - \pi _2 } & {\pi _1 } & 0  \\
\end{array}} \right).
\end{equation}
Thus, the dynamics of the free rigid body is governed by
1286 \begin{equation}
1287 \frac{d}{{dt}}\pi = J(\pi )\nabla _\pi T^r (\pi ).
1288 \end{equation}
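Since $J(\pi )\nabla _\pi T^r (\pi ) = \pi \times I^{ - 1} \pi$,
writing this out in components recovers the familiar Euler equations
for torque-free rotation,
\[
\dot \pi _1 = \frac{{I_2 - I_3 }}{{I_2 I_3 }}\pi _2 \pi _3 ,\quad
\dot \pi _2 = \frac{{I_3 - I_1 }}{{I_3 I_1 }}\pi _3 \pi _1 ,\quad
\dot \pi _3 = \frac{{I_1 - I_2 }}{{I_1 I_2 }}\pi _1 \pi _2 .
\]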
1289 One may notice that each $T_i^r$ in
1290 Eq.~\ref{introEquation:rotationalKineticRB} can be solved exactly.
1291 For instance, the equations of motion due to $T_1^r$ are given by
\begin{equation}
\frac{d}{{dt}}\pi = R_1 \pi ,\quad \frac{d}{{dt}}Q = QR_1
\label{introEquation:RBMotionSingleTerm}
\end{equation}
with
\[ R_1 = \left( {\begin{array}{*{20}c}
   0 & 0 & 0  \\
   0 & 0 & {\pi _1 /I_1 }  \\
   0 & { - \pi _1 /I_1 } & 0  \\
\end{array}} \right).
\]
The solution of Eq.~\ref{introEquation:RBMotionSingleTerm} is
\[
\pi (\Delta t) = e^{\Delta tR_1 } \pi (0),\quad Q(\Delta t) =
Q(0)e^{\Delta tR_1 }
\]
with
\[
e^{\Delta tR_1 } = \left( {\begin{array}{*{20}c}
   1 & 0 & 0  \\
   0 & {\cos \theta _1 } & {\sin \theta _1 }  \\
   0 & { - \sin \theta _1 } & {\cos \theta _1 }  \\
\end{array}} \right),\quad \theta _1 = \frac{{\pi _1 }}{{I_1 }}\Delta t.
\]
To avoid computing the expensive trigonometric functions in
$e^{\Delta tR_1 }$, we can use the Cayley transformation to obtain a
single-axis propagator,
\begin{eqnarray*}
e^{\Delta tR_1 } & \approx & \left( {1 - \frac{{\Delta t}}{2}R_1 } \right)^{ - 1} \left( {1 + \frac{{\Delta t}}{2}R_1 } \right) \\
%
& = & \left( \begin{array}{ccc}
1 & 0 & 0 \\
0 & \frac{1-\theta_1^2 / 4}{1 + \theta_1^2 / 4} & \frac{\theta_1}{1+
\theta_1^2 / 4} \\
0 & -\frac{\theta_1}{1+ \theta_1^2 / 4} & \frac{1-\theta_1^2 / 4}{1 +
\theta_1^2 / 4}
\end{array}
\right).
\end{eqnarray*}
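
In code, the exact and Cayley single-axis propagators differ only in
how the rotation angle enters the $2 \times 2$ block. A Python
sketch (the helper name \texttt{single\_axis\_propagator} and its
calling convention are ours; body axes are indexed 0, 1, 2):
\begin{verbatim}
import numpy as np

def single_axis_propagator(pi, Q, I, axis, h, cayley=False):
    """Advance (pi, Q) under T_axis^r alone for a time h:
    pi -> e^{h R} pi,  Q -> Q e^{h R}."""
    theta = h * pi[axis] / I[axis]      # e.g. theta_1 = pi_1 h / I_1
    if cayley:                          # cheap rational approximation
        d = 1.0 + 0.25 * theta * theta
        c, s = (1.0 - 0.25 * theta * theta) / d, theta / d
    else:                               # exact matrix exponential
        c, s = np.cos(theta), np.sin(theta)
    j, k = (axis + 1) % 3, (axis + 2) % 3   # the two axes that rotate
    G = np.eye(3)
    G[j, j] = G[k, k] = c
    G[j, k], G[k, j] = s, -s
    return G @ pi, Q @ G
\end{verbatim}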
The propagators for $T_2^r$ and $T_3^r$ can be found in the same
manner. In order to construct a second-order symplectic method, we
split the rotational kinetic energy into five terms,
1335 \[
1336 T^r (\pi ) = \frac{1}{2}T_1 ^r (\pi _1 ) + \frac{1}{2}T_2^r (\pi _2
1337 ) + T_3^r (\pi _3 ) + \frac{1}{2}T_2^r (\pi _2 ) + \frac{1}{2}T_1 ^r
1338 (\pi _1 ).
1339 \]
By concatenating the propagators corresponding to these five terms,
we can obtain a symplectic integrator,
1342 \[
1343 \varphi _{\Delta t,T^r } = \varphi _{\Delta t/2,\pi _1 } \circ
1344 \varphi _{\Delta t/2,\pi _2 } \circ \varphi _{\Delta t,\pi _3 }
1345 \circ \varphi _{\Delta t/2,\pi _2 } \circ \varphi _{\Delta t/2,\pi
1346 _1 }.
1347 \]
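
A sketch of this five-stage composition, reusing the single-axis
helper above (each stage applies an orthogonal matrix to $\pi$, so
$\parallel \pi \parallel$ is preserved exactly at every stage):
\begin{verbatim}
def free_rotor_step(pi, Q, I, dt):
    """Second-order five-term factorization of the free-rotor
    propagator: dt/2 about axes 1 and 2, dt about axis 3, then
    dt/2 about axes 2 and 1 again."""
    for axis, h in ((0, 0.5*dt), (1, 0.5*dt), (2, dt),
                    (1, 0.5*dt), (0, 0.5*dt)):
        pi, Q = single_axis_propagator(pi, Q, I, axis, h)
    return pi, Q
\end{verbatim}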
The non-canonical Lie-Poisson bracket $\{ F,G\} $ of two functions
$F(\pi )$ and $G(\pi )$ is defined by
1350 \[
1351 \{ F,G\} (\pi ) = [\nabla _\pi F(\pi )]^T J(\pi )\nabla _\pi G(\pi
1352 ).
1353 \]
If the Poisson bracket of a function $F$ with an arbitrary smooth
function $G$ is zero, $F$ is a \emph{Casimir}, which is a conserved
quantity in a Poisson system. We can easily verify that the norm of
the angular momentum, $\parallel \pi \parallel$, is a
\emph{Casimir}\cite{McLachlan1993}. Let $F(\pi ) = S(\frac{{\parallel
\pi \parallel ^2 }}{2})$ for an arbitrary function $S:R \to R$; then
by the chain rule,
\[
\nabla _\pi F(\pi ) = S'(\frac{{\parallel \pi \parallel ^2
}}{2})\pi.
\]
Thus, $[\nabla _\pi F(\pi )]^T J(\pi ) = - S'(\frac{{\parallel \pi
\parallel ^2 }}{2})\pi \times \pi = 0 $. This explicit Lie-Poisson
integrator is found to be both extremely efficient and stable. These
properties can be explained by the fact that the only approximation
involved is the small-angle Cayley transform and that the norm of
the angular momentum is conserved exactly.
1372
1373 \subsection{\label{introSection:RBHamiltonianSplitting} Hamiltonian
1374 Splitting for Rigid Body}
1375
The Hamiltonian of a rigid body can be separated into kinetic and
potential energy terms, $H = T(p,\pi ) + V(q,Q)$. The equations of
motion corresponding to the potential and kinetic energies are
listed in the table below.
\begin{table}
\caption{EQUATIONS OF MOTION DUE TO POTENTIAL AND KINETIC ENERGIES}
\begin{center}
\begin{tabular}{|l|l|}
  \hline
  Potential & Kinetic \\
  $\frac{{dq}}{{dt}} = 0$ & $\frac{{dq}}{{dt}} = \frac{p}{m}$ \\
  $\frac{d}{{dt}}p =  - \frac{{\partial V}}{{\partial q}}$ & $\frac{d}{{dt}}p = 0$ \\
  $\frac{d}{{dt}}Q = 0$ & $\frac{d}{{dt}}Q = Q\,{\rm skew}(I^{ - 1} \pi )$ \\
  $\frac{d}{{dt}}\pi  = \sum\limits_i {\left( {Q^T F_i (q,Q)} \right) \times X_i }$ & $\frac{d}{{dt}}\pi  = \pi  \times I^{ - 1} \pi$ \\
  \hline
\end{tabular}
\end{center}
\end{table}
A second-order symplectic method is now obtained by the composition
of the kinetic and potential propagators,
1397 \[
1398 \varphi _{\Delta t} = \varphi _{\Delta t/2,V} \circ \varphi
1399 _{\Delta t,T} \circ \varphi _{\Delta t/2,V}.
1400 \]
Moreover, $\varphi _{\Delta t/2,V}$ can be divided into two
sub-propagators corresponding to the force and the torque,
respectively,
1404 \[
1405 \varphi _{\Delta t/2,V} = \varphi _{\Delta t/2,F} \circ \varphi
1406 _{\Delta t/2,\tau }.
1407 \]
Since the operators associated with $\varphi _{\Delta t/2,F} $ and
$\varphi _{\Delta t/2,\tau }$ commute, the composition order inside
$\varphi _{\Delta t/2,V}$ does not matter. Furthermore, the kinetic
energy can be separated into a translational kinetic term, $T^t
(p)$, and a rotational kinetic term, $T^r (\pi )$,
\begin{equation}
T(p,\pi ) =T^t (p) + T^r (\pi ),
\end{equation}
where $ T^t (p) = \frac{1}{2}p^T m^{ - 1} p $ and $T^r (\pi )$ is
defined by Eq.~\ref{introEquation:rotationalKineticRB}. Therefore,
the corresponding propagators are given by
1419 \[
1420 \varphi _{\Delta t,T} = \varphi _{\Delta t,T^t } \circ \varphi
1421 _{\Delta t,T^r }.
1422 \]
Finally, we obtain the overall symplectic propagator for freely
moving rigid bodies,
1425 \begin{eqnarray}
1426 \varphi _{\Delta t} &=& \varphi _{\Delta t/2,F} \circ \varphi _{\Delta t/2,\tau } \notag\\
1427 & & \circ \varphi _{\Delta t,T^t } \circ \varphi _{\Delta t/2,\pi _1 } \circ \varphi _{\Delta t/2,\pi _2 } \circ \varphi _{\Delta t,\pi _3 } \circ \varphi _{\Delta t/2,\pi _2 } \circ \varphi _{\Delta t/2,\pi _1 } \notag\\
1428 & & \circ \varphi _{\Delta t/2,\tau } \circ \varphi _{\Delta t/2,F} .
1429 \label{introEquation:overallRBFlowMaps}
1430 \end{eqnarray}
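
Schematically, Eq.~\ref{introEquation:overallRBFlowMaps} translates
into the following update (a Python sketch building on the
free-rotor step above; \texttt{compute\_force\_torque} is a
hypothetical user-supplied routine returning the net force on the
center of mass and the body-frame torque
$\sum_i (Q^T F_i) \times X_i$):
\begin{verbatim}
def rigid_body_step(q, p, Q, pi, m, I, dt, compute_force_torque):
    """One second-order symplectic step for a rigid body."""
    F, tau = compute_force_torque(q, Q)
    p = p + 0.5 * dt * F                    # phi_{dt/2, F}
    pi = pi + 0.5 * dt * tau                # phi_{dt/2, tau}
    q = q + dt * p / m                      # phi_{dt, T^t}
    pi, Q = free_rotor_step(pi, Q, I, dt)   # phi_{dt, T^r}
    F, tau = compute_force_torque(q, Q)
    pi = pi + 0.5 * dt * tau                # phi_{dt/2, tau}
    p = p + 0.5 * dt * F                    # phi_{dt/2, F}
    return q, p, Q, pi
\end{verbatim}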
1431
1432 \section{\label{introSection:langevinDynamics}Langevin Dynamics}
As an alternative to Newtonian dynamics, Langevin dynamics, which
mimics a simple heat bath with stochastic and dissipative forces,
has been applied in a variety of studies. This section will review
the theory of Langevin dynamics. A brief derivation of the
generalized Langevin equation will be given first. Following that,
we will discuss the physical meaning of the terms appearing in the
equation as well as the calculation of the friction tensor from
hydrodynamic theory.
1441
1442 \subsection{\label{introSection:generalizedLangevinDynamics}Derivation of Generalized Langevin Equation}
1443
A harmonic bath model, in which an effective set of harmonic
oscillators is used to mimic the effect of a linearly responding
environment, has been widely used in quantum chemistry and
statistical mechanics. One of the successful applications of the
harmonic bath model is the derivation of the Generalized Langevin
Equation (GLE). Consider a system in which the degree of freedom
$x$ is assumed to couple to the bath linearly, giving a Hamiltonian
of the form
1452 \begin{equation}
1453 H = \frac{{p^2 }}{{2m}} + U(x) + H_B + \Delta U(x,x_1 , \ldots x_N)
1454 \label{introEquation:bathGLE}.
1455 \end{equation}
1456 Here $p$ is a momentum conjugate to $x$, $m$ is the mass associated
1457 with this degree of freedom, $H_B$ is a harmonic bath Hamiltonian,
\[
H_B  = \sum\limits_{\alpha  = 1}^N {\left\{ {\frac{{p_\alpha ^2
}}{{2m_\alpha  }} + \frac{1}{2}m_\alpha  \omega _\alpha ^2 x_\alpha ^2 }
\right\}}
\]
1463 where the index $\alpha$ runs over all the bath degrees of freedom,
1464 $\omega _\alpha$ are the harmonic bath frequencies, $m_\alpha$ are
1465 the harmonic bath masses, and $\Delta U$ is a bilinear system-bath
1466 coupling,
1467 \[
1468 \Delta U = - \sum\limits_{\alpha = 1}^N {g_\alpha x_\alpha x}
1469 \]
where $g_\alpha$ are the coupling constants between the bath
coordinates ($x_\alpha$) and the system coordinate ($x$).
1472 Introducing
\[
W(x) = U(x) - \sum\limits_{\alpha  = 1}^N {\frac{{g_\alpha ^2
}}{{2m_\alpha  \omega _\alpha ^2 }}} x^2
\]
and combining the last two terms in Eq.~\ref{introEquation:bathGLE}, we may rewrite the total Hamiltonian as
\[
H = \frac{{p^2 }}{{2m}} + W(x) + \sum\limits_{\alpha  = 1}^N
{\left\{ {\frac{{p_\alpha ^2 }}{{2m_\alpha  }} + \frac{1}{2}m_\alpha
\omega _\alpha ^2 \left( {x_\alpha   - \frac{{g_\alpha  }}{{m_\alpha
\omega _\alpha ^2 }}x} \right)^2 } \right\}}.
\]
Since the first two terms of the new Hamiltonian depend only on the
system coordinates, we can obtain the equations of motion for
Generalized Langevin Dynamics from Hamilton's equations,
\begin{equation}
m\ddot x =  - \frac{{\partial W(x)}}{{\partial x}} +
\sum\limits_{\alpha  = 1}^N {g_\alpha  \left( {x_\alpha   -
\frac{{g_\alpha  }}{{m_\alpha  \omega _\alpha ^2 }}x} \right)},
\label{introEquation:coorMotionGLE}
\end{equation}
and
\begin{equation}
m_\alpha  \ddot x_\alpha   =  - m_\alpha  \omega _\alpha ^2 \left( {x_\alpha   -
\frac{{g_\alpha  }}{{m_\alpha  \omega _\alpha ^2 }}x} \right).
\label{introEquation:bathMotionGLE}
\end{equation}
In order to derive an equation for $x$, the dynamics of the bath
variables $x_\alpha$ must first be solved exactly. The Laplace
transform, an integral transform particularly useful in solving
linear ordinary differential equations, is the appropriate tool for
this problem. The basic idea is to transform a difficult
differential equation into a simple algebraic problem which can be
solved easily. Then, by applying the inverse Laplace transform, we
can retrieve the solution of the original problem. Let $f(t)$ be a
function defined on $ [0,\infty ) $; the Laplace transform of $f(t)$
is a new function defined as
\[
L(f(t)) \equiv F(p) = \int_0^\infty {f(t)e^{ - pt} dt}
\]
where $p$ is real and $L$ is called the Laplace transform operator.
Below are some important properties of the Laplace transform:
\begin{eqnarray*}
 L(x + y)  & = & L(x) + L(y) \\
 L(ax)     & = & aL(x) \\
 L(\dot x) & = & pL(x) - x(0) \\
 L(\ddot x)& = & p^2 L(x) - px(0) - \dot x(0) \\
 L\left( {\int_0^t {g(t - \tau )h(\tau )d\tau } } \right)& = & G(p)H(p) \\
\end{eqnarray*}
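
As a quick check, the transforms of the trigonometric functions used
below follow directly from the definition; for example,
\[
L(\cos at) = \int_0^\infty {e^{ - pt} \cos at\,dt}  = {\mathop{\rm
Re}\nolimits} \int_0^\infty {e^{ - (p - ia)t} dt}  = {\mathop{\rm
Re}\nolimits} \frac{1}{{p - ia}} = \frac{p}{{p^2  + a^2 }}.
\]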
Applying the Laplace transform to the bath coordinates, we obtain
\begin{eqnarray*}
 p^2 L(x_\alpha  ) - px_\alpha  (0) - \dot x_\alpha  (0) & = &  - \omega _\alpha ^2 L(x_\alpha  ) + \frac{{g_\alpha  }}{{m_\alpha  }}L(x), \\
 L(x_\alpha  ) & = & \frac{{\frac{{g_\alpha  }}{{m_\alpha  }}L(x) + px_\alpha  (0) + \dot x_\alpha  (0)}}{{p^2  + \omega _\alpha ^2 }}. \\
\end{eqnarray*}
Substituting $L(x_\alpha )$ into the Laplace transform of
Eq.~\ref{introEquation:coorMotionGLE} gives
\begin{eqnarray*}
 mL(\ddot x) & = & - L\left( {\frac{{\partial W(x)}}{{\partial x}}} \right) - \sum\limits_{\alpha  = 1}^N {\frac{{g_\alpha ^2 }}{{m_\alpha  \omega _\alpha ^2 }}\frac{p}{{p^2  + \omega _\alpha ^2 }}\left[ {L(\dot x) + x(0)} \right]} \\
 & & + \sum\limits_{\alpha  = 1}^N {\frac{{g_\alpha  \left[ {px_\alpha  (0) + \dot x_\alpha  (0)} \right]}}{{p^2  + \omega _\alpha ^2 }}}.
\end{eqnarray*}
With the help of the inverse Laplace transforms
\[
\begin{array}{c}
 L(\cos at) = \frac{p}{{p^2  + a^2 }} \\
 L(\sin at) = \frac{a}{{p^2  + a^2 }} \\
 L(1) = \frac{1}{p} \\
 \end{array}
\]
and the convolution theorem, we obtain
\begin{eqnarray*}
m\ddot x & = &  - \frac{{\partial W(x)}}{{\partial x}} -
\sum\limits_{\alpha  = 1}^N {\frac{{g_\alpha ^2 }}{{m_\alpha  \omega
_\alpha ^2 }}\int_0^t {\cos (\omega _\alpha  \tau )\dot x(t - \tau
)d\tau } } \\
& & + \sum\limits_{\alpha  = 1}^N {\left\{ {\left[ {g_\alpha
x_\alpha  (0) - \frac{{g_\alpha ^2 }}{{m_\alpha  \omega _\alpha ^2
}}x(0)} \right]\cos (\omega _\alpha  t) + \frac{{g_\alpha  \dot
x_\alpha  (0)}}{{\omega _\alpha  }}\sin (\omega _\alpha  t)} \right\}}.
\end{eqnarray*}
Introducing the \emph{dynamic friction kernel},
\begin{equation}
\xi (t) = \sum\limits_{\alpha  = 1}^N {\frac{{g_\alpha ^2
}}{{m_\alpha  \omega _\alpha ^2 }}\cos (\omega _\alpha  t)},
\label{introEquation:dynamicFrictionKernelDefinition}
\end{equation}
and the \emph{random force},
\begin{equation}
R(t) = \sum\limits_{\alpha  = 1}^N {\left\{ {\left[ {g_\alpha
x_\alpha  (0) - \frac{{g_\alpha ^2 }}{{m_\alpha  \omega _\alpha ^2
}}x(0)} \right]\cos (\omega _\alpha  t) + \frac{{g_\alpha  \dot
x_\alpha  (0)}}{{\omega _\alpha  }}\sin (\omega _\alpha  t)} \right\}},
\label{introEquation:randomForceDefinition}
\end{equation}
the equation of motion can be rewritten as
\begin{equation}
m\ddot x =  - \frac{{\partial W}}{{\partial x}} - \int_0^t {\xi
(\tau )\dot x(t - \tau )d\tau }  + R(t),
\label{introEuqation:GeneralizedLangevinDynamics}
\end{equation}
which is known as the \emph{generalized Langevin equation}.
1583
1584 \subsubsection{\label{introSection:randomForceDynamicFrictionKernel}\textbf{Random Force and Dynamic Friction Kernel}}
1585
One may notice that $R(t)$ depends only on the initial conditions of
the bath, which implies that it is completely deterministic within
the context of a harmonic bath. However, it is easy to verify that
$R(t)$ is totally uncorrelated with $x$ and $\dot x$: $\left\langle
{x(t)R(t)} \right\rangle = 0$ and $\left\langle {\dot x(t)R(t)}
\right\rangle = 0$. This property is what we expect from a truly
random process. Since the initial conditions of the bath are drawn
from an equilibrium distribution, $R(t)$ can be modeled as a
Gaussian random process, and the stochastic nature of the GLE is
retained.
1594 %dynamic friction kernel
1595 The convolution integral
1596 \[
\int_0^t {\xi (\tau )\dot x(t - \tau )d\tau }
1598 \]
depends on the entire history of the evolution of $x$, which implies
that the bath retains memory of previous motions. In other words,
the bath requires a finite time to respond to changes in the motion
of the system. For a sluggish bath which responds slowly to changes
in the system coordinate, we may regard $\xi(t)$ as a constant,
$\xi(t) = \xi _0$. Hence, the convolution integral becomes
\[
\int_0^t {\xi (\tau )\dot x(t - \tau )d\tau }  = \xi _0 (x(t) - x(0)),
\]
and Eq.~\ref{introEuqation:GeneralizedLangevinDynamics} becomes
\[
m\ddot x =  - \frac{\partial }{{\partial x}}\left( {W(x) +
\frac{1}{2}\xi _0 (x - x(0))^2 } \right) + R(t),
\]
which can be used to describe the effect of dynamic caging in
viscous solvents. The other extreme is a bath that responds
infinitely quickly to motions in the system, so that $\xi (t)$ can
be taken as a $\delta$ function in time:
\[
\xi (t) = 2\xi _0 \delta (t).
\]
Hence, the convolution integral becomes
\[
\int_0^t {\xi (\tau )\dot x(t - \tau )d\tau }  = 2\xi _0 \int_0^t
{\delta (\tau )\dot x(t - \tau )d\tau }  = \xi _0 \dot x(t),
\]
and Eq.~\ref{introEuqation:GeneralizedLangevinDynamics} becomes
\begin{equation}
m\ddot x =  - \frac{{\partial W(x)}}{{\partial x}} - \xi _0 \dot
x(t) + R(t), \label{introEquation:LangevinEquation}
\end{equation}
which is known as the Langevin equation. The static friction
coefficient $\xi _0$ can either be calculated from the spectral
density or be determined by Stokes' law for regularly shaped
particles. A brief review of calculating friction tensors for
arbitrarily shaped particles is given in
Sec.~\ref{introSection:frictionTensor}.
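
For illustration, Eq.~\ref{introEquation:LangevinEquation} can be
integrated with a simple Euler-Maruyama scheme, in which the
variance of the discrete random force follows from $\left\langle
{R(t)R(0)} \right\rangle  = 2\xi _0 kT\delta (t)$ (a minimal Python
sketch under these assumptions; \texttt{dWdx} is a hypothetical
callback returning $\partial W/\partial x$):
\begin{verbatim}
import numpy as np

def langevin_trajectory(x0, v0, m, xi0, kT, dt, nsteps,
                        dWdx=lambda x: 0.0, rng=None):
    """Euler-Maruyama sketch of m x'' = -dW/dx - xi0 x' + R(t)."""
    rng = np.random.default_rng() if rng is None else rng
    x, v = x0, v0
    traj = np.empty(nsteps)
    kick = np.sqrt(2.0 * xi0 * kT * dt) / m  # random velocity kick
    for step in range(nsteps):
        v += (-dWdx(x) - xi0 * v) * dt / m + kick * rng.standard_normal()
        x += v * dt
        traj[step] = x
    return traj
\end{verbatim}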
1635
1636 \subsubsection{\label{introSection:secondFluctuationDissipation}\textbf{The Second Fluctuation Dissipation Theorem}}
1637
Defining a new set of coordinates,
\[
q_\alpha  (t) = x_\alpha  (t) - \frac{{g_\alpha  }}{{m_\alpha  \omega
_\alpha ^2 }}x(0),
\]
we can rewrite $R(t)$ as
\[
R(t) = \sum\limits_{\alpha  = 1}^N {g_\alpha  q_\alpha  (t)}.
\]
Since the $q_\alpha$ coordinates are harmonic oscillators whose
initial conditions are drawn from an equilibrium distribution,
1648 \begin{eqnarray*}
1649 \left\langle {q_\alpha ^2 } \right\rangle & = & \frac{{kT}}{{m_\alpha \omega _\alpha ^2 }} \\
1650 \left\langle {q_\alpha (t)q_\alpha (0)} \right\rangle & = & \left\langle {q_\alpha ^2 (0)} \right\rangle \cos (\omega _\alpha t) \\
1651 \left\langle {q_\alpha (t)q_\beta (0)} \right\rangle & = &\delta _{\alpha \beta } \left\langle {q_\alpha (t)q_\alpha (0)} \right\rangle \\
1652 \left\langle {R(t)R(0)} \right\rangle & = & \sum\limits_\alpha {\sum\limits_\beta {g_\alpha g_\beta \left\langle {q_\alpha (t)q_\beta (0)} \right\rangle } } \\
1653 & = &\sum\limits_\alpha {g_\alpha ^2 \left\langle {q_\alpha ^2 (0)} \right\rangle \cos (\omega _\alpha t)} \\
1654 & = &kT\xi (t)
1655 \end{eqnarray*}
Thus, we recover the \emph{second fluctuation dissipation theorem},
\begin{equation}
\xi (t) = \frac{1}{{kT}}\left\langle {R(t)R(0)} \right\rangle
\label{introEquation:secondFluctuationDissipation},
\end{equation}
1661 which acts as a constraint on the possible ways in which one can
1662 model the random force and friction kernel.
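
This relationship is easy to verify numerically: sampling the bath
initial conditions from the Boltzmann distribution and averaging
$R(t)R(0)$ over many realizations reproduces $kT\xi (t)$. In the
Python sketch below, all bath parameters (frequencies, masses, and
couplings) are arbitrary illustrative choices:
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
kT, nbath, nsamp = 1.0, 100, 4000
omega = rng.uniform(0.5, 5.0, nbath)    # arbitrary bath frequencies
mass = np.ones(nbath)                   # arbitrary bath masses
g = 0.1 * omega                         # arbitrary couplings
t = np.linspace(0.0, 4.0, 200)

# Boltzmann-distributed initial conditions for the q_alpha coordinates
q0 = rng.standard_normal((nsamp, nbath)) * np.sqrt(kT / (mass * omega**2))
qd0 = rng.standard_normal((nsamp, nbath)) * np.sqrt(kT / mass)

# R(t) = sum_alpha g_alpha q_alpha(t); each q_alpha evolves freely
Ct, St = np.cos(np.outer(omega, t)), np.sin(np.outer(omega, t))
R = (q0 * g) @ Ct + (qd0 * g / omega) @ St

lhs = (R * R[:, :1]).mean(axis=0)            # <R(t) R(0)>
rhs = kT * (g**2 / (mass * omega**2)) @ Ct   # kT * xi(t)
print(abs(lhs - rhs).max())                  # small, up to sampling noise
\end{verbatim}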