1 tim 2685 \chapter{\label{chapt:introduction}INTRODUCTION AND THEORETICAL BACKGROUND}
2    
3 tim 2693 \section{\label{introSection:classicalMechanics}Classical
4     Mechanics}
5 tim 2685
Molecular Dynamics simulations are closely related to Classical Mechanics: they are carried out by integrating the equations of motion for a given system of particles. There are three fundamental ideas behind classical mechanics. First, one can determine the state of a mechanical system at any time of interest; second, all of the mechanical properties of the system at that time can be determined by combining knowledge of the properties of the system with the specification of this state; finally, the specification of the state, when further combined with the laws of mechanics, is sufficient to predict the future behavior of the system.
16 tim 2685
17 tim 2693 \subsection{\label{introSection:newtonian}Newtonian Mechanics}
The discovery of Newton's three laws of mechanics, which govern the motion of particles, is the foundation of classical mechanics. Newton's first law defines a class of inertial frames: reference frames in which a particle not interacting with other bodies moves with constant speed in a fixed direction. With respect to inertial frames, Newton's second law has the form
24 tim 2694 \begin{equation}
F = \frac{dp}{dt} = m\frac{dv}{dt}
26 tim 2694 \label{introEquation:newtonSecondLaw}
27     \end{equation}
A point mass interacting with other bodies accelerates along the direction of the force acting on it. Let
30 tim 2702 $F_{ij}$ be the force that particle $i$ exerts on particle $j$, and
31     $F_{ji}$ be the force that particle $j$ exerts on particle $i$.
32 tim 2819 Newton's third law states that
33 tim 2694 \begin{equation}
34 tim 2898 F_{ij} = -F_{ji}.
35 tim 2694 \label{introEquation:newtonThirdLaw}
36     \end{equation}
37     Conservation laws of Newtonian Mechanics play very important roles
in solving mechanics problems. The linear momentum of a particle is conserved if it is free, i.e., if it experiences no net force. The second
40     conservation theorem concerns the angular momentum of a particle.
41     The angular momentum $L$ of a particle with respect to an origin
42     from which $r$ is measured is defined to be
43     \begin{equation}
44     L \equiv r \times p \label{introEquation:angularMomentumDefinition}
45     \end{equation}
46     The torque $\tau$ with respect to the same origin is defined to be
47     \begin{equation}
48 tim 2819 \tau \equiv r \times F \label{introEquation:torqueDefinition}
49 tim 2694 \end{equation}
50     Differentiating Eq.~\ref{introEquation:angularMomentumDefinition},
51     \[
52     \dot L = \frac{d}{{dt}}(r \times p) = (\dot r \times p) + (r \times
53     \dot p)
54     \]
55     since
56     \[
57     \dot r \times p = \dot r \times mv = m\dot r \times \dot r \equiv 0
58     \]
59     thus,
60     \begin{equation}
61 tim 2819 \dot L = r \times \dot p = \tau
62 tim 2694 \end{equation}
If there are no external torques acting on a body, its angular momentum is conserved. The last conservation theorem states that if all forces are conservative, energy is conserved,
66     \begin{equation}E = T + V. \label{introEquation:energyConservation}
67 tim 2696 \end{equation}
All of these conserved quantities are important factors in determining
69     the quality of numerical integration schemes for rigid bodies
70     \cite{Dullweber1997}.
71 tim 2694
72 tim 2693 \subsection{\label{introSection:lagrangian}Lagrangian Mechanics}
73 tim 2692
Newtonian Mechanics suffers from an important limitation: motions can only be described in Cartesian coordinate systems, which makes it impossible to predict analytically the properties of the system even if we know all of the details of the interactions. In order to overcome some of the practical difficulties which arise in attempts to apply Newton's equations to complex systems, approximate numerical procedures may be developed.
81 tim 2692
82 tim 2819 \subsubsection{\label{introSection:halmiltonPrinciple}\textbf{Hamilton's
83     Principle}}
84 tim 2692
85     Hamilton introduced the dynamical principle upon which it is
86 tim 2819 possible to base all of mechanics and most of classical physics.
87 tim 2898 Hamilton's Principle may be stated as follows: the actual
88     trajectory, along which a dynamical system may move from one point
89     to another within a specified time, is derived by finding the path
which minimizes the time integral of the difference between the kinetic energy $K$ and the potential energy $U$,
92 tim 2692 \begin{equation}
93 tim 2899 \delta \int_{t_1 }^{t_2 } {(K - U)dt = 0}.
94 tim 2693 \label{introEquation:halmitonianPrinciple1}
95 tim 2692 \end{equation}
96     For simple mechanical systems, where the forces acting on the
97 tim 2819 different parts are derivable from a potential, the Lagrangian
98     function $L$ can be defined as the difference between the kinetic
99     energy of the system and its potential energy,
100 tim 2692 \begin{equation}
101 tim 2904 L \equiv K - U = L(q_i ,\dot q_i ).
102 tim 2692 \label{introEquation:lagrangianDef}
103     \end{equation}
104 tim 2904 Thus, Eq.~\ref{introEquation:halmitonianPrinciple1} becomes
105 tim 2692 \begin{equation}
106 tim 2904 \delta \int_{t_1 }^{t_2 } {L dt = 0} .
107 tim 2693 \label{introEquation:halmitonianPrinciple2}
108 tim 2692 \end{equation}
109    
110 tim 2819 \subsubsection{\label{introSection:equationOfMotionLagrangian}\textbf{The
111     Equations of Motion in Lagrangian Mechanics}}
112 tim 2692
For a system of $f$ degrees of freedom, the equations of motion in the Lagrangian form are
115 tim 2692 \begin{equation}
116     \frac{d}{{dt}}\frac{{\partial L}}{{\partial \dot q_i }} -
117     \frac{{\partial L}}{{\partial q_i }} = 0,{\rm{ }}i = 1, \ldots,f
118 tim 2693 \label{introEquation:eqMotionLagrangian}
119 tim 2692 \end{equation}
where $q_{i}$ is a generalized coordinate and $\dot{q_{i}}$ is a generalized velocity.
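
As a simple illustration, consider a one-dimensional harmonic oscillator with mass $m$ and force constant $k$. Its Lagrangian is
\[
L = \frac{1}{2}m\dot q^2  - \frac{1}{2}kq^2 ,
\]
and substituting this form into Eq.~\ref{introEquation:eqMotionLagrangian} immediately recovers the familiar Newtonian equation of motion, $m\ddot q =  - kq$.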
122    
123 tim 2693 \subsection{\label{introSection:hamiltonian}Hamiltonian Mechanics}
124 tim 2692
125     Arising from Lagrangian Mechanics, Hamiltonian Mechanics was
126     introduced by William Rowan Hamilton in 1833 as a re-formulation of
127     classical mechanics. If the potential energy of a system is
128 tim 2819 independent of velocities, the momenta can be defined as
129 tim 2692 \begin{equation}
130     p_i = \frac{\partial L}{\partial \dot q_i}
131     \label{introEquation:generalizedMomenta}
132     \end{equation}
133 tim 2693 The Lagrange equations of motion are then expressed by
134 tim 2692 \begin{equation}
\dot p_i = \frac{{\partial L}}{{\partial q_i }}
136     \label{introEquation:generalizedMomentaDot}
137     \end{equation}
138     With the help of the generalized momenta, we may now define a new
139     quantity $H$ by the equation
140     \begin{equation}
141     H = \sum\limits_k {p_k \dot q_k } - L ,
142 tim 2692 \label{introEquation:hamiltonianDefByLagrangian}
143     \end{equation}
144     where $ \dot q_1 \ldots \dot q_f $ are generalized velocities and
145 tim 2898 $L$ is the Lagrangian function for the system. Differentiating
146     Eq.~\ref{introEquation:hamiltonianDefByLagrangian}, one can obtain
147 tim 2693 \begin{equation}
148     dH = \sum\limits_k {\left( {p_k d\dot q_k + \dot q_k dp_k -
149     \frac{{\partial L}}{{\partial q_k }}dq_k - \frac{{\partial
150     L}}{{\partial \dot q_k }}d\dot q_k } \right)} - \frac{{\partial
151 tim 2904 L}}{{\partial t}}dt . \label{introEquation:diffHamiltonian1}
152 tim 2693 \end{equation}
Making use of Eq.~\ref{introEquation:generalizedMomenta}, the first and fourth terms in the parentheses cancel. Therefore,
155 tim 2693 Eq.~\ref{introEquation:diffHamiltonian1} can be rewritten as
156     \begin{equation}
157     dH = \sum\limits_k {\left( {\dot q_k dp_k - \dot p_k dq_k }
158 tim 2904 \right)} - \frac{{\partial L}}{{\partial t}}dt .
159 tim 2693 \label{introEquation:diffHamiltonian2}
160     \end{equation}
By identifying the coefficients of $dq_k$, $dp_k$ and $dt$, we can
162     find
163     \begin{equation}
164 tim 2819 \frac{{\partial H}}{{\partial p_k }} = \dot {q_k}
165 tim 2693 \label{introEquation:motionHamiltonianCoordinate}
166     \end{equation}
167     \begin{equation}
168 tim 2819 \frac{{\partial H}}{{\partial q_k }} = - \dot {p_k}
169 tim 2693 \label{introEquation:motionHamiltonianMomentum}
170     \end{equation}
171     and
172     \begin{equation}
173     \frac{{\partial H}}{{\partial t}} = - \frac{{\partial L}}{{\partial
174     t}}
175     \label{introEquation:motionHamiltonianTime}
176     \end{equation}
177 tim 2899 where Eq.~\ref{introEquation:motionHamiltonianCoordinate} and
178 tim 2693 Eq.~\ref{introEquation:motionHamiltonianMomentum} are Hamilton's
equations of motion. Due to their symmetric form, they are also known as the canonical equations of motion \cite{Goldstein2001}.
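
As a concrete example, the one-dimensional harmonic oscillator of the previous subsection has the Hamiltonian
\[
H = \frac{p^2}{2m} + \frac{1}{2}kq^2 ,
\]
for which Eq.~\ref{introEquation:motionHamiltonianCoordinate} and Eq.~\ref{introEquation:motionHamiltonianMomentum} give $\dot q = p/m$ and $\dot p =  - kq$, reproducing the Newtonian result.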
181 tim 2693
An important difference between the Lagrangian approach and the Hamiltonian approach is that the Lagrangian is considered to be a function of the generalized velocities $\dot q_i$ and coordinates $q_i$, while the Hamiltonian is considered to be a function of the generalized momenta $p_i$ and the conjugate coordinates $q_i$. Hamiltonian Mechanics is more appropriate for application to statistical mechanics and quantum mechanics, since it treats the coordinates and their conjugate momenta as independent variables and works only with first-order differential equations\cite{Marion1990}.
191 tim 2696 In Newtonian Mechanics, a system described by conservative forces
192 tim 2899 conserves the total energy
193     (Eq.~\ref{introEquation:energyConservation}). It follows that
194 tim 2904 Hamilton's equations of motion conserve the total Hamiltonian
195 tim 2696 \begin{equation}
196     \frac{{dH}}{{dt}} = \sum\limits_i {\left( {\frac{{\partial
197     H}}{{\partial q_i }}\dot q_i + \frac{{\partial H}}{{\partial p_i
198     }}\dot p_i } \right)} = \sum\limits_i {\left( {\frac{{\partial
199     H}}{{\partial q_i }}\frac{{\partial H}}{{\partial p_i }} -
200     \frac{{\partial H}}{{\partial p_i }}\frac{{\partial H}}{{\partial
201 tim 2904 q_i }}} \right) = 0}. \label{introEquation:conserveHalmitonian}
202 tim 2696 \end{equation}
203    
204 tim 2693 \section{\label{introSection:statisticalMechanics}Statistical
205     Mechanics}
206 tim 2692
The thermodynamic behavior and properties of a Molecular Dynamics simulation are governed by the principles of Statistical Mechanics. The following section will give a brief introduction to some of the Statistical Mechanics concepts and theorems presented in this dissertation.
212 tim 2692
213 tim 2700 \subsection{\label{introSection:ensemble}Phase Space and Ensemble}
214 tim 2692
215 tim 2700 Mathematically, phase space is the space which represents all
216     possible states. Each possible state of the system corresponds to
217     one unique point in the phase space. For mechanical systems, the
218     phase space usually consists of all possible values of position and
momentum variables. Consider a dynamic system of $f$ particles in Cartesian space, where each of the $6f$ coordinates and momenta is assigned to one of $6f$ mutually orthogonal axes; the phase space of this system is then a $6f$ dimensional space. A point, $x = (\vec q_1 , \ldots ,\vec q_f ,\vec p_1 , \ldots ,\vec p_f )$, with a unique set of values of the $6f$ coordinates and momenta, is a phase space vector.
232 tim 2888 %%%fix me
233 tim 2700
234 tim 2888 In statistical mechanics, the condition of an ensemble at any time
235 tim 2700 can be regarded as appropriately specified by the density $\rho$
236     with which representative points are distributed over the phase
237 tim 2819 space. The density distribution for an ensemble with $f$ degrees of
238     freedom is defined as,
239 tim 2700 \begin{equation}
240     \rho = \rho (q_1 , \ldots ,q_f ,p_1 , \ldots ,p_f ,t).
241     \label{introEquation:densityDistribution}
242     \end{equation}
Governed by the principles of mechanics, the phase points change their locations over time, which in turn changes the density at any point in phase space. Hence, the density distribution must also be taken to be a function of time. The number of systems $\delta N$ in a volume element of phase space at time $t$ can be determined by,
248 tim 2700 \begin{equation}
249     \delta N = \rho (q,p,t)dq_1 \ldots dq_f dp_1 \ldots dp_f.
250     \label{introEquation:deltaN}
251     \end{equation}
252 tim 2819 Assuming a large enough population of systems, we can sufficiently
approximate $\delta N$ without introducing a discontinuity when we go
254     from one region in the phase space to another. By integrating over
255     the whole phase space,
256 tim 2700 \begin{equation}
257     N = \int { \ldots \int {\rho (q,p,t)dq_1 } ...dq_f dp_1 } ...dp_f
258     \label{introEquation:totalNumberSystem}
259     \end{equation}
gives us an expression for the total number of systems. Hence, the probability per unit volume in phase space can be obtained by,
262     \begin{equation}
263     \frac{{\rho (q,p,t)}}{N} = \frac{{\rho (q,p,t)}}{{\int { \ldots \int
264     {\rho (q,p,t)dq_1 } ...dq_f dp_1 } ...dp_f }}.
265     \label{introEquation:unitProbability}
266     \end{equation}
267 tim 2850 With the help of Eq.~\ref{introEquation:unitProbability} and the
268     knowledge of the system, it is possible to calculate the average
269 tim 2700 value of any desired quantity which depends on the coordinates and
270     momenta of the system. Even when the dynamics of the real system is
271     complex, or stochastic, or even discontinuous, the average
properties of the ensemble of possibilities as a whole remain
273     well defined. For a classical system in thermal equilibrium with its
274     environment, the ensemble average of a mechanical quantity, $\langle
275     A(q , p) \rangle_t$, takes the form of an integral over the phase
276     space of the system,
277 tim 2700 \begin{equation}
278     \langle A(q , p) \rangle_t = \frac{{\int { \ldots \int {A(q,p)\rho
279     (q,p,t)dq_1 } ...dq_f dp_1 } ...dp_f }}{{\int { \ldots \int {\rho
280 tim 2905 (q,p,t)dq_1 } ...dq_f dp_1 } ...dp_f }}.
281 tim 2700 \label{introEquation:ensembelAverage}
282     \end{equation}
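
As a purely schematic illustration of Eq.~\ref{introEquation:ensembelAverage} (not part of the simulations described in this dissertation), the canonical ensemble average of a quantity $A(q,p)$ for a one-dimensional harmonic oscillator can be estimated in Python by weighting uniformly sampled phase points with the Boltzmann factor $e^{-\beta H}$; the function name and parameter values below are hypothetical:
\begin{verbatim}
import numpy as np

def ensemble_average(A, beta=1.0, m=1.0, k=1.0, n_samples=200000):
    """Monte Carlo estimate of <A(q,p)> for H = p^2/(2m) + k q^2/2,
    weighting uniform phase-space samples by the Boltzmann factor."""
    rng = np.random.default_rng(0)
    q = rng.uniform(-10.0, 10.0, n_samples)   # uniform samples of the coordinate
    p = rng.uniform(-10.0, 10.0, n_samples)   # uniform samples of the momentum
    H = p**2 / (2.0 * m) + 0.5 * k * q**2     # Hamiltonian at each phase point
    w = np.exp(-beta * H)                     # unnormalized density rho
    return np.sum(A(q, p) * w) / np.sum(w)    # ratio of the two phase-space integrals

# the average potential energy of the oscillator should be k_B T / 2 = 0.5
print(ensemble_average(lambda q, p: 0.5 * q**2))
\end{verbatim}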
283    
284     There are several different types of ensembles with different
285     statistical characteristics. As a function of macroscopic
parameters, such as temperature, the partition function can be used to describe the statistical properties of a system in thermodynamic equilibrium. As an ensemble of systems, each of which is thermally isolated and conserves energy, the microcanonical ensemble (NVE) has a partition function of the form
291 tim 2700 \begin{equation}
292 tim 2904 \Omega (N,V,E) = e^{\beta TS}. \label{introEquation:NVEPartition}
293 tim 2700 \end{equation}
294 tim 2905 A canonical ensemble (NVT) is an ensemble of systems, each of which
295 tim 2700 can share its energy with a large heat reservoir. The distribution
296     of the total energy amongst the possible dynamical states is given
297     by the partition function,
298     \begin{equation}
299 tim 2899 \Omega (N,V,T) = e^{ - \beta A}.
300 tim 2700 \label{introEquation:NVTPartition}
301     \end{equation}
302     Here, $A$ is the Helmholtz free energy which is defined as $ A = U -
TS$. Since most experiments are carried out under constant pressure conditions, the isothermal-isobaric ensemble (NPT) plays a very important role in molecular simulations. The isothermal-isobaric ensemble allows the system to exchange energy with a heat bath of
307     temperature $T$ and to change the volume as well. Its partition
308     function is given as
309 tim 2700 \begin{equation}
\Delta (N,P,T) = e^{ - \beta G}.
311     \label{introEquation:NPTPartition}
312     \end{equation}
Here, $G = U - TS + PV$ is the Gibbs free energy.
314    
315     \subsection{\label{introSection:liouville}Liouville's theorem}
316    
317 tim 2819 Liouville's theorem is the foundation on which statistical mechanics
318     rests. It describes the time evolution of the phase space
319 tim 2700 distribution function. In order to calculate the rate of change of
320 tim 2850 $\rho$, we begin from Eq.~\ref{introEquation:deltaN}. If we consider
321     the two faces perpendicular to the $q_1$ axis, which are located at
$q_1$ and $q_1 + \delta q_1$, the number of phase points per unit time leaving through the face at $q_1 + \delta q_1$ is given by the expression,
324 tim 2700 \begin{equation}
325     \left( {\rho + \frac{{\partial \rho }}{{\partial q_1 }}\delta q_1 }
326     \right)\left( {\dot q_1 + \frac{{\partial \dot q_1 }}{{\partial q_1
327     }}\delta q_1 } \right)\delta q_2 \ldots \delta q_f \delta p_1
328     \ldots \delta p_f .
329     \end{equation}
Summing the net flow of phase points over all pairs of faces of the volume element, we obtain
331     \begin{equation}
332     \frac{{d(\delta N)}}{{dt}} = - \sum\limits_{i = 1}^f {\left[ {\rho
333     \left( {\frac{{\partial \dot q_i }}{{\partial q_i }} +
334     \frac{{\partial \dot p_i }}{{\partial p_i }}} \right) + \left(
335     {\frac{{\partial \rho }}{{\partial q_i }}\dot q_i + \frac{{\partial
336     \rho }}{{\partial p_i }}\dot p_i } \right)} \right]} \delta q_1
337     \ldots \delta q_f \delta p_1 \ldots \delta p_f .
338     \end{equation}
339     Differentiating the equations of motion in Hamiltonian formalism
340     (\ref{introEquation:motionHamiltonianCoordinate},
341     \ref{introEquation:motionHamiltonianMomentum}), we can show,
342     \begin{equation}
343     \sum\limits_i {\left( {\frac{{\partial \dot q_i }}{{\partial q_i }}
344     + \frac{{\partial \dot p_i }}{{\partial p_i }}} \right)} = 0 ,
345     \end{equation}
which cancels the first term on the right hand side. Furthermore, dividing both sides by $ \delta q_1 \ldots \delta q_f \delta p_1 \ldots \delta p_f $, we can write out Liouville's theorem in a
349     simple form,
350     \begin{equation}
351     \frac{{\partial \rho }}{{\partial t}} + \sum\limits_{i = 1}^f
352     {\left( {\frac{{\partial \rho }}{{\partial q_i }}\dot q_i +
353     \frac{{\partial \rho }}{{\partial p_i }}\dot p_i } \right)} = 0 .
354     \label{introEquation:liouvilleTheorem}
355     \end{equation}
356     Liouville's theorem states that the distribution function is
357     constant along any trajectory in phase space. In classical
358 tim 2850 statistical mechanics, since the number of members in an ensemble is
359     huge and constant, we can assume the local density has no reason
360     (other than classical mechanics) to change,
361 tim 2700 \begin{equation}
362     \frac{{\partial \rho }}{{\partial t}} = 0.
363     \label{introEquation:stationary}
364     \end{equation}
In such a stationary system, the density of distribution $\rho$ can be connected to the Hamiltonian $H$ through the Maxwell-Boltzmann
367     distribution,
368     \begin{equation}
369     \rho \propto e^{ - \beta H}
370     \label{introEquation:densityAndHamiltonian}
371     \end{equation}
372    
373 tim 2819 \subsubsection{\label{introSection:phaseSpaceConservation}\textbf{Conservation of Phase Space}}
Let us consider a region in phase space,
375     \begin{equation}
376     \delta v = \int { \ldots \int {dq_1 } ...dq_f dp_1 } ..dp_f .
377     \end{equation}
378     If this region is small enough, the density $\rho$ can be regarded
as uniform over the whole region. Thus, the number of phase points
380     inside this region is given by,
381 tim 2702 \begin{equation}
382     \delta N = \rho \delta v = \rho \int { \ldots \int {dq_1 } ...dq_f
383     dp_1 } ..dp_f.
384     \end{equation}
Since phase points are neither created nor destroyed, the number of phase points $\delta N$ inside this region is conserved as the region evolves, so that
386     \begin{equation}
387     \frac{{d(\delta N)}}{{dt}} = \frac{{d\rho }}{{dt}}\delta v + \rho
388     \frac{d}{{dt}}(\delta v) = 0.
389     \end{equation}
With the help of the stationary assumption
391     (\ref{introEquation:stationary}), we obtain the principle of the
392 tim 2819 \emph{conservation of volume in phase space},
393 tim 2702 \begin{equation}
394     \frac{d}{{dt}}(\delta v) = \frac{d}{{dt}}\int { \ldots \int {dq_1 }
395     ...dq_f dp_1 } ..dp_f = 0.
396     \label{introEquation:volumePreserving}
397     \end{equation}
398    
399 tim 2819 \subsubsection{\label{introSection:liouvilleInOtherForms}\textbf{Liouville's Theorem in Other Forms}}
400 tim 2702
Liouville's theorem can be expressed in a variety of different forms which are convenient within different contexts. For any two functions $F$ and $G$ of the coordinates and momenta of a system, the Poisson bracket $\{F, G\}$ is defined as
405     \begin{equation}
406     \left\{ {F,G} \right\} = \sum\limits_i {\left( {\frac{{\partial
407     F}}{{\partial q_i }}\frac{{\partial G}}{{\partial p_i }} -
408     \frac{{\partial F}}{{\partial p_i }}\frac{{\partial G}}{{\partial
409     q_i }}} \right)}.
410     \label{introEquation:poissonBracket}
411     \end{equation}
Substituting the equations of motion in the Hamiltonian formalism (Eq.~\ref{introEquation:motionHamiltonianCoordinate}, Eq.~\ref{introEquation:motionHamiltonianMomentum}) into Eq.~\ref{introEquation:liouvilleTheorem}, we can rewrite Liouville's theorem using Poisson bracket notation,
417 tim 2700 \begin{equation}
418     \left( {\frac{{\partial \rho }}{{\partial t}}} \right) = - \left\{
419     {\rho ,H} \right\}.
420     \label{introEquation:liouvilleTheromInPoissin}
421     \end{equation}
422     Moreover, the Liouville operator is defined as
423     \begin{equation}
424     iL = \sum\limits_{i = 1}^f {\left( {\frac{{\partial H}}{{\partial
425     p_i }}\frac{\partial }{{\partial q_i }} - \frac{{\partial
426     H}}{{\partial q_i }}\frac{\partial }{{\partial p_i }}} \right)}
427     \label{introEquation:liouvilleOperator}
428     \end{equation}
In terms of the Liouville operator, Liouville's equation can also be
430     expressed as
431     \begin{equation}
432     \left( {\frac{{\partial \rho }}{{\partial t}}} \right) = - iL\rho
433     \label{introEquation:liouvilleTheoremInOperator}
434     \end{equation}
435    
436 tim 2693 \subsection{\label{introSection:ergodic}The Ergodic Hypothesis}
437 tim 2692
438 tim 2695 Various thermodynamic properties can be calculated from Molecular
439     Dynamics simulation. By comparing experimental values with the
440     calculated properties, one can determine the accuracy of the
441 tim 2819 simulation and the quality of the underlying model. However, both
experiments and computer simulations are usually performed over a certain time interval, with measurements averaged over that period, which differs from the ensemble average over a many-body system used in Statistical Mechanics. Fortunately, the Ergodic Hypothesis makes a connection between the time average and the ensemble
447     average. It states that the time average and average over the
448 tim 2905 statistical ensemble are identical \cite{Frenkel1996, Leach2001}:
449 tim 2695 \begin{equation}
\langle A(q , p) \rangle_t = \mathop {\lim }\limits_{t \to \infty }
\frac{1}{t}\int\limits_0^t {A(q(t),p(t))dt}  = \int\limits_\Gamma
{A(q,p)\rho (q,p)\,dqdp}
453 tim 2695 \end{equation}
where $\langle A(q , p) \rangle_t$ is an equilibrium value of a physical quantity and $\rho (q, p)$ is the equilibrium distribution function. If an observation is averaged over a sufficiently long time (longer than the relaxation time), all accessible microstates in phase space are assumed to be equally probed, giving a properly weighted statistical average. This allows the researcher freedom of choice when deciding how best to measure a given observable. If an ensemble-averaged approach is the most reasonable, Monte Carlo methods\cite{Metropolis1949} can be utilized. Alternatively, if the system lends itself to a time-averaging approach, the Molecular Dynamics techniques in Sec.~\ref{introSection:molecularDynamics} will be the best choice\cite{Frenkel1996}.
467 tim 2694
468 tim 2697 \section{\label{introSection:geometricIntegratos}Geometric Integrators}
469 tim 2819 A variety of numerical integrators have been proposed to simulate
the motions of atoms in MD simulations. They usually begin with initial conditions and move the objects in the direction governed by the differential equations. However, most of them ignore the hidden physical laws contained within the equations. Since 1990, geometric integrators, which preserve various phase-flow invariants such as symplectic structure, volume and time reversal symmetry, have been developed to address this issue\cite{Dullweber1997, McLachlan1998, Leimkuhler1999}. The velocity Verlet method, which happens to be a simple example of a symplectic integrator, continues to gain
479     popularity in the molecular dynamics community. This fact can be
480     partly explained by its geometric nature.
481 tim 2697
482 tim 2819 \subsection{\label{introSection:symplecticManifold}Symplectic Manifolds}
483     A \emph{manifold} is an abstract mathematical space. It looks
484     locally like Euclidean space, but when viewed globally, it may have
more complicated structure. A good example of a manifold is the surface of the Earth. It seems to be flat locally, but it is round if viewed as a whole. A \emph{differentiable manifold} (also known as a \emph{smooth manifold}) is a manifold on which it is possible to apply calculus. A \emph{symplectic manifold} is defined as a pair $(M, \omega)$ which consists of a \emph{differentiable manifold} $M$ and a closed, non-degenerate, bilinear symplectic form, $\omega$. A symplectic form on a vector space $V$ is a function $\omega(x, y)$ which satisfies $\omega(\lambda_1x_1+\lambda_2x_2, y) = \lambda_1\omega(x_1, y)+ \lambda_2\omega(x_2, y)$, $\omega(x, y) = - \omega(y, x)$ and $\omega(x, x) = 0$. The cross product operation on vectors is an example of a symplectic form. One of the motivations for studying \emph{symplectic manifolds} in Hamiltonian Mechanics is that the configuration space of a system can be represented by a manifold whose cotangent bundle describes the phase space of the system. Every symplectic manifold is even dimensional. For instance, in Hamilton's equations, coordinates and momenta always appear in pairs.
504 tim 2697
505 tim 2698 \subsection{\label{introSection:ODE}Ordinary Differential Equations}
506 tim 2697
507 tim 2819 For an ordinary differential system defined as
508 tim 2698 \begin{equation}
509     \dot x = f(x)
510     \end{equation}
where $x = (q,p)^T$, this system is a canonical Hamiltonian system if $f(x) = J\nabla _x H(x)$. Here, $H = H (q, p)$ is the Hamiltonian function and $J$ is the skew-symmetric matrix
514 tim 2698 \begin{equation}
515     J = \left( {\begin{array}{*{20}c}
516     0 & I \\
517     { - I} & 0 \\
518     \end{array}} \right)
519     \label{introEquation:canonicalMatrix}
520     \end{equation}
where $I$ is the identity matrix. Using this notation, the Hamiltonian system can be rewritten as,
523     \begin{equation}
524 tim 2905 \frac{d}{{dt}}x = J\nabla _x H(x).
525 tim 2698 \label{introEquation:compactHamiltonian}
526     \end{equation}In this case, $f$ is
527 tim 2899 called a \emph{Hamiltonian vector field}. Another generalization of
528     Hamiltonian dynamics is Poisson Dynamics\cite{Olver1986},
529 tim 2698 \begin{equation}
530     \dot x = J(x)\nabla _x H \label{introEquation:poissonHamiltonian}
531     \end{equation}
The most obvious change is that the matrix $J$ now depends on $x$.
533    
534 tim 2702 \subsection{\label{introSection:exactFlow}Exact Flow}
535    
Let $x(t)$ be the exact solution of the ODE system, $\frac{{dx}}{{dt}} = f(x) \label{introEquation:ODE}$. We can define its exact flow (solution), $\varphi_\tau$, by
539     \[ x(t+\tau)
540     =\varphi_\tau(x(t))
541 tim 2698 \]
542     where $\tau$ is a fixed time step and $\varphi$ is a map from phase
543 tim 2702 space to itself. The flow has the continuous group property,
544 tim 2698 \begin{equation}
545 tim 2702 \varphi _{\tau _1 } \circ \varphi _{\tau _2 } = \varphi _{\tau _1
546     + \tau _2 } .
547     \end{equation}
548     In particular,
549     \begin{equation}
550     \varphi _\tau \circ \varphi _{ - \tau } = I
551     \end{equation}
552     Therefore, the exact flow is self-adjoint,
553     \begin{equation}
554     \varphi _\tau = \varphi _{ - \tau }^{ - 1}.
555     \end{equation}
The exact flow can also be written in terms of an exponential operator,
557     \begin{equation}
558     \varphi _\tau (x) = e^{\tau \sum\limits_i {f_i (x)\frac{\partial
559     }{{\partial x_i }}} } (x) \equiv \exp (\tau f)(x).
560     \label{introEquation:exponentialOperator}
561     \end{equation}
562     In most cases, it is not easy to find the exact flow $\varphi_\tau$.
Instead, we use an approximate map, $\psi_\tau$, which is usually called an integrator. The order of an integrator $\psi_\tau$ is $p$ if the Taylor series of $\psi_\tau$ and of $\varphi_\tau$ agree to order $p$,
566     \begin{equation}
567 tim 2872 \psi_\tau(x) = x + \tau f(x) + O(\tau^{p+1})
568 tim 2698 \end{equation}
569    
570 tim 2702 \subsection{\label{introSection:geometricProperties}Geometric Properties}
571    
572 tim 2872 The hidden geometric properties\cite{Budd1999, Marsden1998} of an
573     ODE and its flow play important roles in numerical studies. Many of
574     them can be found in systems which occur naturally in applications.
Let $\varphi$ be the flow of a Hamiltonian vector field; $\varphi$ is a \emph{symplectic} flow if it satisfies,
577 tim 2698 \begin{equation}
578 tim 2703 {\varphi '}^T J \varphi ' = J.
579 tim 2698 \end{equation}
580     According to Liouville's theorem, the symplectic volume is invariant
581     under a Hamiltonian flow, which is the basis for classical
582 tim 2699 statistical mechanics. Furthermore, the flow of a Hamiltonian vector
583     field on a symplectic manifold can be shown to be a
584     symplectomorphism. As to the Poisson system,
585 tim 2698 \begin{equation}
586 tim 2703 {\varphi '}^T J \varphi ' = J \circ \varphi
587 tim 2698 \end{equation}
588 tim 2898 is the property that must be preserved by the integrator. It is
589     possible to construct a \emph{volume-preserving} flow for a source
590     free ODE ($ \nabla \cdot f = 0 $), if the flow satisfies $ \det
d\varphi = 1$. One can easily show that a symplectic flow will be
592     volume-preserving. Changing the variables $y = h(x)$ in an ODE
593 tim 2872 (Eq.~\ref{introEquation:ODE}) will result in a new system,
594 tim 2698 \[
595     \dot y = \tilde f(y) = ((dh \cdot f)h^{ - 1} )(y).
596     \]
The vector field $f$ has reversing symmetry $h$ if $f = - \tilde f$.
598     In other words, the flow of this vector field is reversible if and
599 tim 2898 only if $ h \circ \varphi ^{ - 1} = \varphi \circ h $. A
\emph{first integral}, or conserved quantity, of a general differential equation is a function $ G:R^{2d} \to R $ which is constant for all solutions of the ODE $\frac{{dx}}{{dt}} = f(x)$,
603     \[
604     \frac{{dG(x(t))}}{{dt}} = 0.
605     \]
606     Using chain rule, one may obtain,
607     \[
\sum\limits_i {\frac{{\partial G}}{{\partial x_i }}} f_i (x) = f \cdot \nabla G = 0,
609 tim 2705 \]
610     which is the condition for conserving \emph{first integral}. For a
611     canonical Hamiltonian system, the time evolution of an arbitrary
612     smooth function $G$ is given by,
613 tim 2789 \begin{eqnarray}
614 tim 2905 \frac{{dG(x(t))}}{{dt}} & = & [\nabla _x G(x(t))]^T \dot x(t) \notag\\
615     & = & [\nabla _x G(x(t))]^T J\nabla _x H(x(t)).
616 tim 2705 \label{introEquation:firstIntegral1}
617 tim 2789 \end{eqnarray}
Using Poisson bracket notation, Eq.~\ref{introEquation:firstIntegral1} can be rewritten as
620 tim 2705 \[
621     \frac{d}{{dt}}G(x(t)) = \left\{ {G,H} \right\}(x(t)).
622     \]
Therefore, the sufficient condition for $G$ to be a \emph{first integral} of a Hamiltonian system is $\left\{ {G,H} \right\} = 0.$ As is well known, the Hamiltonian (or energy) $H$ of a Hamiltonian system is a \emph{first integral}, due to the fact that $\{ H,H\} = 0$. When designing any numerical methods, one should always try to
628 tim 2702 preserve the structural properties of the original ODE and its flow.
629    
630 tim 2699 \subsection{\label{introSection:constructionSymplectic}Construction of Symplectic Methods}
631     A lot of well established and very effective numerical methods have
been successful precisely because of their symplecticity, even though this fact was not recognized when they were first
634 tim 2872 constructed. The most famous example is the Verlet-leapfrog method
635 tim 2819 in molecular dynamics. In general, symplectic integrators can be
636 tim 2699 constructed using one of four different methods.
637     \begin{enumerate}
638     \item Generating functions
639     \item Variational methods
640     \item Runge-Kutta methods
641     \item Splitting methods
642     \end{enumerate}
Generating functions\cite{Channell1990} tend to lead to methods which are cumbersome and difficult to use. In dissipative systems, variational methods can capture the decay of energy accurately\cite{Kane2000}. Because of their geometric instability against non-Hamiltonian perturbations, ordinary implicit Runge-Kutta methods are not suitable for Hamiltonian systems. Recently, various high-order explicit Runge-Kutta methods \cite{Owren1992,Chen2003} have been developed to overcome this instability. However, due to the computational penalty involved in implementing the Runge-Kutta
652     methods, they have not attracted much attention from the Molecular
653     Dynamics community. Instead, splitting methods have been widely
654     accepted since they exploit natural decompositions of the
655     system\cite{Tuckerman1992, McLachlan1998}.
656 tim 2702
657 tim 2819 \subsubsection{\label{introSection:splittingMethod}\textbf{Splitting Methods}}
658 tim 2702
659     The main idea behind splitting methods is to decompose the discrete
660     $\varphi_h$ as a composition of simpler flows,
661 tim 2699 \begin{equation}
662     \varphi _h = \varphi _{h_1 } \circ \varphi _{h_2 } \ldots \circ
663     \varphi _{h_n }
664     \label{introEquation:FlowDecomposition}
665     \end{equation}
where each of the sub-flows is chosen such that it represents a
667 tim 2898 simpler integration of the system. Suppose that a Hamiltonian system
668     takes the form,
669 tim 2702 \[
670     H = H_1 + H_2.
671     \]
672     Here, $H_1$ and $H_2$ may represent different physical processes of
673     the system. For instance, they may relate to kinetic and potential
674     energy respectively, which is a natural decomposition of the
675     problem. If $H_1$ and $H_2$ can be integrated using exact flows
676     $\varphi_1(t)$ and $\varphi_2(t)$, respectively, a simple first
677 tim 2819 order expression is then given by the Lie-Trotter formula
678 tim 2699 \begin{equation}
679 tim 2702 \varphi _h = \varphi _{1,h} \circ \varphi _{2,h},
680     \label{introEquation:firstOrderSplitting}
681     \end{equation}
682     where $\varphi _h$ is the result of applying the corresponding
683     continuous $\varphi _i$ over a time $h$. By definition, as
684     $\varphi_i(t)$ is the exact solution of a Hamiltonian system, it
685     must follow that each operator $\varphi_i(t)$ is a symplectic map.
686     It is easy to show that any composition of symplectic flows yields a
687     symplectic map,
688     \begin{equation}
689 tim 2699 (\varphi '\phi ')^T J\varphi '\phi ' = \phi '^T \varphi '^T J\varphi
690 tim 2702 '\phi ' = \phi '^T J\phi ' = J,
691 tim 2699 \label{introEquation:SymplecticFlowComposition}
692     \end{equation}
where $\varphi$ and $\phi$ are both symplectic maps. Thus operator
694     splitting in this context automatically generates a symplectic map.
695 tim 2699
The Lie-Trotter splitting (Eq.~\ref{introEquation:firstOrderSplitting}) introduces local errors proportional to $h^2$, while the Strang splitting gives a
699     second-order decomposition,
700 tim 2702 \begin{equation}
701     \varphi _h = \varphi _{1,h/2} \circ \varphi _{2,h} \circ \varphi
702 tim 2706 _{1,h/2} , \label{introEquation:secondOrderSplitting}
703 tim 2702 \end{equation}
which has a local error proportional to $h^3$. The Strang splitting's popularity in the molecular simulation community is attributable to its symmetric property,
707 tim 2702 \begin{equation}
708     \varphi _h^{ - 1} = \varphi _{ - h}.
709 tim 2703 \label{introEquation:timeReversible}
710 tim 2882 \end{equation}
711 tim 2702
712 tim 2872 \subsubsection{\label{introSection:exampleSplittingMethod}\textbf{Examples of the Splitting Method}}
The classical equations of motion for a system consisting of interacting
714     particles can be written in Hamiltonian form,
715     \[
716     H = T + V
717     \]
718     where $T$ is the kinetic energy and $V$ is the potential energy.
719 tim 2872 Setting $H_1 = T, H_2 = V$ and applying the Strang splitting, one
720 tim 2702 obtains the following:
721     \begin{align}
722     q(\Delta t) &= q(0) + \dot{q}(0)\Delta t +
723     \frac{F[q(0)]}{m}\frac{\Delta t^2}{2}, %
724     \label{introEquation:Lp10a} \\%
725     %
726     \dot{q}(\Delta t) &= \dot{q}(0) + \frac{\Delta t}{2m}
727     \biggl [F[q(0)] + F[q(\Delta t)] \biggr]. %
728     \label{introEquation:Lp10b}
729     \end{align}
where $F(t)$ is the force at time $t$. This integration scheme is known as \emph{velocity Verlet}, which is symplectic (\ref{introEquation:SymplecticFlowComposition}), time-reversible (\ref{introEquation:timeReversible}) and volume-preserving (\ref{introEquation:volumePreserving}). These geometric properties account for its long-time stability and its popularity in the community. However, the most commonly used velocity Verlet integration scheme is written as below,
738     \begin{align}
739     \dot{q}\biggl (\frac{\Delta t}{2}\biggr ) &=
740     \dot{q}(0) + \frac{\Delta t}{2m}\, F[q(0)], \label{introEquation:Lp9a}\\%
741     %
742     q(\Delta t) &= q(0) + \Delta t\, \dot{q}\biggl (\frac{\Delta t}{2}\biggr ),%
743     \label{introEquation:Lp9b}\\%
744     %
745     \dot{q}(\Delta t) &= \dot{q}\biggl (\frac{\Delta t}{2}\biggr ) +
746 tim 2872 \frac{\Delta t}{2m}\, F[q(t)]. \label{introEquation:Lp9c}
747 tim 2702 \end{align}
748     From the preceding splitting, one can see that the integration of
749     the equations of motion would follow:
750     \begin{enumerate}
\item Calculate the velocities at the half step, $\frac{\Delta t}{2}$, from the forces calculated at the initial position.
752    
753     \item Use the half step velocities to move positions one whole step, $\Delta t$.
754    
755 tim 2872 \item Evaluate the forces at the new positions, $\mathbf{q}(\Delta t)$, and use the new forces to complete the velocity move.
756 tim 2702
757     \item Repeat from step 1 with the new position, velocities, and forces assuming the roles of the initial values.
758     \end{enumerate}
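
These four steps can be condensed into a short Python sketch; it is illustrative only, and the harmonic \texttt{force} routine and parameter values are hypothetical placeholders rather than the force fields used in this work:
\begin{verbatim}
import numpy as np

def force(q, k=1.0):
    """Hypothetical force routine: a harmonic restoring force F = -k q."""
    return -k * q

def velocity_verlet(q, v, m, dt, n_steps):
    """Advance positions q and velocities v by n_steps of size dt,
    following the four steps listed above."""
    f = force(q)
    for _ in range(n_steps):
        v_half = v + 0.5 * dt * f / m   # (1) half-step velocities from current forces
        q = q + dt * v_half             # (2) move positions a whole step
        f = force(q)                    # (3) evaluate forces at the new positions
        v = v_half + 0.5 * dt * f / m   #     ... and complete the velocity move
    return q, v                         # (4) the loop repeats with the new state

q, v = velocity_verlet(np.array([1.0]), np.array([0.0]), m=1.0, dt=0.01, n_steps=1000)
\end{verbatim}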
By simply switching the order of the propagators in the splitting, a new integrator, the \emph{position Verlet} integrator, can be generated,
762 tim 2702 \begin{align}
\dot q(\Delta t) &= \dot q(0) + \frac{{\Delta t}}{m}F\left[ {q(0) +
\frac{{\Delta t}}{2}\dot q(0)} \right], %
765     \label{introEquation:positionVerlet1} \\%
766     %
767 tim 2703 q(\Delta t) &= q(0) + \frac{{\Delta t}}{2}\left[ {\dot q(0) + \dot
768 tim 2702 q(\Delta t)} \right]. %
769 tim 2719 \label{introEquation:positionVerlet2}
770 tim 2702 \end{align}
771    
772 tim 2819 \subsubsection{\label{introSection:errorAnalysis}\textbf{Error Analysis and Higher Order Methods}}
773 tim 2702
774 tim 2872 The Baker-Campbell-Hausdorff formula can be used to determine the
local error of a splitting method in terms of the commutator of the operators (\ref{introEquation:exponentialOperator}) associated with the sub-flows. For operators $hX$ and $hY$ which are associated with $\varphi_1(t)$ and $\varphi_2(t)$ respectively, we have
779 tim 2702 \begin{equation}
780     \exp (hX + hY) = \exp (hZ)
781     \end{equation}
782     where
783     \begin{equation}
hZ = hX + hY + \frac{{h^2 }}{2}[X,Y] + \frac{{h^3 }}{{12}}\left(
{[X,[X,Y]] + [Y,[Y,X]]} \right) + \ldots .
786     \end{equation}
Here, $[X,Y]$ is the commutator of operators $X$ and $Y$ given by
788     \[
789     [X,Y] = XY - YX .
790     \]
791 tim 2872 Applying the Baker-Campbell-Hausdorff formula\cite{Varadarajan1974}
to the Strang splitting, we can obtain
793 tim 2779 \begin{eqnarray*}
794 tim 2778 \exp (h X/2)\exp (h Y)\exp (h X/2) & = & \exp (h X + h Y + h^2 [X,Y]/4 + h^2 [Y,X]/4 \\
795     & & \mbox{} + h^2 [X,X]/8 + h^2 [Y,Y]/8 \\
796 tim 2905 & & \mbox{} + h^3 [Y,[Y,X]]/12 - h^3[X,[X,Y]]/24 + \ldots
797     ).
798 tim 2779 \end{eqnarray*}
799 tim 2905 Since $ [X,Y] + [Y,X] = 0$ and $ [X,X] = 0$, the dominant local
error of the Strang splitting is proportional to $h^3$. The same
801 tim 2905 procedure can be applied to a general splitting of the form
802 tim 2702 \begin{equation}
803     \varphi _{b_m h}^2 \circ \varphi _{a_m h}^1 \circ \varphi _{b_{m -
804     1} h}^2 \circ \ldots \circ \varphi _{a_1 h}^1 .
805     \end{equation}
A careful choice of the coefficients $a_1 \ldots b_m$ will lead to higher
807     order methods. Yoshida proposed an elegant way to compose higher
808 tim 2789 order methods based on symmetric splitting\cite{Yoshida1990}. Given
809     a symmetric second order base method $ \varphi _h^{(2)} $, a
810     fourth-order symmetric method can be constructed by composing,
811 tim 2702 \[
812     \varphi _h^{(4)} = \varphi _{\alpha h}^{(2)} \circ \varphi _{\beta
813     h}^{(2)} \circ \varphi _{\alpha h}^{(2)}
814     \]
where $ \alpha = \frac{1}{{2 - 2^{1/3} }}$ and $ \beta =  - \frac{{2^{1/3} }}{{2 - 2^{1/3} }}$. Moreover, a symmetric
817     integrator $ \varphi _h^{(2n + 2)}$ can be composed by
818     \begin{equation}
819     \varphi _h^{(2n + 2)} = \varphi _{\alpha h}^{(2n)} \circ \varphi
820 tim 2872 _{\beta h}^{(2n)} \circ \varphi _{\alpha h}^{(2n)},
821 tim 2702 \end{equation}
822 tim 2872 if the weights are chosen as
823 tim 2702 \[
\alpha = \frac{1}{{2 - 2^{1/(2n + 1)} }},\quad \beta =  - \frac{{2^{1/(2n + 1)} }}{{2 - 2^{1/(2n + 1)} }} .
826     \]
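
The fourth-order composition above can be sketched in Python, using a velocity Verlet (Strang) step for a one-dimensional harmonic oscillator as the second-order base method; the force and parameter values are illustrative placeholders:
\begin{verbatim}
import numpy as np

def strang_step(q, p, m, k, h):
    """Second-order Strang (velocity Verlet) step for H = p^2/(2m) + k q^2/2."""
    p = p - 0.5 * h * k * q   # half kick
    q = q + h * p / m         # full drift
    p = p - 0.5 * h * k * q   # half kick
    return q, p

def yoshida4_step(q, p, m, k, h):
    """Fourth-order step composed of three Strang steps with Yoshida weights."""
    alpha = 1.0 / (2.0 - 2.0 ** (1.0 / 3.0))                 # outer weight
    beta = -2.0 ** (1.0 / 3.0) / (2.0 - 2.0 ** (1.0 / 3.0))  # inner (negative) weight
    q, p = strang_step(q, p, m, k, alpha * h)
    q, p = strang_step(q, p, m, k, beta * h)
    q, p = strang_step(q, p, m, k, alpha * h)
    return q, p

q, p = 1.0, 0.0
for _ in range(1000):
    q, p = yoshida4_step(q, p, m=1.0, k=1.0, h=0.01)
\end{verbatim}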
827    
828 tim 2694 \section{\label{introSection:molecularDynamics}Molecular Dynamics}
829    
As one of the principal tools of molecular modeling, Molecular Dynamics has proven to be a powerful method for studying the functions
832     of biological systems, providing structural, thermodynamic and
833     dynamical information. The basic idea of molecular dynamics is that
834     macroscopic properties are related to microscopic behavior and
835     microscopic behavior can be calculated from the trajectories in
simulations. For instance, the instantaneous temperature of a Hamiltonian system of $N$ particles can be measured by
838     \[
839 tim 2725 T = \sum\limits_{i = 1}^N {\frac{{m_i v_i^2 }}{{fk_B }}}
840 tim 2720 \]
where $m_i$ and $v_i$ are the mass and velocity of the $i$th particle respectively, $f$ is the number of degrees of freedom, and $k_B$ is the Boltzmann constant.
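
A minimal Python illustration of this estimator (with hypothetical arrays of masses and velocities rather than simulation output, and assuming $3N$ unconstrained degrees of freedom) is:
\begin{verbatim}
import numpy as np

K_B = 1.380649e-23   # Boltzmann constant in J/K

def instantaneous_temperature(masses, velocities):
    """T = 2 K / (f k_B) with K the total kinetic energy; velocities has
    shape (N, 3) and masses shape (N,).  Assumes f = 3N degrees of freedom."""
    kinetic = 0.5 * np.sum(masses[:, None] * velocities**2)
    dof = 3 * len(masses)
    return 2.0 * kinetic / (dof * K_B)
\end{verbatim}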
844 tim 2694
845 tim 2720 A typical molecular dynamics run consists of three essential steps:
846     \begin{enumerate}
847     \item Initialization
848     \begin{enumerate}
849     \item Preliminary preparation
850     \item Minimization
851     \item Heating
852     \item Equilibration
853     \end{enumerate}
854     \item Production
855     \item Analysis
856     \end{enumerate}
857     These three individual steps will be covered in the following
858     sections. Sec.~\ref{introSec:initialSystemSettings} deals with the
859 tim 2801 initialization of a simulation. Sec.~\ref{introSection:production}
will discuss issues that arise in the production run.
861 tim 2801 Sec.~\ref{introSection:Analysis} provides the theoretical tools for
862     trajectory analysis.
863 tim 2719
864 tim 2720 \subsection{\label{introSec:initialSystemSettings}Initialization}
865 tim 2719
866 tim 2819 \subsubsection{\textbf{Preliminary preparation}}
867 tim 2719
868 tim 2720 When selecting the starting structure of a molecule for molecular
869     simulation, one may retrieve its Cartesian coordinates from public
databases, such as the RCSB Protein Data Bank. Although thousands of crystal structures of molecules are discovered every year, many more remain unknown due to the difficulties of purification and crystallization. Even for molecules with known structures, some important information may be missing. For example, a missing hydrogen atom which acts as a donor in hydrogen bonding must be added. Moreover, in order to include electrostatic interactions, one may need to specify the partial charges for individual atoms. Under some circumstances, we may even need to prepare the system in a special configuration. For instance, when studying transport phenomena in membrane systems, we may prepare the lipids in a
881     bilayer structure instead of placing lipids randomly in solvent,
882     since we are not interested in the slow self-aggregation process.
883 tim 2694
884 tim 2819 \subsubsection{\textbf{Minimization}}
885 tim 2705
It is quite possible that some of the molecules in the system produced by the preliminary preparation may overlap with each other. This close proximity leads to a high initial potential energy which consequently jeopardizes any molecular dynamics simulation. To remove these steric overlaps, one typically performs energy minimization to find a more reasonable conformation. Several energy minimization methods have been developed to explore the energy surface and to locate the local minimum. While converging slowly near the minimum, the steepest descent method is extremely robust when systems are strongly anharmonic. Thus, it is often used to refine structures from crystallographic data. Relying on the gradient or Hessian, advanced methods like Newton-Raphson converge rapidly to a local minimum, but become unstable if the energy surface is far from quadratic. Another factor that must be taken into account when choosing an energy minimization method is the size of the system. Steepest descent and conjugate gradient can deal with models of any size. Because of the limits on computer memory to store the Hessian matrix and the computing power needed to diagonalize these matrices, most Newton-Raphson methods cannot be used with very large systems.
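
A schematic steepest descent loop in Python, with a hypothetical \texttt{energy\_and\_gradient} routine and an ad hoc adaptive step size, might look like:
\begin{verbatim}
import numpy as np

def steepest_descent(x, energy_and_gradient, step=0.01, n_steps=1000, tol=1e-6):
    """Move downhill along -gradient until the gradient norm falls below tol."""
    for _ in range(n_steps):
        e, g = energy_and_gradient(x)
        if np.linalg.norm(g) < tol:
            break
        x_new = x - step * g
        e_new, _ = energy_and_gradient(x_new)
        if e_new < e:
            x, step = x_new, step * 1.2   # accept the move and grow the step
        else:
            step *= 0.5                   # reject the move and shrink the step
    return x

# example: minimize a simple quadratic bowl starting from (3, -2)
x_min = steepest_descent(np.array([3.0, -2.0]),
                         lambda x: (np.sum(x**2), 2.0 * x))
\end{verbatim}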
906 tim 2694
907 tim 2819 \subsubsection{\textbf{Heating}}
908 tim 2720
Typically, heating is performed by assigning random velocities according to a Maxwell-Boltzmann distribution for a desired temperature. Beginning at a lower temperature and gradually increasing the temperature by assigning larger random velocities, we eventually set the temperature of the system to the final temperature at which the simulation will be conducted. In the heating phase, we should also keep the system from drifting or rotating as a whole. To do this, the net linear momentum and angular momentum of the system are shifted to zero after each resampling from the Maxwell-Boltzmann distribution.
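
A minimal Python sketch of this velocity assignment, in reduced units with $k_B = 1$ and removing only the net linear momentum (angular momentum removal is omitted for brevity), is:
\begin{verbatim}
import numpy as np

def assign_velocities(masses, T, rng=np.random.default_rng()):
    """Draw velocities from a Maxwell-Boltzmann distribution at temperature T
    (k_B = 1) and shift them so the net linear momentum vanishes."""
    sigma = np.sqrt(T / masses)[:, None]            # per-particle width sqrt(k_B T / m)
    v = rng.normal(size=(len(masses), 3)) * sigma   # Gaussian velocity components
    v -= np.sum(masses[:, None] * v, axis=0) / np.sum(masses)  # remove COM velocity
    return v

velocities = assign_velocities(np.ones(100), T=2.0)
\end{verbatim}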
919 tim 2720
920 tim 2819 \subsubsection{\textbf{Equilibration}}
921 tim 2720
922     The purpose of equilibration is to allow the system to evolve
923     spontaneously for a period of time and reach equilibrium. The
924     procedure is continued until various statistical properties, such as
925     temperature, pressure, energy, volume and other structural
properties, become independent of time. Strictly
927     speaking, minimization and heating are not necessary, provided the
928     equilibration process is long enough. However, these steps can serve
929     as a means to arrive at an equilibrated structure in an effective
930     way.
931    
932     \subsection{\label{introSection:production}Production}
933    
934 tim 2872 The production run is the most important step of the simulation, in
935 tim 2725 which the equilibrated structure is used as a starting point and the
936     motions of the molecules are collected for later analysis. In order
937     to capture the macroscopic properties of the system, the molecular
938 tim 2872 dynamics simulation must be performed by sampling correctly and
939     efficiently from the relevant thermodynamic ensemble.
940 tim 2720
941 tim 2725 The most expensive part of a molecular dynamics simulation is the
calculation of non-bonded forces, such as van der Waals and Coulombic forces. For a system of $N$ particles, the complexity of the algorithm for pair-wise interactions is $O(N^2 )$, which makes large simulations prohibitive in the absence of any algorithmic tricks. A natural approach to avoid system size issues is to represent the bulk behavior by a finite number of particles. However, this approach will suffer from surface effects at the edges of the simulation. To offset this, \textit{periodic boundary conditions} (see Fig.~\ref{introFig:pbc}) are used to simulate bulk properties with a relatively small number of particles. In this method, the simulation box is replicated throughout space to form an infinite lattice. During the simulation, when a particle moves in the primary cell, its images in the other cells move in exactly the same direction with exactly the same
956     orientation. Thus, as a particle leaves the primary cell, one of its
957     images will enter through the opposite face.
958 tim 2789 \begin{figure}
959     \centering
960     \includegraphics[width=\linewidth]{pbc.eps}
961     \caption[An illustration of periodic boundary conditions]{A 2-D
962     illustration of periodic boundary conditions. As one particle leaves
963     the left of the simulation box, an image of it enters the right.}
964     \label{introFig:pbc}
965     \end{figure}
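
For a cubic box of side $L$, wrapping coordinates into the primary cell and applying the minimum image convention to a pair separation can be sketched in Python as follows (a simplified illustration for orthorhombic cells; the function names are hypothetical):
\begin{verbatim}
import numpy as np

def wrap_positions(r, box_length):
    """Map Cartesian coordinates r (shape (N, 3)) back into the primary cell [0, L)."""
    return r % box_length

def minimum_image(r_i, r_j, box_length):
    """Return the shortest vector from particle j to particle i under PBC."""
    d = r_i - r_j
    return d - box_length * np.round(d / box_length)
\end{verbatim}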
966 tim 2725
967     %cutoff and minimum image convention
968     Another important technique to improve the efficiency of force
969 tim 2872 evaluation is to apply spherical cutoff where particles farther than
970     a predetermined distance are not included in the calculation
971 tim 2725 \cite{Frenkel1996}. The use of a cutoff radius will cause a
972 tim 2730 discontinuity in the potential energy curve. Fortunately, one can
973 tim 2872 shift simple radial potential to ensure the potential curve go
974     smoothly to zero at the cutoff radius. The cutoff strategy works
975     well for Lennard-Jones interaction because of its short range
976     nature. However, simply truncating the electrostatic interaction
977     with the use of cutoffs has been shown to lead to severe artifacts
978     in simulations. The Ewald summation, in which the slowly decaying
979     Coulomb potential is transformed into direct and reciprocal sums
980     with rapid and absolute convergence, has proved to minimize the
981     periodicity artifacts in liquid simulations. Taking the advantages
982     of the fast Fourier transform (FFT) for calculating discrete Fourier
983     transforms, the particle mesh-based
984 tim 2789 methods\cite{Hockney1981,Shimada1993, Luty1994} are accelerated from
985 tim 2872 $O(N^{3/2})$ to $O(N logN)$. An alternative approach is the
986     \emph{fast multipole method}\cite{Greengard1987, Greengard1994},
987     which treats Coulombic interactions exactly at short range, and
988     approximate the potential at long range through multipolar
989     expansion. In spite of their wide acceptance at the molecular
990     simulation community, these two methods are difficult to implement
991     correctly and efficiently. Instead, we use a damped and
992     charge-neutralized Coulomb potential method developed by Wolf and
993     his coworkers\cite{Wolf1999}. The shifted Coulomb potential for
994     particle $i$ and particle $j$ at distance $r_{rj}$ is given by:
995 tim 2725 \begin{equation}
996     V(r_{ij})= \frac{q_i q_j \textrm{erfc}(\alpha
997     r_{ij})}{r_{ij}}-\lim_{r_{ij}\rightarrow
998     R_\textrm{c}}\left\{\frac{q_iq_j \textrm{erfc}(\alpha
999     r_{ij})}{r_{ij}}\right\}. \label{introEquation:shiftedCoulomb}
1000     \end{equation}
where $\alpha$ is the convergence parameter. Due to its lack of inherent periodicity and its rapid convergence, this method is extremely efficient and easy to implement.
1004 tim 2789 \begin{figure}
1005     \centering
1006     \includegraphics[width=\linewidth]{shifted_coulomb.eps}
1007     \caption[An illustration of shifted Coulomb potential]{An
1008     illustration of shifted Coulomb potential.}
1009     \label{introFigure:shiftedCoulomb}
1010     \end{figure}
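
A direct transcription of Eq.~\ref{introEquation:shiftedCoulomb} into Python, in reduced units where the Coulomb prefactor is unity and with placeholder parameter values, might read:
\begin{verbatim}
import math

def shifted_coulomb(q_i, q_j, r_ij, alpha, r_cut):
    """Damped, shifted Coulomb pair potential of Wolf et al. (reduced units).
    Returns zero beyond the cutoff radius r_cut."""
    if r_ij >= r_cut:
        return 0.0
    damped = q_i * q_j * math.erfc(alpha * r_ij) / r_ij   # screened pair term
    shift = q_i * q_j * math.erfc(alpha * r_cut) / r_cut  # value at the cutoff
    return damped - shift

print(shifted_coulomb(1.0, -1.0, 2.5, alpha=0.2, r_cut=9.0))
\end{verbatim}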
1011 tim 2725
1012     %multiple time step
1013    
1014 tim 2720 \subsection{\label{introSection:Analysis} Analysis}
1015    
Recently, advanced visualization techniques have been applied to monitor the motions of molecules. Although the dynamics of the system can be described qualitatively from animation, quantitative trajectory analysis is more useful. According to the principles of
1020 tim 2905 Statistical Mechanics in
1021     Sec.~\ref{introSection:statisticalMechanics}, one can compute
1022     thermodynamic properties, analyze fluctuations of structural
1023     parameters, and investigate time-dependent processes of the molecule
1024     from the trajectories.
1025 tim 2721
1026 tim 2872 \subsubsection{\label{introSection:thermodynamicsProperties}\textbf{Thermodynamic Properties}}
1027 tim 2721
Thermodynamic properties, which can be expressed in terms of some
function of the coordinates and momenta of all particles in the
system, can be directly computed from molecular dynamics. The usual
way to measure the pressure is based on the virial theorem of
Clausius, which states that the virial is equal to $-3Nk_BT$. For a
system with forces between particles, the total virial, $W$,
contains contributions from the external pressure and from the
interactions between the particles:
1036     \[
W = - 3PV + \left\langle {\sum\limits_{i < j} {r_{ij} \cdot
1038     f_{ij} } } \right\rangle
1039     \]
where $f_{ij}$ is the force exerted on particle $i$ by particle $j$
at a distance $r_{ij}$. Thus, the expression for the pressure is
given by:
\begin{equation}
P = \frac{{Nk_B T}}{V} + \frac{1}{{3V}}\left\langle {\sum\limits_{i
< j} {r_{ij} \cdot f_{ij} } } \right\rangle.
\end{equation}
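
As a sketch of how this estimator might be accumulated from a single
configuration, the following Python fragment evaluates the
instantaneous pressure from the pair forces. The array layout and
function name are hypothetical, and the minimum image convention is
assumed to have been applied to the pair separation vectors already.
\begin{verbatim}
import numpy as np

def instantaneous_pressure(r, pairs, f_pairs, volume, temperature, k_B=1.0):
    """P = N k_B T / V + (1 / 3V) * sum_{i<j} r_ij . f_ij

    r       : (N, 3) particle positions
    pairs   : (M, 2) indices (i, j) with i < j
    f_pairs : (M, 3) force on particle i due to particle j
    """
    N = r.shape[0]
    r_ij = r[pairs[:, 0]] - r[pairs[:, 1]]
    virial = np.einsum('ij,ij->', r_ij, f_pairs)   # sum of r_ij . f_ij
    return N * k_B * temperature / volume + virial / (3.0 * volume)
\end{verbatim}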
1047    
1048 tim 2819 \subsubsection{\label{introSection:structuralProperties}\textbf{Structural Properties}}
1049 tim 2721
1050     Structural Properties of a simple fluid can be described by a set of
1051 tim 2872 distribution functions. Among these functions,the \emph{pair
1052 tim 2721 distribution function}, also known as \emph{radial distribution
1053 tim 2872 function}, is of most fundamental importance to liquid theory.
1054     Experimentally, pair distribution function can be gathered by
1055     Fourier transforming raw data from a series of neutron diffraction
1056     experiments and integrating over the surface factor
1057     \cite{Powles1973}. The experimental results can serve as a criterion
1058     to justify the correctness of a liquid model. Moreover, various
1059     equilibrium thermodynamic and structural properties can also be
1060     expressed in terms of radial distribution function \cite{Allen1987}.
1061     The pair distribution functions $g(r)$ gives the probability that a
1062 tim 2721 particle $i$ will be located at a distance $r$ from a another
1063     particle $j$ in the system
1064 tim 2905 \begin{equation}
1065 tim 2721 g(r) = \frac{V}{{N^2 }}\left\langle {\sum\limits_i {\sum\limits_{j
1066 tim 2874 \ne i} {\delta (r - r_{ij} )} } } \right\rangle = \frac{\rho
1067 tim 2872 (r)}{\rho}.
1068 tim 2905 \end{equation}
Note that the delta function can be replaced by a histogram in
computer simulations. Peaks in $g(r)$ represent solvent shells, and
the height of these peaks gradually decays to 1 as the local density
at large distances approaches the bulk density.
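
A minimal histogram-based estimate of $g(r)$ for a single
configuration of $N$ particles in a cubic periodic box might look
like the following Python sketch; the function name, bin count, and
use of NumPy are assumptions, and averaging over many configurations
is left to the caller.
\begin{verbatim}
import numpy as np

def radial_distribution(r, box_length, n_bins=200):
    """Histogram estimate of g(r) for one configuration in a cubic box."""
    N = r.shape[0]
    r_max = box_length / 2.0
    edges = np.linspace(0.0, r_max, n_bins + 1)
    counts = np.zeros(n_bins)
    for i in range(N - 1):
        d = r[i + 1:] - r[i]
        d -= box_length * np.round(d / box_length)      # minimum image convention
        dist = np.linalg.norm(d, axis=1)
        counts += np.histogram(dist[dist < r_max], bins=edges)[0]
    rho = N / box_length**3
    shell_volumes = 4.0 * np.pi / 3.0 * (edges[1:]**3 - edges[:-1]**3)
    ideal_counts = 0.5 * N * rho * shell_volumes        # ideal-gas pair counts
    r_mid = 0.5 * (edges[1:] + edges[:-1])
    return r_mid, counts / ideal_counts
\end{verbatim}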
1073 tim 2721
1074    
1075 tim 2819 \subsubsection{\label{introSection:timeDependentProperties}\textbf{Time-dependent
1076     Properties}}
1077 tim 2721
1078     Time-dependent properties are usually calculated using \emph{time
1079 tim 2872 correlation functions}, which correlate random variables $A$ and $B$
1080     at two different times,
1081 tim 2721 \begin{equation}
1082     C_{AB} (t) = \left\langle {A(t)B(0)} \right\rangle.
1083     \label{introEquation:timeCorrelationFunction}
1084     \end{equation}
If $A$ and $B$ refer to the same variable, this kind of correlation
function is called an \emph{autocorrelation function}. One example
of an autocorrelation function is the velocity autocorrelation
function, which is directly related to transport properties of
molecular liquids:
1090 tim 2725 \[
1091     D = \frac{1}{3}\int\limits_0^\infty {\left\langle {v(t) \cdot v(0)}
1092     \right\rangle } dt
1093     \]
where $D$ is the diffusion constant. Unlike the velocity
autocorrelation function, which is averaged over time origins and
over all atoms, the dipole autocorrelation function is calculated
for the entire system. The dipole autocorrelation function is given
by:
\[
c_{dipole} = \left\langle {u_{tot} (t) \cdot u_{tot} (0)}
\right\rangle
\]
Here $u_{tot}$ is the net dipole of the entire system and is given
by
\[
u_{tot} (t) = \sum\limits_i {u_i (t)}.
\]
In principle, many time correlation functions can be related to the
Fourier transforms of the infrared, Raman, and inelastic neutron
scattering spectra of molecular liquids. In practice, one can
extract the IR spectrum from the intensity of the dipole
fluctuations at each frequency using the following relationship:
1112     \[
1113     \hat c_{dipole} (v) = \int_{ - \infty }^\infty {c_{dipole} (t)e^{ -
1114 tim 2905 i2\pi vt} dt}.
1115 tim 2725 \]
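
For example, the velocity autocorrelation function and the
corresponding Green-Kubo estimate of $D$ could be computed from a
stored trajectory with a sketch along the following lines; the array
layout (frames $\times$ atoms $\times$ 3) and the function names are
assumptions.
\begin{verbatim}
import numpy as np

def velocity_autocorrelation(v, n_lags):
    """C(t) = <v(t0 + t) . v(t0)>, averaged over atoms and time origins.

    v : (n_frames, N, 3) array of velocities.
    """
    n_frames = v.shape[0]
    c = np.zeros(n_lags)
    for lag in range(n_lags):
        dots = np.sum(v[lag:] * v[:n_frames - lag], axis=2)   # per-atom dot products
        c[lag] = dots.mean()
    return c

def diffusion_constant(c, dt):
    """Green-Kubo relation: D = (1/3) * time integral of the VACF."""
    integral = dt * (0.5 * c[0] + c[1:-1].sum() + 0.5 * c[-1])   # trapezoidal rule
    return integral / 3.0
\end{verbatim}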
1116 tim 2721
1117 tim 2693 \section{\label{introSection:rigidBody}Dynamics of Rigid Bodies}
1118 tim 2692
Rigid bodies are frequently involved in modeling in many different
areas, from engineering and physics to chemistry. For example,
missiles and vehicles are usually modeled by rigid bodies. The
movement of objects in 3D game engines or other physics simulators
is governed by rigid body dynamics. In molecular simulations, rigid
bodies are used to simplify protein-protein docking
studies\cite{Gray2003}.
1126 tim 2694
1127 tim 2705 It is very important to develop stable and efficient methods to
1128 tim 2872 integrate the equations of motion for orientational degrees of
1129     freedom. Euler angles are the natural choice to describe the
rotational degrees of freedom. However, due to $\frac{1}{\sin
\theta}$ singularities, the numerical integration of the
corresponding equations of motion is very inefficient and
inaccurate. Although an alternative integrator using multiple sets
of Euler angles can overcome this difficulty\cite{Barojas1973}, the
computational penalty and the loss of angular momentum conservation
still remain. A singularity-free representation utilizing
quaternions was developed by Evans in 1977\cite{Evans1977}.
Unfortunately, this approach uses a nonseparable Hamiltonian
resulting from the quaternion representation, which prevents the
symplectic algorithm from being utilized. Another approach is to
apply holonomic constraints to the atoms belonging to the rigid
body. Each atom moves independently under the normal forces derived
from the potential energy and under constraint forces which are used
to guarantee the rigidity. However, due to their iterative nature,
the SHAKE and
1145     Rattle algorithms also converge very slowly when the number of
1146     constraints increases\cite{Ryckaert1977, Andersen1983}.
1147 tim 2694
A breakthrough in the geometric integration literature suggests
that, in order to develop a long-term integration scheme, one should
preserve the symplectic structure of the flow. By introducing a
conjugate momentum to the rotation matrix $Q$ and re-formulating
Hamilton's equations, a symplectic integrator, RSHAKE\cite{Kol1997},
was proposed to evolve the Hamiltonian system on a constraint
manifold by iteratively satisfying the orthogonality constraint $Q^T
Q = 1$. An alternative method using the quaternion representation
was developed by Omelyan\cite{Omelyan1998}. However, both of these
methods are iterative and inefficient. In this section, we describe
a symplectic Lie-Poisson integrator for rigid bodies developed by
Dullweber and his coworkers\cite{Dullweber1997} in depth.
1160 tim 2705
1161 tim 2872 \subsection{\label{introSection:constrainedHamiltonianRB}Constrained Hamiltonian for Rigid Bodies}
1162     The motion of a rigid body is Hamiltonian with the Hamiltonian
1163 tim 2713 function
1164 tim 2706 \begin{equation}
1165     H = \frac{1}{2}(p^T m^{ - 1} p) + \frac{1}{2}tr(PJ^{ - 1} P) +
1166     V(q,Q) + \frac{1}{2}tr[(QQ^T - 1)\Lambda ].
1167     \label{introEquation:RBHamiltonian}
1168     \end{equation}
1169     Here, $q$ and $Q$ are the position and rotation matrix for the
rigid body, $p$ and $P$ are conjugate momenta to $q$ and $Q$, and
1171     $J$, a diagonal matrix, is defined by
1172     \[
1173     I_{ii}^{ - 1} = \frac{1}{2}\sum\limits_{i \ne j} {J_{jj}^{ - 1} }
1174     \]
where $I_{ii}$ is the diagonal element of the inertia tensor. This
constrained Hamiltonian equation is subject to a holonomic
constraint,
1178 tim 2706 \begin{equation}
1179 tim 2726 Q^T Q = 1, \label{introEquation:orthogonalConstraint}
1180 tim 2706 \end{equation}
which is used to ensure the orthogonality of the rotation matrix.
Differentiating Eq.~\ref{introEquation:orthogonalConstraint} and
using Eq.~\ref{introEquation:RBMotionRotation}, one may obtain,
1184 tim 2706 \begin{equation}
Q^T PJ^{ - 1} + J^{ - 1} P^T Q = 0 .
1186 tim 2706 \label{introEquation:RBFirstOrderConstraint}
1187     \end{equation}
Using Eqs.~\ref{introEquation:motionHamiltonianCoordinate} and
\ref{introEquation:motionHamiltonianMomentum}, one can write down
the equations of motion,
1191 tim 2796 \begin{eqnarray}
1192 tim 2905 \frac{{dq}}{{dt}} & = & \frac{p}{m}, \label{introEquation:RBMotionPosition}\\
1193     \frac{{dp}}{{dt}} & = & - \nabla _q V(q,Q), \label{introEquation:RBMotionMomentum}\\
1194     \frac{{dQ}}{{dt}} & = & PJ^{ - 1}, \label{introEquation:RBMotionRotation}\\
1195 tim 2796 \frac{{dP}}{{dt}} & = & - \nabla _Q V(q,Q) - 2Q\Lambda . \label{introEquation:RBMotionP}
1196     \end{eqnarray}
In general, there are two ways to satisfy the holonomic constraints.
We can use a constraint force provided by a Lagrange multiplier on
the normal manifold to keep the motion on the constraint space, or
we can simply evolve the system on the constraint manifold. These
two methods have been proven to be equivalent. The holonomic
constraint and the equations of motion define a constraint manifold
for rigid bodies
1204 tim 2707 \[
1205     M = \left\{ {(Q,P):Q^T Q = 1,Q^T PJ^{ - 1} + J^{ - 1} P^T Q = 0}
1206     \right\}.
1207     \]
Unfortunately, this constraint manifold is not the cotangent bundle
$T^* SO(3)$, which can be considered as a symplectic manifold on the
Lie rotation group $SO(3)$. However, it turns out that under a
symplectic transformation, the cotangent space and the phase space
are diffeomorphic. By introducing
1213 tim 2706 \[
1214 tim 2707 \tilde Q = Q,\tilde P = \frac{1}{2}\left( {P - QP^T Q} \right),
1215 tim 2706 \]
1216 tim 2707 the mechanical system subject to a holonomic constraint manifold $M$
1217     can be re-formulated as a Hamiltonian system on the cotangent space
1218     \[
1219     T^* SO(3) = \left\{ {(\tilde Q,\tilde P):\tilde Q^T \tilde Q =
1,\tilde Q^T \tilde PJ^{ - 1} + J^{ - 1} \tilde P^T \tilde Q = 0} \right\}
1221     \]
For a body-fixed vector $X_i$ with respect to the center of mass of
the rigid body, its corresponding lab-fixed vector $X_i^{lab}$ is
given as
1225     \begin{equation}
1226     X_i^{lab} = Q X_i + q.
1227     \end{equation}
1228     Therefore, potential energy $V(q,Q)$ is defined by
1229     \[
1230     V(q,Q) = V(Q X_0 + q).
1231     \]
1232 tim 2713 Hence, the force and torque are given by
1233 tim 2707 \[
1234 tim 2713 \nabla _q V(q,Q) = F(q,Q) = \sum\limits_i {F_i (q,Q)},
1235 tim 2707 \]
1236 tim 2713 and
1237 tim 2707 \[
\nabla _Q V(q,Q) = \sum\limits_i {F_i (q,Q)X_i^T }
1239     \]
respectively. As a common choice to describe the rotational dynamics
of the rigid body, the angular momentum in the body-fixed frame $\Pi
= Q^T P$ is introduced to rewrite the equations of motion,
1243 tim 2707 \begin{equation}
1244     \begin{array}{l}
1245 tim 2899 \dot \Pi = J^{ - 1} \Pi ^T \Pi + Q^T \sum\limits_i {F_i (q,Q)X_i^T } - \Lambda, \\
1246     \dot Q = Q\Pi {\rm{ }}J^{ - 1}, \\
1247 tim 2707 \end{array}
1248     \label{introEqaution:RBMotionPI}
1249     \end{equation}
1250 tim 2905 as well as holonomic constraints $\Pi J^{ - 1} + J^{ - 1} \Pi ^t =
1251     0$ and $Q^T Q = 1$. For a vector $v(v_1 ,v_2 ,v_3 ) \in R^3$ and a
1252     matrix $\hat v \in so(3)^ \star$, the hat-map isomorphism,
1253 tim 2707 \begin{equation}
1254     v(v_1 ,v_2 ,v_3 ) \Leftrightarrow \hat v = \left(
1255     {\begin{array}{*{20}c}
1256     0 & { - v_3 } & {v_2 } \\
1257     {v_3 } & 0 & { - v_1 } \\
1258     { - v_2 } & {v_1 } & 0 \\
1259     \end{array}} \right),
1260     \label{introEquation:hatmapIsomorphism}
1261     \end{equation}
1262     will let us associate the matrix products with traditional vector
1263     operations
1264     \[
1265 tim 2899 \hat vu = v \times u.
1266 tim 2707 \]
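
A short numerical check of the hat-map (with assumed vector values;
the helper name \texttt{hat} is arbitrary) can be written in Python
as:
\begin{verbatim}
import numpy as np

def hat(v):
    """Skew-symmetric matrix associated with the 3-vector v by the hat-map."""
    return np.array([[ 0.0, -v[2],  v[1]],
                     [ v[2],  0.0, -v[0]],
                     [-v[1],  v[0],  0.0]])

v, u = np.array([1.0, 2.0, 3.0]), np.array([-0.5, 4.0, 1.5])
assert np.allclose(hat(v) @ u, np.cross(v, u))    # hat(v) u = v x u
\end{verbatim}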
1267 tim 2899 Using Eq.~\ref{introEqaution:RBMotionPI}, one can construct a skew
1268 tim 2707 matrix,
1269 tim 2899 \begin{eqnarray}
1270 tim 2905 (\dot \Pi - \dot \Pi ^T )&= &(\Pi - \Pi ^T )(J^{ - 1} \Pi + \Pi J^{ - 1} ) \notag \\
1271     & & + \sum\limits_i {[Q^T F_i (r,Q)X_i^T - X_i F_i (r,Q)^T Q]} -
1272     (\Lambda - \Lambda ^T ). \label{introEquation:skewMatrixPI}
1273 tim 2899 \end{eqnarray}
1274     Since $\Lambda$ is symmetric, the last term of
1275     Eq.~\ref{introEquation:skewMatrixPI} is zero, which implies the
1276     Lagrange multiplier $\Lambda$ is absent from the equations of
motion. This unique property eliminates the requirement of
iterations which cannot be avoided in other methods\cite{Kol1997,
Omelyan1998}. Applying the hat-map isomorphism, we obtain the
equation of motion for the angular momentum in the body frame
1281 tim 2713 \begin{equation}
1282     \dot \pi = \pi \times I^{ - 1} \pi + \sum\limits_i {\left( {Q^T
1283     F_i (r,Q)} \right) \times X_i }.
1284     \label{introEquation:bodyAngularMotion}
1285     \end{equation}
In the same manner, the equation of motion for the rotation matrix
is given by
1288     \[
1289 tim 2899 \dot Q = Qskew(I^{ - 1} \pi ).
1290 tim 2707 \]
1291    
1292 tim 2713 \subsection{\label{introSection:SymplecticFreeRB}Symplectic
1293     Lie-Poisson Integrator for Free Rigid Body}
1294 tim 2707
If there are no external forces exerted on the rigid body, the only
contribution to the rotational motion is from the kinetic energy
(the first term of Eq.~\ref{introEquation:bodyAngularMotion}). The
free rigid body is an example of a Lie-Poisson system with the
Hamiltonian function
1300 tim 2713 \begin{equation}
1301     T^r (\pi ) = T_1 ^r (\pi _1 ) + T_2^r (\pi _2 ) + T_3^r (\pi _3 )
1302     \label{introEquation:rotationalKineticRB}
1303     \end{equation}
1304     where $T_i^r (\pi _i ) = \frac{{\pi _i ^2 }}{{2I_i }}$ and
1305     Lie-Poisson structure matrix,
1306     \begin{equation}
1307     J(\pi ) = \left( {\begin{array}{*{20}c}
1308     0 & {\pi _3 } & { - \pi _2 } \\
1309     { - \pi _3 } & 0 & {\pi _1 } \\
1310     {\pi _2 } & { - \pi _1 } & 0 \\
1311 tim 2899 \end{array}} \right).
1312 tim 2713 \end{equation}
1313     Thus, the dynamics of free rigid body is governed by
1314     \begin{equation}
1315 tim 2899 \frac{d}{{dt}}\pi = J(\pi )\nabla _\pi T^r (\pi ).
1316 tim 2713 \end{equation}
1317 tim 2905 One may notice that each $T_i^r$ in
1318     Eq.~\ref{introEquation:rotationalKineticRB} can be solved exactly.
1319     For instance, the equations of motion due to $T_1^r$ are given by
1320 tim 2713 \begin{equation}
1321     \frac{d}{{dt}}\pi = R_1 \pi ,\frac{d}{{dt}}Q = QR_1
1322     \label{introEqaution:RBMotionSingleTerm}
1323     \end{equation}
1324 tim 2905 with
1325 tim 2713 \[ R_1 = \left( {\begin{array}{*{20}c}
1326     0 & 0 & 0 \\
0 & 0 & {\pi _1 /I_1 } \\
0 & { - \pi _1 /I_1 } & 0 \\
1329     \end{array}} \right).
1330     \]
The solution of Eq.~\ref{introEqaution:RBMotionSingleTerm} is
1332 tim 2707 \[
1333 tim 2713 \pi (\Delta t) = e^{\Delta tR_1 } \pi (0),Q(\Delta t) =
1334     Q(0)e^{\Delta tR_1 }
1335 tim 2707 \]
1336 tim 2713 with
1337 tim 2707 \[
1338 tim 2713 e^{\Delta tR_1 } = \left( {\begin{array}{*{20}c}
1 & 0 & 0 \\
1340     0 & {\cos \theta _1 } & {\sin \theta _1 } \\
1341     0 & { - \sin \theta _1 } & {\cos \theta _1 } \\
1342     \end{array}} \right),\theta _1 = \frac{{\pi _1 }}{{I_1 }}\Delta t.
1343 tim 2707 \]
To reduce the cost of computing expensive functions in $e^{\Delta
tR_1 }$, we can use the Cayley transformation to obtain a
single-axis propagator,
\[
e^{\Delta tR_1 } \approx \left( {1 - \frac{{\Delta t}}{2}R_1 }
\right)^{ - 1} \left( {1 + \frac{{\Delta t}}{2}R_1 } \right).
\]
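
A small numerical illustration of this approximation is sketched
below; the values of the angular rate $\pi_1/I_1$ and the time step
are arbitrary, and SciPy is used only to form the exact matrix
exponential for comparison.
\begin{verbatim}
import numpy as np
from scipy.linalg import expm

omega1, dt = 0.32, 0.05            # assumed angular rate pi_1 / I_1 and time step
R1 = np.array([[0.0,     0.0,    0.0],
               [0.0,     0.0, omega1],
               [0.0, -omega1,    0.0]])

exact = expm(dt * R1)                                    # exact single-axis propagator
A = 0.5 * dt * R1
cayley = np.linalg.solve(np.eye(3) - A, np.eye(3) + A)   # Cayley approximation
print(np.max(np.abs(exact - cayley)))                    # difference is O(dt^3)
\end{verbatim}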
1351 tim 2720 The flow maps for $T_2^r$ and $T_3^r$ can be found in the same
1352 tim 2872 manner. In order to construct a second-order symplectic method, we
1353 tim 2905 split the angular kinetic Hamiltonian function into five terms
1354 tim 2707 \[
1355 tim 2713 T^r (\pi ) = \frac{1}{2}T_1 ^r (\pi _1 ) + \frac{1}{2}T_2^r (\pi _2
1356     ) + T_3^r (\pi _3 ) + \frac{1}{2}T_2^r (\pi _2 ) + \frac{1}{2}T_1 ^r
1357 tim 2872 (\pi _1 ).
1358     \]
By concatenating the propagators corresponding to these five terms,
we can obtain a symplectic integrator,
1361 tim 2713 \[
1362     \varphi _{\Delta t,T^r } = \varphi _{\Delta t/2,\pi _1 } \circ
1363 tim 2707 \varphi _{\Delta t/2,\pi _2 } \circ \varphi _{\Delta t,\pi _3 }
1364     \circ \varphi _{\Delta t/2,\pi _2 } \circ \varphi _{\Delta t/2,\pi
1365 tim 2713 _1 }.
1366 tim 2707 \]
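
In code, one free-rotor step built from this five-term splitting
might look like the following sketch, where each single-axis
propagator is formed with the Cayley approximation above; the
function names, the sign convention of the generators (chosen to
match $R_1$), and the diagonal inertia array are assumptions.
\begin{verbatim}
import numpy as np

def axis_generator(pi_body, inertia, k):
    """Generator R_k of the single-axis flow of T_k^r (convention of R_1 above)."""
    omega = pi_body[k] / inertia[k]
    R = np.zeros((3, 3))
    i, j = [(1, 2), (2, 0), (0, 1)][k]
    R[i, j], R[j, i] = omega, -omega
    return R

def free_rotation_step(Q, pi_body, inertia, dt):
    """Five-term splitting: half pi_1, half pi_2, full pi_3, half pi_2, half pi_1."""
    for k, h in [(0, dt / 2), (1, dt / 2), (2, dt), (1, dt / 2), (0, dt / 2)]:
        A = 0.5 * h * axis_generator(pi_body, inertia, k)
        U = np.linalg.solve(np.eye(3) - A, np.eye(3) + A)   # Cayley single-axis propagator
        Q, pi_body = Q @ U, U @ pi_body
    return Q, pi_body
\end{verbatim}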
The non-canonical Lie-Poisson bracket $\{ F,G\}$ of two functions
$F(\pi )$ and $G(\pi )$ is defined by
1369 tim 2707 \[
1370 tim 2713 \{ F,G\} (\pi ) = [\nabla _\pi F(\pi )]^T J(\pi )\nabla _\pi G(\pi
1371 tim 2899 ).
1372 tim 2713 \]
If the Poisson bracket of a function $F$ with an arbitrary smooth
function $G$ is zero, $F$ is a \emph{Casimir}, which is a conserved
quantity in a Poisson system. We can easily verify that the norm of
the angular momentum, $\parallel \pi \parallel$, is a
\emph{Casimir}. Let $F(\pi ) = S(\frac{{\parallel \pi \parallel ^2
}}{2})$ for an arbitrary function $S:R \to R$; then by the chain
rule,
1380     \[
1381     \nabla _\pi F(\pi ) = S'(\frac{{\parallel \pi \parallel ^2
1382 tim 2899 }}{2})\pi.
1383 tim 2713 \]
1384 tim 2899 Thus, $ [\nabla _\pi F(\pi )]^T J(\pi ) = - S'(\frac{{\parallel
1385     \pi
1386 tim 2713 \parallel ^2 }}{2})\pi \times \pi = 0 $. This explicit
1387 tim 2872 Lie-Poisson integrator is found to be both extremely efficient and
stable. These properties can be explained by the fact that the small
1389     angle approximation is used and the norm of the angular momentum is
1390     conserved.
1391 tim 2713
1392     \subsection{\label{introSection:RBHamiltonianSplitting} Hamiltonian
1393     Splitting for Rigid Body}
1394    
1395     The Hamiltonian of rigid body can be separated in terms of kinetic
1396 tim 2905 energy and potential energy,$H = T(p,\pi ) + V(q,Q)$. The equations
1397     of motion corresponding to potential energy and kinetic energy are
1398     listed in the below table,
1399 tim 2776 \begin{table}
1400 tim 2889 \caption{EQUATIONS OF MOTION DUE TO POTENTIAL AND KINETIC ENERGIES}
1401 tim 2713 \begin{center}
1402     \begin{tabular}{|l|l|}
1403     \hline
1404     % after \\: \hline or \cline{col1-col2} \cline{col3-col4} ...
1405     Potential & Kinetic \\
$\frac{{dq}}{{dt}} = 0$ & $\frac{{dq}}{{dt}} = \frac{p}{m}$ \\
1407     $\frac{d}{{dt}}p = - \frac{{\partial V}}{{\partial q}}$ & $ \frac{d}{{dt}}p = 0$ \\
$\frac{d}{{dt}}Q = 0$ & $ \frac{d}{{dt}}Q = Qskew(I^{ - 1} \pi )$ \\
1409     $ \frac{d}{{dt}}\pi = \sum\limits_i {\left( {Q^T F_i (r,Q)} \right) \times X_i }$ & $\frac{d}{{dt}}\pi = \pi \times I^{ - 1} \pi$\\
1410     \hline
1411     \end{tabular}
1412     \end{center}
1413 tim 2776 \end{table}
A second-order symplectic method is now obtained by composing the
propagators of the potential and kinetic parts,
1416 tim 2713 \[
1417     \varphi _{\Delta t} = \varphi _{\Delta t/2,V} \circ \varphi
1418     _{\Delta t,T} \circ \varphi _{\Delta t/2,V}.
1419     \]
Moreover, $\varphi _{\Delta t/2,V}$ can be divided into two
sub-propagators corresponding to the force and torque contributions,
respectively,
1423 tim 2713 \[
1424 tim 2707 \varphi _{\Delta t/2,V} = \varphi _{\Delta t/2,F} \circ \varphi
1425 tim 2713 _{\Delta t/2,\tau }.
1426 tim 2707 \]
Since the associated operators of $\varphi _{\Delta t/2,F} $ and
$\varphi _{\Delta t/2,\tau }$ commute, the composition order inside
$\varphi _{\Delta t/2,V}$ does not matter. Furthermore, the kinetic
energy can be separated into a translational kinetic term, $T^t
(p)$, and a rotational kinetic term, $T^r (\pi )$,
1432 tim 2713 \begin{equation}
1433     T(p,\pi ) =T^t (p) + T^r (\pi ).
1434     \end{equation}
1435     where $ T^t (p) = \frac{1}{2}p^T m^{ - 1} p $ and $T^r (\pi )$ is
1436 tim 2905 defined by Eq.~\ref{introEquation:rotationalKineticRB}. Therefore,
1437     the corresponding propagators are given by
1438 tim 2713 \[
1439     \varphi _{\Delta t,T} = \varphi _{\Delta t,T^t } \circ \varphi
1440     _{\Delta t,T^r }.
1441     \]
Finally, we obtain the overall symplectic propagator for freely
moving rigid bodies,
1444 tim 2905 \begin{eqnarray}
1445     \varphi _{\Delta t} &=& \varphi _{\Delta t/2,F} \circ \varphi _{\Delta t/2,\tau } \notag\\
1446     & & \circ \varphi _{\Delta t,T^t } \circ \varphi _{\Delta t/2,\pi _1 } \circ \varphi _{\Delta t/2,\pi _2 } \circ \varphi _{\Delta t,\pi _3 } \circ \varphi _{\Delta t/2,\pi _2 } \circ \varphi _{\Delta t/2,\pi _1 } \notag\\
& & \circ \varphi _{\Delta t/2,\tau } \circ \varphi _{\Delta t/2,F}.
\label{introEquation:overallRBFlowMaps}
1449 tim 2905 \end{eqnarray}
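
Putting the pieces together, one full time step corresponding to
Eq.~\ref{introEquation:overallRBFlowMaps} could be organized as in
the following sketch. Here \texttt{force\_torque} (returning the net
force on the center of mass and the body-frame torque) and
\texttt{free\_rotation\_step} (the five-term rotation sketched
earlier) are assumed to be supplied by the caller.
\begin{verbatim}
import numpy as np

def rigid_body_step(q, p, Q, pi_body, mass, inertia, dt,
                    force_torque, free_rotation_step):
    """Half force/torque kick, full translation and free rotation, half kick."""
    F, tau = force_torque(q, Q)
    p = p + 0.5 * dt * F                    # phi_{dt/2, F}
    pi_body = pi_body + 0.5 * dt * tau      # phi_{dt/2, tau}
    q = q + dt * p / mass                   # phi_{dt, T^t}
    Q, pi_body = free_rotation_step(Q, pi_body, inertia, dt)   # phi_{dt, T^r}
    F, tau = force_torque(q, Q)
    p = p + 0.5 * dt * F
    pi_body = pi_body + 0.5 * dt * tau
    return q, p, Q, pi_body
\end{verbatim}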
1450 tim 2707
1451 tim 2685 \section{\label{introSection:langevinDynamics}Langevin Dynamics}
As an alternative to Newtonian dynamics, Langevin dynamics, which
mimics a simple heat bath with stochastic and dissipative forces,
has been applied in a variety of studies. This section will review
the theory of Langevin dynamics. A brief derivation of the
generalized Langevin equation will be given first. Following that,
we will discuss the physical meaning of the terms appearing in the
equation as well as the calculation of the friction tensor from
hydrodynamic theory.
1460 tim 2685
1461 tim 2719 \subsection{\label{introSection:generalizedLangevinDynamics}Derivation of Generalized Langevin Equation}
1462 tim 2685
A harmonic bath model, in which an effective set of harmonic
oscillators is used to mimic the effect of a linearly responding
environment, has been widely used in quantum chemistry and
statistical mechanics. One of the successful applications of the
harmonic bath model is the derivation of the generalized Langevin
equation (GLE). Let us consider a system in which the degree of
freedom $x$ is assumed to couple to the bath linearly, giving a
Hamiltonian of the form
1471 tim 2696 \begin{equation}
1472     H = \frac{{p^2 }}{{2m}} + U(x) + H_B + \Delta U(x,x_1 , \ldots x_N)
1473 tim 2719 \label{introEquation:bathGLE}.
1474 tim 2696 \end{equation}
1475 tim 2872 Here $p$ is a momentum conjugate to $x$, $m$ is the mass associated
1476     with this degree of freedom, $H_B$ is a harmonic bath Hamiltonian,
1477 tim 2696 \[
H_B = \sum\limits_{\alpha = 1}^N {\left\{ {\frac{{p_\alpha ^2
}}{{2m_\alpha }} + \frac{1}{2}m_\alpha \omega _\alpha ^2 x_\alpha ^2 }
\right\}}
1481 tim 2696 \]
1482 tim 2719 where the index $\alpha$ runs over all the bath degrees of freedom,
1483     $\omega _\alpha$ are the harmonic bath frequencies, $m_\alpha$ are
1484 tim 2872 the harmonic bath masses, and $\Delta U$ is a bilinear system-bath
1485 tim 2719 coupling,
1486 tim 2696 \[
1487     \Delta U = - \sum\limits_{\alpha = 1}^N {g_\alpha x_\alpha x}
1488     \]
1489 tim 2872 where $g_\alpha$ are the coupling constants between the bath
1490 tim 2874 coordinates ($x_ \alpha$) and the system coordinate ($x$).
1491 tim 2872 Introducing
1492 tim 2696 \[
1493 tim 2719 W(x) = U(x) - \sum\limits_{\alpha = 1}^N {\frac{{g_\alpha ^2
}}{{2m_\alpha \omega _\alpha ^2 }}} x^2
1495 tim 2899 \]
1496     and combining the last two terms in Eq.~\ref{introEquation:bathGLE}, we may rewrite the Harmonic bath Hamiltonian as
1497 tim 2696 \[
1498     H = \frac{{p^2 }}{{2m}} + W(x) + \sum\limits_{\alpha = 1}^N
1499     {\left\{ {\frac{{p_\alpha ^2 }}{{2m_\alpha }} + \frac{1}{2}m_\alpha
\omega _\alpha ^2 \left( {x_\alpha - \frac{{g_\alpha }}{{m_\alpha
\omega _\alpha ^2 }}x} \right)^2 } \right\}}.
1502 tim 2696 \]
Since the first two terms of the new Hamiltonian depend only on the
system coordinates, we can derive the equations of motion for the
generalized Langevin dynamics from Hamilton's equations,
1506 tim 2719 \begin{equation}
m\ddot x = - \frac{{\partial W(x)}}{{\partial x}} +
\sum\limits_{\alpha = 1}^N {g_\alpha \left( {x_\alpha -
\frac{{g_\alpha }}{{m_\alpha \omega _\alpha ^2 }}x} \right)},
1510     \label{introEquation:coorMotionGLE}
1511     \end{equation}
1512     and
1513     \begin{equation}
m_\alpha \ddot x_\alpha = - m_\alpha \omega _\alpha ^2 \left( {x_\alpha -
\frac{{g_\alpha }}{{m_\alpha \omega _\alpha ^2 }}x} \right).
1516     \label{introEquation:bathMotionGLE}
1517     \end{equation}
In order to derive an equation for $x$, the dynamics of the bath
variables $x_\alpha$ must be solved exactly first. As an integral
transform which is particularly useful in solving linear ordinary
differential equations, the Laplace transform is the appropriate
tool to solve this problem. The basic idea is to transform the
difficult differential equations into simple algebraic problems
which can be solved easily. Then, by applying the inverse Laplace
transform, also known as the Bromwich integral, we can retrieve the
solutions of the original problems. Let $f(t)$ be a function defined
on $ [0,\infty ) $; the Laplace transform of $f(t)$ is a new
function defined as
1528 tim 2696 \[
1529 tim 2719 L(f(t)) \equiv F(p) = \int_0^\infty {f(t)e^{ - pt} dt}
1530 tim 2696 \]
where $p$ is real and $L$ is called the Laplace transform operator.
Below are some important properties of the Laplace transform:
1533 tim 2789 \begin{eqnarray*}
1534     L(x + y) & = & L(x) + L(y) \\
1535     L(ax) & = & aL(x) \\
L(\dot x) & = & pL(x) - x(0) \\
1537     L(\ddot x)& = & p^2 L(x) - px(0) - \dot x(0) \\
1538     L\left( {\int_0^t {g(t - \tau )h(\tau )d\tau } } \right)& = & G(p)H(p) \\
1539     \end{eqnarray*}
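
As a quick check of the derivative property above, a single
integration by parts (assuming $x(t)e^{-pt} \to 0$ as $t \to \infty$)
gives
\[
L(\dot x) = \int_0^\infty {\dot x(t)e^{ - pt} dt} = \left. {x(t)e^{
- pt} } \right|_0^\infty + p\int_0^\infty {x(t)e^{ - pt} dt} = pL(x)
- x(0).
\]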
Applying the Laplace transform to the bath coordinates, we obtain
\begin{eqnarray*}
p^2 L(x_\alpha ) - px_\alpha (0) - \dot x_\alpha (0) & = & - \omega _\alpha ^2 L(x_\alpha ) + \frac{{g_\alpha }}{{m_\alpha }}L(x), \\
L(x_\alpha ) & = & \frac{{\frac{{g_\alpha }}{{m_\alpha }}L(x) + px_\alpha (0) + \dot x_\alpha (0)}}{{p^2 + \omega _\alpha ^2 }}. \\
\end{eqnarray*}
In the same way, the Laplace transform of the equation of motion for
the system coordinate becomes
\begin{eqnarray*}
mL(\ddot x) & = & - L\left( {\frac{{\partial W(x)}}{{\partial x}}} \right) - \sum\limits_{\alpha = 1}^N {\left\{ {\frac{{g_\alpha ^2 }}{{m_\alpha \omega _\alpha ^2 }}\frac{p}{{p^2 + \omega _\alpha ^2 }}pL(x) - \frac{p}{{p^2 + \omega _\alpha ^2 }}g_\alpha x_\alpha (0) - \frac{1}{{p^2 + \omega _\alpha ^2 }}g_\alpha \dot x_\alpha (0)} \right\}}.
\end{eqnarray*}
With the help of the following inverse Laplace transforms,
1553 tim 2696 \[
1554 tim 2719 \begin{array}{c}
1555     L(\cos at) = \frac{p}{{p^2 + a^2 }} \\
1556     L(\sin at) = \frac{a}{{p^2 + a^2 }} \\
1557     L(1) = \frac{1}{p} \\
1558     \end{array}
1559 tim 2696 \]
we obtain
\begin{eqnarray*}
m\ddot x & = & - \frac{{\partial W(x)}}{{\partial x}} - \int_0^t
{\sum\limits_{\alpha = 1}^N {\frac{{g_\alpha ^2 }}{{m_\alpha \omega
_\alpha ^2 }}\cos (\omega _\alpha \tau )} \dot x(t - \tau )d\tau } \\
& & + \sum\limits_{\alpha = 1}^N {\left\{ {\left[ {g_\alpha x_\alpha
(0) - \frac{{g_\alpha ^2 }}{{m_\alpha \omega _\alpha ^2 }}x(0)}
\right]\cos (\omega _\alpha t) + \frac{{g_\alpha \dot x_\alpha
(0)}}{{\omega _\alpha }}\sin (\omega _\alpha t)} \right\}}.
\end{eqnarray*}
1581 tim 2719 Introducing a \emph{dynamic friction kernel}
1582 tim 2696 \begin{equation}
\xi (t) = \sum\limits_{\alpha = 1}^N {\frac{{g_\alpha ^2
}}{{m_\alpha \omega _\alpha ^2 }}\cos (\omega _\alpha t)}
1585     \label{introEquation:dynamicFrictionKernelDefinition}
1586     \end{equation}
and a \emph{random force}
\begin{equation}
R(t) = \sum\limits_{\alpha = 1}^N {\left\{ {\left[ {g_\alpha
x_\alpha (0) - \frac{{g_\alpha ^2 }}{{m_\alpha \omega _\alpha ^2
}}x(0)} \right]\cos (\omega _\alpha t) + \frac{{g_\alpha \dot
x_\alpha (0)}}{{\omega _\alpha }}\sin (\omega _\alpha t)} \right\}},
1593     \label{introEquation:randomForceDefinition}
1594     \end{equation}
1595     the equation of motion can be rewritten as
1596     \begin{equation}
m\ddot x = - \frac{{\partial W}}{{\partial x}} - \int_0^t {\xi
(\tau )\dot x(t - \tau )d\tau } + R(t)
1599     \label{introEuqation:GeneralizedLangevinDynamics}
1600     \end{equation}
1601 tim 2719 which is known as the \emph{generalized Langevin equation}.
1602    
1603 tim 2819 \subsubsection{\label{introSection:randomForceDynamicFrictionKernel}\textbf{Random Force and Dynamic Friction Kernel}}
1604 tim 2719
1605     One may notice that $R(t)$ depends only on initial conditions, which
1606     implies it is completely deterministic within the context of a
1607     harmonic bath. However, it is easy to verify that $R(t)$ is totally
uncorrelated to $x$ and $\dot x$, $\left\langle {x(t)R(t)}
\right\rangle = 0, \left\langle {\dot x(t)R(t)} \right\rangle =
0.$ This property is what we expect from a truly random process. As
long as the model chosen for $R(t)$ is a Gaussian distribution, the
stochastic nature of the GLE still remains.
1613 tim 2719 %dynamic friction kernel
1614     The convolution integral
1615 tim 2696 \[
\int_0^t {\xi (\tau )\dot x(t - \tau )d\tau }
1617 tim 2696 \]
1618 tim 2719 depends on the entire history of the evolution of $x$, which implies
1619     that the bath retains memory of previous motions. In other words,
the bath requires a finite time to respond to changes in the motion
of the system. For a sluggish bath which responds slowly to changes
in the system coordinate, we may regard $\xi(t)$ as a constant,
$\xi(t) = \xi_0$. Hence, the convolution integral becomes
1624     \[
\int_0^t {\xi (\tau )\dot x(t - \tau )d\tau } = \xi _0 (x(t) - x(0))
1626     \]
1627 tim 2899 and Eq.~\ref{introEuqation:GeneralizedLangevinDynamics} becomes
1628 tim 2719 \[
1629     m\ddot x = - \frac{\partial }{{\partial x}}\left( {W(x) +
1630     \frac{1}{2}\xi _0 (x - x_0 )^2 } \right) + R(t),
1631     \]
1632 tim 2872 which can be used to describe the effect of dynamic caging in
1633     viscous solvents. The other extreme is the bath that responds
taken as a $\delta$-function in time:
1635     taken as a $delta$ function in time:
1636 tim 2719 \[
1637     \xi (t) = 2\xi _0 \delta (t)
1638     \]
1639     Hence, the convolution integral becomes
1640     \[
\int_0^t {\xi (\tau )\dot x(t - \tau )d\tau } = 2\xi _0 \int_0^t
{\delta (\tau )\dot x(t - \tau )d\tau } = \xi _0 \dot x(t),
1643     \]
1644 tim 2899 and Eq.~\ref{introEuqation:GeneralizedLangevinDynamics} becomes
1645 tim 2719 \begin{equation}
1646     m\ddot x = - \frac{{\partial W(x)}}{{\partial x}} - \xi _0 \dot
1647     x(t) + R(t) \label{introEquation:LangevinEquation}
1648     \end{equation}
1649     which is known as the Langevin equation. The static friction
coefficient $\xi _0$ can either be calculated from the spectral
density or be determined by Stokes' law for regularly shaped
particles. A brief review of the calculation of the friction tensor
for arbitrarily shaped particles is given in
Sec.~\ref{introSection:frictionTensor}.
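
As an illustration, a crude numerical scheme for
Eq.~\ref{introEquation:LangevinEquation} (a simple Euler-type
discretization, not the integrator used in this work), with the
white-noise amplitude chosen so that $\left\langle {R(t)R(t')}
\right\rangle = 2\xi _0 k_B T\delta (t - t')$, can be sketched in
Python as follows; all names are hypothetical.
\begin{verbatim}
import numpy as np

def langevin_step(x, v, m, xi0, kT, dt, grad_W, rng):
    """One Euler-type step of m a = -dW/dx - xi0 * v + R(t)."""
    R = rng.normal(0.0, np.sqrt(2.0 * xi0 * kT / dt))   # discretized white noise
    a = (-grad_W(x) - xi0 * v + R) / m
    v = v + dt * a
    x = x + dt * v
    return x, v

# usage sketch for a harmonic potential W(x) = 0.5 * k * x^2 with k = 1
rng = np.random.default_rng(0)
x, v = 1.0, 0.0
for _ in range(1000):
    x, v = langevin_step(x, v, m=1.0, xi0=0.5, kT=1.0, dt=0.01,
                         grad_W=lambda y: y, rng=rng)
\end{verbatim}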
1654 tim 2696
1655 tim 2819 \subsubsection{\label{introSection:secondFluctuationDissipation}\textbf{The Second Fluctuation Dissipation Theorem}}
1656 tim 2719
1657 tim 2905 Defining a new set of coordinates
1658 tim 2696 \[
q_\alpha (t) = x_\alpha (t) - \frac{{g_\alpha }}{{m_\alpha \omega _\alpha
^2 }}x(0),
1661     \]
we can rewrite $R(t)$ as
1663 tim 2696 \[
1664 tim 2719 R(t) = \sum\limits_{\alpha = 1}^N {g_\alpha q_\alpha (t)}.
1665 tim 2696 \]
1666     And since the $q$ coordinates are harmonic oscillators,
1667 tim 2789 \begin{eqnarray*}
1668     \left\langle {q_\alpha ^2 } \right\rangle & = & \frac{{kT}}{{m_\alpha \omega _\alpha ^2 }} \\
1669     \left\langle {q_\alpha (t)q_\alpha (0)} \right\rangle & = & \left\langle {q_\alpha ^2 (0)} \right\rangle \cos (\omega _\alpha t) \\
1670     \left\langle {q_\alpha (t)q_\beta (0)} \right\rangle & = &\delta _{\alpha \beta } \left\langle {q_\alpha (t)q_\alpha (0)} \right\rangle \\
1671     \left\langle {R(t)R(0)} \right\rangle & = & \sum\limits_\alpha {\sum\limits_\beta {g_\alpha g_\beta \left\langle {q_\alpha (t)q_\beta (0)} \right\rangle } } \\
1672     & = &\sum\limits_\alpha {g_\alpha ^2 \left\langle {q_\alpha ^2 (0)} \right\rangle \cos (\omega _\alpha t)} \\
1673     & = &kT\xi (t) \\
1674     \end{eqnarray*}
1675 tim 2719 Thus, we recover the \emph{second fluctuation dissipation theorem}
1676 tim 2696 \begin{equation}
\xi (t) = \frac{1}{{kT}}\left\langle {R(t)R(0)} \right\rangle
1678 tim 2905 \label{introEquation:secondFluctuationDissipation},
1679 tim 2696 \end{equation}
1680 tim 2905 which acts as a constraint on the possible ways in which one can
1681     model the random force and friction kernel.