1 tim 2685 \chapter{\label{chapt:introduction}INTRODUCTION AND THEORETICAL BACKGROUND}
2    
3 tim 2693 \section{\label{introSection:classicalMechanics}Classical
4     Mechanics}
5 tim 2685
Closely related to Classical Mechanics, Molecular Dynamics
simulations are carried out by integrating the equations of motion
for a given system of particles. There are three fundamental ideas
behind classical mechanics. Firstly, one can determine the state of
a mechanical system at any time of interest; secondly, all the
mechanical properties of the system at that time can be determined
by combining the knowledge of the properties of the system with the
specification of this state; finally, the specification of the state,
when further combined with the laws of mechanics, is sufficient to
predict the future behavior of the system.
16 tim 2685
17 tim 2693 \subsection{\label{introSection:newtonian}Newtonian Mechanics}
The discovery of Newton's three laws of mechanics, which govern the
motion of particles, is the foundation of classical mechanics.
20 tim 2819 Newton's first law defines a class of inertial frames. Inertial
21 tim 2694 frames are reference frames where a particle not interacting with
22     other bodies will move with constant speed in the same direction.
23 tim 2819 With respect to inertial frames, Newton's second law has the form
24 tim 2694 \begin{equation}
25 tim 2819 F = \frac {dp}{dt} = \frac {mdv}{dt}
26 tim 2694 \label{introEquation:newtonSecondLaw}
27     \end{equation}
A point mass interacting with other bodies accelerates along the
direction of the force acting on it. Let
30 tim 2702 $F_{ij}$ be the force that particle $i$ exerts on particle $j$, and
31     $F_{ji}$ be the force that particle $j$ exerts on particle $i$.
32 tim 2819 Newton's third law states that
33 tim 2694 \begin{equation}
34 tim 2898 F_{ij} = -F_{ji}.
35 tim 2694 \label{introEquation:newtonThirdLaw}
36     \end{equation}
37     Conservation laws of Newtonian Mechanics play very important roles
38     in solving mechanics problems. The linear momentum of a particle is
39     conserved if it is free or it experiences no force. The second
40     conservation theorem concerns the angular momentum of a particle.
41     The angular momentum $L$ of a particle with respect to an origin
42     from which $r$ is measured is defined to be
43     \begin{equation}
44     L \equiv r \times p \label{introEquation:angularMomentumDefinition}
45     \end{equation}
46     The torque $\tau$ with respect to the same origin is defined to be
47     \begin{equation}
48 tim 2819 \tau \equiv r \times F \label{introEquation:torqueDefinition}
49 tim 2694 \end{equation}
50     Differentiating Eq.~\ref{introEquation:angularMomentumDefinition},
51     \[
52     \dot L = \frac{d}{{dt}}(r \times p) = (\dot r \times p) + (r \times
53     \dot p)
54     \]
55     since
56     \[
57     \dot r \times p = \dot r \times mv = m\dot r \times \dot r \equiv 0
58     \]
59     thus,
60     \begin{equation}
61 tim 2819 \dot L = r \times \dot p = \tau
62 tim 2694 \end{equation}
If there are no external torques acting on a body, its angular
momentum is conserved. The last conservation theorem states
that if all forces are conservative, the total energy is conserved,
66     \begin{equation}E = T + V. \label{introEquation:energyConservation}
67 tim 2696 \end{equation}
where $T$ is the kinetic energy and $V$ is the potential energy of
the system. All of these conserved quantities are important factors
in determining the quality of numerical integration schemes for
rigid bodies \cite{Dullweber1997}.
71 tim 2694
72 tim 2693 \subsection{\label{introSection:lagrangian}Lagrangian Mechanics}
73 tim 2692
Newtonian Mechanics suffers from two important limitations: motions
can only be described in Cartesian coordinate systems, and it
becomes impossible to predict analytically the properties of the
system even if we know all of the details of the interaction. In
order to overcome some of the practical difficulties which arise in
attempts to apply Newton's equations to complex systems, approximate
numerical procedures may be developed.
81 tim 2692
82 tim 2819 \subsubsection{\label{introSection:halmiltonPrinciple}\textbf{Hamilton's
83     Principle}}
84 tim 2692
85     Hamilton introduced the dynamical principle upon which it is
86 tim 2819 possible to base all of mechanics and most of classical physics.
87 tim 2898 Hamilton's Principle may be stated as follows: the actual
88     trajectory, along which a dynamical system may move from one point
89     to another within a specified time, is derived by finding the path
90     which minimizes the time integral of the difference between the
kinetic energy, $K$, and the potential energy, $U$,
92 tim 2692 \begin{equation}
93 tim 2899 \delta \int_{t_1 }^{t_2 } {(K - U)dt = 0}.
94 tim 2693 \label{introEquation:halmitonianPrinciple1}
95 tim 2692 \end{equation}
96     For simple mechanical systems, where the forces acting on the
97 tim 2819 different parts are derivable from a potential, the Lagrangian
98     function $L$ can be defined as the difference between the kinetic
99     energy of the system and its potential energy,
100 tim 2692 \begin{equation}
101     L \equiv K - U = L(q_i ,\dot q_i ) ,
102     \label{introEquation:lagrangianDef}
103     \end{equation}
104     then Eq.~\ref{introEquation:halmitonianPrinciple1} becomes
105     \begin{equation}
106 tim 2693 \delta \int_{t_1 }^{t_2 } {L dt = 0} ,
107     \label{introEquation:halmitonianPrinciple2}
108 tim 2692 \end{equation}
109    
110 tim 2819 \subsubsection{\label{introSection:equationOfMotionLagrangian}\textbf{The
111     Equations of Motion in Lagrangian Mechanics}}
112 tim 2692
For a system with $f$ degrees of freedom, the equations of motion in
Lagrangian form are
115 tim 2692 \begin{equation}
116     \frac{d}{{dt}}\frac{{\partial L}}{{\partial \dot q_i }} -
117     \frac{{\partial L}}{{\partial q_i }} = 0,{\rm{ }}i = 1, \ldots,f
118 tim 2693 \label{introEquation:eqMotionLagrangian}
119 tim 2692 \end{equation}
where $q_{i}$ is a generalized coordinate and $\dot{q_{i}}$ is the
corresponding generalized velocity.
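
As a simple illustration, consider a one-dimensional harmonic
oscillator with mass $m$ and force constant $k$, for which
\[
L = \frac{1}{2}m\dot q^2  - \frac{1}{2}kq^2 .
\]
Substituting this Lagrangian into
Eq.~\ref{introEquation:eqMotionLagrangian} gives
\[
\frac{d}{{dt}}\left( {m\dot q} \right) + kq = 0 ,
\]
which recovers Newton's equation of motion, $m\ddot q =  - kq$.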
122    
123 tim 2693 \subsection{\label{introSection:hamiltonian}Hamiltonian Mechanics}
124 tim 2692
125     Arising from Lagrangian Mechanics, Hamiltonian Mechanics was
126     introduced by William Rowan Hamilton in 1833 as a re-formulation of
127     classical mechanics. If the potential energy of a system is
128 tim 2819 independent of velocities, the momenta can be defined as
129 tim 2692 \begin{equation}
130     p_i = \frac{\partial L}{\partial \dot q_i}
131     \label{introEquation:generalizedMomenta}
132     \end{equation}
133 tim 2693 The Lagrange equations of motion are then expressed by
134 tim 2692 \begin{equation}
\dot p_i = \frac{{\partial L}}{{\partial q_i }}
136     \label{introEquation:generalizedMomentaDot}
137     \end{equation}
138     With the help of the generalized momenta, we may now define a new
139     quantity $H$ by the equation
140     \begin{equation}
141     H = \sum\limits_k {p_k \dot q_k } - L ,
142 tim 2692 \label{introEquation:hamiltonianDefByLagrangian}
143     \end{equation}
144     where $ \dot q_1 \ldots \dot q_f $ are generalized velocities and
145 tim 2898 $L$ is the Lagrangian function for the system. Differentiating
146     Eq.~\ref{introEquation:hamiltonianDefByLagrangian}, one can obtain
147 tim 2693 \begin{equation}
148     dH = \sum\limits_k {\left( {p_k d\dot q_k + \dot q_k dp_k -
149     \frac{{\partial L}}{{\partial q_k }}dq_k - \frac{{\partial
150     L}}{{\partial \dot q_k }}d\dot q_k } \right)} - \frac{{\partial
151     L}}{{\partial t}}dt \label{introEquation:diffHamiltonian1}
152     \end{equation}
Making use of Eq.~\ref{introEquation:generalizedMomenta}, the first
and fourth terms in the parentheses cancel. Therefore,
155 tim 2693 Eq.~\ref{introEquation:diffHamiltonian1} can be rewritten as
156     \begin{equation}
157     dH = \sum\limits_k {\left( {\dot q_k dp_k - \dot p_k dq_k }
158     \right)} - \frac{{\partial L}}{{\partial t}}dt
159     \label{introEquation:diffHamiltonian2}
160     \end{equation}
By identifying the coefficients of $dq_k$, $dp_k$ and $dt$, we can
162     find
163     \begin{equation}
164 tim 2819 \frac{{\partial H}}{{\partial p_k }} = \dot {q_k}
165 tim 2693 \label{introEquation:motionHamiltonianCoordinate}
166     \end{equation}
167     \begin{equation}
168 tim 2819 \frac{{\partial H}}{{\partial q_k }} = - \dot {p_k}
169 tim 2693 \label{introEquation:motionHamiltonianMomentum}
170     \end{equation}
171     and
172     \begin{equation}
173     \frac{{\partial H}}{{\partial t}} = - \frac{{\partial L}}{{\partial
174     t}}
175     \label{introEquation:motionHamiltonianTime}
176     \end{equation}
where Eq.~\ref{introEquation:motionHamiltonianCoordinate} and
Eq.~\ref{introEquation:motionHamiltonianMomentum} are Hamilton's
equations of motion. Due to their symmetrical form, they are also
known as the canonical equations of motion \cite{Goldstein2001}.
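
Continuing the harmonic oscillator illustration of the previous
subsection, the momentum conjugate to $q$ is
$p = \partial L/\partial \dot q = m\dot q$, and
Eq.~\ref{introEquation:hamiltonianDefByLagrangian} gives
\[
H = p\dot q - L = \frac{{p^2 }}{{2m}} + \frac{1}{2}kq^2 ,
\]
so that the canonical equations become
\[
\dot q = \frac{{\partial H}}{{\partial p}} = \frac{p}{m},\qquad \dot
p =  - \frac{{\partial H}}{{\partial q}} =  - kq ,
\]
a pair of first-order equations equivalent to the single
second-order equation obtained from the Lagrangian.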
181 tim 2693
An important difference between the Lagrangian approach and the
Hamiltonian approach is that the Lagrangian is considered to be a
function of the generalized velocities $\dot q_i$ and coordinates
$q_i$, while the Hamiltonian is considered to be a function of the
generalized momenta $p_i$ and the conjugate coordinates $q_i$.
Hamiltonian Mechanics is more appropriate for application to
statistical mechanics and quantum mechanics, since it treats the
coordinates and their conjugate momenta as independent variables and
works only with first-order differential equations \cite{Marion1990}.
191 tim 2696 In Newtonian Mechanics, a system described by conservative forces
192 tim 2899 conserves the total energy
193     (Eq.~\ref{introEquation:energyConservation}). It follows that
194     Hamilton's equations of motion conserve the total Hamiltonian.
195 tim 2696 \begin{equation}
196     \frac{{dH}}{{dt}} = \sum\limits_i {\left( {\frac{{\partial
197     H}}{{\partial q_i }}\dot q_i + \frac{{\partial H}}{{\partial p_i
198     }}\dot p_i } \right)} = \sum\limits_i {\left( {\frac{{\partial
199     H}}{{\partial q_i }}\frac{{\partial H}}{{\partial p_i }} -
200     \frac{{\partial H}}{{\partial p_i }}\frac{{\partial H}}{{\partial
201 tim 2698 q_i }}} \right) = 0} \label{introEquation:conserveHalmitonian}
202 tim 2696 \end{equation}
203    
204 tim 2693 \section{\label{introSection:statisticalMechanics}Statistical
205     Mechanics}
206 tim 2692
The thermodynamic behavior and properties of a Molecular Dynamics
simulation are governed by the principles of Statistical Mechanics.
The following section will give a brief introduction to some of the
Statistical Mechanics concepts and theorems presented in this
dissertation.
212 tim 2692
213 tim 2700 \subsection{\label{introSection:ensemble}Phase Space and Ensemble}
214 tim 2692
Mathematically, phase space is the space which represents all
possible states of a system. Each possible state of the system
corresponds to one unique point in the phase space. For mechanical
systems, the phase space usually consists of all possible values of
the position and momentum variables. Consider a dynamic system of
$f$ particles in a Cartesian space, where each of the $6f$
coordinates and momenta is assigned to one of $6f$ mutually
orthogonal axes; the phase space of this system is a $6f$
dimensional space. A point, $x = (\vec q_1 , \ldots ,\vec q_f ,\vec
p_1 , \ldots ,\vec p_f )$, with a unique set of values of the $6f$
coordinates and momenta is a phase space vector.
227 tim 2700
228 tim 2888 In statistical mechanics, the condition of an ensemble at any time
229 tim 2700 can be regarded as appropriately specified by the density $\rho$
230     with which representative points are distributed over the phase
231 tim 2819 space. The density distribution for an ensemble with $f$ degrees of
232     freedom is defined as,
233 tim 2700 \begin{equation}
234     \rho = \rho (q_1 , \ldots ,q_f ,p_1 , \ldots ,p_f ,t).
235     \label{introEquation:densityDistribution}
236     \end{equation}
Governed by the principles of mechanics, the phase points change
their locations over time, which in turn changes the density at any
given point in phase space. Hence, the density distribution must
also be taken to be a function of time.
241    
242     The number of systems $\delta N$ at time $t$ can be determined by,
243     \begin{equation}
244     \delta N = \rho (q,p,t)dq_1 \ldots dq_f dp_1 \ldots dp_f.
245     \label{introEquation:deltaN}
246     \end{equation}
247 tim 2819 Assuming a large enough population of systems, we can sufficiently
248     approximate $\delta N$ without introducing discontinuity when we go
249     from one region in the phase space to another. By integrating over
250     the whole phase space,
251 tim 2700 \begin{equation}
252     N = \int { \ldots \int {\rho (q,p,t)dq_1 } ...dq_f dp_1 } ...dp_f
253     \label{introEquation:totalNumberSystem}
254     \end{equation}
gives us an expression for the total number of systems. Hence,
the probability per unit volume in phase space can be obtained by,
257     \begin{equation}
258     \frac{{\rho (q,p,t)}}{N} = \frac{{\rho (q,p,t)}}{{\int { \ldots \int
259     {\rho (q,p,t)dq_1 } ...dq_f dp_1 } ...dp_f }}.
260     \label{introEquation:unitProbability}
261     \end{equation}
262 tim 2850 With the help of Eq.~\ref{introEquation:unitProbability} and the
263     knowledge of the system, it is possible to calculate the average
264 tim 2700 value of any desired quantity which depends on the coordinates and
momenta of the system. Even when the dynamics of the real system is
complex, or stochastic, or even discontinuous, the average
properties of the ensemble of possibilities as a whole remain
well defined. For a classical system in thermal equilibrium with its
269     environment, the ensemble average of a mechanical quantity, $\langle
270     A(q , p) \rangle_t$, takes the form of an integral over the phase
271     space of the system,
272 tim 2700 \begin{equation}
273     \langle A(q , p) \rangle_t = \frac{{\int { \ldots \int {A(q,p)\rho
274     (q,p,t)dq_1 } ...dq_f dp_1 } ...dp_f }}{{\int { \ldots \int {\rho
275     (q,p,t)dq_1 } ...dq_f dp_1 } ...dp_f }}
276     \label{introEquation:ensembelAverage}
277     \end{equation}
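
As a toy numerical illustration of
Eq.~\ref{introEquation:ensembelAverage} (a sketch for illustration
only, not taken from any production code), the following Python
fragment evaluates the equilibrium average of the potential energy
of a one-dimensional harmonic oscillator by direct quadrature over
its two-dimensional phase space, using the Boltzmann weight $\rho
\propto e^{-\beta H}$ introduced later in this section; equipartition
predicts the value $k_B T/2$.
\begin{verbatim}
import numpy as np

# One-dimensional harmonic oscillator, H(q,p) = p^2/(2m) + k q^2/2,
# in reduced units chosen purely for illustration.
m, k, kB, T = 1.0, 1.0, 1.0, 1.0
beta = 1.0 / (kB * T)

# Grid covering the region of phase space where exp(-beta*H) is non-negligible.
q = np.linspace(-10.0, 10.0, 801)
p = np.linspace(-10.0, 10.0, 801)
Q, P = np.meshgrid(q, p)

H = P**2 / (2.0 * m) + 0.5 * k * Q**2      # Hamiltonian on the grid
rho = np.exp(-beta * H)                    # unnormalized equilibrium density
A = 0.5 * k * Q**2                         # observable: potential energy

# <A> = integral(A rho dq dp) / integral(rho dq dp), cf. the ensemble average above.
num = np.trapz(np.trapz(A * rho, p, axis=0), q)
den = np.trapz(np.trapz(rho, p, axis=0), q)
print(num / den)                           # ~0.5, i.e. kB*T/2 from equipartition
\end{verbatim}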
278    
There are several different types of ensembles with different
statistical characteristics. As a function of macroscopic
parameters, such as temperature, the partition function
can be used to describe the statistical properties of a system in
thermodynamic equilibrium. The microcanonical ensemble (NVE) is an
ensemble of systems, each of which is thermally isolated and
conserves its energy; its partition function is
286 tim 2700 \begin{equation}
\Omega (N,V,E) = e^{\beta TS}. \label{introEquation:NVEPartition}
288 tim 2700 \end{equation}
A canonical ensemble (NVT) is an ensemble of systems, each of which
290 tim 2700 can share its energy with a large heat reservoir. The distribution
291     of the total energy amongst the possible dynamical states is given
292     by the partition function,
293     \begin{equation}
294 tim 2899 \Omega (N,V,T) = e^{ - \beta A}.
295 tim 2700 \label{introEquation:NVTPartition}
296     \end{equation}
Here, $A$ is the Helmholtz free energy, which is defined as $ A = U -
TS$. Since most experiments are carried out under constant pressure
conditions, the isothermal-isobaric ensemble (NPT) plays a very
important role in molecular simulations. The isothermal-isobaric
ensemble allows the system to exchange energy with a heat bath of
temperature $T$ and to change its volume as well. Its partition
function is given as
\begin{equation}
\Delta (N,P,T) = e^{ - \beta G}.
\label{introEquation:NPTPartition}
\end{equation}
Here, $G = U - TS + PV$ is the Gibbs free energy.
309    
310     \subsection{\label{introSection:liouville}Liouville's theorem}
311    
312 tim 2819 Liouville's theorem is the foundation on which statistical mechanics
313     rests. It describes the time evolution of the phase space
314 tim 2700 distribution function. In order to calculate the rate of change of
$\rho$, we begin from Eq.~\ref{introEquation:deltaN}. If we consider
the two faces of a phase space volume element perpendicular to the
$q_1$ axis, which are located at $q_1$ and $q_1 + \delta q_1$, the
number of phase points leaving per unit time through the face at
$q_1 + \delta q_1$ is given by the expression,
319 tim 2700 \begin{equation}
320     \left( {\rho + \frac{{\partial \rho }}{{\partial q_1 }}\delta q_1 }
321     \right)\left( {\dot q_1 + \frac{{\partial \dot q_1 }}{{\partial q_1
322     }}\delta q_1 } \right)\delta q_2 \ldots \delta q_f \delta p_1
323     \ldots \delta p_f .
324     \end{equation}
Summing the net flow over all $2f$ pairs of faces of the volume
element, we obtain
326     \begin{equation}
327     \frac{{d(\delta N)}}{{dt}} = - \sum\limits_{i = 1}^f {\left[ {\rho
328     \left( {\frac{{\partial \dot q_i }}{{\partial q_i }} +
329     \frac{{\partial \dot p_i }}{{\partial p_i }}} \right) + \left(
330     {\frac{{\partial \rho }}{{\partial q_i }}\dot q_i + \frac{{\partial
331     \rho }}{{\partial p_i }}\dot p_i } \right)} \right]} \delta q_1
332     \ldots \delta q_f \delta p_1 \ldots \delta p_f .
333     \end{equation}
334     Differentiating the equations of motion in Hamiltonian formalism
335     (\ref{introEquation:motionHamiltonianCoordinate},
336     \ref{introEquation:motionHamiltonianMomentum}), we can show,
337     \begin{equation}
338     \sum\limits_i {\left( {\frac{{\partial \dot q_i }}{{\partial q_i }}
339     + \frac{{\partial \dot p_i }}{{\partial p_i }}} \right)} = 0 ,
340     \end{equation}
which cancels the first term on the right hand side. Furthermore,
dividing both sides by $ \delta q_1 \ldots \delta q_f \delta p_1
\ldots \delta p_f $, we can write out Liouville's theorem in a
simple form,
345     \begin{equation}
346     \frac{{\partial \rho }}{{\partial t}} + \sum\limits_{i = 1}^f
347     {\left( {\frac{{\partial \rho }}{{\partial q_i }}\dot q_i +
348     \frac{{\partial \rho }}{{\partial p_i }}\dot p_i } \right)} = 0 .
349     \label{introEquation:liouvilleTheorem}
350     \end{equation}
Liouville's theorem states that the distribution function is
constant along any trajectory in phase space. In classical
statistical mechanics, since the number of members in an ensemble is
huge and constant, we can assume that at equilibrium the local
density does not change explicitly with time,
356 tim 2700 \begin{equation}
357     \frac{{\partial \rho }}{{\partial t}} = 0.
358     \label{introEquation:stationary}
359     \end{equation}
In such a stationary system, the density distribution $\rho$ can be
connected to the Hamiltonian $H$ through the Maxwell-Boltzmann
distribution,
363     \begin{equation}
364     \rho \propto e^{ - \beta H}
365     \label{introEquation:densityAndHamiltonian}
366     \end{equation}
367    
368 tim 2819 \subsubsection{\label{introSection:phaseSpaceConservation}\textbf{Conservation of Phase Space}}
Let us consider a region of phase space,
370     \begin{equation}
371     \delta v = \int { \ldots \int {dq_1 } ...dq_f dp_1 } ..dp_f .
372     \end{equation}
If this region is small enough, the density $\rho$ can be regarded
as uniform over the region of integration. Thus, the number of phase
points inside this region is given by,
376 tim 2702 \begin{equation}
377     \delta N = \rho \delta v = \rho \int { \ldots \int {dq_1 } ...dq_f
378     dp_1 } ..dp_f.
379     \end{equation}
Since the phase points within this region are neither created nor
destroyed as the region moves with the flow, $\delta N$ is conserved
and its total time derivative vanishes,
\begin{equation}
382     \frac{{d(\delta N)}}{{dt}} = \frac{{d\rho }}{{dt}}\delta v + \rho
383     \frac{d}{{dt}}(\delta v) = 0.
384     \end{equation}
385     With the help of stationary assumption
386     (\ref{introEquation:stationary}), we obtain the principle of the
387 tim 2819 \emph{conservation of volume in phase space},
388 tim 2702 \begin{equation}
389     \frac{d}{{dt}}(\delta v) = \frac{d}{{dt}}\int { \ldots \int {dq_1 }
390     ...dq_f dp_1 } ..dp_f = 0.
391     \label{introEquation:volumePreserving}
392     \end{equation}
393    
394 tim 2819 \subsubsection{\label{introSection:liouvilleInOtherForms}\textbf{Liouville's Theorem in Other Forms}}
395 tim 2702
Liouville's theorem can be expressed in a variety of different forms
which are convenient within different contexts. For any two
functions $F$ and $G$ of the coordinates and momenta of a system,
the Poisson bracket $\{F, G\}$ is defined as
400     \begin{equation}
401     \left\{ {F,G} \right\} = \sum\limits_i {\left( {\frac{{\partial
402     F}}{{\partial q_i }}\frac{{\partial G}}{{\partial p_i }} -
403     \frac{{\partial F}}{{\partial p_i }}\frac{{\partial G}}{{\partial
404     q_i }}} \right)}.
405     \label{introEquation:poissonBracket}
406     \end{equation}
Substituting the equations of motion in Hamiltonian formalism
(Eq.~\ref{introEquation:motionHamiltonianCoordinate} and
Eq.~\ref{introEquation:motionHamiltonianMomentum}) into
Eq.~\ref{introEquation:liouvilleTheorem}, we can rewrite
Liouville's theorem using Poisson bracket notation,
412 tim 2700 \begin{equation}
413     \left( {\frac{{\partial \rho }}{{\partial t}}} \right) = - \left\{
414     {\rho ,H} \right\}.
415     \label{introEquation:liouvilleTheromInPoissin}
416     \end{equation}
417     Moreover, the Liouville operator is defined as
418     \begin{equation}
419     iL = \sum\limits_{i = 1}^f {\left( {\frac{{\partial H}}{{\partial
420     p_i }}\frac{\partial }{{\partial q_i }} - \frac{{\partial
421     H}}{{\partial q_i }}\frac{\partial }{{\partial p_i }}} \right)}
422     \label{introEquation:liouvilleOperator}
423     \end{equation}
In terms of the Liouville operator, Liouville's equation can also be
425     expressed as
426     \begin{equation}
427     \left( {\frac{{\partial \rho }}{{\partial t}}} \right) = - iL\rho
428     \label{introEquation:liouvilleTheoremInOperator}
429     \end{equation}
430    
431 tim 2693 \subsection{\label{introSection:ergodic}The Ergodic Hypothesis}
432 tim 2692
Various thermodynamic properties can be calculated from Molecular
Dynamics simulations. By comparing experimental values with the
calculated properties, one can determine the accuracy of the
simulation and the quality of the underlying model. However, both
experiments and computer simulations are usually performed over a
certain time interval, and the measurements are averaged over that
period, which differs from the ensemble-average description of a
many-body system in Statistical Mechanics. Fortunately, the Ergodic
Hypothesis makes a connection between the time average and the
ensemble average. It states that the time average and the average
over the statistical ensemble are identical \cite{Frenkel1996, Leach2001}.
444 tim 2695 \begin{equation}
\langle A(q , p) \rangle_t = \mathop {\lim }\limits_{t \to \infty }
\frac{1}{t}\int\limits_0^t {A(q(t),p(t))dt}  = \int\limits_\Gamma
{A(q,p)\rho (q,p)} dqdp
448 tim 2695 \end{equation}
where $\langle A(q , p) \rangle_t$ is an equilibrium value of a
physical quantity and $\rho (q, p)$ is the equilibrium
distribution function. If an observation is averaged over a
sufficiently long time (longer than the relaxation time), all
accessible microstates in phase space are assumed to be equally
probed, giving a properly weighted statistical average. This allows
the researcher freedom of choice when deciding how best to measure a
given observable. If an ensemble averaged approach is the most
reasonable, Monte Carlo techniques \cite{Metropolis1949} can be
utilized. If the system lends itself to a time averaging
approach, the Molecular Dynamics techniques in
Sec.~\ref{introSection:molecularDynamics} will be the best
choice \cite{Frenkel1996}.
462 tim 2694
463 tim 2697 \section{\label{introSection:geometricIntegratos}Geometric Integrators}
A variety of numerical integrators have been proposed to simulate
the motions of atoms in MD simulations. They usually begin with
initial conditions and move the objects in the direction governed
by the differential equations. However, most of them ignore the
hidden physical laws contained within the equations. Since 1990,
geometric integrators, which preserve various phase-flow invariants
such as the symplectic structure, volume and time reversal symmetry,
have been developed to address this issue \cite{Dullweber1997, McLachlan1998,
Leimkuhler1999}. The velocity Verlet method, which happens to be a
simple example of a symplectic integrator, continues to gain
popularity in the molecular dynamics community. This fact can be
partly explained by its geometric nature.
476 tim 2697
477 tim 2819 \subsection{\label{introSection:symplecticManifold}Symplectic Manifolds}
A \emph{manifold} is an abstract mathematical space. It looks
locally like Euclidean space, but when viewed globally, it may have
a more complicated structure. A good example of a manifold is the
surface of the Earth: it seems to be flat locally, but it is round
when viewed as a whole. A \emph{differentiable manifold} (also known
as a \emph{smooth manifold}) is a manifold on which it is possible
to apply calculus. A \emph{symplectic
manifold} is defined as a pair $(M, \omega)$ which consists of a
\emph{differentiable manifold} $M$ and a closed, non-degenerate,
bilinear symplectic form, $\omega$. A symplectic form on a vector
space $V$ is a function $\omega(x, y)$ which satisfies
$\omega(\lambda_1x_1+\lambda_2x_2, y) = \lambda_1\omega(x_1, y)+
\lambda_2\omega(x_2, y)$, $\omega(x, y) = - \omega(y, x)$ and
$\omega(x, x) = 0$. The cross product operation on vectors is
an example of a symplectic form. One of the motivations to study
\emph{symplectic manifolds} in Hamiltonian Mechanics is that a
symplectic manifold can represent all possible configurations of the
system, and the phase space of the system can be described by its
cotangent bundle. Every symplectic manifold is even dimensional; for
instance, in Hamilton's equations, coordinates and momenta always
appear in pairs.
499 tim 2697
500 tim 2698 \subsection{\label{introSection:ODE}Ordinary Differential Equations}
501 tim 2697
502 tim 2819 For an ordinary differential system defined as
503 tim 2698 \begin{equation}
504     \dot x = f(x)
505     \end{equation}
where $x = (q,p)^T$, the system is a canonical Hamiltonian system if
\begin{equation}
f(x) = J\nabla _x H(x).
\end{equation}
Here $H = H (q, p)$ is the Hamiltonian function and $J$ is the skew-symmetric
511     matrix
512     \begin{equation}
513     J = \left( {\begin{array}{*{20}c}
514     0 & I \\
515     { - I} & 0 \\
516     \end{array}} \right)
517     \label{introEquation:canonicalMatrix}
518     \end{equation}
where $I$ is the identity matrix. Using this notation, the
Hamiltonian system can be rewritten as
\begin{equation}
\frac{d}{{dt}}x = J\nabla _x H(x).
\label{introEquation:compactHamiltonian}
\end{equation}
In this case, $f$ is called a \emph{Hamiltonian vector field}.
Another generalization of Hamiltonian dynamics is Poisson
Dynamics \cite{Olver1986},
\begin{equation}
\dot x = J(x)\nabla _x H \label{introEquation:poissonHamiltonian}
\end{equation}
where the most obvious change is that the matrix $J$ now depends on $x$.
531    
532 tim 2702 \subsection{\label{introSection:exactFlow}Exact Flow}
533    
534 tim 2698 Let $x(t)$ be the exact solution of the ODE system,
535     \begin{equation}
536     \frac{{dx}}{{dt}} = f(x) \label{introEquation:ODE}
537     \end{equation}
The exact flow (solution) $\varphi_\tau$ is defined by
539     \[
540     x(t+\tau) =\varphi_\tau(x(t))
541     \]
542     where $\tau$ is a fixed time step and $\varphi$ is a map from phase
543 tim 2702 space to itself. The flow has the continuous group property,
544 tim 2698 \begin{equation}
545 tim 2702 \varphi _{\tau _1 } \circ \varphi _{\tau _2 } = \varphi _{\tau _1
546     + \tau _2 } .
547     \end{equation}
548     In particular,
549     \begin{equation}
550     \varphi _\tau \circ \varphi _{ - \tau } = I
551     \end{equation}
552     Therefore, the exact flow is self-adjoint,
553     \begin{equation}
554     \varphi _\tau = \varphi _{ - \tau }^{ - 1}.
555     \end{equation}
The exact flow can also be written in terms of an operator,
557     \begin{equation}
558     \varphi _\tau (x) = e^{\tau \sum\limits_i {f_i (x)\frac{\partial
559     }{{\partial x_i }}} } (x) \equiv \exp (\tau f)(x).
560     \label{introEquation:exponentialOperator}
561     \end{equation}
562    
563     In most cases, it is not easy to find the exact flow $\varphi_\tau$.
Instead, we use an approximate map, $\psi_\tau$, which is usually
called an integrator. The order of an integrator $\psi_\tau$ is $p$
if the Taylor series of $\psi_\tau$ agrees with that of the exact
flow to order $p$,
567     \begin{equation}
568 tim 2872 \psi_\tau(x) = x + \tau f(x) + O(\tau^{p+1})
569 tim 2698 \end{equation}
570    
571 tim 2702 \subsection{\label{introSection:geometricProperties}Geometric Properties}
572    
573 tim 2872 The hidden geometric properties\cite{Budd1999, Marsden1998} of an
574     ODE and its flow play important roles in numerical studies. Many of
575     them can be found in systems which occur naturally in applications.
Let $\varphi$ be the flow of a Hamiltonian vector field; $\varphi$ is
a \emph{symplectic} flow if it satisfies,
578 tim 2698 \begin{equation}
579 tim 2703 {\varphi '}^T J \varphi ' = J.
580 tim 2698 \end{equation}
581     According to Liouville's theorem, the symplectic volume is invariant
582     under a Hamiltonian flow, which is the basis for classical
583 tim 2699 statistical mechanics. Furthermore, the flow of a Hamiltonian vector
584     field on a symplectic manifold can be shown to be a
585     symplectomorphism. As to the Poisson system,
586 tim 2698 \begin{equation}
587 tim 2703 {\varphi '}^T J \varphi ' = J \circ \varphi
588 tim 2698 \end{equation}
589 tim 2898 is the property that must be preserved by the integrator. It is
590     possible to construct a \emph{volume-preserving} flow for a source
591     free ODE ($ \nabla \cdot f = 0 $), if the flow satisfies $ \det
592     d\varphi = 1$. One can show easily that a symplectic flow will be
593     volume-preserving. Changing the variables $y = h(x)$ in an ODE
594 tim 2872 (Eq.~\ref{introEquation:ODE}) will result in a new system,
595 tim 2698 \[
596     \dot y = \tilde f(y) = ((dh \cdot f)h^{ - 1} )(y).
597     \]
The vector field $f$ has reversing symmetry $h$ if $f = - \tilde f$.
599     In other words, the flow of this vector field is reversible if and
600 tim 2898 only if $ h \circ \varphi ^{ - 1} = \varphi \circ h $. A
\emph{first integral}, or conserved quantity, of a general
differential equation is a function $ G:R^{2d} \to R^d $ which is
603     constant for all solutions of the ODE $\frac{{dx}}{{dt}} = f(x)$ ,
604     \[
605     \frac{{dG(x(t))}}{{dt}} = 0.
606     \]
Using the chain rule, one may obtain,
\[
\sum\limits_i {\frac{{\partial G}}{{\partial x_i }}} f_i (x) = f \cdot \nabla G = 0,
\]
which is the condition for conserving a \emph{first integral}. For a
612     canonical Hamiltonian system, the time evolution of an arbitrary
613     smooth function $G$ is given by,
614 tim 2789 \begin{eqnarray}
\frac{{dG(x(t))}}{{dt}} & = & [\nabla _x G(x(t))]^T \dot x(t) \nonumber \\
& = & [\nabla _x G(x(t))]^T J\nabla _x H(x(t)).
\label{introEquation:firstIntegral1}
618 tim 2789 \end{eqnarray}
Using Poisson bracket notation, Eq.~\ref{introEquation:firstIntegral1}
can be rewritten as
621     \[
622     \frac{d}{{dt}}G(x(t)) = \left\{ {G,H} \right\}(x(t)).
623     \]
624     Therefore, the sufficient condition for $G$ to be the \emph{first
625     integral} of a Hamiltonian system is
626     \[
627     \left\{ {G,H} \right\} = 0.
628     \]
As is well known, the Hamiltonian (or energy) $H$ of a Hamiltonian
system is a \emph{first integral}, which is due to the fact that
$\{ H,H\} = 0$. When designing any numerical method, one should always try to
preserve the structural properties of the original ODE and its flow.
633    
634 tim 2699 \subsection{\label{introSection:constructionSymplectic}Construction of Symplectic Methods}
A number of well established and very effective numerical methods
have been successful precisely because of their symplectic nature,
even though this fact was not recognized when they were first
constructed. The most famous example is the Verlet-leapfrog method
in molecular dynamics. In general, symplectic integrators can be
constructed using one of four different methods:
641     \begin{enumerate}
642     \item Generating functions
643     \item Variational methods
644     \item Runge-Kutta methods
645     \item Splitting methods
646     \end{enumerate}
647 tim 2698
Generating functions \cite{Channell1990} tend to lead to methods
which are cumbersome and difficult to use. In dissipative systems,
variational methods can capture the decay of energy
accurately \cite{Kane2000}. Because of their geometrically unstable
nature in the presence of non-Hamiltonian perturbations, ordinary
implicit Runge-Kutta methods are not suitable for Hamiltonian
systems. Recently, various high-order explicit Runge-Kutta methods
\cite{Owren1992,Chen2003} have been developed to overcome this
instability. However, due to the computational penalty involved in
implementing the Runge-Kutta methods, they have not attracted much
attention from the Molecular Dynamics community. Instead, splitting
methods have been widely accepted since they exploit natural
decompositions of the system \cite{Tuckerman1992, McLachlan1998}.
661 tim 2702
662 tim 2819 \subsubsection{\label{introSection:splittingMethod}\textbf{Splitting Methods}}
663 tim 2702
664     The main idea behind splitting methods is to decompose the discrete
665     $\varphi_h$ as a composition of simpler flows,
666 tim 2699 \begin{equation}
667     \varphi _h = \varphi _{h_1 } \circ \varphi _{h_2 } \ldots \circ
668     \varphi _{h_n }
669     \label{introEquation:FlowDecomposition}
670     \end{equation}
where each of the sub-flows is chosen such that each represents a
simpler integration of the system. Suppose that a Hamiltonian system
673     takes the form,
674 tim 2702 \[
675     H = H_1 + H_2.
676     \]
677     Here, $H_1$ and $H_2$ may represent different physical processes of
678     the system. For instance, they may relate to kinetic and potential
679     energy respectively, which is a natural decomposition of the
680     problem. If $H_1$ and $H_2$ can be integrated using exact flows
681     $\varphi_1(t)$ and $\varphi_2(t)$, respectively, a simple first
682 tim 2819 order expression is then given by the Lie-Trotter formula
683 tim 2699 \begin{equation}
684 tim 2702 \varphi _h = \varphi _{1,h} \circ \varphi _{2,h},
685     \label{introEquation:firstOrderSplitting}
686     \end{equation}
687     where $\varphi _h$ is the result of applying the corresponding
688     continuous $\varphi _i$ over a time $h$. By definition, as
689     $\varphi_i(t)$ is the exact solution of a Hamiltonian system, it
690     must follow that each operator $\varphi_i(t)$ is a symplectic map.
691     It is easy to show that any composition of symplectic flows yields a
692     symplectic map,
693     \begin{equation}
694 tim 2699 (\varphi '\phi ')^T J\varphi '\phi ' = \phi '^T \varphi '^T J\varphi
695 tim 2702 '\phi ' = \phi '^T J\phi ' = J,
696 tim 2699 \label{introEquation:SymplecticFlowComposition}
697     \end{equation}
where $\varphi$ and $\phi$ are both symplectic maps. Thus operator
699     splitting in this context automatically generates a symplectic map.
700 tim 2699
The Lie-Trotter splitting (\ref{introEquation:firstOrderSplitting})
introduces local errors proportional to $h^2$, while the Strang
splitting gives a second-order decomposition,
704     \begin{equation}
705     \varphi _h = \varphi _{1,h/2} \circ \varphi _{2,h} \circ \varphi
706 tim 2706 _{1,h/2} , \label{introEquation:secondOrderSplitting}
707 tim 2702 \end{equation}
which has a local error proportional to $h^3$. The Strang
splitting's popularity in the molecular simulation community is
attributable to its symmetric property,
711 tim 2702 \begin{equation}
712     \varphi _h^{ - 1} = \varphi _{ - h}.
713 tim 2703 \label{introEquation:timeReversible}
714 tim 2882 \end{equation}
715 tim 2702
716 tim 2872 \subsubsection{\label{introSection:exampleSplittingMethod}\textbf{Examples of the Splitting Method}}
717 tim 2702 The classical equation for a system consisting of interacting
718     particles can be written in Hamiltonian form,
719     \[
720     H = T + V
721     \]
722     where $T$ is the kinetic energy and $V$ is the potential energy.
723 tim 2872 Setting $H_1 = T, H_2 = V$ and applying the Strang splitting, one
724 tim 2702 obtains the following:
725     \begin{align}
726     q(\Delta t) &= q(0) + \dot{q}(0)\Delta t +
727     \frac{F[q(0)]}{m}\frac{\Delta t^2}{2}, %
728     \label{introEquation:Lp10a} \\%
729     %
730     \dot{q}(\Delta t) &= \dot{q}(0) + \frac{\Delta t}{2m}
731     \biggl [F[q(0)] + F[q(\Delta t)] \biggr]. %
732     \label{introEquation:Lp10b}
733     \end{align}
where $F(t)$ is the force at time $t$. This integration scheme is
known as \emph{velocity verlet}, which is
symplectic (\ref{introEquation:SymplecticFlowComposition}),
time-reversible (\ref{introEquation:timeReversible}) and
volume-preserving (\ref{introEquation:volumePreserving}). These
geometric properties contribute to its long-time stability and its
popularity in the community. However, the most commonly used form of
the velocity verlet integration scheme is written as below,
742     \begin{align}
743     \dot{q}\biggl (\frac{\Delta t}{2}\biggr ) &=
744     \dot{q}(0) + \frac{\Delta t}{2m}\, F[q(0)], \label{introEquation:Lp9a}\\%
745     %
746     q(\Delta t) &= q(0) + \Delta t\, \dot{q}\biggl (\frac{\Delta t}{2}\biggr ),%
747     \label{introEquation:Lp9b}\\%
748     %
749     \dot{q}(\Delta t) &= \dot{q}\biggl (\frac{\Delta t}{2}\biggr ) +
750 tim 2872 \frac{\Delta t}{2m}\, F[q(t)]. \label{introEquation:Lp9c}
751 tim 2702 \end{align}
752     From the preceding splitting, one can see that the integration of
753     the equations of motion would follow:
754     \begin{enumerate}
\item Calculate the velocities at the half step, $\frac{\Delta t}{2}$, from the forces calculated at the initial position.
756    
757     \item Use the half step velocities to move positions one whole step, $\Delta t$.
758    
759 tim 2872 \item Evaluate the forces at the new positions, $\mathbf{q}(\Delta t)$, and use the new forces to complete the velocity move.
760 tim 2702
761     \item Repeat from step 1 with the new position, velocities, and forces assuming the roles of the initial values.
762     \end{enumerate}
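
A minimal, self-contained sketch of these steps (for illustration
only; the force routine and parameters are arbitrary and not taken
from any production code) can be written in Python as:
\begin{verbatim}
def force(q, k=1.0):
    """Force for a harmonic potential V(q) = 0.5*k*q^2 (illustrative only)."""
    return -k * q

def velocity_verlet(q, v, m, dt, nsteps):
    """Advance positions and velocities by nsteps of velocity verlet."""
    f = force(q)
    for _ in range(nsteps):
        v += 0.5 * dt * f / m      # step 1: half-step velocity update
        q += dt * v                # step 2: full-step position update
        f = force(q)               # step 3: forces at the new positions ...
        v += 0.5 * dt * f / m      # ... complete the velocity update
    return q, v

# The total energy remains close to its initial value over long runs.
q, v, m, dt = 1.0, 0.0, 1.0, 0.01
q, v = velocity_verlet(q, v, m, dt, nsteps=10000)
print(0.5 * m * v**2 + 0.5 * q**2)   # stays near 0.5
\end{verbatim}
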
By simply switching the order of the propagators in the splitting, a
new integrator, the \emph{position verlet} integrator, can be
generated,
766 tim 2702 \begin{align}
\dot q(\Delta t) &= \dot q(0) + \frac{{\Delta t}}{m}F\left[ {q(0) +
\frac{{\Delta t}}{2}\dot q(0)} \right], %
769     \label{introEquation:positionVerlet1} \\%
770     %
771 tim 2703 q(\Delta t) &= q(0) + \frac{{\Delta t}}{2}\left[ {\dot q(0) + \dot
772 tim 2702 q(\Delta t)} \right]. %
773 tim 2719 \label{introEquation:positionVerlet2}
774 tim 2702 \end{align}
775    
776 tim 2819 \subsubsection{\label{introSection:errorAnalysis}\textbf{Error Analysis and Higher Order Methods}}
777 tim 2702
The Baker-Campbell-Hausdorff formula can be used to determine the
local error of a splitting method in terms of the commutators of the
operators (\ref{introEquation:exponentialOperator}) associated with
the sub-flows. For operators $hX$ and $hY$ which are associated with
$\varphi_1(t)$ and $\varphi_2(t)$ respectively, we have
783 tim 2702 \begin{equation}
\exp (hX)\exp (hY) = \exp (hZ)
785     \end{equation}
786     where
787     \begin{equation}
hZ = hX + hY + \frac{{h^2 }}{2}[X,Y] + \frac{{h^3 }}{{12}}\left(
{[X,[X,Y]] + [Y,[Y,X]]} \right) + \ldots .
790     \end{equation}
791     Here, $[X,Y]$ is the commutators of operator $X$ and $Y$ given by
792     \[
793     [X,Y] = XY - YX .
794     \]
795 tim 2872 Applying the Baker-Campbell-Hausdorff formula\cite{Varadarajan1974}
to the Strang splitting, we can obtain
797 tim 2779 \begin{eqnarray*}
798 tim 2778 \exp (h X/2)\exp (h Y)\exp (h X/2) & = & \exp (h X + h Y + h^2 [X,Y]/4 + h^2 [Y,X]/4 \\
799     & & \mbox{} + h^2 [X,X]/8 + h^2 [Y,Y]/8 \\
800 tim 2779 & & \mbox{} + h^3 [Y,[Y,X]]/12 - h^3[X,[X,Y]]/24 + \ldots )
801     \end{eqnarray*}
Since $[X,Y] + [Y,X] = 0$ and $[X,X] = 0$, the dominant local
error of the Strang splitting is proportional to $h^3$. The same
procedure can be applied to a general splitting of the form
805 tim 2702 \begin{equation}
806     \varphi _{b_m h}^2 \circ \varphi _{a_m h}^1 \circ \varphi _{b_{m -
807     1} h}^2 \circ \ldots \circ \varphi _{a_1 h}^1 .
808     \end{equation}
A careful choice of the coefficients $a_1 \ldots b_m$ will lead to higher
810     order methods. Yoshida proposed an elegant way to compose higher
811 tim 2789 order methods based on symmetric splitting\cite{Yoshida1990}. Given
812     a symmetric second order base method $ \varphi _h^{(2)} $, a
813     fourth-order symmetric method can be constructed by composing,
814 tim 2702 \[
815     \varphi _h^{(4)} = \varphi _{\alpha h}^{(2)} \circ \varphi _{\beta
816     h}^{(2)} \circ \varphi _{\alpha h}^{(2)}
817     \]
where $ \alpha = \frac{1}{{2 - 2^{1/3} }}$ and $ \beta
= - \frac{{2^{1/3} }}{{2 - 2^{1/3} }}$. Moreover, a symmetric
820     integrator $ \varphi _h^{(2n + 2)}$ can be composed by
821     \begin{equation}
822     \varphi _h^{(2n + 2)} = \varphi _{\alpha h}^{(2n)} \circ \varphi
823 tim 2872 _{\beta h}^{(2n)} \circ \varphi _{\alpha h}^{(2n)},
824 tim 2702 \end{equation}
825 tim 2872 if the weights are chosen as
826 tim 2702 \[
\alpha = \frac{1}{{2 - 2^{1/(2n + 1)} }},\qquad \beta = -
\frac{{2^{1/(2n + 1)} }}{{2 - 2^{1/(2n + 1)} }} .
829     \]
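
The composition can be sketched as follows (illustration only; the
base method shown is a velocity-verlet step for a harmonic oscillator
in arbitrary units, and the weights are those quoted above for $n = 1$):
\begin{verbatim}
def strang_step(q, p, h, m=1.0, k=1.0):
    """Symmetric second-order (velocity verlet) step for H = p^2/(2m) + k*q^2/2."""
    p -= 0.5 * h * k * q      # half kick
    q += h * p / m            # drift
    p -= 0.5 * h * k * q      # half kick
    return q, p

def yoshida4_step(q, p, h):
    """Fourth-order step composed of three second-order steps (n = 1)."""
    alpha = 1.0 / (2.0 - 2.0**(1.0 / 3.0))                # outer weight
    beta = -2.0**(1.0 / 3.0) / (2.0 - 2.0**(1.0 / 3.0))   # inner, negative weight
    q, p = strang_step(q, p, alpha * h)
    q, p = strang_step(q, p, beta * h)
    q, p = strang_step(q, p, alpha * h)
    return q, p

# Energy drift shrinks by roughly 2^4 when the time step is halved.
q, p = 1.0, 0.0
for _ in range(2000):
    q, p = yoshida4_step(q, p, 0.05)
print(0.5 * p**2 + 0.5 * q**2)   # remains close to the initial energy 0.5
\end{verbatim}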
830    
831 tim 2694 \section{\label{introSection:molecularDynamics}Molecular Dynamics}
832    
As one of the principal tools of molecular modeling, Molecular
Dynamics has proven to be a powerful method for studying the functions
of biological systems, providing structural, thermodynamic and
dynamical information. The basic idea of molecular dynamics is that
macroscopic properties are related to microscopic behavior, and
microscopic behavior can be calculated from the trajectories in
simulations. For instance, the instantaneous temperature of a
Hamiltonian system of $N$ particles can be measured by
841     \[
842 tim 2725 T = \sum\limits_{i = 1}^N {\frac{{m_i v_i^2 }}{{fk_B }}}
843 tim 2720 \]
where $m_i$ and $v_i$ are the mass and velocity of the $i$th particle
respectively, $f$ is the number of degrees of freedom, and $k_B$ is
the Boltzmann constant.
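
As a small illustration of this relation (a sketch only; the arrays,
units and the assumption of $f = 3N$ unconstrained degrees of freedom
are purely illustrative), the instantaneous temperature of a
configuration can be computed from the velocities as follows:
\begin{verbatim}
import numpy as np

def instantaneous_temperature(masses, velocities, kB=1.0):
    """Kinetic temperature T = sum_i m_i v_i^2 / (f kB), taking f = 3N here."""
    kinetic = np.sum(masses[:, None] * velocities**2)   # sum of m_i |v_i|^2
    dof = 3 * len(masses)                               # unconstrained 3N DOF assumed
    return kinetic / (dof * kB)

# Example: 100 particles with unit masses and random velocities in reduced units.
rng = np.random.default_rng(0)
m = np.ones(100)
v = rng.normal(scale=1.0, size=(100, 3))
print(instantaneous_temperature(m, v))   # ~1 for unit-variance velocity components
\end{verbatim}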
847 tim 2694
848 tim 2720 A typical molecular dynamics run consists of three essential steps:
849     \begin{enumerate}
850     \item Initialization
851     \begin{enumerate}
852     \item Preliminary preparation
853     \item Minimization
854     \item Heating
855     \item Equilibration
856     \end{enumerate}
857     \item Production
858     \item Analysis
859     \end{enumerate}
860     These three individual steps will be covered in the following
861     sections. Sec.~\ref{introSec:initialSystemSettings} deals with the
862 tim 2801 initialization of a simulation. Sec.~\ref{introSection:production}
will discuss issues in the production run.
864 tim 2801 Sec.~\ref{introSection:Analysis} provides the theoretical tools for
865     trajectory analysis.
866 tim 2719
867 tim 2720 \subsection{\label{introSec:initialSystemSettings}Initialization}
868 tim 2719
869 tim 2819 \subsubsection{\textbf{Preliminary preparation}}
870 tim 2719
871 tim 2720 When selecting the starting structure of a molecule for molecular
simulation, one may retrieve its Cartesian coordinates from public
databases, such as the RCSB Protein Data Bank. Although
thousands of crystal structures of molecules are discovered every
year, many more remain unknown due to the difficulties of
purification and crystallization. Even for molecules with known
structures, some important information may be missing. For example, a
missing hydrogen atom which acts as a donor in hydrogen bonding must
be added. Moreover, in order to include electrostatic interactions,
one may need to specify the partial charges for individual atoms.
Under some circumstances, we may even need to prepare the system in
a special configuration. For instance, when studying transport
phenomena in membrane systems, we may prepare the lipids in a
884     bilayer structure instead of placing lipids randomly in solvent,
885     since we are not interested in the slow self-aggregation process.
886 tim 2694
887 tim 2819 \subsubsection{\textbf{Minimization}}
888 tim 2705
It is quite possible that some of the molecules in the system
resulting from the preliminary preparation may overlap with each
other. This close proximity leads to a high initial potential energy
which consequently jeopardizes any molecular dynamics simulation. To
remove these steric overlaps, one typically performs energy
minimization to find a more reasonable conformation. Several energy
minimization methods have been developed to explore the energy
surface and to locate the local minimum. While converging slowly
near the minimum, the steepest descent method is extremely robust when
systems are strongly anharmonic. Thus, it is often used to refine
structures from crystallographic data. Relying on the gradient or
Hessian, advanced methods like Newton-Raphson converge rapidly to a
local minimum, but become unstable if the energy surface is far from
quadratic. Another factor that must be taken into account when
choosing an energy minimization method is the size of the system.
Steepest descent and conjugate gradient methods can deal with models
of any size. Because of the limits on computer memory to store the
Hessian matrix and the computing power needed to diagonalize these
matrices, most Newton-Raphson methods cannot be used with very
large systems.
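
A crude sketch of the steepest descent idea is given below
(illustration only; the fixed step size and the toy harmonic force
are assumptions made for brevity, and production minimizers use much
more careful step-size control and convergence criteria):
\begin{verbatim}
import numpy as np

def steepest_descent(positions, force_func, step=0.1, max_iter=1000, ftol=1e-6):
    """Move repeatedly along the force (the negative potential gradient)."""
    x = np.array(positions, dtype=float)
    for _ in range(max_iter):
        f = force_func(x)                 # force = -gradient of the potential
        if np.max(np.abs(f)) < ftol:      # stop once the forces are essentially zero
            break
        x += step * f                     # displace along the downhill direction
    return x

# Example: a single particle in a three-dimensional harmonic well at the origin.
harmonic_force = lambda x: -x
print(steepest_descent([1.0, -2.0, 0.5], harmonic_force))   # -> approximately [0, 0, 0]
\end{verbatim}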
909 tim 2694
910 tim 2819 \subsubsection{\textbf{Heating}}
911 tim 2720
Typically, heating is performed by assigning random velocities
according to a Maxwell-Boltzmann distribution for a desired
temperature. Beginning at a lower temperature and gradually
increasing the temperature by assigning larger random velocities, we
eventually set the temperature of the system to the final
temperature at which the simulation will be conducted. During the
heating phase, we should also keep the system from drifting or
rotating as a whole. To do this, the net linear momentum and angular
momentum of the system are shifted to zero after each resampling
from the Maxwell-Boltzmann distribution.
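
A minimal sketch of this procedure (illustration only; reduced units
are assumed and the removal of the net angular momentum is omitted) is:
\begin{verbatim}
import numpy as np

def initialize_velocities(masses, T, kB=1.0, seed=None):
    """Sample velocities from a Maxwell-Boltzmann distribution at temperature T
    and remove the net linear momentum (angular momentum removal omitted here)."""
    rng = np.random.default_rng(seed)
    m = np.asarray(masses, dtype=float)
    sigma = np.sqrt(kB * T / m)                        # width for each particle
    v = rng.normal(size=(len(m), 3)) * sigma[:, None]
    v -= np.sum(m[:, None] * v, axis=0) / np.sum(m)    # zero the center-of-mass velocity
    return v

v = initialize_velocities(np.ones(500), T=2.0, seed=1)
print(np.sum(v, axis=0))   # net momentum is (numerically) zero for equal masses
\end{verbatim}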
922 tim 2720
923 tim 2819 \subsubsection{\textbf{Equilibration}}
924 tim 2720
925     The purpose of equilibration is to allow the system to evolve
926     spontaneously for a period of time and reach equilibrium. The
procedure is continued until various statistical properties, such as
the temperature, pressure, energy, volume and other structural
properties, become independent of time. Strictly
930     speaking, minimization and heating are not necessary, provided the
931     equilibration process is long enough. However, these steps can serve
932     as a means to arrive at an equilibrated structure in an effective
933     way.
934    
935     \subsection{\label{introSection:production}Production}
936    
937 tim 2872 The production run is the most important step of the simulation, in
938 tim 2725 which the equilibrated structure is used as a starting point and the
939     motions of the molecules are collected for later analysis. In order
940     to capture the macroscopic properties of the system, the molecular
941 tim 2872 dynamics simulation must be performed by sampling correctly and
942     efficiently from the relevant thermodynamic ensemble.
943 tim 2720
The most expensive part of a molecular dynamics simulation is the
calculation of non-bonded forces, such as the van der Waals and
Coulombic forces. For a system of $N$ particles, the
complexity of the algorithm for pair-wise interactions is $O(N^2 )$,
which makes large simulations prohibitive in the absence of any
algorithmic tricks.
950 tim 2720
A natural approach to avoid system size issues is to represent the
bulk behavior by a finite number of particles. However, this
approach will suffer from surface effects at the edges of the
simulation. To offset this, \textit{periodic boundary conditions}
(see Fig.~\ref{introFig:pbc}) are used to simulate bulk
properties with a relatively small number of particles. In this
method, the simulation box is replicated throughout space to form an
infinite lattice. During the simulation, when a particle moves in
the primary cell, its images in the other cells move in exactly the
same direction with exactly the same orientation. Thus, as a particle
leaves the primary cell, one of its images will enter through the
opposite face.
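
The wrapping of coordinates into the primary cell and the closely
related minimum image convention can be sketched as follows
(illustration only; an orthorhombic box is assumed):
\begin{verbatim}
import numpy as np

def wrap(positions, box):
    """Wrap coordinates back into the primary cell of an orthorhombic box."""
    return positions - box * np.floor(positions / box)

def minimum_image(r_ij, box):
    """Apply the minimum image convention to a displacement vector r_i - r_j."""
    return r_ij - box * np.round(r_ij / box)

box = np.array([10.0, 10.0, 10.0])
r_i, r_j = np.array([9.8, 0.2, 5.0]), np.array([0.1, 9.9, 5.0])
print(wrap(np.array([10.3, -0.4, 5.0]), box))   # -> [0.3, 9.6, 5.0]
print(minimum_image(r_i - r_j, box))            # -> [-0.3, 0.3, 0.0]
\end{verbatim}
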
963 tim 2789 \begin{figure}
964     \centering
965     \includegraphics[width=\linewidth]{pbc.eps}
966     \caption[An illustration of periodic boundary conditions]{A 2-D
967     illustration of periodic boundary conditions. As one particle leaves
968     the left of the simulation box, an image of it enters the right.}
969     \label{introFig:pbc}
970     \end{figure}
971 tim 2725
972     %cutoff and minimum image convention
973     Another important technique to improve the efficiency of force
evaluation is to apply a spherical cutoff where particles farther than
a predetermined distance are not included in the calculation
\cite{Frenkel1996}. The use of a cutoff radius will cause a
discontinuity in the potential energy curve. Fortunately, one can
shift a simple radial potential to ensure that the potential curve
goes smoothly to zero at the cutoff radius. The cutoff strategy works
well for the Lennard-Jones interaction because of its short range
981     nature. However, simply truncating the electrostatic interaction
982     with the use of cutoffs has been shown to lead to severe artifacts
983     in simulations. The Ewald summation, in which the slowly decaying
984     Coulomb potential is transformed into direct and reciprocal sums
985     with rapid and absolute convergence, has proved to minimize the
periodicity artifacts in liquid simulations. Taking advantage
of the fast Fourier transform (FFT) for calculating discrete Fourier
transforms, the particle mesh-based
methods \cite{Hockney1981,Shimada1993, Luty1994} are accelerated from
$O(N^{3/2})$ to $O(N\log N)$. An alternative approach is the
\emph{fast multipole method} \cite{Greengard1987, Greengard1994},
which treats Coulombic interactions exactly at short range, and
approximates the potential at long range through a multipolar
expansion. In spite of their wide acceptance in the molecular
simulation community, these two methods are difficult to implement
correctly and efficiently. Instead, we use a damped and
charge-neutralized Coulomb potential method developed by Wolf and
his coworkers \cite{Wolf1999}. The shifted Coulomb potential for
particle $i$ and particle $j$ at distance $r_{ij}$ is given by:
1000 tim 2725 \begin{equation}
1001     V(r_{ij})= \frac{q_i q_j \textrm{erfc}(\alpha
1002     r_{ij})}{r_{ij}}-\lim_{r_{ij}\rightarrow
1003     R_\textrm{c}}\left\{\frac{q_iq_j \textrm{erfc}(\alpha
1004     r_{ij})}{r_{ij}}\right\}. \label{introEquation:shiftedCoulomb}
1005     \end{equation}
where $\alpha$ is the convergence parameter. Due to the lack of
inherent periodicity and its rapid convergence, this method is
extremely efficient and easy to implement.
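
A direct transcription of Eq.~\ref{introEquation:shiftedCoulomb}
(a sketch only, written in units where the Coulomb prefactor is
unity and with illustrative parameter values) is:
\begin{verbatim}
from math import erfc

def shifted_coulomb(qi, qj, rij, alpha, rcut):
    """Damped, shifted Coulomb pair energy of the equation above; zero beyond the cutoff."""
    if rij >= rcut:
        return 0.0
    return qi * qj * (erfc(alpha * rij) / rij - erfc(alpha * rcut) / rcut)

# The pair energy goes continuously to zero as rij approaches the cutoff radius.
print(shifted_coulomb(1.0, -1.0, 2.5, alpha=0.2, rcut=9.0))
print(shifted_coulomb(1.0, -1.0, 8.99, alpha=0.2, rcut=9.0))   # close to zero
\end{verbatim}
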
1009 tim 2789 \begin{figure}
1010     \centering
1011     \includegraphics[width=\linewidth]{shifted_coulomb.eps}
1012     \caption[An illustration of shifted Coulomb potential]{An
1013     illustration of shifted Coulomb potential.}
1014     \label{introFigure:shiftedCoulomb}
1015     \end{figure}
1016 tim 2725
1017     %multiple time step
1018    
1019 tim 2720 \subsection{\label{introSection:Analysis} Analysis}
1020    
Recently, advanced visualization techniques have been applied to
monitor the motions of molecules. Although the dynamics of the
system can be described qualitatively from animation, quantitative
trajectory analysis is more useful. According to the principles of
1025     Statistical Mechanics, Sec.~\ref{introSection:statisticalMechanics},
1026     one can compute thermodynamic properties, analyze fluctuations of
1027     structural parameters, and investigate time-dependent processes of
1028     the molecule from the trajectories.
1029 tim 2721
1030 tim 2872 \subsubsection{\label{introSection:thermodynamicsProperties}\textbf{Thermodynamic Properties}}
1031 tim 2721
1032 tim 2872 Thermodynamic properties, which can be expressed in terms of some
1033 tim 2725 function of the coordinates and momenta of all particles in the
1034     system, can be directly computed from molecular dynamics. The usual
way to measure the pressure is based on the virial theorem of
Clausius, which states that the virial is equal to $-3Nk_BT$. For a
system with forces between particles, the total virial, $W$, contains
contributions from both the external pressure and the interactions
between the particles:
1040     \[
1041     W = - 3PV + \left\langle {\sum\limits_{i < j} {r{}_{ij} \cdot
1042     f_{ij} } } \right\rangle
1043     \]
where $f_{ij}$ is the force exerted on particle $i$ by particle $j$
at a distance $r_{ij}$. Thus, the expression for the pressure is given
1046     by:
1047     \begin{equation}
P = \frac{{Nk_B T}}{V} + \frac{1}{{3V}}\left\langle {\sum\limits_{i
< j} {r{}_{ij} \cdot f_{ij} } } \right\rangle.
1050     \end{equation}
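As a sketch of how this estimator might be evaluated for a small,
non-periodic configuration (the pairwise force routine and the sign
convention that $f_{ij}$ is the force exerted on particle $i$ by
particle $j$ are assumptions made only for illustration):
\begin{verbatim}
# A schematic evaluation of the virial expression for the pressure;
# `positions` is an (N, 3) array and `pair_force(r_ij)` is an assumed
# user-supplied routine returning the force on i due to j.
import numpy as np

def virial_pressure(positions, pair_force, T, V, k_B=1.0):
    N = len(positions)
    virial = 0.0
    for i in range(N - 1):
        for j in range(i + 1, N):
            r_ij = positions[i] - positions[j]
            virial += np.dot(r_ij, pair_force(r_ij))   # r_ij . f_ij
    return N * k_B * T / V + virial / (3.0 * V)
\end{verbatim}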
1051    
1052 tim 2819 \subsubsection{\label{introSection:structuralProperties}\textbf{Structural Properties}}
1053 tim 2721
Structural properties of a simple fluid can be described by a set of
distribution functions. Among these functions, the \emph{pair
distribution function}, also known as the \emph{radial distribution
function}, is of fundamental importance to liquid theory.
Experimentally, the pair distribution function can be obtained by
Fourier transforming raw data from a series of neutron diffraction
experiments and integrating over the structure factor
\cite{Powles1973}. The experimental results can serve as a criterion
to justify the correctness of a liquid model. Moreover, various
equilibrium thermodynamic and structural properties can also be
expressed in terms of the radial distribution function
\cite{Allen1987}. The pair distribution function $g(r)$ gives the
probability that a particle $i$ will be located at a distance $r$
from another particle $j$ in the system
1068     \[
1069     g(r) = \frac{V}{{N^2 }}\left\langle {\sum\limits_i {\sum\limits_{j
1070 tim 2874 \ne i} {\delta (r - r_{ij} )} } } \right\rangle = \frac{\rho
1071 tim 2872 (r)}{\rho}.
1072 tim 2721 \]
Note that the delta function can be replaced by a histogram in
computer simulations. Peaks in $g(r)$ represent solvent shells, and
the heights of these peaks gradually decay to 1 at large distances
as the local density approaches the bulk density.
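A histogram estimate of $g(r)$ can be sketched as follows (a cubic
periodic box with the minimum-image convention is assumed; the bin
count is an arbitrary illustrative choice):
\begin{verbatim}
# A schematic histogram estimator for g(r) in a cubic periodic box.
import numpy as np

def radial_distribution(positions, box_length, n_bins=100):
    N = len(positions)
    V = box_length ** 3
    r_max = box_length / 2.0
    edges = np.linspace(0.0, r_max, n_bins + 1)
    hist = np.zeros(n_bins)
    for i in range(N - 1):
        d = positions[i + 1:] - positions[i]
        d -= box_length * np.round(d / box_length)      # minimum image
        r = np.linalg.norm(d, axis=1)
        hist += np.histogram(r[r < r_max], bins=edges)[0]
    shell_vol = 4.0 / 3.0 * np.pi * (edges[1:] ** 3 - edges[:-1] ** 3)
    ideal = shell_vol * N * (N - 1) / (2.0 * V)         # uncorrelated pair counts
    return 0.5 * (edges[1:] + edges[:-1]), hist / ideal
\end{verbatim}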
1077 tim 2721
1078    
1079 tim 2819 \subsubsection{\label{introSection:timeDependentProperties}\textbf{Time-dependent
1080     Properties}}
1081 tim 2721
1082     Time-dependent properties are usually calculated using \emph{time
1083 tim 2872 correlation functions}, which correlate random variables $A$ and $B$
1084     at two different times,
1085 tim 2721 \begin{equation}
1086     C_{AB} (t) = \left\langle {A(t)B(0)} \right\rangle.
1087     \label{introEquation:timeCorrelationFunction}
1088     \end{equation}
If $A$ and $B$ refer to the same variable, this kind of correlation
function is called an \emph{autocorrelation function}. One example
of an autocorrelation function is the velocity autocorrelation
function, which is directly related to transport properties of
molecular liquids:
1094 tim 2725 \[
1095     D = \frac{1}{3}\int\limits_0^\infty {\left\langle {v(t) \cdot v(0)}
1096     \right\rangle } dt
1097     \]
where $D$ is the diffusion constant. Unlike the velocity
autocorrelation function, which is averaged over time origins and
over all atoms, the dipole autocorrelation function is calculated
for the entire system. The dipole autocorrelation function is given
by:
\[
c_{dipole} (t) = \left\langle {u_{tot} (t) \cdot u_{tot} (0)}
\right\rangle
\]
1106     Here $u_{tot}$ is the net dipole of the entire system and is given
1107     by
1108     \[
1109     u_{tot} (t) = \sum\limits_i {u_i (t)}
1110     \]
In principle, many time correlation functions can be related to
Fourier transforms of the infrared, Raman, and inelastic neutron
scattering spectra of molecular liquids. In practice, one can
extract the IR spectrum from the intensity of the dipole
fluctuations at each frequency using the following relationship:
1116     \[
1117     \hat c_{dipole} (v) = \int_{ - \infty }^\infty {c_{dipole} (t)e^{ -
1118     i2\pi vt} dt}
1119     \]
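The sketch below illustrates how these relationships might be
evaluated from stored trajectory data; the velocity array (one $N
\times 3$ block per frame), the net-dipole array (one 3-vector per
frame) and the sampling interval are assumed inputs.
\begin{verbatim}
# Schematic estimates of D from the velocity autocorrelation function
# and of an IR-like spectrum from net dipole fluctuations; the
# trajectory arrays and time step dt are assumed inputs.
import numpy as np

def autocorrelation(series, n_lags):
    """<A(t).A(0)>, averaged over time origins (and atoms, if present)."""
    n = len(series)
    return np.array([np.mean(np.sum(series[lag:] * series[:n - lag], axis=-1))
                     for lag in range(n_lags)])

def diffusion_constant(velocities, dt, n_lags):
    """D = (1/3) * integral of the velocity autocorrelation function."""
    vacf = autocorrelation(velocities, n_lags)       # velocities: (frames, N, 3)
    return np.trapz(vacf, dx=dt) / 3.0

def ir_spectrum(dipoles, dt, n_lags):
    """Fourier transform of the net-dipole autocorrelation function."""
    c_dipole = autocorrelation(dipoles, n_lags)      # dipoles: (frames, 3)
    return np.fft.rfftfreq(n_lags, dt), np.fft.rfft(c_dipole).real * dt
\end{verbatim}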
1120 tim 2721
1121 tim 2693 \section{\label{introSection:rigidBody}Dynamics of Rigid Bodies}
1122 tim 2692
Rigid bodies are frequently involved in modeling problems in many
areas, from engineering and physics to chemistry. For example,
missiles and vehicles are usually modeled by rigid bodies. The
movement of objects in 3D gaming engines or other physics
simulators is governed by rigid body dynamics. In molecular
1128     simulations, rigid bodies are used to simplify protein-protein
1129     docking studies\cite{Gray2003}.
1130 tim 2694
1131 tim 2705 It is very important to develop stable and efficient methods to
1132 tim 2872 integrate the equations of motion for orientational degrees of
1133     freedom. Euler angles are the natural choice to describe the
rotational degrees of freedom. However, due to $\frac{1}{\sin
\theta }$ singularities, the numerical integration of the
corresponding equations of motion is inefficient and inaccurate.
Although an
1137     alternative integrator using multiple sets of Euler angles can
1138     overcome this difficulty\cite{Barojas1973}, the computational
1139     penalty and the loss of angular momentum conservation still remain.
1140     A singularity-free representation utilizing quaternions was
1141     developed by Evans in 1977\cite{Evans1977}. Unfortunately, this
approach uses a nonseparable Hamiltonian resulting from the
quaternion representation, which prevents symplectic algorithms
from being utilized. A different approach is to apply holonomic
constraints to the atoms belonging to the rigid body. Each atom
moves independently under the normal forces derived from the
potential energy and constraint forces which are used to guarantee
rigidity. However, due to their iterative nature, the SHAKE and
RATTLE algorithms also converge very slowly when the number of
1150     constraints increases\cite{Ryckaert1977, Andersen1983}.
1151 tim 2694
A breakthrough in the geometric integration literature suggests
that, in order to develop a long-term integration scheme, one should
preserve the symplectic structure of the flow. By introducing a
conjugate momentum to the rotation matrix $Q$ and re-formulating
Hamilton's equations, a symplectic integrator, RSHAKE\cite{Kol1997},
was proposed to evolve the Hamiltonian system in a constraint
manifold by iteratively satisfying the orthogonality constraint $Q^T
Q = 1$. An alternative method using the quaternion representation
was developed by Omelyan\cite{Omelyan1998}. However, both of these
methods are iterative and inefficient. In this section, we describe
in depth a symplectic Lie-Poisson integrator for rigid bodies
developed by Dullweber and his coworkers\cite{Dullweber1997}.
1164 tim 2705
1165 tim 2872 \subsection{\label{introSection:constrainedHamiltonianRB}Constrained Hamiltonian for Rigid Bodies}
1166     The motion of a rigid body is Hamiltonian with the Hamiltonian
1167 tim 2713 function
1168 tim 2706 \begin{equation}
1169     H = \frac{1}{2}(p^T m^{ - 1} p) + \frac{1}{2}tr(PJ^{ - 1} P) +
1170     V(q,Q) + \frac{1}{2}tr[(QQ^T - 1)\Lambda ].
1171     \label{introEquation:RBHamiltonian}
1172     \end{equation}
Here, $q$ and $Q$ are the position and rotation matrix of the rigid
body, $p$ and $P$ are conjugate momenta to $q$ and $Q$, and $J$, a
diagonal matrix, is defined by
\[
I_{ii}^{ - 1} = \frac{1}{2}\sum\limits_{j \ne i} {J_{jj}^{ - 1} }
1178     \]
1179     where $I_{ii}$ is the diagonal element of the inertia tensor. This
constrained Hamiltonian is subject to a holonomic constraint,
1182 tim 2706 \begin{equation}
1183 tim 2726 Q^T Q = 1, \label{introEquation:orthogonalConstraint}
1184 tim 2706 \end{equation}
which is used to ensure the orthogonality of the rotation matrix.
Differentiating Eq.~\ref{introEquation:orthogonalConstraint} and
using Eq.~\ref{introEquation:RBMotionRotation}, one may obtain,
1188 tim 2706 \begin{equation}
1189 tim 2707 Q^T PJ^{ - 1} + J^{ - 1} P^T Q = 0 . \\
1190 tim 2706 \label{introEquation:RBFirstOrderConstraint}
1191     \end{equation}
Using Eqs.~\ref{introEquation:motionHamiltonianCoordinate} and
\ref{introEquation:motionHamiltonianMomentum}, one can write down
1194     the equations of motion,
1195 tim 2796 \begin{eqnarray}
1196     \frac{{dq}}{{dt}} & = & \frac{p}{m} \label{introEquation:RBMotionPosition}\\
1197     \frac{{dp}}{{dt}} & = & - \nabla _q V(q,Q) \label{introEquation:RBMotionMomentum}\\
1198     \frac{{dQ}}{{dt}} & = & PJ^{ - 1} \label{introEquation:RBMotionRotation}\\
1199     \frac{{dP}}{{dt}} & = & - \nabla _Q V(q,Q) - 2Q\Lambda . \label{introEquation:RBMotionP}
1200     \end{eqnarray}
1201 tim 2707 In general, there are two ways to satisfy the holonomic constraints.
1202 tim 2872 We can use a constraint force provided by a Lagrange multiplier on
the normal manifold to keep the motion on the constraint space, or
we can simply evolve the system on the constraint manifold. These
two methods have been proved to be equivalent. The holonomic
constraint and the equations of motion define a constraint manifold
for rigid bodies
1208 tim 2707 \[
1209     M = \left\{ {(Q,P):Q^T Q = 1,Q^T PJ^{ - 1} + J^{ - 1} P^T Q = 0}
1210     \right\}.
1211     \]
1212     Unfortunately, this constraint manifold is not the cotangent bundle
$T^* SO(3)$, which can be considered as a symplectic manifold on the
Lie rotation group $SO(3)$. However, it turns out that under a
symplectic transformation, the cotangent space and the phase space
are diffeomorphic. By introducing
1217 tim 2706 \[
1218 tim 2707 \tilde Q = Q,\tilde P = \frac{1}{2}\left( {P - QP^T Q} \right),
1219 tim 2706 \]
1220 tim 2707 the mechanical system subject to a holonomic constraint manifold $M$
1221     can be re-formulated as a Hamiltonian system on the cotangent space
1222     \[
1223     T^* SO(3) = \left\{ {(\tilde Q,\tilde P):\tilde Q^T \tilde Q =
1,\tilde Q^T \tilde PJ^{ - 1} + J^{ - 1} \tilde P^T \tilde Q = 0} \right\}
1225     \]
For a body-fixed vector $X_i$ with respect to the center of mass of
the rigid body, its corresponding lab-fixed vector $X_i^{lab}$ is
given as
1229     \begin{equation}
1230     X_i^{lab} = Q X_i + q.
1231     \end{equation}
Therefore, the potential energy $V(q,Q)$ is defined by
1233     \[
1234     V(q,Q) = V(Q X_0 + q).
1235     \]
1236 tim 2713 Hence, the force and torque are given by
1237 tim 2707 \[
1238 tim 2713 \nabla _q V(q,Q) = F(q,Q) = \sum\limits_i {F_i (q,Q)},
1239 tim 2707 \]
1240 tim 2713 and
1241 tim 2707 \[
\nabla _Q V(q,Q) = \sum\limits_i {F_i (q,Q)X_i^T }
1243     \]
respectively. As a common choice to describe the rotational dynamics
of the rigid body, the angular momentum in the body-fixed frame,
$\Pi = Q^T P$, is introduced to rewrite the equations of motion,
1247 tim 2707 \begin{equation}
1248     \begin{array}{l}
1249 tim 2899 \dot \Pi = J^{ - 1} \Pi ^T \Pi + Q^T \sum\limits_i {F_i (q,Q)X_i^T } - \Lambda, \\
1250     \dot Q = Q\Pi {\rm{ }}J^{ - 1}, \\
1251 tim 2707 \end{array}
1252     \label{introEqaution:RBMotionPI}
1253     \end{equation}
1254 tim 2899 as well as holonomic constraints,
1255 tim 2707 \[
1256     \begin{array}{l}
1257 tim 2899 \Pi J^{ - 1} + J^{ - 1} \Pi ^t = 0, \\
1258     Q^T Q = 1 .\\
1259 tim 2707 \end{array}
1260     \]
1261     For a vector $v(v_1 ,v_2 ,v_3 ) \in R^3$ and a matrix $\hat v \in
1262     so(3)^ \star$, the hat-map isomorphism,
1263     \begin{equation}
1264     v(v_1 ,v_2 ,v_3 ) \Leftrightarrow \hat v = \left(
1265     {\begin{array}{*{20}c}
1266     0 & { - v_3 } & {v_2 } \\
1267     {v_3 } & 0 & { - v_1 } \\
1268     { - v_2 } & {v_1 } & 0 \\
1269     \end{array}} \right),
1270     \label{introEquation:hatmapIsomorphism}
1271     \end{equation}
1272     will let us associate the matrix products with traditional vector
1273     operations
1274     \[
1275 tim 2899 \hat vu = v \times u.
1276 tim 2707 \]
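The hat-map can be checked numerically in a few lines (the vectors
below are arbitrary illustrative values):
\begin{verbatim}
# A quick numerical check that hat(v) u reproduces the cross product.
import numpy as np

def hat(v):
    return np.array([[0.0, -v[2], v[1]],
                     [v[2], 0.0, -v[0]],
                     [-v[1], v[0], 0.0]])

v, u = np.array([1.0, 2.0, 3.0]), np.array([-1.0, 0.5, 2.0])
assert np.allclose(hat(v) @ u, np.cross(v, u))
\end{verbatim}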
1277 tim 2899 Using Eq.~\ref{introEqaution:RBMotionPI}, one can construct a skew
1278 tim 2707 matrix,
1279 tim 2899 \begin{eqnarray}
(\dot \Pi - \dot \Pi ^T ) & = & (\Pi - \Pi ^T )(J^{ - 1} \Pi + \Pi
J^{ - 1} ) \notag \\
& & + \sum\limits_i {[Q^T F_i (r,Q)X_i^T - X_i F_i (r,Q)^T Q]} -
(\Lambda - \Lambda ^T ).
1284     \label{introEquation:skewMatrixPI}
1285 tim 2899 \end{eqnarray}
1286     Since $\Lambda$ is symmetric, the last term of
1287     Eq.~\ref{introEquation:skewMatrixPI} is zero, which implies the
1288     Lagrange multiplier $\Lambda$ is absent from the equations of
motion. This unique property eliminates the requirement of
iterations which cannot be avoided in other methods\cite{Kol1997,
Omelyan1998}. Applying the hat-map isomorphism, we obtain the
equation of motion for the angular momentum in the body frame
1293 tim 2713 \begin{equation}
1294     \dot \pi = \pi \times I^{ - 1} \pi + \sum\limits_i {\left( {Q^T
1295     F_i (r,Q)} \right) \times X_i }.
1296     \label{introEquation:bodyAngularMotion}
1297     \end{equation}
In the same manner, the equation of motion for the rotation matrix
is given by
1300     \[
1301 tim 2899 \dot Q = Qskew(I^{ - 1} \pi ).
1302 tim 2707 \]
1303    
1304 tim 2713 \subsection{\label{introSection:SymplecticFreeRB}Symplectic
1305     Lie-Poisson Integrator for Free Rigid Body}
1306 tim 2707
1307 tim 2872 If there are no external forces exerted on the rigid body, the only
1308     contribution to the rotational motion is from the kinetic energy
1309     (the first term of \ref{introEquation:bodyAngularMotion}). The free
1310     rigid body is an example of a Lie-Poisson system with Hamiltonian
1311     function
1312 tim 2713 \begin{equation}
1313     T^r (\pi ) = T_1 ^r (\pi _1 ) + T_2^r (\pi _2 ) + T_3^r (\pi _3 )
1314     \label{introEquation:rotationalKineticRB}
1315     \end{equation}
where $T_i^r (\pi _i ) = \frac{{\pi _i ^2 }}{{2I_i }}$ and the
Lie-Poisson structure matrix is
1318     \begin{equation}
1319     J(\pi ) = \left( {\begin{array}{*{20}c}
1320     0 & {\pi _3 } & { - \pi _2 } \\
1321     { - \pi _3 } & 0 & {\pi _1 } \\
1322     {\pi _2 } & { - \pi _1 } & 0 \\
1323 tim 2899 \end{array}} \right).
1324 tim 2713 \end{equation}
Thus, the dynamics of the free rigid body is governed by
1326     \begin{equation}
1327 tim 2899 \frac{d}{{dt}}\pi = J(\pi )\nabla _\pi T^r (\pi ).
1328 tim 2713 \end{equation}
1329     One may notice that each $T_i^r$ in Equation
1330     \ref{introEquation:rotationalKineticRB} can be solved exactly. For
1331     instance, the equations of motion due to $T_1^r$ are given by
1332     \begin{equation}
1333     \frac{d}{{dt}}\pi = R_1 \pi ,\frac{d}{{dt}}Q = QR_1
1334     \label{introEqaution:RBMotionSingleTerm}
1335     \end{equation}
1336     where
1337     \[ R_1 = \left( {\begin{array}{*{20}c}
1338     0 & 0 & 0 \\
0 & 0 & {\pi _1 /I_1 } \\
0 & { - \pi _1 /I_1 } & 0 \\
1341     \end{array}} \right).
1342     \]
The solution of Equation \ref{introEqaution:RBMotionSingleTerm} is
1344 tim 2707 \[
1345 tim 2713 \pi (\Delta t) = e^{\Delta tR_1 } \pi (0),Q(\Delta t) =
1346     Q(0)e^{\Delta tR_1 }
1347 tim 2707 \]
1348 tim 2713 with
1349 tim 2707 \[
1350 tim 2713 e^{\Delta tR_1 } = \left( {\begin{array}{*{20}c}
1 & 0 & 0 \\
1352     0 & {\cos \theta _1 } & {\sin \theta _1 } \\
1353     0 & { - \sin \theta _1 } & {\cos \theta _1 } \\
1354     \end{array}} \right),\theta _1 = \frac{{\pi _1 }}{{I_1 }}\Delta t.
1355 tim 2707 \]
To reduce the cost of computing expensive functions in $e^{\Delta
tR_1 }$, we can use the Cayley transformation to obtain a
single-axis propagator,
1359 tim 2713 \[
e^{\Delta tR_1 } \approx (1 - \Delta tR_1 /2)^{ - 1} (1 + \Delta
tR_1 /2).
1362 tim 2713 \]
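A small numerical check of the single-axis propagator and its Cayley
approximant is sketched below; the values of $\pi_1$, $I_1$ and
$\Delta t$ are arbitrary illustrative choices.
\begin{verbatim}
# Compare the exact single-axis propagator with its Cayley approximant;
# all numerical values are illustrative.
import numpy as np

pi1, I1, dt = 1.3, 2.0, 0.01
omega = pi1 / I1                               # rotation rate about axis 1
R1 = omega * np.array([[0.0, 0.0, 0.0],
                       [0.0, 0.0, 1.0],
                       [0.0, -1.0, 0.0]])
theta = omega * dt
exact = np.array([[1.0, 0.0, 0.0],
                  [0.0, np.cos(theta), np.sin(theta)],
                  [0.0, -np.sin(theta), np.cos(theta)]])
cayley = np.linalg.solve(np.eye(3) - 0.5 * dt * R1,
                         np.eye(3) + 0.5 * dt * R1)
print(np.max(np.abs(exact - cayley)))              # small (O(dt^3)) difference
assert np.allclose(cayley.T @ cayley, np.eye(3))   # Cayley map stays orthogonal
\end{verbatim}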
1363 tim 2720 The flow maps for $T_2^r$ and $T_3^r$ can be found in the same
manner. In order to construct a second-order symplectic method, we
split the angular kinetic Hamiltonian function into five terms
1366 tim 2707 \[
1367 tim 2713 T^r (\pi ) = \frac{1}{2}T_1 ^r (\pi _1 ) + \frac{1}{2}T_2^r (\pi _2
1368     ) + T_3^r (\pi _3 ) + \frac{1}{2}T_2^r (\pi _2 ) + \frac{1}{2}T_1 ^r
1369 tim 2872 (\pi _1 ).
1370     \]
1371     By concatenating the propagators corresponding to these five terms,
we can obtain a symplectic integrator,
1373 tim 2713 \[
1374     \varphi _{\Delta t,T^r } = \varphi _{\Delta t/2,\pi _1 } \circ
1375 tim 2707 \varphi _{\Delta t/2,\pi _2 } \circ \varphi _{\Delta t,\pi _3 }
1376     \circ \varphi _{\Delta t/2,\pi _2 } \circ \varphi _{\Delta t/2,\pi
1377 tim 2713 _1 }.
1378 tim 2707 \]
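As a sketch (using the single-axis rotation matrices above, with
illustrative axis conventions and moments of inertia), the five-term
splitting can be composed as follows:
\begin{verbatim}
# A sketch of the five-term splitting for the free rigid rotor; axis
# conventions and parameter values are illustrative.
import numpy as np

def axis_propagator(pi, Q, axis, dt, I):
    """Exact propagation of (pi, Q) under the kinetic term for one axis."""
    j, k = [(1, 2), (2, 0), (0, 1)][axis]      # the two components that mix
    theta = pi[axis] / I[axis] * dt
    c, s = np.cos(theta), np.sin(theta)
    R = np.eye(3)
    R[j, j], R[j, k], R[k, j], R[k, k] = c, s, -s, c
    return R @ pi, Q @ R

def free_rotor_step(pi, Q, dt, I):
    """Symmetric five-stage splitting of the rotational kinetic propagator."""
    for axis, h in [(0, dt / 2), (1, dt / 2), (2, dt), (1, dt / 2), (0, dt / 2)]:
        pi, Q = axis_propagator(pi, Q, axis, h, I)
    return pi, Q

I = np.array([1.0, 2.0, 3.0])
pi, Q = np.array([0.3, -0.2, 0.5]), np.eye(3)
pi, Q = free_rotor_step(pi, Q, 1.0e-3, I)
print(np.linalg.norm(pi))                      # |pi| (the Casimir) is preserved
print(np.max(np.abs(Q.T @ Q - np.eye(3))))     # Q remains orthogonal
\end{verbatim}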
The non-canonical Lie-Poisson bracket $\{ F,G\} $ of two functions
$F(\pi )$ and $G(\pi )$ is defined by
1381 tim 2707 \[
1382 tim 2713 \{ F,G\} (\pi ) = [\nabla _\pi F(\pi )]^T J(\pi )\nabla _\pi G(\pi
1383 tim 2899 ).
1384 tim 2713 \]
If the Poisson bracket of a function $F$ with an arbitrary smooth
function $G$ is zero, $F$ is a \emph{Casimir}, which is a conserved
quantity in a Poisson system. We can easily verify that the norm of
the angular momentum, $\parallel \pi
\parallel$, is a \emph{Casimir}. Let $F(\pi ) = S(\frac{{\parallel
\pi \parallel ^2 }}{2})$ for an arbitrary function $S:R \to R$,
then by the chain rule
1392     \[
1393     \nabla _\pi F(\pi ) = S'(\frac{{\parallel \pi \parallel ^2
1394 tim 2899 }}{2})\pi.
1395 tim 2713 \]
1396 tim 2899 Thus, $ [\nabla _\pi F(\pi )]^T J(\pi ) = - S'(\frac{{\parallel
1397     \pi
1398 tim 2713 \parallel ^2 }}{2})\pi \times \pi = 0 $. This explicit
1399 tim 2872 Lie-Poisson integrator is found to be both extremely efficient and
stable. These properties can be explained by the fact that the
small-angle approximation is used and that the norm of the angular
momentum is conserved.
1403 tim 2713
1404     \subsection{\label{introSection:RBHamiltonianSplitting} Hamiltonian
1405     Splitting for Rigid Body}
1406    
The Hamiltonian of a rigid body can be separated into kinetic and
potential energy terms,
1409     \[
1410 tim 2899 H = T(p,\pi ) + V(q,Q).
1411 tim 2713 \]
The equations of motion corresponding to the potential and kinetic
energies are listed in the table below.
1414 tim 2776 \begin{table}
1415 tim 2889 \caption{EQUATIONS OF MOTION DUE TO POTENTIAL AND KINETIC ENERGIES}
1416 tim 2713 \begin{center}
1417     \begin{tabular}{|l|l|}
1418     \hline
1419     % after \\: \hline or \cline{col1-col2} \cline{col3-col4} ...
1420     Potential & Kinetic \\
$\frac{d}{{dt}}q = 0$ & $ \frac{d}{{dt}}q = \frac{p}{m}$ \\
1422     $\frac{d}{{dt}}p = - \frac{{\partial V}}{{\partial q}}$ & $ \frac{d}{{dt}}p = 0$ \\
1423     $\frac{d}{{dt}}Q = 0$ & $ \frac{d}{{dt}}Q = Qskew(I^{ - 1} j)$ \\
1424     $ \frac{d}{{dt}}\pi = \sum\limits_i {\left( {Q^T F_i (r,Q)} \right) \times X_i }$ & $\frac{d}{{dt}}\pi = \pi \times I^{ - 1} \pi$\\
1425     \hline
1426     \end{tabular}
1427     \end{center}
1428 tim 2776 \end{table}
A second-order symplectic method is now obtained by composing the
propagators for the potential and kinetic parts,
1431 tim 2713 \[
1432     \varphi _{\Delta t} = \varphi _{\Delta t/2,V} \circ \varphi
1433     _{\Delta t,T} \circ \varphi _{\Delta t/2,V}.
1434     \]
1435 tim 2719 Moreover, $\varphi _{\Delta t/2,V}$ can be divided into two
sub-propagators which correspond to the force and torque,
respectively,
1438 tim 2713 \[
1439 tim 2707 \varphi _{\Delta t/2,V} = \varphi _{\Delta t/2,F} \circ \varphi
1440 tim 2713 _{\Delta t/2,\tau }.
1441 tim 2707 \]
1442 tim 2713 Since the associated operators of $\varphi _{\Delta t/2,F} $ and
$\varphi _{\Delta t/2,\tau }$ commute, the composition order inside
$\varphi _{\Delta t/2,V}$ does not matter. Furthermore, the kinetic
energy can be separated into a translational kinetic term, $T^t
(p)$, and a rotational kinetic term, $T^r (\pi )$,
1447 tim 2713 \begin{equation}
1448     T(p,\pi ) =T^t (p) + T^r (\pi ).
1449     \end{equation}
1450     where $ T^t (p) = \frac{1}{2}p^T m^{ - 1} p $ and $T^r (\pi )$ is
1451     defined by \ref{introEquation:rotationalKineticRB}. Therefore, the
1452 tim 2872 corresponding propagators are given by
1453 tim 2713 \[
1454     \varphi _{\Delta t,T} = \varphi _{\Delta t,T^t } \circ \varphi
1455     _{\Delta t,T^r }.
1456     \]
1457 tim 2872 Finally, we obtain the overall symplectic propagators for freely
1458     moving rigid bodies
1459 tim 2899 \begin{eqnarray*}
1460     \varphi _{\Delta t} &=& \varphi _{\Delta t/2,F} \circ \varphi _{\Delta t/2,\tau } \\
1461     & & \circ \varphi _{\Delta t,T^t } \circ \varphi _{\Delta t/2,\pi _1 } \circ \varphi _{\Delta t/2,\pi _2 } \circ \varphi _{\Delta t,\pi _3 } \circ \varphi _{\Delta t/2,\pi _2 } \circ \varphi _{\Delta t/2,\pi _1 } \\
1462     & & \circ \varphi _{\Delta t/2,\tau } \circ \varphi _{\Delta t/2,F} .\\
1463 tim 2713 \label{introEquation:overallRBFlowMaps}
1464 tim 2899 \end{eqnarray*}
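In outline, one step of this overall propagator might look like the
following sketch; the force routine, the body-frame torque routine
and the free-rotor routine (for instance, one like the sketch given
earlier in this section) are assumed to be supplied by the caller.
\begin{verbatim}
# A schematic single step of the overall splitting; `force(q, Q)`,
# `torque(q, Q)` (body-frame torque) and `free_rotor_step(pi, Q, dt, I)`
# are assumed user-supplied routines.
def rigid_body_step(q, p, Q, pi, dt, m, I, force, torque, free_rotor_step):
    p = p + 0.5 * dt * force(q, Q)             # phi_{dt/2, F}
    pi = pi + 0.5 * dt * torque(q, Q)          # phi_{dt/2, tau}
    q = q + dt * p / m                         # phi_{dt, T^t}
    pi, Q = free_rotor_step(pi, Q, dt, I)      # the five rotational stages
    pi = pi + 0.5 * dt * torque(q, Q)          # phi_{dt/2, tau}
    p = p + 0.5 * dt * force(q, Q)             # phi_{dt/2, F}
    return q, p, Q, pi
\end{verbatim}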
1465 tim 2707
1466 tim 2685 \section{\label{introSection:langevinDynamics}Langevin Dynamics}
As an alternative to Newtonian dynamics, Langevin dynamics, which
mimics a simple heat bath with stochastic and dissipative forces,
has been applied in a variety of studies. This section will review
the theory of Langevin dynamics. A brief derivation of the
generalized Langevin equation will be given first. Following that,
we will discuss the physical meaning of the terms appearing in the
equation as well as the calculation of the friction tensor from
hydrodynamic theory.
1475 tim 2685
1476 tim 2719 \subsection{\label{introSection:generalizedLangevinDynamics}Derivation of Generalized Langevin Equation}
1477 tim 2685
1478 tim 2872 A harmonic bath model, in which an effective set of harmonic
1479 tim 2719 oscillators are used to mimic the effect of a linearly responding
1480     environment, has been widely used in quantum chemistry and
statistical mechanics. One of the successful applications of the
harmonic bath model is the derivation of the Generalized Langevin
Equation (GLE). Let us consider a system in which the degree of
freedom $x$ is assumed to couple to the bath linearly, giving a
Hamiltonian of the form
1486 tim 2696 \begin{equation}
1487     H = \frac{{p^2 }}{{2m}} + U(x) + H_B + \Delta U(x,x_1 , \ldots x_N)
1488 tim 2719 \label{introEquation:bathGLE}.
1489 tim 2696 \end{equation}
1490 tim 2872 Here $p$ is a momentum conjugate to $x$, $m$ is the mass associated
1491     with this degree of freedom, $H_B$ is a harmonic bath Hamiltonian,
1492 tim 2696 \[
H_B = \sum\limits_{\alpha = 1}^N {\left\{ {\frac{{p_\alpha ^2
}}{{2m_\alpha }} + \frac{1}{2}m_\alpha \omega _\alpha ^2 x_\alpha ^2 }
\right\}}
1496 tim 2696 \]
1497 tim 2719 where the index $\alpha$ runs over all the bath degrees of freedom,
1498     $\omega _\alpha$ are the harmonic bath frequencies, $m_\alpha$ are
1499 tim 2872 the harmonic bath masses, and $\Delta U$ is a bilinear system-bath
1500 tim 2719 coupling,
1501 tim 2696 \[
1502     \Delta U = - \sum\limits_{\alpha = 1}^N {g_\alpha x_\alpha x}
1503     \]
1504 tim 2872 where $g_\alpha$ are the coupling constants between the bath
1505 tim 2874 coordinates ($x_ \alpha$) and the system coordinate ($x$).
1506 tim 2872 Introducing
1507 tim 2696 \[
1508 tim 2719 W(x) = U(x) - \sum\limits_{\alpha = 1}^N {\frac{{g_\alpha ^2
1509     }}{{2m_\alpha w_\alpha ^2 }}} x^2
1510 tim 2899 \]
and combining the last two terms in Eq.~\ref{introEquation:bathGLE}, we may rewrite the total Hamiltonian as
1512 tim 2696 \[
1513     H = \frac{{p^2 }}{{2m}} + W(x) + \sum\limits_{\alpha = 1}^N
1514     {\left\{ {\frac{{p_\alpha ^2 }}{{2m_\alpha }} + \frac{1}{2}m_\alpha
1515     w_\alpha ^2 \left( {x_\alpha - \frac{{g_\alpha }}{{m_\alpha
1516 tim 2899 w_\alpha ^2 }}x} \right)^2 } \right\}}.
1517 tim 2696 \]
1518     Since the first two terms of the new Hamiltonian depend only on the
1519     system coordinates, we can get the equations of motion for
1520 tim 2872 Generalized Langevin Dynamics by Hamilton's equations,
1521 tim 2719 \begin{equation}
1522     m\ddot x = - \frac{{\partial W(x)}}{{\partial x}} -
1523     \sum\limits_{\alpha = 1}^N {g_\alpha \left( {x_\alpha -
1524     \frac{{g_\alpha }}{{m_\alpha w_\alpha ^2 }}x} \right)},
1525     \label{introEquation:coorMotionGLE}
1526     \end{equation}
1527     and
1528     \begin{equation}
m_\alpha \ddot x_\alpha = - m_\alpha w_\alpha ^2 \left( {x_\alpha -
1530     \frac{{g_\alpha }}{{m_\alpha w_\alpha ^2 }}x} \right).
1531     \label{introEquation:bathMotionGLE}
1532     \end{equation}
1533     In order to derive an equation for $x$, the dynamics of the bath
1534     variables $x_\alpha$ must be solved exactly first. As an integral
1535     transform which is particularly useful in solving linear ordinary
differential equations, the Laplace transform is the appropriate
tool to solve this problem. The basic idea is to transform the
difficult differential equations into simple algebra problems which
can be solved easily. Then, by applying the inverse Laplace
transform, also known as the Bromwich integral, we can retrieve the
solutions of the original problems. Let $f(t)$ be a function defined
on $ [0,\infty ) $. The Laplace transform of $f(t)$ is a new
function defined as
1543 tim 2696 \[
1544 tim 2719 L(f(t)) \equiv F(p) = \int_0^\infty {f(t)e^{ - pt} dt}
1545 tim 2696 \]
1546 tim 2719 where $p$ is real and $L$ is called the Laplace Transform
Operator. Below are some important properties of the Laplace transform:
1548 tim 2789 \begin{eqnarray*}
1549     L(x + y) & = & L(x) + L(y) \\
1550     L(ax) & = & aL(x) \\
L(\dot x) & = & pL(x) - x(0) \\
1552     L(\ddot x)& = & p^2 L(x) - px(0) - \dot x(0) \\
1553     L\left( {\int_0^t {g(t - \tau )h(\tau )d\tau } } \right)& = & G(p)H(p) \\
1554     \end{eqnarray*}
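These properties are standard; for instance, the derivative rule can
be checked symbolically for a sample function (an illustrative check
only, not part of the derivation):
\begin{verbatim}
# Symbolic check of L(dx/dt) = p L(x) - x(0) for x(t) = exp(-a t).
import sympy as sp

t, p, a = sp.symbols('t p a', positive=True)
x = sp.exp(-a * t)
lhs = sp.laplace_transform(sp.diff(x, t), t, p, noconds=True)
rhs = p * sp.laplace_transform(x, t, p, noconds=True) - x.subs(t, 0)
print(sp.simplify(lhs - rhs))    # 0
\end{verbatim}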
1555 tim 2872 Applying the Laplace transform to the bath coordinates, we obtain
1556 tim 2789 \begin{eqnarray*}
1557     p^2 L(x_\alpha ) - px_\alpha (0) - \dot x_\alpha (0) & = & - \omega _\alpha ^2 L(x_\alpha ) + \frac{{g_\alpha }}{{\omega _\alpha }}L(x) \\
1558     L(x_\alpha ) & = & \frac{{\frac{{g_\alpha }}{{\omega _\alpha }}L(x) + px_\alpha (0) + \dot x_\alpha (0)}}{{p^2 + \omega _\alpha ^2 }} \\
1559     \end{eqnarray*}
In the same way, the equation for the system coordinate becomes
1561 tim 2789 \begin{eqnarray*}
1562 tim 2899 mL(\ddot x) & = &
1563     - \sum\limits_{\alpha = 1}^N {\left\{ { - \frac{{g_\alpha ^2 }}{{m_\alpha \omega _\alpha ^2 }}\frac{p}{{p^2 + \omega _\alpha ^2 }}pL(x) - \frac{p}{{p^2 + \omega _\alpha ^2 }}g_\alpha x_\alpha (0) - \frac{1}{{p^2 + \omega _\alpha ^2 }}g_\alpha \dot x_\alpha (0)} \right\}} \\
1564     & & - \frac{1}{p}\frac{{\partial W(x)}}{{\partial x}}
1565 tim 2789 \end{eqnarray*}
With the help of the following important Laplace transform pairs:
1568 tim 2696 \[
1569 tim 2719 \begin{array}{c}
1570     L(\cos at) = \frac{p}{{p^2 + a^2 }} \\
1571     L(\sin at) = \frac{a}{{p^2 + a^2 }} \\
1572     L(1) = \frac{1}{p} \\
1573     \end{array}
1574 tim 2696 \]
1575 tim 2899 we obtain
1576 tim 2794 \begin{eqnarray*}
1577     m\ddot x & = & - \frac{{\partial W(x)}}{{\partial x}} -
1578 tim 2696 \sum\limits_{\alpha = 1}^N {\left\{ {\left( { - \frac{{g_\alpha ^2
1579     }}{{m_\alpha \omega _\alpha ^2 }}} \right)\int_0^t {\cos (\omega
1580 tim 2794 _\alpha t)\dot x(t - \tau )d\tau } } \right\}} \\
1581     & & + \sum\limits_{\alpha = 1}^N {\left\{ {\left[ {g_\alpha
1582     x_\alpha (0) - \frac{{g_\alpha }}{{m_\alpha \omega _\alpha }}}
1583     \right]\cos (\omega _\alpha t) + \frac{{g_\alpha \dot x_\alpha
1584     (0)}}{{\omega _\alpha }}\sin (\omega _\alpha t)} \right\}}
1585     \end{eqnarray*}
1586     \begin{eqnarray*}
1587     m\ddot x & = & - \frac{{\partial W(x)}}{{\partial x}} - \int_0^t
1588 tim 2696 {\sum\limits_{\alpha = 1}^N {\left( { - \frac{{g_\alpha ^2
1589     }}{{m_\alpha \omega _\alpha ^2 }}} \right)\cos (\omega _\alpha
1590 tim 2794 t)\dot x(t - \tau )d} \tau } \\
1591     & & + \sum\limits_{\alpha = 1}^N {\left\{ {\left[ {g_\alpha
1592     x_\alpha (0) - \frac{{g_\alpha }}{{m_\alpha \omega _\alpha }}}
1593     \right]\cos (\omega _\alpha t) + \frac{{g_\alpha \dot x_\alpha
1594     (0)}}{{\omega _\alpha }}\sin (\omega _\alpha t)} \right\}}
1595     \end{eqnarray*}
1596 tim 2719 Introducing a \emph{dynamic friction kernel}
1597 tim 2696 \begin{equation}
1598 tim 2719 \xi (t) = \sum\limits_{\alpha = 1}^N {\left( { - \frac{{g_\alpha ^2
1599     }}{{m_\alpha \omega _\alpha ^2 }}} \right)\cos (\omega _\alpha t)}
1600     \label{introEquation:dynamicFrictionKernelDefinition}
1601     \end{equation}
and a \emph{random force}
1603     \begin{equation}
1604     R(t) = \sum\limits_{\alpha = 1}^N {\left( {g_\alpha x_\alpha (0)
1605     - \frac{{g_\alpha ^2 }}{{m_\alpha \omega _\alpha ^2 }}x(0)}
1606     \right)\cos (\omega _\alpha t)} + \frac{{\dot x_\alpha
1607     (0)}}{{\omega _\alpha }}\sin (\omega _\alpha t),
1608     \label{introEquation:randomForceDefinition}
1609     \end{equation}
1610     the equation of motion can be rewritten as
1611     \begin{equation}
1612 tim 2696 m\ddot x = - \frac{{\partial W}}{{\partial x}} - \int_0^t {\xi
1613     (t)\dot x(t - \tau )d\tau } + R(t)
1614     \label{introEuqation:GeneralizedLangevinDynamics}
1615     \end{equation}
1616 tim 2719 which is known as the \emph{generalized Langevin equation}.
1617    
1618 tim 2819 \subsubsection{\label{introSection:randomForceDynamicFrictionKernel}\textbf{Random Force and Dynamic Friction Kernel}}
1619 tim 2719
1620     One may notice that $R(t)$ depends only on initial conditions, which
1621     implies it is completely deterministic within the context of a
1622     harmonic bath. However, it is easy to verify that $R(t)$ is totally
1623     uncorrelated to $x$ and $\dot x$,
1624 tim 2696 \[
1625 tim 2719 \begin{array}{l}
1626     \left\langle {x(t)R(t)} \right\rangle = 0, \\
1627     \left\langle {\dot x(t)R(t)} \right\rangle = 0. \\
1628     \end{array}
1629 tim 2696 \]
This property is what we expect from a truly random process. As
long as the model chosen for $R(t)$ is a Gaussian distribution, the
stochastic nature of the GLE remains.
1633 tim 2696
1634 tim 2719 %dynamic friction kernel
1635     The convolution integral
1636 tim 2696 \[
1637 tim 2719 \int_0^t {\xi (t)\dot x(t - \tau )d\tau }
1638 tim 2696 \]
1639 tim 2719 depends on the entire history of the evolution of $x$, which implies
the bath requires a finite time to respond to changes in the motion
1641     the bath requires a finite time to respond to change in the motion
in the system coordinate, we may regard $\xi(t)$ as a constant,
$\xi(t) = \xi _0$. Hence, the convolution integral becomes
1644     $\xi(t) = \Xi_0$. Hence, the convolution integral becomes
1645     \[
1646     \int_0^t {\xi (t)\dot x(t - \tau )d\tau } = \xi _0 (x(t) - x(0))
1647     \]
1648 tim 2899 and Eq.~\ref{introEuqation:GeneralizedLangevinDynamics} becomes
1649 tim 2719 \[
1650     m\ddot x = - \frac{\partial }{{\partial x}}\left( {W(x) +
1651     \frac{1}{2}\xi _0 (x - x_0 )^2 } \right) + R(t),
1652     \]
1653 tim 2872 which can be used to describe the effect of dynamic caging in
1654     viscous solvents. The other extreme is the bath that responds
infinitely quickly to motions in the system. Thus, $\xi (t)$ can be
taken as a $\delta$ function in time:
1657 tim 2719 \[
1658     \xi (t) = 2\xi _0 \delta (t)
1659     \]
1660     Hence, the convolution integral becomes
1661     \[
1662     \int_0^t {\xi (t)\dot x(t - \tau )d\tau } = 2\xi _0 \int_0^t
1663     {\delta (t)\dot x(t - \tau )d\tau } = \xi _0 \dot x(t),
1664     \]
1665 tim 2899 and Eq.~\ref{introEuqation:GeneralizedLangevinDynamics} becomes
1666 tim 2719 \begin{equation}
1667     m\ddot x = - \frac{{\partial W(x)}}{{\partial x}} - \xi _0 \dot
1668     x(t) + R(t) \label{introEquation:LangevinEquation}
1669     \end{equation}
1670     which is known as the Langevin equation. The static friction
coefficient $\xi _0$ can either be calculated from the spectral
density or be determined by Stokes' law for regularly shaped
particles. A brief review of the calculation of the friction tensor
for arbitrarily shaped particles is given in
Sec.~\ref{introSection:frictionTensor}.
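A minimal sketch of a discretized version of
Eq.~\ref{introEquation:LangevinEquation} is given below: a simple
velocity-Verlet-style scheme in which the variance of the discrete
random force is chosen to satisfy the fluctuation dissipation
relation. The potential, mass, friction coefficient and temperature
are illustrative assumptions.
\begin{verbatim}
# A minimal, illustrative discretization of the Langevin equation;
# grad_W is the derivative of the potential of mean force W(x).
import numpy as np

def langevin_step(x, v, dt, m, xi0, kT, grad_W, rng):
    sigma = np.sqrt(2.0 * xi0 * kT / dt)       # random force strength per step
    a = (-grad_W(x) - xi0 * v + sigma * rng.standard_normal()) / m
    v_half = v + 0.5 * dt * a
    x_new = x + dt * v_half
    a_new = (-grad_W(x_new) - xi0 * v_half
             + sigma * rng.standard_normal()) / m
    v_new = v_half + 0.5 * dt * a_new
    return x_new, v_new

rng = np.random.default_rng(0)
x, v = 1.0, 0.0
for _ in range(1000):                          # harmonic W(x) = x^2 / 2
    x, v = langevin_step(x, v, 1.0e-3, 1.0, 5.0, 1.0, lambda y: y, rng)
\end{verbatim}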
1675 tim 2696
1676 tim 2819 \subsubsection{\label{introSection:secondFluctuationDissipation}\textbf{The Second Fluctuation Dissipation Theorem}}
1677 tim 2719
1678     Defining a new set of coordinates,
1679 tim 2696 \[
q_\alpha (t) = x_\alpha (t) - \frac{{g_\alpha }}{{m_\alpha \omega
_\alpha ^2 }}x(0),
\]
we can rewrite $R(t)$ as
1684 tim 2696 \[
1685 tim 2719 R(t) = \sum\limits_{\alpha = 1}^N {g_\alpha q_\alpha (t)}.
1686 tim 2696 \]
1687     And since the $q$ coordinates are harmonic oscillators,
1688 tim 2789 \begin{eqnarray*}
1689     \left\langle {q_\alpha ^2 } \right\rangle & = & \frac{{kT}}{{m_\alpha \omega _\alpha ^2 }} \\
1690     \left\langle {q_\alpha (t)q_\alpha (0)} \right\rangle & = & \left\langle {q_\alpha ^2 (0)} \right\rangle \cos (\omega _\alpha t) \\
1691     \left\langle {q_\alpha (t)q_\beta (0)} \right\rangle & = &\delta _{\alpha \beta } \left\langle {q_\alpha (t)q_\alpha (0)} \right\rangle \\
1692     \left\langle {R(t)R(0)} \right\rangle & = & \sum\limits_\alpha {\sum\limits_\beta {g_\alpha g_\beta \left\langle {q_\alpha (t)q_\beta (0)} \right\rangle } } \\
1693     & = &\sum\limits_\alpha {g_\alpha ^2 \left\langle {q_\alpha ^2 (0)} \right\rangle \cos (\omega _\alpha t)} \\
1694     & = &kT\xi (t) \\
1695     \end{eqnarray*}
1696 tim 2719 Thus, we recover the \emph{second fluctuation dissipation theorem}
1697 tim 2696 \begin{equation}
\xi (t) = \frac{1}{{kT}}\left\langle {R(t)R(0)} \right\rangle
1699 tim 2719 \label{introEquation:secondFluctuationDissipation}.
1700 tim 2696 \end{equation}
1701 tim 2719 In effect, it acts as a constraint on the possible ways in which one
1702     can model the random force and friction kernel.
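As an illustrative numerical check of
Eq.~\ref{introEquation:secondFluctuationDissipation}, one can sample
thermal initial conditions for a small, randomly chosen harmonic bath
and compare $\left\langle {R(t)R(0)} \right\rangle$ with $kT$ times
the kernel $\sum_\alpha \frac{g_\alpha ^2}{m_\alpha \omega _\alpha
^2}\cos (\omega _\alpha t)$ implied by the sum above; all bath
parameters below are arbitrary.
\begin{verbatim}
# Sample thermal initial conditions for a small harmonic bath and
# compare <R(t)R(0)> with kT xi(t); all bath parameters are illustrative.
import numpy as np

rng = np.random.default_rng(1)
kT, n_bath, n_samples = 1.0, 20, 100000
omega = rng.uniform(0.5, 3.0, n_bath)
mass = rng.uniform(0.5, 2.0, n_bath)
g = rng.uniform(-1.0, 1.0, n_bath)

t = np.linspace(0.0, 5.0, 50)
q0 = rng.standard_normal((n_samples, n_bath)) * np.sqrt(kT / (mass * omega ** 2))
v0 = rng.standard_normal((n_samples, n_bath)) * np.sqrt(kT / mass)

cos_t, sin_t = np.cos(np.outer(t, omega)), np.sin(np.outer(t, omega))
R_t = (g * q0) @ cos_t.T + (g * v0 / omega) @ sin_t.T    # R(t) = sum_a g_a q_a(t)
corr = (R_t * R_t[:, [0]]).mean(axis=0)                  # <R(t)R(0)>
xi = cos_t @ (g ** 2 / (mass * omega ** 2))
print(np.max(np.abs(corr - kT * xi)))                    # small, up to sampling noise
\end{verbatim}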