
Comparing trunk/tengDissertation/Introduction.tex (file contents):
Revision 2793 by tim, Mon Jun 5 21:24:52 2006 UTC vs.
Revision 2945 by tim, Tue Jul 18 14:19:49 2006 UTC

# Line 3 | Line 3 | Closely related to Classical Mechanics, Molecular Dyna
3   \section{\label{introSection:classicalMechanics}Classical
4   Mechanics}
5  
6 < Closely related to Classical Mechanics, Molecular Dynamics
7 < simulations are carried out by integrating the equations of motion
8 < for a given system of particles. There are three fundamental ideas
9 < behind classical mechanics. Firstly, One can determine the state of
10 < a mechanical system at any time of interest; Secondly, all the
11 < mechanical properties of the system at that time can be determined
12 < by combining the knowledge of the properties of the system with the
13 < specification of this state; Finally, the specification of the state
14 < when further combine with the laws of mechanics will also be
15 < sufficient to predict the future behavior of the system.
6 > Using equations of motion derived from Classical Mechanics,
7 > Molecular Dynamics simulations are carried out by numerically
8 > integrating these equations for a given system of particles. There are three
9 > fundamental ideas behind classical mechanics. Firstly, one can
10 > determine the state of a mechanical system at any time of interest;
11 > secondly, all the mechanical properties of the system at that time
12 > can be determined by combining the knowledge of the properties of
13 > the system with the specification of this state; finally, the
14 > specification of the state when further combined with the laws of
15 > mechanics will also be sufficient to predict the future behavior of
16 > the system.
17  
18   \subsection{\label{introSection:newtonian}Newtonian Mechanics}
19   The discovery of Newton's three laws of mechanics which govern the
20   motion of particles is the foundation of classical mechanics.
21 < Newton¡¯s first law defines a class of inertial frames. Inertial
21 > Newton's first law defines a class of inertial frames. Inertial
22   frames are reference frames where a particle not interacting with
23   other bodies will move with constant speed in the same direction.
24 < With respect to inertial frames Newton¡¯s second law has the form
24 > With respect to inertial frames, Newton's second law has the form
25   \begin{equation}
26 < F = \frac {dp}{dt} = \frac {mv}{dt}
26 > F = \frac {dp}{dt} = m\frac {dv}{dt}
27   \label{introEquation:newtonSecondLaw}
28   \end{equation}
29   A point mass interacting with other bodies moves with an
30   acceleration along the direction of the force acting on it. Let
31   $F_{ij}$ be the force that particle $i$ exerts on particle $j$, and
32   $F_{ji}$ be the force that particle $j$ exerts on particle $i$.
33 < Newton¡¯s third law states that
33 > Newton's third law states that
34   \begin{equation}
35 < F_{ij} = -F_{ji}
35 > F_{ij} = -F_{ji}.
36   \label{introEquation:newtonThirdLaw}
37   \end{equation}
37
38   Conservation laws of Newtonian Mechanics play very important roles
39   in solving mechanics problems. The linear momentum of a particle is
40   conserved if it is free or it experiences no force. The second
# Line 46 | Line 46 | N \equiv r \times F \label{introEquation:torqueDefinit
46   \end{equation}
47   The torque $\tau$ with respect to the same origin is defined to be
48   \begin{equation}
49 < N \equiv r \times F \label{introEquation:torqueDefinition}
49 > \tau \equiv r \times F \label{introEquation:torqueDefinition}
50   \end{equation}
51   Differentiating Eq.~\ref{introEquation:angularMomentumDefinition},
52   \[
# Line 59 | Line 59 | thus,
59   \]
60   thus,
61   \begin{equation}
62 < \dot L = r \times \dot p = N
62 > \dot L = r \times \dot p = \tau
63   \end{equation}
64   If there are no external torques acting on a body, its angular
65   momentum is conserved. The last conservation theorem states
66 < that if all forces are conservative, Energy
67 < \begin{equation}E = T + V \label{introEquation:energyConservation}
66 > that if all forces are conservative, energy is conserved,
67 > \begin{equation}E = T + V. \label{introEquation:energyConservation}
68   \end{equation}
69 < is conserved. All of these conserved quantities are
70 < important factors to determine the quality of numerical integration
71 < scheme for rigid body \cite{Dullweber1997}.
69 > All of these conserved quantities are important factors in
70 > determining the quality of numerical integration schemes for rigid
71 > bodies.\cite{Dullweber1997}
72  
73   \subsection{\label{introSection:lagrangian}Lagrangian Mechanics}
74  
75 < Newtonian Mechanics suffers from two important limitations: it
76 < describes their motion in special cartesian coordinate systems.
77 < Another limitation of Newtonian mechanics becomes obvious when we
78 < try to describe systems with large numbers of particles. It becomes
79 < very difficult to predict the properties of the system by carrying
80 < out calculations involving the each individual interaction between
81 < all the particles, even if we know all of the details of the
82 < interaction. In order to overcome some of the practical difficulties
83 < which arise in attempts to apply Newton's equation to complex
84 < system, alternative procedures may be developed.
75 > Newtonian Mechanics suffers from an important limitation: motion can
76 > only be described in cartesian coordinate systems, which makes it
77 > impossible to predict analytically the properties of the system even
78 > if we know all of the details of the interaction. In order to
79 > overcome some of the practical difficulties which arise in attempts
80 > to apply Newton's equation to complex systems, approximate numerical
81 > procedures may be developed.
82  
83 < \subsubsection{\label{introSection:halmiltonPrinciple}Hamilton's
84 < Principle}
83 > \subsubsection{\label{introSection:halmiltonPrinciple}\textbf{Hamilton's
84 > Principle}}
85  
86   Hamilton introduced the dynamical principle upon which it is
87 < possible to base all of mechanics and, indeed, most of classical
88 < physics. Hamilton's Principle may be stated as follow,
89 <
90 < The actual trajectory, along which a dynamical system may move from
91 < one point to another within a specified time, is derived by finding
92 < the path which minimizes the time integral of the difference between
96 < the kinetic, $K$, and potential energies, $U$ \cite{Tolman1979}.
87 > possible to base all of mechanics and most of classical physics.
88 > Hamilton's Principle may be stated as follows: the trajectory, along
89 > which a dynamical system may move from one point to another within a
90 > specified time, is derived by finding the path which minimizes the
91 > time integral of the difference between the kinetic energy, $K$, and
92 > the potential energy, $U$,
93   \begin{equation}
94 < \delta \int_{t_1 }^{t_2 } {(K - U)dt = 0} ,
94 > \delta \int_{t_1 }^{t_2 } {(K - U)dt = 0}.
95   \label{introEquation:halmitonianPrinciple1}
96   \end{equation}
101
97   For simple mechanical systems, where the forces acting on the
98 < different part are derivable from a potential and the velocities are
99 < small compared with that of light, the Lagrangian function $L$ can
100 < be define as the difference between the kinetic energy of the system
106 < and its potential energy,
98 > different parts are derivable from a potential, the Lagrangian
99 > function $L$ can be defined as the difference between the kinetic
100 > energy of the system and its potential energy,
101   \begin{equation}
102 < L \equiv K - U = L(q_i ,\dot q_i ) ,
102 > L \equiv K - U = L(q_i ,\dot q_i ).
103   \label{introEquation:lagrangianDef}
104   \end{equation}
105 < then Eq.~\ref{introEquation:halmitonianPrinciple1} becomes
105 > Thus, Eq.~\ref{introEquation:halmitonianPrinciple1} becomes
106   \begin{equation}
107 < \delta \int_{t_1 }^{t_2 } {L dt = 0} ,
107 > \delta \int_{t_1 }^{t_2 } {L dt = 0} .
108   \label{introEquation:halmitonianPrinciple2}
109   \end{equation}
110  
111 < \subsubsection{\label{introSection:equationOfMotionLagrangian}The
112 < Equations of Motion in Lagrangian Mechanics}
111 > \subsubsection{\label{introSection:equationOfMotionLagrangian}\textbf{The
112 > Equations of Motion in Lagrangian Mechanics}}
113  
114 < For a holonomic system of $f$ degrees of freedom, the equations of
115 < motion in the Lagrangian form is
114 > For a system of $f$ degrees of freedom, the equations of motion in
115 > the Lagrangian form are
116   \begin{equation}
117   \frac{d}{{dt}}\frac{{\partial L}}{{\partial \dot q_i }} -
118   \frac{{\partial L}}{{\partial q_i }} = 0,{\rm{ }}i = 1, \ldots,f
# Line 132 | Line 126 | independent of generalized velocities, the generalized
126   Arising from Lagrangian Mechanics, Hamiltonian Mechanics was
127   introduced by William Rowan Hamilton in 1833 as a re-formulation of
128   classical mechanics. If the potential energy of a system is
129 < independent of generalized velocities, the generalized momenta can
136 < be defined as
129 > independent of velocities, the momenta can be defined as
130   \begin{equation}
131   p_i = \frac{\partial L}{\partial \dot q_i}
132   \label{introEquation:generalizedMomenta}
# Line 143 | Line 136 | p_i  = \frac{{\partial L}}{{\partial q_i }}
136   \dot p_i  = \frac{{\partial L}}{{\partial q_i }}
137   \label{introEquation:generalizedMomentaDot}
138   \end{equation}
146
139   With the help of the generalized momenta, we may now define a new
140   quantity $H$ by the equation
141   \begin{equation}
# Line 151 | Line 143 | $L$ is the Lagrangian function for the system.
143   \label{introEquation:hamiltonianDefByLagrangian}
144   \end{equation}
145   where $ \dot q_1  \ldots \dot q_f $ are generalized velocities and
146 < $L$ is the Lagrangian function for the system.
147 <
156 < Differentiating Eq.~\ref{introEquation:hamiltonianDefByLagrangian},
157 < one can obtain
146 > $L$ is the Lagrangian function for the system. Differentiating
147 > Eq.~\ref{introEquation:hamiltonianDefByLagrangian}, one can obtain
148   \begin{equation}
149   dH = \sum\limits_k {\left( {p_k d\dot q_k  + \dot q_k dp_k  -
150   \frac{{\partial L}}{{\partial q_k }}dq_k  - \frac{{\partial
151   L}}{{\partial \dot q_k }}d\dot q_k } \right)}  - \frac{{\partial
152 < L}}{{\partial t}}dt \label{introEquation:diffHamiltonian1}
152 > L}}{{\partial t}}dt . \label{introEquation:diffHamiltonian1}
153   \end{equation}
154 < Making use of  Eq.~\ref{introEquation:generalizedMomenta}, the
155 < second and fourth terms in the parentheses cancel. Therefore,
154 > Making use of Eq.~\ref{introEquation:generalizedMomenta}, the second
155 > and fourth terms in the parentheses cancel. Therefore,
156   Eq.~\ref{introEquation:diffHamiltonian1} can be rewritten as
157   \begin{equation}
158   dH = \sum\limits_k {\left( {\dot q_k dp_k  - \dot p_k dq_k }
159 < \right)}  - \frac{{\partial L}}{{\partial t}}dt
159 > \right)}  - \frac{{\partial L}}{{\partial t}}dt .
160   \label{introEquation:diffHamiltonian2}
161   \end{equation}
162   By identifying the coefficients of $dq_k$, $dp_k$ and $dt$, we can
163   find
164   \begin{equation}
165 < \frac{{\partial H}}{{\partial p_k }} = q_k
165 > \frac{{\partial H}}{{\partial p_k }} = \dot {q_k}
166   \label{introEquation:motionHamiltonianCoordinate}
167   \end{equation}
168   \begin{equation}
169 < \frac{{\partial H}}{{\partial q_k }} =  - p_k
169 > \frac{{\partial H}}{{\partial q_k }} =  - \dot {p_k}
170   \label{introEquation:motionHamiltonianMomentum}
171   \end{equation}
172   and
# Line 185 | Line 175 | t}}
175   t}}
176   \label{introEquation:motionHamiltonianTime}
177   \end{equation}
178 <
189 < Eq.~\ref{introEquation:motionHamiltonianCoordinate} and
178 > where Eq.~\ref{introEquation:motionHamiltonianCoordinate} and
179   Eq.~\ref{introEquation:motionHamiltonianMomentum} are Hamilton's
180   equations of motion. Due to their symmetric form, they are also
181 < known as the canonical equations of motions \cite{Goldstein2001}.
181 > known as the canonical equations of motion.\cite{Goldstein2001}
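As a simple illustration, consider a one-dimensional harmonic
oscillator with mass $m$ and force constant $k$, for which $H(q,p) =
p^2 /2m + kq^2 /2$. Eq.~\ref{introEquation:motionHamiltonianCoordinate}
and Eq.~\ref{introEquation:motionHamiltonianMomentum} then give
\[
\dot q = \frac{\partial H}{\partial p} = \frac{p}{m},\qquad \dot p  =
 - \frac{\partial H}{\partial q} =  - kq,
\]
which recovers Newton's equation of motion $m\ddot q =  - kq$.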
182  
183   An important difference between the Lagrangian approach and the
184   Hamiltonian approach is that the Lagrangian is considered to be a
185 < function of the generalized velocities $\dot q_i$ and the
186 < generalized coordinates $q_i$, while the Hamiltonian is considered
187 < to be a function of the generalized momenta $p_i$ and the conjugate
188 < generalized coordinate $q_i$. Hamiltonian Mechanics is more
189 < appropriate for application to statistical mechanics and quantum
190 < mechanics, since it treats the coordinate and its time derivative as
191 < independent variables and it only works with 1st-order differential
203 < equations\cite{Marion1990}.
204 <
185 > function of the generalized velocities $\dot q_i$ and coordinates
186 > $q_i$, while the Hamiltonian is considered to be a function of the
187 > generalized momenta $p_i$ and the conjugate coordinates $q_i$.
188 > Hamiltonian Mechanics is more appropriate for application to
189 > statistical mechanics and quantum mechanics, since it treats the
190 > coordinate and its time derivative as independent variables and it
191 > only works with first-order differential equations.\cite{Marion1990}
192   In Newtonian Mechanics, a system described by conservative forces
193 < conserves the total energy \ref{introEquation:energyConservation}.
194 < It follows that Hamilton's equations of motion conserve the total
195 < Hamiltonian.
193 > conserves the total energy
194 > (Eq.~\ref{introEquation:energyConservation}). It follows that
195 > Hamilton's equations of motion conserve the total Hamiltonian
196   \begin{equation}
197   \frac{{dH}}{{dt}} = \sum\limits_i {\left( {\frac{{\partial
198   H}}{{\partial q_i }}\dot q_i  + \frac{{\partial H}}{{\partial p_i
199   }}\dot p_i } \right)}  = \sum\limits_i {\left( {\frac{{\partial
200   H}}{{\partial q_i }}\frac{{\partial H}}{{\partial p_i }} -
201   \frac{{\partial H}}{{\partial p_i }}\frac{{\partial H}}{{\partial
202 < q_i }}} \right) = 0} \label{introEquation:conserveHalmitonian}
202 > q_i }}} \right) = 0}. \label{introEquation:conserveHalmitonian}
203   \end{equation}
204  
205   \section{\label{introSection:statisticalMechanics}Statistical
# Line 221 | Line 208 | Statistical Mechanics concepts and theorem presented i
208   The thermodynamic behavior and properties of a Molecular Dynamics
209   simulation are governed by the principles of Statistical Mechanics.
210   The following section will give a brief introduction to some of the
211 < Statistical Mechanics concepts and theorem presented in this
211 > Statistical Mechanics concepts and theorems presented in this
212   dissertation.
213  
214   \subsection{\label{introSection:ensemble}Phase Space and Ensemble}
215  
216   Mathematically, phase space is the space which represents all
217 < possible states. Each possible state of the system corresponds to
218 < one unique point in the phase space. For mechanical systems, the
219 < phase space usually consists of all possible values of position and
220 < momentum variables. Consider a dynamic system in a cartesian space,
221 < where each of the $6f$ coordinates and momenta is assigned to one of
222 < $6f$ mutually orthogonal axes, the phase space of this system is a
223 < $6f$ dimensional space. A point, $x = (q_1 , \ldots ,q_f ,p_1 ,
224 < \ldots ,p_f )$, with a unique set of values of $6f$ coordinates and
217 > possible states of a system. Each possible state of the system
218 > corresponds to one unique point in the phase space. For mechanical
219 > systems, the phase space usually consists of all possible values of
220 > position and momentum variables. Consider a dynamic system of $f$
221 > particles in a cartesian space, where each of the $6f$ coordinates
222 > and momenta is assigned to one of $6f$ mutually orthogonal axes, the
223 > phase space of this system is a $6f$ dimensional space. A point, $x
224 > = (\vec q_1 , \ldots ,\vec q_f ,\vec p_1 , \ldots ,\vec p_f )$, with
225 > a unique set of values of $6f$ coordinates and
233   momenta is a phase space vector.
235  
236 < A microscopic state or microstate of a classical system is
241 < specification of the complete phase space vector of a system at any
242 < instant in time. An ensemble is defined as a collection of systems
243 < sharing one or more macroscopic characteristics but each being in a
244 < unique microstate. The complete ensemble is specified by giving all
245 < systems or microstates consistent with the common macroscopic
246 < characteristics of the ensemble. Although the state of each
247 < individual system in the ensemble could be precisely described at
248 < any instance in time by a suitable phase space vector, when using
249 < ensembles for statistical purposes, there is no need to maintain
250 < distinctions between individual systems, since the numbers of
251 < systems at any time in the different states which correspond to
252 < different regions of the phase space are more interesting. Moreover,
253 < in the point of view of statistical mechanics, one would prefer to
254 < use ensembles containing a large enough population of separate
255 < members so that the numbers of systems in such different states can
256 < be regarded as changing continuously as we traverse different
257 < regions of the phase space. The condition of an ensemble at any time
236 > In statistical mechanics, the condition of an ensemble at any time
237   can be regarded as appropriately specified by the density $\rho$
238   with which representative points are distributed over the phase
239 < space. The density of distribution for an ensemble with $f$ degrees
240 < of freedom is defined as,
239 > space. The density distribution for an ensemble with $f$ degrees of
240 > freedom is defined as,
241   \begin{equation}
242   \rho  = \rho (q_1 , \ldots ,q_f ,p_1 , \ldots ,p_f ,t).
243   \label{introEquation:densityDistribution}
244   \end{equation}
245   Governed by the principles of mechanics, the phase points change
246 < their value which would change the density at any time at phase
247 < space. Hence, the density of distribution is also to be taken as a
248 < function of the time.
249 <
271 < The number of systems $\delta N$ at time $t$ can be determined by,
246 > their locations, which changes the density at any point in phase
247 > space. Hence, the density distribution must also be treated as a
248 > function of time. The number of systems $\delta N$ at time $t$
249 > can be determined by,
250   \begin{equation}
251   \delta N = \rho (q,p,t)dq_1  \ldots dq_f dp_1  \ldots dp_f.
252   \label{introEquation:deltaN}
253   \end{equation}
254 < Assuming a large enough population of systems are exploited, we can
255 < sufficiently approximate $\delta N$ without introducing
256 < discontinuity when we go from one region in the phase space to
257 < another. By integrating over the whole phase space,
254 > Assuming enough copies of the system, we can sufficiently
255 > approximate $\delta N$ without introducing discontinuity when we go
256 > from one region in the phase space to another. By integrating over
257 > the whole phase space,
258   \begin{equation}
259   N = \int { \ldots \int {\rho (q,p,t)dq_1 } ...dq_f dp_1 } ...dp_f
260   \label{introEquation:totalNumberSystem}
261   \end{equation}
262 < gives us an expression for the total number of the systems. Hence,
263 < the probability per unit in the phase space can be obtained by,
262 > gives us an expression for the total number of copies. Hence, the
263 > probability per unit volume in the phase space can be obtained by,
264   \begin{equation}
265   \frac{{\rho (q,p,t)}}{N} = \frac{{\rho (q,p,t)}}{{\int { \ldots \int
266   {\rho (q,p,t)dq_1 } ...dq_f dp_1 } ...dp_f }}.
267   \label{introEquation:unitProbability}
268   \end{equation}
269 < With the help of Equation(\ref{introEquation:unitProbability}) and
270 < the knowledge of the system, it is possible to calculate the average
269 > With the help of Eq.~\ref{introEquation:unitProbability} and the
270 > knowledge of the system, it is possible to calculate the average
271   value of any desired quantity which depends on the coordinates and
272 < momenta of the system. Even when the dynamics of the real system is
272 > momenta of the system. Even when the dynamics of the real system are
273   complex, or stochastic, or even discontinuous, the average
274 < properties of the ensemble of possibilities as a whole may still
275 < remain well defined. For a classical system in thermal equilibrium
276 < with its environment, the ensemble average of a mechanical quantity,
277 < $\langle A(q , p) \rangle_t$, takes the form of an integral over the
278 < phase space of the system,
274 > properties of the ensemble of possibilities as a whole remain well
275 > defined. For a classical system in thermal equilibrium with its
276 > environment, the ensemble average of a mechanical quantity, $\langle
277 > A(q , p) \rangle_t$, takes the form of an integral over the phase
278 > space of the system,
279   \begin{equation}
280   \langle  A(q , p) \rangle_t = \frac{{\int { \ldots \int {A(q,p)\rho
281   (q,p,t)dq_1 } ...dq_f dp_1 } ...dp_f }}{{\int { \ldots \int {\rho
282 < (q,p,t)dq_1 } ...dq_f dp_1 } ...dp_f }}
282 > (q,p,t)dq_1 } ...dq_f dp_1 } ...dp_f }}.
283   \label{introEquation:ensembelAverage}
284   \end{equation}
285  
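As a minimal numerical sketch of
Eq.~\ref{introEquation:ensembelAverage} (assuming, purely for
illustration, a one-dimensional harmonic oscillator and an equilibrium
Boltzmann weight $\rho \propto e^{ - H/k_B T}$, neither of which is
specified by the equation itself), the phase space integrals can be
evaluated by direct quadrature:
\begin{verbatim}
# Sketch: evaluate the ensemble average <q^2> by quadrature for
# H(q,p) = p^2/(2m) + k q^2/2 with an assumed weight exp(-H/kT).
import numpy as np

m, k, kT = 1.0, 2.0, 0.5                    # illustrative parameters
q = np.linspace(-8.0, 8.0, 801)
p = np.linspace(-8.0, 8.0, 801)
Q, P = np.meshgrid(q, p, indexing="ij")

H = P**2 / (2.0 * m) + 0.5 * k * Q**2
rho = np.exp(-H / kT)                       # unnormalized density

A = Q**2                                    # the quantity A(q, p) = q^2
num = np.trapz(np.trapz(A * rho, p, axis=1), q)
den = np.trapz(np.trapz(rho, p, axis=1), q)

print(num / den, kT / k)                    # quadrature vs. analytic kT/k
\end{verbatim}
The two integrals play the roles of the numerator and denominator in
Eq.~\ref{introEquation:ensembelAverage} for a stationary distribution.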
308 There are several different types of ensembles with different
309 statistical characteristics. As a function of macroscopic
310 parameters, such as temperature \textit{etc}, partition function can
311 be used to describe the statistical properties of a system in
312 thermodynamic equilibrium.
313
314 As an ensemble of systems, each of which is known to be thermally
315 isolated and conserve energy, Microcanonical ensemble(NVE) has a
316 partition function like,
317 \begin{equation}
318 \Omega (N,V,E) = e^{\beta TS} \label{introEquation:NVEPartition}.
319 \end{equation}
320 A canonical ensemble(NVT)is an ensemble of systems, each of which
321 can share its energy with a large heat reservoir. The distribution
322 of the total energy amongst the possible dynamical states is given
323 by the partition function,
324 \begin{equation}
325 \Omega (N,V,T) = e^{ - \beta A}
326 \label{introEquation:NVTPartition}
327 \end{equation}
328 Here, $A$ is the Helmholtz free energy which is defined as $ A = U -
329 TS$. Since most experiment are carried out under constant pressure
330 condition, isothermal-isobaric ensemble(NPT) play a very important
331 role in molecular simulation. The isothermal-isobaric ensemble allow
332 the system to exchange energy with a heat bath of temperature $T$
333 and to change the volume as well. Its partition function is given as
334 \begin{equation}
335 \Delta (N,P,T) =  - e^{\beta G}.
336 \label{introEquation:NPTPartition}
337 \end{equation}
338 Here, $G = U - TS + PV$, and $G$ is called Gibbs free energy.
339
286   \subsection{\label{introSection:liouville}Liouville's theorem}
287  
288 < The Liouville's theorem is the foundation on which statistical
289 < mechanics rests. It describes the time evolution of phase space
288 > Liouville's theorem is the foundation on which statistical mechanics
289 > rests. It describes the time evolution of the phase space
290   distribution function. In order to calculate the rate of change of
291 < $\rho$, we begin from Equation(\ref{introEquation:deltaN}). If we
292 < consider the two faces perpendicular to the $q_1$ axis, which are
293 < located at $q_1$ and $q_1 + \delta q_1$, the number of phase points
294 < leaving the opposite face is given by the expression,
291 > $\rho$, we begin from Eq.~\ref{introEquation:deltaN}. If we consider
292 > the two faces perpendicular to the $q_1$ axis, which are located at
293 > $q_1$ and $q_1 + \delta q_1$, the number of phase points leaving the
294 > opposite face is given by the expression,
295   \begin{equation}
296   \left( {\rho  + \frac{{\partial \rho }}{{\partial q_1 }}\delta q_1 }
297   \right)\left( {\dot q_1  + \frac{{\partial \dot q_1 }}{{\partial q_1
# Line 369 | Line 315 | divining $ \delta q_1  \ldots \delta q_f \delta p_1  \
315   + \frac{{\partial \dot p_i }}{{\partial p_i }}} \right)}  = 0 ,
316   \end{equation}
317   which cancels the first terms of the right hand side. Furthermore,
318 < divining $ \delta q_1  \ldots \delta q_f \delta p_1  \ldots \delta
318 > dividing both sides by $ \delta q_1  \ldots \delta q_f \delta p_1
319   \ldots \delta p_f $, we can write out Liouville's theorem in a
320   simple form,
321   \begin{equation}
# Line 378 | Line 324 | simple form,
324   \frac{{\partial \rho }}{{\partial p_i }}\dot p_i } \right)}  = 0 .
325   \label{introEquation:liouvilleTheorem}
326   \end{equation}
381
327   Liouville's theorem states that the distribution function is
328   constant along any trajectory in phase space. In classical
329 < statistical mechanics, since the number of particles in the system
330 < is huge, we may be able to believe the system is stationary,
329 > statistical mechanics, since the number of system copies in an
330 > ensemble is huge and constant, we can assume the local density has
331 > no reason (other than classical mechanics) to change,
332   \begin{equation}
333   \frac{{\partial \rho }}{{\partial t}} = 0.
334   \label{introEquation:stationary}
# Line 395 | Line 341 | distribution,
341   \label{introEquation:densityAndHamiltonian}
342   \end{equation}
343  
344 < \subsubsection{\label{introSection:phaseSpaceConservation}Conservation of Phase Space}
344 > \subsubsection{\label{introSection:phaseSpaceConservation}\textbf{Conservation of Phase Space}}
345   Let us consider a region in the phase space,
346   \begin{equation}
347   \delta v = \int { \ldots \int {dq_1 } ...dq_f dp_1 } ..dp_f .
348   \end{equation}
349   If this region is small enough, the density $\rho$ can be regarded
350 < as uniform over the whole phase space. Thus, the number of phase
351 < points inside this region is given by,
350 > as uniform over this small region. Thus, the number of phase points
351 > inside this region is given by,
352   \begin{equation}
353   \delta N = \rho \delta v = \rho \int { \ldots \int {dq_1 } ...dq_f
354   dp_1 } ..dp_f.
# Line 412 | Line 358 | With the help of stationary assumption
358   \frac{{d(\delta N)}}{{dt}} = \frac{{d\rho }}{{dt}}\delta v + \rho
359   \frac{d}{{dt}}(\delta v) = 0.
360   \end{equation}
361 < With the help of stationary assumption
362 < (\ref{introEquation:stationary}), we obtain the principle of the
363 < \emph{conservation of extension in phase space},
361 > With the help of the stationary assumption
362 > (Eq.~\ref{introEquation:stationary}), we obtain the principle of
363 > \emph{conservation of volume in phase space},
364   \begin{equation}
365   \frac{d}{{dt}}(\delta v) = \frac{d}{{dt}}\int { \ldots \int {dq_1 }
366   ...dq_f dp_1 } ..dp_f  = 0.
367   \label{introEquation:volumePreserving}
368   \end{equation}
369  
370 < \subsubsection{\label{introSection:liouvilleInOtherForms}Liouville's Theorem in Other Forms}
370 > \subsubsection{\label{introSection:liouvilleInOtherForms}\textbf{Liouville's Theorem in Other Forms}}
371  
372 < Liouville's theorem can be expresses in a variety of different forms
372 > Liouville's theorem can be expressed in a variety of different forms
373   which are convenient within different contexts. For any two functions
374   $F$ and $G$ of the coordinates and momenta of a system, the Poisson
375 < bracket ${F, G}$ is defined as
375 > bracket $\{F,G\}$ is defined as
376   \begin{equation}
377   \left\{ {F,G} \right\} = \sum\limits_i {\left( {\frac{{\partial
378   F}}{{\partial q_i }}\frac{{\partial G}}{{\partial p_i }} -
# Line 434 | Line 380 | Substituting equations of motion in Hamiltonian formal
380   q_i }}} \right)}.
381   \label{introEquation:poissonBracket}
382   \end{equation}
383 < Substituting equations of motion in Hamiltonian formalism(
384 < \ref{introEquation:motionHamiltonianCoordinate} ,
385 < \ref{introEquation:motionHamiltonianMomentum} ) into
386 < (\ref{introEquation:liouvilleTheorem}), we can rewrite Liouville's
387 < theorem using Poisson bracket notion,
383 > Substituting the equations of motion in the Hamiltonian formalism
384 > (Eq.~\ref{introEquation:motionHamiltonianCoordinate},
385 > Eq.~\ref{introEquation:motionHamiltonianMomentum}) into
386 > Eq.~\ref{introEquation:liouvilleTheorem}, we can rewrite
387 > Liouville's theorem using Poisson bracket notation,
388   \begin{equation}
389   \left( {\frac{{\partial \rho }}{{\partial t}}} \right) =  - \left\{
390   {\rho ,H} \right\}.
# Line 457 | Line 403 | expressed as
403   \left( {\frac{{\partial \rho }}{{\partial t}}} \right) =  - iL\rho
404   \label{introEquation:liouvilleTheoremInOperator}
405   \end{equation}
406 <
406 > which can help define a propagator $\rho (t) = e^{-iLt} \rho (0)$.
407   \subsection{\label{introSection:ergodic}The Ergodic Hypothesis}
408  
409   Various thermodynamic properties can be calculated from Molecular
410   Dynamics simulation. By comparing experimental values with the
411   calculated properties, one can determine the accuracy of the
412 < simulation and the quality of the underlying model. However, both of
413 < experiment and computer simulation are usually performed during a
412 > simulation and the quality of the underlying model. However, both
413 > experiments and computer simulations are usually performed during a
414   certain time interval and the measurements are averaged over a
415 < period of them which is different from the average behavior of
416 < many-body system in Statistical Mechanics. Fortunately, Ergodic
417 < Hypothesis is proposed to make a connection between time average and
418 < ensemble average. It states that time average and average over the
419 < statistical ensemble are identical \cite{Frenkel1996, Leach2001}.
415 > period of time which is different from the average behavior of
416 > a many-body system in Statistical Mechanics. Fortunately, the Ergodic
417 > Hypothesis makes a connection between time average and the ensemble
418 > average. It states that the time average and average over the
419 > statistical ensemble are identical:\cite{Frenkel1996, Leach2001}
420   \begin{equation}
421   \langle A(q , p) \rangle_t = \mathop {\lim }\limits_{t \to \infty }
422   \frac{1}{t}\int\limits_0^t {A(q(t),p(t))dt = \int\limits_\Gamma
# Line 479 | Line 425 | sufficiently long time (longer than relaxation time),
425   where $\langle  A(q , p) \rangle_t$ is an equilibrium value of a
426   physical quantity and $\rho (p(t), q(t))$ is the equilibrium
427   distribution function. If an observation is averaged over a
428 < sufficiently long time (longer than relaxation time), all accessible
429 < microstates in phase space are assumed to be equally probed, giving
430 < a properly weighted statistical average. This allows the researcher
431 < freedom of choice when deciding how best to measure a given
432 < observable. In case an ensemble averaged approach sounds most
433 < reasonable, the Monte Carlo techniques\cite{Metropolis1949} can be
428 > sufficiently long time (longer than the relaxation time), all
429 > accessible microstates in phase space are assumed to be equally
430 > probed, giving a properly weighted statistical average. This allows
431 > the researcher freedom of choice when deciding how best to measure a
432 > given observable. In case an ensemble averaged approach sounds most
433 > reasonable, the Monte Carlo methods\cite{Metropolis1949} can be
434   utilized. Or if the system lends itself to a time averaging
435   approach, the Molecular Dynamics techniques in
436   Sec.~\ref{introSection:molecularDynamics} will be the best
437 < choice\cite{Frenkel1996}.
437 > choice.\cite{Frenkel1996}
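As a sketch of the ensemble-averaged route mentioned above (using a
one-dimensional harmonic oscillator and parameter values chosen only
for illustration), a Metropolis-style random walk samples
configurations with Boltzmann weights and accumulates the average of
an observable:
\begin{verbatim}
# Sketch: Metropolis Monte Carlo estimate of <V> for V(q) = k q^2/2.
# All parameters are illustrative; the analytic answer is kT/2.
import numpy as np

rng = np.random.default_rng(0)
k, kT, step = 2.0, 0.5, 1.0

def potential(q):
    return 0.5 * k * q * q

q, samples = 0.0, []
for _ in range(200000):
    q_trial = q + step * rng.uniform(-1.0, 1.0)          # trial move
    dV = potential(q_trial) - potential(q)
    if dV <= 0.0 or rng.random() < np.exp(-dV / kT):     # Metropolis test
        q = q_trial
    samples.append(potential(q))

print(np.mean(samples), 0.5 * kT)
\end{verbatim}
A time-averaging calculation of the same quantity from a Molecular
Dynamics trajectory should, by the Ergodic Hypothesis, converge to the
same value.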
438  
439   \section{\label{introSection:geometricIntegratos}Geometric Integrators}
440 < A variety of numerical integrators were proposed to simulate the
441 < motions. They usually begin with an initial conditionals and move
442 < the objects in the direction governed by the differential equations.
443 < However, most of them ignore the hidden physical law contained
444 < within the equations. Since 1990, geometric integrators, which
445 < preserve various phase-flow invariants such as symplectic structure,
446 < volume and time reversal symmetry, are developed to address this
447 < issue\cite{Dullweber1997, McLachlan1998, Leimkuhler1999}. The
448 < velocity verlet method, which happens to be a simple example of
449 < symplectic integrator, continues to gain its popularity in molecular
450 < dynamics community. This fact can be partly explained by its
451 < geometric nature.
440 > A variety of numerical integrators have been proposed to simulate
441 > the motions of atoms in MD simulation. They usually begin with
442 > initial conditions and move the objects in the direction governed by
443 > the differential equations. However, most of them ignore the hidden
444 > physical laws contained within the equations. Since 1990, geometric
445 > integrators, which preserve various phase-flow invariants such as
446 > symplectic structure, volume and time reversal symmetry, have been
447 > developed to address this issue.\cite{Dullweber1997, McLachlan1998,
448 > Leimkuhler1999} The velocity Verlet method, which happens to be a
449 > simple example of a symplectic integrator, continues to gain
450 > popularity in the molecular dynamics community. This fact can be
451 > partly explained by its geometric nature.
452  
453 < \subsection{\label{introSection:symplecticManifold}Symplectic Manifold}
454 < A \emph{manifold} is an abstract mathematical space. It locally
455 < looks like Euclidean space, but when viewed globally, it may have
456 < more complicate structure. A good example of manifold is the surface
457 < of Earth. It seems to be flat locally, but it is round if viewed as
458 < a whole. A \emph{differentiable manifold} (also known as
459 < \emph{smooth manifold}) is a manifold with an open cover in which
460 < the covering neighborhoods are all smoothly isomorphic to one
461 < another. In other words,it is possible to apply calculus on
462 < \emph{differentiable manifold}. A \emph{symplectic manifold} is
517 < defined as a pair $(M, \omega)$ which consisting of a
518 < \emph{differentiable manifold} $M$ and a close, non-degenerated,
453 > \subsection{\label{introSection:symplecticManifold}Symplectic Manifolds}
454 > A \emph{manifold} is an abstract mathematical space. It looks
455 > locally like Euclidean space, but when viewed globally, it may have
456 > more complicated structure. A good example of a manifold is the
457 > surface of the Earth. It seems to be flat locally, but it is round if
458 > viewed as a whole. A \emph{differentiable manifold} (also known as
459 > \emph{smooth manifold}) is a manifold on which it is possible to
460 > apply calculus.\cite{Hirsch1997} A \emph{symplectic manifold} is
461 > defined as a pair $(M, \omega)$ which consists of a
462 > \emph{differentiable manifold} $M$ and a closed, non-degenerate,
463   bilinear symplectic form, $\omega$. A symplectic form on a vector
464   space $V$ is a function $\omega(x, y)$ which satisfies
465   $\omega(\lambda_1x_1+\lambda_2x_2, y) = \lambda_1\omega(x_1, y)+
466   \lambda_2\omega(x_2, y)$, $\omega(x, y) = - \omega(y, x)$ and
467 < $\omega(x, x) = 0$. Cross product operation in vector field is an
468 < example of symplectic form.
467 > $\omega(x, x) = 0$.\cite{McDuff1998} The cross product operation in a
468 > vector field is an example of a symplectic form. One of the
469 > motivations to study \emph{symplectic manifolds} in Hamiltonian
470 > Mechanics is that a symplectic manifold can represent all possible
471 > configurations of the system and the phase space of the system can
472 > be described by its cotangent bundle.\cite{Jost2002} Every
473 > symplectic manifold is even dimensional. For instance, in Hamilton
474 > equations, coordinate and momentum always appear in pairs.
475  
526 One of the motivations to study \emph{symplectic manifold} in
527 Hamiltonian Mechanics is that a symplectic manifold can represent
528 all possible configurations of the system and the phase space of the
529 system can be described by it's cotangent bundle. Every symplectic
530 manifold is even dimensional. For instance, in Hamilton equations,
531 coordinate and momentum always appear in pairs.
532
533 Let  $(M,\omega)$ and $(N, \eta)$ be symplectic manifolds. A map
534 \[
535 f : M \rightarrow N
536 \]
537 is a \emph{symplectomorphism} if it is a \emph{diffeomorphims} and
538 the \emph{pullback} of $\eta$ under f is equal to $\omega$.
539 Canonical transformation is an example of symplectomorphism in
540 classical mechanics.
541
476   \subsection{\label{introSection:ODE}Ordinary Differential Equations}
477  
478 < For a ordinary differential system defined as
478 > For an ordinary differential system defined as
479   \begin{equation}
480   \dot x = f(x)
481   \end{equation}
482 < where $x = x(q,p)^T$, this system is canonical Hamiltonian, if
482 > where $x = x(q,p)$, this system is a canonical Hamiltonian system if
483 > $f(x) = J\nabla _x H(x)$. Here, $H = H (q, p)$ is the Hamiltonian
484 > function and $J$ is the skew-symmetric matrix
485   \begin{equation}
550 f(r) = J\nabla _x H(r).
551 \end{equation}
552 $H = H (q, p)$ is Hamiltonian function and $J$ is the skew-symmetric
553 matrix
554 \begin{equation}
486   J = \left( {\begin{array}{*{20}c}
487     0 & I  \\
488     { - I} & 0  \\
# Line 561 | Line 492 | system can be rewritten as,
492   where $I$ is the identity matrix. Using this notation, the Hamiltonian
493   system can be rewritten as,
494   \begin{equation}
495 < \frac{d}{{dt}}x = J\nabla _x H(x)
495 > \frac{d}{{dt}}x = J\nabla _x H(x).
496   \label{introEquation:compactHamiltonian}
497   \end{equation}In this case, $f$ is
498 < called a \emph{Hamiltonian vector field}.
499 <
569 < Another generalization of Hamiltonian dynamics is Poisson
570 < Dynamics\cite{Olver1986},
498 > called a \emph{Hamiltonian vector field}. Another generalization of
499 > Hamiltonian dynamics is Poisson Dynamics,\cite{Olver1986}
500   \begin{equation}
501   \dot x = J(x)\nabla _x H \label{introEquation:poissonHamiltonian}
502   \end{equation}
503 < The most obvious change being that matrix $J$ now depends on $x$.
503 > where the most obvious change is that the matrix $J$ now depends on
504 > $x$.
505  
506 < \subsection{\label{introSection:exactFlow}Exact Flow}
506 > \subsection{\label{introSection:exactFlow}Exact Propagator}
507  
508 < Let $x(t)$ be the exact solution of the ODE system,
508 > Let $x(t)$ be the exact solution of the ODE
509 > system,
510   \begin{equation}
511 < \frac{{dx}}{{dt}} = f(x) \label{introEquation:ODE}
512 < \end{equation}
513 < The exact flow(solution) $\varphi_\tau$ is defined by
514 < \[
515 < x(t+\tau) =\varphi_\tau(x(t))
511 > \frac{{dx}}{{dt}} = f(x), \label{introEquation:ODE}
512 > \end{equation} we can
513 > define its exact propagator $\varphi_\tau$:
514 > \[ x(t+\tau)
515 > =\varphi_\tau(x(t))
516   \]
517   where $\tau$ is a fixed time step and $\varphi$ is a map from phase
518 < space to itself. The flow has the continuous group property,
518 > space to itself. The propagator has the continuous group property,
519   \begin{equation}
520   \varphi _{\tau _1 }  \circ \varphi _{\tau _2 }  = \varphi _{\tau _1
521   + \tau _2 } .
# Line 593 | Line 524 | Therefore, the exact flow is self-adjoint,
524   \begin{equation}
525   \varphi _\tau   \circ \varphi _{ - \tau }  = I
526   \end{equation}
527 < Therefore, the exact flow is self-adjoint,
527 > Therefore, the exact propagator is self-adjoint,
528   \begin{equation}
529   \varphi _\tau   = \varphi _{ - \tau }^{ - 1}.
530   \end{equation}
531 < The exact flow can also be written in terms of the of an operator,
531 > The exact propagator can also be written as an operator,
532   \begin{equation}
533   \varphi _\tau  (x) = e^{\tau \sum\limits_i {f_i (x)\frac{\partial
534   }{{\partial x_i }}} } (x) \equiv \exp (\tau f)(x).
535   \label{introEquation:exponentialOperator}
536   \end{equation}
537 <
538 < In most cases, it is not easy to find the exact flow $\varphi_\tau$.
539 < Instead, we use a approximate map, $\psi_\tau$, which is usually
540 < called integrator. The order of an integrator $\psi_\tau$ is $p$, if
541 < the Taylor series of $\psi_\tau$ agree to order $p$,
537 > In most cases, it is not easy to find the exact propagator
538 > $\varphi_\tau$. Instead, we use an approximate map, $\psi_\tau$,
539 > which is usually called an integrator. The order of an integrator
540 > $\psi_\tau$ is $p$, if the Taylor series of $\psi_\tau$ agrees with
541 > that of the exact propagator to order $p$,
542   \begin{equation}
543 < \psi_tau(x) = x + \tau f(x) + O(\tau^{p+1})
543 > \psi_\tau(x) = x + \tau f(x) + O(\tau^{p+1})
544   \end{equation}
545  
546   \subsection{\label{introSection:geometricProperties}Geometric Properties}
547  
548 < The hidden geometric properties\cite{Budd1999, Marsden1998} of ODE
549 < and its flow play important roles in numerical studies. Many of them
550 < can be found in systems which occur naturally in applications.
551 <
552 < Let $\varphi$ be the flow of Hamiltonian vector field, $\varphi$ is
622 < a \emph{symplectic} flow if it satisfies,
548 > The hidden geometric properties\cite{Budd1999, Marsden1998} of an
549 > ODE and its propagator play important roles in numerical studies.
550 > Many of them can be found in systems which occur naturally in
551 > applications. Let $\varphi$ be the propagator of a Hamiltonian vector
552 > field; $\varphi$ is a \emph{symplectic} propagator if it satisfies,
553   \begin{equation}
554   {\varphi '}^T J \varphi ' = J.
555   \end{equation}
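This condition can be checked directly for a simple case. For a
quadratic Hamiltonian the exact propagator is a linear map, so its
Jacobian is the propagation matrix itself; the following sketch (using
an arbitrarily chosen one-dimensional harmonic oscillator) verifies
the symplectic condition numerically:
\begin{verbatim}
# Sketch: check phi'^T J phi' = J for the exact propagator of
# H = p^2/(2m) + k q^2/2, whose flow over a time t is the matrix M.
import numpy as np

m, k, t = 1.2, 3.0, 0.7                      # illustrative parameters
w = np.sqrt(k / m)

J = np.array([[0.0, 1.0],
              [-1.0, 0.0]])                  # canonical structure matrix

M = np.array([[np.cos(w * t),          np.sin(w * t) / (m * w)],
              [-m * w * np.sin(w * t), np.cos(w * t)]])

print(np.allclose(M.T @ J @ M, J))           # True
\end{verbatim}
A non-symplectic scheme, such as a forward Euler step, fails the same
test.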
556   According to Liouville's theorem, the symplectic volume is invariant
557 < under a Hamiltonian flow, which is the basis for classical
558 < statistical mechanics. Furthermore, the flow of a Hamiltonian vector
559 < field on a symplectic manifold can be shown to be a
557 > under a Hamiltonian propagator, which is the basis for classical
558 > statistical mechanics. Furthermore, the propagator of a Hamiltonian
559 > vector field on a symplectic manifold can be shown to be a
560   symplectomorphism. As to the Poisson system,
561   \begin{equation}
562   {\varphi '}^T J \varphi ' = J \circ \varphi
563   \end{equation}
564 < is the property must be preserved by the integrator.
565 <
566 < It is possible to construct a \emph{volume-preserving} flow for a
567 < source free($ \nabla \cdot f = 0 $) ODE, if the flow satisfies $
568 < \det d\varphi  = 1$. One can show easily that a symplectic flow will
569 < be volume-preserving.
570 <
641 < Changing the variables $y = h(x)$ in a ODE\ref{introEquation:ODE}
642 < will result in a new system,
564 > is the property that must be preserved by the integrator. It is
565 > possible to construct a \emph{volume-preserving} propagator for a
566 > source free ODE ($ \nabla \cdot f = 0 $), if the propagator
567 > satisfies $ \det d\varphi  = 1$. One can show easily that a
568 > symplectic propagator will be volume-preserving. Changing the
569 > variables $y = h(x)$ in an ODE (Eq.~\ref{introEquation:ODE}) will
570 > result in a new system,
571   \[
572   \dot y = \tilde f(y) = ((dh \cdot f)h^{ - 1} )(y).
573   \]
574   The vector field $f$ has a reversing symmetry $h$ if $f = - \tilde f$.
575 < In other words, the flow of this vector field is reversible if and
576 < only if $ h \circ \varphi ^{ - 1}  = \varphi  \circ h $.
577 <
578 < A \emph{first integral}, or conserved quantity of a general
579 < differential function is a function $ G:R^{2d}  \to R^d $ which is
652 < constant for all solutions of the ODE $\frac{{dx}}{{dt}} = f(x)$ ,
575 > In other words, the propagator of this vector field is reversible if
576 > and only if $ h \circ \varphi ^{ - 1}  = \varphi  \circ h $. A
577 > conserved quantity of a general differential equation is a function
578 > $ G:R^{2d}  \to R^d $ which is constant for all solutions of the ODE
579 > $\frac{{dx}}{{dt}} = f(x)$ ,
580   \[
581   \frac{{dG(x(t))}}{{dt}} = 0.
582   \]
583 < Using chain rule, one may obtain,
583 > Using the chain rule, one may obtain,
584   \[
585 < \sum\limits_i {\frac{{dG}}{{dx_i }}} f_i (x) = f \bullet \nabla G,
585 > \sum\limits_i {\frac{{dG}}{{dx_i }}} f_i (x) = f \cdot \nabla G,
586   \]
587 < which is the condition for conserving \emph{first integral}. For a
588 < canonical Hamiltonian system, the time evolution of an arbitrary
589 < smooth function $G$ is given by,
663 <
587 > which is the condition for conserved quantities. For a canonical
588 > Hamiltonian system, the time evolution of an arbitrary smooth
589 > function $G$ is given by,
590   \begin{eqnarray}
591 < \frac{{dG(x(t))}}{{dt}} & = & [\nabla _x G(x(t))]^T \dot x(t) \\
592 <                        & = & [\nabla _x G(x(t))]^T J\nabla _x H(x(t)). \\
591 > \frac{{dG(x(t))}}{{dt}} & = & [\nabla _x G(x(t))]^T \dot x(t) \notag\\
592 >                        & = & [\nabla _x G(x(t))]^T J\nabla _x H(x(t)).
593   \label{introEquation:firstIntegral1}
594   \end{eqnarray}
595 <
596 <
671 < Using poisson bracket notion, Equation
672 < \ref{introEquation:firstIntegral1} can be rewritten as
595 > Using Poisson bracket notation, Eq.~\ref{introEquation:firstIntegral1}
596 > can be rewritten as
597   \[
598   \frac{d}{{dt}}G(x(t)) = \left\{ {G,H} \right\}(x(t)).
599   \]
600 < Therefore, the sufficient condition for $G$ to be the \emph{first
601 < integral} of a Hamiltonian system is
602 < \[
603 < \left\{ {G,H} \right\} = 0.
680 < \]
681 < As well known, the Hamiltonian (or energy) H of a Hamiltonian system
682 < is a \emph{first integral}, which is due to the fact $\{ H,H\}  =
683 < 0$.
684 <
600 > Therefore, the sufficient condition for $G$ to be a conserved
601 > quantity of a Hamiltonian system is $\left\{ {G,H} \right\} = 0.$ As
602 > is well known, the Hamiltonian (or energy) $H$ of a Hamiltonian system
603 > is a conserved quantity, which is due to the fact $\{ H,H\}  = 0$.
604   When designing any numerical methods, one should always try to
605 < preserve the structural properties of the original ODE and its flow.
605 > preserve the structural properties of the original ODE and its
606 > propagator.
607  
608   \subsection{\label{introSection:constructionSymplectic}Construction of Symplectic Methods}
609   A lot of well established and very effective numerical methods have
610 < been successful precisely because of their symplecticities even
610 > been successful precisely because of their symplectic nature even
611   though this fact was not recognized when they were first
612 < constructed. The most famous example is leapfrog methods in
613 < molecular dynamics. In general, symplectic integrators can be
612 > constructed. The most famous example is the Verlet-leapfrog method
613 > in molecular dynamics. In general, symplectic integrators can be
614   constructed using one of four different methods.
615   \begin{enumerate}
616   \item Generating functions
# Line 698 | Line 618 | constructed using one of four different methods.
618   \item Runge-Kutta methods
619   \item Splitting methods
620   \end{enumerate}
621 <
702 < Generating function\cite{Channell1990} tends to lead to methods
621 > Generating functions\cite{Channell1990} tend to lead to methods
622   which are cumbersome and difficult to use. In dissipative systems,
623   variational methods can capture the decay of energy
624 < accurately\cite{Kane2000}. Since their geometrically unstable nature
624 > accurately.\cite{Kane2000} Since they are geometrically unstable
625   against non-Hamiltonian perturbations, ordinary implicit Runge-Kutta
626 < methods are not suitable for Hamiltonian system. Recently, various
627 < high-order explicit Runge-Kutta methods
628 < \cite{Owren1992,Chen2003}have been developed to overcome this
629 < instability. However, due to computational penalty involved in
630 < implementing the Runge-Kutta methods, they do not attract too much
631 < attention from Molecular Dynamics community. Instead, splitting have
632 < been widely accepted since they exploit natural decompositions of
633 < the system\cite{Tuckerman1992, McLachlan1998}.
626 > methods are not suitable for Hamiltonian
627 > systems.\cite{Cartwright1992} Recently, various high-order explicit
628 > Runge-Kutta methods \cite{Owren1992,Chen2003} have been developed to
629 > overcome this instability. However, due to the computational penalty
630 > involved in implementing the Runge-Kutta methods, they have not
631 > attracted much attention from the Molecular Dynamics community.
632 > Instead, splitting methods have been widely accepted since they
633 > exploit natural decompositions of the system.\cite{McLachlan1998,
634 > Tuckerman1992}
635  
636 < \subsubsection{\label{introSection:splittingMethod}Splitting Method}
636 > \subsubsection{\label{introSection:splittingMethod}\textbf{Splitting Methods}}
637  
638   The main idea behind splitting methods is to decompose the discrete
639 < $\varphi_h$ as a composition of simpler flows,
639 > $\varphi_h$ as a composition of simpler propagators,
640   \begin{equation}
641   \varphi _h  = \varphi _{h_1 }  \circ \varphi _{h_2 }  \ldots  \circ
642   \varphi _{h_n }
643   \label{introEquation:FlowDecomposition}
644   \end{equation}
645 < where each of the sub-flow is chosen such that each represent a
646 < simpler integration of the system.
647 <
728 < Suppose that a Hamiltonian system takes the form,
645 > where each of the sub-propagators is chosen such that each represents
646 > a simpler integration of the system. Suppose that a Hamiltonian
647 > system takes the form,
648   \[
649   H = H_1 + H_2.
650   \]
651   Here, $H_1$ and $H_2$ may represent different physical processes of
652   the system. For instance, they may relate to kinetic and potential
653   energy respectively, which is a natural decomposition of the
654 < problem. If $H_1$ and $H_2$ can be integrated using exact flows
655 < $\varphi_1(t)$ and $\varphi_2(t)$, respectively, a simple first
656 < order is then given by the Lie-Trotter formula
654 > problem. If $H_1$ and $H_2$ can be integrated using exact
655 > propagators $\varphi_1(t)$ and $\varphi_2(t)$, respectively, a
656 > simple first order expression is then given by the Lie-Trotter
657 > formula\cite{Trotter1959}
658   \begin{equation}
659   \varphi _h  = \varphi _{1,h}  \circ \varphi _{2,h},
660   \label{introEquation:firstOrderSplitting}
# Line 743 | Line 663 | It is easy to show that any composition of symplectic
663   continuous $\varphi _i$ over a time $h$. By definition, as
664   $\varphi_i(t)$ is the exact solution of a Hamiltonian system, it
665   must follow that each operator $\varphi_i(t)$ is a symplectic map.
666 < It is easy to show that any composition of symplectic flows yields a
667 < symplectic map,
666 > It is easy to show that any composition of symplectic propagators
667 > yields a symplectic map,
668   \begin{equation}
669   (\varphi '\phi ')^T J\varphi '\phi ' = \phi '^T \varphi '^T J\varphi
670   '\phi ' = \phi '^T J\phi ' = J,
# Line 752 | Line 672 | splitting in this context automatically generates a sy
672   \end{equation}
673   where $\varphi$ and $\phi$ are both symplectic maps. Thus operator
674   splitting in this context automatically generates a symplectic map.
675 <
676 < The Lie-Trotter splitting(\ref{introEquation:firstOrderSplitting})
677 < introduces local errors proportional to $h^2$, while Strang
678 < splitting gives a second-order decomposition,
675 > The Lie-Trotter
676 > splitting(Eq.~\ref{introEquation:firstOrderSplitting}) introduces
677 > local errors proportional to $h^2$, while the Strang splitting gives
678 > a second-order decomposition,\cite{Strang1968}
679   \begin{equation}
680   \varphi _h  = \varphi _{1,h/2}  \circ \varphi _{2,h}  \circ \varphi
681   _{1,h/2} , \label{introEquation:secondOrderSplitting}
682   \end{equation}
683 < which has a local error proportional to $h^3$. Sprang splitting's
684 < popularity in molecular simulation community attribute to its
685 < symmetric property,
683 > which has a local error proportional to $h^3$. The Strang
684 > splitting's popularity in the molecular simulation community can be
685 > attributed to its symmetric property,
686   \begin{equation}
687   \varphi _h^{ - 1} = \varphi _{ - h}.
688   \label{introEquation:timeReversible}
689   \end{equation}
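These orders can be observed numerically. The sketch below (using a
one-dimensional harmonic oscillator with $H_1 = p^2 /2m$ and $H_2 =
kq^2 /2$, an arbitrary choice made only for illustration) compares the
errors of the Lie-Trotter and Strang compositions against the exact
solution as the time step is halved; the errors should shrink roughly
by factors of two and four, respectively:
\begin{verbatim}
# Sketch: empirical order of the Lie-Trotter and Strang splittings for
# H = H1 + H2 with H1 = p^2/(2m) (drift) and H2 = k q^2/2 (kick).
import numpy as np

m, k = 1.0, 1.0
w = np.sqrt(k / m)

def drift(q, p, h):                 # exact propagator of H1
    return q + h * p / m, p

def kick(q, p, h):                  # exact propagator of H2
    return q, p - h * k * q

def lie_trotter(q, p, h):
    return kick(*drift(q, p, h), h)

def strang(q, p, h):
    q, p = drift(q, p, 0.5 * h)
    q, p = kick(q, p, h)
    return drift(q, p, 0.5 * h)

def final_q(stepper, h, t_end=10.0, q0=1.0, p0=0.0):
    q, p = q0, p0
    for _ in range(int(round(t_end / h))):
        q, p = stepper(q, p, h)
    return q

q_exact = np.cos(w * 10.0)          # exact q(10) for q0 = 1, p0 = 0
for h in (0.1, 0.05, 0.025):
    print(h, abs(final_q(lie_trotter, h) - q_exact),
             abs(final_q(strang, h) - q_exact))
\end{verbatim}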
690  
691 < \subsubsection{\label{introSection:exampleSplittingMethod}Example of Splitting Method}
691 > \subsubsection{\label{introSection:exampleSplittingMethod}\textbf{Examples of the Splitting Method}}
692   The classical equation for a system consisting of interacting
693   particles can be written in Hamiltonian form,
694   \[
695   H = T + V
696   \]
697   where $T$ is the kinetic energy and $V$ is the potential energy.
698 < Setting $H_1 = T, H_2 = V$ and applying Strang splitting, one
698 > Setting $H_1 = T, H_2 = V$ and applying the Strang splitting, one
699   obtains the following:
700   \begin{align}
701   q(\Delta t) &= q(0) + \dot{q}(0)\Delta t +
# Line 788 | Line 708 | symplectic(\ref{introEquation:SymplecticFlowCompositio
708   \end{align}
709   where $F(t)$ is the force at time $t$. This integration scheme is
710   known as \emph{velocity verlet} which is
711 < symplectic(\ref{introEquation:SymplecticFlowComposition}),
712 < time-reversible(\ref{introEquation:timeReversible}) and
713 < volume-preserving (\ref{introEquation:volumePreserving}). These
711 > symplectic(Eq.~\ref{introEquation:SymplecticFlowComposition}),
712 > time-reversible(Eq.~\ref{introEquation:timeReversible}) and
713 > volume-preserving (Eq.~\ref{introEquation:volumePreserving}). These
714   geometric properties contribute to its long-time stability and its
715   popularity in the community. However, the most commonly used
716   velocity verlet integration scheme is written as below,
# Line 802 | Line 722 | q(\Delta t) &= q(0) + \Delta t\, \dot{q}\biggl (\frac{
722      \label{introEquation:Lp9b}\\%
723   %
724   \dot{q}(\Delta t) &= \dot{q}\biggl (\frac{\Delta t}{2}\biggr ) +
725 <    \frac{\Delta t}{2m}\, F[q(0)]. \label{introEquation:Lp9c}
725 >    \frac{\Delta t}{2m}\, F[q(\Delta t)]. \label{introEquation:Lp9c}
726   \end{align}
727   From the preceding splitting, one can see that the integration of
728   the equations of motion would follow:
# Line 811 | Line 731 | the equations of motion would follow:
731  
732   \item Use the half step velocities to move positions one whole step, $\Delta t$.
733  
734 < \item Evaluate the forces at the new positions, $\mathbf{r}(\Delta t)$, and use the new forces to complete the velocity move.
734 > \item Evaluate the forces at the new positions, $q(\Delta t)$, and use the new forces to complete the velocity move.
735  
736   \item Repeat from step 1 with the new position, velocities, and forces assuming the roles of the initial values.
737   \end{enumerate}
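These four steps translate directly into code. The sketch below (for a
single particle in a one-dimensional harmonic potential $V(q) = kq^2
/2$, with all parameters chosen arbitrarily and the force routine
standing in for a full force evaluation) carries out the same loop and
shows that the total energy stays close to its initial value over many
periods:
\begin{verbatim}
# Sketch of the velocity verlet loop above for one particle in a
# one-dimensional harmonic potential; parameters are illustrative only.
m, k, dt = 1.0, 1.0, 0.01
q, v = 1.0, 0.0                       # initial position and velocity

def force(q):
    return -k * q

f = force(q)
e0 = 0.5 * m * v * v + 0.5 * k * q * q
for _ in range(100000):
    v += 0.5 * dt * f / m             # move velocities a half step
    q += dt * v                       # move positions one whole step
    f = force(q)                      # forces at the new positions
    v += 0.5 * dt * f / m             # complete the velocity move

e1 = 0.5 * m * v * v + 0.5 * k * q * q
print(abs(e1 - e0))                   # bounded energy error
\end{verbatim}
The three updates inside the loop correspond to the half-step velocity
move, the position move, and the completion of the velocity move in
the list above.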
738 <
739 < Simply switching the order of splitting and composing, a new
740 < integrator, the \emph{position verlet} integrator, can be generated,
738 > By simply switching the order of the propagators in the splitting
739 > and composing a new integrator, the \emph{position verlet}
740 > integrator, can be generated,
741   \begin{align}
742   \dot q(\Delta t) &= \dot q(0) + \Delta tF(q(0))\left[ {q(0) +
743   \frac{{\Delta t}}{{2m}}\dot q(0)} \right], %
# Line 828 | Line 748 | q(\Delta t)} \right]. %
748   \label{introEquation:positionVerlet2}
749   \end{align}
750  
751 < \subsubsection{\label{introSection:errorAnalysis}Error Analysis and Higher Order Methods}
751 > \subsubsection{\label{introSection:errorAnalysis}\textbf{Error Analysis and Higher Order Methods}}
752  
753 < Baker-Campbell-Hausdorff formula can be used to determine the local
754 < error of splitting method in terms of commutator of the
755 < operators(\ref{introEquation:exponentialOperator}) associated with
756 < the sub-flow. For operators $hX$ and $hY$ which are associate to
757 < $\varphi_1(t)$ and $\varphi_2(t)$ respectively , we have
753 > The Baker-Campbell-Hausdorff formula\cite{Gilmore1974} can be used
754 > to determine the local error of a splitting method in terms of the
755 > commutator of the operators associated with the sub-propagator. For
756 > operators $hX$ and $hY$ which are associated with $\varphi_1(t)$ and
757 > $\varphi_2(t)$ respectively , we have
758   \begin{equation}
759   \exp (hX + hY) = \exp (hZ)
760   \end{equation}
# Line 843 | Line 763 | Here, $[X,Y]$ is the commutators of operator $X$ and $
763   hZ = hX + hY + \frac{{h^2 }}{2}[X,Y] + \frac{{h^3 }}{2}\left(
764   {[X,[X,Y]] + [Y,[Y,X]]} \right) +  \ldots .
765   \end{equation}
766 < Here, $[X,Y]$ is the commutators of operator $X$ and $Y$ given by
766 > Here, $[X,Y]$ is the commutator of operator $X$ and $Y$ given by
767   \[
768   [X,Y] = XY - YX .
769   \]
770 < Applying Baker-Campbell-Hausdorff formula\cite{Varadarajan1974} to
771 < Sprang splitting, we can obtain
770 > Applying the Baker-Campbell-Hausdorff formula\cite{Varadarajan1974}
771 > to the Strang splitting, we can obtain
772   \begin{eqnarray*}
773   \exp (h X/2)\exp (h Y)\exp (h X/2) & = & \exp (h X + h Y + h^2 [X,Y]/4 + h^2 [Y,X]/4 \\
774                                     &   & \mbox{} + h^2 [X,X]/8 + h^2 [Y,Y]/8 \\
775 <                                   &   & \mbox{} + h^3 [Y,[Y,X]]/12 - h^3[X,[X,Y]]/24 + \ldots )
775 >                                   &   & \mbox{} + h^3 [Y,[Y,X]]/12 - h^3[X,[X,Y]]/24 + \ldots
776 >                                   ).
777   \end{eqnarray*}
778 < Since \[ [X,Y] + [Y,X] = 0\] and \[ [X,X] = 0\], the dominant local
779 < error of Spring splitting is proportional to $h^3$. The same
780 < procedure can be applied to general splitting,  of the form
778 > Since $ [X,Y] + [Y,X] = 0$ and $ [X,X] = 0$, the dominant local
779 > error of Strang splitting is proportional to $h^3$. The same
780 > procedure can be applied to a general splitting of the form
781   \begin{equation}
782   \varphi _{b_m h}^2  \circ \varphi _{a_m h}^1  \circ \varphi _{b_{m -
783   1} h}^2  \circ  \ldots  \circ \varphi _{a_1 h}^1 .
784   \end{equation}
785 < Careful choice of coefficient $a_1 \ldot b_m$ will lead to higher
786 < order method. Yoshida proposed an elegant way to compose higher
787 < order methods based on symmetric splitting\cite{Yoshida1990}. Given
785 > A careful choice of coefficient $a_1 \ldots b_m$ will lead to higher
786 > order methods. Yoshida proposed an elegant way to compose higher
787 > order methods based on symmetric splitting.\cite{Yoshida1990} Given
788   a symmetric second order base method $ \varphi _h^{(2)} $, a
789   fourth-order symmetric method can be constructed by composing,
790   \[
# Line 875 | Line 796 | _{\beta h}^{(2n)}  \circ \varphi _{\alpha h}^{(2n)}
796   integrator $ \varphi _h^{(2n + 2)}$ can be composed by
797   \begin{equation}
798   \varphi _h^{(2n + 2)}  = \varphi _{\alpha h}^{(2n)}  \circ \varphi
799 < _{\beta h}^{(2n)}  \circ \varphi _{\alpha h}^{(2n)}
799 > _{\beta h}^{(2n)}  \circ \varphi _{\alpha h}^{(2n)},
800   \end{equation}
801 < , if the weights are chosen as
801 > if the weights are chosen as
802   \[
803   \alpha  =  - \frac{{2^{1/(2n + 1)} }}{{2 - 2^{1/(2n + 1)} }},\beta =
804   \frac{{2^{1/(2n + 1)} }}{{2 - 2^{1/(2n + 1)} }} .
# Line 891 | Line 812 | simulations. For instance, instantaneous temperature o
812   dynamical information. The basic idea of molecular dynamics is that
813   macroscopic properties are related to microscopic behavior and
814   microscopic behavior can be calculated from the trajectories in
815 < simulations. For instance, instantaneous temperature of an
816 < Hamiltonian system of $N$ particle can be measured by
815 > simulations. For instance, instantaneous temperature of a
816 > Hamiltonian system of $N$ particles can be measured by
817   \[
818   T = \sum\limits_{i = 1}^N {\frac{{m_i v_i^2 }}{{fk_B }}}
819   \]
820   where $m_i$ and $v_i$ are the mass and velocity of $i$th particle
821   respectively, $f$ is the number of degrees of freedom, and $k_B$ is
822 < the boltzman constant.
822 > the Boltzman constant.
823  
824   A typical molecular dynamics run consists of three essential steps:
825   \begin{enumerate}
# Line 914 | Line 835 | initialization of a simulation. Sec.~\ref{introSec:pro
835   \end{enumerate}
836   These three individual steps will be covered in the following
837   sections. Sec.~\ref{introSec:initialSystemSettings} deals with the
838 < initialization of a simulation. Sec.~\ref{introSec:production} will
839 < discusses issues in production run. Sec.~\ref{introSection:Analysis}
840 < provides the theoretical tools for trajectory analysis.
838 > initialization of a simulation. Sec.~\ref{introSection:production}
839 > discusses issues of production runs.
840 > Sec.~\ref{introSection:Analysis} provides the theoretical tools for
841 > analysis of trajectories.
842  
843   \subsection{\label{introSec:initialSystemSettings}Initialization}
844  
845 < \subsubsection{Preliminary preparation}
845 > \subsubsection{\textbf{Preliminary preparation}}
846  
847   When selecting the starting structure of a molecule for molecular
848   simulation, one may retrieve its Cartesian coordinates from public
849   databases, such as RCSB Protein Data Bank \textit{etc}. Although
850   thousands of crystal structures of molecules are discovered every
851   year, many more remain unknown due to the difficulties of
852 < purification and crystallization. Even for the molecule with known
853 < structure, some important information is missing. For example, the
852 > purification and crystallization. Even for molecules with known
853 > structures, some important information is missing. For example, a
854   missing hydrogen atom which acts as donor in hydrogen bonding must
855 < be added. Moreover, in order to include electrostatic interaction,
855 > be added. Moreover, in order to include electrostatic interactions,
856   one may need to specify the partial charges for individual atoms.
857   Under some circumstances, we may even need to prepare the system in
858 < a special setup. For instance, when studying transport phenomenon in
859 < membrane system, we may prepare the lipids in bilayer structure
860 < instead of placing lipids randomly in solvent, since we are not
861 < interested in self-aggregation and it takes a long time to happen.
858 > a special configuration. For instance, when studying transport
859 > phenomenon in membrane systems, we may prepare the lipids in a
860 > bilayer structure instead of placing lipids randomly in solvent,
861 > since we are not interested in the slow self-aggregation process.
862  
863 < \subsubsection{Minimization}
863 > \subsubsection{\textbf{Minimization}}
864  
865   It is quite possible that some of molecules in the system from
866 < preliminary preparation may be overlapped with each other. This
867 < close proximity leads to high potential energy which consequently
868 < jeopardizes any molecular dynamics simulations. To remove these
869 < steric overlaps, one typically performs energy minimization to find
870 < a more reasonable conformation. Several energy minimization methods
871 < have been developed to exploit the energy surface and to locate the
872 < local minimum. While converging slowly near the minimum, steepest
873 < descent method is extremely robust when systems are far from
874 < harmonic. Thus, it is often used to refine structure from
875 < crystallographic data. Relied on the gradient or hessian, advanced
876 < methods like conjugate gradient and Newton-Raphson converge rapidly
877 < to a local minimum, while become unstable if the energy surface is
878 < far from quadratic. Another factor must be taken into account, when
866 > preliminary preparation may be overlapping with each other. This
867 > close proximity leads to high initial potential energy which
868 > consequently jeopardizes any molecular dynamics simulations. To
869 > remove these steric overlaps, one typically performs energy
870 > minimization to find a more reasonable conformation. Several energy
871 > minimization methods have been developed to exploit the energy
872 > surface and to locate the local minimum. While converging slowly
873 > near the minimum, the steepest descent method is extremely robust when
874 > systems are strongly anharmonic. Thus, it is often used to refine
875 > structures from crystallographic data. Relying on the Hessian,
876 > advanced methods like Newton-Raphson converge rapidly to a local
877 > minimum, but become unstable if the energy surface is far from
878 > quadratic. Another factor that must be taken into account, when
879   choosing energy minimization method, is the size of the system.
880   Steepest descent and conjugate gradient can deal with models of any
881 < size. Because of the limit of computation power to calculate hessian
882 < matrix and insufficient storage capacity to store them, most
883 < Newton-Raphson methods can not be used with very large models.
881 > size. Because of the limits on computer memory to store the hessian
882 > matrix and the computing power needed to diagonalize these matrices,
883 > most Newton-Raphson methods can not be used with very large systems.
884  
885 < \subsubsection{Heating}
885 > \subsubsection{\textbf{Heating}}
886  
887 < Typically, Heating is performed by assigning random velocities
888 < according to a Gaussian distribution for a temperature. Beginning at
889 < a lower temperature and gradually increasing the temperature by
890 < assigning greater random velocities, we end up with setting the
891 < temperature of the system to a final temperature at which the
892 < simulation will be conducted. In heating phase, we should also keep
893 < the system from drifting or rotating as a whole. Equivalently, the
894 < net linear momentum and angular momentum of the system should be
895 < shifted to zero.
887 > Typically, heating is performed by assigning random velocities
888 > according to a Maxwell-Boltzman distribution for a desired
889 > temperature. Beginning at a lower temperature and gradually
890 > increasing the temperature by assigning larger random velocities, we
891 > end up setting the temperature of the system to a final temperature
892 > at which the simulation will be conducted. In the heating phase, we
893 > should also keep the system from drifting or rotating as a whole. To
894 > do this, the net linear momentum and angular momentum of the system
895 > is shifted to zero after each resampling from the Maxwell -Boltzman
896 > distribution.
897  
898 < \subsubsection{Equilibration}
898 > \subsubsection{\textbf{Equilibration}}
899  
900   The purpose of equilibration is to allow the system to evolve
901   spontaneously for a period of time and reach equilibrium. The
# Line 981 | Line 904 | as a means to arrive at an equilibrated structure in a
904   properties \textit{etc}, become independent of time. Strictly
905   speaking, minimization and heating are not necessary, provided the
906   equilibration process is long enough. However, these steps can serve
907 < as a means to arrive at an equilibrated structure in an effective
907 > as a mean to arrive at an equilibrated structure in an effective
908   way.
909  
910   \subsection{\label{introSection:production}Production}
911  
912 < Production run is the most important step of the simulation, in
912 > The production run is the most important step of the simulation, in
913   which the equilibrated structure is used as a starting point and the
914   motions of the molecules are collected for later analysis. In order
915   to capture the macroscopic properties of the system, the molecular
916 < dynamics simulation must be performed in correct and efficient way.
916 > dynamics simulation must be performed by sampling correctly and
917 > efficiently from the relevant thermodynamic ensemble.
918  
919   The most expensive part of a molecular dynamics simulation is the
920   calculation of non-bonded forces, such as van der Waals force and
921   Coulombic forces \textit{etc}. For a system of $N$ particles, the
922   complexity of the algorithm for pair-wise interactions is $O(N^2 )$,
923 < which making large simulations prohibitive in the absence of any
924 < computation saving techniques.
925 <
926 < A natural approach to avoid system size issue is to represent the
927 < bulk behavior by a finite number of the particles. However, this
928 < approach will suffer from the surface effect. To offset this,
929 < \textit{Periodic boundary condition} (see Fig.~\ref{introFig:pbc})
930 < is developed to simulate bulk properties with a relatively small
931 < number of particles. In this method, the simulation box is
932 < replicated throughout space to form an infinite lattice. During the
933 < simulation, when a particle moves in the primary cell, its image in
1010 < other cells move in exactly the same direction with exactly the same
923 > which makes large simulations prohibitive in the absence of any
924 > algorithmic tricks. A natural approach to avoid system size issues
925 > is to represent the bulk behavior by a finite number of the
926 > particles. However, this approach will suffer from surface effects
927 > at the edges of the simulation. To offset this, \textit{Periodic
928 > boundary conditions} (see Fig.~\ref{introFig:pbc}) were developed to
929 > simulate bulk properties with a relatively small number of
930 > particles. In this method, the simulation box is replicated
931 > throughout space to form an infinite lattice. During the simulation,
932 > when a particle moves in the primary cell, its image in other cells
933 > move in exactly the same direction with exactly the same
934   orientation. Thus, as a particle leaves the primary cell, one of its
935   images will enter through the opposite face.
936   \begin{figure}
# Line 1021 | Line 944 | evaluation is to apply cutoff where particles farther
944  
945   %cutoff and minimum image convention
946   Another important technique to improve the efficiency of force
947 < evaluation is to apply cutoff where particles farther than a
948 < predetermined distance, are not included in the calculation
949 < \cite{Frenkel1996}. The use of a cutoff radius will cause a
950 < discontinuity in the potential energy curve. Fortunately, one can
951 < shift the potential to ensure the potential curve go smoothly to
952 < zero at the cutoff radius. Cutoff strategy works pretty well for
953 < Lennard-Jones interaction because of its short range nature.
954 < However, simply truncating the electrostatic interaction with the
955 < use of cutoff has been shown to lead to severe artifacts in
956 < simulations. Ewald summation, in which the slowly conditionally
957 < convergent Coulomb potential is transformed into direct and
958 < reciprocal sums with rapid and absolute convergence, has proved to
959 < minimize the periodicity artifacts in liquid simulations. Taking the
960 < advantages of the fast Fourier transform (FFT) for calculating
961 < discrete Fourier transforms, the particle mesh-based
947 > evaluation is to apply spherical cutoffs where particles farther
948 > than a predetermined distance are not included in the
949 > calculation.\cite{Frenkel1996} The use of a cutoff radius will cause
950 > a discontinuity in the potential energy curve. Fortunately, one can
951 > shift a simple radial potential to ensure the potential curve go
952 > smoothly to zero at the cutoff radius. The cutoff strategy works
953 > well for Lennard-Jones interaction because of its short range
954 > nature. However, simply truncating the electrostatic interaction
955 > with the use of cutoffs has been shown to lead to severe artifacts
956 > in simulations. The Ewald summation, in which the slowly decaying
957 > Coulomb potential is transformed into direct and reciprocal sums
958 > with rapid and absolute convergence, has proved to minimize the
959 > periodicity artifacts in liquid simulations. Taking advantage of
960 > fast Fourier transform (FFT) techniques for calculating discrete
961 > Fourier transforms, the particle mesh-based
962   methods\cite{Hockney1981,Shimada1993, Luty1994} are accelerated from
963 < $O(N^{3/2})$ to $O(N logN)$. An alternative approach is \emph{fast
964 < multipole method}\cite{Greengard1987, Greengard1994}, which treats
965 < Coulombic interaction exactly at short range, and approximate the
966 < potential at long range through multipolar expansion. In spite of
967 < their wide acceptances at the molecular simulation community, these
968 < two methods are hard to be implemented correctly and efficiently.
969 < Instead, we use a damped and charge-neutralized Coulomb potential
970 < method developed by Wolf and his coworkers\cite{Wolf1999}. The
971 < shifted Coulomb potential for particle $i$ and particle $j$ at
972 < distance $r_{rj}$ is given by:
963 > $O(N^{3/2})$ to $O(N logN)$. An alternative approach is the
964 > \emph{fast multipole method}\cite{Greengard1987, Greengard1994},
965 > which treats Coulombic interactions exactly at short range, and
966 > approximate the potential at long range through multipolar
967 > expansion. In spite of their wide acceptance at the molecular
968 > simulation community, these two methods are difficult to implement
969 > correctly and efficiently. Instead, we use a damped and
970 > charge-neutralized Coulomb potential method developed by Wolf and
971 > his coworkers.\cite{Wolf1999} The shifted Coulomb potential for
972 > particle $i$ and particle $j$ at distance $r_{rj}$ is given by:
973   \begin{equation}
974   V(r_{ij})= \frac{q_i q_j \textrm{erfc}(\alpha
975   r_{ij})}{r_{ij}}-\lim_{r_{ij}\rightarrow
976   R_\textrm{c}}\left\{\frac{q_iq_j \textrm{erfc}(\alpha
977 < r_{ij})}{r_{ij}}\right\}. \label{introEquation:shiftedCoulomb}
977 > r_{ij})}{r_{ij}}\right\}, \label{introEquation:shiftedCoulomb}
978   \end{equation}
979   where $\alpha$ is the convergence parameter. Due to the lack of
980   inherent periodicity and rapid convergence,this method is extremely
# Line 1068 | Line 991 | Recently, advanced visualization technique are widely
991  
992   \subsection{\label{introSection:Analysis} Analysis}
993  
994 < Recently, advanced visualization technique are widely applied to
994 > Recently, advanced visualization techniques have been applied to
995   monitor the motions of molecules. Although the dynamics of the
996   system can be described qualitatively from animation, quantitative
997 < trajectory analysis are more appreciable. According to the
998 < principles of Statistical Mechanics,
997 > trajectory analysis is more useful. According to the principles of
998 > Statistical Mechanics in
999   Sec.~\ref{introSection:statisticalMechanics}, one can compute
1000 < thermodynamics properties, analyze fluctuations of structural
1000 > thermodynamic properties, analyze fluctuations of structural
1001   parameters, and investigate time-dependent processes of the molecule
1002   from the trajectories.
1003  
1004 < \subsubsection{\label{introSection:thermodynamicsProperties}Thermodynamics Properties}
1004 > \subsubsection{\label{introSection:thermodynamicsProperties}\textbf{Thermodynamic Properties}}
1005  
1006 < Thermodynamics properties, which can be expressed in terms of some
1006 > Thermodynamic properties, which can be expressed in terms of some
1007   function of the coordinates and momenta of all particles in the
1008   system, can be directly computed from molecular dynamics. The usual
1009   way to measure the pressure is based on virial theorem of Clausius
# Line 1100 | Line 1023 | P = \frac{{Nk_B T}}{V} - \frac{1}{{3V}}\left\langle {\
1023   < j} {r{}_{ij} \cdot f_{ij} } } \right\rangle
1024   \end{equation}
1025  
1026 < \subsubsection{\label{introSection:structuralProperties}Structural Properties}
1026 > \subsubsection{\label{introSection:structuralProperties}\textbf{Structural Properties}}
1027  
1028   Structural Properties of a simple fluid can be described by a set of
1029 < distribution functions. Among these functions,\emph{pair
1029 > distribution functions. Among these functions,the \emph{pair
1030   distribution function}, also known as \emph{radial distribution
1031 < function}, is of most fundamental importance to liquid-state theory.
1032 < Pair distribution function can be gathered by Fourier transforming
1033 < raw data from a series of neutron diffraction experiments and
1034 < integrating over the surface factor \cite{Powles1973}. The
1035 < experiment result can serve as a criterion to justify the
1036 < correctness of the theory. Moreover, various equilibrium
1037 < thermodynamic and structural properties can also be expressed in
1038 < terms of radial distribution function \cite{Allen1987}.
1039 <
1040 < A pair distribution functions $g(r)$ gives the probability that a
1041 < particle $i$ will be located at a distance $r$ from a another
1042 < particle $j$ in the system
1043 < \[
1044 < g(r) = \frac{V}{{N^2 }}\left\langle {\sum\limits_i {\sum\limits_{j
1045 < \ne i} {\delta (r - r_{ij} )} } } \right\rangle.
1046 < \]
1031 > function}, is of most fundamental importance to liquid theory.
1032 > Experimentally, pair distribution functions can be gathered by
1033 > Fourier transforming raw data from a series of neutron diffraction
1034 > experiments and integrating over the surface
1035 > factor.\cite{Powles1973} The experimental results can serve as a
1036 > criterion to justify the correctness of a liquid model. Moreover,
1037 > various equilibrium thermodynamic and structural properties can also
1038 > be expressed in terms of the radial distribution
1039 > function.\cite{Allen1987} The pair distribution functions $g(r)$
1040 > gives the probability that a particle $i$ will be located at a
1041 > distance $r$ from a another particle $j$ in the system
1042 > \begin{equation}
1043 > g(r) = \frac{V}{{N^2 }}\left\langle {\sum\limits_i {\sum\limits_{j
1044 > \ne i} {\delta (r - r_{ij} )} } } \right\rangle = \frac{\rho
1045 > (r)}{\rho}.
1046 > \end{equation}
1047   Note that the delta function can be replaced by a histogram in
1048 < computer simulation. Figure
1049 < \ref{introFigure:pairDistributionFunction} shows a typical pair
1050 < distribution function for the liquid argon system. The occurrence of
1128 < several peaks in the plot of $g(r)$ suggests that it is more likely
1129 < to find particles at certain radial values than at others. This is a
1130 < result of the attractive interaction at such distances. Because of
1131 < the strong repulsive forces at short distance, the probability of
1132 < locating particles at distances less than about 2.5{\AA} from each
1133 < other is essentially zero.
1048 > computer simulation. Peaks in $g(r)$ represent solvent shells, and
1049 > the height of these peaks gradually decreases to 1 as the liquid of
1050 > large distance approaches the bulk density.
1051  
1135 %\begin{figure}
1136 %\centering
1137 %\includegraphics[width=\linewidth]{pdf.eps}
1138 %\caption[Pair distribution function for the liquid argon
1139 %]{Pair distribution function for the liquid argon}
1140 %\label{introFigure:pairDistributionFunction}
1141 %\end{figure}
1052  
1053 < \subsubsection{\label{introSection:timeDependentProperties}Time-dependent
1054 < Properties}
1053 > \subsubsection{\label{introSection:timeDependentProperties}\textbf{Time-dependent
1054 > Properties}}
1055  
1056   Time-dependent properties are usually calculated using \emph{time
1057 < correlation function}, which correlates random variables $A$ and $B$
1058 < at two different time
1057 > correlation functions}, which correlate random variables $A$ and $B$
1058 > at two different times,
1059   \begin{equation}
1060   C_{AB} (t) = \left\langle {A(t)B(0)} \right\rangle.
1061   \label{introEquation:timeCorrelationFunction}
1062   \end{equation}
1063   If $A$ and $B$ refer to same variable, this kind of correlation
1064 < function is called \emph{auto correlation function}. One example of
1065 < auto correlation function is velocity auto-correlation function
1066 < which is directly related to transport properties of molecular
1067 < liquids:
1158 < \[
1064 > functions are called \emph{autocorrelation functions}. One typical example is the velocity autocorrelation
1065 > function which is directly related to transport properties of
1066 > molecular liquids:
1067 > \begin{equation}
1068   D = \frac{1}{3}\int\limits_0^\infty  {\left\langle {v(t) \cdot v(0)}
1069   \right\rangle } dt
1070 < \]
1071 < where $D$ is diffusion constant. Unlike velocity autocorrelation
1072 < function which is averaging over time origins and over all the
1073 < atoms, dipole autocorrelation are calculated for the entire system.
1074 < The dipole autocorrelation function is given by:
1075 < \[
1070 > \end{equation}
1071 > where $D$ is diffusion constant. Unlike the velocity autocorrelation
1072 > function, which is averaged over time origins and over all the
1073 > atoms, the dipole autocorrelation functions is calculated for the
1074 > entire system. The dipole autocorrelation function is given by:
1075 > \begin{equation}
1076   c_{dipole}  = \left\langle {u_{tot} (t) \cdot u_{tot} (t)}
1077   \right\rangle
1078 < \]
1078 > \end{equation}
1079   Here $u_{tot}$ is the net dipole of the entire system and is given
1080   by
1081 < \[
1082 < u_{tot} (t) = \sum\limits_i {u_i (t)}
1083 < \]
1084 < In principle, many time correlation functions can be related with
1081 > \begin{equation}
1082 > u_{tot} (t) = \sum\limits_i {u_i (t)}.
1083 > \end{equation}
1084 > In principle, many time correlation functions can be related to
1085   Fourier transforms of the infrared, Raman, and inelastic neutron
1086   scattering spectra of molecular liquids. In practice, one can
1087 < extract the IR spectrum from the intensity of dipole fluctuation at
1088 < each frequency using the following relationship:
1089 < \[
1087 > extract the IR spectrum from the intensity of the molecular dipole
1088 > fluctuation at each frequency using the following relationship:
1089 > \begin{equation}
1090   \hat c_{dipole} (v) = \int_{ - \infty }^\infty  {c_{dipole} (t)e^{ -
1091 < i2\pi vt} dt}
1092 < \]
1091 > i2\pi vt} dt}.
1092 > \end{equation}
1093  
1094   \section{\label{introSection:rigidBody}Dynamics of Rigid Bodies}
1095  
1096   Rigid bodies are frequently involved in the modeling of different
1097 < areas, from engineering, physics, to chemistry. For example,
1098 < missiles and vehicle are usually modeled by rigid bodies.  The
1099 < movement of the objects in 3D gaming engine or other physics
1100 < simulator is governed by the rigid body dynamics. In molecular
1101 < simulation, rigid body is used to simplify the model in
1102 < protein-protein docking study\cite{Gray2003}.
1097 > areas, including engineering, physics and chemistry. For example,
1098 > missiles and vehicles are usually modeled by rigid bodies.  The
1099 > movement of the objects in 3D gaming engines or other physics
1100 > simulators is governed by rigid body dynamics. In molecular
1101 > simulations, rigid bodies are used to simplify protein-protein
1102 > docking studies.\cite{Gray2003}
1103  
1104   It is very important to develop stable and efficient methods to
1105 < integrate the equations of motion of orientational degrees of
1106 < freedom. Euler angles are the nature choice to describe the
1107 < rotational degrees of freedom. However, due to its singularity, the
1108 < numerical integration of corresponding equations of motion is very
1109 < inefficient and inaccurate. Although an alternative integrator using
1110 < different sets of Euler angles can overcome this
1111 < difficulty\cite{Barojas1973}, the computational penalty and the lost
1112 < of angular momentum conservation still remain. A singularity free
1113 < representation utilizing quaternions was developed by Evans in
1114 < 1977\cite{Evans1977}. Unfortunately, this approach suffer from the
1115 < nonseparable Hamiltonian resulted from quaternion representation,
1116 < which prevents the symplectic algorithm to be utilized. Another
1117 < different approach is to apply holonomic constraints to the atoms
1118 < belonging to the rigid body. Each atom moves independently under the
1119 < normal forces deriving from potential energy and constraint forces
1120 < which are used to guarantee the rigidness. However, due to their
1121 < iterative nature, SHAKE and Rattle algorithm converge very slowly
1122 < when the number of constraint increases\cite{Ryckaert1977,
1123 < Andersen1983}.
1105 > integrate the equations of motion for orientational degrees of
1106 > freedom. Euler angles are the natural choice to describe the
1107 > rotational degrees of freedom. However, due to $\frac {1}{sin
1108 > \theta}$ singularities, the numerical integration of corresponding
1109 > equations of these motion is very inefficient and inaccurate.
1110 > Although an alternative integrator using multiple sets of Euler
1111 > angles can overcome this difficulty\cite{Barojas1973}, the
1112 > computational penalty and the loss of angular momentum conservation
1113 > still remain. A singularity-free representation utilizing
1114 > quaternions was developed by Evans in 1977.\cite{Evans1977}
1115 > Unfortunately, this approach used a nonseparable Hamiltonian
1116 > resulting from the quaternion representation, which prevented the
1117 > symplectic algorithm from being utilized. Another different approach
1118 > is to apply holonomic constraints to the atoms belonging to the
1119 > rigid body. Each atom moves independently under the normal forces
1120 > deriving from potential energy and constraint forces which are used
1121 > to guarantee the rigidness. However, due to their iterative nature,
1122 > the SHAKE and Rattle algorithms also converge very slowly when the
1123 > number of constraints increases.\cite{Ryckaert1977, Andersen1983}
1124  
1125 < The break through in geometric literature suggests that, in order to
1125 > A break-through in geometric literature suggests that, in order to
1126   develop a long-term integration scheme, one should preserve the
1127 < symplectic structure of the flow. Introducing conjugate momentum to
1128 < rotation matrix $Q$ and re-formulating Hamiltonian's equation, a
1129 < symplectic integrator, RSHAKE\cite{Kol1997}, was proposed to evolve
1130 < the Hamiltonian system in a constraint manifold by iteratively
1131 < satisfying the orthogonality constraint $Q_T Q = 1$. An alternative
1132 < method using quaternion representation was developed by
1133 < Omelyan\cite{Omelyan1998}. However, both of these methods are
1134 < iterative and inefficient. In this section, we will present a
1135 < symplectic Lie-Poisson integrator for rigid body developed by
1127 > symplectic structure of the propagator. By introducing a conjugate
1128 > momentum to the rotation matrix $Q$ and re-formulating Hamiltonian's
1129 > equation, a symplectic integrator, RSHAKE\cite{Kol1997}, was
1130 > proposed to evolve the Hamiltonian system in a constraint manifold
1131 > by iteratively satisfying the orthogonality constraint $Q^T Q = 1$.
1132 > An alternative method using the quaternion representation was
1133 > developed by Omelyan.\cite{Omelyan1998} However, both of these
1134 > methods are iterative and inefficient. In this section, we descibe a
1135 > symplectic Lie-Poisson integrator for rigid bodies developed by
1136   Dullweber and his coworkers\cite{Dullweber1997} in depth.
1137  
1138 < \subsection{\label{introSection:constrainedHamiltonianRB}Constrained Hamiltonian for Rigid Body}
1139 < The motion of the rigid body is Hamiltonian with the Hamiltonian
1231 < function
1138 > \subsection{\label{introSection:constrainedHamiltonianRB}Constrained Hamiltonian for Rigid Bodies}
1139 > The Hamiltonian of a rigid body is given by
1140   \begin{equation}
1141   H = \frac{1}{2}(p^T m^{ - 1} p) + \frac{1}{2}tr(PJ^{ - 1} P) +
1142   V(q,Q) + \frac{1}{2}tr[(QQ^T  - 1)\Lambda ].
1143   \label{introEquation:RBHamiltonian}
1144   \end{equation}
1145 < Here, $q$ and $Q$  are the position and rotation matrix for the
1146 < rigid-body, $p$ and $P$  are conjugate momenta to $q$  and $Q$ , and
1147 < $J$, a diagonal matrix, is defined by
1145 > Here, $q$ and $Q$  are the position vector and rotation matrix for
1146 > the rigid-body, $p$ and $P$  are conjugate momenta to $q$  and $Q$ ,
1147 > and $J$, a diagonal matrix, is defined by
1148   \[
1149   I_{ii}^{ - 1}  = \frac{1}{2}\sum\limits_{i \ne j} {J_{jj}^{ - 1} }
1150   \]
1151   where $I_{ii}$ is the diagonal element of the inertia tensor. This
1152 < constrained Hamiltonian equation subjects to a holonomic constraint,
1152 > constrained Hamiltonian equation is subjected to a holonomic
1153 > constraint,
1154   \begin{equation}
1155   Q^T Q = 1, \label{introEquation:orthogonalConstraint}
1156   \end{equation}
1157 < which is used to ensure rotation matrix's orthogonality.
1158 < Differentiating \ref{introEquation:orthogonalConstraint} and using
1159 < Equation \ref{introEquation:RBMotionMomentum}, one may obtain,
1157 > which is used to ensure the rotation matrix's unitarity. Using
1158 > Eq.~\ref{introEquation:motionHamiltonianCoordinate} and Eq.~
1159 > \ref{introEquation:motionHamiltonianMomentum}, one can write down
1160 > the equations of motion,
1161 > \begin{eqnarray}
1162 > \frac{{dq}}{{dt}} & = & \frac{p}{m}, \label{introEquation:RBMotionPosition}\\
1163 > \frac{{dp}}{{dt}} & = & - \nabla _q V(q,Q), \label{introEquation:RBMotionMomentum}\\
1164 > \frac{{dQ}}{{dt}} & = & PJ^{ - 1},  \label{introEquation:RBMotionRotation}\\
1165 > \frac{{dP}}{{dt}} & = & - \nabla _Q V(q,Q) - 2Q\Lambda . \label{introEquation:RBMotionP}
1166 > \end{eqnarray}
1167 > Differentiating Eq.~\ref{introEquation:orthogonalConstraint} and
1168 > using Eq.~\ref{introEquation:RBMotionMomentum}, one may obtain,
1169   \begin{equation}
1170   Q^T PJ^{ - 1}  + J^{ - 1} P^T Q = 0 . \\
1171   \label{introEquation:RBFirstOrderConstraint}
1172   \end{equation}
1255
1256 Using Equation (\ref{introEquation:motionHamiltonianCoordinate},
1257 \ref{introEquation:motionHamiltonianMomentum}), one can write down
1258 the equations of motion,
1259 \[
1260 \begin{array}{c}
1261 \frac{{dq}}{{dt}} = \frac{p}{m} \label{introEquation:RBMotionPosition}\\
1262 \frac{{dp}}{{dt}} =  - \nabla _q V(q,Q) \label{introEquation:RBMotionMomentum}\\
1263 \frac{{dQ}}{{dt}} = PJ^{ - 1}  \label{introEquation:RBMotionRotation}\\
1264 \frac{{dP}}{{dt}} =  - \nabla _Q V(q,Q) - 2Q\Lambda . \label{introEquation:RBMotionP}\\
1265 \end{array}
1266 \]
1267
1173   In general, there are two ways to satisfy the holonomic constraints.
1174 < We can use constraint force provided by lagrange multiplier on the
1175 < normal manifold to keep the motion on constraint space. Or we can
1176 < simply evolve the system in constraint manifold. These two methods
1177 < are proved to be equivalent. The holonomic constraint and equations
1178 < of motions define a constraint manifold for rigid body
1174 > We can use a constraint force provided by a Lagrange multiplier on
1175 > the normal manifold to keep the motion on the constraint space. Or
1176 > we can simply evolve the system on the constraint manifold. These
1177 > two methods have been proved to be equivalent. The holonomic
1178 > constraint and equations of motions define a constraint manifold for
1179 > rigid bodies
1180   \[
1181   M = \left\{ {(Q,P):Q^T Q = 1,Q^T PJ^{ - 1}  + J^{ - 1} P^T Q = 0}
1182   \right\}.
1183   \]
1184 <
1185 < Unfortunately, this constraint manifold is not the cotangent bundle
1186 < $T_{\star}SO(3)$. However, it turns out that under symplectic
1187 < transformation, the cotangent space and the phase space are
1282 < diffeomorphic. Introducing
1184 > Unfortunately, this constraint manifold is not $T^* SO(3)$ which is
1185 > a symplectic manifold on Lie rotation group $SO(3)$. However, it
1186 > turns out that under symplectic transformation, the cotangent space
1187 > and the phase space are diffeomorphic. By introducing
1188   \[
1189   \tilde Q = Q,\tilde P = \frac{1}{2}\left( {P - QP^T Q} \right),
1190   \]
1191 < the mechanical system subject to a holonomic constraint manifold $M$
1191 > the mechanical system subjected to a holonomic constraint manifold $M$
1192   can be re-formulated as a Hamiltonian system on the cotangent space
1193   \[
1194   T^* SO(3) = \left\{ {(\tilde Q,\tilde P):\tilde Q^T \tilde Q =
1195   1,\tilde Q^T \tilde PJ^{ - 1}  + J^{ - 1} P^T \tilde Q = 0} \right\}
1196   \]
1292
1197   For a body fixed vector $X_i$ with respect to the center of mass of
1198   the rigid body, its corresponding lab fixed vector $X_0^{lab}$  is
1199   given as
# Line 1308 | Line 1212 | respectively.
1212   \[
1213   \nabla _Q V(q,Q) = F(q,Q)X_i^t
1214   \]
1215 < respectively.
1216 <
1217 < As a common choice to describe the rotation dynamics of the rigid
1314 < body, angular momentum on body frame $\Pi  = Q^t P$ is introduced to
1315 < rewrite the equations of motion,
1215 > respectively. As a common choice to describe the rotation dynamics
1216 > of the rigid body, the angular momentum on the body fixed frame $\Pi
1217 > = Q^t P$ is introduced to rewrite the equations of motion,
1218   \begin{equation}
1219   \begin{array}{l}
1220 < \mathop \Pi \limits^ \bullet   = J^{ - 1} \Pi ^T \Pi  + Q^T \sum\limits_i {F_i (q,Q)X_i^T }  - \Lambda  \\
1221 < \mathop Q\limits^{{\rm{   }} \bullet }  = Q\Pi {\rm{ }}J^{ - 1}  \\
1220 > \dot \Pi  = J^{ - 1} \Pi ^T \Pi  + Q^T \sum\limits_i {F_i (q,Q)X_i^T }  - \Lambda,  \\
1221 > \dot Q  = Q\Pi {\rm{ }}J^{ - 1},  \\
1222   \end{array}
1223   \label{introEqaution:RBMotionPI}
1224   \end{equation}
1225 < , as well as holonomic constraints,
1226 < \[
1227 < \begin{array}{l}
1326 < \Pi J^{ - 1}  + J^{ - 1} \Pi ^t  = 0 \\
1327 < Q^T Q = 1 \\
1328 < \end{array}
1329 < \]
1330 <
1331 < For a vector $v(v_1 ,v_2 ,v_3 ) \in R^3$ and a matrix $\hat v \in
1332 < so(3)^ \star$, the hat-map isomorphism,
1225 > as well as holonomic constraints $\Pi J^{ - 1}  + J^{ - 1} \Pi ^t  =
1226 > 0$ and $Q^T Q = 1$. For a vector $v(v_1 ,v_2 ,v_3 ) \in R^3$ and a
1227 > matrix $\hat v \in so(3)^ \star$, the hat-map isomorphism,
1228   \begin{equation}
1229   v(v_1 ,v_2 ,v_3 ) \Leftrightarrow \hat v = \left(
1230   {\begin{array}{*{20}c}
# Line 1342 | Line 1237 | operations
1237   will let us associate the matrix products with traditional vector
1238   operations
1239   \[
1240 < \hat vu = v \times u
1240 > \hat vu = v \times u.
1241   \]
1242 <
1348 < Using \ref{introEqaution:RBMotionPI}, one can construct a skew
1242 > Using Eq.~\ref{introEqaution:RBMotionPI}, one can construct a skew
1243   matrix,
1244 + \begin{eqnarray}
1245 + (\dot \Pi  - \dot \Pi ^T )&= &(\Pi  - \Pi ^T )(J^{ - 1} \Pi  + \Pi J^{ - 1} ) \notag \\
1246 + & & + \sum\limits_i {[Q^T F_i (r,Q)X_i^T  - X_i F_i (r,Q)^T Q]}  -
1247 + (\Lambda  - \Lambda ^T ). \label{introEquation:skewMatrixPI}
1248 + \end{eqnarray}
1249 + Since $\Lambda$ is symmetric, the last term of
1250 + Eq.~\ref{introEquation:skewMatrixPI} is zero, which implies the
1251 + Lagrange multiplier $\Lambda$ is absent from the equations of
1252 + motion. This unique property eliminates the requirement of
1253 + iterations which can not be avoided in other methods.\cite{Kol1997,
1254 + Omelyan1998} Applying the hat-map isomorphism, we obtain the
1255 + equation of motion for angular momentum in the body frame
1256   \begin{equation}
1351 (\mathop \Pi \limits^ \bullet   - \mathop \Pi \limits^ \bullet  ^T
1352 ){\rm{ }} = {\rm{ }}(\Pi  - \Pi ^T ){\rm{ }}(J^{ - 1} \Pi  + \Pi J^{
1353 - 1} ) + \sum\limits_i {[Q^T F_i (r,Q)X_i^T  - X_i F_i (r,Q)^T Q]} -
1354 (\Lambda  - \Lambda ^T ) . \label{introEquation:skewMatrixPI}
1355 \end{equation}
1356 Since $\Lambda$ is symmetric, the last term of Equation
1357 \ref{introEquation:skewMatrixPI} is zero, which implies the Lagrange
1358 multiplier $\Lambda$ is absent from the equations of motion. This
1359 unique property eliminate the requirement of iterations which can
1360 not be avoided in other methods\cite{Kol1997, Omelyan1998}.
1361
1362 Applying hat-map isomorphism, we obtain the equation of motion for
1363 angular momentum on body frame
1364 \begin{equation}
1257   \dot \pi  = \pi  \times I^{ - 1} \pi  + \sum\limits_i {\left( {Q^T
1258   F_i (r,Q)} \right) \times X_i }.
1259   \label{introEquation:bodyAngularMotion}
# Line 1369 | Line 1261 | given by
1261   In the same manner, the equation of motion for rotation matrix is
1262   given by
1263   \[
1264 < \dot Q = Qskew(I^{ - 1} \pi )
1264 > \dot Q = Qskew(I^{ - 1} \pi ).
1265   \]
1266  
1267   \subsection{\label{introSection:SymplecticFreeRB}Symplectic
1268 < Lie-Poisson Integrator for Free Rigid Body}
1268 > Lie-Poisson Integrator for Free Rigid Bodies}
1269  
1270 < If there is not external forces exerted on the rigid body, the only
1271 < contribution to the rotational is from the kinetic potential (the
1272 < first term of \ref{ introEquation:bodyAngularMotion}). The free
1273 < rigid body is an example of Lie-Poisson system with Hamiltonian
1270 > If there are no external forces exerted on the rigid body, the only
1271 > contribution to the rotational motion is from the kinetic energy
1272 > (the first term of \ref{introEquation:bodyAngularMotion}). The free
1273 > rigid body is an example of a Lie-Poisson system with Hamiltonian
1274   function
1275   \begin{equation}
1276   T^r (\pi ) = T_1 ^r (\pi _1 ) + T_2^r (\pi _2 ) + T_3^r (\pi _3 )
# Line 1391 | Line 1283 | J(\pi ) = \left( {\begin{array}{*{20}c}
1283     0 & {\pi _3 } & { - \pi _2 }  \\
1284     { - \pi _3 } & 0 & {\pi _1 }  \\
1285     {\pi _2 } & { - \pi _1 } & 0  \\
1286 < \end{array}} \right)
1286 > \end{array}} \right).
1287   \end{equation}
1288   Thus, the dynamics of free rigid body is governed by
1289   \begin{equation}
1290 < \frac{d}{{dt}}\pi  = J(\pi )\nabla _\pi  T^r (\pi )
1290 > \frac{d}{{dt}}\pi  = J(\pi )\nabla _\pi  T^r (\pi ).
1291   \end{equation}
1292 <
1293 < One may notice that each $T_i^r$ in Equation
1294 < \ref{introEquation:rotationalKineticRB} can be solved exactly. For
1403 < instance, the equations of motion due to $T_1^r$ are given by
1292 > One may notice that each $T_i^r$ in
1293 > Eq.~\ref{introEquation:rotationalKineticRB} can be solved exactly.
1294 > For instance, the equations of motion due to $T_1^r$ are given by
1295   \begin{equation}
1296   \frac{d}{{dt}}\pi  = R_1 \pi ,\frac{d}{{dt}}Q = QR_1
1297   \label{introEqaution:RBMotionSingleTerm}
1298   \end{equation}
1299 < where
1299 > with
1300   \[ R_1  = \left( {\begin{array}{*{20}c}
1301     0 & 0 & 0  \\
1302     0 & 0 & {\pi _1 }  \\
1303     0 & { - \pi _1 } & 0  \\
1304   \end{array}} \right).
1305   \]
1306 < The solutions of Equation \ref{introEqaution:RBMotionSingleTerm} is
1306 > The solutions of Eq.~\ref{introEqaution:RBMotionSingleTerm} is
1307   \[
1308   \pi (\Delta t) = e^{\Delta tR_1 } \pi (0),Q(\Delta t) =
1309   Q(0)e^{\Delta tR_1 }
# Line 1426 | Line 1317 | tR_1 }$, we can use Cayley transformation,
1317   \end{array}} \right),\theta _1  = \frac{{\pi _1 }}{{I_1 }}\Delta t.
1318   \]
1319   To reduce the cost of computing expensive functions in $e^{\Delta
1320 < tR_1 }$, we can use Cayley transformation,
1320 > tR_1 }$, we can use the Cayley transformation to obtain a
1321 > single-aixs propagator,
1322 > \begin{eqnarray*}
1323 > e^{\Delta tR_1 }  & \approx & (1 - \Delta tR_1 )^{ - 1} (1 + \Delta
1324 > tR_1 ) \\
1325 > %
1326 > & \approx & \left( \begin{array}{ccc}
1327 > 1 & 0 & 0 \\
1328 > 0 & \frac{1-\theta^2 / 4}{1 + \theta^2 / 4}  & -\frac{\theta}{1+
1329 > \theta^2 / 4} \\
1330 > 0 & \frac{\theta}{1+ \theta^2 / 4} & \frac{1-\theta^2 / 4}{1 +
1331 > \theta^2 / 4}
1332 > \end{array}
1333 > \right).
1334 > \end{eqnarray*}
1335 > The propagators for $T_2^r$ and $T_3^r$ can be found in the same
1336 > manner. In order to construct a second-order symplectic method, we
1337 > split the angular kinetic Hamiltonian function into five terms
1338   \[
1431 e^{\Delta tR_1 }  \approx (1 - \Delta tR_1 )^{ - 1} (1 + \Delta tR_1
1432 )
1433 \]
1434 The flow maps for $T_2^r$ and $T_3^r$ can be found in the same
1435 manner.
1436
1437 In order to construct a second-order symplectic method, we split the
1438 angular kinetic Hamiltonian function can into five terms
1439 \[
1339   T^r (\pi ) = \frac{1}{2}T_1 ^r (\pi _1 ) + \frac{1}{2}T_2^r (\pi _2
1340   ) + T_3^r (\pi _3 ) + \frac{1}{2}T_2^r (\pi _2 ) + \frac{1}{2}T_1 ^r
1341 < (\pi _1 )
1342 < \].
1343 < Concatenating flows corresponding to these five terms, we can obtain
1344 < an symplectic integrator,
1341 > (\pi _1 ).
1342 > \]
1343 > By concatenating the propagators corresponding to these five terms,
1344 > we can obtain an symplectic integrator,
1345   \[
1346   \varphi _{\Delta t,T^r }  = \varphi _{\Delta t/2,\pi _1 }  \circ
1347   \varphi _{\Delta t/2,\pi _2 }  \circ \varphi _{\Delta t,\pi _3 }
1348   \circ \varphi _{\Delta t/2,\pi _2 }  \circ \varphi _{\Delta t/2,\pi
1349   _1 }.
1350   \]
1351 <
1453 < The non-canonical Lie-Poisson bracket ${F, G}$ of two function
1454 < $F(\pi )$ and $G(\pi )$ is defined by
1351 > The non-canonical Lie-Poisson bracket $\{F, G\}$ of two functions $F(\pi )$ and $G(\pi )$ is defined by
1352   \[
1353   \{ F,G\} (\pi ) = [\nabla _\pi  F(\pi )]^T J(\pi )\nabla _\pi  G(\pi
1354 < )
1354 > ).
1355   \]
1356   If the Poisson bracket of a function $F$ with an arbitrary smooth
1357   function $G$ is zero, $F$ is a \emph{Casimir}, which is the
1358   conserved quantity in Poisson system. We can easily verify that the
1359   norm of the angular momentum, $\parallel \pi
1360 < \parallel$, is a \emph{Casimir}. Let$ F(\pi ) = S(\frac{{\parallel
1360 > \parallel$, is a \emph{Casimir}.\cite{McLachlan1993} Let $F(\pi ) = S(\frac{{\parallel
1361   \pi \parallel ^2 }}{2})$ for an arbitrary function $ S:R \to R$ ,
1362   then by the chain rule
1363   \[
1364   \nabla _\pi  F(\pi ) = S'(\frac{{\parallel \pi \parallel ^2
1365 < }}{2})\pi
1365 > }}{2})\pi.
1366   \]
1367 < Thus $ [\nabla _\pi  F(\pi )]^T J(\pi ) =  - S'(\frac{{\parallel \pi
1367 > Thus, $ [\nabla _\pi  F(\pi )]^T J(\pi ) =  - S'(\frac{{\parallel
1368 > \pi
1369   \parallel ^2 }}{2})\pi  \times \pi  = 0 $. This explicit
1370 < Lie-Poisson integrator is found to be extremely efficient and stable
1371 < which can be explained by the fact the small angle approximation is
1372 < used and the norm of the angular momentum is conserved.
1370 > Lie-Poisson integrator is found to be both extremely efficient and
1371 > stable. These properties can be explained by the fact the small
1372 > angle approximation is used and the norm of the angular momentum is
1373 > conserved.
1374  
1375   \subsection{\label{introSection:RBHamiltonianSplitting} Hamiltonian
1376   Splitting for Rigid Body}
1377  
1378   The Hamiltonian of rigid body can be separated in terms of kinetic
1379 < energy and potential energy,
1380 < \[
1381 < H = T(p,\pi ) + V(q,Q)
1483 < \]
1484 < The equations of motion corresponding to potential energy and
1485 < kinetic energy are listed in the below table,
1379 > energy and potential energy, $H = T(p,\pi ) + V(q,Q)$. The equations
1380 > of motion corresponding to potential energy and kinetic energy are
1381 > listed in Table~\ref{introTable:rbEquations}.
1382   \begin{table}
1383 < \caption{Equations of motion due to Potential and Kinetic Energies}
1383 > \caption{EQUATIONS OF MOTION DUE TO POTENTIAL AND KINETIC ENERGIES}
1384 > \label{introTable:rbEquations}
1385   \begin{center}
1386   \begin{tabular}{|l|l|}
1387    \hline
# Line 1498 | Line 1395 | A second-order symplectic method is now obtained by th
1395   \end{tabular}
1396   \end{center}
1397   \end{table}
1398 < A second-order symplectic method is now obtained by the
1399 < composition of the flow maps,
1398 > A second-order symplectic method is now obtained by the composition
1399 > of the position and velocity propagators,
1400   \[
1401   \varphi _{\Delta t}  = \varphi _{\Delta t/2,V}  \circ \varphi
1402   _{\Delta t,T}  \circ \varphi _{\Delta t/2,V}.
1403   \]
1404   Moreover, $\varphi _{\Delta t/2,V}$ can be divided into two
1405 < sub-flows which corresponding to force and torque respectively,
1405 > sub-propagators which corresponding to force and torque
1406 > respectively,
1407   \[
1408   \varphi _{\Delta t/2,V}  = \varphi _{\Delta t/2,F}  \circ \varphi
1409   _{\Delta t/2,\tau }.
1410   \]
1411   Since the associated operators of $\varphi _{\Delta t/2,F} $ and
1412 < $\circ \varphi _{\Delta t/2,\tau }$ are commuted, the composition
1413 < order inside $\varphi _{\Delta t/2,V}$ does not matter.
1414 <
1415 < Furthermore, kinetic potential can be separated to translational
1518 < kinetic term, $T^t (p)$, and rotational kinetic term, $T^r (\pi )$,
1412 > $\circ \varphi _{\Delta t/2,\tau }$ commute, the composition order
1413 > inside $\varphi _{\Delta t/2,V}$ does not matter. Furthermore, the
1414 > kinetic energy can be separated to translational kinetic term, $T^t
1415 > (p)$, and rotational kinetic term, $T^r (\pi )$,
1416   \begin{equation}
1417   T(p,\pi ) =T^t (p) + T^r (\pi ).
1418   \end{equation}
1419   where $ T^t (p) = \frac{1}{2}p^T m^{ - 1} p $ and $T^r (\pi )$ is
1420 < defined by \ref{introEquation:rotationalKineticRB}. Therefore, the
1421 < corresponding flow maps are given by
1420 > defined by Eq.~\ref{introEquation:rotationalKineticRB}. Therefore,
1421 > the corresponding propagators are given by
1422   \[
1423   \varphi _{\Delta t,T}  = \varphi _{\Delta t,T^t }  \circ \varphi
1424   _{\Delta t,T^r }.
1425   \]
1426 < Finally, we obtain the overall symplectic flow maps for free moving
1427 < rigid body
1428 < \begin{equation}
1429 < \begin{array}{c}
1430 < \varphi _{\Delta t}  = \varphi _{\Delta t/2,F}  \circ \varphi _{\Delta t/2,\tau }  \\
1431 <  \circ \varphi _{\Delta t,T^t }  \circ \varphi _{\Delta t/2,\pi _1 }  \circ \varphi _{\Delta t/2,\pi _2 }  \circ \varphi _{\Delta t,\pi _3 }  \circ \varphi _{\Delta t/2,\pi _2 }  \circ \varphi _{\Delta t/2,\pi _1 }  \\
1535 <  \circ \varphi _{\Delta t/2,\tau }  \circ \varphi _{\Delta t/2,F}  .\\
1536 < \end{array}
1426 > Finally, we obtain the overall symplectic propagators for freely
1427 > moving rigid bodies
1428 > \begin{eqnarray}
1429 > \varphi _{\Delta t}  &=& \varphi _{\Delta t/2,F}  \circ \varphi _{\Delta t/2,\tau }  \notag\\
1430 >  & & \circ \varphi _{\Delta t,T^t }  \circ \varphi _{\Delta t/2,\pi _1 }  \circ \varphi _{\Delta t/2,\pi _2 }  \circ \varphi _{\Delta t,\pi _3 }  \circ \varphi _{\Delta t/2,\pi _2 }  \circ \varphi _{\Delta t/2,\pi _1 }  \notag\\
1431 >  & & \circ \varphi _{\Delta t/2,\tau }  \circ \varphi _{\Delta t/2,F}  .
1432   \label{introEquation:overallRBFlowMaps}
1433 < \end{equation}
1433 > \end{eqnarray}
1434  
1435   \section{\label{introSection:langevinDynamics}Langevin Dynamics}
1436   As an alternative to newtonian dynamics, Langevin dynamics, which
1437   mimics a simple heat bath with stochastic and dissipative forces,
1438   has been applied in a variety of studies. This section will review
1439 < the theory of Langevin dynamics simulation. A brief derivation of
1440 < generalized Langevin equation will be given first. Follow that, we
1441 < will discuss the physical meaning of the terms appearing in the
1547 < equation as well as the calculation of friction tensor from
1548 < hydrodynamics theory.
1439 > the theory of Langevin dynamics. A brief derivation of the generalized
1440 > Langevin equation will be given first. Following that, we will
1441 > discuss the physical meaning of the terms appearing in the equation.
1442  
1443   \subsection{\label{introSection:generalizedLangevinDynamics}Derivation of Generalized Langevin Equation}
1444  
1445 < Harmonic bath model, in which an effective set of harmonic
1445 > A harmonic bath model, in which an effective set of harmonic
1446   oscillators are used to mimic the effect of a linearly responding
1447   environment, has been widely used in quantum chemistry and
1448   statistical mechanics. One of the successful applications of
1449 < Harmonic bath model is the derivation of Deriving Generalized
1450 < Langevin Dynamics. Lets consider a system, in which the degree of
1449 > Harmonic bath model is the derivation of the Generalized Langevin
1450 > Dynamics (GLE). Consider a system, in which the degree of
1451   freedom $x$ is assumed to couple to the bath linearly, giving a
1452   Hamiltonian of the form
1453   \begin{equation}
1454   H = \frac{{p^2 }}{{2m}} + U(x) + H_B  + \Delta U(x,x_1 , \ldots x_N)
1455   \label{introEquation:bathGLE}.
1456   \end{equation}
1457 < Here $p$ is a momentum conjugate to $q$, $m$ is the mass associated
1458 < with this degree of freedom, $H_B$ is harmonic bath Hamiltonian,
1457 > Here $p$ is a momentum conjugate to $x$, $m$ is the mass associated
1458 > with this degree of freedom, $H_B$ is a harmonic bath Hamiltonian,
1459   \[
1460   H_B  = \sum\limits_{\alpha  = 1}^N {\left\{ {\frac{{p_\alpha ^2
1461 < }}{{2m_\alpha  }} + \frac{1}{2}m_\alpha  \omega _\alpha ^2 }
1461 > }}{{2m_\alpha  }} + \frac{1}{2}m_\alpha  x_\alpha ^2 }
1462   \right\}}
1463   \]
1464   where the index $\alpha$ runs over all the bath degrees of freedom,
1465   $\omega _\alpha$ are the harmonic bath frequencies, $m_\alpha$ are
1466 < the harmonic bath masses, and $\Delta U$ is bilinear system-bath
1466 > the harmonic bath masses, and $\Delta U$ is a bilinear system-bath
1467   coupling,
1468   \[
1469   \Delta U =  - \sum\limits_{\alpha  = 1}^N {g_\alpha  x_\alpha  x}
1470   \]
1471 < where $g_\alpha$ are the coupling constants between the bath and the
1472 < coordinate $x$. Introducing
1471 > where $g_\alpha$ are the coupling constants between the bath
1472 > coordinates ($x_ \alpha$) and the system coordinate ($x$).
1473 > Introducing
1474   \[
1475   W(x) = U(x) - \sum\limits_{\alpha  = 1}^N {\frac{{g_\alpha ^2
1476   }}{{2m_\alpha  w_\alpha ^2 }}} x^2
1477 < \] and combining the last two terms in Equation
1478 < \ref{introEquation:bathGLE}, we may rewrite the Harmonic bath
1585 < Hamiltonian as
1477 > \]
1478 > and combining the last two terms in Eq.~\ref{introEquation:bathGLE}, we may rewrite the Harmonic bath Hamiltonian as
1479   \[
1480   H = \frac{{p^2 }}{{2m}} + W(x) + \sum\limits_{\alpha  = 1}^N
1481   {\left\{ {\frac{{p_\alpha ^2 }}{{2m_\alpha  }} + \frac{1}{2}m_\alpha
1482   w_\alpha ^2 \left( {x_\alpha   - \frac{{g_\alpha  }}{{m_\alpha
1483 < w_\alpha ^2 }}x} \right)^2 } \right\}}
1483 > w_\alpha ^2 }}x} \right)^2 } \right\}}.
1484   \]
1485   Since the first two terms of the new Hamiltonian depend only on the
1486   system coordinates, we can get the equations of motion for
1487 < Generalized Langevin Dynamics by Hamilton's equations
1595 < \ref{introEquation:motionHamiltonianCoordinate,
1596 < introEquation:motionHamiltonianMomentum},
1487 > Generalized Langevin Dynamics by Hamilton's equations,
1488   \begin{equation}
1489   m\ddot x =  - \frac{{\partial W(x)}}{{\partial x}} -
1490   \sum\limits_{\alpha  = 1}^N {g_\alpha  \left( {x_\alpha   -
# Line 1606 | Line 1497 | m\ddot x_\alpha   =  - m_\alpha  w_\alpha ^2 \left( {x
1497   \frac{{g_\alpha  }}{{m_\alpha  w_\alpha ^2 }}x} \right).
1498   \label{introEquation:bathMotionGLE}
1499   \end{equation}
1609
1500   In order to derive an equation for $x$, the dynamics of the bath
1501   variables $x_\alpha$ must be solved exactly first. As an integral
1502   transform which is particularly useful in solving linear ordinary
1503 < differential equations, Laplace transform is the appropriate tool to
1504 < solve this problem. The basic idea is to transform the difficult
1503 > differential equations,the Laplace transform is the appropriate tool
1504 > to solve this problem. The basic idea is to transform the difficult
1505   differential equations into simple algebra problems which can be
1506 < solved easily. Then applying inverse Laplace transform, also known
1507 < as the Bromwich integral, we can retrieve the solutions of the
1508 < original problems.
1509 <
1620 < Let $f(t)$ be a function defined on $ [0,\infty ) $. The Laplace
1621 < transform of f(t) is a new function defined as
1506 > solved easily. Then, by applying the inverse Laplace transform, we
1507 > can retrieve the solutions of the original problems. Let $f(t)$ be a
1508 > function defined on $ [0,\infty ) $, the Laplace transform of $f(t)$
1509 > is a new function defined as
1510   \[
1511   L(f(t)) \equiv F(p) = \int_0^\infty  {f(t)e^{ - pt} dt}
1512   \]
1513   where  $p$ is real and  $L$ is called the Laplace Transform
1514 < Operator. Below are some important properties of Laplace transform
1627 <
1514 > Operator. Below are some important properties of the Laplace transform
1515   \begin{eqnarray*}
1516   L(x + y)  & = & L(x) + L(y) \\
1517   L(ax)     & = & aL(x) \\
# Line 1632 | Line 1519 | Operator. Below are some important properties of Lapla
1519   L(\ddot x)& = & p^2 L(x) - px(0) - \dot x(0) \\
1520   L\left( {\int_0^t {g(t - \tau )h(\tau )d\tau } } \right)& = & G(p)H(p) \\
1521   \end{eqnarray*}
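These rules, together with the elementary transform pairs used in the inversion step below, are easy to check symbolically. The following is a minimal sketch using Python's \texttt{sympy}; the symbol names are illustrative only and the printed comments show the expected results.
\begin{verbatim}
import sympy as sp

t, p, w = sp.symbols('t p omega', positive=True)

# elementary pairs used in this derivation
print(sp.laplace_transform(sp.cos(w*t), t, p, noconds=True))  # p/(p**2 + omega**2)
print(sp.laplace_transform(sp.sin(w*t), t, p, noconds=True))  # omega/(p**2 + omega**2)
print(sp.laplace_transform(sp.S(1), t, p, noconds=True))      # 1/p

# and back again via the inverse transform
print(sp.inverse_laplace_transform(p/(p**2 + w**2), p, t))    # cos(omega*t), times a unit step
\end{verbatim}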
1522 <
1636 <
1637 < Applying Laplace transform to the bath coordinates, we obtain
1522 > Applying the Laplace transform to the bath coordinates, we obtain
1523   \begin{eqnarray*}
1524 < p^2 L(x_\alpha  ) - px_\alpha  (0) - \dot x_\alpha  (0) & = & - \omega _\alpha ^2 L(x_\alpha  ) + \frac{{g_\alpha  }}{{\omega _\alpha  }}L(x) \\
1525 < L(x_\alpha  ) & = & \frac{{\frac{{g_\alpha  }}{{\omega _\alpha  }}L(x) + px_\alpha  (0) + \dot x_\alpha  (0)}}{{p^2  + \omega _\alpha ^2 }} \\
1524 > p^2 L(x_\alpha  ) - px_\alpha  (0) - \dot x_\alpha  (0) & = & - \omega _\alpha ^2 L(x_\alpha  ) + \frac{{g_\alpha  }}{{m_\alpha  }}L(x), \\
1525 > L(x_\alpha  ) & = & \frac{{\frac{{g_\alpha  }}{{m_\alpha  }}L(x) + px_\alpha  (0) + \dot x_\alpha  (0)}}{{p^2  + \omega _\alpha ^2 }}. \\
1526   \end{eqnarray*}
1527 <
1643 < By the same way, the system coordinates become
1527 > In the same way, the system coordinates become
1528   \begin{eqnarray*}
1529 < mL(\ddot x) & = & - \frac{1}{p}\frac{{\partial W(x)}}{{\partial x}} \\
1530 <  & & \mbox{} - \sum\limits_{\alpha  = 1}^N {\left\{ { - \frac{{g_\alpha ^2 }}{{m_\alpha  \omega _\alpha ^2 }}\frac{p}{{p^2  + \omega _\alpha ^2 }}pL(x) - \frac{p}{{p^2  + \omega _\alpha ^2 }}g_\alpha  x_\alpha  (0) - \frac{1}{{p^2  + \omega _\alpha ^2 }}g_\alpha  \dot x_\alpha  (0)} \right\}}  \\
1529 > mL(\ddot x) & = &
1530 >  - \sum\limits_{\alpha  = 1}^N {\left\{ { - \frac{{g_\alpha ^2 }}{{m_\alpha  \omega _\alpha ^2 }}\frac{p}{{p^2  + \omega _\alpha ^2 }}pL(x) - \frac{p}{{p^2  + \omega _\alpha ^2 }}g_\alpha  x_\alpha  (0) - \frac{1}{{p^2  + \omega _\alpha ^2 }}g_\alpha  \dot x_\alpha  (0)} \right\}}  \\
1531 >  & & - \frac{1}{p}\frac{{\partial W(x)}}{{\partial x}}.
1532   \end{eqnarray*}
1648
1533   With the help of a few important inverse Laplace
1534   transformations:
1535   \[
# Line 1655 | Line 1539 | transformations:
1539   L(1) = \frac{1}{p} \\
1540   \end{array}
1541   \]
1542 < , we obtain
1543 < \[
1544 < m\ddot x =  - \frac{{\partial W(x)}}{{\partial x}} -
1542 > we obtain
1543 > \begin{eqnarray*}
1544 > m\ddot x & =  & - \frac{{\partial W(x)}}{{\partial x}} -
1545   \sum\limits_{\alpha  = 1}^N {\left\{ {\left( { - \frac{{g_\alpha ^2
1546   }}{{m_\alpha  \omega _\alpha ^2 }}} \right)\int_0^t {\cos (\omega
1547 < _\alpha  t)\dot x(t - \tau )d\tau  - \left[ {g_\alpha  x_\alpha  (0)
1548 < - \frac{{g_\alpha  }}{{m_\alpha  \omega _\alpha  }}} \right]\cos
1549 < (\omega _\alpha  t) - \frac{{g_\alpha  \dot x_\alpha  (0)}}{{\omega
1550 < _\alpha  }}\sin (\omega _\alpha  t)} } \right\}}
1551 < \]
1552 < \[
1553 < m\ddot x =  - \frac{{\partial W(x)}}{{\partial x}} - \int_0^t
1554 < {\sum\limits_{\alpha  = 1}^N {\left( { - \frac{{g_\alpha ^2
1555 < }}{{m_\alpha  \omega _\alpha ^2 }}} \right)\cos (\omega _\alpha
1556 < t)\dot x(t - \tau )d} \tau }  + \sum\limits_{\alpha  = 1}^N {\left\{
1557 < {\left[ {g_\alpha  x_\alpha  (0) - \frac{{g_\alpha  }}{{m_\alpha
1558 < \omega _\alpha  }}} \right]\cos (\omega _\alpha  t) +
1559 < \frac{{g_\alpha  \dot x_\alpha  (0)}}{{\omega _\alpha  }}\sin
1560 < (\omega _\alpha  t)} \right\}}
1561 < \]
1562 <
1547 > _\alpha  t)\dot x(t - \tau )d\tau } } \right\}}  \\
1548 > & & + \sum\limits_{\alpha  = 1}^N {\left\{ {\left[ {g_\alpha
1549 > x_\alpha (0) - \frac{{g_\alpha  }}{{m_\alpha  \omega _\alpha  }}}
1550 > \right]\cos (\omega _\alpha  t) + \frac{{g_\alpha  \dot x_\alpha
1551 > (0)}}{{\omega _\alpha  }}\sin (\omega _\alpha  t)} \right\}}\\
1552 > %
1553 > & = & -
1554 > \frac{{\partial W(x)}}{{\partial x}} - \int_0^t {\sum\limits_{\alpha
1555 > = 1}^N {\left( { - \frac{{g_\alpha ^2 }}{{m_\alpha  \omega _\alpha
1556 > ^2 }}} \right)\cos (\omega _\alpha
1557 > t)\dot x(t - \tau )d} \tau }  \\
1558 > & & + \sum\limits_{\alpha  = 1}^N {\left\{ {\left[ {g_\alpha
1559 > x_\alpha (0) - \frac{{g_\alpha }}{{m_\alpha \omega _\alpha  }}}
1560 > \right]\cos (\omega _\alpha  t) + \frac{{g_\alpha  \dot x_\alpha
1561 > (0)}}{{\omega _\alpha  }}\sin (\omega _\alpha  t)} \right\}}
1562 > \end{eqnarray*}
1563   Introducing a \emph{dynamic friction kernel}
1564   \begin{equation}
1565   \xi (t) = \sum\limits_{\alpha  = 1}^N {\left( { - \frac{{g_\alpha ^2
# Line 1696 | Line 1580 | which is known as the \emph{generalized Langevin equat
1580   (t)\dot x(t - \tau )d\tau }  + R(t)
1581   \label{introEuqation:GeneralizedLangevinDynamics}
1582   \end{equation}
1583 < which is known as the \emph{generalized Langevin equation}.
1583 > which is known as the \emph{generalized Langevin equation} (GLE).
1584  
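In numerical work the memory term of the GLE has to be evaluated from a stored velocity history. A minimal sketch of one common approach, trapezoidal quadrature of the convolution on a uniform time grid, is given below; the function name and arguments are illustrative and not part of any particular simulation package.
\begin{verbatim}
import numpy as np

def memory_term(xi, vel_history, dt):
    """Approximate int_0^t xi(tau) * xdot(t - tau) dtau.

    xi          -- friction kernel sampled at tau = 0, dt, 2*dt, ...
    vel_history -- velocities xdot at times 0, dt, ..., t (same grid)
    dt          -- time step
    """
    v_rev = np.asarray(vel_history)[::-1]        # xdot(t - tau) for tau = 0 ... t
    integrand = np.asarray(xi)[:len(v_rev)] * v_rev
    # trapezoidal rule on the uniform grid
    return dt * (integrand.sum() - 0.5 * (integrand[0] + integrand[-1]))
\end{verbatim}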
1585 < \subsubsection{\label{introSection:randomForceDynamicFrictionKernel}Random Force and Dynamic Friction Kernel}
1585 > \subsubsection{\label{introSection:randomForceDynamicFrictionKernel}\textbf{Random Force and Dynamic Friction Kernel}}
1586  
1587   One may notice that $R(t)$ depends only on initial conditions, which
1588   implies it is completely deterministic within the context of a
1589   harmonic bath. However, it is easy to verify that $R(t)$ is totally
1590 < uncorrelated to $x$ and $\dot x$,
1591 < \[
1592 < \begin{array}{l}
1593 < \left\langle {x(t)R(t)} \right\rangle  = 0, \\
1594 < \left\langle {\dot x(t)R(t)} \right\rangle  = 0. \\
1711 < \end{array}
1712 < \]
1713 < This property is what we expect from a truly random process. As long
1714 < as the model, which is gaussian distribution in general, chosen for
1715 < $R(t)$ is a truly random process, the stochastic nature of the GLE
1716 < still remains.
1717 <
1590 > uncorrelated with $x$ and $\dot x$, $\left\langle {x(t)R(t)}
1591 > \right\rangle  = 0, \left\langle {\dot x(t)R(t)} \right\rangle  =
1592 > 0.$ This property is what we expect from a truly random process. As
1593 > long as the model chosen for $R(t)$ is a truly random process (in
1594 > general, a Gaussian distribution), the stochastic nature of the GLE is preserved.
1595   %dynamic friction kernel
1596   The convolution integral
1597   \[
# Line 1729 | Line 1606 | and Equation \ref{introEuqation:GeneralizedLangevinDyn
1606   \[
1607   \int_0^t {\xi (t)\dot x(t - \tau )d\tau }  = \xi _0 (x(t) - x(0))
1608   \]
1609 < and Equation \ref{introEuqation:GeneralizedLangevinDynamics} becomes
1609 > and Eq.~\ref{introEuqation:GeneralizedLangevinDynamics} becomes
1610   \[
1611   m\ddot x =  - \frac{\partial }{{\partial x}}\left( {W(x) +
1612   \frac{1}{2}\xi _0 (x - x_0 )^2 } \right) + R(t),
1613   \]
1614 < which can be used to describe dynamic caging effect. The other
1615 < extreme is the bath that responds infinitely quickly to motions in
1616 < the system. Thus, $\xi (t)$ can be taken as a $delta$ function in
1617 < time:
1614 > which can be used to describe the effect of dynamic caging in
1615 > viscous solvents. The other extreme is the bath that responds
1616 > infinitely quickly to motions in the system. Thus, $\xi (t)$ can be
1617 > taken as a $\delta$ function in time:
1618   \[
1619 < \xi (t) = 2\xi _0 \delta (t)
1619 > \xi (t) = 2\xi _0 \delta (t).
1620   \]
1621   Hence, the convolution integral becomes
1622   \[
1623   \int_0^t {\xi (t)\dot x(t - \tau )d\tau }  = 2\xi _0 \int_0^t
1624   {\delta (t)\dot x(t - \tau )d\tau }  = \xi _0 \dot x(t),
1625   \]
1626 < and Equation \ref{introEuqation:GeneralizedLangevinDynamics} becomes
1626 > and Eq.~\ref{introEuqation:GeneralizedLangevinDynamics} becomes
1627   \begin{equation}
1628   m\ddot x =  - \frac{{\partial W(x)}}{{\partial x}} - \xi _0 \dot
1629   x(t) + R(t) \label{introEquation:LangevinEquation}
1630   \end{equation}
1631   which is known as the Langevin equation. The static friction
1632   coefficient $\xi _0$ can either be calculated from the spectral density
1633 < or be determined by Stokes' law for regular shaped particles.A
1634 < briefly review on calculating friction tensor for arbitrary shaped
1633 > or be determined by Stokes' law for regularly shaped particles. A
1634 > brief review of calculating friction tensors for arbitrarily shaped
1635   particles is given in Sec.~\ref{introSection:frictionTensor}.
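As an illustration of Eq.~\ref{introEquation:LangevinEquation}, a minimal explicit Euler integration step is sketched below. The discretized random force is drawn with variance $2\xi _0 kT/\Delta t$, consistent with the Markovian limit of the second fluctuation dissipation theorem discussed below; the function and variable names, and the harmonic potential used in the example, are illustrative only.
\begin{verbatim}
import numpy as np

def langevin_step(x, v, force, m, xi0, kT, dt, rng):
    """One explicit Euler step of m*xddot = -dW/dx - xi0*xdot + R(t)."""
    R = rng.normal(0.0, np.sqrt(2.0 * xi0 * kT / dt))   # discretized random force
    a = (force(x) - xi0 * v + R) / m
    return x + v * dt, v + a * dt

# example: harmonic potential W(x) = 0.5*k*x^2, so -dW/dx = -k*x
rng = np.random.default_rng(0)
x, v = 1.0, 0.0
for _ in range(1000):
    x, v = langevin_step(x, v, lambda y: -1.0 * y, m=1.0, xi0=0.5, kT=1.0, dt=0.01, rng=rng)
\end{verbatim}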
1636  
1637 < \subsubsection{\label{introSection:secondFluctuationDissipation}The Second Fluctuation Dissipation Theorem}
1637 > \subsubsection{\label{introSection:secondFluctuationDissipation}\textbf{The Second Fluctuation Dissipation Theorem}}
1638  
1639 < Defining a new set of coordinates,
1639 > Defining a new set of coordinates
1640   \[
1641   q_\alpha  (t) = x_\alpha  (t) - \frac{{g_\alpha  }}{{m_\alpha  \omega _\alpha
1642 < ^2 }}x(0)
1643 < \],
1644 < we can rewrite $R(T)$ as
1642 > ^2 }}x(0),
1643 > \]
1644 > we can rewrite $R(t)$ as
1645   \[
1646   R(t) = \sum\limits_{\alpha  = 1}^N {g_\alpha  q_\alpha  (t)}.
1647   \]
1648   And since the $q$ coordinates are harmonic oscillators,
1772
1649   \begin{eqnarray*}
1650   \left\langle {q_\alpha ^2 } \right\rangle  & = & \frac{{kT}}{{m_\alpha  \omega _\alpha ^2 }} \\
1651   \left\langle {q_\alpha  (t)q_\alpha  (0)} \right\rangle & = & \left\langle {q_\alpha ^2 (0)} \right\rangle \cos (\omega _\alpha  t) \\
1652   \left\langle {q_\alpha  (t)q_\beta  (0)} \right\rangle & = &\delta _{\alpha \beta } \left\langle {q_\alpha  (t)q_\alpha  (0)} \right\rangle  \\
1653   \left\langle {R(t)R(0)} \right\rangle & = & \sum\limits_\alpha  {\sum\limits_\beta  {g_\alpha  g_\beta  \left\langle {q_\alpha  (t)q_\beta  (0)} \right\rangle } }  \\
1654    & = &\sum\limits_\alpha  {g_\alpha ^2 \left\langle {q_\alpha ^2 (0)} \right\rangle \cos (\omega _\alpha  t)}  \\
1655 <  & = &kT\xi (t) \\
1655 >  & = &kT\xi (t)
1656   \end{eqnarray*}
1781
1657   Thus, we recover the \emph{second fluctuation dissipation theorem}
1658   \begin{equation}
1659   \xi (t) = \frac{1}{{kT}}\left\langle {R(t)R(0)} \right\rangle
1660 < \label{introEquation:secondFluctuationDissipation}.
1660 > \label{introEquation:secondFluctuationDissipation},
1661   \end{equation}
1662 < In effect, it acts as a constraint on the possible ways in which one
1663 < can model the random force and friction kernel.
1789 <
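The theorem can be checked numerically for a small model bath: drawing Boltzmann-distributed initial conditions for the $q_\alpha$ and propagating them as free harmonic oscillators reproduces $\left\langle {R(t)R(0)} \right\rangle  = kT\xi (t)$ with $\xi (t) = \sum\nolimits_\alpha  {(g_\alpha ^2 /m_\alpha  \omega _\alpha ^2 )\cos (\omega _\alpha  t)}$ to within sampling noise. A minimal sketch follows; the couplings, masses, and frequencies are illustrative values only.
\begin{verbatim}
import numpy as np

rng = np.random.default_rng(0)
kT = 1.0
g = np.array([0.5, 0.8, 1.1])      # couplings g_alpha
m = np.array([1.0, 2.0, 0.7])      # bath masses
w = np.array([1.3, 2.1, 3.4])      # bath frequencies

t = np.linspace(0.0, 10.0, 200)
n = 5000                           # number of bath samples

# Boltzmann-distributed initial conditions for q_alpha and its conjugate momentum
q0 = rng.normal(0.0, np.sqrt(kT / (m * w**2)), size=(n, 3))
p0 = rng.normal(0.0, np.sqrt(kT * m), size=(n, 3))

# free harmonic evolution of each q_alpha, then R(t) = sum_alpha g_alpha q_alpha(t)
qt = q0[:, None, :] * np.cos(w * t[:, None]) + (p0 / (m * w))[:, None, :] * np.sin(w * t[:, None])
R = (g * qt).sum(axis=-1)

corr = (R * R[:, :1]).mean(axis=0)                              # <R(t)R(0)>
kernel = kT * (g**2 / (m * w**2) * np.cos(w * t[:, None])).sum(axis=-1)
print(np.max(np.abs(corr - kernel)))                            # small, up to sampling noise
\end{verbatim}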
1790 < \subsection{\label{introSection:frictionTensor} Friction Tensor}
1791 < Theoretically, the friction kernel can be determined using the
1792 < velocity autocorrelation function. However, this approach becomes
1793 < impractical as the system becomes more complicated. Instead, various
1794 < approaches based on hydrodynamics have been developed to calculate
1795 < the friction coefficients. If the friction is isotropic, $\zeta$ can
1796 < be taken as a scalar. In general, the friction tensor $\Xi$ is a
1797 < $6\times 6$ matrix given by
1798 < \[
1799 < \Xi  = \left( {\begin{array}{*{20}c}
1800 <   {\Xi _{}^{tt} } & {\Xi _{}^{rt} }  \\
1801 <   {\Xi _{}^{tr} } & {\Xi _{}^{rr} }  \\
1802 < \end{array}} \right).
1803 < \]
1804 < Here, $ {\Xi^{tt} }$ and $ {\Xi^{rr} }$ are the translational friction
1805 < tensor and the rotational resistance (friction) tensor, respectively,
1806 < while ${\Xi^{tr} }$ is the translation-rotation coupling tensor and $
1807 < {\Xi^{rt} }$ is the rotation-translation coupling tensor. When a
1808 < particle moves in a fluid, it experiences a friction force and a
1809 < torque directed opposite to its velocity and angular
1810 < velocity,
1811 < \[
1812 < \left( \begin{array}{l}
1813 < F_R  \\
1814 < \tau _R  \\
1815 < \end{array} \right) =  - \left( {\begin{array}{*{20}c}
1816 <   {\Xi ^{tt} } & {\Xi ^{rt} }  \\
1817 <   {\Xi ^{tr} } & {\Xi ^{rr} }  \\
1818 < \end{array}} \right)\left( \begin{array}{l}
1819 < v \\
1820 < w \\
1821 < \end{array} \right)
1822 < \]
1823 < where $F_R$ is the friction force and $\tau _R$ is the friction
1824 < torque.
1825 <
1826 < \subsubsection{\label{introSection:resistanceTensorRegular}The Resistance Tensor for Regular Shape}
1827 <
1828 < For a spherical particle, the translational and rotational friction
1829 < constants can be calculated from Stokes' law,
1830 < \[
1831 < \Xi ^{tt}  = \left( {\begin{array}{*{20}c}
1832 <   {6\pi \eta R} & 0 & 0  \\
1833 <   0 & {6\pi \eta R} & 0  \\
1834 <   0 & 0 & {6\pi \eta R}  \\
1835 < \end{array}} \right)
1836 < \]
1837 < and
1838 < \[
1839 < \Xi ^{rr}  = \left( {\begin{array}{*{20}c}
1840 <   {8\pi \eta R^3 } & 0 & 0  \\
1841 <   0 & {8\pi \eta R^3 } & 0  \\
1842 <   0 & 0 & {8\pi \eta R^3 }  \\
1843 < \end{array}} \right)
1844 < \]
1845 < where $\eta$ is the viscosity of the solvent and $R$ is the
1846 < hydrodynamic radius.
1847 <
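These diagonal tensors amount to the familiar scalar Stokes friction coefficients, $6\pi \eta R$ for translation and $8\pi \eta R^3$ for rotation. A trivial numerical sketch (the function name is illustrative only):
\begin{verbatim}
import numpy as np

def sphere_resistance(eta, R):
    """Stokes-law translational and rotational resistance tensors of a sphere."""
    xi_tt = 6.0 * np.pi * eta * R * np.eye(3)
    xi_rr = 8.0 * np.pi * eta * R**3 * np.eye(3)
    return xi_tt, xi_rr
\end{verbatim}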
1848 < Other non-spherical shapes, such as cylinders and ellipsoids,
1849 < are widely used as references for developing new
1850 < hydrodynamic theories, because their properties can be calculated
1851 < exactly. In 1936, Perrin extended Stokes' law to the general ellipsoid,
1852 < also called a triaxial ellipsoid, which is given in Cartesian
1853 < coordinates by\cite{Perrin1934, Perrin1936}
1854 < \[
1855 < \frac{{x^2 }}{{a^2 }} + \frac{{y^2 }}{{b^2 }} + \frac{{z^2 }}{{c^2
1856 < }} = 1
1857 < \]
1858 < where the semi-axes are of lengths $a$, $b$, and $c$. Unfortunately,
1859 < due to the complexity of the elliptic integral, only ellipsoids
1860 < with the restriction that two axes are equal, \textit{i.e.}
1861 < prolate ($ a \ge b = c$) and oblate ($ a < b = c $) ellipsoids, can be
1862 < solved exactly. Introducing an elliptic integral parameter $S$ for prolate ellipsoids,
1863 < \[
1864 < S = \frac{2}{{\sqrt {a^2  - b^2 } }}\ln \frac{{a + \sqrt {a^2  - b^2
1865 < } }}{b},
1866 < \]
1867 < and for oblate ellipsoids,
1868 < \[
1869 < S = \frac{2}{{\sqrt {b^2  - a^2 } }}\arctan \frac{{\sqrt {b^2  - a^2
1870 < } }}{a},
1871 < \]
1872 < one can write down the translational and rotational resistance
1873 < tensors
1874 < \[
1875 < \begin{array}{l}
1876 < \Xi _a^{tt}  = 16\pi \eta \frac{{a^2  - b^2 }}{{(2a^2  - b^2 )S - 2a}} \\
1877 < \Xi _b^{tt}  = \Xi _c^{tt}  = 32\pi \eta \frac{{a^2  - b^2 }}{{(2a^2  - 3b^2 )S + 2a}} \\
1878 < \end{array},
1879 < \]
1880 < and
1881 < \[
1882 < \begin{array}{l}
1883 < \Xi _a^{rr}  = \frac{{32\pi }}{3}\eta \frac{{(a^2  - b^2 )b^2 }}{{2a - b^2 S}} \\
1884 < \Xi _b^{rr}  = \Xi _c^{rr}  = \frac{{32\pi }}{3}\eta \frac{{(a^4  - b^4 )}}{{(2a^2  - b^2 )S - 2a}} \\
1885 < \end{array}.
1886 < \]
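For a prolate ellipsoid these expressions are straightforward to evaluate numerically. The sketch below simply transcribes the formulas above; the function name is illustrative and nothing beyond the expressions as written is claimed.
\begin{verbatim}
import numpy as np

def perrin_prolate(eta, a, b):
    """Axial (a) and equatorial (b) resistance coefficients of a prolate
    ellipsoid (a >= b = c), using the expressions given above."""
    S = 2.0 / np.sqrt(a**2 - b**2) * np.log((a + np.sqrt(a**2 - b**2)) / b)
    xi_tt_a = 16.0 * np.pi * eta * (a**2 - b**2) / ((2.0*a**2 - b**2) * S - 2.0*a)
    xi_tt_b = 32.0 * np.pi * eta * (a**2 - b**2) / ((2.0*a**2 - 3.0*b**2) * S + 2.0*a)
    xi_rr_a = (32.0*np.pi/3.0) * eta * (a**2 - b**2) * b**2 / (2.0*a - b**2 * S)
    xi_rr_b = (32.0*np.pi/3.0) * eta * (a**4 - b**4) / ((2.0*a**2 - b**2) * S - 2.0*a)
    return xi_tt_a, xi_tt_b, xi_rr_a, xi_rr_b
\end{verbatim}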
1887 <
1888 < \subsubsection{\label{introSection:resistanceTensorRegularArbitrary}The Resistance Tensor for Arbitrary Shape}
1889 <
1890 < Unlike spherical and other regularly shaped molecules, there is no
1891 < analytical solution for the friction tensor of an arbitrarily shaped
1892 < rigid molecule. The ellipsoid of revolution model and the general
1893 < triaxial ellipsoid model have been used to approximate the
1894 < hydrodynamic properties of rigid bodies. However, since the mapping
1895 < from the space of all possible ellipsoids, $r$-space, to the space of
1896 < all possible combinations of rotational diffusion coefficients,
1897 < $D$-space, is not unique\cite{Wegener1979}, and because of the
1898 < intrinsic coupling between the translational and rotational motion of
1899 < a rigid body, the general ellipsoid is not always suitable for
1900 < modeling an arbitrarily shaped rigid molecule. A number of studies
1901 < have been devoted to determining the friction tensor for irregularly
1902 < shaped rigid bodies using a more advanced method in which the
1903 < molecule of interest is modeled by a combination of spheres
1904 < (beads)\cite{Carrasco1999} and the hydrodynamic properties of the
1905 < molecule are calculated using the hydrodynamic interaction tensor.
1906 < Let us consider a rigid assembly of $N$ beads immersed in a
1907 < continuous medium. Due to hydrodynamic interactions, the ``net''
1908 < velocity of the $i$th bead, $v'_i$, is different from its unperturbed velocity $v_i$,
1909 < \[
1910 < v'_i  = v_i  - \sum\limits_{j \ne i} {T_{ij} F_j }
1911 < \]
1912 < where $F_j$ is the frictional force on bead $j$, and $T_{ij}$ is the
1913 < hydrodynamic interaction tensor. The friction force on the $i$th bead is
1914 < proportional to its ``net'' velocity
1915 < \begin{equation}
1916 < F_i  = \zeta _i v_i  - \zeta _i \sum\limits_{j \ne i} {T_{ij} F_j }.
1917 < \label{introEquation:tensorExpression}
1918 < \end{equation}
1919 < This equation is the basis for deriving the hydrodynamic tensor. In
1920 < 1930, Oseen and Burgers gave a simple solution to Equation
1921 < \ref{introEquation:tensorExpression}
1922 < \begin{equation}
1923 < T_{ij}  = \frac{1}{{8\pi \eta R_{ij} }}\left( {I + \frac{{R_{ij}
1924 < R_{ij}^T }}{{R_{ij}^2 }}} \right).
1925 < \label{introEquation:oseenTensor}
1926 < \end{equation}
1927 < Here $R_{ij}$ is the distance vector between bead $i$ and bead $j$.
1928 < A second order expression for beads of different sizes was
1929 < introduced by Rotne and Prager\cite{Rotne1969} and improved by
1930 < Garc\'{i}a de la Torre and Bloomfield\cite{Torre1977},
1931 < \begin{equation}
1932 < T_{ij}  = \frac{1}{{8\pi \eta R_{ij} }}\left[ {\left( {I +
1933 < \frac{{R_{ij} R_{ij}^T }}{{R_{ij}^2 }}} \right) + \frac{{\sigma
1934 < _i^2  + \sigma _j^2 }}{{R_{ij}^2 }}\left( {\frac{I}{3} -
1935 < \frac{{R_{ij} R_{ij}^T }}{{R_{ij}^2 }}} \right)} \right].
1936 < \label{introEquation:RPTensorNonOverlapped}
1937 < \end{equation}
1938 < Both Equation \ref{introEquation:oseenTensor} and Equation
1939 < \ref{introEquation:RPTensorNonOverlapped} assume $R_{ij}
1940 < \ge \sigma _i  + \sigma _j$. An alternative expression for
1941 < overlapping beads with the same radius, $\sigma$, is given by
1942 < \begin{equation}
1943 < T_{ij}  = \frac{1}{{6\pi \eta \sigma }}\left[ {\left( {1 -
1944 < \frac{9}{{32}}\frac{{R_{ij} }}{\sigma }} \right)I +
1945 < \frac{3}{{32}}\frac{{R_{ij} R_{ij}^T }}{{R_{ij} \sigma }}} \right]
1946 < \label{introEquation:RPTensorOverlapped}
1947 < \end{equation}
1948 <
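A minimal sketch of the non-overlapping interaction tensors in the form given above; the function names are illustrative, and $r_{ij}$ denotes the bead--bead separation vector.
\begin{verbatim}
import numpy as np

def oseen_tensor(r_ij, eta):
    """Oseen tensor, in the form of Equation (introEquation:oseenTensor)."""
    R = np.linalg.norm(r_ij)
    rr = np.outer(r_ij, r_ij) / R**2
    return (np.eye(3) + rr) / (8.0 * np.pi * eta * R)

def rotne_prager_tensor(r_ij, eta, sigma_i, sigma_j):
    """Rotne-Prager tensor for non-overlapping beads (R_ij >= sigma_i + sigma_j)."""
    R = np.linalg.norm(r_ij)
    rr = np.outer(r_ij, r_ij) / R**2
    corr = (sigma_i**2 + sigma_j**2) / R**2
    return ((np.eye(3) + rr) + corr * (np.eye(3)/3.0 - rr)) / (8.0 * np.pi * eta * R)
\end{verbatim}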
1949 < To calculate the resistance tensor at an arbitrary origin $O$, we
1950 < construct a $3N \times 3N$ matrix consisting of $N \times N$
1951 < $B_{ij}$ blocks
1952 < \begin{equation}
1953 < B = \left( {\begin{array}{*{20}c}
1954 <   {B_{11} } &  \ldots  & {B_{1N} }  \\
1955 <    \vdots  &  \ddots  &  \vdots   \\
1956 <   {B_{N1} } &  \cdots  & {B_{NN} }  \\
1957 < \end{array}} \right),
1958 < \end{equation}
1959 < where $B_{ij}$ is given by
1960 < \[
1961 < B_{ij}  = \delta _{ij} \frac{I}{{6\pi \eta R}} + (1 - \delta _{ij}
1962 < )T_{ij}
1963 < \]
1964 < where $\delta _{ij}$ is the Kronecker delta function. Inverting the matrix
1965 < $B$, we obtain
1966 <
1967 < \[
1968 < C = B^{ - 1}  = \left( {\begin{array}{*{20}c}
1969 <   {C_{11} } &  \ldots  & {C_{1N} }  \\
1970 <    \vdots  &  \ddots  &  \vdots   \\
1971 <   {C_{N1} } &  \cdots  & {C_{NN} }  \\
1972 < \end{array}} \right)
1973 < \]
1974 < which can be partitioned into $N \times N$ blocks of $3 \times 3$
1975 < matrices $C_{ij}$. With the help of $C_{ij}$ and the skew matrix $U_i$,
1976 < \[
1977 < U_i  = \left( {\begin{array}{*{20}c}
1978 <   0 & { - z_i } & {y_i }  \\
1979 <   {z_i } & 0 & { - x_i }  \\
1980 <   { - y_i } & {x_i } & 0  \\
1981 < \end{array}} \right)
1982 < \]
1983 < where $x_i$, $y_i$, $z_i$ are the components of the vector joining
1984 < bead $i$ and origin $O$. Hence, the elements of the resistance tensor
1985 < at an arbitrary origin $O$ can be written as
1986 < \begin{equation}
1987 < \begin{array}{l}
1988 < \Xi _{}^{tt}  = \sum\limits_i {\sum\limits_j {C_{ij} } } , \\
1989 < \Xi _{}^{tr}  = \Xi _{}^{rt}  = \sum\limits_i {\sum\limits_j {U_i C_{ij} } } , \\
1990 < \Xi _{}^{rr}  =  - \sum\limits_i {\sum\limits_j {U_i C_{ij} } } U_j  \\
1991 < \end{array}
1992 < \label{introEquation:ResistanceTensorArbitraryOrigin}
1993 < \end{equation}
1994 <
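The construction above maps directly onto a short numerical procedure: fill the $3N \times 3N$ matrix $B$, invert it, and contract the $3 \times 3$ blocks of $C$ with the skew matrices $U_i$. A minimal sketch follows; it assumes one radius per bead for the diagonal blocks, uses the Oseen tensor for the off-diagonal blocks (any of the interaction tensors above could be substituted), takes bead positions relative to the origin $O$, and all names are illustrative.
\begin{verbatim}
import numpy as np

def skew(r):
    x, y, z = r
    return np.array([[0.0, -z, y], [z, 0.0, -x], [-y, x, 0.0]])

def oseen(r_ij, eta):
    R = np.linalg.norm(r_ij)
    rr = np.outer(r_ij, r_ij) / R**2
    return (np.eye(3) + rr) / (8.0 * np.pi * eta * R)

def resistance_tensor(pos, radii, eta):
    """Xi^tt, Xi^tr and Xi^rr at the origin O, following
    Equation (introEquation:ResistanceTensorArbitraryOrigin)."""
    N = len(pos)
    B = np.zeros((3 * N, 3 * N))
    for i in range(N):
        for j in range(N):
            block = (np.eye(3) / (6.0 * np.pi * eta * radii[i]) if i == j
                     else oseen(pos[i] - pos[j], eta))
            B[3*i:3*i+3, 3*j:3*j+3] = block
    C = np.linalg.inv(B)
    U = [skew(r) for r in pos]   # U_i built from the vector joining bead i and O
    xi_tt = np.zeros((3, 3)); xi_tr = np.zeros((3, 3)); xi_rr = np.zeros((3, 3))
    for i in range(N):
        for j in range(N):
            Cij = C[3*i:3*i+3, 3*j:3*j+3]
            xi_tt += Cij
            xi_tr += U[i] @ Cij
            xi_rr -= U[i] @ Cij @ U[j]
    return xi_tt, xi_tr, xi_rr
\end{verbatim}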
1995 < The resistance tensor depends on the origin to which it refers. The
1996 < proper location for applying the friction force is the center of
1997 < resistance (or center of reaction), at which the trace of the
1998 < rotational resistance tensor, $ \Xi ^{rr}$, reaches a minimum.
1999 < Mathematically, the center of resistance is defined as the unique
2000 < point of the rigid body at which the translation-rotation coupling tensor is symmetric,
2001 < \begin{equation}
2002 < \Xi^{tr}  = \left( {\Xi^{tr} } \right)^T
2003 < \label{introEquation:definitionCR}
2004 < \end{equation}
2005 < From Equation \ref{introEquation:ResistanceTensorArbitraryOrigin},
2006 < we can easily see that the translational resistance tensor is
2007 < origin independent, while the rotational resistance tensor and the
2008 < translation-rotation coupling tensor depend on the
2009 < origin. Given the resistance tensor at an arbitrary origin $O$ and a
2010 < vector $r_{OP} = (x_{OP}, y_{OP}, z_{OP})$ from $O$ to $P$, we can
2011 < obtain the resistance tensor at $P$ by
2012 < \begin{equation}
2013 < \begin{array}{l}
2014 < \Xi _P^{tt}  = \Xi _O^{tt}  \\
2015 < \Xi _P^{tr}  = \Xi _P^{rt}  = \Xi _O^{tr}  - U_{OP} \Xi _O^{tt}  \\
2016 < \Xi _P^{rr}  = \Xi _O^{rr}  - U_{OP} \Xi _O^{tt} U_{OP}  + \Xi _O^{tr} U_{OP}  - U_{OP} \left( {\Xi _O^{tr} } \right)^T  \\
2017 < \end{array}
2018 < \label{introEquation:resistanceTensorTransformation}
2019 < \end{equation}
2020 < where
2021 < \[
2022 < U_{OP}  = \left( {\begin{array}{*{20}c}
2023 <   0 & { - z_{OP} } & {y_{OP} }  \\
2024 <   {z_{OP} } & 0 & { - x_{OP} }  \\
2025 <   { - y_{OP} } & {x_{OP} } & 0  \\
2026 < \end{array}} \right)
2027 < \]
2028 < Using Equations \ref{introEquation:definitionCR} and
2029 < \ref{introEquation:resistanceTensorTransformation}, one can locate
2030 < the position of the center of resistance,
2031 < \begin{eqnarray*}
2032 < \left( \begin{array}{l}
2033 < x_{OR}  \\
2034 < y_{OR}  \\
2035 < z_{OR}  \\
2036 < \end{array} \right) & = &\left( {\begin{array}{*{20}c}
2037 <   {(\Xi _O^{rr} )_{yy}  + (\Xi _O^{rr} )_{zz} } & { - (\Xi _O^{rr} )_{xy} } & { - (\Xi _O^{rr} )_{xz} }  \\
2038 <   { - (\Xi _O^{rr} )_{xy} } & {(\Xi _O^{rr} )_{zz}  + (\Xi _O^{rr} )_{xx} } & { - (\Xi _O^{rr} )_{yz} }  \\
2039 <   { - (\Xi _O^{rr} )_{xz} } & { - (\Xi _O^{rr} )_{yz} } & {(\Xi _O^{rr} )_{xx}  + (\Xi _O^{rr} )_{yy} }  \\
2040 < \end{array}} \right)^{ - 1}  \\
2041 <  & & \left( \begin{array}{l}
2042 < (\Xi _O^{tr} )_{yz}  - (\Xi _O^{tr} )_{zy}  \\
2043 < (\Xi _O^{tr} )_{zx}  - (\Xi _O^{tr} )_{xz}  \\
2044 < (\Xi _O^{tr} )_{xy}  - (\Xi _O^{tr} )_{yx}  \\
2045 < \end{array} \right) \\
2046 < \end{eqnarray*}
2047 <
2048 <
2049 <
2050 < where $x_{OR}$, $y_{OR}$, $z_{OR}$ are the components of the vector
2051 < joining the center of resistance $R$ and the origin $O$.
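Both the origin transfer and the location of the center of resistance are small linear-algebra operations. A minimal sketch implementing Equation (introEquation:resistanceTensorTransformation) and the $3 \times 3$ linear system above; the function names are illustrative only.
\begin{verbatim}
import numpy as np

def skew(r):
    x, y, z = r
    return np.array([[0.0, -z, y], [z, 0.0, -x], [-y, x, 0.0]])

def transfer_resistance(xi_tt, xi_tr, xi_rr, r_OP):
    """Resistance tensors at P given the tensors at O and the vector r_OP."""
    U = skew(r_OP)
    tt = xi_tt
    tr = xi_tr - U @ xi_tt
    rr = xi_rr - U @ xi_tt @ U + xi_tr @ U - U @ xi_tr.T
    return tt, tr, rr

def center_of_resistance(xi_tr, xi_rr):
    """Solve the 3x3 linear system above for the vector from O to the
    center of resistance R."""
    A = np.array([
        [xi_rr[1, 1] + xi_rr[2, 2], -xi_rr[0, 1],               -xi_rr[0, 2]],
        [-xi_rr[0, 1],               xi_rr[2, 2] + xi_rr[0, 0], -xi_rr[1, 2]],
        [-xi_rr[0, 2],              -xi_rr[1, 2],                xi_rr[0, 0] + xi_rr[1, 1]],
    ])
    b = np.array([xi_tr[1, 2] - xi_tr[2, 1],
                  xi_tr[2, 0] - xi_tr[0, 2],
                  xi_tr[0, 1] - xi_tr[1, 0]])
    return np.linalg.solve(A, b)
\end{verbatim}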
1662 > which acts as a constraint on the possible ways in which one can
1663 > model the random force and friction kernel.
