root/group/trunk/tengDissertation/Introduction.tex

Comparing trunk/tengDissertation/Introduction.tex (file contents):
Revision 2790 by tim, Mon Jun 5 21:11:51 2006 UTC vs.
Revision 2907 by tim, Thu Jun 29 16:57:37 2006 UTC

# Line 3 | Line 3 | Closely related to Classical Mechanics, Molecular Dyna
3   \section{\label{introSection:classicalMechanics}Classical
4   Mechanics}
5  
6 < Closely related to Classical Mechanics, Molecular Dynamics
7 < simulations are carried out by integrating the equations of motion
8 < for a given system of particles. There are three fundamental ideas
9 < behind classical mechanics. Firstly, One can determine the state of
10 < a mechanical system at any time of interest; Secondly, all the
11 < mechanical properties of the system at that time can be determined
12 < by combining the knowledge of the properties of the system with the
13 < specification of this state; Finally, the specification of the state
14 < when further combine with the laws of mechanics will also be
15 < sufficient to predict the future behavior of the system.
6 > Using equations of motion derived from Classical Mechanics,
7 > Molecular Dynamics simulations are carried out by integrating those
8 > equations numerically for a given system of particles. There are three
9 > fundamental ideas behind classical mechanics. Firstly, one can
10 > determine the state of a mechanical system at any time of interest;
11 > Secondly, all the mechanical properties of the system at that time
12 > can be determined by combining the knowledge of the properties of
13 > the system with the specification of this state; Finally, the
14 > specification of the state when further combined with the laws of
15 > mechanics will also be sufficient to predict the future behavior of
16 > the system.
17  
18   \subsection{\label{introSection:newtonian}Newtonian Mechanics}
19   The discovery of Newton's three laws of mechanics which govern the
20   motion of particles is the foundation of classical mechanics.
21 < Newton's first law defines a class of inertial frames. Inertial
21 > Newton's first law defines a class of inertial frames. Inertial
22   frames are reference frames where a particle not interacting with
23   other bodies will move with constant speed in the same direction.
24 < With respect to inertial frames Newton's second law has the form
24 > With respect to inertial frames, Newton's second law has the form
25   \begin{equation}
26 < F = \frac {dp}{dt} = \frac {mv}{dt}
26 > F = \frac{dp}{dt} = m\frac{dv}{dt}
27   \label{introEquation:newtonSecondLaw}
28   \end{equation}
29   A point mass interacting with other bodies moves with an
30   acceleration along the direction of the force acting on it. Let
31   $F_{ij}$ be the force that particle $i$ exerts on particle $j$, and
32   $F_{ji}$ be the force that particle $j$ exerts on particle $i$.
33 < Newton's third law states that
33 > Newton's third law states that
34   \begin{equation}
35 < F_{ij} = -F_{ji}
35 > F_{ij} = -F_{ji}.
36   \label{introEquation:newtonThirdLaw}
37   \end{equation}
37
38   Conservation laws of Newtonian Mechanics play very important roles
39   in solving mechanics problems. The linear momentum of a particle is
40   conserved if it is free, i.e. it experiences no net force. The second
# Line 46 | Line 46 | N \equiv r \times F \label{introEquation:torqueDefinit
46   \end{equation}
47   The torque $\tau$ with respect to the same origin is defined to be
48   \begin{equation}
49 < N \equiv r \times F \label{introEquation:torqueDefinition}
49 > \tau \equiv r \times F \label{introEquation:torqueDefinition}
50   \end{equation}
51   Differentiating Eq.~\ref{introEquation:angularMomentumDefinition},
52   \[
# Line 59 | Line 59 | thus,
59   \]
60   thus,
61   \begin{equation}
62 < \dot L = r \times \dot p = N
62 > \dot L = r \times \dot p = \tau
63   \end{equation}
64   If there are no external torques acting on a body, its angular
65   momentum is conserved. The last conservation theorem states
66 < that if all forces are conservative, Energy
67 < \begin{equation}E = T + V \label{introEquation:energyConservation}
66 > that if all forces are conservative, energy is conserved,
67 > \begin{equation}E = T + V. \label{introEquation:energyConservation}
68   \end{equation}
69 < is conserved. All of these conserved quantities are
70 < important factors to determine the quality of numerical integration
71 < scheme for rigid body \cite{Dullweber1997}.
69 > All of these conserved quantities are important factors to determine
70 > the quality of numerical integration schemes for rigid bodies
71 > \cite{Dullweber1997}.
72  
73   \subsection{\label{introSection:lagrangian}Lagrangian Mechanics}
74  
75 < Newtonian Mechanics suffers from two important limitations: it
76 < describes their motion in special cartesian coordinate systems.
77 < Another limitation of Newtonian mechanics becomes obvious when we
78 < try to describe systems with large numbers of particles. It becomes
79 < very difficult to predict the properties of the system by carrying
80 < out calculations involving the each individual interaction between
81 < all the particles, even if we know all of the details of the
82 < interaction. In order to overcome some of the practical difficulties
83 < which arise in attempts to apply Newton's equation to complex
84 < system, alternative procedures may be developed.
75 > Newtonian Mechanics suffers from an important limitation: motion can
76 > only be described in cartesian coordinate systems, which makes it
77 > impossible to predict analytically the properties of the system even
78 > if we know all of the details of the interaction. In order to
79 > overcome some of the practical difficulties which arise in attempts
80 > to apply Newton's equations to complex systems, approximate
81 > numerical procedures may be developed.
82  
83 < \subsubsection{\label{introSection:halmiltonPrinciple}Hamilton's
84 < Principle}
83 > \subsubsection{\label{introSection:halmiltonPrinciple}\textbf{Hamilton's
84 > Principle}}
85  
86   Hamilton introduced the dynamical principle upon which it is
87 < possible to base all of mechanics and, indeed, most of classical
88 < physics. Hamilton's Principle may be stated as follow,
89 <
90 < The actual trajectory, along which a dynamical system may move from
91 < one point to another within a specified time, is derived by finding
92 < the path which minimizes the time integral of the difference between
96 < the kinetic, $K$, and potential energies, $U$ \cite{Tolman1979}.
87 > possible to base all of mechanics and most of classical physics.
88 > Hamilton's Principle may be stated as follows: the trajectory, along
89 > which a dynamical system may move from one point to another within a
90 > specified time, is derived by finding the path which minimizes the
91 > time integral of the difference between the kinetic energy, $K$,
92 > and the potential energy, $U$,
93   \begin{equation}
94 < \delta \int_{t_1 }^{t_2 } {(K - U)dt = 0} ,
94 > \delta \int_{t_1 }^{t_2 } {(K - U)dt = 0}.
95   \label{introEquation:halmitonianPrinciple1}
96   \end{equation}
101
97   For simple mechanical systems, where the forces acting on the
98 < different part are derivable from a potential and the velocities are
99 < small compared with that of light, the Lagrangian function $L$ can
100 < be define as the difference between the kinetic energy of the system
106 < and its potential energy,
98 > different parts are derivable from a potential, the Lagrangian
99 > function $L$ can be defined as the difference between the kinetic
100 > energy of the system and its potential energy,
101   \begin{equation}
102 < L \equiv K - U = L(q_i ,\dot q_i ) ,
102 > L \equiv K - U = L(q_i ,\dot q_i ).
103   \label{introEquation:lagrangianDef}
104   \end{equation}
105 < then Eq.~\ref{introEquation:halmitonianPrinciple1} becomes
105 > Thus, Eq.~\ref{introEquation:halmitonianPrinciple1} becomes
106   \begin{equation}
107 < \delta \int_{t_1 }^{t_2 } {L dt = 0} ,
107 > \delta \int_{t_1 }^{t_2 } {L dt = 0} .
108   \label{introEquation:halmitonianPrinciple2}
109   \end{equation}
110  
111 < \subsubsection{\label{introSection:equationOfMotionLagrangian}The
112 < Equations of Motion in Lagrangian Mechanics}
111 > \subsubsection{\label{introSection:equationOfMotionLagrangian}\textbf{The
112 > Equations of Motion in Lagrangian Mechanics}}
113  
114 < For a holonomic system of $f$ degrees of freedom, the equations of
115 < motion in the Lagrangian form is
114 > For a system of $f$ degrees of freedom, the equations of motion in
115 > the Lagrangian form are
116   \begin{equation}
117   \frac{d}{{dt}}\frac{{\partial L}}{{\partial \dot q_i }} -
118   \frac{{\partial L}}{{\partial q_i }} = 0,{\rm{ }}i = 1, \ldots,f
# Line 132 | Line 126 | independent of generalized velocities, the generalized
126   Arising from Lagrangian Mechanics, Hamiltonian Mechanics was
127   introduced by William Rowan Hamilton in 1833 as a re-formulation of
128   classical mechanics. If the potential energy of a system is
129 < independent of generalized velocities, the generalized momenta can
136 < be defined as
129 > independent of velocities, the momenta can be defined as
130   \begin{equation}
131   p_i = \frac{\partial L}{\partial \dot q_i}
132   \label{introEquation:generalizedMomenta}
# Line 143 | Line 136 | p_i  = \frac{{\partial L}}{{\partial q_i }}
136   \dot p_i  = \frac{{\partial L}}{{\partial q_i }}
137   \label{introEquation:generalizedMomentaDot}
138   \end{equation}
146
139   With the help of the generalized momenta, we may now define a new
140   quantity $H$ by the equation
141   \begin{equation}
# Line 151 | Line 143 | $L$ is the Lagrangian function for the system.
143   \label{introEquation:hamiltonianDefByLagrangian}
144   \end{equation}
145   where $ \dot q_1  \ldots \dot q_f $ are generalized velocities and
146 < $L$ is the Lagrangian function for the system.
147 <
156 < Differentiating Eq.~\ref{introEquation:hamiltonianDefByLagrangian},
157 < one can obtain
146 > $L$ is the Lagrangian function for the system. Differentiating
147 > Eq.~\ref{introEquation:hamiltonianDefByLagrangian}, one can obtain
148   \begin{equation}
149   dH = \sum\limits_k {\left( {p_k d\dot q_k  + \dot q_k dp_k  -
150   \frac{{\partial L}}{{\partial q_k }}dq_k  - \frac{{\partial
151   L}}{{\partial \dot q_k }}d\dot q_k } \right)}  - \frac{{\partial
152 < L}}{{\partial t}}dt \label{introEquation:diffHamiltonian1}
152 > L}}{{\partial t}}dt . \label{introEquation:diffHamiltonian1}
153   \end{equation}
154 < Making use of  Eq.~\ref{introEquation:generalizedMomenta}, the
155 < second and fourth terms in the parentheses cancel. Therefore,
154 > Making use of Eq.~\ref{introEquation:generalizedMomenta}, the second
155 > and fourth terms in the parentheses cancel. Therefore,
156   Eq.~\ref{introEquation:diffHamiltonian1} can be rewritten as
157   \begin{equation}
158   dH = \sum\limits_k {\left( {\dot q_k dp_k  - \dot p_k dq_k }
159 < \right)}  - \frac{{\partial L}}{{\partial t}}dt
159 > \right)}  - \frac{{\partial L}}{{\partial t}}dt .
160   \label{introEquation:diffHamiltonian2}
161   \end{equation}
162   By identifying the coefficients of $dq_k$, $dp_k$ and $dt$, we can
163   find
164   \begin{equation}
165 < \frac{{\partial H}}{{\partial p_k }} = q_k
165 > \frac{{\partial H}}{{\partial p_k }} = \dot {q_k}
166   \label{introEquation:motionHamiltonianCoordinate}
167   \end{equation}
168   \begin{equation}
169 < \frac{{\partial H}}{{\partial q_k }} =  - p_k
169 > \frac{{\partial H}}{{\partial q_k }} =  - \dot {p_k}
170   \label{introEquation:motionHamiltonianMomentum}
171   \end{equation}
172   and
# Line 185 | Line 175 | t}}
175   t}}
176   \label{introEquation:motionHamiltonianTime}
177   \end{equation}
178 <
189 < Eq.~\ref{introEquation:motionHamiltonianCoordinate} and
178 > where Eq.~\ref{introEquation:motionHamiltonianCoordinate} and
179   Eq.~\ref{introEquation:motionHamiltonianMomentum} are Hamilton's
180   equations of motion. Due to their symmetric form, they are also
181   known as the canonical equations of motion \cite{Goldstein2001}.
182  
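As a standard illustration of the canonical equations, consider a
one-dimensional harmonic oscillator. With $H = p^2 /2m + kq^2 /2$,
Hamilton's equations give
\[
\dot q = \frac{{\partial H}}{{\partial p}} = \frac{p}{m},\qquad \dot
p =  - \frac{{\partial H}}{{\partial q}} =  - kq,
\]
which together recover Newton's equation of motion, $m\ddot q =  - kq$.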
183   An important difference between the Lagrangian approach and the
184   Hamiltonian approach is that the Lagrangian is considered to be a
185 < function of the generalized velocities $\dot q_i$ and the
186 < generalized coordinates $q_i$, while the Hamiltonian is considered
187 < to be a function of the generalized momenta $p_i$ and the conjugate
188 < generalized coordinate $q_i$. Hamiltonian Mechanics is more
189 < appropriate for application to statistical mechanics and quantum
190 < mechanics, since it treats the coordinate and its time derivative as
191 < independent variables and it only works with 1st-order differential
203 < equations\cite{Marion1990}.
204 <
185 > function of the generalized velocities $\dot q_i$ and coordinates
186 > $q_i$, while the Hamiltonian is considered to be a function of the
187 > generalized momenta $p_i$ and the conjugate coordinates $q_i$.
188 > Hamiltonian Mechanics is more appropriate for application to
189 > statistical mechanics and quantum mechanics, since it treats the
190 > coordinate and its time derivative as independent variables and it
191 > only works with first-order differential equations\cite{Marion1990}.
192   In Newtonian Mechanics, a system described by conservative forces
193 < conserves the total energy \ref{introEquation:energyConservation}.
194 < It follows that Hamilton's equations of motion conserve the total
195 < Hamiltonian.
193 > conserves the total energy
194 > (Eq.~\ref{introEquation:energyConservation}). It follows that
195 > Hamilton's equations of motion conserve the total Hamiltonian
196   \begin{equation}
197   \frac{{dH}}{{dt}} = \sum\limits_i {\left( {\frac{{\partial
198   H}}{{\partial q_i }}\dot q_i  + \frac{{\partial H}}{{\partial p_i
199   }}\dot p_i } \right)}  = \sum\limits_i {\left( {\frac{{\partial
200   H}}{{\partial q_i }}\frac{{\partial H}}{{\partial p_i }} -
201   \frac{{\partial H}}{{\partial p_i }}\frac{{\partial H}}{{\partial
202 < q_i }}} \right) = 0} \label{introEquation:conserveHalmitonian}
202 > q_i }}} \right) = 0}. \label{introEquation:conserveHalmitonian}
203   \end{equation}
204  
205   \section{\label{introSection:statisticalMechanics}Statistical
# Line 227 | Line 214 | possible states. Each possible state of the system cor
214   \subsection{\label{introSection:ensemble}Phase Space and Ensemble}
215  
216   Mathematically, phase space is the space which represents all
217 < possible states. Each possible state of the system corresponds to
218 < one unique point in the phase space. For mechanical systems, the
219 < phase space usually consists of all possible values of position and
220 < momentum variables. Consider a dynamic system in a cartesian space,
221 < where each of the $6f$ coordinates and momenta is assigned to one of
222 < $6f$ mutually orthogonal axes, the phase space of this system is a
223 < $6f$ dimensional space. A point, $x = (q_1 , \ldots ,q_f ,p_1 ,
224 < \ldots ,p_f )$, with a unique set of values of $6f$ coordinates and
217 > possible states of a system. Each possible state of the system
218 > corresponds to one unique point in the phase space. For mechanical
219 > systems, the phase space usually consists of all possible values of
220 > position and momentum variables. Consider a dynamic system of $f$
221 > particles in a cartesian space, where each of the $6f$ coordinates
222 > and momenta is assigned to one of $6f$ mutually orthogonal axes, the
223 > phase space of this system is a $6f$ dimensional space. A point, $x
224 > = (\vec q_1 , \ldots ,\vec q_f ,\vec p_1 , \ldots ,\vec p_f )$,
225 > with a unique set of values of $6f$ coordinates and
233   momenta is a phase space vector.
235  
236 < A microscopic state or microstate of a classical system is
241 < specification of the complete phase space vector of a system at any
242 < instant in time. An ensemble is defined as a collection of systems
243 < sharing one or more macroscopic characteristics but each being in a
244 < unique microstate. The complete ensemble is specified by giving all
245 < systems or microstates consistent with the common macroscopic
246 < characteristics of the ensemble. Although the state of each
247 < individual system in the ensemble could be precisely described at
248 < any instance in time by a suitable phase space vector, when using
249 < ensembles for statistical purposes, there is no need to maintain
250 < distinctions between individual systems, since the numbers of
251 < systems at any time in the different states which correspond to
252 < different regions of the phase space are more interesting. Moreover,
253 < in the point of view of statistical mechanics, one would prefer to
254 < use ensembles containing a large enough population of separate
255 < members so that the numbers of systems in such different states can
256 < be regarded as changing continuously as we traverse different
257 < regions of the phase space. The condition of an ensemble at any time
236 > In statistical mechanics, the condition of an ensemble at any time
237   can be regarded as appropriately specified by the density $\rho$
238   with which representative points are distributed over the phase
239 < space. The density of distribution for an ensemble with $f$ degrees
240 < of freedom is defined as,
239 > space. The density distribution for an ensemble with $f$ degrees of
240 > freedom is defined as,
241   \begin{equation}
242   \rho  = \rho (q_1 , \ldots ,q_f ,p_1 , \ldots ,p_f ,t).
243   \label{introEquation:densityDistribution}
244   \end{equation}
245   Governed by the principles of mechanics, the phase points change
246 < their value which would change the density at any time at phase
247 < space. Hence, the density of distribution is also to be taken as a
248 < function of the time.
249 <
271 < The number of systems $\delta N$ at time $t$ can be determined by,
246 > their locations, which changes the density in phase space over
247 > time. Hence, the density distribution is also to be taken as a
248 > function of time. The number of systems $\delta N$ at time $t$
249 > can be determined by,
250   \begin{equation}
251   \delta N = \rho (q,p,t)dq_1  \ldots dq_f dp_1  \ldots dp_f.
252   \label{introEquation:deltaN}
253   \end{equation}
254 < Assuming a large enough population of systems are exploited, we can
255 < sufficiently approximate $\delta N$ without introducing
256 < discontinuity when we go from one region in the phase space to
257 < another. By integrating over the whole phase space,
254 > Assuming enough copies of the system, we can approximate $\delta N$
255 > as varying continuously when we go
256 > from one region in the phase space to another. By integrating over
257 > the whole phase space,
258   \begin{equation}
259   N = \int { \ldots \int {\rho (q,p,t)dq_1 } ...dq_f dp_1 } ...dp_f
260   \label{introEquation:totalNumberSystem}
261   \end{equation}
262 < gives us an expression for the total number of the systems. Hence,
263 < the probability per unit in the phase space can be obtained by,
262 > gives us an expression for the total number of copies. Hence, the
263 > probability per unit volume in the phase space can be obtained by,
264   \begin{equation}
265   \frac{{\rho (q,p,t)}}{N} = \frac{{\rho (q,p,t)}}{{\int { \ldots \int
266   {\rho (q,p,t)dq_1 } ...dq_f dp_1 } ...dp_f }}.
267   \label{introEquation:unitProbability}
268   \end{equation}
269 < With the help of Equation(\ref{introEquation:unitProbability}) and
270 < the knowledge of the system, it is possible to calculate the average
269 > With the help of Eq.~\ref{introEquation:unitProbability} and the
270 > knowledge of the system, it is possible to calculate the average
271   value of any desired quantity which depends on the coordinates and
272 < momenta of the system. Even when the dynamics of the real system is
272 > momenta of the system. Even when the dynamics of the real system are
273   complex, or stochastic, or even discontinuous, the average
274 < properties of the ensemble of possibilities as a whole may still
275 < remain well defined. For a classical system in thermal equilibrium
276 < with its environment, the ensemble average of a mechanical quantity,
277 < $\langle A(q , p) \rangle_t$, takes the form of an integral over the
278 < phase space of the system,
274 > properties of the ensemble of possibilities as a whole remain well
275 > defined. For a classical system in thermal equilibrium with its
276 > environment, the ensemble average of a mechanical quantity, $\langle
277 > A(q , p) \rangle_t$, takes the form of an integral over the phase
278 > space of the system,
279   \begin{equation}
280   \langle  A(q , p) \rangle_t = \frac{{\int { \ldots \int {A(q,p)\rho
281   (q,p,t)dq_1 } ...dq_f dp_1 } ...dp_f }}{{\int { \ldots \int {\rho
282 < (q,p,t)dq_1 } ...dq_f dp_1 } ...dp_f }}
282 > (q,p,t)dq_1 } ...dq_f dp_1 } ...dp_f }}.
283   \label{introEquation:ensembelAverage}
284   \end{equation}
285  
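A minimal numerical sketch of Eq.~\ref{introEquation:ensembelAverage}
may help; the one-dimensional harmonic oscillator, the Boltzmann
weight $e^{ - H/k_B T}$, and the unit parameters below are
illustrative assumptions rather than part of the original discussion:
\begin{verbatim}
import numpy as np

kT, m, k = 1.0, 1.0, 1.0          # illustrative reduced units
rng = np.random.default_rng(0)

# Sample phase-space points (q, p) over a box wide enough to hold
# essentially all of the equilibrium density.
q = rng.uniform(-6.0, 6.0, 100000)
p = rng.uniform(-6.0, 6.0, 100000)

H = p**2 / (2.0 * m) + 0.5 * k * q**2   # Hamiltonian at each point
rho = np.exp(-H / kT)                   # unnormalized density rho(q,p)
A = q**2                                # observable A(q, p)

# <A> = integral(A * rho) / integral(rho), approximated by sums
print(np.sum(A * rho) / np.sum(rho))    # analytic value: kT/k = 1.0
\end{verbatim}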
286   There are several different types of ensembles with different
287   statistical characteristics. As a function of macroscopic
288 < parameters, such as temperature \textit{etc}, partition function can
289 < be used to describe the statistical properties of a system in
290 < thermodynamic equilibrium.
291 <
292 < As an ensemble of systems, each of which is known to be thermally
315 < isolated and conserve energy, Microcanonical ensemble(NVE) has a
316 < partition function like,
288 > parameters, such as temperature, \textit{etc.}, the partition
289 > function can be used to describe the statistical properties of a
290 > system in thermodynamic equilibrium. As an ensemble of systems, each
291 > of which is known to be thermally isolated and to conserve energy,
292 > the microcanonical ensemble (NVE) has the partition function
293   \begin{equation}
294 < \Omega (N,V,E) = e^{\beta TS} \label{introEquation:NVEPartition}.
294 > \Omega (N,V,E) = e^{\beta TS}. \label{introEquation:NVEPartition}
295   \end{equation}
296 < A canonical ensemble(NVT)is an ensemble of systems, each of which
296 > A canonical ensemble (NVT) is an ensemble of systems, each of which
297   can share its energy with a large heat reservoir. The distribution
298   of the total energy amongst the possible dynamical states is given
299   by the partition function,
300   \begin{equation}
301 < \Omega (N,V,T) = e^{ - \beta A}
301 > \Omega (N,V,T) = e^{ - \beta A}.
302   \label{introEquation:NVTPartition}
303   \end{equation}
304   Here, $A$ is the Helmholtz free energy which is defined as $ A = U -
305 < TS$. Since most experiment are carried out under constant pressure
306 < condition, isothermal-isobaric ensemble(NPT) play a very important
307 < role in molecular simulation. The isothermal-isobaric ensemble allow
308 < the system to exchange energy with a heat bath of temperature $T$
309 < and to change the volume as well. Its partition function is given as
305 > TS$. Since most experiments are carried out under constant pressure
306 > conditions, the isothermal-isobaric ensemble (NPT) plays a very
307 > important role in molecular simulations. The isothermal-isobaric
308 > ensemble allows the system to exchange energy with a heat bath of
309 > temperature $T$ and to change the volume as well. Its partition
310 > function is given as
311   \begin{equation}
312   \Delta (N,P,T) = e^{ - \beta G}.
313   \label{introEquation:NPTPartition}
# Line 339 | Line 316 | The Liouville's theorem is the foundation on which sta
316  
317   \subsection{\label{introSection:liouville}Liouville's theorem}
318  
319 < The Liouville's theorem is the foundation on which statistical
320 < mechanics rests. It describes the time evolution of phase space
319 > Liouville's theorem is the foundation on which statistical mechanics
320 > rests. It describes the time evolution of the phase space
321   distribution function. In order to calculate the rate of change of
322 < $\rho$, we begin from Equation(\ref{introEquation:deltaN}). If we
323 < consider the two faces perpendicular to the $q_1$ axis, which are
324 < located at $q_1$ and $q_1 + \delta q_1$, the number of phase points
325 < leaving the opposite face is given by the expression,
322 > $\rho$, we begin from Eq.~\ref{introEquation:deltaN}. If we consider
323 > the two faces perpendicular to the $q_1$ axis, which are located at
324 > $q_1$ and $q_1 + \delta q_1$, the number of phase points leaving the
325 > opposite face is given by the expression,
326   \begin{equation}
327   \left( {\rho  + \frac{{\partial \rho }}{{\partial q_1 }}\delta q_1 }
328   \right)\left( {\dot q_1  + \frac{{\partial \dot q_1 }}{{\partial q_1
# Line 369 | Line 346 | divining $ \delta q_1  \ldots \delta q_f \delta p_1  \
346   + \frac{{\partial \dot p_i }}{{\partial p_i }}} \right)}  = 0 ,
347   \end{equation}
348   which cancels the first terms of the right hand side. Furthermore,
349 < divining $ \delta q_1  \ldots \delta q_f \delta p_1  \ldots \delta
349 > dividing by $ \delta q_1  \ldots \delta q_f \delta p_1  \ldots \delta
350   p_f $ on both sides, we can write out Liouville's theorem in a
351   simple form,
352   \begin{equation}
# Line 378 | Line 355 | simple form,
355   \frac{{\partial \rho }}{{\partial p_i }}\dot p_i } \right)}  = 0 .
356   \label{introEquation:liouvilleTheorem}
357   \end{equation}
381
358   Liouville's theorem states that the distribution function is
359   constant along any trajectory in phase space. In classical
360 < statistical mechanics, since the number of particles in the system
361 < is huge, we may be able to believe the system is stationary,
360 > statistical mechanics, since the number of system copies in an
361 > ensemble is huge and constant, we can assume the local density has
362 > no reason (other than classical mechanics) to change,
363   \begin{equation}
364   \frac{{\partial \rho }}{{\partial t}} = 0.
365   \label{introEquation:stationary}
# Line 395 | Line 372 | distribution,
372   \label{introEquation:densityAndHamiltonian}
373   \end{equation}
374  
375 < \subsubsection{\label{introSection:phaseSpaceConservation}Conservation of Phase Space}
375 > \subsubsection{\label{introSection:phaseSpaceConservation}\textbf{Conservation of Phase Space}}
376   Let us consider a region in the phase space,
377   \begin{equation}
378   \delta v = \int { \ldots \int {dq_1 } ...dq_f dp_1 } ..dp_f .
379   \end{equation}
380   If this region is small enough, the density $\rho$ can be regarded
381 < as uniform over the whole phase space. Thus, the number of phase
382 < points inside this region is given by,
381 > as uniform over the region. Thus, the number of phase points
382 > inside this region is given by,
383   \begin{equation}
384   \delta N = \rho \delta v = \rho \int { \ldots \int {dq_1 } ...dq_f
385   dp_1 } ..dp_f.
# Line 412 | Line 389 | With the help of stationary assumption
389   \frac{{d(\delta N)}}{{dt}} = \frac{{d\rho }}{{dt}}\delta v + \rho
390   \frac{d}{{dt}}(\delta v) = 0.
391   \end{equation}
392 < With the help of stationary assumption
393 < (\ref{introEquation:stationary}), we obtain the principle of the
394 < \emph{conservation of extension in phase space},
392 > With the help of the stationary assumption
393 > (Eq.~\ref{introEquation:stationary}), we obtain the principle of
394 > \emph{conservation of volume in phase space},
395   \begin{equation}
396   \frac{d}{{dt}}(\delta v) = \frac{d}{{dt}}\int { \ldots \int {dq_1 }
397   ...dq_f dp_1 } ..dp_f  = 0.
398   \label{introEquation:volumePreserving}
399   \end{equation}
400  
401 < \subsubsection{\label{introSection:liouvilleInOtherForms}Liouville's Theorem in Other Forms}
401 > \subsubsection{\label{introSection:liouvilleInOtherForms}\textbf{Liouville's Theorem in Other Forms}}
402  
403 < Liouville's theorem can be expresses in a variety of different forms
403 > Liouville's theorem can be expressed in a variety of different forms
404   which are convenient within different contexts. For any two
405   functions $F$ and $G$ of the coordinates and momenta of a system,
406   the Poisson bracket $\{F, G\}$ is defined as
# Line 434 | Line 411 | Substituting equations of motion in Hamiltonian formal
411   q_i }}} \right)}.
412   \label{introEquation:poissonBracket}
413   \end{equation}
414 < Substituting equations of motion in Hamiltonian formalism(
415 < \ref{introEquation:motionHamiltonianCoordinate} ,
416 < \ref{introEquation:motionHamiltonianMomentum} ) into
417 < (\ref{introEquation:liouvilleTheorem}), we can rewrite Liouville's
418 < theorem using Poisson bracket notion,
414 > Substituting the equations of motion in Hamiltonian formalism
415 > (Eq.~\ref{introEquation:motionHamiltonianCoordinate} and
416 > Eq.~\ref{introEquation:motionHamiltonianMomentum}) into
417 > Eq.~\ref{introEquation:liouvilleTheorem}, we can rewrite
418 > Liouville's theorem using Poisson bracket notation,
419   \begin{equation}
420   \left( {\frac{{\partial \rho }}{{\partial t}}} \right) =  - \left\{
421   {\rho ,H} \right\}.
# Line 457 | Line 434 | expressed as
434   \left( {\frac{{\partial \rho }}{{\partial t}}} \right) =  - iL\rho
435   \label{introEquation:liouvilleTheoremInOperator}
436   \end{equation}
437 <
437 > which can help define a propagator $\rho (t) = e^{-iLt} \rho (0)$.
438   \subsection{\label{introSection:ergodic}The Ergodic Hypothesis}
439  
440   Various thermodynamic properties can be calculated from Molecular
441   Dynamics simulations. By comparing experimental values with the
442   calculated properties, one can determine the accuracy of the
443 < simulation and the quality of the underlying model. However, both of
444 < experiment and computer simulation are usually performed during a
443 > simulation and the quality of the underlying model. However, both
444 > experiments and computer simulations are usually performed during a
445   certain time interval and the measurements are averaged over a
446 < period of them which is different from the average behavior of
447 < many-body system in Statistical Mechanics. Fortunately, Ergodic
448 < Hypothesis is proposed to make a connection between time average and
449 < ensemble average. It states that time average and average over the
450 < statistical ensemble are identical \cite{Frenkel1996, Leach2001}.
446 > period of time, which differs from the ensemble average behavior of
447 > a many-body system in Statistical Mechanics. Fortunately, the Ergodic
448 > Hypothesis makes a connection between time average and the ensemble
449 > average. It states that the time average and average over the
450 > statistical ensemble are identical \cite{Frenkel1996, Leach2001}:
451   \begin{equation}
452   \langle A(q , p) \rangle_t = \mathop {\lim }\limits_{t \to \infty }
453   \frac{1}{t}\int\limits_0^t {A(q(t),p(t))dt = \int\limits_\Gamma
# Line 479 | Line 456 | sufficiently long time (longer than relaxation time),
456   where $\langle  A(q , p) \rangle_t$ is an equilibrium value of a
457   physical quantity and $\rho (p(t), q(t))$ is the equilibrium
458   distribution function. If an observation is averaged over a
459 < sufficiently long time (longer than relaxation time), all accessible
460 < microstates in phase space are assumed to be equally probed, giving
461 < a properly weighted statistical average. This allows the researcher
462 < freedom of choice when deciding how best to measure a given
463 < observable. In case an ensemble averaged approach sounds most
464 < reasonable, the Monte Carlo techniques\cite{Metropolis1949} can be
459 > sufficiently long time (longer than the relaxation time), all
460 > accessible microstates in phase space are assumed to be equally
461 > probed, giving a properly weighted statistical average. This allows
462 > the researcher freedom of choice when deciding how best to measure a
463 > given observable. In case an ensemble averaged approach sounds most
464 > reasonable, the Monte Carlo methods\cite{Metropolis1949} can be
465   utilized. Alternatively, if the system lends itself to a time averaging
466   approach, the Molecular Dynamics techniques in
467   Sec.~\ref{introSection:molecularDynamics} will be the best
468   choice\cite{Frenkel1996}.
469  
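As a small illustration of the hypothesis (the harmonic oscillator
and the unit parameters below are assumptions made for this sketch),
the long-time average of $A = q^2$ along a single trajectory matches
the average of $q^2$ over the energy shell that the trajectory
samples:
\begin{verbatim}
import numpy as np

m, k, amp = 1.0, 1.0, 2.0        # illustrative reduced units
omega = np.sqrt(k / m)

t = np.linspace(0.0, 1000.0, 1000001)
q = amp * np.cos(omega * t)      # exact trajectory on one energy shell

time_avg = np.trapz(q**2, t) / t[-1]   # (1/t) * integral A(q(t)) dt
shell_avg = amp**2 / 2.0               # orbit average of q**2

print(time_avg, shell_avg)             # both close to 2.0
\end{verbatim}
Note that a single harmonic oscillator only explores its own energy
shell; real many-body systems are assumed to explore all accessible
microstates.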
470   \section{\label{introSection:geometricIntegratos}Geometric Integrators}
471 < A variety of numerical integrators were proposed to simulate the
472 < motions. They usually begin with an initial conditionals and move
473 < the objects in the direction governed by the differential equations.
474 < However, most of them ignore the hidden physical law contained
475 < within the equations. Since 1990, geometric integrators, which
476 < preserve various phase-flow invariants such as symplectic structure,
477 < volume and time reversal symmetry, are developed to address this
478 < issue\cite{Dullweber1997, McLachlan1998, Leimkuhler1999}. The
479 < velocity verlet method, which happens to be a simple example of
480 < symplectic integrator, continues to gain its popularity in molecular
481 < dynamics community. This fact can be partly explained by its
482 < geometric nature.
471 > A variety of numerical integrators have been proposed to simulate
472 > the motions of atoms in MD simulations. They usually begin with
473 > initial conditions and move the objects in the direction governed
474 > by the differential equations. However, most of them ignore the
475 > hidden physical laws contained within the equations. Since 1990,
476 > geometric integrators, which preserve various phase-flow invariants
477 > such as symplectic structure, volume and time reversal symmetry,
478 > were developed to address this issue\cite{Dullweber1997,
479 > McLachlan1998, Leimkuhler1999}. The velocity Verlet method, which
480 > happens to be a simple example of a symplectic integrator, continues
481 > to gain popularity in the molecular dynamics community. This fact
482 > can be partly explained by its geometric nature.
483  
484 < \subsection{\label{introSection:symplecticManifold}Symplectic Manifold}
485 < A \emph{manifold} is an abstract mathematical space. It locally
486 < looks like Euclidean space, but when viewed globally, it may have
487 < more complicate structure. A good example of manifold is the surface
488 < of Earth. It seems to be flat locally, but it is round if viewed as
489 < a whole. A \emph{differentiable manifold} (also known as
490 < \emph{smooth manifold}) is a manifold with an open cover in which
491 < the covering neighborhoods are all smoothly isomorphic to one
492 < another. In other words,it is possible to apply calculus on
516 < \emph{differentiable manifold}. A \emph{symplectic manifold} is
517 < defined as a pair $(M, \omega)$ which consisting of a
484 > \subsection{\label{introSection:symplecticManifold}Symplectic Manifolds}
485 > A \emph{manifold} is an abstract mathematical space. It looks
486 > locally like Euclidean space, but when viewed globally, it may have
487 > more complicated structure. A good example of a manifold is the
488 > surface of the Earth. It seems to be flat locally, but it is round if
489 > viewed as a whole. A \emph{differentiable manifold} (also known as
490 > \emph{smooth manifold}) is a manifold on which it is possible to
491 > apply calculus\cite{Hirsch1997}. A \emph{symplectic manifold} is
492 > defined as a pair $(M, \omega)$ which consists of a
493   \emph{differentiable manifold} $M$ and a closed, non-degenerate,
494   bilinear symplectic form, $\omega$. A symplectic form on a vector
495   space $V$ is a function $\omega(x, y)$ which satisfies
496   $\omega(\lambda_1x_1+\lambda_2x_2, y) = \lambda_1\omega(x_1, y)+
497   \lambda_2\omega(x_2, y)$, $\omega(x, y) = - \omega(y, x)$ and
498 < $\omega(x, x) = 0$. Cross product operation in vector field is an
499 < example of symplectic form.
498 > $\omega(x, x) = 0$\cite{McDuff1998}. The cross product operation in
499 > a vector field is an example of a symplectic form. One of the
500 > motivations to study \emph{symplectic manifolds} in Hamiltonian
501 > Mechanics is that a symplectic manifold can represent all possible
502 > configurations of the system and the phase space of the system can
503 > be described by its cotangent bundle\cite{Jost2002}. Every
504 > symplectic manifold is even dimensional. For instance, in Hamilton
505 > equations, coordinate and momentum always appear in pairs.
506  
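For concreteness, the canonical symplectic form on the phase space
$R^{2f}$ is
\[
\omega (x,y) = \sum\limits_{i = 1}^f {\left( {x_{q_i } y_{p_i }  -
x_{p_i } y_{q_i } } \right)} ,
\]
which is bilinear, antisymmetric and non-degenerate, and is the form
preserved by the Hamiltonian propagators discussed below.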
526 One of the motivations to study \emph{symplectic manifold} in
527 Hamiltonian Mechanics is that a symplectic manifold can represent
528 all possible configurations of the system and the phase space of the
529 system can be described by it's cotangent bundle. Every symplectic
530 manifold is even dimensional. For instance, in Hamilton equations,
531 coordinate and momentum always appear in pairs.
532
533 Let  $(M,\omega)$ and $(N, \eta)$ be symplectic manifolds. A map
534 \[
535 f : M \rightarrow N
536 \]
537 is a \emph{symplectomorphism} if it is a \emph{diffeomorphims} and
538 the \emph{pullback} of $\eta$ under f is equal to $\omega$.
539 Canonical transformation is an example of symplectomorphism in
540 classical mechanics.
541
507   \subsection{\label{introSection:ODE}Ordinary Differential Equations}
508  
509 < For a ordinary differential system defined as
509 > For an ordinary differential system defined as
510   \begin{equation}
511   \dot x = f(x)
512   \end{equation}
513 < where $x = x(q,p)^T$, this system is canonical Hamiltonian, if
513 > where $x = x(q,p)^T$, this system is a canonical Hamiltonian system
514 > if $f(x) = J\nabla _x H(x)$. Here, $H = H (q, p)$ is the Hamiltonian
515 > function and $J$ is the skew-symmetric matrix
516   \begin{equation}
550 f(r) = J\nabla _x H(r).
551 \end{equation}
552 $H = H (q, p)$ is Hamiltonian function and $J$ is the skew-symmetric
553 matrix
554 \begin{equation}
517   J = \left( {\begin{array}{*{20}c}
518     0 & I  \\
519     { - I} & 0  \\
# Line 561 | Line 523 | system can be rewritten as,
523   where $I$ is an identity matrix. Using this notation, the
524   Hamiltonian system can be rewritten as,
525   \begin{equation}
526 < \frac{d}{{dt}}x = J\nabla _x H(x)
526 > \frac{d}{{dt}}x = J\nabla _x H(x).
527   \label{introEquation:compactHamiltonian}
528   \end{equation}In this case, $f$ is
529 < called a \emph{Hamiltonian vector field}.
530 <
569 < Another generalization of Hamiltonian dynamics is Poisson
570 < Dynamics\cite{Olver1986},
529 > called a \emph{Hamiltonian vector field}. Another generalization of
530 > Hamiltonian dynamics is Poisson Dynamics\cite{Olver1986},
531   \begin{equation}
532   \dot x = J(x)\nabla _x H \label{introEquation:poissonHamiltonian}
533   \end{equation}
534   The most obvious change being that matrix $J$ now depends on $x$.
535  
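A minimal numerical sketch of the Hamiltonian vector field $f(x) =
J\nabla _x H(x)$ (the single degree of freedom and the quadratic
Hamiltonian below are illustrative assumptions):
\begin{verbatim}
import numpy as np

# Skew-symmetric J in block form ((0, I), (-I, 0)) for f = 1
J = np.array([[0.0, 1.0],
              [-1.0, 0.0]])

def grad_H(x):
    q, p = x
    return np.array([q, p])      # gradient of H = (q**2 + p**2)/2

def f(x):
    return J @ grad_H(x)         # Hamiltonian vector field

print(f(np.array([1.0, 0.0])))   # [ 0. -1.]: qdot = p, pdot = -q
\end{verbatim}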
536 < \subsection{\label{introSection:exactFlow}Exact Flow}
536 > \subsection{\label{introSection:exactFlow}Exact Propagator}
537  
538 < Let $x(t)$ be the exact solution of the ODE system,
539 < \begin{equation}
540 < \frac{{dx}}{{dt}} = f(x) \label{introEquation:ODE}
541 < \end{equation}
542 < The exact flow(solution) $\varphi_\tau$ is defined by
583 < \[
584 < x(t+\tau) =\varphi_\tau(x(t))
538 > Let $x(t)$ be the exact solution of the ODE
539 > system, $\frac{{dx}}{{dt}} = f(x) \label{introEquation:ODE}$. We can
540 > define its exact propagator (solution), $\varphi_\tau$, by
541 > \[ x(t+\tau)
542 > =\varphi_\tau(x(t))
543   \]
544   where $\tau$ is a fixed time step and $\varphi$ is a map from phase
545 < space to itself. The flow has the continuous group property,
545 > space to itself. The propagator has the continuous group property,
546   \begin{equation}
547   \varphi _{\tau _1 }  \circ \varphi _{\tau _2 }  = \varphi _{\tau _1
548   + \tau _2 } .
# Line 593 | Line 551 | Therefore, the exact flow is self-adjoint,
551   \begin{equation}
552   \varphi _\tau   \circ \varphi _{ - \tau }  = I
553   \end{equation}
554 < Therefore, the exact flow is self-adjoint,
554 > Therefore, the exact propagator is self-adjoint,
555   \begin{equation}
556   \varphi _\tau   = \varphi _{ - \tau }^{ - 1}.
557   \end{equation}
558 < The exact flow can also be written in terms of the of an operator,
558 > The exact propagator can also be written in terms of an operator,
559   \begin{equation}
560   \varphi _\tau  (x) = e^{\tau \sum\limits_i {f_i (x)\frac{\partial
561   }{{\partial x_i }}} } (x) \equiv \exp (\tau f)(x).
562   \label{introEquation:exponentialOperator}
563   \end{equation}
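As an example, the harmonic oscillator with $H = (p^2  + q^2 )/2$
has the exact propagator
\[
\varphi _\tau  (q,p) = (q\cos \tau  + p\sin \tau ,\;p\cos \tau  -
q\sin \tau ),
\]
a rotation of phase space, which makes the group property and
self-adjointness above easy to verify directly.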
564 <
565 < In most cases, it is not easy to find the exact flow $\varphi_\tau$.
566 < Instead, we use a approximate map, $\psi_\tau$, which is usually
567 < called integrator. The order of an integrator $\psi_\tau$ is $p$, if
568 < the Taylor series of $\psi_\tau$ agree to order $p$,
564 > In most cases, it is not easy to find the exact propagator
565 > $\varphi_\tau$. Instead, we use an approximate map, $\psi_\tau$,
566 > which is usually called an integrator. The order of an integrator
567 > $\psi_\tau$ is $p$, if its Taylor series agrees with that of the
568 > exact propagator to order $p$,
569   \begin{equation}
570 < \psi_tau(x) = x + \tau f(x) + O(\tau^{p+1})
570 > \psi_\tau(x) = \varphi_\tau(x) + O(\tau^{p+1})
571   \end{equation}
572  
573   \subsection{\label{introSection:geometricProperties}Geometric Properties}
574  
575 < The hidden geometric properties\cite{Budd1999, Marsden1998} of ODE
576 < and its flow play important roles in numerical studies. Many of them
577 < can be found in systems which occur naturally in applications.
578 <
579 < Let $\varphi$ be the flow of Hamiltonian vector field, $\varphi$ is
622 < a \emph{symplectic} flow if it satisfies,
575 > The hidden geometric properties\cite{Budd1999, Marsden1998} of an
576 > ODE and its propagator play important roles in numerical studies.
577 > Many of them can be found in systems which occur naturally in
578 > applications. Let $\varphi$ be the propagator of a Hamiltonian
579 > vector field; $\varphi$ is a \emph{symplectic} propagator if it satisfies,
580   \begin{equation}
581   {\varphi '}^T J \varphi ' = J.
582   \end{equation}
583   According to Liouville's theorem, the symplectic volume is invariant
584 < under a Hamiltonian flow, which is the basis for classical
585 < statistical mechanics. Furthermore, the flow of a Hamiltonian vector
586 < field on a symplectic manifold can be shown to be a
584 > under a Hamiltonian propagator, which is the basis for classical
585 > statistical mechanics. Furthermore, the propagator of a Hamiltonian
586 > vector field on a symplectic manifold can be shown to be a
587   symplectomorphism. As for the Poisson system,
588   \begin{equation}
589   {\varphi '}^T J \varphi ' = J \circ \varphi
590   \end{equation}
591 < is the property must be preserved by the integrator.
592 <
593 < It is possible to construct a \emph{volume-preserving} flow for a
594 < source free($ \nabla \cdot f = 0 $) ODE, if the flow satisfies $
595 < \det d\varphi  = 1$. One can show easily that a symplectic flow will
596 < be volume-preserving.
597 <
641 < Changing the variables $y = h(x)$ in a ODE\ref{introEquation:ODE}
642 < will result in a new system,
591 > is the property that must be preserved by the integrator. It is
592 > possible to construct a \emph{volume-preserving} propagator for a
593 > source free ODE ($ \nabla \cdot f = 0 $), if the propagator
594 > satisfies $ \det d\varphi  = 1$. One can show easily that a
595 > symplectic propagator will be volume-preserving. Changing the
596 > variables $y = h(x)$ in an ODE (Eq.~\ref{introEquation:ODE}) will
597 > result in a new system,
598   \[
599   \dot y = \tilde f(y) = ((dh \cdot f)h^{ - 1} )(y).
600   \]
601   The vector field $f$ has reversing symmetry $h$ if $f = - \tilde f$.
602 < In other words, the flow of this vector field is reversible if and
603 < only if $ h \circ \varphi ^{ - 1}  = \varphi  \circ h $.
604 <
605 < A \emph{first integral}, or conserved quantity of a general
606 < differential function is a function $ G:R^{2d}  \to R^d $ which is
652 < constant for all solutions of the ODE $\frac{{dx}}{{dt}} = f(x)$ ,
602 > In other words, the propagator of this vector field is reversible if
603 > and only if $ h \circ \varphi ^{ - 1}  = \varphi  \circ h $. A
604 > conserved quantity of a general differential equation is a function
605 > $ G:R^{2d}  \to R^d $ which is constant for all solutions of the ODE
606 > $\frac{{dx}}{{dt}} = f(x)$ ,
607   \[
608   \frac{{dG(x(t))}}{{dt}} = 0.
609   \]
610 < Using chain rule, one may obtain,
610 > Using the chain rule, one may obtain,
611   \[
612 < \sum\limits_i {\frac{{dG}}{{dx_i }}} f_i (x) = f \bullet \nabla G,
612 > \sum\limits_i {\frac{{dG}}{{dx_i }}} f_i (x) = f \cdot \nabla G,
613   \]
614 < which is the condition for conserving \emph{first integral}. For a
615 < canonical Hamiltonian system, the time evolution of an arbitrary
616 < smooth function $G$ is given by,
663 <
614 > which is the condition for conserved quantities. For a canonical
615 > Hamiltonian system, the time evolution of an arbitrary smooth
616 > function $G$ is given by,
617   \begin{eqnarray}
618 < \frac{{dG(x(t))}}{{dt}} & = & [\nabla _x G(x(t))]^T \dot x(t) \\
619 <                        & = & [\nabla _x G(x(t))]^T J\nabla _x H(x(t)). \\
618 > \frac{{dG(x(t))}}{{dt}} & = & [\nabla _x G(x(t))]^T \dot x(t) \notag\\
619 >                        & = & [\nabla _x G(x(t))]^T J\nabla _x H(x(t)).
620   \label{introEquation:firstIntegral1}
621   \end{eqnarray}
622 <
623 <
671 < Using poisson bracket notion, Equation
672 < \ref{introEquation:firstIntegral1} can be rewritten as
622 > Using Poisson bracket notation, Eq.~\ref{introEquation:firstIntegral1}
623 > can be rewritten as
624   \[
625   \frac{d}{{dt}}G(x(t)) = \left\{ {G,H} \right\}(x(t)).
626   \]
627 < Therefore, the sufficient condition for $G$ to be the \emph{first
628 < integral} of a Hamiltonian system is
629 < \[
630 < \left\{ {G,H} \right\} = 0.
680 < \]
681 < As well known, the Hamiltonian (or energy) H of a Hamiltonian system
682 < is a \emph{first integral}, which is due to the fact $\{ H,H\}  =
683 < 0$.
684 <
627 > Therefore, the sufficient condition for $G$ to be a conserved
628 > quantity of a Hamiltonian system is $\left\{ {G,H} \right\} = 0.$ As
629 > is well known, the Hamiltonian (or energy) $H$ of a Hamiltonian system
630 > is a conserved quantity, which is due to the fact $\{ H,H\}  = 0$.
631   When designing any numerical methods, one should always try to
632 < preserve the structural properties of the original ODE and its flow.
632 > preserve the structural properties of the original ODE and its
633 > propagator.
634  
635   \subsection{\label{introSection:constructionSymplectic}Construction of Symplectic Methods}
636   Many well-established and very effective numerical methods have
637 < been successful precisely because of their symplecticities even
637 > been successful precisely because of their symplectic nature even
638   though this fact was not recognized when they were first
639 < constructed. The most famous example is leapfrog methods in
640 < molecular dynamics. In general, symplectic integrators can be
639 > constructed. The most famous example is the Verlet-leapfrog method
640 > in molecular dynamics. In general, symplectic integrators can be
641   constructed using one of four different methods.
642   \begin{enumerate}
643   \item Generating functions
# Line 698 | Line 645 | constructed using one of four different methods.
645   \item Runge-Kutta methods
646   \item Splitting methods
647   \end{enumerate}
648 <
702 < Generating function\cite{Channell1990} tends to lead to methods
648 > Generating functions\cite{Channell1990} tend to lead to methods
649   which are cumbersome and difficult to use. In dissipative systems,
650   variational methods can capture the decay of energy
651 < accurately\cite{Kane2000}. Since their geometrically unstable nature
651 > accurately\cite{Kane2000}. Since they are geometrically unstable
652   against non-Hamiltonian perturbations, ordinary implicit Runge-Kutta
653   methods are not suitable for Hamiltonian systems. Recently, various
654 < high-order explicit Runge-Kutta methods
655 < \cite{Owren1992,Chen2003}have been developed to overcome this
656 < instability. However, due to computational penalty involved in
657 < implementing the Runge-Kutta methods, they do not attract too much
658 < attention from Molecular Dynamics community. Instead, splitting have
659 < been widely accepted since they exploit natural decompositions of
660 < the system\cite{Tuckerman1992, McLachlan1998}.
654 > high-order explicit Runge-Kutta methods \cite{Owren1992,Chen2003}
655 > have been developed to overcome this instability. However, due to
656 > the computational penalty involved in implementing the Runge-Kutta
657 > methods, they have not attracted much attention from the Molecular
658 > Dynamics community. Instead, splitting methods have been widely
659 > accepted since they exploit natural decompositions of the
660 > system\cite{Tuckerman1992, McLachlan1998}.
661  
662 < \subsubsection{\label{introSection:splittingMethod}Splitting Method}
662 > \subsubsection{\label{introSection:splittingMethod}\textbf{Splitting Methods}}
663  
664   The main idea behind splitting methods is to decompose the discrete
665 < $\varphi_h$ as a composition of simpler flows,
665 > $\varphi_h$ as a composition of simpler propagators,
666   \begin{equation}
667   \varphi _h  = \varphi _{h_1 }  \circ \varphi _{h_2 }  \ldots  \circ
668   \varphi _{h_n }
669   \label{introEquation:FlowDecomposition}
670   \end{equation}
671 < where each of the sub-flow is chosen such that each represent a
672 < simpler integration of the system.
673 <
728 < Suppose that a Hamiltonian system takes the form,
671 > where each sub-propagator is chosen such that it represents
672 > a simpler integration of the system. Suppose that a Hamiltonian
673 > system takes the form,
674   \[
675   H = H_1 + H_2.
676   \]
677   Here, $H_1$ and $H_2$ may represent different physical processes of
678   the system. For instance, they may relate to kinetic and potential
679   energy respectively, which is a natural decomposition of the
680 < problem. If $H_1$ and $H_2$ can be integrated using exact flows
681 < $\varphi_1(t)$ and $\varphi_2(t)$, respectively, a simple first
682 < order is then given by the Lie-Trotter formula
680 > problem. If $H_1$ and $H_2$ can be integrated using exact
681 > propagators $\varphi_1(t)$ and $\varphi_2(t)$, respectively, a
682 > simple first-order method is then given by the Lie-Trotter
683 > formula
684   \begin{equation}
685   \varphi _h  = \varphi _{1,h}  \circ \varphi _{2,h},
686   \label{introEquation:firstOrderSplitting}
# Line 743 | Line 689 | It is easy to show that any composition of symplectic
689   continuous $\varphi _i$ over a time $h$. By definition, as
690   $\varphi_i(t)$ is the exact solution of a Hamiltonian system, it
691   must follow that each operator $\varphi_i(t)$ is a symplectic map.
692 < It is easy to show that any composition of symplectic flows yields a
693 < symplectic map,
692 > It is easy to show that any composition of symplectic propagators
693 > yields a symplectic map,
694   \begin{equation}
695   (\varphi '\phi ')^T J\varphi '\phi ' = \phi '^T \varphi '^T J\varphi
696   '\phi ' = \phi '^T J\phi ' = J,
# Line 752 | Line 698 | splitting in this context automatically generates a sy
698   \end{equation}
699   where $\varphi$ and $\phi$ both are symplectic maps. Thus operator
700   splitting in this context automatically generates a symplectic map.
701 <
702 < The Lie-Trotter splitting(\ref{introEquation:firstOrderSplitting})
703 < introduces local errors proportional to $h^2$, while Strang
704 < splitting gives a second-order decomposition,
701 > The Lie-Trotter
702 > splitting (Eq.~\ref{introEquation:firstOrderSplitting}) introduces
703 > local errors proportional to $h^2$, while the Strang splitting gives
704 > a second-order decomposition,
705   \begin{equation}
706   \varphi _h  = \varphi _{1,h/2}  \circ \varphi _{2,h}  \circ \varphi
707   _{1,h/2} , \label{introEquation:secondOrderSplitting}
708   \end{equation}
709 < which has a local error proportional to $h^3$. Sprang splitting's
710 < popularity in molecular simulation community attribute to its
711 < symmetric property,
709 > which has a local error proportional to $h^3$. The Strang
710 > splitting's popularity in the molecular simulation community can be
711 > attributed to its symmetric property,
712   \begin{equation}
713   \varphi _h^{ - 1} = \varphi _{ - h}.
714   \label{introEquation:timeReversible}
715   \end{equation}
716  
717 < \subsubsection{\label{introSection:exampleSplittingMethod}Example of Splitting Method}
717 > \subsubsection{\label{introSection:exampleSplittingMethod}\textbf{Examples of the Splitting Method}}
718   The classical equation for a system consisting of interacting
719   particles can be written in Hamiltonian form,
720   \[
721   H = T + V
722   \]
723   where $T$ is the kinetic energy and $V$ is the potential energy.
724 < Setting $H_1 = T, H_2 = V$ and applying Strang splitting, one
724 > Setting $H_1 = T, H_2 = V$ and applying the Strang splitting, one
725   obtains the following:
726   \begin{align}
727   q(\Delta t) &= q(0) + \dot{q}(0)\Delta t +
# Line 802 | Line 748 | q(\Delta t) &= q(0) + \Delta t\, \dot{q}\biggl (\frac{
748      \label{introEquation:Lp9b}\\%
749   %
750   \dot{q}(\Delta t) &= \dot{q}\biggl (\frac{\Delta t}{2}\biggr ) +
751 <    \frac{\Delta t}{2m}\, F[q(0)]. \label{introEquation:Lp9c}
751 >    \frac{\Delta t}{2m}\, F[q(\Delta t)]. \label{introEquation:Lp9c}
752   \end{align}
753   From the preceding splitting, one can see that the integration of
754   the equations of motion would follow:
# Line 811 | Line 757 | the equations of motion would follow:
757  
758   \item Use the half step velocities to move positions one whole step, $\Delta t$.
759  
760 < \item Evaluate the forces at the new positions, $\mathbf{r}(\Delta t)$, and use the new forces to complete the velocity move.
760 > \item Evaluate the forces at the new positions, $\mathbf{q}(\Delta t)$, and use the new forces to complete the velocity move.
761  
762   \item Repeat from step 1 with the new position, velocities, and forces assuming the roles of the initial values.
763   \end{enumerate}
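A minimal sketch of these four steps in code (the harmonic force
$F(q) =  - kq$ and the unit parameters are hypothetical stand-ins
for a real force field):
\begin{verbatim}
import numpy as np

def velocity_verlet(q, v, force, m=1.0, dt=0.01, nsteps=1000):
    F = force(q)
    for _ in range(nsteps):
        v = v + 0.5 * dt * F / m   # half-step velocity move
        q = q + dt * v             # whole-step position move
        F = force(q)               # forces at the new positions
        v = v + 0.5 * dt * F / m   # complete the velocity move
    return q, v

q, v = velocity_verlet(np.array([1.0]), np.array([0.0]),
                       force=lambda q: -q)
print(q, v)   # the energy q**2 + v**2 stays close to 1
\end{verbatim}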
764 <
765 < Simply switching the order of splitting and composing, a new
766 < integrator, the \emph{position verlet} integrator, can be generated,
764 > By simply switching the order of the propagators in the splitting
765 > and composing, a new integrator, the \emph{position verlet}
766 > integrator, can be generated,
767   \begin{align}
768   \dot q(\Delta t) &= \dot q(0) + \frac{{\Delta t}}{m}F\left[ {q(0) +
769   \frac{{\Delta t}}{2}\dot q(0)} \right], %
# Line 828 | Line 774 | q(\Delta t)} \right]. %
774   \label{introEquation:positionVerlet2}
775   \end{align}
776  
777 < \subsubsection{\label{introSection:errorAnalysis}Error Analysis and Higher Order Methods}
777 > \subsubsection{\label{introSection:errorAnalysis}\textbf{Error Analysis and Higher Order Methods}}
778  
779 < Baker-Campbell-Hausdorff formula can be used to determine the local
780 < error of splitting method in terms of commutator of the
779 > The Baker-Campbell-Hausdorff formula can be used to determine the
780 > local error of a splitting method in terms of the commutator of the
781   operators(\ref{introEquation:exponentialOperator}) associated with
782 < the sub-flow. For operators $hX$ and $hY$ which are associate to
783 < $\varphi_1(t)$ and $\varphi_2(t)$ respectively , we have
782 > the sub-propagator. For operators $hX$ and $hY$ which are associated
783 > with $\varphi_1(t)$ and $\varphi_2(t)$ respectively , we have
784   \begin{equation}
785   \exp (hX + hY) = \exp (hZ)
786   \end{equation}
# Line 843 | Line 789 | Here, $[X,Y]$ is the commutators of operator $X$ and $
789   hZ = hX + hY + \frac{{h^2 }}{2}[X,Y] + \frac{{h^3 }}{{12}}\left(
790   {[X,[X,Y]] + [Y,[Y,X]]} \right) +  \ldots .
791   \end{equation}
792 < Here, $[X,Y]$ is the commutators of operator $X$ and $Y$ given by
792 > Here, $[X,Y]$ is the commutator of operator $X$ and $Y$ given by
793   \[
794   [X,Y] = XY - YX .
795   \]
796 < Applying Baker-Campbell-Hausdorff formula\cite{Varadarajan1974} to
797 < Sprang splitting, we can obtain
796 > Applying the Baker-Campbell-Hausdorff formula\cite{Varadarajan1974}
797 > to the Strang splitting, we can obtain
798   \begin{eqnarray*}
799   \exp (h X/2)\exp (h Y)\exp (h X/2) & = & \exp (h X + h Y + h^2 [X,Y]/4 + h^2 [Y,X]/4 \\
800                                     &   & \mbox{} + h^2 [X,X]/8 + h^2 [Y,Y]/8 \\
801 <                                   &   & \mbox{} + h^3 [Y,[Y,X]]/12 - h^3[X,[X,Y]]/24 + \ldots )
801 >                                   &   & \mbox{} + h^3 [Y,[Y,X]]/12 - h^3[X,[X,Y]]/24 + \ldots
802 >                                   ).
803   \end{eqnarray*}
804 < Since \[ [X,Y] + [Y,X] = 0\] and \[ [X,X] = 0\], the dominant local
805 < error of Spring splitting is proportional to $h^3$. The same
806 < procedure can be applied to general splitting,  of the form
804 > Since $ [X,Y] + [Y,X] = 0$ and $ [X,X] = 0$, the dominant local
805 > error of Strang splitting is proportional to $h^3$. The same
806 > procedure can be applied to a general splitting of the form
807   \begin{equation}
808   \varphi _{b_m h}^2  \circ \varphi _{a_m h}^1  \circ \varphi _{b_{m -
809   1} h}^2  \circ  \ldots  \circ \varphi _{a_1 h}^1 .
810   \end{equation}
811 < Careful choice of coefficient $a_1 \ldot b_m$ will lead to higher
812 < order method. Yoshida proposed an elegant way to compose higher
811 > A careful choice of the coefficients $a_1 \ldots b_m$ will lead to higher
812 > order methods. Yoshida proposed an elegant way to compose higher
813   order methods based on symmetric splitting\cite{Yoshida1990}. Given
814   a symmetric second order base method $ \varphi _h^{(2)} $, a
815   fourth-order symmetric method can be constructed by composing,
# Line 875 | Line 822 | _{\beta h}^{(2n)}  \circ \varphi _{\alpha h}^{(2n)}
822   integrator $ \varphi _h^{(2n + 2)}$ can be composed by
823   \begin{equation}
824   \varphi _h^{(2n + 2)}  = \varphi _{\alpha h}^{(2n)}  \circ \varphi
825 < _{\beta h}^{(2n)}  \circ \varphi _{\alpha h}^{(2n)}
825 > _{\beta h}^{(2n)}  \circ \varphi _{\alpha h}^{(2n)},
826   \end{equation}
827 < , if the weights are chosen as
827 > if the weights are chosen as
828   \[
829   \alpha  = \frac{1}{{2 - 2^{1/(2n + 1)} }},\beta  =
830    - \frac{{2^{1/(2n + 1)} }}{{2 - 2^{1/(2n + 1)} }} .
# Line 891 | Line 838 | simulations. For instance, instantaneous temperature o
838   dynamical information. The basic idea of molecular dynamics is that
839   macroscopic properties are related to microscopic behavior and
840   microscopic behavior can be calculated from the trajectories in
841 < simulations. For instance, instantaneous temperature of an
842 < Hamiltonian system of $N$ particle can be measured by
841 > simulations. For instance, instantaneous temperature of a
842 > Hamiltonian system of $N$ particles can be measured by
843   \[
844   T = \sum\limits_{i = 1}^N {\frac{{m_i v_i^2 }}{{fk_B }}}
845   \]
846   where $m_i$ and $v_i$ are the mass and velocity of $i$th particle
847   respectively, $f$ is the number of degrees of freedom, and $k_B$ is
848 < the boltzman constant.
848 > the Boltzmann constant.
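As an illustration of this relation, the instantaneous temperature can
be accumulated directly from the masses and velocities stored in a
trajectory frame. The short sketch below is a hypothetical helper
(SI units and three unconstrained degrees of freedom per particle are
assumed).
\begin{verbatim}
import numpy as np

def instantaneous_temperature(masses, velocities, k_B=1.380649e-23):
    # f = number of degrees of freedom (3N for unconstrained particles)
    f = 3 * len(masses)
    # sum of m_i * v_i^2 over all particles
    twice_kinetic = np.sum(masses * np.sum(velocities**2, axis=1))
    return twice_kinetic / (f * k_B)
\end{verbatim}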
849  
850   A typical molecular dynamics run consists of three essential steps:
851   \begin{enumerate}
# Line 914 | Line 861 | initialization of a simulation. Sec.~\ref{introSec:pro
861   \end{enumerate}
862   These three individual steps will be covered in the following
863   sections. Sec.~\ref{introSec:initialSystemSettings} deals with the
864 < initialization of a simulation. Sec.~\ref{introSec:production} will
865 < discusses issues in production run. Sec.~\ref{introSection:Analysis}
866 < provides the theoretical tools for trajectory analysis.
864 > initialization of a simulation. Sec.~\ref{introSection:production}
865 > will discuss issues of production runs.
866 > Sec.~\ref{introSection:Analysis} provides the theoretical tools for
867 > analysis of trajectories.
868  
869   \subsection{\label{introSec:initialSystemSettings}Initialization}
870  
871 < \subsubsection{Preliminary preparation}
871 > \subsubsection{\textbf{Preliminary preparation}}
872  
873   When selecting the starting structure of a molecule for molecular
874   simulation, one may retrieve its Cartesian coordinates from public
875   databases, such as RCSB Protein Data Bank \textit{etc}. Although
876   thousands of crystal structures of molecules are discovered every
877   year, many more remain unknown due to the difficulties of
878 < purification and crystallization. Even for the molecule with known
879 < structure, some important information is missing. For example, the
878 > purification and crystallization. Even for molecules with known
879 > structures, some important information is missing. For example, a
880   missing hydrogen atom which acts as donor in hydrogen bonding must
881 < be added. Moreover, in order to include electrostatic interaction,
881 > be added. Moreover, in order to include electrostatic interactions,
882   one may need to specify the partial charges for individual atoms.
883   Under some circumstances, we may even need to prepare the system in
884 < a special setup. For instance, when studying transport phenomenon in
885 < membrane system, we may prepare the lipids in bilayer structure
886 < instead of placing lipids randomly in solvent, since we are not
887 < interested in self-aggregation and it takes a long time to happen.
884 > a special configuration. For instance, when studying transport
885 > phenomenon in membrane systems, we may prepare the lipids in a
886 > bilayer structure instead of placing lipids randomly in solvent,
887 > since we are not interested in the slow self-aggregation process.
888  
889 < \subsubsection{Minimization}
889 > \subsubsection{\textbf{Minimization}}
890  
891   It is quite possible that some of the molecules in the system from
892 < preliminary preparation may be overlapped with each other. This
893 < close proximity leads to high potential energy which consequently
894 < jeopardizes any molecular dynamics simulations. To remove these
895 < steric overlaps, one typically performs energy minimization to find
896 < a more reasonable conformation. Several energy minimization methods
897 < have been developed to exploit the energy surface and to locate the
898 < local minimum. While converging slowly near the minimum, steepest
899 < descent method is extremely robust when systems are far from
900 < harmonic. Thus, it is often used to refine structure from
901 < crystallographic data. Relied on the gradient or hessian, advanced
902 < methods like conjugate gradient and Newton-Raphson converge rapidly
903 < to a local minimum, while become unstable if the energy surface is
904 < far from quadratic. Another factor must be taken into account, when
892 > preliminary preparation may be overlapping with each other. This
893 > close proximity leads to high initial potential energy which
894 > consequently jeopardizes any molecular dynamics simulations. To
895 > remove these steric overlaps, one typically performs energy
896 > minimization to find a more reasonable conformation. Several energy
897 > minimization methods have been developed to exploit the energy
898 > surface and to locate the local minimum. While converging slowly
899 > near the minimum, steepest descent method is extremely robust when
900 > systems are strongly anharmonic. Thus, it is often used to refine
901 > structures from crystallographic data. Relying on the Hessian,
902 > advanced methods like Newton-Raphson converge rapidly to a local
903 > minimum, but become unstable if the energy surface is far from
904 > quadratic. Another factor that must be taken into account, when
905   choosing an energy minimization method, is the size of the system.
906   Steepest descent and conjugate gradient can deal with models of any
907 < size. Because of the limit of computation power to calculate hessian
908 < matrix and insufficient storage capacity to store them, most
909 < Newton-Raphson methods can not be used with very large models.
907 > size. Because of the limits on computer memory to store the Hessian
908 > matrix and the computing power needed to diagonalize these matrices,
909 > most Newton-Raphson methods cannot be used with very large systems.
910  
911 < \subsubsection{Heating}
911 > \subsubsection{\textbf{Heating}}
912  
913 < Typically, Heating is performed by assigning random velocities
914 < according to a Gaussian distribution for a temperature. Beginning at
915 < a lower temperature and gradually increasing the temperature by
916 < assigning greater random velocities, we end up with setting the
917 < temperature of the system to a final temperature at which the
918 < simulation will be conducted. In heating phase, we should also keep
919 < the system from drifting or rotating as a whole. Equivalently, the
920 < net linear momentum and angular momentum of the system should be
921 < shifted to zero.
913 > Typically, heating is performed by assigning random velocities
914 > according to a Maxwell-Boltzmann distribution for a desired
915 > temperature. Beginning at a lower temperature and gradually
916 > increasing the temperature by assigning larger random velocities, we
917 > end up setting the temperature of the system to a final temperature
918 > at which the simulation will be conducted. In the heating phase, we
919 > should also keep the system from drifting or rotating as a whole. To
920 > do this, the net linear momentum and angular momentum of the system
921 > are shifted to zero after each resampling from the Maxwell-Boltzmann
922 > distribution.
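For illustration, velocities consistent with a Maxwell-Boltzmann
distribution at temperature $T$ can be drawn from independent
Gaussians, after which the net drift is removed. The sketch below is a
hypothetical helper in arbitrary consistent units and ignores the
analogous removal of angular momentum.
\begin{verbatim}
import numpy as np

def maxwell_boltzmann_velocities(masses, T, k_B=1.0):
    n = len(masses)
    sigma = np.sqrt(k_B * T / masses)            # width per particle
    v = np.random.normal(size=(n, 3)) * sigma[:, None]
    # shift the net linear momentum of the whole system to zero
    v -= (masses[:, None] * v).sum(axis=0) / masses.sum()
    return v
\end{verbatim}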
923  
924 < \subsubsection{Equilibration}
924 > \subsubsection{\textbf{Equilibration}}
925  
926   The purpose of equilibration is to allow the system to evolve
927   spontaneously for a period of time and reach equilibrium. The
# Line 986 | Line 935 | Production run is the most important step of the simul
935  
936   \subsection{\label{introSection:production}Production}
937  
938 < Production run is the most important step of the simulation, in
938 > The production run is the most important step of the simulation, in
939   which the equilibrated structure is used as a starting point and the
940   motions of the molecules are collected for later analysis. In order
941   to capture the macroscopic properties of the system, the molecular
942 < dynamics simulation must be performed in correct and efficient way.
942 > dynamics simulation must be performed by sampling correctly and
943 > efficiently from the relevant thermodynamic ensemble.
944  
945   The most expensive part of a molecular dynamics simulation is the
946   calculation of non-bonded forces, such as van der Waals force and
947   Coulombic forces \textit{etc}. For a system of $N$ particles, the
948   complexity of the algorithm for pair-wise interactions is $O(N^2 )$,
949 < which making large simulations prohibitive in the absence of any
950 < computation saving techniques.
951 <
952 < A natural approach to avoid system size issue is to represent the
953 < bulk behavior by a finite number of the particles. However, this
954 < approach will suffer from the surface effect. To offset this,
955 < \textit{Periodic boundary condition} (see Fig.~\ref{introFig:pbc})
956 < is developed to simulate bulk properties with a relatively small
957 < number of particles. In this method, the simulation box is
958 < replicated throughout space to form an infinite lattice. During the
959 < simulation, when a particle moves in the primary cell, its image in
1010 < other cells move in exactly the same direction with exactly the same
949 > which makes large simulations prohibitive in the absence of any
950 > algorithmic tricks. A natural approach to avoid system size issues
951 > is to represent the bulk behavior by a finite number of the
952 > particles. However, this approach will suffer from surface effects
953 > at the edges of the simulation. To offset this, \textit{Periodic
954 > boundary conditions} (see Fig.~\ref{introFig:pbc}) were developed to
955 > simulate bulk properties with a relatively small number of
956 > particles. In this method, the simulation box is replicated
957 > throughout space to form an infinite lattice. During the simulation,
958 > when a particle moves in the primary cell, its images in other cells
959 > move in exactly the same direction with exactly the same
960   orientation. Thus, as a particle leaves the primary cell, one of its
961   images will enter through the opposite face.
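In a cubic box, wrapping coordinates back into the primary cell and
applying the minimum image convention to displacement vectors both
reduce to a single rounding operation; the following lines are a
minimal sketch assuming a cubic cell of edge length \texttt{box}.
\begin{verbatim}
import numpy as np

def wrap_positions(r, box):
    # map coordinates back into the primary cell [0, box)
    return r - box * np.floor(r / box)

def minimum_image(dr, box):
    # nearest periodic image of the displacement vector dr
    return dr - box * np.round(dr / box)
\end{verbatim}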
962   \begin{figure}
# Line 1021 | Line 970 | evaluation is to apply cutoff where particles farther
970  
971   %cutoff and minimum image convention
972   Another important technique to improve the efficiency of force
973 < evaluation is to apply cutoff where particles farther than a
974 < predetermined distance, are not included in the calculation
973 > evaluation is to apply spherical cutoffs where particles farther
974 > than a predetermined distance are not included in the calculation
975   \cite{Frenkel1996}. The use of a cutoff radius will cause a
976   discontinuity in the potential energy curve. Fortunately, one can
977 < shift the potential to ensure the potential curve go smoothly to
978 < zero at the cutoff radius. Cutoff strategy works pretty well for
979 < Lennard-Jones interaction because of its short range nature.
980 < However, simply truncating the electrostatic interaction with the
981 < use of cutoff has been shown to lead to severe artifacts in
982 < simulations. Ewald summation, in which the slowly conditionally
983 < convergent Coulomb potential is transformed into direct and
984 < reciprocal sums with rapid and absolute convergence, has proved to
985 < minimize the periodicity artifacts in liquid simulations. Taking the
986 < advantages of the fast Fourier transform (FFT) for calculating
987 < discrete Fourier transforms, the particle mesh-based
977 > shift a simple radial potential to ensure that the potential curve
978 > goes smoothly to zero at the cutoff radius. The cutoff strategy
979 > works well for Lennard-Jones interactions because of their short
980 > range nature. However, simply truncating the electrostatic
981 > interaction with the use of cutoffs has been shown to lead to severe
982 > artifacts in simulations. The Ewald summation, in which the slowly
983 > decaying Coulomb potential is transformed into direct and reciprocal
984 > sums with rapid and absolute convergence, has been shown to minimize
985 > the periodicity artifacts in liquid simulations. Taking advantage
986 > of the fast Fourier transform (FFT) for calculating discrete Fourier
987 > transforms, the particle mesh-based
988   methods\cite{Hockney1981,Shimada1993, Luty1994} are accelerated from
989 < $O(N^{3/2})$ to $O(N logN)$. An alternative approach is \emph{fast
990 < multipole method}\cite{Greengard1987, Greengard1994}, which treats
991 < Coulombic interaction exactly at short range, and approximate the
992 < potential at long range through multipolar expansion. In spite of
993 < their wide acceptances at the molecular simulation community, these
994 < two methods are hard to be implemented correctly and efficiently.
995 < Instead, we use a damped and charge-neutralized Coulomb potential
996 < method developed by Wolf and his coworkers\cite{Wolf1999}. The
997 < shifted Coulomb potential for particle $i$ and particle $j$ at
998 < distance $r_{rj}$ is given by:
989 > $O(N^{3/2})$ to $O(N \log N)$. An alternative approach is the
990 > \emph{fast multipole method}\cite{Greengard1987, Greengard1994},
991 > which treats Coulombic interactions exactly at short range, and
992 > approximates the potential at long range through a multipolar
993 > expansion. In spite of their wide acceptance in the molecular
994 > simulation community, these two methods are difficult to implement
995 > correctly and efficiently. Instead, we use a damped and
996 > charge-neutralized Coulomb potential method developed by Wolf and
997 > his coworkers\cite{Wolf1999}. The shifted Coulomb potential for
998 > particle $i$ and particle $j$ at distance $r_{ij}$ is given by:
999   \begin{equation}
1000   V(r_{ij})= \frac{q_i q_j \textrm{erfc}(\alpha
1001   r_{ij})}{r_{ij}}-\lim_{r_{ij}\rightarrow
# Line 1068 | Line 1017 | Recently, advanced visualization technique are widely
1017  
1018   \subsection{\label{introSection:Analysis} Analysis}
1019  
1020 < Recently, advanced visualization technique are widely applied to
1020 > Recently, advanced visualization techniques have been widely applied to
1021   monitor the motions of molecules. Although the dynamics of the
1022   system can be described qualitatively from animation, quantitative
1023 < trajectory analysis are more appreciable. According to the
1024 < principles of Statistical Mechanics,
1023 > trajectory analysis is more useful. According to the principles of
1024 > Statistical Mechanics in
1025   Sec.~\ref{introSection:statisticalMechanics}, one can compute
1026 < thermodynamics properties, analyze fluctuations of structural
1026 > thermodynamic properties, analyze fluctuations of structural
1027   parameters, and investigate time-dependent processes of the molecule
1028   from the trajectories.
1029  
1030 < \subsubsection{\label{introSection:thermodynamicsProperties}Thermodynamics Properties}
1030 > \subsubsection{\label{introSection:thermodynamicsProperties}\textbf{Thermodynamic Properties}}
1031  
1032 < Thermodynamics properties, which can be expressed in terms of some
1032 > Thermodynamic properties, which can be expressed in terms of some
1033   function of the coordinates and momenta of all particles in the
1034   system, can be directly computed from molecular dynamics. The usual
1035   way to measure the pressure is based on the virial theorem of Clausius
# Line 1100 | Line 1049 | P = \frac{{Nk_B T}}{V} - \frac{1}{{3V}}\left\langle {\
1049   < j} {r{}_{ij} \cdot f_{ij} } } \right\rangle
1050   \end{equation}
1051  
1052 < \subsubsection{\label{introSection:structuralProperties}Structural Properties}
1052 > \subsubsection{\label{introSection:structuralProperties}\textbf{Structural Properties}}
1053  
1054   Structural Properties of a simple fluid can be described by a set of
1055 < distribution functions. Among these functions,\emph{pair
1055 > distribution functions. Among these functions, the \emph{pair
1056   distribution function}, also known as \emph{radial distribution
1057 < function}, is of most fundamental importance to liquid-state theory.
1058 < Pair distribution function can be gathered by Fourier transforming
1059 < raw data from a series of neutron diffraction experiments and
1060 < integrating over the surface factor \cite{Powles1973}. The
1061 < experiment result can serve as a criterion to justify the
1062 < correctness of the theory. Moreover, various equilibrium
1063 < thermodynamic and structural properties can also be expressed in
1064 < terms of radial distribution function \cite{Allen1987}.
1065 <
1066 < A pair distribution functions $g(r)$ gives the probability that a
1067 < particle $i$ will be located at a distance $r$ from a another
1068 < particle $j$ in the system
1120 < \[
1057 > function}, is of most fundamental importance to liquid theory.
1058 > Experimentally, pair distribution functions can be gathered by
1059 > Fourier transforming raw data from a series of neutron diffraction
1060 > experiments and integrating over the surface factor
1061 > \cite{Powles1973}. The experimental results can serve as a criterion
1062 > to justify the correctness of a liquid model. Moreover, various
1063 > equilibrium thermodynamic and structural properties can also be
1064 > expressed in terms of the radial distribution function
1065 > \cite{Allen1987}. The pair distribution function $g(r)$ gives the
1066 > probability that a particle $i$ will be located at a distance $r$
1067 > from another particle $j$ in the system
1068 > \begin{equation}
1069   g(r) = \frac{V}{{N^2 }}\left\langle {\sum\limits_i {\sum\limits_{j
1070 < \ne i} {\delta (r - r_{ij} )} } } \right\rangle.
1071 < \]
1070 > \ne i} {\delta (r - r_{ij} )} } } \right\rangle = \frac{\rho
1071 > (r)}{\rho}.
1072 > \end{equation}
1073   Note that the delta function can be replaced by a histogram in
1074 < computer simulation. Figure
1075 < \ref{introFigure:pairDistributionFunction} shows a typical pair
1076 < distribution function for the liquid argon system. The occurrence of
1128 < several peaks in the plot of $g(r)$ suggests that it is more likely
1129 < to find particles at certain radial values than at others. This is a
1130 < result of the attractive interaction at such distances. Because of
1131 < the strong repulsive forces at short distance, the probability of
1132 < locating particles at distances less than about 2.5{\AA} from each
1133 < other is essentially zero.
1074 > computer simulation. Peaks in $g(r)$ represent solvent shells, and
1075 > the heights of these peaks gradually decay to 1 at large
1076 > distances, where the local density approaches the bulk density.
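A histogram estimate of $g(r)$ for a cubic periodic box can be written
in a few lines. The sketch below is illustrative only (a single frame,
a cubic cell, and a naive $O(N^2)$ pair loop are assumed).
\begin{verbatim}
import numpy as np

def pair_distribution(pos, box, n_bins=100):
    n = len(pos)
    r_max = box / 2.0
    edges = np.linspace(0.0, r_max, n_bins + 1)
    hist = np.zeros(n_bins)
    for i in range(n - 1):
        dr = pos[i + 1:] - pos[i]
        dr -= box * np.round(dr / box)           # minimum image
        r = np.linalg.norm(dr, axis=1)
        hist += np.histogram(r[r < r_max], bins=edges)[0]
    shell = 4.0 * np.pi / 3.0 * (edges[1:]**3 - edges[:-1]**3)
    rho = n / box**3
    # each pair was counted once, hence the factor of 2
    return 0.5 * (edges[:-1] + edges[1:]), 2.0 * hist / (n * rho * shell)
\end{verbatim}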
1077  
1135 %\begin{figure}
1136 %\centering
1137 %\includegraphics[width=\linewidth]{pdf.eps}
1138 %\caption[Pair distribution function for the liquid argon
1139 %]{Pair distribution function for the liquid argon}
1140 %\label{introFigure:pairDistributionFunction}
1141 %\end{figure}
1078  
1079 < \subsubsection{\label{introSection:timeDependentProperties}Time-dependent
1080 < Properties}
1079 > \subsubsection{\label{introSection:timeDependentProperties}\textbf{Time-dependent
1080 > Properties}}
1081  
1082   Time-dependent properties are usually calculated using \emph{time
1083 < correlation function}, which correlates random variables $A$ and $B$
1084 < at two different time
1083 > correlation functions}, which correlate random variables $A$ and $B$
1084 > at two different times,
1085   \begin{equation}
1086   C_{AB} (t) = \left\langle {A(t)B(0)} \right\rangle.
1087   \label{introEquation:timeCorrelationFunction}
1088   \end{equation}
1089   If $A$ and $B$ refer to the same variable, this kind of correlation
1090 < function is called \emph{auto correlation function}. One example of
1091 < auto correlation function is velocity auto-correlation function
1092 < which is directly related to transport properties of molecular
1093 < liquids:
1090 > function is called an \emph{autocorrelation function}. One example
1091 > of an auto correlation function is the velocity auto-correlation
1092 > function which is directly related to transport properties of
1093 > molecular liquids:
1094   \[
1095   D = \frac{1}{3}\int\limits_0^\infty  {\left\langle {v(t) \cdot v(0)}
1096   \right\rangle } dt
1097   \]
1098 < where $D$ is diffusion constant. Unlike velocity autocorrelation
1099 < function which is averaging over time origins and over all the
1100 < atoms, dipole autocorrelation are calculated for the entire system.
1101 < The dipole autocorrelation function is given by:
1098 > where $D$ is the diffusion constant. Unlike the velocity
1099 > autocorrelation function, which is averaged over time origins and
1100 > over all the atoms, the dipole autocorrelation function is
1101 > calculated for the entire system. It is given by:
1102   \[
1103   c_{dipole} (t) = \left\langle {u_{tot} (t) \cdot u_{tot} (0)}
1104   \right\rangle
# Line 1170 | Line 1106 | u_{tot} (t) = \sum\limits_i {u_i (t)}
1106   Here $u_{tot}$ is the net dipole of the entire system and is given
1107   by
1108   \[
1109 < u_{tot} (t) = \sum\limits_i {u_i (t)}
1109 > u_{tot} (t) = \sum\limits_i {u_i (t)}.
1110   \]
1111 < In principle, many time correlation functions can be related with
1111 > In principle, many time correlation functions can be related to
1112   Fourier transforms of the infrared, Raman, and inelastic neutron
1113   scattering spectra of molecular liquids. In practice, one can
1114 < extract the IR spectrum from the intensity of dipole fluctuation at
1115 < each frequency using the following relationship:
1114 > extract the IR spectrum from the intensity of the molecular dipole
1115 > fluctuation at each frequency using the following relationship:
1116   \[
1117   \hat c_{dipole} (v) = \int_{ - \infty }^\infty  {c_{dipole} (t)e^{ -
1118 < i2\pi vt} dt}
1118 > i2\pi vt} dt}.
1119   \]
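As an illustration of how such quantities are evaluated in practice,
the sketch below estimates the velocity autocorrelation function by
averaging over atoms and time origins, integrates it for the
Green-Kubo diffusion constant, and Fourier transforms the dipole
autocorrelation function with an FFT. It is a rough sketch only (no
windowing, unit conversions, or quantum corrections), with array
shapes assumed as noted in the comments.
\begin{verbatim}
import numpy as np

def vacf(vel):
    # <v(t).v(0)> over atoms and time origins; vel: (n_frames, n_atoms, 3)
    n = vel.shape[0]
    return np.array([np.mean(np.sum(vel[:n - t] * vel[t:], axis=2))
                     for t in range(n)])

def diffusion_constant(vel, dt):
    # D = (1/3) * integral of the velocity autocorrelation function
    return np.trapz(vacf(vel), dx=dt) / 3.0

def ir_intensity(u_tot, dt):
    # dipole autocorrelation of the whole system; u_tot: (n_frames, 3)
    n = u_tot.shape[0]
    c = np.array([np.mean(np.sum(u_tot[:n - t] * u_tot[t:], axis=1))
                  for t in range(n)])
    return np.fft.rfftfreq(n, d=dt), np.abs(np.fft.rfft(c))
\end{verbatim}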
1120  
1121   \section{\label{introSection:rigidBody}Dynamics of Rigid Bodies}
1122  
1123   Rigid bodies are frequently involved in the modeling of different
1124   areas, from engineering, physics, to chemistry. For example,
1125 < missiles and vehicle are usually modeled by rigid bodies.  The
1126 < movement of the objects in 3D gaming engine or other physics
1127 < simulator is governed by the rigid body dynamics. In molecular
1128 < simulation, rigid body is used to simplify the model in
1129 < protein-protein docking study\cite{Gray2003}.
1125 > missiles and vehicles are usually modeled by rigid bodies.  The
1126 > movement of the objects in 3D gaming engines or other physics
1127 > simulators is governed by rigid body dynamics. In molecular
1128 > simulations, rigid bodies are used to simplify protein-protein
1129 > docking studies\cite{Gray2003}.
1130  
1131   It is very important to develop stable and efficient methods to
1132 < integrate the equations of motion of orientational degrees of
1133 < freedom. Euler angles are the nature choice to describe the
1134 < rotational degrees of freedom. However, due to its singularity, the
1135 < numerical integration of corresponding equations of motion is very
1136 < inefficient and inaccurate. Although an alternative integrator using
1137 < different sets of Euler angles can overcome this
1138 < difficulty\cite{Barojas1973}, the computational penalty and the lost
1139 < of angular momentum conservation still remain. A singularity free
1140 < representation utilizing quaternions was developed by Evans in
1141 < 1977\cite{Evans1977}. Unfortunately, this approach suffer from the
1142 < nonseparable Hamiltonian resulted from quaternion representation,
1143 < which prevents the symplectic algorithm to be utilized. Another
1144 < different approach is to apply holonomic constraints to the atoms
1145 < belonging to the rigid body. Each atom moves independently under the
1146 < normal forces deriving from potential energy and constraint forces
1147 < which are used to guarantee the rigidness. However, due to their
1148 < iterative nature, SHAKE and Rattle algorithm converge very slowly
1149 < when the number of constraint increases\cite{Ryckaert1977,
1150 < Andersen1983}.
1132 > integrate the equations of motion for orientational degrees of
1133 > freedom. Euler angles are the natural choice to describe the
1134 > rotational degrees of freedom. However, due to $\frac{1}{\sin
1135 > \theta}$ singularities, the numerical integration of the
1136 > corresponding equations of motion is very inefficient and inaccurate.
1137 > Although an alternative integrator using multiple sets of Euler
1138 > angles can overcome this difficulty\cite{Barojas1973}, the
1139 > computational penalty and the loss of angular momentum conservation
1140 > still remain. A singularity-free representation utilizing
1141 > quaternions was developed by Evans in 1977\cite{Evans1977}.
1142 > Unfortunately, this approach uses a nonseparable Hamiltonian
1143 > resulting from the quaternion representation, which prevents the
1144 > symplectic algorithm from being utilized. Another different approach
1145 > is to apply holonomic constraints to the atoms belonging to the
1146 > rigid body. Each atom moves independently under the normal forces
1147 > deriving from potential energy and constraint forces which are used
1148 > to guarantee the rigidness. However, due to their iterative nature,
1149 > the SHAKE and Rattle algorithms also converge very slowly when the
1150 > number of constraints increases\cite{Ryckaert1977, Andersen1983}.
1151  
1152 < The break through in geometric literature suggests that, in order to
1152 > A break-through in geometric literature suggests that, in order to
1153   develop a long-term integration scheme, one should preserve the
1154 < symplectic structure of the flow. Introducing conjugate momentum to
1155 < rotation matrix $Q$ and re-formulating Hamiltonian's equation, a
1156 < symplectic integrator, RSHAKE\cite{Kol1997}, was proposed to evolve
1157 < the Hamiltonian system in a constraint manifold by iteratively
1158 < satisfying the orthogonality constraint $Q_T Q = 1$. An alternative
1159 < method using quaternion representation was developed by
1160 < Omelyan\cite{Omelyan1998}. However, both of these methods are
1161 < iterative and inefficient. In this section, we will present a
1162 < symplectic Lie-Poisson integrator for rigid body developed by
1154 > symplectic structure of the propagator. By introducing a conjugate
1155 > momentum to the rotation matrix $Q$ and re-formulating Hamiltonian's
1156 > equation, a symplectic integrator, RSHAKE\cite{Kol1997}, was
1157 > proposed to evolve the Hamiltonian system in a constraint manifold
1158 > by iteratively satisfying the orthogonality constraint $Q^T Q = 1$.
1159 > An alternative method using the quaternion representation was
1160 > developed by Omelyan\cite{Omelyan1998}. However, both of these
1161 > methods are iterative and inefficient. In this section, we describe a
1162 > symplectic Lie-Poisson integrator for rigid bodies developed by
1163   Dullweber and his coworkers\cite{Dullweber1997} in depth.
1164  
1165 < \subsection{\label{introSection:constrainedHamiltonianRB}Constrained Hamiltonian for Rigid Body}
1166 < The motion of the rigid body is Hamiltonian with the Hamiltonian
1165 > \subsection{\label{introSection:constrainedHamiltonianRB}Constrained Hamiltonian for Rigid Bodies}
1166 > The motion of a rigid body is Hamiltonian with the Hamiltonian
1167   function
1168   \begin{equation}
1169   H = \frac{1}{2}(p^T m^{ - 1} p) + \frac{1}{2}tr(PJ^{ - 1} P) +
# Line 1241 | Line 1177 | constrained Hamiltonian equation subjects to a holonom
1177   I_{ii}^{ - 1}  = \frac{1}{2}\sum\limits_{i \ne j} {J_{jj}^{ - 1} }
1178   \]
1179   where $I_{ii}$ is the diagonal element of the inertia tensor. This
1180 < constrained Hamiltonian equation subjects to a holonomic constraint,
1180 > constrained Hamiltonian equation is subject to a holonomic
1181 > constraint,
1182   \begin{equation}
1183   Q^T Q = 1, \label{introEquation:orthogonalConstraint}
1184   \end{equation}
1185 < which is used to ensure rotation matrix's orthogonality.
1186 < Differentiating \ref{introEquation:orthogonalConstraint} and using
1187 < Equation \ref{introEquation:RBMotionMomentum}, one may obtain,
1185 > which is used to ensure the unitarity of the rotation matrix. Differentiating
1186 > Eq.~\ref{introEquation:orthogonalConstraint} and using
1187 > Eq.~\ref{introEquation:RBMotionMomentum}, one may obtain,
1188   \begin{equation}
1189   Q^T PJ^{ - 1}  + J^{ - 1} P^T Q = 0 . \\
1190   \label{introEquation:RBFirstOrderConstraint}
1191   \end{equation}
1192   Using Equation (\ref{introEquation:motionHamiltonianCoordinate},
1193   \ref{introEquation:motionHamiltonianMomentum}), one can write down
1194   the equations of motion,
1195 < \[
1196 < \begin{array}{c}
1197 < \frac{{dq}}{{dt}} = \frac{p}{m} \label{introEquation:RBMotionPosition}\\
1198 < \frac{{dp}}{{dt}} =  - \nabla _q V(q,Q) \label{introEquation:RBMotionMomentum}\\
1199 < \frac{{dQ}}{{dt}} = PJ^{ - 1}  \label{introEquation:RBMotionRotation}\\
1200 < \frac{{dP}}{{dt}} =  - \nabla _Q V(q,Q) - 2Q\Lambda . \label{introEquation:RBMotionP}\\
1265 < \end{array}
1266 < \]
1267 <
1195 > \begin{eqnarray}
1196 > \frac{{dq}}{{dt}} & = & \frac{p}{m}, \label{introEquation:RBMotionPosition}\\
1197 > \frac{{dp}}{{dt}} & = & - \nabla _q V(q,Q), \label{introEquation:RBMotionMomentum}\\
1198 > \frac{{dQ}}{{dt}} & = & PJ^{ - 1},  \label{introEquation:RBMotionRotation}\\
1199 > \frac{{dP}}{{dt}} & = & - \nabla _Q V(q,Q) - 2Q\Lambda . \label{introEquation:RBMotionP}
1200 > \end{eqnarray}
1201   In general, there are two ways to satisfy the holonomic constraints.
1202 < We can use constraint force provided by lagrange multiplier on the
1203 < normal manifold to keep the motion on constraint space. Or we can
1204 < simply evolve the system in constraint manifold. These two methods
1205 < are proved to be equivalent. The holonomic constraint and equations
1206 < of motions define a constraint manifold for rigid body
1202 > We can use a constraint force provided by a Lagrange multiplier on
1203 > the normal manifold to keep the motion on constraint space. Or we
1204 > can simply evolve the system on the constraint manifold. These two
1205 > methods have been proved to be equivalent. The holonomic constraint
1206 > and equations of motions define a constraint manifold for rigid
1207 > bodies
1208   \[
1209   M = \left\{ {(Q,P):Q^T Q = 1,Q^T PJ^{ - 1}  + J^{ - 1} P^T Q = 0}
1210   \right\}.
1211   \]
1212   Unfortunately, this constraint manifold is not the cotangent bundle
1213 < $T_{\star}SO(3)$. However, it turns out that under symplectic
1213 > $T^* SO(3)$, which can be considered as a symplectic manifold on the
1214 > Lie rotation group $SO(3)$. However, it turns out that under symplectic
1215   transformation, the cotangent space and the phase space are
1216 < diffeomorphic. Introducing
1216 > diffeomorphic. By introducing
1217   \[
1218   \tilde Q = Q,\tilde P = \frac{1}{2}\left( {P - QP^T Q} \right),
1219   \]
# Line 1289 | Line 1223 | T^* SO(3) = \left\{ {(\tilde Q,\tilde P):\tilde Q^T \t
1223   T^* SO(3) = \left\{ {(\tilde Q,\tilde P):\tilde Q^T \tilde Q =
1224   1,\tilde Q^T \tilde PJ^{ - 1}  + J^{ - 1} P^T \tilde Q = 0} \right\}
1225   \]
1226   For a body fixed vector $X_i$ with respect to the center of mass of
1227   the rigid body, its corresponding lab fixed vector $X_0^{lab}$  is
1228   given as
# Line 1308 | Line 1241 | respectively.
1241   \[
1242   \nabla _Q V(q,Q) = F(q,Q)X_i^t
1243   \]
1244 < respectively.
1245 <
1246 < As a common choice to describe the rotation dynamics of the rigid
1314 < body, angular momentum on body frame $\Pi  = Q^t P$ is introduced to
1315 < rewrite the equations of motion,
1244 > respectively. As a common choice to describe the rotation dynamics
1245 > of the rigid body, the angular momentum on the body fixed frame $\Pi
1246 > = Q^t P$ is introduced to rewrite the equations of motion,
1247   \begin{equation}
1248   \begin{array}{l}
1249 < \mathop \Pi \limits^ \bullet   = J^{ - 1} \Pi ^T \Pi  + Q^T \sum\limits_i {F_i (q,Q)X_i^T }  - \Lambda  \\
1250 < \mathop Q\limits^{{\rm{   }} \bullet }  = Q\Pi {\rm{ }}J^{ - 1}  \\
1249 > \dot \Pi  = J^{ - 1} \Pi ^T \Pi  + Q^T \sum\limits_i {F_i (q,Q)X_i^T }  - \Lambda,  \\
1250 > \dot Q  = Q\Pi {\rm{ }}J^{ - 1},  \\
1251   \end{array}
1252   \label{introEqaution:RBMotionPI}
1253   \end{equation}
1254 < , as well as holonomic constraints,
1255 < \[
1256 < \begin{array}{l}
1326 < \Pi J^{ - 1}  + J^{ - 1} \Pi ^t  = 0 \\
1327 < Q^T Q = 1 \\
1328 < \end{array}
1329 < \]
1330 <
1331 < For a vector $v(v_1 ,v_2 ,v_3 ) \in R^3$ and a matrix $\hat v \in
1332 < so(3)^ \star$, the hat-map isomorphism,
1254 > as well as holonomic constraints $\Pi J^{ - 1}  + J^{ - 1} \Pi ^t  =
1255 > 0$ and $Q^T Q = 1$. For a vector $v(v_1 ,v_2 ,v_3 ) \in R^3$ and a
1256 > matrix $\hat v \in so(3)^ \star$, the hat-map isomorphism,
1257   \begin{equation}
1258   v(v_1 ,v_2 ,v_3 ) \Leftrightarrow \hat v = \left(
1259   {\begin{array}{*{20}c}
# Line 1342 | Line 1266 | operations
1266   will let us associate the matrix products with traditional vector
1267   operations
1268   \[
1269 < \hat vu = v \times u
1269 > \hat vu = v \times u.
1270   \]
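The hat-map is trivial to implement and gives a convenient numerical
check of sign conventions; the following small sketch uses the
convention $\hat v u = v \times u$ quoted above.
\begin{verbatim}
import numpy as np

def hat(v):
    # skew-symmetric matrix such that hat(v) @ u == np.cross(v, u)
    return np.array([[ 0.0,  -v[2],  v[1]],
                     [ v[2],  0.0,  -v[0]],
                     [-v[1],  v[0],  0.0]])
\end{verbatim}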
1271 <
1348 < Using \ref{introEqaution:RBMotionPI}, one can construct a skew
1271 > Using Eq.~\ref{introEqaution:RBMotionPI}, one can construct a skew
1272   matrix,
1273 + \begin{eqnarray}
1274 + (\dot \Pi  - \dot \Pi ^T )&= &(\Pi  - \Pi ^T )(J^{ - 1} \Pi  + \Pi J^{ - 1} ) \notag \\
1275 + & & + \sum\limits_i {[Q^T F_i (r,Q)X_i^T  - X_i F_i (r,Q)^T Q]}  -
1276 + (\Lambda  - \Lambda ^T ). \label{introEquation:skewMatrixPI}
1277 + \end{eqnarray}
1278 + Since $\Lambda$ is symmetric, the last term of
1279 + Eq.~\ref{introEquation:skewMatrixPI} is zero, which implies the
1280 + Lagrange multiplier $\Lambda$ is absent from the equations of
1281 + motion. This unique property eliminates the requirement of
1282 + iterations which can not be avoided in other methods\cite{Kol1997,
1283 + Omelyan1998}. Applying the hat-map isomorphism, we obtain the
1284 + equation of motion for angular momentum on body frame
1285   \begin{equation}
1351 (\mathop \Pi \limits^ \bullet   - \mathop \Pi \limits^ \bullet  ^T
1352 ){\rm{ }} = {\rm{ }}(\Pi  - \Pi ^T ){\rm{ }}(J^{ - 1} \Pi  + \Pi J^{
1353 - 1} ) + \sum\limits_i {[Q^T F_i (r,Q)X_i^T  - X_i F_i (r,Q)^T Q]} -
1354 (\Lambda  - \Lambda ^T ) . \label{introEquation:skewMatrixPI}
1355 \end{equation}
1356 Since $\Lambda$ is symmetric, the last term of Equation
1357 \ref{introEquation:skewMatrixPI} is zero, which implies the Lagrange
1358 multiplier $\Lambda$ is absent from the equations of motion. This
1359 unique property eliminate the requirement of iterations which can
1360 not be avoided in other methods\cite{Kol1997, Omelyan1998}.
1361
1362 Applying hat-map isomorphism, we obtain the equation of motion for
1363 angular momentum on body frame
1364 \begin{equation}
1286   \dot \pi  = \pi  \times I^{ - 1} \pi  + \sum\limits_i {\left( {Q^T
1287   F_i (r,Q)} \right) \times X_i }.
1288   \label{introEquation:bodyAngularMotion}
# Line 1369 | Line 1290 | given by
1290   In the same manner, the equation of motion for the rotation matrix is
1291   given by
1292   \[
1293 < \dot Q = Qskew(I^{ - 1} \pi )
1293 > \dot Q = Qskew(I^{ - 1} \pi ).
1294   \]
1295  
1296   \subsection{\label{introSection:SymplecticFreeRB}Symplectic
1297   Lie-Poisson Integrator for Free Rigid Body}
1298  
1299 < If there is not external forces exerted on the rigid body, the only
1300 < contribution to the rotational is from the kinetic potential (the
1301 < first term of \ref{ introEquation:bodyAngularMotion}). The free
1302 < rigid body is an example of Lie-Poisson system with Hamiltonian
1299 > If there are no external forces exerted on the rigid body, the only
1300 > contribution to the rotational motion is from the kinetic energy
1301 > (the first term of \ref{introEquation:bodyAngularMotion}). The free
1302 > rigid body is an example of a Lie-Poisson system with Hamiltonian
1303   function
1304   \begin{equation}
1305   T^r (\pi ) = T_1 ^r (\pi _1 ) + T_2^r (\pi _2 ) + T_3^r (\pi _3 )
# Line 1391 | Line 1312 | J(\pi ) = \left( {\begin{array}{*{20}c}
1312     0 & {\pi _3 } & { - \pi _2 }  \\
1313     { - \pi _3 } & 0 & {\pi _1 }  \\
1314     {\pi _2 } & { - \pi _1 } & 0  \\
1315 < \end{array}} \right)
1315 > \end{array}} \right).
1316   \end{equation}
1317   Thus, the dynamics of the free rigid body is governed by
1318   \begin{equation}
1319 < \frac{d}{{dt}}\pi  = J(\pi )\nabla _\pi  T^r (\pi )
1319 > \frac{d}{{dt}}\pi  = J(\pi )\nabla _\pi  T^r (\pi ).
1320   \end{equation}
1321 <
1322 < One may notice that each $T_i^r$ in Equation
1323 < \ref{introEquation:rotationalKineticRB} can be solved exactly. For
1403 < instance, the equations of motion due to $T_1^r$ are given by
1321 > One may notice that each $T_i^r$ in
1322 > Eq.~\ref{introEquation:rotationalKineticRB} can be solved exactly.
1323 > For instance, the equations of motion due to $T_1^r$ are given by
1324   \begin{equation}
1325   \frac{d}{{dt}}\pi  = R_1 \pi ,\frac{d}{{dt}}Q = QR_1
1326   \label{introEqaution:RBMotionSingleTerm}
1327   \end{equation}
1328 < where
1328 > with
1329   \[ R_1  = \left( {\begin{array}{*{20}c}
1330     0 & 0 & 0  \\
1331     0 & 0 & {\pi _1 }  \\
1332     0 & { - \pi _1 } & 0  \\
1333   \end{array}} \right).
1334   \]
1335 < The solutions of Equation \ref{introEqaution:RBMotionSingleTerm} is
1335 > The solution of Eq.~\ref{introEqaution:RBMotionSingleTerm} is
1336   \[
1337   \pi (\Delta t) = e^{\Delta tR_1 } \pi (0),Q(\Delta t) =
1338   Q(0)e^{\Delta tR_1 }
# Line 1426 | Line 1346 | tR_1 }$, we can use Cayley transformation,
1346   \end{array}} \right),\theta _1  = \frac{{\pi _1 }}{{I_1 }}\Delta t.
1347   \]
1348   To reduce the cost of computing expensive functions in $e^{\Delta
1349 < tR_1 }$, we can use Cayley transformation,
1349 > tR_1 }$, we can use the Cayley transformation to obtain a single-axis
1350 > propagator,
1351   \[
1352   e^{\Delta tR_1 }  \approx (1 - \Delta tR_1 )^{ - 1} (1 + \Delta tR_1
1353 < )
1353 > ).
1354   \]
1355 < The flow maps for $T_2^r$ and $T_3^r$ can be found in the same
1356 < manner.
1357 <
1437 < In order to construct a second-order symplectic method, we split the
1438 < angular kinetic Hamiltonian function can into five terms
1355 > The propagator maps for $T_2^r$ and $T_3^r$ can be found in the same
1356 > manner. In order to construct a second-order symplectic method, we
1357 > split the angular kinetic Hamiltonian function into five terms
1358   \[
1359   T^r (\pi ) = \frac{1}{2}T_1 ^r (\pi _1 ) + \frac{1}{2}T_2^r (\pi _2
1360   ) + T_3^r (\pi _3 ) + \frac{1}{2}T_2^r (\pi _2 ) + \frac{1}{2}T_1 ^r
1361 < (\pi _1 )
1362 < \].
1363 < Concatenating flows corresponding to these five terms, we can obtain
1364 < an symplectic integrator,
1361 > (\pi _1 ).
1362 > \]
1363 > By concatenating the propagators corresponding to these five terms,
1364 > we can obtain an symplectic integrator,
1365   \[
1366   \varphi _{\Delta t,T^r }  = \varphi _{\Delta t/2,\pi _1 }  \circ
1367   \varphi _{\Delta t/2,\pi _2 }  \circ \varphi _{\Delta t,\pi _3 }
1368   \circ \varphi _{\Delta t/2,\pi _2 }  \circ \varphi _{\Delta t/2,\pi
1369   _1 }.
1370   \]
1452
1371   The non-canonical Lie-Poisson bracket ${F, G}$ of two function
1372   $F(\pi )$ and $G(\pi )$ is defined by
1373   \[
1374   \{ F,G\} (\pi ) = [\nabla _\pi  F(\pi )]^T J(\pi )\nabla _\pi  G(\pi
1375 < )
1375 > ).
1376   \]
1377   If the Poisson bracket of a function $F$ with an arbitrary smooth
1378   function $G$ is zero, $F$ is a \emph{Casimir}, which is the
# Line 1465 | Line 1383 | then by the chain rule
1383   then by the chain rule
1384   \[
1385   \nabla _\pi  F(\pi ) = S'(\frac{{\parallel \pi \parallel ^2
1386 < }}{2})\pi
1386 > }}{2})\pi.
1387   \]
1388 < Thus $ [\nabla _\pi  F(\pi )]^T J(\pi ) =  - S'(\frac{{\parallel \pi
1388 > Thus, $ [\nabla _\pi  F(\pi )]^T J(\pi ) =  - S'(\frac{{\parallel
1389 > \pi
1390   \parallel ^2 }}{2})\pi  \times \pi  = 0 $. This explicit
1391 < Lie-Poisson integrator is found to be extremely efficient and stable
1392 < which can be explained by the fact the small angle approximation is
1393 < used and the norm of the angular momentum is conserved.
1391 > Lie-Poisson integrator is found to be both extremely efficient and
1392 > stable. These properties can be explained by the fact the small
1393 > angle approximation is used and the norm of the angular momentum is
1394 > conserved.
1395  
1396   \subsection{\label{introSection:RBHamiltonianSplitting} Hamiltonian
1397   Splitting for Rigid Body}
1398  
1399   The Hamiltonian of rigid body can be separated in terms of kinetic
1400 < energy and potential energy,
1401 < \[
1402 < H = T(p,\pi ) + V(q,Q)
1483 < \]
1484 < The equations of motion corresponding to potential energy and
1485 < kinetic energy are listed in the below table,
1400 > energy and potential energy,$H = T(p,\pi ) + V(q,Q)$. The equations
1401 > of motion corresponding to potential energy and kinetic energy are
1402 > listed in the below table,
1403   \begin{table}
1404 < \caption{Equations of motion due to Potential and Kinetic Energies}
1404 > \caption{EQUATIONS OF MOTION DUE TO POTENTIAL AND KINETIC ENERGIES}
1405   \begin{center}
1406   \begin{tabular}{|l|l|}
1407    \hline
# Line 1498 | Line 1415 | A second-order symplectic method is now obtained by th
1415   \end{tabular}
1416   \end{center}
1417   \end{table}
1418 < A second-order symplectic method is now obtained by the
1419 < composition of the flow maps,
1418 > A second-order symplectic method is now obtained by the composition
1419 > of the position and velocity propagators,
1420   \[
1421   \varphi _{\Delta t}  = \varphi _{\Delta t/2,V}  \circ \varphi
1422   _{\Delta t,T}  \circ \varphi _{\Delta t/2,V}.
1423   \]
1424   Moreover, $\varphi _{\Delta t/2,V}$ can be divided into two
1425 < sub-flows which corresponding to force and torque respectively,
1425 > sub-propagators which corresponding to force and torque
1426 > respectively,
1427   \[
1428   \varphi _{\Delta t/2,V}  = \varphi _{\Delta t/2,F}  \circ \varphi
1429   _{\Delta t/2,\tau }.
1430   \]
1431   Since the associated operators of $\varphi _{\Delta t/2,F} $ and
1432 < $\circ \varphi _{\Delta t/2,\tau }$ are commuted, the composition
1433 < order inside $\varphi _{\Delta t/2,V}$ does not matter.
1434 <
1435 < Furthermore, kinetic potential can be separated to translational
1518 < kinetic term, $T^t (p)$, and rotational kinetic term, $T^r (\pi )$,
1432 > $\circ \varphi _{\Delta t/2,\tau }$ commute, the composition order
1433 > inside $\varphi _{\Delta t/2,V}$ does not matter. Furthermore, the
1434 > kinetic energy can be separated to translational kinetic term, $T^t
1435 > (p)$, and rotational kinetic term, $T^r (\pi )$,
1436   \begin{equation}
1437   T(p,\pi ) =T^t (p) + T^r (\pi ).
1438   \end{equation}
1439   where $ T^t (p) = \frac{1}{2}p^T m^{ - 1} p $ and $T^r (\pi )$ is
1440 < defined by \ref{introEquation:rotationalKineticRB}. Therefore, the
1441 < corresponding flow maps are given by
1440 > defined by Eq.~\ref{introEquation:rotationalKineticRB}. Therefore,
1441 > the corresponding propagators are given by
1442   \[
1443   \varphi _{\Delta t,T}  = \varphi _{\Delta t,T^t }  \circ \varphi
1444   _{\Delta t,T^r }.
1445   \]
1446 < Finally, we obtain the overall symplectic flow maps for free moving
1447 < rigid body
1448 < \begin{equation}
1449 < \begin{array}{c}
1450 < \varphi _{\Delta t}  = \varphi _{\Delta t/2,F}  \circ \varphi _{\Delta t/2,\tau }  \\
1451 <  \circ \varphi _{\Delta t,T^t }  \circ \varphi _{\Delta t/2,\pi _1 }  \circ \varphi _{\Delta t/2,\pi _2 }  \circ \varphi _{\Delta t,\pi _3 }  \circ \varphi _{\Delta t/2,\pi _2 }  \circ \varphi _{\Delta t/2,\pi _1 }  \\
1535 <  \circ \varphi _{\Delta t/2,\tau }  \circ \varphi _{\Delta t/2,F}  .\\
1536 < \end{array}
1446 > Finally, we obtain the overall symplectic propagators for freely
1447 > moving rigid bodies
1448 > \begin{eqnarray}
1449 > \varphi _{\Delta t}  &=& \varphi _{\Delta t/2,F}  \circ \varphi _{\Delta t/2,\tau }  \notag\\
1450 >  & & \circ \varphi _{\Delta t,T^t }  \circ \varphi _{\Delta t/2,\pi _1 }  \circ \varphi _{\Delta t/2,\pi _2 }  \circ \varphi _{\Delta t,\pi _3 }  \circ \varphi _{\Delta t/2,\pi _2 }  \circ \varphi _{\Delta t/2,\pi _1 }  \notag\\
1451 >  & & \circ \varphi _{\Delta t/2,\tau }  \circ \varphi _{\Delta t/2,F}  .\\
1452   \label{introEquation:overallRBFlowMaps}
1453 < \end{equation}
1453 > \end{eqnarray}
1454  
1455   \section{\label{introSection:langevinDynamics}Langevin Dynamics}
1456   As an alternative to newtonian dynamics, Langevin dynamics, which
1457   mimics a simple heat bath with stochastic and dissipative forces,
1458   has been applied in a variety of studies. This section will review
1459 < the theory of Langevin dynamics simulation. A brief derivation of
1460 < generalized Langevin equation will be given first. Follow that, we
1461 < will discuss the physical meaning of the terms appearing in the
1462 < equation as well as the calculation of friction tensor from
1463 < hydrodynamics theory.
1459 > the theory of Langevin dynamics. A brief derivation of generalized
1460 > Langevin equation will be given first. Following that, we will
1461 > discuss the physical meaning of the terms appearing in the equation
1462 > as well as the calculation of friction tensor from hydrodynamics
1463 > theory.
1464  
1465   \subsection{\label{introSection:generalizedLangevinDynamics}Derivation of Generalized Langevin Equation}
1466  
1467 < Harmonic bath model, in which an effective set of harmonic
1467 > A harmonic bath model, in which an effective set of harmonic
1468   oscillators are used to mimic the effect of a linearly responding
1469   environment, has been widely used in quantum chemistry and
1470   statistical mechanics. One of the successful applications of
1471 < Harmonic bath model is the derivation of Deriving Generalized
1472 < Langevin Dynamics. Lets consider a system, in which the degree of
1471 > Harmonic bath model is the derivation of the Generalized Langevin
1472 > Dynamics (GLE). Lets consider a system, in which the degree of
1473   freedom $x$ is assumed to couple to the bath linearly, giving a
1474   Hamiltonian of the form
1475   \begin{equation}
1476   H = \frac{{p^2 }}{{2m}} + U(x) + H_B  + \Delta U(x,x_1 , \ldots x_N)
1477   \label{introEquation:bathGLE}.
1478   \end{equation}
1479 < Here $p$ is a momentum conjugate to $q$, $m$ is the mass associated
1480 < with this degree of freedom, $H_B$ is harmonic bath Hamiltonian,
1479 > Here $p$ is a momentum conjugate to $x$, $m$ is the mass associated
1480 > with this degree of freedom, $H_B$ is a harmonic bath Hamiltonian,
1481   \[
1482   H_B  = \sum\limits_{\alpha  = 1}^N {\left\{ {\frac{{p_\alpha ^2
1483   }}{{2m_\alpha  }} + \frac{1}{2}m_\alpha  \omega _\alpha ^2 }
# Line 1570 | Line 1485 | the harmonic bath masses, and $\Delta U$ is bilinear s
1485   \]
1486   where the index $\alpha$ runs over all the bath degrees of freedom,
1487   $\omega _\alpha$ are the harmonic bath frequencies, $m_\alpha$ are
1488 < the harmonic bath masses, and $\Delta U$ is bilinear system-bath
1488 > the harmonic bath masses, and $\Delta U$ is a bilinear system-bath
1489   coupling,
1490   \[
1491   \Delta U =  - \sum\limits_{\alpha  = 1}^N {g_\alpha  x_\alpha  x}
1492   \]
1493 < where $g_\alpha$ are the coupling constants between the bath and the
1494 < coordinate $x$. Introducing
1493 > where $g_\alpha$ are the coupling constants between the bath
1494 > coordinates ($x_ \alpha$) and the system coordinate ($x$).
1495 > Introducing
1496   \[
1497   W(x) = U(x) - \sum\limits_{\alpha  = 1}^N {\frac{{g_\alpha ^2
1498   }}{{2m_\alpha  w_\alpha ^2 }}} x^2
1499 < \] and combining the last two terms in Equation
1500 < \ref{introEquation:bathGLE}, we may rewrite the Harmonic bath
1585 < Hamiltonian as
1499 > \]
1500 > and combining the last two terms in Eq.~\ref{introEquation:bathGLE}, we may rewrite the Harmonic bath Hamiltonian as
1501   \[
1502   H = \frac{{p^2 }}{{2m}} + W(x) + \sum\limits_{\alpha  = 1}^N
1503   {\left\{ {\frac{{p_\alpha ^2 }}{{2m_\alpha  }} + \frac{1}{2}m_\alpha
1504   w_\alpha ^2 \left( {x_\alpha   - \frac{{g_\alpha  }}{{m_\alpha
1505 < w_\alpha ^2 }}x} \right)^2 } \right\}}
1505 > w_\alpha ^2 }}x} \right)^2 } \right\}}.
1506   \]
1507   Since the first two terms of the new Hamiltonian depend only on the
1508   system coordinates, we can get the equations of motion for
1509 < Generalized Langevin Dynamics by Hamilton's equations
1595 < \ref{introEquation:motionHamiltonianCoordinate,
1596 < introEquation:motionHamiltonianMomentum},
1509 > Generalized Langevin Dynamics by Hamilton's equations,
1510   \begin{equation}
1511   m\ddot x =  - \frac{{\partial W(x)}}{{\partial x}} -
1512   \sum\limits_{\alpha  = 1}^N {g_\alpha  \left( {x_\alpha   -
# Line 1606 | Line 1519 | m\ddot x_\alpha   =  - m_\alpha  w_\alpha ^2 \left( {x
1519   \frac{{g_\alpha  }}{{m_\alpha  w_\alpha ^2 }}x} \right).
1520   \label{introEquation:bathMotionGLE}
1521   \end{equation}
1609
1522   In order to derive an equation for $x$, the dynamics of the bath
1523   variables $x_\alpha$ must be solved exactly first. As an integral
1524   transform which is particularly useful in solving linear ordinary
1525 < differential equations, Laplace transform is the appropriate tool to
1526 < solve this problem. The basic idea is to transform the difficult
1525 > differential equations,the Laplace transform is the appropriate tool
1526 > to solve this problem. The basic idea is to transform the difficult
1527   differential equations into simple algebra problems which can be
1528 < solved easily. Then applying inverse Laplace transform, also known
1529 < as the Bromwich integral, we can retrieve the solutions of the
1530 < original problems.
1531 <
1620 < Let $f(t)$ be a function defined on $ [0,\infty ) $. The Laplace
1621 < transform of f(t) is a new function defined as
1528 > solved easily. Then, by applying the inverse Laplace transform, also
1529 > known as the Bromwich integral, we can retrieve the solutions of the
1530 > original problems. Let $f(t)$ be a function defined on $ [0,\infty )
1531 > $, the Laplace transform of $f(t)$ is a new function defined as
1532   \[
1533   L(f(t)) \equiv F(p) = \int_0^\infty  {f(t)e^{ - pt} dt}
1534   \]
1535   where  $p$ is real and  $L$ is called the Laplace Transform
1536   Operator. Below are some important properties of Laplace transform
1627
1537   \begin{eqnarray*}
1538   L(x + y)  & = & L(x) + L(y) \\
1539   L(ax)     & = & aL(x) \\
# Line 1632 | Line 1541 | Operator. Below are some important properties of Lapla
1541   L(\ddot x)& = & p^2 L(x) - px(0) - \dot x(0) \\
1542   L\left( {\int_0^t {g(t - \tau )h(\tau )d\tau } } \right)& = & G(p)H(p) \\
1543   \end{eqnarray*}
1544 <
1636 <
1637 < Applying Laplace transform to the bath coordinates, we obtain
1544 > Applying the Laplace transform to the bath coordinates, we obtain
1545   \begin{eqnarray*}
1546 < p^2 L(x_\alpha  ) - px_\alpha  (0) - \dot x_\alpha  (0) & = & - \omega _\alpha ^2 L(x_\alpha  ) + \frac{{g_\alpha  }}{{\omega _\alpha  }}L(x) \\
1547 < L(x_\alpha  ) & = & \frac{{\frac{{g_\alpha  }}{{\omega _\alpha  }}L(x) + px_\alpha  (0) + \dot x_\alpha  (0)}}{{p^2  + \omega _\alpha ^2 }} \\
1546 > p^2 L(x_\alpha  ) - px_\alpha  (0) - \dot x_\alpha  (0) & = & - \omega _\alpha ^2 L(x_\alpha  ) + \frac{{g_\alpha  }}{{\omega _\alpha  }}L(x), \\
1547 > L(x_\alpha  ) & = & \frac{{\frac{{g_\alpha  }}{{\omega _\alpha  }}L(x) + px_\alpha  (0) + \dot x_\alpha  (0)}}{{p^2  + \omega _\alpha ^2 }}. \\
1548   \end{eqnarray*}
1642
1549   By the same way, the system coordinates become
1550   \begin{eqnarray*}
1551 < mL(\ddot x) & = & - \frac{1}{p}\frac{{\partial W(x)}}{{\partial x}} \\
1552 <  & & \mbox{} - \sum\limits_{\alpha  = 1}^N {\left\{ { - \frac{{g_\alpha ^2 }}{{m_\alpha  \omega _\alpha ^2 }}\frac{p}{{p^2  + \omega _\alpha ^2 }}pL(x) - \frac{p}{{p^2  + \omega _\alpha ^2 }}g_\alpha  x_\alpha  (0) - \frac{1}{{p^2  + \omega _\alpha ^2 }}g_\alpha  \dot x_\alpha  (0)} \right\}}  \\
1551 > mL(\ddot x) & = &
1552 >  - \sum\limits_{\alpha  = 1}^N {\left\{ { - \frac{{g_\alpha ^2 }}{{m_\alpha  \omega _\alpha ^2 }}\frac{p}{{p^2  + \omega _\alpha ^2 }}pL(x) - \frac{p}{{p^2  + \omega _\alpha ^2 }}g_\alpha  x_\alpha  (0) - \frac{1}{{p^2  + \omega _\alpha ^2 }}g_\alpha  \dot x_\alpha  (0)} \right\}}  \\
1553 >  & & - \frac{1}{p}\frac{{\partial W(x)}}{{\partial x}}.
1554   \end{eqnarray*}
1648
1555   With the help of some relatively important inverse Laplace
1556   transformations:
1557   \[
# Line 1655 | Line 1561 | transformations:
1561   L(1) = \frac{1}{p} \\
1562   \end{array}
1563   \]
1564 < , we obtain
1564 > we obtain
1565   \begin{eqnarray*}
1566 < m\ddot x & = & - \frac{{\partial W(x)}}{{\partial x}} -
1566 > m\ddot x & =  & - \frac{{\partial W(x)}}{{\partial x}} -
1567   \sum\limits_{\alpha  = 1}^N {\left\{ {\left( { - \frac{{g_\alpha ^2
1568   }}{{m_\alpha  \omega _\alpha ^2 }}} \right)\int_0^t {\cos (\omega
1569 < _\alpha  t)\dot x(t - \tau )d\tau  - \left[ {g_\alpha  x_\alpha  (0)
1570 < - \frac{{g_\alpha  }}{{m_\alpha  \omega _\alpha  }}} \right]\cos
1571 < (\omega _\alpha  t) - \frac{{g_\alpha  \dot x_\alpha  (0)}}{{\omega
1572 < _\alpha  }}\sin (\omega _\alpha  t)} } \right\}}
1573 < %
1574 < & = & - \frac{{\partial W(x)}}{{\partial x}} - \int_0^t
1569 > _\alpha  t)\dot x(t - \tau )d\tau } } \right\}}  \\
1570 > & & + \sum\limits_{\alpha  = 1}^N {\left\{ {\left[ {g_\alpha
1571 > x_\alpha (0) - \frac{{g_\alpha  }}{{m_\alpha  \omega _\alpha  }}}
1572 > \right]\cos (\omega _\alpha  t) + \frac{{g_\alpha  \dot x_\alpha
1573 > (0)}}{{\omega _\alpha  }}\sin (\omega _\alpha  t)} \right\}}
1574 > \end{eqnarray*}
1575 > \begin{eqnarray*}
1576 > m\ddot x & = & - \frac{{\partial W(x)}}{{\partial x}} - \int_0^t
1577   {\sum\limits_{\alpha  = 1}^N {\left( { - \frac{{g_\alpha ^2
1578   }}{{m_\alpha  \omega _\alpha ^2 }}} \right)\cos (\omega _\alpha
1579 < t)\dot x(t - \tau )d} \tau }  + \sum\limits_{\alpha  = 1}^N {\left\{
1580 < {\left[ {g_\alpha  x_\alpha  (0) - \frac{{g_\alpha  }}{{m_\alpha
1581 < \omega _\alpha  }}} \right]\cos (\omega _\alpha  t) +
1582 < \frac{{g_\alpha  \dot x_\alpha  (0)}}{{\omega _\alpha  }}\sin
1583 < (\omega _\alpha  t)} \right\}}
1579 > t)\dot x(t - \tau )d} \tau }  \\
1580 > & & + \sum\limits_{\alpha  = 1}^N {\left\{ {\left[ {g_\alpha
1581 > x_\alpha (0) - \frac{{g_\alpha }}{{m_\alpha \omega _\alpha  }}}
1582 > \right]\cos (\omega _\alpha  t) + \frac{{g_\alpha  \dot x_\alpha
1583 > (0)}}{{\omega _\alpha  }}\sin (\omega _\alpha  t)} \right\}}
1584   \end{eqnarray*}
1585   Introducing a \emph{dynamic friction kernel}
1586   \begin{equation}
# Line 1696 | Line 1604 | which is known as the \emph{generalized Langevin equat
1604   \end{equation}
1605   which is known as the \emph{generalized Langevin equation}.
1606  
1607 < \subsubsection{\label{introSection:randomForceDynamicFrictionKernel}Random Force and Dynamic Friction Kernel}
1607 > \subsubsection{\label{introSection:randomForceDynamicFrictionKernel}\textbf{Random Force and Dynamic Friction Kernel}}
1608  
1609   One may notice that $R(t)$ depends only on initial conditions, which
1610   implies it is completely deterministic within the context of a
1611   harmonic bath. However, it is easy to verify that $R(t)$ is totally
1612 < uncorrelated to $x$ and $\dot x$,
1613 < \[
1614 < \begin{array}{l}
1615 < \left\langle {x(t)R(t)} \right\rangle  = 0, \\
1616 < \left\langle {\dot x(t)R(t)} \right\rangle  = 0. \\
1709 < \end{array}
1710 < \]
1711 < This property is what we expect from a truly random process. As long
1712 < as the model, which is gaussian distribution in general, chosen for
1713 < $R(t)$ is a truly random process, the stochastic nature of the GLE
1714 < still remains.
1715 <
1612 > uncorrelated to $x$ and $\dot x$, $\left\langle {x(t)R(t)}
1613 > \right\rangle  = 0, \left\langle {\dot x(t)R(t)} \right\rangle  =
1614 > 0.$ This property is what we expect from a truly random process. As
1615 > long as the model chosen for $R(t)$ (typically a Gaussian
1616 > distribution) is truly random, the stochastic nature of the GLE remains.
1617   %dynamic friction kernel
1618   The convolution integral
1619   \[
# Line 1727 | Line 1628 | and Equation \ref{introEuqation:GeneralizedLangevinDyn
1628   \[
1629   \int_0^t {\xi (\tau )\dot x(t - \tau )d\tau }  = \xi _0 (x(t) - x(0))
1630   \]
1631 < and Equation \ref{introEuqation:GeneralizedLangevinDynamics} becomes
1631 > and Eq.~\ref{introEuqation:GeneralizedLangevinDynamics} becomes
1632   \[
1633   m\ddot x =  - \frac{\partial }{{\partial x}}\left( {W(x) +
1634   \frac{1}{2}\xi _0 (x - x_0 )^2 } \right) + R(t),
1635   \]
1636 < which can be used to describe dynamic caging effect. The other
1637 < extreme is the bath that responds infinitely quickly to motions in
1638 < the system. Thus, $\xi (t)$ can be taken as a $delta$ function in
1639 < time:
1636 > which can be used to describe the effect of dynamic caging in
1637 > viscous solvents. The other extreme is the bath that responds
1638 > infinitely quickly to motions in the system. Thus, $\xi (t)$ can be
1639 > taken as a $\delta$ function in time:
1640   \[
1641   \xi (t) = 2\xi _0 \delta (t)
1642   \]
# Line 1744 | Line 1645 | and Equation \ref{introEuqation:GeneralizedLangevinDyn
1645   \int_0^t {\xi (\tau )\dot x(t - \tau )d\tau }  = 2\xi _0 \int_0^t
1646   {\delta (\tau )\dot x(t - \tau )d\tau }  = \xi _0 \dot x(t),
1647   \]
1648 < and Equation \ref{introEuqation:GeneralizedLangevinDynamics} becomes
1648 > and Eq.~\ref{introEuqation:GeneralizedLangevinDynamics} becomes
1649   \begin{equation}
1650   m\ddot x =  - \frac{{\partial W(x)}}{{\partial x}} - \xi _0 \dot
1651   x(t) + R(t) \label{introEquation:LangevinEquation}
1652   \end{equation}
1653   which is known as the Langevin equation. The static friction
1654   coefficient $\xi _0$ can either be calculated from the spectral density
1655 < or be determined by Stokes' law for regular shaped particles.A
1655 > or be determined by Stokes' law for regular shaped particles. A
1656   brief review of calculating the friction tensor for arbitrarily shaped
1657   particles is given in Sec.~\ref{introSection:frictionTensor}.
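
Purely as an illustration (this sketch is not part of the derivation above), Eq.~\ref{introEquation:LangevinEquation} can be integrated numerically with a simple Euler-Maruyama scheme. In the following Python sketch, the harmonic potential, the parameter values and the function name are hypothetical choices made only for this example; the discrete random force amplitude $\sqrt{2\xi_0 k_B T/\Delta t}$ is the standard choice consistent with the fluctuation-dissipation relation in the Markovian limit.
\begin{verbatim}
import numpy as np

def langevin_trajectory(x0, v0, mass, xi0, kT, dt, nsteps,
                        force=lambda x: -x):
    """Euler-Maruyama integration of m x'' = -dW/dx - xi0 x' + R(t).

    The default force corresponds to the hypothetical choice W(x) = x^2/2."""
    rng = np.random.default_rng(0)
    x, v = x0, v0
    traj = np.empty(nsteps)
    for n in range(nsteps):
        # discretized random force with <R(t)R(t')> = 2 xi0 kT delta(t - t')
        R = rng.normal(0.0, np.sqrt(2.0 * xi0 * kT / dt))
        a = (force(x) - xi0 * v + R) / mass
        v += a * dt
        x += v * dt
        traj[n] = x
    return traj

# Example usage (arbitrary reduced units):
positions = langevin_trajectory(x0=1.0, v0=0.0, mass=1.0, xi0=0.5,
                                kT=1.0, dt=0.01, nsteps=10000)
\end{verbatim}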
1658  
1659 < \subsubsection{\label{introSection:secondFluctuationDissipation}The Second Fluctuation Dissipation Theorem}
1659 > \subsubsection{\label{introSection:secondFluctuationDissipation}\textbf{The Second Fluctuation Dissipation Theorem}}
1660  
1661 < Defining a new set of coordinates,
1661 > Defining a new set of coordinates
1662   \[
1663   q_\alpha  (t) = x_\alpha  (t) - \frac{1}{{m_\alpha  \omega _\alpha
1664 < ^2 }}x(0)
1665 < \],
1664 > ^2 }}x(0),
1665 > \]
1666   we can rewrite $R(t)$ as
1667   \[
1668   R(t) = \sum\limits_{\alpha  = 1}^N {g_\alpha  q_\alpha  (t)}.
1669   \]
1670   And since the $q$ coordinates are harmonic oscillators,
1770
1671   \begin{eqnarray*}
1672   \left\langle {q_\alpha ^2 } \right\rangle  & = & \frac{{kT}}{{m_\alpha  \omega _\alpha ^2 }} \\
1673   \left\langle {q_\alpha  (t)q_\alpha  (0)} \right\rangle & = & \left\langle {q_\alpha ^2 (0)} \right\rangle \cos (\omega _\alpha  t) \\
# Line 1776 | Line 1676 | And since the $q$ coordinates are harmonic oscillators
1676    & = &\sum\limits_\alpha  {g_\alpha ^2 \left\langle {q_\alpha ^2 (0)} \right\rangle \cos (\omega _\alpha  t)}  \\
1677    & = &kT\xi (t) \\
1678   \end{eqnarray*}
1779
1679   Thus, we recover the \emph{second fluctuation dissipation theorem}
1680   \begin{equation}
1681   \xi (t) = \frac{1}{{kT}}\left\langle {R(t)R(0)} \right\rangle
1682 < \label{introEquation:secondFluctuationDissipation}.
1682 > \label{introEquation:secondFluctuationDissipation},
1683   \end{equation}
1684 < In effect, it acts as a constraint on the possible ways in which one
1685 < can model the random force and friction kernel.
1787 <
1788 < \subsection{\label{introSection:frictionTensor} Friction Tensor}
1789 < Theoretically, the friction kernel can be determined using the
1790 < velocity autocorrelation function. However, this approach becomes
1791 < impractical as the system becomes more complicated. Instead, various
1792 < approaches based on hydrodynamics have been developed to calculate
1793 < the friction coefficients. When the friction is isotropic, as in the
1794 < Langevin equation (Eq.~\ref{introEquation:LangevinEquation}), it can be
1795 < taken as a scalar. In general, the friction tensor $\Xi$ is a $6\times 6$ matrix given by
1796 < \[
1797 < \Xi  = \left( {\begin{array}{*{20}c}
1798 <   {\Xi _{}^{tt} } & {\Xi _{}^{rt} }  \\
1799 <   {\Xi _{}^{tr} } & {\Xi _{}^{rr} }  \\
1800 < \end{array}} \right).
1801 < \]
1802 < Here, $ {\Xi^{tt} }$ and $ {\Xi^{rr} }$ are the translational and
1803 < rotational resistance (friction) tensors, respectively, while
1804 < ${\Xi^{tr} }$ is the translation-rotation coupling tensor and
1805 < ${\Xi^{rt} }$ is the rotation-translation coupling tensor. When a
1806 < particle moves in a fluid, it experiences friction forces and
1807 < torques directed opposite to its velocity and angular
1808 < velocity,
1809 < \[
1810 < \left( \begin{array}{l}
1811 < F_R  \\
1812 < \tau _R  \\
1813 < \end{array} \right) =  - \left( {\begin{array}{*{20}c}
1814 <   {\Xi ^{tt} } & {\Xi ^{rt} }  \\
1815 <   {\Xi ^{tr} } & {\Xi ^{rr} }  \\
1816 < \end{array}} \right)\left( \begin{array}{l}
1817 < v \\
1818 < w \\
1819 < \end{array} \right)
1820 < \]
1821 < where $F_R$ is the friction force and $\tau _R$ is the friction
1822 < torque.
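
Purely as an illustrative sketch (not part of the original text), the action of the resistance tensor on the translational and angular velocities can be evaluated directly in Python; the block structure mirrors the notation above, while the function name and numerical values are hypothetical.
\begin{verbatim}
import numpy as np

def friction_force_torque(Xi, v, w):
    """Return (F_R, tau_R) = -Xi . (v, w) for a 6x6 resistance tensor Xi."""
    vel6 = np.concatenate([np.asarray(v), np.asarray(w)])
    gen_force = -np.asarray(Xi) @ vel6
    return gen_force[:3], gen_force[3:]   # friction force, friction torque

# Example with a diagonal (uncoupled, isotropic) resistance tensor:
Xi = np.diag([2.0, 2.0, 2.0, 0.5, 0.5, 0.5])
F_R, tau_R = friction_force_torque(Xi, v=[1.0, 0.0, 0.0], w=[0.0, 0.0, 1.0])
\end{verbatim}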
1823 <
1824 < \subsubsection{\label{introSection:resistanceTensorRegular}The Resistance Tensor for Regular Shapes}
1825 <
1826 < For a spherical particle, the translational and rotational friction
1827 < constants can be calculated from Stokes' law,
1828 < \[
1829 < \Xi ^{tt}  = \left( {\begin{array}{*{20}c}
1830 <   {6\pi \eta R} & 0 & 0  \\
1831 <   0 & {6\pi \eta R} & 0  \\
1832 <   0 & 0 & {6\pi \eta R}  \\
1833 < \end{array}} \right)
1834 < \]
1835 < and
1836 < \[
1837 < \Xi ^{rr}  = \left( {\begin{array}{*{20}c}
1838 <   {8\pi \eta R^3 } & 0 & 0  \\
1839 <   0 & {8\pi \eta R^3 } & 0  \\
1840 <   0 & 0 & {8\pi \eta R^3 }  \\
1841 < \end{array}} \right)
1842 < \]
1843 < where $\eta$ is the viscosity of the solvent and $R$ is the
1844 < hydrodynamic radius.
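
These two results translate directly into a short numerical routine. The following Python sketch is an illustration only; the function name and the parameter values in the example are hypothetical.
\begin{verbatim}
import numpy as np

def stokes_resistance(eta, R):
    """Translational and rotational resistance tensors of a sphere."""
    Xi_tt = 6.0 * np.pi * eta * R * np.eye(3)      # 6 pi eta R on the diagonal
    Xi_rr = 8.0 * np.pi * eta * R**3 * np.eye(3)   # 8 pi eta R^3 on the diagonal
    return Xi_tt, Xi_rr

# Example (arbitrary consistent units):
Xi_tt, Xi_rr = stokes_resistance(eta=1.0, R=2.0)
\end{verbatim}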
1845 <
1846 < Other non-spherical shapes, such as cylinders and ellipsoids, are
1847 < widely used as references for developing new hydrodynamic theories,
1848 < because their properties can be calculated exactly. In 1936, Perrin
1849 < extended Stokes' law to the general ellipsoid, also called a
1850 < triaxial ellipsoid, which is given in Cartesian coordinates
1851 < by\cite{Perrin1934, Perrin1936}
1852 < \[
1853 < \frac{{x^2 }}{{a^2 }} + \frac{{y^2 }}{{b^2 }} + \frac{{z^2 }}{{c^2
1854 < }} = 1
1855 < \]
1856 < where the semi-axes are of lengths $a$, $b$, and $c$. Unfortunately,
1857 < due to the complexity of the elliptic integral, only ellipsoids with
1858 < two equal axes, \textit{i.e.} prolate ($ a \ge b = c$) and oblate
1859 < ($ a < b = c $) ellipsoids, can be solved exactly. Introducing an
1860 < elliptic integral parameter $S$ for the prolate case,
1861 < \[
1862 < S = \frac{2}{{\sqrt {a^2  - b^2 } }}\ln \frac{{a + \sqrt {a^2  - b^2
1863 < } }}{b},
1864 < \]
1865 < and for the oblate case,
1866 < \[
1867 < S = \frac{2}{{\sqrt {b^2  - a^2 } }}\arctan \frac{{\sqrt {b^2  - a^2 }
1868 < }}{a},
1869 < \]
1870 < one can write down the translational and rotational resistance
1871 < tensors
1872 < \[
1873 < \begin{array}{l}
1874 < \Xi _a^{tt}  = 16\pi \eta \frac{{a^2  - b^2 }}{{(2a^2  - b^2 )S - 2a}} \\
1875 < \Xi _b^{tt}  = \Xi _c^{tt}  = 32\pi \eta \frac{{a^2  - b^2 }}{{(2a^2  - 3b^2 )S + 2a}} \\
1876 < \end{array},
1877 < \]
1878 < and
1879 < \[
1880 < \begin{array}{l}
1881 < \Xi _a^{rr}  = \frac{{32\pi }}{3}\eta \frac{{(a^2  - b^2 )b^2 }}{{2a - b^2 S}} \\
1882 < \Xi _b^{rr}  = \Xi _c^{rr}  = \frac{{32\pi }}{3}\eta \frac{{(a^4  - b^4 )}}{{(2a^2  - b^2 )S - 2a}} \\
1883 < \end{array}.
1884 < \]
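
The expressions above can likewise be evaluated numerically. The following Python sketch (an illustration only, with a hypothetical function name and arbitrary example values) implements the prolate case exactly as written above; the oblate case would use the $\arctan$ form of $S$ instead.
\begin{verbatim}
import numpy as np

def prolate_resistance(eta, a, b):
    """Resistance tensor elements for a prolate ellipsoid (a >= b = c),
    using the elliptic integral parameter S defined above."""
    root = np.sqrt(a**2 - b**2)
    S = (2.0 / root) * np.log((a + root) / b)
    Xi_tt_a = 16.0 * np.pi * eta * (a**2 - b**2) / ((2.0 * a**2 - b**2) * S - 2.0 * a)
    Xi_tt_b = 32.0 * np.pi * eta * (a**2 - b**2) / ((2.0 * a**2 - 3.0 * b**2) * S + 2.0 * a)
    Xi_rr_a = (32.0 * np.pi / 3.0) * eta * (a**2 - b**2) * b**2 / (2.0 * a - b**2 * S)
    Xi_rr_b = (32.0 * np.pi / 3.0) * eta * (a**4 - b**4) / ((2.0 * a**2 - b**2) * S - 2.0 * a)
    return S, (Xi_tt_a, Xi_tt_b, Xi_tt_b), (Xi_rr_a, Xi_rr_b, Xi_rr_b)

# Example (arbitrary units): a 3:1 prolate ellipsoid
S, Xi_tt, Xi_rr = prolate_resistance(eta=1.0, a=3.0, b=1.0)
\end{verbatim}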
1885 <
1886 < \subsubsection{\label{introSection:resistanceTensorRegularArbitrary}The Resistance Tensor for Arbitrary Shapes}
1887 <
1888 < Unlike spherical and other regularly shaped molecules, there is no
1889 < analytical solution for the friction tensor of an arbitrarily shaped
1890 < rigid molecule. The ellipsoid of revolution model and the general
1891 < triaxial ellipsoid model have been used to approximate the
1892 < hydrodynamic properties of rigid bodies. However, since the mapping
1893 < from the space of all possible ellipsoids ($r$-space) to the space of
1894 < all possible combinations of rotational diffusion coefficients
1895 < ($D$-space) is not unique\cite{Wegener1979}, and because of the
1896 < intrinsic coupling between the translational and rotational motion of
1897 < a rigid body, the general ellipsoid is not always suitable for
1898 < modeling arbitrarily shaped rigid molecules. A number of studies have
1899 < instead determined the friction tensor of irregularly shaped rigid
1900 < bodies using a more general approach in which the molecule of
1901 < interest is modeled by a combination of spheres (beads)\cite{Carrasco1999},
1902 < whose hydrodynamic properties can be calculated using the
1903 < hydrodynamic interaction tensor. Let us consider a rigid assembly of
1904 < $N$ beads immersed in a continuous medium. Due to hydrodynamic
1905 < interactions, the ``net'' velocity of the $i$th bead, $v'_i$, is
1906 < different from its unperturbed velocity $v_i$,
1907 < \[
1908 < v'_i  = v_i  - \sum\limits_{j \ne i} {T_{ij} F_j }
1909 < \]
1910 < where $F_j$ is the frictional force on bead $j$, and $T_{ij}$ is the
1911 < hydrodynamic interaction tensor. The friction force on the $i$th bead
1912 < is proportional to its ``net'' velocity
1913 < \begin{equation}
1914 < F_i  = \zeta _i v_i  - \zeta _i \sum\limits_{j \ne i} {T_{ij} F_j }.
1915 < \label{introEquation:tensorExpression}
1916 < \end{equation}
1917 < This equation is the basis for deriving the hydrodynamic tensor. In
1918 < 1930, Oseen and Burgers gave a simple solution to Equation
1919 < \ref{introEquation:tensorExpression}
1920 < \begin{equation}
1921 < T_{ij}  = \frac{1}{{8\pi \eta R_{ij} }}\left( {I + \frac{{R_{ij}
1922 < R_{ij}^T }}{{R_{ij}^2 }}} \right).
1923 < \label{introEquation:oseenTensor}
1924 < \end{equation}
1925 < Here $R_{ij}$ is the distance vector between bead $i$ and bead $j$.
1926 < A second-order expression for beads of different sizes was
1927 < introduced by Rotne and Prager\cite{Rotne1969} and improved by
1928 < Garc\'{i}a de la Torre and Bloomfield\cite{Torre1977},
1929 < \begin{equation}
1930 < T_{ij}  = \frac{1}{{8\pi \eta R_{ij} }}\left[ {\left( {I +
1931 < \frac{{R_{ij} R_{ij}^T }}{{R_{ij}^2 }}} \right) + \frac{{\sigma
1932 < _i^2  + \sigma _j^2 }}{{R_{ij}^2 }}\left( {\frac{I}{3} -
1933 < \frac{{R_{ij} R_{ij}^T }}{{R_{ij}^2 }}} \right)} \right].
1934 < \label{introEquation:RPTensorNonOverlapped}
1935 < \end{equation}
1936 < Both Equation \ref{introEquation:oseenTensor} and Equation
1937 < \ref{introEquation:RPTensorNonOverlapped} assume $R_{ij}
1938 < \ge \sigma _i  + \sigma _j$. An alternative expression for
1939 < overlapping beads with the same radius, $\sigma$, is given by
1940 < \begin{equation}
1941 < T_{ij}  = \frac{1}{{6\pi \eta \sigma }}\left[ {\left( {1 -
1942 < \frac{9}{{32}}\frac{{R_{ij} }}{\sigma }} \right)I +
1943 < \frac{3}{{32}}\frac{{R_{ij} R_{ij}^T }}{{R_{ij} \sigma }}} \right]
1944 < \label{introEquation:RPTensorOverlapped}
1945 < \end{equation}
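
To make the preceding expressions concrete, a sketch of the pairwise hydrodynamic interaction tensor is given below. This is an illustration only: it assumes all beads share a single radius $\sigma$, uses the non-overlapping form of Eq.~\ref{introEquation:RPTensorNonOverlapped} for $R_{ij} \ge 2\sigma$ and the overlapping form of Eq.~\ref{introEquation:RPTensorOverlapped} otherwise, and the function name is hypothetical.
\begin{verbatim}
import numpy as np

def interaction_tensor(r_i, r_j, eta, sigma):
    """Hydrodynamic interaction tensor T_ij between two beads of equal
    radius sigma, using the overlapping form when R_ij < 2 sigma."""
    Rvec = np.asarray(r_i, dtype=float) - np.asarray(r_j, dtype=float)
    R = np.linalg.norm(Rvec)
    outer = np.outer(Rvec, Rvec) / R**2          # R_ij R_ij^T / R_ij^2
    I = np.eye(3)
    if R >= 2.0 * sigma:                          # non-overlapping beads
        correction = (2.0 * sigma**2 / R**2) * (I / 3.0 - outer)
        return (I + outer + correction) / (8.0 * np.pi * eta * R)
    # overlapping beads of equal radius
    return ((1.0 - 9.0 * R / (32.0 * sigma)) * I
            + (3.0 * R / (32.0 * sigma)) * outer) / (6.0 * np.pi * eta * sigma)

# Example: two beads of radius 1.0 separated by 3.0 (arbitrary units)
T = interaction_tensor([0.0, 0.0, 0.0], [3.0, 0.0, 0.0], eta=1.0, sigma=1.0)
\end{verbatim}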
1946 <
1947 < To calculate the resistance tensor at an arbitrary origin $O$, we
1948 < construct a $3N \times 3N$ matrix consisting of $N \times N$
1949 < $B_{ij}$ blocks
1950 < \begin{equation}
1951 < B = \left( {\begin{array}{*{20}c}
1952 <   {B_{11} } &  \ldots  & {B_{1N} }  \\
1953 <    \vdots  &  \ddots  &  \vdots   \\
1954 <   {B_{N1} } &  \cdots  & {B_{NN} }  \\
1955 < \end{array}} \right),
1956 < \end{equation}
1957 < where $B_{ij}$ is given by
1958 < \[
1959 < B_{ij}  = \delta _{ij} \frac{I}{{6\pi \eta R}} + (1 - \delta _{ij}
1960 < )T_{ij}
1961 < \]
1962 < where $\delta _{ij}$ is the Kronecker delta. Inverting the matrix
1963 < $B$, we obtain
1964 <
1965 < \[
1966 < C = B^{ - 1}  = \left( {\begin{array}{*{20}c}
1967 <   {C_{11} } &  \ldots  & {C_{1N} }  \\
1968 <    \vdots  &  \ddots  &  \vdots   \\
1969 <   {C_{N1} } &  \cdots  & {C_{NN} }  \\
1970 < \end{array}} \right),
1971 < \]
1972 < which can be partitioned into an $N \times N$ array of $3 \times 3$
1973 < blocks $C_{ij}$. With the help of $C_{ij}$ and the skew matrix $U_i$,
1974 < \[
1975 < U_i  = \left( {\begin{array}{*{20}c}
1976 <   0 & { - z_i } & {y_i }  \\
1977 <   {z_i } & 0 & { - x_i }  \\
1978 <   { - y_i } & {x_i } & 0  \\
1979 < \end{array}} \right)
1980 < \]
1981 < where $x_i$, $y_i$, $z_i$ are the components of the vector joining
1982 < bead $i$ and the origin $O$. Hence, the elements of the resistance
1983 < tensor at an arbitrary origin $O$ can be written as
1984 < \begin{equation}
1985 < \begin{array}{l}
1986 < \Xi _{}^{tt}  = \sum\limits_i {\sum\limits_j {C_{ij} } } , \\
1987 < \Xi _{}^{tr}  = \Xi _{}^{rt}  = \sum\limits_i {\sum\limits_j {U_i C_{ij} } } , \\
1988 < \Xi _{}^{rr}  =  - \sum\limits_i {\sum\limits_j {U_i C_{ij} } } U_j  \\
1989 < \end{array}
1990 < \label{introEquation:ResistanceTensorArbitraryOrigin}
1991 < \end{equation}
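
Purely as an illustrative sketch of the procedure just described (not code from the original work), the supermatrix $B$ can be assembled, inverted and contracted into the $3\times 3$ blocks of the resistance tensor as follows. The simple Oseen form is used for $T_{ij}$, the bead radius $\sigma$ enters only through the self term, and all names and parameter values are hypothetical.
\begin{verbatim}
import numpy as np

def skew(r):
    """Skew matrix U built from the vector joining a bead and the origin."""
    x, y, z = r
    return np.array([[0.0, -z, y], [z, 0.0, -x], [-y, x, 0.0]])

def bead_resistance_tensor(coords, eta, sigma):
    """Assemble B from Oseen tensors, invert it, and contract the 3x3
    blocks C_ij into the resistance tensor at the origin."""
    coords = np.asarray(coords, dtype=float)
    N = len(coords)
    B = np.zeros((3 * N, 3 * N))
    for i in range(N):
        for j in range(N):
            if i == j:
                block = np.eye(3) / (6.0 * np.pi * eta * sigma)   # self term
            else:
                Rvec = coords[i] - coords[j]
                R = np.linalg.norm(Rvec)
                block = (np.eye(3) + np.outer(Rvec, Rvec) / R**2) / (8.0 * np.pi * eta * R)
            B[3*i:3*i+3, 3*j:3*j+3] = block
    C = np.linalg.inv(B)
    Xi_tt = np.zeros((3, 3))
    Xi_tr = np.zeros((3, 3))
    Xi_rr = np.zeros((3, 3))
    for i in range(N):
        Ui = skew(coords[i])
        for j in range(N):
            Cij = C[3*i:3*i+3, 3*j:3*j+3]
            Xi_tt += Cij
            Xi_tr += Ui @ Cij
            Xi_rr -= Ui @ Cij @ skew(coords[j])
    return Xi_tt, Xi_tr, Xi_rr

# Example: a rigid dimer of two beads (arbitrary units)
Xi_tt, Xi_tr, Xi_rr = bead_resistance_tensor([[0.0, 0.0, 0.0],
                                              [0.0, 0.0, 2.5]], eta=1.0, sigma=1.0)
\end{verbatim}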
1992 <
1993 < The resistance tensor depends on the origin to which it refers. The
1994 < proper location for applying the friction force is the center of
1995 < resistance (or center of reaction), at which the trace of the
1996 < rotational resistance tensor, $ \Xi ^{rr}$, reaches a minimum.
1997 < Mathematically, the center of resistance is defined as the unique point
1998 < of the rigid body at which the translation-rotation coupling tensor is symmetric,
1999 < \begin{equation}
2000 < \Xi^{tr}  = \left( {\Xi^{tr} } \right)^T
2001 < \label{introEquation:definitionCR}
2002 < \end{equation}
2003 < From Equation \ref{introEquation:ResistanceTensorArbitraryOrigin},
2004 < we can easily see that the translational resistance tensor is
2005 < origin independent, while the rotational resistance tensor and the
2006 < translation-rotation coupling resistance tensor depend on the
2007 < origin. Given the resistance tensor at an arbitrary origin $O$ and a
2008 < vector $r_{OP} = (x_{OP}, y_{OP}, z_{OP})$ from $O$ to $P$, we can
2009 < obtain the resistance tensor at $P$ by
2010 < \begin{equation}
2011 < \begin{array}{l}
2012 < \Xi _P^{tt}  = \Xi _O^{tt}  \\
2013 < \Xi _P^{tr}  = \Xi _P^{rt}  = \Xi _O^{tr}  - U_{OP} \Xi _O^{tt}  \\
2014 < \Xi _P^{rr}  = \Xi _O^{rr}  - U_{OP} \Xi _O^{tt} U_{OP}  + \Xi _O^{tr} U_{OP}  - U_{OP} \left( {\Xi _O^{tr} } \right)^T  \\
2015 < \end{array}
2016 < \label{introEquation:resistanceTensorTransformation}
2017 < \end{equation}
2018 < where
2019 < \[
2020 < U_{OP}  = \left( {\begin{array}{*{20}c}
2021 <   0 & { - z_{OP} } & {y_{OP} }  \\
2022 <   {z_{OP} } & 0 & { - x_{OP} }  \\
2023 <   { - y_{OP} } & {x_{OP} } & 0  \\
2024 < \end{array}} \right)
2025 < \]
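
Equation \ref{introEquation:resistanceTensorTransformation} is straightforward to apply numerically. The following sketch (an illustration only, with hypothetical names and example values) moves the resistance tensor blocks from the origin $O$ to a point $P$ displaced by $r_{OP}$, following the expressions above.
\begin{verbatim}
import numpy as np

def skew(r):
    x, y, z = r
    return np.array([[0.0, -z, y], [z, 0.0, -x], [-y, x, 0.0]])

def transform_resistance(Xi_tt_O, Xi_tr_O, Xi_rr_O, r_OP):
    """Move the resistance tensor blocks from the origin O to P = O + r_OP."""
    U = skew(np.asarray(r_OP, dtype=float))
    Xi_tt_P = Xi_tt_O
    Xi_tr_P = Xi_tr_O - U @ Xi_tt_O
    Xi_rr_P = Xi_rr_O - U @ Xi_tt_O @ U + Xi_tr_O @ U - U @ Xi_tr_O.T
    return Xi_tt_P, Xi_tr_P, Xi_rr_P

# Example: shift a diagonal tensor by one unit along z (arbitrary units)
blocks_at_P = transform_resistance(np.diag([2.0, 2.0, 2.0]), np.zeros((3, 3)),
                                   np.diag([0.5, 0.5, 0.5]), r_OP=[0.0, 0.0, 1.0])
\end{verbatim}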
2026 < Using Equations \ref{introEquation:definitionCR} and
2027 < \ref{introEquation:resistanceTensorTransformation}, one can locate
2028 < the position of the center of resistance,
2029 < \begin{eqnarray*}
2030 < \left( \begin{array}{l}
2031 < x_{OR}  \\
2032 < y_{OR}  \\
2033 < z_{OR}  \\
2034 < \end{array} \right) & = &\left( {\begin{array}{*{20}c}
2035 <   {(\Xi _O^{rr} )_{yy}  + (\Xi _O^{rr} )_{zz} } & { - (\Xi _O^{rr} )_{xy} } & { - (\Xi _O^{rr} )_{xz} }  \\
2036 <   { - (\Xi _O^{rr} )_{xy} } & {(\Xi _O^{rr} )_{zz}  + (\Xi _O^{rr} )_{xx} } & { - (\Xi _O^{rr} )_{yz} }  \\
2037 <   { - (\Xi _O^{rr} )_{xz} } & { - (\Xi _O^{rr} )_{yz} } & {(\Xi _O^{rr} )_{xx}  + (\Xi _O^{rr} )_{yy} }  \\
2038 < \end{array}} \right)^{ - 1}  \\
2039 <  & & \left( \begin{array}{l}
2040 < (\Xi _O^{tr} )_{yz}  - (\Xi _O^{tr} )_{zy}  \\
2041 < (\Xi _O^{tr} )_{zx}  - (\Xi _O^{tr} )_{xz}  \\
2042 < (\Xi _O^{tr} )_{xy}  - (\Xi _O^{tr} )_{yx}  \\
2043 < \end{array} \right) \\
2044 < \end{eqnarray*}
2045 <
2046 <
2047 <
2048 < where $x_{OR}$, $y_{OR}$, $z_{OR}$ are the components of the vector
2049 < joining the center of resistance $R$ and the origin $O$.
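
The linear system above can likewise be solved numerically. The following minimal sketch (an illustration only, with hypothetical names) locates the center of resistance from the rotational and coupling blocks evaluated at an arbitrary origin $O$.
\begin{verbatim}
import numpy as np

def center_of_resistance(Xi_rr_O, Xi_tr_O):
    """Solve for the vector (x_OR, y_OR, z_OR) from the origin O to the
    center of resistance, using the linear system above."""
    A = np.array([
        [Xi_rr_O[1, 1] + Xi_rr_O[2, 2], -Xi_rr_O[0, 1], -Xi_rr_O[0, 2]],
        [-Xi_rr_O[0, 1], Xi_rr_O[2, 2] + Xi_rr_O[0, 0], -Xi_rr_O[1, 2]],
        [-Xi_rr_O[0, 2], -Xi_rr_O[1, 2], Xi_rr_O[0, 0] + Xi_rr_O[1, 1]],
    ])
    b = np.array([Xi_tr_O[1, 2] - Xi_tr_O[2, 1],
                  Xi_tr_O[2, 0] - Xi_tr_O[0, 2],
                  Xi_tr_O[0, 1] - Xi_tr_O[1, 0]])
    return np.linalg.solve(A, b)

# Example with blocks from the bead-model sketch given earlier:
# r_OR = center_of_resistance(Xi_rr, Xi_tr)
\end{verbatim}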
1684 > which acts as a constraint on the possible ways in which one can
1685 > model the random force and friction kernel.
