--- trunk/tengDissertation/Introduction.tex 2006/04/10 05:35:55 2699 +++ trunk/tengDissertation/Introduction.tex 2006/07/18 16:42:49 2950 @@ -3,38 +3,38 @@ Closely related to Classical Mechanics, Molecular Dyna \section{\label{introSection:classicalMechanics}Classical Mechanics} -Closely related to Classical Mechanics, Molecular Dynamics -simulations are carried out by integrating the equations of motion -for a given system of particles. There are three fundamental ideas -behind classical mechanics. Firstly, One can determine the state of -a mechanical system at any time of interest; Secondly, all the -mechanical properties of the system at that time can be determined -by combining the knowledge of the properties of the system with the -specification of this state; Finally, the specification of the state -when further combine with the laws of mechanics will also be -sufficient to predict the future behavior of the system. +Molecular Dynamics simulations are carried out by integrating the +equations of motion, derived from Classical Mechanics, for a given +system of particles. There are three fundamental ideas behind +classical mechanics. Firstly, one can determine the state of a +mechanical system at any time of interest; secondly, all the +mechanical properties of the system at that time can be determined +by combining the knowledge of the properties of the system with the +specification of this state; finally, the specification of the +state, when further combined with the laws of mechanics, is also +sufficient to predict the future behavior of the system. \subsection{\label{introSection:newtonian}Newtonian Mechanics} The discovery of Newton's three laws of mechanics which govern the motion of particles is the foundation of classical mechanics. -Newton¡¯s first law defines a class of inertial frames. Inertial +Newton's first law defines a class of inertial frames.
Inertial frames are reference frames where a particle not interacting with other bodies will move with constant speed in the same direction. -With respect to inertial frames Newton¡¯s second law has the form +With respect to inertial frames, Newton's second law has the form \begin{equation} -F = \frac {dp}{dt} = \frac {mv}{dt} +F = \frac {dp}{dt} = m\frac {dv}{dt} \label{introEquation:newtonSecondLaw} \end{equation} A point mass interacting with other bodies moves with an acceleration along the direction of the force acting on it. Let -$F_ij$ be the force that particle $i$ exerts on particle $j$, and -$F_ji$ be the force that particle $j$ exerts on particle $i$. -Newton¡¯s third law states that +$F_{ij}$ be the force that particle $i$ exerts on particle $j$, and +$F_{ji}$ be the force that particle $j$ exerts on particle $i$. +Newton's third law states that \begin{equation} -F_ij = -F_ji +F_{ij} = -F_{ji}. \label{introEquation:newtonThirdLaw} \end{equation} - Conservation laws of Newtonian Mechanics play very important roles in solving mechanics problems. The linear momentum of a particle is conserved if it is free or it experiences no force. The second @@ -46,7 +46,7 @@ N \equiv r \times F \label{introEquation:torqueDefinit \end{equation} The torque $\tau$ with respect to the same origin is defined to be \begin{equation} -N \equiv r \times F \label{introEquation:torqueDefinition} +\tau \equiv r \times F \label{introEquation:torqueDefinition} \end{equation} Differentiating Eq.~\ref{introEquation:angularMomentumDefinition}, \[ @@ -59,66 +59,60 @@ thus, \] thus, \begin{equation} -\dot L = r \times \dot p = N +\dot L = r \times \dot p = \tau \end{equation} If there are no external torques acting on a body, its angular momentum is conserved.
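The vector identities above lend themselves to a quick numerical sanity check. The sketch below is illustrative only (the mass, position, velocity, and force values are arbitrary assumptions, not data from the text); it verifies that $\dot L = r \times \dot p$ equals the torque $\tau = r \times F$:

```python
import numpy as np

# Arbitrary illustrative values (not from the text): a point mass with
# position r, velocity v, and an applied force F.
m = 2.0
r = np.array([1.0, 0.5, 0.0])
v = np.array([0.0, 1.0, 0.3])
F = np.array([-3.0, 0.5, 1.0])

p = m * v                   # linear momentum
L = np.cross(r, p)          # angular momentum L = r x p
tau = np.cross(r, F)        # torque tau = r x F

# dL/dt = (dr/dt x p) + (r x dp/dt) = (v x m v) + (r x F) = r x F,
# since v x v = 0 and dp/dt = F by Newton's second law.
dL_dt = np.cross(v, p) + np.cross(r, F)
assert np.allclose(dL_dt, tau)
```

The first term vanishes identically, which is exactly the step used in the differentiation of the angular momentum above.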
The last conservation theorem states -that if all forces are conservative, Energy -\begin{equation}E = T + V \label{introEquation:energyConservation} +that if all forces are conservative, energy is conserved, +\begin{equation}E = T + V. \label{introEquation:energyConservation} \end{equation} - is conserved. All of these conserved quantities are -important factors to determine the quality of numerical integration -scheme for rigid body \cite{Dullweber1997}. +All of these conserved quantities are important factors to determine +the quality of numerical integration schemes for rigid +bodies.\cite{Dullweber1997} \subsection{\label{introSection:lagrangian}Lagrangian Mechanics} -Newtonian Mechanics suffers from two important limitations: it -describes their motion in special cartesian coordinate systems. -Another limitation of Newtonian mechanics becomes obvious when we -try to describe systems with large numbers of particles. It becomes -very difficult to predict the properties of the system by carrying -out calculations involving the each individual interaction between -all the particles, even if we know all of the details of the -interaction. In order to overcome some of the practical difficulties -which arise in attempts to apply Newton's equation to complex -system, alternative procedures may be developed. +Newtonian Mechanics suffers from an important limitation: motion can +only be described in cartesian coordinate systems, which makes it +impossible to predict analytically the properties of the system even +if we know all of the details of the interaction. In order to +overcome some of the practical difficulties which arise in attempts +to apply Newton's equations to complex systems, approximate +numerical procedures may be developed.
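As a quick numerical illustration of the energy conservation stated above (a sketch assuming a unit-mass, unit-frequency harmonic oscillator, which is a toy model chosen here and not a system from the text), $E = T + V$ evaluated along the exact trajectory is constant in time:

```python
import numpy as np

# Exact trajectory of q'' = -q with q(0) = 1, q'(0) = 0 (toy model).
t = np.linspace(0.0, 20.0, 2001)
q = np.cos(t)                   # position
p = -np.sin(t)                  # momentum (unit mass)

E = 0.5 * p**2 + 0.5 * q**2     # E = T + V
assert np.allclose(E, 0.5)      # constant along the whole trajectory
```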
-\subsubsection{\label{introSection:halmiltonPrinciple}Hamilton's -Principle} +\subsubsection{\label{introSection:halmiltonPrinciple}\textbf{Hamilton's +Principle}} Hamilton introduced the dynamical principle upon which it is -possible to base all of mechanics and, indeed, most of classical -physics. Hamilton's Principle may be stated as follow, - -The actual trajectory, along which a dynamical system may move from -one point to another within a specified time, is derived by finding -the path which minimizes the time integral of the difference between -the kinetic, $K$, and potential energies, $U$ \cite{tolman79}. +possible to base all of mechanics and most of classical physics. +Hamilton's Principle may be stated as follows: the trajectory, along +which a dynamical system may move from one point to another within a +specified time, is derived by finding the path which minimizes the +time integral of the difference between the kinetic energy, $K$, and +the potential energy, $U$, \begin{equation} -\delta \int_{t_1 }^{t_2 } {(K - U)dt = 0} , +\delta \int_{t_1 }^{t_2 } {(K - U)dt = 0}. \label{introEquation:halmitonianPrinciple1} \end{equation} - For simple mechanical systems, where the forces acting on the -different part are derivable from a potential and the velocities are -small compared with that of light, the Lagrangian function $L$ can -be define as the difference between the kinetic energy of the system -and its potential energy, +different parts are derivable from a potential, the Lagrangian +function $L$ can be defined as the difference between the kinetic +energy of the system and its potential energy, \begin{equation} -L \equiv K - U = L(q_i ,\dot q_i ) , +L \equiv K - U = L(q_i ,\dot q_i ). \label{introEquation:lagrangianDef} \end{equation} -then Eq.~\ref{introEquation:halmitonianPrinciple1} becomes +Thus, Eq.~\ref{introEquation:halmitonianPrinciple1} becomes \begin{equation} -\delta \int_{t_1 }^{t_2 } {L dt = 0} , +\delta \int_{t_1 }^{t_2 } {L dt = 0} .
\label{introEquation:halmitonianPrinciple2} \end{equation} -\subsubsection{\label{introSection:equationOfMotionLagrangian}The -Equations of Motion in Lagrangian Mechanics} +\subsubsection{\label{introSection:equationOfMotionLagrangian}\textbf{The +Equations of Motion in Lagrangian Mechanics}} -for a holonomic system of $f$ degrees of freedom, the equations of -motion in the Lagrangian form is +For a system of $f$ degrees of freedom, the equations of motion in +the Lagrangian form are \begin{equation} \frac{d}{{dt}}\frac{{\partial L}}{{\partial \dot q_i }} - \frac{{\partial L}}{{\partial q_i }} = 0,{\rm{ }}i = 1, \ldots,f @@ -132,8 +126,7 @@ independent of generalized velocities, the generalized Arising from Lagrangian Mechanics, Hamiltonian Mechanics was introduced by William Rowan Hamilton in 1833 as a re-formulation of classical mechanics. If the potential energy of a system is -independent of generalized velocities, the generalized momenta can -be defined as +independent of velocities, the momenta can be defined as \begin{equation} p_i = \frac{\partial L}{\partial \dot q_i} \label{introEquation:generalizedMomenta} @@ -143,7 +136,6 @@ p_i = \frac{{\partial L}}{{\partial q_i }} \dot p_i = \frac{{\partial L}}{{\partial q_i }} \label{introEquation:generalizedMomentaDot} \end{equation} - With the help of the generalized momenta, we may now define a new quantity $H$ by the equation \begin{equation} @@ -151,32 +143,30 @@ $L$ is the Lagrangian function for the system. \label{introEquation:hamiltonianDefByLagrangian} \end{equation} where $ \dot q_1 \ldots \dot q_f $ are generalized velocities and -$L$ is the Lagrangian function for the system. - -Differentiating Eq.~\ref{introEquation:hamiltonianDefByLagrangian}, -one can obtain +$L$ is the Lagrangian function for the system.
Differentiating +Eq.~\ref{introEquation:hamiltonianDefByLagrangian}, one can obtain \begin{equation} dH = \sum\limits_k {\left( {p_k d\dot q_k + \dot q_k dp_k - \frac{{\partial L}}{{\partial q_k }}dq_k - \frac{{\partial L}}{{\partial \dot q_k }}d\dot q_k } \right)} - \frac{{\partial -L}}{{\partial t}}dt \label{introEquation:diffHamiltonian1} +L}}{{\partial t}}dt . \label{introEquation:diffHamiltonian1} \end{equation} -Making use of Eq.~\ref{introEquation:generalizedMomenta}, the -second and fourth terms in the parentheses cancel. Therefore, +Making use of Eq.~\ref{introEquation:generalizedMomenta}, the first +and fourth terms in the parentheses cancel. Therefore, Eq.~\ref{introEquation:diffHamiltonian1} can be rewritten as \begin{equation} dH = \sum\limits_k {\left( {\dot q_k dp_k - \dot p_k dq_k } -\right)} - \frac{{\partial L}}{{\partial t}}dt +\right)} - \frac{{\partial L}}{{\partial t}}dt . \label{introEquation:diffHamiltonian2} \end{equation} By identifying the coefficients of $dq_k$, $dp_k$ and $dt$, we can find \begin{equation} -\frac{{\partial H}}{{\partial p_k }} = q_k +\frac{{\partial H}}{{\partial p_k }} = \dot q_k \label{introEquation:motionHamiltonianCoordinate} \end{equation} \begin{equation} -\frac{{\partial H}}{{\partial q_k }} = - p_k +\frac{{\partial H}}{{\partial q_k }} = - \dot p_k \label{introEquation:motionHamiltonianMomentum} \end{equation} and @@ -185,34 +175,31 @@ t}} t}} \label{introEquation:motionHamiltonianTime} \end{equation} - -Eq.~\ref{introEquation:motionHamiltonianCoordinate} and +where Eq.~\ref{introEquation:motionHamiltonianCoordinate} and Eq.~\ref{introEquation:motionHamiltonianMomentum} are Hamilton's equations of motion. Due to their symmetric form, they are also -known as the canonical equations of motions \cite{Goldstein01}.
+known as the canonical equations of motion.\cite{Goldstein2001} An important difference between the Lagrangian approach and the Hamiltonian approach is that the Lagrangian is considered to be a -function of the generalized velocities $\dot q_i$ and the -generalized coordinates $q_i$, while the Hamiltonian is considered -to be a function of the generalized momenta $p_i$ and the conjugate -generalized coordinate $q_i$. Hamiltonian Mechanics is more -appropriate for application to statistical mechanics and quantum -mechanics, since it treats the coordinate and its time derivative as -independent variables and it only works with 1st-order differential -equations\cite{Marion90}. - +function of the generalized velocities $\dot q_i$ and coordinates +$q_i$, while the Hamiltonian is considered to be a function of the +generalized momenta $p_i$ and the conjugate coordinates $q_i$. +Hamiltonian Mechanics is more appropriate for application to +statistical mechanics and quantum mechanics, since it treats the +coordinates and momenta as independent variables and it only works +with first-order differential equations.\cite{Marion1990} In Newtonian Mechanics, a system described by conservative forces -conserves the total energy \ref{introEquation:energyConservation}. -It follows that Hamilton's equations of motion conserve the total -Hamiltonian. +conserves the total energy +(Eq.~\ref{introEquation:energyConservation}). It follows that +Hamilton's equations of motion conserve the total Hamiltonian \begin{equation} \frac{{dH}}{{dt}} = \sum\limits_i {\left( {\frac{{\partial H}}{{\partial q_i }}\dot q_i + \frac{{\partial H}}{{\partial p_i }}\dot p_i } \right)} = \sum\limits_i {\left( {\frac{{\partial H}}{{\partial q_i }}\frac{{\partial H}}{{\partial p_i }} - \frac{{\partial H}}{{\partial p_i }}\frac{{\partial H}}{{\partial -q_i }}} \right) = 0} \label{introEquation:conserveHalmitonian} +q_i }}} \right) = 0}.
\label{introEquation:conserveHalmitonian} \end{equation} \section{\label{introSection:statisticalMechanics}Statistical Mechanics} @@ -221,101 +208,286 @@ Statistical Mechanics concepts presented in this disse The thermodynamic behaviors and properties of Molecular Dynamics simulation are governed by the principles of Statistical Mechanics. The following section will give a brief introduction to some of the -Statistical Mechanics concepts presented in this dissertation. +Statistical Mechanics concepts and theorems presented in this +dissertation. -\subsection{\label{introSection:ensemble}Ensemble and Phase Space} +\subsection{\label{introSection:ensemble}Phase Space and Ensemble} -\subsection{\label{introSection:ergodic}The Ergodic Hypothesis} +Mathematically, phase space is the space which represents all +possible states of a system. Each possible state of the system +corresponds to one unique point in the phase space. For mechanical +systems, the phase space usually consists of all possible values of +position and momentum variables. Consider a dynamical system of $f$ +particles in a cartesian space, where each of the $6f$ coordinates +and momenta is assigned to one of $6f$ mutually orthogonal axes; the +phase space of this system is then a $6f$-dimensional space. A +point, $x = (\vec q_1 , \ldots ,\vec q_f ,\vec p_1 , \ldots ,\vec p_f )$, +with a unique set of values of the $6f$ coordinates and momenta, is +a phase space vector. +%%%fix me -Various thermodynamic properties can be calculated from Molecular -Dynamics simulation. By comparing experimental values with the -calculated properties, one can determine the accuracy of the -simulation and the quality of the underlying model.
However, both of -experiment and computer simulation are usually performed during a -certain time interval and the measurements are averaged over a -period of them which is different from the average behavior of -many-body system in Statistical Mechanics. Fortunately, Ergodic -Hypothesis is proposed to make a connection between time average and -ensemble average. It states that time average and average over the -statistical ensemble are identical \cite{Frenkel1996, leach01:mm}. +In statistical mechanics, the condition of an ensemble at any time +can be regarded as appropriately specified by the density $\rho$ +with which representative points are distributed over the phase +space. The density distribution for an ensemble with $f$ degrees of +freedom is defined as, \begin{equation} -\langle A \rangle_t = \mathop {\lim }\limits_{t \to \infty } -\frac{1}{t}\int\limits_0^t {A(p(t),q(t))dt = \int\limits_\Gamma -{A(p(t),q(t))} } \rho (p(t), q(t)) dpdq +\rho = \rho (q_1 , \ldots ,q_f ,p_1 , \ldots ,p_f ,t). +\label{introEquation:densityDistribution} \end{equation} -where $\langle A \rangle_t$ is an equilibrium value of a physical -quantity and $\rho (p(t), q(t))$ is the equilibrium distribution -function. If an observation is averaged over a sufficiently long -time (longer than relaxation time), all accessible microstates in -phase space are assumed to be equally probed, giving a properly -weighted statistical average. This allows the researcher freedom of -choice when deciding how best to measure a given observable. In case -an ensemble averaged approach sounds most reasonable, the Monte -Carlo techniques\cite{metropolis:1949} can be utilized. Or if the -system lends itself to a time averaging approach, the Molecular -Dynamics techniques in Sec.~\ref{introSection:molecularDynamics} -will be the best choice\cite{Frenkel1996}. +Governed by the principles of mechanics, the phase points change +their locations, and the density at any point in phase space +therefore changes with time.
Hence, the density distribution is also to be taken as a +function of time. The number of systems $\delta N$ within a volume +element of phase space at time $t$ can be determined by, +\begin{equation} +\delta N = \rho (q,p,t)dq_1 \ldots dq_f dp_1 \ldots dp_f. +\label{introEquation:deltaN} +\end{equation} +Assuming enough copies of the system, we can sufficiently +approximate $\delta N$ without introducing discontinuity when we go +from one region in the phase space to another. By integrating over +the whole phase space, +\begin{equation} +N = \int { \ldots \int {\rho (q,p,t)dq_1 } ...dq_f dp_1 } ...dp_f +\label{introEquation:totalNumberSystem} +\end{equation} +gives us an expression for the total number of copies. Hence, the +probability per unit volume in the phase space can be obtained by, +\begin{equation} +\frac{{\rho (q,p,t)}}{N} = \frac{{\rho (q,p,t)}}{{\int { \ldots \int +{\rho (q,p,t)dq_1 } ...dq_f dp_1 } ...dp_f }}. +\label{introEquation:unitProbability} +\end{equation} +With the help of Eq.~\ref{introEquation:unitProbability} and the +knowledge of the system, it is possible to calculate the average +value of any desired quantity which depends on the coordinates and +momenta of the system. Even when the dynamics of the real system are +complex, or stochastic, or even discontinuous, the average +properties of the ensemble of possibilities as a whole remain well +defined. For a classical system in thermal equilibrium with its +environment, the ensemble average of a mechanical quantity, $\langle +A(q , p) \rangle_t$, takes the form of an integral over the phase +space of the system, +\begin{equation} +\langle A(q , p) \rangle_t = \frac{{\int { \ldots \int {A(q,p)\rho +(q,p,t)dq_1 } ...dq_f dp_1 } ...dp_f }}{{\int { \ldots \int {\rho +(q,p,t)dq_1 } ...dq_f dp_1 } ...dp_f }}. +\label{introEquation:ensembelAverage} +\end{equation} +\subsection{\label{introSection:liouville}Liouville's Theorem} + +Liouville's theorem is the foundation on which statistical mechanics +rests.
It describes the time evolution of the phase space +distribution function. In order to calculate the rate of change of +$\rho$, we begin from Eq.~\ref{introEquation:deltaN}. If we consider +the two faces perpendicular to the $q_1$ axis, which are located at +$q_1$ and $q_1 + \delta q_1$, the number of phase points per unit +time leaving the volume element through the face at $q_1 + \delta q_1$ +is given by the expression, +\begin{equation} +\left( {\rho + \frac{{\partial \rho }}{{\partial q_1 }}\delta q_1 } +\right)\left( {\dot q_1 + \frac{{\partial \dot q_1 }}{{\partial q_1 +}}\delta q_1 } \right)\delta q_2 \ldots \delta q_f \delta p_1 +\ldots \delta p_f . +\end{equation} +Summing the net flow over the pairs of faces along all $f$ +coordinates and all $f$ momenta, we obtain +\begin{equation} +\frac{{d(\delta N)}}{{dt}} = - \sum\limits_{i = 1}^f {\left[ {\rho +\left( {\frac{{\partial \dot q_i }}{{\partial q_i }} + +\frac{{\partial \dot p_i }}{{\partial p_i }}} \right) + \left( +{\frac{{\partial \rho }}{{\partial q_i }}\dot q_i + \frac{{\partial +\rho }}{{\partial p_i }}\dot p_i } \right)} \right]} \delta q_1 +\ldots \delta q_f \delta p_1 \ldots \delta p_f . +\end{equation} +Differentiating the equations of motion in Hamiltonian formalism +(\ref{introEquation:motionHamiltonianCoordinate}, +\ref{introEquation:motionHamiltonianMomentum}), we can show, +\begin{equation} +\sum\limits_i {\left( {\frac{{\partial \dot q_i }}{{\partial q_i }} ++ \frac{{\partial \dot p_i }}{{\partial p_i }}} \right)} = 0 , +\end{equation} +which cancels the first term on the right hand side. Furthermore, +dividing both sides by $ \delta q_1 \ldots \delta q_f \delta p_1 +\ldots \delta p_f $, we can write out Liouville's theorem in a +simple form, +\begin{equation} +\frac{{\partial \rho }}{{\partial t}} + \sum\limits_{i = 1}^f +{\left( {\frac{{\partial \rho }}{{\partial q_i }}\dot q_i + +\frac{{\partial \rho }}{{\partial p_i }}\dot p_i } \right)} = 0 .
+\label{introEquation:liouvilleTheorem} +\end{equation} +Liouville's theorem states that the distribution function is +constant along any trajectory in phase space. In classical +statistical mechanics, since the number of system copies in an +ensemble is huge and constant, we can assume the local density has +no reason (other than classical mechanics) to change, +\begin{equation} +\frac{{\partial \rho }}{{\partial t}} = 0. +\label{introEquation:stationary} +\end{equation} +In such a stationary system, the distribution density $\rho$ can be +connected to the Hamiltonian $H$ through the Maxwell-Boltzmann +distribution, +\begin{equation} +\rho \propto e^{ - \beta H}. +\label{introEquation:densityAndHamiltonian} +\end{equation} + +\subsubsection{\label{introSection:phaseSpaceConservation}\textbf{Conservation of Phase Space}} +Let us consider a region in the phase space, +\begin{equation} +\delta v = \int { \ldots \int {dq_1 } ...dq_f dp_1 } ...dp_f . +\end{equation} +If this region is small enough, the density $\rho$ can be regarded +as uniform over the whole region. Thus, the number of phase points +inside this region is given by, +\begin{eqnarray} +\delta N &=& \rho \delta v = \rho \int { \ldots \int {dq_1 } ...dq_f dp_1 } ...dp_f,\\ +\frac{{d(\delta N)}}{{dt}} &=& \frac{{d\rho }}{{dt}}\delta v + \rho +\frac{d}{{dt}}(\delta v) = 0. +\end{eqnarray} +With the help of the stationary assumption +(Eq.~\ref{introEquation:stationary}), we obtain the principle of +\emph{conservation of volume in phase space}, +\begin{equation} +\frac{d}{{dt}}(\delta v) = \frac{d}{{dt}}\int { \ldots \int {dq_1 } +...dq_f dp_1 } ...dp_f = 0. +\label{introEquation:volumePreserving} +\end{equation} + +\subsubsection{\label{introSection:liouvilleInOtherForms}\textbf{Liouville's Theorem in Other Forms}} + +Liouville's theorem can be expressed in a variety of different forms +which are convenient within different contexts.
For any two functions +$F$ and $G$ of the coordinates and momenta of a system, the Poisson +bracket $\{F,G\}$ is defined as +\begin{equation} +\left\{ {F,G} \right\} = \sum\limits_i {\left( {\frac{{\partial +F}}{{\partial q_i }}\frac{{\partial G}}{{\partial p_i }} - +\frac{{\partial F}}{{\partial p_i }}\frac{{\partial G}}{{\partial +q_i }}} \right)}. +\label{introEquation:poissonBracket} +\end{equation} +Substituting the equations of motion in Hamiltonian formalism +(Eq.~\ref{introEquation:motionHamiltonianCoordinate} and +Eq.~\ref{introEquation:motionHamiltonianMomentum}) into +Eq.~\ref{introEquation:liouvilleTheorem}, we can rewrite +Liouville's theorem using Poisson bracket notation, +\begin{equation} +\left( {\frac{{\partial \rho }}{{\partial t}}} \right) = - \left\{ +{\rho ,H} \right\}. +\label{introEquation:liouvilleTheromInPoissin} +\end{equation} +Moreover, the Liouville operator is defined as +\begin{equation} +iL = \sum\limits_{i = 1}^f {\left( {\frac{{\partial H}}{{\partial +p_i }}\frac{\partial }{{\partial q_i }} - \frac{{\partial +H}}{{\partial q_i }}\frac{\partial }{{\partial p_i }}} \right)} . +\label{introEquation:liouvilleOperator} +\end{equation} +In terms of the Liouville operator, Liouville's equation can also be +expressed as +\begin{equation} +\left( {\frac{{\partial \rho }}{{\partial t}}} \right) = - iL\rho , +\label{introEquation:liouvilleTheoremInOperator} +\end{equation} +which can help define a propagator $\rho (t) = e^{-iLt} \rho (0)$. +\subsection{\label{introSection:ergodic}The Ergodic Hypothesis} + +Various thermodynamic properties can be calculated from Molecular +Dynamics simulation. By comparing experimental values with the +calculated properties, one can determine the accuracy of the +simulation and the quality of the underlying model.
However, both +experiments and computer simulations are usually performed over a +certain time interval, and the measurements are averaged over a +period of time; this time average is not obviously the same as the +ensemble average over many copies of the system used in Statistical +Mechanics. Fortunately, the Ergodic Hypothesis makes a connection +between the time average and the ensemble average. It states that +the time average and the average over the statistical ensemble are +identical:\cite{Frenkel1996, Leach2001} +\begin{equation} +\langle A(q , p) \rangle_t = \mathop {\lim }\limits_{t \to \infty } +\frac{1}{t}\int\limits_0^t {A(q(t),p(t))dt} = \int\limits_\Gamma +{A(q,p)\rho (q, p) dqdp} +\end{equation} +where $\langle A(q , p) \rangle_t$ is an equilibrium value of a +physical quantity and $\rho (q, p)$ is the equilibrium +distribution function. If an observation is averaged over a +sufficiently long time (longer than the relaxation time), all +accessible microstates in phase space are assumed to be equally +probed, giving a properly weighted statistical average. This allows +the researcher freedom of choice when deciding how best to measure a +given observable. If an ensemble-averaged approach is most +reasonable, the Monte Carlo methods\cite{Metropolis1949} can be +utilized; if instead the system lends itself to a time-averaging +approach, the Molecular Dynamics techniques in +Sec.~\ref{introSection:molecularDynamics} will be the best +choice.\cite{Frenkel1996} \section{\label{introSection:geometricIntegratos}Geometric Integrators} -A variety of numerical integrators were proposed to simulate the -motions. They usually begin with an initial conditionals and move -the objects in the direction governed by the differential equations. -However, most of them ignore the hidden physical law contained -within the equations.
Since 1990, geometric integrators, which -preserve various phase-flow invariants such as symplectic structure, -volume and time reversal symmetry, are developed to address this -issue. The velocity verlet method, which happens to be a simple -example of symplectic integrator, continues to gain its popularity -in molecular dynamics community. This fact can be partly explained -by its geometric nature. +A variety of numerical integrators have been proposed to simulate +the motions of atoms in MD simulations. They usually begin with +initial conditions and move the objects in the direction governed by +the differential equations. However, most of them ignore the hidden +physical laws contained within the equations. Since 1990, geometric +integrators, which preserve various phase-flow invariants such as +symplectic structure, volume and time reversal symmetry, have been +developed to address this issue.\cite{Dullweber1997, McLachlan1998, +Leimkuhler1999} The velocity Verlet method, which happens to be a +simple example of a symplectic integrator, continues to gain +popularity in the molecular dynamics community. This fact can be +partly explained by its geometric nature. -\subsection{\label{introSection:symplecticManifold}Symplectic Manifold} -A \emph{manifold} is an abstract mathematical space. It locally -looks like Euclidean space, but when viewed globally, it may have -more complicate structure. A good example of manifold is the surface -of Earth. It seems to be flat locally, but it is round if viewed as -a whole. A \emph{differentiable manifold} (also known as -\emph{smooth manifold}) is a manifold with an open cover in which -the covering neighborhoods are all smoothly isomorphic to one -another. In other words,it is possible to apply calculus on -\emph{differentiable manifold}.
A \emph{symplectic manifold} is -defined as a pair $(M, \omega)$ which consisting of a -\emph{differentiable manifold} $M$ and a close, non-degenerated, +\subsection{\label{introSection:symplecticManifold}Manifolds and Bundles} +A \emph{manifold} is an abstract mathematical space. It looks +locally like Euclidean space, but when viewed globally, it may have +more complicated structure. A good example of a manifold is the +surface of Earth. It seems to be flat locally, but it is round if +viewed as a whole. A \emph{differentiable manifold} (also known as +\emph{smooth manifold}) is a manifold on which it is possible to +apply calculus.\cite{Hirsch1997} A \emph{symplectic manifold} is +defined as a pair $(M, \omega)$ which consists of a +\emph{differentiable manifold} $M$ and a closed, non-degenerate, bilinear symplectic form, $\omega$. A symplectic form on a vector space $V$ is a function $\omega(x, y)$ which satisfies $\omega(\lambda_1x_1+\lambda_2x_2, y) = \lambda_1\omega(x_1, y)+ \lambda_2\omega(x_2, y)$, $\omega(x, y) = - \omega(y, x)$ and -$\omega(x, x) = 0$. Cross product operation in vector field is an -example of symplectic form. +$\omega(x, x) = 0$.\cite{McDuff1998} The cross product operation in +vector fields is an example of a symplectic form. +Given vector spaces $V$ and $W$ over the same field $F$, $f: V \to W$ is a linear transformation if +\begin{eqnarray*} +f(x+y) & = & f(x) + f(y) \\ +f(ax) & = & af(x) +\end{eqnarray*} +are always satisfied for any two vectors $x$ and $y$ in $V$ and any scalar $a$ in $F$. One can define the dual vector space $V^*$ of $V$ as the space of all linear transformations $\phi: V \to F$, where the addition and scalar multiplication of any two elements $\phi$ and $\psi$ in $V^*$ are defined by: +\begin{eqnarray*} +(\phi+\psi)(x) & = & \phi(x)+\psi(x) \\ +(a\phi)(x) & = & a \phi(x) +\end{eqnarray*} +for all $a$ in $F$ and $x$ in $V$.
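The three defining properties of a symplectic form can be checked concretely. The sketch below assumes the standard example $\omega(x, y) = x^{T} J y$ on $\mathbb{R}^2$, with $J$ the canonical skew-symmetric matrix; this concrete choice is made here for illustration and is not taken from the text:

```python
import numpy as np

# Canonical skew-symmetric matrix on R^2.
J = np.array([[0.0, 1.0],
              [-1.0, 0.0]])

def omega(x, y):
    """Candidate symplectic form omega(x, y) = x^T J y."""
    return x @ J @ y

rng = np.random.default_rng(0)
x1, x2, y = rng.standard_normal((3, 2))
l1, l2 = 0.7, -1.3

# bilinearity in the first argument
assert np.isclose(omega(l1 * x1 + l2 * x2, y),
                  l1 * omega(x1, y) + l2 * omega(x2, y))
# antisymmetry, and omega(x, x) = 0
assert np.isclose(omega(x1, y), -omega(y, x1))
assert np.isclose(omega(x1, x1), 0.0)
```

Non-degeneracy also holds here, since $J$ is invertible.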
For a manifold $M$, one can define a tangent vector of the tangent space $TM_q$ at every point $q$ +\begin{equation} +\dot q = \mathop {\lim }\limits_{t \to 0} \frac{{\phi (t) - \phi (0)}}{t} +\end{equation} +where $\phi(0)=q$ and $\phi(t) \in M$. One may also define a cotangent space $T^*M_q$ as the dual space of the tangent space $TM_q$. The tangent space and the cotangent space are isomorphic to each other, since they are both real vector spaces with the same dimension. +The union of tangent spaces at every point of $M$ is called the tangent bundle of $M$ and is denoted by $TM$, while the cotangent bundle $T^*M$ is defined as the union of the cotangent spaces to $M$.\cite{Jost2002} For a Hamiltonian system with configuration manifold $V$, the $(q,\dot q)$ phase space is the tangent bundle of the configuration manifold $V$, while the cotangent bundle is represented by $(q,p)$. -One of the motivations to study \emph{symplectic manifold} in -Hamiltonian Mechanics is that a symplectic manifold can represent -all possible configurations of the system and the phase space of the -system can be described by it's cotangent bundle. Every symplectic -manifold is even dimensional. For instance, in Hamilton equations, -coordinate and momentum always appear in pairs. - -Let $(M,\omega)$ and $(N, \eta)$ be symplectic manifolds. A map -\[ -f : M \rightarrow N -\] -is a \emph{symplectomorphism} if it is a \emph{diffeomorphims} and -the \emph{pullback} of $\eta$ under f is equal to $\omega$. -Canonical transformation is an example of symplectomorphism in -classical mechanics. - \subsection{\label{introSection:ODE}Ordinary Differential Equations} -For a ordinary differential system defined as +For an ordinary differential system defined as \begin{equation} \dot x = f(x) -\end{equation} -where $x = x(q,p)^T$, this system is canonical Hamiltonian, if -\begin{equation} -f(r) = J\nabla _x H(r).
\end{equation} -$H = H (q, p)$ is Hamiltonian function and $J$ is the skew-symmetric -matrix +where $x = x(q,p)$, this system is a canonical Hamiltonian system if +$f(x) = J\nabla _x H(x)$. Here, $H = H (q, p)$ is the Hamiltonian +function and $J$ is the skew-symmetric matrix \begin{equation} J = \left( {\begin{array}{*{20}c} 0 & I \\ @@ -326,85 +498,119 @@ system can be rewritten as, where $I$ is an identity matrix. Using this notation, the Hamiltonian system can be rewritten as, \begin{equation} -\frac{d}{{dt}}x = J\nabla _x H(x) +\frac{d}{{dt}}x = J\nabla _x H(x). \label{introEquation:compactHamiltonian} \end{equation}In this case, $f$ is -called a \emph{Hamiltonian vector field}. - -Another generalization of Hamiltonian dynamics is Poisson Dynamics, +called a \emph{Hamiltonian vector field}. Another generalization of +Hamiltonian dynamics is Poisson Dynamics,\cite{Olver1986} \begin{equation} \dot x = J(x)\nabla _x H \label{introEquation:poissonHamiltonian} \end{equation} -The most obvious change being that matrix $J$ now depends on $x$. -The free rigid body is an example of Poisson system (actually a -Lie-Poisson system) with Hamiltonian function of angular kinetic -energy. +where the most obvious change is that the matrix $J$ now depends on +$x$. + +\subsection{\label{introSection:exactFlow}Exact Propagator} + +Let $x(t)$ be the exact solution of the ODE +system, \begin{equation} -J(\pi ) = \left( {\begin{array}{*{20}c} - 0 & {\pi _3 } & { - \pi _2 } \\ - { - \pi _3 } & 0 & {\pi _1 } \\ - {\pi _2 } & { - \pi _1 } & 0 \\ -\end{array}} \right) +\frac{{dx}}{{dt}} = f(x), \label{introEquation:ODE} +\end{equation} we can +define its exact propagator $\varphi_\tau$: +\[ x(t+\tau) +=\varphi_\tau(x(t)) +\] +where $\tau$ is a fixed time step and $\varphi$ is a map from phase +space to itself. The propagator has the continuous group property, +\begin{equation} +\varphi _{\tau _1 } \circ \varphi _{\tau _2 } = \varphi _{\tau _1 ++ \tau _2 } .
\end{equation}
In particular,
\begin{equation}
\varphi _\tau \circ \varphi _{ - \tau } = I .
\end{equation}
Therefore, the exact propagator is self-adjoint,
\begin{equation}
\varphi _\tau = \varphi _{ - \tau }^{ - 1}.
\end{equation}
In most cases, it is not easy to find the exact propagator
$\varphi_\tau$. Instead, we use an approximate map, $\psi_\tau$,
which is usually called an integrator. The order of an integrator
$\psi_\tau$ is $p$ if the Taylor series of $\psi_\tau$ agrees with
that of the exact solution to order $p$,
\begin{equation}
\psi_\tau(x) = x + \tau f(x) + O(\tau^{p+1})
\end{equation}

\subsection{\label{introSection:geometricProperties}Geometric Properties}

The hidden geometric properties\cite{Budd1999, Marsden1998} of an
ODE and its propagator play important roles in numerical studies.
Many of them can be found in systems which occur naturally in
applications. Let $\varphi$ be the propagator of a Hamiltonian
vector field; $\varphi$ is a \emph{symplectic} propagator if it
satisfies,
\begin{equation}
{\varphi '}^T J \varphi ' = J.
\end{equation}
According to Liouville's theorem, the symplectic volume is invariant
under a Hamiltonian propagator, which is the basis for classical
statistical mechanics. Furthermore, the propagator of a Hamiltonian
vector field on a symplectic manifold can be shown to be a
symplectomorphism. As to the Poisson system,
\begin{equation}
{\varphi '}^T J \varphi ' = J \circ \varphi
\end{equation}
is the property that must be preserved by the integrator. It is
possible to construct a \emph{volume-preserving} propagator for a
source free ODE ($ \nabla \cdot f = 0 $), if the propagator
satisfies $ \det d\varphi = 1$. One can show easily that a
symplectic propagator will be volume-preserving. Changing the
variables $y = h(x)$ in an ODE (Eq.~\ref{introEquation:ODE}) will
result in a new system,
\[
\dot y = \tilde f(y) = ((dh \cdot f)h^{ - 1} )(y).
\]
The vector field $f$ has reversing symmetry $h$ if $f = - \tilde f$.
In other words, the propagator of this vector field is reversible if
and only if $ h \circ \varphi ^{ - 1} = \varphi \circ h $.
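The symplectic condition above can be verified numerically in a
simple case. The sketch below is an illustration added here, not
part of the original derivation: it assumes the harmonic oscillator
$H = (p^2 + q^2)/2$, whose exact propagator is a rotation in phase
space, and checks ${\varphi '}^T J \varphi ' = J$ for its Jacobian.

```python
import numpy as np

# Exact propagator of the harmonic oscillator H = (p^2 + q^2)/2:
# (q, p) evolve by a rotation, so the Jacobian phi' is the rotation matrix.
def propagator_jacobian(tau):
    return np.array([[np.cos(tau),  np.sin(tau)],
                     [-np.sin(tau), np.cos(tau)]])

# Canonical skew-symmetric matrix J
J = np.array([[0.0, 1.0],
              [-1.0, 0.0]])

M = propagator_jacobian(0.37)           # arbitrary time step
assert np.allclose(M.T @ J @ M, J)      # symplectic: phi'^T J phi' = J
assert np.isclose(np.linalg.det(M), 1)  # hence volume-preserving
```

Since a symplectic map has unit Jacobian determinant, the
volume-preserving property falls out of the same check.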
A conserved quantity of a general differential equation is a
function $ G:R^{2d} \to R $ which is constant for all solutions of
the ODE $\frac{{dx}}{{dt}} = f(x)$,
\[
\frac{{dG(x(t))}}{{dt}} = 0.
\]
Using the chain rule, one may obtain,
\[
\sum\limits_i {\frac{{dG}}{{dx_i }}} f_i (x) = f \cdot \nabla G,
\]
which is the condition for conserved quantities. For a canonical
Hamiltonian system, the time evolution of an arbitrary smooth
function $G$ is given by,
\begin{eqnarray}
\frac{{dG(x(t))}}{{dt}} & = & [\nabla _x G(x(t))]^T \dot x(t) \notag\\
 & = & [\nabla _x G(x(t))]^T J\nabla _x H(x(t)).
\label{introEquation:firstIntegral1}
\end{eqnarray}
Using Poisson bracket notation,
Eq.~\ref{introEquation:firstIntegral1} can be rewritten as
\[
\frac{d}{{dt}}G(x(t)) = \left\{ {G,H} \right\}(x(t)).
\]
Therefore, the sufficient condition for $G$ to be a conserved
quantity of a Hamiltonian system is $\left\{ {G,H} \right\} = 0.$ As
is well known, the Hamiltonian (or energy) $H$ of a Hamiltonian
system is a conserved quantity, which is due to the fact that $\{
H,H\} = 0$. When designing any numerical methods, one should always
try to preserve the structural properties of the original ODE and
its propagator.

\subsection{\label{introSection:constructionSymplectic}Construction of Symplectic Methods}

A lot of well established and very effective numerical methods have
been successful precisely because of their symplectic nature, even
though this fact was not recognized when they were first
constructed. The most famous example is the Verlet-leapfrog method
in molecular dynamics. In general, symplectic integrators can be
constructed using one of four different methods.
\begin{enumerate}
\item Generating functions
\item Variational methods
\item Runge-Kutta methods
\item Splitting methods
\end{enumerate}
Generating functions\cite{Channell1990} tend to lead to methods
which are cumbersome and difficult to use. In dissipative systems,
variational methods can capture the decay of energy
accurately.\cite{Kane2000} Since they are geometrically unstable
against non-Hamiltonian perturbations, ordinary implicit Runge-Kutta
methods are not suitable for Hamiltonian
systems.\cite{Cartwright1992} Recently, various high-order explicit
Runge-Kutta methods\cite{Owren1992,Chen2003} have been developed to
overcome this instability. However, due to the computational penalty
involved in implementing the Runge-Kutta methods, they have not
attracted much attention from the Molecular Dynamics community.
Instead, splitting methods have been widely accepted since they
exploit natural decompositions of the system.\cite{McLachlan1998,
Tuckerman1992}
\subsubsection{\label{introSection:splittingMethod}\textbf{Splitting Methods}}

The main idea behind splitting methods is to decompose the discrete
$\varphi_h$ as a composition of simpler propagators,
\begin{equation}
\varphi _h = \varphi _{h_1 } \circ \varphi _{h_2 } \ldots \circ
\varphi _{h_n } \label{introEquation:FlowDecomposition}
\end{equation}
where each of the sub-propagators is chosen such that it represents
a simpler integration of the system. Suppose that a Hamiltonian
system takes the form,
\[
H = H_1 + H_2.
\]
Here, $H_1$ and $H_2$ may represent different physical processes of
the system. For instance, they may relate to the kinetic and
potential energy respectively, which is a natural decomposition of
the problem. If $H_1$ and $H_2$ can be integrated using exact
propagators $\varphi_1(t)$ and $\varphi_2(t)$, respectively, a
simple first order expression is then given by the Lie-Trotter
formula\cite{Trotter1959}
\begin{equation}
\varphi _h = \varphi _{1,h} \circ \varphi _{2,h},
\label{introEquation:firstOrderSplitting}
\end{equation}
where $\varphi _h$ is the result of applying the corresponding
continuous $\varphi _i$ over a time $h$. By definition, as
$\varphi_i(t)$ is the exact solution of a Hamiltonian system, it
must follow that each operator $\varphi_i(t)$ is a symplectic map.
It is easy to show that any composition of symplectic propagators
yields a symplectic map,
\begin{equation}
(\varphi '\phi ')^T J\varphi '\phi ' = \phi '^T \varphi '^T J\varphi
'\phi ' = \phi '^T J\phi ' = J,
\label{introEquation:SymplecticFlowComposition}
\end{equation}
where $\varphi$ and $\phi$ are both symplectic maps. Thus operator
splitting in this context automatically generates a symplectic map.
The Lie-Trotter splitting
(Eq.~\ref{introEquation:firstOrderSplitting}) introduces local
errors proportional to $h^2$, while the Strang splitting gives a
second-order decomposition,\cite{Strang1968}
\begin{equation}
\varphi _h = \varphi _{1,h/2} \circ \varphi _{2,h} \circ \varphi
_{1,h/2} , \label{introEquation:secondOrderSplitting}
\end{equation}
which has a local error proportional to $h^3$. The Strang
splitting's popularity in the molecular simulation community is due
to its symmetric property,
\begin{equation}
\varphi _h^{ - 1} = \varphi _{ - h}.
\label{introEquation:timeReversible}
\end{equation}

\subsubsection{\label{introSection:exampleSplittingMethod}\textbf{Examples of the Splitting Method}}
The classical equations of motion for a system of interacting
particles can be written in Hamiltonian form,
\[
H = T + V
\]
where $T$ is the kinetic energy and $V$ is the potential energy.
Setting $H_1 = T, H_2 = V$ and applying the Strang splitting, one
obtains the following:
\begin{align}
q(\Delta t) &= q(0) + \dot{q}(0)\Delta t +
    \frac{F[q(0)]}{m}\frac{\Delta t^2}{2}, %
\label{introEquation:Lp10a} \\%
%
\dot{q}(\Delta t) &= \dot{q}(0) + \frac{\Delta t}{2m}
    \biggl [F[q(0)] + F[q(\Delta t)] \biggr], %
\label{introEquation:Lp10b}
\end{align}
where $F[q(t)]$ is the force at time $t$. This integration scheme is
known as \emph{velocity verlet}, which is symplectic
(Eq.~\ref{introEquation:SymplecticFlowComposition}), time-reversible
(Eq.~\ref{introEquation:timeReversible}) and volume-preserving
(Eq.~\ref{introEquation:volumePreserving}). These geometric
properties contribute to its long-time stability and its popularity
in the community.
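The scheme of Eqs.~\ref{introEquation:Lp10a}-\ref{introEquation:Lp10b}
can be sketched in a few lines. The harmonic force $F(q) = -q$ and
the reduced units below are assumptions made only for this
illustration; they are not part of the text.

```python
def velocity_verlet(q, v, force, m, dt, nsteps):
    """Strang splitting of H = T + V: advance (q, v) by nsteps steps."""
    f = force(q)
    for _ in range(nsteps):
        q = q + v * dt + f / m * dt**2 / 2   # position update, Eq. Lp10a
        f_new = force(q)                     # forces at the new position
        v = v + dt / (2 * m) * (f + f_new)   # velocity update, Eq. Lp10b
        f = f_new
    return q, v

# Illustrative harmonic oscillator: F(q) = -q, m = 1, E = (v^2 + q^2)/2
q, v = velocity_verlet(1.0, 0.0, lambda q: -q, 1.0, 0.01, 100_000)
assert abs(0.5 * (v**2 + q**2) - 0.5) < 1e-4   # bounded energy error
```

The bounded long-time energy error seen here is the practical
signature of the symplectic and time-reversible structure discussed
above.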
However, the most commonly used
velocity verlet integration scheme is written as below,
\begin{align}
\dot{q}\biggl (\frac{\Delta t}{2}\biggr ) &=
    \dot{q}(0) + \frac{\Delta t}{2m}\, F[q(0)], \label{introEquation:Lp9a}\\%
%
q(\Delta t) &= q(0) + \Delta t\, \dot{q}\biggl (\frac{\Delta t}{2}\biggr ),%
    \label{introEquation:Lp9b}\\%
%
\dot{q}(\Delta t) &= \dot{q}\biggl (\frac{\Delta t}{2}\biggr ) +
    \frac{\Delta t}{2m}\, F[q(\Delta t)]. \label{introEquation:Lp9c}
\end{align}
From the preceding splitting, one can see that the integration of
the equations of motion would follow:
\begin{enumerate}
\item Calculate the velocities at the half step, $\frac{\Delta t}{2}$, from the forces calculated at the initial position.

\item Use the half step velocities to move positions one whole step, $\Delta t$.

\item Evaluate the forces at the new positions, $q(\Delta t)$, and use the new forces to complete the velocity move.

\item Repeat from step 1 with the new positions, velocities, and forces assuming the roles of the initial values.
\end{enumerate}
By simply switching the order of the propagators in the splitting
and composing a new integrator, the \emph{position verlet}
integrator can be generated,
\begin{align}
\dot q(\Delta t) &= \dot q(0) + \frac{{\Delta t}}{m}F\left[ {q(0) +
\frac{{\Delta t}}{2}\dot q(0)} \right], %
\label{introEquation:positionVerlet1} \\%
%
q(\Delta t) &= q(0) + \frac{{\Delta t}}{2}\left[ {\dot q(0) + \dot
q(\Delta t)} \right].
%
\label{introEquation:positionVerlet2}
\end{align}

\subsubsection{\label{introSection:errorAnalysis}\textbf{Error Analysis and Higher Order Methods}}

The Baker-Campbell-Hausdorff formula\cite{Gilmore1974} can be used
to determine the local error of a splitting method in terms of the
commutators of the operators associated with the sub-propagators.
For operators $hX$ and $hY$ which are associated with
$\varphi_1(t)$ and $\varphi_2(t)$ respectively, we have
\begin{equation}
\exp (hX + hY) = \exp (hZ)
\end{equation}
where
\begin{equation}
hZ = hX + hY + \frac{{h^2 }}{2}[X,Y] + \frac{{h^3 }}{{12}}\left(
{[X,[X,Y]] + [Y,[Y,X]]} \right) + \ldots .
\end{equation}
Here, $[X,Y]$ is the commutator of the operators $X$ and $Y$ given
by
\[
[X,Y] = XY - YX .
\]
Applying the Baker-Campbell-Hausdorff formula\cite{Varadarajan1974}
to the Strang splitting, we can obtain
\begin{eqnarray*}
\exp (h X/2)\exp (h Y)\exp (h X/2) & = & \exp (h X + h Y + h^2 [X,Y]/4 + h^2 [Y,X]/4 \\
 & & \mbox{} + h^2 [X,X]/8 + h^2 [Y,Y]/8 \\
 & & \mbox{} + h^3 [Y,[Y,X]]/12 - h^3[X,[X,Y]]/24 + \ldots
 ).
\end{eqnarray*}
Since $ [X,Y] + [Y,X] = 0$ and $ [X,X] = 0$, the dominant local
error of the Strang splitting is proportional to $h^3$. The same
procedure can be applied to a general splitting of the form
\begin{equation}
\varphi _{b_m h}^2 \circ \varphi _{a_m h}^1 \circ \varphi _{b_{m -
1} h}^2 \circ \ldots \circ \varphi _{a_1 h}^1 .
\end{equation}
A careful choice of the coefficients $a_1 \ldots b_m$ will lead to
higher order methods.
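The $h^2$ versus $h^3$ local error estimates can be observed
numerically. The following sketch is an added illustration assuming
the harmonic oscillator with exact kick and drift sub-propagators:
halving the step size should cut the Lie-Trotter error by about
$4\times$ and the Strang error by about $8\times$.

```python
import math

# Exact sub-propagators for H1 = p^2/2 (drift) and H2 = q^2/2 (kick)
def drift(q, p, h): return q + h * p, p
def kick(q, p, h):  return q, p - h * q

def lie_trotter(q, p, h):
    return kick(*drift(q, p, h), h)

def strang(q, p, h):
    q, p = kick(q, p, h / 2)
    q, p = drift(q, p, h)
    return kick(q, p, h / 2)

def exact(q, p, h):
    return (q * math.cos(h) + p * math.sin(h),
            -q * math.sin(h) + p * math.cos(h))

def local_error(method, h):
    qe, pe = exact(1.0, 0.5, h)
    qm, pm = method(1.0, 0.5, h)
    return math.hypot(qm - qe, pm - pe)

# Halving h: Lie-Trotter error falls ~4x (h^2), Strang error ~8x (h^3)
r1 = local_error(lie_trotter, 0.02) / local_error(lie_trotter, 0.01)
r2 = local_error(strang, 0.02) / local_error(strang, 0.01)
assert 3.5 < r1 < 4.5
assert 7.0 < r2 < 9.0
```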
Yoshida proposed an elegant way to compose higher
order methods based on symmetric splitting.\cite{Yoshida1990} Given
a symmetric second order base method $ \varphi _h^{(2)} $, a
fourth-order symmetric method can be constructed by composing,
\[
\varphi _h^{(4)} = \varphi _{\alpha h}^{(2)} \circ \varphi _{\beta
h}^{(2)} \circ \varphi _{\alpha h}^{(2)}
\]
where $ \alpha = \frac{1}{{2 - 2^{1/3} }}$ and $ \beta = -
\frac{{2^{1/3} }}{{2 - 2^{1/3} }}$, so that $2\alpha + \beta = 1$.
Moreover, a symmetric integrator $ \varphi _h^{(2n + 2)}$ can be
composed by
\begin{equation}
\varphi _h^{(2n + 2)} = \varphi _{\alpha h}^{(2n)} \circ \varphi
_{\beta h}^{(2n)} \circ \varphi _{\alpha h}^{(2n)},
\end{equation}
if the weights are chosen as
\[
\alpha = \frac{1}{{2 - 2^{1/(2n + 1)} }},\beta = -
\frac{{2^{1/(2n + 1)} }}{{2 - 2^{1/(2n + 1)} }} .
\]

\section{\label{introSection:molecularDynamics}Molecular Dynamics}

As one of the principal tools of molecular modeling, molecular
dynamics has proven to be a powerful tool for studying the functions
of biological systems, providing structural, thermodynamic and
dynamical information. The basic idea of molecular dynamics is that
macroscopic properties are related to microscopic behavior, and
microscopic behavior can be calculated from the trajectories in
simulations. For instance, the instantaneous temperature of a
Hamiltonian system of $N$ particles can be measured by
\[
T = \sum\limits_{i = 1}^N {\frac{{m_i v_i^2 }}{{fk_B }}}
\]
where $m_i$ and $v_i$ are the mass and velocity of the $i$th
particle respectively, $f$ is the number of degrees of freedom, and
$k_B$ is the Boltzmann constant.
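The temperature estimator above can be sketched directly. The
choices $f = 3N - 3$ (center-of-mass motion removed) and reduced
units with $k_B = 1$ are assumptions made for this example only, not
prescriptions from the text.

```python
import numpy as np

def instantaneous_temperature(m, v, k_B=1.0):
    """T = sum_i m_i v_i^2 / (f k_B), with f = 3N - 3 assumed
    (center-of-mass motion removed) and reduced units k_B = 1."""
    f = 3 * len(m) - 3                    # degrees of freedom
    return np.sum(m[:, None] * v**2) / (f * k_B)

# Velocities drawn from a Maxwell-Boltzmann distribution with k_B T / m = 1
rng = np.random.default_rng(0)
m = np.ones(4000)
v = rng.normal(0.0, 1.0, size=(4000, 3))
assert abs(instantaneous_temperature(m, v) - 1.0) < 0.1
```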
A typical molecular dynamics run consists of three essential steps:
\begin{enumerate}
  \item Initialization
    \begin{enumerate}
      \item Preliminary preparation
      \item Minimization
      \item Heating
      \item Equilibration
    \end{enumerate}
  \item Production
  \item Analysis
\end{enumerate}
These three individual steps will be covered in the following
sections. Sec.~\ref{introSec:initialSystemSettings} deals with the
initialization of a simulation. Sec.~\ref{introSection:production}
discusses issues of production runs.
Sec.~\ref{introSection:Analysis} provides the theoretical tools for
the analysis of trajectories.

\subsection{\label{introSec:initialSystemSettings}Initialization}

\subsubsection{\textbf{Preliminary preparation}}

When selecting the starting structure of a molecule for molecular
simulation, one may retrieve its Cartesian coordinates from public
databases, such as the RCSB Protein Data Bank. Although thousands of
crystal structures of molecules are discovered every year, many more
remain unknown due to the difficulties of purification and
crystallization. Even for molecules with known structures, some
important information is missing. For example, a missing hydrogen
atom which acts as a donor in hydrogen bonding must be added.
Moreover, in order to include electrostatic interactions, one may
need to specify the partial charges for individual atoms. Under some
circumstances, we may even need to prepare the system in a special
configuration. For instance, when studying transport phenomena in
membrane systems, we may prepare the lipids in a bilayer structure
instead of placing lipids randomly in solvent, since we are not
interested in the slow self-aggregation process.
\subsubsection{\textbf{Minimization}}

It is quite possible that some molecules in the system resulting
from the preliminary preparation may overlap with each other. This
close proximity leads to a high initial potential energy which
consequently jeopardizes any molecular dynamics simulations. To
remove these steric overlaps, one typically performs energy
minimization to find a more reasonable conformation. Several energy
minimization methods have been developed to explore the energy
surface and to locate the local minimum. While converging slowly
near the minimum, the steepest descent method is extremely robust
when systems are strongly anharmonic. Thus, it is often used to
refine structures from crystallographic data. Relying on the
Hessian, advanced methods like Newton-Raphson converge rapidly to a
local minimum, but become unstable if the energy surface is far from
quadratic. Another factor that must be taken into account, when
choosing an energy minimization method, is the size of the system.
Steepest descent and conjugate gradient can deal with models of any
size. Because of the limits on computer memory to store the Hessian
matrix and the computing power needed to diagonalize these matrices,
most Newton-Raphson methods can not be used with very large systems.

\subsubsection{\textbf{Heating}}

Typically, heating is performed by assigning random velocities
according to a Maxwell-Boltzmann distribution for a desired
temperature.
Beginning at a lower temperature and gradually
increasing the temperature by assigning larger random velocities, we
end up setting the temperature of the system to a final temperature
at which the simulation will be conducted. In the heating phase, we
should also keep the system from drifting or rotating as a whole. To
do this, the net linear momentum and angular momentum of the system
are shifted to zero after each resampling from the
Maxwell-Boltzmann distribution.

\subsubsection{\textbf{Equilibration}}

The purpose of equilibration is to allow the system to evolve
spontaneously for a period of time and reach equilibrium. The
procedure is continued until various statistical properties, such as
the temperature, pressure, energy, volume and other structural
properties, become independent of time. Strictly speaking,
minimization and heating are not necessary, provided the
equilibration process is long enough. However, these steps can serve
as a means to arrive at an equilibrated structure in an effective
way.

\subsection{\label{introSection:production}Production}

The production run is the most important step of the simulation, in
which the equilibrated structure is used as a starting point and the
motions of the molecules are collected for later analysis. In order
to capture the macroscopic properties of the system, the molecular
dynamics simulation must be performed by sampling correctly and
efficiently from the relevant thermodynamic ensemble.

The most expensive part of a molecular dynamics simulation is the
calculation of non-bonded forces, such as van der Waals and
Coulombic forces. For a system of $N$ particles, the complexity of
the algorithm for pair-wise interactions is $O(N^2 )$, which makes
large simulations prohibitive in the absence of any algorithmic
tricks.
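The $O(N^2)$ cost is visible in the naive double loop over distinct
pairs. The sketch below uses the Lennard-Jones potential in reduced
units purely as an assumed example; no cutoff or periodic images are
applied.

```python
import numpy as np

def lj_energy_pairwise(pos, epsilon=1.0, sigma=1.0):
    """Naive O(N^2) pair-wise Lennard-Jones energy (illustrative sketch
    in reduced units; no cutoff, no periodic boundary conditions)."""
    n = len(pos)
    energy = 0.0
    for i in range(n - 1):
        for j in range(i + 1, n):     # N(N-1)/2 pairs: the O(N^2) cost
            r = np.linalg.norm(pos[i] - pos[j])
            sr6 = (sigma / r) ** 6
            energy += 4 * epsilon * (sr6**2 - sr6)
    return energy

# Two particles at the potential minimum r = 2^(1/6) sigma give E = -epsilon
pos = np.array([[0.0, 0.0, 0.0], [2 ** (1 / 6), 0.0, 0.0]])
assert abs(lj_energy_pairwise(pos) + 1.0) < 1e-12
```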
A natural approach to avoid system size issues
is to represent the bulk behavior by a finite number of particles.
However, this approach will suffer from surface effects at the edges
of the simulation. To offset this, \textit{periodic boundary
conditions} (see Fig.~\ref{introFig:pbc}) were developed to simulate
bulk properties with a relatively small number of particles. In this
method, the simulation box is replicated throughout space to form an
infinite lattice. During the simulation, when a particle moves in
the primary cell, its images in other cells move in exactly the same
direction with exactly the same orientation. Thus, as a particle
leaves the primary cell, one of its images will enter through the
opposite face.
\begin{figure}
\centering
\includegraphics[width=\linewidth]{pbc.eps}
\caption[An illustration of periodic boundary conditions]{A 2-D
illustration of periodic boundary conditions. As one particle leaves
the left of the simulation box, an image of it enters the right.}
\label{introFig:pbc}
\end{figure}

%cutoff and minimum image convention
Another important technique to improve the efficiency of force
evaluation is to apply spherical cutoffs where particles farther
than a predetermined distance are not included in the
calculation.\cite{Frenkel1996} The use of a cutoff radius will cause
a discontinuity in the potential energy curve. Fortunately, one can
shift a simple radial potential to ensure the potential curve goes
smoothly to zero at the cutoff radius. The cutoff strategy works
well for the Lennard-Jones interaction because of its short range
nature. However, simply truncating the electrostatic interaction
with the use of cutoffs has been shown to lead to severe artifacts
in simulations. The Ewald summation, in which the slowly decaying
Coulomb potential is transformed into direct and reciprocal sums
with rapid and absolute convergence, has proved to minimize the
periodicity artifacts in liquid simulations.
Taking advantage of
fast Fourier transform (FFT) techniques for calculating discrete
Fourier transforms, the particle mesh-based
methods\cite{Hockney1981,Shimada1993, Luty1994} are accelerated from
$O(N^{3/2})$ to $O(N \log N)$. An alternative approach is the
\emph{fast multipole method},\cite{Greengard1987, Greengard1994}
which treats Coulombic interactions exactly at short range, and
approximates the potential at long range through multipolar
expansion. In spite of their wide acceptance in the molecular
simulation community, these two methods are difficult to implement
correctly and efficiently. Instead, we use a damped and
charge-neutralized Coulomb potential method developed by Wolf and
his coworkers.\cite{Wolf1999} The shifted Coulomb potential for
particle $i$ and particle $j$ at distance $r_{ij}$ is given by:
\begin{equation}
V(r_{ij})= \frac{q_i q_j \textrm{erfc}(\alpha
r_{ij})}{r_{ij}}-\lim_{r_{ij}\rightarrow
R_\textrm{c}}\left\{\frac{q_iq_j \textrm{erfc}(\alpha
r_{ij})}{r_{ij}}\right\}, \label{introEquation:shiftedCoulomb}
\end{equation}
where $\alpha$ is the convergence parameter. Due to the lack of
inherent periodicity and its rapid convergence, this method is
extremely efficient and easy to implement.
\begin{figure}
\centering
\includegraphics[width=\linewidth]{shifted_coulomb.eps}
\caption[An illustration of the shifted Coulomb potential]{An
illustration of the shifted Coulomb potential.}
\label{introFigure:shiftedCoulomb}
\end{figure}

\subsection{\label{introSection:Analysis} Analysis}

According to the principles of Statistical Mechanics in
Sec.~\ref{introSection:statisticalMechanics}, one can compute
thermodynamic properties, analyze fluctuations of structural
parameters, and investigate time-dependent processes of the molecule
from the trajectories.
\subsubsection{\label{introSection:thermodynamicsProperties}\textbf{Thermodynamic Properties}}

Thermodynamic properties, which can be expressed in terms of some
function of the coordinates and momenta of all particles in the
system, can be directly computed from molecular dynamics. The usual
way to measure the pressure is based on the virial theorem of
Clausius, which states that the virial is equal to $-3Nk_BT$. For a
system with forces between particles, the total virial, $W$,
contains contributions from the external pressure and the
interactions between the particles:
\[
W = - 3PV + \left\langle {\sum\limits_{i < j} {r_{ij} \cdot
f_{ij} } } \right\rangle
\]
where $f_{ij}$ is the force between particle $i$ and $j$ at a
distance $r_{ij}$. Thus, the expression for the pressure is given
by:
\begin{equation}
P = \frac{{Nk_B T}}{V} - \frac{1}{{3V}}\left\langle {\sum\limits_{i
< j} {r_{ij} \cdot f_{ij} } } \right\rangle
\end{equation}

\subsubsection{\label{introSection:structuralProperties}\textbf{Structural Properties}}

Structural properties of a simple fluid can be described by a set of
distribution functions. Among these functions, the \emph{pair
distribution function}, also known as the \emph{radial distribution
function}, is of the most fundamental importance to liquid theory.
Experimentally, pair distribution functions can be gathered by
Fourier transforming raw data from a series of neutron diffraction
experiments and integrating over the surface
factor.\cite{Powles1973} The experimental results can serve as a
criterion to justify the correctness of a liquid model.
Moreover,
various equilibrium thermodynamic and structural properties can also
be expressed in terms of the radial distribution
function.\cite{Allen1987} The pair distribution function $g(r)$
gives the probability that a particle $i$ will be located at a
distance $r$ from another particle $j$ in the system,
\begin{equation}
g(r) = \frac{V}{{N^2 }}\left\langle {\sum\limits_i {\sum\limits_{j
\ne i} {\delta (r - r_{ij} )} } } \right\rangle = \frac{\rho
(r)}{\rho}.
\end{equation}
Note that the delta function can be replaced by a histogram in a
computer simulation. Peaks in $g(r)$ represent solvent shells, and
the height of these peaks gradually decreases to 1 as the local
density at large distances approaches the bulk density.

\subsubsection{\label{introSection:timeDependentProperties}\textbf{Time-dependent
Properties}}

Time-dependent properties are usually calculated using \emph{time
correlation functions}, which correlate random variables $A$ and $B$
at two different times,
\begin{equation}
C_{AB} (t) = \left\langle {A(t)B(0)} \right\rangle.
\label{introEquation:timeCorrelationFunction}
\end{equation}
If $A$ and $B$ refer to the same variable, this kind of correlation
function is called an \emph{autocorrelation function}. One typical
example is the velocity autocorrelation function, which is directly
related to the transport properties of molecular liquids:
\begin{equation}
D = \frac{1}{3}\int\limits_0^\infty {\left\langle {v(t) \cdot v(0)}
\right\rangle } dt
\end{equation}
where $D$ is the diffusion constant. Unlike the velocity
autocorrelation function, which is averaged over time origins and
over all the atoms, the dipole autocorrelation function is
calculated for the entire system.
The dipole autocorrelation function is given by:
\begin{equation}
c_{dipole} (t) = \left\langle {u_{tot} (t) \cdot u_{tot} (0)}
\right\rangle
\end{equation}
Here $u_{tot}$ is the net dipole of the entire system and is given
by
\begin{equation}
u_{tot} (t) = \sum\limits_i {u_i (t)}.
\end{equation}
In principle, many time correlation functions can be related to
Fourier transforms of the infrared, Raman, and inelastic neutron
scattering spectra of molecular liquids. In practice, one can
extract the IR spectrum from the intensity of the molecular dipole
fluctuation at each frequency using the following relationship:
\begin{equation}
\hat c_{dipole} (v) = \int_{ - \infty }^\infty {c_{dipole} (t)e^{ -
i2\pi vt} dt}.
\end{equation}

\section{\label{introSection:rigidBody}Dynamics of Rigid Bodies}

Rigid bodies are frequently involved in the modeling of different
areas, including engineering, physics and chemistry. For example,
missiles and vehicles are usually modeled as rigid bodies. The
movement of the objects in 3D gaming engines or other physics
simulators is governed by rigid body dynamics. In molecular
simulations, rigid bodies are used to simplify protein-protein
docking studies.\cite{Gray2003}

It is very important to develop stable and efficient methods to
integrate the equations of motion for orientational degrees of
freedom. Euler angles are the natural choice to describe the
rotational degrees of freedom. However, due to $\frac{1}{\sin
\theta}$ singularities, the numerical integration of the
corresponding equations of motion is very inefficient and
inaccurate. Although an alternative integrator using multiple sets
of Euler angles can overcome this difficulty,\cite{Barojas1973} the
computational penalty and the loss of angular momentum conservation
still remain.
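The $\frac{1}{\sin \theta}$ singularity can be made concrete with
the standard ZXZ Euler-angle kinematic equations; the sketch below
is a textbook illustration added here, not a scheme used in the
text.

```python
import math

def euler_rates(phi, theta, psi, omega):
    """Kinematic equations for ZXZ Euler angles, given the body-frame
    angular velocity omega = (w1, w2, w3); note the 1/sin(theta) factors."""
    w1, w2, w3 = omega
    s = w1 * math.sin(psi) + w2 * math.cos(psi)
    dphi = s / math.sin(theta)
    dtheta = w1 * math.cos(psi) - w2 * math.sin(psi)
    dpsi = w3 - s * math.cos(theta) / math.sin(theta)
    return dphi, dtheta, dpsi

# The same modest angular velocity produces diverging Euler-angle rates
# as theta -> 0: a coordinate singularity, not real physics.
omega = (0.1, 0.1, 0.1)
rate_far = abs(euler_rates(0.0, 1.0, 0.0, omega)[0])
rate_near = abs(euler_rates(0.0, 1e-3, 0.0, omega)[0])
assert rate_near > 100 * rate_far
```

Any fixed-step integration of these equations must take ever smaller
steps near $\theta = 0$, which is the inefficiency noted above.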
A singularity-free representation utilizing
quaternions was developed by Evans in 1977.\cite{Evans1977}
Unfortunately, this approach used a nonseparable Hamiltonian
resulting from the quaternion representation, which prevented
symplectic algorithms from being utilized. Another approach
is to apply holonomic constraints to the atoms belonging to the
rigid body. Each atom moves independently under the normal forces
deriving from the potential energy and the constraint forces which
are used to guarantee rigidity. However, due to their iterative
nature, the SHAKE and RATTLE algorithms converge very slowly as the
number of constraints increases.\cite{Ryckaert1977, Andersen1983}

A breakthrough in the geometric mechanics literature suggests that,
in order to develop a long-term integration scheme, one should
preserve the symplectic structure of the propagator. By introducing
a conjugate momentum to the rotation matrix $Q$ and re-formulating
Hamilton's equations, a symplectic integrator, RSHAKE,\cite{Kol1997}
was proposed to evolve the Hamiltonian system on a constraint
manifold by iteratively satisfying the orthogonality constraint $Q^T
Q = 1$. An alternative method using the quaternion representation
was developed by Omelyan.\cite{Omelyan1998} However, both of these
methods are iterative and inefficient. In this section, we describe
in depth a symplectic Lie-Poisson integrator for rigid bodies
developed by Dullweber and his coworkers.\cite{Dullweber1997}

\subsection{\label{introSection:constrainedHamiltonianRB}Constrained Hamiltonian for Rigid Bodies}
The Hamiltonian of a rigid body is given by
\begin{equation}
H = \frac{1}{2}(p^T m^{ - 1} p) + \frac{1}{2}tr(PJ^{ - 1} P) +
V(q,Q) + \frac{1}{2}tr[(QQ^T - 1)\Lambda ].
\label{introEquation:RBHamiltonian}
\end{equation}
Here, $q$ and $Q$ are the position vector and rotation matrix for
the rigid body, $p$ and $P$ are conjugate momenta to $q$ and $Q$,
and $J$, a diagonal matrix, is defined by
\[
I_{ii}^{ - 1} = \frac{1}{2}\sum\limits_{j \ne i} {J_{jj}^{ - 1} }
\]
where $I_{ii}$ is a diagonal element of the inertia tensor. This
constrained Hamiltonian is subject to a holonomic constraint,
\begin{equation}
Q^T Q = 1, \label{introEquation:orthogonalConstraint}
\end{equation}
which is used to ensure the orthogonality of the rotation matrix.
Using Eq.~\ref{introEquation:motionHamiltonianCoordinate} and
Eq.~\ref{introEquation:motionHamiltonianMomentum}, one can write
down the equations of motion,
\begin{eqnarray}
 \frac{{dq}}{{dt}} & = & \frac{p}{m}, \label{introEquation:RBMotionPosition}\\
 \frac{{dp}}{{dt}} & = & - \nabla _q V(q,Q), \label{introEquation:RBMotionMomentum}\\
 \frac{{dQ}}{{dt}} & = & PJ^{ - 1}, \label{introEquation:RBMotionRotation}\\
 \frac{{dP}}{{dt}} & = & - \nabla _Q V(q,Q) - 2Q\Lambda . \label{introEquation:RBMotionP}
\end{eqnarray}
Differentiating Eq.~\ref{introEquation:orthogonalConstraint} and
using Eq.~\ref{introEquation:RBMotionRotation}, one may obtain
\begin{equation}
Q^T PJ^{ - 1} + J^{ - 1} P^T Q = 0 .
\label{introEquation:RBFirstOrderConstraint}
\end{equation}
In general, there are two ways to satisfy the holonomic constraints.
We can use a constraint force provided by a Lagrange multiplier on
the normal manifold to keep the motion on the constraint space, or
we can simply evolve the system on the constraint manifold. These
two methods have been proved to be equivalent. The holonomic
constraint and the equations of motion define a constraint manifold
for rigid bodies,
\[
M = \left\{ {(Q,P):Q^T Q = 1,Q^T PJ^{ - 1} + J^{ - 1} P^T Q = 0}
\right\}.
\]
Unfortunately, this constraint manifold is not $T^* SO(3)$, the
symplectic manifold on the Lie rotation group $SO(3)$. However, it
turns out that under a symplectic transformation, the cotangent
space and the phase space are diffeomorphic. By introducing
\[
\tilde Q = Q,\quad \tilde P = \frac{1}{2}\left( {P - QP^T Q} \right),
\]
the mechanical system subject to the holonomic constraint manifold
$M$ can be re-formulated as a Hamiltonian system on the cotangent
space
\[
T^* SO(3) = \left\{ {(\tilde Q,\tilde P):\tilde Q^T \tilde Q =
1,\tilde Q^T \tilde PJ^{ - 1} + J^{ - 1} \tilde P^T \tilde Q = 0}
\right\}.
\]
For a body-fixed vector $X_i$ with respect to the center of mass of
the rigid body, the corresponding lab-fixed vector $X_i^{lab}$ is
given by
\begin{equation}
X_i^{lab} = Q X_i + q.
\end{equation}
Therefore, the potential energy $V(q,Q)$ is defined by
\[
V(q,Q) = V(Q X_0 + q).
\]
Hence, the force and torque are given by
\[
\nabla _q V(q,Q) = F(q,Q) = \sum\limits_i {F_i (q,Q)},
\]
and
\[
\nabla _Q V(q,Q) = \sum\limits_i {F_i (q,Q) X_i^T }
\]
respectively. As a common choice to describe the rotational dynamics
of the rigid body, the angular momentum in the body-fixed frame,
$\Pi = Q^T P$, is introduced to rewrite the equations of motion,
\begin{equation}
 \begin{array}{l}
 \dot \Pi = J^{ - 1} \Pi ^T \Pi + Q^T \sum\limits_i {F_i (q,Q)X_i^T } - \Lambda, \\
 \dot Q = Q\Pi {\rm{ }}J^{ - 1}, \\
 \end{array}
 \label{introEqaution:RBMotionPI}
\end{equation}
as well as the holonomic constraints $\Pi J^{ - 1} + J^{ - 1} \Pi ^T
= 0$ and $Q^T Q = 1$.
For a vector $v(v_1 ,v_2 ,v_3 ) \in R^3$ and a
matrix $\hat v \in so(3)^ \star$, the hat-map isomorphism,
\begin{equation}
v(v_1 ,v_2 ,v_3 ) \Leftrightarrow \hat v = \left(
{\begin{array}{*{20}c}
   0 & { - v_3 } & {v_2 }  \\
   {v_3 } & 0 & { - v_1 }  \\
   { - v_2 } & {v_1 } & 0  \\
\end{array}} \right),
\label{introEquation:hatmapIsomorphism}
\end{equation}
allows us to associate matrix products with traditional vector
operations,
\[
\hat vu = v \times u.
\]
Using Eq.~\ref{introEqaution:RBMotionPI}, one can construct a skew
matrix,
\begin{eqnarray}
(\dot \Pi - \dot \Pi ^T )&= &(\Pi - \Pi ^T )(J^{ - 1} \Pi + \Pi J^{ - 1} ) \notag \\
& & + \sum\limits_i {[Q^T F_i (r,Q)X_i^T - X_i F_i (r,Q)^T Q]} -
(\Lambda - \Lambda ^T ). \label{introEquation:skewMatrixPI}
\end{eqnarray}
Since $\Lambda$ is symmetric, the last term of
Eq.~\ref{introEquation:skewMatrixPI} is zero, which implies that the
Lagrange multiplier $\Lambda$ is absent from the equations of
motion. This unique property eliminates the requirement of
iterations which cannot be avoided in other methods.\cite{Kol1997,
Omelyan1998} Applying the hat-map isomorphism, we obtain the
equation of motion for the angular momentum,
\begin{equation}
\dot \pi = \pi \times I^{ - 1} \pi + \sum\limits_i {\left( {Q^T
F_i (r,Q)} \right) \times X_i }.
\label{introEquation:bodyAngularMotion}
\end{equation}
In the same manner, the equation of motion for the rotation matrix
is given by
\[
\dot Q = Qskew(I^{ - 1} \pi ).
\]

\subsection{\label{introSection:SymplecticFreeRB}Symplectic
Lie-Poisson Integrator for Free Rigid Bodies}

If there are no external forces exerted on the rigid body, the only
contribution to the rotational motion is from the kinetic energy
(the first term of Eq.~\ref{introEquation:bodyAngularMotion}). The
free rigid body is an example of a Lie-Poisson system with
Hamiltonian function
\begin{equation}
T^r (\pi ) = T_1 ^r (\pi _1 ) + T_2^r (\pi _2 ) + T_3^r (\pi _3 )
\label{introEquation:rotationalKineticRB}
\end{equation}
where $T_i^r (\pi _i ) = \frac{{\pi _i ^2 }}{{2I_i }}$, and
Lie-Poisson structure matrix,
\begin{equation}
J(\pi ) = \left( {\begin{array}{*{20}c}
   0 & {\pi _3 } & { - \pi _2 }  \\
   { - \pi _3 } & 0 & {\pi _1 }  \\
   {\pi _2 } & { - \pi _1 } & 0  \\
\end{array}} \right).
\end{equation}
Thus, the dynamics of the free rigid body is governed by
\begin{equation}
\frac{d}{{dt}}\pi = J(\pi )\nabla _\pi T^r (\pi ).
\end{equation}
One may notice that each $T_i^r$ in
Eq.~\ref{introEquation:rotationalKineticRB} can be solved exactly.
For instance, the equations of motion due to $T_1^r$ are given by
\begin{equation}
\frac{d}{{dt}}\pi = R_1 \pi ,\quad \frac{d}{{dt}}Q = QR_1
\label{introEqaution:RBMotionSingleTerm}
\end{equation}
with
\[ R_1 = \left( {\begin{array}{*{20}c}
   0 & 0 & 0  \\
   0 & 0 & {\pi _1 }  \\
   0 & { - \pi _1 } & 0  \\
\end{array}} \right).
\]
The solutions of Eq.~\ref{introEqaution:RBMotionSingleTerm} are
\[
\pi (\Delta t) = e^{\Delta tR_1 } \pi (0),\quad Q(\Delta t) =
Q(0)e^{\Delta tR_1 }
\]
with
\[
e^{\Delta tR_1 } = \left( {\begin{array}{*{20}c}
   1 & 0 & 0  \\
   0 & {\cos \theta _1 } & {\sin \theta _1 }  \\
   0 & { - \sin \theta _1 } & {\cos \theta _1 }  \\
\end{array}} \right),\quad \theta _1 = \frac{{\pi _1 }}{{I_1 }}\Delta t.
\]
To reduce the cost of computing trigonometric functions in
$e^{\Delta tR_1 }$, we can use the Cayley transformation to obtain a
single-axis propagator,
\begin{eqnarray*}
e^{\Delta tR_1 } & \approx & (1 - \Delta tR_1 /2)^{ - 1} (1 + \Delta
tR_1 /2) \\
%
& \approx & \left( \begin{array}{ccc}
1 & 0 & 0 \\
0 & \frac{1-\theta_1^2 / 4}{1 + \theta_1^2 / 4} & \frac{\theta_1}{1+
\theta_1^2 / 4} \\
0 & -\frac{\theta_1}{1+ \theta_1^2 / 4} & \frac{1-\theta_1^2 / 4}{1 +
\theta_1^2 / 4}
\end{array}
\right).
\end{eqnarray*}
The propagators for $T_2^r$ and $T_3^r$ can be found in the same
manner. In order to construct a second-order symplectic method, we
split the angular kinetic Hamiltonian function into five terms
\[
T^r (\pi ) = \frac{1}{2}T_1 ^r (\pi _1 ) + \frac{1}{2}T_2^r (\pi _2
) + T_3^r (\pi _3 ) + \frac{1}{2}T_2^r (\pi _2 ) + \frac{1}{2}T_1 ^r
(\pi _1 ).
\]
By concatenating the propagators corresponding to these five terms,
we obtain a symplectic integrator,
\[
\varphi _{\Delta t,T^r } = \varphi _{\Delta t/2,\pi _1 } \circ
\varphi _{\Delta t/2,\pi _2 } \circ \varphi _{\Delta t,\pi _3 }
\circ \varphi _{\Delta t/2,\pi _2 } \circ \varphi _{\Delta t/2,\pi
_1 }.
\]
The non-canonical Lie-Poisson bracket $\{F, G\}$ of two functions
$F(\pi )$ and $G(\pi )$ is defined by
\[
\{ F,G\} (\pi ) = [\nabla _\pi F(\pi )]^T J(\pi )\nabla _\pi G(\pi
).
\]
If the Poisson bracket of a function $F$ with an arbitrary smooth
function $G$ is zero, $F$ is a \emph{Casimir}, which is a conserved
quantity in a Poisson system. We can easily verify that the norm of
the angular momentum, $\parallel \pi \parallel$, is a
\emph{Casimir}.\cite{McLachlan1993} Let $F(\pi ) = S(\frac{{\parallel
\pi \parallel ^2 }}{2})$ for an arbitrary function $ S:R \to R$;
then by the chain rule,
\[
\nabla _\pi F(\pi ) = S'(\frac{{\parallel \pi \parallel ^2
}}{2})\pi.
\]
Thus, $ [\nabla _\pi F(\pi )]^T J(\pi ) = - S'(\frac{{\parallel
\pi \parallel ^2 }}{2})\pi \times \pi = 0 $. This explicit
Lie-Poisson integrator is found to be both extremely efficient and
stable. These properties can be explained by the fact that only a
small-angle approximation is introduced and the norm of the angular
momentum is exactly conserved.

\subsection{\label{introSection:RBHamiltonianSplitting} Hamiltonian
Splitting for Rigid Bodies}

The Hamiltonian of a rigid body can be separated into kinetic and
potential energy terms, $H = T(p,\pi ) + V(q,Q)$.
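The single-axis rotations and the five-term splitting above can be sketched in a few lines of Python. The moments of inertia and the initial angular momentum are illustrative assumptions; since each sub-step is an exact rotation of $\pi$, the Casimir $\parallel \pi \parallel$ is conserved to machine precision, while the rotational energy shows only the small bounded oscillation expected of a symplectic method.

```python
import math

I = [1.0, 2.0, 3.0]   # principal moments of inertia (assumed values)

def axis_step(pi, k, dt):
    """phi_{dt, pi_k}: exact rotation of pi about body axis k by
    theta = pi_k * dt / I_k; pi_k itself is unchanged."""
    i, j = (k + 1) % 3, (k + 2) % 3
    th = pi[k] * dt / I[k]
    c, s = math.cos(th), math.sin(th)
    pi[i], pi[j] = c * pi[i] + s * pi[j], -s * pi[i] + c * pi[j]

def free_rb_step(pi, dt):
    """Five-term symmetric splitting: 1/2, 1/2, 1, 1/2, 1/2."""
    axis_step(pi, 0, dt / 2)
    axis_step(pi, 1, dt / 2)
    axis_step(pi, 2, dt)
    axis_step(pi, 1, dt / 2)
    axis_step(pi, 0, dt / 2)

pi = [1.0, 0.5, -0.3]
norm0 = math.sqrt(sum(p * p for p in pi))
e0 = sum(pi[k] ** 2 / (2 * I[k]) for k in range(3))
for _ in range(1000):
    free_rb_step(pi, 0.01)
norm1 = math.sqrt(sum(p * p for p in pi))
e1 = sum(pi[k] ** 2 / (2 * I[k]) for k in range(3))
```

The Casimir conservation holds for any step size, which is the numerical signature of the stability discussed above.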
The equations of motion corresponding to the potential and kinetic
energies are listed in Table~\ref{introTable:rbEquations}.
\begin{table}
\caption{EQUATIONS OF MOTION DUE TO POTENTIAL AND KINETIC ENERGIES}
\label{introTable:rbEquations}
\begin{center}
\begin{tabular}{|l|l|}
  \hline
  Potential & Kinetic \\
  $\frac{{dq}}{{dt}} = 0$ & $\frac{{dq}}{{dt}} = \frac{p}{m}$ \\
  $\frac{d}{{dt}}p = - \frac{{\partial V}}{{\partial q}}$ & $ \frac{d}{{dt}}p = 0$ \\
  $\frac{d}{{dt}}Q = 0$ & $ \frac{d}{{dt}}Q = Qskew(I^{ - 1} \pi )$ \\
  $ \frac{d}{{dt}}\pi = \sum\limits_i {\left( {Q^T F_i (r,Q)} \right) \times X_i }$ & $\frac{d}{{dt}}\pi = \pi \times I^{ - 1} \pi$\\
  \hline
\end{tabular}
\end{center}
\end{table}
A second-order symplectic method is now obtained by the composition
of the position and velocity propagators,
\[
\varphi _{\Delta t} = \varphi _{\Delta t/2,V} \circ \varphi
_{\Delta t,T} \circ \varphi _{\Delta t/2,V}.
\]
Moreover, $\varphi _{\Delta t/2,V}$ can be divided into two
sub-propagators corresponding to the force and torque, respectively,
\[
\varphi _{\Delta t/2,V} = \varphi _{\Delta t/2,F} \circ \varphi
_{\Delta t/2,\tau }.
\]
Since the operators associated with $\varphi _{\Delta t/2,F} $ and
$\varphi _{\Delta t/2,\tau }$ commute, the composition order inside
$\varphi _{\Delta t/2,V}$ does not matter. Furthermore, the kinetic
energy can be separated into a translational kinetic term, $T^t
(p)$, and a rotational kinetic term, $T^r (\pi )$,
\begin{equation}
T(p,\pi ) =T^t (p) + T^r (\pi ).
\end{equation}
where $ T^t (p) = \frac{1}{2}p^T m^{ - 1} p $ and $T^r (\pi )$ is
defined by Eq.~\ref{introEquation:rotationalKineticRB}. Therefore,
the corresponding propagators are given by
\[
\varphi _{\Delta t,T} = \varphi _{\Delta t,T^t } \circ \varphi
_{\Delta t,T^r }.
\]
Finally, we obtain the overall symplectic propagator for freely
moving rigid bodies,
\begin{eqnarray}
 \varphi _{\Delta t} &=& \varphi _{\Delta t/2,F} \circ \varphi _{\Delta t/2,\tau } \notag\\
  & & \circ \varphi _{\Delta t,T^t } \circ \varphi _{\Delta t/2,\pi _1 } \circ \varphi _{\Delta t/2,\pi _2 } \circ \varphi _{\Delta t,\pi _3 } \circ \varphi _{\Delta t/2,\pi _2 } \circ \varphi _{\Delta t/2,\pi _1 } \notag\\
  & & \circ \varphi _{\Delta t/2,\tau } \circ \varphi _{\Delta t/2,F} .
\label{introEquation:overallRBFlowMaps}
\end{eqnarray}

\section{\label{introSection:langevinDynamics}Langevin Dynamics}
As an alternative to Newtonian dynamics, Langevin dynamics, which
mimics a simple heat bath with stochastic and dissipative forces,
has been applied in a variety of studies. This section reviews the
theory of Langevin dynamics. A brief derivation of the generalized
Langevin equation will be given first. Following that, we will
discuss the physical meaning of the terms appearing in the equation.

\subsection{\label{introSection:generalizedLangevinDynamics}Derivation of the Generalized Langevin Equation}

A harmonic bath model, in which an effective set of harmonic
oscillators is used to mimic the effect of a linearly responding
environment, has been widely used in quantum chemistry and
statistical mechanics. One of the successful applications of the
harmonic bath model is the derivation of the generalized Langevin
equation (GLE).
Consider a system in which the degree of freedom $x$ is assumed to
couple to the bath linearly, giving a Hamiltonian of the form
\begin{equation}
H = \frac{{p^2 }}{{2m}} + U(x) + H_B + \Delta U(x,x_1 , \ldots x_N)
\label{introEquation:bathGLE}.
\end{equation}
Here $p$ is the momentum conjugate to $x$, $m$ is the mass
associated with this degree of freedom, and $H_B$ is the harmonic
bath Hamiltonian,
\[
H_B = \sum\limits_{\alpha = 1}^N {\left\{ {\frac{{p_\alpha ^2
}}{{2m_\alpha }} + \frac{1}{2}m_\alpha \omega _\alpha ^2 x_\alpha ^2 }
\right\}}
\]
where the index $\alpha$ runs over all the bath degrees of freedom,
$\omega _\alpha$ are the harmonic bath frequencies, $m_\alpha$ are
the harmonic bath masses, and $\Delta U$ is a bilinear system-bath
coupling,
\[
\Delta U = - \sum\limits_{\alpha = 1}^N {g_\alpha x_\alpha x}
\]
where $g_\alpha$ are the coupling constants between the bath
coordinates ($x_\alpha$) and the system coordinate ($x$).
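The completed-square rewriting of this Hamiltonian, performed in the next step of the derivation, can be verified numerically. The sketch below assumes an arbitrary quartic system potential and random bath parameters; the kinetic terms are identical on both sides and are omitted.

```python
import random

random.seed(3)

# Check the identity (kinetic terms omitted from both sides):
#   U(x) + sum 1/2 m w^2 x_a^2  -  sum g x_a x
#     == W(x) + sum 1/2 m w^2 (x_a - g x / (m w^2))^2
# where W(x) = U(x) - sum g^2 / (2 m w^2) x^2.
N = 5
mass = [random.uniform(0.5, 2.0) for _ in range(N)]
w = [random.uniform(0.5, 3.0) for _ in range(N)]
g = [random.uniform(-1.0, 1.0) for _ in range(N)]
x = 0.7
xa = [random.uniform(-1.0, 1.0) for _ in range(N)]

def U(x):
    return 0.25 * x ** 4          # any system potential works here

lhs = U(x) + sum(0.5 * mass[a] * w[a] ** 2 * xa[a] ** 2 for a in range(N)) \
      - sum(g[a] * xa[a] * x for a in range(N))
W = U(x) - sum(g[a] ** 2 / (2 * mass[a] * w[a] ** 2) for a in range(N)) * x ** 2
rhs = W + sum(0.5 * mass[a] * w[a] ** 2 *
              (xa[a] - g[a] * x / (mass[a] * w[a] ** 2)) ** 2
              for a in range(N))
```

The two expressions agree to round-off for any choice of bath parameters, which is the algebraic content of the rewriting below.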
Introducing
\[
W(x) = U(x) - \sum\limits_{\alpha = 1}^N {\frac{{g_\alpha ^2
}}{{2m_\alpha \omega _\alpha ^2 }}} x^2
\]
and combining the last two terms in
Eq.~\ref{introEquation:bathGLE}, we may rewrite the Hamiltonian as
\[
H = \frac{{p^2 }}{{2m}} + W(x) + \sum\limits_{\alpha = 1}^N
{\left\{ {\frac{{p_\alpha ^2 }}{{2m_\alpha }} + \frac{1}{2}m_\alpha
\omega _\alpha ^2 \left( {x_\alpha - \frac{{g_\alpha }}{{m_\alpha
\omega _\alpha ^2 }}x} \right)^2 } \right\}}.
\]
Since the first two terms of the new Hamiltonian depend only on the
system coordinates, we can obtain the equations of motion for
generalized Langevin dynamics from Hamilton's equations,
\begin{equation}
m\ddot x = - \frac{{\partial W(x)}}{{\partial x}} +
\sum\limits_{\alpha = 1}^N {g_\alpha \left( {x_\alpha -
\frac{{g_\alpha }}{{m_\alpha \omega _\alpha ^2 }}x} \right)},
\label{introEquation:coorMotionGLE}
\end{equation}
and
\begin{equation}
m_\alpha \ddot x_\alpha = - m_\alpha \omega _\alpha ^2 \left( {x_\alpha -
\frac{{g_\alpha }}{{m_\alpha \omega _\alpha ^2 }}x} \right).
\label{introEquation:bathMotionGLE}
\end{equation}
In order to derive an equation for $x$, the dynamics of the bath
variables $x_\alpha$ must be solved exactly first. As an integral
transform which is particularly useful in solving linear ordinary
differential equations, the Laplace transform is the appropriate
tool for this problem. The basic idea is to transform the difficult
differential equations into simple algebraic problems which can be
solved easily.
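The Laplace-transform pairs used in this derivation are easy to check numerically. A crude quadrature sketch (the truncation time and step count are arbitrary choices, picked so that $e^{-pT}$ is negligible):

```python
import math

def laplace(f, p, T=60.0, n=200000):
    """Crude numerical Laplace transform: trapezoidal-rule integral of
    f(t) * exp(-p t) over [0, T]; T is chosen so the tail is negligible."""
    h = T / n
    s = 0.5 * (f(0.0) + f(T) * math.exp(-p * T))
    for i in range(1, n):
        t = i * h
        s += f(t) * math.exp(-p * t)
    return s * h

a, p = 2.0, 0.5
num = laplace(lambda t: math.cos(a * t), p)
exact = p / (p * p + a * a)        # L(cos at) = p / (p^2 + a^2)
```

The same check works for $L(\sin at)$ and $L(1)$, the other pairs invoked below.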
Then, by applying the inverse Laplace transform, we can retrieve the
solutions of the original problems. Let $f(t)$ be a function defined
on $ [0,\infty ) $; the Laplace transform of $f(t)$ is a new
function defined as
\[
L(f(t)) \equiv F(p) = \int_0^\infty {f(t)e^{ - pt} dt}
\]
where $p$ is real and $L$ is called the Laplace transform operator.
Below are some important properties of the Laplace transform:
\begin{eqnarray*}
 L(x + y) & = & L(x) + L(y) \\
 L(ax) & = & aL(x) \\
 L(\dot x) & = & pL(x) - x(0) \\
 L(\ddot x)& = & p^2 L(x) - px(0) - \dot x(0) \\
 L\left( {\int_0^t {g(t - \tau )h(\tau )d\tau } } \right)& = & G(p)H(p) \\
 \end{eqnarray*}
Applying the Laplace transform to the bath coordinates, we obtain
\begin{eqnarray*}
p^2 L(x_\alpha ) - px_\alpha (0) - \dot x_\alpha (0) & = & - \omega _\alpha ^2 L(x_\alpha ) + \frac{{g_\alpha }}{{m_\alpha }}L(x), \\
L(x_\alpha ) & = & \frac{{\frac{{g_\alpha }}{{m_\alpha }}L(x) + px_\alpha (0) + \dot x_\alpha (0)}}{{p^2 + \omega _\alpha ^2 }}. \\
\end{eqnarray*}
In the same way, the system coordinate becomes
\begin{eqnarray*}
 mL(\ddot x) & = &
  - \sum\limits_{\alpha = 1}^N {\left\{ {\frac{{g_\alpha ^2 }}{{m_\alpha \omega _\alpha ^2 }}\frac{p}{{p^2 + \omega _\alpha ^2 }}pL(x) - \frac{p}{{p^2 + \omega _\alpha ^2 }}g_\alpha x_\alpha (0) - \frac{1}{{p^2 + \omega _\alpha ^2 }}g_\alpha \dot x_\alpha (0)} \right\}} \\
  & & - L\left( {\frac{{\partial W(x)}}{{\partial x}}} \right).
\end{eqnarray*}
With the help of some important Laplace transform pairs:
\[
\begin{array}{c}
 L(\cos at) = \frac{p}{{p^2 + a^2 }} \\
 L(\sin at) = \frac{a}{{p^2 + a^2 }} \\
 L(1) = \frac{1}{p} \\
 \end{array}
\]
we obtain
\begin{eqnarray*}
m\ddot x & = & - \frac{{\partial W(x)}}{{\partial x}} -
\sum\limits_{\alpha = 1}^N {\left\{ {\frac{{g_\alpha ^2
}}{{m_\alpha \omega _\alpha ^2 }}\int_0^t {\cos (\omega _\alpha \tau
)\dot x(t - \tau )d\tau } } \right\}} \\
& & + \sum\limits_{\alpha = 1}^N {\left\{ {g_\alpha \left[
{x_\alpha (0) - \frac{{g_\alpha }}{{m_\alpha \omega _\alpha ^2
}}x(0)} \right]\cos (\omega _\alpha t) + \frac{{g_\alpha \dot
x_\alpha (0)}}{{\omega _\alpha }}\sin (\omega _\alpha t)}
\right\}}.
\end{eqnarray*}
Introducing a \emph{dynamic friction kernel}
\begin{equation}
\xi (t) = \sum\limits_{\alpha = 1}^N {\frac{{g_\alpha ^2
}}{{m_\alpha \omega _\alpha ^2 }}\cos (\omega _\alpha t)}
\label{introEquation:dynamicFrictionKernelDefinition}
\end{equation}
and a \emph{random force}
\begin{equation}
R(t) =
\sum\limits_{\alpha = 1}^N {\left\{ {g_\alpha \left[ {x_\alpha (0)
- \frac{{g_\alpha }}{{m_\alpha \omega _\alpha ^2 }}x(0)}
\right]\cos (\omega _\alpha t) + \frac{{g_\alpha \dot x_\alpha
(0)}}{{\omega _\alpha }}\sin (\omega _\alpha t)} \right\}},
\label{introEquation:randomForceDefinition}
\end{equation}
the equation of motion can be rewritten as
\begin{equation}
m\ddot x = - \frac{{\partial W}}{{\partial x}} - \int_0^t {\xi
(\tau )\dot x(t - \tau )d\tau } + R(t),
\label{introEuqation:GeneralizedLangevinDynamics}
\end{equation}
which is known as the \emph{generalized Langevin equation} (GLE).

\subsubsection{\label{introSection:randomForceDynamicFrictionKernel}\textbf{Random Force and Dynamic Friction Kernel}}

One may notice that $R(t)$ depends only on the initial conditions,
which implies it is completely deterministic within the context of a
harmonic bath. However, it is easy to verify that $R(t)$ is totally
uncorrelated with $x(0)$ and $\dot x(0)$,
\[
\left\langle {x(0)R(t)} \right\rangle = 0, \quad \left\langle {\dot
x(0)R(t)} \right\rangle = 0.
\]
This property is what we expect from a truly random process. In
practice, $R(t)$ is usually modeled as a Gaussian random process,
which preserves the stochastic nature of the GLE.
%dynamic friction kernel
The convolution integral
\[
\int_0^t {\xi (\tau )\dot x(t - \tau )d\tau }
\]
depends on the entire history of the evolution of $x$, which implies
that the bath retains memory of previous motions. In other words,
the bath requires a finite time to respond to changes in the motion
of the system. For a sluggish bath which responds slowly to changes
in the system coordinate, we may regard $\xi(t)$ as a constant,
$\xi(t) = \xi_0$.
Hence, the convolution integral becomes
\[
\int_0^t {\xi (\tau )\dot x(t - \tau )d\tau } = \xi _0 (x(t) - x(0)),
\]
and Eq.~\ref{introEuqation:GeneralizedLangevinDynamics} becomes
\[
m\ddot x = - \frac{\partial }{{\partial x}}\left( {W(x) +
\frac{1}{2}\xi _0 (x - x_0 )^2 } \right) + R(t),
\]
which can be used to describe the effect of dynamic caging in
viscous solvents. The other extreme is a bath that responds
infinitely quickly to motions in the system. Then, $\xi (t)$ can be
taken as a $\delta$-function in time:
\[
\xi (t) = 2\xi _0 \delta (t).
\]
Hence, the convolution integral becomes
\[
\int_0^t {\xi (\tau )\dot x(t - \tau )d\tau } = 2\xi _0 \int_0^t
{\delta (\tau )\dot x(t - \tau )d\tau } = \xi _0 \dot x(t),
\]
and Eq.~\ref{introEuqation:GeneralizedLangevinDynamics} becomes the
Langevin equation,
\begin{equation}
m\ddot x = - \frac{{\partial W(x)}}{{\partial x}} - \xi _0 \dot
x(t) + R(t) \label{introEquation:LangevinEquation}.
\end{equation}
The static friction coefficient $\xi _0$ can either be calculated
from the spectral density or be determined by Stokes' law for
regularly shaped particles. A brief review of calculating friction
tensors for arbitrarily shaped particles is given in
Sec.~\ref{introSection:frictionTensor}.

\subsubsection{\label{introSection:secondFluctuationDissipation}\textbf{The Second Fluctuation Dissipation Theorem}}

Defining a new set of coordinates,
\[
q_\alpha (t) = x_\alpha (t) - \frac{{g_\alpha }}{{m_\alpha \omega
_\alpha ^2 }}x(0),
\]
we can rewrite $R(t)$ as
\[
R(t) = \sum\limits_{\alpha = 1}^N {g_\alpha q_\alpha (t)}.
\]
And since the $q$ coordinates are harmonic oscillators,
\begin{eqnarray*}
 \left\langle {q_\alpha ^2 } \right\rangle & = & \frac{{kT}}{{m_\alpha \omega _\alpha ^2 }} \\
 \left\langle {q_\alpha (t)q_\alpha (0)} \right\rangle & = & \left\langle {q_\alpha ^2 (0)} \right\rangle \cos (\omega _\alpha t) \\
 \left\langle {q_\alpha (t)q_\beta (0)} \right\rangle & = &\delta _{\alpha \beta } \left\langle {q_\alpha (t)q_\alpha (0)} \right\rangle \\
 \left\langle {R(t)R(0)} \right\rangle & = & \sum\limits_\alpha {\sum\limits_\beta {g_\alpha g_\beta \left\langle {q_\alpha (t)q_\beta (0)} \right\rangle } } \\
  & = &\sum\limits_\alpha {g_\alpha ^2 \left\langle {q_\alpha ^2 (0)} \right\rangle \cos (\omega _\alpha t)} \\
  & = &kT\xi (t)
\end{eqnarray*}
Thus, we recover the \emph{second fluctuation dissipation theorem},
\begin{equation}
\xi (t) = \frac{1}{{kT}}\left\langle {R(t)R(0)} \right\rangle
\label{introEquation:secondFluctuationDissipation},
\end{equation}
which acts as a constraint on the possible ways in which one can
model the random force and friction kernel.
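The Langevin equation and the fluctuation-dissipation pairing can be illustrated with a minimal white-noise sketch. The parameter values and the simple Euler-Maruyama discretization are illustrative assumptions; in equilibrium, the kinetic temperature should approach $kT/m$.

```python
import math
import random

random.seed(7)

# White-noise (Markovian) limit of the GLE in a harmonic well
# W(x) = 0.5 * k * x^2:  m x'' = -dW/dx - xi0 * x' + R(t),
# with the random-force strength fixed by fluctuation-dissipation:
# <R(t)R(t')> = 2 * xi0 * kT * delta(t - t').
m, k, xi0, kT, dt = 1.0, 1.0, 1.0, 1.0, 0.01
sigma = math.sqrt(2.0 * xi0 * kT * dt) / m   # random impulse per step

x, v = 0.0, 0.0
v2_sum, nsamp = 0.0, 0
for step in range(200000):
    f = -k * x - xi0 * v                      # systematic + friction force
    v += f * dt / m + sigma * random.gauss(0.0, 1.0)
    x += v * dt
    if step > 20000:                          # discard equilibration
        v2_sum += v * v
        nsamp += 1
v2_avg = v2_sum / nsamp                       # should approach kT/m
```

If the noise amplitude is chosen inconsistently with $\xi_0$, the sampled kinetic temperature drifts away from $kT/m$, which is exactly the constraint expressed by the second fluctuation dissipation theorem.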