
Comparing trunk/tengDissertation/Introduction.tex (file contents):
Revision 2850 by tim, Sun Jun 11 01:55:48 2006 UTC vs.
Revision 2872 by tim, Wed Jun 21 16:28:25 2006 UTC

# Line 496 | Line 496 | Leimkuhler1999}. The velocity verlet method, which hap
496   geometric integrators, which preserve various phase-flow invariants
497   such as symplectic structure, volume and time reversal symmetry, are
498   developed to address this issue\cite{Dullweber1997, McLachlan1998,
499 < Leimkuhler1999}. The velocity verlet method, which happens to be a
499 > Leimkuhler1999}. The velocity Verlet method, which happens to be a
500   simple example of a symplectic integrator, continues to gain
501   popularity in the molecular dynamics community. This fact can be
502   partly explained by its geometric nature.
# Line 591 | Line 591 | Instead, we use a approximate map, $\psi_\tau$, which
591   \end{equation}
592  
593   In most cases, it is not easy to find the exact flow $\varphi_\tau$.
594 < Instead, we use a approximate map, $\psi_\tau$, which is usually
594 > Instead, we use an approximate map, $\psi_\tau$, which is usually
595   called an integrator. The order of an integrator $\psi_\tau$ is $p$, if
596   the Taylor series of $\psi_\tau$ agrees with that of the exact flow to order $p$,
597   \begin{equation}
598 < \psi_tau(x) = x + \tau f(x) + O(\tau^{p+1})
598 > \psi_\tau(x) = x + \tau f(x) + O(\tau^{p+1})
599   \end{equation}
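
As a concrete check of this definition, the order of a one-step map can be
estimated numerically by comparing a single step of $\psi_\tau$ against the
exact flow of a model problem and watching how the error scales with $\tau$.
The Python sketch below is purely illustrative (the harmonic-oscillator test
problem, the explicit Euler map, and all names are assumptions made for this
example); it shows the expected $O(\tau^{2})$ one-step error of a first-order
($p=1$) method.
\begin{verbatim}
import numpy as np

def exact_flow(x, v, tau):
    # Exact flow of the harmonic oscillator dx/dt = v, dv/dt = -x.
    return (x*np.cos(tau) + v*np.sin(tau),
            -x*np.sin(tau) + v*np.cos(tau))

def euler_step(x, v, tau):
    # Explicit Euler map psi_tau, a first-order (p = 1) integrator.
    return x + tau*v, v - tau*x

x0, v0 = 1.0, 0.0
for tau in [0.1, 0.05, 0.025]:
    xe, ve = exact_flow(x0, v0, tau)
    xn, vn = euler_step(x0, v0, tau)
    print(tau, np.hypot(xn - xe, vn - ve))
    # one-step error drops by ~4 each time tau is halved: O(tau^2)
\end{verbatim}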
600  
601   \subsection{\label{introSection:geometricProperties}Geometric Properties}
602  
603 < The hidden geometric properties\cite{Budd1999, Marsden1998} of ODE
604 < and its flow play important roles in numerical studies. Many of them
605 < can be found in systems which occur naturally in applications.
603 > The hidden geometric properties\cite{Budd1999, Marsden1998} of an
604 > ODE and its flow play important roles in numerical studies. Many of
605 > them can be found in systems which occur naturally in applications.
606  
607   Let $\varphi$ be the flow of a Hamiltonian vector field; $\varphi$ is
608   a \emph{symplectic} flow if it satisfies,
# Line 617 | Line 617 | is the property must be preserved by the integrator.
617   \begin{equation}
618   {\varphi '}^T J \varphi ' = J \circ \varphi
619   \end{equation}
620 < is the property must be preserved by the integrator.
620 > is the property that must be preserved by the integrator.
621  
622   It is possible to construct a \emph{volume-preserving} flow for a
623 < source free($ \nabla \cdot f = 0 $) ODE, if the flow satisfies $
623 > source free ODE ($ \nabla \cdot f = 0 $), if the flow satisfies $
624   \det d\varphi  = 1$. One can easily show that a symplectic flow will
625   be volume-preserving.
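
These two properties can be verified numerically for a given map: for a system
with one degree of freedom, the Jacobian $\varphi '$ of a symplectic map must
satisfy ${\varphi '}^T J \varphi ' = J$ with $J$ the canonical skew-symmetric
matrix, which in turn forces $\det \varphi ' = 1$. The following Python sketch
(an illustration only; the harmonic-oscillator force, step size, and
finite-difference Jacobian are arbitrary choices) checks both conditions for
one velocity Verlet step.
\begin{verbatim}
import numpy as np

h = 0.1                        # time step
force = lambda q: -q           # harmonic oscillator, m = 1

def verlet_step(z):
    q, p = z
    q_new = q + h*p + 0.5*h*h*force(q)
    p_new = p + 0.5*h*(force(q) + force(q_new))
    return np.array([q_new, p_new])

# Jacobian of the one-step map by central finite differences
z0, eps = np.array([0.7, -0.3]), 1e-6
Jac = np.column_stack([(verlet_step(z0 + eps*e) - verlet_step(z0 - eps*e))/(2*eps)
                       for e in np.eye(2)])
J = np.array([[0.0, 1.0], [-1.0, 0.0]])
print(Jac.T @ J @ Jac)         # recovers J, so the map is symplectic
print(np.linalg.det(Jac))      # equals 1, so phase-space volume is preserved
\end{verbatim}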
626  
627 < Changing the variables $y = h(x)$ in a ODE\ref{introEquation:ODE}
628 < will result in a new system,
627 > Changing the variables $y = h(x)$ in an ODE
628 > (Eq.~\ref{introEquation:ODE}) will result in a new system,
629   \[
630   \dot y = \tilde f(y) = ((dh \cdot f)h^{ - 1} )(y).
631   \]
# Line 675 | Line 675 | constructed. The most famous example is the Verlet-lea
675   Many well-established and very effective numerical methods have
676   been successful precisely because of their symplecticity, even
677   though this fact was not recognized when they were first
678 < constructed. The most famous example is the Verlet-leapfrog methods
678 > constructed. The most famous example is the Verlet-leapfrog method
679   in molecular dynamics. In general, symplectic integrators can be
680   constructed using one of four different methods.
681   \begin{enumerate}
# Line 754 | Line 754 | to its symmetric property,
754   \label{introEquation:timeReversible}
755   \end{equation}
756  
757 < \subsubsection{\label{introSection:exampleSplittingMethod}\textbf{Example of Splitting Method}}
757 > \subsubsection{\label{introSection:exampleSplittingMethod}\textbf{Examples of the Splitting Method}}
758   The classical equation for a system consisting of interacting
759   particles can be written in Hamiltonian form,
760   \[
761   H = T + V
762   \]
763   where $T$ is the kinetic energy and $V$ is the potential energy.
764 < Setting $H_1 = T, H_2 = V$ and applying Strang splitting, one
764 > Setting $H_1 = T, H_2 = V$ and applying the Strang splitting, one
765   obtains the following:
766   \begin{align}
767   q(\Delta t) &= q(0) + \dot{q}(0)\Delta t +
# Line 788 | Line 788 | q(\Delta t) &= q(0) + \Delta t\, \dot{q}\biggl (\frac{
788      \label{introEquation:Lp9b}\\%
789   %
790   \dot{q}(\Delta t) &= \dot{q}\biggl (\frac{\Delta t}{2}\biggr ) +
791 <    \frac{\Delta t}{2m}\, F[q(0)]. \label{introEquation:Lp9c}
791 >    \frac{\Delta t}{2m}\, F[q(\Delta t)]. \label{introEquation:Lp9c}
792   \end{align}
793   From the preceding splitting, one can see that the integration of
794   the equations of motion would follow:
# Line 797 | Line 797 | the equations of motion would follow:
797  
798   \item Use the half step velocities to move positions one whole step, $\Delta t$.
799  
800 < \item Evaluate the forces at the new positions, $\mathbf{r}(\Delta t)$, and use the new forces to complete the velocity move.
800 > \item Evaluate the forces at the new positions, $\mathbf{q}(\Delta t)$, and use the new forces to complete the velocity move.
801  
802   \item Repeat from step 1 with the new position, velocities, and forces assuming the roles of the initial values.
803   \end{enumerate}
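
A minimal Python sketch of the integration cycle just described is given
below; the one-dimensional harmonic force, reduced units, and function names
are illustrative assumptions rather than the actual implementation used in
this work.
\begin{verbatim}
import numpy as np

def velocity_verlet(q, v, force, m, dt, nsteps):
    # One velocity Verlet cycle per iteration:
    # half-kick, drift, force evaluation at the new positions, half-kick.
    f = force(q)
    for _ in range(nsteps):
        v = v + 0.5*dt*f/m        # move velocities a half step
        q = q + dt*v              # move positions one whole step
        f = force(q)              # forces at the new positions
        v = v + 0.5*dt*f/m        # complete the velocity move
    return q, v

# toy example: one-dimensional harmonic oscillator in reduced units
q, v = velocity_verlet(1.0, 0.0, force=lambda q: -q, m=1.0, dt=0.01, nsteps=1000)
print(q, v)    # close to (cos(10), -sin(10))
\end{verbatim}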
804  
805 < Simply switching the order of splitting and composing, a new
806 < integrator, the \emph{position verlet} integrator, can be generated,
805 > By simply switching the order of the propagators in the splitting
806 > and composing them, a new integrator, the \emph{position Verlet}
807 > integrator, can be generated,
808   \begin{align}
809   \dot q(\Delta t) &= \dot q(0) + \frac{{\Delta t}}{m}F\left[ {q(0) +
810   \frac{{\Delta t}}{2}\dot q(0)} \right], %
# Line 816 | Line 817 | Baker-Campbell-Hausdorff formula can be used to determ
817  
818   \subsubsection{\label{introSection:errorAnalysis}\textbf{Error Analysis and Higher Order Methods}}
819  
820 < Baker-Campbell-Hausdorff formula can be used to determine the local
821 < error of splitting method in terms of commutator of the
820 > The Baker-Campbell-Hausdorff formula can be used to determine the
821 > local error of the splitting method in terms of the commutator of the
822   operators (\ref{introEquation:exponentialOperator}) associated with
823 < the sub-flow. For operators $hX$ and $hY$ which are associate to
823 > the sub-flows. For operators $hX$ and $hY$ which are associated with
824   $\varphi_1(t)$ and $\varphi_2(t)$ respectively, we have
825   \begin{equation}
826   \exp (hX + hY) = \exp (hZ)
# Line 833 | Line 834 | Applying Baker-Campbell-Hausdorff formula\cite{Varadar
834   \[
835   [X,Y] = XY - YX .
836   \]
837 < Applying Baker-Campbell-Hausdorff formula\cite{Varadarajan1974} to
838 < Sprang splitting, we can obtain
837 > Applying the Baker-Campbell-Hausdorff formula\cite{Varadarajan1974}
838 > to the Strang splitting, we can obtain
839   \begin{eqnarray*}
840   \exp (h X/2)\exp (h Y)\exp (h X/2) & = & \exp (h X + h Y + h^2 [X,Y]/4 + h^2 [Y,X]/4 \\
841                                     &   & \mbox{} + h^2 [X,X]/8 + h^2 [Y,Y]/8 \\
842                                     &   & \mbox{} + h^3 [Y,[Y,X]]/12 - h^3[X,[X,Y]]/24 + \ldots )
843   \end{eqnarray*}
844 < Since \[ [X,Y] + [Y,X] = 0\] and \[ [X,X] = 0\], the dominant local
844 > Since \[ [X,Y] + [Y,X] = 0\] and \[ [X,X] = 0,\] the dominant local
845   error of the Strang splitting is proportional to $h^3$. The same
846 < procedure can be applied to general splitting,  of the form
846 > procedure can be applied to a general splitting of the form
847   \begin{equation}
848   \varphi _{b_m h}^2  \circ \varphi _{a_m h}^1  \circ \varphi _{b_{m -
849   1} h}^2  \circ  \ldots  \circ \varphi _{a_1 h}^1 .
850   \end{equation}
851 < Careful choice of coefficient $a_1 \ldots b_m$ will lead to higher
852 < order method. Yoshida proposed an elegant way to compose higher
851 > A careful choice of the coefficients $a_1 \ldots b_m$ will lead to higher
852 > order methods. Yoshida proposed an elegant way to compose higher
853   order methods based on symmetric splitting\cite{Yoshida1990}. Given
854   a symmetric second order base method $ \varphi _h^{(2)} $, a
855   fourth-order symmetric method can be constructed by composing,
# Line 861 | Line 862 | _{\beta h}^{(2n)}  \circ \varphi _{\alpha h}^{(2n)}
862   integrator $ \varphi _h^{(2n + 2)}$ can be composed by
863   \begin{equation}
864   \varphi _h^{(2n + 2)}  = \varphi _{\alpha h}^{(2n)}  \circ \varphi
865 < _{\beta h}^{(2n)}  \circ \varphi _{\alpha h}^{(2n)}
865 > _{\beta h}^{(2n)}  \circ \varphi _{\alpha h}^{(2n)},
866   \end{equation}
867 < , if the weights are chosen as
867 > if the weights are chosen as
868   \[
869   \alpha  =  - \frac{{2^{1/(2n + 1)} }}{{2 - 2^{1/(2n + 1)} }},\beta =
870   \frac{{2^{1/(2n + 1)} }}{{2 - 2^{1/(2n + 1)} }} .
# Line 901 | Line 902 | will discusses issues in production run.
902   These three individual steps will be covered in the following
903   sections. Sec.~\ref{introSec:initialSystemSettings} deals with the
904   initialization of a simulation. Sec.~\ref{introSection:production}
905 < will discusses issues in production run.
905 > will discuss issues in the production run.
906   Sec.~\ref{introSection:Analysis} provides the theoretical tools for
907   trajectory analysis.
908  
# Line 914 | Line 915 | purification and crystallization. Even for the molecul
915   databases, such as the RCSB Protein Data Bank, \textit{etc}. Although
916   thousands of crystal structures of molecules are discovered every
917   year, many more remain unknown due to the difficulties of
918 < purification and crystallization. Even for the molecule with known
919 < structure, some important information is missing. For example, the
918 > purification and crystallization. Even for molecules with known
919 > structure, some important information is missing. For example, a
920   missing hydrogen atom which acts as a donor in hydrogen bonding must
921   be added. Moreover, in order to include electrostatic interaction,
922   one may need to specify the partial charges for individual atoms.
923   Under some circumstances, we may even need to prepare the system in
924 < a special setup. For instance, when studying transport phenomenon in
925 < membrane system, we may prepare the lipids in bilayer structure
926 < instead of placing lipids randomly in solvent, since we are not
927 < interested in self-aggregation and it takes a long time to happen.
924 > a special configuration. For instance, when studying transport
925 > phenomena in membrane systems, we may prepare the lipids in a
926 > bilayer structure instead of placing lipids randomly in solvent,
927 > since we are not interested in the slow self-aggregation process.
928  
929   \subsubsection{\textbf{Minimization}}
930  
931   It is quite possible that some of the molecules in the system from
932 < preliminary preparation may be overlapped with each other. This
933 < close proximity leads to high potential energy which consequently
934 < jeopardizes any molecular dynamics simulations. To remove these
935 < steric overlaps, one typically performs energy minimization to find
936 < a more reasonable conformation. Several energy minimization methods
937 < have been developed to exploit the energy surface and to locate the
938 < local minimum. While converging slowly near the minimum, steepest
939 < descent method is extremely robust when systems are far from
940 < harmonic. Thus, it is often used to refine structure from
941 < crystallographic data. Relied on the gradient or hessian, advanced
942 < methods like conjugate gradient and Newton-Raphson converge rapidly
943 < to a local minimum, while become unstable if the energy surface is
944 < far from quadratic. Another factor must be taken into account, when
932 > preliminary preparation may be overlapping with each other. This
933 > close proximity leads to high initial potential energy which
934 > consequently jeopardizes any molecular dynamics simulations. To
935 > remove these steric overlaps, one typically performs energy
936 > minimization to find a more reasonable conformation. Several energy
937 > minimization methods have been developed to explore the energy
938 > surface and to locate a local minimum. While converging slowly
939 > near the minimum, the steepest descent method is extremely robust when
940 > systems are strongly anharmonic. Thus, it is often used to refine
941 > structures from crystallographic data. Relying on the gradient or
942 > Hessian, advanced methods like Newton-Raphson converge rapidly to a
943 > local minimum, but become unstable if the energy surface is far from
944 > quadratic. Another factor that must be taken into account, when
945   choosing an energy minimization method, is the size of the system.
946   Steepest descent and conjugate gradient can deal with models of any
947 < size. Because of the limit of computation power to calculate hessian
948 < matrix and insufficient storage capacity to store them, most
949 < Newton-Raphson methods can not be used with very large models.
947 > size. Because of the limits on computer memory to store the Hessian
948 > matrix and the computing power needed to diagonalize these
949 > matrices, most Newton-Raphson methods cannot be used with very
950 > large systems.
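
As a deliberately simplified illustration of the steepest descent strategy
discussed above, the following Python sketch relaxes a two-dimensional model
potential by repeatedly stepping along the negative gradient; the toy
potential, fixed step size, and convergence criterion are assumptions made
only for this example, not those of any particular simulation package.
\begin{verbatim}
import numpy as np

def potential(x):
    # toy anharmonic surface standing in for the molecular potential energy
    return (x[0]**2 - 1.0)**2 + 2.0*x[1]**2

def gradient(x, h=1e-6):
    # numerical gradient by central differences
    g = np.zeros_like(x)
    for i in range(len(x)):
        e = np.zeros_like(x)
        e[i] = h
        g[i] = (potential(x + e) - potential(x - e))/(2.0*h)
    return g

def steepest_descent(x, step=0.05, tol=1e-8, max_iter=10000):
    for _ in range(max_iter):
        g = gradient(x)
        if np.linalg.norm(g) < tol:   # gradient vanishes near a local minimum
            break
        x = x - step*g                # move downhill along the negative gradient
    return x

print(steepest_descent(np.array([1.7, 0.9])))   # relaxes toward the minimum at (1, 0)
\end{verbatim}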
951  
952   \subsubsection{\textbf{Heating}}
953  
954   Typically, heating is performed by assigning random velocities
955 < according to a Gaussian distribution for a temperature. Beginning at
956 < a lower temperature and gradually increasing the temperature by
957 < assigning greater random velocities, we end up with setting the
958 < temperature of the system to a final temperature at which the
959 < simulation will be conducted. In heating phase, we should also keep
960 < the system from drifting or rotating as a whole. Equivalently, the
961 < net linear momentum and angular momentum of the system should be
962 < shifted to zero.
955 > according to a Maxwell-Boltzmann distribution for a desired
956 > temperature. Beginning at a lower temperature and gradually
957 > increasing the temperature by assigning larger random velocities, we
958 > eventually set the temperature of the system to the final
959 > temperature at which the simulation will be conducted. In the heating
960 > phase, we should also keep the system from drifting or rotating as a
961 > whole. To do this, the net linear momentum and angular momentum of
962 > the system are shifted to zero after each resampling from the
963 > Maxwell-Boltzmann distribution.
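
A schematic Python version of this velocity assignment is shown below. It
draws velocities from the Maxwell-Boltzmann distribution at a target
temperature and removes the net linear momentum; removal of the net angular
momentum, mentioned above, would proceed analogously using the inertia tensor
and is omitted here for brevity. The reduced units and array layout are
illustrative assumptions.
\begin{verbatim}
import numpy as np

kB = 1.0    # Boltzmann constant in reduced units (illustrative)

def assign_velocities(masses, T, rng):
    # Each Cartesian component is Gaussian with variance kB*T/m.
    sigma = np.sqrt(kB*T/masses)[:, None]
    v = rng.normal(0.0, 1.0, size=(len(masses), 3))*sigma
    # Remove the center-of-mass drift so the system does not translate as a whole.
    v -= (masses[:, None]*v).sum(axis=0)/masses.sum()
    return v

rng = np.random.default_rng(0)
masses = np.ones(100)
for T in [10.0, 50.0, 100.0]:                    # gradual heating schedule
    v = assign_velocities(masses, T, rng)
    T_inst = (masses[:, None]*v**2).sum()/(3*len(masses)*kB)
    print(T, T_inst)                             # instantaneous temperature close to T
\end{verbatim}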
964  
965   \subsubsection{\textbf{Equilibration}}
966  
# Line 973 | Line 976 | Production run is the most important step of the simul
976  
977   \subsection{\label{introSection:production}Production}
978  
979 < Production run is the most important step of the simulation, in
979 > The production run is the most important step of the simulation, in
980   which the equilibrated structure is used as a starting point and the
981   motions of the molecules are collected for later analysis. In order
982   to capture the macroscopic properties of the system, the molecular
983 < dynamics simulation must be performed in correct and efficient way.
983 > dynamics simulation must be performed by sampling correctly and
984 > efficiently from the relevant thermodynamic ensemble.
985  
986   The most expensive part of a molecular dynamics simulation is the
987   calculation of non-bonded forces, such as the van der Waals and
988   Coulombic forces, \textit{etc}. For a system of $N$ particles, the
989   complexity of the algorithm for pair-wise interactions is $O(N^2 )$,
990   which makes large simulations prohibitive in the absence of any
991 < computation saving techniques.
991 > algorithmic tricks.
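
The quadratic scaling comes directly from the double loop over particle pairs.
The schematic Python fragment below (a plain Lennard-Jones pair sum with
arbitrary parameters, shown only to make the $O(N^2)$ cost explicit) is
exactly the kind of loop that the techniques discussed next are designed to
avoid or accelerate.
\begin{verbatim}
import numpy as np

def lj_energy(pos, epsilon=1.0, sigma=1.0):
    # Naive O(N^2) double loop over all distinct pairs.
    N = len(pos)
    U = 0.0
    for i in range(N - 1):
        for j in range(i + 1, N):
            r = np.linalg.norm(pos[i] - pos[j])
            sr6 = (sigma/r)**6
            U += 4.0*epsilon*(sr6*sr6 - sr6)
    return U

pos = np.random.default_rng(0).uniform(0.0, 10.0, size=(200, 3))
print(lj_energy(pos))    # N(N-1)/2 = 19900 pair evaluations for N = 200
\end{verbatim}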
992  
993 < A natural approach to avoid system size issue is to represent the
993 > A natural approach to avoid system size issues is to represent the
994   bulk behavior by a finite number of particles. However, this
995 < approach will suffer from the surface effect. To offset this,
996 < \textit{Periodic boundary condition} (see Fig.~\ref{introFig:pbc})
997 < is developed to simulate bulk properties with a relatively small
998 < number of particles. In this method, the simulation box is
999 < replicated throughout space to form an infinite lattice. During the
1000 < simulation, when a particle moves in the primary cell, its image in
1001 < other cells move in exactly the same direction with exactly the same
1002 < orientation. Thus, as a particle leaves the primary cell, one of its
1003 < images will enter through the opposite face.
995 > approach will suffer from the surface effect at the edges of the
996 > simulation. To offset this, \textit{periodic boundary conditions}
997 > (see Fig.~\ref{introFig:pbc}) are used to simulate bulk
998 > properties with a relatively small number of particles. In this
999 > method, the simulation box is replicated throughout space to form an
1000 > infinite lattice. During the simulation, when a particle moves in
1001 > the primary cell, its images in other cells move in exactly the same
1002 > direction with exactly the same orientation. Thus, as a particle
1003 > leaves the primary cell, one of its images will enter through the
1004 > opposite face.
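
In practice, periodic boundary conditions reduce to two small operations:
wrapping coordinates back into the primary cell and using the minimum image
convention when computing pair separations. A minimal Python sketch for an
orthorhombic box is given below; the box dimensions and function names are
illustrative assumptions, and a triclinic cell would require the full cell
matrix.
\begin{verbatim}
import numpy as np

box = np.array([10.0, 10.0, 10.0])    # orthorhombic cell edge lengths

def wrap(r):
    # Put a coordinate back into the primary cell [0, L).
    return r - box*np.floor(r/box)

def minimum_image(ri, rj):
    # Displacement from j to the nearest periodic image of i.
    d = ri - rj
    return d - box*np.round(d/box)

ri = np.array([0.5, 9.8, 5.0])
rj = np.array([9.7, 0.3, 5.0])
print(wrap(np.array([10.4, -0.2, 5.0])))         # -> [0.4, 9.8, 5.0]
print(np.linalg.norm(minimum_image(ri, rj)))     # ~0.94 instead of ~13.2
\end{verbatim}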
1005   \begin{figure}
1006   \centering
1007   \includegraphics[width=\linewidth]{pbc.eps}
# Line 1008 | Line 1013 | evaluation is to apply cutoff where particles farther
1013  
1014   %cutoff and minimum image convention
1015   Another important technique to improve the efficiency of force
1016 < evaluation is to apply cutoff where particles farther than a
1017 < predetermined distance, are not included in the calculation
1016 > evaluation is to apply a spherical cutoff, where particles farther than
1017 > a predetermined distance are not included in the calculation
1018   \cite{Frenkel1996}. The use of a cutoff radius will cause a
1019   discontinuity in the potential energy curve. Fortunately, one can
1020 < shift the potential to ensure the potential curve go smoothly to
1021 < zero at the cutoff radius. Cutoff strategy works pretty well for
1022 < Lennard-Jones interaction because of its short range nature.
1023 < However, simply truncating the electrostatic interaction with the
1024 < use of cutoff has been shown to lead to severe artifacts in
1025 < simulations. Ewald summation, in which the slowly conditionally
1026 < convergent Coulomb potential is transformed into direct and
1027 < reciprocal sums with rapid and absolute convergence, has proved to
1028 < minimize the periodicity artifacts in liquid simulations. Taking the
1029 < advantages of the fast Fourier transform (FFT) for calculating
1030 < discrete Fourier transforms, the particle mesh-based
1020 > shift simple radial potentials to ensure that the potential curve goes
1021 > smoothly to zero at the cutoff radius. The cutoff strategy works
1022 > well for the Lennard-Jones interaction because of its short-range
1023 > nature. However, simply truncating the electrostatic interaction
1024 > with the use of cutoffs has been shown to lead to severe artifacts
1025 > in simulations. The Ewald summation, in which the slowly decaying
1026 > Coulomb potential is transformed into direct and reciprocal sums
1027 > with rapid and absolute convergence, has proved to minimize the
1028 > periodicity artifacts in liquid simulations. Taking advantage
1029 > of the fast Fourier transform (FFT) for calculating discrete Fourier
1030 > transforms, the particle mesh-based
1031   methods\cite{Hockney1981,Shimada1993, Luty1994} are accelerated from
1032 < $O(N^{3/2})$ to $O(N logN)$. An alternative approach is \emph{fast
1033 < multipole method}\cite{Greengard1987, Greengard1994}, which treats
1034 < Coulombic interaction exactly at short range, and approximate the
1035 < potential at long range through multipolar expansion. In spite of
1036 < their wide acceptances at the molecular simulation community, these
1037 < two methods are hard to be implemented correctly and efficiently.
1038 < Instead, we use a damped and charge-neutralized Coulomb potential
1039 < method developed by Wolf and his coworkers\cite{Wolf1999}. The
1040 < shifted Coulomb potential for particle $i$ and particle $j$ at
1041 < distance $r_{rj}$ is given by:
1032 > $O(N^{3/2})$ to $O(N \log N)$. An alternative approach is the
1033 > \emph{fast multipole method}\cite{Greengard1987, Greengard1994},
1034 > which treats Coulombic interactions exactly at short range, and
1035 > approximates the potential at long range through a multipolar
1036 > expansion. In spite of their wide acceptance in the molecular
1037 > simulation community, these two methods are difficult to implement
1038 > correctly and efficiently. Instead, we use a damped and
1039 > charge-neutralized Coulomb potential method developed by Wolf and
1040 > his coworkers\cite{Wolf1999}. The shifted Coulomb potential for
1041 > particle $i$ and particle $j$ at distance $r_{ij}$ is given by:
1042   \begin{equation}
1043   V(r_{ij})= \frac{q_i q_j \textrm{erfc}(\alpha
1044   r_{ij})}{r_{ij}}-\lim_{r_{ij}\rightarrow
# Line 1055 | Line 1060 | Recently, advanced visualization technique are widely
1060  
1061   \subsection{\label{introSection:Analysis} Analysis}
1062  
1063 < Recently, advanced visualization technique are widely applied to
1063 > Recently, advanced visualization techniques have been widely applied to
1064   monitor the motions of molecules. Although the dynamics of the
1065   system can be described qualitatively from animation, quantitative
1066 < trajectory analysis are more appreciable. According to the
1067 < principles of Statistical Mechanics,
1068 < Sec.~\ref{introSection:statisticalMechanics}, one can compute
1069 < thermodynamics properties, analyze fluctuations of structural
1070 < parameters, and investigate time-dependent processes of the molecule
1066 < from the trajectories.
1066 > trajectory analysis is more useful. According to the principles of
1067 > Statistical Mechanics, Sec.~\ref{introSection:statisticalMechanics},
1068 > one can compute thermodynamic properties, analyze fluctuations of
1069 > structural parameters, and investigate time-dependent processes of
1070 > the molecule from the trajectories.
1071  
1072 < \subsubsection{\label{introSection:thermodynamicsProperties}\textbf{Thermodynamics Properties}}
1072 > \subsubsection{\label{introSection:thermodynamicsProperties}\textbf{Thermodynamic Properties}}
1073  
1074 < Thermodynamics properties, which can be expressed in terms of some
1074 > Thermodynamic properties, which can be expressed in terms of some
1075   function of the coordinates and momenta of all particles in the
1076   system, can be directly computed from molecular dynamics. The usual
1077   way to measure the pressure is based on the virial theorem of Clausius
# Line 1090 | Line 1094 | distribution functions. Among these functions,\emph{pa
1094   \subsubsection{\label{introSection:structuralProperties}\textbf{Structural Properties}}
1095  
1096   Structural properties of a simple fluid can be described by a set of
1097 < distribution functions. Among these functions,\emph{pair
1097 > distribution functions. Among these functions, the \emph{pair
1098   distribution function}, also known as \emph{radial distribution
1099 < function}, is of most fundamental importance to liquid-state theory.
1100 < Pair distribution function can be gathered by Fourier transforming
1101 < raw data from a series of neutron diffraction experiments and
1102 < integrating over the surface factor \cite{Powles1973}. The
1103 < experiment result can serve as a criterion to justify the
1104 < correctness of the theory. Moreover, various equilibrium
1105 < thermodynamic and structural properties can also be expressed in
1106 < terms of radial distribution function \cite{Allen1987}.
1099 > function}, is of the most fundamental importance to liquid theory.
1100 > Experimentally, the pair distribution function can be gathered by
1101 > Fourier transforming raw data from a series of neutron diffraction
1102 > experiments and integrating over the structure factor
1103 > \cite{Powles1973}. The experimental results can serve as a criterion
1104 > to justify the correctness of a liquid model. Moreover, various
1105 > equilibrium thermodynamic and structural properties can also be
1106 > expressed in terms of the radial distribution function \cite{Allen1987}.
1107  
1108 < A pair distribution functions $g(r)$ gives the probability that a
1108 > The pair distribution function $g(r)$ gives the probability that a
1109   particle $i$ will be located at a distance $r$ from another
1110   particle $j$ in the system
1111   \[
1112   g(r) = \frac{V}{{N^2 }}\left\langle {\sum\limits_i {\sum\limits_{j
1113 < \ne i} {\delta (r - r_{ij} )} } } \right\rangle.
1113 > \ne i} {\delta (r - r_{ij} )} } } \right\rangle = \frac{\rho
1114 > (r)}{\rho}.
1115   \]
1116   Note that the delta function can be replaced by a histogram in
1117   computer simulation. Figure
# Line 1116 | Line 1121 | locating particles at distances less than about 2.5{\A
1121   to find particles at certain radial values than at others. This is a
1122   result of the attractive interaction at such distances. Because of
1123   the strong repulsive forces at short distance, the probability of
1124 < locating particles at distances less than about 2.5{\AA} from each
1124 > locating particles at distances less than about 3.7{\AA} from each
1125   other is essentially zero.
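
A schematic Python implementation of this histogram estimate of $g(r)$ for a
single configuration in a cubic periodic box is sketched below; the bin width,
box size, and ideal-gas normalization convention are the usual textbook
choices, written here only as an illustration rather than as production
analysis code.
\begin{verbatim}
import numpy as np

def pair_distribution(pos, box, dr=0.1):
    # Histogram of pair distances, normalized by the ideal-gas expectation.
    N = len(pos)
    r_max = box/2.0
    nbins = int(round(r_max/dr))
    hist = np.zeros(nbins)
    for i in range(N - 1):
        d = pos[i+1:] - pos[i]
        d -= box*np.round(d/box)                  # minimum image convention
        r = np.linalg.norm(d, axis=1)
        idx = np.minimum((r[r < r_max]/dr).astype(int), nbins - 1)
        np.add.at(hist, idx, 2.0)                 # each pair counted for i and j
    r = (np.arange(nbins) + 0.5)*dr
    shell_vol = 4.0*np.pi*r**2*dr
    rho = N/box**3
    return r, hist/(N*rho*shell_vol)

pos = np.random.default_rng(1).uniform(0.0, 8.0, size=(500, 3))
r, g = pair_distribution(pos, box=8.0)
print(g[10:15])    # fluctuates around 1 for an uncorrelated (ideal-gas) configuration
\end{verbatim}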
1126  
1127   %\begin{figure}
# Line 1131 | Line 1136 | correlation function}, which correlates random variabl
1136   Properties}}
1137  
1138   Time-dependent properties are usually calculated using \emph{time
1139 < correlation function}, which correlates random variables $A$ and $B$
1140 < at two different time
1139 > correlation functions}, which correlate random variables $A$ and $B$
1140 > at two different times,
1141   \begin{equation}
1142   C_{AB} (t) = \left\langle {A(t)B(0)} \right\rangle.
1143   \label{introEquation:timeCorrelationFunction}
1144   \end{equation}
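
Numerically, such a correlation function is evaluated by averaging the product
of the two quantities over all available time origins in a stored trajectory,
as in the short Python sketch below (the synthetic, exponentially correlated
test signal and the direct-loop evaluation are illustrative; an FFT-based
evaluation would normally be used for long trajectories).
\begin{verbatim}
import numpy as np

def correlate(A, B, max_lag):
    # C_AB(t) = < A(t0 + t) B(t0) >, averaged over all time origins t0.
    n = len(A)
    return np.array([np.mean(A[lag:]*B[:n - lag]) for lag in range(max_lag)])

# toy trajectory: exponentially correlated noise
rng = np.random.default_rng(2)
x = np.zeros(20000)
for i in range(1, len(x)):
    x[i] = 0.95*x[i-1] + rng.normal()
C = correlate(x, x, max_lag=5)      # an autocorrelation function, since A = B
print(C/C[0])                       # decays roughly as 0.95**t
\end{verbatim}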
1145   If $A$ and $B$ refer to the same variable, this kind of correlation
1146 < function is called \emph{auto correlation function}. One example of
1147 < auto correlation function is velocity auto-correlation function
1148 < which is directly related to transport properties of molecular
1149 < liquids:
1146 > function is called an \emph{autocorrelation function}. One example
1147 > of an autocorrelation function is the velocity autocorrelation
1148 > function, which is directly related to transport properties of
1149 > molecular liquids:
1150   \[
1151   D = \frac{1}{3}\int\limits_0^\infty  {\left\langle {v(t) \cdot v(0)}
1152   \right\rangle } dt
1153   \]
1154 < where $D$ is diffusion constant. Unlike velocity autocorrelation
1155 < function which is averaging over time origins and over all the
1156 < atoms, dipole autocorrelation are calculated for the entire system.
1157 < The dipole autocorrelation function is given by:
1154 > where $D$ is the diffusion constant. Unlike the velocity autocorrelation
1155 > function, which is averaged over time origins and over all the
1156 > atoms, the dipole autocorrelation functions are calculated for the
1157 > entire system. The dipole autocorrelation function is given by:
1158   \[
1159   c_{dipole}  = \left\langle {u_{tot} (t) \cdot u_{tot} (0)}
1160   \right\rangle
# Line 1175 | Line 1180 | simulator is governed by the rigid body dynamics. In m
1180   areas, from engineering and physics to chemistry. For example,
1181   missiles and vehicles are usually modeled by rigid bodies.  The
1182   movement of the objects in 3D gaming engines or other physics
1183 < simulator is governed by the rigid body dynamics. In molecular
1184 < simulation, rigid body is used to simplify the model in
1185 < protein-protein docking study\cite{Gray2003}.
1183 > simulators is governed by rigid body dynamics. In molecular
1184 > simulations, rigid bodies are used to simplify protein-protein
1185 > docking studies\cite{Gray2003}.
1186  
1187   It is very important to develop stable and efficient methods to
1188 < integrate the equations of motion of orientational degrees of
1189 < freedom. Euler angles are the nature choice to describe the
1190 < rotational degrees of freedom. However, due to its singularity, the
1191 < numerical integration of corresponding equations of motion is very
1192 < inefficient and inaccurate. Although an alternative integrator using
1193 < different sets of Euler angles can overcome this
1194 < difficulty\cite{Barojas1973}, the computational penalty and the lost
1195 < of angular momentum conservation still remain. A singularity free
1196 < representation utilizing quaternions was developed by Evans in
1197 < 1977\cite{Evans1977}. Unfortunately, this approach suffer from the
1198 < nonseparable Hamiltonian resulted from quaternion representation,
1199 < which prevents the symplectic algorithm to be utilized. Another
1200 < different approach is to apply holonomic constraints to the atoms
1201 < belonging to the rigid body. Each atom moves independently under the
1202 < normal forces deriving from potential energy and constraint forces
1203 < which are used to guarantee the rigidness. However, due to their
1204 < iterative nature, SHAKE and Rattle algorithm converge very slowly
1205 < when the number of constraint increases\cite{Ryckaert1977,
1206 < Andersen1983}.
1188 > integrate the equations of motion for orientational degrees of
1189 > freedom. Euler angles are the natural choice to describe the
1190 > rotational degrees of freedom. However, due to $\frac{1}{\sin
1191 > \theta}$ singularities, the numerical integration of the corresponding
1192 > equations of motion is very inefficient and inaccurate. Although an
1193 > alternative integrator using multiple sets of Euler angles can
1194 > overcome this difficulty\cite{Barojas1973}, the computational
1195 > penalty and the loss of angular momentum conservation still remain.
1196 > A singularity-free representation utilizing quaternions was
1197 > developed by Evans in 1977\cite{Evans1977}. Unfortunately, this
1198 > approach uses a nonseparable Hamiltonian resulting from the
1199 > quaternion representation, which prevents symplectic algorithms
1200 > from being utilized. A different approach is to apply holonomic
1201 > constraints to the atoms belonging to the rigid body. Each atom
1202 > moves independently under the normal forces derived from the potential
1203 > energy and constraint forces which are used to guarantee the
1204 > rigidity. However, due to their iterative nature, the SHAKE and
1205 > Rattle algorithms also converge very slowly when the number of
1206 > constraints increases\cite{Ryckaert1977, Andersen1983}.
1207  
1208 < The break through in geometric literature suggests that, in order to
1208 > A break-through in the geometric literature suggests that, in order to
1209   develop a long-term integration scheme, one should preserve the
1210 < symplectic structure of the flow. Introducing conjugate momentum to
1211 < rotation matrix $Q$ and re-formulating Hamiltonian's equation, a
1212 < symplectic integrator, RSHAKE\cite{Kol1997}, was proposed to evolve
1213 < the Hamiltonian system in a constraint manifold by iteratively
1214 < satisfying the orthogonality constraint $Q_T Q = 1$. An alternative
1215 < method using quaternion representation was developed by
1216 < Omelyan\cite{Omelyan1998}. However, both of these methods are
1217 < iterative and inefficient. In this section, we will present a
1210 > symplectic structure of the flow. By introducing a conjugate
1211 > momentum to the rotation matrix $Q$ and re-formulating Hamilton's
1212 > equations, a symplectic integrator, RSHAKE\cite{Kol1997}, was
1213 > proposed to evolve the Hamiltonian system in a constraint manifold
1214 > by iteratively satisfying the orthogonality constraint $Q^T Q = 1$.
1215 > An alternative method using the quaternion representation was
1216 > developed by Omelyan\cite{Omelyan1998}. However, both of these
1217 > methods are iterative and inefficient. In this section, we describe a
1218   symplectic Lie-Poisson integrator for rigid bodies developed by
1219   Dullweber and his coworkers\cite{Dullweber1997} in depth.
1220  
1221 < \subsection{\label{introSection:constrainedHamiltonianRB}Constrained Hamiltonian for Rigid Body}
1222 < The motion of the rigid body is Hamiltonian with the Hamiltonian
1221 > \subsection{\label{introSection:constrainedHamiltonianRB}Constrained Hamiltonian for Rigid Bodies}
1222 > The motion of a rigid body is Hamiltonian with the Hamiltonian
1223   function
1224   \begin{equation}
1225   H = \frac{1}{2}(p^T m^{ - 1} p) + \frac{1}{2}tr(PJ^{ - 1} P) +
# Line 1228 | Line 1233 | constrained Hamiltonian equation subjects to a holonom
1233   I_{ii}^{ - 1}  = \frac{1}{2}\sum\limits_{i \ne j} {J_{jj}^{ - 1} }
1234   \]
1235   where $I_{ii}$ is the diagonal element of the inertia tensor. This
1236 < constrained Hamiltonian equation subjects to a holonomic constraint,
1236 > constrained Hamiltonian equation is subject to a holonomic
1237 > constraint,
1238   \begin{equation}
1239   Q^T Q = 1, \label{introEquation:orthogonalConstraint}
1240   \end{equation}
1241 < which is used to ensure rotation matrix's orthogonality.
1242 < Differentiating \ref{introEquation:orthogonalConstraint} and using
1243 < Equation \ref{introEquation:RBMotionMomentum}, one may obtain,
1241 > which is used to ensure rotation matrix's unitarity. Differentiating
1242 > \ref{introEquation:orthogonalConstraint} and using Equation
1243 > \ref{introEquation:RBMotionMomentum}, one may obtain,
1244   \begin{equation}
1245   Q^T PJ^{ - 1}  + J^{ - 1} P^T Q = 0 . \\
1246   \label{introEquation:RBFirstOrderConstraint}
# Line 1252 | Line 1258 | We can use constraint force provided by lagrange multi
1258   \end{eqnarray}
1259  
1260   In general, there are two ways to satisfy the holonomic constraints.
1261 < We can use constraint force provided by lagrange multiplier on the
1262 < normal manifold to keep the motion on constraint space. Or we can
1263 < simply evolve the system in constraint manifold. These two methods
1264 < are proved to be equivalent. The holonomic constraint and equations
1265 < of motions define a constraint manifold for rigid body
1261 > We can use a constraint force provided by a Lagrange multiplier on
1262 > the normal manifold to keep the motion on the constraint space. Or we
1263 > can simply evolve the system on the constraint manifold. These two
1264 > methods have been proved to be equivalent. The holonomic constraint
1265 > and equations of motion define a constraint manifold for rigid
1266 > bodies
1267   \[
1268   M = \left\{ {(Q,P):Q^T Q = 1,Q^T PJ^{ - 1}  + J^{ - 1} P^T Q = 0}
1269   \right\}.
# Line 1265 | Line 1272 | diffeomorphic. Introducing
1272   Unfortunately, this constraint manifold is not the cotangent bundle
1273   $T^{\star}SO(3)$. However, it turns out that under a symplectic
1274   transformation, the cotangent space and the phase space are
1275 < diffeomorphic. Introducing
1275 > diffeomorphic. By introducing
1276   \[
1277   \tilde Q = Q,\tilde P = \frac{1}{2}\left( {P - QP^T Q} \right),
1278   \]
# Line 1297 | Line 1304 | body, angular momentum on body frame $\Pi  = Q^t P$ is
1304   respectively.
1305  
1306   As a common choice to describe the rotation dynamics of the rigid
1307 < body, angular momentum on body frame $\Pi  = Q^t P$ is introduced to
1308 < rewrite the equations of motion,
1307 > body, the angular momentum in the body-fixed frame $\Pi  = Q^T P$ is
1308 > introduced to rewrite the equations of motion,
1309   \begin{equation}
1310   \begin{array}{l}
1311   \mathop \Pi \limits^ \bullet   = J^{ - 1} \Pi ^T \Pi  + Q^T \sum\limits_i {F_i (q,Q)X_i^T }  - \Lambda  \\
# Line 1341 | Line 1348 | unique property eliminate the requirement of iteration
1348   Since $\Lambda$ is symmetric, the last term of Equation
1349   \ref{introEquation:skewMatrixPI} is zero, which implies the Lagrange
1350   multiplier $\Lambda$ is absent from the equations of motion. This
1351 < unique property eliminate the requirement of iterations which can
1351 > unique property eliminates the requirement of iterations which
1352   cannot be avoided in other methods\cite{Kol1997, Omelyan1998}.
1353  
1354 < Applying hat-map isomorphism, we obtain the equation of motion for
1355 < angular momentum on body frame
1354 > Applying the hat-map isomorphism, we obtain the equation of motion
1355 > for the angular momentum in the body frame
1356   \begin{equation}
1357   \dot \pi  = \pi  \times I^{ - 1} \pi  + \sum\limits_i {\left( {Q^T
1358   F_i (r,Q)} \right) \times X_i }.
# Line 1360 | Line 1367 | If there is not external forces exerted on the rigid b
1367   \subsection{\label{introSection:SymplecticFreeRB}Symplectic
1368   Lie-Poisson Integrator for Free Rigid Body}
1369  
1370 < If there is not external forces exerted on the rigid body, the only
1371 < contribution to the rotational is from the kinetic potential (the
1372 < first term of \ref{introEquation:bodyAngularMotion}). The free rigid
1373 < body is an example of Lie-Poisson system with Hamiltonian function
1370 > If there are no external forces exerted on the rigid body, the only
1371 > contribution to the rotational motion is from the kinetic energy
1372 > (the first term of \ref{introEquation:bodyAngularMotion}). The free
1373 > rigid body is an example of a Lie-Poisson system with Hamiltonian
1374 > function
1375   \begin{equation}
1376   T^r (\pi ) = T_1 ^r (\pi _1 ) + T_2^r (\pi _2 ) + T_3^r (\pi _3 )
1377   \label{introEquation:rotationalKineticRB}
# Line 1410 | Line 1418 | tR_1 }$, we can use Cayley transformation,
1418   \end{array}} \right),\theta _1  = \frac{{\pi _1 }}{{I_1 }}\Delta t.
1419   \]
1420   To reduce the cost of computing expensive functions in $e^{\Delta
1421 < tR_1 }$, we can use Cayley transformation,
1421 > tR_1 }$, we can use the Cayley transformation to obtain a single-axis
1422 > propagator,
1423   \[
1424   e^{\Delta tR_1 }  \approx (1 - \Delta tR_1 )^{ - 1} (1 + \Delta tR_1
1425   )
1426   \]
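
Two properties of this approximation are easy to verify numerically: the
Cayley transform of a skew-symmetric matrix is exactly orthogonal, so the
single-axis propagator remains a rotation, and it agrees with the matrix
exponential for small steps. The Python sketch below is an illustration only;
it uses the common symmetric form $(1 - \Delta t R_1 /2)^{-1}(1 + \Delta t R_1 /2)$,
and conventions for where the factor of $1/2$ is absorbed may differ from the
expression above.
\begin{verbatim}
import numpy as np
from scipy.linalg import expm

def skew_x(theta):
    # Generator of a rotation about the body-fixed x axis by angle theta.
    return np.array([[0.0,   0.0,    0.0],
                     [0.0,   0.0, -theta],
                     [0.0, theta,    0.0]])

dtR1 = skew_x(0.05)                  # a small single-axis step
I3 = np.eye(3)

exact  = expm(dtR1)                                      # e^{dt R_1}
cayley = np.linalg.solve(I3 - 0.5*dtR1, I3 + 0.5*dtR1)   # Cayley approximation

print(np.max(np.abs(exact - cayley)))           # small, O(dt^3) difference
print(np.max(np.abs(cayley.T @ cayley - I3)))   # orthogonal to round-off
\end{verbatim}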
1427   The flow maps for $T_2^r$ and $T_3^r$ can be found in the same
1428 < manner.
1429 <
1421 < In order to construct a second-order symplectic method, we split the
1422 < angular kinetic Hamiltonian function can into five terms
1428 > manner. In order to construct a second-order symplectic method, we
1429 > split the angular kinetic Hamiltonian function into five terms
1430   \[
1431   T^r (\pi ) = \frac{1}{2}T_1 ^r (\pi _1 ) + \frac{1}{2}T_2^r (\pi _2
1432   ) + T_3^r (\pi _3 ) + \frac{1}{2}T_2^r (\pi _2 ) + \frac{1}{2}T_1 ^r
1433 < (\pi _1 )
1434 < \].
1435 < Concatenating flows corresponding to these five terms, we can obtain
1436 < an symplectic integrator,
1433 > (\pi _1 ).
1434 > \]
1435 > By concatenating the propagators corresponding to these five terms,
1436 > we can obtain a symplectic integrator,
1437   \[
1438   \varphi _{\Delta t,T^r }  = \varphi _{\Delta t/2,\pi _1 }  \circ
1439   \varphi _{\Delta t/2,\pi _2 }  \circ \varphi _{\Delta t,\pi _3 }
# Line 1453 | Line 1460 | Lie-Poisson integrator is found to be extremely effici
1460   \]
1461   Thus $ [\nabla _\pi  F(\pi )]^T J(\pi ) =  - S'(\frac{{\parallel \pi
1462   \parallel ^2 }}{2})\pi  \times \pi  = 0 $. This explicit
1463 < Lie-Poisson integrator is found to be extremely efficient and stable
1464 < which can be explained by the fact the small angle approximation is
1465 < used and the norm of the angular momentum is conserved.
1463 > Lie-Poisson integrator is found to be both extremely efficient and
1464 > stable. These properties can be explained by the fact that the small
1465 > angle approximation is used and the norm of the angular momentum is
1466 > conserved.
1467  
1468   \subsection{\label{introSection:RBHamiltonianSplitting} Hamiltonian
1469   Splitting for Rigid Body}
# Line 1482 | Line 1490 | A second-order symplectic method is now obtained by th
1490   \end{tabular}
1491   \end{center}
1492   \end{table}
1493 < A second-order symplectic method is now obtained by the
1494 < composition of the flow maps,
1493 > A second-order symplectic method is now obtained by the composition
1494 > of the position and velocity propagators,
1495   \[
1496   \varphi _{\Delta t}  = \varphi _{\Delta t/2,V}  \circ \varphi
1497   _{\Delta t,T}  \circ \varphi _{\Delta t/2,V}.
1498   \]
1499   Moreover, $\varphi _{\Delta t/2,V}$ can be divided into two
1500 < sub-flows which corresponding to force and torque respectively,
1500 > sub-propagators which correspond to force and torque
1501 > respectively,
1502   \[
1503   \varphi _{\Delta t/2,V}  = \varphi _{\Delta t/2,F}  \circ \varphi
1504   _{\Delta t/2,\tau }.
1505   \]
1506   Since the associated operators of $\varphi _{\Delta t/2,F} $ and
1507 < $\circ \varphi _{\Delta t/2,\tau }$ are commuted, the composition
1508 < order inside $\varphi _{\Delta t/2,V}$ does not matter.
1509 <
1510 < Furthermore, kinetic potential can be separated to translational
1502 < kinetic term, $T^t (p)$, and rotational kinetic term, $T^r (\pi )$,
1507 > $\varphi _{\Delta t/2,\tau }$ commute, the composition order
1508 > inside $\varphi _{\Delta t/2,V}$ does not matter. Furthermore, the
1509 > kinetic energy can be separated into a translational kinetic term, $T^t
1510 > (p)$, and a rotational kinetic term, $T^r (\pi )$,
1511   \begin{equation}
1512   T(p,\pi ) =T^t (p) + T^r (\pi ).
1513   \end{equation}
1514   where $ T^t (p) = \frac{1}{2}p^T m^{ - 1} p $ and $T^r (\pi )$ is
1515   defined by \ref{introEquation:rotationalKineticRB}. Therefore, the
1516 < corresponding flow maps are given by
1516 > corresponding propagators are given by
1517   \[
1518   \varphi _{\Delta t,T}  = \varphi _{\Delta t,T^t }  \circ \varphi
1519   _{\Delta t,T^r }.
1520   \]
1521 < Finally, we obtain the overall symplectic flow maps for free moving
1522 < rigid body
1521 > Finally, we obtain the overall symplectic propagators for freely
1522 > moving rigid bodies
1523   \begin{equation}
1524   \begin{array}{c}
1525   \varphi _{\Delta t}  = \varphi _{\Delta t/2,F}  \circ \varphi _{\Delta t/2,\tau }  \\
# Line 1525 | Line 1533 | the theory of Langevin dynamics simulation. A brief de
1533   As an alternative to Newtonian dynamics, Langevin dynamics, which
1534   mimics a simple heat bath with stochastic and dissipative forces,
1535   has been applied in a variety of studies. This section will review
1536 < the theory of Langevin dynamics simulation. A brief derivation of
1537 < generalized Langevin equation will be given first. Follow that, we
1538 < will discuss the physical meaning of the terms appearing in the
1539 < equation as well as the calculation of friction tensor from
1540 < hydrodynamics theory.
1536 > the theory of Langevin dynamics. A brief derivation of the generalized
1537 > Langevin equation will be given first. Following that, we will
1538 > discuss the physical meaning of the terms appearing in the equation
1539 > as well as the calculation of the friction tensor from hydrodynamic
1540 > theory.
1541  
1542   \subsection{\label{introSection:generalizedLangevinDynamics}Derivation of Generalized Langevin Equation}
1543  
1544 < Harmonic bath model, in which an effective set of harmonic
1544 > A harmonic bath model, in which an effective set of harmonic
1545   oscillators are used to mimic the effect of a linearly responding
1546   environment, has been widely used in quantum chemistry and
1547   statistical mechanics. One of the successful applications of
1548 < Harmonic bath model is the derivation of Deriving Generalized
1549 < Langevin Dynamics. Lets consider a system, in which the degree of
1548 > the harmonic bath model is the derivation of the generalized Langevin
1549 > equation (GLE). Let us consider a system in which the degree of
1550   freedom $x$ is assumed to couple to the bath linearly, giving a
1551   Hamiltonian of the form
1552   \begin{equation}
1553   H = \frac{{p^2 }}{{2m}} + U(x) + H_B  + \Delta U(x,x_1 , \ldots x_N)
1554   \label{introEquation:bathGLE}.
1555   \end{equation}
1556 < Here $p$ is a momentum conjugate to $q$, $m$ is the mass associated
1557 < with this degree of freedom, $H_B$ is harmonic bath Hamiltonian,
1556 > Here $p$ is a momentum conjugate to $x$, $m$ is the mass associated
1557 > with this degree of freedom, $H_B$ is a harmonic bath Hamiltonian,
1558   \[
1559   H_B  = \sum\limits_{\alpha  = 1}^N {\left\{ {\frac{{p_\alpha ^2
1560   }}{{2m_\alpha  }} + \frac{1}{2}m_\alpha  \omega _\alpha ^2 }
# Line 1554 | Line 1562 | the harmonic bath masses, and $\Delta U$ is bilinear s
1562   \]
1563   where the index $\alpha$ runs over all the bath degrees of freedom,
1564   $\omega _\alpha$ are the harmonic bath frequencies, $m_\alpha$ are
1565 < the harmonic bath masses, and $\Delta U$ is bilinear system-bath
1565 > the harmonic bath masses, and $\Delta U$ is a bilinear system-bath
1566   coupling,
1567   \[
1568   \Delta U =  - \sum\limits_{\alpha  = 1}^N {g_\alpha  x_\alpha  x}
1569   \]
1570 < where $g_\alpha$ are the coupling constants between the bath and the
1571 < coordinate $x$. Introducing
1570 > where $g_\alpha$ are the coupling constants between the bath
1571 > coordinates ($x_\alpha$) and the system coordinate ($x$).
1572 > Introducing
1573   \[
1574   W(x) = U(x) - \sum\limits_{\alpha  = 1}^N {\frac{{g_\alpha ^2
1575   }}{{2m_\alpha  w_\alpha ^2 }}} x^2
# Line 1575 | Line 1584 | Generalized Langevin Dynamics by Hamilton's equations
1584   \]
1585   Since the first two terms of the new Hamiltonian depend only on the
1586   system coordinates, we can get the equations of motion for
1587 < Generalized Langevin Dynamics by Hamilton's equations
1579 < \ref{introEquation:motionHamiltonianCoordinate,
1580 < introEquation:motionHamiltonianMomentum},
1587 > Generalized Langevin Dynamics by Hamilton's equations,
1588   \begin{equation}
1589   m\ddot x =  - \frac{{\partial W(x)}}{{\partial x}} -
1590   \sum\limits_{\alpha  = 1}^N {g_\alpha  \left( {x_\alpha   -
# Line 1594 | Line 1601 | differential equations, Laplace transform is the appro
1601   In order to derive an equation for $x$, the dynamics of the bath
1602   variables $x_\alpha$ must be solved exactly first. As an integral
1603   transform which is particularly useful in solving linear ordinary
1604 < differential equations, Laplace transform is the appropriate tool to
1605 < solve this problem. The basic idea is to transform the difficult
1604 > differential equations, the Laplace transform is the appropriate tool
1605 > to solve this problem. The basic idea is to transform the difficult
1606   differential equations into simple algebra problems which can be
1607 < solved easily. Then applying inverse Laplace transform, also known
1608 < as the Bromwich integral, we can retrieve the solutions of the
1607 > solved easily. Then, by applying the inverse Laplace transform, also
1608 > known as the Bromwich integral, we can retrieve the solutions of the
1609   original problems.
1610  
1611   Let $f(t)$ be a function defined on $ [0,\infty ) $. The Laplace
# Line 1618 | Line 1625 | Applying Laplace transform to the bath coordinates, we
1625   \end{eqnarray*}
1626  
1627  
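
The elementary transform pairs used in the manipulations below can be
cross-checked symbolically. The short SymPy sketch that follows is only an
optional verification (with SymPy's $s$ playing the role of the variable $p$
used in the text) of the standard results for $\cos(\omega t)$ and
$\sin(\omega t)$; together with the standard rule for transforming
derivatives, these are all that is needed for the bath coordinates.
\begin{verbatim}
import sympy as sp

t, s = sp.symbols('t s', positive=True)
w = sp.symbols('omega', positive=True)

# L[cos(w t)] = s/(s^2 + w^2) and L[sin(w t)] = w/(s^2 + w^2)
print(sp.laplace_transform(sp.cos(w*t), t, s, noconds=True))
print(sp.laplace_transform(sp.sin(w*t), t, s, noconds=True))
\end{verbatim}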
1628 < Applying Laplace transform to the bath coordinates, we obtain
1628 > Applying the Laplace transform to the bath coordinates, we obtain
1629   \begin{eqnarray*}
1630   p^2 L(x_\alpha  ) - px_\alpha  (0) - \dot x_\alpha  (0) & = & - \omega _\alpha ^2 L(x_\alpha  ) + \frac{{g_\alpha  }}{{\omega _\alpha  }}L(x) \\
1631   L(x_\alpha  ) & = & \frac{{\frac{{g_\alpha  }}{{\omega _\alpha  }}L(x) + px_\alpha  (0) + \dot x_\alpha  (0)}}{{p^2  + \omega _\alpha ^2 }} \\
# Line 1695 | Line 1702 | as the model, which is gaussian distribution in genera
1702   \end{array}
1703   \]
1704   This property is what we expect from a truly random process. As long
1705 < as the model, which is gaussian distribution in general, chosen for
1706 < $R(t)$ is a truly random process, the stochastic nature of the GLE
1700 < still remains.
1705 > as the model chosen for $R(t)$, which is a Gaussian distribution in
1706 > general, is truly random, the stochastic nature of the GLE still remains.
1707  
1708   %dynamic friction kernel
1709   The convolution integral
# Line 1718 | Line 1724 | which can be used to describe dynamic caging effect. T
1724   m\ddot x =  - \frac{\partial }{{\partial x}}\left( {W(x) +
1725   \frac{1}{2}\xi _0 (x - x_0 )^2 } \right) + R(t),
1726   \]
1727 < which can be used to describe dynamic caging effect. The other
1728 < extreme is the bath that responds infinitely quickly to motions in
1729 < the system. Thus, $\xi (t)$ can be taken as a $delta$ function in
1730 < time:
1727 > which can be used to describe the effect of dynamic caging in
1728 > viscous solvents. The other extreme is a bath that responds
1729 > infinitely quickly to motions in the system. Thus, $\xi (t)$ can be
1730 > taken as a $\delta$-function in time:
1731   \[
1732   \xi (t) = 2\xi _0 \delta (t)
1733   \]
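
In this memoryless (Markovian) limit the GLE reduces to the ordinary Langevin
equation, $m\ddot x =  - \partial W/\partial x - \xi _0 \dot x + R(t)$, with
the random force obeying the standard fluctuation-dissipation relation
$\left\langle {R(t)R(t')} \right\rangle  = 2\xi _0 k_B T\delta (t - t')$. The
Python sketch below integrates this limit for a harmonic $W(x)$ with a simple
Euler-Maruyama-type scheme; the potential, reduced units, and integrator are
illustrative assumptions, not the scheme used in this work.
\begin{verbatim}
import numpy as np

# Langevin dynamics in the memoryless limit: m*a = -dW/dx - xi0*v + R(t)
m, xi0, kT, dt = 1.0, 1.0, 1.0, 0.005
W_prime = lambda x: x                 # harmonic potential W(x) = x^2/2
rng = np.random.default_rng(3)

x, v, samples = 0.0, 0.0, []
for step in range(200000):
    R = rng.normal(0.0, np.sqrt(2.0*xi0*kT/dt))    # white-noise random force
    v += dt*(-W_prime(x) - xi0*v + R)/m
    x += dt*v
    if step > 20000:
        samples.append(v)
print(np.var(samples))                # close to kT/m, as equipartition requires
\end{verbatim}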
