root/group/trunk/tengDissertation/Introduction.tex

Comparing trunk/tengDissertation/Introduction.tex (file contents):
Revision 2912 by tim, Fri Jun 30 02:45:29 2006 UTC vs.
Revision 2950 by tim, Tue Jul 18 16:42:49 2006 UTC

# Line 67 | Line 67 | All of these conserved quantities are important factor
67   \begin{equation}E = T + V. \label{introEquation:energyConservation}
68   \end{equation}
69   All of these conserved quantities are important factors to determine
70 < the quality of numerical integration schemes for rigid bodies
71 < \cite{Dullweber1997}.
70 > the quality of numerical integration schemes for rigid
71 > bodies.\cite{Dullweber1997}
72  
73   \subsection{\label{introSection:lagrangian}Lagrangian Mechanics}
74  
# Line 178 | Line 178 | equation of motion. Due to their symmetrical formula,
178   where Eq.~\ref{introEquation:motionHamiltonianCoordinate} and
179   Eq.~\ref{introEquation:motionHamiltonianMomentum} are Hamilton's
180   equation of motion. Due to their symmetrical formula, they are also
181 < known as the canonical equations of motions \cite{Goldstein2001}.
181 > known as the canonical equations of motion.\cite{Goldstein2001}
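As a concrete illustration (a standard textbook example added here for clarity rather than taken from the surrounding text), for a one-dimensional harmonic oscillator with $H(q,p) = p^2 /2m + m\omega ^2 q^2 /2$ the canonical equations give
\[
\dot q = \frac{{\partial H}}{{\partial p}} = \frac{p}{m},\qquad \dot p =  - \frac{{\partial H}}{{\partial q}} =  - m\omega ^2 q,
\]
so each phase point traces an ellipse of constant energy.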
182  
183   An important difference between the Lagrangian approach and the
184   Hamiltonian approach is that the Lagrangian is considered to be a
# Line 188 | Line 188 | coordinate and its time derivative as independent vari
188   Hamiltonian Mechanics is more appropriate for application to
189   statistical mechanics and quantum mechanics, since it treats the
190   coordinate and its conjugate momentum as independent variables and it
191 < only works with 1st-order differential equations\cite{Marion1990}.
191 > only works with 1st-order differential equations.\cite{Marion1990}
192   In Newtonian Mechanics, a system described by conservative forces
193   conserves the total energy
194   (Eq.~\ref{introEquation:energyConservation}). It follows that
# Line 208 | Line 208 | The following section will give a brief introduction t
208   The thermodynamic behaviors and properties of Molecular Dynamics
209   simulation are governed by the principle of Statistical Mechanics.
210   The following section will give a brief introduction to some of the
211 < Statistical Mechanics concepts and theorem presented in this
211 > Statistical Mechanics concepts and theorems presented in this
212   dissertation.
213  
214   \subsection{\label{introSection:ensemble}Phase Space and Ensemble}
# Line 337 | Line 337 | distribution,
337   connected to the Hamiltonian $H$ through the Maxwell-Boltzmann
338   distribution,
339   \begin{equation}
340 < \rho  \propto e^{ - \beta H}
340 > \rho  \propto e^{ - \beta H}.
341   \label{introEquation:densityAndHamiltonian}
342   \end{equation}
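Writing the normalization explicitly (a brief clarifying note, with $\beta = 1/k_B T$ as usual), the distribution reads $\rho (q,p) = e^{ - \beta H(q,p)} /\int {e^{ - \beta H} dqdp}$.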
343  
# Line 349 | Line 349 | inside this region is given by,
349   If this region is small enough, the density $\rho$ can be regarded
350   as uniform over the whole region. Thus, the number of phase points
351   inside this region is given by,
352 < \begin{equation}
353 < \delta N = \rho \delta v = \rho \int { \ldots \int {dq_1 } ...dq_f
354 < dp_1 } ..dp_f.
355 < \end{equation}
356 <
357 < \begin{equation}
358 < \frac{{d(\delta N)}}{{dt}} = \frac{{d\rho }}{{dt}}\delta v + \rho
352 > \begin{eqnarray}
353 > \delta N &=& \rho \delta v = \rho \int { \ldots \int {dq_1 } \ldots dq_f dp_1 } \ldots dp_f ,\\
354 > \frac{{d(\delta N)}}{{dt}} &=& \frac{{d\rho }}{{dt}}\delta v + \rho
355   \frac{d}{{dt}}(\delta v) = 0.
356 < \end{equation}
356 > \end{eqnarray}
357   With the help of the stationary assumption
358   (Eq.~\ref{introEquation:stationary}), we obtain the principle of
359   \emph{conservation of volume in phase space},
# Line 372 | Line 368 | $F$ and $G$ of the coordinates and momenta of a system
368   Liouville's theorem can be expressed in a variety of different forms
369   which are convenient within different contexts. For any two functions
370   $F$ and $G$ of the coordinates and momenta of a system, the Poisson
371 < bracket ${F, G}$ is defined as
371 > bracket $\{F,G\}$ is defined as
372   \begin{equation}
373   \left\{ {F,G} \right\} = \sum\limits_i {\left( {\frac{{\partial
374   F}}{{\partial q_i }}\frac{{\partial G}}{{\partial p_i }} -
# Line 416 | Line 412 | average. It states that the time average and average o
412   many-body system in Statistical Mechanics. Fortunately, the Ergodic
413   Hypothesis makes a connection between time average and the ensemble
414   average. It states that the time average and average over the
415 < statistical ensemble are identical \cite{Frenkel1996, Leach2001}:
415 > statistical ensemble are identical:\cite{Frenkel1996, Leach2001}
416   \begin{equation}
417   \langle A(q , p) \rangle_t = \mathop {\lim }\limits_{t \to \infty }
418   \frac{1}{t}\int\limits_0^t {A(q(t),p(t))dt = \int\limits_\Gamma
# Line 434 | Line 430 | Sec.~\ref{introSection:molecularDynamics} will be the
430   utilized. Or if the system lends itself to a time averaging
431   approach, the Molecular Dynamics techniques in
432   Sec.~\ref{introSection:molecularDynamics} will be the best
433 < choice\cite{Frenkel1996}.
433 > choice.\cite{Frenkel1996}
434  
435   \section{\label{introSection:geometricIntegratos}Geometric Integrators}
436   A variety of numerical integrators have been proposed to simulate
437   the motions of atoms in MD simulation. They usually begin with
438 < initial conditionals and move the objects in the direction governed
439 < by the differential equations. However, most of them ignore the
440 < hidden physical laws contained within the equations. Since 1990,
441 < geometric integrators, which preserve various phase-flow invariants
442 < such as symplectic structure, volume and time reversal symmetry,
443 < were developed to address this issue\cite{Dullweber1997,
444 < McLachlan1998, Leimkuhler1999}. The velocity Verlet method, which
445 < happens to be a simple example of symplectic integrator, continues
446 < to gain popularity in the molecular dynamics community. This fact
447 < can be partly explained by its geometric nature.
438 > initial conditions and move the objects in the direction governed by
439 > the differential equations. However, most of them ignore the hidden
440 > physical laws contained within the equations. Since 1990, geometric
441 > integrators, which preserve various phase-flow invariants such as
442 > symplectic structure, volume and time reversal symmetry, were
443 > developed to address this issue.\cite{Dullweber1997, McLachlan1998,
444 > Leimkuhler1999} The velocity Verlet method, which happens to be a
445 > simple example of a symplectic integrator, continues to gain
446 > popularity in the molecular dynamics community. This fact can be
447 > partly explained by its geometric nature.
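Concretely (a remark added for illustration, using the canonical variables $x = (q,p)$ defined later in this chapter), a one-step map $\psi$ on phase space is symplectic if its Jacobian $M = \partial \psi /\partial x$ satisfies
\[
M^T JM = J,
\]
where $J$ is the constant skew-symmetric matrix that appears in the canonical equations of motion (Sec.~\ref{introSection:ODE}); the velocity Verlet map satisfies this condition exactly.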
448  
449 < \subsection{\label{introSection:symplecticManifold}Symplectic Manifolds}
449 > \subsection{\label{introSection:symplecticManifold}Manifolds and Bundles}
450   A \emph{manifold} is an abstract mathematical space. It looks
451   locally like Euclidean space, but when viewed globally, it may have
452   a more complicated structure. A good example of a manifold is the
453   surface of the Earth. It seems to be flat locally, but it is round if
454   viewed as a whole. A \emph{differentiable manifold} (also known as
455   \emph{smooth manifold}) is a manifold on which it is possible to
456 < apply calculus\cite{Hirsch1997}. A \emph{symplectic manifold} is
456 > apply calculus.\cite{Hirsch1997} A \emph{symplectic manifold} is
457   defined as a pair $(M, \omega)$ which consists of a
458 < \emph{differentiable manifold} $M$ and a close, non-degenerated,
458 > \emph{differentiable manifold} $M$ and a closed, non-degenerate,
459   bilinear symplectic form, $\omega$. A symplectic form on a vector
460   space $V$ is a function $\omega(x, y)$ which satisfies
461   $\omega(\lambda_1x_1+\lambda_2x_2, y) = \lambda_1\omega(x_1, y)+
462   \lambda_2\omega(x_2, y)$, $\omega(x, y) = - \omega(y, x)$ and
463 < $\omega(x, x) = 0$\cite{McDuff1998}. The cross product operation in
464 < vector field is an example of symplectic form. One of the
465 < motivations to study \emph{symplectic manifolds} in Hamiltonian
466 < Mechanics is that a symplectic manifold can represent all possible
467 < configurations of the system and the phase space of the system can
468 < be described by it's cotangent bundle\cite{Jost2002}. Every
469 < symplectic manifold is even dimensional. For instance, in Hamilton
470 < equations, coordinate and momentum always appear in pairs.
463 > $\omega(x, x) = 0$.\cite{McDuff1998} The cross product operation in
464 > a vector field is an example of a symplectic form.
465 > Given vector spaces $V$ and $W$ over the same field $F$, $f: V \to W$ is a linear transformation if
466 > \begin{eqnarray*}
467 > f(x+y) & = & f(x) + f(y) \\
468 > f(ax) & = & af(x)
469 > \end{eqnarray*}
470 > are always satisfied for any two vectors $x$ and $y$ in $V$ and any scalar $a$ in $F$. One can define the dual vector space $V^*$ of $V$ as the set of linear transformations from $V$ to $F$, where any two elements $\phi$ and $\psi$ of $V^*$ obey the following definitions of addition and scalar multiplication:
471 > \begin{eqnarray*}
472 > (\phi+\psi)(x) & = & \phi(x)+\psi(x) \\
473 > (a\phi)(x) & = & a \phi(x)
474 > \end{eqnarray*}
475 > for all $a$ in $F$ and $x$ in $V$. For a manifold $M$, one can define a tangent vector in the tangent space $TM_q$ at every point $q$ by
476 > \begin{equation}
477 > \dot q = \mathop {\lim }\limits_{t \to 0} \frac{{\phi (t) - \phi (0)}}{t}
478 > \end{equation}
479 > where $\phi(0)=q$ and $\phi(t) \in M$. One may also define a cotangent space $T^*M_q$ as the dual space of the tangent space $TM_q$. The tangent space and the cotangent space are isomorphic to each other, since they are both real vector spaces with the same dimension.
480 > The union of the tangent spaces at every point of $M$ is called the tangent bundle of $M$ and is denoted by $TM$, while the cotangent bundle $T^*M$ is defined as the union of the cotangent spaces to $M$.\cite{Jost2002} For a Hamiltonian system with configuration manifold $V$, the $(q,\dot q)$ phase space is the tangent bundle of $V$, while the cotangent bundle is represented by $(q,p)$.
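As a standard example (sketched here for concreteness), on a cotangent bundle with canonical coordinates $(q_1 , \ldots ,q_n ,p_1 , \ldots ,p_n )$ the two-form
\[
\omega  = \sum\limits_{i = 1}^n {dq_i  \wedge dp_i }
\]
is closed, antisymmetric and non-degenerate, so the $(q,p)$ phase space is a symplectic manifold of even dimension $2n$.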
481  
482   \subsection{\label{introSection:ODE}Ordinary Differential Equations}
483  
# Line 479 | Line 485 | For an ordinary differential system defined as
485   \begin{equation}
486   \dot x = f(x)
487   \end{equation}
488 < where $x = x(q,p)^T$, this system is a canonical Hamiltonian, if
488 > where $x = x(q,p)$, this system is a canonical Hamiltonian system if
489   $f(x) = J\nabla _x H(x)$. Here, $H = H (q, p)$ is the Hamiltonian
490   function and $J$ is the skew-symmetric matrix
491   \begin{equation}
# Line 496 | Line 502 | called a \emph{Hamiltonian vector field}. Another gene
502   \label{introEquation:compactHamiltonian}
503   \end{equation}In this case, $f$ is
504   called a \emph{Hamiltonian vector field}. Another generalization of
505 < Hamiltonian dynamics is Poisson Dynamics\cite{Olver1986},
505 > Hamiltonian dynamics is Poisson Dynamics,\cite{Olver1986}
506   \begin{equation}
507   \dot x = J(x)\nabla _x H \label{introEquation:poissonHamiltonian}
508   \end{equation}
509 < The most obvious change being that matrix $J$ now depends on $x$.
509 > where the most obvious change is that the matrix $J$ now depends on
510 > $x$.
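A familiar non-canonical example (sketched here for illustration; it anticipates the rigid body treatment of Sec.~\ref{introSection:rigidBody}) takes $x = \pi$, the body-frame angular momentum, with
\[
J(\pi ) = \left( {\begin{array}{ccc}
   0 & { - \pi _3 } & {\pi _2 }  \\
   {\pi _3 } & 0 & { - \pi _1 }  \\
   { - \pi _2 } & {\pi _1 } & 0  \\
\end{array}} \right),
\]
so that $\dot \pi  = J(\pi )\nabla _\pi  H = \pi  \times \nabla _\pi  H$; choosing $H = \frac{1}{2}\pi ^T I^{ - 1} \pi$ recovers Euler's equations for the free rigid body.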
511  
512   \subsection{\label{introSection:exactFlow}Exact Propagator}
513  
# Line 527 | Line 534 | Therefore, the exact propagator is self-adjoint,
534   \begin{equation}
535   \varphi _\tau   = \varphi _{ - \tau }^{ - 1}.
536   \end{equation}
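For example (a trivial case included for illustration), for a free particle with $H = p^2 /2m$ the exact propagator is simply
\[
\varphi _\tau  (q,p) = \left( {q + \frac{\tau }{m}p,\;p} \right),
\]
and composing it with $\varphi _{ - \tau }$ returns the initial condition, in agreement with the property above.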
530 The exact propagator can also be written in terms of operator,
531 \begin{equation}
532 \varphi _\tau  (x) = e^{\tau \sum\limits_i {f_i (x)\frac{\partial
533 }{{\partial x_i }}} } (x) \equiv \exp (\tau f)(x).
534 \label{introEquation:exponentialOperator}
535 \end{equation}
537   In most cases, it is not easy to find the exact propagator
538   $\varphi_\tau$. Instead, we use an approximate map, $\psi_\tau$,
539   which is usually called an integrator. The order of an integrator
# Line 620 | Line 621 | variational methods can capture the decay of energy
621   Generating functions\cite{Channell1990} tend to lead to methods
622   which are cumbersome and difficult to use. In dissipative systems,
623   variational methods can capture the decay of energy
624 < accurately\cite{Kane2000}. Since they are geometrically unstable
624 > accurately.\cite{Kane2000} Since they are geometrically unstable
625   against non-Hamiltonian perturbations, ordinary implicit Runge-Kutta
626 < methods are not suitable for Hamiltonian system. Recently, various
627 < high-order explicit Runge-Kutta methods \cite{Owren1992,Chen2003}
628 < have been developed to overcome this instability. However, due to
629 < computational penalty involved in implementing the Runge-Kutta
630 < methods, they have not attracted much attention from the Molecular
631 < Dynamics community. Instead, splitting methods have been widely
632 < accepted since they exploit natural decompositions of the
633 < system\cite{Tuckerman1992, McLachlan1998}.
626 > methods are not suitable for Hamiltonian
627 > systems.\cite{Cartwright1992} Recently, various high-order explicit
628 > Runge-Kutta methods \cite{Owren1992,Chen2003} have been developed to
629 > overcome this instability. However, due to the computational penalty
630 > involved in implementing the Runge-Kutta methods, they have not
631 > attracted much attention from the Molecular Dynamics community.
632 > Instead, splitting methods have been widely accepted since they
633 > exploit natural decompositions of the system.\cite{McLachlan1998,
634 > Tuckerman1992}
635  
636   \subsubsection{\label{introSection:splittingMethod}\textbf{Splitting Methods}}
637  
# Line 652 | Line 654 | simple first order expression is then given by the Lie
654   problem. If $H_1$ and $H_2$ can be integrated using exact
655   propagators $\varphi_1(t)$ and $\varphi_2(t)$, respectively, a
656   simple first order expression is then given by the Lie-Trotter
657 < formula
657 > formula\cite{Trotter1959}
658   \begin{equation}
659   \varphi _h  = \varphi _{1,h}  \circ \varphi _{2,h},
660   \label{introEquation:firstOrderSplitting}
# Line 673 | Line 675 | local errors proportional to $h^2$, while the Strang s
675   The Lie-Trotter
676   splitting (Eq.~\ref{introEquation:firstOrderSplitting}) introduces
677   local errors proportional to $h^2$, while the Strang splitting gives
678 < a second-order decomposition,
678 > a second-order decomposition,\cite{Strang1968}
679   \begin{equation}
680   \varphi _h  = \varphi _{1,h/2}  \circ \varphi _{2,h}  \circ \varphi
681   _{1,h/2} , \label{introEquation:secondOrderSplitting}
# Line 729 | Line 731 | the equations of motion would follow:
731  
732   \item Use the half step velocities to move positions one whole step, $\Delta t$.
733  
734 < \item Evaluate the forces at the new positions, $\mathbf{q}(\Delta t)$, and use the new forces to complete the velocity move.
734 > \item Evaluate the forces at the new positions, $q(\Delta t)$, and use the new forces to complete the velocity move.
735  
736   \item Repeat from step 1 with the new position, velocities, and forces assuming the roles of the initial values.
737   \end{enumerate}
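These four steps can be condensed into a short code sketch (an illustrative example, not part of the original description; the function name \texttt{compute\_forces} and the array arguments are placeholders):
\begin{verbatim}
# One velocity Verlet step (illustrative sketch).
# q, v, f: arrays of positions, velocities, forces; m: mass; dt: time step.
def velocity_verlet_step(q, v, f, m, dt, compute_forces):
    v_half = v + 0.5 * dt * f / m            # half-step velocity move
    q_new = q + dt * v_half                  # full-step position move
    f_new = compute_forces(q_new)            # forces at the new positions
    v_new = v_half + 0.5 * dt * f_new / m    # complete the velocity move
    return q_new, v_new, f_new
\end{verbatim}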
# Line 748 | Line 750 | q(\Delta t)} \right]. %
750  
751   \subsubsection{\label{introSection:errorAnalysis}\textbf{Error Analysis and Higher Order Methods}}
752  
753 < The Baker-Campbell-Hausdorff formula can be used to determine the
754 < local error of a splitting method in terms of the commutator of the
755 < operators(Eq.~\ref{introEquation:exponentialOperator}) associated with
756 < the sub-propagator. For operators $hX$ and $hY$ which are associated
757 < with $\varphi_1(t)$ and $\varphi_2(t)$ respectively , we have
753 > The Baker-Campbell-Hausdorff formula\cite{Gilmore1974} can be used
754 > to determine the local error of a splitting method in terms of the
755 > commutator of the
756 > operators associated
757 > with the sub-propagator. For operators $hX$ and $hY$ which are
757 > associated with $\varphi_1(t)$ and $\varphi_2(t)$, respectively, we
759 > have
760   \begin{equation}
761   \exp (hX + hY) = \exp (hZ)
762   \end{equation}
# Line 782 | Line 786 | order methods. Yoshida proposed an elegant way to comp
786   \end{equation}
787   A careful choice of the coefficients $a_1 \ldots b_m$ will lead to higher
788   order methods. Yoshida proposed an elegant way to compose higher
789 < order methods based on symmetric splitting\cite{Yoshida1990}. Given
789 > order methods based on symmetric splitting.\cite{Yoshida1990} Given
790   a symmetric second order base method $ \varphi _h^{(2)} $, a
791   fourth-order symmetric method can be constructed by composing,
792   \[
# Line 868 | Line 872 | surface and to locate the local minimum. While converg
872   minimization to find a more reasonable conformation. Several energy
873   minimization methods have been developed to exploit the energy
874   surface and to locate the local minimum. While converging slowly
875 < near the minimum, steepest descent method is extremely robust when
875 > near the minimum, the steepest descent method is extremely robust when
876   systems are strongly anharmonic. Thus, it is often used to refine
877   structures from crystallographic data. Relying on the Hessian,
878   advanced methods like Newton-Raphson converge rapidly to a local
# Line 887 | Line 891 | end up setting the temperature of the system to a fina
891   temperature. Beginning at a lower temperature and gradually
892   increasing the temperature by assigning larger random velocities, we
893   end up setting the temperature of the system to a final temperature
894 < at which the simulation will be conducted. In heating phase, we
894 > at which the simulation will be conducted. In the heating phase, we
895   should also keep the system from drifting or rotating as a whole. To
896   do this, the net linear momentum and angular momentum of the system
897   are shifted to zero after each resampling from the Maxwell-Boltzmann
# Line 943 | Line 947 | evaluation is to apply spherical cutoffs where particl
947   %cutoff and minimum image convention
948   Another important technique to improve the efficiency of force
949   evaluation is to apply spherical cutoffs where particles farther
950 < than a predetermined distance are not included in the calculation
951 < \cite{Frenkel1996}. The use of a cutoff radius will cause a
952 < discontinuity in the potential energy curve. Fortunately, one can
950 > than a predetermined distance are not included in the
951 > calculation.\cite{Frenkel1996} The use of a cutoff radius will cause
952 > a discontinuity in the potential energy curve. Fortunately, one can
953   shift a simple radial potential to ensure the potential curve goes
954   smoothly to zero at the cutoff radius. The cutoff strategy works
955   well for Lennard-Jones interaction because of its short range
# Line 954 | Line 958 | with rapid and absolute convergence, has proved to min
958   in simulations. The Ewald summation, in which the slowly decaying
959   Coulomb potential is transformed into direct and reciprocal sums
960   with rapid and absolute convergence, has proved to minimize the
961 < periodicity artifacts in liquid simulations. Taking the advantages
962 < of the fast Fourier transform (FFT) for calculating discrete Fourier
963 < transforms, the particle mesh-based
961 > periodicity artifacts in liquid simulations. Taking advantage of
962 > fast Fourier transform (FFT) techniques for calculating discrete
963 > Fourier transforms, the particle mesh-based
964   methods\cite{Hockney1981,Shimada1993, Luty1994} are accelerated from
965   $O(N^{3/2})$ to $O(N logN)$. An alternative approach is the
966   \emph{fast multipole method}\cite{Greengard1987, Greengard1994},
# Line 966 | Line 970 | charge-neutralized Coulomb potential method developed
970   simulation community, these two methods are difficult to implement
971   correctly and efficiently. Instead, we use a damped and
972   charge-neutralized Coulomb potential method developed by Wolf and
973 < his coworkers\cite{Wolf1999}. The shifted Coulomb potential for
973 > his coworkers.\cite{Wolf1999} The shifted Coulomb potential for
974   particle $i$ and particle $j$ at distance $r_{ij}$ is given by:
975   \begin{equation}
976   V(r_{ij})= \frac{q_i q_j \textrm{erfc}(\alpha
# Line 985 | Line 989 | illustration of shifted Coulomb potential.}
989   \label{introFigure:shiftedCoulomb}
990   \end{figure}
991  
988 %multiple time step
989
992   \subsection{\label{introSection:Analysis} Analysis}
993  
994 < Recently, advanced visualization techniques have been applied to
993 < monitor the motions of molecules. Although the dynamics of the
994 < system can be described qualitatively from animation, quantitative
995 < trajectory analysis is more useful. According to the principles of
994 > According to the principles of
995   Statistical Mechanics in
996   Sec.~\ref{introSection:statisticalMechanics}, one can compute
997   thermodynamic properties, analyze fluctuations of structural
# Line 1029 | Line 1028 | Fourier transforming raw data from a series of neutron
1028   function}, is of most fundamental importance to liquid theory.
1029   Experimentally, pair distribution functions can be gathered by
1030   Fourier transforming raw data from a series of neutron diffraction
1031 < experiments and integrating over the surface factor
1032 < \cite{Powles1973}. The experimental results can serve as a criterion
1033 < to justify the correctness of a liquid model. Moreover, various
1034 < equilibrium thermodynamic and structural properties can also be
1035 < expressed in terms of the radial distribution function
1036 < \cite{Allen1987}. The pair distribution functions $g(r)$ gives the
1037 < probability that a particle $i$ will be located at a distance $r$
1038 < from a another particle $j$ in the system
1031 > experiments and integrating over the structure
1032 > factor.\cite{Powles1973} The experimental results can serve as a
1033 > criterion to justify the correctness of a liquid model. Moreover,
1034 > various equilibrium thermodynamic and structural properties can also
1035 > be expressed in terms of the radial distribution
1036 > function.\cite{Allen1987} The pair distribution function $g(r)$
1037 > gives the probability that a particle $i$ will be located at a
1038 > distance $r$ from another particle $j$ in the system
1039   \begin{equation}
1040   g(r) = \frac{V}{{N^2 }}\left\langle {\sum\limits_i {\sum\limits_{j
1041   \ne i} {\delta (r - r_{ij} )} } } \right\rangle = \frac{\rho
# Line 1059 | Line 1058 | If $A$ and $B$ refer to same variable, this kind of co
1058   \label{introEquation:timeCorrelationFunction}
1059   \end{equation}
1060   If $A$ and $B$ refer to the same variable, this kind of correlation
1061 < functions are called \emph{autocorrelation functions}. One example
1063 < of auto correlation function is the velocity auto-correlation
1061 > function is called an \emph{autocorrelation function}. One typical example is the velocity autocorrelation
1062   function which is directly related to transport properties of
1063   molecular liquids:
1064 < \[
1064 > \begin{equation}
1065   D = \frac{1}{3}\int\limits_0^\infty  {\left\langle {v(t) \cdot v(0)}
1066   \right\rangle } dt
1067 < \]
1067 > \end{equation}
1068   where $D$ is the diffusion constant. Unlike the velocity autocorrelation
1069   function, which is averaged over time origins and over all the
1070   atoms, the dipole autocorrelation function is calculated for the
1071   entire system. The dipole autocorrelation function is given by:
1072 < \[
1072 > \begin{equation}
1073   c_{dipole} (t) = \left\langle {u_{tot} (t) \cdot u_{tot} (0)}
1074   \right\rangle
1075 < \]
1075 > \end{equation}
1076   Here $u_{tot}$ is the net dipole of the entire system and is given
1077   by
1078 < \[
1078 > \begin{equation}
1079   u_{tot} (t) = \sum\limits_i {u_i (t)}.
1080 < \]
1080 > \end{equation}
1081   In principle, many time correlation functions can be related to
1082   Fourier transforms of the infrared, Raman, and inelastic neutron
1083   scattering spectra of molecular liquids. In practice, one can
1084   extract the IR spectrum from the intensity of the molecular dipole
1085   fluctuation at each frequency using the following relationship:
1086 < \[
1086 > \begin{equation}
1087   \hat c_{dipole} (v) = \int_{ - \infty }^\infty  {c_{dipole} (t)e^{ -
1088   i2\pi vt} dt}.
1089 < \]
1089 > \end{equation}
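In practice (an illustrative sketch; the trajectory array \texttt{velocities} of shape \texttt{(n\_frames, n\_atoms, 3)} and the sampling interval \texttt{dt} are assumptions made only for this example), such correlation functions can be estimated directly from stored trajectories:
\begin{verbatim}
import numpy as np

def velocity_autocorrelation(velocities, max_lag):
    # velocities: shape (n_frames, n_atoms, 3); assumes max_lag < n_frames
    n_frames = velocities.shape[0]
    cvv = np.zeros(max_lag)
    for lag in range(max_lag):
        dots = np.sum(velocities[:n_frames - lag] * velocities[lag:], axis=2)
        cvv[lag] = dots.mean()   # average over time origins and over atoms
    return cvv

def diffusion_constant(cvv, dt):
    # D = (1/3) * integral of <v(t).v(0)> dt, via the trapezoidal rule
    return np.trapz(cvv, dx=dt) / 3.0
\end{verbatim}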
1090  
1091   \section{\label{introSection:rigidBody}Dynamics of Rigid Bodies}
1092  
1093   Rigid bodies are frequently involved in the modeling of different
1094 < areas, from engineering, physics, to chemistry. For example,
1094 > areas, including engineering, physics and chemistry. For example,
1095   missiles and vehicles are usually modeled by rigid bodies.  The
1096   movement of the objects in 3D gaming engines or other physics
1097   simulators is governed by rigid body dynamics. In molecular
1098   simulations, rigid bodies are used to simplify protein-protein
1099 < docking studies\cite{Gray2003}.
1099 > docking studies.\cite{Gray2003}
1100  
1101   It is very important to develop stable and efficient methods to
1102   integrate the equations of motion for orientational degrees of
# Line 1110 | Line 1108 | still remain. A singularity-free representation utiliz
1108   angles can overcome this difficulty\cite{Barojas1973}, the
1109   computational penalty and the loss of angular momentum conservation
1110   still remain. A singularity-free representation utilizing
1111 < quaternions was developed by Evans in 1977\cite{Evans1977}.
1111 > quaternions was developed by Evans in 1977.\cite{Evans1977}
1112   Unfortunately, this approach used a nonseparable Hamiltonian
1113   resulting from the quaternion representation, which prevented the
1114   symplectic algorithm from being utilized. Another different approach
# Line 1119 | Line 1117 | the SHAKE and Rattle algorithms also converge very slo
1117   deriving from potential energy and constraint forces which are used
1118   to guarantee the rigidness. However, due to their iterative nature,
1119   the SHAKE and Rattle algorithms also converge very slowly when the
1120 < number of constraints increases\cite{Ryckaert1977, Andersen1983}.
1120 > number of constraints increases.\cite{Ryckaert1977, Andersen1983}
1121  
1122   A break-through in geometric literature suggests that, in order to
1123   develop a long-term integration scheme, one should preserve the
# Line 1129 | Line 1127 | An alternative method using the quaternion representat
1127   proposed to evolve the Hamiltonian system in a constraint manifold
1128   by iteratively satisfying the orthogonality constraint $Q^T Q = 1$.
1129   An alternative method using the quaternion representation was
1130 < developed by Omelyan\cite{Omelyan1998}. However, both of these
1130 > developed by Omelyan.\cite{Omelyan1998} However, both of these
1131   methods are iterative and inefficient. In this section, we describe a
1132   symplectic Lie-Poisson integrator for rigid bodies developed by
1133   Dullweber and his coworkers\cite{Dullweber1997} in depth.
1134  
1135   \subsection{\label{introSection:constrainedHamiltonianRB}Constrained Hamiltonian for Rigid Bodies}
1136 < The motion of a rigid body is Hamiltonian with the Hamiltonian
1139 < function
1136 > The Hamiltonian of a rigid body is given by
1137   \begin{equation}
1138   H = \frac{1}{2}(p^T m^{ - 1} p) + \frac{1}{2}tr(PJ^{ - 1} P) +
1139   V(q,Q) + \frac{1}{2}tr[(QQ^T  - 1)\Lambda ].
# Line 1195 | Line 1192 | For a body fixed vector $X_i$ with respect to the cent
1192   1,\tilde Q^T \tilde PJ^{ - 1}  + J^{ - 1} P^T \tilde Q = 0} \right\}
1193   \]
1194   For a body fixed vector $X_i$ with respect to the center of mass of
1195 < the rigid body, its corresponding lab fixed vector $X_0^{lab}$  is
1195 > the rigid body, its corresponding lab fixed vector $X_i^{lab}$  is
1196   given as
1197   \begin{equation}
1198   X_i^{lab} = Q X_i + q.
# Line 1250 | Line 1247 | motion. This unique property eliminates the requiremen
1247   Eq.~\ref{introEquation:skewMatrixPI} is zero, which implies the
1248   Lagrange multiplier $\Lambda$ is absent from the equations of
1249   motion. This unique property eliminates the requirement of
1250 < iterations which can not be avoided in other methods\cite{Kol1997,
1251 < Omelyan1998}. Applying the hat-map isomorphism, we obtain the
1252 < equation of motion for angular momentum in the body frame
1250 > iterations which cannot be avoided in other methods.\cite{Kol1997,
1251 > Omelyan1998} Applying the hat-map isomorphism, we obtain the
1252 > equation of motion for angular momentum
1253   \begin{equation}
1254   \dot \pi  = \pi  \times I^{ - 1} \pi  + \sum\limits_i {\left( {Q^T
1255   F_i (r,Q)} \right) \times X_i }.
# Line 1348 | Line 1345 | _1 }.
1345   \circ \varphi _{\Delta t/2,\pi _2 }  \circ \varphi _{\Delta t/2,\pi
1346   _1 }.
1347   \]
1348 < The non-canonical Lie-Poisson bracket ${F, G}$ of two function
1352 < $F(\pi )$ and $G(\pi )$ is defined by
1348 > The non-canonical Lie-Poisson bracket $\{F, G\}$ of two functions $F(\pi )$ and $G(\pi )$ is defined by
1349   \[
1350   \{ F,G\} (\pi ) = [\nabla _\pi  F(\pi )]^T J(\pi )\nabla _\pi  G(\pi
1351   ).
# Line 1358 | Line 1354 | norm of the angular momentum, $\parallel \pi
1354   function $G$ is zero, $F$ is a \emph{Casimir}, which is the
1355   conserved quantity of a Poisson system. We can easily verify that the
1356   norm of the angular momentum, $\parallel \pi
1357 < \parallel$, is a \emph{Casimir}\cite{McLachlan1993}. Let$ F(\pi ) = S(\frac{{\parallel
1357 > \parallel$, is a \emph{Casimir}.\cite{McLachlan1993} Let $F(\pi ) = S(\frac{{\parallel
1358   \pi \parallel ^2 }}{2})$ for an arbitrary function $ S:R \to R$ ,
1359   then by the chain rule
1360   \[
# Line 1379 | Line 1375 | of motion corresponding to potential energy and kineti
1375   The Hamiltonian of a rigid body can be separated in terms of kinetic
1376   energy and potential energy, $H = T(p,\pi ) + V(q,Q)$. The equations
1377   of motion corresponding to potential energy and kinetic energy are
1378 < listed in Table~\ref{introTable:rbEquations}
1378 > listed in Table~\ref{introTable:rbEquations}.
1379   \begin{table}
1380   \caption{EQUATIONS OF MOTION DUE TO POTENTIAL AND KINETIC ENERGIES}
1381   \label{introTable:rbEquations}
# Line 1437 | Line 1433 | has been applied in a variety of studies. This section
1433   As an alternative to Newtonian dynamics, Langevin dynamics, which
1434   mimics a simple heat bath with stochastic and dissipative forces,
1435   has been applied in a variety of studies. This section will review
1436 < the theory of Langevin dynamics. A brief derivation of generalized
1436 > the theory of Langevin dynamics. A brief derivation of the generalized
1437   Langevin equation will be given first. Following that, we will
1438 < discuss the physical meaning of the terms appearing in the equation
1443 < as well as the calculation of friction tensor from hydrodynamics
1444 < theory.
1438 > discuss the physical meaning of the terms appearing in the equation.
1439  
1440   \subsection{\label{introSection:generalizedLangevinDynamics}Derivation of Generalized Langevin Equation}
1441  
# Line 1450 | Line 1444 | Harmonic bath model is the derivation of the Generaliz
1444   environment, has been widely used in quantum chemistry and
1445   statistical mechanics. One of the successful applications of
1446   the Harmonic bath model is the derivation of the Generalized Langevin
1447 < Dynamics (GLE). Lets consider a system, in which the degree of
1447 > Equation (GLE). Consider a system in which the degree of
1448   freedom $x$ is assumed to couple to the bath linearly, giving a
1449   Hamiltonian of the form
1450   \begin{equation}
# Line 1461 | Line 1455 | H_B  = \sum\limits_{\alpha  = 1}^N {\left\{ {\frac{{p_
1455   with this degree of freedom, $H_B$ is a harmonic bath Hamiltonian,
1456   \[
1457   H_B  = \sum\limits_{\alpha  = 1}^N {\left\{ {\frac{{p_\alpha ^2
1458 < }}{{2m_\alpha  }} + \frac{1}{2}m_\alpha  \omega _\alpha ^2 }
1458 > }}{{2m_\alpha  }} + \frac{1}{2}m_\alpha  \omega _\alpha ^2 x_\alpha ^2 }
1459   \right\}}
1460   \]
1461   where the index $\alpha$ runs over all the bath degrees of freedom,
# Line 1514 | Line 1508 | where  $p$ is real and  $L$ is called the Laplace Tran
1508   L(f(t)) \equiv F(p) = \int_0^\infty  {f(t)e^{ - pt} dt}
1509   \]
1510   where  $p$ is real and  $L$ is called the Laplace Transform
1511 < Operator. Below are some important properties of Laplace transform
1511 > Operator. Below are some important properties of the Laplace transform
1512   \begin{eqnarray*}
1513   L(x + y)  & = & L(x) + L(y) \\
1514   L(ax)     & = & aL(x) \\
# Line 1583 | Line 1577 | m\ddot x =  - \frac{{\partial W}}{{\partial x}} - \int
1577   (\tau )\dot x(t - \tau )d\tau }  + R(t)
1578   \label{introEuqation:GeneralizedLangevinDynamics}
1579   \end{equation}
1580 < which is known as the \emph{generalized Langevin equation}.
1580 > which is known as the \emph{generalized Langevin equation} (GLE).
1581  
1582   \subsubsection{\label{introSection:randomForceDynamicFrictionKernel}\textbf{Random Force and Dynamic Friction Kernel}}
1583  
1584   One may notice that $R(t)$ depends only on initial conditions, which
1585   implies it is completely deterministic within the context of a
1586   harmonic bath. However, it is easy to verify that $R(t)$ is totally
1587 < uncorrelated to $x$ and $\dot x$,$\left\langle {x(t)R(t)}
1587 > uncorrelated to $x$ and $\dot x$, $\left\langle {x(t)R(t)}
1588   \right\rangle  = 0, \left\langle {\dot x(t)R(t)} \right\rangle  =
1589   0.$ This property is what we expect from a truly random process. As
1590   long as the model chosen for $R(t)$ was a Gaussian distribution in
# Line 1619 | Line 1613 | taken as a $delta$ function in time:
1613   infinitely quickly to motions in the system. Thus, $\xi (t)$ can be
1614   taken as a $\delta$ function in time:
1615   \[
1616 < \xi (t) = 2\xi _0 \delta (t)
1616 > \xi (t) = 2\xi _0 \delta (t).
1617   \]
1618   Hence, the convolution integral becomes
1619   \[
1620   \int_0^t {\xi (\tau )\dot x(t - \tau )d\tau }  = 2\xi _0 \int_0^t
1621   {\delta (\tau )\dot x(t - \tau )d\tau }  = \xi _0 \dot x(t),
1622   \]
1623 < and Eq.~\ref{introEuqation:GeneralizedLangevinDynamics} becomes
1623 > and Eq.~\ref{introEuqation:GeneralizedLangevinDynamics} becomes the
1624 > Langevin equation
1625   \begin{equation}
1626   m\ddot x =  - \frac{{\partial W(x)}}{{\partial x}} - \xi _0 \dot
1627 < x(t) + R(t) \label{introEquation:LangevinEquation}
1627 > x(t) + R(t) \label{introEquation:LangevinEquation}.
1628   \end{equation}
1629 < which is known as the Langevin equation. The static friction
1630 < coefficient $\xi _0$ can either be calculated from spectral density
1631 < or be determined by Stokes' law for regular shaped particles. A
1632 < brief review on calculating friction tensors for arbitrary shaped
1633 < particles is given in Sec.~\ref{introSection:frictionTensor}.
1629 > The static friction coefficient $\xi _0$ can either be calculated
1630 > from spectral density or be determined by Stokes' law for regular
1631 > shaped particles. A brief review on calculating friction tensors for
1632 > arbitrary shaped particles is given in
1633 > Sec.~\ref{introSection:frictionTensor}.
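For a spherical particle of radius $a$ moving in a solvent of shear viscosity $\eta$, for instance, Stokes' law with stick boundary conditions gives the familiar estimate (a standard result quoted here for illustration)
\[
\xi _0  = 6\pi \eta a .
\]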
1634  
1635   \subsubsection{\label{introSection:secondFluctuationDissipation}\textbf{The Second Fluctuation Dissipation Theorem}}
1636  
# Line 1644 | Line 1639 | q_\alpha  (t) = x_\alpha  (t) - \frac{1}{{m_\alpha  \o
1639   q_\alpha  (t) = x_\alpha  (t) - \frac{1}{{m_\alpha  \omega _\alpha
1640   ^2 }}x(0),
1641   \]
1642 < we can rewrite $R(T)$ as
1642 > we can rewrite $R(t)$ as
1643   \[
1644   R(t) = \sum\limits_{\alpha  = 1}^N {g_\alpha  q_\alpha  (t)}.
1645   \]
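Substituting this expression into the correlation function of the random force leads to the standard statement of the second fluctuation dissipation theorem (quoted here for completeness; the remaining algebra follows the harmonic bath manipulations above),
\[
\left\langle {R(t)R(t')} \right\rangle  = k_B T\xi (t - t').
\]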

Diff Legend

Removed lines
+ Added lines
< Changed lines
> Changed lines