root/group/trunk/tengDissertation/Introduction.tex

Comparing trunk/tengDissertation/Introduction.tex (file contents):
Revision 2912 by tim, Fri Jun 30 02:45:29 2006 UTC vs.
Revision 2941 by tim, Mon Jul 17 20:01:05 2006 UTC

# Line 67 | Line 67 | All of these conserved quantities are important factor
67   \begin{equation}E = T + V. \label{introEquation:energyConservation}
68   \end{equation}
69   All of these conserved quantities are important factors in determining
70 < the quality of numerical integration schemes for rigid bodies
71 < \cite{Dullweber1997}.
70 > the quality of numerical integration schemes for rigid
71 > bodies.\cite{Dullweber1997}
72  
73   \subsection{\label{introSection:lagrangian}Lagrangian Mechanics}
74  
# Line 178 | Line 178 | equation of motion. Due to their symmetrical formula,
178   where Eq.~\ref{introEquation:motionHamiltonianCoordinate} and
179   Eq.~\ref{introEquation:motionHamiltonianMomentum} are Hamilton's
180   equation of motion. Due to their symmetrical formula, they are also
181 < known as the canonical equations of motions \cite{Goldstein2001}.
181 > known as the canonical equations of motions.\cite{Goldstein2001}
182  
183   An important difference between Lagrangian approach and the
184   Hamiltonian approach is that the Lagrangian is considered to be a
# Line 188 | Line 188 | coordinate and its time derivative as independent vari
188   Hamiltonian Mechanics is more appropriate for application to
189   statistical mechanics and quantum mechanics, since it treats the
190   coordinate and its conjugate momentum as independent variables and it
191 < only works with 1st-order differential equations\cite{Marion1990}.
191 > only works with first-order differential equations.\cite{Marion1990}
192   In Newtonian Mechanics, a system described by conservative forces
193   conserves the total energy
194   (Eq.~\ref{introEquation:energyConservation}). It follows that
# Line 208 | Line 208 | The following section will give a brief introduction t
208   The thermodynamic behaviors and properties of Molecular Dynamics
209   simulations are governed by the principles of Statistical Mechanics.
210   The following section will give a brief introduction to some of the
211 < Statistical Mechanics concepts and theorem presented in this
211 > Statistical Mechanics concepts and theorems presented in this
212   dissertation.
213  
214   \subsection{\label{introSection:ensemble}Phase Space and Ensemble}
# Line 372 | Line 372 | $F$ and $G$ of the coordinates and momenta of a system
372   Liouville's theorem can be expressed in a variety of different forms
373   which are convenient within different contexts. For any two functions
374   $F$ and $G$ of the coordinates and momenta of a system, the Poisson
375 < bracket ${F, G}$ is defined as
375 > bracket $\{F,G\}$ is defined as
376   \begin{equation}
377   \left\{ {F,G} \right\} = \sum\limits_i {\left( {\frac{{\partial
378   F}}{{\partial q_i }}\frac{{\partial G}}{{\partial p_i }} -
# Line 416 | Line 416 | average. It states that the time average and average o
416   many-body system in Statistical Mechanics. Fortunately, the Ergodic
417   Hypothesis makes a connection between the time average and the ensemble
418   average. It states that the time average and average over the
419 < statistical ensemble are identical \cite{Frenkel1996, Leach2001}:
419 > statistical ensemble are identical:\cite{Frenkel1996, Leach2001}
420   \begin{equation}
421   \langle A(q , p) \rangle_t = \mathop {\lim }\limits_{t \to \infty }
422   \frac{1}{t}\int\limits_0^t {A(q(t),p(t))dt = \int\limits_\Gamma
# Line 434 | Line 434 | Sec.~\ref{introSection:molecularDynamics} will be the
434   utilized. Or if the system lends itself to a time averaging
435   approach, the Molecular Dynamics techniques in
436   Sec.~\ref{introSection:molecularDynamics} will be the best
437 < choice\cite{Frenkel1996}.
437 > choice.\cite{Frenkel1996}
438  
439   \section{\label{introSection:geometricIntegratos}Geometric Integrators}
440   A variety of numerical integrators have been proposed to simulate
441   the motions of atoms in MD simulation. They usually begin with
442 < initial conditionals and move the objects in the direction governed
443 < by the differential equations. However, most of them ignore the
444 < hidden physical laws contained within the equations. Since 1990,
445 < geometric integrators, which preserve various phase-flow invariants
446 < such as symplectic structure, volume and time reversal symmetry,
447 < were developed to address this issue\cite{Dullweber1997,
448 < McLachlan1998, Leimkuhler1999}. The velocity Verlet method, which
449 < happens to be a simple example of symplectic integrator, continues
450 < to gain popularity in the molecular dynamics community. This fact
451 < can be partly explained by its geometric nature.
442 > initial conditions and move the objects in the direction governed by
443 > the differential equations. However, most of them ignore the hidden
444 > physical laws contained within the equations. Since 1990, geometric
445 > integrators, which preserve various phase-flow invariants such as
446 > symplectic structure, volume and time reversal symmetry, were
447 > developed to address this issue.\cite{Dullweber1997, McLachlan1998,
448 > Leimkuhler1999} The velocity Verlet method, which happens to be a
449 > simple example of a symplectic integrator, continues to gain
450 > popularity in the molecular dynamics community. This fact can be
451 > partly explained by its geometric nature.
452  
453   \subsection{\label{introSection:symplecticManifold}Symplectic Manifolds}
454   A \emph{manifold} is an abstract mathematical space. It looks
# Line 457 | Line 457 | viewed as a whole. A \emph{differentiable manifold} (a
457   surface of Earth. It seems to be flat locally, but it is round if
458   viewed as a whole. A \emph{differentiable manifold} (also known as
459   \emph{smooth manifold}) is a manifold on which it is possible to
460 < apply calculus\cite{Hirsch1997}. A \emph{symplectic manifold} is
460 > apply calculus.\cite{Hirsch1997} A \emph{symplectic manifold} is
461   defined as a pair $(M, \omega)$ which consists of a
462 < \emph{differentiable manifold} $M$ and a close, non-degenerated,
462 > \emph{differentiable manifold} $M$ and a closed, non-degenerate,
463   bilinear symplectic form, $\omega$. A symplectic form on a vector
464   space $V$ is a function $\omega(x, y)$ which satisfies
465   $\omega(\lambda_1x_1+\lambda_2x_2, y) = \lambda_1\omega(x_1, y)+
466   \lambda_2\omega(x_2, y)$, $\omega(x, y) = - \omega(y, x)$ and
467 < $\omega(x, x) = 0$\cite{McDuff1998}. The cross product operation in
467 > $\omega(x, x) = 0$.\cite{McDuff1998} The cross product operation in
468   a vector field is an example of a symplectic form. One of the
469   motivations to study \emph{symplectic manifolds} in Hamiltonian
470   Mechanics is that a symplectic manifold can represent all possible
471   configurations of the system and the phase space of the system can
472 < be described by it's cotangent bundle\cite{Jost2002}. Every
472 > be described by its cotangent bundle.\cite{Jost2002} Every
473   symplectic manifold is even-dimensional. For instance, in Hamilton's
474   equations, coordinates and momenta always appear in pairs.
475  
# Line 479 | Line 479 | For an ordinary differential system defined as
479   \begin{equation}
480   \dot x = f(x)
481   \end{equation}
482 < where $x = x(q,p)^T$, this system is a canonical Hamiltonian, if
482 > where $x = (q,p)$, this system is a canonical Hamiltonian system if
483   $f(x) = J\nabla _x H(x)$. Here, $H = H(q, p)$ is the Hamiltonian
484   function and $J$ is the skew-symmetric matrix
485   \begin{equation}
# Line 496 | Line 496 | called a \emph{Hamiltonian vector field}. Another gene
496   \label{introEquation:compactHamiltonian}
497   \end{equation}In this case, $f$ is
498   called a \emph{Hamiltonian vector field}. Another generalization of
499 < Hamiltonian dynamics is Poisson Dynamics\cite{Olver1986},
499 > Hamiltonian dynamics is Poisson Dynamics,\cite{Olver1986}
500   \begin{equation}
501   \dot x = J(x)\nabla _x H \label{introEquation:poissonHamiltonian}
502   \end{equation}
503 < The most obvious change being that matrix $J$ now depends on $x$.
503 > where the most obvious change is that the matrix $J$ now depends on
504 > $x$.
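As a minimal numerical check of the canonical case (constant $J$), the structure $f(x) = J\nabla_x H(x)$ can be evaluated directly; this is a sketch, and the harmonic oscillator Hamiltonian $H = (q^2 + p^2)/2$ below is purely a hypothetical example:

```python
import numpy as np

# Skew-symmetric structure matrix J for one degree of freedom, x = (q, p)
J = np.array([[0.0, 1.0],
              [-1.0, 0.0]])

def grad_H(x):
    """Gradient of the example Hamiltonian H(q, p) = (q^2 + p^2)/2."""
    return x.copy()

def f(x):
    """Hamiltonian vector field f(x) = J grad_H(x)."""
    return J @ grad_H(x)

# For this H, Hamilton's equations give (dq/dt, dp/dt) = (p, -q)
x = np.array([1.0, 0.5])
print(f(x))  # → [ 0.5 -1. ]
```

The skew-symmetry $J^T = -J$ is exactly what makes the flow of this vector field symplectic.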
505  
506   \subsection{\label{introSection:exactFlow}Exact Propagator}
507  
# Line 527 | Line 528 | Therefore, the exact propagator is self-adjoint,
528   \begin{equation}
529   \varphi _\tau   = \varphi _{ - \tau }^{ - 1}.
530   \end{equation}
531 < The exact propagator can also be written in terms of operator,
531 > The exact propagator can also be written as an operator,
532   \begin{equation}
533   \varphi _\tau  (x) = e^{\tau \sum\limits_i {f_i (x)\frac{\partial
534   }{{\partial x_i }}} } (x) \equiv \exp (\tau f)(x).
# Line 620 | Line 621 | variational methods can capture the decay of energy
621   Generating functions\cite{Channell1990} tend to lead to methods
622   which are cumbersome and difficult to use. In dissipative systems,
623   variational methods can capture the decay of energy
624 < accurately\cite{Kane2000}. Since they are geometrically unstable
624 > accurately.\cite{Kane2000} Since they are geometrically unstable
625   against non-Hamiltonian perturbations, ordinary implicit Runge-Kutta
626   methods are not suitable for Hamiltonian systems. Recently, various
627   high-order explicit Runge-Kutta methods \cite{Owren1992,Chen2003}
# Line 629 | Line 630 | accepted since they exploit natural decompositions of
630   methods, they have not attracted much attention from the Molecular
631   Dynamics community. Instead, splitting methods have been widely
632   accepted since they exploit natural decompositions of the
633 < system\cite{Tuckerman1992, McLachlan1998}.
633 > system.\cite{McLachlan1998, Tuckerman1992}
634  
635   \subsubsection{\label{introSection:splittingMethod}\textbf{Splitting Methods}}
636  
# Line 673 | Line 674 | local errors proportional to $h^2$, while the Strang s
674   The Lie-Trotter
675   splitting (Eq.~\ref{introEquation:firstOrderSplitting}) introduces
676   local errors proportional to $h^2$, while the Strang splitting gives
677 < a second-order decomposition,
677 > a second-order decomposition,\cite{Strang1968}
678   \begin{equation}
679   \varphi _h  = \varphi _{1,h/2}  \circ \varphi _{2,h}  \circ \varphi
680   _{1,h/2} , \label{introEquation:secondOrderSplitting}
# Line 729 | Line 730 | the equations of motion would follow:
730  
731   \item Use the half step velocities to move positions one whole step, $\Delta t$.
732  
733 < \item Evaluate the forces at the new positions, $\mathbf{q}(\Delta t)$, and use the new forces to complete the velocity move.
733 > \item Evaluate the forces at the new positions, $q(\Delta t)$, and use the new forces to complete the velocity move.
734  
735   \item Repeat from step 1 with the new position, velocities, and forces assuming the roles of the initial values.
736   \end{enumerate}
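The steps above can be sketched in code; this is a minimal sketch assuming a generic force function, and the 1D harmonic force $f(q) = -kq$ used to exercise it is a hypothetical test case:

```python
import numpy as np

def velocity_verlet(q, v, force, m, dt, n_steps):
    """Velocity Verlet loop: half-step velocity move, whole-step
    position move, force evaluation at the new positions, then the
    completing half-step velocity move."""
    f = force(q)
    for _ in range(n_steps):
        v = v + 0.5 * dt * f / m   # move velocities a half step
        q = q + dt * v             # move positions a whole step
        f = force(q)               # evaluate forces at new positions
        v = v + 0.5 * dt * f / m   # complete the velocity move
    return q, v

# Hypothetical test system: 1D harmonic oscillator with k = m = 1
q, v = velocity_verlet(q=1.0, v=0.0, force=lambda q: -q, m=1.0,
                       dt=0.01, n_steps=1000)
# The total energy (v**2 + q**2)/2 stays close to its initial value 0.5
```

The bounded energy error seen here (no secular drift) is the practical signature of the method's symplectic character discussed earlier in this section.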
# Line 748 | Line 749 | q(\Delta t)} \right]. %
749  
750   \subsubsection{\label{introSection:errorAnalysis}\textbf{Error Analysis and Higher Order Methods}}
751  
752 < The Baker-Campbell-Hausdorff formula can be used to determine the
753 < local error of a splitting method in terms of the commutator of the
754 < operators(Eq.~\ref{introEquation:exponentialOperator}) associated with
755 < the sub-propagator. For operators $hX$ and $hY$ which are associated
756 < with $\varphi_1(t)$ and $\varphi_2(t)$ respectively , we have
752 > The Baker-Campbell-Hausdorff formula\cite{Gilmore1974} can be used
753 > to determine the local error of a splitting method in terms of the
754 > commutator of the
754 > operators (Eq.~\ref{introEquation:exponentialOperator}) associated
755 > with the sub-propagator. For operators $hX$ and $hY$ which are
756 > associated with $\varphi_1(t)$ and $\varphi_2(t)$ respectively, we
758 > have
759   \begin{equation}
760   \exp (hX + hY) = \exp (hZ)
761   \end{equation}
# Line 782 | Line 785 | order methods. Yoshida proposed an elegant way to comp
785   \end{equation}
786   A careful choice of the coefficients $a_1 \ldots b_m$ will lead to higher
787   order methods. Yoshida proposed an elegant way to compose higher
788 < order methods based on symmetric splitting\cite{Yoshida1990}. Given
788 > order methods based on symmetric splitting.\cite{Yoshida1990} Given
789   a symmetric second order base method $ \varphi _h^{(2)} $, a
790   fourth-order symmetric method can be constructed by composing,
791   \[
# Line 868 | Line 871 | surface and to locate the local minimum. While converg
871   minimization to find a more reasonable conformation. Several energy
872   minimization methods have been developed to explore the energy
873   surface and to locate the local minimum. While converging slowly
874 < near the minimum, steepest descent method is extremely robust when
874 > near the minimum, the steepest descent method is extremely robust when
875   systems are strongly anharmonic. Thus, it is often used to refine
876   structures from crystallographic data. Relying on the Hessian,
877   advanced methods like Newton-Raphson converge rapidly to a local
# Line 887 | Line 890 | end up setting the temperature of the system to a fina
890   temperature. Beginning at a lower temperature and gradually
891   increasing the temperature by assigning larger random velocities, we
892   end up setting the temperature of the system to a final temperature
893 < at which the simulation will be conducted. In heating phase, we
893 > at which the simulation will be conducted. In the heating phase, we
894   should also keep the system from drifting or rotating as a whole. To
895   do this, the net linear momentum and angular momentum of the system
896   are shifted to zero after each resampling from the Maxwell-Boltzmann
# Line 943 | Line 946 | evaluation is to apply spherical cutoffs where particl
946   %cutoff and minimum image convention
947   Another important technique to improve the efficiency of force
948   evaluation is to apply spherical cutoffs where particles farther
949 < than a predetermined distance are not included in the calculation
950 < \cite{Frenkel1996}. The use of a cutoff radius will cause a
951 < discontinuity in the potential energy curve. Fortunately, one can
949 > than a predetermined distance are not included in the
950 > calculation.\cite{Frenkel1996} The use of a cutoff radius will cause
951 > a discontinuity in the potential energy curve. Fortunately, one can
952   shift a simple radial potential to ensure the potential curve goes
953   smoothly to zero at the cutoff radius. The cutoff strategy works
954   well for Lennard-Jones interaction because of its short range
# Line 954 | Line 957 | with rapid and absolute convergence, has proved to min
957   in simulations. The Ewald summation, in which the slowly decaying
958   Coulomb potential is transformed into direct and reciprocal sums
959   with rapid and absolute convergence, has proved to minimize the
960 < periodicity artifacts in liquid simulations. Taking the advantages
961 < of the fast Fourier transform (FFT) for calculating discrete Fourier
962 < transforms, the particle mesh-based
960 > periodicity artifacts in liquid simulations. Taking advantage of
961 > fast Fourier transform (FFT) techniques for calculating discrete
962 > Fourier transforms, the particle mesh-based
963   methods\cite{Hockney1981,Shimada1993, Luty1994} are accelerated from
964   $O(N^{3/2})$ to $O(N \log N)$. An alternative approach is the
965   \emph{fast multipole method}\cite{Greengard1987, Greengard1994},
# Line 966 | Line 969 | charge-neutralized Coulomb potential method developed
969   simulation community, these two methods are difficult to implement
970   correctly and efficiently. Instead, we use a damped and
971   charge-neutralized Coulomb potential method developed by Wolf and
972 < his coworkers\cite{Wolf1999}. The shifted Coulomb potential for
972 > his coworkers.\cite{Wolf1999} The shifted Coulomb potential for
973   particle $i$ and particle $j$ at distance $r_{ij}$ is given by:
974   \begin{equation}
975   V(r_{ij})= \frac{q_i q_j \textrm{erfc}(\alpha
# Line 1029 | Line 1032 | Fourier transforming raw data from a series of neutron
1032   function}, is of fundamental importance to liquid theory.
1033   Experimentally, pair distribution functions can be gathered by
1034   Fourier transforming raw data from a series of neutron diffraction
1035 < experiments and integrating over the surface factor
1036 < \cite{Powles1973}. The experimental results can serve as a criterion
1037 < to justify the correctness of a liquid model. Moreover, various
1038 < equilibrium thermodynamic and structural properties can also be
1039 < expressed in terms of the radial distribution function
1040 < \cite{Allen1987}. The pair distribution functions $g(r)$ gives the
1041 < probability that a particle $i$ will be located at a distance $r$
1042 < from a another particle $j$ in the system
1035 > experiments and integrating over the surface
1036 > factor.\cite{Powles1973} The experimental results can serve as a
1037 > criterion to justify the correctness of a liquid model. Moreover,
1038 > various equilibrium thermodynamic and structural properties can also
1039 > be expressed in terms of the radial distribution
1040 > function.\cite{Allen1987} The pair distribution function $g(r)$
1041 > gives the probability that a particle $i$ will be located at a
1042 > distance $r$ from another particle $j$ in the system
1043   \begin{equation}
1044   g(r) = \frac{V}{{N^2 }}\left\langle {\sum\limits_i {\sum\limits_{j
1045   \ne i} {\delta (r - r_{ij} )} } } \right\rangle = \frac{\rho
# Line 1059 | Line 1062 | If $A$ and $B$ refer to same variable, this kind of co
1062   \label{introEquation:timeCorrelationFunction}
1063   \end{equation}
1064   If $A$ and $B$ refer to the same variable, such correlation
1065 < functions are called \emph{autocorrelation functions}. One example
1063 < of auto correlation function is the velocity auto-correlation
1065 > functions are called \emph{autocorrelation functions}. One typical example is the velocity autocorrelation
1066   function which is directly related to transport properties of
1067   molecular liquids:
1068 < \[
1068 > \begin{equation}
1069   D = \frac{1}{3}\int\limits_0^\infty  {\left\langle {v(t) \cdot v(0)}
1070   \right\rangle } dt
1071 < \]
1071 > \end{equation}
1072   where $D$ is the diffusion constant. Unlike the velocity autocorrelation
1073   function, which is averaged over time origins and over all the
1074   atoms, the dipole autocorrelation function is calculated for the
1075   entire system. The dipole autocorrelation function is given by:
1076 < \[
1076 > \begin{equation}
1077 > c_{dipole}  = \left\langle {u_{tot} (t) \cdot u_{tot} (0)}
1078   \right\rangle
1079 < \]
1079 > \end{equation}
1080   Here $u_{tot}$ is the net dipole of the entire system and is given
1081   by
1082 < \[
1082 > \begin{equation}
1083   u_{tot} (t) = \sum\limits_i {u_i (t)}.
1084 < \]
1084 > \end{equation}
1085   In principle, many time correlation functions can be related to
1086   Fourier transforms of the infrared, Raman, and inelastic neutron
1087   scattering spectra of molecular liquids. In practice, one can
1088   extract the IR spectrum from the intensity of the molecular dipole
1089   fluctuation at each frequency using the following relationship:
1090 < \[
1090 > \begin{equation}
1091   \hat c_{dipole} (v) = \int_{ - \infty }^\infty  {c_{dipole} (t)e^{ -
1092   i2\pi vt} dt}.
1093 < \]
1093 > \end{equation}
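The time-origin averaging behind these correlation functions can be sketched numerically; this is a sketch only, with synthetic uncorrelated random velocities standing in for real trajectory data, and the Green-Kubo integral done by a simple trapezoidal rule:

```python
import numpy as np

def velocity_autocorrelation(v, max_lag):
    """<v(t).v(0)> averaged over time origins and over all atoms.
    v has shape (n_frames, n_atoms, 3)."""
    n_frames = v.shape[0]
    c = np.empty(max_lag)
    for lag in range(max_lag):
        # dot products between frames separated by `lag`, averaged
        c[lag] = np.mean(np.sum(v[:n_frames - lag] * v[lag:], axis=-1))
    return c

def diffusion_constant(c, dt):
    """Green-Kubo estimate D = (1/3) * integral of <v(t).v(0)> dt,
    using the trapezoidal rule on the discrete correlation function."""
    return dt * (0.5 * c[0] + c[1:-1].sum() + 0.5 * c[-1]) / 3.0

# Synthetic stand-in data: uncorrelated unit-variance velocities,
# so c[0] is near 3 and c decays immediately to noise
rng = np.random.default_rng(0)
v = rng.normal(size=(1000, 10, 3))
c = velocity_autocorrelation(v, max_lag=100)
D = diffusion_constant(c, dt=0.01)
```

With real MD velocities the same machinery applies; only the input array and the time step change.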
1094  
1095   \section{\label{introSection:rigidBody}Dynamics of Rigid Bodies}
1096  
1097   Rigid bodies are frequently involved in the modeling of different
1098 < areas, from engineering, physics, to chemistry. For example,
1098 > areas, including engineering, physics and chemistry. For example,
1099   missiles and vehicles are usually modeled by rigid bodies.  The
1100   movement of the objects in 3D gaming engines or other physics
1101   simulators is governed by rigid body dynamics. In molecular
1102   simulations, rigid bodies are used to simplify protein-protein
1103 < docking studies\cite{Gray2003}.
1103 > docking studies.\cite{Gray2003}
1104  
1105   It is very important to develop stable and efficient methods to
1106   integrate the equations of motion for orientational degrees of
# Line 1110 | Line 1112 | still remain. A singularity-free representation utiliz
1112   angles can overcome this difficulty\cite{Barojas1973}, the
1113   computational penalty and the loss of angular momentum conservation
1114   still remain. A singularity-free representation utilizing
1115 < quaternions was developed by Evans in 1977\cite{Evans1977}.
1115 > quaternions was developed by Evans in 1977.\cite{Evans1977}
1116   Unfortunately, this approach used a nonseparable Hamiltonian
1117   resulting from the quaternion representation, which prevented the
1118   symplectic algorithm from being utilized. Another approach
# Line 1119 | Line 1121 | the SHAKE and Rattle algorithms also converge very slo
1121   deriving from potential energy and constraint forces which are used
1122   to guarantee the rigidness. However, due to their iterative nature,
1123   the SHAKE and Rattle algorithms also converge very slowly when the
1124 < number of constraints increases\cite{Ryckaert1977, Andersen1983}.
1124 > number of constraints increases.\cite{Ryckaert1977, Andersen1983}
1125  
1126   A breakthrough in the geometric literature suggests that, in order to
1127   develop a long-term integration scheme, one should preserve the
# Line 1129 | Line 1131 | An alternative method using the quaternion representat
1131   proposed to evolve the Hamiltonian system in a constraint manifold
1132   by iteratively satisfying the orthogonality constraint $Q^T Q = 1$.
1133   An alternative method using the quaternion representation was
1134 < developed by Omelyan\cite{Omelyan1998}. However, both of these
1134 > developed by Omelyan.\cite{Omelyan1998} However, both of these
1135   methods are iterative and inefficient. In this section, we describe a
1136   symplectic Lie-Poisson integrator for rigid bodies developed by
1137   Dullweber and his coworkers\cite{Dullweber1997} in depth.
1138  
1139   \subsection{\label{introSection:constrainedHamiltonianRB}Constrained Hamiltonian for Rigid Bodies}
1140 < The motion of a rigid body is Hamiltonian with the Hamiltonian
1139 < function
1140 > The Hamiltonian of a rigid body is given by
1141   \begin{equation}
1142   H = \frac{1}{2}(p^T m^{ - 1} p) + \frac{1}{2}tr(PJ^{ - 1} P) +
1143   V(q,Q) + \frac{1}{2}tr[(QQ^T  - 1)\Lambda ].
# Line 1250 | Line 1251 | motion. This unique property eliminates the requiremen
1251   Eq.~\ref{introEquation:skewMatrixPI} is zero, which implies the
1252   Lagrange multiplier $\Lambda$ is absent from the equations of
1253   motion. This unique property eliminates the requirement of
1254 > iterations which cannot be avoided in other methods.\cite{Kol1997,
1255 < Omelyan1998}. Applying the hat-map isomorphism, we obtain the
1254 > iterations which can not be avoided in other methods.\cite{Kol1997,
1255 > Omelyan1998} Applying the hat-map isomorphism, we obtain the
1256   equation of motion for angular momentum in the body frame
1257   \begin{equation}
1258   \dot \pi  = \pi  \times I^{ - 1} \pi  + \sum\limits_i {\left( {Q^T
# Line 1348 | Line 1349 | _1 }.
1349   \circ \varphi _{\Delta t/2,\pi _2 }  \circ \varphi _{\Delta t/2,\pi
1350   _1 }.
1351   \]
1352 < The non-canonical Lie-Poisson bracket ${F, G}$ of two function
1352 < $F(\pi )$ and $G(\pi )$ is defined by
1352 > The non-canonical Lie-Poisson bracket $\{F, G\}$ of two functions $F(\pi )$ and $G(\pi )$ is defined by
1353   \[
1354   \{ F,G\} (\pi ) = [\nabla _\pi  F(\pi )]^T J(\pi )\nabla _\pi  G(\pi
1355   ).
# Line 1358 | Line 1358 | norm of the angular momentum, $\parallel \pi
1358   function $G$ is zero, $F$ is a \emph{Casimir}, which is a
1359   conserved quantity of a Poisson system. We can easily verify that the
1360   norm of the angular momentum, $\parallel \pi
1361 < \parallel$, is a \emph{Casimir}\cite{McLachlan1993}. Let$ F(\pi ) = S(\frac{{\parallel
1361 > \parallel$, is a \emph{Casimir}.\cite{McLachlan1993} Let $F(\pi ) = S(\frac{{\parallel
1362   \pi \parallel ^2 }}{2})$ for an arbitrary function $ S:R \to R$ ,
1363   then by the chain rule
1364   \[
# Line 1379 | Line 1379 | of motion corresponding to potential energy and kineti
1379   The Hamiltonian of a rigid body can be separated into kinetic
1380   energy and potential energy, $H = T(p,\pi ) + V(q,Q)$. The equations
1381   of motion corresponding to potential energy and kinetic energy are
1382 < listed in Table~\ref{introTable:rbEquations}
1382 > listed in Table~\ref{introTable:rbEquations}.
1383   \begin{table}
1384   \caption{EQUATIONS OF MOTION DUE TO POTENTIAL AND KINETIC ENERGIES}
1385   \label{introTable:rbEquations}
# Line 1437 | Line 1437 | has been applied in a variety of studies. This section
1437   As an alternative to Newtonian dynamics, Langevin dynamics, which
1438   mimics a simple heat bath with stochastic and dissipative forces,
1439   has been applied in a variety of studies. This section will review
1440 < the theory of Langevin dynamics. A brief derivation of generalized
1440 > the theory of Langevin dynamics. A brief derivation of the generalized
1441   Langevin equation will be given first. Following that, we will
1442 < discuss the physical meaning of the terms appearing in the equation
1443 < as well as the calculation of friction tensor from hydrodynamics
1444 < theory.
1442 > discuss the physical meaning of the terms appearing in the equation.
1443  
1444   \subsection{\label{introSection:generalizedLangevinDynamics}Derivation of Generalized Langevin Equation}
1445  
# Line 1450 | Line 1448 | Harmonic bath model is the derivation of the Generaliz
1448   environment, has been widely used in quantum chemistry and
1449   statistical mechanics. One of the successful applications of
1450   the harmonic bath model is the derivation of the generalized Langevin
1451 < Dynamics (GLE). Lets consider a system, in which the degree of
1451 > equation (GLE). Consider a system in which the degree of
1452   freedom $x$ is assumed to couple to the bath linearly, giving a
1453   Hamiltonian of the form
1454   \begin{equation}
# Line 1461 | Line 1459 | H_B  = \sum\limits_{\alpha  = 1}^N {\left\{ {\frac{{p_
1459   with this degree of freedom, $H_B$ is a harmonic bath Hamiltonian,
1460   \[
1461   H_B  = \sum\limits_{\alpha  = 1}^N {\left\{ {\frac{{p_\alpha ^2
1462 < }}{{2m_\alpha  }} + \frac{1}{2}m_\alpha  \omega _\alpha ^2 }
1462 > }}{{2m_\alpha  }} + \frac{1}{2}m_\alpha  \omega _\alpha ^2 x_\alpha ^2 }
1463   \right\}}
1464   \]
1465   where the index $\alpha$ runs over all the bath degrees of freedom,
# Line 1514 | Line 1512 | where  $p$ is real and  $L$ is called the Laplace Tran
1512   L(f(t)) \equiv F(p) = \int_0^\infty  {f(t)e^{ - pt} dt}
1513   \]
1514   where  $p$ is real and  $L$ is called the Laplace Transform
1515 < Operator. Below are some important properties of Laplace transform
1515 > Operator. Below are some important properties of the Laplace transform
1516   \begin{eqnarray*}
1517   L(x + y)  & = & L(x) + L(y) \\
1518   L(ax)     & = & aL(x) \\
# Line 1583 | Line 1581 | m\ddot x =  - \frac{{\partial W}}{{\partial x}} - \int
1581   (t)\dot x(t - \tau )d\tau }  + R(t)
1582   \label{introEuqation:GeneralizedLangevinDynamics}
1583   \end{equation}
1584 < which is known as the \emph{generalized Langevin equation}.
1584 > which is known as the \emph{generalized Langevin equation} (GLE).
1585  
1586   \subsubsection{\label{introSection:randomForceDynamicFrictionKernel}\textbf{Random Force and Dynamic Friction Kernel}}
1587  
1588   One may notice that $R(t)$ depends only on initial conditions, which
1589   implies it is completely deterministic within the context of a
1590   harmonic bath. However, it is easy to verify that $R(t)$ is totally
1591 < uncorrelated to $x$ and $\dot x$,$\left\langle {x(t)R(t)}
1591 > uncorrelated to $x$ and $\dot x$, $\left\langle {x(t)R(t)}
1592   \right\rangle  = 0, \left\langle {\dot x(t)R(t)} \right\rangle  =
1593   0.$ This property is what we expect from a truly random process. As
1594   long as the model chosen for $R(t)$ is a Gaussian distribution in
# Line 1619 | Line 1617 | taken as a $delta$ function in time:
1617   infinitely quickly to motions in the system. Thus, $\xi (t)$ can be
1618   taken as a $\delta$ function in time:
1619   \[
1620 < \xi (t) = 2\xi _0 \delta (t)
1620 > \xi (t) = 2\xi _0 \delta (t).
1621   \]
1622   Hence, the convolution integral becomes
1623   \[
# Line 1644 | Line 1642 | q_\alpha  (t) = x_\alpha  (t) - \frac{1}{{m_\alpha  \o
1642   q_\alpha  (t) = x_\alpha  (t) - \frac{1}{{m_\alpha  \omega _\alpha
1643   ^2 }}x(0),
1644   \]
1645 < we can rewrite $R(T)$ as
1645 > we can rewrite $R(t)$ as
1646   \[
1647   R(t) = \sum\limits_{\alpha  = 1}^N {g_\alpha  q_\alpha  (t)}.
1648   \]
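In the Markovian limit discussed above (friction kernel $\xi(t) = 2\xi_0\delta(t)$), the GLE reduces to the ordinary Langevin equation. A minimal integrator sketch follows; the semi-implicit Euler update, the harmonic potential of mean force $W(x) = x^2/2$, and all parameter values are hypothetical choices, not the method of any cited work:

```python
import numpy as np

def langevin_step(x, v, grad_W, m, xi0, kT, dt, rng):
    """One step of m x'' = -dW/dx - xi0 * x' + R(t).  The random force
    R obeys the fluctuation-dissipation relation
    <R(t) R(t')> = 2 xi0 kT delta(t - t'), which on a grid of spacing
    dt gives a Gaussian with variance 2 xi0 kT / dt.  Velocity is
    updated first (semi-implicit Euler) for stability."""
    R = rng.normal(0.0, np.sqrt(2.0 * xi0 * kT / dt))
    v = v + dt * (-grad_W(x) - xi0 * v + R) / m
    x = x + dt * v
    return x, v

# Hypothetical example: W(x) = x^2/2, so grad_W(x) = x
rng = np.random.default_rng(1)
x, v = 0.0, 0.0
for _ in range(10000):
    x, v = langevin_step(x, v, grad_W=lambda x: x, m=1.0,
                         xi0=1.0, kT=1.0, dt=0.01, rng=rng)
```

Run long enough, the trajectory samples the canonical distribution, so the time average of $v^2$ approaches $k_BT/m$, which is a convenient sanity check on the noise scaling.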
