\begin{equation}E = T + V. \label{introEquation:energyConservation}
\end{equation}
All of these conserved quantities are important factors to determine
the quality of numerical integration schemes for rigid
bodies.\cite{Dullweber1997}
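In practice, monitoring the total energy along a stored trajectory is
one of the simplest of these quality checks. A minimal Python sketch
is given below; the \texttt{kinetic} and \texttt{potential} routines
are placeholders for whatever energy evaluation a particular code
provides.
\begin{verbatim}
def energy_drift(kinetic, potential, trajectory):
    # kinetic(frame) and potential(frame) are placeholder routines
    # returning T and V for one stored frame of a trajectory.
    energies = [kinetic(f) + potential(f) for f in trajectory]
    return max(energies) - min(energies)
\end{verbatim}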

\subsection{\label{introSection:lagrangian}Lagrangian Mechanics}

where Eq.~\ref{introEquation:motionHamiltonianCoordinate} and
Eq.~\ref{introEquation:motionHamiltonianMomentum} are Hamilton's
equations of motion. Due to their symmetric form, they are also
known as the canonical equations of motion.\cite{Goldstein2001}

An important difference between the Lagrangian approach and the
Hamiltonian approach is that the Lagrangian is considered to be a
function of the generalized coordinates $q_i$ and their time
derivatives $\dot q_i$, while the Hamiltonian is regarded as a
function of the generalized coordinates $q_i$ and their conjugate
momenta $p_i$.
Hamiltonian Mechanics is more appropriate for application to
statistical mechanics and quantum mechanics, since it treats the
coordinates and their conjugate momenta as independent variables and
it only works with first-order differential equations.\cite{Marion1990}
In Newtonian Mechanics, a system described by conservative forces
conserves the total energy
(Eq.~\ref{introEquation:energyConservation}). It follows that
The thermodynamic behavior and properties of a Molecular Dynamics
simulation are governed by the principles of Statistical Mechanics.
The following section will give a brief introduction to some of the
Statistical Mechanics concepts and theorems presented in this
dissertation.

\subsection{\label{introSection:ensemble}Phase Space and Ensemble}
Liouville's theorem can be expressed in a variety of different forms
which are convenient within different contexts. For any two functions
$F$ and $G$ of the coordinates and momenta of a system, the Poisson
bracket $\{F,G\}$ is defined as
\begin{equation}
\left\{ {F,G} \right\} = \sum\limits_i {\left( {\frac{{\partial
F}}{{\partial q_i }}\frac{{\partial G}}{{\partial p_i }} -
\frac{{\partial F}}{{\partial p_i }}\frac{{\partial G}}{{\partial
q_i }}} \right)}.
\end{equation}
many-body system in Statistical Mechanics. Fortunately, the Ergodic
Hypothesis makes a connection between the time average and the
ensemble average. It states that the time average and the average
over the statistical ensemble are identical:\cite{Frenkel1996, Leach2001}
\begin{equation}
\langle A(q , p) \rangle_t = \mathop {\lim }\limits_{t \to \infty }
\frac{1}{t}\int\limits_0^t {A(q(t),p(t))dt}  = \int\limits_\Gamma
{A(q,p)\rho (q,p)dqdp}
\end{equation}
utilized. Alternatively, if the system lends itself to a time
averaging approach, the Molecular Dynamics techniques in
Sec.~\ref{introSection:molecularDynamics} will be the best
choice.\cite{Frenkel1996}

\section{\label{introSection:geometricIntegratos}Geometric Integrators}
A variety of numerical integrators have been proposed to simulate
the motions of atoms in MD simulation. They usually begin with
initial conditions and move the objects in the direction governed by
the differential equations. However, most of them ignore the hidden
physical laws contained within the equations. Since 1990, geometric
integrators, which preserve various phase-flow invariants such as
symplectic structure, volume and time reversal symmetry, were
developed to address this issue.\cite{Dullweber1997, McLachlan1998,
Leimkuhler1999} The velocity Verlet method, which happens to be a
simple example of a symplectic integrator, continues to gain
popularity in the molecular dynamics community. This fact can be
partly explained by its geometric nature.
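For example, applying the explicit Euler method to the
one-dimensional harmonic oscillator, $\dot q = p$, $\dot p = -\omega
^2 q$, gives the map $(q,p) \mapsto (q + hp,\; p - h\omega ^2 q)$,
whose Jacobian has determinant $1 + h^2 \omega ^2$; phase-space
volume therefore grows at every step no matter how small $h$ is,
whereas a symplectic method preserves it exactly.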

\subsection{\label{introSection:symplecticManifold}Manifolds and Bundles}
A \emph{manifold} is an abstract mathematical space. It looks
locally like Euclidean space, but when viewed globally, it may have
a more complicated structure. A good example of a manifold is the
surface of the Earth. It seems to be flat locally, but it is round
if viewed as a whole. A \emph{differentiable manifold} (also known
as a \emph{smooth manifold}) is a manifold on which it is possible
to apply calculus.\cite{Hirsch1997} A \emph{symplectic manifold} is
defined as a pair $(M, \omega)$ which consists of a
\emph{differentiable manifold} $M$ and a closed, non-degenerate,
bilinear symplectic form, $\omega$. A symplectic form on a vector
space $V$ is a function $\omega(x, y)$ which satisfies
$\omega(\lambda_1x_1+\lambda_2x_2, y) = \lambda_1\omega(x_1, y)+
\lambda_2\omega(x_2, y)$, $\omega(x, y) = - \omega(y, x)$ and
$\omega(x, x) = 0$.\cite{McDuff1998} The familiar cross product of
vectors is an example of an operation with these properties.
Given vector spaces $V$ and $W$ over the same field $F$, a map $f: V
\to W$ is a linear transformation if
\begin{eqnarray*}
f(x+y) & = & f(x) + f(y) \\
f(ax) & = & af(x)
\end{eqnarray*}
are satisfied for any two vectors $x$ and $y$ in $V$ and any scalar
$a$ in $F$. The dual vector space $V^*$ of $V$ is the set of all
linear transformations $\phi: V \to F$, with addition and scalar
multiplication defined by
\begin{eqnarray*}
(\phi+\psi)(x) & = & \phi(x)+\psi(x) \\
(a\phi)(x) & = & a \phi(x)
\end{eqnarray*}
for all $a$ in $F$ and $x$ in $V$. For a manifold $M$, one can
define a tangent vector $\dot q$ in the tangent space $TM_q$ at
every point $q$ by
\begin{equation}
\dot q = \mathop {\lim }\limits_{t \to 0} \frac{{\phi (t) - \phi (0)}}{t}
\end{equation}
where $\phi(t)$ is a curve in $M$ with $\phi(0)=q$. One may also
define the cotangent space $T^*M_q$ as the dual space of the tangent
space $TM_q$. The tangent space and the cotangent space are
isomorphic to each other, since they are both real vector spaces of
the same dimension. The union of the tangent spaces at every point
of $M$ is called the tangent bundle of $M$ and is denoted by $TM$,
while the cotangent bundle $T^*M$ is defined as the union of the
cotangent spaces of $M$.\cite{Jost2002} For a Hamiltonian system
with configuration manifold $V$, the $(q,\dot q)$ space is the
tangent bundle of the configuration manifold $V$, while the phase
space $(q,p)$ is the cotangent bundle.
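As a concrete example, when the configuration space is simply $V =
R^n$ with coordinates $(q_1, \ldots, q_n)$, the cotangent bundle is
$T^*V = R^n \times R^n$ with coordinates $(q,p)$, and (with the
usual sign convention) it carries the canonical symplectic form
\[
\omega  = \sum\limits_{i = 1}^n {dq_i  \wedge dp_i },
\]
which is closed, non-degenerate and bilinear; this is the structure
implicitly being used whenever Hamilton's equations are written in
the $(q,p)$ variables.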

\subsection{\label{introSection:ODE}Ordinary Differential Equations}

Consider a system of first-order differential equations
\begin{equation}
\dot x = f(x)
\end{equation}
where $x = x(q,p)$. This system is a canonical Hamiltonian system if
$f(x) = J\nabla _x H(x)$. Here, $H = H (q, p)$ is the Hamiltonian
function and $J$ is the skew-symmetric matrix
\begin{equation}
J = \left( {\begin{array}{*{20}c}
   0 & I  \\
   { - I} & 0  \\
\end{array}} \right)
\end{equation}
where $I$ is the identity matrix. Using this notation, the canonical
Hamiltonian system can be written compactly as
\begin{equation}
\dot x = J\nabla _x H(x).
\label{introEquation:compactHamiltonian}
\end{equation}
In this case, $f$ is
called a \emph{Hamiltonian vector field}. Another generalization of
Hamiltonian dynamics is Poisson Dynamics,\cite{Olver1986}
\begin{equation}
\dot x = J(x)\nabla _x H \label{introEquation:poissonHamiltonian}
\end{equation}
where the most obvious change is that the matrix $J$ now depends on
$x$.
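A simple example is the one-dimensional harmonic oscillator with
$H(q,p) = p^2 /2m + kq^2 /2$: here $\nabla _x H = (kq, p/m)^T$, so
the Hamiltonian vector field is $f(x) = J\nabla _x H = (p/m, -kq)^T$,
which reproduces the familiar equations of motion $\dot q = p/m$ and
$\dot p = -kq$.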

\subsection{\label{introSection:exactFlow}Exact Propagator}

\begin{equation}
\varphi _\tau = \varphi _{ - \tau }^{ - 1}.
\end{equation}
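For instance, for the one-dimensional harmonic oscillator with $H =
(p^2  + \omega ^2 q^2 )/2$, the exact propagator is simply a
rotation in phase space,
\[
\varphi _\tau  (q_0 ,p_0 ) = \left( {q_0 \cos \omega \tau  +
\frac{{p_0 }}{\omega }\sin \omega \tau ,\; - \omega q_0 \sin \omega
\tau  + p_0 \cos \omega \tau } \right),
\]
and composing $\varphi _\tau$ with $\varphi _{ - \tau }$ recovers
the identity map, consistent with the time-reversal property above.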
In most cases, it is not easy to find the exact propagator
$\varphi_\tau$. Instead, we use an approximate map, $\psi_\tau$,
which is usually called an integrator. The order of an integrator

Generating functions\cite{Channell1990} tend to lead to methods
which are cumbersome and difficult to use. In dissipative systems,
variational methods can capture the decay of energy
accurately.\cite{Kane2000} Since they are geometrically unstable
against non-Hamiltonian perturbations, ordinary implicit Runge-Kutta
methods are not suitable for Hamiltonian
systems.\cite{Cartwright1992} Recently, various high-order explicit
Runge-Kutta methods \cite{Owren1992,Chen2003} have been developed to
overcome this instability. However, due to the computational penalty
involved in implementing the Runge-Kutta methods, they have not
attracted much attention from the Molecular Dynamics community.
Instead, splitting methods have been widely accepted since they
exploit natural decompositions of the system.\cite{McLachlan1998,
Tuckerman1992}

\subsubsection{\label{introSection:splittingMethod}\textbf{Splitting Methods}}

problem. If $H_1$ and $H_2$ can be integrated using exact
propagators $\varphi_1(t)$ and $\varphi_2(t)$, respectively, a
simple first-order expression is then given by the Lie-Trotter
formula\cite{Trotter1959}
\begin{equation}
\varphi _h = \varphi _{1,h} \circ \varphi _{2,h},
\label{introEquation:firstOrderSplitting}
\end{equation}
The Lie-Trotter
splitting (Eq.~\ref{introEquation:firstOrderSplitting}) introduces
local errors proportional to $h^2$, while the Strang splitting gives
a second-order decomposition,\cite{Strang1968}
\begin{equation}
\varphi _h = \varphi _{1,h/2} \circ \varphi _{2,h} \circ \varphi
_{1,h/2} , \label{introEquation:secondOrderSplitting}
\end{equation}
Applying the Strang splitting to a Hamiltonian of the form $H =
T(p) + V(q)$ gives the velocity Verlet algorithm, which proceeds as
follows:
\begin{enumerate}
\item Use the current forces to advance the velocities a half step, $\Delta t/2$.

\item Use the half step velocities to move positions one whole step, $\Delta t$.

\item Evaluate the forces at the new positions, $q(\Delta t)$, and use the new forces to complete the velocity move.

\item Repeat from step 1 with the new positions, velocities, and forces assuming the roles of the initial values.
\end{enumerate}
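To make these steps concrete, a minimal Python sketch of the scheme
is given below; the force routine \texttt{force(q)}, the masses
\texttt{m} and the time step \texttt{dt} are placeholders rather
than the implementation used in this work.
\begin{verbatim}
def velocity_verlet(q, v, force, m, dt, nsteps):
    # q, v: NumPy arrays of positions and velocities; m: masses.
    # force(q) is a placeholder returning forces at positions q.
    f = force(q)
    for _ in range(nsteps):
        v = v + 0.5 * dt * f / m   # half-step velocity move
        q = q + dt * v             # move positions one whole step
        f = force(q)               # forces at the new positions
        v = v + 0.5 * dt * f / m   # complete the velocity move
    return q, v
\end{verbatim}
Because the kinetic and potential propagators are applied in the
symmetric pattern of Eq.~\ref{introEquation:secondOrderSplitting},
each loop iteration is one application of the Strang splitting.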

\subsubsection{\label{introSection:errorAnalysis}\textbf{Error Analysis and Higher Order Methods}}

The Baker-Campbell-Hausdorff formula\cite{Gilmore1974} can be used
to determine the local error of a splitting method in terms of the
commutator of the operators associated with the sub-propagators.
For operators $hX$ and $hY$ which are associated with
$\varphi_1(t)$ and $\varphi_2(t)$, respectively, we have
\begin{equation}
\exp (hX + hY) = \exp (hZ)
\end{equation}
\end{equation}
A careful choice of the coefficients $a_1 \ldots b_m$ will lead to
higher order methods. Yoshida proposed an elegant way to compose
higher order methods based on symmetric splitting.\cite{Yoshida1990}
Given a symmetric second-order base method $ \varphi _h^{(2)} $, a
fourth-order symmetric method can be constructed by composing,
\[
\varphi _h^{(4)}  = \varphi _{x_1 h}^{(2)}  \circ \varphi _{x_0 h}^{(2)}
\circ \varphi _{x_1 h}^{(2)},
\]
where $x_0  =  - 2^{1/3} /(2 - 2^{1/3} )$ and $x_1  = 1/(2 - 2^{1/3} )$.

minimization to find a more reasonable conformation. Several energy
minimization methods have been developed to explore the energy
surface and to locate local minima. While converging slowly near
the minimum, the steepest descent method is extremely robust when
systems are strongly anharmonic. Thus, it is often used to refine
structures from crystallographic data. Relying on the Hessian,
advanced methods like Newton-Raphson converge rapidly to a local

temperature. Beginning at a lower temperature and gradually
increasing the temperature by assigning larger random velocities, we
end up setting the temperature of the system to a final temperature
at which the simulation will be conducted. In the heating phase, we
should also keep the system from drifting or rotating as a whole. To
do this, the net linear momentum and angular momentum of the system
are shifted to zero after each resampling from the Maxwell-Boltzmann

%cutoff and minimum image convention
Another important technique to improve the efficiency of the force
evaluation is to apply spherical cutoffs where particles farther
than a predetermined distance are not included in the
calculation.\cite{Frenkel1996} The use of a cutoff radius will cause
a discontinuity in the potential energy curve. Fortunately, one can
shift a simple radial potential to ensure that the potential curve
goes smoothly to zero at the cutoff radius. The cutoff strategy works
well for the Lennard-Jones interaction because of its short range

in simulations. The Ewald summation, in which the slowly decaying
Coulomb potential is transformed into direct and reciprocal sums
with rapid and absolute convergence, has proved to minimize the
periodicity artifacts in liquid simulations. Taking advantage of
fast Fourier transform (FFT) techniques for calculating discrete
Fourier transforms, the particle mesh-based
methods\cite{Hockney1981,Shimada1993, Luty1994} are accelerated from
$O(N^{3/2})$ to $O(N\log N)$. An alternative approach is the
\emph{fast multipole method}\cite{Greengard1987, Greengard1994},

simulation community, these two methods are difficult to implement
correctly and efficiently. Instead, we use a damped and
charge-neutralized Coulomb potential method developed by Wolf and
his coworkers.\cite{Wolf1999} The shifted Coulomb potential for
particle $i$ and particle $j$ at distance $r_{ij}$ is given by:
\begin{equation}
V(r_{ij})= \frac{q_i q_j \textrm{erfc}(\alpha r_{ij} )}{r_{ij} } -
\mathop {\lim }\limits_{r_{ij}  \to R_c } \left\{ {\frac{q_i q_j
\textrm{erfc}(\alpha r_{ij} )}{r_{ij} }} \right\}.
\end{equation}

function}, is of most fundamental importance to liquid theory.
Experimentally, pair distribution functions can be gathered by
Fourier transforming raw data from a series of neutron diffraction
experiments and integrating over the structure
factor.\cite{Powles1973} The experimental results can serve as a
criterion to justify the correctness of a liquid model. Moreover,
various equilibrium thermodynamic and structural properties can also
be expressed in terms of the radial distribution
function.\cite{Allen1987} The pair distribution function $g(r)$
gives the probability that a particle $i$ will be located at a
distance $r$ from another particle $j$ in the system
\begin{equation}
g(r) = \frac{V}{{N^2 }}\left\langle {\sum\limits_i {\sum\limits_{j
\ne i} {\delta (r - r_{ij} )} } } \right\rangle  = \frac{{\rho
(r)}}{\rho },
\end{equation}

\begin{equation}
C_{AB} (t) = \left\langle {A(t)B(0)} \right\rangle
\label{introEquation:timeCorrelationFunction}
\end{equation}
If $A$ and $B$ refer to the same variable, this kind of correlation
function is called an \emph{autocorrelation function}. One typical
example is the velocity autocorrelation function, which is directly
related to the transport properties of molecular liquids:
\begin{equation}
D = \frac{1}{3}\int\limits_0^\infty {\left\langle {v(t) \cdot v(0)}
\right\rangle } dt
\end{equation}
where $D$ is the diffusion constant. Unlike the velocity
autocorrelation function, which is averaged over time origins and
over all the atoms, the dipole autocorrelation function is
calculated for the entire system. The dipole autocorrelation
function is given by:
\begin{equation}
c_{dipole} (t) = \left\langle {u_{tot} (t) \cdot u_{tot} (0)}
\right\rangle
\end{equation}
Here $u_{tot}$ is the net dipole of the entire system and is given
by
\begin{equation}
u_{tot} (t) = \sum\limits_i {u_i (t)}.
\end{equation}
In principle, many time correlation functions can be related to
Fourier transforms of the infrared, Raman, and inelastic neutron
scattering spectra of molecular liquids. In practice, one can
extract the IR spectrum from the intensity of the molecular dipole
fluctuation at each frequency using the following relationship:
\begin{equation}
\hat c_{dipole} (v) = \int_{ - \infty }^\infty {c_{dipole} (t)e^{ -
i2\pi vt} dt}.
\end{equation}
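As an illustration of how such correlation functions are evaluated
in practice, the short Python sketch below estimates an
autocorrelation function from a discretely sampled time series and
integrates the velocity autocorrelation to obtain $D$; the
trajectory array, its shape and the time step are placeholder
assumptions rather than details of any particular simulation code.
\begin{verbatim}
import numpy as np

def autocorrelation(a):
    # a: 1D array sampled at equal time intervals;
    # averages over all available time origins.
    n = len(a)
    return np.array([np.mean(a[:n - k] * a[k:]) for k in range(n)])

def diffusion_constant(v, dt):
    # v: velocities with shape (nframes, natoms, 3); dt: time step.
    # Build <v(t).v(0)> averaged over atoms and Cartesian components,
    # then integrate: D = (1/3) * integral of <v(t).v(0)> dt.
    vacf = np.zeros(v.shape[0])
    for i in range(v.shape[1]):
        for k in range(3):
            vacf += autocorrelation(v[:, i, k])
    vacf /= v.shape[1]
    return np.trapz(vacf, dx=dt) / 3.0
\end{verbatim}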

\section{\label{introSection:rigidBody}Dynamics of Rigid Bodies}

Rigid bodies frequently appear in models from many different areas,
including engineering, physics and chemistry. For example, missiles
and vehicles are usually modeled by rigid bodies. The movement of
objects in 3D gaming engines or other physics simulators is governed
by rigid body dynamics. In molecular simulations, rigid bodies are
used to simplify protein-protein docking studies.\cite{Gray2003}

It is very important to develop stable and efficient methods to
integrate the equations of motion for orientational degrees of
freedom.

angles can overcome this difficulty\cite{Barojas1973}, the
computational penalty and the loss of angular momentum conservation
still remain. A singularity-free representation utilizing
quaternions was developed by Evans in 1977.\cite{Evans1977}
Unfortunately, this approach used a nonseparable Hamiltonian
resulting from the quaternion representation, which prevented the
symplectic algorithm from being utilized. Another approach is to
apply holonomic constraints, in which the forces are a combination
of forces deriving from the potential energy and constraint forces
which are used to guarantee the rigidness. However, due to their
iterative nature, the SHAKE and RATTLE algorithms also converge very
slowly when the number of constraints
increases.\cite{Ryckaert1977, Andersen1983}

A break-through in the geometric literature suggests that, in order
to develop a long-term integration scheme, one should preserve the

proposed to evolve the Hamiltonian system in a constraint manifold
by iteratively satisfying the orthogonality constraint $Q^T Q = 1$.
An alternative method using the quaternion representation was
developed by Omelyan.\cite{Omelyan1998} However, both of these
methods are iterative and inefficient. In this section, we describe a
symplectic Lie-Poisson integrator for rigid bodies developed by
Dullweber and his coworkers\cite{Dullweber1997} in depth.

\subsection{\label{introSection:constrainedHamiltonianRB}Constrained Hamiltonian for Rigid Bodies}
The Hamiltonian of a rigid body is given by
\begin{equation}
H = \frac{1}{2}(p^T m^{ - 1} p) + \frac{1}{2}tr(PJ^{ - 1} P) +
V(q,Q) + \frac{1}{2}tr[(QQ^T - 1)\Lambda ].
\end{equation}
1,\tilde Q^T \tilde PJ^{ - 1} + J^{ - 1} P^T \tilde Q = 0} \right\}
\]
For a body-fixed vector $X_i$ with respect to the center of mass of
the rigid body, its corresponding lab-fixed vector $X_i^{lab}$ is
given as
\begin{equation}
X_i^{lab} = Q X_i + q.
\end{equation}

Eq.~\ref{introEquation:skewMatrixPI} is zero, which implies the
Lagrange multiplier $\Lambda$ is absent from the equations of
motion. This unique property eliminates the requirement of
iterations which cannot be avoided in other methods.\cite{Kol1997,
Omelyan1998} Applying the hat-map isomorphism, we obtain the
equation of motion for angular momentum in the body frame
\begin{equation}
\dot \pi = \pi \times I^{ - 1} \pi + \sum\limits_i {\left( {Q^T
\circ \varphi _{\Delta t/2,\pi _2 } \circ \varphi _{\Delta t/2,\pi
_1 }.
\]
The non-canonical Lie-Poisson bracket $\{F, G\}$ of two functions
$F(\pi )$ and $G(\pi )$ is defined by
\[
\{ F,G\} (\pi ) = [\nabla _\pi F(\pi )]^T J(\pi )\nabla _\pi G(\pi
).
\]
If the Lie-Poisson bracket of a function $F$ with an arbitrary
function $G$ is zero, $F$ is a \emph{Casimir}, which is a
conserved quantity in a Poisson system. We can easily verify that the
norm of the angular momentum, $\parallel \pi
\parallel$, is a \emph{Casimir}.\cite{McLachlan1993} Let $F(\pi ) = S(\frac{{\parallel
\pi \parallel ^2 }}{2})$ for an arbitrary function $S:R \to R$,
then by the chain rule
\[
\nabla _\pi  F(\pi ) = S'(\frac{{\parallel \pi \parallel ^2 }}{2})\pi.
\]

The Hamiltonian of a rigid body can be separated into kinetic
energy and potential energy terms, $H = T(p,\pi ) + V(q,Q)$. The
equations of motion corresponding to the potential energy and the
kinetic energy are listed in Table~\ref{introTable:rbEquations}.
\begin{table}
\caption{EQUATIONS OF MOTION DUE TO POTENTIAL AND KINETIC ENERGIES}
\label{introTable:rbEquations}
\end{table}

As an alternative to Newtonian dynamics, Langevin dynamics, which
mimics a simple heat bath with stochastic and dissipative forces,
has been applied in a variety of studies. This section will review
the theory of Langevin dynamics. A brief derivation of the
generalized Langevin equation will be given first. Following that,
we will discuss the physical meaning of the terms appearing in the
equation.

\subsection{\label{introSection:generalizedLangevinDynamics}Derivation of Generalized Langevin Equation}

The harmonic bath model, in which a system is coupled to a
collection of harmonic oscillators representing its surrounding
environment, has been widely used in quantum chemistry and
statistical mechanics. One of the successful applications of the
harmonic bath model is the derivation of the generalized Langevin
equation (GLE). Consider a system in which the degree of
freedom $x$ is assumed to couple to the bath linearly, giving a
Hamiltonian of the form
\begin{equation}
H = H_S  + H_B  + H_{SB},
\end{equation}
where $H_S$ is the Hamiltonian of the system associated
with this degree of freedom, $H_B$ is a harmonic bath Hamiltonian,
\[
H_B = \sum\limits_{\alpha = 1}^N {\left\{ {\frac{{p_\alpha ^2
}}{{2m_\alpha }} + \frac{1}{2}m_\alpha \omega _\alpha ^2 x_\alpha ^2 }
\right\}}
\]
where the index $\alpha$ runs over all the bath degrees of freedom,

\[
L(f(t)) \equiv F(p) = \int_0^\infty {f(t)e^{ - pt} dt}
\]
where $p$ is real and $L$ is called the Laplace Transform
Operator. Below are some important properties of the Laplace
transform:
\begin{eqnarray*}
L(x + y) & = & L(x) + L(y) \\
L(ax) & = & aL(x)
\end{eqnarray*}

(t)\dot x(t - \tau )d\tau } + R(t)
\label{introEuqation:GeneralizedLangevinDynamics}
\end{equation}
which is known as the \emph{generalized Langevin equation} (GLE).
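For orientation, the sketch below integrates the memoryless
(Markovian) limit of this equation that is discussed later in this
section, where the friction kernel collapses to a constant $\xi _0$;
the force routine, mass, friction coefficient, temperature and the
simple Euler-type discretization are all placeholder assumptions
rather than the scheme used in this dissertation.
\begin{verbatim}
import numpy as np

def langevin_step(q, v, force, m, xi0, kT, dt, rng):
    # One step of m dv/dt = F(q) - xi0*v + R(t), where R(t) is
    # Gaussian with <R(t)R(t')> = 2*xi0*kT*delta(t-t').
    # rng: a numpy.random.Generator, e.g. np.random.default_rng().
    R = rng.normal(0.0, np.sqrt(2.0 * xi0 * kT / dt), size=np.shape(v))
    v = v + dt * (force(q) - xi0 * v + R) / m
    q = q + dt * v
    return q, v
\end{verbatim}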

\subsubsection{\label{introSection:randomForceDynamicFrictionKernel}\textbf{Random Force and Dynamic Friction Kernel}}

One may notice that $R(t)$ depends only on initial conditions, which
implies it is completely deterministic within the context of a
harmonic bath. However, it is easy to verify that $R(t)$ is totally
uncorrelated to $x$ and $\dot x$: $\left\langle {x(t)R(t)}
\right\rangle = 0, \left\langle {\dot x(t)R(t)} \right\rangle =
0.$ This property is what we expect from a truly random process. As
long as the model chosen for $R(t)$ was a Gaussian distribution in
infinitely quickly to motions in the system. Thus, $\xi (t)$ can be
taken as a $\delta$ function in time:
\[
\xi (t) = 2\xi _0 \delta (t).
\]
Hence, the convolution integral becomes
\[
\xi _0 \dot x(t).
\]
By introducing a new set of coordinates,
\[
q_\alpha (t) = x_\alpha (t) - \frac{1}{{m_\alpha \omega _\alpha
^2 }}x(0),
\]
we can rewrite $R(t)$ as
\[
R(t) = \sum\limits_{\alpha = 1}^N {g_\alpha q_\alpha (t)}.
\]