
Comparing trunk/tengDissertation/Introduction.tex (file contents):
Revision 2718 by tim, Tue Apr 18 04:11:56 2006 UTC vs.
Revision 2725 by tim, Fri Apr 21 05:45:14 2006 UTC

# Line 822 | Line 822 | q(\Delta t)} \right]. %
822   %
823   q(\Delta t) &= q(0) + \frac{{\Delta t}}{2}\left[ {\dot q(0) + \dot
824   q(\Delta t)} \right]. %
825 < \label{introEquation:positionVerlet1}
825 > \label{introEquation:positionVerlet2}
826   \end{align}
827  
828   \subsubsection{\label{introSection:errorAnalysis}Error Analysis and Higher Order Methods}
# Line 883 | Line 883 | As a special discipline of molecular modeling, Molecul
883  
884   \section{\label{introSection:molecularDynamics}Molecular Dynamics}
885  
886 < As a special discipline of molecular modeling, Molecular dynamics
887 < has proven to be a powerful tool for studying the functions of
888 < biological systems, providing structural, thermodynamic and
889 < dynamical information.
890 <
891 < \subsection{\label{introSec:mdInit}Initialization}
As one of the principal tools of molecular modeling, molecular
dynamics has proven to be a powerful technique for studying the
functions of biological systems, providing structural, thermodynamic
and dynamical information. The basic idea of molecular dynamics is
that macroscopic properties are related to microscopic behavior, and
that microscopic behavior can be calculated from the trajectories
generated in simulations. For instance, the instantaneous temperature
of a Hamiltonian system of $N$ particles can be measured by
\[
T = \sum\limits_{i = 1}^N {\frac{{m_i v_i^2 }}{{fk_B }}}
\]
where $m_i$ and $v_i$ are the mass and velocity of the $i$th particle
respectively, $f$ is the number of degrees of freedom, and $k_B$ is
the Boltzmann constant.
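As a minimal illustration of how this estimator is evaluated in practice,
the instantaneous temperature can be computed from stored masses and
velocities as in the sketch below; the particle data and the assumption of
$f = 3N$ unconstrained degrees of freedom are placeholder choices.
\begin{verbatim}
import numpy as np

KB = 1.380649e-23  # Boltzmann constant in J/K (SI units assumed)

def instantaneous_temperature(masses, velocities, n_constraints=0):
    """Kinetic temperature T = sum_i m_i v_i^2 / (f kB)."""
    kinetic = np.sum(masses[:, None] * velocities**2)  # sum of m_i v_i^2
    f = 3 * len(masses) - n_constraints                # degrees of freedom
    return kinetic / (f * KB)

# hypothetical example: 100 argon-like particles with random velocities
masses = np.full(100, 6.63e-26)                        # kg
velocities = np.random.normal(0.0, 300.0, (100, 3))    # m/s
print(instantaneous_temperature(masses, velocities))
\end{verbatim}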
900  
901 < \subsection{\label{introSec:forceEvaluation}Force Evaluation}
901 > A typical molecular dynamics run consists of three essential steps:
902 > \begin{enumerate}
903 >  \item Initialization
904 >    \begin{enumerate}
905 >    \item Preliminary preparation
906 >    \item Minimization
907 >    \item Heating
908 >    \item Equilibration
909 >    \end{enumerate}
910 >  \item Production
911 >  \item Analysis
912 > \end{enumerate}
These three steps will be covered in the following sections.
Sec.~\ref{introSec:initialSystemSettings} deals with the
initialization of a simulation. Sec.~\ref{introSec:production}
discusses issues that arise during the production run.
Sec.~\ref{introSection:Analysis} provides the theoretical tools for
trajectory analysis.
918 >
919 > \subsection{\label{introSec:initialSystemSettings}Initialization}
920 >
921 > \subsubsection{Preliminary preparation}
922  
923 < \subsection{\label{introSection:mdIntegration} Integration of the Equations of Motion}
When selecting the starting structure of a molecule for molecular
simulation, one may retrieve its Cartesian coordinates from public
databases such as the RCSB Protein Data Bank. Although thousands of
crystal structures are determined every year, many more remain
unknown due to the difficulties of purification and crystallization.
Even for a molecule with a known structure, some important
information may be missing. For example, hydrogen atoms that act as
donors in hydrogen bonding are often absent and must be added.
Moreover, in order to include electrostatic interactions, one may
need to assign partial charges to individual atoms. Under some
circumstances, the system must even be prepared in a special setup.
For instance, when studying transport phenomena in a membrane system,
we prepare the lipids in a bilayer structure instead of placing them
randomly in solvent, since we are not interested in self-aggregation
and it would take a very long time to occur spontaneously.
938 >
939 > \subsubsection{Minimization}
940 >
It is quite possible that some of the molecules in a system prepared
in this preliminary fashion overlap with one another. Such close
contacts lead to very high potential energies, which in turn
jeopardize any molecular dynamics simulation. To remove these steric
overlaps, one typically performs an energy minimization to find a
more reasonable conformation. Several energy minimization methods
have been developed to explore the energy surface and to locate a
local minimum. Although it converges slowly near the minimum, the
steepest descent method is extremely robust when the system is far
from harmonic, and it is therefore often used to refine structures
derived from crystallographic data. Relying on the gradient or
Hessian, more advanced methods such as conjugate gradient and
Newton-Raphson converge rapidly to a local minimum, but they become
unstable if the energy surface is far from quadratic. Another factor
that must be taken into account when choosing an energy minimization
method is the size of the system. Steepest descent and conjugate
gradient can deal with models of any size, whereas most
Newton-Raphson methods cannot be used for very large models because
of the computational cost of evaluating the Hessian matrix and the
storage required to hold it.
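To make the procedure concrete, a minimal steepest descent sketch is shown
below; the quadratic test surface, fixed step size and convergence
tolerance are illustrative assumptions rather than values used in this
work.
\begin{verbatim}
import numpy as np

def steepest_descent(grad, x0, step=1.0e-3, tol=1.0e-6, max_iter=100000):
    """Follow the negative gradient downhill until the force is nearly zero."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        g = grad(x)
        if np.linalg.norm(g) < tol:   # converged: |gradient| ~ 0
            break
        x = x - step * g              # move against the gradient
    return x

# hypothetical example: a simple quadratic "energy surface"
grad = lambda x: 2.0 * (x - np.array([1.0, -2.0]))
print(steepest_descent(grad, x0=[10.0, 10.0]))
\end{verbatim}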
960 >
961 > \subsubsection{Heating}
962  
Typically, heating is performed by assigning random velocities drawn
from a Gaussian distribution corresponding to a chosen temperature.
Beginning at a low temperature and gradually raising it by assigning
larger random velocities, we eventually bring the system to the final
temperature at which the simulation will be conducted. During the
heating phase, we should also keep the system from drifting or
rotating as a whole; equivalently, the net linear momentum and net
angular momentum of the system should be reset to zero.
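A sketch of this velocity assignment, including removal of the net linear
momentum, is given below; the particle masses and the stepwise temperature
schedule are placeholder values.
\begin{verbatim}
import numpy as np

KB = 1.380649e-23  # J/K

def assign_velocities(masses, T):
    """Draw velocities from a Maxwell-Boltzmann distribution at temperature T
    and remove the net linear momentum so the system does not drift."""
    sigma = np.sqrt(KB * T / masses)[:, None]            # per-particle width
    v = np.random.normal(0.0, 1.0, (len(masses), 3)) * sigma
    v -= np.sum(masses[:, None] * v, axis=0) / np.sum(masses)
    return v

# hypothetical heating schedule: raise the temperature in 10 K increments
masses = np.full(100, 6.63e-26)                          # kg
for T in range(10, 310, 10):
    velocities = assign_velocities(masses, T)
    # ... run a short MD segment at this temperature (not shown) ...
\end{verbatim}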
972 +
973 + \subsubsection{Equilibration}
974 +
The purpose of equilibration is to allow the system to evolve
spontaneously for a period of time and reach equilibrium. The
procedure is continued until various properties, such as the
temperature, pressure, energy, volume and other structural
quantities, become independent of time. Strictly speaking,
minimization and heating are not necessary provided the equilibration
process is long enough; however, these steps are an effective means
of arriving at an equilibrated structure.
984 +
985 + \subsection{\label{introSection:production}Production}
986 +
The production run is the most important step of the simulation: the
equilibrated structure is used as a starting point, and the motions
of the molecules are collected for later analysis. In order to
capture the macroscopic properties of the system, the molecular
dynamics simulation must be performed in a correct and efficient way.
992 +
The most expensive part of a molecular dynamics simulation is the
calculation of the non-bonded forces, such as the van der Waals and
Coulombic interactions. For a system of $N$ particles, the complexity
of evaluating all pair-wise interactions is $O(N^2)$, which makes
large simulations prohibitive in the absence of computation-saving
techniques.
999 +
A natural approach to the system-size problem is to represent bulk
behavior with a finite number of particles. However, this approach
suffers from surface effects. To offset these, \textit{periodic
boundary conditions} are employed to simulate bulk properties with a
relatively small number of particles. In this method, the simulation
box is replicated throughout space to form an infinite lattice.
During the simulation, when a particle moves in the primary cell, its
images in the other cells move in exactly the same way. Thus, as a
particle leaves the primary cell, one of its images enters through
the opposite face.
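For a cubic cell, wrapping coordinates back into the primary box and
applying the minimum image convention might look like the following
sketch; the box length and coordinates are arbitrary illustrative values.
\begin{verbatim}
import numpy as np

def wrap(positions, box):
    """Map coordinates back into the primary cell [0, box)."""
    return positions - box * np.floor(positions / box)

def minimum_image(r_ij, box):
    """Return the nearest-image separation vector in a cubic box."""
    return r_ij - box * np.round(r_ij / box)

box = 30.0                                               # arbitrary box length
positions = np.random.uniform(-5.0, 35.0, (100, 3))
positions = wrap(positions, box)
r_01 = minimum_image(positions[0] - positions[1], box)
\end{verbatim}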
1011 + %\begin{figure}
1012 + %\centering
1013 + %\includegraphics[width=\linewidth]{pbcFig.eps}
1014 + %\caption[An illustration of periodic boundary conditions]{A 2-D
1015 + %illustration of periodic boundary conditions. As one particle leaves
1016 + %the right of the simulation box, an image of it enters the left.}
1017 + %\label{introFig:pbc}
1018 + %\end{figure}
1019 +
1020 + %cutoff and minimum image convention
Another important technique for improving the efficiency of the
force evaluation is to apply a cutoff, in which particles separated
by more than a predetermined distance are not included in the
calculation \cite{Frenkel1996}. The use of a cutoff radius introduces
a discontinuity in the potential energy curve
(Fig.~\ref{introFig:shiftPot}). Fortunately, one can shift the
potential so that it goes smoothly to zero at the cutoff radius. The
cutoff strategy works well for the Lennard-Jones interaction because
of its short-range nature. However, simply truncating the
electrostatic interaction at a cutoff has been shown to lead to
severe artifacts in simulations. Ewald summation, in which the slowly
and conditionally convergent Coulomb potential is transformed into
direct and reciprocal sums with rapid and absolute convergence, has
proved to minimize the periodicity artifacts in liquid simulations.
By taking advantage of the fast Fourier transform (FFT) for
calculating discrete Fourier transforms, the particle-mesh-based
methods reduce the cost from $O(N^{3/2})$ to $O(N\log N)$. An
alternative approach is the \emph{fast multipole method}, which
treats the Coulombic interaction exactly at short range and
approximates the potential at long range through a multipolar
expansion. In spite of their wide acceptance in the molecular
simulation community, these two methods are difficult to implement
correctly and efficiently. Instead, we use the damped and
charge-neutralized Coulomb potential method developed by Wolf and his
coworkers. The shifted Coulomb potential for particles $i$ and $j$ at
distance $r_{ij}$ is given by:
1047 + \begin{equation}
1048 + V(r_{ij})= \frac{q_i q_j \textrm{erfc}(\alpha
1049 + r_{ij})}{r_{ij}}-\lim_{r_{ij}\rightarrow
1050 + R_\textrm{c}}\left\{\frac{q_iq_j \textrm{erfc}(\alpha
1051 + r_{ij})}{r_{ij}}\right\}. \label{introEquation:shiftedCoulomb}
1052 + \end{equation}
where $\alpha$ is the convergence parameter. Due to the lack of
inherent periodicity and its rapid convergence, this method is
extremely efficient and easy to implement.
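A sketch of the pair potential in Eq.~\ref{introEquation:shiftedCoulomb} is
given below; the damping parameter, cutoff radius and charges are
illustrative values, and the electrostatic prefactor is omitted (i.e., the
charges are taken in reduced units).
\begin{verbatim}
import numpy as np
from scipy.special import erfc

def shifted_coulomb(qi, qj, r, alpha=0.2, r_cut=12.0):
    """Damped, shifted Coulomb pair potential; goes to zero at r = r_cut."""
    if r >= r_cut:
        return 0.0
    damped = lambda x: qi * qj * erfc(alpha * x) / x
    return damped(r) - damped(r_cut)   # subtract the value at the cutoff

print(shifted_coulomb(1.0, -1.0, 5.0))  # hypothetical unit charges at r = 5
\end{verbatim}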
1056 + %\begin{figure}
1057 + %\centering
1058 + %\includegraphics[width=\linewidth]{pbcFig.eps}
1059 + %\caption[An illustration of shifted Coulomb potential]{An illustration of shifted Coulomb potential.}
1060 + %\label{introFigure:shiftedCoulomb}
1061 + %\end{figure}
1062 +
1063 + %multiple time step
1064 +
1065 + \subsection{\label{introSection:Analysis} Analysis}
1066 +
Recently, advanced visualization techniques have been widely applied
to monitor the motions of molecules. Although the dynamics of the
system can be described qualitatively from such animations,
quantitative trajectory analysis is more useful. According to the
principles of statistical mechanics outlined in
Sec.~\ref{introSection:statisticalMechanics}, one can compute
thermodynamic properties, analyze fluctuations of structural
parameters, and investigate time-dependent processes of the molecule
from the trajectories.
1076 +
\subsubsection{\label{introSection:thermodynamicsProperties}Thermodynamic Properties}
1078 +
Thermodynamic properties, which can be expressed in terms of some
function of the coordinates and momenta of all particles in the
system, can be computed directly from a molecular dynamics
trajectory. The usual way to measure the pressure is based on the
virial theorem of Clausius, which states that the virial is equal to
$-3Nk_BT$. For a system with forces between particles, the total
virial, $W$, contains contributions from the external pressure and
from the interactions between the particles:
1087 + \[
1088 + W =  - 3PV + \left\langle {\sum\limits_{i < j} {r{}_{ij} \cdot
1089 + f_{ij} } } \right\rangle
1090 + \]
1091 + where $f_{ij}$ is the force between particle $i$ and $j$ at a
1092 + distance $r_{ij}$. Thus, the expression for the pressure is given
1093 + by:
1094 + \begin{equation}
P = \frac{{Nk_B T}}{V} + \frac{1}{{3V}}\left\langle {\sum\limits_{i
< j} {r_{ij}  \cdot f_{ij} } } \right\rangle
1097 + \end{equation}
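In a simulation this average is accumulated over stored configurations; a
minimal sketch, assuming the convention
$PV = Nk_BT + \frac{1}{3}\left\langle\sum_{i<j} r_{ij}\cdot f_{ij}\right\rangle$
with $f_{ij}$ the force on particle $i$ due to particle $j$, is
\begin{verbatim}
import numpy as np

KB = 1.380649e-23  # J/K

def pressure(n_particles, T, volume, pair_separations, pair_forces):
    """P = N kB T / V + (1/3V) * sum_{i<j} r_ij . f_ij
    pair_separations, pair_forces : (n_pairs, 3) arrays over all i < j."""
    virial = np.sum(pair_separations * pair_forces)      # sum of r_ij . f_ij
    return n_particles * KB * T / volume + virial / (3.0 * volume)
\end{verbatim}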
1098 +
1099 + \subsubsection{\label{introSection:structuralProperties}Structural Properties}
1100 +
The structural properties of a simple fluid can be described by a
set of distribution functions. Among these functions, the \emph{pair
distribution function}, also known as the \emph{radial distribution
function}, is of the most fundamental importance to liquid-state
theory. The pair distribution function can be obtained by Fourier
transforming raw data from a series of neutron diffraction
experiments and integrating over the structure factor
\cite{Powles73}. The experimental result can serve as a criterion
with which to judge the correctness of a theory. Moreover, various
equilibrium thermodynamic and structural properties can also be
expressed in terms of the radial distribution function
\cite{allen87:csl}.
1112 +
The pair distribution function $g(r)$ gives the probability that a
particle $i$ will be located at a distance $r$ from another particle
$j$ in the system:
1116 + \[
1117 + g(r) = \frac{V}{{N^2 }}\left\langle {\sum\limits_i {\sum\limits_{j
1118 + \ne i} {\delta (r - r_{ij} )} } } \right\rangle.
1119 + \]
Note that the delta function can be replaced by a histogram in a
computer simulation. Figure
\ref{introFigure:pairDistributionFunction} shows a typical pair
distribution function for liquid argon. The occurrence of several
peaks in the plot of $g(r)$ indicates that it is more likely to find
particles at certain radial separations than at others; this is a
result of the attractive interaction at such distances. Because of
the strong repulsive forces at short distances, the probability of
locating particles less than about 2.5~{\AA} from each other is
essentially zero.
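The histogram replacement mentioned above might be implemented as in the
following sketch, where the bin width, cubic box and ideal-gas
normalization are illustrative assumptions.
\begin{verbatim}
import numpy as np

def pair_distribution(positions, box, dr=0.1):
    """Histogram estimate of g(r) for a cubic box, using minimum images."""
    n = len(positions)
    bins = np.arange(0.0, box / 2.0 + dr, dr)
    hist = np.zeros(len(bins) - 1)
    for i in range(n - 1):
        rij = positions[i + 1:] - positions[i]
        rij -= box * np.round(rij / box)                 # minimum image
        hist += np.histogram(np.linalg.norm(rij, axis=1), bins=bins)[0]
    r = 0.5 * (bins[1:] + bins[:-1])
    ideal = (n / box**3) * (4.0 * np.pi * r**2 * dr) * n / 2.0
    return r, hist / ideal                               # g(r) estimate
\end{verbatim}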
1130 +
1131 + %\begin{figure}
1132 + %\centering
1133 + %\includegraphics[width=\linewidth]{pdf.eps}
1134 + %\caption[Pair distribution function for the liquid argon
1135 + %]{Pair distribution function for the liquid argon}
1136 + %\label{introFigure:pairDistributionFunction}
1137 + %\end{figure}
1138 +
1139 + \subsubsection{\label{introSection:timeDependentProperties}Time-dependent
1140 + Properties}
1141 +
Time-dependent properties are usually calculated using a \emph{time
correlation function}, which correlates random variables $A$ and $B$
at two different times:
1145 + \begin{equation}
1146 + C_{AB} (t) = \left\langle {A(t)B(0)} \right\rangle.
1147 + \label{introEquation:timeCorrelationFunction}
1148 + \end{equation}
If $A$ and $B$ refer to the same variable, this kind of correlation
function is called an \emph{autocorrelation function}. One example is
the velocity autocorrelation function, which is directly related to
the transport properties of molecular liquids:
1154 + \[
1155 + D = \frac{1}{3}\int\limits_0^\infty  {\left\langle {v(t) \cdot v(0)}
1156 + \right\rangle } dt
1157 + \]
where $D$ is the diffusion constant. Unlike the velocity
autocorrelation function, which is averaged over time origins and
over all atoms, the dipole autocorrelation function is calculated for
the entire system. The dipole autocorrelation function is given by:
\[
c_{dipole} (t) = \left\langle {u_{tot} (t) \cdot u_{tot} (0)}
\right\rangle
\]
where $u_{tot}(t)$ is the net dipole of the entire system and is
given by
1168 + \[
1169 + u_{tot} (t) = \sum\limits_i {u_i (t)}
1170 + \]
In principle, many time correlation functions can be related to
Fourier transforms of the infrared, Raman, and inelastic neutron
scattering spectra of molecular liquids. In practice, one can extract
the IR spectrum from the intensity of the dipole fluctuations at each
frequency using the following relationship:
1176 + \[
1177 + \hat c_{dipole} (v) = \int_{ - \infty }^\infty  {c_{dipole} (t)e^{ -
1178 + i2\pi vt} dt}
1179 + \]
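As an illustration of how such correlation functions are evaluated from a
stored trajectory, the velocity autocorrelation function and the
corresponding diffusion constant might be estimated as follows; the
trajectory layout and the trapezoidal integration are illustrative choices.
\begin{verbatim}
import numpy as np

def velocity_autocorrelation(velocities):
    """C(t) = <v(t0+t).v(t0)>, averaged over atoms and time origins.
    velocities : (n_frames, n_atoms, 3) array from a stored trajectory."""
    n_frames = velocities.shape[0]
    c = np.zeros(n_frames)
    for t in range(n_frames):
        dots = np.sum(velocities[t:] * velocities[:n_frames - t], axis=2)
        c[t] = dots.mean()
    return c

def diffusion_constant(c, dt):
    """D = (1/3) * integral of C(t) dt, via the trapezoidal rule."""
    return np.trapz(c, dx=dt) / 3.0
\end{verbatim}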
1180 +
1181   \section{\label{introSection:rigidBody}Dynamics of Rigid Bodies}
1182  
1183   Rigid bodies are frequently involved in the modeling of different
# Line 927 | Line 1211 | rotation matrix $A$ and re-formulating Hamiltonian's e
The breakthrough in the geometric literature suggests that, in order to
1212   develop a long-term integration scheme, one should preserve the
1213   symplectic structure of the flow. Introducing conjugate momentum to
1214 < rotation matrix $A$ and re-formulating Hamiltonian's equation, a
rotation matrix $Q$ and re-formulating Hamilton's equations, a
1215   symplectic integrator, RSHAKE, was proposed to evolve the
1216   Hamiltonian system in a constraint manifold by iteratively
1217 < satisfying the orthogonality constraint $A_t A = 1$. An alternative
satisfying the orthogonality constraint $Q^T Q = 1$. An alternative
1218   method using quaternion representation was developed by Omelyan.
1219   However, both of these methods are iterative and inefficient. In
1220   this section, we will present a symplectic Lie-Poisson integrator
# Line 1136 | Line 1420 | To reduce the cost of computing expensive functions in
1420     0 & { - \sin \theta _1 } & {\cos \theta _1 }  \\
1421   \end{array}} \right),\theta _1  = \frac{{\pi _1 }}{{I_1 }}\Delta t.
1422   \]
1423 < To reduce the cost of computing expensive functions in e^{\Delta
1424 < tR_1 }, we can use Cayley transformation,
1423 > To reduce the cost of computing expensive functions in $e^{\Delta
1424 > tR_1 }$, we can use Cayley transformation,
1425   \[
e^{\Delta tR_1 }  \approx \left( {1 - \frac{{\Delta t}}{2}R_1 } \right)^{ - 1} \left( {1 + \frac{{\Delta t}}{2}R_1 } \right)
1428   \]
1429 <
1146 < The flow maps for $T_2^r$ and $T_2^r$ can be found in the same
1429 > The flow maps for $T_2^r$ and $T_3^r$ can be found in the same
1430   manner.
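As a quick numerical check of this approximation (using the Pad\'e,
half-argument form of the Cayley transform assumed here), one can compare
it against the exact matrix exponential of a skew-symmetric generator:
\begin{verbatim}
import numpy as np
from scipy.linalg import expm

def cayley(R, dt):
    """Cayley (Pade) approximation to exp(dt*R), accurate to second order."""
    I = np.eye(R.shape[0])
    return np.linalg.solve(I - 0.5 * dt * R, I + 0.5 * dt * R)

R1 = np.array([[0.0, 0.0, 0.0],        # skew-symmetric generator of a
               [0.0, 0.0, 1.0],        # rotation about the first axis
               [0.0, -1.0, 0.0]])
dt = 0.01
print(np.max(np.abs(cayley(R1, dt) - expm(dt * R1))))   # O(dt^3) difference
\end{verbatim}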
1431  
1432   In order to construct a second-order symplectic method, we split the
# Line 1213 | Line 1496 | Moreover, \varphi _{\Delta t/2,V} can be divided into
1496   \varphi _{\Delta t}  = \varphi _{\Delta t/2,V}  \circ \varphi
1497   _{\Delta t,T}  \circ \varphi _{\Delta t/2,V}.
1498   \]
1499 < Moreover, \varphi _{\Delta t/2,V} can be divided into two sub-flows
1500 < which corresponding to force and torque respectively,
Moreover, $\varphi _{\Delta t/2,V}$ can be divided into two
sub-flows which correspond to the force and torque respectively,
1501   \[
1502   \varphi _{\Delta t/2,V}  = \varphi _{\Delta t/2,F}  \circ \varphi
1503   _{\Delta t/2,\tau }.
1504   \]
Since the associated operators of $\varphi _{\Delta t/2,F} $ and
$\varphi _{\Delta t/2,\tau }$ commute, the composition
1507 < order inside \varphi _{\Delta t/2,V} does not matter.
1507 > order inside $\varphi _{\Delta t/2,V}$ does not matter.
1508  
1509   Furthermore, kinetic potential can be separated to translational
1510   kinetic term, $T^t (p)$, and rotational kinetic term, $T^r (\pi )$,
# Line 1251 | Line 1534 | generalized Langevin Dynamics will be given first. Fol
1534   mimics a simple heat bath with stochastic and dissipative forces,
1535   has been applied in a variety of studies. This section will review
1536   the theory of Langevin dynamics simulation. A brief derivation of
1537 < generalized Langevin Dynamics will be given first. Follow that, we
generalized Langevin equation will be given first. Following that, we
1538   will discuss the physical meaning of the terms appearing in the
1539   equation as well as the calculation of friction tensor from
1540   hydrodynamics theory.
1541  
1542 < \subsection{\label{introSection:generalizedLangevinDynamics}Generalized Langevin Dynamics}
1542 > \subsection{\label{introSection:generalizedLangevinDynamics}Derivation of Generalized Langevin Equation}
1543  
The harmonic bath model, in which an effective set of harmonic
oscillators is used to mimic the effect of a linearly responding
environment, has been widely used in quantum chemistry and
statistical mechanics. One of the successful applications of the
harmonic bath model is the derivation of the generalized Langevin
equation. Let us consider a system in which the degree of freedom $x$
is assumed to couple linearly to the bath, giving a Hamiltonian of
the form
1552   \begin{equation}
1553   H = \frac{{p^2 }}{{2m}} + U(x) + H_B  + \Delta U(x,x_1 , \ldots x_N)
1554 < \label{introEquation:bathGLE}
1554 > \label{introEquation:bathGLE}.
1555   \end{equation}
1556 < where $H_B$ is harmonic bath Hamiltonian,
Here $p$ is the momentum conjugate to $x$, $m$ is the mass
associated with this degree of freedom, and $H_B$ is the harmonic
bath Hamiltonian,
1558   \[
1559 < H_B  =\sum\limits_{\alpha  = 1}^N {\left\{ {\frac{{p_\alpha ^2
1560 < }}{{2m_\alpha  }} + \frac{1}{2}m_\alpha  w_\alpha ^2 } \right\}}
H_B  = \sum\limits_{\alpha  = 1}^N {\left\{ {\frac{{p_\alpha ^2
}}{{2m_\alpha  }} + \frac{1}{2}m_\alpha  \omega _\alpha ^2 x_\alpha ^2 }
\right\}}
1562   \]
1563 < and $\Delta U$ is bilinear system-bath coupling,
1563 > where the index $\alpha$ runs over all the bath degrees of freedom,
1564 > $\omega _\alpha$ are the harmonic bath frequencies, $m_\alpha$ are
1565 > the harmonic bath masses, and $\Delta U$ is bilinear system-bath
1566 > coupling,
1567   \[
1568   \Delta U =  - \sum\limits_{\alpha  = 1}^N {g_\alpha  x_\alpha  x}
1569   \]
1570 < Completing the square,
1570 > where $g_\alpha$ are the coupling constants between the bath and the
1571 > coordinate $x$. Introducing
1572   \[
1573 < H_B  + \Delta U = \sum\limits_{\alpha  = 1}^N {\left\{
1574 < {\frac{{p_\alpha ^2 }}{{2m_\alpha  }} + \frac{1}{2}m_\alpha
1575 < w_\alpha ^2 \left( {x_\alpha   - \frac{{g_\alpha  }}{{m_\alpha
1576 < w_\alpha ^2 }}x} \right)^2 } \right\}}  - \sum\limits_{\alpha  =
1577 < 1}^N {\frac{{g_\alpha ^2 }}{{2m_\alpha  w_\alpha ^2 }}} x^2
1281 < \]
1282 < and putting it back into Eq.~\ref{introEquation:bathGLE},
W(x) = U(x) - \sum\limits_{\alpha  = 1}^N {\frac{{g_\alpha ^2
}}{{2m_\alpha  \omega _\alpha ^2 }}} x^2
\]
and combining the last two terms in Equation
\ref{introEquation:bathGLE}, we may rewrite the Hamiltonian as
1578   \[
1579   H = \frac{{p^2 }}{{2m}} + W(x) + \sum\limits_{\alpha  = 1}^N
1580   {\left\{ {\frac{{p_\alpha ^2 }}{{2m_\alpha  }} + \frac{1}{2}m_\alpha
\omega _\alpha ^2 \left( {x_\alpha   - \frac{{g_\alpha  }}{{m_\alpha
\omega _\alpha ^2 }}x} \right)^2 } \right\}}
1583   \]
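This rearrangement is simply the completing-the-square identity
\[
H_B  + \Delta U = \sum\limits_{\alpha  = 1}^N {\left\{ {\frac{{p_\alpha ^2 }}{{2m_\alpha  }} + \frac{1}{2}m_\alpha  \omega _\alpha ^2 \left( {x_\alpha   - \frac{{g_\alpha  }}{{m_\alpha  \omega _\alpha ^2 }}x} \right)^2 } \right\}}  - \sum\limits_{\alpha  = 1}^N {\frac{{g_\alpha ^2 }}{{2m_\alpha  \omega _\alpha ^2 }}} x^2 ,
\]
in which the final term has been absorbed into the definition of $W(x)$.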
Since the first two terms of the new Hamiltonian depend only on the
system coordinates, we can obtain the equations of motion for
generalized Langevin dynamics from Hamilton's equations,
Eqs.~\ref{introEquation:motionHamiltonianCoordinate} and
\ref{introEquation:motionHamiltonianMomentum},
1589 < \begin{align}
1590 < \dot p &=  - \frac{{\partial H}}{{\partial x}}
1591 <       &= m\ddot x
1592 <       &= - \frac{{\partial W(x)}}{{\partial x}} - \sum\limits_{\alpha  = 1}^N {g_\alpha  \left( {x_\alpha   - \frac{{g_\alpha  }}{{m_\alpha  w_\alpha ^2 }}x} \right)}
1593 < \label{introEquation:Lp5}
1594 < \end{align}
1595 < , and
1596 < \begin{align}
1597 < \dot p_\alpha   &=  - \frac{{\partial H}}{{\partial x_\alpha  }}
1598 <                &= m\ddot x_\alpha
1599 <                &= \- m_\alpha  w_\alpha ^2 \left( {x_\alpha   - \frac{{g_\alpha}}{{m_\alpha  w_\alpha ^2 }}x} \right)
1600 < \end{align}
1589 > \begin{equation}
1590 > m\ddot x =  - \frac{{\partial W(x)}}{{\partial x}} -
1591 > \sum\limits_{\alpha  = 1}^N {g_\alpha  \left( {x_\alpha   -
\frac{{g_\alpha  }}{{m_\alpha  \omega _\alpha ^2 }}x} \right)},
1593 > \label{introEquation:coorMotionGLE}
1594 > \end{equation}
1595 > and
1596 > \begin{equation}
m_\alpha  \ddot x_\alpha   =  - m_\alpha  \omega _\alpha ^2 \left( {x_\alpha   -
\frac{{g_\alpha  }}{{m_\alpha  \omega _\alpha ^2 }}x} \right).
1599 > \label{introEquation:bathMotionGLE}
1600 > \end{equation}
1601  
1602 < \subsection{\label{introSection:laplaceTransform}The Laplace Transform}
In order to derive an equation for $x$, the dynamics of the bath
variables $x_\alpha$ must first be solved exactly. As an integral
transform particularly suited to linear ordinary differential
equations, the Laplace transform is the appropriate tool for this
problem. The basic idea is to convert the differential equations into
simple algebraic problems which can be solved easily; applying the
inverse Laplace transform, also known as the Bromwich integral, then
recovers the solutions of the original problem.
1611  
Let $f(t)$ be a function defined on $[0,\infty)$. The Laplace
transform of $f(t)$ is a new function defined as
1614   \[
1615 < L(x) = \int_0^\infty  {x(t)e^{ - pt} dt}
1615 > L(f(t)) \equiv F(p) = \int_0^\infty  {f(t)e^{ - pt} dt}
1616   \]
where $p$ is real and $L$ is called the Laplace transform operator.
Below are some important properties of the Laplace transform:
1619 + \begin{equation}
1620 + \begin{array}{c}
1621 + L(x + y) = L(x) + L(y) \\
1622 + L(ax) = aL(x) \\
1623 + L(\dot x) = pL(x) - px(0) \\
1624 + L(\ddot x) = p^2 L(x) - px(0) - \dot x(0) \\
1625 + L\left( {\int_0^t {g(t - \tau )h(\tau )d\tau } } \right) = G(p)H(p) \\
1626 + \end{array}
1627 + \end{equation}
1628  
1629 + Applying Laplace transform to the bath coordinates, we obtain
1630   \[
1631 < L(x + y) = L(x) + L(y)
1631 > \begin{array}{c}
1632 > p^2 L(x_\alpha  ) - px_\alpha  (0) - \dot x_\alpha  (0) =  - \omega _\alpha ^2 L(x_\alpha  ) + \frac{{g_\alpha  }}{{\omega _\alpha  }}L(x) \\
1633 > L(x_\alpha  ) = \frac{{\frac{{g_\alpha  }}{{\omega _\alpha  }}L(x) + px_\alpha  (0) + \dot x_\alpha  (0)}}{{p^2  + \omega _\alpha ^2 }} \\
1634 > \end{array}
1635   \]
1636 <
1636 > By the same way, the system coordinates become
1637   \[
1638 < L(ax) = aL(x)
1638 > \begin{array}{c}
1639 > mL(\ddot x) =  - \frac{1}{p}\frac{{\partial W(x)}}{{\partial x}} \\
1640 >  - \sum\limits_{\alpha  = 1}^N {\left\{ { - \frac{{g_\alpha ^2 }}{{m_\alpha  \omega _\alpha ^2 }}\frac{p}{{p^2  + \omega _\alpha ^2 }}pL(x) - \frac{p}{{p^2  + \omega _\alpha ^2 }}g_\alpha  x_\alpha  (0) - \frac{1}{{p^2  + \omega _\alpha ^2 }}g_\alpha  \dot x_\alpha  (0)} \right\}}  \\
1641 > \end{array}
1642   \]
1643  
1644 + With the help of some relatively important inverse Laplace
1645 + transformations:
1646   \[
1647 < L(\dot x) = pL(x) - px(0)
1647 > \begin{array}{c}
1648 > L(\cos at) = \frac{p}{{p^2  + a^2 }} \\
1649 > L(\sin at) = \frac{a}{{p^2  + a^2 }} \\
1650 > L(1) = \frac{1}{p} \\
1651 > \end{array}
1652   \]
1653 <
1330 < \[
1331 < L(\ddot x) = p^2 L(x) - px(0) - \dot x(0)
1332 < \]
1333 <
1334 < \[
1335 < L\left( {\int_0^t {g(t - \tau )h(\tau )d\tau } } \right) = G(p)H(p)
1336 < \]
1337 <
1338 < Some relatively important transformation,
1339 < \[
1340 < L(\cos at) = \frac{p}{{p^2  + a^2 }}
1341 < \]
1342 <
1343 < \[
1344 < L(\sin at) = \frac{a}{{p^2  + a^2 }}
1345 < \]
1346 <
1347 < \[
1348 < L(1) = \frac{1}{p}
1349 < \]
1350 <
1351 < First, the bath coordinates,
1352 < \[
1353 < p^2 L(x_\alpha  ) - px_\alpha  (0) - \dot x_\alpha  (0) =  - \omega
1354 < _\alpha ^2 L(x_\alpha  ) + \frac{{g_\alpha  }}{{\omega _\alpha
1355 < }}L(x)
1356 < \]
1357 < \[
1358 < L(x_\alpha  ) = \frac{{\frac{{g_\alpha  }}{{\omega _\alpha  }}L(x) +
1359 < px_\alpha  (0) + \dot x_\alpha  (0)}}{{p^2  + \omega _\alpha ^2 }}
1360 < \]
1361 < Then, the system coordinates,
1362 < \begin{align}
1363 < mL(\ddot x) &=  - \frac{1}{p}\frac{{\partial W(x)}}{{\partial x}} -
1364 < \sum\limits_{\alpha  = 1}^N {\left\{ {\frac{{\frac{{g_\alpha
1365 < }}{{\omega _\alpha  }}L(x) + px_\alpha  (0) + \dot x_\alpha
1366 < (0)}}{{p^2  + \omega _\alpha ^2 }} - \frac{{g_\alpha ^2 }}{{m_\alpha
1367 < }}\omega _\alpha ^2 L(x)} \right\}}
1368 < %
1369 < &= - \frac{1}{p}\frac{{\partial W(x)}}{{\partial x}} -
1370 < \sum\limits_{\alpha  = 1}^N {\left\{ { - \frac{{g_\alpha ^2 }}{{m_\alpha  \omega _\alpha ^2 }}\frac{p}{{p^2  + \omega _\alpha ^2 }}pL(x)
1371 < - \frac{p}{{p^2  + \omega _\alpha ^2 }}g_\alpha  x_\alpha  (0)
1372 < - \frac{1}{{p^2  + \omega _\alpha ^2 }}g_\alpha  \dot x_\alpha  (0)} \right\}}
1373 < \end{align}
1374 < Then, the inverse transform,
1375 <
1653 > , we obtain
1654   \begin{align}
1655   m\ddot x &=  - \frac{{\partial W(x)}}{{\partial x}} -
1656   \sum\limits_{\alpha  = 1}^N {\left\{ {\left( { - \frac{{g_\alpha ^2
# Line 1392 | Line 1670 | t)\dot x(t - \tau )d} \tau }  + \sum\limits_{\alpha  =
1670   (\omega _\alpha  t)} \right\}}
1671   \end{align}
1672  
1673 + Introducing a \emph{dynamic friction kernel}
1674   \begin{equation}
1675 + \xi (t) = \sum\limits_{\alpha  = 1}^N {\left( { - \frac{{g_\alpha ^2
1676 + }}{{m_\alpha  \omega _\alpha ^2 }}} \right)\cos (\omega _\alpha  t)}
1677 + \label{introEquation:dynamicFrictionKernelDefinition}
1678 + \end{equation}
and a \emph{random force}
1680 + \begin{equation}
R(t) = \sum\limits_{\alpha  = 1}^N {\left[ {\left( {g_\alpha  x_\alpha  (0)
- \frac{{g_\alpha ^2 }}{{m_\alpha  \omega _\alpha ^2 }}x(0)}
\right)\cos (\omega _\alpha  t) + \frac{{g_\alpha  \dot x_\alpha
(0)}}{{\omega _\alpha  }}\sin (\omega _\alpha  t)} \right]},
1685 + \label{introEquation:randomForceDefinition}
1686 + \end{equation}
1687 + the equation of motion can be rewritten as
1688 + \begin{equation}
m\ddot x =  - \frac{{\partial W}}{{\partial x}} - \int_0^t {\xi
(\tau )\dot x(t - \tau )d\tau }  + R(t)
1691   \label{introEuqation:GeneralizedLangevinDynamics}
1692   \end{equation}
1693 < %where $ {\xi (t)}$ is friction kernel, $R(t)$ is random force and
1694 < %$W$ is the potential of mean force. $W(x) =  - kT\ln p(x)$
1693 > which is known as the \emph{generalized Langevin equation}.
1694 >
1695 > \subsubsection{\label{introSection:randomForceDynamicFrictionKernel}Random Force and Dynamic Friction Kernel}
1696 >
1697 > One may notice that $R(t)$ depends only on initial conditions, which
1698 > implies it is completely deterministic within the context of a
1699 > harmonic bath. However, it is easy to verify that $R(t)$ is totally
1700 > uncorrelated to $x$ and $\dot x$,
1701   \[
1702 < \xi (t) = \sum\limits_{\alpha  = 1}^N {\left( { - \frac{{g_\alpha ^2
1703 < }}{{m_\alpha  \omega _\alpha ^2 }}} \right)\cos (\omega _\alpha  t)}
1702 > \begin{array}{l}
1703 > \left\langle {x(t)R(t)} \right\rangle  = 0, \\
1704 > \left\langle {\dot x(t)R(t)} \right\rangle  = 0. \\
1705 > \end{array}
1706   \]
1707 < For an infinite harmonic bath, we can use the spectral density and
1708 < an integral over frequencies.
This property is what we expect from a truly random process. As long
as the model chosen for $R(t)$, which is generally a Gaussian
distribution, is a truly random process, the stochastic nature of the
GLE is preserved.
1711  
1712 + %dynamic friction kernel
1713 + The convolution integral
1714   \[
1715 < R(t) = \sum\limits_{\alpha  = 1}^N {\left( {g_\alpha  x_\alpha  (0)
1411 < - \frac{{g_\alpha ^2 }}{{m_\alpha  \omega _\alpha ^2 }}x(0)}
1412 < \right)\cos (\omega _\alpha  t)}  + \frac{{\dot x_\alpha
1413 < (0)}}{{\omega _\alpha  }}\sin (\omega _\alpha  t)
\int_0^t {\xi (\tau )\dot x(t - \tau )d\tau }
1716   \]
1717 < The random forces depend only on initial conditions.
depends on the entire history of the evolution of $x$, which implies
that the bath retains memory of previous motions. In other words, the
bath requires a finite time to respond to changes in the motion of
the system. For a sluggish bath which responds slowly to changes in
the system coordinate, we may regard $\xi(t)$ as a constant,
$\xi(t) = \xi _0$. Hence, the convolution integral becomes
1723 > \[
\int_0^t {\xi (\tau )\dot x(t - \tau )d\tau }  = \xi _0 (x(t) - x(0))
1725 > \]
1726 > and Equation \ref{introEuqation:GeneralizedLangevinDynamics} becomes
1727 > \[
1728 > m\ddot x =  - \frac{\partial }{{\partial x}}\left( {W(x) +
1729 > \frac{1}{2}\xi _0 (x - x_0 )^2 } \right) + R(t),
1730 > \]
which can be used to describe the dynamic caging effect. The other
extreme is a bath that responds infinitely quickly to motions in the
system, so that $\xi (t)$ can be taken as a $\delta$ function in
time:
1735 > \[
1736 > \xi (t) = 2\xi _0 \delta (t)
1737 > \]
1738 > Hence, the convolution integral becomes
1739 > \[
\int_0^t {\xi (\tau )\dot x(t - \tau )d\tau }  = 2\xi _0 \int_0^t
{\delta (\tau )\dot x(t - \tau )d\tau }  = \xi _0 \dot x(t),
1742 > \]
1743 > and Equation \ref{introEuqation:GeneralizedLangevinDynamics} becomes
1744 > \begin{equation}
1745 > m\ddot x =  - \frac{{\partial W(x)}}{{\partial x}} - \xi _0 \dot
1746 > x(t) + R(t) \label{introEquation:LangevinEquation}
1747 > \end{equation}
which is known as the Langevin equation. The static friction
coefficient $\xi _0$ can either be calculated from the spectral
density or be determined from Stokes' law for regularly shaped
particles. A brief review of the calculation of friction tensors for
arbitrarily shaped particles is given in
Sec.~\ref{introSection:frictionTensor}.
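A minimal sketch of integrating Eq.~\ref{introEquation:LangevinEquation} is
given below, with the random force drawn so that it satisfies the
fluctuation dissipation relation for the chosen $\xi_0$; the Euler-type
discretization, the Stokes-law friction for a 1~nm sphere in water, and the
harmonic test force are illustrative assumptions.
\begin{verbatim}
import numpy as np

KB = 1.380649e-23  # J/K

def langevin_step(x, v, force, m, xi0, T, dt):
    """One Euler-type step of m dv/dt = F(x) - xi0*v + R(t), where the
    discretized random force has variance 2*xi0*kB*T/dt."""
    R = np.sqrt(2.0 * xi0 * KB * T / dt) * np.random.normal(size=np.shape(x))
    v = v + dt * (force(x) - xi0 * v + R) / m
    x = x + dt * v
    return x, v

eta, radius = 8.9e-4, 1.0e-9            # Pa*s, m (hypothetical sphere)
xi0 = 6.0 * np.pi * eta * radius        # Stokes' law friction coefficient
m = 1.0e-21                             # kg, hypothetical mass
force = lambda x: -1.0e-3 * x           # hypothetical harmonic restoring force
x, v = np.zeros(3), np.zeros(3)
for _ in range(1000):
    x, v = langevin_step(x, v, force, m, xi0, T=300.0, dt=1.0e-12)
\end{verbatim}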
1753  
1754   \subsubsection{\label{introSection:secondFluctuationDissipation}The Second Fluctuation Dissipation Theorem}
1755 < So we can define a new set of coordinates,
1755 >
1756 > Defining a new set of coordinates,
1757   \[
q_\alpha  (t) = x_\alpha  (t) - \frac{{g_\alpha  }}{{m_\alpha  \omega _\alpha ^2 }}x(0)
1760 < \]
1761 < This makes
\]
we can rewrite $R(t)$ as
1762   \[
1763 < R(t) = \sum\limits_{\alpha  = 1}^N {g_\alpha  q_\alpha  (t)}
1763 > R(t) = \sum\limits_{\alpha  = 1}^N {g_\alpha  q_\alpha  (t)}.
1764   \]
1765   And since the $q$ coordinates are harmonic oscillators,
1766   \[
1767 < \begin{array}{l}
1767 > \begin{array}{c}
1768 > \left\langle {q_\alpha ^2 } \right\rangle  = \frac{{kT}}{{m_\alpha  \omega _\alpha ^2 }} \\
1769   \left\langle {q_\alpha  (t)q_\alpha  (0)} \right\rangle  = \left\langle {q_\alpha ^2 (0)} \right\rangle \cos (\omega _\alpha  t) \\
1770   \left\langle {q_\alpha  (t)q_\beta  (0)} \right\rangle  = \delta _{\alpha \beta } \left\langle {q_\alpha  (t)q_\alpha  (0)} \right\rangle  \\
1771 + \left\langle {R(t)R(0)} \right\rangle  = \sum\limits_\alpha  {\sum\limits_\beta  {g_\alpha  g_\beta  \left\langle {q_\alpha  (t)q_\beta  (0)} \right\rangle } }  \\
1772 +  = \sum\limits_\alpha  {g_\alpha ^2 \left\langle {q_\alpha ^2 (0)} \right\rangle \cos (\omega _\alpha  t)}  \\
1773 +  = kT\xi (t) \\
1774   \end{array}
1775   \]
1776 <
1435 < \begin{align}
1436 < \left\langle {R(t)R(0)} \right\rangle  &= \sum\limits_\alpha
1437 < {\sum\limits_\beta  {g_\alpha  g_\beta  \left\langle {q_\alpha
1438 < (t)q_\beta  (0)} \right\rangle } }
1439 < %
1440 < &= \sum\limits_\alpha  {g_\alpha ^2 \left\langle {q_\alpha ^2 (0)}
1441 < \right\rangle \cos (\omega _\alpha  t)}
1442 < %
1443 < &= kT\xi (t)
1444 < \end{align}
1445 <
1776 > Thus, we recover the \emph{second fluctuation dissipation theorem}
1777   \begin{equation}
\xi (t) = \frac{1}{{kT}}\left\langle {R(t)R(0)} \right\rangle
1779 < \label{introEquation:secondFluctuationDissipation}
1779 > \label{introEquation:secondFluctuationDissipation}.
1780   \end{equation}
1781 + In effect, it acts as a constraint on the possible ways in which one
1782 + can model the random force and friction kernel.
1783  
1784   \subsection{\label{introSection:frictionTensor} Friction Tensor}
1785   Theoretically, the friction kernel can be determined using velocity
# Line 1622 | Line 1955 | where \delta _{ij} is Kronecker delta function. Invert
1955   B_{ij}  = \delta _{ij} \frac{I}{{6\pi \eta R}} + (1 - \delta _{ij}
1956   )T_{ij}
1957   \]
1958 < where \delta _{ij} is Kronecker delta function. Inverting matrix
where $\delta _{ij}$ is the Kronecker delta function. Inverting the matrix
1959   $B$, we obtain
1960  
1961   \[
# Line 1666 | Line 1999 | translation-rotation coupling resistance tensor do dep
From Equation \ref{introEquation:ResistanceTensorArbitraryOrigin},
we can easily see that the translational resistance tensor is
2001   origin independent, while the rotational resistance tensor and
2002 < translation-rotation coupling resistance tensor do depend on the
2002 > translation-rotation coupling resistance tensor depend on the
origin. Given the resistance tensor at an arbitrary origin $O$ and a
vector $r_{OP}(x_{OP}, y_{OP}, z_{OP})$ from $O$ to $P$, we can
2005   obtain the resistance tensor at $P$ by
# Line 1706 | Line 2039 | joining center of resistance $R$ and origin $O$.
2039   \]
where $x_{OR}$, $y_{OR}$, $z_{OR}$ are the components of the vector
joining the center of resistance $R$ and the origin $O$.
1709
1710 %\section{\label{introSection:correlationFunctions}Correlation Functions}

Diff Legend

Removed lines
+ Added lines
< Changed lines
> Changed lines