
Comparing trunk/tengDissertation/Introduction.tex (file contents):
Revision 2904 by tim, Wed Jun 28 17:36:32 2006 UTC vs.
Revision 2905 by tim, Wed Jun 28 19:09:25 2006 UTC

# Line 88 | Line 88 | kinetic, $K$, and potential energies, $U$,
88   trajectory, along which a dynamical system may move from one point
89   to another within a specified time, is derived by finding the path
90   which minimizes the time integral of the difference between the
91 < kinetic, $K$, and potential energies, $U$,
91 > kinetic energy, $K$, and the potential energy, $U$,
92   \begin{equation}
93   \delta \int_{t_1 }^{t_2 } {(K - U)dt = 0}.
94   \label{introEquation:halmitonianPrinciple1}
# Line 243 | Line 243 | function of the time.
243   Governed by the principles of mechanics, the phase points change
244   their locations, which changes the density at any point in phase
245   space. Hence, the density distribution must also be treated as a
246 < function of the time.
247 <
248 < The number of systems $\delta N$ at time $t$ can be determined by,
246 > function of time. The number of systems $\delta N$ at time $t$
247 > can be determined by,
248   \begin{equation}
249   \delta N = \rho (q,p,t)dq_1  \ldots dq_f dp_1  \ldots dp_f.
250   \label{introEquation:deltaN}
# Line 278 | Line 277 | space of the system,
277   \begin{equation}
278   \langle  A(q , p) \rangle_t = \frac{{\int { \ldots \int {A(q,p)\rho
279   (q,p,t)dq_1 } ...dq_f dp_1 } ...dp_f }}{{\int { \ldots \int {\rho
280 < (q,p,t)dq_1 } ...dq_f dp_1 } ...dp_f }}
280 > (q,p,t)dq_1 } ...dq_f dp_1 } ...dp_f }}.
281   \label{introEquation:ensembelAverage}
282   \end{equation}
283  
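A discretized picture of Eq.~\ref{introEquation:ensembelAverage} may help: the sketch below (an illustration only, not taken from this work; the observable, sample points, and weights are hypothetical) replaces the phase-space integrals with weighted sums over sampled points $(q,p)$.
\begin{verbatim}
import numpy as np

# Minimal sketch: the ensemble average <A>_t as a ratio of weighted sums
# over sampled phase-space points, mirroring the integrals above.
def ensemble_average(A, phase_points, rho):
    """Weighted phase-space average of the observable A(q, p)."""
    values = np.array([A(q, p) for q, p in phase_points])
    weights = np.asarray(rho, dtype=float)
    return np.sum(weights * values) / np.sum(weights)

rng = np.random.default_rng(0)
pts = rng.normal(size=(1000, 2))                    # hypothetical (q, p) samples
rho = np.exp(-0.5 * (pts[:, 0]**2 + pts[:, 1]**2))  # assumed (unnormalized) density
print(ensemble_average(lambda q, p: 0.5 * p**2, pts, rho))
\end{verbatim}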
# Line 292 | Line 291 | A canonical ensemble (NVT)is an ensemble of systems, e
291   \begin{equation}
292   \Omega (N,V,E) = e^{\beta TS}. \label{introEquation:NVEPartition}
293   \end{equation}
294 < A canonical ensemble (NVT)is an ensemble of systems, each of which
294 > A canonical ensemble (NVT) is an ensemble of systems, each of which
295   can share its energy with a large heat reservoir. The distribution
296   of the total energy amongst the possible dynamical states is given
297   by the partition function,
# Line 410 | Line 409 | Substituting equations of motion in Hamiltonian formal
409   q_i }}} \right)}.
410   \label{introEquation:poissonBracket}
411   \end{equation}
412 < Substituting equations of motion in Hamiltonian formalism(
413 < Eq.~\ref{introEquation:motionHamiltonianCoordinate} ,
414 < Eq.~\ref{introEquation:motionHamiltonianMomentum} ) into
412 > Substituting the equations of motion in the Hamiltonian formalism
413 > (Eqs.~\ref{introEquation:motionHamiltonianCoordinate} and
414 > \ref{introEquation:motionHamiltonianMomentum}) into
415   Eq.~\ref{introEquation:liouvilleTheorem}, we can rewrite
416   Liouville's theorem using Poisson bracket notation,
417   \begin{equation}
# Line 446 | Line 445 | statistical ensemble are identical \cite{Frenkel1996,
445   many-body system in Statistical Mechanics. Fortunately, the Ergodic
446   Hypothesis makes a connection between the time average and the ensemble
447   average. It states that the time average and the average over the
448 < statistical ensemble are identical \cite{Frenkel1996, Leach2001}.
448 > statistical ensemble are identical \cite{Frenkel1996, Leach2001}:
449   \begin{equation}
450   \langle A(q , p) \rangle_t = \mathop {\lim }\limits_{t \to \infty }
451   \frac{1}{t}\int\limits_0^t {A(q(t),p(t))dt = \int\limits_\Gamma
# Line 460 | Line 459 | reasonable, the Monte Carlo techniques\cite{Metropolis
459   a properly weighted statistical average. This allows the researcher
460   freedom of choice when deciding how best to measure a given
461   observable. If an ensemble-averaged approach seems most
462 < reasonable, the Monte Carlo techniques\cite{Metropolis1949} can be
462 > reasonable, the Monte Carlo methods\cite{Metropolis1949} can be
463   utilized; if the system lends itself to a time-averaging
464   approach, the Molecular Dynamics techniques in
465   Sec.~\ref{introSection:molecularDynamics} will be the best
# Line 510 | Line 509 | where $x = x(q,p)^T$, this system is a canonical Hamil
509   \dot x = f(x)
510   \end{equation}
511   where $x = x(q,p)^T$. This system is a canonical Hamiltonian system if
512 + $f(x) = J\nabla _x H(x)$. Here, $H = H(q, p)$ is the Hamiltonian
513 + function and $J$ is the skew-symmetric matrix
514   \begin{equation}
514 f(r) = J\nabla _x H(r).
515 \end{equation}
516 $H = H (q, p)$ is Hamiltonian function and $J$ is the skew-symmetric
517 matrix
518 \begin{equation}
515   J = \left( {\begin{array}{*{20}c}
516     0 & I  \\
517     { - I} & 0  \\
# Line 525 | Line 521 | system can be rewritten as,
521   where $I$ is the identity matrix. Using this notation, the Hamiltonian
522   system can be rewritten as,
523   \begin{equation}
524 < \frac{d}{{dt}}x = J\nabla _x H(x)
524 > \frac{d}{{dt}}x = J\nabla _x H(x).
525   \label{introEquation:compactHamiltonian}
526   \end{equation}In this case, $f$ is
527   called a \emph{Hamiltonian vector field}. Another generalization of
# Line 537 | Line 533 | Let $x(t)$ be the exact solution of the ODE system,
533  
534   \subsection{\label{introSection:exactFlow}Exact Flow}
535  
536 < Let $x(t)$ be the exact solution of the ODE system,
537 < \begin{equation}
538 < \frac{{dx}}{{dt}} = f(x) \label{introEquation:ODE}
539 < \end{equation}
540 < The exact flow(solution) $\varphi_\tau$ is defined by
545 < \[
546 < x(t+\tau) =\varphi_\tau(x(t))
536 > Let $x(t)$ be the exact solution of the ODE
537 > system, $\frac{{dx}}{{dt}} = f(x) \label{introEquation:ODE}$. We can
538 > define its exact flow (solution) $\varphi_\tau$ by
539 > \[ x(t+\tau)
540 > =\varphi_\tau(x(t))
541   \]
542   where $\tau$ is a fixed time step and $\varphi$ is a map from phase
543   space to itself. The flow has the continuous group property,
# Line 565 | Line 559 | The exact flow can also be written in terms of the of
559   }{{\partial x_i }}} } (x) \equiv \exp (\tau f)(x).
560   \label{introEquation:exponentialOperator}
561   \end{equation}
568
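To make the exact flow concrete, the short sketch below (an illustration only, assuming a unit-mass, unit-frequency harmonic oscillator with $H = (p^2 + q^2)/2$) evaluates $\varphi_\tau$ in closed form, checks the continuous group property, and compares it with a crude approximate map of the kind discussed next.
\begin{verbatim}
import numpy as np

def exact_flow(q, p, tau):
    """Exact flow of the harmonic oscillator: a rotation in phase space."""
    c, s = np.cos(tau), np.sin(tau)
    return c * q + s * p, -s * q + c * p

def euler_map(q, p, tau):
    """A simple first-order approximate map (explicit Euler)."""
    return q + tau * p, p - tau * q

q0, p0, tau = 1.0, 0.0, 0.1
# Group property: varphi_{2 tau} equals varphi_tau composed with varphi_tau.
print(np.allclose(exact_flow(*exact_flow(q0, p0, tau), tau),
                  exact_flow(q0, p0, 2 * tau)))
# The Euler map deviates from the exact flow by O(tau^2) per step.
print(np.abs(np.array(euler_map(q0, p0, tau)) -
             np.array(exact_flow(q0, p0, tau))))
\end{verbatim}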
562   In most cases, it is not easy to find the exact flow $\varphi_\tau$.
563   Instead, we use an approximate map, $\psi_\tau$, which is usually
564   called an integrator. The order of an integrator $\psi_\tau$ is $p$, if
# Line 612 | Line 605 | Using chain rule, one may obtain,
605   \]
606   Using the chain rule, one may obtain,
607   \[
608 < \sum\limits_i {\frac{{dG}}{{dx_i }}} f_i (x) = f \bullet \nabla G,
608 > \sum\limits_i {\frac{{dG}}{{dx_i }}} f_i (x) = f \cdot \nabla G,
609   \]
610   which is the condition for conserving a \emph{first integral}. For a
611   canonical Hamiltonian system, the time evolution of an arbitrary
612   smooth function $G$ is given by,
613   \begin{eqnarray}
614 < \frac{{dG(x(t))}}{{dt}} & = & [\nabla _x G(x(t))]^T \dot x(t) \\
615 <                        & = & [\nabla _x G(x(t))]^T J\nabla _x H(x(t)). \\
614 > \frac{{dG(x(t))}}{{dt}} & = & [\nabla _x G(x(t))]^T \dot x(t) \notag\\
615 >                        & = & [\nabla _x G(x(t))]^T J\nabla _x H(x(t)).
616   \label{introEquation:firstIntegral1}
617   \end{eqnarray}
618 < Using poisson bracket notion, Equation
619 < \ref{introEquation:firstIntegral1} can be rewritten as
618 > Using Poisson bracket notation, Eq.~\ref{introEquation:firstIntegral1}
619 > can be rewritten as
620   \[
621   \frac{d}{{dt}}G(x(t)) = \left\{ {G,H} \right\}(x(t)).
622   \]
623   Therefore, the sufficient condition for $G$ to be a \emph{first
624 < integral} of a Hamiltonian system is
632 < \[
633 < \left\{ {G,H} \right\} = 0.
634 < \]
624 > integral} of a Hamiltonian system is $\left\{ {G,H} \right\} = 0.$
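The condition $\left\{ {G,H} \right\} = 0$ is easy to test numerically. The sketch below (illustrative only; the planar Kepler-type Hamiltonian and the finite-difference gradients are assumptions, not anything used in this work) evaluates $(\nabla _x G)^T J\nabla _x H$ for the angular momentum $G = q_1 p_2 - q_2 p_1$, a first integral of a central-force system.
\begin{verbatim}
import numpy as np

def H(x):                      # x = (q1, q2, p1, p2)
    q, p = x[:2], x[2:]
    return 0.5 * np.dot(p, p) - 1.0 / np.linalg.norm(q)

def G(x):                      # angular momentum, a first integral of H
    q, p = x[:2], x[2:]
    return q[0] * p[1] - q[1] * p[0]

def grad(f, x, h=1e-6):
    g = np.zeros_like(x)
    for i in range(len(x)):
        e = np.zeros_like(x); e[i] = h
        g[i] = (f(x + e) - f(x - e)) / (2 * h)
    return g

J = np.block([[np.zeros((2, 2)), np.eye(2)],
              [-np.eye(2),       np.zeros((2, 2))]])

x = np.array([1.0, 0.3, -0.2, 0.9])
print(grad(G, x) @ J @ grad(H, x))   # {G,H}: ~0 up to finite-difference error
\end{verbatim}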
625   As is well known, the Hamiltonian (or energy) $H$ of a Hamiltonian system
626   is a \emph{first integral}, which is due to the fact that $\{ H,H\}  =
627   0$. When designing any numerical methods, one should always try to
# Line 650 | Line 640 | constructed using one of four different methods.
640   \item Runge-Kutta methods
641   \item Splitting methods
642   \end{enumerate}
653
643   Generating functions\cite{Channell1990} tend to lead to methods
644   which are cumbersome and difficult to use. In dissipative systems,
645   variational methods can capture the decay of energy
646   accurately\cite{Kane2000}. Because of their geometric instability
647   against non-Hamiltonian perturbations, ordinary implicit Runge-Kutta
648   methods are not suitable for Hamiltonian systems. Recently, various
649 < high-order explicit Runge-Kutta methods
650 < \cite{Owren1992,Chen2003}have been developed to overcome this
651 < instability. However, due to computational penalty involved in
652 < implementing the Runge-Kutta methods, they have not attracted much
653 < attention from the Molecular Dynamics community. Instead, splitting
654 < methods have been widely accepted since they exploit natural
655 < decompositions of the system\cite{Tuckerman1992, McLachlan1998}.
649 > high-order explicit Runge-Kutta methods \cite{Owren1992,Chen2003}
650 > have been developed to overcome this instability. However, due to the
651 > computational penalty involved in implementing the Runge-Kutta
652 > methods, they have not attracted much attention from the Molecular
653 > Dynamics community. Instead, splitting methods have been widely
654 > accepted since they exploit natural decompositions of the
655 > system\cite{Tuckerman1992, McLachlan1998}.
656  
657   \subsubsection{\label{introSection:splittingMethod}\textbf{Splitting Methods}}
658  
# Line 704 | Line 693 | The Lie-Trotter splitting(\ref{introEquation:firstOrde
693   where $\phi$ and $\psi$ both are symplectic maps. Thus operator
694   splitting in this context automatically generates a symplectic map.
695  
696 < The Lie-Trotter splitting(\ref{introEquation:firstOrderSplitting})
697 < introduces local errors proportional to $h^2$, while Strang
698 < splitting gives a second-order decomposition,
696 > The Lie-Trotter
697 > splitting (Eq.~\ref{introEquation:firstOrderSplitting}) introduces
698 > local errors proportional to $h^2$, while Strang splitting gives a
699 > second-order decomposition,
700   \begin{equation}
701   \varphi _h  = \varphi _{1,h/2}  \circ \varphi _{2,h}  \circ \varphi
702   _{1,h/2} , \label{introEquation:secondOrderSplitting}
# Line 803 | Line 793 | to the Sprang splitting, we can obtain
793   \begin{eqnarray*}
794   \exp (h X/2)\exp (h Y)\exp (h X/2) & = & \exp (h X + h Y + h^2 [X,Y]/4 + h^2 [Y,X]/4 \\
795                                     &   & \mbox{} + h^2 [X,X]/8 + h^2 [Y,Y]/8 \\
796 <                                   &   & \mbox{} + h^3 [Y,[Y,X]]/12 - h^3[X,[X,Y]]/24 + \ldots )
796 >                                   &   & \mbox{} + h^3 [Y,[Y,X]]/12 - h^3[X,[X,Y]]/24 + \ldots
797 >                                   ).
798   \end{eqnarray*}
799 < Since \[ [X,Y] + [Y,X] = 0\] and \[ [X,X] = 0,\] the dominant local
799 > Since $ [X,Y] + [Y,X] = 0$ and $ [X,X] = 0$, the dominant local
800   error of Strang splitting is proportional to $h^3$. The same
801 < procedure can be applied to a general splitting,  of the form
801 > procedure can be applied to a general splitting of the form
802   \begin{equation}
803   \varphi _{b_m h}^2  \circ \varphi _{a_m h}^1  \circ \varphi _{b_{m -
804   1} h}^2  \circ  \ldots  \circ \varphi _{a_1 h}^1 .
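These error estimates are easy to observe in practice. The sketch below (an illustration under the assumption of the separable Hamiltonian $H = \frac{1}{2}p^2  + \frac{1}{2}q^2$ with unit mass; it is not the integrator used in this work) compares the Lie-Trotter composition with the Strang composition, whose sub-flows advance positions and momenta separately.
\begin{verbatim}
import numpy as np

def drift(q, p, h): return q + h * p, p      # exact flow of T(p) = p^2/2
def kick(q, p, h):  return q, p - h * q      # exact flow of V(q) = q^2/2

def lie_trotter(q, p, h):
    return kick(*drift(q, p, h), h)

def strang(q, p, h):
    q, p = kick(q, p, h / 2)
    q, p = drift(q, p, h)
    return kick(q, p, h / 2)

def global_error(stepper, h, t_end=1.0):
    q, p = 1.0, 0.0
    for _ in range(int(round(t_end / h))):
        q, p = stepper(q, p, h)
    return abs(q - np.cos(t_end))            # exact solution is q(t) = cos(t)

for h in (0.1, 0.05):
    print(h, global_error(lie_trotter, h), global_error(strang, h))
# Halving h roughly halves the Lie-Trotter error (first order) and roughly
# quarters the Strang error (second order).
\end{verbatim}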
# Line 952 | Line 943 | algorithmic tricks.
943   Coulombic forces \textit{etc}. For a system of $N$ particles, the
944   complexity of the algorithm for pair-wise interactions is $O(N^2 )$,
945   which makes large simulations prohibitive in the absence of any
946 < algorithmic tricks.
947 <
948 < A natural approach to avoid system size issues is to represent the
949 < bulk behavior by a finite number of the particles. However, this
950 < approach will suffer from the surface effect at the edges of the
951 < simulation. To offset this, \textit{Periodic boundary conditions}
952 < (see Fig.~\ref{introFig:pbc}) is developed to simulate bulk
953 < properties with a relatively small number of particles. In this
954 < method, the simulation box is replicated throughout space to form an
955 < infinite lattice. During the simulation, when a particle moves in
956 < the primary cell, its image in other cells move in exactly the same
957 < direction with exactly the same orientation. Thus, as a particle
967 < leaves the primary cell, one of its images will enter through the
968 < opposite face.
946 > algorithmic tricks. A natural approach to avoid system size issues
947 > is to represent the bulk behavior by a finite number of
948 > particles. However, this approach will suffer from the surface
949 > effect at the edges of the simulation. To offset this,
950 > \textit{Periodic boundary conditions} (see Fig.~\ref{introFig:pbc})
951 > were developed to simulate bulk properties with a relatively small
952 > number of particles. In this method, the simulation box is
953 > replicated throughout space to form an infinite lattice. During the
954 > simulation, when a particle moves in the primary cell, its images in
955 > other cells move in exactly the same direction with exactly the same
956 > orientation. Thus, as a particle leaves the primary cell, one of its
957 > images will enter through the opposite face.
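In code, periodic boundary conditions reduce to wrapping coordinates back into the primary cell and applying the minimum-image convention to separation vectors. The sketch below assumes a cubic cell of edge $L$ (an illustration only, not code from this work).
\begin{verbatim}
import numpy as np

L = 10.0                                   # assumed cubic box edge

def wrap(r):
    """Map a position back into the primary cell [0, L)."""
    return r % L

def minimum_image(dr):
    """Closest periodic image of the separation vector dr."""
    return dr - L * np.round(dr / L)

r_i = np.array([9.8, 0.2, 5.0])
r_j = np.array([0.1, 9.9, 5.0])
print(wrap(r_i + np.array([0.4, 0.0, 0.0])))   # the particle re-enters at x = 0.2
print(minimum_image(r_i - r_j))                # a short vector across the boundary
\end{verbatim}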
958   \begin{figure}
959   \centering
960   \includegraphics[width=\linewidth]{pbc.eps}
# Line 1028 | Line 1017 | Statistical Mechanics, Sec.~\ref{introSection:statisti
1017   monitor the motions of molecules. Although the dynamics of the
1018   system can be described qualitatively from animation, quantitative
1019   trajectory analysis is more useful. According to the principles of
1020 < Statistical Mechanics, Sec.~\ref{introSection:statisticalMechanics},
1021 < one can compute thermodynamic properties, analyze fluctuations of
1022 < structural parameters, and investigate time-dependent processes of
1023 < the molecule from the trajectories.
1020 > Statistical Mechanics in
1021 > Sec.~\ref{introSection:statisticalMechanics}, one can compute
1022 > thermodynamic properties, analyze fluctuations of structural
1023 > parameters, and investigate time-dependent processes of the molecule
1024 > from the trajectories.
1025  
1026   \subsubsection{\label{introSection:thermodynamicsProperties}\textbf{Thermodynamic Properties}}
1027  
# Line 1071 | Line 1061 | particle $j$ in the system
1061   The pair distribution function $g(r)$ gives the probability that a
1062   particle $i$ will be located at a distance $r$ from another
1063   particle $j$ in the system
1064 < \[
1064 > \begin{equation}
1065   g(r) = \frac{V}{{N^2 }}\left\langle {\sum\limits_i {\sum\limits_{j
1066   \ne i} {\delta (r - r_{ij} )} } } \right\rangle = \frac{\rho
1067   (r)}{\rho}.
1068 < \]
1068 > \end{equation}
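As noted below, the delta function is accumulated as a histogram of pair distances in practice. A minimal sketch (assuming a cubic periodic box of edge $L$ and an ideal-gas shell normalization; this is an illustration, not code from this work) is:
\begin{verbatim}
import numpy as np

def pair_distribution(positions, L, n_bins=100):
    """Histogram estimate of g(r) for N particles in a cubic periodic box."""
    N = len(positions)
    r_max = L / 2.0
    edges = np.linspace(0.0, r_max, n_bins + 1)
    hist = np.zeros(n_bins)
    for i in range(N - 1):
        dr = positions[i + 1:] - positions[i]
        dr -= L * np.round(dr / L)                 # minimum image
        d = np.linalg.norm(dr, axis=1)
        hist += np.histogram(d[d < r_max], bins=edges)[0]
    rho = N / L**3
    shell_vol = 4.0 / 3.0 * np.pi * (edges[1:]**3 - edges[:-1]**3)
    ideal = rho * shell_vol * N / 2.0              # expected ideal-gas pair counts
    r = 0.5 * (edges[1:] + edges[:-1])
    return r, hist / ideal

# For uncorrelated (ideal-gas-like) positions, g(r) fluctuates around 1.
rng = np.random.default_rng(1)
r, g = pair_distribution(rng.uniform(0.0, 8.0, size=(200, 3)), L=8.0)
\end{verbatim}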
1069   Note that the delta function can be replaced by a histogram in
1070   computer simulation. Peaks in $g(r)$ represent solvent shells, and
1071   the height of these peaks gradually decreases to 1 as the liquid of
# Line 1112 | Line 1102 | u_{tot} (t) = \sum\limits_i {u_i (t)}
1102   Here $u_{tot}$ is the net dipole of the entire system and is given
1103   by
1104   \[
1105 < u_{tot} (t) = \sum\limits_i {u_i (t)}
1105 > u_{tot} (t) = \sum\limits_i {u_i (t)}.
1106   \]
1107   In principle, many time correlation functions can be related to
1108   Fourier transforms of the infrared, Raman, and inelastic neutron
# Line 1121 | Line 1111 | i2\pi vt} dt}
1111   each frequency using the following relationship:
1112   \[
1113   \hat c_{dipole} (v) = \int_{ - \infty }^\infty  {c_{dipole} (t)e^{ -
1114 < i2\pi vt} dt}
1114 > i2\pi vt} dt}.
1115   \]
1116  
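A short sketch of this procedure on synthetic data (the time series, time step, and frequency below are hypothetical) estimates a normalized autocorrelation function and takes its Fourier transform to obtain a spectral density:
\begin{verbatim}
import numpy as np

def autocorrelation(u, max_lag):
    """Normalized time autocorrelation function of a scalar time series."""
    u = u - u.mean()
    c = np.array([np.mean(u[:len(u) - k] * u[k:]) for k in range(max_lag)])
    return c / c[0]

dt = 0.01
t = np.arange(0.0, 100.0, dt)
u_tot = np.cos(2 * np.pi * 1.5 * t) \
        + 0.1 * np.random.default_rng(2).normal(size=t.size)   # fake dipole signal

c = autocorrelation(u_tot, max_lag=2000)
spectrum = np.abs(np.fft.rfft(c)) * dt
freqs = np.fft.rfftfreq(len(c), d=dt)
print(freqs[np.argmax(spectrum[1:]) + 1])   # peak near 1.5 (arbitrary units)
\end{verbatim}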
1117   \section{\label{introSection:rigidBody}Dynamics of Rigid Bodies}
# Line 1189 | Line 1179 | which is used to ensure rotation matrix's unitarity. D
1179   Q^T Q = 1, \label{introEquation:orthogonalConstraint}
1180   \end{equation}
1181   which is used to ensure the unitarity of the rotation matrix. Differentiating
1182 < \ref{introEquation:orthogonalConstraint} and using Equation
1183 < \ref{introEquation:RBMotionMomentum}, one may obtain,
1182 > Eq.~\ref{introEquation:orthogonalConstraint} and using
1183 > Eq.~\ref{introEquation:RBMotionMomentum}, one may obtain,
1184   \begin{equation}
1185   Q^T PJ^{ - 1}  + J^{ - 1} P^T Q = 0.
1186   \label{introEquation:RBFirstOrderConstraint}
# Line 1199 | Line 1189 | the equations of motion,
1189   \ref{introEquation:motionHamiltonianMomentum}), one can write down
1190   the equations of motion,
1191   \begin{eqnarray}
1192 < \frac{{dq}}{{dt}} & = & \frac{p}{m} \label{introEquation:RBMotionPosition}\\
1193 < \frac{{dp}}{{dt}} & = & - \nabla _q V(q,Q) \label{introEquation:RBMotionMomentum}\\
1194 < \frac{{dQ}}{{dt}} & = & PJ^{ - 1}  \label{introEquation:RBMotionRotation}\\
1192 > \frac{{dq}}{{dt}} & = & \frac{p}{m}, \label{introEquation:RBMotionPosition}\\
1193 > \frac{{dp}}{{dt}} & = & - \nabla _q V(q,Q), \label{introEquation:RBMotionMomentum}\\
1194 > \frac{{dQ}}{{dt}} & = & PJ^{ - 1},  \label{introEquation:RBMotionRotation}\\
1195   \frac{{dP}}{{dt}} & = & - \nabla _Q V(q,Q) - 2Q\Lambda . \label{introEquation:RBMotionP}
1196   \end{eqnarray}
1197   In general, there are two ways to satisfy the holonomic constraints.
# Line 1257 | Line 1247 | as well as holonomic constraints,
1247   \end{array}
1248   \label{introEqaution:RBMotionPI}
1249   \end{equation}
1250 < as well as holonomic constraints,
1251 < \[
1252 < \begin{array}{l}
1263 < \Pi J^{ - 1}  + J^{ - 1} \Pi ^t  = 0, \\
1264 < Q^T Q = 1 .\\
1265 < \end{array}
1266 < \]
1267 < For a vector $v(v_1 ,v_2 ,v_3 ) \in R^3$ and a matrix $\hat v \in
1268 < so(3)^ \star$, the hat-map isomorphism,
1250 > as well as holonomic constraints $\Pi J^{ - 1}  + J^{ - 1} \Pi ^t  =
1251 > 0$ and $Q^T Q = 1$. For a vector $v(v_1 ,v_2 ,v_3 ) \in R^3$ and a
1252 > matrix $\hat v \in so(3)^ \star$, the hat-map isomorphism,
1253   \begin{equation}
1254   v(v_1 ,v_2 ,v_3 ) \Leftrightarrow \hat v = \left(
1255   {\begin{array}{*{20}c}
# Line 1283 | Line 1267 | matrix,
1267   Using Eq.~\ref{introEqaution:RBMotionPI}, one can construct a skew
1268   matrix,
1269   \begin{eqnarray}
1270 < (\dot \Pi  - \dot \Pi ^T ){\rm{ }} &= &{\rm{ }}(\Pi  - \Pi ^T ){\rm{
1271 < }}(J^{ - 1} \Pi  + \Pi J^{ - 1} ) \notag \\
1272 < + \sum\limits_i {[Q^T F_i
1289 < (r,Q)X_i^T  - X_i F_i (r,Q)^T Q]}  - (\Lambda  - \Lambda ^T ).
1290 < \label{introEquation:skewMatrixPI}
1270 > (\dot \Pi  - \dot \Pi ^T )&= &(\Pi  - \Pi ^T )(J^{ - 1} \Pi  + \Pi J^{ - 1} ) \notag \\
1271 > & & + \sum\limits_i {[Q^T F_i (r,Q)X_i^T  - X_i F_i (r,Q)^T Q]}  -
1272 > (\Lambda  - \Lambda ^T ). \label{introEquation:skewMatrixPI}
1273   \end{eqnarray}
1274   Since $\Lambda$ is symmetric, the last term of
1275   Eq.~\ref{introEquation:skewMatrixPI} is zero, which implies the
# Line 1332 | Line 1314 | One may notice that each $T_i^r$ in Equation
1314   \begin{equation}
1315   \frac{d}{{dt}}\pi  = J(\pi )\nabla _\pi  T^r (\pi ).
1316   \end{equation}
1317 < One may notice that each $T_i^r$ in Equation
1318 < \ref{introEquation:rotationalKineticRB} can be solved exactly. For
1319 < instance, the equations of motion due to $T_1^r$ are given by
1317 > One may notice that each $T_i^r$ in
1318 > Eq.~\ref{introEquation:rotationalKineticRB} can be solved exactly.
1319 > For instance, the equations of motion due to $T_1^r$ are given by
1320   \begin{equation}
1321   \frac{d}{{dt}}\pi  = R_1 \pi ,\frac{d}{{dt}}Q = QR_1
1322   \label{introEqaution:RBMotionSingleTerm}
1323   \end{equation}
1324 < where
1324 > with
1325   \[ R_1  = \left( {\begin{array}{*{20}c}
1326     0 & 0 & 0  \\
1327     0 & 0 & {\pi _1 }  \\
1328     0 & { - \pi _1 } & 0  \\
1329   \end{array}} \right).
1330   \]
1331 < The solutions of Equation \ref{introEqaution:RBMotionSingleTerm} is
1331 > The solution of Eq.~\ref{introEqaution:RBMotionSingleTerm} is
1332   \[
1333   \pi (\Delta t) = e^{\Delta tR_1 } \pi (0),Q(\Delta t) =
1334   Q(0)e^{\Delta tR_1 }
# Line 1368 | Line 1350 | split the angular kinetic Hamiltonian function can int
1350   \]
1351   The flow maps for $T_2^r$ and $T_3^r$ can be found in the same
1352   manner. In order to construct a second-order symplectic method, we
1353 < split the angular kinetic Hamiltonian function can into five terms
1353 > split the angular kinetic Hamiltonian function into five terms
1354   \[
1355   T^r (\pi ) = \frac{1}{2}T_1 ^r (\pi _1 ) + \frac{1}{2}T_2^r (\pi _2
1356   ) + T_3^r (\pi _3 ) + \frac{1}{2}T_2^r (\pi _2 ) + \frac{1}{2}T_1 ^r
# Line 1411 | Line 1393 | energy and potential energy,
1393   Splitting for Rigid Body}
1394  
1395   The Hamiltonian of a rigid body can be separated in terms of kinetic
1396 < energy and potential energy,
1397 < \[
1398 < H = T(p,\pi ) + V(q,Q).
1417 < \]
1418 < The equations of motion corresponding to potential energy and
1419 < kinetic energy are listed in the below table,
1396 > energy and potential energy, $H = T(p,\pi ) + V(q,Q)$. The equations
1397 > of motion corresponding to potential energy and kinetic energy are
1398 > listed in the table below,
1399   \begin{table}
1400   \caption{EQUATIONS OF MOTION DUE TO POTENTIAL AND KINETIC ENERGIES}
1401   \begin{center}
# Line 1454 | Line 1433 | defined by \ref{introEquation:rotationalKineticRB}. Th
1433   T(p,\pi ) = T^t (p) + T^r (\pi ),
1434   \end{equation}
1435   where $ T^t (p) = \frac{1}{2}p^T m^{ - 1} p $ and $T^r (\pi )$ is
1436 < defined by \ref{introEquation:rotationalKineticRB}. Therefore, the
1437 < corresponding propagators are given by
1436 > defined by Eq.~\ref{introEquation:rotationalKineticRB}. Therefore,
1437 > the corresponding propagators are given by
1438   \[
1439   \varphi _{\Delta t,T}  = \varphi _{\Delta t,T^t }  \circ \varphi
1440   _{\Delta t,T^r }.
1441   \]
1442   Finally, we obtain the overall symplectic propagators for freely
1443   moving rigid bodies
1444 < \begin{eqnarray*}
1445 < \varphi _{\Delta t}  &=& \varphi _{\Delta t/2,F}  \circ \varphi _{\Delta t/2,\tau }  \\
1446 <  & & \circ \varphi _{\Delta t,T^t }  \circ \varphi _{\Delta t/2,\pi _1 }  \circ \varphi _{\Delta t/2,\pi _2 }  \circ \varphi _{\Delta t,\pi _3 }  \circ \varphi _{\Delta t/2,\pi _2 }  \circ \varphi _{\Delta t/2,\pi _1 }  \\
1444 > \begin{eqnarray}
1445 > \varphi _{\Delta t}  &=& \varphi _{\Delta t/2,F}  \circ \varphi _{\Delta t/2,\tau }  \notag\\
1446 >  & & \circ \varphi _{\Delta t,T^t }  \circ \varphi _{\Delta t/2,\pi _1 }  \circ \varphi _{\Delta t/2,\pi _2 }  \circ \varphi _{\Delta t,\pi _3 }  \circ \varphi _{\Delta t/2,\pi _2 }  \circ \varphi _{\Delta t/2,\pi _1 }  \notag\\
1447    & & \circ \varphi _{\Delta t/2,\tau }  \circ \varphi _{\Delta t/2,F} .
1448   \label{introEquation:overallRBFlowMaps}
1449 < \end{eqnarray*}
1449 > \end{eqnarray}
1450  
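Each rotational sub-step above can be applied in closed form. The sketch below (a simplification in which the moment-of-inertia factor is absorbed into $\pi _1$; it is only an illustration, not the production propagator) advances $\pi$ and $Q$ under $T_1^r$ using the explicit rotation $e^{\Delta t R_1 }$ and verifies that the constraint $Q^T Q = 1$ is preserved exactly.
\begin{verbatim}
import numpy as np

def exp_R1(pi1, dt):
    """Closed form of exp(dt * R_1), a rotation about the first body axis."""
    a = dt * pi1
    c, s = np.cos(a), np.sin(a)
    return np.array([[1.0, 0.0, 0.0],
                     [0.0,   c,   s],
                     [0.0,  -s,   c]])

def propagate_T1(pi, Q, dt):
    """pi(dt) = exp(dt R_1) pi(0),  Q(dt) = Q(0) exp(dt R_1)."""
    R = exp_R1(pi[0], dt)
    return R @ pi, Q @ R

pi = np.array([0.7, -0.2, 0.4])
Q = np.eye(3)
pi_new, Q_new = propagate_T1(pi, Q, 0.05)
print(np.allclose(Q_new.T @ Q_new, np.eye(3)))   # orthogonality preserved
print(pi_new[0] == pi[0])                        # pi_1 is unchanged by T_1^r
\end{verbatim}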
1451   \section{\label{introSection:langevinDynamics}Langevin Dynamics}
1452   As an alternative to Newtonian dynamics, Langevin dynamics, which
# Line 1545 | Line 1524 | $. The Laplace transform of f(t) is a new function def
1524   solved easily. Then, by applying the inverse Laplace transform, also
1525   known as the Bromwich integral, we can retrieve the solutions of the
1526   original problems. Let $f(t)$ be a function defined on $ [0,\infty )
1527 < $. The Laplace transform of f(t) is a new function defined as
1527 > $; the Laplace transform of $f(t)$ is a new function defined as
1528   \[
1529   L(f(t)) \equiv F(p) = \int_0^\infty  {f(t)e^{ - pt} dt}
1530   \]
# Line 1560 | Line 1539 | p^2 L(x_\alpha  ) - px_\alpha  (0) - \dot x_\alpha  (0
1539   \end{eqnarray*}
1540   Applying the Laplace transform to the bath coordinates, we obtain
1541   \begin{eqnarray*}
1542 < p^2 L(x_\alpha  ) - px_\alpha  (0) - \dot x_\alpha  (0) & = & - \omega _\alpha ^2 L(x_\alpha  ) + \frac{{g_\alpha  }}{{\omega _\alpha  }}L(x) \\
1543 < L(x_\alpha  ) & = & \frac{{\frac{{g_\alpha  }}{{\omega _\alpha  }}L(x) + px_\alpha  (0) + \dot x_\alpha  (0)}}{{p^2  + \omega _\alpha ^2 }} \\
1542 > p^2 L(x_\alpha  ) - px_\alpha  (0) - \dot x_\alpha  (0) & = & - \omega _\alpha ^2 L(x_\alpha  ) + \frac{{g_\alpha  }}{{\omega _\alpha  }}L(x), \\
1543 > L(x_\alpha  ) & = & \frac{{\frac{{g_\alpha  }}{{\omega _\alpha  }}L(x) + px_\alpha  (0) + \dot x_\alpha  (0)}}{{p^2  + \omega _\alpha ^2 }}. \\
1544   \end{eqnarray*}
1545   In the same way, the system coordinates become
1546   \begin{eqnarray*}
1547   mL(\ddot x) & = &
1548    - \sum\limits_{\alpha  = 1}^N {\left\{ { - \frac{{g_\alpha ^2 }}{{m_\alpha  \omega _\alpha ^2 }}\frac{p}{{p^2  + \omega _\alpha ^2 }}pL(x) - \frac{p}{{p^2  + \omega _\alpha ^2 }}g_\alpha  x_\alpha  (0) - \frac{1}{{p^2  + \omega _\alpha ^2 }}g_\alpha  \dot x_\alpha  (0)} \right\}}  \\
1549 <  & & - \frac{1}{p}\frac{{\partial W(x)}}{{\partial x}}
1549 >  & & - \frac{1}{p}\frac{{\partial W(x)}}{{\partial x}}.
1550   \end{eqnarray*}
1551   With the help of some standard inverse Laplace
1552   transformations:
# Line 1626 | Line 1605 | uncorrelated to $x$ and $\dot x$,
1605   One may notice that $R(t)$ depends only on initial conditions, which
1606   implies it is completely deterministic within the context of a
1607   harmonic bath. However, it is easy to verify that $R(t)$ is totally
1608 < uncorrelated to $x$ and $\dot x$,
1609 < \[
1610 < \begin{array}{l}
1611 < \left\langle {x(t)R(t)} \right\rangle  = 0, \\
1633 < \left\langle {\dot x(t)R(t)} \right\rangle  = 0. \\
1634 < \end{array}
1635 < \]
1636 < This property is what we expect from a truly random process. As long
1637 < as the model chosen for $R(t)$ was a gaussian distribution in
1608 > uncorrelated with $x$ and $\dot x$: $\left\langle {x(t)R(t)}
1609 > \right\rangle  = 0, \left\langle {\dot x(t)R(t)} \right\rangle  =
1610 > 0.$ This property is what we expect from a truly random process. As
1611 > long as the model chosen for $R(t)$ is a Gaussian distribution in
1612   general, the stochastic nature of the GLE still remains.
1639
1613   %dynamic friction kernel
1614   The convolution integral
1615   \[
# Line 1681 | Line 1654 | Defining a new set of coordinates,
1654  
1655   \subsubsection{\label{introSection:secondFluctuationDissipation}\textbf{The Second Fluctuation Dissipation Theorem}}
1656  
1657 < Defining a new set of coordinates,
1657 > Defining a new set of coordinates
1658   \[
1659   q_\alpha  (t) = x_\alpha  (t) - \frac{1}{{m_\alpha  \omega _\alpha
1660 < ^2 }}x(0)
1661 < \],
1660 > ^2 }}x(0),
1661 > \]
1662   we can rewrite $R(t)$ as
1663   \[
1664   R(t) = \sum\limits_{\alpha  = 1}^N {g_\alpha  q_\alpha  (t)}.
# Line 1702 | Line 1675 | Thus, we recover the \emph{second fluctuation dissipat
1675   Thus, we recover the \emph{second fluctuation dissipation theorem}
1676   \begin{equation}
1677   \xi (t) = \left\langle {R(t)R(0)} \right\rangle
1678 < \label{introEquation:secondFluctuationDissipation}.
1678 > \label{introEquation:secondFluctuationDissipation},
1679   \end{equation}
1680 < In effect, it acts as a constraint on the possible ways in which one
1681 < can model the random force and friction kernel.
1680 > which acts as a constraint on the possible ways in which one can
1681 > model the random force and friction kernel.
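In the Markovian limit, where the friction kernel collapses to $\xi (t) = 2\gamma \delta (t)$, the GLE reduces to the ordinary Langevin equation and the theorem above fixes the variance of the random force in terms of the friction coefficient and the temperature. The sketch below (unit mass, a harmonic potential $W(x) = x^2 /2$, and a simple Euler-type update; these are assumptions for illustration, not the integrator used in this work) shows the resulting dynamics reproducing equipartition.
\begin{verbatim}
import numpy as np

def langevin_trajectory(n_steps, dt=1e-3, gamma=1.0, kT=1.0, seed=3):
    """Markovian Langevin dynamics for a unit-mass harmonic oscillator."""
    rng = np.random.default_rng(seed)
    sigma = np.sqrt(2.0 * gamma * kT / dt)   # discrete <R R> = 2 gamma kT delta(t-t')
    x, v = 0.0, 0.0
    xs = np.empty(n_steps)
    for i in range(n_steps):
        force = -x - gamma * v + sigma * rng.normal()
        v += dt * force
        x += dt * v
        xs[i] = x
    return xs

xs = langevin_trajectory(200000)
print(xs[50000:].var())   # should approach kT for the unit harmonic potential
\end{verbatim}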
