
Comparing trunk/mattDisertation/Introduction.tex (file contents):
Revision 1105 by mmeineke, Tue Apr 13 18:32:53 2004 UTC vs.
Revision 1106 by mmeineke, Tue Apr 13 21:13:53 2004 UTC

# Line 1 | Line 1
1  
2
2   \chapter{\label{chapt:intro}INTRODUCTION AND THEORETICAL BACKGROUND}
3  
4  
# Line 72 | Line 71 | Or additively as
71   \Omega(E) = \Omega(E_{\gamma}) \times \Omega(E - E_{\gamma}).
72   \label{introEq:SM1}
73   \end{equation}
74 < Or additively as
74 > Or additively as,
75   \begin{equation}
76   \ln \Omega(E) = \ln \Omega(E_{\gamma}) + \ln \Omega(E - E_{\gamma}).
77   \label{introEq:SM2}
# Line 162 | Line 161 | P_{\gamma} \propto \Omega( E_{\text{bath}} ) =
161   states the coupled system is able to assume. Namely,
162   \begin{equation}
163   P_{\gamma} \propto \Omega( E_{\text{bath}} ) =
164 <        e^{\ln \Omega( E - E_{\gamma})}
164 >        e^{\ln \Omega( E - E_{\gamma})}.
165   \label{introEq:SM11}
166   \end{equation}
167 < With $E_{\gamma} \ll E$, $\ln \Omega$ can be expanded in a Taylor series:
167 > Because $E_{\gamma} \ll E$, $\ln \Omega$ can be expanded in a Taylor series:
168   \begin{equation}
169   \ln \Omega ( E - E_{\gamma}) =
170          \ln \Omega (E) -
171          E_{\gamma}  \frac{\partial \ln \Omega }{\partial E}
172 <        + \ldots
172 >        + \ldots.
173   \label{introEq:SM12}
174   \end{equation}
175   Higher-order terms are omitted because the thermal bath is effectively
176   infinite ($E \gg E_{\gamma}$). Further, using Eq.~\ref{introEq:SM7}, Eq.~\ref{introEq:SM11} can
177   be rewritten:
178   \begin{equation}
179 < P_{\gamma} \propto e^{-\beta E_{\gamma}}
179 > P_{\gamma} \propto e^{-\beta E_{\gamma}},
180   \label{introEq:SM13}
181   \end{equation}
182 < Where $\ln \Omega(E)$ has been factored out of the proportionality as a
182 > where $\ln \Omega(E)$ has been factored out of the proportionality as a
183   constant.  Normalizing the probability ($\int\limits_{\boldsymbol{\Gamma}}
184   d\boldsymbol{\Gamma} P_{\gamma} = 1$) gives
185   \begin{equation}
186   P_{\gamma} = \frac{e^{-\beta E_{\gamma}}}
187 < {\int\limits_{\boldsymbol{\Gamma}} d\boldsymbol{\Gamma} e^{-\beta E_{\gamma}}}
187 > {\int\limits_{\boldsymbol{\Gamma}} d\boldsymbol{\Gamma} e^{-\beta E_{\gamma}}}.
188   \label{introEq:SM14}
189   \end{equation}
190   This result is the standard Boltzmann statistical distribution.
# Line 194 | Line 193 | Applying it to Eq.~\ref{introEq:SM10} one can obtain t
193   \langle A \rangle =
194   \frac{\int\limits_{\boldsymbol{\Gamma}} d\boldsymbol{\Gamma}
195          A( \boldsymbol{\Gamma} ) e^{-\beta E_{\gamma}}}
196 < {\int\limits_{\boldsymbol{\Gamma}} d\boldsymbol{\Gamma} e^{-\beta E_{\gamma}}}
196 > {\int\limits_{\boldsymbol{\Gamma}} d\boldsymbol{\Gamma} e^{-\beta E_{\gamma}}}.
197   \label{introEq:SM15}
198   \end{equation}
199  
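For a discrete set of states, the ensemble average of Eq.~\ref{introEq:SM15} reduces to a Boltzmann-weighted sum.  The Python sketch below illustrates that reduction; the state energies, observable values, and temperature are arbitrary placeholders, not quantities taken from the text.

import numpy as np

# Hypothetical discrete system: state energies (in units of kT) and the value
# of the observable A in each state.  All numbers are illustrative only.
energies = np.array([0.0, 1.0, 2.5])
A_values = np.array([1.0, 3.0, 7.0])
beta = 1.0                                  # 1/kT

weights = np.exp(-beta * energies)          # unnormalized Boltzmann factors
P = weights / weights.sum()                 # Eq. (SM14): normalized probabilities
A_avg = np.sum(A_values * P)                # Eq. (SM15) for a discrete state space
print(A_avg)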
# Line 206 | Line 205 | average of an observable. Namely,
205   average of an observable. Namely,
206   \begin{equation}
207   \langle A \rangle_t = \frac{1}{\tau}
208 <        \int_0^{\tau} A[\boldsymbol{\Gamma}(t)]\,dt
208 >        \int_0^{\tau} A[\boldsymbol{\Gamma}(t)]\,dt,
209   \label{introEq:SM16}
210   \end{equation}
211 < Where the value of an observable is averaged over the length of time,
211 > where the value of an observable is averaged over the length of time,
212   $\tau$, that the simulation is run. This type of measurement mirrors
213   the experimental measurement of an observable. In an experiment, the
214   instrument analyzing the system must average its observation over the
# Line 244 | Line 243 | I = \int_a^b f(x)dx
243   force method of sampling.\cite{Frenkel1996} Consider the following
244   one-dimensional integral:
245   \begin{equation}
246 < I = \int_a^b f(x)dx
246 > I = \int_a^b f(x)dx.
247   \label{eq:MCex1}
248   \end{equation}
249   The equation can be recast as:
250   \begin{equation}
251 < I = \int^b_a \frac{f(x)}{\rho(x)} \rho(x) dx
251 > I = \int^b_a \frac{f(x)}{\rho(x)} \rho(x) dx,
252   \label{introEq:Importance1}
253   \end{equation}
254 < Where $\rho(x)$ is an arbitrary probability distribution in $x$.  If
254 > where $\rho(x)$ is an arbitrary probability distribution in $x$.  If
255   one conducts $\tau$ trials selecting a random number, $\zeta_\tau$,
256   from the distribution $\rho(x)$ on the interval $[a,b]$, then
257   Eq.~\ref{introEq:Importance1} becomes
258   \begin{equation}
259 < I= \lim_{\tau \rightarrow \infty}\biggl \langle \frac{f(x)}{\rho(x)} \biggr \rangle_{\text{trials}[a,b]}
259 > I= \lim_{\tau \rightarrow \infty}\biggl \langle \frac{f(x)}{\rho(x)} \biggr \rangle_{\text{trials}[a,b]}.
260   \label{introEq:Importance2}
261   \end{equation}
262   If $\rho(x)$ is uniformly distributed over the interval $[a,b]$,
263   \begin{equation}
264 < \rho(x) = \frac{1}{b-a}
264 > \rho(x) = \frac{1}{b-a},
265   \label{introEq:importance2b}
266   \end{equation}
267   then the integral can be rewritten as
268   \begin{equation}
269   I = (b-a)\lim_{\tau \rightarrow \infty}
270 <        \langle f(x) \rangle_{\text{trials}[a,b]}
270 >        \langle f(x) \rangle_{\text{trials}[a,b]}.
271   \label{eq:MCex2}
272   \end{equation}
273  
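Eq.~\ref{eq:MCex2} translates directly into a brute-force sampling routine.  The Python sketch below is illustrative only; the integrand and interval form a test case with a known answer ($\int_0^\pi \sin x\,dx = 2$) and do not come from the text.

import numpy as np

def mc_integrate(f, a, b, trials=100000, seed=0):
    """Estimate I = int_a^b f(x) dx by uniform sampling, Eq. (MCex2)."""
    rng = np.random.default_rng(seed)
    x = rng.uniform(a, b, trials)        # zeta_tau drawn from rho(x) = 1/(b-a)
    return (b - a) * f(x).mean()         # (b - a) <f(x)>_trials

print(mc_integrate(np.sin, 0.0, np.pi))  # approaches 2 as trials grows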
# Line 277 | Line 276 | integrals of the form:
276   \begin{equation}
277   \langle A \rangle = \frac{\int d^N \mathbf{r}~A(\mathbf{r}^N)%
278          e^{-\beta V(\mathbf{r}^N)}}%
279 <        {\int d^N \mathbf{r}~e^{-\beta V(\mathbf{r}^N)}}
279 >        {\int d^N \mathbf{r}~e^{-\beta V(\mathbf{r}^N)}},
280   \label{eq:mcEnsAvg}
281   \end{equation}
282 < Where $\mathbf{r}^N$ stands for the coordinates of all $N$ particles
282 > where $\mathbf{r}^N$ stands for the coordinates of all $N$ particles
283   and $A$ is some observable that is only dependent on position. This is
284   the ensemble average of $A$ as presented in
285   Sec.~\ref{introSec:statThermo}, except here $A$ is independent of
# Line 299 | Line 298 | Eq.~\ref{eq:mcEnsAvg}, and realizing
298   \begin {equation}
299   \rho_{kT}(\mathbf{r}^N) =
300          \frac{e^{-\beta V(\mathbf{r}^N)}}
301 <        {\int d^N \mathbf{r}~e^{-\beta V(\mathbf{r}^N)}}
301 >        {\int d^N \mathbf{r}~e^{-\beta V(\mathbf{r}^N)}},
302   \label{introEq:MCboltzman}
303   \end{equation}
304 < Where $\rho_{kT}$ is the Boltzmann distribution.  The ensemble average
304 > where $\rho_{kT}$ is the Boltzmann distribution.  The ensemble average
305   can be rewritten as
306   \begin{equation}
307   \langle A \rangle = \int d^N \mathbf{r}~A(\mathbf{r}^N)
308 <        \rho_{kT}(\mathbf{r}^N)
308 >        \rho_{kT}(\mathbf{r}^N).
309   \label{introEq:Importance3}
310   \end{equation}
311   Applying Eq.~\ref{introEq:Importance1} one obtains
312   \begin{equation}
313   \langle A \rangle = \biggl \langle
314          \frac{ A \rho_{kT}(\mathbf{r}^N) }
315 <        {\rho(\mathbf{r}^N)} \biggr \rangle_{\text{trials}}
315 >        {\rho(\mathbf{r}^N)} \biggr \rangle_{\text{trials}}.
316   \label{introEq:Importance4}
317   \end{equation}
318   By selecting $\rho(\mathbf{r}^N)$ to be $\rho_{kT}(\mathbf{r}^N)$
319   Eq.~\ref{introEq:Importance4} becomes
320   \begin{equation}
321 < \langle A \rangle = \langle A(\mathbf{r}^N) \rangle_{kT}
321 > \langle A \rangle = \langle A(\mathbf{r}^N) \rangle_{kT}.
322   \label{introEq:Importance5}
323   \end{equation}
324   The difficulty is selecting points $\mathbf{r}^N$ such that they are
# Line 346 | Line 345 | The transition probability is given by the following:
345  
346   The transition probability is given by the following:
347   \begin{equation}
348 < \pi_{mn} = \alpha_{mn} \times \accMe(m \rightarrow n)
348 > \pi_{mn} = \alpha_{mn} \times \accMe(m \rightarrow n),
349   \label{introEq:MCpi}
350   \end{equation}
351 < Where $\alpha_{mn}$ is the probability of attempting the move $m
351 > where $\alpha_{mn}$ is the probability of attempting the move $m
352   \rightarrow n$, and $\accMe$ is the probability of accepting the move
353   $m \rightarrow n$.  Defining a probability vector,
354   $\boldsymbol{\rho}$, such that
355   \begin{equation}
356   \boldsymbol{\rho} = \{\rho_1, \rho_2, \ldots \rho_m, \rho_n,
357 <        \ldots \rho_N \}
357 >        \ldots \rho_N \},
358   \label{introEq:MCrhoVector}
359   \end{equation}
360   a transition matrix $\boldsymbol{\Pi}$ can be defined,
361   whose elements are $\pi_{mn}$, for each given transition.  The
362   limiting distribution of the Markov chain can then be found by
363   applying the transition matrix an infinite number of times to the
364 < distribution vector.
364 > distribution vector,
365   \begin{equation}
366   \boldsymbol{\rho}_{\text{limit}} =
367          \lim_{N \rightarrow \infty} \boldsymbol{\rho}_{\text{initial}}
368 <        \boldsymbol{\Pi}^N
368 >        \boldsymbol{\Pi}^N.
369   \label{introEq:MCmarkovLimit}
370   \end{equation}
371   The limiting distribution of the chain is independent of the starting
372   distribution, and successive applications of the transition matrix
373 < will only yield the limiting distribution again.
373 > will only yield the limiting distribution again,
374   \begin{equation}
375   \boldsymbol{\rho}_{\text{limit}} = \boldsymbol{\rho}_{\text{limit}}
376 <        \boldsymbol{\Pi}
376 >        \boldsymbol{\Pi}.
377   \label{introEq:MCmarkovEquil}
378   \end{equation}
379  
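Eqs.~\ref{introEq:MCmarkovLimit} and~\ref{introEq:MCmarkovEquil} are easy to verify numerically.  In the sketch below, a small three-state transition matrix (arbitrary, chosen only for illustration) is applied repeatedly to an initial distribution; the result converges to a vector that further applications of $\boldsymbol{\Pi}$ leave unchanged.

import numpy as np

# Arbitrary row-stochastic transition matrix Pi for a three-state chain.
Pi = np.array([[0.9, 0.05, 0.05],
               [0.1, 0.8,  0.1 ],
               [0.2, 0.3,  0.5 ]])

rho = np.array([1.0, 0.0, 0.0])   # rho_initial, concentrated in state 1
for _ in range(1000):             # rho_initial . Pi^N for large N
    rho = rho @ Pi

print(rho)                        # limiting distribution
print(rho @ Pi)                   # unchanged: rho_limit = rho_limit . Pi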
# Line 386 | Line 385 | from $m$ to $n$ is the same as going from $n$ to $m$.
385   of states.  The method accomplishes this by imposing the strong
386   condition of microscopic reversibility on the equilibrium
387   distribution.  This means that at equilibrium, the probability of going
388 < from $m$ to $n$ is the same as going from $n$ to $m$.
388 > from $m$ to $n$ is the same as going from $n$ to $m$,
389   \begin{equation}
390 < \rho_m\pi_{mn} = \rho_n\pi_{nm}
390 > \rho_m\pi_{mn} = \rho_n\pi_{nm}.
391   \label{introEq:MCmicroReverse}
392   \end{equation}
393   Further, $\boldsymbol{\alpha}$ is chosen to be a symmetric matrix in
# Line 396 | Line 395 | Eq.~\ref{introEq:MCmicroReverse} becomes
395   Eq.~\ref{introEq:MCmicroReverse} becomes
396   \begin{equation}
397   \frac{\accMe(m \rightarrow n)}{\accMe(n \rightarrow m)} =
398 <        \frac{\rho_n}{\rho_m}
398 >        \frac{\rho_n}{\rho_m}.
399   \label{introEq:MCmicro2}
400   \end{equation}
401   For a Boltzmann limiting distribution,
402   \begin{equation}
403   \frac{\rho_n}{\rho_m} = e^{-\beta[\mathcal{U}(n) - \mathcal{U}(m)]}
404 <        = e^{-\beta \Delta \mathcal{U}}
404 >        = e^{-\beta \Delta \mathcal{U}}.
405   \label{introEq:MCmicro3}
406   \end{equation}
407   This allows the following set of acceptance rules to be defined:
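For the Boltzmann ratio of Eq.~\ref{introEq:MCmicro3}, these rules amount to the familiar Metropolis criterion: accept the move outright if $\Delta \mathcal{U} \le 0$, and otherwise accept it with probability $e^{-\beta \Delta \mathcal{U}}$.  The Python sketch below illustrates one such step; the potential function and the symmetric displacement move are hypothetical stand-ins, not the implementation used in this work.

import numpy as np

def metropolis_accept(delta_U, beta, rng):
    """Metropolis criterion: acc(m -> n) = min(1, exp(-beta * delta_U))."""
    return delta_U <= 0.0 or rng.random() < np.exp(-beta * delta_U)

def metropolis_step(r, potential, beta, step_size, rng):
    """One trial move; the uniform displacement keeps alpha_mn symmetric."""
    trial = r + rng.uniform(-step_size, step_size, size=r.shape)
    dU = potential(trial) - potential(r)
    return trial if metropolis_accept(dU, beta, rng) else r

# Illustrative use with a placeholder harmonic potential.
rng = np.random.default_rng(0)
r = np.zeros(3)
for _ in range(1000):
    r = metropolis_step(r, lambda x: 0.5 * np.sum(x**2), 1.0, 0.5, rng)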
# Line 443 | Line 442 | calculate time correlation functions of the form\cite{
442   calculate time correlation functions of the form\cite{Hansen86}
443   \begin{equation}
444   \langle A(t)\,A(0)\rangle = \lim_{\tau\rightarrow\infty} \frac{1}{\tau}
445 <        \int_0^{\tau} A(t+t^{\prime})\,A(t^{\prime})\,dt^{\prime}
445 >        \int_0^{\tau} A(t+t^{\prime})\,A(t^{\prime})\,dt^{\prime}.
446   \label{introEq:timeCorr}
447   \end{equation}
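In a simulation, the limit in Eq.~\ref{introEq:timeCorr} is approximated by a discrete average over the stored trajectory.  A minimal sketch, with a random placeholder series standing in for $A[\boldsymbol{\Gamma}(t)]$:

import numpy as np

def time_correlation(A, max_lag):
    """Estimate <A(t) A(0)> from a discrete time series A[0..tau-1]."""
    A = np.asarray(A, dtype=float)
    return np.array([np.mean(A[lag:] * A[:len(A) - lag])
                     for lag in range(max_lag)])

# Placeholder data; a real calculation would use A sampled along Gamma(t).
series = np.random.default_rng(0).standard_normal(10000)
corr = time_correlation(series, max_lag=50)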
448   These correlations can be used to measure fundamental time constants
# Line 614 | Line 613 | q(t+\Delta t)= q(t) + v(t)\Delta t + \frac{F(t)}{2m}\D
613   \begin{equation}
614   q(t+\Delta t)= q(t) + v(t)\Delta t + \frac{F(t)}{2m}\Delta t^2 +
615          \frac{\Delta t^3}{3!}\frac{\partial^3 q(t)}{\partial t^3} +
616 <        \mathcal{O}(\Delta t^4)
616 >        \mathcal{O}(\Delta t^4) .
617   \label{introEq:verletForward}
618   \end{equation}
619   As well as,
620   \begin{equation}
621   q(t-\Delta t)= q(t) - v(t)\Delta t + \frac{F(t)}{2m}\Delta t^2 -
622          \frac{\Delta t^3}{3!}\frac{\partial^3 q(t)}{\partial t^3} +
623 <        \mathcal{O}(\Delta t^4)
623 >        \mathcal{O}(\Delta t^4) ,
624   \label{introEq:verletBack}
625   \end{equation}
626 < Where $m$ is the mass of the particle, $q(t)$ is the position at time
626 > where $m$ is the mass of the particle, $q(t)$ is the position at time
627   $t$, $v(t)$ the velocity, and $F(t)$ the force acting on the
628   particle. Adding together Eq.~\ref{introEq:verletForward} and
629   Eq.~\ref{introEq:verletBack} results in,
630   \begin{equation}
631   q(t+\Delta t)+q(t-\Delta t) =
632 <        2q(t) + \frac{F(t)}{m}\Delta t^2 + \mathcal{O}(\Delta t^4)
632 >        2q(t) + \frac{F(t)}{m}\Delta t^2 + \mathcal{O}(\Delta t^4) ,
633   \label{introEq:verletSum}
634   \end{equation}
635 < Or equivalently,
635 > or equivalently,
636   \begin{equation}
637   q(t+\Delta t) \approx
638 <        2q(t) - q(t-\Delta t) + \frac{F(t)}{m}\Delta t^2
638 >        2q(t) - q(t-\Delta t) + \frac{F(t)}{m}\Delta t^2.
639   \label{introEq:verletFinal}
640   \end{equation}
641   This expression contains an error in the estimate of the new positions on the
# Line 645 | Line 644 | q(t+\Delta t) &= q(t) + v(t)\Delta t + \frac{F(t)}{2m}
644   In practice, however, the simulations in this research were integrated
645   with the velocity reformulation of the Verlet method.\cite{allen87:csl}
646   \begin{align}
647 < q(t+\Delta t) &= q(t) + v(t)\Delta t + \frac{F(t)}{2m}\Delta t^2 %
647 > q(t+\Delta t) &= q(t) + v(t)\Delta t + \frac{F(t)}{2m}\Delta t^2 ,%
648   \label{introEq:MDvelVerletPos} \\%
649   %
650 < v(t+\Delta t) &= v(t) + \frac{\Delta t}{2m}[F(t) + F(t+\Delta t)] %
650 > v(t+\Delta t) &= v(t) + \frac{\Delta t}{2m}[F(t) + F(t+\Delta t)] .%
651   \label{introEq:MDvelVerletVel}
652   \end{align}
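One step of Eqs.~\ref{introEq:MDvelVerletPos} and~\ref{introEq:MDvelVerletVel} maps directly onto a few lines of code.  The sketch below is a generic illustration driven by a one-dimensional harmonic force chosen only for the example; it is not the integrator implemented in {\sc oopse}.

import numpy as np

def velocity_verlet_step(q, v, F, m, dt, force):
    """Advance (q, v) by one step, Eqs. (MDvelVerletPos) and (MDvelVerletVel)."""
    q_new = q + v * dt + F / (2.0 * m) * dt**2     # position update
    F_new = force(q_new)                           # force at the new position
    v_new = v + dt / (2.0 * m) * (F + F_new)       # velocity update
    return q_new, v_new, F_new

# Illustrative use: harmonic oscillator, F(q) = -k q.
k, m, dt = 1.0, 1.0, 0.01
force = lambda q: -k * q
q, v = 1.0, 0.0
F = force(q)
for _ in range(1000):
    q, v, F = velocity_verlet_step(q, v, F, m, dt, force)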
653   The original Verlet algorithm can be regained by substituting the
# Line 704 | Line 703 | defined as follows:
703          \sin\phi\sin\theta &%
704          -\cos\phi\sin\theta &%
705          \cos\theta
706 < \end{bmatrix}
706 > \end{bmatrix}.
707   \label{introEq:EulerRotMat}
708   \end{equation}
709  
# Line 720 | Line 719 | as\cite{allen87:csl}
719   \begin{align}
720   \dot{\phi} &= -\omega^s_x \frac{\sin\phi\cos\theta}{\sin\theta} +
721          \omega^s_y \frac{\cos\phi\cos\theta}{\sin\theta} +
722 <        \omega^s_z
722 >        \omega^s_z,
723   \label{introEq:MDeulerPhi} \\%
724   %
725 < \dot{\theta} &= \omega^s_x \cos\phi + \omega^s_y \sin\phi
725 > \dot{\theta} &= \omega^s_x \cos\phi + \omega^s_y \sin\phi,
726   \label{introEq:MDeulerTheta} \\%
727   %
728   \dot{\psi} &= \omega^s_x \frac{\sin\phi}{\sin\theta} -
729 <        \omega^s_y \frac{\cos\phi}{\sin\theta}
729 >        \omega^s_y \frac{\cos\phi}{\sin\theta},
730   \label{introEq:MDeulerPsi}
731   \end{align}
732 < Where $\omega^s_{\alpha}$ is the angular velocity in the lab space frame
732 > where $\omega^s_{\alpha}$ is the angular velocity in the lab space frame
733   along Cartesian coordinate $\alpha$.  However, a difficulty arises when
734   attempting to integrate Eq.~\ref{introEq:MDeulerPhi} and
735   Eq.~\ref{introEq:MDeulerPsi}. The $\frac{1}{\sin \theta}$ present in
# Line 756 | Line 755 | iL=\sum^f_{j=1} \biggl [\dot{q}_j\frac{\partial}{\part
755   defined as,
756   \begin{equation}
757   iL=\sum^f_{j=1} \biggl [\dot{q}_j\frac{\partial}{\partial q_j} +
758 <        F_j\frac{\partial}{\partial p_j} \biggr ]
758 >        F_j\frac{\partial}{\partial p_j} \biggr ].
759   \label{introEq:LiouvilleOperator}
760   \end{equation}
761   Here, $q_j$ and $p_j$ are the position and conjugate momenta of a
# Line 764 | Line 763 | U(t) = e^{iLt}
763   $\Gamma$ is defined as the set of all positions and conjugate momenta,
764   $\{q_j,p_j\}$, and the propagator, $U(t)$, is defined
765   \begin {equation}
766 < U(t) = e^{iLt}
766 > U(t) = e^{iLt}.
767   \label{introEq:Lpropagator}
768   \end{equation}
769   This allows the specification of $\Gamma$ at any time $t$ as
770   \begin{equation}
771 < \Gamma(t) = U(t)\Gamma(0)
771 > \Gamma(t) = U(t)\Gamma(0).
772   \label{introEq:Lp2}
773   \end{equation}
774   It is important to note that $U(t)$ is a unitary operator, meaning
775   \begin{equation}
776 < U(-t)=U^{-1}(t)
776 > U(-t)=U^{-1}(t).
777   \label{introEq:Lp3}
778   \end{equation}
779  
780   Decomposing $L$ into two parts, $iL_1$ and $iL_2$, one can use the
781   Trotter theorem to yield
782   \begin{align}
783 < e^{iLt} &= e^{i(L_1 + L_2)t} \notag \\%
783 > e^{iLt} &= e^{i(L_1 + L_2)t}, \notag \\%
784   %
785 <        &= \biggl [ e^{i(L_1 +L_2)\frac{t}{P}} \biggr]^P \notag \\%
785 >        &= \biggl [ e^{i(L_1 +L_2)\frac{t}{P}} \biggr]^P, \notag \\%
786   %
787          &= \biggl [ e^{iL_1\frac{\Delta t}{2}}\, e^{iL_2\Delta t}\,
788          e^{iL_1\frac{\Delta t}{2}} \biggr ]^P +
789 <        \mathcal{O}\biggl (\frac{t^3}{P^2} \biggr ) \label{introEq:Lp4}
789 >        \mathcal{O}\biggl (\frac{t^3}{P^2} \biggr ), \label{introEq:Lp4}
790   \end{align}
791 < Where $\Delta t = t/P$.
791 > where $\Delta t = t/P$.
792   With this, a discrete time operator $G(\Delta t)$ can be defined:
793   \begin{align}
794   G(\Delta t) &= e^{iL_1\frac{\Delta t}{2}}\, e^{iL_2\Delta t}\,
795 <        e^{iL_1\frac{\Delta t}{2}} \notag \\%
795 >        e^{iL_1\frac{\Delta t}{2}}, \notag \\%
796   %
797          &= U_1 \biggl ( \frac{\Delta t}{2} \biggr )\, U_2 ( \Delta t )\,
798 <        U_1 \biggl ( \frac{\Delta t}{2} \biggr )
798 >        U_1 \biggl ( \frac{\Delta t}{2} \biggr ).
799   \label{introEq:Lp5}
800   \end{align}
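The $\mathcal{O}(t^3/P^2)$ error in Eq.~\ref{introEq:Lp4} can be seen numerically by letting two arbitrary non-commuting matrices stand in for $iL_1$ and $iL_2$ and comparing the symmetric split with the exact exponential.  The sketch below (using SciPy's matrix exponential) is purely illustrative; increasing $P$ tenfold should shrink the error by roughly a factor of one hundred.

import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(0)
L1 = rng.standard_normal((4, 4))   # stand-ins for iL_1 and iL_2
L2 = rng.standard_normal((4, 4))
t = 1.0

exact = expm((L1 + L2) * t)
for P in (1, 10, 100):
    dt = t / P
    G = expm(L1 * dt / 2) @ expm(L2 * dt) @ expm(L1 * dt / 2)
    print(P, np.linalg.norm(np.linalg.matrix_power(G, P) - exact))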
801   Because $U_1(t)$ and $U_2(t)$ are unitary, $G(\Delta t)$ is also
# Line 805 | Line 804 | iL_1 &= \dot{q}\frac{\partial}{\partial q}%
804  
805   As an example, consider the following decomposition of $L$:
806   \begin{align}
807 < iL_1 &= \dot{q}\frac{\partial}{\partial q}%
807 > iL_1 &= \dot{q}\frac{\partial}{\partial q},%
808   \label{introEq:Lp6a} \\%
809   %
810 < iL_2 &= F(q)\frac{\partial}{\partial p}%
810 > iL_2 &= F(q)\frac{\partial}{\partial p}.%
811   \label{introEq:Lp6b}
812   \end{align}
813   This leads to the propagator $G(\Delta t)$,
814   \begin{equation}
815   G(\Delta t) =  e^{\frac{\Delta t}{2} F(q)\frac{\partial}{\partial p}} \,
816          e^{\Delta t\,\dot{q}\frac{\partial}{\partial q}} \,
817 <        e^{\frac{\Delta t}{2} F(q)\frac{\partial}{\partial p}}
817 >        e^{\frac{\Delta t}{2} F(q)\frac{\partial}{\partial p}}.
818   \label{introEq:Lp7}
819   \end{equation}
820   Operating $G(\Delta t)$ on $\Gamma(0)$ and utilizing the operator property
821   \begin{equation}
822 < e^{c\frac{\partial}{\partial x}}\, f(x) = f(x+c)
822 > e^{c\frac{\partial}{\partial x}}\, f(x) = f(x+c),
823   \label{introEq:Lp8}
824   \end{equation}
825 < Where $c$ is independent of $x$.  One obtains the following:  
825 >  where $c$ is independent of $x$, one obtains the following:
826   \begin{align}
827   \dot{q}\biggl (\frac{\Delta t}{2}\biggr ) &=
828 <        \dot{q}(0) + \frac{\Delta t}{2m}\, F[q(0)] \label{introEq:Lp9a}\\%
828 >        \dot{q}(0) + \frac{\Delta t}{2m}\, F[q(0)], \label{introEq:Lp9a}\\%
829   %
830 < q(\Delta t) &= q(0) + \Delta t\, \dot{q}\biggl (\frac{\Delta t}{2}\biggr )%
830 > q(\Delta t) &= q(0) + \Delta t\, \dot{q}\biggl (\frac{\Delta t}{2}\biggr ),%
831          \label{introEq:Lp9b}\\%
832   %
833   \dot{q}(\Delta t) &= \dot{q}\biggl (\frac{\Delta t}{2}\biggr ) +
834 <        \frac{\Delta t}{2m}\, F[q(0)] \label{introEq:Lp9c}
834 >        \frac{\Delta t}{2m}\, F[q(0)]. \label{introEq:Lp9c}
835   \end{align}
836   Or written another way,
837   \begin{align}
838   q(t+\Delta t) &= q(0) + \dot{q}(0)\Delta t +
839 <        \frac{F[q(0)]}{m}\frac{\Delta t^2}{2} %
839 >        \frac{F[q(0)]}{m}\frac{\Delta t^2}{2}, %
840   \label{introEq:Lp10a} \\%
841   %
842   \dot{q}(\Delta t) &= \dot{q}(0) + \frac{\Delta t}{2m}
843 <        \biggl [F[q(0)] + F[q(\Delta t)] \biggr] %
843 >        \biggl [F[q(0)] + F[q(\Delta t)] \biggr]. %
844   \label{introEq:Lp10b}
845   \end{align}
846   This is the velocity Verlet formulation presented in
# Line 869 | Line 868 | iL_{\text{pos}} &= \dot{q}\frac{\partial}{\partial q}
868   Liouville operator:
869   \begin{align}
870   iL_{\text{pos}} &= \dot{q}\frac{\partial}{\partial q} +
871 <        \mathsf{\dot{A}}\frac{\partial}{\partial \mathsf{A}}
871 >        \mathsf{\dot{A}}\frac{\partial}{\partial \mathsf{A}} ,
872   \label{introEq:SR1a} \\%
873   %
874 < iL_F &= F(q)\frac{\partial}{\partial p}
874 > iL_F &= F(q)\frac{\partial}{\partial p},
875   \label{introEq:SR1b} \\%
876 < iL_{\tau} &= \tau(\mathsf{A})\frac{\partial}{\partial j}
876 > iL_{\tau} &= \tau(\mathsf{A})\frac{\partial}{\partial j},
877   \label{introEq:SR1c}
878   \end{align}
879 < Where $\tau(\mathsf{A})$ is the torque of the system
879 > where $\tau(\mathsf{A})$ is the torque of the system
880   due to the configuration, and $j$ is the conjugate
881   angular momentum of the system. The propagator, $G(\Delta t)$, becomes
882   \begin{equation}
# Line 885 | Line 884 | G(\Delta t) = e^{\frac{\Delta t}{2} F(q)\frac{\partial
884          e^{\frac{\Delta t}{2} \tau(\mathsf{A})\frac{\partial}{\partial j}} \,
885          e^{\Delta t\,iL_{\text{pos}}} \,
886          e^{\frac{\Delta t}{2} \tau(\mathsf{A})\frac{\partial}{\partial j}} \,
887 <        e^{\frac{\Delta t}{2} F(q)\frac{\partial}{\partial p}}
887 >        e^{\frac{\Delta t}{2} F(q)\frac{\partial}{\partial p}}.
888   \label{introEq:SR2}
889   \end{equation}
890   Propagation of the linear and angular momenta follows as in the Verlet
# Line 899 | Line 898 | $U_{\text{pos}}(\Delta t)$.
898          \mathcal{U}_y \biggl(\frac{\Delta t}{2}\biggr)\,
899          \mathcal{U}_z (\Delta t)\,
900          \mathcal{U}_y \biggl(\frac{\Delta t}{2}\biggr)\,
901 <        \mathcal{U}_x \biggl(\frac{\Delta t}{2}\biggr)\,
901 >        \mathcal{U}_x \biggl(\frac{\Delta t}{2}\biggr),
902   \label{introEq:SR3}
903   \end{equation}
904 < Where $\mathcal{U}_{\alpha}$ is a unitary rotation of $\mathsf{A}$ and
904 > where $\mathcal{U}_{\alpha}$ is a unitary rotation of $\mathsf{A}$ and
905   $j$ about each axis $\alpha$.  As all propagations are now
906   unitary and symplectic, the entire integration scheme is also
907   symplectic and time reversible.
# Line 962 | Line 961 | give a summary of the results found in this dissertati
961   In the last chapter, I discuss future directions
962   for both {\sc oopse} and this mesoscale model.  Additionally, I will
963   give a summary of the results found in this dissertation.
965
966

Diff Legend

     Removed lines
  +  Added lines
  <  Changed lines (revision 1105)
  >  Changed lines (revision 1106)