\chapter{\label{chapt:intro}INTRODUCTION AND THEORETICAL BACKGROUND}

The techniques used in the course of this research fall under the two
main classes of molecular simulation: Molecular Dynamics and Monte
Carlo. Molecular Dynamics simulations are carried out by integrating
the equations of motion for a given system of particles, allowing the
researcher to gain insight into the time-dependent evolution of a
system. Transport phenomena are readily studied with this technique,
making Molecular Dynamics the main simulation method used in this
research. Other aspects of the research fall into the Monte Carlo
class of simulations. In Monte Carlo, the configuration space
available to the collection of particles is sampled stochastically,
or randomly. Each configuration is chosen with a probability given by
the Boltzmann distribution. These types of simulations are best used
to probe properties of a system that depend only on the state of the
system. Structural information about a system is most readily
obtained through these methods.

Although the two techniques employed seem dissimilar, they are both
linked by the overarching principles of Statistical
Mechanics. Statistical Mechanics governs the behavior of
both classes of simulations and dictates what each method can and
cannot do. When investigating a system, one must first analyze which
thermodynamic properties of the system are being probed, then choose
the method that best suits that objective.

\section{\label{introSec:statThermo}Statistical Mechanics}

The following section serves as a brief introduction to some of the
Statistical Mechanics concepts presented in this dissertation. What
follows is a derivation of Boltzmann-weighted statistics and an
explanation of how one can use that information to calculate an
observable for a system. The section concludes with a discussion of
the ergodic hypothesis and its relevance to this research.

\subsection{\label{introSec:boltzman}Boltzmann Weighted Statistics}

Consider a system, $\gamma$, with total energy $E_{\gamma}$. Let
$\Omega(E_{\gamma})$ represent the number of degenerate ways
$\boldsymbol{\Gamma}\{r_1,r_2,\ldots r_n,p_1,p_2,\ldots p_n\}$, the
collection of positions and conjugate momenta of system $\gamma$, can
be configured to give $E_{\gamma}$. Further, if $\gamma$ is a subset
of a larger system, $\boldsymbol{\Lambda}\{E_1,E_2,\ldots
E_{\gamma},\ldots E_n\}$, the total degeneracy of the system can be
expressed as
\begin{equation}
\Omega(\boldsymbol{\Lambda}) = \Omega(E_1) \times \Omega(E_2) \times \ldots
\Omega(E_{\gamma}) \times \ldots \Omega(E_n).
\label{introEq:SM0.1}
\end{equation}
This multiplicative combination of degeneracies is illustrated in
Fig.~\ref{introFig:degenProd}.

\begin{figure}
\centering
\includegraphics[width=\linewidth]{omegaFig.eps}
\caption[An explanation of the combination of degeneracies]{Systems A and B both have three energy levels and two indistinguishable particles. When the total energy is 2, there are two ways for each system to disperse the energy. However, for system C, the superset of A and B, the total degeneracy is the product of the degeneracy of each system. In this case $\Omega(\text{C})$ is 4.}
\label{introFig:degenProd}
\end{figure}

Next, consider the specific case of $\gamma$ in contact with a
bath. Exchange of energy is allowed between the bath and the system,
subject to the constraint that the total energy
($E_{\text{bath}}+E_{\gamma}$) remain constant. $\Omega(E)$, where $E$
is the total energy of both systems, can be represented as
\begin{equation}
\Omega(E) = \Omega(E_{\gamma}) \times \Omega(E - E_{\gamma})
\label{introEq:SM1}
\end{equation}
or additively as
\begin{equation}
\ln \Omega(E) = \ln \Omega(E_{\gamma}) + \ln \Omega(E - E_{\gamma})
\label{introEq:SM2}
\end{equation}

The most probable partitioning of energy is the one that maximizes
the number of degenerate configurations in $E$.\cite{Frenkel1996}
This gives
\begin{equation}
\frac{\partial \ln \Omega(E)}{\partial E_{\gamma}} = 0 =
\frac{\partial \ln \Omega(E_{\gamma})}{\partial E_{\gamma}}
+
\frac{\partial \ln \Omega(E_{\text{bath}})}{\partial E_{\text{bath}}}
\frac{\partial E_{\text{bath}}}{\partial E_{\gamma}}
\label{introEq:SM3}
\end{equation}
Where $E_{\text{bath}}$ is $E-E_{\gamma}$, and
$\frac{\partial E_{\text{bath}}}{\partial E_{\gamma}}$ is
$-1$. Eq.~\ref{introEq:SM3} becomes
\begin{equation}
\frac{\partial \ln \Omega(E_{\gamma})}{\partial E_{\gamma}} =
\frac{\partial \ln \Omega(E_{\text{bath}})}{\partial E_{\text{bath}}}
\label{introEq:SM4}
\end{equation}

At this point, one can draw a relationship between the maximization of
degeneracy in Eq.~\ref{introEq:SM3} and the second law of
thermodynamics. Namely, that for a closed system, entropy will
increase for an irreversible process.\cite{chandler:1987} Here the
process is the partitioning of energy between the two systems. This
allows the following definition of entropy:
\begin{equation}
S = k_B \ln \Omega(E)
\label{introEq:SM5}
\end{equation}
Where $k_B$ is the Boltzmann constant. Having defined entropy, one can
also define the temperature of the system using the thermodynamic
relation
\begin{equation}
\frac{1}{T} = \biggl ( \frac{\partial S}{\partial E} \biggr )_{N,V}
\label{introEq:SM6}
\end{equation}
The temperature in the system $\gamma$ is then
\begin{equation}
\beta( E_{\gamma} ) = \frac{1}{k_B T} =
\frac{\partial \ln \Omega(E_{\gamma})}{\partial E_{\gamma}}
\label{introEq:SM7}
\end{equation}
Applying this to Eq.~\ref{introEq:SM4} gives the following
\begin{equation}
\beta( E_{\gamma} ) = \beta( E_{\text{bath}} )
\label{introEq:SM8}
\end{equation}
Eq.~\ref{introEq:SM8} shows that the partitioning of energy between
the two systems is actually a process of thermal
equilibration.\cite{Frenkel1996}

An application of these results is to formulate the expectation value
of an observable, $A$, in the canonical ensemble. In the canonical
ensemble, the number of particles, $N$, the volume, $V$, and the
temperature, $T$, are all held constant while the energy, $E$, is
allowed to fluctuate. Returning to the previous example, the bath
system is now an infinitely large thermal bath, whose exchange of
energy with the system $\gamma$ holds the temperature constant. The
partitioning of energy between the bath and the system is then related
to the total energy of both systems and the fluctuations in
$E_{\gamma}$:
\begin{equation}
\Omega( E_{\gamma} ) = \Omega( E - E_{\gamma} )
\label{introEq:SM9}
\end{equation}
The expectation value of $A$ can be defined as
\begin{equation}
\langle A \rangle =
\int\limits_{\boldsymbol{\Gamma}} d\boldsymbol{\Gamma}
P_{\gamma} A(\boldsymbol{\Gamma})
\label{introEq:SM10}
\end{equation}
Where $\int\limits_{\boldsymbol{\Gamma}} d\boldsymbol{\Gamma}$ denotes
an integration over all accessible points in phase space, $P_{\gamma}$
is the probability of being in a given phase state, and
$A(\boldsymbol{\Gamma})$ is an observable that is a function of the
phase state.

Because entropy seeks to maximize the number of degenerate states at a
given energy, the probability of being in a particular state in
$\gamma$ will be directly proportional to the number of allowable
states the coupled system is able to assume. Namely,
\begin{equation}
P_{\gamma} \propto \Omega( E_{\text{bath}} ) =
e^{\ln \Omega( E - E_{\gamma})}
\label{introEq:SM11}
\end{equation}
With $E_{\gamma} \ll E$, $\ln \Omega$ can be expanded in a Taylor series:
\begin{equation}
\ln \Omega ( E - E_{\gamma}) =
\ln \Omega (E) -
E_{\gamma} \frac{\partial \ln \Omega }{\partial E}
+ \ldots
\label{introEq:SM12}
\end{equation}
Higher order terms are omitted as $E$ is an infinite thermal
bath. Further, using Eq.~\ref{introEq:SM7}, Eq.~\ref{introEq:SM11} can
be rewritten:
\begin{equation}
P_{\gamma} \propto e^{-\beta E_{\gamma}}
\label{introEq:SM13}
\end{equation}
Where $\ln \Omega(E)$ has been factored out of the proportionality as a
constant. Normalizing the probability ($\int\limits_{\boldsymbol{\Gamma}}
d\boldsymbol{\Gamma} P_{\gamma} = 1$) gives
\begin{equation}
P_{\gamma} = \frac{e^{-\beta E_{\gamma}}}
{\int\limits_{\boldsymbol{\Gamma}} d\boldsymbol{\Gamma} e^{-\beta E_{\gamma}}}
\label{introEq:SM14}
\end{equation}
This result is the standard Boltzmann statistical distribution.
Applying it to Eq.~\ref{introEq:SM10} one can obtain the following relation for ensemble averages:
\begin{equation}
\langle A \rangle =
\frac{\int\limits_{\boldsymbol{\Gamma}} d\boldsymbol{\Gamma}
A( \boldsymbol{\Gamma} ) e^{-\beta E_{\gamma}}}
{\int\limits_{\boldsymbol{\Gamma}} d\boldsymbol{\Gamma} e^{-\beta E_{\gamma}}}
\label{introEq:SM15}
\end{equation}

\subsection{\label{introSec:ergodic}The Ergodic Hypothesis}

In the case of a Molecular Dynamics simulation, rather than
calculating an ensemble average integral over phase space as in
Eq.~\ref{introEq:SM15}, it becomes easier to calculate the time
average of an observable. Namely,
\begin{equation}
\langle A \rangle_t = \frac{1}{\tau}
\int_0^{\tau} A[\boldsymbol{\Gamma}(t)]\,dt
\label{introEq:SM16}
\end{equation}
Where the value of an observable is averaged over the length of time,
$\tau$, that the simulation is run. This type of measurement mirrors
the experimental measurement of an observable. In an experiment, the
instrument analyzing the system must average its observation over the
finite time of the measurement. What is required, then, is a principle
to relate the time average to the ensemble average. This is the
ergodic hypothesis.
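
In discrete form, Eq.~\ref{introEq:SM16} is simply an average over the configurations saved during a run. The following is a minimal Python sketch of that accumulation; the \texttt{trajectory} array and \texttt{observable} function are hypothetical stand-ins for simulation output, not part of any particular package:
\begin{lstlisting}[language=Python]
import numpy as np

def time_average(trajectory, observable):
    """Discrete analog of Eq. (SM16): average an observable
    over the frames saved during a simulation of length tau."""
    samples = [observable(frame) for frame in trajectory]
    return np.mean(samples)

# Example with a toy trajectory: 1000 frames of 10 coordinates.
trajectory = np.random.normal(size=(1000, 10))
avg = time_average(trajectory, lambda frame: np.mean(frame**2))
\end{lstlisting}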

Ergodicity is the assumption that given an infinite amount of time, a
system will visit every available point in phase
space.\cite{Frenkel1996} For most systems, this is a valid assumption,
except in cases where the system may be trapped in a local feature
(\emph{e.g.}~glasses). When valid, ergodicity allows the unification
of a time-averaged observation and an ensemble-averaged one. If an
observation is averaged over a sufficiently long time, the system is
assumed to visit all energetically accessible points in phase space,
giving a properly weighted statistical average. This allows the
researcher freedom of choice when deciding how best to measure a given
observable. When an ensemble-averaged approach seems most logical,
the Monte Carlo techniques described in Sec.~\ref{introSec:monteCarlo}
can be utilized. Conversely, if a problem lends itself to a time
averaging approach, the Molecular Dynamics techniques in
Sec.~\ref{introSec:MD} can be employed.

\section{\label{introSec:monteCarlo}Monte Carlo Simulations}

The Monte Carlo method was developed by Metropolis and Ulam for their
work on fissionable material.\cite{metropolis:1949} The method is so
named because it makes extensive use of random
numbers.\cite{allen87:csl} The Monte Carlo method allows for the
solution of integrals through the stochastic sampling of the values
within the integral. In the simplest case, the evaluation of an
integral would follow a brute force method of
sampling.\cite{Frenkel1996} Consider the following single dimensional
integral:
\begin{equation}
I = \int_a^b f(x)\,dx
\label{eq:MCex1}
\end{equation}
The equation can be recast as:
\begin{equation}
I = \int^b_a \frac{f(x)}{\rho(x)} \rho(x)\,dx
\label{introEq:Importance1}
\end{equation}
Where $\rho(x)$ is an arbitrary probability distribution in $x$. If
one conducts $\tau$ trials, selecting a random number, $\zeta_\tau$,
from the distribution $\rho(x)$ on the interval $[a,b]$, then
Eq.~\ref{introEq:Importance1} becomes
\begin{equation}
I= \lim_{\tau \rightarrow \infty}\biggl \langle \frac{f(x)}{\rho(x)} \biggr \rangle_{\text{trials}[a,b]}
\label{introEq:Importance2}
\end{equation}
If $\rho(x)$ is uniformly distributed over the interval $[a,b]$,
\begin{equation}
\rho(x) = \frac{1}{b-a}
\label{introEq:importance2b}
\end{equation}
then the integral can be rewritten as
\begin{equation}
I = (b-a)\lim_{\tau \rightarrow \infty}
\langle f(x) \rangle_{\text{trials}[a,b]}
\label{eq:MCex2}
\end{equation}
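
To make Eq.~\ref{eq:MCex2} concrete, the following is a minimal Python sketch of brute force Monte Carlo integration with uniform sampling; the integrand and interval are arbitrary choices for this illustration:
\begin{lstlisting}[language=Python]
import numpy as np

def mc_integrate(f, a, b, trials=100000):
    """Brute-force Monte Carlo estimate of the integral of f on
    [a, b], i.e. the finite-trial version of Eq. (MCex2)."""
    x = np.random.uniform(a, b, trials)  # uniform rho(x) = 1/(b-a)
    return (b - a) * np.mean(f(x))

# Example: integrate sin(x) on [0, pi]; the exact answer is 2.
estimate = mc_integrate(np.sin, 0.0, np.pi)
\end{lstlisting}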

However, in Statistical Mechanics, one is typically interested in
integrals of the form:
\begin{equation}
\langle A \rangle = \frac{\int d^N \mathbf{r}~A(\mathbf{r}^N)%
e^{-\beta V(\mathbf{r}^N)}}%
{\int d^N \mathbf{r}~e^{-\beta V(\mathbf{r}^N)}}
\label{eq:mcEnsAvg}
\end{equation}
Where $\mathbf{r}^N$ stands for the coordinates of all $N$ particles
and $A$ is some observable that is dependent only on position. This is
the ensemble average of $A$ as presented in
Sec.~\ref{introSec:statThermo}, except here $A$ is independent of
momentum. Therefore the momentum contribution to the integral can be
factored out, leaving the configurational integral. Application of the
brute force method to this system would be highly inefficient. Due to
the Boltzmann weighting of this integral, most random configurations
will make a near zero contribution to the ensemble average. This is
where importance sampling comes into play.\cite{allen87:csl}

Importance sampling is a method in which the distribution from which
random configurations are chosen is selected in such a way as to
efficiently sample the integral in question. Looking at
Eq.~\ref{eq:mcEnsAvg}, and realizing
\begin{equation}
\rho_{kT}(\mathbf{r}^N) =
\frac{e^{-\beta V(\mathbf{r}^N)}}
{\int d^N \mathbf{r}~e^{-\beta V(\mathbf{r}^N)}}
\label{introEq:MCboltzman}
\end{equation}
Where $\rho_{kT}$ is the Boltzmann distribution, the ensemble average
can be rewritten as
\begin{equation}
\langle A \rangle = \int d^N \mathbf{r}~A(\mathbf{r}^N)
\rho_{kT}(\mathbf{r}^N)
\label{introEq:Importance3}
\end{equation}
Applying Eq.~\ref{introEq:Importance1} one obtains
\begin{equation}
\langle A \rangle = \biggl \langle
\frac{ A \rho_{kT}(\mathbf{r}^N) }
{\rho(\mathbf{r}^N)} \biggr \rangle_{\text{trials}}
\label{introEq:Importance4}
\end{equation}
By selecting $\rho(\mathbf{r}^N)$ to be $\rho_{kT}(\mathbf{r}^N)$,
Eq.~\ref{introEq:Importance4} becomes
\begin{equation}
\langle A \rangle = \langle A(\mathbf{r}^N) \rangle_{kT}
\label{introEq:Importance5}
\end{equation}
The difficulty is selecting points $\mathbf{r}^N$ such that they are
sampled from the distribution $\rho_{kT}(\mathbf{r}^N)$. A solution
was proposed by Metropolis \emph{et al.}\cite{metropolis:1953} which involved
the use of a Markov chain whose limiting distribution was
$\rho_{kT}(\mathbf{r}^N)$.

\subsection{\label{introSec:markovChains}Markov Chains}

A Markov chain is a chain of states satisfying the following
conditions:\cite{leach01:mm}
\begin{enumerate}
\item The outcome of each trial depends only on the outcome of the previous trial.
\item Each trial belongs to a finite set of outcomes called the state space.
\end{enumerate}
Given two configurations, $\mathbf{r}^N_m$ and $\mathbf{r}^N_n$, let
$\rho_m$ and $\rho_n$ be the probabilities of being in state
$\mathbf{r}^N_m$ and $\mathbf{r}^N_n$ respectively. Further, the two
states are linked by a transition probability, $\pi_{mn}$, which is the
probability of going from state $m$ to state $n$.

\newcommand{\accMe}{\operatorname{acc}}

The transition probability is given by the following:
\begin{equation}
\pi_{mn} = \alpha_{mn} \times \accMe(m \rightarrow n)
\label{introEq:MCpi}
\end{equation}
Where $\alpha_{mn}$ is the probability of attempting the move $m
\rightarrow n$, and $\accMe$ is the probability of accepting the move
$m \rightarrow n$. Defining a probability vector,
$\boldsymbol{\rho}$, such that
\begin{equation}
\boldsymbol{\rho} = \{\rho_1, \rho_2, \ldots \rho_m, \rho_n,
\ldots \rho_N \}
\label{introEq:MCrhoVector}
\end{equation}
a transition matrix $\boldsymbol{\Pi}$ can be defined,
whose elements are $\pi_{mn}$, for each given transition. The
limiting distribution of the Markov chain can then be found by
applying the transition matrix an infinite number of times to the
distribution vector.
\begin{equation}
\boldsymbol{\rho}_{\text{limit}} =
\lim_{N \rightarrow \infty} \boldsymbol{\rho}_{\text{initial}}
\boldsymbol{\Pi}^N
\label{introEq:MCmarkovLimit}
\end{equation}
The limiting distribution of the chain is independent of the starting
distribution, and successive applications of the transition matrix
will only yield the limiting distribution again.
\begin{equation}
\boldsymbol{\rho}_{\text{limit}} = \boldsymbol{\rho}_{\text{limit}}
\boldsymbol{\Pi}
\label{introEq:MCmarkovEquil}
\end{equation}
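
The fixed point property of Eq.~\ref{introEq:MCmarkovEquil} is easily demonstrated numerically. The following minimal Python sketch applies an arbitrary two-state transition matrix (chosen only for illustration) repeatedly to a starting distribution:
\begin{lstlisting}[language=Python]
import numpy as np

# Arbitrary row-stochastic transition matrix (rows sum to 1).
Pi = np.array([[0.9, 0.1],
               [0.5, 0.5]])

rho = np.array([1.0, 0.0])   # starting distribution
for _ in range(100):         # repeated application of Pi
    rho = rho @ Pi

# rho now satisfies rho = rho @ Pi to numerical precision,
# i.e. it is the limiting distribution of the chain.
assert np.allclose(rho, rho @ Pi)
\end{lstlisting}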

\subsection{\label{introSec:metropolisMethod}The Metropolis Method}

In the Metropolis method,\cite{metropolis:1953}
Eq.~\ref{introEq:MCmarkovEquil} is solved such that
$\boldsymbol{\rho}_{\text{limit}}$ matches the Boltzmann distribution
of states. The method accomplishes this by imposing the strong
condition of microscopic reversibility on the equilibrium
distribution. This means that at equilibrium, the probability of going
from $m$ to $n$ is the same as going from $n$ to $m$.
\begin{equation}
\rho_m\pi_{mn} = \rho_n\pi_{nm}
\label{introEq:MCmicroReverse}
\end{equation}
Further, $\boldsymbol{\alpha}$ is chosen to be a symmetric matrix in
the Metropolis method. Using Eq.~\ref{introEq:MCpi},
Eq.~\ref{introEq:MCmicroReverse} becomes
\begin{equation}
\frac{\accMe(m \rightarrow n)}{\accMe(n \rightarrow m)} =
\frac{\rho_n}{\rho_m}
\label{introEq:MCmicro2}
\end{equation}
For a Boltzmann limiting distribution,
\begin{equation}
\frac{\rho_n}{\rho_m} = e^{-\beta[\mathcal{U}(n) - \mathcal{U}(m)]}
= e^{-\beta \Delta \mathcal{U}}
\label{introEq:MCmicro3}
\end{equation}
This allows the following set of acceptance rules to be defined:
\begin{equation}
\accMe( m \rightarrow n ) =
\begin{cases}
1& \text{if $\Delta \mathcal{U} \leq 0$,} \\
e^{-\beta \Delta \mathcal{U}}& \text{if $\Delta \mathcal{U} > 0$.}
\end{cases}
\label{introEq:accRules}
\end{equation}

Using the acceptance criteria from Eq.~\ref{introEq:accRules}, the
Metropolis method proceeds as follows:
\begin{enumerate}
\item Generate an initial configuration $\mathbf{r}^N$ which has some finite probability in $\rho_{kT}$.
\item Modify $\mathbf{r}^N$ to generate configuration $\mathbf{r^{\prime}}^N$.
\item If the new configuration lowers the energy of the system, accept the move ($\mathbf{r}^N$ becomes $\mathbf{r^{\prime}}^N$). Otherwise accept with probability $e^{-\beta \Delta \mathcal{U}}$.
\item Accumulate the average for the configurational observable of interest.
\item Repeat from step 2 until the average converges.
\end{enumerate}
One important note is that the average is accumulated whether the move
is accepted or not; this ensures proper weighting of the average.
Using Eq.~\ref{introEq:Importance4} it becomes clear that the
accumulated averages are the ensemble averages, as this method ensures
that the limiting distribution is the Boltzmann distribution.
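
The procedure above is compact enough to sketch directly. The following is a minimal Python implementation of the Metropolis method for a single degree of freedom; the harmonic potential, step size, and temperature are illustrative choices, not the systems studied in this research:
\begin{lstlisting}[language=Python]
import numpy as np

def metropolis(potential, x0, beta, step, n_moves=100000):
    """Minimal Metropolis sampler following the five steps above."""
    x, u = x0, potential(x0)
    samples = []
    for _ in range(n_moves):
        x_new = x + np.random.uniform(-step, step)  # symmetric alpha
        du = potential(x_new) - u
        if du <= 0 or np.random.rand() < np.exp(-beta * du):
            x, u = x_new, u + du                    # accept the move
        samples.append(x)  # accumulate whether accepted or not
    return np.mean(samples), samples

# Example: sample a harmonic well U(x) = x^2 at beta = 1.
mean_x, _ = metropolis(lambda x: x * x, 0.0, 1.0, 0.5)
\end{lstlisting}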

\section{\label{introSec:MD}Molecular Dynamics Simulations}

The main simulation tool used in this research is Molecular Dynamics.
Molecular Dynamics is the numerical integration of the equations of
motion for a system in order to obtain information about the dynamic
changes in the positions and momenta of the constituent particles.
This allows the calculation of not only configurational observables,
but momentum dependent ones as well: diffusion constants, relaxation
events, folding/unfolding events, etc. With the time dependent
information gained from a Molecular Dynamics simulation, one can also
calculate time correlation functions of the form\cite{Hansen86}
\begin{equation}
\langle A(t)\,A(0)\rangle = \lim_{\tau\rightarrow\infty} \frac{1}{\tau}
\int_0^{\tau} A(t+t^{\prime})\,A(t^{\prime})\,dt^{\prime}
\label{introEq:timeCorr}
\end{equation}
These correlations can be used to measure fundamental time constants
of a system, such as diffusion constants from the velocity
autocorrelation or dipole relaxation times from the dipole
autocorrelation. Due to the principle of ergodicity,
Sec.~\ref{introSec:ergodic}, the average of a static observable over
the length of the simulation is taken to be the ensemble average for
the system.
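
In practice, Eq.~\ref{introEq:timeCorr} is evaluated as a discrete average over the time origins available in a finite run. A minimal Python sketch, assuming the observable has been saved at evenly spaced times, is:
\begin{lstlisting}[language=Python]
import numpy as np

def autocorrelation(a, max_lag):
    """Discrete estimate of <A(t) A(0)> from a time series a,
    averaging over all available time origins t'."""
    a = np.asarray(a)
    return np.array([np.mean(a[lag:] * a[:len(a) - lag])
                     for lag in range(max_lag)])

# Example: correlation of a noisy, slowly varying signal.
signal = np.cumsum(np.random.normal(size=5000)) * 0.01
corr = autocorrelation(signal, max_lag=500)
\end{lstlisting}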

The choice of when to use Molecular Dynamics over Monte Carlo
techniques is normally decided by the observables in which the
researcher is interested. If the observables depend on time in any
fashion, then the only choice is Molecular Dynamics in some form.
However, when the observable is dependent only on the configuration,
then for most small systems, Monte Carlo techniques will be more
efficient. The focus of research in the second half of this
dissertation is centered on the dynamic properties of phospholipid
bilayers, making Molecular Dynamics key in the simulation of those
properties.

\subsection{\label{introSec:mdAlgorithm}Molecular Dynamics Algorithms}

To illustrate how the molecular dynamics technique is applied, the
following sections will describe the sequence involved in a
simulation. Sec.~\ref{introSec:mdInit} deals with the initialization
of a simulation. Sec.~\ref{introSec:mdForce} discusses issues
involved with the calculation of the forces.
Sec.~\ref{introSec:mdIntegrate} concludes the algorithm discussion
with the integration of the equations of motion.\cite{Frenkel1996}

\subsection{\label{introSec:mdInit}Initialization}

When selecting the initial configuration for the simulation it is
important to consider what dynamics one is hoping to observe.
Ch.~\ref{chapt:lipid} deals with the formation and equilibrium dynamics of
phospholipid membranes. Therefore in these simulations initial
positions were selected that in some cases dispersed the lipids in
water, and in other cases structured the lipids into preformed
bilayers. Important considerations at this stage of the simulation are:
\begin{itemize}
\item There are no overlapping molecules or atoms.
\item Velocities are chosen in such a way as to give the system zero total momentum and angular momentum.
\item It is also sometimes desirable to select the velocities to correctly sample the ensemble at a particular target temperature.
\end{itemize}

The first point is important due to the forces generated when two
particles are too close together. If overlap occurs, the first
evaluation of forces will return numbers so large as to render the
numerical integration of the equations of motion meaningless. The second
consideration keeps the system from drifting or rotating as a whole.
This arises from the fact that most simulations are of systems in
equilibrium in the absence of outside forces. Therefore any net
movement would be unphysical and an artifact of the simulation method
used. The final point addresses the selection of the magnitude of the
initial velocities. For many simulations it is convenient to use this
opportunity to scale the amount of kinetic energy to reflect the
desired thermal distribution of the system. However, it must be noted
that due to equipartition, most systems will require further velocity
rescaling after the first few simulation steps, due to the exchange
between kinetic energy and energy stored as potential energy.

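As an illustration of the second and third points, the following is a minimal Python sketch of one common initialization strategy: a Gaussian velocity draw followed by removal of the net linear momentum and rescaling to a target temperature. Reduced units are assumed, and removal of the net angular momentum is omitted for brevity:
\begin{lstlisting}[language=Python]
import numpy as np

def init_velocities(masses, target_T, kB=1.0):
    """Draw velocities, zero the net momentum, and rescale the
    kinetic energy to the target temperature (reduced units)."""
    n = len(masses)
    v = np.random.normal(size=(n, 3)) / np.sqrt(masses)[:, None]
    # Zero the total linear momentum.
    p_total = (masses[:, None] * v).sum(axis=0)
    v -= p_total / masses.sum()
    # Rescale the kinetic energy to the desired temperature.
    T_now = (masses[:, None] * v**2).sum() / (3 * n * kB)
    return v * np.sqrt(target_T / T_now)

velocities = init_velocities(np.ones(1000), target_T=1.0)
\end{lstlisting}
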
\subsection{\label{introSec:mdForce}Force Evaluation}

The evaluation of forces is the most computationally expensive portion
of any molecular dynamics simulation. This is due entirely to the
evaluation of long range forces in a simulation, which are typically
pair potentials. These forces are most commonly the van der Waals
force, and sometimes Coulombic forces as well. For a pair-wise
interaction, there are $N(N-1)/2$ pairs to be evaluated, where $N$ is
the number of particles in the system. This leads to calculations which
scale as $N^2$, making large simulations prohibitive in the absence
of any computation-saving techniques.

Another consideration one must resolve is that, in a given simulation,
a disproportionate number of the particles will feel the effects of
the surface.\cite{allen87:csl} For a cubic system of 1000 particles
arranged in a $10 \times 10 \times 10$ cube, 488 particles will be
exposed to the surface. Unless one is simulating an isolated cluster
in a vacuum, the behavior of the system will be far from the desired
bulk characteristics. To offset this, simulations employ
periodic images.\cite{born:1912}

The technique involves the use of an algorithm that replicates the
simulation box on an infinite lattice in Cartesian space. Any
particle leaving the simulation box on one side will have an image of
itself enter on the opposite side (see
Fig.~\ref{introFig:pbc}). Therefore, any pair of particles has an
image of each other, real or periodic, within $\text{box}/2$. A
discussion of the method used to calculate the periodic image can be
found in Sec.~\ref{oopseSec:pbc}.

\begin{figure}
\centering
\includegraphics[width=\linewidth]{pbcFig.eps}
\caption[An illustration of periodic boundary conditions]{A 2-D illustration of periodic boundary conditions. As one particle leaves the right of the simulation box, an image of it enters the left.}
\label{introFig:pbc}
\end{figure}

Returning to the topic of the computational scaling of the force
evaluation, the use of periodic boundary conditions requires that a
cutoff radius be employed. Using a cutoff radius improves the
efficiency of the force evaluation, as particles farther than a
predetermined distance, $r_{\text{cut}}$, are not included in the
calculation.\cite{Frenkel1996} In a simulation with periodic images,
there are two methods to choose from, both with their own cutoff
limits. In the minimum image convention, $r_{\text{cut}}$ has a
maximum value of $\text{box}/2$. This is because each atom has only
one image that is seen by other atoms. The image used is the one that
minimizes the distance between the two atoms. A system of wrapped
images about a central atom therefore has a maximum length scale of
box on a side (Fig.~\ref{introFig:rMaxMin}). The second convention,
the multiple image convention, has a maximum $r_{\text{cut}}$ of the box
length itself. Here multiple images of each atom are replicated in the
periodic cells surrounding the central atom; this causes the atom to
see multiple copies of several atoms. If the cutoff radius is larger
than box, however, then the atom will see an image of itself, and
attempt to calculate an unphysical self-self force interaction
(Fig.~\ref{introFig:rMaxMult}). Due to the increased complexity and
computational inefficiency of the multiple image method, the minimum
image method is the one used throughout this research.

\begin{figure}
\centering
\includegraphics[width=\linewidth]{rCutMaxFig.eps}
\caption[An explanation of the minimum image convention]{The yellow atom has all other images wrapped to itself as the center. If $r_{\text{cut}}=\text{box}/2$, then the distribution is uniform (blue atoms). However, when $r_{\text{cut}}>\text{box}/2$ the corners are disproportionately weighted (green atoms) vs.~the axial directions (shaded regions).}
\label{introFig:rMaxMin}
\end{figure}

\begin{figure}
\centering
\includegraphics[width=\linewidth]{rCutMaxMultFig.eps}
\caption[An explanation of the multiple image convention]{The yellow atom is the central wrapping point. The blue atoms are the minimum images of the system about the central atom. The boxes with the green atoms are multiple images of the central box. If $r_{\text{cut}} \geq \text{box}$ then the central atom sees multiple images of itself (red atom), creating a self-self force evaluation.}
\label{introFig:rMaxMult}
\end{figure}

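For a cubic box, the minimum image convention reduces to a few lines of arithmetic. The following Python sketch, assuming a cubic box wrapped component-wise, computes the minimum image separation used in the distance test against $r_{\text{cut}}$:
\begin{lstlisting}[language=Python]
import numpy as np

def minimum_image(r_ij, box):
    """Wrap a separation vector r_ij = r_i - r_j into the central
    image of a cubic box, so each component lies in [-box/2, box/2)."""
    return r_ij - box * np.round(r_ij / box)

# Example: two particles straddling a periodic boundary.
box = 10.0
r_ij = np.array([9.2, 0.1, -9.8])
d = np.linalg.norm(minimum_image(r_ij, box))  # about 0.83, not 13.4
\end{lstlisting}
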
With the use of a cutoff radius, however, comes a discontinuity in
the potential energy curve (Fig.~\ref{introFig:shiftPot}). To fix this
discontinuity, one calculates the potential energy at
$r_{\text{cut}}$ and subtracts that value from the potential, causing
the function to go smoothly to zero at the cutoff radius. This
shifted potential ensures conservation of energy when integrating the
Newtonian equations of motion.

\begin{figure}
\centering
\includegraphics[width=\linewidth]{shiftedPot.eps}
\caption[Shifting the Lennard-Jones potential]{The Lennard-Jones potential (blue line) is shifted (red line) to remove the discontinuity at $r_{\text{cut}}$.}
\label{introFig:shiftPot}
\end{figure}

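For concreteness, a minimal Python sketch of the shifted Lennard-Jones potential is given below; the reduced-unit parameters $\sigma=\epsilon=1$ and the cutoff of $2.5\sigma$ are conventional illustrative choices:
\begin{lstlisting}[language=Python]
import numpy as np

def lj(r, sigma=1.0, epsilon=1.0):
    """Standard 12-6 Lennard-Jones pair potential."""
    sr6 = (sigma / r)**6
    return 4.0 * epsilon * (sr6**2 - sr6)

def lj_shifted(r, r_cut=2.5):
    """Shifted potential: continuous and zero at and beyond r_cut."""
    return np.where(r < r_cut, lj(r) - lj(r_cut), 0.0)
\end{lstlisting}
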
The second main simplification used in this research is the Verlet
neighbor list.\cite{allen87:csl} In the Verlet method, one generates
a list of all neighbor atoms, $j$, surrounding atom $i$ within some
cutoff $r_{\text{list}}$, where $r_{\text{list}}>r_{\text{cut}}$.
This list is created the first time forces are evaluated; on
subsequent force evaluations, pair interactions are only calculated
from the neighbor lists. The lists are updated if any particle
in the system moves farther than $r_{\text{list}}-r_{\text{cut}}$,
which indicates the possibility that a particle has left or joined a
neighbor list.

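A minimal Python sketch of this bookkeeping follows; the $O(N^2)$ list build and the conservative rebuild test are illustrative simplifications of production implementations:
\begin{lstlisting}[language=Python]
import numpy as np

def build_neighbor_list(pos, box, r_list):
    """O(N^2) Verlet list build: store each pair whose minimum
    image separation is within r_list."""
    n = len(pos)
    pairs = []
    for i in range(n - 1):
        d = pos[i + 1:] - pos[i]
        d -= box * np.round(d / box)       # minimum image
        close = np.linalg.norm(d, axis=1) < r_list
        pairs += [(i, j) for j in np.nonzero(close)[0] + i + 1]
    return pairs, pos.copy()               # positions at build time

def needs_rebuild(pos, pos0, box, r_list, r_cut):
    """Rebuild when any displacement since the last build exceeds
    the skin width r_list - r_cut."""
    d = pos - pos0
    d -= box * np.round(d / box)
    return np.linalg.norm(d, axis=1).max() > (r_list - r_cut)
\end{lstlisting}
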
\subsection{\label{introSec:mdIntegrate}Integration of the Equations of Motion}

A starting point for the discussion of molecular dynamics integrators
is the Verlet algorithm.\cite{Frenkel1996} It begins with a Taylor
expansion of position in time:
\begin{equation}
q(t+\Delta t)= q(t) + v(t)\Delta t + \frac{F(t)}{2m}\Delta t^2 +
\frac{\Delta t^3}{3!}\frac{\partial^3 q(t)}{\partial t^3} +
\mathcal{O}(\Delta t^4)
\label{introEq:verletForward}
\end{equation}
As well as,
\begin{equation}
q(t-\Delta t)= q(t) - v(t)\Delta t + \frac{F(t)}{2m}\Delta t^2 -
\frac{\Delta t^3}{3!}\frac{\partial^3 q(t)}{\partial t^3} +
\mathcal{O}(\Delta t^4)
\label{introEq:verletBack}
\end{equation}
Where $m$ is the mass of the particle, $q(t)$ is the position at time
$t$, $v(t)$ the velocity, and $F(t)$ the force acting on the
particle. Adding together Eq.~\ref{introEq:verletForward} and
Eq.~\ref{introEq:verletBack} results in,
\begin{equation}
q(t+\Delta t)+q(t-\Delta t) =
2q(t) + \frac{F(t)}{m}\Delta t^2 + \mathcal{O}(\Delta t^4)
\label{introEq:verletSum}
\end{equation}
Or equivalently,
\begin{equation}
q(t+\Delta t) \approx
2q(t) - q(t-\Delta t) + \frac{F(t)}{m}\Delta t^2
\label{introEq:verletFinal}
\end{equation}
This estimate of the new positions contains an error on the order of
$\Delta t^4$.

In practice, however, the simulations in this research were integrated
with the velocity reformulation of the Verlet method.\cite{allen87:csl}
\begin{align}
q(t+\Delta t) &= q(t) + v(t)\Delta t + \frac{F(t)}{2m}\Delta t^2 %
\label{introEq:MDvelVerletPos} \\%
%
v(t+\Delta t) &= v(t) + \frac{\Delta t}{2m}[F(t) + F(t+\Delta t)] %
\label{introEq:MDvelVerletVel}
\end{align}
The original Verlet algorithm can be regained by substituting the
velocity back into Eq.~\ref{introEq:MDvelVerletPos}. The Verlet
formulations are chosen in this research because the algorithms have
very little long-time drift in energy. Energy conservation in a
molecular dynamics simulation is of extreme importance, as it is a
measure of how closely one is following the ``true'' trajectory with
the finite integration scheme. An exact solution to the integration
will conserve area in phase space (i.e.~will be symplectic), as well
as be reversible in time; that is, the trajectory integrated forward
or backwards will exactly match itself. Having a finite algorithm
that both conserves area in phase space and is time reversible
therefore increases the reliability, but does not guarantee the
``correctness'', of the integrated trajectory.

It can be shown\cite{Frenkel1996} that although the Verlet algorithm
does not rigorously preserve the actual Hamiltonian, it does preserve
a pseudo-Hamiltonian which shadows the real one in phase space. This
pseudo-Hamiltonian is provably area-conserving as well as time
reversible. The fact that it shadows the true Hamiltonian in phase
space is acceptable in actual simulations, as one is interested in the
ensemble average of the observable being measured. From the ergodic
hypothesis (Sec.~\ref{introSec:ergodic}), it is known that the time
average will match the ensemble average, therefore two similar
trajectories in phase space should give matching statistical averages.

\subsection{\label{introSec:MDfurther}Further Considerations}

In the simulations presented in this research, a few additional
parameters are needed to describe the motions. In the simulations
involving water and phospholipids in Ch.~\ref{chapt:lipid}, we are
required to integrate the equations of motion for dipoles on atoms.
This requires that an additional three parameters be specified for each
dipole atom: $\phi$, $\theta$, and $\psi$. These three angles are
taken to be the Euler angles, where $\phi$ is a rotation about the
$z$-axis, $\theta$ is a rotation about the new $x$-axis, and
$\psi$ is a final rotation about the new $z$-axis (see
Fig.~\ref{introFig:eulerAngles}). This sequence of rotations can be
accumulated into a single $3 \times 3$ matrix, $\mathsf{A}$,
defined as follows:
\begin{equation}
\mathsf{A} =
\begin{bmatrix}
\cos\phi\cos\psi-\sin\phi\cos\theta\sin\psi &%
\sin\phi\cos\psi+\cos\phi\cos\theta\sin\psi &%
\sin\theta\sin\psi \\%
%
-\cos\phi\sin\psi-\sin\phi\cos\theta\cos\psi &%
-\sin\phi\sin\psi+\cos\phi\cos\theta\cos\psi &%
\sin\theta\cos\psi \\%
%
\sin\phi\sin\theta &%
-\cos\phi\sin\theta &%
\cos\theta
\end{bmatrix}
\label{introEq:EulerRotMat}
\end{equation}

\begin{figure}
\centering
\includegraphics[width=\linewidth]{eulerRotFig.eps}
\caption[Euler rotation of Cartesian coordinates]{The rotation scheme for Euler angles. First is a rotation of $\phi$ about the $z$ axis (blue rotation). Next is a rotation of $\theta$ about the new $x$ axis (green rotation). Finally, there is a rotation of $\psi$ about the new $z$ axis (red rotation).}
\label{introFig:eulerAngles}
\end{figure}

The equations of motion for Euler angles can be written down
as\cite{allen87:csl}
\begin{align}
\dot{\phi} &= -\omega^s_x \frac{\sin\phi\cos\theta}{\sin\theta} +
\omega^s_y \frac{\cos\phi\cos\theta}{\sin\theta} +
\omega^s_z
\label{introEq:MDeulerPhi} \\%
%
\dot{\theta} &= \omega^s_x \cos\phi + \omega^s_y \sin\phi
\label{introEq:MDeulerTheta} \\%
%
\dot{\psi} &= \omega^s_x \frac{\sin\phi}{\sin\theta} -
\omega^s_y \frac{\cos\phi}{\sin\theta}
\label{introEq:MDeulerPsi}
\end{align}
Where $\omega^s_{\alpha}$ is the angular velocity in the lab space frame
along Cartesian coordinate $\alpha$. However, a difficulty arises when
attempting to integrate Eq.~\ref{introEq:MDeulerPhi} and
Eq.~\ref{introEq:MDeulerPsi}. The $\frac{1}{\sin \theta}$ present in
both equations means there is a non-physical instability present when
$\theta$ is 0 or $\pi$. To correct for this, the simulations integrate
the rotation matrix, $\mathsf{A}$, directly, thus avoiding the
instability. This method was proposed by Dullweber
\emph{et al.},\cite{Dullweber1997} and is presented in
Sec.~\ref{introSec:MDsymplecticRot}.

\subsection{\label{introSec:MDliouville}Liouville Propagator}

Before discussing the integration of the rotation matrix, it is
necessary to understand the construction of a ``good'' integration
scheme. It has been previously discussed
(Sec.~\ref{introSec:mdIntegrate}) how it is desirable for an
integrator to be symplectic and time reversible. The following is an
outline of the Trotter factorization of the Liouville Propagator as a
scheme for generating symplectic, time-reversible
integrators.\cite{Tuckerman92}

For a system with $f$ degrees of freedom the Liouville operator can be
defined as,
\begin{equation}
iL=\sum^f_{j=1} \biggl [\dot{q}_j\frac{\partial}{\partial q_j} +
F_j\frac{\partial}{\partial p_j} \biggr ]
\label{introEq:LiouvilleOperator}
\end{equation}
Here, $q_j$ and $p_j$ are the position and conjugate momenta of a
degree of freedom, and $F_j$ is the force on that degree of freedom.
$\Gamma$ is defined as the set of all positions and conjugate momenta,
$\{q_j,p_j\}$, and the propagator, $U(t)$, is defined
\begin{equation}
U(t) = e^{iLt}
\label{introEq:Lpropagator}
\end{equation}
This allows the specification of $\Gamma$ at any time $t$ as
\begin{equation}
\Gamma(t) = U(t)\Gamma(0)
\label{introEq:Lp2}
\end{equation}
It is important to note that $U(t)$ is a unitary operator, meaning
\begin{equation}
U(-t)=U^{-1}(t)
\label{introEq:Lp3}
\end{equation}

Decomposing $L$ into two parts, $iL_1$ and $iL_2$, one can use the
Trotter theorem to yield
\begin{align}
e^{iLt} &= e^{i(L_1 + L_2)t} \notag \\%
%
&= \biggl [ e^{i(L_1 +L_2)\frac{t}{P}} \biggr]^P \notag \\%
%
&= \biggl [ e^{iL_1\frac{\Delta t}{2}}\, e^{iL_2\Delta t}\,
e^{iL_1\frac{\Delta t}{2}} \biggr ]^P +
\mathcal{O}\biggl (\frac{t^3}{P^2} \biggr ) \label{introEq:Lp4}
\end{align}
Where $\Delta t = t/P$.
With this, a discrete time operator $G(\Delta t)$ can be defined:
\begin{align}
G(\Delta t) &= e^{iL_1\frac{\Delta t}{2}}\, e^{iL_2\Delta t}\,
e^{iL_1\frac{\Delta t}{2}} \notag \\%
%
&= U_1 \biggl ( \frac{\Delta t}{2} \biggr )\, U_2 ( \Delta t )\,
U_1 \biggl ( \frac{\Delta t}{2} \biggr )
\label{introEq:Lp5}
\end{align}
Because $U_1(t)$ and $U_2(t)$ are unitary, $G(\Delta t)$ is also
unitary. This means that an integrator based on this factorization will be
reversible in time.

As an example, consider the following decomposition of $L$:
\begin{align}
iL_1 &= \dot{q}\frac{\partial}{\partial q}%
\label{introEq:Lp6a} \\%
%
iL_2 &= F(q)\frac{\partial}{\partial p}%
\label{introEq:Lp6b}
\end{align}
This leads to the propagator $G( \Delta t )$,
\begin{equation}
G(\Delta t) = e^{\frac{\Delta t}{2} F(q)\frac{\partial}{\partial p}} \,
e^{\Delta t\,\dot{q}\frac{\partial}{\partial q}} \,
e^{\frac{\Delta t}{2} F(q)\frac{\partial}{\partial p}}
\label{introEq:Lp7}
\end{equation}
Operating $G(\Delta t)$ on $\Gamma(0)$, and utilizing the operator property
\begin{equation}
e^{c\frac{\partial}{\partial x}}\, f(x) = f(x+c)
\label{introEq:Lp8}
\end{equation}
where $c$ is independent of $x$, one obtains the following:
\begin{align}
\dot{q}\biggl (\frac{\Delta t}{2}\biggr ) &=
\dot{q}(0) + \frac{\Delta t}{2m}\, F[q(0)] \label{introEq:Lp9a}\\%
%
q(\Delta t) &= q(0) + \Delta t\, \dot{q}\biggl (\frac{\Delta t}{2}\biggr )%
\label{introEq:Lp9b}\\%
%
\dot{q}(\Delta t) &= \dot{q}\biggl (\frac{\Delta t}{2}\biggr ) +
\frac{\Delta t}{2m}\, F[q(\Delta t)] \label{introEq:Lp9c}
\end{align}
Or written another way,
\begin{align}
q(t+\Delta t) &= q(0) + \dot{q}(0)\Delta t +
\frac{F[q(0)]}{m}\frac{\Delta t^2}{2} %
\label{introEq:Lp10a} \\%
%
\dot{q}(\Delta t) &= \dot{q}(0) + \frac{\Delta t}{2m}
\biggl [F[q(0)] + F[q(\Delta t)] \biggr] %
\label{introEq:Lp10b}
\end{align}
This is the velocity Verlet formulation presented in
Sec.~\ref{introSec:mdIntegrate}. Because this integration scheme is
composed of unitary propagators, it is symplectic, and therefore area
preserving in phase space. From the preceding factorization, one can
see that the integration of the equations of motion would follow:
\begin{enumerate}
\item Calculate the velocities at the half step, $\frac{\Delta t}{2}$, from the forces calculated at the initial position.

\item Use the half step velocities to move positions one whole step, $\Delta t$.

\item Evaluate the forces at the new positions, $\mathbf{r}(\Delta t)$, and use the new forces to complete the velocity move.

\item Repeat from step 1 with the new position, velocities, and forces assuming the roles of the initial values.
\end{enumerate}

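The four steps above map directly onto a few lines of code. The following is a minimal Python sketch of one velocity Verlet step; the \texttt{forces} callback and the harmonic example are hypothetical, and reduced units are assumed:
\begin{lstlisting}[language=Python]
import numpy as np

def velocity_verlet_step(q, v, f, m, dt, forces):
    """One velocity Verlet step, following the enumeration above:
    half-kick, drift, force evaluation, half-kick."""
    v_half = v + 0.5 * dt * f / m       # step 1: half-step velocities
    q_new = q + dt * v_half             # step 2: full position move
    f_new = forces(q_new)               # step 3: forces at new positions
    v_new = v_half + 0.5 * dt * f_new / m  # complete the velocity move
    return q_new, v_new, f_new

# Example: a 1-D harmonic oscillator, F(q) = -q, m = 1.
forces = lambda q: -q
q, v = np.array([1.0]), np.array([0.0])
f = forces(q)
for _ in range(1000):
    q, v, f = velocity_verlet_step(q, v, f, 1.0, 0.01, forces)
\end{lstlisting}
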
\subsection{\label{introSec:MDsymplecticRot}Symplectic Propagation of the Rotation Matrix}

Based on the factorization from the previous section,
Dullweber \emph{et al.}\cite{Dullweber1997} proposed a scheme for the
symplectic propagation of the rotation matrix, $\mathsf{A}$, as an
alternative method for the integration of orientational degrees of
freedom. The method starts with a straightforward splitting of the
Liouville operator:
\begin{align}
iL_{\text{pos}} &= \dot{q}\frac{\partial}{\partial q} +
\mathsf{\dot{A}}\frac{\partial}{\partial \mathsf{A}}
\label{introEq:SR1a} \\%
%
iL_F &= F(q)\frac{\partial}{\partial p}
\label{introEq:SR1b} \\%
%
iL_{\tau} &= \tau(\mathsf{A})\frac{\partial}{\partial j}
\label{introEq:SR1c}
\end{align}
Where $\tau(\mathsf{A})$ is the torque of the system
due to the configuration, and $j$ is the conjugate
angular momenta of the system. The propagator, $G(\Delta t)$, becomes
\begin{equation}
G(\Delta t) = e^{\frac{\Delta t}{2} F(q)\frac{\partial}{\partial p}} \,
e^{\frac{\Delta t}{2} \tau(\mathsf{A})\frac{\partial}{\partial j}} \,
e^{\Delta t\,iL_{\text{pos}}} \,
e^{\frac{\Delta t}{2} \tau(\mathsf{A})\frac{\partial}{\partial j}} \,
e^{\frac{\Delta t}{2} F(q)\frac{\partial}{\partial p}}
\label{introEq:SR2}
\end{equation}
Propagation of the linear and angular momenta follows as in the Verlet
scheme. The propagation of positions also follows the Verlet scheme
with the addition of a further symplectic splitting of the rotation
matrix propagation, $\mathcal{U}_{\text{rot}}(\Delta t)$, within
$U_{\text{pos}}(\Delta t)$.
\begin{equation}
\mathcal{U}_{\text{rot}}(\Delta t) =
\mathcal{U}_x \biggl(\frac{\Delta t}{2}\biggr)\,
\mathcal{U}_y \biggl(\frac{\Delta t}{2}\biggr)\,
\mathcal{U}_z (\Delta t)\,
\mathcal{U}_y \biggl(\frac{\Delta t}{2}\biggr)\,
\mathcal{U}_x \biggl(\frac{\Delta t}{2}\biggr)
\label{introEq:SR3}
\end{equation}
Where $\mathcal{U}_{\alpha}$ is a unitary rotation of $\mathsf{A}$ and
$j$ about each axis $\alpha$. As all propagations are now
unitary and symplectic, the entire integration scheme is also
symplectic and time reversible.
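
A rough Python sketch of Eq.~\ref{introEq:SR3} is given below. It assumes $\mathsf{A}$ maps lab-frame into body-frame coordinates, that $j$ is stored in the body frame, and that the principal moments of inertia $I_{\alpha}$ are known; the sign and ordering conventions here are illustrative rather than those of any particular implementation:
\begin{lstlisting}[language=Python]
import numpy as np

def rot(axis, angle):
    """3x3 rotation matrix about body axis 0, 1, or 2 (x, y, z);
    the sign convention is an illustrative choice."""
    c, s = np.cos(angle), np.sin(angle)
    R = np.eye(3)
    i, k = (axis + 1) % 3, (axis + 2) % 3
    R[i, i], R[i, k], R[k, i], R[k, k] = c, s, -s, c
    return R

def u_alpha(axis, h, A, j, inertia):
    """U_alpha: rotate the body-frame angular momentum j and the
    rotation matrix A about one body axis by the angle swept out
    during a time h of free rotation."""
    R = rot(axis, h * j[axis] / inertia[axis])
    return R @ A, R @ j

def u_rot(dt, A, j, inertia):
    """The x-y-z-y-x splitting of Eq. (SR3)."""
    for axis, h in [(0, dt/2), (1, dt/2), (2, dt), (1, dt/2), (0, dt/2)]:
        A, j = u_alpha(axis, h, A, j, inertia)
    return A, j
\end{lstlisting}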

\section{\label{introSec:layout}Dissertation Layout}

This dissertation is divided as follows: Ch.~\ref{chapt:RSA}
presents the random sequential adsorption simulations of related
phthalocyanines on a gold (111) surface. Ch.~\ref{chapt:oopse}
is about our molecular dynamics simulation package
{\sc oopse}. Ch.~\ref{chapt:lipid} regards the simulations of
phospholipid bilayers using a mesoscale model. Lastly,
Ch.~\ref{chapt:conclusion} concludes this dissertation with a
summary of all results. The chapters are arranged in chronological
order, and reflect the progression of techniques I employed during my
research.

The chapter concerning random sequential adsorption simulations is a
study in applying Statistical Mechanics simulation techniques in order
to obtain a simple model capable of explaining the results. My
advisor, Dr. Gezelter, and I were approached by a colleague,
Dr. Lieberman, about possible explanations for the partial coverage of
a gold surface by a particular compound synthesized in her group. We
suggested it might be due to the statistical packing fraction of disks
on a plane, and set about simulating this system. As the events in
our model were not dynamic in nature, a Monte Carlo method was
employed. Here, if a molecule landed on the surface without
overlapping with another, then its landing was accepted. However, if there
was overlap, the landing was rejected and a new random landing location
was chosen. This defined our acceptance rules and allowed us to
construct a Markov chain whose limiting distribution was the surface
coverage in which we were interested.

The following chapter, about the simulation package {\sc oopse},
describes in detail the large body of scientific code that had to be
written in order to study phospholipid bilayers. Although there are
pre-existing molecular dynamics simulation packages available, none
were capable of implementing the models we were developing. {\sc
oopse} is a unique package, capable not only of integrating the
equations of motion in Cartesian space, but also of integrating
the rotational motion of rigid bodies and dipoles. It also has the
ability to perform calculations across distributed parallel processors
and contains a flexible script syntax for creating systems. {\sc
oopse} has become a very powerful scientific instrument for the
exploration of our model.

In Ch.~\ref{chapt:lipid}, utilizing {\sc oopse}, I have been
able to parameterize a mesoscale model for phospholipid simulations.
This model retains information about solvent ordering around the
bilayer, as well as information regarding the interaction of the
phospholipid head groups' dipoles with each other and the surrounding
solvent. These simulations give us insight into the dynamic events
that lead to the formation of phospholipid bilayers, as well as
provide the foundation for future exploration of bilayer phase
behavior with this model.

In the last chapter, I discuss future directions for both {\sc oopse}
and this mesoscale model. Additionally, I give a summary of the
results found in this dissertation.