\chapter{\label{chapt:intro}Introduction and Theoretical Background}

The techniques used in the course of this research fall under the two
main classes of molecular simulation: Molecular Dynamics and Monte
Carlo. Molecular Dynamics simulations integrate the equations of motion
for a given system of particles, allowing the researcher to gain
insight into the time dependent evolution of a system. Diffusion
phenomena are readily studied with this simulation technique, making
Molecular Dynamics the main simulation technique used in this
research. Other aspects of the research fall under the Monte Carlo
class of simulations. In Monte Carlo, the configuration space
available to the collection of particles is sampled stochastically,
or randomly. Each configuration is chosen with a given probability
based on the Maxwell-Boltzmann distribution. These types of simulations
are best used to probe properties of a system that depend only on the
state of the system. Structural information about a system is most
readily obtained through these types of methods.

Although the two techniques employed seem dissimilar, they are both
linked by the overarching principles of Statistical
Thermodynamics. Statistical Thermodynamics governs the behavior of
both classes of simulations and dictates what each method can and
cannot do. When investigating a system, one must first analyze what
thermodynamic properties of the system are being probed, then choose
which method best suits that objective.

\section{\label{introSec:statThermo}Statistical Mechanics}

The following section serves as a brief introduction to some of the
Statistical Mechanics concepts present in this dissertation. What
follows is a brief derivation of Boltzmann weighted statistics, and an
explanation of how one can use that information to calculate an
observable for a system. This section then concludes with a brief
discussion of the ergodic hypothesis and its relevance to this
research.

\subsection{\label{introSec:boltzman}Boltzmann weighted statistics}

Consider a system, $\gamma$, with some total energy, $E_{\gamma}$.
Let $\Omega(E_{\gamma})$ represent the number of degenerate ways
$\boldsymbol{\Gamma}$, the collection of positions and conjugate
momenta of system $\gamma$, can be configured to give
$E_{\gamma}$. Further, if $\gamma$ is in contact with a bath system
where energy is exchanged between the two systems, $\Omega(E)$, where
$E$ is the total energy of both systems, can be represented as
\begin{equation}
\Omega(E) = \Omega(E_{\gamma}) \times \Omega(E - E_{\gamma})
\label{introEq:SM1}
\end{equation}
Or additively as
\begin{equation}
\ln \Omega(E) = \ln \Omega(E_{\gamma}) + \ln \Omega(E - E_{\gamma})
\label{introEq:SM2}
\end{equation}

The solution to Eq.~\ref{introEq:SM2} maximizes the number of
degenerate configurations in $E$.\cite{Frenkel1996}
This gives
\begin{equation}
\frac{\partial \ln \Omega(E)}{\partial E_{\gamma}} = 0 =
\frac{\partial \ln \Omega(E_{\gamma})}{\partial E_{\gamma}}
+
\frac{\partial \ln \Omega(E_{\text{bath}})}{\partial E_{\text{bath}}}
\frac{\partial E_{\text{bath}}}{\partial E_{\gamma}}
\label{introEq:SM3}
\end{equation}
Where $E_{\text{bath}}$ is $E-E_{\gamma}$, and
$\frac{\partial E_{\text{bath}}}{\partial E_{\gamma}}$ is
$-1$. Eq.~\ref{introEq:SM3} becomes
\begin{equation}
\frac{\partial \ln \Omega(E_{\gamma})}{\partial E_{\gamma}} =
\frac{\partial \ln \Omega(E_{\text{bath}})}{\partial E_{\text{bath}}}
\label{introEq:SM4}
\end{equation}

At this point, one can draw a relationship between the maximization of
degeneracy in Eq.~\ref{introEq:SM3} and the second law of
thermodynamics. Namely, that for a closed system, entropy will
increase for an irreversible process.\cite{chandler:1987} Here the
process is the partitioning of energy among the two systems. This
allows the following definition of entropy:
\begin{equation}
S = k_B \ln \Omega(E)
\label{introEq:SM5}
\end{equation}
Where $k_B$ is the Boltzmann constant. Having defined entropy, one can
also define the temperature of the system using the relation
\begin{equation}
\frac{1}{T} = \biggl ( \frac{\partial S}{\partial E} \biggr )_{N,V}
\label{introEq:SM6}
\end{equation}
The temperature in the system $\gamma$ is then
\begin{equation}
\beta( E_{\gamma} ) = \frac{1}{k_B T} =
\frac{\partial \ln \Omega(E_{\gamma})}{\partial E_{\gamma}}
\label{introEq:SM7}
\end{equation}
Applying this to Eq.~\ref{introEq:SM4} gives the following
\begin{equation}
\beta( E_{\gamma} ) = \beta( E_{\text{bath}} )
\label{introEq:SM8}
\end{equation}
Showing that the partitioning of energy between the two systems is
actually a process of thermal equilibration.\cite{Frenkel1996}

109     An application of these results is to formulate the form of an
110     expectation value of an observable, $A$, in the canonical ensemble. In
111     the canonical ensemble, the number of particles, $N$, the volume, $V$,
112     and the temperature, $T$, are all held constant while the energy, $E$,
113     is allowed to fluctuate. Returning to the previous example, the bath
114 mmeineke 1008 system is now an infinitely large thermal bath, whose exchange of
115 mmeineke 1003 energy with the system $\gamma$ holds the temperature constant. The
116 mmeineke 1001 partitioning of energy in the bath system is then related to the total
117 mmeineke 1003 energy of both systems and the fluctuations in $E_{\gamma}$:
118 mmeineke 1001 \begin{equation}
119 mmeineke 1003 \Omega( E_{\gamma} ) = \Omega( E - E_{\gamma} )
120 mmeineke 1001 \label{introEq:SM9}
121     \end{equation}
122     As for the expectation value, it can be defined
123     \begin{equation}
124 mmeineke 1003 \langle A \rangle =
125     \int\limits_{\boldsymbol{\Gamma}} d\boldsymbol{\Gamma}
126     P_{\gamma} A(\boldsymbol{\Gamma})
127 mmeineke 1001 \label{introEq:SM10}
128 mmeineke 1003 \end{equation}
129     Where $\int\limits_{\boldsymbol{\Gamma}} d\boldsymbol{\Gamma}$ denotes
130 mmeineke 1008 an integration over all accessible phase space, $P_{\gamma}$ is the
131 mmeineke 1001 probability of being in a given phase state and
132     $A(\boldsymbol{\Gamma})$ is some observable that is a function of the
133     phase state.
134    
135     Because entropy seeks to maximize the number of degenerate states at a
136     given energy, the probability of being in a particular state in
137     $\gamma$ will be directly proportional to the number of allowable
138     states the coupled system is able to assume. Namely,
139     \begin{equation}
140 mmeineke 1003 P_{\gamma} \propto \Omega( E_{\text{bath}} ) =
141     e^{\ln \Omega( E - E_{\gamma})}
142 mmeineke 1001 \label{introEq:SM11}
143     \end{equation}
144 mmeineke 1003 With $E_{\gamma} \ll E$, $\ln \Omega$ can be expanded in a Taylor series:
145 mmeineke 1001 \begin{equation}
146 mmeineke 1003 \ln \Omega ( E - E_{\gamma}) =
147     \ln \Omega (E) -
148     E_{\gamma} \frac{\partial \ln \Omega }{\partial E}
149     + \ldots
150 mmeineke 1001 \label{introEq:SM12}
151     \end{equation}
152     Higher order terms are omitted as $E$ is an infinite thermal
153     bath. Further, using Eq.~\ref{introEq:SM7}, Eq.~\ref{introEq:SM11} can
154     be rewritten:
155     \begin{equation}
156 mmeineke 1003 P_{\gamma} \propto e^{-\beta E_{\gamma}}
157 mmeineke 1001 \label{introEq:SM13}
158     \end{equation}
159 mmeineke 1008 Where $\ln \Omega(E)$ has been factored out of the proportionality as a
160 mmeineke 1003 constant. Normalizing the probability ($\int\limits_{\boldsymbol{\Gamma}}
161     d\boldsymbol{\Gamma} P_{\gamma} = 1$) gives
162 mmeineke 1001 \begin{equation}
163 mmeineke 1003 P_{\gamma} = \frac{e^{-\beta E_{\gamma}}}
164     {\int\limits_{\boldsymbol{\Gamma}} d\boldsymbol{\Gamma} e^{-\beta E_{\gamma}}}
165 mmeineke 1001 \label{introEq:SM14}
166     \end{equation}
167 mmeineke 1008 This result is the standard Boltzmann statistical distribution.
168 mmeineke 1001 Applying it to Eq.~\ref{introEq:SM10} one can obtain the following relation for ensemble averages:
169     \begin{equation}
170 mmeineke 1003 \langle A \rangle =
171     \frac{\int\limits_{\boldsymbol{\Gamma}} d\boldsymbol{\Gamma}
172     A( \boldsymbol{\Gamma} ) e^{-\beta E_{\gamma}}}
173     {\int\limits_{\boldsymbol{\Gamma}} d\boldsymbol{\Gamma} e^{-\beta E_{\gamma}}}
174 mmeineke 1001 \label{introEq:SM15}
175     \end{equation}

\subsection{\label{introSec:ergodic}The Ergodic Hypothesis}

One last important consideration is that of ergodicity. Ergodicity is
the assumption that given an infinite amount of time, a system will
visit every available point in phase space.\cite{Frenkel1996} For most
systems, this is a valid assumption, except in cases where the system
may be trapped in a local feature (\emph{e.g.}~glasses). When valid,
ergodicity allows the unification of a time averaged observation and
an ensemble averaged one. If an observation is averaged over a
sufficiently long time, the system is assumed to visit all
appropriately available points in phase space, giving a properly
weighted statistical average. This allows the researcher freedom of
choice when deciding how best to measure a given observable. When an
ensemble averaged approach seems most logical, the Monte Carlo
techniques described in Sec.~\ref{introSec:monteCarlo} can be utilized.
Conversely, if a problem lends itself to a time averaging approach,
the Molecular Dynamics techniques in Sec.~\ref{introSec:MD} can be
employed.

\section{\label{introSec:monteCarlo}Monte Carlo Simulations}

The Monte Carlo method was developed by Metropolis and Ulam for their
work on fissionable material.\cite{metropolis:1949} The method is so
named because of its heavy use of random numbers in its
solution.\cite{allen87:csl} The Monte Carlo method allows for the
solution of integrals through the stochastic sampling of the values
within the integral. In the simplest case, the evaluation of an
integral would follow a brute force method of
sampling.\cite{Frenkel1996} Consider the following single dimensional
integral:
\begin{equation}
I = \int_a^b f(x)dx
\label{eq:MCex1}
\end{equation}
The equation can be recast as:
\begin{equation}
I = (b-a)\langle f(x) \rangle
\label{eq:MCex2}
\end{equation}
Where $\langle f(x) \rangle$ is the unweighted average of $f(x)$ over
the interval $[a,b]$. The integral could then be evaluated by
randomly choosing points along the interval $[a,b]$ and calculating
the value of $f(x)$ at each point. The accumulated average would then
approach $I$ in the limit where the number of trials is infinitely
large.
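
As a brief illustration, the brute force scheme can be written in a
few lines of code. The following Python sketch is not drawn from the
simulation codes used in this research; the integrand and interval are
arbitrary choices for demonstration:
\begin{verbatim}
import random
import math

def mc_integrate(f, a, b, n_trials):
    # Accumulate f(x) at uniformly random points in [a,b].
    total = 0.0
    for _ in range(n_trials):
        total += f(random.uniform(a, b))
    # I ~ (b - a) <f(x)>, as in the recast integral above.
    return (b - a) * total / n_trials

# Example: integrate sin(x) on [0, pi]; the exact answer is 2.
print(mc_integrate(math.sin, 0.0, math.pi, 100000))
\end{verbatim}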

However, in Statistical Mechanics, one is typically interested in
integrals of the form:
\begin{equation}
\langle A \rangle = \frac{\int d^N \mathbf{r}~A(\mathbf{r}^N)%
e^{-\beta V(\mathbf{r}^N)}}%
{\int d^N \mathbf{r}~e^{-\beta V(\mathbf{r}^N)}}
\label{eq:mcEnsAvg}
\end{equation}
Where $\mathbf{r}^N$ stands for the coordinates of all $N$ particles
and $A$ is some observable that is only dependent on position. This is
the ensemble average of $A$ as presented in
Sec.~\ref{introSec:statThermo}, except here $A$ is independent of
momentum. Therefore the momentum contributions to the integral can be
factored out, leaving the configurational integral. Application of the
brute force method to this system would yield highly inefficient
results. Due to the Boltzmann weighting of this integral, most random
configurations will have a near zero contribution to the ensemble
average. This is where importance sampling comes into
play.\cite{allen87:csl}

Importance sampling is a method where one selects a distribution from
which the random configurations are chosen in order to more
efficiently calculate the integral.\cite{Frenkel1996} Consider again
Eq.~\ref{eq:MCex1} rewritten to be:
\begin{equation}
I = \int^b_a \frac{f(x)}{\rho(x)} \rho(x) dx
\label{introEq:Importance1}
\end{equation}
Where $\rho(x)$ is an arbitrary probability distribution in $x$. If
one conducts $\tau$ trials selecting a random number, $\zeta_\tau$,
from the distribution $\rho(x)$ on the interval $[a,b]$, then
Eq.~\ref{introEq:Importance1} becomes
\begin{equation}
I= \biggl \langle \frac{f(x)}{\rho(x)} \biggr \rangle_{\text{trials}}
\label{introEq:Importance2}
\end{equation}
Looking at Eq.~\ref{eq:mcEnsAvg}, and realizing
\begin{equation}
\rho_{kT}(\mathbf{r}^N) =
\frac{e^{-\beta V(\mathbf{r}^N)}}
{\int d^N \mathbf{r}~e^{-\beta V(\mathbf{r}^N)}}
\label{introEq:MCboltzman}
\end{equation}
Where $\rho_{kT}$ is the Boltzmann distribution, the ensemble average
can be rewritten as
\begin{equation}
\langle A \rangle = \int d^N \mathbf{r}~A(\mathbf{r}^N)
\rho_{kT}(\mathbf{r}^N)
\label{introEq:Importance3}
\end{equation}
Applying Eq.~\ref{introEq:Importance1} one obtains
\begin{equation}
\langle A \rangle = \biggl \langle
\frac{ A \rho_{kT}(\mathbf{r}^N) }
{\rho(\mathbf{r}^N)} \biggr \rangle_{\text{trials}}
\label{introEq:Importance4}
\end{equation}
By selecting $\rho(\mathbf{r}^N)$ to be $\rho_{kT}(\mathbf{r}^N)$,
Eq.~\ref{introEq:Importance4} becomes
\begin{equation}
\langle A \rangle = \langle A(\mathbf{r}^N) \rangle_{\text{trials}}
\label{introEq:Importance5}
\end{equation}
The difficulty is selecting points $\mathbf{r}^N$ such that they are
sampled from the distribution $\rho_{kT}(\mathbf{r}^N)$. A solution
was proposed by Metropolis \emph{et al.}\cite{metropolis:1953} which
involved the use of a Markov chain whose limiting distribution was
$\rho_{kT}(\mathbf{r}^N)$.

\subsection{\label{introSec:markovChains}Markov Chains}

A Markov chain is a chain of states satisfying the following
conditions:\cite{leach01:mm}
\begin{enumerate}
\item The outcome of each trial depends only on the outcome of the previous trial.
\item Each trial belongs to a finite set of outcomes called the state space.
\end{enumerate}
Given two configurations, $\mathbf{r}^N_m$ and $\mathbf{r}^N_n$, let
$\rho_m$ and $\rho_n$ be the probabilities of being in state
$\mathbf{r}^N_m$ and $\mathbf{r}^N_n$ respectively. Further, the two
states are linked by a transition probability, $\pi_{mn}$, which is the
probability of going from state $m$ to state $n$.

\newcommand{\accMe}{\operatorname{acc}}

The transition probability is given by the following:
\begin{equation}
\pi_{mn} = \alpha_{mn} \times \accMe(m \rightarrow n)
\label{introEq:MCpi}
\end{equation}
Where $\alpha_{mn}$ is the probability of attempting the move $m
\rightarrow n$, and $\accMe$ is the probability of accepting the move
$m \rightarrow n$. Defining a probability vector,
$\boldsymbol{\rho}$, such that
\begin{equation}
\boldsymbol{\rho} = \{\rho_1, \rho_2, \ldots \rho_m, \rho_n,
\ldots \rho_N \}
\label{introEq:MCrhoVector}
\end{equation}
a transition matrix $\boldsymbol{\Pi}$ can be defined,
whose elements are $\pi_{mn}$, for each given transition. The
limiting distribution of the Markov chain can then be found by
applying the transition matrix an infinite number of times to the
distribution vector,
\begin{equation}
\boldsymbol{\rho}_{\text{limit}} =
\lim_{N \rightarrow \infty} \boldsymbol{\rho}_{\text{initial}}
\boldsymbol{\Pi}^N
\label{introEq:MCmarkovLimit}
\end{equation}
The limiting distribution of the chain is independent of the starting
distribution, and successive applications of the transition matrix
will only yield the limiting distribution again,
\begin{equation}
\boldsymbol{\rho}_{\text{limit}} = \boldsymbol{\rho}_{\text{limit}}
\boldsymbol{\Pi}
\label{introEq:MCmarkovEquil}
\end{equation}
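
To make the notion of a limiting distribution concrete, the following
Python sketch (a toy two-state chain with an arbitrary transition
matrix, not one drawn from this research) applies $\boldsymbol{\Pi}$
repeatedly to an initial distribution and converges to a vector
satisfying Eq.~\ref{introEq:MCmarkovEquil}:
\begin{verbatim}
import numpy as np

# Arbitrary two-state transition matrix; each row sums to 1.
Pi = np.array([[0.9, 0.1],
               [0.5, 0.5]])

rho = np.array([1.0, 0.0])   # start entirely in state 1
for _ in range(50):          # repeated application of Pi
    rho = rho @ Pi

print(rho)       # limiting distribution, approximately [5/6, 1/6]
print(rho @ Pi)  # applying Pi again returns the same vector
\end{verbatim}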

\subsection{\label{introSec:metropolisMethod}The Metropolis Method}

In the Metropolis method,\cite{metropolis:1953}
Eq.~\ref{introEq:MCmarkovEquil} is solved such that
$\boldsymbol{\rho}_{\text{limit}}$ matches the Boltzmann distribution
of states. The method accomplishes this by imposing the strong
condition of microscopic reversibility on the equilibrium
distribution. That is, at equilibrium the probability of going
from $m$ to $n$ is the same as going from $n$ to $m$,
\begin{equation}
\rho_m\pi_{mn} = \rho_n\pi_{nm}
\label{introEq:MCmicroReverse}
\end{equation}
Further, $\boldsymbol{\alpha}$ is chosen to be a symmetric matrix in
the Metropolis method. Using Eq.~\ref{introEq:MCpi},
Eq.~\ref{introEq:MCmicroReverse} becomes
\begin{equation}
\frac{\accMe(m \rightarrow n)}{\accMe(n \rightarrow m)} =
\frac{\rho_n}{\rho_m}
\label{introEq:MCmicro2}
\end{equation}
For a Boltzmann limiting distribution,
\begin{equation}
\frac{\rho_n}{\rho_m} = e^{-\beta[\mathcal{U}(n) - \mathcal{U}(m)]}
= e^{-\beta \Delta \mathcal{U}}
\label{introEq:MCmicro3}
\end{equation}
This allows the following set of acceptance rules to be defined:
\begin{equation}
\accMe( m \rightarrow n ) =
\begin{cases}
1& \text{if $\Delta \mathcal{U} \leq 0$,} \\
e^{-\beta \Delta \mathcal{U}}& \text{if $\Delta \mathcal{U} > 0$.}
\end{cases}
\label{introEq:accRules}
\end{equation}

Using the acceptance criteria from Eq.~\ref{introEq:accRules}, the
Metropolis method proceeds as follows:
\begin{enumerate}
\item Generate an initial configuration $\mathbf{r}^N$ which has some finite probability in $\rho_{kT}$.
\item Modify $\mathbf{r}^N$ to generate configuration $\mathbf{r^{\prime}}^N$.
\item If the new configuration lowers the energy of the system, accept the move with unit probability ($\mathbf{r}^N$ becomes $\mathbf{r^{\prime}}^N$). Otherwise accept with probability $e^{-\beta \Delta \mathcal{U}}$.
\item Accumulate the average for the configurational observable of interest.
\item Repeat from step 2 until the average converges.
\end{enumerate}
One important note is that the average is accumulated whether the move
is accepted or not; this ensures proper weighting of the average.
Using Eq.~\ref{introEq:Importance4}, it becomes clear that the
accumulated averages are the ensemble averages, as this method ensures
that the limiting distribution is the Boltzmann distribution.
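
As an illustration of these steps, the following Python sketch
implements the acceptance rules of Eq.~\ref{introEq:accRules} for a
single coordinate in a one-dimensional potential; the potential, step
size, and temperature are hypothetical choices for demonstration, not
parameters from this research:
\begin{verbatim}
import random
import math

def metropolis_average(U, beta, n_steps, x0=0.0, delta=0.5):
    # U is the potential energy function; beta = 1/(k_B T).
    x = x0
    total = 0.0
    for _ in range(n_steps):
        x_trial = x + random.uniform(-delta, delta)
        dU = U(x_trial) - U(x)
        # Accept downhill moves outright; accept uphill moves
        # with probability exp(-beta dU).
        if dU <= 0.0 or random.random() < math.exp(-beta * dU):
            x = x_trial
        # Accumulate the observable whether or not the move
        # was accepted.
        total += x * x
    return total / n_steps

# Example: <x^2> for the harmonic well U(x) = x^2/2 at beta = 1
# should approach k_B T = 1.
print(metropolis_average(lambda x: 0.5 * x * x, 1.0, 200000))
\end{verbatim}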

\section{\label{introSec:MD}Molecular Dynamics Simulations}

The main simulation tool used in this research is Molecular Dynamics.
In Molecular Dynamics, the equations of motion for a system are
integrated in order to obtain information about both the positions and
momenta of a system, allowing the calculation of not only
configurational observables, but momentum dependent ones as well:
diffusion constants, velocity autocorrelations, folding/unfolding
events, etc. Due to the principle of ergodicity,
Sec.~\ref{introSec:ergodic}, the averages of these observables over the
time period of the simulation are taken to be the ensemble averages
for the system.

The choice of when to use molecular dynamics over Monte Carlo
techniques is normally decided by the observables in which the
researcher is interested. If the observables depend on momenta in
any fashion, then the only choice is molecular dynamics in some form.
However, when the observable is dependent only on the configuration,
then most of the time Monte Carlo techniques will be more efficient.

The focus of research in the second half of this dissertation is
centered around the dynamic properties of phospholipid bilayers,
making molecular dynamics key in the simulation of those properties.

\subsection{\label{introSec:mdAlgorithm}The Molecular Dynamics Algorithm}

To illustrate how the molecular dynamics technique is applied, the
following sections will describe the sequence involved in a
simulation. Sec.~\ref{introSec:mdInit} deals with the initialization
of a simulation. Sec.~\ref{introSec:mdForce} discusses issues
involved with the calculation of the forces.
Sec.~\ref{introSec:mdIntegrate} concludes the algorithm discussion
with the integration of the equations of motion.\cite{Frenkel1996}

\subsection{\label{introSec:mdInit}Initialization}

When selecting the initial configuration for the simulation it is
important to consider what dynamics one is hoping to observe.
Ch.~\ref{chapt:lipid} deals with the formation and equilibrium dynamics of
phospholipid membranes. Therefore in these simulations initial
positions were selected that in some cases dispersed the lipids in
water, and in other cases structured the lipids into preformed
bilayers. Important considerations at this stage of the simulation are:
\begin{itemize}
\item There are no major overlaps of molecular or atomic orbitals.
\item Velocities are chosen in such a way as to not give the system a non-zero total momentum or angular momentum.
\item It is also sometimes desirable to select the velocities to correctly sample the target temperature.
\end{itemize}

The first point is important due to the amount of potential energy
generated by having two particles too close together. If overlap
occurs, the first evaluation of forces will return numbers so large as
to render the numerical integration of the motion meaningless. The
second consideration keeps the system from drifting or rotating as a
whole. This arises from the fact that most simulations are of systems
in equilibrium in the absence of outside forces. Therefore any net
movement would be unphysical and an artifact of the simulation method
used. The final point addresses the selection of the magnitude of the
initial velocities. For many simulations it is convenient to use
this opportunity to scale the amount of kinetic energy to reflect the
desired thermal distribution of the system. However, it must be noted
that most systems will require further velocity rescaling after the
first few initial simulation steps due to either loss or gain of
kinetic energy from energy stored in potential degrees of freedom.

\subsection{\label{introSec:mdForce}Force Evaluation}

The evaluation of forces is the most computationally expensive portion
of a given molecular dynamics simulation. This is due entirely to the
evaluation of long range forces in a simulation, typically pair-wise.
These forces are most commonly the van der Waals force, and sometimes
Coulombic forces as well. For a pair-wise force, there are $N(N-1)/2$
pairs to be evaluated, where $N$ is the number of particles in the
system. This leads to the calculations scaling as $N^2$, making large
simulations prohibitive in the absence of any computation saving
techniques.

Another consideration one must resolve is that in a given simulation
a disproportionate number of the particles will feel the effects of
the surface.\cite{allen87:csl} For a cubic system of 1000 particles
arranged in a $10 \times 10 \times 10$ cube, 488 particles will be
exposed to the surface. Unless one is simulating an isolated particle
group in a vacuum, the behavior of the system will be far from the
desired bulk characteristics. To offset this, simulations employ
periodic boundary images.\cite{born:1912}

The technique involves the use of an algorithm that replicates the
simulation box on an infinite lattice in Cartesian space. Any given
particle leaving the simulation box on one side will have an image of
itself enter on the opposite side (see Fig.~\ref{introFig:pbc}). In
addition, this ensures that any two particles have an image, real or
periodic, within $\text{box}/2$ of each other. A discussion of the
method used to calculate the periodic image can be found in
Sec.~\ref{oopseSec:pbc}.

\begin{figure}
\centering
\includegraphics[width=\linewidth]{pbcFig.eps}
\caption[An illustration of periodic boundary conditions]{A 2-D illustration of periodic boundary conditions. As one particle leaves the right of the simulation box, an image of it enters the left.}
\label{introFig:pbc}
\end{figure}
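
A minimal sketch of the wrapping and minimum image operations,
assuming a cubic box of side length $L$ centered on the origin (the
actual implementation used in this research is described in
Sec.~\ref{oopseSec:pbc}):
\begin{verbatim}
import math

def wrap(x, L):
    # Map a coordinate back into the primary box [-L/2, L/2).
    return x - L * math.floor(x / L + 0.5)

def minimum_image_sep(xi, xj, L):
    # Separation from particle j (or its nearest image) to
    # particle i; always within L/2 in magnitude.
    return wrap(xi - xj, L)

# Example: in a box of L = 10, particles at x = 4.8 and x = -4.9
# are only 0.3 apart through the periodic boundary.
print(minimum_image_sep(4.8, -4.9, 10.0))
\end{verbatim}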

Returning to the topic of the computational scale of the force
evaluation, the use of periodic boundary conditions requires that a
cutoff radius be employed. Using a cutoff radius improves the
efficiency of the force evaluation, as particles farther than a
predetermined distance, $r_{\text{cut}}$, are not included in the
calculation.\cite{Frenkel1996} In a simulation with periodic images,
$r_{\text{cut}}$ has a maximum value of $\text{box}/2$.
Fig.~\ref{introFig:rMax} illustrates how, when using an
$r_{\text{cut}}$ larger than this value, or in the extreme limit of no
$r_{\text{cut}}$ at all, the corners of the simulation box are
unequally weighted due to the lack of particle images in the $x$, $y$,
or $z$ directions past a distance of $\text{box}/2$.

\begin{figure}
\centering
\includegraphics[width=\linewidth]{rCutMaxFig.eps}
\caption[An explanation of $r_{\text{cut}}$]{The yellow atom has all other images wrapped to itself as the center. If $r_{\text{cut}}=\text{box}/2$, then the distribution is uniform (blue atoms). However, when $r_{\text{cut}}>\text{box}/2$ the corners are disproportionately weighted (green atoms) vs.\ the axial directions (shaded regions).}
\label{introFig:rMax}
\end{figure}

With the use of an $r_{\text{cut}}$, however, comes a discontinuity in
the potential energy curve (Fig.~\ref{introFig:shiftPot}). To fix this
discontinuity, one calculates the potential energy at
$r_{\text{cut}}$, and subtracts that value from the potential, causing
the function to go smoothly to zero at the cutoff radius. This
shifted potential ensures conservation of energy when integrating the
Newtonian equations of motion.

\begin{figure}
\centering
\includegraphics[width=\linewidth]{shiftedPot.eps}
\caption[Shifting the Lennard-Jones potential]{The Lennard-Jones potential (blue line) is shifted (red line) to remove the discontinuity at $r_{\text{cut}}$.}
\label{introFig:shiftPot}
\end{figure}

The second main simplification used in this research is the Verlet
neighbor list.\cite{allen87:csl} In the Verlet method, one generates
a list of all neighbor atoms, $j$, surrounding atom $i$ within some
cutoff $r_{\text{list}}$, where $r_{\text{list}}>r_{\text{cut}}$.
This list is created the first time forces are evaluated; on
subsequent force evaluations, pair interactions are only evaluated for
pairs appearing in the neighbor lists. The lists are updated if any
given particle in the system moves farther than
$r_{\text{list}}-r_{\text{cut}}$, giving rise to the possibility that
a particle has left or joined a neighbor list.
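
A minimal sketch of this bookkeeping, for particles in one dimension
with hypothetical cutoffs (a production implementation would use
three-dimensional, periodic distances):
\begin{verbatim}
def build_neighbor_list(x, r_list):
    # Record all pairs (i, j) currently within the list cutoff.
    n = len(x)
    return [(i, j) for i in range(n) for j in range(i + 1, n)
            if abs(x[i] - x[j]) < r_list]

def needs_rebuild(x, x_at_build, r_list, r_cut):
    # Rebuild once any particle has moved farther than the skin
    # thickness r_list - r_cut since the list was last built.
    skin = r_list - r_cut
    return any(abs(a - b) > skin for a, b in zip(x, x_at_build))
\end{verbatim}
Between rebuilds, only the stored pairs are tested against
$r_{\text{cut}}$, avoiding the full $N(N-1)/2$ loop on every force
evaluation.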

\subsection{\label{introSec:mdIntegrate} Integration of the equations of motion}

A starting point for the discussion of molecular dynamics integrators
is the Verlet algorithm.\cite{Frenkel1996} It begins with a Taylor
expansion of position in time:
\begin{equation}
q(t+\Delta t)= q(t) + v(t)\Delta t + \frac{F(t)}{2m}\Delta t^2 +
\frac{\Delta t^3}{3!}\frac{\partial^3 q(t)}{\partial t^3} +
\mathcal{O}(\Delta t^4)
\label{introEq:verletForward}
\end{equation}
As well as,
\begin{equation}
q(t-\Delta t)= q(t) - v(t)\Delta t + \frac{F(t)}{2m}\Delta t^2 -
\frac{\Delta t^3}{3!}\frac{\partial^3 q(t)}{\partial t^3} +
\mathcal{O}(\Delta t^4)
\label{introEq:verletBack}
\end{equation}
Where $m$ is the mass of the particle, $q(t)$ is the position at time
$t$, $v(t)$ the velocity, and $F(t)$ the force acting on the
particle. Adding together Eq.~\ref{introEq:verletForward} and
Eq.~\ref{introEq:verletBack} results in,
\begin{equation}
q(t+\Delta t)+q(t-\Delta t) =
2q(t) + \frac{F(t)}{m}\Delta t^2 + \mathcal{O}(\Delta t^4)
\label{introEq:verletSum}
\end{equation}
Or equivalently,
\begin{equation}
q(t+\Delta t) \approx
2q(t) - q(t-\Delta t) + \frac{F(t)}{m}\Delta t^2
\label{introEq:verletFinal}
\end{equation}
This estimate of the new positions contains an error on the
order of $\Delta t^4$.

In practice, however, the simulations in this research were integrated
with a velocity reformulation of the Verlet method.\cite{allen87:csl}
\begin{align}
q(t+\Delta t) &= q(t) + v(t)\Delta t + \frac{F(t)}{2m}\Delta t^2 %
\label{introEq:MDvelVerletPos} \\%
%
v(t+\Delta t) &= v(t) + \frac{\Delta t}{2m}[F(t) + F(t+\Delta t)] %
\label{introEq:MDvelVerletVel}
\end{align}
The original Verlet algorithm can be regained by substituting the
velocity back into Eq.~\ref{introEq:MDvelVerletPos}. The Verlet
formulations are chosen in this research because the algorithms have
very little long term drift in energy conservation. Energy
conservation in a molecular dynamics simulation is of extreme
importance, as it is a measure of how closely one is following the
``true'' trajectory with the finite integration scheme. An exact
solution to the integration will conserve area in phase space, as well
as be reversible in time, that is, the trajectory integrated forward
or backwards will exactly match itself. Having a finite algorithm
that both conserves area in phase space and is time reversible
therefore increases, but does not guarantee, the ``correctness'' of the
integrated trajectory.
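
A sketch of a single velocity Verlet step for one degree of freedom
follows; the force function here is a stand-in for the full force
evaluation of Sec.~\ref{introSec:mdForce}:
\begin{verbatim}
def velocity_verlet_step(q, v, F, m, dt, force):
    # Half-kick, drift, recompute force, half-kick.
    v_half = v + 0.5 * dt * F / m
    q_new = q + dt * v_half
    F_new = force(q_new)
    v_new = v_half + 0.5 * dt * F_new / m
    return q_new, v_new, F_new

# Example: harmonic oscillator F = -q with m = 1. The energy
# (v^2 + q^2)/2 shows no long-term drift.
q, v, F = 1.0, 0.0, -1.0
for _ in range(1000):
    q, v, F = velocity_verlet_step(q, v, F, 1.0, 0.01,
                                   lambda x: -x)
print(q, v, 0.5 * (v * v + q * q))  # energy stays near 0.5
\end{verbatim}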

It can be shown\cite{Frenkel1996} that although the Verlet algorithm
does not rigorously preserve the actual Hamiltonian, it does preserve
a pseudo-Hamiltonian which shadows the real one in phase space. This
pseudo-Hamiltonian is provably area-conserving as well as time
reversible. The fact that it shadows the true Hamiltonian in phase
space is acceptable in actual simulations as one is interested in the
ensemble average of the observable being measured. From the ergodic
hypothesis (Sec.~\ref{introSec:ergodic}), it is known that the time
average will match the ensemble average, therefore two similar
trajectories in phase space should give matching statistical averages.

\subsection{\label{introSec:MDfurther}Further Considerations}

In the simulations presented in this research, a few additional
parameters are needed to describe the motions. The simulations
involving water and phospholipids in Ch.~\ref{chapt:lipid} require the
integration of the equations of motion for dipoles on atoms. This
requires an additional three parameters to be specified for each
dipole atom: $\phi$, $\theta$, and $\psi$. These three angles are
taken to be the Euler angles, where $\phi$ is a rotation about the
$z$-axis, $\theta$ is a rotation about the new $x$-axis, and
$\psi$ is a final rotation about the new $z$-axis (see
Fig.~\ref{introFig:eulerAngles}). This sequence of rotations can be
accumulated into a single $3 \times 3$ matrix, $\mathbf{A}$,
defined as follows:
\begin{equation}
\mathbf{A} =
\begin{bmatrix}
\cos\phi\cos\psi-\sin\phi\cos\theta\sin\psi &%
\sin\phi\cos\psi+\cos\phi\cos\theta\sin\psi &%
\sin\theta\sin\psi \\%
%
-\cos\phi\sin\psi-\sin\phi\cos\theta\cos\psi &%
-\sin\phi\sin\psi+\cos\phi\cos\theta\cos\psi &%
\sin\theta\cos\psi \\%
%
\sin\phi\sin\theta &%
-\cos\phi\sin\theta &%
\cos\theta
\end{bmatrix}
\label{introEq:EulerRotMat}
\end{equation}

\begin{figure}
\centering
\includegraphics[width=\linewidth]{eulerRotFig.eps}
\caption[Euler rotation of Cartesian coordinates]{The rotation scheme for Euler angles. First is a rotation of $\phi$ about the $z$ axis (blue rotation). Next is a rotation of $\theta$ about the new $x^{\prime}$ axis (green rotation). Lastly is a final rotation of $\psi$ about the new $z^{\prime}$ axis (red rotation).}
\label{introFig:eulerAngles}
\end{figure}
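
A short numerical check of Eq.~\ref{introEq:EulerRotMat} can be built
from the three elementary rotations; the NumPy sketch below assumes
the same $z$, $x^{\prime}$, $z^{\prime}$ convention as the figure:
\begin{verbatim}
import numpy as np

def rot_z(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, s, 0], [-s, c, 0], [0, 0, 1]])

def rot_x(a):
    c, s = np.cos(a), np.sin(a)
    return np.array([[1, 0, 0], [0, c, s], [0, -s, c]])

def euler_matrix(phi, theta, psi):
    # Accumulate the z, x', z' rotations into one matrix A.
    return rot_z(psi) @ rot_x(theta) @ rot_z(phi)

A = euler_matrix(0.3, 0.4, 0.5)
print(np.allclose(A @ A.T, np.eye(3)))   # True: A is orthogonal
print(np.isclose(A[2, 2], np.cos(0.4)))  # True: A_33 = cos(theta)
\end{verbatim}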

The equations of motion for Euler angles can be written down
as\cite{allen87:csl}
\begin{align}
\dot{\phi} &= -\omega^s_x \frac{\sin\phi\cos\theta}{\sin\theta} +
\omega^s_y \frac{\cos\phi\cos\theta}{\sin\theta} +
\omega^s_z
\label{introEq:MDeulerPhi} \\%
%
\dot{\theta} &= \omega^s_x \cos\phi + \omega^s_y \sin\phi
\label{introEq:MDeulerTheta} \\%
%
\dot{\psi} &= \omega^s_x \frac{\sin\phi}{\sin\theta} -
\omega^s_y \frac{\cos\phi}{\sin\theta}
\label{introEq:MDeulerPsi}
\end{align}
Where $\omega^s_i$ is the angular velocity in the lab space frame
along Cartesian coordinate $i$. However, a difficulty arises when
attempting to integrate Eq.~\ref{introEq:MDeulerPhi} and
Eq.~\ref{introEq:MDeulerPsi}. The $\frac{1}{\sin \theta}$ present in
both equations means there is a non-physical instability present when
$\theta$ is 0 or $\pi$. To correct for this, the simulations integrate
the rotation matrix, $\mathbf{A}$, directly, thus avoiding the
instability. This method was proposed by Dullweber
\emph{et al.},\cite{Dullweber1997} and is presented in
Sec.~\ref{introSec:MDsymplecticRot}.

\subsection{\label{introSec:MDliouville}Liouville Propagator}

Before discussing the integration of the rotation matrix, it is
necessary to understand the construction of a ``good'' integration
scheme. It has been previously discussed
(Sec.~\ref{introSec:mdIntegrate}) how it is desirable for an
integrator to be symplectic and time reversible. The following is an
outline of the Trotter factorization of the Liouville Propagator as a
scheme for generating symplectic integrators.\cite{Tuckerman92}

For a system with $f$ degrees of freedom the Liouville operator can be
defined as,
\begin{equation}
iL=\sum^f_{j=1} \biggl [\dot{q}_j\frac{\partial}{\partial q_j} +
F_j\frac{\partial}{\partial p_j} \biggr ]
\label{introEq:LiouvilleOperator}
\end{equation}
Here, $q_j$ and $p_j$ are the position and conjugate momenta of a
degree of freedom, and $F_j$ is the force on that degree of freedom.
$\Gamma$ is defined as the set of all positions and conjugate momenta,
$\{q_j,p_j\}$, and the propagator, $U(t)$, is defined
\begin{equation}
U(t) = e^{iLt}
\label{introEq:Lpropagator}
\end{equation}
This allows the specification of $\Gamma$ at any time $t$ as
\begin{equation}
\Gamma(t) = U(t)\Gamma(0)
\label{introEq:Lp2}
\end{equation}
It is important to note that $U(t)$ is a unitary operator, meaning
\begin{equation}
U(-t)=U^{-1}(t)
\label{introEq:Lp3}
\end{equation}

Decomposing $L$ into two parts, $iL_1$ and $iL_2$, one can use the
Trotter theorem to yield
\begin{align}
e^{iLt} &= e^{i(L_1 + L_2)t} \notag \\%
%
&= \biggl [ e^{i(L_1 +L_2)\frac{t}{P}} \biggr]^P \notag \\%
%
&= \biggl [ e^{iL_1\frac{\Delta t}{2}}\, e^{iL_2\Delta t}\,
e^{iL_1\frac{\Delta t}{2}} \biggr ]^P +
\mathcal{O}\biggl (\frac{t^3}{P^2} \biggr ) \label{introEq:Lp4}
\end{align}
Where $\Delta t = t/P$.
With this, a discrete time operator $G(\Delta t)$ can be defined:
\begin{align}
G(\Delta t) &= e^{iL_1\frac{\Delta t}{2}}\, e^{iL_2\Delta t}\,
e^{iL_1\frac{\Delta t}{2}} \notag \\%
%
&= U_1 \biggl ( \frac{\Delta t}{2} \biggr )\, U_2 ( \Delta t )\,
U_1 \biggl ( \frac{\Delta t}{2} \biggr )
\label{introEq:Lp5}
\end{align}
Because $U_1(t)$ and $U_2(t)$ are unitary, $G(\Delta t)$ is also
unitary, meaning an integrator based on this factorization will be
reversible in time.

As an example, consider the following decomposition of $L$:
\begin{align}
iL_1 &= \dot{q}\frac{\partial}{\partial q}%
\label{introEq:Lp6a} \\%
%
iL_2 &= F(q)\frac{\partial}{\partial p}%
\label{introEq:Lp6b}
\end{align}
This leads to propagator $G( \Delta t )$ as,
\begin{equation}
G(\Delta t) = e^{\frac{\Delta t}{2} F(q)\frac{\partial}{\partial p}} \,
e^{\Delta t\,\dot{q}\frac{\partial}{\partial q}} \,
e^{\frac{\Delta t}{2} F(q)\frac{\partial}{\partial p}}
\label{introEq:Lp7}
\end{equation}
Operating $G(\Delta t)$ on $\Gamma(0)$, and utilizing the operator property
\begin{equation}
e^{c\frac{\partial}{\partial x}}\, f(x) = f(x+c)
\label{introEq:Lp8}
\end{equation}
Where $c$ is independent of $x$, one obtains the following:
\begin{align}
\dot{q}\biggl (\frac{\Delta t}{2}\biggr ) &=
\dot{q}(0) + \frac{\Delta t}{2m}\, F[q(0)] \label{introEq:Lp9a}\\%
%
q(\Delta t) &= q(0) + \Delta t\, \dot{q}\biggl (\frac{\Delta t}{2}\biggr )%
\label{introEq:Lp9b}\\%
%
\dot{q}(\Delta t) &= \dot{q}\biggl (\frac{\Delta t}{2}\biggr ) +
\frac{\Delta t}{2m}\, F[q(\Delta t)] \label{introEq:Lp9c}
\end{align}
Or written another way,
\begin{align}
q(\Delta t) &= q(0) + \dot{q}(0)\Delta t +
\frac{F[q(0)]}{m}\frac{\Delta t^2}{2} %
\label{introEq:Lp10a} \\%
%
\dot{q}(\Delta t) &= \dot{q}(0) + \frac{\Delta t}{2m}
\biggl [F[q(0)] + F[q(\Delta t)] \biggr] %
\label{introEq:Lp10b}
\end{align}
This is the velocity Verlet formulation presented in
Sec.~\ref{introSec:mdIntegrate}. Because this integration scheme is
comprised of unitary propagators, it is symplectic, and therefore area
preserving in phase space. From the preceding factorization, one can
see that the integration of the equations of motion would follow:
\begin{enumerate}
\item Calculate the velocities at the half step, $\frac{\Delta t}{2}$, from the forces calculated at the initial position.

\item Use the half step velocities to move positions one whole step, $\Delta t$.

\item Evaluate the forces at the new positions, $q(\Delta t)$, and use the new forces to complete the velocity move.

\item Repeat from step 1 with the new position, velocities, and forces assuming the roles of the initial values.
\end{enumerate}

\subsection{\label{introSec:MDsymplecticRot} Symplectic Propagation of the Rotation Matrix}

Based on the factorization from the previous section,
Dullweber \emph{et al.}\cite{Dullweber1997} proposed a scheme for the
symplectic propagation of the rotation matrix, $\mathbf{A}$, as an
alternative method for the integration of orientational degrees of
freedom. The method starts with a straightforward splitting of the
Liouville operator:
\begin{align}
iL_{\text{pos}} &= \dot{q}\frac{\partial}{\partial q} +
\mathbf{\dot{A}}\frac{\partial}{\partial \mathbf{A}}
\label{introEq:SR1a} \\%
%
iL_F &= F(q)\frac{\partial}{\partial p}
\label{introEq:SR1b} \\%
%
iL_{\tau} &= \tau(\mathbf{A})\frac{\partial}{\partial \pi}
\label{introEq:SR1c}
\end{align}
Where $\tau(\mathbf{A})$ is the torque of the system
due to the configuration, and $\pi$ is the conjugate
angular momenta of the system. The propagator, $G(\Delta t)$, becomes
\begin{equation}
G(\Delta t) = e^{\frac{\Delta t}{2} F(q)\frac{\partial}{\partial p}} \,
e^{\frac{\Delta t}{2} \tau(\mathbf{A})\frac{\partial}{\partial \pi}} \,
e^{\Delta t\,iL_{\text{pos}}} \,
e^{\frac{\Delta t}{2} \tau(\mathbf{A})\frac{\partial}{\partial \pi}} \,
e^{\frac{\Delta t}{2} F(q)\frac{\partial}{\partial p}}
\label{introEq:SR2}
\end{equation}
Propagation of the linear and angular momenta follows as in the Verlet
scheme. The propagation of positions also follows the Verlet scheme
with the addition of a further symplectic splitting of the rotation
matrix propagation, $\mathcal{U}_{\text{rot}}(\Delta t)$, within
$U_{\text{pos}}(\Delta t)$.
\begin{equation}
\mathcal{U}_{\text{rot}}(\Delta t) =
\mathcal{U}_x \biggl(\frac{\Delta t}{2}\biggr)\,
\mathcal{U}_y \biggl(\frac{\Delta t}{2}\biggr)\,
\mathcal{U}_z (\Delta t)\,
\mathcal{U}_y \biggl(\frac{\Delta t}{2}\biggr)\,
\mathcal{U}_x \biggl(\frac{\Delta t}{2}\biggr)
\label{introEq:SR3}
\end{equation}
Where $\mathcal{U}_j$ is a unitary rotation of $\mathbf{A}$ and
$\pi$ about each axis $j$. As all propagations are now
unitary and symplectic, the entire integration scheme is also
symplectic and time reversible.
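
The splitting in Eq.~\ref{introEq:SR3} can be sketched as follows.
The sign and ordering conventions depend on whether $\mathbf{A}$ maps
lab coordinates to body coordinates; this Python sketch assumes that
convention and a diagonal inertia tensor, and is illustrative rather
than the {\sc oopse} implementation:
\begin{verbatim}
import numpy as np

def axis_rotation(j, angle):
    # 3x3 rotation matrix about body axis j (0=x, 1=y, 2=z).
    c, s = np.cos(angle), np.sin(angle)
    R = np.eye(3)
    k, l = [(1, 2), (2, 0), (0, 1)][j]
    R[k, k] = c; R[k, l] = s
    R[l, k] = -s; R[l, l] = c
    return R

def U_j(j, dt, A, pi, I):
    # Free rotation about body axis j for time dt; the angular
    # momentum component pi[j] sets the rotation angle.
    R = axis_rotation(j, dt * pi[j] / I[j])
    return R @ A, R @ pi

def U_rot(dt, A, pi, I):
    # The symplectic x/2, y/2, z, y/2, x/2 splitting.
    for j, h in ((0, dt / 2), (1, dt / 2), (2, dt),
                 (1, dt / 2), (0, dt / 2)):
        A, pi = U_j(j, h, A, pi, I)
    return A, pi
\end{verbatim}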

\section{\label{introSec:layout}Dissertation Layout}

This dissertation is divided as follows: Ch.~\ref{chapt:RSA}
presents the random sequential adsorption simulations of related
phthalocyanines on a gold (111) surface. Ch.~\ref{chapt:OOPSE}
is about the writing of the molecular dynamics simulation package
{\sc oopse}. Ch.~\ref{chapt:lipid} covers the simulations of
phospholipid bilayers using a mesoscale model. Lastly,
Ch.~\ref{chapt:conclusion} concludes this dissertation with a
summary of all results. The chapters are arranged in chronological
order, and reflect the progression of techniques I employed during my
research.

The chapter concerning random sequential adsorption simulations is a
study in applying Statistical Mechanics simulation techniques in order
to obtain a simple model capable of explaining the results. My
advisor, Dr. Gezelter, and I were approached by a colleague,
Dr. Lieberman, about possible explanations for the partial coverage of a
gold surface by a particular compound of hers. We suggested it might
be due to the statistical packing fraction of disks on a plane, and
set about to simulate this system. As the events in our model were
not dynamic in nature, a Monte Carlo method was employed. Here, if a
molecule landed on the surface without overlapping another, then its
landing was accepted. However, if there was overlap, the landing was
rejected and a new random landing location was chosen. This defined
our acceptance rules and allowed us to construct a Markov chain whose
limiting distribution was the surface coverage in which we were
interested.

The following chapter, about the simulation package {\sc oopse},
describes in detail the large body of scientific code that had to be
written in order to study phospholipid bilayers. Although there are
pre-existing molecular dynamics simulation packages available, none
were capable of implementing the models we were developing. {\sc oopse}
is a unique package capable of not only integrating the equations of
motion in Cartesian space, but also of integrating the rotational
motion of rigid bodies and dipoles. Add to this the ability to perform
calculations across parallel processors and a flexible script syntax
for creating systems, and {\sc oopse} becomes a very powerful
scientific instrument for the exploration of our model.

This brings us to Ch.~\ref{chapt:lipid}. Using {\sc oopse}, I have been
able to parameterize a mesoscale model for phospholipid simulations.
This model retains information about solvent ordering around the
bilayer, as well as information regarding the interaction of the
phospholipid head groups' dipoles with each other and the surrounding
solvent. These simulations give us insight into the dynamic events
that lead to the formation of phospholipid bilayers, as well as
provide the foundation for future exploration of bilayer phase
behavior with this model.

This leads into the last chapter, where I discuss future directions
for both {\sc oopse} and this mesoscale model. Additionally, I will
give a summary of results for this dissertation.