\chapter{\label{chapt:intro}INTRODUCTION AND THEORETICAL BACKGROUND}

The techniques used in the course of this research fall under the two
main classes of molecular simulation: Molecular Dynamics and Monte
Carlo. Molecular Dynamics simulations are carried out by integrating
the equations of motion for a given system of particles, allowing the
researcher to gain insight into the time-dependent evolution of a
system. Transport phenomena are readily studied with this technique,
making Molecular Dynamics the main simulation method used in this
research. Other aspects of the research fall in the Monte Carlo class
of simulations. In Monte Carlo, the configuration space available to
the collection of particles is sampled stochastically, or randomly.
Each configuration is chosen with a given probability based on the
Boltzmann distribution. These types of simulations are best used to
probe properties of a system that are dependent only on the state of
the system. Structural information about a system is most readily
obtained through these types of methods.

Although the two techniques employed seem dissimilar, they are both
linked by the overarching principles of Statistical Mechanics.
Statistical Mechanics governs the behavior of both classes of
simulations and dictates what each method can and cannot do. When
investigating a system, one must first analyze what thermodynamic
properties of the system are being probed, then choose which method
best suits that objective.

\section{\label{introSec:statThermo}Statistical Mechanics}

The following section serves as a brief introduction to some of the
Statistical Mechanics concepts presented in this dissertation. It
begins with a derivation of Boltzmann weighted statistics and an
explanation of how one can use that result to calculate an
observable for a system. The section then concludes with a
discussion of the ergodic hypothesis and its relevance to this
research.

\subsection{\label{introSec:boltzman}Boltzmann weighted statistics}

Consider a system, $\gamma$, with total energy $E_{\gamma}$. Let
$\Omega(E_{\gamma})$ represent the number of degenerate ways
$\boldsymbol{\Gamma}\{r_1,r_2,\ldots r_n,p_1,p_2,\ldots p_n\}$, the
collection of positions and conjugate momenta of system $\gamma$, can
be configured to give $E_{\gamma}$. Further, if $\gamma$ is a subset
of a larger system, $\boldsymbol{\Lambda}\{E_1,E_2,\ldots
E_{\gamma},\ldots E_n\}$, the total degeneracy of the system can be
expressed as
\begin{equation}
\Omega(\boldsymbol{\Lambda}) = \Omega(E_1) \times \Omega(E_2) \times \ldots
\Omega(E_{\gamma}) \times \ldots \Omega(E_n).
\label{introEq:SM0.1}
\end{equation}
This multiplicative combination of degeneracies is illustrated in
Fig.~\ref{introFig:degenProd}.

\begin{figure}
\centering
\includegraphics[width=\linewidth]{omegaFig.eps}
\caption[An explanation of the combination of degeneracies]{Systems A and B both have three energy levels and two indistinguishable particles. When the total energy is 2, there are two ways for each system to disperse the energy. However, for system C, the superset of A and B, the total degeneracy is the product of the degeneracy of each system. In this case $\Omega(\text{C})$ is 4.}
\label{introFig:degenProd}
\end{figure}

Next, consider the specific case of $\gamma$ in contact with a
bath. Exchange of energy is allowed between the bath and the system,
subject to the constraint that the total energy
($E_{\text{bath}}+E_{\gamma}$) remain constant. $\Omega(E)$, where $E$
is the total energy of both systems, can be represented as
\begin{equation}
\Omega(E) = \Omega(E_{\gamma}) \times \Omega(E - E_{\gamma}),
\label{introEq:SM1}
\end{equation}
or additively as
\begin{equation}
\ln \Omega(E) = \ln \Omega(E_{\gamma}) + \ln \Omega(E - E_{\gamma}).
\label{introEq:SM2}
\end{equation}

The solution to Eq.~\ref{introEq:SM2} maximizes the number of
degenerate configurations in $E$.\cite{Frenkel1996}
This gives
\begin{equation}
\frac{\partial \ln \Omega(E)}{\partial E_{\gamma}} = 0 =
\frac{\partial \ln \Omega(E_{\gamma})}{\partial E_{\gamma}}
+
\frac{\partial \ln \Omega(E_{\text{bath}})}{\partial E_{\text{bath}}}
\frac{\partial E_{\text{bath}}}{\partial E_{\gamma}},
\label{introEq:SM3}
\end{equation}
where $E_{\text{bath}}$ is $E-E_{\gamma}$, and
$\frac{\partial E_{\text{bath}}}{\partial E_{\gamma}}$ is
$-1$. Eq.~\ref{introEq:SM3} becomes
\begin{equation}
\frac{\partial \ln \Omega(E_{\gamma})}{\partial E_{\gamma}} =
\frac{\partial \ln \Omega(E_{\text{bath}})}{\partial E_{\text{bath}}}.
\label{introEq:SM4}
\end{equation}

At this point, one can draw a relationship between the maximization of
degeneracy in Eq.~\ref{introEq:SM3} and the second law of
thermodynamics: namely, for a closed system, entropy will
increase during an irreversible process.\cite{chandler:1987} Here the
process is the partitioning of energy among the two systems. This
allows the following definition of entropy:
\begin{equation}
S = k_B \ln \Omega(E),
\label{introEq:SM5}
\end{equation}
where $k_B$ is the Boltzmann constant. Having defined entropy, one can
also define the temperature of the system using the Maxwell relation,
\begin{equation}
\frac{1}{T} = \biggl ( \frac{\partial S}{\partial E} \biggr )_{N,V}.
\label{introEq:SM6}
\end{equation}
The temperature in the system $\gamma$ is then
\begin{equation}
\beta( E_{\gamma} ) = \frac{1}{k_B T} =
\frac{\partial \ln \Omega(E_{\gamma})}{\partial E_{\gamma}}.
\label{introEq:SM7}
\end{equation}
Applying this to Eq.~\ref{introEq:SM4} gives the following:
\begin{equation}
\beta( E_{\gamma} ) = \beta( E_{\text{bath}} ).
\label{introEq:SM8}
\end{equation}
Eq.~\ref{introEq:SM8} shows that the partitioning of energy between
the two systems is actually a process of thermal
equilibration.\cite{Frenkel1996}

An application of these results is to formulate the expectation value
of an observable, $A$, in the canonical ensemble. In the canonical
ensemble, the number of particles, $N$, the volume, $V$, and the
temperature, $T$, are all held constant while the energy, $E$, is
allowed to fluctuate. Returning to the previous example, the bath
system is now an infinitely large thermal bath, whose exchange of
energy with the system $\gamma$ holds the temperature constant. The
partitioning of energy between the bath and the system is then related
to the total energy of both systems and the fluctuations in
$E_{\gamma}$:
\begin{equation}
\Omega( E_{\gamma} ) = \Omega( E - E_{\gamma} ).
\label{introEq:SM9}
\end{equation}
The expectation value of $A$ can be defined as
\begin{equation}
\langle A \rangle =
\int\limits_{\boldsymbol{\Gamma}} d\boldsymbol{\Gamma}
P_{\gamma} A(\boldsymbol{\Gamma}),
\label{introEq:SM10}
\end{equation}
where $\int\limits_{\boldsymbol{\Gamma}} d\boldsymbol{\Gamma}$ denotes
an integration over all accessible points in phase space, $P_{\gamma}$
is the probability of being in a given phase state, and
$A(\boldsymbol{\Gamma})$ is an observable that is a function of the
phase state.

Because entropy seeks to maximize the number of degenerate states at a
given energy, the probability of being in a particular state in
$\gamma$ will be directly proportional to the number of allowable
states the coupled system is able to assume. Namely,
\begin{equation}
P_{\gamma} \propto \Omega( E_{\text{bath}} ) =
e^{\ln \Omega( E - E_{\gamma})}.
\label{introEq:SM11}
\end{equation}
Because $E_{\gamma} \ll E$, $\ln \Omega$ can be expanded in a Taylor series:
\begin{equation}
\ln \Omega ( E - E_{\gamma}) =
\ln \Omega (E) -
E_{\gamma} \frac{\partial \ln \Omega }{\partial E}
+ \ldots.
\label{introEq:SM12}
\end{equation}
Higher order terms are omitted as $E$ is an infinite thermal
bath. Further, using Eq.~\ref{introEq:SM7}, Eq.~\ref{introEq:SM11} can
be rewritten:
\begin{equation}
P_{\gamma} \propto e^{-\beta E_{\gamma}},
\label{introEq:SM13}
\end{equation}
where $\ln \Omega(E)$ has been factored out of the proportionality as a
constant. Normalizing the probability ($\int\limits_{\boldsymbol{\Gamma}}
d\boldsymbol{\Gamma} P_{\gamma} = 1$) gives
\begin{equation}
P_{\gamma} = \frac{e^{-\beta E_{\gamma}}}
{\int\limits_{\boldsymbol{\Gamma}} d\boldsymbol{\Gamma} e^{-\beta E_{\gamma}}}.
\label{introEq:SM14}
\end{equation}
This result is the standard Boltzmann statistical distribution.
Applying it to Eq.~\ref{introEq:SM10}, one can obtain the following relation for ensemble averages:
\begin{equation}
\langle A \rangle =
\frac{\int\limits_{\boldsymbol{\Gamma}} d\boldsymbol{\Gamma}
A( \boldsymbol{\Gamma} ) e^{-\beta E_{\gamma}}}
{\int\limits_{\boldsymbol{\Gamma}} d\boldsymbol{\Gamma} e^{-\beta E_{\gamma}}}.
\label{introEq:SM15}
\end{equation}

\subsection{\label{introSec:ergodic}The Ergodic Hypothesis}

In the case of a Molecular Dynamics simulation, rather than
calculating an ensemble average integral over phase space as in
Eq.~\ref{introEq:SM15}, it becomes easier to calculate the time
average of an observable. Namely,
\begin{equation}
\langle A \rangle_t = \frac{1}{\tau}
\int_0^{\tau} A[\boldsymbol{\Gamma}(t)]\,dt,
\label{introEq:SM16}
\end{equation}
where the value of an observable is averaged over the length of time,
$\tau$, that the simulation is run. This type of measurement mirrors
the experimental measurement of an observable. In an experiment, the
instrument analyzing the system must average its observation over the
finite time of the measurement. What is required, then, is a principle
to relate the time average to the ensemble average. This is the
ergodic hypothesis.

Ergodicity is the assumption that given an infinite amount of time, a
system will visit every available point in phase
space.\cite{Frenkel1996} For most systems, this is a valid assumption,
except in cases where the system may be trapped in a local feature
(\emph{e.g.}~glasses). When valid, ergodicity allows the unification
of a time averaged observation and an ensemble averaged one. If an
observation is averaged over a sufficiently long time, the system is
assumed to visit all energetically accessible points in phase space,
giving a properly weighted statistical average. This allows the
researcher freedom of choice when deciding how best to measure a given
observable. When an ensemble averaged approach seems most logical,
the Monte Carlo techniques described in Sec.~\ref{introSec:monteCarlo}
can be utilized. Conversely, if a problem lends itself to a time
averaging approach, the Molecular Dynamics techniques in
Sec.~\ref{introSec:MD} can be employed.

234    
235 mmeineke 1003 \section{\label{introSec:monteCarlo}Monte Carlo Simulations}
236 mmeineke 914
237 mmeineke 953 The Monte Carlo method was developed by Metropolis and Ulam for their
238     work in fissionable material.\cite{metropolis:1949} The method is so
239 mmeineke 1088 named, because it uses random numbers extensively.\cite{allen87:csl}
240     The Monte Carlo method allows for the solution of integrals through
241     the stochastic sampling of the values within the integral. In the
242     simplest case, the evaluation of an integral would follow a brute
243     force method of sampling.\cite{Frenkel1996} Consider the following
244     single dimensional integral:
245 mmeineke 955 \begin{equation}
246 mmeineke 1106 I = \int_a^b f(x)dx.
247 mmeineke 955 \label{eq:MCex1}
248     \end{equation}
249     The equation can be recast as:
250     \begin{equation}
251 mmeineke 1106 I = \int^b_a \frac{f(x)}{\rho(x)} \rho(x) dx,
252 mmeineke 1042 \label{introEq:Importance1}
253     \end{equation}
254 mmeineke 1106 where $\rho(x)$ is an arbitrary probability distribution in $x$. If
255 mmeineke 1042 one conducts $\tau$ trials selecting a random number, $\zeta_\tau$,
256     from the distribution $\rho(x)$ on the interval $[a,b]$, then
257     Eq.~\ref{introEq:Importance1} becomes
258     \begin{equation}
259 mmeineke 1106 I= \lim_{\tau \rightarrow \infty}\biggl \langle \frac{f(x)}{\rho(x)} \biggr \rangle_{\text{trials}[a,b]}.
260 mmeineke 1042 \label{introEq:Importance2}
261     \end{equation}
262     If $\rho(x)$ is uniformly distributed over the interval $[a,b]$,
263     \begin{equation}
264 mmeineke 1106 \rho(x) = \frac{1}{b-a},
265 mmeineke 1042 \label{introEq:importance2b}
266     \end{equation}
267     then the integral can be rewritten as
268     \begin{equation}
269     I = (b-a)\lim_{\tau \rightarrow \infty}
270 mmeineke 1106 \langle f(x) \rangle_{\text{trials}[a,b]}.
271 mmeineke 955 \label{eq:MCex2}
272     \end{equation}
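
As a concrete illustration of Eq.~\ref{eq:MCex2}, the short Python
sketch below estimates a one-dimensional integral by uniform random
sampling. It is purely illustrative: the function name and the $\sin$
test integrand are my own choices, not part of any simulation code
used in this research.
\begin{verbatim}
import math
import random

def mc_integrate(f, a, b, trials=100000):
    """Estimate I = (b - a) * <f(x)> by sampling x
    uniformly on [a, b] (cf. Eq. MCex2)."""
    total = 0.0
    for _ in range(trials):
        x = random.uniform(a, b)  # one trial from rho(x) = 1/(b - a)
        total += f(x)
    return (b - a) * total / trials

# The integral of sin(x) on [0, pi] is exactly 2.
print(mc_integrate(math.sin, 0.0, math.pi))  # ~2.0, with statistical noise
\end{verbatim}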

However, in Statistical Mechanics, one is typically interested in
integrals of the form:
\begin{equation}
\langle A \rangle = \frac{\int d^N \mathbf{r}~A(\mathbf{r}^N)%
e^{-\beta V(\mathbf{r}^N)}}%
{\int d^N \mathbf{r}~e^{-\beta V(\mathbf{r}^N)}},
\label{eq:mcEnsAvg}
\end{equation}
where $\mathbf{r}^N$ stands for the coordinates of all $N$ particles
and $A$ is some observable that is only dependent on position. This is
the ensemble average of $A$ as presented in
Sec.~\ref{introSec:statThermo}, except here $A$ is independent of
momentum. Therefore the momentum contribution to the integral can be
factored out, leaving the configurational integral. Application of the
brute force method to this system would yield highly inefficient
results. Due to the Boltzmann weighting of this integral, most random
configurations will have a near-zero contribution to the ensemble
average. This is where importance sampling comes into
play.\cite{allen87:csl}

Importance sampling is a method in which the distribution from which
the random configurations are chosen is selected in such a way as to
efficiently sample the integral in question. Looking at
Eq.~\ref{eq:mcEnsAvg}, one can define the Boltzmann distribution
\begin{equation}
\rho_{kT}(\mathbf{r}^N) =
\frac{e^{-\beta V(\mathbf{r}^N)}}
{\int d^N \mathbf{r}~e^{-\beta V(\mathbf{r}^N)}}.
\label{introEq:MCboltzman}
\end{equation}
The ensemble average can then be rewritten as
\begin{equation}
\langle A \rangle = \int d^N \mathbf{r}~A(\mathbf{r}^N)
\rho_{kT}(\mathbf{r}^N).
\label{introEq:Importance3}
\end{equation}
Applying Eq.~\ref{introEq:Importance1}, one obtains
\begin{equation}
\langle A \rangle = \biggl \langle
\frac{ A \rho_{kT}(\mathbf{r}^N) }
{\rho(\mathbf{r}^N)} \biggr \rangle_{\text{trials}}.
\label{introEq:Importance4}
\end{equation}
By selecting $\rho(\mathbf{r}^N)$ to be $\rho_{kT}(\mathbf{r}^N)$,
Eq.~\ref{introEq:Importance4} becomes
\begin{equation}
\langle A \rangle = \langle A(\mathbf{r}^N) \rangle_{kT}.
\label{introEq:Importance5}
\end{equation}
The difficulty is selecting points $\mathbf{r}^N$ such that they are
sampled from the distribution $\rho_{kT}(\mathbf{r}^N)$. A solution
was proposed by Metropolis \emph{et al.},\cite{metropolis:1953} which involved
the use of a Markov chain whose limiting distribution was
$\rho_{kT}(\mathbf{r}^N)$.
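
To make the role of $\rho$ concrete, the following hypothetical Python
sketch applies Eq.~\ref{introEq:Importance2} to a one-dimensional
integral where the sampling distribution can be matched exactly to the
integrand; the variance of the estimate then vanishes entirely. (The
function names and the exponential test case are mine.)
\begin{verbatim}
import math
import random

def importance_sample(f, rho, draw, trials=100000):
    """Average f(x)/rho(x) over points x drawn from the
    distribution rho (cf. Eq. Importance2)."""
    total = 0.0
    for _ in range(trials):
        x = draw()
        total += f(x) / rho(x)
    return total / trials

# The integral of exp(-x) on [0, infinity) is exactly 1.  Drawing x
# from rho(x) = exp(-x) via the inverse transform makes f/rho
# identically 1, so this estimate carries no statistical noise at all.
f = lambda x: math.exp(-x)
rho = lambda x: math.exp(-x)
draw = lambda: -math.log(1.0 - random.random())
print(importance_sample(f, rho, draw))  # exactly 1.0
\end{verbatim}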

\subsection{\label{introSec:markovChains}Markov Chains}

A Markov chain is a chain of states satisfying the following
conditions:\cite{leach01:mm}
\begin{enumerate}
\item The outcome of each trial depends only on the outcome of the previous trial.
\item Each trial belongs to a finite set of outcomes called the state space.
\end{enumerate}
Given two configurations, $\mathbf{r}^N_m$ and $\mathbf{r}^N_n$,
$\rho_m$ and $\rho_n$ are the probabilities of being in state
$\mathbf{r}^N_m$ and $\mathbf{r}^N_n$, respectively. Further, the two
states are linked by a transition probability, $\pi_{mn}$, which is the
probability of going from state $m$ to state $n$.

\newcommand{\accMe}{\operatorname{acc}}

The transition probability is given by the following:
\begin{equation}
\pi_{mn} = \alpha_{mn} \times \accMe(m \rightarrow n),
\label{introEq:MCpi}
\end{equation}
where $\alpha_{mn}$ is the probability of attempting the move $m
\rightarrow n$, and $\accMe$ is the probability of accepting the move
$m \rightarrow n$. Defining a probability vector,
$\boldsymbol{\rho}$, such that
\begin{equation}
\boldsymbol{\rho} = \{\rho_1, \rho_2, \ldots \rho_m, \rho_n,
\ldots \rho_N \},
\label{introEq:MCrhoVector}
\end{equation}
a transition matrix $\boldsymbol{\Pi}$ can be defined,
whose elements are $\pi_{mn}$, for each given transition. The
limiting distribution of the Markov chain can then be found by
applying the transition matrix an infinite number of times to the
distribution vector,
\begin{equation}
\boldsymbol{\rho}_{\text{limit}} =
\lim_{N \rightarrow \infty} \boldsymbol{\rho}_{\text{initial}}
\boldsymbol{\Pi}^N.
\label{introEq:MCmarkovLimit}
\end{equation}
The limiting distribution of the chain is independent of the starting
distribution, and successive applications of the transition matrix
will only yield the limiting distribution again,
\begin{equation}
\boldsymbol{\rho}_{\text{limit}} = \boldsymbol{\rho}_{\text{limit}}
\boldsymbol{\Pi}.
\label{introEq:MCmarkovEquil}
\end{equation}
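
A small numerical sketch, in Python with an arbitrary three-state
transition matrix of my own invention, shows
Eq.~\ref{introEq:MCmarkovLimit} and Eq.~\ref{introEq:MCmarkovEquil}
in action:
\begin{verbatim}
import numpy as np

# Row m of Pi holds the transition probabilities pi_mn out of state m,
# so every row sums to one.
Pi = np.array([[0.50, 0.50, 0.00],
               [0.25, 0.50, 0.25],
               [0.00, 0.50, 0.50]])

rho = np.array([1.0, 0.0, 0.0])  # an arbitrary starting distribution

# Repeated application of Pi drives rho to the limiting distribution.
for _ in range(100):
    rho = rho @ Pi

print(rho)       # ~[0.25, 0.5, 0.25], independent of the start
print(rho @ Pi)  # unchanged: rho_limit = rho_limit * Pi
\end{verbatim}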

\subsection{\label{introSec:metropolisMethod}The Metropolis Method}

In the Metropolis method,\cite{metropolis:1953}
Eq.~\ref{introEq:MCmarkovEquil} is solved such that
$\boldsymbol{\rho}_{\text{limit}}$ matches the Boltzmann distribution
of states. The method accomplishes this by imposing the strong
condition of microscopic reversibility on the equilibrium
distribution. This means that, at equilibrium, the probability of going
from $m$ to $n$ is the same as going from $n$ to $m$,
\begin{equation}
\rho_m\pi_{mn} = \rho_n\pi_{nm}.
\label{introEq:MCmicroReverse}
\end{equation}
Further, $\boldsymbol{\alpha}$ is chosen to be a symmetric matrix in
the Metropolis method. Using Eq.~\ref{introEq:MCpi},
Eq.~\ref{introEq:MCmicroReverse} becomes
\begin{equation}
\frac{\accMe(m \rightarrow n)}{\accMe(n \rightarrow m)} =
\frac{\rho_n}{\rho_m}.
\label{introEq:MCmicro2}
\end{equation}
For a Boltzmann limiting distribution,
\begin{equation}
\frac{\rho_n}{\rho_m} = e^{-\beta[\mathcal{U}(n) - \mathcal{U}(m)]}
= e^{-\beta \Delta \mathcal{U}}.
\label{introEq:MCmicro3}
\end{equation}
This allows the following set of acceptance rules to be defined:
\begin{equation}
\accMe( m \rightarrow n ) =
\begin{cases}
1& \text{if $\Delta \mathcal{U} \leq 0$,} \\
e^{-\beta \Delta \mathcal{U}}& \text{if $\Delta \mathcal{U} > 0$.}
\end{cases}
\label{introEq:accRules}
\end{equation}

Using the acceptance criteria from Eq.~\ref{introEq:accRules}, the
Metropolis method proceeds as follows:
\begin{enumerate}
\item Generate an initial configuration $\mathbf{r}^N$ which has some finite probability in $\rho_{kT}$.
\item Modify $\mathbf{r}^N$ to generate configuration $\mathbf{r^{\prime}}^N$.
\item If the new configuration lowers the energy of the system, accept the move ($\mathbf{r}^N$ becomes $\mathbf{r^{\prime}}^N$). Otherwise accept with probability $e^{-\beta \Delta \mathcal{U}}$.
\item Accumulate the average for the configurational observable of interest.
\item Repeat from step 2 until the average converges.
\end{enumerate}
One important note is that the average is accumulated whether the move
is accepted or not; this ensures proper weighting of the average.
Using Eq.~\ref{introEq:Importance4}, it becomes clear that the
accumulated averages are the ensemble averages, as this method ensures
that the limiting distribution is the Boltzmann distribution.
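
The following Python sketch implements these steps for a deliberately
simple test case, a single particle in a one-dimensional harmonic
potential. The potential and move size are illustrative choices, not a
system studied in this work.
\begin{verbatim}
import math
import random

beta = 1.0      # 1/(k_B T) in reduced units
delta = 0.5     # maximum trial displacement (alpha is symmetric)

def potential(x):
    return 0.5 * x * x        # harmonic test potential U(x)

x, total, n_samples = 0.0, 0.0, 200000
for _ in range(n_samples):
    x_trial = x + random.uniform(-delta, delta)  # symmetric trial move
    dU = potential(x_trial) - potential(x)
    # acceptance rules of Eq. accRules
    if dU <= 0.0 or random.random() < math.exp(-beta * dU):
        x = x_trial
    total += x * x   # accumulate whether or not the move was accepted

# For U = x^2/2, the exact ensemble average <x^2> is 1/beta = 1.
print(total / n_samples)
\end{verbatim}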

\section{\label{introSec:MD}Molecular Dynamics Simulations}

The main simulation tool used in this research is Molecular Dynamics.
Molecular Dynamics is the numerical integration of the equations of
motion for a system in order to obtain information about the dynamic
changes in the positions and momenta of the constituent particles.
This allows the calculation of not only configurational observables,
but momentum-dependent ones as well: diffusion constants, relaxation
events, folding/unfolding events, etc. With the time-dependent
information gained from a Molecular Dynamics simulation, one can also
calculate time correlation functions of the form\cite{Hansen86}
\begin{equation}
\langle A(t)\,A(0)\rangle = \lim_{\tau\rightarrow\infty} \frac{1}{\tau}
\int_0^{\tau} A(t+t^{\prime})\,A(t^{\prime})\,dt^{\prime}.
\label{introEq:timeCorr}
\end{equation}
These correlations can be used to measure fundamental time constants
of a system, such as diffusion constants from the velocity
autocorrelation or dipole relaxation times from the dipole
autocorrelation. Due to the principle of ergodicity,
Sec.~\ref{introSec:ergodic}, the averages of static observables over
the length of the simulation are taken to be the ensemble averages for
the system.
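
In a simulation, the integral in Eq.~\ref{introEq:timeCorr} is
evaluated as a discrete sum over the stored time series, averaging over
all available time origins. A minimal Python sketch of such an
estimator, applied here to a synthetic exponentially correlated signal
rather than to any data from this research, is:
\begin{verbatim}
import numpy as np

def time_correlation(a, max_lag):
    """Estimate <A(t)A(0)> from a time series a[0..T-1] by
    averaging a[t'+t]*a[t'] over all available origins t'."""
    return np.array([np.mean(a[lag:] * a[:len(a) - lag])
                     for lag in range(max_lag)])

# A synthetic signal whose correlation decays as 0.99**lag.
rng = np.random.default_rng(0)
a = np.zeros(100000)
for i in range(1, len(a)):
    a[i] = 0.99 * a[i - 1] + rng.normal()
print(time_correlation(a, 5))  # decays in proportion to 0.99**lag
\end{verbatim}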

The choice of when to use molecular dynamics over Monte Carlo
techniques is normally decided by the observables in which the
researcher is interested. If the observables depend on time in any
fashion, then the only choice is molecular dynamics in some form.
However, when the observable is dependent only on the configuration,
then for most small systems, Monte Carlo techniques will be more
efficient. The focus of research in the second half of this
dissertation is centered on the dynamic properties of phospholipid
bilayers, making molecular dynamics key in the simulation of those
properties.

\subsection{\label{introSec:mdAlgorithm}Molecular Dynamics Algorithms}

To illustrate how the molecular dynamics technique is applied, the
following sections will describe the sequence involved in a
simulation. Sec.~\ref{introSec:mdInit} deals with the initialization
of a simulation. Sec.~\ref{introSec:mdForce} discusses issues
involved with the calculation of the forces.
Sec.~\ref{introSec:mdIntegrate} concludes the algorithm discussion
with the integration of the equations of motion.\cite{Frenkel1996}

\subsection{\label{introSec:mdInit}Initialization}

When selecting the initial configuration for the simulation, it is
important to consider what dynamics one is hoping to observe.
Ch.~\ref{chapt:lipid} deals with the formation and equilibrium dynamics of
phospholipid membranes. Therefore in these simulations initial
positions were selected that in some cases dispersed the lipids in
water, and in other cases structured the lipids into preformed
bilayers. Important considerations at this stage of the simulation are:
\begin{itemize}
\item There are no overlapping molecules or atoms.
\item Velocities are chosen in such a way as to give the system zero total momentum and angular momentum.
\item It is also sometimes desirable to select the velocities to correctly sample the ensemble at a particular target temperature.
\end{itemize}

The first point is important due to the forces generated when two
particles are too close together. If overlap occurs, the first
evaluation of forces will return numbers so large as to render the
numerical integration of the equations of motion meaningless. The second
consideration keeps the system from drifting or rotating as a whole.
This arises from the fact that most simulations are of systems in
equilibrium in the absence of outside forces. Therefore any net
movement would be unphysical and an artifact of the simulation method
used. The final point addresses the selection of the magnitude of the
initial velocities. For many simulations it is convenient to use this
opportunity to scale the amount of kinetic energy to reflect the
desired thermal distribution of the system. However, it must be noted
that due to equipartition, most systems will require further velocity
rescaling after the first few initial simulation steps due to either
loss or gain of kinetic energy from energy stored as potential energy.
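
A sketch of such a velocity initialization, in Python and under the
simplifying assumptions of equal masses, reduced units, and no angular
momentum removal, might look like the following (the function name is
mine, not taken from any production code):
\begin{verbatim}
import numpy as np

def initialize_velocities(n, mass, kT, rng=None):
    """Draw velocities from a Maxwell-Boltzmann distribution, zero
    the net momentum, then rescale the kinetic energy to (3/2) N kT."""
    rng = rng or np.random.default_rng()
    v = rng.normal(0.0, np.sqrt(kT / mass), size=(n, 3))
    v -= v.mean(axis=0)                   # remove center-of-mass drift
    kinetic = 0.5 * mass * np.sum(v * v)
    v *= np.sqrt(1.5 * n * kT / kinetic)  # rescale to the target T
    return v

v = initialize_velocities(1000, mass=1.0, kT=1.0)
print(v.sum(axis=0))                       # ~[0, 0, 0]: no net momentum
print(0.5 * np.sum(v * v) / (1.5 * 1000))  # 1.0 (= kT) by construction
\end{verbatim}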

\subsection{\label{introSec:mdForce}Force Evaluation}

The evaluation of forces is the most computationally expensive portion
of any molecular dynamics simulation. This is due entirely to the
evaluation of long range forces in a simulation, which are typically
pair potentials. These forces are most commonly the van der Waals
force, and sometimes Coulombic forces as well. For a pair-wise
interaction, there are $N(N-1)/2$ pairs to be evaluated, where $N$ is
the number of particles in the system. This leads to calculations which
scale as $N^2$, making large simulations prohibitive in the absence
of any computation saving techniques.

Another consideration one must resolve is that in a given simulation,
a disproportionate number of the particles will feel the effects of
the surface.\cite{allen87:csl} For a cubic system of 1000 particles
arranged in a $10 \times 10 \times 10$ cube, 488 particles will be
exposed to the surface. Unless one is simulating an isolated cluster
in a vacuum, the behavior of the system will be far from the desired
bulk characteristics. To offset this, simulations employ the use of
periodic images.\cite{born:1912}

The technique involves the use of an algorithm that replicates the
simulation box on an infinite lattice in Cartesian space. Any
particle leaving the simulation box on one side will have an image of
itself enter on the opposite side (see
Fig.~\ref{introFig:pbc}). Therefore, any pair of particles has an
image of each other, real or periodic, within $\text{box}/2$. A
discussion of the method used to calculate the periodic image can be
found in Sec.~\ref{oopseSec:pbc}.

\begin{figure}
\centering
\includegraphics[width=\linewidth]{pbcFig.eps}
\caption[An illustration of periodic boundary conditions]{A 2-D illustration of periodic boundary conditions. As one particle leaves the right of the simulation box, an image of it enters the left.}
\label{introFig:pbc}
\end{figure}

Returning to the topic of the computational scaling of the force
evaluation, the use of periodic boundary conditions requires that a
cutoff radius be employed. Using a cutoff radius improves the
efficiency of the force evaluation, as particles farther than a
predetermined distance, $r_{\text{cut}}$, are not included in the
calculation.\cite{Frenkel1996} In a simulation with periodic images,
there are two methods to choose from, both with their own cutoff
limits. In the minimum image convention, $r_{\text{cut}}$ has a
maximum value of $\text{box}/2$. This is because each atom has only
one image that is seen by other atoms. The image used is the one that
minimizes the distance between the two atoms. A system of wrapped
images about a central atom therefore has a maximum length scale of
box on a side (Fig.~\ref{introFig:rMaxMin}). The second convention,
the multiple image convention, has a maximum $r_{\text{cut}}$ of the box
length itself. Here multiple images of each atom are replicated in the
periodic cells surrounding the central atom; this causes the atom to
see multiple copies of several atoms. If the cutoff radius is larger
than the box, however, then the atom will see an image of itself, and
attempt to calculate an unphysical self-self force interaction
(Fig.~\ref{introFig:rMaxMult}). Due to the increased complexity and
computational inefficiency of the multiple image method, the minimum
image method is used throughout this research.
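
In practice, the minimum image convention reduces to wrapping each
component of a pair separation back into one box length centered on
zero. A minimal Python sketch, assuming a cubic box (the function name
is mine), is:
\begin{verbatim}
import numpy as np

def minimum_image(r_i, r_j, box):
    """Displacement from particle j to particle i using the nearest
    image: each component is wrapped to within half a box length."""
    d = r_i - r_j
    return d - box * np.round(d / box)

box = 10.0
r_i = np.array([9.5, 0.2, 5.0])
r_j = np.array([0.5, 9.8, 5.0])
# The nearest images are used, not the raw separation [9.0, -9.6, 0.0].
print(minimum_image(r_i, r_j, box))   # [-1.0, 0.4, 0.0]
\end{verbatim}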

\begin{figure}
\centering
\includegraphics[width=\linewidth]{rCutMaxFig.eps}
\caption[An explanation of minimum image convention]{The yellow atom has all other images wrapped to itself as the center. If $r_{\text{cut}}=\text{box}/2$, then the distribution is uniform (blue atoms). However, when $r_{\text{cut}}>\text{box}/2$ the corners are disproportionately weighted (green atoms) vs the axial directions (shaded regions).}
\label{introFig:rMaxMin}
\end{figure}

\begin{figure}
\centering
\includegraphics[width=\linewidth]{rCutMaxMultFig.eps}
\caption[An explanation of multiple image convention]{The yellow atom is the central wrapping point. The blue atoms are the minimum images of the system about the central atom. The boxes with the green atoms are multiple images of the central box. If $r_{\text{cut}} \geq \text{box}$ then the central atom sees multiple images of itself (red atom), creating a self-self force evaluation.}
\label{introFig:rMaxMult}
\end{figure}

With the use of a cutoff radius, however, comes a discontinuity in
the potential energy curve (Fig.~\ref{introFig:shiftPot}). To remove this
discontinuity, one evaluates the potential energy at
$r_{\text{cut}}$ and subtracts that value from the potential everywhere
inside the cutoff, causing the function to go smoothly to zero at the
cutoff radius. This shifted potential ensures conservation of energy
when integrating the Newtonian equations of motion.
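
For the Lennard-Jones potential, this shift amounts to evaluating the
unshifted potential at $r_{\text{cut}}$ and subtracting it everywhere
inside the cutoff, as in this illustrative Python sketch (reduced
units; the function names are mine):
\begin{verbatim}
def lj(r, epsilon=1.0, sigma=1.0):
    """Standard 12-6 Lennard-Jones pair potential."""
    sr6 = (sigma / r) ** 6
    return 4.0 * epsilon * (sr6 * sr6 - sr6)

def lj_shifted(r, r_cut, epsilon=1.0, sigma=1.0):
    """Lennard-Jones potential shifted so that it goes exactly to
    zero at r_cut, removing the discontinuity at the cutoff."""
    if r >= r_cut:
        return 0.0
    return lj(r, epsilon, sigma) - lj(r_cut, epsilon, sigma)

# The unshifted potential jumps from a small negative value to zero
# at the cutoff; the shifted potential is continuous there.
print(lj(2.5), lj_shifted(2.4999, r_cut=2.5))  # ~-0.0163 vs ~0.0
\end{verbatim}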

\begin{figure}
\centering
\includegraphics[width=\linewidth]{shiftedPot.eps}
\caption[Shifting the Lennard-Jones Potential]{The Lennard-Jones potential (blue line) is shifted (red line) to remove the discontinuity at $r_{\text{cut}}$.}
\label{introFig:shiftPot}
\end{figure}

The second main simplification used in this research is the Verlet
neighbor list.\cite{allen87:csl} In the Verlet method, one generates
a list of all neighbor atoms, $j$, surrounding atom $i$ within some
cutoff $r_{\text{list}}$, where $r_{\text{list}}>r_{\text{cut}}$.
This list is created the first time forces are evaluated; on
subsequent force evaluations, pair contributions are calculated only
for atoms appearing in the neighbor lists. The lists are updated if any
particle in the system moves farther than $r_{\text{list}}-r_{\text{cut}}$,
which indicates the possibility that a particle has left or joined a
neighbor list.
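
A schematic Python version of this bookkeeping, ignoring periodic
boundaries for clarity and using the $r_{\text{list}}-r_{\text{cut}}$
rebuild criterion described above (function names are mine), is:
\begin{verbatim}
import numpy as np

def build_neighbor_list(pos, r_list):
    """Record all pairs (i, j) within r_list, together with the
    positions at build time, used later to test for staleness."""
    n = len(pos)
    pairs = [(i, j) for i in range(n) for j in range(i + 1, n)
             if np.linalg.norm(pos[i] - pos[j]) < r_list]
    return pairs, pos.copy()

def needs_rebuild(pos, pos_at_build, r_list, r_cut):
    """Rebuild once any particle has moved farther than
    r_list - r_cut since the list was created."""
    moved = np.linalg.norm(pos - pos_at_build, axis=1)
    return bool(np.any(moved > r_list - r_cut))

rng = np.random.default_rng(1)
pos = rng.uniform(0.0, 10.0, size=(50, 3))
pairs, pos0 = build_neighbor_list(pos, r_list=3.0)
pos += rng.normal(0.0, 0.2, size=pos.shape)   # particles drift
print(len(pairs), needs_rebuild(pos, pos0, 3.0, 2.5))
\end{verbatim}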

\subsection{\label{introSec:mdIntegrate}Integration of the Equations of Motion}

A starting point for the discussion of molecular dynamics integrators
is the Verlet algorithm.\cite{Frenkel1996} It begins with a Taylor
expansion of position in time:
\begin{equation}
q(t+\Delta t)= q(t) + v(t)\Delta t + \frac{F(t)}{2m}\Delta t^2 +
\frac{\Delta t^3}{3!}\frac{\partial^3 q(t)}{\partial t^3} +
\mathcal{O}(\Delta t^4) ,
\label{introEq:verletForward}
\end{equation}
as well as
\begin{equation}
q(t-\Delta t)= q(t) - v(t)\Delta t + \frac{F(t)}{2m}\Delta t^2 -
\frac{\Delta t^3}{3!}\frac{\partial^3 q(t)}{\partial t^3} +
\mathcal{O}(\Delta t^4) ,
\label{introEq:verletBack}
\end{equation}
where $m$ is the mass of the particle, $q(t)$ is the position at time
$t$, $v(t)$ the velocity, and $F(t)$ the force acting on the
particle. Adding together Eq.~\ref{introEq:verletForward} and
Eq.~\ref{introEq:verletBack} results in
\begin{equation}
q(t+\Delta t)+q(t-\Delta t) =
2q(t) + \frac{F(t)}{m}\Delta t^2 + \mathcal{O}(\Delta t^4) ,
\label{introEq:verletSum}
\end{equation}
or equivalently,
\begin{equation}
q(t+\Delta t) \approx
2q(t) - q(t-\Delta t) + \frac{F(t)}{m}\Delta t^2.
\label{introEq:verletFinal}
\end{equation}
This estimate of the new positions carries an error on the order of
$\Delta t^4$.

In practice, however, the simulations in this research were integrated
with the velocity reformulation of the Verlet method:\cite{allen87:csl}
\begin{align}
q(t+\Delta t) &= q(t) + v(t)\Delta t + \frac{F(t)}{2m}\Delta t^2 ,%
\label{introEq:MDvelVerletPos} \\%
%
v(t+\Delta t) &= v(t) + \frac{\Delta t}{2m}[F(t) + F(t+\Delta t)] .%
\label{introEq:MDvelVerletVel}
\end{align}
The original Verlet algorithm can be regained by substituting the
velocity back into Eq.~\ref{introEq:MDvelVerletPos}. The Verlet
formulations are chosen in this research because the algorithms show
very little long-time drift in energy. Energy conservation in a
molecular dynamics simulation is of extreme importance, as it is a
measure of how closely one is following the ``true'' trajectory with
the finite integration scheme. An exact solution to the integration
will conserve area in phase space (i.e.~will be symplectic), as well
as be reversible in time; that is, the trajectory integrated forward
or backwards will exactly match itself. Having a finite algorithm
that both conserves area in phase space and is time reversible
therefore increases the reliability, but does not guarantee the
``correctness'', of the integrated trajectory.

It can be shown\cite{Frenkel1996} that although the Verlet algorithm
does not rigorously preserve the actual Hamiltonian, it does preserve
a pseudo-Hamiltonian which shadows the real one in phase space. This
pseudo-Hamiltonian is provably area-conserving as well as time
reversible. The fact that it shadows the true Hamiltonian in phase
space is acceptable in actual simulations as one is interested in the
ensemble average of the observable being measured. From the ergodic
hypothesis (Sec.~\ref{introSec:ergodic}), it is known that the time
average will match the ensemble average, therefore two similar
trajectories in phase space should give matching statistical averages.

\subsection{\label{introSec:MDfurther}Further Considerations}

In the simulations presented in this research, a few additional
parameters are needed to describe the motions. In the simulations
involving water and phospholipids in Ch.~\ref{chapt:lipid}, we are
required to integrate the equations of motion for dipoles on atoms.
This requires that three additional parameters be specified for each
dipole atom: $\phi$, $\theta$, and $\psi$. These three angles are
taken to be the Euler angles, where $\phi$ is a rotation about the
$z$-axis, $\theta$ is a rotation about the new $x$-axis, and
$\psi$ is a final rotation about the new $z$-axis (see
Fig.~\ref{introFig:eulerAngles}). This sequence of rotations can be
accumulated into a single $3 \times 3$ matrix, $\mathsf{A}$,
defined as follows:
\begin{equation}
\mathsf{A} =
\begin{bmatrix}
\cos\phi\cos\psi-\sin\phi\cos\theta\sin\psi &%
\sin\phi\cos\psi+\cos\phi\cos\theta\sin\psi &%
\sin\theta\sin\psi \\%
%
-\cos\phi\sin\psi-\sin\phi\cos\theta\cos\psi &%
-\sin\phi\sin\psi+\cos\phi\cos\theta\cos\psi &%
\sin\theta\cos\psi \\%
%
\sin\phi\sin\theta &%
-\cos\phi\sin\theta &%
\cos\theta
\end{bmatrix}.
\label{introEq:EulerRotMat}
\end{equation}
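
A direct Python transcription of Eq.~\ref{introEq:EulerRotMat} (the
function name is mine) makes it easy to verify that $\mathsf{A}$ is
orthogonal:
\begin{verbatim}
import numpy as np

def euler_rotation_matrix(phi, theta, psi):
    """Build A from the Euler angles: phi about z, then theta about
    the new x, then psi about the new z (Eq. EulerRotMat)."""
    cf, sf = np.cos(phi), np.sin(phi)
    ct, st = np.cos(theta), np.sin(theta)
    cp, sp = np.cos(psi), np.sin(psi)
    return np.array([
        [ cf*cp - sf*ct*sp,  sf*cp + cf*ct*sp, st*sp],
        [-cf*sp - sf*ct*cp, -sf*sp + cf*ct*cp, st*cp],
        [ sf*st,            -cf*st,            ct   ]])

A = euler_rotation_matrix(0.3, 0.8, 1.2)
print(np.allclose(A @ A.T, np.eye(3)))   # True: A is a proper rotation
\end{verbatim}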

\begin{figure}
\centering
\includegraphics[width=\linewidth]{eulerRotFig.eps}
\caption[Euler rotation of Cartesian coordinates]{The rotation scheme for Euler angles. First is a rotation of $\phi$ about the $z$ axis (blue rotation). Next is a rotation of $\theta$ about the new $x$ axis (green rotation). Lastly is a final rotation of $\psi$ about the new $z$ axis (red rotation).}
\label{introFig:eulerAngles}
\end{figure}

The equations of motion for the Euler angles can be written
as\cite{allen87:csl}
\begin{align}
\dot{\phi} &= -\omega^s_x \frac{\sin\phi\cos\theta}{\sin\theta} +
\omega^s_y \frac{\cos\phi\cos\theta}{\sin\theta} +
\omega^s_z,
\label{introEq:MDeulerPhi} \\%
%
\dot{\theta} &= \omega^s_x \cos\phi + \omega^s_y \sin\phi,
\label{introEq:MDeulerTheta} \\%
%
\dot{\psi} &= \omega^s_x \frac{\sin\phi}{\sin\theta} -
\omega^s_y \frac{\cos\phi}{\sin\theta},
\label{introEq:MDeulerPsi}
\end{align}
where $\omega^s_{\alpha}$ is the angular velocity in the space-fixed
(lab) frame along Cartesian coordinate $\alpha$. However, a difficulty
arises when attempting to integrate Eq.~\ref{introEq:MDeulerPhi} and
Eq.~\ref{introEq:MDeulerPsi}. The $\frac{1}{\sin \theta}$ present in
both equations means there is a non-physical instability present when
$\theta$ is 0 or $\pi$. To correct for this, the simulations integrate
the rotation matrix, $\mathsf{A}$, directly, thus avoiding the
instability. This method was proposed by Dullweber
\emph{et al.},\cite{Dullweber1997} and is presented in
Sec.~\ref{introSec:MDsymplecticRot}.

\subsection{\label{introSec:MDliouville}Liouville Propagator}

Before discussing the integration of the rotation matrix, it is
necessary to understand the construction of a ``good'' integration
scheme. It has been previously discussed
(Sec.~\ref{introSec:mdIntegrate}) how it is desirable for an
integrator to be symplectic and time reversible. The following is an
outline of the Trotter factorization of the Liouville Propagator as a
scheme for generating symplectic, time-reversible
integrators.\cite{Tuckerman92}

For a system with $f$ degrees of freedom the Liouville operator can be
defined as
\begin{equation}
iL=\sum^f_{j=1} \biggl [\dot{q}_j\frac{\partial}{\partial q_j} +
F_j\frac{\partial}{\partial p_j} \biggr ].
\label{introEq:LiouvilleOperator}
\end{equation}
Here, $q_j$ and $p_j$ are the position and conjugate momentum of a
degree of freedom, and $F_j$ is the force on that degree of freedom.
$\Gamma$ is defined as the set of all positions and conjugate momenta,
$\{q_j,p_j\}$, and the propagator, $U(t)$, is defined as
\begin{equation}
U(t) = e^{iLt}.
\label{introEq:Lpropagator}
\end{equation}
This allows the specification of $\Gamma$ at any time $t$ as
\begin{equation}
\Gamma(t) = U(t)\Gamma(0).
\label{introEq:Lp2}
\end{equation}
It is important to note that $U(t)$ is a unitary operator, meaning
\begin{equation}
U(-t)=U^{-1}(t).
\label{introEq:Lp3}
\end{equation}

Decomposing $L$ into two parts, $iL_1$ and $iL_2$, one can use the
Trotter theorem to yield
\begin{align}
e^{iLt} &= e^{i(L_1 + L_2)t}, \notag \\%
%
&= \biggl [ e^{i(L_1 +L_2)\frac{t}{P}} \biggr]^P, \notag \\%
%
&= \biggl [ e^{iL_1\frac{\Delta t}{2}}\, e^{iL_2\Delta t}\,
e^{iL_1\frac{\Delta t}{2}} \biggr ]^P +
\mathcal{O}\biggl (\frac{t^3}{P^2} \biggr ), \label{introEq:Lp4}
\end{align}
where $\Delta t = t/P$.
With this, a discrete time operator $G(\Delta t)$ can be defined:
\begin{align}
G(\Delta t) &= e^{iL_1\frac{\Delta t}{2}}\, e^{iL_2\Delta t}\,
e^{iL_1\frac{\Delta t}{2}}, \notag \\%
%
&= U_1 \biggl ( \frac{\Delta t}{2} \biggr )\, U_2 ( \Delta t )\,
U_1 \biggl ( \frac{\Delta t}{2} \biggr ).
\label{introEq:Lp5}
\end{align}
Because $U_1(t)$ and $U_2(t)$ are unitary, $G(\Delta t)$ is also
unitary. This means that an integrator based on this factorization will be
reversible in time.

As an example, consider the following decomposition of $L$:
\begin{align}
iL_1 &= \dot{q}\frac{\partial}{\partial q},%
\label{introEq:Lp6a} \\%
%
iL_2 &= F(q)\frac{\partial}{\partial p}.%
\label{introEq:Lp6b}
\end{align}
This leads to the propagator $G( \Delta t )$,
\begin{equation}
G(\Delta t) = e^{\frac{\Delta t}{2} F(q)\frac{\partial}{\partial p}} \,
e^{\Delta t\,\dot{q}\frac{\partial}{\partial q}} \,
e^{\frac{\Delta t}{2} F(q)\frac{\partial}{\partial p}}.
\label{introEq:Lp7}
\end{equation}
Operating $G(\Delta t)$ on $\Gamma(0)$ and utilizing the operator property
\begin{equation}
e^{c\frac{\partial}{\partial x}}\, f(x) = f(x+c),
\label{introEq:Lp8}
\end{equation}
where $c$ is independent of $x$, one obtains the following:
\begin{align}
\dot{q}\biggl (\frac{\Delta t}{2}\biggr ) &=
\dot{q}(0) + \frac{\Delta t}{2m}\, F[q(0)], \label{introEq:Lp9a}\\%
%
q(\Delta t) &= q(0) + \Delta t\, \dot{q}\biggl (\frac{\Delta t}{2}\biggr ),%
\label{introEq:Lp9b}\\%
%
\dot{q}(\Delta t) &= \dot{q}\biggl (\frac{\Delta t}{2}\biggr ) +
\frac{\Delta t}{2m}\, F[q(\Delta t)]. \label{introEq:Lp9c}
\end{align}
Or written another way,
\begin{align}
q(\Delta t) &= q(0) + \dot{q}(0)\Delta t +
\frac{F[q(0)]}{m}\frac{\Delta t^2}{2}, %
\label{introEq:Lp10a} \\%
%
\dot{q}(\Delta t) &= \dot{q}(0) + \frac{\Delta t}{2m}
\biggl [F[q(0)] + F[q(\Delta t)] \biggr]. %
\label{introEq:Lp10b}
\end{align}
This is the velocity Verlet formulation presented in
Sec.~\ref{introSec:mdIntegrate}. Because this integration scheme is
composed of unitary propagators, it is symplectic, and therefore area
preserving in phase space. From the preceding factorization, one can
see that the integration of the equations of motion would follow:
\begin{enumerate}
\item Calculate the velocities at the half step, $\frac{\Delta t}{2}$, from the forces calculated at the initial position.
\item Use the half step velocities to move positions one whole step, $\Delta t$.
\item Evaluate the forces at the new positions, $q(\Delta t)$, and use the new forces to complete the velocity move.
\item Repeat from step 1 with the new position, velocities, and forces assuming the roles of the initial values.
\end{enumerate}
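
These four steps translate directly into code. The following Python
sketch, a toy harmonic oscillator rather than the production integrator
used in this research, verifies that the scheme retraces one full
period with very little error:
\begin{verbatim}
import numpy as np

def velocity_verlet(q, v, force, m, dt, n_steps):
    """Velocity Verlet: half-step velocity, full-step position,
    new forces, then complete the velocity move."""
    F = force(q)
    for _ in range(n_steps):
        v_half = v + 0.5 * dt * F / m   # step 1: half-step velocities
        q = q + dt * v_half             # step 2: full-step positions
        F = force(q)                    # step 3: forces at new positions
        v = v_half + 0.5 * dt * F / m   #         complete velocity move
    return q, v

# Harmonic oscillator (F = -q, m = 1) has period 2*pi; after one
# period the trajectory should return near its starting point.
steps = round(2.0 * np.pi / 0.01)
q, v = velocity_verlet(np.array([1.0]), np.array([0.0]),
                       lambda q: -q, m=1.0, dt=0.01, n_steps=steps)
print(q, v)   # ~[1.0], ~[0.0]
\end{verbatim}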

\subsection{\label{introSec:MDsymplecticRot}Symplectic Propagation of the Rotation Matrix}

Based on the factorization from the previous section, Dullweber
\emph{et al.}\cite{Dullweber1997} proposed a scheme for the
symplectic propagation of the rotation matrix, $\mathsf{A}$, as an
alternative method for the integration of orientational degrees of
freedom. The method starts with a straightforward splitting of the
Liouville operator:
\begin{align}
iL_{\text{pos}} &= \dot{q}\frac{\partial}{\partial q} +
\mathsf{\dot{A}}\frac{\partial}{\partial \mathsf{A}} ,
\label{introEq:SR1a} \\%
%
iL_F &= F(q)\frac{\partial}{\partial p},
\label{introEq:SR1b} \\%
%
iL_{\tau} &= \tau(\mathsf{A})\frac{\partial}{\partial j},
\label{introEq:SR1c}
\end{align}
where $\tau(\mathsf{A})$ is the torque on the system
due to the configuration, and $j$ is the conjugate
angular momentum of the system. The propagator, $G(\Delta t)$, becomes
\begin{equation}
G(\Delta t) = e^{\frac{\Delta t}{2} F(q)\frac{\partial}{\partial p}} \,
e^{\frac{\Delta t}{2} \tau(\mathsf{A})\frac{\partial}{\partial j}} \,
e^{\Delta t\,iL_{\text{pos}}} \,
e^{\frac{\Delta t}{2} \tau(\mathsf{A})\frac{\partial}{\partial j}} \,
e^{\frac{\Delta t}{2} F(q)\frac{\partial}{\partial p}}.
\label{introEq:SR2}
\end{equation}
Propagation of the linear and angular momenta follows as in the Verlet
scheme. The propagation of positions also follows the Verlet scheme
with the addition of a further symplectic splitting of the rotation
matrix propagation, $\mathcal{U}_{\text{rot}}(\Delta t)$, within
$U_{\text{pos}}(\Delta t)$:
\begin{equation}
\mathcal{U}_{\text{rot}}(\Delta t) =
\mathcal{U}_x \biggl(\frac{\Delta t}{2}\biggr)\,
\mathcal{U}_y \biggl(\frac{\Delta t}{2}\biggr)\,
\mathcal{U}_z (\Delta t)\,
\mathcal{U}_y \biggl(\frac{\Delta t}{2}\biggr)\,
\mathcal{U}_x \biggl(\frac{\Delta t}{2}\biggr),
\label{introEq:SR3}
\end{equation}
where $\mathcal{U}_{\alpha}$ is a unitary rotation of $\mathsf{A}$ and
$j$ about each axis $\alpha$. As all propagations are now
unitary and symplectic, the entire integration scheme is also
symplectic and time reversible.

\section{\label{introSec:layout}Dissertation Layout}

This dissertation is divided as follows: Ch.~\ref{chapt:RSA}
presents the random sequential adsorption simulations of related
phthalocyanines on a gold (111) surface. Ch.~\ref{chapt:oopse}
describes our molecular dynamics simulation package
{\sc oopse}. Ch.~\ref{chapt:lipid} presents the simulations of
phospholipid bilayers using a mesoscale model. Lastly,
Ch.~\ref{chapt:conclusion} concludes this dissertation with a
summary of all results. The chapters are arranged in chronological
order, and reflect the progression of techniques I employed during my
research.

The chapter concerning random sequential adsorption simulations is a
study in applying Statistical Mechanics simulation techniques in order
to obtain a simple model capable of explaining the results. My
advisor, Dr. Gezelter, and I were approached by a colleague,
Dr. Lieberman, about possible explanations for the partial coverage of
a gold surface by a particular compound synthesized in her group. We
suggested it might be due to the statistical packing fraction of disks
on a plane, and set about simulating this system. As the events in
our model were not dynamic in nature, a Monte Carlo method was
employed. Here, if a molecule landed on the surface without
overlapping another, its landing was accepted. However, if there
was overlap, the landing was rejected and a new random landing location
was chosen. This defined our acceptance rules and allowed us to
construct a Markov chain whose limiting distribution was the surface
coverage in which we were interested.

The following chapter, about the simulation package {\sc oopse},
describes in detail the large body of scientific code that had to be
written in order to study phospholipid bilayers. Although there are
pre-existing molecular dynamics simulation packages available, none
were capable of implementing the models we were developing. {\sc
oopse} is a unique package capable not only of integrating the
equations of motion in Cartesian space, but also of integrating
the rotational motion of rigid bodies and dipoles. It also has the
ability to perform calculations across distributed parallel processors
and contains a flexible script syntax for creating systems. {\sc
oopse} has become a very powerful scientific instrument for the
exploration of our model.

In Ch.~\ref{chapt:lipid}, utilizing {\sc oopse}, I have been
able to parameterize a mesoscale model for phospholipid simulations.
This model retains information about solvent ordering around the
bilayer, as well as information regarding the interaction of the
phospholipid head groups' dipoles with each other and the surrounding
solvent. These simulations give us insight into the dynamic events
that lead to the formation of phospholipid bilayers, as well as
provide the foundation for future exploration of bilayer phase
behavior with this model.

In the last chapter, I discuss future directions
for both {\sc oopse} and this mesoscale model. Additionally, I will
give a summary of the results found in this dissertation.