\chapter{\label{chapt:intro}INTRODUCTION AND THEORETICAL BACKGROUND}
The techniques used in the course of this research fall under the two
main classes of molecular simulation: Molecular Dynamics and Monte
Carlo. Molecular Dynamics simulations integrate the equations of
motion for a given system of particles, allowing the researcher to
gain insight into the time-dependent evolution of a system. Diffusion
phenomena are readily studied with this simulation technique, making
Molecular Dynamics the main simulation technique used in this
research. Other aspects of the research fall under the Monte Carlo
class of simulations. In Monte Carlo, the configuration space
available to the collection of particles is sampled stochastically, or
randomly. Each configuration is chosen with a given probability based
on the Maxwell-Boltzmann distribution. These types of simulations are
best used to probe properties of a system that depend only on the
state of the system. Structural information about a system is most
readily obtained through these types of methods.
Although the two techniques employed seem dissimilar, they are both
linked by the overarching principles of Statistical
Mechanics. Statistical Mechanics governs the behavior of
both classes of simulations and dictates what each method can and
cannot do. When investigating a system, one must first analyze what
thermodynamic properties of the system are being probed, then choose
which method best suits that objective.
\section{\label{introSec:statThermo}Statistical Mechanics}
The following section serves as a brief introduction to some of the
Statistical Mechanics concepts present in this dissertation. What
follows is a brief derivation of Boltzmann weighted statistics and an
explanation of how one can use this information to calculate an
observable for a system. This section then concludes with a brief
discussion of the ergodic hypothesis and its relevance to this
research.
\subsection{\label{introSec:boltzman}Boltzmann Weighted Statistics}
Consider a system, $\gamma$, with total energy $E_{\gamma}$. Let
$\Omega(E_{\gamma})$ represent the number of degenerate ways
$\boldsymbol{\Gamma}\{r_1,r_2,\ldots r_n,p_1,p_2,\ldots p_n\}$, the
collection of positions and conjugate momenta of system $\gamma$, can
be configured to give $E_{\gamma}$. Further, if $\gamma$ is a subset
of a larger system, $\boldsymbol{\Lambda}\{E_1,E_2,\ldots
E_{\gamma},\ldots E_n\}$, the total degeneracy of the system can be
expressed as,
\begin{equation}
\Omega(\boldsymbol{\Lambda}) = \Omega(E_1) \times \Omega(E_2) \times
\ldots \times \Omega(E_{\gamma}) \times \ldots \times \Omega(E_n)
\label{introEq:SM0.1}
\end{equation}
This multiplicative combination of degeneracies is illustrated in
Fig.~\ref{introFig:degenProd}.

\begin{figure}
\centering
\includegraphics[width=\linewidth]{omegaFig.eps}
\caption[An explanation of the combination of degeneracies]{Systems A and B both have three energy levels and two indistinguishable particles. When the total energy is 2, there are two ways for each system to disperse the energy. However, for system C, the superset of A and B, the total degeneracy is the product of the degeneracy of each system. In this case $\Omega(\text{C})$ is 4.}
\label{introFig:degenProd}
\end{figure}

Next, consider the specific case of $\gamma$ in contact with a
bath. Exchange of energy is allowed between the bath and the system,
subject to the constraint that the total energy
($E_{\text{bath}}+E_{\gamma}$) remain constant. $\Omega(E)$, where $E$
is the total energy of both systems, can be represented as
\begin{equation}
\Omega(E) = \Omega(E_{\gamma}) \times \Omega(E - E_{\gamma})
\label{introEq:SM1}
\end{equation}
Or additively as
\begin{equation}
\ln \Omega(E) = \ln \Omega(E_{\gamma}) + \ln \Omega(E - E_{\gamma})
\label{introEq:SM2}
\end{equation}
The most probable partitioning of energy is the one that maximizes
the number of degenerate configurations at total energy $E$ in
Eq.~\ref{introEq:SM2}.\cite{Frenkel1996} Setting the derivative with
respect to $E_{\gamma}$ to zero gives
\begin{equation}
\frac{\partial \ln \Omega(E)}{\partial E_{\gamma}} = 0 =
\frac{\partial \ln \Omega(E_{\gamma})}{\partial E_{\gamma}}
+
\frac{\partial \ln \Omega(E_{\text{bath}})}{\partial E_{\text{bath}}}
\frac{\partial E_{\text{bath}}}{\partial E_{\gamma}}
\label{introEq:SM3}
\end{equation}
Where $E_{\text{bath}}$ is $E-E_{\gamma}$, and
$\frac{\partial E_{\text{bath}}}{\partial E_{\gamma}}$ is
$-1$. Eq.~\ref{introEq:SM3} becomes
\begin{equation}
\frac{\partial \ln \Omega(E_{\gamma})}{\partial E_{\gamma}} =
\frac{\partial \ln \Omega(E_{\text{bath}})}{\partial E_{\text{bath}}}
\label{introEq:SM4}
\end{equation}

At this point, one can draw a relationship between the maximization of
degeneracy in Eq.~\ref{introEq:SM3} and the second law of
thermodynamics. Namely, that for a closed system, entropy will
increase for an irreversible process.\cite{chandler:1987} Here the
process is the partitioning of energy among the two systems. This
allows the following definition of entropy:
\begin{equation}
S = k_B \ln \Omega(E)
\label{introEq:SM5}
\end{equation}
Where $k_B$ is the Boltzmann constant. Having defined entropy, one can
also define the temperature of the system using the relation
\begin{equation}
\frac{1}{T} = \biggl ( \frac{\partial S}{\partial E} \biggr )_{N,V}
\label{introEq:SM6}
\end{equation}
The temperature in the system $\gamma$ is then
\begin{equation}
\beta( E_{\gamma} ) = \frac{1}{k_B T} =
\frac{\partial \ln \Omega(E_{\gamma})}{\partial E_{\gamma}}
\label{introEq:SM7}
\end{equation}
Applying this to Eq.~\ref{introEq:SM4} gives the following
\begin{equation}
\beta( E_{\gamma} ) = \beta( E_{\text{bath}} )
\label{introEq:SM8}
\end{equation}
Showing that the partitioning of energy between the two systems is
actually a process of thermal equilibration.\cite{Frenkel1996}

An application of these results is to formulate the form of an
expectation value of an observable, $A$, in the canonical ensemble. In
the canonical ensemble, the number of particles, $N$, the volume, $V$,
and the temperature, $T$, are all held constant while the energy, $E$,
is allowed to fluctuate. Returning to the previous example, the bath
system is now an infinitely large thermal bath, whose exchange of
energy with the system $\gamma$ holds the temperature constant. The
partitioning of energy in the bath system is then related to the total
energy of both systems and the fluctuations in $E_{\gamma}$:
\begin{equation}
\Omega( E_{\text{bath}} ) = \Omega( E - E_{\gamma} )
\label{introEq:SM9}
\end{equation}
As for the expectation value, it can be defined
\begin{equation}
\langle A \rangle =
\int\limits_{\boldsymbol{\Gamma}} d\boldsymbol{\Gamma}
P_{\gamma} A(\boldsymbol{\Gamma})
\label{introEq:SM10}
\end{equation}
Where $\int\limits_{\boldsymbol{\Gamma}} d\boldsymbol{\Gamma}$ denotes
an integration over all accessible phase space, $P_{\gamma}$ is the
probability of being in a given phase state and
$A(\boldsymbol{\Gamma})$ is some observable that is a function of the
phase state.

Because entropy seeks to maximize the number of degenerate states at a
given energy, the probability of being in a particular state in
$\gamma$ will be directly proportional to the number of allowable
states the coupled system is able to assume. Namely,
\begin{equation}
P_{\gamma} \propto \Omega( E_{\text{bath}} ) =
e^{\ln \Omega( E - E_{\gamma})}
\label{introEq:SM11}
\end{equation}
With $E_{\gamma} \ll E$, $\ln \Omega$ can be expanded in a Taylor series:
\begin{equation}
\ln \Omega ( E - E_{\gamma}) =
\ln \Omega (E) -
E_{\gamma} \frac{\partial \ln \Omega }{\partial E}
+ \ldots
\label{introEq:SM12}
\end{equation}
Higher order terms are omitted as $E$ is an infinite thermal
bath. Further, using Eq.~\ref{introEq:SM7}, Eq.~\ref{introEq:SM11} can
be rewritten:
\begin{equation}
P_{\gamma} \propto e^{-\beta E_{\gamma}}
\label{introEq:SM13}
\end{equation}
Where $e^{\ln \Omega(E)}$ has been factored out of the proportionality
as a constant. Normalizing the probability ($\int\limits_{\boldsymbol{\Gamma}}
d\boldsymbol{\Gamma} P_{\gamma} = 1$) gives
\begin{equation}
P_{\gamma} = \frac{e^{-\beta E_{\gamma}}}
{\int\limits_{\boldsymbol{\Gamma}} d\boldsymbol{\Gamma} e^{-\beta E_{\gamma}}}
\label{introEq:SM14}
\end{equation}
This result is the standard Boltzmann statistical distribution.
Applying it to Eq.~\ref{introEq:SM10}, one obtains the following
relation for ensemble averages:
\begin{equation}
\langle A \rangle =
\frac{\int\limits_{\boldsymbol{\Gamma}} d\boldsymbol{\Gamma}
A( \boldsymbol{\Gamma} ) e^{-\beta E_{\gamma}}}
{\int\limits_{\boldsymbol{\Gamma}} d\boldsymbol{\Gamma} e^{-\beta E_{\gamma}}}
\label{introEq:SM15}
\end{equation}
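
As a concrete illustration of Eq.~\ref{introEq:SM15}, the following
minimal sketch (an addition for illustration only, with hypothetical
energies and observable values) estimates a Boltzmann-weighted average
over a discrete set of states:
\begin{verbatim}
import numpy as np

# Hypothetical discrete states: energy E[i] and observable A[i].
E = np.array([0.0, 0.5, 1.0, 2.0])   # energies (arbitrary units)
A = np.array([1.0, 0.8, 0.5, 0.1])   # observable in each state
beta = 1.0                           # 1/(k_B T)

w = np.exp(-beta * E)                # Boltzmann weights (Eq. SM14)
print(np.sum(A * w) / np.sum(w))     # weighted average (Eq. SM15)
\end{verbatim}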

\subsection{\label{introSec:ergodic}The Ergodic Hypothesis}

In the case of a Molecular Dynamics simulation, rather than
calculating an ensemble average integral over phase space as in
Eq.~\ref{introEq:SM15}, it becomes easier to calculate the time
average of an observable. Namely,
\begin{equation}
\langle A \rangle_t = \frac{1}{\tau}
\int_0^{\tau} A[\boldsymbol{\Gamma}(t)]\,dt
\label{introEq:SM16}
\end{equation}
Where the value of an observable is averaged over the length of time
that the simulation is run. This type of measurement mirrors the
experimental measurement of an observable. In an experiment, the
instrument analyzing the system must average its observation over the
finite time of the measurement. What is required, then, is a principle
to relate the time average to the ensemble average. This is the
ergodic hypothesis.

Ergodicity is the assumption that given an infinite amount of time, a
system will visit every available point in phase
space.\cite{Frenkel1996} For most systems, this is a valid assumption,
except in cases where the system may be trapped in a local region of
phase space (\emph{e.g.}~glasses). When valid, ergodicity allows the
unification of a time averaged observation and an ensemble averaged
one. If an observation is averaged over a sufficiently long time, the
system is assumed to visit all appropriately available points in phase
space, giving a properly weighted statistical average. This allows the
researcher freedom of choice when deciding how best to measure a given
observable. When an ensemble averaged approach seems most logical,
the Monte Carlo techniques described in Sec.~\ref{introSec:monteCarlo}
can be utilized. Conversely, if a problem lends itself to a time
averaging approach, the Molecular Dynamics techniques in
Sec.~\ref{introSec:MD} can be employed.

\section{\label{introSec:monteCarlo}Monte Carlo Simulations}

The Monte Carlo method was developed by Metropolis and Ulam for their
work on fissionable material.\cite{metropolis:1949} The method is so
named because of its heavy use of random numbers in its
solution.\cite{allen87:csl} The Monte Carlo method allows for the
solution of integrals through the stochastic sampling of the values
within the integral. In the simplest case, the evaluation of an
integral would follow a brute force method of
sampling.\cite{Frenkel1996} Consider the following one-dimensional
integral:
\begin{equation}
I = \int^b_a f(x)\,dx
\label{eq:MCex1}
\end{equation}
The equation can be recast as:
\begin{equation}
I = \int^b_a \frac{f(x)}{\rho(x)} \rho(x) dx
\label{introEq:Importance1}
\end{equation}
Where $\rho(x)$ is an arbitrary probability distribution in $x$. If
one conducts $\tau$ trials selecting a random number, $\zeta_\tau$,
from the distribution $\rho(x)$ on the interval $[a,b]$, then
Eq.~\ref{introEq:Importance1} becomes
\begin{equation}
I= \lim_{\tau \rightarrow \infty}\biggl \langle \frac{f(x)}{\rho(x)} \biggr \rangle_{\text{trials}[a,b]}
\label{introEq:Importance2}
\end{equation}
If $\rho(x)$ is uniformly distributed over the interval $[a,b]$,
\begin{equation}
\rho(x) = \frac{1}{b-a}
\label{introEq:importance2b}
\end{equation}
then the integral can be rewritten as
\begin{equation}
I = (b-a)\lim_{\tau \rightarrow \infty}
\langle f(x) \rangle_{\text{trials}[a,b]}
\label{eq:MCex2}
\end{equation}
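
A minimal sketch of this brute force approach, assuming the uniform
$\rho(x)$ of Eq.~\ref{eq:MCex2} (the integrand and interval below are
arbitrary stand-ins):
\begin{verbatim}
import numpy as np

def mc_integrate(f, a, b, n_trials=100000, seed=42):
    """Estimate I = int_a^b f(x) dx by uniform sampling (Eq. MCex2)."""
    rng = np.random.default_rng(seed)
    x = rng.uniform(a, b, n_trials)   # zeta drawn from rho(x) = 1/(b-a)
    return (b - a) * np.mean(f(x))    # (b-a) <f(x)>_trials

# Example: int_0^pi sin(x) dx = 2
print(mc_integrate(np.sin, 0.0, np.pi))
\end{verbatim}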

However, in Statistical Mechanics, one is typically interested in
integrals of the form:
\begin{equation}
\langle A \rangle = \frac{\int d^N \mathbf{r}~A(\mathbf{r}^N)%
e^{-\beta V(\mathbf{r}^N)}}%
{\int d^N \mathbf{r}~e^{-\beta V(\mathbf{r}^N)}}
\label{eq:mcEnsAvg}
\end{equation}
Where $\mathbf{r}^N$ stands for the coordinates of all $N$ particles
and $A$ is some observable that is only dependent on position. This is
the ensemble average of $A$ as presented in
Sec.~\ref{introSec:statThermo}, except here $A$ is independent of
momentum. Therefore the momentum contribution of the integral can be
factored out, leaving the configurational integral. Application of the
brute force method to this system would yield highly inefficient
results. Due to the Boltzmann weighting of this integral, most random
configurations will have a near-zero contribution to the ensemble
average. This is where importance sampling comes into
play.\cite{allen87:csl}

Importance sampling is a method in which the distribution from which
the random configurations are chosen is selected in such a way as to
efficiently sample the integral in question. Looking at
Eq.~\ref{eq:mcEnsAvg}, and realizing
\begin{equation}
\rho_{kT}(\mathbf{r}^N) =
\frac{e^{-\beta V(\mathbf{r}^N)}}
{\int d^N \mathbf{r}~e^{-\beta V(\mathbf{r}^N)}}
\label{introEq:MCboltzman}
\end{equation}
Where $\rho_{kT}$ is the Boltzmann distribution, the ensemble average
can be rewritten as
\begin{equation}
\langle A \rangle = \int d^N \mathbf{r}~A(\mathbf{r}^N)
\rho_{kT}(\mathbf{r}^N)
\label{introEq:Importance3}
\end{equation}
Applying Eq.~\ref{introEq:Importance1} one obtains
\begin{equation}
\langle A \rangle = \biggl \langle
\frac{ A \rho_{kT}(\mathbf{r}^N) }
{\rho(\mathbf{r}^N)} \biggr \rangle_{\text{trials}}
\label{introEq:Importance4}
\end{equation}
By selecting $\rho(\mathbf{r}^N)$ to be $\rho_{kT}(\mathbf{r}^N)$,
Eq.~\ref{introEq:Importance4} becomes
\begin{equation}
\langle A \rangle = \langle A(\mathbf{r}^N) \rangle_{kT}
\label{introEq:Importance5}
\end{equation}
The difficulty is selecting points $\mathbf{r}^N$ such that they are
sampled from the distribution $\rho_{kT}(\mathbf{r}^N)$. A solution
was proposed by Metropolis \emph{et al.},\cite{metropolis:1953} which
involved the use of a Markov chain whose limiting distribution was
$\rho_{kT}(\mathbf{r}^N)$.

\subsection{\label{introSec:markovChains}Markov Chains}

A Markov chain is a chain of states satisfying the following
conditions:\cite{leach01:mm}
\begin{enumerate}
\item The outcome of each trial depends only on the outcome of the previous trial.
\item Each trial belongs to a finite set of outcomes called the state space.
\end{enumerate}
Given two configurations, $\mathbf{r}^N_m$ and $\mathbf{r}^N_n$, let
$\rho_m$ and $\rho_n$ be the probabilities of being in state
$\mathbf{r}^N_m$ and $\mathbf{r}^N_n$ respectively. Further, the two
states are linked by a transition probability, $\pi_{mn}$, which is the
probability of going from state $m$ to state $n$.

\newcommand{\accMe}{\operatorname{acc}}

The transition probability is given by the following:
\begin{equation}
\pi_{mn} = \alpha_{mn} \times \accMe(m \rightarrow n)
\label{introEq:MCpi}
\end{equation}
Where $\alpha_{mn}$ is the probability of attempting the move $m
\rightarrow n$, and $\accMe$ is the probability of accepting the move
$m \rightarrow n$. Defining a probability vector,
$\boldsymbol{\rho}$, such that
\begin{equation}
\boldsymbol{\rho} = \{\rho_1, \rho_2, \ldots \rho_m, \rho_n,
\ldots \rho_N \}
\label{introEq:MCrhoVector}
\end{equation}
a transition matrix $\boldsymbol{\Pi}$ can be defined,
whose elements are $\pi_{mn}$, for each given transition. The
limiting distribution of the Markov chain can then be found by
applying the transition matrix an infinite number of times to the
distribution vector,
\begin{equation}
\boldsymbol{\rho}_{\text{limit}} =
\lim_{n \rightarrow \infty} \boldsymbol{\rho}_{\text{initial}}
\boldsymbol{\Pi}^n
\label{introEq:MCmarkovLimit}
\end{equation}
The limiting distribution of the chain is independent of the starting
distribution, and successive applications of the transition matrix
will only yield the limiting distribution again,
\begin{equation}
\boldsymbol{\rho}_{\text{limit}} = \boldsymbol{\rho}_{\text{limit}}
\boldsymbol{\Pi}
\label{introEq:MCmarkovEquil}
\end{equation}
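
This limiting behavior can be demonstrated numerically. The sketch
below uses a small, hypothetical three-state transition matrix and
applies $\boldsymbol{\Pi}$ repeatedly to an arbitrary starting
distribution:
\begin{verbatim}
import numpy as np

# Hypothetical 3-state transition matrix; row m holds the
# probabilities pi_mn of moving from state m to each state n.
Pi = np.array([[0.5, 0.3, 0.2],
               [0.2, 0.6, 0.2],
               [0.1, 0.3, 0.6]])

rho = np.array([1.0, 0.0, 0.0])   # arbitrary initial distribution
for _ in range(1000):             # repeated application (Eq. MCmarkovLimit)
    rho = rho @ Pi

print(rho)        # the limiting distribution
print(rho @ Pi)   # unchanged by further application (Eq. MCmarkovEquil)
\end{verbatim}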

\subsection{\label{introSec:metropolisMethod}The Metropolis Method}

In the Metropolis method\cite{metropolis:1953}
Eq.~\ref{introEq:MCmarkovEquil} is solved such that
$\boldsymbol{\rho}_{\text{limit}}$ matches the Boltzmann distribution
of states. The method accomplishes this by imposing the strong
condition of microscopic reversibility on the equilibrium
distribution. That is, at equilibrium the probability of going
from $m$ to $n$ is the same as going from $n$ to $m$,
\begin{equation}
\rho_m\pi_{mn} = \rho_n\pi_{nm}
\label{introEq:MCmicroReverse}
\end{equation}
Further, $\boldsymbol{\alpha}$ is chosen to be a symmetric matrix in
the Metropolis method. Using Eq.~\ref{introEq:MCpi},
Eq.~\ref{introEq:MCmicroReverse} becomes
\begin{equation}
\frac{\accMe(m \rightarrow n)}{\accMe(n \rightarrow m)} =
\frac{\rho_n}{\rho_m}
\label{introEq:MCmicro2}
\end{equation}
For a Boltzmann limiting distribution,
\begin{equation}
\frac{\rho_n}{\rho_m} = e^{-\beta[\mathcal{U}(n) - \mathcal{U}(m)]}
= e^{-\beta \Delta \mathcal{U}}
\label{introEq:MCmicro3}
\end{equation}
This allows the following set of acceptance rules to be defined:
\begin{equation}
\accMe( m \rightarrow n ) =
\begin{cases}
1& \text{if $\Delta \mathcal{U} \leq 0$,} \\
e^{-\beta \Delta \mathcal{U}}& \text{if $\Delta \mathcal{U} > 0$.}
\end{cases}
\label{introEq:accRules}
\end{equation}

Using the acceptance criteria from Eq.~\ref{introEq:accRules}, the
Metropolis method proceeds as follows:
\begin{enumerate}
\item Generate an initial configuration $\mathbf{r}^N$ which has some finite probability in $\rho_{kT}$.
\item Modify $\mathbf{r}^N$ to generate configuration $\mathbf{r^{\prime}}^N$.
\item If the new configuration lowers the energy of the system, accept the move with probability 1 ($\mathbf{r}^N$ becomes $\mathbf{r^{\prime}}^N$). Otherwise accept with probability $e^{-\beta \Delta \mathcal{U}}$.
\item Accumulate the average for the configurational observable of interest.
\item Repeat from step 2 until the average converges.
\end{enumerate}
One important note is that the average is accumulated whether the move
is accepted or not; this ensures proper weighting of the average.
Using Eq.~\ref{introEq:Importance4}, it becomes clear that the
accumulated averages are the ensemble averages, as this method ensures
that the limiting distribution is the Boltzmann distribution.
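
A minimal sketch of this procedure for a single coordinate in a
hypothetical harmonic potential (the potential, step size, and
temperature are illustrative choices only, not parameters from this
research):
\begin{verbatim}
import numpy as np

def metropolis(u, x0, beta, step, n_steps, seed=1):
    """Sample x from rho ~ exp(-beta*u(x)) via Eq. accRules."""
    rng = np.random.default_rng(seed)
    x, samples = x0, []
    for _ in range(n_steps):
        x_new = x + rng.uniform(-step, step)  # symmetric attempt
        dU = u(x_new) - u(x)
        if dU <= 0 or rng.random() < np.exp(-beta * dU):
            x = x_new                         # accept the move
        samples.append(x)                     # accumulate regardless
    return np.array(samples)

s = metropolis(u=lambda x: 0.5 * x**2, x0=0.0, beta=1.0,
               step=1.0, n_steps=100000)
print(s.mean(), s.var())   # ~0 and ~1/beta for this potential
\end{verbatim}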

\section{\label{introSec:MD}Molecular Dynamics Simulations}

The main simulation tool used in this research is Molecular Dynamics.
In Molecular Dynamics, the equations of motion for a system are
integrated in order to obtain information about both the positions and
momenta of a system, allowing the calculation of not only
configurational observables, but momenta-dependent ones as well:
diffusion constants, relaxation events, folding/unfolding
events, etc. With the time-dependent information gained from a
Molecular Dynamics simulation, one can also calculate time correlation
functions of the form\cite{Hansen86}
\begin{equation}
\langle A(t)\,A(0)\rangle = \lim_{\tau\rightarrow\infty} \frac{1}{\tau}
\int_0^{\tau} A(t+t^{\prime})\,A(t^{\prime})\,dt^{\prime}
\label{introEq:timeCorr}
\end{equation}
These correlations can be used to measure fundamental time constants
of a system, such as diffusion constants from the velocity
autocorrelation or dipole relaxation times from the dipole
autocorrelation. Due to the principle of ergodicity,
Sec.~\ref{introSec:ergodic}, the averages of these observables over the
time period of the simulation are taken to be the ensemble averages
for the system.
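
For a discretely sampled trajectory, Eq.~\ref{introEq:timeCorr}
reduces to an average over time origins. A sketch follows; the
trajectory used here is a synthetic placeholder, not simulation data:
\begin{verbatim}
import numpy as np

def time_correlation(a, max_lag):
    """Estimate <A(t)A(0)> from samples a[0..T-1] by averaging
    a[t'+lag]*a[t'] over all available time origins t'."""
    n = len(a)
    return np.array([np.mean(a[lag:] * a[:n - lag])
                     for lag in range(max_lag)])

# Synthetic example: an exponentially correlated signal.
rng = np.random.default_rng(0)
a = np.zeros(10000)
for t in range(1, len(a)):
    a[t] = 0.95 * a[t - 1] + rng.normal()
print(time_correlation(a, 5))
\end{verbatim}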

The choice of when to use Molecular Dynamics over Monte Carlo
techniques is normally decided by the observables in which the
researcher is interested. If the observables depend on time in
any fashion, then the only choice is Molecular Dynamics in some form.
However, when the observable is dependent only on the configuration,
then for most small systems, Monte Carlo techniques will be more
efficient.

The focus of research in the second half of this dissertation is
centered around the dynamic properties of phospholipid bilayers,
making Molecular Dynamics key in the simulation of those properties.

\subsection{\label{introSec:mdAlgorithm}The Molecular Dynamics Algorithm}

To illustrate how the molecular dynamics technique is applied, the
following sections will describe the sequence involved in a
simulation. Sec.~\ref{introSec:mdInit} deals with the initialization
of a simulation. Sec.~\ref{introSec:mdForce} discusses issues
involved with the calculation of the forces.
Sec.~\ref{introSec:mdIntegrate} concludes the algorithm discussion
with the integration of the equations of motion.\cite{Frenkel1996}

\subsection{\label{introSec:mdInit}Initialization}

When selecting the initial configuration for the simulation, it is
important to consider what dynamics one is hoping to observe.
Ch.~\ref{chapt:lipid} deals with the formation and equilibrium dynamics of
phospholipid membranes. Therefore in these simulations initial
positions were selected that in some cases dispersed the lipids in
water, and in other cases structured the lipids into preformed
bilayers. Important considerations at this stage of the simulation are:
\begin{itemize}
\item There are no major overlaps of molecular or atomic orbitals.
\item Velocities are chosen so as not to give the system a net total momentum or angular momentum.
\item It is also sometimes desirable to select the velocities to correctly sample the target temperature.
\end{itemize}

The first point is important due to the amount of potential energy
generated by having two particles too close together. If overlap
occurs, the first evaluation of forces will return numbers so large as
to render the numerical integration of the motion meaningless. The
second consideration keeps the system from drifting or rotating as a
whole. This arises from the fact that most simulations are of systems
in equilibrium in the absence of outside forces. Therefore any net
movement would be unphysical and an artifact of the simulation method
used. The final point addresses the selection of the magnitude of the
initial velocities. For many simulations it is convenient to use
this opportunity to scale the amount of kinetic energy to reflect the
desired thermal distribution of the system. However, it must be noted
that most systems will require further velocity rescaling after the
first few initial simulation steps due to either loss or gain of
kinetic energy from energy stored in potential degrees of freedom.
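
A sketch of such a velocity initialization for point particles of
equal mass, in reduced units with $k_B = 1$ (removal of net angular
momentum is omitted here for brevity):
\begin{verbatim}
import numpy as np

def init_velocities(n, mass, target_T, seed=7):
    """Maxwell-Boltzmann draw, zero net momentum, rescaled to target_T."""
    rng = np.random.default_rng(seed)
    v = rng.normal(0.0, np.sqrt(target_T / mass), size=(n, 3))
    v -= v.mean(axis=0)                    # remove center-of-mass drift
    ke = 0.5 * mass * np.sum(v**2)
    T_now = 2.0 * ke / (3.0 * n)           # equipartition: KE = (3/2) N k_B T
    return v * np.sqrt(target_T / T_now)   # rescale to desired temperature

v = init_velocities(n=1000, mass=1.0, target_T=1.5)
print(v.sum(axis=0))   # ~zero total momentum
\end{verbatim}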

\subsection{\label{introSec:mdForce}Force Evaluation}

The evaluation of forces is the most computationally expensive portion
of a given molecular dynamics simulation. This is due entirely to the
evaluation of long range forces in a simulation, typically pair-wise.
These forces are most commonly the van der Waals force, and sometimes
Coulombic forces as well. For a pair-wise force, there are $N(N-1)/2$
pairs to be evaluated, where $N$ is the number of particles in the
system. This leads to the calculations scaling as $N^2$, making large
simulations prohibitive in the absence of any computation-saving
techniques.

Another consideration one must resolve is that in a given simulation
a disproportionate number of the particles will feel the effects of
the surface.\cite{allen87:csl} For a cubic system of 1000 particles
arranged in a $10 \times 10 \times 10$ cube, 488 particles will be
exposed to the surface. Unless one is simulating an isolated particle
group in a vacuum, the behavior of the system will be far from the
desired bulk characteristics. To offset this, simulations employ the
use of periodic boundary images.\cite{born:1912}

The technique involves the use of an algorithm that replicates the
simulation box on an infinite lattice in Cartesian space. Any given
particle leaving the simulation box on one side will have an image of
itself enter on the opposite side (see Fig.~\ref{introFig:pbc}). In
addition, this ensures that any two particles have an image, real or
periodic, within $\text{box}/2$ of each other. A discussion of the
method used to calculate the periodic image can be found in
Sec.~\ref{oopseSec:pbc}.

\begin{figure}
\centering
\includegraphics[width=\linewidth]{pbcFig.eps}
\caption[An illustration of periodic boundary conditions]{A 2-D illustration of periodic boundary conditions. As one particle leaves the right of the simulation box, an image of it enters the left.}
\label{introFig:pbc}
\end{figure}

Returning to the topic of the computational scale of the force
evaluation, the use of periodic boundary conditions requires that a
cutoff radius be employed. Using a cutoff radius improves the
efficiency of the force evaluation, as particles farther than a
predetermined distance, $r_{\text{cut}}$, are not included in the
calculation.\cite{Frenkel1996} In a simulation with periodic images,
there are two methods to choose from, both with their own cutoff
limits. In the minimum image convention, $r_{\text{cut}}$ has a
maximum value of $\text{box}/2$. This is because each atom has only
one image that is seen by another atom, and further the image used is
the one that minimizes the distance between the two atoms. A system of
wrapped images about a central atom therefore has a maximum length
scale of box on a side (Fig.~\ref{introFig:rMaxMin}). The second
convention, the multiple image convention, has a maximum $r_{\text{cut}}$
of box. Here multiple images of each atom are replicated in the
periodic cells surrounding the central atom, causing the atom to
see multiple copies of several atoms. If the cutoff radius is larger
than box, however, then the atom will see an image of itself, and
attempt to calculate an unphysical self-self force interaction
(Fig.~\ref{introFig:rMaxMult}). Due to the increased complexity and
computational inefficiency of the multiple image method, the minimum
image method is the periodic method used throughout this research.
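
A sketch of the minimum image distance calculation for a cubic box
(the box length and coordinates below are placeholders):
\begin{verbatim}
import numpy as np

def minimum_image_dist(r_i, r_j, box):
    """Distance from i to the nearest image of j in a cubic box."""
    dr = r_j - r_i
    dr -= box * np.round(dr / box)   # wrap components into [-box/2, box/2)
    return np.sqrt(np.sum(dr**2))

box = 10.0
print(minimum_image_dist(np.array([0.5, 0.5, 0.5]),
                         np.array([9.5, 9.5, 9.5]), box))
# ~1.73 via the periodic image, not ~15.6 across the box
\end{verbatim}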

\begin{figure}
\centering
\includegraphics[width=\linewidth]{rCutMaxFig.eps}
\caption[An explanation of minimum image convention]{The yellow atom has all other images wrapped to itself as the center. If $r_{\text{cut}}=\text{box}/2$, then the distribution is uniform (blue atoms). However, when $r_{\text{cut}}>\text{box}/2$ the corners are disproportionately weighted (green atoms) vs the axial directions (shaded regions).}
\label{introFig:rMaxMin}
\end{figure}

\begin{figure}
\centering
\includegraphics[width=\linewidth]{rCutMaxMultFig.eps}
\caption[An explanation of multiple image convention]{The yellow atom is the central wrapping point. The blue atoms are the minimum images of the system about the central atom. The boxes with the green atoms are multiple images of the central box. If $r_{\text{cut}} \geq \text{box}$, then the central atom sees multiple images of itself (red atom), creating a self-self force evaluation.}
\label{introFig:rMaxMult}
\end{figure}

With the use of an $r_{\text{cut}}$, however, comes a discontinuity in
the potential energy curve (Fig.~\ref{introFig:shiftPot}). To fix this
discontinuity, one calculates the potential energy at
$r_{\text{cut}}$, and subtracts that value from the potential, causing
the function to go smoothly to zero at the cutoff radius. This
shifted potential ensures conservation of energy when integrating the
Newtonian equations of motion.
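
A sketch of this shifted potential for the Lennard-Jones interaction
($\epsilon$, $\sigma$, and the cutoff below are arbitrary reduced-unit
choices):
\begin{verbatim}
import numpy as np

def lj(r, eps=1.0, sigma=1.0):
    """Standard 12-6 Lennard-Jones potential."""
    sr6 = (sigma / r)**6
    return 4.0 * eps * (sr6**2 - sr6)

def lj_shifted(r, r_cut, eps=1.0, sigma=1.0):
    """LJ potential shifted to vanish at r_cut; zero beyond it."""
    return np.where(r < r_cut,
                    lj(r, eps, sigma) - lj(r_cut, eps, sigma), 0.0)

r = np.array([1.1, 2.0, 2.4999, 3.0])
print(lj_shifted(r, r_cut=2.5))   # approaches 0 at r_cut, 0 beyond
\end{verbatim}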

\begin{figure}
\centering
\includegraphics[width=\linewidth]{shiftedPot.eps}
\caption[Shifting the Lennard-Jones Potential]{The Lennard-Jones potential (blue line) is shifted (red line) to remove the discontinuity at $r_{\text{cut}}$.}
\label{introFig:shiftPot}
\end{figure}

The second main simplification used in this research is the Verlet
neighbor list.\cite{allen87:csl} In the Verlet method, one generates
a list of all neighbor atoms, $j$, surrounding atom $i$ within some
cutoff $r_{\text{list}}$, where $r_{\text{list}}>r_{\text{cut}}$.
This list is created the first time forces are evaluated; on
subsequent force evaluations, pair interactions are calculated only
from the neighbor lists. The lists are updated if any given particle
in the system moves farther than $r_{\text{list}}-r_{\text{cut}}$,
giving rise to the possibility that a particle has left or joined a
neighbor list.
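
A sketch of this bookkeeping, using a simple $O(N^2)$ build and the
minimum image convention (a cell-based build would be preferred for
large systems):
\begin{verbatim}
import numpy as np

def build_neighbor_list(pos, box, r_list):
    """For each i, store all j > i within r_list (minimum image)."""
    pairs = []
    for i in range(len(pos) - 1):
        dr = pos[i + 1:] - pos[i]
        dr -= box * np.round(dr / box)
        dist = np.sqrt(np.sum(dr**2, axis=1))
        pairs.extend((i, i + 1 + j) for j in np.nonzero(dist < r_list)[0])
    return pairs

def needs_rebuild(pos, pos_at_build, box, r_list, r_cut):
    """Rebuild once any particle has moved farther than r_list - r_cut."""
    dr = pos - pos_at_build
    dr -= box * np.round(dr / box)
    return np.max(np.sqrt(np.sum(dr**2, axis=1))) > (r_list - r_cut)
\end{verbatim}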

\subsection{\label{introSec:mdIntegrate}Integration of the Equations of Motion}

A starting point for the discussion of molecular dynamics integrators
is the Verlet algorithm.\cite{Frenkel1996} It begins with a Taylor
expansion of position in time:
\begin{equation}
q(t+\Delta t)= q(t) + v(t)\Delta t + \frac{F(t)}{2m}\Delta t^2 +
\frac{\Delta t^3}{3!}\frac{\partial^3 q(t)}{\partial t^3} +
\mathcal{O}(\Delta t^4)
\label{introEq:verletForward}
\end{equation}
As well as,
\begin{equation}
q(t-\Delta t)= q(t) - v(t)\Delta t + \frac{F(t)}{2m}\Delta t^2 -
\frac{\Delta t^3}{3!}\frac{\partial^3 q(t)}{\partial t^3} +
\mathcal{O}(\Delta t^4)
\label{introEq:verletBack}
\end{equation}
Where $m$ is the mass of the particle, $q(t)$ is the position at time
$t$, $v(t)$ the velocity, and $F(t)$ the force acting on the
particle. Adding together Eq.~\ref{introEq:verletForward} and
Eq.~\ref{introEq:verletBack} results in,
\begin{equation}
q(t+\Delta t)+q(t-\Delta t) =
2q(t) + \frac{F(t)}{m}\Delta t^2 + \mathcal{O}(\Delta t^4)
\label{introEq:verletSum}
\end{equation}
Or equivalently,
\begin{equation}
q(t+\Delta t) \approx
2q(t) - q(t-\Delta t) + \frac{F(t)}{m}\Delta t^2
\label{introEq:verletFinal}
\end{equation}
Which contains an error in the estimate of the new positions on the
order of $\Delta t^4$.

In practice, however, the simulations in this research were integrated
with a velocity reformulation of the Verlet method.\cite{allen87:csl}
\begin{align}
q(t+\Delta t) &= q(t) + v(t)\Delta t + \frac{F(t)}{2m}\Delta t^2 %
\label{introEq:MDvelVerletPos} \\%
%
v(t+\Delta t) &= v(t) + \frac{\Delta t}{2m}[F(t) + F(t+\Delta t)] %
\label{introEq:MDvelVerletVel}
\end{align}
The original Verlet algorithm can be regained by substituting the
velocity back into Eq.~\ref{introEq:MDvelVerletPos}. The Verlet
formulations are chosen in this research because the algorithms have
very little long term drift in energy conservation. Energy
conservation in a molecular dynamics simulation is of extreme
importance, as it is a measure of how closely one is following the
``true'' trajectory with the finite integration scheme. An exact
solution to the integration will conserve area in phase space, as well
as be reversible in time; that is, the trajectory integrated forward
or backwards will exactly match itself. Having a finite algorithm
that both conserves area in phase space and is time reversible
therefore increases, but does not guarantee, the ``correctness'' of the
integrated trajectory.
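
A sketch of this integrator for a generic force routine (the harmonic
force below is a stand-in for a real force field):
\begin{verbatim}
import numpy as np

def velocity_verlet(q, v, force, mass, dt, n_steps):
    """Integrate dq/dt = v, m dv/dt = F(q) by velocity Verlet."""
    f = force(q)
    for _ in range(n_steps):
        q = q + v * dt + f / (2.0 * mass) * dt**2   # Eq. MDvelVerletPos
        f_new = force(q)                            # forces at new positions
        v = v + (f + f_new) * dt / (2.0 * mass)     # Eq. MDvelVerletVel
        f = f_new
    return q, v

# One period of a harmonic oscillator (F = -q, m = 1): q ~ 1, v ~ 0.
q, v = velocity_verlet(np.array([1.0]), np.array([0.0]),
                       force=lambda q: -q, mass=1.0,
                       dt=0.01, n_steps=628)
print(q, v)
\end{verbatim}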

It can be shown\cite{Frenkel1996} that although the Verlet algorithm
does not rigorously preserve the actual Hamiltonian, it does preserve
a pseudo-Hamiltonian which shadows the real one in phase space. This
pseudo-Hamiltonian is provably area-conserving as well as time
reversible. The fact that it shadows the true Hamiltonian in phase
space is acceptable in actual simulations, as one is interested in the
ensemble average of the observable being measured. From the ergodic
hypothesis (Sec.~\ref{introSec:ergodic}), it is known that the time
average will match the ensemble average, therefore two similar
trajectories in phase space should give matching statistical averages.

\subsection{\label{introSec:MDfurther}Further Considerations}

In the simulations presented in this research, a few additional
parameters are needed to describe the motions. The simulations
involving water and phospholipids in Ch.~\ref{chapt:lipid} are
required to integrate the equations of motion for dipoles on atoms.
This requires an additional three parameters to be specified for each
dipole atom: $\phi$, $\theta$, and $\psi$. These three angles are
taken to be the Euler angles, where $\phi$ is a rotation about the
$z$-axis, $\theta$ is a rotation about the new $x$-axis, and
$\psi$ is a final rotation about the new $z$-axis (see
Fig.~\ref{introFig:eulerAngles}). This sequence of rotations can be
accumulated into a single $3 \times 3$ matrix, $\mathbf{A}$,
defined as follows:
691     \begin{equation}
692 mmeineke 1012 \mathbf{A} =
693     \begin{bmatrix}
694     \cos\phi\cos\psi-\sin\phi\cos\theta\sin\psi &%
695     \sin\phi\cos\psi+\cos\phi\cos\theta\sin\psi &%
696     \sin\theta\sin\psi \\%
697     %
698     -\cos\phi\sin\psi-\sin\phi\cos\theta\cos\psi &%
699     -\sin\phi\sin\psi+\cos\phi\cos\theta\cos\psi &%
700     \sin\theta\cos\psi \\%
701     %
702     \sin\phi\sin\theta &%
703     -\cos\phi\sin\theta &%
704     \cos\theta
705     \end{bmatrix}
706 mmeineke 978 \label{introEq:EulerRotMat}
707     \end{equation}
708    
\begin{figure}
\centering
\includegraphics[width=\linewidth]{eulerRotFig.eps}
\caption[Euler rotation of Cartesian coordinates]{The rotation scheme for Euler angles. First is a rotation of $\phi$ about the $z$ axis (blue rotation). Next is a rotation of $\theta$ about the new $x$ axis (green rotation). Lastly is a final rotation of $\psi$ about the new $z$ axis (red rotation).}
\label{introFig:eulerAngles}
\end{figure}
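
The matrix in Eq.~\ref{introEq:EulerRotMat} can be constructed
directly as the product of the three rotations in
Fig.~\ref{introFig:eulerAngles}; a sketch:
\begin{verbatim}
import numpy as np

def euler_to_rotation(phi, theta, psi):
    """Rotation matrix A of Eq. EulerRotMat (z-x-z convention)."""
    cf, sf = np.cos(phi),   np.sin(phi)
    ct, st = np.cos(theta), np.sin(theta)
    cp, sp = np.cos(psi),   np.sin(psi)
    return np.array([
        [ cf*cp - sf*ct*sp,  sf*cp + cf*ct*sp,  st*sp],
        [-cf*sp - sf*ct*cp, -sf*sp + cf*ct*cp,  st*cp],
        [ sf*st,            -cf*st,             ct   ]])

A = euler_to_rotation(0.3, 0.2, 0.1)
print(np.allclose(A @ A.T, np.eye(3)))   # True: A is orthogonal
\end{verbatim}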

The equations of motion for Euler angles can be written down
as\cite{allen87:csl}
\begin{align}
\dot{\phi} &= -\omega^s_x \frac{\sin\phi\cos\theta}{\sin\theta} +
\omega^s_y \frac{\cos\phi\cos\theta}{\sin\theta} +
\omega^s_z
\label{introEq:MDeulerPhi} \\%
%
\dot{\theta} &= \omega^s_x \cos\phi + \omega^s_y \sin\phi
\label{introEq:MDeulerTheta} \\%
%
\dot{\psi} &= \omega^s_x \frac{\sin\phi}{\sin\theta} -
\omega^s_y \frac{\cos\phi}{\sin\theta}
\label{introEq:MDeulerPsi}
\end{align}
Where $\omega^s_i$ is the angular velocity in the lab space frame
along Cartesian coordinate $i$. However, a difficulty arises when
attempting to integrate Eq.~\ref{introEq:MDeulerPhi} and
Eq.~\ref{introEq:MDeulerPsi}. The $\frac{1}{\sin \theta}$ present in
both equations means there is a non-physical instability present when
$\theta$ is 0 or $\pi$. To correct for this, the simulations integrate
the rotation matrix, $\mathbf{A}$, directly, thus avoiding the
instability. This method was proposed by Dullweber
\emph{et al.},\cite{Dullweber1997} and is presented in
Sec.~\ref{introSec:MDsymplecticRot}.

\subsection{\label{introSec:MDliouville}Liouville Propagator}

Before discussing the integration of the rotation matrix, it is
necessary to understand the construction of a ``good'' integration
scheme. It has been previously discussed
(Sec.~\ref{introSec:mdIntegrate}) how it is desirable for an
integrator to be symplectic and time reversible. The following is an
outline of the Trotter factorization of the Liouville Propagator as a
scheme for generating symplectic integrators.\cite{Tuckerman92}

For a system with $f$ degrees of freedom the Liouville operator can be
defined as,
\begin{equation}
iL=\sum^f_{j=1} \biggl [\dot{q}_j\frac{\partial}{\partial q_j} +
F_j\frac{\partial}{\partial p_j} \biggr ]
\label{introEq:LiouvilleOperator}
\end{equation}
Here, $q_j$ and $p_j$ are the position and conjugate momenta of a
degree of freedom, and $F_j$ is the force on that degree of freedom.
$\Gamma$ is defined as the set of all positions and conjugate momenta,
$\{q_j,p_j\}$, and the propagator, $U(t)$, is defined
\begin{equation}
U(t) = e^{iLt}
\label{introEq:Lpropagator}
\end{equation}
This allows the specification of $\Gamma$ at any time $t$ as
\begin{equation}
\Gamma(t) = U(t)\Gamma(0)
\label{introEq:Lp2}
\end{equation}
It is important to note that $U(t)$ is a unitary operator, meaning
\begin{equation}
U(-t)=U^{-1}(t)
\label{introEq:Lp3}
\end{equation}

Decomposing $L$ into two parts, $iL_1$ and $iL_2$, one can use the
Trotter theorem to yield
\begin{align}
e^{iLt} &= e^{i(L_1 + L_2)t} \notag \\%
%
&= \biggl [ e^{i(L_1 +L_2)\frac{t}{P}} \biggr]^P \notag \\%
%
&= \biggl [ e^{iL_1\frac{\Delta t}{2}}\, e^{iL_2\Delta t}\,
e^{iL_1\frac{\Delta t}{2}} \biggr ]^P +
\mathcal{O}\biggl (\frac{t^3}{P^2} \biggr ) \label{introEq:Lp4}
\end{align}
Where $\Delta t = t/P$.
With this, a discrete time operator $G(\Delta t)$ can be defined:
\begin{align}
G(\Delta t) &= e^{iL_1\frac{\Delta t}{2}}\, e^{iL_2\Delta t}\,
e^{iL_1\frac{\Delta t}{2}} \notag \\%
%
&= U_1 \biggl ( \frac{\Delta t}{2} \biggr )\, U_2 ( \Delta t )\,
U_1 \biggl ( \frac{\Delta t}{2} \biggr )
\label{introEq:Lp5}
\end{align}
Because $U_1(t)$ and $U_2(t)$ are unitary, $G(\Delta t)$ is also
unitary, meaning an integrator based on this factorization will be
reversible in time.

As an example, consider the following decomposition of $L$:
\begin{align}
iL_1 &= \dot{q}\frac{\partial}{\partial q}%
\label{introEq:Lp6a} \\%
%
iL_2 &= F(q)\frac{\partial}{\partial p}%
\label{introEq:Lp6b}
\end{align}
This leads to propagator $G( \Delta t )$ as,
\begin{equation}
G(\Delta t) = e^{\frac{\Delta t}{2} F(q)\frac{\partial}{\partial p}} \,
e^{\Delta t\,\dot{q}\frac{\partial}{\partial q}} \,
e^{\frac{\Delta t}{2} F(q)\frac{\partial}{\partial p}}
\label{introEq:Lp7}
\end{equation}
Operating $G(\Delta t)$ on $\Gamma(0)$ and utilizing the operator
property
\begin{equation}
e^{c\frac{\partial}{\partial x}}\, f(x) = f(x+c)
\label{introEq:Lp8}
\end{equation}
Where $c$ is independent of $x$, one obtains the following:
\begin{align}
\dot{q}\biggl (\frac{\Delta t}{2}\biggr ) &=
\dot{q}(0) + \frac{\Delta t}{2m}\, F[q(0)] \label{introEq:Lp9a}\\%
%
q(\Delta t) &= q(0) + \Delta t\, \dot{q}\biggl (\frac{\Delta t}{2}\biggr )%
\label{introEq:Lp9b}\\%
%
\dot{q}(\Delta t) &= \dot{q}\biggl (\frac{\Delta t}{2}\biggr ) +
\frac{\Delta t}{2m}\, F[q(\Delta t)] \label{introEq:Lp9c}
\end{align}
Or written another way,
\begin{align}
q(t+\Delta t) &= q(0) + \dot{q}(0)\Delta t +
\frac{F[q(0)]}{m}\frac{\Delta t^2}{2} %
\label{introEq:Lp10a} \\%
%
\dot{q}(\Delta t) &= \dot{q}(0) + \frac{\Delta t}{2m}
\biggl [F[q(0)] + F[q(\Delta t)] \biggr] %
\label{introEq:Lp10b}
\end{align}
This is the velocity Verlet formulation presented in
Sec.~\ref{introSec:mdIntegrate}. Because this integration scheme is
comprised of unitary propagators, it is symplectic, and therefore area
preserving in phase space. From the preceding factorization, one can
see that the integration of the equations of motion would follow:
\begin{enumerate}
\item Calculate the velocities at the half step, $\frac{\Delta t}{2}$, from the forces calculated at the initial position.

\item Use the half step velocities to move positions one whole step, $\Delta t$.

\item Evaluate the forces at the new positions, $\mathbf{r}(\Delta t)$, and use the new forces to complete the velocity move.

\item Repeat from step 1 with the new position, velocities, and forces assuming the roles of the initial values.
\end{enumerate}

\subsection{\label{introSec:MDsymplecticRot}Symplectic Propagation of the Rotation Matrix}

861     Based on the factorization from the previous section,
862 mmeineke 1012 Dullweber\emph{et al}.\cite{Dullweber1997}~ proposed a scheme for the
863 mmeineke 980 symplectic propagation of the rotation matrix, $\mathbf{A}$, as an
864     alternative method for the integration of orientational degrees of
865     freedom. The method starts with a straightforward splitting of the
866     Liouville operator:
867 mmeineke 1012 \begin{align}
868     iL_{\text{pos}} &= \dot{q}\frac{\partial}{\partial q} +
869     \mathbf{\dot{A}}\frac{\partial}{\partial \mathbf{A}}
870     \label{introEq:SR1a} \\%
871     %
872     iL_F &= F(q)\frac{\partial}{\partial p}
873     \label{introEq:SR1b} \\%
874     iL_{\tau} &= \tau(\mathbf{A})\frac{\partial}{\partial \pi}
875     \label{introEq:SR1b} \\%
876     \end{align}
Where $\tau(\mathbf{A})$ is the torque of the system
due to the configuration, and $\pi$ is the conjugate
angular momentum of the system. The propagator, $G(\Delta t)$, becomes
\begin{equation}
G(\Delta t) = e^{\frac{\Delta t}{2} F(q)\frac{\partial}{\partial p}} \,
e^{\frac{\Delta t}{2} \tau(\mathbf{A})\frac{\partial}{\partial \pi}} \,
e^{\Delta t\,iL_{\text{pos}}} \,
e^{\frac{\Delta t}{2} \tau(\mathbf{A})\frac{\partial}{\partial \pi}} \,
e^{\frac{\Delta t}{2} F(q)\frac{\partial}{\partial p}}
\label{introEq:SR2}
\end{equation}
Propagation of the linear and angular momenta follows as in the Verlet
scheme. The propagation of positions also follows the Verlet scheme
with the addition of a further symplectic splitting of the rotation
matrix propagation, $\mathcal{U}_{\text{rot}}(\Delta t)$, within
$U_{\text{pos}}(\Delta t)$,
\begin{equation}
\mathcal{U}_{\text{rot}}(\Delta t) =
\mathcal{U}_x \biggl(\frac{\Delta t}{2}\biggr)\,
\mathcal{U}_y \biggl(\frac{\Delta t}{2}\biggr)\,
\mathcal{U}_z (\Delta t)\,
\mathcal{U}_y \biggl(\frac{\Delta t}{2}\biggr)\,
\mathcal{U}_x \biggl(\frac{\Delta t}{2}\biggr)
\label{introEq:SR3}
\end{equation}
Where $\mathcal{U}_j$ is a unitary rotation of $\mathbf{A}$ and
$\pi$ about each axis $j$. As all propagations are now
unitary and symplectic, the entire integration scheme is also
symplectic and time reversible.
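
A structural sketch of the splitting in Eq.~\ref{introEq:SR3}: each
sub-rotation turns both $\mathbf{A}$ and the body-fixed angular
momentum $\pi$ about a single axis by an angle set by $\pi_j/I_j$.
The moments of inertia and sign conventions here are illustrative
assumptions, not the production implementation in {\sc oopse}:
\begin{verbatim}
import numpy as np

def axis_rotation(j, angle):
    """3x3 rotation matrix about body axis j (0=x, 1=y, 2=z)."""
    c, s = np.cos(angle), np.sin(angle)
    k, l = (j + 1) % 3, (j + 2) % 3
    R = np.eye(3)
    R[k, k] = c; R[k, l] = -s
    R[l, k] = s; R[l, l] = c
    return R

def rotate_step(A, pi, I, j, h):
    """One sub-rotation U_j(h): turn A and pi by h*pi_j/I_j about axis j."""
    R = axis_rotation(j, h * pi[j] / I[j])
    return A @ R.T, R @ pi      # pi_j itself is left unchanged

def u_rot(A, pi, I, dt):
    """Symmetric x-y-z-y-x splitting of Eq. SR3."""
    for j, h in ((0, dt/2), (1, dt/2), (2, dt), (1, dt/2), (0, dt/2)):
        A, pi = rotate_step(A, pi, I, j, h)
    return A, pi
\end{verbatim}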

\section{\label{introSec:layout}Dissertation Layout}

This dissertation is divided as follows: Ch.~\ref{chapt:RSA}
presents the random sequential adsorption simulations of related
phthalocyanines on a gold (111) surface. Ch.~\ref{chapt:OOPSE}
is about the writing of the molecular dynamics simulation package
{\sc oopse}. Ch.~\ref{chapt:lipid} regards the simulations of
phospholipid bilayers using a mesoscale model. And lastly,
Ch.~\ref{chapt:conclusion} concludes this dissertation with a
summary of all results. The chapters are arranged in chronological
order, and reflect the progression of techniques I employed during my
research.

The chapter concerning random sequential adsorption simulations is a
study in applying Statistical Mechanics simulation techniques in order
to obtain a simple model capable of explaining the results. My
advisor, Dr. Gezelter, and I were approached by a colleague,
Dr. Lieberman, about possible explanations for the partial coverage of a
gold surface by a particular compound of hers. We suggested it might
be due to the statistical packing fraction of disks on a plane, and
set about to simulate this system. As the events in our model were
not dynamic in nature, a Monte Carlo method was employed. Here, if a
molecule landed on the surface without overlapping another, then its
landing was accepted. However, if there was overlap, the landing was
rejected and a new random landing location was chosen. This defined
our acceptance rules and allowed us to construct a Markov chain whose
limiting distribution was the surface coverage in which we were
interested.

The following chapter, about the simulation package {\sc oopse},
describes in detail the large body of scientific code that had to be
written in order to study phospholipid bilayers. Although there are
pre-existing molecular dynamics simulation packages available, none
were capable of implementing the models we were developing. {\sc oopse}
is a unique package capable of not only integrating the equations of
motion in Cartesian space, but is also able to integrate the
rotational motion of rigid bodies and dipoles. Add to this the
ability to perform calculations across parallel processors and a
flexible script syntax for creating systems, and {\sc oopse} becomes a
very powerful scientific instrument for the exploration of our model.

This brings us to Ch.~\ref{chapt:lipid}. Using {\sc oopse}, I have been
able to parameterize a mesoscale model for phospholipid simulations.
This model retains information about solvent ordering around the
bilayer, as well as information regarding the interaction of the
phospholipid head groups' dipoles with each other and the surrounding
solvent. These simulations give us insight into the dynamic events
that lead to the formation of phospholipid bilayers, as well as
provide the foundation for future exploration of bilayer phase
behavior with this model.

This leads into the last chapter, where I discuss future directions
for both {\sc oopse} and this mesoscale model. Additionally, I will
give a summary of results for this dissertation.