\chapter{\label{chapt:intro}INTRODUCTION AND THEORETICAL BACKGROUND}

The techniques used in the course of this research fall under the two
main classes of molecular simulation: Molecular Dynamics and Monte
Carlo. Molecular Dynamics simulations integrate the equations of
motion for a given system of particles, allowing the researcher to
gain insight into the time-dependent evolution of a system. Diffusion
phenomena are readily studied with this simulation technique, making
Molecular Dynamics the main simulation technique used in this
research. Other aspects of the research fall under the Monte Carlo
class of simulations. In Monte Carlo, the configuration space
available to the collection of particles is sampled stochastically,
or randomly. Each configuration is chosen with a given probability
based on the Maxwell-Boltzmann distribution. These types of
simulations are best used to probe properties of a system that are
dependent only on the state of the system. Structural information
about a system is most readily obtained through these types of
methods.

Although the two techniques employed seem dissimilar, they are both
linked by the overarching principles of Statistical
Mechanics. Statistical Mechanics governs the behavior of both classes
of simulations and dictates what each method can and cannot do. When
investigating a system, one must first analyze what thermodynamic
properties of the system are being probed, then choose which method
best suits that objective.

\section{\label{introSec:statThermo}Statistical Mechanics}

The following section serves as a brief introduction to some of the
Statistical Mechanics concepts present in this dissertation. What
follows is a brief derivation of Boltzmann weighted statistics and an
explanation of how one can use the information to calculate an
observable for a system. This section then concludes with a brief
discussion of the ergodic hypothesis and its relevance to this
research.

\subsection{\label{introSec:boltzman}Boltzmann weighted statistics}

Consider a system, $\gamma$, with total energy $E_{\gamma}$. Let
$\Omega(E_{\gamma})$ represent the number of degenerate ways
$\boldsymbol{\Gamma}\{r_1,r_2,\ldots r_n,p_1,p_2,\ldots p_n\}$, the
collection of positions and conjugate momenta of system $\gamma$, can
be configured to give $E_{\gamma}$. Further, if $\gamma$ is a subset
of a larger system, $\boldsymbol{\Lambda}\{E_1,E_2,\ldots
E_{\gamma},\ldots E_n\}$, the total degeneracy of the system can be
expressed as,
\begin{equation}
\Omega(\boldsymbol{\Lambda}) = \Omega(E_1) \times \Omega(E_2) \times \ldots
\Omega(E_{\gamma}) \times \ldots \Omega(E_n)
\label{introEq:SM0.1}
\end{equation}
This multiplicative combination of degeneracies is illustrated in
Fig.~\ref{introFig:degenProd}.

\begin{figure}
\centering
\includegraphics[width=\linewidth]{omegaFig.eps}
\caption[An explanation of the combination of degeneracies]{Systems A and B both have three energy levels and two indistinguishable particles. When the total energy is 2, there are two ways for each system to disperse the energy. However, for system C, the superset of A and B, the total degeneracy is the product of the degeneracy of each system. In this case $\Omega(\text{C})$ is 4.}
\label{introFig:degenProd}
\end{figure}
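
As an illustration of Eq.~\ref{introEq:SM0.1}, the counting in
Fig.~\ref{introFig:degenProd} can be reproduced by brute-force
enumeration. The following Python sketch is purely illustrative and is
not part of any simulation code used in this work:
\begin{verbatim}
# Two indistinguishable particles in three energy levels (0, 1, 2),
# as in Fig. introFig:degenProd.
from itertools import combinations_with_replacement

def degeneracy(E_total, levels=(0, 1, 2), n_particles=2):
    # Count the distinct (unordered) ways to share E_total.
    return sum(1 for occ in
               combinations_with_replacement(levels, n_particles)
               if sum(occ) == E_total)

omega_A = degeneracy(2)   # 2 ways: (0,2) and (1,1)
omega_B = degeneracy(2)   # likewise 2 ways
print(omega_A * omega_B)  # 4, the degeneracy of the superset C
\end{verbatim}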
64    
65     Next, consider the specific case of $\gamma$ in contact with a
66     bath. Exchange of energy is allowed between the bath and the system,
67     subject to the constraint that the total energy
68     ($E_{\text{bath}}+E_{\gamma}$) remain constant. $\Omega(E)$, where $E$
69     is the total energy of both systems, can be represented as
70     \begin{equation}
71 mmeineke 1003 \Omega(E) = \Omega(E_{\gamma}) \times \Omega(E - E_{\gamma})
72 mmeineke 1001 \label{introEq:SM1}
73     \end{equation}
74     Or additively as
75     \begin{equation}
76 mmeineke 1003 \ln \Omega(E) = \ln \Omega(E_{\gamma}) + \ln \Omega(E - E_{\gamma})
77 mmeineke 1001 \label{introEq:SM2}
78     \end{equation}
79    
80     The solution to Eq.~\ref{introEq:SM2} maximizes the number of
81 mmeineke 1003 degenerative configurations in $E$. \cite{Frenkel1996}
82 mmeineke 1001 This gives
\begin{equation}
\frac{\partial \ln \Omega(E)}{\partial E_{\gamma}} = 0 =
\frac{\partial \ln \Omega(E_{\gamma})}{\partial E_{\gamma}}
+
\frac{\partial \ln \Omega(E_{\text{bath}})}{\partial E_{\text{bath}}}
\frac{\partial E_{\text{bath}}}{\partial E_{\gamma}}
\label{introEq:SM3}
\end{equation}
Where $E_{\text{bath}}$ is $E-E_{\gamma}$, and
$\frac{\partial E_{\text{bath}}}{\partial E_{\gamma}}$ is
$-1$. Eq.~\ref{introEq:SM3} becomes
\begin{equation}
\frac{\partial \ln \Omega(E_{\gamma})}{\partial E_{\gamma}} =
\frac{\partial \ln \Omega(E_{\text{bath}})}{\partial E_{\text{bath}}}
\label{introEq:SM4}
\end{equation}

At this point, one can draw a relationship between the maximization of
degeneracy in Eq.~\ref{introEq:SM3} and the second law of
thermodynamics. Namely, that for a closed system, entropy will
increase for an irreversible process.\cite{chandler:1987} Here the
process is the partitioning of energy among the two systems. This
allows the following definition of entropy:
\begin{equation}
S = k_B \ln \Omega(E)
\label{introEq:SM5}
\end{equation}
Where $k_B$ is the Boltzmann constant. Having defined entropy, one can
also define the temperature of the system using the relation
\begin{equation}
\frac{1}{T} = \biggl ( \frac{\partial S}{\partial E} \biggr )_{N,V}
\label{introEq:SM6}
\end{equation}
The temperature in the system $\gamma$ is then
\begin{equation}
\beta( E_{\gamma} ) = \frac{1}{k_B T} =
\frac{\partial \ln \Omega(E_{\gamma})}{\partial E_{\gamma}}
\label{introEq:SM7}
\end{equation}
Applying this to Eq.~\ref{introEq:SM4} gives the following
\begin{equation}
\beta( E_{\gamma} ) = \beta( E_{\text{bath}} )
\label{introEq:SM8}
\end{equation}
Showing that the partitioning of energy between the two systems is
actually a process of thermal equilibration.\cite{Frenkel1996}

An application of these results is to formulate the form of an
expectation value of an observable, $A$, in the canonical ensemble. In
the canonical ensemble, the number of particles, $N$, the volume, $V$,
and the temperature, $T$, are all held constant while the energy, $E$,
is allowed to fluctuate. Returning to the previous example, the bath
system is now an infinitely large thermal bath, whose exchange of
energy with the system $\gamma$ holds the temperature constant. The
partitioning of energy in the bath system is then related to the total
energy of both systems and the fluctuations in $E_{\gamma}$:
\begin{equation}
\Omega( E_{\text{bath}} ) = \Omega( E - E_{\gamma} )
\label{introEq:SM9}
\end{equation}
As for the expectation value, it can be defined as
\begin{equation}
\langle A \rangle =
\int\limits_{\boldsymbol{\Gamma}} d\boldsymbol{\Gamma}
P_{\gamma} A(\boldsymbol{\Gamma})
\label{introEq:SM10}
\end{equation}
Where $\int\limits_{\boldsymbol{\Gamma}} d\boldsymbol{\Gamma}$ denotes
an integration over all accessible phase space, $P_{\gamma}$ is the
probability of being in a given phase state and
$A(\boldsymbol{\Gamma})$ is some observable that is a function of the
phase state.

Because entropy seeks to maximize the number of degenerate states at a
given energy, the probability of being in a particular state in
$\gamma$ will be directly proportional to the number of allowable
states the coupled system is able to assume. Namely,
\begin{equation}
P_{\gamma} \propto \Omega( E_{\text{bath}} ) =
e^{\ln \Omega( E - E_{\gamma})}
\label{introEq:SM11}
\end{equation}
With $E_{\gamma} \ll E$, $\ln \Omega$ can be expanded in a Taylor series:
\begin{equation}
\ln \Omega ( E - E_{\gamma}) =
\ln \Omega (E) -
E_{\gamma} \frac{\partial \ln \Omega }{\partial E}
+ \ldots
\label{introEq:SM12}
\end{equation}
Higher order terms are omitted as $E$ is an infinite thermal
bath. Further, using Eq.~\ref{introEq:SM7}, Eq.~\ref{introEq:SM11} can
be rewritten:
\begin{equation}
P_{\gamma} \propto e^{-\beta E_{\gamma}}
\label{introEq:SM13}
\end{equation}
Where $\ln \Omega(E)$ has been factored out of the proportionality as a
constant. Normalizing the probability ($\int\limits_{\boldsymbol{\Gamma}}
d\boldsymbol{\Gamma} P_{\gamma} = 1$) gives
\begin{equation}
P_{\gamma} = \frac{e^{-\beta E_{\gamma}}}
{\int\limits_{\boldsymbol{\Gamma}} d\boldsymbol{\Gamma} e^{-\beta E_{\gamma}}}
\label{introEq:SM14}
\end{equation}
This result is the standard Boltzmann statistical distribution.
Applying it to Eq.~\ref{introEq:SM10} one can obtain the following
relation for ensemble averages:
\begin{equation}
\langle A \rangle =
\frac{\int\limits_{\boldsymbol{\Gamma}} d\boldsymbol{\Gamma}
A( \boldsymbol{\Gamma} ) e^{-\beta E_{\gamma}}}
{\int\limits_{\boldsymbol{\Gamma}} d\boldsymbol{\Gamma} e^{-\beta E_{\gamma}}}
\label{introEq:SM15}
\end{equation}

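For a system with a discrete set of states, the ensemble average in
Eq.~\ref{introEq:SM15} reduces to a Boltzmann-weighted sum. A minimal
Python sketch, using arbitrary example energies and observable values
in reduced units:
\begin{verbatim}
import numpy as np

beta = 1.0                            # 1/(kB T) in reduced units
E = np.array([0.0, 0.5, 1.0, 2.0])    # state energies (arbitrary)
A = np.array([1.0, 2.0, 3.0, 4.0])    # observable values (arbitrary)

w = np.exp(-beta * E)                 # Boltzmann weights, Eq. introEq:SM13
avg_A = np.sum(A * w) / np.sum(w)     # discrete form of Eq. introEq:SM15
print(avg_A)
\end{verbatim}
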
\subsection{\label{introSec:ergodic}The Ergodic Hypothesis}

One last important consideration is that of ergodicity. Ergodicity is
the assumption that given an infinite amount of time, a system will
visit every available point in phase space.\cite{Frenkel1996} For most
systems, this is a valid assumption, except in cases where the system
may be trapped in a local feature (\emph{e.g.}~glasses). When valid,
ergodicity allows the unification of a time averaged observation and
an ensemble averaged one. If an observation is averaged over a
sufficiently long time, the system is assumed to visit all
appropriately available points in phase space, giving a properly
weighted statistical average. This allows the researcher freedom of
choice when deciding how best to measure a given observable. When an
ensemble averaged approach seems most logical, the Monte Carlo
techniques described in Sec.~\ref{introSec:monteCarlo} can be utilized.
Conversely, if a problem lends itself to a time averaging approach,
the Molecular Dynamics techniques in Sec.~\ref{introSec:MD} can be
employed.

\section{\label{introSec:monteCarlo}Monte Carlo Simulations}

The Monte Carlo method was developed by Metropolis and Ulam for their
work in fissionable material.\cite{metropolis:1949} The method is so
named because it heavily uses random numbers in its
solution.\cite{allen87:csl} The Monte Carlo method allows for the
solution of integrals through the stochastic sampling of the values
within the integral. In the simplest case, the evaluation of an
integral would follow a brute force method of
sampling.\cite{Frenkel1996} Consider the following single dimensional
integral:
\begin{equation}
I = \int^b_a f(x)dx
\label{eq:MCex1}
\end{equation}
The equation can be recast as:
\begin{equation}
I = (b-a)\langle f(x) \rangle
\label{eq:MCex2}
\end{equation}
Where $\langle f(x) \rangle$ is the unweighted average over the interval
$[a,b]$. The integral could then be evaluated by randomly choosing
points along the interval $[a,b]$ and calculating the value of $f(x)$
at each point. The accumulated average would then approach $I$ in the
limit where the number of trials is infinitely large.
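
A minimal Python sketch of this brute force scheme, applied to the
known test integral $\int^1_0 x^2 dx = 1/3$:
\begin{verbatim}
import random

def mc_integrate(f, a, b, n_trials=100000):
    # Eq. eq:MCex2: I ~ (b - a) * <f(x)> over uniform random x in [a,b]
    total = 0.0
    for _ in range(n_trials):
        total += f(random.uniform(a, b))
    return (b - a) * total / n_trials

print(mc_integrate(lambda x: x * x, 0.0, 1.0))   # ~ 0.333
\end{verbatim}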

However, in Statistical Mechanics, one is typically interested in
integrals of the form:
\begin{equation}
\langle A \rangle = \frac{\int d^N \mathbf{r}~A(\mathbf{r}^N)%
e^{-\beta V(\mathbf{r}^N)}}%
{\int d^N \mathbf{r}~e^{-\beta V(\mathbf{r}^N)}}
\label{eq:mcEnsAvg}
\end{equation}
Where $\mathbf{r}^N$ stands for the coordinates of all $N$ particles
and $A$ is some observable that is only dependent on position. This is
the ensemble average of $A$ as presented in
Sec.~\ref{introSec:statThermo}, except here $A$ is independent of
momentum. Therefore the momentum contributions of the integral can be
factored out, leaving the configurational integral. Application of the
brute force method to this system would yield highly inefficient
results. Due to the Boltzmann weighting of this integral, most random
configurations will have a near zero contribution to the ensemble
average. This is where importance sampling comes into
play.\cite{allen87:csl}

Importance Sampling is a method where one selects a distribution from
which the random configurations are chosen in order to more
efficiently calculate the integral.\cite{Frenkel1996} Consider again
Eq.~\ref{eq:MCex1} rewritten to be:
\begin{equation}
I = \int^b_a \frac{f(x)}{\rho(x)} \rho(x) dx
\label{introEq:Importance1}
\end{equation}
Where $\rho(x)$ is an arbitrary probability distribution in $x$. If
one conducts $\tau$ trials selecting a random number, $\zeta_\tau$,
from the distribution $\rho(x)$ on the interval $[a,b]$, then
Eq.~\ref{introEq:Importance1} becomes
\begin{equation}
I= \biggl \langle \frac{f(x)}{\rho(x)} \biggr \rangle_{\text{trials}}
\label{introEq:Importance2}
\end{equation}
Looking at Eq.~\ref{eq:mcEnsAvg}, and realizing
\begin{equation}
\rho_{kT}(\mathbf{r}^N) =
\frac{e^{-\beta V(\mathbf{r}^N)}}
{\int d^N \mathbf{r}~e^{-\beta V(\mathbf{r}^N)}}
\label{introEq:MCboltzman}
\end{equation}
Where $\rho_{kT}$ is the Boltzmann distribution. The ensemble average
can be rewritten as
\begin{equation}
\langle A \rangle = \int d^N \mathbf{r}~A(\mathbf{r}^N)
\rho_{kT}(\mathbf{r}^N)
\label{introEq:Importance3}
\end{equation}
Applying Eq.~\ref{introEq:Importance2} one obtains
\begin{equation}
\langle A \rangle = \biggl \langle
\frac{ A \rho_{kT}(\mathbf{r}^N) }
{\rho(\mathbf{r}^N)} \biggr \rangle_{\text{trials}}
\label{introEq:Importance4}
\end{equation}
By selecting $\rho(\mathbf{r}^N)$ to be $\rho_{kT}(\mathbf{r}^N)$
Eq.~\ref{introEq:Importance4} becomes
\begin{equation}
\langle A \rangle = \langle A(\mathbf{r}^N) \rangle_{\text{trials}}
\label{introEq:Importance5}
\end{equation}
The difficulty is selecting points $\mathbf{r}^N$ such that they are
sampled from the distribution $\rho_{kT}(\mathbf{r}^N)$. A solution
was proposed by Metropolis \emph{et al.},\cite{metropolis:1953} which
involves the use of a Markov chain whose limiting distribution is
$\rho_{kT}(\mathbf{r}^N)$.

\subsection{\label{introSec:markovChains}Markov Chains}

A Markov chain is a chain of states satisfying the following
conditions:\cite{leach01:mm}
\begin{enumerate}
\item The outcome of each trial depends only on the outcome of the previous trial.
\item Each trial belongs to a finite set of outcomes called the state space.
\end{enumerate}
Given two configurations, $\mathbf{r}^N_m$ and $\mathbf{r}^N_n$, let
$\rho_m$ and $\rho_n$ be the probabilities of being in state
$\mathbf{r}^N_m$ and $\mathbf{r}^N_n$ respectively. Further, the two
states are linked by a transition probability, $\pi_{mn}$, which is the
probability of going from state $m$ to state $n$.

\newcommand{\accMe}{\operatorname{acc}}

The transition probability is given by the following:
\begin{equation}
\pi_{mn} = \alpha_{mn} \times \accMe(m \rightarrow n)
\label{introEq:MCpi}
\end{equation}
Where $\alpha_{mn}$ is the probability of attempting the move $m
\rightarrow n$, and $\accMe$ is the probability of accepting the move
$m \rightarrow n$. Defining a probability vector,
$\boldsymbol{\rho}$, such that
\begin{equation}
\boldsymbol{\rho} = \{\rho_1, \rho_2, \ldots \rho_m, \rho_n,
\ldots \rho_N \}
\label{introEq:MCrhoVector}
\end{equation}
a transition matrix $\boldsymbol{\Pi}$ can be defined whose elements
are $\pi_{mn}$ for each given transition. The limiting distribution of
the Markov chain can then be found by applying the transition matrix
an infinite number of times to the distribution vector.
\begin{equation}
\boldsymbol{\rho}_{\text{limit}} =
\lim_{N \rightarrow \infty} \boldsymbol{\rho}_{\text{initial}}
\boldsymbol{\Pi}^N
\label{introEq:MCmarkovLimit}
\end{equation}
The limiting distribution of the chain is independent of the starting
distribution, and successive applications of the transition matrix
will only yield the limiting distribution again.
\begin{equation}
\boldsymbol{\rho}_{\text{limit}} = \boldsymbol{\rho}_{\text{limit}}
\boldsymbol{\Pi}
\label{introEq:MCmarkovEquil}
\end{equation}
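
As a numerical illustration, repeatedly applying a (hypothetical)
transition matrix to an arbitrary starting distribution converges to
the limiting distribution of Eq.~\ref{introEq:MCmarkovLimit}. A short
Python sketch:
\begin{verbatim}
import numpy as np

# A made-up 3-state transition matrix; each row sums to 1.
Pi = np.array([[0.5, 0.3, 0.2],
               [0.2, 0.6, 0.2],
               [0.3, 0.3, 0.4]])

rho = np.array([1.0, 0.0, 0.0])   # arbitrary initial distribution
for _ in range(100):
    rho = rho @ Pi                # row vector times transition matrix
print(rho)                        # the limiting distribution
print(rho @ Pi)                   # unchanged: Eq. introEq:MCmarkovEquil
\end{verbatim}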

\subsection{\label{introSec:metropolisMethod}The Metropolis Method}

In the Metropolis method\cite{metropolis:1953}
Eq.~\ref{introEq:MCmarkovEquil} is solved such that
$\boldsymbol{\rho}_{\text{limit}}$ matches the Boltzmann distribution
of states. The method accomplishes this by imposing the strong
condition of microscopic reversibility on the equilibrium
distribution. That is, at equilibrium the probability of going
from $m$ to $n$ is the same as going from $n$ to $m$.
\begin{equation}
\rho_m\pi_{mn} = \rho_n\pi_{nm}
\label{introEq:MCmicroReverse}
\end{equation}
Further, $\boldsymbol{\alpha}$ is chosen to be a symmetric matrix in
the Metropolis method. Using Eq.~\ref{introEq:MCpi},
Eq.~\ref{introEq:MCmicroReverse} becomes
\begin{equation}
\frac{\accMe(m \rightarrow n)}{\accMe(n \rightarrow m)} =
\frac{\rho_n}{\rho_m}
\label{introEq:MCmicro2}
\end{equation}
For a Boltzmann limiting distribution,
\begin{equation}
\frac{\rho_n}{\rho_m} = e^{-\beta[\mathcal{U}(n) - \mathcal{U}(m)]}
= e^{-\beta \Delta \mathcal{U}}
\label{introEq:MCmicro3}
\end{equation}
This allows the following set of acceptance rules to be defined:
\begin{equation}
\accMe( m \rightarrow n ) =
\begin{cases}
1& \text{if $\Delta \mathcal{U} \leq 0$,} \\
e^{-\beta \Delta \mathcal{U}}& \text{if $\Delta \mathcal{U} > 0$.}
\end{cases}
\label{introEq:accRules}
\end{equation}

Using the acceptance criteria from Eq.~\ref{introEq:accRules} the
Metropolis method proceeds as follows
\begin{enumerate}
\item Generate an initial configuration $\mathbf{r}^N$ which has some finite probability in $\rho_{kT}$.
\item Modify $\mathbf{r}^N$ to generate configuration $\mathbf{r^{\prime}}^N$.
\item If the new configuration lowers the energy of the system, accept the move with unit probability ($\mathbf{r}^N$ becomes $\mathbf{r^{\prime}}^N$). Otherwise accept with probability $e^{-\beta \Delta \mathcal{U}}$.
\item Accumulate the average for the configurational observable of interest.
\item Repeat from step 2 until the average converges.
\end{enumerate}
One important note is that the average is accumulated whether the move
is accepted or not; this ensures proper weighting of the average.
Using Eq.~\ref{introEq:Importance4} it becomes clear that the
accumulated averages are the ensemble averages, as this method ensures
that the limiting distribution is the Boltzmann distribution.
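
These steps can be made concrete with a deliberately simple example: a
single particle in a one dimensional harmonic potential
$\mathcal{U}(x) = x^2/2$, in reduced units with $\beta = 1$. The
following Python sketch is illustrative only:
\begin{verbatim}
import math, random

beta = 1.0
def U(x):
    return 0.5 * x * x

x = 0.0                     # step 1: initial configuration
avg, n_steps = 0.0, 100000
for step in range(n_steps):
    x_new = x + random.uniform(-0.5, 0.5)   # step 2: trial move
    dU = U(x_new) - U(x)
    # step 3: Eq. introEq:accRules
    if dU <= 0.0 or random.random() < math.exp(-beta * dU):
        x = x_new
    avg += x * x            # step 4: accumulate, accepted or not
print(avg / n_steps)        # <x^2> -> 1/beta = 1 for this potential
\end{verbatim}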

\section{\label{introSec:MD}Molecular Dynamics Simulations}

The main simulation tool used in this research is Molecular Dynamics.
In Molecular Dynamics, the equations of motion for a system are
integrated in order to obtain information about both the positions and
momenta of a system, allowing the calculation of not only
configurational observables, but momentum-dependent ones as well:
diffusion constants, velocity autocorrelations, folding/unfolding
events, etc. Due to the principle of ergodicity,
Sec.~\ref{introSec:ergodic}, the averages of these observables over the
time period of the simulation are taken to be the ensemble averages
for the system.

The choice of when to use molecular dynamics over Monte Carlo
techniques is normally decided by the observables in which the
researcher is interested. If the observables depend on momenta in
any fashion, then the only choice is molecular dynamics in some form.
However, when the observable is dependent only on the configuration,
then most of the time Monte Carlo techniques will be more efficient.

The focus of research in the second half of this dissertation is
centered around the dynamic properties of phospholipid bilayers,
making molecular dynamics key in the simulation of those properties.

\subsection{\label{introSec:mdAlgorithm}The Molecular Dynamics Algorithm}

To illustrate how the molecular dynamics technique is applied, the
following sections will describe the sequence involved in a
simulation. Sec.~\ref{introSec:mdInit} deals with the initialization
of a simulation. Sec.~\ref{introSec:mdForce} discusses issues
involved with the calculation of the forces.
Sec.~\ref{introSec:mdIntegrate} concludes the algorithm discussion
with the integration of the equations of motion.\cite{Frenkel1996}

\subsection{\label{introSec:mdInit}Initialization}

When selecting the initial configuration for the simulation it is
important to consider what dynamics one is hoping to observe.
Ch.~\ref{chapt:lipid} deals with the formation and equilibrium dynamics of
phospholipid membranes. Therefore in these simulations initial
positions were selected that in some cases dispersed the lipids in
water, and in other cases structured the lipids into preformed
bilayers. Important considerations at this stage of the simulation are:
\begin{itemize}
\item There are no major overlaps of molecular or atomic orbitals.
\item Velocities are chosen so as not to give the system a non-zero total momentum or angular momentum.
\item It is also sometimes desirable to select the velocities to correctly sample the target temperature.
\end{itemize}

The first point is important due to the amount of potential energy
generated by having two particles too close together. If overlap
occurs, the first evaluation of forces will return numbers so large as
to render the numerical integration of the motion meaningless. The
second consideration keeps the system from drifting or rotating as a
whole. This arises from the fact that most simulations are of systems
in equilibrium in the absence of outside forces. Therefore any net
movement would be unphysical and an artifact of the simulation method
used. The final point addresses the selection of the magnitude of the
initial velocities. For many simulations it is convenient to use
this opportunity to scale the amount of kinetic energy to reflect the
desired thermal distribution of the system. However, it must be noted
that most systems will require further velocity rescaling after the
first few initial simulation steps due to either loss or gain of
kinetic energy from energy stored in potential degrees of freedom.
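
A minimal sketch of such an initialization, assuming equal particle
masses and reduced units ($k_B = 1$), might look like the following.
Velocities are drawn from the Maxwell-Boltzmann distribution and the
net momentum is then removed:
\begin{verbatim}
import numpy as np

def init_velocities(n_particles, T, m=1.0):
    sigma = np.sqrt(T / m)         # std. dev. per velocity component
    v = np.random.normal(0.0, sigma, (n_particles, 3))
    v -= v.mean(axis=0)            # zero the total momentum
    return v

v = init_velocities(1000, T=1.0)
print(v.sum(axis=0))               # ~ (0, 0, 0): no net drift
\end{verbatim}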

\subsection{\label{introSec:mdForce}Force Evaluation}

The evaluation of forces is the most computationally expensive portion
of a given molecular dynamics simulation. This is due entirely to the
evaluation of long range forces in a simulation, typically pair-wise.
These forces are most commonly the van der Waals force, and sometimes
Coulombic forces as well. For a pair-wise force, there are $N(N-1)/2$
pairs to be evaluated, where $N$ is the number of particles in the
system. This leads to the calculations scaling as $N^2$, making large
simulations prohibitive in the absence of any computation saving
techniques.

Another consideration one must resolve is that in a given simulation
a disproportionate number of the particles will feel the effects of
the surface.\cite{allen87:csl} For a cubic system of 1000 particles
arranged in a $10 \times 10 \times 10$ cube, 488 particles will be
exposed to the surface. Unless one is simulating an isolated particle
group in a vacuum, the behavior of the system will be far from the
desired bulk characteristics. To offset this, simulations employ
periodic boundary conditions.\cite{born:1912}

The technique involves the use of an algorithm that replicates the
simulation box on an infinite lattice in Cartesian space. Any given
particle leaving the simulation box on one side will have an image of
itself enter on the opposite side (see Fig.~\ref{introFig:pbc}). In
addition, this ensures that any two particles are within
$\text{box}/2$ of an image, real or periodic, of each other. A
discussion of the method used to calculate the periodic image can be
found in Sec.~\ref{oopseSec:pbc}.

\begin{figure}
\centering
\includegraphics[width=\linewidth]{pbcFig.eps}
\caption[An illustration of periodic boundary conditions]{A 2-D illustration of periodic boundary conditions. As one particle leaves the right of the simulation box, an image of it enters the left.}
\label{introFig:pbc}
\end{figure}
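
The wrapping itself reduces to the minimum image convention: each
component of a separation vector is shifted by an integer number of
box lengths so that no component exceeds $\text{box}/2$. A sketch for
a cubic box:
\begin{verbatim}
import numpy as np

def minimum_image(r_ij, box):
    # Wrap each component of the separation into [-box/2, box/2].
    return r_ij - box * np.round(r_ij / box)

box = 10.0
r_ij = np.array([9.0, -7.5, 0.3])
print(minimum_image(r_ij, box))    # [-1.0  2.5  0.3]
\end{verbatim}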

Returning to the topic of the computational scale of the force
evaluation, the use of periodic boundary conditions requires that a
cutoff radius be employed. Using a cutoff radius improves the
efficiency of the force evaluation, as particles farther than a
predetermined distance, $r_{\text{cut}}$, are not included in the
calculation.\cite{Frenkel1996} In a simulation with periodic images,
$r_{\text{cut}}$ has a maximum value of $\text{box}/2$.
Fig.~\ref{introFig:rMax} illustrates how, when using an
$r_{\text{cut}}$ larger than this value, or in the extreme limit of no
$r_{\text{cut}}$ at all, the corners of the simulation box are
unequally weighted due to the lack of particle images in the $x$, $y$,
or $z$ directions past a distance of $\text{box}/2$.

\begin{figure}
\centering
\includegraphics[width=\linewidth]{rCutMaxFig.eps}
\caption[An explanation of $r_{\text{cut}}$]{The yellow atom has all other images wrapped to itself as the center. If $r_{\text{cut}}=\text{box}/2$, then the distribution is uniform (blue atoms). However, when $r_{\text{cut}}>\text{box}/2$ the corners are disproportionately weighted (green atoms) vs.\ the axial directions (shaded regions).}
\label{introFig:rMax}
\end{figure}

With the use of an $r_{\text{cut}}$, however, comes a discontinuity in
the potential energy curve (Fig.~\ref{introFig:shiftPot}). To fix this
discontinuity, one calculates the potential energy at $r_{\text{cut}}$
and subtracts that value from the potential everywhere within the
cutoff, causing the function to go continuously to zero at the cutoff
radius. This shifted potential ensures conservation of energy when
integrating the Newtonian equations of motion.
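
Written out for a general pair potential $V(r)$ (a standard
construction, reproduced here for reference), the shifted potential is
\begin{equation}
V_{\text{shifted}}(r) =
\begin{cases}
V(r) - V(r_{\text{cut}})& \text{if $r \leq r_{\text{cut}}$,} \\
0& \text{if $r > r_{\text{cut}}$.}
\end{cases}
\label{introEq:shiftedPot}
\end{equation}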

\begin{figure}
\centering
\includegraphics[width=\linewidth]{shiftedPot.eps}
\caption[Shifting the Lennard-Jones Potential]{The Lennard-Jones potential (blue line) is shifted (red line) to remove the discontinuity at $r_{\text{cut}}$.}
\label{introFig:shiftPot}
\end{figure}

The second main simplification used in this research is the Verlet
neighbor list.\cite{allen87:csl} In the Verlet method, one generates
a list of all neighbor atoms, $j$, surrounding atom $i$ within some
cutoff $r_{\text{list}}$, where $r_{\text{list}}>r_{\text{cut}}$.
This list is created the first time forces are evaluated; on
subsequent force evaluations, pair interactions are only calculated
for particles appearing in the neighbor lists. The lists are updated
if any given particle in the system moves farther than
$r_{\text{list}}-r_{\text{cut}}$, giving rise to the possibility that
a particle has left or joined a neighbor list.
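
A naive $N^2$ construction of such lists (periodic wrapping omitted
for brevity; the names below are illustrative only):
\begin{verbatim}
import numpy as np

def build_neighbor_lists(positions, r_list):
    # For each atom i, store every j > i within r_list of i.
    n = len(positions)
    neighbors = [[] for _ in range(n)]
    for i in range(n - 1):
        for j in range(i + 1, n):
            if np.linalg.norm(positions[i] - positions[j]) < r_list:
                neighbors[i].append(j)
    return neighbors

pos = np.random.rand(100, 3) * 10.0
lists = build_neighbor_lists(pos, r_list=3.0)
\end{verbatim}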

\subsection{\label{introSec:mdIntegrate} Integration of the equations of motion}

A starting point for the discussion of molecular dynamics integrators
is the Verlet algorithm.\cite{Frenkel1996} It begins with a Taylor
expansion of position in time:
\begin{equation}
q(t+\Delta t)= q(t) + v(t)\Delta t + \frac{F(t)}{2m}\Delta t^2 +
\frac{\Delta t^3}{3!}\frac{\partial^3 q(t)}{\partial t^3} +
\mathcal{O}(\Delta t^4)
\label{introEq:verletForward}
\end{equation}
As well as,
\begin{equation}
q(t-\Delta t)= q(t) - v(t)\Delta t + \frac{F(t)}{2m}\Delta t^2 -
\frac{\Delta t^3}{3!}\frac{\partial^3 q(t)}{\partial t^3} +
\mathcal{O}(\Delta t^4)
\label{introEq:verletBack}
\end{equation}
Where $m$ is the mass of the particle, $q(t)$ is the position at time
$t$, $v(t)$ the velocity, and $F(t)$ the force acting on the
particle. Adding together Eq.~\ref{introEq:verletForward} and
Eq.~\ref{introEq:verletBack} results in,
\begin{equation}
q(t+\Delta t)+q(t-\Delta t) =
2q(t) + \frac{F(t)}{m}\Delta t^2 + \mathcal{O}(\Delta t^4)
\label{introEq:verletSum}
\end{equation}
Or equivalently,
\begin{equation}
q(t+\Delta t) \approx
2q(t) - q(t-\Delta t) + \frac{F(t)}{m}\Delta t^2
\label{introEq:verletFinal}
\end{equation}
Which contains an error in the estimate of the new positions on the
order of $\Delta t^4$.

In practice, however, the simulations in this research were integrated
with a velocity reformulation of the Verlet method.\cite{allen87:csl}
\begin{align}
q(t+\Delta t) &= q(t) + v(t)\Delta t + \frac{F(t)}{2m}\Delta t^2 %
\label{introEq:MDvelVerletPos} \\%
%
v(t+\Delta t) &= v(t) + \frac{\Delta t}{2m}[F(t) + F(t+\Delta t)] %
\label{introEq:MDvelVerletVel}
\end{align}
The original Verlet algorithm can be regained by substituting the
velocity back into Eq.~\ref{introEq:MDvelVerletPos}. The Verlet
formulations are chosen in this research because the algorithms have
very little long term drift in energy conservation. Energy
conservation in a molecular dynamics simulation is of extreme
importance, as it is a measure of how closely one is following the
``true'' trajectory with the finite integration scheme. An exact
solution to the integration will conserve area in phase space, as well
as be reversible in time, that is, the trajectory integrated forward
or backwards will exactly match itself. Having a finite algorithm
that both conserves area in phase space and is time reversible
therefore increases, but does not guarantee, the ``correctness'' of
the integrated trajectory.
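
One velocity Verlet step, written as a short sketch in Python with a
stand-in harmonic force $F = -q$ (illustrative only, not the
production code used in this research):
\begin{verbatim}
import numpy as np

def velocity_verlet_step(q, v, F, dt, m, force):
    # Eqs. introEq:MDvelVerletPos and introEq:MDvelVerletVel
    q_new = q + v * dt + F / (2.0 * m) * dt**2
    F_new = force(q_new)
    v_new = v + dt / (2.0 * m) * (F + F_new)
    return q_new, v_new, F_new

force = lambda q: -q
q, v, m, dt = np.array([1.0]), np.array([0.0]), 1.0, 0.01
F = force(q)
for _ in range(1000):
    q, v, F = velocity_verlet_step(q, v, F, dt, m, force)
print(q, v)   # follows the harmonic orbit q ~ cos(t), v ~ -sin(t)
\end{verbatim}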

It can be shown\cite{Frenkel1996} that although the Verlet algorithm
does not rigorously preserve the actual Hamiltonian, it does preserve
a pseudo-Hamiltonian which shadows the real one in phase space. This
pseudo-Hamiltonian is provably area-conserving as well as time
reversible. The fact that it shadows the true Hamiltonian in phase
space is acceptable in actual simulations as one is interested in the
ensemble average of the observable being measured. From the ergodic
hypothesis (Sec.~\ref{introSec:ergodic}), it is known that the time
average will match the ensemble average, therefore two similar
trajectories in phase space should give matching statistical averages.

\subsection{\label{introSec:MDfurther}Further Considerations}

In the simulations presented in this research, a few additional
parameters are needed to describe the motions. The simulations
involving water and phospholipids in Ch.~\ref{chapt:lipid} are
required to integrate the equations of motion for dipoles on atoms.
This requires that an additional three parameters be specified for each
dipole atom: $\phi$, $\theta$, and $\psi$. These three angles are
taken to be the Euler angles, where $\phi$ is a rotation about the
$z$-axis, $\theta$ is a rotation about the new $x$-axis, and
$\psi$ is a final rotation about the new $z$-axis (see
Fig.~\ref{introFig:eulerAngles}). This sequence of rotations can be
accumulated into a single $3 \times 3$ matrix, $\mathbf{A}$,
defined as follows:
\begin{equation}
\mathbf{A} =
\begin{bmatrix}
\cos\phi\cos\psi-\sin\phi\cos\theta\sin\psi &%
\sin\phi\cos\psi+\cos\phi\cos\theta\sin\psi &%
\sin\theta\sin\psi \\%
%
-\cos\phi\sin\psi-\sin\phi\cos\theta\cos\psi &%
-\sin\phi\sin\psi+\cos\phi\cos\theta\cos\psi &%
\sin\theta\cos\psi \\%
%
\sin\phi\sin\theta &%
-\cos\phi\sin\theta &%
\cos\theta
\end{bmatrix}
\label{introEq:EulerRotMat}
\end{equation}

\begin{figure}
\centering
\includegraphics[width=\linewidth]{eulerRotFig.eps}
\caption[Euler rotation of Cartesian coordinates]{The rotation scheme for Euler angles. First is a rotation of $\phi$ about the $z$ axis (blue rotation). Next is a rotation of $\theta$ about the new $x$ axis (green rotation). Lastly is a final rotation of $\psi$ about the new $z$ axis (red rotation).}
\label{introFig:eulerAngles}
\end{figure}
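
As a consistency check, Eq.~\ref{introEq:EulerRotMat} can be rebuilt
from the three elementary rotations it describes. A Python sketch
(illustrative only), using the $z$-$x$-$z$ convention above:
\begin{verbatim}
import numpy as np

def Rz(a):
    return np.array([[ np.cos(a), np.sin(a), 0.0],
                     [-np.sin(a), np.cos(a), 0.0],
                     [ 0.0,       0.0,       1.0]])

def Rx(a):
    return np.array([[1.0,  0.0,       0.0],
                     [0.0,  np.cos(a), np.sin(a)],
                     [0.0, -np.sin(a), np.cos(a)]])

phi, theta, psi = 0.3, 0.7, 1.1    # arbitrary test angles
A = Rz(psi) @ Rx(theta) @ Rz(phi)  # composite rotation matrix

# A is orthogonal, and its first element matches Eq. introEq:EulerRotMat
print(np.allclose(A @ A.T, np.eye(3)))
print(np.isclose(A[0, 0],
      np.cos(phi)*np.cos(psi) - np.sin(phi)*np.cos(theta)*np.sin(psi)))
\end{verbatim}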

The equations of motion for Euler angles can be written down
as\cite{allen87:csl}
\begin{align}
\dot{\phi} &= -\omega^s_x \frac{\sin\phi\cos\theta}{\sin\theta} +
\omega^s_y \frac{\cos\phi\cos\theta}{\sin\theta} +
\omega^s_z
\label{introEq:MDeulerPhi} \\%
%
\dot{\theta} &= \omega^s_x \cos\phi + \omega^s_y \sin\phi
\label{introEq:MDeulerTheta} \\%
%
\dot{\psi} &= \omega^s_x \frac{\sin\phi}{\sin\theta} -
\omega^s_y \frac{\cos\phi}{\sin\theta}
\label{introEq:MDeulerPsi}
\end{align}
Where $\omega^s_i$ is the angular velocity in the lab space frame
along Cartesian coordinate $i$. However, a difficulty arises when
attempting to integrate Eq.~\ref{introEq:MDeulerPhi} and
Eq.~\ref{introEq:MDeulerPsi}. The $\frac{1}{\sin \theta}$ present in
both equations means there is a non-physical instability present when
$\theta$ is 0 or $\pi$. To correct for this, the simulations integrate
the rotation matrix, $\mathbf{A}$, directly, thus avoiding the
instability. This method was proposed by Dullweber
\emph{et al.},\cite{Dullweber1997} and is presented in
Sec.~\ref{introSec:MDsymplecticRot}.

\subsection{\label{introSec:MDliouville}Liouville Propagator}

Before discussing the integration of the rotation matrix, it is
necessary to understand the construction of a ``good'' integration
scheme. It has been previously
discussed (Sec.~\ref{introSec:mdIntegrate}) how it is desirable for an
integrator to be symplectic and time reversible. The following is an
outline of the Trotter factorization of the Liouville Propagator as a
scheme for generating symplectic integrators.\cite{Tuckerman92}

For a system with $f$ degrees of freedom the Liouville operator can be
defined as,
\begin{equation}
iL=\sum^f_{j=1} \biggl [\dot{q}_j\frac{\partial}{\partial q_j} +
F_j\frac{\partial}{\partial p_j} \biggr ]
\label{introEq:LiouvilleOperator}
\end{equation}
Here, $q_j$ and $p_j$ are the position and conjugate momenta of a
degree of freedom, and $F_j$ is the force on that degree of freedom.
$\Gamma$ is defined as the set of all positions and conjugate momenta,
$\{q_j,p_j\}$, and the propagator, $U(t)$, is defined
\begin{equation}
U(t) = e^{iLt}
\label{introEq:Lpropagator}
\end{equation}
This allows the specification of $\Gamma$ at any time $t$ as
\begin{equation}
\Gamma(t) = U(t)\Gamma(0)
\label{introEq:Lp2}
\end{equation}
It is important to note that $U(t)$ is a unitary operator, meaning
\begin{equation}
U(-t)=U^{-1}(t)
\label{introEq:Lp3}
\end{equation}

Decomposing $L$ into two parts, $iL_1$ and $iL_2$, one can use the
Trotter theorem to yield
\begin{align}
e^{iLt} &= e^{i(L_1 + L_2)t} \notag \\%
%
&= \biggl [ e^{i(L_1 +L_2)\frac{t}{P}} \biggr]^P \notag \\%
%
&= \biggl [ e^{iL_1\frac{\Delta t}{2}}\, e^{iL_2\Delta t}\,
e^{iL_1\frac{\Delta t}{2}} \biggr ]^P +
\mathcal{O}\biggl (\frac{t^3}{P^2} \biggr ) \label{introEq:Lp4}
\end{align}
Where $\Delta t = t/P$.
With this, a discrete time operator $G(\Delta t)$ can be defined:
\begin{align}
G(\Delta t) &= e^{iL_1\frac{\Delta t}{2}}\, e^{iL_2\Delta t}\,
e^{iL_1\frac{\Delta t}{2}} \notag \\%
%
&= U_1 \biggl ( \frac{\Delta t}{2} \biggr )\, U_2 ( \Delta t )\,
U_1 \biggl ( \frac{\Delta t}{2} \biggr )
\label{introEq:Lp5}
\end{align}
Because $U_1(t)$ and $U_2(t)$ are unitary, $G(\Delta t)$ is also
unitary, meaning an integrator based on this factorization will be
reversible in time.

As an example, consider the following decomposition of $L$:
\begin{align}
iL_1 &= \dot{q}\frac{\partial}{\partial q}%
\label{introEq:Lp6a} \\%
%
iL_2 &= F(q)\frac{\partial}{\partial p}%
\label{introEq:Lp6b}
\end{align}
This leads to propagator $G( \Delta t )$ as,
\begin{equation}
G(\Delta t) = e^{\frac{\Delta t}{2} F(q)\frac{\partial}{\partial p}} \,
e^{\Delta t\,\dot{q}\frac{\partial}{\partial q}} \,
e^{\frac{\Delta t}{2} F(q)\frac{\partial}{\partial p}}
\label{introEq:Lp7}
\end{equation}
Operating $G(\Delta t)$ on $\Gamma(0)$, and utilizing the operator
property
\begin{equation}
e^{c\frac{\partial}{\partial x}}\, f(x) = f(x+c)
\label{introEq:Lp8}
\end{equation}
where $c$ is independent of $x$, one obtains the following:
\begin{align}
\dot{q}\biggl (\frac{\Delta t}{2}\biggr ) &=
\dot{q}(0) + \frac{\Delta t}{2m}\, F[q(0)] \label{introEq:Lp9a}\\%
%
q(\Delta t) &= q(0) + \Delta t\, \dot{q}\biggl (\frac{\Delta t}{2}\biggr )%
\label{introEq:Lp9b}\\%
%
\dot{q}(\Delta t) &= \dot{q}\biggl (\frac{\Delta t}{2}\biggr ) +
\frac{\Delta t}{2m}\, F[q(\Delta t)] \label{introEq:Lp9c}
\end{align}
Or written another way,
\begin{align}
q(t+\Delta t) &= q(0) + \dot{q}(0)\Delta t +
\frac{F[q(0)]}{m}\frac{\Delta t^2}{2} %
\label{introEq:Lp10a} \\%
%
\dot{q}(\Delta t) &= \dot{q}(0) + \frac{\Delta t}{2m}
\biggl [F[q(0)] + F[q(\Delta t)] \biggr] %
\label{introEq:Lp10b}
\end{align}
This is the velocity Verlet formulation presented in
Sec.~\ref{introSec:mdIntegrate}. Because this integration scheme is
composed of unitary propagators, it is symplectic, and therefore area
preserving in phase space. From the preceding factorization, one can
see that the integration of the equations of motion would follow:
\begin{enumerate}
\item Calculate the velocities at the half step, $\frac{\Delta t}{2}$, from the forces calculated at the initial position.

\item Use the half step velocities to move positions one whole step, $\Delta t$.

\item Evaluate the forces at the new positions, $q(\Delta t)$, and use the new forces to complete the velocity move.

\item Repeat from step 1 with the new position, velocities, and forces assuming the roles of the initial values.
\end{enumerate}

\subsection{\label{introSec:MDsymplecticRot} Symplectic Propagation of the Rotation Matrix}

Based on the factorization from the previous section,
Dullweber \emph{et al.}\cite{Dullweber1997} proposed a scheme for the
symplectic propagation of the rotation matrix, $\mathbf{A}$, as an
alternative method for the integration of orientational degrees of
freedom. The method starts with a straightforward splitting of the
Liouville operator:
\begin{align}
iL_{\text{pos}} &= \dot{q}\frac{\partial}{\partial q} +
\mathbf{\dot{A}}\frac{\partial}{\partial \mathbf{A}}
\label{introEq:SR1a} \\%
%
iL_F &= F(q)\frac{\partial}{\partial p}
\label{introEq:SR1b} \\%
%
iL_{\tau} &= \tau(\mathbf{A})\frac{\partial}{\partial \pi}
\label{introEq:SR1c}
\end{align}
Where $\tau(\mathbf{A})$ is the torque of the system
due to the configuration, and $\pi$ is the conjugate
angular momentum of the system. The propagator, $G(\Delta t)$, becomes
\begin{equation}
G(\Delta t) = e^{\frac{\Delta t}{2} F(q)\frac{\partial}{\partial p}} \,
e^{\frac{\Delta t}{2} \tau(\mathbf{A})\frac{\partial}{\partial \pi}} \,
e^{\Delta t\,iL_{\text{pos}}} \,
e^{\frac{\Delta t}{2} \tau(\mathbf{A})\frac{\partial}{\partial \pi}} \,
e^{\frac{\Delta t}{2} F(q)\frac{\partial}{\partial p}}
\label{introEq:SR2}
\end{equation}
Propagation of the linear and angular momenta follows as in the Verlet
scheme. The propagation of positions also follows the Verlet scheme
with the addition of a further symplectic splitting of the rotation
matrix propagation, $\mathcal{U}_{\text{rot}}(\Delta t)$, within
$U_{\text{pos}}(\Delta t)$.
\begin{equation}
\mathcal{U}_{\text{rot}}(\Delta t) =
\mathcal{U}_x \biggl(\frac{\Delta t}{2}\biggr)\,
\mathcal{U}_y \biggl(\frac{\Delta t}{2}\biggr)\,
\mathcal{U}_z (\Delta t)\,
\mathcal{U}_y \biggl(\frac{\Delta t}{2}\biggr)\,
\mathcal{U}_x \biggl(\frac{\Delta t}{2}\biggr)
\label{introEq:SR3}
\end{equation}
Where $\mathcal{U}_j$ is a unitary rotation of $\mathbf{A}$ and
$\pi$ about each axis $j$. As all propagations are now
unitary and symplectic, the entire integration scheme is also
symplectic and time reversible.

\section{\label{introSec:layout}Dissertation Layout}

This dissertation is divided as follows: Ch.~\ref{chapt:RSA}
presents the random sequential adsorption simulations of related
phthalocyanines on a gold (111) surface. Ch.~\ref{chapt:OOPSE}
is about the writing of the molecular dynamics simulation package
{\sc oopse}. Ch.~\ref{chapt:lipid} concerns the simulations of
phospholipid bilayers using a mesoscale model. Lastly,
Ch.~\ref{chapt:conclusion} concludes this dissertation with a
summary of all results. The chapters are arranged in chronological
order, and reflect the progression of techniques I employed during my
research.

The chapter concerning random sequential adsorption simulations is a
study in applying Statistical Mechanics simulation techniques in order
to obtain a simple model capable of explaining the results. My
advisor, Dr. Gezelter, and I were approached by a colleague,
Dr. Lieberman, about possible explanations for the partial coverage of a
gold surface by a particular compound of hers. We suggested it might
be due to the statistical packing fraction of disks on a plane, and
set about to simulate this system. As the events in our model were
not dynamic in nature, a Monte Carlo method was employed. Here, if a
molecule landed on the surface without overlapping another, then its
landing was accepted. However, if there was overlap, the landing was
rejected and a new random landing location was chosen. This defined
our acceptance rules and allowed us to construct a Markov chain whose
limiting distribution was the surface coverage in which we were
interested.

The following chapter, about the simulation package {\sc oopse},
describes in detail the large body of scientific code that had to be
written in order to study phospholipid bilayers. Although there are
pre-existing molecular dynamics simulation packages available, none
were capable of implementing the models we were developing. {\sc oopse}
is a unique package capable of not only integrating the equations of
motion in Cartesian space, but also of integrating the rotational
motion of rigid bodies and dipoles. Add to this the
ability to perform calculations across parallel processors and a
flexible script syntax for creating systems, and {\sc oopse} becomes a
very powerful scientific instrument for the exploration of our model.

This brings us to Ch.~\ref{chapt:lipid}. Using {\sc oopse}, I have been
able to parameterize a mesoscale model for phospholipid simulations.
This model retains information about solvent ordering around the
bilayer, as well as information regarding the interaction of the
phospholipid head groups' dipoles with each other and the surrounding
solvent. These simulations give us insight into the dynamic events
that lead to the formation of phospholipid bilayers, as well as
provide the foundation for future exploration of bilayer phase
behavior with this model.

This leads into the last chapter, where I discuss future directions
for both {\sc oopse} and this mesoscale model. Additionally, I will
give a summary of results for this dissertation.