\chapter{\label{chapt:intro}Introduction and Theoretical Background}

\section{\label{introSec:theory}Theoretical Background}

The techniques used in the course of this research fall under the two
main classes of molecular simulation: Molecular Dynamics and Monte
Carlo. Molecular Dynamics simulations integrate the equations of motion
for a given system of particles, allowing the researcher to gain
insight into the time-dependent evolution of the system. Diffusion
phenomena are readily studied with this technique, making Molecular
Dynamics the primary simulation method used in this research. Other
aspects of the research fall under the Monte Carlo class of
simulations. In Monte Carlo, the configuration space available to the
collection of particles is sampled stochastically, or randomly. Each
configuration is chosen with a probability based on the
Maxwell-Boltzmann distribution. These types of simulations are best
used to probe properties of a system that depend only on the state of
the system. Structural information about a system is most readily
obtained through these methods.

Although the two techniques employed seem dissimilar, they are both
linked by the overarching principles of Statistical
Thermodynamics. Statistical Thermodynamics governs the behavior of
both classes of simulations and dictates what each method can and
cannot do. When investigating a system, one must first analyze which
thermodynamic properties of the system are being probed, then choose
the method best suited to that objective.

\subsection{\label{introSec:statThermo}Statistical Thermodynamics}

ergodic hypothesis

ensemble averages

\subsection{\label{introSec:monteCarlo}Monte Carlo Simulations}

The Monte Carlo method was developed by Metropolis and Ulam for their
work on fissionable material.\cite{metropolis:1949} The method is so
named because of its heavy use of random numbers in its
solution.\cite{allen87:csl} The Monte Carlo method allows for the
solution of integrals through the stochastic sampling of the values
within the integral. In the simplest case, the evaluation of an
integral follows a brute force method of
sampling.\cite{Frenkel1996} Consider the following one-dimensional
integral:
\begin{equation}
I = \int_a^b f(x)\,dx
\label{eq:MCex1}
\end{equation}
This equation can be recast as:
\begin{equation}
I = (b-a)\langle f(x) \rangle
\label{eq:MCex2}
\end{equation}
where $\langle f(x) \rangle$ is the unweighted average of $f(x)$ over the interval
$[a,b]$. The integral can then be approximated by
randomly choosing points along the interval $[a,b]$ and calculating
the value of $f(x)$ at each point. The accumulated average
approaches $I$ in the limit where the number of trials is infinitely
large.
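
As a minimal illustration of this brute force approach, the following
Python sketch estimates a one-dimensional integral by averaging $f(x)$
at uniformly random points on $[a,b]$; the integrand $f(x) = x^2$ and
the interval are arbitrary example choices, not taken from the
discussion above.

\begin{verbatim}
import random

def brute_force_mc(f, a, b, n_trials=100000):
    """Estimate I = integral of f from a to b as (b - a) * <f(x)>,
    where <f(x)> is the unweighted average over random points in [a, b]."""
    total = 0.0
    for _ in range(n_trials):
        x = random.uniform(a, b)
        total += f(x)
    return (b - a) * total / n_trials

# Example: the integral of x^2 on [0, 1] is 1/3.
print(brute_force_mc(lambda x: x * x, 0.0, 1.0))
\end{verbatim}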

However, in Statistical Mechanics, one is typically interested in
integrals of the form:
\begin{equation}
\langle A \rangle = \frac{\int dr^N A(r^N)\, e^{-\beta V(r^N)}}
	{\int dr^N e^{-\beta V(r^N)}}
\label{eq:mcEnsAvg}
\end{equation}
where $\beta = 1/k_B T$, $V(r^N)$ is the potential energy, $r^N$ stands
for the coordinates of all $N$ particles, and $A$ is
some observable that depends only on position. $\langle A \rangle$ is the
ensemble average of $A$ as presented in
Sec.~\ref{introSec:statThermo}. Because $A$ is independent of
momentum, the momentum contributions of the integral factor
out, leaving the configurational integral shown above. Application of the brute
force method to this integral would be highly
inefficient. Due to the Boltzmann weighting of the integrand, most randomly
chosen configurations make a near-zero contribution to the ensemble
average. This is where importance sampling comes into
play.\cite{allen87:csl}

Importance Sampling is a method where one selects a distribution from
which the random configurations are chosen in order to more
efficiently calculate the integral.\cite{Frenkel1996} Consider again
Eq.~\ref{eq:MCex1} rewritten to be:
\begin{equation}
EQ Here
\end{equation}
Where $fix$ is an arbitrary probability distribution in $x$. If one
conducts $fix$ trials selecting a random number, $fix$, from the
distribution $fix$ on the interval $[a,b]$, then Eq.~\ref{fix} becomes
\begin{equation}
EQ Here
\end{equation}
Looking at Eq.~\ref{fix}, and realizing
\begin{equation}
EQ Here
\end{equation}
the ensemble average can be rewritten as
\begin{equation}
EQ Here
\end{equation}
Applying Eq.~\ref{fix}, one obtains
\begin{equation}
EQ Here
\end{equation}
By selecting $fix$ to be $fix$, Eq.~\ref{fix} becomes
\begin{equation}
EQ Here
\end{equation}
The difficulty is selecting points $fix$ such that they are sampled
from the distribution $fix$. A solution was proposed by Metropolis et
al.\cite{fix} which involved the use of a Markov chain whose limiting
distribution was $fix$.
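
A small numerical sketch of importance sampling in this spirit is
given below; the integrand, the sampling distribution $\rho(x) = 2x$
on $[0,1]$, and the sample size are illustrative choices only and do
not correspond to the placeholders above. The integral is recovered as
the average of $f(x)/\rho(x)$ over points drawn from $\rho(x)$.

\begin{verbatim}
import random

def importance_sampling_mc(n_trials=100000):
    """Estimate I = integral_0^1 x^2 dx by drawing x from the
    distribution rho(x) = 2x (inverse transform: x = sqrt(u))
    and averaging f(x) / rho(x) over the samples."""
    total = 0.0
    for _ in range(n_trials):
        x = random.uniform(0.0, 1.0) ** 0.5   # x distributed as rho(x) = 2x
        total += (x * x) / (2.0 * x)          # f(x) / rho(x)
    return total / n_trials

print(importance_sampling_mc())   # approaches 1/3
\end{verbatim}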

\subsection{Markov Chains}

A Markov chain is a chain of states satisfying the following
conditions:\cite{fix}
\begin{itemize}
\item The outcome of each trial depends only on the outcome of the previous trial.
\item Each trial belongs to a finite set of outcomes called the state space.
\end{itemize}
Given two configurations, $fix$ and $fix$, $fix$ and $fix$ are the
probabilities of being in states $fix$ and $fix$ respectively.
Further, the two states are linked by a transition probability, $fix$,
which is the probability of going from state $m$ to state $n$.

The transition probability is given by the following:
\begin{equation}
EQ Here
\end{equation}
where $fix$ is the probability of attempting the move $fix$, and $fix$
is the probability of accepting the move $fix$. Defining a
probability vector, $fix$, such that
\begin{equation}
EQ Here
\end{equation}
a transition matrix $fix$ can be defined, whose elements are $fix$
for each given transition. The limiting distribution of the Markov
chain can then be found by applying the transition matrix an infinite
number of times to the distribution vector:
\begin{equation}
EQ Here
\end{equation}

The limiting distribution of the chain is independent of the starting
distribution, and successive applications of the transition matrix
will only yield the limiting distribution again:
\begin{equation}
EQ Here
\end{equation}
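
The notion of a limiting distribution can be made concrete with a
short numerical sketch. The three-state transition matrix below is an
arbitrary example (not taken from the text above); repeated
application of it to any starting probability vector converges to the
same limiting distribution, and applying the matrix once more leaves
that distribution unchanged.

\begin{verbatim}
import numpy as np

# Arbitrary 3-state transition matrix; rows sum to 1, and element
# [m, n] is the probability of moving from state m to state n.
P = np.array([[0.5, 0.3, 0.2],
              [0.2, 0.6, 0.2],
              [0.3, 0.3, 0.4]])

rho = np.array([1.0, 0.0, 0.0])   # starting distribution (any choice works)
for _ in range(1000):             # repeated application approximates the limit
    rho = rho @ P

print(rho)        # limiting distribution
print(rho @ P)    # applying P again returns the same distribution
\end{verbatim}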

\subsection{The Metropolis Method}

In the Metropolis method,\cite{fix} Eq.~\ref{fix} is solved such that
$fix$ matches the Boltzmann distribution of states. The method
accomplishes this by imposing the strong condition of microscopic
reversibility on the equilibrium distribution, meaning that at
equilibrium the probability of going from $m$ to $n$ is the same as
going from $n$ to $m$:
\begin{equation}
EQ Here
\end{equation}
Further, $fix$ is chosen to be a symmetric matrix in the Metropolis
method. Using Eq.~\ref{fix}, Eq.~\ref{fix} becomes
\begin{equation}
EQ Here
\end{equation}
For a Boltzmann limiting distribution,
\begin{equation}
EQ Here
\end{equation}
This allows the following set of acceptance rules to be defined:
\begin{equation}
EQ Here
\end{equation}

Using the acceptance criteria from Eq.~\ref{fix}, the Metropolis method
proceeds as follows:
\begin{itemize}
\item Generate an initial configuration $fix$ which has some finite probability in $fix$.
\item Modify $fix$ to generate configuration $fix$.
\item If configuration $n$ lowers the energy of the system, accept the move with unit probability ($fix$ becomes $fix$). Otherwise accept with probability $fix$.
\item Accumulate the average for the configurational observable of interest.
\item Repeat from step 2 until the average converges.
\end{itemize}
One important note is that the average is accumulated whether the move
is accepted or not; this ensures proper weighting of the average.
Using Eq.~\ref{fix}, it becomes clear that the accumulated averages are
the ensemble averages, as this method ensures that the limiting
distribution is the Boltzmann distribution.

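
The following Python sketch applies these acceptance rules to a single
particle in a one-dimensional harmonic potential; the potential, step
size, and temperature are arbitrary illustrative choices, and the
average of $x^2$ is accumulated at every step whether or not the trial
move is accepted.

\begin{verbatim}
import math, random

def metropolis_harmonic(n_steps=200000, beta=1.0, step=0.5):
    """Sample x from the Boltzmann distribution exp(-beta * x^2 / 2)
    with Metropolis moves, accumulating the configurational
    average <x^2>."""
    x = 0.0
    total = 0.0
    for _ in range(n_steps):
        x_new = x + random.uniform(-step, step)    # trial move m -> n
        dE = 0.5 * (x_new * x_new - x * x)         # energy change
        if dE <= 0.0 or random.random() < math.exp(-beta * dE):
            x = x_new                              # accept; otherwise keep x
        total += x * x                             # accumulate every step
    return total / n_steps

print(metropolis_harmonic())   # approaches 1/beta for this potential
\end{verbatim}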

\subsection{\label{introSec:md}Molecular Dynamics Simulations}

The main simulation tool used in this research is Molecular Dynamics.
In Molecular Dynamics, the equations of motion for a system are
integrated in order to obtain information about both the positions and
momenta of the system, allowing the calculation of not only
configurational observables, but momentum-dependent ones as well:
diffusion constants, velocity autocorrelations, folding/unfolding
events, etc. Due to the principle of ergodicity, Eq.~\ref{fix}, the
averages of these observables over the time period of the simulation
are taken to be the ensemble averages for the system.
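
As an illustration of one such momentum-dependent observable, the
sketch below computes a velocity autocorrelation function from a
stored trajectory and a crude Green-Kubo estimate of the diffusion
constant; the array shapes, the timestep, and the random trajectory
used in the example are hypothetical placeholders, not data from this
work.

\begin{verbatim}
import numpy as np

def velocity_autocorrelation(vel):
    """vel has shape (n_frames, n_particles, 3).  Returns C(t), the
    dot product v(t0) . v(t0 + t) averaged over particles and time
    origins t0."""
    n_frames = vel.shape[0]
    c = np.zeros(n_frames)
    for t in range(n_frames):
        dots = np.einsum('fpx,fpx->fp', vel[:n_frames - t], vel[t:])
        c[t] = dots.mean()
    return c

# Hypothetical example: random velocities standing in for a real trajectory.
rng = np.random.default_rng(0)
vel = rng.normal(size=(200, 50, 3))
c = velocity_autocorrelation(vel)

dt = 0.002                 # assumed time between stored frames
D = c.sum() * dt / 3.0     # crude Green-Kubo estimate: D = (1/3) integral C(t) dt
print(D)
\end{verbatim}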

The choice of when to use Molecular Dynamics over Monte Carlo
techniques is normally decided by the observables in which the
researcher is interested. If the observables depend on momenta in
any fashion, then the only choice is Molecular Dynamics in some form.
However, when the observable depends only on the configuration,
Monte Carlo techniques will usually be more efficient.

The focus of research in the second half of this dissertation is
centered on the dynamic properties of phospholipid bilayers,
making Molecular Dynamics key in the simulation of those properties.

\subsection{Molecular Dynamics Algorithm}

To illustrate how the molecular dynamics technique is applied, the
following sections describe the sequence involved in a
simulation. Sec.~\ref{fix} deals with the initialization of a
simulation. Sec.~\ref{fix} discusses issues involved with the
calculation of the forces. Sec.~\ref{fix} concludes the algorithm
discussion with the integration of the equations of motion.\cite{fix}

\subsection{Initialization}

When selecting the initial configuration for the simulation, it is
important to consider what dynamics one is hoping to observe.
Ch.~\ref{fix} deals with the formation and equilibrium dynamics of
phospholipid membranes. Therefore, in these simulations initial
positions were selected that in some cases dispersed the lipids in
water, and in other cases structured the lipids into preformed
bilayers. Important considerations at this stage of the simulation are:
\begin{itemize}
\item There are no major overlaps of molecular or atomic orbitals.
\item Velocities are chosen in such a way as to not give the system a non-zero total momentum or angular momentum.
\item It is also sometimes desirable to select the velocities to correctly sample the target temperature.
\end{itemize}

The first point is important due to the amount of potential energy
generated by having two particles too close together. If overlap
occurs, the first evaluation of forces will return numbers so large as
to render the numerical integration of the motion meaningless. The
second consideration keeps the system from drifting or rotating as a
whole. This arises from the fact that most simulations are of systems
in equilibrium in the absence of outside forces. Therefore, any net
movement would be unphysical and an artifact of the simulation method
used. The final point addresses the selection of the magnitude of the
initial velocities. For many simulations it is convenient to use
this opportunity to scale the amount of kinetic energy to reflect the
desired thermal distribution of the system. However, it must be noted
that most systems will require further velocity rescaling after the
first few simulation steps due to either loss or gain of
kinetic energy from energy stored in potential degrees of freedom.

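A minimal sketch of such an initialization is shown below: velocities
are drawn from a Maxwell-Boltzmann distribution, the net momentum is
removed, and the kinetic energy is rescaled to the target temperature.
The equal particle masses, the temperature, and the unit system are
placeholder choices, and the removal of net angular momentum is
omitted for brevity.

\begin{verbatim}
import numpy as np

def initialize_velocities(n_particles, mass, target_T, k_B=1.0, seed=None):
    """Draw velocities from a Maxwell-Boltzmann distribution at
    target_T, remove the net (center of mass) momentum, then rescale
    the kinetic energy to correspond exactly to target_T."""
    rng = np.random.default_rng(seed)
    sigma = np.sqrt(k_B * target_T / mass)
    v = rng.normal(0.0, sigma, size=(n_particles, 3))

    v -= v.mean(axis=0)                    # zero total momentum (equal masses)

    kinetic = 0.5 * mass * np.sum(v * v)   # current kinetic energy
    dof = 3 * n_particles - 3              # degrees of freedom after momentum removal
    current_T = 2.0 * kinetic / (dof * k_B)
    v *= np.sqrt(target_T / current_T)     # rescale to the target temperature
    return v

v = initialize_velocities(100, mass=1.0, target_T=1.2, seed=42)
print(v.sum(axis=0))                       # net momentum is ~ 0
\end{verbatim}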

\subsection{Force Evaluation}

The evaluation of forces is the most computationally expensive portion
of a given molecular dynamics simulation. This is due almost entirely
to the evaluation of long-range, typically pair-wise, forces.
These forces are most commonly the van der Waals force, and sometimes
Coulombic forces as well. For a pair-wise force, there are $fix$
pairs to be evaluated, where $n$ is the number of particles in the
system. This leads to the calculations scaling as $fix$, making large
simulations prohibitive in the absence of any computation-saving
techniques.

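The quadratic scaling can be seen directly in a naive pair-wise loop.
The sketch below evaluates Lennard-Jones forces over all unique pairs
in reduced units (with length and energy parameters set to one, an
assumption for illustration); it ignores periodic boundaries and
cutoffs, which are discussed next.

\begin{verbatim}
import numpy as np

def lj_forces(pos):
    """Naive O(n^2) Lennard-Jones forces in reduced units.
    pos has shape (n, 3); returns forces of the same shape."""
    n = len(pos)
    forces = np.zeros_like(pos)
    for i in range(n - 1):
        for j in range(i + 1, n):          # n(n-1)/2 unique pairs
            rij = pos[i] - pos[j]
            r2 = np.dot(rij, rij)
            inv_r2 = 1.0 / r2
            inv_r6 = inv_r2 ** 3
            # force on i from j for V(r) = 4 (r^-12 - r^-6)
            f = 24.0 * inv_r2 * (2.0 * inv_r6 * inv_r6 - inv_r6) * rij
            forces[i] += f
            forces[j] -= f                 # Newton's third law
    return forces

pos = np.random.default_rng(1).uniform(0.0, 5.0, size=(50, 3))
print(lj_forces(pos).shape)
\end{verbatim}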
Another consideration one must resolve is that in a given simulation
a disproportionate number of the particles will feel the effects of
the surface.\cite{fix} For a cubic system of 1000 particles arranged
in a $10 \times 10 \times 10$ cube, 488 particles will be exposed to the surface.
Unless one is simulating an isolated particle group in a vacuum, the
behavior of the system will be far from the desired bulk
characteristics. To offset this, simulations employ periodic
boundary images.\cite{fix}

The technique involves the use of an algorithm that replicates the
simulation box on an infinite lattice in Cartesian space. Any given
particle leaving the simulation box on one side will have an image of
itself enter on the opposite side (see Fig.~\ref{fix}).
\begin{equation}
EQ Here
\end{equation}
In addition, this ensures that any given particle pair has an image, real
or periodic, within $fix$ of each other. A discussion of the method
used to calculate the periodic image can be found in Sec.~\ref{fix}.

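A sketch of how this replication is handled in practice is given below
for a cubic box of side $L$ (a symbol assumed here for illustration,
not defined in the text above): coordinates are wrapped back into the
primary box, and the separation between two particles is taken to the
nearest image.

\begin{verbatim}
import numpy as np

def wrap_positions(pos, box_length):
    """Map coordinates back into the primary box [0, box_length)."""
    return pos % box_length

def minimum_image(r_ij, box_length):
    """Return the displacement to the nearest periodic image, so each
    component lies in [-box_length/2, box_length/2)."""
    return r_ij - box_length * np.round(r_ij / box_length)

box = 10.0
a = np.array([0.5, 9.8, 5.0])
b = np.array([9.7, 0.3, 5.0])
print(wrap_positions(a + np.array([0.0, 0.4, 0.0]), box))  # y wraps to 0.2
print(minimum_image(a - b, box))   # nearest-image separation, |component| <= 5
\end{verbatim}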
Returning to the topic of the computational scale of the force
evaluation, the use of periodic boundary conditions requires that a
cutoff radius be employed. Using a cutoff radius improves the
efficiency of the force evaluation, as particles farther apart than a
predetermined distance, $fix$, are not included in the
calculation.\cite{fix} In a simulation with periodic images, $fix$
has a maximum value of $fix$. Fig.~\ref{fix} illustrates how, when an
$fix$ larger than this value is used (or, in the extreme limit, no $fix$ at
all), the corners of the simulation box are unequally weighted due to
the lack of particle images in the $x$, $y$, or $z$ directions past a
distance of $fix$.

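In code, the cutoff simply becomes a distance test on the
minimum-image separation before any force or potential term is
evaluated, as in the sketch below (repeating the minimum-image
construction from the previous sketch; the cutoff of half the box
length is the maximum allowed by the periodic images).

\begin{verbatim}
import numpy as np

def within_cutoff(pos_i, pos_j, box_length, r_cut):
    """True if the nearest-image distance between two particles is
    inside the cutoff radius r_cut (at most box_length / 2)."""
    r_ij = pos_i - pos_j
    r_ij -= box_length * np.round(r_ij / box_length)   # minimum image
    return np.dot(r_ij, r_ij) < r_cut * r_cut

box = 10.0
r_cut = box / 2.0    # maximum sensible cutoff with periodic images
print(within_cutoff(np.array([0.5, 0.5, 0.5]),
                    np.array([9.8, 9.8, 9.8]), box, r_cut))  # True: images are close
\end{verbatim}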

With the use of an $fix$, however, comes a discontinuity in the potential
energy curve (Fig.~\ref{fix}).

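To make the source of this discontinuity concrete, consider a pair
potential $V(r)$ truncated at a cutoff radius $r_c$ (notation assumed
here for illustration, not fixed in the text above):
\begin{equation}
V_{\mathrm{trunc}}(r) = \left\{
\begin{array}{ll}
V(r) & r \le r_c \\
0    & r > r_c
\end{array} \right.
\end{equation}
so the curve jumps by $V(r_c)$ as $r$ crosses the cutoff.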

\section{\label{introSec:chapterLayout}Chapter Layout}

\subsection{\label{introSec:RSA}Random Sequential Adsorption}

\subsection{\label{introSec:OOPSE}The OOPSE Simulation Package}

\subsection{\label{introSec:bilayers}A Mesoscale Model for Phospholipid Bilayers}