Comparing trunk/mattDisertation/Introduction.tex (file contents):
Revision 955 by mmeineke, Sun Jan 18 05:26:10 2004 UTC vs.
Revision 956 by mmeineke, Sun Jan 18 19:10:32 2004 UTC

which the random configurations are chosen in order to more
efficiently calculate the integral.\cite{Frenkel1996} Consider again
Eq.~\ref{eq:MCex1} rewritten to be:
\begin{equation}
I = \int_a^b \left[ \frac{f(x)}{\rho(x)} \right] \rho(x)\,dx
\label{introEq:importSamp1}
\end{equation}
where $\rho(x)$ is an arbitrary probability distribution in $x$.  If one
conducts $N$ trials selecting a random number, $\zeta_n$, from the
distribution $\rho(x)$ on the interval $[a,b]$, then
Eq.~\ref{introEq:importSamp1} becomes
\begin{equation}
I \approx \frac{1}{N} \sum_{n=1}^{N} \frac{f(\zeta_n)}{\rho(\zeta_n)}
\end{equation}
Looking at Eq.~\ref{eq:MCex1}, and realizing
\begin{equation}
\rho_{kT}(\mathbf{r}^N) =
\frac{e^{-V(\mathbf{r}^N)/kT}}{\int d\mathbf{r}^N\, e^{-V(\mathbf{r}^N)/kT}}
\label{introEq:boltzmanWeight}
\end{equation}
the ensemble average can be rewritten as
\begin{equation}
\langle A \rangle = \int d\mathbf{r}^N\,
A(\mathbf{r}^N)\, \rho_{kT}(\mathbf{r}^N)
\label{introEq:ensembleAvg}
\end{equation}
Applying Eq.~\ref{introEq:importSamp1} one obtains
\begin{equation}
\langle A \rangle \approx \frac{1}{N} \sum_{n=1}^{N}
\frac{A(\mathbf{r}^N_n)\, \rho_{kT}(\mathbf{r}^N_n)}{\rho(\mathbf{r}^N_n)}
\label{introEq:importSampAvg}
\end{equation}
By selecting $\rho(\mathbf{r}^N)$ to be $\rho_{kT}(\mathbf{r}^N)$,
Eq.~\ref{introEq:importSampAvg} becomes
\begin{equation}
\langle A \rangle \approx \frac{1}{N} \sum_{n=1}^{N} A(\mathbf{r}^N_n)
\end{equation}
The difficulty is selecting points $\mathbf{r}^N_n$ such that they are
sampled from the distribution $\rho_{kT}(\mathbf{r}^N)$.  A solution was
proposed by Metropolis et al.\cite{fix} which involved the use of a Markov
chain whose limiting distribution was $\rho_{kT}(\mathbf{r}^N)$.

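As an aside, the importance-sampling estimator above is easy to sketch numerically. The following is a minimal illustration, not code from this work; the integrand $\sin(x)$ and the uniform choice of $\rho(x)$ are arbitrary (with a uniform $\rho$ the estimator reduces to ordinary Monte Carlo integration).

```python
import math
import random

def importance_sample(f, rho, draw_rho, n_trials):
    """Estimate I = integral f(x) dx as (1/N) * sum of f(zeta_n) / rho(zeta_n),
    where each zeta_n is drawn from the probability distribution rho."""
    total = 0.0
    for _ in range(n_trials):
        zeta = draw_rho()
        total += f(zeta) / rho(zeta)
    return total / n_trials

# Illustrative choice: a uniform rho on [0, pi] reduces the estimator to
# ordinary Monte Carlo integration of sin(x), whose exact integral is 2.
a, b = 0.0, math.pi
rho = lambda x: 1.0 / (b - a)
draw_rho = lambda: random.uniform(a, b)

random.seed(0)
estimate = importance_sample(math.sin, rho, draw_rho, n_trials=200_000)
```

A sharper $\rho$, concentrated where $f$ is large, would reduce the variance of the estimate; that is the entire point of importance sampling.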
\subsection{\label{introSec:markovChains}Markov Chains}

A Markov chain is a chain of states satisfying the following
conditions:\cite{fix}
\begin{itemize}
\item The outcome of each trial depends only on the outcome of the previous trial.
\item Each trial belongs to a finite set of outcomes called the state space.
\end{itemize}
Given two configurations, states $m$ and $n$, $\rho_m$ and $\rho_n$ are the
probabilities of being in state $m$ and state $n$ respectively.
Further, the two states are linked by a transition probability, $\pi_{mn}$,
which is the probability of going from state $m$ to state $n$.

The transition probability is given by the following:
\begin{equation}
\pi_{mn} = \alpha_{mn} \times \textrm{accept}(m \rightarrow n)
\label{introEq:transitionProb}
\end{equation}
where $\alpha_{mn}$ is the probability of attempting the move $m \rightarrow
n$, and $\textrm{accept}(m \rightarrow n)$ is the probability of accepting
the move once attempted.  Defining a probability vector,
$\boldsymbol{\rho}$, such that
\begin{equation}
\boldsymbol{\rho} = ( \rho_1, \rho_2, \ldots, \rho_m, \rho_n, \ldots )
\end{equation}
a transition matrix $\boldsymbol{\Pi}$ can be defined, whose elements are
$\pi_{mn}$ for each given transition.  The limiting distribution of the
Markov chain can then be found by applying the transition matrix an infinite
number of times to the distribution vector.
\begin{equation}
\boldsymbol{\rho}_{\text{limit}} =
\lim_{N \rightarrow \infty} \boldsymbol{\rho}_{\text{initial}}
\boldsymbol{\Pi}^N
\end{equation}

The limiting distribution of the chain is independent of the starting
distribution, and successive applications of the transition matrix will only
yield the limiting distribution again.
\begin{equation}
\boldsymbol{\rho}_{\text{limit}} =
\boldsymbol{\rho}_{\text{limit}} \boldsymbol{\Pi}
\end{equation}

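A short numerical sketch makes both properties of the limiting distribution concrete. The three-state chain below is hypothetical, chosen only so the arithmetic is small; it is not a system from this work.

```python
# A hypothetical three-state chain; rows of the transition matrix sum to 1.
Pi = [
    [0.5, 0.3, 0.2],
    [0.2, 0.6, 0.2],
    [0.3, 0.3, 0.4],
]

def apply_transition(rho, Pi):
    """One application of the transition matrix: rho' = rho . Pi."""
    n = len(rho)
    return [sum(rho[m] * Pi[m][k] for m in range(n)) for k in range(n)]

def limiting_distribution(rho, Pi, n_steps=1000):
    """Apply the transition matrix repeatedly to approximate the limit."""
    for _ in range(n_steps):
        rho = apply_transition(rho, Pi)
    return rho

# Two very different starting distributions converge to the same limit,
# and that limit is a fixed point of the transition matrix.
rho_a = limiting_distribution([1.0, 0.0, 0.0], Pi)
rho_b = limiting_distribution([0.0, 0.0, 1.0], Pi)
rho_next = apply_transition(rho_a, Pi)
```

Here 1000 applications stand in for the infinite limit; for this small chain the distribution converges to machine precision long before that.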
\subsection{\label{introSec:metropolisMethod}The Metropolis Method}

In the Metropolis method,\cite{fix} the transition matrix is constructed
such that its limiting distribution matches the Boltzmann distribution of
states.  The method accomplishes this by imposing the strong condition of
microscopic reversibility on the equilibrium distribution, meaning that at
equilibrium the probability of going from $m$ to $n$ is the same as going
from $n$ to $m$:
\begin{equation}
\rho_m \pi_{mn} = \rho_n \pi_{nm}
\label{introEq:MCmicroReverse}
\end{equation}
Further, the matrix of attempt probabilities, $\boldsymbol{\alpha}$, is
chosen to be symmetric in the Metropolis method ($\alpha_{mn} =
\alpha_{nm}$).  Since the transition probability is the product of the
attempt and acceptance probabilities, Eq.~\ref{introEq:MCmicroReverse}
becomes
\begin{equation}
\frac{\textrm{accept}(m \rightarrow n)}{\textrm{accept}(n \rightarrow m)} =
\frac{\rho_n}{\rho_m}
\end{equation}
For a Boltzmann limiting distribution,
\begin{equation}
\frac{\rho_n}{\rho_m} = e^{-(V_n - V_m)/kT} = e^{-\Delta V/kT}
\end{equation}
This allows the following set of acceptance rules to be defined:
\begin{equation}
\textrm{accept}(m \rightarrow n) = \left\{
\begin{array}{ll}
1 & \quad \Delta V \le 0 \\
e^{-\Delta V/kT} & \quad \Delta V > 0
\end{array}
\right.
\label{introEq:accRules}
\end{equation}

Using the acceptance criteria from Eq.~\ref{introEq:accRules}, the
Metropolis method proceeds as follows:
\begin{itemize}
\item Generate an initial configuration, state $m$, which has some finite probability in the Boltzmann distribution.
\item Modify state $m$ to generate a trial configuration, state $n$.
\item If state $n$ lowers the energy of the system, accept the move with unit probability (state $n$ becomes state $m$).  Otherwise accept with probability $e^{-\Delta V/kT}$.
\item Accumulate the average for the configurational observable of interest.
\item Repeat from step 2 until the average converges.
\end{itemize}
One important note is that the average is accumulated whether the move is
accepted or not; this ensures proper weighting of the average.  It then
becomes clear that the accumulated averages are the ensemble averages, as
this method ensures that the limiting distribution is the Boltzmann
distribution.

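To make the recipe concrete, here is a minimal sketch of the Metropolis method for a single particle in a one-dimensional harmonic potential. The system, step size, and reduced units ($kT = 1$) are illustrative choices, not a simulation from this work.

```python
import math
import random

def metropolis_average(V, x0, beta, step, n_moves, observable):
    """Sketch of the Metropolis method: sample states from a Markov chain
    whose limiting distribution is proportional to exp(-beta * V(x)) and
    accumulate the average of an observable along the way."""
    x = x0
    total = 0.0
    for _ in range(n_moves):
        x_trial = x + random.uniform(-step, step)  # symmetric attempt matrix
        dV = V(x_trial) - V(x)
        # Metropolis acceptance rule: accept downhill moves outright,
        # uphill moves with probability exp(-beta * dV).
        if dV <= 0.0 or random.random() < math.exp(-beta * dV):
            x = x_trial
        total += observable(x)  # accumulate whether the move was accepted or not
    return total / n_moves

random.seed(1)
V = lambda x: 0.5 * x * x  # illustrative 1-D harmonic potential
avg_x2 = metropolis_average(V, x0=0.0, beta=1.0, step=1.5, n_moves=200_000,
                            observable=lambda x: x * x)
# For rho(x) ~ exp(-x^2 / 2), the exact ensemble average <x^2> is 1.
```

Note that the running total is updated on every move, accepted or not, exactly as the averaging rule above requires.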
\subsection{\label{introSec:md}Molecular Dynamics Simulations}

The main simulation tool used in this research is Molecular Dynamics.  In
Molecular Dynamics, the equations of motion for a system are integrated in
order to obtain information about both the positions and momenta of the
particles, allowing the calculation of not only configurational observables,
but momentum-dependent ones as well: diffusion constants, velocity
autocorrelations, folding/unfolding events, etc.  Due to the principle of
ergodicity, Eq.~\ref{fix}, the averages of these observables over the time
period of the simulation are taken to be the ensemble averages for the
system.

The choice of when to use Molecular Dynamics over Monte Carlo techniques is
normally decided by the observables in which the researcher is interested.
If the observables depend on momenta in any fashion, then the only choice is
Molecular Dynamics in some form.  However, when the observable depends only
on the configuration, Monte Carlo techniques will most often be more
efficient.

The focus of research in the second half of this dissertation is centered on
the dynamic properties of phospholipid bilayers, making Molecular Dynamics
key in the simulation of those properties.

\subsection{Molecular Dynamics Algorithm}

To illustrate how the Molecular Dynamics technique is applied, the following
sections will describe the sequence involved in a simulation.
Sec.~\ref{fix} deals with the initialization of a simulation.
Sec.~\ref{fix} discusses issues involved with the calculation of the forces.
Sec.~\ref{fix} concludes the algorithm discussion with the integration of
the equations of motion.\cite{fix}

\subsection{Initialization}

When selecting the initial configuration for the simulation it is important
to consider what dynamics one is hoping to observe.  Ch.~\ref{fix} deals
with the formation and equilibrium dynamics of phospholipid membranes.
Therefore in these simulations initial positions were selected that in some
cases dispersed the lipids in water, and in other cases structured the
lipids into preformed bilayers.  Important considerations at this stage of
the simulation are:
\begin{itemize}
\item There should be no major overlaps of molecular or atomic orbitals.
\item Velocities should be chosen so as not to give the system a non-zero total momentum or angular momentum.
\item It is also sometimes desirable to select the velocities to correctly sample the target temperature.
\end{itemize}

The first point is important due to the amount of potential energy generated
by having two particles too close together.  If overlap occurs, the first
evaluation of forces will return numbers so large as to render the numerical
integration of the motion meaningless.  The second consideration keeps the
system from drifting or rotating as a whole.  This arises from the fact that
most simulations are of systems in equilibrium in the absence of outside
forces.  Therefore any net movement would be unphysical and an artifact of
the simulation method used.  The final point addresses the selection of the
magnitude of the initial velocities.  For many simulations it is convenient
to use this opportunity to scale the amount of kinetic energy to reflect the
desired thermal distribution of the system.  However, it must be noted that
most systems will require further velocity rescaling after the first few
initial simulation steps due to either loss or gain of kinetic energy from
energy stored in potential degrees of freedom.

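The second and third considerations can be sketched as follows. This is an illustrative fragment in reduced units with $k_B = 1$; the uniform initial draw and the omission of angular-momentum removal are simplifications, not the procedure used in this work.

```python
import math
import random

def initialize_velocities(n, mass, target_temp, k_B=1.0):
    """Draw random velocities, zero the net momentum, then rescale the kinetic
    energy to match the target temperature: (1/2) m sum(v^2) = (3/2) n k_B T."""
    v = [[random.uniform(-1.0, 1.0) for _ in range(3)] for _ in range(n)]
    # Subtract the center-of-mass velocity so the total momentum is zero
    # (angular-momentum removal is omitted in this sketch).
    for k in range(3):
        v_cm = sum(vi[k] for vi in v) / n
        for vi in v:
            vi[k] -= v_cm
    # Rescale all velocities to hit the desired kinetic energy exactly.
    kinetic = 0.5 * mass * sum(vi[0]**2 + vi[1]**2 + vi[2]**2 for vi in v)
    scale = math.sqrt(1.5 * n * k_B * target_temp / kinetic)
    return [[scale * c for c in vi] for vi in v]

random.seed(2)
vels = initialize_velocities(100, mass=1.0, target_temp=2.5)
```

As noted above, this initial scaling is only approximate in practice: once the simulation starts, kinetic and potential energy exchange, and further rescaling is usually required.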
\subsection{Force Evaluation}

The evaluation of forces is the most computationally expensive portion of a
given molecular dynamics simulation.  This is due entirely to the evaluation
of long range forces in a simulation, typically pair-wise.  These forces are
most commonly the Van der Waals force, and sometimes Coulombic forces as
well.  For a pair-wise force, there are $\frac{n(n-1)}{2}$ pairs to be
evaluated, where $n$ is the number of particles in the system.  This leads
to the calculations scaling as $n^2$, making large simulations prohibitive
in the absence of any computation-saving techniques.

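The quadratic scaling of the pair loop can be verified with a naive double loop (illustrative counting only, with no force model attached):

```python
def count_pairs(n):
    """Count unique pairs with the same double loop a pair-wise force
    evaluation uses; the result is n * (n - 1) / 2, i.e. O(n^2) work."""
    pairs = 0
    for i in range(n - 1):
        for j in range(i + 1, n):
            pairs += 1
    return pairs

# Doubling the particle count roughly quadruples the number of pair
# evaluations, which is what makes large simulations prohibitive.
small = count_pairs(1000)
large = count_pairs(2000)
```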
Another consideration one must resolve is that in a given simulation a
disproportionate number of the particles will feel the effects of the
surface.\cite{fix} For a cubic system of 1000 particles arranged in a $10
\times 10 \times 10$ cube, 488 particles will be exposed to the surface.
Unless one is simulating an isolated particle group in a vacuum, the
behavior of the system will be far from the desired bulk characteristics.
To offset this, simulations employ the use of periodic boundary
images.\cite{fix}

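The 488 surface particles quoted above follow from subtracting the $8^3$ interior sites from the $10^3$ total, as a quick check shows:

```python
def surface_count(side):
    """Number of lattice sites on the surface of a side**3 cube: the total
    minus the (side - 2)**3 interior sites."""
    interior = max(side - 2, 0) ** 3
    return side ** 3 - interior

n_surface = surface_count(10)  # 1000 - 512 = 488, matching the text
```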
The technique involves the use of an algorithm that replicates the
simulation box on an infinite lattice in Cartesian space.  Any given
particle leaving the simulation box on one side will have an image of itself
enter on the opposite side (see Fig.~\ref{fix}).
\begin{equation}
\mathbf{r}_{\text{image}} = \mathbf{r} + L\,(i,\,j,\,k)
\end{equation}
where $L$ is the length of the box and $i$, $j$, and $k$ are any integers.
In addition, this ensures that any given particle pair has an image, real or
periodic, within $\frac{L}{2}$ of each other along each axis.  A discussion
of the method used to calculate the periodic image can be found in
Sec.~\ref{fix}.

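A minimal sketch of the wrapping and nearest-image arithmetic for a single coordinate follows. This is the common textbook formulation, assumed here for illustration; the implementation used in this work may differ.

```python
def wrap(x, box):
    """Map a coordinate back into the primary box [0, box): a particle
    leaving through one face re-enters through the opposite face."""
    return x % box

def minimum_image(dx, box):
    """Separation to the nearest image of the other particle; the result
    always lies in [-box/2, box/2]."""
    return dx - box * round(dx / box)

box = 10.0
# A particle at 10.5 re-enters at 0.5; one at -0.5 re-enters at 9.5.
# Two particles whose coordinates differ by 9.0 are only 1.0 apart
# through the boundary.
```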
Returning to the topic of the computational scale of the force evaluation,
the use of periodic boundary conditions requires that a cutoff radius be
employed.  Using a cutoff radius improves the efficiency of the force
evaluation, as particles farther than a predetermined distance,
$r_{\text{cut}}$, are not included in the calculation.\cite{fix} In a
simulation with periodic images, $r_{\text{cut}}$ has a maximum value of
$\frac{L}{2}$.  Fig.~\ref{fix} illustrates how, using an $r_{\text{cut}}$
larger than this value, or in the extreme limit of no $r_{\text{cut}}$ at
all, the corners of the simulation box are unequally weighted due to the
lack of particle images in the $x$, $y$, or $z$ directions past a distance
of $\frac{L}{2}$.

With the use of an $r_{\text{cut}}$, however, comes a discontinuity in the
potential energy curve (Fig.~\ref{fix}).

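One common remedy for that discontinuity, assumed here purely for illustration and not necessarily the scheme adopted in this work, is to shift the truncated potential so it vanishes continuously at the cutoff:

```python
def lj(r, epsilon=1.0, sigma=1.0):
    """Lennard-Jones (Van der Waals) pair potential."""
    sr6 = (sigma / r) ** 6
    return 4.0 * epsilon * (sr6 * sr6 - sr6)

def lj_truncated_shifted(r, r_cut, epsilon=1.0, sigma=1.0):
    """Truncate at r_cut and subtract lj(r_cut), so the potential goes to
    zero continuously at the cutoff instead of jumping by lj(r_cut)."""
    if r >= r_cut:
        return 0.0
    return lj(r, epsilon, sigma) - lj(r_cut, epsilon, sigma)

r_cut = 2.5
# Plain truncation leaves a jump of size |lj(r_cut)| at the cutoff;
# the shifted form approaches zero smoothly from below the cutoff.
```

Shifting removes the energy discontinuity but not the force discontinuity at the cutoff, which is why other schemes exist as well.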
\section{\label{introSec:chapterLayout}Chapter Layout}

\subsection{\label{introSec:RSA}Random Sequential Adsorption}