
Comparing trunk/mattDisertation/Introduction.tex (file contents):
Revision 956 by mmeineke, Sun Jan 18 19:10:32 2004 UTC vs.
Revision 977 by mmeineke, Thu Jan 22 21:13:55 2004 UTC

# Line 53 | Line 53 | I = (b-a)<f(x)>
53   \end{equation}
54   The equation can be recast as:
55   \begin{equation}
56 < I = (b-a)<f(x)>
56 > I = (b-a)\langle f(x) \rangle
57   \label{eq:MCex2}
58   \end{equation}
59 < Where $<f(x)>$ is the unweighted average over the interval
59 > Where $\langle f(x) \rangle$ is the unweighted average over the interval
60   $[a,b]$. The calculation of the integral could then be solved by
61   randomly choosing points along the interval $[a,b]$ and calculating
62   the value of $f(x)$ at each point. The accumulated average would then
# Line 66 | Line 66 | integrals of the form:
66   However, in Statistical Mechanics, one is typically interested in
67   integrals of the form:
68   \begin{equation}
69 < <A> = \frac{A}{exp^{-\beta}}
69 > \langle A \rangle = \frac{\int d^N \mathbf{r}~A(\mathbf{r}^N)%
70 >        e^{-\beta V(\mathbf{r}^N)}}%
71 >        {\int d^N \mathbf{r}~e^{-\beta V(\mathbf{r}^N)}}
72   \label{eq:mcEnsAvg}
73   \end{equation}
74 < Where $r^N$ stands for the coordinates of all $N$ particles and $A$ is
75 < some observable that is only dependent on position. $<A>$ is the
76 < ensemble average of $A$ as presented in
77 < Sec.~\ref{introSec:statThermo}. Because $A$ is independent of
78 < momentum, the momenta contribution of the integral can be factored
79 < out, leaving the configurational integral. Application of the brute
80 < force method to this system would yield highly inefficient
74 > Where $\mathbf{r}^N$ stands for the coordinates of all $N$ particles
75 > and $A$ is some observable that is only dependent on
76 > position. $\langle A \rangle$ is the ensemble average of $A$ as
77 > presented in Sec.~\ref{introSec:statThermo}. Because $A$ is
78 > independent of momentum, the momenta contribution of the integral can
79 > be factored out, leaving the configurational integral. Application of
80 > the brute force method to this system would yield highly inefficient
81   results. Due to the Boltzmann weighting of this integral, most random
82   configurations will have a near zero contribution to the ensemble
83   average. This is where importance sampling comes into
# Line 86 | Line 88 | EQ Here
88   efficiently calculate the integral.\cite{Frenkel1996} Consider again
89   Eq.~\ref{eq:MCex1} rewritten to be:
90   \begin{equation}
91 < EQ Here
91 > I = \int^b_a \frac{f(x)}{\rho(x)} \rho(x) dx
92 > \label{introEq:Importance1}
93   \end{equation}
94 < Where $fix$ is an arbitrary probability distribution in $x$.  If one
95 < conducts $fix$ trials selecting a random number, $fix$, from the
96 < distribution $fix$ on the interval $[a,b]$, then Eq.~\ref{fix} becomes
94 > Where $\rho(x)$ is an arbitrary probability distribution in $x$.  If
95 > one conducts $\tau$ trials selecting a random number, $\zeta_\tau$,
96 > from the distribution $\rho(x)$ on the interval $[a,b]$, then
97 > Eq.~\ref{introEq:Importance1} becomes
98   \begin{equation}
99 < EQ Here
99 > I= \biggl \langle \frac{f(x)}{\rho(x)} \biggr \rangle_{\text{trials}}
100 > \label{introEq:Importance2}
101   \end{equation}
102 < Looking at Eq.~ref{fix}, and realizing
102 > Looking at Eq.~\ref{eq:mcEnsAvg} and realizing
103   \begin {equation}
104 < EQ Here
104 > \rho_{kT}(\mathbf{r}^N) =
105 >        \frac{e^{-\beta V(\mathbf{r}^N)}}
106 >        {\int d^N \mathbf{r}~e^{-\beta V(\mathbf{r}^N)}}
107 > \label{introEq:MCboltzman}
108   \end{equation}
109 < The ensemble average can be rewritten as
109 > where $\rho_{kT}$ is the Boltzmann distribution, the ensemble average
110 > can be rewritten as
111   \begin{equation}
112 < EQ Here
112 > \langle A \rangle = \int d^N \mathbf{r}~A(\mathbf{r}^N)
113 >        \rho_{kT}(\mathbf{r}^N)
114 > \label{introEq:Importance3}
115   \end{equation}
116 < Appllying Eq.~ref{fix} one obtains
116 > Applying Eq.~\ref{introEq:Importance1} one obtains
117   \begin{equation}
118 < EQ Here
118 > \langle A \rangle = \biggl \langle
119 >        \frac{ A \rho_{kT}(\mathbf{r}^N) }
120 >        {\rho(\mathbf{r}^N)} \biggr \rangle_{\text{trials}}
121 > \label{introEq:Importance4}
122   \end{equation}
123 < By selecting $fix$ to be $fix$ Eq.~ref{fix} becomes
123 > By selecting $\rho(\mathbf{r}^N)$ to be $\rho_{kT}(\mathbf{r}^N)$
124 > Eq.~\ref{introEq:Importance4} becomes
125   \begin{equation}
126 < EQ Here
126 > \langle A \rangle = \langle A(\mathbf{r}^N) \rangle_{\text{trials}}
127 > \label{introEq:Importance5}
128   \end{equation}
129 < The difficulty is selecting points $fix$ such that they are sampled
130 < from the distribution $fix$.  A solution was proposed by Metropolis et
131 < al.\cite{fix} which involved the use of a Markov chain whose limiting
132 < distribution was $fix$.
129 > The difficulty is selecting points $\mathbf{r}^N$ such that they are
130 > sampled from the distribution $\rho_{kT}(\mathbf{r}^N)$.  A solution
131 > was proposed by Metropolis et al.\cite{metropolis:1953} which involved
132 > the use of a Markov chain whose limiting distribution was
133 > $\rho_{kT}(\mathbf{r}^N)$.
134  
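As a concrete sketch of the estimator in Eq.~\ref{introEq:Importance2}, the one-dimensional case can be written in a few lines of Python; the integrand and the choice of $\rho(x)$ below are arbitrary illustrative placeholders, not values taken from the text:

    import math
    import random

    def importance_sample(f, rho_pdf, rho_sampler, n_trials):
        # Estimate I by averaging f(zeta)/rho(zeta) over points zeta drawn
        # from the probability distribution rho (Eq. introEq:Importance2).
        total = 0.0
        for _ in range(n_trials):
            zeta = rho_sampler()             # random point drawn from rho(x)
            total += f(zeta) / rho_pdf(zeta)
        return total / n_trials

    # Illustration: I = \int_0^1 e^{-x} dx with the trivial choice rho(x) = 1
    # on [0, 1], which reduces to the unweighted average of Eq. eq:MCex2.
    estimate = importance_sample(lambda x: math.exp(-x),
                                 rho_pdf=lambda x: 1.0,
                                 rho_sampler=lambda: random.uniform(0.0, 1.0),
                                 n_trials=100000)
    print(estimate)   # converges to 1 - 1/e, approximately 0.632

Choosing $\rho(x)$ to concentrate samples where the integrand is large wastes far fewer trials on regions of negligible contribution, which is exactly the problem noted above for the Boltzmann-weighted configurational integral.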
135 < \subsection{Markov Chains}
135 > \subsubsection{\label{introSec:markovChains}Markov Chains}
136  
137   A Markov chain is a chain of states satisfying the following
138 < conditions:\cite{fix}
139 < \begin{itemize}
138 > conditions:\cite{leach01:mm}
139 > \begin{enumerate}
140   \item The outcome of each trial depends only on the outcome of the previous trial.
141   \item Each trial belongs to a finite set of outcomes called the state space.
142 < \end{itemize}
143 < If given two configuartions, $fix$ and $fix$, $fix$ and $fix$ are the
144 < probablilities of being in state $fix$ and $fix$ respectively.
145 < Further, the two states are linked by a transition probability, $fix$,
146 < which is the probability of going from state $m$ to state $n$.
142 > \end{enumerate}
143 > Given two configurations, $\mathbf{r}^N_m$ and $\mathbf{r}^N_n$,
144 > $\rho_m$ and $\rho_n$ are the probabilities of being in state
145 > $\mathbf{r}^N_m$ and $\mathbf{r}^N_n$ respectively.  Further, the two
146 > states are linked by a transition probability, $\pi_{mn}$, which is the
147 > probability of going from state $m$ to state $n$.
148  
149 + \newcommand{\accMe}{\operatorname{acc}}
150 +
151   The transition probability is given by the following:
152   \begin{equation}
153 < EQ Here
154 < \end{equation}
155 < Where $fix$ is the probability of attempting the move $fix$, and $fix$
156 < is the probability of accepting the move $fix$.  Defining a
157 < probability vector, $fix$, such that
153 > \pi_{mn} = \alpha_{mn} \times \accMe(m \rightarrow n)
154 > \label{introEq:MCpi}
155 > \end{equation}
156 > Where $\alpha_{mn}$ is the probability of attempting the move $m
157 > \rightarrow n$, and $\accMe$ is the probability of accepting the move
158 > $m \rightarrow n$.  Defining a probability vector,
159 > $\boldsymbol{\rho}$, such that
160   \begin{equation}
161 < EQ Here
161 > \boldsymbol{\rho} = \{\rho_1, \rho_2, \ldots \rho_m, \rho_n,
162 >        \ldots \rho_N \}
163 > \label{introEq:MCrhoVector}
164   \end{equation}
165 < a transition matrix $fix$ can be defined, whose elements are $fix$,
166 < for each given transition.  The limiting distribution of the Markov
167 < chain can then be found by applying the transition matrix an infinite
168 < number of times to the distribution vector.
165 > a transition matrix $\boldsymbol{\Pi}$ can be defined,
166 > whose elements are $\pi_{mn}$, for each given transition.  The
167 > limiting distribution of the Markov chain can then be found by
168 > applying the transition matrix an infinite number of times to the
169 > distribution vector.
170   \begin{equation}
171 < EQ Here
171 > \boldsymbol{\rho}_{\text{limit}} =
172 >        \lim_{N \rightarrow \infty} \boldsymbol{\rho}_{\text{initial}}
173 >        \boldsymbol{\Pi}^N
174 > \label{introEq:MCmarkovLimit}
175   \end{equation}
148
176   The limiting distribution of the chain is independent of the starting
177   distribution, and successive applications of the transition matrix
178   will only yield the limiting distribution again.
179   \begin{equation}
180 < EQ Here
180 > \boldsymbol{\rho}_{\text{limit}} = \boldsymbol{\rho}_{\text{initial}}
181 >        \boldsymbol{\Pi}
182 > \label{introEq:MCmarkovEquil}
183   \end{equation}
184  
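The limiting-distribution behavior of Eqs.~\ref{introEq:MCmarkovLimit} and \ref{introEq:MCmarkovEquil} can be illustrated numerically with a made-up two-state transition matrix (the entries below are arbitrary, chosen only so that each row sums to one):

    # Made-up 2x2 stochastic matrix Pi; repeated application to any starting
    # distribution converges to the limiting distribution, which is then left
    # unchanged by further applications.
    pi = [[0.9, 0.1],
          [0.3, 0.7]]

    def apply_transition(rho, pi):
        # One application of the transition matrix: rho'_n = sum_m rho_m * pi_mn
        return [sum(rho[m] * pi[m][n] for m in range(len(rho)))
                for n in range(len(pi[0]))]

    rho = [1.0, 0.0]                  # arbitrary starting distribution
    for _ in range(1000):             # stand-in for the N -> infinity limit
        rho = apply_transition(rho, pi)

    print(rho)                        # ~ [0.75, 0.25], the limiting distribution
    print(apply_transition(rho, pi))  # one more application returns the same vector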
185 < \subsection{fix}
185 > \subsubsection{\label{introSec:metropolisMethod}The Metropolis Method}
186  
187 < In the Metropolis method \cite{fix} Eq.~ref{fix} is solved such that
188 < $fix$ matches the Boltzman distribution of states.  The method
189 < accomplishes this by imposing the strong condition of microscopic
190 < reversibility on the equilibrium distribution.  Meaning, that at
191 < equilibrium the probability of going from $m$ to $n$ is the same as
192 < going from $n$ to $m$.
187 > In the Metropolis method\cite{metropolis:1953}
188 > Eq.~\ref{introEq:MCmarkovEquil} is solved such that
189 > $\boldsymbol{\rho}_{\text{limit}}$ matches the Boltzmann distribution
190 > of states.  The method accomplishes this by imposing the strong
191 > condition of microscopic reversibility on the equilibrium
192 > distribution.  That is, at equilibrium the probability of going
193 > from $m$ to $n$ is the same as going from $n$ to $m$.
194   \begin{equation}
195 < EQ Here
195 > \rho_m\pi_{mn} = \rho_n\pi_{nm}
196 > \label{introEq:MCmicroReverse}
197   \end{equation}
198 < Further, $fix$ is chosen to be a symetric matrix in the Metropolis
199 < method.  Using Eq.~\ref{fix}, Eq.~\ref{fix} becomes
198 > Further, $\boldsymbol{\alpha}$ is chosen to be a symmetric matrix in
199 > the Metropolis method.  Using Eq.~\ref{introEq:MCpi},
200 > Eq.~\ref{introEq:MCmicroReverse} becomes
201   \begin{equation}
202 < EQ Here
202 > \frac{\accMe(m \rightarrow n)}{\accMe(n \rightarrow m)} =
203 >        \frac{\rho_n}{\rho_m}
204 > \label{introEq:MCmicro2}
205   \end{equation}
206 < For a Boltxman limiting distribution
206 > For a Boltzmann limiting distribution,
207   \begin{equation}
208 < EQ Here
208 > \frac{\rho_n}{\rho_m} = e^{-\beta[\mathcal{U}(n) - \mathcal{U}(m)]}
209 >        = e^{-\beta \Delta \mathcal{U}}
210 > \label{introEq:MCmicro3}
211   \end{equation}
212   This allows the following set of acceptance rules to be defined:
213   \begin{equation}
# Line 193 | Line 229 | distribution is the Boltzman distribution.
229   the ensemble averages, as this method ensures that the limiting
230   distribution is the Boltzmann distribution.
231  
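The acceptance rules themselves fall in a portion of the file outside the hunks shown here, but the standard Metropolis criterion that follows from Eq.~\ref{introEq:MCmicro3} (accept every downhill move, accept an uphill move with probability $e^{-\beta \Delta \mathcal{U}}$) can be sketched as follows; the potential and trial-move generator are placeholders rather than anything defined in this dissertation:

    import math
    import random

    def metropolis_accept(delta_U, beta):
        # Accept downhill moves unconditionally; accept uphill moves with
        # probability exp(-beta * delta_U), so that
        # acc(m->n) / acc(n->m) = exp(-beta * delta_U)   (Eq. introEq:MCmicro2).
        if delta_U <= 0.0:
            return True
        return random.random() < math.exp(-beta * delta_U)

    def metropolis_step(config, potential, trial_move, beta):
        # One step of the Markov chain: attempt a symmetric trial move
        # (alpha_mn = alpha_nm) and keep either the trial or the old state.
        trial = trial_move(config)
        delta_U = potential(trial) - potential(config)
        return trial if metropolis_accept(delta_U, beta) else config

Accumulating $A(\mathbf{r}^N)$ over the states visited by such a chain gives the trials average of Eq.~\ref{introEq:Importance5}.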
232 < \subsection{\label{introSec:md}Molecular Dynamics Simulations}
232 > \subsection{\label{introSec:MD}Molecular Dynamics Simulations}
233  
234   The main simulation tool used in this research is Molecular Dynamics.
235   In Molecular Dynamics, the equations of motion for a system are
# Line 216 | Line 252 | making molecular dynamics key in the simulation of tho
252   centered around the dynamic properties of phospholipid bilayers,
253   making molecular dynamics key in the simulation of those properties.
254  
255 < \subsection{Molecular dynamics Algorithm}
255 > \subsubsection{Molecular Dynamics Algorithm}
256  
257   To illustrate how the molecular dynamics technique is applied, the
258   following sections will describe the sequence involved in a
# Line 225 | Line 261 | discussion with the integration of the equations of mo
261   calculation of the forces.  Sec.~\ref{fix} concludes the algorithm
262   discussion with the integration of the equations of motion. \cite{fix}
263  
264 < \subsection{initialization}
264 > \subsubsection{Initialization}
265  
266   When selecting the initial configuration for the simulation, it is
267   important to consider what dynamics one is hoping to observe.
# Line 256 | Line 292 | kinetic energy from energy stored in potential degrees
292   first few initial simulation steps due to either loss or gain of
293   kinetic energy from energy stored in potential degrees of freedom.
294  
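One common initialization choice, sketched below with illustrative SI constants and a hypothetical helper name, is to draw each Cartesian velocity component from the Maxwell-Boltzmann (Gaussian) distribution at the target temperature; as noted above, the temperature will still drift over the first few steps as kinetic and potential energy exchange:

    import math
    import random

    def maxwell_boltzmann_velocities(masses, temperature, k_B=1.380649e-23):
        # Each velocity component is Gaussian with variance k_B*T/m
        # (SI units assumed here purely for illustration).
        velocities = []
        for m in masses:
            sigma = math.sqrt(k_B * temperature / m)
            velocities.append([random.gauss(0.0, sigma) for _ in range(3)])
        return velocities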
295 < \subsection{Force Evaluation}
295 > \subsubsection{Force Evaluation}
296  
297   The evaluation of forces is the most computationally expensive portion
298   of a given molecular dynamics simulation.  This is due entirely to the
