
Comparing trunk/chrisDissertation/Introduction.tex (file contents):
Revision 3001 by chrisfen, Thu Sep 7 20:42:27 2006 UTC vs.
Revision 3019 by chrisfen, Fri Sep 22 13:45:24 2006 UTC

# Line 42 | Line 42 | techniques.\cite{Allen87,Frenkel02,Leach01}

\section{\label{sec:MolecularDynamics}Molecular Dynamics}

As stated above, in molecular dynamics we focus on evolving
configurations of molecules over time. In order to use this as a tool
for understanding experiments and testing theories, we want the
configuration to evolve in a manner that mimics real molecular
systems. To do this, we start by clarifying what we know about a
given configuration of particles at time $t_1$: each particle in the
configuration has a position ($q$) and a velocity ($\dot{q}$). We
then want to know what the configuration will be at time $t_2$. To
find out, we need the classical equations of motion, and one useful
formulation of them is the Lagrangian form.

The Lagrangian ($L$) is a function of the positions and velocities
that takes the form,
\begin{equation}
L = K - V,
\label{eq:lagrangian}
\end{equation}
where $K$ is the kinetic energy and $V$ is the potential energy. We
can use Hamilton's principle, which states that the integral of the
Lagrangian over time has a stationary value for the correct path of
motion, to say that the variation of the integral of the Lagrangian
over time is zero,\cite{Tolman38}
\begin{equation}
\delta\int_{t_1}^{t_2}L(q,\dot{q})dt = 0.
\end{equation}
Expanding the variation as a sum over all $3N$ coordinates gives
\begin{equation}
\int_{t_1}^{t_2}\sum_{i=1}^{3N}\left(\frac{\partial L}{\partial q_i}\delta q_i
	+ \frac{\partial L}{\partial \dot{q}_i}\delta \dot{q}_i\right)dt = 0.
\end{equation}
Using the fact that $\dot{q}_i$ is the time derivative of $q_i$ and
integrating the second term in the parentheses by parts, this
equation simplifies to
\begin{equation}
\int_{t_1}^{t_2}\sum_{i=1}^{3N}\left(
	\frac{d}{dt}\frac{\partial L}{\partial \dot{q}_i}
	- \frac{\partial L}{\partial q_i}\right)\delta q_i dt = 0,
\end{equation}
and, because the variations $\delta q_i$ are independent and
arbitrary, the above equation is only valid when
\begin{equation}
\frac{d}{dt}\frac{\partial L}{\partial \dot{q}_i}
	- \frac{\partial L}{\partial q_i} = 0\quad\quad(i=1,2,\dots,3N).
\label{eq:formulation}
\end{equation}
To obtain the classical equations of motion for the particles, we can
substitute equation (\ref{eq:lagrangian}) into equation
(\ref{eq:formulation}), using $m\dot{\mathbf{r}}^2/2$ for the kinetic
energy, giving
\begin{equation}
\frac{d}{dt}(m\dot{\mathbf{r}})+\frac{dV}{d\mathbf{r}}=0,
\end{equation}
or more recognizably,
\begin{equation}
\mathbf{f} = m\mathbf{a},
\end{equation}
where $\mathbf{f} = -dV/d\mathbf{r}$ and $\mathbf{a} =
d^2\mathbf{r}/dt^2$. The Lagrangian formulation in equation
(\ref{eq:formulation}) is general, and it can be used to determine
equations of motion in coordinate systems other than the Cartesian
case shown here.
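
As a brief illustration of this generality (a standard textbook
example rather than a system studied in this work), applying equation
(\ref{eq:formulation}) to a one-dimensional harmonic oscillator with
$L = \frac{1}{2}m\dot{x}^2 - \frac{1}{2}kx^2$ gives
\begin{equation}
\frac{d}{dt}\frac{\partial L}{\partial \dot{x}}
	- \frac{\partial L}{\partial x} = m\ddot{x} + kx = 0,
\end{equation}
which is simply Newton's second law for a linear restoring force.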

\subsection{\label{sec:Verlet}Verlet Integration}

In order to perform molecular dynamics, we need an algorithm that
integrates the equations of motion described above. Ideal algorithms
are both simple in implementation and highly accurate. A large number
of algorithms have been developed for this purpose; however, for
reasons discussed below, we will focus on the Verlet class of
integrators.\cite{Gear66,Beeman76,Berendsen86,Allen87,Verlet67,Swope82}

In Verlet's original study of computer ``experiments'', he directly
integrated the Newtonian second-order differential equation of motion,
\begin{equation}
m_i\frac{d^2\mathbf{r}_i}{dt^2} = \sum_{j\ne i}\mathbf{f}(r_{ij}),
\end{equation}
with the following simple algorithm:
\begin{equation}
\mathbf{r}_i(t+\delta t) = -\mathbf{r}_i(t-\delta t) + 2\mathbf{r}_i(t)
	+ \frac{\delta t^2}{m_i}\sum_{j\ne i}\mathbf{f}(r_{ij}(t)),
\label{eq:verlet}
\end{equation}
where $\delta t$ is the time step of integration.\cite{Verlet67} It
is interesting to note that equation (\ref{eq:verlet}) does not
include velocities, which makes sense because they do not appear in
the differential equation. The velocities are necessary for the
calculation of the kinetic energy, however, and they can be
calculated at each time step with the following equation:
\begin{equation}
\mathbf{v}_i(t) = \frac{\mathbf{r}_i(t+\delta t)-\mathbf{r}_i(t-\delta t)}
			{2\delta t}.
\end{equation}
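
To make these updates concrete, the following sketch is an
illustrative Python implementation of the original Verlet algorithm
for a single particle (it is not code from {\sc oopse}, and the
harmonic force routine and parameters are placeholder assumptions):
\begin{verbatim}
import numpy as np

def force(r):
    # placeholder force: harmonic restoring force, f = -k r
    k = 1.0
    return -k * r

def verlet_step(r_now, r_prev, mass, dt):
    # position update from the Verlet algorithm
    r_next = 2.0 * r_now - r_prev + (dt * dt / mass) * force(r_now)
    # central-difference velocity at the current step
    v_now = (r_next - r_prev) / (2.0 * dt)
    return r_next, v_now

# bootstrap r(t - dt) from the initial position and velocity
mass, dt = 1.0, 0.01
r_now = np.array([1.0, 0.0, 0.0])
v0 = np.zeros(3)
r_prev = r_now - v0 * dt + 0.5 * (dt * dt / mass) * force(r_now)

for step in range(1000):
    r_next, v_now = verlet_step(r_now, r_prev, mass, dt)
    r_prev, r_now = r_now, r_next
\end{verbatim}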

Like the equation of motion it solves, the Verlet algorithm has the
beneficial property of being time-reversible, meaning that you can
integrate the configuration forward and then backward and end up at
the original configuration. Some other methods for integration, like
predictor-corrector methods, lack this property in that they require
higher order information that is discarded after each integration
step. Another interesting property of this algorithm is that it is
symplectic, meaning that it preserves area in phase-space. Symplectic
algorithms keep the system in the region of phase-space dictated by
the ensemble and enjoy better long-time energy
conservation.\cite{Frenkel02}

While the error in the positions calculated using the Verlet
algorithm is small ($\mathcal{O}(\delta t^4)$), the error in the
velocities is quite a bit larger
($\mathcal{O}(\delta t^2)$).\cite{Allen87} Swope {\it et al.}
developed a corrected form of this algorithm, the `velocity Verlet'
algorithm, which improves the error of the velocity calculation and
thus the energy conservation.\cite{Swope82} This algorithm involves a
full step of the positions,
\begin{equation}
\mathbf{r}(t+\delta t) = \mathbf{r}(t) + \delta t\mathbf{v}(t)
				+ \frac{1}{2}\delta t^2\mathbf{a}(t),
\end{equation}
and a half step of the velocities,
\begin{equation}
\mathbf{v}\left(t+\frac{1}{2}\delta t\right) = \mathbf{v}(t)
					+ \frac{1}{2}\delta t\mathbf{a}(t).
\end{equation}
After calculating new forces, the velocities can be updated to a full
step,
\begin{equation}
\mathbf{v}(t+\delta t) = \mathbf{v}\left(t+\frac{1}{2}\delta t\right)
				+ \frac{1}{2}\delta t\mathbf{a}(t+\delta t).
\end{equation}
By integrating in this manner, the error in the velocities is reduced
to $\mathcal{O}(\delta t^3)$. It should be noted that the error in
the positions increases to $\mathcal{O}(\delta t^3)$, but the
resulting improvement in the energies coupled with the maintained
simplicity, time-reversibility, and symplectic nature make it an
improvement over the original form.
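
The three updates above translate directly into a short routine. The
fragment below is an illustrative Python sketch of a single velocity
Verlet step (not the {\sc oopse} implementation); the force routine
is assumed to be supplied by the caller:
\begin{verbatim}
def velocity_verlet_step(r, v, a, mass, dt, force):
    # full step of the positions
    r_new = r + dt * v + 0.5 * dt * dt * a
    # half step of the velocities with the old acceleration
    v_half = v + 0.5 * dt * a
    # new acceleration from the forces at the updated positions
    a_new = force(r_new) / mass
    # complete the velocity update with the new acceleration
    v_new = v_half + 0.5 * dt * a_new
    return r_new, v_new, a_new
\end{verbatim}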

\subsection{\label{sec:IntroIntegrate}Rigid Body Motion}

Rigid bodies are non-spherical particles or collections of particles
(e.g. $\mbox{C}_{60}$) that have a constant internal potential and
move collectively.\cite{Goldstein01} Discounting iterative constraint
procedures like {\sc shake} and {\sc rattle} for approximating rigid
bodies, they are not included in most simulation packages because of
the algorithmic complexity involved in propagating orientational
degrees of freedom.\cite{Ryckaert77,Andersen83,Krautler01} Integrators
which propagate orientational motion with an acceptable level of
energy conservation for molecular dynamics are relatively new
inventions.

Moving a rigid body involves determination of both the force and
torque applied by the surroundings, which directly affect the
# Line 292 | Line 423 | scheme utilized in {\sc oopse}.
integrator, while the quaternion scheme will require approximately
154 hours of CPU time. This demonstrates the computational advantage
of the integration scheme utilized in {\sc oopse}.
\section{Accumulating Interactions}

In the force calculation between {\tt moveA} and {\tt moveB} mentioned
# Line 423 | Line 542 | switching transition.
that the higher the order of the polynomial, the more abrupt the
switching transition.

\subsection{\label{sec:LJCorrections}Long-Range Interaction Corrections}

While a good approximation, accumulating interactions only from
nearby particles can lead to some issues, because the further away
# Line 452 | Line 571 | chapter \ref{chap:electrostatics}.
this problem, as well as useful corrective techniques, is presented in
chapter \ref{chap:electrostatics}.

\subsection{Periodic Boundary Conditions}

In typical molecular dynamics simulations there are no restrictions
placed on the motion of particles beyond what the inter-particle
interactions dictate. This means that if a particle collides with
other particles, it is free to move away from the site of the
collision; the system as a whole is not confined by the walls of the
``simulation box'', and particles can freely drift away from one
another. With no boundary considerations, particles moving outside of
the simulation box enter a vacuum. This is correct behavior for
cluster simulations in a vacuum; however, if we want to simulate bulk
or spatially infinite systems, we need to use periodic boundary
conditions.

\begin{figure}
\centering
\includegraphics[width=4.5in]{./figures/periodicImage.pdf}
\caption{With periodic boundary conditions imposed, when particles
move out of one side of the simulation box, they wrap around to the
opposite side. In this manner, a finite system of particles behaves as
an infinite system.}
\label{fig:periodicImage}
\end{figure}
In periodic boundary conditions, as a particle moves outside one wall
of the simulation box, the coordinates are remapped such that the
particle enters the opposing side of the box. This process is easy to
visualize in two dimensions as shown in figure \ref{fig:periodicImage}
and can occur in three dimensions, though it is not as easy to
visualize. Remapping the actual coordinates of the particles can be
problematic in that we are restricting the distance a particle can
move from its point of origin to a diagonal of the simulation
box. Thus, even though we are not confining the system with hard
walls, we are confining the particle coordinates to a particular
region in space. To avoid this ``soft'' confinement, it is common
practice to allow the particle coordinates to expand in an
unrestricted fashion while calculating interactions using a wrapped
set of ``minimum image'' coordinates. These minimum image coordinates
need not be stored because they are easily calculated on the fly when
determining particle distances.
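
For an orthorhombic box, both the wrapping and the minimum image
convention reduce to a few lines of arithmetic. The fragment below is
an illustrative Python sketch (hypothetical helper names, not taken
from {\sc oopse}), valid when the interaction cutoff is less than
half the shortest box edge:
\begin{verbatim}
import numpy as np

box = np.array([20.0, 20.0, 20.0])   # assumed box edge lengths

def wrap(r, box):
    # remap coordinates into the primary box, [-L/2, L/2) in each direction
    return r - box * np.round(r / box)

def minimum_image_distance(r_i, r_j, box):
    # separation between particle i and the nearest periodic image of j
    dr = r_i - r_j
    dr -= box * np.round(dr / box)
    return np.linalg.norm(dr)
\end{verbatim}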

\section{Calculating Properties}

In order to use simulations to model experimental processes and
evaluate theories, we need to be able to extract properties from the
results. In experiments, we can measure thermodynamic properties such
as the pressure and free energy. In computer simulations, we can
calculate properties from the motion and configuration of particles
in the system and make connections between these properties and the
experimental thermodynamic properties through statistical mechanics.

The work presented in the later chapters uses the canonical ($NVT$),
isobaric-isothermal ($NPT$), and microcanonical ($NVE$) statistical
mechanics ensembles. The different ensembles lend themselves to
calculating specific properties more effectively. For instance, if we
are concerned with the calculation of dynamic properties, which
depend upon the motion of the particles, it is better to choose an
ensemble that does not modify the particle motions in order to keep
properties like the temperature or pressure constant. In this case,
the $NVE$ ensemble would be the most appropriate choice. In chapter
\ref{chap:ice}, we discuss calculating free energies, which are
non-mechanical thermodynamic properties, and these calculations also
depend on the chosen ensemble.\cite{Allen87} The Helmholtz free
energy ($A$) depends on the $NVT$ partition function ($Q_{NVT}$),
\begin{equation}
A = -k_\textrm{B}T\ln Q_{NVT},
\end{equation}
while the Gibbs free energy ($G$) depends on the $NPT$ partition
function ($Q_{NPT}$),
\begin{equation}
G = -k_\textrm{B}T\ln Q_{NPT}.
\end{equation}
It is also useful to note that the conserved quantities of the $NVT$
and $NPT$ ensembles are related to the Helmholtz and Gibbs free
energies respectively.\cite{Melchionna93}
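
For context, the canonical partition function appearing above can be
written in its standard classical form for $N$ indistinguishable
particles (a textbook definition rather than a result specific to
this work) as
\begin{equation}
Q_{NVT} = \frac{1}{N!h^{3N}}\int
	e^{-\mathcal{H}(\mathbf{q},\mathbf{p})/k_\textrm{B}T}\,
	d\mathbf{q}\,d\mathbf{p},
\end{equation}
where $\mathcal{H}(\mathbf{q},\mathbf{p})$ is the Hamiltonian of the
system and $h$ is Planck's constant.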

Integrating the equations of motion is a simple method to obtain a
sequence of configurations that sample the chosen ensemble. For each
of these configurations, we can calculate an instantaneous value for
a chosen property, like the density in the $NPT$ ensemble, where the
volume is allowed to fluctuate. The density for the simulation is
calculated from an average over the densities for the individual
configurations. Because the configurations from the integration
process are related to one another by the time evolution of the
interactions, this average is technically a time average. In
calculating thermodynamic properties, we would actually prefer an
ensemble average that is representative of all accessible states of
the system. We can calculate thermodynamic properties from the time
average by taking advantage of the ergodic hypothesis, which states
that over a long period of time, the time average and the ensemble
average are the same.

In addition to the average, the fluctuations of that particular
property can be determined via the standard deviation. Fluctuations
are useful for measuring various thermodynamic properties in computer
simulations. In section \ref{sec:t5peThermo}, we use fluctuations in
properties like the enthalpy and volume to calculate various
thermodynamic properties, such as the constant pressure heat capacity
and the isothermal compressibility.
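
As a concrete illustration of this idea, the sketch below estimates
the constant pressure heat capacity from the variance of an enthalpy
time series sampled in an $NPT$ run, using the standard fluctuation
relation $C_p = (\langle H^2\rangle - \langle H\rangle^2)/(k_\textrm{B}T^2)$.
The array names and unit system are assumptions for the example, not
details taken from the simulations described here:
\begin{verbatim}
import numpy as np

kB = 0.0019872041   # Boltzmann constant in kcal/(mol K), assumed units

def heat_capacity_from_enthalpy(H_samples, T):
    # constant pressure heat capacity from enthalpy fluctuations (NPT)
    H = np.asarray(H_samples)      # enthalpy time series in kcal/mol
    var_H = H.var()                # <H^2> - <H>^2
    return var_H / (kB * T * T)    # C_p in kcal/(mol K)
\end{verbatim}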

\section{Application of the Techniques}

In the following chapters, the above techniques are all utilized in
