--- trunk/chrisDissertation/Introduction.tex 2006/09/21 21:47:17 3015 +++ trunk/chrisDissertation/Introduction.tex 2006/09/21 23:21:37 3016 @@ -42,19 +42,150 @@ techniques.\cite{Allen87,Frenkel02,Leach01} \section{\label{sec:MolecularDynamics}Molecular Dynamics} -\section{\label{sec:MovingParticles}Propagating Particle Motion} +As stated above, in molecular dynamics we focus on evolving +configurations of molecules over time. In order to use this as a tool +for understanding experiments and testing theories, we want the +configuration to evolve in a manner that mimics real molecular +systems. To do this, we start by clarifying what we know about a +given configuration of particles at time $t_1$, namely that each +particle in the configuration has a position ($q$) and velocity +($\dot{q}$). We now want to know what the configuration will be at +time $t_2$. To find out, we need the classical equations of +motion, and one useful formulation of them is the Lagrangian form. + +The Lagrangian ($L$) is a function of the positions and velocities that +takes the form, +\begin{equation} +L = K - V, +\label{eq:lagrangian} +\end{equation} +where $K$ is the kinetic energy and $V$ is the potential energy. We +can use Hamilton's principle, which states that the integral of the +Lagrangian over time has a stationary value for the correct path of +motion, to say that the variation of the integral of the Lagrangian +over time is zero,\cite{Tolman38} +\begin{equation} +\delta\int_{t_1}^{t_2}L(q,\dot{q})dt = 0. +\end{equation} +Carrying out this variation gives +\begin{equation} +\int_{t_1}^{t_2}\sum_{i=1}^{3N}\left(\frac{\partial L}{\partial q_i}\delta q_i + + \frac{\partial L}{\partial \dot{q}_i}\delta \dot{q}_i\right)dt = 0. 
+\end{equation} +Using the fact that $\dot q$ is the time derivative of $q$ +and integrating the second term in the parentheses by +parts, this equation simplifies to +\begin{equation} +\int_{t_1}^{t_2}\sum_{i=1}^{3N}\left( + \frac{d}{dt}\frac{\partial L}{\partial \dot{q}_i} + - \frac{\partial L}{\partial q_i}\right)\delta {q}_i dt = 0, +\end{equation} +which, for arbitrary variations $\delta q_i$, can only be satisfied if +\begin{equation} +\frac{d}{dt}\frac{\partial L}{\partial \dot{q}_i} + - \frac{\partial L}{\partial q_i} = 0\quad\quad(i=1,2,\dots,3N). +\label{eq:formulation} +\end{equation} +To obtain the classical equations of motion for the particles, we can +substitute equation (\ref{eq:lagrangian}) into the above equation with +$m\dot{\mathbf{r}}^2/2$ for the kinetic energy, giving +\begin{equation} +\frac{d}{dt}(m\dot{\mathbf{r}})+\frac{dV}{d\mathbf{r}}=0, +\end{equation} +or more recognizably, +\begin{equation} +\mathbf{f} = m\mathbf{a}, +\end{equation} +where $\mathbf{f} = -dV/d\mathbf{r}$ and $\mathbf{a} = +d^2\mathbf{r}/dt^2$. The Lagrangian formulation shown in equation +(\ref{eq:formulation}) is general, and it can be used to determine +equations of motion in coordinate systems other than the Cartesian case +shown here. + +\subsection{\label{sec:Verlet}Verlet Integration} + +In order to perform molecular dynamics, we need an algorithm that +integrates the equations of motion described above. Ideal algorithms +are both simple in implementation and highly accurate. 
There have been +a large number of algorithms developed for this purpose; however, for +reasons discussed below, we are going to focus on the Verlet class of +integrators.\cite{Gear66,Beeman76,Berendsen86,Allen87,Verlet67,Swope82} + +In Verlet's original study of computer ``experiments'', he directly +integrated the Newtonian second order differential equation of motion, +\begin{equation} +m\frac{d^2\mathbf{r}_i}{dt^2} = \sum_{j\ne i}\mathbf{f}(r_{ij}), +\end{equation} +with the following simple algorithm: +\begin{equation} +\mathbf{r}_i(t+\delta t) = -\mathbf{r}_i(t-\delta t) + 2\mathbf{r}_i(t) + + \frac{\delta t^2}{m}\sum_{j\ne i}\mathbf{f}(r_{ij}(t)), +\label{eq:verlet} +\end{equation} +where $\delta t$ is the time step of integration.\cite{Verlet67} Note +that equation (\ref{eq:verlet}) does not include +velocities; this makes sense, since they do not appear in the +differential equation. The velocities are necessary for the +calculation of the kinetic energy, and they can be calculated at each +time step with the following equation: +\begin{equation} +\mathbf{v}_i(t) = \frac{\mathbf{r}_i(t+\delta t)-\mathbf{r}_i(t-\delta t)} + {2\delta t}. +\end{equation} + +Like the equation of motion it solves, the Verlet algorithm has the +beneficial property of being time-reversible, meaning that one can +integrate the configuration forward and then backward and end up at +the original configuration. Some other methods of integration, like +predictor-corrector methods, lack this property because they require +higher order information that is discarded after each integration +step. Another interesting property of this algorithm is that it is +symplectic, meaning that it preserves area in phase-space. 
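To make the update rule concrete, the position Verlet step and the central-difference velocity estimate above can be sketched in a short program. This is an illustrative sketch, not code from the dissertation: the one-dimensional harmonic force, the unit mass and spring constant, and the step count are all hypothetical choices made for testing purposes.

```python
def verlet_step(r_prev, r_curr, force, m, dt):
    # r(t+dt) = 2 r(t) - r(t-dt) + (dt^2/m) f(r(t))
    r_next = 2.0 * r_curr - r_prev + (dt * dt / m) * force(r_curr)
    # central-difference velocity at time t, needed only for the kinetic energy
    v_curr = (r_next - r_prev) / (2.0 * dt)
    return r_next, v_curr

# hypothetical 1-D harmonic oscillator as a test system
m, k, dt = 1.0, 1.0, 0.01
force = lambda r: -k * r
r_curr = 1.0                                          # start at rest at r = 1
r_prev = r_curr - 0.5 * (k / m) * r_curr * dt * dt    # r(-dt) from a Taylor expansion
for _ in range(1000):
    r_next, v = verlet_step(r_prev, r_curr, force, m, dt)
    r_prev, r_curr = r_curr, r_next
```

Because the algorithm is symplectic, the total energy of this oscillator stays close to its initial value of $K+V=0.5$ over the whole run rather than drifting.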
Symplectic +algorithms keep the system in the region of phase-space dictated by the +ensemble, and they enjoy better long-time energy +conservation.\cite{Frenkel02} + +While the error in the positions calculated using the Verlet algorithm +is small ($\mathcal{O}(\delta t^4)$), the error in the velocities is +quite a bit larger ($\mathcal{O}(\delta t^2)$).\cite{Allen87} Swope +{\it et al.} developed a corrected form of this algorithm, the +`velocity Verlet' algorithm, which improves the error of the velocity +calculation and thus the energy conservation.\cite{Swope82} This +algorithm involves a full step of the positions, +\begin{equation} +\mathbf{r}(t+\delta t) = \mathbf{r}(t) + \delta t\mathbf{v}(t) + + \frac{1}{2}\delta t^2\mathbf{a}(t), +\end{equation} +and a half step of the velocities, +\begin{equation} +\mathbf{v}\left(t+\frac{1}{2}\delta t\right) = \mathbf{v}(t) + + \frac{1}{2}\delta t\mathbf{a}(t). +\end{equation} +After calculating new forces, the velocities can be updated to a full +step, +\begin{equation} +\mathbf{v}(t+\delta t) = \mathbf{v}\left(t+\frac{1}{2}\delta t\right) + + \frac{1}{2}\delta t\mathbf{a}(t+\delta t). +\end{equation} +By integrating in this manner, the error in the velocities is reduced to +$\mathcal{O}(\delta t^3)$. It should be noted that the error in the +positions increases to $\mathcal{O}(\delta t^3)$, but the resulting +improvement in the energies coupled with the maintained simplicity, +time-reversibility, and symplectic nature make it an improvement over +the original form. \subsection{\label{sec:IntroIntegrate}Rigid Body Motion} Rigid bodies are non-spherical particles or collections of particles (e.g. 
$\mbox{C}_{60}$) that have a constant internal potential and move collectively.\cite{Goldstein01} Discounting iterative constraint -procedures like {\sc shake} and {\sc rattle}, they are not included in -most simulation packages because of the algorithmic complexity -involved in propagating orientational degrees of -freedom.\cite{Ryckaert77,Andersen83,Krautler01} Integrators which -propagate orientational motion with an acceptable level of energy -conservation for molecular dynamics are relatively new inventions. +procedures like {\sc shake} and {\sc rattle} for approximating rigid +bodies, they are not included in most simulation packages because of +the algorithmic complexity involved in propagating orientational +degrees of freedom.\cite{Ryckaert77,Andersen83,Krautler01} Integrators +which propagate orientational motion with an acceptable level of +energy conservation for molecular dynamics are relatively new +inventions. Moving a rigid body involves determination of both the force and torque applied by the surroundings, which directly affect the @@ -293,18 +424,6 @@ scheme utilized in {\sc oopse}. time. This demonstrates the computational advantage of the integration scheme utilized in {\sc oopse}. -\subsection{Periodic Boundary Conditions} - -\begin{figure} -\centering -\includegraphics[width=4.5in]{./figures/periodicImage.pdf} -\caption{With periodic boundary conditions imposed, when particles -move out of one side the simulation box, they wrap back in the -opposite side. In this manner, a finite system of particles behaves as -an infinite system.} -\label{fig:periodicImage} -\end{figure} - \section{Accumulating Interactions} In the force calculation between {\tt moveA} and {\tt moveB} mentioned @@ -452,8 +571,105 @@ chapter \ref{chap:electrostatics}. this problem, as well as useful corrective techniques, is presented in chapter \ref{chap:electrostatics}. 
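The velocity Verlet scheme described above splits naturally into two half-step updates that bracket the force calculation, mirroring the {\tt moveA} and {\tt moveB} steps mentioned above. The following is a minimal sketch under assumed conditions, not the {\sc oopse} implementation: the one-dimensional harmonic test system and the function names are hypothetical.

```python
def move_a(r, v, a, dt):
    # half-step the velocity, then full-step the position with it
    v_half = v + 0.5 * dt * a
    r_new = r + dt * v_half          # equals r + dt*v + 0.5*dt^2*a
    return r_new, v_half

def move_b(v_half, a_new, dt):
    # complete the velocity step using the freshly computed acceleration
    return v_half + 0.5 * dt * a_new

# hypothetical 1-D harmonic test system
m, k, dt = 1.0, 1.0, 0.01
accel = lambda r: -k * r / m
r, v = 1.0, 0.0
a = accel(r)
for _ in range(1000):
    r, v_half = move_a(r, v, a, dt)
    a = accel(r)                     # force evaluation between the two moves
    v = move_b(v_half, a, dt)
```

Splitting the velocity update around the force evaluation is what gives velocity Verlet its improved velocity accuracy while requiring only one force calculation per step.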
-\section{Measuring Properties} +\subsection{Periodic Boundary Conditions} +In typical molecular dynamics simulations there are no restrictions +placed on the motion of particles beyond what the interparticle +interactions dictate. This means that if a particle collides with +other particles, it is free to move away from the site of the +collision. If we consider the entire system as a collection of +particles, they are not confined by walls of the ``simulation box'' +and can freely move away from the other particles. With no boundary +considerations, particles moving outside of the simulation box +enter a vacuum. This is correct behavior for cluster simulations in a +vacuum; however, if we want to simulate bulk or spatially infinite +systems, we need to use periodic boundary conditions. + +\begin{figure} +\centering +\includegraphics[width=4.5in]{./figures/periodicImage.pdf} +\caption{With periodic boundary conditions imposed, when particles +move out of one side of the simulation box, they wrap around to the +opposite side. In this manner, a finite system of particles behaves as +an infinite system.} +\label{fig:periodicImage} +\end{figure} +In periodic boundary conditions, as a particle moves outside one wall +of the simulation box, its coordinates are remapped such that the +particle enters the opposing side of the box. This process is easy to +visualize in two dimensions, as shown in figure \ref{fig:periodicImage}, +and also occurs in three dimensions, though it is not as easy to +visualize. Remapping the actual coordinates of the particles can be +problematic in that we are restricting the distance a particle can +move from its point of origin to at most a diagonal of the simulation +box. Thus, even though we are not confining the system with hard +walls, we are confining the particle coordinates to a particular +region in space. 
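The coordinate remapping just described takes only a few lines in practice. The sketch below is illustrative rather than taken from any simulation package: the cubic box, its ten-unit edge length, and the helper names are hypothetical, and the nearest-image helper shows how interparticle separations are typically computed under periodic boundaries.

```python
def wrap(x, box):
    # remap a coordinate into [0, box): a particle leaving through one
    # wall re-enters through the opposing wall
    return x % box

def minimum_image(dx, box):
    # separation to the nearest periodic image, in roughly [-box/2, box/2]
    return dx - box * round(dx / box)

box = 10.0                            # hypothetical cubic box edge length
print(wrap(10.5, box))                # -> 0.5
print(wrap(-0.25, box))               # -> 9.75
print(minimum_image(9.0, box))        # -> -1.0
```

Note that two particles near opposite walls are separated by only one unit through the boundary, which is what the nearest-image separation reports.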
To avoid this ``soft'' confinement, it is common +practice to allow the particle coordinates to expand in an +unrestricted fashion while calculating interactions using a wrapped +set of ``minimum image'' coordinates. These minimum image coordinates +need not be stored, because they are easily calculated on the fly when +determining particle distances. + +\section{Calculating Properties} + +In order to use simulations to model experimental processes and evaluate +theories, we need to be able to extract properties from the +results. In experiments, we can measure thermodynamic properties such +as the pressure and free energy. In computer simulations, we can +calculate properties from the motion and configuration of particles in +the system and make connections between these properties and the +experimental thermodynamic properties through statistical mechanics. + +The work presented in the later chapters uses the canonical ($NVT$), +isobaric-isothermal ($NPT$), and microcanonical ($NVE$) statistical +mechanics ensembles. Each ensemble lends itself to the effective +calculation of specific properties. For instance, when calculating +dynamic properties, which depend upon the motion of the particles, it is +better to choose an ensemble that does not modify particle motions in +order to hold properties like the temperature or pressure constant. In +this case, the $NVE$ ensemble +would be the most appropriate choice. In chapter \ref{chap:ice}, we +discuss calculating free energies, which are non-mechanical +thermodynamic properties, and these calculations also depend on the +chosen ensemble.\cite{Allen87} The Helmholtz free energy ($A$) depends +on the $NVT$ partition function ($Q_{NVT}$), +\begin{equation} +A = -k_\textrm{B}T\ln Q_{NVT}, +\end{equation} +while the Gibbs free energy ($G$) depends on the $NPT$ partition +function ($Q_{NPT}$), +\begin{equation} +G = -k_\textrm{B}T\ln Q_{NPT}. 
+\end{equation} +It is also useful to note that the conserved quantities of the $NVT$ +and $NPT$ ensembles are related to the Helmholtz and Gibbs free +energies, respectively.\cite{Melchionna93} + +Integrating the equations of motion is a simple way to obtain a +sequence of configurations that sample the chosen ensemble. For each +of these configurations, we can calculate an instantaneous value for a +chosen property, like the density in the $NPT$ ensemble, where the +volume is allowed to fluctuate. The density for the simulation is +calculated from an average over the densities for the individual +configurations. Because the configurations from the integration +process are related to one another by time evolution, this average is +technically a time average. In +calculating thermodynamic properties, we would actually prefer an +ensemble average that is representative of all accessible states of +the system. We can calculate thermodynamic properties from the time +average by taking advantage of the ergodic hypothesis, which states +that over a long period of time, the time average and the ensemble +average are the same. + +In addition to the average, the fluctuations of a particular +property can be determined via the standard deviation. Fluctuations +are useful for measuring various thermodynamic properties in computer +simulations. In section \ref{sec:t5peThermo}, we use fluctuations in +properties like the enthalpy and volume to calculate various +thermodynamic properties, such as the constant pressure heat capacity +and the isothermal compressibility. + \section{Application of the Techniques} In the following chapters, the above techniques are all utilized in