Grid-based dynamic electronic publication: A case study using combined experiment and simulation studies of crown ethers at the air/water interface.
Esther R Rousay, Hongchen Fu, Jamie M Robinson, Jeremy G Frey, Jonathan W Essex
School of Chemistry, University of Southampton,
Highfield, Southampton, SO17 1BJ, UK


Introduction - Compute Facility

The molecular dynamics simulations were carried out on the Iridis Beowulf cluster at the University of Southampton, which consists of 548 processors: 292 1 GHz Intel Pentium IIIs, 214 1.8 GHz Intel Xeons, 32 1.5 GHz Intel Pentium IVs and 10 0.8 GHz Itaniums. In total these processors have more than 300 GB of memory and 12 TB of local disk storage, backed by a RAID5 disk array with 2.8 TB of storage. The system presented in this work required about 50 GB of storage for 5 ns of simulation. This is a considerable amount of data to store, especially as this simulation is just one of a series to be run on this system. The data are stored temporarily on one of the RAID5 disks on Iridis, but must be moved elsewhere for analysis and long-term storage.
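
The 50 GB per 5 ns figure can be sanity-checked with a back-of-envelope trajectory size estimate. The sketch below is illustrative only; the atom count, timestep and write frequency are assumptions, not values taken from these simulations:

    def trajectory_size_gb(n_atoms, sim_ns, timestep_fs, write_every,
                           bytes_per_coord=4):
        """Rough size of a binary trajectory: three single-precision
        floats per atom per stored frame."""
        n_frames = sim_ns * 1e6 / timestep_fs / write_every
        return n_atoms * 3 * bytes_per_coord * n_frames / 1e9

    # Placeholder values: ~10,000 atoms, 2 fs timestep, a frame every 5 steps.
    print(trajectory_size_gb(10_000, 5, 2, 5))  # ~60 GB, the same order as above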

The NAMD molecular dynamics program is particularly efficient in parallel. One of the main issues in parallelising such calculations is that the computational load must be distributed evenly among the processors. In molecular simulation, a naive spatial decomposition can leave each processor responsible for a very irregular region of space, making the evaluation of the many different types of forces hard to balance. NAMD instead splits the entire model into uniform cubes of space (patches) and assigns these, along with the calculation of the interactions between them, to the processors so as to balance the computational load as far as possible. An incremental load balancer assesses, and if necessary adjusts, this assignment throughout the simulation.
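
As an illustration of the patch idea (not NAMD's actual implementation), the sketch below bins atoms into uniform cubic patches and then distributes the patches greedily over processors. The function names and the simple quadratic cost model are assumptions for the example:

    import numpy as np

    def assign_patches(coords, box_length, patch_size):
        """Bin atoms into uniform cubic patches covering a periodic box."""
        n_side = int(box_length // patch_size)
        ids = np.floor(coords / patch_size).astype(int) % n_side
        patches = {}
        for atom, key in enumerate(map(tuple, ids)):
            patches.setdefault(key, []).append(atom)
        return patches

    def balance_patches(patches, n_procs):
        """Greedy static load balance: heaviest patches first, each to the
        currently least-loaded processor. NAMD refines such an assignment
        incrementally as the run proceeds."""
        load = [0.0] * n_procs
        owner = {}
        for key, atoms in sorted(patches.items(), key=lambda kv: -len(kv[1])):
            p = load.index(min(load))
            owner[key] = p
            load[p] += len(atoms) ** 2  # pairwise force cost grows ~quadratically
        return owner, load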

For convenience, given the presence of the Iridis cluster at Southampton, the simulation was run on this one system. However, the computation has been designed to make use of a larger distributed computational grid, so that future computationally intensive simulations, at higher crown ether concentrations, in the presence of potentially complexing cations, and for runs long enough to establish equilibration between the bulk and the interface, can be accommodated. The estimated storage capacity required for all the simulations currently running on the benzocrown systems is slightly under 200 GB. The current simulations incorporate the pressure profiling calculations, which require the large trajectory file (the DCD file) to be written every 5 timesteps. To keep data storage to a minimum, these files are converted into coordinate files as soon as the pressure profile calculations have been performed. The coordinates are saved only every 50 timesteps for the crown ethers, and every 500 timesteps for the waters. In this way, sufficient data are stored for trajectory analysis without requiring impossibly large filestores; a conversion step of this kind is sketched below. Nevertheless, this data volume will place some stress on any data grid associated with the computational grid.
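
The decimation step could be scripted, for example, with the MDAnalysis Python library. The file names, residue selections, output format and the 5-timesteps-per-frame relation below are assumptions for illustration, not the workflow actually used:

    import MDAnalysis as mda

    STEPS_PER_FRAME = 5  # the DCD was written every 5 timesteps (see above)

    # Placeholder topology/trajectory names.
    u = mda.Universe("benzocrown.psf", "run.dcd")
    crown = u.select_atoms("resname BCR")    # assumed crown ether residue name
    water = u.select_atoms("resname TIP3")   # assumed water residue name

    with mda.Writer("crown_every50.dcd", crown.n_atoms) as wc, \
         mda.Writer("water_every500.dcd", water.n_atoms) as ww:
        for ts in u.trajectory:
            step = ts.frame * STEPS_PER_FRAME
            if step % 50 == 0:    # crown ether coordinates every 50 timesteps
                wc.write(crown)
            if step % 500 == 0:   # water coordinates every 500 timesteps
                ww.write(water)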

