MCMC2 (version 1.1): A Monte Carlo code for multiply-charged clusters
Abstract
This new version of the MCMC2 program for modeling the thermodynamic and structural properties of multiply-charged clusters by means of parallel classical Monte Carlo methods provides some enhancements and corrections to the earlier version [1]. In particular, histograms for negatively and positively charged particles are separated, parallel Monte Carlo simulations can be performed by attempting exchanges between all the replica pairs and not only one randomly chosen pair, a new random number generator is supplied, and the contribution of Coulomb repulsion to the total heat capacity is corrected. The main functionalities of the original MCMC2 code (e.g., potential-energy surfaces and Monte Carlo algorithms) have not been modified.

New version program summary

Program title: MCMC2.

Catalogue identifier: AENZ_v1_1

Program summary URL:

Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland.

Licensing provisions: Standard CPC license.

No. of lines in distributed program, including test data, etc.: 148028

No. of bytes in distributed program, including test data, etc.: 1501936

Distribution format: tar.gz

Programming language: Fortran 90 with MPI extensions for parallelization

Computers: x86 and IBM platforms

Operating system:

1. CentOS 5.6, Intel Xeon X5670 2.93 GHz, gfortran/ifort (version 13.1.0) + MPICH2

2. CentOS 5.3, Intel Xeon E5520 2.27 GHz, gfortran/g95/pgf90 + MPICH2

3. Red Hat Enterprise 5.3, Intel Xeon X5650 2.67 GHz, gfortran + IntelMPI

4. IBM Power 6 4.7 GHz, xlf + PESSL (IBM parallel library)

Has the code been vectorised or parallelized?: Yes, parallelized using MPI extensions. Number of CPUs used: up to 999.

RAM (per CPU core): 10-20 MB. The physical memory needed for the simulation depends on the cluster size; the values indicated are typical for small clusters ().

Classification: 23

Catalogue identifier of previous version: AENZ_v1_0

Journal reference of previous version: Comput. Phys. Comm. 184 (2013) 873

Nature of problem: We provide a general parallel code to investigate structural and thermodynamic properties of multiply-charged clusters.

Solution method: Parallel Monte Carlo methods are implemented for the exploration of the configuration space of multiply-charged clusters. Two parallel Monte Carlo methods were found appropriate to achieve this goal: the Parallel Tempering method, where replicas of the same cluster at different temperatures are distributed among different CPUs, and the Parallel Charging method, where replicas (at the same temperature) having different particle charges or numbers of charged particles are distributed among different CPUs.
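The replica-exchange step underlying Parallel Tempering rests on the standard Metropolis swap criterion. As an illustration only (the MCMC2 code itself is Fortran 90/MPI; the function name and signature here are assumptions), a minimal Python sketch of the acceptance test for two replicas at inverse temperatures beta_i and beta_j:

```python
import math
import random

def pt_swap_accept(e_i, e_j, beta_i, beta_j, rng=random.random):
    """Metropolis acceptance test for exchanging configurations between
    two replicas with potential energies e_i, e_j held at inverse
    temperatures beta_i, beta_j.  The swap is accepted with probability
        p = min(1, exp((beta_i - beta_j) * (e_i - e_j))).
    """
    delta = (beta_i - beta_j) * (e_i - e_j)
    if delta >= 0.0:
        return True          # always accept energetically favourable swaps
    return rng() < math.exp(delta)
```

The analogous Parallel Charging criterion replaces the temperature difference by the difference in charging parameters between the two replicas.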

Restrictions: The current version of the code uses Lennard-Jones interactions, as the main cohesive interaction between spherical particles, and electrostatic interactions (charge-charge, charge-induced dipole, induced dipole-induced dipole, polarisation). The Monte Carlo simulations can only be performed in the NVT ensemble in the present code.
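As a rough illustration of the interaction model just named, the following Python sketch combines Lennard-Jones cohesion with a bare charge-charge term for one particle pair. The polarisation terms (charge-induced dipole, induced dipole-induced dipole) are omitted, and the function name, parameters and unit conventions are assumptions for illustration, not MCMC2's actual routines:

```python
def pair_energy(r, sigma, epsilon, qi, qj, ke=1.0):
    """Lennard-Jones plus bare Coulomb pair energy (polarisation omitted).
    r: interparticle distance; sigma, epsilon: LJ parameters;
    qi, qj: particle charges; ke: Coulomb constant in the chosen units."""
    sr6 = (sigma / r) ** 6
    lj = 4.0 * epsilon * (sr6 * sr6 - sr6)   # 4*eps*((s/r)^12 - (s/r)^6)
    coulomb = ke * qi * qj / r               # charge-charge interaction
    return lj + coulomb
```

For two neutral particles the minimum lies at r = 2^(1/6) sigma with depth -epsilon, as usual for the LJ potential.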

Unusual features: The Parallel Charging method, based on the same philosophy as Parallel Tempering but with particle charges and the number of charged particles as parameters instead of temperature, is an interesting new approach to exploring energy landscapes. Splitting of the simulations is allowed and averages are updated accordingly.

Running time: The running time depends on the number of Monte Carlo steps, the cluster size, and the type of interactions selected (e.g., polarisation turned on or off, and the method used for calculating the induced dipoles). Typically, a complete simulation can last from a few tens of minutes or a few hours for small clusters (, not including polarisation interactions), to one week for large clusters (, not including polarisation interactions), and several weeks for large clusters () when polarisation interactions are included. A restart procedure has been implemented that enables splitting of the accumulation phase of the simulation.

Reasons for new version: The new version corrects some bugs identified in the previous version. It also provides the user with new functionalities, such as separate histograms for positively and negatively charged particles, a new scheme for performing parallel Monte Carlo simulations, and a new random number generator.

Summary of revisions

1.

Additional features of MCMC2 version 1.1

(a)

Histograms for positively and negatively charged particles. The first version of MCMC2 was able to produce a large variety of histograms to investigate the structure of charged clusters and their propensity to undergo evaporation: angular histograms for charged particles ("angdist-yyy.dat" and "surfang-yyy.dat" files), radial histograms ("rhist-x-yyy.dat" files), and histograms for tracking the number of charged surface particles ("surfnb-yyy.dat" files) or the number of particles that tend to evaporate ("evapnb-yyy.dat" files). Although the program could handle clusters composed of both positively and negatively charged particles, these histograms did not separate the two classes of particles. This has been corrected by the addition of two columns in the histogram files. These columns are labelled with the signs "+" and "−", which refer to positively and negatively charged particles, respectively. The study of clusters composed of both positively and negatively charged particles should therefore be made easier [2]. Most of the keywords corresponding to histograms have not been changed, except the keyword for radial histograms, which now includes the possibility not to print any histogram (see keyword "RADIAL" in the "Modified keywords" section).

(b)

Full replica pair exchange in Monte Carlo simulations. In previous publications [2, 3] the exchange between replica configurations was based on two steps: the random selection of one replica pair, and the calculation of the Parallel Tempering or Parallel Charging criterion to accept or reject the exchange of configurations between the two replicas. By default these exchanges were attempted every n Monte Carlo sweeps, where a sweep is composed of N Monte Carlo steps for an N-particle cluster. The main goal of parallel Monte Carlo algorithms is to improve the convergence speed, so it can be of interest to attempt more than one exchange of replica configurations every n sweeps, provided that n is large enough for the final configuration (after the n sweeps) to be decorrelated from the initial configuration (before the n sweeps). In the present version of the code we have implemented a second way to perform parallel Monte Carlo simulations, in which exchanges of replica configurations are attempted every n sweeps by alternately selecting odd pairs (i.e., pairs involving replicas 1-2, 3-4, etc.) and even pairs (i.e., pairs involving replicas 2-3, 4-5, etc.). The two methods should be equivalent for large numbers of sweeps, although the second method proposed in the present version is expected to converge faster than the method based on random choices of replica pairs. Modifications brought to keywords "PC" and "PT" are reported in the "Modified keywords" section.
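The alternating odd/even pair selection described above can be sketched as follows. This is an illustration in Python rather than the Fortran implementation, using 1-based replica labels as in the text:

```python
def exchange_pairs(n_replicas, sweep_index):
    """Deterministic replica-pair selection alternating between
    'odd' pairs (1-2, 3-4, ...) on even-numbered exchange attempts
    and 'even' pairs (2-3, 4-5, ...) on odd-numbered ones."""
    start = 1 if sweep_index % 2 == 0 else 2
    return [(i, i + 1) for i in range(start, n_replicas, 2)]
```

Over two successive exchange attempts every neighbouring pair is tried exactly once, which is what lets configurations diffuse across the whole replica ladder faster than with a single randomly chosen pair per attempt.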

(c)

Lagged Fibonacci random number generator. Random numbers were generated by means of a LAPACK random number generator in the first version of MCMC2 [1]. Knowing that the main goal of MCMC2 is to handle Monte Carlo simulations on large clusters (from hundreds up to thousands of particles at least), we were particularly concerned with delivering a random number generator already thoroughly tested on neutral or charged clusters of such sizes. One of us (ML) developed a lagged Fibonacci random number generator based on the works by Kirkpatrick et al. [4] and Bhanot et al. [5]. In particular, this random number generator was successfully used in accurate diffusion quantum Monte Carlo investigations of the structure and energetics of small helium clusters [6], a benchmark convergence study of the dissociation energy of the HF dimer [7], and the fragmentation dynamics of ionized doped helium clusters [8, 9]. Modifications brought to keyword "SEED" are reported in the "Modified keywords" section. As an example, heat capacity curves of neutral clusters obtained after performing Parallel Tempering simulations with the two random number generators used in MCMC2 are plotted in Fig. 1 of the Supplementary materials. We can notice that the melting peak is perfectly defined by all four methods; some small deviations may occur for the premelting peak, whose convergence is harder to achieve.
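For illustration only, here is a minimal additive lagged Fibonacci generator in Python with the classic Kirkpatrick-Stoll lags (250, 103). The actual lags, seeding scheme and arithmetic (additive vs. XOR) used in MCMC2's generator are not given in this summary and may differ, so everything below is an assumption:

```python
import random

class LaggedFibonacci:
    """Additive lagged Fibonacci generator:
        x_n = (x_{n-r} + x_{n-s}) mod 2^32,  with lags (r, s) = (250, 103).
    A circular buffer holds the last r values."""

    def __init__(self, seed, r=250, s=103, m=2 ** 32):
        self.r, self.s, self.m = r, s, m
        filler = random.Random(seed)          # ordinary PRNG only fills the lag buffer
        self.buf = [filler.getrandbits(32) for _ in range(r)]
        self.idx = 0                          # points at x_{n-r}, the oldest entry

    def next_uint(self):
        r, s, m = self.r, self.s, self.m
        i = self.idx
        # x_{n-s} was generated s steps ago, so it sits at (i - s) mod r.
        v = (self.buf[i] + self.buf[(i + r - s) % r]) % m
        self.buf[i] = v                       # overwrite the oldest value
        self.idx = (i + 1) % r
        return v

    def uniform(self):
        """Uniform deviate in [0, 1)."""
        return self.next_uint() / self.m
```

Lagged Fibonacci generators are popular in Monte Carlo work because each draw costs one addition and one modular reduction, and the period is very long for well-chosen lag pairs.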

2.

Modifications or corrections to MCMC2 version 1.0

(a)

In MCMC2 (version 1.0) we computed the heat capacity related to Coulomb energy fluctuations, which we improperly called the "Coulomb part of the heat capacity". Indeed, when defining the potential energy of a charged cluster as the sum of the Lennard-Jones (LJ) interactions and the Coulomb interactions, we can define several quantities:
