LAMMPS-Bench Molecular Dynamics Benchmark Dataset
License
GPL
The LAMMPS Benchmark dataset is used to test and compare the performance of LAMMPS (a molecular dynamics simulation code) on different hardware and configurations.
These datasets are not scientific experimental data; they are used to evaluate computing performance (speed, scaling, efficiency). They contain the system setups, force-field files, input scripts, initial atomic coordinates, etc., and are provided officially with LAMMPS in the bench/ folder.
The relevant paper is "LAMMPS – a flexible simulation tool for particle-based materials modeling at the atomic, meso, and continuum scales", published in 2022 by Sandia National Laboratories in collaboration with Michigan State University, Temple University, and other institutions.
This dataset contains 5 benchmark problems, which are listed and discussed in the Benchmark section of the LAMMPS documentation and on the Benchmark page of the LAMMPS website (https://www.lammps.org/bench.html).
This dataset also contains a subdirectory:
POTENTIALS: Benchmarking scripts for various potentials in LAMMPS
The results of all these benchmarks are shown and discussed on the Benchmark page of the LAMMPS website: https://www.lammps.org/bench.html
The rest of this introduction describes the five benchmark problems included in the dataset and how to run them on a CPU (either serially or in parallel). The subdirectories contain their respective README files, which should be read before running these scripts.
The 5 benchmark problems are:
LJ = atomic fluid, using the Lennard-Jones potential with a cutoff radius of 2.5 σ (approximately 55 neighbors per atom), time-integrated in the NVE ensemble.
Chain = bead-spring polymer melt of 100-monomer chains, using FENE bonds and Lennard-Jones pair interactions with a cutoff radius of 2^(1/6) σ (approximately 5 neighbors per atom), time-integrated in the NVE ensemble.
EAM = metallic solid, using the embedded atom potential (EAM potential) of copper (Cu) with a cutoff radius of 4.95 Å (approximately 45 neighbors per atom), time-integrated in the NVE ensemble.
Chute = granular chute flow, using a potential function with a friction history term and a cutoff radius of 1.1σ (approximately 7 neighbors per atom), time-integrated in the NVE ensemble.
Rhodo = rhodopsin protein in a solvated lipid bilayer, using the CHARMM force field with an LJ cutoff radius of 10 Å (approximately 440 neighbors per atom); long-range Coulomb interactions are computed with the particle-particle particle-mesh (PPPM) method, time-integrated in the NPT ensemble.
Each of the five problems contains 32,000 atoms and runs for 100 time steps. Each test can be run as a serial benchmark (single-processor) or in parallel. In parallel mode, each benchmark can be run as a fixed-size or scaled-size problem. For fixed-size benchmarks, the same 32K atom system is run on different numbers of processors. For scaled-size benchmarks, the system size scales with the number of processors. For example, a 256K atom system runs on 8 processors; a 32 million atom system runs on 1,024 processors; and so on.
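As a rough illustration of the scaled-size arithmetic (assuming, as in the scaled examples below, that the replication factors x, y, z are chosen so that their product equals the processor count), the total atom count is the base 32,000 atoms multiplied by x*y*z:

# scaled-size atom counts: 32,000 base atoms times the x*y*z replication factors
echo $((32000 * 2 * 2 * 2))    # x=2 y=2 z=2  ->    8 processors,   256,000 atoms
echo $((32000 * 2 * 2 * 4))    # x=2 y=2 z=4  ->   16 processors,   512,000 atoms
echo $((32000 * 8 * 8 * 16))   # x=8 y=8 z=16 -> 1,024 processors, ~32.8 million atoms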
This dataset includes some example log files from runs on different machines with different numbers of processors, which you can use to compare against your own results. For example, the file name log.date.chain.lmp.scaled.foo.P denotes a scaled-size Chain benchmark run on machine "foo" with P processors, using the LAMMPS version identified by the date. Note that the EAM and LJ benchmarks may not give exactly the same results on different machines because the "velocity loop geom" option assigns velocities based on atomic coordinates; see the documentation for the velocity command for more details.
The CPU time (in seconds) for the run is reported in the "Loop time" line of the log file, for example:
Loop time of 3.89418 on 8 procs for 100 steps with 32000 atoms
Timing results for these problems on various machines can be found on the Benchmark page of the LAMMPS website.
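To compare several runs at once, the "Loop time" lines can be pulled out of the log files in one step, for example (assuming the log files follow the log.* naming described above):

# print the Loop time line from every log file in the current directory
grep "Loop time" log.*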
Here's how to run each problem, assuming the LAMMPS executable is called lmp_mpi and parallel runs are launched with the mpirun command:
Serial runs (single processor):
lmp_mpi -in in.lj
lmp_mpi -in in.chain
lmp_mpi -in in.eam
lmp_mpi -in in.chute
lmp_mpi -in in.rhodo
Parallel fixed-size runs (8 processors in this example):
mpirun -np 8 lmp_mpi -in in.lj
mpirun -np 8 lmp_mpi -in in.chain
mpirun -np 8 lmp_mpi -in in.eam
mpirun -np 8 lmp_mpi -in in.chute
mpirun -np 8 lmp_mpi -in in.rhodo
Parallel scaled-size runs (16 processors in this example):
mpirun -np 16 lmp_mpi -var x 2 -var y 2 -var z 4 -in in.lj
mpirun -np 16 lmp_mpi -var x 2 -var y 2 -var z 4 -in in.chain.scaled
mpirun -np 16 lmp_mpi -var x 2 -var y 2 -var z 4 -in in.eam
mpirun -np 16 lmp_mpi -var x 4 -var y 4 -in in.chute.scaled
mpirun -np 16 lmp_mpi -var x 2 -var y 2 -var z 4 -in in.rhodo.scaled
For the Chute runs, Pz = 1 is required, so P = Px * Py; only the x and y variables need to be set.
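As an additional, hypothetical illustration of this rule (not one of the official examples above), a scaled Chute run on 32 processors could set x and y so that Px * Py = 32:

mpirun -np 32 lmp_mpi -var x 8 -var y 4 -in in.chute.scaled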