# MPM-Geomechanics

Material Point Method for simulating geo-materials under large deformation conditions
The tests use GoogleTest. This library is not bundled: each developer must clone the official GoogleTest repository into the /external folder independently.
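Assuming the standard upstream GoogleTest repository and the /external folder mentioned above, the clone step might look like this (run from the repository root):

```shell
# Clone GoogleTest into external/ (target subfolder name is an assumption)
git clone https://github.com/google/googletest.git external/googletest
```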
The simplest way to compile on Windows and Linux is to use the .bash file in /build/qa with the MSYS2 MINGW64 console: simply execute the following command in the directory MPM-Geomechanics/build/qa:
Alternatively, you can use the following commands:
These commands will generate two executables: MPM-Geomechanics-tests and MPM-Geomechanics-benchmark.
- MPM-Geomechanics-tests: runs the tests using GoogleTest. All files ending in .test.cpp are test files; you can find them in the qa/tests directory.
- MPM-Geomechanics-benchmark: runs the benchmarks using GoogleTest. All files ending in .benchmark.cpp are performance files; you can find them in the qa/benchmark directory.

To run the benchmark correctly, a JSON file named benchmark-configuration.json is required. This file allows the user to specify values for each test. If the file does not exist, or a value is missing, a default value is used.
The executable MPM-Geomechanics-benchmark accepts the following command-line arguments:

- `<directory>`: specifies the file used to run the benchmark. If no file is specified, the program uses benchmark-configuration.json, located in the same directory as the executable. Example: MPM-Geomechanics-benchmark configuration-file.json

The executable MPM-Geomechanics-benchmark also accepts the following command-line flags:
- --log: shows more information about missing keys in the benchmark-configuration.json file.

The performance tests can also be executed with the start-multi-benchmark.py script, which runs benchmarks with one or more executables downloaded as artifacts from GitHub and stores the log results in separate folders. Each executable is downloaded automatically from GitHub as an artifact, using an ID specified in start-multi-benchmark-configuration.json. Additionally, the benchmark configuration (number of material points and number of threads) can be defined in the same file.
Example of a start-multi-benchmark-configuration.json file:
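A plausible sketch of such a file is shown below. The key names and all values are assumptions for illustration, except win_benchmark_executable and win_github_id, which are mentioned in the text; consult the actual file shipped with the repository for the real layout.

```json
{
  "executables": [
    { "win_benchmark_executable": "MPM-Geomechanics-benchmark.exe", "win_github_id": 123456789 },
    { "win_benchmark_executable": "MPM-Geomechanics-benchmark-dev.exe", "win_github_id": 987654321 }
  ],
  "particles": [10000, 100000, 1000000],
  "threads": [1, 4, 8]
}
```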
As you can see, this setup runs 3 x 3 (particle counts x thread counts) tests for each executable defined by win_benchmark_executable and win_github_id, totaling 18 tests (3 x 3 x 2).
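The expansion of the configuration into individual runs can be sketched as follows; the function name, configuration keys, and values are assumptions for illustration, not the script's actual API:

```python
from itertools import product

def build_run_matrix(executables, particle_counts, thread_counts):
    """Expand the benchmark configuration into one entry per run.

    Each executable is benchmarked once for every combination of
    particle count and thread count (hypothetical structure, assumed
    from the configuration described above).
    """
    return [
        {"executable": exe, "particles": p, "threads": t}
        for exe, p, t in product(executables, particle_counts, thread_counts)
    ]

# Two executables, three particle counts, three thread counts:
runs = build_run_matrix(
    executables=["benchmark-a.exe", "benchmark-b.exe"],  # hypothetical names
    particle_counts=[10_000, 100_000, 1_000_000],
    thread_counts=[1, 4, 8],
)
print(len(runs))  # 3 x 3 combinations per executable -> 18
```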
The start-multi-benchmark.py script accepts the following command-line flags:
- --clear: removes the benchmark folder (if it exists) before executing the performance tests.
- --cache: uses the previously created cache instead of the start-multi-benchmark-configuration.json file.
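The two flags above could be handled with argparse; this is a minimal sketch under the assumption that the script uses standard boolean flags (the parser itself is not taken from the actual script):

```python
import argparse

def parse_flags(argv=None):
    """Parse the start-multi-benchmark.py command-line flags (sketch)."""
    parser = argparse.ArgumentParser(
        description="Run performance tests with one or more benchmark executables.")
    parser.add_argument("--clear", action="store_true",
                        help="Remove the benchmark folder before running.")
    parser.add_argument("--cache", action="store_true",
                        help="Use the previously created cache instead of "
                             "start-multi-benchmark-configuration.json.")
    return parser.parse_args(argv)

args = parse_flags(["--clear"])
print(args.clear, args.cache)  # True False
```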