
  • Download and install
    • Easy installation methods
    • Install the python interface
    • Install the C++ interface

Please follow our GitHub webpage to download the latest released version and the development version.

Easy installation methods¶

There are various easy methods to install DeePMD-kit. Choose the one that you prefer. If you want to build it yourself, jump to the next two sections.

After the easy installation, DeePMD-kit (dp) and LAMMPS (lmp) will be available to execute. You can try dp -h and lmp -h to see the help. mpirun is also available in case you want to run LAMMPS in parallel.

Offline packages¶

Offline packages for both the CPU and GPU versions are available on the Releases page.

With conda¶

DeePMD-kit is available with conda. Install Anaconda or Miniconda first.

To install the CPU version:
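A sketch of the conda command, assuming the packages are published on the deepmodeling channel and that a *cpu build string selects the CPU variant (pin the version you need):

```shell
# Install the CPU builds of DeePMD-kit and its LAMMPS package
# from the deepmodeling channel.
conda install deepmd-kit=*=*cpu lammps-dp=*=*cpu -c deepmodeling
```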

To install the GPU version containing CUDA 10.1:
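Roughly, assuming the same channel layout, the GPU variant is selected by the build string and paired with a matching CUDA toolkit (the pins below are illustrative):

```shell
# Install the GPU builds together with the CUDA 10.1 toolkit.
conda install deepmd-kit=*=*gpu lammps-dp=*=*gpu cudatoolkit=10.1 -c deepmodeling
```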

With Docker¶


A Docker image for installing DeePMD-kit is available here.

To pull the CPU version:
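For example, assuming the image is published under the deepmodeling namespace on GitHub Container Registry (the exact tag varies by release):

```shell
# Pull the CPU image; replace the tag with the release you need.
docker pull ghcr.io/deepmodeling/deepmd-kit:2.0.0_cpu
```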

To pull the GPU version:
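Similarly, a GPU tag typically encodes the CUDA version it was built against (illustrative tag):

```shell
# Pull a CUDA-enabled image; the tag encodes the CUDA version.
docker pull ghcr.io/deepmodeling/deepmd-kit:2.0.0_cuda10.1_gpu
```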

Install the python interface¶

Install TensorFlow's Python interface¶

First, check the Python version on your machine:
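For example (the interpreter may be named python or python3 depending on your system):

```shell
# DeePMD-kit requires Python 3; confirm which version is installed.
python3 --version
```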

We follow the virtual environment approach to install TensorFlow's Python interface. The full instructions can be found on TensorFlow's official website. We assume that the Python interface will be installed into the virtual environment directory $tensorflow_venv.
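A minimal sketch of this approach, assuming virtualenv is available and $tensorflow_venv is the target directory:

```shell
# Create the virtual environment, activate it, and install TensorFlow inside.
virtualenv -p python3 $tensorflow_venv
source $tensorflow_venv/bin/activate
pip install --upgrade tensorflow
```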

Note that every time a new shell is started and one wants to use DeePMD-kit, the virtual environment should be activated by
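With the $tensorflow_venv directory assumed above:

```shell
# Activate the virtual environment in the current shell.
source $tensorflow_venv/bin/activate
```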

To leave the virtual environment, run
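```shell
# Deactivate the current virtual environment.
deactivate
```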

If one has multiple Python interpreters named like python3.x, a specific one can be selected by, for example,
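For instance, to build the environment against python3.8 (any installed python3.x works the same way):

```shell
# Point virtualenv at a specific interpreter.
virtualenv -p python3.8 $tensorflow_venv
```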

If one does not need GPU support for deepmd-kit and is concerned about package size, the CPU-only version of TensorFlow can be installed instead by
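```shell
# tensorflow-cpu is the CPU-only PyPI package and is considerably smaller.
pip install --upgrade tensorflow-cpu
```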

To verify the installation, run
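A quick check along the lines of TensorFlow's own docs, running a small computation to confirm the install works:

```shell
# If TensorFlow is installed correctly, this prints a scalar tensor.
python -c "import tensorflow as tf; print(tf.reduce_sum(tf.random.normal([1000, 1000])))"
```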

Remember to activate the virtual environment every time you use deepmd-kit.

Install the DeePMD-kit's python interface¶

Execute
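One route, assuming the released package on PyPI is sufficient (installing from a cloned source tree with pip install . is the alternative):

```shell
# Install the DeePMD-kit Python interface into the active environment.
pip install deepmd-kit
```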

To test the installation, one may execute
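```shell
# The dp entry point should be on PATH after installation.
dp -h
```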

It will print the help information like


Install the C++ interface¶

If one does not need to use DeePMD-kit with LAMMPS or i-PI, then the Python interface installed in the previous section does everything, and this section can be safely skipped.

Install TensorFlow's C++ interface¶

Check the compiler version on your machine
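For example:

```shell
# The first line of the output reports the gcc version.
gcc --version
```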

The C++ interface of DeePMD-kit was tested with gcc >= 4.8. Note that i-PI support is only compiled with gcc >= 4.9.

First, the C++ interface of TensorFlow should be installed. Note that the TensorFlow version must be consistent with that of the Python interface. You may follow the instructions to install the corresponding C++ interface.

Install the DeePMD-kit's C++ interface¶

Clone the DeePMD-kit source code
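```shell
# Clone the DeePMD-kit repository from GitHub into a local directory.
git clone https://github.com/deepmodeling/deepmd-kit.git deepmd-kit
```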

For convenience, you may want to record the location of the source in a variable, say deepmd_source_dir, by
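```shell
# Remember where the source tree lives for later cmake and copy steps.
cd deepmd-kit
deepmd_source_dir=$(pwd)
```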

Now go to the source code directory of DeePMD-kit and make a build directory.
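```shell
# Do an out-of-source build under source/build.
cd $deepmd_source_dir/source
mkdir build
cd build
```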

Assuming you want to install DeePMD-kit into the path $deepmd_root, execute cmake
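Roughly as follows, assuming the TENSORFLOW_ROOT cache variable is how this build locates the C++ TensorFlow install:

```shell
# $tensorflow_root: location of TensorFlow's C++ interface.
# $deepmd_root: installation prefix for DeePMD-kit.
cmake -DTENSORFLOW_ROOT=$tensorflow_root -DCMAKE_INSTALL_PREFIX=$deepmd_root ..
```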

where the variable tensorflow_root stores the location where TensorFlow's C++ interface is installed. DeePMD-kit will automatically detect whether a CUDA toolkit is available on your machine and build GPU support accordingly. If you want to force cmake to find the CUDA toolkit, you can specify the key USE_CUDA_TOOLKIT,
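For example:

```shell
# Force the CUDA toolkit to be required rather than auto-detected.
cmake -DUSE_CUDA_TOOLKIT=true -DTENSORFLOW_ROOT=$tensorflow_root -DCMAKE_INSTALL_PREFIX=$deepmd_root ..
```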


and you may further be asked to provide CUDA_TOOLKIT_ROOT_DIR. If cmake has executed successfully, then
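```shell
# Build and install; -j4 uses 4 parallel jobs.
make -j4
make install
```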

If everything works fine, you will have the following executables and libraries installed in $deepmd_root/bin and $deepmd_root/lib


Install LAMMPS's DeePMD-kit module¶

DeePMD-kit provides a module for running MD simulations with LAMMPS. Now make the DeePMD-kit module for LAMMPS.
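From the build directory used above (assuming a lammps target exists in the generated build system):

```shell
# Generate the USER-DEEPMD module for LAMMPS in the build directory.
cd $deepmd_source_dir/source/build
make lammps
```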

DeePMD-kit will generate a module called USER-DEEPMD in the build directory. Now download the LAMMPS code (29Oct2020 or later), and uncompress it:
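For the 29Oct2020 stable release, for example:

```shell
# Download and unpack the LAMMPS stable release tarball from GitHub.
wget https://github.com/lammps/lammps/archive/stable_29Oct2020.tar.gz
tar xf stable_29Oct2020.tar.gz
```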

The source code of LAMMPS is stored in the directory lammps-stable_29Oct2020. Now go into the LAMMPS code and copy the DeePMD-kit module like this
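```shell
# Copy the generated USER-DEEPMD package into the LAMMPS src tree.
cd lammps-stable_29Oct2020/src/
cp -r $deepmd_source_dir/source/build/USER-DEEPMD .
```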

Now build LAMMPS
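Using LAMMPS's traditional make-based build, roughly:

```shell
# Enable the USER-DEEPMD package, then build the MPI target.
make yes-user-deepmd
make mpi -j4
```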

The option -j4 means using 4 processes in parallel. You may want to use a different number according to your hardware.


If everything works fine, you will end up with an executable lmp_mpi.

The DeePMD-kit module can be removed from LAMMPS source code by
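Following the same package mechanism, from the LAMMPS src directory:

```shell
# Disable and remove the USER-DEEPMD package from the LAMMPS build.
make no-user-deepmd
```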
