Shift Current Prediction (DPA3-$\sigma$)

This model is built on the DPA3 architecture and predicts the shift current of materials.

Because the training targets follow a long-tailed distribution, the model is trained in log1p space, i.e., each target x is transformed as log1p(x) = log(1 + x). Predictions are likewise in log1p space and can be mapped back with the inverse transform expm1(y) = exp(y) - 1, as in the sketch below.
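
A minimal sketch of the transform pair in NumPy (the sample values are hypothetical, for illustration only):

import numpy as np

# Forward transform applied to the training targets: y = log(1 + x).
x = np.array([0.0, 0.5, 10.0, 1.0e4])  # hypothetical shift-current magnitudes
y = np.log1p(x)

# Inverse transform applied to model outputs: x = exp(y) - 1.
x_recovered = np.expm1(y)
assert np.allclose(x, x_recovered)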

Dependency

Install deepmd-kit (the dp --pt entry point uses the PyTorch backend, available in deepmd-kit v3 and later):

pip install deepmd-kit

Usage

Basic command:

dp --pt test \
   -m model.weights.pt \
   -f [INPUT_FILE] \
   -n 0 \
   -d [OUTPUT_PREFIX]

  • -m model.weights.pt: path to the trained model.
  • -f [INPUT_FILE]: a text file listing all systems to be evaluated.
  • -n 0: number of frames to test per system (0 selects all available frames).
  • -d [OUTPUT_PREFIX]: prefix of the output result files.

Example:

dp --pt test \
   -m model.weights.pt \
   -f sys_test.txt \
   -n 0 \
   -d test_result

Input format

1. System list file ([INPUT_FILE])

[INPUT_FILE] is a plain text file. Each line contains the path to a DeepMD-format system directory, for example:

.../mp-14_Se_32_spg152_gap0.88eV/
.../mp-19_Te_32_spg152_gap0.19eV/
.../mp-154_N2_23_spg198_gap7.34eV/
.../mp-181_KGa3_spg119_gap0.22eV/
.../mp-189_SiRu_23_spg198_gap0.23eV/
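
Such a list can be generated with a short script. A minimal sketch, assuming a hypothetical root directory data/systems and the mp-* naming pattern seen above:

import glob

# Hypothetical root directory holding DeepMD-format system directories.
root = "data/systems"

# Write one system path per line, as dp test expects.
systems = sorted(glob.glob(f"{root}/mp-*"))
with open("sys_test.txt", "w") as fh:
    fh.write("\n".join(systems) + "\n")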

2. System directory layout (DeepMD npy format)

Each system directory must follow the standard DeepMD npy structure, such as:

system_X/
├── set.000/
│   ├── box.npy
│   ├── coord.npy
│   └── v.npy
├── type_map.raw
└── type.raw

Notes:

  • The .npy dataset can be converted from VASP output using official DeepMD tooling such as dpdata.
  • A placeholder v.npy file is required; filling it with zeros is sufficient (see the sketch below).
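
A minimal preparation sketch, assuming dpdata (the official DeepMD conversion library) for the VASP conversion and a one-component property, so the placeholder v.npy has shape (nframes, 1); adjust the width to the model's actual property dimension:

import numpy as np
import dpdata

# Convert a VASP structure to the DeepMD npy format (creates set.000/ etc.).
system = dpdata.System("POSCAR", fmt="vasp/poscar")
system.to("deepmd/npy", "system_X")

# Write the required placeholder property file filled with zeros.
# The per-frame width of 1 is an assumption; match the model's property dimension.
np.save("system_X/set.000/v.npy", np.zeros((system.get_nframes(), 1)))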

Output

Running inference produces a file like:

test_result_property.out.0

A typical block looks like:

# /path/to/system_X/: data_property pred_property
0.0000000000000000e+00  2.04...
# /path/to/system_Y/: data_property pred_property
0.0000000000000000e+00  2.35...

  • Lines starting with # indicate the system being evaluated.
  • Each numeric line contains the reference value (zero when v.npy is a placeholder) and the model prediction, both in log1p space; the sketch below converts predictions back to physical values.
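
A minimal post-processing sketch in NumPy: np.loadtxt skips the # header lines automatically, and expm1 undoes the log1p transform described earlier (the filename matches the example output above):

import numpy as np

# Columns: reference value, model prediction, both in log1p space.
data = np.loadtxt("test_result_property.out.0", ndmin=2)

# Map predictions back to physical shift-current values.
pred = np.expm1(data[:, 1])
print(pred)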