27 changes: 21 additions & 6 deletions README.md
@@ -25,12 +25,6 @@ conda activate htocsp
mamba env update --file environment.yml
```

## MACE Setup
To add

## UMA Setup
To add

## CHARMM Setup
One can request a [free academic version of CHARMM](https://brooks.chem.lsa.umich.edu/register/) and then install it via the following commands.
*Note: make sure you compile CHARMM with the simplest option, with qchem, openmm, quantum, and colfft.*
@@ -77,6 +71,27 @@ $ charmm < charmm.in

You should quickly see the output `NORMAL TERMINATION`.

## MACE Setup

MACE requires no installation beyond what is already included in the main `htocsp` environment. No extra configuration is needed; simply ensure you have completed the **Python Setup** and **CHARMM Setup** sections above.
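
As a quick sanity check, you can confirm that the MACE package resolves inside the environment (assuming MACE is provided by the `mace` Python module, as in the standard `mace-torch` distribution):

```shell
conda activate htocsp
python -c "import mace; print('MACE is available')"
```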

## UMA Setup

Create the UMA environment from the `environment_UMA.yml` file.
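
For example, mirroring the mamba workflow from the Python Setup section (the environment name `htocsp-uma` matches the activation command used in the next step):

```shell
mamba env create --file environment_UMA.yml
conda activate htocsp-uma
```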

### HuggingFace Token Login

The UMA workflow requires access to models hosted on [HuggingFace](https://huggingface.co), so you'll need to authenticate with a HuggingFace token. Generate or retrieve your token at [https://huggingface.co/settings/tokens](https://huggingface.co/settings/tokens) and then log in:

```bash
conda activate htocsp-uma
python - << 'EOF'
from huggingface_hub import login
login(token="YOUR_TOKEN_HERE")
EOF
```

Replace `YOUR_TOKEN_HERE` with your actual HuggingFace API token. This will store your credentials locally for future use.
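
Alternatively, if you prefer not to hard-code the token in a script, `huggingface_hub` ships a CLI that prompts for the token interactively and stores it in the same local credential cache:

```shell
conda activate htocsp-uma
huggingface-cli login
```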

## Quick Test

48 changes: 48 additions & 0 deletions examples/5-updated-bt.py
@@ -0,0 +1,48 @@
"""
This is an example to perform CSP based on the reference crystal.
The structures with good matches will be output to *-matched.cif.
"""
from pyxtal.optimize import WFS, DFS, QRS
from pyxtal import pyxtal
import argparse
if __name__ == "__main__":
    parser = argparse.ArgumentParser()
    parser.add_argument("-g", "--gen", dest="gen", type=int, default=1,
                        help="Number of generations, default: 1")
    parser.add_argument("-p", "--pop", dest="pop", type=int, default=10,
                        help="Population size, default: 10")
    parser.add_argument("-n", "--ncpu", dest="ncpu", type=int, default=1,
                        help="Number of CPUs, default: 1")
    parser.add_argument("-a", "--algo", dest="algo", default='WFS',
                        help="Sampling algorithm (WFS, DFS, or QRS), default: WFS")
    parser.add_argument("--ffstyle", dest='ffstyle', default='gaff',
                        help="Force field style, default: gaff")
    parser.add_argument("--mlp", dest="mlp", default="MACE",
                        help="Choose MLP calculator: MACE, UMA, NequIP, etc.")
    parser.add_argument("--skip_mlp", dest="skip_mlp", action="store_true",
                        help="Disable the MLP optimization stage")
    parser.add_argument("--check", dest='stable', action='store_true',
                        help="Enable stability check")
    parser.add_argument("-c", "--code", dest="code", required=True,
                        help="CSD code")

    options = parser.parse_args()
    data = {"Sp-HOF-5a_UMA": "Nc8nc(N)nc(c7ccc(/C(=C(c2ccc(c1nc(N)nc(N)n1)cc2)/c4ccc(c3nc(N)nc(N)n3)cc4)c6ccc(c5nc(N)nc(N)n5)cc6)cc7)n8"}
    sg = [1, 7, 12]
    smiles = data[options.code]

    # Sampling: select the optimizer class matching the chosen algorithm
    fun = {"WFS": WFS, "DFS": DFS, "QRS": QRS}[options.algo]
    go = fun(smiles,
             options.code,
             sg,
             #fracs = [0.8, 0.2],
             tag = options.code.lower(),
             N_gen = options.gen,
             N_pop = options.pop,
             N_cpu = options.ncpu,
             ff_style = options.ffstyle,
             skip_mlp = options.skip_mlp,
             mlp = options.mlp,
             check_stable = options.stable)
    go.run()

26 changes: 26 additions & 0 deletions examples/myrun-bt-updated
@@ -0,0 +1,26 @@
#!/bin/sh -l
#SBATCH --partition=Orion
#SBATCH -J Sp-HOF-5a_UMA
#SBATCH --nodes=1
#SBATCH --ntasks-per-node=1
#SBATCH --cpus-per-task=16
#SBATCH --mem=96G
#SBATCH --time=120:00:00
#SBATCH --hint=nomultithread

# stop hidden threading in each worker (NumPy/MKL/OpenBLAS/etc.)
export OMP_NUM_THREADS=1
export MKL_NUM_THREADS=1
export OPENBLAS_NUM_THREADS=1
export NUMEXPR_NUM_THREADS=1
export BLIS_NUM_THREADS=1
export VECLIB_MAXIMUM_THREADS=1
export TBB_NUM_THREADS=1
export MKL_DYNAMIC=FALSE
export OPENBLAS_MAIN_FREE=1

# Print the hostname of the node executing this job
echo "Running on node: $(hostname)"
NCPU=$SLURM_CPUS_PER_TASK
srun python 5-updated-bt.py -a DFS -p 32 --mlp "UMA" -n ${NCPU} -g 2500 -c ${SLURM_JOB_NAME} > log-${SLURM_JOB_NAME}