Conversation

@asglover
Collaborator

add seven net and nequix configs to the repo and benchmark suite

@asglover asglover added the ci-ready Triggers CI checks for a pull request label Nov 10, 2025
Member

@vbharadwaj-bk vbharadwaj-bk left a comment


Looks fine, but do the benchmark replication commands here still work: https://passionlab.github.io/OpenEquivariance/tests_and_benchmarks/#replicating-our-benchmarks? The output doesn't have to look the same since competing package versions have advanced, but the plots shouldn't be badly formatted or out of whack.

@asglover
Collaborator Author

asglover commented Nov 11, 2025

This is a good point. I've been thinking about this, because the way the benchmarking framework is specialized toward recreating the paper's presentation of the results is a little problematic as is. I didn't get into it in this PR, but I could open another one that addresses it; we could merge that first and only merge this one after.

For example, when I was benchmarking these configs, I could not use cuEquivariance because it does not support configs with irreps like 64x1o + 128x1e, which these configs use. So I had to remove it from the tested implementations, but the plotting function explicitly normalizes against cuEq, and the plotting breaks. And it's not just these new configs: the plotting doesn't work out of the box in many situations where the benchmarking parameters differ from the defaults.

My proposal would be to make the plotting functions called by the benchmarking machinery more general purpose, so they always work, and then break the "reproduce paper plots" functionality out into scripts named something like paper_roofline_fig.py. Those scripts would be specific calls into the general-purpose benchmarking machinery, containing and isolating the paper-specific plotting instructions: normalizing against a particular implementation, expecting certain names, expecting certain datatypes.
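To illustrate the shape of this split (a sketch only; `plot_benchmark`, `paper_roofline_fig.py`, and the result format are hypothetical names, not existing OpenEquivariance APIs):

```python
# Hypothetical general-purpose plotting helper: it normalizes against a
# baseline only when that baseline is actually present in the results,
# so it never breaks when an implementation (e.g. cuEquivariance) had to
# be excluded from the run.
def plot_benchmark(results, baseline=None):
    """Return normalized throughputs for an arbitrary set of implementations.

    results:  dict mapping implementation name -> throughput
    baseline: optional name to normalize against; if it is missing from
              results, fall back to the raw numbers instead of crashing.
    """
    if baseline is not None and baseline in results:
        norm = results[baseline]
        return {name: value / norm for name, value in results.items()}
    return dict(results)  # no normalization: always works
```

A paper-reproduction script like paper_roofline_fig.py would then just be a thin wrapper that hard-codes the paper's choices, e.g. `plot_benchmark(results, baseline="cue")`, keeping those assumptions out of the shared machinery.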

What do you think of this proposal?

@vbharadwaj-bk
Member

Making the plotting general is okay, but if you want to merge this quickly, I would suggest just disabling the additional problems when the --plot flag is set. That's only a few lines and will keep the existing functionality stable.
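The suggested guard could look something like this (a minimal sketch; the config names and the exact flag wiring are hypothetical, not the actual benchmark suite code):

```python
# Sketch: benchmark the new SevenNet/Nequix configs by default, but skip
# them when --plot is requested, so the existing cuEq-normalized plotting
# path never sees configs it cannot handle.
import argparse

EXISTING_CONFIGS = ["mace-large", "nequip-revmd17"]   # hypothetical names
NEW_CONFIGS = ["sevennet", "nequix"]                  # unsupported by cuEquivariance

def select_configs(argv):
    parser = argparse.ArgumentParser()
    parser.add_argument("--plot", action="store_true",
                        help="also regenerate the paper-style plots")
    args = parser.parse_args(argv)
    configs = list(EXISTING_CONFIGS)
    if not args.plot:
        configs += NEW_CONFIGS  # only a few lines; existing path untouched
    return configs
```

This keeps the plotting code's assumptions intact while still letting the new configs run in the default benchmark path.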

We don't really expect to generate the plots again; we keep that code around purely for reproducibility and good science. On the other hand, we have gotten quite a few requests for JAX support.

@asglover
Collaborator Author

Maybe I should just add the configs to the repo without changing the benchmarking file at all: the most minimal change.

@asglover asglover added ci-ready Triggers CI checks for a pull request and removed ci-ready Triggers CI checks for a pull request labels Nov 11, 2025
@asglover asglover merged commit e723ea7 into main Nov 11, 2025
2 checks passed
@asglover asglover deleted the seven-net-configs branch November 11, 2025 21:04
