
simplified MultisliceCalculator no longer uses _process_frame_worker_torch #39

Merged: h-walk merged 9 commits into main from simplifyCalculatorWorker2 on Jan 9, 2026
Conversation

@tpchuckles (Collaborator)

Why?

_process_frame_worker_torch does some pretty incredible pack/unpack gymnastics to get all the args passed in. this was necessary when trying to implement multiprocessing, but i don't think multiprocessing helped, nor is it necessary, given the various improvements and speed gains from torch.

all these pack/unpack shenanigans also make maintenance difficult. adding new features is tough in its current state (e.g., i would like to add vectorized processing of multiple different probes, to implement spatial/temporal decoherence).

it's also a challenge for code readability (imo): one must keep track of which variable was which between MultisliceCalculator.run and _process_frame_worker_torch.

plus, this is an opportunity to clean up a lot of the "if torch" and "if slices" stuff, and the varying shapes of the propagated array (whether or not there is an index for probes or layers).

also, half the code inside _process_frame_worker_torch was just torch device and dtype checking, and we really don't expect the worker to have different dtypes/devices from what MultisliceCalculator has.

Changes

first, i moved all the logic from _process_frame_worker_torch into the frame loop of MultisliceCalculator.run.

then, I greatly cleaned up the logic. I have made use of the backend in some places, with backend.zeros and the new backend.expand_dims, to account for the varying shapes of things.

exit_waves_batch is always l,p,x,y (since i expand_dims if l or p are missing), meaning we can always loop through layers (getting rid of an if/else) to do the FFT and load into frame_data (which is always p,x,y,l,1, as before).
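the shape normalization described above can be sketched roughly like this — a minimal numpy stand-in for the backend (normalize_exit_waves and the flag names are hypothetical, not the actual code):

```python
import numpy as np

def normalize_exit_waves(exit_waves, has_layers, has_probes):
    """Illustrative sketch (numpy standing in for the backend):
    insert singleton layer/probe axes so downstream code can
    always assume an (l, p, x, y) shape."""
    if not has_layers:
        exit_waves = np.expand_dims(exit_waves, axis=0)  # add layer axis
    if not has_probes:
        exit_waves = np.expand_dims(exit_waves, axis=1)  # add probe axis
    return exit_waves

# a single-probe, no-layer wave of shape (x, y) becomes (1, 1, x, y)
wave = np.zeros((64, 64), dtype=np.complex64)
batch = normalize_exit_waves(wave, has_layers=False, has_probes=False)
print(batch.shape)  # (1, 1, 64, 64)
```

with every branch padded to the same rank, the per-layer loop and the FFT never need to special-case the missing axes.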

testing

earlier this evening I wrote up tests/18_caching.py, which runs a series of generic calculations (e.g., with and without multiple probe positions, with and without multiple timesteps, with and without layer-by-layer results cached). results are also saved for future comparison via differ.

I then ran the same test in this branch 1) with no psi_data folder (new caches) 2) with a self-generated psi_data folder (demonstrating that this branch creates and reloads its own caches as expected) and 3) with a psi_data generated from main, pre-merge (demonstrating that this branch can reload previously-generated caches, and that these changes are backwards compatible).
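the save-once / compare-later pattern described above can be sketched as follows (compare_cached is a hypothetical stand-in for the test's differ, assuming results are plain arrays):

```python
import os
import tempfile

import numpy as np

def compare_cached(result, reference_file, atol=1e-8):
    """Hypothetical stand-in for the test's 'differ': on the first run,
    save the result as the reference; on later runs, compare against it."""
    if not os.path.exists(reference_file):
        np.save(reference_file, result)  # first run: establish the baseline
        return True
    reference = np.load(reference_file)
    return np.allclose(result, reference, atol=atol)

path = os.path.join(tempfile.mkdtemp(), "ref.npy")
result = np.ones((4, 4))
print(compare_cached(result, path))          # first run saves the baseline -> True
print(compare_cached(result + 1e-12, path))  # rerun matches the baseline -> True
```

the same idea extends to the psi_data caches: regenerating them on one branch and reloading them on another is just this check applied across branches.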

@tpchuckles (Collaborator, Author) commented Jan 1, 2026

testing status

- [x] tested with torch on cpu
- [x] tested with numpy on cpu
- [x] tested with torch on gpu

test consists of:

  1. run main branch 18_caching.py
  2. run this branch 18_caching.py
  3. rerun this branch 18_caching.py (ensure this branch's caches are good)
  4. copy in main branch's psi_data (overwriting this branch's)
  5. rerun this branch 18_caching.py (ensure this branch is compatible with old caches)
     - grep for errors to ensure we don't miss any

watch out: main currently will NOT run on gpu. just copy this branch's multislice/potentials.py in. the only difference should be loading the potential slices appropriately (looks like this was never tested on gpu before, somehow).

life pro tip:
add to your .bashrc, or run in the terminal:
alias untorch='mv path/to/your/pip/site-packages/torch path/to/your/pip/site-packages/torch_bak'
alias retorch='mv path/to/your/pip/site-packages/torch_bak path/to/your/pip/site-packages/torch'
then you can just execute "untorch" or "retorch" to disable/enable torch on the fly
;-)

@h-walk h-walk merged commit 7807b91 into main Jan 9, 2026
h-walk added a commit that referenced this pull request Jan 9, 2026
@tpchuckles tpchuckles deleted the simplifyCalculatorWorker2 branch January 10, 2026 23:48