Annotated Pyro5 command

How to run PydPiper using the Pyro5 backend, as demonstrated by the Tamarack pipeline
pydpiper
Oxford
longitudinal
Author

Linh Pham

Published

July 29, 2025

Modified

July 29, 2025

PydPiper and its longitudinal registration sibling, Tamarack, can use either Makeflow or Pyro5 as their back-end workflow manager. The results are the same regardless of which back end is used, though I find it easier to monitor pipeline progress and debug errors with Pyro5.

Makeflow may also crash when the pipeline contains a large number of images, due to deep recursion in its code (a function calling itself many times, which can happen when a pipeline has many files and task dependencies).

This can be fixed by raising the soft stack limit before running the pipeline with Makeflow:

ulimit -s unlimited

The soft stack limit can also be raised before running Pyro5, just in case, but you likely won’t need it as often as with the Makeflow back end.
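If you launch the pipeline from a Python wrapper rather than a shell, the same soft-limit bump can be done with the standard resource module (Linux; this mirrors what ulimit -s does, capped at the hard limit):

```python
import resource

# Read the current soft/hard stack limits for this process.
soft, hard = resource.getrlimit(resource.RLIMIT_STACK)

# Raise the soft limit to the hard limit -- the most an unprivileged
# process may request. When the hard limit is RLIM_INFINITY this is
# equivalent to `ulimit -s unlimited`.
resource.setrlimit(resource.RLIMIT_STACK, (hard, hard))
```

Child processes (such as a makeflow or pydpiper invocation started afterwards) inherit the raised limit.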

Annotated Tamarack command with Pyro5 back end

With all that said, here’s how to use the Pyro5 back end on the Oxford BMRC. I’m currently working with longitudinal data, so the command is registration_tamarack.py; if you’re working with cross-sectional data, use MBM.py instead.

I’ve added comments next to the main differences between the Makeflow and Pyro5 back ends, and between cross-sectional and longitudinal registration settings.

module load parallel
module use --append /well/lerch/shared/tools/modules/
module load minctoolkit
conda activate pydpiper

export APPTAINER_BIND=/well/,/gpfs3

registration_tamarack.py \
--pipeline-name sample-registration \
--maget-registration-method minctracc \
--subject-matter mousebrain \
--csv-file /path_to_your_registration_csv/sample_registration.csv \
--pride-of-models /path_to_your_pride_of_model/sample_pride_of_models.csv \
--common-time-point 80 \
--run-maget --maget-atlas-library /well/lerch/shared/tools/atlases/Dorr_2008_Steadman_2013_Ullmann_2013/in-vivo-MEMRI_90micron/ \
--maget-nlin-protocol /well/lerch/shared/tools/protocols/nonlinear/default_nlin_MAGeT_minctracc_prot.csv \
--maget-masking-nlin-protocol /well/lerch/shared/tools/protocols/nonlinear/default_nlin_MAGeT_minctracc_prot.csv \
--lsq12-protocol /well/lerch/shared/tools/protocols/linear/Pydpiper_testing_default_lsq12.csv --lsq6-simple \
--num-executors 26 \
--mem 20 \
--queue-type slurm \
--queue-name 'short,win' \
--use-singularity \
--container-path /well/lerch/shared/tools/mice.sif_latest.sif
1. The module load, module use, conda activate, and export lines load the modules, tools, and container bindings needed to run the pipeline. Compared with the Makeflow back end, we don’t need to set alias SE='apptainer exec /well/lerch/shared/tools/mice.sif_latest.sif' before calling the registration command.

2. registration_tamarack.py: the command to register images from a longitudinal data set. Notice we don’t need the flag --backend makeflow, because the default back end for registration_tamarack.py and MBM.py is actually Pyro5.

3. --csv-file: a csv listing the files to be registered in this pipeline. The filename column gives the absolute path of each image; the group column gives the time point toward which the image is aligned during the LSQ6 and non-linear registration stages. A snippet of an example csv is shown in the next section.

4. --pride-of-models: a csv giving the locations of the initial models for the pipeline. Because the images are registered longitudinally, there are multiple initial models; each one serves as the LSQ6 alignment target for the images at that time point. An example csv is shown in the next section.

5. --common-time-point: the common time point for image registration, usually the oldest age in the data set.

6. --lsq6-simple: the LSQ6 initial alignment choice, chosen because all images fed into the longitudinal pipeline are already in LSQ6 space.

7. --num-executors: the number of executors, usually set to the number of files being registered.

8. --mem: memory in gigabytes assigned per task in the pipeline. Can be increased if needed; in my experience this is trial and error rather than a hard and fast rule – if the pipeline runs too slowly, I increase this number.

9. --queue-type: the cluster’s job scheduler. The options are pbs, sge, and slurm; BMRC uses slurm.

10. --queue-name: the cluster partitions where pipeline tasks are run. On BMRC, we use the short and win partitions.

11. --use-singularity: indicates that our tools and pipelines live inside a Singularity container, and that the pipeline should be run inside it. This is what lets us skip setting the SE alias from comment 1.

12. --container-path: the path to our Singularity container.
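Before launching, it can be worth sanity-checking the registration CSV – that every filename exists and every group value is numeric – and counting its rows to pick --num-executors. A minimal sketch (the column names follow the example CSV in the next section; the path is whatever you pass to --csv-file):

```python
import csv
import os

def check_registration_csv(path):
    """Validate a Tamarack registration CSV and return the image count."""
    with open(path, newline="") as f:
        rows = list(csv.DictReader(f))
    for row in rows:
        # Each image path should point at an existing .mnc file.
        if not os.path.isfile(row["filename"]):
            print(f"missing image: {row['filename']}")
        # Group values are time points, so they should parse as numbers.
        float(row["group"])
    return len(rows)

# The row count is a reasonable starting point for --num-executors:
# n_images = check_registration_csv("sample_registration.csv")
```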

Sample CSVs needed for Tamarack pipeline

Snippet of a sample registration CSV.

subject_id,treatment,filename,data_origin,group
03_01,CTL,/well/lerch/users/xrs336/autessa-lsq6/sub-MCH_NEO_HIT_03_01_ses-1_FLASH_denoise_N_I_lsq6.mnc,autessa,5
03_01,CTL,/well/lerch/users/xrs336/autessa-lsq6/sub-MCH_NEO_HIT_03_01_ses-2_FLASH_denoise_N_I_lsq6.mnc,autessa,5
03_01,CTL,/well/lerch/users/xrs336/autessa-lsq6/sub-MCH_NEO_HIT_03_01_ses-3_FLASH_denoise_N_I_lsq6.mnc,autessa,7
03_01,CTL,/well/lerch/users/xrs336/autessa-lsq6/sub-MCH_NEO_HIT_03_01_ses-4_FLASH_denoise_N_I_lsq6.mnc,autessa,7
03_02,CTL,/well/lerch/users/xrs336/autessa-lsq6/sub-MCH_NEO_HIT_03_02_ses-1_FLASH_denoise_N_I_lsq6.mnc,autessa,5
03_02,CTL,/well/lerch/users/xrs336/autessa-lsq6/sub-MCH_NEO_HIT_03_02_ses-2_FLASH_denoise_N_I_lsq6.mnc,autessa,5
03_02,CTL,/well/lerch/users/xrs336/autessa-lsq6/sub-MCH_NEO_HIT_03_02_ses-3_FLASH_denoise_N_I_lsq6.mnc,autessa,7
03_02,CTL,/well/lerch/users/xrs336/autessa-lsq6/sub-MCH_NEO_HIT_03_02_ses-4_FLASH_denoise_N_I_lsq6.mnc,autessa,7
03_03,CTL,/well/lerch/users/xrs336/autessa-lsq6/sub-MCH_NEO_HIT_03_03_ses-1_FLASH_denoise_N_I_lsq6.mnc,autessa,5

Pride of models CSV.

time_point,model_file
3,/well/lerch/users/xrs336/test_initial_models/all-cohorts-test-run-070225/p03/p03_mouse_brain.mnc
5,/well/lerch/users/xrs336/test_initial_models/all-cohorts-test-run-070225/p05/p05_mouse_brain.mnc
7,/well/lerch/users/xrs336/test_initial_models/all-cohorts-test-run-070225/p07/p07_mouse_brain.mnc
10,/well/lerch/users/xrs336/test_initial_models/all-cohorts-test-run-070225/p10/p10_mouse_brain.mnc
17,/well/lerch/users/xrs336/test_initial_models/all-cohorts-test-run-070225/p17/p17_mouse_brain.mnc
23,/well/lerch/users/xrs336/test_initial_models/all-cohorts-test-run-070225/p23/p23_mouse_brain.mnc
36,/well/lerch/users/xrs336/test_initial_models/all-cohorts-test-run-070225/p36/p36_mouse_brain.mnc
65,/well/lerch/users/xrs336/test_initial_models/all-cohorts-test-run-070225/p65/p65_mouse_brain.mnc
80,/well/lerch/users/xrs336/test_initial_models/all-cohorts-test-run-070225/p80/p80_mouse_brain.mnc
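Since every group value in the registration CSV needs a matching initial model, a quick cross-check between the two files can catch typos before the pipeline starts. A sketch, assuming the column names shown above:

```python
import csv

def missing_models(registration_csv, pride_csv):
    """Return group values from the registration CSV that have no
    matching time_point in the pride-of-models CSV."""
    with open(registration_csv, newline="") as f:
        groups = {float(row["group"]) for row in csv.DictReader(f)}
    with open(pride_csv, newline="") as f:
        time_points = {float(row["time_point"]) for row in csv.DictReader(f)}
    # Compare as numbers so "5" and "5.0" count as the same time point.
    return sorted(groups - time_points)
```

An empty result means every time point in the data has an initial model to align toward.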