Hi @BillMills, I've been thinking a little about what you told me regarding global module variables being a problem when running with multiple processors. This is awkward, as some of the EN checks (particularly the EN std level check) rely on them. I wonder if a possible workaround is to remove the parallel running from within the Python code and set it up so that we can tell it to process a particular set of profiles in serial. E.g. we would run it like so:
python AutoQC.py test 0 1000
where the 0 and 1000 tell it to process profiles 0 through 1000. Then, to get the parallel processing, we could have a shell script or another Python program that automatically spawns lots of AutoQC processes, each processing a different set of profiles.
I think this will avoid the global variables issue, as each AutoQC process has its own set of global variables and will be processing the profiles one at a time, but we will still get the speed-up from running in parallel. What do you think?
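To make the idea concrete, here is a rough sketch of what such a spawner could look like, assuming the `AutoQC.py test <start> <end>` interface proposed above; the function names (`chunk_ranges`, `run_parallel`) and the profile counts are just illustrative, not an actual implementation:

```python
import subprocess

def chunk_ranges(n_profiles, chunk_size):
    """Split [0, n_profiles) into (start, end) batches of at most chunk_size."""
    return [(i, min(i + chunk_size, n_profiles))
            for i in range(0, n_profiles, chunk_size)]

def run_parallel(n_profiles, chunk_size):
    # Spawn one independent AutoQC process per batch; each gets its own
    # interpreter and therefore its own copy of any global module state.
    procs = [subprocess.Popen(['python', 'AutoQC.py', 'test', str(a), str(b)])
             for a, b in chunk_ranges(n_profiles, chunk_size)]
    # Wait for all workers to finish before returning.
    for p in procs:
        p.wait()
```

For example, `chunk_ranges(2500, 1000)` would yield the batches `(0, 1000)`, `(1000, 2000)`, `(2000, 2500)`, so no two workers touch the same profiles.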