Hello all,

I was writing code that's supposed to spawn multiple jobs from a SQLite DB, and my first idea was to use the multiprocessing Pool. This question might be better suited to a shell-scripting forum, since the real problem is how to call the script properly. It works fine if I do Pool(3) on my local Mac.

The problem is that the Cray machine needs you to use aprun to run on the back end (the compute nodes). So if I say aprun -n 64 myscript.py, where the script uses Pool(64), it actually spawns 64 copies of myscript.py, which obviously isn't what I want. The particular architecture of the machine lets you do aprun -n 1 -d 32 myscript.py, which runs fine, but that isn't scalable to anything over 32 processors. I've looked over the aprun documentation and can't quite figure out how to get this to run. Do I need to combine mpi4py and multiprocessing in some way? I'm severely confused. Sorry for not posting any code, but it seems a little pointless, as this is mainly a question of how to use Pool in an HPC environment.
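For reference, the pattern that works locally is roughly the following sketch (the table layout and the worker body are made up for illustration; the real jobs come from my SQLite DB):

```python
import sqlite3
from multiprocessing import Pool

def run_job(job):
    # Stand-in for the real work: the actual script does something
    # much heavier per row; here we just transform the payload.
    job_id, payload = job
    return job_id, payload.upper()

def load_jobs(db_path):
    # Pull the job rows out of the SQLite DB up front, in the parent,
    # so the worker processes never touch the database connection.
    conn = sqlite3.connect(db_path)
    try:
        return conn.execute("SELECT id, payload FROM jobs").fetchall()
    finally:
        conn.close()

def main(db_path, nprocs=3):
    jobs = load_jobs(db_path)
    # Pool(3) on my Mac: fan the rows out across nprocs workers.
    with Pool(nprocs) as pool:
        return pool.map(run_job, jobs)

if __name__ == "__main__":
    # Build a throwaway DB so the sketch runs standalone.
    conn = sqlite3.connect("jobs.db")
    conn.execute("CREATE TABLE IF NOT EXISTS jobs (id INTEGER, payload TEXT)")
    conn.executemany("INSERT INTO jobs VALUES (?, ?)", [(1, "a"), (2, "b")])
    conn.commit()
    conn.close()
    print(main("jobs.db"))
```

Running this under aprun -n 64 is exactly where it falls apart: each of the 64 launched copies builds its own Pool over the same rows.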

Thanks for your time.

Thanks for the response. I think my issue has more to do with how to run a Pool on the back end of a supercomputer. The code is working. Does anyone have any experience with how to go about this? I don't think the concurrent.futures module (although it looks like it has some nice features) will take care of this problem for me, unless I'm missing something entirely.
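The mpi4py direction I mentioned would look roughly like this hedged sketch: instead of one Pool, every copy that aprun launches asks MPI for its rank and takes only its own slice of the job list. This assumes mpi4py is installed on the machine; the slicing helper itself is plain Python, and the job list here is a stand-in for the rows pulled from SQLite.

```python
def jobs_for_rank(jobs, rank, size):
    # Round-robin partition: rank r of `size` takes jobs r, r+size, r+2*size, ...
    # Every job lands on exactly one rank, so the 64 aprun copies no
    # longer duplicate each other's work.
    return jobs[rank::size]

def main():
    # Import inside main so the partitioning helper stays usable on
    # machines without MPI installed.
    from mpi4py import MPI
    comm = MPI.COMM_WORLD
    rank, size = comm.Get_rank(), comm.Get_size()
    jobs = list(range(100))  # stand-in for the SQLite job rows
    for job in jobs_for_rank(jobs, rank, size):
        pass  # do the real per-job work here

if __name__ == "__main__":
    # Launched as: aprun -n 64 python myscript.py
    try:
        main()
    except ImportError:
        pass  # mpi4py not available outside the cluster environment
```

Under this scheme aprun's "64 copies of the script" behavior stops being a bug and becomes the work distribution mechanism, which is presumably what the -n flag is designed for.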

Sorry, I have no access to a High Performance Computer like a Cray. My basement is just too small to fit one in.
