PyFluent for running jobs on HPC using the PBS scheduler
Just a quick question; apologies for not adding further details.
I have case/data files, already initialized and run for 100 iterations. I save these files and then try to continue iterating using two approaches:
1) read the files through a journal and launch Fluent using a jobscript for the PBS scheduler
2) read the files in a Python script and launch Fluent from inside it (the jobscript calls the .py file and passes it the number of CPUs and the amount of memory to be used during the calculation, as described in the PyFluent documentation).
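For context, the jobscript in approach 2 looks roughly like this (the job name, resource numbers, paths, and script name below are illustrative placeholders, not my exact values):

```shell
#!/bin/bash
#PBS -N fluent_restart
#PBS -l select=1:ncpus=32:mem=64gb
#PBS -l walltime=04:00:00

# Run from the directory the job was submitted from
cd "$PBS_O_WORKDIR"

# Activate the virtual environment where PyFluent is installed
source venv/bin/activate

# Pass the requested CPUs and memory (GB) to the Python script
python run_fluent.py 32 64
```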
The job starts in both cases with the same Fluent launcher options (pyfluent.launch_fluent(product_version="23.2.0", version="3d", precision="double", mode="solver", show_gui=False, additional_arguments="-g -pcheck=0 -pshmem -pib.ofed -ssh")), the same number of CPUs, and the same memory request, but:
1) the journal-based run completes the calculation successfully
2) the Python-based run freezes at some point without reporting any errors; it simply stops at iteration 501/600. The terminal reports that the job is still running, but it never finishes.
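For reference, the Python script in approach 2 is structured roughly as follows (the script name, argument convention, case file name, and the TUI read/iterate calls are illustrative of what I do, not verbatim):

```python
import sys


def parse_job_resources(argv):
    """Parse the CPU count and memory (GB) the PBS jobscript passes
    on the command line: run_fluent.py <ncpus> <mem_gb>."""
    ncpus = int(argv[1])
    mem_gb = int(argv[2])
    return ncpus, mem_gb


def run(ncpus):
    # Imported here so the helper above is usable without Fluent installed.
    import ansys.fluent.core as pyfluent

    solver = pyfluent.launch_fluent(
        product_version="23.2.0",
        version="3d",
        precision="double",
        mode="solver",
        processor_count=ncpus,
        show_gui=False,
        additional_arguments="-g -pcheck=0 -pshmem -pib.ofed -ssh",
    )
    # Read the saved case/data files and continue iterating
    solver.tui.file.read_case_data("restart.cas.h5")
    solver.tui.solve.iterate(500)
    solver.exit()


if __name__ == "__main__":
    ncpus, _mem_gb = parse_job_resources(sys.argv)
    run(ncpus)
```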
I am using Python 3.7 on a Linux OS, inside a virtual environment.
Any idea what could cause this issue?