DPF threading and RSPLIT APDL Command

Mike.Thompson Member, Employee Posts: 279

I was curious if anyone knows the best methods to do post-processing on lots of different mesh entities. Example: you have a model with many bolts and you want to do time-consuming post-processing on each one. I am thinking of creating a smaller results file for each bolt with the RSPLIT command, then using the threading module with mech_dpf to post-process each bolt rst in a separate thread.

Or should I keep everything in a single results file, run a results-extraction operator sequentially on a single thread, and send the raw data out for further processing on a separate thread?

Would this be an efficient way to do this, or am I missing an easier, more obvious way?
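The second idea above (sequential extraction on one thread, processing handed off to another) is essentially a producer/consumer pattern. A minimal stdlib sketch, where `process` and the integer payloads are placeholders for the real dpf extraction and per-bolt work:

```python
import queue
import threading

# Hypothetical post-processing step; stands in for the heavy per-bolt work.
def process(raw):
    return raw * 2  # placeholder computation

def consumer(q, out):
    while True:
        item = q.get()
        if item is None:       # sentinel: extraction is finished
            break
        out.append(process(item))

results = []
work = queue.Queue()
t = threading.Thread(target=consumer, args=(work, results))
t.start()
# Producer: sequential extraction on the main thread
# (stands in for the dpf results-extraction operator)
for raw in [1, 2, 3]:
    work.put(raw)
work.put(None)                 # tell the consumer to stop
t.join()
print(results)                 # [2, 4, 6]
```

The sentinel `None` lets the consumer exit cleanly once the producer has pushed everything.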


  • Pernelle Marone-Hitz Member, Moderator, Employee Posts: 804

    I think @Pierre Thieffry should be able to provide some useful inputs on this one.

  • Pierre Thieffry Member, Moderator, Employee Posts: 98

    @Mike.Thompson I recently worked on a multi-thread dpf example. I don't think you need to split the rst file.

    You could either:

    • load your entire result field for the full model and perform operations on each bolt on different dpf servers using multi-threading
    • perform all operations including the retrieval of the result field for each bolt on different dpf servers.

    The second option would probably be a bit more memory-efficient. The first option is useful if you need to operate on more parts of the model than just the bolts.
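A rough sketch of what the second option's worker could look like, assuming each bolt has a named selection in the result file; the von Mises summary and all names here are illustrative assumptions, not part of the original script:

```python
import ansys.dpf.core as dpf

def worker_function_for_bolt(rst_file, bolt_ns, server):
    """Open the result on this thread's own server, scope it to one bolt,
    and return a small summary (here: max von Mises stress, as an example)."""
    model = dpf.Model(rst_file, server=server)
    # Scoping built from the bolt's named selection (assumed model setup)
    scoping = model.metadata.named_selection(bolt_ns)
    stress_fc = model.results.stress(mesh_scoping=scoping).eval()
    vm = dpf.operators.invariant.von_mises_eq(field=stress_fc[0], server=server)
    return {bolt_ns: vm.outputs.field().data.max()}
```

Because each worker opens the model on its own in-process server, the full result field is never loaded outside the bolt's scoping.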

    Here's a skeleton script for multi-threading:

    import threading
    import time, sys
    import os
    import ansys.dpf.core as dpf

    class returnValueThread(threading.Thread):
        """Thread subclass whose join() returns the target's return value."""
        def __init__(self, *args, **kwargs):
            super().__init__(*args, **kwargs)
            self.result = None
        def run(self):
            if self._target is None:
                return  # could alternatively raise an exception, depends on the use case
            try:
                self.result = self._target(*self._args, **self._kwargs)
            except Exception as exc:
                print(f'{type(exc).__name__}: {exc}', file=sys.stderr)  # properly handle the exception
        def join(self, *args, **kwargs):
            super().join(*args, **kwargs)
            return self.result

    def worker_function_for_bolt(boltid, rst_file, server):
        # ... per-bolt extraction and post-processing on the given server ...
        return bolt_data

    def getGlobalData(rst_file, server):
        # ... retrieve fields needed by all workers ...
        return global_data

    if __name__ == '__main__':
        # Results file
        rst_file = r'...'
        bolt_list = []  # fill with bolt identifiers
        # Create dpf servers, one per thread
        nproc = 4
        servers = []
        for proc in range(nproc):
            servers.append(dpf.start_local_server(
                as_global=False, config=dpf.AvailableServerConfigs.InProcessServer))
        # Get global fields to be used in subsequent operations.
        # Done only once and on the first server
        global_data = getGlobalData(rst_file, servers[0])
        # We cycle over server ids to spread the work over multiple servers
        sidx = 0
        threads_list = []
        for boltid in bolt_list:
            x = returnValueThread(target=worker_function_for_bolt,
                                  args=[boltid, rst_file, servers[sidx]])
            x.start()
            threads_list.append(x)
            sidx += 1
            if sidx == nproc:
                sidx = 0
        # Gather output from threads
        bolt_results = {}
        for th in threads_list:
            bolt_results.update(th.join())  # bolt_results gathers the outputs from each worker
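    One way to populate `bolt_list` is from named selections stored in the result file; this sketch assumes each bolt's named selection starts with "BOLT", which is an assumption about the model setup:

    ```python
    import ansys.dpf.core as dpf

    model = dpf.Model(rst_file)  # rst_file as defined in the skeleton above
    # available_named_selections lists the named-selection names in the result file
    bolt_list = [ns for ns in model.metadata.available_named_selections
                 if ns.upper().startswith("BOLT")]
    ```

    Each entry can then be passed to the worker to build its mesh scoping.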