DPF threading and RSPLIT APDL Command

Mike.Thompson Member, Employee Posts: 367

I was curious if anyone knows the best method for doing post-processing on many different mesh entities. Example: you have a model with many bolts and you want to do time-consuming post-processing on each one. I am thinking of creating a smaller results file for each bolt with the RSPLIT command, then using the threading module with mech_dpf to post-process each bolt's RST file in a separate thread.

Or, should I keep everything in a single results file, run a results-extraction operator on a single thread in sequence, and send the raw data out for further processing on separate threads?
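That second idea is basically a producer/consumer pipeline. Here is a minimal sketch of the pattern in plain Python; `extract` and `process` are placeholders standing in for the actual DPF calls, not real mech_dpf APIs:

```python
import queue
import threading

def extract(boltid):
    # placeholder: the sequential result-extraction step on the single rst
    return boltid * 10

def process(raw):
    # placeholder: the time-consuming per-bolt post-processing
    return raw + 1

def run_pipeline(bolt_ids):
    """One thread extracts raw data in sequence; another consumes and processes it."""
    q = queue.Queue()
    results = []

    def producer():
        for boltid in bolt_ids:
            q.put(extract(boltid))  # sequential reads from the single results file
        q.put(None)                 # sentinel: extraction finished

    def consumer():
        while True:
            raw = q.get()
            if raw is None:
                break
            results.append(process(raw))

    threads = [threading.Thread(target=producer), threading.Thread(target=consumer)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return results

print(run_pipeline([1, 2, 3]))  # -> [11, 21, 31]
```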

Would this be an efficient way to do this, or am I missing an easier, more obvious way?

Answers

  • Pernelle Marone-Hitz Member, Moderator, Employee Posts: 871

    I think @Pierre Thieffry should be able to provide some useful inputs on this one.

  • Pierre Thieffry Member, Moderator, Employee Posts: 107

    @Mike.Thompson I recently worked on a multi-threaded DPF example. I don't think you need to split the RST file.

    You could either:

    • load your entire result field for the full model and perform operations on each bolt on different dpf servers using multi-threading
    • perform all operations including the retrieval of the result field for each bolt on different dpf servers.

    The second option would probably be a bit more memory-efficient. The first option is useful if you need to operate on more parts of the model than just the bolts.

    Here's a skeleton script for multi-threading:

    import threading
    
    
    import ansys.dpf.core as dpf
    import time,sys
    import os
    
    
    class returnValueThread(threading.Thread):
        """
            Class defined so the threads will return custom values
        """
        def __init__(self, *args, **kwargs):
            super().__init__(*args, **kwargs)
            self.result = None
    
        def run(self):
            if self._target is None:
                return  # could alternatively raise an exception, depends on the use case
            try:
                self.result = self._target(*self._args, **self._kwargs)
            except Exception as exc:
                print(f'{type(exc).__name__}: {exc}', file=sys.stderr)  # properly handle the exception
    
        def join(self, *args, **kwargs):
            super().join(*args, **kwargs)
            return self.result
    
    
    def worker_function_for_bolt(boltid, rst_file, server):
        # per-bolt post-processing goes here (placeholder signature)
        return bolt_data
    
    def getGlobalData(rst_file,server):    
    
        return global_data
    
    
    if __name__ == '__main__':
    
        nservers=4
    
        # Results file 
        rst_file= r'...'
    
        # Create dpf servers
        servers_list=[]
        for proc in range(nservers):
            server = dpf.start_local_server(
                as_global=False, config=dpf.AvailableServerConfigs.InProcessServer)
            servers_list.append(server)
    
        # Get global fields to be used in subsequent operations.
        # Done only once and on first server
        global_data=getGlobalData(rst_file,servers_list[0])
    
    
        bolt_results={}
    
        sidx=0   
        # We cycle over server ids to spread the work over multiple servers
        threads_list=[]
        for boltid in bolt_list:  # bolt_list: ids of the bolts to process (defined by the user)
            server=servers_list[sidx]
    
            x = returnValueThread(target=worker_function_for_bolt, args=[boltid, rst_file, server])
            x.start()
            threads_list.append(x)
            sidx += 1
            if sidx == nservers:
                sidx = 0
    
        # Gather output from threads
        for th in threads_list:
            bolt_results.update(th.join()) # bolt_results gathers the outputs from the worker function
    
        dpf.server.shutdown_all_session_servers()
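To sanity-check the fan-out pattern without a license or a results file, the same thread class can be exercised with a dummy worker. Everything below is a self-contained sketch: the `servers_list` entries are just integers standing in for DPF server handles, and the worker only labels its input.

```python
import sys
import threading

class returnValueThread(threading.Thread):
    """Thread subclass defined so that threads can return values via join()."""
    def __init__(self, *args, **kwargs):
        super().__init__(*args, **kwargs)
        self.result = None

    def run(self):
        if self._target is None:
            return
        try:
            self.result = self._target(*self._args, **self._kwargs)
        except Exception as exc:
            print(f'{type(exc).__name__}: {exc}', file=sys.stderr)

    def join(self, *args, **kwargs):
        super().join(*args, **kwargs)
        return self.result

def worker_function_for_bolt(boltid, server):
    # dummy stand-in for the per-bolt dpf post-processing
    return {boltid: f'processed on server {server}'}

servers_list = [0, 1]              # stand-ins for dpf servers
bolt_list = [101, 102, 103, 104]

threads_list = []
for i, boltid in enumerate(bolt_list):
    server = servers_list[i % len(servers_list)]   # round-robin assignment
    t = returnValueThread(target=worker_function_for_bolt, args=[boltid, server])
    t.start()
    threads_list.append(t)

bolt_results = {}
for t in threads_list:
    bolt_results.update(t.join())

print(sorted(bolt_results))  # -> [101, 102, 103, 104]
```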