Mechanical DPF crash with large RST files

Hello there!
Recently I've moved from ye olde Mechanical scripting Python reader to DPF, in search of more advanced post-processing functionality and improved performance. But now I'm facing issues extracting data from large .rst files (20-100 GB). I'm working on a bigger ACT plugin for weld fatigue calculations; below is just a sample of code that extracts S1 stress. It works perfectly fine for smaller RST files (tested up to 8 GB) with ~20 result sets.
It crashes at the line `s1_fields = s1_op.outputs.fields_container.GetData()` for RST1 (28 GB, 84 result sets) and RST2 (100 GB, 512 result sets). For RST1 it pops up a window with an Ansys dump file; for RST2 it simply closes the window. No error messages in the log in either case.
If I reduce the time_scoping to a single time step, it works. But when I try to iterate through the time sets and put `s1_op.inputs.time_scoping.Connect(time_scoping)` inside a loop, it fails again. I guess the buffer fills up and crashes the whole environment. I wonder whether the same would occur with standalone DPF, but at the moment it is preferable to use DPF inside Mechanical for result plotting and easier interaction with the end user. Have you faced this issue before? Is there any solution to improve its performance from within Mechanical WB?
```python
import mech_dpf
import Ans.DataProcessing as dpf

analysis_id = 0
node_Ids = [95667, 95666, 95665]

model = ExtAPI.DataModel.Project.Model
analysis = model.Analyses[analysis_id]
rst_path = r"{}file.rst".format(analysis.Solution.ResultFileDirectory)

# Read in RST file
dataSource = dpf.DataSources()
dataSource.ResultFilePath = rst_path

# Create result operator
s1_op = dpf.operators.result.stress_principal_1()

# Get the time data corresponding to result sets
time_provider = dpf.operators.metadata.time_freq_provider()
time_provider.inputs.data_sources.Connect(dataSource)
numSets = time_provider.outputs.time_freq_support.GetData().NumberSets
timeids = time_provider.outputs.time_freq_support.GetData().TimeFreqs.Data
result_set_ids = []
for i in range(numSets):
    result_set_ids.append(i + 1)

# Create time scoping
time_scoping = dpf.Scoping()
time_scoping.Location = dpf.locations.time_freq_sets
time_scoping.Ids = result_set_ids

# Create mesh scoping
mesh_scoping = dpf.Scoping()
mesh_scoping.Location = "Nodal"
mesh_scoping.Ids = node_Ids

# S1
s1_op.inputs.data_sources.Connect(dataSource)
s1_op.inputs.time_scoping.Connect(time_scoping)
s1_op.inputs.mesh_scoping.Connect(mesh_scoping)

output = {}
try:
    s1_fields = s1_op.outputs.fields_container.GetData()
    for set_id in result_set_ids:
        output[set_id] = {}
        s1_field = s1_fields[set_id - 1]
        for node_Id in node_Ids:
            S1 = s1_field.GetEntityDataById(node_Id)
            output[set_id][node_Id] = S1
except Exception as e:
    ExtAPI.Log.WriteMessage("Error: {}".format(e))
```
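One workaround worth trying while the crash is investigated: instead of requesting all result sets in one scoping (or reconnecting the scoping one set at a time in a loop), evaluate a few sets per request so the fields container stays small. The `batched` helper below is my own sketch, not part of DPF; the commented-out usage mirrors the script above and is an assumption about how it would be wired in.

```python
def batched(ids, size):
    """Split a list of result-set ids into chunks of at most `size`."""
    return [ids[i:i + size] for i in range(0, len(ids), size)]

# Hypothetical usage inside the script above: evaluate e.g. 10 result
# sets per request instead of all of them at once, storing the nodal
# values after each chunk so the container can be freed.
#
# for chunk in batched(result_set_ids, 10):
#     time_scoping = dpf.Scoping()
#     time_scoping.Location = dpf.locations.time_freq_sets
#     time_scoping.Ids = chunk
#     s1_op.inputs.time_scoping.Connect(time_scoping)
#     fields = s1_op.outputs.fields_container.GetData()
#     for k, set_id in enumerate(chunk):
#         # store fields[k].GetEntityDataById(node_Id) per node as before
#         pass
```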
Answers
-
Hello @Mateusz ,
In which WB version have you been experiencing this? There have been some issues like this in older versions, but from 2024R1 onwards this should be fixed.
If the issue still happens in 2024R1, please open a support request so we can have a look at the model.
Thanks!
-
Hi Pernelle,
I'm currently running the 2023R2 version. I'll try to get access to 2024R1 to verify if the issue still occurs.
Thank you for the swift response, I hope to come back with an answer soon!
-
Hi @Mateusz ,
Thanks for reaching out on this. As @Pernelle Marone-Hitz said, we made many improvements in recent releases, and in the current one we are actively working on improving performance on large models.
Meanwhile, one thing that improves performance is to use streams instead of data sources, but you need to ensure that you release the handles at the end of the script.
I've updated your script below:

```python
import mech_dpf
import Ans.DataProcessing as dpf

analysis_id = 0
node_Ids = [95667, 95666, 95665]

model = ExtAPI.DataModel.Project.Model
analysis = model.Analyses[analysis_id]
rst_path = r"{}file.rst".format(analysis.Solution.ResultFileDirectory)

# Read in RST file
dataSource = dpf.DataSources()
dataSource.ResultFilePath = rst_path

# Open streams on the result file
stream_provider = dpf.operators.metadata.streams_provider()
stream_provider.inputs.data_sources.Connect(dataSource)
streams = stream_provider.outputs.getstreams_container()

# Create result operator
s1_op = dpf.operators.result.stress_principal_1()

# Get the time data corresponding to result sets
time_provider = dpf.operators.metadata.time_freq_provider()
time_provider.inputs.streams_container.Connect(stream_provider.outputs.streams_container)
numSets = time_provider.outputs.time_freq_support.GetData().NumberSets
timeids = time_provider.outputs.time_freq_support.GetData().TimeFreqs.Data
result_set_ids = []
for i in range(numSets):
    result_set_ids.append(i + 1)

# Create time scoping
time_scoping = dpf.Scoping()
time_scoping.Location = dpf.locations.time_freq_sets
time_scoping.Ids = result_set_ids

# Create mesh scoping
mesh_scoping = dpf.Scoping()
mesh_scoping.Location = "Nodal"
mesh_scoping.Ids = node_Ids

# S1: read through the streams instead of the data sources
s1_op.inputs.streams_container.Connect(stream_provider.outputs.streams_container)
s1_op.inputs.time_scoping.Connect(time_scoping)
s1_op.inputs.mesh_scoping.Connect(mesh_scoping)

output = {}
try:
    s1_fields = s1_op.outputs.fields_container.GetData()
    for set_id in result_set_ids:
        output[set_id] = {}
        s1_field = s1_fields[set_id - 1]
        for node_Id in node_Ids:
            S1 = s1_field.GetEntityDataById(node_Id)
            output[set_id][node_Id] = S1
except Exception as e:
    ExtAPI.Log.WriteMessage("Error: {}".format(e))

# Release the file handles once done
streams.ReleaseHandles()
```
-
Hi,
I've tried it on 24R1, and it stopped crashing. But now I am only able to output displacements; every stress output I've tried fails, and I get a fields container with nothing in it.
For displacements it works fine. In 23R2 I could limit its "view" to a single time step and receive the results just fine; now this does not work. I've tried removing the time and mesh scoping from the inputs: the field then contains some portion of the results, but not from the single node I tried to output using mesh scoping. It states 247,410 entities (I presume nodal results), but the model has 709,182 nodes.
@Ramdane I've checked the streams instead of dataSource as input, but it does not make any difference, in either 23R2 or 24R1.
The code below:
```python
import mech_dpf
import Ans.DataProcessing as dpf

analysis_id = 0
node_Ids = [264951, 450955, 265243]

model = ExtAPI.DataModel.Project.Model
analysis = model.Analyses[analysis_id]
rst_path = r"{}file.rst".format(analysis.Solution.ResultFileDirectory)

dataSource = dpf.DataSources()
dataSource.ResultFilePath = rst_path

s1_op = dpf.operators.result.stress_principal_1()
d_op = dpf.operators.result.displacement()

time_provider = dpf.operators.metadata.time_freq_provider()
time_provider.inputs.data_sources.Connect(dataSource)
numSets = time_provider.outputs.time_freq_support.GetData().NumberSets
timeids = time_provider.outputs.time_freq_support.GetData().TimeFreqs.Data
result_set_ids = []
for i in range(numSets):
    result_set_ids.append(i + 1)

time_scoping = dpf.Scoping()
time_scoping.Location = dpf.locations.time_freq_sets
time_scoping.Ids = result_set_ids

mesh_scoping = dpf.Scoping()
mesh_scoping.Location = "Nodal"
mesh_scoping.Ids = node_Ids

s1_op.inputs.data_sources.Connect(dataSource)
s1_op.inputs.time_scoping.Connect(time_scoping)
s1_op.inputs.mesh_scoping.Connect(mesh_scoping)

d_op.inputs.data_sources.Connect(dataSource)
d_op.inputs.time_scoping.Connect(time_scoping)
d_op.inputs.mesh_scoping.Connect(mesh_scoping)

output = {}
try:
    s1_fields = s1_op.outputs.fields_container.GetData()
    d_fields = d_op.outputs.fields_container.GetData()
    for set_id in result_set_ids:
        output[set_id] = {}
        s1_field = s1_fields[set_id - 1]
        d_field = d_fields[set_id - 1]
        for node_Id in node_Ids:
            output[set_id][node_Id] = {}
            S1 = s1_field.GetEntityDataById(node_Id)
            d = d_field.GetEntityDataById(node_Id)
            output[set_id][node_Id]["S1"] = S1
            output[set_id][node_Id]["d"] = d
except Exception as e:
    ExtAPI.Log.WriteMessage("Error: {}".format(e))
```
BTW, the backtick Markdown for code gets messed up if # is used for comments. See Ramdane's post above: each Python comment is rendered in bold and the whole formatting is gone.
-
Hi @Mateusz , good to read that at least the crash issue is resolved.
For the new issue (stress output being empty), does this happen only on this model or on all models?
Any chance you can test in 24R2, which is now available?
-
Hi @Pernelle Marone-Hitz
I am 99% sure now the issue is with the RESWRITE APDL command we are using to combine multiple RST files into one. DPF is only able to read displacements from such a combined RST. I've created a temporary solution that builds a dictionary with all the results from each analysis that comes before the Summary Static Structural in the Mechanical tree. Below is a portion of the code. Two questions:
1. Can you combine RST files using DPF? Or is there a more elegant way to create a combined dictionary with all the necessary input?
2. Can you directly access the analysis ID (not the object ID of the analysis, but 0, 1, 2, 3) from the function argument `analysis`? I had to write a loop through the AnalysisList to get it:

```python
def weld_fatigue_dpf_sum(analysis, node_Ids):
    model = ExtAPI.DataModel.Project.Model
    analysis_obj_Id = analysis.Id
    for i, analysis_ids in enumerate(ExtAPI.DataModel.AnalysisList):
        if analysis_ids.Id == analysis_obj_Id:
            break
    weld_fatigue_rst_output = {}
    weld_fatigue_rst_output[1] = {}
    for analysis_id in range(i):
        analysis = model.Analyses[analysis_id]
        rst_path = r"{}file.rst".format(analysis.Solution.ResultFileDirectory)
        dataSource = dpf.DataSources()
        dataSource.ResultFilePath = rst_path
```
-
Hi @Mateusz ,
As far as I'm aware, we can't combine .rst files with DPF. However, you can reference several data sources (like in this example: https://discuss.ansys.com/discussion/2233/how-to-assess-the-maximum-stress-range-for-a-fatigue-assessment-when-there-are-several-load-cases).
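If you read each .rst with its own data source and build a per-file `{set_id: {node_id: value}}` dictionary (as in your loop), the per-file results can be stitched into one dictionary with consecutive global set ids in plain Python. The `merge_result_sets` helper below is my own sketch of that bookkeeping, not a DPF API:

```python
def merge_result_sets(per_file_outputs):
    """Merge a list of per-file {set_id: {node_id: value}} dicts into
    one dict keyed by consecutive global set ids, preserving file order."""
    merged = {}
    next_id = 1
    for file_output in per_file_outputs:
        for set_id in sorted(file_output):
            merged[next_id] = file_output[set_id]
            next_id += 1
    return merged
```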
For the analysis number topic, I never use `AnalysisList`; I just directly refer to `model.Analyses[i]`.