I hope you’re doing well. I am encountering an issue while running an HFSS simulation in SBR+ transient mode on the cluster via a SLURM script. The simulation completes without errors, but no transient data files are generated; only .profile files appear in the results directory.
Interestingly, when I run the same project through the desktop GUI, the transient files are generated correctly alongside the .profile files. This leads me to believe there is an issue with how the batch job is being handled or configured: ideally, there should be a one-to-one correspondence between .profile files and transient result files. I am sharing the SLURM script below, and I can share the .aedt file as well. If you have any ideas, please help.
```bash
#!/bin/bash
#SBATCH --ntasks=1
#SBATCH --cpus-per-task=32
#SBATCH --partition=standard
#SBATCH --mail-type=ALL        # Send email on start, end, and fail
#SBATCH --output=sbr_0.log     # Output log file
#SBATCH --mem-per-cpu=12G      # Memory per CPU-core
#SBATCH -t 48:00:00            # Time limit: 48 hours

echo "Starting script"
module load ansys-em/23.1
echo "Modules loaded:"
module list
echo

# Set working directory to current directory
WORKDIR=$(pwd)
cd "$WORKDIR"

# Set required environment variables
export PBS_JOBID="${SLURM_JOBID}"
export ANS_IGNOREOS=1
export ANS_NODEPCHECK=1

# Create Options.txt for distributed solve (quoted heredoc: no expansion)
OptFile="${WORKDIR}/Options.txt"
cat > "$OptFile" <<'EOF'
$begin 'Config'
'HFSS/SolveAdaptiveOnly'=0
'HFSS/AllowOffCore'=1
'HFSS/CreateStartingMesh'=1
'HFSS/DefaultProcessPriority'=Normal
'HFSS/EnableGPU'=0
'HFSS/EnableGPUForSBR'=0
'HFSS/MPIVersion'=Default
'HFSS/HPCLicenseType'='pool'
'HFSS/MPIVendor'='Intel'
$end 'Config'
EOF

# Use Ansys executable from the known path
ANSYSEXE="/software/ansys/EM/23.1/Linux64/current/ansysedt"
echo "Using Ansys executable at: $ANSYSEXE"

# Run HFSS with distributed MPI solve
"$ANSYSEXE" -ng -monitor -distributed -machinelist num=32 \
  -batchoptions "$OptFile" \
  -batchsolve "$WORKDIR/sim_new.aedt"
```
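Since the expectation is one transient result file per .profile file, a quick post-solve sanity check could compare the two counts. This is only a sketch: the `.transient` extension and the results-directory layout are assumptions based on the description above (here it runs against a throwaway demo directory).

```shell
# Sanity check: one transient file expected per .profile file.
# RESULTS_DIR would normally point at the actual results directory;
# here we fabricate a demo dir with a deliberate mismatch.
RESULTS_DIR=$(mktemp -d)
touch "$RESULTS_DIR/sweep1.profile" "$RESULTS_DIR/sweep1.transient"
touch "$RESULTS_DIR/sweep2.profile"   # no matching transient file

profiles=$(find "$RESULTS_DIR" -name '*.profile' | wc -l | tr -d ' ')
transients=$(find "$RESULTS_DIR" -name '*.transient' | wc -l | tr -d ' ')
echo "profiles=$profiles transients=$transients"
[ "$profiles" -eq "$transients" ] || echo "MISMATCH: a transient file is missing"
```

Running this right after the `-batchsolve` step (pointed at the real results directory) would at least flag whether the batch run silently skipped the transient export.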