Substantially slow speed for small eigenmode problems #337
Hi @ghazi221b, looking at that log file, I am not seeing evidence of any bug. There are a few things to consider:
To achieve good results from Palace it's important to consider your mesh, and whether or not you have appropriate resolution. You are significantly over-refining the mesh, to no additional benefit. This can be seen clearly in the error estimation: the example mesh has ~1.7e-3 reported error with 16544 DOF, while yours has ~2.35e-3 reported error with 321856 DOF. So your system is ~20 times larger, ~5 times worse conditioned, and gives less accurate results. I suggest you start by improving your mesh.
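To make the comparison concrete, the numbers quoted from the two log files work out as follows (plain arithmetic on the reported values, nothing Palace-specific):

```python
# DOF counts and reported error estimates, taken from the two log files.
example_dof, example_err = 16544, 1.7e-3    # tutorial mesh
user_dof, user_err = 321856, 2.35e-3        # over-refined mesh

dof_ratio = user_dof / example_dof  # ~19.5x more unknowns
err_ratio = user_err / example_err  # ~1.4x larger estimated error

print(f"{dof_ratio:.1f}x the unknowns, {err_ratio:.2f}x the estimated error")
```

In other words, a ~20x larger system that is nevertheless less accurate, which is why improving mesh quality rather than refining further is the right move here.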
Thank you. Lowering the solution order to P2 gives me a reasonable number of unknowns and a good solution time. I did have a few follow-up questions:
Thanks again for all the help.
Hi @ghazi221b, glad you made some progress. It seems like your issues are primarily due to using gmsh, so I suggest you refer to their documentation.
What expression are you attempting to evaluate?
Hey, thanks again for the help. I am trying to improve my mesh workflow, and since I have always been more familiar with Cubit, I am trying to use the Genesis or Nastran formats with Palace. However, the job doesn't run, and I get errors reading the files for both formats. The settings I use for the formats are as follows:

Simultaneously, I have been using gmsh to explore eigenmode analysis with AMR for larger geometries. For this particular geometry, I can't get AMR to work. The first pass runs fine and gives similar results to HFSS, but then no adaptive meshing happens. I believe the issue is that the indicator norm is 'nan', so the AMR is unable to continue.

Eigenmode_AMR.json

Any help on these issues would be great. Thank you for providing the detailed response on the field calculation abilities of Palace; I very much appreciate it. We are looking at the surface participation ratio as well, for which I think there are several methodologies (1, 2). I know they broadly do similar things, but we are interested in seeing which approach is most accurate and how they can be improved upon. Once again, thanks for all the help!
I am not familiar with the Genesis or Nastran mesh exporting mechanisms. The Genesis format I believe comes from Cubit, so I suggest you open an issue with MFEM about reading that mesh, because we are using their capability. For Nastran, I've reproduced the failure: it was caused by a small bug in treating the end of the mesh file, and a fix is available at #347 if you give that a try. I managed to read in your mesh there. The issue was that somehow during saving you were placing lots of meta characters on various lines (carriage returns, ^M, etc.). I have provisionally fixed it by doing a bit more sanitization of the inputs, but you should check that.
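As an illustration of the kind of sanitization involved, here is a minimal Python sketch (the function name and exact filter are hypothetical, not the actual code in #347) that strips carriage returns and other C0 control characters a fixed-format mesh reader can choke on, while keeping newlines and tabs:

```python
import re

def sanitize_mesh_text(text: str) -> str:
    """Remove \r and other C0 control characters (keeping \n and \t)
    that can confuse fixed-format readers such as Nastran parsers."""
    # Drop carriage returns first, then any remaining control characters.
    return re.sub(r"[\x00-\x08\x0b\x0c\x0e-\x1f]", "", text.replace("\r", ""))

# Example: a line saved with Windows-style \r\n endings.
print(repr(sanitize_mesh_text("GRID 1\r\n")))
```

Running something like this over the exported file before handing it to Palace is a quick way to check whether stray control characters are the culprit.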
Looking at the log file here, you have a tetrahedron with anisotropy of 434569, which I think is causing the error estimation calculation, which involves a mass matrix inversion using a Jacobi method, to go functionally singular, giving the nan. I suggest you inspect your mesh for slivers and try to fix them. This is in part an issue with the current error estimation capability, but it is mostly an issue with your mesh. We have some things we are working on to try to address this type of issue, but there's no quick fix. HFSS uses a very different type of estimator (it's hard to find out exactly what, but it seems like a strong-form residual estimator, for which there's no implicit solve that could throw a nan).
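For anyone wanting to screen a mesh for slivers before running, here is a minimal sketch of one common tetrahedron quality metric, the longest edge divided by the shortest altitude (this is an illustrative metric, not necessarily the anisotropy measure Palace itself reports). It is near 2-3 for well-shaped tets and blows up for slivers:

```python
import itertools
import math

def tet_anisotropy(p0, p1, p2, p3):
    """Longest-edge / shortest-altitude ratio of a tetrahedron.
    A near-degenerate (sliver) element returns a very large value."""
    pts = [p0, p1, p2, p3]
    sub = lambda a, b: tuple(x - y for x, y in zip(a, b))
    dot = lambda a, b: sum(x * y for x, y in zip(a, b))
    cross = lambda a, b: (a[1]*b[2] - a[2]*b[1],
                          a[2]*b[0] - a[0]*b[2],
                          a[0]*b[1] - a[1]*b[0])
    norm = lambda a: math.sqrt(dot(a, a))

    # Volume from the scalar triple product.
    vol = abs(dot(sub(p1, p0), cross(sub(p2, p0), sub(p3, p0)))) / 6.0
    longest = max(norm(sub(a, b)) for a, b in itertools.combinations(pts, 2))

    # Altitude from each vertex = 3 V / area of the opposite face.
    min_alt = math.inf
    for i in range(4):
        a, b, c = [pts[j] for j in range(4) if j != i]
        area = norm(cross(sub(b, a), sub(c, a))) / 2.0
        min_alt = min(min_alt, 3.0 * vol / area)
    return longest / min_alt

# A well-shaped unit right tet vs. a nearly flat sliver.
good = tet_anisotropy((0, 0, 0), (1, 0, 0), (0, 1, 0), (0, 0, 1))
sliver = tet_anisotropy((0, 0, 0), (1, 0, 0), (0, 1, 0), (0.5, 0.5, 1e-6))
```

Looping this (or gmsh's built-in quality reports) over all elements and flagging outliers should locate the offending tet quickly.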
On alternative surface participation ratio calculations: all of those are performed in surfacepostoperator.cpp, and you're free to try and modify the statements in there which are coming from
I am a new Palace user trying to get better at using the package. I tried to replicate the "Eigenmodes of a Cylindrical Cavity" example from the documentation by making my own mesh in gmsh with 9264 elements and simulating it with Palace. It took 9728 seconds to solve the problem with 1 MPI process and 109 seconds with 128 MPI processes.
I think these solution times are quite large, considering I wasn't using AMR or asking for any special post-processing. I just wanted to confirm whether this is the speed other people are getting or whether I am doing something wrong.
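For context, the arithmetic on those two wall times (just the reported numbers, nothing Palace-specific):

```python
# Reported wall times from the two attached logs.
t_serial, t_parallel = 9728.0, 109.0  # seconds
ranks = 128

speedup = t_serial / t_parallel       # strong-scaling speedup
efficiency = speedup / ranks          # parallel efficiency

print(f"speedup ~{speedup:.0f}x on {ranks} ranks "
      f"({efficiency:.0%} parallel efficiency)")
```

So the parallel scaling itself looks healthy; the surprising part is the absolute serial time for such a small problem.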
To reproduce:
I am attaching the mesh file, the JSON config file, and the output logs from both the 1 MPI and 128 MPI runs to help diagnose the issue:
Cylinder_9264_file.json
Cylinder_9264_mesh.txt
Output_1_MPI.txt
Output_128_MPI.txt
Environment:
I installed Palace using the install-from-source steps given in the documentation. I believe my school's HPC cluster uses the following CPU:
This is my first time posting an issue, so please let me know if any other information needs to be provided.