Feature/gradient jump penalty #1364
base: develop
Conversation
… assign it to the RHS of the weak form equation
Gradient Jump Penalty
gradient_jump_penalty draft
Pipe from personal fork
…) for cross-element communication via gsop
Change on facet-associated fields
…tremeFLOW-develop
…y (lx, lx, lx, nelv) to avoid field registry error
This reverts commit 0d1f6d7.
Pipe from own fork -- resolving git conflicts with stress formulation
@Shiyu-Sandy-Du Given the recent changes in
I don't quite get it. The merge cannot succeed until the conflicts are resolved, can it?
I meant that you should merge the latest develop into your gradient jump branch. There will surely be conflicts, but once these are resolved it should be possible to merge.
Oh I see! That is actually the current workflow. Maybe the develop branch I tried to merge was several days old, which still led to a few conflicts :-)
Pipe from personal fork
update .depends
generate dependency file using updated makedepf90
In this pull request, gradient jump penalty (GJP) is implemented. It is motivated by field pollution at the element interfaces caused by the C0-continuous field of the continuous Galerkin method; see Moura et al. 2022 for the mathematical details. The idea is to add an interior penalty, computed from the facets, to the RHS of the weak form equation. Typically, when such pollution occurs, increasing the resolution is always a good choice for implicit LES, as discussed with Siavash. But for explicit LES, where the resolution has to be tied to plus units, one may add GJP as a remedy instead.
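For context (this notation is my own sketch, not taken from the PR, and the exact scaling used here may differ), the GJP contribution is an interior-penalty term of roughly the form

$$
\mathrm{RHS}(v) \;\mathrel{+}= \; -\sum_{f \,\in\, \text{interior facets}} \int_f \tau_f \,\Big[\!\Big[\frac{\partial u_h}{\partial n}\Big]\!\Big]\, \frac{\partial v}{\partial n}\,\mathrm{d}S ,
$$

where $[\![\cdot]\!]$ denotes the jump of the normal derivative across the facet and $\tau_f$ is a penalty coefficient (for convective stabilization it is often taken proportional to the normal velocity and a power of the element size).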
Here is a point worth mentioning: arrays of size `(lx+2) ** 3 * nelv` are used (as proposed by @adampep) for quantities associated with facets, since such quantities can take different values at the same point depending on which facet they are associated with. A clear example is the facet normal `n1` on the vertices of elements in a Cartesian mesh: it is `-1` when associated with facet 1 but `0` when associated with facet 4, at the interface of facets 1 and 4.

And three points for discussion, in order to avoid memory copies:
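To make the layout concrete, here is a small sketch in Python (the indexing convention and facet numbering below are hypothetical illustrations, not Neko's actual implementation) of how an extended `(lx+2) ** 3` array per element lets the same mesh point carry one value per facet:

```python
# Hypothetical illustration of facet-associated storage: an extended
# (lx+2)**3 array per element, whose outermost "ghost" layers hold
# facet-associated values. Numbering assumed: facet 1 = x-minus face
# (layer i == 0), facet 4 = y-plus face (layer j == lx + 1).

lx = 4      # GLL points per direction (assumed value for the sketch)
nelv = 1    # one element is enough here

# Flat extended array, (lx+2)**3 entries per element.
ext = [0.0] * ((lx + 2) ** 3 * nelv)

def idx(i, j, k, e=0):
    """Linear index into the (lx+2)**3 extended array of element e,
    with i the fastest-varying direction."""
    n = lx + 2
    return e * n ** 3 + (k * n + j) * n + i

# The x-component n1 of the facet normal at a point on the edge shared
# by facet 1 (normal (-1, 0, 0)) and facet 4 (normal (0, 1, 0)):
ext[idx(0, lx, 1)] = -1.0       # value associated with facet 1
ext[idx(1, lx + 1, 1)] = 0.0    # value associated with facet 4

# Same physical point, two different stored values:
print(ext[idx(0, lx, 1)], ext[idx(1, lx + 1, 1)])  # -1.0 0.0
```

The point is that a plain `lx ** 3` array could hold only one value per point, so facet-associated quantities would collide on shared edges and vertices.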
1. At the end of `subroutine absvolflux_compute`, the absolute value is assigned to `this%absvolflux`, which is fine on the CPU. But on the GPU, is there a corresponding operation for its counterpart `this%absvolflux_d`? If there is, we can avoid the `device_memcpy` here.
2. A similar issue where `device_memcpy` needs to be called: in `subroutine pick_facet_value_hex`, slices of arrays are copied into other arrays. Can I do the same for the GPU counterparts, which are stored as C pointers on the device?
3. A similar issue again: in `subroutine gradient_jump_penalty_compute_hex_el`, the CPU approach computes arrays from other arrays with different indexing inside loops (as is common on CPUs). This resembles point 2 above, since both boil down to indexing an array through a C pointer on the device.

As for performance, the memory-copy issue does not matter much in a test with 3200 elements on 8 GPUs in one node (i.e. 400 elements per GPU), since GJP does not add much computational time when the pressure solver requires several iterations :)
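On the slice-copy question: the gather currently done on the host (slice copy followed by `device_memcpy`) could in principle be expressed as one kernel that indexes the device array directly, so the data never leaves the device. Below is a schematic in plain Python with purely illustrative names (the real thing would be a CUDA/HIP kernel operating on the C device pointer, not Neko's actual API):

```python
# Schematic stand-in for a device kernel: gather one facet layer of a
# volume array by index, instead of copying slices on the host and then
# transferring them. All names here are illustrative, not Neko's.

def pick_facet_value_kernel(vol, facet_out, lx, facet):
    """Gather the facet-1 (x-minus, i == 0) layer of an lx**3 volume
    array into a flat lx*lx facet array. The loop body is exactly what
    one GPU thread, indexed by (j, k), would compute."""
    if facet != 1:
        raise NotImplementedError("only facet 1 shown in this sketch")
    for j in range(lx):
        for k in range(lx):
            # i is the fastest-varying index; pick the i = 0 layer.
            facet_out[k * lx + j] = vol[(k * lx + j) * lx + 0]

lx = 3
vol = [float(i) for i in range(lx ** 3)]   # dummy volume field
facet_vals = [0.0] * (lx * lx)
pick_facet_value_kernel(vol, facet_vals, lx, facet=1)
print(facet_vals)  # every lx-th entry of vol: [0.0, 3.0, 6.0, ...]
```

Since point 3 is the same indexed-gather pattern with a different index map, a fused kernel of this shape would presumably cover both cases and remove the intermediate host-side copies entirely.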
Thanks very much for any suggestions!