Add double gyre example in Docs #3087
base: main
Conversation
* first attempt for a double gyre example; some instability rises
* change buoyancy forcing to flux BC
* this seems like a working double-gyre but blows up eventually
* adds biharmonic; BC's don't work
* now example includes biharmonic lateral viscosity/diffusivity
* buoyancy relaxation is a BC, not forcing
* updates to run on latest Oceananigans
* some more attempts but still blows up...
* updates Manifest
* yet another version that blows up :(
* Updates double gyre example
* Tweaks comments and presentation
* Smaller time-step for hurricane-strength winds
* The checkpointer will save us
* update double_gyre script to latest Oceananigans shenanigans
* remove nonhydrostaticmodel [skip ci]
* some tidying up [skip ci]
* some updates [skip ci]
* updates double gyre script
* fix deprecation in time-step wizard
* updates [skip ci]
* adds double gyre
* barotropic streamfunction plot
* Update double gyre script
* Woops
* simplify bc and forcing
* use CairoMakie

Co-authored-by: Gregory Wagner <[email protected]>
Co-authored-by: ali-ramadhan <[email protected]>
examples/double_gyre.jl
Outdated
```julia
lines!(ax, grid.Δzᵃᵃᶜ[1:grid.Nz], grid.zᵃᵃᶜ[1:grid.Nz])
scatter!(ax, grid.Δzᵃᵃᶜ[1:Nz], grid.zᵃᵃᶜ[1:Nz])
```
use `zspacings` here rather than explicitly referencing grid properties
`zspacings` and `znodes`
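A minimal sketch of what this suggestion might look like (the grid here is hypothetical, and the `zspacings`/`znodes` signatures follow the three-location form used elsewhere in this thread; they have varied across Oceananigans versions):

```julia
using Oceananigans
using CairoMakie

# a vertically stretched grid, so the spacings form a vector rather than a scalar
zfaces = [-100, -80, -60, -45, -30, -20, -12, -6, -2, 0]
grid = RectilinearGrid(size = (4, 4, 9), x = (0, 1), y = (0, 1), z = zfaces)

# function-based interfaces instead of reaching into grid.Δzᵃᵃᶜ / grid.zᵃᵃᶜ
Δz = zspacings(grid, Center(), Center(), Center())
z  = znodes(grid, Center(), Center(), Center())

fig = Figure()
ax = Axis(fig[1, 1], xlabel = "Δz (m)", ylabel = "z (m)")
lines!(ax, Δz, z)
scatter!(ax, Δz, z)
```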
examples/double_gyre.jl
Outdated
```julia
tracer_advection = WENO(),
buoyancy = BuoyancyTracer(),
coriolis = BetaPlane(latitude=45),
closure = (vertical_diffusive_closure, horizontal_diffusive_closure),
```
```diff
- closure = (vertical_diffusive_closure, horizontal_diffusive_closure),
+ closure = RiBasedVerticalDiffusivity(),
```
examples/double_gyre.jl
Outdated
```julia
@info "Making an animation from the saved data..."

anim = @animate for i in 1:length(times)
```
We'll have to change all this to use CairoMakie (see the other examples).
examples/double_gyre.jl
Outdated
```julia
momentum_advection = WENO(),
tracer_advection = WENO(),
buoyancy = BuoyancyTracer(),
coriolis = BetaPlane(latitude=45),
```
```diff
- coriolis = BetaPlane(latitude=45),
+ coriolis = HydrostaticSphericalCoriolis(),
```
This change assumes you changed the grid to be lat-lon
Right! Beta plane won't work for lat-lon
examples/double_gyre.jl
Outdated
```julia
# ## Model building

model = HydrostaticFreeSurfaceModel(; grid,
                                    free_surface = ImplicitFreeSurface(),
```
use split explicit free surface
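A hedged sketch of that suggestion (the `SplitExplicitFreeSurface` keyword arguments, e.g. `substeps`, have changed across Oceananigans versions, and the grid below is a placeholder):

```julia
using Oceananigans

grid = RectilinearGrid(size = (32, 32, 4), x = (0, 1e6), y = (0, 1e6), z = (-1800, 0))

# substep the fast barotropic mode explicitly instead of solving an implicit system
model = HydrostaticFreeSurfaceModel(; grid,
                                    free_surface = SplitExplicitFreeSurface(substeps = 30),
                                    momentum_advection = WENO(),
                                    tracer_advection = WENO(),
                                    tracers = :b,
                                    buoyancy = BuoyancyTracer(),
                                    coriolis = BetaPlane(latitude = 45))
```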
examples/double_gyre.jl
Outdated
```julia
model = HydrostaticFreeSurfaceModel(; grid,
                                    free_surface = ImplicitFreeSurface(),
                                    momentum_advection = WENO(),
```
```diff
- momentum_advection = WENO(),
+ momentum_advection = VectorInvariant(vorticity_scheme = WENO(grid),
+                                      divergence_scheme = WENO(grid),
+                                      vertical_scheme = WENO(grid))
```
might need some reformatting
@siddharthabishnu I added some suggestions to get started! There may also be some useful code here:
examples/double_gyre.jl
Outdated
```julia
using Oceananigans
using Oceananigans.Units
using Oceananigans.Grids: xnode, ynode, znode
```
```diff
- using Oceananigans.Grids: xnode, ynode, znode
```

I believe these are not needed atm
Thanks Greg and Navid for these useful suggestions. I missed some of them before. I am implementing them now. I will upload visualization results soon.
Just a reminder: if we are going to use [...]
examples/double_gyre.jl
Outdated
```julia
lines!(ax, grid.Δzᵃᵃᶜ[1:grid.Nz], grid.zᵃᵃᶜ[1:grid.Nz])
scatter!(ax, grid.Δzᵃᵃᶜ[1:Nz], grid.zᵃᵃᶜ[1:Nz])

Δz_cpu = Array(grid.Δzᵃᵃᶜ.parent)[1+Hz:Nz+Hz]
```
why not use `zspacings(grid, Center())`? This line here is correct but not that user friendly.

```julia
julia> using Oceananigans

help?> zspacings
search: zspacings minimum_zspacing

  zspacings(grid, ℓx, ℓy, ℓz; with_halos=true)

  Return the spacings over the interior nodes on grid in the z-direction for the location ℓx, ℓy, ℓz.
  For Bounded directions, Face nodes include the boundary points.

julia> grid = LatitudeLongitudeGrid(size=(20, 15, 10), longitude=(0, 20), latitude=(-15, 15), z=(-100, 0));

julia> zspacings(grid, Center(), Center(), Center())
10.0

julia> using CUDA

julia> CUDA.@allowscalar show(Δz)
[1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0]
```
Right, to the extent possible, we want to use function-based interfaces where available and avoid referencing grid properties. In particular, there are always alternatives to `.parent`. We are also trying (but have not totally succeeded) to "hide" halos from users, so we don't reference halos in any examples.
`zspacings` may be broken on GPU though, so we might need to fix that
if it's broken let's fix it! But I believe it's the `show()` method for `x/y/z/λ/φspacings` that is broken. See:
```julia
julia> using Oceananigans
[ Info: Oceananigans will use 4 threads

julia> grid = RectilinearGrid(GPU(), size=(16, 16, 16), x=(0,1), y=(0,1), z=0:16)
16×16×16 RectilinearGrid{Float64, Periodic, Periodic, Bounded} on GPU with 3×3×3 halo
├── Periodic x ∈ [0.0, 1.0) regularly spaced with Δx=0.0625
├── Periodic y ∈ [0.0, 1.0) regularly spaced with Δy=0.0625
└── Bounded z ∈ [0.0, 16.0] variably spaced with min(Δz)=1.0, max(Δz)=1.0

julia> Δz = zspacings(grid, Center())
16-element view(OffsetArray(::CUDA.CuArray{Float64, 1, CUDA.Mem.DeviceBuffer}, -2:19), 1:16) with eltype Float64:
Error showing value of type SubArray{Float64, 1, OffsetArrays.OffsetVector{Float64, CUDA.CuArray{Float64, 1, CUDA.Mem.DeviceBuffer}}, Tuple{UnitRange{Int64}}, true}:
ERROR: Scalar indexing is disallowed.
Invocation of getindex resulted in scalar indexing of a GPU array.
This is typically caused by calling an iterating implementation of a method.
Such implementations *do not* execute on the GPU, but very slowly on the CPU,
and therefore are only permitted from the REPL for prototyping purposes.
If you did intend to index this array, annotate the caller with @allowscalar.
Stacktrace:
  [1] error(s::String)
    @ Base ./error.jl:35
  [2] assertscalar(op::String)
    @ GPUArraysCore ~/.julia/packages/GPUArraysCore/HaQcr/src/GPUArraysCore.jl:103
  [3] getindex
    @ ~/.julia/packages/GPUArrays/TnEpb/src/host/indexing.jl:9 [inlined]
  [4] getindex
    @ ~/.julia/packages/OffsetArrays/TcCEq/src/OffsetArrays.jl:436 [inlined]
  [5] getindex
    @ ./subarray.jl:315 [inlined]
  [6] _getindex
    @ ./abstractarray.jl:1274 [inlined]
  [7] getindex
    @ ./abstractarray.jl:1241 [inlined]
  [8] isassigned(::SubArray{Float64, 1, OffsetArrays.OffsetVector{Float64, CUDA.CuArray{Float64, 1, CUDA.Mem.DeviceBuffer}}, Tuple{UnitRange{Int64}}, true}, ::Int64, ::Int64)
    @ Base ./abstractarray.jl:565
  [9] alignment(io::IOContext{Base.TTY}, X::AbstractVecOrMat, rows::Vector{Int64}, cols::Vector{Int64}, cols_if_complete::Int64, cols_otherwise::Int64, sep::Int64, ncols::Int64)
    @ Base ./arrayshow.jl:68
 [10] _print_matrix(io::IOContext{Base.TTY}, X::AbstractVecOrMat, pre::String, sep::String, post::String, hdots::String, vdots::String, ddots::String, hmod::Int64, vmod::Int64, rowsA::UnitRange{Int64}, colsA::UnitRange{Int64})
    @ Base ./arrayshow.jl:207
 [11] print_matrix(io::IOContext{Base.TTY}, X::SubArray{Float64, 1, OffsetArrays.OffsetVector{Float64, CUDA.CuArray{Float64, 1, CUDA.Mem.DeviceBuffer}}, Tuple{UnitRange{Int64}}, true}, pre::String, sep::String, post::String, hdots::String, vdots::String, ddots::String, hmod::Int64, vmod::Int64) (repeats 2 times)
    @ Base ./arrayshow.jl:171
 [12] print_array
    @ ./arrayshow.jl:358 [inlined]
 [13] show(io::IOContext{Base.TTY}, #unused#::MIME{Symbol("text/plain")}, X::SubArray{Float64, 1, OffsetArrays.OffsetVector{Float64, CUDA.CuArray{Float64, 1, CUDA.Mem.DeviceBuffer}}, Tuple{UnitRange{Int64}}, true})
    @ Base ./arrayshow.jl:399
 [14] display(d::REPL.REPLDisplay{REPL.LineEditREPL}, mime::MIME{Symbol("text/plain")}, x::SubArray{Float64, 1, OffsetArrays.OffsetVector{Float64, CUDA.CuArray{Float64, 1, CUDA.Mem.DeviceBuffer}}, Tuple{UnitRange{Int64}}, true})
    @ OhMyREPL ~/.julia/packages/OhMyREPL/oDZvT/src/output_prompt_overwrite.jl:8
 [15] display(d::REPL.REPLDisplay, x::Any)
    @ REPL ~/julia-1.8/usr/share/julia/stdlib/v1.8/REPL/src/REPL.jl:272
 [16] display(x::Any)
    @ Base.Multimedia ./multimedia.jl:328
 [17] #invokelatest#2
    @ ./essentials.jl:729 [inlined]
 [18] invokelatest
    @ ./essentials.jl:726 [inlined]
 [19] print_response(errio::IO, response::Any, show_value::Bool, have_color::Bool, specialdisplay::Union{Nothing, AbstractDisplay})
    @ REPL ~/julia-1.8/usr/share/julia/stdlib/v1.8/REPL/src/REPL.jl:296
 [20] (::REPL.var"#45#46"{REPL.LineEditREPL, Pair{Any, Bool}, Bool, Bool})(io::Any)
    @ REPL ~/julia-1.8/usr/share/julia/stdlib/v1.8/REPL/src/REPL.jl:278
 [21] with_repl_linfo(f::Any, repl::REPL.LineEditREPL)
    @ REPL ~/julia-1.8/usr/share/julia/stdlib/v1.8/REPL/src/REPL.jl:521
 [22] print_response(repl::REPL.AbstractREPL, response::Any, show_value::Bool, have_color::Bool)
    @ REPL ~/julia-1.8/usr/share/julia/stdlib/v1.8/REPL/src/REPL.jl:276
 [23] (::REPL.var"#do_respond#66"{Bool, Bool, REPL.var"#77#87"{REPL.LineEditREPL, REPL.REPLHistoryProvider}, REPL.LineEditREPL, REPL.LineEdit.Prompt})(s::REPL.LineEdit.MIState, buf::Any, ok::Bool)
    @ REPL ~/julia-1.8/usr/share/julia/stdlib/v1.8/REPL/src/REPL.jl:857
 [24] #invokelatest#2
    @ ./essentials.jl:729 [inlined]
 [25] invokelatest
    @ ./essentials.jl:726 [inlined]
 [26] run_interface(terminal::REPL.Terminals.TextTerminal, m::REPL.LineEdit.ModalInterface, s::REPL.LineEdit.MIState)
    @ REPL.LineEdit ~/julia-1.8/usr/share/julia/stdlib/v1.8/REPL/src/LineEdit.jl:2510
 [27] run_frontend(repl::REPL.LineEditREPL, backend::REPL.REPLBackendRef)
    @ REPL ~/julia-1.8/usr/share/julia/stdlib/v1.8/REPL/src/REPL.jl:1248
 [28] (::REPL.var"#49#54"{REPL.LineEditREPL, REPL.REPLBackendRef})()
    @ REPL ./task.jl:484

julia> typeof(Δz)
SubArray{Float64, 1, OffsetArrays.OffsetVector{Float64, CUDA.CuArray{Float64, 1, CUDA.Mem.DeviceBuffer}}, Tuple{UnitRange{Int64}}, true}
```
What's broken is a show method for subarrays of CuArrays?
Or the problem is OffsetArrays of CuArrays
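Whatever the root cause, a common workaround (sketched here, untested) is to copy the spacings to the host before displaying or plotting, so no scalar indexing of the underlying `CuArray` occurs:

```julia
using Oceananigans
using CUDA

grid = RectilinearGrid(GPU(), size = (16, 16, 16), x = (0, 1), y = (0, 1), z = 0:16)

Δz = zspacings(grid, Center())  # a view of an OffsetArray wrapping a CuArray

Δz_cpu = Array(Δz)  # materialize on the CPU; safe to show or plot
show(Δz_cpu)

# for quick REPL inspection only, scalar indexing can be permitted explicitly:
CUDA.@allowscalar display(Δz)
```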
```julia
u_timeseries = FieldTimeSeries(filename, "u"; architecture = CPU())
v_timeseries = FieldTimeSeries(filename, "v"; architecture = CPU())
s_timeseries = FieldTimeSeries(filename, "speed"; architecture = CPU())
```
CPU is the default and pretty much always desired (unless you want to do post-processing on the GPU), so we may not need to specify `architecture` here
examples/double_gyre.jl
Outdated
```julia
U = mean(interior(U_timeseries)[:, :, :, end:end], dims=4)[:, :, 1, 1]
V = mean(interior(V_timeseries)[:, :, :, end:end], dims=4)[:, :, 1, 1]
```
This might work

```diff
- U = mean(interior(U_timeseries)[:, :, :, end:end], dims=4)[:, :, 1, 1]
- V = mean(interior(V_timeseries)[:, :, :, end:end], dims=4)[:, :, 1, 1]
+ U = interior(U_timeseries[end], :, :, 1)
+ V = interior(V_timeseries[end], :, :, 1)
```
There are quite a few issues, which will be resolved in the upcoming commits.
Force-pushed from db35e1f to 3037407.
As an update, I have pushed a commit with a working version of the double gyre example, where Δz is replaced by `zspacings` and `znodes`, the plots are improved with additional attributes, and visualization is performed on the CPU even if the code runs on the GPU. I still need to incorporate some of the suggested modifications listed above. I am uploading the plots and animation here. In today's meeting with Navid and Simone, we looked at switching to the lat-lon grid and more. In the upcoming commits, I will run for a longer time on a lat-lon grid, introduce checkpoints for restarting the simulation, and add a topography.

[attachment: double_gyre.jld2.mp4]
Force-pushed from 8b22045 to 5f0cc3a.
```julia
model = HydrostaticFreeSurfaceModel(; grid,
                                    free_surface = ImplicitFreeSurface(),
                                    momentum_advection = WENO(),
                                    tracer_advection = WENO(),
```
@simone-silvestri, `WENO()` won't work for a lat-lon grid, right? Since spacings are variable, one needs to call `WENO(grid)`, correct?
yeah, spacings are indeed variable, but since we are using a dimension-by-dimension reconstruction there is no need to use variable coefficients (dx is constant on a longitude line and the same goes for dy on a latitude line)
examples/double_gyre.jl
Outdated
```julia
zspacings_CPU = zspacings(grid_CPU, Center(), Center(), Center())
znodes_CPU = znodes(grid_CPU, Center(), Center(), Center())

Hx, Hy, Hz = halo_size(grid)
```
```diff
- Hx, Hy, Hz = halo_size(grid)
```

do we need `Hx, Hy, Hz`?
examples/double_gyre.jl
Outdated
```julia
using Oceananigans
using Oceananigans.Units
using Oceananigans.Grids: xnode, ynode, znode, halo_size, on_architecture
```
```diff
- using Oceananigans.Grids: xnode, ynode, znode, halo_size, on_architecture
+ using Oceananigans.Grids: on_architecture
```
Major modifications include: (a) switching to `zspacings` and `znodes` from Δz; (b) creating a "CPU version" of grid attributes for performing visualization on the CPU without any issue, even if the code runs on the GPU; (c) improving the plots by adding a variety of attributes; (d) adding a logical variable `run_simulation` with default value `true`, which can be switched to `false` when post-processing the output after running a simulation.
Force-pushed from 5f0cc3a to b894c39.
examples/double_gyre.jl
Outdated
```julia
const Lλ = 40.0          # [°] longitude extent of the domain
const Lφ = 60.0          # [°] latitude extent of the domain
const Lz = 1.8kilometers # depth [m]
const φ₀ = 15.0          # [°N] latitude of the center of the domain
```
I was thinking that it may be cleaner if we define the domain via, e.g.,
```diff
- const Lλ = 40.0          # [°] longitude extent of the domain
- const Lφ = 60.0          # [°] latitude extent of the domain
- const Lz = 1.8kilometers # depth [m]
- const φ₀ = 15.0          # [°N] latitude of the center of the domain
+ const λ_west  = -30 # [°] longitude of west boundary
+ const λ_east  = +30 # [°] longitude of east boundary
+ const φ_south = 15  # [°] latitude of south boundary
+ const φ_north = 75  # [°] latitude of north boundary
```
What do others think?
I like it!
Go for it!
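For concreteness, the endpoint-based definition might feed into the grid construction like this (the resolution and depth below are illustrative placeholders, not the example's actual values):

```julia
using Oceananigans
using Oceananigans.Units

const λ_west  = -30      # [°] longitude of west boundary
const λ_east  = +30      # [°] longitude of east boundary
const φ_south =  15      # [°] latitude of south boundary
const φ_north =  75      # [°] latitude of north boundary
const Lz = 1.8kilometers # depth [m]

grid = LatitudeLongitudeGrid(size = (60, 60, 30),
                             longitude = (λ_west, λ_east),
                             latitude = (φ_south, φ_north),
                             z = (-Lz, 0))
```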
In this commit, we switched the (a) grid from `RectilinearGrid` to `LatitudeLongitudeGrid`; (b) momentum_advection from `WENO()` to `VectorInvariant()`; and (c) coriolis from `BetaPlane(latitude = 45)` to `HydrostaticSphericalCoriolis()`. Unfortunately, the numerical solution is no longer stable and blows up after a few time steps. Coarsening the spatial resolution and reducing the time step (to lower the Courant number) has no effect.
Force-pushed from d18ccbd to 1b353fa.
In this commit, we (a) specified the end-points as opposed to the extents of the horizontal domain; (b) specified the vorticity and vertical schemes to WENO in the vector invariant form of the momentum advection; and (c) increased the halo size from the default (3, 3, 3) to (4, 4, 4) to be compatible with (b). The numerical solution is still blowing up!
(a) Reduced the buoyancy relaxation time scale to 10 days; (b) switched from `VerticalScalarDiffusivity` to `ConvectiveAdjustmentVerticalDiffusivity`; (c) switched from `ImplicitFreeSurface` to `SplitExplicitFreeSurface`; (d) specified the surface buoyancy forcing as a boundary condition; (e) removed the time step wizard for now; (f) replaced `record` with `CairoMakie.record` to avoid a clash with CUDA's `record`. The solution is still blowing up.
Force-pushed from 11571e9 to 43723b6.
(a) Modified the profiles of the wind stress, buoyancy forcing, and the bottom drag; (b) Modified the horizontal scalar diffusivity; (c) Plotted the wind stress and surface buoyancy forcing profiles; (d) By turning off each one at a time, pinpointed the cause of the blow-up to the buoyancy forcing (and not the wind stress); (e) Removed unnecessary inline functions.
Put back spaces between hashes (#) to restore the style for the literation of the examples.
(a) Switched from `HydrostaticSphericalCoriolis()` to `HydrostaticSphericalCoriolis(scheme = ActiveCellEnstrophyConservingScheme())`; (b) modified the formulation of the barotropic streamfunction and fixed a bug in plotting its heatmap. The modifications in this commit and the last few have not been able to prevent the blow-up of the numerical solution within less than a year.
A question on how this relates to the MITgcm example found here. In MITgcm the wind stress is added as a body force with the function [...]

This has units of [...]. In this example, we force the momentum using a flux boundary condition. In particular, the [...]. But when we are specifying a flux boundary condition on the zonal velocity, I thought we are specifying [...]. I wonder if it should be something like this:

[...]

but I am curious to know what other people think.
I believe that the term we need to add in Oceananigans as a boundary-condition forcing is the actual kinematic wind stress (dimensions Length² / Time²); I have no idea whether MITgcm needs something else, to be honest. But even if MITgcm needs some other quantity, that's irrelevant here. The horizontal momentum equation for the hydrostatic free-surface model reads: [...] with [...]
Thanks @navidcy for the reply. This is helpful. This means that when we integrate the zonal momentum equation over a volume (as part of deriving the finite volume method), we get the vertical integral in the [...]
In MITgcm a flux boundary condition is implemented as a body force in the top grid cell. We call this a "flux" in Oceananigans. As for units --- the units of [...]
Thanks @glwagner, very helpful!
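Putting the thread's conclusion into code: a sketch (with an assumed stress amplitude `τ₀`, reference density `ρ₀`, grid, and a made-up sinusoidal profile) of imposing the kinematic wind stress as a top flux boundary condition on the zonal velocity:

```julia
using Oceananigans
using Oceananigans.Units

grid = LatitudeLongitudeGrid(size = (60, 60, 30),
                             longitude = (-30, 30), latitude = (15, 75), z = (-1800, 0))

const τ₀ = 0.1    # [N m⁻²] wind-stress amplitude (assumed)
const ρ₀ = 1025   # [kg m⁻³] reference density (assumed)

# kinematic stress [m² s⁻²] = dimensional stress / reference density;
# the sign convention and latitudinal profile here are illustrative only
@inline τx(λ, φ, t) = -(τ₀ / ρ₀) * cos(2π * (φ - 15) / 60)

u_top_bc = FluxBoundaryCondition(τx)
u_bcs = FieldBoundaryConditions(top = u_top_bc)

model = HydrostaticFreeSurfaceModel(; grid, boundary_conditions = (; u = u_bcs))
```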
This PR continues #1085, the work that started on a fork long ago.
@siddharthabishnu feel free to continue working on this Oceananigans branch now ;)
Some of the things we discussed to do: