
Add double gyre example in Docs #3087

Draft · wants to merge 11 commits into main
Conversation

@navidcy navidcy commented Apr 25, 2023

This PR continues #1085, work that started on a fork long ago.

@siddharthabishnu feel free to continue working on this Oceananigans branch now ;)

Some of the things we discussed to do:

  • convert to a lat-lon grid
  • add some bathymetry, e.g., a bathtub-type domain prescribed by an analytical function
  • use Ri-based diffusivity for the boundary-layer closure
  • remove horizontal viscosities/diffusivities
  • potentially run a long simulation, save a checkpoint, and then resume to run for 6-12 months in the docs

* first attempt for a double gyre example; some instability rises

* change buoyancy forcing to flux BC

* this seems like a working double-gyre but blows up eventually

* adds biharmonic; BC's don't work

* now example includes biharmonic lateral viscosity/diffusivity

* buoyancy relaxation is a BC, not forcing

* updates to run on latest Oceananigans

* some more attempts but still blows up...

* updates Manifest

* yet another version that blows up :(

* Updates double gyre example;

* Tweaks comments and presentation

* Smaller time-step for hurricane-strength winds

* The checkpointer will save us

* update double_gyre script to latest Oceananigans shenanigans

* remove nonhydrostaticmodel [skip ci]

* some tidying up [skip ci]

* some updates [skip ci]

* updates double gyre script

* fix deprecation in time-step wizard

* updates [skip ci]

* adds double gyre

* barotropic streamfunction plot

* Update double gyre script

* Woops

* simplify bc and forcing

* use CairoMakie

---------

Co-authored-by: Gregory Wagner <[email protected]>
Co-authored-by: ali-ramadhan <[email protected]>
@navidcy navidcy changed the title Double gyre Add double gyre example in Docs Apr 25, 2023
@navidcy navidcy added science 🌊 Sometimes addictive, sometimes allergenic documentation 📜 The sacred scrolls labels Apr 25, 2023
Comment on lines 38 to 39
lines!(ax, grid.Δzᵃᵃᶜ[1:grid.Nz], grid.zᵃᵃᶜ[1:grid.Nz])
scatter!(ax, grid.Δzᵃᵃᶜ[1:Nz], grid.zᵃᵃᶜ[1:Nz])
Member:

use zspacings here rather than explicitly referencing grid properties

Collaborator Author:

zspacings and znodes
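As a sketch of what the `zspacings`/`znodes` replacement might look like, here is a minimal, self-contained example. The grid dimensions and the stretched vertical spacing are invented for illustration (the PR's actual grid differs), and `zspacings`/`znodes` signatures and return types vary between Oceananigans versions — the thread's help output shows the array-returning form assumed here:

```julia
using Oceananigans
using CairoMakie

# Hypothetical stretched vertical grid so that Δz actually varies:
Nz = 24
z_faces(k) = -1000 * (1 - ((k - 1) / Nz)^1.5)

grid = RectilinearGrid(size = (1, 1, Nz), x = (0, 1), y = (0, 1), z = z_faces)

# Function-based interfaces instead of internals like grid.Δzᵃᵃᶜ:
Δz = zspacings(grid, Center())
z  = znodes(grid, Center())

fig = Figure()
ax = Axis(fig[1, 1], xlabel = "Δz (m)", ylabel = "z (m)")
lines!(ax, Δz, z)
scatter!(ax, Δz, z)
```

This keeps the example free of halo indices and `.parent` access, in line with the review comments below.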

tracer_advection = WENO(),
buoyancy = BuoyancyTracer(),
coriolis = BetaPlane(latitude=45),
closure = (vertical_diffusive_closure, horizontal_diffusive_closure),
@glwagner (Member), Apr 25, 2023:

Suggested change:
- closure = (vertical_diffusive_closure, horizontal_diffusive_closure),
+ closure = RiBasedVerticalDiffusivity(),


@info "Making an animation from the saved data..."

anim = @animate for i in 1:length(times)
Member:

We'll have to change all this to use CairoMakie (see the other examples)

momentum_advection = WENO(),
tracer_advection = WENO(),
buoyancy = BuoyancyTracer(),
coriolis = BetaPlane(latitude=45),
Member:

Suggested change:
- coriolis = BetaPlane(latitude=45),
+ coriolis = HydrostaticSphericalCoriolis(),

Collaborator Author:

This change assumes you changed the grid to be lat-lon

Member:

Right! Beta plane won't work for lat-lon.

# ## Model building

model = HydrostaticFreeSurfaceModel(; grid,
free_surface = ImplicitFreeSurface(),
Member:

use split explicit free surface
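A hedged sketch of what the suggested constructor might look like. The grid size, extents, and number of substeps are invented for illustration, and the exact `SplitExplicitFreeSurface` keyword arguments may differ between Oceananigans versions:

```julia
using Oceananigans

grid = LatitudeLongitudeGrid(size = (60, 60, 18),
                             longitude = (-30, 30), latitude = (15, 75),
                             z = (-1800, 0), halo = (4, 4, 4))

model = HydrostaticFreeSurfaceModel(; grid,
                                    free_surface = SplitExplicitFreeSurface(substeps = 30),
                                    momentum_advection = VectorInvariant(),
                                    tracer_advection = WENO(),
                                    coriolis = HydrostaticSphericalCoriolis(),
                                    buoyancy = BuoyancyTracer(),
                                    tracers = :b)
```

The split-explicit free surface substeps the barotropic mode rather than solving an elliptic problem each step, which is why it is preferred over `ImplicitFreeSurface` for examples like this.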


model = HydrostaticFreeSurfaceModel(; grid,
free_surface = ImplicitFreeSurface(),
momentum_advection = WENO(),
Member:

Suggested change:
- momentum_advection = WENO(),
+ momentum_advection = VectorInvariant(vorticity_scheme = WENO(grid),
+                                      divergence_scheme = WENO(grid),
+                                      vertical_scheme = WENO(grid))

might need some reformatting

@glwagner (Member):

@siddharthabishnu I added some suggestions to get started!

There may also be some useful code here:

https://github.com/CliMA/ClimaOcean.jl/blob/main/src/IdealizedSimulations/neverworld_simulation.jl


using Oceananigans
using Oceananigans.Units
using Oceananigans.Grids: xnode, ynode, znode
Collaborator Author:

Suggested change:
- using Oceananigans.Grids: xnode, ynode, znode

I believe these are not needed atm

@siddharthabishnu (Contributor):

Thanks Greg and Navid for these useful suggestions. I missed some of them before. I am implementing them now. I will upload visualization results soon.


navidcy commented May 1, 2023

Just a reminder: if we are going to use HydrostaticSphericalCoriolis in this example we should also include it in the Coriolis section in the docs.

lines!(ax, grid.Δzᵃᵃᶜ[1:grid.Nz], grid.zᵃᵃᶜ[1:grid.Nz])
scatter!(ax, grid.Δzᵃᵃᶜ[1:Nz], grid.zᵃᵃᶜ[1:Nz])

Δz_cpu = Array(grid.Δzᵃᵃᶜ.parent)[1+Hz:Nz+Hz]
@navidcy (Collaborator Author), May 3, 2023:

Why not use zspacings(grid, Center())? This line is correct but not very user friendly.

julia> using Oceananigans

help?> zspacings
search: zspacings minimum_zspacing

  zspacings(grid, ℓx, ℓy, ℓz; with_halos=true)


  Return the spacings over the interior nodes on grid in the z-direction for the location ℓx, ℓy, ℓz. For Bounded directions, Face nodes include
  the boundary points.

  julia> using Oceananigans

  julia> grid = LatitudeLongitudeGrid(size=(20, 15, 10), longitude=(0, 20), latitude=(-15, 15), z=(-100, 0));

  julia> zspacings(grid, Center(), Center(), Center())
  10.0

julia> using CUDA

julia> CUDA.@allowscalar show(Δz)
[1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 1.0]

Member:

Right, to the extent possible, we want to use function-based interfaces where available and avoid referencing grid properties. In particular, there are always alternatives to .parent. We are also trying (but have not totally succeeded) to "hide" halos from users, so we don't reference halos in any examples.

Member:

zspacings may be broken on GPU though so we might need to fix that

@navidcy (Collaborator Author), May 3, 2023:

if it's broken let's fix it! But I believe it's the show() method for x/y/z/λ/φspacings that is broken. See:

julia> using Oceananigans
[ Info: Oceananigans will use 4 threads

julia> grid = RectilinearGrid(GPU(), size=(16, 16, 16), x=(0,1), y=(0,1), z=0:16)
16×16×16 RectilinearGrid{Float64, Periodic, Periodic, Bounded} on GPU with 3×3×3 halo
├── Periodic x  [0.0, 1.0)  regularly spaced with Δx=0.0625
├── Periodic y  [0.0, 1.0)  regularly spaced with Δy=0.0625
└── Bounded  z  [0.0, 16.0] variably spaced with min(Δz)=1.0, max(Δz)=1.0

julia> Δz = zspacings(grid, Center())
16-element view(OffsetArray(::CUDA.CuArray{Float64, 1, CUDA.Mem.DeviceBuffer}, -2:19), 1:16) with eltype Float64:
Error showing value of type SubArray{Float64, 1, OffsetArrays.OffsetVector{Float64, CUDA.CuArray{Float64, 1, CUDA.Mem.DeviceBuffer}}, Tuple{UnitRange{Int64}}, true}:
ERROR: Scalar indexing is disallowed.
Invocation of getindex resulted in scalar indexing of a GPU array.
This is typically caused by calling an iterating implementation of a method.
Such implementations *do not* execute on the GPU, but very slowly on the CPU,
and therefore are only permitted from the REPL for prototyping purposes.
If you did intend to index this array, annotate the caller with @allowscalar.
Stacktrace:
  [1] error(s::String)
    @ Base ./error.jl:35
  [2] assertscalar(op::String)
    @ GPUArraysCore ~/.julia/packages/GPUArraysCore/HaQcr/src/GPUArraysCore.jl:103
  [3] getindex
    @ ~/.julia/packages/GPUArrays/TnEpb/src/host/indexing.jl:9 [inlined]
  [4] getindex
    @ ~/.julia/packages/OffsetArrays/TcCEq/src/OffsetArrays.jl:436 [inlined]
  [5] getindex
    @ ./subarray.jl:315 [inlined]
  [6] _getindex
    @ ./abstractarray.jl:1274 [inlined]
  [7] getindex
    @ ./abstractarray.jl:1241 [inlined]
  [8] isassigned(::SubArray{Float64, 1, OffsetArrays.OffsetVector{Float64, CUDA.CuArray{Float64, 1, CUDA.Mem.DeviceBuffer}}, Tuple{UnitRange{Int64}}, true}, ::Int64, ::Int64)
    @ Base ./abstractarray.jl:565
  ⋮ (remaining frames are REPL array-display internals: alignment, print_matrix, show, display, invokelatest, run_interface)

julia> typeof(Δz)
SubArray{Float64, 1, OffsetArrays.OffsetVector{Float64, CUDA.CuArray{Float64, 1, CUDA.Mem.DeviceBuffer}}, Tuple{UnitRange{Int64}}, true}

Collaborator Author:

What's broken is a show method for subarrays of CuArrays?

Collaborator Author:

Or the problem is with OffsetArrays of CuArrays.

Comment on lines +178 to +179
u_timeseries = FieldTimeSeries(filename, "u"; architecture = CPU())
v_timeseries = FieldTimeSeries(filename, "v"; architecture = CPU())
s_timeseries = FieldTimeSeries(filename, "speed"; architecture = CPU())
Member:

CPU is the default and pretty much always desired (unless you want to do post-processing on the GPU), so we may not need to specify architecture here.

U = mean(interior(U_timeseries)[:, :, :, end:end], dims=4)[:, :, 1, 1]
V = mean(interior(V_timeseries)[:, :, :, end:end], dims=4)[:, :, 1, 1]
@glwagner (Member), May 3, 2023:

This might work

Suggested change:
- U = mean(interior(U_timeseries)[:, :, :, end:end], dims=4)[:, :, 1, 1]
- V = mean(interior(V_timeseries)[:, :, :, end:end], dims=4)[:, :, 1, 1]
+ U = interior(U_timeseries[end], :, :, 1)
+ V = interior(V_timeseries[end], :, :, 1)

There are quite a few issues, which will be resolved in the upcoming commits.

siddharthabishnu commented May 17, 2023

As an update, I have pushed a commit consisting of a working version of the double gyre example, where Δz is replaced by zspacings and znodes, the plots are improved with additional attributes, and visualization is performed on the CPU even if the code runs on the GPU. I still need to incorporate some of the suggested modifications listed above. I am uploading the plots and animation here. In today's meeting with Navid and Simone, we looked at switching to the lat-lon grid and more. In the upcoming commits, I will run for a longer time on a lat-lon grid, introduce checkpoints for restarting the simulation, and add topography.

double_gyre_grid_spacing.pdf

double_gyre_circulation.pdf

double_gyre.jld2.mp4

model = HydrostaticFreeSurfaceModel(; grid,
free_surface = ImplicitFreeSurface(),
momentum_advection = WENO(),
tracer_advection = WENO(),
Collaborator Author:

@simone-silvestri, WENO() won't work for a lat-lon grid, right? Since spacings are variable, one needs to call WENO(grid), correct?

Collaborator:

Yeah, spacings are indeed variable, but since we are using a dimension-by-dimension reconstruction there is no need to use variable coefficients (Δx is constant along a line of constant latitude, and Δy along a line of constant longitude).

zspacings_CPU = zspacings(grid_CPU, Center(), Center(), Center())
znodes_CPU = znodes(grid_CPU, Center(), Center(), Center())

Hx, Hy, Hz = halo_size(grid)
Collaborator Author:

Suggested change:
- Hx, Hy, Hz = halo_size(grid)

do we need Hx, Hy, Hz?


using Oceananigans
using Oceananigans.Units
using Oceananigans.Grids: xnode, ynode, znode, halo_size, on_architecture
Collaborator Author:

Suggested change:
- using Oceananigans.Grids: xnode, ynode, znode, halo_size, on_architecture
+ using Oceananigans.Grids: on_architecture

Major modifications include:

(a) Switching to zspacings and znodes from Δz;
(b) Creating a 'CPU version' of grid attributes for performing visualization on
    the CPU without any issue, even if the code runs on the GPU;
(c) Improving the plots by adding a variety of attributes;
(d) Adding a logical variable 'run_simulation' with default value 'true', which
    can be switched to false while post-processing the output after running a
    simulation.
Comment on lines 15 to 18
const Lλ = 40.0 # [°] longitude extent of the domain
const Lφ = 60.0 # [°] latitude extent of the domain
const Lz = 1.8kilometers # depth [m]
const φ₀ = 15.0 # [°N] latitude of the center of the domain
Collaborator Author:

I was thinking that it may be cleaner if we define the domain via, e.g.,

Suggested change:
- const Lλ = 40.0 # [°] longitude extent of the domain
- const Lφ = 60.0 # [°] latitude extent of the domain
- const Lz = 1.8kilometers # depth [m]
- const φ₀ = 15.0 # [°N] latitude of the center of the domain
+ const λ_west = -30 # [°] longitude of west boundary
+ const λ_east = +30 # [°] longitude of east boundary
+ const φ_south = 15 # [°] latitude of south boundary
+ const φ_north = 75 # [°] latitude of north boundary

What do others think?

Contributor:

I like it!

Collaborator Author:

Go for it!

In this commit, we switched the
(a) grid from RectilinearGrid to LatitudeLongitudeGrid;
(b) momentum_advection from WENO() to VectorInvariant(); and
(c) coriolis from BetaPlane(latitude = 45) to HydrostaticSphericalCoriolis().

Unfortunately, the numerical solution is no longer stable and blows up after a
few time steps. Coarsening the spatial resolution and reducing the time step (to
lower the Courant number) has no effect.
In this commit, we

(a) specified the end-points as opposed to the extents of the horizontal domain;
(b) specified the vorticity and vertical schemes to WENO in the vector invariant
    form of the momentum advection; and
(c) increased the halo size from the default (3, 3, 3) to (4, 4, 4) to be
    compatible with (b).

The numerical solution is still blowing up!
(a) Reduced the buoyancy relaxation time scale to 10 days;
(b) Switched from VerticalScalarDiffusivity to
    ConvectiveAdjustmentVerticalDiffusivity;
(c) Switched from ImplicitFreeSurface to SplitExplicitFreeSurface;
(d) Specified the surface buoyancy forcing as a boundary condition;
(e) Removed the time step wizard for now;
(f) Replaced record with CairoMakie.record to avoid clash with CUDA record.

The solution is still blowing up.
(a) Modified the profiles of the wind stress, buoyancy forcing, and the bottom
    drag;
(b) Modified the horizontal scalar diffusivity;
(c) Plotted the wind stress and surface buoyancy forcing profiles;
(d) By turning off each one at a time, pinpointed the cause of the blow-up to
    the buoyancy forcing (and not the wind stress);
(e) Removed unnecessary inline functions.
Put back spaces between hashes (#) to restore the style for the literation of
the examples.
@siddharthabishnu (Contributor):

@navidcy, based on your comment here, I have put the spaces between the hashes to restore the literation style of the examples.

(a) Switched from HydrostaticSphericalCoriolis() to
    HydrostaticSphericalCoriolis(scheme = ActiveCellEnstrophyConservingScheme());
(b) Modified the formulation of the barotropic streamfunction and fixed a bug in
    plotting its heatmap.

The modifications in this commit and the last few have not been able to prevent
the blow-up of the numerical solution within less than a year.

francispoulin commented May 26, 2024

A question on how this relates to the MITgcm example found here.

In the MITgcm the wind stress is added as a body force with the function

F_u = tau_x / (rho_c dz_s)

This has units of m/s^2, which is what we need to force the momentum equation.

In this example, we force the momentum using a flux boundary condition. In particular, the u_stress term has the same units as tau/rho, which is m^2/s^2, as stated in line 76 of double_gyre.jl.

But when we specify a flux boundary condition on the zonal velocity, I thought we were specifying du/dz, which has units of 1/s. Could it be that we need to multiply by a time scale and divide by a length scale squared for this to be correct?

I wonder if it should be something like this

F_u/dz_s = tau_x / (rho_c dz_s^2)

but I am curious to know what other people think.

@navidcy @glwagner @simone-silvestri


navidcy commented May 28, 2024

I believe that the term we need to add in Oceananigans as a boundary-condition forcing is the actual kinematic wind stress (dimensions Length²/Time²); I have no idea whether MITgcm needs something else, to be honest. But even if MITgcm needs some other quantity, that's irrelevant here.

The horizontal momentum equation for the hydrostatic free-surface model reads:

$$\partial_t \boldsymbol{u} = \dotsb -\partial_z \boldsymbol{\tau} + \dotsb$$

where $\boldsymbol{\tau}$ is the kinematic stress: the wind stress $\boldsymbol{\tau}_s$ at the surface, the bottom drag $\boldsymbol{\tau}_b$ at the ocean's bottom, and $-\nu\partial_z\boldsymbol{u}$ in the ocean's interior.

@francispoulin (Collaborator):

Thanks @navidcy for the reply. This is helpful.

This means that when we integrate the zonal momentum equation over a volume (as part of deriving the finite volume method), the vertical integral of the z-derivative gives the flux at the top minus the flux at the bottom. These should be averages over each horizontal cell, and we also divide by a length scale. I presume that's why MITgcm divides by the depth of the top cell. Interesting that they seem to use a body force, but it should result in the same dynamics, I suppose.
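For concreteness, the reasoning above can be written out by averaging the stress-divergence term over a top grid cell of thickness $\Delta z_s$ (with $\tau^x(-\Delta z_s)$ denoting the interior stress at the cell bottom):

$$\frac{1}{\Delta z_s} \int_{-\Delta z_s}^{0} \left( -\partial_z \tau^x \right) \mathrm{d}z = \frac{\tau^x(-\Delta z_s) - \tau^x(0)}{\Delta z_s} ,$$

so the surface value $\tau^x(0)$ enters the top cell as a body force of magnitude $\tau^x(0)/\Delta z_s$, which matches the MITgcm form $F_u = \tau_x / (\rho_c \, \Delta z_s)$ (up to sign convention) once $\tau^x$ is taken as the kinematic stress $\tau_x/\rho_c$.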

@glwagner (Member):

In MITgcm a flux boundary condition is implemented as a body force in the top grid cell. We call this a "flux" in Oceananigans.

As for units: the units of a Flux boundary condition are a velocity times the prognosed quantity. Thus, for a velocity, a Flux is m^2/s^2. The equations of motion are divided by a reference density a priori, so that the reference density does not appear unless needed for some other reason (for example, when computing buoyancy from a nonlinear equation of state that computes the density perturbation, like TEOS-10).
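A minimal sketch of this convention follows. The stress amplitude, profile, and domain width are invented for illustration; the point is only the units — a top flux on u is in m²/s², so the kinematic stress τ/ρ₀ is prescribed directly:

```julia
using Oceananigans
using Oceananigans.Units

ρ₀ = 1000.0        # kg m⁻³, illustrative reference density
τ₀ = 0.1           # N m⁻², illustrative dimensional wind stress
Ly = 1000kilometers

# A Flux boundary condition on u has units of velocity × u, i.e. m² s⁻²:
@inline wind_stress(x, y, t) = -(τ₀ / ρ₀) * cos(π * y / Ly)

u_bcs = FieldBoundaryConditions(top = FluxBoundaryCondition(wind_stress))
```

The boundary-condition function signature (here `(x, y, t)` for a top boundary) depends on the grid type and Oceananigans version.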

@francispoulin (Collaborator):

Thanks @glwagner , very helpful!

Labels: documentation 📜 The sacred scrolls · science 🌊 Sometimes addictive, sometimes allergenic

5 participants