Implement Stochastic Update #230

Closed
costachris opened this issue Nov 8, 2022 · 6 comments · Fixed by #233
costachris (Member) commented Nov 8, 2022

Aim: Add functionality to perform stochastic updates (in parameter space) with contraction. The update option should be available across all EnsembleKalmanProcesses and take the form:

$$u_{n+1} = a\, u_n + \xi_{n} \qquad (1)$$

Stochastic Ensemble Kalman Update

The stochastic modification changes the prediction step in the following manner (Huang et al. 2022, eqn. 41):

$$\widehat{m}_{n+1} = m_{n}; \qquad \widehat{u}^{j}_{n + 1} = \widehat{m}_{n+1} + \sqrt{\frac{1}{1 - \Delta\tau}} \left(u^{j}_{n} - m_{n} \right) \qquad (2)$$

where $m$ is the ensemble mean and $\Delta \tau$ is a hyperparameter lying in the range $(0, 1)$. The RHS of eqn. 2 is equivalent to adding random noise directly, and corresponds to $\xi_{n}$ in eqn. 1. The forward map $G$ must be computed after the prediction step (eqn. 2) and before the analysis step (updating the covariance estimates $\mathrm{Cov}(G, G)$, $\mathrm{Cov}(u, G)$, etc.).
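As a rough illustration, a minimal Julia sketch of the prediction step in eqn. 2, assuming the ensemble is stored column-wise in a matrix; the function name and layout are illustrative, not the EKP API:

```julia
using Statistics

# Sketch of the prediction step (eqn. 2): the ensemble mean is kept fixed
# and deviations from it are inflated. `u` holds the ensemble column-wise
# (n_params × n_ensemble); Δτ is assumed to lie in (0, 1).
function predict_stochastic(u::AbstractMatrix, Δτ::Real)
    @assert 0 < Δτ < 1 "Δτ must lie in (0, 1)"
    m = mean(u, dims = 2)                        # ensemble mean m_n
    return m .+ sqrt(1 / (1 - Δτ)) .* (u .- m)   # û^j = m_n + √(1/(1-Δτ)) (u^j - m_n)
end
```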

costachris (Member, Author) commented Nov 8, 2022

Proposed Solution:
Create a common stochastic_update function that can be employed by all EnsembleKalmanProcesses, modifying the existing updates by adding a step equivalent to eqn. 2 above. The overarching update_ensemble! method for each EnsembleKalmanProcess will then take a flag (defaulting to false) indicating whether to apply a stochastic update. For example, in EKI the flag will be passed through the functions (starting at update_ensemble!), and eki_update will call stochastic_update in addition to the existing covariance updates. When true, the code will apply both eqn. 2 and the analysis step (updating covariances, etc.). $\Delta \tau$ should also be passed when the stochastic update flag is true. @Zhengyu-Huang is $\Delta \tau$ the same as the artificial timestep $\Delta t$ in the EKP code?

Zhengyu-Huang (Contributor) commented

@costachris Thank you for the great effort! Yes, in my implementation $\Delta \tau = \Delta t$, and I need it between 0 and 1; I generally use 0.5. If the stochastic_update is on, set dt = min(0.5, dt).

If $\xi_n \sim N(0, \frac{\Delta \tau}{1 - \Delta \tau} C_n)$, then both implementations are the same. You can also choose $\xi_n \sim N(0, \frac{\Delta \tau}{1 - \Delta \tau} C_0)$.
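For concreteness, a sketch of this additive form using Distributions.jl, drawing $\xi_n$ from the scaled ensemble covariance (the helper name is illustrative, and it assumes $C$ is positive definite):

```julia
using Distributions, Statistics

# Additive form: ξ_n ~ N(0, Δτ/(1-Δτ) C), one independent draw per member.
# C defaults to the ensemble covariance C_n; pass the prior covariance C_0
# to use the alternative choice above.
function add_stochastic_noise(u::AbstractMatrix, Δτ::Real; C = cov(u, dims = 2))
    noise = MvNormal(zeros(size(u, 1)), (Δτ / (1 - Δτ)) * C)  # requires C ≻ 0
    return u .+ rand(noise, size(u, 2))
end
```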

odunbar (Collaborator) commented Nov 9, 2022

Yeah, looks good. Implementing this within each process's update_ensemble! would be fine.

If you wanted to be slick, you could instead change the update_ensemble!(ekp{Process}, ...) calls into update_ensemble!(ekp, ..., ::Process) calls, and add a new method:

```julia
function update_ensemble!(ekp, ..., stoch_flag, contraction_flag)
    # dispatch to the process-specific analysis step
    new_ensemble = update_ensemble!(ekp, ..., get_process(ekp))
    # optionally apply the stochastic (eqn. 2) step, then the contraction
    new_ensemble = stoch_flag ? stochastic_update!(new_ensemble, ...) : new_ensemble
    new_ensemble = contraction_flag ? contract!(new_ensemble, ...) : new_ensemble
    return new_ensemble
end
```

or something, but I understand if this is too much out of scope!

costachris changed the title from "Implement Stochastic Update with Contraction" to "Implement Stochastic Update" Nov 9, 2022
ilopezgp (Contributor) commented

> @costachris Thank you for the great effort! Yes, in my implementation $\Delta \tau = \Delta t$, and I need it between 0 and 1; I generally use 0.5. If the stochastic_update is on, set dt = min(0.5, dt). If $\xi_n \sim N(0, \frac{\Delta \tau}{1 - \Delta \tau} C_n)$, then both implementations are the same; you can also choose $\xi_n \sim N(0, \frac{\Delta \tau}{1 - \Delta \tau} C_0)$.

I am concerned about enforcing $\Delta \tau = \Delta t$ in general, since as @Zhengyu-Huang says, $\Delta \tau$ needs to be less than 1. In general, batching requires $\Delta t > 1$ to ensure we are estimating a batch-independent uncertainty measure. This would mean we cannot use batching with the stochastic update.

ilopezgp self-assigned this Nov 12, 2022
costachris (Member, Author) commented

> I am concerned about enforcing $\Delta \tau = \Delta t$ in general, since as @Zhengyu-Huang says, $\Delta \tau$ needs to be less than 1. In general, batching requires $\Delta t > 1$ to ensure we are estimating a batch-independent uncertainty measure. This would mean we cannot use batching with the stochastic update.

@Zhengyu-Huang is there a way around this, or does this implementation have to be used without batching? Is the only reason for $\Delta \tau < 1$ to prevent the prefactor from blowing up when $\Delta \tau = 1$, or is there a more fundamental reason?

costachris mentioned this issue Nov 16, 2022
odunbar (Collaborator) commented Jan 18, 2023

To summarize our meeting:

  • We wish to add a scaling factor $s$ to $\Delta t$; the default $s$ will be 0, and the default $\Delta t$ will be 1.
  • If $s = 0$ we don't do any parameter inflation. If $s > 0$ we run one of the following after the EKI update (toggled by a flag; see the sketch after this list):
  1. A multiplicative inflation, given by eqn. 2 above [does not break the subspace property]
  2. An additive inflation, given by Daniel's first equation for $\xi_n$ [breaks the subspace property]

Optional: add the option, in the additive inflation, to add a scaled prior covariance (i.e. where $C_n$ is replaced by $C_0$)
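A sketch tying this summary together, with the $s$ scaling and a flag toggling between the two inflation variants; all names here are illustrative, not the merged interface:

```julia
using Statistics, Distributions

# s = 0 (the default) disables inflation; s > 0 applies one of the two
# variants after the EKI update, with Δτ = s * Δt assumed to lie in (0, 1).
function inflate(u::AbstractMatrix, s::Real, Δt::Real;
                 multiplicative = true, C0 = nothing)
    s == 0 && return u
    Δτ = s * Δt
    if multiplicative
        # option 1: multiplicative inflation (eqn. 2), preserves the subspace property
        m = mean(u, dims = 2)
        return m .+ sqrt(1 / (1 - Δτ)) .* (u .- m)
    else
        # option 2: additive inflation, breaks the subspace property;
        # optionally use the prior covariance C_0 in place of the ensemble C_n
        C = C0 === nothing ? cov(u, dims = 2) : C0
        ξ = rand(MvNormal(zeros(size(u, 1)), (Δτ / (1 - Δτ)) * C), size(u, 2))
        return u .+ ξ
    end
end
```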

costachris linked a pull request Feb 2, 2023 that will close this issue
bors bot closed this as completed in #233 Feb 10, 2023