
Invalid SAT environment while optimizing #1023

Open
Halbaroth opened this issue Jan 8, 2024 · 8 comments
Assignees: Halbaroth
Labels: backlog, bug, optimization (This issue is related to optimization in models.)

Comments

Halbaroth (Collaborator) commented Jan 8, 2024

The current implementation of unsat_rec_prem is wrong in the presence of optimization if we use the SAT API directly. Consider the following input problem:

(set-option :produce-models true)
(set-logic ALL)
(declare-const x Int)
(assert (<= x 10))
(maximize x)
(check-sat)

This problem is SAT and we expect to be able to reuse the environment after the (check-sat) statement. With the Dolmen frontend this is fine, because we throw away the previous environment and build a fresh one with all the assertions before each (check-sat).

If we use the imperative SAT API, we want to write:

let env = SAT.empty () in 
SAT.declare env `x`;
SAT.assume env `x <= 10`;
SAT.optimize env ~is_max:true `x`;
SAT.unsat env;
SAT.assume env `x <= 10`;

But the current implementation raises the exception Unsat on the last statement. The reason is simple: analyze_unknown_for_objectives calls the solver again and again until it reaches an unsatisfiable context. In other words, in the presence of optimization, the environment env is always unsatisfiable after calling SAT.unsat. In the example above, it contains both the assertions x <= 10 and x > 10.
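To make the failure mode concrete, here is a small self-contained simulation of the maximization loop. None of this is Alt-Ergo code: solve, the predicate-list "environment" and the bounded search are all invented for illustration; they only mimic how repeatedly asserting a tighter bound leaves the final context contradictory.

```ocaml
(* Toy "solver": an environment is a list of predicates over one integer
   variable x; solving is a bounded linear search for a model. *)
let solve assertions =
  let rec try_ x =
    if x > 100 then None
    else if List.for_all (fun p -> p x) assertions then Some x
    else try_ (x + 1)
  in
  try_ (-100)

let () =
  (* Original problem: x <= 10, maximize x. *)
  let env = ref [ (fun x -> x <= 10) ] in
  let best = ref None in
  (* Keep asserting x > m for each model m until unsat, mimicking the
     bound-tightening loop of analyze_unknown_for_objectives. *)
  let rec optimize () =
    match solve !env with
    | Some m ->
        best := Some m;
        env := (fun x -> x > m) :: !env;  (* this assertion stays behind *)
        optimize ()
    | None -> ()
  in
  optimize ();
  (* The optimum is 10, but the final environment is unsatisfiable: it
     contains both x <= 10 and x > 10. *)
  Printf.printf "best = %s, env satisfiable: %b\n"
    (match !best with Some m -> string_of_int m | None -> "none")
    (solve !env <> None)
  (* Prints: best = 10, env satisfiable: false *)
```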

A naive solution consists in protecting the penultimate environment with the push/pop mechanism and retrieving the model and objective values produced before popping. Notice that we shouldn't produce an environment with extra push levels, otherwise the behaviour of the pop statement would be wrong after calling SAT.unsat.

To implement this solution, we can save the new assertions discovered in analyze_unknown_for_objectives in a stack, wrap the assume and unsat calls of unsat_rec_prem in a (push 1)/(pop 1) pair, and, after getting an Unsat exception, take the tail of the stack and assert it without any push/pop around it.
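That scheme could be sketched as follows, against the same hypothetical imperative API as above. This is not the actual unsat_rec_prem code: next_bound_assertion, optimize_protected and the overall shape are invented for illustration.

```ocaml
(* Sketch only: SAT is the hypothetical imperative API used earlier, and
   next_bound_assertion stands for the next assertion discovered by
   analyze_unknown_for_objectives. *)
let optimize_protected env =
  let discovered = ref [] in
  try
    SAT.push env 1;
    let rec loop () =
      match next_bound_assertion env with
      | None -> SAT.pop env 1              (* optimum reached, unprotect *)
      | Some a ->
          discovered := a :: !discovered;  (* remember the new assertion *)
          SAT.assume env a;
          SAT.unsat env;                   (* raises Unsat when contradictory *)
          loop ()
    in
    loop ()
  with Unsat ->
    SAT.pop env 1;                         (* no extra push level is left *)
    (* Re-assert the tail of the stack, i.e. everything except the
       contradictory last assertion, with no push/pop around it. *)
    (match !discovered with
     | [] -> ()
     | _contradictory :: tail ->
         List.iter (fun a -> SAT.assume env a) (List.rev tail))
```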

If you have a better solution, I'll be glad to implement it ;)

Halbaroth added the bug label Jan 8, 2024
Halbaroth self-assigned this Jan 8, 2024
Halbaroth (Collaborator, author) commented Jan 8, 2024

Another solution (a bit better in terms of performance, I guess) consists in using a different kind of guard to protect the environments during the exploration for optimization. These guards wouldn't appear in stack_guard. That way, the pop statement behaves correctly after calling SAT.unsat and we can use an internal push/pop mechanism to protect the environments in unsat_rec_prem.

bclement-ocp (Collaborator) commented:

That is an interesting point! It sounds like we indeed need to push and pop things (which we might have needed for #993 anyway -- at least we won't be able to assert at level 0 as we do now). I don't know if it is very useful to allow additional assertions after an optimization without resetting the state; adding assertions to a SAT context is useful because we can continue the search from where we left off (it is also necessary for the interaction with the instantiation mechanism), but with optimization we can't really do that: we have to redo everything anyway.

Hence I'm not sure there is much value in replaying the decision trail, in which case we can simply use an internal guard for optimization and pop it at the end as you suggest.

using a different kind of guards to protect the environments during the exploration for optimization. These guards won't appear in the stack_guard.

Can you expand on that? I don't understand how that would work.

Halbaroth (Collaborator, author) commented:

I agree we don't need to support incremental asserting after optimization.

Can you expand on that? I don't understand how that would work.

After consideration, it seems my idea doesn't work in some specific cases.

Let's imagine we discover a contradiction after adding three new assertions during the optimization: A1, A2, A3.
In terms of the API, my solution looks like this:

let env = SAT.empty () in
(* We assert the original problem. *)
SAT.assume ...
...
(* Beginning the optimization in SatML here. *)
try
  SAT.push env 1;
  SAT.assume env A1;
  SAT.unsat env;
  SAT.assume env A2;
  SAT.unsat env;
  SAT.assume env A3;
  SAT.unsat env; (* <- We got an exception here. *)
  SAT.pop env 1; (* Not called in this example. *)
with Unsat ->
  let mdl = env.last_saved_model in 
  let objs = env.last_saved_objectives in 
  SAT.pop env 1;
  env.last_saved_model <- mdl; 
  env.last_saved_objectives <- objs

The environment env is now sat and contains the right model and objective values. But it's ugly to save the models before popping.

The current implementation of the pop function is:

  let pop env to_pop =
    Util.loop
      ~f:(fun _n () () ->
          SAT.pop env.satml;
          let guard_to_neg = Stack.pop env.guards.stack_guard in
          let inst = Inst.pop ~guard:guard_to_neg env.inst in
          assert (not (Stack.is_empty env.guards.stack_guard));
          let b = Stack.top env.guards.stack_guard in
          Steps.pop_steps ();
          env.last_saved_model <- None;
          env.last_saved_objectives <- None;
          env.inst <- inst;
          env.guards.current_guard <- b
        )
      ~max:to_pop
      ~elt:()
      ~init:()

In fact, I don't see any good reason to remove the model in this function. If we use pop, we only remove constraints from the problem, which means the saved model is still a model of the problem after popping. The same goes for objective values. The only issue would come from the declare statement, but we could filter the model in get-model to remove undeclared symbols from the saved model.
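Concretely, that would amount to dropping the two resets from the loop body. A sketch edit of the snippet above, untested against the codebase:

```ocaml
(* Sketch, untested: identical to the pop above except that the two lines
   resetting env.last_saved_model and env.last_saved_objectives are
   removed, so a model saved before popping survives the pop. *)
let pop env to_pop =
  Util.loop
    ~f:(fun _n () () ->
        SAT.pop env.satml;
        let guard_to_neg = Stack.pop env.guards.stack_guard in
        let inst = Inst.pop ~guard:guard_to_neg env.inst in
        assert (not (Stack.is_empty env.guards.stack_guard));
        let b = Stack.top env.guards.stack_guard in
        Steps.pop_steps ();
        (* last_saved_model / last_saved_objectives deliberately kept *)
        env.inst <- inst;
        env.guards.current_guard <- b)
    ~max:to_pop
    ~elt:()
    ~init:()
```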

bclement-ocp (Collaborator) commented:

After consideration, it seems my idea doesn't work in some specific cases.

Are you talking about the example above with A1, A2, A3? It seems to me that this should work.

it's ugly to save the models before popping.

Not sure if it is that ugly, but it's not sufficient in the presence of (get-assertions) because the boolean_model will be wrong (we need to restore the decisions). Although to be fair I think that this is true for the whole optimization infrastructure.

(Now that I think about it — I believe the needed information should be guaranteed to be in the model after #1019 and #957 and we could stop using the boolean_model for (get-assertions))

The same goes for objective values.

Is this really true?

(declare-const x Int)
(maximize x)
(push 1)
(assert (< x 10))
(check-sat)
(pop 1)
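[Editor's note: a toy OCaml illustration of this counterexample, with an invented max_under helper that maximizes over a finite range. Under (< x 10) the maximum of x is 9, but once the pop removes that assertion x is no longer bounded by 10, so a cached objective value of 9 would be stale.]

```ocaml
(* Toy illustration only: maximize x over the bounded domain [-100, 100]
   under a list of predicate assertions. *)
let max_under assertions =
  List.fold_left
    (fun acc x -> if List.for_all (fun p -> p x) assertions then Some x else acc)
    None
    (List.init 201 (fun i -> i - 100))

let () =
  (* Before (pop 1): maximum of x under (< x 10) is 9. *)
  assert (max_under [ (fun x -> x < 10) ] = Some 9);
  (* After (pop 1): the constraint is gone; in this bounded toy the
     maximum becomes 100, and in the real unbounded problem there is
     none, so the cached value 9 is stale either way. *)
  assert (max_under [] = Some 100)
```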

Halbaroth (Collaborator, author) commented Jan 8, 2024

Are you talking about the example above with A1, A2, A3? It seems to me that this should work.

No, I was talking about another idea I had in mind. The example above should indeed work.

Not sure if it is that ugly, but it's not sufficient in the presence of (get-assertions) because the boolean_model will be wrong (we need to restore the decisions). Although to be fair I think that this is true for the whole optimization infrastructure.

Really? I thought the function Satml.pop restored the previous state of the trail?

Halbaroth (Collaborator, author) commented:

Is this really true?

Ok, we have to optimize from scratch after each check-sat.

bclement-ocp (Collaborator) commented:

Really? I thought the function Satml.pop restored the previous state of the trail?

It does restore the state of the trail prior to the push, but prior to the push we did not have a model!

Halbaroth (Collaborator, author) commented:

Uhm, I see :/
