
MDD decomposition for Table constraint #891

Draft
WoutPiessens wants to merge 2 commits into master from table_mdd_encoding

Conversation

@WoutPiessens
Collaborator

The MDD decomposition for the Table global constraint has been added.

At the moment, the decomposition is only applied to top-level Table constraints; it should, however, also be valid for positively reified Table constraints.

@IgnaceBleukx
Collaborator

So following offline discussions, it is probably worthwhile splitting up the logic of this PR by implementing an "MDD" global constraint, with the same interface as the one in cpmpy/tools/xcsp3/globals.py.

Then, the Table decomposition would consist of constructing the set of transitions to pass to the MDD global; the `decompose_positive()` of MDD would then be the one with the flow constraints, similar to what is in the current PR.
I am not sure where the compression phase should go: it could either be a pre-processing step when decomposing the MDD, or happen when constructing the MDD in the first place (i.e., in the `decompose_positive()` function of Table).

What do you think?

I have not gone through the full PR yet, but one thing I noticed is that you construct the direct encodings of the integer variables manually. We have optimizations in CPMpy that do this automatically in a later phase of the transformation stack.
So, to construct the flow constraints, you can write `cp.sum([var == val for var, val in incoming]) == 1` or something instead : )

@WoutPiessens
Collaborator Author

Yes, I agree that implementing an MDD global constraint is the best way to go; it will help make the code more modular. Now that I think about it, I believe it is preferable to compress the MDD as a pre-processing step in MDD's `decompose_positive()`, since you would ideally want to do this for any MDD to obtain a stronger decomposition, not just for MDDs resulting from a Table constraint. Moreover, the hash table (used for compressing the MDD efficiently) can then be constructed only at that point.
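To make the hash-table idea concrete, here is a hedged, self-contained sketch (names and data layout are assumptions, not the PR's code) of bottom-up MDD compression: nodes in the same layer are merged whenever their outgoing (value → child) transition maps coincide, which a hash table detects in one pass per layer.

```python
def compress_mdd(layers):
    """Merge equivalent nodes of a layered MDD, bottom-up.

    layers: list of dicts {node_id: {value: child_id}}; edges in the
    last layer point to an implicit terminal. Mutates and returns layers.
    """
    remap = {}  # node id in the layer below -> its canonical representative
    for layer in reversed(layers):
        signature = {}  # frozen transition map -> canonical node id
        new_remap = {}
        for node, edges in layer.items():
            # key the node on its transitions, with children canonicalized
            key = frozenset((v, remap.get(c, c)) for v, c in edges.items())
            canon = signature.setdefault(key, node)
            new_remap[node] = canon
        for node in list(layer):
            if new_remap[node] != node:
                del layer[node]  # duplicate: merged into its representative
            else:
                # keep the node, but redirect edges to canonical children
                layer[node] = {v: remap.get(c, c) for v, c in layer[node].items()}
        remap = new_remap
    return layers
```

For example, two nodes whose only transition is value 0 to the terminal get merged, and the parent's edges are redirected to the surviving node. A smaller MDD means fewer edge variables and flow constraints in the decomposition.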

For the variables in the flow constraints, the issue is that they are often not direct encoding variables but edge variables, which are linked to the direct encoding variables through constraints such as `(var == val) == e_1 + e_2 + ...` (since one direct encoding variable can be linked to more than one edge variable). Therefore I believe there is no other option but to construct them manually. The direct encoding variables in `get_corresponding_bv()` are written as `(var == val)`, so they do make use of CPMpy's optimizations.
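To illustrate why one channeling constraint can gather several edge variables (a hedged sketch with hypothetical names, not the PR's code): grouping the edges of one MDD layer by their (variable, value) label gives, per group, the edge variables that sum up in one `(var == val) == e_1 + e_2 + ...` constraint.

```python
from collections import defaultdict

# Hypothetical edge list of one MDD layer: (edge_id, var_index, value).
# Two distinct edges ("e1", "e2") carry the same label (x[0] == 1).
edges = [("e1", 0, 1), ("e2", 0, 1), ("e3", 0, 2)]

# One channeling constraint per (var, value) label:
#   (x[var] == value) == sum of the group's edge variables
groups = defaultdict(list)
for eid, var, val in edges:
    groups[(var, val)].append(eid)
```

Since exactly one unit of flow passes through the layer, at most one edge variable per group is 1, so equating the Boolean with the sum is sound.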

@WoutPiessens
Collaborator Author

We will split this PR into three separate PRs:

  1. Add an MDD global constraint
  2. Add the Table decomposition using MDD
  3. Work on alternative decompositions for global constraints that are only valid when the constraint is positively reified (or top-level)

@WoutPiessens WoutPiessens marked this pull request as draft April 13, 2026 12:17
@hbierlee
Contributor

One thing I thought about today:

As I said offline, my view is that adding the MDD as a new (public) global is not needed for the sake of modularity (you can achieve that with a private MDD class as well, e.g. an inner class of Table's decompose, to make it really private). The real reason to make MDD a public global constraint should be that it is useful to users (e.g. for modelling, or because it is supported by solvers, and so on). A public MDD global does come with more obligations, though: maintenance, better documentation, and (here is the new thing I thought about) the requirement to make the flow decomposition fully reifiable as well. So just something to be aware of :)
