
7. Conclusion

First of all, we observe that exceptions can sometimes heavily penalize execution. Consider, for example, an empty (unsatisfiable) system of hundreds of constraints for which the 32-bit implementation of the satisfiability test raises an overflow exception. Since its emptiness is not detected, this system keeps expanding during our analyses into a sequence of much larger systems to manipulate. If the 64-bit implementation can decide this system, i.e. without raising an exception, then instead of manipulating systems of hundreds of constraints we only have to deal with an empty system with no constraints, which leads to a much faster execution.
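As an illustration of the mechanism only (this is a hypothetical C++ sketch, not the actual implementation), a checked 32-bit multiplication can raise an overflow exception, as the 32-bit satisfiability test does, and the caller can then retry the computation in 64-bit:

    #include <cstdint>
    #include <iostream>
    #include <stdexcept>

    // Hypothetical sketch: checked 32-bit coefficient multiplication that
    // throws on overflow, with a 64-bit fallback used by the caller.
    int32_t mul32_checked(int32_t a, int32_t b) {
        int32_t r;
        if (__builtin_mul_overflow(a, b, &r))          // GCC/Clang builtin
            throw std::overflow_error("32-bit coefficient overflow");
        return r;
    }

    int main() {
        int32_t a = 100000, b = 50000;                 // product exceeds INT32_MAX
        try {
            std::cout << mul32_checked(a, b) << '\n';
        } catch (const std::overflow_error&) {
            // Retry in 64-bit instead of keeping the undecided system around.
            std::cout << static_cast<int64_t>(a) * b << " (64-bit)\n";
        }
    }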

We can see that 64-bit computation is more accurate than 32-bit computation at the cost of, at most, a slow-down of half the execution time. Experiments show that even with 64-bit computation a significant number of overflow exceptions are still encountered; nevertheless, 64-bit computation remains clearly preferable.

In general, JANUS 64-bit performs better than $C^3$ Simplex 64-bit, Fourier-Motzkin 64-bit and the Double Description Method 64-bit. However, the fact that $C^3$ Fourier-Motzkin 64-bit and the Double Description method can solve a number of constraint systems that JANUS 64-bit cannot (see 6_subsec:satisfiability_exceptions) raises a question: is it worth using them in such rare cases?

We remark that $C^3$ Fourier-Motzkin and the Double Description method fail mostly because of timeouts. One may therefore consider them as a last resort when a definitive answer is absolutely required. However, we cannot guarantee that these algorithms terminate, because memory space is limited.

For the Simplex algorithm, we can see that the magnitude of the coefficients is the main problem, not the explosion of the system size as for the Fourier-Motzkin algorithm. A proper treatment of large numbers might therefore resolve these cases; for example, we might use the GNU multi-precision library (GMP) instead of 64-bit computation.
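As a minimal sketch of this idea (the surrounding scenario is illustrative, but mpz_class and mpq_class are the real GMP C++ classes), arbitrary-precision integers and exact rationals never overflow, at the price of slower arithmetic:

    #include <gmpxx.h>     // GMP C++ interface; link with -lgmpxx -lgmp
    #include <iostream>

    int main() {
        // Integer coefficients of this size overflow 64-bit machine words when
        // multiplied, but mpz_class grows as needed, so no exception can occur.
        mpz_class a("123456789012345678901234567890");
        mpz_class b("987654321098765432109876543210");
        std::cout << a * b << '\n';

        // Exact rational arithmetic, e.g. for a Simplex tableau update.
        mpq_class pivot(7, 3);  pivot.canonicalize();
        mpq_class entry(5, 11); entry.canonicalize();
        std::cout << entry / pivot << '\n';        // stays exact: 15/77
    }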

Degenerate polyhedra are another problem for the Simplex method, so pre-processing phases should be studied. We could also use approaches proposed in [Mer05], such as Cartesian factorization or dimension-space mapping. For a large constraint system that none of the above implementations can deal with, we do not know whether a polyhedron decomposition can improve the situation or not ([Mer05], pages 85 to 91).
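To illustrate the intuition behind Cartesian factorization (this is our own minimal sketch, not the algorithm of [Mer05]): variables that never occur together in a constraint can be separated into independent blocks, for instance with a union-find structure, and each block's lower-dimensional sub-polyhedron handled separately:

    #include <iostream>
    #include <numeric>
    #include <vector>

    // Each constraint is represented here only by the set of variables it mentions.
    struct UnionFind {
        std::vector<int> parent;
        explicit UnionFind(int n) : parent(n) { std::iota(parent.begin(), parent.end(), 0); }
        int find(int x) { return parent[x] == x ? x : parent[x] = find(parent[x]); }
        void unite(int a, int b) { parent[find(a)] = find(b); }
    };

    int main() {
        int dims = 5;                                   // variables x0 .. x4
        std::vector<std::vector<int>> constraints = {
            {0, 1},                                     // e.g. x0 + x1 <= 3
            {1, 2},                                     // e.g. x1 - x2 >= 0
            {3, 4},                                     // e.g. x3 + 2*x4 <= 7
        };

        UnionFind uf(dims);
        for (const auto& c : constraints)
            for (std::size_t i = 1; i < c.size(); ++i)
                uf.unite(c[0], c[i]);

        // {x0, x1, x2} and {x3, x4} fall into different blocks, so this
        // 5-dimensional system factors into a 3-dimensional and a 2-dimensional one.
        for (int v = 0; v < dims; ++v)
            std::cout << "x" << v << " -> block " << uf.find(v) << '\n';
    }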

The poor performance of the satisfiability test based on the Double Description method on the sampled databases and on the PerfectClub filtered database suggests that the New POLKA, POLYLIB and PPL libraries, which rely on this approach, are likely to encounter difficulties when dealing with large constraint systems.

For the SPEC95 filtered database, unfortunately, we could not build any heuristic that takes advantage of the Double Description method. We are aware of the size-estimating algorithm implemented in the CDD library [FUK], but it is not cheap enough to be used as a heuristic.

Finally, our experimental results have revealed an unexpected fact: the most relevant criterion is the number of dimensions, not the number of constraints. We have not found any criterion more predictive than the dimension of the space. Yet in most cases even the dimension does not provide a good heuristic, i.e. we cannot find a cut that separates the red zones from the green zones in the histograms and thereby obtains the best performance of the two implementations.
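For illustration only, such a cut would amount to a dispatcher of the following form; the threshold below is purely hypothetical, precisely because our histograms do not exhibit a value that reliably separates the red and green zones:

    #include <iostream>

    enum class Solver { Janus, DoubleDescription };

    // Hypothetical dimension-based dispatcher; the cut value is invented.
    Solver chooseSolver(int dimensions) {
        const int kDimensionCut = 16;              // illustrative value only
        return dimensions < kDimensionCut ? Solver::DoubleDescription : Solver::Janus;
    }

    int main() {
        std::cout << (chooseSolver(8)  == Solver::DoubleDescription) << '\n';  // 1
        std::cout << (chooseSolver(40) == Solver::Janus) << '\n';              // 1
    }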

