In this chapter, we have presented the first large-scale, real-life experimental evaluations of key polyhedral operators. These evaluations required considerable work because of the diversity of implementations, and because numerous missing implementations had to be coded by the author.
Our benchmark system differs from other benchmarks in that, besides performance and stability analyses, i.e., analyses of the ability to cope with computational problems, we have integrated polyhedral characteristics into our evaluations.
Moreover, since our set of tests is generated from PIPS [PIPb,IJT91], a static analyzer run on standard benchmark suites containing large applications (the PerfectClub and SPEC95 benchmarks), the results provide richer information than previously published experimental results.
Indeed, pre-existing evaluations are unsatisfactory for several reasons: our bibliographic study revealed that no mechanism yet exists to evaluate these works effectively, especially in the domain of program analysis and transformation with real-life examples. Published evaluations are based on at most one hundred problems, mostly theoretical, without analyses of quantity, of criteria, of exceptions (cases where algorithms fail because of resource limits), etc.
An example of existing comparisons can be found in [SOG96], where JANUS is compared with the Omega test [PUG91]; the results revealed only performance-related issues concerning the nightmare problem, because the OMEGA tool includes many overhead factors that could not be reduced by the author of [SOG96].
There is also a set of tests provided with CDD [FUK] and LRS [Avi], later used by the PPL developers to evaluate the PPL library's performance with respect to other libraries. These evaluations are based on the vertex/facet enumeration problem with a set of fewer than two hundred hand-made inputs, which are supported by the libraries POLYLIB [LOE,Wil], New POLKA [JEA,JEA00], LINEAR [PIPa,CAI00], CDD [FUK] and LRS [Avi]. Although these inputs present varying degrees of complexity to these libraries, they are far from satisfactory, given the differences among applications.
Apart from clarifying whether CPU efficiency, memory efficiency, or both are the intended measures of interest, our benchmark offers problem-related analyses (such as polyhedral size parameters and their origin), stability comparisons, checking for incoherent results, comparisons of computational precision, etc. These features are not available elsewhere.
The average execution time and the ability to deal with exceptions were of high interest. Furthermore, these results gave us an idea of the impact of choices made at higher levels, and of the interactions within a sequence of operations. For instance, a suitable projection approximation can reduce the polyhedral size of several subsequent satisfiability tests.
The results obtained in the satisfiability part proved useful: an external algorithm, JANUS, was extended to 64-bit precision and integrated by the author into our analysis and transformation tool PIPS, because of its demonstrated superior performance.
The dual conversion, or double description, approach is interesting since this operator can be used to implement projection, minimization and, above all, convex hull computation. We have implemented the satisfiability test, projection and minimization using this approach, in order to compare it with the direct constraint manipulation approach.
The decomposition of polyhedra by Corinne ANCOURT and Fabien COELHO can be used to speed up the computation, as shown in the convex hull part. This technique, or the strongly related one in [Mer05], can be applied to the other operators, including the satisfiability test.
Our evaluations first confirm the excellence of JANUS, compared with all other implementations and algorithms, for integer satisfiability, as suggested in [SOG96].
They also contain new comparisons of implementations that had not been considered before, such as the Fourier-Motzkin method, the implementation of the minimization operator using direct constraint manipulation, etc.
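As a reminder of what the Fourier-Motzkin method computes, here is a minimal sketch in Python, independent of any of the compared libraries; the constraint encoding (a coefficient list and a bound per inequality) is ours, chosen for illustration only:

```python
def fm_eliminate(constraints, var):
    """Project out variable `var` from a system of inequalities.

    Each constraint is a pair (coeffs, bound) encoding
    sum(coeffs[i] * x[i]) <= bound, with integer coefficients.
    Returns the projected system (Fourier-Motzkin elimination).
    """
    lowers = [(c, b) for c, b in constraints if c[var] < 0]
    uppers = [(c, b) for c, b in constraints if c[var] > 0]
    result = [(c, b) for c, b in constraints if c[var] == 0]
    # Pair every lower bound with every upper bound; this quadratic
    # blow-up is the classical weakness of the method.
    for lc, lb in lowers:
        for uc, ub in uppers:
            m_l, m_u = uc[var], -lc[var]  # positive multipliers
            coeffs = [m_l * a + m_u * c for a, c in zip(lc, uc)]
            result.append((coeffs, m_l * lb + m_u * ub))
    return result

# Projecting y out of {x - y <= 0, y <= 3, -x <= 0} (x = var 0, y = var 1)
system = [([1, -1], 0), ([0, 1], 3), ([-1, 0], 0)]
print(fm_eliminate(system, 1))  # [([-1, 0], 0), ([1, 0], 3)], i.e. 0 <= x <= 3
```

Each lower/upper pair is combined with positive multipliers so that the eliminated variable's coefficient cancels exactly.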
Then, they provide information on the connection between polyhedral problems, which are the inputs of polyhedral libraries, and the analyses a static analyzer performs on programs.
Furthermore, our framework studies the direct impact of using 64-bit instead of 32-bit arithmetic in polyhedral libraries, which was not possible with previous work.
Our experimental results show, unexpectedly, that the best predictive criterion is the number of dimensions, not the number of constraints. No more predictive criterion has been found. When no clear winner emerges, new heuristics can be constructed from polyhedral characteristics, as described in the dual conversion operator part.
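Such a heuristic could take the shape sketched below. This is purely illustrative: the solver names are stand-ins for any two compared implementations, and the threshold is a hypothetical placeholder, not a value measured in this work.

```python
# Hypothetical dispatch heuristic built on the most predictive
# criterion found in our measurements: the number of dimensions.
# DIMENSION_THRESHOLD is an illustrative placeholder value.
DIMENSION_THRESHOLD = 11

def pick_solver(n_dimensions, n_constraints):
    # The constraint count is received but deliberately unused:
    # it proved less predictive than the dimension count.
    if n_dimensions <= DIMENSION_THRESHOLD:
        return "solver_a"
    return "solver_b"

print(pick_solver(3, 100))   # solver_a
print(pick_solver(50, 5))    # solver_b
```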
We have observed an unexpected sensitivity to experimental data sets, for example in the comparison of the number of exceptions between JANUS and the Simplex.
We have also seen the limited interest of hard cases, where the filtered databases provide poor information, except for the satisfiability test.
Bug detection and non-regression testing can be used to detect, for example, differences between integer and rational answers. Such differences, although rare, have been detected between JANUS and the Simplex using our benchmark.
Our benchmarking system is of course not complete, and much work remains. For example, there are no experimental results related to backup algorithms that deal with exceptions or make approximations.
We intend to add more operators and more libraries to our evaluations. The design and implementation of new algorithms, such as approximate algorithms for polyhedral operators or improvements using the decomposition of polyhedra, are also of interest.
Indeed, the Cartesian factorization proposed in [HMPV03], as well as the decomposition by inclusion test, are prime candidates for evaluation.
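The core idea behind such factorizations is that constraints sharing no variable can be processed independently. The following sketch, our own illustration rather than the algorithm of [HMPV03], groups constraints into connected components of the "shares a variable" relation with a union-find structure:

```python
def factorize(constraints):
    """Group constraints into independent factors.

    `constraints` is a list of sets, each giving the variable
    indices occurring in one constraint. Two constraints belong
    to the same factor when they are linked, directly or
    transitively, by a shared variable.
    """
    parent = {}

    def find(v):
        while parent.setdefault(v, v) != v:
            parent[v] = parent[parent[v]]  # path halving
            v = parent[v]
        return v

    def union(a, b):
        parent[find(a)] = find(b)

    for vars_used in constraints:
        vs = sorted(vars_used)
        for v in vs[1:]:
            union(vs[0], v)

    groups = {}
    for i, vars_used in enumerate(constraints):
        groups.setdefault(find(min(vars_used)), []).append(i)
    return sorted(groups.values())

# {x0 <= 1, x0 + x1 <= 2} is independent of {x2 <= 0}: two factors.
print(factorize([{0}, {0, 1}, {2}]))  # [[0, 1], [2]]
```

Each factor can then be fed to an operator separately, which is where the speed-up observed with decomposition techniques comes from.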
Regarding the POLYBENCH framework, we need to improve the sampling, as well as the repetition rule for small execution times.
We also plan to add memory usage comparisons to our measurements, and to try out other criteria in order to support a heuristic-based approach.
A memoization scheme to avoid repeated executions of the same algorithm, as well as a resume capability, remain to be implemented.