
3. Related Work

Much research has concentrated on possible improvements to these algorithms, based on their mathematical background. We have compiled a list of implementations of these operators, including ``complete'' polyhedral libraries such as POLYLIB [LOE,Wil], New POLKA [JEA,JEA00] and LINEAR $C^3$ [PIPa,CAI00], as well as libraries that focus on particular operators, such as CDD [FUK], LRS [Avi] and JANUS [SOG,SOG96]. For example, JANUS deals with the integer satisfiability problem, while the convex hull of two or more polyhedra is perhaps the most discussed polyhedral operator in the literature [BAY].

However, our bibliographic study shows that no mechanism yet exists to effectively evaluate existing polyhedral libraries, especially in the domain of program analysis and transformation on real-life examples. Published evaluations, e.g. [SOG96] and [Ba], are based on at most two hundred instances, mostly theoretical, and provide no analysis of polyhedral characteristics (e.g. dimension of the space, number of constraints) or of exceptions, i.e. cases where an algorithm fails because of resource limits. Beyond CPU and memory efficiency, the intended measures of interest, we offer problem-related analyses, such as polyhedral criteria and their origin, stability comparisons (the ability to cope with difficult cases), checks for incoherent results and comparisons of computing precision. Our comparisons cover not only the satisfiability test and dual conversion operators but also other important polyhedral operators, such as projection, minimization and convex hull.
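To make these criteria concrete, the following C sketch shows one way a benchmarking harness could record them for a single operator call. The structure fields, the run_operator stub and the sanity-check comment are assumptions for illustration only; they are not the API of any of the libraries cited above.

#include <stdio.h>
#include <time.h>

/* Record of the problem-related criteria discussed above for one
 * operator call.  All names are illustrative; they do not belong to
 * any of the cited libraries. */
typedef struct {
    int dimension;        /* dimension of the ambient space              */
    int num_constraints;  /* number of constraints in the input system   */
    double cpu_seconds;   /* CPU time spent in the operator              */
    int failed;           /* non-zero if a resource limit was exceeded   */
    int incoherent;       /* non-zero if the result fails a sanity check */
} bench_record;

/* Stub standing in for the operator under test (convex hull,
 * projection, ...); a real harness would call a library here. */
static int run_operator(const void *input, void **output)
{
    (void)input;
    *output = NULL;
    return 0;   /* 0 = success, non-zero = failure */
}

static bench_record benchmark_one(const void *input, int dim, int ncons)
{
    bench_record r = { dim, ncons, 0.0, 0, 0 };
    void *out = NULL;

    clock_t start = clock();
    r.failed = (run_operator(input, &out) != 0);
    r.cpu_seconds = (double)(clock() - start) / CLOCKS_PER_SEC;
    /* A real harness would also cross-check 'out' against a second
     * library to flag incoherent results. */
    return r;
}

int main(void)
{
    bench_record r = benchmark_one(NULL, 4, 12);
    printf("dim=%d constraints=%d time=%.6fs failed=%d\n",
           r.dimension, r.num_constraints, r.cpu_seconds, r.failed);
    return 0;
}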

An example of an earlier comparison can be found in [SOG96], where JANUS is compared with the Omega test [PUG91]; the results reveal performance issues on only one problem, the nightmare problem, due to overhead factors in the Omega tool.

There is also a set of tests provided with CDD [FUK] and LRS [Avi], later used by the PPL developers to evaluate the PPL library's performance against other libraries [Ba]. These evaluations are based on the vertex/facet enumeration problem with fewer than two hundred inputs, which are supported by POLYLIB [LOE,Wil], New POLKA [JEA,JEA00], LINEAR $C^3$ [PIPa,CAI00], CDD [FUK] and LRS [Avi]. Although these inputs exercise operations of varied complexity in these programs, they are far from sufficient, given the differences among applications. It is known that, even for the same application, large variations may be observed across inputs. Thus the performance of an application can only be measured with the widest variety of inputs we can assemble; the results can then be used to choose the library that gives the best overall performance, as sketched below.
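As a rough illustration of how such overall results could be aggregated, the C sketch below ranks libraries first by the number of failed inputs and then by total CPU time. The structure, the scoring rule and the numbers are hypothetical and are not taken from [Ba] or any other cited evaluation.

#include <stdio.h>

/* Per-library summary over a common input set.  The scoring rule
 * (fewest failures first, then lowest total CPU time) and the numbers
 * below are assumptions for illustration only. */
typedef struct {
    const char *name;
    int inputs_run;        /* number of inputs attempted                    */
    int failures;          /* inputs where the library hit a resource limit */
    double total_seconds;  /* CPU time summed over the solved inputs        */
} library_summary;

static int better(const library_summary *a, const library_summary *b)
{
    if (a->failures != b->failures)
        return a->failures < b->failures;
    return a->total_seconds < b->total_seconds;
}

int main(void)
{
    library_summary libs[] = {
        { "library A", 200, 3, 152.4 },   /* illustrative numbers */
        { "library B", 200, 0, 187.9 },
    };
    size_t n = sizeof libs / sizeof libs[0];
    const library_summary *best = &libs[0];

    for (size_t i = 1; i < n; i++)
        if (better(&libs[i], best))
            best = &libs[i];

    printf("overall best: %s (%d failures, %.1f s total)\n",
           best->name, best->failures, best->total_seconds);
    return 0;
}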

