PIPS: Automatic Parallelizer

The PIPS Workbench Project

Centre de Recherche en Informatique, Maths & Systems, MINES ParisTech

The new web site is at: pips4u.org

What's new?

  • Basic C support
  • Parallelization for the KAAPI runtime (coming soon)
  • Code generation for parallel heterogeneous architectures, with hardware synthesis for the PHRASE and CoMap projects
  • PIPS is available through Subversion (SVN)
  • Multimedia instruction set (MMX-like) code generation with SAC (SIMD Architecture Compiler)
  • Sharpened semantic analysis of control flow graphs
  • Semantic analysis for non-integer types: booleans, floats and strings
  • Array Bound Checking with minimal overhead
  • Array Alias analysis
  • Variable Freeze for code specialization
  • Perfect Club benchmarks: adm, arc2d, bdna, dyfesm, flo52, mdg, ocean, qcd, smsi, spec77, spice, track, trdf
  • Spec-CFP95 benchmarks: applu, apsi, fpppp, hydro2d, mgrid, su2cor, swim, tomcatv, turb3d, wave5
  • More than 700 modules (150 K lines) analyzed simultaneously!

Last update: December 23, 2010. URL: http://www.cri.mines-paristech.fr/pips E-mail: pips@cri.mines-paristech.fr


The goal of the PIPS project is to develop a free, open and extensible workbench for automatically analyzing and transforming scientific and signal processing applications. The PIPS workbench is especially relevant for people interested in whole-program compilation, reverse engineering, program verification, and source-to-source program optimization and parallelization. Its interprocedural analyses help with program understanding and with checking the legality and impact of automatic program transformations. These transformations are used to reduce execution cost and latency, as well as the cost of optimization itself.

Techniques developed for PIPS can be reused for signal processing code written in C, where pointers, data structures and dynamic allocation are used sparingly, and for Java code optimization, where array bound checks must be eliminated conservatively.

PIPS is a free, open workbench which has been used to develop new analyses and program transformations by several teams from CEA-DAM, Southampton University, SRU, ENST Bretagne and ENS Cachan. These new developments benefit at no cost from the general infrastructure and from already available analyses and transformations. PIPS has also been used to develop an HPF prototype compiler and to study optimizations for HPF.

PIPS is not primarily designed to support compiler back-end research, like SUIF, but it is much better suited as a Fortran source-to-source tool because the data types (e.g. complex), the code structures, the comments and, to some extent, the initial statement numbers are preserved.

What is in PIPS?

Inputs:

  • C
  • Fortran 77
  • Fortran 95
  • Fortran 77 + High Performance Fortran directives

Analyses:

  • Array Privatization
  • Array Section Privatization
  • Array Element Regions
  • Call Graph
  • Control Flow Graph
  • Continuation Conditions
  • Dependences
  • Memory Effects
  • Preconditions
  • Program Complexity
  • Reduction Detection
  • Scalar Variable Privatization
  • Transformers
  • Use-Def Chains

Outputs:

  • Call Graph
  • Control Flow Graph
  • Dependence Graph
  • Fortran 77 (after transformation, optimization, restructuring)
  • Fortran 77 + commented information (analyses)
  • Fortran 77 + doall (parallel loops)
  • Fortran 77 + CRAY directives
  • Fortran 77 + OpenMP
  • Fortran 77 + PVM
  • Fortran 77 + MPI
  • C
  • SmallTalk

Transformations:

  • Atomization
  • Cloning
  • Control Restructuring
  • Dead Code Elimination
  • Useless Definition Elimination
  • Declaration Cleaning
  • Coarse Grain Parallelization
  • Expression Optimizations
  • Forward Substitution
  • Loop Distribution
  • Loop Interchange
  • Loop Normalization
  • Loop Reductions
  • Loop Unrolling
  • Partial Evaluation
  • Parallelization
  • Strip Mining


PIPS was initially designed as an automatic parallelizer for scientific programs. It takes Fortran 77 code as input and emphasizes interprocedural techniques for program analysis. PIPS-specific interprocedural analyses are precondition and array region computations. PIPS automatically computes affine preconditions for integer scalar variables, which are much more powerful than standard constant propagation or forward substitution, as shown below:

      I = 1
      N = 10
      J = 3
C P(I,J,N) {I==1, J==3, N==10}
      DO WHILE (I.LT.N)
C P(I,J,N) {N==10, I<=9, 5<=2I+J, J<=3}
         PRINT *, I
         IF (T(I).GT.0.) THEN
            J = J-2
         ENDIF
         I = I+1
      ENDDO
C P(I,J,N) {N==10, I==10, 5<=2I+J, J<=3}
      PRINT *, I, J
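These preconditions can be checked concretely. The following Python sketch (illustrative only, not PIPS output) simulates the loop above and asserts the inferred loop-entry and loop-exit predicates on every path:

```python
import random

def check_preconditions(t):
    """Simulate the Fortran loop and assert PIPS's inferred preconditions."""
    i, n, j = 1, 10, 3
    while i < n:
        # Loop-entry precondition: P(I,J,N) {N==10, I<=9, 5<=2I+J, J<=3}
        assert n == 10 and i <= 9 and 5 <= 2 * i + j and j <= 3
        if t[i] > 0.0:
            j = j - 2
        i = i + 1
    # Loop-exit precondition: P(I,J,N) {N==10, I==10, 5<=2I+J, J<=3}
    assert n == 10 and i == 10 and 5 <= 2 * i + j and j <= 3
    return i, j

# The predicates hold whatever the (unanalyzed) array T contains.
random.seed(0)
for _ in range(100):
    check_preconditions([random.uniform(-1.0, 1.0) for _ in range(10)])
```

The invariant 5<=2I+J captures the coupling between the two variables: each iteration adds 2 to 2I and removes at most 2 from J, something constant propagation alone cannot express.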

PIPS also computes several kinds of polyhedral array regions (READ, WRITE, IN and OUT), which are used for array and partial-array interprocedural privatization as well as for interprocedural parallelization. Here, for instance, array TI is automatically privatized in spite of its initialization through the call to PVNMUT:

      DO K = K1, K2
         DO J = J1, JA
            CALL PVNMUT(TI)
            T(J,K,NN) = S*TI(1)
            T(J,K,NN+1) = S*TI(2)
            T(J,K,NN+2) = S*TI(3)
         ENDDO
      ENDDO
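The legality argument can be illustrated with a small Python sketch (hypothetical code, with a stand-in for PVNMUT): every element of TI read in an iteration is first written in that same iteration, so giving each iteration its own private copy of TI cannot change the result, and the loop nest then carries no dependence through TI:

```python
def pvnmut(ti):
    # Stand-in for the real PVNMUT: defines TI(1..3) before any use.
    for m in range(3):
        ti[m] = m + 1.0

def loop_serial(k_range, j_range, s):
    t = {}
    ti = [0.0] * 3          # one shared TI, as in the original source
    for k in k_range:
        for j in j_range:
            pvnmut(ti)
            for m in range(3):
                t[(j, k, m)] = s * ti[m]
    return t

def loop_privatized(k_range, j_range, s):
    t = {}
    for k in k_range:
        for j in j_range:
            ti = [0.0] * 3  # private copy per iteration: no carried dependence
            pvnmut(ti)
            for m in range(3):
                t[(j, k, m)] = s * ti[m]
    return t

# Privatization preserves the result, so the iterations may run in parallel.
assert loop_serial(range(2), range(3), 2.0) == loop_privatized(range(2), range(3), 2.0)
```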

As mentioned above, PIPS can also be used as a reverse-engineering tool. Region analyses provide useful summaries of procedure effects, while precondition-based partial evaluation and dead-code elimination reduce code size. Cloning has also been used successfully to split a routine implementing several functionalities into a set of routines, each implementing exactly one functionality. Automatic cleaning of declarations is useful when commons are over-declared through INCLUDE statements.

Analyses and Transformations

All analyses and transformations are listed in the documentation of the PIPS API and of the PIPS interfaces. Static analyses compute call graphs, memory effects, use-def chains, dependence graphs, interprocedural checks, transformers, preconditions, continuation conditions, complexity estimates, reduction detection, array regions (READ, WRITE, IN and OUT, may or exact), aliases and complementary sections. The results of analyses can be displayed with the source code, with the call graph or with an interprocedural control flow graph, either as text or as graphs; the dependence graphs can likewise be displayed as text or as graphs.

Several parallelization algorithms are available, including Allen & Kennedy's, as well as automatic code distribution, including a prototype HPF compiler, hpfc. Different views of the parallel outputs are available: HPF, OpenMP, Fortran 90, Cray Fortran. Furthermore, Prof. Feautrier's polyhedral method is also implemented.

Program transformations include loop distribution, scalar and array privatization, atomization (reduction of statements to three-address form), loop unrolling (partial and full), strip mining, loop interchange, partial evaluation, dead-code elimination, use-def elimination, control restructuring, loop normalization, declaration cleaning, cloning, forward substitution and expression optimizations.
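As an illustration of one of these transformations, here is a strip-mining sketch in Python (the loop body and strip size are made up for the example; PIPS itself performs this rewriting at the Fortran or C source level):

```python
N = 103
BS = 16  # strip (tile) size; hypothetical, not a PIPS default

def saxpy(y, x, a):
    # Original single loop.
    for i in range(N):
        y[i] += a * x[i]

def saxpy_strip_mined(y, x, a):
    # After strip mining: an outer loop steps over strips of BS iterations,
    # and an inner loop runs to the strip end (clamped for the last strip).
    for i0 in range(0, N, BS):
        for i in range(i0, min(i0 + BS, N)):
            y[i] += a * x[i]

x = [float(i) for i in range(N)]
y1 = [1.0] * N
y2 = [1.0] * N
saxpy(y1, x, 2.0)
saxpy_strip_mined(y2, x, 2.0)
assert y1 == y2  # the transformation preserves semantics
```

Strip mining by itself changes nothing but the loop structure; its value is as a building block, e.g. the outer strip loop can then be parallelized or interchanged.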


PIPS is based on linear algebra techniques for analyses, e.g. dependence testing, as well as for code generation, e.g. loop interchange or tiling.
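For a flavor of these linear-algebra techniques, consider the classic GCD dependence test, one of the simplest such tests and far weaker than PIPS's actual polyhedral machinery. It decides whether the linear dependence equation a*i - b*j = c can have an integer solution at all:

```python
from math import gcd

def gcd_test(a, b, c):
    """Classic GCD dependence test: a*i - b*j = c has an integer solution
    iff gcd(a, b) divides c.  A True answer only means a dependence is
    *possible*; loop bounds must still be checked by a finer test."""
    return c % gcd(a, b) == 0

# A(2*i) written vs A(2*j + 1) read: 2*i - 2*j = 1 has no integer
# solution, so the two references can never touch the same element.
assert gcd_test(2, 2, 1) is False

# A(4*i) written vs A(2*j + 2) read: 4*i - 2*j = 2 is solvable,
# so a dependence is possible and must be analyzed further.
assert gcd_test(4, 2, 2) is True
```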

Analyses and transformations are driven by a make system (pipsmake), which enforces consistency across analyses and modules; results are stored for future interprocedural use in a database managed by a resource manager (pipsdbm). The workbench is made of phases that are called on demand to perform the analyses or transformations required by the user. Interprocedural requests are thus very easy to formulate: only the final result needs to be specified.
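The on-demand mechanism can be sketched in Python as follows (the rules and resource names are invented for the illustration; the real pipsmake rule base is far richer):

```python
# A toy demand-driven resource manager in the spirit of pipsmake/pipsdbm.
RULES = {
    # resource: (producer phase, [required resources])
    "parsed_code":   (lambda deps: "ast", []),
    "effects":       (lambda deps: "effects(%s)" % deps[0], ["parsed_code"]),
    "transformers":  (lambda deps: "transformers(%s)" % deps[0], ["effects"]),
    "preconditions": (lambda deps: "preconditions(%s)" % deps[0], ["transformers"]),
}

class Workspace:
    def __init__(self):
        self.db = {}  # plays the role of pipsdbm: stored results

    def request(self, resource):
        # The user asks only for the final result; prerequisites are
        # computed (and cached for later reuse) on demand, in order.
        if resource not in self.db:
            producer, needs = RULES[resource]
            self.db[resource] = producer([self.request(r) for r in needs])
        return self.db[resource]

w = Workspace()
# One request triggers the whole chain of analyses, in dependence order.
assert w.request("preconditions") == "preconditions(transformers(effects(ast)))"
assert set(w.db) == {"parsed_code", "effects", "transformers", "preconditions"}
```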

PIPS Overall Structure (Note: OpenMP and MPI outputs are now available)

PIPS is built on top of two other tools. The first one is NewGen, which manages data structures à la IDL. It provides basic manipulation functions for data structures described in a declaration file, and supports persistent data and type hierarchies. An introductory paper (in English) and a tutorial (in French) are available.

All PIPS data types are based on NewGen. A description of the PIPS internal representation is available in a technical report. The mapping of Fortran onto the PIPS internal representation is described in Technical Report TR E/105, Section 2.

The second tool is the Linear C3 library, which handles vectors, matrices, linear constraints and structures based on these, such as polyhedra. The algorithms used are designed for integer and/or rational coefficients. This library is used extensively for analyses such as dependence testing, precondition and region computation, for transformations such as tiling, and for code generation, such as send and receive code in HPF compilation. The Linear C3 library is a joint project with the IRISA and PRISM laboratories, partially funded by CNRS. IRISA contributed an implementation of the Chernikova algorithm and PRISM a C implementation of PIP (Parametric Integer Programming).

User Interfaces

Five user interfaces are available: a shell interface (Pips), a command-line interface (tpips; see the Tpips user manual and man page) and three window-based interfaces (Wpips, Epips and Jpips). The X-window interfaces Wpips and Epips are the best suited for new users (see the wpips man page and the WPips and EPips user manual). Jpips, a Java-based interface, is not yet available. A Wpips screen snapshot is available.



PIPS has been developed at CRI since 1988, through several research projects funded by the French DoD (DRET), the French NSF (CNRS) and the European Union (ESPRIT programs). The initial design was made by Rémi Triolet, François Irigoin and Pierre Jouvelot. It has proved good enough since then and has not required any major change. The overview paper presented at ICS'91 is still up-to-date, although major functionalities have been added since.

Pips Bibliography

  • Semantical Interprocedural Parallelization: An Overview of the PIPS Project,
    F. Irigoin, P. Jouvelot, R. Triolet, 1991 International Conference on Supercomputing, Cologne, June 1991. Also available as Tech. Report A/201.
  • PIPS: a Workbench for Program Parallelization and Optimization
    A-299 color transparencies presented at European Parallel Tool Meeting 1996 (EPTM'96), Corinne Ancourt, Fabien Coelho, Béatrice Creusillet, François Irigoin, Pierre Jouvelot and Ronan Keryell. October 23, 1996, ONERA, FRANCE.
  • PIPS --- A Workbench for Interprocedural Program Analyses and Parallelization
    Corinne Ancourt, Béatrice Apvrille, Fabien Coelho, François Irigoin, Pierre Jouvelot, Ronan Keryell, gzipped report, April 22, 1994. Slides from the Villeneuve d'Ascq '94 Franco-British N+N Meeting on Data Parallel Languages and Compilers for Portable Parallel Computing.
User Interfaces:
  • WPips & EPips User Manual (Paralléliseur Interprocédural de Programmes Scientifiques) --- Linear Algebra based Automatic Parallelizer & Program Transformer,
    Ronan Keryell, April 9, 1996. Technical Report A-288 in PostScript or in HTML
  • Wpips Window interface,
  • Tpips user manual HTML format
  • Tpips Line interface
  • Pips Shell interface,
  • Jpips A Java interface (in development)
  • PIPS Development Environment,
    Corinne Ancourt, Fabien Coelho, Béatrice Creusillet, François Irigoin, Pierre Jouvelot, Ronan Keryell, May 1996-2008. Report A-291.
  • PIPS: Représentation Intermédiaire,
    F. Irigoin, P. Jouvelot, R. Triolet, internal PIPS project documentation, September 1988. New version, 2008, TR E/166.
  • NewGen User Manual.
    P. Jouvelot, R. Triolet, December 1990, Tech. Report.
  • NewGen: A Language-Independent Program Generator.
    P. Jouvelot, R. Triolet, July 12, 1989, Tech. Report A/191.
  • A French tutorial in HTML format
  • PIPSMAKE et PIPSDBM: Motivations et fonctionnalités.
    F. Irigoin, September 27, 1990, TR E/133.
  • The PIPSMAKE High-Level Software Interface, and its associated properties for Low Level Tuning of PIPS.

On-going research at CRI

This part is quite outdated, even though developments are still going on... :-( Please look at the new version of the site.

The current research objectives are:

  • expression optimization for superscalar and VLIW architectures (Julien Zory)
  • analysis refinement to support automatic computation of module signatures (Olivier Albiez, Nicky Williams-Preston)
  • new, effective encodings of sets of integers to better support signal processing applications (Olivier Albiez)
  • compilation of signal processing specifications (Corinne Ancourt)
These objectives were defined with our industrial partners (CEA, EDF, SAGEM, Thomson-CSF).

External Collaboration

A team at CEA, led by Benoît de Dinechin, contributed several phases which implement techniques developed by Paul Feautrier (PRISM) for the TMC CM-5 and Cray T3D machines, as well as extensions of these techniques (see the polyhedral method). These external phases were easily blended into PIPS thanks to NewGen, which constrains the number of data structures used by programmers and provides a general framework for low-level classes (lists, hash tables, ...), and thanks to pipsmake, which hides the intra- and interprocedural chaining of analyses and transformations from the programmer as well as from the user.


Ten years after its inception, PIPS as a workbench is still alive and well. PIPS provides a robust infrastructure for new experiments in compilation, program analysis, optimization, transformation and parallelization. The PIPS developer environment is described in a technical report, but it is also possible to develop new phases on top of, yet outside of, PIPS, since all (in fact, most...) PIPS data structures can be reloaded using NewGen primitives from C or Lisp programs.

PIPS is written in C and developed under Linux. PIPS can also be used under Solaris, AIX and Digital UNIX. Two versions are available: integer values are represented by either 32-bit or 64-bit words. The source code is available on request and executables are downloadable.

Maintenance and Support

Send your comments, questions and requests to pips-support@cri.mines-paristech.fr.