PIPS High-Level Software Interface
Pipsmake Configuration

Rémi Triolet, François Irigoin
and many other contributors
MINES ParisTech
Mathématiques et Systèmes
Centre de Recherche en Informatique
77305 Fontainebleau Cedex
France

 Id: pipsmake-rc.tex 23183 2016-09-19 07:23:25Z coelho  

You can get a printable version of this document at
http://www.cri.ensmp.fr/pips/pipsmake-rc.htdoc/pipsmake-rc.pdf and an HTML version at http://www.cri.ensmp.fr/pips/pipsmake-rc.htdoc.

Chapter 1
Introduction

This paper describes the high-level objects and functions that are potentially user-visible in a PIPS [33] interactive environment. It defines the internal software interface between a user interface and the program analyses and transformations. It is clearly not a user guide, but it can serve as a reference guide, second only to the source code, because the PIPS user interfaces are very closely mapped on this document: some of their features are automatically derived from it.

Objects can be viewed and functions activated through one of the existing PIPS user interfaces: tpips, the tty-style interface which is currently recommended; pips [11], the old batch interface, improved by many shell scripts; and wpips and epips, the X-Window System interfaces. The epips interface is an extension of wpips which uses Emacs to display more information in a more convenient way. Unfortunately, these window-based interfaces no longer work and have been replaced by gpips. It is also possible to use PIPS through a Python API, pyps.

From a theoretical point of view, the object types and functions available in PIPS define a heterogeneous algebra with constructors (e.g. parser), extractors (e.g. prettyprinter) and operators (e.g. loop unrolling). Very few combinations of functions make sense, but many functions and object types are available. This abundance is confusing for casual and experienced users alike, so it was deemed necessary to assist them by providing default computation rules and automatic consistency management similar to make. The rule interpreter is called pipsmake and is described in [10]. Its key concepts are the phase, which corresponds to a user-visible PIPS function, for instance a parser; the resources, which correspond to objects used or defined by the phases, for instance a source file or an AST (parsed code); and the virtual rules, which define the set of input resources used by a phase and the set of output resources defined by it. Since PIPS is an interprocedural tool, some real input resources are not known until execution. Variables such as CALLERS or CALLEES can be used in virtual rules; they are expanded at execution time to obtain an effective rule with the precise resources needed.

For debugging purposes and for advanced users, the precise choice and tuning of an algorithm can be made using properties. Default properties are installed with PIPS but they can be redefined, partly or entirely, by a properties.rc file located in the current directory. Properties can also be redefined from the user interfaces, for example with the command setproperty when the tpips interface is used.
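With tpips, for instance, a property can be redefined on the fly for the current session (a sketch; the property value chosen here is only illustrative):

```
# redefine a property interactively before applying a phase
setproperty LOG_TIMINGS TRUE
```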

As far as their static structures are concerned, most object types are described in more detail in PIPS Internal Representation of Fortran and C code. A dynamic view is given here: in which order should functions be applied? Which objects do they produce and, conversely, which function produces such and such objects? How does PIPS cope with bottom-up and top-down interprocedurality?

Resources that are produced by several rules, together with their associated rule, must be given alias names when they are to be explicitly computed or activated by an interactive interface. This is otherwise not relevant. The alias names are used to automatically generate header files and/or test files used by the PIPS interfaces.

No more than one resource should be produced per line of rule, because different files are automatically extracted from this one. Another caveat is that all resources whose names are suffixed with _file are considered printable or displayable; the others are considered binary data, even though they may be ASCII strings.

This LaTeX file is used by several procedures to derive some pieces of C code and ASCII files. The useful information is located in the PipsMake areas, a very simple literate programming environment. For instance, alias information is used to automatically generate menus for window-based interfaces such as wpips or gpips. Object (a.k.a. resource) types and functions are renamed using the alias declaration. The name space of aliases is global: all aliases must have different names. Function declarations are used to build phases.h, a mapping table between function names and pointers to C functions. Object suffixes are used to derive a header file, resources.h, with all resource names. Parts of this file are also extracted to generate on-line information for wpips and automatic completion for tpips.

The behavior of PIPS can be slightly tuned by using properties and some environment variables. Most properties are linked to a particular phase, for instance to prettyprint, but some are linked to PIPS infrastructure and are presented in Chapter 2.

1.1 Informal Pipsmake Syntax

To understand and to be able to write new rules for pipsmake, a few things need to be known.

1.1.1 Example

The rule:

proper_references       > MODULE.proper_references
        < PROGRAM.entities
        < MODULE.code
        < CALLEES.summary_effects
means that the method proper_references is used to generate the proper_references resource of a given MODULE. To generate this resource, the method needs access to the resource holding the symbol table, entities, of the PROGRAM currently analyzed, to the code resource (the instructions) of the given MODULE, and to the summary_effects resource (the side effects on memory, see Section 6.2.4) of the functions and procedures called by the given MODULE, its CALLEES.

Properties are also declared in this file. For instance

ABORT_ON_USER_ERROR FALSE
declares a property that controls whether PIPS stops interpreting user commands when a user error occurs, and sets its default value to false, which makes sense most of the time for interactive use of PIPS. For non-regression tests, however, it may be better to turn this property on.
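For a non-regression test run, for example, the default can be overridden by a local properties.rc file, which simply lists property names and values (a sketch):

```
ABORT_ON_USER_ERROR TRUE
```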

1.1.2 Pipsmake variables

The following variables are defined to handle interprocedurality:

PROGRAM:
the whole application currently analyzed;
MODULE:
the current MODULE (a procedure or function);
ALL:
all the MODULEs of the current PROGRAM, functions and compilation units;
ALLFUNC:
all the MODULEs of the current PROGRAM that are functions;
CALLEES:
all the MODULEs called in the given MODULE;
CALLERS:
all the MODULEs that call the given MODULE.

These variables are used in the rule definitions and instantiated before pipsmake infers which resources are pre-requisites for a rule.
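For instance, assuming a hypothetical module FOO whose callees are BAR and BAZ, the proper_references rule of Section 1.1.1 would be instantiated at execution into the effective rule:

```
proper_references       > FOO.proper_references
        < PROGRAM.entities
        < FOO.code
        < BAR.summary_effects
        < BAZ.summary_effects
```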

The environment variable PIPS_IGNORE_FUNCTION_RX is interpreted as a regular expression used to filter out unwanted functions, such as static functions, inlined or not, which appear in some standard header files from time to time. For instance, with gcc 4.8, you may define

export PIPS_IGNORE_FUNCTION_RX='!__bswap_'

1.2 Properties and Environment Variables

This paper also defines and describes global variables used to modify or fine tune PIPS behavior. Since global variables are useful for some purposes, but always dangerous, PIPS programmers are required to avoid them or to declare them explicitly as properties. Properties have an ASCII name and can have boolean, integer or string values.

Casual users should not use them. Some properties are modified for them by the user interface and/or the high-level functions. Some property combinations may be meaningless. More experienced users can set their values, using their names and a user interface.

Experienced users can also modify properties by inserting a file called properties.rc in their local directory. Of course, they cannot declare new properties, since those would not be recognized by the PIPS system. The local property file is read after the default property file, $PIPS_ROOT/etc/properties.rc. Some user-specified property values may be ignored because they are modified by a PIPS function before they have had a chance to take effect. Unfortunately, there is no explicit indication of usefulness for the properties in this report.

The default property file can be used to generate a custom version of properties.rc. It is derived automatically from this documentation, Documentation/pipsmake-rc.tex.

PIPS behavior can also be altered by shell environment variables. Their generic name is XXXX_DEBUG_LEVEL, where XXXX is a library, phase or interface name (of course, there are exceptions). In theory, these environment variables are also declared as properties, but programmers generally forget to do so. A debug level of 0 is equivalent to no tracing; the amount of tracing increases with the debug level, and the maximum useful value is 9.

Another Shell environment variable, NEWGEN_MAX_TABULATED_ELEMENTS, is useful to analyze large programs. Its default value is 12,000 but it is not uncommon to have to set it up to 200,000.
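A typical environment for analyzing a large program might thus be set up as follows (the values are only illustrative):

```shell
# Illustrative values: maximum tracing for the PipsDBM library,
# and a larger Newgen table for big programs.
export PIPSDBM_DEBUG_LEVEL=9
export NEWGEN_MAX_TABULATED_ELEMENTS=200000
echo "$PIPSDBM_DEBUG_LEVEL $NEWGEN_MAX_TABULATED_ELEMENTS"
```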

Properties and environment variables are listed below on a source library basis. Properties used in more than one library or by the PIPS infrastructure are presented first. Section 2.3 contains information about properties related to the infrastructure, external and user interface libraries. Properties for analyses are grouped in Chapter 6. Properties for program transformations, parallelization and distribution phases are listed in Chapters 8 and 9. User output produced by the different kinds of prettyprinters is presented in Chapter 10. Chapter 11 is dedicated to properties of the libraries added by CEA to implement Feautrier's method.

1.3 Outline

Rule and object declaration are grouped in chapters: input files (Chapter 3), syntax analysis and abstract syntax tree (Chapter 4), analyses (Chapter 6), parallelizations (Chapter 8), program transformations (Chapter 9) and prettyprinters of output files (Chapter 10). Chapter 11 describes several analyses defined by Paul Feautrier. Chapter 12 contains a set of menu declarations for the window-based interfaces.

Virtually every PIPS programmer contributed some lines to this report. Inconsistencies are likely. Please report them to the PIPS team!

Contents

1 Introduction
 1.1 Informal Pipsmake Syntax
  1.1.1 Example
  1.1.2 Pipsmake variables
 1.2 Properties and Environment Variables
 1.3 Outline
2 Global Options
 2.1 Fortran Loops
 2.2 Logging
 2.3 PIPS Infrastructure
  2.3.1 Newgen
  2.3.2 C3 Linear Library
  2.3.3 PipsMake Library
  2.3.4 PipsDBM Library
  2.3.5 Top Level Library
  2.3.6 Warning Management
  2.3.7 Option for C Code Generation
 2.4 User and Programming Interfaces
  2.4.1 Tpips Command Line Interface
  2.4.2 Pyps API
3 Input Files
 3.1 User File
 3.2 Preprocessing and Splitting
  3.2.1 Fortran 77 Preprocessing and Splitting
   3.2.1.1 Fortran 77 Syntactic Verification
   3.2.1.2 Fortran 77 File Preprocessing
   3.2.1.3 Fortran 77 Split
   3.2.1.4 Fortran Syntactic Preprocessing
  3.2.2 C Preprocessing and Splitting
   3.2.2.1 C Syntactic Verification
  3.2.3 Fortran 90 Preprocessing and Splitting
  3.2.4 Source File Hierarchy
 3.3 Source Files
 3.4 Regeneration of User Source Files
4 Building the Internal Representation
 4.1 Entities
 4.2 Parsed Code and Callees
  4.2.1 Fortran 77
   4.2.1.1 Fortran 77 Restrictions
   4.2.1.2 Some Additional Remarks
   4.2.1.3 Some Unfriendly Features
   4.2.1.4 Declaration of the Standard Fortran 77 Parser
  4.2.2 Declaration of HPFC Parser
  4.2.3 Declaration of the C Parsers
   4.2.3.1 Language parsed by the C Parsers
   4.2.3.2 Handling of C Code
   4.2.3.3 Compilation Unit Parser
   4.2.3.4 C Parser
   4.2.3.5 C Symbol Table
   4.2.3.6 Properties Used by the C Parsers
  4.2.4 Fortran 90
 4.3 Controlized Code (Hierarchical Control Flow Graph)
  4.3.1 Properties for Clean Up Sequences
  4.3.2 Symbol Table Related to a Module Code
 4.4 Parallel Code
5 Pedagogical Phases
 5.1 Using XML backend
 5.2 Prepending a comment
 5.3 Prepending a call
 5.4 Add a pragma to a module
6 Static Analyses
 6.1 Call Graph
 6.2 Memory Effects
  6.2.1 Proper Memory Effects
  6.2.2 Filtered Proper Memory Effects
  6.2.3 Cumulated Memory Effects
  6.2.4 Summary Data Flow Information (SDFI)
  6.2.5 IN and OUT Effects
  6.2.6 Proper and Cumulated References
  6.2.7 Effect Properties
   6.2.7.1 Effects Filtering
   6.2.7.2 Checking Pointer Updates
   6.2.7.3 Dereferencing Effects
   6.2.7.4 Effects of References to a Variable Length Array (VLA)
   6.2.7.5 Memory Effects vs Environment Effects
   6.2.7.6 Time Effects
   6.2.7.7 Effects of Unknown Functions
    6.2.7.8 Other Properties Impacting Effects
 6.3 Live Memory Access Paths
 6.4 Reductions
  6.4.1 Reduction Propagation
  6.4.2 Reduction Detection
 6.5 Chains (Use-Def Chains)
  6.5.1 Menu for Use-Def Chains
  6.5.2 Standard Use-Def Chains (a.k.a. Atomic Chains)
  6.5.3 READ/WRITE Region-Based Chains
  6.5.4 IN/OUT Region-Based Chains
  6.5.5 Chain Properties
   6.5.5.1 Add use-use Chains
   6.5.5.2 Remove Some Chains
 6.6 Dependence Graph (DG)
  6.6.1 Menu for Dependence Tests
  6.6.2 Fast Dependence Test
  6.6.3 Full Dependence Test
  6.6.4 Semantics Dependence Test
  6.6.5 Dependence Test with Convex Array Regions
  6.6.6 Dependence Properties (Ricedg)
   6.6.6.1 Dependence Test Selection
   6.6.6.2 Statistics
   6.6.6.3 Algorithmic Dependences
   6.6.6.4 Optimization
 6.7 Flinter
 6.8 Loop Statistics
 6.9 Semantics Analysis
  6.9.1 Transformers
   6.9.1.1 Menu for Transformers
   6.9.1.2 Fast Intraprocedural Transformers
   6.9.1.3 Full Intraprocedural Transformers
   6.9.1.4 Fast Interprocedural Transformers
   6.9.1.5 Full Interprocedural Transformers
   6.9.1.6 Full Interprocedural Transformers with points-to
   6.9.1.7 Refine Full Interprocedural Transformers
   6.9.1.8 Summary Transformer
  6.9.2 Preconditions
   6.9.2.1 Initial Precondition or Program Precondition
   6.9.2.2 Intraprocedural Summary Precondition
   6.9.2.3 Interprocedural Summary Precondition
   6.9.2.4 Menu for Preconditions
   6.9.2.5 Intra-Procedural Preconditions
   6.9.2.6 Fast Inter-Procedural Preconditions
   6.9.2.7 Full Inter-Procedural Preconditions
  6.9.3 Total Preconditions
   6.9.3.0.1 Status:
   6.9.3.1 Menu for Total Preconditions
   6.9.3.2 Intra-Procedural Total Preconditions
   6.9.3.3 Inter-Procedural Total Preconditions
   6.9.3.4 Summary Total Precondition
   6.9.3.5 Summary Total Postcondition
   6.9.3.6 Final Postcondition
  6.9.4 Semantic Analysis Properties
   6.9.4.1 Value types
   6.9.4.2 Array Declarations and Accesses
   6.9.4.3 Type Information
   6.9.4.4 Integer Division
   6.9.4.5 Flow Sensitivity
   6.9.4.6 Context for statement and expression transformers
   6.9.4.7 Interprocedural Semantics Analysis
   6.9.4.8 Fix Point and Transitive Closure Operators
   6.9.4.9 Normalization Level
   6.9.4.10 Evaluation of sizeof
   6.9.4.11 Prettyprint
   6.9.4.12 Debugging
  6.9.5 Reachability Analysis: The Path Transformer
 6.10 Continuation conditions
 6.11 Complexities
  6.11.1 Menu for Complexities
  6.11.2 Uniform Complexities
  6.11.3 Summary Complexity
  6.11.4 Floating Point Complexities
  6.11.5 Complexity properties
   6.11.5.1 Debugging
   6.11.5.2 Fine Tuning
   6.11.5.3 Target Machine and Compiler Selection
   6.11.5.4 Evaluation Strategy
 6.12 Convex Array Regions
  6.12.1 Menu for Convex Array Regions
  6.12.2 MAY READ/WRITE Convex Array Regions
  6.12.3 MUST READ/WRITE Convex Array Regions
  6.12.4 Summary READ/WRITE Convex Array Regions
  6.12.5 IN Convex Array Regions
  6.12.6 IN Summary Convex Array Regions
  6.12.7 OUT Summary Convex Array Regions
  6.12.8 OUT Convex Array Regions
  6.12.9 Properties for Convex Array Regions
 6.13 Alias Analysis
  6.13.1 Dynamic Aliases
  6.13.2 Init Points-to Analysis
  6.13.3 Interprocedural Points to Analysis
  6.13.4 Fast Interprocedural Points to Analysis
  6.13.5 Intraprocedural Points to Analysis
  6.13.6 Initial Points-to or Program Points-to
  6.13.7 Pointer Values Analyses
  6.13.8 Properties for pointer analyses
   6.13.8.1 Impact of Types
   6.13.8.2 Heap Modeling
   6.13.8.3 Type Handling
    6.13.8.4 Dereferencing of Null and Undefined Pointers
   6.13.8.5 Limits of Points-to Analyses
  6.13.9 Menu for Alias Views
 6.14 Complementary Sections
  6.14.1 READ/WRITE Complementary Sections
  6.14.2 Summary READ/WRITE Complementary Sections
7 Dynamic Analyses (Instrumentation)
 7.1 Array Bound Checking
  7.1.1 Elimination of Redundant Tests: Bottom-Up Approach
  7.1.2 Insertion of Unavoidable Tests
  7.1.3 Interprocedural Array Bound Checking
  7.1.4 Array Bound Checking Instrumentation
 7.2 Alias Verification
  7.2.1 Alias Propagation
  7.2.2 Alias Checking
 7.3 Used Before Set
8 Parallelization, Distribution and Code Generation
 8.1 Code Parallelization
  8.1.1 Parallelization properties
   8.1.1.1 Properties controlling Rice parallelization
  8.1.2 Menu for Parallelization Algorithm Selection
  8.1.3 Allen & Kennedy’s Parallelization Algorithm
  8.1.4 Def-Use Based Parallelization Algorithm
  8.1.5 Parallelization and Vectorization for Cray Multiprocessors
  8.1.6 Coarse Grain Parallelization
  8.1.7 Global Loop Nest Parallelization
  8.1.8 Coerce Parallel Code into Sequential Code
  8.1.9 Detect Computation Intensive Loops
  8.1.10 Limit parallelism using complexity
  8.1.11 Limit Parallelism in Parallel Loop Nests
 8.2 SIMDizer for SIMD Multimedia Instruction Set
  8.2.1 SIMD Atomizer
   8.2.2 Loop Unrolling for SIMD Code Generation
  8.2.3 Tiling for SIMD Code Generation
  8.2.4 Preprocessing of Reductions for SIMD Code Generation
  8.2.5 Redundant Load-Store Elimination
  8.2.6 Undo Some Atomizer Transformations (?)
  8.2.7 If Conversion
  8.2.8 Loop Unswitching
  8.2.9 Scalar Renaming
  8.2.10 Tree Matching for SIMD Code Generation
  8.2.11 SIMD properties
   8.2.11.1 Auto-Unroll
   8.2.11.2 Memory Organisation
   8.2.11.3 Pattern file
 8.3 Code Distribution
  8.3.1 Shared-Memory Emulation
  8.3.2 HPF Compiler
   8.3.2.1 HPFC Filter
   8.3.2.2 HPFC Initialization
   8.3.2.3 HPF Directive removal
   8.3.2.4 HPFC actual compilation
   8.3.2.5 HPFC completion
   8.3.2.6 HPFC install
   8.3.2.7 HPFC High Performance Fortran Compiler properties
  8.3.3 STEP: MPI code generation from OpenMP programs
   8.3.3.1 STEP Directives
   8.3.3.2 STEP Analysis
   8.3.3.3 STEP code generation
  8.3.4 PHRASE: high-level language transformation for partial evaluation in reconfigurable logic
   8.3.4.1 Phrase Distributor Initialisation
   8.3.4.2 Phrase Distributor
   8.3.4.3 Phrase Distributor Control Code
  8.3.5 Safescale
   8.3.5.1 Distribution init
   8.3.5.2 Statement Externalization
  8.3.6 CoMap: Code Generation for Accelerators with DMA
   8.3.6.1 Phrase Remove Dependences
   8.3.6.2 Phrase comEngine Distributor
   8.3.6.3 PHRASE ComEngine properties
  8.3.7 Parallelization for Terapix architecture
   8.3.7.1 Isolate Statement
   8.3.7.2 GPU XML Output
   8.3.7.3 Delay Communications
   8.3.7.4 Hardware Constraints Solver
   8.3.7.5 kernelize
   8.3.7.6 Communication Generation
  8.3.8 Code Distribution on GPU
  8.3.9 Task code generation for StarPU runtime
  8.3.10 SCALOPES: task code generation for the SCMP architecture with SESAM HAL
   8.3.10.1 First approach
   8.3.10.2 General Solution
 8.4 Automatic Resource-Constrained Static Task Parallelization
  8.4.1 Sequence Dependence DAG (SDG)
  8.4.2 BDSC-Based Hierarchical Task Parallelization (HBDSC)
  8.4.3 SPIRE(PIPS) generation
  8.4.4 SPIRE-Based Parallel Code Generation
9 Program Transformations
 9.1 Loop Transformations
  9.1.1 Introduction
  9.1.2 Loop range Normalization
  9.1.3 Loop Distribution
  9.1.4 Statement Insertion
  9.1.5 Loop Expansion
  9.1.6 Loop Fusion
  9.1.7 Index Set Splitting
  9.1.8 Loop Unrolling
   9.1.8.1 Regular Loop Unroll
   9.1.8.2 Full Loop Unroll
  9.1.9 Loop Fusion
  9.1.10 Strip-mining
  9.1.11 Loop Interchange
  9.1.12 Hyperplane Method
  9.1.13 Loop Nest Tiling
  9.1.14 Symbolic Tiling
  9.1.15 Loop Normalize
  9.1.16 Guard Elimination and Loop Transformations
  9.1.17 Tiling for sequences of loop nests
 9.2 Redundancy Elimination
  9.2.1 Loop Invariant Code Motion
  9.2.2 Partial Redundancy Elimination
  9.2.3 Identity Elimination
 9.3 Control-Flow Optimizations
  9.3.1 Control Simplification (a.k.a. Dead Code Elimination)
   9.3.1.1 Properties for Control Simplification
  9.3.2 Dead Code Elimination (a.k.a. Use-Def Elimination)
  9.3.3 Loop bound minimization
  9.3.4 Control Restructurers
   9.3.4.1 Unspaghettify
   9.3.4.2 Restructure Control
   9.3.4.3 DO Loop Recovery
   9.3.4.4 For Loop to DO Loop Conversion
   9.3.4.5 For Loop to While Loop Conversion
   9.3.4.6 Do While to While Loop Conversion
   9.3.4.7 Spaghettify
   9.3.4.8 Full Spaghettify
  9.3.5 Control Flow Normalisation (STF)
  9.3.6 Trivial Test Elimination
  9.3.7 Finite State Machine Generation
   9.3.7.1 FSM Generation
   9.3.7.2 Full FSM Generation
   9.3.7.3 FSM Split State
   9.3.7.4 FSM Merge States
   9.3.7.5 FSM Properties
  9.3.8 Control Counters
 9.4 Expression Transformations
  9.4.1 Atomizers
   9.4.1.1 General Atomizer
   9.4.1.2 Limited Atomizer
   9.4.1.3 Atomizer Properties
  9.4.2 Partial Evaluation
  9.4.3 Reduction Detection
  9.4.4 Reduction Replacement
  9.4.5 Forward Substitution
  9.4.6 Expression Substitution
  9.4.7 Rename Operators
  9.4.8 Array to Pointer Conversion
  9.4.9 Expression Optimization Using Algebraic Properties
  9.4.10 Common Subexpression Elimination
 9.5 Hardware Accelerator
  9.5.1 FREIA Software
  9.5.2 FREIA SPoC
  9.5.3 FREIA Terapix
  9.5.4 FREIA OpenCL
  9.5.5 FREIA Sigma-C for Kalray MPPA-256
 9.6 Function Level Transformations
  9.6.1 Inlining
  9.6.2 Unfolding
  9.6.3 Outlining
  9.6.4 Cloning
 9.7 Declaration Transformations
  9.7.1 Declarations Cleaning
  9.7.2 Array Resizing
   9.7.2.1 Top Down Array Resizing
   9.7.2.2 Bottom Up Array Resizing
   9.7.2.3 Full Bottom Up Array Resizing
   9.7.2.4 Array Resizing Statistic
   9.7.2.5 Array Resizing Properties
  9.7.3 Scalarization
   9.7.3.1 Scalarization Based on Convex Array Regions
   9.7.3.2 Scalarization Based on Constant Array References
   9.7.3.3 Scalarization Based on Memory Effects and Dependence Graph
  9.7.4 Induction Variable Substitution
  9.7.5 Strength Reduction
  9.7.6 Flatten Code
  9.7.7 Split Update Operators
  9.7.8 Split Initializations (C Code)
  9.7.9 Set Return Type
  9.7.10 Cast Actual Parameters at Call Sites
  9.7.11 Scalar and Array Privatization
   9.7.11.1 Scalar Privatization
   9.7.11.2 Declaration Localization
   9.7.11.3 Array Privatization
  9.7.12 Scalar and Array Expansion
   9.7.12.1 Scalar Expansion
   9.7.12.2 Array Expansion
  9.7.13 Variable Length Array
   9.7.13.1 Check Initialize Variable Length Array
   9.7.13.2 Initialize Variable Length Array
  9.7.14 Freeze variables
 9.8 Miscellaneous transformations
  9.8.1 Type Checker
  9.8.2 Manual Editing
  9.8.3 Transformation Test
 9.9 Extensions Transformations
  9.9.1 OpenMP Pragma
10 Output Files (Prettyprinted Files)
 10.1 Parsed Printed Files (User View)
  10.1.1 Menu for User Views
  10.1.2 Standard User View
  10.1.3 User View with Transformers
  10.1.4 User View with Preconditions
  10.1.5 User View with Total Preconditions
  10.1.6 User View with Continuation Conditions
  10.1.7 User View with Convex Array Regions
  10.1.8 User View with Invariant Convex Array Regions
  10.1.9 User View with IN Convex Array Regions
  10.1.10 User View with OUT Convex Array Regions
  10.1.11 User View with Complexities
  10.1.12 User View with Proper Effects
  10.1.13 User View with Cumulated Effects
  10.1.14 User View with IN Effects
  10.1.15 User View with OUT Effects
 10.2 Printed File (Sequential Views)
  10.2.1 Html output
  10.2.2 Menu for Sequential Views
  10.2.3 Standard Sequential View
  10.2.4 Sequential View with Transformers
  10.2.5 Sequential View with Initial Preconditions
  10.2.6 Sequential View with Complexities
  10.2.7 Sequential View with Preconditions
  10.2.8 Sequential View with Total Preconditions
  10.2.9 Sequential View with Continuation Conditions
  10.2.10 Sequential View with Convex Array Regions
   10.2.10.1 Sequential View with Plain Pointer Regions
   10.2.10.2 Sequential View with Proper Pointer Regions
   10.2.10.3 Sequential View with Invariant Pointer Regions
   10.2.10.4 Sequential View with Plain Convex Array Regions
   10.2.10.5 Sequential View with Proper Convex Array Regions
   10.2.10.6 Sequential View with Invariant Convex Array Regions
   10.2.10.7 Sequential View with IN Convex Array Regions
   10.2.10.8 Sequential View with OUT Convex Array Regions
   10.2.10.9 Sequential View with Privatized Convex Array Regions
  10.2.11 Sequential View with Complementary Sections
  10.2.12 Sequential View with Proper Effects
  10.2.13 Sequential View with Cumulated Effects
  10.2.14 Sequential View with IN Effects
  10.2.15 Sequential View with OUT Effects
  10.2.16 Sequential View with Live Paths
  10.2.17 Sequential View with Proper Reductions
  10.2.18 Sequential View with Cumulated Reductions
  10.2.19 Sequential View with Static Control Information
  10.2.20 Sequential View with Points-To Information
  10.2.21 Sequential View with Simple Pointer Values
  10.2.22 Prettyprint Properties
   10.2.22.1 Language
   10.2.22.2 Layout
   10.2.22.3 Target Language Selection
   10.2.22.3.1 Parallel output style
   10.2.22.3.2 Default sequential output style
   10.2.22.4 Display Analysis Results
   10.2.22.5 Display Internals for Debugging
   10.2.22.5.1 Warning:
   10.2.22.6 Declarations
   10.2.22.7 FORESYS Interface
   10.2.22.8 HPFC Prettyprinter
   10.2.22.9 C Internal Prettyprinter
   10.2.22.10 Interface to Emacs
 10.3 Printed Files with the Intraprocedural Control Graph
  10.3.1 Menu for Graph Views
  10.3.2 Standard Graph View
  10.3.3 Graph View with Transformers
  10.3.4 Graph View with Complexities
  10.3.5 Graph View with Preconditions
  10.3.6 Graph View with Preconditions
  10.3.7 Graph View with Regions
  10.3.8 Graph View with IN Regions
  10.3.9 Graph View with OUT Regions
  10.3.10 Graph View with Proper Effects
  10.3.11 Graph View with Cumulated Effects
  10.3.12 ICFG Properties
  10.3.13 Graph Properties
   10.3.13.1 Interface to Graphics Prettyprinters
 10.4 Parallel Printed Files
  10.4.1 Menu for Parallel View
  10.4.2 Fortran 77 Parallel View
  10.4.3 HPF Directives Parallel View
  10.4.4 OpenMP Directives Parallel View
  10.4.5 Fortran 90 Parallel View
  10.4.6 Cray Fortran Parallel View
 10.5 Call Graph Files
  10.5.1 Menu for Call Graphs
  10.5.2 Standard Call Graphs
  10.5.3 Call Graphs with Complexities
  10.5.4 Call Graphs with Preconditions
  10.5.5 Call Graphs with Total Preconditions
  10.5.6 Call Graphs with Transformers
  10.5.7 Call Graphs with Proper Effects
  10.5.8 Call Graphs with Cumulated Effects
  10.5.9 Call Graphs with Regions
  10.5.10 Call Graphs with IN Regions
  10.5.11 Call Graphs with OUT Regions
 10.6 DrawGraph Interprocedural Control Flow Graph Files (DVICFG)
  10.6.1 Menu for DVICFG’s
  10.6.2 Minimal ICFG with graphical filtered Proper Effects
 10.7 Interprocedural Control Flow Graph Files (ICFG)
  10.7.1 Menu for ICFG’s
  10.7.2 Minimal ICFG
  10.7.3 Minimal ICFG with Complexities
  10.7.4 Minimal ICFG with Preconditions
  10.7.5 Minimal ICFG with Preconditions
  10.7.6 Minimal ICFG with Transformers
  10.7.7 Minimal ICFG with Proper Effects
  10.7.8 Minimal ICFG with filtered Proper Effects
  10.7.9 Minimal ICFG with Cumulated Effects
  10.7.10 Minimal ICFG with Regions
  10.7.11 Minimal ICFG with IN Regions
  10.7.12 Minimal ICFG with OUT Regions
  10.7.13 ICFG with Loops
  10.7.14 ICFG with Loops and Complexities
  10.7.15 ICFG with Loops and Preconditions
  10.7.16 ICFG with Loops and Total Preconditions
  10.7.17 ICFG with Loops and Transformers
  10.7.18 ICFG with Loops and Proper Effects
  10.7.19 ICFG with Loops and Cumulated Effects
  10.7.20 ICFG with Loops and Regions
  10.7.21 ICFG with Loops and IN Regions
  10.7.22 ICFG with Loops and OUT Regions
  10.7.23 ICFG with Control
  10.7.24 ICFG with Control and Complexities
  10.7.25 ICFG with Control and Preconditions
  10.7.26 ICFG with Control and Total Preconditions
  10.7.27 ICFG with Control and Transformers
  10.7.28 ICFG with Control and Proper Effects
  10.7.29 ICFG with Control and Cumulated Effects
  10.7.30 ICFG with Control and Regions
  10.7.31 ICFG with Control and IN Regions
  10.7.32 ICFG with Control and OUT Regions
 10.8 Data Dependence Graph File
  10.8.1 Menu For Dependence Graph Views
  10.8.2 Effective Dependence Graph View
  10.8.3 Loop-Carried Dependence Graph View
  10.8.4 Whole Dependence Graph View
  10.8.5 Filtered Dependence Graph View
  10.8.6 Filtered Dependence daVinci Graph View
  10.8.7 Impact Check
  10.8.8 Chains Graph View
  10.8.9 Chains Graph Graphviz Dot View
  10.8.10 Data Dependence Graph Graphviz Dot View
   10.8.10.1 Properties Used to Select Arcs to Display
  10.8.11 Properties for Dot output
 10.9 Fortran to C prettyprinter
  10.9.1 Properties for Fortran to C prettyprinter
 10.10 Prettyprinters Smalltalk
  10.11 Prettyprinter for the Polyhedral Compiler Collection (PoCC)
  10.11.1 Rstream interface
 10.12 Regions to loops
 10.13 Prettyprinter for CLAIRE
11 Feautrier Methods (a.k.a. Polyhedral Method)
 11.1 Static Control Detection
 11.2 Scheduling
 11.3 Code Generation for Affine Schedule
 11.4 Prettyprinters for CM Fortran
12 User Interface Menu Layouts
 12.1 View Menu
 12.2 Transformation Menu
13 Conclusion
14 Known Problems

Chapter 2
Global Options

Options are called properties in PIPS. Most of them are related to a specific phase, for instance the dependence graph computation. They are declared next to the corresponding phase declaration. But some are related to one library or even to several libraries and they are declared in this chapter.

Skip this chapter on first reading. Also skip this chapter on second reading because you are unlikely to need these properties until you develop in PIPS.

2.1 Fortran Loops

Are DO loop bodies executed at least once (Fortran 66 style), or not (Fortran 77)?

 
ONE_TRIP_DO FALSE  

is useful for use/def and semantics analyses but is not used for region analyses. This dangerous property should be left set to FALSE. It is not consistently checked by PIPS phases, because nobody seems to use this obsolete Fortran feature anymore.

2.2 Logging

With

 
LOG_TIMINGS FALSE  

it is possible to display the amount of real, CPU and system time directly spent in each phase, as well as the time spent reading and writing data structures from and to the PIPS database. The total time needed to complete a pipsmake request is broken down into global times, a set of phase times accumulated over the phases, and a set of IO times, also accumulated over the phases.

Note that the IO times are included in the phase times.

With

 
LOG_MEMORY_USAGE FALSE  

it is possible to log the amount of memory used by each phase and by each request. This is mainly useful to check whether a computation can be performed on a given machine. The memory log can also be used to track memory leaks, although Valgrind may be better suited for that purpose.

2.3 PIPS Infrastructure

PIPS infrastructure is based on a few external libraries, Newgen and Linear, and on three key PIPS libraries:

2.3.1 Newgen

Newgen offers some debugging support to check object consistency (gen_consistent_p and gen_defined_p), and for dynamic type checking. See the Newgen documentation [50, 51].

2.3.2 C3 Linear Library

This library is external and offers an independent debugging system.

The following properties specify how null (

 
SYSTEM_NULL "<nullsystem>"  

), undefined

 
SYSTEM_UNDEFINED "<undefinedsystem>"  

) or non-feasible systems (

 
SYSTEM_NOT_FEASIBLE "{0==-1}"  

) are prettyprinted by PIPS.

2.3.3 PipsMake Library

With

 
CHECK_RESOURCE_USAGE FALSE  

it is possible to log and report differences between the set of resources actually read and written by the procedures called by pipsmake and the set of resources declared as read or written in pipsmake.rc file.

 
ACTIVATE_DEL_DERIVED_RES TRUE  

controls the rule activation process, which may delete from the database all the resources derived from the newly activated rule, to make sure that non-consistent resources cannot be used by accident.

 
PIPSMAKE_CHECKPOINTS 0  

controls how often resources should be saved and freed. 0 means never, and a positive value n means every n applications of a rule. This feature was added to allow long automatic tpips scripts that may core dump to be restarted later on, close to the state reached before the core dump. As a side effect, it frees memory and keeps memory consumption as moderate as possible, as opposed to usual tpips runs which keep all memory allocated. Note that checkpoints should not be taken too often, because saving may take a long time, especially when entities are considered on big workspaces. The frequency may be adapted in a script: rarely at the beginning, more often later.
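
In a tpips script this would typically be set once before the expensive requests; a hypothetical fragment (the value 100 is only an example):

```
setproperty PIPSMAKE_CHECKPOINTS 0    # setup: never checkpoint
# ... create the workspace, activate rules ...
setproperty PIPSMAKE_CHECKPOINTS 100  # save and free every 100 rule applications
```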

2.3.4 PipsDBM Library

Shell environment variable PIPSDBM_DEBUG_LEVEL can be set to ? to check object consistency when objects are stored in the database, and to ? to check object consistency when they are stored or retrieved (in case an intermediate phase has inadvertently corrupted some data structure).

You can control what is done when a workspace is closed and resources are saved. The

 
PIPSDBM_RESOURCES_TO_DELETE "obsolete"  

property can be set to "obsolete" or to "all".

Note that it is not managed from pipsdbm but from pipsmake, which knows what is obsolete or not.

2.3.5 Top Level Library

The top-level library is built on top of the pipsmake and pipsdbm libraries to factorize functions useful to build a PIPS user interface or API.

Property

 
USER_LOG_P TRUE  

controls the logging of the session in the database of the current workspace. This log can be processed by the PIPS utility logfile2tpips to automatically generate a tpips script which can be used to replay the current PIPS session, workspace by workspace, regardless of the PIPS user interface used.

Property

 
ABORT_ON_USER_ERROR FALSE  

specifies how user errors impact execution once the error message is printed on stderr: return and go ahead, usually when PIPS is used interactively (default behavior), or abort and core dump for debugging purposes and for script executions, especially non-regression tests.

Property

 
CLOSE_WORKSPACE_AND_QUIT_ON_ERROR FALSE  

specifies that user and internal errors must preserve as much as possible the workspace created by PIPS. This behavior stores on disk, as much as possible, all information available on the process that has just failed. This is useful when PIPS is called by another tool. This is not compatible with ABORT_ON_USER_ERROR 2.3.5, which seeks an immediate termination of the PIPS process.

Property

 
MAXIMUM_USER_ERROR 2  

specifies the number of user errors allowed before the program brutally aborts.

Property

 
ACTIVE_PHASES "PRINT_SOURCE PRINT_CODE PRINT_PARALLELIZED77_CODE PRINT_CALL_GRAPH PRINT_ICFG TRANSFORMERS_INTER_FULL INTERPROCEDURAL_SUMMARY_PRECONDITION PRECONDITIONS_INTER_FULL ATOMIC_CHAINS RICE_SEMANTICS_DEPENDENCE_GRAPH MAY_REGIONS"  

specifies which pipsmake phases should be used when several phases can be used to produce the same resource. This property is used when a workspace is created. A workspace is the database maintained by PIPS to contain all resources defined for a whole application or for the whole set of files used to create it.
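
An individual phase can also be selected after workspace creation with the tpips activate command; a hypothetical fragment using rule names quoted in this section:

```
activate PRECONDITIONS_INTER_FULL    # pick among the precondition-producing rules
activate MAY_REGIONS                 # pick the region analysis variant
```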

Property

 
PIPSMAKE_WARNINGS TRUE  

controls whether warnings are shown when reading and activating pipsmake rules. Turning it off is useful when validating with a specialized version of PIPS, as some undesirable warnings can be shown then.

Resources that create ambiguities for pipsmake are at least:

This list must be updated according to new rules and new resources declared in this file. Note that no default parser is usually specified in this property, because it is selected automatically according to the source file suffixes when possible.

Until October 2009, the active phases were:

ACTIVE_PHASES "PRINT_SOURCE PRINT_CODE PRINT_PARALLELIZED77_CODE  
               PRINT_CALL_GRAPH PRINT_ICFG TRANSFORMERS_INTRA_FAST  
               INTRAPROCEDURAL_SUMMARY_PRECONDITION  
               PRECONDITIONS_INTRA ATOMIC_CHAINS  
               RICE_FAST_DEPENDENCE_GRAPH MAY_REGIONS"

They still are used for the old non-regression tests.

Property

 
CONSISTENCY_ENFORCED_P FALSE  

specifies that properties cannot be set once a PIPS database has been created. Pipsmake does not know the impact of properties on resources. Setting a property can make a resource obsolete, but pipsmake is going to use it as if it were consistent. To avoid this issue, set CONSISTENCY_ENFORCED_P 2.3.5 to true and tpips will detect a user error if a property is possibly altered during a processing phase.

2.3.6 Warning Management

User warnings may be turned off. Definitely, this is not the default option! Most warnings must be read to understand surprising results. This property is used by library misc.

 
NO_USER_WARNING FALSE  

By default, PIPS reports errors generated by system call stat which is used in library pipsdbm to check the time a resource has been written and hence its temporal consistency.

 
WARNING_ON_STAT_ERROR TRUE  

Error messages are also copied in the Warnings file.

2.3.7 Option for C Code Generation

The syntactic constraints of C89 have been eased for declarations in C99, where it is possible to intersperse declarations with executable statements. This property is used to request C89-compatible code generation.

 
C89_CODE_GENERATION FALSE  

So the default option is to generate C99 code. This default may be changed in the future, because C99 output is likely to make the code generated by PIPS unparsable by PIPS itself.

There is no guarantee that each code generation phase complies with this property. It is up to each developer to decide whether this global property is to be used in his/her local phase.

2.4 User and Programming Interfaces

2.4.1 Tpips Command Line Interface

tpips is one of PIPS user interfaces.

 
TPIPS_IS_A_SHELL FALSE  

controls whether tpips should behave as an extended shell and consider any input command that is not a tpips command as a shell command.

2.4.2 Pyps API

This property is automatically set to TRUE when pyps is running.

 
PYPS FALSE  

Chapter 3
Input Files

3.1 User File

An input program is a set of user Fortran 77, Fortran 90 or C source files and a name, called a workspace. The files are looked for in the current directory, then by using the colon-separated PIPS_SRCPATH variable for other directories where they might be found. The first occurrence of the file name in the ordered directories is chosen, which is consistent with PATH and MANPATH behaviour.
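
This first-match behaviour can be sketched with a small shell function (the function name is illustrative; only the colon-separated search described above is modelled):

```shell
# Search the current directory, then each PIPS_SRCPATH entry in order,
# and report the first directory containing the requested file.
find_source() {  # arg: filename ; uses $PIPS_SRCPATH
  ( IFS=:
    for dir in . $PIPS_SRCPATH; do
      if [ -f "$dir/$1" ]; then
        printf '%s\n' "$dir/$1"
        exit 0
      fi
    done
    exit 1 )
}
```

As with PATH, an earlier directory shadows a later one that contains a file of the same name.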

The source files are split by PIPS at the program initialization phase to produce one PIPS-private source file for each procedure, subroutine or function, and for each block data. A function similar to fsplit is used and the new files are stored in the workspace, which is simply a UNIX sub-directory of the current directory. These new files have names suffixed by .f.orig.

Since PIPS performs interprocedural analyses, it expects to find a source code file for each procedure or function called. Missing modules can be replaced by stubs, which can be made more or less precise with respect to their effects on formal parameters and global variables. A stub may be empty. Empty stubs can be automatically generated if the code is properly typed (see Section 3.3).

The user source files should not be edited by the user once PIPS has been started, because these edits are not going to be taken into account unless a new workspace is created. But their preprocessed copies, the PIPS source files, can safely be edited while running PIPS. The automatic consistency mechanism makes sure that any information displayed to the user is consistent with the current state of the source files in the workspace. These source files have names ending with the standard suffix, .f.

New user source files should be automatically and completely re-built when the program is no longer under PIPS control, i.e. when the workspace is closed. An executable application can easily be regenerated after code transformations using the tpips interface and requesting the PRINTED_FILE resources for all modules, including compilation units in C:

display PRINTED_FILE[%ALL]

Note that compilation units can be left out with:

display PRINTED_FILE[%ALLFUNC]

In both cases with C source code, the order of modules may be unsuitable for direct recompilation, and compilation units should be included anyway; this is what is done by explicitly requesting the code regeneration as described in § 3.4.

Note that PIPS expects proper ANSI Fortran 77 code. Its parser was not designed to locate syntax errors. It is highly recommended to check source files with a standard Fortran compiler (see Section 3.2) before submitting them to PIPS.

3.2 Preprocessing and Splitting

3.2.1 Fortran 77 Preprocessing and Splitting

The Fortran 77 files specified as input to PIPS by the user are preprocessed in various ways.

3.2.1.1 Fortran 77 Syntactic Verification

If the PIPS_CHECK_FORTRAN shell environment variable is defined to false or no or 0, the syntax of the source files is not checked. If the PIPS_CHECK_FORTRAN shell environment variable is defined to true or yes or 1, the syntax of the files is checked by compiling them with a Fortran 77 compiler. If the PIPS_CHECK_FORTRAN shell environment variable is not defined, the check is performed according to CHECK_FORTRAN_SYNTAX_BEFORE_RUNNING_PIPS 3.2.1.1.

The Fortran compiler is defined by the PIPS_FLINT environment variable. If it is undefined, the default compiler is f77 -c -ansi.

In case of failure, a warning is displayed. Note that if the program cannot be compiled properly with a Fortran compiler, it is likely that many problems will be encountered within PIPS.

The next property also triggers this preliminary syntactic verification.

 
CHECK_FORTRAN_SYNTAX_BEFORE_RUNNING_PIPS TRUE  
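
The precedence between the environment variable and the property can be sketched as a shell case analysis (this is an illustration of the rules stated above, not actual PIPS code):

```shell
# PIPS_CHECK_FORTRAN wins when set; otherwise fall back to the
# CHECK_FORTRAN_SYNTAX_BEFORE_RUNNING_PIPS property value.
should_check_fortran() {  # arg: property value, TRUE or FALSE
  case "${PIPS_CHECK_FORTRAN:-}" in
    false|no|0) echo no ;;
    true|yes|1) echo yes ;;
    *) if [ "$1" = TRUE ]; then echo yes; else echo no; fi ;;
  esac
}
```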

PIPS requires source code for all leaves in its visible call graph. By default, a user error is raised by function initializer if a user request cannot be satisfied because some source code is missing. It is also possible to generate some synthetic code (also known as stubs) and to update the current module list, but this is not a very satisfying option because all interprocedural analysis results are going to be wrong. The user should retrieve the generated .f files in the workspace, under the Tmp directory, and add some assignments (defs) and uses to mimic the action of the real code, so that the behavior is sufficient from the point of view of the analyses or transformations to be applied on the whole program. The modified synthetic files should then be saved and used to generate a new workspace.

If PREPROCESSOR_MISSING_FILE_HANDLING 3.2.1.1 is set to "query", a script handling the interactive request can optionally be set with PREPROCESSOR_MISSING_FILE_GENERATOR 3.2.1.1. This script is passed the function name and prints the generated file name on standard output. When the property is empty, an internal generator is used.

Valid settings: error or generate or query.

 
PREPROCESSOR_MISSING_FILE_HANDLING "error"  
 
PREPROCESSOR_MISSING_FILE_GENERATOR ""  

The generated stubs can have various default effects, say to prevent over-optimistic parallelization.

 
STUB_MEMORY_BARRIER FALSE  
 
STUB_IO_BARRIER FALSE  

3.2.1.2 Fortran 77 File Preprocessing

If the file suffix is .F then the file is preprocessed. By default PIPS uses gfortran -E for Fortran files. This preprocessor can be changed by setting the PIPS_FPP environment variable.

Moreover the default preprocessing options are -P -D__PIPS__ -D__HPFC__ and they can be extended (not replaced...) with the PIPS_FPP_FLAGS environment variable.
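
The resulting command line can be sketched as follows (defaults are those quoted above; the helper name is illustrative):

```shell
# PIPS_FPP overrides the preprocessor, PIPS_FPP_FLAGS extends
# (never replaces) the default -P -D__PIPS__ -D__HPFC__ options.
fpp_command() {
  echo "${PIPS_FPP:-gfortran -E} -P -D__PIPS__ -D__HPFC__${PIPS_FPP_FLAGS:+ $PIPS_FPP_FLAGS}"
}
```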

3.2.1.3 Fortran 77 Split

The file is then split into one file per module using a PIPS specialized version of fsplit2 . This preprocessing also handles

  1. Hollerith constants by converting them to the quoted syntax3 ;
  2. unnamed modules by adding MAIN000 or PROGRAM MAIN000, or DATA000 or BLOCK DATA DATA000, according to needs.

The output of this phase is a set of .f_initial files in per-module subdirectories. They constitute the resource INITIAL_FILE.
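
The splitting behaves roughly like the classic fsplit utility: each top-level unit header starts a new output file named after the unit. A deliberately simplistic awk sketch (the real tool handles far more syntax, e.g. BLOCK DATA, ENTRY and free placement of keywords):

```shell
# Split a fixed-format Fortran file into one NAME.f_initial file per
# unit, keying on SUBROUTINE/FUNCTION/PROGRAM headers in column 7.
split_fortran() {  # arg: input file
  awk '
    /^      (SUBROUTINE|FUNCTION|PROGRAM) / {
      name = $2
      sub(/\(.*/, "", name)              # drop any argument list
      out = tolower(name) ".f_initial"
    }
    out != "" { print > out }
  ' "$1"
}
```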

3.2.1.4 Fortran Syntactic Preprocessing

A second step of preprocessing is performed to produce SOURCE_FILE files with standard Fortran suffix .f from the .f_initial files. The two preprocessing steps are shown in Figure 3.1.




Figure 3.1: Preprocessing phases: from a user file to a source file


Each module source file is then processed by top-level to handle Fortran include and to comment out IMPLICIT NONE, which are not managed by PIPS. This phase also performs some transformations of complex constants to help the PIPS parser. Files referenced in Fortran include statements are looked for from the directory where the Fortran file is. The shell variable PIPS_CPP_FLAGS is not used to locate these include files.

3.2.2 C Preprocessing and Splitting

The C preprocessor is applied before the splitting. By default PIPS uses cpp -C for C files. This preprocessor can be changed by setting the PIPS_CPP environment variable.

Moreover the -D__PIPS__ -D__HPFC__ -U__GNUC__ preprocessing options are used and can be extended (not replaced) with the PIPS_CPP_FLAGS environment variable.

This PIPS_CPP_FLAGS variable can also be used to locate the include files. Directories to search are specified with the -Ifile option, as usual for the C preprocessor.

3.2.2.1 C Syntactic Verification

If the PIPS_CHECK_C shell environment variable is defined to false or no or 0, the syntax of the source files is not checked by compiling it with a C compiler. If the PIPS_CHECK_C shell environment variable is defined to true or yes or 1, the syntax of the file is checked by compiling it with a C compiler. If the PIPS_CHECK_C shell environment variable is not defined, the check is performed according to CHECK_C_SYNTAX_BEFORE_RUNNING_PIPS 3.2.2.1.

The environment variable PIPS_CC is used to define the C compiler available. If it is undefined, the compiler chosen is gcc -c.

In case of failure, a warning is displayed.

If the environment variable PIPS_CPP_FLAGS is defined, it should contain the options -Wall and -Werror for the check to be effective.

The next property also triggers this preliminary syntactic verification.

 
CHECK_C_SYNTAX_BEFORE_RUNNING_PIPS TRUE  

Although its default value is FALSE, it is much safer to set it to TRUE when dealing with new source files. PIPS is not designed to process non-standard source code. Bugs in source files are not well explained or localized. They can result in weird behaviors and unexpected core dumps. Before complaining about PIPS, it is highly recommended to set this property to TRUE.

Note: the C and Fortran syntactic verifications could be controlled by a unique property.

3.2.3 Fortran 90 Preprocessing and Splitting

The Fortran 90 parser is a separate program, derived from gcc Fortran parser. It is activated directly when the workspace is created, and not by pipsmake.

3.2.4 Source File Hierarchy

The source files may be placed in different directories and have the same name, which makes resource management more difficult. The default option is to assume that no file name conflicts occur. This is the historical option and it leads to much simpler module names.

 
PREPROCESSOR_FILE_NAME_CONFLICT_HANDLING FALSE  

3.3 Source Files

A source_file contains the code of exactly one module. Source files are created from user source files at program initialization by fsplit or a similar function if fsplit is not available (see Section 3.2). A source file may be updated by the user4 , but not by PIPS. Program transformations are performed on the internal representation (see 4) and visible in the prettyprinted output (see 10).

Source code splitting and preprocessing, e.g. cpp, are performed by the function create_workspace() from the top-level library, in collaboration with db_create_workspace() from library pipsdbm which creates the workspace directory. The user source files have names suffixed by .f or .F if cpp must be applied. They are split into original user_files with suffix .f.orig. These so-called original user files are in fact copies stored in the workspace. The syntactic PIPS preprocessor is applied to generate what is known as a source_file by PIPS. This process is fully automatized and not visible from PIPS user interfaces. However, the cpp preprocessor actions can be controlled using the Shell environment variable PIPS_CPP_FLAGS.

Function initializer is only called when the source code is not found. If the user code is properly typed, it is possible to force initializer to generate empty stubs by setting properties PREPROCESSOR_MISSING_FILE_HANDLING 3.2.1.1 and, to avoid inconsistency, PARSER_TYPE_CHECK_CALL_SITES 4.2.1.4. But remember that many Fortran codes use subroutines with variable numbers of arguments and with polymorphic types. A Fortran varargs mechanism can be achieved by using or ignoring the second argument according to the value of the first one. Polymorphism can be useful to design an IO package or a generic array subroutine, e.g. a subroutine setting an array to zero or a subroutine copying an array into another one.

The current default option is to generate a user error if some source code is missing. This decision was made for two reasons:

  1. too many warnings about typing are generated as soon as polymorphism is used;
  2. analysis results and code transformations are potentially wrong because no memory effects are synthesized; see Properties MAXIMAL_PARAMETER_EFFECTS_FOR_UNKNOWN_FUNCTIONS 6.2.7.7 and MAXIMAL_EFFECTS_FOR_UNKNOWN_FUNCTIONS 6.2.7.7.

Sometimes, a function happens to be defined (and not only declared) inside a header file with the inline keyword. In that case PIPS can consider it as a regular module or just ignore it, as its presence may be system-dependent. Property IGNORE_FUNCTION_IN_HEADER 3.3 controls this behavior and must be set before workspace creation.

 
  IGNORE_FUNCTION_IN_HEADER TRUE  

Modules can be flagged as "stubs", i.e. functions provided to PIPS which should not be inlined or modified. Property PREPROCESSOR_INITIALIZER_FLAG_AS_STUB 3.3 controls whether the initializer should declare new files as stubs.

bootstrap_stubs > PROGRAM.stubs

flag_as_stub                     > PROGRAM.stubs
                < PROGRAM.stubs
 
PREPROCESSOR_INITIALIZER_FLAG_AS_STUB TRUE  

initializer                     > MODULE.user_file
                                > MODULE.initial_file

Note: the generation of the resource user_file here above is mainly intended to introduce the resource concept here. More thought is needed to have the concept of user files managed by pipsmake.

MUST appear after initializer:

filter_file                     > MODULE.source_file
                < MODULE.initial_file
                < MODULE.user_file

In C, the initializer can generate directly a c_source_file and its compilation unit.

c_initializer                     > MODULE.c_source_file
                                  > COMPILATION_UNIT.c_source_file
                                  > MODULE.input_file_name

3.4 Regeneration of User Source Files

The unsplit 3.4 phase regenerates user files from the available printed_file resources. The various modules that were initially stored in a single file are appended together in a file with the same name. Note that only fsplit is reversed, not the preprocessing through cpp. The include file preprocessing is not reversed either.



alias unsplit ’User files Regeneration’

unsplit                         > PROGRAM.user_file
                < ALL.user_file
                < ALL.printed_file

unsplit_parsed                  > PROGRAM.user_file
                < ALL.user_file
                < ALL.parsed_printed_file
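
The effect is essentially fsplit in reverse: concatenate the printed files of the modules that came from one user file, in their original order. A crude shell sketch (function and file names are hypothetical):

```shell
# Rebuild one user file by appending its modules' printed files,
# in the order in which the modules originally appeared.
unsplit_into() {  # args: output-user-file printed-file...
  out=$1
  shift
  cat "$@" > "$out"
}
```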

Chapter 4
Building the Internal Representation

The abstract syntax tree, a.k.a intermediate representation, a.k.a. internal representation, is presented in [34] and in PIPS Internal Representation of Fortran and C code1 .

4.1 Entities

Program entities are stored in PIPS unique symbol table2 , called entities. Fortran entities, like intrinsics and operators, are created by bootstrap at program initialization. The symbol table is updated with user local and global variables when modules are parsed or linked together. This side effect is not disclosed to pipsmake.

bootstrap                       > PROGRAM.entities

The entity data structure is described in PIPS Internal Representation of Fortran and C code3 .

The declaration of new intrinsics is not easy because it was assumed that their number was fixed and limited by the Fortran standard. In fact, Fortran extensions define new ones. To add a new intrinsic, C code must be added in bootstrap/bootstrap.c and in effects-generic/intrinsics.c to declare its name, type and read/write memory effects.

Information about entities generated by the parsers is printed out conditionally to property PARSER_DUMP_SYMBOL_TABLE 4.2.1.4, which is set to false by default. Unless you are debugging the parser, do not set this property to TRUE; display the symbol table file instead. See Section 4.2.1.4 for Fortran and Section 4.2.3 for C.

4.2 Parsed Code and Callees

Each module source code is parsed to produce an internal representation called parsed_code and a list of called module names, callees.

4.2.1 Fortran 77

Source code is assumed to be fully Fortran 77 compliant. The syntax should be checked with a standard Fortran compiler, e.g. fort77 or at least gfortran, before the PIPS Fortran 77 parser is activated. On the first error encountered, the parser may be able to emit a useful message, or else the non-analyzed part of the source code is printed out.

PIPS input language is standard Fortran 77 with a few extensions and some restrictions. The input character set includes the underscore, _, and variable names of arbitrary length, i.e. not restricted to 6 characters, are supported, as well as dependent types for arrays.

4.2.1.1 Fortran 77 Restrictions

  1. ENTRY statements are not recognized and a user error is generated. Very few cases of this obsolete feature were encountered in the codes initially used to benchmark PIPS. ENTRY statements have to be replaced manually by SUBROUTINE or FUNCTION and appropriate commons. If the parser bumps into a call to an ENTRY point, it may wrongly diagnose a missing source code for this entry, or even generate a useless but pipsmake satisfying stub if the corresponding property has been set (see Section 3.3).
  2. Multiple returns are not in PIPS Fortran.
  3. ASSIGN and assigned GOTO are not in PIPS Fortran.
  4. Computed GOTOs are not in PIPS Fortran. They are automatically replaced by an IF...ELSEIF...ENDIF construct in the parser.
  5. Functional formal parameters are not accepted. This is deeply exploited in pipsmake.
  6. Integer PARAMETERs must be initialized with integer constant expressions because conversion functions are not implemented.
  7. DO loop headers should have no label. Add a CONTINUE just before the loop when it happens. This can be performed automatically if the property PARSER_SIMPLIFY_LABELLED_LOOPS 4.2.1.4 is set to TRUE. This restriction is imposed by the parallelization phases, not by the parser.
  8. Complex constants, e.g. (0.,1.), are not directly recognized by the parser. They must be replaced by a call to intrinsic CMPLX. The PIPS preprocessing replaces them by a call to COMPLX_.
  9. Function formulae are not recognized by the parser. An undeclared array and/or an unsupported macro is diagnosed. They may be substituted in an unsafe way by the preprocessor if the property

    PARSER_EXPAND_STATEMENT_FUNCTIONS 4.2.1.4

    is set. If the substitution is considered possibly unsafe, a warning is displayed.

These parser restrictions were based on funding constraints. They are mostly alleviated by the preprocessing phase. PerfectClub and SPEC-CFP95 benchmarks are handled without manual editing, except for ENTRY statements, which are obsoleted by the current Fortran standard.

4.2.1.2 Some Additional Remarks

4.2.1.3 Some Unfriendly Features

  1. Source code is read in columns 1-72 only. Lines ending in columns 73 and beyond usually generate incomprehensible errors. A warning is generated for lines ending after column 72.
  2. Comments are carried by the following statement. Comments carried by RETURN, ENDDO, GOTO or CONTINUE statements are not always preserved, because the internal representation transforms these statements or because the parallelization phase regenerates some of them. Moreover, comments may be hidden by the prettyprinter. There is a large range of prettyprinter properties to obtain a less filtered view of the code.
  3. Formats and character constants are not properly handled. Multi-line formats and constants are not always reprinted in a Fortran correct form.
  4. Declarations are exploited on-the-fly. Thus type and dimension information must be available before common declarations. If not, wrong common offsets are computed at first and fixed later in function EndOfProcedure. Also, formal arguments are implicitly declared using the default implicit rule. If it is necessary to declare them, these new declarations should occur before an IMPLICIT declaration. Otherwise, users are surprised by the type redefinition errors displayed.

4.2.1.4 Declaration of the Standard Fortran 77 Parser

parser                          > MODULE.parsed_code
                                > MODULE.callees
        < PROGRAM.entities
        < MODULE.source_file

For parser debugging purposes, it is possible to print a summary of the symbol table, when enabling this property:

 
PARSER_DUMP_SYMBOL_TABLE FALSE  

This should be avoided and the resource symbol_table_file be displayed instead.

The prettyprint of the symbol table for a Fortran or C module is generated with:

parsed_symbol_table        > MODULE.parsed_symbol_table_file
    < PROGRAM.entities
    < MODULE.parsed_code

Input Format

Some subtle errors occur because the PIPS parser uses a fixed format. Columns 73 to 80 are ignored, but the parser may emit a warning if some characters are encountered in this comment field.

 
PARSER_WARN_FOR_COLUMNS_73_80 TRUE  
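
To see exactly what the parser reads, the comment field can be stripped by hand; a one-line shell sketch (the function name is illustrative):

```shell
# Keep only columns 1-72 of each line, as the fixed-format parser does;
# columns 73-80 form the ignored sequence-number/comment field.
strip_columns() { cut -c1-72 "$1"; }
```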

ANSI extension

PIPS has been initially developed to parse correct Fortran compliant programs only. Real applications use lots of ANSI extensions… and they are not always correct! To make sure that PIPS output is correct, the input code should be checked against ANSI extensions using property

CHECK_FORTRAN_SYNTAX_BEFORE_PIPS

(see Section 3.2) and the property below should be set to false.

 
PARSER_ACCEPT_ANSI_EXTENSIONS TRUE  

Currently, this property is not used often enough in the PIPS parser, which lets many mistakes through... as expected by real users!

Array Range Extension

PIPS has been developed to parse correct Fortran-77 compliant programs only. Array ranges are used to improve readability. They can be generated by PIPS prettyprinter. They are not parsed as correct input by default.

 
PARSER_ACCEPT_ARRAY_RANGE_EXTENSION FALSE  

Type Checking

Each argument list at calls to a function or a subroutine is compared to the functional type of the callee. Turn this off if you need to support variable numbers of arguments or if you use overloading and do not want to hear about it. For instance, an IO routine can be used to write an array of integers or an array of reals or an array of complex if the length parameter is appropriate.

Since the functional typing is shaky, let’s turn it off by default!

 
PARSER_TYPE_CHECK_CALL_SITES FALSE  

Loop Header with Label

The PIPS implementation of Allen&Kennedy algorithm cannot cope with labeled DO loops because the loop, and hence its label, may be replicated if the loop is distributed. The parser can generate an extra CONTINUE statement to carry the label and produce a label-free loop. This is not the standard option because PIPS is designed to output code as close as possible to the user source code.

 
PARSER_SIMPLIFY_LABELLED_LOOPS FALSE  

Most PIPS analyses work better if DO loop bounds are affine. It is sometimes possible to improve results for non-affine bounds by assigning the bound to an integer variable and by using this variable as the bound. This is implemented for Fortran, but not for C.

 
PARSER_LINEARIZE_LOOP_BOUNDS FALSE  

Entry

The entry construct can be seen as an early attempt at object-oriented programming. The same object can be processed by several functions. The object is declared as a standard subroutine or function and entry points are placed in the executable code. The entry points have different sets of formal parameters; they may share some common pieces of code, and they share the declared variables, especially the static ones.

The entry mechanism is dangerous because of the flow of control between entries. It is now obsolete and is not analyzed directly by PIPS. Instead, each entry may be converted into a first-class function or subroutine, and static variables are gathered in a specific common. This is the default option. If the substitution is not acceptable, the property may be turned off and entries result in a parser error.

 
PARSER_SUBSTITUTE_ENTRIES TRUE  

Alternate Return

Alternate returns are put among the obsolete Fortran features by the Fortran 90 standard. It is possible (1) to refuse them (option "NO"), or (2) to ignore them and to replace alternate returns by STOP (option "STOP"), or (3) to substitute them by a semantically equivalent code based on return code values (option "RC" or option "HRC"). Option (2) is useful if the alternate returns are used to propagate error conditions. Option (3) is useful to understand the impact of the alternate returns on the control flow graph and to maintain the code semantics. Option "RC" uses an additional parameter while option "HRC" uses a set of PIPS run-time functions to hide the set and get of the return code, which makes declaration regeneration less useful. By default, the first option is selected and alternate returns are refused.

To produce an executable code, the declarations must be regenerated: see property PRETTYPRINT_ALL_DECLARATIONS 10.2.22.6 in Section 10.2.22.6. This is not necessary with option "HRC". Fewer new declarations are needed if variable PARSER_RETURN_CODE_VARIABLE 4.2.1.4 is implicitly integer because its first letter is in the I-N range.

With option (2), the code can still be executed if alternate returns are used only for errors and if no errors occur. It can also be analyzed to understand what the normal behavior is. For instance, OUT regions are more likely to be exact when exceptions and errors are ignored.

Formal and actual label variables are replaced by string variables to preserve the parameter order and as much source information as possible. See PARSER_FORMAL_LABEL_SUBSTITUTE_PREFIX 4.2.1.4 which is used to generate new variable names.

 
PARSER_SUBSTITUTE_ALTERNATE_RETURNS "NO"  
 
PARSER_RETURN_CODE_VARIABLE "I_PIPS_RETURN_CODE_"  
 
PARSER_FORMAL_LABEL_SUBSTITUTE_PREFIX "FORMAL_RETURN_LABEL_"  

The internal representation can be hidden and the alternate returns can be prettyprinted at the call sites and modules declaration by turning on the following property:

 
PRETTYPRINT_REGENERATE_ALTERNATE_RETURNS FALSE  

Using a mixed C / Fortran RI is troublesome for comment handling: sometimes the comment guard is stored in the comment, sometimes not. Sometimes this is on purpose, sometimes it is not. When the following property is set to true, PIPS4 does its best to prettyprint comments correctly.

 
PRETTYPRINT_CHECK_COMMENTS TRUE  

If all modules have been processed by PIPS, it is possible not to regenerate alternate returns and to use a code close to the internal representation. If they are regenerated at the call sites and in the module declarations, they are nevertheless not used by the code generated by PIPS, which is consistent with the internal representation.

Here is a possible implementation of the two PIPS run-time subroutines required by the hidden return code (”HRC”) option:

      subroutine SET_I_PIPS_RETURN_CODE_(irc)
      common /PIPS_RETURN_CODE_COMMON/ irc_shared
      irc_shared = irc
      end

      subroutine GET_I_PIPS_RETURN_CODE_(irc)
      common /PIPS_RETURN_CODE_COMMON/ irc_shared
      irc = irc_shared
      end

Note that the subroutine names depend on the PARSER_RETURN_CODE_VARIABLE 4.2.1.4 property. They are generated by prefixing it with SET_ and GET_. Their implementation is free. The common name used should not conflict with application common names. The ENTRY mechanism is not used because it would be desugared by PIPS anyway.

Assigned GO TO

By default, assigned GO TO and ASSIGN statements are not accepted. These constructs are obsolete and will not be part of future Fortran standards.

However, it is possible to replace them automatically in a way similar to computed GO TO. Each ASSIGN statement is replaced by a standard integer assignment. The label is converted to its numerical value. When an assigned GO TO with its optional list of labels is encountered, it is transformed into a sequence of logical IF statements with appropriate tests and GO TOs. According to the Fortran 77 Standard, Section 11.3, Page 11-2, the control variable must be set to one of the labels in the optional list. Hence a STOP statement is generated to interrupt the execution in case this condition is violated, but note that compilers such as SUN f77 and g77 do not check this condition at run-time (it is undecidable statically).
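
For instance, the substitution could proceed along the following lines (a sketch; the actual code generated by PIPS may differ):

```fortran
C     Original: L receives the *label* 100, not the integer 100
      ASSIGN 100 TO L
      GOTO L, (100, 200)

C     After substitution: the label is converted to its numerical
C     value and the assigned GO TO becomes logical IFs; the final
C     STOP catches control variables outside the label list
      L = 100
      IF (L.EQ.100) GOTO 100
      IF (L.EQ.200) GOTO 200
      STOP
```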

 
PARSER_SUBSTITUTE_ASSIGNED_GOTO FALSE  

Assigned GO TOs without the optional list of labels are not processed. In other words, PIPS makes the optional list mandatory for substitution. It usually is quite easy to add the list of potential targets manually.

Also, ASSIGN statements cannot be used to define a FORMAT label. If the desugaring option is selected, an illegal program is produced by the PIPS parser.

Statement Function

This property controls the processing of Fortran statement functions by text substitution in the parser. No other processing is available: when substitution is disabled, the parser stops with an error message as soon as a statement function declaration is encountered.

The default used to be not to perform this unchecked replacement, which might change the semantics of the program because type coercion is not enforced and actual parameters are not assigned to intermediate variables. However, most statement functions do not require these extra steps and it is legal to perform the textual substitution. For user convenience, the default option is textual substitution.
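
For example, the textual substitution of a statement function amounts to the following (a sketch; note that the actual argument is duplicated rather than assigned to an intermediate variable):

```fortran
C     Original: POLY is a statement function
      POLY(X) = X*X + 1.0
      Y = POLY(A+B)

C     After textual substitution in the parser
      Y = (A+B)*(A+B) + 1.0
```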

Note that the parser does not have enough information to check the validity of the transformation, but a warning is issued if legality is doubtful. If strange results are obtained when executing codes transformed with PIPS, this property should be set to false.

A better method would be to represent them somehow as local functions in the internal representation, but the implications for pipsmake and other issues are clearly not all foreseen... (Fabien Coelho).

 
PARSER_EXPAND_STATEMENT_FUNCTIONS TRUE  

4.2.2 Declaration of HPFC Parser

This parser takes a different Fortran file but applies the same processing as the previous parser. The Fortran file is the result of the preprocessing of the original file by the hpfc_filter 8.3.2.1 phase, which extracts the directives and switches them to a Fortran 77 parsable form. As another side effect, this parser hides some callees from pipsmake. These callees are temporary functions used to encode HPF directives. Their call sites are removed from the code before full analyses are requested from PIPS. This parser is triggered automatically by the hpfc_close 8.3.2.5 phase when requested. It should never be selected or activated by hand.

hpfc_parser                     > MODULE.parsed_code
                                > MODULE.callees
        < PROGRAM.entities
        < MODULE.hpfc_filtered_file

4.2.3 Declaration of the C Parsers

Three C parsers are used by PIPS5 . The first one, called the C preprocessor parser, is used to break down a C file or a set of C files into multiple files, with one function per file and a global file with all external declarations, the compilation unit. This is performed when PIPS6 is launched and its workspace is created.

The second one is called the C parser. It is designed to parse the function files. The last one is called the compilation unit parser and it deals with the compilation unit file.

4.2.3.1 Language parsed by the C Parsers

The C parsers are all based on the same initial set of lexical and syntactic rules designed for C77. They support some C99 extensions such as VLA, declarations in for loops...

The language parsed is larger than the language handled interprocedurally by PIPS:

  1. recursion is not supported;
  2. pointers to function are not supported;
  3. internal functions, a gcc extension, are not supported.

4.2.3.2 Handling of C Code

A C file is seen in PIPS as a compilation unit, which contains all the object declarations that are global to this file, and as many module (function or procedure) definitions as are defined in this file.

Thus the compilation unit contains the file-global macros, the include statements, the local and global variable definitions, the type definitions, and the function declarations, if any are found in the C file.

When the PIPS workspace is created by the PIPS preprocessor, each C file is preprocessed7 using for instance gcc -E8 and broken into a new compilation unit file, which contains only the file-global variable declarations, the function declarations and the type definitions, and one C file for each C function defined in the initial C file.

The new compilation units must be parsed before the new files, each containing exactly one function definition, can be parsed. The new compilation units are named like the initial files, but with a bang (!) added to the name.

For example, considering a C file foo.c with 2 function definitions:

enum { N = 2008 };
typedef float data_t; 
data_t matrix[N][N]; 
extern int errno; 
 
int calc(data_t a[N][N]) { 
  [...] 
} 
 
int main(int argc, char *argv[]) { 
  [..] 
}

After preprocessing, it leads to a file foo.cpp_processed.c that is then split into a new foo!.cpp_processed.c compilation unit containing

enum { N = 2008 };
typedef float data_t; 
data_t matrix[N][N]; 
extern int errno; 
 
int calc(data_t a[N][N]); 
 
int main(int argc, char *argv[]);

and 2 module files containing the definitions of the 2 functions, a calc.c

int calc(data_t a[N][N]) { 
  [...] 
}

and a main.c

int main(int argc, char *argv[]) { 
  [..] 
}

Note that it is possible to have an empty compilation unit and no module file if the original file does not contain meaningful C information (such as an empty file containing only blank characters and so on).

4.2.3.3 Compilation Unit Parser

compilation_unit_parser         > COMPILATION_UNIT.declarations
        < COMPILATION_UNIT.c_source_file

The resource COMPILATION_UNIT.declarations produced by compilation_unit_parser is a special resource used to force the parsing of the new compilation unit before the parsing of its associated functions. It is in fact a hash table containing the file-global C keywords and typedef names defined in each compilation unit.

In fact, phase compilation_unit_parser also produces parsed_code and callees resources for the compilation unit. This is done to work around the fact that rule c_parser was invoked on compilation units by later phases, in particular for the computation of initial preconditions, breaking the declarations of function prototypes. These two resources are not declared here because pipsmake gets confused between the different rules to compute parsed code: there is no simple way to distinguish between compilation units and modules at some times and to handle them similarly at other times.

4.2.3.4 C Parser

c_parser                        > MODULE.parsed_code
                                > MODULE.callees
        < PROGRAM.entities
        < MODULE.c_source_file
        < MODULE.input_file_name
        < COMPILATION_UNIT.declarations

If you want to parse some C code using tpips, it is possible to select the C parser with

activate C_PARSER
but this is not necessary as the parser is selected according to the source file extension. Some useful properties to deal with a C program (have a look at the properties documentation) are:

PRETTYPRINT_C_CODE TRUE (obsolete, replaced by PRETTYPRINT_LANGUAGE "C")
PRETTYPRINT_STATEMENT_NUMBER FALSE
PRETTYPRINT_BLOCK_IF_ONLY TRUE

4.2.3.5 C Symbol Table

A prettyprint of the symbol table for a C module can be generated with passes parsed_symbol_table ?? and symbol_table ??.

The EXTENDED_VARIABLE_INFORMATION 4.2.3.5 property can be used to extend the information available for variables. By default the entity name, the offset and the size are printed. Using this property the type and the user name, which may be different from the internal name, are also displayed.

 
EXTENDED_VARIABLE_INFORMATION FALSE  

4.2.3.6 Properties Used by the C Parsers

The C_PARSER_RETURN_SUBSTITUTION 4.2.3.6 property can be used to handle properly multiple returns within one function. The current default value is false, which best preserves the source aspect but modifies the control flow because the calls to return are assumed to flow in sequence. If the property is set to true, C return statements are replaced, when necessary, either by a simple goto for void functions, or by an assignment of the returned value to a special variable followed by a goto. A unique return statement is placed at the syntactic end of the function. For functions with no return statement or with a unique return statement placed at the end of their bodies, this property is useless.

 
C_PARSER_RETURN_SUBSTITUTION FALSE  

The C99 for-loop with a declaration such as for(int i = a;...;...) can be represented in the RI with a naive representation such as:

{ 
  int i = a; 
  for(;...;...) 
}

This is done when the C_PARSER_GENERATE_NAIVE_C99_FOR_LOOP_DECLARATION 4.2.3.6 property is set to TRUE:

 
C_PARSER_GENERATE_NAIVE_C99_FOR_LOOP_DECLARATION FALSE  

Otherwise, other more or less faithful representations can be generated. For example, with some declaration splitting, we can generate a more representative version:

{ 
  int i; 
  for(i = a;...;...) 
}

if the C_PARSER_GENERATE_COMPACT_C99_FOR_LOOP_DECLARATION 4.2.3.6 property is set to FALSE.

 
C_PARSER_GENERATE_COMPACT_C99_FOR_LOOP_DECLARATION FALSE  

Otherwise, a more compact (but newer) representation can be generated, which may choke some parts of PIPS9, such as:

  statement with "int i;" declaration 
    instruction for(i = a;...;...) 

This representation is not yet implemented.

4.2.4 Fortran 90

The Fortran 90 parser is not integrated in pipsmake. It is activated earlier when the workspace is created.

4.3 Controlized Code (Hierarchical Control Flow Graph)

PIPS analyses and transformations take advantage of a hierarchical control flow graph (HCFG), which preserves structured part of code as such, and uses a control flow graph only when no syntactic representation is available (see [33]). The encoding of the relationship between structured and unstructured parts of code is explained elsewhere, mainly in the PIPS Internal Representation of Fortran and C code10 .

The controlizer 4.3 is the historical controlizer phase that removes GOTO statements in the parsed code and generates a similar representation with small CFGs. It was developed for Fortran 77 code.

The Fortran controlizer phase was too hacked and undocumented to be improved and debugged for C99 code, so a new version has been developed and documented; it is designed to be simpler and easier to understand. For comparison purposes, the Fortran controlizer phase can still be used.

controlizer                     > MODULE.code
        < PROGRAM.entities
        < MODULE.parsed_code

For debugging and validation purposes, by setting at most one of the PIPS_USE_OLD_CONTROLIZER or PIPS_USE_NEW_CONTROLIZER environment variables, you can force the use of the specific version of the controlizer you want to use. This overrides the setting made with activate.

Note that the controlizer choice impacts the HCFG when Fortran entries are used. If you do not know what Fortran entries are, it is deprecated stuff anyway...

The new_controlizer 4.3 removes GOTO statements in the parsed code and generates a similar representation with small CFGs. It is designed to work according to C and C99 standards. Sequences of sequence and variable declarations are handled properly. However, the prettyprinter is tuned for code generated by controlizer 4.3, which does not always minimize the number of goto statements regenerated.

The hierarchical control flow graph built by the controlizer 4.3 is pretty crude. The partial control flow graphs, called unstructured statements, are derived from syntactic constructs. The control scope of an unstructured is the smallest enclosing structured construct, whether a loop, a test or a sequence. Thus some statements, which might be seen as part of structured code, end up as nodes of an unstructured.

Note that sequences of statements are identified as such by controlizer 4.3. Each of them appears as a unique node.

Also, useless CONTINUE statements may be added as provisional landing pads and not removed. The exit node should never have successors but this may happen after some PIPS function calls. The exit node, as well as several other nodes, also may be unreachable. After clean up, there should be no unreachable node or the only unreachable node should be the exit node. Function unspaghettify 9.3.4.1 (see Section 9.3.4.1) is applied by default to clean up and to reduce the control flow graphs after controlizer 4.3.

The GOTO statements are transformed into arcs, but also into CONTINUE statements, to preserve as many user comments as possible.

The top statement of a module returned by the controlizer 4.3 always used to contain an unstructured instruction with only one node. Several phases in PIPS assumed that this was always the case, although other program transformations may well return any kind of top statement, most likely a block. This is no longer true: the top statement of a module may contain any kind of instruction.

Here is declared the C and C99 controlizer:

new_controlizer                     > MODULE.code
        < PROGRAM.entities
        < MODULE.parsed_code

Control restructuring eliminates empty sequences as well as the empty true or false branches of structured IFs. This semantic property of the PIPS Internal Representation of Fortran and C code11 is enforced by the effects, regions, hpfc and effects-generic libraries.

 
WARN_ABOUT_EMPTY_SEQUENCES FALSE  

By unsetting this property unspaghettify 9.3.4.1 is not applied implicitly in the controlizer phase.

 
UNSPAGHETTIFY_IN_CONTROLIZER TRUE  

The next property is used to convert C for loops into C while loops. The purpose is to speed up the re-use of Fortran analyses and transformation for C code. This property is set to false by default and should ultimately disappear. But for new user convenience, it is set to TRUE by activate_language() when the language is C.

 
FOR_TO_WHILE_LOOP_IN_CONTROLIZER FALSE  

The next property is used to convert C for loops into C do loops when syntactically possible. The conversion is not safe because the effect of the loop body on the loop index is not checked. The purpose is to speed up the re-use of Fortran analyses and transformation for C code. This property is set to false by default and should disappear soon. But for new user convenience, it is set to TRUE by activate_language() when the language is C.

 
FOR_TO_DO_LOOP_IN_CONTROLIZER FALSE  

This can also be explicitly applied by calling the phase described in § 9.3.4.4.

FORMAT Restructuring

To enable deeper code transformations, FORMATs can be gathered at the very beginning of the code or at the very end, according to the following options used in the unspaghettify or control restructuring phase.

 
GATHER_FORMATS_AT_BEGINNING FALSE  
 
GATHER_FORMATS_AT_END FALSE  

4.3.1 Properties for Clean Up Sequences

The following property displays statistics about cleaning up sequences and removing useless CONTINUE or empty statements.

 
CLEAN_UP_SEQUENCES_DISPLAY_STATISTICS FALSE  

There is a trade-off between keeping the comments associated to labels and gotos and the cleaning that can be done on the control graph.

By default, do not fuse empty control nodes that have labels or comments:

 
FUSE_CONTROL_NODES_WITH_COMMENTS_OR_LABEL FALSE  

By default, do not fuse sequences with internal declarations. Turning this to TRUE results in variable renamings when the same variable name is used at several places in the analyzed module.

 
CLEAN_UP_SEQUENCES_WITH_DECLARATIONS FALSE  

4.3.2 Symbol Table Related to a Module Code

The prettyprint of the symbol table for a Fortran or C module is generated with:

symbol_table        > MODULE.symbol_table_file
    < PROGRAM.entities
    < MODULE.code

4.4 Parallel Code

The internal representation includes special fields to declare parallel constructs such as parallel loops. A parallel code internal representation does not differ fundamentally from that of a sequential code.

Chapter 5
Pedagogical Phases

Although these phases should be spread elsewhere in this manual, we have gathered here first some pedagogical phases useful to jump into PIPS.

5.1 Using XML backend

A phase that displays, in debug mode, statements matching an XPath expression on the internal representation:

alias simple_xpath_test ’Output debug information about XPath matching’

simple_xpath_test > MODULE.code
  < PROGRAM.entities
  < MODULE.code

5.2 Prepending a comment

Prepends a comment to the first statement of a module. Useful to apply post-processing after PIPS.

alias prepend_comment ’Prepend a comment to the first statement of a module’

prepend_comment > MODULE.code
  < PROGRAM.entities
  < MODULE.code

The comment to add is selected by this property:

 
PREPEND_COMMENT "/* This comment is added by PREPEND_COMMENT phase */"  

5.3 Prepending a call

This phase inserts a call to function MY_TRACK just before the first statement of a module. Useful as a pedagogical example to explore the internal representation and Newgen. Not to be used for any practical purpose as it is buggy. Debugging it is a pedagogical exercise.

alias prepend_call ’Insert a call to MY_TRACK just before the first statement of a module’

prepend_call > MODULE.code
             > MODULE.callees
  < PROGRAM.entities
  < MODULE.code

The called function could be defined by this property:

 
PREPEND_CALL "MY_TRACK"  

but it is not.

5.4 Add a pragma to a module

This phase prepends or appends a pragma to a module.

alias add_pragma ’Prepends or append a pragma to the code of a module’

add_pragma > MODULE.code
  < PROGRAM.entities
  < MODULE.code

The pragma name can be defined by this property:

 
PRAGMA_NAME "MY_PRAGMA"  

The pragma can be appended or prepended thanks to this property:

 
PRAGMA_PREPEND TRUE  

The pass clear_pragma 5.4 clears all pragmas. This should be done on any input with unhandled pragmas, since we do not know what semantics we might break.

clear_pragma > MODULE.code
  < PROGRAM.entities
  < MODULE.code

The pass pragma_outliner 5.4 is used for outlining a sequence of statements contained between two given sentinel pragmas using properties PRAGMA_OUTLINER_BEGIN 5.4 and PRAGMA_OUTLINER_END 5.4. The name of the new function is controlled using PRAGMA_OUTLINER_PREFIX 5.4.

pragma_outliner      >     MODULE.code
    > MODULE.callees
    < PROGRAM.entities
    < MODULE.code
    < MODULE.cumulated_effects
    < MODULE.regions
    < CALLEES.summary_regions
    < MODULE.summary_regions
    < MODULE.transformers
    < MODULE.preconditions
 
PRAGMA_OUTLINER_BEGIN "begin"  
 
PRAGMA_OUTLINER_END "end"  
 
PRAGMA_OUTLINER_PREFIX "pips_outlined"  

Remove labels that are not useful

remove_useless_label > MODULE.code
  < PROGRAM.entities
  < MODULE.code

Loop labels can be kept thanks to this property:

 
REMOVE_USELESS_LABEL_KEEP_LOOP_LABEL FALSE  

Chapter 6
Static Analyses

Analyses encompass the computations of call graphs, the memory effects, reductions, use-def chains, dependence graphs, interprocedural checks (flinter), semantics information (transformers and preconditions), continuations, complexities, convex array regions, dynamic aliases and complementary regions.

6.1 Call Graph

All lists of callees are needed to build the global lists of callers for each module. The callers and callees lists are used by pipsmake to control top-down and bottom-up analyses. The call graph is assumed to be a DAG, i.e. no recursive cycle exists, but it is not necessarily connected.

The height of a module can be used to schedule bottom-up analyses. It is zero if the module has no callees. Else, it is the maximal height of the callees plus one.

The depth of a module can be used to schedule top-down analyses. It is zero if the module has no callers. Else, it is the maximal depth of the callers plus one.

callgraph                       > ALL.callers
                                > ALL.height
                                > ALL.depth
        < ALL.callees

The following pass generates a uDrawGraph1 version of the callgraph. It is quite partial since it should rely on a hypothetical "all callees, direct and indirect" resource.

alias dvcg_file ’Graphical Call Graph’
alias graph_of_calls ’For current module’
alias full_graph_of_calls ’For all modules’

graph_of_calls               > MODULE.dvcg_file
        < ALL.callees

full_graph_of_calls          > PROGRAM.dvcg_file
        < ALL.callees

6.2 Memory Effects

The data structures used to represent memory effects and their computation are described in [34]. Another description is available on line, in PIPS Internal Representation of Fortran and C code2 Technical Report.

Note that the standard name in the Dragon book is likely to be Gen and Kill sets in the standard data flow analysis framework, but PIPS uses the more general concept of effect developed by P. Jouvelot and D. Gifford [38] and its analyses are mostly based on the abstract syntax tree (AST) rather than the control flow graph (CFG).

6.2.1 Proper Memory Effects

The proper memory effects of a statement basically are a list of variables that may be read (used) or written (defined) by the statement. They are used to build use-def chains (see [1] or a later edition) and then the dependence graph.

Proper means that the effects of a compound statement do not include the effects of lower level statements. For instance, the body of a loop, true and false branches of a test statement, control nodes in an unstructured statement ... are ignored to compute the proper effects of a loop, a test or an unstructured.
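
For instance, on a small Fortran fragment, proper effects could be summarized as follows (an illustrative annotation, not actual PIPS output):

```fortran
      DO I = 1, N
C        proper effects of the DO header: read N, read/write I;
C        the body effects below are NOT included
         A(I) = B(I) + S
C        proper effects: read I, read B(I), read S, write A(I)
      ENDDO
```

The cumulated effects of the DO statement (see Section 6.2.3) would, in contrast, include the effects of its body.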

Two families of effects are computed: pointer_effects are effects in which intermediary access paths may refer to different memory locations at different program points; regular effects are constant path effects, which means that their intermediary access paths all refer to unique memory locations. The same distinction holds for convex array regions (see section 6.12).

proper_effects_with_points_to and proper_effects_with_pointer_values are alternatives to compute constant path proper effects using points-to (see subsection 6.13.5) or pointer values analyses (see subsection 6.13.7). This is still at an experimental stage.

Summary effects (see Section 6.2.4) of a called module are used to compute the proper effects at the corresponding call sites. They are translated from the callee’s scope into the caller’s scope. The translation is based on the actual-to-formal binding. If too many actual arguments are defined, a user warning is issued but the processing goes on because a simple semantics is available: ignore useless actual arguments. If too few actual arguments are provided, a user error is issued because the effects of the call are not defined.

Variables private to loops are handled like regular variables.

See proper_effects 6.2.1

proper_pointer_effects                  > MODULE.proper_pointer_effects
        < PROGRAM.entities
        < MODULE.code
        < CALLEES.summary_pointer_effects

proper_effects                  > MODULE.proper_effects
        < PROGRAM.entities
        < MODULE.code
        < CALLEES.summary_effects

When pointers are used, points-to information is useful to obtain precise proper memory effects.

Because points-to analysis is able to detect some cases of segfaults, variables that are not defined/written can nevertheless have a different abstract value at the beginning and at the end of a piece of code.

This leads to difficulties with memory effects. Firstly, if a piece of code is not reachable because a segfault always occurs before it is executed, it has no memory effects (as usual, the PIPS output is pretty surprising...). Secondly, cumulated memory effects can either be any effect that is linked to any execution of a piece of code or any effect that happens when the executions reach the end of the piece of code.

So EffectsWithPointsTo may contain weird results, either because the code always segfaults somewhere, or because the code might segfault when a function argument is not checked before it is used. Hence, effects disappear, or must effects become may effects.

proper_effects_with_points_to > MODULE.proper_effects
        < PROGRAM.entities
        < MODULE.code
        < MODULE.points_to
        < CALLEES.summary_effects

proper_effects_with_pointer_values > MODULE.proper_effects
        < PROGRAM.entities
        < MODULE.code
        < MODULE.simple_pointer_values
        < CALLEES.summary_effects

6.2.2 Filtered Proper Memory Effects

To be continued...by whom?

This phase collects information about where a given global variable is actually modified in the program.

filter_proper_effects         > MODULE.filtered_proper_effects
        < PROGRAM.entities
        < MODULE.code
        < MODULE.proper_effects
        < CALLEES.summary_effects

6.2.3 Cumulated Memory Effects

Cumulated effects of statements are lists of read or written variables, just like the proper effects (see Section 6.2.1).

Cumulated means that the effects of a compound statement, do loop, test or unstructured, include the effects of the lower level statements such as a loop body or a test branch.

For return, exit and abort statements (only for the main function, or for what is considered as the main function), cumulated effects will also add the reads on LUNs (Logical Units) that are present for the function. The goal of adding read LUNs for these statements is to improve OUT effects (and regions), especially for the last statements that write on LUNs.

cumulated_effects            > MODULE.cumulated_effects
        < PROGRAM.entities
        < MODULE.code
        < MODULE.proper_effects

TODO: inline documentation

cumulated_effects_with_points_to        > MODULE.cumulated_effects
        < PROGRAM.entities
        < MODULE.code
        < MODULE.proper_effects

TODO: inline documentation

cumulated_effects_with_pointer_values   > MODULE.cumulated_effects
        < PROGRAM.entities
        < MODULE.code
        < MODULE.proper_effects

TODO: inline documentation

cumulated_pointer_effects   > MODULE.cumulated_pointer_effects
        < PROGRAM.entities
        < MODULE.code
        < MODULE.proper_pointer_effects

TODO: inline documentation

cumulated_pointer_effects_with_points_to > MODULE.cumulated_pointer_effects
        < PROGRAM.entities
        < MODULE.code
        < MODULE.proper_pointer_effects
        < MODULE.points_to

TODO: inline documentation

cumulated_pointer_effects_with_pointer_values > MODULE.cumulated_pointer_effects
        < PROGRAM.entities
        < MODULE.code
        < MODULE.proper_pointer_effects
        < MODULE.simple_pointer_values

6.2.4 Summary Data Flow Information (SDFI)

Summary data flow information is the simplest interprocedural information needed to take procedure into account in a parallelizer. It was introduced in Parafrase (see [40]) under this name, but should be called summary memory effects in PIPS context.

The summary_effects 6.2.4 of a module are the cumulated memory effects of its top level statement (see Section 6.2.3), but effects on local dynamic variables are ignored (because they cannot be observed by the callers3 ) and subscript expressions of remaining effects are eliminated.
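
For instance, summary effects could be derived from cumulated effects as follows (an illustrative sketch):

```fortran
      SUBROUTINE FOO(A, N)
      REAL A(N)
      REAL T
C     T is a local dynamic variable: effects on it cannot be
C     observed by callers and are dropped from the summary
      T = A(1)
      A(N) = T + 1.0
      END
C     cumulated effects of the body: read A(1), N, T; write T, A(N)
C     summary effects of FOO: read A, read N, write A
C     (subscript expressions are eliminated)
```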

summary_pointer_effects                 > MODULE.summary_pointer_effects
        < PROGRAM.entities
        < MODULE.code
        < MODULE.cumulated_pointer_effects

summary_effects                 > MODULE.summary_effects
        < PROGRAM.entities
        < MODULE.code
        < MODULE.cumulated_effects

6.2.5 IN and OUT Effects

IN and OUT memory effects of a statement s are memory locations whose input values are used by statement s or whose output values are used by the continuation of statement s. Variables allocated in the statement are not part of the IN or OUT effects. Variables defined before they are used are not part of the IN effects. OUT effects require an interprocedural analysis4.
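
For instance (an illustrative sketch of the definitions):

```fortran
      K = 1
C     K is written before being read: K is not an IN effect
      S = K + M
C     M is an IN effect of this fragment: its input value is used.
C     If S is read later by the continuation, the write on S is
C     an OUT effect of this fragment.
```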

in_effects > MODULE.in_effects
           > MODULE.cumulated_in_effects
         < PROGRAM.entities
         < MODULE.code
         < MODULE.cumulated_effects
         < CALLEES.in_summary_effects

in_summary_effects > MODULE.in_summary_effects
        < PROGRAM.entities
        < MODULE.code
        < MODULE.in_effects

out_summary_effects > MODULE.out_summary_effects
        < PROGRAM.entities
        < MODULE.code
        < CALLERS.out_effects

out_effects > MODULE.out_effects
        < PROGRAM.entities
        < MODULE.code
        < MODULE.out_summary_effects
        < MODULE.cumulated_in_effects

6.2.6 Proper and Cumulated References

The concept of proper references is not yet clearly defined. The original idea is to keep track of the actual objects of Newgen domain reference used in the program representation of the current statement, while recording whether they correspond to a read or a write of the corresponding memory locations. Proper references are represented as effects.

For C programs, where memory accesses are not necessarily represented by objects of Newgen domain reference, the semantics of this analysis is unclear.

Cumulated references gather proper references over the program code, without taking into account the modification of memory stores by the program execution.

FC: I should implement real summary references?

proper_references       > MODULE.proper_references
        < PROGRAM.entities
        < MODULE.code
        < CALLEES.summary_effects

cumulated_references    > MODULE.cumulated_references
        < PROGRAM.entities
        < MODULE.code
        < MODULE.proper_references

6.2.7 Effect Properties

Effects are a first or second level analysis. They are analyzed using only some information about pointers, either none, or points-to or pointer values. They are used by many passes such as dependence graph analysis (Rice Pass), semantics analysis (Transformers passes), convex array regions analysis...

It is often tempting, useful or necessary to ignore some effects. It is always safer to ignore effects where they are used by a pass, to avoid possible inconsistencies with other passes using effects. However, it may be necessary to ignore some effects in an effect pass itself, because effects are merged and cannot be unmerged by a later pass. Of course, this is not a standard setting for PIPS, and the semantics of the resulting codes or later analyses is unknown in general, except to the person who makes the decision for a subset of input codes or for experimental reasons.

6.2.7.1 Effects Filtering

Effects on the variable whose name is given by this property are filtered out by phase filter_proper_effects 6.2.2.

 
EFFECTS_FILTER_ON_VARIABLE ""  

Property USER_EFFECTS_ON_STD_FILES 6.2.7.1 is used to control the way the user uses stdout, stdin and stderr. The default case (FALSE) means that the user does not modify these global variables. When set to TRUE, they are considered as user variables, and dereferencing them through calls to stdio functions leads to less precise effects.

 
USER_EFFECTS_ON_STD_FILES FALSE  

6.2.7.2 Checking Pointer Updates

When set to TRUE, EFFECTS_POINTER_MODIFICATION_CHECKING 6.2.7.2 enables pointer modification checking during the computation of cumulated effects and/or RW convex array regions. Since this is still experimental, its default value is FALSE. This property should disappear when pointer modification analyses are more mature.

 
EFFECTS_POINTER_MODIFICATION_CHECKING FALSE  

6.2.7.3 Dereferencing Effects

The default (and correct) behaviour for the computation of effects is to transform dereferencing paths into constant paths using the information available, either none or points-to or pointer values, and abstract locations used to represent sets of locations.

When property CONSTANT_PATH_EFFECTS 6.2.7.3 is set to FALSE, the latter transformation is skipped. Effects are then equivalent to pointer_effects. This property is available for backward compatibility and experimental purposes. It must be borne in mind that analyses and transformations using the resulting effects may yield incorrect results. This property also affects the computation of convex array regions.

 
CONSTANT_PATH_EFFECTS TRUE  
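The constant-path transformation can be illustrated with a small sketch; the points-to map, the effect encoding and the function below are invented for the example and are not the PIPS API:

```python
# Illustrative sketch: turning dereferencing paths into constant paths
# using points-to information. Names and data structures are invented
# for the example; PIPS uses its own internal representation.

def to_constant_path(effect, points_to):
    """Rewrite an effect on a dereferencing path like ('*', 'p')
    into effects on the locations p may point to."""
    action, path = effect
    if isinstance(path, tuple) and path[0] == '*':
        pointer = path[1]
        # fall back on an abstract location when nothing is known
        targets = points_to.get(pointer, ['*ANYWHERE*'])
        return [(action, t) for t in targets]
    return [effect]

points_to = {'p': ['x'], 'q': ['y', 'z']}  # q's target is not known exactly

# *p = 3;  -> a write on x
print(to_constant_path(('W', ('*', 'p')), points_to))
# *q = 3;  -> writes on y and z (an over-approximation)
print(to_constant_path(('W', ('*', 'q')), points_to))
# *r = 3;  with no points-to information -> write on an abstract location
print(to_constant_path(('W', ('*', 'r')), points_to))
```

When no points-to information is available, the sketch falls back on an abstract location, mirroring the behaviour described above.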

Since CONSTANT_PATH_EFFECTS 6.2.7.3 may be set to FALSE erroneously, some tests are included in conflicts testing to avoid generating wrong code. However, these tests are costly, and can be turned off by setting TRUST_CONSTANT_PATH_EFFECTS_IN_CONFLICTS 6.2.7.3 to FALSE. This must be used with care and only when there is no aliasing.

 
  TRUST_CONSTANT_PATH_EFFECTS_IN_CONFLICTS FALSE  

By default, EFFECTS_IGNORE_DEREFERENCING 6.2.7.3 is set to FALSE. When it is set to TRUE, effects due to a pointer-related address computation, as in *p=3; or p->i=3;, are simply dropped when filtering constant path effects. This must be used with extreme care, and only when pointer operations are known not to matter for the analysis performed because only a subset of input codes is used.

 
EFFECTS_IGNORE_DEREFERENCING FALSE  

6.2.7.4 Effects of References to a Variable Length Array (VLA)

Property VLA_EFFECT_READ 6.2.7.4 generates a read effect on the dimension variables of a variable-length array (VLA) at each use of the array. For instance, with an array declared as a[size], each occurrence of a, such as a[i], generates a READ effect on size. There is normally no reason to set it to FALSE, but it may be useful for parallelization purposes.

 
VLA_EFFECT_READ TRUE  
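A minimal sketch of the property's behaviour, with an invented effect encoding (not the PIPS implementation):

```python
# Illustrative sketch of VLA_EFFECT_READ: each reference to a VLA adds
# a READ effect on its dimension variables. All names are invented.

def effects_of_reference(array, index, vla_dims, vla_effect_read=True):
    """Return the read effects generated by the reference array[index]."""
    effects = [('R', index)]                 # the subscript is read
    if vla_effect_read:
        for dim in vla_dims.get(array, []):  # e.g. a[size] -> read of size
            effects.append(('R', dim))
    return effects

vla_dims = {'a': ['size']}  # array declared as a[size]
print(effects_of_reference('a', 'i', vla_dims))         # property TRUE
print(effects_of_reference('a', 'i', vla_dims, False))  # property FALSE
```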

6.2.7.5 Memory Effects vs Environment Effects

Property MEMORY_EFFECTS_ONLY 6.2.7.5 is used to restrict the action kind of an effect action to store. In other words, variable declarations and type declarations are not considered to alter the execution state when this property is set to TRUE. This is fine for Fortran code because variables cannot be declared among executable statements and because new types cannot be declared. But this leads to wrong results for C code when loop distribution or use-def elimination is performed.

Currently, PIPS does not have the capability to store default values depending on the source code language. The default value is TRUE to avoid disturbing too many phases of PIPS at the same time while environment and type declaration effects are introduced.

 
MEMORY_EFFECTS_ONLY TRUE  

6.2.7.6 Time Effects

Some programs measure their own execution times. Code placed between measurement points must not be moved out, as can happen when loops are distributed or, more generally, when instructions are rescheduled. When this property is turned on, a clock variable is updated whenever a time-related function is called; since loops updating this variable carry dependences, they are not parallelized. This is sufficient to avoid most problems, but not all of them, because the time effects of all other executed statements are kept implicit, i.e. the real-time clock is not updated for them, and loops can still be distributed. If time measurements are key, this property must be turned on. By default, it is turned off.

 
TIME_EFFECTS_USED FALSE  
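A sketch of the mechanism, assuming a hidden clock variable and an invented effect encoding (the set of time-related functions below is illustrative):

```python
# Sketch of TIME_EFFECTS_USED: calls to time-related functions are given
# a read-write effect on a hidden clock variable, so the dependence graph
# serializes code between measurement points. Names are illustrative.

TIME_FUNCTIONS = {'clock', 'gettimeofday', 'times'}

def call_effects(callee, time_effects_used):
    effects = []
    if time_effects_used and callee in TIME_FUNCTIONS:
        # a read-write pair on the hidden clock creates dependences
        # between successive measurement points
        effects += [('R', '__clock__'), ('W', '__clock__')]
    return effects

print(call_effects('gettimeofday', True))
print(call_effects('gettimeofday', False))  # no effect: calls may be reordered
```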

6.2.7.7 Effects of Unknown Functions

Source code is sometimes missing. PIPS5 has no way to guess the memory effects of a function whose source code is missing. Several approaches are possible to approximate the exact effects. Two optimistic ones are implemented: either we assume that the function only computes a result and has no side effects through pointer parameters, global variables or static variables (default option), or we assume the maximal possible effects through pointers (this should be clarified: for all pointers p, *p is written) but not through static or global variables.

 
MAXIMAL_PARAMETER_EFFECTS_FOR_UNKNOWN_FUNCTIONS FALSE  

For safety, a pessimistic option is also implemented: a maximal memory effect, *ANYMODULE*:*ANYWHERE*, is associated to such unknown functions.

 
MAXIMAL_EFFECTS_FOR_UNKNOWN_FUNCTIONS FALSE  

These two properties should not be true simultaneously.

6.2.7.8 Other Properties Impacting Effects

Property ALIASING_ACROSS_TYPES 6.13.8.1 has an impact on effect computation. When the locations used or defined by a memory effect are unknown, an abstract location is used to represent either all possible locations of the program (TRUE) or all locations of a certain type (FALSE).

6.3 Live Memory Access Paths

There are many cases in which it is necessary to know whether a variable may be used in the remainder of the execution of the analyzed application. For instance, a global variable cannot be privatized, nor a global array scalarized, if we do not know whether their values are used afterwards, unless copy-out code is generated, which is not currently implemented in the simplest algorithms. Similarly, preconditions do not need to propagate information about variables that are no longer alive.

Traditional liveness analyses deal with scalar variables. However, with C code, it is interesting to be able to distinguish between different structure fields, for instance, and it may also be interesting to deal with array regions. These analyses therefore use an internal representation based on effects, which makes it possible to deal with general memory access paths and to rely on the existing machinery for computing effects/regions.

For each statement or function, we compute two sets: a Live_in set contains the memory paths which are alive in the store preceding the statement execution, while a Live_out set contains the memory paths alive in the store immediately following the statement execution. For sequences of instructions, the Live_out set of an instruction is equal to the Live_in set of the next instruction. However, this is not true for the last statement of conditional or loop bodies and the first statements of the next instructions.
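The backward propagation over a sequence can be sketched as follows; plain variable names stand in for memory access paths, and the encoding is invented for illustration:

```python
# Backward liveness sketch for a statement sequence: Live_out of a
# statement is the Live_in of the next one; Live_in removes the paths
# written (killed) and adds the paths read. This is not the PIPS
# implementation, which handles general memory access paths.

def live_sets(stmts, live_out_of_last):
    """stmts: list of (written, read) sets. Returns [(live_in, live_out)]."""
    result = []
    live_out = set(live_out_of_last)
    for written, read in reversed(stmts):
        live_in = (live_out - written) | read
        result.append((live_in, live_out))
        live_out = live_in       # propagate backwards through the sequence
    return list(reversed(result))

# x = a + b; y = x * 2;   with y alive at the end
stmts = [({'x'}, {'a', 'b'}), ({'y'}, {'x'})]
for (li, lo), s in zip(live_sets(stmts, {'y'}), ['x=a+b', 'y=x*2']):
    print(s, 'Live_in =', sorted(li), 'Live_out =', sorted(lo))
```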

live_out_summary_paths     > MODULE.live_out_summary_paths
        < PROGRAM.entities
        < MODULE.code
        < CALLERS.live_out_paths

live_paths     > MODULE.live_in_paths
               > MODULE.live_out_paths
        < PROGRAM.entities
        < MODULE.code
        < MODULE.cumulated_effects
        < MODULE.in_effects
        < MODULE.live_out_summary_paths

live_in_summary_paths     > MODULE.live_in_summary_paths
        < PROGRAM.entities
        < MODULE.code
        < MODULE.live_in_paths

6.4 Reductions

The proper reductions are computed from a code.

proper_reductions > MODULE.proper_reductions
  < PROGRAM.entities
  < MODULE.code
  < MODULE.proper_references
  < CALLEES.summary_effects
  < CALLEES.summary_reductions

The cumulated reductions propagate the reductions in the code, upwards.

cumulated_reductions > MODULE.cumulated_reductions
  < PROGRAM.entities
  < MODULE.code
  < MODULE.proper_references
  < MODULE.cumulated_effects
  < MODULE.proper_reductions

This pass summarizes the reduction candidates found in a module for export to its callers. Should the summary effects be used to restrict attention to the variables of interest in the translation?

summary_reductions > MODULE.summary_reductions
  < PROGRAM.entities
  < MODULE.code
  < MODULE.cumulated_reductions
  < MODULE.summary_effects

Some possible (simple) transformations could be added to the code to mark reductions in loops, for later use in the parallelization.

The following is NOT implemented. Anyway, should the cumulated_reductions be simply used by the prettyprinter instead?

loop_reductions > MODULE.code
  < PROGRAM.entities
  < MODULE.code
  < MODULE.cumulated_reductions

6.4.1 Reduction Propagation

This phase tries to transform

{ 
 a = b + c; 
 r = r + a; 
}

into

{ 
 r = r + b; 
 r = r + c; 
}
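The rewrite can be sketched on a toy statement encoding, with assignments represented as (lhs, operands) pairs; this is an illustration only, not the PIPS implementation, which also checks the dependence graph:

```python
# Sketch of reduction propagation: when a temporary a = b + c only feeds
# the reduction r = r + a, the sum can be folded into two reduction
# updates on r. The encoding is invented; the sketch assumes the
# temporary is dead afterwards, which PIPS checks with the dependence graph.

def propagate_reduction(stmts):
    """Rewrite [('a', ['b','c']), ('r', ['r','a'])] when possible."""
    if len(stmts) == 2:
        (tmp, operands), (red, red_ops) = stmts
        if red_ops == [red, tmp]:  # r = r + tmp
            return [(red, [red, op]) for op in operands]
    return stmts

before = [('a', ['b', 'c']), ('r', ['r', 'a'])]
print(propagate_reduction(before))  # r = r + b; r = r + c
```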

reduction_propagation > MODULE.code
  < PROGRAM.entities
  < MODULE.code
  < MODULE.proper_reductions
  < MODULE.dg

6.4.2 Reduction Detection

This phase tries to transform

{ 
 a = b + c; 
 b = d + a; 
}

which hides a reduction on b into

{ 
 b = b + c; 
 b = d + b; 
}

when possible.

reduction_detection > MODULE.code
  < PROGRAM.entities
  < MODULE.code
  < MODULE.dg

6.5 Chains (Use-Def Chains)

Use-def and def-use chains are a standard data structure in optimizing compilers [1]. These chains are used as a first approximation of the dependence graph. Chains based on convex array regions (see Section 6.12) are more effective for interprocedural parallelization.

If chains based on convex array regions have been selected, the simplest dependence test must be used, because regions carry more information than any kind of preconditions: preconditions and loop bound information are already included in the region predicate.

6.5.1 Menu for Use-Def Chains

alias chains ’Use-Def Chains’

alias atomic_chains ’Standard’
alias region_chains ’Regions’
alias in_out_regions_chains ’In-Out Regions’

6.5.2 Standard Use-Def Chains (a.k.a. Atomic Chains)

The algorithm used to compute use-def chains is original because it is based on PIPS hierarchical control flow graph and not on a unique control flow graph.

This algorithm generates nonexistent dependences on loop indices. These dependence arcs appear between DO loop headers and implicit DO loops in IO statements, or between one DO loop header and unrelated DO loop bound expressions using that index variable. It is easy to spot the problem because loop indices are not privatized. A prettyprint option,

PRETTYPRINT_ALL_PRIVATE_VARIABLES 10.2.22.5.1

must be set to true to see if the loop index is privatized or not. The problem disappears when some loop indices are renamed.

The problem is due to the internal representation of DO loops: PIPS has no way to distinguish between initialization effects and increment effects. They have to be merged as proper loop effects. To reduce the problem, proper effects of DO loops do not include the index read effect due to the loop incrementation.

Artificial arcs are added to... (Pierre Jouvelot, help!).

atomic_chains                   > MODULE.chains
        < PROGRAM.entities
        < MODULE.code
        < MODULE.proper_effects
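The principle of use-def chains can be sketched for straight-line code; this is an illustration only, since PIPS actually works on its hierarchical control flow graph:

```python
# Use-def chains sketch for straight-line code: each use of a variable
# is linked to the reaching definition of that variable. This simple
# version ignores control flow entirely; PIPS works on its hierarchical
# control flow graph instead.

def use_def_chains(stmts):
    """stmts: list of (defined_var, used_vars). Returns chains
    {(stmt_index, used_var): def_stmt_index}."""
    last_def = {}
    chains = {}
    for i, (defined, used) in enumerate(stmts):
        for v in used:
            if v in last_def:           # link the use to its definition
                chains[(i, v)] = last_def[v]
        last_def[defined] = i           # this statement now defines `defined`
    return chains

# s0: x = ...; s1: y = x + 1; s2: x = y
stmts = [('x', []), ('y', ['x']), ('x', ['y'])]
print(use_def_chains(stmts))  # {(1, 'x'): 0, (2, 'y'): 1}
```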

6.5.3 READ/WRITE Region-Based Chains

Such chains are required for effective interprocedural parallelization. The dependence graph is annotated with proper regions, to avoid inaccuracy due to summarization at simple statement level (see Section 6.12).

Region-based chains are only compatible with the Rice Fast Dependence Graph option (see Section 6.6.1) which has been extended to deal with them6 . Other dependence tests do not use region descriptors (their convex system), because they cannot improve the Rice Fast Dependence test based on regions.

Regions chains are built using proper regions which are particular READ and WRITE regions. For simple statements (assignments, calls to intrinsic functions), summarization is avoided to preserve accuracy. At this inner level of the program control flow graph, the extra amount of memory necessary to store regions without computing their convex hull should not be too high compared to the expected gain for dependence analysis. For tests and loops, proper regions contain the regions associated to the condition or the range. And for external calls, proper regions are the summary regions of the callee translated into the caller’s name space, to which are merely appended the regions of the expressions passed as argument (no summarization for this step).

region_chains                   > MODULE.chains
        < PROGRAM.entities
        < MODULE.code
        < MODULE.proper_regions

6.5.4 IN/OUT Region-Based Chains

Beware: this option is for experimental use only; the resulting parallel code may not be equivalent to the input code (see the explanations below).

When in_out_regions_chains 6.5.4 is selected, IN and OUT regions (see Sections 6.12.5 and 6.12.8) are used at call sites instead of READ and WRITE regions. For all other statements, usual READ and WRITE regions are used.

As a consequence, arrays and scalars which could be declared as local in callees, but are exposed to callers because they are statically allocated or are formal parameters, are ignored, increasing the opportunities to detect parallel loops. But as the program transformation which consists in privatizing variables in modules is not yet implemented in PIPS, the code resulting from the parallelization with in_out_regions_chains 6.5.4 may not be equivalent to the original sequential code. The privatization here is non-standard: for instance, variables declared in commons or static should be stack allocated to avoid conflicts.

As for region-based chains (see Section 6.5.3), the simplest dependence test should be selected for best results.

in_out_regions_chains           > MODULE.chains
        < PROGRAM.entities
        < MODULE.code
        < MODULE.proper_regions
        < MODULE.in_regions
        < MODULE.out_regions

The following loop in Subroutine inout cannot be parallelized legally because Subroutine foo uses a static variable, y. However, PIPS will display this loop as (potentially) parallel if the in_out option is selected for use-def chain computation. Remember that IN/OUT regions require MUST regions to obtain interesting results (see Section 6.12.5).

      subroutine inout(a,n)
      real a(n)
      
      do i = 1, n
         call foo(a(i))
      enddo
      
      end
      
      subroutine foo(x)
      save y
      
      y = x
      x = x + y
      
      end

6.5.5 Chain Properties

6.5.5.1 Add use-use Chains

It is possible to put use-use dependence arcs in the dependence graph. This is useful for estimating cache memory traffic and communication on distributed memory machines (e.g. you can parallelize only communication-free loops). Beware of use-use dependences on scalar variables. You might expect scalars to be broadcast and/or replicated on each processor, but they are not handled that way by the parallelization process unless you manage to have them declared private with respect to all enclosing loops.

This feature is not supported by PIPS user interfaces. Results may be hard to interpret. It is useful to print the dependence graph.

 
KEEP_READ_READ_DEPENDENCE FALSE  

6.5.5.2 Remove Some Chains

It is possible to mask effects on local variables in loop bodies. This is dangerous with the current version of Allen & Kennedy, which assumes that all the edges are present, the ones on private variables being partially ignored except for loop distribution. In other words, this property should always be set to FALSE.

 
CHAINS_MASK_EFFECTS FALSE  

It is also possible to keep only true data-flow (Def – Use) dependences in the dependence graph. This was an attempt at mimicking the effect of direct dependence analysis and at avoiding privatization. However, direct dependence analysis is not implemented in the standard tests, and spurious def-use dependence arcs are taken into account.

 
CHAINS_DATAFLOW_DEPENDENCE_ONLY FALSE  

These last two properties are not consistent with PIPS current development (1995/96). It is assumed that all dependence arcs are present in the dependence graph. Phases using the latter should be able to filter out irrelevant arcs, e.g. pertaining to privatized variables.

6.6 Dependence Graph (DG)

The dependence graph is used primarily by the parallelization algorithms. A dependence graph is a refinement of use-def chains (Section 6.5). It is location-based and not value-based.

There are several ways to compute a dependence graph. Some of them are fast (Banerjee’s one for instance) but provide poor results, others might be slower (Rémi Triolet’s one for instance) but produce better results.

Three different dependence tests are available, all based on Fourier-Motzkin elimination improved with a heuristic for the integer domain. The fast version uses subscript expressions only (unless convex array regions were used to compute use-def chains, in which case regions are used instead). The full version uses subscript expressions and loop bounds. The semantics version uses subscript expressions and preconditions (see 6.9).
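The core of these tests, Fourier-Motzkin elimination, can be sketched as follows; the encoding of inequalities is invented for the example, and the integer-domain heuristic is omitted:

```python
# Fourier-Motzkin elimination sketch: project a variable out of a system
# of linear inequalities, each encoded as (coeffs, bound) meaning
# sum(coeffs[v] * v) <= bound. PIPS adds an integer-domain heuristic on
# top of this rational projection.

from fractions import Fraction

def eliminate(system, var):
    """Remove `var` by combining every lower bound with every upper bound."""
    lower, upper, rest = [], [], []
    for coeffs, b in system:
        c = coeffs.get(var, 0)
        (lower if c < 0 else upper if c > 0 else rest).append((coeffs, b))
    for lc, lb in lower:
        for uc, ub in upper:
            m = Fraction(-lc[var], uc[var])  # scale so `var` cancels out
            combined = {}
            for v in set(lc) | set(uc):
                if v == var:
                    continue
                a = lc.get(v, 0) + m * uc.get(v, 0)
                if a != 0:
                    combined[v] = a
            rest.append((combined, lb + m * ub))
    return rest

# Dependence-style question: can i <= j and j <= i - 1 hold together?
system = [({'i': 1, 'j': -1}, 0), ({'i': -1, 'j': 1}, -1)]
projected = eliminate(system, 'i')
# An inconsistent constant constraint 0 <= -1 proves independence.
print(any(not c and b < 0 for c, b in projected))
```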

Note that, for interprocedural parallelization, precise array regions only are used by the fast dependence test if the proper kind of use-def chains has been previously selected (see Section 6.5.3).

There are several kinds of dependence graphs. Most of them share the same overall data structure: a graph with labels on arcs and vertices. Usually, the main differences are in the labels that decorate arcs; for instance, Kennedy’s algorithm requires dependence levels (which loop actually creates the dependence) while algorithms originated from CSRD prefer DDVs (relations between loop indices when the dependence occurs). Dependence cones introduced in [26, 35, 36, 37] are even more precise [56].

The computations of dependence levels and dependence cones [55] are both implemented in PIPS. DDVs are not computed. Currently, only dependence levels are exploited by parallelization algorithms.

The dependence graph can be printed with or without filters (see Section 10.8). The standard dependence graph includes all arcs taken into account by the parallelization process (Allen & Kennedy [2]), except those that are due to scalar private variables and that impact the distribution process only. The loop carried dependence graph does not include intra-iteration dependences and is a good basis for iteration scheduling. The whole graph includes all arcs except input dependence arcs.

It is possible to gather some statistics about dependences by turning on property RICEDG_PROVIDE_STATISTICS 6.6.6.2 (more details in the properties). A Shell script from PIPS utilities, print-dg-statistics, can be used in combination to extract the most relevant information for a whole program.

During the parallelization phases, it is possible to ignore arcs related to the state of the libc, such as heap memory management, because thread-safe libraries perform the updates within critical sections. But these arcs are part of the use-def chains and of the dependence graph. If they were removed instead of being ignored, use-def elimination would remove all free statements.

The main contributors for the design and development of dependence analysis are Rémi Triolet, François Irigoin and Yi-qing Yang [55]. The code was improved by Corinne Ancourt and Béatrice Creusillet.

6.6.1 Menu for Dependence Tests

alias dg ’Dependence Test’

alias rice_fast_dependence_graph ’Preconditions Ignored’
alias rice_full_dependence_graph ’Loop Bounds Used’
alias rice_semantics_dependence_graph ’Preconditions Used’
alias rice_regions_dependence_graph ’Regions Used’

6.6.2 Fast Dependence Test

Use subscript expressions only, unless convex array regions were used to compute use-def chains, in which case regions are used instead. rice_regions_dependence_graph is a synonym for this rule, but emits a warning if region_chains is not selected.

rice_fast_dependence_graph      > MODULE.dg
        < PROGRAM.entities
        < MODULE.code
        < MODULE.chains
        < MODULE.cumulated_effects

6.6.3 Full Dependence Test

Use subscript expressions and loop bounds.

rice_full_dependence_graph      > MODULE.dg
        < PROGRAM.entities
        < MODULE.code
        < MODULE.chains
        < MODULE.cumulated_effects

6.6.4 Semantics Dependence Test

Uses subscript expressions and preconditions (see 6.9).

rice_semantics_dependence_graph > MODULE.dg
        < PROGRAM.entities
        < MODULE.code
        < MODULE.chains
        < MODULE.preconditions
        < MODULE.cumulated_effects

6.6.5 Dependence Test with Convex Array Regions

Synonym for rice_fast_dependence_graph, except that it emits a warning when region_chains is not selected.

rice_regions_dependence_graph      > MODULE.dg
        < PROGRAM.entities
        < MODULE.code
        < MODULE.chains
        < MODULE.cumulated_effects

6.6.6 Dependence Properties (Ricedg)

6.6.6.1 Dependence Test Selection

This property seems to be now obsolete. The dependence test choice is now controlled directly and only by rules in pipsmake. The procedures called by these rules may use this property. Anyway, it is useless to set it manually.

 
DEPENDENCE_TEST "full"  

6.6.6.2 Statistics

Provide the following counts during the dependence test: numbers of dependences and independences (fields 1-10), dimensions of referenced arrays and dependence natures (fields 11-25) and the same information for constant dependences (fields 26-40), decomposition of the dependence test into elementary steps (fields 41-49), and use and complexity of Fourier-Motzkin’s pair-wise elimination (fields 50, 51 and 52-68).

The results are stored in the current workspace in MODULE.resulttestfast, MODULE.resulttestfull, or MODULE.resulttestseman according to the test selected.

 
RICEDG_PROVIDE_STATISTICS FALSE  

Provide the statistics above and count all array reference pairs, including those involved in call statements.

 
RICEDG_STATISTICS_ALL_ARRAYS FALSE  

6.6.6.3 Algorithmic Dependences

This property can be set to take into account only true flow dependences (Def – Use) during the computation of SCCs by the Allen & Kennedy algorithm.

Note that this is different from the CHAINS_DATAFLOW_DEPENDENCE_ONLY property, which is set to compute a partial data dependence graph.

Warning: if set, this property may yield incorrect parallel code because dynamic single assignment is not guaranteed.

 
RICE_DATAFLOW_DEPENDENCE_ONLY FALSE  

6.6.6.4 Optimization

The default option is to compute the dependence graph only for loops which can be parallelized using the Allen & Kennedy algorithm. However, it is possible to compute the dependences in all cases, even for loops containing tests, gotos, etc., by setting this option to TRUE.

Of course, this information is not used by the parallelization phase which is restricted to loops meeting the A&K conditions. By the way, the hierarchical control flow graph is not exploited either by the parallelization phase.

 
COMPUTE_ALL_DEPENDENCES FALSE  

6.7 Flinter

Function flinter 6.7 performs some intra and interprocedural checks about formal/actual argument pairs, use of COMMONs,... It was developed by Laurent Aniort and Fabien Coelho. Ronan Keryell added the uninitialized variable checking.

alias flinted_file ’Flint View’
flinter                         > MODULE.flinted_file
        < PROGRAM.entities
        < MODULE.code
        < CALLEES.code
        < MODULE.proper_effects
        < MODULE.chains

In the past, flinter 6.7 used to require MODULE.summary_effects to check the parameter passing modes and to make sure that no module would attempt an assignment on an expression. However, this kind of bug is detected by the effect analysis… which was required by flinter.

Resource CALLEES.code is not explicitly required but it produces the global symbols which function flinter 6.7 needs to check parameter lists.

6.8 Loop Statistics

Computes statistics about loops in a module: the number of perfectly and imperfectly nested loops and their depths, as well as the number of nested loops which can be handled by our algorithm.

loop_statistics > MODULE.stats_file
        < PROGRAM.entities
        < MODULE.code

Note: it does not seem to behave like a standard analysis, associating information to the internal representation. Instead, an ASCII file seems to be created.

6.9 Semantics Analysis

PIPS semantics analysis mostly targets integer scalar variables. It is a two-pass process, with a bottom-up pass computing transformers 6.9.1 and a top-down pass propagating preconditions 6.9.2. Transformers and preconditions are especially powerful kinds of return and jump functions [12]. They abstract relations between program states with polyhedra and encompass most standard interprocedural constant propagations as well as most interval analyses. It is a powerful relational symbolic analysis.

Unlike [16], their computation is based on the PIPS Hierarchical Control Flow Graph and on syntactic constructs instead of a standard flow graph. The best presentation of this part of PIPS is in [27].

A similar analysis is available in Parafrase-2 []. It handles polynomial equations between scalar integer variables. SUIF [] also performs some kind of semantics analysis.

The semantics analysis part of PIPS was designed and developed by François Irigoin.

6.9.1 Transformers

RK: The following is hard to read without any example for someone that knows nothing about PIPS... FI: do you want to have everything in this documentation?

A transformer is an approximate relation between the symbolic initial values of scalar variables and their values after the execution of a statement, simple or compound (see [34] and [27]). In abstract interpretation terminology, a transformer is an abstract command linking the input abstract state of a statement and its output abstract state.
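A toy illustration of transformers and their composition, restricted to affine maps over single variables; the encoding is invented for the example, since PIPS uses polyhedral relations over several variables:

```python
# Sketch of a transformer as an affine relation between the initial
# value var#init and the final value of var. Transformers for successive
# statements compose by substitution; the representation is invented.

def compose(t1, t2):
    """Each transformer maps var -> (coeff, const), meaning
    var == coeff * var#init + const. compose(t1, t2) executes t1 then t2."""
    result = {}
    for var, (a2, b2) in t2.items():
        a1, b1 = t1.get(var, (1, 0))   # identity if t1 leaves var alone
        result[var] = (a2 * a1, a2 * b1 + b2)
    for var, ab in t1.items():
        result.setdefault(var, ab)     # keep t1's relation if t2 ignores var
    return result

inc = {'i': (1, 1)}      # i = i + 1  : i == i#init + 1
double = {'i': (2, 0)}   # i = 2 * i  : i == 2 * i#init
t = compose(inc, double) # i = i + 1; i = 2 * i
print(t)                 # i == 2 * i#init + 2
```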

By default, only integer scalar variables are analyzed, but properties can be set to handle boolean, string and floating point scalar variables7 : SEMANTICS_ANALYZE_SCALAR_INTEGER_VARIABLES 6.9.4.1, SEMANTICS_ANALYZE_SCALAR_BOOLEAN_VARIABLES 6.9.4.1, SEMANTICS_ANALYZE_SCALAR_STRING_VARIABLES 6.9.4.1, SEMANTICS_ANALYZE_SCALAR_FLOAT_VARIABLES 6.9.4.1, SEMANTICS_ANALYZE_SCALAR_COMPLEX_VARIABLES 6.9.4.1, SEMANTICS_ANALYZE_SCALAR_POINTER_VARIABLES 6.9.4.1 and SEMANTICS_ANALYZE_CONSTANT_PATH 6.9.4.1.

Transformers can be computed intraprocedurally by looking at each function independently or they can be computed interprocedurally starting with the leaves of the call tree8 .

Intraprocedural algorithms use cumulated_effects 6.2.3 to handle procedure calls correctly. In some respect, they are interprocedural since call statements are accepted. Interprocedural algorithms use the summary_transformer 6.9.1.8 of the called procedures.

Fast algorithms use a very primitive non-iterative transitive closure algorithm (two possible versions: flow sensitive or flow insensitive). Full algorithms use a transitive closure algorithm based on vector subspaces (i.e. à la Karr [39]) or one based on discrete derivatives [29, 5]. The iterative fixpoint algorithm for transformers (i.e. Halbwachs/Cousot [16]) is implemented but not used, because the transitive closure algorithms are faster and their results have been sufficient up to now. Property SEMANTICS_FIX_POINT_OPERATOR 6.9.4.8 is set to select the transitive closure algorithm used.

Additional information, such as array declarations and array references, can be used to improve transformers. See the property documentation for:

SEMANTICS_TRUST_ARRAY_DECLARATIONS 6.9.4.2 SEMANTICS_TRUST_ARRAY_REFERENCES 6.9.4.2

Within one procedure, transformers can be computed in forward mode, using precondition information gathered along the way. Transformers can also be recomputed once the preconditions are available. In both cases, more precise transformers are obtained because statements can be better modeled using precondition information. For instance, a non-linear expression can turn out to be linear because the values of some variables are numerically known and can be used to simplify the initial expression. See properties:

SEMANTICS_RECOMPUTE_EXPRESSION_TRANSFORMERS 6.9.4.6

SEMANTICS_COMPUTE_TRANSFORMERS_IN_CONTEXT 6.9.4.6

SEMANTICS_RECOMPUTE_FIX_POINTS_WITH_PRECONDITIONS 6.9.4.8

and phase refine_transformers 6.9.1.7.

Unstructured control flow graphs can lead to very long transformer computations, whose results are usually not interesting. Their sizes are limited by two properties:

SEMANTICS_MAX_CFG_SIZE2 6.9.4.5 SEMANTICS_MAX_CFG_SIZE1 6.9.4.5

discussed below.

Default values were set in the early nineties to obtain results fast enough for live demonstrations. They have not been changed, to preserve the non-regression tests. However, since 2005, processors have been fast enough to use the most precise options in all cases.

A transformer map contains a transformer for each statement of a module. It is a mapping from statements to transformers (type statement_mapping, which is not a NewGen file). Transformer maps are stored on and retrieved from disk by pipsdbm.

6.9.1.1 Menu for Transformers

alias transformers ’Transformers’
alias transformers_intra_fast ’Quick Intra-Procedural Computation’
alias transformers_inter_fast ’Quick Inter-Procedural Computation’
alias transformers_intra_full ’Full Intra-Procedural Computation’
alias transformers_inter_full ’Full Inter-Procedural Computation’
alias transformers_inter_full_with_points_to ’Full Inter-Procedural with points-to Computation’
alias refine_transformers ’Refine Transformers’

6.9.1.2 Fast Intraprocedural Transformers

Build the fast intraprocedural transformers.

transformers_intra_fast         > MODULE.transformers
        < PROGRAM.entities
        < MODULE.code
        < MODULE.cumulated_effects
        < MODULE.summary_effects
        < MODULE.proper_effects

6.9.1.3 Full Intraprocedural Transformers

Build the improved intraprocedural transformers.

transformers_intra_full         > MODULE.transformers
        < PROGRAM.entities
        < MODULE.code
        < MODULE.cumulated_effects
        < MODULE.summary_effects
        < MODULE.proper_effects

6.9.1.4 Fast Interprocedural Transformers

Build the fast interprocedural transformers.

transformers_inter_fast         > MODULE.transformers
        < PROGRAM.entities
        < MODULE.code
        < MODULE.cumulated_effects
        < MODULE.summary_effects
        < CALLEES.summary_transformer
        < MODULE.proper_effects
        < PROGRAM.program_precondition

6.9.1.5 Full Interprocedural Transformers

Build the improved interprocedural transformers (this should be used as the default option).

transformers_inter_full         > MODULE.transformers
        < PROGRAM.entities
        < MODULE.code
        < MODULE.cumulated_effects
        < MODULE.summary_effects
        < CALLEES.summary_transformer
        < MODULE.proper_effects
        < PROGRAM.program_precondition

6.9.1.6 Full Interprocedural Transformers with points-to

Build the improved interprocedural transformers, using points-to information.

transformers_inter_full_with_points_to         > MODULE.transformers
        < PROGRAM.entities
        < MODULE.code
        < MODULE.points_to
        < MODULE.cumulated_effects
        < MODULE.summary_effects
        < CALLEES.summary_transformer
        < MODULE.proper_effects
        < PROGRAM.program_precondition

6.9.1.7 Refine Full Interprocedural Transformers

Rebuild the interprocedural transformers using interprocedural preconditions. Intraprocedural preconditions are also used to refine all transformers.

refine_transformers         > MODULE.transformers
        < PROGRAM.entities
        < MODULE.code
        < MODULE.cumulated_effects
        < MODULE.summary_effects
        < CALLEES.summary_transformer
        < MODULE.proper_effects
        < MODULE.transformers
        < MODULE.preconditions
        < MODULE.summary_precondition
        < PROGRAM.program_precondition

6.9.1.8 Summary Transformer

A summary transformer is an interprocedural version of the module statement transformer, obtained by eliminating dynamic local, a.k.a. stack allocated, variables. The filtering is based on the module summary effects. Note: each module has a UNIQUE top-level statement.

A summary_transformer 6.9.1.8 is of Newgen type transformer.

summary_transformer             > MODULE.summary_transformer
        < PROGRAM.entities
        < MODULE.transformers
        < MODULE.summary_effects

6.9.2 Preconditions

A precondition for a statement s in a module m is a predicate true for every state reachable from the initial state of m, in which s is executed. A precondition is of NewGen type ”transformer” (see PIPS Internal Representation of Fortran and C code9 ) and preconditions is of type statement_mapping.

Option preconditions_intra 6.9.2.5 associates a precondition to each statement, assuming that no information is available at the module entry point.

Inter-procedural preconditions may be computed with intra-procedural transformers but the benefit is not clear. Intra-procedural preconditions may be computed with inter-procedural transformers. This is faster than a full interprocedural analysis because there is no need for a top-down propagation of summary preconditions. It is compatible with code transformations like partial_eval 9.4.2, simplify_control 9.3.1 and dead_code_elimination 9.3.2.

These two options for transformer and precondition computations are independent: transformers_inter_full 6.9.1.5 and preconditions_inter_full 6.9.2.7 must both be selected, independently, to obtain the best possible results. These two options are recommended.

6.9.2.1 Initial Precondition or Program Precondition

All DATA initializations contribute to the global initial state of the program. The contribution of each module is computed independently. Note that variables statically initialized behave as static variables and are preserved between calls according to Fortran standard. The module initial states are abstracted by an initial precondition based on integer scalar variables only.

Note: To be extended to handle C code. To be extended to handle properly unknown modules.

initial_precondition     > MODULE.initial_precondition
        < PROGRAM.entities
        < MODULE.code
        < MODULE.cumulated_effects
        < MODULE.summary_effects

All initial preconditions, including the initial precondition for the main, are combined to define the program precondition which is an abstraction of the program initial state.

program_precondition     > PROGRAM.program_precondition
        < PROGRAM.entities
        < ALL.initial_precondition

The program precondition can only be used for the initial state of the main procedure. Although it appears below as an input of all interprocedural analyses and is always computed, it is only used when a main procedure is available.

6.9.2.2 Intraprocedural Summary Precondition

A summary precondition is of type ”transformer”, but its argument list must be empty since it is a simple predicate on the initial state. So it is in fact a state predicate.

The intraprocedural summary precondition uses the DATA statements for the main module and is the TRUE constant for all other modules.

intraprocedural_summary_precondition            > MODULE.summary_precondition
        < PROGRAM.entities
        < MODULE.initial_precondition

Interprocedural summary preconditions can be requested instead. They are not described in the same section in order to introduce the summary precondition resource at the right place in pipsmake.rc.

No menu is declared to select either intra- or interprocedural summary preconditions.

6.9.2.3 Interprocedural Summary Precondition

By default, summary preconditions are computed intraprocedurally. The interprocedural option must be explicitly activated.

An interprocedural summary precondition for a module is derived from all its call sites. Of course, preconditions must be known for all its callers’ statements. The summary precondition is the convex hull of all call site preconditions, translated into a proper environment which is not necessarily the module’s frame. Because of invisible global and static variables and aliasing, it is difficult for a caller to know which variables might be used by the callee to represent a given memory location. To avoid the problem, the current summary precondition is always translated into the caller’s frame. So each module must first translate its summary precondition, when receiving it from the resource manager (pipsdbm), before using it.

Note: the previous algorithm was based on an on-the-fly reduction by convex hull. Each time a call site was encountered while computing a module’s preconditions, the callee’s summary precondition was updated. This old scheme was more efficient but not compatible with program transformations, because it was impossible to know when the summary preconditions of the modules had to be reset to the infeasible (a.k.a. empty) precondition.

An infeasible precondition means that the module is never called although a main is present in the workspace. If no main module is available, a TRUE precondition is generated. Note that, in both cases, the impact of static initializations propagated by link editing is taken into account, although this is prohibited by the Fortran Standard, which requires a BLOCKDATA construct for such initializations. In other words, a module which is never called has an impact on the program execution and its declarations should not be destroyed.

interprocedural_summary_precondition            > MODULE.summary_precondition
        < PROGRAM.entities
        < PROGRAM.program_precondition
        < CALLERS.preconditions
        < MODULE.callers

The following rule is obsolete. It is context sensitive and its results depend on the history of commands performed on the workspace.

summary_precondition            > MODULE.summary_precondition
        < PROGRAM.entities
        < CALLERS.preconditions
        < MODULE.callers

6.9.2.4 Menu for Preconditions

alias preconditions ’Preconditions’

alias preconditions_intra ’Intra-Procedural Analysis’
alias preconditions_inter_fast ’Quick Inter-Procedural Analysis’
alias preconditions_inter_full ’Full Inter-Procedural Analysis’
alias preconditions_intra_fast ’Fast Intra-Procedural Analysis’

6.9.2.5 Intra-Procedural Preconditions

Only build the preconditions in a module without any interprocedural propagation. The fast version uses a fast but crude approximation of preconditions for unstructured code.

preconditions_intra            > MODULE.preconditions
        < PROGRAM.entities
        < MODULE.cumulated_effects
        < MODULE.transformers
        < MODULE.summary_effects
        < MODULE.summary_transformer
        < MODULE.summary_precondition
        < MODULE.code

preconditions_intra_fast            > MODULE.preconditions
        < PROGRAM.entities
        < MODULE.cumulated_effects
        < MODULE.transformers
        < MODULE.summary_effects
        < MODULE.summary_transformer
        < MODULE.summary_precondition
        < MODULE.code

6.9.2.6 Fast Inter-Procedural Preconditions

Option preconditions_inter_fast 6.9.2.6 uses the module’s own precondition derived from its callers as initial state value and propagates it downwards in the module statement.

The fast versions use no fix-point operations for loops.


preconditions_inter_fast        > MODULE.preconditions
        < PROGRAM.entities
        < PROGRAM.program_precondition
        < MODULE.code
        < MODULE.cumulated_effects
        < MODULE.transformers
        < MODULE.summary_precondition
        < MODULE.summary_effects
        < CALLEES.summary_effects
        < MODULE.summary_transformer

6.9.2.7 Full Inter-Procedural Preconditions

Option preconditions_inter_full 6.9.2.7 uses the module’s own precondition derived from its callers as initial state value and propagates it downwards in the module statement.

The full versions use fix-point operations for loops.

preconditions_inter_full        > MODULE.preconditions
        < PROGRAM.entities
        < PROGRAM.program_precondition
        < MODULE.code
        < MODULE.cumulated_effects
        < MODULE.transformers
        < MODULE.summary_precondition
        < MODULE.summary_effects
        < CALLEES.summary_transformer
        < MODULE.summary_transformer

6.9.3 Total Preconditions

Total preconditions are interesting to optimize the nominal behavior of a terminating application. It is assumed that the application ends in the main procedure. All other exits, aborts or stops, explicit or implicit such as buffer overflows, zero divides and null pointer dereferences, are considered exceptions. This also applies at the module level. Modules nominally return. Other control flows are considered exceptions. Non-terminating modules have an empty total precondition10 . The standard preconditions can be refined by conjunction with the total preconditions to get information about the nominal behavior. Similar sources of increased accuracy are the array declarations and the array references, which can be exploited directly with the properties described in Section 6.9.4.2. These two properties should be set to true whenever possible.

Hence, a total precondition for a statement s in a module m is a predicate true for every state from which the final state of m, in which s is executed, is reached. It is an over-approximation of the theoretical total precondition. So, if the predicate is false, the final control state cannot be reached. A total precondition is of NewGen type ”transformer” (see PIPS Internal Representation of Fortran and C code11 ) and total_preconditions is of type statement_mapping.

The relationship with continuations (see Section 6.10) is not clear. Total preconditions should be more general, but no must version exists.

Option total_preconditions_intra 6.9.3.2 associates a precondition to each statement, assuming that no information is available at the module return point.

Inter-procedural total preconditions may be computed with intra-procedural transformers but the benefit is not clear. Intra-procedural total preconditions may be computed with inter-procedural transformers. This is faster than a full interprocedural analysis because there is no need for a top-down propagation of summary total postconditions.

Since these two options for transformer and total precondition computations are independent, transformers_inter_full 6.9.1.5 and total_preconditions_inter 6.9.3.3 must both be selected, independently, to obtain the best possible results.

Status: This is a set of experimental passes. The intraprocedural part is implemented. The interprocedural part is not implemented yet, waiting for an expressed practical interest. Neither C for loops nor repeat loops are supported.

6.9.3.1 Menu for Total Preconditions

alias total_preconditions ’Total Preconditions’

alias total_preconditions_intra ’Total Intra-Procedural Analysis’
alias total_preconditions_inter ’Total Inter-Procedural Analysis’

6.9.3.2 Intra-Procedural Total Preconditions

Only build the total preconditions in a module without any interprocedural propagation. No specific condition must be met when reaching a RETURN statement.

total_preconditions_intra            > MODULE.total_preconditions
        < PROGRAM.entities
        < MODULE.cumulated_effects
        < MODULE.transformers
        < MODULE.preconditions
        < MODULE.summary_effects
        < MODULE.summary_transformer
        < MODULE.code

6.9.3.3 Inter-Procedural Total Preconditions

Option total_preconditions_inter 6.9.3.3 uses the module’s own total postcondition derived from its callers as final state value and propagates it backwards in the module statement. This total module postcondition must be true when the RETURN statement is reached.


total_preconditions_inter        > MODULE.total_preconditions
        < PROGRAM.entities
        < PROGRAM.program_postcondition
        < MODULE.code
        < MODULE.cumulated_effects
        < MODULE.transformers
        < MODULE.preconditions
        < MODULE.summary_total_postcondition
        < MODULE.summary_effects
        < CALLEES.summary_effects
        < MODULE.summary_transformer

The program postcondition is only used for the main module.

6.9.3.4 Summary Total Precondition

The summary total precondition of a module is the total precondition of its statement limited to information observable by callers, just like a summary transformer (see Section 6.9.1.8).

A summary total precondition is of type ”transformer”.

summary_total_precondition            > MODULE.summary_total_precondition
        < PROGRAM.entities
        < CALLERS.total_preconditions

6.9.3.5 Summary Total Postcondition

A final postcondition for a module is derived from all its call sites. Of course, total postconditions must be known for all its callers’ statements. The summary total postcondition is the convex hull of all call site total postconditions, translated into a proper environment which is not necessarily the module’s frame. Because of invisible global and static variables and aliasing, it is difficult for a caller to know which variables might be used by the callee to represent a given memory location. To avoid the problem, the current summary total postcondition is always translated into the caller’s frame. So each module must first translate its summary total postcondition, when receiving it from the resource manager (pipsdbm), before using it.

A summary total postcondition is of type ”transformer”.

summary_total_postcondition            > MODULE.summary_total_postcondition
        < PROGRAM.entities
        < CALLERS.total_preconditions
        < MODULE.callers

6.9.3.6 Final Postcondition

The program postcondition cannot be derived from the source code. It should be defined explicitly by the user. By default, the predicate is always true. But you might want some variables to have specific values, e.g. KMAX==1, signs, e.g. KMAX>1, or relationships, e.g. KMAX>JMAX.

program_postcondition     > PROGRAM.program_postcondition

6.9.4 Semantic Analysis Properties

6.9.4.1 Value types

By default, the semantic analysis is restricted to scalar integer variables as they are key variables to understand scientific code behavior. However it is possible to analyze scalar variables with other data types. Fortran LOGICAL variables are represented as 0/1 integers. Character string constants and floating point constants are represented as undefined values.

The analysis is thus limited to constant propagation for character strings and floating point values whereas integer, boolean and pointer variables are processed with a relational analysis.

Character string constants of fixed maximal length could be translated into integers, but the benefit has not yet been assessed because they are not much used in the benchmarks and commercial applications we have studied. The risk is to increase significantly the number of overflows encountered during the analysis.

For the pointer analysis, it is strongly recommended to activate proper_effects_with_points_to 6.2.1 before performing this analysis.

In interprocedural analysis, or in the presence of formal parameters, it is strongly recommended to set SEMANTICS_ANALYZE_CONSTANT_PATH 6.9.4.1 to TRUE before performing the pointer analysis. SEMANTICS_ANALYZE_CONSTANT_PATH 6.9.4.1 may also be used to analyze structures.

 
SEMANTICS_ANALYZE_SCALAR_INTEGER_VARIABLES TRUE  
 
SEMANTICS_ANALYZE_SCALAR_BOOLEAN_VARIABLES FALSE  
 
SEMANTICS_ANALYZE_SCALAR_STRING_VARIABLES FALSE  
 
SEMANTICS_ANALYZE_SCALAR_FLOAT_VARIABLES FALSE  
 
SEMANTICS_ANALYZE_SCALAR_COMPLEX_VARIABLES FALSE  
 
SEMANTICS_ANALYZE_SCALAR_POINTER_VARIABLES FALSE  
 
SEMANTICS_ANALYZE_CONSTANT_PATH FALSE  
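As a sketch, the properties above are typically overridden in a tpips session before the analyses are requested; this fragment assumes a tpips script (the chosen rule is only an example).

```
setproperty SEMANTICS_ANALYZE_SCALAR_POINTER_VARIABLES TRUE
setproperty SEMANTICS_ANALYZE_CONSTANT_PATH TRUE
activate TRANSFORMERS_INTER_FULL
```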

6.9.4.2 Array Declarations and Accesses

For every module, array declarations are assumed to be correct with respect to the standard: the upper bound must be greater than or equal to the lower bound. When implicit, the lower bound is one. The star upper bound is neglected.

This property is turned off by default because it might slow down PIPS quite a lot without adding any useful information because loop bounds are usually different from array bounds.

 
SEMANTICS_TRUST_ARRAY_DECLARATIONS FALSE  

For every module, array references are assumed to be correct with respect to the declarations: the subscript expressions must have values lower than or equal to the upper bound and greater than or equal to the lower bound.

This property is turned off by default because it might slow down PIPS quite a lot without adding any useful information.

 
SEMANTICS_TRUST_ARRAY_REFERENCES FALSE  

6.9.4.3 Type Information

Type information for integer variables is ignored by default. The behavior of unbounded mathematical integers is assumed and no wrap-around is assumed to ever happen. The properties described here could as well be named:

SEMANTICS_ASSUME_NO_INTEGER_OVERFLOW

Type range information is difficult to turn into useful information. It implies some handling of wrap-around behaviors. It is likely to cause lots of overflows with int and long int variables. It should be used for unsigned char and unsigned short int only with the current implementation.

This is still an experimental development. By default this property is not set, and it should only be set by PIPS12 developers.

This property is turned off by default because it might slow down PIPS quite a lot without adding any useful information. It is also turned off because it is experimental and should only be used by developers.

 
SEMANTICS_USE_TYPE_INFORMATION FALSE  

Type information can also be used only when computing transformers or when computing preconditions:

 
SEMANTICS_USE_TYPE_INFORMATION_IN_TRANSFORMERS FALSE  
 
SEMANTICS_USE_TYPE_INFORMATION_IN_PRECONDITIONS FALSE  

It is not clear why you would like to assume overflows when computing transformers and not when computing preconditions, but the opposite makes sense.

Note that simple statements such as i++ have no precise convex transformer because of the wrap-around to 0. Assuming the type declaration unsigned char i, the transformer maps the new value of i to the interval [0..255].

If standard transformers are used, the variable values defined by the integer preconditions must be remapped into the type interval using a modulo definition with modulus λ. For instance, value v is redefined as v = λv1 + v2 with 0 ≤ v2 < λ; v and v1 are projected out and v2 is renamed v.

Each time a precondition is used to compute a transformer, it must be normalized according to its type, even when the condition happens to be found outside a precondition, as a test condition or a loop bound.

For instance, in the example below:

void foo(unsigned char i, unsigned char j) {  
  if (i < j) {  
    i++, j++;  
    if (i < j)  
      ; // true branch  
    else  
      ; // false branch  
  }  
}

you might wrongly assume that the false branch is never reached. But this is not true if j==255 initially, since j then wraps around to 0.

In the same way, the sequence:

unsigned char i, j;  
i = 257;  
j = 3/i;

will not be analyzed properly if the precondition for the division is not fixed with type information.

As a consequence, transformers should not be computed in context (see SEMANTICS_COMPUTE_TRANSFORMERS_IN_CONTEXT 6.9.4.6) with the current implementation if type information has an impact on the result. It is necessary to compute the precondition first and then to refine the transformers with them (see refine_transformers 6.9.1.7).

To sum up, the basic semantics analysis assumes that no integer overflow occurs during an execution. If integer overflows are known to occur, it is safer to set SEMANTICS_USE_TYPE_INFORMATION 6.9.4.3. But this property destroys information gathered about arithmetic. To obtain more accurate results, set property SEMANTICS_USE_TYPE_INFORMATION_IN_PRECONDITIONS 6.9.4.3 to compute transformers without overflows and to remap the preconditions later. The transformers can then be refined with these first preconditions and more accurate preconditions found. As explained above, this is not safe because all these developments are experimental and because precondition information given by test and loop conditions is used without paying attention to type information.

6.9.4.4 Integer Division

Integer divisions are defined by an equation linking the quotient q, the dividend d1, the divisor d2 and the remainder r.

d1 = q × d2 + r

Programming languages like C and Fortran specify that the dividend d1 and the remainder r have the same sign. If d1 is positive, the remainder is constrained by:

0 ≤ r < |d2|

Else, it is constrained by:

−|d2| < r ≤ 0

Hence, if the sign of d1 is unknown, the remainder is less constrained:

−|d2| < r < |d2|

Since integer divisions are usually applied to positive integer variables that index arrays, the accuracy of the analysis can be improved by setting the following property to true:

 
SEMANTICS_ASSUME_POSITIVE_REMAINDERS TRUE  

Since the result is not always correct, this property should be set to false, but for historic reasons it is true by default.

6.9.4.5 Flow Sensitivity

Perform “meet” operations for semantics analysis. This property is managed by pipsmake, which often sets it to TRUE. See comments in the pipsmake documentation to turn off convex hull operations for one or more modules if they last too long.

 
SEMANTICS_FLOW_SENSITIVE FALSE  

Complex control flow graphs may require excessive computation resources. This may happen when analyzing a parser, for instance.

 
SEMANTICS_ANALYZE_UNSTRUCTURED TRUE  

To reduce execution time, this property is complemented with a heuristic to turn off the analysis of very complex unstructured code.

If the control flow graph counts more than SEMANTICS_MAX_CFG_SIZE2 6.9.4.5 vertices, use effects only.

 
SEMANTICS_MAX_CFG_SIZE2 20  

If the control flow graph counts more than SEMANTICS_MAX_CFG_SIZE1 6.9.4.5 but less than SEMANTICS_MAX_CFG_SIZE2 6.9.4.5 vertices, perform the convex hull of its elementary transformers and take the fixpoint of it. Note that SEMANTICS_MAX_CFG_SIZE2 6.9.4.5 is assumed to be greater than or equal to SEMANTICS_MAX_CFG_SIZE1 6.9.4.5.

 
SEMANTICS_MAX_CFG_SIZE1 20  

6.9.4.6 Context for statement and expression transformers

Without preconditions, transformers can be precise only for affine expressions. Approximate transformers can sometimes be derived for other expressions, involving for instance products of variables or divisions.

However, a precondition of an expression can be used to refine the approximation. For instance, some non-linear expressions can become affine because some of the variables have constant values, and some non-linear expressions can be better approximated because the variables signs or ranges are known.

To be backward compatible and to be conservative for PIPS execution time, the default value is false.

Not implemented yet.

 
SEMANTICS_RECOMPUTE_EXPRESSION_TRANSFORMERS FALSE  

Intraprocedural preconditions can be computed at the same time as transformers and used to improve the accuracy of expression and statement transformers. Non-linear expressions can sometimes have linear approximations over the subset of all possible stores defined by a precondition. In the same way, the number of convex hulls can be reduced if a test branch is never used or if a loop is always entered.

 
SEMANTICS_COMPUTE_TRANSFORMERS_IN_CONTEXT FALSE  

The default value is false for reverse compatibility and for speed.

6.9.4.7 Interprocedural Semantics Analysis

To be refined later. Basically, use the callee’s transformers instead of the callee’s effects when computing transformers bottom-up in the call graph. When going top-down with preconditions, should we care about unique call sites and/or perform a meet operation on call site preconditions?

 
SEMANTICS_INTERPROCEDURAL FALSE  

This property is used internally and is not user selectable.

6.9.4.8 Fix Point and Transitive Closure Operators

CPU time and memory space are cheap enough to compute loop fix points for transformers. This property implies SEMANTICS_FLOW_SENSITIVE 6.9.4.5 and is not user-selectable.

 
SEMANTICS_FIX_POINT FALSE  

The default fix point operator, called transfer, is good for induction variables but it is not good for all kinds of code. The default fix point operator is based on the transition function associated to a loop body. A computation of eigenvectors for eigenvalue 1 is used to detect loop invariants. This fails when no transition function but only a transition relation is available. Only equations can be found.

The second fix point operator, called pattern, is based on a pattern matching of elementary equations and inequalities of the loop body transformer. Obvious invariants are detected. This fix point operator is not better than the previous one for induction variables but it can detect invariant equations and inequalities.

A third fix point operator, called derivative, is based on finite differences. It was developed to handle DO loops desugared into WHILE loops as well as standard DO loops. The loop body transformer on variable values is projected onto their finite differences. Invariants, both equations and inequalities, are deduced directly from the constraints on the differences and after integration. This third fix point operator should be able to find at least as many invariants as the two previous ones, but at least some inequalities are missed because of the technique used. For instance, constraints on a flip-flop variable can be missed. Unlike the Cousot-Halbwachs fix point (see below), it does not use Chernikova steps and it should not slow down analyses.

This property is user selectable and its default value is derivative. The default value is the only one which is now seriously maintained.

 
SEMANTICS_FIX_POINT_OPERATOR "derivative"  

The next property is experimental and its default value is 1. It is used to unroll while loops virtually, i.e. at the semantics equation level, to cope with periodic behaviors such as flip-flops. It is effective only for standard while loops and the only possible value other than 1 is 2.

 
SEMANTICS_K_FIX_POINT 1  

The next property SEMANTICS_PATTERN_MATCHING_FIX_POINT has been removed and replaced by option pattern of the previous property.

This property was defined to select one of Cousot-Halbwachs’s heuristics and to compute fix points with inequalities and equalities for loops. These heuristics could be used to compute fix points for transformers and/or preconditions. This option implies SEMANTICS_FIX_POINT 6.9.4.8 and SEMANTICS_FLOW_SENSITIVE 6.9.4.5. It has not been implemented yet in PIPS13 because its accuracy has not yet been required. It is now badly named because there is no direct link between inequality and Halbwachs. Its default value is false and it is not user selectable.

 
SEMANTICS_INEQUALITY_INVARIANT FALSE  

Because of convexity, some fix points may be improved by using some of the information carried by the preconditions. Hence, it may be profitable to recompute loop fix point transformer when preconditions are being computed.

The default value is false because this option slows down PIPS and does not seem to add much useful information in general.

 
SEMANTICS_RECOMPUTE_FIX_POINTS_WITH_PRECONDITIONS FALSE  

The next property is used to refine the computation of preconditions inside nested loops. The loop body is reanalyzed to get one transformer for each control path and the identity transformer is left aside because it is useless to compute the loop body precondition. This development is experimental and turned off by default.

 
SEMANTICS_USE_TRANSFORMER_LISTS FALSE  

The next property is only useful if the previous one is set to true. Instead of computing the fix point of the convex hull of the transformer list, it computes the convex hull of the derivative constraints. Since it is a new feature, it is set to false by default, but it should become the default option because it should always be more accurate, at least indirectly because the systems are smaller. The number of overflows is reduced, as well as the execution time. In practice, these improvements have not been measured. This development is experimental and turned off by default.

 
SEMANTICS_USE_DERIVATIVE_LIST FALSE  

The next property is only useful if Property SEMANTICS_USE_TRANSFORMER_LISTS 6.9.4.8 is set to true. Instead of computing the precondition derived from the transitive closure of a transformer list, semantics also computes the preconditions associated to different projections of the transformer list and uses as loop precondition the intersection of these preconditions. Although it is a new feature, it is set to true by default for the validation’s sake. See test case Semantics/maisonneuve09.c: it improves the accuracy, but not as much as SEMANTICS_USE_DERIVATIVE_LIST 6.9.4.8. This development is experimental.

 
SEMANTICS_USE_LIST_PROJECTION TRUE  

The string Property SEMANTICS_LIST_FIX_POINT_OPERATOR 6.9.4.8 is used to select a particular heuristic to compute an approximation of the transitive closure of a list of transformers. It is only useful if Property SEMANTICS_USE_TRANSFORMER_LISTS 6.9.4.8 is selected. The current default value is “depth_two”. An experimental value is “max_depth”.

 
SEMANTICS_LIST_FIX_POINT_OPERATOR "depth_two"  

Preconditions can preserve, and used to preserve, the initial values of the formal parameters. This is not often useful in C because programmers usually avoid modifying scalar parameters, especially integer ones. However, old values create problems in region computation because preconditions seem to be used instead of transformer ranges. Filtering out the initial values does reduce the precision of the precondition analysis, but this does not impact the transformer analysis. Since the advantage is really limited to preconditions, and for the regions’ sake, the default value is set to true. Turn it to false if you have a doubt about the preconditions really available.

The loop index is usually dead on loop exit, so keeping information about its value is useless... most of the time. However, it is preserved by default.

 
SEMANTICS_KEEP_DO_LOOP_EXIT_CONDITION TRUE  
 
SEMANTICS_FILTER_INITIAL_VALUES TRUE  

6.9.4.9 Normalization Level

Normalizing transformer and precondition systems is a delicate issue which is not mathematically defined, and as such is highly empirical. It is a tradeoff between eliminating redundant information, keeping an internal storage not too far from the prettyprinted output for non-regression testing, and exposing useful information for subsequent analyses, all this at a reasonable cost.

Several levels of normalization are possible. These levels do not correspond to graduations on a normalization scale, but are different normalization heuristics. Level 4 includes a preliminary lexicographic sort of constraints, which is very user friendly, but currently implies string manipulations which are quite costly. It has recently been chosen to perform this normalization only before storing transformers and preconditions to the database (SEMANTICS_NORMALIZATION_LEVEL_BEFORE_STORAGE, with a default value of 4). However, this can still have a serious impact on performance. With any other value, the normalization level is equal to 2.

 
SEMANTICS_NORMALIZATION_LEVEL_BEFORE_STORAGE 4  

6.9.4.10 Evaluation of sizeof

Property EVAL_SIZEOF 9.4.2 can be set to true to force the static evaluation of sizeof. Potentially, the computed transformers and preconditions are only valid for the target architecture defined in ri-util-local.h.

6.9.4.11 Prettyprint

Preconditions reflect by default all knowledge gathered about the current state (i.e. store). However, it is possible to restrict the information to variables actually read or written, directly or indirectly, by the statement following the precondition.

 
SEMANTICS_FILTERED_PRECONDITIONS FALSE  

6.9.4.12 Debugging

Output semantics results on stdout

 
SEMANTICS_STDOUT FALSE  

Debug level for semantics used to be controlled by a property. A Shell variable, SEMANTICS_DEBUG_LEVEL, is used instead.

6.9.5 Reachability Analysis: The Path Transformer

A set of operations on array regions is defined in PIPS, such as the function regions_intersection. However, this is not sufficient because regions should be combined with what we call path transformers in order to propagate the memory stores in which they are used. Indeed, region operations must be performed with respect to an abstraction of the same memory store.

A path transformer makes it possible to compare array regions of statements originally defined in different memory stores.

The path transformer between two statements computes the possible changes performed by a piece of code delimited by two statements Sbegin and Send enclosed within a statement S. A path transformer is represented by a convex polyhedron over program variables and its computation is based on transformers; thus, it is a system of linear inequalities. The goal of the following phase is to compute the path transformer between two statements labeled by sb and se.

path_transformer                 > MODULE.path_transformer_file
        < PROGRAM.entities
        < MODULE.code
        < MODULE.proper_effects
        < MODULE.transformers
        < MODULE.preconditions
        < MODULE.cumulated_effects

The next properties are used by Phase path_transformer to label the two statements Sbegin and Send.

 
PATH_TRANSFORMER_BEGIN "sb"  
 
PATH_TRANSFORMER_END "se"  

The next property controls whether an empty path is allowed, which is necessary for dependence testing. If its value is false, an empty transformer is returned for an empty path. Otherwise, an identity transformer is returned.

 
IDENTITY_EMPTY_PATH_TRANSFORMER TRUE  

6.10 Continuation conditions

Continuation conditions are attached to each statement. They represent the conditions under which the program will not stop in this statement. Under- and over-approximations of these conditions are computed.

continuation_conditions > MODULE.must_continuation
                        > MODULE.may_continuation
                        > MODULE.must_summary_continuation
                        > MODULE.may_summary_continuation
        < PROGRAM.entities
        < MODULE.code
        < MODULE.cumulated_effects
        < MODULE.transformers
        < CALLEES.must_summary_continuation
        < CALLEES.may_summary_continuation

6.11 Complexities

Complexities are symbolic approximations of the execution times of statements. They are computed interprocedurally and based on polynomial approximations of execution times. Non-polynomial execution times are represented by unknown variables which are not free with respect to the program variables. Thus non-polynomial expressions are equivalent to polynomial expressions over a larger set of variables.

Probabilities for tests should also result in unknown variables (still to be implemented). See [57].

A summary_complexity is an approximation of a module’s execution time. It is translated and used at call sites.

Complexity estimation could be refined (i.e. the number of unknown variables reduced) by using transformers to combine elementary complexities using local states, rather than preconditions to combine elementary complexities relative to the module initial state. The same options exist for region computation. The initial version [45] used the initial state for combinations. The new version [18] delays the evaluation of variable values as long as possible but does not really use local states.

The first version of the complexity estimator was designed and developed by Pierre Berthomier. It was restricted to intra-procedural analysis. This first version was enlarged and validated on real code for SPARC-2 machines by Lei Zhou [57]. Since then, it has been modified slightly by François Irigoin. For simple programs, complexity estimations are strongly correlated with execution times. The estimations can be used to check whether program transformations are beneficial.

Known bugs: tests and while loops are not correctly handled because a fixed probability of 0.5 is systematically assumed.

6.11.1 Menu for Complexities

alias complexities      ’Complexities’
alias uniform_complexities      ’Uniform’
alias fp_complexities   ’FLOPs’
alias any_complexities  ’Any’

6.11.2 Uniform Complexities

Complexity estimation is based on a set of basic operations and fixed execution times for these basic operations. The choice of the set is critical, but it is fixed. Experiments by Lei Zhou showed that it should be enlarged. However, the basic times, which are also critical, are tabulated. New sets of tables can easily be developed for new processors.

Uniform complexity tables contain a unit execution time for all basic operations. They nevertheless give interesting estimations for the SPARC SS-10, especially for -O2/-O3 optimized code.

uniform_complexities                    > MODULE.complexities
        < PROGRAM.entities
        < MODULE.code
        < MODULE.preconditions
        < CALLEES.summary_complexity

6.11.3 Summary Complexity

Local variables are eliminated from the complexity associated with the top statement of a module in order to obtain the module’s summary complexity.

summary_complexity              > MODULE.summary_complexity
        < PROGRAM.entities
        < MODULE.code
        < MODULE.complexities

6.11.4 Floating Point Complexities

Tables for floating point complexity estimation are set to 0 for non-floating point operations, and to 1 for all floating point operations, including intrinsics like SIN.

fp_complexities                    > MODULE.complexities
        < PROGRAM.entities
        < MODULE.code
        < MODULE.preconditions
        < CALLEES.summary_complexity

Phase any_complexities enables the default specification given by the properties to be taken into account.

any_complexities                    > MODULE.complexities
        < PROGRAM.entities
        < MODULE.code
        < MODULE.preconditions
        < CALLEES.summary_complexity

6.11.5 Complexity properties

The following properties control the static estimation of dynamic code execution time.

6.11.5.1 Debugging

Trace the walk across a module’s internal representation:

 
COMPLEXITY_TRACE_CALLS FALSE  

Trace all intermediate complexities:

 
COMPLEXITY_INTERMEDIATES FALSE  

Print the complete cost table at the beginning of the execution:

 
COMPLEXITY_PRINT_COST_TABLE FALSE  

The cost table(s) contain machine and compiler dependent information about basic execution times, e.g. time for a load or a store.

6.11.5.2 Fine Tuning

It is possible to specify a list of variables which must remain literally in the complexity formula, either even though their numerical values are known (this is correct) or even though they take multiple unknown and unrelated values during an execution (this leads to an incorrect result).

Formal parameters and imported global variables are left unevaluated.

They have relatively high priority (FI: I do not understand this comment by Lei).

This list should be empty by default (but is not for unknown historical reasons):

 
COMPLEXITY_PARAMETERS "IMAXLOOP"  

Controls the printing of accuracy statistics:

 
COMPLEXITY_PRINT_STATISTICS 0  

6.11.5.3 Target Machine and Compiler Selection

This property is used to select a set of basic execution times. These times depend on the target machine, the compiler and the compilation options used. It is shown in [57] that fixed basic times can be used to obtain accurate execution times, if enough basic times are considered, and if the target machine has a simple RISC processor. For instance, it is not possible to use only one time for a register load. It is necessary to take into account the nature of the variable, i.e. formal parameter, dynamic variable, global variable, and the nature of the access, e.g. the dimension of the accessed array. The cache can be ignored and replaced by an average hit ratio.

Different sets of elementary cost tables are available:

In the future, we might add a sparc-2 table...

The different elementary table names are defined in complexity-local.h. They presently are operation, memory, index, transcend and trigo.

The different tables required are to be found in $PIPS_LIBDIR/complexity/xyz, where xyz is specified by this property:

 
COMPLEXITY_COST_TABLE "all_1"  

6.11.5.4 Evaluation Strategy

For the moment, we have designed two ways to solve the complexity combination problem. Since symbolic complexity formulae use program variables, it is necessary to specify in which store they are evaluated. If two complexity formulae are computed relative to two different stores, they cannot be directly added.

The first approach, which is implemented, uses the module initial store as the universal store for all formulae (except possibly for the complexities of elementary statements). In some way, symbolic variables are evaluated as early as possible, as soon as it is known that they won’t make it into the module summary complexity.

This first method is easy to implement when the preconditions are available but it has at least two drawbacks:

The second approach, which is not implemented, delays variable evaluation as late as possible. Complexities are computed and given relative to the stores used by each statement. Two elementary complexities are combined using the earliest store. The two stores are related by a transformer (see Section 6.9.4). Such an approach is used to compute MUST regions as precisely as possible (see Section 6.12.9).

A simplified version of the late evaluation was implemented. The initial store of the procedure is the only reference store, as with the early evaluation, but variables are not evaluated right away. They are only evaluated when it is necessary to do so. This is not an ideal solution, but it is easy to implement and considerably reduces the number of unknown values which have to be put in the formulae to obtain correct results.

 
COMPLEXITY_EARLY_EVALUATION FALSE  

6.12 Convex Array Regions

Convex array regions are functions mapping a memory store onto a convex set of array elements. They are used to represent the memory effects of modules or statements. Hence, they are expressed with respect to the initial store of the module or to the store immediately preceding the execution of the statement they are associated with. The latter is now standard in PIPS. Comprehensive information about convex array regions and their associated algorithms is available in Creusillet’s PhD Dissertation [20].

Apart from the array name and its dimension descriptors (or ϕ variables), an array region contains three additional pieces of information:

For instance, the convex array region:

  

  <A(ϕ1,ϕ2)-W-EXACT-{ϕ1==I, ϕ1==ϕ2}>

where the region parameters ϕ1 and ϕ2 respectively represent the first and second dimensions of A, corresponds to an assignment of the element A(I,I).

Internally, convex array regions are of type effect and as such can be used to build use-def chains (see Section 6.5.3). Region chains are built using proper regions, which are particular READ and WRITE regions. For simple statements (assignments, calls to intrinsic functions), summarization is avoided to preserve accuracy. At this inner level of the program control flow graph, the extra amount of memory necessary to store regions without computing their convex hull should not be too high compared to the expected gain for dependence analysis. For tests and loops, proper regions contain the regions associated with the condition or the range. And for external calls, proper regions are the summary regions of the callee translated into the caller’s name space, to which the regions of the expressions passed as arguments are merely appended (no summarization for this step).

Resource proper_regions is equivalent to proper_effects (see Section 6.2.1), and regions to cumulated_effects (see Section 6.2.3). So they share some features, like the LUNS present for regions/cumulated_effects on return/exit/abort statements.

Invariant versions for loop bodies (MODULE.inv_regions and MODULE.inv_in_regions) are computed together with READ/WRITE regions and IN regions. For a given loop body, they are equal to the corresponding regions in which all variables that may be modified by the loop body (except the current loop index) are eliminated from the descriptors (convex polyhedron). For other statements, they are equal to the empty list of regions.

In the following trivial example,

k = 0; 
for(i=0; i<N; i++) 
{ 
// regions for loop body: 
//    <a[phi1]-W-EXACT-{PHI1==K,K==I}> 
// invariant regions for loop body: 
//    <a[phi1]-W-EXACT-{PHI1==I}> 
  k = k+1; 
  a[k] = k; 
}

notice that the variable k, which is modified in the loop body and appears in the loop body region polyhedron, no longer appears in the invariant region polyhedron.

MAY READ and WRITE region analysis was first designed by Rémi Triolet [48] and then revisited by François Irigoin [49]. Alexis Platonoff [45] implemented the first version of region analysis in PIPS. These regions were computed with respect to the initial stores of the modules. François Irigoin and, mainly, Béatrice Creusillet [18, 19, 20] added new functionalities to this first version as well as functions to compute MUST regions, and IN and OUT regions.

The MAY and MUST region phases also compute the useful_variables_regions resource. This resource gathers the regions used by each variable in the memory state of its declaration: it associates a list of READ/WRITE regions with each variable entity of a module. These regions were already computed during the computation of the regions, but not memorized. The storage of this resource was added by Nelson Lossing.

Array regions for C programs are currently under development.

6.12.1 Menu for Convex Array Regions

alias regions ’Array regions’

alias may_regions ’MAY regions’
alias must_regions ’EXACT or MAY regions’
alias useful_variables_regions ’Useful Variables regions’

6.12.2 MAY READ/WRITE Convex Array Regions

This function computes the MAY pointer regions in a module.

may_pointer_regions             > MODULE.proper_pointer_regions
                                > MODULE.pointer_regions
                                > MODULE.inv_pointer_regions
                                > MODULE.useful_variables_pointer_regions
        < PROGRAM.entities
        < MODULE.code
        < MODULE.cumulated_effects
        < MODULE.transformers
        < MODULE.preconditions
        < CALLEES.summary_pointer_regions

This function computes the MAY regions in a module.

may_regions                     > MODULE.proper_regions
                                > MODULE.regions
                                > MODULE.inv_regions
                                > MODULE.useful_variables_regions
        < PROGRAM.entities
        < MODULE.code
        < MODULE.cumulated_effects
        < MODULE.transformers
        < MODULE.preconditions
        < CALLEES.summary_regions

6.12.3 MUST READ/WRITE Convex Array Regions

This function computes the MUST regions in a module.

must_pointer_regions            > MODULE.proper_pointer_regions
                                > MODULE.pointer_regions
                                > MODULE.inv_pointer_regions
                                > MODULE.useful_variables_pointer_regions
        < PROGRAM.entities
        < MODULE.code
        < MODULE.cumulated_effects
        < MODULE.transformers
        < MODULE.preconditions
        < CALLEES.summary_pointer_regions

This function computes the MUST pointer regions in a module using simple points_to information to disambiguate dereferencing paths.

must_pointer_regions_with_points_to > MODULE.proper_pointer_regions
                                    > MODULE.pointer_regions
                                    > MODULE.inv_pointer_regions
                                    > MODULE.useful_variables_pointer_regions
        < PROGRAM.entities
        < MODULE.code
        < MODULE.cumulated_effects
        < MODULE.transformers
        < MODULE.preconditions
        < MODULE.points_to
        < CALLEES.summary_pointer_regions

This function computes the MUST regions in a module.

must_regions                    > MODULE.proper_regions
                                > MODULE.regions
                                > MODULE.inv_regions
                                > MODULE.useful_variables_regions
        < PROGRAM.entities
        < MODULE.code
        < MODULE.cumulated_effects
        < MODULE.transformers
        < MODULE.preconditions
        < CALLEES.summary_regions

This function computes the MUST regions in a module using information on pointer targets given by points-to.

must_regions_with_points_to     > MODULE.proper_regions
                                > MODULE.regions
                                > MODULE.inv_regions
                                > MODULE.useful_variables_regions
        < PROGRAM.entities
        < MODULE.code
        < MODULE.cumulated_effects
        < MODULE.points_to
        < MODULE.transformers
        < MODULE.preconditions
        < CALLEES.summary_regions

This function computes the MUST regions in a module using information on pointer targets given by pointer values.

must_regions_with_pointer_values > MODULE.proper_regions
                                > MODULE.regions
                                > MODULE.inv_regions
                                > MODULE.useful_variables_regions
        < PROGRAM.entities
        < MODULE.code
        < MODULE.cumulated_effects
        < MODULE.simple_pointer_values
        < MODULE.transformers
        < MODULE.preconditions
        < CALLEES.summary_regions

6.12.4 Summary READ/WRITE Convex Array Regions

The summary regions of a module provide an approximation of the effects its execution has on its caller’s variables as well as on the global and static variables of its callees.

summary_pointer_regions                 > MODULE.summary_pointer_regions
        < PROGRAM.entities
        < MODULE.code
        < MODULE.pointer_regions

summary_regions                 > MODULE.summary_regions
        < PROGRAM.entities
        < MODULE.code
        < MODULE.regions

6.12.5 IN Convex Array Regions

IN convex array regions are flow-sensitive regions. They are read regions not covered (i.e. not previously written) by assignments in the local hierarchical control flow graph. There is no way with the current pipsmake-rc and pipsmake to express the fact that IN (and OUT) regions must be computed using must_regions 6.12.3 (a new kind of resource should be added). The user must be knowledgeable enough to select must_regions 6.12.3 first.

in_regions                      > MODULE.in_regions
                                > MODULE.cumulated_in_regions
                                > MODULE.inv_in_regions
        < PROGRAM.entities
        < MODULE.code
        < MODULE.summary_effects
        < MODULE.cumulated_effects
        < MODULE.transformers
        < MODULE.preconditions
        < MODULE.regions
        < MODULE.inv_regions
        < CALLEES.in_summary_regions

6.12.6 IN Summary Convex Array Regions

This pass computes the IN convex array regions of a module. They contain the array elements and scalars whose values impact the output of the module.

in_summary_regions              > MODULE.in_summary_regions
        < PROGRAM.entities
        < MODULE.code
        < MODULE.transformers
        < MODULE.preconditions
        < MODULE.in_regions

6.12.7 OUT Summary Convex Array Regions

This pass computes the OUT convex array regions of a module. They contain the array elements and scalars whose values impact the continuation of the module.

See Section 6.12.8.

out_summary_regions             > MODULE.out_summary_regions
        < PROGRAM.entities
        < CALLERS.out_regions

6.12.8 OUT Convex Array Regions

OUT convex array regions are also flow-sensitive regions. They are downward-exposed written regions which are also used (i.e. imported) by the continuation of the program. They are also called exported regions. Unlike READ, WRITE and IN regions, they are propagated downward in the call graph and in the hierarchical control flow graphs of the subroutines.

out_regions                     > MODULE.out_regions
        < PROGRAM.entities
        < MODULE.code
        < MODULE.transformers
        < MODULE.preconditions
        < MODULE.regions
        < MODULE.inv_regions
        < MODULE.summary_effects
        < MODULE.cumulated_effects
        < MODULE.cumulated_in_regions
        < MODULE.inv_in_regions
        < MODULE.out_summary_regions

6.12.9 Properties for Convex Array Regions

If MUST_REGIONS is true, convex array regions are computed using the algorithm described in report E/181/CRI, called the T-1 algorithm. It provides more accurate regions and preserves MUST approximations more often. As it is more costly, its default value is FALSE. EXACT_REGIONS is true for the moment for backward compatibility only.

 
EXACT_REGIONS TRUE  
 
MUST_REGIONS FALSE  

The default option is to compute regions without taking into account declared array bounds. The next property can be turned to TRUE to systematically add them to the region descriptors. Both options have their advantages and drawbacks, but the second one implies that the PIPS user is sure that her/his program is correct with respect to array accesses. In case of doubt, you might want to run pass array_bound_check_bottom_up 7.1.1 or array_bound_check_top_down 7.1.2.

 
REGIONS_WITH_ARRAY_BOUNDS FALSE  

Property MEMORY_IN_OUT_EFFECTS_ONLY 6.12.9’s default value is set to TRUE to avoid computing IN and OUT effects or regions on non-memory effects, even if MEMORY_EFFECTS_ONLY 6.2.7.5 is set to FALSE.

 
MEMORY_IN_OUT_EFFECTS_ONLY TRUE  

The current implementation of effects, simple effects as well as convex array regions, relies on a generic engine which is independent of the effect descriptor representation. The current representation for array regions, parameterized integer convex polyhedra, allows various patterns and provides the ability to exploit context information at a reasonable expense. However, some very common patterns, such as nine-point stencils used in seismic computations or red-black patterns, cannot be represented. It has been a long-lasting temptation to try other representations [20].

A Complementary sections implementation (see Section 6.14) was formerly begun as a set of new phases by Manjunathaiah Muniyappa, but it is not maintained anymore.

Nga Nguyen more recently created two properties to switch between regions and disjunctions of regions (she had already prepared basic operators). For the moment, they are always FALSE.

 
DISJUNCT_REGIONS FALSE  
 
DISJUNCT_IN_OUT_REGIONS FALSE  

Statistics may be obtained about the computation of convex array regions. When the next property (REGIONS_OP_STATISTICS) is set to TRUE, statistics are provided about operators on regions (union, intersection, projection, …). The second property (REGIONS_TRANSLATION_STATISTICS) turns on the collection of statistics about the interprocedural translation.

 
REGIONS_OP_STATISTICS FALSE  
 
REGIONS_TRANSLATION_STATISTICS FALSE  

6.13 Alias Analysis

6.13.1 Dynamic Aliases

Dynamic aliases are pairs (formal parameter, actual parameter) of convex array regions generated at call sites. An “IN alias pair” is generated for each IN region of a called module and an “OUT alias pair” for each OUT region. For EXACT regions, the transitive, symmetric and reflexive closure of the dynamic alias relation results in the creation of equivalence classes of regions (for MAY regions, the closure is different and does not result in an equivalence relation, but nonetheless allows us to define alias classes). A set of alias classes is generated for a module, based on the IN and OUT alias pairs of all the modules below it in the callgraph. The alias classes for the whole workspace are those of the module which is at the root of the callgraph, if the callgraph has a unique root. As an intermediate phase between the creation of the IN and OUT alias pairs and the creation of the alias classes, “alias lists” are created for each module. An alias list for a module is the transitive closure of the alias pairs (IN or OUT) for a particular path through the callgraph subtree rooted in this module.

in_alias_pairs > MODULE.in_alias_pairs
        < PROGRAM.entities
        < MODULE.callers
        < MODULE.in_summary_regions
        < CALLERS.code
        < CALLERS.cumulated_effects
        < CALLERS.preconditions

out_alias_pairs > MODULE.out_alias_pairs
        < PROGRAM.entities
        < MODULE.callers
        < MODULE.out_summary_regions
        < CALLERS.code
        < CALLERS.cumulated_effects
        < CALLERS.preconditions

alias_lists > MODULE.alias_lists
        < PROGRAM.entities
        < MODULE.in_alias_pairs
        < MODULE.out_alias_pairs
        < CALLEES.alias_lists

alias_classes > MODULE.alias_classes
        < PROGRAM.entities
        < MODULE.alias_lists

6.13.2 Init Points-to Analysis

This phase generates synthetic points-to relations for formal parameters. It creates synthetic sinks, i.e. stubs, for formal parameters and provides an initial set of points-to relations to intraprocedural_points_to_analysis 6.13.5.

Currently, it assumes that no sharing exists between the formal parameters and within the data structures pointed to by the formal parameters. Two properties should control this behavior, ALIASING_ACROSS_FORMAL_PARAMETERS 6.13.8.1 and ALIASING_ACROSS_TYPES 6.13.8.1. The first one supersedes the property ALIASING_INSIDE_DATA_STRUCTURE 6.13.8.1.

alias init_points_to_analysis  ’Init Points To Analysis’

init_points_to_analysis > MODULE.init_points_to_list
           < PROGRAM.entities
           < MODULE.code

6.13.3 Interprocedural Points to Analysis

This pass is being implemented by Amira Mensi. Phase interprocedural_points_to_analysis 6.13.3 computes points-to relations in an interprocedural way, based on Wilson’s algorithm. This phase computes both the Gen and Kill sets at the call site level. It requires another resource which is computed by intraprocedural_points_to_analysis 6.13.5.

alias interprocedural_points_to_analysis  ’Interprocedural Points To Analysis’

interprocedural_points_to_analysis > MODULE.points_to
                   > MODULE.points_to_out
                   > MODULE.points_to_in
        ! SELECT.proper_effects_with_points_to
        ! SELECT.cumulated_effects_with_points_to
        < PROGRAM.entities
        < MODULE.code
        < CALLEES.summary_effects
        < CALLEES.points_to_out
        < CALLEES.points_to_in

6.13.4 Fast Interprocedural Points to Analysis

This pass is being implemented by Amira Mensi. Phase fast_interprocedural_points_to_analysis 6.13.4 computes points-to relations in an interprocedural way, based on Wilson’s algorithm. This phase computes only the Kill sets at the call site level. It requires another resource which is computed by intraprocedural_points_to_analysis 6.13.5.

alias fast_interprocedural_points_to_analysis  ’Fast Interprocedural Points To Analysis’

fast_interprocedural_points_to_analysis > MODULE.points_to
                   > MODULE.points_to_out
                   > MODULE.points_to_in
        ! SELECT.proper_effects_with_points_to
        ! SELECT.cumulated_effects_with_points_to
        < PROGRAM.entities
        < MODULE.code
        < CALLEES.summary_effects
        < CALLEES.points_to_out
        < CALLEES.points_to_in

6.13.5 Intraprocedural Points to Analysis

This function is being implemented by Amira Mensi. Phase intraprocedural_points_to_analysis 6.13.5 computes points-to relations based on Emami’s algorithm. Emami’s algorithm is a top-down analysis which computes the points-to relations by applying a specific rule to each assignment pattern identified. This phase requires the resource produced by init_points_to_analysis 6.13.2. Resources points_to_in and points_to_out will later be used to compute the transfer function. Points_to_in represents the points-to relations at the beginning of a function, where sources are formal parameters or global variables. Points_to_out contains the points-to relations at the end of the function body: the return value and its sink, formal parameters, global variables and heap-allocated variables which remain visible beyond the function’s scope. Effects are used to compute the impact of calls on the points-to analysis.

alias intraprocedural_points_to_analysis  ’Intraprocedural Points To Analysis’

intraprocedural_points_to_analysis > MODULE.points_to
                   > MODULE.points_to_out
                   > MODULE.points_to_in
        ! SELECT.proper_effects_with_points_to
        ! SELECT.cumulated_effects_with_points_to
        < PROGRAM.entities
        < MODULE.code
        < CALLEES.summary_effects
        < CALLEES.points_to_out
        < CALLEES.points_to_in

The pointer effects are useful, but they are recomputed for each expression and subexpression by the points-to analysis.

6.13.6 Initial Points-to or Program Points-to

Because no top-down points-to analysis is available, these two passes are useless. A top-down points-to analysis would be useful to check that the restrict assumption about formal parameters is met by the actual parameters. It might make slightly more precise points-to information possible in the functions. Hopefully, the formal context and the points-to stubs provide enough equivalent information to the passes that use points-to information.

initial_points_to     > MODULE.initial_points_to
        < PROGRAM.entities
        < MODULE.code
        < MODULE.points_to_out

All initial points-to are combined to define the program points-to which is an abstraction of the program initial state.

program_points_to     > PROGRAM.program_points_to
        < PROGRAM.entities
        < ALL.initial_points_to

The program points-to can only be used for the initial state of the main procedure. Although it appears below in all interprocedural analyses and is always computed, it is only used when a main procedure is available.

6.13.7 Pointer Values Analyses

Computes the initial pointer values from the global or static declarations of the module.

initial_simple_pointer_values > MODULE.initial_simple_pointer_values
           < PROGRAM.entities
           < MODULE.code

Computes the initial pointer values of the program from the global declarations and the static declarations inside the program modules. They are computed by merging the initial pointer values of all the modules (this may include those which do not belong to actually realizable paths).

program_simple_pointer_values > PROGRAM.program_simple_pointer_values
           < PROGRAM.entities
           < ALL.initial_simple_pointer_values

Pointer values analysis is another kind of pointer analysis which tries to express pointer values both in terms of other pointer values and in terms of memory addresses. This phase is under development.

alias simple_pointer_values  ’Pointer Values Analysis’

simple_pointer_values > MODULE.simple_pointer_values
                      > MODULE.in_simple_pointer_values
                      > MODULE.out_simple_pointer_values
           < PROGRAM.entities
           < MODULE.code
           < PROGRAM.program_simple_pointer_values
           < CALLEES.in_simple_pointer_values
           < CALLEES.out_simple_pointer_values

6.13.8 Properties for pointer analyses

The following properties are defined to ensure the safe use of intraprocedural_points_to_analysis 6.13.5.

6.13.8.1 Impact of Types

The property ALIASING_ACROSS_TYPES 6.13.8.1 specifies that two pointers of different effective types can be aliased. The default and safe value is TRUE; when it is turned to FALSE, two pointers of different types are never considered aliased.

 
ALIASING_ACROSS_TYPES TRUE  

The property ALIASING_ACROSS_FORMAL_PARAMETERS 6.13.8.1 is used to handle the aliasing between formal parameters and global variables of pointer type. When it is set to TRUE, two formal parameters, a formal parameter and a global pointer, or two global pointers can be aliased. If it is turned to FALSE, such pointers are assumed to be unaliased for intraprocedural analysis, and more generally for root modules (i.e. modules without callers). The default value is FALSE. It is the only value currently implemented.

 
ALIASING_ACROSS_FORMAL_PARAMETERS FALSE  

The next property specifies whether one data structure can recursively contain two pointers pointing to the same location. If it is turned to FALSE, it is assumed that two different, non-included memory access paths cannot point to the same memory location. The safe value is TRUE, but it hinders parallelization. Often, the user can guarantee that data structures do not exhibit any sharing. Optimistically, FALSE is the default value.

 
ALIASING_INSIDE_DATA_STRUCTURE FALSE  

Property ALIASING_ACROSS_IO_STREAMS 6.13.8.1 can be set to FALSE to specify that two IO streams (two variables declared as FILE *) cannot be aliased, nor can the locations to which they point. The safe and default value is TRUE.

 
ALIASING_ACROSS_IO_STREAMS TRUE  

6.13.8.2 Heap Modeling

The following string property defines the lattice of maximal elements to use when precise information is lost. Three values are possible: "unique", "function" and "area". The first value is the default: a unique identifier is defined to represent any set of unknown locations. The second value defines a separate identifier for each function and compilation unit. Note that compilation units require more explanation about this definition and about the conflict detection scheme. The third value, "area", defines a separate identifier for each area of each function or compilation unit. These abstract location lattice values are further refined if the property ALIASING_ACROSS_TYPES 6.13.8.1 is set to FALSE. The abstract location API hides all these local maximal values from its callers. Note that dereferencing any such top abstract location returns the very top of all abstract locations.

The ABSTRACT_HEAP_LOCATIONS 6.13.8.2 property specifies the modeling of the heap. The possible values are "unique", "insensitive", "flow-sensitive" and "context-sensitive". Each value defines a strictly refined analysis with respect to the analyses defined by the previous values [This may not be a good idea, since flow and context sensitivity are orthogonal].

The default value, "unique", implies that the heap is a unique array. It is enough to parallelize simple loops containing pointer-based references such as p[i].

In the "insensitive" case and all the following cases, one array is allocated in each function to model the heap.

In the "flow-sensitive" case, the statement numbers of the malloc() call sites are used to subscript this array, as well as the indices of the surrounding loops [Two improvements in one property...]. Only the first half of the property is implemented.

In the "context-sensitive" case, the interprocedural translations of memory access paths based on the abstract heap are prefixed by the same information for the call site: the function containing the call site, the statement number of the call site and the indices of the surrounding loops. This is not implemented.

Note that the naming of the options is not fully compatible with the usual notations in pointer analyses. Note also that the insensitive case is redundant with the context-sensitive case: in the latter case, a unique heap associated to malloc() would carry exactly the same amount of information [flow and context sensitivity are orthogonal].

Finally, note that abstract heap arrays are distinguished according to their types if the property ALIASING_ACROSS_TYPES 6.13.8.1 is set to FALSE [impact on the abstract heap location API]. Otherwise, the heap array is of unknown type. If a heap abstract location is dereferenced without any points-to information nor heap aliasing information, the safe result is the top abstract location.

 
ABSTRACT_HEAP_LOCATIONS "unique"  

Property POINTS_TO_SUCCESSFUL_MALLOC_ASSUMED 6.13.8.2 is used to control the analysis of a malloc call. The call may return either a unique target in the heap, or a pair of targets, one in the heap and NULL. The default value is TRUE for historical reasons and because the result is shorter and correct almost all the time.

 
POINTS_TO_SUCCESSFUL_MALLOC_ASSUMED TRUE  

6.13.8.3 Type Handling

The property POINTS_TO_STRICT_POINTER_TYPES 6.13.8.3 is used to handle pointer arithmetic. According to the C standard (Section 6.5.6, item 8), the following C code:

int *p, i;
p = &i;
p++;

is correct and p points to the same area, expressed by the points-to analysis as i[*]. The default value is FALSE, meaning that p is assumed to point to an array element. When it is set to TRUE, typing becomes strict: p points to an integer, the behavior of p++ is undefined, and the analysis stops with a pips_user_error (illegal pointer arithmetic).

 
POINTS_TO_STRICT_POINTER_TYPES FALSE  

6.13.8.4 Dereferencing of Null and Undefined Pointers

The property POINTS_TO_UNINITIALIZED_POINTER_DEREFERENCING 6.13.8.4 specifies what to do when an uninitialized pointer is or may be dereferenced. The safe value is FALSE: the points-to analysis assumes that no undefined pointer is ever dereferenced. So if a pointer may be undefined and is dereferenced, the arc is considered impossible and removed from the points-to information. If no other arc provides some value for this pointer, the code is assumed dead and the current points-to set is reduced to the empty set. A warning about dead code is emitted. However, the property can be set to TRUE; the dereferencing of an undefined pointer is then accepted and results in an anywhere location.

 
POINTS_TO_UNINITIALIZED_POINTER_DEREFERENCING FALSE  

The property POINTS_TO_NULL_POINTER_DEREFERENCING 6.13.8.4 is very similar to the previous one. It specifies what to do when a null pointer is or may be dereferenced. The safe value is FALSE: the points-to analysis assumes that no null pointer is ever dereferenced. So if a pointer may be null and is dereferenced, the arc is considered impossible and removed from the points-to information. If no other arc provides some value for this pointer, the code is assumed dead and the current points-to set is reduced to the empty set. A warning about dead code is emitted. However, the property can be set to TRUE; the dereferencing of a null pointer is then accepted and results in an anywhere location.

 
POINTS_TO_NULL_POINTER_DEREFERENCING FALSE  

The property POINTS_TO_NULL_POINTER_INITIALIZATION 6.13.8.4 allows the initialization of pointers that are formal parameters or global variables to NULL when computing a calling context. The most accurate property value is TRUE, which makes sure that generated points-to stubs are different from NULL because two arcs are always generated: an arc towards the new points-to stub and an arc towards the NULL location. Thus it prevents dereferencing a null pointer when dereferencing a points-to stub, and it allows the comparison of two points-to stubs when a condition such as p!=q is interpreted, or the comparison of one points-to stub to NULL as in p!=NULL. This property must be set to TRUE for the points-to analysis to return valid results, since the constant path lattice used implies that NULL is not included in points-to stubs. Also, setting it to FALSE makes any formal recursive data structure infinite, since NULL is never found by the analyzer. Basically, this property should be removed.

 
POINTS_TO_NULL_POINTER_INITIALIZATION TRUE  

6.13.8.5 Limits of Points-to Analyses

The integer property POINTS_TO_OUT_DEGREE_LIMIT 6.13.8.5 specifies the maximum number of arcs exiting a given vertex of a points-to graph. When the maximum out-degree is reached for a given source vertex, all the corresponding sink vertices are fused into one new vertex, the minimal upper bound of the initial vertices according to the abstract address lattice, and the points-to graph is updated accordingly. New nodes are created as long as the limit is not reached. The freeing of a list spine can generate an unbounded out-degree (see for instance Pointers/list05.c).

 
POINTS_TO_OUT_DEGREE_LIMIT 5  

The integer property POINTS_TO_PATH_LIMIT 6.13.8.5 specifies the maximum number of occurrences of an object of a given type in a non-cyclic path generated by the points-to graph. New nodes are created as long as no such path exists. When the limit is reached, a cycle is created.

 
POINTS_TO_PATH_LIMIT 2  

The integer property POINTS_TO_SUBSCRIPT_LIMIT 6.13.8.5 specifies the maximum number of subscripts of an object that can be generated via pointer arithmetic. When the limit is reached, an unbounded subscript, *, is used to model any possible subscript value.

 
POINTS_TO_SUBSCRIPT_LIMIT 2  

6.13.9 Menu for Alias Views

alias alias_file ’Alias View’

alias print_in_alias_pairs ’In Alias Pairs’
alias print_out_alias_pairs ’Out Alias Pairs’
alias print_alias_lists ’Alias Lists’
alias print_alias_classes ’Alias Classes’

Display the dynamic alias pairs (formal region, actual region) for the IN regions of the module.

print_in_alias_pairs > MODULE.alias_file
        < PROGRAM.entities
        < MODULE.cumulated_effects
        < MODULE.in_alias_pairs

Display the dynamic alias pairs (formal region, actual region) for the OUT regions of the module.

print_out_alias_pairs > MODULE.alias_file
        < PROGRAM.entities
        < MODULE.cumulated_effects
        < MODULE.out_alias_pairs

Display the transitive closure of the dynamic aliases for the module.

print_alias_lists > MODULE.alias_file
        < PROGRAM.entities
        < MODULE.cumulated_effects
        < MODULE.alias_lists

Display the dynamic alias equivalence classes for this module and those below it in the callgraph.

print_alias_classes > MODULE.alias_file
        < PROGRAM.entities
        < MODULE.cumulated_effects
        < MODULE.alias_classes

6.14 Complementary Sections

alias compsec ’Complementary Sections’

A new representation of array regions added in PIPS by Manjunathaiah Muniyappa. This analysis is not maintained anymore.

6.14.1 READ/WRITE Complementary Sections

This function computes the complementary sections in a module.

complementary_sections > MODULE.compsec
        < PROGRAM.entities
        < MODULE.code
        < MODULE.cumulated_effects
        < MODULE.transformers
        < MODULE.preconditions
        < CALLEES.summary_compsec

6.14.2 Summary READ/WRITE Complementary Sections

summary_complementary_sections > MODULE.summary_compsec
        < PROGRAM.entities
        < MODULE.code
        < MODULE.compsec

Chapter 7
Dynamic Analyses (Instrumentation)

Dynamic analyses are performed at run-time. At compile-time, the properties that can be proved or disproved are exploited, but in case of doubt a run-time check is added to the source code. The current dynamic analyses implemented in PIPS are array bound checking, Fortran alias checking and used-before-set analyses.

7.1 Array Bound Checking

Array bound checking refers to determining whether all array references are within their declared range in all of their uses in a program. These array bound checks may be analysed intraprocedurally or interprocedurally, depending on the need for accuracy.

There are two versions of intraprocedural array bound checking: array bound check bottom up and array bound check top down. The first approach relies on checking every array access and on the elimination of redundant tests by advanced dead-code elimination based on preconditions. The second approach is based on exact convex array regions, which are used to prove that all accesses in a compound statement are correct.

These two dynamic analyses are implemented for Fortran. They are described in Nga Nguyen’s PhD (see [42]) and in [43]. They may work for C code, but this has not been validated.

7.1.1 Elimination of Redundant Tests: Bottom-Up Approach

This transformation takes the current module as input and adds array range checks (lower and upper bound checks) to every statement that has one or more array accesses. The output is the module with those added tests.

If a test is trivial or already exists for the same statement, it is not generated again, in order to reduce the number of tests. As the Fortran language permits an assumed-size array declarator with an unbounded upper bound for the last dimension, no range check is generated in this case either.

Associated with each test is a bound violation error message and, in case of a guaranteed access violation, a STOP statement is placed before the current statement.

This phase should always be followed by partial_redundancy_elimination 9.2.2 for logical expressions in order to reduce the number of bound checks.

alias array_bound_check_bottom_up ’Elimination of Redundant Tests’
array_bound_check_bottom_up            > MODULE.code
        < PROGRAM.entities
        < MODULE.code

7.1.2 Insertion of Unavoidable Tests

This second implementation is based on the array region analyses, which benefit from some interesting proven properties:

  1. If a MAY region corresponding to a node in the control flow graph that represents a block of code is included in the declared dimensions of the array, no bound check is needed for this block of code.
  2. If a MUST region corresponding to a node in the control flow graph that represents a block of code contains elements which are outside the declared dimensions of the array, there certainly is a bound violation in this block of code. The error can be detected at compile-time.

If neither of these two properties is satisfied, we consider the approximation of the region. In the case of a MUST region, if exact bound checks can be generated, they are inserted before the block of code. If not, as in the case of a MAY region, we continue to go down to the children nodes in the control flow graph.

The main advantage of this algorithm is that it permits detecting certain bound violations, or telling that there is certainly no bound violation, as early as possible, thanks to the context given by preconditions and the top-down analyses.

alias array_bound_check_top_down ’Insertion of Unavoidable Tests’
array_bound_check_top_down   > MODULE.code
        < PROGRAM.entities
        < MODULE.code
        < MODULE.regions

7.1.3 Interprocedural Array Bound Checking

This phase checks for out-of-bound errors when passing arrays or array elements as arguments in procedure calls. It ensures that there is no bound violation in any array access in the callee procedure, with respect to the array declarations in the caller procedure.

alias array_bound_check_interprocedural ’Interprocedural Array Bound Checking’
array_bound_check_interprocedural             > MODULE.code
        < PROGRAM.entities
        < MODULE.code
        < MODULE.preconditions

7.1.4 Array Bound Checking Instrumentation

We provide here a tool to calculate the number of dynamic bound checks in both the initial and the PIPS generated code.

These transformations are implemented by Thi Viet Nga Nguyen (see [42]).

alias array_bound_check_instrumentation ’Array Bound Checking Instrumentation’
array_bound_check_instrumentation > MODULE.code
        < PROGRAM.entities
        < MODULE.code

Array bound checking refers to determining whether all array references are within their declared ranges in all of their uses in a program. Here are the array bound checking options for code instrumentation, in order to compute the number of bound checks added. We could use only one property for these two cases, but the meaning would not be clear. To be changed?

 
INITIAL_CODE_ARRAY_BOUND_CHECK_INSTRUMENTATION TRUE  
 
PIPS_CODE_ARRAY_BOUND_CHECK_INSTRUMENTATION FALSE  

In practice, bound violations may often occur with arrays in a common block. The standard is violated, but programmers think that such accesses are not dangerous as long as the allocated size of the common block is not exceeded. The following property deals with this kind of bad programming practice. If the array is a common variable, it checks whether the reference goes beyond the size of the common block or not.

 
ARRAY_BOUND_CHECKING_WITH_ALLOCATION_SIZE FALSE  

The following property tells the verification phases (array bound checking, alias checking or uninitialized variable checking) to instrument the code with a STOP or a PRINT message. Logically, if a standard violation is detected, the program should stop immediately. Furthermore, the STOP message gives the partial redundancy elimination phase more information to remove redundant tests occurring after this STOP. However, for debugging purposes, one may need to display all possible violations such as out-of-bound or used-before-set errors, but not to stop the program. In this case, a PRINT message is chosen. By default, we use the STOP message.

 
PROGRAM_VERIFICATION_WITH_PRINT_MESSAGE FALSE  

7.2 Alias Verification

7.2.1 Alias Propagation

Aliasing occurs when two or more variables refer to the same storage location at the same program point. Alias analysis is critical for performing most optimizations correctly because we must know for certain that we have to take into account all the ways a location, or the value of a variable, may (or must) be used or changed. Compile-time alias information is also important for program verification, debugging and understanding.

In Fortran 77, parameters are passed by address in such a way that, as long as the actual argument is associated with a named storage location, the called subprogram can change the value of the actual argument by assigning a value to the corresponding formal parameter. So new aliases can be created between formal parameters if the same actual argument is passed to two or more formal parameters, or between formal parameters and global parameters if an actual argument is an object in common storage which is also visible in the called subprogram or other subprograms in the call chain below it.

Both intraprocedural and interprocedural alias determinations are important for program analysis. Intraprocedural aliases occur due to pointers in languages like LISP, C, C++ or Fortran 90, union construct in C or EQUIVALENCE in Fortran. Interprocedural aliases are generally created by parameter passing and by access to global variables, which propagates intraprocedural aliases across procedures and introduces new aliases.

The basic idea for computing interprocedural aliases is to follow all the possible chains of argument-parameter and nonlocal-variable-parameter bindings at all call sites. We introduce a memory location naming technique which guarantees the correctness and enhances the precision of the data-flow analysis. The technique associates the sections and offsets of actual parameters to formal parameters along a given call path. Precise alias information is computed for both scalar and array variables. The analysis is called alias propagation.

This analysis is implemented by Thi Viet Nga Nguyen (see [42]).


alias_propagation           > MODULE.alias_associations
        < PROGRAM.entities
        < MODULE.code
        < CALLERS.alias_associations
        < CALLERS.code

7.2.2 Alias Checking

With the call-by-reference mechanism in Fortran 77, new aliases can be created between formal parameters if the same actual argument is passed to two or more formal parameters, or between formal parameters and global parameters if an actual argument is an object in common storage which is also visible in the called subprogram or other subprograms in the call chain below it.

Restrictions on the association of entities in Fortran 77 (Section 15.9.3.6 [7]) say that neither aliased formal parameters nor the variables in the common block may become defined during the execution of the called subprogram or the other subprograms in the call chain.

This phase uses information from the alias_propagation 7.2.1 analysis, computes the definition information of the variables in a program, and then verifies statically whether the program violates the standard restriction on aliases or not. If this information is not known at compile-time, we instrument the code with tests that check the violation dynamically during the execution of the program.

This verification is implemented by Thi Viet Nga Nguyen (see [42]).

alias alias_check ’Alias Check’
alias_check   > MODULE.code
        < PROGRAM.entities
        < MODULE.alias_associations
        < MODULE.cumulated_effects
        < ALL.code

This property controls whether the alias propagation and alias checking phases use information from the MAIN program or not. If the current module is never called by the main program, no alias propagation or alias checking is done for this module when the property is on. However, we can do nothing with modules that have no callers at all, because this is a top-down approach.

 
ALIAS_CHECKING_USING_MAIN_PROGRAM FALSE  

7.3 Used Before Set

This analysis checks if the program uses a variable or an array element which has not been assigned a value. In this case, anything may happen: the program may appear to run normally, or may crash, or may behave unpredictably. We use IN regions, which give the set of read variables not previously written. Depending on the nature of the variable: local, formal or global, we have different cases. In principle, it works as follows: if we have a MUST IN region at the module statement, the corresponding variable must be used before being defined, and a STOP is inserted. Otherwise, we insert an initialization function and go down, inserting a verification function before each MUST IN region at each sub-statement.

This is a top-down analysis that processes a procedure before all its callees. Information given by the callers is used to decide whether the formal parameters have to be checked in the current module or not. In addition, we produce information in the resource MODULE.ubs to tell whether the formal parameters of the called procedures have to be checked or not.

This verification is implemented by Thi Viet Nga Nguyen (see [42]).

alias used_before_set ’Used Before Set’
used_before_set   > MODULE.ubs
        < PROGRAM.entities
        < MODULE.code
        < MODULE.in_regions
        < CALLERS.ubs

Chapter 8
Parallelization, Distribution and Code Generation

8.1 Code Parallelization

PIPS basic parallelization function, rice_all_dependence 8.1.3, produces a new version of the module code with DOALL loops exhibited using Allen & Kennedy's algorithm. The DOALL syntactic construct is non-standard but easy to understand and usual in textbooks like [54]. As a parallel prettyprinter option, it is possible to use Fortran 90 array syntax (see Section 10.4). For C, the loops can be output as for-loops decorated with OpenMP pragmas.

Remember that Allen & Kennedy’s algorithm can only be applied on loops with simple bodies, i.e. sequences of assignments, because it performs loop distribution and loop regeneration without taking control dependencies into account. If the loop body contains tests and branches, the coarse grain parallelization algorithm should be used (see 8.1.6).

Loop index variables are privatized whenever possible, using a simple algorithm. Dependence arcs related to the index variable and stemming from the loop body must end up inside the loop body. Else, the loop index is not privatized because its final value is likely to be needed after the loop end and because no copy-out scheme is supported.

A better privatization algorithm for all scalar variables may be used as a preliminary code transformation. An array privatizer is also available (see Section 9.7.11). A non-standard PRIVATE declaration is used to specify which variables should be allocated on the stack for each loop iteration. An HPF or OpenMP format can also be selected.

Objects of type parallelized_code differ from objects of type code for historical reasons, to simplify the user interface, and because most algorithms cannot be applied to DOALL loops. This used to be true for precondition computation, dependence testing and so on. Right now it is possible neither to re-analyze parallel code nor to re-parse it (although it would be interesting to compute the complexity of a parallel code), but this should evolve. See § 8.1.8.

8.1.1 Parallelization properties

A few properties control the parallelization behavior.

8.1.1.1 Properties controlling Rice parallelization

TRUE to make all possible parallel loops, FALSE to generate real (vector, innermost parallel?) code:

 
GENERATE_NESTED_PARALLEL_LOOPS TRUE  

Show statistics on the number of loops parallelized by pips:

 
PARALLELIZATION_STATISTICS FALSE  

To select whether parallelization and loop distribution is done again for already parallel loops:

 
PARALLELIZE_AGAIN_PARALLEL_CODE FALSE  

The motivation is that we may want to parallelize with a coarse-grain method first, and finish with a fine-grain method to try to parallelize what has not been parallelized yet. When applying à la Rice parallelization to some (still) sequential code, we may not want loop distribution on already parallel code, to preserve cache resources, etc.

Thread-safe libraries are protected by critical sections. Their functions can be called safely from different execution threads. For instance, a loop whose body contains calls to malloc can be parallelized. The underlying state changes do not hinder parallelization, at least if the code is not sensitive to pointer values.

 
PARALLELIZATION_IGNORE_THREAD_SAFE_VARIABLES FALSE  

Since this property is used to mask arcs in the dependence graph, it must be exploited by each parallelization phase independently. It is not used to derive a simplified version of the use-def chains or of the dependence graph, to avoid wrong results with use-def elimination, which is based on the same graph.

8.1.2 Menu for Parallelization Algorithm Selection

Entries in the menu for the resource parallelized_code and for the different parallelization algorithms which may be activated or selected. Note that the nest parallelization algorithm is not debugged.

alias parallelized_code ’Parallelization’

alias rice_all_dependence ’All Dependences’
alias rice_data_dependence ’True Dependences Only’
alias rice_cray ’CRAY Microtasking’
alias nest_parallelization ’Loop Nest Parallelization’
alias coarse_grain_parallelization ’Coarse Grain Parallelization’
alias internalize_parallel_code ’Consider a parallel code as a sequential one’

8.1.3 Allen & Kennedy’s Parallelization Algorithm

Use Allen & Kennedy’s algorithm and consider all dependences.

rice_all_dependence             > MODULE.parallelized_code
        < PROGRAM.entities
        < MODULE.code MODULE.dg

8.1.4 Def-Use Based Parallelization Algorithm

Several other parallelization functions for shared-memory target machines are available. Function rice_data_dependence 8.1.4 only takes into account data flow dependences, a.k.a. true dependences. It is of limited interest because transitive dependences are computed. It is not at all equivalent to performing array and scalar expansion based on direct dependence computation (Brandes, Feautrier, Pugh). It is not safe when privatization is performed before parallelization.

This phase is named after the historical classification of data dependencies in output dependence, anti-dependence and true or data dependence. It should not be used for standard parallelization, but only for experimental parallelization by knowledgeable users, aware that the output code may be illegal.

rice_data_dependence            > MODULE.parallelized_code
        < PROGRAM.entities
        < MODULE.code MODULE.dg

8.1.5 Parallelization and Vectorization for Cray Multiprocessors

Function rice_cray 8.1.5 targets Cray vector multiprocessors. It selects one outermost parallel loop to use multiple processors and one innermost loop for the vector units. It uses Cray microtasking directives. Note that a prettyprinter option must also be selected independently (see Section 10.4).

rice_cray                   > MODULE.parallelized_code
        < PROGRAM.entities
        < MODULE.code MODULE.dg

8.1.6 Coarse Grain Parallelization

Function coarse_grain_parallelization 8.1.6 implements a loop parallelization algorithm based on convex array regions. It considers only one loop at a time, its body being abstracted by its invariant read and write regions. No loop distribution is performed, but any kind of loop body is acceptable whereas Allen & Kennedy algorithm only copes with very simple loop bodies.

For nasty implementation reasons related to effects (the mapping from statement addresses to effects), this pass changes the code instead of producing a parallelized_code resource. This is not a big deal since we often want to modify the code again; if this behavior were modified, internalize_parallel_code 8.1.8 should be used just after.

coarse_grain_parallelization > MODULE.code
        < PROGRAM.entities
        < MODULE.code
        < MODULE.cumulated_effects
        < MODULE.preconditions
        < MODULE.inv_regions

Function coarse_grain_parallelization_with_reduction 8.1.6 extends the standard coarse_grain_parallelization 8.1.6 by using reduction detection information.

coarse_grain_parallelization_with_reduction > MODULE.reduction_parallel_loops
        < PROGRAM.entities
        < MODULE.code
        < MODULE.cumulated_effects
        < MODULE.cumulated_reductions
        < MODULE.proper_reductions
        < MODULE.inv_regions

8.1.7 Global Loop Nest Parallelization

Function nest_parallelization 8.1.7 is an attempt at combining loop transformations and parallelization for perfectly nested loops. Different parameters are computed, like loop ranges and contiguous directions for references. Loops with small ranges are fully unrolled. Loops with large ranges are strip-mined to obtain vector and parallel loops. Loops with medium or unknown ranges are simply parallelized.

For each loop direction, the amount of spatial and temporal localities is estimated. The loop with maximal locality is chosen as innermost loop.

This algorithm is still in the development stage. It can be tried to check that loops are interchanged when locality can be improved. For static control sections, an alternative is to use the interface with PoCC (see Section 10.11).

nest_parallelization                    > MODULE.parallelized_code
        < PROGRAM.entities
        < MODULE.code MODULE.dg

8.1.8 Coerce Parallel Code into Sequential Code

To simplify the user interface and to display a parallelized program with one click, programs in PIPS are parallelized code instead of standard code. As a consequence, parallelized programs cannot be further analyzed and transformed, because sequential code and parallelized code do not have the same resource type. Most pipsmake rules apply to code but not to parallelized code. Unfortunately, improving the parallelized code with some other transformations such as dead-code elimination is also useful. Thus this pseudo-transformation is added to coerce a parallel code into a classical (sequential) one. With this rule, parallelization becomes an internal code transformation in PIPS.

Although this is not the effective process, parallel loops are tagged as parallel, and loop-local variables may be added in a code resource because of a previous privatization phase.

If you display the "generated" code, it may not be displayed as a parallel one unless the PRETTYPRINT_SEQUENTIAL_STYLE 10.2.22.3.2 property is set to a parallel output style (such as omp). Anyway, the information is available in the code.

Note that this transformation may not be usable with some special parallelizations in PIPS, such as WP65 or HPFC, that generate other resource types which may be quite different.

internalize_parallel_code             > MODULE.code
        < MODULE.parallelized_code

8.1.9 Detect Computation Intensive Loops

Generate a pragma on each loop that seems to be computation intensive according to a simple cost model.

The computation intensity is derived from the complexity and the memory footprint. It assumes the cost model:

execution_time = startup_overhead + memory_footprint / bandwidth + complexity / frequency
A loop is marked with pragma COMPUTATION_INTENSITY_PRAGMA 8.1.9 if the communication costs are lower than the execution cost as given by uniform_complexities 6.11.2.

computation_intensity > MODULE.code
< MODULE.code
< MODULE.regions
< MODULE.complexities

This corresponds to the transfer startup overhead. The time unit is the same as in complexities.

 
COMPUTATION_INTENSITY_STARTUP_OVERHEAD 10  

This corresponds to the memory bandwidth, in bytes per time unit.

 
COMPUTATION_INTENSITY_BANDWIDTH 100  

This is the processor frequency, in operations per time unit.

 
COMPUTATION_INTENSITY_FREQUENCY 1000  

This is the generated pragma.

 
COMPUTATION_INTENSITY_PRAGMA "pipsintensiveloop"  

These values have limited meaning in isolation; only their ratios do. Having COMPUTATION_INTENSITY_FREQUENCY 8.1.9 and COMPUTATION_INTENSITY_BANDWIDTH 8.1.9 of the same magnitude clearly limits the number of generated pragmas…
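As a sketch, the cost model can be evaluated as follows, using the default property values; the function name and the split between communication and execution costs follow the description above, not an actual PIPS API.

```python
STARTUP_OVERHEAD = 10   # COMPUTATION_INTENSITY_STARTUP_OVERHEAD
BANDWIDTH = 100         # COMPUTATION_INTENSITY_BANDWIDTH
FREQUENCY = 1000        # COMPUTATION_INTENSITY_FREQUENCY

def is_computation_intensive(memory_footprint, complexity):
    """True when the communication cost is lower than the execution cost,
    i.e. when the loop deserves the COMPUTATION_INTENSITY_PRAGMA."""
    communication_cost = STARTUP_OVERHEAD + memory_footprint / BANDWIDTH
    execution_cost = complexity / FREQUENCY
    return communication_cost < execution_cost
```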

8.1.10 Limit parallelism using complexity

Parallel loops which are considered not complex enough are replaced by sequential ones, using a simple cost model based on complexity (see uniform_complexities 6.11.2).

limit_parallelism_using_complexity > MODULE.code
< MODULE.code
< MODULE.complexities

8.1.11 Limit Parallelism in Parallel Loop Nests

This phase restricts the parallelism of parallel do-loop nests by limiting the number of top-level parallel do-loops to a given limit. The excess innermost parallel loops, if any, are replaced by sequential loops. This is useful to keep enough coarse-grain parallelism while respecting some hardware or optimization constraints. For example on GPUs, CUDA has a 2D limitation on grids of thread blocks, and OpenCL is limited to 3D. Of course, since the phase works on parallel loop nests, it might be interesting to use a parallelizing phase such as internalize_parallel_code (see § 8.1.8) or coarse-grain parallelization before applying limit_nested_parallelism.

limit_nested_parallelism          > MODULE.code
        < MODULE.code

PIPS relies on the property NESTED_PARALLELISM_THRESHOLD 8.1.11 to determine the desired level of nested parallelism.

 
NESTED_PARALLELISM_THRESHOLD 0  

8.2 SIMDizer for SIMD Multimedia Instruction Set

The SAC project aims at generating efficient code for processors with a SIMD extension instruction set such as VMX, SSE4, etc.; this kind of parallelism is also referred to as Superword Level Parallelism (SLP). For more information, see https://info.enstb.org/projets/sac, or better, see Serge Guelton’s PhD dissertation.

Some phases use ACCEL_LOAD 8.2 and ACCEL_STORE 8.2 to generate DMA calls, as well as ACCEL_WORK 8.2.

 
ACCEL_LOAD "SIMD_LOAD"  
 
ACCEL_STORE "SIMD_STORE"  
 
ACCEL_WORK "SIMD_"  

8.2.1 SIMD Atomizer

Here is yet another atomizer, based on new_atomizer (see Section 9.4.1.2), used to reduce complex statements to three-address code close to assembly code. It differs from new_atomizer only in minor ways; in particular, it does not break down simple expressions, that is, expressions that are the sum of a reference and a constant, such as i+1. This is needed to generate code that could potentially be efficient, whereas the original atomizer would most of the time generate inefficient code.

alias simd_atomizer ’SIMD Atomizer’

simd_atomizer                      > MODULE.code
        < PROGRAM.entities
        < MODULE.code

Use the SIMD_ATOMIZER_ATOMIZE_REFERENCE 8.2.1 property to make the SIMD Atomizer go wild: unlike other atomizers, it will break down the contents of a reference. SIMD_ATOMIZER_ATOMIZE_LHS 8.2.1 can be used to tell the atomizer to atomize both the lhs and the rhs.

 
SIMD_ATOMIZER_ATOMIZE_REFERENCE FALSE  
 
SIMD_ATOMIZER_ATOMIZE_LHS FALSE  
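The effect of atomization on an expression such as b[i]*c[i] + d[i+1] can be pictured as follows: each non-trivial operation gets its own temporary, while the simple address expression i+1 is deliberately left intact. This Python sketch only demonstrates the equivalence of the two evaluation orders; the temporary names are illustrative.

```python
def original(b, c, d, i):
    # one complex statement mixing two operations
    return b[i] * c[i] + d[i + 1]

def atomized(b, c, d, i):
    # three-address form: one operation per temporary;
    # the simple expression i+1 is NOT broken into a temporary
    t0 = b[i] * c[i]
    t1 = d[i + 1]
    return t0 + t1
```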

The SIMD_OVERRIDE_CONSTANT_TYPE_INFERENCE 8.2.1 property is used by the SAC library to know whether it must override C constant type inference. In C, an integer constant always has the minimum size needed to hold its value, starting from an int. In SAC we may want to have it converted to a smaller size, in situations like char b;/*...*/;char a = 2 + b;. Otherwise the result of 2+b is considered an int. If SIMD_OVERRIDE_CONSTANT_TYPE_INFERENCE 8.2.1 is set to TRUE, the result of 2+b will be a char.

 
SIMD_OVERRIDE_CONSTANT_TYPE_INFERENCE FALSE  

8.2.2 Loop Unrolling for SIMD Code Generation

Tries to unroll the code to make the simdizing process more efficient, by computing the unroll factor that allows the most instructions to be packed together. Sensitive to SIMDIZER_AUTO_UNROLL_MINIMIZE_UNROLL 8.2.11.1 and SIMDIZER_AUTO_UNROLL_SIMPLE_CALCULATION 8.2.11.1.

alias simdizer_auto_unroll ’SIMD-Auto Unroll’

simdizer_auto_unroll        > MODULE.code
< PROGRAM.simd_treematch
< PROGRAM.simd_operator_mappings
        < PROGRAM.entities
        < MODULE.code

Similar to simdizer_auto_unroll 8.2.2, but at the loop level.

Sensitive to LOOP_LABEL 9.1.1.

loop_auto_unroll        > MODULE.code
        < PROGRAM.entities
        < MODULE.code

8.2.3 Tiling for SIMD Code Generation

Tries to tile the code to make the simdizing process more efficient.

Sensitive to LOOP_LABEL 9.1.1 to select the loop nest to tile.

simdizer_auto_tile        > MODULE.code
        < PROGRAM.entities
        < MODULE.cumulated_effects
        < MODULE.code

8.2.4 Preprocessing of Reductions for SIMD Code Generation

This phase tries to pre-process reductions, so that they can be vectorized efficiently by the simdizer 8.2.10 phase. When multiple reduction statements operating on the same variable with the same operation are detected inside a loop body, each “instance” of the reduction is renamed, and some code is added before and after the loop to initialize the new variables and compute the final result.

alias simd_remove_reductions ’SIMD Remove Reductions’

simd_remove_reductions      > MODULE.code
                            > MODULE.callees
        ! MODULE.simdizer_init
        < PROGRAM.entities
        < MODULE.cumulated_reductions
        < MODULE.code
        < MODULE.dg
 
SIMD_REMOVE_REDUCTIONS_PREFIX "RED"  
 
SIMD_REMOVE_REDUCTIONS_PRELUDE ""  
 
SIMD_REMOVE_REDUCTIONS_POSTLUDE ""  
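The effect on a loop body with two reduction statements on the same variable can be sketched as follows. The RED-prefixed names mirror SIMD_REMOVE_REDUCTIONS_PREFIX, but the exact generated names and code shape are an assumption; the sketch only demonstrates the renaming and the pre/post-loop code described above.

```python
def original(a, b):
    s = 0
    for i in range(len(a)):
        s += a[i]      # first reduction instance on s
        s += b[i]      # second reduction instance on s
    return s

def preprocessed(a, b):
    s = 0
    RED0, RED1 = 0, 0  # initialization inserted before the loop
    for i in range(len(a)):
        RED0 += a[i]   # each instance is renamed, so the two
        RED1 += b[i]   # statements can be vectorized independently
    s += RED0 + RED1   # final result computed after the loop
    return s
```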

8.2.5 Redundant Load-Store Elimination

Removes useless load/store calls (and more).


redundant_load_store_elimination      > MODULE.code
> MODULE.callees
        < PROGRAM.entities
        < MODULE.code
        < MODULE.out_regions
        < MODULE.chains

If REDUNDANT_LOAD_STORE_ELIMINATION_CONSERVATIVE 8.2.5 is set to false, redundant_load_store_elimination 8.2.5 will remove any statement not involved in the computation of out regions; otherwise it will not remove statements that modify parameter references.

 
REDUNDANT_LOAD_STORE_ELIMINATION_CONSERVATIVE TRUE  

8.2.6 Undo Some Atomizer Transformations (?)

...

alias deatomizer ’Deatomizer’

deatomizer                  > MODULE.code
        < PROGRAM.entities
        < MODULE.code
        < MODULE.proper_effects
        < MODULE.dg

8.2.7 If Conversion

This phase is the first phase of the if-conversion algorithm. The complete if-conversion algorithm is performed by applying the three following phases: if_conversion_init 8.2.7, if_conversion 8.2.7 and if_conversion_compact 8.2.7.

Use IF_CONVERSION_INIT_THRESHOLD 8.2.7 to control whether if conversion will occur or not: beyond this number of calls, no conversion is done.

 
IF_CONVERSION_INIT_THRESHOLD 40  

alias if_conversion_init ’If-conversion init’

if_conversion_init                  > MODULE.code
        < PROGRAM.entities
        < MODULE.code
        < MODULE.summary_complexity

This phase is the second phase of the if-conversion algorithm. The complete if-conversion algorithm is performed by applying the three following phases: if_conversion_init 8.2.7, if_conversion 8.2.7 and if_conversion_compact 8.2.7.

 
IF_CONVERSION_PHI "__C-conditional__"  

alias if_conversion ’If-conversion’

if_conversion                  > MODULE.code
        < PROGRAM.entities
        < MODULE.code
        < MODULE.proper_effects
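The conversion can be pictured as replacing a two-branch conditional assignment by an unconditional one built on a selection operator, here modeled after the IF_CONVERSION_PHI name. This sketch is illustrative only; it demonstrates the semantics, not the actual PIPS code generation.

```python
def phi(cond, val_true, val_false):
    """Models a C conditional operator: cond ? val_true : val_false."""
    return val_true if cond else val_false

def original(x, y):
    if x > 0:
        y = y + 1
    else:
        y = y - 1
    return y

def if_converted(x, y):
    # both branch values are computed; the phi operator selects one,
    # removing the control dependence that blocks vectorization
    return phi(x > 0, y + 1, y - 1)
```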

This phase is the third phase of the if-conversion algorithm. The complete if-conversion algorithm is performed by applying the three following phases: if_conversion_init 8.2.7, if_conversion 8.2.7 and if_conversion_compact 8.2.7.

alias if_conversion_compact ’If-conversion compact’

if_conversion_compact                  > MODULE.code
        < PROGRAM.entities
        < MODULE.code
        < MODULE.proper_effects
        < MODULE.dg
Converts max operations in loop bounds into tests. It also sets variables so that simplify_control 9.3.1 works afterwards.

8.2.8 Loop Unswitching

This phase applies loop unswitching on a loop-invariant test: it transforms a loop with an if/then/else inside into an if/then/else with the loop duplicated in the “then” and “else” branches.

loop_nest_unswitching                  > MODULE.code
        < PROGRAM.entities
        < MODULE.code
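The transformation can be illustrated on a loop whose body tests a loop-invariant flag; hoisting the test and duplicating the loop preserves the semantics while removing the per-iteration branch. A minimal sketch:

```python
def original(a, flag):
    for i in range(len(a)):
        if flag:            # loop-invariant test evaluated every iteration
            a[i] += 1
        else:
            a[i] -= 1
    return a

def unswitched(a, flag):
    # the invariant test is hoisted; the loop is duplicated
    # in the "then" and "else" branches
    if flag:
        for i in range(len(a)):
            a[i] += 1
    else:
        for i in range(len(a)):
            a[i] -= 1
    return a
```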

8.2.9 Scalar Renaming

The Scalar Renaming pass tries to minimize dependencies in the code by renaming scalars when legal.

scalar_renaming           > MODULE.code
        < PROGRAM.entities
        < MODULE.dg
        < MODULE.proper_effects
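A scalar reused for two unrelated computations creates false (output and anti) dependences between them; giving each live range its own name removes those dependences. An illustrative sketch (the names t0 and t1 are hypothetical):

```python
def original(a, b):
    t = a + 1      # first live range of t
    x = t * 2
    t = b + 1      # unrelated reuse of t: output/anti dependences
    y = t * 3
    return x, y

def renamed(a, b):
    t0 = a + 1     # each live range gets its own scalar,
    x = t0 * 2     # so the two computations become independent
    t1 = b + 1     # and can be reordered or run in parallel
    y = t1 * 3
    return x, y
```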

8.2.10 Tree Matching for SIMD Code Generation

This function initializes a tree matcher used by simdizer 8.2.10 for SIMD-oriented pattern matching.

simd_treematcher > PROGRAM.simd_treematch
This function initializes the operator mappings used by simdizer 8.2.10 for SIMD-oriented pattern matching.

simd_operator_mappings > PROGRAM.simd_operator_mappings

simdizer_init  > MODULE.code
< PROGRAM.entities
< MODULE.code

Function simdizer 8.2.10 is an attempt at generating SIMD code for SIMD multimedia instruction sets such as MMX, SSE2, VIS, etc. This transformation performs the core vectorization, transforming sequences of similar statements into vector operations.

alias simdizer ’Generate SIMD code’

simdizer                    > MODULE.code
                            > MODULE.callees
                            > PROGRAM.entities
! MODULE.simdizer_init
< PROGRAM.simd_treematch
< PROGRAM.simd_operator_mappings
        < PROGRAM.entities
        < MODULE.code
        < MODULE.proper_effects
        < MODULE.cumulated_effects
        < MODULE.dg

When set to TRUE, the following property tells the simdizer to try to pad arrays when it seems profitable.

 
SIMDIZER_ALLOW_PADDING FALSE  

When set to FALSE, the generation of loads and stores is skipped and generic functions are used instead.

 
SIMDIZER_GENERATE_DATA_TRANSFERS TRUE  

This phase is to be called after simdization of the assignment operator. It performs type substitution from char/short arrays to int arrays, using the packing information from the simdization phase. For example, four consecutive loads from a char array could become a single load from an int array. This proves useful for C-to-VHDL compilers such as c2h.

alias simd_memory_packing ’Generate Optimized Load Store’

simd_memory_packing  > MODULE.code
        < PROGRAM.entities
        < MODULE.code
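The substitution can be pictured as follows: four consecutive char loads are replaced by one int load whose bytes are then the four values. The sketch assumes a little-endian 32-bit word ("<I" in struct notation); the actual byte order depends on the target.

```python
import struct

def char_loads(buf, i):
    # four consecutive loads from a char array
    return buf[i], buf[i + 1], buf[i + 2], buf[i + 3]

def int_load(buf, i):
    # a single load from the same memory viewed as an int array;
    # '<I' assumes a little-endian unsigned 32-bit word
    word = struct.unpack_from("<I", buf, i)[0]
    return (word & 0xFF, (word >> 8) & 0xFF,
            (word >> 16) & 0xFF, (word >> 24) & 0xFF)
```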

8.2.11 SIMD properties

This property is used to set the target register size, expressed in bits, for places where this is needed (for instance, auto-unroll with simple algorithm).

 
SAC_SIMD_REGISTER_WIDTH 64  

8.2.11.1 Auto-Unroll

This property is used to control how the auto unroll phase computes the unroll factor. By default, the minimum unroll factor is used. It is computed by using the minimum of the optimal factor for each statement. If the property is set to FALSE, then the maximum unroll factor is used instead.

 
SIMDIZER_AUTO_UNROLL_MINIMIZE_UNROLL TRUE  

This property controls how the “optimal” unroll factor is computed. Two algorithms can be used. By default, a simple algorithm is used, which simply compares the actual size of the variables used to the size of the registers to find out the best unroll factor. If the property is set to FALSE, a more complex algorithm is used, which takes into account the actual SIMD instructions.

 
SIMDIZER_AUTO_UNROLL_SIMPLE_CALCULATION TRUE  
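With the simple algorithm, the per-statement factor is how many elements of the statement's data size fit in a SIMD register, and SIMDIZER_AUTO_UNROLL_MINIMIZE_UNROLL chooses between the minimum and the maximum over all statements. A sketch of this computation, under the assumption that it reduces to a register-width division:

```python
REGISTER_WIDTH = 64  # SAC_SIMD_REGISTER_WIDTH, in bits

def statement_unroll_factor(element_size_in_bits):
    # how many elements of this size fit in one SIMD register
    return REGISTER_WIDTH // element_size_in_bits

def unroll_factor(statement_sizes, minimize=True):
    # minimize=True mirrors SIMDIZER_AUTO_UNROLL_MINIMIZE_UNROLL TRUE
    factors = [statement_unroll_factor(s) for s in statement_sizes]
    return min(factors) if minimize else max(factors)
```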

8.2.11.2 Memory Organisation

This property is used by the SAC library to know which elements of a multi-dimensional array are consecutive in memory. Let us consider the three following references: a(i,j,k), a(i,j,k+1) and a(i+1,j,k). If SIMD_FORTRAN_MEM_ORGANISATION 8.2.11.2 is set to TRUE, a(i,j,k) and a(i+1,j,k) are consecutive in memory but a(i,j,k) and a(i,j,k+1) are not. However, if SIMD_FORTRAN_MEM_ORGANISATION 8.2.11.2 is set to FALSE, a(i,j,k) and a(i,j,k+1) are consecutive in memory but a(i,j,k) and a(i+1,j,k) are not.

 
SIMD_FORTRAN_MEM_ORGANISATION TRUE  
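This property encodes the usual difference between Fortran (column-major) and C (row-major) layouts. Linearizing the references of the example above shows which pairs of elements are adjacent; the sketch below is a standard index computation, not PIPS code.

```python
def offset(i, j, k, dims, fortran_order=True):
    """Linear offset of a(i,j,k) in an array of shape dims = (ni, nj, nk)."""
    ni, nj, nk = dims
    if fortran_order:                  # first index varies fastest
        return i + ni * (j + nj * k)
    return k + nk * (j + nj * i)       # last index varies fastest

dims = (10, 10, 10)
```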

8.2.11.3 Pattern file

This property is used by the SAC library to know the path of the pattern definition file. If the file is not found, the execution fails.

 
SIMD_PATTERN_FILE "patterns.def"  

8.3 Code Distribution

Different automatic code distribution techniques are implemented in PIPS for distributed-memory machines. The first one is based on the emulation of a shared memory. The second one is based on HPF. A third one targets architectures with hardware coprocessors. Another one, currently developed at IT SudParis, generates MPI code from OpenMP code.

8.3.1 Shared-Memory Emulation

WP651  [30, 31, 32] produces a new version of a module transformed to be executed on a distributed-memory machine. Each module is transformed into two modules. One module, wp65_compute_file, performs the computations, while the other one, wp65_bank_file, emulates a shared memory.

This rule does not have data structure outputs, as the two new programs generated have computed names. This does not fit the pipsmake framework too well, but is OK as long as nobody wishes to apply PIPS to the generated code, e.g. to propagate constants or eliminate dead code.

Note that use-use dependencies are used to allocate temporary arrays in local memory (i.e. in the software cache).

This compilation scheme was designed by Corinne Ancourt and François Irigoin. It uses theoretical results in [6]. Its input is a very small subset of Fortran (e.g. procedure calls are not supported). It was implemented by the designers, with help from Lei Zhou.

alias wp65_compute_file ’Distributed View’
alias wp65_bank_file ’Bank Distributed View’
wp65                            > MODULE.wp65_compute_file
                                > MODULE.wp65_bank_file
        ! MODULE.privatize_module
        < PROGRAM.entities
        < MODULE.code
        < MODULE.dg
        < MODULE.cumulated_effects
        < MODULE.chains
        < MODULE.proper_effects

Name of the file for the target model:

 
WP65_MODEL_FILE "model.rc"  

8.3.2 HPF Compiler

The HPF compiler2 is a project by itself, developed by Fabien Coelho in the PIPS framework.

A whole set of rules is used by the PIPS HPF compiler3 , HPFC4 . By the way, the whole compiler is just a big hack according to Fabien Coelho.

8.3.2.1 HPFC Filter

The first rule applies a shell script, based on sed, to put the HPF directives in an f77-parsable form. The hpfc_parser 4.2.2 must be called to analyze the right file. This is triggered automatically by the bang selection in the hpfc_close 8.3.2.5 phase.

hpfc_filter             > MODULE.hpfc_filtered_file
    < MODULE.source_file

8.3.2.2 HPFC Initialization

The second HPFC rule is used to initialize the hpfc status and other data structures global to the compiler. The HPF compiler status is bootstrapped. The compiler status stores (or should store) all relevant information about the HPF part of the program (data distribution, IO functions and so on).

hpfc_init              > PROGRAM.entities
                       > PROGRAM.hpfc_status
    < PROGRAM.entities

8.3.2.3 HPF Directive removal

This phase removes the directives (some special calls) from the code. The remappings (implicit or explicit) are also managed at this level, through copies between differently shaped arrays.

To manage calls with distributed arguments, I need to apply the directive extraction bottom-up, so that the callers will know about the callees through the hpfc_status. In order to do that, I first thought of an intermediate resource, but there were obscure problems with my fake calls. Thus the ordering of the static and then dynamic directive analyses is enforced at the bang-sequence request level in the hpfc_close 8.3.2.5 phase.

The hpfc_static_directives 8.3.2.3 phase analyzes static mapping directives for the specified module. The hpfc_dynamic_directives 8.3.2.3 phase manages realigns and function calls with prescriptive argument mappings. In order to do so it needs its callees’ required mappings, hence the need to analyze static directives beforehand. The code is cleaned from the hpfc_filter 8.3.2.1 artifacts after this phase, and all the proper information about the HPF stuff included in the routines is stored in hpfc_status.

hpfc_static_directives         > MODULE.code
                        > PROGRAM.hpfc_status
    < PROGRAM.entities
    < PROGRAM.hpfc_status
    < MODULE.code

hpfc_dynamic_directives         > MODULE.code
                        > PROGRAM.hpfc_status
    < PROGRAM.entities
    < PROGRAM.hpfc_status
    < MODULE.code
    < MODULE.proper_effects

8.3.2.4 HPFC actual compilation

This rule launches the actual compilation. Four files are generated:

  1. the host code that mainly deals with I/Os,
  2. the SPMD node code,
  3. and some initialization stuff for the runtime (2 files).

Between this phase and the previous one, many PIPS standard analyses are performed, especially the regions and preconditions. Then this phase will perform the actual translation of the program into a host and SPMD node code.

hpfc_compile           > MODULE.hpfc_host
                       > MODULE.hpfc_node
                       > MODULE.hpfc_parameters
                       > MODULE.hpfc_rtinit
                       > PROGRAM.hpfc_status
    < PROGRAM.entities
    < PROGRAM.hpfc_status
    < MODULE.regions
    < MODULE.summary_regions
    < MODULE.preconditions
    < MODULE.code
    < MODULE.cumulated_references
    < CALLEES.hpfc_host

8.3.2.5 HPFC completion

This rule deals with the compiler closing. It must deal with commons. The hpfc parser selection is put here.

hpfc_close             > PROGRAM.hpfc_commons
    ! SELECT.hpfc_parser
    ! SELECT.must_regions
    ! ALL.hpfc_static_directives
    ! ALL.hpfc_dynamic_directives
    < PROGRAM.entities
    < PROGRAM.hpfc_status
    < MAIN.hpfc_host

8.3.2.6 HPFC install

This rule performs the installation of HPFC-generated files in a separate directory. This rule is added to make hpfc usable from wpips and epips. I got problems with the make and run rules, because they were trying to recompute everything from scratch. To be investigated later on.

hpfc_install            > PROGRAM.hpfc_installation
    < PROGRAM.hpfc_commons

hpfc_make

hpfc_run

8.3.2.7 HPFC High Performance Fortran Compiler properties

Debugging levels considered by HPFC: HPFC_{,DIRECTIVES,IO,REMAPPING}_DEBUG_LEVEL.

These booleans control whether some computations are directly generated in the output code, or computed through calls to dedicated runtime functions. The default is the direct expansion.

 
HPFC_EXPAND_COMPUTE_LOCAL_INDEX TRUE  
 
HPFC_EXPAND_COMPUTE_COMPUTER TRUE  
 
HPFC_EXPAND_COMPUTE_OWNER TRUE  
 
HPFC_EXPAND_CMPLID TRUE  
 
HPFC_NO_WARNING FALSE  

Hacks control…

 
HPFC_FILTER_CALLEES FALSE  
 
GLOBAL_EFFECTS_TRANSLATION TRUE  

These booleans control the I/O generation.

 
HPFC_SYNCHRONIZE_IO FALSE  
 
HPFC_IGNORE_MAY_IN_IO FALSE  

Whether to use lazy or non-lazy communications.

 
HPFC_LAZY_MESSAGES TRUE  

Whether to ignore FCD (Fabien Coelho Directives…) or not. These directives are used to instrument the code for testing purposes.

 
HPFC_IGNORE_FCD_SYNCHRO FALSE  
 
HPFC_IGNORE_FCD_TIME FALSE  
 
HPFC_IGNORE_FCD_SET FALSE  

Whether to measure and display the compilation times for remappings, and whether to generate outward redundant code for remappings. Also whether to generate code that dynamically keeps track of live mappings, and whether to avoid sending data to a twin (a processor that holds the very same data for a given array).

 
HPFC_TIME_REMAPPINGS FALSE  
 
HPFC_REDUNDANT_SYSTEMS_FOR_REMAPS FALSE  
 
HPFC_OPTIMIZE_REMAPPINGS TRUE  
 
HPFC_DYNAMIC_LIVENESS TRUE  
 
HPFC_GUARDED_TWINS TRUE  

Whether to use the local buffer management. 1 MB of buffer is allocated.

 
HPFC_BUFFER_SIZE 1000000  
 
HPFC_USE_BUFFERS TRUE  

Whether to use in and out convex array regions for input/output compiling.

 
HPFC_IGNORE_IN_OUT_REGIONS TRUE  

Whether to extract more equalities from a system, if possible.

 
HPFC_EXTRACT_EQUALITIES TRUE  

Whether to try to extract the underlying lattice when generating code for systems with equalities.

 
HPFC_EXTRACT_LATTICE TRUE  

8.3.3 STEP: MPI code generation from OpenMP programs


8.3.3.1 STEP Directives

The step_parser 8.3.3.1 phase identifies the OpenMP constructs. The directive semantics are stored in the MODULE.step_directives resource.

step_parser                 > MODULE.step_directives
                            > MODULE.code
   < MODULE.code

8.3.3.2 STEP Analysis

The step_analyse_init 8.3.3.2 phase initializes the PROGRAM.step_comm resource.

step_analyse_init            > PROGRAM.step_comm

The step_analyse 8.3.3.2 phase triggers the convex array region analyses to compute the SEND and RECV regions leading to MPI messages, and checks whether a given SEND region corresponding to a directive construct is consumed by a RECV region corresponding to a directive construct. In this case, communications can be optimized.

step_analyse                 > PROGRAM.step_comm
                             > MODULE.step_send_regions
                             > MODULE.step_recv_regions
   < PROGRAM.entities
   < PROGRAM.step_comm
   < MODULE.step_directives
   < MODULE.code
   < MODULE.preconditions
   < MODULE.transformers
   < MODULE.cumulated_effects
   < MODULE.regions
   < MODULE.in_regions
   < MODULE.out_regions
   < MODULE.chains
   < CALLEES.code
   < CALLEES.step_send_regions
   < CALLEES.step_recv_regions

8.3.3.3 STEP code generation

Based on the OpenMP constructs and analyses, new modules are generated to translate the original code with OpenMP directives. The default code transformation for an OpenMP construct is driven by the STEP_DEFAULT_TRANSFORMATION 8.3.3.3 property. The different values allowed are:

 
STEP_DEFAULT_TRANSFORMATION "HYBRID"  

The step_compile 8.3.3.3 phase generates source code for OpenMP constructs depending on the desired transformation. Each OpenMP construct can have a specific transformation defined by STEP clauses (without specific clauses, STEP_DEFAULT_TRANSFORMATION 8.3.3.3 is used). The specific STEP clauses allowed are:

step_compile               > MODULE.step_file
   < PROGRAM.entities
   < PROGRAM.step_comm
   < MODULE.step_directives
   < MODULE.code

The step_install 8.3.3.3 phase copies the generated source files into the directory specified by the STEP_INSTALL_PATH 8.3.3.3 property.

step_install
   < ALL.step_file
 
STEP_INSTALL_PATH ""  

8.3.4 PHRASE: high-level language transformation for partial evaluation in reconfigurable logic

The PHRASE project is an attempt to automatically (or semi-automatically) transform high-level language programs into code with partial execution on some accelerators such as reconfigurable logic (such as FPGAs) or data-paths.

These phases split the code into portions of code delimited by PHRASE pragmas (written by the programmer) and a control program managing them. Those portions of code are intended, after transformations, to be executed in reconfigurable logic. In the PHRASE project, the reconfigurable logic is synthesized with the Madeo tool, which takes SmallTalk code as input. This is why we have a SmallTalk pretty-printer (see Section 10.10).

8.3.4.1 Phrase Distributor Initialisation

This phase is a preparation phase for the Phrase Distributor phrase_distributor 8.3.4.2: the portions of code to externalize are identified and isolated here. Comments are modified by this phase.

alias phrase_distributor_init ’PHRASE Distributor initialization’

phrase_distributor_init                  > MODULE.code
        < PROGRAM.entities
        < MODULE.code

This phase is automatically called by the following phrase_distributor 8.3.4.2.

8.3.4.2 Phrase Distributor

The job of distribution is done here. This phase should be applied after the initialization (Phrase Distributor Initialisation phrase_distributor_init 8.3.4.1), so this one is automatically applied first.

alias phrase_distributor ’PHRASE Distributor’

phrase_distributor                       > MODULE.code
                                         > MODULE.callees
        ! MODULE.phrase_distributor_init
        < PROGRAM.entities
        < MODULE.code
        < MODULE.in_regions
        < MODULE.out_regions
        < MODULE.dg

8.3.4.3 Phrase Distributor Control Code

This phase adds control code for the PHRASE distribution. All calls to externalized code portions are transformed into START and WAIT calls. Parameter communications (send and receive) are also handled here.

alias phrase_distributor_control_code ’PHRASE Distributor Control Code’

phrase_distributor_control_code          > MODULE.code
        < PROGRAM.entities
        < MODULE.code
        < MODULE.in_regions
        < MODULE.out_regions
        < MODULE.dg

8.3.5 Safescale

The Safescale project is an attempt to automatically (or semi-automatically) transform sequential code written in C language for the Kaapi runtime.

8.3.5.1 Distribution init

This phase analyzes a given module with the aim of finding blocks of code delimited by specific pragmas.

alias safescale_distributor_init ’Safescale distributor init’

safescale_distributor_init                  > MODULE.code
        < PROGRAM.entities
        < MODULE.code

8.3.5.2 Statement Externalization

This phase is intended for the externalization of a block of code.

alias safescale_distributor ’Safescale distributor’

safescale_distributor                  > MODULE.code
                                       > MODULE.callees
        ! MODULE.safescale_distributor_init
        < PROGRAM.entities
        < MODULE.code
        < MODULE.regions
        < MODULE.in_regions
        < MODULE.out_regions

8.3.6 CoMap: Code Generation for Accelerators with DMA

8.3.6.1 Phrase Remove Dependences

alias phrase_remove_dependences ’Phrase Remove Dependences’

phrase_remove_dependences                      > MODULE.code
                                               > MODULE.callees
        ! MODULE.phrase_distributor_init
        < PROGRAM.entities
        < MODULE.code
        < MODULE.in_regions
        < MODULE.out_regions
        < MODULE.dg

8.3.6.2 Phrase comEngine Distributor

This phase should be applied after the initialization (Phrase Distributor Initialisation or phrase_distributor_init 8.3.4.1). The job of comEngine distribution is done here.

alias phrase_comEngine_distributor ’PHRASE comEngine Distributor’

phrase_comEngine_distributor                       > MODULE.code
                                                   > MODULE.callees
        ! MODULE.phrase_distributor_init
        < PROGRAM.entities
        < MODULE.code
        < MODULE.in_regions
        < MODULE.out_regions
        < MODULE.dg
        < MODULE.summary_complexity

8.3.6.3 PHRASE ComEngine properties

This property is set to TRUE if we want to synthesize only one process on the HRE.

 
COMENGINE_CONTROL_IN_HRE TRUE  

This property holds the fifo size of the ComEngine.

 
COMENGINE_SIZE_OF_FIFO 128  

8.3.7 Parallelization for Terapix architecture

8.3.7.1 Isolate Statement

Isolates the statement given by ISOLATE_STATEMENT_LABEL 8.3.7.1 in a separate memory. Data transfers are generated using the same DMA as kernel_load_store 8.3.7.5.

The algorithm is based on read and write regions (no in/out regions yet) to compute the data that must be copied and allocated. Rectangular hulls of the regions are used to match allocator and data transfer prototypes. If an analysis fails, definition regions are used instead. If a sizeof is involved, EVAL_SIZEOF 9.4.2 must be set to true.

isolate_statement > MODULE.code
> MODULE.callees
< MODULE.code
< MODULE.regions
< PROGRAM.entities
 
ISOLATE_STATEMENT_LABEL ""  

As a side effect of the isolate_statement pass, some new variables are declared in the function. A prefix can be added to the names of those variables using the property ISOLATE_STATEMENT_VAR_PREFIX. It is also possible to insert a suffix using the property ISOLATE_STATEMENT_VAR_SUFFIX. The suffix will be inserted between the original variable name and the instance number of the copy.

 
ISOLATE_STATEMENT_VAR_PREFIX ""  
 
ISOLATE_STATEMENT_VAR_SUFFIX ""  

By default we cannot isolate a statement with some complex effects on the non-local memory. But if we know it is safe, we can override this behaviour by setting the following property:

 
ISOLATE_STATEMENT_EVEN_NON_LOCAL FALSE  

8.3.7.2 GPU XML Output

Dumps XML for a function, intended to be used by SPEAR. It tracks back the parameters that are used for the iteration space.

gpu_xml_dump > MODULE.gpu_xml_file
         < PROGRAM.entities
         < MODULE.code

8.3.7.3 Delay Communications

Optimizes the load/store DMA by delaying the stores and performing the loads as soon as possible. Interprocedural version.

It uses ACCEL_LOAD 8.2 and ACCEL_STORE 8.2 to distinguish loads and stores from other calls.

The communication elimination makes the assumption that a load/store pair can always be removed.

delay_communications_inter                    > MODULE.code
        > MODULE.callees
        ! CALLEES.delay_communications_inter
        ! MODULE.delay_load_communications_inter
        ! MODULE.delay_store_communications_inter
        < PROGRAM.entities
        < MODULE.code
        < MODULE.regions
        < MODULE.dg

delay_load_communications_inter                    > MODULE.code
        > MODULE.callees
        > CALLERS.code
        > CALLERS.callees
        < PROGRAM.entities
        < MODULE.code
        < CALLERS.code
        < MODULE.proper_effects
        < MODULE.cumulated_effects
        < MODULE.dg

delay_store_communications_inter                    > MODULE.code
        > MODULE.callees
        > CALLERS.code
        > CALLERS.callees
        < PROGRAM.entities
        < MODULE.code
        < CALLERS.code
        < MODULE.proper_effects
        < MODULE.cumulated_effects
        < MODULE.dg

Optimizes the load/store DMA by delaying the stores and performing the loads as soon as possible. Intraprocedural version.

It uses ACCEL_LOAD 8.2 and ACCEL_STORE 8.2 to distinguish loads and stores from other calls.

The communication elimination makes the assumption that a load/store pair can always be removed.

delay_communications_intra                    > MODULE.code
        > MODULE.callees
        ! MODULE.delay_load_communications_intra
        ! MODULE.delay_store_communications_intra
        < PROGRAM.entities
        < MODULE.code
        < MODULE.regions
        < MODULE.dg

delay_load_communications_intra                    > MODULE.code
        > MODULE.callees
        < PROGRAM.entities
        < MODULE.code
        < MODULE.proper_effects
        < MODULE.cumulated_effects
        < MODULE.dg

delay_store_communications_intra                    > MODULE.code
        > MODULE.callees
        < PROGRAM.entities
        < MODULE.code
        < MODULE.proper_effects
        < MODULE.cumulated_effects
        < MODULE.dg

8.3.7.4 Hardware Constraints Solver

If SOLVE_HARDWARE_CONSTRAINTS_TYPE 8.3.7.4 is set to VOLUME, then, given a loop label, a maximum memory footprint and an unknown entity, the pass tries to find the best value for SOLVE_HARDWARE_CONSTRAINTS_UNKNOWN 8.3.7.4 so that the memory footprint of SOLVE_HARDWARE_CONSTRAINTS_LABEL 8.3.7.4 reaches but does not exceed SOLVE_HARDWARE_CONSTRAINTS_LIMIT 8.3.7.4. If it is set to NB_PROC, the pass tries to find the best value for SOLVE_HARDWARE_CONSTRAINTS_UNKNOWN 8.3.7.4 so that the maximum range of the first dimension of all regions accessed by SOLVE_HARDWARE_CONSTRAINTS_LABEL 8.3.7.4 equals SOLVE_HARDWARE_CONSTRAINTS_LIMIT 8.3.7.4.

solve_hardware_constraints > MODULE.code
< MODULE.code
< MODULE.regions
< PROGRAM.entities
 
SOLVE_HARDWARE_CONSTRAINTS_LABEL ""  
 
SOLVE_HARDWARE_CONSTRAINTS_LIMIT 0  
 
SOLVE_HARDWARE_CONSTRAINTS_UNKNOWN ""  
 
SOLVE_HARDWARE_CONSTRAINTS_TYPE ""  

8.3.7.5 kernelize

Bootstraps the kernel resource.

bootstrap_kernels > PROGRAM.kernels

Add a kernel to the list of kernels known to PIPS.

flag_kernel > PROGRAM.kernels
< PROGRAM.kernels

Generate unoptimized load / store information for each call to the module.

kernel_load_store > CALLERS.code
> CALLERS.callees
> PROGRAM.kernels
< PROGRAM.kernels
< CALLERS.code
< CALLERS.regions
< CALLERS.preconditions

The legacy kernel_load_store 8.3.7.5 approach is limited because it generates the DMA around a call, and the isolate_statement 8.3.7.1 engine does not perform well in the interprocedural case.

The following properties are used to specify the names of the runtime functions. Since they are used in Par4All, their default names begin with P4A_. To get an idea of their prototypes, have a look at the Par4All accelerator runtime or at validation/AcceleratorUtils/include/par4all.c.

Enable/disable the handling of scalars by the kernel load store pass.

 
KERNEL_LOAD_STORE_SCALAR FALSE  

The ISOLATE_STATEMENT_EVEN_NON_LOCAL 8.3.7.1 property can be used to force the generation even with non-local memory accesses. But beware: it does not solve all the issues...

The following properties can be used to customize the allocate/load/store functions:

 
KERNEL_LOAD_STORE_ALLOCATE_FUNCTION "P4A_accel_malloc"  
 
KERNEL_LOAD_STORE_DEALLOCATE_FUNCTION "P4A_accel_free"  

The following properties are used to name the dma functions to use for scalars:

 
KERNEL_LOAD_STORE_LOAD_FUNCTION "P4A_copy_to_accel"  
 
KERNEL_LOAD_STORE_STORE_FUNCTION "P4A_copy_from_accel"  

and for 1-dimension arrays:

 
KERNEL_LOAD_STORE_LOAD_FUNCTION_1D "P4A_copy_to_accel_1d"  
 
KERNEL_LOAD_STORE_STORE_FUNCTION_1D "P4A_copy_from_accel_1d"  

and in 2 dimensions:

 
KERNEL_LOAD_STORE_LOAD_FUNCTION_2D "P4A_copy_to_accel_2d"  
 
KERNEL_LOAD_STORE_STORE_FUNCTION_2D "P4A_copy_from_accel_2d"  

and in 3 dimensions:

 
KERNEL_LOAD_STORE_LOAD_FUNCTION_3D "P4A_copy_to_accel_3d"  
 
KERNEL_LOAD_STORE_STORE_FUNCTION_3D "P4A_copy_from_accel_3d"  

and in 4 dimensions:

 
KERNEL_LOAD_STORE_LOAD_FUNCTION_4D "P4A_copy_to_accel_4d"  
 
KERNEL_LOAD_STORE_STORE_FUNCTION_4D "P4A_copy_from_accel_4d"  

and in 5 dimensions:

 
KERNEL_LOAD_STORE_LOAD_FUNCTION_5D "P4A_copy_to_accel_5d"  
 
KERNEL_LOAD_STORE_STORE_FUNCTION_5D "P4A_copy_from_accel_5d"  

and in 6 dimensions:

 
KERNEL_LOAD_STORE_LOAD_FUNCTION_6D "P4A_copy_to_accel_6d"  
 
KERNEL_LOAD_STORE_STORE_FUNCTION_6D "P4A_copy_from_accel_6d"  

As a side effect of the kernel load store pass, some new variables are declared in the function. A prefix for the names of those variables can be set with the property KERNEL_LOAD_STORE_VAR_PREFIX 8.3.7.5. It is also possible to insert a suffix using the property KERNEL_LOAD_STORE_VAR_SUFFIX 8.3.7.5. The suffix is inserted between the original variable name and the instance number of the copy.

 
KERNEL_LOAD_STORE_VAR_PREFIX "p4a_var_"  
 
KERNEL_LOAD_STORE_VAR_SUFFIX ""  

Split a parallel loop with a local index into three parts: a host-side part, a kernel part and an intermediate part. The intermediate part simulates, on the host, the call of the parallel code on the kernel.

kernelize > MODULE.code
> MODULE.callees
> PROGRAM.kernels
! MODULE.privatize_module
! MODULE.coarse_grain_parallelization
< PROGRAM.entities
< MODULE.code
< PROGRAM.kernels

The property KERNELIZE_NBNODES 8.3.7.5 is used to set the number of nodes for this kernel. KERNELIZE_KERNEL_NAME 8.3.7.5 is used to set the name of the generated kernel. KERNELIZE_HOST_CALL_NAME 8.3.7.5 is used to set the name of the generated call to the kernel (host side).
 
KERNELIZE_NBNODES 128  
 
KERNELIZE_KERNEL_NAME ""