PROPERTIES OF AGE-BASED AUTOMATIC MEMORY RECLAMATION ALGORITHMS
A Dissertation Presented by DARKO STEFANOVIĆ
Submitted to the Graduate School of the University of Massachusetts Amherst in partial fulfillment of the requirements for the degree of
DOCTOR OF PHILOSOPHY
February 1999
Department of Computer Science
© Copyright by Darko Stefanović 1999
All Rights Reserved
PROPERTIES OF AGE-BASED AUTOMATIC MEMORY RECLAMATION ALGORITHMS
A Dissertation Presented by DARKO STEFANOVIĆ
Approved as to style and content by:
J. Eliot B. Moss, Chair
Kathryn S. McKinley, Member
Jack C. Wileden, Member
C. Mani Krishna, Member
James F. Kurose, Department Chair
Department of Computer Science
To my parents
ACKNOWLEDGEMENTS
Throughout my graduate studies, it has been an honor and an intellectual challenge to have Eliot Moss as my advisor. The imprint of his scientific method on my development is deep, and I trust that it is reflected in the present work. I have been fortunate to have Kathryn McKinley as my unofficial second advisor. I am grateful to her, not only for her incisive technical commentary, but also for her unflagging enthusiasm and encouragement. I owe a debt of gratitude to Steve Heller and his group at Sun Microsystems Laboratories for allowing me to use an experimental Java virtual machine, and especially to David Detlefs, who tirelessly explained to me the intricacies of garbage collection in that system, and ran many weeks of trace generation. This project could not have been realized without the past efforts of members of the Object Systems Laboratory: Tony Hosking, Amer Diwan, Rick Hudson, and Eric Brown, who designed and implemented the garbage collector toolkit and the Smalltalk system I have used. I should also like to acknowledge the support of the current group members: John Cavazos, Asjad Khan, Todd Wright, Abhishek Chandra, and Sharad Singhai; their hard system-building work provided the impetus to carry on with this study. John Ridgway has always found the time to guide me through the hidden corners of typesetting, and I thank him for that. It has been a great pleasure to work with them, and with many other members of the laboratory over the years. I should like to acknowledge the benefit of discussions I have had during the development of the present work with Cathy McGeoch, Andrew Appel, and Amer Diwan. The dissertation itself has been greatly improved by the comments of my examining committee members Jack Wileden and Mani Krishna. v
Upon my arrival in the town of Amherst I had the good fortune to come to know Mrs Joann C. Scott, and I am grateful for the friendship and wisdom she has shown me ever since. Lastly, I am grateful to my parents and my sister for their persistent encouragement and unquestioning confidence. I thank my parents for their support throughout the years of my studies, which made this work possible, and for having instilled in me the spirit of free inquiry, which made it worthwhile. —— This work has been supported by the National Science Foundation under grant IRI-9632284, and by gifts from Digital Equipment Corporation (now Compaq), Sun Microsystems, and Hewlett-Packard.
ABSTRACT
PROPERTIES OF AGE-BASED AUTOMATIC MEMORY RECLAMATION ALGORITHMS
FEBRUARY 1999
DARKO STEFANOVIĆ
Dipl.Ing., UNIVERSITY OF BELGRADE
M.S., UNIVERSITY OF MASSACHUSETTS AMHERST
Ph.D., UNIVERSITY OF MASSACHUSETTS AMHERST
Directed by: Professor J. Eliot B. Moss
Dynamic memory management enables a programmer to allocate objects for arbitrary periods of time. It is an important feature of modern programming languages, and is fundamental to object-oriented languages. Automatic reclamation, also known as garbage collection, automates the detection of the time when a data item can no longer be used. The work described herein considers garbage collection algorithms that base their decisions solely upon the relative age of data. This age-based class of algorithms generalizes previously defined generational garbage collection algorithms and includes promising new algorithms. The work identifies relevant performance factors and reports them for a set of object-oriented benchmark programs, establishing a fair comparison by imposing uniform maximum space constraints. A precise tracing and garbage collection algorithm evaluation framework provides accurate results and thus meaningful comparisons.
The results indicate, contrary to assumptions in the literature, that the new algorithms copy less data than the generational algorithms, even though they retain a higher percentage of reclaimable data. In agreement with the assumptions in the literature, the results indicate that generational algorithms do less pointer-tracking work. Thus, given suitable relative costs of copying and pointer-tracking, the new algorithms perform better. Estimates of the relative costs for a typical modern processor suggest that the new algorithms are usually superior.
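The trade-off stated above can be made concrete with a toy cost model: a collector that copies fewer bytes can still lose overall if its pointer-tracking (write-barrier and remembered-set) work is sufficiently more expensive. The sketch below is purely illustrative; the function name, the per-run totals, and the unit costs are all hypothetical, not measurements from this dissertation.

```python
# Toy cost model for the copying vs. pointer-tracking trade-off.
# All quantities are hypothetical, in arbitrary units.

def total_cost(bytes_copied, pointer_entries, copy_cost=1.0, track_cost=4.0):
    """Combined collection cost: copying work plus pointer-maintenance work."""
    return copy_cost * bytes_copied + track_cost * pointer_entries

# Hypothetical per-run totals for two collectors on the same program:
# the generational collector copies more but tracks fewer pointers.
generational = total_cost(bytes_copied=120_000, pointer_entries=3_000)
age_based = total_cost(bytes_copied=90_000, pointer_entries=8_000)

# Which collector wins depends on the ratio of the two unit costs,
# which is exactly the comparison the dissertation carries out.
print(generational, age_based)
```

Under these assumed unit costs the age-based collector comes out ahead despite doing more pointer-tracking work; raising `track_cost` enough reverses the outcome.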
TABLE OF CONTENTS
Page
ACKNOWLEDGEMENTS . . . . . v
ABSTRACT . . . . . vii
LIST OF TABLES . . . . . xiii
LIST OF FIGURES . . . . . xv
Chapter
1. INTRODUCTION . . . . . 1
2. BACKGROUND . . . . . 7
   2.1 Dynamic memory management . . . . . 7
       2.1.1 Liveness and reachability . . . . . 8
       2.1.2 Preserving live data . . . . . 10
       2.1.3 Incremental collection . . . . . 11
       2.1.4 Generational copying collection . . . . . 11
             2.1.4.1 Pointer maintenance . . . . . 13
       2.1.5 Mature spaces . . . . . 14
   2.2 Object survival and mortality . . . . . 15
   2.3 Theoretical models of memory management . . . . . 17
3. AGE-BASED GARBAGE COLLECTION ALGORITHMS . . . . . 19
   3.1 Conceptual design . . . . . 19
       3.1.1 True-age schemes . . . . . 21
       3.1.2 A renewal-age scheme . . . . . 29
       3.1.3 Towards optimal schemes . . . . . 30
       3.1.4 Collection failure and recovery . . . . . 31
       3.1.5 Garbage cycles . . . . . 31
       3.1.6 Locality . . . . . 32
   3.2 Implementation . . . . . 33
       3.2.1 Blocks . . . . . 33
       3.2.2 Remembered sets . . . . . 35
       3.2.3 Write barrier . . . . . 38
       3.2.4 A write barrier design for large address spaces . . . . . 39
       3.2.5 An estimation of copying and pointer-maintenance costs . . . . . 43
   3.3 Summary . . . . . 46
4. EVALUATION: SETTING . . . . . 52
   4.1 Evaluation using program traces . . . . . 53
       4.1.1 Trace content . . . . . 54
       4.1.2 Trace format . . . . . 54
       4.1.3 Trace generation . . . . . 55
   4.2 Benchmarks and languages . . . . . 57
       4.2.1 Languages . . . . . 57
       4.2.2 Benchmarks . . . . . 58
             4.2.2.1 Collecting benchmarks . . . . . 59
             4.2.2.2 Selecting benchmarks . . . . . 60
             4.2.2.3 The final benchmark set . . . . . 61
       4.2.3 Properties of benchmarks . . . . . 63
   4.3 Simulating garbage collection . . . . . 66
       4.3.1 General simulation algorithm . . . . . 68
       4.3.2 Age-based simulation . . . . . 69
       4.3.3 Block-based simulation . . . . . 69
5. EVALUATION: COPYING COST . . . . . 112
   5.1 Method for evaluating different configurations of a collector . . . . . 112
       5.1.1 Results . . . . . 113
             5.1.1.1 The FC-TOF collector . . . . . 114
             5.1.1.2 The FC-ROF collector . . . . . 115
             5.1.1.3 The FC-TYF collector . . . . . 115
             5.1.1.4 The FC-DOF collector . . . . . 115
             5.1.1.5 The FC-DYF collector . . . . . 116
             5.1.1.6 The GYF collectors . . . . . 116
   5.2 Method for comparing various collection schemes . . . . . 117
   5.3 Method for recognizing and measuring the excess retention cost . . . . . 118
       5.3.1 Results . . . . . 119
   5.4 Methods for examining the lower limits of copying cost and the feasibility of adaptation . . . . . 121
       5.4.1 Locally-optimal copying cost . . . . . 121
       5.4.2 Window motion . . . . . 122
       5.4.3 Demise point analysis . . . . . 122
             5.4.3.1 Visualization: demise maps . . . . . 123
             5.4.3.2 Visualization: heap profiles . . . . . 124
             5.4.3.3 Visualization: position mortality . . . . . 125
       5.4.4 Case studies . . . . . 126
             5.4.4.1 Benchmark Richards . . . . . 126
             5.4.4.2 Benchmark Lambda-Fact5 . . . . . 127
       5.4.5 Adaptive FC collection . . . . . 128
             5.4.5.1 Eliminating the “static” data in Richards . . . . . 128
             5.4.5.2 Choosing the collected region in Lambda-Fact5 . . . . . 131
   5.5 Summary . . . . . 136
6. EVALUATION: COSTS OTHER THAN COPYING . . . . . 276
   6.1 Method for evaluating the pointer maintenance cost . . . . . 276
       6.1.1 Results . . . . . 277
       6.1.2 Block size . . . . . 280
       6.1.3 Comparison between collectors . . . . . 281
       6.1.4 Additional observations . . . . . 286
   6.2 Methods for examining the directions of pointers in the heap . . . . . 288
       6.2.1 Pointer stores in the ideal heap . . . . . 288
       6.2.2 Pointer stores and remembered sets in age-based collectors . . . . . 290
   6.3 Method for evaluating the collection start-up cost . . . . . 291
       6.3.1 Results . . . . . 293
   6.4 Summary . . . . . 293
7. CONCLUDING REMARKS . . . . . . . . . . . . . . . . . . . . . . . . . . . . 460
BIBLIOGRAPHY . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 463
LIST OF TABLES
Table                                                                     Page

4.1  Properties of benchmarks used . . . . . 62
5.1  Copying cost dependence on the adaptation threshold . . . . . 130
5.2  Copying cost dependence on external bias . . . . . 134
6.1  Properties of benchmarks used: run-time allocation and pointer stores . . . . . 278
6.2  Costs of block FC-DOF for StandardNonInteractive . . . . . 295
6.3  Costs of block FC-DOF for HeapSim . . . . . 301
6.4  Costs of block FC-DOF for Lambda-Fact5 . . . . . 310
6.5  Costs of block FC-DOF for Richards . . . . . 322
6.6  Costs of block FC-DOF for JavaBYTEmark . . . . . 326
6.7  Costs of block FC-DOF for Bloat-Bloat . . . . . 327
6.8  Costs of block FC-DOF for Toba . . . . . 331
6.9  Costs of block 2GYF for StandardNonInteractive . . . . . 335
6.10 Costs of block 2GYF for HeapSim . . . . . 343
6.11 Costs of block 2GYF for Lambda-Fact5 . . . . . 354
6.12 Costs of block 2GYF for Richards . . . . . 370
6.13 Costs of block 2GYF for JavaBYTEmark . . . . . 375
6.14 Costs of block 2GYF for Bloat-Bloat . . . . . 376
6.15 Costs of block 2GYF for Toba . . . . . 382
LIST OF FIGURES
Figure                                                                    Page

1.1  Surviving objects in a newly allocated area. . . . . 2
1.2  A collection queue. . . . . 3
2.1  A typical object mortality function. . . . . 16
2.2  A typical object survivor function. . . . . 16
3.1  Viewing the heap as an age-ordered list. . . . . 20
3.2  Pseudocode for the conceptual design. . . . . 22
3.3  TYF: True youngest-first collection (general case). . . . . 23
3.4  Pseudocode for GYF collectors. . . . . 24
3.5  Generational youngest-first collection (2GYF). . . . . 25
3.6  Generational youngest-first collection with flexible generation boundaries (NGYF). . . . . 26
3.7  FC-TYF: True youngest-first collection (fixed collected region). . . . . 27
3.8  FC-TOF: True oldest-first collection (fixed collected region). . . . . 27
3.9  FC-DOF: Deferred older-first collection. . . . . 28
3.10 Deferred older-first collection, window motion example. . . . . 29
3.11 FC-DYF: Deferred younger-first collection. . . . . 30
3.12 FC-ROF: Renewal-oldest-first collection. . . . . 30
3.13 Heap organized as an age-ordered list of blocks. . . . . 33
3.14 Filling preceding block to avoid fragmentation. . . . . 35
3.15 Directional filtering of pointer stores at run-time (DOF). . . . . 36
3.16 Directional filtering of pointer stores at collection time (DOF). . . . . 37
3.17 DOF heap organization in a large address space. . . . . 40
3.18 Directional filtering for DOF in a large address space. . . . . 42
3.19 Write barrier code. . . . . 47
3.20 Write barrier assembly code. . . . . 48
3.21 Remembered set processing code. . . . . 49
3.22 Remembered set processing assembly code. . . . . 50
3.23 Age-based scheme name abbreviations. . . . . 51
4.1  Non-generational collection performance: Interactive-AllCallsOn. . . . . 71
4.2  Non-generational collection performance: Interactive-TextEditing. . . . . 72
4.3  Non-generational collection performance: StandardNonInteractive. . . . . 73
4.4  Non-generational collection performance: HeapSim. . . . . 74
4.5  Non-generational collection performance: Lambda-Fact5. . . . . 75
4.6  Non-generational collection performance: Lambda-Fact6. . . . . 76
4.7  Non-generational collection performance: Swim. . . . . 77
4.8  Non-generational collection performance: Tomcatv. . . . . 78
4.9  Non-generational collection performance: Tree-Replace-Binary. . . . . 79
4.10 Non-generational collection performance: Tree-Replace-Random. . . . . 80
4.11 Non-generational collection performance: Richards. . . . . 81
4.12 Non-generational collection performance: JavaBYTEmark. . . . . 82
4.13 Non-generational collection performance: Bloat-Bloat. . . . . 83
4.14 Non-generational collection performance: Toba. . . . . 84
4.15 Object size distribution: Interactive-AllCallsOn. . . . . 85
4.16 Object size distribution: Interactive-TextEditing. . . . . 86
4.17 Object size distribution: StandardNonInteractive. . . . . 87
4.18 Object size distribution: HeapSim. . . . . 88
4.19 Object size distribution: Lambda-Fact5. . . . . 89
4.20 Object size distribution: Lambda-Fact6. . . . . 90
4.21 Object size distribution: Swim. . . . . 91
4.22 Object size distribution: Tomcatv. . . . . 92
4.23 Object size distribution: Tree-Replace-Binary. . . . . 93
4.24 Object size distribution: Tree-Replace-Random. . . . . 94
4.25 Object size distribution: Richards. . . . . 95
4.26 Object size distribution: JavaBYTEmark. . . . . 96
4.27 Object size distribution: Bloat-Bloat. . . . . 97
4.28 Object size distribution: Toba. . . . . 98
4.29 Object lifetime distribution: Interactive-AllCallsOn. . . . . 99
4.30 Object lifetime distribution: Interactive-TextEditing. . . . . 99
4.31 Object lifetime distribution: StandardNonInteractive. . . . . 100
4.32 Object lifetime distribution: HeapSim. . . . . 100
4.33 Object lifetime distribution: Lambda-Fact5. . . . . 100
4.34 Object lifetime distribution: Lambda-Fact6. . . . . 101
4.35 Object lifetime distribution: Swim. . . . . 101
4.36 Object lifetime distribution: Tomcatv. . . . . 101
4.37 Object lifetime distribution: Tree-Replace-Binary. . . . . 102
4.38 Object lifetime distribution: Tree-Replace-Random. . . . . 102
4.39 Object lifetime distribution: Richards. . . . . 102
4.40 Object lifetime distribution: JavaBYTEmark. . . . . 103
4.41 Object lifetime distribution: Bloat-Bloat. . . . . 103
4.42 Object lifetime distribution: Toba. . . . . 103
4.43 Live profile: Interactive-AllCallsOn. . . . . 104
4.44 Live profile: Interactive-TextEditing. . . . . 105
4.45 Live profile: StandardNonInteractive. . . . . 105
4.46 Live profile: HeapSim. . . . . 106
4.47 Live profile: Lambda-Fact5. . . . . 106
4.48 Live profile: Lambda-Fact6. . . . . 107
4.49 Live profile: Swim. . . . . 107
4.50 Live profile: Tomcatv. . . . . 108
4.51 Live profile: Tree-Replace-Binary. . . . . 108
4.52 Live profile: Tree-Replace-Random. . . . . 109
4.53 Live profile: Richards. . . . . 109
4.54 Live profile: JavaBYTEmark. . . . . 110
4.55 Live profile: Bloat-Bloat. . . . . 110
4.56 Live profile: Toba. . . . . 111
5.1  Window motion in the FC-DOF scheme. . . . . 132
5.2  Window survivor ratios in the FC-DOF scheme. . . . . 133
5.3  Window motion in the FC-OPT scheme. . . . . 134
5.4  Window survivor ratios in the FC-OPT scheme. . . . . 135
5.5  Window motion in the adaptive scheme. . . . . 136
5.6  Window survivor ratios in the adaptive scheme. . . . . 137
5.7  Copying cost comparison: Interactive-AllCallsOn, V = 764. . . . . 138
5.8  Copying cost comparison: Interactive-AllCallsOn, V = 931. . . . . 138
5.9  Copying cost comparison: Interactive-AllCallsOn, V = 1135. . . . . 139
5.10 Copying cost comparison: Interactive-AllCallsOn, V = 1384. . . . . 139
5.11 Copying cost comparison: Interactive-AllCallsOn, V = 1687. . . . . 139
5.12 Copying cost comparison: Interactive-AllCallsOn, V = 2057. . . . . 140
5.13 Copying cost comparison: Interactive-TextEditing, V = 1338. . . . . 141
5.14 Copying cost comparison: Interactive-TextEditing, V = 1631. . . . . 141
5.15 Copying cost comparison: Interactive-TextEditing, V = 1988. . . . . 142
5.16 Copying cost comparison: Interactive-TextEditing, V = 2424. . . . . 142
5.17 Copying cost comparison: Interactive-TextEditing, V = 2955. . . . . 142
5.18 Copying cost comparison: StandardNonInteractive, V = 1414. . . . . 143
5.19 Copying cost comparison: StandardNonInteractive, V = 1723. . . . . 143
5.20 Copying cost comparison: StandardNonInteractive, V = 2101. . . . . 144
5.21 Copying cost comparison: StandardNonInteractive, V = 2561. . . . . 144
5.22 Copying cost comparison: StandardNonInteractive, V = 3122. . . . . 144
5.23 Copying cost comparison: StandardNonInteractive, V = 3805. . . . . 145
5.24 Copying cost comparison: StandardNonInteractive, V = 4639. . . . . 145
5.25 Copying cost comparison: StandardNonInteractive, V = 5655. . . . . 145
5.26 Copying cost comparison: StandardNonInteractive, V = 6894. . . . . 146
5.27 Copying cost comparison: HeapSim, V = 113397. . . . . 147
5.28 Copying cost comparison: HeapSim, V = 138231. . . . . 147
5.29 Copying cost comparison: HeapSim, V = 168503. . . . . 148
5.30 Copying cost comparison: HeapSim, V = 205405. . . . . 148
5.31 Copying cost comparison: HeapSim, V = 250387. . . . . 148
5.32 Copying cost comparison: HeapSim, V = 305221. . . . . 149
5.33 Copying cost comparison: HeapSim, V = 372063. . . . . 149
5.34 Copying cost comparison: Lambda-Fact5, V = 7673. . . . . 150
5.35 Copying cost comparison: Lambda-Fact5, V = 9354. . . . . 150
5.36 Copying cost comparison: Lambda-Fact5, V = 11402. . . . . 151
5.37 Copying cost comparison: Lambda-Fact5, V = 13899. . . . . 151
5.38 Copying cost comparison: Lambda-Fact5, V = 16943. . . . . 151
5.39 Copying cost comparison: Lambda-Fact5, V = 20654. . . . . 152
5.40 Copying cost comparison: Lambda-Fact5, V = 25177. . . . . 152
5.41 Copying cost comparison: Lambda-Fact6, V = 16669. . . . . 153
5.42 Copying cost comparison: Lambda-Fact6, V = 20320. . . . . 153
5.43 Copying cost comparison: Lambda-Fact6, V = 24770. . . . . 154
5.44 Copying cost comparison: Lambda-Fact6, V = 30194. . . . . 154
5.45 Copying cost comparison: Lambda-Fact6, V = 36807. . . . . 154
5.46 Copying cost comparison: Lambda-Fact6, V = 44868. . . . . 155
5.47 Copying cost comparison: Lambda-Fact6, V = 54693. . . . . 155
5.48 Copying cost comparison: Lambda-Fact6, V = 66671. . . . . 155
5.49 Copying cost comparison: Lambda-Fact6, V = 81272. . . . . 156
5.50 Copying cost comparison: Swim, V = 21383. . . . . 157
5.51 Copying cost comparison: Swim, V = 26066. . . . . 157
5.52 Copying cost comparison: Swim, V = 31774. . . . . 158
5.53 Copying cost comparison: Swim, V = 38733. . . . . 158
5.54 Copying cost comparison: Swim, V = 47215. . . . . 158
5.55 Copying cost comparison: Swim, V = 57555. . . . . 159
5.56 Copying cost comparison: Swim, V = 70160. . . . . 159
5.57 Copying cost comparison: Swim, V = 85524. . . . . 159
5.58 Copying cost comparison: Swim, V = 104254. . . . . 160
5.59 Copying cost comparison: Tomcatv, V = 37292. . . . . 161
5.60 Copying cost comparison: Tomcatv, V = 45459. . . . . 161
5.61 Copying cost comparison: Tomcatv, V = 55414. . . . . 162
5.62 Copying cost comparison: Tomcatv, V = 67550. . . . . 162
5.63 Copying cost comparison: Tomcatv, V = 82343. . . . . 162
5.64 Copying cost comparison: Tomcatv, V = 100376. . . . . 163
5.65 Copying cost comparison: Tomcatv, V = 122358. . . . . 163
5.66 Copying cost comparison: Tomcatv, V = 149154. . . . . 163
5.67 Copying cost comparison: Tomcatv, V = 181818. . . . . 164
5.68 Copying cost comparison: Tree-Replace-Binary, V = 11926. . . . . 165
5.69 Copying cost comparison: Tree-Replace-Binary, V = 14538. . . . . 165
5.70 Copying cost comparison: Tree-Replace-Binary, V = 17722. . . . . 166
5.71 Copying cost comparison: Tree-Replace-Binary, V = 21603. . . . . 166
5.72 Copying cost comparison: Tree-Replace-Binary, V = 26334. . . . . 166
5.73 Copying cost comparison: Tree-Replace-Random, V = 15985. . . . . 167
5.74 Copying cost comparison: Tree-Replace-Random, V = 19486. . . . . 167
5.75 Copying cost comparison: Tree-Replace-Random, V = 23754. . . . . 168
5.76 Copying cost comparison: Tree-Replace-Random, V = 28956. . . . . 168
5.77 Copying cost comparison: Tree-Replace-Random, V = 35297. . . . . 168
5.78 Copying cost comparison: Tree-Replace-Random, V = 43027. . . . . 169
5.79 Copying cost comparison: Tree-Replace-Random, V = 52450. . . . . 169
5.80 Copying cost comparison: Tree-Replace-Random, V = 63936. . . . . 169
5.81 Copying cost comparison: Richards, V = 1826. . . . . 170
5.82 Copying cost comparison: Richards, V = 2225. . . . . 170
5.83 Copying cost comparison: Richards, V = 2713. . . . . 171
5.84 Copying cost comparison: Richards, V = 3307. . . . . 171
5.85 Copying cost comparison: Richards, V = 4032. . . . . 171
5.86 Copying cost comparison: Richards, V = 4914. . . . . 172
5.87 Copying cost comparison: Richards, V = 5991. . . . . 172
5.88 Copying cost comparison: Richards, V = 7303. . . . . 172
5.89 Copying cost comparison: Richards, V = 8902. . . . . 173
5.90 Copying cost comparison: JavaBYTEmark, V = 72807. . . . . 174
5.91 Copying cost comparison: JavaBYTEmark, V = 88752. . . . . 174
5.92 Copying cost comparison: JavaBYTEmark, V = 108188. . . . . 175
5.93 Copying cost comparison: JavaBYTEmark, V = 131881. . . . . 175
5.94
Copying cost comparison: Bloat-Bloat, V
246766. . . . . . . . . . . . . 176
5.95
Copying cost comparison: Bloat-Bloat, V
300808. . . . . . . . . . . . . 176
5.96
Copying cost comparison: Bloat-Bloat, V
366682. . . . . . . . . . . . . 177
5.97
Copying cost comparison: Bloat-Bloat, V
446984. . . . . . . . . . . . . 177
5.98
Copying cost comparison: Bloat-Bloat, V
544872. . . . . . . . . . . . . 177
5.99
Copying cost comparison: Bloat-Bloat, V
664195. . . . . . . . . . . . . 178
5.100 Copying cost comparison: Bloat-Bloat, V
809650. . . . . . . . . . . . . 178
5.101 Copying cost comparison: Bloat-Bloat, V
986959. . . . . . . . . . . . . 178
5.102 Copying cost comparison: Toba, V
353843. . . . . . . . . . . . . . . . . 179
5.103 Copying cost comparison: Toba, V
431335. . . . . . . . . . . . . . . . . 179
5.104 Copying cost comparison: Toba, V
525794. . . . . . . . . . . . . . . . . 180
5.105 Copying cost comparison: Toba, V
640941. . . . . . . . . . . . . . . . . 180
5.106 Copying cost comparison: Toba, V
781303. . . . . . . . . . . . . . . . . 180
5.107 Copying cost comparison: Toba, V
952404. . . . . . . . . . . . . . . . . 181
5.108 Copying cost comparison: Toba, V
1160976. . . . . . . . . . . . . . . . 181
5.109 Copying cost comparison: Toba, V
1415223. . . . . . . . . . . . . . . . 181
5.110 Copying cost comparison: Toba, V
1725148. . . . . . . . . . . . . . . . 182
5.111 Best configuration comparison: Interactive-AllCallsOn . . . 183
5.112 Best configuration comparison: Interactive-TextEditing . . . 184
5.113 Best configuration comparison: StandardNonInteractive . . . 184
5.114 Best configuration comparison: HeapSim . . . 185
5.115 Best configuration comparison: Lambda-Fact5 . . . 185
5.116 Best configuration comparison: Lambda-Fact6 . . . 186
5.117 Best configuration comparison: Swim . . . 186
5.118 Best configuration comparison: Tomcatv . . . 187
5.119 Best configuration comparison: Tree-Replace-Binary . . . 187
5.120 Best configuration comparison: Tree-Replace-Random . . . 188
5.121 Best configuration comparison: Richards . . . 188
5.122 Best configuration comparison: JavaBYTEmark . . . 189
5.123 Best configuration comparison: Bloat-Bloat . . . 189
5.124 Best configuration comparison: Toba . . . 190
5.125 Best configuration comparison: Interactive-AllCallsOn . . . 191
5.126 Best configuration comparison: Interactive-TextEditing . . . 192
5.127 Best configuration comparison: StandardNonInteractive . . . 192
5.128 Best configuration comparison: HeapSim . . . 193
5.129 Best configuration comparison: Lambda-Fact5 . . . 193
5.130 Best configuration comparison: Lambda-Fact6 . . . 194
5.131 Best configuration comparison: Swim . . . 194
5.132 Best configuration comparison: Tomcatv . . . 195
5.133 Best configuration comparison: Tree-Replace-Binary . . . 195
5.134 Best configuration comparison: Tree-Replace-Random . . . 196
5.135 Best configuration comparison: Richards . . . 196
5.136 Best configuration comparison: JavaBYTEmark . . . 197
5.137 Best configuration comparison: Bloat-Bloat . . . 197
5.138 Best configuration comparison: Toba . . . 198
5.139 Excess retention ratios: Interactive-AllCallsOn, V764 . . . 199
5.140 Excess retention ratios: Interactive-AllCallsOn, V931 . . . 199
5.141 Excess retention ratios: Interactive-AllCallsOn, V1135 . . . 200
5.142 Excess retention ratios: Interactive-AllCallsOn, V1384 . . . 200
5.143 Excess retention ratios: Interactive-AllCallsOn, V1687 . . . 200
5.144 Excess retention ratios: Interactive-AllCallsOn, V2057 . . . 201
5.145 Excess retention ratios: Interactive-TextEditing, V1338 . . . 202
5.146 Excess retention ratios: Interactive-TextEditing, V1631 . . . 202
5.147 Excess retention ratios: Interactive-TextEditing, V1988 . . . 203
5.148 Excess retention ratios: Interactive-TextEditing, V2424 . . . 203
5.149 Excess retention ratios: Interactive-TextEditing, V2955 . . . 203
5.150 Excess retention ratios: StandardNonInteractive, V1414 . . . 204
5.151 Excess retention ratios: StandardNonInteractive, V1723 . . . 204
5.152 Excess retention ratios: StandardNonInteractive, V2101 . . . 205
5.153 Excess retention ratios: StandardNonInteractive, V2561 . . . 205
5.154 Excess retention ratios: StandardNonInteractive, V3122 . . . 205
5.155 Excess retention ratios: StandardNonInteractive, V3805 . . . 206
5.156 Excess retention ratios: StandardNonInteractive, V4639 . . . 206
5.157 Excess retention ratios: StandardNonInteractive, V5655 . . . 206
5.158 Excess retention ratios: StandardNonInteractive, V6894 . . . 207
5.159 Excess retention ratios: HeapSim, V113397 . . . 208
5.160 Excess retention ratios: HeapSim, V138231 . . . 208
5.161 Excess retention ratios: HeapSim, V168503 . . . 209
5.162 Excess retention ratios: HeapSim, V205405 . . . 209
5.163 Excess retention ratios: HeapSim, V250387 . . . 209
5.164 Excess retention ratios: HeapSim, V305221 . . . 210
5.165 Excess retention ratios: HeapSim, V372063 . . . 210
5.166 Excess retention ratios: Lambda-Fact5, V7673 . . . 211
5.167 Excess retention ratios: Lambda-Fact5, V9354 . . . 211
5.168 Excess retention ratios: Lambda-Fact5, V11402 . . . 212
5.169 Excess retention ratios: Lambda-Fact5, V13899 . . . 212
5.170 Excess retention ratios: Lambda-Fact5, V16943 . . . 212
5.171 Excess retention ratios: Lambda-Fact5, V20654 . . . 213
5.172 Excess retention ratios: Lambda-Fact5, V25177 . . . 213
5.173 Excess retention ratios: Lambda-Fact6, V16669 . . . 214
5.174 Excess retention ratios: Lambda-Fact6, V20320 . . . 214
5.175 Excess retention ratios: Lambda-Fact6, V24770 . . . 215
5.176 Excess retention ratios: Lambda-Fact6, V30194 . . . 215
5.177 Excess retention ratios: Lambda-Fact6, V36807 . . . 215
5.178 Excess retention ratios: Lambda-Fact6, V44868 . . . 216
5.179 Excess retention ratios: Lambda-Fact6, V54693 . . . 216
5.180 Excess retention ratios: Lambda-Fact6, V66671 . . . 216
5.181 Excess retention ratios: Lambda-Fact6, V81272 . . . 217
5.182 Excess retention ratios: Swim, V21383 . . . 218
5.183 Excess retention ratios: Swim, V26066 . . . 218
5.184 Excess retention ratios: Swim, V31774 . . . 219
5.185 Excess retention ratios: Swim, V38733 . . . 219
5.186 Excess retention ratios: Swim, V47215 . . . 219
5.187 Excess retention ratios: Swim, V57555 . . . 220
5.188 Excess retention ratios: Swim, V70160 . . . 220
5.189 Excess retention ratios: Swim, V85524 . . . 220
5.190 Excess retention ratios: Swim, V104254 . . . 221
5.191 Excess retention ratios: Tomcatv, V37292 . . . 222
5.192 Excess retention ratios: Tomcatv, V45459 . . . 222
5.193 Excess retention ratios: Tomcatv, V55414 . . . 223
5.194 Excess retention ratios: Tomcatv, V67550 . . . 223
5.195 Excess retention ratios: Tomcatv, V82343 . . . 223
5.196 Excess retention ratios: Tomcatv, V100376 . . . 224
5.197 Excess retention ratios: Tomcatv, V122358 . . . 224
5.198 Excess retention ratios: Tomcatv, V149154 . . . 224
5.199 Excess retention ratios: Tomcatv, V181818 . . . 225
5.200 Excess retention ratios: Tree-Replace-Binary, V11926 . . . 226
5.201 Excess retention ratios: Tree-Replace-Binary, V14538 . . . 226
5.202 Excess retention ratios: Tree-Replace-Binary, V17722 . . . 227
5.203 Excess retention ratios: Tree-Replace-Binary, V21603 . . . 227
5.204 Excess retention ratios: Tree-Replace-Binary, V26334 . . . 227
5.205 Excess retention ratios: Tree-Replace-Random, V15985 . . . 228
5.206 Excess retention ratios: Tree-Replace-Random, V19486 . . . 228
5.207 Excess retention ratios: Tree-Replace-Random, V23754 . . . 229
5.208 Excess retention ratios: Tree-Replace-Random, V28956 . . . 229
5.209 Excess retention ratios: Tree-Replace-Random, V35297 . . . 229
5.210 Excess retention ratios: Tree-Replace-Random, V43027 . . . 230
5.211 Excess retention ratios: Tree-Replace-Random, V52450 . . . 230
5.212 Excess retention ratios: Tree-Replace-Random, V63936 . . . 230
5.213 Excess retention ratios: Richards, V1826 . . . 231
5.214 Excess retention ratios: Richards, V2225 . . . 231
5.215 Excess retention ratios: Richards, V2713 . . . 232
5.216 Excess retention ratios: Richards, V3307 . . . 232
5.217 Excess retention ratios: Richards, V4032 . . . 232
5.218 Excess retention ratios: Richards, V4914 . . . 233
5.219 Excess retention ratios: Richards, V5991 . . . 233
5.220 Excess retention ratios: Richards, V7303 . . . 233
5.221 Excess retention ratios: Richards, V8902 . . . 234
5.222 Excess retention ratios: JavaBYTEmark, V72807 . . . 235
5.223 Excess retention ratios: JavaBYTEmark, V88752 . . . 235
5.224 Excess retention ratios: JavaBYTEmark, V108188 . . . 236
5.225 Excess retention ratios: JavaBYTEmark, V131881 . . . 236
5.226 Excess retention ratios: Bloat-Bloat, V246766 . . . 237
5.227 Excess retention ratios: Bloat-Bloat, V300808 . . . 237
5.228 Excess retention ratios: Bloat-Bloat, V366682 . . . 238
5.229 Excess retention ratios: Bloat-Bloat, V446984 . . . 238
5.230 Excess retention ratios: Bloat-Bloat, V544872 . . . 238
5.231 Excess retention ratios: Bloat-Bloat, V664195 . . . 239
5.232 Excess retention ratios: Bloat-Bloat, V809650 . . . 239
5.233 Excess retention ratios: Bloat-Bloat, V986959 . . . 239
5.234 Excess retention ratios: Toba, V353843 . . . 240
5.235 Excess retention ratios: Toba, V431335 . . . 240
5.236 Excess retention ratios: Toba, V525794 . . . 241
5.237 Excess retention ratios: Toba, V640941 . . . 241
5.238 Excess retention ratios: Toba, V781303 . . . 241
5.239 Excess retention ratios: Toba, V952404 . . . 242
5.240 Excess retention ratios: Toba, V1160976 . . . 242
5.241 Excess retention ratios: Toba, V1415223 . . . 242
5.242 Excess retention ratios: Toba, V1725148 . . . 243
5.243 Copying cost, with FC-OPT: StandardNonInteractive, V1414 . . . 244
5.244 Copying cost, with FC-OPT: StandardNonInteractive, V1723 . . . 244
5.245 Copying cost, with FC-OPT: StandardNonInteractive, V2101 . . . 245
5.246 Copying cost, with FC-OPT: StandardNonInteractive, V2561 . . . 245
5.247 Copying cost, with FC-OPT: StandardNonInteractive, V3122 . . . 245
5.248 Copying cost, with FC-OPT: StandardNonInteractive, V3805 . . . 246
5.249 Copying cost, with FC-OPT: StandardNonInteractive, V4639 . . . 246
5.250 Copying cost, with FC-OPT: StandardNonInteractive, V5655 . . . 246
5.251 Copying cost, with FC-OPT: StandardNonInteractive, V6894 . . . 247
5.252 Copying cost, with FC-OPT: Lambda-Fact5, V7673 . . . 248
5.253 Copying cost, with FC-OPT: Lambda-Fact5, V9354 . . . 248
5.254 Copying cost, with FC-OPT: Lambda-Fact5, V11402 . . . 249
5.255 Copying cost, with FC-OPT: Lambda-Fact5, V13899 . . . 249
5.256 Copying cost, with FC-OPT: Lambda-Fact5, V16943 . . . 249
5.257 Copying cost, with FC-OPT: Lambda-Fact5, V20654 . . . 250
5.258 Copying cost, with FC-OPT: Lambda-Fact5, V25177 . . . 250
5.259 Copying cost, with FC-OPT: Tree-Replace-Random, V15985 . . . 251
5.260 Copying cost, with FC-OPT: Tree-Replace-Random, V19486 . . . 251
5.261 Copying cost, with FC-OPT: Tree-Replace-Random, V23754 . . . 252
5.262 Copying cost, with FC-OPT: Tree-Replace-Random, V28956 . . . 252
5.263 Copying cost, with FC-OPT: Tree-Replace-Random, V35297 . . . 252
5.264 Copying cost, with FC-OPT: Tree-Replace-Random, V43027 . . . 253
5.265 Copying cost, with FC-OPT: Tree-Replace-Random, V52450 . . . 253
5.266 Copying cost, with FC-OPT: Tree-Replace-Random, V63936 . . . 253
5.267 Demise graphs: Interactive-AllCallsOn . . . 254
5.268 Demise graphs: Interactive-TextEditing . . . 255
5.269 Demise graphs: StandardNonInteractive . . . 256
5.270 Demise graphs: HeapSim . . . 257
5.271 Demise graphs: Lambda-Fact5 . . . 258
5.272 Demise graphs: Lambda-Fact6 . . . 259
5.273 Demise graphs: Swim . . . 260
5.274 Demise graphs: Tomcatv . . . 261
5.275 Demise graphs: Tree-Replace-Binary . . . 262
5.276 Demise graphs: Tree-Replace-Random . . . 263
5.277 Demise graphs: Richards . . . 264
5.278 Demise graphs: JavaBYTEmark . . . 265
5.279 Demise graphs: Bloat-Bloat . . . 266
5.280 Demise graphs: Toba . . . 267
5.281 Demise position as mortality: Interactive-AllCallsOn . . . 268
5.282 Demise position as mortality: Interactive-TextEditing . . . 269
5.283 Demise position as mortality: StandardNonInteractive . . . 269
5.284 Demise position as mortality: HeapSim . . . 270
5.285 Demise position as mortality: Lambda-Fact5 . . . 270
5.286 Demise position as mortality: Lambda-Fact6 . . . 271
5.287 Demise position as mortality: Swim . . . 271
5.288 Demise position as mortality: Tomcatv . . . 272
5.289 Demise position as mortality: Tree-Replace-Binary . . . 272
5.290 Demise position as mortality: Tree-Replace-Random . . . 273
5.291 Demise position as mortality: Richards . . . 273
5.292 Demise position as mortality: JavaBYTEmark . . . 274
5.293 Demise position as mortality: Bloat-Bloat . . . 274
5.294 Demise position as mortality: Toba . . . 275
6.1 Copying and pointer cost tradeoff: Interactive-AllCallsOn . . . 389
6.2 Copying and pointer cost tradeoff: Interactive-TextEditing . . . 390
6.3 Copying and pointer cost tradeoff: StandardNonInteractive . . . 390
6.4 Copying and pointer cost tradeoff: HeapSim . . . 391
6.5 Copying and pointer cost tradeoff: Lambda-Fact5 . . . 391
6.6 Copying and pointer cost tradeoff: Lambda-Fact6 . . . 392
6.7 Copying and pointer cost tradeoff: Swim . . . 392
6.8 Copying and pointer cost tradeoff: Tomcatv . . . 393
6.9 Copying and pointer cost tradeoff: Tree-Replace-Binary . . . 393
6.10 Copying and pointer cost tradeoff: Bloat-Bloat . . . 394
6.11 Copying and pointer cost tradeoff: Toba . . . 394
6.12 Pointer store heap position: Interactive-AllCallsOn . . . 395
6.13 Pointer store heap position: Interactive-TextEditing . . . 396
6.14 Pointer store heap position: StandardNonInteractive . . . 397
6.15 Pointer store heap position: HeapSim . . . 398
6.16 Pointer store heap position: Lambda-Fact5 . . . 399
6.17 Pointer store heap position: Lambda-Fact6 . . . 400
6.18 Pointer store heap position: Swim . . . 401
6.19 Pointer store heap position: Tomcatv . . . 402
6.20 Pointer store heap position: Tree-Replace-Binary . . . 403
6.21 Pointer store heap position: Tree-Replace-Random . . . 404
6.22 Pointer store heap position: Richards . . . 405
6.23 Pointer store heap position: JavaBYTEmark . . . 406
6.24 Pointer store heap position: Bloat-Bloat . . . 407
6.25 Pointer store heap position: Toba . . . 408
6.26 DOF pointer position distributions: StandardNonInteractive . . . 409
6.27 DOF pointer position distributions: HeapSim . . . 410
6.28 DOF pointer position distributions: Richards . . . 411
6.29 DOF pointer position distributions: Lambda-Fact5 . . . 412
6.30 DOF pointer position distributions: Bloat-Bloat . . . 413
6.31 DOF pointer position distributions: JavaBYTEmark . . . 414
6.32 Relative invocation counts: Interactive-AllCallsOn, V764 . . . 415
6.33 Relative invocation counts: Interactive-AllCallsOn, V931 . . . 415
6.34 Relative invocation counts: Interactive-AllCallsOn, V1135 . . . 416
6.35 Relative invocation counts: Interactive-AllCallsOn, V1384 . . . 416
6.36 Relative invocation counts: Interactive-AllCallsOn, V1687 . . . 416
6.37 Relative invocation counts: Interactive-AllCallsOn, V2057 . . . 417
6.38 Relative invocation counts: Interactive-TextEditing, V1338 . . . 418
6.39 Relative invocation counts: Interactive-TextEditing, V1631 . . . 418
6.40 Relative invocation counts: Interactive-TextEditing, V1988 . . . 419
6.41 Relative invocation counts: Interactive-TextEditing, V2424 . . . 419
6.42 Relative invocation counts: Interactive-TextEditing, V2955 . . . 419
6.43 Relative invocation counts: StandardNonInteractive, V1414 . . . 420
6.44 Relative invocation counts: StandardNonInteractive, V1723 . . . 420
6.45 Relative invocation counts: StandardNonInteractive, V2101 . . . 421
6.46 Relative invocation counts: StandardNonInteractive, V2561 . . . 421
6.47 Relative invocation counts: StandardNonInteractive, V3122 . . . 421
6.48 Relative invocation counts: StandardNonInteractive, V3805 . . . 422
6.49 Relative invocation counts: StandardNonInteractive, V4639 . . . 422
6.50 Relative invocation counts: StandardNonInteractive, V5655 . . . 422
6.51 Relative invocation counts: StandardNonInteractive, V6894 . . . 423
6.52 Relative invocation counts: HeapSim, V113397 . . . 424
6.53 Relative invocation counts: HeapSim, V138231 . . . 424
6.54 Relative invocation counts: HeapSim, V168503 . . . 425
6.55 Relative invocation counts: HeapSim, V205405 . . . 425
6.56 Relative invocation counts: HeapSim, V250387 . . . 425
6.57 Relative invocation counts: HeapSim, V305221 . . . 426
6.58 Relative invocation counts: HeapSim, V372063 . . . 426
6.59 Relative invocation counts: Lambda-Fact5, V7673 . . . 427
6.60 Relative invocation counts: Lambda-Fact5, V9354 . . . 427
6.61 Relative invocation counts: Lambda-Fact5, V11402 . . . 428
6.62 Relative invocation counts: Lambda-Fact5, V13899 . . . 428
6.63 Relative invocation counts: Lambda-Fact5, V16943 . . . 428
6.64 Relative invocation counts: Lambda-Fact5, V20654 . . . 429
6.65 Relative invocation counts: Lambda-Fact5, V25177 . . . 429
6.66 Relative invocation counts: Lambda-Fact6, V16669 . . . 430
6.67 Relative invocation counts: Lambda-Fact6, V20320 . . . 430
6.68 Relative invocation counts: Lambda-Fact6, V24770 . . . 431
6.69 Relative invocation counts: Lambda-Fact6, V30194 . . . 431
6.70 Relative invocation counts: Lambda-Fact6, V36807 . . . 431
6.71 Relative invocation counts: Lambda-Fact6, V44868 . . . 432
6.72 Relative invocation counts: Lambda-Fact6, V54693 . . . 432
6.73 Relative invocation counts: Lambda-Fact6, V66671 . . . 432
6.74 Relative invocation counts: Lambda-Fact6, V81272 . . . 433
6.75 Relative invocation counts: Swim, V21383 . . . 434
6.76 Relative invocation counts: Swim, V26066 . . . 434
6.77 Relative invocation counts: Swim, V31774 . . . 435
6.78 Relative invocation counts: Swim, V38733 . . . 435
6.79 Relative invocation counts: Swim, V47215 . . . 435
6.80 Relative invocation counts: Swim, V57555 . . . 436
6.81 Relative invocation counts: Swim, V70160 . . . 436
6.82 Relative invocation counts: Swim, V85524 . . . 436
6.83 Relative invocation counts: Swim, V104254 . . . 437
6.84 Relative invocation counts: Tomcatv, V37292 . . . 438
6.85 Relative invocation counts: Tomcatv, V45459 . . . 438
6.86 Relative invocation counts: Tomcatv, V55414 . . . 439
6.87 Relative invocation counts: Tomcatv, V67550 . . . 439
6.88 Relative invocation counts: Tomcatv, V82343 . . . 439
6.89 Relative invocation counts: Tomcatv, V100376 . . . 440
6.90 Relative invocation counts: Tomcatv, V122358 . . . 440
6.91 Relative invocation counts: Tomcatv, V149154 . . . 440
6.92 Relative invocation counts: Tomcatv, V181818 . . . 441
6.93 Relative invocation counts: Tree-Replace-Binary, V11926 . . . 442
6.94 Relative invocation counts: Tree-Replace-Binary, V14538 . . . 442
6.95 Relative invocation counts: Tree-Replace-Binary, V17722 . . . 443
6.96 Relative invocation counts: Tree-Replace-Binary, V21603 . . . 443
6.97 Relative invocation counts: Tree-Replace-Binary, V26334 . . . 443
6.98 Relative invocation counts: Tree-Replace-Random, V15985 . . . 444
6.99 Relative invocation counts: Tree-Replace-Random, V19486 . . . 444
6.100 Relative invocation counts: Tree-Replace-Random, V23754 . . . 445
6.101 Relative invocation counts: Tree-Replace-Random, V28956 . . . 445
6.102 Relative invocation counts: Tree-Replace-Random, V35297 . . . 445
6.103 Relative invocation counts: Tree-Replace-Random, V43027 . . . 446
6.104 Relative invocation counts: Tree-Replace-Random, V52450 . . . 446
6.105 Relative invocation counts: Tree-Replace-Random, V63936 . . . 446
6.106 Relative invocation counts: Richards, V1826 . . . 447
6.107 Relative invocation counts: Richards, V2225 . . . 447
6.108 Relative invocation counts: Richards, V2713 . . . 448
6.109 Relative invocation counts: Richards, V3307 . . . 448
6.110 Relative invocation counts: Richards, V4032 . . . 448
6.111 Relative invocation counts: Richards, V4914 . . . 449
6.112 Relative invocation counts: Richards, V5991 . . . 449
6.113 Relative invocation counts: Richards, V7303 . . . 449
6.114 Relative invocation counts: Richards, V8902 . . . 450
6.115 Relative invocation counts: JavaBYTEmark, V72807 . . . 451
6.116 Relative invocation counts: JavaBYTEmark, V88752 . . . 451
6.117 Relative invocation counts: JavaBYTEmark, V108188 . . . 452
6.118 Relative invocation counts: JavaBYTEmark, V131881 . . . 452
6.119 Relative invocation counts: Bloat-Bloat, V246766 . . . 453
6.120 Relative invocation counts: Bloat-Bloat, V300808 . . . 453
6.121 Relative invocation counts: Bloat-Bloat, V366682 . . . 454
6.122 Relative invocation counts: Bloat-Bloat, V446984 . . . 454
6.123 Relative invocation counts: Bloat-Bloat, V544872 . . . 454
6.124 Relative invocation counts: Bloat-Bloat, V664195 . . . 455
6.125 Relative invocation counts: Bloat-Bloat, V809650 . . . 455
6.126 Relative invocation counts: Bloat-Bloat, V986959 . . . 455
6.127 Relative invocation counts: Toba, V353843 . . . 456
6.128 Relative invocation counts: Toba, V431335 . . . 456
6.129 Relative invocation counts: Toba, V525794 . . . 457
6.130 Relative invocation counts: Toba, V640941 . . . 457
6.131 Relative invocation counts: Toba, V781303 . . . 457
6.132 Relative invocation counts: Toba, V952404 . . . 458
6.133 Relative invocation counts: Toba, V1160976 . . . 458
6.134 Relative invocation counts: Toba, V1415223 . . . 458
6.135 Relative invocation counts: Toba, V1725148 . . . 459
CHAPTER 1 INTRODUCTION
Garbage collection, the automatic reclamation of dynamically allocated memory, is a mature memory-management technique. It has been in use since 1958 [McCarthy, 1960] and is now accepted as a customary element of dynamic memory management in the run-time systems of modern programming languages, especially object-oriented ones, for reasons of safety and for its clear software engineering benefits [Gosling et al., 1996]. The increasing acceptance of object-oriented languages makes it more important than ever for garbage collection to work as fast as possible.

Multi-generational copying collectors are the most popular memory reclamation algorithms in use today. Generational collectors have two real advantages over the original non-generational collectors: pause times are shorter (on average), and total cost is lower. There are, however, three tenets that are commonly held to explain why generational collectors have these advantages, and that have discouraged exploration of alternative strategies.

The first has to do with a property, empirically confirmed in object-based systems, that young objects die fast (more precisely, that object mortality is a decreasing function of age). The popular wisdom is that generational collectors exploit this property by concentrating effort on collecting young objects. Furthermore, whenever only a part of the heap is collected at once, the remainder must be assumed live for the purposes of the collection, and all pointers from outside the collected region into it are roots of the collection. This approximation is conservative: it considers more data live than is actually the case, and thus copying cost is greater. The second tenet is that this excess copying effort is small in generational collection because pointers from outside the region into it are, by construction, pointers from older objects to younger objects, and such pointers are few. The third tenet is that for the same reason it is easy to maintain the write barrier (the actions taken during the execution of the program when a pointer is stored) and the remembered sets (the sets of pointers crossing into a region).

Taken together, these three tenets do more than explain the good performance of generational collection. They have been understood in the past to rule out any strategy that might collect older objects in preference to younger ones. Any such strategy would apparently concentrate effort on regions of lower mortality. It would also cause the collected region to have many incoming pointers from outside, thus from younger objects, and so incur both a high excess copying penalty and a high maintenance cost for the write barrier and remembered sets.

Our study shows that, although each of the three tenets is intuitively and plausibly based on truthful observations of particular garbage collectors' behavior, alternatives to youngest-first generational collection are nevertheless feasible. The several plausible inferences drawn from the observations do not in fact rule out good incremental copying collectors other than generational: as we shall discover, they merely set the stage for a performance trade-off.
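The write-barrier bookkeeping just described can be made concrete with a toy sketch. This is our illustration only, not the dissertation's collector: it assumes a heap that tracks allocation order, and it records exactly the older-to-younger stores that a generational collector needs as extra roots when it collects the young region alone. The names (`Heap`, `write_barrier`) are hypothetical.

```python
from types import SimpleNamespace

class Heap:
    """Toy heap: tracks allocation order and a remembered set of stores."""
    def __init__(self):
        self.age = {}            # id(obj) -> allocation index (lower = older)
        self.remembered = set()  # (id(src), field) pairs recorded by the barrier
        self.next_index = 0

    def allocate(self, obj):
        self.age[id(obj)] = self.next_index
        self.next_index += 1
        return obj

    def write_barrier(self, src, field, target):
        # Record only older-to-younger stores: when the young region is
        # collected on its own, these stores supply its extra roots.
        if self.age[id(src)] < self.age[id(target)]:
            self.remembered.add((id(src), field))
        setattr(src, field, target)

heap = Heap()
a = heap.allocate(SimpleNamespace(f=None))  # allocated first: older
b = heap.allocate(SimpleNamespace(f=None))  # allocated second: younger
heap.write_barrier(a, "f", b)  # older-to-younger: recorded
heap.write_barrier(b, "f", a)  # younger-to-older: a generational barrier skips it
```

The age-based collectors introduced later invert part of this filter: they must also record certain younger-to-older stores, which is where their extra cost arises.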
[Figure 1.1. Surviving objects in a newly allocated area. The plot shows the fraction surviving (vertical axis, from 0 to 1) as a decreasing function of age (horizontal axis, from 0 to V, with the midpoint V/2 marked).]
First, the intuition that effort should go to regions of high mortality is countered by the intuition that, within a given newly allocated area, the older half has less live data than the younger half, so it ought to be better to collect the older half.1 A plot of the survivor function of the distribution of object lifetimes (Figure 1.1) illustrates this fact: the survivor function is always decreasing, so the area under the curve, which is the amount of surviving data, is smaller in the segment [V/2, V] than in the segment [0, V/2]. In fact, it would be best to collect just the oldest object in the area.

Figure 1.2 illustrates the point: imagine that newly allocated objects are placed in a queue, which plays the role of the youngest generation in generational collectors (also known as the nursery). When objects emerge from the queue, they are tested for liveness. Ideally, the length of the queue should equal the lifetime of each object, so that no object exits the queue alive. However, objects have diverse lifetimes, which a queue of fixed length cannot match. Yet if objects allocated in close succession have similar lifetimes, then the queue can manage them well, and if it can adjust the queue length to the objects' lifetimes, so much the better. Our design for the deferred older-first collector in Section 3.1.1, p. 26, can be seen as an attempt to realize an approximation to a collection queue, within the restrictions of a finite and fixed space available for the queue and its survivors.
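The claim about the two halves of the area can be checked numerically. The sketch below is ours, and it assumes, purely for illustration, an exponential survivor function s(a) = exp(-a); any decreasing survivor function yields the same ordering of the two integrals.

```python
import math

def surviving(lo, hi, n=100000):
    """Midpoint-rule approximation of the integral of s(a) = exp(-a) over [lo, hi]."""
    width = (hi - lo) / n
    return sum(math.exp(-(lo + (i + 0.5) * width)) for i in range(n)) * width

V = 4.0                      # size of the newly allocated area (arbitrary units)
young = surviving(0, V / 2)  # surviving data in the younger half [0, V/2]
old = surviving(V / 2, V)    # surviving data in the older half [V/2, V]
# Because s is decreasing, the older half always holds less live data,
# so collecting it frees more space for the same amount of region examined.
```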
[Figure 1.2. A collection queue. Objects enter the queue at the young end upon allocation, move through it in allocation order, and undergo a liveness test as they emerge from the old end.]
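The collection queue of Figure 1.2 can be modeled in a few lines. This toy model is our sketch, not the dissertation's simulator; `run_queue` and its parameters are hypothetical names. Time is counted in allocations, and an object allocated at time t with lifetime L is dead once L further allocations have occurred.

```python
from collections import deque

def run_queue(lifetimes, queue_length):
    """Return how many objects exit the queue while still alive."""
    queue = deque()
    survivors = 0
    for t, lifetime in enumerate(lifetimes):
        queue.append((t, lifetime))       # allocation enters at the young end
        if len(queue) > queue_length:
            born, life = queue.popleft()  # object emerges at the old end
            if born + life > t:           # liveness test: not yet dead
                survivors += 1
    return survivors
```

With uniform lifetimes of 3 allocations and a queue of length 3, every object is dead by the time it emerges; a shorter queue releases objects too early, illustrating why matching queue length to lifetimes matters.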
With regard to the empirical observations on which the first tenet was founded, we should point out that, even though a decreasing mortality has been attested in all measured systems for relatively young objects, the behavior of relatively older objects, and certainly of very long-lived objects in long-running programs, remains unexplored.

1 This point is subtle. For instance, in discussing garbage collection in the implementation of SML, Appel states both that the longer one waits before a collection, the more records have turned into garbage [Appel, 1992, p. 228], and that newer cells have a higher proportion of garbage, which is why garbage collection efforts should be concentrated on them [Appel, 1992, p. 208]. Both of these statements are true, but not in the same context. If the meaning of characterizations such as these is improperly or simplistically understood, it is easy to derive incorrect inferences.

Concerning the second tenet, excess copying: we empirically examined the difference in copying cost between a collector that assumes that everything outside the collected region is live, and one that has an oracle to tell apart the live from the dead. For the collection schemes we considered, we found that, as long as the scheme preserves the objects in a logical order of allocation (thus always having data of a contiguous age group in the region collected), the excess copying is usually not excessive.

We then measured the statistics of pointer direction and found that neither the older-to-younger nor the younger-to-older direction dominates across all studied traces (Smalltalk and Java). But, more importantly, we found that pointer distances (differences in age between the object containing the pointer and the object pointed to) are very small indeed, whether positive or negative.2 While this property works in favor of generational collection, it also works in favor of any other age-order-preserving collection scheme, by reducing the number of pointers that must be maintained (Section 3.2.2).

Concerning the third tenet, the implementation overheads: it withstood our validation. Indeed, the circumstance that neither pointer direction dominates is not significant in light of another empirical fact: most pointer stores are into new, or very recently allocated, objects. Generational collectors do not need to record these stores, regardless of direction. The other age-based collectors we introduce here must record the younger-to-older pointer stores into new objects, and pay a significant price for it. Nevertheless, we shall see that for many programs it is worth paying this price to receive the benefit of lower copying cost.
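The pointer-distance statistic described above (age of the pointed-to object minus age of the pointing object, with age taken as allocation order) can be illustrated on a hypothetical trace. This sketch is ours, not the measurement infrastructure used in the study; `pointer_distances` and the sample objects are invented for illustration.

```python
def pointer_distances(stores, alloc_index):
    """stores: iterable of (src, target) pairs; alloc_index: object -> allocation index.

    A positive distance is an older-to-younger pointer; a negative one,
    younger-to-older. Small magnitudes mean pointers stay within an age
    neighborhood, which benefits any age-order-preserving scheme.
    """
    return [alloc_index[target] - alloc_index[src] for src, target in stores]

alloc_index = {"a": 0, "b": 1, "c": 2}  # "a" is oldest, "c" youngest
distances = pointer_distances([("a", "c"), ("c", "b")], alloc_index)
```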
2 DeTreville reports that in Taos, an operating system written in Modula-2+, the younger-to-older pointer stores barely outnumber (by 1.96 × 10^6 to 1.17 × 10^6) older-to-younger pointer stores [DeTreville, 1990b, p. 29]. The reported temporal distances are also very short; since they were obtained with respect to a particular garbage collector execution, they are not directly comparable to our measurements of Smalltalk and Java distances in Section 6.2.

In this study, we explore generational and other collection schemes among those that preserve the relative order of objects, using simulation studies and a prototype collector implementation. We examine the obvious opposite of youngest-first collection, which is a strict oldest-first: choosing a fixed amount of oldest data. It sometimes works well, but if the chosen amount is small, the collector works poorly. Similarly, a strict youngest-first collector, which always chooses a fixed amount of youngest data (i.e., the nursery), works poorly. These observations lead us to explore strategies in which the collector chooses the collected region more flexibly, and visits both older and younger regions.

A deferred older-first collector does so in the simplest possible manner, by moving a window of collection across the heap, but looking at older objects first. It performs remarkably well, sometimes close to the performance of a locally-optimal collector. The intuitive reason for this good performance is also remarkably simple: as the window of collection moves towards younger objects, it eventually finds itself at a position corresponding to an age by which most objects have died; and, since few objects survive in the window, the window does not move very fast. Thus, this collector homes in on the region where it is most profitable to collect, and stays there for a long time. A welcome property is that the best configurations observed in our experiments are those with a small window size, and, since the amount copied per collection is limited by the window size, the low total copying cost is accompanied by predictably short pause times. Pointer-maintenance costs are inevitably higher in this collector than they are in generational collection. Nevertheless, the trade-off between these two cost components favors the newly proposed collector for a number of programs we examine.

In Chapter 2, we present the state of scholarship and introduce the terms used in the study of garbage collection algorithms. We discuss a classification of age-based collectors and the design and implementation of new algorithms in Chapter 3.
We set the stage for the experimental evaluation of different collectors by examining the intrinsic properties of benchmark programs in Chapter 4. The copying cost of collection is the subject of Chapter 5, while the pointer-maintenance cost is discussed in Chapter 6.
Note on organization. Chapters 4–6 include long series of figures and tables. These are printed en masse at the end of each chapter, so as not to disrupt the flow of text, yet keep them close to it.
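The window dynamics of the deferred older-first strategy described in this introduction can be illustrated with a toy simulation. This is only an illustrative model, not the collector studied in later chapters; the lifetime model, the collection trigger, and all names are assumptions made for the sketch.

```python
# Toy simulation of the deferred older-first idea: a fixed-size window
# sweeps an age-ordered heap; at each collection, survivors stay in place
# and the window advances past them toward younger objects.

def simulate_older_first(lifetimes, heap_size, window):
    """Return total volume copied (a proxy for copying cost).

    Each object occupies one word; allocating one object advances the
    clock by one tick; an object allocated at tick b with lifetime L is
    garbage once the clock exceeds b + L.
    """
    heap = []          # age-ordered, oldest first: (birth_tick, lifetime)
    copied = 0
    pos = 0            # index of the older edge of the collection window
    tick = 0
    for life in lifetimes:
        if len(heap) >= heap_size:              # heap full: collect window
            if pos + window > len(heap):
                pos = 0                         # wrap to the oldest end
            region = heap[pos:pos + window]
            survivors = [o for o in region if o[0] + o[1] > tick]
            copied += len(survivors)            # survivors are copied
            heap[pos:pos + window] = survivors  # garbage in window is freed
            pos += len(survivors)               # window moves past survivors
        heap.append((tick, life))
        tick += 1
    return copied
```

Because the window advances only past survivors, it lingers wherever most objects have already died, which is the behavior described above.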
CHAPTER 2 BACKGROUND
This chapter introduces the concepts and the terminology used in the subsequent discussion while surveying the vast field of literature related to memory management and garbage collection.
2.1 Dynamic memory management

Under the appellation of dynamic memory management, we study the provision in the runtime system of a programming language of mechanisms for allocation of contiguous blocks of memory of arbitrary size and unrestricted temporal extent at the request of the program. These blocks of memory are known as objects. In object-oriented languages, there is usually a one-to-one correspondence between a linguistic object and its representation as a dynamically allocated block of memory. In other contexts, the representations of a C struct created using a call to malloc, a Pascal record created using new, or a LISP cons cell, are all objects.

A significant amount of research has gone into algorithms for managing dynamically allocated data under the restriction that the objects must not be moved, and that the program must explicitly request object deallocation following its last use of an object. These algorithms concentrate on fast allocation, typically using free lists, and efficient memory usage, avoiding fragmentation. Wilson and colleagues wrote an excellent survey of the field [Wilson et al., 1995]. Our focus, however, is on algorithms that automatically detect that the last use of an object must have occurred, by proving that no further use of the object is possible. These algorithms
are called garbage collectors. They remove the burden of object deallocation from the programmer, and in so doing eliminate an important source of programming errors.1 The garbage collector considers an object to be garbage (or dead) if it is not reachable: the program cannot access it by any chain of pointer traversals. Any object that is potentially reachable is considered live. Although the name implies that garbage is collected, it is usually the live objects that the collector works with: it preserves them, while reusing the space of the garbage objects.

We shall briefly introduce the concepts needed to understand the arguments made later; for a comprehensive treatment, the reader should consult Cohen’s survey of the early development of garbage collection techniques [Cohen, 1981], Wilson’s survey, which concentrates on newer algorithms [Wilson, 1992], as well as Jones and Lins’s textbook presentation of the field [Jones and Lins, 1996].
1 A study of software errors in IBM MVS ascribes almost one half to pointer and array access errors [Sullivan and Chillarege, 1991].

2.1.1 Liveness and reachability

Let us first look at simple (non-generational) collection, in which the entire heap is subject to collection. Ideally, any object that will not be used by the program after the time of collection can be discarded as garbage. Unfortunately, this criterion is not effectively computable. Instead, the consensus is to use pointer reachability as a conservative approximation. Given a set of root pointers into the heap, owned by the program in its data outside the heap (stack, registers, etc.), everything reachable from this set is considered live. In other words, the transitive closure of the points-to relation, starting with the roots, defines the survivors of the collection. This approximation is safe, since no computation can use an object unless it can refer to it by a pointer, and if it does not already have that pointer it can only obtain it by following some chain of pointers starting from some other pointer it does have. Pointer reachability is simple to compute and is found in the core of all garbage collectors. Between the approximation of
liveness by reachability and the unachievable exact liveness according to future use, there can be a gradation of techniques to make the approximation more accurate using different static analyses of the program (compile-time garbage collection, live variable analysis, etc.). We do not consider these here; hereafter we equate liveness with reachability from the root set of pointers.

We are concerned, however, with another distinction that arises only with region-based collection, which divides the heap H into a collected region C and a remainder U. Live objects in the collected region are those reachable from the set of root pointers. The goal of region-based collection is to avoid having to look at the (normally large) remainder. Instead, a write barrier ensures that all pointers crossing from U to C are readily available at time of collection. All such pointers, called the remembered set for region C, are added to the root set, and then the transitive closure of pointer reachability is computed only within C. Let us call this result S_r, for survivors by remembered-set reachability. The actual set of live data is S ⊆ S_r. Calculation
using remembered-set reachability treats all objects in the uncollected remainder U as if live, since it does not examine U , and thus potentially overestimates the set S. In any region-based collector, there is potential for such excess retention or “nepotism” [Ungar and Jackson, 1988] on each garbage collection. How much excess retention is there? Clearly, a collection scheme could have dramatically high amounts of excess retention if it were the case that (1) pointers tended to cross region boundaries predominantly into the collected region, thus there were many pointers in the remembered-set, and (2) many of these pointers were from objects in the remainder that are actually dead, and (3) the pointer structure caused the transitive closure operation falsely to mark very large amounts of data as live, starting from such false pointers. The effects on performance would be profound, since both the cost of each collection and the frequency of collections would be high. We have assumed in the preceding that the set of external roots in the stack and registers is accurately known. This is predicated on the ability of the collector to examine and interpret
the stack and registers. A system may opt for self-descriptive data, which incurs the space overhead of tagging each word, or a more compact form using stack maps describing stack contents at admissible garbage collection points. Additionally, a data-flow liveness analysis may reduce the number of registers in the root set [Agesen et al., 1998]. In contrast, if the system is uncooperative, then it may be the case that neither the values in the stack and registers, nor the values in the heap, can be accurately interpreted: it may be impossible to distinguish some pointer representations from some non-pointer representations. Such is the situation with languages such as C and C++, which were designed without garbage collection in mind. Conservative garbage collectors (or ambiguous-roots collectors) have been developed to cope with this problem [Boehm and Weiser, 1988; Demers et al., 1990; Boehm, 1993]. Although the analyses in this study have been designed and experimentally tested with accurate-roots collection in mind, they may be adapted to an ambiguous-roots setting.
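The excess-retention scenario of remembered-set reachability can be made concrete with a small sketch. The object graph, roots, and region split below are invented illustrative data; `reachable` computes the transitive closure of the points-to relation, optionally restricted to the collected region:

```python
# Sketch: survivors of a region C by remembered-set reachability, compared
# with full-heap reachability to expose excess retention ("nepotism").

def reachable(roots, edges, restrict=None):
    """Transitive closure of the points-to relation from `roots`.
    If `restrict` is given, the traversal stays inside that set of nodes."""
    seen, work = set(), list(roots)
    while work:
        n = work.pop()
        if n in seen or (restrict is not None and n not in restrict):
            continue
        seen.add(n)
        work.extend(edges.get(n, ()))
    return seen

heap = {'a', 'b', 'c', 'd', 'e'}
edges = {'a': ['b'], 'd': ['c']}      # d is dead, yet points into C
roots = {'a'}
C = {'b', 'c'}                        # collected region
U = heap - C                          # uncollected remainder

# remembered set: targets of pointers crossing from U into C
remset = {t for s in U for t in edges.get(s, []) if t in C}

S_r = reachable(roots & C | remset, edges, restrict=C)  # computed survivors
S_true = reachable(roots, edges) & C                    # truly live in C
excess = S_r - S_true                                   # retained garbage
```

Here the dead object `d` keeps `c` alive through the remembered set, which is exactly the overestimation the text describes.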
2.1.2 Preserving live data

The collector must preserve the live objects for future use by the program. However, a distinction is made in collector design among techniques that keep objects in place, and those that move them. Mark/sweep collectors do not move objects [McCarthy, 1960]. Instead, free lists are used to manage space, as in manual dynamic storage allocation. The mark phase automates the detection of objects that have become garbage, while the sweep phase invokes the deallocation routine of the underlying dynamic storage allocator for each garbage object in turn.

Copying collectors, which are the focus of our study, move the live objects into a new memory area, the “to-space,” allowing the entire old area, or “from-space,” to be reused for allocation [Minsky, 1963; Fenichel and Yochelson, 1969]. The copying is accompanied by the update of pointers that point to the moved objects. The detection of live objects, by transitive closure for pointer reachability, and the copying into to-space are usually interleaved in a Cheney scan [Cheney, 1970], which also eliminates the need for an auxiliary stack to compute the closure. With the use of two semispaces, allocation is from a contiguous area,
hence simple and fast. The collector visits only live objects, which are typically fewer than garbage objects. However, the memory requirement of the copying collector is, in the worst case, double that of the mark/sweep collector.
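The interleaving of closure computation and copying in a Cheney scan can be sketched as follows. The heap model (Python objects with a list of pointer fields) is an illustrative assumption, but the use of to-space itself as the breadth-first scan queue is the standard structure of the algorithm:

```python
# Minimal Cheney-style copying collection: copy roots, then scan to-space
# left to right, copying each reachable object exactly once via its
# forwarding pointer. No auxiliary stack is needed.

class Obj:
    def __init__(self, *fields):
        self.fields = list(fields)    # each field: an Obj or None
        self.forward = None           # forwarding pointer once copied

def cheney(roots):
    to_space = []                     # copies in to-space, in copy order
    def copy(o):
        if o is None:
            return None
        if o.forward is None:         # not yet copied: copy and forward
            new = Obj(*o.fields)
            o.forward = new
            to_space.append(new)
        return o.forward
    new_roots = [copy(r) for r in roots]
    scan = 0                          # Cheney scan pointer
    while scan < len(to_space):       # scan copied objects breadth-first
        obj = to_space[scan]
        obj.fields = [copy(f) for f in obj.fields]
        scan += 1
    return new_roots, to_space
```

Note that only live objects are visited: an unreachable object is never copied, and cycles are handled by the forwarding pointers.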
2.1.3 Incremental collection

Collectors that collect entire heaps at once incur very long pause times. In applications tolerating only short and predictable pauses, incremental collection is needed. The coordination between the collector and the running program, to allow the collector to make progress piecemeal or even concurrently with the program, is not simple. It is accomplished using either a read-barrier mechanism or a write-barrier mechanism. With a read barrier, the program is alerted when it tries to follow a pointer to an object that the collector has moved [Baker, 1978]. Because read operations are extremely frequent, read barriers have used hardware support for efficiency [Zorn, 1989].2 With a write barrier, when the program stores a pointer into an object, the write is recorded. Whereas hardware support for the write barrier can be provided as well, as in the SOAR Smalltalk system [Ungar, 1986], pointer update operations are typically far fewer in number than pointer reads, so a software barrier in the form of in-line code is generally thought to be acceptable.
2 Zorn finds that a software read barrier could be implemented with a CPU overhead of not more than 20%, however [Zorn, 1990a].

2.1.4 Generational copying collection

Generational copying collection divides the heap according to the age of objects into multiple areas, or generations, and collects younger generations more frequently than older generations [Lieberman and Hewitt, 1983; Moon, 1984; Ungar, 1984]. There are subtle differences among these algorithms as described in the literature, because most descriptions combine the organization of the heap into generations with particular policies for promotion of survivors of garbage collections. Even when these algorithms are designed as non-incremental (“stop-and-copy”), they can be effectively incremental, by virtue of the fact that most collections are only
of the youngest generation, and thus most pauses are short. Numerous practical implementations and studies of predictive and adaptive management of generations have reported good performance [Caudill and Wirfs-Brock, 1986; Courts, 1988; Shaw, 1988; Sobalvarro, 1988; Ungar and Jackson, 1988; Wilson, 1989; Wilson and Moher, 1989; Appel, 1989b; Wilson et al., 1991; Hudson et al., 1991; Stefanović, 1993b; Stefanović and Moss, 1994; Diwan et al., 1995; Barrett and Zorn, 1995].

The performance of a collector is affected by a number of factors: the number of generations in the system; the promotion policy, or how long an object must remain in one generation before it is advanced into the next older one; when to initiate collection; and how to track the pointers from older to younger generations [Wilson, 1992]. This space of design choices is immense. Moreover, the allocation characteristics of languages and individual programs are varied, and they strongly affect the performance of collectors.

The performance metrics—time overhead, pause time, and space overhead—are insidiously hard to define and measure. Does the time overhead include the write barrier, and how can the cost of the write barrier be extricated from other run-time costs? Is maximum pause time more important than average pause time, or some percentile of the distribution thereof? Should one measure the peak memory usage or the average memory usage? Should peak usage reflect the space needed only during copying to hold the to-space? Is it reasonable to let the oldest generation be “tenured” (allowed to grow indefinitely and never collected) or should measurements be taken with respect to a constant available space? The answers to these questions must depend on the intended application. As a result of the inherent complexity of generational collector design and of the methodological difficulties in evaluation, the understanding of generational garbage collectors has remained incomplete.
Aside from remarks in Wilson’s survey [Wilson, 1992] and Jones and Lins’ book [Jones and Lins, 1996, p.151], which admit the possibility of collecting an older generation independently of a younger one, generational collectors are youngest-first: when collecting an older generation, all younger ones must also be collected. The number of generations is typ-
ically small, so the choice of the size of the collected region is limited. Barrett and Zorn examined an adaptive scheme which dynamically chose an age boundary for each collection [Barrett and Zorn, 1995]. Our work instead proposes to collect among older (but not necessarily oldest) objects preferentially, postponing the youngest objects until they have had time to become older and die. The work closest to ours in this respect is Clinger and Hansen’s “non-predictive” collector, which organizes objects according to the time elapsed since last collected and performs well with a hypothetical exponential distribution of lifetimes [Clinger and Hansen, 1997]. However, as we shall see in our empirical study in Chapter 5, that particular organization does not work remarkably well.
2.1.4.1 Pointer maintenance

In order to collect one region (a set of youngest generations) without collecting the remainder of the heap, the set of pointers crossing the boundary from the remainder and into the collected region must be known. Pointers installed in an object at the time of its creation can only point to older objects and can never cross the boundary in the interesting direction; only pointers installed by updates can. Each updating pointer store must be checked: if the stored pointer is p → q, and p is in an older generation than q, then a record of this store will allow the pointer to be found at the time of collection of q. Early generational collectors used indirect pointers and region entry and exit tables, mechanisms that are very expensive without hardware support for pointer operations, and not transparent to the language. Ungar’s collector used a remembered set of updated objects in the old (tenured) generation [Ungar, 1984]. A set can alternatively record the precise location of the updated object fields (“slots”), trading extra space for reduced time to find the updated fields at garbage collection time [Hosking et al., 1992]. If there are few repeated stores into the same location between collections, a data structure simpler than a set may be preferable, such as a store list [Appel, 1989b] or a sequential store buffer [Hosking et al., 1992].
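A slot-recording write barrier with a sequential store buffer can be sketched as follows. The heap representation, the generation numbering (higher number = older generation), and all names are illustrative assumptions; the point is only the division of labor between the unconditional store-time log and the collection-time filtering:

```python
# Sketch of a software write barrier with a sequential store buffer (SSB):
# every pointer store is routed through pointer_store(), which logs the
# updated slot; at collection time the SSB is filtered into a remembered
# set of slots in older generations pointing into the collected one.

class Heap:
    def __init__(self):
        self.objects = {}         # id -> {'gen': int, 'fields': dict}
        self.ssb = []             # sequential store buffer of (obj, field)

    def pointer_store(self, src, field, dst):
        self.objects[src]['fields'][field] = dst
        self.ssb.append((src, field))     # unconditionally log the slot

    def remembered_slots(self, collected_gen):
        """Slots in older generations currently pointing into the
        collected generation (duplicates and stale entries filtered)."""
        rs = set()
        for src, field in self.ssb:
            dst = self.objects[src]['fields'][field]
            if (self.objects[src]['gen'] > collected_gen
                    and dst is not None
                    and self.objects[dst]['gen'] == collected_gen):
                rs.add((src, field))
        return rs
```

Logging unconditionally keeps the in-line barrier short; the filtering cost is deferred to collection time, when the slots are examined once.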
Virtual memory mechanisms can be used to record the approximate location of updated object fields to within a page. Pages are write-protected, the first update of a page causes a page protection fault, and the software fault handler makes a note of the page (in an auxiliary bit array) and unprotects it. This scheme is called page marking. Unfortunately, virtual memory primitives in current operating systems are not well adapted to the task (non-pointer stores and null pointer stores equally cause protection faults) and are slow (because of user/kernel mode switching) [Zorn, 1990a]; moreover, the granularity of pages is too coarse (the ratio of page size to the typical number of pointer updates within it is high) [Hosking et al., 1992]. Software surrogates of page marking are known as word marking [Sobalvarro, 1988], or more generally, card marking, in which the address space is conceptually divided into sub-page units (cards). Each pointer store is augmented with code to index into an auxiliary card table and set the bit corresponding to the card of the updated field. Whereas word marking records the precise location of updated fields, card marking with cards larger than a word saves auxiliary table space at the cost of scanning individual cards at garbage collection time [Hosking et al., 1992]. Card tables as described tend to retain scattered marked entries over time, and it is better to summarize them at collection time (into a concise remembered-set representation) so that the card table reflects only incremental updates [Hosking and Hudson, 1993].
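The card-marking mechanism just described can be sketched compactly. The card size, the class interface, and all names are illustrative assumptions; the essential operations are the in-line bit set on each pointer store and the collection-time summary that clears the table:

```python
# Card-marking sketch: the address space is divided into fixed-size cards;
# each pointer store sets the entry for the card containing the updated
# slot, and at collection time only dirty cards are scanned.

CARD_BITS = 7                         # 128-word cards (assumed size)

class CardTable:
    def __init__(self, heap_words):
        self.dirty = bytearray((heap_words >> CARD_BITS) + 1)

    def record_store(self, slot_addr):
        # This is the in-line write-barrier code: one shift, one store.
        self.dirty[slot_addr >> CARD_BITS] = 1

    def dirty_cards(self):
        return [i for i, d in enumerate(self.dirty) if d]

    def summarize(self):
        """Collection-time summary: return the dirty card indices (to be
        scanned and condensed into a remembered set) and clear the table,
        so that afterwards it reflects only incremental updates."""
        cards = self.dirty_cards()
        for i in cards:
            self.dirty[i] = 0
        return cards
```

Note the trade-off stated in the text: the table is small (one entry per card), but each dirty card must be scanned word by word at collection time to locate the actual updated pointer fields.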
2.1.5 Mature spaces

Bishop [Bishop, 1977] and later Hudson and Moss [Hudson and Moss, 1992] proposed dividing the heap into spaces managed by different algorithms. Bishop allows several independent areas, not necessarily all garbage-collected, and uses lists of incoming and outgoing inter-area links. Hudson and Moss divide the heap by age into a young and a mature space, which is in agreement with the intuition that the patterns of pointer structure linking the objects lose the strong directional bias (see Section 6.2) after a certain age; hence the generational algorithms, which exploit this bias, lose efficiency. The mature space is divided into areas of bounded size (or blocks of fixed size in an implementation in the Beta language [Grarup and
Seligmann, 1993]), which are processed one at a time. It is assumed that a generational algorithm manages the young space and engages the incremental mature space algorithm only when promoting out of the oldest generation of the young space.
2.2 Object survival and mortality

A recent book on garbage collection states [Jones and Lins, 1996, p.144]:

The insight behind generational garbage collection is that storage reclamation can be made more efficient and less obtrusive by concentrating effort on reclaiming those objects most likely to be garbage, i.e., young objects.

Hayes and others observe that the youngest objects have the highest mortality [Hayes, 1991; Baker, 1993; Hayes, 1993; Stefanović and Moss, 1994], which is a refinement of the somewhat vague observation that most objects have a very short lifetime while some live much longer [Lieberman and Hewitt, 1983; Ungar, 1984; Shaw, 1988; DeTreville, 1990a; Zorn, 1990b]. Figure 2.1 graphs a typical mortality function using Tomcatv written in Smalltalk as an example (see Section 4.2.2.3), with the age of objects on the horizontal axis and, for a given age, the rate at which they die on the vertical axis. Both axes use a logarithmic scale.

Is high mortality the same as being most likely to be garbage? No, since the very newest objects are patently not garbage. In statistical terms, it is the survivor function, and not the mortality, that tells us what is live, and the survivor function is always 1 at age 0. Formally, the survivor function s(t) is the fraction of original allocation that is still live at age t. The mortality m(t) = −s′(t)/s(t) is the age-specific death rate, or likelihood that an object will die in the next instant provided it has survived until age t [Cox and Oakes, 1984; Elandt-Johnson and Johnson, 1980; Baker, 1993]. Figure 2.2 plots a typical survivor function, again using Tomcatv. The horizontal axis is object age on a logarithmic scale. The vertical axis is the fraction of original allocation still live at a given age.

Given a region of just-allocated data of size V, is it more profitable to collect its younger half, ages 0 to V/2, or its older half, ages V/2 through V? Statistically speaking, the answer
Figure 2.1. A typical object mortality function. (Tomcatv; horizontal axis: age in words, log scale; vertical axis: mortality, log scale.)

Figure 2.2. A typical object survivor function. (Tomcatv; horizontal axis: age in words, log scale; vertical axis: fraction of original allocation still live.)
must be to collect the older half, which is clear from the survivor function diagram, Figure 2.2, or the simple calculation: ∫_{V/2}^{V} s(t) dt ≤ ∫_{0}^{V/2} s(t) dt, because the survivor function s(t) is non-increasing. In practice, we have to decide where to put the survivors, which complicates this simple analysis. The analysis suggests, however, the following revision of the preceding quotation:

Generational collection is efficient, provided its generations are well configured, because it concentrates on a set of young objects that includes the objects most likely to be garbage, albeit along with objects least likely to be garbage.

If the collector can concentrate only on the objects most likely to be garbage, then it may be able to achieve even lower copying cost. This discussion was entirely independent of the
actual shape of the survivor and mortality curves, but the relative merits of generational and other collectors will certainly depend on them. Assumed or measured object lifetime distributions can be used to predict the performance of collectors [Clinger and Hansen, 1997]. However, the implicit assumption that object lifetimes are independent identically distributed random variables is far from true: phase behavior of programs must lead to highly correlated object lifetimes. The collector performance calculations are tied to the assumption of a steady state, but this assumption restricts the class of admissible distributions. The questions of the feasibility and the utility of analytical modelling of object lifetimes therefore remain open [Stefanović et al., 1998a; Appel, 1997; Stefanović et al., 1998b].
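The claim that the older half of a just-allocated region holds no more live volume than the younger half, for any non-increasing survivor function, can be checked numerically. The exponential survivor function and the region size below are assumed examples only; the argument itself does not depend on them:

```python
# Numerical check: for a non-increasing survivor function s(t), the older
# half (ages V/2..V) of a freshly allocated region has no more live volume
# than the younger half (ages 0..V/2).

import math

def live_volume(s, a, b, steps=10000):
    """Approximate the integral of s over [a, b] (midpoint rule)."""
    h = (b - a) / steps
    return sum(s(a + (i + 0.5) * h) for i in range(steps)) * h

s = lambda t: math.exp(-t / 300.0)    # assumed non-increasing survivor fn
V = 1000.0
older = live_volume(s, V / 2, V)      # live volume among ages V/2..V
younger = live_volume(s, 0, V / 2)    # live volume among ages 0..V/2
assert older <= younger               # collecting the older half copies less
```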
2.3 Theoretical models of memory management

Object-based computation appears in its purest form in the Storage Modification Machine, or pointer machine [van Emde Boas, 1990; Kolmogorov and Uspenskii, 1958; Schönhage, 1980], a model of computation based on a finite directed graph structure of storage instead of an array of memory registers as in the more commonly used RAM model. The graph structure is dynamic. The model incorporates the notion of new node creation. The requirement for what we call garbage collection is expressed abstractly in the following dictum [van Emde Boas, 1990, p. 32]: “The map p does not have to be surjective; however, nodes which can not be reached by tracing a word w in ∆ starting from the center a will play no subsequent role during the computations of the SMM, and therefore nodes may be assumed to have disappeared when they become unreachable.”

A complementary aspect of theoretical approaches is the development of semantic models of memory management: deriving the properties of dynamically allocated data by linguistic means, usually in the context of typed functional languages [Morrisett et al., 1995]. These models indicate which objects are garbage and why, and additionally allow one to express memory management actions syntactically, and then reason about their correctness. (Unfortunately, these models do not answer quantitative questions of the kind posed in our study.) Similar models are desirable for memory management in object-oriented languages, but their semantic modelling in general remains less settled [Abadi and Cardelli, 1996].

A related discipline is that of compile-time garbage collection, or type and data-flow analyses to improve the run-time performance of collection in (usually) functional languages [Appel, 1989a; Harrison, 1989; Baker, 1990; Goldberg and Gloger, 1992; Boehm and Shao, 1993; Fradet, 1994; Aditya et al., 1994; Mohnen, 1995]. The primary idea is to eliminate the need for heap allocation in the first place, by proving that certain objects can be allocated on the stack instead, which is considered (although not universally, cf. [Appel, 1987]) to be faster. More ambitious analyses aim to provide hints to a generational collector about object lifetimes [Yi and Harrison, 1992].
CHAPTER 3 AGE-BASED GARBAGE COLLECTION ALGORITHMS
In this chapter we establish a classification of garbage collection algorithms that preserve the age ordering of objects in the heap. We introduce new algorithms within this classification. Unlike generational algorithms, the new algorithms do not always collect a youngest subset of heap objects. We proceed to discuss the repercussions of this design difference on the implementation of copying and pointer maintenance in the collector.
3.1 Conceptual design

The garbage collection algorithms we consider are all “generational” in the broadest sense of the word: when a collection happens, each scheme partitions the heap H into two regions: the collected region C, in which the collector examines the objects for liveness, and if live, they survive the collection; and the uncollected remainder region U, in which the collector assumes the objects to be live and does not examine them. The non-generational collector is a degenerate case in which the latter region is empty. The collector further partitions the set C into the set of survivor objects S and the set of garbage objects G, by computing root pointers into C and the closure of the points-to relation within C. To make the freed space conveniently available for future allocation, a copying collector manipulates the survivors S by moving them to the “to-space”. This manipulation is a primary cost of collection [Jones and Lins, 1996].
Denoting by |X| the total volume of objects in a set X, we use as a measure of the copying cost of collection the “mark/cons”1 ratio, the ratio of the volume copied to the volume freed (i.e., available for allocation in the subsequent cycle): µ = |S| / |G|, where |G| = |C| − |S|. It is desirable to minimize µ. If the initial partition into C and U is such that most of C falls into G, the copied volume of survivors |S| will be small. However, if arbitrary schemes are considered, then clearly the best scheme is one that always makes the initial partition so that G = C, and thus µ = 0. But is it realistic? The computational effort to perform the initial
partition ideally is as great as to perform a non-generational collection—except if an oracle were available. We consider as realistic only those schemes which compute the initial partition very cheaply. We restrict our attention to a class of schemes that keep objects in a linear order according to a certain object timestamp. Imagine objects in the heap as if arranged from left to right, with the oldest timestamp on the left, and the youngest timestamp on the right, as in Figure 3.1. The region collected, C , is restricted to be a contiguous subsequence of this sequence of heap objects, thus the cost of the initial partition, simply choosing the two boundaries, in some prescribed manner, is virtually nil. We call these schemes age-based collection.
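The mark/cons ratio, the ratio of the volume copied to the volume freed, can be computed for a single collection as in the following sketch; the object sizes are invented illustrative data:

```python
# Mark/cons ratio for one collection: volume copied divided by volume
# freed, where the freed volume is |G| = |C| - |S|.

def mark_cons(collected, survivors):
    """collected, survivors: dicts mapping object id -> size in words."""
    vol_C = sum(collected.values())
    vol_S = sum(survivors.values())
    vol_G = vol_C - vol_S             # freed volume
    return vol_S / vol_G

C = {'a': 4, 'b': 8, 'c': 4, 'd': 16}
S = {'b': 8, 'd': 16}                 # survivors of this collection
mu = mark_cons(C, S)                  # 24 words copied / 8 words freed
```

A collection that picks a region full of garbage drives µ toward 0; picking a region full of survivors drives it up, as in this example where µ = 3.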
Figure 3.1. Viewing the heap as an age-ordered list.
We consider two notions of timestamp: true age and renewal age. In the first case, each object is given a permanent timestamp once, at the time of allocation. The relative order of objects in the sequence of heap objects never changes. In the second case, upon collection, the collector gives all survivors fresh, renewed, timestamps, as if they were newly allocated. It then moves these survivors to the young end of the sequence of heap objects.
1 The name is used for historical reasons: in the original mark/sweep collector for LISP [McCarthy, 1960] the live objects were marked during collection, whereas the allocation consisted solely of LISP pairs, known as cons cells.
True-age timestamps permit the collector to focus attention on particular age groups, i.e., on sets of objects allocated consecutively. They arise naturally from considerations of object lifetime behavior—clusters of objects allocated together tend to expire together [Hayes, 1993]. On the other hand, the notion of renewal-age timestamps can arise from considerations of an a priori exponential lifetime distribution [Clinger and Hansen, 1997]. Traditional generational collection schemes are, in the main, age-based. They are based on true age, and the region collected is some subsequence that always includes youngest (most recently allocated) objects.2 In the following, we introduce and categorize alternative collection schemes according to their choice of objects for collection. The age-based collection algorithm is also presented in the form of code in Figure 3.2.
3.1.1 True-age schemes

The simplest age-based collector and the base point of our comparisons is the non-generational collector. It will be labelled NONGEN in the discussion and figures below. The region of collection in NONGEN is always the entire heap: C = H, U = ∅.
We now consider age-based schemes, distinguished by different choices they make for the collected region. A true youngest-first (TYF) collector always chooses some youngest (rightmost) subsequence of the sequence of heap objects (Figure 3.3). Generational collector schemes are variants of true youngest-first collection, differing in the size of C and in how they trigger collections [Barrett and Zorn, 1995]. In the basic design [Jones and Lins, 1996, p.147], new allocation is into one fixed-size part of the heap (the nursery), and the remainder is reserved for older objects (the older generation). Whenever the nursery fills up, it is collected, and the survivors promoted to the older generation (Figure 3.5). When the older generation fills up, then the following collection collects it together with the
2 A slight divergence from the age-based criterion may arise in copying collectors, because the order of the objects in to-space is determined by the heap traversal (typically breadth-first, as in the standard Cheney scan [Jones and Lins, 1996; Cheney, 1970]), and is in general different from the original order in from-space. Hence, the age order of objects is perturbed slightly within S for each collection. In compacting collectors reordering does not occur at all.
initialize:
    available_space := heap_size
    current_timestamp := 0
allocate(num_words):
    if available_space >= num_words then
        create new object o with o.size := num_words and o.timestamp := current_timestamp
        current_timestamp := current_timestamp + num_words
        available_space := available_space - num_words
    else
        garbage collection:
            set timestamp boundaries tmin, tmax according to scheme
            C := { o | tmin <= o.timestamp <= tmax }
            U := H \ C
            S0 := { o in C | exists r in Roots ∪ U with r → o }
            S := TC(C, S0), where the transitive closure is computed as:
                S_{n+1} := S_n ∪ { o in C | exists o' in S_n with o' → o }
                S := ∪_{n>=0} S_n  (the union is finite)
            G := C \ S
            available_space := available_space + |G|
            discard objects in G
            if |G| = 0 then fail
    fi
Figure 3.2. Pseudocode for the conceptual design.
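The pseudocode of Figure 3.2 can be sketched as runnable Python; the `Obj` class, the explicit worklist, and the set representation are illustrative assumptions, not part of the dissertation's design.

```python
# Sketch of the conceptual age-based collection (Figure 3.2):
# C is the set of objects whose timestamps fall within [t_min, t_max];
# survivors are found by transitive closure from the roots and from
# the uncollected remainder U.
class Obj:
    def __init__(self, timestamp, refs=None):
        self.timestamp = timestamp   # allocation-time word counter
        self.refs = refs or []       # objects this object points to

def collect(heap, roots, t_min, t_max):
    """Return (survivors S, garbage G) for the chosen region C."""
    C = {o for o in heap if t_min <= o.timestamp <= t_max}
    U = set(heap) - C
    # S0: objects of C referenced directly from Roots or from U.
    S = {o for o in C if any(o in r.refs for r in roots | U)}
    worklist = list(S)
    while worklist:                  # transitive closure within C
        for child in worklist.pop().refs:
            if child in C and child not in S:
                S.add(child)
                worklist.append(child)
    return S, C - S
```

In the cost accounting of the surrounding text, |S| is the amount copied ("marked") by this collection.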
Figure 3.3. TYF: True youngest-first collection (general case).
Legend for Figures 3.3–3.12: C: collected region. U: remainder (region not collected). S: region of survivors. The blank area is freed for new allocation. Two successive collections are shown. For the sake of illustration, the surviving amount is 2 on collection 1, and 3 on collection 2. The collected region is enclosed in vertical bars.
nursery. In a two-generation collector, that collection considers the entire heap. The collector deliberately reserves space, so that, unlike in TYF, the region chosen for collection contains exactly the objects allocated since the last collection (except for full heap collections). The operation of the GYF collector is summarized in the code of Figure 3.4. We study two- and three-generation schemes: 2GYF (2 Generations; Youngest-First), and 3GYF. We stipulate that the size of each generation is strictly greater than 0, and therefore 3GYF never degenerates into 2GYF. When discussing generational schemes generally, we use
the label GYF. We also examine a scheme in which the older generation is allowed to grow and use all the space of the nursery, and vice versa [Appel, 1989b], and no space is held in reserve; this scheme is denoted NGYF, and its operation is shown in Figure 3.6. The simplest youngest-first collector is one that always chooses a constant amount of youngest objects to collect; we call this scheme FC-TYF (Fixed size of Collected region; True Youngest-First). Note, as is clear from the diagram in Figure 3.7, that survivors of the preceding collection remain in the collected region. One can expect this collector to be inefficient,
initialize:
    set nominal_size_g for generations g = 1..N, with sum_{g=1..N} nominal_size_g = heap_size
    for g := 1..N do Gen_g := ∅
    reserve := heap_size
    next_gen := 1
allocate(num_words):
    if min(nominal_size_1 - current_size_1, reserve) >= num_words then
        create new object o in Gen_1 with o.size := num_words
        current_size_1 := current_size_1 + num_words
        reserve := reserve - num_words
    else
        garbage collection:
            gen := next_gen
            C := ∪_{g=1..gen} Gen_g
            U := H \ C
            S0 := { o in C | exists r in Roots ∪ U with r → o }
            S := TC(C, S0)
            for g := 1..gen do S_g := S ∩ Gen_g
            Gen_1 := ∅
            for g := 2..gen+1 do Gen_g := Gen_g ∪ S_{g-1},
                except if gen = N then Gen_N := S_N ∪ S_{N-1}
            reserve := heap_size - sum_{g=1..N} |Gen_g|
            next_gen := min { g | for all g' > g, |Gen_{g'}| <= nominal_size_{g'} }
    fi
Figure 3.4. Pseudocode for GYF collectors.
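A minimal two-generation (2GYF) sketch of the same logic. Here `survivors_of` is a hypothetical stand-in for the tracing collector, sizes are in words, and, for brevity, the full-heap collection is performed immediately when the older generation overflows rather than at the next collection; all names are illustrative.

```python
class Word:
    """Trivial heap object with a size in words (illustrative)."""
    def __init__(self, size):
        self.size = size

class TwoGenHeap:
    def __init__(self, heap_size, nursery_size):
        self.heap_size = heap_size
        self.nursery_size = nursery_size
        self.nursery = []   # youngest objects, in allocation order
        self.older = []     # promoted survivors

    def allocate(self, obj, survivors_of):
        if sum(o.size for o in self.nursery) + obj.size > self.nursery_size:
            self.minor_collection(survivors_of)
        self.nursery.append(obj)

    def minor_collection(self, survivors_of):
        survivors = survivors_of(self.nursery)
        room = self.heap_size - self.nursery_size
        if sum(o.size for o in self.older + survivors) > room:
            # Older generation full: collect the whole heap.
            self.older = survivors_of(self.older + survivors)
        else:
            self.older.extend(survivors)
        self.nursery = []
```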
Figure 3.5. Generational youngest-first collection (2GYF).
because objects are repeatedly copied, and collections are frequent, because only the space freed is available for new allocation. Indeed, the older region remains untouched after it is initially allocated. Contrast this with the GYF collector, which does not use all the available space, putting some on reserve, but is then able to collect at regular intervals that correspond to the nursery size, and does not copy survivors of minor collections again until the next full collection. A true oldest-first (TOF) collector always chooses an oldest (leftmost) subsequence of the sequence of heap objects (Figure 3.8). An FC-TOF collector considers a constant amount of oldest objects. It is clear that survivors of the collection will be collected repeatedly. If the oldest objects in the heap survive indefinitely, as in many programs, and the collected region is small and only encompasses such objects, FC-TOF necessarily fails because it finds no garbage in the collected region. However, we encountered some programs where FC-TOF, with a moderate size of the collected region, works better than other age-based collectors: intuitively, it gives objects enough time to die before they enter the collected region.
Figure 3.6. Generational youngest-first collection with flexible generation boundaries (NGYF).
The simplicity of the FC-TYF and FC-TOF schemes lies in the fact that their collected regions are fixed. However, it is intuitively clear that performance suffers because some objects are subjected to collection repeatedly, while others are not subjected to collection soon enough. Thus, we must look for schemes that consider all parts of the heap at appropriate intervals. Note that GYF, which is known to work well in practice, revisits the entire heap, and at regular intervals,
by means of full collections. Let us instead examine ways to revisit all of the heap piecemeal. A deferred older-first (DOF) collector chooses a middle subsequence of heap objects, which is immediately to the right of the survivors of the previous collection (Figure 3.9). Thus the region of collection sweeps the heap rightwards, as a sliding window of collection. If the remaining area to the right is smaller than the window, the collector completes the sweep by
Figure 3.7. FC-TYF: True youngest-first collection (fixed collected region).
Figure 3.8. FC-TOF: True oldest-first collection (fixed collected region).
collecting the remaining area of youngest objects.3 The FC-DOF collector employs a window of constant size. The intuition for the potentially good performance of the FC-DOF collector can be gleaned from the diagram in Figure 3.10, which shows a series of eight collections, and indicates how the window of collection might move across the heap. The most important aspect is that if the window is in a position that results in small survivor sets, then the window moves by only a
3 In the original description of the DOF algorithms, we proposed that as soon as the remaining area to the right is smaller than the window, the window is reset to the left end. However, we subsequently found that better worst-case copying cost is achieved with the current description. Additionally, a full sweep is required for certain pointer filtering techniques (see below, Section 3.2.2).
Figure 3.9. FC-DOF: Deferred older-first collection.
small amount from one collection to the next. If it continues to move slowly, then it is likely to remain for a long time in the same region, corresponding to the same age of objects, and it continues to enjoy small survivor sets (Collections 4–8). The outcome is that considerable progress is made in allocation while but a small amount is copied: µ is low. The question then is how long the window can remain in such a good position, and, once it has left it, how costly is the return, which involves finishing the rightward sweep, resetting to the left end, and sweeping over older objects back to the “sweet spot.” A deferred younger-first (DYF) collector similarly sweeps the heap, but leftwards (Figure 3.11). As soon as the remaining area to the left is smaller than the window, the window is reset to the right end. The FC-DYF collector employs a window of constant size. Despite the apparent symmetry of definition, DYF does not enjoy the window-motion benefit of DOF: the window moves leftward by one window size even when there are no survivors. We shall find that for this reason DYF never performs well.
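The window motion just described can be sketched as a small simulation. The heap is modeled as an age-ordered list with one entry per object, and `survival` is a hypothetical tracing oracle; both are assumptions for illustration.

```python
# One rightward FC-DOF sweep over an age-ordered heap (oldest first).
# The window of size w sits immediately to the right of the previous
# collection's survivors; when less than w remains, the tail of
# youngest objects is collected, completing the sweep.
def dof_sweep(heap, w, survival):
    pos = 0        # left edge of the next collection window
    copied = 0     # total survivor volume (the "mark" in mark/cons)
    while pos < len(heap):
        window = heap[pos:pos + w]
        survivors = survival(window)
        copied += len(survivors)
        heap[pos:pos + w] = survivors  # garbage vanishes; order kept
        pos += len(survivors)          # window slides past survivors only
    return heap, copied
```

Because `pos` advances only by the survivor count, a window positioned over short-lived objects barely moves, which is precisely the low-µ regime illustrated in Figure 3.10.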
Figure 3.10. Deferred older-first collection, window motion example.
3.1.2 A renewal-age scheme The collector proposed recently by Clinger and Hansen [Clinger and Hansen, 1997] and its generalization, the “oldest-first” collector [Stefanović et al., 1998a], are also age-based, but according to renewal age, and older objects form the collected region. Similar ideas for circular management of heaps can be traced to earlier literature [Baker, 1978; Bekkers et al., 1986; Lang and Dupont, 1987]. A renewal-oldest-first collector always chooses a leftmost subsequence of the sequence of heap objects—but the survivors are placed to the right of the uncollected remainder (Figure 3.12). Therefore, even though the collection window remains at the left, “old” end, the collector does not process the survivors of the collection again, until
Figure 3.11. FC-DYF: Deferred younger-first collection.
they have been pushed back to the left by new allocation after several succeeding collections. Note, however, the conspicuous crossover of the logical paths of survivor and remainder objects, which results in irreversible mixing of true object ages in the heap, even in compacting collection.
Figure 3.12. FC-ROF: Renewal-oldest-first collection.
3.1.3 Towards optimal schemes There is a final category of schemes, clearly unrealistic for actual implementations, that we consider. Recall that a deferred older-first collector collects a middle subsequence of
age-ordered heap objects. Contemplate for a moment a generalization: let us choose any subsequence of a given size, not necessarily starting from a known point, but instead whichever subsequence yields the lowest mark/cons ratio on this collection. Thus we arrive at the locally-optimal collector scheme FC-OPT.4 Is there a globally optimal scheme? The global optimum need not coincide with the aggregate outcome of locally optimal decisions. The overall copying cost can be reduced by selecting the amount of garbage released in such a way that the next collection comes at a particularly propitious time, namely when there are few live objects. However, the consideration of globally optimal schemes is beyond the scope of this investigation. Instead, we shall only use the results of the locally-optimal collector schemes as an upper bound on the minimum copying cost, for comparisons with realistic schemes.
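Under the same modeling assumptions as before (an age-ordered list with one entry per object, and a hypothetical tracing oracle `survival`), the locally-optimal FC-OPT choice is an exhaustive search over window positions; with a fixed window size, minimizing survivors minimizes the mark/cons ratio of the collection.

```python
# FC-OPT sketch: pick the window of w consecutive objects whose
# collection would copy the least.
def best_window(heap, w, survival):
    positions = range(len(heap) - w + 1)
    return min(positions, key=lambda i: len(survival(heap[i:i + w])))
```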
3.1.4 Collection failure and recovery It is possible, especially for an FC collector with a small setting for |C|, to find no garbage in the collected region. If that happens, we let the FC collector fail for the purposes of this study. An implementation could increase the heap size temporarily, or retry collection on another region. A moving-window scheme can retry collection at the next window position. Alternatively, the whole heap can be collected. Note that GYF schemes by design occasionally consider the whole heap, hence they enjoy an advantage over the FC schemes as simulated here, with the disadvantage that their maximum pause times are no lower than with non-generational collection.
3.1.5 Garbage cycles A cycle of garbage objects can be so large that it cannot fit in the collected region. Even if the cycle is smaller than the size of the collected region, it can consist of objects of sufficiently
4 Similarly, we could explore the limits of true oldest-first and true youngest-first collection: instead of fixing the amount to be collected, let the collector choose that amount which results in the lowest mark/cons ratio µ for the current collection, but we leave this exploration for future work.
diverse age that it is never entirely within the collected region.5 In either case, the cycle cannot be reclaimed. Traditional generational collection, at the risk of long pause times, finds such cycles by collecting the whole heap occasionally. Since the presence of such cycles is liable to lead to poor performance and ultimately failure, the same remedy, suggested in the preceding section, applies. Alternatively, the purely age-based scheme can be modified, with provisions similar to the train algorithm for the mature object space, which tackles the problem of cycles while maintaining incrementality [Hudson and Moss, 1992; Seligmann and Grarup, 1995]. Note that the results presented here are based on a simulator and a prototype implementation which do not implement any mechanism to enforce reclamation of cycles.
3.1.6 Locality Our study is based on extensive simulations, in order to explore a large number of collector configurations, and indeed a number of altogether different schemes. This focuses attention on the primary costs, namely copying and pointer maintenance. However, we cannot measure certain other costs, without integrating our implementation with a live object system. The chief of these costs is the effect on locality [Zorn, 1991; Reinhold, 1993; Gonçalves, 1995; Diwan et al., 1995; Appel and Shao, 1996]. The schemes that we found to perform well with respect to copying cost also share the property that they do not change the relative order of objects; think of them as compacting. Therefore, they do not adversely affect the locality of access within the mutator. Unlike traditional generational collection, which repeatedly collects from the same memory region of young objects and occasionally the whole heap, some of the proposed schemes visit different parts of the heap more regularly. Although it is tempting to predict the memory hierarchy behavior of these schemes by observing the patterns of collected region motion, the complexity of interaction of allocation, copying and pointer maintenance
5 This scenario is made unlikely by the fact that an unreclaimed cycle of garbage objects migrates to and concentrates in the oldest end of the heap. There its adverse effect on collector performance is indistinguishable from that of truly live but permanent objects (Section 5.4.3).
accesses with the memory hierarchy requires that the behavior should be measured in a fully implemented system instead.
3.2 Implementation In preceding sections, we outlined several age-based garbage collection algorithms that we use to explore reducing copying cost beyond generational youngest-first collectors, by prescribing which portions of the heap should constitute the collected region. The following chapter will show that the promise of the DOF collector is actually delivered, but now we consider how to implement in practice a collector with a collected region that ranges outside the space of youngest objects. If we look at all the age-based schemes together, we find two requirements for implementation. First, it must be possible to identify any set of objects according to their age, and use it as the collected region. Second, it must be possible to enumerate pointers into the collected set from outside, in order to collect it correctly.
3.2.1 Blocks The first requirement can be fulfilled efficiently by dividing the heap into blocks and linking blocks into an age-ordered list (Figure 3.13). The collected region must consist of a number of blocks. This restriction on the freedom to choose the collected region is acceptable: constant overheads of collection will surely render too expensive any scheme that chooses extremely small collected regions. Therefore, imposing a block as the minimum size of the collected region, as well as the measure of granularity of window positioning, must be appropriate for some block size.
Figure 3.13. Heap organized as an age-ordered list of blocks.
Hence, the heap address space is divided into blocks. Each block consists of 2^b bytes, and is aligned on a 2^b-byte boundary. Assume for the time being that no object is larger than a block. Each object is entirely contained within one block. The block table is an auxiliary data structure, an array that lists all the blocks available in the address space and records the status of each (free, or in use). Additional information for blocks in use is accessed indirectly, to avoid the space overhead for unused blocks. Given the address of a heap object, or a field within an object, a constant b-bit-shift operation calculates an index into the block table corresponding to the block in which the object lies. Blocks in use are doubly linked in age order, so that age-based collectors can easily follow their collection policies. Each block records the next free address within it, so that objects can be added to it: by the mutator in allocation, or by the garbage collection in survivor copying. During collection, each block records whether it is in the collected region or not. During the Cheney scan phase of garbage collection, a block also records the next address to be scanned. In generational schemes, the block additionally records the number of the generation to which it belongs. The stipulation that whole blocks are subject to collection raises concern about possible fragmentation: if the survivors from an entire collected region must be placed in blocks themselves, then we can expect, on average, half a block of wasted space. The situation is easily remedied for the DOF collector, if the collector promotes survivors into any available space in the adjacent block to the left (youngest among uncollected blocks older than the collected region), as shown in Figure 3.14. Because of the rightward sweep of the collected region, any incomplete block left at the end of one collection is filled in subsequent collections, and fragmentation is eliminated.
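The address-to-block mapping described above reduces to a single shift; a sketch, where b = 16 (64 KB blocks) is an assumed example value:

```python
B_BITS = 16  # assumed example: 2**16 = 64 KB blocks

def block_index(addr, b=B_BITS):
    """Map an object or field address to its block-table index.
    Works because blocks are 2**b bytes and 2**b-aligned."""
    return addr >> b
```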
Objects larger than one block are allocated in a special area, the large object space, but each large object is logically assigned to a block. The collector never physically copies large objects, but rather logically moves them from a particular block to another by manipulating
Figure 3.14. Filling preceding block to avoid fragmentation.
efficient indexing data structures. The implementation of large object space can be adapted from the one in the GC Toolkit [Hudson et al., 1991]. The handling of objects in large object space is more expensive than that of objects in ordinary blocks [Hicks et al., 1998]; an examination of the size distributions of objects (Section 4.2.3) shows, however, that large objects are a rare occurrence for our benchmarks.
3.2.2 Remembered sets The second requirement is to be able to find pointers into the collected region, no matter which blocks, or how many blocks, are in the region. The most general solution is to track the set of pointers into each block (the remembered set), and then compute the union of the sets for all blocks in the collected region. In contrast, note that a garbage collection algorithm that defines fixed boundaries for the collected region can safely record just the pointers that cross that boundary. For instance, a generational collector maintains one remembered set per generation, no matter how large the generation is. The need to maintain remembered sets for each block can cause significantly larger space overhead, as well as time overhead for recording and processing the pointers. The generational collector does not even maintain truthful remembered sets for each generation: by design, a collection of generation n always collects all younger generations as well,
therefore it is only necessary to know the pointers into generation n from older generations.6 As we shall see in Section 6.1.3, this structure greatly reduces the number of pointers that must be tracked. Therefore, while the design using per-block remembered sets achieves the desired functionality for the fully-general age-based schemes, we must also consider the implications of the particular age-based scheme on the sets of pointer stores that the system can safely ignore. Note that the majority of stored pointers have the source and target within the same block, as we shall see in Section 6.2. Since a block is always entirely within the collected region, or entirely outside it, such local pointers do not need to be recorded in the remembered sets. This optimization applies to all age-based schemes. It eliminates a majority of pointer stores for reasonable block sizes at very low cost.
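The block-local filter reduces to one shift and one comparison of the high-order address bits; a sketch, again with an assumed b = 16:

```python
def must_remember(src, dst, b=16):
    """A pointer store src -> dst whose source and target lie in the
    same 2**b-byte block can never cross the collected-region boundary,
    so it need not enter any remembered set."""
    return (src >> b) != (dst >> b)
```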
Figure 3.15. Directional filtering of pointer stores at run-time (DOF).
A more aggressive elimination criterion is possible. Suppose that the stored pointer is p → q, and that the object p is subject to collection before object q. Then the pointer will be invalidated before q is collected, either because p is moved if live, or because p is dead. Hence the pointer p → q should not be recorded in the remembered set of q. Thus the general rule is to filter out those stores for which the source will be collected before the target, or at the same time as the target. Some age-based schemes have strict policies for the positioning of the collection window in the heap, and it is therefore possible to establish whether the source or the target of a pointer store will be collected first. Among them is DOF, with the full, cyclic,
6 Note as a special case that in the two-generation collector the (single) remembered set is always empty following a collection, thus the write barrier at collection time can be elided.
sweep of the heap. The directional filtering for the DOF scheme is shown in Figure 3.15 for pointer stores in the mutator.
Figure 3.16. Directional filtering of pointer stores at collection time (DOF).
Distinguishing the region that will be collected next from the regions older and younger than it, there are 12 cases of pointer stores at run-time. Half of these are eliminated by observing that the source will be collected before the target; additionally, the case with both source and target within the region of the following collection is eliminated. In Figure 3.16 we show the filtering scheme for pointer stores during collection. The dashed lines indicate the pointers as they existed before the collection, and the solid lines indicate the pointers established during the collection, as survivor objects are copied into to-space. Here the filter eliminates 3 out of 6 cases. The actual benefits of directional filtering depend on the distribution of different pointer store directions, and we shall return to this question in Section 6.2. In the design of remembered sets for the newly proposed collectors, there is an additional consideration that did not arise for generational collectors. Namely, when a collection copies objects from the collected set C into the survivors set S, then any pointers from C into the uncollected remainder U, recorded in the remembered sets of the blocks of U, must be discarded prior to collection. Otherwise, the remembered sets would contain entries for source addresses in the from-space of the collection, which, following the collection, are invalid. The
collection then reestablishes remembered set entries for the surviving pointers from S into U . Therefore, the data structure for remembered sets must support deletion efficiently. (In generational collection, the pointers from C to U are younger-to-older, hence they are not recorded in remembered sets to begin with.)
3.2.3 Write barrier We now consider how to construct remembered sets. In between collections, new pointers come into being by means of initialization of pointer-typed fields of newly allocated objects, and by updates of these fields in older objects. Pointer updates are tracked using a write-barrier mechanism. As for pointer initialization, either the initializing stores are tracked using the same write-barrier mechanism, as is, in effect, done in the Smalltalk system, or the newly allocated area is scanned for pointers at garbage collection time. We consider two write-barrier mechanisms. The first write-barrier mechanism immediately inserts stores into remembered sets. When a store occurs in the mutator, the following filtering actions occur. First, it is established that the store is of a pointer as opposed to a non-pointer value. In dynamically typed languages such as Smalltalk, this check is at run-time. In statically typed languages, such as Java and Modula-3, the write barrier code is only emitted for pointer stores, so this check disappears. Second, the write barrier verifies that the stored pointer value (the target) is not nil, since such stores do not create new pointers. Third, the write barrier verifies that the pointer crosses block boundaries. This check involves the comparison for equality of the high-order bits (full address length less b) of the pointer source and target. Fourth, if the particular age-based scheme permits further elimination of pointers according to the age of the source and target, then the write barrier may eliminate some additional stores. Fifth and last, the pointer source is inserted into the remembered set of the block corresponding to the target. The second write-barrier mechanism is a hybrid of card marking and remembered sets [Hosking and Hudson, 1993]. The heap is logically divided into cards of size 2^c bytes, where
c < b. Experience with generational collectors suggests that cards are typically much smaller than blocks: e.g., while blocks of 64 KB were used in a Smalltalk implementation, the most efficient card sizes were in the range 256 bytes – 1 KB [Hosking et al., 1992; Hosking and Hudson, 1993]. A card table is an array with an entry for each card of the heap. At the end of a collection, all entries of the card table are clean. In between collections, at every pointer update (the store of a pointer p → q) the write barrier marks the entry for the card containing p as dirty. At the beginning of each collection, the collector scans the card table to find dirtied cards. For each dirtied card table entry, it then scans the corresponding card in memory, i.e., it examines all pointer-valued fields in all the objects in the card, and updates appropriate remembered sets—in our example, it is the remembered set of the block containing the address q. When cards are examined and summarized into remembered sets, the same filtering criteria apply as discussed above for the case of immediate insertion into remembered sets. One benefit of card marking is that duplicate entries are immediately eliminated. The price is paid instead in scanning the cards, since the location of the updated pointer source is recorded only imprecisely. (When scanning a card, the collector may examine many more pointers, depending on the size of the card, in addition to the one (or ones) that caused the card table entry to be dirtied.) We shall see, however, that there are few duplicates within a block (Section 6.1.3), thus card marking is not likely to be profitable.
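A sketch of the hybrid mechanism; the card size 2^c = 512 bytes is an assumed example within the cited 256 byte – 1 KB range, and the class and method names are illustrative.

```python
C_BITS = 9  # assumed example: 2**9 = 512-byte cards

class CardTable:
    def __init__(self, heap_bytes, c=C_BITS):
        self.c = c
        self.dirty = bytearray(heap_bytes >> c)  # one entry per card

    def barrier(self, src_addr):
        # For a store p -> q, mark the card containing the *source* p;
        # the card is scanned for pointers at the next collection.
        self.dirty[src_addr >> self.c] = 1

    def dirty_cards(self):
        # At collection time: which cards must be scanned and summarized
        # into remembered sets.
        return [i for i, d in enumerate(self.dirty) if d]
```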
3.2.4 A write barrier design for large address spaces Returning to the write-barrier mechanism without card marking, we observe that it can be implemented especially efficiently if a large virtual address space is available. The goal is to make the sequence of write-barrier instructions short for the common case, namely that of stores that the barrier does not record. A large address space allows a heap organization that avoids the need to index into a block table and is able to make the pointer update filtering decisions solely from the two addresses of pointer source and target, which are already available in hardware registers for the store itself.
Figure 3.17. DOF heap organization in a large address space.
We illustrate the heap organization in Figure 3.17.7 The address space is divided into zones, each zone consisting of a large number of blocks. At any moment exactly two adjacent zones are in use for the heap. Initially, all allocation is into zone 1, at the high end of the address space, whereas all copying by the collector is into the next lower zone 2. Both allocation and copying proceed from higher to lower addresses within a zone. As soon as the amount allocated in zone 1 becomes equal to the specified heap size (expressed as a number of blocks), garbage collection begins in the DOF manner, with the collection window sliding over zone 1, from high addresses to low. The survivors of collection are copied over into zone 2. Subsequent collections are triggered when the total amount in the heap (i.e., in zones 1 and 2) again equals the specified heap size. This process continues until either: (a) the window of collection catches up with allocation in zone 1, or (b) the allocation in zone 1 reaches the limit, i.e., runs into zone 2. Case (a) corresponds to the natural end of the window sweep of the DOF collector (Figure 3.9). Case (b) is coerced into case (a) by enlarging the window to encompass the entire remainder of zone 1. Thus in both cases a full sweep of all objects allocated in zone 1 is completed. At this point, which corresponds to the resetting of the window in Figure 3.9, zone
7 Note that the address-ordered layout of Figure 3.17 is logically equivalent to the age-ordered layout of Figure 3.9.
1 is abandoned, and its role is assumed by zone 2: allocation proceeds in zone 2, from the last copied object on towards lower addresses. Zone 3 is introduced for the survivors of subsequent collections.8 In this fashion, allocation and collection can proceed until all zones have been exhausted. Under the assumption that we have a large virtual address space, most programs will not exhaust all available zones; if that does occur, however, the remedy is to move the data back into the high addresses, at the price of one non-generational collection (or relocation). Choosing a good size for the zones is challenging. On the one hand, a zone should accommodate the amount allocated during one sweep of the collection window: if it does not, then the premature end of sweep will be forced (case (b) above), which will result in a longer-than-normal pause time and disturb the desirable operation regime of the DOF collector. On the other hand, when the DOF collector is in its most desirable regime, Figure 3.10, then the window moves slowly, and the amount allocated during a sweep is large. Therefore, since both the number of zones, and the size of a zone must be large, a large virtual address space is truly necessary for this organization to be effective. The directional filtering at run-time, shown in the age-order layout in Figure 3.15, becomes very simple in this address layout, as shown in Figure 3.18: if the address of the pointer source is higher than the address of the target, the pointer store need not be recorded. Thus the directional filter can be implemented in a single comparison instruction. Note that the store of a null pointer, normally represented as a machine 0, is also caught by this filter, hence a separate prior check is not needed. The block-local filter remains as described above, but it can be coalesced with the directional check: one of the values compared in the directional check is first rounded to the nearest block boundary.
In all, the filters require only a handful of instructions (a bit-masking operation, a comparison, and a conditional branch) and involve no memory accesses (Figure 3.20, p. 48, on the left).
8. The operating system must cooperate by allowing the garbage collector to acquire and explicitly dispose of segments of the address space.
Figure 3.18. Directional filtering for DOF in a large address space.
Thus, the majority of pointer stores, those that do not need to be recorded, are handled quickly. We now consider the remainder. We have observed (as we report in Section 6.1.3) that there are very few duplicate pointer stores for the remembered set of a block in general, and, in particular, between two collections. Hence card marking is not advisable. However, if we wish to keep the cost of the write barrier predictable and low, we should not immediately insert the pointer store, that is, the address of the pointer source, into the remembered set of the pointer target. Instead we can write the same address into a sequential store buffer [Hudson et al., 1991], and subsequently flush the store buffer into the remembered set. There will be a sequential store buffer for each block in current use.

At collection time, we form the set of remembered pointers into the collected region by considering the “union” of the remembered sets of all blocks in the region. We chose to maintain the remembered set of a block between collections as a sequential store buffer; now, rather than construct the remembered set explicitly and then compute the union, we simply merge the store lists for the blocks involved, and eliminate pointers that go between two blocks both in the collected region.9 Recall that when remembered sets are explicitly maintained, it is necessary to purge the sets of uncollected blocks of entries for pointers from collected blocks, prior to collection. With the address layout proposed here, together with directional filtering, that is no longer necessary,
9. Note that each store buffer is likely to be a nearly sorted list (with decreasing addresses), because most pointer stores are into the newly allocated area, and thus the sequence of pointer source addresses is correlated with the direction of allocation. This circumstance may influence the choice of the merge algorithm. Also note that an actual merge operation is not absolutely required: it is possible, and it may be less expensive, to recognize duplicate entries on the fly.
since entries cannot become stale: the collected region lies within the highest valid addresses, and no pointers from higher, now invalid, addresses were recorded in the remembered sets for the collected region, since such pointers would have been higher-to-lower address pointers.

The directional filtering described here is cheaper than the traditional generational test [Hosking et al., 1992], which requires indexing into a block table to determine the generations of the pointer source block and the pointer target block, at the cost of two memory accesses. For fairness, however, under the assumption of a large address space, the generational test can also be simplified. We divide the address space into a number of very large blocks, and assign each generation to one block. We place the nursery at the highest addresses, and older generations at progressively lower addresses. With this heap layout, an address comparison acts as a generational test to eliminate younger-to-older stores, and simultaneously handles the null pointer store; a block-local filter is needed to eliminate older-to-younger stores within a generation.

Thus the write barrier cost components—the cost incurred for each pointer store and the additional cost incurred for each pointer store that must be recorded—will be comparable in implementations of the collectors studied. Similarly, the costs incurred at the time of processing each remembered set entry will be comparable. Therefore, in this study we focus on measuring the number of times different collectors invoke each of these operations. We find significant differences between the collectors. Chapter 6 describes these results in detail.
3.2.5 An estimation of copying and pointer-maintenance costs

In subsequent chapters we shall obtain accurate operation counts for the copying and pointer-maintenance actions of garbage collection. If in addition to the number of operations, we know the cost of each operation, we can calculate total collection costs, and by comparing with an actual implementation, validate the cost model. We performed preliminary experiments with the proposed write barrier on a 292 MHz Alpha 21164 processor. The implementation is presented in Figure 3.19 as source code and in
Figure 3.20 as Alpha assembly code compiled using gcc -O2. The null-pointer, block-local, and directional filtering are accomplished using a single test. The estimated overhead of the barrier if the test eliminates the store is 2 cycles.10 Stores that are not eliminated are recorded, without duplicate removal, in a sequential store list corresponding to the target block. A sequential store list holds a maximum of 15 entries. When a store list fills up, a new one is allocated from a large linear arena; thus each block owns a linked list of sequential store lists. The measured overhead of the barrier, including the writing into a sequential store list and management of sequential store lists, is 11 cycles per store.

Figures 3.21 and 3.22 present the code executed at garbage collection time to scan the list of sequential store lists of a block. The collector retrieves each entry, interprets it as a pointer source, and then reads the current target value of this pointer. (The target may have changed since the time when this store was recorded, so the collector verifies that the current target is within the collected region.) Finally, the collector retrieves the header word of the target object. If the header word is in fact a forwarding pointer, then the object need not, and must not, be copied again. If the header word is a true header, then this is the first time the collector has encountered the object in this collection, and therefore the collector copies the object into to-space. (A forwarding pointer is an object address and its least significant bit is 0, because objects are word-aligned, while addresses are expressed in bytes. We can define a true header to have 1 in its least significant bit to make headers and forwarding pointers distinguishable.) The cost of this entire sequence, sans copying, is 13 cycles per processed entry.
Thus, assuming that filtering succeeds in eliminating 95% of pointer stores on average, and assuming that all recorded stores are eventually processed, the average cost per pointer store is estimated at 0.95 × 2 + 0.05 × (11 + 13) ≈ 3.1 cycles.
10. The measured overhead on the four-way issue Alpha 21164 processor ranges from 1 to 3 cycles for the three instructions in the test, depending on the surrounding code; the average and median of 2 is used in the absence of any assumptions about the surrounding mutator code.
In addition to the break-down of pointer-maintenance costs, we performed an experiment to measure the components of copying cost in detail. Whereas in the aggregate, the copying cost is roughly proportional to the amount copied, here we examine the contribution of individual operations to that cost. Our model assumes a copying collector, in which a first phase of root processing (processing of global roots and remembered sets) is followed by a Cheney scan of to-space. For each object copied, the collector pays the following three costs: the cost of processing the pointer that keeps the object reachable, the overhead of installing a forwarding pointer and preparation for copying the words of the object, and the overhead of preparation for scanning the pointer fields of the object. Together these three are the per-object cost αobj. For each word copied, there is a cost αword. Finally, in the scanning phase there is an additional cost for each scanned pointer that does not cause an object to be copied (whereas scanning a pointer that does cause an object to be copied is subsumed in αobj). There are three circumstances when this happens: the pointer is null; the pointer points outside the collected region; or the pointer points to an object that has already been copied. In our large address space model, these three circumstances are identified as follows: category null/lower encompasses null pointers and pointers to addresses below the collected region (addresses above the collected region are not possible at collection time), and category duplicate encompasses pointers into the collected region to objects that have already been copied. The per-pointer-field costs for these two categories are αnull/lower and αduplicate, respectively. The full copying cost is then

    c = αobj · nobj + αword · nword + αnull/lower · nnull/lower + αduplicate · nduplicate
We measured the operation costs α in a synthetic copying experiment, in which heaps of different size, and with different object sizes and pointer contents, were constructed and copied by a collector. We used an object format applicable to a typed language such as Java: an object header word points to a class record, which points to class-specialized methods to copy and scan the objects. We estimate the following values for the operation costs on the Alpha 21164, for operation within the primary cache: αobj = 65 cycles, αword = 2.5 cycles, αnull/lower = 15 cycles, and αduplicate = 17 cycles. These estimates could be used in conjunction
with an instrumented garbage collector to validate the detailed model of copying cost. One must be cautious with these numbers for processor cycles spent on particular operations. They were obtained in a tight loop, and may not reflect actual costs accurately. On the one hand, the cost of memory accesses is discounted, since block table entries are cache-resident. On the other hand, cycle counts may be overestimated, since the processor has no opportunity to interleave the write barrier instructions with other mutator instructions.
3.3 Summary

The proposed class of age-based collectors represents a logical generalization of the concept of generational collection. Age is the simplest criterion for choosing the collection region, and there are established techniques for encoding object age with sufficient accuracy and reasonable overhead; we have also suggested new techniques applicable to large address spaces that promise to reduce overheads further even in the more complex context of age-based collection.

The collectors as described are all of the stop-the-world variety. No explicit provisions for incrementality are suggested. However, we believe that collectors with a small fixed-size collected region can provide sufficiently low maximum pause times that they can be considered truly incremental (even if they have to do so at the price of increased total time).

We described the collectors as operating within a unified heap, in which all objects lie. Hence, the same basic policy is applied to newly allocated objects as to older objects, and even long-lived semi-permanent objects. Surely the motivation that has led researchers in the past to consider dividing the heap into ephemeral and mature spaces remains valid, and different age-based schemes could be applied within each division.

We described simple oldest-first, youngest-first, and moving window schemes (Figure 3.23 summarizes the employed abbreviations). The space of other possible designs is vast, and
#define BLOCK_SHIFT 16
#define BLOCK_BYTES (1 << BLOCK_SHIFT)
#define BLOCK_MASK  (~(BLOCK_BYTES - 1))
#define BLOCK_ADDR(addr) (addr & BLOCK_MASK)
#define SB_SHIFT 7
#define SB_BYTES (1 << SB_SHIFT)
#define SB_WORDS (1 << (SB_SHIFT-3))
#define SB_MASKLOW (SB_BYTES - 1)
typedef struct bte {
    long int **store_buffer_pointer;  /* where to insert the next entry */
    ... other block information ...
} BTE;
#define BTE_SHIFT (BLOCK_SHIFT - log2(sizeof(BTE)))

/* the store: */
*reg_source = reg_target;
/* the write barrier: */
if (reg_source < BLOCK_ADDR(reg_target)) {
    /* record the pointer: */
    btep = block_table_offset + (BLOCK_ADDR(reg_target) >> BTE_SHIFT);
    p = btep->store_buffer_pointer;
    p--;
    *p = reg_source;
    if ((p & SB_MASKLOW) == 0) {
        /* allocate new store list, set p to beginning of list: */
        previous_ssb_arena_pointer = ssb_arena_pointer;
        ssb_arena_pointer -= SB_WORDS;
        if (ssb_arena_pointer < ssb_arena_low_limit)
            ... allocate new arena ...
        *ssb_arena_pointer = previous_ssb_arena_pointer;
        p = ssb_arena_pointer;
        btep->store_buffer_pointer = p;
    }
}
Figure 3.19. Write barrier code.
register usage:
  $0  is reg_target
  $2  is BLOCK_ADDR(reg_target); $2 is also p; $2 is also previous_ssb_arena_pointer
  $3  is btep
  $5  is block_table_offset
  $10 is ssb_arena_pointer
  $11 is reg_source
  $13 is ssb_arena_low_limit

the store and the write barrier:
      zapnot $0,252,$2    ;; clear low 16 bits
      cmplt  $11,$2,$1
      stq    $0,0($11)
      bne    $1,$42
$43:  continuation of mutator

$42: record the pointer:
      sra    $2,13,$1     ;; BTE_SHIFT
      addq   $5,$1,$3
      ldq    $2,0($3)
      subq   $2,8,$2
      and    $2,127,$1
      stq    $11,0($2)
      bne    $1,$46
      allocate new store list:
      bis    $10,$10,$2
      subq   $10,128,$10
      cmple  $13,$10,$1
      ...conditional branch to allocate new arena (not shown)...
      stq    $2,0($10)
      bis    $10,$10,$2
$46:  stq    $2,0($3)
      br     $31,$43
Figure 3.20. Write barrier assembly code.
may offer potential for better performance; for example, hybrid schemes can be envisaged to combine generational collectors’ reserve space management with sweeping windows.
#define SB_SHIFT 7
#define SB_BYTES (1 << SB_SHIFT)
#define SB_WORDS (1 << (SB_SHIFT-3))
#define SB_MASKLOW (SB_BYTES - 1)
#define SB_CHUNK_ENTRIES (SB_WORDS-1)
#define SB_LINKMASK_VALUE (SB_CHUNK_ENTRIES*8)
#define FWD_POINTER_MASK 1
#define FWD_POINTER_TAG 0

p = actual_block_table[1].store_buffer_pointer;
while (p != 0) {
    while ((p & SB_MASKLOW) != SB_LINKMASK_VALUE) {
        reg_source = *p;
        reg_target = *reg_source;
        if (reg_target > reg_collection_window_low) {
            /* target is in the collected region */
            if ((*reg_target & FWD_POINTER_MASK) != FWD_POINTER_TAG) {
                /* the word in the object header is not a forwarding pointer,
                   because it does have the low bit set */
                reg_target_new = ...new address of target object after copying...;
                *reg_source = reg_target_new;
            }
        }
        p++;
    }
    p = *p;
}
Figure 3.21. Remembered set processing code.
register usage:
  $2  is p
  $11 is reg_source
  $10 is reg_target
  $14 is reg_collection_window_low

      lda    $5,actual_block_table
$60:  ldq    $2,8($5)
      beq    $2,$59
      br     $31,$73
$66:  ldq    $11,0($2)    ;; reg_source = *p
      ldq    $10,0($11)
      cmpule $10,$14,$1   ;; target check
      bne    $1,$67
      ldl    $1,0($10)    ;; load header word
      blbc   $1,$67       ;; branch if low bit is 0
      ... forward pointer (not shown); new address is in $0 ...
      stq    $0,0($11)
$67:  addq   $2,8,$2      ;; p++
$73:  and    $2,127,$1
      cmpeq  $1,120,$1
      beq    $1,$66
      ldq    $2,0($2)     ;; p = *p
      bne    $2,$73
$59:
Figure 3.22. Remembered set processing assembly code.
NONGEN: non-generational collector
TYF: true youngest-first collector
TOF: true oldest-first collector
ROF: oldest-first collector with the renewal-age modification
DOF: old-to-young sweeping collector (deferred-older-first)
DYF: young-to-old sweeping collector (deferred-younger-first)
OPT: locally-optimal collector
FC: prefix for age-based schemes with a collection region of fixed size
GYF: generational collectors in general
2GYF, 3GYF: generational collectors with generations of fixed size, with two and three generations, respectively
NGYF: generational collector with two variable-sized generations
Figure 3.23. Age-based scheme name abbreviations.
CHAPTER 4

EVALUATION: SETTING
The evaluation of garbage-collection algorithms requires us to pose a large number of questions in order to expose the strengths and weaknesses of algorithms in different circumstances. We developed a number of different experimental methods to answer them, and we present these methods in detail here and in the following two chapters; the discussion of each method is immediately followed by a presentation of the experimental results.

The three main approaches to a comparative evaluation of garbage collector performance, offering different perspectives, are analytical modelling, trace-based simulation, and prototype implementation.

Analytical modelling consists of constructing a mathematical description of the operation of the garbage collection algorithm, and applying it to a mathematical description of the load, i.e., the allocation and lifetime properties of objects and pointers. When successful, analytical modelling provides succinct models, and has useful predictive power with respect to the choice of collector configuration. Unfortunately, the premise of having reliable, realistic mathematical descriptions of lifetime properties of objects, and especially properties of pointers, is not easily met with current knowledge.

Trace-based simulation consists of constructing a software simulation of the operation of the garbage collection algorithm, and applying it to an empirically obtained trace of the load, i.e., the sequence of allocations and other object-related operations of a real program. Simulation isolates the memory management tasks from other (computational) tasks that a real program performs, and allows a variety of configurations to be tried. It is even possible to try collection algorithms that would be excluded in practice, such as those using backtracking, or oracles, in their decisions. The analysis of simulation results provides explanatory and predictive power, but is not as succinct as analytical modelling. It is usually not difficult to extract traces when a working garbage collector is given; thus one can reasonably obtain traces from more than one object system and analyze them within the same simulation framework.

Prototype implementation consists of constructing complete garbage collection algorithms and implanting them in a working object-based system, in order to measure how each performs in operation. Implementation has the advantage that it accounts for the full cost of memory management; on the other hand, it is difficult to extract the contribution of each cost factor. Furthermore, that cost is necessarily a reflection of a number of implementation decisions. From a practical standpoint, it is usually a difficult task to retrofit (a number of) garbage collectors onto a working system.

Our study primarily uses trace-based simulation using specially built object-level simulators. A prototype implementation was also completed according to the design of Section 3.2, and integrated with a trace simulator, which allows us to gather many of the same performance statistics as in a working object system.

In the remainder of this chapter, we discuss our experimental setting: the design of the garbage collection simulators and the set of benchmark programs used. The following two chapters present the findings: Chapter 5 examines the copying cost of collection, and Chapter 6 examines non-copying costs of collection, viz., the cost of collector invocation and the pointer management costs.
4.1 Evaluation using program traces

The primary method used in this study is evaluation by trace-based simulation. The experiment proceeds in two phases. First, by instrumenting an object-based system and executing a program, a trace is produced. Second, the trace is used to direct the execution of different garbage collection algorithms and to derive statistical measures of the program.
4.1.1 Trace content

The information contained in the trace must permit accurate simulation of each garbage collection algorithm. Specifically, the trace must identify all objects that the program allocated in the heap, providing for each object its time of allocation, its time of demise, and its size.1 Furthermore, all stores of pointer values into the objects in the heap must be present in the trace. This information is sufficient for a garbage collector simulator to determine which objects are reachable in any chosen collection area at any chosen time. A real collector first identifies global roots by scanning the stack, and then computes the transitive closure of the points-to relation to find the reachable, hence live, objects. As will be described below, the simulated collector relies instead on accurate allocation and demise records: all objects for which an allocation record has been seen, but not a demise record, are live. In addition, the trace allows the collector to maintain an accurate image of the pointers among heap objects. In particular, the pointers from the uncollected region into the collected region are accurately known. Thus the closure of the points-to relation can be computed in the simulator to complete the set of collection survivors.
4.1.2 Trace format

The common format for the traces is as follows. Each trace is a text file, in which each line records an event. Three kinds of events are represented: object allocation (A-records), object demise (D-records), and field update (U-records). An allocation record has the form:

    A allocation-address size

where allocation-address is the hexadecimal byte-address of the starting word of the object, and size is the number of words in the object. A demise record has the form:
1. These are the properties meaningful in every object-based system. In some systems it may be worthwhile to report more detailed information, when it is defined in the language at hand, such as the type (or class) of the object, or the allocation point. However, since age-based garbage collection algorithms do not make use of such information, we shall assume that only the general properties, applicable to all object-based systems, are recorded.
    D allocation-address

where allocation-address is the hexadecimal byte-address of the starting word of the object. An update record has the form:
    U field-address value-address

where field-address is the hexadecimal byte-address of the field which is being updated, and value-address is the pointer value stored. (The special value -1 represents the store of a null pointer.)

The form of the traces reflects the assumption that an object is a contiguous sequence of fields, each field being an integer number of words. A pointer-valued field takes one word. A word is the smallest quantum of allocated space, and therefore all sizes and amounts are reported in words. In the systems studied (Smalltalk and Java 32-bit implementations), a word is four bytes long.

Another assumption implicit in the trace format is that the location of an object is immutable. This assumption was made for simplicity: rather than have a separate notion of object identity, the address (whether allocation address or a field address within the object) uniquely identifies the object. Without loss of generality, it may also be specified that allocation addresses of objects in the trace follow consecutively, as if the objects were allocated from an infinite allocation area starting at zero.
4.1.3 Trace generation

Having described the content of the trace, let us consider how to generate a trace. For this study, the method is to augment an existing object-based system and its garbage collector with code to produce the required trace records. The existing system is assumed to have the functionality to identify each object allocation, object demise, and pointer store. Every object-based system can identify object allocations. Every system with a garbage collector can also identify object demise. However, if we rely on the garbage collector existing in the system (often a state-of-the-art generational collector), we must ensure that the collections consider the entire heap, as in a non-generational collector,
to avoid the inaccuracies of excess retention (Section 5.3).2 Following each such full collection, the reachability of objects is accurately known; any objects that are now not reachable, but were reachable at the previous full collection, have died in the intervening period. If the time of demise must be known with accuracy, the period between full collections must be short. In fact, it is critical for the correctness of simulated collections to be able to determine reachability at every possible garbage collection instant (in simulation), i.e., following every object allocation (in simulation). Therefore, a full collection (in trace generation) must also follow every object allocation. As a result, the trace generation mechanism is very expensive: full collections are expensive, and many are needed. In fact, the cost is roughly proportional to the integral ∫₀ᵀ v(t) dt of the live data amount v(t) over the duration T of the program.3
Update records, the final component of the trace, are easily generated by instrumenting the write barrier in a system with a generational collector, since the collector must be informed of pointer stores. One must only ensure that in the existing system the write barrier is indeed invoked for each pointer store, that is, the system does not optimize away some pointer stores (e.g., initializing stores or null pointer stores), which it is allowed to do for operation of a generational collector, but not for trace generation. Let us consider some questions about traces generated in this manner. Any measurement disturbs the system measured, and generating object traces is no exception. If the traced program relies on timing properties, such as by querying a real-time clock, its behavior will be severely disturbed by the heavy overhead of tracing. We do not have such programs among our benchmarks. If an application is multi-threaded, as many interactive Java programs are, and
2. It is additionally assumed throughout that the collector is accurate (as opposed to conservative). It is certainly possible to obtain traces with a conservative collector, such as the one found in the Kaffe virtual machine for Java, but the utility of such imprecise traces is dubious: the garbage collection schemes in this study are meant to be accurate, not conservative.

3. Note that the underlying assumption is that the existing object-based system and its garbage collector are to be exploited with minimally intrusive modifications—as dictated by technical considerations when dealing with large and complex systems, and also dictated by legal and commercial considerations when systems are not available for inspection and modification in their entirety. If, on the other hand, a system can be redesigned for the express purpose of generating our traces, then better solutions are possible than repeated full collections, in the form of incremental computation of object reachability.
if garbage collection requires synchronization of all threads, then the concurrency behavior of the application will be changed under tracing. The observed order of execution will be a valid order, but not a likely, or the “intended” order. Finally, recall that in the common trace format the object addresses are immutable. If the tracing object-based system uses moving collection, it is necessary, but easy, to translate object addresses into the absolute form.
4.2 Benchmarks and languages

We first consider the object-based systems (“languages”) used in the study, and then the benchmark programs, detailing their general properties.
4.2.1 Languages

We used two object-based systems. The first, Smalltalk, is an older purely object-oriented language [Goldberg and Robson, 1983], which consists of three parts: the virtual machine, implemented in an earlier effort in the Object Systems Laboratory [Moss, 1987; Hudson et al., 1991; Hosking et al., 1992], the basic virtual image of Smalltalk-80 from ParcPlace/Digitalk, and the additional classes and methods of the benchmark programs. The virtual machine includes the language-independent garbage collector toolkit, also developed in the Object Systems Laboratory [Hudson et al., 1991]. The virtual machine provides the execution (by interpretation) of the bytecodes in the methods contained in the basic virtual image and the benchmark classes.

In the Smalltalk system, all of these classes and methods are present as objects in a benchmark virtual image, which is loaded at program start-up to build an initial heap state. When generating the trace, we distinguish between these objects and objects subsequently allocated as a result of program execution. The main results reported here reflect the program execution objects alone. This choice is justified by the observation that image objects could better be treated as static, instead of as part of the garbage-collected heap [Clinger and Hansen, 1997].
Another property of this implementation of Smalltalk is that method activation records (frames) are internally treated somewhat like objects [Moss, 1987]. However, frames are not treated as objects for the purposes of trace generation.

The second system used, Java,4 is a contemporary object-oriented language that has found much use in Internet-based downloadable applications. It consists of a virtual machine, and a set of basic and benchmark classes and methods, loaded on demand. The Java virtual machine modified to generate traces for this study is an experimental machine, called the ExactVM, developed by Sun Microsystems Laboratories, which is equipped with a garbage collector that satisfies all the requirements outlined above.

The Java virtual machine used has the option of running as an interpreter or as a just-in-time compiler. The same objects are allocated, and in the same order, in either case, and pointer updates, and, consequently, object demise points, also happen in the same order—as long as the compilation does not interleave the compiled code from successive bytecodes. Since any differences in behavior, if at all present, are incidental and not material to this study, they can be ignored. The interpreted mode of operation was then chosen for reasons of simplicity.5
4.2.2 Benchmarks

The selection of benchmark programs for evaluating garbage collector performance is not an easy task. There is no widely-accepted suite of benchmarks, unlike for general computer performance evaluation, where there are several (e.g., SPEC95). It was therefore necessary to collect and select useful benchmarks from various sources, and it was also necessary to develop the selection criteria.

4. Java is a trademark of Sun Microsystems.

5. To generate traces using the just-in-time compilation mode, it is necessary to modify the code-generation phase of the compiler so that it will produce appropriate code at object allocations and at pointer stores. Our experience with the Kaffe virtual machine shows this to be a relatively straightforward exercise if the compiler is cleanly written. It is, however, a futile exercise, because the performance advantage of compiled code is entirely annulled by the overhead of trace generation.
4.2.2.1 Collecting benchmarks

Collecting a large preliminary set of benchmarks posed distinct challenges in both languages under consideration. For Smalltalk, although the language is no longer in vogue, a number of public-domain programs (i.e., classes) implementing interesting applications exist and can be obtained on the Internet. Unfortunately, many of these programs depend on the presence of some vendor-specific classes, or are otherwise incompatible with the virtual image used in our Smalltalk-80 system. To the programs we could find, we added some that we have written ourselves and used previously in a study of Smalltalk garbage collection performance [Hosking et al., 1992]. We also translated two SPECfp95 programs from FORTRAN into Smalltalk.

That version incompatibilities should be encountered in Smalltalk, a language which in its present form dates back eighteen years, is perhaps not surprising. However, we found that significant incompatibilities exist even in Java, a much younger language. They are primarily caused by changes in the class library between versions 1.0 and 1.1, assumed by most applications, and version 1.2, which the tracing Java virtual machine uses.

A more significant difficulty in collecting Java benchmarks is the fact that the vast majority of publicly available Java code is in the form of Java applets, partial programs which execute under the control of a Web browser and depend on it for the user interface. A browser must contain a Java virtual machine, but that machine is not accessible for instrumentation. We can only use full-fledged Java applications, which run directly on a virtual machine. It is perhaps not a great loss, since the majority of observed applets appear to be graphical widgets and similar toy programs, which neither run long nor allocate much data. On the other hand, the resulting benchmark set is skewed in favor of language-processing tools (compilers and optimizers).
Additionally, we had access to the traces (although not the sources) of a preliminary set of Java programs considered for inclusion in the SPEC JVM98 suite.
4.2.2.2 Selecting benchmarks

Having collected a larger set of programs (circa 30 Smalltalk and 15 Java) we considered which objective criteria should decide their utility as benchmarks of garbage collector performance. The first criterion must be that the program exercises the memory management functions at all. For instance, some Java graphics applications are CPU-intensive, but allocate a negligible amount of data, and are of no use in this study. Even programs that do allocate an appreciable amount of data do not necessarily exercise the garbage collector. Many programs, in particular most of the SPEC JVM98 programs, display a monotone-increasing heap profile: they slowly allocate pieces of some large data structure from the start of execution to the end. Since the heap must be at least as large as the maximum amount of live data ever in it (in our cost model and simulation, the heap size is fixed), the program never fills the heap and the garbage collector never runs. Thus, loosely speaking, we need the criterion that the garbage collector runs at all. In fact, we need a stronger criterion. Consider that, over the execution of the program, the garbage collector will run at different moments depending on the garbage collection algorithm. The amount actually live depends, somewhat unpredictably, upon the precise moment of collection. (Perhaps the most obvious effect is at the end of program execution—since no copying cost is charged past the last collection.) It is thus necessary to reduce the “random” variation in measured collection performance that the “random” collection moments introduce, so as to expose underlying trends. It is therefore desirable to have a large number of collections to even out the “randomness”; in other words, it is desirable to have benchmarks that allocate a very large amount of data relative to their average live amount.
We can formalize this requirement by observing that the number of collections performed by a non-generational collector is approximately proportional to the ratio of the amount allocated to the average live amount (see below). We require this number to be more than 10 to admit a measurement. Furthermore, we admit a benchmark only if admissible measurement of non-generational collection is possible for a heap size three times larger than minimum. Thus,
if a benchmark is admitted, the range of admissible heap sizes is at least 1:2; the non-generational collector requires more than 10 collections, and our investigated schemes typically require hundreds of collections, so the aforementioned “randomness” is largely avoided.
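The admission criterion above can be expressed as a small computation. The sketch below is an illustrative helper (names and structure are ours, not the toolkit's), using the approximation that each collection cycle allocates roughly the heap headroom before the heap fills:

```python
def estimated_collections(words_allocated, live_words, heap_words):
    """Approximate number of non-generational collections: each cycle
    allocates about (heap_words - live_words) fresh words before the heap
    fills, so the collector runs roughly allocated / (heap - live) times."""
    headroom = heap_words - live_words
    if headroom <= 0:
        raise ValueError("heap too small for the program's live data")
    return words_allocated / headroom

def admissible_benchmark(words_allocated, live_words, max_live_words):
    """Admit a benchmark only if more than 10 collections occur even at a
    heap three times the minimum required size (the maximum live amount)."""
    return estimated_collections(words_allocated, live_words,
                                 3 * max_live_words) > 10
```

For example, a program allocating 1,000,000 words with about 10,000 words live needs 1,000,000 / (30,000 − 10,000) = 50 collections at a heap three times the minimum, and so would be admitted.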
4.2.2.3 The final benchmark set

Our selection process resulted in a set of 11 Smalltalk benchmarks and 3 Java benchmarks. Their basic properties are listed in Table 4.1. Execution time was measured in normal operation, without any instrumentation and with default generational garbage collector configurations, on a 50 MHz SPARCstation 20 workstation. The amount of data allocated by the program is expressed in words (each word being 4 bytes). The remaining columns give the number of objects allocated, the maximum live amount in words (which is also the minimum heap size required to execute the program), and the number of pointer stores. The results of non-generational collection for these programs are reported in Figures 4.1–4.14, pp. 71–84 (at the end of this chapter). Let us take as an example Figure 4.5. In the graph in Figure 4.5(a), we have plotted the absolute mark/cons ratio µ_non-generational against total heap size V. It is easy to check that the shape of the µ curve closely fits the theoretical prediction

    µ_non-generational ≈ 1 / (V/v − 1)

[Appel, 1987; Jones and Lins, 1996], where v is the effective live data amount. In the graph in Figure 4.5(b), we have plotted the number of garbage collections n against total heap size V. The horizontal dotted line is drawn at n = 10; thus the region below the horizontal dotted line (and to the right of the dotted vertical) is the region of heap sizes so large that they result in 10 or fewer non-generational collections. Because of inaccuracies that may be introduced under such conditions, we do not compare results for this region. We now describe individual benchmarks, providing where possible details of their structure.
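The theoretical prediction can be checked numerically. The sketch below (illustrative only, not the simulator's code) computes the ratio from heap size V and effective live amount v; it follows from noting that each collection copies v words and frees V − v words for fresh allocation:

```python
def mark_cons_ratio(V, v):
    """Theoretical mark/cons ratio of non-generational collection:
    each collection copies the live amount v and frees V - v words
    for fresh allocation, so mu = v / (V - v) = 1 / (V/v - 1)."""
    if V <= v:
        raise ValueError("heap must exceed the live amount")
    return 1.0 / (V / v - 1.0)

# A heap twice the live amount copies one word per word allocated;
# tripling the heap halves the ratio.
assert mark_cons_ratio(2.0, 1.0) == 1.0
assert mark_cons_ratio(3.0, 1.0) == 0.5
```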
Table 4.1. Properties of benchmarks used

Benchmark                 Ex. time (s)   Words alloc.   Objects alloc.   Max. live   Ptr. stores
Smalltalk:
Interactive-AllCallsOn         1.91         18 359          1 722            627           855
Interactive-TextEditing        2.30         22 747          3 556          1 098         3 509
StandardNonInteractive         6.11        204 954         46 157          1 160         7 251
HeapSim                      67625       3 791 306      1 084 650         93 026        30 298
Lambda-Fact5                   6.88        277 940         53 580          6 295        91 877
Lambda-Fact6                  55.86      1 216 247        241 864         13 675       404 670
Swim                          11.65      1 533 642        444 908         17 542       134 355
Tomcatv                        3.92      2 117 849        624 612         30 593       286 032
Tree-Replace-Binary            3.32        209 600         35 785          9 784        39 642
Tree-Replace-Random            4.55        925 236        189 549         13 114       168 513
Richards                     10819       4 400 543        652 954          1 498       763 626
Java:
JavaBYTEmark                 353.76      1 161 949        109 896         59 728        49 061
Bloat-Bloat                  387.54     37 364 458      3 429 007        202 435     4 927 497
Toba                         178.05     38 897 724      4 168 057        290 276     3 027 982
Interactive-AllCallsOn and Interactive-TextEditing. Two tests from the standard sequence specified in the Smalltalk-80 image [Goldberg and Robson, 1983], comprising only the macro-tests. Previously used in Ref. [Hosking et al., 1992].

StandardNonInteractive. A subset of the standard sequence of tests as specified in the Smalltalk-80 image [Goldberg and Robson, 1983], comprising the tests of basic functionality.

HeapSim. A program to simulate the behavior of a garbage-collected heap, not unlike the simplest of the tools used in this study. It is, however, instructed to simulate a heap in which object lifetimes follow a synthetic (exponential) distribution, and consequently the objects of the simulator itself exhibit highly synthetic behavior.

Lambda-Fact5 and Lambda-Fact6. An untyped lambda-calculus interpreter, evaluating the expressions 5! and 6! in the standard Church numerals encoding [Barendregt, 1984, p. 140]. Previously used in Ref. [Hosking et al., 1992]. Both input sizes are used in order to explore the effects of scale.

Swim. The SPEC95 benchmark 102.swim, translated into Smalltalk by the author: a shallow water model with a square grid.

Tomcatv. The SPEC95 benchmark 101.tomcatv, translated into Smalltalk by the author: a mesh-generation program.

Tree-Replace-Binary. A synthetic program that builds a large binary tree, then repeatedly replaces randomly chosen subtrees at fixed height with newly built subtrees. (This benchmark was named Destroy in Refs. [Hosking et al., 1992; Hosking and Hudson, 1993].) Tree-Replace-Random is a variant which replaces subtrees at randomly chosen heights.

Richards. The well-known operating-system event-driven simulation benchmark. Previously used in Ref. [Hosking et al., 1992].

Our set of Java programs is as follows:

JavaBYTEmark. A port of the BYTEmark benchmarks to Java, from the BYTE Magazine Web site.

Bloat-Bloat. The program Bloat, version 0.6 [Nystrom, 1998], manipulating the class files from its own distribution.

Toba. The Java-bytecode-to-C translator Toba [Proebsting et al., 1998], working on Pizza [Odersky and Wadler, 1997] class files.
4.2.3 Properties of benchmarks

We now examine some properties of the benchmark programs. We have already seen their behavior under non-generational collection. Before we proceed to their behavior under different age-based collectors, we look at those properties that are independent of any collection algorithm. We examine the distribution of object sizes, the distribution of object lifetimes, and the time-varying live amount profile. We shall later refer to these properties when we look for causes of particular performance behavior, but we shall find that their predictive value is not as great as expected. We first look at the distribution of object sizes: the typical sizes of objects, and how much sizes deviate from the typical. The size of a block must be substantially larger than the typical object size in order to avoid fragmentation. More to the point, the fraction of objects larger than the block size must be small, to minimize the overhead of allocating in the Large Object Space, and the overhead of subsequent collections of objects in Large Object Space [Hicks et al., 1998]. The distribution we measure is a time-averaged property: we do not consider how the numbers of small and large objects in the heap change over time. The size distributions for our set of benchmarks are presented in Figures 4.15–4.28 (pp. 85–98). The distributions are computed according to four different weighting criteria: (a) by count of objects alone; (b) by volume, which is the object size itself (to reflect the fraction of the heap occupied by objects of a particular size); (c) by lifetime (to take into account that a longer-lived object is more likely to be encountered in collection); and (d) by both volume and lifetime. Each row of a figure corresponds to a weighting criterion. The left column gives the distribution in raw form, as a histogram, whereas the right column gives the cumulative distribution, normalized to 1, so that fractions can be read off easily. Most programs have several dominant object sizes that constitute most of their allocated volume. Weighting is important: whereas, for instance, 99.5% of allocated objects in Swim are 10 words or smaller, these objects account for 95% weighted by volume, 92.5% weighted by lifetime, and just 50% weighted by both volume and lifetime (Figure 4.21, p. 91).
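The four weighting criteria can be tabulated directly from per-object records. The sketch below is a hypothetical illustration (the record shape and function names are ours, not the toolkit's API):

```python
from collections import defaultdict

def size_distributions(objects):
    """Given (size_words, lifetime_words) pairs for all allocated objects,
    tabulate size histograms under the four weightings used in the text:
    (a) plain count, (b) volume (size), (c) total lifetime, and
    (d) volume-lifetime product."""
    hists = {k: defaultdict(float)
             for k in ("count", "volume", "lifetime", "vol_life")}
    for size, life in objects:
        hists["count"][size] += 1
        hists["volume"][size] += size
        hists["lifetime"][size] += life
        hists["vol_life"][size] += size * life
    return hists

def cumulative_fraction(hist, upto):
    """Fraction of total weight in objects of size <= upto (normalized to 1)."""
    total = sum(hist.values())
    return sum(w for s, w in hist.items() if s <= upto) / total
```

With two small short-lived objects and one large long-lived one, the small objects dominate by count but nearly vanish under the volume-lifetime weighting, which is the effect discussed above for Swim.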
The latter is the most realistic measure of the distribution as seen by a garbage collector,6 and as we see, if small objects have short lifetimes, their participation in the load can be much less significant than their numbers may suggest. We note similar behavior—that smaller objects have shorter lifetimes than larger objects—in all Smalltalk and Java programs except three. The three exceptions are HeapSim, Lambda-Fact5, and Lambda-Fact6, in which a single size represents such an overwhelming fraction of allocation that no differentials can be observed; hence the cumulative distribution curves for different weightings are indistinguishable.7

6. We note that age-based collectors that examine less than the whole heap are able to bias the distribution of the objects they see in particular ways with respect to the lifetime, though not with respect to size. Other (non-age-based) collectors are free to use size in their allocation decisions.

7. The most striking observation is that Java programs have large maximum object sizes, whereas only one Smalltalk program, HeapSim, has objects larger than 140 words. We note that block size in implementation is restricted to be a power-of-two multiple of the virtual memory page size, and that page sizes are of the order of 1000–2000 words (8 KB, or 2048 words, for Sun Solaris 2.5.1 running on an UltraSPARC). Therefore, there is no additional constraint upon block size to make a large object space unnecessary for these Smalltalk program executions. The same program may allocate larger objects on a different input. Whether the potential reduction in memory management overhead, by not having a large object space, is significant, and worth the expense of compile-time analysis needed to prove that no large objects are allocated, is beyond the scope of this work.

As we have noted, many arguments about garbage collection performance invoke the properties of object lifetime distributions [Baker, 1993; Clinger and Hansen, 1997], sometimes assuming a priori distributions in order to derive tractable mathematical models. Even though it is not our goal to develop analytical models, it is proper to evaluate our benchmark set with respect to object lifetime distributions. We shall see that the relationship of the distribution to actual performance is tenuous. We present the lifetime distributions by means of the survivor function and the mortality function (Section 2.2), shown for our benchmark set in Figures 4.29–4.42 (pp. 99–103). For each benchmark, the lifetimes of all allocated objects were taken together, weighted by volume (i.e., by object size), and a cumulative distribution was found. Thus, the lifetime distribution is another time-averaged property. Each figure (a) shows the survivor function, plotting object age on a logarithmic scale to show the interesting behavior at young ages more clearly. Each figure (b) shows the mortality function, obtained by numerical differentiation and suitable smoothing of the survivor function. Because of the inherent error of this numerical process, applied to data that are not intrinsically smooth, the mortality curves should be taken cum grano salis, as merely indicative of the lifetimes. The fact that the survivor function is rapidly decreasing, even with a logarithmic scale for the age axis, confirms the observation, made many times in the literature, that object lifetimes tend to be short. A few programs exhibit significant fractions of allocated data living to ages close to the duration of the program (Swim, Tomcatv, HeapSim, Tree-Replace-*, Bloat-Bloat, Toba). In others, the survivor function approaches negligible values at an age two orders of magnitude below the duration of the program. It is tempting to make performance predictions by extrapolating from lifetime distributions. We can speculate that the presence of sharp peaks in the mortality curve (as in StandardNonInteractive, Figure 4.31, and Richards, Figure 4.39) indicates an opportunity for good performance of the FC-DOF collector, provided that the age of high mortality translates into a heap region, compatible with the collection window, with low copying cost. On the other hand, there is little to distinguish the survivor functions of Lambda-Fact5 (Figure 4.33) and Richards (Figure 4.39), yet we shall see that their behavior under age-based collection is radically different. In addition to the preceding time-averaged properties, we look at the live profile: the amount of live data in the heap as a function of time. Our traces already express exact liveness information, thus it is easy to produce the live profiles, given in Figures 4.43–4.56. The live profiles reveal periodic patterns, as in Figures 4.48 and 4.49, which result from the iterative or recursive control structure of the program.
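The survivor and mortality functions can be computed as sketched below. This is an illustration under stated assumptions: we take the usual hazard-rate definition m(a) = −s′(a)/s(a), measure age in words allocated, and use crude forward differences without the smoothing mentioned above:

```python
def survivor_function(lifetimes_and_sizes, ages):
    """Volume-weighted survivor function: s(a) = fraction of allocated
    volume still live at age a, from (lifetime, size) pairs."""
    total = sum(size for _, size in lifetimes_and_sizes)
    return [sum(size for life, size in lifetimes_and_sizes if life > a) / total
            for a in ages]

def mortality(s_values, ages):
    """Mortality by numerical differentiation of the survivor function:
    m(a) ~ -(ds/da) / s(a), via unsmoothed forward differences."""
    out = []
    for i in range(len(ages) - 1):
        ds = s_values[i + 1] - s_values[i]
        da = ages[i + 1] - ages[i]
        s_here = s_values[i]
        out.append(-ds / da / s_here if s_here > 0 else 0.0)
    return out
```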
4.3 Simulating garbage collection

The primary goal of garbage collection algorithm simulation in this study is to estimate the copying cost of collection. The simulators accurately compute the amount of data copied in collection; this amount is a good estimate of copying cost. In addition, the simulators compute the number of objects copied, the number of garbage collections invoked, and other secondary statistics. We first describe object-level simulators. They do not model the costs realistically incurred in remembered set maintenance, although they themselves do maintain per-object remembered sets. These costs will be addressed in Section 4.3.3 using block-level simulation based on the prototype implementation. The simulators do not model locality effects, namely the locality of access within the collection process, the locality from one collection to another, the locality interference between the mutator and the collector, and lastly, instruction locality. The organization of the simulator is entirely different from that of a realistic garbage collector, thus any timings or locality measurements of the simulated collector are meaningless; moreover, the computational actions of the mutator are not available at all in the trace. These effects can be of interest on modern hardware, and they should be considered in future implementation experiments. However, the primary cost, which is that of copying, is accurately measured by simulation. Simulation has additional benefits, such as the freedom to explore configuration spaces thoroughly and at low cost (since computation is eliminated and only memory management is simulated), the freedom to explore new collection algorithms in abstracto, and the ability to operate with a common trace format uniformly for various source languages. The number of garbage collections invoked is easily and accurately obtained by simulation. It is of some importance in evaluating the performance of an algorithm. Each invocation of the collector entails a considerable overhead: the collector must find the global roots of the collection, typically by scanning the execution stack (or multiple stacks in multiple threads of computation). For this reason, the scheme with fewer collector invocations would be preferred, ceteris paribus. However, the number of invocations can be viewed from a different angle: the fewer collections there are, the more work is done by each, and the longer the pauses are. In an interactive or real-time setting especially, it is desirable to limit pause times.
More precisely, a garbage collection algorithm that imposes an upper bound on the amount of work at each collection may be preferable, in spite of a larger number of collector invocations, to a collector that performs a smaller number of collections, each of longer duration.8 In summary, it is not appropriate to use the number of garbage collections invoked as a performance measure without reference to the intended application.

8. Traditional generational collection performs a larger number of collections than non-generational collection, and these collections are on average shorter. However, it occasionally collects the entire heap, and thus its upper bound on the amount copied in a collection is no better than with non-generational collection.
4.3.1 General simulation algorithm

The simulation algorithm resembles in its outline the operation of an actual garbage collector (Section 3.1), but differs in important details. We begin with an informal account. Let us first consider the actions of the simulator while it is simulating the actions of the mutator given in the trace. The simulator sets up a certain, immutable, heap size and an initial allocation area size, and then begins to read the trace. For each A record, the allocation area decreases by the size of the allocated object, a running total of volume allocated increases by the same amount, and a data structure is created to represent the object. The simulator maintains a global set of data structures representing all objects present in the heap. A splay-tree data structure [Sleator and Tarjan, 1983] is used for the purpose, because of its good temporal-locality properties. For each D record, the simulator marks the corresponding data structure as known-dead, but does not discard it immediately. For each U record, the simulator first identifies the object in which the updated field is and the object to which the pointer points, and then it records the newly stored pointer in the data structures of the two objects. (The data structure for an object contains both the outgoing pointers and the incoming pointers.) Eventually, following the allocation of an object, the allocation area will be exhausted, and at that point the simulator begins simulating a collection. The first task is to choose the subset of objects that will be subjected to collection (the collected region C). How this choice is made differs between collection algorithms, as described in Chapter 3. The remaining objects will be the uncollected set U. The simulator finds the roots of the collection as follows. First, all objects in C that are not known-dead are roots. These objects would be reachable from the stack and globals at this point in actual execution.
Second, all objects in C which have an incoming pointer from U are roots. These objects must be assumed live because all of U is assumed live in region-based collection. The simulator then computes the transitive closure of the points-to relation over the root set within C, using a simple worklist mechanism. The result is the survivor set S. Data structures corresponding to the garbage set G = C \ S can now safely be discarded from the global set. Finally, the sum of the sizes of objects in S is added to the running sum of volume copied, and a collection counter is incremented. The available allocation area increases by the amount freed, that is, the sum of the sizes of objects in G. If this amount is still not greater than zero, the simulator announces that the collector has failed. As we have seen, the collection simulator keeps track of the total volume of allocated objects, as well as the total volume of survivor objects over all performed collections. By definition, the total volume of survivor objects is the measure of the copying cost of collection. The mark/cons ratio µ is also directly available as the ratio of the total volume of survivor objects to the total volume of allocated objects.
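The collection step just described can be sketched as follows. This is a simplified illustration with hypothetical field names, not the actual simulator; in particular, the splay-tree global set is replaced by a plain dictionary:

```python
from dataclasses import dataclass, field

@dataclass
class Obj:
    size: int                                      # object size in words
    dead: bool = False                             # marked by a D record
    points_to: set = field(default_factory=set)    # outgoing pointers
    pointed_by: set = field(default_factory=set)   # incoming pointers

def simulate_collection(heap, collected):
    """Simulate one collection of region C: roots are objects in C not
    known-dead, plus objects in C with an incoming pointer from the
    uncollected set U; survivors are the transitive closure of the
    points-to relation over the roots within C."""
    C = set(collected)
    roots = [o for o in C
             if not heap[o].dead
             or any(p not in C for p in heap[o].pointed_by)]
    survivors, work = set(roots), list(roots)
    while work:                                    # simple worklist closure
        for q in heap[work.pop()].points_to:
            if q in C and q not in survivors:
                survivors.add(q)
                work.append(q)
    garbage = C - survivors                        # the garbage set G = C \ S
    copied = sum(heap[o].size for o in survivors)
    freed = sum(heap[o].size for o in garbage)
    for o in garbage:
        del heap[o]                                # discard G from the global set
    return copied, freed
```

Note how a known-dead object survives anyway if a pointer from the uncollected set still reaches it: region-based collection must conservatively assume all of U live.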
4.3.2 Age-based simulation

In age-based simulation, the relative order of object allocation remains the (logical) order of objects in the heap, and determines collection decisions. Therefore, the global set is implemented as a (doubly-linked) list. New allocation adds objects to the young (right) end of the list. The choice of the collected region is restricted to selecting a contiguous sublist C of the global list. The collection then finds the sublist of survivors S, which is spliced back into the global list in the place which C occupied. When simulating age-ordered schemes, the time of allocation serves as a unique timestamp, and membership in a sublist can be determined by timestamp comparison instead of a full-fledged set-membership test; other manipulations are simplified as well. For renewal-age schemes, S is attached past the young end instead. The objects in S receive renewed timestamps.
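The two splicing disciplines can be illustrated on a plain Python list standing in for the doubly-linked global list (a hypothetical helper, not the simulator's code):

```python
def splice_survivors(order, collected, survivors, renewal=False):
    """Splice survivors S of a contiguous collected sublist C back into the
    allocation-ordered list (oldest at the left, youngest at the right).
    Age-ordered schemes put S back where C stood; renewal-age schemes
    attach S past the young end (and renew its timestamps)."""
    i = order.index(collected[0])
    keep = set(survivors)
    surv = [o for o in collected if o in keep]   # relative age order kept
    before, after = order[:i], order[i + len(collected):]
    if renewal:
        return before + after + surv             # survivors become youngest
    return before + surv + after                 # spliced back in place
```

In the actual simulator the allocation time serves as a unique timestamp, so membership in the contiguous sublist C reduces to a timestamp range check; under renewal-age schemes the survivors' timestamps are renewed as they move to the young end.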
4.3.3 Block-based simulation

We implemented a prototype version of the block heap organization for age-based garbage collection, as outlined in Section 3.2. This prototype runs on 32-bit Sun SPARC machines and
does not use the design for large address spaces of Section 3.2.4. It is heavily instrumented in order to record all collector actions, as well as all the write barrier actions. In order to examine how the block implementation works for the same traces that were examined using object-level simulation, we construct a pseudomutator: a program that accepts a trace input in our standard format, and performs the same heap-related actions that the original traced program did. We then integrate the pseudomutator with the (various) garbage collectors in the block implementation prototype. Because the computational aspects of the program are replaced with table lookup in the pseudomutator, the combined system remains a simulator, rather than an accurate replica of the original program. Therefore we are content with obtaining thorough statistics of copying costs (as counts of words copied) and pointer-management costs (as counts of different write-barrier and remembered-set processing operations).
[Figures 4.1–4.14 each plot, for one benchmark, (a) the copying cost as the mark/cons ratio µ and (b) the collector invocation count, against total heap size in words.]

Figure 4.1. Non-generational collection performance: Interactive-AllCallsOn.
Figure 4.2. Non-generational collection performance: Interactive-TextEditing.
Figure 4.3. Non-generational collection performance: StandardNonInteractive.
Figure 4.4. Non-generational collection performance: HeapSim.
Figure 4.5. Non-generational collection performance: Lambda-Fact5.
Figure 4.6. Non-generational collection performance: Lambda-Fact6.
Figure 4.7. Non-generational collection performance: Swim.
Figure 4.8. Non-generational collection performance: Tomcatv.
Figure 4.9. Non-generational collection performance: Tree-Replace-Binary.
Figure 4.10. Non-generational collection performance: Tree-Replace-Random.
Figure 4.11. Non-generational collection performance: Richards.
Figure 4.12. Non-generational collection performance: JavaBYTEmark.
Figure 4.13. Non-generational collection performance: Bloat-Bloat.
Figure 4.14. Non-generational collection performance: Toba.

[Figures 4.15 onward plot object size distributions against object size in words, with panels (a) by plain count, (b) weighted by volume, (c) weighted by lifetime, and (d) weighted by both lifetime and volume, each in raw and cumulative form.]

Figure 4.15. Object size distribution: Interactive-AllCallsOn.
Figure 4.16. Object size distribution: Interactive-TextEditing.
Figure 4.17. Object size distribution: StandardNonInteractive.

[Object size distribution panels for HeapSim (raw and cumulative).]
0.4
0.2
0 1
10
100 1000 Object size (words)
(d) Distribution weighted by lifetime and volume (raw and cumulative). Figure 4.18. Object size distribution: HeapSim.
88
10000
45000
1 Cumulative distribution (by count)
Lambda-Fact5
40000 Number of objects
35000 30000 25000 20000 15000 10000 5000 0
Lambda-Fact5 0.8
0.6
0.4
0.2
0 1
10 Object size (words)
100
1
10 Object size (words)
100
(a) Distribution by plain count (raw and cumulative). 250000
1 Cumulative distribution (by volume)
Lambda-Fact5
Volume of objects
200000
150000
100000
50000
0
Lambda-Fact5 0.8
0.6
0.4
0.2
0 1
10 Object size (words)
100
1
10 Object size (words)
100
(b) Distribution weighted by volume (raw and cumulative). Cumulative distribution (by total lifetime)
2.5e+08
2e+08
1.5e+08
1e+08
5e+07
0 1
10 Object size (words)
1 Lambda-Fact5 0.8
0.6
0.4
0.2
0
100
1 Cumulative distribution (by volume-lifetime product)
Total lifetime of objects
Lambda-Fact5
10 Object size (words)
100
(c) Distribution weighted by lifetime (raw and cumulative). Volume-lifetime product of objects
1.2e+09 Lambda-Fact5 1e+09 8e+08 6e+08 4e+08 2e+08 0 1
10 Object size (words)
100
1 Lambda-Fact5 0.8
0.6
0.4
0.2
0 1
10 Object size (words)
(d) Distribution weighted by lifetime and volume (raw and cumulative). Figure 4.19. Object size distribution: Lambda-Fact5.
89
100
250000
1 Cumulative distribution (by count)
Lambda-Fact6
Number of objects
200000
150000
100000
50000
0
Lambda-Fact6 0.8
0.6
0.4
0.2
0 1
10 Object size (words)
100
1
10 Object size (words)
100
(a) Distribution by plain count (raw and cumulative). 1.2e+06
1 Cumulative distribution (by volume)
Lambda-Fact6
Volume of objects
1e+06 800000 600000 400000 200000 0
Lambda-Fact6 0.8
0.6
0.4
0.2
0 1
10 Object size (words)
100
1
10 Object size (words)
100
(b) Distribution weighted by volume (raw and cumulative). Lambda-Fact6
1.6e+09 1.4e+09 1.2e+09 1e+09 8e+08 6e+08 4e+08 2e+08 0 1
10 Object size (words)
1 Lambda-Fact6 0.8
0.6
0.4
0.2
0
100
1 Cumulative distribution (by volume-lifetime product)
1.8e+09 Total lifetime of objects
Cumulative distribution (by total lifetime)
2e+09
10 Object size (words)
100
(c) Distribution weighted by lifetime (raw and cumulative). Volume-lifetime product of objects
1e+10 Lambda-Fact6
9e+09 8e+09 7e+09 6e+09 5e+09 4e+09 3e+09 2e+09 1e+09 0 1
10 Object size (words)
100
1 Lambda-Fact6 0.8
0.6
0.4
0.2
0 1
10 Object size (words)
(d) Distribution weighted by lifetime and volume (raw and cumulative). Figure 4.20. Object size distribution: Lambda-Fact6.
90
100
400000
1 Cumulative distribution (by count)
Swim
Number of objects
350000 300000 250000 200000 150000 100000 50000 0
Swim 0.8
0.6
0.4
0.2
0 1
10 100 Object size (words)
1000
1
10 100 Object size (words)
1000
(a) Distribution by plain count (raw and cumulative). 1.2e+06
1 Cumulative distribution (by volume)
Swim
Volume of objects
1e+06 800000 600000 400000 200000 0
Swim 0.8
0.6
0.4
0.2
0 1
10 100 Object size (words)
1000
1
10 100 Object size (words)
1000
(b) Distribution weighted by volume (raw and cumulative). Swim
4e+09 3.5e+09 3e+09 2.5e+09 2e+09 1.5e+09 1e+09 5e+08 0 1
10 100 Object size (words)
1 Swim 0.8
0.6
0.4
0.2
0
1000
1 Cumulative distribution (by volume-lifetime product)
4.5e+09 Total lifetime of objects
Cumulative distribution (by total lifetime)
5e+09
10 100 Object size (words)
1000
(c) Distribution weighted by lifetime (raw and cumulative). Volume-lifetime product of objects
1.6e+10 Swim 1.4e+10 1.2e+10 1e+10 8e+09 6e+09 4e+09 2e+09 0 1
10 100 Object size (words)
1000
1 Swim 0.8
0.6
0.4
0.2
0 1
10 100 Object size (words)
(d) Distribution weighted by lifetime and volume (raw and cumulative). Figure 4.21. Object size distribution: Swim.
91
1000
600000
1 Cumulative distribution (by count)
Tomcatv
Number of objects
500000 400000 300000 200000 100000 0
Tomcatv 0.8
0.6
0.4
0.2
0 1
10 100 Object size (words)
1000
1
10 100 Object size (words)
1000
(a) Distribution by plain count (raw and cumulative). 1.6e+06
1 Cumulative distribution (by volume)
Tomcatv
Volume of objects
1.4e+06 1.2e+06 1e+06 800000 600000 400000 200000 0
Tomcatv 0.8
0.6
0.4
0.2
0 1
10 100 Object size (words)
1000
1
10 100 Object size (words)
1000
(b) Distribution weighted by volume (raw and cumulative). Tomcatv
8e+09 7e+09 6e+09 5e+09 4e+09 3e+09 2e+09 1e+09 0 1
10 100 Object size (words)
1 Tomcatv 0.8
0.6
0.4
0.2
0
1000
1 Cumulative distribution (by volume-lifetime product)
9e+09 Total lifetime of objects
Cumulative distribution (by total lifetime)
1e+10
10 100 Object size (words)
1000
(c) Distribution weighted by lifetime (raw and cumulative). Volume-lifetime product of objects
3e+10 Tomcatv 2.5e+10 2e+10 1.5e+10 1e+10 5e+09 0 1
10 100 Object size (words)
1000
1 Tomcatv 0.8
0.6
0.4
0.2
0 1
10 100 Object size (words)
(d) Distribution weighted by lifetime and volume (raw and cumulative). Figure 4.22. Object size distribution: Tomcatv.
92
1000
35000
1 Cumulative distribution (by count)
Tree-Replace-Binary
Number of objects
30000 25000 20000 15000 10000 5000 0
Tree-Replace-Binary 0.8
0.6
0.4
0.2
0 1
10 Object size (words)
100
1
10 Object size (words)
100
(a) Distribution by plain count (raw and cumulative). 200000
1 Cumulative distribution (by volume)
Tree-Replace-Binary
180000
Volume of objects
160000 140000 120000 100000 80000 60000 40000 20000 0
Tree-Replace-Binary 0.8
0.6
0.4
0.2
0 1
10 Object size (words)
100
1
10 Object size (words)
100
(b) Distribution weighted by volume (raw and cumulative). Cumulative distribution (by total lifetime)
3e+08 Tree-Replace-Binary
2e+08 1.5e+08 1e+08 5e+07 0 1
10 Object size (words)
Tree-Replace-Binary 0.8
0.6
0.4
0.2
0
100
1 Cumulative distribution (by volume-lifetime product)
Total lifetime of objects
2.5e+08
1
10 Object size (words)
100
(c) Distribution weighted by lifetime (raw and cumulative). Volume-lifetime product of objects
1.8e+09 Tree-Replace-Binary
1.6e+09 1.4e+09 1.2e+09 1e+09 8e+08 6e+08 4e+08 2e+08 0 1
10 Object size (words)
100
1 Tree-Replace-Binary 0.8
0.6
0.4
0.2
0 1
10 Object size (words)
(d) Distribution weighted by lifetime and volume (raw and cumulative). Figure 4.23. Object size distribution: Tree-Replace-Binary.
93
100
90000
1 Cumulative distribution (by count)
Tree-Replace-Random
80000 Number of objects
70000 60000 50000 40000 30000 20000 10000 0
Tree-Replace-Random 0.8
0.6
0.4
0.2
0 1
10 Object size (words)
100
1
10 Object size (words)
100
(a) Distribution by plain count (raw and cumulative). 600000
1 Cumulative distribution (by volume)
Tree-Replace-Random
Volume of objects
500000 400000 300000 200000 100000 0
Tree-Replace-Random 0.8
0.6
0.4
0.2
0 1
10 Object size (words)
100
1
10 Object size (words)
100
(b) Distribution weighted by volume (raw and cumulative). Cumulative distribution (by total lifetime)
1.2e+09 Tree-Replace-Random
8e+08 6e+08 4e+08 2e+08 0 1
10 Object size (words)
Tree-Replace-Random 0.8
0.6
0.4
0.2
0
100
1 Cumulative distribution (by volume-lifetime product)
Total lifetime of objects
1e+09
1
10 Object size (words)
100
(c) Distribution weighted by lifetime (raw and cumulative). Volume-lifetime product of objects
7e+09 Tree-Replace-Random 6e+09 5e+09 4e+09 3e+09 2e+09 1e+09 0 1
10 Object size (words)
100
1 Tree-Replace-Random 0.8
0.6
0.4
0.2
0 1
10 Object size (words)
(d) Distribution weighted by lifetime and volume (raw and cumulative). Figure 4.24. Object size distribution: Tree-Replace-Random.
94
100
500000
1 Cumulative distribution (by count)
Richards
450000
Number of objects
400000 350000 300000 250000 200000 150000 100000 50000 0
Richards 0.8
0.6
0.4
0.2
0 1
10 100 Object size (words)
1000
1
10 100 Object size (words)
1000
(a) Distribution by plain count (raw and cumulative). 2e+06
1 Cumulative distribution (by volume)
Richards
1.8e+06
Volume of objects
1.6e+06 1.4e+06 1.2e+06 1e+06 800000 600000 400000 200000 0
Richards 0.8
0.6
0.4
0.2
0 1
10 100 Object size (words)
1000
1
10 100 Object size (words)
1000
(b) Distribution weighted by volume (raw and cumulative). Cumulative distribution (by total lifetime)
2.5e+08
2e+08
1.5e+08
1e+08
5e+07
0 1
10 100 Object size (words)
1 Richards 0.8
0.6
0.4
0.2
0
1000
1 Cumulative distribution (by volume-lifetime product)
Total lifetime of objects
Richards
10 100 Object size (words)
1000
(c) Distribution weighted by lifetime (raw and cumulative). Volume-lifetime product of objects
1.8e+09 Richards
1.6e+09 1.4e+09 1.2e+09 1e+09 8e+08 6e+08 4e+08 2e+08 0 1
10 100 Object size (words)
1000
1 Richards 0.8
0.6
0.4
0.2
0 1
10 100 Object size (words)
(d) Distribution weighted by lifetime and volume (raw and cumulative). Figure 4.25. Object size distribution: Richards.
95
1000
50000
1 Cumulative distribution (by count)
JavaBYTEmark
45000
Number of objects
40000 35000 30000 25000 20000 15000 10000 5000 0
JavaBYTEmark 0.8
0.6
0.4
0.2
0 1
10
100 1000 Object size (words)
10000
100000
1
10
100 1000 Object size (words)
10000
100000
(a) Distribution by plain count (raw and cumulative). 250000
1 Cumulative distribution (by volume)
JavaBYTEmark
Volume of objects
200000
150000
100000
50000
0
JavaBYTEmark 0.8
0.6
0.4
0.2
0 1
10
100 1000 Object size (words)
10000
100000
1
10
100 1000 Object size (words)
10000
100000
(b) Distribution weighted by volume (raw and cumulative). JavaBYTEmark
4e+08 3.5e+08 3e+08 2.5e+08 2e+08 1.5e+08 1e+08 5e+07 0 1
10
100 1000 Object size (words)
10000
1 JavaBYTEmark 0.8
0.6
0.4
0.2
0
100000
1 Cumulative distribution (by volume-lifetime product)
4.5e+08 Total lifetime of objects
Cumulative distribution (by total lifetime)
5e+08
10
100 1000 Object size (words)
10000
100000
(c) Distribution weighted by lifetime (raw and cumulative). Volume-lifetime product of objects
1e+10 JavaBYTEmark
9e+09 8e+09 7e+09 6e+09 5e+09 4e+09 3e+09 2e+09 1e+09 0 1
10
100 1000 Object size (words)
10000
100000
1 JavaBYTEmark 0.8
0.6
0.4
0.2
0 1
10
100 1000 Object size (words)
10000
(d) Distribution weighted by lifetime and volume (raw and cumulative). Figure 4.26. Object size distribution: JavaBYTEmark.
96
100000
1.8e+06
1 Cumulative distribution (by count)
Bloat-Bloat
1.6e+06 Number of objects
1.4e+06 1.2e+06 1e+06 800000 600000 400000 200000 0
Bloat-Bloat 0.8
0.6
0.4
0.2
0 1
10
100 1000 Object size (words)
10000
1
10
100 1000 Object size (words)
10000
(a) Distribution by plain count (raw and cumulative). 9e+06
1
8e+06
Cumulative distribution (by volume)
Bloat-Bloat
Volume of objects
7e+06 6e+06 5e+06 4e+06 3e+06 2e+06 1e+06 0
Bloat-Bloat 0.8
0.6
0.4
0.2
0 1
10
100 1000 Object size (words)
10000
1
10
100 1000 Object size (words)
10000
(b) Distribution weighted by volume (raw and cumulative). Bloat-Bloat
8e+10 7e+10 6e+10 5e+10 4e+10 3e+10 2e+10 1e+10 0 1
10
100 1000 Object size (words)
1 Bloat-Bloat 0.8
0.6
0.4
0.2
0
10000
1 Cumulative distribution (by volume-lifetime product)
9e+10 Total lifetime of objects
Cumulative distribution (by total lifetime)
1e+11
10
100 1000 Object size (words)
10000
(c) Distribution weighted by lifetime (raw and cumulative). Volume-lifetime product of objects
5e+11 Bloat-Bloat
4.5e+11 4e+11 3.5e+11 3e+11 2.5e+11 2e+11 1.5e+11 1e+11 5e+10 0 1
10
100 1000 Object size (words)
10000
1 Bloat-Bloat 0.8
0.6
0.4
0.2
0 1
10
100 1000 Object size (words)
(d) Distribution weighted by lifetime and volume (raw and cumulative). Figure 4.27. Object size distribution: Bloat-Bloat.
97
10000
2e+06
1 Cumulative distribution (by count)
Toba
1.8e+06
Number of objects
1.6e+06 1.4e+06 1.2e+06 1e+06 800000 600000 400000 200000 0
Toba 0.8
0.6
0.4
0.2
0 1
10
100 1000 Object size (words)
10000
1
10
100 1000 Object size (words)
10000
(a) Distribution by plain count (raw and cumulative). 1e+07
1 Cumulative distribution (by volume)
Toba
9e+06
Volume of objects
8e+06 7e+06 6e+06 5e+06 4e+06 3e+06 2e+06 1e+06 0
Toba 0.8
0.6
0.4
0.2
0 1
10
100 1000 Object size (words)
10000
1
10
100 1000 Object size (words)
10000
(b) Distribution weighted by volume (raw and cumulative). Cumulative distribution (by total lifetime)
1.6e+11 Toba 1.2e+11 1e+11 8e+10 6e+10 4e+10 2e+10 0 1
10
100 1000 Object size (words)
1 Toba 0.8
0.6
0.4
0.2
0
10000
1 Cumulative distribution (by volume-lifetime product)
Total lifetime of objects
1.4e+11
10
100 1000 Object size (words)
10000
(c) Distribution weighted by lifetime (raw and cumulative). Volume-lifetime product of objects
8e+11 Toba 7e+11 6e+11 5e+11 4e+11 3e+11 2e+11 1e+11 0 1
10
100 1000 Object size (words)
10000
1 Toba 0.8
0.6
0.4
0.2
0 1
10
100 1000 Object size (words)
(d) Distribution weighted by lifetime and volume (raw and cumulative). Figure 4.28. Object size distribution: Toba.
98
10000
[Figures 4.29–4.42 (pp. 99–103) plot the object lifetime distributions of the benchmarks. Each figure shows two plots against object age in words of allocation: (a) the survivor function and (b) the mortality function, smoothed.]
Figure 4.29. Object lifetime distribution: Interactive-AllCallsOn.
Figure 4.30. Object lifetime distribution: Interactive-TextEditing.
Figure 4.31. Object lifetime distribution: StandardNonInteractive.
Figure 4.32. Object lifetime distribution: HeapSim.
Figure 4.33. Object lifetime distribution: Lambda-Fact5.
Figure 4.34. Object lifetime distribution: Lambda-Fact6.
Figure 4.35. Object lifetime distribution: Swim.
Figure 4.36. Object lifetime distribution: Tomcatv.
Figure 4.37. Object lifetime distribution: Tree-Replace-Binary.
Figure 4.38. Object lifetime distribution: Tree-Replace-Random.
Figure 4.39. Object lifetime distribution: Richards.
Figure 4.40. Object lifetime distribution: JavaBYTEmark.
Figure 4.41. Object lifetime distribution: Bloat-Bloat.
Figure 4.42. Object lifetime distribution: Toba.
[Figures 4.43–4.56 (pp. 104–111) plot the live profiles of the benchmarks: the amount of live data in words, as a function of time measured in words of allocation.]
Figure 4.43. Live profile: Interactive-AllCallsOn.
Figure 4.44. Live profile: Interactive-TextEditing.
Figure 4.45. Live profile: StandardNonInteractive.
Figure 4.46. Live profile: HeapSim.
Figure 4.47. Live profile: Lambda-Fact5.
Figure 4.48. Live profile: Lambda-Fact6.
Figure 4.49. Live profile: Swim.
Figure 4.50. Live profile: Tomcatv.
Figure 4.51. Live profile: Tree-Replace-Binary.
Figure 4.52. Live profile: Tree-Replace-Random.
Figure 4.53. Live profile: Richards.
Figure 4.54. Live profile: JavaBYTEmark.
Figure 4.55. Live profile: Bloat-Bloat.
Figure 4.56. Live profile: Toba.
CHAPTER 5 EVALUATION: COPYING COST
In this chapter we focus on the copying cost of collection to the exclusion of other cost factors. We look at the measured copying cost of the various age-based algorithms, including the traditional generational youngest-first collectors and the newly proposed deferred older-first collector. We examine the contribution of excess retention to the copying cost, and search for the lower bounds of copying cost given the constraint of heap size, by looking at a hypothetical optimal collector and at different tentative adaptive collectors.
5.1 Method for evaluating different configurations of a collector

The copying cost of a given garbage collection scheme depends primarily on the size of the available heap space and the settings of configuration parameters. The available heap space is described by a single number, V, since in this study that space is constant. For collection schemes with a fixed size of the collected region (FC), that size is the only configuration parameter, and is conveniently expressed as a fraction g of total heap volume V; thus the collected region has size C = gV.
For each benchmark, a suitable set of values is chosen for V. The heap volume must be at least equal to the maximum amount of live data in the program. The performance of any collector, however, can be expected to be extremely poor if V is too close to this minimum. We let V assume up to 9 discrete values between 1.2 and 6 times the minimum, in geometric increments of 20%. Note that fewer than 9 heap sizes are reported for some benchmarks: when the criterion for benchmark relevance discussed above (more than 10 non-generational collections) is not met for the largest tested heap sizes, they are not reported. A suitable set of values is chosen for g as well: we let the fraction assume 20 discrete values between 0 and 1.
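As a concrete illustration, the experimental grid just described can be generated as follows. This is a minimal sketch under the reading that heap sizes start at 1.2 times the live maximum and grow geometrically by 20%; the function names and the example live maximum are ours, not the toolkit's.

```python
def heap_sizes(max_live, n=9, growth=1.2):
    """Up to n heap sizes V, in geometric 20% increments,
    from 1.2 times the maximum live amount upward."""
    return [round(max_live * growth ** (i + 1)) for i in range(n)]

def region_fractions(n=20):
    """n discrete values of the collected fraction g,
    strictly between 0 and 1."""
    return [i / (n + 1) for i in range(1, n + 1)]

# Hypothetical benchmark with a live maximum of 10,000 words:
V_values = heap_sizes(10_000)
g_values = region_fractions()
```

Each benchmark is then simulated once per (V, g) pair, so a full sweep is at most 9 x 20 runs per FC scheme.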
For each pair of V and g values, we performed a simulation of the benchmark. For a given heap size V , by varying the configuration parameter g (how much data is collected each time), we explore its effects on copying cost, within a fixed strategy of where to choose the data to collect. Similarly, for the generational schemes, specifically 2GYF and 3GYF, the fraction of heap space dedicated to the nursery is the primary configuration parameter. For 2GYF it is the only parameter for a given heap size, and for 3GYF there is an additional parameter in the choice of division of the remainder of the heap between the middle and the oldest generation, but, as we shall see, its influence on performance is less pronounced.
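The two-parameter nature of 3GYF can be made concrete: each nursery fraction can be paired with many divisions of the remaining space between the middle and oldest generations, so a given heap size yields a two-dimensional family of configurations. A sketch, with illustrative names:

```python
def three_gyf_configs(heap_size, nursery_fracs, middle_splits):
    """Enumerate 3GYF configurations for one heap size: the nursery
    gets fraction n of the heap, and fraction s of the remainder goes
    to the middle generation, the rest to the oldest generation."""
    return [
        (n * heap_size,
         s * (1 - n) * heap_size,
         (1 - s) * (1 - n) * heap_size)
        for n in nursery_fracs
        for s in middle_splits
    ]

configs = three_gyf_configs(1000,
                            nursery_fracs=[0.25, 0.5],
                            middle_splits=[0.25, 0.5, 0.75])
# Every configuration partitions the whole heap:
assert all(abs(sum(c) - 1000) < 1e-9 for c in configs)
```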
5.1.1 Results We present the results for each benchmark in turn, and for each benchmark we give a series of plots, ordered by increasing heap size. For each heap size, we show in separate plots (a) the performance of the FC schemes as a function of the fixed fraction collected, and (b) the performance of the GYF schemes as a function of the nursery size. Each of the figures in this section (Figures 5.7–5.110) consists of two plots, on the left (a) the plot for FC schemes and on the right (b) the plot for GYF schemes. The vertical axis (the ordinate) has the same meaning in both: it gives the relative mark/cons ratio ρ, which is the mark/cons ratio of a particular scheme relative to the mark/cons ratio of the non-generational collector for the same heap size. The scale on the vertical axis is the same for FC and for GYF, and for all heap sizes for the same benchmark. The horizontal axis (the abscissa) in the FC plots gives the size of the collected region expressed as the fraction g of heap size. In the GYF plots, the horizontal axis gives the size of the nursery as a fraction of heap size. It would be misleading to show both in the same graph, since the size of the collected region in GYF is occasionally the whole heap. In plots (a), we have drawn the curves for the following schemes:
FC-TOF, FC-TYF, FC-DOF, FC-DYF, and FC-ROF.[1] In plots (b), we have drawn a curve for the scheme 2GYF, whereas the measurements for the 3GYF scheme are presented as a scatter plot, since the same nursery size can be combined with different divisions of the remaining space into the two older generations. These plots permit us to analyze how the choice of configuration affects the copying cost. Furthermore, for FC schemes, the configuration determines the maximum amount that can be copied on each collection, and thus (approximately) bounds pause times.
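The ordinate ρ of these plots can be computed as follows: the mark/cons ratio of a run is the volume copied by the collector divided by the volume allocated by the program, and ρ normalizes it by the non-generational collector's ratio at the same heap size. This is a minimal sketch; the function names and sample figures are illustrative.

```python
def mark_cons(copied_words, allocated_words):
    """Mark/cons ratio: words copied per word allocated."""
    return copied_words / allocated_words

def relative_mark_cons(scheme, non_generational):
    """rho: a scheme's mark/cons ratio relative to the
    non-generational collector's, at the same heap size."""
    return mark_cons(*scheme) / mark_cons(*non_generational)

# A scheme copying 50,000 words where non-generational copies
# 100,000, over the same 1,000,000 words of allocation:
rho = relative_mark_cons((50_000, 1_000_000), (100_000, 1_000_000))
assert abs(rho - 0.5) < 1e-12
```

Values of ρ below 1 thus indicate a scheme that copies less than non-generational collection in the same heap.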
5.1.1.1 The FC-TOF collector

We observe that in all benchmarks save the two Tree-Replace programs, FC-TOF performs poorly. Its copying cost is generally higher than that of non-generational collection: ρ > 1.
When the size of the collected region is close to the whole heap size, FC-TOF performs as well as non-generational collection, which is to be expected, but when the collected region is reduced, the collection cost rises dramatically, and ultimately the collector fails. This result is easily explained by the presence of some amount of permanent data in the heap: permanent objects are the oldest objects, hence the collector copies them repeatedly, and if the collected region is so small that it contains nothing but permanent objects, the collector finds no garbage and fails. For the programs Tree-Replace-Binary and Tree-Replace-Random, on the contrary, FC-TOF is the best scheme, and its copying cost is as low as one half that of non-generational collection. In these (admittedly synthetic) benchmarks, the amount of permanent data is relatively small: only the first several layers of a tree from the root are permanent. Moreover, the subtrees are chosen for replacement at random, which means that older subtrees do eventually become garbage. Our intuitive argument in Chapter 1 and Figure 1.1, that the longer the collector waits the more likely it is that an object is garbage, holds in these programs.
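The failure mode just described can be seen in a toy model: when the fixed collected region at the old end of the heap fits entirely within permanent data, an oldest-first collection reclaims nothing. This model and its names are ours, purely for illustration:

```python
def oldest_first_reclaimed(heap_oldest_first, region_words):
    """Toy model: heap is a list of (size_words, permanent) pairs,
    oldest first. An FC-TOF collection examines the oldest
    region_words and reclaims only the non-permanent ones."""
    examined = reclaimed = 0
    for size, permanent in heap_oldest_first:
        if examined >= region_words:
            break
        examined += size
        if not permanent:
            reclaimed += size
    return reclaimed

# 500 permanent words at the old end, 500 words of garbage behind them:
heap = [(100, True)] * 5 + [(100, False)] * 5
assert oldest_first_reclaimed(heap, 400) == 0    # window inside permanent data
assert oldest_first_reclaimed(heap, 800) == 300  # larger window finds garbage
```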
[1] If a curve is entirely missing from a plot, it is because no configuration of that scheme successfully completes execution. (See Section 4.3.1.)
5.1.1.2 The FC-ROF collector

The FC-ROF collector behaves just like the FC-TOF collector for large collected region sizes. However, at smaller collected region sizes, the FC-ROF collector diverges from FC-TOF, and is sometimes able to avoid its poor behavior. For instance, in benchmark Swim, with a heap size of 38733 words (Figure 5.53(a), p. 158), whereas the performance of FC-TOF degrades rapidly as g decreases below 0.6, FC-ROF only degrades to ρ ≈ 1.5, and then returns to lower levels, though never below 1. An interesting reversal occurs for the two programs Tree-Replace-Binary and Tree-Replace-Random, where FC-TOF works well, and FC-ROF degrades precisely when the size of the collected region is reduced below 0.6 (Figure 5.80(a), p. 169). This phenomenon deserves further investigation.
5.1.1.3 The FC-TYF collector The FC-TYF collector joins FC-TOF and FC-ROF in the class of collectors that only work well when the collected region is nearly the whole heap. Otherwise, this collector almost always performs worse than the non-generational collector. This is not surprising; if this collector could work in a steady state, it would never examine objects to the left of its collection window. In other words, once a young object survives one nursery collection, it is “tenured”. It is surprising that this collector works at all, and this must be due to the fact that programs show phase behavior and do not enter a truly steady state.
5.1.1.4 The FC-DOF collector

The FC-DOF collector is consistently better than the other FC schemes, except on the two programs Tree-Replace-Binary and Tree-Replace-Random. It generally works better with smaller collected region sizes, except for Toba and Lambda. For many examined programs there exists, for sufficiently large heaps, a characteristic region size below which the FC-DOF collector works particularly well. This passage from acceptable performance to exceptionally good performance is sometimes abrupt, as for StandardNonInteractive at region size g = 0.9 (Figure 5.25(a), p. 145), and sometimes gradual, as for Richards, when region size is reduced from g = 0.8 to g = 0.4 (Figure 5.85(a), p. 171). The FC-DOF collector is the best among the FC schemes, and we shall investigate its behavior in greater detail in the remainder of this chapter and in Chapter 6.
5.1.1.5 The FC-DYF collector The FC-DYF collector usually performs better than non-generational collection. However, it is always substantially worse than FC-DOF, from which it differs only in the direction of collection window motion. Its copying cost is remarkably insensitive to the region size parameter for those values where the collector succeeds; these are, however, limited to medium values, whereas for small windows as well as for large windows, this collector is prone to failure. Both the poor performance and the failures (as the ultimate manifestation of poor performance) are caused by the fact that the collection window sweeps the heap too fast. The intuition that survivors of one collection should not be in the collected region of the following collection conspires with the younger-to-older window motion direction in FC-DYF to cause fast window motion, and too many visits to the far ends of the heap where there are many survivors.
5.1.1.6 The GYF collectors The curve of the 2GYF collector copying cost and the patterns of the 3GYF scatter points reflect the presence of different modes of operation of the generational collector, corresponding to different numbers of younger-generation collections between successive older-generation collections (Figure 5.50(b), p. 157). The resultant 2GYF copying cost is therefore sometimes lowest for medium values of nursery size, as in InteractiveAllCallsOn (Figure 5.12(b), p. 140), sometimes it is lowest for small values of nursery size, as in Swim (Figure 5.50(b), p. 157), and sometimes it is lowest for large values of nursery size, as in Lambda-Fact6 (Figure 5.44(b), p. 154). It will also be observed that the scatter points for 3GYF are clustered closely, and usually not far below or above the line corresponding to 2GYF. Thus, the copying cost of 3GYF is usually comparable to that of 2GYF. Occasionally we see a configuration of 3GYF with only
half the copying cost of 2GYF with the same nursery size, as in Lambda-Fact6 (Figure 5.45(b), p. 154); on the other hand, there are programs where no 3GYF configuration is better than 2GYF, as in Toba (Figure 5.102(b), p. 179), hence the added overhead² of 3GYF is only detrimental.
5.2 Method for comparing various collection schemes We now summarize the findings of the preceding section. In order to compare different collection schemes, we shall restrict each to operate within the same heap size, but we shall let it assume the best configuration. In other words, we give each scheme the benefit of the best static choice of the size of the collected region, or the sizes of the various generations. The plots in Figures 5.111–5.124 (pp. 183–190) show the copying cost for these best configurations. Along the horizontal axis is the heap size allotted, and along the vertical axis the relative mark/cons ratio ρ. There are 8 curves in each plot, for the collection schemes FC-TOF, FC-TYF, FC-DOF, FC-DYF, FC-ROF, NGYF,³ 2GYF, and 3GYF.
In the plots for Interactive-TextEditing (Figure 5.112), StandardNonInteractive (Figure 5.113), HeapSim (Figure 5.114), Richards (Figure 5.121), and Bloat-Bloat (Figure 5.123), there is a
clear separation between the schemes FC-TOF, FC-TYF, FC-DYF, and FC-ROF, which exhibit high copying cost, and the schemes FC-DOF, NGYF, 2GYF, and 3GYF, which exhibit low copying cost. In other programs the separation is not as clear, and indeed for Tree-Replace, the FC-TOF and FC-ROF schemes work best. There is not a uniform dependence of ρ on heap size (for a specific collection scheme): whereas in Richards ρ decreases with heap size for the four better-performing schemes, in Bloat-Bloat and Swim it increases. Note that the absolute copying cost µ nevertheless decreases—
ρ is relative to the cost of non-generational collection, which itself is decreasing. This fact is
² Recall the elision of the write barrier at collection time (Section 3.2.2).
³ Since NGYF has no configuration parameters, it was not discussed in the preceding section. Heap size uniquely determines the value of ρ reported here for NGYF.
evidenced in Figures 5.125–5.138, pp. 191–198, which display the absolute copying cost, expressed as the mark/cons ratio µ, for the best configuration of each scheme. With respect to copying cost, the newly introduced scheme FC-DOF is competitive with generational collectors for all examined programs, and markedly better for some (Interactive-AllCallsOn, Interactive-TextEditing, StandardNonInteractive, and Richards). For Richards with
a heap size of 4914 words, the copying cost of 3GYF is 10.37 times higher than that of FC-DOF.
Observing the performance of all the collectors, we note that most programs permit region-based schemes to cut down the copying cost of collection significantly with respect to non-generational collection: for Interactive-AllCallsOn to the fraction 0.13; for Interactive-TextEditing to 0.18; for StandardNonInteractive to 0.045; for Richards to 0.029; and for HeapSim, Swim, Tomcatv, and Bloat to the vicinity of 0.2. However, the copying cost of Toba is not reduced
below 0.6, Tree-Replace-Binary and Tree-Replace-Random are only reduced to about 0.5, and the most intransigent Lambda-Fact5 and Lambda-Fact6 are not reduced below 0.7.
5.3 Method for recognizing and measuring the excess retention cost Let us examine a garbage collector that considers a collected region C smaller than the entire heap, in order to limit the amount of work done at each collection. The maximum copying cost is bounded by the size of the collected region, since in the worst case all objects within it may be live. The exact computation of the set of live objects, however, requires an examination (pointer tracing) of the entire heap, and not just C , at a cost not much lower than the cost of collecting the entire heap. Instead, collectors make a conservative approximation: they assume that all objects in the uncollected region U are live, hence that every pointer that crosses from U to C lies on a path from a root, even if that is not actually the case. Excess retention is that part of the collection copying cost which is caused by the inaccuracy of this assumption. On a single collection, the excess retention is manifested as an increase in the
volume of survivors. Since the volume freed is correspondingly reduced, the next collection must take place sooner, and the overall number of collections is increased. To measure how large the excess retention part of the copying cost is, we devised a variant of collector simulation (deployed in all the simulated schemes) that does not assume liveness for the uncollected region. Instead, the survivor set S among the collected objects C is precisely the set of objects that are not known-dead. This manner of simulation is possible because our trace contains accurate liveness information, and can thus act as an oracle guiding the collection. The difference between the actual copying cost, and the copying cost as it would be with this oracle, gives the excess retention cost.
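The oracular measurement can be sketched at the level of a single collection; the set-based object model below is an illustrative simplification (our simulator works on traces), and all names are hypothetical:

```python
# Sketch of measuring excess retention on a single collection.
# Objects are integers; 'edges' maps object -> set of referents.
# 'roots' are root-reachable entry points; 'collected' is region C;
# everything else in 'heap' is the uncollected region U.

def closure(srcs, edges, within):
    """Objects in 'within' reachable from 'srcs' via 'edges'."""
    seen, stack = set(), list(srcs)
    while stack:
        o = stack.pop()
        for t in edges.get(o, ()):
            if t in within and t not in seen:
                seen.add(t)
                stack.append(t)
    return seen

def survivors_actual(heap, collected, roots, edges):
    # Conservative approximation: every object in U = heap - C is assumed
    # live, so C-objects referenced from U or from roots are retained.
    u = heap - collected
    srcs = set(roots) | u
    return closure(srcs, edges, collected) | (set(roots) & collected)

def survivors_oracle(collected, live):
    # Oracle: retain exactly the C-objects that are truly live.
    return collected & live

# Tiny heap: object 3 in C is dead but referenced by dead U-object 2.
heap = {1, 2, 3, 4}
collected = {3, 4}
edges = {1: {4}, 2: {3}}
roots = {1}
live = {1, 4}                        # oracle says 2 and 3 are garbage

actual = survivors_actual(heap, collected, roots, edges)
excess = survivors_oracle(collected, live)
excess_ratio = len(actual) / len(excess)   # object 3 is excess retention
```

Here the conservative collector retains objects 3 and 4, the oracle only object 4, so the excess retention ratio is 2.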
5.3.1 Results Each of the figures for this section (Figures 5.139–5.242, pp. 199–243) consists of two plots, on the left (a) the plot for FC schemes and on the right (b) the plot for GYF schemes. The vertical axis (the ordinate) shows the excess retention for the configuration, expressed as the ratio of the actual copying cost to the oracular copying cost. Note that the variation of the excess retention ratio is so large that we could not maintain the same scale in the plots, thus visual comparisons between FC and GYF are discouraged. The horizontal axis (the abscissa) in the FC plots gives the size of the collected region expressed as the fraction g of heap size. In the GYF plots, the horizontal axis gives the size of the nursery as a fraction of heap size. Let us first note that in some rare configurations, both for the FC and for the GYF schemes, the excess retention ratio is below 1, for instance in Interactive-AllCallsOn, Figure 5.141, p. 200. It may seem odd at first glance that the collector without the benefit of an oracle performs better than the one with it, but recall that, if the oracle makes an actual difference in the determination of the survivor set S, then the two collectors will perform their collections at different instants, and, as we discussed in Section 4.2.2.2, the choice of collection instants introduces a slight “random” effect. Having dispatched this anomaly, we note that excess retention displays widely different behavior for the several benchmarks, and for different schemes.
The excess retention of the GYF schemes is low, below 10% (ratio below 1.1) for most programs, with Lambda (Figures 5.166–5.181, pp. 211–217), Tree-Replace-Random (Figures 5.205–5.212, pp. 228–230), and Toba (Figures 5.234–5.242, pp. 240–243) as exceptions. Excess retention does play a critical role in determining the performance of collectors for some programs. For instance, for Lambda-Fact6, we noticed that none of the collectors has a copying cost much lower than non-generational (Figure 5.48, p. 155), and now we can observe in the corresponding plot of excess retention (Figure 5.180, p. 216) that for a nursery size of one-half heap size, the excess ratio of 2GYF is 2, hence with an oracle the copying cost would be halved from ρ = 1.5 to ρ = 0.75. But the effect on FC-DOF is far stronger: the excess ratio is 50 for a collected region size equal to 0.2 of heap size, hence with an oracle the copying cost would be reduced from ρ = 3 to ρ = 0.06. The previously noted intransigence of Lambda-Fact6 with respect to copying cost thus has its origin in a pointer structure that causes very high excess retention for those schemes (GYF, DOF) that we found to work well on other programs. On the other hand, consider the benchmark Richards with a heap size of 4032 words, for which the copying costs are shown in Figure 5.85 on p. 171, and the excess retention in Figure 5.217 on p. 232. The actual copying cost of FC-DOF decreases as the size of the region collected is decreased from 0.8 down to 0.3, even though the excess retention ratio increases sharply from 1 to 8. Meanwhile the 2GYF collector enjoys negligible excess retention, yet its copying cost is much higher than that of FC-DOF. In summary, excess retention sometimes dominantly influences performance, but sometimes its effects are negligible. Whereas it could have been surmised that schemes that collect regions in the older parts of the heap suffer from prohibitively high excess retention because of the supposed predominance of younger-to-older pointers, this case does not occur in our benchmarks.
5.4 Methods for examining the lower limits of copying cost and the feasibility of adaptation We have seen that the GYF and FC-DOF schemes compete for lowest copying cost, and that FC-DOF is sometimes considerably better. We now examine if copying cost can be reduced
further, and how. We remain within the class of FC schemes—thus each collection collects a region of fixed size—but now we allow more latitude in the choice of where the collected region lies within the heap.
5.4.1 Locally-optimal copying cost We described the locally-optimal schemes in Section 3.1.3. It is straightforward to simulate them in an object-level garbage collection simulator. On each collection, we place the collection window at all possible positions in the heap, and for each position we perform a collection as usual, except for the final step of deleting the garbage found. Thus for each possible position we calculate the corresponding mark/cons ratio, then choose the position with the lowest one, and complete the collection with the window at that position. The number of possible positions is at most the number of objects in the heap, and somewhat smaller because the window must wholly lie within the heap. However, it can be a large number if the heap consists of many small objects, hence this simulation can only be carried out for smaller benchmarks. In plots in Figures 5.243–5.266, pp. 244–253 (at the end of this chapter), we show the copying costs of the locally optimal schemes together with the age-based schemes considered before, for three representative benchmarks, StandardNonInteractive, Lambda-Fact5, and Tree-Replace-Random.
For StandardNonInteractive, the locally optimal collector only matches the performance of FC-DOF, and is in some configurations not as good as FC-DOF (recall from Section 3.1.3 that FC-OPT is not necessarily optimal globally!). For Lambda-Fact5, on the other hand, FC-OPT is significantly better than any of the realistic FC collectors, and any of the generational collectors, and achieves a copying cost of less than half of the non-generational collector’s
cost. The situation is similar for Tree-Replace-Random, where FC-OPT has much lower cost than FC-TOF, the best realistic scheme considered for this benchmark. The explanation is straightforward in this case: the collected region is positioned to include exactly the objects of the subtree that has just been replaced (provided that the size of the collected region is not much larger than the subtree).
5.4.2 Window motion In order to understand the behavior of FC collectors, we observe how the position of the collected region within the heap changes from one collection to the next. We call this window motion. We shall use window motion analysis to guide the design of adaptive collection in Section 5.4.5.2. A window motion plot (such as Figure 5.1, p. 132) shows the location of the collection window in the heap in successive collections. The window is contained between the two marks of the old end and the young end, shown along the vertical axis. The horizontal axis shows the progression of collections, and is gauged according to the amount of allocation, so that the window motion graph can be compared against the live profiles (Section 4.2.3) and the detailed heap profiles (next section). The horizontal displacement of the window from one collection to the next equals the amount allocated between collections. The vertical displacement of the window, for the DOF regime of window motion, equals the amount of survivors of the preceding collection. Therefore, for DOF, the slope of the window motion curve equals the instantaneous mark/cons ratio. Hence good performance shows as a shallow slope, and ideal performance would show as a flat window motion curve.
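The relationship between window motion and the mark/cons ratio can be sketched as follows; the per-collection statistics are invented inputs, not data from our runs:

```python
# Sketch of reconstructing a DOF window-motion curve from per-collection
# statistics: each collection advances the curve horizontally by the words
# allocated since the previous collection and vertically by the words that
# survived it, so the local slope is the instantaneous mark/cons ratio.

def window_motion(collections):
    """collections: list of (allocated_since_last, survivors) pairs."""
    t = y = 0
    points = [(t, y)]
    for allocated, survivors in collections:
        t += allocated       # horizontal: the allocation clock
        y += survivors       # vertical: DOF window advances over survivors
        points.append((t, y))
    return points

pts = window_motion([(1000, 50), (1000, 0), (1000, 200)])
# slope between successive points = survivors / allocated = mark/cons
slopes = [(y2 - y1) / (t2 - t1) for (t1, y1), (t2, y2) in zip(pts, pts[1:])]
```

A flat stretch (slope 0) corresponds to a collection with no survivors, the ideal case described above.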
5.4.3 Demise point analysis Our fully accurate trace permits simulation and evaluation of any garbage collection algorithm. The algorithms differ in the choice of the instants when they perform collections, and, at each collection, in the choice of the collected region. These differences cause the heaps to contain different amounts of garbage, in different positions. The sets of live objects, however, are necessarily the same. Objects in the heap are ordered according to their time of
allocation, hence the live objects are not only equal as a set, but also as an ordered list, across all collectors. When the collection algorithm decides which region it should collect, its goal is to choose that region which has the greatest garbage content. We have observed that garbage collectors differ in the amount and position of garbage that they keep in the heap. However, the component of garbage that just came into being—objects that have most recently died—is equal for all collectors. Indeed, if a collector succeeds in collecting objects immediately after they become garbage (or very soon), that will be the only component of garbage. Therefore, the focus of our method is to identify when and where fresh garbage arises. To answer the temporal question, we imagine a collector that immediately discovers all objects that become garbage, and immediately collects them. Clearly, the information in the trace as described in Section 4.1 suffices to simulate this collector. To answer the spatial question, recall that ordering by age is assumed. Therefore, in a linear list of objects currently in the heap, the collector can mark those it has just discovered are garbage, and report their position before collecting them away. The position is given as the number of words from the young end of the heap, for each word of a garbage object. Since the purpose of the method is to guide designers of garbage collection algorithms in their analysis of collector performance, we turn now to the task of presenting visually the findings of the described ideal collector.
5.4.3.1 Visualization: demise maps We observed that the ideal collector reports the demise of each object as soon as it happens. More precisely, it reports the demise of each word of the object, giving the time of demise exactly, as the amount of allocation in the program up to that point. It is then natural to construct a two-dimensional grid, with one dimension corresponding to the allocation time, and the other to the position of the word in the heap, and the grid point is set if the word at the corresponding heap position died at the corresponding time. We lay the time along the longer,
horizontal, axis, and heap position along the vertical axis. The heap position is measured from the young, most recently allocated end of the heap, in age order. A demise point is shown as a black dot. Thus we have constructed a demise bitmap. In practice, however, we must take an additional step to produce visual displays of demise. Note that the dimensions of the demise bitmap are (Total words allocated) × (Maximum words live), which, even for the relatively small benchmarks in our suite, can be of the order of 10⁶ × 10⁸. Such a large bitmap cannot be graphically rendered directly. Instead, we divide both the time and the space axis into clusters, compute the number of set bits of the demise bitmap in each cluster, and assign greyscale levels to each cluster accordingly. Having considered the capability of the rendering device and the size of the final images, and having experimented with different granularities, we settled on a 100 × 660 greyscale map. The demise greyscale maps thus obtained for our benchmarks are presented in Figures 5.267–5.280 on pp. 254–267. Each panel consists of three aligned displays: the middle display shows the demise points, while the top and bottom displays show heap profiles, which we presently discuss (the top display shows the curves of equal time of allocation; the bottom display shows the curves of equal age).
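The clustering step can be sketched as a simple binning pass; the event format and the small grid in the example are illustrative, not the dissertation's actual map dimensions:

```python
# Sketch of clustering a demise bitmap into a small greyscale map: demise
# events (time, position) are binned into a grid of fixed dimensions, and
# each cell receives the count of demise points falling in it (to be mapped
# to grey levels by the renderer).

def demise_greymap(events, total_alloc, max_live, cols, rows):
    grid = [[0] * cols for _ in range(rows)]
    for t, pos in events:                       # one event per dead word
        c = min(cols - 1, t * cols // total_alloc)
        r = min(rows - 1, pos * rows // max_live)
        grid[r][c] += 1
    return grid

g = demise_greymap([(0, 0), (10, 0), (999, 99)],
                   total_alloc=1000, max_live=100, cols=10, rows=10)
# two early deaths near the young end land in the same cell; one late death
# of an old word lands in the opposite corner
```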
5.4.3.2 Visualization: heap profiles The demise map faithfully presents temporally local information. We know which words have just died in the sense that the map shows where they were. It does not tell us the identity of these words—at which point were they allocated. However, the ideal trace can be used to produce a telling visual display of this information, in the form of a heap profile. The duration of program execution is divided into a number of equal-sized segments, and the heap profile tracks the motion of each segment in the heap from the time of allocation onwards. Since the heap is age-ordered, the objects allocated during one segment remain contiguous in the heap. Thus it is possible to draw nonintersecting lines of constant allocation time for each of the times at the boundary of two allocation segments. If we again show the heap positions with youngest end at the bottom, then subsequent segments are stacked below previous segments
in a banded plot. The height of a band indicates the currently surviving amount of data from a given segment of allocation. The total height of all bands indicates the total amount of live data in the heap. The information in this plot is, in principle, equivalent to that in the demise map, but it has different visual properties. For instance, in benchmark Interactive-AllCallsOn, Figure 5.267, it is easy to see a correspondence between the sharp fall of the lines in the heap profile and the stripe of demise points at time 10200. However, it would be difficult to discern in the heap profile a corresponding feature for the stripe of demise points at time 1000, because it happens when the lines in the heap profile are mostly rising. Thus the two forms of display are complementary. Lastly, the bottom displays of Figures 5.267–5.280 show the heap profiles using lines of constant age, allowing us to observe the relative participation of younger and older objects in the heap, as execution progresses. Heap profiling is not a new concept. It has been used in analyzing heap usage in lazy functional programming languages [Runciman and Wakeling, 1992; Runciman and Röjemo, 1995; Sansom, 1994; Sansom and Peyton Jones, 1994; Röjemo and Runciman, 1996], where, however, the grouping criterion of interest is not the age of objects, but the function-point that allocates them. We earlier proposed a related three-dimensional visualization based on object age [Stefanović, 1993a; Stefanović and Moss, 1994].
5.4.3.3 Visualization: position mortality The demise maps and heap profiles provide a graphical representation of the time-varying behavior of a program, but we nevertheless desire a time-averaged summary. Even if the demise maps show that garbage will occasionally arise anywhere in the heap, we want to know where most of it should be expected, in the hope that a collector might work well by concentrating its efforts there. A useful summary is obtained by counting the number of demise points that occur at each heap position. Note that if we ascribe demise not to an object, as
we usually do, but instead to the heap position the object occupies, then we can speak of position mortality: the likelihood that the word at a certain position (whichever word that is) will become garbage upon the next time increment. In the frequency interpretation of probability, this likelihood is exactly given by the count of demise points. We show the plots of this quantity in Figures 5.281–5.294 on pp. 268–275. The horizontal axis corresponds to heap position, with the young end at 0. The vertical axis, on a logarithmic scale, gives the number of demise points at each heap position.
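The position-mortality summary amounts to a histogram of demise points over heap positions; the event data below are invented for illustration:

```python
# Sketch of the position-mortality summary: count, over the whole run, how
# many demise points fall at each heap position (in words from the young
# end); the counts estimate the likelihood that the word at a given position
# dies on the next time increment.

from collections import Counter

def position_mortality(events):
    """events: iterable of (time, position) demise points."""
    return Counter(pos for _, pos in events)

mortality = position_mortality([(5, 0), (9, 0), (9, 3), (12, 0)])
# position 0 (the young end) accumulates the most demise points here,
# the pattern a youngest-first collector hopes for
```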
5.4.4 Case studies We now look in detail at the spatio-temporal behavior of benchmarks Richards and Lambda-Fact5. In addition to discussing the information directly available in the graphs obtained by the method of the preceding section, we shall relate it to the object lifetime statistics, and the observed performance of different age-based garbage collection schemes. This analysis will serve as the basis for improving the copying cost of collection in the following section.
5.4.4.1 Benchmark Richards The benchmark Richards is characterized by a small amount of live data in comparison with its long duration. The average lifetime of objects is short, which is reflected in the strong band of demise points (black) along the bottom edge of the demise map (Figure 5.277), corresponding to the demise of recently allocated objects. A long-term periodic pattern is clear as well, both in the demise map and in the heap profile, with about four periods (of duration approximately 900000 words) completing. The end of a period is marked in the demise map by a sudden demise of a large number of words in the mid-to-older part of the heap. The heap profile is revealing: the main feature is the wide band of oldest data, which consists of objects allocated entirely within the first segment, or about the first 60000 words of allocation. Once the size of this segment has dropped to about 500 words, it remains constant for the remainder of program execution. In other words, these are the objects that the program uses permanently:
its “static” data. It is worth noting that these objects represent a large fraction of the total live amount, above one third. Indeed, in almost all the benchmarks we examined, there is a significant amount of static data. In addition to the band that survives from the very start of the program, within each period there is an additional accreting set of bands, which die together at the end of their period. For the duration of the period, these bands also act as static data. Underneath we observe a region of objects that survive a short time, enough to reach position 200–300 in the heap, and then die. The presence of a large quantity of static data as the oldest data in the heap has a strong detrimental effect on the performance of garbage collection algorithms that ever choose to collect the older region of the heap. It is remarkable, however, that the FC-DOF algorithm, which repeatedly visits the oldest part of the heap, once for every rightward sweep (Section 3.1.1), works quite well on the Richards benchmark, and outperforms all examined age-based schemes, including generational collection—in spite of the presence of static data. It does so by sweeping the static data fast, and then focusing on the region of high demise. In Section 5.4.5.1, we modify that algorithm to avoid the static data and find that its copying cost improves further.
5.4.4.2 Benchmark Lambda-Fact5 The benchmark Lambda-Fact5 is of particular interest because we found that none of the examined garbage collection algorithms performs well on it. Examination of the heap profile (Figure 5.271) reveals that there is a very large fraction of static data, which hampers older-first algorithms. The static data, allocated during the first 10% of program execution, account for over half of the live amount present in the heap. In addition, there is some accumulation of objects allocated later on, seen as an increasingly wide band of narrowly spaced lines. On the other hand, the demise map shows significant demise areas in middle heap positions, especially as a periodic pattern during the latter half of program execution. The presence of these demise points precludes the dominance of points in the very youngest part of the heap
and hampers youngest-first algorithms, such as traditional generational collection. The plot of the distribution of demise points by position in Figure 5.285 emphasizes this observation. Generational collection, unless given a very large heap space, works poorly with small nursery (youngest generation) sizes because too much data is promoted out of the nursery (Figure 5.36(b)); if the nursery size is increased, the above-mentioned demise regions are included and performance improves, but even then it only begins to approach non-generational collector performance (when the nursery is so large that the collector behaves almost as a non-generational collector). Similarly, age-based collectors with constant collected region size perform poorly on this benchmark (Figure 5.36(a)), no better than the non-generational collector. However, the hypothetical FC-OPT collector is capable of markedly better performance, with a copying cost only 27% of the non-generational cost for heap size 11402 (Figure 5.254).
5.4.5 Adaptive FC collection In this section we examine how to improve the performance of collection by modifying the idea of sweeping the heap with the collection window in successive collections. We are guided by the demise map, and by the observed motion of the collection window in the locally-optimal collector. We undertake exploratory case studies of adaptive policies within the class of age-based collectors with a fixed collected region size. Using the Richards benchmark, we shall examine how to remove the permanent data of a program from consideration; using the Lambda-Fact5 benchmark, we shall examine how to adjust the position of the collected region adaptively in response to collection results. In both cases, we shall find it possible to reduce the copying cost of collection over fixed policies.
5.4.5.1 Eliminating the “static” data in Richards We noted in Section 5.4.4.1 that the presence of a substantial amount of permanent, “static” data in the oldest region of the heap degrades the performance of DOF collection, because collections that examine the oldest region find little garbage, or even fail if they find no garbage. If, however, the collector knew that the oldest region contains such permanent data, it could
skip over them, never trying to collect them. It would thus be able to sweep over only the part of the heap where there is some object demise, and potentially incur a lower copying cost. The danger, of course, is if the “static” region is incorrectly identified as such: if some objects in the region that the collector never considers do in fact become garbage, then two penalties are paid. First, the space occupied by these objects is not reclaimed, and the collector works with a heap effectively smaller than it needs to be; and second, since these objects are considered live, any pointers they might have into the collected region cause excess retention. These dangers are familiar from the parallel situation of premature tenuring of objects in generational garbage collection [Ungar, 1984; Ungar, 1986]. Most programs, however, do have a region of permanent data, and the question is only how big it is. The collector should not immediately exclude the oldest region. The heap profiles (Figures 5.267–5.280) show that a significant amount of allocation occurs before the permanent data consolidate, and during this initial period there is garbage to be collected from among the oldest data as well. Hence, we introduce an adaptive thresholding policy to recognize when the oldest region begins to contain mostly permanent data. The collector initially operates in the usual FC-DOF manner. On each collection upon a window reset, i.e., a collection of an oldest region, the survivor ratio for the collection is computed. As soon as the ratio rises above a threshold value θ, all subsequent window resets skip the oldest region. We take the benchmark Richards for our case study, with a heap size of 4032 words. We use a window size of 1088 words, or g = 0.27, for which the FC-DOF collector achieves a copying cost of µ = 0.03181. With the same heap size, the NONGEN collector has a copying cost of µ = 0.3937, the best configuration of the 2GYF collector has µ = 0.2198 (with nursery size 2056 words), the best configuration of the 3GYF collector has µ = 0.1221 (with nursery size 2214 words), and the best configuration of the FC-DOF collector has µ = 0.02975 (with window size 1572 words, or g = 0.39).
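The thresholding policy can be sketched in isolation from the full simulator; the trace of (reset, survivor-ratio) observations and the function name below are hypothetical stand-ins for the simulator's hooks:

```python
# Sketch of the adaptive thresholding policy: run plain FC-DOF, but once a
# reset collection (one that examines the oldest region) yields a survivor
# ratio above theta, make every later reset skip a 'static_words' prefix of
# the heap, never collecting it again.

def run_adaptive_dof(collections, theta, static_words):
    """collections: (is_reset, survivor_ratio) pairs observed in order.
    Returns, per collection, the old-end position at which a reset places
    the window (None for non-reset collections, whose position is set by
    the ordinary DOF sweep)."""
    skip = False
    positions = []
    for is_reset, ratio in collections:
        if is_reset:
            positions.append(static_words if skip else 0)
            if ratio > theta:
                skip = True        # oldest region now deemed permanent
        else:
            positions.append(None) # non-reset: window continues its sweep
    return positions

trace = [(True, 0.1), (False, 0.2), (True, 0.6), (True, 0.3)]
reset_positions = run_adaptive_dof(trace, theta=0.5, static_words=490)
# resets start at position 0 until the 0.6 ratio trips the threshold;
# the following reset skips the 490-word static region
```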
Our first experiment is to supply to the collector the size of the region of permanent data. This value can be read off the heap profile, and for Richards it is 490 words. We expect that if θ
is chosen too low, then the region of permanent data will be taken out of consideration too soon, whereas if θ is chosen too high, the region will be excluded too late to improve performance. Experimentation with different values of θ shows that the results are not exceedingly sensitive to the choice, as shown in Table 5.1.

Table 5.1. Copying cost dependence on the adaptation threshold

  θ   0.2       0.3       0.4       0.5       0.6       0.7       0.8
  µ   0.02759   0.02734   0.02734   0.02734   0.02760   0.02830   0.03260
The best range of threshold values, 0.3 ≤ θ ≤ 0.5, includes the value 490/1088 ≈ 0.45, and results in a copying cost of µ = 0.02734. Thus, it was possible to reduce the copying cost by
14% with respect to the ordinary FC-DOF collector with the same window size, and in fact by 8% with respect to the best configuration of the ordinary FC-DOF collector. In the second experiment, the collector is not told the amount of permanent data, but tries to guess it using the following heuristic: each time the first collection following a window reset results in a survivor ratio above the threshold, multiply the amount of survivors by a fixed fraction ϕ, and add the result to the current estimate of the amount of permanent data. The collector guesses that the high survivor ratio for this collection arose because there were permanent data in the collected region, and that they were the oldest data in the region. It is willing to subject the remaining fraction 1 − ϕ of survivors to examination at the following reset. Thus the heuristic has two parameters: the threshold θ and the fraction ϕ. Note that this heuristic is only able to increase the estimate, starting from an initial zero, but never to revise the estimate downward. Therefore the danger is that if θ is too low, or if ϕ is too high, the oldest region may prematurely be considered permanent. We found that the best values are 0.8 for the threshold θ and 0.5 for the fraction ϕ, in which case the copying cost is µ = 0.029. This
result leads us to believe that it is possible to devise adaptive schemes to sense the presence and gauge the size of permanent data, although our simplistic heuristic is too sensitive to the choice of the two parameters to be reliably used.
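The estimation heuristic can be sketched as a simple update rule; the survivor amounts in the example are invented for illustration, while θ = 0.8 and ϕ = 0.5 are the best values reported above:

```python
# Sketch of the self-estimating variant: the amount of permanent data is
# guessed online. After each first-collection-after-reset whose survivor
# ratio exceeds theta, a fraction phi of the survivors is added to the
# estimate; the estimate only ever grows, never shrinks.

def update_permanent_estimate(estimate, survivors, window, theta, phi):
    ratio = survivors / window
    if ratio > theta:
        estimate += phi * survivors   # assume the oldest survivors are permanent
    return estimate

est = 0.0
for survivors in (900, 880, 200):     # survivors of successive post-reset collections
    est = update_permanent_estimate(est, survivors, window=1088,
                                    theta=0.8, phi=0.5)
# 900/1088 and 880/1088 exceed 0.8, 200/1088 does not,
# so est = 0.5*900 + 0.5*880 = 890 words deemed permanent
```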
5.4.5.2 Choosing the collected region in Lambda-Fact5 Instead of restricting the window motion to the region recognized as non-permanent data, we can modify the behavior of the DOF collector more subtly. The window motion of DOF is based on the intuitive premise that survivors of a collection should not be subject to the immediately succeeding collection. Therefore the window moved rightward over the survivors (Figure 3.9). This policy has excellent performance when the collection survivor ratio is zero or extremely low (as illustrated in Figure 3.10), which is the case in practice for several benchmarks, including Richards and StandardNonInteractive (Section 5.1.1.4). This policy suffers if the survivor ratio is not very low, however, as in Lambda-Fact5. In spite of the many demise points in the middle range of the heap for an ideal collector (demise map in Figure 5.271), the survivor ratios in actual runs of FC-DOF are high. In Figure 5.2 we show the measurements of the survivor ratios for each collection of the FC-DOF collector on the Lambda-Fact5 benchmark, with a heap size of 11402 words and a window size of 4446 words, for which a copying cost of µ = 1.27 is achieved. The FC-DOF collector performs a total of 150 collections, and each shows up as a point in the scatter plot. The horizontal axis gives the position of the window (indicated by the amount of data between the old end of the window and the old end of the heap), and the vertical axis gives the survivor ratio of the collection (the ratio of the amount of survivors to the window size). The corresponding window motion diagram is in Figure 5.1. Thus, FC-DOF suffers because its window moves too fast across the middle region where it might encounter acceptable survivor ratios, and is too often reset to the old end, window position 0, where survivor ratios are high. Contrast this with the run of the FC-OPT collector with the same heap size and window size, which achieves µ = 0.198. Its window motion graph is in Figure 5.3, and the survivor ratio scatter plot is in Figure 5.4. Observing that FC-OPT moves its window much more slowly, we shall try to emulate its behavior using the following heuristic window motion policy, called the floating window.
Figure 5.1. Window motion in the FC-DOF scheme. (Heap position in words from the old end, vs. time in words allocated; the two traces show the window young end and the window old end.)
The position of the window of collection is defined as the amount of data in the age-ordered heap between the old end of the heap and the old end of the window of collection. Let X be the current position of the window of collection, and let W be the size of the window of collection. Upon collection, we find that an amount S has survived out of W. We decide the position for the next collection as X ← X + ∆X (but resetting X to zero after the young end of the heap is reached). The increment ∆X is determined as follows. If S = 0, then ∆X = 0. Otherwise, note the position p_i of each surviving word in the collection window relative to the old end of the window. The average survivor position is p̄ = (1/S) ∑ p_i. The relative bias r is the relative difference between p̄ and the value W/2 that would be obtained if survivors were evenly spread in the collection window: r = (W/2 − p̄) / (W/2); thus −1 ≤ r ≤ 1. Given an additional fixed external bias e, we set ∆X = (r + e) S. The window is initially placed at X equal to one-half the heap size.

In other words, the motion of the window is governed by two forces. An external bias pushes the window in a fixed direction. In particular, the value 1 for the external bias corresponds to the ordinary FC-DOF collector, pushing the window in the direction of younger objects. The second force uses feedback information from the completed collection. Intuitively, if most of the survivors of the collection were found in one part of the window, then the window should be pushed in the opposite direction. For example, if the window straddles the region of permanent data, then most of the survivors will have come from the older part of the window, and it should be pushed towards younger data. We calculate a simple heuristic measure of the desired push as the average position of the survivors within the window. If the average survivor position is left of center, the force is rightwards, and vice versa.

Recall that our test case is the Lambda-Fact5 benchmark, with a heap size of 11402 words and a window size of 4446 words, or g = 0.39, for which the FC-DOF collector achieves a copying cost of µ = 1.27. With that heap size, the NONGEN collector has a copying cost µ = 0.59, the best configuration of the 2GYF collector has µ = 0.55, the best configuration of the 3GYF collector has µ = 0.56, and the best configuration of the FC-DOF collector has µ = 0.52. Experiments with different values of the external bias e produce the results shown in Table 5.2.

Figure 5.2. Window survivor ratios in the FC-DOF scheme. (Survivor ratio vs. window position.)
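As a concrete illustration, one step of the floating-window position update described above can be sketched in Python. The function name and argument layout are assumptions of the sketch, as is the exact reset condition (here we reset when the window would pass the young end of the heap):

```python
def next_window_position(X, W, heap_size, survivor_positions, e):
    """One step of the 'floating window' policy (sketch).

    X                  -- current window position (words from the heap's old end)
    W                  -- window size in words
    heap_size          -- total heap size in words
    survivor_positions -- position p_i of each surviving word, measured
                          from the old end of the window (0 <= p_i < W)
    e                  -- fixed external bias (e = 1 reproduces plain
                          FC-DOF motion when survivors cluster at the old end)
    """
    S = len(survivor_positions)                # amount survived, in words
    if S == 0:
        dX = 0
    else:
        p_bar = sum(survivor_positions) / S    # average survivor position
        r = (W / 2 - p_bar) / (W / 2)          # relative bias, -1 <= r <= 1
        dX = (r + e) * S                       # feedback force plus external bias
    X = X + dX
    if X + W > heap_size:                      # young end reached: reset
        X = 0
    return X
```

Survivors clustered at the old end of the window give p̄ near 0, hence r near 1, pushing the window toward younger data; survivors spread evenly give r = 0, leaving only the external bias e·S.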
Figure 5.3. Window motion in the FC-OPT scheme. (Heap position in words from the old end, vs. time in words allocated; the two traces show the window young end and the window old end.)
The lowest achieved copying cost with the floating window policy is µ = 0.37. This is 28% below the best of the fixed schemes for any configuration at the same heap size, namely µ = 0.52 for FC-DOF with a window size of 10135 (g = 0.88). We observe the window motion of the adaptive scheme in Figure 5.5, and its scatter plot of window position vs. survivor ratio in Figure 5.6.

Table 5.2. Copying cost dependence on external bias

  e    -0.2   -0.1    0.0    0.1    0.2    0.3    0.4    0.5
  µ     0.64   0.43   0.52   0.46   0.44   0.37   0.40   0.48
Figure 5.4. Window survivor ratios in the FC-OPT scheme. (Survivor ratio vs. window position.)
The floating window policy is relatively robust with respect to the choice of the external bias, since over the entire range −0.1 ≤ e ≤ 0.5 the scheme is better than any of the fixed schemes. On the other hand, the FC-OPT copying cost of µ = 0.198, for the same heap size and the same window size, is still another 47% lower.

A final word of caution is in order. We have seen that adaptive variants of FC schemes, and of FC-DOF in particular, can lower the copying cost of collection, and that for some programs there is still more room for improvement, as indicated by the locally-optimal schemes. However, achieving this further reduction of copying cost requires window motion policies more general than, for instance, the predictable linear sweep of FC-DOF. Recall from Section 3.2.2 that effective pointer filtering at the write barrier exploited the predictability of window motion. Hence, the ultimate reduction of copying cost may well be offset by an increase in pointer management costs.
Figure 5.5. Window motion in the adaptive scheme. (Heap position in words from the old end, vs. time in words allocated; the two traces show the window young end and the window old end.)
Figure 5.6. Window survivor ratios in the adaptive scheme. (Survivor ratio vs. window position.)

5.5 Summary

This chapter examined the copying cost of garbage collection for several age-based algorithms. The main finding is that our proposed FC-DOF collector delivers on its promise of low copying cost under many actual allocation loads. This result could not have been expected given the prior understanding of generational collector performance. Particularly revealing is the fact that excess retention effects, even when very pronounced, do not by themselves rule out using non-youngest-first collectors for our test programs.

The measurements of the performance of generational collectors are interesting in their own right, as they shed light on collector behavior under constraints on total heap size. It is not surprising that small nursery sizes lead to excessive promotion into the older generation, but it is surprising that in many cases the optimal configuration assigns more than half the heap to the nursery. It is possible that more sophisticated generational schemes could perform better: for instance, requiring an object to survive more than one collection in the younger generation before promotion, or, equivalently, dividing the younger generation into steps, possibly combined with adaptive policies, such as Ungar and Jackson's feedback mediation [Ungar and Jackson, 1992]. However, each collection adds to the copying cost, and the final balance cannot be predicted. Unfortunately, the vast dimensions of the space of possible configurations make it difficult to evaluate these collection schemes. (Thus, the selection of their many configurable parameters [Hudson et al., 1991] has largely been a matter of judgement by experience.)

The proposed non-youngest-first schemes can also benefit from more sophisticated configurations. We observed that adaptive positioning of a window of fixed size can reduce copying cost; it is likely that it would be more effective in conjunction with adaptive selection of window size. Most important, however, could be the heuristic discovery of permanent (static) data, and their elimination from consideration by the collector, a task implicitly performed by a multi-generational collector.
Interactive-AllCallsOn
Interactive-AllCallsOn 3 FC-TOF FC-TYF FC-DOF FC-DYF FC-ROF
2.5
Relative mark/cons ratio rho
Relative mark/cons ratio rho
3
2 1.5 1 0.5 0
2GYF 3GYF
2.5 2 1.5 1 0.5 0
0
0.2 0.4 0.6 0.8 Fraction collected g (of total heap size 764)
1
0
(a) FC schemes
0.2 0.4 0.6 0.8 Nursery size (fraction of total heap size 764)
(b) GYF schemes
Figure 5.7. Copying cost comparison: Interactive-AllCallsOn, V
Interactive-AllCallsOn
764.
Interactive-AllCallsOn
3
3 FC-TOF FC-TYF FC-DOF FC-DYF FC-ROF
2.5
Relative mark/cons ratio rho
Relative mark/cons ratio rho
1
2 1.5 1 0.5 0
2GYF 3GYF
2.5 2 1.5 1 0.5 0
0
0.2 0.4 0.6 0.8 Fraction collected g (of total heap size 931)
1
0
(a) FC schemes
0.2 0.4 0.6 0.8 Nursery size (fraction of total heap size 931)
(b) GYF schemes
Figure 5.8. Copying cost comparison: Interactive-AllCallsOn, V
138
931.
1
Interactive-AllCallsOn
Interactive-AllCallsOn 3 FC-TOF FC-TYF FC-DOF FC-DYF FC-ROF
2.5
Relative mark/cons ratio rho
Relative mark/cons ratio rho
3
2 1.5 1 0.5 0
2GYF 3GYF
2.5 2 1.5 1 0.5 0
0
0.2 0.4 0.6 0.8 Fraction collected g (of total heap size 1135)
1
0
(a) FC schemes
0.2 0.4 0.6 0.8 Nursery size (fraction of total heap size 1135)
(b) GYF schemes
Figure 5.9. Copying cost comparison: Interactive-AllCallsOn, V
Interactive-AllCallsOn 3 FC-TOF FC-TYF FC-DOF FC-DYF FC-ROF
2.5
Relative mark/cons ratio rho
Relative mark/cons ratio rho
1135.
Interactive-AllCallsOn
3
2 1.5 1 0.5 0
2GYF 3GYF
2.5 2 1.5 1 0.5 0
0
0.2 0.4 0.6 0.8 Fraction collected g (of total heap size 1384)
1
0
(a) FC schemes
0.2 0.4 0.6 0.8 Nursery size (fraction of total heap size 1384)
1
(b) GYF schemes
Figure 5.10. Copying cost comparison: Interactive-AllCallsOn, V
Interactive-AllCallsOn
1384.
Interactive-AllCallsOn
3
3 FC-TOF FC-TYF FC-DOF FC-DYF FC-ROF
2.5
Relative mark/cons ratio rho
Relative mark/cons ratio rho
1
2 1.5 1 0.5 0
2GYF 3GYF
2.5 2 1.5 1 0.5 0
0
0.2 0.4 0.6 0.8 Fraction collected g (of total heap size 1687)
1
0
(a) FC schemes
0.2 0.4 0.6 0.8 Nursery size (fraction of total heap size 1687)
(b) GYF schemes
Figure 5.11. Copying cost comparison: Interactive-AllCallsOn, V
139
1687.
1
Interactive-AllCallsOn
Interactive-AllCallsOn 3 FC-TOF FC-TYF FC-DOF FC-DYF FC-ROF
2.5
Relative mark/cons ratio rho
Relative mark/cons ratio rho
3
2 1.5 1 0.5 0
2GYF 3GYF
2.5 2 1.5 1 0.5 0
0
0.2 0.4 0.6 0.8 Fraction collected g (of total heap size 2057)
1
0
(a) FC schemes
0.2 0.4 0.6 0.8 Nursery size (fraction of total heap size 2057)
(b) GYF schemes
Figure 5.12. Copying cost comparison: Interactive-AllCallsOn, V
140
2057.
1
Interactive-TextEditing
Interactive-TextEditing 3 FC-TOF FC-TYF FC-DOF FC-DYF FC-ROF
2.5
Relative mark/cons ratio rho
Relative mark/cons ratio rho
3
2 1.5 1 0.5 0
2GYF 3GYF
2.5 2 1.5 1 0.5 0
0
0.2 0.4 0.6 0.8 Fraction collected g (of total heap size 1338)
1
0
(a) FC schemes
0.2 0.4 0.6 0.8 Nursery size (fraction of total heap size 1338)
(b) GYF schemes
Figure 5.13. Copying cost comparison: Interactive-TextEditing, V
Interactive-TextEditing
1338.
Interactive-TextEditing
3
3 FC-TOF FC-TYF FC-DOF FC-DYF FC-ROF
2.5
Relative mark/cons ratio rho
Relative mark/cons ratio rho
1
2 1.5 1 0.5 0
2GYF 3GYF
2.5 2 1.5 1 0.5 0
0
0.2 0.4 0.6 0.8 Fraction collected g (of total heap size 1631)
1
0
(a) FC schemes
0.2 0.4 0.6 0.8 Nursery size (fraction of total heap size 1631)
(b) GYF schemes
Figure 5.14. Copying cost comparison: Interactive-TextEditing, V
141
1631.
1
Interactive-TextEditing
Interactive-TextEditing 3 FC-TOF FC-TYF FC-DOF FC-DYF FC-ROF
2.5
Relative mark/cons ratio rho
Relative mark/cons ratio rho
3
2 1.5 1 0.5 0
2GYF 3GYF
2.5 2 1.5 1 0.5 0
0
0.2 0.4 0.6 0.8 Fraction collected g (of total heap size 1988)
1
0
(a) FC schemes
0.2 0.4 0.6 0.8 Nursery size (fraction of total heap size 1988)
(b) GYF schemes
Figure 5.15. Copying cost comparison: Interactive-TextEditing, V
Interactive-TextEditing 3 FC-TOF FC-TYF FC-DOF FC-DYF FC-ROF
2.5
Relative mark/cons ratio rho
Relative mark/cons ratio rho
1988.
Interactive-TextEditing
3
2 1.5 1 0.5 0
2GYF 3GYF
2.5 2 1.5 1 0.5 0
0
0.2 0.4 0.6 0.8 Fraction collected g (of total heap size 2424)
1
0
(a) FC schemes
0.2 0.4 0.6 0.8 Nursery size (fraction of total heap size 2424)
1
(b) GYF schemes
Figure 5.16. Copying cost comparison: Interactive-TextEditing, V
Interactive-TextEditing
2424.
Interactive-TextEditing
3
3 FC-TOF FC-TYF FC-DOF FC-DYF FC-ROF
2.5
Relative mark/cons ratio rho
Relative mark/cons ratio rho
1
2 1.5 1 0.5 0
2GYF 3GYF
2.5 2 1.5 1 0.5 0
0
0.2 0.4 0.6 0.8 Fraction collected g (of total heap size 2955)
1
0
(a) FC schemes
0.2 0.4 0.6 0.8 Nursery size (fraction of total heap size 2955)
(b) GYF schemes
Figure 5.17. Copying cost comparison: Interactive-TextEditing, V
142
2955.
1
StandardNonInteractive
StandardNonInteractive 3 FC-TOF FC-TYF FC-DOF FC-DYF FC-ROF
2.5
Relative mark/cons ratio rho
Relative mark/cons ratio rho
3
2 1.5 1 0.5 0
2GYF 3GYF
2.5 2 1.5 1 0.5 0
0
0.2 0.4 0.6 0.8 Fraction collected g (of total heap size 1414)
1
0
(a) FC schemes
0.2 0.4 0.6 0.8 Nursery size (fraction of total heap size 1414)
(b) GYF schemes
Figure 5.18. Copying cost comparison: StandardNonInteractive, V
StandardNonInteractive
1414.
StandardNonInteractive
3
3 FC-TOF FC-TYF FC-DOF FC-DYF FC-ROF
2.5
Relative mark/cons ratio rho
Relative mark/cons ratio rho
1
2 1.5 1 0.5 0
2GYF 3GYF
2.5 2 1.5 1 0.5 0
0
0.2 0.4 0.6 0.8 Fraction collected g (of total heap size 1723)
1
0
(a) FC schemes
0.2 0.4 0.6 0.8 Nursery size (fraction of total heap size 1723)
(b) GYF schemes
Figure 5.19. Copying cost comparison: StandardNonInteractive, V
143
1723.
1
StandardNonInteractive
StandardNonInteractive 3 FC-TOF FC-TYF FC-DOF FC-DYF FC-ROF
2.5
Relative mark/cons ratio rho
Relative mark/cons ratio rho
3
2 1.5 1 0.5 0
2GYF 3GYF
2.5 2 1.5 1 0.5 0
0
0.2 0.4 0.6 0.8 Fraction collected g (of total heap size 2101)
1
0
(a) FC schemes
0.2 0.4 0.6 0.8 Nursery size (fraction of total heap size 2101)
(b) GYF schemes
Figure 5.20. Copying cost comparison: StandardNonInteractive, V
StandardNonInteractive 3 FC-TOF FC-TYF FC-DOF FC-DYF FC-ROF
2.5
Relative mark/cons ratio rho
Relative mark/cons ratio rho
2101.
StandardNonInteractive
3
2 1.5 1 0.5 0
2GYF 3GYF
2.5 2 1.5 1 0.5 0
0
0.2 0.4 0.6 0.8 Fraction collected g (of total heap size 2561)
1
0
(a) FC schemes
0.2 0.4 0.6 0.8 Nursery size (fraction of total heap size 2561)
1
(b) GYF schemes
Figure 5.21. Copying cost comparison: StandardNonInteractive, V
StandardNonInteractive
2561.
StandardNonInteractive
3
3 FC-TOF FC-TYF FC-DOF FC-DYF FC-ROF
2.5
Relative mark/cons ratio rho
Relative mark/cons ratio rho
1
2 1.5 1 0.5 0
2GYF 3GYF
2.5 2 1.5 1 0.5 0
0
0.2 0.4 0.6 0.8 Fraction collected g (of total heap size 3122)
1
0
(a) FC schemes
0.2 0.4 0.6 0.8 Nursery size (fraction of total heap size 3122)
(b) GYF schemes
Figure 5.22. Copying cost comparison: StandardNonInteractive, V
144
3122.
1
StandardNonInteractive
StandardNonInteractive 3 FC-TOF FC-TYF FC-DOF FC-DYF FC-ROF
2.5
Relative mark/cons ratio rho
Relative mark/cons ratio rho
3
2 1.5 1 0.5 0
2GYF 3GYF
2.5 2 1.5 1 0.5 0
0
0.2 0.4 0.6 0.8 Fraction collected g (of total heap size 3805)
1
0
(a) FC schemes
0.2 0.4 0.6 0.8 Nursery size (fraction of total heap size 3805)
(b) GYF schemes
Figure 5.23. Copying cost comparison: StandardNonInteractive, V
StandardNonInteractive 3 FC-TOF FC-TYF FC-DOF FC-DYF FC-ROF
2.5
Relative mark/cons ratio rho
Relative mark/cons ratio rho
3805.
StandardNonInteractive
3
2 1.5 1 0.5 0
2GYF 3GYF
2.5 2 1.5 1 0.5 0
0
0.2 0.4 0.6 0.8 Fraction collected g (of total heap size 4639)
1
0
(a) FC schemes
0.2 0.4 0.6 0.8 Nursery size (fraction of total heap size 4639)
1
(b) GYF schemes
Figure 5.24. Copying cost comparison: StandardNonInteractive, V
StandardNonInteractive
4639.
StandardNonInteractive
3
3 FC-TOF FC-TYF FC-DOF FC-DYF FC-ROF
2.5
Relative mark/cons ratio rho
Relative mark/cons ratio rho
1
2 1.5 1 0.5 0
2GYF 3GYF
2.5 2 1.5 1 0.5 0
0
0.2 0.4 0.6 0.8 Fraction collected g (of total heap size 5655)
1
0
(a) FC schemes
0.2 0.4 0.6 0.8 Nursery size (fraction of total heap size 5655)
(b) GYF schemes
Figure 5.25. Copying cost comparison: StandardNonInteractive, V
145
5655.
1
StandardNonInteractive
StandardNonInteractive 3 FC-TOF FC-TYF FC-DOF FC-DYF FC-ROF
2.5
Relative mark/cons ratio rho
Relative mark/cons ratio rho
3
2 1.5 1 0.5 0
2GYF 3GYF
2.5 2 1.5 1 0.5 0
0
0.2 0.4 0.6 0.8 Fraction collected g (of total heap size 6894)
1
0
(a) FC schemes
0.2 0.4 0.6 0.8 Nursery size (fraction of total heap size 6894)
(b) GYF schemes
Figure 5.26. Copying cost comparison: StandardNonInteractive, V
146
6894.
1
HeapSim
HeapSim 3 FC-TOF FC-TYF FC-DOF FC-DYF FC-ROF
2.5
Relative mark/cons ratio rho
Relative mark/cons ratio rho
3
2 1.5 1 0.5 0
2GYF 3GYF
2.5 2 1.5 1 0.5 0
0
0.2 0.4 0.6 0.8 Fraction collected g (of total heap size 113397)
1
0
(a) FC schemes
0.2 0.4 0.6 0.8 Nursery size (fraction of total heap size 113397)
(b) GYF schemes
Figure 5.27. Copying cost comparison: HeapSim, V
HeapSim
113397.
HeapSim
3
3 FC-TOF FC-TYF FC-DOF FC-DYF FC-ROF
2.5
Relative mark/cons ratio rho
Relative mark/cons ratio rho
1
2 1.5 1 0.5 0
2GYF 3GYF
2.5 2 1.5 1 0.5 0
0
0.2 0.4 0.6 0.8 Fraction collected g (of total heap size 138231)
1
0
(a) FC schemes
0.2 0.4 0.6 0.8 Nursery size (fraction of total heap size 138231)
(b) GYF schemes
Figure 5.28. Copying cost comparison: HeapSim, V
147
138231.
1
HeapSim
HeapSim 3 FC-TOF FC-TYF FC-DOF FC-DYF FC-ROF
2.5
Relative mark/cons ratio rho
Relative mark/cons ratio rho
3
2 1.5 1 0.5 0
2GYF 3GYF
2.5 2 1.5 1 0.5 0
0
0.2 0.4 0.6 0.8 Fraction collected g (of total heap size 168503)
1
0
(a) FC schemes
0.2 0.4 0.6 0.8 Nursery size (fraction of total heap size 168503)
(b) GYF schemes
Figure 5.29. Copying cost comparison: HeapSim, V
HeapSim 3 FC-TOF FC-TYF FC-DOF FC-DYF FC-ROF
2.5
Relative mark/cons ratio rho
Relative mark/cons ratio rho
168503.
HeapSim
3
2 1.5 1 0.5 0
2GYF 3GYF
2.5 2 1.5 1 0.5 0
0
0.2 0.4 0.6 0.8 Fraction collected g (of total heap size 205405)
1
0
(a) FC schemes
0.2 0.4 0.6 0.8 Nursery size (fraction of total heap size 205405)
1
(b) GYF schemes
Figure 5.30. Copying cost comparison: HeapSim, V
HeapSim
205405.
HeapSim
3
3 FC-TOF FC-TYF FC-DOF FC-DYF FC-ROF
2.5
Relative mark/cons ratio rho
Relative mark/cons ratio rho
1
2 1.5 1 0.5 0
2GYF 3GYF
2.5 2 1.5 1 0.5 0
0
0.2 0.4 0.6 0.8 Fraction collected g (of total heap size 250387)
1
0
(a) FC schemes
0.2 0.4 0.6 0.8 Nursery size (fraction of total heap size 250387)
(b) GYF schemes
Figure 5.31. Copying cost comparison: HeapSim, V
148
250387.
1
HeapSim
HeapSim 3 FC-TOF FC-TYF FC-DOF FC-DYF FC-ROF
2.5
Relative mark/cons ratio rho
Relative mark/cons ratio rho
3
2 1.5 1 0.5 0
2GYF 3GYF
2.5 2 1.5 1 0.5 0
0
0.2 0.4 0.6 0.8 Fraction collected g (of total heap size 305221)
1
0
(a) FC schemes
0.2 0.4 0.6 0.8 Nursery size (fraction of total heap size 305221)
(b) GYF schemes
Figure 5.32. Copying cost comparison: HeapSim, V
HeapSim
305221.
HeapSim
3
3 FC-TOF FC-TYF FC-DOF FC-DYF FC-ROF
2.5
Relative mark/cons ratio rho
Relative mark/cons ratio rho
1
2 1.5 1 0.5 0
2GYF 3GYF
2.5 2 1.5 1 0.5 0
0
0.2 0.4 0.6 0.8 Fraction collected g (of total heap size 372063)
1
0
(a) FC schemes
0.2 0.4 0.6 0.8 Nursery size (fraction of total heap size 372063)
(b) GYF schemes
Figure 5.33. Copying cost comparison: HeapSim, V
149
372063.
1
Lambda-Fact5
Lambda-Fact5
4
4 FC-TOF FC-TYF FC-DOF FC-DYF FC-ROF
3
2GYF 3GYF
3.5 Relative mark/cons ratio rho
Relative mark/cons ratio rho
3.5
2.5 2 1.5 1 0.5
3 2.5 2 1.5 1 0.5
0
0 0
0.2 0.4 0.6 0.8 Fraction collected g (of total heap size 7673)
1
0
(a) FC schemes
0.2 0.4 0.6 0.8 Nursery size (fraction of total heap size 7673)
(b) GYF schemes
Figure 5.34. Copying cost comparison: Lambda-Fact5, V
Lambda-Fact5
7673.
Lambda-Fact5
4
4 FC-TOF FC-TYF FC-DOF FC-DYF FC-ROF
3
2GYF 3GYF
3.5 Relative mark/cons ratio rho
3.5 Relative mark/cons ratio rho
1
2.5 2 1.5 1 0.5
3 2.5 2 1.5 1 0.5
0
0 0
0.2 0.4 0.6 0.8 Fraction collected g (of total heap size 9354)
1
0
(a) FC schemes
0.2 0.4 0.6 0.8 Nursery size (fraction of total heap size 9354)
(b) GYF schemes
Figure 5.35. Copying cost comparison: Lambda-Fact5, V
150
9354.
1
Lambda-Fact5
Lambda-Fact5
4
4 FC-TOF FC-TYF FC-DOF FC-DYF FC-ROF
3
2GYF 3GYF
3.5 Relative mark/cons ratio rho
Relative mark/cons ratio rho
3.5
2.5 2 1.5 1 0.5
3 2.5 2 1.5 1 0.5
0
0 0
0.2 0.4 0.6 0.8 Fraction collected g (of total heap size 11402)
1
0
(a) FC schemes
0.2 0.4 0.6 0.8 Nursery size (fraction of total heap size 11402)
(b) GYF schemes
Figure 5.36. Copying cost comparison: Lambda-Fact5, V
Lambda-Fact5 4 FC-TOF FC-TYF FC-DOF FC-DYF FC-ROF
3
2GYF 3GYF
3.5 Relative mark/cons ratio rho
3.5 Relative mark/cons ratio rho
11402.
Lambda-Fact5
4
2.5 2 1.5 1 0.5
3 2.5 2 1.5 1 0.5
0
0 0
0.2 0.4 0.6 0.8 Fraction collected g (of total heap size 13899)
1
0
(a) FC schemes
0.2 0.4 0.6 0.8 Nursery size (fraction of total heap size 13899)
1
(b) GYF schemes
Figure 5.37. Copying cost comparison: Lambda-Fact5, V
Lambda-Fact5
13899.
Lambda-Fact5
4
4 FC-TOF FC-TYF FC-DOF FC-DYF FC-ROF
3
2GYF 3GYF
3.5 Relative mark/cons ratio rho
3.5 Relative mark/cons ratio rho
1
2.5 2 1.5 1 0.5
3 2.5 2 1.5 1 0.5
0
0 0
0.2 0.4 0.6 0.8 Fraction collected g (of total heap size 16943)
1
0
(a) FC schemes
0.2 0.4 0.6 0.8 Nursery size (fraction of total heap size 16943)
(b) GYF schemes
Figure 5.38. Copying cost comparison: Lambda-Fact5, V
151
16943.
1
Lambda-Fact5
Lambda-Fact5
4
4 FC-TOF FC-TYF FC-DOF FC-DYF FC-ROF
3
2GYF 3GYF
3.5 Relative mark/cons ratio rho
Relative mark/cons ratio rho
3.5
2.5 2 1.5 1 0.5
3 2.5 2 1.5 1 0.5
0
0 0
0.2 0.4 0.6 0.8 Fraction collected g (of total heap size 20654)
1
0
(a) FC schemes
0.2 0.4 0.6 0.8 Nursery size (fraction of total heap size 20654)
(b) GYF schemes
Figure 5.39. Copying cost comparison: Lambda-Fact5, V
Lambda-Fact5
20654.
Lambda-Fact5
4
4 FC-TOF FC-TYF FC-DOF FC-DYF FC-ROF
3
2GYF 3GYF
3.5 Relative mark/cons ratio rho
3.5 Relative mark/cons ratio rho
1
2.5 2 1.5 1 0.5
3 2.5 2 1.5 1 0.5
0
0 0
0.2 0.4 0.6 0.8 Fraction collected g (of total heap size 25177)
1
0
(a) FC schemes
0.2 0.4 0.6 0.8 Nursery size (fraction of total heap size 25177)
(b) GYF schemes
Figure 5.40. Copying cost comparison: Lambda-Fact5, V
152
25177.
1
Lambda-Fact6
Lambda-Fact6
4
4 FC-TOF FC-TYF FC-DOF FC-DYF FC-ROF
3
2GYF 3GYF
3.5 Relative mark/cons ratio rho
Relative mark/cons ratio rho
3.5
2.5 2 1.5 1 0.5
3 2.5 2 1.5 1 0.5
0
0 0
0.2 0.4 0.6 0.8 Fraction collected g (of total heap size 16669)
1
0
(a) FC schemes
0.2 0.4 0.6 0.8 Nursery size (fraction of total heap size 16669)
(b) GYF schemes
Figure 5.41. Copying cost comparison: Lambda-Fact6, V
Lambda-Fact6
16669.
Lambda-Fact6
4
4 FC-TOF FC-TYF FC-DOF FC-DYF FC-ROF
3
2GYF 3GYF
3.5 Relative mark/cons ratio rho
3.5 Relative mark/cons ratio rho
1
2.5 2 1.5 1 0.5
3 2.5 2 1.5 1 0.5
0
0 0
0.2 0.4 0.6 0.8 Fraction collected g (of total heap size 20320)
1
0
(a) FC schemes
0.2 0.4 0.6 0.8 Nursery size (fraction of total heap size 20320)
(b) GYF schemes
Figure 5.42. Copying cost comparison: Lambda-Fact6, V
153
20320.
1
Lambda-Fact6
Lambda-Fact6
4
4 FC-TOF FC-TYF FC-DOF FC-DYF FC-ROF
3
2GYF 3GYF
3.5 Relative mark/cons ratio rho
Relative mark/cons ratio rho
3.5
2.5 2 1.5 1 0.5
3 2.5 2 1.5 1 0.5
0
0 0
0.2 0.4 0.6 0.8 Fraction collected g (of total heap size 24770)
1
0
(a) FC schemes
0.2 0.4 0.6 0.8 Nursery size (fraction of total heap size 24770)
(b) GYF schemes
Figure 5.43. Copying cost comparison: Lambda-Fact6, V
Lambda-Fact6 4 FC-TOF FC-TYF FC-DOF FC-DYF FC-ROF
3
2GYF 3GYF
3.5 Relative mark/cons ratio rho
3.5 Relative mark/cons ratio rho
24770.
Lambda-Fact6
4
2.5 2 1.5 1 0.5
3 2.5 2 1.5 1 0.5
0
0 0
0.2 0.4 0.6 0.8 Fraction collected g (of total heap size 30194)
1
0
(a) FC schemes
0.2 0.4 0.6 0.8 Nursery size (fraction of total heap size 30194)
1
(b) GYF schemes
Figure 5.44. Copying cost comparison: Lambda-Fact6, V
Lambda-Fact6
30194.
Lambda-Fact6
4
4 FC-TOF FC-TYF FC-DOF FC-DYF FC-ROF
3
2GYF 3GYF
3.5 Relative mark/cons ratio rho
3.5 Relative mark/cons ratio rho
1
2.5 2 1.5 1 0.5
3 2.5 2 1.5 1 0.5
0
0 0
0.2 0.4 0.6 0.8 Fraction collected g (of total heap size 36807)
1
0
(a) FC schemes
0.2 0.4 0.6 0.8 Nursery size (fraction of total heap size 36807)
(b) GYF schemes
Figure 5.45. Copying cost comparison: Lambda-Fact6, V
154
36807.
1
Lambda-Fact6
Lambda-Fact6
4
4 FC-TOF FC-TYF FC-DOF FC-DYF FC-ROF
3
2GYF 3GYF
3.5 Relative mark/cons ratio rho
Relative mark/cons ratio rho
3.5
2.5 2 1.5 1 0.5
3 2.5 2 1.5 1 0.5
0
0 0
0.2 0.4 0.6 0.8 Fraction collected g (of total heap size 44868)
1
0
(a) FC schemes
0.2 0.4 0.6 0.8 Nursery size (fraction of total heap size 44868)
(b) GYF schemes
Figure 5.46. Copying cost comparison: Lambda-Fact6, V
Lambda-Fact6 4 FC-TOF FC-TYF FC-DOF FC-DYF FC-ROF
3
2GYF 3GYF
3.5 Relative mark/cons ratio rho
3.5 Relative mark/cons ratio rho
44868.
Lambda-Fact6
4
2.5 2 1.5 1 0.5
3 2.5 2 1.5 1 0.5
0
0 0
0.2 0.4 0.6 0.8 Fraction collected g (of total heap size 54693)
1
0
(a) FC schemes
0.2 0.4 0.6 0.8 Nursery size (fraction of total heap size 54693)
1
(b) GYF schemes
Figure 5.47. Copying cost comparison: Lambda-Fact6, V
Lambda-Fact6
54693.
Lambda-Fact6
4
4 FC-TOF FC-TYF FC-DOF FC-DYF FC-ROF
3
2GYF 3GYF
3.5 Relative mark/cons ratio rho
3.5 Relative mark/cons ratio rho
1
2.5 2 1.5 1 0.5
3 2.5 2 1.5 1 0.5
0
0 0
0.2 0.4 0.6 0.8 Fraction collected g (of total heap size 66671)
1
0
(a) FC schemes
0.2 0.4 0.6 0.8 Nursery size (fraction of total heap size 66671)
(b) GYF schemes
Figure 5.48. Copying cost comparison: Lambda-Fact6, V
155
66671.
1
Lambda-Fact6
Lambda-Fact6
4
4 FC-TOF FC-TYF FC-DOF FC-DYF FC-ROF
3
2GYF 3GYF
3.5 Relative mark/cons ratio rho
Relative mark/cons ratio rho
3.5
2.5 2 1.5 1 0.5
3 2.5 2 1.5 1 0.5
0
0 0
0.2 0.4 0.6 0.8 Fraction collected g (of total heap size 81272)
1
0
(a) FC schemes
0.2 0.4 0.6 0.8 Nursery size (fraction of total heap size 81272)
(b) GYF schemes
Figure 5.49. Copying cost comparison: Lambda-Fact6, V
156
81272.
1
Swim
Swim 3 FC-TOF FC-TYF FC-DOF FC-DYF FC-ROF
2.5
Relative mark/cons ratio rho
Relative mark/cons ratio rho
3
2 1.5 1 0.5 0
2GYF 3GYF
2.5 2 1.5 1 0.5 0
0
0.2 0.4 0.6 0.8 Fraction collected g (of total heap size 21383)
1
0
(a) FC schemes
0.2 0.4 0.6 0.8 Nursery size (fraction of total heap size 21383)
(b) GYF schemes
Figure 5.50. Copying cost comparison: Swim, V
Swim
21383.
Swim
3
3 FC-TOF FC-TYF FC-DOF FC-DYF FC-ROF
2.5
Relative mark/cons ratio rho
Relative mark/cons ratio rho
1
2 1.5 1 0.5 0
2GYF 3GYF
2.5 2 1.5 1 0.5 0
0
0.2 0.4 0.6 0.8 Fraction collected g (of total heap size 26066)
1
0
(a) FC schemes
0.2 0.4 0.6 0.8 Nursery size (fraction of total heap size 26066)
(b) GYF schemes
Figure 5.51. Copying cost comparison: Swim, V
157
26066.
1
Swim
Swim 3 FC-TOF FC-TYF FC-DOF FC-DYF FC-ROF
2.5
Relative mark/cons ratio rho
Relative mark/cons ratio rho
3
2 1.5 1 0.5 0
2GYF 3GYF
2.5 2 1.5 1 0.5 0
0
0.2 0.4 0.6 0.8 Fraction collected g (of total heap size 31774)
1
0
(a) FC schemes
0.2 0.4 0.6 0.8 Nursery size (fraction of total heap size 31774)
(b) GYF schemes
Figure 5.52. Copying cost comparison: Swim, V
Swim 3 FC-TOF FC-TYF FC-DOF FC-DYF FC-ROF
2.5
Relative mark/cons ratio rho
Relative mark/cons ratio rho
31774.
Swim
3
2 1.5 1 0.5 0
2GYF 3GYF
2.5 2 1.5 1 0.5 0
0
0.2 0.4 0.6 0.8 Fraction collected g (of total heap size 38733)
1
0
(a) FC schemes
0.2 0.4 0.6 0.8 Nursery size (fraction of total heap size 38733)
1
(b) GYF schemes
Figure 5.53. Copying cost comparison: Swim, V
Swim
38733.
Swim
3
3 FC-TOF FC-TYF FC-DOF FC-DYF FC-ROF
2.5
Relative mark/cons ratio rho
Relative mark/cons ratio rho
1
2 1.5 1 0.5 0
2GYF 3GYF
2.5 2 1.5 1 0.5 0
0
0.2 0.4 0.6 0.8 Fraction collected g (of total heap size 47215)
1
0
(a) FC schemes
0.2 0.4 0.6 0.8 Nursery size (fraction of total heap size 47215)
(b) GYF schemes
Figure 5.54. Copying cost comparison: Swim, V
158
47215.
1
Swim
Swim 3 FC-TOF FC-TYF FC-DOF FC-DYF FC-ROF
2.5
Relative mark/cons ratio rho
Relative mark/cons ratio rho
3
2 1.5 1 0.5 0
2GYF 3GYF
2.5 2 1.5 1 0.5 0
0
0.2 0.4 0.6 0.8 Fraction collected g (of total heap size 57555)
1
0
(a) FC schemes
0.2 0.4 0.6 0.8 Nursery size (fraction of total heap size 57555)
(b) GYF schemes
Figure 5.55. Copying cost comparison: Swim, V
Swim 3 FC-TOF FC-TYF FC-DOF FC-DYF FC-ROF
2.5
Relative mark/cons ratio rho
Relative mark/cons ratio rho
57555.
Swim
3
2 1.5 1 0.5 0
2GYF 3GYF
2.5 2 1.5 1 0.5 0
0
0.2 0.4 0.6 0.8 Fraction collected g (of total heap size 70160)
1
0
(a) FC schemes
0.2 0.4 0.6 0.8 Nursery size (fraction of total heap size 70160)
1
(b) GYF schemes
Figure 5.56. Copying cost comparison: Swim, V
Swim
70160.
Swim
3
3 FC-TOF FC-TYF FC-DOF FC-DYF FC-ROF
2.5
Relative mark/cons ratio rho
Relative mark/cons ratio rho
1
2 1.5 1 0.5 0
2GYF 3GYF
2.5 2 1.5 1 0.5 0
0
0.2 0.4 0.6 0.8 Fraction collected g (of total heap size 85524)
1
0
(a) FC schemes
0.2 0.4 0.6 0.8 Nursery size (fraction of total heap size 85524)
(b) GYF schemes
Figure 5.57. Copying cost comparison: Swim, V
159
85524.
1
Swim
Swim 3 FC-TOF FC-TYF FC-DOF FC-DYF FC-ROF
2.5
Relative mark/cons ratio rho
Relative mark/cons ratio rho
3
2 1.5 1 0.5 0
2GYF 3GYF
2.5 2 1.5 1 0.5 0
0
0.2 0.4 0.6 0.8 Fraction collected g (of total heap size 104254)
1
0
(a) FC schemes
0.2 0.4 0.6 0.8 Nursery size (fraction of total heap size 104254)
(b) GYF schemes
Figure 5.58. Copying cost comparison: Swim, V
160
104254.
1
Tomcatv
Tomcatv 3 FC-TOF FC-TYF FC-DOF FC-DYF FC-ROF
2.5
Relative mark/cons ratio rho
Relative mark/cons ratio rho
3
2 1.5 1 0.5 0
2GYF 3GYF
2.5 2 1.5 1 0.5 0
0
0.2 0.4 0.6 0.8 Fraction collected g (of total heap size 37292)
1
0
(a) FC schemes
0.2 0.4 0.6 0.8 Nursery size (fraction of total heap size 37292)
(b) GYF schemes
Figure 5.59. Copying cost comparison: Tomcatv, V
Tomcatv
37292.
Tomcatv
3
3 FC-TOF FC-TYF FC-DOF FC-DYF FC-ROF
2.5
Relative mark/cons ratio rho
Relative mark/cons ratio rho
1
2 1.5 1 0.5 0
2GYF 3GYF
2.5 2 1.5 1 0.5 0
0
0.2 0.4 0.6 0.8 Fraction collected g (of total heap size 45459)
1
0
[Figure residue: plots omitted. Each copying cost comparison figure shows the relative mark/cons ratio ρ in two panels: (a) FC schemes (FC-TOF, FC-TYF, FC-DOF, FC-DYF, FC-ROF) versus the fraction collected g, and (b) GYF schemes (2GYF, 3GYF) versus the nursery size, each expressed as a fraction of the total heap size V.]

Figure 5.60. Copying cost comparison: Tomcatv, V = 45459.
Figure 5.61. Copying cost comparison: Tomcatv, V = 55414.
Figure 5.62. Copying cost comparison: Tomcatv, V = 67550.
Figure 5.63. Copying cost comparison: Tomcatv, V = 82343.
Figure 5.64. Copying cost comparison: Tomcatv, V = 100376.
Figure 5.65. Copying cost comparison: Tomcatv, V = 122358.
Figure 5.66. Copying cost comparison: Tomcatv, V = 149154.
Figure 5.67. Copying cost comparison: Tomcatv, V = 181818.
[Figure residue: plots omitted. Panels as above: (a) FC schemes, relative mark/cons ratio ρ versus fraction collected g; (b) GYF schemes, ρ versus nursery size, as fractions of total heap size V.]

Figure 5.68. Copying cost comparison: Tree-Replace-Binary, V = 11926.
Figure 5.69. Copying cost comparison: Tree-Replace-Binary, V = 14538.
Figure 5.70. Copying cost comparison: Tree-Replace-Binary, V = 17722.
Figure 5.71. Copying cost comparison: Tree-Replace-Binary, V = 21603.
Figure 5.72. Copying cost comparison: Tree-Replace-Binary, V = 26334.
[Figure residue: plots omitted.]

Figure 5.73. Copying cost comparison: Tree-Replace-Random, V = 15985.
Figure 5.74. Copying cost comparison: Tree-Replace-Random, V = 19486.
Figure 5.75. Copying cost comparison: Tree-Replace-Random, V = 23754.
Figure 5.76. Copying cost comparison: Tree-Replace-Random, V = 28956.
Figure 5.77. Copying cost comparison: Tree-Replace-Random, V = 35297.
Figure 5.78. Copying cost comparison: Tree-Replace-Random, V = 43027.
Figure 5.79. Copying cost comparison: Tree-Replace-Random, V = 52450.
Figure 5.80. Copying cost comparison: Tree-Replace-Random, V = 63936.
[Figure residue: plots omitted.]

Figure 5.81. Copying cost comparison: Richards, V = 1826.
Figure 5.82. Copying cost comparison: Richards, V = 2225.
Figure 5.83. Copying cost comparison: Richards, V = 2713.
Figure 5.84. Copying cost comparison: Richards, V = 3307.
Figure 5.85. Copying cost comparison: Richards, V = 4032.
Figure 5.86. Copying cost comparison: Richards, V = 4914.
Figure 5.87. Copying cost comparison: Richards, V = 5991.
Figure 5.88. Copying cost comparison: Richards, V = 7303.
Figure 5.89. Copying cost comparison: Richards, V = 8902.
[Figure residue: plots omitted.]

Figure 5.90. Copying cost comparison: JavaBYTEmark, V = 72807.
Figure 5.91. Copying cost comparison: JavaBYTEmark, V = 88752.
Figure 5.92. Copying cost comparison: JavaBYTEmark, V = 108188.
Figure 5.93. Copying cost comparison: JavaBYTEmark, V = 131881.
[Figure residue: plots omitted.]

Figure 5.94. Copying cost comparison: Bloat-Bloat, V = 246766.
Figure 5.95. Copying cost comparison: Bloat-Bloat, V = 300808.
Figure 5.96. Copying cost comparison: Bloat-Bloat, V = 366682.
Figure 5.97. Copying cost comparison: Bloat-Bloat, V = 446984.
Figure 5.98. Copying cost comparison: Bloat-Bloat, V = 544872.
Figure 5.99. Copying cost comparison: Bloat-Bloat, V = 664195.
Figure 5.100. Copying cost comparison: Bloat-Bloat, V = 809650.
Figure 5.101. Copying cost comparison: Bloat-Bloat, V = 986959.
[Figure residue: plots omitted.]

Figure 5.102. Copying cost comparison: Toba, V = 353843.
Figure 5.103. Copying cost comparison: Toba, V = 431335.
Figure 5.104. Copying cost comparison: Toba, V = 525794.
Figure 5.105. Copying cost comparison: Toba, V = 640941.
Figure 5.106. Copying cost comparison: Toba, V = 781303.
Figure 5.107. Copying cost comparison: Toba, V = 952404.
Figure 5.108. Copying cost comparison: Toba, V = 1160976.
Figure 5.109. Copying cost comparison: Toba, V = 1415223.
Figure 5.110. Copying cost comparison: Toba, V = 1725148.
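The figures above plot the mark/cons ratio, i.e., words copied by the collector per word allocated (the "cons" in the classical Lisp sense). As a hedged illustration of how such a quantity could be measured from an allocation trace, here is a minimal sketch assuming a simple fixed-size-nursery collector; the function name, trace format, and all numbers are inventions for this example, not the dissertation's simulator.

```python
def mark_cons_ratio(lifetimes, nursery_size):
    """Mark/cons ratio mu = words copied / words allocated.

    lifetimes: list of (alloc_time, size, death_time) tuples, where
    times are measured in words allocated so far. A collection occurs
    each time `nursery_size` words have been allocated; an object still
    live at the first collection after its allocation is copied once.
    """
    allocated = sum(size for _, size, _ in lifetimes)
    copied = 0
    for alloc, size, death in lifetimes:
        # Allocation time of the first collection after this object's birth.
        next_gc = ((alloc // nursery_size) + 1) * nursery_size
        if death > next_gc:  # still reachable when the nursery fills
            copied += size
    return copied / allocated
```

With a made-up trace of three 10-word objects, only the long-lived one survives its first collection, so mu = 10/30:

```python
mu = mark_cons_ratio([(0, 10, 5), (10, 10, 500), (20, 10, 25)], 100)
```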
[Figure residue: plots omitted. Each best configuration comparison figure shows, for each scheme (FC-TOF, FC-TYF, FC-DOF, FC-DYF, FC-ROF, NGYF, 2GYF, 3GYF), the relative mark/cons ratio ρ of its best configuration versus heap size in words.]

Figure 5.111. Best configuration comparison: Interactive-AllCallsOn.
Figure 5.112. Best configuration comparison: Interactive-TextEditing.
Figure 5.113. Best configuration comparison: StandardNonInteractive.
Figure 5.114. Best configuration comparison: HeapSim.
Figure 5.115. Best configuration comparison: Lambda-Fact5.
Figure 5.116. Best configuration comparison: Lambda-Fact6.
Figure 5.117. Best configuration comparison: Swim.
Figure 5.118. Best configuration comparison: Tomcatv.
Figure 5.119. Best configuration comparison: Tree-Replace-Binary.
Figure 5.120. Best configuration comparison: Tree-Replace-Random.
Figure 5.121. Best configuration comparison: Richards.
Figure 5.122. Best configuration comparison: JavaBYTEmark.
Figure 5.123. Best configuration comparison: Bloat-Bloat.
Figure 5.124. Best configuration comparison: Toba.
[Figure residue: plots omitted. Each figure shows, for each scheme (FC-TOF, FC-TYF, FC-DOF, FC-DYF, FC-ROF, NGYF, 2GYF, 3GYF), the mark/cons ratio μ of its best configuration versus heap size in words.]

Figure 5.125. Best configuration comparison: Interactive-AllCallsOn.
Figure 5.126. Best configuration comparison: Interactive-TextEditing.
Figure 5.127. Best configuration comparison: StandardNonInteractive.
Figure 5.128. Best configuration comparison: HeapSim.
Figure 5.129. Best configuration comparison: Lambda-Fact5.
Figure 5.130. Best configuration comparison: Lambda-Fact6.
Figure 5.131. Best configuration comparison: Swim.
Figure 5.132. Best configuration comparison: Tomcatv.
Figure 5.133. Best configuration comparison: Tree-Replace-Binary.
Figure 5.134. Best configuration comparison: Tree-Replace-Random.
Figure 5.135. Best configuration comparison: Richards.
Figure 5.136. Best configuration comparison: JavaBYTEmark.
Figure 5.137. Best configuration comparison: Bloat-Bloat.
Figure 5.138. Best configuration comparison: Toba.
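The best configuration curves summarize a family of parameterized runs: at each heap size, the plotted point for a scheme is, presumably, the lowest mark/cons ratio achieved over that scheme's free parameter (the fraction collected g, or the nursery size fraction). A minimal sketch of that per-heap-size selection, with an invented function name and made-up ratios:

```python
def best_configuration(results):
    """Pick the best (lowest mark/cons ratio) setting of a scheme's
    free parameter at one heap size.

    results: dict mapping a parameter value (e.g. nursery fraction)
    to the measured mark/cons ratio mu at that setting.
    Returns (best parameter value, its ratio).
    """
    best_param = min(results, key=results.get)
    return best_param, results[best_param]
```

For example, `best_configuration({0.2: 1.4, 0.4: 0.9, 0.6: 1.1})` selects the 0.4 setting; repeating the selection across heap sizes yields one curve per scheme.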
Interactive-AllCallsOn
Interactive-AllCallsOn
1.003
1.02 FC-TOF FC-TYF FC-DOF FC-DYF FC-ROF
2GYF 3GYF
1.015 1.01
1.002
Retention ratio
Retention ratio
1.0025
1.0015 1.001
1.005 1 0.995 0.99
1.0005
0.985
1
0.98 0
0.2 0.4 0.6 0.8 Fraction collected g (of total heap size 764)
1
0
(a) FC schemes
0.2 0.4 0.6 0.8 Nursery size (fraction of total heap size 764)
(b) GYF schemes
Figure 5.139. Excess retention ratios: Interactive-AllCallsOn, V
Interactive-AllCallsOn
764.
Interactive-AllCallsOn
1.025
1.005 FC-TOF FC-TYF FC-DOF FC-DYF FC-ROF
1.02
2GYF 3GYF
1.004 1.003
1.015
Retention ratio
Retention ratio
1
1.01 1.005
1.002 1.001 1
1
0.999
0.995
0.998 0
0.2 0.4 0.6 0.8 Fraction collected g (of total heap size 931)
1
0
(a) FC schemes
0.2 0.4 0.6 0.8 Nursery size (fraction of total heap size 931)
(b) GYF schemes
Figure 5.140. Excess retention ratios: Interactive-AllCallsOn, V
199
931.
1
Interactive-AllCallsOn
Interactive-AllCallsOn
1.08
1.01 FC-TOF FC-TYF FC-DOF FC-DYF FC-ROF
2GYF 3GYF
1.005 1
1.04
Retention ratio
Retention ratio
1.06
1.02 1
0.995 0.99 0.985 0.98
0.98
0.975
0.96
0.97 0
0.2 0.4 0.6 0.8 Fraction collected g (of total heap size 1135)
1
0
(a) FC schemes
0.2 0.4 0.6 0.8 Nursery size (fraction of total heap size 1135)
(b) GYF schemes
Figure 5.141. Excess retention ratios: Interactive-AllCallsOn, V
Interactive-AllCallsOn 1.01 FC-TOF FC-TYF FC-DOF FC-DYF FC-ROF
1.18 1.16
2GYF 3GYF
1.008 1.006 Retention ratio
1.14 Retention ratio
1135.
Interactive-AllCallsOn
1.2
1.12 1.1 1.08 1.06
1.004 1.002 1 0.998 0.996
1.04 1.02
0.994
1
0.992
0.98
0.99 0
0.2 0.4 0.6 0.8 Fraction collected g (of total heap size 1384)
1
0
(a) FC schemes
0.2 0.4 0.6 0.8 Nursery size (fraction of total heap size 1384)
1
(b) GYF schemes
Figure 5.142. Excess retention ratios: Interactive-AllCallsOn, V
Interactive-AllCallsOn
1384.
Interactive-AllCallsOn
1.16
1.01 FC-TOF FC-TYF FC-DOF FC-DYF FC-ROF
1.14 1.12
2GYF 3GYF
1.008 1.006
1.1
Retention ratio
Retention ratio
1
1.08 1.06 1.04
1.004 1.002 1 0.998
1.02
0.996
1 0.98
0.994 0
0.2 0.4 0.6 0.8 Fraction collected g (of total heap size 1687)
1
0
(a) FC schemes
0.2 0.4 0.6 0.8 Nursery size (fraction of total heap size 1687)
(b) GYF schemes
Figure 5.143. Excess retention ratios: Interactive-AllCallsOn, V
200
1687.
1
Interactive-AllCallsOn
Interactive-AllCallsOn
1.14
1.01 FC-TOF FC-TYF FC-DOF FC-DYF FC-ROF
1.12
1.008 Retention ratio
Retention ratio
1.1
2GYF 3GYF
1.009
1.08 1.06 1.04
1.007 1.006 1.005 1.004 1.003
1.02 1.002 1
1.001
0.98
1 0
0.2 0.4 0.6 0.8 Fraction collected g (of total heap size 2057)
1
0
(a) FC schemes
0.2 0.4 0.6 0.8 Nursery size (fraction of total heap size 2057)
(b) GYF schemes
Figure 5.144. Excess retention ratios: Interactive-AllCallsOn, V
201
2057.
1
Interactive-TextEditing
Interactive-TextEditing
4
1.45 FC-TOF FC-TYF FC-DOF FC-DYF FC-ROF
2GYF 3GYF
1.4 1.35
3
Retention ratio
Retention ratio
3.5
2.5 2
1.3 1.25 1.2 1.15
1.5
1.1
1
1.05 0
0.2 0.4 0.6 0.8 Fraction collected g (of total heap size 1338)
1
0
(a) FC schemes
0.2 0.4 0.6 0.8 Nursery size (fraction of total heap size 1338)
(b) GYF schemes
Figure 5.145. Excess retention ratios: Interactive-TextEditing, V
Interactive-TextEditing
1338.
Interactive-TextEditing
8
1.5 FC-TOF FC-TYF FC-DOF FC-DYF FC-ROF
7
2GYF 3GYF
1.4
Retention ratio
6 Retention ratio
1
5 4 3
1.3 1.2 1.1 1
2 1
0.9 0
0.2 0.4 0.6 0.8 Fraction collected g (of total heap size 1631)
1
0
(a) FC schemes
0.2 0.4 0.6 0.8 Nursery size (fraction of total heap size 1631)
(b) GYF schemes
Figure 5.146. Excess retention ratios: Interactive-TextEditing, V
202
1631.
1
Interactive-TextEditing
Interactive-TextEditing
2.8
1.6 FC-TOF FC-TYF FC-DOF FC-DYF FC-ROF
2.6
1.4
2.2
Retention ratio
Retention ratio
2.4
2GYF 3GYF
1.5
2 1.8 1.6
1.3 1.2 1.1
1.4 1
1.2 1
0.9 0
0.2 0.4 0.6 0.8 Fraction collected g (of total heap size 1988)
1
0
(a) FC schemes
0.2 0.4 0.6 0.8 Nursery size (fraction of total heap size 1988)
(b) GYF schemes
Figure 5.147. Excess retention ratios: Interactive-TextEditing, V
Interactive-TextEditing
1988.
Interactive-TextEditing
11
1.3 FC-TOF FC-TYF FC-DOF FC-DYF FC-ROF
10 9 8
2GYF 3GYF
1.25 1.2 Retention ratio
Retention ratio
1
7 6 5
1.15 1.1 1.05
4 1 3 0.95
2 1
0.9 0
0.2 0.4 0.6 0.8 Fraction collected g (of total heap size 2424)
1
0
(a) FC schemes
0.2 0.4 0.6 0.8 Nursery size (fraction of total heap size 2424)
(b) GYF schemes
Figure 5.148. Excess retention ratios: Interactive-TextEditing, V
Interactive-TextEditing
2424.
Interactive-TextEditing
4
1.9 FC-TOF FC-TYF FC-DOF FC-DYF FC-ROF
3.5
2GYF 3GYF
1.8 1.7
3
Retention ratio
Retention ratio
1
2.5 2
1.6 1.5 1.4 1.3 1.2 1.1
1.5
1 1
0.9 0
0.2 0.4 0.6 0.8 Fraction collected g (of total heap size 2955)
1
0
(a) FC schemes
0.2 0.4 0.6 0.8 Nursery size (fraction of total heap size 2955)
(b) GYF schemes
Figure 5.149. Excess retention ratios: Interactive-TextEditing, V
203
2955.
[Figures 5.150–5.158: excess retention ratios for StandardNonInteractive. In each figure, panel (a) FC schemes (FC-TOF, FC-TYF, FC-DOF, FC-DYF, FC-ROF) plots retention ratio vs. fraction collected g of the total heap size V; panel (b) GYF schemes (2GYF, 3GYF) plots retention ratio vs. nursery size as a fraction of the total heap size V.]
Figure 5.150. Excess retention ratios: StandardNonInteractive, V = 1414.
Figure 5.151. Excess retention ratios: StandardNonInteractive, V = 1723.
Figure 5.152. Excess retention ratios: StandardNonInteractive, V = 2101.
Figure 5.153. Excess retention ratios: StandardNonInteractive, V = 2561.
Figure 5.154. Excess retention ratios: StandardNonInteractive, V = 3122.
Figure 5.155. Excess retention ratios: StandardNonInteractive, V = 3805.
Figure 5.156. Excess retention ratios: StandardNonInteractive, V = 4639.
Figure 5.157. Excess retention ratios: StandardNonInteractive, V = 5655.
Figure 5.158. Excess retention ratios: StandardNonInteractive, V = 6894.
[Figures 5.159–5.165: excess retention ratios for HeapSim. In each figure, panel (a) FC schemes (FC-TOF, FC-TYF, FC-DOF, FC-DYF, FC-ROF) plots retention ratio vs. fraction collected g of the total heap size V; panel (b) GYF schemes (2GYF, 3GYF) plots retention ratio vs. nursery size as a fraction of the total heap size V.]
Figure 5.159. Excess retention ratios: HeapSim, V = 113397.
Figure 5.160. Excess retention ratios: HeapSim, V = 138231.
Figure 5.161. Excess retention ratios: HeapSim, V = 168503.
Figure 5.162. Excess retention ratios: HeapSim, V = 205405.
Figure 5.163. Excess retention ratios: HeapSim, V = 250387.
Figure 5.164. Excess retention ratios: HeapSim, V = 305221.
Figure 5.165. Excess retention ratios: HeapSim, V = 372063.
[Figures 5.166–5.172: excess retention ratios for Lambda-Fact5. In each figure, panel (a) FC schemes (FC-TOF, FC-TYF, FC-DOF, FC-DYF, FC-ROF) plots retention ratio vs. fraction collected g of the total heap size V; panel (b) GYF schemes (2GYF, 3GYF) plots retention ratio vs. nursery size as a fraction of the total heap size V.]
Figure 5.166. Excess retention ratios: Lambda-Fact5, V = 7673.
Figure 5.167. Excess retention ratios: Lambda-Fact5, V = 9354.
Figure 5.168. Excess retention ratios: Lambda-Fact5, V = 11402.
Figure 5.169. Excess retention ratios: Lambda-Fact5, V = 13899.
Figure 5.170. Excess retention ratios: Lambda-Fact5, V = 16943.
Figure 5.171. Excess retention ratios: Lambda-Fact5, V = 20654.
Figure 5.172. Excess retention ratios: Lambda-Fact5, V = 25177.
[Figures 5.173–5.181: excess retention ratios for Lambda-Fact6. In each figure, panel (a) FC schemes (FC-TOF, FC-TYF, FC-DOF, FC-DYF, FC-ROF) plots retention ratio vs. fraction collected g of the total heap size V; panel (b) GYF schemes (2GYF, 3GYF) plots retention ratio vs. nursery size as a fraction of the total heap size V.]
Figure 5.173. Excess retention ratios: Lambda-Fact6, V = 16669.
Figure 5.174. Excess retention ratios: Lambda-Fact6, V = 20320.
Figure 5.175. Excess retention ratios: Lambda-Fact6, V = 24770.
Figure 5.176. Excess retention ratios: Lambda-Fact6, V = 30194.
Figure 5.177. Excess retention ratios: Lambda-Fact6, V = 36807.
Figure 5.178. Excess retention ratios: Lambda-Fact6, V = 44868.
Figure 5.179. Excess retention ratios: Lambda-Fact6, V = 54693.
Figure 5.180. Excess retention ratios: Lambda-Fact6, V = 66671.
Figure 5.181. Excess retention ratios: Lambda-Fact6, V = 81272.
[Figures 5.182–5.190: excess retention ratios for Swim. In each figure, panel (a) FC schemes (FC-TOF, FC-TYF, FC-DOF, FC-DYF, FC-ROF) plots retention ratio vs. fraction collected g of the total heap size V; panel (b) GYF schemes (2GYF, 3GYF) plots retention ratio vs. nursery size as a fraction of the total heap size V.]
Figure 5.182. Excess retention ratios: Swim, V = 21383.
Figure 5.183. Excess retention ratios: Swim, V = 26066.
Figure 5.184. Excess retention ratios: Swim, V = 31774.
Figure 5.185. Excess retention ratios: Swim, V = 38733.
Figure 5.186. Excess retention ratios: Swim, V = 47215.
Figure 5.187. Excess retention ratios: Swim, V = 57555.
Figure 5.188. Excess retention ratios: Swim, V = 70160.
Figure 5.189. Excess retention ratios: Swim, V = 85524.
Figure 5.190. Excess retention ratios: Swim, V = 104254.
[Figures 5.191–5.199: excess retention ratios for Tomcatv. In each figure, panel (a) FC schemes (FC-TOF, FC-TYF, FC-DOF, FC-DYF, FC-ROF) plots retention ratio vs. fraction collected g of the total heap size V; panel (b) GYF schemes (2GYF, 3GYF) plots retention ratio vs. nursery size as a fraction of the total heap size V.]
Figure 5.191. Excess retention ratios: Tomcatv, V = 37292.
Figure 5.192. Excess retention ratios: Tomcatv, V = 45459.
Figure 5.193. Excess retention ratios: Tomcatv, V = 55414.
Figure 5.194. Excess retention ratios: Tomcatv, V = 67550.
Figure 5.195. Excess retention ratios: Tomcatv, V = 82343.
Figure 5.196. Excess retention ratios: Tomcatv, V = 100376.
Figure 5.197. Excess retention ratios: Tomcatv, V = 122358.
Figure 5.198. Excess retention ratios: Tomcatv, V = 149154.
Figure 5.199. Excess retention ratios: Tomcatv, V = 181818.
[Figures 5.200–5.204: excess retention ratios for Tree-Replace-Binary. In each figure, panel (a) FC schemes (FC-TOF, FC-TYF, FC-DOF, FC-DYF, FC-ROF) plots retention ratio vs. fraction collected g of the total heap size V; panel (b) GYF schemes (2GYF, 3GYF) plots retention ratio vs. nursery size as a fraction of the total heap size V.]
Figure 5.200. Excess retention ratios: Tree-Replace-Binary, V = 11926.
Figure 5.201. Excess retention ratios: Tree-Replace-Binary, V = 14538.
Figure 5.202. Excess retention ratios: Tree-Replace-Binary, V = 17722.
Figure 5.203. Excess retention ratios: Tree-Replace-Binary, V = 21603.
Figure 5.204. Excess retention ratios: Tree-Replace-Binary, V = 26334.
[Figures 5.205–5.212: excess retention ratios for Tree-Replace-Random. In each figure, panel (a) FC schemes (FC-TOF, FC-TYF, FC-DOF, FC-DYF, FC-ROF) plots retention ratio vs. fraction collected g of the total heap size V; panel (b) GYF schemes (2GYF, 3GYF) plots retention ratio vs. nursery size as a fraction of the total heap size V.]
Figure 5.205. Excess retention ratios: Tree-Replace-Random, V = 15985.
Figure 5.206. Excess retention ratios: Tree-Replace-Random, V = 19486.
Figure 5.207. Excess retention ratios: Tree-Replace-Random, V = 23754.
Figure 5.208. Excess retention ratios: Tree-Replace-Random, V = 28956.
Figure 5.209. Excess retention ratios: Tree-Replace-Random, V = 35297.
Figure 5.210. Excess retention ratios: Tree-Replace-Random, V = 43027.
Figure 5.211. Excess retention ratios: Tree-Replace-Random, V = 52450.
Figure 5.212. Excess retention ratios: Tree-Replace-Random, V = 63936.
[Figures 5.213–5.221: excess retention ratios for Richards. In each figure, panel (a) FC schemes (FC-TOF, FC-TYF, FC-DOF, FC-DYF, FC-ROF) plots retention ratio vs. fraction collected g of the total heap size V; panel (b) GYF schemes (2GYF, 3GYF) plots retention ratio vs. nursery size as a fraction of the total heap size V.]
Figure 5.213. Excess retention ratios: Richards, V = 1826.
Figure 5.214. Excess retention ratios: Richards, V = 2225.
Figure 5.215. Excess retention ratios: Richards, V = 2713.
Figure 5.216. Excess retention ratios: Richards, V = 3307.
Figure 5.217. Excess retention ratios: Richards, V = 4032.
Figure 5.218. Excess retention ratios: Richards, V = 4914.
Figure 5.219. Excess retention ratios: Richards, V = 5991.
Figure 5.220. Excess retention ratios: Richards, V = 7303.
Figure 5.221. Excess retention ratios: Richards, V = 8902.
[Figures 5.222–5.225: excess retention ratios for JavaBYTEmark. In each figure, panel (a) FC schemes (FC-TOF, FC-TYF, FC-DOF, FC-DYF, FC-ROF) plots retention ratio vs. fraction collected g of the total heap size V; panel (b) GYF schemes (2GYF, 3GYF) plots retention ratio vs. nursery size as a fraction of the total heap size V.]
Figure 5.222. Excess retention ratios: JavaBYTEmark, V = 72807.
Figure 5.223. Excess retention ratios: JavaBYTEmark, V = 88752.
Figure 5.224. Excess retention ratios: JavaBYTEmark, V = 108188.
Figure 5.225. Excess retention ratios: JavaBYTEmark, V = 131881.
[Figures 5.226–5.233: excess retention ratios for Bloat-Bloat. In each figure, panel (a) FC schemes (FC-TOF, FC-TYF, FC-DOF, FC-DYF, FC-ROF) plots retention ratio vs. fraction collected g of the total heap size V; panel (b) GYF schemes (2GYF, 3GYF) plots retention ratio vs. nursery size as a fraction of the total heap size V.]
Figure 5.226. Excess retention ratios: Bloat-Bloat, V = 246766.
Figure 5.227. Excess retention ratios: Bloat-Bloat, V = 300808.
Figure 5.228. Excess retention ratios: Bloat-Bloat, V = 366682.
Figure 5.229. Excess retention ratios: Bloat-Bloat, V = 446984.
Figure 5.230. Excess retention ratios: Bloat-Bloat, V = 544872.
Figure 5.231. Excess retention ratios: Bloat-Bloat, V = 664195.
Figure 5.232. Excess retention ratios: Bloat-Bloat, V = 809650.
Figure 5.233. Excess retention ratios: Bloat-Bloat, V = 986959.
1
Toba
Toba
1.45
1.4 FC-TOF FC-TYF FC-DOF FC-DYF FC-ROF
1.4
1.2
1.3
Retention ratio
Retention ratio
1.35
2GYF 3GYF
1.3
1.25 1.2 1.15
1.1 1 0.9
1.1 0.8
1.05 1
0.7 0
0.2 0.4 0.6 0.8 Fraction collected g (of total heap size 353843)
1
0
(a) FC schemes
0.2 0.4 0.6 0.8 Nursery size (fraction of total heap size 353843)
(b) GYF schemes
Figure 5.234. Excess retention ratios: Toba, V
Toba
353843.
Toba
1.35
1.6 FC-TOF FC-TYF FC-DOF FC-DYF FC-ROF
1.3
2GYF 3GYF
1.5 1.4 Retention ratio
1.25 Retention ratio
1
1.2 1.15 1.1 1.05
1.3 1.2 1.1 1 0.9
1
0.8
0.95
0.7 0
0.2 0.4 0.6 0.8 Fraction collected g (of total heap size 431335)
1
0
(a) FC schemes
0.2 0.4 0.6 0.8 Nursery size (fraction of total heap size 431335)
(b) GYF schemes
Figure 5.235. Excess retention ratios: Toba, V
240
431335.
1
Toba
Toba
1.5
1.35 FC-TOF FC-TYF FC-DOF FC-DYF FC-ROF
1.45 1.4
1.25 Retention ratio
Retention ratio
1.35
2GYF 3GYF
1.3
1.3 1.25 1.2 1.15
1.2 1.15 1.1 1.05 1
1.1
0.95
1.05 1
0.9
0.95
0.85 0
0.2 0.4 0.6 0.8 Fraction collected g (of total heap size 525794)
1
0
(a) FC schemes
0.2 0.4 0.6 0.8 Nursery size (fraction of total heap size 525794)
(b) GYF schemes
Figure 5.236. Excess retention ratios: Toba, V
Toba 1.5 FC-TOF FC-TYF FC-DOF FC-DYF FC-ROF
2
2GYF 3GYF
1.4 1.3 Retention ratio
1.8 Retention ratio
525794.
Toba
2.2
1.6 1.4
1.2 1.1
1.2
1
1
0.9
0.8
0.8 0
0.2 0.4 0.6 0.8 Fraction collected g (of total heap size 640941)
1
0
(a) FC schemes
0.2 0.4 0.6 0.8 Nursery size (fraction of total heap size 640941)
1
(b) GYF schemes
Figure 5.237. Excess retention ratios: Toba, V
Toba
640941.
Toba
2.8
1.35 FC-TOF FC-TYF FC-DOF FC-DYF FC-ROF
2.6 2.4
2GYF 3GYF
1.3 1.25 1.2
2.2
Retention ratio
Retention ratio
1
2 1.8 1.6
1.15 1.1 1.05 1 0.95
1.4
0.9
1.2
0.85
1
0.8 0
0.2 0.4 0.6 0.8 Fraction collected g (of total heap size 781303)
1
0
(a) FC schemes
0.2 0.4 0.6 0.8 Nursery size (fraction of total heap size 781303)
(b) GYF schemes
Figure 5.238. Excess retention ratios: Toba, V
241
781303.
1
Toba
Toba
3.5
1.6 FC-TOF FC-TYF FC-DOF FC-DYF FC-ROF
1.4 Retention ratio
Retention ratio
3
2GYF 3GYF
1.5
2.5
2
1.3 1.2 1.1 1 0.9
1.5
0.8 0.7
1
0.6 0
0.2 0.4 0.6 0.8 Fraction collected g (of total heap size 952404)
1
0
(a) FC schemes
0.2 0.4 0.6 0.8 Nursery size (fraction of total heap size 952404)
(b) GYF schemes
Figure 5.239. Excess retention ratios: Toba, V
Toba
952404.
Toba
3.5
1.4 FC-TOF FC-TYF FC-DOF FC-DYF FC-ROF
3
2GYF 3GYF
1.35 1.3
2.5
Retention ratio
Retention ratio
1
2 1.5
1.25 1.2 1.15 1.1 1.05
1 1 0.5
0.95 0
0.2 0.4 0.6 0.8 Fraction collected g (of total heap size 1160976)
1
0
(a) FC schemes
0.2 0.4 0.6 0.8 Nursery size (fraction of total heap size 1160976)
(b) GYF schemes
Figure 5.240. Excess retention ratios: Toba, V
Toba
1160976.
Toba
12
1.5 FC-TOF FC-TYF FC-DOF FC-DYF FC-ROF
10
2GYF 3GYF
1.4 1.3
8
Retention ratio
Retention ratio
1
6 4
1.2 1.1 1
2
0.9
0
0.8 0
0.2 0.4 0.6 0.8 Fraction collected g (of total heap size 1415223)
1
0
(a) FC schemes
0.2 0.4 0.6 0.8 Nursery size (fraction of total heap size 1415223)
(b) GYF schemes
Figure 5.241. Excess retention ratios: Toba, V
242
1415223.
1
Toba
Toba
25
1.6 FC-TOF FC-TYF FC-DOF FC-DYF FC-ROF
1.4 Retention ratio
Retention ratio
20
2GYF 3GYF
1.5
15
10
1.3 1.2 1.1
5 1 0
0.9 0
0.2 0.4 0.6 0.8 Fraction collected g (of total heap size 1725148)
1
0
(a) FC schemes
0.2 0.4 0.6 0.8 Nursery size (fraction of total heap size 1725148)
(b) GYF schemes
Figure 5.242. Excess retention ratios: Toba, V
243
1725148.
1
StandardNonInteractive
StandardNonInteractive 3 FC-TOF FC-TYF FC-DOF FC-DYF FC-ROF FC-OPT
2.5 2
Relative mark/cons ratio rho
Relative mark/cons ratio rho
3
1.5 1 0.5 0
2GYF 3GYF
2.5 2 1.5 1 0.5 0
0
0.2 0.4 0.6 0.8 Fraction collected g (of total heap size 1414)
1
0
(a) FC schemes including OPT
0.2 0.4 0.6 0.8 Nursery size (fraction of total heap size 1414)
(b) GYF schemes
Figure 5.243. Copying cost, with FC-OPT: StandardNonInteractive, V
StandardNonInteractive
1414.
StandardNonInteractive
3
3 FC-TOF FC-TYF FC-DOF FC-DYF FC-ROF FC-OPT
2.5 2
Relative mark/cons ratio rho
Relative mark/cons ratio rho
1
1.5 1 0.5 0
2GYF 3GYF
2.5 2 1.5 1 0.5 0
0
0.2 0.4 0.6 0.8 Fraction collected g (of total heap size 1723)
1
0
(a) FC schemes including OPT
0.2 0.4 0.6 0.8 Nursery size (fraction of total heap size 1723)
(b) GYF schemes
Figure 5.244. Copying cost, with FC-OPT: StandardNonInteractive, V
244
1723.
1
StandardNonInteractive
StandardNonInteractive 3 FC-TOF FC-TYF FC-DOF FC-DYF FC-ROF FC-OPT
2.5 2
Relative mark/cons ratio rho
Relative mark/cons ratio rho
3
1.5 1 0.5 0
2GYF 3GYF
2.5 2 1.5 1 0.5 0
0
0.2 0.4 0.6 0.8 Fraction collected g (of total heap size 2101)
1
0
(a) FC schemes including OPT
0.2 0.4 0.6 0.8 Nursery size (fraction of total heap size 2101)
(b) GYF schemes
Figure 5.245. Copying cost, with FC-OPT: StandardNonInteractive, V
StandardNonInteractive 3 FC-TOF FC-TYF FC-DOF FC-DYF FC-ROF FC-OPT
2.5 2
Relative mark/cons ratio rho
Relative mark/cons ratio rho
2101.
StandardNonInteractive
3
1.5 1 0.5 0
2GYF 3GYF
2.5 2 1.5 1 0.5 0
0
0.2 0.4 0.6 0.8 Fraction collected g (of total heap size 2561)
1
0
(a) FC schemes including OPT
0.2 0.4 0.6 0.8 Nursery size (fraction of total heap size 2561)
1
(b) GYF schemes
Figure 5.246. Copying cost, with FC-OPT: StandardNonInteractive, V
StandardNonInteractive
2561.
StandardNonInteractive
3
3 FC-TOF FC-TYF FC-DOF FC-DYF FC-ROF FC-OPT
2.5 2
Relative mark/cons ratio rho
Relative mark/cons ratio rho
1
1.5 1 0.5 0
2GYF 3GYF
2.5 2 1.5 1 0.5 0
0
0.2 0.4 0.6 0.8 Fraction collected g (of total heap size 3122)
1
0
(a) FC schemes including OPT
0.2 0.4 0.6 0.8 Nursery size (fraction of total heap size 3122)
(b) GYF schemes
Figure 5.247. Copying cost, with FC-OPT: StandardNonInteractive, V
245
3122.
1
StandardNonInteractive
StandardNonInteractive 3 FC-TOF FC-TYF FC-DOF FC-DYF FC-ROF FC-OPT
2.5 2
Relative mark/cons ratio rho
Relative mark/cons ratio rho
3
1.5 1 0.5 0
2GYF 3GYF
2.5 2 1.5 1 0.5 0
0
0.2 0.4 0.6 0.8 Fraction collected g (of total heap size 3805)
1
0
(a) FC schemes including OPT
0.2 0.4 0.6 0.8 Nursery size (fraction of total heap size 3805)
(b) GYF schemes
Figure 5.248. Copying cost, with FC-OPT: StandardNonInteractive, V
StandardNonInteractive 3 FC-TOF FC-TYF FC-DOF FC-DYF FC-ROF FC-OPT
2.5 2
Relative mark/cons ratio rho
Relative mark/cons ratio rho
3805.
StandardNonInteractive
3
1.5 1 0.5 0
2GYF 3GYF
2.5 2 1.5 1 0.5 0
0
0.2 0.4 0.6 0.8 Fraction collected g (of total heap size 4639)
1
0
(a) FC schemes including OPT
0.2 0.4 0.6 0.8 Nursery size (fraction of total heap size 4639)
1
(b) GYF schemes
Figure 5.249. Copying cost, with FC-OPT: StandardNonInteractive, V
StandardNonInteractive
4639.
StandardNonInteractive
3
3 FC-TOF FC-TYF FC-DOF FC-DYF FC-ROF FC-OPT
2.5 2
Relative mark/cons ratio rho
Relative mark/cons ratio rho
1
1.5 1 0.5 0
2GYF 3GYF
2.5 2 1.5 1 0.5 0
0
0.2 0.4 0.6 0.8 Fraction collected g (of total heap size 5655)
1
0
(a) FC schemes including OPT
0.2 0.4 0.6 0.8 Nursery size (fraction of total heap size 5655)
(b) GYF schemes
Figure 5.250. Copying cost, with FC-OPT: StandardNonInteractive, V
246
5655.
1
StandardNonInteractive
StandardNonInteractive 3 FC-TOF FC-TYF FC-DOF FC-DYF FC-ROF FC-OPT
2.5 2
Relative mark/cons ratio rho
Relative mark/cons ratio rho
3
1.5 1 0.5 0
2GYF 3GYF
2.5 2 1.5 1 0.5 0
0
0.2 0.4 0.6 0.8 Fraction collected g (of total heap size 6894)
1
0
(a) FC schemes including OPT
0.2 0.4 0.6 0.8 Nursery size (fraction of total heap size 6894)
(b) GYF schemes
Figure 5.251. Copying cost, with FC-OPT: StandardNonInteractive, V
247
6894.
1
Lambda-Fact5
Lambda-Fact5
4
4 FC-TOF FC-TYF FC-DOF FC-DYF FC-ROF FC-OPT
3 2.5
2GYF 3GYF
3.5 Relative mark/cons ratio rho
Relative mark/cons ratio rho
3.5
2 1.5 1 0.5
3 2.5 2 1.5 1 0.5
0
0 0
0.2 0.4 0.6 0.8 Fraction collected g (of total heap size 7673)
1
0
(a) FC schemes including OPT
0.2 0.4 0.6 0.8 Nursery size (fraction of total heap size 7673)
(b) GYF schemes
Figure 5.252. Copying cost, with FC-OPT: Lambda-Fact5, V
Lambda-Fact5
7673.
Lambda-Fact5
4
4 FC-TOF FC-TYF FC-DOF FC-DYF FC-ROF FC-OPT
3 2.5
2GYF 3GYF
3.5 Relative mark/cons ratio rho
3.5 Relative mark/cons ratio rho
1
2 1.5 1 0.5
3 2.5 2 1.5 1 0.5
0
0 0
0.2 0.4 0.6 0.8 Fraction collected g (of total heap size 9354)
1
0
(a) FC schemes including OPT
0.2 0.4 0.6 0.8 Nursery size (fraction of total heap size 9354)
(b) GYF schemes
Figure 5.253. Copying cost, with FC-OPT: Lambda-Fact5, V
248
9354.
1
Lambda-Fact5
Lambda-Fact5
4
4 FC-TOF FC-TYF FC-DOF FC-DYF FC-ROF FC-OPT
3 2.5
2GYF 3GYF
3.5 Relative mark/cons ratio rho
Relative mark/cons ratio rho
3.5
2 1.5 1 0.5
3 2.5 2 1.5 1 0.5
0
0 0
0.2 0.4 0.6 0.8 Fraction collected g (of total heap size 11402)
1
0
(a) FC schemes including OPT
0.2 0.4 0.6 0.8 Nursery size (fraction of total heap size 11402)
(b) GYF schemes
Figure 5.254. Copying cost, with FC-OPT: Lambda-Fact5, V
Lambda-Fact5 4 FC-TOF FC-TYF FC-DOF FC-DYF FC-ROF FC-OPT
3 2.5
2GYF 3GYF
3.5 Relative mark/cons ratio rho
3.5 Relative mark/cons ratio rho
11402.
Lambda-Fact5
4
2 1.5 1 0.5
3 2.5 2 1.5 1 0.5
0
0 0
0.2 0.4 0.6 0.8 Fraction collected g (of total heap size 13899)
1
0
(a) FC schemes including OPT
0.2 0.4 0.6 0.8 Nursery size (fraction of total heap size 13899)
1
(b) GYF schemes
Figure 5.255. Copying cost, with FC-OPT: Lambda-Fact5, V
Lambda-Fact5
13899.
Lambda-Fact5
4
4 FC-TOF FC-TYF FC-DOF FC-DYF FC-ROF FC-OPT
3 2.5
2GYF 3GYF
3.5 Relative mark/cons ratio rho
3.5 Relative mark/cons ratio rho
1
2 1.5 1 0.5
3 2.5 2 1.5 1 0.5
0
0 0
0.2 0.4 0.6 0.8 Fraction collected g (of total heap size 16943)
1
0
(a) FC schemes including OPT
0.2 0.4 0.6 0.8 Nursery size (fraction of total heap size 16943)
(b) GYF schemes
Figure 5.256. Copying cost, with FC-OPT: Lambda-Fact5, V
249
16943.
1
Lambda-Fact5
Lambda-Fact5
4
4 FC-TOF FC-TYF FC-DOF FC-DYF FC-ROF FC-OPT
3 2.5
2GYF 3GYF
3.5 Relative mark/cons ratio rho
Relative mark/cons ratio rho
3.5
2 1.5 1 0.5
3 2.5 2 1.5 1 0.5
0
0 0
0.2 0.4 0.6 0.8 Fraction collected g (of total heap size 20654)
1
0
(a) FC schemes including OPT
0.2 0.4 0.6 0.8 Nursery size (fraction of total heap size 20654)
(b) GYF schemes
Figure 5.257. Copying cost, with FC-OPT: Lambda-Fact5, V
Lambda-Fact5
20654.
Lambda-Fact5
4
4 FC-TOF FC-TYF FC-DOF FC-DYF FC-ROF FC-OPT
3 2.5
2GYF 3GYF
3.5 Relative mark/cons ratio rho
3.5 Relative mark/cons ratio rho
1
2 1.5 1 0.5
3 2.5 2 1.5 1 0.5
0
0 0
0.2 0.4 0.6 0.8 Fraction collected g (of total heap size 25177)
1
0
(a) FC schemes including OPT
0.2 0.4 0.6 0.8 Nursery size (fraction of total heap size 25177)
(b) GYF schemes
Figure 5.258. Copying cost, with FC-OPT: Lambda-Fact5, V
250
25177.
1
Tree-Replace-Random
Tree-Replace-Random 3 FC-TOF FC-TYF FC-DOF FC-DYF FC-ROF FC-OPT
2.5 2
Relative mark/cons ratio rho
Relative mark/cons ratio rho
3
1.5 1 0.5 0
2GYF 3GYF
2.5 2 1.5 1 0.5 0
0
0.2 0.4 0.6 0.8 Fraction collected g (of total heap size 15985)
1
0
(a) FC schemes including OPT
0.2 0.4 0.6 0.8 Nursery size (fraction of total heap size 15985)
(b) GYF schemes
Figure 5.259. Copying cost, with FC-OPT: Tree-Replace-Random, V
Tree-Replace-Random
15985.
Tree-Replace-Random
3
3 FC-TOF FC-TYF FC-DOF FC-DYF FC-ROF FC-OPT
2.5 2
Relative mark/cons ratio rho
Relative mark/cons ratio rho
1
1.5 1 0.5 0
2GYF 3GYF
2.5 2 1.5 1 0.5 0
0
0.2 0.4 0.6 0.8 Fraction collected g (of total heap size 19486)
1
0
(a) FC schemes including OPT
0.2 0.4 0.6 0.8 Nursery size (fraction of total heap size 19486)
(b) GYF schemes
Figure 5.260. Copying cost, with FC-OPT: Tree-Replace-Random, V
251
19486.
1
Tree-Replace-Random
Tree-Replace-Random 3 FC-TOF FC-TYF FC-DOF FC-DYF FC-ROF FC-OPT
2.5 2
Relative mark/cons ratio rho
Relative mark/cons ratio rho
3
1.5 1 0.5 0
2GYF 3GYF
2.5 2 1.5 1 0.5 0
0
0.2 0.4 0.6 0.8 Fraction collected g (of total heap size 23754)
1
0
(a) FC schemes including OPT
0.2 0.4 0.6 0.8 Nursery size (fraction of total heap size 23754)
(b) GYF schemes
Figure 5.261. Copying cost, with FC-OPT: Tree-Replace-Random, V
Tree-Replace-Random 3 FC-TOF FC-TYF FC-DOF FC-DYF FC-ROF FC-OPT
2.5 2
Relative mark/cons ratio rho
Relative mark/cons ratio rho
23754.
Tree-Replace-Random
3
1.5 1 0.5 0
2GYF 3GYF
2.5 2 1.5 1 0.5 0
0
0.2 0.4 0.6 0.8 Fraction collected g (of total heap size 28956)
1
0
(a) FC schemes including OPT
0.2 0.4 0.6 0.8 Nursery size (fraction of total heap size 28956)
1
(b) GYF schemes
Figure 5.262. Copying cost, with FC-OPT: Tree-Replace-Random, V
Tree-Replace-Random
28956.
Tree-Replace-Random
3
3 FC-TOF FC-TYF FC-DOF FC-DYF FC-ROF FC-OPT
2.5 2
Relative mark/cons ratio rho
Relative mark/cons ratio rho
1
1.5 1 0.5 0
2GYF 3GYF
2.5 2 1.5 1 0.5 0
0
0.2 0.4 0.6 0.8 Fraction collected g (of total heap size 35297)
1
0
(a) FC schemes including OPT
0.2 0.4 0.6 0.8 Nursery size (fraction of total heap size 35297)
(b) GYF schemes
Figure 5.263. Copying cost, with FC-OPT: Tree-Replace-Random, V
252
35297.
1
Tree-Replace-Random
Tree-Replace-Random 3 FC-TOF FC-TYF FC-DOF FC-DYF FC-ROF FC-OPT
2.5 2
Relative mark/cons ratio rho
Relative mark/cons ratio rho
3
1.5 1 0.5 0
2GYF 3GYF
2.5 2 1.5 1 0.5 0
0
0.2 0.4 0.6 0.8 Fraction collected g (of total heap size 43027)
1
0
(a) FC schemes including OPT
0.2 0.4 0.6 0.8 Nursery size (fraction of total heap size 43027)
(b) GYF schemes
Figure 5.264. Copying cost, with FC-OPT: Tree-Replace-Random, V
Tree-Replace-Random 3 FC-TOF FC-TYF FC-DOF FC-DYF FC-ROF FC-OPT
2.5 2
Relative mark/cons ratio rho
Relative mark/cons ratio rho
43027.
Tree-Replace-Random
3
1.5 1 0.5 0
2GYF 3GYF
2.5 2 1.5 1 0.5 0
0
0.2 0.4 0.6 0.8 Fraction collected g (of total heap size 52450)
1
0
(a) FC schemes including OPT
0.2 0.4 0.6 0.8 Nursery size (fraction of total heap size 52450)
1
(b) GYF schemes
Figure 5.265. Copying cost, with FC-OPT: Tree-Replace-Random, V
Tree-Replace-Random
52450.
Tree-Replace-Random
3
3 FC-TOF FC-TYF FC-DOF FC-DYF FC-ROF FC-OPT
2.5 2
Relative mark/cons ratio rho
Relative mark/cons ratio rho
1
1.5 1 0.5 0
2GYF 3GYF
2.5 2 1.5 1 0.5 0
0
0.2 0.4 0.6 0.8 Fraction collected g (of total heap size 63936)
1
0
(a) FC schemes including OPT
0.2 0.4 0.6 0.8 Nursery size (fraction of total heap size 63936)
(b) GYF schemes
Figure 5.266. Copying cost, with FC-OPT: Tree-Replace-Random, V
253
63936.
1
Heap position (young end at 0)
Demise positions
Heap position (young end at 0)
254
0
100
200
300
400
500
0
100
200
300
400
500
0
0
2000
2000
6000
6000
8000 10000 Allocation time (words)
Lines of constant age: Interactive-AllCallsOn
8000 10000 Allocation time (words)
12000
12000
Figure 5.267. Demise graphs: Interactive-AllCallsOn.
4000
4000
Lines of constant allocation time: Interactive-AllCallsOn
14000
14000
16000
16000
18000
18000
Heap position (young end at 0)
Demise positions
Heap position (young end at 0)
255
0
200
400
600
800
1000
0
200
400
600
800
1000
0
0
10000 Allocation time (words)
Lines of constant age: Interactive-TextEditing
10000 Allocation time (words)
15000
15000
Figure 5.268. Demise graphs: Interactive-TextEditing.
5000
5000
Lines of constant allocation time: Interactive-TextEditing
20000
20000
Heap position (young end at 0)
Demise positions
Heap position (young end at 0)
256
0
100
200
300
400
500
600
700
800
0
100
200
300
400
500
600
700
800
0
0
100000 Allocation time (words)
Lines of constant age: StandardNonInteractive
100000 Allocation time (words)
Figure 5.269. Demise graphs: StandardNonInteractive.
50000
50000
Lines of constant allocation time: StandardNonInteractive
150000
150000
200000
200000
Heap position (young end at 0)
Demise positions
Heap position (young end at 0)
257
0
10000
20000
30000
40000
50000
60000
70000
80000
90000
0
10000
20000
30000
40000
50000
60000
70000
80000
90000
0
0
500000
500000
1.5e+06
1.5e+06
2e+06 Allocation time (words)
Lines of constant age: HeapSim
2e+06 Allocation time (words)
Figure 5.270. Demise graphs: HeapSim.
1e+06
1e+06
Lines of constant allocation time: HeapSim
2.5e+06
2.5e+06
3e+06
3e+06
3.5e+06
3.5e+06
Heap position (young end at 0)
Demise positions
Heap position (young end at 0)
258
0
1000
2000
3000
4000
5000
0
1000
2000
3000
4000
5000
0
0
50000
50000
150000 Allocation time (words)
Lines of constant age: Lambda-Fact5
150000 Allocation time (words)
Figure 5.271. Demise graphs: Lambda-Fact5.
100000
100000
Lines of constant allocation time: Lambda-Fact5
200000
200000
250000
250000
Heap position (young end at 0)
Demise positions
Heap position (young end at 0)
259
0
2000
4000
6000
8000
10000
12000
0
2000
4000
6000
8000
10000
12000
0
0
200000
200000
600000 Allocation time (words)
Lines of constant age: Lambda-Fact6
600000 Allocation time (words)
800000
800000
Figure 5.272. Demise graphs: Lambda-Fact6.
400000
400000
Lines of constant allocation time: Lambda-Fact6
1e+06
1e+06
Heap position (young end at 0)
Demise positions
Heap position (young end at 0)
260
0
2000
4000
6000
8000
10000
12000
14000
16000
0
2000
4000
6000
8000
10000
12000
14000
16000
0
0
200000
200000
400000
400000
800000 Allocation time (words)
Lines of constant age: Swim
800000 Allocation time (words)
Figure 5.273. Demise graphs: Swim.
600000
600000
Lines of constant allocation time: Swim
1e+06
1e+06
1.2e+06
1.2e+06
1.4e+06
1.4e+06
Heap position (young end at 0)
Demise positions
Heap position (young end at 0)
261
0
5000
10000
15000
20000
25000
30000
0
5000
10000
15000
20000
25000
30000
0
0
500000
500000
Figure 5.274. Demise graphs: Tomcatv.
1e+06 Allocation time (words)
Lines of constant age: Tomcatv
1e+06 Allocation time (words)
Lines of constant allocation time: Tomcatv
1.5e+06
1.5e+06
2e+06
2e+06
Heap position (young end at 0)
Demise positions
Heap position (young end at 0)
262
0
1000
2000
3000
4000
5000
6000
7000
8000
9000
0
1000
2000
3000
4000
5000
6000
7000
8000
9000
0
0
100000 Allocation time (words)
Lines of constant age: Tree-Replace-Binary
100000 Allocation time (words)
Figure 5.275. Demise graphs: Tree-Replace-Binary.
50000
50000
Lines of constant allocation time: Tree-Replace-Binary
150000
150000
200000
200000
Heap position (young end at 0)
Demise positions
Heap position (young end at 0)
263
0
2000
4000
6000
8000
10000
12000
0
2000
4000
6000
8000
10000
12000
0
0
100000
100000
300000
300000
400000 500000 Allocation time (words)
Lines of constant age: Tree-Replace-Random
400000 500000 Allocation time (words)
600000
600000
Figure 5.276. Demise graphs: Tree-Replace-Random.
200000
200000
Lines of constant allocation time: Tree-Replace-Random
700000
700000
800000
800000
900000
900000
Heap position (young end at 0)
Demise positions
Heap position (young end at 0)
264
0
200
400
600
800
1000
1200
1400
0
200
400
600
800
1000
1200
1400
0
0
500000
500000
1e+06
1e+06
2e+06 2.5e+06 Allocation time (words)
Lines of constant age: Richards
2e+06 2.5e+06 Allocation time (words)
Figure 5.277. Demise graphs: Richards.
1.5e+06
1.5e+06
Lines of constant allocation time: Richards
3e+06
3e+06
3.5e+06
3.5e+06
4e+06
4e+06
Heap position (young end at 0)
Demise positions
Heap position (young end at 0)
265
0
5000
10000
15000
20000
25000
30000
35000
0
5000
10000
15000
20000
25000
30000
35000
0
0
200000
200000
600000 Allocation time (words)
Lines of constant age: JavaBYTEmark
600000 Allocation time (words)
Figure 5.278. Demise graphs: JavaBYTEmark.
400000
400000
Lines of constant allocation time: JavaBYTEmark
800000
800000
1e+06
1e+06
Heap position (young end at 0)
Demise positions
Heap position (young end at 0)
266
0
20000
40000
60000
80000
100000
120000
0
20000
40000
60000
80000
100000
120000
0
0
5e+06
5e+06
1.5e+07
1.5e+07
2e+07 Allocation time (words)
Lines of constant age: Bloat-Bloat
2e+07 Allocation time (words)
Figure 5.279. Demise graphs: Bloat-Bloat.
1e+07
1e+07
Lines of constant allocation time: Bloat-Bloat
2.5e+07
2.5e+07
3e+07
3e+07
3.5e+07
3.5e+07
Heap position (young end at 0)
Demise positions
Heap position (young end at 0)
267
0
50000
100000
150000
200000
250000
0
50000
100000
150000
200000
250000
0
0
5e+06
5e+06
1e+07
1e+07
2e+07 Allocation time (words)
Lines of constant age: Toba
2e+07 Allocation time (words)
Figure 5.280. Demise graphs: Toba.
1.5e+07
1.5e+07
Lines of constant allocation time: Toba
2.5e+07
2.5e+07
3e+07
3e+07
3.5e+07
3.5e+07
Demise points interpreted as mortality: Interactive-AllCallsOn
Total demise points
1000
100
10
1 0
50
100 150 Position from young end
200
Figure 5.281. Demise position as mortality: Interactive-AllCallsOn.
268
250
Demise points interpreted as mortality: Interactive-TextEditing
Total demise points
1000
100
10
1 0
100
200
300 400 500 Position from young end
600
700
Figure 5.282. Demise position as mortality: Interactive-TextEditing.
Demise points interpreted as mortality: StandardNonInteractive 100000
Total demise points
10000
1000
100
10
1 0
100
200
300 400 500 Position from young end
600
700
Figure 5.283. Demise position as mortality: StandardNonInteractive.
269
Demise points interpreted as mortality: HeapSim 1e+06
Total demise points
100000
10000
1000
100
10
1 0
10000 20000 30000 40000 50000 60000 70000 80000 90000 100000 Position from young end
Figure 5.284. Demise position as mortality: HeapSim.
Demise points interpreted as mortality: Lambda-Fact5 10000
Total demise points
1000
100
10
1 0
500
1000
1500 2000 2500 Position from young end
3000
Figure 5.285. Demise position as mortality: Lambda-Fact5.
270
3500
Demise points interpreted as mortality: Lambda-Fact6 10000
Total demise points
1000
100
10
1 0
1000
2000
3000 4000 5000 Position from young end
6000
7000
8000
Figure 5.286. Demise position as mortality: Lambda-Fact6.
Demise points interpreted as mortality: Swim 1e+06
Total demise points
100000
10000
1000
100
10
1 0
2000
4000 6000 8000 Position from young end
10000
Figure 5.287. Demise position as mortality: Swim.
271
12000
Demise points interpreted as mortality: Tomcatv 1e+06
Total demise points
100000
10000
1000
100
10
1 0
5000
10000 15000 Position from young end
20000
25000
Figure 5.288. Demise position as mortality: Tomcatv.
Demise points interpreted as mortality: Tree-Replace-Binary 10000
Total demise points
1000
100
10
1 0
1000
2000
3000 4000 5000 6000 7000 Position from young end
8000
9000 10000
Figure 5.289. Demise position as mortality: Tree-Replace-Binary.
272
Demise points interpreted as mortality: Tree-Replace-Random 100000
Total demise points
10000
1000
100
10
1 0
2000
4000
6000 8000 10000 Position from young end
12000
14000
Figure 5.290. Demise position as mortality: Tree-Replace-Random.
Demise points interpreted as mortality: Richards 100000
Total demise points
10000
1000
100
10
1 0
200
400 600 800 Position from young end
1000
Figure 5.291. Demise position as mortality: Richards.
273
1200
Demise points interpreted as mortality: JavaBYTEmark 100000
Total demise points
10000
1000
100
10
1 0
5000
10000 15000 20000 25000 30000 35000 40000 45000 Position from young end
Figure 5.292. Demise position as mortality: JavaBYTEmark.
Demise points interpreted as mortality: Bloat-Bloat 1e+06
Total demise points
100000
10000
1000
100
10
1 0
20000
40000
60000 80000 100000 Position from young end
120000
Figure 5.293. Demise position as mortality: Bloat-Bloat.
274
140000
Demise points interpreted as mortality: Toba 1e+07
1e+06
Total demise points
100000
10000 1000
100
10
1 0
50000
100000 150000 Position from young end
200000
Figure 5.294. Demise position as mortality: Toba.
275
250000
CHAPTER 6

EVALUATION: COSTS OTHER THAN COPYING
In this chapter we examine the non-copying costs of collection, focusing on the cost of maintaining the information needed for accurate region-based collection: the write barrier and remembered set overheads. The evaluation uses a realistic, block-allocation collection simulator, based on our prototype implementation. We compare the generational youngest-first collector with the deferred older-first algorithm, which was shown in the preceding chapter to be the most promising with respect to copying cost among the alternatives to generational collection that we considered.
6.1 Method for evaluating the pointer maintenance cost

We use an instrumented version of the block simulator (Section 4.3.3) to obtain the statistics of pointer management operations. We discussed in Section 3.2 the design of pointer management for age-based collection, and noted alternative models of pointer store filtering and of remembered set organization. The actual costs of operations depend on these alternative design decisions. Indeed, the operation counts are not invariant themselves: if two filtering tests are executed in sequence, the outcome of the first influences the number of times the second is executed. The division of labor between the run time and the garbage collection time is also a matter of choice.

Here we use for the generational collector the traditional model of operation with a unified remembered set and generational filtering, and we show that it is very efficient. For the DOF collector we use block-local and directional filtering. All of the operation counts reported are accurate for these design choices, while some of them are also universally applicable regardless of the design choice.
For the write barrier, we report the operation counts for pointer stores, and the number of non-null pointer stores. Non-null stores are further divided into stores filtered by the specified filtering mechanism, and those not filtered. The non-filtered stores are divided into unique and duplicate, with respect to the prior content of the remembered set into which the stores are inserted. Note that the write-barrier statistics include both the run-time and the garbage-collection-time pointer stores.

For remembered set processing at garbage-collection time, we report the number of pointers from the uncollected region into the collected region, as well as the number of remembered set entries actually processed, which can be larger.
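The classification of pointer stores just described can be sketched as follows. This is an illustrative toy, not the dissertation's simulator code: the filtering test and the remembered-set insert are hypothetical hooks standing in for whichever filtering mechanism and set organization a given collector uses.

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>

/* Tallies corresponding to the write-barrier statistics in the text:
   all pointer stores; non-null stores; stores removed by filtering;
   and unfiltered stores split into unique vs. duplicate insertions. */
struct wb_stats {
    long stores, nonnull, filtered_out, unique, duplicate;
};

/* Hypothetical hooks: a filtering test, and a remembered-set insert
   that reports whether the entry is new. */
typedef bool (*filter_fn)(void *src, void *tgt);
typedef bool (*insert_fn)(void *src);   /* true if a new entry was added */

static void write_barrier(struct wb_stats *s, void *src, void *tgt,
                          filter_fn is_filtered, insert_fn rs_insert)
{
    s->stores++;
    if (tgt == NULL)                 /* null pointer: dropped immediately */
        return;
    s->nonnull++;
    if (is_filtered(src, tgt)) {     /* e.g. generational or block-local test */
        s->filtered_out++;
        return;
    }
    if (rs_insert(src))
        s->unique++;                 /* a new remembered-set entry was added */
    else
        s->duplicate++;              /* same source address already recorded */
}
```

Note how the counts are order-dependent, as the text observes: a store rejected by the null check never reaches the filtering test at all.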
6.1.1 Results

In Table 6.1 we report the statistics that are independent of the collection algorithm or its configuration. The columns “Words alloc.” and “Ptr. st.” repeat the information of Table 4.1. The column “Alloc./st.” gives the ratio of the number of words allocated to the number of pointer stores for the program. This value is an indication, albeit crude, of the frequency of pointer manipulations, or the “object-orientedness” of the program, as well as of the expected influence of pointer maintenance operations on overall memory management performance. The last two columns give the number of non-null pointer stores as an absolute number, and as a percentage of all pointer stores. In many implementations of the write barrier, the check for a null pointer is a separate first step. As we can see in our numbers, there are few null pointers, and this test rarely eliminates the pointer from consideration. Hence a write barrier that integrates the null-pointer test into a general address test, as proposed in Section 3.2.4, is to be preferred.

In Tables 6.2–6.15, pp. 295–382 (at the end of this chapter) we report the pointer management statistics for the different configurations of the FC-DOF and the 2GYF collectors. The first group of tables (Tables 6.2–6.8) is for the FC-DOF collector. The first three columns in Tables 6.2–6.8 indicate the configuration. The first column, having values between 9 and 20,
Table 6.1. Properties of benchmarks used: run-time allocation and pointer stores

Benchmark                   Words alloc.    Ptr. st.   Alloc./st.   Non-null ptr. st.      %
Smalltalk
Interactive-AllCallsOn            18 359         855        21.47                 134   15.7
Interactive-TextEditing           22 747       3 509         6.48               3 241   92.4
StandardNonInteractive           204 954       7 251        28.26               5 882   81.1
HeapSim                        3 791 306      30 298       125.13              30 233   99.8
Lambda-Fact5                     277 940      91 877         3.03              87 403   95.1
Lambda-Fact6                   1 216 247     404 670         3.01             387 037   95.6
Swim                           1 533 642     134 355        11.41             120 427   89.6
Tomcatv                        2 117 849     286 032         7.40             285 056   99.6
Tree-Replace-Binary              209 600      39 642         5.29              32 986   83.2
Tree-Replace-Random              925 236     168 513         5.49             140 029   83.1
Richards                       4 400 543     763 626         5.76             611 767   80.1
Java
JavaBYTEmark                   1 161 949      49 061        23.68              48 835   99.5
Bloat-Bloat                   37 364 458   4 927 497         7.58           4 376 798   88.8
Toba                          38 897 724   3 027 982        12.85           2 944 672   97.2
Geometric mean                                              10.27                       80.2
gives the block size: for value k, the block size is 2^k bytes. The second column gives the allotted heap size, in terms of blocks. The third column gives the size of the collected region (the window size), also in terms of blocks. The fourth column gives the absolute copying cost as the number of words copied. The fifth column gives the number of collector invocations. The sixth column gives the number of remembered set insertion operations at the write barrier, i.e., the number of pointer stores that were not filtered out. The seventh column gives the number of insertion operations that are not duplicates, and hence actually add entries to remembered sets. The eighth column gives the sum of the sizes of all remembered sets processed at garbage collection time (i.e., the sets belonging to blocks within the collected region). The ninth column gives the sum over all collections of the sizes of filtered unions of sets within the collected region.

The design used to produce the statistics is the following. Each block has its own remembered set (Section 3.2.2). The write barrier code is executed on every pointer store, both at run time and at garbage collection time. The write barrier consists of three filtering checks: first,
the check for null pointers; second, the check for block-local pointers; and third, the directional check (as in Figures 3.15 and 3.16). If none of the checks eliminates the pointer store, then the write-barrier code inserts it into the remembered set of the pointer target block, and adds it to the tally for table column six, “W.b. insert.” The remembered set may already contain an entry recording the same pointer source address; otherwise, a new entry is added to the remembered set, and to the tally for table column seven, “W.b. add.” At collection time, following the processing of global roots, the collector processes the remembered sets of the blocks in the collected region one by one. Table column eight, “R.s. proc,” reports the sum of the sizes of all remembered sets processed over all collections. On the other hand, the sum of the sizes of remembered sets for the collected region can be greater than the number of pointers actually crossing into the collected region from the uncollected region. The latter number is computed as the union of the remembered sets involved, less any pointers that have both source and target in the collected region. The sum of these numbers over all collections is reported in column nine, “R.s. union.”

The second group of tables (Tables 6.9–6.15) is for the 2GYF collector. The first four columns in Tables 6.9–6.15 indicate the configuration. The first column gives the block size, the second column gives the allotted heap size in blocks, the third column gives the size of the younger generation (the nursery) in blocks, and the fourth column gives the size of the older generation in blocks. The fifth column gives the absolute copying cost as the number of words copied. The sixth column gives the number of collector invocations. The seventh column gives the number of remembered set insertion operations at the write barrier, and the eighth gives the number of these that are not duplicates.
The ninth column gives the sum of the sizes of processed remembered sets. The design used to produce the statistics is the following. There is a single remembered set, and it records the pointers from the older generation into the younger generation. The write barrier code is executed on every pointer store at run time. No write barrier is needed at garbage collection time, because, with only two generations, all objects are in the oldest
generation following a collection. The write barrier consists of two filtering checks: first, the check for null pointers; and second, the generational check. If neither filter eliminates the pointer store, it is inserted into the remembered set, and added to the tally for column seven, “W.b. insert.” If the pointer source is not already present in the remembered set, a new entry is added to the set, and to column eight, “W.b. add.” At collection time, the collector processes the single remembered set. Table column nine, “R.s. proc,” reports the sum, over all collections, of the sizes of the remembered set. There is no need for a “R.s. union” column for 2GYF, because its meaning is subsumed by “R.s. proc.”

Each row of a table corresponds to one configuration of a collector, and thus to one simulation run. The configurations are ordered by block size, and for each block size, they are ordered by heap size. For a given heap size, the configurations are ordered by the size of the collected region (in the case of FC-DOF), and by the size of the younger generation (in the case of 2GYF). Certain rows are printed in boldface for ease of reference in both the FC-DOF and the 2GYF tables; we shall use these configurations as examples to discuss the comparative costs of
the two schemes.
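The filtering write barriers just described can be sketched as follows. This is an illustrative Python rendering, not the toolkit's implementation; the helper names, the address-to-block mapping, and the direction retained by the FC-DOF directional check are assumptions here (the actual checks are those of Figures 3.15 and 3.16).

```python
# Illustrative sketch of the two filtering write barriers, with tallies
# corresponding to the table columns. All names are hypothetical; the
# direction kept by the directional check is an assumption.

from collections import defaultdict

LOG_BLOCK = 9                       # block size 2^9 bytes, as in the examples

def block_of(addr):
    return addr >> LOG_BLOCK

def fcdof_write_barrier(src, tgt, position, remsets, tally):
    """FC-DOF barrier: null check, block-local check, directional check."""
    if tgt is None:                              # first: null pointers
        return
    sb, tb = block_of(src), block_of(tgt)
    if sb == tb:                                 # second: block-local pointers
        return
    if position[sb] > position[tb]:              # third: directional check
        return                                   # (assumed direction)
    tally['wb_insert'] += 1                      # column "W.b. insert"
    if src not in remsets[tb]:                   # duplicate source address?
        remsets[tb].add(src)
        tally['wb_add'] += 1                     # column "W.b. add"

def gyf_write_barrier(src, tgt, is_old, remset, tally):
    """2GYF barrier: null check, then the generational check."""
    if tgt is None:
        return
    if not (is_old(src) and not is_old(tgt)):    # only old-to-young recorded
        return
    tally['wb_insert'] += 1
    if src not in remset:
        remset.add(src)
        tally['wb_add'] += 1
```

For example, a store from address 0x10 to 0x400 crosses from block 0 to block 2 and, if the directional check passes, is tallied under "W.b. insert"; repeating the same store increments only that tally, not "W.b. add."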
6.1.2 Block size

We first examine the effect of block size on copying and pointer management costs. For the FC-DOF collector, we find that increasing the block size reduces the operation counts for the
write barrier and remembered set processing. Consider benchmark Lambda-Fact5 in Table 6.4, p. 310. We shall compare block sizes 2^9 bytes and 2^10 bytes, and, in particular, the configurations, printed in italic boldface font, for block size 2^9 bytes and heap size 240 blocks, and for block size 2^10 bytes and heap size 120 blocks. These configurations are pairwise identical with respect to heap size and collected region size. The differences in copying cost are small in some pairs, but significant in others, such as 30236 words copied for configuration (9, 240, 108) vs. 44507 words copied for configuration
(10, 120, 54). There are two reasons for the differences. First, blocks of different size cause different fragmentation at block boundaries. However, since blocks are already substantially larger than the typical object sizes present in the trace, the effect of fragmentation should be small, and the effect of differences in fragmentation even smaller; moreover, larger blocks should suffer less from fragmentation than smaller blocks, whereas we see that the copying cost is lower for smaller blocks. The second reason, which applies here, is that the positioning of the collection window in the FC-DOF scheme is constrained by block boundaries and is less flexible with larger blocks, and, as Section 5.4.5.2 has shown, window positioning is important for good performance of FC-DOF. If we focus on pairs with similar copying cost, we note that larger blocks enjoy lower counts in columns "W.b. insert," "W.b. add," and "R.s. proc," which is to be expected: with larger blocks, a greater portion of pointer stores are block-local, hence filtered out. Note, however, that doubling the block size reduces the number of write-barrier and remembered-set operations by only about 10% in these examples. The column "R.s. union" is not reduced by increasing the block size, since the number of external pointers into the collected area does not depend on its internal division. For the 2GYF collector (Table 6.4, p. 310), again comparing the configurations for block size 2^9 bytes and heap size 240 blocks with those for block size 2^10 bytes and heap size 120 blocks, we find that the differences in copying cost are generally not large, and in some cases favor larger blocks, in other cases smaller ones. The pointer management operation counts are not systematically affected by block size, which is to be expected, since the generational write-barrier check ignores block boundaries, except to the extent that it is indirectly influenced by fragmentation.
6.1.3 Comparison between collectors

We compare the FC-DOF and the 2GYF collectors with respect to copying and pointer management costs. Consider the benchmark StandardNonInteractive with a block size of 2^9 bytes
and heap size of 25 blocks, i.e., 3200 words (Table 6.2, p. 295). Among the configurations of the FC-DOF collector for this heap size, the lowest copying cost of 3902 words is achieved with the collected region size set to 4 blocks. We abbreviate this configuration as (9, 25, 4). Among the
configurations of the 2GYF collector for this heap size (Table 6.9, p. 335), the lowest copying cost of 15699 words is achieved with the younger generation set to 17 blocks and the older generation set to 8 blocks, abbreviated (9, 25, 17, 8). Thus the copying cost of the FC-DOF collector is
significantly (4.02 times) lower than that of the 2GYF collector, which confirms, in the context of a block-based implementation, the observations we made in Sections 5.1 and 5.2. The number of pointer management operations, however, is significantly higher for FC-DOF than for 2GYF. The number of non-filtered pointer stores is 688 vs. 13, i.e., 52.9 times greater, and the number of processed remembered set entries is 615 vs. 12, i.e., 51.5 times greater. It should come
as no surprise that pointer management costs are higher in the FC-DOF scheme. The "R.s. union" column for FC-DOF indicates that the total number of pointers into the collected region (summed over all collections) is 392, whereas the corresponding number for 2GYF is just 12. Therefore, regardless of the efficiency of pointer filtering, when collected regions are chosen in the FC-DOF manner, the number of incoming pointers is high. The causes are the small size of the region (and large size of the remainder) and the fact that the region regularly visits all parts of the heap, some of which have many incoming pointers; we investigate pointer structure further in Section 6.2. We can then remark on the efficiency of the FC-DOF filtering scheme of Section 3.2.2: out of 7251 pointer stores in StandardNonInteractive (Table 6.1), only 688, or 9.5%, are inserted into remembered sets. This reduction in the number of pointers that are recorded is certainly not as good as the 13, or 0.2%, achieved by the generational filtering of 2GYF. On the other hand, we noted that keeping a large number of pointers is a necessary price to pay in order to choose collected regions in a particular way. The true measure of the efficiency of the filtering scheme is therefore given by the ratio of the "R.s. proc" column to the "R.s. union" column: 615/392 ≈ 1.56; thus 56% more pointers than absolutely necessary are processed at garbage collection time.
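This efficiency metric can be computed directly from the table columns. The following small sketch (illustrative only) applies it to the configurations discussed in this section:

```python
# Filtering efficiency: the ratio of processed remembered-set entries
# ("R.s. proc") to the pointers actually crossing into the collected region
# ("R.s. union"); the excess over 1 is the overhead of the filtering scheme.

def filtering_overhead(rs_proc, rs_union):
    """Fraction of processed pointers beyond the necessary minimum."""
    return rs_proc / rs_union - 1.0

print(int(filtering_overhead(615, 392) * 100),      # StandardNonInteractive (9, 25, 4)
      int(filtering_overhead(3297, 1628) * 100),    # HeapSim (13, 123, 4)
      int(filtering_overhead(29686, 11354) * 100))  # Richards (10, 20, 8)
# prints: 56 102 161  (percent overhead, matching the text)
```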
We see that the copying cost is lower for the FC-DOF collector, and the pointer management cost is lower for the 2GYF collector. We must look at absolute numbers to determine which collector achieves a lower total cost. The difference in copying cost, in favor of FC-DOF, is 11797 words copied. The difference in pointer management cost, in favor of 2GYF, is 675 remembered set insertions (636 non-duplicate), and 603 processed entries at garbage collection time. Which of these is more expensive depends on the details of implementation. For the sake of a simple analysis, we subsume the different components of pointer management under the single appellation of pointer processing cost, which we associate with the highest operation count, that of the non-filtered pointer stores, thus erring conservatively in favor of the scheme with lower pointer management cost. If the cost of processing a pointer is greater than 11797/675 ≈ 17.5 times the cost of copying a word at garbage collection time, then the 2GYF
collector will have the advantage for benchmark StandardNonInteractive. (These break-even ratios are summarized for all examined configurations in Figures 6.1–6.9, pp. 389–393, which we discuss below.) If we are interested in incremental operation of the collector, then we should compare configurations that result in similar (and large) numbers of collector invocations. In this case, if the 2GYF collector is configured with the younger generation at 4 blocks and the older generation at 21 blocks, then it will perform 418 collections, the same as the FC-DOF collector with a collection window of 4 blocks. The copying cost advantage of FC-DOF is now greater, 23510 words, but the pointer management advantage of 2GYF is only slightly smaller: 673 remembered set insertions (633 non-duplicate), and 600 processed entries at garbage collection time; the pointer/copy cost ratio would have to be over 34.9 for 2GYF to have the advantage.
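The break-even computation used throughout this comparison can be stated compactly. The following sketch (illustrative only, using the StandardNonInteractive figures above) reproduces the threshold:

```python
# Break-even pointer/copy cost ratio: if FC-DOF copies fewer words but
# performs more pointer operations, the ratio of per-pointer cost to
# per-word copying cost at which the two totals are equal is the copying
# advantage divided by the pointer-operation advantage.

def break_even_ratio(copied_fcdof, pointers_fcdof, copied_2gyf, pointers_2gyf):
    """Pointer/copy cost ratio above which 2GYF has the lower total cost."""
    return (copied_2gyf - copied_fcdof) / (pointers_fcdof - pointers_2gyf)

# StandardNonInteractive, best configurations (9, 25, 4) and (9, 25, 17, 8):
# the difference 11797 words copied vs. 675 pointer operations.
print(round(break_even_ratio(3902, 688, 15699, 13), 1))  # prints: 17.5
```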
Similar results obtain for the larger block size of 2^10: compare the configuration (10, 13, 3) of FC-DOF with configuration (10, 13, 7, 6) of 2GYF.
We turn our attention to benchmark HeapSim (Tables 6.3, p. 301, and 6.10, p. 343), at block size 2^13 bytes and heap size 123 blocks, or 251904 words. The best configuration of FC-DOF is (13, 123, 4), with copying cost 262984 words, with 4645 non-filtered pointer stores (of which 3731
non-duplicate), and 3297 processed entries at garbage collection time. The remembered set union total is 1628; thus the filtering efficiency is assessed at an overhead of 102% more processed pointers than absolutely necessary. The best 2GYF configuration is (13, 123, 41, 82), with copying cost 341702 words, with 22 non-filtered pointer stores (of which 12 non-duplicate), and 12 processed entries at garbage collection time. The copying cost advantage of FC-DOF is 89798 words; the pointer/copy cost ratio would have to be over 19.4 for 2GYF to have the overall advantage. Similar results obtain for the larger block sizes of 2^14 (compare the configuration (14, 62, 3) of FC-DOF with configuration (14, 62, 20, 42) of 2GYF) and 2^16 (compare (16, 16, 4) with (16, 16, 6, 10)).
In benchmark Richards (Tables 6.5, p. 322, and 6.12, p. 370), we consider the block size 2^10 bytes and heap size 20 blocks, or 5120 words. The best configuration of FC-DOF is (10, 20, 8), with copying cost 56758 words, with 30270 non-filtered pointer stores (of which 29775 non-duplicate), and 29686 processed entries at garbage collection time. The remembered set union total is 11354; thus the filtering achieves an overhead of 161%. The best 2GYF configuration is (10, 20, 11, 9), with copying cost 631920 words, with 3754 non-filtered pointer stores (of which 1541 non-duplicate), and 1541 processed entries at garbage collection time. The copying cost advantage of FC-DOF is 575162 words, and its pointer management disadvantage is 28614 insertions (26021 for non-duplicates) and 28145 processed entries. The pointer/copy cost ratio would have to be over 20.1 for 2GYF to have the overall advantage. With twice as large blocks (configurations (11, 10, 4) and (11, 10, 6, 4)) the results are similar, and the pointer/copy cost ratio break-even point is 27.0.
The situation is somewhat different for Lambda-Fact5 (Tables 6.4, p. 310, and 6.11, p. 354). Recall that no collector worked much better than the non-generational one for this benchmark in the object-level simulations of Section 5.1, and that differences among collectors were small. We now look at block size 2^9 bytes and heap size 90 blocks, or 11520 words. The best configuration of FC-DOF is (9, 90, 67), with copying cost 134723 words, with 21778 non-filtered pointer stores (of which 21769 non-duplicate), and 21087 processed entries at garbage collection time. The
remembered set union total is 4365, which means that filtering is unusually inefficient here, at an overhead of 383%. The best 2GYF configuration is (9, 90, 67, 23), with copying cost 158419 words, with 801 non-filtered pointer stores (of which 558 non-duplicate), and 557 processed entries at garbage collection time. The copying cost advantage of FC-DOF is 23696 words, but its disadvantage is 20977 insertions (21211 for non-duplicates) and 20530 processed entries. It is sufficient for the pointer/copy cost ratio to be over 1.12 for 2GYF to have the overall advantage.
Similarly, in Bloat-Bloat (Tables 6.7, p. 327, and 6.14, p. 376), we find that the copying cost difference between the collectors is very small, but the pointer management costs are noticeably higher for FC-DOF. For instance, with a block size of 2^16 bytes and a heap size of 28 blocks, or 458752 words, the best configuration of FC-DOF is (16, 28, 14), with copying cost 2308104 words and 387199 non-filtered pointer stores, whereas the best configuration of 2GYF is (16, 28, 13, 15), with copying cost 2498540 words and 33304 non-filtered pointer stores. In this case it is sufficient for the pointer/copy cost ratio to be over 0.53 for 2GYF to have the lower total cost.
In Toba (Tables 6.8, p. 331, and 6.15, p. 382), with 2^16-byte blocks and a heap size of 40 blocks, the difference in copying cost in favor of FC-DOF configuration (16, 40, 25) is 756188 words, and the difference in pointer cost in favor of 2GYF configuration (16, 40, 23, 17) is 288379 operations, giving a pointer/copy cost ratio threshold of 2.62.
Benchmark JavaBYTEmark stands out (Tables 6.6, p. 326, and 6.13, p. 375). We look at block size 2^18 bytes and a heap size of 6 blocks, or 393216 words. One of the (tied) best configurations of FC-DOF is (18, 6, 3), with copying cost 17634 words, with 23 non-filtered pointer stores (out of 49061 pointer stores), 19 of them non-duplicate, and 19 processed entries at garbage collection time. The best 2GYF configuration is (18, 6, 5, 1), with copying cost 37842 words, with 135 non-filtered pointer stores (of which 102 non-duplicate), and 16 processed entries at garbage collection time. Here both collectors have negligible pointer management costs, but FC-DOF is better, by more than a factor of 2, with respect to copying cost. A summary of the break-even ratios of pointer-operation cost to word-copying cost is presented in Figures 6.1–6.11, pp. 389–394 (at the end of this chapter), for all simulated heap
sizes, including the exemplars discussed above. In these graphs, the horizontal axis indicates the heap size (in words), and the vertical axis indicates the break-even ratio, shown on a logarithmic scale. Each graph contains several data lines, one for each block size. For example, Figure 6.1 has three lines, for block sizes 2^9, 2^10, and 2^11 bytes. If, in an implementation, the ratio of the cost of a pointer operation to the per-word cost of copying is above the plotted point, then 2GYF has a lower cost overall than FC-DOF.
6.1.4 Additional observations

In addition to the comparison between FC-DOF and 2GYF, we can observe a trade-off between copying cost and pointer-maintenance cost in the configurations of FC-DOF itself. Consider, for instance, the configurations (16, 28, ·) of benchmark Bloat-Bloat. As the size of the collected region is increased from 11 to 26 blocks, the copying cost first briefly falls from 2.6 × 10^6 at 11 blocks to the minimum of 2.3 × 10^6 at 13 blocks (the configuration we compared against 2GYF above), then rises to 5.4 × 10^6 at 24 blocks, and briefly falls to 5.2 × 10^6 at 26 blocks. Although the copying cost is not strictly monotone increasing, it shows a clear increasing trend. At the same time, the "R.s. union" column shows a decreasing trend, from 106683 entries in the remembered set union with a collected region of 11 blocks, down to 29426 with 26 blocks. The actual processing costs in the "R.s. proc" column are also nearly monotone decreasing. An examination of all the tabulated results for the FC-DOF collector reveals that there are remarkably few duplicate entries: the values in the "W.b. add" column are close to those in the "W.b. insert" column. Therefore, it is not essential to eliminate duplicates early in pointer management. A simple sequential store buffer representation of the remembered set, which does not remove duplicates, may be adequate. With the single generational remembered set of the 2GYF collector, the number of duplicates is significant: the values in the "W.b. add" column are typically well below those in the "W.b. insert" column. When the proportion of duplicate entries is large, it is worth eliminating
them early and cheaply, e.g., by using a card-marking write barrier, with cards flushed into sets without duplicates.

Since the proportion of pointer stores retained is higher in the FC-DOF collector than in the 2GYF collector, the amount of auxiliary space needed for the remembered sets is higher, which raises a concern about the total memory usage of the collector. It turns out, however, that the space needed for remembered sets is typically below 2% of heap size. In particular, assuming that a remembered set entry uses one word, the total size of remembered sets for all heap blocks is 0.62% of total heap size for StandardNonInteractive (in the configuration discussed above), 0.49% for Richards, 1.38% for Bloat-Bloat, and 1.02% for Toba. Therefore, the space overhead of pointer maintenance in FC-DOF is low.

The measured values for individual copying and pointer-tracking operations (Section 3.2.5) permit a more detailed estimation of the total cost of collection. However, our trace-based simulation does not simulate stack contents, and therefore, while its copying counts are accurate, it cannot model the pointer-scanning counts: the number of scanned pointers in to-space that do not result in object copying depends on the contents of the stack. We can establish lower and upper limits, however. The lower limit corresponds to approximately one duplicate pointer per object, that one being the pointer to the class object. The upper limit corresponds to all words being duplicate pointers. We assume individual operation costs as in Section 3.2.5, and, for a given heap size, we allow each collector to assume its best configuration with respect to the total cost estimate.
For the heap sizes as in the examples discussed above, the lower and upper bounds for the ratios of total processor cycles spent on garbage collection by 2GYF and by DOF are as follows: for benchmark StandardNonInteractive, between 2.66 and 3.28; for benchmark Richards, between 3.84 and 5.70; for benchmark Lambda-Fact5, between 0.98 and 1.04; for benchmark JavaBYTEmark, between 2.16 and 2.48; for benchmark Toba, between 0.98 and 1.08; for benchmark Bloat-Bloat, between 0.94 and 0.98. Therefore, pointer management costs do not significantly alter the qualitative performance relationship between the two collectors that we established when considering the amounts copied in Chapter 5: DOF
remains significantly better than 2GYF on one set of benchmarks, while not significantly worse than 2GYF on the remaining benchmarks.
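The duplicate-handling trade-off noted in Section 6.1.4 can be sketched as follows. This is an illustrative comparison of the two remembered-set representations, not the measured implementation, and the card size is a hypothetical choice.

```python
# A sequential store buffer simply appends and keeps duplicates, which is
# adequate for FC-DOF, where duplicates are rare. Card marking bounds
# duplicate work when duplicates are common, as in 2GYF: the barrier marks
# a card idempotently, and dirty cards are flushed into a duplicate-free
# set at collection time.

CARD_SHIFT = 7  # hypothetical 2^7-byte cards

class SequentialStoreBuffer:
    def __init__(self):
        self.buf = []
    def record(self, slot):
        self.buf.append(slot)                  # constant time; duplicates kept

class CardTable:
    def __init__(self):
        self.dirty = set()
    def record(self, slot):
        self.dirty.add(slot >> CARD_SHIFT)     # mark the card; idempotent
    def flush(self):
        """At collection time: dirty cards yield a duplicate-free set."""
        cards, self.dirty = self.dirty, set()
        return cards

ssb = SequentialStoreBuffer()
cards = CardTable()
for slot in (0x100, 0x104, 0x100, 0x200):      # four stores, two repeated slots
    ssb.record(slot)
    cards.record(slot)
flushed = cards.flush()
print(len(ssb.buf), len(flushed))              # prints: 4 2
```

The store buffer records all four stores; the card table collapses them to two distinct cards, trading precision (a card, not a slot) for cheap early duplicate elimination.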
6.2 Methods for examining the directions of pointers in the heap

An evaluation of the cost of remembered-set maintenance provides us with raw numbers, such as the number of executed write-barrier checks, the number of processed remembered-set entries, and an estimate of the time penalty of these operations. Upon seeing these numbers, we can assess the effect of pointer tracking on different collectors, but we are at a loss about the origins of the effect. Just as it is necessary to recognize where (in which region of the heap) the high copying cost was incurred, it is necessary to recognize where in the heap the pointer-tracking costs arise, if we are to understand and remedy the situation. Our purpose is to establish quantitative measurements where the literature so far provides speculative statements.
6.2.1 Pointer stores in the ideal heap

We begin with an examination of pointer store directions in an absolute sense, without reference to a specific collection algorithm, assuming instead that the heap is perfectly collected, as we did for the heap profiles in Section 5.4.3. For each non-null pointer store, we note the heap position of both the pointer source and the pointer target. Recall that the heap position of an object is expressed as the distance from the young end of the heap, i.e., the amount of data that was allocated since that object was allocated and is still live. We define pointer distance as the difference between the heap position of the pointer target and the heap position of the pointer source. Hence, the distance is positive for younger-to-older pointers, and negative for older-to-younger pointers. Aggregating over all stores, the distribution of positions is shown in Figures 6.12–6.25, pp. 395–408 (at the end of this chapter). Each figure contains three plots: (a) a histogram of the distribution of heap positions of pointer sources; (b) a histogram of the distribution of heap positions of pointer targets; and (c) a cumulative distribution graph for pointer distances and directions.
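The measurement just defined can be stated as a minimal sketch (illustrative only; the positions are supplied by hand here, whereas in the simulator they come from the trace):

```python
# Pointer distance: difference of heap positions of target and source.
# Positions grow with age; the young end of the heap is position 0.

def pointer_distance(pos_source, pos_target):
    """Positive for younger-to-older stores, negative for older-to-younger."""
    return pos_target - pos_source

# A store from a recently allocated object (position 40 words) into an old
# object (position 5000 words) points in the younger-to-older direction,
# and the reverse store in the older-to-younger direction:
print(pointer_distance(40, 5000) > 0, pointer_distance(5000, 40) < 0)
# prints: True True
```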
The pointer source histograms for all programs have a peak in the low positions, the youngest objects. A majority of pointer stores are into the most recently allocated objects. We emphasize that the histogram excludes null pointer stores, therefore the default initializations of pointer fields to the null pointer do not skew the distribution. Only the actual initializations by assignment are reflected in the histogram. The pointer target distributions also exhibit sharp peaks for the youngest objects, and indeed the fact that pointer distances are very short means that the source and target distributions must be similar. Thus a majority of pointers are from one object in the youngest part of the heap to another. We now understand that the generational write barrier works well because for most pointer stores, both the source and the target are in the youngest generation, and thus the store need not be recorded. The putative explanation, that older-to-younger pointers are rare relative to younger-to-older pointers, is neither true (as the pointer distances graphs show, both directions are common), nor relevant. Two programs with exceptional behavior are Swim and Tomcatv; they have secondary peaks of the pointer source distribution among the oldest objects. Both of these programs use a common implementation of floating-point matrices. A floating-point value in Smalltalk is not represented as an immediate register value, because the necessary tag bit(s) would both limit precision and require expensive conversions to the hardware floating-point format. Instead, a level of indirection is used, and each floating-point value is allocated in the heap. Computation creates new floating-point objects and updates the elements of a matrix to point to them. 
Since the matrix itself is allocated early in program execution, the computation produces a stream of old-to-new pointer stores. This peculiarity of the Smalltalk floating-point representation, and similar situations with frequently updated data structures with boxed representations in general, and their interaction with the write barrier, have been noted before [Wilson, 1990; Chambers, 1992; Wilson, 1992] as a potential problem for the generational write barrier.
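The Swim/Tomcatv pattern can be reproduced in a small illustrative simulation (all names are hypothetical, and an allocation clock stands in for object age):

```python
# A matrix allocated early whose elements are repeatedly overwritten with
# freshly boxed floating-point objects: every update is an old-to-young
# pointer store, so a generational write barrier must record all of them.

class Boxed:
    def __init__(self, value, birth):
        self.value = value
        self.birth = birth          # allocation time; larger = younger

clock = 0
def allocate(value):
    global clock
    clock += 1
    return Boxed(value, clock)

matrix = [allocate(0.0) for _ in range(100)]   # allocated early: old objects
matrix_birth = clock

recorded = 0
for step in range(1000):                       # the computation loop
    cell = step % 100
    box = allocate(float(step))                # fresh box: a young object
    # generational check: the source (the matrix) is older than the new
    # target, so the store cannot be filtered and must be recorded
    if matrix_birth < box.birth:
        recorded += 1
    matrix[cell] = box

print(recorded)  # prints: 1000 -- every update is an old-to-young store
```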
6.2.2 Pointer stores and remembered sets in age-based collectors

We now examine the pointer distributions that arise as a result of the execution of particular collection algorithms. Our frame of reference is an age-ordered heap, divided into blocks of equal size. The heap position of the youngest block (i.e., the current allocation block) is numbered 0, and older blocks are given successive numbers. We are interested in the distribution of pointer-tracking events by heap position; in other words, having observed the positional distribution with an ideally collected heap, we are interested in any bias that the actual collection scheme introduces. The resulting pointer source and target distributions are shown in plots (a)–(d) of Figures 6.26–6.31, pp. 409–414, for the benchmarks and their configurations discussed in the preceding Section 6.1.3. Plots (a) and (c) on the left reflect all non-null pointer stores, whereas plots (b) and (d) on the right reflect only those stores not eliminated by filtering, i.e., those inserted into remembered sets. Plots (a) and (b) show the histogram of positions of the pointer source, whereas plots (c) and (d) show the histogram of positions of the pointer target. Additionally, the bottom plots (e) of Figures 6.26–6.31 show the positional distribution of the remembered set entries chosen for processing at garbage collection time. Plots (a) and (c) show the same patterns as the corresponding plots (a) and (b) for the perfectly collected heap in the preceding section. Therefore the FC-DOF collector, owing to its regular full sweeps, does not maintain large quantities of uncollected ("tenured") garbage, which, if present, would visibly distort the distribution relative to the perfectly collected heap. Of greater interest is the distribution of heap position among the non-filtered pointer stores, in plots (b) and (d). First note that the absolute numbers are much smaller, since filtering eliminates most pointer stores.
A majority of the non-filtered stores still have their source in the youngest block (position 0). The non-filtered targets in the youngest block are few, because of the block-local filtering. However, there are many targets tailing off from block 1 towards older blocks—the directional filter could not eliminate the pointers from block 0 into these older blocks. In Lambda-Fact5
there is also a large number of pointers with source in block 0 and target among the very oldest objects. When a filter cannot eliminate a pointer store, it inserts it into the remembered set of the target block. Thus, most pointers are inserted into the remembered sets of blocks at low positions (but not position 0). However, some time elapses between the pointer store and the time when the target block becomes part of the collected region of a collection, and during that time the block ages and moves toward higher heap positions. Therefore, in the distribution of heap positions of processed remembered set entries, the peak is in the middle or higher (older) regions. In Lambda-Fact5, for instance, just the oldest of the 90 blocks accounts for 30% of all processed remembered set entries (Figure 6.29(e), p. 412). Recall that the oldest blocks contain permanent data (Figure 5.271, p. 258), hence collections immediately following the resetting of the window not only incur a high copying cost, but also a high pointer processing cost. Benchmark JavaBYTEmark is an exception: the filtering is so successful that very few pointers remain non-filtered, and, with their number so small, it is not warranted to discuss their positional distribution.
6.3 Method for evaluating the collection start-up cost

In addition to the copying cost of collection, which is directly related to the efficiency of the collector in finding low-survival regions, and the remembered-set maintenance cost, which is related to the ability of the collector to approximate the structure of the uncollected region safely and cheaply, there are other costs that are incurred on every collection. First, there is a direct overhead of switching system activity from the mutator to the collector. This overhead depends on the implementation details. In some implementations, the collection is triggered by a page fault on accessing a protected page past the end of the available nursery space, or it may involve saving architectural state because the mutator is written in a different language from the garbage collector itself, and in these cases the start-up may
be costly. On the other hand, this direct overhead is minimal in virtual machine implementations. The change of program focus from the mutator to the collector also has an adverse effect on reference locality: the performance of the instruction cache, and possibly the data cache, degrades. Second, each collection requires establishing the global roots. These are found in the registers (machine or virtual) and in the stack (if the implementation uses an explicit stack). Whether this task is a large fraction of total garbage collection cost depends on the implementation: in the case of an explicit stack, has the implementation provided stack maps in a form that permits efficient stack scanning [Diwan et al., 1992]?(1) The cost of stack scanning is not directly dependent on the garbage collection algorithm that is subsequently employed. However, the cost of one stack scan depends on the stack state (how many activation records there are on the stack), hence it depends on the moment at which the stack is examined. Since different collector algorithms result in different moments of collector invocation, there is an indirect dependence of total scanning cost on the collection algorithm, beyond the mere number of invocations; but with many invocations spread over the execution, at somewhat asynchronous points of collection, this additional dependence is probably very weak. In summary, there is a certain fixed cost to be paid for each collector invocation. If we are primarily concerned with the total running time of the program, then we must be careful in the choice of the garbage collection algorithm lest it cause too many collector invocations. For many programs, we found that the lowest copying costs are obtained with smaller collected regions or smaller nurseries, and consequently with larger numbers of collector invocations. Clearly, when some rather large number is reached [Stefanović and Moss, 1994], the invocation costs will dominate.
(1) In the case of a stack hidden within the heap, stack scanning is implicitly accomplished during the Cheney scan phase of garbage collection, so it is difficult to isolate the stack scanning cost.
If, on the other hand, the primary consideration is that of minimizing pause times, then the invocation cost represents a fixed component that cannot be avoided. Since this cost is independent of the collection algorithm, except as mentioned above, it is still appropriate to choose an algorithm that minimizes the other costs.
6.3.1 Results

The invocation cost is shown in Figures 6.32–6.135, pp. 415–459. The vertical axis shows the number of collector invocations relative to the non-generational collector. (This permits us to use the same scale for all plots for a given benchmark.) As before, the horizontal axis in the FC plots gives the size of the collected region expressed as the fraction g of heap size. In the GYF plots, the horizontal axis gives the size of the nursery as a fraction of heap size.
A very regular pattern emerges, for all benchmarks, and all heap sizes: when the size of the collected region (or nursery size) is close to the full heap size, the behavior is non-generational and the relative invocation count is close to 1; but as the size of the collected region decreases, the invocation count increases. This result is intuitively clear: the amount of garbage found by a collection cannot be larger than the size of the collected region, thus the number of collector invocations must rise as the reciprocal of that size, if we ignore for a moment the influence of that size on the proportion of survivors.
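The reciprocal relationship can be sketched as follows (the sizes and survival rate below are hypothetical; the point is only the shape of the dependence):

```python
# Each collection can free at most the collected region (less survivors),
# so sustaining a fixed total allocation demands proportionally more
# collections as the collected region shrinks.

import math

def invocations(total_alloc, heap, region, survival=0.0):
    """Collections needed when each invocation frees region*(1-survival)."""
    freed_per_gc = region * (1.0 - survival)
    deficit = total_alloc - heap        # allocation beyond the initial room
    if deficit <= 0:
        return 0
    return math.ceil(deficit / freed_per_gc)

full = invocations(10000, 100, 100)     # whole-heap (non-generational) case
small = invocations(10000, 100, 10)     # collected region at g = 1/10
print(full, small, small / full)        # prints: 99 990 10.0
```

With zero survival, the relative invocation count is exactly 1/g; a nonzero survival rate raises both counts but, as the text notes, the reciprocal trend remains.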
6.4 Summary

We have shown that pointer-maintenance costs in a generational collector are very low, confirming past research. The analysis of pointer stores reveals that the explanation lies in the location of stored pointers (in recently allocated data) and not in the temporal direction of pointers (younger-to-older or older-to-younger). The FC-DOF collector, because it collects smaller regions in all parts of the heap, incurs a considerably higher pointer-maintenance cost, both in the higher number of pointer stores that must be recorded at run time (even after efficient filtering), and in the high number of incoming
pointers processed at garbage collection time. Our trade-off analysis suggests that the copying cost advantage of DOF outweighs the pointer cost disadvantage for several tested programs, but the final verdict must await a full implementation. Let us note, however, that the focus of our study on the application of collectors to the entire heap precluded an investigation of the space of older (mature) objects only, in which the distributions of pointer stores as well as of lifetimes are less favorable to the generational approach (to the extent that this can be plausibly inferred from observations of relatively longer-lived objects in our programs). Moreover, it can be argued, pending further measurements, that, with remembered set costs having already been absorbed in the mature object space [Hudson and Moss, 1992], deferred older-first becomes a collection policy worth exploring. It is a different policy from the train algorithm [Hudson and Moss, 1992; Seligmann and Grarup, 1995]; in particular, it does not guarantee reclamation of arbitrary cycles. On the other hand, the train algorithm leaves certain promotion decisions for objects moved out of a train underspecified; whereas locality (for the purpose of exploiting key object opportunism [Hayes, 1991]) is suggested as a criterion for making these decisions (thus: moving an object close to the object that keeps it live by pointing to it), age is a permissible criterion as well.
Table 6.2. Costs of block FC-DOF for StandardNonInteractive

Configuration   Copied  Invoc  W.b. insert  W.b. add  R.s. proc  R.s. union
9 12 10          98850    301         2668      2662       2645         371
9 14 6           20028    304          693       674        666         358
9 14 7           26720    269         1022      1005        986         440
9 14 9           81824    269         2333      2318       2292         592
9 14 10          80452    250         2365      2342       2322         474
9 14 12          73403    222         1705      1699       1686         234
9 17 6           14719    296          704       675        662         406
9 17 7           15403    255          623       597        584         343
9 17 8           14888    222          621       600        591         306
9 17 9           17476    201          608       593        572         219
9 17 10          19831    182          758       744        738         236
9 17 11          25396    171          897       881        875         465
9 17 12          56139    182         1751      1733       1720         431
9 17 13          56103    172         1375      1369       1315         179
9 17 14          56389    167         1505      1500       1483         171
9 17 15          53896    159         1394      1391       1375         190
9 17 16          51752    152         1007      1006        996          25
9 21 6            9878    288          640       607        582         329
9 21 7            9729    247          598       567        553         301
9 21 8           10506    217          575       544        534         262
9 21 9           10585    193          531       509        453         232
9 21 11          11634    159          514       499        486         282
9 21 12          13141    147          607       589        571         328
9 21 13          14320    137          517       504        497         227
9 21 14          15793    128          504       495        490         162
9 21 16          39201    125         1005       998        950         228
9 21 17          39948    122         1062      1056       1047         198
9 21 18          38121    117          834       833        821         164
9 21 19          37421    113          724       723        707          41
9 25 4            3902    418          688       648        615         392
9 25 5            4456    335          678       637        614         395
9 25 7            5563    241          668       630        601         308
9 25 8            7085    213          582       547        529         237
9 25 10           7141    170          562       531        514         239
9 25 11           8247    156          503       478        469         226
9 25 13           8773    132          426       410        397         212
9 25 14           8506    123          386       374        361         222
9 25 16           9578    108          368       364        347         187
9 25 17          10837    103          340       333        320         122
9 25 18          12331     97          429       424        411         164
9 25 20          30557     96          832       828        812         179
9 25 21          31288     94          859       854        804         137
9 25 22          31622     92          849       845        838         122
9 25 23          30533     89          694       688        675          47
9 25 24          30304     87          615       612        598          74
9 30 3            2246    550          664       623        595         444
9 30 4            2389    413          669       628        602         369
9 30 6            3519    278          678       640        619         360
continued on next page
Table 6.2. continued Configuration 9 30 8 9 30 10 9 30 12 9 30 13 9 30 15 9 30 17 9 30 19 9 30 20 9 30 22 9 30 24 9 30 26 9 30 27 9 30 28 9 30 29 9 37 3 9 37 6 9 37 8 9 37 10 9 37 12 9 37 14 9 37 17 9 37 19 9 37 21 9 37 23 9 37 25 9 37 27 9 37 30 9 37 31 9 37 33 9 37 34 9 37 35 9 37 36 9 45 4 9 45 7 9 45 9 9 45 12 9 45 15 9 45 18 9 45 20 9 45 23 9 45 26 9 45 28 9 45 31 9 45 33 9 45 36 9 45 38 9 45 40 9 45 41
Copied 3210 3953 3827 3594 4885 7070 6247 8405 9205 9733 23200 24382 23502 24267 1621 1495 1486 1464 1560 2177 2317 2887 3155 3760 4376 5138 7571 8662 17392 18962 18851 18994 1492 1465 1376 1356 1405 1314 1349 1277 1484 2632 3011 3707 5089 6551 14201 14880
Invoc 208 167 139 128 112 100 89 86 78 72 72 71 70 69 546 273 205 164 137 118 97 87 79 72 67 62 57 55 55 55 54 53 407 233 181 136 109 91 82 71 63 59 53 50 47 44 44 44
W.b. Insert 658 648 555 544 482 423 289 373 227 191 540 553 471 444 683 679 682 673 672 626 496 428 372 280 228 158 273 261 398 468 469 382 698 698 698 697 665 658 605 501 362 329 279 186 124 225 299 345
296
W.b. Add 619 609 522 515 458 403 279 363 218 187 534 550 470 444 642 638 641 632 631 590 468 405 355 269 218 152 270 256 396 467 467 382 657 657 657 656 625 620 569 472 343 313 272 181 118 221 296 342
R.s. proc R.s. union 599 246 589 285 505 198 493 201 445 241 385 222 272 166 299 147 209 120 174 82 526 128 542 52 465 92 438 22 611 462 601 344 603 254 597 273 598 225 568 222 451 152 389 147 336 164 265 109 206 118 147 80 259 147 246 78 391 71 454 54 451 43 370 22 610 383 608 345 607 297 604 243 588 331 591 191 547 183 438 131 320 109 301 144 256 114 170 65 108 35 214 70 284 87 329 29 continued on next page
Table 6.2. continued Configuration 9 45 42 9 45 43 9 54 2 9 54 5 9 54 8 9 54 11 9 54 15 9 54 18 9 54 21 9 54 24 9 54 28 9 54 31 9 54 34 9 54 37 9 54 40 9 54 43 9 54 46 9 54 48 9 54 50 9 54 51 9 54 52 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10
6 7 9 9 9 9 9 11 11 11 11 11 11 11 11 13 13 13 13 13 13 13 13 13 13 15 15
5 6 4 5 6 7 8 3 4 5 6 7 8 9 10 3 4 5 6 7 8 9 10 11 12 3 4
Copied 14768 13976 1777 1508 1440 1414 1387 1288 1279 1235 1237 1267 1546 1556 2654 3743 6161 6683 11511 12149 11703
Invoc 43 42 811 324 203 148 108 90 78 68 58 53 48 44 41 39 37 35 35 35 34
W.b. Insert 326 272 719 717 723 714 693 692 688 668 514 414 291 222 212 116 108 152 255 303 243
W.b. Add 323 272 678 676 682 673 653 652 648 629 483 393 276 211 204 113 104 151 253 301 241
94809 71219 12907 17226 51354 50572 47608 8530 8571 9271 11251 13004 35565 36631 36273 4236 5581 6553 6637 7851 6745 9446 27375 28988 28514 2557 3302
294 217 214 175 170 153 142 278 209 168 141 122 119 111 105 272 205 165 138 119 103 93 91 87 83 269 203
1854 1081 517 491 1074 1049 808 603 519 435 419 374 799 650 604 531 546 563 446 305 347 288 473 616 501 557 590
1851 1073 500 479 1062 1041 806 582 496 422 408 367 796 647 603 504 517 538 428 293 344 285 470 615 500 529 564
297
R.s. proc 320 264 631 629 629 623 601 603 615 588 448 366 251 195 193 97 82 143 245 285 195
R.s. union 53 48 537 428 289 278 346 201 186 141 133 135 94 80 104 48 27 78 63 20 11
1833 372 1061 154 495 360 468 207 1061 389 1030 255 791 67 565 388 491 330 417 294 397 242 359 246 786 195 634 140 594 156 502 315 504 287 528 310 427 263 291 160 293 170 237 148 461 149 609 180 487 130 513 332 551 322 continued on next page
Table 6.2. continued Configuration 10 15 5 10 15 6 10 15 7 10 15 8 10 15 9 10 15 10 10 15 11 10 15 12 10 15 13 10 15 14 10 19 2 10 19 3 10 19 4 10 19 5 10 19 6 10 19 7 10 19 9 10 19 10 10 19 11 10 19 12 10 19 13 10 19 14 10 19 15 10 19 16 10 19 17 10 19 18 10 23 2 10 23 3 10 23 5 10 23 6 10 23 8 10 23 9 10 23 10 10 23 12 10 23 13 10 23 14 10 23 16 10 23 17 10 23 18 10 23 20 10 23 21 10 23 22 10 27 2 10 27 4 10 27 6 10 27 7 10 27 9 10 27 11
Copied 3311 3607 3374 4807 6164 7375 9205 23645 23797 24146 1820 1591 1602 1570 1520 1432 1782 2988 2912 4234 4377 6130 5266 16607 18372 18033 1798 1569 1524 1474 1465 1462 1388 1717 1915 1306 2822 4219 4546 13522 13615 13359 1788 1538 1452 1340 1366 1326
Invoc 162 135 116 102 92 83 76 75 71 69 400 266 200 160 133 114 89 81 73 68 63 59 55 54 53 51 398 265 159 133 100 89 80 67 62 57 50 48 45 43 41 40 396 198 132 113 88 72
W.b. Insert 566 548 398 313 267 238 320 491 420 440 585 589 591 587 583 550 415 339 290 256 217 245 132 279 473 299 602 605 607 608 584 568 540 401 315 274 159 111 176 255 311 191 623 630 630 598 589 582
298
W.b. Add 541 525 378 302 261 230 313 485 418 439 557 561 563 559 555 525 395 321 278 248 209 239 130 278 469 297 574 577 579 580 557 541 515 385 304 262 155 107 170 251 310 190 595 602 602 571 562 556
R.s. proc R.s. union 528 281 515 241 372 227 301 191 252 152 215 148 306 184 474 143 416 82 426 58 534 425 534 358 533 311 528 278 524 236 509 244 389 163 313 157 256 131 242 130 206 134 236 143 129 73 275 76 455 157 296 57 540 434 539 366 538 295 547 259 540 216 527 194 501 187 372 124 296 119 256 136 147 80 101 56 170 89 241 113 310 101 184 19 560 454 560 344 557 272 539 275 535 208 531 193 continued on next page
Table 6.2. continued Configuration 10 27 12 10 27 14 10 27 15 10 27 17 10 27 18 10 27 20 10 27 22 10 27 23 10 27 24 10 27 25 10 27 26 11 11 11 11 11 11 11 11 11 11 11 11 11 11 11 11 11 11 11 11 11 11 11 11 11 11 11 11 11 11 11 11 11 11 11 11 11
4 5 5 6 6 6 6 7 7 7 7 7 8 8 8 8 8 8 10 10 10 10 10 10 10 10 12 12 12 12 12 12 12 12 12 12 14
3 3 4 2 3 4 5 2 3 4 5 6 2 3 4 5 6 7 2 3 4 5 6 7 8 9 2 3 4 5 6 7 8 9 10 11 2
Copied 1294 1204 1305 1319 1614 2601 4323 4753 10737 11380 11676
Invoc 66 57 53 47 44 40 37 35 35 34 34
W.b. Insert 572 433 351 257 210 187 186 231 230 218 159
W.b. Add 547 413 336 244 201 181 181 229 228 218 159
65841 48669 44080 6708 9426 35048 33374 3504 4999 6905 28399 26087 2119 2951 4086 6241 21036 22735 1392 1372 1290 2137 3129 4339 14946 16282 1392 1352 1282 1258 1236 1470 2622 3653 12324 13546 1366
206 166 135 206 139 117 102 202 136 103 91 80 200 134 101 82 73 67 198 132 99 80 67 58 53 50 197 132 99 79 66 57 50 45 42 40 196
867 776 472 385 397 666 289 403 346 251 504 358 421 380 283 136 346 272 446 445 410 269 157 123 107 83 468 447 437 414 314 236 143 128 152 110 482
857 765 470 374 387 663 286 393 336 249 501 356 407 370 280 134 343 270 432 431 398 263 154 122 107 83 454 433 424 402 307 229 139 127 151 110 468
299
R.s. proc 523 402 322 236 195 169 171 216 224 216 155
R.s. union 148 151 128 109 95 114 100 141 97 63 17
806 441 762 341 470 232 372 297 379 291 657 329 286 177 386 278 336 221 247 196 501 245 352 114 378 271 359 204 272 191 130 94 337 151 268 69 393 294 392 241 379 171 258 138 149 102 122 94 103 51 82 39 413 314 410 259 401 191 383 160 294 139 227 146 138 74 126 91 151 80 109 46 415 320 continued on next page
Table 6.2. continued Configuration 11 14 3 11 14 4 11 14 5 11 14 6 11 14 7 11 14 8 11 14 9 11 14 10 11 14 11 11 14 12 11 14 13
Copied 1352 1268 1258 1228 1217 1154 1655 2531 3763 9620 11007
Invoc 131 98 79 66 56 49 44 40 37 34 33
W.b. Insert 464 457 447 441 355 287 178 126 91 140 84
W.b. Add 450 444 434 429 345 278 175 121 91 137 83
R.s. proc 424 407 411 408 316 254 174 120 85 129 81
R.s. union 276 207 180 161 157 104 98 85 59 78 56
12 12 12 12 12 12 12 12 12 12 12 12 12 12 12
3 4 4 5 5 5 6 6 6 6 7 7 7 7 7
2 2 3 2 3 4 2 3 4 5 2 3 4 5 6
37115 5110 23469 1429 4417 17690 1342 1232 3679 13746 1320 1210 1295 2375 11784
118 101 74 99 67 54 99 66 50 42 98 66 49 40 35
276 272 232 298 175 151 327 204 113 127 347 324 176 108 213
275 267 228 291 173 150 319 200 112 127 339 316 174 108 212
275 260 227 285 169 148 310 198 112 127 321 311 167 108 212
275 209 225 184 164 147 208 116 71 125 219 141 119 99 212
13 13 13
3 4 4
2 2 3
17281 1223 10752
54 49 34
250 144 187
248 139 185
248 135 177
247 131 177
Table 6.3. Costs of block FC-DOF for HeapSim
[Multi-page table omitted: same layout as Table 6.2, with one row per collector configuration and columns Copied, Invoc, W.b. insert, W.b. add, R.s. proc, and R.s. union.]
309
Table 6.4. Costs of block FC-DOF for Lambda-Fact5 Configuration 9 60 51 9 60 53 9 60 55 9 60 56 9 60 57 9 74 55 9 74 59 9 74 63 9 74 66 9 74 68 9 74 70 9 74 71 9 90 41 9 90 46 9 90 51 9 90 57 9 90 61 9 90 67 9 90 72 9 90 77 9 90 80 9 90 83 9 90 85 9 90 86 9 90 87 9 109 36 9 109 43 9 109 49 9 109 56 9 109 62 9 109 69 9 109 74 9 109 81 9 109 87 9 109 93 9 109 97 9 109 100 9 109 102 9 109 104 9 109 105 9 133 36 9 133 44 9 133 52 9 133 60 9 133 68 9 133 76 9 133 84 9 133 90
Copied 482991 450215 405639 374651 381807 265528 240676 234167 220311 234085 253979 254287 175200 142305 186703 166633 142518 134723 144858 142746 145369 158048 161111 163742 175843 130099 145471 114712 108150 129899 117367 111536 104571 106880 117771 116277 115132 117070 128408 119759 145531 83796 87488 80920 83801 93625 79042 75734
Invoc 158 147 134 125 125 89 80 76 72 73 75 75 90 74 76 63 56 51 50 48 47 48 48 48 49 90 78 64 55 53 46 42 38 36 36 35 34 34 35 34 92 64 55 47 42 39 34 31
W.b. insert 64550 60235 52834 48390 48949 39148 34351 31835 28890 29860 31514 31191 28705 23904 28381 27758 24510 21778 21317 20197 19577 20227 20296 20587 21677 24199 25416 21234 19976 22551 19598 18610 17338 16819 16595 16183 14899 14735 16222 15079 27529 19257 19423 17522 17613 17474 14759 13887
310
W.b. add 64522 60218 52818 48364 48929 39135 34329 31827 28882 29846 31507 31177 28673 23875 28366 27734 24487 21769 21307 20196 19572 20220 20292 20581 21667 24154 25380 21196 19946 22524 19586 18588 17324 16812 16589 16178 14895 14733 16217 15070 27461 19206 19376 17485 17584 17447 14746 13864
R.s. proc R.s. union 63936 8485 59583 6852 52100 4356 47787 3095 48300 2632 38387 7066 33295 5180 31239 3690 27829 2773 29139 2266 30835 1861 30598 1470 27814 9271 22604 6300 27778 7595 26685 6252 23817 5688 21087 4365 20511 3583 19633 2961 18935 2237 19415 1704 19429 1401 19669 1197 20938 996 23261 8062 24623 8706 20236 6104 19185 5030 21314 5840 18870 4522 17917 3873 16714 3483 16224 2813 16034 2291 15644 1970 13992 1564 13759 1342 15642 1137 14066 928 26460 10061 17881 5351 18692 5443 15784 4084 16223 4069 16041 4003 14105 3225 12821 2924 continued on next page
Table 6.4. continued Configuration 9 133 98 9 133 106 9 133 113 9 133 118 9 133 122 9 133 125 9 133 127 9 133 128 9 162 34 9 162 44 9 162 53 9 162 63 9 162 73 9 162 83 9 162 92 9 162 102 9 162 110 9 162 120 9 162 130 9 162 138 9 162 144 9 162 149 9 162 152 9 162 155 9 162 156 9 197 30 9 197 41 9 197 53 9 197 65 9 197 77 9 197 89 9 197 100 9 197 112 9 197 124 9 197 134 9 197 146 9 197 158 9 197 167 9 197 175 9 197 181 9 197 185 9 197 188 9 197 190 9 240 36 9 240 50 9 240 65 9 240 79 9 240 94
Copied 82075 76978 76246 77326 88379 91362 94684 97783 85476 65589 60983 68838 58747 51175 48529 53510 57471 61997 57740 52695 54891 69425 63426 73311 74063 83293 71507 63492 49691 46234 51685 47475 58411 33154 47276 44645 47460 44523 42019 51983 45132 46604 49161 66165 41801 33296 34702 31745
Invoc 29 26 25 25 25 25 25 25 82 59 49 42 36 31 27 25 24 22 20 19 18 19 18 19 19 90 64 48 38 31 28 25 23 19 19 17 16 15 14 14 14 14 14 70 47 35 29 24
W.b. Insert 14001 12414 11785 10945 12093 11908 11966 12000 21042 18049 17424 17351 15237 12907 11519 11613 11266 11140 9676 8274 8076 9286 8141 9192 9246 21369 19537 17825 16700 14826 13886 12533 12603 8694 9663 8582 7806 7207 6376 7065 6195 6163 6136 19519 16228 14906 14312 13478
311
W.b. Add 13973 12411 11774 10932 12090 11904 11964 11993 20984 18004 17379 17302 15198 12871 11487 11598 11253 11130 9670 8272 8071 9281 8133 9188 9245 21314 19479 17784 16657 14781 13846 12508 12577 8670 9642 8563 7795 7195 6366 7058 6188 6157 6127 19471 16182 14868 14271 13441
R.s. proc R.s. union 12520 2490 11811 2553 10704 1773 10155 1432 11382 1467 11213 1182 11267 1012 11229 855 19750 7169 16627 4646 16240 4119 16292 4738 14069 3242 11986 2510 10359 2238 10672 2473 10613 2338 9712 1974 9051 2240 7566 1621 7401 1210 8724 999 7112 830 8610 808 8668 703 19673 7093 18284 5925 15770 4156 15462 3904 13438 3064 12533 2515 11229 2299 11691 2556 7845 1964 8732 2010 7875 1732 7069 1426 6333 1334 5792 1170 6152 797 5282 684 5177 554 5249 437 17934 6155 14478 3655 13495 3242 12886 2583 12024 2076 continued on next page
Table 6.4. continued Configuration 9 240 108 9 240 122 9 240 137 9 240 151 9 240 163 9 240 178 9 240 192 9 240 204 9 240 213 9 240 221 9 240 226 9 240 229 9 240 231 9 293 26 9 293 44 9 293 62 9 293 79 9 293 97 9 293 114 9 293 132 9 293 149 9 293 167 9 293 185 9 293 199 9 293 217 9 293 234 9 293 249 9 293 260 9 293 270 9 293 275 9 293 280 9 293 283 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10
30 30 30 30 37 37 37 37 37 37 37 37 45 45 45 45
25 27 28 29 25 27 30 31 33 34 35 36 20 23 26 28
Copied 30236 25749 30186 35671 42159 31946 25854 44661 47529 41271 40648 43100 43731 78960 48386 37377 23859 29906 23008 17806 14761 24379 23699 27496 22390 20283 28566 28779 32277 27154 31270 32261
Invoc 21 18 17 16 15 13 12 12 12 11 11 11 11 99 53 36 27 23 19 16 14 13 12 11 10 9 9 9 9 8 8 8
W.b. Insert 12254 10882 9299 9111 8566 6434 5053 6569 6748 5845 5629 5507 5693 21310 17405 16073 14113 13251 11573 9961 8126 8897 7459 6902 5177 3954 4807 4340 5056 3874 4028 4059
W.b. Add 12212 10847 9269 9093 8546 6417 5032 6558 6740 5840 5625 5502 5690 21258 17360 16035 14069 13213 11538 9928 8098 8867 7436 6888 5171 3950 4800 4338 5051 3874 4027 4057
413467 403753 373729 371550 274770 276068 232838 226615 224004 240211 251142 241075 164353 174005 183599 163207
140 133 123 121 94 91 77 75 72 73 74 71 88 79 73 64
49345 47579 43097 41716 38269 36893 29235 27888 25777 27598 28007 26506 24582 25285 25605 23497
49302 47553 43043 41666 38228 36866 29210 27871 25761 27580 27979 26475 24531 25231 25576 23474
R.s. proc 10939 9659 7724 8335 7722 5705 4348 5913 5962 5298 5079 4969 5042 19489 15482 14537 11653 10122 9232 8140 6537 7816 6635 5988 4224 2881 4139 3224 4479 2996 2989 3092
R.s. union 2036 2004 1626 1851 1595 1471 1235 1126 1118 911 763 611 514 8452 4541 4192 1756 1325 1259 1394 1049 1804 1620 1518 1308 901 1102 762 850 586 457 342
48789 7362 47095 4848 42504 3196 41095 2005 37648 9775 36346 7770 28283 4889 27155 4122 25145 2791 26913 2326 27158 1791 25780 1163 23893 8801 24662 7996 25092 7753 22473 6106 continued on next page
Table 6.4. continued Configuration 10 45 31 10 45 33 10 45 36 10 45 38 10 45 40 10 45 41 10 45 42 10 45 43 10 55 18 10 55 21 10 55 25 10 55 28 10 55 31 10 55 35 10 55 37 10 55 41 10 55 44 10 55 47 10 55 49 10 55 51 10 55 52 10 55 53 10 67 18 10 67 22 10 67 26 10 67 30 10 67 34 10 67 38 10 67 42 10 67 46 10 67 50 10 67 54 10 67 57 10 67 60 10 67 62 10 67 63 10 67 64 10 67 65 10 81 17 10 81 22 10 81 27 10 81 32 10 81 36 10 81 41 10 81 46 10 81 51 10 81 55 10 81 60
Copied 143931 135780 132782 143000 149725 150563 165713 162230 141268 139135 108241 94932 131827 111711 121373 96466 95266 105484 115473 115346 117751 114722 94665 76280 89151 81977 74846 89887 88137 78012 78667 82301 80103 86334 81114 87887 77969 87186 102887 82760 62544 56052 53868 49660 48996 56192 51621 64744
Invoc 54 51 48 48 47 47 48 47 92 78 61 52 53 44 43 36 34 34 34 33 33 33 80 61 54 46 40 38 35 31 28 26 25 25 24 24 23 24 85 62 47 39 35 30 27 25 23 22
W.b. Insert 21823 19365 17453 18127 17587 17589 18853 17610 23348 22736 17963 16670 19945 17523 18252 14323 13168 13671 13820 13377 13119 12845 18692 16769 17429 16195 14951 15497 14287 12574 12309 12054 10835 10813 9920 10274 8725 9340 19627 17820 15502 14342 13471 11584 10522 10421 9064 10487
W.b. Add 21798 19350 17439 18116 17576 17570 18839 17590 23277 22687 17921 16620 19900 17493 18224 14299 13142 13667 13812 13371 13113 12832 18632 16690 17368 16135 14919 15462 14252 12544 12283 12051 10824 10809 9913 10265 8722 9333 19563 17737 15424 14266 13417 11535 10478 10411 9045 10476
R.s. proc R.s. union 21283 5593 18749 4495 16710 3462 17654 3110 16995 2241 17079 1826 18010 1519 17067 1208 22611 9077 21725 8510 17323 5816 15995 4571 19384 5974 16939 4450 17694 4324 13389 3293 12489 2736 13162 2248 12936 1988 12749 1425 12572 1162 12295 814 17833 7045 15821 5128 16594 5356 15530 5019 14237 3936 14699 4046 13588 3341 11534 2652 11727 2836 11365 2348 10007 1878 10313 1761 9023 1452 9181 1196 8174 964 8407 773 18609 7689 16621 6157 14251 4137 13329 3683 12527 3228 10726 2667 9419 2297 9698 2612 7898 1913 9908 2510 continued on next page
Table 6.4. continued Configuration 10 81 65 10 81 69 10 81 72 10 81 75 10 81 76 10 81 77 10 81 78 10 99 15 10 99 21 10 99 27 10 99 33 10 99 39 10 99 45 10 99 50 10 99 56 10 99 62 10 99 67 10 99 73 10 99 79 10 99 84 10 99 88 10 99 91 10 99 93 10 99 94 10 99 95 10 120 18 10 120 25 10 120 32 10 120 40 10 120 47 10 120 54 10 120 61 10 120 68 10 120 76 10 120 82 10 120 89 10 120 96 10 120 102 10 120 107 10 120 110 10 120 113 10 120 115 10 120 116 10 147 22 10 147 31 10 147 40 10 147 49 10 147 57
Copied 56707 58466 58056 58761 60930 67034 63364 85812 70749 58089 50312 49336 50256 34620 47989 35662 38189 50288 51449 44387 43015 50874 46843 42192 52765 67961 45457 39774 42015 31755 44507 29142 30930 28072 35830 28455 29801 40734 39246 43997 43036 45794 44810 53825 43380 29473 38363 15188
Invoc 20 19 18 18 18 18 18 90 62 46 37 31 28 23 22 19 18 17 16 15 14 14 14 13 14 70 47 36 29 24 22 18 17 15 14 13 12 12 11 11 11 11 11 54 37 27 23 18
W.b. Insert 8667 8153 7554 6885 7078 7368 6859 18822 16691 15205 14275 13276 11952 10237 11249 7882 7204 7885 7337 6163 5653 6224 5559 4841 5865 17141 15051 13748 12914 11596 12722 9674 8168 6390 7088 4925 4942 5553 5162 5670 5422 5214 5200 16023 14301 12099 12550 10460
W.b. Add 8659 8147 7539 6884 7077 7365 6856 18738 16604 15137 14209 13226 11892 10206 11210 7853 7171 7855 7321 6148 5643 6215 5547 4828 5849 17066 14976 13679 12841 11529 12661 9616 8118 6356 7067 4897 4909 5544 5155 5666 5417 5211 5196 15948 14233 12036 12477 10391
R.s. proc R.s. union 8117 2207 7338 1630 6979 1235 6178 1034 6374 909 6471 796 5981 591 17125 7497 15711 5995 13708 4059 12945 3638 12021 3017 10813 2608 9411 2321 10517 2510 7141 2002 6485 1492 6983 1533 6117 1441 5505 1317 5166 1185 5361 920 4816 715 4269 597 4677 541 15714 6208 13139 3948 12101 3377 11206 2549 10429 2235 11701 3086 8399 2045 7007 1801 5060 1480 6379 1806 3826 1053 4326 1169 5017 1196 4633 1079 5195 952 4880 728 4692 579 4626 500 13952 4570 12926 4352 9321 1587 11145 3049 8715 1242 continued on next page
Table 6.4. continued Configuration 10 147 66 10 147 75 10 147 84 10 147 93 10 147 100 10 147 109 10 147 118 10 147 125 10 147 131 10 147 135 10 147 138 10 147 140 10 147 142
Copied 34553 14996 22470 24926 26119 24320 30488 26697 27711 25222 29172 26278 28396
Invoc 17 14 13 12 11 10 10 9 9 8 8 8 8
W.b. Insert 11039 7097 6305 6485 5847 4870 4564 4120 3801 3501 3619 3065 3184
W.b. Add 10974 7059 6269 6450 5825 4855 4555 4112 3798 3497 3614 3063 3176
11 11 11 11 11 11 11 11 11 11 11 11 11 11 11 11 11 11 11 11 11 11 11 11 11 11 11 11 11 11 11 11 11 11 11
431757 378194 269224 243451 234426 222525 212122 207472 239124 167714 154241 160422 160361 142079 126472 131943 140907 146792 162628 145732 123114 103339 90147 125480 109625 118606 97113 97734 101153 99978 108051 122335 85693 102174 86370
143 125 94 85 79 74 70 67 70 88 71 67 61 52 48 47 46 45 46 92 71 57 51 50 42 42 35 34 33 32 32 33 77 67 53
43137 35981 33513 29191 27004 24047 21567 20201 22091 21243 19774 20549 22011 18437 15582 15078 14740 14023 14910 20747 17891 15272 13690 16990 15361 15200 12601 11649 10919 10210 10515 11301 15245 16258 14772
43078 35887 33451 29150 26969 24023 21540 20169 22035 21176 19733 20490 21963 18402 15561 15061 14726 13993 14869 20663 17829 15223 13643 16945 15324 15170 12583 11624 10902 10200 10482 11272 15159 16183 14702
15 15 19 19 19 19 19 19 19 23 23 23 23 23 23 23 23 23 23 28 28 28 28 28 28 28 28 28 28 28 28 28 34 34 34
13 14 12 13 14 15 16 17 18 10 12 13 14 16 17 18 20 21 22 9 11 13 14 16 18 19 21 22 24 25 26 27 9 11 13
R.s. proc 9783 5602 3862 5739 4897 4145 3457 3541 3079 2727 2764 2221 2486
R.s. union 2572 1282 1065 1708 1510 1288 989 1101 836 755 593 504 398
42681 6637 35472 3126 32968 10953 28588 8317 26572 6543 23592 5236 21099 3860 19771 2554 21264 1876 20651 9458 19234 7268 20002 7003 21036 6734 17977 5339 14829 4263 14651 3800 14184 2406 13485 1813 14163 1152 20075 9428 17148 7322 14518 5343 13163 4514 16356 5803 14848 4561 14566 4263 11990 3255 11101 2935 10431 2329 9585 1863 9799 1276 10860 826 14170 6168 15436 6077 14081 5339 continued on next page
Table 6.4. continued Configuration 11 34 15 11 34 17 11 34 19 11 34 21 11 34 23 11 34 25 11 34 27 11 34 29 11 34 30 11 34 31 11 34 32 11 34 33 11 41 9 11 41 11 11 41 14 11 41 16 11 41 18 11 41 21 11 41 23 11 41 26 11 41 28 11 41 30 11 41 33 11 41 35 11 41 36 11 41 38 11 41 39 11 41 40 11 50 7 11 50 10 11 50 13 11 50 16 11 50 19 11 50 22 11 50 25 11 50 29 11 50 31 11 50 34 11 50 37 11 50 40 11 50 42 11 50 44 11 50 46 11 50 47 11 50 48 11 60 9 11 60 13 11 60 16
Copied 89398 67724 77648 89841 74615 71798 75607 78289 77970 87856 85303 86419 83107 79209 63796 61235 60681 50128 53992 65265 55087 63740 51679 51433 60458 61572 64626 70213 93244 61072 55306 47928 46665 46782 42470 49765 33955 36649 48064 51775 40997 46033 40498 44452 46232 54991 52234 39105
Invoc 47 39 37 34 30 27 25 24 24 24 24 23 75 61 46 40 36 29 27 25 23 22 19 18 18 18 18 18 98 63 47 37 32 27 24 21 19 17 17 16 14 14 13 13 13 66 46 36
W.b. Insert 14626 11982 12046 13736 10878 10095 9996 9012 8753 9014 8112 7992 15421 15120 12829 12832 11889 9967 10720 10384 8528 9124 6670 6343 6643 6162 5987 6352 16873 13714 13225 12373 11620 10495 9221 9485 6876 6336 6789 6454 5289 5321 4239 4492 4304 13588 13190 11695
W.b. Add 14560 11929 12009 13700 10854 10055 9978 8999 8746 8995 8097 7966 15340 15051 12757 12763 11834 9936 10664 10362 8504 9095 6664 6330 6626 6161 5977 6340 16783 13633 13150 12292 11554 10433 9165 9452 6845 6306 6770 6431 5279 5317 4224 4480 4298 13504 13107 11624
R.s. proc R.s. union 13849 5271 11414 3463 11530 3652 13198 3552 10334 2829 9298 2641 9522 2516 8594 1930 8124 1968 8113 1542 7396 1141 7363 603 14492 6553 14261 5941 11872 3889 11652 3733 10824 3377 8872 2495 9872 2648 9810 2766 7593 1992 8622 2571 6164 1862 5922 1584 6186 1263 5445 1002 5285 690 5436 408 15090 7183 12424 4796 12021 4039 11533 3920 10753 3132 9514 2624 8383 2341 8862 2545 6261 2027 4847 1395 5993 1675 5888 1464 4823 1440 4882 1283 3766 794 4092 718 3826 485 11091 4449 10888 3608 10311 3404 continued on next page
Table 6.4. continued Configuration 11 60 20 11 60 23 11 60 27 11 60 31 11 60 34 11 60 38 11 60 41 11 60 44 11 60 48 11 60 51 11 60 53 11 60 55 11 60 56 11 60 57 11 60 58 11 74 7 11 74 11 11 74 16 11 74 20 11 74 24 11 74 29 11 74 33 11 74 38 11 74 42 11 74 47 11 74 50 11 74 55 11 74 59 11 74 63 11 74 66 11 74 68 11 74 70 11 74 71
Copied 38311 27747 28628 23958 30706 29127 32819 32912 33392 38238 45355 39819 42597 43320 55003 66815 45855 26849 24287 32234 16576 17851 14261 11675 24822 30310 27587 25582 27985 37345 28138 28588 28589
Invoc 29 24 21 18 17 15 14 13 12 12 11 11 11 11 11 87 52 33 26 23 18 16 14 12 12 11 10 9 9 9 8 8 8
W.b. Insert 11188 10507 8469 7203 6769 5397 5990 5300 4584 4652 5304 4531 4526 4368 5352 15380 12927 11137 10111 10878 8734 7457 5945 4790 5714 5659 4608 3686 3818 4349 3371 3031 2986
W.b. Add 11108 10434 8406 7146 6720 5368 5969 5277 4568 4631 5295 4524 4517 4357 5337 15309 12846 11059 10032 10800 8652 7384 5898 4744 5677 5624 4586 3674 3813 4341 3368 3025 2982
12 12 12 12 12 12 12 12 12 12 12 12 12 12 12
350603 319064 192928 194677 197019 143697 159261 133458 136656 126248 135853 107314 100297 114691 107279
125 109 71 66 62 69 62 50 47 43 42 63 52 48 42
29798 24354 19127 16797 15110 15785 17701 14920 13076 10959 10161 13306 12689 13446 12563
29768 24292 19090 16744 15070 15716 17649 14890 13044 10939 10113 13250 12649 13395 12528
8 8 10 10 10 12 12 12 12 12 12 14 14 14 14
6 7 7 8 9 6 7 8 9 10 11 6 7 8 9
R.s. proc 9835 9467 7694 6548 5712 4165 5424 4719 4103 4173 4901 4117 4073 3933 4904 12487 11510 8709 7914 9845 7106 5810 4676 3715 5086 4628 3536 3109 3354 3935 2732 2388 2294
R.s. union 2528 2202 2124 2071 1665 1394 1828 1479 1237 1171 1061 996 797 598 490 5610 4353 2339 1557 3174 1411 1332 1210 1089 1785 1536 1222 1021 1128 955 786 568 449
29268 9174 23999 4292 18736 6620 16313 4219 14440 2564 14772 6979 17298 6950 14492 5624 12683 4301 10612 2952 9589 1655 12836 6180 12301 5235 12966 5243 12175 4668 continued on next page
Table 6.4. continued Configuration 12 14 10 12 14 11 12 14 12 12 14 13 12 17 5 12 17 6 12 17 7 12 17 8 12 17 9 12 17 10 12 17 11 12 17 12 12 17 13 12 17 14 12 17 15 12 17 16 12 21 6 12 21 7 12 21 8 12 21 9 12 21 11 12 21 12 12 21 13 12 21 14 12 21 16 12 21 17 12 21 18 12 21 19 12 21 20 12 25 4 12 25 5 12 25 7 12 25 8 12 25 10 12 25 11 12 25 13 12 25 14 12 25 16 12 25 17 12 25 19 12 25 20 12 25 21 12 25 22 12 25 23 12 25 24 12 30 6 12 30 8 12 30 10
Copied 96781 97999 111782 105064 103712 82799 94594 75320 75520 74801 93809 67076 71324 70520 80119 84172 72101 61487 57707 37995 54443 49637 61646 54239 55890 64325 61080 60369 71269 77884 56303 50136 47785 43612 38743 39330 35707 43811 44226 47087 43811 41482 42378 45008 51313 49634 34807 31697
Invoc 36 34 34 32 72 57 51 42 37 34 33 28 26 24 24 24 55 46 39 33 29 26 25 22 20 19 18 18 18 82 61 43 37 30 26 23 21 19 18 16 15 14 14 14 14 49 35 28
W.b. Insert 10816 9737 9797 8051 14276 11742 12424 10633 11188 9710 11512 8412 7983 7140 7058 6544 11384 10539 10487 7960 8148 8535 8834 7868 6384 6708 6075 5197 5534 12412 10959 9901 10120 8865 8472 7164 6401 6738 6032 5571 4813 4357 4094 3940 4082 10571 9081 8219
W.b. Add 10801 9724 9774 8019 14213 11694 12374 10587 11138 9671 11471 8387 7964 7124 7036 6520 11337 10498 10426 7916 8114 8495 8801 7843 6366 6687 6068 5181 5509 12366 10916 9863 10073 8830 8431 7133 6377 6711 6011 5548 4789 4341 4088 3935 4069 10530 9046 8183
R.s. proc R.s. union 10132 3924 9372 3097 9147 2413 7456 1270 13765 7796 10943 5232 11555 5166 9368 3823 9879 3727 8773 3230 10978 3806 7978 2881 7568 2763 6815 2089 6508 1908 5947 1057 9977 4664 9963 4061 9688 3938 7210 2461 7568 2611 8029 2825 8453 2853 6469 2082 6036 2157 6346 2005 5722 1526 4811 1188 4909 765 11709 7014 9854 5120 9209 3810 9204 3737 8173 2880 7780 2510 6434 2084 5890 1845 6347 2258 5666 1891 5186 1820 4405 1567 3952 1398 3615 1273 3536 900 3491 552 8784 3840 8058 3153 7381 2605 continued on next page
Table 6.4. continued Configuration 12 30 12 12 30 13 12 30 15 12 30 17 12 30 19 12 30 20 12 30 22 12 30 24 12 30 26 12 30 27 12 30 28 12 30 29 12 37 6 12 37 8 12 37 10 12 37 12 12 37 14 12 37 17 12 37 19 12 37 21 12 37 23 12 37 25 12 37 27 12 37 30 12 37 31 12 37 33 12 37 34 12 37 35 12 37 36
Copied 39032 34892 28503 36949 29892 29711 29856 38171 44924 43373 42616 50945 43031 24086 22872 18524 23328 33147 28888 16918 24737 26430 21904 20979 25883 26626 27876 27639 26898
Invoc 24 22 19 17 15 14 13 12 12 11 11 11 47 33 26 22 19 16 14 12 12 11 10 9 9 8 8 8 8
W.b. Insert 8512 7877 6874 7057 4499 4951 4330 4453 4607 4225 3772 4070 10560 8733 8063 7929 7254 7129 6510 4164 5063 4584 3618 2825 3324 2918 2712 2423 2224
W.b. Add 8475 7844 6850 7031 4463 4925 4312 4427 4590 4219 3757 4049 10520 8694 8025 7890 7219 7098 6480 4137 5046 4565 3604 2812 3315 2908 2706 2418 2220
13 13 13 13 13 13 13 13 13 13 13 13 13 13 13 13 13 13 13
192281 125611 140073 118202 106151 98109 108177 81423 66654 66910 66437 71970 69371 68466 59482 55206 48701 51569 42842
68 50 46 63 46 37 34 56 40 33 27 24 22 54 40 31 26 22 19
12927 11806 9332 12299 11961 8861 7456 10075 8366 8159 7422 6498 4898 8585 8089 7319 6523 5835 4788
12879 11774 9283 12227 11895 8828 7416 10025 8310 8108 7384 6469 4874 8532 8024 7272 6479 5791 4772
5 6 6 7 7 7 7 9 9 9 9 9 9 11 11 11 11 11 11
4 4 5 3 4 5 6 3 4 5 6 7 8 3 4 5 6 7 8
R.s. proc 7285 6866 6013 6460 3339 4469 3994 3551 4264 3875 3481 3722 9338 6951 6307 6268 5471 6229 5723 2923 4490 3816 3043 1864 2925 2630 2289 1894 1695
R.s. union 2486 2307 2285 2341 1308 1847 1650 1145 1193 1112 818 477 4446 2188 1696 1634 1297 2392 2175 1069 1802 1602 1330 989 1209 1020 759 543 340
12483 4470 11212 5730 8891 3064 11924 7022 11089 5520 8534 4143 7013 2380 9661 6229 7882 4071 7524 3505 6641 2955 6216 2668 4355 1589 7760 4853 7546 4146 6552 3065 6024 2675 5257 2244 4462 2268 continued on next page
Table 6.4. continued Configuration 13 11 9 13 11 10 13 13 3 13 13 4 13 13 5 13 13 6 13 13 7 13 13 8 13 13 9 13 13 10 13 13 11 13 13 12 13 15 3 13 15 4 13 15 5 13 15 6 13 15 7 13 15 8 13 15 9 13 15 10 13 15 11 13 15 12 13 15 13 13 15 14 13 19 3 13 19 4 13 19 5 13 19 6 13 19 7 13 19 9 13 19 10 13 19 11 13 19 12 13 19 13 13 19 14 13 19 15 13 19 16 13 19 17 13 19 18 14 14 14 14 14 14 14 14 14
4 5 5 5 6 6 6 6 7
3 2 3 4 2 3 4 5 2
Copied 48116 60072 56357 48859 32582 39822 41569 40464 47855 43550 35903 40475 60884 41546 35515 34873 43649 25747 31584 29817 29519 34703 46445 40364 52378 40610 30539 21481 30781 28156 29981 12512 29637 29543 25240 22794 26176 24559 29613
Invoc 17 17 51 37 28 24 21 18 17 15 13 13 51 36 28 24 21 17 16 14 13 12 12 11 48 35 27 22 19 15 14 12 12 11 10 9 9 8 8
W.b. Insert 4255 4085 8017 7777 6120 6330 5682 5788 5076 4627 3308 2928 8898 7404 6786 6398 6621 4750 4849 4148 3495 3733 3929 2951 8488 7970 6574 5570 6451 5674 5505 2826 4625 3961 3223 2688 2872 2258 2136
W.b. Add 4244 4055 7963 7723 6081 6280 5650 5741 5038 4591 3294 2915 8826 7350 6747 6358 6588 4712 4809 4115 3470 3701 3908 2937 8452 7932 6539 5536 6409 5622 5479 2803 4600 3928 3207 2671 2846 2241 2126
79627 58797 58414 59836 56151 46005 42698 42501 52419
30 39 26 20 38 25 19 15 37
5084 6727 6277 3856 6146 5612 4379 2661 6178
5054 6681 6233 3814 6098 5554 4359 2639 6121
R.s. proc 3864 3798 7034 7111 4997 5832 4844 5439 4662 4309 3063 2673 7997 6292 5911 5467 6149 4106 4503 3771 3027 3386 3598 2705 7222 7074 4312 4199 5780 5035 5015 2007 4123 3393 2392 1757 2552 1916 1488
R.s. union 1783 1143 4384 3873 2171 2587 2081 2503 1750 1866 1614 942 5212 3309 2739 2335 2851 1862 1997 1822 1613 1567 1295 799 4677 4114 1771 1669 2727 2128 2395 1159 1830 1589 1283 1063 1202 940 533
4958 3616 6535 4538 5725 3667 3512 2305 5785 4201 5184 2887 4148 2866 2582 1857 5560 4017 continued on next page
Table 6.4. continued
Configuration    Copied   Invoc   W.b. insert   W.b. add   R.s. proc   R.s. union
14  7  3          30011      23          4872       4824        3397         1986
14  7  4          31424      18          3941       3902        3560         2274
14  7  5          38391      15          3710       3683        3526         2315
14  7  6          33801      12          2282       2275        2128         1464
14  8  2          52510      37          6637       6577        6450         4815
14  8  3          31608      23          4830       4798        4240         2527
14  8  4          36405      18          5297       5260        5137         2947
14  8  5          27256      14          3471       3425        3309         2177
14  8  6          30832      12          3163       3115        3012         1978
14  8  7          32305      10          1986       1963        1846         1120
14 10  2          40988      35          5936       5908        5310         4037
14 10  3          20405      21          4124       4098        3001         1743
14 10  4          18643      16          3468       3439        2511         1584
14 10  5          15783      13          2666       2647        1899         1362
14 10  6          26854      11          3691       3647        3361         1990
14 10  7          18980       9          2239       2218        1958         1375
14 10  8          20762       8          2246       2225        2158         1512
14 10  9          22377       8          1443       1438        1291          936
Table 6.5. Costs of block FC-DOF for Richards Configuration 10 8 7 10 9 6 10 9 7 10 9 8 10 11 6 10 11 7 10 11 8 10 11 9 10 11 10 10 13 6 10 13 7 10 13 8 10 13 9 10 13 10 10 13 11 10 13 12 10 16 6 10 16 7 10 16 8 10 16 9 10 16 10 10 16 11 10 16 12 10 16 13 10 16 14 10 16 15 10 20 7 10 20 8 10 20 9 10 20 10 10 20 11 10 20 13 10 20 14 10 20 15 10 20 16 10 20 17 10 20 18 10 20 19 10 24 5 10 24 6 10 24 8 10 24 9 10 24 11 10 24 12 10 24 14 10 24 15 10 24 16 10 24 18
Copied 4215818 3560761 3107563 2949754 1796246 2167571 2120608 2031689 2088334 564276 820548 1071155 1520018 1490926 1430346 1447058 112585 149586 221826 384428 575071 832751 1034594 1056175 1042023 1060282 58762 56758 63860 73656 102806 264118 384488 579959 728358 757425 750381 772130 55492 48697 39664 41374 41688 42735 55973 75396 113499 304598
Invoc 6748 6033 5243 4776 4199 3869 3593 3346 3235 3369 3037 2780 2704 2541 2400 2321 3060 2645 2349 2163 2026 1930 1851 1790 1728 1679 2592 2266 2018 1821 1666 1459 1390 1349 1309 1280 1245 1222 3624 3016 2257 2007 1643 1506 1295 1214 1148 1063
W.b. insert 166442 224511 160663 124361 164308 171319 146440 112517 91799 56466 76252 100407 117510 91617 68850 53418 35269 32271 33786 44565 57236 72666 81350 65717 47641 37229 34394 30270 27222 24861 23803 32355 38980 50819 55975 49796 35863 29366 39026 37595 34035 32249 25321 22770 18949 18619 19490 33775
W.b. add 165991 224253 160465 124153 164081 171175 146316 112361 91669 56146 76016 100221 117344 91490 68733 53303 34695 31872 33424 44280 57055 72490 81235 65594 47547 37159 33880 29775 26817 24251 23446 32017 38738 50652 55882 49714 35785 29285 38442 37092 33557 31804 24789 22241 18428 18218 19093 33482
R.s. proc R.s. union 165962 29704 224139 85263 160445 62526 124117 44701 163970 101725 171138 96578 146212 72900 112315 51316 91653 28690 56108 39866 75977 44532 100175 57013 117238 66263 91463 44914 68641 28609 53274 17603 34643 17048 31822 18470 33384 20702 44238 25974 57013 32640 72304 41303 81186 46198 65564 30996 47521 23910 37130 6921 33859 12994 29686 11354 26774 11971 24207 10662 23407 11646 31954 17582 38615 22046 50606 29365 55839 31920 49604 23033 35743 16243 29181 5780 38406 16703 37062 14140 33537 10733 31778 10526 24739 8916 22167 8637 18357 7807 18174 8536 19056 10084 33428 17215 continued on next page
Table 6.5. continued Configuration 10 24 19 10 24 20 10 24 21 10 24 22 10 24 23 10 29 6 10 29 8 10 29 10 10 29 11 10 29 13 10 29 15 10 29 17 10 29 18 10 29 20 10 29 21 10 29 23 10 29 25 10 29 26 10 29 27 10 29 28 10 35 5 10 35 7 10 35 9 10 35 12 10 35 14 10 35 16 10 35 18 10 35 20 10 35 22 10 35 24 10 35 26 10 35 28 10 35 30 10 35 31 10 35 32 10 35 33 10 35 34 11 11 11 11 11 11 11 11 11 11 11
5 6 6 7 7 7 8 8 8 8 10
4 4 5 4 5 6 4 5 6 7 3
Copied 446289 561608 584271 586645 601428 42671 34294 31422 29885 29980 30670 32813 36460 51256 97402 229499 447567 453930 455726 468146 44012 33331 29016 25088 22895 20952 20528 23635 24723 28844 47556 105561 268686 349719 362738 364440 377486
Invoc 1036 1012 994 974 957 3011 2254 1803 1638 1386 1202 1061 1003 906 871 819 792 776 763 754 3613 2575 2000 1499 1284 1123 999 900 818 750 696 655 632 624 617 608 602
W.b. Insert 42320 43692 37891 29086 23886 37865 36295 33604 31501 26137 21344 17836 16193 13965 17829 26070 37155 29844 23868 19675 39032 37633 36775 33276 29578 25694 21213 18072 15743 12825 12738 14920 27551 29292 25211 20346 16265
W.b. Add 42147 43587 37803 29008 23797 37501 35671 32881 30822 25687 20653 16970 15764 13479 17475 25798 37060 29760 23785 19572 38793 37386 35996 32306 29130 24654 20693 17264 14765 12230 12310 14469 27341 29175 25114 20222 16143
2607278 1825363 1716882 742328 1315292 1315623 263423 756734 1053458 1038680 61314
4238 3179 2841 2572 2320 2144 2348 2071 1844 1725 3001
136997 120304 68341 76840 84630 67758 32925 69569 69325 39576 28802
136799 120216 68229 76660 84510 67626 32674 69420 69248 39500 28445
R.s. proc 42105 43545 37765 28910 23763 37455 35632 32854 30802 25597 20629 16923 15744 13427 17434 25690 37018 29654 23743 19428 38740 37336 35913 32246 29064 24597 20670 17191 14745 12206 12265 14426 27156 29064 25075 20195 16023
R.s. union 22207 24537 17676 12285 5663 14030 10828 8999 8322 7422 6757 6194 5872 6007 8127 12879 19661 14496 10267 5196 16733 12139 9862 7689 6708 5835 5213 5279 4727 4546 5162 7025 14156 15150 11029 8199 4769
136792 74576 120062 80030 68223 38585 76500 57204 84441 58413 67622 38726 32673 23903 69339 52137 69244 45830 39494 22229 28429 15152 continued on next page
Table 6.5. continued Configuration 11 10 4 11 10 5 11 10 6 11 10 7 11 10 8 11 10 9 11 12 3 11 12 4 11 12 5 11 12 6 11 12 7 11 12 8 11 12 9 11 12 10 11 12 11 11 15 3 11 15 4 11 15 5 11 15 6 11 15 7 11 15 8 11 15 9 11 15 10 11 15 11 11 15 12 11 15 13 11 15 14 11 18 3 11 18 4 11 18 5 11 18 6 11 18 7 11 18 8 11 18 9 11 18 10 11 18 11 11 18 12 11 18 13 11 18 14 11 18 15 11 18 16 11 18 17 12 12 12 12 12 12
4 5 5 6 6 6
3 3 4 2 3 4
Copied 58925 77938 163327 522495 735425 745285 50506 41607 41257 44502 58953 126032 403177 568759 584958 43031 35148 29928 28728 28283 28633 32941 42759 109530 273085 425487 440125 40306 30893 27759 25549 23194 21905 20231 21998 23021 27057 35897 84739 235712 342653 353705
Invoc 2249 1807 1533 1414 1298 1240 2992 2240 1792 1495 1285 1140 1073 1004 970 2987 2237 1787 1489 1276 1117 994 896 827 783 750 730 2985 2234 1786 1488 1275 1115 991 892 811 744 688 646 622 599 585
W.b. Insert 23376 19834 22129 49238 45700 28594 28909 26138 22227 17972 15461 17677 38070 36430 23214 28898 27726 25891 22999 18989 15919 13298 12158 14633 29439 27945 21478 29224 29011 27370 25073 24214 20982 17975 14792 12614 11277 10573 12386 23414 22977 16018
W.b. Add 23006 19521 21843 49110 45642 28525 28439 25777 21654 17490 15139 17327 37900 36371 23140 28659 27252 25704 22528 18878 15809 12949 11871 14498 29289 27868 21388 29016 28856 27262 24972 23908 20609 17350 14586 12494 10978 10064 12073 23294 22909 15930
1053156 336380 749115 36110 35754 255904
1846 1573 1312 2218 1479 1161
57992 42813 42614 16642 11374 33492
57970 42802 42606 16634 11363 33485
R.s. proc 22999 19488 21838 49090 45537 28452 28416 25768 21633 17489 15137 17283 37895 36371 23064 28624 27231 25685 22497 18851 15803 12949 11867 14496 29282 27833 21381 28966 28810 27192 24941 23871 20588 17332 14565 12485 10974 10060 12071 23291 22909 15885
R.s. union 12323 12226 15251 37374 31958 16112 14248 11347 9586 8804 9005 11539 27670 24917 12899 14168 10809 9330 7751 7262 6664 6046 6353 9278 20446 18354 11520 14178 11250 9119 7717 6867 6180 5483 5216 4963 4830 4994 6896 16452 14826 8347
57901 46194 42733 34873 42605 33073 16634 10823 11363 8181 33483 26842 continued on next page
Table 6.5. continued
Configuration    Copied   Invoc   W.b. insert   W.b. add   R.s. proc   R.s. union
12  6  5         564323    1000         31030      31030       31025        25044
12  8  2          29173    2213         18060      18052       18044        10601
12  8  3          24289    1474         15498      15491       15486         7605
12  8  4          21654    1105         11278      11273       11271         6036
12  8  5          24781     885          8584       8581        8581         5175
12  8  6         160906     759         22007      22006       21994        16808
12  8  7         389311     689         21686      21679       21678        17017
12  9  2          28462    2212         18220      18211       18201        10495
12  9  3          22670    1473         16264      16258       16246         7520
12  9  4          19458    1104         13232      13229       13222         5857
12  9  5          18824     883          9926       9922        9922         5044
12  9  6          22365     737          7948       7946        7946         4436
12  9  7         143021     648         20042      20042       20041        14713
12  9  8         337952     596         19043      19037       19036        14721
Configuration    Copied   Invoc   W.b. insert   W.b. add   R.s. proc   R.s. union
13  3  2         719438    1255         62732      62199       62199        61701
13  4  2          29725    1084         10847       9069        9062         7908
13  4  3         455645     793         39304      38633       38633        38001
13  5  2          20421    1082         11465       9667        9667         6840
13  5  3          22602     721          8765       6993        6991         5547
13  5  4         330441     579         29433      28715       28715        28023
Table 6.6. Costs of block FC-DOF for JavaBYTEmark
Configuration    Copied   Invoc   W.b. insert   W.b. add   R.s. proc   R.s. union
18  3  2         145533      10            15         15          15           12
18  4  2          17634       9            56         43          43           23
18  4  3          60291       6            33         28          28           10
18  5  2          17634       9            38         33          33           23
18  5  3          17634       6            38         33          33           15
18  5  4          69040       5            17         14           9            5
18  6  2          17634       8            23         19          13            8
18  6  3          17634       6            23         19          19           13
18  6  4          17634       4            32         25          19            8
18  6  5          63987       4            29         24          24            7
Table 6.7. Costs of block FC-DOF for Bloat-Bloat Configuration 16 16 11 16 16 12 16 16 13 16 16 14 16 19 10 16 19 12 16 19 13 16 19 14 16 19 15 16 19 16 16 19 17 16 19 18 16 23 10 16 23 13 16 23 14 16 23 16 16 23 17 16 23 18 16 23 20 16 23 21 16 28 11 16 28 13 16 28 14 16 28 16 16 28 18 16 28 19 16 28 21 16 28 22 16 28 24 16 28 25 16 28 26 16 34 11 16 34 13 16 34 15 16 34 17 16 34 19 16 34 21 16 34 23 16 34 25 16 34 27 16 34 29 16 34 30 16 34 31 16 34 32 16 34 33 16 41 9 16 41 11 16 41 14
Copied 12876471 13085042 12461415 11862567 4757169 7389650 8676499 9014138 9437192 9300046 8845748 8599180 3281896 3365409 4102366 5403719 6076636 6871866 6976001 6793647 2636562 2462227 2308104 2370084 2594284 2971513 4185737 4737228 5419853 5417102 5226810 2192541 2373520 2179527 2100045 1980670 2087321 2208534 2422813 3397026 4084392 4257942 4263674 4150617 4110114 2557773 2199908 1933209
Invoc 295 279 263 250 257 230 221 210 202 194 186 180 247 190 180 164 158 154 145 140 221 186 172 151 135 129 121 118 112 110 107 217 185 160 140 125 114 104 97 92 88 87 85 84 82 267 217 169
W.b. insert 697181 688374 610242 503342 421318 519561 543285 495443 472295 444606 380115 355629 435491 389935 388507 329094 404255 354608 345792 287276 500105 432114 387199 262564 227942 270221 288240 292317 281177 251690 244684 576701 517779 465836 376493 303203 268910 225715 203739 256724 218667 232291 197347 171133 165909 665434 630212 531921
W.b. add 581742 578839 519298 457599 318856 414061 436564 409878 397909 383907 347275 329192 309373 275092 274431 274892 316466 303957 294824 259172 328117 278789 259105 198459 177462 207777 234322 240525 241342 223261 220074 368060 334863 301691 262244 212029 186589 171745 156587 191829 185636 187859 173381 161101 159934 439201 387455 334913
R.s. proc R.s. union 579250 87920 575994 87195 517184 66705 455031 46732 314477 92788 409521 85385 433020 82552 407223 68459 395184 55891 381170 45472 344679 36538 325826 25616 302432 102333 270386 78510 268658 67722 271390 53579 313734 59772 301454 47008 292266 38575 256675 27186 322775 106683 272445 89736 254044 82203 190897 56246 173344 45733 205245 53744 231459 48123 238062 40629 237944 35775 220050 30480 217212 29426 353858 123741 323173 106029 293087 93951 255204 77093 200805 61920 183095 56086 165140 53348 152282 47004 189044 41483 181895 28390 184743 27701 170256 23806 158616 16439 155973 11502 406936 176233 376662 139670 322383 101384 continued on next page
Table 6.7. continued Configuration 16 41 16 16 41 18 16 41 21 16 41 23 16 41 26 16 41 28 16 41 30 16 41 33 16 41 35 16 41 36 16 41 38 16 41 39 16 41 40 16 50 10 16 50 13 16 50 16 16 50 19 16 50 22 16 50 25 16 50 29 16 50 32 16 50 34 16 50 37 16 50 40 16 50 43 16 50 44 16 50 46 16 50 47 16 50 48 16 61 9 16 61 13 16 61 16 16 61 20 16 61 24 16 61 27 16 61 31 16 61 35 16 61 38 16 61 41 16 61 45 16 61 49 16 61 52 16 61 54 16 61 56 16 61 57 16 61 58 16 61 59 16 74 7
Copied 1934193 1812181 1792123 1725472 1710140 1837780 1808711 2440093 2951941 3450948 3558620 3368551 3342310 2917164 1745003 1871008 1807320 1734156 1475550 1495594 1644572 1510633 1787088 1713054 2399719 2364272 2912999 2904596 2828231 2090674 1744819 1638790 1672035 1421898 1420755 1378990 1441311 1377644 1230787 1493424 1281836 1678582 2044713 2197506 2384827 2310417 2202838 2043614
Invoc 148 131 113 103 91 85 79 73 70 70 68 66 65 242 181 147 124 107 94 81 74 69 64 59 56 55 54 53 53 262 180 146 117 97 86 75 67 61 57 52 47 45 44 43 43 42 42 334
W.b. Insert 523459 439550 376833 282533 243671 195852 178290 168190 165079 207801 165346 149437 136462 697371 629755 573977 503497 481070 412913 263025 223829 200363 203688 130734 171828 143358 141771 135927 116829 696119 660497 607421 511574 576340 438950 369144 359757 252319 196377 147821 105801 123422 129698 127513 132252 111174 92991 800511
W.b. Add 328822 283148 247645 198499 171798 148986 134693 135968 138012 169217 147557 136373 126982 465025 397008 358575 324014 315242 261795 171706 167061 147415 151490 106078 136692 120423 129375 124808 108206 445838 405345 381486 334324 352568 285302 239228 222249 177790 135910 121526 84594 97364 109926 103417 108318 98291 85392 510958
R.s. proc R.s. union 316784 97436 270958 80269 242972 71088 190762 62468 163926 51203 140559 43220 130427 37376 132738 25965 134087 21856 166457 27039 143792 16955 133285 15961 124373 12590 435269 148099 371871 130960 342814 98994 309987 92931 300075 92151 251863 74792 157670 46688 162338 47235 140607 41026 144130 41421 103019 26135 133111 25970 116643 22775 126755 16904 122323 12793 106247 10099 419547 176023 392560 138056 363965 115084 318046 92692 340037 88362 271771 76780 220126 69642 214292 61134 168710 44810 126911 35204 117888 34381 76913 22471 94037 22572 106406 21426 100745 16825 105147 19145 95350 11988 82252 8595 451376 219306 continued on next page
Table 6.7. continued Configuration 16 74 11 16 74 16 16 74 20 16 74 24 16 74 29 16 74 33 16 74 38 16 74 42 16 74 47 16 74 50 16 74 55 16 74 59 16 74 63 16 74 66 16 74 68 16 74 70 16 74 71
Copied 2043878 1480869 1630315 1359220 1363591 1475290 1282772 1242068 1251104 1158984 1297354 1392641 1445059 1655323 1636235 1824631 1952068
Invoc 213 144 116 96 80 70 61 55 49 46 42 39 37 36 35 34 34
W.b. Insert 695889 697802 658996 589643 488961 473644 301231 335280 251540 236284 190819 202170 149725 111369 84935 79387 86974
W.b. Add 449895 423865 426607 382701 313117 296743 203177 223183 165603 146654 127702 128872 108730 87052 73634 72305 78068
18 18 18 18 18 18 18 18 18 18 18 18 18 18 18 18 18 18 18 18 18 18 18 18 18 18 18 18 18 18 18
9245283 7283924 6759511 3183558 5738118 5662171 2007413 1943122 2054687 2520609 4072555 4043639 1970306 1709577 1622388 1812100 2362843 3128413 3221099 1794950 1823393 1679290 1499837 1492991 1575460 1548656 1840595 2709255 2749813 1767217 1658659
195 170 144 154 131 116 198 148 119 101 90 82 148 118 98 84 75 68 63 195 147 117 97 83 73 65 59 55 52 194 146
229680 273905 148606 188507 216454 150169 326689 257830 215405 129884 150750 102881 357575 270670 211515 138307 140745 106829 67895 390698 366605 323586 233325 244104 151447 112562 79805 87937 66175 422384 395542
162498 193935 107053 137072 156928 118656 202555 166951 134582 96150 104085 75815 217101 174443 125346 98348 91413 76620 52383 237804 223640 207571 166457 146494 100194 79992 60347 65630 51939 265789 254080
5 6 6 7 7 7 9 9 9 9 9 9 11 11 11 11 11 11 11 13 13 13 13 13 13 13 13 13 13 16 16
4 4 5 4 5 6 3 4 5 6 7 8 4 5 6 7 8 9 10 3 4 5 6 7 8 9 10 11 12 3 4
R.s. proc 395917 393569 388811 356967 279707 284956 169674 212594 151510 130137 117573 124395 101802 82949 70793 69954 75506
R.s. union 158557 120620 116269 91548 72130 74958 48188 59927 48722 38779 32659 33790 21504 17994 12922 9202 8343
160792 56116 191957 80397 106620 35584 136695 66305 156031 56533 117910 42946 198898 116110 163626 80986 132389 66321 94286 49848 102204 43558 74096 25120 211421 110703 169575 82429 121549 67216 94121 47648 91058 45420 75348 25979 51752 16002 228152 136491 219871 108653 204866 91205 162589 83302 142879 68373 98745 52467 76317 38661 59528 31636 64283 24123 50361 15535 256414 152336 242193 127448
Table 6.7. continued
Configuration 18 16 5 18 16 6 18 16 7 18 16 8 18 16 9 18 16 10 18 16 11 18 16 12 18 16 13 18 16 14 18 16 15 18 19 2 18 19 3 18 19 4 18 19 5 18 19 6 18 19 7 18 19 9 18 19 10 18 19 11 18 19 12 18 19 13 18 19 14 18 19 15 18 19 16 18 19 17 18 19 18
Copied 1632098 1441111 1451849 1350731 1336134 1402477 1368801 1535951 1641812 2067479 2178510 1823849 1673635 1479112 1586752 1423186 1257039 1274272 1170568 1348679 1348728 1224158 1201141 1162111 1436889 1770725 1935203
Invoc 116 97 83 72 64 58 53 49 45 42 40 290 193 144 116 96 82 64 57 53 48 44 41 38 36 35 33
W.b. Insert 356411 346786 297484 227789 185466 153690 157622 117061 79337 64800 52396 469220 452494 425509 401978 366763 331355 262960 232347 204841 150447 122494 106291 75756 70131 48965 93387
W.b. Add 222971 220517 189885 150800 122508 104797 101670 79493 55569 49957 43493 292737 279894 263913 254730 239264 212877 159716 136966 128496 104959 85634 68640 51790 47225 38299 58620
R.s. proc 213700 214666 185661 144891 116414 97877 97904 78207 54716 49358 43117 273406 268698 250578 242891 229409 203160 152873 131669 121590 101361 83696 63391 49674 45693 37416 57333
R.s. union 97465 89070 68755 67346 56599 49713 47029 37855 27455 17465 11508 209105 161989 130343 112365 107215 81186 61047 55918 53699 45011 37611 33398 23557 18990 14452 20958
20 20 20 20 20 20
3878918 1589557 2552316 1232075 1201338 1911445
78 73 50 72 48 37
123991 165288 60379 167093 87907 48288
66039 90758 27688 101486 53049 21616
66039 90020 27688 100398 52379 21530
65445 80715 26692 78518 46159 19638
3 4 4 5 5 5
2 2 3 2 3 4
Table 6.8. Costs of block FC-DOF for Toba
Configuration 16 22 19 16 22 20 16 27 20 16 27 22 16 27 23 16 27 24 16 33 19 16 33 21 16 33 22 16 33 24 16 33 26 16 33 28 16 33 29 16 33 30 16 33 31 16 40 20 16 40 23 16 40 25 16 40 27 16 40 30 16 40 32 16 40 34 16 40 36 16 40 37 16 40 38 16 48 16 16 48 19 16 48 22 16 48 24 16 48 27 16 48 30 16 48 33 16 48 36 16 48 38 16 48 41 16 48 43 16 48 44 16 48 45 16 48 46 16 59 16 16 59 19 16 59 23 16 59 27 16 59 30 16 59 34 16 59 37 16 59 40 16 59 44
Copied 15022397 14442188 10215144 10371671 10928970 10471549 6772949 6859504 7043184 7375292 7340954 7922126 8293596 7771935 8025961 6204868 5222159 4929155 5628181 5498713 5750122 6258225 6885023 6589158 6741542 7015919 6366145 6043575 6018127 5123821 5000839 4443304 5339933 5348156 5320769 5299091 5675534 5597408 5363695 6067396 5993059 5458426 5119388 5060803 4428231 3831748 4242984 3619011
Invoc 202 194 159 147 145 140 149 135 130 121 113 108 107 103 102 139 118 108 102 92 87 84 82 80 80 175 146 125 115 100 90 81 76 72 68 65 65 64 63 171 144 117 99 89 78 71 66 59
W.b. insert 691552 650465 523931 506829 508015 468950 391021 393432 390656 399131 381777 373738 387928 345727 353078 394565 325390 294684 315003 280682 297743 320467 319584 297539 302214 453171 413473 387995 372811 325205 299587 254293 301675 278635 260587 254216 263095 249194 237574 427593 405996 380371 346842 333779 294468 247338 257728 210137
W.b. add 690821 649727 522731 505971 507252 468246 389673 392148 389306 397991 380724 372858 387226 345088 352484 393269 324061 293340 313756 279550 296630 319579 318875 296912 301580 451753 412152 386673 371417 323880 298234 253128 300479 277583 259708 253417 262372 248551 236924 426210 404647 379034 345476 332439 293170 246043 256513 209048
R.s. proc R.s. union 688179 61101 648159 42269 521201 70165 504092 61392 505247 50695 466517 38481 387750 77126 390579 71144 387458 66895 395859 69880 378827 56643 370897 41140 386004 33455 343718 33232 338322 21835 381663 106112 315733 68888 280911 64753 303257 52789 278229 40468 295202 48603 307087 37450 307070 33989 283060 27556 288084 20456 436123 198121 396148 156014 370904 118760 360205 97758 316603 66816 285256 56096 243353 47840 291059 49678 267053 39210 257618 32060 240854 26795 260499 24012 237258 20681 235872 14446 411632 190122 383088 168399 360963 126711 322272 95848 321913 82557 282862 56263 226567 48893 247190 50401 198233 39249
Table 6.8. continued
Configuration 16 59 47 16 59 50 16 59 52 16 59 54 16 59 55 16 59 56 16 59 57 16 71 15 16 71 19 16 71 23 16 71 28 16 71 32 16 71 36 16 71 40 16 71 45 16 71 48 16 71 53 16 71 57 16 71 60 16 71 63 16 71 65 16 71 67 16 71 68 16 87 13 16 87 18 16 87 23 16 87 29 16 87 34 16 87 39 16 87 44 16 87 50 16 87 55 16 87 59 16 87 64 16 87 70 16 87 74 16 87 77 16 87 80 16 87 82 16 87 83 16 87 84 16 106 16 16 106 22 16 106 29 16 106 35 16 106 41 16 106 48 16 106 54
Copied 4092301 4100409 4489595 4694964 4567134 4181649 4378152 5567821 5112476 4866178 4444597 4616234 4342026 3921500 3794354 3693642 3555322 3193516 3741124 3819069 3239254 3850305 3785459 6861644 5520477 4725415 4553976 4354855 4273716 3861602 3251451 3187167 2385002 2712068 2663283 3078241 2887515 3235180 3090178 3092943 3044820 5400088 4911542 3821524 3916903 3223219 3773352 3433005
Invoc 56 53 52 51 50 49 49 179 140 115 94 83 73 65 58 54 49 45 44 42 40 40 39 211 148 114 90 77 67 59 51 46 42 39 36 35 33 33 32 31 31 165 119 88 73 61 53 47
W.b. Insert 220715 209935 225827 219955 210713 187498 189995 401222 383689 349337 334461 324387 300051 258041 256542 244650 197976 188305 192020 208191 145690 179152 171500 484879 422721 359143 345349 331949 314523 292382 247373 218122 167044 170902 162063 158828 152056 167320 141836 146963 132294 417255 378195 321985 339782 290913 305262 258829
W.b. Add 219633 209008 224959 219244 210063 186926 189440 399819 382293 348030 333159 323066 298777 256766 255358 243448 196859 187167 191041 207226 144965 178557 170895 483544 421304 357810 344019 330590 313218 291053 246079 216922 165803 169739 160930 157894 151010 166520 141180 146381 131668 415857 376786 320622 338356 289492 303906 257475
R.s. proc R.s. union 209486 37151 198465 31631 214989 29266 217878 18448 200383 17547 185962 16694 188374 10146 356916 174691 348016 152030 312562 119985 304325 105541 306269 94277 279764 69523 239774 41689 243397 50207 225361 46644 189087 32385 178197 31528 180860 24807 198506 27749 136205 16752 167889 17752 162532 8085 447743 260481 375043 193479 332317 142372 320113 122664 305335 103546 268593 84114 262752 68539 228659 47459 181375 40855 157481 30191 155269 29519 151059 28675 156254 22269 144249 23558 164394 20253 139402 16854 139535 18180 122696 7682 372883 194567 342849 158288 264355 104214 319626 120343 239780 76981 270925 80480 235913 58535
Table 6.8. continued Configuration 16 106 60 16 106 67 16 106 72 16 106 78 16 106 85 16 106 90 16 106 94 16 106 98 16 106 100 16 106 101 16 106 102 18 18 18 18 18 18 18 18 18 18 18 18 18 18 18 18 18 18 18 18 18 18 18 18 18 18 18 18 18 18 18 18 18 18 18 18 18
7 9 9 9 10 10 10 10 10 12 12 12 12 12 12 12 15 15 15 15 15 15 15 15 15 15 15 18 18 18 18 18 18 18 18 18 18
6 6 7 8 5 6 7 8 9 5 6 7 8 9 10 11 4 5 6 7 8 9 10 11 12 13 14 5 6 7 8 9 10 11 12 13 14
Copied 2386269 2852379 2197131 2528417 2213776 1995010 2400718 2040224 2498239 2554660 2275252
Invoc 41 37 34 32 29 27 27 25 25 25 25
W.b. Insert 213989 205778 160155 171715 129217 105560 139024 103153 123388 125683 101331
W.b. Add 212619 204559 158917 170547 128087 104557 138094 102243 122618 124936 100795
10557914 7212395 7835864 7415219 5723150 6210093 6653768 6701407 6260650 6717821 5511308 4905544 5209000 4841445 5708098 5600359 6157757 4870721 4809979 4354547 4140150 4011778 4593828 4249132 4084660 4339548 4143888 5042347 4788418 4110896 4588074 4269130 3844825 3669324 3475408 3406012 3544328
140 118 105 95 135 114 99 89 80 138 112 95 84 74 69 65 169 132 109 93 81 72 66 60 55 51 48 131 109 92 82 72 64 58 53 49 46
294815 289894 266405 216069 291938 276688 259263 231447 187567 341051 282779 241128 238692 191780 206728 170640 330425 269418 260175 264881 227447 199959 217149 189257 163333 155471 126194 287392 281559 235952 252808 232024 202414 191400 171920 153469 158764
293847 288362 265177 215232 290204 275015 257737 230322 186750 339290 281038 239437 237033 190364 205589 169742 328690 267669 258508 263136 225647 198292 215420 187734 162004 154426 125359 285709 279887 234263 251087 230281 200696 189827 170370 152073 157255
R.s. proc 179286 172279 152954 164234 112924 93101 126715 97705 116714 117566 99704
R.s. union 37349 37419 31859 31803 22362 16032 18170 16500 14460 10100 7490
293323 44046 288042 80266 259341 62673 215200 32671 284242 97673 265455 74802 248276 67344 229924 53824 179450 30549 323855 143889 272139 91058 232150 72936 228488 59910 190163 46738 196434 47371 169487 25248 309381 187449 257811 134666 232261 108225 254738 96607 217532 62564 197750 51809 207852 59952 179803 47988 154811 40091 147815 36485 119563 21872 247083 147913 246142 129767 222178 94438 238426 92465 208545 62143 186812 55014 185155 49506 159075 41369 144916 35399 156563 39395
Table 6.8. continued
Configuration 18 18 15 18 18 16 18 18 17 18 22 5 18 22 6 18 22 7 18 22 9 18 22 10 18 22 11 18 22 13 18 22 14 18 22 15 18 22 16 18 22 18 18 22 19 18 22 20 18 22 21 18 27 6 18 27 7 18 27 9 18 27 11 18 27 12 18 27 14 18 27 15 18 27 17 18 27 18 18 27 20 18 27 22 18 27 23 18 27 24 18 27 25 18 27 26
Copied 3329820 3287413 3836959 4917512 4612817 3973449 4147048 3777769 3640102 3035591 2828839 3121825 3267859 3142156 2831318 2980521 2722441 5018366 3843051 4176674 3627756 3574707 3162265 2744717 2581970 2518603 2505493 2801038 2306098 2194403 2656252 2380790
Invoc 43 40 39 130 108 91 71 64 58 48 45 42 40 35 33 32 31 108 90 71 57 53 45 41 36 34 31 28 27 26 25 24
W.b. Insert 137208 116152 127504 288535 285171 250471 257294 230397 218415 178309 165702 167960 168708 138456 115127 104751 78071 310200 258424 271754 232750 238065 204877 187890 145799 137983 122418 126559 105465 83865 99236 77629
W.b. Add 135835 115149 126662 286734 283426 248765 255593 228692 216708 176616 164029 166327 167178 137147 113811 103811 77183 308441 256720 270022 231047 236361 203242 186165 144224 136392 120866 125271 104160 82530 98176 76918
R.s. proc 128533 107842 119261 264264 263832 209590 232516 206185 192405 164597 146328 154361 166278 125392 109332 103716 77033 272225 204359 244730 217383 220664 189230 183113 134180 126686 108873 106120 96805 76205 93425 64985
R.s. union 30683 27048 15473 156815 145352 104140 101679 71029 61628 50916 42830 42704 48558 29120 23176 20637 11231 160868 112017 122197 85758 86245 50264 53894 35182 35129 28921 26725 23457 18399 16770 9296
20 20 20 20 20 20 20 20 20 20 20 20 20
4703450 4223654 3947436 4042792 4017172 3249012 3137875 2965312 4090073 3339464 2501170 2258927 2334110
55 80 53 40 79 52 39 31 79 52 38 31 26
73387 161229 113939 64723 173008 124241 98298 52494 175125 145188 102923 66914 37746
72316 159328 112157 63594 171092 122363 96696 51474 173155 143172 101006 65123 36860
72303 151462 105296 61042 160611 112022 94284 51420 160420 134171 90905 65082 36860
67128 94810 61217 49981 110782 56659 57238 47026 118410 74379 47702 38710 31767
4 5 5 5 6 6 6 6 7 7 7 7 7
3 2 3 4 2 3 4 5 2 3 4 5 6
Table 6.9. Costs of block 2GYF for StandardNonInteractive 9 9 9 9 9 9 9 9 9 9 9 9 9 9 9 9 9 9 9 9 9 9 9 9 9 9 9 9 9 9 9 9 9 9 9 9 9 9 9 9 9 9 9 9 9 9 9 9
Configuration 12 1 11 12 2 10 12 3 9 12 4 8 12 5 7 12 6 6 12 7 5 12 8 4 12 9 3 12 10 2 14 1 13 14 2 12 14 3 11 14 4 10 14 5 9 14 6 8 14 7 7 14 8 6 14 9 5 14 10 4 14 12 2 17 1 16 17 2 15 17 3 14 17 4 13 17 5 12 17 6 11 17 7 10 17 8 9 17 9 8 17 10 7 17 11 6 17 12 5 17 13 4 17 14 3 17 15 2 17 16 1 21 1 20 21 2 19 21 3 18 21 4 17 21 6 15 21 7 14 21 8 13 21 9 12 21 11 10 21 12 9 21 13 8
Copied 68661 61484 58205 58087 55902 58361 101550 166802 166943 166285 60948 50440 44807 42610 36442 37997 38401 43540 74875 122732 125215 51698 42915 38716 33446 29027 28362 27882 28055 29946 28362 31109 58231 89909 91049 90940 90495 48810 38044 34070 28600 24552 23099 20912 20256 20496 20191 21118
Invoc 1660 855 575 434 351 294 267 271 269 268 1660 847 567 425 339 284 245 217 202 201 202 1660 841 563 422 337 281 240 212 189 170 156 148 148 147 147 147 1660 838 560 419 279 239 209 186 153 140 130
W.b. insert 44 33 32 18 17 21 18 12 16 11 44 22 29 18 17 16 20 20 21 17 15 44 31 24 15 15 14 12 16 12 12 9 11 9 9 9 18 44 26 21 24 14 17 15 14 10 10 13
W.b. add R.s. proc 43 43 31 31 30 30 18 18 17 17 19 19 16 16 12 12 16 16 11 11 43 43 22 22 27 27 18 18 17 17 16 16 18 18 18 18 21 21 17 17 15 15 43 43 29 29 24 24 15 15 15 15 14 14 12 12 16 16 12 12 12 12 9 9 11 11 9 9 9 9 9 9 16 16 43 43 24 24 21 21 22 22 14 14 15 15 15 15 14 14 10 10 10 10 13 13
Table 6.9. continued 9 9 9 9 9 9 9 9 9 9 9 9 9 9 9 9 9 9 9 9 9 9 9 9 9 9 9 9 9 9 9 9 9 9 9 9 9 9 9 9 9 9 9 9 9 9 9 9
Configuration 21 14 7 21 16 5 21 17 4 21 18 3 21 19 2 25 1 24 25 2 23 25 4 21 25 5 20 25 7 18 25 8 17 25 10 15 25 11 14 25 13 12 25 14 11 25 16 9 25 17 8 25 18 7 25 20 5 25 21 4 25 22 3 25 23 2 25 24 1 30 1 29 30 3 27 30 4 26 30 6 24 30 8 22 30 10 20 30 12 18 30 13 17 30 15 15 30 17 13 30 19 11 30 20 10 30 22 8 30 24 6 30 26 4 30 27 3 30 28 2 30 29 1 37 1 36 37 3 34 37 6 31 37 8 29 37 10 27 37 12 25 37 14 23
Copied 22860 36625 66141 67015 67067 45402 35483 27412 23741 19498 19741 17654 17208 15894 15784 16045 15699 17661 32105 52301 52990 53042 51770 43928 30277 26449 21397 18556 15389 15119 14433 12765 13636 13611 11694 12932 14458 41158 41859 42143 42258 41822 29189 20292 16979 15318 13990 13070
Invoc 121 108 109 108 108 1660 836 418 334 238 209 167 152 129 119 105 99 93 86 86 86 86 85 1660 557 417 278 209 167 139 128 111 98 88 83 76 70 68 68 68 68 1660 555 278 208 166 138 118
W.b. Insert 14 11 12 13 17 44 32 15 14 16 19 11 12 14 10 10 13 15 7 9 10 9 10 44 29 12 14 9 17 13 13 9 10 9 13 9 10 9 8 11 11 44 24 16 14 18 9 15
W.b. Add R.s. proc 13 13 11 11 11 11 12 12 16 16 43 43 30 30 15 15 14 14 16 16 17 17 11 11 12 12 14 14 10 10 8 8 12 12 14 11 7 7 8 8 9 9 8 8 10 7 43 43 28 28 12 12 14 14 9 9 17 17 13 13 12 12 9 9 10 10 8 8 12 9 8 8 9 9 8 8 7 7 8 8 9 9 43 43 22 22 16 16 14 14 16 16 9 9 15 15
Table 6.9. continued 9 9 9 9 9 9 9 9 9 9 9 9 9 9 9 9 9 9 9 9 9 9 9 9 9 9 9 9 9 9 9 9 9 9 9 9 9 9 9 9 9 9 9 9 9 9 9 9
Configuration 37 17 20 37 19 18 37 21 16 37 23 14 37 25 12 37 27 10 37 30 7 37 31 6 37 33 4 37 34 3 37 35 2 37 36 1 45 1 44 45 4 41 45 7 38 45 9 36 45 12 33 45 15 30 45 18 27 45 20 25 45 23 22 45 26 19 45 28 17 45 31 14 45 33 12 45 36 9 45 38 7 45 40 5 45 41 4 45 42 3 45 43 2 54 2 52 54 5 49 54 8 46 54 11 43 54 15 39 54 18 36 54 21 33 54 24 30 54 28 26 54 31 23 54 34 20 54 37 17 54 40 14 54 43 11 54 46 8 54 48 6 54 50 4
Copied 11364 10258 10063 9294 9057 8664 10502 11677 30614 32404 32382 31275 40736 24297 17924 15341 14015 12351 10893 10564 8960 8577 7844 6509 7247 6951 9219 15978 26141 25786 25674 31682 21191 15530 13664 12187 10725 9544 8414 7521 6605 6066 5882 6336 6354 6122 7962 20310
Invoc 98 87 79 72 66 61 55 54 52 53 53 52 1660 416 237 185 138 111 92 83 72 64 59 53 50 46 44 42 42 42 42 832 332 208 151 110 92 79 69 59 53 48 44 41 38 36 34 34
W.b. Insert 11 9 7 7 7 8 12 8 11 9 9 11 44 13 12 12 12 15 8 11 8 11 8 9 13 6 10 7 7 8 5 28 11 22 11 9 10 9 12 8 7 7 9 8 12 7 5 5
W.b. Add R.s. proc 10 10 8 8 6 6 6 6 6 6 7 7 10 10 6 6 8 8 7 7 7 7 9 9 43 43 13 13 12 12 12 12 12 12 13 13 8 8 10 10 7 7 9 9 6 6 7 7 10 7 5 5 7 7 5 5 5 5 6 6 4 4 26 26 11 11 20 20 11 11 9 9 10 10 8 8 10 10 6 6 5 5 5 5 6 6 6 6 8 8 5 5 3 3 3 3
Table 6.9. continued 9 9 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10
Configuration 54 51 3 54 52 2 6 6 6 6 6 7 7 7 7 7 7 9 9 9 9 9 9 9 9 11 11 11 11 11 11 11 11 11 11 13 13 13 13 13 13 13 13 13 13 13 13 15 15 15 15 15
1 2 3 4 5 1 2 3 4 5 6 1 2 3 4 5 6 7 8 1 2 3 4 5 6 7 8 9 10 1 2 3 4 5 6 7 8 9 10 11 12 1 2 3 4 5
5 4 3 2 1 6 5 4 3 2 1 8 7 6 5 4 3 2 1 10 9 8 7 6 5 4 3 2 1 12 11 10 9 8 7 6 5 4 3 2 1 14 13 12 11 10
Copied 20853 20813
Invoc 34 34
W.b. Insert 6 4
52015 56515 59973 167107 171625 44129 39398 36711 39148 122983 126087 38784 30657 25144 22517 24128 26623 81307 81959 36124 28622 23663 21165 19045 17685 16995 19370 61354 62624 34469 26663 21504 18307 17733 16051 14239 14891 15482 15544 47686 48798 33122 25234 20817 17860 16421
809 430 295 276 275 809 419 280 214 204 203 809 412 275 206 166 140 135 134 809 410 273 205 164 137 118 104 101 101 809 409 272 204 163 136 116 102 91 82 80 80 809 408 271 203 163
47 31 27 21 19 47 25 27 23 20 19 47 31 27 18 14 11 12 20 47 27 22 17 14 12 15 12 17 16 47 27 26 18 10 16 10 15 14 13 12 10 47 24 21 19 22
W.b. Add 4 3
R.s. proc 4 3
0 41 0 27 0 24 0 19 0 17 0 41 0 23 0 22 0 21 0 16 0 15 0 41 0 26 0 24 0 16 0 13 0 11 0 12 0 18 0 41 0 25 0 19 0 14 0 14 0 12 0 14 0 11 0 11 0 11 0 41 0 24 0 23 0 17 0 10 0 14 0 10 0 11 0 11 0 11 0 9 0 8 0 41 0 23 0 20 0 16 0 19
Table 6.9. continued 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10
Configuration 15 6 9 15 7 8 15 8 7 15 9 6 15 10 5 15 11 4 15 12 3 15 13 2 15 14 1 19 1 18 19 2 17 19 3 16 19 4 15 19 5 14 19 6 13 19 7 12 19 9 10 19 10 9 19 11 8 19 12 7 19 13 6 19 14 5 19 15 4 19 16 3 19 17 2 19 18 1 23 1 22 23 2 21 23 3 20 23 5 18 23 6 17 23 8 15 23 9 14 23 10 13 23 12 11 23 13 10 23 14 9 23 16 7 23 17 6 23 18 5 23 20 3 23 21 2 23 22 1 27 1 26 27 2 25 27 4 23 27 6 21 27 7 20
Copied 14757 13974 12068 11423 10993 12107 12924 40572 41193 32177 24097 20776 16553 15571 13389 14480 10272 10607 9072 8399 8169 9170 9195 10712 29814 30882 30661 23744 18500 14819 12824 11015 9695 9302 8109 7705 8517 6397 7271 6580 9505 23898 24581 30043 23397 15854 13077 12490
Invoc 135 116 101 90 81 74 68 67 67 809 407 271 203 162 135 116 90 81 73 67 62 58 54 51 50 50 809 406 270 162 135 101 90 81 67 62 58 50 47 45 41 40 40 809 406 202 135 115
W.b. Insert 20 14 10 16 7 9 12 15 8 47 31 24 19 19 19 14 14 11 13 13 10 14 9 13 9 7 47 26 19 17 14 17 12 14 15 12 9 7 6 13 6 11 9 47 21 21 18 14
W.b. Add R.s. proc 0 17 0 13 0 9 0 13 0 6 0 7 0 11 0 11 0 7 0 41 0 26 0 21 0 19 0 18 0 18 0 14 0 12 0 10 0 11 0 11 0 7 0 9 0 7 0 9 0 7 0 4 0 41 0 24 0 18 0 16 0 13 0 15 0 12 0 12 0 11 0 8 0 7 0 5 0 5 0 10 0 4 0 8 0 6 0 41 0 19 0 20 0 15 0 14
Table 6.9. continued 10 10 10 10 10 10 10 10 10 10 10 10 10 11 11 11 11 11 11 11 11 11 11 11 11 11 11 11 11 11 11 11 11 11 11 11 11 11 11 11 11 11 11 11 11 11 11 11
Configuration 27 9 18 27 11 16 27 12 15 27 14 13 27 15 12 27 17 10 27 18 9 27 20 7 27 22 5 27 23 4 27 24 3 27 25 2 27 26 1 3 3 4 4 4 5 5 5 5 6 6 6 6 6 7 7 7 7 7 7 8 8 8 8 8 8 8 10 10 10 10 10 10 10 10
1 2 1 2 3 1 2 3 4 1 2 3 4 5 1 2 3 4 5 6 1 2 3 4 5 6 7 1 2 3 4 5 6 7 8
2 1 3 2 1 4 3 2 1 5 4 3 2 1 6 5 4 3 2 1 7 6 5 4 3 2 1 9 8 7 6 5 4 3 2
Copied 10031 9247 7402 8033 7116 6359 6235 5414 5069 5809 7721 19675 20053
Invoc 90 73 67 58 54 47 45 40 36 35 34 33 33
W.b. Insert 12 9 13 11 11 9 9 13 8 5 3 9 9
42337 233072 31591 30579 118307 28407 22474 20884 78084 27035 18402 17821 15745 57828 25754 19308 15979 14151 12819 46931 24756 17335 14368 12184 10529 10265 39331 24114 17507 13435 11396 10093 8230 8534 9868
402 386 402 214 197 402 207 140 131 402 205 137 103 98 402 204 136 102 82 79 402 203 135 101 81 68 66 402 203 135 101 81 67 58 51
45 39 45 36 24 45 23 18 21 45 23 23 20 20 45 18 25 17 23 12 45 27 17 28 26 12 14 45 32 23 16 20 18 19 13
W.b. Add 0 0 0 0 0 0 0 0 0 0 0 0 0
R.s. proc 12 7 10 10 8 6 7 8 6 3 2 6 6
33 33 29 29 33 33 27 27 19 19 33 33 17 17 14 14 18 18 33 33 19 19 18 18 16 14 16 14 33 33 15 15 18 18 11 11 17 17 10 10 33 33 21 21 15 15 21 19 19 17 8 8 11 11 33 33 24 24 18 18 14 14 15 15 12 10 13 13 8 8
Table 6.9. continued 11 11 11 11 11 11 11 11 11 11 11 11 11 11 11 11 11 11 11 11 11 11 11 11 11 12 12 12 12 12 12 12 12 12 12 12 12 12 12 12 12 12 12 12 12 12 13 13
Configuration 10 9 1 12 1 11 12 2 10 12 3 9 12 4 8 12 5 7 12 6 6 12 7 5 12 8 4 12 9 3 12 10 2 12 11 1 14 1 13 14 2 12 14 3 11 14 4 10 14 5 9 14 6 8 14 7 7 14 8 6 14 9 5 14 10 4 14 11 3 14 12 2 14 13 1 2 3 3 4 4 4 5 5 5 5 6 6 6 6 6 7 7 7 7 7 7 2 3
1 1 2 1 2 3 1 2 3 4 1 2 3 4 5 1 2 3 4 5 6 1 1
1 2 1 3 2 1 4 3 2 1 5 4 3 2 1 6 5 4 3 2 1 1 2
Copied 29527 23601 15349 12650 10936 9233 7793 7953 5973 6304 6389 23564 23289 15858 13014 9667 8019 7636 7142 5395 5011 5429 5079 5379 20418
Invoc 49 402 202 134 101 80 67 57 50 45 40 39 402 202 134 100 80 67 57 50 44 40 36 33 33
W.b. Insert 15 45 26 26 16 21 14 15 10 11 16 7 45 24 27 22 24 17 8 13 7 14 5 14 11
25089 18384 16318 17152 12254 11228 16261 11053 9781 8171 15684 11166 8358 5971 7824 15771 9968 7505 5515 5555 5953 12024 10791
200 200 106 200 102 69 200 101 68 51 200 101 67 50 41 200 100 67 50 40 34 100 100
14 14 9 14 10 7 14 8 8 11 14 13 13 7 6 14 14 13 7 10 3 13 13
W.b. Add 11 33 21 20 14 16 10 10 7 7 11 5 33 18 20 17 18 12 6 8 5 10 4 9 7
R.s. proc 11 33 21 20 14 16 10 10 7 7 11 5 33 18 20 15 16 12 6 6 5 10 4 9 7
0 13 0 13 0 8 0 13 0 8 0 6 0 13 0 8 0 6 0 9 0 13 0 12 0 10 0 5 0 4 0 13 0 12 0 10 0 5 0 7 0 2 10 10 10 10
Table 6.9. continued 13 13 13 13
Configuration 3 2 4 1 4 2 4 3
1 3 2 1
Copied 7321 10331 5709 5271
Invoc 51 100 50 34
W.b. Insert 12 13 10 13
W.b. Add 7 10 6 8
R.s. proc 7 10 6 8
Table 6.10. Costs of block 2GYF for HeapSim 13 13 13 13 13 13 13 13 13 13 13 13 13 13 13 13 13 13 13 13 13 13 13 13 13 13 13 13 13 13 13 13 13 13 13 13 13 13 13 13 13 13 13 13 13 13 13 13
Configuration 56 2 56 5 56 8 56 12 56 15 56 18 56 22 56 25 56 29 56 32 56 35 56 38 56 41 56 45 56 48 56 50 56 52 56 53 56 54 68 2 68 6 68 10 68 14 68 18 68 22 68 27 68 31 68 35 68 39 68 43 68 46 68 50 68 54 68 58 68 60 68 63 68 64 68 65 68 66 83 2 83 7 83 12 83 17 83 22 83 27 83 32 83 37 83 42
54 51 48 44 41 38 34 31 27 24 21 18 15 11 8 6 4 3 2 66 62 58 54 50 46 41 37 33 29 25 22 18 14 10 8 5 4 3 2 81 76 71 66 61 56 51 46 41
Copied 870801 1034112 1286129 3941765 5434667 6358576 6820966 7134689 7369797 7496892 7584273 7608841 7703753 7693569 7728651 7747103 7755799 7765052 7766506 531008 609541 688552 764906 925821 1432020 2935521 3430263 3693126 3925412 3992511 4109706 4123386 4207035 4155494 4175790 4185104 4194039 4195671 4197194 452646 441550 433508 517424 507445 582750 744293 1060740 1868085
Invoc 930 372 233 159 135 123 113 109 106 104 103 102 102 101 101 101 101 101 101 928 309 185 132 103 85 72 66 63 61 59 59 58 58 57 57 57 57 57 57 927 265 154 109 84 68 58 50 45
W.b. insert 54 46 67 74 64 63 6 79 46 90 66 61 22 16 94 91 91 91 91 54 58 45 46 63 6 46 41 66 61 6 22 91 91 82 82 62 62 62 51 54 39 74 61 6 46 90 63 6
W.b. add R.s. proc 32 32 36 36 30 30 22 22 24 24 22 22 4 4 18 18 12 12 18 18 18 18 18 18 12 12 9 9 10 10 10 10 10 10 10 10 10 10 32 32 22 22 28 28 14 14 22 22 4 4 12 12 16 16 18 18 18 18 4 4 11 11 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 32 32 22 22 22 22 20 20 4 4 12 12 18 18 18 18 4 4
Table 6.10. continued 13 13 13 13 13 13 13 13 13 13 13 13 13 13 13 13 13 13 13 13 13 13 13 13 13 13 13 13 13 13 13 13 13 13 13 13 13 13 13 13 13 13 13 13 13 13 13 13
Configuration 83 47 83 52 83 56 83 61 83 66 83 71 83 74 83 76 83 78 83 79 83 80 101 3 101 9 101 15 101 21 101 27 101 33 101 39 101 45 101 52 101 58 101 64 101 69 101 75 101 81 101 86 101 90 101 93 101 95 101 96 101 97 123 4 123 11 123 18 123 26 123 33 123 41 123 48 123 55 123 63 123 70 123 77 123 84 123 91 123 98 123 105 123 109 123 113
36 31 27 22 17 12 9 7 5 4 3 98 92 86 80 74 68 62 56 49 43 37 32 26 20 15 11 8 6 5 4 119 112 105 97 90 82 75 68 60 53 46 39 32 25 18 14 10
Copied 2216894 2478769 2625433 2650391 2648219 2691703 2717850 2721748 2736561 2736846 2737385 364364 356838 351203 437140 427744 423309 498849 573029 718667 1191258 1548218 1729766 1767884 1913110 1861959 1892968 1897475 1912600 1913019 1913162 367112 360290 356871 350951 346567 341702 419620 409600 485102 555359 695936 1054992 1236695 1350981 1350562 1381353 1387089
Invoc 42 41 40 39 38 38 38 38 38 38 38 618 206 123 88 68 56 47 41 35 32 30 29 28 28 27 27 27 27 27 27 463 168 103 71 56 45 38 33 29 26 24 22 21 21 20 20 20
W.b. Insert 83 91 91 62 51 38 35 35 35 35 14 64 57 64 21 46 79 61 16 91 82 62 38 35 6 6 6 6 0 0 0 64 13 63 75 79 22 94 91 62 38 35 6 6 0 0 0 0
W.b. Add R.s. proc 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 4 4 30 30 28 28 24 24 12 12 12 12 18 18 18 18 9 9 10 10 10 10 10 10 10 10 10 10 4 4 4 4 4 4 4 4 0 0 0 0 0 0 30 30 9 9 22 22 18 18 18 18 12 12 10 10 10 10 10 10 10 10 10 10 4 4 4 4 0 0 0 0 0 0 0 0
Table 6.10. continued 13 13 13 13 13 13 13 13 13 13 13 13 13 13 13 13 13 13 13 13 13 13 13 13 13 13 13 13 13 13 13 13 13 13 13 13 13 13 13 13 13 13 13 13 13 13 13 13
Configuration 123 116 123 117 123 119 150 4 150 13 150 22 150 31 150 40 150 49 150 58 150 67 150 76 150 86 150 94 150 102 150 111 150 120 150 127 150 133 150 138 150 141 150 143 150 145 182 5 182 16 182 27 182 38 182 49 182 60 182 71 182 82 182 93 182 104 182 115 182 124 182 135 182 146 182 155 182 162 182 167 182 171 182 174 182 175 222 7 222 20 222 33 222 47 222 60
7 6 4 146 137 128 119 110 101 92 83 74 64 56 48 39 30 23 17 12 9 7 5 177 166 155 144 133 122 111 100 89 78 67 58 47 36 27 20 15 11 8 7 215 202 189 175 162
Copied 1403346 1404333 1405931 276703 270671 358277 351756 348642 340637 332811 330118 409382 396500 461782 533716 886871 980039 1078403 1021244 1026050 1043567 1045162 1046595 275501 268907 265779 261723 257678 254023 340141 329720 319356 313253 391403 387396 431013 767934 781055 819055 822947 826290 845160 845708 273644 267618 265886 261249 254023
Invoc 20 20 20 463 142 84 59 46 37 31 27 24 21 19 18 17 16 16 15 15 15 15 15 370 115 68 48 37 30 26 22 19 17 16 15 13 13 12 12 12 12 12 12 264 92 56 39 30
W.b. Insert 0 0 0 64 76 6 41 30 94 82 51 35 6 0 0 0 0 0 0 0 0 0 0 46 69 46 61 94 82 38 6 6 0 0 0 0 0 0 0 0 0 0 0 39 45 79 83 82
W.b. Add R.s. proc 0 0 0 0 0 0 30 30 26 26 4 4 16 16 12 12 10 10 10 10 10 10 10 10 4 4 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 36 36 20 20 12 12 18 18 10 10 10 10 10 10 4 4 4 4 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 22 22 20 20 18 18 10 10 10 10
Table 6.10. continued 13 13 13 13 13 13 13 13 13 13 13 13 13 13 13 13 13 13 13 13 13 13 13 13 13 13 13 13 13 13 13 13 13 13 13 14 14 14 14 14 14 14 14 14 14 14 14 14
Configuration 222 73 222 87 222 100 222 113 222 127 222 140 222 151 222 164 222 178 222 189 222 197 222 204 222 209 222 212 222 214 270 8 270 24 270 40 270 57 270 73 270 89 270 105 270 121 270 138 270 154 270 170 270 184 270 200 270 216 270 229 270 240 270 248 270 254 270 258 270 260 28 28 28 28 28 28 28 28 28 28 28 28 28
1 3 4 6 8 9 11 13 14 16 18 19 21
149 135 122 109 95 82 71 58 44 33 25 18 13 10 8 262 246 230 213 197 181 165 149 132 116 100 86 70 54 41 30 22 16 12 10
Copied 255296 252564 246353 322692 312849 312936 305065 364597 477447 601892 566560 604225 607740 628480 629312 272989 267984 263317 257903 255296 245838 243856 244499 238743 242362 221404 305304 290679 340288 457035 424007 464167 467124 490462 491197
Invoc 25 21 18 16 14 13 12 11 10 10 9 9 9 9 9 231 77 46 32 25 20 17 15 13 12 10 10 9 8 8 7 7 7 7 7
W.b. Insert 35 6 0 0 0 0 0 0 0 0 0 0 0 0 0 67 97 30 82 35 6 0 0 0 0 0 0 0 0 0 0 0 0 0 0
27 25 24 22 20 19 17 15 14 12 10 9 7
869909 1120294 1285986 3944878 6012802 6450262 7077996 7444919 7541359 7744556 7798127 7851738 7895573
926 312 234 161 133 125 116 111 109 107 105 105 104
55 58 67 74 69 64 6 75 46 90 63 61 6
W.b. Add 10 4 0 0 0 0 0 0 0 0 0 0 0 0 0 30 19 12 10 10 4 0 0 0 0 0 0 0 0 0 0 0 0 0 0
R.s. proc 10 4 0 0 0 0 0 0 0 0 0 0 0 0 0 30 19 12 10 10 4 0 0 0 0 0 0 0 0 0 0 0 0 0 0
33 33 22 22 30 30 22 22 20 20 23 23 4 4 18 18 12 12 18 18 18 18 18 18 4 4
Table 6.10. continued 14 14 14 14 14 14 14 14 14 14 14 14 14 14 14 14 14 14 14 14 14 14 14 14 14 14 14 14 14 14 14 14 14 14 14 14 14 14 14 14 14 14 14 14 14 14 14 14
Configuration 28 22 28 24 28 25 28 26 28 27 34 1 34 3 34 5 34 7 34 9 34 11 34 13 34 15 34 17 34 19 34 21 34 23 34 25 34 27 34 29 34 30 34 31 34 32 34 33 42 1 42 4 42 6 42 9 42 11 42 14 42 16 42 19 42 21 42 24 42 26 42 29 42 31 42 34 42 36 42 37 42 39 42 40 51 2 51 5 51 8 51 11 51 14 51 17
6 4 3 2 1 33 31 29 27 25 23 21 19 17 15 13 11 9 7 5 4 3 2 1 41 38 36 33 31 28 26 23 21 18 16 13 11 8 6 5 3 2 49 46 43 40 37 34
Copied 7931478 7975717 7995432 8002797 7923387 530093 610574 690209 766825 925646 1432355 2693810 3378238 3727929 3940912 4110512 4192636 4204518 4287917 4232454 4256008 4261143 4273504 4275538 453109 440733 434399 517276 507756 583425 748562 1066824 1715150 2291863 2388381 2550359 2569868 2663301 2697877 2723672 2639926 2650397 362095 355580 350258 436784 429303 420642
Invoc 104 104 104 104 103 926 310 186 133 103 85 74 68 65 62 61 60 59 59 58 58 58 58 58 926 232 154 103 84 66 58 49 45 42 40 39 38 38 38 38 37 37 463 185 115 84 66 54
W.b. Insert 6 94 91 91 91 55 58 45 46 64 6 75 41 66 61 6 22 91 91 82 82 62 62 51 55 67 74 64 6 46 90 61 6 94 91 82 62 38 35 35 35 14 64 45 69 6 46 66
W.b. Add R.s. proc 4 4 10 10 10 10 10 10 10 10 33 33 22 22 28 28 14 14 23 23 4 4 18 18 15 15 18 18 18 18 4 4 11 11 10 10 10 10 10 10 10 10 10 10 10 10 10 10 33 33 30 30 22 22 23 23 4 4 12 12 18 18 18 18 4 4 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 4 4 30 30 28 28 20 20 4 4 12 12 18 18
Table 6.10. continued 14 14 14 14 14 14 14 14 14 14 14 14 14 14 14 14 14 14 14 14 14 14 14 14 14 14 14 14 14 14 14 14 14 14 14 14 14 14 14 14 14 14 14 14 14 14 14 14
Configuration 51 20 51 23 51 26 51 29 51 32 51 35 51 38 51 41 51 43 51 45 51 47 51 48 51 49 62 2 62 6 62 9 62 13 62 17 62 20 62 24 62 28 62 32 62 35 62 39 62 42 62 46 62 50 62 53 62 55 62 57 62 58 62 59 62 60 75 2 75 7 75 11 75 16 75 20 75 25 75 29 75 34 75 38 75 43 75 47 75 51 75 55 75 60 75 64
31 28 25 22 19 16 13 10 8 6 4 3 2 60 56 53 49 45 42 38 34 30 27 23 20 16 12 9 7 5 4 3 2 73 68 64 59 55 50 46 41 37 32 28 24 20 15 11
Copied 500845 574248 636179 1036920 1549415 1736336 1818516 1826210 1866994 1896779 1903081 1917221 1917641 367393 359795 357443 351650 344897 342101 419842 414183 489104 555421 700272 992445 1241519 1401630 1354021 1384797 1390180 1393542 1407944 1409924 276798 270676 358230 350861 348755 345556 332826 332931 409481 396615 461811 533705 884634 979391 1079097
Invoc 46 40 35 32 30 29 28 27 27 27 27 27 27 463 154 103 71 54 46 38 33 29 26 24 22 21 21 20 20 20 20 20 20 463 132 84 57 46 37 31 27 24 21 19 18 17 16 16
W.b. Insert 30 22 91 82 62 38 35 6 6 6 0 0 0 64 74 64 75 66 30 94 91 62 38 35 6 6 0 0 0 0 0 0 0 64 46 6 90 30 91 82 38 35 6 0 0 0 0 0
W.b. Add R.s. proc 12 12 11 11 10 10 10 10 10 10 10 10 10 10 4 4 4 4 4 4 0 0 0 0 0 0 30 30 22 22 23 23 18 18 18 18 12 12 10 10 10 10 10 10 10 10 10 10 4 4 4 4 0 0 0 0 0 0 0 0 0 0 0 0 0 0 30 30 14 14 4 4 18 18 12 12 10 10 10 10 10 10 10 10 4 4 0 0 0 0 0 0 0 0 0 0
Table 6.10. continued 14 14 14 14 14 14 14 14 14 14 14 14 14 14 14 14 14 14 14 14 14 14 14 14 14 14 14 14 14 14 14 14 14 14 14 14 14 14 14 14 14 14 14 14 14 14 14 14
Configuration 75 67 75 69 75 70 75 72 91 3 91 8 91 14 91 19 91 25 91 30 91 35 91 41 91 46 91 52 91 57 91 62 91 67 91 73 91 77 91 81 91 84 91 86 91 87 91 88 111 3 111 10 111 17 111 23 111 30 111 37 111 43 111 50 111 57 111 63 111 70 111 75 111 82 111 89 111 94 111 99 111 102 111 104 111 106 111 107 135 4 135 12 135 20 135 28
8 6 5 3 88 83 77 72 66 61 56 50 45 39 34 29 24 18 14 10 7 5 4 3 108 101 94 88 81 74 68 61 54 48 41 36 29 22 17 12 9 7 5 4 131 123 115 107
Copied 1021877 1025889 1026730 1044375 274399 269021 267136 261765 263450 254149 337675 329728 329467 313322 393073 387498 429283 767943 779692 818263 823472 843787 845022 845974 274399 267806 264324 262606 254149 258364 249567 246441 324461 311024 313082 303956 364568 477572 601083 600446 604158 607112 628130 628904 273007 268092 263433 261564
Invoc 15 15 15 15 308 115 66 48 37 30 26 22 20 17 16 15 13 13 12 12 12 12 12 12 308 92 54 40 30 25 21 18 16 14 13 12 11 10 10 9 9 9 9 9 231 77 46 33
349
W.b. Insert 0 0 0 0 58 69 46 61 91 82 38 6 6 0 0 0 0 0 0 0 0 0 0 0 58 45 66 22 82 35 6 0 0 0 0 0 0 0 0 0 0 0 0 0 67 97 30 91
W.b. Add R.s. proc 0 0 0 0 0 0 0 0 22 22 20 20 12 12 18 18 10 10 10 10 10 10 4 4 4 4 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 22 22 20 20 18 18 11 11 10 10 10 10 4 4 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 30 30 19 19 12 12 10 10 continued on next page
Table 6.10. continued 14 14 14 14 14 14 14 14 14 14 14 14 14 14 14 14 16 16 16 16 16 16 16 16 16 16 16 16 16 16 16 16 16 16 16 16 16 16 16 16 16 16 16 16 16 16 16 16
Configuration 135 36 135 45 135 53 135 61 135 69 135 77 135 85 135 92 135 100 135 108 135 115 135 120 135 124 135 127 135 129 135 130 7 7 7 7 7 7 9 9 9 9 9 9 9 9 11 11 11 11 11 11 11 11 11 11 13 13 13 13 13 13 13 13
1 2 3 4 5 6 1 2 3 4 5 6 7 8 1 2 3 4 5 6 7 8 9 10 1 2 3 4 5 6 7 8
99 90 82 74 66 58 50 43 35 27 20 15 11 8 6 5
Copied 251845 248316 245581 246485 238777 242369 221402 305397 290738 340309 457726 424030 464096 466794 490551 491039
Invoc 25 20 17 15 13 12 10 10 9 8 8 7 7 7 7 7
W.b. Insert 35 6 0 0 0 0 0 0 0 0 0 0 0 0 0 0
6 5 4 3 2 1 8 7 6 5 4 3 2 1 10 9 8 7 6 5 4 3 2 1 12 11 10 9 8 7 6 5
1284039 8296588 9902468 10319074 10537601 10575430 518621 684418 1095064 3074650 3729588 3994675 4102691 4103254 442953 434323 513533 580495 903826 1939845 2332710 2544224 2593103 2655987 357813 352895 435919 427418 501287 576508 808112 1413737
231 162 144 138 136 135 231 118 80 66 60 58 57 56 231 116 78 58 47 42 39 38 37 37 231 116 77 58 46 39 34 30
67 69 97 90 30 94 67 69 97 90 30 94 91 62 67 69 97 90 30 94 91 62 35 14 67 69 97 90 30 94 91 62
350
W.b. Add 10 4 0 0 0 0 0 0 0 0 0 0 0 0 0 0
R.s. proc 10 4 0 0 0 0 0 0 0 0 0 0 0 0 0 0
30 30 20 20 19 19 18 18 12 12 10 10 30 30 20 20 19 19 18 18 12 12 10 10 10 10 10 10 30 30 20 20 19 19 18 18 12 12 10 10 10 10 10 10 10 10 4 4 30 30 20 20 19 19 18 18 12 12 10 10 10 10 10 10 continued on next page
Table 6.10. continued 16 16 16 16 16 16 16 16 16 16 16 16 16 16 16 16 16 16 16 16 16 16 16 16 16 16 16 16 16 16 16 16 16 16 16 16 16 16 16 16 16 16 16 16 16 16 16 16
Configuration 13 9 13 10 13 11 13 12 16 1 16 2 16 3 16 4 16 5 16 6 16 7 16 8 16 9 16 10 16 11 16 12 16 13 16 14 16 15 19 1 19 2 19 3 19 4 19 5 19 6 19 7 19 9 19 10 19 11 19 12 19 13 19 14 19 15 19 16 19 17 19 18 23 1 23 2 23 3 23 5 23 6 23 8 23 9 23 10 23 12 23 13 23 14 23 16
4 3 2 1 15 14 13 12 11 10 9 8 7 6 5 4 3 2 1 18 17 16 15 14 13 12 10 9 8 7 6 5 4 3 2 1 22 21 20 18 17 15 14 13 11 10 9 7
Copied 1737251 1870260 1956374 1897467 363693 359415 354255 350245 343107 336487 417399 492475 483055 616426 1069844 1301969 1320651 1368624 1404722 273129 269231 357476 355667 349496 343165 341860 326615 409354 399766 463637 534989 887851 931422 1074535 1113409 1133992 273129 269231 268176 263459 259815 345115 334223 336564 325496 313479 311585 373268
Invoc 29 28 28 27 231 116 77 58 46 38 33 29 26 23 22 21 20 20 20 231 115 77 58 46 38 33 25 23 21 19 18 17 16 16 16 16 231 115 77 46 38 29 25 23 19 17 16 14
351
W.b. Insert 35 14 6 0 67 69 97 90 30 94 91 62 35 14 6 0 0 0 0 67 69 97 90 30 94 91 35 14 6 0 0 0 0 0 0 0 67 69 97 30 94 62 35 14 0 0 0 0
W.b. Add R.s. proc 10 10 4 4 4 4 0 0 30 30 20 20 19 19 18 18 12 12 10 10 10 10 10 10 10 10 4 4 4 4 0 0 0 0 0 0 0 0 30 30 20 20 19 19 18 18 12 12 10 10 10 10 10 10 4 4 4 4 0 0 0 0 0 0 0 0 0 0 0 0 0 0 30 30 20 20 19 19 12 12 10 10 10 10 10 10 4 4 0 0 0 0 0 0 0 0 continued on next page
Table 6.10. continued 16 16 16 16 16 16 16 16 16 16 16 16 16 16 16 16 16 16 16 16 16 16 16 16 16 16 16 16 16 16 16 16 16 16 16 16 16 16 16 16 16 16 18 18 18 18 18 18
Configuration 23 17 23 18 23 20 23 21 23 22 28 1 28 3 28 4 28 6 28 8 28 9 28 11 28 13 28 14 28 16 28 18 28 19 28 21 28 22 28 24 28 25 28 26 28 27 34 1 34 3 34 5 34 7 34 9 34 11 34 13 34 15 34 17 34 19 34 21 34 23 34 25 34 27 34 29 34 30 34 31 34 32 34 33 2 3 3 4 4 4
1 1 2 1 2 3
6 5 3 2 1 27 25 24 22 20 19 17 15 14 12 10 9 7 6 4 3 2 1 33 31 29 27 25 23 21 19 17 15 13 11 9 7 5 4 3 2 1
Copied 433102 647428 785484 821942 843570 273129 268176 262993 259815 252325 251976 255850 241944 243076 314679 298119 306532 370542 417712 654522 600403 605405 629714 273129 268176 263459 261516 251976 255850 241944 242910 236298 240138 312169 305446 290797 275320 458789 516603 463834 467877 491679
Invoc 13 13 12 12 12 231 77 57 38 28 25 21 17 16 14 12 12 11 10 10 9 9 9 231 77 46 33 25 21 17 15 13 12 11 10 9 8 8 8 7 7 7
W.b. Insert 0 0 0 0 0 67 97 90 94 62 35 6 0 0 0 0 0 0 0 0 0 0 0 67 97 30 91 35 6 0 0 0 0 0 0 0 0 0 0 0 0 0
1 2 1 3 2 1
3106246 504738 2986212 346000 501336 1566272
57 57 46 57 30 25
90 90 62 90 62 0
352
W.b. Add 0 0 0 0 0 30 19 18 10 10 10 4 0 0 0 0 0 0 0 0 0 0 0 30 19 12 10 10 4 0 0 0 0 0 0 0 0 0 0 0 0 0
R.s. proc 0 0 0 0 0 30 19 18 10 10 10 4 0 0 0 0 0 0 0 0 0 0 0 30 19 12 10 10 4 0 0 0 0 0 0 0 0 0 0 0 0 0
18 18 18 18 10 10 18 18 10 10 0 0 continued on next page
Table 6.10. continued 18 18 18 18 18 18 18 18 18 18 18 18 18 18 18 18 18 18 18 18 18 18 18
Configuration 5 1 5 2 5 3 5 4 6 1 6 2 6 3 6 4 6 5 7 1 7 2 7 3 7 4 7 5 7 6 9 1 9 2 9 3 9 4 9 5 9 6 9 7 9 8
4 3 2 1 5 4 3 2 1 6 5 4 3 2 1 8 7 6 5 4 3 2 1
Copied 263046 340420 391606 1067071 263046 252384 326866 377105 828746 263046 252384 250860 314411 293525 642845 263046 252384 250860 240538 230695 289227 282355 432879
Invoc 57 29 19 17 57 28 19 14 13 57 28 19 14 11 10 57 28 19 14 11 9 8 7
353
W.b. Insert 90 62 0 0 90 62 0 0 0 90 62 0 0 0 0 90 62 0 0 0 0 0 0
W.b. Add 18 10 0 0 18 10 0 0 0 18 10 0 0 0 0 18 10 0 0 0 0 0 0
R.s. proc 18 10 0 0 18 10 0 0 0 18 10 0 0 0 0 18 10 0 0 0 0 0 0
Table 6.11. Costs of block 2GYF for Lambda-Fact5 9 9 9 9 9 9 9 9 9 9 9 9 9 9 9 9 9 9 9 9 9 9 9 9 9 9 9 9 9 9 9 9 9 9 9 9 9 9 9 9 9 9 9 9 9 9 9 9
Configuration 60 2 60 5 60 9 60 13 60 16 60 20 60 23 60 27 60 31 60 34 60 38 60 41 60 44 60 48 60 51 60 53 60 55 60 56 60 57 74 2 74 7 74 11 74 16 74 20 74 24 74 29 74 33 74 38 74 42 74 47 74 50 74 55 74 59 74 63 74 66 74 68 74 70 74 71 90 3 90 8 90 13 90 19 90 24 90 30 90 35 90 41 90 46 90 51
58 55 51 47 44 40 37 33 29 26 22 19 16 12 9 7 5 4 3 72 67 63 58 54 50 45 41 36 32 27 24 19 15 11 8 6 4 3 87 82 77 71 66 60 55 49 44 39
Copied 541348 519056 488873 445425 430114 397687 404469 387990 388007 381103 384510 396249 392140 387960 389122 396863 390214 397538 384240 416822 382338 356321 327954 311479 286996 277301 259576 247839 242288 229321 237818 236205 239478 243217 242333 242209 242546 242905 352473 327816 294284 265600 244875 217970 205729 192591 186906 177450
Invoc 1141 478 277 197 166 138 119 105 97 92 90 91 90 89 89 90 89 90 88 1127 335 216 151 125 107 93 80 69 64 57 57 56 56 56 56 56 56 56 753 290 180 125 101 82 69 61 56 50
354
W.b. insert 6451 4018 3065 2475 2209 1977 1824 1685 1653 1575 1522 1581 1533 1554 1490 1598 1516 1560 1586 6431 3368 2570 2090 1870 1701 1517 1433 1291 1207 1069 1138 1107 1197 1174 1190 1200 1190 1142 5358 3146 2280 1848 1596 1455 1180 1101 999 999
W.b. add R.s. proc 6005 6001 3511 3491 2541 2521 2024 2004 1798 1778 1529 1518 1436 1430 1317 1311 1266 1258 1197 1186 1130 1119 1193 1182 1153 1147 1158 1147 1118 1107 1177 1171 1144 1133 1152 1146 1179 1171 5998 5994 2901 2881 2102 2089 1688 1677 1498 1475 1333 1322 1162 1139 1067 1043 947 924 886 863 756 733 810 787 813 790 844 821 859 855 834 811 844 821 837 814 819 796 4837 4833 2636 2630 1881 1864 1476 1456 1233 1213 1103 1080 880 864 780 772 694 674 707 698 continued on next page
Table 6.11. continued 9 9 9 9 9 9 9 9 9 9 9 9 9 9 9 9 9 9 9 9 9 9 9 9 9 9 9 9 9 9 9 9 9 9 9 9 9 9 9 9 9 9 9 9 9 9 9 9
Configuration 90 57 90 61 90 67 90 72 90 77 90 80 90 83 90 85 90 86 90 87 109 3 109 10 109 16 109 23 109 29 109 36 109 43 109 49 109 56 109 62 109 69 109 74 109 81 109 87 109 93 109 97 109 100 109 102 109 104 109 105 133 4 133 12 133 20 133 28 133 36 133 44 133 52 133 60 133 68 133 76 133 84 133 90 133 98 133 106 133 113 133 118 133 122 133 125
33 29 23 18 13 10 7 5 4 3 106 99 93 86 80 73 66 60 53 47 40 35 28 22 16 12 9 7 5 4 129 121 113 105 97 89 81 73 65 57 49 43 35 27 20 15 11 8
Copied 164503 160294 158419 165209 165180 166985 167880 167842 167553 168016 316464 281512 259119 225704 215199 180159 173902 161806 152103 139504 132127 124984 119697 118720 122668 123560 124467 124704 124923 125274 289219 259453 215921 195279 168687 160922 139407 124571 121266 116068 104697 102357 95320 93735 93061 92965 92886 95138
Invoc 43 41 39 39 39 39 39 39 39 39 749 229 146 101 83 65 56 49 43 40 37 33 30 29 29 29 29 29 29 29 561 192 114 83 64 54 45 39 35 31 28 27 24 23 22 22 22 22
355
W.b. Insert 875 861 801 858 850 827 854 847 896 849 5388 2788 2093 1647 1436 1091 1083 952 966 956 753 696 673 654 609 551 661 677 647 694 4576 2454 1707 1505 1215 1027 892 834 852 736 621 628 506 583 534 575 532 569
W.b. Add R.s. proc 620 596 603 602 558 557 601 600 597 573 590 566 600 576 593 569 605 581 583 563 4856 4850 2311 2291 1657 1637 1238 1225 1115 1111 816 800 783 767 680 671 674 657 639 611 530 510 453 442 465 454 442 431 428 417 405 394 445 432 453 440 438 425 465 451 4038 4025 2024 2011 1352 1340 1142 1114 857 833 745 725 612 596 576 560 581 553 482 470 421 400 413 402 343 325 370 359 338 327 381 364 338 326 385 368 continued on next page
Table 6.11. continued 9 9 9 9 9 9 9 9 9 9 9 9 9 9 9 9 9 9 9 9 9 9 9 9 9 9 9 9 9 9 9 9 9 9 9 9 9 9 9 9 9 9 9 9 9 9 9 9
Configuration 133 127 133 128 162 5 162 15 162 24 162 34 162 44 162 53 162 63 162 73 162 83 162 92 162 102 162 110 162 120 162 130 162 138 162 144 162 149 162 152 162 155 162 156 197 6 197 18 197 30 197 41 197 53 197 65 197 77 197 89 197 100 197 112 197 124 197 134 197 146 197 158 197 167 197 175 197 181 197 185 197 188 197 190 240 7 240 22 240 36 240 50 240 65 240 79
6 5 157 147 138 128 118 109 99 89 79 70 60 52 42 32 24 18 13 10 7 6 191 179 167 156 144 132 120 108 97 85 73 63 51 39 30 22 16 12 9 7 233 218 204 190 175 161
Copied 95916 93361 267402 224009 195645 177581 141140 131217 122239 111256 98526 93310 92391 77564 74978 72327 66808 70902 72442 72725 73416 73626 251417 204598 171722 157554 129481 109249 98023 87951 81236 78617 69774 66479 55534 64936 61391 54088 54172 54511 54829 55363 241721 188739 160183 132082 100249 100134
Invoc 22 22 448 151 95 68 51 43 36 32 27 25 23 21 19 18 17 17 17 17 17 17 372 125 77 56 44 35 29 25 22 20 18 17 15 15 14 13 13 13 13 13 319 102 63 45 34 28
356
W.b. Insert 563 531 4019 2099 1657 1369 977 847 820 718 614 639 640 560 453 420 383 414 396 387 378 402 3595 1880 1362 1151 823 796 645 617 474 537 475 469 385 406 407 346 289 293 317 303 3324 1635 1254 896 694 697
W.b. Add R.s. proc 386 369 339 328 3478 3474 1689 1678 1282 1270 1001 989 688 679 603 587 563 546 479 455 414 396 401 387 405 389 342 323 297 281 290 270 252 228 261 241 251 243 248 240 220 212 248 240 3083 3063 1505 1485 1024 1018 811 791 580 576 541 521 450 438 372 365 314 310 372 365 282 270 317 293 253 227 241 225 245 224 238 234 188 181 181 174 205 198 195 188 2837 2833 1254 1250 931 911 627 613 453 437 457 441 continued on next page
Table 6.11. continued 9 9 9 9 9 9 9 9 9 9 9 9 9 9 9 9 9 9 9 9 9 9 9 9 9 9 9 9 9 9 9 9 9 9 10 10 10 10 10 10 10 10 10 10 10 10 10 10
Configuration 240 94 240 108 240 122 240 137 240 151 240 163 240 178 240 192 240 204 240 213 240 221 240 226 240 229 240 231 293 9 293 26 293 44 293 62 293 79 293 97 293 114 293 132 293 149 293 167 293 185 293 199 293 217 293 234 293 249 293 260 293 270 293 275 293 280 293 283 30 30 30 30 30 30 30 30 30 30 30 30 30 30
1 3 5 6 8 10 12 14 15 17 19 20 22 24
146 132 118 103 89 77 62 48 36 27 19 14 11 9 284 267 249 231 214 196 179 161 144 126 108 94 76 59 44 33 23 18 13 10
Copied 75522 77034 73309 58927 61022 59596 44263 49991 52861 41281 42077 42829 42951 42440 230995 176976 126540 113727 96895 70936 73770 66835 55446 49568 45330 48040 50871 37002 36729 29497 33815 34113 35031 35128
Invoc 23 20 18 16 15 14 12 12 11 10 10 10 10 10 249 86 50 36 28 22 19 17 15 13 12 11 10 9 9 8 8 8 8 8
W.b. Insert 500 499 566 388 397 443 304 285 333 313 361 278 269 266 2874 1483 915 793 690 449 491 394 388 353 269 322 343 260 245 212 234 240 211 244
29 27 25 24 22 20 18 16 15 13 11 10 8 6
504721 493591 469702 455436 433866 405701 396355 375607 378252 370778 369224 372191 372825 369799
1096 402 251 215 166 139 114 101 99 90 88 87 87 86
6475 3741 2858 2631 2310 1977 1699 1646 1692 1532 1479 1493 1472 1443
357
W.b. Add 361 315 352 257 225 257 203 182 186 169 214 162 159 155 2413 1097 664 538 459 304 315 266 246 222 177 188 200 148 167 144 127 147 130 147
R.s. proc 354 307 334 241 208 240 192 169 173 160 201 155 152 153 2389 1078 663 534 439 287 306 260 240 204 158 173 191 133 158 131 117 137 120 134
6045 6040 3213 3208 2429 2423 2191 2170 1836 1815 1542 1537 1339 1330 1249 1240 1301 1292 1168 1159 1128 1119 1120 1111 1113 1104 1111 1102 continued on next page
Table 6.11. continued 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10
Configuration 30 25 30 27 30 28 30 29 37 1 37 3 37 6 37 8 37 10 37 12 37 14 37 17 37 19 37 21 37 23 37 25 37 27 37 30 37 31 37 33 37 34 37 35 37 36 45 1 45 4 45 7 45 9 45 12 45 15 45 18 45 20 45 23 45 26 45 28 45 31 45 33 45 36 45 38 45 40 45 41 45 42 45 43 55 2 55 5 55 8 55 12 55 15 55 18
5 3 2 1 36 34 31 29 27 25 23 20 18 16 14 12 10 7 6 4 3 2 1 44 41 38 36 33 30 27 25 22 19 17 14 12 9 7 5 4 3 2 53 50 47 43 40 37
Copied 370411 373067 377358 379146 402383 382942 362899 332890 311970 275830 275934 261423 232772 243743 239533 236848 235513 241808 242420 245172 244543 244502 245127 353424 323842 296183 267218 250262 217082 209500 191227 172476 169063 163434 166869 160096 161969 166677 159021 171166 171026 160194 310159 287090 253189 223420 202294 180383
Invoc 86 86 87 87 1096 387 203 152 124 102 94 77 68 63 59 57 56 56 56 56 56 56 56 1096 289 169 133 100 81 67 61 54 48 45 41 39 39 39 38 39 39 38 559 228 146 99 79 66
358
W.b. Insert 1466 1505 1490 1565 6475 3713 2574 2141 1887 1555 1521 1348 1216 1253 1214 1156 1108 1163 1187 1172 1092 1170 1132 6475 3065 2290 1846 1681 1321 1204 1239 1065 919 918 853 827 801 837 843 866 860 872 4508 2818 2070 1757 1282 1238
W.b. Add R.s. proc 1116 1107 1135 1126 1135 1130 1190 1181 6045 6040 3200 3179 2188 2177 1705 1687 1488 1460 1210 1192 1200 1177 1024 1008 890 869 925 904 862 841 831 810 792 771 839 818 846 825 850 829 785 764 843 822 822 801 6045 6040 2576 2562 1847 1829 1505 1485 1279 1268 980 971 903 885 844 835 759 757 662 641 641 627 612 597 598 596 568 554 592 571 573 571 604 589 603 588 589 587 3999 3994 2375 2354 1670 1665 1327 1306 974 968 891 880 continued on next page
Table 6.11. continued 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10
Configuration 55 21 55 25 55 28 55 31 55 35 55 37 55 41 55 44 55 47 55 49 55 51 55 52 55 53 67 2 67 6 67 10 67 14 67 18 67 22 67 26 67 30 67 34 67 38 67 42 67 46 67 50 67 54 67 57 67 60 67 62 67 63 67 64 67 65 81 2 81 7 81 12 81 17 81 22 81 27 81 32 81 36 81 41 81 46 81 51 81 55 81 60 81 65 81 69
34 30 27 24 20 18 14 11 8 6 4 3 2 65 61 57 53 49 45 41 37 33 29 25 21 17 13 10 7 5 4 3 2 79 74 69 64 59 54 49 45 40 35 30 26 21 16 12
Copied 172069 157822 149722 137655 129809 120973 123159 119421 123430 117808 121022 126603 115000 283390 250824 220082 186661 171867 152373 141344 130512 119034 108758 101358 99658 84844 86312 87889 85898 86939 86724 86910 86863 270331 232799 200403 174292 141700 119754 114348 106330 97090 90406 93792 83496 85288 71694 65364
Invoc 57 48 43 39 36 33 30 29 29 28 28 29 28 556 189 114 82 64 53 45 39 34 31 28 26 23 21 21 21 21 21 21 21 554 162 95 68 52 42 36 32 28 24 23 21 20 18 17
359
W.b. Insert 1059 947 929 843 800 701 639 707 669 679 622 711 702 4554 2446 1810 1341 1128 978 930 825 704 671 698 667 561 574 559 523 506 514 537 512 4631 2261 1523 1178 993 799 785 732 592 584 602 593 522 399 410
W.b. Add R.s. proc 790 772 659 654 650 639 565 559 552 542 498 484 453 443 455 441 471 457 459 449 405 392 488 474 454 444 4003 3998 2010 1996 1430 1409 1050 1029 847 840 714 708 662 657 566 551 462 449 456 435 450 432 440 427 356 339 342 327 360 345 349 332 344 334 350 340 362 352 348 338 4106 4092 1823 1802 1190 1172 869 855 711 690 583 578 541 513 488 471 408 387 384 370 382 364 360 347 327 307 281 275 254 248 continued on next page
Table 6.11. continued 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10
Configuration 81 72 81 75 81 76 81 77 81 78 99 3 99 9 99 15 99 21 99 27 99 33 99 39 99 45 99 50 99 56 99 62 99 67 99 73 99 79 99 84 99 88 99 91 99 93 99 94 99 95 120 4 120 11 120 18 120 25 120 32 120 40 120 47 120 54 120 61 120 68 120 76 120 82 120 89 120 96 120 102 120 107 120 110 120 113 120 115 120 116 147 4 147 13 147 22
9 6 5 4 3 96 90 84 78 72 66 60 54 49 43 37 32 26 20 15 11 8 6 5 4 116 109 102 95 88 80 73 66 59 52 44 38 31 24 18 13 10 7 5 4 143 134 125
Copied 70560 73053 73296 73629 73145 252476 206197 173680 144853 114794 107260 94829 85306 89400 76284 71390 64683 61765 56394 55458 52848 54754 56217 55777 55592 235627 188284 153031 116555 116392 98180 78680 86124 78551 60098 64064 50010 49037 34952 50741 43288 42428 41300 42267 42398 233275 174267 137977
Invoc 17 17 17 17 17 371 124 75 54 41 34 28 25 22 20 18 17 16 14 13 13 13 13 13 13 276 102 62 44 35 28 23 21 18 16 15 13 12 11 11 10 10 10 10 10 277 85 50
360
W.b. Insert 445 407 419 440 461 3675 1892 1368 1109 840 795 646 567 629 501 449 427 354 363 388 311 347 329 354 363 3154 1667 1088 851 749 696 546 544 503 417 382 343 309 220 291 322 322 262 281 313 3088 1465 1021
W.b. Add R.s. proc 261 255 269 263 278 272 285 279 272 266 3162 3148 1504 1498 965 951 804 793 598 583 532 516 421 414 348 346 383 366 332 311 296 283 277 267 226 212 243 232 241 224 213 200 235 218 225 212 237 220 244 227 2689 2675 1287 1273 833 810 607 587 501 486 457 436 371 364 352 331 324 311 260 250 245 225 210 202 204 194 145 133 182 174 187 179 201 175 172 162 171 161 185 175 2623 2602 1128 1119 711 709 continued on next page
Table 6.11. continued 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 11 11 11 11 11 11 11 11 11 11 11 11 11 11 11 11 11 11 11 11 11 11 11 11 11 11 11 11 11 11 11
Configuration 147 31 147 40 147 49 147 57 147 66 147 75 147 84 147 93 147 100 147 109 147 118 147 125 147 131 147 135 147 138 147 140 147 142 15 15 15 15 15 15 15 15 15 15 15 15 15 15 19 19 19 19 19 19 19 19 19 19 19 19 19 19 19 19 19
1 2 3 4 5 6 7 8 9 10 11 12 13 14 1 2 3 4 5 6 7 9 10 11 12 13 14 15 16 17 18
116 107 98 90 81 72 63 54 47 38 29 22 16 12 9 7 5
Copied 106754 94787 78981 68818 63941 59008 55011 42206 46389 47507 40519 34582 33209 34964 34830 34947 35807
Invoc 35 28 22 19 16 14 13 12 11 10 9 9 8 8 8 8 8
W.b. Insert 720 599 516 430 425 372 342 222 328 312 292 192 210 199 248 244 235
14 13 12 11 10 9 8 7 6 5 4 3 2 1 18 17 16 15 14 13 12 10 9 8 7 6 5 4 3 2 1
467717 475298 455922 443993 435288 423367 398437 397132 393226 389098 391532 395961 398139 398654 370511 363163 338978 327001 302591 285586 263147 257212 243522 230531 230877 222246 232162 232044 235653 229139 228779
545 306 216 169 140 122 105 99 93 91 91 91 91 91 545 292 200 153 127 105 92 75 67 60 57 54 54 54 54 53 53
4571 3285 2622 2370 1954 1844 1666 1600 1564 1565 1489 1534 1604 1572 4571 3287 2542 2168 1947 1738 1547 1354 1333 1189 1167 1131 1180 1168 1176 1138 1149
361
W.b. Add 513 406 330 297 273 237 205 154 204 176 162 124 145 125 161 156 149
R.s. proc 503 396 313 295 252 229 198 147 197 160 145 118 125 112 143 138 141
4034 4024 2799 2789 2187 2164 1911 1901 1598 1588 1473 1462 1276 1265 1232 1221 1188 1177 1185 1174 1134 1123 1170 1159 1200 1189 1186 1175 4034 4024 2747 2737 2131 2121 1742 1719 1549 1523 1341 1318 1197 1186 992 975 967 950 849 832 830 813 808 791 840 823 825 808 823 806 810 793 813 796 continued on next page
Table 6.11. continued 11 11 11 11 11 11 11 11 11 11 11 11 11 11 11 11 11 11 11 11 11 11 11 11 11 11 11 11 11 11 11 11 11 11 11 11 11 11 11 11 11 11 11 11 11 11 11 11
Configuration 23 1 23 2 23 3 23 5 23 6 23 8 23 9 23 10 23 12 23 13 23 14 23 16 23 17 23 18 23 20 23 21 23 22 28 1 28 3 28 4 28 6 28 8 28 9 28 11 28 13 28 14 28 16 28 18 28 19 28 21 28 22 28 24 28 25 28 26 28 27 34 1 34 3 34 5 34 7 34 9 34 11 34 13 34 15 34 17 34 19 34 21 34 23 34 25
22 21 20 18 17 15 14 13 11 10 9 7 6 5 3 2 1 27 25 24 22 20 19 17 15 14 12 10 9 7 6 4 3 2 1 33 31 29 27 25 23 21 19 17 15 13 11 9
Copied 327404 318153 295698 253774 244165 212816 202880 183653 177460 170799 147612 155945 145533 149099 155944 157396 157834 293010 274851 254458 226542 190617 189565 158746 146329 140082 124181 124900 120217 117506 113070 115508 117470 118514 119398 279528 253264 217520 199918 165250 145333 130353 123999 121405 109440 101414 93235 95668
Invoc 545 286 195 118 102 77 68 61 52 49 43 39 37 37 37 37 37 545 193 145 97 74 66 53 46 43 37 34 32 29 28 28 28 28 28 545 189 114 83 65 53 44 39 34 31 28 26 24
362
W.b. Insert 4571 3102 2529 1828 1689 1337 1235 1081 951 997 847 865 800 797 822 814 808 4571 2555 2144 1638 1336 1186 976 939 829 718 679 760 680 713 649 670 677 668 4571 2603 1708 1382 1185 1059 864 823 883 726 739 629 614
W.b. Add R.s. proc 4034 4024 2655 2632 2107 2096 1415 1404 1281 1258 1014 988 901 878 826 809 723 715 689 672 583 575 614 606 561 553 550 542 551 540 553 545 547 539 4034 4024 2155 2145 1711 1701 1286 1263 963 953 867 857 713 687 655 644 566 556 513 499 484 468 503 480 455 447 455 438 460 437 465 442 470 447 463 440 4034 4024 2107 2097 1356 1333 1072 1061 840 832 724 713 624 616 569 561 585 572 477 454 470 444 410 387 389 381 continued on next page
Table 6.11. continued 11 11 11 11 11 11 11 11 11 11 11 11 11 11 11 11 11 11 11 11 11 11 11 11 11 11 11 11 11 11 11 11 11 11 11 11 11 11 11 11 11 11 11 11 11 11 11 11
Configuration 34 27 34 29 34 30 34 31 34 32 34 33 41 1 41 4 41 6 41 9 41 11 41 14 41 16 41 18 41 21 41 23 41 26 41 28 41 30 41 33 41 35 41 36 41 38 41 39 41 40 50 1 50 4 50 7 50 10 50 13 50 16 50 19 50 22 50 25 50 29 50 31 50 34 50 37 50 40 50 42 50 44 50 46 50 47 50 48 60 2 60 5 60 9 60 13
7 5 4 3 2 1 40 37 35 32 30 27 25 23 20 18 15 13 11 8 6 5 3 2 1 49 46 43 40 37 34 31 28 25 21 19 16 13 10 8 6 4 3 2 58 55 51 47
Copied 92611 87358 88396 89963 90399 90889 262820 224937 198688 155193 146092 113557 101849 101990 95449 91900 85609 72534 80081 67212 69432 66597 68564 69026 69199 250849 222880 176061 150107 115751 108995 96094 87250 84153 68436 77620 60806 53895 56918 53551 54148 53881 54010 54619 237410 191152 149451 118581
Invoc 22 21 21 21 21 21 545 143 95 63 51 40 35 31 27 25 22 20 19 17 17 16 16 16 16 545 140 81 56 43 35 29 25 22 19 18 16 15 14 13 13 13 13 13 276 110 62 43
363
W.b. Insert 505 509 495 497 488 487 4571 2115 1576 1184 1037 816 772 675 625 578 607 431 502 426 432 366 377 382 376 4571 1949 1424 1179 852 808 640 601 581 454 504 413 392 376 350 388 301 304 329 3063 1730 1127 884
W.b. Add R.s. proc 326 315 342 334 331 323 327 319 318 310 321 313 4034 4024 1698 1688 1202 1179 871 860 699 682 590 576 536 516 452 438 397 383 395 382 380 364 293 281 315 295 309 291 256 233 240 219 245 227 256 238 250 232 4034 4024 1579 1553 1068 1051 839 816 592 575 553 536 420 407 392 381 366 350 287 274 302 288 262 250 259 245 219 210 221 210 244 224 189 178 192 181 223 212 2606 2596 1367 1341 838 815 614 588 continued on next page
Table 6.11. continued 11 11 11 11 11 11 11 11 11 11 11 11 11 11 11 11 11 11 11 11 11 11 11 11 11 11 11 11 11 11 11 11 11 11 11 12 12 12 12 12 12 12 12 12 12 12 12 12
Configuration 60 16 60 20 60 23 60 27 60 31 60 34 60 38 60 41 60 44 60 48 60 51 60 53 60 55 60 56 60 57 60 58 74 2 74 7 74 11 74 16 74 20 74 24 74 29 74 33 74 38 74 42 74 47 74 50 74 55 74 59 74 63 74 66 74 68 74 70 74 71 8 8 8 8 8 8 8 10 10 10 10 10 10
1 2 3 4 5 6 7 1 2 3 4 5 6
44 40 37 33 29 26 22 19 16 12 9 7 5 4 3 2 72 67 63 58 54 50 45 41 36 32 27 24 19 15 11 8 6 4 3
Copied 95876 93150 77962 87116 75593 56701 53276 53381 47310 40984 50988 41279 41443 41378 42767 43068 232550 174698 133291 98364 85201 71846 71782 60370 59700 55605 36852 44382 38009 38994 31356 36166 33072 33881 33182
Invoc 34 28 24 21 18 16 14 13 12 11 11 10 10 10 10 10 275 79 51 34 27 23 19 16 14 13 11 11 10 9 8 8 8 8 8
W.b. Insert 744 653 520 534 471 365 341 344 324 291 318 348 254 241 316 316 3084 1459 1010 723 544 538 496 422 354 400 256 311 267 250 252 273 239 237 212
7 6 5 4 3 2 1 9 8 7 6 5 4
391274 399789 362210 339700 350733 345465 348829 327103 308676 278388 254996 232789 212523
271 164 120 92 83 81 81 271 152 106 83 68 56
3057 2294 1883 1487 1558 1482 1489 3057 2131 1749 1374 1271 1020
364
W.b. Add 508 415 353 353 289 242 218 214 201 158 189 211 156 148 182 190 2606 1104 706 495 353 349 297 281 215 272 158 192 181 154 156 148 159 160 137
R.s. proc 494 398 343 330 281 231 203 206 188 145 179 198 147 139 173 181 2583 1087 695 487 340 341 289 272 213 246 145 184 155 141 137 135 146 147 123
2568 2547 1851 1830 1441 1420 1111 1092 1143 1124 1091 1072 1100 1081 2568 2547 1732 1713 1327 1306 1031 1012 956 937 770 747 continued on next page
Table 6.11. continued 12 12 12 12 12 12 12 12 12 12 12 12 12 12 12 12 12 12 12 12 12 12 12 12 12 12 12 12 12 12 12 12 12 12 12 12 12 12 12 12 12 12 12 12 12 12 12 12
Configuration 10 7 10 8 10 9 12 1 12 2 12 3 12 4 12 5 12 6 12 7 12 8 12 9 12 10 12 11 14 1 14 2 14 3 14 4 14 5 14 6 14 7 14 8 14 9 14 10 14 11 14 12 14 13 17 1 17 2 17 3 17 4 17 5 17 6 17 7 17 8 17 9 17 10 17 11 17 12 17 13 17 14 17 15 17 16 21 1 21 2 21 3 21 4 21 6
3 2 1 11 10 9 8 7 6 5 4 3 2 1 13 12 11 10 9 8 7 6 5 4 3 2 1 16 15 14 13 12 11 10 9 8 7 6 5 4 3 2 1 20 19 18 17 15
Copied 206265 210689 213840 297240 270348 245794 212952 189565 176931 144679 145702 148323 150258 152741 277819 254597 229450 198325 172695 151306 137507 146970 125481 121787 120946 126089 128092 261385 233292 208296 183445 175017 141775 124339 119844 97831 100516 100024 89294 92416 93215 86762 89865 247258 217152 203286 174774 134792
Invoc 50 50 50 271 147 102 78 61 53 45 39 36 36 36 271 145 99 75 59 49 43 39 35 31 29 29 29 271 142 97 73 60 48 41 36 32 29 27 25 23 22 21 21 271 140 95 72 48
365
W.b. Insert 1026 1013 1016 3057 2176 1722 1298 1195 1082 842 801 739 805 789 3057 2081 1638 1302 1108 1025 835 848 822 757 667 684 706 3057 2032 1602 1287 1147 972 793 811 684 671 612 574 589 602 531 522 3057 1980 1638 1247 1027
W.b. Add R.s. proc 752 729 727 704 730 707 2568 2547 1714 1695 1294 1271 973 962 848 837 775 756 603 593 581 567 512 498 550 536 550 536 2568 2547 1670 1649 1234 1215 959 948 806 795 701 691 598 584 634 611 572 549 519 505 452 429 466 443 481 458 2568 2547 1636 1617 1195 1174 940 919 848 829 693 674 574 555 564 553 444 442 429 419 428 405 370 360 383 369 389 370 362 343 354 335 2568 2547 1588 1569 1281 1258 922 901 712 689 continued on next page
Table 6.11. continued 12 12 12 12 12 12 12 12 12 12 12 12 12 12 12 12 12 12 12 12 12 12 12 12 12 12 12 12 12 12 12 12 12 12 12 12 12 12 12 12 12 12 12 12 12 12 12 12
Configuration 21 7 21 8 21 9 21 11 21 12 21 13 21 14 21 16 21 17 21 18 21 19 21 20 25 1 25 2 25 4 25 5 25 7 25 8 25 10 25 11 25 13 25 14 25 16 25 17 25 19 25 20 25 21 25 22 25 23 25 24 30 1 30 3 30 4 30 6 30 8 30 10 30 12 30 13 30 15 30 17 30 19 30 20 30 22 30 24 30 26 30 27 30 28 30 29
14 13 12 10 9 8 7 5 4 3 2 1 24 23 21 20 18 17 15 14 12 11 9 8 6 5 4 3 2 1 29 27 26 24 22 20 18 17 15 13 11 10 8 6 4 3 2 1
Copied 122846 110393 105155 96155 82099 98561 85749 71020 70335 65459 66307 67838 238492 210776 159405 146865 110779 103221 101145 92489 77025 74428 55111 69745 64529 61803 53833 53352 55324 53301 232094 188523 157400 126064 101476 89780 76174 75089 74592 61366 64284 59810 51434 45465 53604 41842 42424 43007
Invoc 41 35 32 26 24 23 21 18 17 16 16 16 271 139 70 56 40 34 28 26 21 20 17 17 15 14 13 13 13 13 271 94 70 46 34 27 23 21 18 16 15 14 13 12 11 10 10 10
366
W.b. Insert 816 738 662 664 598 679 535 408 432 425 393 407 3057 2018 1298 1156 779 670 643 604 564 503 340 456 430 404 353 296 359 318 3057 1570 1187 985 674 583 528 496 549 404 419 421 334 381 358 314 316 340
W.b. Add R.s. proc 578 568 494 492 429 408 422 412 390 369 432 411 343 324 288 269 266 255 263 255 239 228 242 231 2568 2547 1605 1586 934 923 814 793 535 514 463 461 451 432 398 379 347 338 324 305 244 235 293 274 258 256 244 228 216 199 202 188 239 229 201 180 2568 2547 1182 1161 871 850 664 645 451 449 401 382 352 329 331 312 331 322 260 241 271 248 249 239 211 192 208 187 219 217 174 172 203 179 216 192 continued on next page
Table 6.11. continued 12 12 12 12 12 12 12 12 12 12 12 12 12 12 12 12 12 12 12 13 13 13 13 13 13 13 13 13 13 13 13 13 13 13 13 13 13 13 13 13 13 13 13 13 13 13 13 13
Configuration 37 1 37 3 37 6 37 8 37 10 37 12 37 14 37 17 37 19 37 21 37 23 37 25 37 27 37 30 37 31 37 33 37 34 37 35 37 36 4 4 4 5 5 5 5 6 6 6 6 6 7 7 7 7 7 7 9 9 9 9 9 9 9 9 11 11 11
1 2 3 1 2 3 4 1 2 3 4 5 1 2 3 4 5 6 1 2 3 4 5 6 7 8 1 2 3
36 34 31 29 27 25 23 20 18 16 14 12 10 7 6 4 3 2 1
Copied 228979 175709 126257 106068 87741 74354 72760 62144 64272 47538 40088 42758 44140 40245 36432 35855 34125 35532 33966
Invoc 271 92 46 35 27 22 19 16 14 13 12 11 10 9 9 8 8 8 8
W.b. Insert 3057 1634 883 766 664 539 455 395 406 304 244 300 294 228 247 254 255 234 160
3 2 1 4 3 2 1 5 4 3 2 1 6 5 4 3 2 1 8 7 6 5 4 3 2 1 10 9 8
318551 436811 436275 268624 248575 256475 235847 249846 219588 170941 163912 159923 232852 196924 167787 147305 127578 129263 217996 183220 152942 122393 108763 88329 90267 86228 209489 174983 142179
135 110 99 135 83 64 55 135 79 54 42 38 135 75 51 40 32 30 135 73 50 37 29 25 22 20 135 71 48
1913 1697 1624 1913 1414 1221 1059 1913 1296 909 850 772 1913 1219 959 913 688 661 1913 1255 1015 741 675 517 510 575 1913 1324 994
367
W.b. Add 2568 1177 636 516 434 351 280 263 244 204 180 198 179 154 145 149 158 151 108
R.s. proc 2547 1158 625 495 415 331 272 244 235 202 161 175 159 147 135 137 149 142 100
1572 1546 1353 1327 1275 1249 1572 1546 1094 1068 877 873 752 748 1572 1546 990 964 681 677 614 604 547 537 1572 1546 950 946 717 713 654 644 468 450 442 438 1572 1546 951 925 727 723 538 512 455 437 364 354 354 344 358 340 1572 1546 981 977 726 700 continued on next page
Table 6.11. continued 13 13 13 13 13 13 13 13 13 13 13 13 13 13 13 13 13 13 13 13 13 13 13 13 13 13 13 13 13 13 13 13 13 13 13 13 13 13 13 13 13 13 13 13 13 13 13 13
Configuration 11 4 11 5 11 6 11 7 11 8 11 9 11 10 13 1 13 2 13 3 13 4 13 5 13 6 13 7 13 8 13 9 13 10 13 11 13 12 15 1 15 2 15 3 15 4 15 5 15 6 15 7 15 8 15 9 15 10 15 11 15 12 15 13 15 14 19 1 19 2 19 3 19 4 19 5 19 6 19 7 19 9 19 10 19 11 19 12 19 13 19 14 19 15 19 16
7 6 5 4 3 2 1 12 11 10 9 8 7 6 5 4 3 2 1 14 13 12 11 10 9 8 7 6 5 4 3 2 1 18 17 16 15 14 13 12 10 9 8 7 6 5 4 3
Copied 107357 99178 83828 88930 66720 60503 65755 206908 164052 132097 101890 94758 81135 80898 58827 61626 52365 53726 48820 200050 160710 126118 104712 93149 69306 67537 60100 59037 58963 46299 46231 54906 43530 197371 156245 126665 105679 90124 73717 67304 53492 55737 44266 41152 48461 36592 44822 34733
Invoc 35 29 24 21 18 16 15 135 70 47 35 28 23 20 17 16 14 13 12 135 70 46 35 28 23 20 17 15 14 12 12 11 10 135 69 46 35 27 22 19 15 13 12 11 10 9 9 8
368
W.b. Insert 696 638 518 518 402 333 375 1913 1135 943 635 612 556 531 395 405 355 280 279 1913 1206 877 710 624 451 459 394 304 444 293 278 371 317 1913 1142 915 733 645 508 429 303 388 318 294 310 243 279 224
W.b. Add R.s. proc 491 473 444 434 355 329 340 330 277 273 225 214 231 221 1572 1546 877 873 679 675 459 455 415 411 376 365 336 318 275 264 261 235 221 203 204 193 191 175 1572 1546 939 913 656 646 517 513 425 421 318 314 319 293 282 271 198 181 267 241 198 181 170 166 239 223 182 165 1572 1546 870 866 654 644 538 512 443 425 349 332 290 274 203 192 228 218 199 189 190 174 178 163 162 154 181 165 146 126 continued on next page
Table 6.11. continued 13 13 14 14 14 14 14 14 14 14 14 14 14 14 14 14 14 14 14 14 14 14 14 14 14 14 14 14 14 14 14 14 14 14 14 14 14 14 14
Configuration 19 17 19 18 2 3 3 4 4 4 5 5 5 5 6 6 6 6 6 7 7 7 7 7 7 8 8 8 8 8 8 8 10 10 10 10 10 10 10 10 10
1 1 2 1 2 3 1 2 3 4 1 2 3 4 5 1 2 3 4 5 6 1 2 3 4 5 6 7 1 2 3 4 5 6 7 8 9
2 1
Copied 36483 33264
Invoc 8 8
W.b. Insert 199 205
W.b. Add 131 112
R.s. proc 121 108
1 2 1 3 2 1 4 3 2 1 5 4 3 2 1 6 5 4 3 2 1 7 6 5 4 3 2 1 9 8 7 6 5 4 3 2 1
237650 178182 216440 164455 133541 114907 155426 119311 81742 89800 158110 110423 77122 63049 62696 154939 105229 78542 74142 54897 45839 148212 101723 75142 61839 64250 47848 42750 147116 108398 73366 68584 60619 42176 34555 33207 33189
67 67 53 67 40 29 67 37 25 21 67 36 24 18 15 67 35 23 18 14 12 67 35 23 17 14 11 10 67 35 23 17 13 11 9 8 8
1162 1162 996 1162 819 618 1162 734 531 515 1162 690 514 386 379 1162 707 523 508 331 309 1162 714 510 410 388 321 299 1162 793 510 460 416 282 216 241 201
850 850 704 850 568 420 850 500 361 346 850 471 339 263 243 850 484 327 331 220 193 850 489 331 265 245 182 189 850 527 334 293 260 173 147 140 116
834 834 688 834 552 404 834 484 345 330 834 455 323 247 230 834 468 317 315 207 180 834 473 315 255 232 167 174 834 511 318 277 249 160 139 125 100
369
Table 6.12. Costs of block 2GYF for Richards 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10
Configuration 8 1 7 8 2 6 8 3 5 8 4 4 8 5 3 8 6 2 8 7 1 9 1 8 9 2 7 9 3 6 9 4 5 9 5 4 9 6 3 9 7 2 9 8 1 11 1 10 11 2 9 11 3 8 11 4 7 11 5 6 11 6 5 11 7 4 11 8 3 11 9 2 11 10 1 13 1 12 13 2 11 13 3 10 13 4 9 13 5 8 13 6 7 13 7 6 13 8 5 13 9 4 13 10 3 13 11 2 13 12 1 16 1 15 16 2 14 16 3 13 16 4 12 16 5 11 16 6 10 16 7 9 16 8 8 16 9 7 16 10 6 16 11 5
Copied 4776121 4831103 4899711 6554704 6856678 6829140 6825616 4117706 3743626 3714939 3538799 4732598 5020488 5024717 5001620 3343032 2814513 2364919 2041471 2211442 2480657 3277608 3449058 3441060 3440124 2987601 2427044 2050222 1652677 1591705 1462558 1488790 1649225 2358853 2502993 2518551 2510513 2712144 2148891 1766954 1403765 1281797 1118147 1001276 926526 967599 1013001 1204985
Invoc 17920 10482 7462 6196 6063 6045 6041 17920 9996 6885 5194 4498 4466 4468 4464 17920 9574 6452 4776 3908 3314 3055 3030 3024 3022 17920 9404 6263 4690 3758 3132 2706 2405 2246 2233 2236 2237 17920 9273 6178 4609 3685 3070 2627 2303 2056 1863 1723
W.b. insert 4566 4282 4189 4087 4065 4113 4083 4566 4178 4062 3985 3880 3891 3879 3911 4566 4212 4073 3975 3817 3883 3954 3921 3891 3924 4566 4178 4056 3969 3811 3880 3812 3802 3794 3778 3740 3755 4566 4135 4024 3972 3863 3828 3838 3809 3753 3749 3770
W.b. add R.s. proc 4485 4485 4223 4223 4143 4143 4044 4044 4027 4027 4076 4076 4047 4047 4485 4485 4133 4133 4002 4002 3967 3967 3746 3746 3803 3803 3794 3794 3824 3824 4485 4485 4157 4157 4038 4038 3936 3936 3251 3251 2936 2936 2805 2805 2762 2762 2737 2737 2763 2763 4485 4485 4130 4130 4024 4024 3945 3945 3201 3201 2821 2821 2420 2420 2177 2177 2040 2040 2019 2019 1988 1988 2001 2001 4485 4485 4094 4094 3982 3982 3938 3938 3210 3210 2716 2716 2393 2393 2102 2102 1846 1846 1704 1704 1608 1608 continued on next page
Table 6.12. continued 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10
Configuration 16 12 4 16 13 3 16 14 2 16 15 1 20 1 19 20 2 18 20 3 17 20 4 16 20 5 15 20 7 13 20 8 12 20 9 11 20 10 10 20 11 9 20 13 7 20 14 6 20 15 5 20 16 4 20 17 3 20 18 2 20 19 1 24 1 23 24 2 22 24 4 20 24 5 19 24 6 18 24 8 16 24 9 15 24 11 13 24 12 12 24 14 10 24 15 9 24 16 8 24 18 6 24 19 5 24 20 4 24 21 3 24 22 2 24 23 1 29 1 28 29 3 26 29 4 25 29 6 23 29 8 21 29 10 19 29 11 18 29 13 16 29 15 14
Copied 1718827 1831175 1834156 1831967 2503213 1968469 1581980 1288065 1132162 842036 728287 714790 661131 631920 646652 712836 884611 1264113 1348834 1345162 1344093 2375629 1843985 1220944 1049384 887402 637765 627700 532549 467101 477360 463291 455311 560160 694111 994477 1067309 1063662 1065258 2281821 1431668 1143851 838891 556491 527573 481239 416555 381543
Invoc 1643 1631 1632 1631 17920 9185 6119 4574 3653 2601 2272 2023 1821 1656 1408 1316 1244 1204 1197 1195 1195 17920 9134 4550 3636 3024 2262 2012 1645 1507 1296 1209 1135 1018 973 948 945 944 944 17920 6055 4532 3014 2254 1804 1640 1387 1202
W.b. Insert 3770 3760 3748 3787 4566 4109 4006 3953 3847 3804 3798 3757 3733 3754 3723 3727 3722 3742 3736 3734 3732 4566 4126 3972 3848 3830 3828 3764 3766 3749 3705 3702 3737 3706 3724 3732 3701 3720 3705 4566 3964 3949 3818 3856 3751 3764 3731 3715
W.b. Add R.s. proc 1546 1546 1526 1526 1517 1517 1555 1555 4485 4485 4064 4064 3975 3975 3922 3922 3180 3180 2335 2335 2065 2065 1831 1831 1658 1658 1541 1541 1319 1319 1248 1248 1188 1188 1168 1168 1157 1157 1160 1160 1157 1157 4485 4485 4081 4081 3939 3939 3171 3171 2695 2695 2082 2082 1832 1832 1544 1544 1419 1419 1216 1216 1141 1141 1116 1116 988 988 971 971 958 958 934 934 949 949 937 937 4485 4485 3935 3935 3917 3917 2680 2680 2090 2090 1658 1658 1538 1538 1313 1313 1146 1146 continued on next page
Table 6.12. continued 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 10 11 11 11 11 11 11 11 11 11 11 11 11 11 11 11 11 11 11 11 11
Configuration 29 17 12 29 18 11 29 20 9 29 21 8 29 23 6 29 25 4 29 26 3 29 27 2 29 28 1 35 1 34 35 3 32 35 5 30 35 7 28 35 9 26 35 12 23 35 14 21 35 16 19 35 18 17 35 20 15 35 22 13 35 24 11 35 26 9 35 28 7 35 30 5 35 31 4 35 32 3 35 33 2 35 34 1 4 4 4 5 5 5 5 6 6 6 6 6 7 7 7 7 7 7 8 8
1 2 3 1 2 3 4 1 2 3 4 5 1 2 3 4 5 6 1 2
3 2 1 4 3 2 1 5 4 3 2 1 6 5 4 3 2 1 7 6
Copied 347218 353575 325805 357251 432847 791003 839343 841519 838151 2222445 1383958 962469 692627 548575 423565 368682 323416 304221 283258 269929 249701 269243 288297 433640 631348 675968 675537 673119
Invoc 1061 1003 903 862 792 748 746 746 746 17920 6039 3614 2575 2001 1500 1285 1125 1000 900 818 750 694 646 608 599 598 598 597
W.b. Insert 3699 3730 3714 3701 3718 3685 3687 3680 3705 4566 3955 3830 3807 3748 3730 3719 3726 3723 3700 3703 3724 3686 3709 3675 3668 3674 3679 3670
3424861 8696827 8792578 2398681 2412548 4642961 4781096 2040481 1692238 1675450 2972053 3117662 1847645 1436990 1200888 1149594 2303501 2418731 1734422 1365213
8888 8205 7802 8888 5184 4301 4229 8888 4841 3304 2863 2794 8888 4716 3149 2397 2176 2152 8888 4661
4316 4262 4224 4316 4021 3987 3996 4316 4008 3926 3847 3864 4316 4037 3889 3798 3816 3817 4316 3936
W.b. Add 1023 999 914 871 825 768 768 760 779 4485 3930 3149 2324 1814 1401 1222 1094 993 895 837 795 727 699 648 634 637 643 636
R.s. proc 1023 999 914 871 825 768 768 760 779 4485 3930 3149 2324 1814 1401 1222 1094 993 895 837 795 727 699 646 634 637 643 636
4170 4170 4132 4132 4108 4108 4170 4170 3947 3947 3774 3774 3711 3711 4170 4170 3928 3928 2951 2951 2582 2582 2550 2550 4170 4170 3958 3958 2823 2823 2156 2156 1998 1998 1976 1976 4170 4170 3898 3898 continued on next page
Table 6.12. continued 11 11 11 11 11 11 11 11 11 11 11 11 11 11 11 11 11 11 11 11 11 11 11 11 11 11 11 11 11 11 11 11 11 11 11 11 11 11 11 11 11 11 11 11 11 11 11 11
Configuration 8 3 5 8 4 4 8 5 3 8 6 2 8 7 1 10 1 9 10 2 8 10 3 7 10 4 6 10 5 5 10 6 4 10 7 3 10 8 2 10 9 1 12 1 11 12 2 10 12 3 9 12 4 8 12 5 7 12 6 6 12 7 5 12 8 4 12 9 3 12 10 2 12 11 1 15 1 14 15 2 13 15 3 12 15 4 11 15 5 10 15 6 9 15 7 8 15 8 7 15 9 6 15 10 5 15 11 4 15 12 3 15 13 2 15 14 1 18 1 17 18 2 16 18 3 15 18 4 14 18 5 13 18 6 12 18 7 11 18 8 10 18 9 9
Copied 1039329 874352 966929 1795201 1905791 1596827 1198564 892715 707717 613348 588878 668509 1284229 1376638 1518730 1195129 825985 642937 527816 471964 435137 447431 518262 1004547 1078490 1450839 1117869 775520 580270 482586 414996 362876 333435 309888 307668 327314 385295 757413 810667 1408932 1060078 746981 566895 457060 388143 340638 301082 276301
Invoc 3093 2324 1896 1739 1711 8888 4588 3044 2278 1824 1527 1328 1249 1237 8888 4560 3021 2260 1806 1506 1293 1137 1022 975 968 8888 4528 3004 2247 1796 1496 1282 1122 998 900 821 760 734 729 8888 4509 2995 2241 1791 1491 1278 1118 994
W.b. Insert 3912 3822 3802 3776 3799 4316 4021 3898 3842 3792 3762 3769 3739 3749 4316 3985 3890 3861 3882 3770 3764 3753 3750 3699 3725 4316 4003 3881 3868 3776 3751 3752 3741 3726 3736 3716 3722 3701 3718 4316 4003 3928 3840 3790 3744 3744 3757 3731
W.b. Add R.s. proc 2795 2795 2131 2131 1760 1760 1624 1624 1622 1622 4170 4170 3934 3934 2748 2748 2105 2105 1700 1700 1445 1445 1287 1287 1206 1206 1207 1207 4170 4170 3925 3925 2724 2724 2095 2095 1753 1753 1432 1432 1251 1251 1123 1123 1030 1030 957 957 974 974 4170 4170 3911 3911 2708 2708 2092 2092 1667 1667 1401 1401 1240 1240 1104 1104 990 990 924 924 839 839 807 807 766 766 771 771 4170 4170 3924 3924 2733 2733 2071 2071 1670 1670 1397 1397 1221 1221 1111 1111 994 994 continued on next page
Table 6.12. continued

      Configuration      Copied  Invoc  W.b. insert  W.b. add  R.s. proc
  11  18 10  8           255256    894         3736       914        914
  11  18 11  7           242580    814         3728       844        844
  11  18 12  6           236611    746         3714       789        789
  11  18 13  5           233405    690         3709       737        737
  11  18 14  4           249357    643         3710       696        696
  11  18 15  3           301618    604         3704       656        656
  11  18 16  2           613056    588         3681       641        641
  11  18 17  1           654773    585         3719       657        657
  12   2  1  1          4392162   4404         4360      4283       4283
  12   3  1  2          1617622   4404         4360      4283       4283
  12   3  2  1          4149809   3976         4318      3885       3885
  12   4  1  3          1304469   4404         4360      4283       4283
  12   4  2  2           908356   2412         4076      2417       2417
  12   4  3  1          2120789   2072         3998      2088       2088
  12   5  1  4          1190238   4404         4360      4283       4283
  12   5  2  3           702541   2313         4030      2305       2305
  12   5  3  2           586104   1558         3913      1609       1609
  12   5  4  1          1417384   1403         3886      1463       1463
  12   6  1  5          1128876   4404         4360      4283       4283
  12   6  2  4           629174   2277         4036      2290       2290
  12   6  3  3           460320   1516         3891      1558       1558
  12   6  4  2           433706   1151         3846      1227       1227
  12   6  5  1          1064167   1062         3829      1140       1140
  12   8  1  7          1065570   4404         4360      4283       4283
  12   8  2  6           569315   2247         4019      2250       2250
  12   8  3  5           391781   1493         3896      1546       1546
  12   8  4  4           312968   1119         3828      1185       1185
  12   8  5  3           274698    898         3804       989        989
  12   8  6  2           284271    755         3782       860        860
  12   8  7  1           718010    716         3750       802        802
  12   9  1  8          1047513   4404         4360      4283       4283
  12   9  2  7           553727   2240         4031      2254       2254
  12   9  3  6           380387   1488         3889      1532       1532
  12   9  4  5           294040   1115         3848      1199       1199
  12   9  5  4           249750    892         3799       980        980
  12   9  6  3           228398    745         3774       844        844
  12   9  7  2           246342    645         3752       741        741
  12   9  8  1           614526    616         3746       718        718
  13   2  1  1           862635   2156         3824      2006       2006
  13   3  1  2           638274   2156         3824      2006       2006
  13   3  2  1           480752   1188         3752      1169       1169
  13   4  1  3           586894   2156         3824      2006       2006
  13   4  2  2           334341   1117         3735      1101       1101
  13   4  3  1           308568    765         3692       789        789
  13   5  1  4           563813   2156         3824      2006       2006
  13   5  2  3           303795   1102         3731      1083       1083
  13   5  3  2           222709    736         3711       780        780
  13   5  4  1           229407    565         3698       634        634
Table 6.13. Costs of block 2GYF for JavaBYTEmark

      Configuration      Copied  Invoc  W.b. insert  W.b. add  R.s. proc
  18   2  1  1           150427     20          278       245        155
  18   3  1  2           133592     20          278       245        155
  18   3  2  1            87789     10          177       127         30
  18   4  1  3           133592     20          278       245        155
  18   4  2  2            71718     10          158       120         30
  18   4  3  1            51493      6           54        19         17
  18   5  1  4           133592     20          278       245        155
  18   5  2  3            71718     10          158       120         30
  18   5  3  2            51493      6           54        19         17
  18   5  4  1            46855      5          137       106         16
  18   6  1  5           133592     20          278       245        155
  18   6  2  4            71718     10          158       120         30
  18   6  3  3            51493      6           54        19         17
  18   6  4  2            46855      5          137       106         16
  18   6  5  1            37842      4          135       102         12
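The count columns in these tables (words copied, write-barrier insertions and additions, remembered-set entries processed) can be combined into a single scalar cost estimate by assigning a per-event weight to each column. The sketch below is illustrative only: the weights are made-up placeholders, not the calibrated per-instruction processor costs used in the dissertation's own analysis.

```python
# Illustrative sketch: collapse one row of a cost table into a single weighted
# cost.  The weights (cost per word copied, per write-barrier event, per
# remembered-set entry processed) are hypothetical placeholders, NOT the
# calibrated costs assumed for a typical modern processor in the dissertation.

def weighted_cost(copied, wb_insert, wb_add, rs_proc,
                  w_copy=1.0, w_insert=5.0, w_add=5.0, w_rs=5.0):
    """Return one scalar cost for a single collector configuration."""
    return (w_copy * copied + w_insert * wb_insert
            + w_add * wb_add + w_rs * rs_proc)

# Example: the first row of Table 6.13 (configuration 18 2 1 1, JavaBYTEmark).
row = dict(copied=150427, wb_insert=278, wb_add=245, rs_proc=155)
print(weighted_cost(**row))  # 153817.0
```

Varying the ratio of the pointer-maintenance weights to the copying weight is what produces the break-even curves plotted in Figures 6.1 and 6.2.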
Table 6.14. Costs of block 2GYF for Bloat-Bloat 16 16 16 16 16 16 16 16 16 16 16 16 16 16 16 16 16 16 16 16 16 16 16 16 16 16 16 16 16 16 16 16 16 16 16 16 16 16 16 16 16 16 16 16 16 16 16 16
Configuration 16 1 15 16 2 14 16 3 13 16 4 12 16 5 11 16 6 10 16 7 9 16 8 8 16 9 7 16 10 6 16 11 5 16 12 4 16 13 3 16 14 2 19 1 18 19 2 17 19 3 16 19 4 15 19 5 14 19 6 13 19 7 12 19 9 10 19 10 9 19 11 8 19 12 7 19 13 6 19 14 5 19 15 4 19 16 3 19 17 2 19 18 1 23 1 22 23 2 21 23 3 20 23 5 18 23 6 17 23 8 15 23 9 14 23 10 13 23 12 11 23 13 10 23 14 9 23 16 7 23 17 6 23 18 5 23 20 3 23 21 2 28 1 27
Copied 5586489 4923038 4715154 4769205 4589356 4853174 5420551 7350601 10178457 12715009 15440731 17860932 19771607 19725327 5210358 4561459 4206513 4035345 3846856 3663016 3658250 4123333 4312733 5610469 7916579 9843328 11835670 13750934 14787025 14995648 15059949 4755921 4277460 3541449 3261615 3264059 3094966 3164740 3386693 3176402 3415013 3616462 5891127 7105112 9040941 11275372 11226326 4544476
Invoc 2283 1152 768 577 461 385 331 293 266 247 235 228 227 226 2283 1149 766 574 460 383 328 257 231 211 197 186 179 175 174 173 174 2283 1147 764 459 382 286 255 230 191 177 165 146 140 136 133 132 2283
W.b. insert 138221 99475 81049 73694 61502 56370 47851 51422 52494 41721 46973 41700 44477 44760 138221 102098 83378 77174 60698 59087 43972 44681 43514 37932 38141 42388 38067 34301 31198 29905 30951 138221 102181 77768 64261 52547 44520 40299 41373 40740 35112 34185 26892 29045 30348 28565 28489 138221
W.b. add R.s. proc 57562 57551 40320 40309 34907 34886 30406 30395 27895 27830 24494 24444 23193 23109 22082 22007 22512 22462 20085 20035 19895 19884 19934 19902 19375 19343 20898 20877 57562 57551 41552 41531 36196 36175 31419 31369 27618 27584 25511 25446 22859 22784 21227 21216 19989 19914 18615 18511 18474 18463 18795 18763 18845 18780 17815 17740 17706 17641 16968 16909 17398 17387 57562 57551 41750 41729 34328 34294 29390 29379 24502 24437 22355 22261 21421 21346 19424 19413 17893 17834 16928 16844 16398 16377 15109 15009 15254 15179 15064 14960 14574 14563 15143 15049 57562 57551 continued on next page
Table 6.14. continued 16 16 16 16 16 16 16 16 16 16 16 16 16 16 16 16 16 16 16 16 16 16 16 16 16 16 16 16 16 16 16 16 16 16 16 16 16 16 16 16 16 16 16 16 16 16 16 16
Configuration 28 3 25 28 4 24 28 6 22 28 8 20 28 9 19 28 11 17 28 13 15 28 14 14 28 16 12 28 18 10 28 19 9 28 21 7 28 22 6 28 24 4 28 25 3 28 26 2 34 1 33 34 3 31 34 5 29 34 7 27 34 9 25 34 11 23 34 13 21 34 15 19 34 17 17 34 19 15 34 21 13 34 23 11 34 25 9 34 27 7 34 29 5 34 30 4 34 31 3 34 32 2 34 33 1 41 1 40 41 4 37 41 6 35 41 9 32 41 11 30 41 14 27 41 16 25 41 18 23 41 21 20 41 23 18 41 26 15 41 28 13 41 30 11
Copied 3589931 3366903 2862583 2647460 2709352 2634141 2498540 2585478 3068428 2701472 3084709 4676406 5947201 7868865 8826807 8747838 4291518 3439548 2769184 2528534 2371660 2539118 2296614 2181897 2133641 2216944 2333964 2457187 2671649 3599351 5338996 6291831 6777718 6776128 6794862 4226891 2838193 2571537 2361250 2165799 1952919 1833028 2042049 2081814 2061719 2067274 1772593 2065876
Invoc 763 573 381 286 254 208 176 164 144 127 121 111 107 103 103 102 2283 763 457 326 254 208 176 152 134 120 109 100 92 86 82 81 80 80 80 2283 571 381 254 207 163 142 127 109 99 88 81 76
W.b. Insert 81160 68538 58255 46183 41457 44445 33304 27113 32452 25602 28298 27621 25784 24832 24805 21833 138221 78657 60106 49531 40173 43070 33782 34500 29449 25067 25622 27459 25743 19897 21322 28603 20715 21470 20580 138221 73868 53400 43615 40903 29421 26706 27776 28024 30216 22882 19416 20434
W.b. Add R.s. proc 35987 35953 29633 29622 25603 25528 20935 20870 20849 20774 18340 18256 16469 16375 15943 15893 15417 15383 14853 14737 15007 14913 13071 12996 12594 12505 13709 13644 13321 13300 12826 12769 57562 57551 33458 33437 28170 28105 24670 24586 21678 21644 18229 18179 16538 16504 16923 16885 15420 15382 13516 13373 13247 13143 12480 12430 12927 12843 11355 11290 10597 10503 10834 10730 11475 11406 11452 11373 11934 11854 57562 57551 29873 29823 24261 24196 20838 20788 18726 18667 17415 17331 15331 15215 14072 13997 13783 13772 12104 12015 11440 11346 11441 11362 11096 11033 continued on next page
Table 6.14. continued 16 16 16 16 16 16 16 16 16 16 16 16 16 16 16 16 16 16 16 16 16 16 16 16 16 16 16 16 16 16 16 16 16 16 16 16 16 16 16 16 16 16 16 16 16 16 16 16
Configuration 41 33 8 41 35 6 41 36 5 41 38 3 41 39 2 41 40 1 50 1 49 50 4 46 50 7 43 50 10 40 50 13 37 50 16 34 50 19 31 50 22 28 50 25 25 50 29 21 50 32 18 50 34 16 50 37 13 50 40 10 50 43 7 50 44 6 50 46 4 50 47 3 50 48 2 61 2 59 61 5 56 61 9 52 61 13 48 61 16 45 61 20 41 61 24 37 61 27 34 61 31 30 61 35 26 61 38 23 61 41 20 61 45 16 61 49 12 61 52 9 61 54 7 61 56 5 61 57 4 61 58 3 61 59 2 74 2 72 74 7 67 74 11 63
Copied 2789307 3840382 4395318 5306929 5365141 5323811 4127288 2840778 2473833 2223875 2066038 1839606 1707548 1787221 1673430 1830777 1641796 1844265 1785598 1876355 2610637 2751430 3975994 4150293 4183439 3406263 2655779 2171871 2036762 1820670 1783480 1689622 1572276 1566295 1500232 1515549 1427643 1527134 1605443 1858955 2110586 2804623 3220481 3390473 3383290 3336876 2401535 2014431
Invoc 70 66 65 64 64 64 2283 571 326 228 175 142 120 104 91 78 71 67 62 57 53 52 51 51 51 1143 457 253 175 142 114 95 84 73 65 60 55 50 46 44 42 41 41 41 41 1142 327 207
W.b. Insert 24420 24699 20141 16194 14874 18263 138221 71090 48454 46367 35371 26070 21918 25131 21037 20259 18146 20112 17858 14914 16560 13988 18215 12772 15369 103154 66112 38737 32764 27698 27345 21805 19899 19231 18748 15419 16147 19135 15542 16153 16600 12283 12032 11662 11423 103805 50029 38607
W.b. Add R.s. proc 11967 11956 9914 9759 10664 10567 9373 9286 8953 8866 9603 9540 57562 57551 31224 31190 23025 22960 19463 19379 16772 16734 14500 14411 13348 13264 12289 12255 12034 11945 10602 10495 10997 10826 10723 10607 8802 8752 8452 8395 9338 9114 8081 7886 8299 8155 7794 7651 8165 8022 43019 43008 27410 27399 20819 20715 16590 16531 15242 15179 14084 14009 11651 11592 10951 10780 10933 10854 9446 9346 9194 9090 9476 9296 8898 8733 8770 8626 7524 7486 8565 8428 7311 7195 7057 6950 6885 6826 6767 6729 42291 42270 23593 23582 17949 17855 continued on next page
Table 6.14. continued 16 16 16 16 16 16 16 16 16 16 16 16 16 16 16 16 18 18 18 18 18 18 18 18 18 18 18 18 18 18 18 18 18 18 18 18 18 18 18 18 18 18 18 18 18 18 18 18
Configuration 74 16 58 74 20 54 74 24 50 74 29 45 74 33 41 74 38 36 74 42 32 74 47 27 74 50 24 74 55 19 74 59 15 74 63 11 74 66 8 74 68 6 74 70 4 74 71 3 4 4 4 5 5 5 5 6 6 6 6 6 7 7 7 7 7 7 9 9 9 9 9 9 9 9 11 11 11 11 11 11
1 2 3 1 2 3 4 1 2 3 4 5 1 2 3 4 5 6 1 2 3 4 5 6 7 8 1 2 3 4 5 6
3 2 1 4 3 2 1 5 4 3 2 1 6 5 4 3 2 1 8 7 6 5 4 3 2 1 10 9 8 7 6 5
Copied 1714904 1647755 1501824 1487522 1559994 1328676 1294318 1279195 1228524 1280122 1229569 1370257 1574113 2098259 2665735 2739037
Invoc 142 114 95 78 69 60 54 48 45 41 38 36 34 33 33 33
W.b. Insert 23467 27070 22471 21046 20952 16421 14559 12715 14028 12802 11813 11484 10915 16967 13181 10517
4622093 8302477 21830991 3707512 3632159 5881079 13954525 3307593 3198184 3239610 4548957 10843589 3294683 2805684 2585395 2687127 3683459 8402643 3002093 2478270 2252843 2275782 2095978 2054571 3052093 6015511 2925048 2391805 2031734 2051116 1988813 1848633
570 314 275 570 293 204 182 570 290 195 150 139 570 289 193 145 119 110 570 287 191 144 115 96 84 79 570 286 191 143 114 95
72966 54421 52980 72966 51088 38526 38983 72966 52796 44192 31893 40514 72966 50132 46445 31029 28146 31679 72966 47775 39315 36707 31195 21943 28081 21583 72966 47843 36649 35551 27543 27382
W.b. Add 14512 13811 11632 11350 10081 10059 8703 8305 7571 7248 6698 6504 6399 6983 6309 5919
R.s. proc 14423 13736 11567 11243 10049 9975 8640 8110 7327 7078 6566 6397 6232 6720 6165 5802
29790 29763 22866 22788 21446 21419 29790 29763 21941 21863 18657 18630 17318 17240 29790 29763 21477 21399 18102 18075 16515 16472 15456 15378 29790 29763 22121 22094 18787 18760 16153 16082 14697 14670 12856 12692 29790 29763 22259 22181 17125 17082 15021 14994 11948 11905 12993 12922 11311 11240 11164 11071 29790 29763 21175 21097 18128 18101 15875 15804 13619 13455 12787 12716 continued on next page
Table 6.14. continued 18 18 18 18 18 18 18 18 18 18 18 18 18 18 18 18 18 18 18 18 18 18 18 18 18 18 18 18 18 18 18 18 18 18 18 18 18 18 18 18 18 18 18 18 18 18 18 18
Configuration 11 7 4 11 8 3 11 9 2 11 10 1 13 1 12 13 2 11 13 3 10 13 4 9 13 5 8 13 6 7 13 7 6 13 8 5 13 9 4 13 10 3 13 11 2 13 12 1 16 1 15 16 2 14 16 3 13 16 4 12 16 5 11 16 6 10 16 7 9 16 8 8 16 9 7 16 10 6 16 11 5 16 12 4 16 13 3 16 14 2 16 15 1 19 1 18 19 2 17 19 3 16 19 4 15 19 5 14 19 6 13 19 7 12 19 9 10 19 10 9 19 11 8 19 12 7 19 13 6 19 14 5 19 15 4 19 16 3 19 17 2 19 18 1
Copied 1910252 1797397 2288098 4714305 2918572 2366567 1951855 1920328 1841324 1762570 1527652 1708842 1525684 1760958 1884088 3738735 2777771 2303418 1970752 1707050 1790873 1740203 1439625 1679801 1372134 1387484 1450956 1559009 1469348 1827829 3172849 2723131 2085569 1944007 1707894 1675897 1523025 1464939 1468139 1282650 1434121 1286892 1335495 1180920 1468035 1437407 1579314 2426435
Invoc 82 72 65 62 570 286 190 143 114 95 81 71 63 57 52 50 570 286 190 143 114 95 81 71 63 57 52 47 44 41 40 570 285 190 142 114 95 81 63 57 52 47 44 40 38 36 34 33
W.b. Insert 19197 22197 18059 16033 72966 49328 34928 25871 26412 25415 18489 21830 16450 22943 13341 15586 72966 51733 40744 24375 28575 24507 17409 17193 15318 14283 15358 18887 18440 14593 19985 72966 45139 37281 27078 26343 22063 21219 19207 15013 18730 12241 14036 12202 15174 14765 12692 9682
W.b. Add R.s. proc 11311 11268 10198 10155 9680 9653 9512 9348 29790 29763 21484 21406 17221 17178 15004 14926 14005 13934 11947 11876 11525 11432 11883 11790 9380 9201 8903 8724 7958 7776 8604 8422 29790 29763 21781 21754 17430 17387 15077 15050 14669 14626 10773 10730 10312 10241 10285 10214 9209 9138 8355 8284 8857 8814 8843 8639 6872 6708 7162 6983 7382 7218 29790 29763 21990 21912 17249 17171 14262 14191 13775 13697 11036 10958 10881 10717 9892 9821 8238 8195 8799 8772 7601 7422 8037 7959 7617 7458 7267 7196 7122 7095 6655 6612 5760 5717 continued on next page
Table 6.14. continued

      Configuration      Copied  Invoc  W.b. insert  W.b. add  R.s. proc
  20   2  1  1          2149218    142        30260     14794      14767
  20   3  1  2          1841768    142        30260     14794      14767
  20   3  2  1          1770564     74        17152     10452      10425
  20   4  1  3          1732852    142        30260     14794      14767
  20   4  2  2          1534535     72        20434     10207      10180
  20   4  3  1          1600108     49        16149      8357       8330
  20   5  1  4          1744298    142        30260     14794      14767
  20   5  2  3          1433614     71        17512      9962       9864
  20   5  3  2          1255636     48        13795      7832       7805
  20   5  4  1          1208917     36        12336      6189       6025
Table 6.15. Costs of block 2GYF for Toba 16 16 16 16 16 16 16 16 16 16 16 16 16 16 16 16 16 16 16 16 16 16 16 16 16 16 16 16 16 16 16 16 16 16 16 16 16 16 16 16 16 16 16 16 16 16 16 16
Configuration 22 1 22 2 22 3 22 5 22 6 22 7 22 9 22 10 22 11 22 13 22 14 22 15 22 16 22 18 22 19 22 20 27 1 27 2 27 4 27 6 27 7 27 9 27 11 27 12 27 14 27 15 27 17 27 18 27 20 27 22 27 23 27 24 33 1 33 3 33 5 33 7 33 9 33 11 33 13 33 15 33 17 33 19 33 21 33 22 33 24 33 26 33 28 33 29
21 20 19 17 16 15 13 12 11 9 8 7 6 4 3 2 26 25 23 21 20 18 16 15 13 12 10 9 7 5 4 3 32 30 28 26 24 22 20 18 16 14 12 11 9 7 5 4
Copied 12591818 11116641 10960736 11093420 12005563 11443916 12375984 13704951 15039560 17733236 19109783 20250622 20712410 22789072 23199960 23689464 11340734 9738692 8670448 8282062 7955917 8175976 8601306 8354492 9035349 9575017 10674668 11919193 13105182 14364131 15154400 16033469 10564261 9028210 8156104 7198084 7049347 6959029 6394643 6378196 7149388 7131125 8107715 7751164 8671491 9642086 10675585 11675117
Invoc 2403 1218 813 489 410 353 278 254 233 207 197 191 185 178 178 180 2403 1213 608 405 347 270 222 204 177 165 149 144 134 129 127 128 2403 807 484 346 269 221 187 162 143 129 117 113 105 100 97 96
W.b. insert 84765 52894 38693 22824 23686 18766 19643 13923 13125 11234 8936 13478 7561 9251 10547 12352 84765 52521 31075 16000 21828 12257 12580 8229 7155 11475 9900 12640 10718 8743 9660 11758 84765 40955 22948 18376 17206 14161 10545 11904 8680 8304 9343 5162 9494 9077 7554 6032
W.b. add R.s. proc 82857 82856 51031 51029 36930 36924 20983 20977 21864 21855 16901 16894 17699 17698 12045 12032 11207 11197 9281 9274 7052 7048 11508 11504 5656 5653 7320 7317 8637 8624 10444 10433 82857 82856 50756 50754 29314 29313 14204 14202 19985 19979 10391 10380 10687 10683 6304 6294 5240 5231 9516 9509 7944 7938 10689 10688 8767 8756 6761 6759 7705 7694 9807 9805 82857 82856 39165 39163 21088 21081 16571 16564 15315 15306 12317 12311 8642 8640 9847 9838 6715 6707 6348 6344 7381 7365 3199 3189 7524 7512 7112 7105 5590 5586 4030 4026 continued on next page
Table 6.15. continued 16 16 16 16 16 16 16 16 16 16 16 16 16 16 16 16 16 16 16 16 16 16 16 16 16 16 16 16 16 16 16 16 16 16 16 16 16 16 16 16 16 16 16 16 16 16 16 16
Configuration 33 30 33 31 40 1 40 4 40 6 40 8 40 11 40 13 40 16 40 18 40 20 40 23 40 25 40 27 40 30 40 32 40 34 40 36 40 37 40 38 48 1 48 4 48 7 48 10 48 13 48 16 48 19 48 22 48 24 48 27 48 30 48 33 48 36 48 38 48 41 48 43 48 44 48 45 48 46 59 2 59 5 59 9 59 12 59 16 59 19 59 23 59 27 59 30
3 2 39 36 34 32 29 27 24 22 20 17 15 13 10 8 6 4 3 2 47 44 41 38 35 32 29 26 24 21 18 15 12 10 7 5 4 3 2 57 54 50 47 43 40 36 32 29
Copied 11878209 11708491 10432554 7559615 6984337 6949810 6959041 6501847 5935176 5919640 5765725 5685343 5799969 6444446 6493485 7236589 7213034 8458649 8586463 8609610 9781248 7494067 6537490 6175641 5750707 5570895 5655561 5414857 5105442 5149220 4733472 5189342 5387710 5227761 6057491 6225060 6661744 6712723 6836726 8493361 7252753 6069469 5965181 5497062 5140408 4657626 4552709 4490860
Invoc 96 96 2403 604 403 302 221 186 151 135 121 106 98 91 83 79 76 74 74 74 2403 603 344 241 185 151 127 110 101 90 81 74 68 65 61 60 59 59 59 1206 482 268 201 151 127 104 89 80
W.b. Insert 5096 5699 84765 30687 22989 17666 15441 13414 6886 15694 11005 6305 7608 8725 7241 5177 6507 5860 5044 4895 84765 27394 14724 16706 13007 9764 8151 7225 7644 6613 9013 4071 6844 5964 4382 4168 5098 4663 3909 51896 19739 17780 12681 11574 8123 9120 7107 10620
W.b. Add R.s. proc 3140 3136 3737 3660 82857 82856 28889 28887 21241 21240 15849 15840 13595 13589 11466 11462 4990 4986 13748 13747 9037 9009 4327 4318 5646 5636 6721 6717 5241 5228 3231 3154 4546 4540 3884 3880 3052 2975 2910 2833 82857 82856 25681 25675 12877 12867 14780 14767 11076 10999 7842 7835 6211 6207 5205 5194 5660 5657 4631 4627 7033 7024 2092 2082 4847 4834 3974 3961 2402 2394 2173 2166 3095 3076 2674 2659 1921 1902 50043 50042 18013 18006 15895 15889 10779 10772 9686 9680 6173 6167 7175 7166 5144 5136 8629 8620 continued on next page
Table 6.15. continued 16 16 16 16 16 16 16 16 16 16 16 16 16 16 16 16 16 16 16 16 16 16 16 16 16 16 16 16 16 16 16 16 16 16 16 16 16 16 16 16 16 16 16 16 16 16 16 16
Configuration 59 34 59 37 59 40 59 44 59 47 59 50 59 52 59 54 59 55 59 56 59 57 71 2 71 6 71 11 71 15 71 19 71 23 71 28 71 32 71 36 71 40 71 45 71 48 71 53 71 57 71 60 71 63 71 65 71 67 71 68 87 3 87 8 87 13 87 18 87 23 87 29 87 34 87 39 87 44 87 50 87 55 87 59 87 64 87 70 87 74 87 77 87 80 87 82
25 22 19 15 12 9 7 5 4 3 2 69 65 60 56 52 48 43 39 35 31 26 23 18 14 11 8 6 4 3 84 79 74 69 64 58 53 48 43 37 32 28 23 17 13 10 7 5
Copied 4261642 4384321 4555863 4223257 4443330 4449340 4644286 4820466 5183566 5565407 5361499 8321666 6581044 5715827 5030439 4636679 4715487 4518681 4303274 3961441 3866383 4030186 3527695 3677191 3484212 3753255 3728420 3804178 4002838 4078398 7601796 5905616 5164095 4861538 4490055 4311287 4048794 3896050 3647852 3586602 3354195 3195691 3099474 2824376 3098261 3199763 2653134 3259716
Invoc 71 65 60 55 52 49 48 47 46 47 46 1205 401 219 160 127 104 86 75 67 60 54 50 46 42 40 39 38 37 37 802 300 185 134 104 83 71 61 54 48 44 41 38 34 32 31 30 30
W.b. Insert 4628 6613 5078 4810 3943 4127 5661 7077 4223 3330 3953 52031 22261 12724 8188 5599 12675 13611 9619 7770 8603 6916 3762 12256 3855 5338 3236 3608 5144 5542 32741 21196 14423 6553 8656 7407 7875 6980 4649 7705 5254 4562 6441 5383 6192 3879 3252 4265
W.b. Add R.s. proc 2640 2625 4625 4606 3110 3102 2834 2812 1936 1932 2161 2143 3647 3639 5098 5087 2239 2231 1364 1363 1968 1959 50199 50198 20442 20436 10822 10816 6249 6246 3698 3696 10737 10667 11654 11644 7641 7634 5790 5782 6606 6599 4898 4891 1775 1743 10278 10201 1884 1829 3377 3344 1259 1245 1659 1592 3165 3098 3539 3472 31072 31066 19360 19349 12515 12506 4580 4579 6727 6657 5451 5441 5885 5883 4997 4983 2673 2646 5719 5704 3243 3236 2580 2578 4460 4459 3394 3361 4226 4168 1904 1844 1289 1101 2300 2273 continued on next page
Table 6.15. continued 16 16 16 16 16 16 16 16 16 16 16 16 16 16 16 16 16 16 16 16 16 16 18 18 18 18 18 18 18 18 18 18 18 18 18 18 18 18 18 18 18 18 18 18 18 18 18 18
Configuration 87 83 87 84 106 3 106 10 106 16 106 22 106 29 106 35 106 41 106 48 106 54 106 60 106 67 106 72 106 78 106 85 106 90 106 94 106 98 106 100 106 101 106 102 6 6 6 6 6 7 7 7 7 7 7 9 9 9 9 9 9 9 9 10 10 10 10 10 10 10
1 2 3 4 5 1 2 3 4 5 6 1 2 3 4 5 6 7 8 1 2 3 4 5 6 7
4 3 103 96 90 84 77 71 65 58 52 46 39 34 28 21 16 12 8 6 5 4
Copied 3433080 3462159 7486683 5474652 4681580 4522876 4108770 3988817 3840313 3518283 3239133 3064149 3063955 2427058 2911797 2766662 2555843 2662781 2242414 2536890 2607726 2990573
Invoc 30 30 802 240 150 109 83 68 59 50 44 40 36 33 31 28 27 26 24 24 24 24
W.b. Insert 8004 7215 36104 12390 10516 4773 5859 5116 7363 4034 4360 5147 5219 3894 3394 3532 6111 3060 2941 4437 5781 3689
5 4 3 2 1 6 5 4 3 2 1 8 7 6 5 4 3 2 1 9 8 7 6 5 4 3
8785525 11367580 13779210 18418008 22253449 8362063 8014919 8441619 10108059 13141873 16152005 8003237 7165137 6765654 6496695 6974175 6966905 8488453 10660231 7791229 7023681 6711466 6009993 5809424 5972714 6487686
595 318 222 183 174 595 309 208 161 138 132 595 305 203 153 124 104 93 90 595 304 203 152 122 102 89
25811 14194 12634 9538 12444 25811 16231 15082 8959 11645 6675 25811 15189 11410 8762 6820 9226 5630 8639 25811 12618 9147 13521 11904 6813 6147
W.b. Add 6040 5246 34255 10496 8571 2843 3894 3122 5373 2056 2370 3136 3240 1897 1435 1552 4120 1096 983 2465 3797 1707
R.s. proc 6027 5176 34253 10483 8558 2830 3884 3114 5372 2052 2307 3124 3236 1868 1433 1507 4109 1090 896 2405 3764 1675
24063 24061 12402 12400 10725 10723 7564 7558 10549 10543 24063 24061 14419 14417 13181 13172 7065 7061 9686 9677 4735 4726 24063 24061 13311 13309 9474 9470 6811 6802 4861 4859 7233 7218 3684 3673 6663 6648 24063 24061 10809 10807 7263 7261 11589 11580 9940 9931 4834 4828 4181 4166 continued on next page
Table 6.15. continued 18 18 18 18 18 18 18 18 18 18 18 18 18 18 18 18 18 18 18 18 18 18 18 18 18 18 18 18 18 18 18 18 18 18 18 18 18 18 18 18 18 18 18 18 18 18 18 18
Configuration 10 8 10 9 12 1 12 2 12 3 12 4 12 5 12 6 12 7 12 8 12 9 12 10 12 11 15 1 15 2 15 3 15 4 15 5 15 6 15 7 15 8 15 9 15 10 15 11 15 12 15 13 15 14 18 1 18 2 18 3 18 4 18 5 18 6 18 7 18 8 18 9 18 10 18 11 18 12 18 13 18 14 18 15 18 16 18 17 22 1 22 2 22 3 22 5
2 1 11 10 9 8 7 6 5 4 3 2 1 14 13 12 11 10 9 8 7 6 5 4 3 2 1 17 16 15 14 13 12 11 10 9 8 7 6 5 4 3 2 1 21 20 19 17
Copied 7063723 9183237 7335812 6504339 6234046 5638418 5254524 5629985 5212972 5259212 5178472 5899102 7172210 7062422 6327451 5585677 5106319 4533484 4952296 4641514 4341541 4157497 4087995 4298552 4599576 4244387 5396474 7057831 5996465 5543018 4987962 4559158 4487482 4359409 4271401 4046367 3848551 3771743 3767137 3747181 3693446 3709924 3625418 4230538 6815179 5829377 5299992 4616395
Invoc 80 78 595 302 201 151 121 101 87 76 69 64 62 595 301 200 150 120 100 86 75 67 60 55 51 48 47 595 300 200 150 120 100 85 75 67 60 55 50 46 43 41 38 37 595 299 199 119
W.b. Insert 5235 5572 25811 15221 9713 7403 5352 6541 8797 7752 7888 4857 10408 25811 13313 8876 8033 7918 8889 5316 6280 5486 4826 4197 3880 4220 7631 25811 14665 13017 9130 6668 11268 7118 5209 4940 5901 4282 4285 3605 3588 6677 2983 4930 25811 17519 15791 6684
W.b. Add R.s. proc 3263 3259 3583 3568 24063 24061 13396 13394 7808 7804 5475 5466 3425 3423 4558 4552 6796 6794 5804 5795 5868 5866 2846 2844 8430 8428 24063 24061 11456 11454 6933 6924 6098 6089 5965 5956 6907 6901 3370 3361 4279 4273 3495 3489 2844 2821 2221 2206 1899 1876 2250 2244 5643 5639 24063 24061 12802 12800 11102 11100 7173 7171 4717 4715 9299 9297 5131 5122 3229 3225 2942 2940 3907 3903 2283 2281 2313 2304 1630 1612 1621 1610 4707 4698 1026 994 2917 2875 24063 24061 15647 15638 13856 13852 4761 4746 continued on next page
Table 6.15. continued 18 18 18 18 18 18 18 18 18 18 18 18 18 18 18 18 18 18 18 18 18 18 18 18 18 18 18 18 18 18 18 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20 20
Configuration 22 6 22 7 22 9 22 10 22 11 22 13 22 14 22 15 22 16 22 18 22 19 22 20 22 21 27 1 27 2 27 4 27 6 27 7 27 9 27 11 27 12 27 14 27 15 27 17 27 18 27 20 27 22 27 23 27 24 27 25 27 26 2 3 3 4 4 4 5 5 5 5 6 6 6 6 6 7 7
1 1 2 1 2 3 1 2 3 4 1 2 3 4 5 1 2
16 15 13 12 11 9 8 7 6 4 3 2 1 26 25 23 21 20 18 16 15 13 12 10 9 7 5 4 3 2 1
Copied 4413488 4283250 4037535 3870386 3743816 3407540 3323226 3099585 3173735 2998176 2860781 2966343 3445157 6737603 5998228 4846012 4431588 4084153 3894656 3709213 3487484 3262968 2843891 2912449 2953653 2694418 2615701 2112657 2513627 2641062 2780389
Invoc 100 85 66 60 54 46 42 40 37 33 32 30 30 595 299 149 99 85 66 54 49 42 40 35 33 30 27 26 25 24 24
W.b. Insert 7964 7128 6405 5218 5668 4013 5476 3600 4453 2952 4048 5775 5878 25811 19823 9242 7309 4811 4177 4640 4503 4274 4229 5951 4490 4957 3955 2966 2989 3548 3401
1 2 1 3 2 1 4 3 2 1 5 4 3 2 1 6 5
7144539 5336057 5489077 5138698 4123868 4451954 5029654 3975144 3641446 3583989 4984286 4042792 3592622 2977875 2515719 4880934 3914284
148 148 83 148 77 54 148 76 51 39 148 75 50 38 31 148 75
9442 9442 4418 9442 8482 7233 9442 7546 3520 6719 9442 4698 4367 6879 3528 9442 4714
387
W.b. Add 6021 5138 4426 3248 3649 2041 3486 1628 2470 982 2088 3823 3906 24063 17991 7342 5342 2832 2198 2643 2510 2290 2242 3982 2512 2994 1976 1014 1011 1584 1444
R.s. proc 6019 5123 4417 3246 3640 2037 3454 1624 2433 945 2079 3777 3891 24061 17989 7338 5327 2826 2183 2628 2481 2258 2240 3971 2503 2985 1958 1005 988 1542 1438
7539 7527 7539 7527 2477 2465 7539 7527 6526 6519 5257 5245 7539 7527 5578 5566 1549 1542 4756 4715 7539 7527 2744 2737 2394 2370 4892 4880 1584 1577 7539 7527 2747 2735 continued on next page
Table 6.15. continued 20 20 20 20
Configuration 7 3 7 4 7 5 7 6
4 3 2 1
Copied 3401516 3094451 2650357 2290351
Invoc 50 37 30 25
388
W.b. Insert 4261 3748 6596 2963
W.b. Add 2283 1778 4645 997
R.s. proc 2276 1737 4621 923
[Figures 6.1 through 6.11 plot the break-even ratio (logarithmic vertical axis) against heap size in words, with one curve per configuration.]

Figure 6.1. Copying and pointer cost tradeoff: Interactive-AllCallsOn.

Figure 6.2. Copying and pointer cost tradeoff: Interactive-TextEditing.

Figure 6.3. Copying and pointer cost tradeoff: StandardNonInteractive.

Figure 6.4. Copying and pointer cost tradeoff: HeapSim.

Figure 6.5. Copying and pointer cost tradeoff: Lambda-Fact5.

Figure 6.6. Copying and pointer cost tradeoff: Lambda-Fact6.

Figure 6.7. Copying and pointer cost tradeoff: Swim.

Figure 6.8. Copying and pointer cost tradeoff: Tomcatv.

Figure 6.9. Copying and pointer cost tradeoff: Tree-Replace-Binary.

Figure 6.10. Copying and pointer cost tradeoff: Bloat-Bloat.

Figure 6.11. Copying and pointer cost tradeoff: Toba.
[Figures 6.12 through 6.25 each comprise three panels for one benchmark: (a) histogram of pointer source positions; (b) histogram of pointer target positions; (c) cumulative probability distribution of signed pointer distances, plotted as log2(|distance|) * sgn(distance).]

Figure 6.12. Pointer store heap position: Interactive-AllCallsOn.

Figure 6.13. Pointer store heap position: Interactive-TextEditing.

Figure 6.14. Pointer store heap position: StandardNonInteractive.

Figure 6.15. Pointer store heap position: HeapSim.

Figure 6.16. Pointer store heap position: Lambda-Fact5.

Figure 6.17. Pointer store heap position: Lambda-Fact6.

Figure 6.18. Pointer store heap position: Swim.

Figure 6.19. Pointer store heap position: Tomcatv.

Figure 6.20. Pointer store heap position: Tree-Replace-Binary.

Figure 6.21. Pointer store heap position: Tree-Replace-Random.

Figure 6.22. Pointer store heap position: Richards.

Figure 6.23. Pointer store heap position: JavaBYTEmark.

Figure 6.24. Pointer store heap position: Bloat-Bloat.

Figure 6.25. Pointer store heap position: Toba.
[Figures 6.26 through 6.31 each comprise five block-position histograms for one benchmark under the DOF scheme: (a) all non-null pointers, source block position; (b) non-filtered pointers, source block position; (c) all non-null pointers, target block position; (d) non-filtered pointers, target block position; (e) processed remembered set entry block position.]

Figure 6.26. DOF pointer position distributions: StandardNonInteractive [9, 25, 4].

Figure 6.27. DOF pointer position distributions: HeapSim [13, 123, 4].

Figure 6.28. DOF pointer position distributions: Richards [10, 20, 8].

Figure 6.29. DOF pointer position distributions: Lambda-Fact5 [9, 90, 67].

Figure 6.30. DOF pointer position distributions: Bloat-Bloat [16, 28, 14].

Figure 6.31. DOF pointer position distributions: JavaBYTEmark [18, 6, 3].
[Figures 6.32 through 6.73 each comprise two panels showing relative invocation counts for one benchmark at total heap size V: (a) the FC schemes (FC-TOF, FC-TYF, FC-DOF, FC-DYF, FC-ROF), relative number of collections versus the fraction collected g of the total heap size; (b) the GYF schemes (2GYF, 3GYF), relative number of collections versus nursery size as a fraction of the total heap size.]

Figure 6.32. Relative invocation counts: Interactive-AllCallsOn, V = 764.

Figure 6.33. Relative invocation counts: Interactive-AllCallsOn, V = 931.

Figure 6.34. Relative invocation counts: Interactive-AllCallsOn, V = 1135.

Figure 6.35. Relative invocation counts: Interactive-AllCallsOn, V = 1384.

Figure 6.36. Relative invocation counts: Interactive-AllCallsOn, V = 1687.

Figure 6.37. Relative invocation counts: Interactive-AllCallsOn, V = 2057.

Figure 6.38. Relative invocation counts: Interactive-TextEditing, V = 1338.

Figure 6.39. Relative invocation counts: Interactive-TextEditing, V = 1631.

Figure 6.40. Relative invocation counts: Interactive-TextEditing, V = 1988.

Figure 6.41. Relative invocation counts: Interactive-TextEditing, V = 2424.

Figure 6.42. Relative invocation counts: Interactive-TextEditing, V = 2955.

Figure 6.43. Relative invocation counts: StandardNonInteractive, V = 1414.

Figure 6.44. Relative invocation counts: StandardNonInteractive, V = 1723.

Figure 6.45. Relative invocation counts: StandardNonInteractive, V = 2101.

Figure 6.46. Relative invocation counts: StandardNonInteractive, V = 2561.

Figure 6.47. Relative invocation counts: StandardNonInteractive, V = 3122.

Figure 6.48. Relative invocation counts: StandardNonInteractive, V = 3805.

Figure 6.49. Relative invocation counts: StandardNonInteractive, V = 4639.

Figure 6.50. Relative invocation counts: StandardNonInteractive, V = 5655.

Figure 6.51. Relative invocation counts: StandardNonInteractive, V = 6894.

Figure 6.52. Relative invocation counts: HeapSim, V = 113397.

Figure 6.53. Relative invocation counts: HeapSim, V = 138231.

Figure 6.54. Relative invocation counts: HeapSim, V = 168503.

Figure 6.55. Relative invocation counts: HeapSim, V = 205405.

Figure 6.56. Relative invocation counts: HeapSim, V = 250387.

Figure 6.57. Relative invocation counts: HeapSim, V = 305221.

Figure 6.58. Relative invocation counts: HeapSim, V = 372063.

Figure 6.59. Relative invocation counts: Lambda-Fact5, V = 7673.

Figure 6.60. Relative invocation counts: Lambda-Fact5, V = 9354.

Figure 6.61. Relative invocation counts: Lambda-Fact5, V = 11402.

Figure 6.62. Relative invocation counts: Lambda-Fact5, V = 13899.

Figure 6.63. Relative invocation counts: Lambda-Fact5, V = 16943.

Figure 6.64. Relative invocation counts: Lambda-Fact5, V = 20654.

Figure 6.65. Relative invocation counts: Lambda-Fact5, V = 25177.

Figure 6.66. Relative invocation counts: Lambda-Fact6, V = 16669.

Figure 6.67. Relative invocation counts: Lambda-Fact6, V = 20320.

Figure 6.68. Relative invocation counts: Lambda-Fact6, V = 24770.

Figure 6.69. Relative invocation counts: Lambda-Fact6, V = 30194.

Figure 6.70. Relative invocation counts: Lambda-Fact6, V = 36807.

Figure 6.71. Relative invocation counts: Lambda-Fact6, V = 44868.

Figure 6.72. Relative invocation counts: Lambda-Fact6, V = 54693.

Figure 6.73. Relative invocation counts: Lambda-Fact6, V = 66671.
1
Lambda-Fact6 8
FC-TOF FC-TYF FC-DOF FC-DYF FC-ROF
7 6
Relative number of collections
Relative number of collections
8
Lambda-Fact6
5 4 3 2 1
2GYF 3GYF
7 6 5 4 3 2 1
0
0 0
0.2 0.4 0.6 0.8 Fraction collected g (of total heap size 81272)
1
0
(a) FC schemes
0.2 0.4 0.6 0.8 Nursery size (fraction of total heap size 81272)
(b) GYF schemes
Figure 6.74. Relative invocation counts: Lambda-Fact6, V
433
81272.
1
Swim
Swim 12 FC-TOF FC-TYF FC-DOF FC-DYF FC-ROF
10
Relative number of collections
Relative number of collections
12
8 6 4 2 0
2GYF 3GYF
10 8 6 4 2 0
0
0.2 0.4 0.6 0.8 Fraction collected g (of total heap size 21383)
1
0
(a) FC schemes
0.2 0.4 0.6 0.8 Nursery size (fraction of total heap size 21383)
(b) GYF schemes
Figure 6.75. Relative invocation counts: Swim, V
Swim
21383.
Swim
12
12 FC-TOF FC-TYF FC-DOF FC-DYF FC-ROF
10
Relative number of collections
Relative number of collections
1
8 6 4 2 0
2GYF 3GYF
10 8 6 4 2 0
0
0.2 0.4 0.6 0.8 Fraction collected g (of total heap size 26066)
1
0
(a) FC schemes
0.2 0.4 0.6 0.8 Nursery size (fraction of total heap size 26066)
(b) GYF schemes
Figure 6.76. Relative invocation counts: Swim, V
434
26066.
1
Swim
Swim 12 FC-TOF FC-TYF FC-DOF FC-DYF FC-ROF
10
Relative number of collections
Relative number of collections
12
8 6 4 2 0
2GYF 3GYF
10 8 6 4 2 0
0
0.2 0.4 0.6 0.8 Fraction collected g (of total heap size 31774)
1
0
(a) FC schemes
0.2 0.4 0.6 0.8 Nursery size (fraction of total heap size 31774)
(b) GYF schemes
Figure 6.77. Relative invocation counts: Swim, V
Swim 12 FC-TOF FC-TYF FC-DOF FC-DYF FC-ROF
10
Relative number of collections
Relative number of collections
31774.
Swim
12
8 6 4 2 0
2GYF 3GYF
10 8 6 4 2 0
0
0.2 0.4 0.6 0.8 Fraction collected g (of total heap size 38733)
1
0
(a) FC schemes
0.2 0.4 0.6 0.8 Nursery size (fraction of total heap size 38733)
1
(b) GYF schemes
Figure 6.78. Relative invocation counts: Swim, V
Swim
38733.
Swim
12
12 FC-TOF FC-TYF FC-DOF FC-DYF FC-ROF
10
Relative number of collections
Relative number of collections
1
8 6 4 2 0
2GYF 3GYF
10 8 6 4 2 0
0
0.2 0.4 0.6 0.8 Fraction collected g (of total heap size 47215)
1
0
(a) FC schemes
0.2 0.4 0.6 0.8 Nursery size (fraction of total heap size 47215)
(b) GYF schemes
Figure 6.79. Relative invocation counts: Swim, V
435
47215.
1
Swim
Swim 12 FC-TOF FC-TYF FC-DOF FC-DYF FC-ROF
10
Relative number of collections
Relative number of collections
12
8 6 4 2 0
2GYF 3GYF
10 8 6 4 2 0
0
0.2 0.4 0.6 0.8 Fraction collected g (of total heap size 57555)
1
0
(a) FC schemes
0.2 0.4 0.6 0.8 Nursery size (fraction of total heap size 57555)
(b) GYF schemes
Figure 6.80. Relative invocation counts: Swim, V
Swim 12 FC-TOF FC-TYF FC-DOF FC-DYF FC-ROF
10
Relative number of collections
Relative number of collections
57555.
Swim
12
8 6 4 2 0
2GYF 3GYF
10 8 6 4 2 0
0
0.2 0.4 0.6 0.8 Fraction collected g (of total heap size 70160)
1
0
(a) FC schemes
0.2 0.4 0.6 0.8 Nursery size (fraction of total heap size 70160)
1
(b) GYF schemes
Figure 6.81. Relative invocation counts: Swim, V
Swim
70160.
Swim
12
12 FC-TOF FC-TYF FC-DOF FC-DYF FC-ROF
10
Relative number of collections
Relative number of collections
1
8 6 4 2 0
2GYF 3GYF
10 8 6 4 2 0
0
0.2 0.4 0.6 0.8 Fraction collected g (of total heap size 85524)
1
0
(a) FC schemes
0.2 0.4 0.6 0.8 Nursery size (fraction of total heap size 85524)
(b) GYF schemes
Figure 6.82. Relative invocation counts: Swim, V
436
85524.
1
Swim
Swim 12 FC-TOF FC-TYF FC-DOF FC-DYF FC-ROF
10
Relative number of collections
Relative number of collections
12
8 6 4 2 0
2GYF 3GYF
10 8 6 4 2 0
0
0.2 0.4 0.6 0.8 Fraction collected g (of total heap size 104254)
1
0
(a) FC schemes
0.2 0.4 0.6 0.8 Nursery size (fraction of total heap size 104254)
(b) GYF schemes
Figure 6.83. Relative invocation counts: Swim, V
437
104254.
1
Tomcatv
Tomcatv 12 FC-TOF FC-TYF FC-DOF FC-DYF FC-ROF
10
Relative number of collections
Relative number of collections
12
8 6 4 2 0
2GYF 3GYF
10 8 6 4 2 0
0
0.2 0.4 0.6 0.8 Fraction collected g (of total heap size 37292)
1
0
(a) FC schemes
0.2 0.4 0.6 0.8 Nursery size (fraction of total heap size 37292)
(b) GYF schemes
Figure 6.84. Relative invocation counts: Tomcatv, V
Tomcatv
37292.
Tomcatv
12
12 FC-TOF FC-TYF FC-DOF FC-DYF FC-ROF
10
Relative number of collections
Relative number of collections
1
8 6 4 2 0
2GYF 3GYF
10 8 6 4 2 0
0
0.2 0.4 0.6 0.8 Fraction collected g (of total heap size 45459)
1
0
(a) FC schemes
0.2 0.4 0.6 0.8 Nursery size (fraction of total heap size 45459)
(b) GYF schemes
Figure 6.85. Relative invocation counts: Tomcatv, V
438
45459.
1
Tomcatv
Tomcatv 12 FC-TOF FC-TYF FC-DOF FC-DYF FC-ROF
10
Relative number of collections
Relative number of collections
12
8 6 4 2 0
2GYF 3GYF
10 8 6 4 2 0
0
0.2 0.4 0.6 0.8 Fraction collected g (of total heap size 55414)
1
0
(a) FC schemes
0.2 0.4 0.6 0.8 Nursery size (fraction of total heap size 55414)
(b) GYF schemes
Figure 6.86. Relative invocation counts: Tomcatv, V
Tomcatv 12 FC-TOF FC-TYF FC-DOF FC-DYF FC-ROF
10
Relative number of collections
Relative number of collections
55414.
Tomcatv
12
8 6 4 2 0
2GYF 3GYF
10 8 6 4 2 0
0
0.2 0.4 0.6 0.8 Fraction collected g (of total heap size 67550)
1
0
(a) FC schemes
0.2 0.4 0.6 0.8 Nursery size (fraction of total heap size 67550)
1
(b) GYF schemes
Figure 6.87. Relative invocation counts: Tomcatv, V
Tomcatv
67550.
Tomcatv
12
12 FC-TOF FC-TYF FC-DOF FC-DYF FC-ROF
10
Relative number of collections
Relative number of collections
1
8 6 4 2 0
2GYF 3GYF
10 8 6 4 2 0
0
0.2 0.4 0.6 0.8 Fraction collected g (of total heap size 82343)
1
0
(a) FC schemes
0.2 0.4 0.6 0.8 Nursery size (fraction of total heap size 82343)
(b) GYF schemes
Figure 6.88. Relative invocation counts: Tomcatv, V
439
82343.
1
Tomcatv
Tomcatv 12 FC-TOF FC-TYF FC-DOF FC-DYF FC-ROF
10
Relative number of collections
Relative number of collections
12
8 6 4 2 0
2GYF 3GYF
10 8 6 4 2 0
0
0.2 0.4 0.6 0.8 Fraction collected g (of total heap size 100376)
1
0
(a) FC schemes
0.2 0.4 0.6 0.8 Nursery size (fraction of total heap size 100376)
(b) GYF schemes
Figure 6.89. Relative invocation counts: Tomcatv, V
Tomcatv 12 FC-TOF FC-TYF FC-DOF FC-DYF FC-ROF
10
Relative number of collections
Relative number of collections
100376.
Tomcatv
12
8 6 4 2 0
2GYF 3GYF
10 8 6 4 2 0
0
0.2 0.4 0.6 0.8 Fraction collected g (of total heap size 122358)
1
0
(a) FC schemes
0.2 0.4 0.6 0.8 Nursery size (fraction of total heap size 122358)
1
(b) GYF schemes
Figure 6.90. Relative invocation counts: Tomcatv, V
Tomcatv
122358.
Tomcatv
12
12 FC-TOF FC-TYF FC-DOF FC-DYF FC-ROF
10
Relative number of collections
Relative number of collections
1
8 6 4 2 0
2GYF 3GYF
10 8 6 4 2 0
0
0.2 0.4 0.6 0.8 Fraction collected g (of total heap size 149154)
1
0
(a) FC schemes
0.2 0.4 0.6 0.8 Nursery size (fraction of total heap size 149154)
(b) GYF schemes
Figure 6.91. Relative invocation counts: Tomcatv, V
440
149154.
1
Tomcatv
Tomcatv 12 FC-TOF FC-TYF FC-DOF FC-DYF FC-ROF
10
Relative number of collections
Relative number of collections
12
8 6 4 2 0
2GYF 3GYF
10 8 6 4 2 0
0
0.2 0.4 0.6 0.8 Fraction collected g (of total heap size 181818)
1
0
(a) FC schemes
0.2 0.4 0.6 0.8 Nursery size (fraction of total heap size 181818)
(b) GYF schemes
Figure 6.92. Relative invocation counts: Tomcatv, V
441
181818.
1
Tree-Replace-Binary
Tree-Replace-Binary 7 FC-TOF FC-TYF FC-DOF FC-DYF FC-ROF
6 5
Relative number of collections
Relative number of collections
7
4 3 2 1 0
2GYF 3GYF
6 5 4 3 2 1 0
0
0.2 0.4 0.6 0.8 Fraction collected g (of total heap size 11926)
1
0
(a) FC schemes
0.2 0.4 0.6 0.8 Nursery size (fraction of total heap size 11926)
(b) GYF schemes
Figure 6.93. Relative invocation counts: Tree-Replace-Binary, V
Tree-Replace-Binary
11926.
Tree-Replace-Binary
7
7 FC-TOF FC-TYF FC-DOF FC-DYF FC-ROF
6 5
Relative number of collections
Relative number of collections
1
4 3 2 1 0
2GYF 3GYF
6 5 4 3 2 1 0
0
0.2 0.4 0.6 0.8 Fraction collected g (of total heap size 14538)
1
0
(a) FC schemes
0.2 0.4 0.6 0.8 Nursery size (fraction of total heap size 14538)
(b) GYF schemes
Figure 6.94. Relative invocation counts: Tree-Replace-Binary, V
442
14538.
1
Tree-Replace-Binary
Tree-Replace-Binary 7 FC-TOF FC-TYF FC-DOF FC-DYF FC-ROF
6 5
Relative number of collections
Relative number of collections
7
4 3 2 1 0
2GYF 3GYF
6 5 4 3 2 1 0
0
0.2 0.4 0.6 0.8 Fraction collected g (of total heap size 17722)
1
0
(a) FC schemes
0.2 0.4 0.6 0.8 Nursery size (fraction of total heap size 17722)
(b) GYF schemes
Figure 6.95. Relative invocation counts: Tree-Replace-Binary, V
Tree-Replace-Binary 7 FC-TOF FC-TYF FC-DOF FC-DYF FC-ROF
6 5
Relative number of collections
Relative number of collections
17722.
Tree-Replace-Binary
7
4 3 2 1 0
2GYF 3GYF
6 5 4 3 2 1 0
0
0.2 0.4 0.6 0.8 Fraction collected g (of total heap size 21603)
1
0
(a) FC schemes
0.2 0.4 0.6 0.8 Nursery size (fraction of total heap size 21603)
1
(b) GYF schemes
Figure 6.96. Relative invocation counts: Tree-Replace-Binary, V
Tree-Replace-Binary
21603.
Tree-Replace-Binary
7
7 FC-TOF FC-TYF FC-DOF FC-DYF FC-ROF
6 5
Relative number of collections
Relative number of collections
1
4 3 2 1 0
2GYF 3GYF
6 5 4 3 2 1 0
0
0.2 0.4 0.6 0.8 Fraction collected g (of total heap size 26334)
1
0
(a) FC schemes
0.2 0.4 0.6 0.8 Nursery size (fraction of total heap size 26334)
(b) GYF schemes
Figure 6.97. Relative invocation counts: Tree-Replace-Binary, V
443
26334.
1
Tree-Replace-Random
Tree-Replace-Random 8 FC-TOF FC-TYF FC-DOF FC-DYF FC-ROF
7 6
Relative number of collections
Relative number of collections
8
5 4 3 2 1
2GYF 3GYF
7 6 5 4 3 2 1
0
0 0
0.2 0.4 0.6 0.8 Fraction collected g (of total heap size 15985)
1
0
(a) FC schemes
0.2 0.4 0.6 0.8 Nursery size (fraction of total heap size 15985)
(b) GYF schemes
Figure 6.98. Relative invocation counts: Tree-Replace-Random, V
Tree-Replace-Random
15985.
Tree-Replace-Random
8
8 FC-TOF FC-TYF FC-DOF FC-DYF FC-ROF
7 6
Relative number of collections
Relative number of collections
1
5 4 3 2 1
2GYF 3GYF
7 6 5 4 3 2 1
0
0 0
0.2 0.4 0.6 0.8 Fraction collected g (of total heap size 19486)
1
0
(a) FC schemes
0.2 0.4 0.6 0.8 Nursery size (fraction of total heap size 19486)
(b) GYF schemes
Figure 6.99. Relative invocation counts: Tree-Replace-Random, V
444
19486.
1
Tree-Replace-Random
Tree-Replace-Random 8 FC-TOF FC-TYF FC-DOF FC-DYF FC-ROF
7 6
Relative number of collections
Relative number of collections
8
5 4 3 2 1
2GYF 3GYF
7 6 5 4 3 2 1
0
0 0
0.2 0.4 0.6 0.8 Fraction collected g (of total heap size 23754)
1
0
(a) FC schemes
0.2 0.4 0.6 0.8 Nursery size (fraction of total heap size 23754)
(b) GYF schemes
Figure 6.100. Relative invocation counts: Tree-Replace-Random, V
Tree-Replace-Random 8 FC-TOF FC-TYF FC-DOF FC-DYF FC-ROF
7 6
Relative number of collections
Relative number of collections
23754.
Tree-Replace-Random
8
5 4 3 2 1
2GYF 3GYF
7 6 5 4 3 2 1
0
0 0
0.2 0.4 0.6 0.8 Fraction collected g (of total heap size 28956)
1
0
(a) FC schemes
0.2 0.4 0.6 0.8 Nursery size (fraction of total heap size 28956)
1
(b) GYF schemes
Figure 6.101. Relative invocation counts: Tree-Replace-Random, V
Tree-Replace-Random
28956.
Tree-Replace-Random
8
8 FC-TOF FC-TYF FC-DOF FC-DYF FC-ROF
7 6
Relative number of collections
Relative number of collections
1
5 4 3 2 1
2GYF 3GYF
7 6 5 4 3 2 1
0
0 0
0.2 0.4 0.6 0.8 Fraction collected g (of total heap size 35297)
1
0
(a) FC schemes
0.2 0.4 0.6 0.8 Nursery size (fraction of total heap size 35297)
(b) GYF schemes
Figure 6.102. Relative invocation counts: Tree-Replace-Random, V
445
35297.
1
Tree-Replace-Random
Tree-Replace-Random 8 FC-TOF FC-TYF FC-DOF FC-DYF FC-ROF
7 6
Relative number of collections
Relative number of collections
8
5 4 3 2 1
2GYF 3GYF
7 6 5 4 3 2 1
0
0 0
0.2 0.4 0.6 0.8 Fraction collected g (of total heap size 43027)
1
0
(a) FC schemes
0.2 0.4 0.6 0.8 Nursery size (fraction of total heap size 43027)
(b) GYF schemes
Figure 6.103. Relative invocation counts: Tree-Replace-Random, V
Tree-Replace-Random 8 FC-TOF FC-TYF FC-DOF FC-DYF FC-ROF
7 6
Relative number of collections
Relative number of collections
43027.
Tree-Replace-Random
8
5 4 3 2 1
2GYF 3GYF
7 6 5 4 3 2 1
0
0 0
0.2 0.4 0.6 0.8 Fraction collected g (of total heap size 52450)
1
0
(a) FC schemes
0.2 0.4 0.6 0.8 Nursery size (fraction of total heap size 52450)
1
(b) GYF schemes
Figure 6.104. Relative invocation counts: Tree-Replace-Random, V
Tree-Replace-Random
52450.
Tree-Replace-Random
8
8 FC-TOF FC-TYF FC-DOF FC-DYF FC-ROF
7 6
Relative number of collections
Relative number of collections
1
5 4 3 2 1
2GYF 3GYF
7 6 5 4 3 2 1
0
0 0
0.2 0.4 0.6 0.8 Fraction collected g (of total heap size 63936)
1
0
(a) FC schemes
0.2 0.4 0.6 0.8 Nursery size (fraction of total heap size 63936)
(b) GYF schemes
Figure 6.105. Relative invocation counts: Tree-Replace-Random, V
446
63936.
1
Richards
Richards 8
FC-TOF FC-TYF FC-DOF FC-DYF FC-ROF
7 6
Relative number of collections
Relative number of collections
8
5 4 3 2 1
2GYF 3GYF
7 6 5 4 3 2 1
0
0 0
0.2 0.4 0.6 0.8 Fraction collected g (of total heap size 1826)
1
0
(a) FC schemes
0.2 0.4 0.6 0.8 Nursery size (fraction of total heap size 1826)
(b) GYF schemes
Figure 6.106. Relative invocation counts: Richards, V
Richards
7 6
1826.
Richards 8
FC-TOF FC-TYF FC-DOF FC-DYF FC-ROF
Relative number of collections
Relative number of collections
8
1
5 4 3 2 1
2GYF 3GYF
7 6 5 4 3 2 1
0
0 0
0.2 0.4 0.6 0.8 Fraction collected g (of total heap size 2225)
1
0
(a) FC schemes
0.2 0.4 0.6 0.8 Nursery size (fraction of total heap size 2225)
(b) GYF schemes
Figure 6.107. Relative invocation counts: Richards, V
447
2225.
1
Richards 8
FC-TOF FC-TYF FC-DOF FC-DYF FC-ROF
7 6
Relative number of collections
Relative number of collections
8
Richards
5 4 3 2 1
2GYF 3GYF
7 6 5 4 3 2 1
0
0 0
0.2 0.4 0.6 0.8 Fraction collected g (of total heap size 2713)
1
0
(a) FC schemes
0.2 0.4 0.6 0.8 Nursery size (fraction of total heap size 2713)
(b) GYF schemes
Figure 6.108. Relative invocation counts: Richards, V
Richards 8
FC-TOF FC-TYF FC-DOF FC-DYF FC-ROF
7 6 5 4 3 2 1
2GYF 3GYF
7 6 5 4 3 2 1
0
0 0
0.2 0.4 0.6 0.8 Fraction collected g (of total heap size 3307)
1
0
(a) FC schemes
0.2 0.4 0.6 0.8 Nursery size (fraction of total heap size 3307)
Richards 8
Richards
Relative number of collections
6
3307.
8
FC-TOF FC-TYF FC-DOF FC-DYF FC-ROF
7
1
(b) GYF schemes
Figure 6.109. Relative invocation counts: Richards, V
Relative number of collections
2713.
Richards
Relative number of collections
Relative number of collections
8
1
5 4 3 2 1
2GYF 3GYF
7 6 5 4 3 2 1
0
0 0
0.2 0.4 0.6 0.8 Fraction collected g (of total heap size 4032)
1
0
(a) FC schemes
0.2 0.4 0.6 0.8 Nursery size (fraction of total heap size 4032)
(b) GYF schemes
Figure 6.110. Relative invocation counts: Richards, V
448
4032.
1
Richards 8
FC-TOF FC-TYF FC-DOF FC-DYF FC-ROF
7 6
Relative number of collections
Relative number of collections
8
Richards
5 4 3 2 1
2GYF 3GYF
7 6 5 4 3 2 1
0
0 0
0.2 0.4 0.6 0.8 Fraction collected g (of total heap size 4914)
1
0
(a) FC schemes
0.2 0.4 0.6 0.8 Nursery size (fraction of total heap size 4914)
(b) GYF schemes
Figure 6.111. Relative invocation counts: Richards, V
Richards 8
FC-TOF FC-TYF FC-DOF FC-DYF FC-ROF
7 6 5 4 3 2 1
2GYF 3GYF
7 6 5 4 3 2 1
0
0 0
0.2 0.4 0.6 0.8 Fraction collected g (of total heap size 5991)
1
0
(a) FC schemes
0.2 0.4 0.6 0.8 Nursery size (fraction of total heap size 5991)
Richards 8
Richards
Relative number of collections
6
5991.
8
FC-TOF FC-TYF FC-DOF FC-DYF FC-ROF
7
1
(b) GYF schemes
Figure 6.112. Relative invocation counts: Richards, V
Relative number of collections
4914.
Richards
Relative number of collections
Relative number of collections
8
1
5 4 3 2 1
2GYF 3GYF
7 6 5 4 3 2 1
0
0 0
0.2 0.4 0.6 0.8 Fraction collected g (of total heap size 7303)
1
0
(a) FC schemes
0.2 0.4 0.6 0.8 Nursery size (fraction of total heap size 7303)
(b) GYF schemes
Figure 6.113. Relative invocation counts: Richards, V
449
7303.
1
Richards 8
FC-TOF FC-TYF FC-DOF FC-DYF FC-ROF
7 6
Relative number of collections
Relative number of collections
8
Richards
5 4 3 2 1
2GYF 3GYF
7 6 5 4 3 2 1
0
0 0
0.2 0.4 0.6 0.8 Fraction collected g (of total heap size 8902)
1
0
(a) FC schemes
0.2 0.4 0.6 0.8 Nursery size (fraction of total heap size 8902)
(b) GYF schemes
Figure 6.114. Relative invocation counts: Richards, V
450
8902.
1
JavaBYTEmark
JavaBYTEmark 8 FC-TOF FC-TYF FC-DOF FC-DYF FC-ROF
7 6
Relative number of collections
Relative number of collections
8
5 4 3 2 1
2GYF 3GYF
7 6 5 4 3 2 1
0
0 0
0.2 0.4 0.6 0.8 Fraction collected g (of total heap size 72807)
1
0
(a) FC schemes
0.2 0.4 0.6 0.8 Nursery size (fraction of total heap size 72807)
(b) GYF schemes
Figure 6.115. Relative invocation counts: JavaBYTEmark, V
JavaBYTEmark
72807.
JavaBYTEmark
8
8 FC-TOF FC-TYF FC-DOF FC-DYF FC-ROF
7 6
Relative number of collections
Relative number of collections
1
5 4 3 2 1
2GYF 3GYF
7 6 5 4 3 2 1
0
0 0
0.2 0.4 0.6 0.8 Fraction collected g (of total heap size 88752)
1
0
(a) FC schemes
0.2 0.4 0.6 0.8 Nursery size (fraction of total heap size 88752)
(b) GYF schemes
Figure 6.116. Relative invocation counts: JavaBYTEmark, V
451
88752.
1
JavaBYTEmark
JavaBYTEmark
8
8
Relative number of collections
6 5 4 3 2
2GYF 3GYF
7
Relative number of collections
FC-TOF FC-TYF FC-DOF FC-DYF FC-ROF
7
6 5 4 3 2
1
1
0
0 0
0.2 0.4 0.6 0.8 Fraction collected g (of total heap size 108188)
1
0
(a) FC schemes
0.2 0.4 0.6 0.8 Nursery size (fraction of total heap size 108188)
(b) GYF schemes
Figure 6.117. Relative invocation counts: JavaBYTEmark, V
JavaBYTEmark
108188.
JavaBYTEmark
8
8 FC-TOF FC-TYF FC-DOF FC-DYF FC-ROF
6
2GYF 3GYF
7 Relative number of collections
7 Relative number of collections
1
5 4 3 2 1
6 5 4 3 2 1
0
0 0
0.2 0.4 0.6 0.8 Fraction collected g (of total heap size 131881)
1
(a) FC schemes
0
0.2 0.4 0.6 0.8 Nursery size (fraction of total heap size 131881)
(b) GYF schemes
Figure 6.118. Relative invocation counts: JavaBYTEmark, V
452
131881.
1
Bloat-Bloat
4
2GYF 3GYF
5 Relative number of collections
5 Relative number of collections
Bloat-Bloat FC-TOF FC-TYF FC-DOF FC-DYF FC-ROF
3
2
1
0
4
3
2
1
0 0
0.2 0.4 0.6 0.8 Fraction collected g (of total heap size 246766)
1
0
(a) FC schemes
0.2 0.4 0.6 0.8 Nursery size (fraction of total heap size 246766)
(b) GYF schemes
Figure 6.119. Relative invocation counts: Bloat-Bloat, V
Bloat-Bloat
2GYF 3GYF
5 Relative number of collections
Relative number of collections
4
246766.
Bloat-Bloat FC-TOF FC-TYF FC-DOF FC-DYF FC-ROF
5
1
3
2
1
0
4
3
2
1
0 0
0.2 0.4 0.6 0.8 Fraction collected g (of total heap size 300808)
1
0
(a) FC schemes
0.2 0.4 0.6 0.8 Nursery size (fraction of total heap size 300808)
(b) GYF schemes
Figure 6.120. Relative invocation counts: Bloat-Bloat, V
453
300808.
1
Bloat-Bloat FC-TOF FC-TYF FC-DOF FC-DYF FC-ROF
4
2GYF 3GYF
5 Relative number of collections
5 Relative number of collections
Bloat-Bloat
3
2
1
0
4
3
2
1
0 0
0.2 0.4 0.6 0.8 Fraction collected g (of total heap size 366682)
1
0
(a) FC schemes
0.2 0.4 0.6 0.8 Nursery size (fraction of total heap size 366682)
(b) GYF schemes
Figure 6.121. Relative invocation counts: Bloat-Bloat, V
Bloat-Bloat
2GYF 3GYF
5 Relative number of collections
Relative number of collections
4
3
2
1
0
4
3
2
1
0 0
0.2 0.4 0.6 0.8 Fraction collected g (of total heap size 446984)
1
0
(a) FC schemes
0.2 0.4 0.6 0.8 Nursery size (fraction of total heap size 446984)
Bloat-Bloat
2GYF 3GYF
5 Relative number of collections
4
446984.
Bloat-Bloat FC-TOF FC-TYF FC-DOF FC-DYF FC-ROF
5
1
(b) GYF schemes
Figure 6.122. Relative invocation counts: Bloat-Bloat, V
Relative number of collections
366682.
Bloat-Bloat FC-TOF FC-TYF FC-DOF FC-DYF FC-ROF
5
1
3
2
1
0
4
3
2
1
0 0
0.2 0.4 0.6 0.8 Fraction collected g (of total heap size 544872)
1
0
(a) FC schemes
0.2 0.4 0.6 0.8 Nursery size (fraction of total heap size 544872)
(b) GYF schemes
Figure 6.123. Relative invocation counts: Bloat-Bloat, V
454
544872.
1
Bloat-Bloat FC-TOF FC-TYF FC-DOF FC-DYF FC-ROF
4
2GYF 3GYF
5 Relative number of collections
5 Relative number of collections
Bloat-Bloat
3
2
1
0
4
3
2
1
0 0
0.2 0.4 0.6 0.8 Fraction collected g (of total heap size 664195)
1
0
(a) FC schemes
0.2 0.4 0.6 0.8 Nursery size (fraction of total heap size 664195)
(b) GYF schemes
Figure 6.124. Relative invocation counts: Bloat-Bloat, V
Bloat-Bloat
2GYF 3GYF
5 Relative number of collections
Relative number of collections
4
3
2
1
0
4
3
2
1
0 0
0.2 0.4 0.6 0.8 Fraction collected g (of total heap size 809650)
1
0
(a) FC schemes
0.2 0.4 0.6 0.8 Nursery size (fraction of total heap size 809650)
Bloat-Bloat
2GYF 3GYF
5 Relative number of collections
4
809650.
Bloat-Bloat FC-TOF FC-TYF FC-DOF FC-DYF FC-ROF
5
1
(b) GYF schemes
Figure 6.125. Relative invocation counts: Bloat-Bloat, V
Relative number of collections
664195.
Bloat-Bloat FC-TOF FC-TYF FC-DOF FC-DYF FC-ROF
5
1
3
2
1
0
4
3
2
1
0 0
0.2 0.4 0.6 0.8 Fraction collected g (of total heap size 986959)
1
0
(a) FC schemes
0.2 0.4 0.6 0.8 Nursery size (fraction of total heap size 986959)
(b) GYF schemes
Figure 6.126. Relative invocation counts: Bloat-Bloat, V
455
986959.
1
Toba
Toba 6 FC-TOF FC-TYF FC-DOF FC-DYF FC-ROF
5
Relative number of collections
Relative number of collections
6
4 3 2 1 0
2GYF 3GYF
5 4 3 2 1 0
0
0.2 0.4 0.6 0.8 Fraction collected g (of total heap size 353843)
1
0
(a) FC schemes
0.2 0.4 0.6 0.8 Nursery size (fraction of total heap size 353843)
(b) GYF schemes
Figure 6.127. Relative invocation counts: Toba, V
Toba
353843.
Toba
6
6 FC-TOF FC-TYF FC-DOF FC-DYF FC-ROF
5
Relative number of collections
Relative number of collections
1
4 3 2 1 0
2GYF 3GYF
5 4 3 2 1 0
0
0.2 0.4 0.6 0.8 Fraction collected g (of total heap size 431335)
1
0
(a) FC schemes
0.2 0.4 0.6 0.8 Nursery size (fraction of total heap size 431335)
(b) GYF schemes
Figure 6.128. Relative invocation counts: Toba, V
456
431335.
1
Toba
Toba 6 FC-TOF FC-TYF FC-DOF FC-DYF FC-ROF
5
Relative number of collections
Relative number of collections
6
4 3 2 1 0
2GYF 3GYF
5 4 3 2 1 0
0
0.2 0.4 0.6 0.8 Fraction collected g (of total heap size 525794)
1
0
(a) FC schemes
0.2 0.4 0.6 0.8 Nursery size (fraction of total heap size 525794)
(b) GYF schemes
Figure 6.129. Relative invocation counts: Toba, V
Toba 6 FC-TOF FC-TYF FC-DOF FC-DYF FC-ROF
5
Relative number of collections
Relative number of collections
525794.
Toba
6
4 3 2 1 0
2GYF 3GYF
5 4 3 2 1 0
0
0.2 0.4 0.6 0.8 Fraction collected g (of total heap size 640941)
1
0
(a) FC schemes
0.2 0.4 0.6 0.8 Nursery size (fraction of total heap size 640941)
1
(b) GYF schemes
Figure 6.130. Relative invocation counts: Toba, V
Toba
640941.
Toba
6
6 FC-TOF FC-TYF FC-DOF FC-DYF FC-ROF
5
Relative number of collections
Relative number of collections
1
4 3 2 1 0
2GYF 3GYF
5 4 3 2 1 0
0
0.2 0.4 0.6 0.8 Fraction collected g (of total heap size 781303)
1
0
(a) FC schemes
0.2 0.4 0.6 0.8 Nursery size (fraction of total heap size 781303)
(b) GYF schemes
Figure 6.131. Relative invocation counts: Toba, V
457
781303.
1
Toba
Toba 6 FC-TOF FC-TYF FC-DOF FC-DYF FC-ROF
5
Relative number of collections
Relative number of collections
6
4 3 2 1 0
2GYF 3GYF
5 4 3 2 1 0
0
0.2 0.4 0.6 0.8 Fraction collected g (of total heap size 952404)
1
0
(a) FC schemes
0.2 0.4 0.6 0.8 Nursery size (fraction of total heap size 952404)
(b) GYF schemes
Figure 6.132. Relative invocation counts: Toba, V
Toba 6 FC-TOF FC-TYF FC-DOF FC-DYF FC-ROF
5
Relative number of collections
Relative number of collections
952404.
Toba
6
4 3 2 1 0
2GYF 3GYF
5 4 3 2 1 0
0
0.2 0.4 0.6 0.8 Fraction collected g (of total heap size 1160976)
1
0
(a) FC schemes
0.2 0.4 0.6 0.8 Nursery size (fraction of total heap size 1160976)
1
(b) GYF schemes
Figure 6.133. Relative invocation counts: Toba, V
Toba
1160976.
Toba
6
6 FC-TOF FC-TYF FC-DOF FC-DYF FC-ROF
5
Relative number of collections
Relative number of collections
1
4 3 2 1 0
2GYF 3GYF
5 4 3 2 1 0
0
0.2 0.4 0.6 0.8 Fraction collected g (of total heap size 1415223)
1
0
(a) FC schemes
0.2 0.4 0.6 0.8 Nursery size (fraction of total heap size 1415223)
(b) GYF schemes
Figure 6.134. Relative invocation counts: Toba, V
458
1415223.
1
Toba
Toba 6 FC-TOF FC-TYF FC-DOF FC-DYF FC-ROF
5
Relative number of collections
Relative number of collections
6
4 3 2 1 0
2GYF 3GYF
5 4 3 2 1 0
0
0.2 0.4 0.6 0.8 Fraction collected g (of total heap size 1725148)
1
0
(a) FC schemes
0.2 0.4 0.6 0.8 Nursery size (fraction of total heap size 1725148)
(b) GYF schemes
Figure 6.135. Relative invocation counts: Toba, V
459
1725148.
1
CHAPTER 7 CONCLUDING REMARKS
In this study, we have explored a heretofore neglected alternative to generational garbage collection and found it to be competitive with generational algorithms, and in many cases superior to them. We introduced a simple new algorithm, called deferred older-first, within the classification of age-based algorithms. We described an efficient implementation of the new algorithm, its heap organization, copying mechanisms, and pointer-maintenance mechanisms (Chapter 3). We demonstrated, in a comparative trace-based simulation study, that the new algorithm achieves a significantly lower copying cost than generational collection on many programs, sometimes as much as 11 times lower (Chapter 5). We showed that, even though the pointer-maintenance cost of the new algorithm is usually higher than that of generational collection, the total cost is nevertheless lower on many programs, sometimes by as much as a factor of 4 (Chapter 6).

These results were unexpected in light of the prior understanding of generational collector performance, which suggested that collecting among the youngest data achieves the best performance. Using profiling of an age-ordered heap, our investigation provides insight into the copying performance of the new algorithm, and shows how it genuinely concentrates effort on those object ages, neither the youngest nor the oldest in the heap, that result in the lowest copying cost.

Our new algorithm is very simple and non-adaptive, yet its performance is good. While much work has been devoted to the design of adaptive generational algorithms, adaptive versions of our alternative, or indeed of other alternatives, remain unexplored but, according to our results, promising. Thus the field of garbage collection, even in the simplest setting of stop-and-copy uniprocessor algorithms, is reopened for study.
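The deferred older-first policy can be sketched abstractly. The following is a minimal, illustrative simulation, not the dissertation's implementation: the heap is kept in age order (oldest first), a fixed-size window is collected, survivors keep their age-order position, the window then skips past them toward younger data, and it resets toward the oldest data once it runs off the young end. The names (`dof_step`, `collect_window`) and the liveness oracle are assumptions made for the sketch.

```python
def collect_window(heap, start, size, is_live):
    """Collect heap[start:start+size]; splice survivors back in place.

    `heap` is an age-ordered list of objects, oldest first.
    Returns the new heap and the number of survivors.
    """
    window = heap[start:start + size]
    survivors = [obj for obj in window if is_live(obj)]
    return heap[:start] + survivors + heap[start + size:], len(survivors)

def dof_step(heap, start, size, is_live):
    """One deferred-older-first step: collect, then advance the window."""
    heap, survived = collect_window(heap, start, size, is_live)
    start += survived            # defer survivors: move past them, toward younger data
    if start >= len(heap):       # the window ran off the young end of the heap
        start = 0                # reset toward the oldest data
    return heap, start
```

With an even-numbers liveness oracle on the heap [1, 2, 3, 4, 5, 6] and a window of 3, two steps reclaim all the odd objects while the window sweeps toward the young end and then resets.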
In our examination of the pointer-maintenance structures needed to support the demands of new algorithms, we found that precepts for the design of pointer maintenance drawn from existing studies of generational collection do not carry over to alternative algorithms (for instance, the number of duplicate remembered-set insertions is much greater in generational collection), and thus pointer-maintenance design is also open for further study. Furthermore, our investigation of the positional distribution of pointers dispels the oft-cited notion that younger-to-older pointers dominate even in object-oriented languages.

This work has exposed the facts about the performance of copying garbage collection algorithms, the factors affecting it, and the limits of its possible improvement. However, the settings in which the experimental results were obtained were necessarily limited, and other settings remain to be explored. We used trace-based simulation, albeit in conjunction with a prototype implementation of the collectors; traces, however, capture neither stack-processing costs nor the interactions between the collector and the mutator. A desirable follow-up study is to integrate the several collectors we described with a running mutator system, such as a Java virtual machine. This integration will permit the verification of the design we outlined for implementation, the validation of the cost models we assumed (Chapter 3), the measurement of memory-locality effects, and the final verdict in the form of absolute timings of the different algorithms.

In this study, we eschewed analytical modelling of garbage collection costs. Modelling the actions of a collector presupposes the availability of analytical models for the behavior of the mutator. Whereas some models have been attempted for object lifetimes, with unclear utility, models for pointer structure are entirely unexplored.
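The duplicate remembered-set insertions mentioned above can be made concrete with a small write-barrier sketch. This is illustrative only, not the toolkit's implementation: a remembered set is kept as a hash set of slots (source locations) holding interesting pointers, so repeated stores through the same slot are deduplicated, and counting attempted insertions lets the duplicate ratio be measured directly. The class and method names are assumptions made for the sketch.

```python
class RememberedSet:
    """Toy remembered set with duplicate-insertion accounting (illustrative)."""

    def __init__(self):
        self.slots = set()     # slots recorded as holding interesting pointers
        self.attempts = 0      # total insertions attempted by the write barrier

    def record(self, slot):
        """Write-barrier hook: called on every interesting pointer store."""
        self.attempts += 1
        self.slots.add(slot)   # set insertion is idempotent: duplicates vanish

    def duplicate_ratio(self):
        """Fraction of attempted insertions that were duplicates."""
        return 1.0 - len(self.slots) / self.attempts if self.attempts else 0.0
```

For example, four barrier hits on two distinct slots yield a duplicate ratio of 0.5; comparing this ratio across collection schemes is what the measurement above does at scale.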
The success of the deferred older-first collector as an approximation of the ideal queue collector (Chapter 1), the heap profiles (Chapter 5), and the pointer distributions (Chapter 6) indicate a high degree of spatio-temporal correlation of object lifetimes and of pointer structure, suggesting that first-order statistical models will be inadequate.
Our investigation focused on the application of collectors to the entire heap, from the most recently allocated objects to the oldest. As we discovered in the study of demise positions as well as of pointer directions, and entirely in accordance with intuitive expectations, an advantage is possible in large and long-running systems if an a priori division is imposed on the heap: the most recently allocated data are managed separately, to lower the pointer-maintenance overhead for stores into new objects; the oldest data are identified as permanent (by programmer declaration, or adaptively), to exclude them from consideration entirely; and the middle portion, representing an active mature space, is managed by a deferred-older-first algorithm. Indeed, this observation can be seen as the basis for defining policies on top of the mechanisms described for the Mature Object Space and its train algorithm [Hudson and Moss, 1992; Seligmann and Grarup, 1995].

The evaluation of different algorithms applied to a mature space could, in principle, proceed as in this study. The trace of mature-space objects can be obtained either as the sequence of objects promoted out of a concrete younger-space collector, or, more abstractly, as the subsequence of objects with lifetimes above a fixed threshold extracted from a full sequence of allocated objects. Since the volume of objects presented to the mature-space collector is, by design, much smaller than the total volume of allocation, the total allocation must be considerably larger in order for the mature-space collector to be exercised sufficiently (Section 4.2.2.2). We believe that it will prove infeasible to obtain traces as accurate as those used in this study, and that direct implementation and measurement will be especially needed for the mature space.
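The more abstract of the two trace-extraction methods described above, taking the subsequence of objects whose lifetimes exceed a fixed promotion threshold, is simple to state in code. The sketch below assumes a trace of (allocation time, death time) records; the record layout and function name are illustrative, not the dissertation's trace format.

```python
def mature_space_trace(trace, threshold):
    """Extract the subsequence of a full allocation trace that would be
    presented to a mature-space collector: objects whose lifetime
    (death_time - alloc_time) exceeds the promotion threshold.

    `trace` is a list of (alloc_time, death_time) pairs (illustrative layout).
    """
    return [(a, d) for (a, d) in trace if (d - a) > threshold]
```

On a toy trace, raising the threshold shrinks the mature-space input, which is why the total allocation must grow correspondingly for the mature-space collector to be exercised sufficiently.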
BIBLIOGRAPHY
[Abadi and Cardelli, 1996] Martín Abadi and Luca Cardelli. A Theory of Objects. Springer-Verlag, New York, 1996.

[Aditya et al., 1994] Shail Aditya, Christine Flood, and James Hicks. Garbage collection for strongly-typed languages using run-time type reconstruction. In Proceedings of the SIGPLAN '94 Conference on Programming Language Design and Implementation, volume 29 of SIGPLAN Notices, pages 12–23, Orlando, Florida, June 1994. ACM Press. Also Lisp Pointers VIII 3, July–September 1994.

[Agesen et al., 1998] Ole Agesen, David Detlefs, and J. Eliot B. Moss. Garbage collection and local variable type-precision and liveness in Java(TM) virtual machines. In Proceedings of the SIGPLAN '98 Conference on Programming Language Design and Implementation, volume 33 of SIGPLAN Notices, pages 269–279, Montreal, Québec, Canada, June 1998. ACM Press.

[Appel and Shao, 1996] Andrew W. Appel and Zhong Shao. Empirical and analytic study of stack versus heap cost for languages with closures. Journal of Functional Programming, 6(1):47–74, January 1996.

[Appel, 1987] Andrew W. Appel. Garbage collection can be faster than stack allocation. Information Processing Letters, 25(4):275–279, 1987.

[Appel, 1989a] Andrew W. Appel. Runtime tags aren't necessary. Lisp and Symbolic Computation, 2:153–162, 1989.

[Appel, 1989b] Andrew W. Appel. Simple generational garbage collection and fast allocation. Software Practice and Experience, 19(2):171–183, 1989.

[Appel, 1992] Andrew W. Appel. Compiling with Continuations. Cambridge University Press, Cambridge, 1992.

[Appel, 1997] Andrew Appel. A better analytical model for the strong generational hypothesis. November 1997.

[Baker, 1978] Henry G. Baker. List processing in real-time on a serial computer. Communications of the ACM, 21(4):280–294, 1978. Also AI Laboratory Working Paper 139, 1977.

[Baker, 1990] Henry G. Baker. Unify and conquer (garbage, updating, aliasing, ...) in functional languages. In LFP [1990], pages 218–226.
[Baker, 1993] Henry G. Baker. 'Infant Mortality' and generational garbage collection. SIGPLAN Notices, 28(4):55–57, 1993.

[Barendregt, 1984] Henk P. Barendregt. The Lambda Calculus: Its Syntax and Semantics. Elsevier (North-Holland), Amsterdam, 1984. Second edition.

[Barrett and Zorn, 1995] David A. Barrett and Benjamin Zorn. Garbage collection using a dynamic threatening boundary. In Proceedings of the SIGPLAN '95 Conference on Programming Language Design and Implementation, volume 30 of SIGPLAN Notices, pages 301–314, La Jolla, CA, June 1995. ACM Press.

[Bekkers and Cohen, 1992] Yves Bekkers and Jacques Cohen, editors. International Workshop on Memory Management, number 637 in Lecture Notes in Computer Science, St. Malo, France, September 1992. Springer-Verlag.

[Bekkers et al., 1986] Yves Bekkers, B. Canet, Olivier Ridoux, and L. Ungaro. MALI: A memory with a real-time garbage collector for implementing logic programming languages. In 3rd Symposium on Logic Programming, pages 258–264. IEEE Press, 1986.

[Bishop, 1977] Peter B. Bishop. Computer Systems with a Very Large Address Space and Garbage Collection. PhD thesis, Massachusetts Institute of Technology, Cambridge, MA, May 1977.

[Boehm and Shao, 1993] Hans-Juergen Boehm and Zhong Shao. Inferring type maps during garbage collection. In Moss et al. [1993].

[Boehm and Weiser, 1988] Hans-Juergen Boehm and Mark Weiser. Garbage collection in an uncooperative environment. Software Practice and Experience, 18(9):807–820, 1988.

[Boehm, 1993] Hans-Juergen Boehm. Space efficient conservative garbage collection. In Proceedings of the SIGPLAN '93 Conference on Programming Language Design and Implementation, volume 28(6) of SIGPLAN Notices, pages 197–206, Albuquerque, New Mexico, June 1993. ACM Press.

[Caudill and Wirfs-Brock, 1986] Patrick J. Caudill and Allen Wirfs-Brock. A third generation Smalltalk-80 implementation. In Proceedings of the Conference on Object-Oriented Programming Systems, Languages, and Applications, pages 119–130, Portland, Oregon, September 1986. SIGPLAN Notices 21, 11 (November 1986).

[Chambers, 1992] Craig Chambers. The Design and Implementation of the SELF Compiler, an Optimizing Compiler for an Object-Oriented Programming Language. PhD thesis, Stanford University, March 1992.

[Cheney, 1970] C. J. Cheney. A non-recursive list compacting algorithm. Communications of the ACM, 13(11):677–678, November 1970.

[Clinger and Hansen, 1997] William D. Clinger and Lars T. Hansen. Generational garbage collection and the radioactive decay model. SIGPLAN Notices, 32(5):97–108, May 1997. Proceedings of the ACM SIGPLAN '97 Conference on Programming Language Design and Implementation.
[Cohen, 1981] Jacques Cohen. Garbage collection of linked data structures. Computing Surveys, 13(3):341–367, September 1981.

[Courts, 1988] Robert Courts. Improving locality of reference in a garbage-collecting memory management system. Communications of the ACM, 31(9):1128–1138, September 1988.

[Cox and Oakes, 1984] D. R. Cox and D. Oakes. Analysis of Survival Data. Chapman and Hall, London, 1984.

[Demers et al., 1990] Alan Demers, Mark Weiser, Barry Hayes, Daniel G. Bobrow, and Scott Shenker. Combining generational and conservative garbage collection: Framework and implementations. In Conference Record of the Seventeenth Annual ACM Symposium on Principles of Programming Languages, SIGPLAN Notices, pages 261–269, San Francisco, CA, January 1990. ACM Press.

[DeTreville, 1990a] John DeTreville. Experience with concurrent garbage collectors for Modula-2+. Technical Report 64, DEC Systems Research Center, Palo Alto, CA, August 1990.

[DeTreville, 1990b] John DeTreville. Heap usage in the Topaz environment. Technical Report 63, DEC Systems Research Center, Palo Alto, CA, August 1990.

[Diwan et al., 1992] Amer Diwan, J. Eliot B. Moss, and Richard L. Hudson. Compiler support for garbage collection in a statically typed language. In Proceedings of the SIGPLAN '92 Conference on Programming Language Design and Implementation, volume 27 of SIGPLAN Notices, pages 273–282, San Francisco, CA, June 1992. ACM Press.

[Diwan et al., 1995] Amer Diwan, David Tarditi, and J. Eliot B. Moss. Memory system performance of programs with intensive heap allocation. ACM Transactions on Computer Systems, 13(3):244–273, August 1995.

[Elandt-Johnson and Johnson, 1980] Regina C. Elandt-Johnson and Norman Lloyd Johnson. Survival Models and Data Analysis. Wiley, New York, 1980.

[Fenichel and Yochelson, 1969] Robert R. Fenichel and Jerome C. Yochelson. A Lisp garbage collector for virtual memory computer systems. Communications of the ACM, 12(11):611–612, November 1969.

[Fradet, 1994] Pascal Fradet. Collecting more garbage. In LFP [1994].

[Goldberg and Gloger, 1992] Benjamin Goldberg and Michael Gloger. Polymorphic type reconstruction for garbage collection without tags. In Conference Record of the 1992 ACM Symposium on Lisp and Functional Programming, pages 53–65, San Francisco, CA, June 1992. ACM Press.

[Goldberg and Robson, 1983] Adele Goldberg and David Robson. Smalltalk-80: The Language and its Implementation. Addison-Wesley, 1983.
[Gonçalves, 1995] Marcelo J. R. Gonçalves. Cache Performance of Programs with Intensive Heap Allocation and Generational Garbage Collection. PhD thesis, Princeton University, May 1995.

[Gosling et al., 1996] James Gosling, Bill Joy, and Guy Steele. The Java Language Specification. Addison-Wesley, Reading, Mass., 1996.

[Grarup and Seligmann, 1993] Steffen Grarup and Jacob Seligmann. Incremental mature garbage collection. Technical report, Computer Science Department, Aarhus University, Aarhus, Denmark, August 1993. M.Sc. Thesis.

[Harrison, 1989] Williams Ludwell Harrison, III. The interprocedural analysis and automatic parallelization of Scheme programs. Lisp and Symbolic Computation, 2(3/4):179–396, October 1989.

[Hayes, 1991] Barry Hayes. Using key object opportunism to collect old objects. In Proceedings of the Conference on Object-Oriented Programming Systems, Languages, and Applications, pages 33–46, Phoenix, Arizona, October 1991. SIGPLAN Notices 26, 11 (November 1991).

[Hayes, 1993] Barry Hayes. Key Objects in Garbage Collection. PhD thesis, Stanford University, Stanford, California, March 1993.

[Hicks et al., 1998] Michael Hicks, Luke Hornof, Jonathan T. Moore, and Scott M. Nettles. A study of large object spaces. In International Symposium on Memory Management, pages 138–145, Vancouver, British Columbia, Canada, October 1998. ACM Press.

[Hosking and Hudson, 1993] Antony L. Hosking and Richard L. Hudson. Remembered sets can also play cards. In Moss et al. [1993].

[Hosking et al., 1992] Antony L. Hosking, J. Eliot B. Moss, and Darko Stefanović. A comparative performance evaluation of write barrier implementations. In Proceedings of the Conference on Object-Oriented Programming Systems, Languages, and Applications, pages 92–109, Vancouver, Canada, October 1992. SIGPLAN Notices 27, 10 (October 1992).

[Hudson and Moss, 1992] Richard L. Hudson and J. Eliot B. Moss. Incremental collection of mature objects. In Bekkers and Cohen [1992], pages 388–403.

[Hudson et al., 1991] Richard L. Hudson, J. Eliot B. Moss, Amer Diwan, and Christopher F. Weight. A language-independent garbage collector toolkit. COINS Technical Report 91-47, University of Massachusetts, Amherst, September 1991.

[Jones and Lins, 1996] Richard Jones and Rafael Lins. Garbage Collection: Algorithms for Automatic Dynamic Memory Management. John Wiley, Chichester, 1996.

[Kolmogorov and Uspenskii, 1958] A. N. Kolmogorov and V. A. Uspenskii. On the definition of an algorithm. Uspekhi Mat. Nauk, 13:3–28, 1958. English translation in: Russian Math. Surveys 30 (1963) 217–245.
[Lang and Dupont, 1987] Bernard Lang and Francis Dupont. Incremental incrementally compacting garbage collection. In SIGPLAN '87 Symposium on Interpreters and Interpretive Techniques, volume 22(7) of SIGPLAN Notices, pages 253–263, Inst. Rech. Informat. Automat. Lab., France, 1987. ACM Press.

[LFP, 1990] Conference Record of the 1990 ACM Symposium on Lisp and Functional Programming, Nice, France, June 1990. ACM Press.

[LFP, 1994] 1994 ACM Conference on Lisp and Functional Programming, Orlando, Florida, June 1994. ACM Press.

[Lieberman and Hewitt, 1983] Henry Lieberman and Carl Hewitt. A real-time garbage collector based on the lifetimes of objects. Communications of the ACM, 26(6):419–429, June 1983.

[McCarthy, 1960] John McCarthy. Recursive functions of symbolic expressions and their computation by machine, part I. Communications of the ACM, 3(4):184–195, April 1960.

[Minsky, 1963] Marvin L. Minsky. A Lisp garbage collector algorithm using serial secondary storage. Technical Report Memo 58 (rev.), Project MAC, MIT, Cambridge, MA, December 1963.

[Mohnen, 1995] Markus Mohnen. Efficient compile-time garbage collection for arbitrary data structures. Technical Report 95-08, University of Aachen, May 1995. Also in Seventh International Symposium on Programming Languages, Implementations, Logics and Programs, PLILP '95.

[Moon, 1984] David Moon. Garbage collection in a large Lisp system. In Proceedings of the ACM Symposium on Lisp and Functional Programming, pages 235–246, Austin, TX, August 1984.

[Morrisett et al., 1995] Greg Morrisett, Matthias Felleisen, and Robert Harper. Abstract models of memory management. In Functional Programming and Computer Architecture, June 1995.

[Moss et al., 1993] Eliot Moss, Paul R. Wilson, and Benjamin Zorn, editors. OOPSLA/ECOOP '93 Workshop on Garbage Collection in Object-Oriented Systems, October 1993.

[Moss, 1987] J. Eliot B. Moss. Managing stack frames in Smalltalk. In Proceedings of the ACM SIGPLAN '87 Symposium on Interpreters and Interpretive Techniques, pages 229–240, St. Paul, Minnesota, July 1987. SIGPLAN Notices 22, 7 (July 1987).

[Nystrom, 1998] Nathaniel Nystrom. Bytecode-level analysis and optimization of Java class files. Master's thesis, Purdue University, West Lafayette, IN, May 1998.

[Odersky and Wadler, 1997] Martin Odersky and Philip Wadler. Pizza into Java: translating theory into practice. In POPL '97: Proceedings of the 24th ACM SIGPLAN-SIGACT Symposium on Principles of Programming Languages, pages 146–159, New York, 1997. ACM Press.
[Proebsting et al., 1998] Todd A. Proebsting, Gregg Townsend, Patrick Bridges, John H. Hartman, Tim Newsham, and Scott A. Watterson. Toba: Java for applications, a way ahead of time (WAT) compiler. (at http://www.cs.arizona.edu/sumatra/toba/), 1998.

[Reinhold, 1993] Mark B. Reinhold. Cache performance of garbage-collected programming languages. PhD thesis, Massachusetts Institute of Technology, Cambridge, MA, September 1993.

[Röjemo and Runciman, 1996] Niklas Röjemo and Colin Runciman. Lag, drag, void, and use: heap profiling and space-efficient compilation revisited. In Proceedings of the SIGPLAN '96 Conference on Programming Language Design and Implementation, volume 31 of SIGPLAN Notices, pages 34–41. ACM Press, 1996.

[Runciman and Röjemo, 1995] Colin Runciman and Niklas Röjemo. Lag, drag and post-mortem heap profiling. In Implementation of Functional Languages Workshop, Båstad, Sweden, September 1995.

[Runciman and Wakeling, 1992] Colin Runciman and David Wakeling. Heap profiling of lazy functional programs. Technical Report 172, University of York, Dept. of Computer Science, Heslington, York YO1 5DD, United Kingdom, April 1992.

[Sansom and Peyton Jones, 1994] Patrick M. Sansom and Simon L. Peyton Jones. Time and space profiling for non-strict, higher-order functional languages. Research Report FP-1994-10, Dept. of Computing Science, University of Glasgow, Glasgow, Scotland, 1994.

[Sansom, 1994] Patrick M. Sansom. Execution Profiling for Non-strict Functional Languages. PhD thesis, University of Glasgow, Glasgow, Scotland, April 1994.

[Schönhage, 1980] A. Schönhage. Storage modification machines. SIAM Journal on Computing, 9:490–508, 1980.

[Seligmann and Grarup, 1995] Jacob Seligmann and Steffen Grarup. Incremental mature garbage collection using the train algorithm. In O. Nierstrasz, editor, Proceedings of the 1995 European Conference on Object-Oriented Programming, Lecture Notes in Computer Science, pages 235–252, University of Aarhus, August 1995. Springer-Verlag.

[Shaw, 1988] Robert A. Shaw. Empirical Analysis of a LISP System. PhD thesis, Stanford University, February 1988. Available as Technical Report CSL-TR-88-351.

[Sleator and Tarjan, 1983] Daniel Dominic Sleator and Robert Endre Tarjan. Self-adjusting binary search trees. In Proceedings of the ACM SIGACT Symposium on Theory of Computing, pages 235–245, Boston, Massachusetts, April 1983.

[Sobalvarro, 1988] Patrick G. Sobalvarro. A lifetime-based garbage collector for LISP systems on general-purpose computers, 1988. B.S. Thesis, Dept. of EECS, Massachusetts Institute of Technology, Cambridge.
[Stefanović and Moss, 1994] Darko Stefanović and J. Eliot B. Moss. Characterisation of object behaviour in Standard ML of New Jersey. In LFP [1994].

[Stefanović et al., 1998a] Darko Stefanović, J. Eliot B. Moss, and Kathryn S. McKinley. Oldest-first garbage collection. Computer Science Technical Report 98-81, University of Massachusetts, Amherst, April 1998.

[Stefanović et al., 1998b] Darko Stefanović, J. Eliot B. Moss, and Kathryn S. McKinley. On models for object lifetimes. February 1998.

[Stefanović, 1993a] Darko Stefanović. The garbage collection toolkit as an experimentation tool. In Moss et al. [1993].

[Stefanović, 1993b] Darko Stefanović. Generational copying garbage collection for Standard ML: a quantitative study. Master's thesis, Department of Computer Science, University of Massachusetts, 1993.

[Sullivan and Chillarege, 1991] Mark Sullivan and Ram Chillarege. Software defects and their impact on system availability: a study of field failures in operating systems. In Digest of the 21st International Symposium on Fault Tolerant Computing, pages 2–9, June 1991.

[Ungar and Jackson, 1988] David M. Ungar and Frank Jackson. Tenuring policies for generation-based storage reclamation. SIGPLAN Notices, 23(11):1–17, 1988.

[Ungar and Jackson, 1992] David M. Ungar and Frank Jackson. An adaptive tenuring policy for generation scavengers. ACM Transactions on Programming Languages and Systems, 14(1):1–27, 1992.

[Ungar, 1984] David Ungar. Generation scavenging: A non-disruptive high performance storage reclamation algorithm. In Proceedings of the ACM SIGSOFT/SIGPLAN Software Engineering Symposium on Practical Software Development Environments, pages 157–167, Pittsburgh, Pennsylvania, April 1984. SIGPLAN Notices 19, 5 (May 1984).

[Ungar, 1986] David M. Ungar. The Design and Evaluation of a High Performance Smalltalk System. ACM Distinguished Dissertation 1986. MIT Press, 1986.

[van Emde Boas, 1990] Peter van Emde Boas. Machine Models and Simulations, volume A: Algorithms and Complexity. Elsevier and MIT Press, 1990.

[Wilson and Moher, 1989] Paul R. Wilson and Thomas G. Moher. Design of the opportunistic garbage collector. In Proceedings of the Conference on Object-Oriented Programming Systems, Languages, and Applications, pages 23–35, New Orleans, Louisiana, October 1989. SIGPLAN Notices 24, 10 (October 1989).

[Wilson et al., 1991] Paul R. Wilson, Michael S. Lam, and Thomas G. Moher. Effective "static-graph" reorganization to improve locality in garbage-collected systems. In Proceedings of the ACM SIGPLAN '91 Conference on Programming Language Design and Implementation, pages 177–191, Toronto, Canada, June 1991. SIGPLAN Notices 26, 6 (June 1991).
[Wilson et al., 1995] Paul R. Wilson, Mark S. Johnstone, Michael Neely, and David Boles. Dynamic storage allocation: A survey and critical review. In Henry Baker, editor, Proceedings of the International Workshop on Memory Management, Lecture Notes in Computer Science, pages 1–116, Kinross, Scotland, September 1995. Springer-Verlag.

[Wilson, 1989] Paul R. Wilson. A simple bucket-brigade advancement mechanism for generation-based garbage collection. SIGPLAN Notices, 24(5):38–46, May 1989.

[Wilson, 1990] Paul R. Wilson. Some issues and strategies in heap management and memory hierarchies. In Eric Jul and Niels-Christian Juul, editors, OOPSLA/ECOOP '90 Workshop on Garbage Collection in Object-Oriented Systems, October 1990. Also in SIGPLAN Notices 23(1):45–52, January 1991.

[Wilson, 1992] Paul R. Wilson. Uniprocessor garbage collection techniques. In Bekkers and Cohen [1992], pages 1–42.

[Yi and Harrison, 1992] Kwangkeun Yi and Williams Ludwell Harrison, III. Interprocedural data flow analysis for compile-time memory management. Technical Report 1244, Center for Supercomputing Research and Development, University of Illinois at Urbana-Champaign, Urbana, Illinois, August 1992.

[Zorn, 1989] Benjamin G. Zorn. Comparative Performance Evaluation of Garbage Collection Algorithms. PhD thesis, University of California at Berkeley, December 1989. Available as Technical Report UCB/CSD 89/544.

[Zorn, 1990a] Benjamin Zorn. Barrier methods for garbage collection. Technical Report CU-CS-494-90, University of Colorado at Boulder, November 1990.

[Zorn, 1990b] Benjamin Zorn. Comparing mark-and-sweep and stop-and-copy garbage collection. In LFP [1990].

[Zorn, 1991] Benjamin G. Zorn. The effect of garbage collection on cache performance. Technical Report CU-CS-528-91, University of Colorado at Boulder, May 1991.