Combinatorial Optimization
Volume 1

Concepts of Combinatorial Optimization

Edited by
Vangelis Th. Paschos

First published 2010 in Great Britain and the United States by ISTE Ltd and John Wiley & Sons, Inc.
Adapted and updated from Optimisation combinatoire volumes 1 to 5 published 2005-2007 in France by
Hermes Science/Lavoisier © LAVOISIER 2005, 2006, 2007
Apart from any fair dealing for the purposes of research or private study, or criticism or review, as permitted under the Copyright, Designs and Patents Act 1988, this publication may only be reproduced, stored or transmitted, in any form or by any means, with the prior permission in writing of the publishers, or in the case of reprographic reproduction in accordance with the terms and licenses issued by the CLA. Enquiries concerning reproduction outside these terms should be sent to the publishers at the undermentioned address:
ISTE Ltd
27-37 St George’s Road
London SW19 4EU
UK

John Wiley & Sons, Inc.
111 River Street
Hoboken, NJ 07030
USA

www.iste.co.uk

www.wiley.com

© ISTE Ltd 2010
The rights of Vangelis Th. Paschos to be identified as the author of this work have been asserted by him
in accordance with the Copyright, Designs and Patents Act 1988.

Library of Congress Cataloging-in-Publication Data
Combinatorial optimization / edited by Vangelis Th. Paschos.
v. cm.
Includes bibliographical references and index.
Contents: v. 1. Concepts of combinatorial optimization
ISBN 978-1-84821-146-9 (set of 3 vols.) -- ISBN 978-1-84821-147-6 (v. 1)
1. Combinatorial optimization. 2. Programming (Mathematics)
I. Paschos, Vangelis Th.
QA402.5.C545123 2010
519.6'4--dc22
2010018423
British Library Cataloguing-in-Publication Data
A CIP record for this book is available from the British Library
ISBN 978-1-84821-146-9 (Set of 3 volumes)
ISBN 978-1-84821-147-6 (Volume 1)
Printed and bound in Great Britain by CPI Antony Rowe, Chippenham and Eastbourne.


Table of Contents

Preface
Vangelis Th. PASCHOS

PART I. COMPLEXITY OF COMBINATORIAL OPTIMIZATION PROBLEMS

Chapter 1. Basic Concepts in Algorithms and Complexity Theory
Vangelis Th. PASCHOS
1.1. Algorithmic complexity
1.2. Problem complexity
1.3. The classes P, NP and NPO
1.4. Karp and Turing reductions
1.5. NP-completeness
1.6. Two examples of NP-complete problems
1.6.1. MIN VERTEX COVER
1.6.2. MAX STABLE
1.7. A few words on strong and weak NP-completeness
1.8. A few other well-known complexity classes
1.9. Bibliography

Chapter 2. Randomized Complexity
Jérémy BARBAY
2.1. Deterministic and probabilistic algorithms
2.1.1. Complexity of a Las Vegas algorithm
2.1.2. Probabilistic complexity of a problem
2.2. Lower bound technique
2.2.1. Definitions and notations
2.2.2. Minimax theorem
2.2.3. The Loomis lemma and the Yao principle
2.3. Elementary intersection problem
2.3.1. Upper bound
2.3.2. Lower bound
2.3.3. Probabilistic complexity
2.4. Conclusion
2.5. Bibliography

PART II. CLASSICAL SOLUTION METHODS

Chapter 3. Branch-and-Bound Methods
Irène CHARON and Olivier HUDRY
3.1. Introduction
3.2. Branch-and-bound method principles
3.2.1. Principle of separation
3.2.2. Pruning principles
3.2.3. Developing the tree
3.3. A detailed example: the binary knapsack problem
3.3.1. Calculating the initial bound
3.3.2. First principle of separation
3.3.3. Pruning without evaluation
3.3.4. Evaluation
3.3.5. Complete execution of the branch-and-bound method for finding only one optimal solution
3.3.6. First variant: finding all the optimal solutions
3.3.7. Second variant: best first search strategy
3.3.8. Third variant: second principle of separation
3.4. Conclusion
3.5. Bibliography

Chapter 4. Dynamic Programming
Bruno ESCOFFIER and Olivier SPANJAARD
4.1. Introduction
4.2. A first example: crossing the bridge
4.3. Formalization
4.3.1. State space, decision set, transition function
4.3.2. Feasible policies, comparison relationships and objectives
4.4. Some other examples
4.4.1. Stock management
4.4.2. Shortest path bottleneck in a graph
4.4.3. Knapsack problem
4.5. Solution
4.5.1. Forward procedure
4.5.2. Backward procedure
4.5.3. Principles of optimality and monotonicity
4.6. Solution of the examples
4.6.1. Stock management
4.6.2. Shortest path bottleneck
4.6.3. Knapsack
4.7. A few extensions
4.7.1. Partial order and multicriteria optimization
4.7.2. Dynamic programming with variables
4.7.3. Generalized dynamic programming
4.8. Conclusion
4.9. Bibliography

PART III. ELEMENTS FROM MATHEMATICAL PROGRAMMING

Chapter 5. Mixed Integer Linear Programming Models for Combinatorial Optimization Problems
Frédérico DELLA CROCE
5.1. Introduction
5.1.1. Preliminaries
5.1.2. The knapsack problem
5.1.3. The bin-packing problem
5.1.4. The set covering/set partitioning problem
5.1.5. The minimum cost flow problem
5.1.6. The maximum flow problem
5.1.7. The transportation problem
5.1.8. The assignment problem
5.1.9. The shortest path problem
5.2. General modeling techniques
5.2.1. Min-max, max-min, min-abs models
5.2.2. Handling logic conditions
5.3. More advanced MILP models
5.3.1. Location models
5.3.2. Graphs and network models
5.3.3. Machine scheduling models
5.4. Conclusions
5.5. Bibliography

Chapter 6. Simplex Algorithms for Linear Programming
Frédérico DELLA CROCE and Andrea GROSSO
6.1. Introduction
6.2. Primal and dual programs
6.2.1. Optimality conditions and strong duality
6.2.2. Symmetry of the duality relation
6.2.3. Weak duality
6.2.4. Economic interpretation of duality
6.3. The primal simplex method
6.3.1. Basic solutions
6.3.2. Canonical form and reduced costs
6.4. Bland's rule
6.4.1. Searching for a feasible solution
6.5. Simplex methods for the dual problem
6.5.1. The dual simplex method
6.5.2. The primal–dual simplex algorithm
6.6. Using reduced costs and pseudo-costs for integer programming
6.6.1. Using reduced costs for tightening variable bounds
6.6.2. Pseudo-costs for integer programming
6.7. Bibliography

Chapter 7. A Survey of some Linear Programming Methods
Pierre TOLLA
7.1. Introduction
7.2. Dantzig's simplex method
7.2.1. Standard linear programming and the main results
7.2.2. Principle of the simplex method
7.2.3. Putting the problem into canonical form
7.2.4. Stopping criterion, heuristics and pivoting
7.3. Duality
7.4. Khachiyan's algorithm
7.5. Interior methods
7.5.1. Karmarkar's projective algorithm
7.5.2. Primal–dual methods and corrective predictive methods
7.5.3. Mehrotra predictor–corrector method
7.6. Conclusion
7.7. Bibliography

Chapter 8. Quadratic Optimization in 0–1 Variables
Alain BILLIONNET
8.1. Introduction
8.2. Pseudo-Boolean functions and set functions
8.3. Formalization using pseudo-Boolean functions
8.4. Quadratic pseudo-Boolean functions (qpBf)
8.5. Integer optimum and continuous optimum of qpBfs
8.6. Derandomization
8.7. Posiforms and quadratic posiforms
8.7.1. Posiform maximization and stability in a graph
8.7.2. Implication graph associated with a quadratic posiform
8.8. Optimizing a qpBf: special cases and polynomial cases
8.8.1. Maximizing negative–positive functions
8.8.2. Maximizing functions associated with k-trees
8.8.3. Maximizing a quadratic posiform whose terms are associated with two consecutive arcs of a directed multigraph
8.8.4. Quadratic pseudo-Boolean functions equal to the product of two linear functions
8.9. Reductions, relaxations, linearizations, bound calculation and persistence
8.9.1. Complementation
8.9.2. Linearization
8.9.3. Lagrangian duality
8.9.4. Another linearization
8.9.5. Convex quadratic relaxation
8.9.6. Positive semi-definite relaxation
8.9.7. Persistence
8.10. Local optimum
8.11. Exact algorithms and heuristic methods for optimizing qpBfs
8.11.1. Different approaches
8.11.2. An algorithm based on Lagrangian decomposition
8.11.3. An algorithm based on convex quadratic programming
8.12. Approximation algorithms
8.12.1. A 2-approximation algorithm for maximizing a quadratic posiform
8.12.2. MAX-SAT approximation
8.13. Optimizing a quadratic pseudo-Boolean function with linear constraints
8.13.1. Examples of formulations
8.13.2. Some polynomial and pseudo-polynomial cases
8.13.3. Complementation
8.14. Linearization, convexification and Lagrangian relaxation for optimizing a qpBf with linear constraints
8.14.1. Linearization
8.14.2. Convexification
8.14.3. Lagrangian duality
8.15. ε-Approximation algorithms for optimizing a qpBf with linear constraints
8.16. Bibliography

Chapter 9. Column Generation in Integer Linear Programming
Irène LOISEAU, Alberto CESELLI, Nelson MACULAN and Matteo SALANI
9.1. Introduction
9.2. A column generation method for a bounded variable linear programming problem
9.3. An inequality to eliminate the generation of a 0–1 column
9.4. Formulations for an integer linear program
9.5. Solving an integer linear program using column generation
9.5.1. Auxiliary problem (pricing problem)
9.5.2. Branching
9.6. Applications
9.6.1. The p-medians problem
9.6.2. Vehicle routing
9.7. Bibliography

Chapter 10. Polyhedral Approaches
Ali Ridha MAHJOUB
10.1. Introduction
10.2. Polyhedra, faces and facets
10.2.1. Polyhedra, polytopes and dimension
10.2.2. Faces and facets
10.3. Combinatorial optimization and linear programming
10.3.1. Associated polytope
10.3.2. Extreme points and extreme rays
10.4. Proof techniques
10.4.1. Facet proof techniques
10.4.2. Integrality techniques
10.5. Integer polyhedra and min–max relations
10.5.1. Duality and combinatorial optimization
10.5.2. Totally unimodular matrices
10.5.3. Totally dual integral systems
10.5.4. Blocking and antiblocking polyhedra
10.6. Cutting-plane method
10.6.1. The Chvátal–Gomory method
10.6.2. Cutting-plane algorithm
10.6.3. Branch-and-cut algorithms
10.6.4. Separation and optimization
10.7. The maximum cut problem
10.7.1. Spin glass models and the maximum cut problem
10.7.2. The cut polytope
10.8. The survivable network design problem
10.8.1. Formulation and associated polyhedron
10.8.2. Valid inequalities and separation
10.8.3. A branch-and-cut algorithm
10.9. Conclusion
10.10. Bibliography

Chapter 11. Constraint Programming
Claude LE PAPE
11.1. Introduction
11.2. Problem definition
11.3. Decision operators
11.4. Propagation
11.5. Heuristics
11.5.1. Branching
11.5.2. Exploration strategies
11.6. Conclusion
11.7. Bibliography

List of Authors

Index

Summary of Other Volumes in the Series


Preface

What is combinatorial optimization? There are, in my opinion, as many
definitions as there are researchers in this domain, each one as valid as the other. For
me, it is above all the art of understanding a real, natural problem, and being able to
transform it into a mathematical model. It is the art of studying this model in order
to extract its structural properties and the characteristics of the solutions of the
modeled problem. It is the art of exploiting these characteristics in order to
determine algorithms that calculate the solutions but also to show the limits in
economy and efficiency of these algorithms. Lastly, it is the art of enriching or
abstracting existing models in order to increase their strength, portability, and ability
to describe mathematically (and computationally) other problems, which may or
may not be similar to the problems that inspired the initial models.
Seen in this light, we can easily understand why combinatorial optimization is at
the heart of the junction of scientific disciplines as rich and different as theoretical
computer science, pure and applied, discrete and continuous mathematics,
mathematical economics, and quantitative management. It is inspired by these, and
enriches them all.
This book, Concepts of Combinatorial Optimization, is the first volume in a set
entitled Combinatorial Optimization. It tries, along with the other volumes in the set,
to embody the idea of combinatorial optimization. The subjects of this volume cover
themes that are considered to constitute the hard core of combinatorial optimization.
The book is divided into three parts:
– Part I: Complexity of Combinatorial Optimization Problems;
– Part II: Classical Solution Methods;
– Part III: Elements from Mathematical Programming.


In the first part, Chapter 1 introduces the fundamentals of the theory of
(deterministic) complexity and of algorithm analysis. In Chapter 2, the context
changes and we consider algorithms that make decisions by “tossing a coin”. At
each stage of the resolution of a problem, several alternatives have to be considered,
each one occurring with a certain probability. This is the context of probabilistic (or
randomized) algorithms, which is described in this chapter.
In the second part some methods are introduced that make up the great classics
of combinatorial optimization: branch-and-bound and dynamic programming. The
former is perhaps the most well known and the most popular when we try to find an
optimal solution to a difficult combinatorial optimization problem. Chapter 3 gives a
thorough overview of this method as well as of some of the most well-known tree
search methods based upon branch-and-bound. What can we say about dynamic
programming, presented in Chapter 4? It has considerable reach and scope, and very
many optimization problems have optimal solution algorithms that use it as their
central method.
The third part is centered around mathematical programming, considered to be
the heart of combinatorial optimization and operational research. In Chapter 5, a
large number of linear models and an equally large number of combinatorial
optimization problems are set out and discussed. In Chapter 6, the main simplex
algorithms for linear programming, such as the primal simplex algorithm, the dual
simplex algorithm, and the primal–dual simplex algorithm are introduced. Chapter 7
introduces some classical linear programming methods, while Chapter 8 introduces
quadratic integer optimization methods. Chapter 9 describes a series of resolution
methods currently widely in use, namely column generation. Chapter 10 focuses on
polyhedral methods, almost 60 years old but still relevant to combinatorial
optimization research. Lastly, Chapter 11 introduces a more contemporary, but
extremely interesting, subject, namely constraint programming.
This book is intended for novice researchers, or even Master’s students, as much
as for senior researchers. Master’s students will probably need a little basic
knowledge of graph theory and mathematical (especially linear) programming to be
able to read the book comfortably, even though the authors have been careful to give
definitions of all the concepts they use in their chapters. In any case, to improve
their knowledge of graph theory, readers are invited to consult a great, flagship book
from one of our gurus, Claude Berge: Graphs and Hypergraphs, North Holland,
1973. For linear programming, there is a multitude of good books that the reader
could consult, for example V. Chvátal, Linear Programming, W.H. Freeman, 1983,
or M. Minoux, Programmation mathématique: théorie et algorithmes, Dunod, 1983.
Editing this book has been an exciting adventure, and all my thanks go, firstly, to
the authors who, despite their many responsibilities and commitments (which is the lot of any university academic), have agreed to participate in the book by writing
chapters in their areas of expertise and, at the same time, to take part in a very tricky
exercise: writing chapters that are both educational and high-level science at the
same time.
This work could never have come into being without the original proposal of
Jean-Charles Pomerol, Vice President of the scientific committee at Hermes, and
Sami Ménascé and Raphaël Ménascé, the heads of publications at ISTE. I give my
warmest thanks to them for their insistence and encouragement. It is a pleasure to
work with them as well as with Rupert Heywood, who has ingeniously translated the
material in this book from the original French.
Vangelis Th. PASCHOS
June 2010


PART I

Complexity of Combinatorial
Optimization Problems


Chapter 1

Basic Concepts in Algorithms
and Complexity Theory

1.1. Algorithmic complexity
In algorithmic theory, a problem is a general question to which we wish to find an
answer. This question usually has parameters or variables the values of which have
yet to be determined. A problem is posed by giving a list of these parameters as well
as the properties to which the answer must conform. An instance of a problem is
obtained by giving explicit values to each of the parameters of the instanced problem.
An algorithm is a sequence of elementary operations (variable assignments, tests, branchings, etc.) that, when given an instance of a problem as input, gives the solution of this problem as output after execution of the final operation.
The two most important parameters for measuring the quality of an algorithm are:
its execution time and the memory space that it uses. The first parameter is expressed
in terms of the number of instructions necessary to run the algorithm. The use of the
number of instructions as a unit of time is justified by the fact that the same program
will use the same number of instructions on two different machines but the time taken
will vary, depending on the respective speeds of the machines. We generally consider
that an instruction equates to an elementary operation, for example an assignment, a
test, an addition, a multiplication, a trace, etc. What we call the complexity in time or
simply the complexity of an algorithm gives us an indication of the time it will take to
solve a problem of a given size. In reality this is a function that associates an order of magnitude¹ of the number of instructions necessary for the solution of a given problem with the size of an instance of that problem. The second parameter corresponds to the number of memory units used by the algorithm to solve a problem. The complexity in space is a function that associates an order of magnitude of the number of memory units used for the operations necessary for the solution of a given problem with the size of an instance of that problem.

Chapter written by Vangelis Th. PASCHOS.
There are several sets of hypotheses concerning the “standard configuration” that
we use as a basis for measuring the complexity of an algorithm. The most commonly
used framework is the one known as “worst-case”. Here, the complexity of an algorithm is the number of operations carried out on the instance that represents the worst
configuration, amongst those of a fixed size, for its execution; this is the framework
used in most of this book. However, it is not the only framework for analyzing the
complexity of an algorithm. Another framework often used is “average analysis”.
This kind of analysis consists of finding, for a fixed size (of the instance) n, the average execution time of an algorithm on all the instances of size n; we assume that for
this analysis the probability of each instance occurring follows a specific distribution
pattern. More often than not, this distribution pattern is considered to be uniform.
There are three main reasons for the worst-case analysis being used more often than
the average analysis. The first is psychological: the worst-case result tells us for certain that the algorithm being analyzed can never have a level of complexity higher
than that shown by this analysis; in other words, the result we have obtained gives
us an upper bound on the complexity of our algorithm. The second reason is mathematical: results from a worst-case analysis are often easier to obtain than those from
an average analysis, which very often requires mathematical tools and more complex
analysis. The third reason is “analysis portability”: the validity of an average analysis
is limited by the assumptions made about the distribution pattern of the instances; if
the assumptions change, then the original analysis is no longer valid.
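To make the contrast between the two frameworks concrete, here is a small illustrative sketch in Python (ours, not the chapter's; all names are our own): it counts the elementary comparisons performed by insertion sort on the worst-case instance of size n, and averages the same count over uniformly drawn instances, mirroring the two analyses described above.

    import random

    def insertion_sort_comparisons(a):
        # Sort a copy of `a`, returning the number of comparisons performed.
        a = list(a)
        comparisons = 0
        for i in range(1, len(a)):
            key, j = a[i], i - 1
            while j >= 0:
                comparisons += 1          # one elementary operation
                if a[j] > key:
                    a[j + 1] = a[j]
                    j -= 1
                else:
                    break
            a[j + 1] = key
        return comparisons

    n = 200
    worst = insertion_sort_comparisons(range(n, 0, -1))   # decreasing input: the worst case
    average = sum(insertion_sort_comparisons(random.sample(range(n), n))
                  for _ in range(100)) / 100              # uniform distribution over instances
    print(worst, average)  # roughly n(n-1)/2 versus roughly n^2/4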
1.2. Problem complexity
The definition of the complexity of an algorithm can be easily transposed to problems. Informally, the complexity of a problem is equal to the complexity of the best
algorithm that solves it (this definition is valid independently of which framework we
use).
Let us take a size n and a function f(n). Thus:
– TIME_{f(n)} is the class of problems for which the complexity (in time) of an instance of size n is in O(f(n));
– SPACE_{f(n)} is the class of problems that can be solved, for an instance of size n, by using a memory space of O(f(n)).

1. Orders of magnitude are defined as follows: given two functions f and g: f = O(g) if and only if ∃(k, k′) ∈ (R⁺, R) such that f ≤ kg + k′; f = o(g) if and only if lim_{n→∞}(f/g) = 0; f = Ω(g) if and only if g = o(f); f = Θ(g) if and only if f = O(g) and g = O(f).
We can now specify the following general classes of complexity:
– P is the class of all the problems that can be solved in a time that is a polynomial function of their instance size, that is P = ∪_{k=0}^{∞} TIME_{n^k}.
– EXPTIME is the class of problems that can be solved in a time that is an exponential function of their instance size, that is EXPTIME = ∪_{k=0}^{∞} TIME_{2^{n^k}}.
– PSPACE is the class of all the problems that can be solved using a memory space that is a polynomial function of their instance size, that is PSPACE = ∪_{k=0}^{∞} SPACE_{n^k}.

With respect to the classes that we have just defined, we have the following relations: P ⊆ PSPACE ⊆ EXPTIME and P ⊂ EXPTIME. Knowing whether the inclusions of the first relation are strict or not is still an open problem.

Almost all combinatorial optimization problems can be classified, from an algorithmic complexity point of view, into two large categories. Polynomial problems can be solved optimally by algorithms of polynomial complexity, that is in O(n^k), where k is a constant independent of n (this is the class P that we have already defined). Non-polynomial problems are those for which the best algorithms (those giving an optimum solution) are of a “super-polynomial” complexity, that is in O(f(n)^{g(n)}), where f and g are increasing functions in n and lim_{n→∞} g(n) = ∞. All these problems are contained in the class EXPTIME.
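The gulf between the two categories is easy to see numerically; the following throwaway lines (ours, not the book's) compare a polynomial step count n^3 with a typical super-polynomial one, 2^n, as the instance size grows:

    for n in (10, 20, 40, 80):
        # a polynomial bound versus an exponential one
        print(f"n={n:3d}  n^3={n**3:7d}  2^n={2**n:25d}")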
The definition of any algorithmic problem (and even more so in the case of any
combinatorial optimization problem) comprises two parts. The first gives the instance
of the problem, that is the type of its variables (a graph, a set, a logical expression,
etc.). The second part gives the type and properties to which the expected solution
must conform. In the complexity theory case, algorithmic problems can be classified
into three categories:
– decision problems;
– optimum value calculation problems;
– optimization problems.
Decision problems are questions concerning the existence, for a given instance, of
a configuration such that this configuration itself, or its value, conforms to certain
properties. The solution to a decision problem is an answer to the question associated
with the problem. In other words, this solution can be:
– either “yes, such a configuration does exist”;
– or “no, such a configuration does not exist”.


Let us consider as an example the conjunctive normal form satisfiability problem, known in the literature as SAT: “Given a set U of n Boolean variables x_1, ..., x_n and a set C of m clauses² C_1, ..., C_m, is there a model for the expression φ = C_1 ∧ ... ∧ C_m, i.e. is there an assignment of the values 0 or 1 to the variables such that φ = 1?”. For an instance φ of this problem, if φ allows a model then the solution (the correct answer) is yes, otherwise the solution is no.
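As an illustration (a sketch of ours, not part of the text), an instance of SAT can be stored as a list of clauses, each clause a list of signed variable indices (+i for x_i, -i for its negation), and the question answered, in exponential time, by exhaustive search over the 2^n assignments:

    from itertools import product

    def sat_decide(n, clauses):
        # Decide SAT by brute force over all 2^n assignments of 0/1 values.
        for values in product((0, 1), repeat=n):
            def holds(lit):
                v = values[abs(lit) - 1]
                return v == 1 if lit > 0 else v == 0
            if all(any(holds(lit) for lit in clause) for clause in clauses):
                return True               # "yes": a model exists
        return False                      # "no": no assignment satisfies every clause

    # phi = (x1 v not-x2) ^ (not-x1 v x2) ^ (x2 v x3)
    print(sat_decide(3, [[1, -2], [-1, 2], [2, 3]]))  # True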
Let us now consider the MIN TSP problem, defined as follows: given a complete graph K_n over n vertices for which each edge e ∈ E(K_n) has a value d(e) > 0, we are looking for a Hamiltonian cycle H ⊂ E (a connected partial graph in which every vertex has degree 2) that minimizes the quantity Σ_{e∈H} d(e). Let us assume that for this problem we have, as well as the complete graph K_n and the vector d of the costs on the edges of K_n, a constant K, and that we are looking not to determine the smallest (in terms of total cost) Hamiltonian cycle, but rather to answer the following question: “Does there exist a Hamiltonian cycle of total distance less than or equal to K?”. Here, once more, the solution is either yes if such a cycle exists, or no if it does not.
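This decision variant can likewise be answered, in exponential time, by enumerating the Hamiltonian cycles of K_n; the little sketch below is our own, with d given as a symmetric distance matrix:

    from itertools import permutations

    def tsp_decide(d, K):
        # Is there a Hamiltonian cycle of total length at most K?
        n = len(d)
        for order in permutations(range(1, n)):       # fix vertex 0, permute the others
            tour = (0,) + order
            length = sum(d[tour[i]][tour[(i + 1) % n]] for i in range(n))
            if length <= K:
                return True                           # yes: such a cycle exists
        return False                                  # no

    d = [[0, 2, 9, 10], [2, 0, 6, 4], [9, 6, 0, 3], [10, 4, 3, 0]]
    print(tsp_decide(d, 18), tsp_decide(d, 17))       # True False (the optimum is 18)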
For optimum value calculation problems, we are looking to calculate the value of
the optimum solution (and not the solution itself).
In the case of the MIN TSP for example, the optimum associated value calculation
problem comes down to calculating the cost of the smallest Hamiltonian cycle, and
not the cycle itself.
Optimization problems, which are naturally of interest to us in this book, are those for which we are looking to establish the best solution amongst those satisfying certain properties given by the very definition of the problem. An optimization problem may be seen as a mathematical program of the form:

    opt v(x)
    x ∈ C_I

where x is the vector describing the solution³, v(x) is the objective function, C_I is the problem's constraint set, set out for the instance I (in other words, C_I sets out both the instance and the properties of the solution that we are looking to find for this instance), and opt ∈ {max, min}. An optimum solution of I is a vector x* ∈ argopt{v(x) : x ∈ C_I}. The quantity v(x*) is known as the objective value or value of the problem. A solution x ∈ C_I is known as a feasible solution.

2. We associate two literals x and x̄ with a Boolean variable x, that is the variable itself and its negation; a clause is a disjunction of literals.
3. For the combinatorial optimization problems that concern us, we can assume that the components of this vector are 0 or 1 or, if need be, integers.


Let us consider the problem MIN WEIGHTED VERTEX COVER⁴. An instance of this problem (given by the information from the incidence matrix A, of dimension m × n, of a graph G(V, E) of order n with |E| = m, and a vector w, of dimension n, of the weights of the vertices of V) can be expressed in terms of a linear program in integers as follows:

    min w · x
    A · x ≥ 1
    x ∈ {0, 1}^n

where x is a vector from {0, 1} of dimension n such that x_i = 1 if the vertex v_i ∈ V is included in the solution and x_i = 0 if it is not. The block of m constraints A · x ≥ 1 expresses the fact that for each edge at least one of its endpoints must be included in the solution. The feasible solutions are all the transversals of G, and the optimum solution is a transversal of G of minimum total weight, that is a transversal corresponding to a feasible vector x that minimizes the quantity w · x.

4. Given a graph G(V, E) of order n, in the MIN VERTEX COVER problem we are looking for a smallest transversal of G, that is a set V′ ⊆ V of minimum size such that for every edge (u, v) ∈ E, either u ∈ V′ or v ∈ V′; we denote by MIN WEIGHTED VERTEX COVER the version of MIN VERTEX COVER where a positive weight is associated with each vertex and the objective is to find a transversal of G that minimizes the sum of the vertex weights.
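The program above can be solved naively by testing the constraint block A · x ≥ 1 on every binary vector; the sketch below (ours) does exactly that, with the m constraints written directly from the edge list instead of an explicit incidence matrix:

    from itertools import product

    def min_weighted_vertex_cover(n, edges, w):
        # Brute-force min w.x subject to x_u + x_v >= 1 for every edge (u, v).
        best, best_x = float("inf"), None
        for x in product((0, 1), repeat=n):
            if all(x[u] + x[v] >= 1 for (u, v) in edges):   # A.x >= 1: every edge covered
                cost = sum(wi * xi for wi, xi in zip(w, x))
                if cost < best:
                    best, best_x = cost, x
        return best, best_x

    # path 0-1-2-3 with vertex weights 2, 1, 1, 2: the cover {1, 2} has weight 2
    print(min_weighted_vertex_cover(4, [(0, 1), (1, 2), (2, 3)], [2, 1, 1, 2]))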
The solution to an optimization problem includes an evaluation of the optimum
value. Therefore, an optimum value calculation problem can be associated with an
optimization problem. Moreover, optimization problems always have a decisional
variant as shown in the MIN TSP example above.
1.3. The classes P, NP and NPO

Let us consider a decision problem Π. If for any instance I of Π a solution (that is a correct answer to the question that states Π) of I can be found algorithmically in polynomial time, that is in O(|I|^k) stages, where |I| is the size of I, then Π is called a polynomial problem and the algorithm that solves it a polynomial algorithm (let us remember that polynomial problems make up the class P).
For reasons of simplicity, we will assume in what follows that the solution to a
decision problem is:
– either “yes, such a solution exists, and this is it”;
– or “no, such a solution does not exist”.
In other words, if, to solve a problem, we could consult an “oracle”, it would provide us with an answer of not just a yes or no but also, in the first case, a certificate proving the veracity of the yes. This certificate is simply a solution proposal that the oracle “asserts” as being the real solution to our problem.
Let us consider the decision problems for which the validity of the certificate can
be verified in polynomial time. These problems form the class NP.
DEFINITION 1.1.– A decision problem Π is in NP if the validity of all solutions of Π
is verifiable in polynomial time.
For example, the SAT problem belongs to NP. Indeed, given the assignment of the
values 0, 1 to the variables of an instance φ of this problem, we can, with at most nm
applications of the connector ∨, decide whether the proposed assignment is a model
for φ, that is whether it satisfies all the clauses.
Therefore, we can easily see that the decisional variant of MIN TSP seen previously
also belongs to NP.
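Membership of NP is precisely the statement that a certificate, here a proposed cycle, can be checked quickly; one possible verifier (our sketch, with names of our own choosing) runs in time polynomial, indeed linear, in n given the distance matrix:

    def verify_tsp_certificate(d, K, tour):
        # Certificate check, not a search: is `tour` a Hamiltonian cycle
        # of total length at most K?
        n = len(d)
        if sorted(tour) != list(range(n)):            # each vertex visited exactly once
            return False
        length = sum(d[tour[i]][tour[(i + 1) % n]] for i in range(n))
        return length <= K

    d = [[0, 2, 9, 10], [2, 0, 6, 4], [9, 6, 0, 3], [10, 4, 3, 0]]
    print(verify_tsp_certificate(d, 18, [0, 1, 3, 2]))  # True: 2 + 4 + 3 + 9 = 18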
Definition 1.1 can be extended to optimization problems. Let us consider an optimization problem Π and let us assume that each instance I of Π conforms to the three
following properties:
1) The feasibility of a solution can be verified in polynomial time.
2) The value of a feasible solution can be calculated in polynomial time.
3) There is at least one feasible solution that can be calculated in polynomial time.
Thus, Π belongs to the class NPO. In other words, the class NPO is the class of
optimization problems for which the decisional variant is in NP. We can therefore
define the class PO of optimization problems for which the decisional variant belongs
to the class P. In other words, PO is the class of problems that can be optimally solved
in polynomial time.
We note that the class NP has been defined (see definition 1.1) without explicit
reference to the optimum solution of its problems, but by reference to the verification
of a given solution. Evidently, the condition of belonging to P being stronger than
that of belonging to NP (what can be solved can be verified), we have the obvious
inclusion P ⊆ NP (Figure 1.1).
“What is the complexity of the problems in NP \ P?”. The best general result on
the complexity of the solution of problems from NP is as follows [GAR 79].
THEOREM 1.1.– For any problem Π ∈ NP, there is a polynomial p_Π such that each instance I of Π can be solved by an algorithm of complexity O(2^{p_Π(|I|)}).
In fact, theorem 1.1 merely gives an upper limit on the complexity of problems in NP, but no lower limit. The diagram in Figure 1.1 is nothing more than a conjecture, and although almost all researchers in complexity are completely convinced of its veracity, it has still not been proved. The question “Is P equal to or different from NP?” is the biggest open question in computer science and one of the best known in mathematics.

[Figure 1.1. P and NP (under the assumption P ≠ NP)]

1.4. Karp and Turing reductions
As we have seen, problems in NP \ P are considered to be algorithmically more
difficult than problems in P. A large number of problems in NP \ P are very strongly
bound to each other through the concept of polynomial reduction.
The principle of reducing a problem Π to a problem Π′ consists of considering the
problem Π as a specific case of Π′ , modulo a slight transformation. If this transformation is polynomial, and we know that we can solve Π′ in polynomial time, we will
also be able to solve Π in polynomial time. Reduction is thus a means of transferring
the result of solving one problem to another; in the same way it is a tool for classifying
problems according to the level of difficulty of their solution.
We will start with the classic Karp reduction (for the class NP) [GAR 79, KAR 72]. This links two decision problems by the possibility of their optimum (and simultaneous) solution in polynomial time. In the following, given a problem Π, let I_Π be the set of all of its instances (we assume that each instance I ∈ I_Π is identifiable in polynomial time in |I|). Let O_Π be the subset of I_Π for which the solution is yes; O_Π is also known as the set of yes-instances (or positive instances) of Π.

DEFINITION 1.2.– Given two decision problems Π1 and Π2, a Karp reduction (or polynomial transformation) is a function f : I_{Π1} → I_{Π2}, which can be calculated in polynomial time, such that, given a solution for f(I), we are able to find a solution for I in polynomial time in |I| (the size of the instance I).


A Karp reduction of a decision problem Π1 to a decision problem Π2 implies the existence of an algorithm A1 for Π1 that uses an algorithm A2 for Π2. Given any instance I1 ∈ I_{Π1}, the algorithm A1 constructs an instance I2 ∈ I_{Π2}; it executes the algorithm A2, which calculates a solution on I2, then A1 transforms this solution into a solution for Π1 on I1. If A2 is polynomial, then A1 is also polynomial.
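A concrete instance of this scheme (our illustration; the chapter's own NP-complete examples come in section 1.6) is the classic reduction between the decision versions of MIN VERTEX COVER and MAX STABLE: a graph G of order n has a vertex cover of size at most k if and only if it has a stable set of size at least n - k, and a solution of one problem is mapped back to the other by complementation:

    def f(instance):
        # Karp transformation: a VERTEX COVER instance (G, k) becomes
        # a STABLE SET instance (G, n - k) on the same graph.
        n, edges, k = instance
        return (n, edges, n - k)

    def g(instance, stable_set):
        # Pull a solution of f(I) back to a solution of I: the complement
        # of a stable set is a vertex cover.
        n, edges, k = instance
        return set(range(n)) - set(stable_set)

    # triangle on {0, 1, 2} with k = 2: the stable set {2} yields the cover {0, 1}
    I = (3, [(0, 1), (1, 2), (0, 2)], 2)
    print(f(I), g(I, [2]))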
Following on from this, we can state another reduction, known in the literature as the Turing reduction, which is better adapted to optimization problems. In what follows, we define a problem Π as a couple (I_Π, Sol_Π), where Sol_Π is the set of solutions for Π (we denote by Sol_Π(I) the set of solutions for the instance I ∈ I_Π).

DEFINITION 1.3.– A Turing reduction of a problem Π1 to a problem Π2 is an algorithm A1 that solves Π1 by using (possibly several times) an algorithm A2 for Π2 in such a way that, if A2 is polynomial, then A1 is also polynomial.
The Karp and Turing reductions are transitive: if Π1 is reduced (by one of these
two reductions) to Π2 and Π2 is reduced to Π3 , then Π1 reduces to Π3 . We can
therefore see that both reductions preserve membership of the class P in the sense that
if Π reduces to Π′ and Π′ ∈ P, then Π ∈ P.
For more details on both the Karp and Turing reductions refer to [AUS 99, GAR 79,
PAS 04]. In Garey and Johnson [GAR 79] (Chapter 5) there is also a very interesting
historical summary of the development of the ideas and terms that have led to the
structure of complexity theory as we know it today.

1.5. NP-completeness
From the definition of the two reductions in the preceding section, if Π′ reduces
to Π, then Π can reasonably be considered as at least as difficult as Π′ (regarding
their solution by polynomial algorithms), in the sense that a polynomial algorithm
for Π would have sufficed to solve not only Π itself, but equally Π′ . Let us confine
ourselves to the Karp reduction. By using it, we can highlight a problem Π* ∈ NP such that any other problem Π ∈ NP reduces to Π* [COO 71, GAR 79, FAG 74]. Such a problem is, as we have mentioned, the most difficult problem of the class NP. Therefore we can show [GAR 79, KAR 72] that there are other problems Π*′ ∈ NP such that Π* reduces to Π*′. In this way we expose a family of problems such that
any problem in NP reduces (remembering that the Karp reduction is transitive) to one
of its problems. This family has, of course, the following properties:
– It is made up of the most difficult problems of NP.
– A polynomial algorithm for at least one of its problems would have been sufficient to solve, in polynomial time, all the other problems of this family (and indeed
any problem in NP).


The problems from this family are NP-complete problems and the class of these
problems is called the NP-complete class.
DEFINITION 1.4.– A decision problem Π is NP-complete if, and only if, it fulfills the following two conditions:
1) Π ∈ NP;
2) ∀Π′ ∈ NP, Π′ reduces to Π by a Karp reduction.
Of course, a notion of NP-completeness very similar to that of definition 1.4 can
also be based on the Turing reduction.
The following application of definition 1.3 is very often used to show the NP-completeness of a problem. Let Π1 = (I_{Π1}, Sol_{Π1}) and Π2 = (I_{Π2}, Sol_{Π2}) be two problems, and let (f, g) be a pair of functions, which can be calculated in polynomial time, where:
– f : I_{Π1} → I_{Π2} is such that for any instance I ∈ I_{Π1}, f(I) ∈ I_{Π2};
– g : I_{Π1} × Sol_{Π2} → Sol_{Π1} is such that for every pair (I, S) ∈ (I_{Π1} × Sol_{Π2}(f(I))), g(I, S) ∈ Sol_{Π1}(I).

Let us assume that there is a polynomial algorithm A for the problem Π2. In this case, the algorithm f ◦ A ◦ g is a (polynomial) Turing reduction.
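In code, this construction is a three-stage pipeline; the sketch below (ours) wires together any polynomial transformation f, algorithm A for Π2 and solution translation g into an algorithm for Π1, exactly as in the definition:

    def turing_reduction(f, A, g):
        # Build an algorithm for Pi_1 from an algorithm A for Pi_2:
        # solve the transformed instance f(I) with A, then pull the
        # solution back with g(I, .). If f, A and g are polynomial,
        # so is the resulting algorithm.
        def solve(I):
            return g(I, A(f(I)))
        return solve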
A problem that fulfills condition 2 of definition 1.4 (without necessarily fulfilling condition 1) is called NP-hard⁵. It follows that a decision problem Π is NP-complete if and only if it belongs to NP and it is NP-hard.
With the class NP-complete, we can further refine (Figure 1.2) the world of NP.
Of course, if P = NP, the th