Introduction to Algorithms
Thomas H. Cormen, Charles E. Leiserson, Ronald L. Rivest, Clifford Stein
Some books on algorithms are rigorous but incomplete; others cover masses of material but lack rigor. Introduction to Algorithms uniquely combines rigor and comprehensiveness. The book covers a broad range of algorithms in depth, yet makes their design and analysis accessible to all levels of readers. Each chapter is relatively self-contained and can be used as a unit of study. The algorithms are described in English and in a pseudocode designed to be readable by anyone who has done a little programming. The explanations have been kept elementary without sacrificing depth of coverage or mathematical rigor.
The first edition became a widely used text in universities worldwide as well as the standard reference for professionals. The second edition featured new chapters on the role of algorithms, probabilistic analysis and randomized algorithms, and linear programming. The third edition has been revised and updated throughout. It includes two completely new chapters, on van Emde Boas trees and multithreaded algorithms, substantial additions to the chapter on recurrences (now called "Divide-and-Conquer"), and an appendix on matrices. It features improved treatment of dynamic programming and greedy algorithms and a new notion of edge-based flow in the material on flow networks. Many new exercises and problems have been added for this edition. As of the third edition, this textbook is published exclusively by the MIT Press.
Year: 2009
Edition: 3
Publisher: The MIT Press
Language: English
Pages: 1312 / 1313
ISBN 10: 026225946X
ISBN 13: 9780262259460
File: PDF, 4.84 MB
13 comments
WebQuake
Great book, I have one. It's worth buying in the printed version.
26 January 2020 (16:57)
asher1101
Thanks for this wonderful book.
22 March 2020 (19:01)
Vincent foster
When do I get the book?
12 April 2020 (00:35)
Ashraful
Good book. Recommended by most people. Really praiseworthy work by the authors.
18 December 2020 (11:51)
Brayo
The perfect book for learning algorithms, might be a little advanced for a complete beginner tho
05 April 2021 (21:08)
r12esh
Haha. Zlib is like your friend who always has notes which you didn't take during your classes. I fuxing love this site!
19 April 2021 (02:11)
Azat
Guys, are these books legal? I couldn't find any info regarding it on the website.
13 May 2021 (07:18)
VD
It is illegal, but you won't get into any trouble.
16 May 2021 (21:36)
felixcn
Zlib is really a great website.
15 June 2021 (08:16)
Wang
This is awesomeeeeee
16 August 2021 (15:37)
star
excellent! thanks a lot to the contributor!
17 August 2021 (03:35)
Sergi
where can i get the new edition?
10 September 2021 (00:12)
ilikecheese
i like cheese
cheese.
cheese.
06 December 2021 (21:53)
Introduction to Algorithms, Third Edition
Thomas H. Cormen, Charles E. Leiserson, Ronald L. Rivest, Clifford Stein
The MIT Press, Cambridge, Massachusetts; London, England

© 2009 Massachusetts Institute of Technology. All rights reserved. No part of this book may be reproduced in any form or by any electronic or mechanical means (including photocopying, recording, or information storage and retrieval) without permission in writing from the publisher. For information about special quantity discounts, please email special_sales@mitpress.mit.edu. This book was set in Times Roman and MathTime Pro 2 by the authors. Printed and bound in the United States of America.

Library of Congress Cataloging-in-Publication Data: Introduction to algorithms / Thomas H. Cormen . . . [et al.]. 3rd ed. Includes bibliographical references and index. ISBN 978-0-262-03384-8 (hardcover : alk. paper); ISBN 978-0-262-53305-8 (pbk. : alk. paper). 1. Computer programming. 2. Computer algorithms. I. Cormen, Thomas H. QA76.6.I5858 2009 005.1 dc22 2009008593

Contents

Preface

I Foundations
Introduction
1 The Role of Algorithms in Computing: 1.1 Algorithms; 1.2 Algorithms as a technology
2 Getting Started: 2.1 Insertion sort; 2.2 Analyzing algorithms; 2.3 Designing algorithms
3 Growth of Functions: 3.1 Asymptotic notation; 3.2 Standard notations and common functions
4 Divide-and-Conquer: 4.1 The maximum-subarray problem; 4.2 Strassen's algorithm for matrix multiplication; 4.3 The substitution method for solving recurrences; 4.4 The recursion-tree method for solving recurrences; 4.5 The master method for solving recurrences; 4.6 Proof of the master theorem
5 Probabilistic Analysis and Randomized Algorithms: 5.1 The hiring problem; 5.2 Indicator random variables; 5.3 Randomized algorithms; 5.4 Probabilistic analysis and further uses of indicator random variables

II Sorting and Order Statistics
Introduction
6 Heapsort: 6.1 Heaps; 6.2 Maintaining the heap property; 6.3 Building a heap; 6.4 The heapsort algorithm; 6.5 Priority queues
7 Quicksort: 7.1 Description of quicksort; 7.2 Performance of quicksort; 7.3 A randomized version of quicksort; 7.4 Analysis of quicksort
8 Sorting in Linear Time: 8.1 Lower bounds for sorting; 8.2 Counting sort; 8.3 Radix sort; 8.4 Bucket sort
9 Medians and Order Statistics: 9.1 Minimum and maximum; 9.2 Selection in expected linear time; 9.3 Selection in worst-case linear time

III Data Structures
Introduction
10 Elementary Data Structures: 10.1 Stacks and queues; 10.2 Linked lists; 10.3 Implementing pointers and objects; 10.4 Representing rooted trees
11 Hash Tables: 11.1 Direct-address tables; 11.2 Hash tables; 11.3 Hash functions; 11.4 Open addressing; 11.5 Perfect hashing
12 Binary Search Trees: 12.1 What is a binary search tree?; 12.2 Querying a binary search tree; 12.3 Insertion and deletion; 12.4 Randomly built binary search trees
13 Red-Black Trees: 13.1 Properties of red-black trees; 13.2 Rotations; 13.3 Insertion; 13.4 Deletion
14 Augmenting Data Structures: 14.1 Dynamic order statistics; 14.2 How to augment a data structure; 14.3 Interval trees

IV Advanced Design and Analysis Techniques
Introduction
15 Dynamic Programming: 15.1 Rod cutting; 15.2 Matrix-chain multiplication; 15.3 Elements of dynamic programming; 15.4 Longest common subsequence; 15.5 Optimal binary search trees
16 Greedy Algorithms: 16.1 An activity-selection problem; 16.2 Elements of the greedy strategy; 16.3 Huffman codes; 16.4 Matroids and greedy methods; 16.5 A task-scheduling problem as a matroid
17 Amortized Analysis: 17.1 Aggregate analysis; 17.2 The accounting method; 17.3 The potential method; 17.4 Dynamic tables

V Advanced Data Structures
Introduction
18 B-Trees: 18.1 Definition of B-trees; 18.2 Basic operations on B-trees; 18.3 Deleting a key from a B-tree
19 Fibonacci Heaps: 19.1 Structure of Fibonacci heaps; 19.2 Mergeable-heap operations; 19.3 Decreasing a key and deleting a node; 19.4 Bounding the maximum degree
20 van Emde Boas Trees: 20.1 Preliminary approaches; 20.2 A recursive structure; 20.3 The van Emde Boas tree
21 Data Structures for Disjoint Sets: 21.1 Disjoint-set operations; 21.2 Linked-list representation of disjoint sets; 21.3 Disjoint-set forests; 21.4 Analysis of union by rank with path compression

VI Graph Algorithms
Introduction
22 Elementary Graph Algorithms: 22.1 Representations of graphs; 22.2 Breadth-first search; 22.3 Depth-first search; 22.4 Topological sort; 22.5 Strongly connected components
23 Minimum Spanning Trees: 23.1 Growing a minimum spanning tree; 23.2 The algorithms of Kruskal and Prim
24 Single-Source Shortest Paths: 24.1 The Bellman-Ford algorithm; 24.2 Single-source shortest paths in directed acyclic graphs; 24.3 Dijkstra's algorithm; 24.4 Difference constraints and shortest paths; 24.5 Proofs of shortest-paths properties
25 All-Pairs Shortest Paths: 25.1 Shortest paths and matrix multiplication; 25.2 The Floyd-Warshall algorithm; 25.3 Johnson's algorithm for sparse graphs
26 Maximum Flow: 26.1 Flow networks; 26.2 The Ford-Fulkerson method; 26.3 Maximum bipartite matching; 26.4 Push-relabel algorithms; 26.5 The relabel-to-front algorithm
VII Selected Topics
Introduction
27 Multithreaded Algorithms: 27.1 The basics of dynamic multithreading; 27.2 Multithreaded matrix multiplication; 27.3 Multithreaded merge sort
28 Matrix Operations: 28.1 Solving systems of linear equations; 28.2 Inverting matrices; 28.3 Symmetric positive-definite matrices and least-squares approximation
29 Linear Programming: 29.1 Standard and slack forms; 29.2 Formulating problems as linear programs; 29.3 The simplex algorithm; 29.4 Duality; 29.5 The initial basic feasible solution
30 Polynomials and the FFT: 30.1 Representing polynomials; 30.2 The DFT and FFT; 30.3 Efficient FFT implementations
31 Number-Theoretic Algorithms: 31.1 Elementary number-theoretic notions; 31.2 Greatest common divisor; 31.3 Modular arithmetic; 31.4 Solving modular linear equations; 31.5 The Chinese remainder theorem; 31.6 Powers of an element; 31.7 The RSA public-key cryptosystem; 31.8 Primality testing; 31.9 Integer factorization
32 String Matching: 32.1 The naive string-matching algorithm; 32.2 The Rabin-Karp algorithm; 32.3 String matching with finite automata; 32.4 The Knuth-Morris-Pratt algorithm
33 Computational Geometry: 33.1 Line-segment properties; 33.2 Determining whether any pair of segments intersects; 33.3 Finding the convex hull; 33.4 Finding the closest pair of points
34 NP-Completeness: 34.1 Polynomial time; 34.2 Polynomial-time verification; 34.3 NP-completeness and reducibility; 34.4 NP-completeness proofs; 34.5 NP-complete problems
35 Approximation Algorithms: 35.1 The vertex-cover problem; 35.2 The traveling-salesman problem; 35.3 The set-covering problem; 35.4 Randomization and linear programming; 35.5 The subset-sum problem

VIII Appendix: Mathematical Background
Introduction
A Summations: A.1 Summation formulas and properties; A.2 Bounding summations
B Sets, Etc.: B.1 Sets; B.2 Relations; B.3 Functions; B.4 Graphs; B.5 Trees
C Counting and Probability: C.1 Counting; C.2 Probability; C.3 Discrete random variables; C.4 The geometric and binomial distributions; C.5 The tails of the binomial distribution
D Matrices: D.1 Matrices and matrix operations; D.2 Basic matrix properties

Bibliography
Index

Preface

Before there were computers, there were algorithms. But now that there are computers, there are even more algorithms, and algorithms lie at the heart of computing.

This book provides a comprehensive introduction to the modern study of computer algorithms. It presents many algorithms and covers them in considerable depth, yet makes their design and analysis accessible to all levels of readers. We have tried to keep explanations elementary without sacrificing depth of coverage or mathematical rigor.

Each chapter presents an algorithm, a design technique, an application area, or a related topic. Algorithms are described in English and in a pseudocode designed to be readable by anyone who has done a little programming. The book contains 244 figures, many with multiple parts, illustrating how the algorithms work. Since we emphasize efficiency as a design criterion, we include careful analyses of the running times of all our algorithms.

The text is intended primarily for use in undergraduate or graduate courses in algorithms or data structures.
Because it discusses engineering issues in algorithm design, as well as mathematical aspects, it is equally well suited for self-study by technical professionals.

In this, the third edition, we have once again updated the entire book. The changes cover a broad spectrum, including new chapters, revised pseudocode, and a more active writing style.

To the teacher

We have designed this book to be both versatile and complete. You should find it useful for a variety of courses, from an undergraduate course in data structures up through a graduate course in algorithms. Because we have provided considerably more material than can fit in a typical one-term course, you can consider this book to be a "buffet" or "smorgasbord" from which you can pick and choose the material that best supports the course you wish to teach.

You should find it easy to organize your course around just the chapters you need. We have made chapters relatively self-contained, so that you need not worry about an unexpected and unnecessary dependence of one chapter on another. Each chapter presents the easier material first and the more difficult material later, with section boundaries marking natural stopping points. In an undergraduate course, you might use only the earlier sections from a chapter; in a graduate course, you might cover the entire chapter.

We have included 957 exercises and 158 problems. Each section ends with exercises, and each chapter ends with problems. The exercises are generally short questions that test basic mastery of the material. Some are simple self-check thought exercises, whereas others are more substantial and are suitable as assigned homework. The problems are more elaborate case studies that often introduce new material; they often consist of several questions that lead the student through the steps required to arrive at a solution.

Departing from our practice in previous editions of this book, we have made publicly available solutions to some, but by no means all, of the problems and exercises. Our Web site, http://mitpress.mit.edu/algorithms/, links to these solutions. You will want to check this site to make sure that it does not contain the solution to an exercise or problem that you plan to assign. We expect the set of solutions that we post to grow slowly over time, so you will need to check it each time you teach the course.

We have starred (*) the sections and exercises that are more suitable for graduate students than for undergraduates. A starred section is not necessarily more difficult than an unstarred one, but it may require an understanding of more advanced mathematics. Likewise, starred exercises may require an advanced background or more than average creativity.

To the student

We hope that this textbook provides you with an enjoyable introduction to the field of algorithms. We have attempted to make every algorithm accessible and interesting. To help you when you encounter unfamiliar or difficult algorithms, we describe each one in a step-by-step manner. We also provide careful explanations of the mathematics needed to understand the analysis of the algorithms. If you already have some familiarity with a topic, you will find the chapters organized so that you can skim introductory sections and proceed quickly to the more advanced material.

This is a large book, and your class will probably cover only a portion of its material.
We have tried, however, to make this a book that will be useful to you now as a course textbook and also later in your career as a mathematical desk reference or an engineering handbook.

What are the prerequisites for reading this book? You should have some programming experience. In particular, you should understand recursive procedures and simple data structures such as arrays and linked lists. You should have some facility with mathematical proofs, and especially proofs by mathematical induction. A few portions of the book rely on some knowledge of elementary calculus. Beyond that, Parts I and VIII of this book teach you all the mathematical techniques you will need.

We have heard, loud and clear, the call to supply solutions to problems and exercises. Our Web site, http://mitpress.mit.edu/algorithms/, links to solutions for a few of the problems and exercises. Feel free to check your solutions against ours. We ask, however, that you do not send your solutions to us.

To the professional

The wide range of topics in this book makes it an excellent handbook on algorithms. Because each chapter is relatively self-contained, you can focus in on the topics that most interest you.

Most of the algorithms we discuss have great practical utility. We therefore address implementation concerns and other engineering issues. We often provide practical alternatives to the few algorithms that are primarily of theoretical interest.

If you wish to implement any of the algorithms, you should find the translation of our pseudocode into your favorite programming language to be a fairly straightforward task. We have designed the pseudocode to present each algorithm clearly and succinctly. Consequently, we do not address error-handling and other software-engineering issues that require specific assumptions about your programming environment. We attempt to present each algorithm simply and directly without allowing the idiosyncrasies of a particular programming language to obscure its essence.

We understand that if you are using this book outside of a course, then you might be unable to check your solutions to problems and exercises against solutions provided by an instructor. Our Web site, http://mitpress.mit.edu/algorithms/, links to solutions for some of the problems and exercises so that you can check your work. Please do not send your solutions to us.

To our colleagues

We have supplied an extensive bibliography and pointers to the current literature. Each chapter ends with a set of chapter notes that give historical details and references. The chapter notes do not provide a complete reference to the whole field of algorithms, however. Though it may be hard to believe for a book of this size, space constraints prevented us from including many interesting algorithms.

Despite myriad requests from students for solutions to problems and exercises, we have chosen as a matter of policy not to supply references for problems and exercises, to remove the temptation for students to look up a solution rather than to find it themselves.

Changes for the third edition

What has changed between the second and third editions of this book? The magnitude of the changes is on a par with the changes between the first and second editions. As we said about the second-edition changes, depending on how you look at it, the book changed either not much or quite a bit.

A quick look at the table of contents shows that most of the second-edition chapters and sections appear in the third edition.
We removed two chapters and one section, but we have added three new chapters and two new sections apart from these new chapters.

We kept the hybrid organization from the first two editions. Rather than organizing chapters by only problem domains or according only to techniques, this book has elements of both. It contains technique-based chapters on divide-and-conquer, dynamic programming, greedy algorithms, amortized analysis, NP-completeness, and approximation algorithms. But it also has entire parts on sorting, on data structures for dynamic sets, and on algorithms for graph problems. We find that although you need to know how to apply techniques for designing and analyzing algorithms, problems seldom announce to you which techniques are most amenable to solving them.

Here is a summary of the most significant changes for the third edition:

- We added new chapters on van Emde Boas trees and multithreaded algorithms, and we have broken out material on matrix basics into its own appendix chapter.
- We revised the chapter on recurrences to more broadly cover the divide-and-conquer technique, and its first two sections apply divide-and-conquer to solve two problems. The second section of this chapter presents Strassen's algorithm for matrix multiplication, which we have moved from the chapter on matrix operations.
- We removed two chapters that were rarely taught: binomial heaps and sorting networks. One key idea in the sorting networks chapter, the 0-1 principle, appears in this edition within Problem 8-7 as the 0-1 sorting lemma for compare-exchange algorithms. The treatment of Fibonacci heaps no longer relies on binomial heaps as a precursor.
- We revised our treatment of dynamic programming and greedy algorithms. Dynamic programming now leads off with a more interesting problem, rod cutting, than the assembly-line scheduling problem from the second edition. Furthermore, we emphasize memoization a bit more than we did in the second edition, and we introduce the notion of the subproblem graph as a way to understand the running time of a dynamic-programming algorithm. In our opening example of greedy algorithms, the activity-selection problem, we get to the greedy algorithm more directly than we did in the second edition.
- The way we delete a node from binary search trees (which includes red-black trees) now guarantees that the node requested for deletion is the node that is actually deleted. In the first two editions, in certain cases, some other node would be deleted, with its contents moving into the node passed to the deletion procedure. With our new way to delete nodes, if other components of a program maintain pointers to nodes in the tree, they will not mistakenly end up with stale pointers to nodes that have been deleted.
- The material on flow networks now bases flows entirely on edges. This approach is more intuitive than the net flow used in the first two editions.
- With the material on matrix basics and Strassen's algorithm moved to other chapters, the chapter on matrix operations is smaller than in the second edition.
- We have modified our treatment of the Knuth-Morris-Pratt string-matching algorithm.
- We corrected several errors. Most of these errors were posted on our Web site of second-edition errata, but a few were not.
- Based on many requests, we changed the syntax (as it were) of our pseudocode. We now use "=" to indicate assignment and "==" to test for equality, just as C, C++, Java, and Python do. Likewise, we have eliminated the keywords do and then and adopted "//" as our comment-to-end-of-line symbol.
  We also now use dot-notation to indicate object attributes. Our pseudocode remains procedural, rather than object-oriented. In other words, rather than running methods on objects, we simply call procedures, passing objects as parameters.
- We added 100 new exercises and 28 new problems. We also updated many bibliography entries and added several new ones.
- Finally, we went through the entire book and rewrote sentences, paragraphs, and sections to make the writing clearer and more active.

Web site

You can use our Web site, http://mitpress.mit.edu/algorithms/, to obtain supplementary information and to communicate with us. The Web site links to a list of known errors, solutions to selected exercises and problems, and (of course) a list explaining the corny professor jokes, as well as other content that we might add. The Web site also tells you how to report errors or make suggestions.

How we produced this book

Like the second edition, the third edition was produced in LaTeX 2e. We used the Times font with mathematics typeset using the MathTime Pro 2 fonts. We thank Michael Spivak from Publish or Perish, Inc., Lance Carnes from Personal TeX, Inc., and Tim Tregubov from Dartmouth College for technical support. As in the previous two editions, we compiled the index using Windex, a C program that we wrote, and the bibliography was produced with BibTeX. The PDF files for this book were created on a MacBook running OS 10.5.

We drew the illustrations for the third edition using MacDraw Pro, with some of the mathematical expressions in illustrations laid in with the psfrag package for LaTeX 2e. Unfortunately, MacDraw Pro is legacy software, having not been marketed for over a decade now. Happily, we still have a couple of Macintoshes that can run the Classic environment under OS 10.4, and hence they can run MacDraw Pro, mostly. Even under the Classic environment, we find MacDraw Pro to be far easier to use than any other drawing software for the types of illustrations that accompany computer-science text, and it produces beautiful output. (We investigated several drawing programs that run under Mac OS X, but all had significant shortcomings compared with MacDraw Pro. We briefly attempted to produce the illustrations for this book with a different, well known drawing program. We found that it took at least five times as long to produce each illustration as it took with MacDraw Pro, and the resulting illustrations did not look as good. Hence the decision to revert to MacDraw Pro running on older Macintoshes.) Who knows how long our pre-Intel Macs will continue to run, so if anyone from Apple is listening: Please create an OS X-compatible version of MacDraw Pro!

Acknowledgments for the third edition

We have been working with the MIT Press for over two decades now, and what a terrific relationship it has been! We thank Ellen Faran, Bob Prior, Ada Brunstein, and Mary Reilly for their help and support.

We were geographically distributed while producing the third edition, working in the Dartmouth College Department of Computer Science, the MIT Computer Science and Artificial Intelligence Laboratory, and the Columbia University Department of Industrial Engineering and Operations Research. We thank our respective universities and colleagues for providing such supportive and stimulating environments.

Julie Sussman, P.P.A., once again bailed us out as the technical copyeditor. Time and again, we were amazed at the errors that eluded us, but that Julie caught.
She also helped us improve our presentation in several places. If there is a Hall of Fame for technical copyeditors, Julie is a sure-fire, first-ballot inductee. She is nothing short of phenomenal. Thank you, thank you, thank you, Julie! Priya Natarajan also found some errors that we were able to correct before this book went to press. Any errors that remain (and undoubtedly, some do) are the responsibility of the authors (and probably were inserted after Julie read the material).

The treatment for van Emde Boas trees derives from Erik Demaine's notes, which were in turn influenced by Michael Bender. We also incorporated ideas from Javed Aslam, Bradley Kuszmaul, and Hui Zha into this edition.

The chapter on multithreading was based on notes originally written jointly with Harald Prokop. The material was influenced by several others working on the Cilk project at MIT, including Bradley Kuszmaul and Matteo Frigo. The design of the multithreaded pseudocode took its inspiration from the MIT Cilk extensions to C and by Cilk Arts's Cilk++ extensions to C++.

We also thank the many readers of the first and second editions who reported errors or submitted suggestions for how to improve this book. We corrected all the bona fide errors that were reported, and we incorporated as many suggestions as we could. We rejoice that the number of such contributors has grown so great that we must regret that it has become impractical to list them all.

Finally, we thank our wives (Nicole Cormen, Wendy Leiserson, Gail Rivest, and Rebecca Ivry) and our children (Ricky, Will, Debby, and Katie Leiserson; Alex and Christopher Rivest; and Molly, Noah, and Benjamin Stein) for their love and support while we prepared this book. The patience and encouragement of our families made this project possible. We affectionately dedicate this book to them.

Thomas H. Cormen, Lebanon, New Hampshire
Charles E. Leiserson, Cambridge, Massachusetts
Ronald L. Rivest, Cambridge, Massachusetts
Clifford Stein, New York, New York
February 2009

I Foundations

Introduction

This part will start you thinking about designing and analyzing algorithms. It is intended to be a gentle introduction to how we specify algorithms, some of the design strategies we will use throughout this book, and many of the fundamental ideas used in algorithm analysis. Later parts of this book will build upon this base.

Chapter 1 provides an overview of algorithms and their place in modern computing systems. This chapter defines what an algorithm is and lists some examples. It also makes a case that we should consider algorithms as a technology, alongside technologies such as fast hardware, graphical user interfaces, object-oriented systems, and networks.

In Chapter 2, we see our first algorithms, which solve the problem of sorting a sequence of n numbers. They are written in a pseudocode which, although not directly translatable to any conventional programming language, conveys the structure of the algorithm clearly enough that you should be able to implement it in the language of your choice. The sorting algorithms we examine are insertion sort, which uses an incremental approach, and merge sort, which uses a recursive technique known as "divide-and-conquer." Although the time each requires increases with the value of n, the rate of increase differs between the two algorithms. We determine these running times in Chapter 2, and we develop a useful notation to express them.
Chapter 3 precisely defines this notation, which we call asymptotic notation. It starts by defining several asymptotic notations, which we use for bounding algorithm running times from above and/or below. The rest of Chapter 3 is primarily a presentation of mathematical notation, more to ensure that your use of notation matches that in this book than to teach you new mathematical concepts.

Chapter 4 delves further into the divide-and-conquer method introduced in Chapter 2. It provides additional examples of divide-and-conquer algorithms, including Strassen's surprising method for multiplying two square matrices. Chapter 4 contains methods for solving recurrences, which are useful for describing the running times of recursive algorithms. One powerful technique is the "master method," which we often use to solve recurrences that arise from divide-and-conquer algorithms. Although much of Chapter 4 is devoted to proving the correctness of the master method, you may skip this proof yet still employ the master method.

Chapter 5 introduces probabilistic analysis and randomized algorithms. We typically use probabilistic analysis to determine the running time of an algorithm in cases in which, due to the presence of an inherent probability distribution, the running time may differ on different inputs of the same size. In some cases, we assume that the inputs conform to a known probability distribution, so that we are averaging the running time over all possible inputs. In other cases, the probability distribution comes not from the inputs but from random choices made during the course of the algorithm. An algorithm whose behavior is determined not only by its input but by the values produced by a random-number generator is a randomized algorithm. We can use randomized algorithms to enforce a probability distribution on the inputs, thereby ensuring that no particular input always causes poor performance, or even to bound the error rate of algorithms that are allowed to produce incorrect results on a limited basis.

Appendices A-D contain other mathematical material that you will find helpful as you read this book. You are likely to have seen much of the material in the appendix chapters before having read this book (although the specific definitions and notational conventions we use may differ in some cases from what you have seen in the past), and so you should think of the Appendices as reference material. On the other hand, you probably have not already seen most of the material in Part I. All the chapters in Part I and the Appendices are written with a tutorial flavor.

1 The Role of Algorithms in Computing

What are algorithms? Why is the study of algorithms worthwhile? What is the role of algorithms relative to other technologies used in computers? In this chapter, we will answer these questions.

1.1 Algorithms

Informally, an algorithm is any well-defined computational procedure that takes some value, or set of values, as input and produces some value, or set of values, as output. An algorithm is thus a sequence of computational steps that transform the input into the output.

We can also view an algorithm as a tool for solving a well-specified computational problem. The statement of the problem specifies in general terms the desired input/output relationship. The algorithm describes a specific computational procedure for achieving that input/output relationship.

For example, we might need to sort a sequence of numbers into nondecreasing order.
This problem arises frequently in practice and provides fertile ground for introducing many standard design techniques and analysis tools. Here is how we formally define the sorting problem:

Input: A sequence of n numbers ⟨a1, a2, ..., an⟩.
Output: A permutation (reordering) ⟨a'1, a'2, ..., a'n⟩ of the input sequence such that a'1 ≤ a'2 ≤ ... ≤ a'n.

For example, given the input sequence ⟨31, 41, 59, 26, 41, 58⟩, a sorting algorithm returns as output the sequence ⟨26, 31, 41, 41, 58, 59⟩. Such an input sequence is called an instance of the sorting problem. In general, an instance of a problem consists of the input (satisfying whatever constraints are imposed in the problem statement) needed to compute a solution to the problem.

Because many programs use it as an intermediate step, sorting is a fundamental operation in computer science. As a result, we have a large number of good sorting algorithms at our disposal. Which algorithm is best for a given application depends on, among other factors, the number of items to be sorted, the extent to which the items are already somewhat sorted, possible restrictions on the item values, the architecture of the computer, and the kind of storage devices to be used: main memory, disks, or even tapes.

An algorithm is said to be correct if, for every input instance, it halts with the correct output. We say that a correct algorithm solves the given computational problem. An incorrect algorithm might not halt at all on some input instances, or it might halt with an incorrect answer. Contrary to what you might expect, incorrect algorithms can sometimes be useful, if we can control their error rate. We shall see an example of an algorithm with a controllable error rate in Chapter 31 when we study algorithms for finding large prime numbers. Ordinarily, however, we shall be concerned only with correct algorithms.

An algorithm can be specified in English, as a computer program, or even as a hardware design. The only requirement is that the specification must provide a precise description of the computational procedure to be followed.
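The input/output specification above is precise enough to check mechanically. As a minimal illustration (our own sketch, not from the book), here is a Python function that tests whether a proposed output is a correct solution to a given instance of the sorting problem:

```python
from collections import Counter

def is_correct_sorting_output(instance, output):
    """Check the two requirements of the sorting problem: the output is a
    permutation of the input instance, and it is in nondecreasing order."""
    is_permutation = Counter(instance) == Counter(output)
    is_nondecreasing = all(output[i] <= output[i + 1]
                           for i in range(len(output) - 1))
    return is_permutation and is_nondecreasing

# The book's example instance and its sorted output:
assert is_correct_sorting_output([31, 41, 59, 26, 41, 58],
                                 [26, 31, 41, 41, 58, 59])
```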
What kinds of problems are solved by algorithms?

Sorting is by no means the only computational problem for which algorithms have been developed. (You probably suspected as much when you saw the size of this book.) Practical applications of algorithms are ubiquitous and include the following examples:

The Human Genome Project has made great progress toward the goals of identifying all the 100,000 genes in human DNA, determining the sequences of the 3 billion chemical base pairs that make up human DNA, storing this information in databases, and developing tools for data analysis. Each of these steps requires sophisticated algorithms. Although the solutions to the various problems involved are beyond the scope of this book, many methods to solve these biological problems use ideas from several of the chapters in this book, thereby enabling scientists to accomplish tasks while using resources efficiently. The savings are in time, both human and machine, and in money, as more information can be extracted from laboratory techniques.

The Internet enables people all around the world to quickly access and retrieve large amounts of information. With the aid of clever algorithms, sites on the Internet are able to manage and manipulate this large volume of data. Examples of problems that make essential use of algorithms include finding good routes on which the data will travel (techniques for solving such problems appear in Chapter 24), and using a search engine to quickly find pages on which particular information resides (related techniques are in Chapters 11 and 32).

Electronic commerce enables goods and services to be negotiated and exchanged electronically, and it depends on the privacy of personal information such as credit card numbers, passwords, and bank statements. The core technologies used in electronic commerce include public-key cryptography and digital signatures (covered in Chapter 31), which are based on numerical algorithms and number theory.

Manufacturing and other commercial enterprises often need to allocate scarce resources in the most beneficial way. An oil company may wish to know where to place its wells in order to maximize its expected profit. A political candidate may want to determine where to spend money buying campaign advertising in order to maximize the chances of winning an election. An airline may wish to assign crews to flights in the least expensive way possible, making sure that each flight is covered and that government regulations regarding crew scheduling are met. An Internet service provider may wish to determine where to place additional resources in order to serve its customers more effectively. All of these are examples of problems that can be solved using linear programming, which we shall study in Chapter 29.

Although some of the details of these examples are beyond the scope of this book, we do give underlying techniques that apply to these problems and problem areas. We also show how to solve many specific problems, including the following:

We are given a road map on which the distance between each pair of adjacent intersections is marked, and we wish to determine the shortest route from one intersection to another. The number of possible routes can be huge, even if we disallow routes that cross over themselves. How do we choose which of all possible routes is the shortest? Here, we model the road map (which is itself a model of the actual roads) as a graph (which we will meet in Part VI and Appendix B), and we wish to find the shortest path from one vertex to another in the graph. We shall see how to solve this problem efficiently in Chapter 24.

We are given two ordered sequences of symbols, X = ⟨x1, x2, ..., xm⟩ and Y = ⟨y1, y2, ..., yn⟩, and we wish to find a longest common subsequence of X and Y. A subsequence of X is just X with some (or possibly all or none) of its elements removed. For example, one subsequence of ⟨A, B, C, D, E, F, G⟩ would be ⟨B, C, E, G⟩. The length of a longest common subsequence of X and Y gives one measure of how similar these two sequences are. For example, if the two sequences are base pairs in DNA strands, then we might consider them similar if they have a long common subsequence. If X has m symbols and Y has n symbols, then X and Y have 2^m and 2^n possible subsequences, respectively. Selecting all possible subsequences of X and Y and matching them up could take a prohibitively long time unless m and n are very small. We shall see in Chapter 15 how to use a general technique known as dynamic programming to solve this problem much more efficiently.
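As a preview of the dynamic-programming technique that Chapter 15 develops, here is a minimal Python sketch (our own illustration, not the book's pseudocode) of the standard LCS-length recurrence: it fills an (m+1) x (n+1) table in O(mn) time instead of examining the 2^m and 2^n possible subsequences:

```python
def lcs_length(X, Y):
    """Length of a longest common subsequence of X and Y, computed
    bottom-up: c[i][j] holds the LCS length of X[:i] and Y[:j]."""
    m, n = len(X), len(Y)
    c = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            if X[i - 1] == Y[j - 1]:
                c[i][j] = c[i - 1][j - 1] + 1            # symbols match
            else:
                c[i][j] = max(c[i - 1][j], c[i][j - 1])  # drop one symbol
    return c[m][n]

print(lcs_length("ABCDEFG", "BCEG"))  # 4: BCEG is itself a subsequence
```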
We are given a mechanical design in terms of a library of parts, where each part may include instances of other parts, and we need to list the parts in order so that each part appears before any part that uses it. If the design comprises n parts, then there are n! possible orders, where n! denotes the factorial function. Because the factorial function grows faster than even an exponential function, we cannot feasibly generate each possible order and then verify that, within that order, each part appears before the parts using it (unless we have only a few parts). This problem is an instance of topological sorting, and we shall see in Chapter 22 how to solve this problem efficiently.

We are given n points in the plane, and we wish to find the convex hull of these points. The convex hull is the smallest convex polygon containing the points. Intuitively, we can think of each point as being represented by a nail sticking out from a board. The convex hull would be represented by a tight rubber band that surrounds all the nails. Each nail around which the rubber band makes a turn is a vertex of the convex hull. (See Figure 33.6 for an example.) Any of the 2^n subsets of the points might be the vertices of the convex hull. Knowing which points are vertices of the convex hull is not quite enough, either, since we also need to know the order in which they appear. There are many choices, therefore, for the vertices of the convex hull. Chapter 33 gives two good methods for finding the convex hull.

These lists are far from exhaustive (as you again have probably surmised from this book's heft), but exhibit two characteristics that are common to many interesting algorithmic problems:

1. They have many candidate solutions, the overwhelming majority of which do not solve the problem at hand. Finding one that does, or one that is "best," can present quite a challenge.
2. They have practical applications. Of the problems in the above list, finding the shortest path provides the easiest examples. A transportation firm, such as a trucking or railroad company, has a financial interest in finding shortest paths through a road or rail network because taking shorter paths results in lower labor and fuel costs. Or a routing node on the Internet may need to find the shortest path through the network in order to route a message quickly. Or a person wishing to drive from New York to Boston may want to find driving directions from an appropriate Web site, or she may use her GPS while driving.

Not every problem solved by algorithms has an easily identified set of candidate solutions. For example, suppose we are given a set of numerical values representing samples of a signal, and we want to compute the discrete Fourier transform of these samples. The discrete Fourier transform converts the time domain to the frequency domain, producing a set of numerical coefficients, so that we can determine the strength of various frequencies in the sampled signal. In addition to lying at the heart of signal processing, discrete Fourier transforms have applications in data compression and multiplying large polynomials and integers. Chapter 30 gives an efficient algorithm, the fast Fourier transform (commonly called the FFT), for this problem, and the chapter also sketches out the design of a hardware circuit to compute the FFT.
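As a taste of what Chapter 30 covers, here is a minimal recursive radix-2 FFT sketch in Python (our own illustration, not the book's presentation; it assumes the number of samples is a power of 2 and uses the e^(-2*pi*i*k/n) convention for the forward transform):

```python
import cmath

def fft(a):
    """Discrete Fourier transform of a, computed by the recursive
    Cooley-Tukey FFT. Requires len(a) to be a power of 2."""
    n = len(a)
    if n == 1:
        return list(a)
    even = fft(a[0::2])   # transform of even-indexed samples
    odd = fft(a[1::2])    # transform of odd-indexed samples
    out = [0] * n
    for k in range(n // 2):
        w = cmath.exp(-2j * cmath.pi * k / n)  # twiddle factor
        out[k] = even[k] + w * odd[k]
        out[k + n // 2] = even[k] - w * odd[k]
    return out

print(fft([1, 1, 1, 1]))  # [ (4+0j), 0j, 0j, 0j ]: a constant signal
```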
Data structures

This book also contains several data structures. A data structure is a way to store and organize data in order to facilitate access and modifications. No single data structure works well for all purposes, and so it is important to know the strengths and limitations of several of them.

Technique

Although you can use this book as a "cookbook" for algorithms, you may someday encounter a problem for which you cannot readily find a published algorithm (many of the exercises and problems in this book, for example). This book will teach you techniques of algorithm design and analysis so that you can develop algorithms on your own, show that they give the correct answer, and understand their efficiency. Different chapters address different aspects of algorithmic problem solving. Some chapters address specific problems, such as finding medians and order statistics in Chapter 9, computing minimum spanning trees in Chapter 23, and determining a maximum flow in a network in Chapter 26. Other chapters address techniques, such as divide-and-conquer in Chapter 4, dynamic programming in Chapter 15, and amortized analysis in Chapter 17.

Hard problems

Most of this book is about efficient algorithms. Our usual measure of efficiency is speed, i.e., how long an algorithm takes to produce its result. There are some problems, however, for which no efficient solution is known. Chapter 34 studies an interesting subset of these problems, which are known as NP-complete.

Why are NP-complete problems interesting? First, although no efficient algorithm for an NP-complete problem has ever been found, nobody has ever proven that an efficient algorithm for one cannot exist. In other words, no one knows whether or not efficient algorithms exist for NP-complete problems. Second, the set of NP-complete problems has the remarkable property that if an efficient algorithm exists for any one of them, then efficient algorithms exist for all of them. This relationship among the NP-complete problems makes the lack of efficient solutions all the more tantalizing. Third, several NP-complete problems are similar, but not identical, to problems for which we do know of efficient algorithms. Computer scientists are intrigued by how a small change to the problem statement can cause a big change to the efficiency of the best known algorithm.

You should know about NP-complete problems because some of them arise surprisingly often in real applications. If you are called upon to produce an efficient algorithm for an NP-complete problem, you are likely to spend a lot of time in a fruitless search. If you can show that the problem is NP-complete, you can instead spend your time developing an efficient algorithm that gives a good, but not the best possible, solution.

As a concrete example, consider a delivery company with a central depot. Each day, it loads up each delivery truck at the depot and sends it around to deliver goods to several addresses. At the end of the day, each truck must end up back at the depot so that it is ready to be loaded for the next day. To reduce costs, the company wants to select an order of delivery stops that yields the lowest overall distance traveled by each truck. This problem is the well-known "traveling-salesman problem," and it is NP-complete. It has no known efficient algorithm. Under certain assumptions, however, we know of efficient algorithms that give an overall distance which is not too far above the smallest possible. Chapter 35 discusses such "approximation algorithms."
Parallelism

For many years, we could count on processor clock speeds increasing at a steady rate. Physical limitations present a fundamental roadblock to ever-increasing clock speeds, however: because power density increases superlinearly with clock speed, chips run the risk of melting once their clock speeds become high enough. In order to perform more computations per second, therefore, chips are being designed to contain not just one but several processing "cores." We can liken these multicore computers to several sequential computers on a single chip; in other words, they are a type of "parallel computer." In order to elicit the best performance from multicore computers, we need to design algorithms with parallelism in mind. Chapter 27 presents a model for "multithreaded" algorithms, which take advantage of multiple cores. This model has advantages from a theoretical standpoint, and it forms the basis of several successful computer programs, including a championship chess program.

Exercises

1.1-1 Give a real-world example that requires sorting or a real-world example that requires computing a convex hull.
1.1-2 Other than speed, what other measures of efficiency might one use in a real-world setting?
1.1-3 Select a data structure that you have seen previously, and discuss its strengths and limitations.
1.1-4 How are the shortest-path and traveling-salesman problems given above similar? How are they different?
1.1-5 Come up with a real-world problem in which only the best solution will do. Then come up with one in which a solution that is "approximately" the best is good enough.

1.2 Algorithms as a technology

Suppose computers were infinitely fast and computer memory was free. Would you have any reason to study algorithms? The answer is yes, if for no other reason than that you would still like to demonstrate that your solution method terminates and does so with the correct answer.

If computers were infinitely fast, any correct method for solving a problem would do. You would probably want your implementation to be within the bounds of good software engineering practice (for example, your implementation should be well designed and documented), but you would most often use whichever method was the easiest to implement.

Of course, computers may be fast, but they are not infinitely fast. And memory may be inexpensive, but it is not free. Computing time is therefore a bounded resource, and so is space in memory. You should use these resources wisely, and algorithms that are efficient in terms of time or space will help you do so.

Efficiency

Different algorithms devised to solve the same problem often differ dramatically in their efficiency. These differences can be much more significant than differences due to hardware and software. As an example, in Chapter 2, we will see two algorithms for sorting. The first, known as insertion sort, takes time roughly equal to c1 n^2 to sort n items, where c1 is a constant that does not depend on n. That is, it takes time roughly proportional to n^2. The second, merge sort, takes time roughly equal to c2 n lg n, where lg n stands for log2 n and c2 is another constant that also does not depend on n. Insertion sort typically has a smaller constant factor than merge sort, so that c1 < c2. We shall see that the constant factors can have far less of an impact on the running time than the dependence on the input size n.
Let's write insertion sort's running time as c1 n · n and merge sort's running time as c2 n lg n. Then we see that where insertion sort has a factor of n in its running time, merge sort has a factor of lg n, which is much smaller. (For example, when n = 1000, lg n is approximately 10, and when n equals one million, lg n is approximately only 20.) Although insertion sort usually runs faster than merge sort for small input sizes, once the input size n becomes large enough, merge sort's advantage of lg n vs. n will more than compensate for the difference in constant factors. No matter how much smaller c1 is than c2, there will always be a crossover point beyond which merge sort is faster.

For a concrete example, let us pit a faster computer (computer A) running insertion sort against a slower computer (computer B) running merge sort. They each must sort an array of 10 million numbers. (Although 10 million numbers might seem like a lot, if the numbers are eight-byte integers, then the input occupies about 80 megabytes, which fits in the memory of even an inexpensive laptop computer many times over.) Suppose that computer A executes 10 billion instructions per second (faster than any single sequential computer at the time of this writing) and computer B executes only 10 million instructions per second, so that computer A is 1000 times faster than computer B in raw computing power. To make the difference even more dramatic, suppose that the world's craftiest programmer codes insertion sort in machine language for computer A, and the resulting code requires 2n^2 instructions to sort n numbers. Suppose further that just an average programmer implements merge sort, using a high-level language with an inefficient compiler, with the resulting code taking 50 n lg n instructions. To sort 10 million numbers, computer A takes

    (2 · (10^7)^2 instructions) / (10^10 instructions/second) = 20,000 seconds (more than 5.5 hours),

while computer B takes

    (50 · 10^7 · lg 10^7 instructions) / (10^7 instructions/second) ≈ 1163 seconds (less than 20 minutes).

By using an algorithm whose running time grows more slowly, even with a poor compiler, computer B runs more than 17 times faster than computer A! The advantage of merge sort is even more pronounced when we sort 100 million numbers: where insertion sort takes more than 23 days, merge sort takes under four hours. In general, as the problem size increases, so does the relative advantage of merge sort.
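The arithmetic above is easy to verify mechanically. Here is a small Python check (our own sketch, not part of the book) that reproduces the two figures and searches for the crossover input size at which merge sort on slow computer B begins to beat insertion sort on fast computer A:

```python
from math import log2

n = 10**7
time_A = 2 * n**2 / 10**10          # insertion sort on fast computer A
time_B = 50 * n * log2(n) / 10**7   # merge sort on slow computer B
print(time_A)  # 20000.0 seconds (more than 5.5 hours)
print(time_B)  # ~1162.7 seconds (less than 20 minutes)

# Crossover: smallest n at which computer B finishes first.
n = 2
while 2 * n**2 / 10**10 <= 50 * n * log2(n) / 10**7:
    n += 1
print(n)  # roughly 471,000 under these assumptions
```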
Algorithms and other technologies

The example above shows that we should consider algorithms, like computer hardware, as a technology. Total system performance depends on choosing efficient algorithms as much as on choosing fast hardware. Just as rapid advances are being made in other computer technologies, they are being made in algorithms as well.

You might wonder whether algorithms are truly that important on contemporary computers in light of other advanced technologies, such as advanced computer architectures and fabrication technologies; easy-to-use, intuitive, graphical user interfaces (GUIs); object-oriented systems; integrated Web technologies; and fast networking, both wired and wireless. The answer is yes. Although some applications do not explicitly require algorithmic content at the application level (such as some simple, Web-based applications), many do. For example, consider a Web-based service that determines how to travel from one location to another. Its implementation would rely on fast hardware, a graphical user interface, wide-area networking, and also possibly on object orientation. However, it would also require algorithms for certain operations, such as finding routes (probably using a shortest-path algorithm), rendering maps, and interpolating addresses.

Moreover, even an application that does not require algorithmic content at the application level relies heavily upon algorithms. Does the application rely on fast hardware? The hardware design used algorithms. Does the application rely on graphical user interfaces? The design of any GUI relies on algorithms. Does the application rely on networking? Routing in networks relies heavily on algorithms. Was the application written in a language other than machine code? Then it was processed by a compiler, interpreter, or assembler, all of which make extensive use of algorithms. Algorithms are at the core of most technologies used in contemporary computers.

Furthermore, with the ever-increasing capacities of computers, we use them to solve larger problems than ever before. As we saw in the above comparison between insertion sort and merge sort, it is at larger problem sizes that the differences in efficiency between algorithms become particularly prominent.

Having a solid base of algorithmic knowledge and technique is one characteristic that separates the truly skilled programmers from the novices. With modern computing technology, you can accomplish some tasks without knowing much about algorithms, but with a good background in algorithms, you can do much, much more.

Exercises

1.2-1  Give an example of an application that requires algorithmic content at the application level, and discuss the function of the algorithms involved.

1.2-2  Suppose we are comparing implementations of insertion sort and merge sort on the same machine. For inputs of size n, insertion sort runs in 8n² steps, while merge sort runs in 64·n·lg n steps. For which values of n does insertion sort beat merge sort?

1.2-3  What is the smallest value of n such that an algorithm whose running time is 100n² runs faster than an algorithm whose running time is 2ⁿ on the same machine?

Problems

1-1  Comparison of running times
For each function f(n) and time t in the following table, determine the largest size n of a problem that can be solved in time t, assuming that the algorithm to solve the problem takes f(n) microseconds. (One way to attack this numerically is sketched after the chapter notes below.)

              1 second   1 minute   1 hour   1 day   1 month   1 year   1 century
    lg n
    √n
    n
    n lg n
    n²
    n³
    2ⁿ
    n!

Chapter notes

There are many excellent texts on the general topic of algorithms, including those by Aho, Hopcroft, and Ullman [5, 6]; Baase and Van Gelder [28]; Brassard and Bratley [54]; Dasgupta, Papadimitriou, and Vazirani [82]; Goodrich and Tamassia [148]; Hofri [175]; Horowitz, Sahni, and Rajasekaran [181]; Johnsonbaugh and Schaefer [193]; Kingston [205]; Kleinberg and Tardos [208]; Knuth [209, 210, 211]; Kozen [220]; Levitin [235]; Manber [242]; Mehlhorn [249, 250, 251]; Purdom and Brown [287]; Reingold, Nievergelt, and Deo [293]; Sedgewick [306]; Sedgewick and Flajolet [307]; Skiena [318]; and Wilf [356]. Some of the more practical aspects of algorithm design are discussed by Bentley [42, 43] and Gonnet [145]. Surveys of the field of algorithms can also be found in the Handbook of Theoretical Computer Science, Volume A [342] and the CRC Algorithms and Theory of Computation Handbook [25]. Overviews of the algorithms used in computational biology can be found in textbooks by Gusfield [156], Pevzner [275], Setubal and Meidanis [310], and Waterman [350].
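Returning to Problem 1-1: one hedged way to fill in its table numerically is to search for the largest n whose running time fits the budget. The sketch below is my own (the function name largest_input and the doubling-then-binary-search strategy are choices of this illustration, not the book's), and it assumes f is nondecreasing:

```python
import math

def largest_input(f, budget_us):
    """Largest n >= 1 with f(n) <= budget_us, assuming f is nondecreasing."""
    if f(1) > budget_us:
        return 0
    hi = 1
    while f(2 * hi) <= budget_us:    # exponential search for an upper bound
        hi *= 2
    lo, hi = hi, 2 * hi              # now f(lo) fits and f(hi) does not
    while hi - lo > 1:               # binary search between lo and hi
        mid = (lo + hi) // 2
        if f(mid) <= budget_us:
            lo = mid
        else:
            hi = mid
    return lo

second = 10**6                       # one second, in microseconds
print(largest_input(lambda n: n * math.log2(n), second))   # f(n) = n lg n
print(largest_input(lambda n: n**2, second))               # f(n) = n^2 -> 1000
```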
2 Getting Started

This chapter will familiarize you with the framework we shall use throughout the book to think about the design and analysis of algorithms. It is self-contained, but it does include several references to material that we introduce in Chapters 3 and 4. (It also contains several summations, which Appendix A shows how to solve.)

We begin by examining the insertion sort algorithm to solve the sorting problem introduced in Chapter 1. We define a "pseudocode" that should be familiar to you if you have done computer programming, and we use it to show how we shall specify our algorithms. Having specified the insertion sort algorithm, we then argue that it correctly sorts, and we analyze its running time. The analysis introduces a notation that focuses on how that time increases with the number of items to be sorted. Following our discussion of insertion sort, we introduce the divide-and-conquer approach to the design of algorithms and use it to develop an algorithm called merge sort. We end with an analysis of merge sort's running time.

2.1 Insertion sort

Our first algorithm, insertion sort, solves the sorting problem introduced in Chapter 1:

Input: A sequence of n numbers ⟨a1, a2, ..., an⟩.

Output: A permutation (reordering) ⟨a′1, a′2, ..., a′n⟩ of the input sequence such that a′1 ≤ a′2 ≤ ··· ≤ a′n.

The numbers that we wish to sort are also known as the keys. Although conceptually we are sorting a sequence, the input comes to us in the form of an array with n elements.

In this book, we shall typically describe algorithms as programs written in a pseudocode that is similar in many respects to C, C++, Java, Python, or Pascal. If you have been introduced to any of these languages, you should have little trouble reading our algorithms. What separates pseudocode from "real" code is that in pseudocode, we employ whatever expressive method is most clear and concise to specify a given algorithm. Sometimes, the clearest method is English, so do not be surprised if you come across an English phrase or sentence embedded within a section of "real" code. Another difference between pseudocode and real code is that pseudocode is not typically concerned with issues of software engineering. Issues of data abstraction, modularity, and error handling are often ignored in order to convey the essence of the algorithm more concisely.

We start with insertion sort, which is an efficient algorithm for sorting a small number of elements. Insertion sort works the way many people sort a hand of playing cards. We start with an empty left hand and the cards face down on the table. We then remove one card at a time from the table and insert it into the correct position in the left hand. To find the correct position for a card, we compare it with each of the cards already in the hand, from right to left, as illustrated in Figure 2.1. At all times, the cards held in the left hand are sorted, and these cards were originally the top cards of the pile on the table.

[Figure 2.1 Sorting a hand of cards using insertion sort.]

We present our pseudocode for insertion sort as a procedure called INSERTION-SORT, which takes as a parameter an array A[1..n] containing a sequence of length n that is to be sorted. (In the code, the number n of elements in A is denoted by A.length.)
The algorithm sorts the input numbers in place: it rearranges the numbers within the array A, with at most a constant number of them stored outside the array at any time. The input array A contains the sorted output sequence when the INSERTION-SORT procedure is finished.

[Figure 2.2 The operation of INSERTION-SORT on the array A = ⟨5, 2, 4, 6, 1, 3⟩. Array indices appear above the rectangles, and values stored in the array positions appear within the rectangles. (a)–(e) The iterations of the for loop of lines 1–8. In each iteration, the black rectangle holds the key taken from A[j], which is compared with the values in shaded rectangles to its left in the test of line 5. Shaded arrows show array values moved one position to the right in line 6, and black arrows indicate where the key moves to in line 8. (f) The final sorted array.]

INSERTION-SORT(A)
1  for j = 2 to A.length
2      key = A[j]
3      // Insert A[j] into the sorted sequence A[1..j-1].
4      i = j - 1
5      while i > 0 and A[i] > key
6          A[i+1] = A[i]
7          i = i - 1
8      A[i+1] = key

Loop invariants and the correctness of insertion sort

Figure 2.2 shows how this algorithm works for A = ⟨5, 2, 4, 6, 1, 3⟩. The index j indicates the "current card" being inserted into the hand. At the beginning of each iteration of the for loop, which is indexed by j, the subarray consisting of elements A[1..j-1] constitutes the currently sorted hand, and the remaining subarray A[j+1..n] corresponds to the pile of cards still on the table. In fact, elements A[1..j-1] are the elements originally in positions 1 through j-1, but now in sorted order. We state these properties of A[1..j-1] formally as a loop invariant:

    At the start of each iteration of the for loop of lines 1–8, the subarray A[1..j-1] consists of the elements originally in A[1..j-1], but in sorted order.

We use loop invariants to help us understand why an algorithm is correct. We must show three things about a loop invariant:

Initialization: It is true prior to the first iteration of the loop.

Maintenance: If it is true before an iteration of the loop, it remains true before the next iteration.

Termination: When the loop terminates, the invariant gives us a useful property that helps show that the algorithm is correct.

When the first two properties hold, the loop invariant is true prior to every iteration of the loop. (Of course, we are free to use established facts other than the loop invariant itself to prove that the loop invariant remains true before each iteration.) Note the similarity to mathematical induction, where to prove that a property holds, you prove a base case and an inductive step. Here, showing that the invariant holds before the first iteration corresponds to the base case, and showing that the invariant holds from iteration to iteration corresponds to the inductive step.

The third property is perhaps the most important one, since we are using the loop invariant to show correctness. Typically, we use the loop invariant along with the condition that caused the loop to terminate. The termination property differs from how we usually use mathematical induction, in which we apply the inductive step infinitely; here, we stop the "induction" when the loop terminates. Let us see how these properties hold for insertion sort.
Initialization: We start by showing that the loop invariant holds before the first loop iteration, when j = 2.¹ The subarray A[1..j-1], therefore, consists of just the single element A[1], which is in fact the original element in A[1]. Moreover, this subarray is sorted (trivially, of course), which shows that the loop invariant holds prior to the first iteration of the loop.

¹ When the loop is a for loop, the moment at which we check the loop invariant just prior to the first iteration is immediately after the initial assignment to the loop-counter variable and just before the first test in the loop header. In the case of INSERTION-SORT, this time is after assigning 2 to the variable j but before the first test of whether j ≤ A.length.

Maintenance: Next, we tackle the second property: showing that each iteration maintains the loop invariant. Informally, the body of the for loop works by moving A[j-1], A[j-2], A[j-3], and so on by one position to the right until it finds the proper position for A[j] (lines 4–7), at which point it inserts the value of A[j] (line 8). The subarray A[1..j] then consists of the elements originally in A[1..j], but in sorted order. Incrementing j for the next iteration of the for loop then preserves the loop invariant.

A more formal treatment of the second property would require us to state and show a loop invariant for the while loop of lines 5–7. At this point, however, we prefer not to get bogged down in such formalism, and so we rely on our informal analysis to show that the second property holds for the outer loop.

Termination: Finally, we examine what happens when the loop terminates. The condition causing the for loop to terminate is that j > A.length = n. Because each loop iteration increases j by 1, we must have j = n + 1 at that time. Substituting n + 1 for j in the wording of the loop invariant, we have that the subarray A[1..n] consists of the elements originally in A[1..n], but in sorted order. Observing that the subarray A[1..n] is the entire array, we conclude that the entire array is sorted. Hence, the algorithm is correct.

We shall use this method of loop invariants to show correctness later in this chapter and in other chapters as well.
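As a sketch of how the pseudocode translates into a real language (my rendering, not the book's; it uses Python's 0-based indexing, so the pseudocode's j = 2 to n becomes j = 1 to n-1), here is insertion sort with an assert that checks the sorted-order half of the loop invariant on every iteration:

```python
def insertion_sort(a):
    """Sort list a in place, mirroring INSERTION-SORT with 0-based indexing."""
    for j in range(1, len(a)):           # pseudocode's "for j = 2 to A.length"
        # Loop invariant (sortedness half): the prefix a[0..j-1] is sorted.
        assert a[:j] == sorted(a[:j])
        key = a[j]
        i = j - 1
        while i >= 0 and a[i] > key:     # shift larger elements one slot right
            a[i + 1] = a[i]
            i -= 1
        a[i + 1] = key                   # insert the key into its proper place

data = [5, 2, 4, 6, 1, 3]
insertion_sort(data)
print(data)                              # [1, 2, 3, 4, 5, 6]
```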
Pseudocode conventions

We use the following conventions in our pseudocode.

Indentation indicates block structure. For example, the body of the for loop that begins on line 1 consists of lines 2–8, and the body of the while loop that begins on line 5 contains lines 6–7 but not line 8. Our indentation style applies to if-else statements² as well. Using indentation instead of conventional indicators of block structure, such as begin and end statements, greatly reduces clutter while preserving, or even enhancing, clarity.³

² In an if-else statement, we indent else at the same level as its matching if. Although we omit the keyword then, we occasionally refer to the portion executed when the test following if is true as a then clause. For multiway tests, we use elseif for tests after the first one.

³ Each pseudocode procedure in this book appears on one page so that you will not have to discern levels of indentation in code that is split across pages.

The looping constructs while, for, and repeat-until and the if-else conditional construct have interpretations similar to those in C, C++, Java, Python, and Pascal.⁴ In this book, the loop counter retains its value after exiting the loop, unlike some situations that arise in C++, Java, and Pascal. Thus, immediately after a for loop, the loop counter's value is the value that first exceeded the for loop bound. We used this property in our correctness argument for insertion sort. The for loop header in line 1 is for j = 2 to A.length, and so when this loop terminates, j = A.length + 1 (or, equivalently, j = n + 1, since n = A.length). We use the keyword to when a for loop increments its loop counter in each iteration, and we use the keyword downto when a for loop decrements its loop counter. When the loop counter changes by an amount greater than 1, the amount of change follows the optional keyword by.

⁴ Most block-structured languages have equivalent constructs, though the exact syntax may differ. Python lacks repeat-until loops, and its for loops operate a little differently from the for loops in this book.

The symbol "//" indicates that the remainder of the line is a comment.

A multiple assignment of the form i = j = e assigns to both variables i and j the value of expression e; it should be treated as equivalent to the assignment j = e followed by the assignment i = j.

Variables (such as i, j, and key) are local to the given procedure. We shall not use global variables without explicit indication.

We access array elements by specifying the array name followed by the index in square brackets. For example, A[i] indicates the ith element of the array A. The notation ".." is used to indicate a range of values within an array. Thus, A[1..j] indicates the subarray of A consisting of the j elements A[1], A[2], ..., A[j].

We typically organize compound data into objects, which are composed of attributes. We access a particular attribute using the syntax found in many object-oriented programming languages: the object name, followed by a dot, followed by the attribute name. For example, we treat an array as an object with the attribute length indicating how many elements it contains. To specify the number of elements in an array A, we write A.length.

We treat a variable representing an array or object as a pointer to the data representing the array or object. For all attributes f of an object x, setting y = x causes y.f to equal x.f. Moreover, if we now set x.f = 3, then afterward not only does x.f equal 3, but y.f equals 3 as well. In other words, x and y point to the same object after the assignment y = x. Our attribute notation can "cascade." For example, suppose that the attribute f is itself a pointer to some type of object that has an attribute g. Then the notation x.f.g is implicitly parenthesized as (x.f).g. In other words, if we had assigned y = x.f, then x.f.g is the same as y.g. Sometimes, a pointer will refer to no object at all. In this case, we give it the special value NIL.

We pass parameters to a procedure by value: the called procedure receives its own copy of the parameters, and if it assigns a value to a parameter, the change is not seen by the calling procedure. When objects are passed, the pointer to the data representing the object is copied, but the object's attributes are not. For example, if x is a parameter of a called procedure, the assignment x = y within the called procedure is not visible to the calling procedure. The assignment x.f = 3, however, is visible. Similarly, arrays are passed by pointer, so that a pointer to the array is passed, rather than the entire array, and changes to individual array elements are visible to the calling procedure.
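These pointer and parameter-passing rules behave much like object references in Python. A small sketch (my illustration, with a throwaway class Obj standing in for the text's object x) shows both the aliasing rule and the parameter-passing rule:

```python
class Obj:
    """A bare object with a single attribute f, standing in for the text's x."""
    def __init__(self, f):
        self.f = f

x = Obj(1)
y = x             # y and x now point to the same object
x.f = 3
print(y.f)        # 3: the change made through x is visible through y

def called(x, y):
    x = y         # rebinding the parameter: NOT visible to the caller
    y.f = 7       # mutating the shared object: visible to the caller

a, b = Obj(1), Obj(2)
called(a, b)
print(a.f, b.f)   # 1 7: a was not rebound, but b.f was changed
```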
A return statement immediately transfers control back to the point of call in the calling procedure. Most return statements also take a value to pass back to the caller. Our pseudocode differs from many programming languages in that we allow multiple values to be returned in a single return statement.

The boolean operators "and" and "or" are short circuiting. That is, when we evaluate the expression "x and y" we first evaluate x. If x evaluates to FALSE, then the entire expression cannot evaluate to TRUE, and so we do not evaluate y. If, on the other hand, x evaluates to TRUE, we must evaluate y to determine the value of the entire expression. Similarly, in the expression "x or y" we evaluate the expression y only if x evaluates to FALSE. Short-circuiting operators allow us to write boolean expressions such as "x ≠ NIL and x.f = y" without worrying about what happens when we try to evaluate x.f when x is NIL.

The keyword error indicates that an error occurred because conditions were wrong for the procedure to have been called. The calling procedure is responsible for handling the error, and so we do not specify what action to take.

Exercises

2.1-1  Using Figure 2.2 as a model, illustrate the operation of INSERTION-SORT on the array A = ⟨31, 41, 59, 26, 41, 58⟩.

2.1-2  Rewrite the INSERTION-SORT procedure to sort into nonincreasing instead of nondecreasing order.

2.1-3  Consider the searching problem:

    Input: A sequence of n numbers A = ⟨a1, a2, ..., an⟩ and a value v.
    Output: An index i such that v = A[i] or the special value NIL if v does not appear in A.

Write pseudocode for linear search, which scans through the sequence, looking for v. Using a loop invariant, prove that your algorithm is correct. Make sure that your loop invariant fulfills the three necessary properties.

2.1-4  Consider the problem of adding two n-bit binary integers, stored in two n-element arrays A and B. The sum of the two integers should be stored in binary form in an (n+1)-element array C. State the problem formally and write pseudocode for adding the two integers.

2.2 Analyzing algorithms

Analyzing an algorithm has come to mean predicting the resources that the algorithm requires. Occasionally, resources such as memory, communication bandwidth, or computer hardware are of primary concern, but most often it is computational time that we want to measure. Generally, by analyzing several candidate algorithms for a problem, we can identify a most efficient one. Such analysis may indicate more than one viable candidate, but we can often discard several inferior algorithms in the process.

Before we can analyze an algorithm, we must have a model of the implementation technology that we will use, including a model for the resources of that technology and their costs. For most of this book, we shall assume a generic one-processor, random-access machine (RAM) model of computation as our implementation technology and understand that our algorithms will be implemented as computer programs. In the RAM model, instructions are executed one after another, with no concurrent operations.

Strictly speaking, we should precisely define the instructions of the RAM model and their costs. To do so, however, would be tedious and would yield little insight into algorithm design and analysis. Yet we must be careful not to abuse the RAM model. For example, what if a RAM had an instruction that sorts? Then we could sort in just one instruction. Such a RAM would be unrealistic, since real computers do not have such instructions. Our guide, therefore, is how real computers are designed.
The RAM model contains instructions commonly found in real computers: arithmetic (such as add, subtract, multiply, divide, remainder, floor, ceiling), data movement (load, store, copy), and control (conditional and unconditional branch, subroutine call and return). Each such instruction takes a constant amount of time.

The data types in the RAM model are integer and floating point (for storing real numbers). Although we typically do not concern ourselves with precision in this book, in some applications precision is crucial. We also assume a limit on the size of each word of data. For example, when working with inputs of size n, we typically assume that integers are represented by c·lg n bits for some constant c ≥ 1. We require c ≥ 1 so that each word can hold the value of n, enabling us to index the individual input elements, and we restrict c to be a constant so that the word size does not grow arbitrarily. (If the word size could grow arbitrarily, we could store huge amounts of data in one word and operate on it all in constant time, clearly an unrealistic scenario.)

Real computers contain instructions not listed above, and such instructions represent a gray area in the RAM model. For example, is exponentiation a constant-time instruction? In the general case, no; it takes several instructions to compute xʸ when x and y are real numbers. In restricted situations, however, exponentiation is a constant-time operation. Many computers have a "shift left" instruction, which in constant time shifts the bits of an integer by k positions to the left. In most computers, shifting the bits of an integer by one position to the left is equivalent to multiplication by 2, so that shifting the bits by k positions to the left is equivalent to multiplication by 2ᵏ. Therefore, such computers can compute 2ᵏ in one constant-time instruction by shifting the integer 1 by k positions to the left, as long as k is no more than the number of bits in a computer word. We will endeavor to avoid such gray areas in the RAM model, but we will treat computation of 2ᵏ as a constant-time operation when k is a small enough positive integer.
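A two-line Python check of this identity (my illustration; note that Python integers are arbitrary precision, so this only models what hardware does within a single machine word):

```python
# Computing 2**k with a single left shift, mirroring the "shift left"
# instruction discussed above.
for k in [0, 1, 5, 10]:
    print(k, 1 << k, 2**k)   # 1 << k shifts the integer 1 left by k bits
```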
In the RAM model, we do not attempt to model the memory hierarchy that is common in contemporary computers. That is, we do not model caches or virtual memory. Several computational models attempt to account for memory-hierarchy effects, which are sometimes significant in real programs on real machines. A handful of problems in this book examine memory-hierarchy effects, but for the most part, the analyses in this book will not consider them. Models that include the memory hierarchy are quite a bit more complex than the RAM model, and so they can be difficult to work with. Moreover, RAM-model analyses are usually excellent predictors of performance on actual machines.

Analyzing even a simple algorithm in the RAM model can be a challenge. The mathematical tools required may include combinatorics, probability theory, algebraic dexterity, and the ability to identify the most significant terms in a formula. Because the behavior of an algorithm may be different for each possible input, we need a means for summarizing that behavior in simple, easily understood formulas.

Even though we typically select only one machine model to analyze a given algorithm, we still face many choices in deciding how to express our analysis. We would like a way that is simple to write and manipulate, shows the important characteristics of an algorithm's resource requirements, and suppresses tedious details.

Analysis of insertion sort

The time taken by the INSERTION-SORT procedure depends on the input: sorting a thousand numbers takes longer than sorting three numbers. Moreover, INSERTION-SORT can take different amounts of time to sort two input sequences of the same size depending on how nearly sorted they already are. In general, the time taken by an algorithm grows with the size of the input, so it is traditional to describe the running time of a program as a function of the size of its input. To do so, we need to define the terms "running time" and "size of input" more carefully.

The best notion for input size depends on the problem being studied. For many problems, such as sorting or computing discrete Fourier transforms, the most natural measure is the number of items in the input, for example, the array size n for sorting. For many other problems, such as multiplying two integers, the best measure of input size is the total number of bits needed to represent the input in ordinary binary notation. Sometimes, it is more appropriate to describe the size of the input with two numbers rather than one. For instance, if the input to an algorithm is a graph, the input size can be described by the numbers of vertices and edges in the graph. We shall indicate which input size measure is being used with each problem we study.

The running time of an algorithm on a particular input is the number of primitive operations or "steps" executed. It is convenient to define the notion of step so that it is as machine-independent as possible. For the moment, let us adopt the following view. A constant amount of time is required to execute each line of our pseudocode. One line may take a different amount of time than another line, but we shall assume that each execution of the ith line takes time c_i, where c_i is a constant. This viewpoint is in keeping with the RAM model, and it also reflects how the pseudocode would be implemented on most actual computers.⁵

⁵ There are some subtleties here. Computational steps that we specify in English are often variants of a procedure that requires more than just a constant amount of time. For example, later in this book we might say "sort the points by x-coordinate," which, as we shall see, takes more than a constant amount of time. Also, note that a statement that calls a subroutine takes constant time, though the subroutine, once invoked, may take more. That is, we separate the process of calling the subroutine (passing parameters to it, etc.) from the process of executing the subroutine.

In the following discussion, our expression for the running time of INSERTION-SORT will evolve from a messy formula that uses all the statement costs c_i to a much simpler notation that is more concise and more easily manipulated. This simpler notation will also make it easy to determine whether one algorithm is more efficient than another.

We start by presenting the INSERTION-SORT procedure with the time "cost" of each statement and the number of times each statement is executed. For each j = 2, 3, ..., n, where n = A.length, we let t_j denote the number of times the while loop test in line 5 is executed for that value of j. When a for or while loop exits in the usual way (i.e., due to the test in the loop header), the test is executed one time more than the loop body. We assume that comments are not executable statements, and so they take no time.
INSERTION-SORT(A)                                          cost   times
1  for j = 2 to A.length                                   c1     n
2      key = A[j]                                          c2     n - 1
3      // Insert A[j] into the sorted sequence A[1..j-1].  0      n - 1
4      i = j - 1                                           c4     n - 1
5      while i > 0 and A[i] > key                          c5     Σ_{j=2}^{n} t_j
6          A[i+1] = A[i]                                   c6     Σ_{j=2}^{n} (t_j - 1)
7          i = i - 1                                       c7     Σ_{j=2}^{n} (t_j - 1)
8      A[i+1] = key                                        c8     n - 1

The running time of the algorithm is the sum of running times for each statement executed; a statement that takes c_i steps to execute and executes n times will contribute c_i·n to the total running time.⁶ To compute T(n), the running time of INSERTION-SORT on an input of n values, we sum the products of the cost and times columns, obtaining

    T(n) = c1·n + c2(n-1) + c4(n-1) + c5 Σ_{j=2}^{n} t_j + c6 Σ_{j=2}^{n} (t_j - 1) + c7 Σ_{j=2}^{n} (t_j - 1) + c8(n-1).

⁶ This characteristic does not necessarily hold for a resource such as memory. A statement that references m words of memory and is executed n times does not necessarily reference mn distinct words of memory.

Even for inputs of a given size, an algorithm's running time may depend on which input of that size is given. For example, in INSERTION-SORT, the best case occurs if the array is already sorted. For each j = 2, 3, ..., n, we then find that A[i] ≤ key in line 5 when i has its initial value of j - 1. Thus t_j = 1 for j = 2, 3, ..., n, and the best-case running time is

    T(n) = c1·n + c2(n-1) + c4(n-1) + c5(n-1) + c8(n-1)
         = (c1 + c2 + c4 + c5 + c8)·n - (c2 + c4 + c5 + c8).

We can express this running time as an + b for constants a and b that depend on the statement costs c_i; it is thus a linear function of n.

If the array is in reverse sorted order, that is, in decreasing order, the worst case results. We must compare each element A[j] with each element in the entire sorted subarray A[1..j-1], and so t_j = j for j = 2, 3, ..., n. Noting that

    Σ_{j=2}^{n} j = n(n+1)/2 - 1    and    Σ_{j=2}^{n} (j - 1) = n(n-1)/2

(see Appendix A for a review of how to solve these summations), we find that in the worst case, the running time of INSERTION-SORT is

    T(n) = c1·n + c2(n-1) + c4(n-1) + c5(n(n+1)/2 - 1) + c6(n(n-1)/2) + c7(n(n-1)/2) + c8(n-1)
         = (c5/2 + c6/2 + c7/2)·n² + (c1 + c2 + c4 + c5/2 - c6/2 - c7/2 + c8)·n - (c2 + c4 + c5 + c8).

We can express this worst-case running time as an² + bn + c for constants a, b, and c that again depend on the statement costs c_i; it is thus a quadratic function of n.

Typically, as in insertion sort, the running time of an algorithm is fixed for a given input, although in later chapters we shall see some interesting "randomized" algorithms whose behavior can vary even for a fixed input.
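The analysis predicts a linear count of line-5 tests on sorted input (t_j = 1, so the total is n - 1) and a quadratic count on reverse-sorted input (t_j = j, so the total is n(n+1)/2 - 1). A quick empirical check (my instrumentation, not the book's; the helper insertion_sort_count is introduced only for this illustration) counts exactly those tests:

```python
def insertion_sort_count(a):
    """Sort a in place and return how many while-loop tests (sum of t_j) ran."""
    tests = 0
    for j in range(1, len(a)):
        key = a[j]
        i = j - 1
        while True:
            tests += 1                       # one execution of the line-5 test
            if not (i >= 0 and a[i] > key):
                break
            a[i + 1] = a[i]
            i -= 1
        a[i + 1] = key
    return tests

n = 100
print(insertion_sort_count(list(range(n))))          # sorted: n - 1 = 99
print(insertion_sort_count(list(range(n, 0, -1))))   # reversed: n(n+1)/2 - 1 = 5049
```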
Worst-case and average-case analysis

In our analysis of insertion sort, we looked at both the best case, in which the input array was already sorted, and the worst case, in which the input array was reverse sorted. For the remainder of this book, though, we shall usually concentrate on finding only the worst-case running time, that is, the longest running time for any input of size n. We give three reasons for this orientation.

The worst-case running time of an algorithm gives us an upper bound on the running time for any input. Knowing it provides a guarantee that the algorithm will never take any longer. We need not make some educated guess about the running time and hope that it never gets much worse.

For some algorithms, the worst case occurs fairly often. For example, in searching a database for a particular piece of information, the searching algorithm's worst case will often occur when the information is not present in the database. In some applications, searches for absent information may be frequent.

The "average case" is often roughly as bad as the worst case. Suppose that we randomly choose n numbers and apply insertion sort. How long does it take to determine where in subarray A[1..j-1] to insert element A[j]? On average, half the elements in A[1..j-1] are less than A[j], and half the elements are greater. On average, therefore, we check half of the subarray A[1..j-1], and so t_j is about j/2. The resulting average-case running time turns out to be a quadratic function of the input size, just like the worst-case running time.

In some particular cases, we shall be interested in the average-case running time of an algorithm; we shall see the technique of probabilistic analysis applied to various algorithms throughout this book. The scope of average-case analysis is limited, because it may not be apparent what constitutes an "average" input for a particular problem. Often, we shall assume that all inputs of a given size are equally likely. In practice, this assumption may be violated, but we can sometimes use a randomized algorithm, which makes random choices, to allow a probabilistic analysis and yield an expected running time. We explore randomized algorithms more in Chapter 5 and in several other subsequent chapters.

Order of growth

We used some simplifying abstractions to ease our analysis of the INSERTION-SORT procedure. First, we ignored the actual cost of each statement, using the constants c_i to represent these costs. Then, we observed that even these constants give us more detail than we really need: we expressed the worst-case running time as an² + bn + c for some constants a, b, and c that depend on the statement costs c_i. We thus ignored not only the actual statement costs, but also the abstract costs c_i.

We shall now make one more simplifying abstraction: it is the rate of growth, or order of growth, of the running time that really interests us. We therefore consider only the leading term of a formula (e.g., an²), since the lower-order terms are relatively insignificant for large values of n. We also ignore the leading term's constant coefficient, since constant factors are less significant than the rate of growth in determining computational efficiency for large inputs. For insertion sort, when we ignore the lower-order terms and the leading term's constant coefficient, we are left with the factor of n² from the leading term. We write that insertion sort has a worst-case running time of Θ(n²) (pronounced "theta of n-squared"). We shall use Θ-notation informally in this chapter, and we will define it precisely in Chapter 3.

We usually consider one algorithm to be more efficient than another if its worst-case running time has a lower order of growth. Due to constant factors and lower-order terms, an algorithm whose running time has a higher order of growth might take less time for small inputs than an algorithm whose running time has a lower order of growth. But for large enough inputs, a Θ(n²) algorithm, for example, will run more quickly in the worst case than a Θ(n³) algorithm.
Exercises

2.2-1  Express the function n³/1000 - 100n² - 100n + 3 in terms of Θ-notation.

2.2-2  Consider sorting n numbers stored in array A by first finding the smallest element of A and exchanging it with the element in A[1]. Then find the second smallest element of A, and exchange it with A[2]. Continue in this manner for the first n - 1 elements of A. Write pseudocode for this algorithm, which is known as selection sort. What loop invariant does this algorithm maintain? Why does it need to run for only the first n - 1 elements, rather than for all n elements? Give the best-case and worst-case running times of selection sort in Θ-notation.

2.2-3  Consider linear search again (see Exercise 2.1-3). How many elements of the input sequence need to be checked on the average, assuming that the element being searched for is equally likely to be any element in the array? How about in the worst case? What are the average-case and worst-case running times of linear search in Θ-notation? Justify your answers.

2.2-4  How can we modify almost any algorithm to have a good best-case running time?

2.3 Designing algorithms

We can choose from a wide range of algorithm design techniques. For insertion sort, we used an incremental approach: having sorted the subarray A[1..j-1], we inserted the single element A[j] into its proper place, yielding the sorted subarray A[1..j].

In this section, we examine an alternative design approach, known as "divide-and-conquer," which we shall explore in more detail in Chapter 4. We'll use divide-and-conquer to design a sorting algorithm whose worst-case running time is much less than that of insertion sort. One advantage of divide-and-conquer algorithms is that their running times are often easily determined using techniques that we will see in Chapter 4.

2.3.1 The divide-and-conquer approach

Many useful algorithms are recursive in structure: to solve a given problem, they call themselves recursively one or more times to deal with closely related subproblems. These algorithms typically follow a divide-and-conquer approach: they break the problem into several subproblems that are similar to the original problem but smaller in size, solve the subproblems recursively, and then combine these solutions to create a solution to the original problem.

The divide-and-conquer paradigm involves three steps at each level of the recursion:

Divide the problem into a number of subproblems that are smaller instances of the same problem.

Conquer the subproblems by solving them recursively. If the subproblem sizes are small enough, however, just solve the subproblems in a straightforward manner.

Combine the solutions to the subproblems into the solution for the original problem.

The merge sort algorithm closely follows the divide-and-conquer paradigm. Intuitively, it operates as follows.

Divide: Divide the n-element sequence to be sorted into two subsequences of n/2 elements each.

Conquer: Sort the two subsequences recursively using merge sort.

Combine: Merge the two sorted subsequences to produce the sorted answer.

The recursion "bottoms out" when the sequence to be sorted has length 1, in which case there is no work to be done, since every sequence of length 1 is already in sorted order.
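The three steps can be captured in a generic recursive skeleton. The sketch below is my own schematic (the parameter names is_base, solve_base, divide, and combine are placeholders of this illustration, not the book's notation); it then phrases merge sort in that skeleton, borrowing Python's heapq.merge only to keep the example short. The book's own MERGE procedure follows below.

```python
import heapq

def divide_and_conquer(problem, is_base, solve_base, divide, combine):
    """Schematic divide-and-conquer driver: divide, recurse, combine."""
    if is_base(problem):
        return solve_base(problem)          # straightforward base case
    subproblems = divide(problem)           # Divide
    solutions = [divide_and_conquer(p, is_base, solve_base, divide, combine)
                 for p in subproblems]      # Conquer (recursively)
    return combine(solutions)               # Combine

print(divide_and_conquer(
    [5, 2, 4, 7, 1, 3, 2, 6],
    is_base=lambda a: len(a) <= 1,
    solve_base=lambda a: a,
    divide=lambda a: [a[:len(a) // 2], a[len(a) // 2:]],
    combine=lambda xs: list(heapq.merge(*xs))))   # [1, 2, 2, 3, 4, 5, 6, 7]
```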
The key operation of the merge sort algorithm is the merging of two sorted sequences in the "combine" step. We merge by calling an auxiliary procedure MERGE(A, p, q, r), where A is an array and p, q, and r are indices into the array such that p ≤ q < r. The procedure assumes that the subarrays A[p..q] and A[q+1..r] are in sorted order. It merges them to form a single sorted subarray that replaces the current subarray A[p..r].

Our MERGE procedure takes time Θ(n), where n = r - p + 1 is the total number of elements being merged, and it works as follows. Returning to our card-playing motif, suppose we have two piles of cards face up on a table. Each pile is sorted, with the smallest cards on top. We wish to merge the two piles into a single sorted output pile, which is to be face down on the table. Our basic step consists of choosing the smaller of the two cards on top of the face-up piles, removing it from its pile (which exposes a new top card), and placing this card face down onto the output pile. We repeat this step until one input pile is empty, at which time we just take the remaining input pile and place it face down onto the output pile. Computationally, each basic step takes constant time, since we are comparing just the two top cards. Since we perform at most n basic steps, merging takes Θ(n) time.

The following pseudocode implements the above idea, but with an additional twist that avoids having to check whether either pile is empty in each basic step. We place on the bottom of each pile a sentinel card, which contains a special value that we use to simplify our code. Here, we use ∞ as the sentinel value, so that whenever a card with ∞ is exposed, it cannot be the smaller card unless both piles have their sentinel cards exposed. But once that happens, all the nonsentinel cards have already been placed onto the output pile. Since we know in advance that exactly r - p + 1 cards will be placed onto the output pile, we can stop once we have performed that many basic steps.

MERGE(A, p, q, r)
1   n1 = q - p + 1
2   n2 = r - q
3   let L[1..n1+1] and R[1..n2+1] be new arrays
4   for i = 1 to n1
5       L[i] = A[p+i-1]
6   for j = 1 to n2
7       R[j] = A[q+j]
8   L[n1+1] = ∞
9   R[n2+1] = ∞
10  i = 1
11  j = 1
12  for k = p to r
13      if L[i] ≤ R[j]
14          A[k] = L[i]
15          i = i + 1
16      else A[k] = R[j]
17          j = j + 1

In detail, the MERGE procedure works as follows. Line 1 computes the length n1 of the subarray A[p..q], and line 2 computes the length n2 of the subarray A[q+1..r]. We create arrays L and R ("left" and "right"), of lengths n1+1 and n2+1, respectively, in line 3; the extra position in each array will hold the sentinel. The for loop of lines 4–5 copies the subarray A[p..q] into L[1..n1], and the for loop of lines 6–7 copies the subarray A[q+1..r] into R[1..n2]. Lines 8–9 put the sentinels at the ends of the arrays L and R.

[Figure 2.3 The operation of lines 10–17 in the call MERGE(A, 9, 12, 16), when the subarray A[9..16] contains the sequence ⟨2, 4, 5, 7, 1, 2, 3, 6⟩. After copying and inserting sentinels, the array L contains ⟨2, 4, 5, 7, ∞⟩, and the array R contains ⟨1, 2, 3, 6, ∞⟩. Lightly shaded positions in A contain their final values, and lightly shaded positions in L and R contain values that have yet to be copied back into A.
Taken together, the lightly shaded positions always comprise the values originally in A[9..16], along with the two sentinels. Heavily shaded positions in A contain values that will be copied over, and heavily shaded positions in L and R contain values that have already been copied back into A. (a)–(h) The arrays A, L, and R, and their respective indices k, i, and j prior to each iteration of the loop of lines 12–17. (i) The arrays and indices at termination. At this point, the subarray in A[9..16] is sorted, and the two sentinels in L and R are the only two elements in these arrays that have not been copied into A.]

Lines 10–17, illustrated in Figure 2.3, perform the r - p + 1 basic steps by maintaining the following loop invariant:

    At the start of each iteration of the for loop of lines 12–17, the subarray A[p..k-1] contains the k - p smallest elements of L[1..n1+1] and R[1..n2+1], in sorted order. Moreover, L[i] and R[j] are the smallest elements of their arrays that have not been copied back into A.

We must show that this loop invariant holds prior to the first iteration of the for loop of lines 12–17, that each iteration of the loop maintains the invariant, and that the invariant provides a useful property to show correctness when the loop terminates.

Initialization: Prior to the first iteration of the loop, we have k = p, so that the subarray A[p..k-1] is empty. This empty subarray contains the k - p = 0 smallest elements of L and R, and since i = j = 1, both L[i] and R[j] are the smallest elements of their arrays that have not been copied back into A.

Maintenance: To see that each iteration maintains the loop invariant, let us first suppose that L[i] ≤ R[j]. Then L[i] is the smallest element not yet copied back into A. Because A[p..k-1] contains the k - p smallest elements, after line 14 copies L[i] into A[k], the subarray A[p..k] will contain the k - p + 1 smallest elements. Incrementing k (in the for loop update) and i (in line 15) reestablishes the loop invariant for the next iteration. If instead L[i] > R[j], then lines 16–17 perform the appropriate action to maintain the loop invariant.

Termination: At termination, k = r + 1. By the loop invariant, the subarray A[p..k-1], which is A[p..r], contains the k - p = r - p + 1 smallest elements of L[1..n1+1] and R[1..n2+1], in sorted order. The arrays L and R together contain n1 + n2 + 2 = r - p + 3 elements. All but the two largest have been copied back into A, and these two largest elements are the sentinels.

To see that the MERGE procedure runs in Θ(n) time, where n = r - p + 1, observe that each of lines 1–3 and 8–11 takes constant time, the for loops of lines 4–7 take Θ(n1 + n2) = Θ(n) time,⁷ and there are n iterations of the for loop of lines 12–17, each of which takes constant time.

⁷ We shall see in Chapter 3 how to formally interpret equations containing Θ-notation.
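Transcribed into Python with 0-based, inclusive indices (my rendering, not the book's pseudocode; I use float('inf') to play the role of the sentinel ∞), MERGE might look like this:

```python
def merge(a, p, q, r):
    """Merge sorted a[p..q] and a[q+1..r] (0-based, inclusive) in place."""
    left = a[p:q + 1] + [float('inf')]      # sentinel at the bottom of each pile
    right = a[q + 1:r + 1] + [float('inf')]
    i = j = 0
    for k in range(p, r + 1):               # exactly r - p + 1 basic steps
        if left[i] <= right[j]:
            a[k] = left[i]
            i += 1
        else:
            a[k] = right[j]
            j += 1

a = [2, 4, 5, 7, 1, 2, 3, 6]
merge(a, 0, 3, 7)
print(a)    # [1, 2, 2, 3, 4, 5, 6, 7]
```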
We can now use the MERGE procedure as a subroutine in the merge sort algorithm. The procedure MERGE-SORT(A, p, r) sorts the elements in the subarray A[p..r]. If p ≥ r, the subarray has at most one element and is therefore already sorted. Otherwise, the divide step simply computes an index q that partitions A[p..r] into two subarrays: A[p..q], containing ⌈n/2⌉ elements, and A[q+1..r], containing ⌊n/2⌋ elements.⁸

MERGE-SORT(A, p, r)
1  if p < r
2      q = ⌊(p + r)/2⌋
3      MERGE-SORT(A, p, q)
4      MERGE-SORT(A, q+1, r)
5      MERGE(A, p, q, r)

⁸ The expression ⌈x⌉ denotes the least integer greater than or equal to x, and ⌊x⌋ denotes the greatest integer less than or equal to x. These notations are defined in Chapter 3. The easiest way to verify that setting q to ⌊(p + r)/2⌋ yields subarrays A[p..q] and A[q+1..r] of sizes ⌈n/2⌉ and ⌊n/2⌋, respectively, is to examine the four cases that arise depending on whether each of p and r is odd or even.

To sort the entire sequence A = ⟨A[1], A[2], ..., A[n]⟩, we make the initial call MERGE-SORT(A, 1, A.length), where once again A.length = n. Figure 2.4 illustrates the operation of the procedure bottom-up when n is a power of 2. The algorithm consists of merging pairs of 1-item sequences to form sorted sequences of length 2, merging pairs of sequences of length 2 to form sorted sequences of length 4, and so on, until two sequences of length n/2 are merged to form the final sorted sequence of length n.

[Figure 2.4 The operation of merge sort on the array A = ⟨5, 2, 4, 7, 1, 3, 2, 6⟩. The lengths of the sorted sequences being merged increase as the algorithm progresses from bottom to top.]
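Putting MERGE-SORT together with the merge sketch above gives a complete sort in Python (again my 0-based rendering, which assumes the merge function defined earlier; it is an illustration, not the book's code):

```python
def merge_sort(a, p, r):
    """Sort a[p..r] (0-based, inclusive), using merge from the sketch above."""
    if p < r:
        q = (p + r) // 2          # floor((p + r) / 2)
        merge_sort(a, p, q)       # sort the left half
        merge_sort(a, q + 1, r)   # sort the right half
        merge(a, p, q, r)         # combine the two sorted halves

a = [5, 2, 4, 7, 1, 3, 2, 6]
merge_sort(a, 0, len(a) - 1)
print(a)    # [1, 2, 2, 3, 4, 5, 6, 7]
```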
In Chapter 4, we shall see that this assumption does not affect the order of growth of the solution to the recurrence. We reason as follows to set up the recurrence for T .n/, the worstcase running time of merge sort on n numbers. Merge sort on just one element takes constant time. When we have n > 1 elements, we break down the running time as follows. Divide: The divide step just computes the middle of the subarray, which takes constant time. Thus, D.n/ D ‚.1/. Conquer: We recursively solve two subproblems, each of size n=2, which contributes 2T .n=2/ to the running time. Combine: We have already noted that the M ERGE procedure on an nelement subarray takes time ‚.n/, and so C.n/ D ‚.n/. When we add the functions D.n/ and C.n/ for the merge sort analysis, we are adding a function that is ‚.n/ and a function that is ‚.1/. This sum is a linear function of n, that is, ‚.n/. Adding it to the 2T .n=2/ term from the “conquer” step gives the recurrence for the worstcase running time T .n/ of merge sort: ( ‚.1/ if n D 1 ; T .n/