Analysis of Algorithms

Input → Algorithm → Output

An algorithm is a step-by-step procedure for solving a problem in a finite amount of time.

Running Time (§1.1)
- Most algorithms transform input objects into output objects.
- The running time of an algorithm typically grows with the input size.
- Average-case time is often difficult to determine.
- We focus on the worst-case running time:
  - Easier to analyze
  - Crucial to applications such as games, finance and robotics
[Figure: best-case, average-case and worst-case running time (0–120 ms) plotted against input size (1000–4000).]
Experimental Studies (§1.6)
- Write a program implementing the algorithm.
- Run the program with inputs of varying size and composition.
- Use a method like System.currentTimeMillis() to get an accurate measure of the actual running time.
- Plot the results.
[Figure: measured running time in milliseconds (0–9000) plotted against input size (50–100).]

Limitations of Experiments
- It is necessary to implement the algorithm, which may be difficult.
- Results may not be indicative of the running time on other inputs not included in the experiment.
- In order to compare two algorithms, the same hardware and software environments must be used.
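As a concrete sketch of this workflow — the algorithm under test, the input sizes, and the class name are illustrative assumptions, not from the slides:

```java
import java.util.Random;

public class TimingExperiment {
    // Hypothetical algorithm under test: sum the elements of an array.
    static long sum(int[] a) {
        long s = 0;
        for (int x : a) s += x;
        return s;
    }

    public static void main(String[] args) {
        Random rnd = new Random(42);
        // Run the program with inputs of varying size.
        for (int n = 100_000; n <= 400_000; n *= 2) {
            int[] input = rnd.ints(n).toArray();
            long start = System.currentTimeMillis();      // wall-clock start
            sum(input);
            long elapsed = System.currentTimeMillis() - start;
            System.out.println(n + "\t" + elapsed + " ms"); // one data point for the plot
        }
    }
}
```

The printed (size, time) pairs are exactly the data one would plot; note that the measured times reflect this particular machine and JVM, which is the comparability limitation discussed above.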
Theoretical Analysis
- Uses a high-level description of the algorithm instead of an implementation.
- Characterizes running time as a function of the input size, n.
- Takes into account all possible inputs.
- Allows us to evaluate the speed of an algorithm independent of the hardware/software environment.

Pseudocode (§1.1)
- High-level description of an algorithm
- More structured than English prose
- Less detailed than a program
- Preferred notation for describing algorithms
- Hides program design issues

Example: find the max element of an array

Algorithm arrayMax(A, n)
    Input array A of n integers
    Output maximum element of A
    currentMax ← A[0]
    for i ← 1 to n − 1 do
        if A[i] > currentMax then
            currentMax ← A[i]
    return currentMax
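The pseudocode above translates almost line for line into Java; this sketch is an illustration, not part of the original slides:

```java
public class ArrayMax {
    // Direct translation of the arrayMax pseudocode.
    static int arrayMax(int[] a, int n) {
        int currentMax = a[0];              // currentMax <- A[0]
        for (int i = 1; i <= n - 1; i++) {  // for i <- 1 to n - 1 do
            if (a[i] > currentMax) {        // if A[i] > currentMax then
                currentMax = a[i];          //     currentMax <- A[i]
            }
        }
        return currentMax;                  // return currentMax
    }

    public static void main(String[] args) {
        int[] a = {3, 9, 2, 7};
        System.out.println(arrayMax(a, a.length)); // prints 9
    }
}
```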
Pseudocode Details
- Control flow:
  - if … then … [else …]
  - while … do …
  - repeat … until …
  - for … do …
- Method declaration:
  Algorithm method (arg [, arg…])
      Input …
      Output …
- Method call:
  var.method (arg [, arg…])
- Return value:
  return expression
- Expressions:
  - ← Assignment (like = in Java)
  - = Equality testing (like == in Java)
  - n² Superscripts and other mathematical formatting allowed

The Random Access Machine (RAM) Model
- A CPU
- A potentially unbounded bank of memory cells, each of which can hold an arbitrary number or character
- Memory cells are numbered and accessing any cell in memory takes unit time.
[Figure: a CPU connected to a row of memory cells numbered 0, 1, 2, …]

Primitive Operations
- Basic computations performed by an algorithm
- Identifiable in pseudocode
- Largely independent from the programming language
- Exact definition not important (we will see why later)
- Assumed to take a constant amount of time in the RAM model
- Examples:
  - Evaluating an expression
  - Assigning a value to a variable
  - Indexing into an array
  - Calling a method
  - Returning from a method

Counting Primitive Operations (§1.1)
- By inspecting the pseudocode, we can determine the maximum number of primitive operations executed by an algorithm, as a function of the input size.

  Algorithm arrayMax(A, n)                 # operations
      currentMax ← A[0]                    2
      for i ← 1 to n − 1 do                2 + n
          if A[i] > currentMax then        2(n − 1)
              currentMax ← A[i]            2(n − 1)
          { increment counter i }          2(n − 1)
      return currentMax                    1
                                   Total   7n − 1

Estimating Running Time
- Algorithm arrayMax executes 7n − 1 primitive operations in the worst case. Define:
  a = Time taken by the fastest primitive operation
  b = Time taken by the slowest primitive operation
- Let T(n) be the worst-case time of arrayMax. Then
  a (7n − 1) ≤ T(n) ≤ b (7n − 1)
- Hence, the running time T(n) is bounded by two linear functions.

Growth Rate of Running Time
- Changing the hardware/software environment
  - Affects T(n) by a constant factor, but
  - Does not alter the growth rate of T(n)
- The linear growth rate of the running time T(n) is an intrinsic property of algorithm arrayMax.
Growth Rates
- Growth rates of functions:
  - Linear ≈ n
  - Quadratic ≈ n²
  - Cubic ≈ n³
- In a log-log chart, the slope of the line corresponds to the growth rate of the function.
[Figure: log-log plot of T(n) against n for the linear, quadratic and cubic functions, n from 1E+0 to 1E+10.]

Constant Factors
- The growth rate is not affected by
  - constant factors or
  - lower-order terms
- Examples:
  - 10²n + 10⁵ is a linear function
  - 10⁵n² + 10⁸n is a quadratic function
[Figure: log-log plot showing 10²n + 10⁵ parallel to the linear function and 10⁵n² + 10⁸n parallel to the quadratic function.]
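The log-log slope claim above can be checked numerically; a small sketch (the method and class names are illustrative):

```java
public class LogLogSlope {
    // Slope of f in a log-log chart, measured between n and 2n:
    // (log f(2n) - log f(n)) / (log 2n - log n).
    static double slope(java.util.function.DoubleUnaryOperator f, double n) {
        return (Math.log(f.applyAsDouble(2 * n)) - Math.log(f.applyAsDouble(n)))
                / (Math.log(2 * n) - Math.log(n));
    }

    public static void main(String[] args) {
        System.out.println(slope(x -> x, 1e6));         // ≈ 1 for the linear function
        System.out.println(slope(x -> x * x, 1e6));     // ≈ 2 for the quadratic function
        System.out.println(slope(x -> x * x * x, 1e6)); // ≈ 3 for the cubic function
    }
}
```

Multiplying f by a constant shifts the line up in the log-log chart but leaves the slope, i.e. the growth rate, unchanged.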
Big-Oh Notation (§1.2)
- Given functions f(n) and g(n), we say that f(n) is O(g(n)) if there are positive constants c and n₀ such that
  f(n) ≤ c·g(n) for n ≥ n₀
- Example: 2n + 10 is O(n)
  - 2n + 10 ≤ cn
  - (c − 2) n ≥ 10
  - n ≥ 10/(c − 2)
  - Pick c = 3 and n₀ = 10
[Figure: log-log plot of 3n, 2n + 10 and n, for n from 1 to 1,000.]

Big-Oh Example
- Example: the function n² is not O(n)
  - n² ≤ cn
  - n ≤ c
  - The above inequality cannot be satisfied since c must be a constant
[Figure: log-log plot of n², 100n, 10n and n, for n from 1 to 1,000.]
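The witnesses c = 3, n₀ = 10 from the example above can be checked over a large range of n (an exhaustive check is not a proof, just a sanity test):

```java
public class BigOhCheck {
    // Does 2n + 10 <= 3n hold for all tested n >= 10?
    static boolean holds() {
        for (long n = 10; n <= 1_000_000; n++) {
            if (2 * n + 10 > 3 * n) return false;
        }
        return true;
    }

    public static void main(String[] args) {
        System.out.println(holds()); // prints true
    }
}
```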
More Big-Oh Examples
- 7n − 2 is O(n)
  need c > 0 and n₀ ≥ 1 such that 7n − 2 ≤ c·n for n ≥ n₀
  this is true for c = 7 and n₀ = 1
- 3n³ + 20n² + 5 is O(n³)
  need c > 0 and n₀ ≥ 1 such that 3n³ + 20n² + 5 ≤ c·n³ for n ≥ n₀
  this is true for c = 4 and n₀ = 21
- 3 log n + log log n is O(log n)
  need c > 0 and n₀ ≥ 1 such that 3 log n + log log n ≤ c·log n for n ≥ n₀
  this is true for c = 4 and n₀ = 2

Big-Oh and Growth Rate
- The big-Oh notation gives an upper bound on the growth rate of a function.
- The statement "f(n) is O(g(n))" means that the growth rate of f(n) is no more than the growth rate of g(n).
- We can use the big-Oh notation to rank functions according to their growth rate.

                     f(n) is O(g(n))   g(n) is O(f(n))
  g(n) grows more    Yes               No
  f(n) grows more    No                Yes
  Same growth        Yes               Yes
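The constants claimed in the three examples above can likewise be sanity-checked over a range of n (logs taken base 2, as is conventional here):

```java
public class MoreBigOhChecks {
    // Check the three (c, n0) witnesses from the slide over a test range.
    static boolean allHold() {
        // 7n - 2 <= 7n for n >= 1
        for (long n = 1; n <= 100_000; n++)
            if (7 * n - 2 > 7 * n) return false;
        // 3n^3 + 20n^2 + 5 <= 4n^3 for n >= 21
        for (long n = 21; n <= 100_000; n++)
            if (3 * n * n * n + 20 * n * n + 5 > 4 * n * n * n) return false;
        // 3 log n + log log n <= 4 log n for n >= 2
        for (long n = 2; n <= 100_000; n++) {
            double log2n = Math.log(n) / Math.log(2);
            if (3 * log2n + Math.log(log2n) / Math.log(2) > 4 * log2n) return false;
        }
        return true;
    }

    public static void main(String[] args) {
        System.out.println(allHold()); // prints true
    }
}
```

Note that n₀ = 21 matters in the second check: at n = 20, 3n³ + 20n² + 5 = 32005 already exceeds 4n³ = 32000.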
Big-Oh Rules
- If f(n) is a polynomial of degree d, then f(n) is O(n^d), i.e.,
  - Drop lower-order terms
  - Drop constant factors
- Use the smallest possible class of functions
  - Say "2n is O(n)" instead of "2n is O(n²)"
- Use the simplest expression of the class
  - Say "3n + 5 is O(n)" instead of "3n + 5 is O(3n)"

Asymptotic Algorithm Analysis
- The asymptotic analysis of an algorithm determines the running time in big-Oh notation.
- To perform the asymptotic analysis
  - We find the worst-case number of primitive operations executed as a function of the input size
  - We express this function with big-Oh notation
- Example:
  - We determine that algorithm arrayMax executes at most 7n − 1 primitive operations
  - We say that algorithm arrayMax "runs in O(n) time"
- Since constant factors and lower-order terms are eventually dropped anyhow, we can disregard them when counting primitive operations.
Computing Prefix Averages
- We further illustrate asymptotic analysis with two algorithms for prefix averages.
- The i-th prefix average of an array X is the average of the first (i + 1) elements of X:
  A[i] = (X[0] + X[1] + … + X[i])/(i + 1)
- Computing the array A of prefix averages of another array X has applications to financial analysis.
[Figure: bar chart comparing X and its prefix averages A for i = 1 to 7.]

Prefix Averages (Quadratic)
- The following algorithm computes prefix averages in quadratic time by applying the definition.

  Algorithm prefixAverages1(X, n)
      Input array X of n integers
      Output array A of prefix averages of X    # operations
      A ← new array of n integers               n
      for i ← 0 to n − 1 do                     n
          s ← X[0]                              n
          for j ← 1 to i do                     1 + 2 + … + (n − 1)
              s ← s + X[j]                      1 + 2 + … + (n − 1)
          A[i] ← s / (i + 1)                    n
      return A                                  1
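A Java sketch of prefixAverages1 (the slides' arrays hold integers; a double[] result is used here so the averages are not truncated):

```java
import java.util.Arrays;

public class PrefixAverages1 {
    // Direct translation of prefixAverages1: recompute the sum for each i.
    static double[] prefixAverages1(int[] x) {
        int n = x.length;
        double[] a = new double[n];        // A <- new array of n integers
        for (int i = 0; i < n; i++) {      // for i <- 0 to n - 1 do
            double s = x[0];               //     s <- X[0]
            for (int j = 1; j <= i; j++) { //     for j <- 1 to i do
                s = s + x[j];              //         s <- s + X[j]
            }
            a[i] = s / (i + 1);            //     A[i] <- s / (i + 1)
        }
        return a;
    }

    public static void main(String[] args) {
        System.out.println(Arrays.toString(prefixAverages1(new int[]{1, 3, 5, 7})));
        // prints [1.0, 2.0, 3.0, 4.0]
    }
}
```

The nested loop makes the quadratic cost visible: the inner loop body runs 0 + 1 + … + (n − 1) times in total.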
Arithmetic Progression
- The running time of prefixAverages1 is O(1 + 2 + … + n)
- The sum of the first n integers is n(n + 1)/2
  - There is a simple visual proof of this fact
- Thus, algorithm prefixAverages1 runs in O(n²) time
[Figure: bar chart of the visual proof, columns of height 1 to 7.]

Prefix Averages (Linear)
- The following algorithm computes prefix averages in linear time by keeping a running sum.

  Algorithm prefixAverages2(X, n)
      Input array X of n integers
      Output array A of prefix averages of X    # operations
      A ← new array of n integers               n
      s ← 0                                     1
      for i ← 0 to n − 1 do                     n
          s ← s + X[i]                          n
          A[i] ← s / (i + 1)                    n
      return A                                  1

- Algorithm prefixAverages2 runs in O(n) time
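The linear-time variant in the same sketch style — the running sum s replaces the inner loop entirely:

```java
import java.util.Arrays;

public class PrefixAverages2 {
    // Direct translation of prefixAverages2: keep a running sum s.
    static double[] prefixAverages2(int[] x) {
        int n = x.length;
        double[] a = new double[n];   // A <- new array of n integers
        double s = 0;                 // s <- 0
        for (int i = 0; i < n; i++) { // for i <- 0 to n - 1 do
            s = s + x[i];             //     s <- s + X[i]
            a[i] = s / (i + 1);       //     A[i] <- s / (i + 1)
        }
        return a;
    }

    public static void main(String[] args) {
        System.out.println(Arrays.toString(prefixAverages2(new int[]{1, 3, 5, 7})));
        // prints [1.0, 2.0, 3.0, 4.0]
    }
}
```

Both versions compute the same array; only the operation count differs, which is exactly the point of the comparison.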
Intuition for Asymptotic Notation
- big-Oh: f(n) is O(g(n)) if f(n) is asymptotically less than or equal to g(n)
- big-Omega: f(n) is Ω(g(n)) if f(n) is asymptotically greater than or equal to g(n)
- big-Theta: f(n) is Θ(g(n)) if f(n) is asymptotically equal to g(n)
- little-oh: f(n) is o(g(n)) if f(n) is asymptotically strictly less than g(n)
- little-omega: f(n) is ω(g(n)) if f(n) is asymptotically strictly greater than g(n)
Example Uses of the Relatives of Big-Oh
- 5n² is Ω(n²)
  f(n) is Ω(g(n)) if there is a constant c > 0 and an integer constant n₀ ≥ 1 such that f(n) ≥ c·g(n) for n ≥ n₀
  let c = 5 and n₀ = 1
- 5n² is Ω(n)
  f(n) is Ω(g(n)) if there is a constant c > 0 and an integer constant n₀ ≥ 1 such that f(n) ≥ c·g(n) for n ≥ n₀
  let c = 1 and n₀ = 1
- 5n² is ω(n)
  f(n) is ω(g(n)) if, for any constant c > 0, there is an integer constant n₀ ≥ 0 such that f(n) ≥ c·g(n) for n ≥ n₀
  need 5n² ≥ c·n → given c, the n₀ that satisfies this is n₀ ≥ c/5
Relatives of Big-Oh
- big-Omega
  f(n) is Ω(g(n)) if there is a constant c > 0 and an integer constant n₀ ≥ 1 such that f(n) ≥ c·g(n) for n ≥ n₀
- big-Theta
  f(n) is Θ(g(n)) if there are constants c′ > 0 and c″ > 0 and an integer constant n₀ ≥ 1 such that c′·g(n) ≤ f(n) ≤ c″·g(n) for n ≥ n₀
- little-oh
  f(n) is o(g(n)) if, for any constant c > 0, there is an integer constant n₀ ≥ 0 such that f(n) ≤ c·g(n) for n ≥ n₀
- little-omega
  f(n) is ω(g(n)) if, for any constant c > 0, there is an integer constant n₀ ≥ 0 such that f(n) ≥ c·g(n) for n ≥ n₀

Math you need to Review
- Summations (Sec. 1.3.1)
- Logarithms and Exponents (Sec. 1.3.2)
  - properties of logarithms:
    log_b(xy) = log_b x + log_b y
    log_b(x/y) = log_b x − log_b y
    log_b x^a = a·log_b x
    log_b a = log_x a / log_x b
  - properties of exponentials:
    a^(b+c) = a^b · a^c
    a^(bc) = (a^b)^c
    a^b / a^c = a^(b−c)
    b = a^(log_a b)
    b^c = a^(c·log_a b)
- Proof techniques (Sec. 1.3.3)
- Basic probability (Sec. 1.3.4)
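The logarithm identities above can be exercised numerically; Java's Math.log is the natural log, so the change-of-base rule log_b x = ln x / ln b gives an arbitrary-base logarithm (class and method names here are illustrative):

```java
public class LogProperties {
    // log base b of x, via the change-of-base rule log_b x = ln x / ln b.
    static double log(double b, double x) {
        return Math.log(x) / Math.log(b);
    }

    public static void main(String[] args) {
        double b = 2, x = 8, y = 32;
        // Each difference below should be ~0, up to floating-point error.
        System.out.println(log(b, x * y) - (log(b, x) + log(b, y))); // log_b(xy) = log_b x + log_b y
        System.out.println(log(b, x / y) - (log(b, x) - log(b, y))); // log_b(x/y) = log_b x - log_b y
        System.out.println(log(b, Math.pow(x, 3)) - 3 * log(b, x));  // log_b x^a = a log_b x
    }
}
```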