Design and Analysis of Algorithms Final
Name: _______________________________________
NYU NetID: __________________________________
Multiple Choice
 Employing the master theorem,
the solution to the recurrence T(n) = 2T(n/4) +
n^{0.51} is
 Θ(n^{2})
 Θ(n^{2} log n)
 Θ(n^{0.51})
 the master theorem can't be applied here.
 When we have an optimal substructure and only the locally
optimal choice matters, we should probably use:
 a straightforward recursive solution.
 dynamic programming.
 a memoized recursive algorithm.
 a greedy algorithm.
 If we have a binary search tree of height h, then
all searching functions such as min, max, and successor
will run in
 O(h^{2}) time.
 O(ln h) time.
 O(h) time.
 O(lg h) time.
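For illustration (not part of the question): a minimal sketch of one such function, finding the minimum of a binary search tree. The `Node` class here is hypothetical; the point is that the walk follows child pointers from the root, so its cost is bounded by the height h.

```python
# Sketch: tree minimum in a BST. The walk descends left-child
# pointers only, so it takes at most h steps on a tree of height h.

class Node:
    def __init__(self, key, left=None, right=None):
        self.key = key
        self.left = left
        self.right = right

def tree_minimum(node):
    # The BST property puts smaller keys to the left,
    # so the leftmost node holds the minimum.
    while node.left is not None:
        node = node.left
    return node

# Example: a small BST containing 2, 3, 5, 7.
root = Node(5, Node(3, Node(2)), Node(7))
print(tree_minimum(root).key)  # → 2
```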
 A depth-first search of this tree:
Will cover the nodes in what order?
 ACBHEFIKGJD
 ABHCFIKDEGJ
 ABCDEFGHIJK
 ABCHFIKDEGJ
 According to the binary search tree property, for nodes x and y,
 for all nodes y higher in the tree than x,
x.key ≤ y.key.
 for all nodes y in the left subtree of x,
y.key ≤ x.key.
 for all nodes y lower in the tree than x,
x.key ≤ y.key.
 all nodes in the right subtree of x will
be the successors of x.
 Employing the master theorem,
the solution to the recurrence T(n) = 0.25T(n/2) + 8n is
 Θ(n^{2})
 Θ(n^{2} log n)
 Θ(8n)
 the master theorem can't be applied here.
 For large n, an algorithm will run the slowest
if its time complexity is:
 O(n^{3})
 O((1.02)^{n})
 O(n^{5})
 O(500n log n)
 Matroids are essentially a theory of
 independence.
 connectivity.
 resilience.
 linearity.
 Consider the following alphabet with frequencies:

Symbol   Frequency
A        24
B        12
C        10
D        8
E        8
Which of the following is a possible Huffman coding for the
string "ABBCDAED"?
 11111101011100111110
 11001001011100111110
 01001001011100111110
 11001001011100111110
 When we have an optimal substructure and
overlapping subproblems, we should probably use:
 a straightforward recursive solution.
 dynamic programming.
 a merge sort.
 a greedy algorithm.
 Employing the master theorem,
the solution to the recurrence T(n) = 4T(n/2) + cn is
 Θ(n^{2})
 Θ(n^{2} log n)
 Θ(cn)
 the master theorem can't be applied here.
 We have one algorithm for processing customer records
with a run time of O(n), and another with a run time of O(lg n) + 2500.
In what circumstances might we want
to choose the O(n) algorithm?
 The algorithm will be distributed across a network.
 We believe our program will
always be dealing with a
number of records less than 2500.
 We are running on a slow CPU.
 If n is very large.
 We can sometimes be loose with our analysis of
divide-and-conquer algorithms,
in that we might omit details like
 recursions and function calls.
 exponential factors.
 floors, ceilings, and boundary conditions.
 any n raised to a power less than 3.
 Under what circumstances might we
most reasonably expect to
receive worst-case input to our binary search tree?
 We insert data into the tree as it arrives.
 The data has been randomly shuffled.
 A user types in the data from a sorted list.
 All of the above.
 When we have overlapping subproblems and we want to solve
them from the bottom up, we should employ:
 dynamic programming
 a recursive algorithm
 a recursive algorithm with memoization
 a greedy algorithm
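For illustration (not part of the question): a minimal bottom-up dynamic-programming sketch, using the Fibonacci numbers as the stock example of overlapping subproblems. Each subproblem is solved once, smallest first, so nothing is recomputed.

```python
# Sketch: bottom-up dynamic programming. The table is filled from the
# smallest subproblem upward; each entry reuses stored results rather
# than recursing into overlapping subproblems.

def fib_bottom_up(n):
    if n < 2:
        return n
    table = [0] * (n + 1)
    table[1] = 1
    for i in range(2, n + 1):
        table[i] = table[i - 1] + table[i - 2]  # reuse stored subproblems
    return table[n]

print(fib_bottom_up(10))  # → 55
```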
 When we randomize an algorithm, we then speak of its
 necessary running time.
 best-case running time.
 expected running time.
 unbounded running time.
 For large n, an algorithm will run the fastest
if its time complexity is:
 O(n!)
 O(n^{3})
 O(n^{4})
 O((1.02)^{n})
 Strongly connected components of a graph are
 components of a directed graph that can
each be reached from each other.
 components in a weighted graph where the
connections have the highest weights.
 components of a spanning tree.
 any vertices with edges between them.
 Which of these functions grows most slowly?
 lg n
 lg* n
 ln n
 log_{10} n
 For a greedy algorithm to work, the optimal choice
 must depend on many overlapping subproblems.
 must not depend on any future choices.
 must be dependent on a global optimal.
 must only be available after considering
all subproblems.
 Ω-notation applied to a function implies
 it is a function we know little about.
 it is asymptotically bound from above and below.
 only that it is asymptotically bound from below.
 only that it is asymptotically bound from above.
 In algorithm analysis, we usually analyze algorithms
in terms of
 actual running time in nanoseconds.
 the number of disk operations.
 the number of basic operations.
 CPU time used.
 In proving that a problem has the greedychoice property,
we often use a proof technique called:
 cut-and-paste.
 the master theorem.
 substitution.
 recursiontree analysis.
 The chief advantage of a doubly linked list over a singly linked list is
 inserts are faster.
 there are fewer pointers to maintain.
 we can back up as well as go forward in the list.
 less memory is used.
 Often, a good alternative to dynamic programming is
 a simple recursive solution.
 a recursive solution with memoization.
 a brute-force solution.
 all of the above.
 A random variable is actually
 the mean of all possible values.
 an algorithm.
 a precise real number value.
 a function.
 In the substitution method for solving recurrences,
we
 substitute a different recurrence for
the one characterizing our algorithm.
 guess a solution and then
use induction to prove it.
 substitute a polynomial factor for
an exponential one.
 substitute an n^{2} wherever
we see a 2^{n}
 Consider the following recursion tree.
What closed form best characterizes this tree?
 O(n/8)
 O(n log n)
 O(n^{2})
 O(log n)
 The worst case of hashing with chaining occurs when
 the input comes in sorted already.
 all inputs hash to different values.
 the input is purely random.
 all inputs hash to the same value.
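For illustration (not part of the question): a minimal sketch of chaining, using a deliberately bad hash function (hypothetical, for demonstration) that sends every key to the same slot. The chain then degenerates into one long list, so a search costs Θ(n) instead of the expected Θ(1).

```python
# Sketch: hashing with chaining in its worst case. bad_hash maps
# every key to slot 0, so all keys pile into a single chain.

def bad_hash(key, num_slots):
    return 0  # every key collides

table = [[] for _ in range(8)]
for key in [12, 44, 13, 88, 23]:
    table[bad_hash(key, 8)].append(key)

print(len(table[0]))  # → 5: all five keys chained in one slot
```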
 Consider this recurrence:
What is the solution to this recurrence?
 T(n) = Θ(n^{3})
 T(n) = Θ(n^{2})
 T(n) = Θ(n^{8})
 T(n) = Θ(n^{8/3})
 If you have available a simple algorithm with some
acceptable run time, and a more complex algorithm
with slightly faster run time, you should
 always choose the faster algorithm.
 always choose the simpler algorithm.
 consider how big your input is likely
to be before choosing either algorithm.
 ask your boss what to do.
 In graph theory, a tree is
 a forest with no edges.
 a cyclical graph with no forests.
 any walk on a graph.
 a connected graph with no cycles.
 Graph theory is useful for
 task management.
 map coloring.
 cellular telephone networks.
 all of the above.
 A problem with linear probing to resolve hash collisions is
 the code is extremely complex.
 clusters of keys will build up in a linked list.
 the hash table clusters around its central value.
 once inputs start to cluster in one area of the
hash table, they will become more likely
to do so.
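For illustration (not part of the question): a minimal sketch of linear probing. Once a run of occupied slots forms, any key hashing anywhere into that run lands at its end, so the run (the cluster) becomes ever more likely to grow.

```python
# Sketch: linear probing. Colliding keys probe successive slots,
# so keys that hash into an existing run extend that run.

def insert_linear_probe(table, key):
    m = len(table)
    i = key % m
    while table[i] is not None:  # probe the next slot until one is free
        i = (i + 1) % m
    table[i] = key
    return i

table = [None] * 8
for key in [0, 8, 16, 1]:  # 0, 8, and 16 all hash to slot 0
    insert_linear_probe(table, key)

print(table)  # the cluster now fills slots 0 through 3
```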
 Consider this graph:
Which of the following would be an adjacency list
representation of this graph?
 1 (2, 3, 4) 2 (1, 4) 3 (1, 4) 4 (2, 3, 5) 5 (4, 6)
6 (4, 5)
 1 (2, 3) 2 (1, 4) 3 (1, 4) 4 (2, 5, 6) 5 (4, 6)
6 (4, 5)
 1 (2, 3) 2 (1, 4) 3 (1, 4) 4 (2, 3, 5, 6) 5 (4, 6)
6 (4, 5)
 1 (2, 3) 2 (1, 4) 3 (1, 4) 4 (1, 3, 5, 6) 5 (4, 6)
6 (4, 5)
 Taking the graph from the previous question,
which of the following would be an adjacency matrix
representation of this graph?

0  1  1  0  0  0
1  0  0  1  0  0
1  0  0  1  0  0
0  1  1  0  1  1
0  0  0  1  0  1
0  0  0  1  1  0

0  1  0  0  0  0
1  0  0  1  0  0
1  0  0  1  0  0
0  1  1  1  1  1
0  0  0  1  0  1
0  0  0  1  1  0

0  1  1  0  0  0
1  0  0  1  0  0
1  0  0  1  0  1
0  1  1  0  1  1
0  0  1  1  0  1
0  0  0  1  1  0

0  1  1  0  0  0
1  0  0  1  0  0
0  0  0  1  0  0
0  1  1  0  1  1
0  0  0  1  0  1
1  0  0  1  1  0
 Besides running time, we can also measure algorithm performance
by:
 amount of disk-memory communication
 number of processors used
 amount of network bandwidth used
 all of the above
 An advantage of the adjacency matrix representation
over the adjacency list representation is
 it is smaller.
 it allows quicker searches for determining whether some
edge exists.
 it can represent weighted graphs.
 it can represent directed graphs.
 Which of the following matrices represents this directed
graph:

0  0  1  0 
0  1  1  0 
0  1  1  0 
0  0  1  0 

0  1  1  0 
0  1  1  0 
0  1  1  0 
0  1  1  0 

0  0  1  0 
1  0  0  0 
0  1  1  0 
1  0  0  1 

0  0  1  0 
1  0  0  0 
0  1  0  1 
1  0  0  0 
 A breadth-first search of this graph:
Will cover the nodes in what order?
 ABCDEFGHIJK
 ACBHEFIKGJD
 ABHCFIKDEGJ
 ABCHFIKDEGJ
 Θ-notation applied to a function implies
it is
 a function we know little about.
 asymptotically bound from above and below.
 asymptotically bound from below only.
 asymptotically bound from above only.
 Using the master theorem, the solution to the recurrence
T(n) = 3T(n/3) + n/2 is:
 Θ(n/3)
 Θ(n log n)
 Θ(n)
 the master theorem can't be applied here.
 Dynamic programming is preferred to simple recursive
solutions when
 the recursive solution is hard to grasp.
 all of the subproblems are completely
independent of each other.
 the recursion goes beyond 4 levels.
 the same subproblems must be solved multiple
times.
 Direct dictionaries are only practical when
 the number of keys is small.
 we don't care how slow lookups will be.
 we aren't worried about the large number of collisions
hashing will produce.
 linear probing is appropriate.
 A greedy algorithm is appropriate when
 we need to consider the global situation
when making any choice.
 there is extensive overlap among subproblems.
 only the locally best choice matters.
 greed is the goal of the algorithm choice.
 We can use a greedy algorithm to solve
 a rodcutting problem.
 a minimumspanningtree problem.
 a matrix-parenthesization problem.
 a shortestpath problem.
 If f(n) = ω(g(n)), that means
 g dominates f asymptotically
 f is bounded below by g asymptotically
 f is bounded above by g asymptotically
 f dominates g asymptotically
 Using the master theorem, the solution to the recurrence
T(n) = 64T(n/8) + n^{2} log n is:
 Θ(n^{2})
 Θ(n^{4} log n)
 Θ(n^{2} log n)
 the master theorem can't be applied here.
 The worstcase running time of a search on a binary search
tree is
 Θ(n^{2})
 Θ(n)
 Θ(lg n)
 Θ(1)
 A topological sort on a tree makes use of
 breadthfirst search.
 finding the minimum spanning tree.
 depthfirst search.
 finding the shortest path.
Problems
 Let us say we have an algorithm that runs
in 10n^{2} + 10 time.
We want to show its order of complexity is
Θ(n^{2}). Give
an example of a set of k_{1}, k_{2},
and n_{0} that would do this.
 How will a depth-first search process the following tree?
Write a number above each vertex indicating the order in which
it will be visited:
 Use an indicator random variable to show how many numbers we
have to pick from a set of n numbers before we are
more likely than not to have drawn the same number twice.
 Describe why the fractional knapsack problem can be solved with
a greedy algorithm, but the 0-1 (whole-item) knapsack problem cannot.
 Prove that Kruskal's algorithm can correctly find a minimum
spanning tree.