An algorithm is a finite sequence of precise instructions for performing a computation or for solving a problem.
Algorithm for finding the maximum (largest) value in a finite sequence of integers. Solution steps:
procedure max(a_1, a_2, . . . , a_n: integers)
max := a_1
for i := 2 to n
    if max < a_i then max := a_i
return max {max is the largest element}
Properties of an Algorithm
Input: an algorithm has input values from a specified set.
Output: from each set of input values, an algorithm produces output values from a specified set; these are the solution to the problem.
Definiteness: the steps of an algorithm must be defined precisely.
Correctness: an algorithm should produce the correct output values for each set of input values.
Finiteness: an algorithm should produce the desired output after a finite number of steps.
Effectiveness: it must be possible to perform each step of the algorithm exactly and in a finite amount of time.
Generality: the procedure should be applicable to all problems of the desired form, not just a particular set of input values.
The problem of locating an element in an ordered list occurs in many contexts. For instance, a program that checks the spelling of words searches for them in a dictionary, which is just an ordered list of words. Problems of this kind are called searching problems.
procedure linear search(x: integer, a_1, a_2, . . . , a_n: distinct integers)
i := 1
while (i ≤ n and x ≠ a_i)
    i := i + 1
if i ≤ n then location := i
else location := 0
return location
{location is the subscript of the term that equals x, or is 0 if x is not found}
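As a sanity check, the linear search pseudocode above can be sketched in Python (the function name is illustrative; Python lists are 0-indexed, so the returned location is 1-based to match the pseudocode's convention):

```python
def linear_search(x, a):
    """Return the 1-based position of x in the list a, or 0 if x is absent,
    mirroring the pseudocode's location convention."""
    i = 1
    while i <= len(a) and x != a[i - 1]:  # a_i is a[i - 1] in 0-indexed Python
        i += 1
    return i if i <= len(a) else 0
```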
This algorithm can be used when the list has terms occurring in order of increasing size (for instance: if the terms are numbers, they are listed from smallest to largest; if they are words, they are listed in lexicographic, or alphabetic, order). This second searching algorithm is called the binary search algorithm.
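The binary search pseudocode itself does not appear above; a minimal Python sketch of the standard algorithm (repeatedly halving the search interval, again with a 1-based location and 0 for "not found") might look like:

```python
def binary_search(x, a):
    """Binary search in a sorted list a: return the 1-based location of x,
    or 0 if x is not present."""
    lo, hi = 1, len(a)            # lo and hi bracket the search interval
    while lo < hi:
        mid = (lo + hi) // 2
        if x > a[mid - 1]:
            lo = mid + 1          # x can only be in the upper half
        else:
            hi = mid              # x is at mid or in the lower half
    return lo if a and a[lo - 1] == x else 0
```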
Pseudocode
procedure bubblesort(a_1, a_2, . . . , a_n: real numbers with n ≥ 2)
for i := 1 to n − 1
    for j := 1 to n − i
        if a_j > a_{j+1} then interchange a_j and a_{j+1}
{a_1, . . . , a_n is in increasing order}
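One way to express the bubble sort pseudocode in Python (sorting in place, as the pseudocode does):

```python
def bubble_sort(a):
    """In-place bubble sort: after pass i, the i largest elements
    occupy their final positions at the end of the list."""
    n = len(a)
    for i in range(1, n):             # passes 1 .. n - 1
        for j in range(0, n - i):     # compare adjacent pair a[j], a[j+1]
            if a[j] > a[j + 1]:
                a[j], a[j + 1] = a[j + 1], a[j]   # interchange them
    return a
```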
procedure insertionsort(a_1, a_2, . . . , a_n: real numbers with n ≥ 2)
for j := 2 to n
    i := 1
    while a_j > a_i
        i := i + 1
    m := a_j
    for k := 0 to j − i − 1
        a_{j−k} := a_{j−k−1}
    a_i := m
{a_1, . . . , a_n is in increasing order}
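The insertion sort pseudocode translates to Python as follows (the slice assignment performs the same right-shift as the inner for loop over k):

```python
def insertion_sort(a):
    """In-place insertion sort: insert a[j] into its correct position
    within the already-sorted prefix a[0..j-1]."""
    for j in range(1, len(a)):
        i = 0
        while a[j] > a[i]:            # find the insertion point
            i += 1
        m = a[j]
        a[i + 1 : j + 1] = a[i:j]     # shift a[i..j-1] one place right
        a[i] = m
    return a
```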
Algorithms that make what seems to be the best
choice at each step are called greedy algorithms.
Greedy Change-Making Algorithm
procedure change(c_1, c_2, . . . , c_r: values of denominations of coins, where c_1 > c_2 > · · · > c_r; n: a positive integer)
for i := 1 to r
    d_i := 0 {d_i counts the coins of denomination c_i used}
    while n ≥ c_i
        d_i := d_i + 1 {add a coin of denomination c_i}
        n := n − c_i
{d_i is the number of coins of denomination c_i in the change for i = 1, 2, . . . , r}
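A Python sketch of the greedy change-making procedure, using the U.S. denominations (quarters, dimes, nickels, pennies, in cents) discussed below as the default:

```python
def make_change(n, denominations=(25, 10, 5, 1)):
    """Greedy change-making: repeatedly use the largest denomination
    that does not exceed the remaining amount n (in cents).
    denominations must be listed in decreasing order."""
    counts = {}
    for c in denominations:
        counts[c] = 0
        while n >= c:
            counts[c] += 1        # add a coin of denomination c
            n -= c
    return counts
```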
LEMMA 1
If n is a positive integer, then n cents in change using quarters, dimes, nickels, and pennies using the fewest coins possible has at most two dimes, at most one nickel, at most four pennies, and cannot have two dimes and a nickel. The amount of change in dimes, nickels, and pennies cannot exceed 24 cents.
THEOREM 1
The greedy algorithm produces change using the
fewest coins possible.
Greedy Algorithm for Scheduling Talks
procedure schedule(s_1, s_2, . . . , s_n: start times of talks; e_1, e_2, . . . , e_n: ending times of talks)
sort talks by finish time and reorder so that e_1 ≤ e_2 ≤ · · · ≤ e_n
S := ∅
for j := 1 to n
    if talk j is compatible with S then
        S := S ∪ {talk j}
return S {S is the set of talks scheduled}
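The greedy scheduling procedure can be sketched in Python with talks given as (start, end) pairs; picking talks in order of earliest finish time is what makes the greedy choice optimal here:

```python
def schedule(talks):
    """Greedy talk scheduling: talks is a list of (start, end) pairs.
    Select talks in order of earliest end time, keeping each talk that
    is compatible with (starts no earlier than) the talks chosen so far."""
    selected = []
    last_end = 0
    for start, end in sorted(talks, key=lambda t: t[1]):  # by finish time
        if start >= last_end:          # compatible with all chosen talks
            selected.append((start, end))
            last_end = end
    return selected
```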
DEFINITION 1
Let f and g be functions from the set of integers or the set of real numbers to the set of real numbers. We say that f(x) is O(g(x)) if there are constants C and k such that |f(x)| ≤ C|g(x)| whenever x > k. (This is read as "f(x) is big-oh of g(x).")
Example 1
Show that f(x) = 7x^2 is O(x^3).
Solution: When x > 7, 7x^2 < x^3. Consequently, we can take C = 1 and k = 7 as witnesses to establish that 7x^2 is O(x^3). Alternatively, when x > 1, we have 7x^2 < 7x^3, so that C = 7 and k = 1 are also witnesses to the relationship 7x^2 is O(x^3).
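The witness condition is easy to check numerically. The following Python helper (an illustrative sketch, not a proof, since it only samples finitely many points) tests |f(x)| ≤ C|g(x)| for sample values x > k:

```python
def check_big_o_witnesses(f, g, C, k, xs):
    """Numerically check the big-O witness condition |f(x)| <= C*|g(x)|
    at each sample point x > k. A sanity check, not a proof."""
    return all(abs(f(x)) <= C * abs(g(x)) for x in xs if x > k)
```

For f(x) = 7x^2 and g(x) = x^3, both witness pairs from Example 1 pass, while C = 1, k = 1 fails (at x = 2, 7x^2 = 28 > x^3 = 8).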
THEOREM 1
Let f(x) = a_n x^n + a_{n−1} x^{n−1} + · · · + a_1 x + a_0, where a_0, a_1, . . . , a_{n−1}, a_n are real numbers. Then f(x) is O(x^n).
A Display of the Growth of Functions Commonly Used in Big-O Estimates.
THEOREM 2
Suppose that f_1(x) is O(g_1(x)) and that f_2(x) is O(g_2(x)). Then (f_1 + f_2)(x) is O(max(|g_1(x)|, |g_2(x)|)).
COROLLARY 1
Suppose that f_1(x) and f_2(x) are both O(g(x)). Then (f_1 + f_2)(x) is O(g(x)).
THEOREM 3
Suppose that f_1(x) is O(g_1(x)) and f_2(x) is O(g_2(x)). Then (f_1 f_2)(x) is O(g_1(x) g_2(x)).
DEFINITION 2
Let f and g be functions from the set of integers or the set of real numbers to the set of real numbers. We say that f(x) is Ω(g(x)) if there are positive constants C and k such that |f(x)| ≥ C|g(x)| whenever x > k. (This is read as "f(x) is big-Omega of g(x).")
Example: Show that f(x) = x^3 is Ω(7x^2). Solution: When x > 7, x^3 > 7x^2, so we can take C = 1 and k = 7 as witnesses.
DEFINITION 3
Let f and g be functions from the set of integers or the set of real numbers to the set of real numbers. We say that f(x) is Θ(g(x)) if f(x) is O(g(x)) and f(x) is Ω(g(x)). When f(x) is Θ(g(x)), we say that f is big-Theta of g(x), that f(x) is of order g(x), and that f(x) and g(x) are of the same order.
Example: Show that 3x^2 + 8x log x is Θ(x^2). Solution: For x > 1 we have 8x log x ≤ 8x · x = 8x^2, so 3x^2 + 8x log x ≤ 11x^2, which gives O(x^2) with C = 11 and k = 1. Also 3x^2 + 8x log x ≥ 3x^2 for x > 1, which gives Ω(x^2) with C = 3.
THEOREM 4
Let f(x) = a_n x^n + a_{n−1} x^{n−1} + · · · + a_1 x + a_0, where a_0, a_1, . . . , a_n are real numbers with a_n ≠ 0. Then f(x) is of order x^n.
An analysis of the time required to solve a problem of a particular size involves the time complexity of the algorithm. The time complexity of an algorithm can be expressed in terms of the number of operations used by the algorithm when the input has a particular size. In most cases we consider the worst-case time complexity of an algorithm. This provides an upper bound on the number of operations an algorithm uses to solve a problem with input of a particular size.
Complexity Analysis of Some Algorithms
Example: Describe the time complexity of the algorithm for finding the maximum element in a finite set of integers. Solution: The loop makes one comparison max < a_i for each of the n − 1 remaining elements, plus n loop-termination tests, for 2(n − 1) + 1 = 2n − 1 comparisons in the worst case. Hence the time complexity is Θ(n).
The definition of the product of two matrices can be expressed as an algorithm for computing the product of two matrices. Suppose that C = [c_ij] is the m ✕ n matrix that is the product of the m ✕ k matrix A = [a_ij] and the k ✕ n matrix B = [b_ij].
Pseudocode Matrix Multiplication
procedure matrix multiplication(A, B: matrices)
for i := 1 to m
    for j := 1 to n
        c_ij := 0
        for q := 1 to k
            c_ij := c_ij + a_iq · b_qj
return C {C = [c_ij] is the product of A and B}
Complexity calculation: How many additions of integers and multiplications of integers are used by the matrix multiplication algorithm to multiply two n ✕ n matrices? There are n^2 entries in the product, and each entry takes n multiplications and n − 1 additions, so the algorithm uses n^3 multiplications and n^2(n − 1) additions: O(n^3) operations in all.
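The definition-based matrix multiplication algorithm can be sketched in Python with nested lists as matrices (the function name is illustrative):

```python
def matrix_multiply(A, B):
    """Multiply an m x k matrix A by a k x n matrix B directly from the
    definition: C[i][j] = sum over q of A[i][q] * B[q][j].
    For two n x n matrices this performs n**3 multiplications and
    n**2 * (n - 1) additions, i.e. O(n**3) operations."""
    m, k, n = len(A), len(B), len(B[0])
    C = [[0] * n for _ in range(m)]
    for i in range(m):
        for j in range(n):
            for q in range(k):
                C[i][j] += A[i][q] * B[q][j]
    return C
```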
An algorithmic paradigm is a general approach based on a particular concept that can be used to construct algorithms for solving a variety of problems. Greedy algorithms, divide-and-conquer algorithms, brute-force algorithms, and probabilistic algorithms are examples of algorithmic paradigms.
Example:
Brute Force Algorithms
In a brute-force algorithm, a problem is solved in the most straightforward manner, without taking advantage of any ideas that can make the algorithm more efficient. Construct a brute-force algorithm for finding the closest pair of points in a set of n points in the plane, and provide a worst-case estimate of the number of arithmetic operations.
Pseudocode for Brute-Force Closest Point Algorithm
Complexity of the Algorithm
The algorithm loops through n(n − 1)/2 pairs of points, computes the value (x_j − x_i)^2 + (y_j − y_i)^2 for each pair, and compares it with the minimum found so far. So the algorithm uses Θ(n^2) arithmetic and comparison operations.
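The brute-force closest-pair algorithm can be sketched in Python; comparing squared distances avoids the square root, as in the analysis above:

```python
def closest_pair(points):
    """Brute-force closest pair: examine all n(n-1)/2 pairs of (x, y)
    points and keep the pair with the smallest squared distance.
    Theta(n**2) arithmetic and comparison operations."""
    best = None
    best_d2 = float("inf")
    n = len(points)
    for i in range(n):
        for j in range(i + 1, n):
            (xi, yi), (xj, yj) = points[i], points[j]
            d2 = (xj - xi) ** 2 + (yj - yi) ** 2   # squared distance
            if d2 < best_d2:
                best_d2, best = d2, (points[i], points[j])
    return best
```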
Commonly Used Terminology for the Complexity of Algorithms.

| Complexity | Terminology |
|---|---|
| Θ(1) | Constant complexity |
| Θ(log n) | Logarithmic complexity |
| Θ(n) | Linear complexity |
| Θ(n log n) | Linearithmic complexity |
| Θ(n^b) | Polynomial complexity |
| Θ(b^n), where b > 1 | Exponential complexity |
| Θ(n!) | Factorial complexity |
Tractable problem: a problem for which there is a worst-case polynomial-time algorithm that solves it.
Intractable problem: a problem for which no worst-case polynomial-time algorithm exists for solving it.
Solvable problem: a problem that can be solved by an algorithm.
Unsolvable problem: a problem that cannot be solved by an algorithm.