Week 5 – CST370 – Design and Analysis of Algorithms
This week we learned about Quicksort, binary tree traversal,
decrease and conquer, and presorting.
Quicksort is based on the divide and conquer technique. With
Quicksort, you sort an array by selecting an element as the pivot and then
partitioning so that elements less than or equal to the pivot end up to the
left of the pivot and elements greater than or equal to the pivot end up to
the right. At the end of this first partition, the pivot is in its final
sorted position. This leaves two subarrays, which also need to be partitioned.
You continue to partition the subarrays until all elements in the array are
sorted. Quicksort has a time complexity of O(n log n) in the best and average
cases and O(n²) in the worst case.
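As a sketch of the idea (my own C++ example, not the textbook's pseudocode), here is a small quicksort using the Lomuto partition scheme, where the last element serves as the pivot and lands in its final position after each partition:

#include <iostream>
#include <utility>
#include <vector>

// Partition a[lo..hi] around the last element as the pivot (Lomuto scheme).
// Returns the pivot's final index; smaller-or-equal elements end up to its left.
int partition(std::vector<int>& a, int lo, int hi) {
    int pivot = a[hi];
    int i = lo - 1;
    for (int j = lo; j < hi; ++j) {
        if (a[j] <= pivot) {
            std::swap(a[++i], a[j]);
        }
    }
    std::swap(a[i + 1], a[hi]);   // pivot lands in its final sorted position
    return i + 1;
}

// Recursively sort the two subarrays on either side of the pivot.
void quicksort(std::vector<int>& a, int lo, int hi) {
    if (lo < hi) {
        int p = partition(a, lo, hi);
        quicksort(a, lo, p - 1);
        quicksort(a, p + 1, hi);
    }
}

int main() {
    std::vector<int> a = {5, 3, 1, 9, 8, 2, 4, 7};
    quicksort(a, 0, static_cast<int>(a.size()) - 1);
    for (int x : a) std::cout << x << ' ';   // prints 1 2 3 4 5 7 8 9
    std::cout << '\n';
}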
A binary tree is a set of nodes that is either empty or consists
of a root and two disjoint binary trees which are called the left and right subtrees
of the root. There are three classic binary tree traversal algorithms:
preorder, inorder, and postorder. In preorder, the root is visited first, then
the left subtree, and the right subtree is visited last. In inorder traversal,
the left subtree is visited first, followed by the root, and then the right
subtree. In postorder, the left subtree is visited first, followed by the
right subtree, and the root is visited last.
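To make the three orders concrete, here is a minimal C++ sketch of my own (a bare-bones node struct, not a full tree class) with the three recursive traversals:

#include <iostream>

// A minimal binary tree node: a value plus left and right subtrees.
struct Node {
    int value;
    Node* left;
    Node* right;
    Node(int v, Node* l = nullptr, Node* r = nullptr) : value(v), left(l), right(r) {}
};

void preorder(Node* n) {            // root, then left subtree, then right subtree
    if (!n) return;
    std::cout << n->value << ' ';
    preorder(n->left);
    preorder(n->right);
}

void inorder(Node* n) {             // left subtree, then root, then right subtree
    if (!n) return;
    inorder(n->left);
    std::cout << n->value << ' ';
    inorder(n->right);
}

void postorder(Node* n) {           // left subtree, then right subtree, then root
    if (!n) return;
    postorder(n->left);
    postorder(n->right);
    std::cout << n->value << ' ';
}

int main() {
    // Root 2 with left child 1 and right child 3.
    Node* root = new Node(2, new Node(1), new Node(3));
    preorder(root);  std::cout << '\n';   // 2 1 3
    inorder(root);   std::cout << '\n';   // 1 2 3
    postorder(root); std::cout << '\n';   // 1 3 2
    delete root->left; delete root->right; delete root;
}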
Decrease and conquer is a technique that uses the relationship
between a solution to a given instance of a problem and a solution to its
smaller instance. There are three major variations of decrease and conquer:
decrease by a constant, decrease by a constant factor, and decrease by a
variable size. A good example of decrease by a variable size is Euclid's
algorithm, which relies on the identity gcd(m, n) = gcd(n, m mod n): the value
of the second argument is always smaller on the right-hand side than on the
left-hand side, but it decreases neither by a constant nor by a constant
factor.
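A short C++ version of Euclid's algorithm shows the variable-size decrease directly; each loop iteration replaces the pair (m, n) with (n, m mod n):

#include <iostream>

// Euclid's algorithm: gcd(m, n) = gcd(n, m mod n), stopping when n reaches 0.
// The second argument shrinks by a different (variable) amount on each step.
int gcd(int m, int n) {
    while (n != 0) {
        int r = m % n;
        m = n;
        n = r;
    }
    return m;
}

int main() {
    std::cout << gcd(60, 24) << '\n';   // 12
}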
Lastly, presorting is the idea that answering questions
about a data set is easier when the data set is sorted. The text uses the
element uniqueness problem as an example. The running time of the algorithm
is the sum of the time spent on sorting and the time spent on checking
consecutive elements. Since the sorting requires at least n log n comparisons,
and checking for consecutive elements needs no more than n-1 comparisons, it is
the sorting part that will determine the overall efficiency of the algorithm. The
benefit of presorting, especially with large data sets, is that if you are
going to search the same data set more than once, you only need to sort it
once.
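Here is a small C++ sketch of my own of the presorting-based element uniqueness check: sort first, then a single linear pass only has to compare consecutive elements, since any duplicates must be adjacent after sorting.

#include <algorithm>
#include <iostream>
#include <vector>

// Presorting-based element uniqueness: sort in O(n log n),
// then one O(n) pass compares only consecutive elements.
bool allUnique(std::vector<int> a) {
    std::sort(a.begin(), a.end());
    for (std::size_t i = 1; i < a.size(); ++i) {
        if (a[i] == a[i - 1]) return false;   // duplicates are adjacent once sorted
    }
    return true;
}

int main() {
    std::cout << std::boolalpha;
    std::cout << allUnique({4, 1, 7, 3}) << '\n';      // true
    std::cout << allUnique({4, 1, 7, 1, 3}) << '\n';   // false
}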