Quick Sort

Quick Sort is a Divide and Conquer algorithm. It picks an element as the pivot and partitions the given array around the picked pivot. There are many different versions of Quick Sort that pick the pivot in different ways.

1) Always pick the first element as the pivot.
2) Always pick the last element as the pivot (implemented below).
3) Pick a random element as the pivot.
4) Pick the median as the pivot.

Quick Sort is explained below using the version that picks the last element as the pivot.

The key process in Quick Sort is partition(). Given an array and an element x of the array as the pivot, the goal of partition() is to put x at its correct position in the sorted array, with all smaller elements (smaller than x) placed before x and all greater elements placed after x. All of this should be done in linear time.

[Animation removed: it compared Quick Sort on Random, Nearly Sorted, Reversed, and Few Unique inputs. In the animation, black values are sorted, gray values are unsorted, and a red triangle marks the algorithm's position.]


/* Utility function to swap two integers (used by partition below) */
void swap(int* a, int* b)
{
    int t = *a;
    *a = *b;
    *b = t;
}

/* This function takes the last element as pivot, places the pivot element at
   its correct position in the sorted array, and places all smaller elements
   (smaller than pivot) to the left of the pivot and all greater elements to
   the right of the pivot. */
int partition(int arr[], int l, int h)
{
    int x = arr[h];   // pivot
    int i = (l - 1);  // index of smaller element
    for (int j = l; j <= h - 1; j++)
    {
        // If current element is smaller than or equal to pivot
        if (arr[j] <= x)
        {
            i++;  // increment index of smaller element
            swap(&arr[i], &arr[j]);
        }
    }
    swap(&arr[i + 1], &arr[h]);  // move pivot to its final position
    return (i + 1);
}

/* arr[] --> Array to be sorted, l --> Starting index, h --> Ending index */
void quickSort(int arr[], int l, int h)
{
    if (l < h)
    {
        int p = partition(arr, l, h); /* partitioning index */
        quickSort(arr, l, p - 1);
        quickSort(arr, p + 1, h);
    }
}

Time Complexity Analysis of Quick Sort

The time taken by Quick Sort can, in general, be written as follows.

 T(n) = T(k) + T(n-k-1) + \Theta(n)

The first two terms are for the two recursive calls, and the last term is for the partition process; k is the number of elements smaller than the pivot.
The time taken by Quick Sort depends upon the input array and the partition strategy. The following are three cases.

Worst Case: The worst case occurs when the partition process always picks the greatest or smallest element as the pivot. With the above partition strategy, where the last element is always picked as the pivot, the worst case occurs when the array is already sorted in increasing or decreasing order. The recurrence for the worst case is:

 T(n) = T(0) + T(n-1) + \Theta(n)
which is equivalent to
 T(n) = T(n-1) + \Theta(n)

The solution of the above recurrence is \Theta(n^2).
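Expanding the worst-case recurrence level by level shows where the quadratic bound comes from (writing the \Theta(n) partition cost as cn):

```latex
T(n) = T(n-1) + cn
     = T(n-2) + c(n-1) + cn
     = \dots
     = c \sum_{i=1}^{n} i
     = \frac{c\,n(n+1)}{2}
     = \Theta(n^2)
```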

Best Case: The best case occurs when the partition process always picks the middle element as the pivot. The recurrence for the best case is:

 T(n) = 2T(n/2) + \Theta(n)

The solution of the above recurrence is \Theta(n \log n).
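One way to see this: the recursion tree for the best case has about \log_2 n levels, and the total partition work at each level is linear, so

```latex
T(n) = 2T(n/2) + cn
\quad\Longrightarrow\quad
T(n) = \underbrace{cn}_{\text{per level}} \cdot \underbrace{\log_2 n}_{\text{levels}} + \Theta(n) = \Theta(n \log n)
```

This is also case 2 of the master theorem with a = b = 2 and f(n) = \Theta(n).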

Average Case:
To do average-case analysis, we would need to consider all possible permutations of the array and calculate the time taken by each, which is not easy.
We can get an idea of the average case by considering the case when partition puts n/10 elements in one set and 9n/10 elements in the other. The recurrence for this case is:

 T(n) = T(n/10) + T(9n/10) + \Theta(n)

The solution of the above recurrence is also O(n \log n).

Auxiliary Space: O(n) in the worst case for the recursion stack (O(log n) on average)
Sorting In Place: Yes
Stable: No

Although the worst-case time complexity of Quick Sort is O(n^2), which is worse than that of many other sorting algorithms such as Merge Sort and Heap Sort, Quick Sort is faster in practice: its inner loop can be implemented efficiently on most architectures and performs well on most real-world data. Quick Sort can also be implemented in different ways by changing the choice of pivot, so that the worst case rarely occurs for a given type of data. However, Merge Sort is generally considered better when the data is huge and stored in external storage.

Source: Wikipedia