Insertion sort is a simple sorting algorithm that builds a final sorted array one element at a time. It iterates through an array, removes one element, finds the location where that element belongs in the sorted list, and inserts it there. Insertion sort has best-case O(n) time complexity when the array is already sorted, but worst and average case O(n^2) complexity.
How Insertion Sort Works
The insertion sort algorithm consists of the following steps:
1. Start with an unsorted array (list).
2. Assume the first element is already sorted. Set a marker at the next element to insert.
3. Select the next element to be inserted into the sorted portion. Compare it with the elements of the sorted portion from right to left.
4. Shift each element of the sorted portion that is greater than the element being inserted one position to the right, creating a hole for the insertion.
5. Insert the element into the hole, which is its correct position.
6. Move the marker to the next element to be inserted.
7. Repeat steps 3-6 until the entire array is sorted.
This can be visualized as sorting a hand of playing cards. We remove one card at a time, then place it in its correct position by shifting other cards to the right.
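Before writing out the full algorithm, a single insertion step can be sketched directly in Python (a minimal illustration with made-up values): the prefix [3, 5, 8] is already sorted and 4 is the card being placed.

hand = [3, 5, 8, 4, 1]
value = hand[3]                      # the "card" picked up: 4
hole = 3
while hole > 0 and hand[hole - 1] > value:
    hand[hole] = hand[hole - 1]      # shift the larger card one slot to the right
    hole -= 1
hand[hole] = value                   # drop the card into the hole
print(hand)                          # [3, 4, 5, 8, 1]; 4 is placed, 1 is still unsorted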
Pseudocode for Insertion Sort
insertionSort(array)
    marker = 1
    while marker < length(array)
        valueToInsert = array[marker]
        holePosition = marker
        while holePosition > 0 and array[holePosition-1] > valueToInsert
            array[holePosition] = array[holePosition-1]
            holePosition = holePosition - 1
        end while
        array[holePosition] = valueToInsert
        marker = marker + 1
    end while
end insertionSort
The pseudocode above demonstrates the key steps of the algorithm:
- Start the marker at the second element, treating the first element as already sorted.
- Store the value to insert from the marker position.
- Find the hole position by comparing the value to insert with the elements to its left, shifting each element greater than the value one position to the right.
- When the correct position is found, insert the value at the hole position.
- Increment the marker and repeat for the next unsorted element.
Implementing Insertion Sort in Python
Here is an implementation of insertion sort in Python:
def insertion_sort(arr):
    for i in range(1, len(arr)):
        key = arr[i]                 # value to insert on this pass
        j = i - 1
        while j >= 0 and key < arr[j]:
            arr[j + 1] = arr[j]      # shift larger elements one slot to the right
            j -= 1
        arr[j + 1] = key             # drop the value into the hole

arr = [12, 11, 13, 5, 6]
insertion_sort(arr)
print(arr)  # [5, 6, 11, 12, 13]
This implements the exact steps we outlined in the pseudocode earlier:
- Start i at 1, treating the first element as already sorted.
- Take key as the next element to insert.
- Compare key with the elements to its left, shifting every element greater than key one position to the right.
- When the correct position is found, insert key.
Explanation of the Code
Let’s break down what the Python code is doing:
- range(1, len(arr)) starts the iteration from the 2nd element, with the marker at index 1.
- key = arr[i] stores the value to be inserted.
- j = i - 1 sets the position to compare against while finding the insertion point.
- The while loop compares key with the elements to its left, shifting each element greater than key one position to the right.
- When an element less than or equal to key is reached (or the start of the array is passed), the correct position has been found.
- arr[j + 1] = key inserts key at the hole position.
This implements the shifting and insertion steps of the algorithm correctly in Python.
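As a quick sanity check, we can compare the function against Python's built-in sorted() on random inputs (a minimal test sketch using the insertion_sort function defined above):

import random

for _ in range(100):
    data = [random.randint(0, 99) for _ in range(random.randint(0, 20))]
    expected = sorted(data)
    insertion_sort(data)          # sorts the list in place
    assert data == expected, data
print("insertion_sort matches sorted() on 100 random lists")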
Time Complexity
The time complexity of insertion sort depends on the initial ordering of the array:
- Best Case: O(n) when the array is already sorted. Each element needs only one comparison and no shifts.
- Average Case: O(n^2), as elements must be shifted over for each unsorted element.
- Worst Case: O(n^2) when the array is reverse sorted. The maximum number of comparisons and shifts is required.
So insertion sort is very fast if the array is already or nearly sorted. But for reverse sorted or random arrays, it is inefficient compared to algorithms like quicksort and mergesort.
Insertion Sort with Swaps in Python
A variant you will often see writes the inner loop with swaps instead of shifts. It is worth knowing, but it is not actually an optimization: each swap writes two elements, while the shifting version above writes each moved element once and writes the inserted value only at the end.
Here is the swap-based version:
def insertion_sort(arr):
    for i in range(1, len(arr)):
        key = arr[i]
        j = i - 1
        while j >= 0 and key < arr[j]:
            arr[j], arr[j + 1] = arr[j + 1], arr[j]
            j -= 1
The only change is swapping arr[j] and arr[j+1] instead of shifting arr[j] one position to the right. The two versions perform the same comparisons and have the same asymptotic time complexity, but the shifting version does fewer assignments per insertion: one write per shifted element plus a single write for the key, versus two writes per swap. That is why the shifting form is the one usually presented.
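To make the write-count difference concrete, here is a small instrumented sketch (the helper names shift_writes and swap_writes are just illustrative) that counts list writes for each version on the same reverse-sorted input:

def shift_writes(arr):
    """Shift-based insertion sort; returns the number of list writes."""
    writes = 0
    for i in range(1, len(arr)):
        key = arr[i]
        j = i - 1
        while j >= 0 and key < arr[j]:
            arr[j + 1] = arr[j]      # one write per shifted element
            writes += 1
            j -= 1
        arr[j + 1] = key             # one write to drop the key in place
        writes += 1
    return writes

def swap_writes(arr):
    """Swap-based insertion sort; returns the number of list writes."""
    writes = 0
    for i in range(1, len(arr)):
        key = arr[i]
        j = i - 1
        while j >= 0 and key < arr[j]:
            arr[j], arr[j + 1] = arr[j + 1], arr[j]   # two writes per swap
            writes += 2
            j -= 1
    return writes

data = list(range(20, 0, -1))        # reverse-sorted worst case, 20 elements
print(shift_writes(data[:]))         # 209 writes (190 shifts + 19 key writes)
print(swap_writes(data[:]))          # 380 writes (190 swaps x 2 writes each)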
Insertion Sort Applications
Insertion sort has some advantages that make it a good fit for certain applications:
- Simplicity - Easy to implement and understand for small data sets. Useful for educational purposes.
- Small data sets - Efficient for sorting small arrays with only a few elements, where its low overhead can beat more complex sorts.
- Nearly sorted data - Extremely fast when the array is already nearly sorted, thanks to the O(n) best case.
- Online sorting - Can sort a stream of incoming data on the fly, inserting each new element into the already-sorted data as it arrives (sketched right after this list).
- Stable sort - Does not change the relative order of elements with equal keys. Useful when order matters beyond just the sorted values.
- Linked lists - Well suited to sorting linked lists in place, since a node is inserted by relinking pointers rather than shifting elements.
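The online-sorting case is easy to sketch with the standard library's bisect module, which keeps a list sorted as each value arrives (the stream values here are made up):

import bisect

stream = [7, 2, 9, 4, 4, 1]          # values arriving one at a time
seen = []                            # kept sorted at all times
for value in stream:
    bisect.insort(seen, value)       # find the insertion point, then insert
    print(seen)                      # the collected data is sorted after every arrival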
Some examples where insertion sort is preferred:
- Sorting a small number of elements as part of a larger computation.
- Sorting a hand of cards in online card games after each round, where the cards are already partially sorted.
- Incrementally sorting elements streamed over the network or read from a database.
- Sorting an array that is already nearly sorted from previous passes.
- Cases where a stable sort is required and the data is small, such as sorting names by last name while keeping entries with the same last name in their original order.
- Sorting linked lists where in-place sorting is necessary (sketched below).
So while less efficient overall than O(n log n) sorts, insertion sort still has many use cases thanks to its simplicity, efficiency on small data, stability, and in-place operation.
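For the linked-list use case, the insertion idea carries over directly: each node is unlinked from the input and spliced into a growing sorted list, so nothing has to be shifted. A minimal sketch, assuming a simple Node class that is not part of the code above:

class Node:
    def __init__(self, value, next=None):
        self.value = value
        self.next = next

def insertion_sort_list(head):
    """Return the head of the list sorted in place (stable)."""
    sorted_head = None                       # head of the growing sorted list
    current = head
    while current is not None:
        nxt = current.next                   # remember the rest of the unsorted list
        if sorted_head is None or current.value < sorted_head.value:
            current.next = sorted_head       # new smallest value: link it at the front
            sorted_head = current
        else:
            probe = sorted_head              # walk to the insertion point
            while probe.next is not None and probe.next.value <= current.value:
                probe = probe.next
            current.next = probe.next        # splice the node in after probe
            probe.next = current
        current = nxt
    return sorted_head

head = insertion_sort_list(Node(3, Node(1, Node(2))))
while head:
    print(head.value)                        # prints 1, then 2, then 3
    head = head.next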
Step-By-Step Examples
Let’s go through a few hand-worked examples of insertion sort on different array inputs to demonstrate how the algorithm works.
Example 1: Sort [6, 5, 3, 1, 8, 7, 2, 4]
Pass 1:
Start with 6 assumed sorted. Insert 5 before 6. [5, 6, 3, 1, 8, 7, 2, 4]
Pass 2:
Insert 3 before 5. [3, 5, 6, 1, 8, 7, 2, 4]
Pass 3:
Insert 1 before 3. [1, 3, 5, 6, 8, 7, 2, 4]
Pass 4:
8 is greater than preceding numbers. Insert 8 after 6. [1, 3, 5, 6, 8, 7, 2, 4]
Pass 5:
Insert 7 before 8. [1, 3, 5, 6, 7, 8, 2, 4]
Pass 6:
Insert 2 after 1 and before 3. [1, 2, 3, 5, 6, 7, 8, 4]
Pass 7:
Insert 4 before 5. [1, 2, 3, 4, 5, 6, 7, 8]
The final sorted array is [1, 2, 3, 4, 5, 6, 7, 8].
Example 2: Sort [2, 1, 4, 5, 3]
Pass 1:
2 is assumed sorted. Insert 1 before 2.
[1, 2, 4, 5, 3]
Pass 2:
4 is greater than preceding. Insert after 2. [1, 2, 4, 5, 3]
Pass 3:
5 is greater than preceding. Insert after 4. [1, 2, 4, 5, 3]
Pass 4:
Insert 3 before 4. [1, 2, 3, 4, 5]
The sorted array is [1, 2, 3, 4, 5].
Example 3: Sort [1, 5, 2, 4, 3]
Pass 1:
1 is assumed sorted. 5 is greater, so insert after 1. [1, 5, 2, 4, 3]
Pass 2:
Insert 2 before 5. [1, 2, 5, 4, 3]
Pass 3:
Insert 4 before 5. [1, 2, 4, 5, 3]
Pass 4:
Insert 3 before 4. [1, 2, 3, 4, 5]
The final sorted array is [1, 2, 3, 4, 5].
These examples illustrate how insertion sort incrementally builds the sorted array by inserting each element into its correct position on every pass.
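These pass-by-pass traces can be reproduced by adding a single print statement to the implementation from earlier (insertion_sort_trace is just that function with the print added):

def insertion_sort_trace(arr):
    for i in range(1, len(arr)):
        key = arr[i]
        j = i - 1
        while j >= 0 and key < arr[j]:
            arr[j + 1] = arr[j]
            j -= 1
        arr[j + 1] = key
        print(f"Pass {i}: {arr}")

insertion_sort_trace([6, 5, 3, 1, 8, 7, 2, 4])
# Pass 1: [5, 6, 3, 1, 8, 7, 2, 4]
# Pass 2: [3, 5, 6, 1, 8, 7, 2, 4]
# Pass 3: [1, 3, 5, 6, 8, 7, 2, 4]
# Pass 4: [1, 3, 5, 6, 8, 7, 2, 4]
# Pass 5: [1, 3, 5, 6, 7, 8, 2, 4]
# Pass 6: [1, 2, 3, 5, 6, 7, 8, 4]
# Pass 7: [1, 2, 3, 4, 5, 6, 7, 8]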
Insertion Sort Complexity Analysis
Let’s analyze the time and space complexity of insertion sort in detail.
Time Complexity
- Best Case: O(n) - The array is already sorted, so each element takes constant time to insert.
- Average Case: O(n^2) - Elements need to be shifted over to make room for each insertion.
- Worst Case: O(n^2) - The array is reverse sorted, so the maximum number of comparisons and shifts is needed.
So insertion sort is efficient for small n or nearly sorted data but inefficient for large unsorted arrays. Its quadratic time complexity makes it impractical for sorting large data sets.
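The gap between the best and worst cases is easy to observe by counting comparisons directly. A small counting sketch (count_comparisons is a hypothetical helper, not part of the implementation above):

def count_comparisons(arr):
    """Insertion sort that also counts how many element comparisons it makes."""
    comparisons = 0
    for i in range(1, len(arr)):
        key = arr[i]
        j = i - 1
        while j >= 0:
            comparisons += 1
            if arr[j] <= key:        # insertion point found
                break
            arr[j + 1] = arr[j]      # shift the larger element right
            j -= 1
        arr[j + 1] = key
    return comparisons

n = 100
print(count_comparisons(list(range(n))))          # 99: already sorted, about n comparisons
print(count_comparisons(list(range(n, 0, -1))))   # 4950: reverse sorted, n*(n-1)/2 comparisons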
Space Complexity
- Space: O(1) - Insertion sort operations are done in-place, requiring constant O(1) space.
Insertion sort has the advantage of minimal memory usage as no additional storage is needed beyond the input array itself.
Stability
- Insertion sort is stable: it does not change the relative order of elements with equal keys. Stability is useful when order beyond the sort key matters, such as sorting records by name while keeping records with the same name in their previous order.
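A small sketch makes stability visible. Here a key-based variant of the earlier function (insertion_sort_by is a hypothetical helper, and the records are made up) sorts (name, age) pairs by age; pairs with equal ages keep their original relative order:

def insertion_sort_by(arr, key):
    """Insertion sort comparing key(x); the strict '<' comparison keeps it stable."""
    for i in range(1, len(arr)):
        item = arr[i]
        j = i - 1
        while j >= 0 and key(item) < key(arr[j]):
            arr[j + 1] = arr[j]
            j -= 1
        arr[j + 1] = item

people = [("Cora", 30), ("Amir", 25), ("Bea", 30), ("Dan", 25)]
insertion_sort_by(people, key=lambda p: p[1])     # sort by age only
print(people)
# [('Amir', 25), ('Dan', 25), ('Cora', 30), ('Bea', 30)]
# Equal ages keep their original order: Amir before Dan, Cora before Bea.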
So in summary, insertion sort is simple to implement, efficient for small or mostly sorted data, stable, and has constant O(1) space complexity. But its quadratic O(n^2) time complexity makes it inefficient for large unsorted arrays.
Comparison with Other Sorting Algorithms
How does insertion sort compare to other common sorting algorithms?
- It is typically more efficient in practice than bubble sort and selection sort, the other common in-place quadratic sorts.
- It is less efficient than O(n log n) sorts like merge sort and heapsort on large inputs.
- It has a simpler implementation than quicksort but is slower on average.
- It is better suited to online sorting than any of these, because it can insert each element into the already-sorted data as it arrives, while the others need the whole input (or a full pass over it) before producing a sorted result.
Insertion sort occupies a unique niche of being faster than basic quadratic sorts and stable with minimal memory usage. But for larger data sets, O(n log n) sorts will be faster.
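A rough, machine-dependent benchmark sketch illustrates that crossover against Python's built-in sorted() (an O(n log n) Timsort implemented in C in CPython), reusing the insertion_sort function defined earlier:

import random
import timeit

small = [random.random() for _ in range(30)]
large = [random.random() for _ in range(2000)]

for label, data in [("30 elements", small), ("2000 elements", large)]:
    t_ins = timeit.timeit(lambda: insertion_sort(data[:]), number=10)
    t_std = timeit.timeit(lambda: sorted(data), number=10)
    print(f"{label}: insertion_sort {t_ins:.4f}s vs sorted() {t_std:.4f}s")
# sorted() wins in both cases because it runs in C, but the gap grows
# dramatically on the larger list as the quadratic cost takes over.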
Common Mistakes to Avoid
Some common mistakes to avoid when implementing insertion sort:
- Forgetting to start the marker at index 1, treating the 0th element as already sorted.
- Using repeated swaps in the inner loop instead of shifts, which still sorts correctly but roughly doubles the number of writes.
- Getting the loop conditions wrong when finding the insertion point and shifting elements.
- Forgetting to increment the marker after each insertion step.
- Letting the inner loop run past index 0 instead of stopping the shift at the start of the array.
Carefully following the algorithm step by step, using the shift-based inner loop, and testing edge cases (empty arrays, single elements, duplicates, already-sorted input) will help avoid these mistakes.
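The last mistake is worth seeing concretely. In Python, dropping the j >= 0 guard does not fail where you might expect, because a negative index silently wraps around to the end of the list; a deliberately broken sketch:

def broken_insertion_sort(arr):
    for i in range(1, len(arr)):
        key = arr[i]
        j = i - 1
        while key < arr[j]:          # BUG: missing "j >= 0" guard
            arr[j + 1] = arr[j]
            j -= 1
        arr[j + 1] = key

broken_insertion_sort([3, 1, 2])
# Once j goes negative, the loop keeps comparing against arr[-1], arr[-2], ...,
# corrupting the array and eventually raising IndexError: list index out of range.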
Conclusion
Insertion sort is a foundational in-place sorting algorithm that builds the final sorted array incrementally by inserting elements into position one by one. It has O(n) best case time but O(n^2) worst and average case performance. The simplicity, stability, and low space complexity of insertion sort make it ideal for small or nearly sorted data sets. But for larger arrays, O(n log n) algorithms will be more performant. Insertion sort continues to find uses in niche applications like online sorting where incremental insertion is needed. A strong grasp of this fundamental algorithm provides the basis for understanding more complex sorting techniques.