Detailed Report on Common Sorting Algorithms
Introduction
Sorting algorithms are fundamental computer science concepts used to arrange elements of a list or array in a specific order (typically numerical or lexicographical). Understanding different sorting algorithms is crucial for optimizing performance, as their efficiency varies significantly based on factors like input size, existing order of elements, and memory constraints. This report details five common sorting algorithms: Bubble Sort, Insertion Sort, Selection Sort, Quick Sort, and Merge Sort, exploring their mechanisms, implementations, and performance characteristics.
1. Bubble Sort
Concept/Mechanism
Bubble Sort is one of the simplest sorting algorithms. It works by repeatedly stepping through the list, comparing each pair of adjacent items, and swapping them if they are in the wrong order. This process is repeated until no swaps are needed, indicating the list is sorted. The algorithm gets its name because smaller or larger elements (depending on the sorting order) "bubble" up to their correct position in the list, similar to bubbles rising in water. In each pass through the list, the largest unsorted element is guaranteed to move to its final sorted position at the end of the unsorted portion.
Algorithm Steps (Code Walkthrough - `bubbleSort`)
The implementation uses two nested loops:
- Outer Loop (`for (let n = array.length; n >= 0; n--)`): This loop controls the number of passes and shrinks the considered portion of the array after each pass. Since the largest element "bubbles" to the end in each pass, the effective size of the array that needs sorting (`n`) decreases by one each time.
- Inner Loop (`for (let i = 0; i < n - 1; i++)`): This loop iterates through the currently unsorted portion of the array (from the start up to `n - 1`).
- Comparison and Swap (`if (array[i] > array[i + 1])`): Inside the inner loop, it compares the current element (`array[i]`) with the next adjacent element (`array[i + 1]`). If the current element is greater than the next one, they are swapped. The swap is done concisely using array destructuring: `[array[i], array[i + 1]] = [array[i + 1], array[i]]`.
// bubble.ts - Core Logic
export function bubbleSort(array: number[]) {
  // Outer loop reduces the comparison range after each pass
  for (let n = array.length; n >= 0; n--) {
    // Inner loop compares adjacent elements
    for (let i = 0; i < n - 1; i++) {
      // Swap if current element is larger than the next
      if (array[i] > array[i + 1]) {
        ;[array[i], array[i + 1]] = [array[i + 1], array[i]]
      }
    }
  }
  return array
}
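A brief usage note (the example values are illustrative, not from the source): the function sorts its argument in place and also returns it, so the caller's array is modified.

// Hypothetical usage
const nums = [5, 1, 4, 2, 8]
bubbleSort(nums)
console.log(nums) // [1, 2, 4, 5, 8] - sorted in place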
Time Complexity
- Worst Case: O(n²) - Occurs when the array is sorted in reverse order. Every comparison requires a swap, and both nested loops run their full course.
- Average Case: O(n²) - For a randomly ordered array, it still requires roughly n²/2 comparisons and swaps on average.
- Best Case: O(n) - This applies to optimized versions. If the array is already sorted, an optimized Bubble Sort (which checks whether any swaps occurred during a pass) can detect this on the first pass and terminate early, requiring only a single pass through the array. The provided `bubbleSort` function doesn't include this optimization, so it still runs in O(n²) time even for an already sorted list; the conceptual best case for optimized Bubble Sort is O(n). A sketch of this optimized variant appears below.
The O(n²) complexity arises directly from the two nested loops, each iterating approximately `n` times (where `n` is the number of elements).
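A minimal sketch of the optimized variant, assuming the same in-place, ascending-order contract as the original (the function name and the `swapped` flag are illustrative, not from the source):

// bubble-optimized.ts - Early-exit sketch (not the source's implementation)
export function bubbleSortOptimized(array: number[]) {
  for (let n = array.length; n >= 2; n--) {
    let swapped = false // tracks whether this pass changed anything
    for (let i = 0; i < n - 1; i++) {
      if (array[i] > array[i + 1]) {
        ;[array[i], array[i + 1]] = [array[i + 1], array[i]]
        swapped = true
      }
    }
    // No swaps means the array is already sorted: stop early (O(n) best case)
    if (!swapped) break
  }
  return array
}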
Space Complexity
- O(1) - Bubble Sort is an in-place algorithm. It only requires a constant amount of extra memory for temporary variables during the swap operation, regardless of the input array's size.
2. Insertion Sort
Concept/Mechanism
Insertion Sort builds the final sorted array one item at a time. It iterates through the input array and for each element, it finds the correct position within the already sorted portion of the array (to the left of the current element) and inserts it there. This involves shifting larger elements one position to the right to make space for the inserted element. It's analogous to how one might sort a hand of playing cards.
Algorithm Steps (Code Walkthrough - `insertionSort`)
- Outer Loop (`for (let i = 1; i < array.length; i++)`): Iterates through the array starting from the second element (`i = 1`), as the first element (`i = 0`) is considered a sorted sublist of size one.
- Store Current Value (`const value = array[i]`): The element to be inserted is stored in `value`.
- Initialize `j` (`let j = i - 1`): `j` points to the last element in the already sorted sublist.
- Inner Loop (`for (j; j >= 0; j--)`): This loop moves backward through the sorted sublist (`array[0...i-1]`).
- Compare and Shift (`if (array[j] > value)`): If the element in the sorted sublist (`array[j]`) is greater than the `value` we want to insert, it's shifted one position to the right (`array[j + 1] = array[j]`).
- Break (`else { break }`): If an element `array[j]` is found that is less than or equal to `value`, the correct position for `value` has been found (just after `j`), so the inner loop breaks.
- Insert Value (`array[j + 1] = value`): After the inner loop finishes (either by reaching the beginning or by breaking early), `value` is placed in its correct position (`j + 1`).
// insertion.ts - Core Logic
export function insertionSort(array: number[]) {
  // Iterate from the second element
  for (let i = 1; i < array.length; i++) {
    const value = array[i] // Element to insert
    let j = i - 1 // Index for sorted part
    // Move backwards through sorted part
    for (j; j >= 0; j--) {
      // Shift elements larger than 'value' to the right
      if (array[j] > value) {
        array[j + 1] = array[j]
      } else {
        // Found the correct spot, break
        break
      }
    }
    // Insert 'value' into its correct position
    array[j + 1] = value
  }
  return array
}
Time Complexity
- Worst Case: O(n²) - Occurs when the array is sorted in reverse order. Each element needs to be compared against all preceding elements in the sorted sublist, requiring maximal shifting.
- Average Case: O(n²) - For a randomly ordered array, each element needs to be compared with about half of the preceding elements on average.
- Best Case: O(n) - Occurs when the array is already sorted. The inner loop condition (
array[j] > value
) is immediately false for every element, so only the outer loop runsn
times with minimal work inside.
Similar to Bubble Sort, the potential O(n²) complexity stems from the nested loop structure. Insertion Sort often performs better than Bubble Sort in practice (even with the same Big O average complexity) because comparisons and shifts might stop early if an element's position is found quickly.
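To see this effect concretely, a rough, unscientific timing sketch like the following can compare the two on nearly sorted input (hypothetical harness; only the import paths follow the file names used above). The unoptimized `bubbleSort` still performs all of its comparisons, while `insertionSort` breaks out of its inner loop almost immediately:

// compare.ts - Rough timing sketch (hypothetical; not from the source)
import { bubbleSort } from './bubble'
import { insertionSort } from './insertion'

// Build a nearly sorted array: ascending values with a handful of random swaps
const nearlySorted = Array.from({ length: 10_000 }, (_, k) => k)
for (let s = 0; s < 10; s++) {
  const a = Math.floor(Math.random() * nearlySorted.length)
  const b = Math.floor(Math.random() * nearlySorted.length)
  ;[nearlySorted[a], nearlySorted[b]] = [nearlySorted[b], nearlySorted[a]]
}

for (const sort of [bubbleSort, insertionSort]) {
  const copy = [...nearlySorted] // each run gets its own copy
  const start = performance.now()
  sort(copy)
  console.log(`${sort.name}: ${(performance.now() - start).toFixed(1)} ms`)
}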
Space Complexity
- O(1) - Insertion Sort is also an in-place algorithm. It only requires a constant amount of extra space for storing the `value` being inserted and the index `j`.
3. Selection Sort
Concept/Mechanism
Selection Sort divides the input list into two parts: a sorted sublist which is built up from left to right at the front (beginning) of the list, and a sublist of the remaining unsorted items that occupy the rest of the list. Initially, the sorted sublist is empty and the unsorted sublist is the entire input list. The algorithm proceeds by finding the smallest (or largest, depending on sorting order) element in the unsorted sublist, swapping it with the leftmost unsorted element (putting it in sorted order), and moving the sublist boundaries one element to the right.
Algorithm Steps (Code Walkthrough - `selectionSort`)
- Outer Loop (`for (let i = 0; i < array.length - 1; i++)`): Iterates from the first element up to the second-to-last element. `i` represents the boundary between the sorted (left) and unsorted (right) portions.
- Initialize `minIndex` (`let minIndex = i`): Assumes the first element of the unsorted portion (`array[i]`) is the minimum initially.
- Inner Loop (`for (let j = i + 1; j < array.length; j++)`): Iterates through the remaining unsorted portion of the array (from `i + 1` to the end).
- Find Minimum (`if (array[j] < array[minIndex])`): Compares the current element (`array[j]`) with the element at the current `minIndex`. If `array[j]` is smaller, `minIndex` is updated to `j`.
- Swap (`if (minIndex !== i)`): After the inner loop finishes, `minIndex` holds the index of the smallest element in the unsorted portion. If this smallest element isn't already at the beginning of the unsorted portion (i.e., `minIndex !== i`), it's swapped with the element at index `i`.
// selection.ts - Core Logic
export function selectionSort(array: number[]) {
  // Iterate through the array defining the sorted boundary
  for (let i = 0; i < array.length - 1; i++) {
    let minIndex = i // Assume current is minimum
    // Iterate through the unsorted part to find the actual minimum
    for (let j = i + 1; j < array.length; j++) {
      if (array[j] < array[minIndex]) {
        minIndex = j // Update minimum index
      }
    }
    // Swap the found minimum element with the first element of the unsorted part
    if (minIndex !== i) {
      ;[array[i], array[minIndex]] = [array[minIndex], array[i]]
    }
  }
  return array
}
Time Complexity
- Worst Case: O(n²)
- Average Case: O(n²)
- Best Case: O(n²)
Selection Sort always performs O(n²) comparisons because the inner loop invariably searches the entire remaining unsorted portion to find the minimum, regardless of the input array's initial order. The number of swaps is O(n) in the worst case (one swap per outer loop iteration), which is better than Bubble Sort's worst-case swaps, but the comparison count dominates the complexity.
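To make that comparison/swap asymmetry concrete, an instrumented variant (hypothetical, for illustration only) can count both operations:

// selection-counted.ts - Hypothetical instrumented sketch (not from the source)
export function selectionSortCounted(array: number[]) {
  let comparisons = 0
  let swaps = 0
  for (let i = 0; i < array.length - 1; i++) {
    let minIndex = i
    for (let j = i + 1; j < array.length; j++) {
      comparisons++ // every element of the unsorted part is examined
      if (array[j] < array[minIndex]) {
        minIndex = j
      }
    }
    if (minIndex !== i) {
      swaps++ // at most one swap per outer pass
      ;[array[i], array[minIndex]] = [array[minIndex], array[i]]
    }
  }
  return { array, comparisons, swaps }
}

// For any input of length n, comparisons is exactly n(n-1)/2
// (4950 for n = 100), while swaps never exceeds n - 1 (99 for n = 100).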
Space Complexity
- O(1) - Selection Sort is an in-place algorithm, requiring only constant extra space for storing indices (`minIndex`, `i`, `j`) and for the swap operation.
4. Quick Sort
Concept/Mechanism
Quick Sort is an efficient, divide-and-conquer sorting algorithm. Its main steps are:
- Pick a Pivot: Choose an element from the array as the pivot.
- Partition: Rearrange the array so that all elements smaller than the pivot come before it, while all elements greater than the pivot come after it. After partitioning, the pivot is in its final sorted position.
- Recurse: Recursively apply the above steps to the sub-array of elements with smaller values and the sub-array of elements with greater values.
The base case for the recursion is when a sub-array has zero or one element, which is inherently sorted. The efficiency heavily depends on the choice of the pivot; ideally, the pivot should divide the array into two roughly equal halves.
Algorithm Steps (Code Walkthrough - `quickSort`)
This implementation uses the last element as the pivot and creates new arrays for partitions (less efficient in terms of space than in-place partitioning but simpler to understand conceptually).
- Base Case (`if (array.length <= 1)`): If the array has 0 or 1 element, it's already sorted, so return it.
- Choose Pivot (`const pivot = array[array.length - 1]`): Selects the last element as the pivot.
- Create Partition Arrays (`const left = []; const right = []`): Initialize empty arrays to hold elements smaller than the pivot and elements greater than or equal to it.
- Partition Loop (`for (let i = 0; i < array.length - 1; i++)`): Iterate through the array (excluding the pivot).
- Populate Partitions: If `array[i] < pivot`, push `array[i]` to `left`. Otherwise (if `array[i] >= pivot`), push it to `right`.
- Recursive Calls and Concatenation (`return [...quickSort(left), pivot, ...quickSort(right)]`): Recursively sort the `left` and `right` arrays. The final sorted array is constructed by concatenating the sorted `left` array, the `pivot` (which is now in its correct position relative to the partitions), and the sorted `right` array.
// quick.ts - Core Logic
export function quickSort(array: number[]): number[] {
  // Base case: array with 0 or 1 element is sorted
  if (array.length <= 1) {
    return array
  }
  // Choose the last element as pivot
  const pivot = array[array.length - 1]
  const left: number[] = [] // Elements smaller than pivot
  const right: number[] = [] // Elements greater than or equal to pivot
  // Partition the array (excluding the pivot itself)
  for (let i = 0; i < array.length - 1; i++) {
    if (array[i] < pivot) {
      left.push(array[i])
    } else {
      right.push(array[i])
    }
  }
  // Recursively sort sub-arrays and combine
  return [...quickSort(left), pivot, ...quickSort(right)]
}
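Note that, unlike the three in-place sorts above, this `quickSort` builds and returns a new array and leaves its input untouched (example values are illustrative):

// Hypothetical usage
const input = [9, 4, 7, 1]
const sorted = quickSort(input)
console.log(sorted) // [1, 4, 7, 9]
console.log(input)  // [9, 4, 7, 1] - the original array is not mutated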
Time Complexity
- Worst Case: O(n²) - Occurs when the pivot choice consistently results in highly unbalanced partitions (e.g., picking the smallest or largest element repeatedly, which happens with this implementation on an already sorted or reverse-sorted list). The recursion depth becomes O(n), and partitioning still takes O(n) work at each level.
- Average Case: O(n log n) - When the pivot selection consistently divides the array into reasonably balanced partitions (close to half), the recursion depth is O(log n), and partitioning takes O(n) at each level.
- Best Case: O(n log n) - Occurs when the pivot always divides the array into exactly two equal halves.
Space Complexity
- O(log n) - Average Case. This refers primarily to the space used by the recursion call stack. If partitions are balanced, the maximum depth of recursion is logarithmic.
- O(n) - Worst Case. If partitions are unbalanced, the recursion depth can reach O(n). Additionally, the specific implementation shown (
quickSort
) creates newleft
andright
arrays at each step. In the worst case, these auxiliary arrays can contribute significantly to space usage, potentially reaching O(n) space per recursive call level in the most unbalanced scenarios, though more sophisticated in-place partitioning techniques achieve O(log n) or O(1) auxiliary space complexity (excluding the stack). The transcript analysis suggests O(log n) represents the typical space behaviour related to recursion depth.
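For comparison, here is a minimal in-place sketch using Lomuto partitioning (a standard technique; this variant and its names are not from the source code). It keeps the same last-element pivot but avoids the auxiliary `left`/`right` arrays:

// quick-inplace.ts - In-place sketch, Lomuto partition (not the source's implementation)
export function quickSortInPlace(array: number[], lo = 0, hi = array.length - 1): number[] {
  if (lo >= hi) return array // base case: range of 0 or 1 elements
  const pivot = array[hi] // last element as pivot, as in the original
  let boundary = lo // everything before 'boundary' is smaller than the pivot
  for (let i = lo; i < hi; i++) {
    if (array[i] < pivot) {
      ;[array[i], array[boundary]] = [array[boundary], array[i]]
      boundary++
    }
  }
  ;[array[boundary], array[hi]] = [array[hi], array[boundary]] // move pivot into place
  quickSortInPlace(array, lo, boundary - 1) // sort the smaller-than-pivot range
  quickSortInPlace(array, boundary + 1, hi) // sort the rest
  return array
}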
5. Merge Sort
Concept/Mechanism
Merge Sort is another efficient, divide-and-conquer algorithm. It works as follows:
- Divide: If the list has more than one element, divide it recursively into two halves (approximately equal) until you have sublists of size 1. A list of size 1 is considered sorted.
- Conquer (Merge): Repeatedly merge the sorted sublists back together, creating larger sorted sublists, until you have merged all sublists into a single, fully sorted list. The merging process involves comparing elements from the two sorted sublists and placing the smaller element into a new, combined list.
Algorithm Steps (Code Walkthrough - `mergeSort`)
- Base Case (`if (arr.length <= 1)`): An array with 0 or 1 element is sorted.
- Divide (`const middle = Math.floor(arr.length / 2); const left = ...; const right = ...`): Find the middle index. Recursively call `mergeSort` on the left half (`arr.slice(0, middle)`) and the right half (`arr.slice(middle)`). Note that `slice` creates copies.
- Merge (`const result = []; while (...) {...}`):
  - Initialize an empty `result` array and pointers `i` (for `left`) and `j` (for `right`) to 0.
  - While both `left` and `right` sublists have elements remaining (`i < left.length && j < right.length`):
    - Compare `left[i]` and `right[j]`.
    - Push the smaller element onto the `result` array and increment the corresponding pointer (`i` or `j`).
  - After the loop, one of the sublists might still have remaining elements (which are guaranteed to be larger than all elements currently in `result`).
- Concatenate Remaining Elements (`return result.concat(...)`): Append any remaining elements from the `left` sublist (`left.slice(i)`) and the `right` sublist (`right.slice(j)`) to the `result`. Only one of these slices will actually contain elements.
// merge.ts - Core Logic
export function mergeSort(arr: number[]): number[] {
  // Base case: array with 0 or 1 element is sorted
  if (arr.length <= 1) {
    return arr
  }
  // Find the middle point and divide the array
  const middle = Math.floor(arr.length / 2)
  // Recursively sort the left and right halves
  const left = mergeSort(arr.slice(0, middle))
  const right = mergeSort(arr.slice(middle))
  // Merge the sorted halves
  const result: number[] = []
  let i = 0, // Pointer for left array
    j = 0 // Pointer for right array
  // Compare elements from left and right, add the smaller to result
  // ('<=' takes from the left half on ties, which keeps the sort stable)
  while (i < left.length && j < right.length) {
    if (left[i] <= right[j]) {
      result.push(left[i])
      i++
    } else {
      result.push(right[j])
      j++
    }
  }
  // Concatenate any remaining elements (from either left or right)
  return result.concat(left.slice(i)).concat(right.slice(j))
}
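Like `quickSort`, this `mergeSort` returns a brand-new array rather than sorting in place (example values are illustrative):

// Hypothetical usage
console.log(mergeSort([38, 27, 43, 3, 9, 82, 10])) // [3, 9, 10, 27, 38, 43, 82]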
Time Complexity
- Worst Case: O(n log n)
- Average Case: O(n log n)
- Best Case: O(n log n)
Merge Sort consistently divides the array into halves, resulting in a recursion depth of O(log n). The merging process at each level takes O(n) time because every element needs to be looked at and placed into the result array. This leads to a reliable O(n log n) performance regardless of the initial order of elements.
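In recurrence form (a textbook formulation, not taken from the source material): T(n) = 2T(n/2) + O(n), since each call sorts two halves and then does linear-time merging. Unrolling gives O(n) work on each of the O(log n) levels, i.e. O(n log n) overall.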
Space Complexity
- O(n) - Merge Sort requires auxiliary space proportional to the number of elements (`n`). This is because the merge step needs temporary arrays (`result` in this implementation, plus the arrays created by `slice`) to store the combined sublists. It is generally not an in-place algorithm in its standard form.
Comparison Summary
| Algorithm      | Time Complexity (Best) | Time Complexity (Average) | Time Complexity (Worst) | Space Complexity (Worst) | In-Place   | Stable* | Notes                                          |
| :------------- | :--------------------- | :------------------------ | :---------------------- | :----------------------- | :--------- | :------ | :--------------------------------------------- |
| Bubble Sort    | O(n)¹                  | O(n²)                     | O(n²)                   | O(1)                     | Yes        | Yes     | Simple but inefficient for large lists.        |
| Insertion Sort | O(n)                   | O(n²)                     | O(n²)                   | O(1)                     | Yes        | Yes     | Efficient for small or nearly sorted lists.    |
| Selection Sort | O(n²)                  | O(n²)                     | O(n²)                   | O(1)                     | Yes        | No      | Simple, minimizes swaps, inefficient compares. |
| Quick Sort     | O(n log n)             | O(n log n)                | O(n²)                   | O(n)² / O(log n)³        | No² / Yes³ | No      | Generally fastest on average, recursive.       |
| Merge Sort     | O(n log n)             | O(n log n)                | O(n log n)              | O(n)                     | No         | Yes     | Consistent performance, requires extra space.  |
- ¹ Best case for optimized Bubble Sort.
- ² The provided implementation uses O(n) space; in-place versions exist.
- ³ Space for in-place Quick Sort (primarily stack space).
- *Stability (maintaining relative order of equal elements) was not discussed in the source material but is a relevant characteristic.
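As a sanity check, the five implementations can be cross-validated against the built-in `Array.prototype.sort` on random input (a hypothetical harness; only the import paths follow the file names used above):

// check.ts - Hypothetical cross-check harness (not from the source)
import { bubbleSort } from './bubble'
import { insertionSort } from './insertion'
import { selectionSort } from './selection'
import { quickSort } from './quick'
import { mergeSort } from './merge'

const input = Array.from({ length: 1_000 }, () => Math.floor(Math.random() * 1_000))
const expected = [...input].sort((a, b) => a - b) // numeric comparator, not lexicographic

for (const sort of [bubbleSort, insertionSort, selectionSort, quickSort, mergeSort]) {
  const actual = sort([...input]) // pass a copy: three of the five mutate their input
  const ok = actual.length === expected.length && actual.every((v, k) => v === expected[k])
  console.log(`${sort.name}: ${ok ? 'OK' : 'MISMATCH'}`)
}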
General Complexity Concepts
- Time Complexity: Measures how the runtime of an algorithm scales with the size of the input (n). Big O notation describes the upper bound or worst-case scenario growth rate.
- Space Complexity: Measures the amount of extra memory an algorithm requires, beyond the input storage, as the input size grows. O(1) means constant space (in-place).
- Best, Average, Worst Cases: These describe the algorithm's performance under the most favorable, typical, and least favorable input arrangements, respectively.
- Deriving Complexity: Nested loops each iterating up to `n` often lead to O(n²) time; for example, two nested loops over `n` elements perform roughly (n-1) + (n-2) + ... + 1 = n(n-1)/2 comparisons, which grows as n². Algorithms that repeatedly halve the problem size (like Quick Sort and Merge Sort) introduce an O(log n) factor, leading to O(n log n) when combined with linear work per step.
Conclusion
The choice of sorting algorithm depends heavily on the specific requirements of the application.
- For simplicity and small datasets, Bubble Sort, Insertion Sort, or Selection Sort might suffice, with Insertion Sort often being preferred due to its good performance on nearly sorted data.
- For general-purpose sorting with large datasets where average performance is critical, Quick Sort is often the fastest, provided care is taken with pivot selection to avoid the worst-case O(n²).
- When guaranteed O(n log n) performance is required, even in the worst case, or when stability is important, Merge Sort is an excellent choice, albeit with the cost of O(n) extra space.
Understanding the mechanisms and trade-offs (time vs. space complexity, best/average/worst cases) of these fundamental algorithms is essential for developing efficient software.