
'''Quicksort''' is a good default choice. It tends to be fast in practice, and with some small tweaks its dreaded <math>O(n^2)</math> worst-case time complexity becomes very unlikely. A tried and true favorite.
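For illustration, here's a minimal in-place quicksort sketch in Python. It uses a randomized pivot, which is one simple example of the "small tweaks" that make the quadratic worst case very unlikely:

<syntaxhighlight lang="python">
import random

def quicksort(items):
    """In-place quicksort with a randomized pivot (a sketch, not a tuned
    library implementation)."""
    def _sort(lo, hi):
        if lo >= hi:
            return
        # A random pivot means already-sorted or adversarial inputs don't
        # consistently trigger the O(n^2) worst case.
        pivot = items[random.randint(lo, hi)]
        i, j = lo, hi
        while i <= j:
            while items[i] < pivot:
                i += 1
            while items[j] > pivot:
                j -= 1
            if i <= j:
                items[i], items[j] = items[j], items[i]
                i += 1
                j -= 1
        _sort(lo, j)
        _sort(i, hi)

    _sort(0, len(items) - 1)
</syntaxhighlight>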
'''Heapsort''' is a good choice if you can't tolerate a worst-case time complexity of <math>O(n^2)</math> or need low space costs. The Linux kernel uses heapsort instead of quicksort for both of those reasons.
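A minimal heapsort sketch, showing where both selling points come from: the sift-down loop gives an <math>O(n \lg n)</math> worst case, and everything happens inside the input array, so extra space is <math>O(1)</math>:

<syntaxhighlight lang="python">
def heapsort(items):
    """In-place heapsort sketch: O(n lg n) worst case, O(1) extra space."""
    n = len(items)

    def sift_down(root, end):
        # Push items[root] down until the max-heap property holds again.
        while True:
            child = 2 * root + 1
            if child > end:
                return
            # Pick the larger of the two children.
            if child + 1 <= end and items[child] < items[child + 1]:
                child += 1
            if items[root] < items[child]:
                items[root], items[child] = items[child], items[root]
                root = child
            else:
                return

    # Build a max-heap, then repeatedly move the current max to the end.
    for start in range(n // 2 - 1, -1, -1):
        sift_down(start, n - 1)
    for end in range(n - 1, 0, -1):
        items[0], items[end] = items[end], items[0]
        sift_down(0, end - 1)
</syntaxhighlight>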
'''Merge sort''' is a good choice if you want a stable sorting algorithm. Also, merge sort can easily be extended to handle data sets that can't fit in RAM, where the bottleneck cost is reading and writing the input on disk, not comparing and swapping individual items.
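A minimal merge sort sketch. Taking from the left half on ties is exactly what makes it stable, and the same merge step is what generalizes to on-disk (external) sorting:

<syntaxhighlight lang="python">
def merge_sort(items):
    """Stable merge sort sketch: returns a new sorted list in O(n lg n) time."""
    if len(items) <= 1:
        return list(items)
    mid = len(items) // 2
    left = merge_sort(items[:mid])
    right = merge_sort(items[mid:])

    # Merge, taking from the left half on ties so equal items keep their
    # original relative order -- this is the stability property.
    merged, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i])
            i += 1
        else:
            merged.append(right[j])
            j += 1
    merged.extend(left[i:])
    merged.extend(right[j:])
    return merged
</syntaxhighlight>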
'''Radix sort''' looks fast, with its <math>O(n)</math> worst-case time complexity. But, if you're using it to sort binary numbers, then there's a hidden constant factor that's usually 32 or 64 (depending on how many bits your numbers are). That's often way bigger than <math>O(\lg(n))</math>, meaning radix sort tends to be slow in practice.
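A minimal least-significant-digit radix sort sketch, assuming non-negative integers that fit in 32 bits, sorted 8 bits at a time. The hidden constant shows up as the number of passes over the whole input (32 / 8 = 4 here):

<syntaxhighlight lang="python">
def radix_sort(nums, bits=32, digit_bits=8):
    """LSD radix sort sketch for non-negative integers up to `bits` bits wide.
    Runs bits / digit_bits stable bucket passes over the entire input."""
    mask = (1 << digit_bits) - 1
    for shift in range(0, bits, digit_bits):
        # One stable bucketing pass on the current digit.
        buckets = [[] for _ in range(1 << digit_bits)]
        for n in nums:
            buckets[(n >> shift) & mask].append(n)
        nums = [n for bucket in buckets for n in bucket]
    return nums
</syntaxhighlight>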
'''Counting sort''' is a good choice in scenarios where there is only a small number of distinct values to be sorted. This is pretty rare in practice, so counting sort doesn't get much use.
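A minimal counting sort sketch, assuming the values are integers in a small known range from 0 to max_value. It runs in <math>O(n + k)</math> time, where <math>k</math> is the number of possible values:

<syntaxhighlight lang="python">
def counting_sort(nums, max_value):
    """Counting sort sketch for integers in the range 0..max_value."""
    counts = [0] * (max_value + 1)
    for n in nums:
        counts[n] += 1
    # Rebuild the output by emitting each value as many times as it appeared.
    result = []
    for value, count in enumerate(counts):
        result.extend([value] * count)
    return result
</syntaxhighlight>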