Def: Let $(i, j)$ be an ordered pair of indices of an array $A$ such that $i < j$ and $A[i] > A[j]$. The pair $(i, j)$ is called an **inversion** in the array.

- Swapping two adjacent elements that are out of order removes exactly one inversion.
- A sorted array has zero inversions.

Note: Both of the above assertions have good proofs; the second one, in particular, admits a pretty proof by contradiction.
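The definition can be sanity-checked directly. Below is a minimal sketch of a brute-force inversion counter (the helper name `count_inversions` is mine, not from the notes); it confirms that a sorted array has zero inversions and that a single out-of-order adjacent pair contributes exactly one.

```python
def count_inversions(a):
    """Count pairs (i, j) with i < j and a[i] > a[j] by brute force."""
    n = len(a)
    return sum(1 for i in range(n) for j in range(i + 1, n) if a[i] > a[j])

print(count_inversions([1, 2, 3, 4]))  # sorted array: 0 inversions
print(count_inversions([2, 1, 3, 4]))  # one adjacent pair out of order: 1
print(count_inversions([4, 3, 2, 1]))  # fully reversed: 6 = 4*3/2
```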

Technically, the running time of insertion sort is $O(I + N)$, where $I$ is the number of inversions in the array. If the number of inversions is $O(N)$, insertion sort will run in linear time.
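This can be seen concretely: each step of insertion sort's inner loop moves an element one position left and fixes exactly one inversion, so the number of moves equals the number of inversions in the input. A minimal sketch (the counting instrumentation is mine, not from the notes):

```python
def insertion_sort(a):
    """Return (sorted copy, number of element moves).

    Each inner-loop swap shifts an element one position left and removes
    exactly one inversion, so `moves` equals the input's inversion count.
    """
    a = list(a)
    moves = 0
    for i in range(1, len(a)):
        j = i
        while j > 0 and a[j - 1] > a[j]:
            a[j - 1], a[j] = a[j], a[j - 1]
            moves += 1
            j -= 1
    return a, moves

# [3, 1, 4, 1, 5] has 3 inversions: (3,1), (3,1), and (4,1)
print(insertion_sort([3, 1, 4, 1, 5]))  # ([1, 1, 3, 4, 5], 3)
```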

Thm: The average number of inversions in an array of $N$ distinct numbers is $\frac{N(N-1)}{4}$.

Proof: Let $L$ be any list of numbers and $L_r$ be the list in reverse order. Consider any pair of two numbers $x$ and $y$ in the list with (without loss of generality) $y > x$. The pair $(y, x)$ is an inversion in exactly one of $L$ and $L_r$. The total number of such pairs in any list and its reverse is $\frac{N(N-1)}{2}$. Hence, an average list has half this amount of inversions, i.e. $\frac{N(N-1)}{4}$ inversions.

By this theorem, it is evident that insertion sort runs in quadratic time on average.
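The theorem can be verified exhaustively for small $N$ by averaging the inversion count over all $N!$ permutations; a minimal sketch (the helper `count_inversions` is mine, not from the notes):

```python
from itertools import permutations

def count_inversions(a):
    """Count pairs (i, j) with i < j and a[i] > a[j] by brute force."""
    n = len(a)
    return sum(1 for i in range(n) for j in range(i + 1, n) if a[i] > a[j])

n = 5
perms = list(permutations(range(n)))
avg = sum(count_inversions(p) for p in perms) / len(perms)
print(avg, n * (n - 1) / 4)  # both are 5.0, matching the theorem
```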

Thm: Any algorithm that sorts by exchanging adjacent elements requires $\Omega(N^2)$ time on average.

Proof: The average number of inversions is initially $\frac{N(N-1)}{4} = \Omega(N^2)$, as was established by the previous theorem. Since each adjacent swap removes at most one inversion, $\Omega(N^2)$ swaps are required to sort the array.
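Bubble sort illustrates the bound: because it exchanges only adjacent elements, it performs exactly one swap per inversion. A minimal sketch with a hypothetical swap-counting instrumentation (not from the notes):

```python
def bubble_sort_swaps(a):
    """Bubble sort a copy of `a`; return the number of adjacent swaps.

    Each adjacent swap removes exactly one inversion, so the swap count
    equals the inversion count of the input.
    """
    a = list(a)
    swaps = 0
    changed = True
    while changed:
        changed = False
        for i in range(len(a) - 1):
            if a[i] > a[i + 1]:
                a[i], a[i + 1] = a[i + 1], a[i]
                swaps += 1
                changed = True
    return swaps

print(bubble_sort_swaps([4, 3, 2, 1]))  # 6, the reversed list's inversion count
```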

This lower-bound proof is valid for an entire class of sorting algorithms that perform only adjacent exchanges. This includes, for example,

- Insertion Sort
- Bubble Sort
- Selection Sort

This represents a limitation of ‘simple’ sorting algorithms and makes it clear that to achieve more efficiency we need to consider algorithms that do comparisons and exchanges between elements that are far apart. To run faster than quadratic time, an algorithm must eliminate more than one inversion per exchange.
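A single far-apart exchange can indeed remove many inversions at once, which is the loophole algorithms like Shellsort exploit. A minimal sketch (the helper `count_inversions` is mine, not from the notes):

```python
def count_inversions(a):
    """Count pairs (i, j) with i < j and a[i] > a[j] by brute force."""
    n = len(a)
    return sum(1 for i in range(n) for j in range(i + 1, n) if a[i] > a[j])

a = [5, 2, 3, 4, 1]
print(count_inversions(a))  # 7 inversions
a[0], a[4] = a[4], a[0]     # one exchange of two far-apart elements
print(count_inversions(a))  # 0: a single swap removed 7 inversions
```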