Static Analysis

May 07 2018

In the previous post we analysed how efficient the insertion sort algorithm is. We did dynamic analysis — we analysed the algorithm by actually running it.

Technically, we analysed an implementation of the algorithm; perhaps a cleverer implementation could be more efficient. From one perspective, the implementation is what we really care about:

  • The implementation is what will actually be run in the real world.
  • A more efficient implementation will make a real piece of software faster.

On the other hand, it’s hard to do an accurate dynamic analysis:

  • There will always be some measurement error.
  • It might take a long time to do the analysis.
  • Running time depends on external factors like how fast the computer is, or what other programs are running. Results from different computers, or even the same computer on different days, may be incomparable.

In this post, we’ll get around these problems by making a series of simplifications. We’ll do a kind of static analysis called asymptotic analysis — this means we’ll analyse algorithms without running them, and we’ll assume the inputs are large.

Informal static analysis

Before we begin, let’s consider how we’ve already been analysing efficiency informally. For example, when we analysed array-lists and linked lists, we talked about how many steps it took to perform list operations like get and insert:

  • Getting from an array-list is faster than a linked list, because the array-list can read the value directly from memory, whereas in a linked list the computer might have to follow many pointers.
  • Inserting is slower near the start of an array-list than near the end, since it requires more reading/writing to create space for the new value in the array.
  • Inserting is faster near the start of a linked list than near the end, since it requires following fewer pointers.

This is nothing like dynamic analysis. Instead of running the algorithms and measuring the time, this analysis is hypothetical — what would happen if we ran them? And instead of measuring time, we’re measuring how much “work” the algorithm has to do.

Before we go any further, it is well worth revisiting the interactive models of array-lists and linked lists, and thinking carefully about how many steps it takes to perform each list operation. How is the number of steps represented in each model? When does it depend on the length of the list?
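
As a rough illustration, here is a small Python sketch of our own (not the interactive models themselves) showing how get differs between the two structures: one pointer-following step per element in the linked list, versus a single direct read in the array-list.

class Node:
    def __init__(self, value, next=None):
        self.value = value
        self.next = next

def linked_list_get(head, index):
    # Follow one pointer per element before the target,
    # so the number of steps grows with the index.
    node = head
    for _ in range(index):
        node = node.next
    return node.value

def array_list_get(array, index):
    # A single direct read from memory, whatever the index is.
    return array[index]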

Simplification: count basic operations

The time it takes a program to run is roughly proportional to how many “basic operations” the computer will execute. These include things like reading or writing memory, arithmetic operations, bitwise operations, comparison operations and logical operations — the most basic steps of a computation, which take a constant time to execute.

So, we can judge the efficiency of an algorithm by counting how many basic operations it will perform. This simplification is reasonable, because more basic operations should mean proportionally more time.

On the other hand, different basic operations take different times — for example, adding integers is generally faster than dividing floating-point numbers. So this simplification isn’t always valid, but it makes the analysis much easier, and usually still gets the right answer.

It’s worth clarifying some things:

  • A “basic operation” doesn’t need to be a single CPU instruction, as long as it takes a constant amount of time.
  • We’re not talking about the number of instructions in the code; if a loop causes the same instruction to be executed 1000 times, then we should count it 1000 times.
  • Not every language primitive is a basic operation. For example, Python’s in operator actually does a linear search when you use it on a list (roughly sketched below), so the number of basic operations depends on the length of the list.
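
For instance, target in values behaves roughly like this hand-written linear search when values is a list (an illustration, not CPython’s actual implementation):

def contains(values, target):
    # Roughly what `target in values` does for a list: compare
    # the target against each element until a match is found.
    for x in values:
        if x == target:               # one comparison per element
            return True
    return False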

Simplification: assume large inputs

The amount of “work” an algorithm has to do will depend on how large the input is. This could be the length of a list, the size of a set or dictionary, or the number of nodes in a tree or graph.

Computers are blazingly fast at small tasks; even inefficient algorithms are usually fast enough when the input is small. On the other hand, when the input is large, the differences between algorithms can be enormous.

Consider how many comparison operations linear and binary search might have to perform, depending on the length of the list:

n           Linear search    Binary search
10          10               4
100         100              7
1,000       1,000            10
10,000      10,000           14
100,000     100,000          17
1,000,000   1,000,000        20
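
One way to see where these figures come from is to instrument each search with an explicit counter. This is a sketch of our own, counting one comparison per loop iteration; searching for a value that isn’t in the list produces the worst-case counts shown above.

def linear_search_comparisons(values, target):
    # Scan the list from the front, counting comparisons.
    comparisons = 0
    for x in values:
        comparisons += 1
        if x == target:
            break
    return comparisons

def binary_search_comparisons(values, target):
    # Repeatedly halve a sorted list, counting comparisons.
    comparisons = 0
    lo, hi = 0, len(values) - 1
    while lo <= hi:
        mid = (lo + hi) // 2
        comparisons += 1
        if values[mid] == target:
            break
        elif values[mid] < target:
            lo = mid + 1
        else:
            hi = mid - 1
    return comparisons

values = list(range(1_000_000))
print(linear_search_comparisons(values, -1))   # 1000000
print(binary_search_comparisons(values, -1))   # 20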

How about selection sort and merge sort?

n           Selection sort     Merge sort
10          45                 40
100         4,950              700
1,000       499,500            10,000
10,000      49,995,000         140,000
100,000     4,999,950,000      1,700,000
1,000,000   499,999,500,000    20,000,000

Algebraically, we say n is the “input size”, and we can mathematically estimate how many basic operations an algorithm will perform, as a function of n.(1) When one algorithm is faster than another, it might be difficult to judge for small n, but for large n it’s not even close.
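
For instance, the tables above can be reproduced with a little arithmetic. This sketch assumes selection sort performs exactly n(n − 1)/2 comparisons (derived at the end of this post), and that the merge sort column is roughly n × log₂(n), which the table appears to round up:

import math

for n in (10, 100, 1_000, 10_000, 100_000, 1_000_000):
    selection = n * (n - 1) // 2        # exact: every pair once
    merge = round(n * math.log2(n))     # approximate
    print(f"{n:>9,}  {selection:>15,}  {merge:>12,}")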

This is important because it will justify all of our other simplifications. When the difference between algorithms is this great, accuracy barely matters — we can make big approximations without invalidating the analysis.

Simplification: count one basic operation

Consider an algorithm which finds the sum of a list of numbers:

def list_sum(numbers):
    total = 0
    for x in numbers:
        total += x
    return total

This algorithm uses a few different basic operations:

  • Reading a value from numbers.
  • Adding a value to total.
  • Assigning a value to total.

For a list of n numbers, the algorithm would “read” n times, “add” n times, and “assign” n + 1 times (including the initial assignment total = 0). So the total number of basic operations is 3n + 1.
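
To check the count, here is a hypothetical instrumented version of list_sum, with counters (our own additions) tallying each kind of basic operation:

def list_sum_counted(numbers):
    reads = adds = assigns = 0
    total = 0
    assigns += 1                      # the initial `total = 0`
    for x in numbers:
        reads += 1                    # reading a value from numbers
        total = total + x
        adds += 1                     # the addition
        assigns += 1                  # assigning the result to total
    return total, reads, adds, assigns

# For 1,000 numbers: 1,000 reads, 1,000 adds and 1,001 assignments,
# i.e. 3n + 1 = 3,001 basic operations in total.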

We can simplify things by just counting the “add” operations. If the running time is roughly proportional to the 3n + 1 total, then it’s also roughly proportional to the n “add” operations.

Assuming n is large, the difference between n and 3n + 1 is not really important. Whether merge sort does 20 million or 60 million basic operations, that’s still nothing compared to the half a trillion for selection sort.

So, it’s fine to count just additions, or just comparisons, and ignore other basic operations — as long as the number is still proportional to the total “work” the algorithm has to do.

Simplification: ignore small terms and constants

The selection sort algorithm iteratively finds the smallest unsorted value, and swaps it into place. If there are n unsorted values, finding the smallest one requires exactly n − 1 comparisons.(2)

The unsorted part of the list gets smaller on each iteration; for example, the total number of comparisons for a list of length 10 would be:

9 + 8 + 7 + 6 + 5 + 4 + 3 + 2 + 1 = 45

The formula for this total is:

(n − 1) + (n − 2) + ⋯ + 1 = n(n − 1) / 2

When n is large, the difference between n and n − 1 doesn’t matter:

n(n − 1) / 2 ≈ (n × n) / 2 = n² / 2

By the same reasoning as before, the factor of 1/2 doesn’t really matter, so n² is a good enough approximation for how many comparisons selection sort will do. These approximations are fine, so long as we’re only interested in large inputs.
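
As a quick empirical check, here is a sketch of the selection sort just described, instrumented with a comparison counter (the counter is our addition):

def selection_sort_comparisons(values):
    # Sort a copy in place, returning how many comparisons were made.
    values = list(values)
    comparisons = 0
    n = len(values)
    for i in range(n):
        smallest = i
        for j in range(i + 1, n):
            comparisons += 1
            if values[j] < values[smallest]:
                smallest = j
        values[i], values[smallest] = values[smallest], values[i]
    return comparisons

print(selection_sort_comparisons(range(10, 0, -1)))    # 45
print(selection_sort_comparisons(range(1_000)))        # 499500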

Conclusion

It’s worth remarking on how abstract this is — given an algorithm, we can derive a mathematical formula for how efficient it is. This formula is called the complexity, or time complexity, of the algorithm.

The complexity formula doesn’t really tell us how much time the algorithm will take to run; it very roughly measures how much “work” the algorithm has to do, as a function of how “large” the input is.

In the next post, we’ll see how to make sense of complexities, and use them to compare algorithms against each other. We’ll also see a much easier way to derive the complexity of an algorithm, requiring less algebra.

Footnotes

  1. An estimate as a function of n may be specific to a kind of input — usually either random inputs, or the “worst case”.
  2. The first value is automatically the “best so far”, but the remaining n − 1 values must be compared.
