# DS&A

Data Structures and Algorithms

## Algorithm Strategies

Apr 30, 2018

Perhaps the most difficult part of our process for writing algorithms is splitting a problem into subproblems. This is more an art than a science — there’s no systematic way to identify subproblems, and each problem might be split into subproblems in many different ways.(1)

Nonetheless, we can learn by looking at examples of common algorithm strategies,(2) and by practice. In this post we’ll see examples of three common algorithm strategies:

• Brute-force
• Greedy
• Divide and conquer

These are not algorithms — they are kinds of algorithm.

### Brute-force

The linear search algorithm “looks at everything in the list one by one, and stops when the target is found”. Here are some of the subproblems it solves:

• Problem: find the index of a target value in a list.
• Subproblem: look at every value in the list, one by one.
• Subproblem: test whether the target is found.
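As a minimal sketch in Python (the function name is mine), these two subproblems become the loop and the test:

```python
def linear_search(items, target):
    # look at every value in the list, one by one
    for index, value in enumerate(items):
        # test whether the target is found
        if value == target:
            return index
    raise ValueError('Target not found')
```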

The depth-first search and breadth-first search algorithms “traverse the entire tree, and stop when the target is found”. Here are some of the subproblems they solve:

• Problem: find a target value in a tree.
• Subproblem: visit every node in the tree, one by one.
• Subproblem: test whether the target is found.
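A sketch of the depth-first version, assuming a simple tree representation in which each node is a `(value, children)` pair (the representation and function name are mine):

```python
def depth_first_search(node, target):
    value, children = node
    # test whether the target is found at this node
    if value == target:
        return True
    # visit every child subtree, one by one
    return any(depth_first_search(child, target) for child in children)
```

For example, with `tree = (1, [(2, []), (3, [(4, [])])])`, the call `depth_first_search(tree, 4)` returns `True`.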

The algorithm to find the maximum value in a list “looks at everything in the list one by one, and keeps track of the largest so far”. Here are some of the subproblems:

• Problem: find the maximum value in a list.
• Subproblem: look at every value in the list, one by one.
• Subproblem: test whether the current value is larger than the current maximum.
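A sketch in Python (the function name is mine), assuming the list is non-empty:

```python
def find_maximum(items):
    # the first value is the largest seen so far
    maximum = items[0]
    # look at every remaining value, one by one
    for value in items[1:]:
        # test whether the current value is larger than the current maximum
        if value > maximum:
            maximum = value
    return maximum
```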

These algorithms are all examples of the brute-force strategy. They solve different problems, and use different “plans” to solve the subproblems, but the strategy is the same: “try everything, and see what works”.

The values to be tested are called the candidate solutions, or just candidates.

#### Generate and test

In each of the above examples, the candidates come from a list or a tree, meaning they’re all stored in memory at once. This is often unnecessary, and instead we can just generate the candidates one at a time, when we need them. For example, consider the following problems, which have brute-force solutions:

• Problem: find a solution to the equation x⁵ + x = target, where x is an integer between 1 and 100.
• Subproblem: iterate over the numbers from 1 to 100.
• Subproblem: test if a candidate x solves the equation.
• Example: if target = 992436606 then the solution is x = 63.

In Python, a solution could look like this:

```
def solve_equation(target):
    # start at the first candidate
    x = 1

    # while there is a candidate
    while x <= 100:
        # test the current candidate
        if x ** 5 + x == target:
            return x

        # advance to the next candidate
        x += 1

    # there are no more candidates to try
    raise ValueError('No solution found')
```

Notice how this algorithm works similarly to linear search, but there is no list of all the numbers from 1 to 100 — the algorithm generates them as necessary.

Another example is brute-force password cracking:

• Problem: given an SHA-2 hash, find the password, assuming it is a string of up to 8 lowercase letters.
• Subproblem: iterate over all strings of up to 8 lowercase letters.
• Subproblem: test if a candidate string has the correct SHA-2 hash.

There is no need to store all the candidate strings in a list, as this would be a waste of memory; instead, we can generate them one at a time. That’s more difficult than generating the integers from 1 to 100, but it’s simpler than the original problem of finding the correct password given the hash.
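A generate-and-test sketch (the function name is mine), using `itertools.product` to generate the candidate strings one at a time. Searching all strings of up to 8 letters means around 2 × 10¹¹ candidates, so in practice you would try it with a smaller limit:

```python
import hashlib
from itertools import product
from string import ascii_lowercase

def crack(target_hash, max_length=8):
    # iterate over all strings of up to max_length lowercase letters
    for length in range(1, max_length + 1):
        for letters in product(ascii_lowercase, repeat=length):
            candidate = ''.join(letters)
            # test if the candidate string has the correct SHA-2 hash
            if hashlib.sha256(candidate.encode()).hexdigest() == target_hash:
                return candidate
    raise ValueError('No password found')
```

For example, `crack(hashlib.sha256(b'cab').hexdigest(), max_length=3)` recovers the password `'cab'`.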

This kind of brute-force search is called “generate and test”. The hard part is usually generating the candidates — this may require splitting into further subproblems.(3)

### Greedy

Consider the selection sort algorithm, which “repeatedly finds the smallest unsorted value and swaps it to the correct index”. Here are some of the subproblems:

• Problem: sort a list.
• Subproblem: iterate over the indices in the list.
• Subproblem: given an index, put the correct value there.

The “correct value” is the smallest one not yet sorted, so there are further subproblems: “find the index of the smallest unsorted value” and “swap the values at two indices”.
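A sketch of selection sort along these lines (the function name is mine); note that the “find the index of the smallest unsorted value” subproblem is itself solved by a brute-force scan:

```python
def selection_sort(items):
    # iterate over the indices in the list
    for i in range(len(items)):
        # find the index of the smallest unsorted value
        smallest = i
        for j in range(i + 1, len(items)):
            if items[j] < items[smallest]:
                smallest = j
        # swap the values at the two indices
        items[i], items[smallest] = items[smallest], items[i]
```

The list is sorted in place, so `selection_sort` returns `None`.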

Consider also the insertion sort algorithm, which “shifts each value to the correct index”. Here are some of the subproblems:

• Problem: sort a list.
• Subproblem: iterate over the values in the list.
• Subproblem: given a value, shift it to the correct index.

There are further subproblems: “find the correct index” and “shift a value from one index to another”. The “correct index” is the index in the sorted part of the list where the current value will be in order.
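A sketch of insertion sort (the function name is mine); here “find the correct index” and “shift a value” are handled by one loop that shifts larger sorted values rightwards until the current value fits:

```python
def insertion_sort(items):
    # iterate over the values in the list
    for i in range(1, len(items)):
        value = items[i]
        # shift larger sorted values right until the correct index is found
        j = i
        while j > 0 and items[j - 1] > value:
            items[j] = items[j - 1]
            j -= 1
        items[j] = value
```

As with selection sort, the list is sorted in place.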

These are two quite different decompositions of the same problem, but they follow the same strategy: “build the result piece by piece, always choosing the best piece”. In the case of sorting a list, the result is a sorted list, and the “pieces” are list items. This strategy is called greedy, because at each step the algorithm does what seems best right now, regardless of later choices.(4)

#### Example: change-making

Let’s consider another example: the change-making problem is to make change (i.e. give coins totalling the right value) with as few coins as possible. If we give coins one by one, the “best” coin to give at each step is the largest value that doesn’t exceed the remaining total.

• Problem: make a total value by giving as few coins as possible.
• Subproblem: give coins one by one.
• Subproblem: find the largest coin value that doesn’t exceed the remaining total.
• Subproblem: keep track of the remaining total.
• Example: to make `32` pence, `[20, 10, 2]` uses the fewest possible coins.

This is a real-world problem for anyone programming vending machines.(5) Here’s a solution in Python:

```
# values of UK coins
coin_values = [200, 100, 50, 20, 10, 5, 2, 1]

# coin_values is in descending order, so the first match is the largest
def find_largest_coin(total):
    for value in coin_values:
        if value <= total:
            return value
    raise ValueError("Can't choose any coin")

def make_change(total):
    result = []

    # give coins one-by-one until the total is reached
    while total > 0:
        # give the largest possible coin value
        value = find_largest_coin(total)
        result.append(value)

        # keep track of the remaining total
        total -= value

    return result
```

This is greedy because it builds the result piece by piece, and chooses the “best” piece at each step. The pieces are the coins, and the “best” coin is the largest value that doesn’t exceed the total.

Notice that the subproblem of finding the “best” coin has been solved here by linear search, which is a brute-force algorithm. Nonetheless, the change-making algorithm itself is not brute-force; it finds a combination of coins without trying every possible combination.

#### Warning

The greedy strategy is one of the easiest to use, but beware — the “best” choice now doesn’t necessarily lead to the correct result in the end.

When you write a greedy algorithm, you either need a reason to believe it’s correct for the problem you’re solving, or you should be aware that your algorithm doesn’t always give the best result.
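The UK coin values happen to form a system where the greedy choice always gives the fewest coins, but that is not true of every coin set. A parameterised sketch (the function name and the example coin set are mine):

```python
def make_change_greedy(total, coin_values):
    # coin_values must be in descending order and end with 1,
    # so that a coin can always be chosen
    result = []
    while total > 0:
        # give the largest coin value that doesn't exceed the remaining total
        value = next(v for v in coin_values if v <= total)
        result.append(value)
        total -= value
    return result
```

With coin values 4, 3 and 1, making 6 greedily gives `[4, 1, 1]`, three coins, even though `[3, 3]` uses only two.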

### Divide and conquer

Consider the binary search algorithm. Here are some of the subproblems it solves:

• Problem: find the index of a target value in an ordered list.
• Subproblem: divide the list into two parts.
• Subproblem: find the index of the target value in the correct part.

Notice that one of the subproblems is similar to the original problem — finding the index in part of the list isn’t really simpler than finding the index in the whole list, except that the part is shorter.(6)
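An iterative sketch of binary search (the function name is mine); each pass divides the remaining range in two and continues in the correct part:

```python
def binary_search(items, target):
    low, high = 0, len(items) - 1
    while low <= high:
        # divide the remaining range into two parts around the middle index
        middle = (low + high) // 2
        if items[middle] == target:
            return middle
        # continue the search in the correct part
        elif items[middle] < target:
            low = middle + 1
        else:
            high = middle - 1
    raise ValueError('Target not found')
```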

Consider also the merge sort algorithm:

• Problem: sort a list.
• Subproblem: divide the list into two halves.
• Subproblem: sort both halves.
• Subproblem: merge the sorted halves into a new sorted list.

Normally when we split a problem into subproblems, the subproblems should be simpler than the original problem, otherwise we haven’t made progress towards solving it. In this case, the subproblem — sort the parts — is the same problem as the original, but twice. As above, this is only simpler because the parts are shorter than the whole list.

Both algorithms use the divide and conquer strategy:

1. Split the input into parts,
2. solve the same problem on the parts,
3. use the solutions to the parts to solve the whole.

This strategy is effective because we get step 2 for free — just make recursive calls on the parts, and trust that you get the correct result. Usually, there’s an easy way to do step 1 or step 3, too; merge sort straightforwardly splits the list down the middle, so the actual sorting is done when the sorted halves are merged together.
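A recursive sketch of merge sort along these lines (the function name is mine):

```python
def merge_sort(items):
    # a list of zero or one items is already sorted
    if len(items) <= 1:
        return items

    # step 1: divide the list into two halves
    middle = len(items) // 2

    # step 2: sort both halves, trusting the recursive calls
    left = merge_sort(items[:middle])
    right = merge_sort(items[middle:])

    # step 3: merge the sorted halves into a new sorted list
    result = []
    i = j = 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            result.append(left[i])
            i += 1
        else:
            result.append(right[j])
            j += 1
    result.extend(left[i:])
    result.extend(right[j:])
    return result
```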

#### Footnotes

1. For example, the problem of sorting a list has many different solutions, which decompose the problem in different ways.
2. “Algorithm strategies” are alternatively called “algorithm design paradigms”.
3. The further subproblems are usually:
• Start at the first candidate,
• Test if there is another candidate,
• Advance to the next candidate.
For example, to generate numbers from 1 to 100, these parts are `x = 1`, `x <= 100` and `x += 1` respectively.
4. For example, selection sort on the list `[2, 3, 4, 1]` will begin by swapping the `1` and the `2`, despite this putting the `2` in the wrong place.
5. A vending machine has the extra problem of possibly running out of coins.
6. This property — that the subproblem is a smaller instance of the same problem — suggests we could use recursion. Indeed, there is a recursive version of binary search, but the iterative version is easier.
