# Greedy Algorithms

*An advertisement for the 1924 film Greed. Metro-Goldwyn-Mayer, Public domain, via Wikimedia Commons*

There is a concept in computer programming called a greedy algorithm: an algorithm that accomplishes a task by solving a problem in several stages, and at each stage making the locally optimal, or “greediest”, decision without taking any other information into account. I think this kind of thinking extends far beyond computer science (more on that later), but first I want to give some examples and provide a clear introduction for people who aren’t familiar with the subject.

For example, consider the problem of making change: suppose I’m a cashier and I need to give a customer 44 cents. To save time, I’d ideally like to give the customer the fewest coins possible. Most likely, I would give the customer a quarter, a dime, a nickel, and then four pennies. It seems obvious that the best way to make change is just to choose the largest coin we can, over and over, until we’re done. This is one example of a __greedy algorithm__, which formalizes a strategy used in many areas of computer science.

## Designing Greedy Algorithms

That solution is obviously optimal: replacing any of our chosen coins with some other combination of coins would result in more coins overall; exchanging a dime would result in two nickels, or a nickel and five pennies, or ten pennies. The same is true of the nickel and quarter. This is the rough idea behind the more formal proof technique called the “exchange argument”.
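We can also confirm this computationally. Here’s a small sketch (my own illustration, not code from the article) that brute-forces every way to make 44 cents and checks that the greedy answer of seven coins really is the minimum:

```python
from itertools import product

def min_coins_brute_force(x):
    """Enumerate every (quarters, dimes, nickels) combination and fill the
    rest with pennies, returning the smallest total coin count."""
    best = None
    for q, d, n in product(range(x // 25 + 1), range(x // 10 + 1), range(x // 5 + 1)):
        remainder = x - 25 * q - 10 * d - 5 * n
        if remainder >= 0:  # pennies cover whatever is left
            total = q + d + n + remainder
            best = total if best is None else min(best, total)
    return best

print(min_coins_brute_force(44))  # 7: one quarter, one dime, one nickel, four pennies
```

No combination beats the seven coins the exchange argument predicts.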

Now, let’s generalize a bit. Suppose a customer asks me to break an amount $x$, in cents, into the fewest quarters, dimes, nickels, and pennies possible. This is called the change-making problem, also known as the coin change problem. How do we give a general algorithm for finding the optimal solution?

```python
from typing import List

us_coins = [25, 10, 5, 1]

def MakeChange(x: int, coins: List[int]) -> List[int]:
    ...
```

We would want a solution with $q$ quarters, $d$ dimes, $n$ nickels, and $p$ pennies to result in the list `[q, d, n, p]`. In our example above, we would expect `MakeChange(44, us_coins)` to produce `[1, 1, 1, 4]`.

So, how do we actually design an algorithm to solve this problem? We can begin by making a couple observations:

- Observation 1: We only give the customer one coin at every step of the problem. Every time we give the customer a coin with value $c$, we then have to solve the change-making problem again with the new value $x - c$.

For example, when I started with $x$ = 44, my first step was to give the customer a quarter, reducing $x$ to $x - c$ = 44 - 25 = 19, and then *solve the problem again* with $x$ = 19.

This means that we can use a strategy called __divide-and-conquer__: we solve the problem by breaking it down into subproblems and solving those. In this case, the solution to `MakeChange(x)` involves choosing a value for `c` and then solving one subproblem: `MakeChange(x - c)`. So we have four candidate subproblems: `MakeChange(x - 25)`, `MakeChange(x - 10)`, `MakeChange(x - 5)`, and `MakeChange(x - 1)`.
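To make the recursion concrete, here’s a small sketch (my own illustration, not code from the article) that follows the greedy choice and prints the chain of subproblem values starting from $x$ = 44:

```python
def subproblem_trace(x, coins=(25, 10, 5, 1)):
    """Record each remaining amount as we repeatedly take the largest coin
    that fits and recurse on the remainder (coins sorted largest first)."""
    trace = [x]
    while x > 0:
        c = next(c for c in coins if c <= x)  # largest coin that fits
        x -= c
        trace.append(x)
    return trace

print(subproblem_trace(44))  # [44, 19, 9, 4, 3, 2, 1, 0]
```

Each arrow in the chain 44 → 19 → 9 → 4 → 3 → 2 → 1 → 0 is one coin handed to the customer.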

- Observation 2: An optimal solution to `MakeChange(x)` will involve the optimal solution to `MakeChange(x - c)`, for some value of $c$.

This is obvious if you break it down: the total number of coins we end up with is one (for the coin we just gave) plus the fewest coins needed to make change for the remaining amount, which is by definition the optimal solution to the subproblem. This is called the __optimal substructure__ property.
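The optimal substructure property is exactly what lets us write a recurrence: the best answer for $x$ is one coin plus the best answer for $x - c$, minimized over every usable coin $c$. Here’s a sketch of that recurrence (my own illustration, with memoization added so repeated subproblems aren’t recomputed):

```python
from functools import lru_cache

def min_coins(x, coins=(25, 10, 5, 1)):
    """Fewest coins to make x cents, via the optimal-substructure recurrence:
    opt(x) = min over usable coins c of 1 + opt(x - c)."""
    @lru_cache(maxsize=None)
    def opt(amount):
        if amount == 0:
            return 0  # no money left: zero coins needed
        return min(1 + opt(amount - c) for c in coins if c <= amount)
    return opt(x)

print(min_coins(44))  # 7, matching the greedy answer
```

Note that this version tries *every* coin at each step; the greedy insight below is that, for US coins, trying only the largest one suffices.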

However, `MakeChange(x - 25)`, `MakeChange(x - 10)`, `MakeChange(x - 5)`, and `MakeChange(x - 1)` might all result in different numbers of coins, and we want the one that results in the fewest. We do that by choosing the right value for $c$, which brings us to observation 3:

- Observation 3: At every stage of the problem, we should choose the largest coin we can. That is, choose the largest value of $c$ such that $c \leq x$.

This is the key observation that means we want to design a *greedy* algorithm. (The word fits the idea of greedily taking the largest coin every time.) This observation holds essentially because, in the US coin system, it’s never a mistake to take the largest coin: we never end up in a situation where taking one coin forces us to create a disproportionately large number of pennies later, for example. So choosing the greatest value of $c$ leads to the fewest coins overall: one for the coin we take, plus however many `MakeChange(x - c)` needs.

So finally, our algorithm is as follows:

```python
def MakeChange(x: int, coins: List[int]) -> List[int]:
    # Assumes coins is sorted in decreasing order, as us_coins is.
    for i, c in enumerate(coins):
        if c <= x:
            # Take the largest coin that fits, then solve the remainder.
            solution = MakeChange(x - c, coins)
            solution[i] += 1
            return solution
    return len(coins) * [0]  # base case: x == 0, no coins needed
```

The fact that the greedy solution works for the US coin system means that the coin system is __canonical__. (This is actually a number-theoretic property, so whether a system is canonical or not can be determined more intelligently than just seeing if the greedy strategy fails.)
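Not every coin system is canonical. As a quick illustration (my example, not the article’s), consider a system with coins worth 4, 3, and 1: for $x$ = 6, greedy picks 4 + 1 + 1 (three coins), while the true optimum is 3 + 3 (two coins). A sketch comparing the greedy count against a dynamic-programming optimum:

```python
def greedy_count(x, coins):
    """Repeatedly take as many of the largest coin as possible."""
    count = 0
    for c in sorted(coins, reverse=True):
        count += x // c
        x %= c
    return count

def optimal_count(x, coins):
    """Fewest coins for x via dynamic programming; works for any coin system."""
    INF = float("inf")
    best = [0] + [INF] * x  # best[a] = fewest coins to make amount a
    for amount in range(1, x + 1):
        for c in coins:
            if c <= amount:
                best[amount] = min(best[amount], best[amount - c] + 1)
    return best[x]

print(greedy_count(6, [4, 3, 1]))   # 3: greedy takes 4 + 1 + 1
print(optimal_count(6, [4, 3, 1]))  # 2: the optimum is 3 + 3
print(greedy_count(44, [25, 10, 5, 1]) == optimal_count(44, [25, 10, 5, 1]))  # True
```

For US coins the two always agree, which is exactly what it means for the system to be canonical.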

It’s worth mentioning that greedy algorithms have a lot of benefits: they often run very fast, frequently in time roughly proportional to the size of the input rather than growing exponentially. There are also many situations in which we’d want to use a greedy strategy even when it isn’t guaranteed to find the best possible answer, because a good answer found quickly is better than a perfect answer found too late.

There are many other examples of greedy algorithms. Gradient descent in machine learning and optimization and the A* search algorithm both employ greedy strategies. However, A* may still get “caught” exploring cul-de-sacs on the way to the destination, and gradient descent doesn’t always find the optimal solution.
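Gradient descent is greedy in exactly this sense: at each step it moves in the direction that most decreases the function, with no lookahead. A minimal sketch (my own example, using $f(x) = (x - 3)^2$, whose gradient is $2(x - 3)$):

```python
def gradient_descent(start, learning_rate=0.1, steps=100):
    """Greedily step downhill on f(x) = (x - 3)^2 until (hopefully) reaching
    the minimum. For functions with multiple valleys, this same greedy rule
    can get stuck in a local minimum instead."""
    x = start
    for _ in range(steps):
        x -= learning_rate * 2 * (x - 3)  # step against the gradient
    return x

print(round(gradient_descent(0.0), 3))  # converges to the minimum at x = 3
```

Each step is locally optimal, just like taking the largest coin, but unlike the canonical coin system there’s no guarantee the local choices add up to a global optimum.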

*An example of the greedy A* search algorithm getting caught on a cul-de-sac. Subh83, CC BY 3.0 https://creativecommons.org/licenses/by/3.0, via Wikimedia Commons*