Greed Versus Dynamism: When Can We Make the Greedy Choice?

Gene Callahan, Tandon School of Engineering, New York University
Robert P. Murphy, Free Market Institute, Texas Tech University
Salim Arfaoui, Department of Computer Science and Mathematics, St. Joseph's College, Brooklyn


Abstract: A point of puzzlement for many students studying algorithms is when, exactly, one can apply a greedy algorithm to a problem, and when one must use the more complicated and time-consuming techniques of dynamic programming. This paper suggests that the existing pedagogical literature does not offer clear guidance on this issue. Furthermore, we suggest how computer science pedagogy might be improved by importing a concept economists use in the context of dynamic programming. That concept is "opportunity cost," and we explain how it can aid students in differentiating "greedy cases" from situations requiring dynamic programming.


Introduction

This paper aims to show that, while the distinction between static and dynamic allocation problems, based on the period in which the opportunity cost of a choice falls, is well understood in economics, the idea is not as clear in computer science. Furthermore, we suggest that the computer science literature, especially the pedagogical literature, could benefit from incorporating this distinction. Here, for instance, is how today's leading algorithms textbook describes when a greedy algorithm may be used:

How can we tell whether a greedy algorithm will solve a particular optimization problem? No way works all the time, but the greedy-choice property and optimal substructure are the two key ingredients. If we can demonstrate that the problem has these properties, then we are well on our way to developing a greedy algorithm for it. (CLRS, p. 424)
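As a concrete instance of these two ingredients, consider the classic activity-selection problem (the example is ours, not drawn from CLRS): given a set of activities with start and finish times, schedule as many mutually compatible activities as possible. A minimal Python sketch:

```python
def select_activities(activities):
    """Greedy activity selection: repeatedly take the compatible activity
    that finishes earliest. The greedy-choice property guarantees that this
    locally best choice belongs to some globally optimal schedule."""
    chosen = []
    last_finish = float("-inf")
    for start, finish in sorted(activities, key=lambda a: a[1]):
        if start >= last_finish:  # compatible with everything chosen so far
            chosen.append((start, finish))
            last_finish = finish
    return chosen

# Three mutually compatible activities can be scheduled out of these five.
print(select_activities([(1, 4), (3, 5), (5, 7), (6, 8), (8, 10)]))
# → [(1, 4), (5, 7), (8, 10)]
```

The point of the example is that, once the earliest-finishing activity is taken, the remaining subproblem has exactly the same form (optimal substructure), so the local choice never needs to be revisited.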

The economists' idea of the opportunity cost of a choice can help clarify this situation. The opportunity cost of a choice is the most highly valued opportunity that was (or will be) foregone in the making of the choice. For instance, we might be on the fence between going to Bangkok or to Paris for our vacation. If, in the end, we choose Bangkok, then the opportunity cost of the choice is the vacation to Paris that we are not taking.

There are choices for which the opportunity cost is entirely contained in the moment of choice. If I am choosing between blueberry and apple pie, the cost I incur in choosing apple may consist solely of forgoing a slice of blueberry: we can imagine that the two choices cost the same amount of money, contain the same number of calories, have the same amount of sugar, and so on.

On the other hand, if I am choosing between a slice of pie and a plate of broccoli, I may want to consider future effects. I may enjoy the pie more at the moment, but later regret its effect on my weight, or on my blood sugar levels. The opportunity cost of choosing the pie now is not just forgoing the broccoli, but also includes the weight gain and blood sugar effects that will only occur in a future "period."

The first example is one in which a greedy algorithm can be used: we simply choose the flavor of pie we prefer, since the future impact of either choice is identical. There is no need to consider other periods. On the other hand, when the opportunity cost spills over into future periods, as in the pie versus broccoli example, we must resort to dynamic programming.
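The coin-change problem offers a compact algorithmic illustration of this distinction (the example is ours, offered as a sketch). With a "canonical" coin system such as {1, 5, 10, 25}, the locally best choice, taking the largest coin that fits, carries no cost into later choices, and greed succeeds. With the system {1, 3, 4}, taking the largest coin constrains future choices for the worse, and we must fall back on dynamic programming:

```python
def greedy_change(coins, amount):
    """Greedy: always take the largest coin that fits (the locally best choice)."""
    count = 0
    for c in sorted(coins, reverse=True):
        count += amount // c
        amount %= c
    return count if amount == 0 else None

def dp_change(coins, amount):
    """Dynamic programming: account for how each choice constrains all later ones."""
    INF = float("inf")
    best = [0] + [INF] * amount  # best[a] = fewest coins summing to a
    for a in range(1, amount + 1):
        for c in coins:
            if c <= a and best[a - c] + 1 < best[a]:
                best[a] = best[a - c] + 1
    return best[amount] if best[amount] < INF else None

# Canonical system: the greedy choice is safe; both methods use 6 coins.
print(greedy_change([1, 5, 10, 25], 63))  # → 6  (25+25+10+1+1+1)
print(dp_change([1, 5, 10, 25], 63))      # → 6

# Non-canonical system: greedy takes 4+1+1 (3 coins), but DP finds 3+3 (2 coins).
# The opportunity cost of taking the 4 falls in a later "period" of choice.
print(greedy_change([1, 3, 4], 6))  # → 3
print(dp_change([1, 3, 4], 6))      # → 2
```

In our terms, the canonical system is the apple-versus-blueberry case, while the non-canonical system is the pie-versus-broccoli case: the cost of the greedy choice spills into future choices.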

The Treatment of This Topic in the Economic Literature

Ferguson and Lim (2007) capture this idea in their distinction between static and dynamic consumption problems:

In a static consumption problem the opportunity cost of spending an amount of money to buy a unit of one commodity (and deriving the marginal utility associated with consuming one more unit of that commodity) is the largest extra utility we would have derived from spending the money on some other commodity. In a dynamic problem the opportunity cost of spending today is the largest extra lifetime utility we could have derived from saving the money and spending it at some point in the future.

In their terms, when we have a static consumption problem (i.e., the opportunity cost is confined to the current period), we can use a greedy algorithm. When the opportunity cost must be considered over future periods, we must resort to a more complicated technique, such as dynamic programming.
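Ferguson and Lim's dynamic case can be made concrete with a toy two-period saving problem (our illustration; the square-root utility function and the discount factor of 0.95 are assumptions chosen for simplicity, not taken from their text). The myopic, greedy consumer spends everything today; the dynamic consumer weighs today's utility against the lifetime utility foregone by not saving:

```python
import math

def myopic_spend(wealth):
    """Greedy/static view: only today's utility counts, so spend everything now."""
    return wealth

def dp_spend(wealth, beta=0.95):
    """Dynamic view: the opportunity cost of spending today includes the
    discounted utility of spending tomorrow. Utility u(c) = sqrt(c) and a
    two-period horizon are illustrative assumptions."""
    def lifetime(c):
        return math.sqrt(c) + beta * math.sqrt(wealth - c)
    return max(range(wealth + 1), key=lifetime)

print(myopic_spend(100))  # → 100: all wealth consumed in the first period
print(dp_spend(100))      # strictly less than 100: some wealth is saved
```

The greedy plan is optimal only if tomorrow's utility is worthless (beta = 0); as soon as the opportunity cost extends into the future period, the spending decision must be solved dynamically.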

The Treatment of This Topic in the Computer Science Literature

The concept of opportunity cost is not unknown in the computer science literature. For instance, Amir et al. (1998) and Borgstrom (2000) employ it to analyze job scheduling problems in a metacomputer, and Pillac et al. (2011) employ the concept in analyzing vehicle routing problems. Nevertheless, we have not been able to find an instance in the computer science literature of opportunity cost being used to clarify when a greedy algorithm may be used and when we must turn to dynamic programming.

Lew comes close to stating the economists' distinction, but with less clarity:

Informally, a "greedy" optimization algorithm solves a global minimization or maximization problem by making a sequence of locally optimal decisions. A decision is "local" or "myopic" if it is based on partial information that does not include global knowledge about future consequences of current decisions (Heyman and Sobel, 1984); thus, the current decision, once made, might turn out to be nonoptimal. (Lew 2006, p. 621)

Lew later states:

While we lack a general procedure to determine whether or not a canonical greedy algorithm is optimal, no general procedure exists for noncanonical greedy algorithms either.

Conclusion

Other material