Utility

 

Although up to now we have determined expected payoffs monetarily, this procedure obviously cannot have universal value.  Consider, for example, the simple fact that buying an insurance policy is unlikely to be the best action from a strictly monetary point of view, since insurance premiums are higher than the expected cost of potential claims.  Clearly, the true value of buying insurance is not captured by the expected monetary payoff: insurance also gives something else, namely, peace of mind.  In addition, the worth, or utility, of money depends on how much one has: one hundred dollars have greater utility for a poor man than for Bill Gates.  Moreover, many things, such as the loss of freedom involved in taking a job or the aesthetic pleasure of listening to music, have utilities that can be quantified monetarily only with great difficulty, if at all.  In general, individual preferences can be highly subjective.

Utility theory tries to obtain a numerical value for one’s preferences under the assumption that one behaves consistently in accordance with one’s own tastes.  Such values can then be plugged into decision trees to reach the best decision given one’s information at the time.  Utility theory makes several assumptions, some of them controversial.  However, instead of examining them (a task better left to a course on decision theory), let us consider how the theory proceeds to attribute numerical values to one’s preferences.  The starting point is one familiar from our considerations about betting.  If G is a gamble with two possible outcomes O1 and O2, with utilities u(O1) and u(O2) and with probabilities Pr(O1) and Pr(O2) = 1 - Pr(O1), then the utility of G is the sum of the probability-weighted utilities of the two outcomes:

 

u(G) = Pr(O1) u(O1) + Pr(O2) u(O2).             (1)
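
As a minimal sketch (the function name and arguments are illustrative, not part of the text), equation (1) amounts to a one-line computation in Python:

def expected_utility(p1, u1, u2):
    # Equation (1): u(G) = Pr(O1) u(O1) + Pr(O2) u(O2), with Pr(O2) = 1 - Pr(O1).
    return p1 * u1 + (1 - p1) * u2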

 

In other words, the utility of a bet on two outcomes is the expected utility of the bet.

Suppose now that we want to determine the utilities you attribute to your (possible) outcomes.  First, you need to rank your outcomes in order of preference, O1 being the most preferred and On the least preferred.  (That you can do this at all is a non-trivial assumption.)  For example, you may rank going to the movies highest and going to a football game lowest.  Second, we arbitrarily assign utilities to O1 and On.  (The arbitrariness here is harmless because we are interested in arriving at a numerical ranking, and the unit of measure does not matter; think of thermometers in Fahrenheit and in Celsius both accurately measuring temperature.)  Now we use a procedure to determine the utilities of the intermediate outcomes.  Let us start with, say, Ok.  We set up (hypothetically!) a reference lottery, namely, a bet in which if you win you get O1 and if you lose you get On.  We ask you to determine for which probability pk of winning you would be indifferent between Ok and the reference lottery.  (Here we assume that such a probability exists; this is the continuity assumption.)  Then the utility you place on Ok is equal to that of the reference lottery.  (Here we assume that if you are indifferent between Ok and the reference lottery you value them the same.)  By (1) we then obtain

 

u(Ok) = pk u(O1) + (1 - pk) u(On),               (2)

 

which is what we sought. 
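
As a small companion sketch (again with illustrative names of our own), equation (2) is just another expected-utility computation, applied to the indifference probability pk you report and the two arbitrarily chosen endpoint utilities:

def utility_from_reference_lottery(p_k, u_best, u_worst):
    # Equation (2): u(Ok) = pk u(O1) + (1 - pk) u(On), where pk is the
    # probability at which you are indifferent between Ok and the
    # reference lottery on O1 versus On.
    return p_k * u_best + (1 - p_k) * u_worst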

For example, suppose we want to find the utility you place on pizza.  First, we place arbitrary utilities on going to the movies (say, 100) and on going to the football game (say, 5).  Then we set up (hypothetically) the reference lottery, a bet B such that if you win you go to the movies and if you lose you go to the football game, and we ask you to determine for which probability ppizza of winning you would be indifferent between B and getting a pizza.  Suppose that ppizza = 2/3 is such a probability.  Then the utility you place on getting a pizza is

u(pizza) = (2/3)x100 + (1/3)x5 ≈ 68.3.
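
Using the sketch given after equation (2) with these numbers (hypothetical code that merely reproduces the arithmetic above):

u_pizza = utility_from_reference_lottery(2/3, 100, 5)
print(round(u_pizza, 1))   # 68.3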