# Risk and Certainty Equivalence Applet

# Introduction to Risk

Why are some people willing to buy lottery tickets, yet at the same time insure themselves against theft, death, or property damage? The cost of a lottery ticket is substantially more than the average winnings one can expect. If everyone purchased lottery tickets every week over their entire lives, few would come out ahead. On the other hand, the premium we pay for insurance is substantially greater than the average cost of claims. Among everyone who carries car insurance, relatively few will file claims in a given year that exceed the cost of their premiums.

People differ in how much risk they are willing to take, and in what stakes make a risk worth taking. A lottery ticket costs only a dollar and has little impact on our lifestyle, while the potential multi-million dollar payoff would change our lives greatly. Paying a thousand or two thousand in insurance premiums for a house or car is costly, but perhaps worthwhile if it shields us from an unlikely but ruinous lawsuit resulting from our causing injury.

Economists often express one's willingness to take risks through a utility function of money. To understand
the notion of "utility over money," consider the following illustration. A "prize patrol van" pulls up in front of your house. When you open the door,
the representative of the state lottery announces that you have won one million dollars. As you scream in glee, a decibel meter
records the volume (loudness) of your scream. Now imagine the same scenario, only you are told that you have won *two* million dollars.
Do you scream twice as loud? Probably not. Winning a million dollars probably makes you very happy. Winning two million also makes you happy,
but not *twice* as happy as one million dollars.

Consider the following utility function.

The horizontal axis represents the amount of money a person has, and the vertical axis represents "utility" or how much that money is worth to us.
Note that the utility function is convex (loosely, this means that it increases slowly initially, and then faster).
What does this utility function say about risk? It implies that a person is *risk-seeking*, or likely to take
gambles (buy lottery tickets or play blackjack at the casino). To see why, consider a poor college student with a $2 bank
balance at the end of the month, the day before $100 in rent is due. If the student gave up a dollar, his condition would be
little changed. His "utility" of one dollar or of two dollars is almost the same, since neither is enough to pay the rent.
However, $100 is worth substantially more to the student. Therefore, he may be willing to buy a lottery ticket with one or both of
his two dollars, thinking "even if I lose, I am still in trouble with the landlord, but if I win, I am saved!"
The convex utility function represents this - small increases in money above zero wealth have little impact on one's utility,
but larger increases make one substantially better off.

Now consider this utility function:

This utility function is concave - it increases quickly initially and then flattens out. This implies that our first dollars are more valuable to us than
additional sums once we are already rich (which is why we don't scream twice as loud when we win twice as much).
The utility function represents a person who is *risk-averse* or prefers not to take risks.

A third type of utility function is this one:

This utility function simply represents that every dollar is worth as much to us as every other. Such a person
is deemed *risk-neutral*.
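These three shapes can be written down as simple functions. Here is a minimal sketch in Python, using illustrative formulas that are assumptions, not the applet's actual curves (a quadratic for the convex shape, a square root for the concave one - any increasing function with the right curvature would do):

```python
import math

# Illustrative utility functions - the specific formulas are assumptions,
# chosen only because they have the right curvature.
def u_risk_seeking(x):   # convex: utility grows faster as wealth grows
    return x ** 2

def u_risk_averse(x):    # concave: each extra dollar adds less utility
    return math.sqrt(x)

def u_risk_neutral(x):   # linear: every dollar is worth the same
    return x

# The "scream" illustration: under a concave utility, winning $2 million
# is better than winning $1 million, but not twice as good.
print(u_risk_averse(2_000_000) < 2 * u_risk_averse(1_000_000))  # True
```

For the convex function the inequality flips: doubling the prize more than doubles the utility, which is the risk-seeker's situation.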

# Certainty Equivalence

Why the shapes of the utility functions pictured above represent risk-seeking, risk-aversion, and risk-neutrality can be made
clearer by considering a certainty equivalent (CE). A CE is the amount of money that, received for certain, we value exactly
as much as some gamble. It is therefore the maximum amount we are willing to pay for that gamble; alternately, our wealth minus
the CE of a risk is the maximum premium we are willing to pay to insure against that risk.
Imagine that I offer you the following bet: I will flip a coin, and if it lands *heads* you win nothing, but if it lands
*tails* I award you $100. How much would you be willing to pay for this chance? If I set up shop on a street corner and offer
passersby this gamble for $10, it is likely that most would take it (wouldn't you?). As I increase the price to $20,
then $30 and $40, fewer and fewer people would accept. To see why, consider the three utility functions below:

The first is a risk-seeker, the second is risk-averse, and the third is risk-neutral. In each case, the utility of
winning $0 and the utility of winning $100 is denoted on the vertical axis. Since each outcome (*heads* and *tails*) is
equally likely, I am just as likely to get the happiness of $0 as I am of gaining the happiness of $100. This is expressed below:

My *expected utility*, or the happiness I expect to earn from this gamble *on average*, is halfway between my
utility from winning $100 and my utility from winning $0, since each is equally likely. The red line in each figure
represents the utility that this gamble will bring me, on average. The interesting question, however, is not how happy this
gamble makes me, but how much I am willing to pay for it.

Notice that for the first person, the gamble between 0 and 100 dollars is worth about $75. This means that the person would be willing to pay me up to $75 for the right to win $100 based on a coin toss. Certainly, this is a "bad bet," since the average winnings are only $50. However, just as some are willing to make bad bets by playing the lottery or entering a casino, this person likes to take risks.

The second person would not be willing to pay anything over $25. Even though the coin toss, on average, pays $50, the extra risk is not worth it. This can be understood in terms of insurance. Consider a person with a $100,000 net worth. Some accident, which has a 50% chance of happening, could cost the person his entire net worth. Therefore, this individual could end the year with a net worth of either $0 or $100,000. Because ending the year without any money is too great a risk, the person purchases insurance against it at a cost of $75,000. This way, he is certain to end the year with a net worth of $25,000 regardless of whether or not the accident occurs.
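The insurance arithmetic above can be checked with a short script. The square-root utility below is an assumption (it is not necessarily the curve pictured), but it happens to reproduce the $25,000 and $75,000 figures exactly:

```python
import math

# Assumed concave (risk-averse) utility; sqrt is illustrative only.
def u(x):
    return math.sqrt(x)

def u_inv(v):            # inverse of sqrt
    return v ** 2

wealth = 100_000
p_accident = 0.5         # the accident wipes out the entire net worth

expected_utility = p_accident * u(0) + (1 - p_accident) * u(wealth)
certainty_equivalent = u_inv(expected_utility)
max_premium = wealth - certainty_equivalent

print(round(certainty_equivalent))  # 25000
print(round(max_premium))           # 75000
```

The person is indifferent between the risky year and a sure $25,000, so any premium up to $75,000 leaves him no worse off than facing the risk.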

The third person is risk-neutral. Since the *expected value* of the bet is $50 (1/2 chance at 0, 1/2 chance at 100), the person
is willing to pay up to $50 for the bet. This defines risk-neutrality - one is concerned only with the actual expected value.
These numbers ($75, $25, and $50) are called *certainty equivalents*. A certainty equivalent is the amount of money that, received
for certain, I value exactly as much as taking the risk. The more risk-averse a person is, the lower is her certainty equivalent.
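In other words, the certainty equivalent is the inverse utility applied to the expected utility of the gamble. A sketch, reusing the earlier illustrative utility functions (assumptions, not the pictured curves - note the convex example yields roughly $71 rather than the $75 read off the figure, since the exact number depends on how curved the function is):

```python
import math

def certainty_equivalent(u, u_inv, outcomes, probs):
    """CE = inverse utility of the gamble's expected utility."""
    eu = sum(p * u(x) for x, p in zip(outcomes, probs))
    return u_inv(eu)

gamble = ([0, 100], [0.5, 0.5])   # the coin-flip bet from the text

# Risk-seeking (convex, x^2):  CE above the $50 expected value
ce_seek = certainty_equivalent(lambda x: x**2, math.sqrt, *gamble)
# Risk-averse (concave, sqrt): CE below the expected value
ce_avert = certainty_equivalent(math.sqrt, lambda v: v**2, *gamble)
# Risk-neutral (linear):       CE equals the expected value
ce_neutral = certainty_equivalent(lambda x: x, lambda v: v, *gamble)

print(round(ce_seek, 1), round(ce_avert, 1), round(ce_neutral, 1))
# 70.7 25.0 50.0
```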

Now that you are familiar with the basic concepts of risk, you may try them out in a simulation. You will select a level of risk-aversion, and decide on the size and odds of the gamble. The computer will calculate the certainty equivalent for you, allowing you to visualize the effect of risk tolerance.
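What the simulation computes can be sketched in a few lines. The sketch below assumes a power utility u(x) = x^(1−ρ), where ρ between 0 and 1 is the chosen level of risk aversion (ρ = 0 is risk-neutral; larger ρ is more risk-averse); the applet's actual utility family may differ:

```python
def certainty_equivalent(rho, prize, p_win):
    """CE of a gamble paying `prize` with probability p_win (else $0),
    under an assumed power utility u(x) = x**(1 - rho), 0 <= rho < 1."""
    u = lambda x: x ** (1 - rho)
    u_inv = lambda v: v ** (1 / (1 - rho))
    expected_utility = p_win * u(prize) + (1 - p_win) * u(0)
    return u_inv(expected_utility)

# The $0-or-$100 coin flip at two risk-aversion levels:
print(round(certainty_equivalent(0.0, 100, 0.5), 1))  # 50.0 (risk-neutral)
print(round(certainty_equivalent(0.5, 100, 0.5), 1))  # 25.0 (risk-averse)
```

Raising ρ lowers the certainty equivalent, which is exactly the effect of risk tolerance that the applet lets you visualize.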