Decision Utility Function
A Decision Utility Function is a Utility Function that can be used in a decision-making task.
- …
- Example(s):
- Counter-Example(s):
- See: Game Theory.
References
2004
- Paul A. Jensen. https://www.me.utexas.edu/~jensen/ORMM/computation/unit/decision/utility.html
- QUOTE: The decision analysis solution procedures compute the expected value of the decision returns at each chance node. The optimum solution selects decisions that maximize (or minimize) the expected returns. In the presence of uncertain outcomes, it may not be true that a decision-maker has the goal of maximizing the expected value of the cash return. For example, consider a game that offers a 50% chance of winning $1,000 and a 50% chance of winning $10,000. This is a sure-win situation, but the question is how much a player should pay to play the game. One answer would be that the opportunity is worth the expected value of the cash returns: :[math]\displaystyle{ E(\text{Return}) = 0.5 \times 1{,}000 + 0.5 \times 10{,}000 = \$5{,}500 }[/math] If one could play this game over and over again, the average gain per play would be the expected value. If the game is played only once, however, there are only two possibilities: the player will either lose $4,500 or gain $4,500 relative to that price. For many, the pain of losing that much cash would be more than the pleasure of gaining the same amount. A player concerned about the risk of loss would act to avert it, and might only be willing to pay an amount smaller than $5,500.
One way to model different attitudes toward risk is to use a utility function. Let x be the payoff associated with a game, and let U(x) be the utility of x. A utility function that models aversion to risk is the exponential utility function. We let a be the minimum payoff for some situation and b be the maximum payoff. The exponential utility function is :[math]\displaystyle{ U(x) = \frac{1 - e^{-(x - a)/R}}{1 - e^{-(b - a)/R}} }[/math]
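The formulas in the quote can be sketched in code: computing the game's expected value, the exponential utility U(x), and the certainty equivalent (the sure cash amount whose utility equals the gamble's expected utility). The choice R = 5,000 is an illustrative assumption, not a value from the source; R is the parameter in the quoted formula, with smaller values producing stronger risk aversion.

```python
import math

def exponential_utility(x, a, b, R):
    """Exponential utility from the quote, scaled so U(a) = 0 and U(b) = 1."""
    return (1 - math.exp(-(x - a) / R)) / (1 - math.exp(-(b - a) / R))

def certainty_equivalent(outcomes, probs, a, b, R):
    """Cash amount whose utility equals the gamble's expected utility."""
    eu = sum(p * exponential_utility(x, a, b, R)
             for x, p in zip(outcomes, probs))
    # Invert U(x) = eu for x (algebraic inverse of the formula above).
    return a - R * math.log(1 - eu * (1 - math.exp(-(b - a) / R)))

# The 50/50 game from the quote: win $1,000 or $10,000.
outcomes, probs = [1_000, 10_000], [0.5, 0.5]
a, b = min(outcomes), max(outcomes)

ev = sum(p * x for x, p in zip(outcomes, probs))            # $5,500
ce = certainty_equivalent(outcomes, probs, a, b, R=5_000)   # R is assumed

print(f"Expected value:       ${ev:,.0f}")
print(f"Certainty equivalent: ${ce:,.2f}")
```

For a risk-averse player the certainty equivalent comes out below the $5,500 expected value, matching the quote's point that such a player would pay less than the expected value to enter the game.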