Monte, Daniel and Said, Maher (2013): The Value of (Bounded) Memory in a Changing World.
Abstract
This paper explores the value of memory in decision making in dynamic environments. We examine the decision problem faced by an agent with bounded memory who receives a sequence of signals from a partially observable Markov decision process. We characterize environments in which the optimal memory consists of only two states. In addition, we show that the marginal value of additional memory states need not be positive, and may even be negative in the absence of free disposal.
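The setting in the abstract can be illustrated with a minimal simulation. This is a hedged sketch under assumed parameters, not the paper's actual model: an agent with an n-state "ladder" memory tracks a two-state hidden Markov chain (the "changing world") from noisy binary signals, stepping its memory one unit toward each signal and guessing the state from which half of the ladder it occupies. The names and values (`switch_prob`, `signal_accuracy`, the deterministic ladder update) are illustrative choices, not taken from the paper.

```python
import random

def simulate(n_memory_states=2, switch_prob=0.1, signal_accuracy=0.8,
             periods=100_000, seed=0):
    """Fraction of periods in which a ladder-memory agent guesses the
    hidden state correctly. Memory states are 0..n-1; the lower half of
    the ladder guesses state 0, the upper half guesses state 1."""
    rng = random.Random(seed)
    state = 0    # hidden environment state in {0, 1}
    memory = 0   # agent's memory state in {0, ..., n_memory_states - 1}
    correct = 0
    for _ in range(periods):
        # The hidden state switches with probability switch_prob each period.
        if rng.random() < switch_prob:
            state = 1 - state
        # Noisy signal: matches the true state with probability signal_accuracy.
        signal = state if rng.random() < signal_accuracy else 1 - state
        # Deterministic memory update: step one unit toward the signal.
        if signal == 1:
            memory = min(memory + 1, n_memory_states - 1)
        else:
            memory = max(memory - 1, 0)
        # Guess from the current memory state.
        guess = 1 if memory >= n_memory_states / 2 else 0
        correct += (guess == state)
    return correct / periods

for n in (2, 4, 8):
    print(f"{n} memory states: accuracy {simulate(n_memory_states=n):.3f}")
```

With two memory states this agent simply follows the most recent signal; larger ladders react more sluggishly after the world switches, which gives one intuition (in this toy setup) for why additional memory states need not raise the agent's payoff in a rapidly changing environment.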
Item Type: | MPRA Paper |
---|---|
Original Title: | The Value of (Bounded) Memory in a Changing World |
Language: | English |
Keywords: | Bounded memory; Dynamic decision making; Partially observable Markov decision process |
Subjects: | C - Mathematical and Quantitative Methods > C6 - Mathematical Methods; Programming Models; Mathematical and Simulation Modeling > C61 - Optimization Techniques; Programming Models; Dynamic Analysis |
 | D - Microeconomics > D8 - Information, Knowledge, and Uncertainty > D81 - Criteria for Decision-Making under Risk and Uncertainty |
 | D - Microeconomics > D8 - Information, Knowledge, and Uncertainty > D83 - Search; Learning; Information and Knowledge; Communication; Belief; Unawareness |
Item ID: | 47595 |
Depositing User: | Maher Said |
Date Deposited: | 15 Jun 2013 09:26 |
Last Modified: | 29 Sep 2019 20:57 |
URI: | https://mpra.ub.uni-muenchen.de/id/eprint/47595 |
Available Versions of this Item
- Learning in hidden Markov models with bounded memory. (deposited 13 Jul 2010 12:31)
  - The Value of (Bounded) Memory in a Changing World. (deposited 15 Jun 2013 09:26) [Currently Displayed]