AAAI Publications, Workshops at the Thirtieth AAAI Conference on Artificial Intelligence

Quantilizers: A Safer Alternative to Maximizers for Limited Optimization
Jessica Taylor

Abstract

In the field of AI, expected utility maximizers are commonly used as a model for idealized agents. However, expected utility maximization can lead to unintended solutions when the utility function does not quantify everything the operators care about: imagine, for example, an expected utility maximizer tasked with winning money on the stock market, which has no regard for whether it accidentally causes a market crash. Once AI systems become sufficiently intelligent and powerful, these unintended solutions could become quite dangerous. In this paper, we describe an alternative to expected utility maximization for powerful AI systems, which we call expected utility quantilization. This could allow the construction of AI systems that do not necessarily fall into strange and unanticipated shortcuts and edge cases in pursuit of their goals.
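For intuition, the quantilization scheme named in the abstract can be sketched roughly as follows: given a base distribution over actions and a quantile parameter q, a q-quantilizer samples an action at random from the top q fraction of that distribution, ranked by expected utility, rather than always selecting the single highest-utility action. The Python sketch below is a minimal, illustrative empirical version of this idea; the names (quantilize, base_samples, expected_utility, q) are illustrative choices, not taken from the paper.

```python
import random

def quantilize(base_samples, expected_utility, q=0.1):
    """Illustrative sketch of empirical q-quantilization.

    base_samples     -- candidate actions drawn i.i.d. from some base
                        distribution over actions
    expected_utility -- function mapping an action to its estimated
                        expected utility
    q                -- quantile parameter in (0, 1]; q = 1 amounts to
                        sampling directly from the base distribution,
                        while q -> 0 approaches pure maximization
    """
    # Rank candidates by expected utility, best first.
    ranked = sorted(base_samples, key=expected_utility, reverse=True)
    # Keep the top q fraction (at least one action) and pick uniformly.
    cutoff = max(1, int(q * len(ranked)))
    return random.choice(ranked[:cutoff])

# Toy usage: actions are bid sizes, utility is a made-up payoff estimate.
if __name__ == "__main__":
    actions = [random.uniform(0, 100) for _ in range(1000)]
    utility = lambda a: -abs(a - 70)  # toy payoff peaking near a bid of 70
    print(quantilize(actions, utility, q=0.05))
```

With q = 0.05 this returns one of the top 5% of sampled bids rather than the single best one, which conveys how quantilization trades a small amount of expected utility for behavior closer to the base distribution.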

Keywords

AI, expected utility, utility function, optimization
