Publication

The Allocation of Decision Authority to Human and Artificial Intelligence

The allocation of decision authority by a principal to either a human agent or an artificial intelligence (AI) is examined. The principal trades off an AI's more aligned choice against the need to motivate the human agent to expend effort in learning choice payoffs. When agent effort is desired, it is shown that the principal is more likely to give that agent decision authority, reduce investment in AI reliability, and adopt an AI that may be biased. Organizational design considerations are therefore likely to impact how AIs are trained.

Author(s)
Susan Athey
Kevin Bryan
Joshua Gans
Publication Date
January 2020