The Allocation of Decision Authority to Human and Artificial Intelligence

Jan 2020
Working Paper
20-006
By Susan C. Athey, Kevin A. Bryan, Joshua S. Gans

We examine a principal's allocation of decision authority to either a human agent or an artificial intelligence (AI). The principal trades off the AI's more aligned choices against the need to motivate the human agent to expend effort in learning choice payoffs. We show that when agent effort is desired, the principal is more likely to give that agent decision authority, to reduce investment in AI reliability, and to adopt an AI that may be biased. Organizational design considerations are therefore likely to affect how AIs are trained.