W. F. Lawless and James M. Grayson
The lack of first principles linking organizational theory to empirical evidence puts the future of autonomous multi-agent system (MAS) missions, and their interactions with humans, at risk. This issue can be characterized by the significant trade-offs that arise, as N increases, between the costs and the computational power of interactions among agents and humans. In contrast to the extremes of command-driven or consensus-driven decision-making as means of managing these trade-offs, quantizing the pro and con positions in decision-making may produce a robust model of interaction that better integrates social theory with experiment and increases computational power with N. We have found that optimal solutions to ill-defined problems (idp's) occur when incommensurable beliefs, interacting before neutral decision makers, generate sufficient emotion to process information, I, but not enough to impair the interaction, unexpectedly producing more trust than under the game-theoretic model of cooperation. We have extended our model to a mathematical theory of organizations, especially mergers, and we introduce random exploration into our model with the goal of revising rational theory to achieve autonomy in an MAS.