Attentional Governance in Human-AI Decision Systems: A Technical Architecture for Judgment Integrity


Sonali Galhotra, Bhupendra Chaudhary

Abstract

The adoption of artificial intelligence systems has transformed organizational decision-making, creating complex ecosystems in which algorithmic outputs generate the informational landscapes within which human judgment operates. Rather than simply replacing human decision-makers, contemporary AI architectures fundamentally alter the attentional conditions under which judgment is exercised. Traditional governance frameworks, centered on the allocation of authority and on accountability mechanisms, have not treated attention as a determinant of decision integrity. This article proposes attentional governance as a socio-technical concept, arguing that effective governance is an architectural design problem concerned with the cognitive conditions of engagement rather than an exercise in procedural compliance. The framework identifies four modes of attentional failure as systematic vulnerabilities in human-AI decision systems: automation bias, information overload, attentional erosion, and signal convergence. It proposes five structural mechanisms to counteract these vulnerabilities: interpretive checkpoints that require decision-makers to articulate explicit reasons; attention pacing that enforces deliberation intervals; escalation triggers that route complex decisions into extended deliberation; override rights that permit departures from algorithmic recommendations; and traceability mechanisms that document how decisions were composed. Implementation demands leadership capabilities that synthesize organizational, technical, and cognitive knowledge. Organizations must recognize that nominal human authority without attentional capacity creates an illusion of governance in which decision-makers merely ratify system outputs.
The framework offers technology leaders feasible design considerations for preserving judgment integrity at the organizational level and for managing hybrid human-AI teams operating in unpredictable, complex settings.
