Prompt Engineering or Prompt Fraud? Governance Challenges for Audit
Abstract
Generative Artificial Intelligence (GenAI) has rapidly become a transformative tool across business functions, including finance, internal audit, and compliance. However, its adoption introduces novel risks that existing frameworks are not fully equipped to address. This article defines prompt fraud as the intentional manipulation of AI prompts to produce outputs that bypass traditional internal controls and generate misleading or fraudulent artifacts. Unlike conventional fraud, which targets systems or personnel through established attack vectors, prompt fraud exploits linguistic controls at the reasoning layer of GenAI systems. The concept represents a paradigm shift in how fraud can be perpetrated, as it requires no system-level intrusion, no credential compromise, and no technical exploitation of software vulnerabilities. Instead, it leverages the natural-language capabilities of large language models to produce outputs that are persuasive yet false, embedding fabricated information or misleading narratives designed to deceive auditors and decision-makers. This article explores the evolving threat landscape surrounding prompt fraud, provides a structured audit framework for its detection and prevention, assesses the control weaknesses that make organizations vulnerable, and proposes mitigation strategies grounded in governance, technology, and human oversight. The paper further examines the roles of internal and external threat actors, the implications of Shadow AI, and the regulatory and ethical dimensions of AI-assisted fraud. It concludes by recommending that organizations adopt enhanced audit methodologies, robust AI governance structures, and continuous monitoring to address the rapidly evolving risks that GenAI poses in business environments.