AI Runs Payments. Governance Decides What Happens Next

The increasing integration of artificial intelligence (AI) within the payments industry demands a robust framework of governance and explainability to foster trust and ensure durable, scalable outcomes. That is the central insight from Annie Drew, Chief Risk and Compliance Officer at WEX, in a new PYMNTS eBook titled "AI Runs Payments. Governance Decides What Happens Next." The publication, released on April 16, 2026, underscores a critical shift in the industry’s focus from merely deploying AI to managing it responsibly.
The payments landscape is evolving rapidly, with AI becoming indispensable for detecting fraud, accelerating decision-making, and managing ever-growing volumes of transactions and data. As AI’s footprint in financial systems expands, however, the industry’s central question is shifting from "how to deploy AI" to "how to govern it effectively." That governance must safeguard public trust, support sustainable growth, and enable more confident decision-making across the sector.
The Evolving Role of AI in Payments
AI’s influence extends to fundamental aspects of financial operations, including access to financial services, the accuracy of fraud detection, and the confidence consumers place in payment systems. Consequently, governance cannot be an afterthought; it must be built into AI system design from the start. Much as the General Data Protection Regulation (GDPR) reshaped data privacy, risk and compliance professionals are now recognized as essential architects of the foundational frameworks for AI.
A significant hurdle often encountered is the transition of AI systems from experimental phases to live operational environments. While many organizations possess strong protocols for AI model development and testing, these controls can become less stringent when systems begin interacting with real-time transactions and external data streams. AI models do not function in isolation; they are embedded within dynamic ecosystems. For these systems to perform reliably and at scale, governance must be inherently designed for this fluidity, rather than being applied retrospectively.
WEX’s Perspective: Balancing Innovation and Oversight
For companies like WEX, which operate at the nexus of complex transaction flows connecting businesses, suppliers, and financial institutions, robust AI governance is paramount. AI can strengthen these intricate systems by enabling faster anomaly detection, bolstering payment security, and supporting more informed strategic decisions. The value of those enhancements, however, depends on the strength and reliability of the underlying governance structure. Drew emphasizes that confidence in AI rests on strong governance paired with explainable systems that deliver transparent outcomes and earn user trust.
The challenge of balancing the imperative for speed in the payments sector with the necessity of diligent oversight is a core concern. Payment companies thrive on efficiency, and businesses demand secure and swift transactions. While AI can significantly contribute to achieving these objectives, the deployment of AI capabilities without clear governance protocols introduces considerable risk and can impede long-term scalability. The solution lies not in stifling innovation but in developing governance processes that can keep pace with technological advancements. This includes establishing clear lines of accountability for AI models, fostering close collaboration among product development, design, risk management, compliance, and technology teams, and implementing continuous monitoring mechanisms for systems once they are operational.
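The eBook frames continuous monitoring as a governance requirement rather than prescribing a mechanism, but one widely used post-deployment check is distribution drift detection. The sketch below computes the population stability index (PSI), a common drift metric, for a single model input. The data, bin count, and 0.25 alert threshold are illustrative conventions of the author's choosing, not anything specified by WEX or PYMNTS.

```python
# A minimal PSI drift check, assuming only NumPy is available.
import numpy as np

def population_stability_index(expected, observed, bins=10):
    """Compare a live feature distribution against its training baseline."""
    edges = np.histogram_bin_edges(expected, bins=bins)
    exp_pct = np.histogram(expected, bins=edges)[0] / len(expected)
    obs_pct = np.histogram(observed, bins=edges)[0] / len(observed)
    # Clip to avoid log(0) in empty bins; live values outside the
    # baseline range are simply ignored in this simplified version.
    exp_pct = np.clip(exp_pct, 1e-6, None)
    obs_pct = np.clip(obs_pct, 1e-6, None)
    return float(np.sum((obs_pct - exp_pct) * np.log(obs_pct / exp_pct)))

rng = np.random.default_rng(1)
baseline = rng.normal(loc=0.0, size=10_000)  # distribution seen at training time
live = rng.normal(loc=0.6, size=10_000)      # drifted production traffic

psi = population_stability_index(baseline, live)
# Rule of thumb: PSI above 0.25 is often treated as material drift.
print(f"PSI = {psi:.3f} -> {'ALERT' if psi > 0.25 else 'ok'}")
```

In practice, a check like this would run on a schedule for every model input, with alerts routed into the accountability structure described above.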
Building Trust Through Collaborative Governance
The intricate nature and vast scale of AI applications in payments underscore why these systems cannot be solely managed by engineering or data science departments. Decisions made regarding AI models have far-reaching implications, impacting regulatory compliance, fraud prevention strategies, customer loyalty, and overall business performance. Integrating business leaders, product and design teams, risk management partners, and technologists into a unified governance framework from the outset is crucial for identifying and mitigating potential blind spots that could arise later in the development or deployment lifecycle.
The future trajectory of artificial intelligence in the payments industry is not one of complete automation. Instead, it is envisioned as a collaboration between AI and human decision-makers that produces more intelligent and effective outcomes, with humans retaining ultimate accountability for the most critical decisions. As AI becomes increasingly central to the operational fabric of the payments sector, the organizations that achieve sustained success will be those that treat governance not as an impediment to progress but as the bedrock of responsible innovation, efficient execution, and enduring trust.

The Growing AI Market and Its Implications
The global AI market in financial services has experienced exponential growth. According to a recent market analysis by Statista, the market for AI in financial services was valued at approximately $8.3 billion in 2023 and is projected to exceed $46.5 billion by 2028, a compound annual growth rate (CAGR) of over 30%. This rapid expansion highlights the immense potential and widespread adoption of AI technologies across financial functions, including payments.
This market trajectory is supported by increasing investment from venture capital firms and established financial institutions. In 2025, for instance, major tech companies and financial consortia announced several multibillion-dollar funding rounds for AI startups specializing in fraud detection and payment-processing optimization. These investments reflect strong market belief in AI’s transformative capabilities.
However, this growth also amplifies the need for standardized governance frameworks. Regulatory bodies worldwide are increasingly scrutinizing the use of AI in finance. The European Union’s AI Act, which came into effect in stages starting in 2024, categorizes AI systems based on risk, with high-risk AI applications, including those used in credit scoring and critical financial infrastructure, facing stringent requirements. Similarly, the U.S. Securities and Exchange Commission (SEC) has issued guidance on AI and machine learning in investment management, emphasizing the importance of robust risk management and disclosure.
Addressing the Explainability Challenge
A significant component of effective AI governance is explainability, often referred to as "Explainable AI" (XAI). In the context of payments, understanding why an AI system made a particular decision is crucial. For example, if an AI system flags a transaction as fraudulent, financial institutions need to be able to explain the reasoning behind that decision to customers, regulators, and internal audit teams. This is particularly important in dispute resolution and compliance audits.
The challenge of explainability is multifaceted. Many advanced AI models, such as deep neural networks, operate as "black boxes," making the decision-making process difficult to trace. XAI research aims to make these models more interpretable through techniques such as LIME (Local Interpretable Model-agnostic Explanations) and SHAP (SHapley Additive exPlanations), which quantify how much each input feature contributed to a model’s prediction.
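As a concrete illustration of the SHAP approach described above, the sketch below trains a small tree-based classifier on synthetic data and breaks one "transaction" score into per-feature contributions. The feature names and data are hypothetical stand-ins, not drawn from the eBook or any real fraud model, and the example assumes the open-source shap and scikit-learn packages are installed.

```python
# A minimal sketch: attributing one flagged transaction with SHAP.
import numpy as np
import shap
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(0)
feature_names = ["amount", "hour_of_day", "merchant_risk", "velocity_1h"]

# Synthetic data standing in for historical transactions (illustrative only).
X = rng.normal(size=(1000, 4))
y = (X[:, 0] + 2 * X[:, 2] + rng.normal(scale=0.5, size=1000) > 1).astype(int)

model = GradientBoostingClassifier().fit(X, y)

# TreeExplainer computes exact SHAP values for tree ensembles.
explainer = shap.TreeExplainer(model)
transaction = X[:1]                         # one transaction to explain
shap_values = explainer.shap_values(transaction)

# Per-feature contributions to this prediction (log-odds scale):
# positive values pushed the score toward "fraud", negative away from it.
for name, contribution in zip(feature_names, shap_values[0]):
    print(f"{name:>15}: {contribution:+.3f}")
```

An attribution like this is what would back the customer-facing and audit-facing explanations discussed below: instead of "the model declined it," an institution can point to the specific inputs that drove the decision.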
For WEX and other payment processors, the ability to provide clear explanations for AI-driven decisions is not just a matter of compliance but also of customer service. When a payment is declined or a transaction is flagged, a transparent explanation can mitigate customer frustration and build confidence. The PYMNTS eBook suggests that explainability should be a core design principle, not an add-on feature, enabling stakeholders to understand the logic behind AI outputs and to identify potential biases or errors.
The Chronology of AI Integration in Payments
The journey of AI in the payments industry has been gradual but accelerating.
- Early 2010s: Initial forays into machine learning for fraud detection and basic analytics. These were largely rule-based systems augmented with statistical models.
- Mid-2010s: Increased adoption of more sophisticated machine learning algorithms for pattern recognition and predictive modeling in areas like credit risk assessment and transaction monitoring.
- Late 2010s: The rise of deep learning, enabling more complex pattern identification and a significant leap in fraud detection accuracy. Early discussions about the ethical implications and need for governance began to emerge.
- Early 2020s: AI became a mainstream tool across the payments value chain, powering real-time risk assessments, personalized customer experiences, and automated customer service. Regulatory bodies started to pay closer attention, leading to preliminary guidance and frameworks.
- Mid-2020s (Current Phase): The focus shifts decisively towards AI governance, risk management, and explainability. Major publications like the PYMNTS eBook reflect this industry-wide realization that responsible deployment is key to long-term success. The development of comprehensive regulatory frameworks for AI in finance is a prominent trend.
Broader Implications and Future Outlook
The insights from Annie Drew and the PYMNTS eBook carry significant implications for the entire financial ecosystem.
- Enhanced Customer Trust: By prioritizing governance and explainability, financial institutions can build stronger relationships with their customers, fostering confidence in the security and fairness of their payment systems.
- Regulatory Compliance: Proactive governance strategies will be essential for navigating the increasingly complex regulatory landscape surrounding AI. Companies that demonstrate strong oversight are likely to face fewer compliance hurdles.
- Scalable Innovation: Robust governance frameworks enable companies to deploy AI solutions at scale without compromising on risk management or ethical considerations. This allows for continuous innovation and adaptation to evolving market demands.
- Talent Development: The emphasis on governance and explainability will necessitate the development of new skill sets within financial institutions, bridging the gap between technical AI expertise and risk management acumen.
The future of AI in payments is inextricably linked to the ability of organizations to manage it responsibly. As AI systems become more sophisticated and more deeply embedded in financial operations, the principles of strong governance, transparency, and accountability will serve as the guiding lights for a trustworthy and efficient payments future. The transition from AI deployment to AI governance is not merely a trend; it is a fundamental necessity for the sustainable growth and integrity of the global payments industry.