Banking is starting to change how it uses artificial intelligence (AI) in everyday decisions. In the past, banks relied on structured models designed to produce consistent, predictable results. Now, there is growing interest in “agent-based” systems that go beyond making predictions — they can also interact, coordinate tasks, and take action. Rather than replacing existing models entirely, this shift is better understood as a new way of building and using systems alongside what already exists.
The Big Shift: From Prediction to Simulation
This shift reflects a broader move from static prediction to more dynamic, ongoing simulation. Traditional financial models — like those used for credit scoring or trading — typically generate one-time outputs based on historical data and fixed inputs. In contrast, agent-based systems operate continuously. Their outputs can be updated, refined, and connected to follow-up actions, creating a tighter link between data, analysis, and execution.
Decision-making is also becoming more distributed. Instead of relying on a single model, intelligence is spread across multiple processes. To manage this shift in a controlled way, it helps to think in terms of different types of agents — assistive, workflow, and autonomous. This allows institutions to adopt agent-based capabilities gradually, without disrupting existing systems. These agent-based systems can simulate scenarios, coordinate tools, and adjust in real time, turning AI from a one-time output into an active participant that supports multi-step decisions and ongoing processes.
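To make the three-tier framing concrete, here is a minimal sketch in Python. The tier names (assistive, workflow, autonomous) come from the text; the oversight mappings are illustrative assumptions, not a prescribed standard:

```python
from enum import Enum

class AgentTier(Enum):
    """Illustrative agent tiers from the taxonomy discussed above."""
    ASSISTIVE = "assistive"    # suggests outputs; a human acts on each one
    WORKFLOW = "workflow"      # executes predefined multi-step processes
    AUTONOMOUS = "autonomous"  # plans and acts within set boundaries

def required_oversight(tier: AgentTier) -> str:
    """Map each tier to the human-oversight level it typically warrants
    (hypothetical mapping for illustration)."""
    return {
        AgentTier.ASSISTIVE: "human acts on every suggestion",
        AgentTier.WORKFLOW: "human approves defined checkpoints",
        AgentTier.AUTONOMOUS: "human reviews exceptions and audit logs",
    }[tier]
```

Framing the tiers this way makes the gradual-adoption point explicit: an institution can start at the assistive tier and move up only where the oversight model supports it.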
A Taxonomy of Agentic Systems in Finance
To move beyond early experimentation and begin deploying AI more effectively, it is helpful to distinguish among the types of agents now emerging. A clear framework enables organizations to determine the appropriate level of autonomy for different tasks and when added complexity is justified.

As these systems become more advanced, they also become more expensive, slower to run, and potentially riskier. That makes it important to be deliberate about when to use them. In our experience, organizations often overuse agent-based systems, assuming they are a natural upgrade to existing AI tools. In reality, they work best when applied selectively to complex, high-value problems — not as a default solution.
A simple way to decide when to use an agent-based system is to evaluate the task against four factors: complexity, value, cost of error, and implementation feasibility.
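As a sketch of how those four factors might be combined, consider the following Python fragment. The 0-to-1 scores, thresholds, and decision rules are hypothetical illustrations, not values prescribed by the framework:

```python
from dataclasses import dataclass

@dataclass
class TaskAssessment:
    """Scores from 0 (low) to 1 (high) for each of the four factors."""
    complexity: float
    value: float
    cost_of_error: float
    feasibility: float

def recommend(task: TaskAssessment) -> str:
    """Hypothetical decision rule: reserve agents for complex, high-value,
    feasible tasks, and tighten oversight as the cost of error rises."""
    if task.feasibility < 0.5:
        return "not yet feasible - defer"
    if task.complexity < 0.5 or task.value < 0.5:
        return "simpler automation suffices"
    if task.cost_of_error > 0.7:
        return "agentic, with mandatory human sign-off"
    return "agentic, autonomous within guardrails"
```

Note the ordering of the checks: feasibility gates everything, and cost of error determines how much autonomy the agent is granted rather than whether an agent is used at all.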

Using these criteria, AI can be applied in layers. Rule-based automation is best for repetitive, low-variation tasks with structured data. Predictive machine learning works well for tasks like classification, anomaly detection, and forecasting based on historical data. Standard generative AI is useful for unstructured tasks such as summarizing or interpreting documents when no further action is required. Agent-based AI should be reserved for situations that involve multi-step decisions, interaction with external tools, and changing conditions.
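The layering described above can be sketched as a simple routing function. The task flags are an illustrative simplification of the criteria in the text, and the routing rules are assumptions for demonstration:

```python
def choose_ai_layer(structured: bool, repetitive: bool,
                    needs_action: bool, multi_step: bool) -> str:
    """Route a task to the simplest adequate AI layer.
    Rules paraphrase the layering above; flags are an illustrative
    simplification, not an exhaustive intake questionnaire."""
    if structured and repetitive:
        return "rule-based automation"       # low-variation, structured work
    if structured and not needs_action:
        return "predictive ML"               # classification, forecasting
    if not structured and not needs_action:
        return "generative AI"               # summarizing, interpreting docs
    if multi_step and needs_action:
        return "agent-based AI"              # tools, changing conditions
    return "generative AI"                   # default to the simpler layer
```

The point of the default branch is the article's own: when in doubt, fall back to a simpler layer rather than reach for agents.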
Governance, Risk, and the New ‘Trust Test’
As agent-based systems become more capable, they also introduce greater risk, making strong governance essential. Without proper oversight, these systems can behave in unexpected ways or drift beyond their intended purpose. Organizations that are seeing the best results address this by building in continuous testing and stress-testing as part of their normal operations, ensuring systems remain aligned, reliable, and within defined limits.
This approach also applies to human oversight. Instead of adding it after deployment, human involvement should be built directly into the system. Agents can operate independently within clear boundaries but should seamlessly hand off to human decision-makers when situations become too complex, uncertain, or risky. This creates a more effective model where efficiency and control work together rather than against each other.
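A minimal sketch of what "built-in" human handoff could look like, assuming the agent can attach a confidence and risk estimate to each proposed action (the thresholds and return shape are hypothetical):

```python
def execute_with_handoff(action: str, confidence: float, risk: float,
                         min_confidence: float = 0.8,
                         max_risk: float = 0.3) -> dict:
    """Sketch of escalation designed into the agent itself: the agent
    acts only when confidence is high AND risk is low; otherwise it
    hands off to a human. Thresholds are illustrative."""
    if confidence >= min_confidence and risk <= max_risk:
        return {"status": "executed", "action": action}
    return {
        "status": "escalated",
        "action": action,
        "reason": "low confidence or elevated risk",
    }
```

Because the escalation path is part of the agent's control flow rather than an after-the-fact review, efficiency and control operate together, as the paragraph above argues.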
Transparency is the foundation of both governance and oversight. Trust depends on visibility, so leading organizations ensure that every action, input, and decision can be tracked and reviewed. This end-to-end transparency creates systems that are not only compliant and accountable, but also reliable in day-to-day use.
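End-to-end traceability of the kind described above is, at its core, an append-only log of every input, action, and decision. A minimal sketch (the record fields and agent names are illustrative assumptions):

```python
import json
import time

class AuditTrail:
    """Minimal sketch of end-to-end traceability: every input, action,
    and decision is appended to a reviewable, exportable log."""

    def __init__(self) -> None:
        self.records: list[dict] = []

    def log(self, agent: str, event: str, detail: dict) -> None:
        """Append one timestamped record; records are never mutated."""
        self.records.append({
            "ts": time.time(),
            "agent": agent,
            "event": event,
            "detail": detail,
        })

    def export(self) -> str:
        """Serialize the full trail for compliance or audit review."""
        return json.dumps(self.records, indent=2)
```

In practice such a trail would live in durable, tamper-evident storage rather than in memory, but the design point stands: if every step is logged as it happens, review and accountability come for free.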
Conclusion
Looking ahead, the competitive advantage in AI is shifting. It is no longer just about having the most advanced models, but about how well organizations integrate AI into their workflows. This includes using tools effectively, designing systems that can learn and adapt, and maintaining control through human oversight.
In this context, the move toward agent-based systems in finance is not about replacing people, but about redefining how people and machines work together. Success depends on designing workflows where both play a role — agents handling complex, data-heavy tasks at scale, while humans remain accountable for decisions and outcomes.
By moving from isolated models to more connected and well-governed workflows, institutions can unlock higher productivity. Those that manage this transition effectively will not only improve efficiency, but also reshape how they approach risk and capital management.
How We Can Help
At Ankura, we help financial institutions design, validate, and document models and decisioning processes in a way that supports strong governance and control. As AI becomes more widely used across financial services, institutions will need help implementing these systems, testing them thoroughly, and maintaining the documentation needed for regulatory and audit compliance. We work with clients to build solutions that are practical, transparent, and defensible as these tools become more common.
© Copyright 2026. The views expressed herein are those of the author(s) and not necessarily the views of Ankura Consulting Group, LLC, its management, its subsidiaries, its affiliates, or its other professionals. Ankura is not a law firm and cannot provide legal advice.
