Vinay Mummigatti, the EVP of Strategy and Customer Transformation at Skan and former Chief Automation Officer at LexisNexis, on why a first-party data strategy is more important than having the latest models.
He describes transparency and explainability challenges with AI agent implementation.
Mummigatti says the solution is accountability: keep humans in the loop, with augmentation as the best strategy until explainable AI becomes possible.
Even as enterprises pour resources into agentic AI, most deployments still hit the same wall. Instead of delivering on the promise to transform how work gets done, AI agents are exposing fragile processes, low-quality data, and unanswered questions about liability. Some experts say that without fixing those foundations first, organizations risk creating a new wave of technical debt masquerading as lasting value.
To find out more, we asked Vinay Mummigatti, former Chief Automation Officer at LexisNexis and Enterprise Head of Intelligent Automation at Bank of America, now EVP of Strategy and Customer Transformation at Skan, an AI-powered process intelligence platform. With decades of experience driving large-scale digital transformation, Mummigatti has seen firsthand how promising technologies can fail when disconnected from process fundamentals.
The risk of repeating history: The rush toward agentic AI without a process-first strategy echoes many of the promises made by Robotic Process Automation (RPA), Mummigatti said. Having deployed thousands of bots in his career, he issued a frank warning about the dangers of creating ‘agent sprawl’ on top of already weak foundations. “RPA came with a lot of fanfare. But after 10 years, where are we? We’re in the same place. Now, most of those bots are considered technology debt because they aren’t aligned with process transformation.”
The liability question: Beyond the technical debt, there’s another barrier that AI has yet to overcome: accountability. The “human-in-the-loop” is a non-negotiable fixture for liability and governance today, Mummigatti argued, at least until AI can effectively replicate human judgment. “The reason we still have humans at every step in the decision-making process is because it gives me somebody to pinpoint for accountability,” Mummigatti explained. “But if you bring agents into that tomorrow, what happens when they make mistakes? Who are you going to blame? Who owns the liability for that?”
Meanwhile, as leaders celebrate the rise of task-based agents and copilots, Mummigatti cautioned that these surface-level wins can create a false sense of progress. Success with AI, he continued, is less about having the latest tool and more about how an organization structures its first-party data and matches it with models to unlock value. First-party data, all the information a company collects directly from its customers, is what retains the context of an enterprise, according to Mummigatti. “When organizations move beyond isolated tasks to reimagining end-to-end business processes, that’s when the real transformation takes place.”
The efficiency illusion: To illustrate his point, Mummigatti described three categories of AI agent implementation: task agents, service-based agents, and process agents. “Task- and service-based agents are very discrete, which is why they tend to show higher adoption success in Fortune 100s. But overall, they’re more of an efficiency play than an effectiveness play,” Mummigatti explained. “Sure, you can improve the time for a task, but if you look at the entire process, there’s not really a change in time, no improvement in productivity, and no change in eliminating headcount.” Instead, the real opportunity lies with process agents that can manage workflows from start to finish. “When you go to process agents, you are going to impact 80% of the full-time employees directly. That is the bigger play.”
The path to sustainable AI: For Mummigatti, true resilience mostly comes down to strong governance and high-quality data, with an emphasis on process modeling, explainability, and accountability. The debate over whether AI should take the lead in process excellence or act as an enabler within a human-centric operating model persists, Mummigatti said, but in his view the winning strategy is clearly augmentation. “Companies that can figure out that perfect balance between the amplification of human agency powered by AI will be the winners in the next leap of this technology wave.”
Looking ahead, Mummigatti identified another potential solution to AI’s ‘black box’ problem: neuro-symbolic AI. In this emerging field, visual representations of agentic decision flows and hierarchy help make AI decision-making processes transparent and explainable. Only then, Mummigatti said, “can we know exactly how AI made the decisions. But we are not there yet. Perhaps in three to five years.” In the meantime, he concluded, the gap between consumer hype and enterprise reality remains sizable.