Q&A: Getting AI right in supply chain: the route to measurable ROI
May 11, 2026 • 7 min
We spoke with Rohit Tripathi, VP of Industry Strategy for Manufacturing at RELEX, to reframe how the industry should think about AI — starting with the problem, not the technology. Here, he explains why supply chain leaders aren’t asking about AI, what most organizations skip on the path from ambition to execution, and why the decisions leaders make about people and process matter more than any choice of AI model.
Listen to the full conversation:
Q: What’s your perspective on the two very different conversations happening in supply chains and retail about AI: true believers on one side, skeptics on the other? Or is that the wrong question?
A: (Rohit) I’d say that’s the wrong question to ask. Honestly, supply chain leaders are not sitting in boardrooms asking, “Should I use AI?” They’re dealing with real-world challenges: raw material disruptions, margin pressures, and trade volatility. Research from RELEX found that 86% of supply chain leaders are directly impacted by unstable trade policy, while nearly 57% are dealing with raw material disruptions. These are the problems keeping leaders awake at night.
I recently ran one of our customer advisory board sessions and asked the room to name their top supply chain priority. Not one participant mentioned AI. What they did bring up was forecast accuracy, inventory levels, and service performance. These are the same priorities they’ve had over the past five to ten years. But with specialized AI, like machine-learning-powered forecasting, they can tackle these priorities much more quickly. Rather than taking a year, they can be addressed within 12 weeks.
The organizations that have gained substantial value from AI started with a problem before moving to the technology. One RELEX customer, for instance, didn’t start with “we need AI.” They defined a clear, specific problem that needed solving: “our forecast accuracy across stores is not precise enough, and it’s costing us sales every day.” They then identified a solution – AI-driven demand forecasting from RELEX – that could help them overcome this problem. Soon after deployment, they saw forecast accuracy improve by nearly 8.5 percentage points, which drove higher availability and freed working capital.
So, you see, technology was the means. But the business problem was the right starting point.
LISTEN: Cutting through the AI hype: How execs separate signal from the noise
Q: What separates the 15% of organizations getting real value from AI from the 85% that stall in deployment?
A: (Rohit) There are three things.
First, the 15% of organizations getting value from AI pick a specific, measurable problem with clear KPIs. For example, they might start with a defined goal: improve forecast accuracy for their top 500 SKUs from X to Y percentage points. That level of specificity forces accountability and makes progress visible to the entire organization.
Second, rather than hand ownership of objectives to departments, they assign them to individual team members. An actual person is responsible for each objective, someone who owns its execution and actively follows through. When accountability goes unnamed, transformation projects rarely reach the finish line: decisions are blocked, tasks slip, problems linger, and momentum grinds to a halt.
Finally, the successful 15% avoid endless trialing. Instead, they spend their time putting things in motion and deploying. They move deliberately, in stages, and ensure results always come from live production environments.
I would stress that organizations don’t usually stall for lack of ambition. Where they fall short is in finding a core business problem to solve, a way to track progress, and named owners for the outcomes.
Q: What is the “missing middle,” and why does ignoring it cause so many AI initiatives to fail?
A: (Rohit) The “missing middle” is the layer between ambition and execution. Here’s a pattern I see repeatedly. A company decides it needs agentic AI, autonomous planning, or GenAI assistance. But it tries to jump straight from spreadsheets to those capabilities, with nothing solid in between.
The post-mortem almost always tells the same story: sophisticated capabilities like agents, deployed without the fundamentals to support them. AI working from an inaccurate forecast has nothing useful to offer. But when AI can access forecasts powered by machine learning, trained on the company’s data, and governed by their business rules, now that’s transformative.
That middle layer, the specialized AI, is what makes the difference. If you skip it, you’ll miss the opportunity. The good news is that you don’t need perfect data to implement the middle layer, just a minimum viable data foundation.
MAAG Food is a strong example of a business implementing a well-governed, reliable middle layer. They deployed ML-driven demand planning from RELEX into their dairy business and, after proving the impact, extended it into their meat business. 96% of their forecasts are now fully touchless, meaning 96% of AI-generated forecasts go straight into execution without manual intervention. That level of trust doesn’t come from using an AI agent; it comes from the middle layer doing its job proficiently every single day.
READ MORE: Touchless planning: A manufacturer’s guide to AI-driven supply chain excellence
Q: You describe “tribal knowledge” as decisions that happen outside of planning systems. How significant a problem is it for AI deployment, and what can organizations do about it?
A: (Rohit) In my view, this is the most underappreciated risk in supply chain planning. The connections between planning, operational execution, and financial outcomes are rarely codified. They’re trapped inside people’s heads.
Take your senior demand planner. They know that one particular SKU always spikes in the third week of March because a regional distributor runs a promotion at that time every year, a promotion your system doesn’t capture. That’s tribal knowledge. And it works fine until that planner retires, changes roles, or goes on vacation. When the spike arrives again, everyone is left scrambling to figure out why it happened.
What’s more sobering: multiply this issue across hundreds of planners, thousands of SKUs, and dozens of markets, and it can quickly snowball into a crisis. There’s an enormous amount of decision context locked away in spreadsheets, emails, unrecorded conversations, and personal memory. Businesses must capture this knowledge, because AI cannot act on context it cannot see.
The organizations that resolve this don’t do it through a massive data infrastructure project. They incorporate systems that let them surface exceptions and create rules around real constraints. Every time a planner overrides a forecast, the system captures the reason. Every time a business rule is created, the system documents the constraint behind it. Over time, a codified record of decision context is established. That is essentially a governance and documentation discipline; technology must follow that discipline, not the other way around.
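To make the idea concrete, here is a minimal sketch of what capturing override context might look like as data. The `ForecastOverride` and `OverrideLog` names and fields are illustrative assumptions for this article, not a RELEX API:

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class ForecastOverride:
    """One planner adjustment, recorded with the reasoning behind it."""
    sku: str
    planner: str
    original_units: int
    adjusted_units: int
    reason: str   # the decision context that would otherwise stay tribal
    when: date

@dataclass
class OverrideLog:
    """Accumulates overrides so decision context becomes queryable data."""
    entries: list = field(default_factory=list)

    def record(self, override: ForecastOverride) -> None:
        # Governance rule: no override without a documented reason.
        if not override.reason.strip():
            raise ValueError("An override must carry a documented reason")
        self.entries.append(override)

    def reasons_for(self, sku: str) -> list:
        """Surface the codified context for a given SKU."""
        return [e.reason for e in self.entries if e.sku == sku]

log = OverrideLog()
log.record(ForecastOverride(
    sku="SKU-0042", planner="A. Chen",
    original_units=1200, adjusted_units=1900,
    reason="Regional distributor promotion, third week of March",
    when=date(2026, 3, 16),
))
print(log.reasons_for("SKU-0042"))
# → ['Regional distributor promotion, third week of March']
```

The point of the sketch is the discipline, not the code: the log refuses an override with no stated reason, so the context behind each decision is captured at the moment it is made rather than reconstructed later.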
Q: What does AI transformation require from leadership?
A: (Rohit) The leadership ask is very straightforward: treat AI transformation as much an organizational change program as a technology project. Roughly 70% or more of the value of AI comes from the investments leaders make in their people and processes; only 10% to 20% comes from the AI models themselves. Yet many leaders invert those percentages, spending 80% of their time and effort on technology selection and only 20% on change management. That approach works against them from day one.
Leaders fall short when they don’t attach clear ownership to the initiative’s key KPIs; when results fail to materialize, no one is accountable. Another failure point is the absence of an operating cadence. AI adoption needs a weekly rhythm, with regular updates and follow-ups tied to the plan, to keep momentum alive and maintain alignment. If that cadence isn’t established and enforced, the initiative soon falls by the wayside.
Then there’s the failure to bring the rest of the organization along for the ride. Organizations that get this right understand that AI deployment is not something that should just ‘happen to’ team members; it’s a journey the organization embarks on with them.
Leaders also consistently underestimate or overlook the anxiety their teams feel from the disruptive nature of AI. This form of anxiety is most severe in organizations that are the furthest along with their AI programs. The message to the business should not be that their job is under threat, but that jobs are changing, and here is how each team member can change with them.
Q: What does an AI-native supply chain planning environment look like in three years’ time? And what must be true today for organizations to be able to take full advantage of this future?
A: (Rohit) Given the breakneck pace of change, this is a hard question to answer. But the way I see it is, in the next year of supply chain planning with AI, it’s all about accuracy and trust. It’s about creating plans that are reliable enough for the business to execute without manual rework bottlenecking progress. Organizations must prove that AI consistently outperforms the traditional methods it’s poised to replace. With transformations, trust is the prerequisite for everything that follows.
In the next two or three years, the focus will shift to scaling: extending new capabilities to more sites and factories, tightening execution loops, and bringing in agent-assisted workflows.
As it stands, for many organizations, comprehensive AI transformation is not yet within reach. Trust in fully autonomous AI is actually declining. One study found it dropped from around 45% to 27% in just a year. In a lot of ways, that’s healthy, because, to me, the way forward isn’t faster or more advanced automation. It’s better governance. Machine learning models should be deployed under logical, transparent business rules, at a pace the organization controls, and scaled from there.
Q: To wrap up, Rohit: what’s one takeaway you’d want someone listening to or reading this interview to remember?
A: (Rohit) Start with a supply chain problem, not a blanket “let’s apply AI” mandate. Capture the context that lives in your planners’ heads, and make sure your AI can act on it. And invest in your people. That matters more than which AI model you choose.
Want expert insights on the practical application of AI?